Significance
We are often tasked with recognizing familiar items from partial input, like the obscured face of a friend. Remarkably, our brains have the ability to “complete” these visually sparse cues from memory. This operation, termed “pattern completion,” though critical for successful memory, has also been thought to underlie memory errors, whereby a similar novel item is incorrectly called “old.” Using eye movement monitoring, we showed that when presented with a novel degraded cue, participants looked to regions that they had viewed during the previous encoding of a similar item, and these gaze shifts were associated with an increase in false alarms. These findings provide evidence that retrieval of encoded memory representations (i.e., pattern completion) underlies false alarms to similar items.
Keywords: eye movements, memory, pattern completion
Abstract
The ability to recall a detailed event from a simple reminder is supported by pattern completion, a cognitive operation performed by the hippocampus wherein existing mnemonic representations are retrieved from incomplete input. In behavioral studies, pattern completion is often inferred through the false endorsement of lure (i.e., similar) items as old. However, evidence that such a response is due to the specific retrieval of a similar, previously encoded item is severely lacking. We used eye movement (EM) monitoring during a partial-cue recognition memory task to index reinstatement of lure images behaviorally via the recapitulation of encoding-related EMs, or gaze reinstatement. Participants reinstated encoding-related EMs following degraded retrieval cues, and this reinstatement was negatively correlated with accuracy for lure images, suggesting that retrieval of existing representations (i.e., pattern completion) underlies lure false alarms. Our findings provide evidence linking gaze reinstatement and pattern completion and advance a functional role for EMs in memory retrieval.
Memory is often a kind of guessing game: We receive fragmentary, noisy, or degraded information from the sensory environment and then must “fill in” the blanks. Seeing a blurry face in the distance or hearing a note from a familiar song can suddenly lead to recognition of a close friend or the incessant humming of a tune. This remarkable ability is thought to rely on a neurocomputational process called “pattern completion” (for review, see refs. 1 and 2). Pattern completion refers to the retrieval of a complete memory representation from partial or degraded input, while the complementary process of “pattern separation” refers to the transformation of similar inputs into distinct, nonoverlapping memory traces (3, 4; for review, see ref. 5). Both processes have been attributed to distinct subregions in the hippocampus, and contribute to its broader role in the encoding and retrieval of the relations among stimulus or event features (6, 7).
Although pattern separation and pattern completion describe neurocomputational processes, they are often inferred from behavioral memory responses. That is, when a lure (i.e., similar) item (e.g., two different pictures of apples) is correctly identified as new, pattern separation is thought to have occurred; whereas when a lure item is incorrectly identified as old, pattern completion is thought to have occurred (8–11). While previous work has often relied on such responses to index pattern completion, this approach is indirect and rests on untested assumptions; namely, that false alarms to lure stimuli are the result of retrieval of the originally encoded item (12; see also ref. 13). Presumably, the operation of pattern completion should entail the retrieval of individual stimulus features absent from the test probe as well as the relations between those features and features present in the test probe (see also ref. 14). However, that retrieval of such a detailed relational representation has occurred cannot be assumed from a simple behavioral response. Thus, to evaluate this claim more directly, the present study used eye movement (EM) monitoring to measure the overlap between participant- and stimulus-specific gaze patterns during encoding and retrieval (visualization) of degraded lure stimuli as a predictor of memory performance. Whereas behavioral responses reflect the outcome of an underlying retrieval process, EM-based reinstatement has been linked to the online maintenance and retrieval of relational information (for review, see ref. 15), making it a powerful tool to assess reactivation.
EMs provide a unique window into cognitive processes as they unfold in time (for review, see ref. 16). Research using EM monitoring suggests that EMs are involved in the binding of stimulus features into cohesive memory representations during encoding, and the retrieval of those features and the relations among them during retrieval (14; for review, see refs. 15 and 16). For example, whereas fixed viewing impairs memory (17–21), spontaneous gaze shifts to regions corresponding with previously encoded (i.e., viewed) image features have been shown to facilitate reactivation of those features and the relations among them (22, 23; for review, see ref. 15). Reinstatement of encoding-related EMs, or “gaze reinstatement,” during memory maintenance (24, 25) and retrieval (26–29) has been associated with mnemonic performance across a variety of tasks. Even in the absence of visual input, humans spontaneously direct their gaze to image regions previously inspected during encoding (i.e., “looking at nothing”), and this gaze reinstatement has been correlated with explicit measures of memory (18, 20, 26, 30–34; for review, see refs. 15 and 33).
Extending this work, recent neuroimaging findings suggest that gaze reinstatement may rely on the same neural mechanisms that support memory retrieval. For example, a recent study from Bone et al. (34) found that participants reinstated encoding-related EMs during stimulus-free visualization, and this reinstatement was positively correlated with whole-brain neural reactivation (i.e., similarity between image-specific patterns of brain activity evoked during perception and imagery), which in turn was correlated with objective (change detection performance) and subjective (vividness ratings) measures of memory. Gaze reinstatement that differentiates hits from misses for configurally similar scene images has also been correlated with activity in the hippocampus (35), supporting the existence of a functional link between EMs and hippocampally mediated relational memory processes (see also refs. 36–40 and 41; for review, see refs. 42 and 43). Given that EMs and memory retrieval, and the neural networks underlying them, are intimately related, the present study used EM monitoring to assess the online reactivation of previously encoded stimuli during mnemonic discrimination of lure images via gaze reinstatement.
From a computational standpoint, pattern completion involves the reactivation of a total memory trace from a partial or degraded stimulus that serves as an input pattern to an autoassociative network (for review, see refs. 5 and 44). Thus, to capture pattern completion as it has been defined in computational models, we used memory cues that had been systematically degraded by randomly removing chunks of information from the recognition probes. After an image-encoding phase, participants were briefly (≤750 ms) presented with old and lure test images that were manipulated such that a proportion of the image (0–80%) was occluded. Before making an old/new recognition response, participants were instructed to visualize the presented image while looking at a blank screen. Importantly, the lack of visual input during this posttest interval allowed us to isolate the effect of memory on retrieval-related EMs from the effects of the visual properties of the test probe.
To determine whether the degraded test probe elicited retrieval of the corresponding (same or similar) encoded stimulus, we computed the overlap between encoding- and retrieval-related gaze patterns. If false alarms to lure stimuli are indeed related to retrieval of the similar item via pattern completion, this should be reflected in the accompanying EMs, which should be directed toward screen regions previously visited during encoding of the originally presented stimulus. Accordingly, we hypothesized that similarity between encoding- and retrieval-related gaze patterns during the posttest interval should be greater than chance even when images are substantially degraded, indicating that the corresponding encoded representation has been reactivated. Moreover, based on previous evidence of gaze reinstatement and behavioral pattern completion, we predicted that reinstatement of encoding-related EMs during partially cued retrieval would be positively correlated with accuracy for old images and negatively correlated with accuracy for lure images.
To further examine the content and specificity of retrieved representations, we computed retrieval-related reactivation of the degraded test probe image (i.e., “probe reinstatement”) along with two separate measures of encoding-retrieval EM similarity to measure both the idiosyncratic reactivation of stimulus-specific features and relations (i.e., “gaze reinstatement”) and the participant-invariant reactivation of nonspecific image features and relations (i.e., “image reinstatement”). We then compared the strength of each measure in predicting retrieval-related EMs and accuracy across the retrieval interval. Critically, by computing multiple measures of EM-based reinstatement, we were able to not only deduce the occurrence of pattern completion—that is, that a specific previously encoded item has been retrieved—but also the nature of the pattern being completed, and specifically whether it comprises the stimulus itself (image reinstatement) or the stimulus and the operation by which it was encoded (gaze reinstatement).
Methods
Participants.
Participants were 64 young adults (43 female) aged 19 to 35 y (mean = 23.66, SD = 3.85) with normal or corrected-to-normal vision who were recruited through the Rotman Research Institute’s participant database. The study was approved by the Rotman Research Institute’s Research Ethics Board. All participants provided informed consent before participating in the experiment in accordance with the ethical guidelines of the Rotman Research Institute and were compensated at a rate of $10/h for their participation. Seven participants were excluded from analysis on the basis of missing data (n = 2), average performance (corrected recognition) lower than 2.5 SD from the mean (n = 2), average gaze reinstatement greater than 3.5 SD from the mean* (n = 1), and failure to follow instructions† (n = 2). Data from the remaining 57 participants were analyzed.
Apparatus.
Stimuli were presented on a 1,024 × 768 resolution, 19-inch Dell M991 monitor. Monocular EMs were recorded using a head-mounted EyeLink II eyetracking system at a 500-Hz sampling rate (SR Research Ltd.). EM calibration was accomplished using a nine-point calibration procedure, which was performed prior to the experiment. Drift correction (>5°) was performed between trials. Saccades were defined by EyeLink as eye movements greater than 0.5° of visual angle; blinks were defined as periods in which the pupil signal was missing for three or more consecutive samples. All remaining samples were classified as fixations.
Stimuli.
Stimuli consisted of 240 images, comprising 120 pairs (A-B) of unique but similar 800 × 600-pixel images displayed against a black background. During the test phase, image duration (250, 500, or 750 ms) and degradation (0%, 20%, 40%, 60%, or 80%) were manipulated. Images were degraded by randomly placing 100 × 100-pixel gray squares over the image (Fig. 1). This was done to assess pattern completion as it is defined computationally: the retrieval of a complete memory representation from partial or degraded input. Test image duration was manipulated as an additional control over the amount of visual input available at test. Images were randomly assigned to one of four study/test blocks and counterbalanced across duration, degradation, and probe type (old, new). Of the 30 images presented during each study block, participants viewed 15 again as test probes (“old”). Lures from the alternate set of similar images were presented as test probes for the remaining 15 studied images.
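To make the degradation manipulation concrete, the following R sketch shows one way to occlude a target proportion of an 800 × 600 image with 100 × 100-pixel gray squares. It is illustrative only: the grid-based placement, gray value, and function name are assumptions, as the authors describe the placement simply as random.

```r
# Illustrative sketch (not the authors' code): occlude ~prop of an image with
# randomly placed 100 x 100-pixel gray squares aligned to a 100-pixel grid.
degrade_image <- function(img, prop = 0.4, block = 100, gray = 0.5) {
  h <- dim(img)[1]; w <- dim(img)[2]                 # e.g., 600 x 800 x 3 array
  cells <- expand.grid(x = seq(1, w, block),         # top-left corners of the
                       y = seq(1, h, block))         # 8 x 6 grid of cells
  n <- round(prop * nrow(cells))                     # number of cells to cover
  hit <- cells[sample(nrow(cells), n), , drop = FALSE]
  for (i in seq_len(n)) {                            # paint each chosen cell gray
    img[hit$y[i]:(hit$y[i] + block - 1),
        hit$x[i]:(hit$x[i] + block - 1), ] <- gray
  }
  img
}
```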
Fig. 1.
Examples of old and lure stimulus images at all levels of image degradation.
Procedure.
Before the start of the experiment, participants completed two short study/test practice blocks (six trials each) with novel images to familiarize themselves with the paradigm. Participants subsequently completed four blocks of a modified recognition memory test (Fig. 2), with each block containing a novel set of images. During each study block, participants were instructed to view and memorize a set of 30 images, which were presented four times each. Image presentation order was randomized within each repetition. On each trial, participants were presented with a single image for 3 s. A 2-s fixation cross was presented between trials, during which the experimenter was able to perform online drift correction if necessary. Following the study block, participants were tested on their memory for the studied images during a test block. On each test trial, participants were presented with an old (presented at study) or new (lure: similar, but not identical to, an image presented at study) image for 250, 500, or 750 ms, followed by a 50-ms mask. The purpose of the mask was to prevent sensory persistence from influencing viewing during the posttest interval. Image presentation was manipulated such that each image could be presented either in full (0% degradation) or with 20%, 40%, 60%, or 80% degradation (Fig. 1). Participants were told to ignore the gray squares when making a response (i.e., to base their response on the visible portions of the underlying image). Following the mask, participants were presented with a gray square the same size as the study and test images (800 × 600 pixels) against a black background for 3 s, during which they were instructed to visualize the presented test image. After this posttest interval, participants were given 3 s to indicate whether the presented test image was “old” or “new” via key press. Participants were instructed to respond “old” only if the test image was exactly the same as an image presented during study, and to respond “new” to all other images (i.e., lures). Test trials were separated by a variable-length (2 to 6 s) fixation cross, during which online drift correction was performed at the discretion of the experimenter.
Fig. 2.
Experimental procedure. During the study period, each image was presented for 3,000 ms. A 2,000-ms fixation cross (not shown here) appeared between trials, to allow for online drift correction. Study images were presented four times each. During the test period, each test probe was presented for 250, 500, or 750 ms, either in full (0% degradation), or with 20%, 40%, 60%, or 80% of the image obscured by 100 × 100-pixel gray squares. Following the test probe, a visual mask was presented for 50 ms, after which a gray square was presented for 3,000 ms, during which participants were instructed to visualize the presented image. Finally, participants were given 3,000 ms to indicate whether the presented image was “old” (presented at study) or “new” (i.e., lure: similar, but not identical to an image presented at study). Note that all images and the gray posttest interval square were presented at 800 × 600 pixels on a black background; the images have been expanded here for visualization.
During each test block, each of the 30 previously encoded images was presented in one condition, with condition referring to every possible combination of probe type (old, lure), duration (250, 500, or 750 ms), and degradation (0%, 20%, 40%, 60%, or 80%), for a total of 30 conditions (2 × 3 × 5). Across the four study/test blocks, this resulted in a total of four images in each condition.
EM Analyses.
In the present study, we computed three separate measures of retrieval-related EM-based reinstatement, that is, the similarity between spatial fixation patterns during encoding (or test) and those during retrieval (Figs. 3 and 4), using the R eyesim package (https://github.com/bbuchsbaum/eyesim). To compute reinstatement scores, we generated duration-weighted, smoothed fixation density maps for each relevant interval (e.g., encoding) using all fixations collapsed over that interval (see SI Appendix for further details). Corresponding density maps were compared using a Fisher’s z-transformed Pearson correlation. This correlation yielded a single value representing the spatial overlap between fixations during the posttest interval and fixations during study or test (i.e., raw reinstatement). To ensure that reinstatement values were driven by memory and not by generic viewing patterns (e.g., center bias), each participant-specific posttest interval density map was additionally correlated with 50 other randomly selected across-participant study (or test) image density maps (depending on the measure being computed). The values resulting from this permutation were then averaged to obtain a control (across image) reinstatement score (i.e., permuted reinstatement). The resulting permuted reinstatement score was then subtracted from the raw reinstatement score of interest, yielding a final difference score. In this way, all EM-based reinstatement measures control for image-invariant viewing tendencies, including the tendency to fixate the center of the screen.
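As a concrete illustration, the following R sketch computes the difference score for one trial from precomputed density maps. The eyesim package implements this pipeline; the map construction, function, and argument names here are simplified assumptions rather than its actual API.

```r
# Hedged sketch of the reinstatement difference score; density maps are
# assumed to be precomputed, smoothed, duration-weighted numeric matrices.
fisher_z <- function(r) atanh(r)  # Fisher's z transform of Pearson's r

reinstatement_score <- function(retrieval_map, study_map, other_maps, n_perm = 50) {
  # raw score: overlap with the matching (same or lure) study map
  raw <- fisher_z(cor(c(retrieval_map), c(study_map)))
  # permuted score: mean overlap with randomly selected other-image maps,
  # which controls for image-invariant tendencies such as center bias
  perm <- vapply(sample(other_maps, n_perm),
                 function(m) fisher_z(cor(c(retrieval_map), c(m))),
                 numeric(1))
  raw - mean(perm)  # final difference score
}
```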
Fig. 3.
Visualization of EMs for a single participant during encoding and retrieval. In this example, the retrieval cue was a lure image presented at 40% degradation. The lure image is presented here for visualization purposes; it was not present on the screen during retrieval. Note that EMs at retrieval extend to the right of the central figure, to the location previously occupied by the figure in the encoded image.
Fig. 4.
Illustration of the three measures of EM-based reinstatement. The heat maps reflect fixation density, with warm values indicating areas of high fixation density. Probe reinstatement is computed by correlating the heat map generated from the test probe weighted by the fixations of all participants (S All) viewing the same image during study and a single participant (S1) subsequently retrieving that image (or a similar lure) during the posttest interval. Image reinstatement is computed by correlating the heat map generated from the cumulative fixations of all participants (S All) viewing a single image over four study presentations and a single participant (S1) subsequently retrieving that image (or a similar lure) during the posttest interval. Gaze reinstatement is computed by correlating the heat map generated from the cumulative fixations of a single participant (S1) viewing a single image over four study presentations and subsequently retrieving that image (or a similar lure) during the posttest interval. All density maps are smoothed and duration weighted.
While we were primarily interested in EM-based reinstatement of information from long-term memory, it is possible that EMs during the posttest interval reinstate salient regions of the just-presented test-probe image. Therefore, in order to distinguish reinstatement guided by memory for the encoded image from reinstatement guided by memory for the just-presented test probe, we computed a measure of probe reinstatement, reflecting the similarity between the EM patterns of all participants viewing the test probe (i.e., the visible portions of the test image weighted by the EM pattern of all participants encoding the same image during the study period), and a single participant subsequently retrieving that image (or the alternate lure image) during the 3-s stimulus-free posttest interval.
To quantify the reinstatement of image features from long-term memory, we computed two additional measures of reinstatement. Gaze reinstatement reflects the similarity between the EM patterns of a single participant encoding a single image over four study opportunities and subsequently retrieving that same image (or a similar lure) during the 3-s stimulus-free posttest interval. Based on the theory that EMs both encode and are themselves embedded in mnemonic representations (22, 23), this measure captures reinstatement of the encoded image including the corresponding encoding operations, in this case, the pattern of EMs. Although there is ample research supporting a specialized role for such gaze reinstatement in memory retrieval (for review, see ref. 15), other work suggests that retrieval-related EMs reinstate salient regions of the encoded stimulus, even when EMs are restricted at encoding (19). Therefore, to distinguish idiosyncratic reinstatement from more general image reinstatement, and to determine whether there is something special about reinstating one’s own fixations that benefits memory above and beyond the benefits conferred by reinstating generally salient or semantically informative regions of the encoded image, we also measured image reinstatement. This measure reflects the similarity between the EM patterns of all participants encoding a single image over the four study opportunities and a single participant subsequently retrieving that image during the 3-s stimulus-free posttest interval, and captures reinstatement of the encoded image without requiring reinstatement of the accompanying EMs.
For old images, the analysis proceeded as described by comparing the gaze patterns corresponding to encoding and retrieval of the same image during the study period and posttest interval, respectively (i.e., A-A). Importantly, for lure images presented at test, the posttest interval gaze pattern was compared to the gaze patterns corresponding to encoding of the similar (lure) image during study (i.e., A-B). Thus, a high gaze reinstatement score for an old image indicates reinstatement of the same image, whereas a high score for a lure image indicates reinstatement of the similar studied image. Further details regarding the reinstatement measures can be found in SI Appendix.
Data Analysis.
To investigate factors contributing to performance on the recognition task, we ran a generalized linear mixed-effects model (GLMM; glmer from the R package lme4, ref. 45) with a bobyqa optimizer on trial-level accuracy (correct vs. incorrect; binomial distribution with a logistic link function), and linear mixed-effects models (LMEMs) on gaze reinstatement, image reinstatement, and probe reinstatement, with probe type (old, lure), degradation (0%, 20%, 40%, 60%, 80%), and duration (250 ms, 500 ms, 750 ms) as independent variables. To allow for simple effects analysis of significant interactions, probe type was recoded as 0 (old) and 1 (lure). Duration and degradation were z-scored, and participant and item were modeled as random effects (intercepts). To build the models, we used a backward selection approach, starting with a maximal model (45) that included fixed effects for all variables and their interactions, as well as random intercepts for participant and item. Models were compared using likelihood ratio tests with α = 0.05, such that nonsignificant fixed effects were removed from the model in a stepwise fashion until no further removal resulted in a significant likelihood ratio test. Results of the final best-fit models arrived at via model comparison are reported, with significance values approximated with the lmerTest R package (46).
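A minimal sketch of the base accuracy model and one backward-selection step is given below, assuming a trial-level data frame d with placeholder column names; it is not the authors' analysis script.

```r
library(lme4)

# Hedged sketch: maximal accuracy GLMM with the bobyqa optimizer and random
# intercepts for participant and item (all column names are placeholders).
m_full <- glmer(accuracy ~ probe_type * duration_z * degradation_z +
                  (1 | participant) + (1 | item),
                data = d, family = binomial(link = "logit"),
                control = glmerControl(optimizer = "bobyqa"))

# One backward-selection step: drop the highest-order term and keep the
# simpler model if the likelihood ratio test is nonsignificant (alpha = .05).
m_reduced <- update(m_full, . ~ . - probe_type:duration_z:degradation_z)
anova(m_reduced, m_full)  # chi-square likelihood ratio test
```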
To evaluate our hypotheses that gaze reinstatement should be positively predictive of accuracy for old images and negatively predictive of accuracy for lure images, we ran three additional GLMMs on accuracy by adding each of the described reinstatement scores to the final GLMM from the behavioral analysis. The models including reinstatement were compared to the base model using likelihood ratio tests with α = 0.05. Model comparison proceeded in a stepwise fashion, starting with main effects of reinstatement and proceeding to interactions of reinstatement and other predictors, such that only effects that significantly improved the fit of the model were retained.
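Continuing the sketch above, testing a reinstatement score might look as follows, where m_base stands for the final behavioral model (again, placeholder names rather than the authors' code):

```r
# Does gaze reinstatement improve the base accuracy model?
m_gaze <- update(m_base, . ~ . + gaze_reinstatement)
anova(m_base, m_gaze)            # reported: chi-square = 13.502, P < 0.001

# Then add interactions stepwise, retaining only those that improve fit
m_int <- update(m_gaze, . ~ . + gaze_reinstatement:probe_type)
anova(m_gaze, m_int)             # reported: chi-square = 9.812, P = 0.002
```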
To investigate individual differences in the relationship between gaze reinstatement and behavioral performance, we ran bootstrapped Pearson correlations (bootstrap iterations = 5,000) of gaze reinstatement and accuracy (overall percent correct) separately for old and lure images.
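A sketch of this across-participant analysis with the boot package follows; by_subj is an assumed data frame with one row per participant and placeholder column names.

```r
library(boot)

# Hedged sketch: bootstrapped Pearson correlation between per-participant
# gaze reinstatement and percent correct (run separately for old and lure).
cor_stat <- function(data, idx) cor(data$gaze_reinstatement[idx], data$accuracy[idx])
b <- boot(by_subj, statistic = cor_stat, R = 5000)  # 5,000 bootstrap iterations
boot.ci(b, type = "perc")                           # percentile 95% CI
```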
Finally, to examine changes in reinstatement over time, we extracted, from each of the three density maps (gaze, image, and probe, described previously), the value corresponding to the location of fixation at each point in time across the posttest interval, sampled in 50-ms intervals. This analysis allowed us to determine whether the location of fixation at various stages of retrieval best reflected reinstatement of the test probe image (i.e., probe reinstatement), the previously encoded image (i.e., image reinstatement), or the previously encoded image along with the accompanying gaze pattern (i.e., gaze reinstatement). For ease of analysis and interpretation, we aggregated the values from each time point into three discrete 1,000-ms time bins. We then ran separate LMEMs for old and lure images with density as the dependent variable and accuracy (correct, incorrect), measure (i.e., density map: gaze, image, probe), and time (T1: 0 to 1,000 ms, T2: 1,000 to 2,000 ms, T3: 2,000 to 3,000 ms) as independent variables. To allow for simple effects analysis of significant interactions, accuracy was coded as 0 (incorrect) and 1 (correct) and time was coded for a linear effect. To evaluate the effect of reinstatement measure on density, measure was coded such that the gaze density map served as the reference level. Participant and item were modeled as random effects (intercepts). Models were compared in a backward stepwise manner using likelihood ratio tests with α = 0.05.
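The time-course analysis might be set up as in the sketch below, assuming a long-format data frame samples with one row per 50-ms sample holding the density value at the current fixation location (all names are placeholders, not the authors' code).

```r
library(lme4)

# Hedged sketch: bin the 50-ms samples into three 1,000-ms windows, code time
# linearly, set the gaze map as the reference level, and fit the LMEM.
samples$time_bin <- cut(samples$t_ms, breaks = c(0, 1000, 2000, 3000),
                        labels = c("T1", "T2", "T3"), include.lowest = TRUE)
samples$time_lin <- as.numeric(samples$time_bin) - 1   # T1 = 0, T2 = 1, T3 = 2
samples$measure  <- relevel(factor(samples$measure), ref = "gaze")

m_old <- lmer(density ~ accuracy * measure * time_lin +
                (1 | participant) + (1 | item),
              data = subset(samples, probe_type == "old"))
```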
Results
Behavioral Results.
Results of the best-fit model of accuracy arrived at via model comparison revealed significant effects of probe type, duration, and degradation (SI Appendix, Table S1). As expected, memory accuracy was greater for old relative to lure images (Fig. 5A) and increased with increased test probe duration (Fig. 5B) and with decreased test probe degradation (Fig. 5C).
Fig. 5.
Accuracy (overall percent correct) by (A) probe type, (B) duration, and (C) degradation. Note that although the y axis here corresponds to overall percent correct, this is for the purpose of visualization only; the GLMM uses a trial-level binary measure of accuracy (correct, incorrect).
EM Results.
To investigate the factors contributing to reinstatement of the test probe and the encoded image, we ran LMEMs on probe reinstatement, image reinstatement, and gaze reinstatement, with duration, degradation, and probe type as predictors. We were particularly interested in the ability of each of these measures to differentiate between retrieval of old and lure images. For all analyses, reinstatement is reported as the difference between raw and permuted reinstatement scores, such that positive scores indicate that reinstatement of the same image was greater than reinstatement of other images derived from the permutation.
Probe reinstatement.
Results of the best-fit model of probe reinstatement (i.e., reinstatement of the test probe image) (SI Appendix, Table S2) arrived at via model comparison revealed significant effects of duration (Fig. 6A) and degradation (Fig. 6B), indicating that reinstatement of the test probe decreased with increased test probe duration and degradation.
Fig. 6.
(A) Probe reinstatement by duration and (B) degradation. (C) Image reinstatement by duration and degradation. (D) Reinstatement scores within-image and across-image (permuted 50 times). Gaze reinstatement is computed by subtracting the across-image (permuted) score from the within-image (raw) score. (E) Gaze reinstatement by probe type.
Image reinstatement.
Results of the best-fit model of image reinstatement (i.e., reinstatement of generally salient image regions) (SI Appendix, Table S3) arrived at via model comparison revealed a significant main effect of duration and a significant interaction of duration × degradation (Fig. 6C), indicating that reinstatement decreased with increased test probe duration, and this effect was attenuated with increased test probe degradation.
Gaze reinstatement.
Before conducting the LMEM on gaze reinstatement, we first compared the raw reinstatement values (within-image) to the permuted reinstatement values (across-image) to ensure that retrieval-related gaze patterns were indeed closer to encoding-related gaze patterns reflecting the same image than to encoding-related gaze patterns reflecting other images. A paired samples t test comparing raw reinstatement values (mean = 0.22) to permuted reinstatement values (mean = 0.19) was significant [t(56) = 3.42, P = 0.001, 95% CI: 0.014, 0.052] (Fig. 6D), indicating that reinstatement of encoding-related gaze patterns was indeed greater than would be expected based on generic viewing tendencies, like the tendency to fixate the center of the screen.
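In R, this comparison amounts to a paired t test over participant means; the vector names below are placeholders, not the authors' code.

```r
# Hedged sketch: one mean raw and one mean permuted score per participant
t.test(raw_by_subj, perm_by_subj, paired = TRUE)  # reported: t(56) = 3.42, P = 0.001
```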
Whereas probe type was eliminated from both the probe reinstatement and image reinstatement models, results of the best-fit model of gaze reinstatement (i.e., reinstatement of own fixations) (SI Appendix, Table S4) arrived at via model comparison revealed a significant effect of probe type on gaze reinstatement (Fig. 6E), indicating that reinstatement of encoding-related gaze patterns differentiates retrieval of old images from retrieval of lure images. Notably, mean gaze reinstatement scores, obtained by bootstrapping participant means (bootstrap iterations = 5,000), were greater than 0 (i.e., greater than the mean permuted value) for all levels of test probe duration (mean250 = 0.037, 95% CI250: 0.018, 0.057; mean500 = 0.035, 95% CI500: 0.016, 0.054; mean750 = 0.026, 95% CI750: 0.007, 0.046) and image degradation (mean0 = 0.032, 95% CI0: 0.012, 0.052; mean20 = 0.039, 95% CI20: 0.018, 0.057; mean40 = 0.037, 95% CI40: 0.018, 0.057; mean60 = 0.024, 95% CI60: 0.004, 0.044; mean80 = 0.032, 95% CI80: 0.012, 0.053), suggesting that even briefly presented or severely degraded visual input is sufficient to elicit gaze reinstatement.
Summary of EM results.
Results of the analyses on viewing behavior indicated that both probe and image reinstatement decreased significantly with increased test probe duration, suggesting that EMs may play a more significant role in retrieval when visual cues are insufficient. Degradation of the test probe significantly attenuated probe reinstatement, but had no effect on image or gaze reinstatement, further suggesting that sparse visual information is sufficient to elicit reinstatement of an encoded image from long-term memory, whereas reinstatement of the test probe is dependent on its visual properties. Finally, only gaze reinstatement showed a significant effect of probe type, with greater gaze reinstatement for old compared to lure test probes. To further investigate whether gaze reinstatement plays a specialized role in retrieval, we subsequently investigated the relationship between gaze reinstatement and memory performance in a model containing all reinstatement measures.
Relationship between EMs and Behavior.
To test our hypotheses that gaze reinstatement should be positively associated with accuracy for old images and negatively associated with accuracy for lure images, we investigated whether adding each of the previously described reinstatement scores to the accuracy model (SI Appendix, Table S1) improved the model fit. As expected, the addition of gaze reinstatement significantly improved the fit of the model (χ2 = 13.502, P < 0.001), whereas additions of image reinstatement (χ2 = 2.347, P = 0.126) and probe reinstatement (χ2 = 0.290, P = 0.590) did not. To account for encoding effects, we also added the cumulative number of fixations (z-scored) on each image during the study period (over the four study presentations) as a predictor. This measure has been previously linked to memory success (47–49) and hippocampal activity (40). The addition of cumulative study fixations significantly improved the fit of the model (χ2 = 10.354, P = 0.001). Interactions of gaze reinstatement and cumulative study fixations with the other predictors (probe type, duration, degradation), as well as with each other, were subsequently added to the model in a stepwise manner. Only the interaction of gaze reinstatement and probe type significantly improved the fit of the model (χ2 = 9.812, P = 0.002). The results of the best-fit accuracy model (with the additions of gaze reinstatement and cumulative study fixations) arrived at via model comparison are reported in Table 1.
Table 1.
Relationship between eye movements and behavior
| Fixed effects | β | SE | 95% CI | z | P |
|---|---|---|---|---|---|
| Intercept | 2.307 | 0.085 | 2.144, 2.484 | 27.250 | <0.001*** |
| Gaze reinstatement | 0.136 | 0.281 | −0.422, 0.704 | 0.486 | 0.627 |
| Probe type | −1.303 | 0.071 | −1.448, −1.165 | −18.457 | <0.001*** |
| Duration | 0.179 | 0.033 | 0.114, 0.244 | 5.513 | <0.001*** |
| Degradation | −0.475 | 0.043 | −0.563, −0.391 | −10.944 | <0.001*** |
| Cumulative study fixations | 0.104 | 0.042 | 0.022, 0.188 | 2.514 | 0.012* |
| Gaze reinstatement × probe type | −1.071 | 0.340 | −1.756, −0.395 | −3.153 | 0.002** |

Total observations = 6,660.

The random effects variance and SD for participants (intercept) are 0.148 and 0.384, respectively. For items (intercept), these values are 0.427 and 0.653, respectively. Model equation: Accuracy ∼ gaze reinstatement + probe type + duration + degradation + cumulative study fixations + gaze reinstatement × probe type + (1 | participant) + (1 | item). *P < 0.05, **P < 0.01, ***P < 0.001.
Results of the final best-fit model of accuracy arrived at via model comparison (Table 1) revealed significant effects of probe type (old > lure), duration, and degradation, with accuracy increasing with increased test probe duration and decreased test probe degradation. In line with previous work (47–49), cumulative study fixations (a measure of encoding success) significantly predicted accuracy on the recognition task. Finally, the effect of gaze reinstatement on accuracy was nonsignificant for old images, and significantly negative for lure images, suggesting that reinstatement of participant- and image-specific encoding-related gaze patterns does not support recognition of old images, but does predict false alarms to similar (lure) images at test.
To determine whether the significant interaction of gaze reinstatement and probe type revealed by the GLMM was also present in an across-participants analysis, we ran bootstrapped Pearson correlations (bootstrap iterations = 5,000) of gaze reinstatement and accuracy (overall percent correct) separately for old and lure images (Fig. 7A). As was the case in the within-participants analysis, we did not find evidence supporting the hypothesis that gaze reinstatement predicts correct recognition of old images (r = 0.035, 95% CI: −0.203, 0.278). However, in line with our predictions and results of the within-participants analysis, there was a robust negative correlation between gaze reinstatement and accuracy for lure images, suggesting that reinstatement of participant- and image-specific encoding-related gaze patterns during the posttest interval predicts false endorsement of similar images as old (r = −0.399, 95% CI: −0.664, −0.148).
Fig. 7.
Correlations of old and lure gaze reinstatement and accuracy (overall percent correct) for (A) all images, (B) easy images (mean accuracy > 80.702), and (C) difficult images (mean accuracy < 80.702).
Previous work suggests that gaze reinstatement might only facilitate memory performance when cognitive demands exceed cognitive resources (25; for review, see ref. 15). Given that performance was at ceiling for old images, we were unable to probe this effect. Thus, to examine whether mnemonic demands modulated the effect of gaze reinstatement for old images, we conducted a median split of the data based on the mean accuracy for each image (median = 80.702), which we used to approximate difficulty in correctly recognizing the image (see ref. 50). In line with the results of the previous bootstrap, gaze reinstatement was not significantly correlated with accuracy when only easy images were included in the analysis (r = −0.075, 95% CI: −0.324, 0.167) (Fig. 7B). However, when only difficult images were included in the analysis, gaze reinstatement was positively correlated with accuracy for old images (r = 0.234, 95% CI: 0.014, 0.469) (Fig. 7C).
Reinstatement across Time.
To visualize the distinct trajectories of the three measures of reinstatement across the retrieval interval, we extracted density values for each measure (probe reinstatement, image reinstatement, gaze reinstatement) at discrete points in time from the onset of the test probe to the end of the posttest interval by sampling from each of the three density maps in 50-ms intervals (collapsing across duration and degradation) (Fig. 8). We then ran separate LMEMs on density for old and lure images, with accuracy (correct, 1; incorrect, 0), measure (i.e., density map: gaze [reference variable], image, probe), and time, aggregated into three bins and coded for a linear effect (T1: 0 to 1,000 ms, T2: 1,000 to 2,000 ms, T3: 2,000 to 3,000 ms) as independent variables. Results of the final best-fit models arrived at via model comparison are reported below.
Fig. 8.
Gaze reinstatement, image reinstatement, and probe reinstatement across time for old and lure images. Reinstatement is indexed by the value in the respective density map at each point (50 ms) in time. Solid lines represent accurate responses and dotted lines represent inaccurate responses.
Old images.
Results of the final best-fit model of density for old images arrived at via model comparison (SI Appendix, Table S5) revealed a significant effect of accuracy for gaze reinstatement (reference category) at T1 (hits > misses), and this effect was marginally attenuated for image reinstatement and significantly attenuated for probe reinstatement. We also observed a significant interaction of accuracy and time, indicating that the difference in gaze reinstatement between hits and misses (hits > misses) significantly decreased over the course of the posttest interval. This effect (accuracy × time) was significantly attenuated and even reversed for probe reinstatement, indicating that over time, the difference in probe reinstatement between hits and misses (hits > misses) significantly increased relative to gaze reinstatement. Finally, probe reinstatement significantly outperformed gaze reinstatement for misses early in the posttest interval and this difference was significantly attenuated and even reversed over time (gaze > probe). For hits, the difference between probe reinstatement and gaze reinstatement was significantly attenuated at T1, however this difference significantly increased and was even reversed over time, such that gaze reinstatement outperformed probe reinstatement later in the posttest interval.
Lure images.
Results of the final best-fit model of density for lure images arrived at via model comparison (SI Appendix, Table S6) revealed a significant effect of accuracy for gaze reinstatement (reference category) at T1 (false alarms > correct rejections), and this effect was significantly attenuated for both image and probe reinstatement. Interactions of accuracy and time were eliminated from the model, indicating that the difference in reinstatement between correct rejections and false alarms remained consistent across the posttest interval. Gaze reinstatement significantly outperformed image reinstatement at T1 for false alarms, and this difference increased over time. However, gaze reinstatement performed significantly worse than probe reinstatement at T1. This difference (probe > gaze) was significantly attenuated and even reversed over time, such that gaze reinstatement later in the posttest interval outperformed probe reinstatement.
Summary of temporal analyses.
In summary, results of the temporal analyses revealed changes in the spatial distribution of EMs over time as well as distinct gaze trajectories for old and lure images over the posttest interval. For both old and lure images, early fixations returned to regions of the screen previously occupied by salient regions of the test probe. However, with increasing time, EMs were primarily directed to regions that were previously occupied by salient regions of the studied image, and particularly those that were fixated previously (i.e., gaze reinstatement). These findings suggest that, whereas initial EMs during retrieval may be biased by sensory input held online, subsequent EMs are increasingly driven by mnemonic processes. Moreover, gaze reinstatement was related to the subsequent memory judgment, with greater gaze reinstatement for hits compared to misses early in the posttest interval and for false alarms compared to correct rejections across the posttest interval. Together, these findings suggest that EMs flexibly support memory retrieval by differentially reinstating varying levels of mnemonic content across time and space.
Discussion
Pattern completion describes a neurocomputational process whereby a memory representation is retrieved from partial or degraded input, and is a critical function of the hippocampal relational memory system (3, 4; for review, see ref. 5). Yet, it is often studied behaviorally. In mnemonic discrimination tasks, participants are required to discriminate repeated old items from similar “lure” items, with false endorsement of lure items taken as prima facie evidence of pattern completion (8–11). However, the relationship between this behavioral response and the underlying retrieval mechanism remains unclear. In the present study, we used an EM-based measure of encoding-retrieval overlap to investigate whether incomplete lure memory cues elicit retrieval of the originally encoded stimulus (i.e., whether pattern completion has occurred). Based on evidence of behavioral pattern completion and EM-based reinstatement, we predicted that the similarity between encoding- and retrieval-related EMs, or gaze reinstatement, a measure that has been previously linked to memory retrieval (for review, see ref. 15), hippocampal activity (35), and neural reactivation (34), would be spontaneously recruited in response to degraded test probes, and would be greater for hits than misses and for lure false alarms compared with correct rejections.
Consistent with previous findings of gaze reinstatement (20, 29–32, 34, 51; for review, see ref. 15), retrieval-related EMs were more similar to gaze patterns reflecting encoding of the same (old) or similar (lure) image than to gaze patterns reflecting encoding of other images, suggesting that they reflect image-specific memory. In line with our predictions, gaze reinstatement was above 0 at all levels of test probe degradation, indicating that, given an incomplete cue, EMs facilitate reactivation of a specific item representation from memory. Moreover, temporal analyses indicated that whereas early fixations following the test probe largely reinstated salient regions of the test probe, later fixations favored regions previously visited during encoding of the same or similar image. To investigate whether this reinstatement played a functional role in retrieval, we modeled accuracy on the recognition memory task as a function of EMs during both encoding and retrieval.
Consistent with findings of behavioral pattern completion (8–11), gaze reinstatement for lure images—that is, the similarity between EMs following presentation of a lure test probe and EMs during encoding of a similar image—was negatively correlated with accuracy both within and across participants. This effect could not be accounted for by reinstatement of generally salient regions of the studied image (i.e., image reinstatement), reinstatement of the presented test probe (i.e., probe reinstatement), or encoding-related EMs. Extending previous work, these findings lend critical support to the assumption that false alarms to lure items (i.e., behavioral pattern completion) reflect retrieval of previously encoded similar items (8–11). Moreover, these findings are consistent with—and may be related to—a growing literature suggesting that reinstatement of encoding-related neural activity patterns can have detrimental effects on recognition memory, particularly with respect to discriminating lures (e.g., refs. 52–54).
In addition to the hypothesized role of gaze reinstatement in lure false alarms, we predicted that gaze reinstatement would facilitate correct recognition of old images, consistent with theories and findings of EM-based mnemonic reinstatement (22, 23; for review, see ref. 15). Indeed, of the three reinstatement measures, only gaze reinstatement differed significantly between old and lure images, suggesting that reinstatement of one’s own encoding-related EMs reflects the contents of memory. Whereas we did not observe a general effect of gaze reinstatement on recognition of old images, finer-grained temporal analyses revealed that within participants, the relationship between gaze reinstatement and performance emerged early in the posttest interval and was attenuated over time. Consistent with previous work, this finding suggests that recognition is more likely when early fixations reinstate remembered image regions (27, 29, 30). Moreover, analyses of individual differences revealed a positive correlation between gaze reinstatement and performance for difficult old images. This finding lends support to previous work suggesting that gaze reinstatement facilitates performance when task demands exceed cognitive resources (25; for review, see ref. 15). Finally, results of the accuracy model further revealed a significant positive relationship between the number of cumulative fixations made across the four study presentations of each image and accuracy on the recognition task. This finding is consistent with previous work linking encoding-related EMs to memory performance (47–49) and hippocampal activity (41), and further suggests that EMs during both encoding and retrieval reflect memory success.
Although the present study did not measure neural activity, several lines of evidence support the idea that the observed EM effect is related to hippocampal pattern completion. First, gaze reinstatement is increased in older adults, a population with deficits in hippocampal structure and function (for review, see ref. 55), compared with younger adults (25, 27), and this is consistent with an age-related bias toward pattern completion (e.g., lure false alarms) (56). Indeed, a recent study from Vieweg et al. (57) showed that the similarity between gaze patterns for correctly identified degraded images (e.g., library called library) and those for incorrectly identified degraded images given the same response (e.g., office called library) was greater in older adults compared to younger adults, and was correlated with an age-related bias toward behavioral pattern completion (learned images > new images). Second, lure false alarms are increased in cases of hippocampal damage due to mild cognitive impairment (9) or amnesia (58, 59). Finally, both gaze reinstatement (35; see also ref. 60) and other retrieval-related EM effects (61–63) have been linked to activity in the hippocampus. Considered together with the present results, these findings suggest that the outcome of behavioral pattern completion (i.e., lure false alarms) may be attributed, at least in part, to the hippocampally mediated reactivation of relational information, which is reflected in, and likely supported by, gaze reinstatement.
In summary, the present study used EM monitoring to provide evidence that lure false alarms are the result of pattern completion, whereby, in response to the presentation of an incomplete test probe, a visually similar, previously encoded image is retrieved. Given a partial retrieval cue, participants reinstated their encoding-related EMs for the same (old) or similar (lure) image, and this gaze reinstatement both supported the recognition of old images and increased the likelihood of false alarms for lure images. Over the course of retrieval, gaze trajectories shifted from initially reinstating just-viewed sensory input to subsequently reinstating image features and the accompanying EMs from memory, demonstrating that the component processes involved in EM-based reinstatement may flexibly adapt and change over time. Critically, these findings extend previous work by showing, first, that gaze reinstatement can provide a rich proxy for incidental behavioral pattern completion (i.e., lure false alarms), and second, that the “pattern” of input that is completed or reactivated during retrieval involves not only the stimulus itself (i.e., image reinstatement) but also the operation by which that stimulus was encoded (i.e., gaze reinstatement). Thus, taken together, the results of the present study suggest that gaze reinstatement supports behavioral pattern completion as part of its larger role in memory retrieval. More specifically, by reenacting an encoded sequence of EMs, gaze reinstatement reflects, and likely facilitates, the reactivation of encoded stimulus features and the relations among them, which together constitute a “pattern” that can be subsequently “completed” or retrieved given a partial input cue. Further research should continue to explore the various ways in which EMs might support retrieval of complex stimuli and how information retrieved via gaze reinstatement might help guide adaptive behavior.
Data Availability.
Data analysis code and data are available on GitHub (https://github.com/bbuchsbaum/eyesim).
Acknowledgments
We thank Jonathan Musat, Fahad Ahmad, Sam Alain, and Rahim Ahmed for their assistance with recruitment and testing, as well as Tarek Amer for his helpful comments on earlier versions of the manuscript.
Footnotes
The authors declare no competing interest.
This article is a PNAS Direct Submission.
Data deposition: Data analysis code and data are available on GitHub (https://github.com/bbuchsbaum/eyesim).
*This participant also had an average performance score lower than 1.5 SD from the mean.
†These participants also had average performance scores lower than 2 SD from the mean.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1917586117/-/DCSupplemental.
References
1. Rolls E. T., The mechanisms for pattern completion and pattern separation in the hippocampus. Front. Syst. Neurosci. 7, 74 (2013).
2. Yassa M. A., Stark C. E., Pattern separation in the hippocampus. Trends Neurosci. 34, 515–525 (2011).
3. Marr D., Simple memory: A theory for archicortex. Philos. Trans. R. Soc. B Biol. Sci. 262, 23–81 (1971).
4. McClelland J. L., McNaughton B. L., O’Reilly R. C., Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 102, 419–457 (1995).
5. Hunsaker M. R., Kesner R. P., The operation of pattern separation and pattern completion processes associated with different attributes or domains of memory. Neurosci. Biobehav. Rev. 37, 36–58 (2013).
6. Cohen N. J., Eichenbaum H., Memory, Amnesia, and the Hippocampal System (The MIT Press, 1993).
7. Eichenbaum H., Cohen N. J., From Conditioning to Conscious Recollection: Memory Systems of the Brain (Oxford University Press, 2001).
8. Toner C. K., Pirogovsky E., Kirwan C. B., Gilbert P. E., Visual object pattern separation deficits in nondemented older adults. Learn. Mem. 16, 338–342 (2009).
9. Stark S. M., Yassa M. A., Lacy J. W., Stark C. E. L., A task to assess behavioral pattern separation (BPS) in humans: Data from healthy aging and mild cognitive impairment. Neuropsychologia 51, 2442–2449 (2013).
10. Clelland C. D., et al., A functional role for adult hippocampal neurogenesis in spatial pattern separation. Science 325, 210–213 (2009).
11. Bakker A., Kirwan C. B., Miller M., Stark C. E. L., Pattern separation in the human hippocampal CA3 and dentate gyrus. Science 319, 1640–1642 (2008).
12. Molitor R. J., Ko P. C., Hussey E. P., Ally B. A., Memory-related eye movements challenge behavioral measures of pattern completion and pattern separation. Hippocampus 24, 666–672 (2014).
13. Cowell R. A., Barense M. D., Sadil P. S., A roadmap for understanding memory: Decomposing cognitive processes into operations and representations. eNeuro 6, ENEURO.0122-19.2019 (2019).
14. Hannula D. E., Ryan J. D., Tranel D., Cohen N. J., Rapid onset relational memory effects are evident in eye movement behavior, but not in hippocampal amnesia. J. Cogn. Neurosci. 19, 1690–1705 (2007).
15. Wynn J. S., Shen K., Ryan J. D., Eye movements actively reinstate spatiotemporal mnemonic content. Vision (Basel) 3, 21 (2019).
16. Hannula D. E., et al., Worth a glance: Using eye movements to investigate the cognitive neuroscience of memory. Front. Hum. Neurosci. 4, 166 (2010).
17. Henderson J. M., Williams C. C., Falk R. J., Eye movements are functional during face learning. Mem. Cognit. 33, 98–106 (2005).
18. Bochynska A., Laeng B., Tracking down the path of memory: Eye scanpaths facilitate retrieval of visuospatial information. Cogn. Process. 16 (suppl. 1), 159–163 (2015).
19. Johansson R., Holsanova J., Dewhurst R., Holmqvist K., Eye movements during scene recollection have a functional role, but they are not reinstatements of those produced during encoding. J. Exp. Psychol. Hum. Percept. Perform. 38, 1289–1314 (2012).
20. Johansson R., Johansson M., Look here, eye movements play a functional role in memory retrieval. Psychol. Sci. 25, 236–242 (2014).
21. Armson M. J., Ryan J. D., Levine B., Maintaining fixation does not increase demands on working memory relative to free viewing. PeerJ 7, e6839 (2019).
22. Noton D., Stark L., Scanpaths in eye movements during pattern perception. Science 171, 308–311 (1971).
23. Noton D., Stark L., Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Res. 11, 929–942 (1971).
24. Olsen R. K., Chiew M., Buchsbaum B. R., Ryan J. D., The relationship between delay period eye movements and visuospatial memory. J. Vis. 14, 8 (2014).
25. Wynn J. S., Olsen R. K., Binns M. A., Buchsbaum B. R., Ryan J. D., Fixation reinstatement supports visuospatial memory in older adults. J. Exp. Psychol. Hum. Percept. Perform. 44, 1119–1127 (2018).
26. Laeng B., Teodorescu D.-S., Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cogn. Sci. 26, 207–231 (2002).
27. Wynn J. S., et al., Selective scanpath repetition during memory-guided visual search. Vis. Cogn. 24, 15–37 (2016).
28. Damiano C., Walther D. B., Distinct roles of eye movements during memory encoding and retrieval. Cognition 184, 119–129 (2019).
29. Holm L., Mäntylä T., Memory for scenes: Refixations reflect retrieval. Mem. Cognit. 35, 1664–1674 (2007).
30. Foulsham T., Kingstone A., Where have eye been? Observers can recognise their own fixations. Perception 42, 1085–1089 (2013).
31. Laeng B., Bloem I. M., D’Ascenzo S., Tommasi L., Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition 131, 263–283 (2014).
32. Scholz A., Mehlhorn K., Krems J. F., Listen up, eye movements play a role in verbal memory retrieval. Psychol. Res. 80, 149–158 (2016).
33. Ferreira F., Apel J., Henderson J. M., Taking a new look at looking at nothing. Trends Cogn. Sci. (Regul. Ed.) 12, 405–410 (2008).
34. Bone M. B., et al., Eye movement reinstatement and neural reactivation during mental imagery. Cereb. Cortex 29, 1075–1089 (2019).
35. Ryals A. J., Wang J. X., Polnaszek K. L., Voss J. L., Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration. Hippocampus 25, 1028–1041 (2015).
36. Nau M., Julian J. B., Doeller C. F., How the brain’s navigation system shapes our visual experience. Trends Cogn. Sci. (Regul. Ed.) 22, 810–825 (2018).
37. Bicanski A., Burgess N., A computational model of visual recognition memory via grid cells. Curr. Biol. 29, 979–990.e4 (2019).
38. Shen K., Bezgin G., Selvam R., McIntosh A. R., Ryan J. D., An anatomical interface between memory and oculomotor systems. J. Cogn. Neurosci. 28, 1772–1783 (2016).
39. Ryan J. D., et al., The functional reach of the hippocampal memory system to the oculomotor system. bioRxiv:10.1101/303511 (18 April 2018).
40. Liu Z.-X., Shen K., Olsen R. K., Ryan J. D., Visual sampling predicts hippocampal activity. J. Neurosci. 37, 599–609 (2017).
41. Kragel J. E., et al., Hippocampal theta coordinates memory processing during visual exploration. bioRxiv:10.1101/629451 (7 May 2019).
42. Voss J. L., Bridge D. J., Cohen N. J., Walker J. A., A closer look at the hippocampus and memory. Trends Cogn. Sci. (Regul. Ed.) 21, 577–588 (2017).
43. Hannula D. E., Ryan J. D., Warren D. E., “Beyond long-term declarative memory: Evaluating hippocampal contributions to unconscious memory expression, perception, and short-term retention” in The Hippocampus from Cells to Systems, Hannula D. E., Duff M. C., Eds. (Springer International Publishing, 2017), pp. 281–336.
44. Santoro A., Reassessing pattern separation in the dentate gyrus. Front. Behav. Neurosci. 7, 96 (2013).
45. Bates D., Mächler M., Bolker B., Walker S., Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48 (2015).
46. Kuznetsova A., Brockhoff P. B., Christensen R. H. B., lmerTest package: Tests in linear mixed effects models. J. Stat. Softw. 82, 1–26 (2017).
47. Olsen R. K., et al., The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia. Cortex 85, 182–193 (2016).
48. Loftus G., Eye fixations and recognition memory for pictures. Cogn. Psychol. 3, 525–551 (1972).
49. Chan J. P. K., Kamino D., Binns M. A., Ryan J. D., Can changes in eye movement scanning alter the age-related deficit in recognition memory? Front. Psychol. 2, 1–11 (2011).
50. Bylinskii Z., Isola P., Bainbridge C., Torralba A., Oliva A., Intrinsic and extrinsic effects on image memorability. Vision Res. 116, 165–178 (2015).
51. Kumcu A., Thompson R. L., Less imageable words lead to more looks to blank locations during memory retrieval. Psychol. Res., 10.1007/s00426-018-1084-6 (2018).
52. Staudigl T., Vollmar C., Noachtar S., Hanslmayr S., Temporal-pattern similarity analysis reveals the beneficial and detrimental effects of context reinstatement on human memory. J. Neurosci. 35, 5373–5384 (2015).
53. Ye Z., et al., Neural global pattern similarity underlies true and false memories. J. Neurosci. 36, 6792–6802 (2016).
54. Chadwick M. J., et al., Semantic representations in the temporal pole predict false memories. Proc. Natl. Acad. Sci. U.S.A. 113, 10180–10185 (2016).
55. Grady C. L., Ryan J. D., “Age-related differences in the human hippocampus: Behavioral, structural and functional measures” in The Hippocampus from Cells to Systems, Hannula D. E., Duff M. C., Eds. (Springer International Publishing, 2017), pp. 167–208.
56. Stark S. M., Stevenson R., Wu C., Rutledge S., Stark C. E. L., Stability of age-related deficits in the mnemonic similarity task across task variations. Behav. Neurosci. 129, 257–268 (2015).
57. Vieweg P., Riemer M., Berron D., Wolbers T., Memory image completion: Establishing a task to behaviorally assess pattern completion in humans. Hippocampus 29, 340–351 (2019).
58. Kirwan C. B., et al., Pattern separation deficits following damage to the hippocampus. Neuropsychologia 50, 2408–2414 (2012).
59. Baker S., et al., The human dentate gyrus plays a necessary role in discriminating new memories. Curr. Biol. 26, 2629–2634 (2016).
60. Sakon J. J., Suzuki W. A., A neural signature of pattern separation in the monkey hippocampus. Proc. Natl. Acad. Sci. U.S.A. 116, 9634–9643 (2019).
61. Hannula D. E., Ranganath C., The eyes have it: Hippocampal activity predicts expression of memory in eye movements. Neuron 63, 592–599 (2009).
62. Bridge D. J., Cohen N. J., Voss J. L., Distinct hippocampal versus frontoparietal-network contributions to retrieval and memory-guided exploration. J. Cogn. Neurosci. 29, 1324–1338 (2017).
63. Ryan J. D., Althoff R. R., Whitlow S., Cohen N. J., Amnesia is a deficit in relational memory. Psychol. Sci. 11, 454–461 (2000).