Abstract
How do we form mental links between related items? Forming associations between representations is a key feature of episodic memory and provides the foundation for learning and guiding behavior. Theories suggest that spatial context plays a supportive role in episodic memory, providing a scaffold on which to form associations, but this has mostly been tested in the context of autobiographical memory. We examined the memory-boosting effect of spatial stimuli using an associative inference paradigm combined with eye-tracking. Across two experiments, we found that memory was better for associations that included scenes, even indirectly, compared to objects and faces. Eye-tracking measures indicated that these effects may be partly mediated by greater numbers of fixations to scenes compared to objects, but did not explain the differences between scenes and faces. These results suggest that scenes facilitate associative memory and integration across memories, supporting theories of scenes as a spatial scaffold for episodic memory. A shared spatial context may promote learning and could potentially be leveraged to improve learning and memory in educational settings or for memory-impaired populations.
Episodic memories encode particular moments in our lives, and the flexible integration of such memories allows us to detect patterns, form knowledge and even imagine the future. In this way, episodic memory provides the foundation for the formation of more abstract forms of memory and knowledge, and shapes how we behave in the world. The ability to link related memories is a crucial component of these higher cognitive abilities, dependent on the hippocampus (Preston et al. 2004; Shohamy and Wagner 2008; Zalesak and Heckers 2009; Zeithamova and Preston 2010; Olsen et al. 2012; Ryan et al. 2016; Pajkert et al. 2017). While episodic memories can involve the association of entities, actions, contexts and mental states, spatial features, in particular, may facilitate associative memory formation and integration by efficiently binding elements in memory (Hassabis and Maguire 2009; Maguire and Mullally 2013; Robin et al. 2016, 2018; Robin 2018).
Mnemonic techniques like the “method of loci” demonstrate that people have intuited the important role of spatial context in forming memories for centuries (Yates 1966; O'Keefe and Nadel 1978; Rolls 2017). In the present study, we use the term spatial context to refer to the setting where an event takes place, including its visuospatial details as well as semantic content related to that setting. More recently, intuitions about the importance of spatial context have been formalized into theories proposing that spatial context serves as a scaffold for the construction of memories (Byrne et al. 2007; Hassabis and Maguire 2007, 2009; Maguire and Mullally 2013; Maguire et al. 2016; Robin 2018). In support of these views, studies of autobiographical episodic memory and imagination report that a familiar spatial setting elicits more detailed and vivid remembered and imagined events, as compared to less familiar scenes or familiar people (Arnold et al. 2011; de Vito et al. 2012; Robin and Moscovitch 2014; Robin et al. 2016; Sheldon and Chu 2017; Hebscher et al. 2018). Other studies have demonstrated how shared spatial context can help to organize memory and structure recall (Hupbach et al. 2011; Miller et al. 2013; Horner et al. 2016; Merriman et al. 2016; Pacheco et al. 2017).
Despite these demonstrations of the importance of spatial context in memory, it is unknown if the mnemonic benefits of spatial context carry over to simpler spatial stimuli such as images of scenes. In autobiographical episodic memory, spatial context is necessarily present, since events in our lives always unfold in a location and spatial context occupies much of the visual field. In contrast, in an associative memory paradigm, when scenes, objects and faces are displayed as images on a screen, scenes may play a role no different from that of other memory components. A few previous studies have compared associative memory formation for scenes versus objects or faces, but either reported no differences in memory performance (Zeithamova et al. 2012; Koster et al. 2018) or found better memory for object–face associations compared to scene–face associations (Tambini et al. 2010). Even when displayed on a screen, scenes have been shown to automatically evoke broader spatial representations (Mullally et al. 2012; Chadwick et al. 2013), so displaying an image of a scene may generate a mental simulation of spatial context, serving as a setting in which to place other associations. If images of scenes evoke spatial processing that supports memory formation, we predict that associations and inferences including a scene will be more memorable than those involving other types of information, such as objects or faces. This could relate to increased hippocampal activity for scenes, which may strengthen related associations (Hassabis et al. 2007a,b; Zeidman et al. 2015b; Hodgetts et al. 2016, 2017; Robin et al. 2019). If scenes are found to support more effective memory formation and integration, this has implications for how we understand the mechanisms of nonautobiographical, arbitrary, associative memories, and for the conditions which promote more efficient learning and knowledge formation.
While scenes may promote memory formation and integration by virtue of evoking representations of spatial context, it is also possible that scenes evoke differences in eye movements, which may in turn have consequences for memory formation. Increased eye movements are known to relate to better memory, and have recently been linked to increased hippocampal activity (Loftus 1972; Hannula et al. 2010, 2012; Olsen et al. 2016; Liu et al. 2017; Voss et al. 2017; Lucas et al. 2019). Thus, we predict that better associative memory and integration for associations including scenes may relate to increased eye movements to scene stimuli. If scenes offer an integrative advantage beyond the effect of eye movements, memory differences should still be obtained when eye movements are equated across stimulus types.
To test these predictions, we adapted a well-studied associative inference paradigm (Preston et al. 2004; Zeithamova and Preston 2010; Schlichting et al. 2014). This paradigm involves single-trial episodic memory learning, and has been shown to recruit the hippocampus for successful integration (Preston et al. 2004; Zeithamova and Preston 2010; Schlichting et al. 2014). Participants learn overlapping pairs of associations, and are subsequently tested on them (Fig. 1). In addition, participants must infer the connection between indirectly linked items, never seen together at study but linked by a common associate. By varying the type of linking item, we probed whether scenes resulted in better memory for associations (for directly studied pairs) and inferences (for indirectly linked items) as compared to objects in Experiment 1. To control for the novelty of the scene stimuli as the linking item, in Experiment 2 we compared associations between objects and scenes, and objects and faces. We predict that if scenes serve as superior integrators in memory, a scene will enhance direct associations as well as inferences, in which the scene serves as a common associate but is unseen at test. We also monitored eye movements to measure differences in viewing of the different stimuli. Thus, we were able to determine if memory advantages related to increased visual sampling of scenes compared to other stimuli. This study provides new insight into the role of scenes in the association and integration of items in memory, and examines the links between these memory effects and visual sampling behavior.
Figure 1.
Examples of study and test trials for each condition for Experiments 1 and 2. At study, participants viewed a pair of items (in the AB, BC, and XY conditions) or a single item (in the X condition) and were instructed to view and memorize the pair or the single item. A, C, and Y items were objects in all conditions. B and X items were repeated, appearing in two trials each (e.g., AB and BC); the repeated B items thereby indirectly linked the A and C items of each AC pair. The category of B and X items varied according to condition (i.e., objects and scenes in Experiment 1; scenes and faces in Experiment 2). At test, participants viewed three items on each trial and had to make a memory response (i.e., choose the correct association). The top center item was the cue and the two bottom test items were the two potential associates. One test item was the correct associate and the other was a lure item from the same category. In the figure, the item on the left is always the correct associate, but position was randomized in the experiment. Experiment 1 included object and scene blocks, and Experiment 2 included scene and face blocks.
Results
Experiment 1
Memory accuracy
A 2 × 4 repeated-measures ANOVA with factors of Condition (object, scene) and Trial Type (AB, BC, AC, XY) was conducted to compare logit-transformed memory accuracy scores across conditions. The ANOVA revealed significant main effects of Condition, F(1,31) = 13.12, P = 0.001, η2G = 0.02 and Trial Type, F(3,93) = 51.80, P < 0.001, η2G = 0.21 (see Fig. 2). The interaction between Condition and Trial Type was not significant, F(3,93) = 1.92, P = 0.13. The significant main effect of Condition indicated that memory was significantly better for associations including scenes compared to those that only included objects.
Figure 2.

Experiment 1—Memory accuracy. Accuracy, measured by proportion correct answers at test, for each condition and trial type for Experiment 1 (comparing object–object and object–scene conditions). Error bars are within-subjects 95% confidence intervals of the means.
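The within-subjects confidence intervals reported in these figures can be computed in several ways; the paper does not specify its exact procedure. A minimal sketch of one common approach (Cousineau's subject-mean normalization with Morey's correction), assuming a subjects × conditions matrix of accuracy scores:

```python
import numpy as np
from scipy.stats import t

def within_subject_ci(data, conf=0.95):
    """Within-subjects CI half-widths for a subjects x conditions matrix.

    Uses Cousineau (2005) normalization with Morey's (2008) correction;
    this is illustrative, not necessarily the authors' exact method.
    """
    data = np.asarray(data, dtype=float)
    n_subj, n_cond = data.shape
    # Remove each subject's mean, then restore the grand mean
    normed = data - data.mean(axis=1, keepdims=True) + data.mean()
    # Morey correction accounts for the variance removed by normalization
    correction = np.sqrt(n_cond / (n_cond - 1))
    sem = normed.std(axis=0, ddof=1) / np.sqrt(n_subj) * correction
    return sem * t.ppf((1 + conf) / 2, df=n_subj - 1)

# Example: 32 subjects x 4 trial types of hypothetical accuracy scores
rng = np.random.default_rng(0)
scores = rng.uniform(0.5, 1.0, size=(32, 4))
half_widths = within_subject_ci(scores)
```

Normalizing out each subject's overall level removes between-subject variability, which is irrelevant to the within-subjects comparisons plotted here.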
To directly test whether accuracy differed on indirect AC test trials, which contained only object stimuli as the cue item and memory associates (see Fig. 1), we compared accuracy on AC test trials for trials in which both the AB and BC premise pairs were correct, to minimize the contribution of memory differences for the premise pairs to measures of AC inference. This resulted in the inclusion of a mean of 21.22 (SD = 8.74) object trials and 24.94 (SD = 8.13) scene trials per subject, out of a possible 36 trials. Accuracy was higher for AC trials in the scene condition compared to the object condition, even when controlling for the accuracy of the premise pairs, t(31) = 2.47, P = 0.019, Mdiff = 0.34, 95%CIdiff = [0.06, 0.61].
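The paired comparisons on logit-transformed accuracy used throughout can be sketched as follows. The specific zero/one adjustment shown is a common convention, and the variable names are illustrative; the paper does not state which adjustment was applied.

```python
import numpy as np
from scipy.stats import ttest_rel

def adjusted_logit(correct, n_trials):
    """Logit of proportion correct, adjusted so that perfect (or zero)
    scores remain finite. The 0.5 adjustment is one common convention."""
    p = (correct + 0.5) / (n_trials + 1)
    return np.log(p / (1 - p))

# Hypothetical per-subject counts of correct AC trials (out of 36)
rng = np.random.default_rng(2)
scene_correct = rng.integers(20, 36, size=32)
object_correct = rng.integers(15, 33, size=32)

scene_logit = adjusted_logit(scene_correct, 36)
object_logit = adjusted_logit(object_correct, 36)

# Paired t-test on the logit-transformed scores, as in the reported analyses
t_stat, p_value = ttest_rel(scene_logit, object_logit)
```

The logit transform stretches proportions near the 0 and 1 boundaries, making near-ceiling accuracy scores better suited to t-tests and ANOVA.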
Fixations at study
We compared the number of fixations directed toward the study items using a 2 × 2 × 4 repeated-measures ANOVA, with factors of Study Item, Condition (object, scene), and Trial Type (AB, BC, X, XY). This ANOVA revealed significant main effects for all three factors, as well as significant two-way and three-way interactions (all Ps < 0.001) (see Fig. 3). Thus, we conducted follow-up repeated-measures ANOVAs for each Trial Type, comparing the effects of Study Item and Condition.
Figure 3.

Experiment 1—Fixations at study. Mean number of fixations made to each item at study, by condition and trial type for Experiment 1. A, C, and Y items were objects in all conditions. B and X items were objects or scenes, depending on condition. In X trials, only one item was displayed on the screen. Error bars are within-subjects 95% confidence intervals of the means.
For AB study trials, there was a significant main effect of Condition, F(1,31) = 6.80, P = 0.014, η2G = 0.01, of Study Item (A vs. B), F(1,31) = 126.17, P < 0.001, η2G = 0.37, and a significant interaction, F(1,31) = 128.45, P < 0.001, η2G = 0.36. For AB study trials in the scene condition, participants made significantly more fixations to the B item (a scene) than the A item (an object), t(31) = 12.29, P < 0.001, Mdiff = 1.74, 95%CIdiff = [1.45, 2.03]. In contrast, in the object condition, participants made an equal number of fixations to the two objects, t(31) = 0.23, P > 0.99.
For BC study trials, there was a significant interaction of Condition and Study Item (B vs. C), F(1,31) = 47.25, P < 0.001, η2G = 0.18, but no significant main effect of Condition, F(1,31) = 0.87, P = 0.360, or Study Item, F(1,31) = 2.96, P = 0.096. For BC study trials in the scene condition, participants made significantly more fixations to the B item (a scene) than the C item (a novel object), t(31) = 3.49, P = 0.003, Mdiff = 0.51, 95%CIdiff = [0.21, 0.81]. In contrast, in the object condition, participants made more fixations to the novel object (C item) than the already seen object (B item), t(31) = 8.34, P < 0.001, Mdiff = 0.78, 95%CIdiff = [0.59, 0.98].
For X study trials, in which only one item was presented, participants made significantly more fixations to the X item when it was a scene compared to an object, t(31) = 5.89, P < 0.001, Mdiff = 0.77, 95%CIdiff = [0.50, 1.03].
For XY study trials, there was a significant main effect of Condition, F(1,31) = 6.50, P = 0.016, η2G = 0.01, of Study Item (X vs. Y), F(1,31) = 5.53, P = 0.025, η2G = 0.02, and a significant interaction, F(1,31) = 85.01, P < 0.001, η2G = 0.27. Similar to BC trials, for XY study trials in the scene condition, participants made significantly more fixations to the X item (a scene) than the Y item (a novel object), t(31) = 4.34, P < 0.001, Mdiff = 0.69, 95%CIdiff = [0.36, 1.01]. In the object condition, participants made more fixations to the novel object (Y item) than the already seen object (X item), t(31) = 10.54, P < 0.001, Mdiff = 1.12, 95%CIdiff = [0.91, 1.34].
Fixations and subsequent memory
Next, to determine if differences in the number of fixations contributed to memory performance, we modeled the relationship between fixations to B and X items (i.e., those that varied in category) at study and memory for associations based on those items. For each trial, we calculated the number of fixations to the study item that varied in category (i.e., the B items for AB and BC trials, and the X items for XY trials). For AC trials, we included fixations to the B item at AB and BC study. For each trial type, we fitted a generalized linear mixed-effects model with fixed factors of condition (object vs. scene) and fixation count, and random factors of participant and trial. We compared statistical models with and without the fixation count factor and the interaction of fixation count and condition to determine if the number of fixations related to memory performance independently from, or in interaction with, the effect of condition.
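The model-comparison logic behind the χ2 statistics reported below can be sketched with a likelihood-ratio test between nested logistic regressions. This is a stripped-down illustration only: the reported analyses were generalized linear mixed-effects models with random effects of participant and trial, which are omitted here, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Hypothetical trial-level data: condition (0 = object, 1 = scene) and a
# per-trial fixation count; accuracy is simulated with a fixation effect.
n = 500
condition = rng.integers(0, 2, n)
fixations = rng.poisson(4 + condition, n).astype(float)
fix_c = fixations - fixations.mean()  # centered predictor
eta_true = -0.5 + 0.6 * condition + 0.3 * fix_c
remembered = (rng.random(n) < 1 / (1 + np.exp(-eta_true))).astype(float)

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression (logit link)."""
    eta = X @ beta
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

def max_loglik(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y),
                   method="BFGS")
    return -res.fun

X_reduced = np.column_stack([np.ones(n), condition])      # condition only
X_full = np.column_stack([np.ones(n), condition, fix_c])  # + fixation count

# Likelihood-ratio test: does adding fixation count improve the fit?
# chi-square df = difference in number of parameters (here, 1)
lr_stat = 2 * (max_loglik(X_full, remembered) - max_loglik(X_reduced, remembered))
p_value = chi2.sf(lr_stat, df=1)
```

Comparing models with and without the fixation term (and, in the full analyses, its interaction with condition) asks whether fixation counts predict memory over and above condition.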
For AB trials, memory accuracy was positively related to fixations to B items at study, χ2(1) = 13.26, P < 0.001, and there was no interaction between fixations and condition, χ2(1) = 0.99, P = 0.32. For BC trials, fixations to B items at study were not significantly related to memory accuracy, χ2(1) = 0.02, P = 0.88, and there was no interaction between fixations and condition, χ2(1) = 0.03, P = 0.86. Similarly, for XY trials, fixations to X items at study were not significantly related to memory accuracy, χ2(1) = 2.19, P = 0.14, and there was no interaction between fixations and condition, χ2(1) < 0.01, P = 0.98. See Figure 4 for a summary of the mean number of fixations for remembered and forgotten trials, by condition.
Figure 4.
Experiment 1—Fixations by subsequent memory. Mean number of fixations made to B (for AB, BC, and AC trials) and X items (for XY trials) at study, based on whether a trial was subsequently remembered correctly or forgotten, by condition and trial type for Experiment 1. AC pairs were never viewed at study, so fixations are plotted for AB and BC study trials based on AC memory outcome (labeled as AB|AC memory and BC|AC memory). Error bars are within-subjects 95% confidence intervals of the means.
For AC trials, we tested the relationship between fixations to B items at AB study and BC study with AC memory accuracy. We found a significant relationship between fixations at AB study and AC accuracy, χ2(1) = 12.45, P < 0.001, but not between fixations at BC study and AC accuracy, χ2(1) = 0.21, P = 0.65. The effect of AB fixations did not interact with condition, χ2(1) = 1.09, P = 0.30 (see Fig. 4, AB|AC memory panel), and there was no interaction between BC fixations and condition, χ2(1) = 0.29, P = 0.59. In summary, the number of fixations to the B item at first viewing (i.e., AB study) was related to memory for AB and AC pairs, while fixation numbers to B and X items at BC and XY study were not related to memory success.
Experiment 1 summary
To summarize, we found that associations between scenes and objects were better remembered than associations between two objects. Critically, participants were better at inferring an indirect relationship between two objects associated with a common scene, even when the scene was not shown at test. Qualifying these results, however, were the findings of increased fixations to scenes compared to objects in every condition. Generalized linear mixed-effects models demonstrated that increased fixations at study were predictive of subsequent memory at test for AB and XY trials, and AB fixations were also predictive of AC memory, suggesting that increased fixations to scenes may have contributed to their memory advantages. In Experiment 2, we sought to better control for these differences in visual attention by comparing scenes to another visually salient stimulus category, human faces.
Experiment 2
Memory accuracy
A 2 × 4 repeated-measures ANOVA with factors of Condition (face, scene) and Trial Type (AB, BC, AC, XY) revealed significant main effects of Condition, F(1,31) = 42.32, P < 0.001, η2G = 0.11, Trial Type, F(3,93) = 46.02, P < 0.001, η2G = 0.12 and a significant interaction between Condition and Trial Type, F(3,93) = 3.38, P = 0.022, η2G = 0.01 (see Fig. 5). Follow-up paired t-tests revealed that for AB test trials, memory accuracy was significantly higher in the scene condition than in the face condition, t(31) = 5.42, P < 0.001, Mdiff = 0.74, 95%CIdiff = [0.46, 1.02]. Similarly, for BC trials, memory accuracy was significantly higher for the scene condition than the face condition, t(31) = 6.34, P < 0.001, Mdiff = 0.86, 95%CIdiff = [0.58, 1.13]. Notably, for AC test trials, accuracy was significantly higher for the scene condition than the face condition, even though neither scenes nor faces were displayed at test, t(31) = 4.80, P < 0.001, Mdiff = 0.59, 95%CIdiff = [0.34, 0.84]. Finally, for the XY test trials, the difference in accuracy between the conditions was marginal following Bonferroni correction, t(31) = 2.56, P = 0.06, Mdiff = 0.39, 95%CIdiff = [0.08, 0.70].
Figure 5.

Experiment 2—Memory accuracy. Accuracy, measured by proportion correct answers at test, for each condition and trial type for Experiment 2 (comparing object–face and object–scene conditions). Error bars are within-subjects 95% confidence intervals of the means.
We compared accuracy on AC test trials for trials in which both the AB and BC premise pairs were correct. This resulted in the inclusion of a mean of 18.81 (SD = 7.43) face trials and 25.66 (SD = 8.81) scene trials per subject, out of a possible 36 trials. Accuracy was still higher for AC trials in the scene condition compared to the face condition, t(31) = 5.90, P < 0.001, Mdiff = 0.71, 95%CIdiff = [0.47, 0.96].
Fixations at study
A 2 × 2 × 4 repeated-measures ANOVA, with factors of Study Item, Condition (face, scene), and Trial Type (AB, BC, X, XY) revealed significant main effects for all three factors, as well as significant two-way and three-way interactions (all Ps < 0.005), except for the two-way interaction between Condition and Trial Type, F(3,93) = 0.9, P = 0.44 (see Fig. 6). Given the significant three-way interaction, F(3,93) = 6.07, P < 0.001, η2G = 0.01, we conducted follow-up repeated-measures ANOVAs for each Trial Type comparing the effects of Study Item and Condition.
Figure 6.

Experiment 2—Fixations at study. Mean number of fixations made to each item at study, by condition and trial type for Experiment 2. A, C, and Y items were objects in all conditions. B and X items were scenes or faces, depending on condition. In X trials, only one item was displayed on the screen. Error bars are within-subjects 95% confidence intervals of the means.
For AB study trials, there was a significant main effect of Study Item (A vs. B), F(1,31) = 170.42, P < 0.001, η2G = 0.49, but no main effect of Condition, F(1,31) = 1.03, P = 0.32, or interaction between Condition and Study Item, F(1,31) = 3.23, P = 0.082. The main effect of Study Item reflected significantly more fixations to the B items (faces and scenes) in both conditions (M = 5.02, SD = 1.42) compared to the A items (objects; M = 2.83, SD = 0.73).
For BC study trials, there was a significant main effect of Condition, F(1,31) = 4.81, P = 0.036, η2G = 0.005, and of Study Item (B vs. C), F(1,31) = 96.39, P < 0.001, η2G = 0.23, and a significant interaction of Condition and Study Item, F(1,31) = 5.88, P = 0.021, η2G = 0.02. For both conditions, participants made significantly more fixations to the B item (scene/face) than the C item (a novel object), Scene condition: t(31) = 5.99, P < 0.001, Mdiff = 0.86, 95%CIdiff = [0.57, 1.15]; Face condition: t(31) = 7.59, P < 0.001, Mdiff = 1.44, 95%CIdiff = [1.05, 1.83]. Comparing B items across conditions revealed that participants made more fixations to faces than scenes, t(31) = 3.30, P = 0.009, Mdiff = 0.43, 95%CIdiff = [0.17, 0.70], but there was no difference between the C items (both objects) in the two conditions, t(31) = 1.06, P = 0.30.
For X study trials, in which only one item was presented, there was no significant difference between fixations to faces compared to scenes, t(31) = 1.44, P = 0.16.
For XY study trials, there was a significant main effect of Condition, F(1,31) = 9.52, P = 0.004, η2G = 0.01, of Study Item (X vs. Y), F(1,31) = 66.78, P < 0.001, η2G = 0.23, and a significant interaction of Condition and Study Item, F(1,31) = 14.82, P < 0.001, η2G = 0.05. Similar to the BC condition, for both conditions, participants made significantly more fixations to the X item (scene/face) than the Y item (a novel object), Scene condition: t(31) = 4.70, P < 0.001, Mdiff = 0.67, 95%CIdiff = [0.38, 0.95]; Face condition: t(31) = 7.38, P < 0.001, Mdiff = 1.58, 95%CIdiff = [1.14, 2.01]. Comparing X items across conditions revealed that participants made more fixations to faces than scenes, t(31) = 4.10, P = 0.001, Mdiff = 0.68, 95%CIdiff = [0.34, 1.02], but there was no difference between the Y items (both objects) in the two conditions, t(31) = 2.20, P = 0.14.
Fixations and subsequent memory
Memory accuracy on AB trials was related to fixations to B items (which were either faces or scenes) at study, χ2(1) = 18.96, P < 0.001, and there was no interaction between fixation count and condition, χ2(1) = 0.11, P = 0.74. For BC trials, fixations to B items at study were not significantly related to memory accuracy, χ2(1) = 0.24, P = 0.62, and the interaction between fixations at study and condition was not significant, χ2(1) = 1.90, P = 0.17. For XY trials, there was a significant relationship between fixations to X items at study and memory accuracy, χ2(1) = 4.74, P = 0.029, and no interaction between fixations and condition, χ2(1) = 0.98, P = 0.32. See Figure 7 for a summary of the mean number of fixations to B/X items for remembered and forgotten trials, by condition.
Figure 7.
Experiment 2—Fixations by subsequent memory. Mean number of fixations made to B (for AB, BC, and AC trials) and X items (for XY trials) at study, based on whether a trial was subsequently remembered correctly or forgotten, by condition and trial type for Experiment 2. AC pairs were never viewed at study, so fixations are plotted for AB and BC study trials based on AC memory outcome (labeled as AB|AC memory and BC|AC memory). Error bars are within-subjects 95% confidence intervals of the means.
For AC trials, we tested the relationship between fixations to B items at AB study and BC study with AC memory accuracy. We found marginal evidence for a relationship between fixations to B items at AB study and AC accuracy, χ2(1) = 2.92, P = 0.09, and no relationship between fixations to B items at BC study and AC accuracy, χ2(1) = 0.07, P = 0.79 (see Fig. 7, AB|AC memory and BC|AC memory panels). There was a significant interaction between fixations to B items at AB study and condition, χ2(1) = 8.03, P = 0.005, but not between fixations to B items at BC study and condition, χ2(1) = 0.80, P = 0.37. Follow-up testing on the scene and face conditions revealed that fixations to B items at AB study were significantly related to AC accuracy at test in the scene condition, χ2(1) = 9.78, P = 0.002, but not in the face condition, χ2(1) = 0.28, P = 0.60 (see Fig. 7, AB|AC memory panel). In summary, the number of fixations to B items at AB study was related to memory for AB associations, but only for AC associations when the B item was a scene, not a face. In addition, fixations to X items at XY study were also related to XY memory success.
Experiment 2 summary
We found that associations between scenes and objects were better remembered than associations between faces and objects. Again, participants were better at inferring an indirect relationship between two objects associated with a common scene, even when the scene was not shown at test. These results were found despite equal fixations to scenes and faces at initial study, and increased fixations to faces compared to scenes on second presentation. Generalized linear mixed-effects models demonstrated that increased fixations at study were predictive of subsequent memory at test for AB and XY trials for both faces and scenes, but that AB fixations were only predictive of AC memory for scenes, suggesting that differences in fixations cannot fully explain the memory differences in this experiment.
Discussion
In two experiments, we found that associations between scenes and objects were better remembered than associations between two objects and associations between objects and faces. Critically, participants were better at inferring an indirect relationship between two objects associated with a common scene (compared to a common object or face), even when the scene was not shown at test. This was true when both premise pairs were accurately remembered, minimizing the influence of memory differences for the direct associations on the indirect association performance. Together, these findings suggest that scenes form more memorable associations in memory and allow for more successful integration across associations. These findings are consistent with theories of scenes serving as a scaffold for forming and integrating other associations, but demonstrate these effects in a nonautobiographical, associative memory paradigm (Hassabis and Maguire 2007, 2009; Maguire and Mullally 2013; Robin et al. 2016, 2018; Robin 2018). Viewing images of scenes may evoke representations of spatial context that provide the cognitive and/or neural architecture to form novel direct and indirect linkages in memory.
Qualifying these results, however, were the findings of increased fixations to scenes compared to objects in Experiment 1. Participants made more fixations to scenes even when the scenes were familiar and paired with a novel object, in contrast to the expected novelty effects seen in the object-only condition (Althoff and Cohen 1999; Ryan et al. 2000; Hannula et al. 2010). Generalized linear mixed-effects models demonstrated that increased fixations to B items at study were predictive of subsequent memory for AB associations and even indirect AC associations, suggesting that increased fixations to scenes on first viewing may have contributed to their memory advantages (Liu et al. 2017). Thus, in Experiment 1, it was difficult to disentangle the memory advantages for scenes from differences in visual attention (Voss et al. 2017). Scenes may have disproportionately drawn attention due to their novelty or visual complexity. In Experiment 2, we sought to better control for these differences in visual attention by comparing scenes to another visually salient stimulus category, human faces.
In Experiment 2, participants made equivalent numbers of fixations to novel scenes and faces, and increased fixations to faces as compared to scenes when viewed for a second time. Increased fixations to scenes and faces at study were predictive of subsequent memory success for AB and XY trials. Increased fixations to scenes at AB study were also associated with better AC memory, but this was not the case for faces. Overall, despite matched or even increased fixation counts to faces compared to scenes, memory advantages for scenes persisted, providing evidence that differences in visual sampling do not alone explain the memory effects (Loftus 1972; Voss et al. 2017).
Despite the finding that increased fixations related to better memory for some trial types, fixations to faces seemed to be less predictive of successful memory formation, as suggested by the significant relationship between AB fixations and AC memory success for scenes but not faces. Thus, increased fixations to faces did not appear selective to trials or items that were successfully remembered. It may be the case that increased fixations to faces served a different function than fixations made toward the scenes. For example, participants’ fixations might have been directed toward distinguishing each face from the other faces in the experiment, as the faces were not as readily nameable as scenes or objects. We used faces from a well-studied stimulus set (Althoff and Cohen 1999; Ryan et al. 2007; Hannula and Ranganath 2009; Heisz and Ryan 2011; Hannula et al. 2012; Liu et al. 2017), but further studies are needed to explicitly test if differences in the discriminability of the stimuli contributed to memory differences found in the present results.
The finding that greater visual sampling of the faces did not contribute to better memory for face associations highlights how increased fixations can contribute to memory advantages in some cases, but not others. We examined the number of fixations made to the stimuli, which has been shown to be related to recognition memory for single faces and hippocampal activity in previous studies (Olsen et al. 2016; Liu et al. 2017). It is possible that other measures of eye movement behaviors may relate to memory more strongly in associative memory paradigms, such as the number and pattern of transitions between items, or reinstatement of these patterns during delay periods (Olsen et al. 2014; Kamp and Zimmer 2015; Lucas et al. 2019). The number of transitions between items did not differ between stimulus conditions in this study, and we did not collect eye movements during delay periods, so further research is needed to reveal how different eye movement measures contribute to memory formation and whether various types of memory (e.g., recognition memory versus associative memory) are differentially related to visual behavior. Recent studies have suggested that eye movements may mediate the link between hippocampal activity and memory formation (Ringo et al. 1994; Olsen et al. 2016; Liu et al. 2017, 2018), but these results suggest that there may be important differences in how eye movements contribute to memory formation depending on the category or memorability of the stimuli.
If viewing behavior does not fully explain the memory and integration advantages associated with scenes, what other aspects of scenes may promote memory? Following predictions made by scene construction and spatial scaffold theories (Kumaran et al. 2007; Hassabis and Maguire 2009; Maguire and Mullally 2013; Maguire et al. 2016; Robin 2018), we suggest that scenes provide a stronger basis for forming associations in memory due to their ability to bind other items by evoking a spatial context, a possibility that has not been tested in previous experiments. In postexperiment debriefing interviews, 94% of participants in Experiment 1 and 84% of participants in Experiment 2 reported attempting to remember associations by imagining a scenario or creating a mental story involving the items. Creating stories or scenarios would likely involve the generation of mental imagery, though this was not explicitly measured in the present study. Scenes may provide a rich source of imagery and more easily form a stable background setting for imagined scenarios, conducive to nesting associations with objects. Objects and faces, treated as items rather than contexts, may be less efficient at binding other associations (Davachi 2006).
Other factors also differed across the stimuli: scenes were nameable, each depicting a unique setting, while faces were unfamiliar and not as easily named, which may have made the faces more difficult to distinguish from one another or generated more interference. Objects were also semantically distinct and nameable, however, and scenes still maintained memory advantages compared to objects, so nameability cannot fully explain the memory advantages for scenes in this study. The scenes also contained more visual detail, on average, than the faces or the objects, since scenes often included objects and filled the square frame of the image entirely, while faces and objects appeared on black and white backgrounds, respectively. This difference may relate to the increased fixations to scenes compared to objects, but more fixations were nevertheless made to faces than to scenes, so again visual detail cannot fully underlie the memory effects. As noted above, it is possible that the increased fixations to faces reflected difficulty in distinguishing the faces and did not relate as closely to memory formation. Future studies could use simplified or less nameable scenes to better control for these factors.
While we compared memory integration based on scenes, faces, and objects while controlling for the accuracy of the premise pairs, this cannot completely account for differences in memorability of the component stimuli and direct associations. Among remembered pairs, if the scene associations were more strongly remembered, this could carry over to better performance on the inferential memory test. Future work should test and control for differences in item memory as well as associative memory. While the stimuli in the present study were drawn from databases of widely used object, scene and face stimuli (Althoff and Cohen 1999; Ryan et al. 2007; Brady et al. 2008; Hannula and Ranganath 2009; Konkle et al. 2010; Heisz and Ryan 2011), there may be differences in stimulus sets that contribute to memory differences. For example, in the present study, each scene was drawn from a unique semantic category, which may have promoted distinctive contextual representations. A previous study comparing associative memory for object–face versus scene–face pairs found better memory for object–face pairs (Tambini et al. 2010), seemingly inconsistent with our findings (though we never tested scene–face pairs), but it was unclear how distinctive the scenes and objects were in that study. Other associative memory studies with varying stimulus categories, including faces, objects and scenes, do not report on memory differences between conditions (Zeithamova et al. 2012; Koster et al. 2018), so more research is needed to compare scene effects across paradigms and stimulus sets.
We speculate that the scene conditions in this study may have led to improved memory performance by eliciting increased hippocampal activity. Previous work demonstrates increased hippocampal activity for successfully remembered associations and, especially, the formation of indirect inferences (Zeithamova and Preston 2010; Zeithamova et al. 2012; Schlichting et al. 2014). Perceiving, remembering and imagining scenes are also associated with increased hippocampal activity compared to similar tasks involving objects, and to some extent, faces (Hassabis et al. 2007a; Zeidman et al. 2015a,b; Hodgetts et al. 2016; Ross et al. 2018; Dalton et al. 2018; Robin et al. 2019). Thus, the hippocampus may have been more active on trials with scenes, resulting in more durable associations involving scenes. Recent research linking hippocampal activity and/or integrity with eye movements may further link the hippocampus to successful memory formation, especially in the scene condition (Olsen et al. 2016; Liu et al. 2017; Voss et al. 2017). Viewing and remembering scenes are additionally associated with a posterior-medial network of brain regions, including the parahippocampal, retrosplenial, and posterior cingulate cortices (Epstein et al. 2003; Hodgetts et al. 2016; Robin et al. 2018), so neuroimaging evidence will be needed to determine how the hippocampus and other related areas contribute to forming associations with scenes.
In summary, we demonstrate that scenes promote better memory for associations and integration across related memories compared to objects and faces. Crucially, indirect associations were more successfully formed between items never seen together if they were linked via a common scene. This study provides an empirical demonstration that scenes promote better learning of associations in memory and linking of related associations, which may relate to how we learn relationships and abstract knowledge from our environments. The memory advantages relating to scenes may be explained in part by increased fixations to scenes compared to objects, though scenes and faces were fixated equally, and memory differences persisted. These results therefore have implications for how eye movements to different stimulus types may differentially contribute to memory formation. We suggest that scenes serve as a spatial scaffold for other items in memory, acting as superior integrators of memories by providing a context that more easily binds other items, potentially relating to the function of the hippocampus and other brain areas involved in spatial and episodic memory. More broadly, these findings suggest that imagining items you wish to remember in a shared spatial context may enhance learning and memory for these associations, which could be leveraged to develop educational tools or boost memory in older adults or memory-impaired populations.
Materials and Methods
Experiment 1
Participants
Thirty-five adults participated in this experiment. Three participants’ data were excluded from the reported analyses: two due to technical problems resulting in incomplete data, and one due to not making any responses in at least one condition. This left 32 participants (25 female, 7 male; mean age = 23.34, SD = 3.55). A sample size of 32 was chosen to match previous experiments using the associative inference paradigm (Zeithamova and Preston 2010; Zeithamova et al. 2012; Schlichting et al. 2014) and to assign two subjects to each of 16 unique counterbalanced versions of the experiment. All participants were native or fluent speakers of English, with a mean of 15.90 yr of education (SD = 1.79), and reported normal or corrected-to-normal vision and no psychological or neurological disorders. All participants gave informed consent in accordance with a protocol approved by the Research Ethics Board at Baycrest Health Sciences and were provided monetary compensation for their participation in the study.
Materials
Experiment 1 included an object-only condition and a scene–object condition (Fig. 1). Stimuli for the study consisted of color pictures of objects and scenes. Two hundred and eighty-eight object images were chosen from the Massive Visual Memory Stimuli database (Brady et al. 2008). Images were of unique everyday objects (e.g., watch, basketball) and did not include multiple exemplars of the same object. Seventy-two scene images were chosen from the Massive Visual Memory Stimuli database (Konkle et al. 2010) and from online searches for additional scene images. Scene images depicted indoor and outdoor scenes from distinct categories (e.g., office, museum) and did not include multiple exemplars of the same scene category. All images were resized and displayed at 256 × 256 pixels, subtending ∼8.3° × 8.3° of visual angle.
Objects and scenes were combined to create overlapping (AB, BC) and nonoverlapping (XY) pairs of images. All A, C, and Y items were objects. In half of the blocks, B and X items were objects, and in the other half, they were scenes. The pairings of objects and scenes, and whether they were part of an overlapping or nonoverlapping pair, were shuffled across counterbalanced versions of the experiment so that the same items were not paired together for all participants.
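The block structure described above (6 overlapping AB/BC pairs sharing a B item, 6 single X items, and 6 nonoverlapping XY control pairs, with AB and X learning always preceding BC and XY learning) can be sketched as follows. This is an illustrative reconstruction, not the authors' Experiment Builder code; the function name, item labels, and `ab_first` parameter are assumptions for the sketch.

```python
import random

def build_block(a_items, b_items, c_items, x_items, y_items, ab_first=True, seed=0):
    """Assemble one study block of the associative inference design:
    6 AB and 6 BC pairs that overlap via a shared B item, 6 X items
    shown alone, and 6 nonoverlapping XY control pairs (24 trials).
    Item lists are placeholders; in the experiment, pairings were
    reshuffled across counterbalanced versions."""
    rng = random.Random(seed)
    ab = list(zip(a_items, b_items))        # direct pairs
    bc = list(zip(b_items, c_items))        # overlap with AB via shared B
    x_alone = [(x, None) for x in x_items]  # single-item trials
    xy = list(zip(x_items, y_items))        # nonoverlapping control pairs
    for trials in (ab, bc, x_alone, xy):
        rng.shuffle(trials)                 # random order within category
    # AB and X learning always preceded BC and XY learning
    if ab_first:
        return ab + bc + x_alone + xy       # AB-BC-X-XY block order
    return x_alone + xy + ab + bc           # X-XY-AB-BC block order
```

In the experiment the two block orders alternated across blocks, which here would correspond to toggling `ab_first`.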
Apparatus
The experiment was programmed and run using Experiment Builder software (SR Research). Participants were seated ∼60 cm from the computer monitor and completed a nine-point calibration and validation prior to the experiment and after each break. Monocular eye movements were recorded during study and test trials with a head-mounted EyeLink II eyetracker (sample rate = 500 Hz; SR Research). Online drift correction (<5°) was performed during the fixation cross between trials, if needed. Saccades were identified using the built-in EyeLink saccade-detector heuristic, with acceleration and velocity thresholds set to detect saccades greater than 0.5° of visual angle. Blinks were defined as periods in which the signal was missing for three or more consecutive samples. Fixations were defined as the samples remaining after the categorization of saccades and blinks.
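The parsing logic (missing samples become blinks, high-velocity samples become saccades, everything else counts as fixation) can be sketched with a toy velocity-based classifier. This is not the EyeLink algorithm: the velocity threshold, pixels-per-degree scaling, and function name are illustrative assumptions.

```python
import math

def classify_samples(x, y, valid, hz=500, px_per_deg=30.0, vel_thresh=30.0):
    """Toy sample classifier in the spirit of the tracker's parsing:
    samples with a missing signal are blink samples, samples exceeding
    a velocity threshold (deg/sec) are saccade samples, and the rest
    are fixation samples. All parameter values are illustrative."""
    labels = []
    for i in range(len(x)):
        if not valid[i]:                      # signal lost: blink sample
            labels.append("blink")
        elif i == 0 or not valid[i - 1]:
            labels.append("fixation")         # no previous sample to compare
        else:
            dx = (x[i] - x[i - 1]) / px_per_deg
            dy = (y[i] - y[i - 1]) / px_per_deg
            vel = math.hypot(dx, dy) * hz     # deg of visual angle per sec
            labels.append("saccade" if vel > vel_thresh else "fixation")
    return labels
```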
Procedures
The study design was based on the associative inference paradigm (Preston et al. 2004; Zeithamova and Preston 2010; Schlichting et al. 2014). Participants completed 12 study-test blocks of the experiment with opportunities for breaks between blocks. Each block included 24 study trials and 24 test trials. Study trials consisted of a 2-sec fixation cross, and then a pair of images (or single image) was displayed on the left and right sides of the screen for 3 sec. Participants were instructed to view and memorize the pairs of images (or single image, on X trials). After 3 sec, a judgment of learning (JOL) scale appeared on the screen in addition to the images for an additional 1.5 sec, and participants were asked to select “1—won't remember,” “2—may remember,” or “3—will remember” to promote engagement with the task and memory formation. Study blocks consisted of 6 AB pairs and 6 BC pairs, which shared an overlapping (B) item. Each block also included 6 X items (presented alone) and 6 XY items, as a control condition with no overlapping association, in keeping with previous studies (Zeithamova and Preston 2010; Schlichting et al. 2014). Each pair was viewed only once, and images were not repeated across blocks. The order of trials within these categories was random, but the order of the trial categories was either AB–BC–X–XY or X–XY–AB–BC to ensure that AB and X learning always preceded BC and XY learning. The two orders alternated across blocks and the order of these blocks was counterbalanced across participants.
After each block of 24 study trials, a screen indicated the start of the test period. Test trials consisted of a 2-sec fixation cross, followed by three images displayed on the screen, testing memory for the associations in a two-alternative forced-choice procedure. The cue image was displayed at the top center of the screen, and two possible associates were displayed on the bottom half of the screen, on the left and right sides. Participants had 4 sec to view the images and to indicate which image on the bottom of the screen was associated with the cue image at the top by pressing the corresponding key. Lure images were always from the same category as the target image (i.e., on an AB test trial, another B image from the same study block served as the lure). Test trials consisted of direct associations learned in the study block (AB, BC, and XY pairs) and indirect pairs that could be solved by associating the images via the common B item, not shown at test (AC pairs). Trials were presented in a pseudorandom order in which AC pairs were always tested before their associated AB and BC pairs, to prevent additional learning of the direct associations preceding the indirect test.
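The pseudorandom ordering constraint (each triad's indirect AC trial must precede its direct AB and BC trials) can be sketched as a shuffle followed by a repair pass. The repair-by-swap approach and function name are illustrative assumptions, not the experiment's actual randomization code.

```python
import random

def order_test_trials(n_triads=6, seed=0):
    """Shuffle all test trials, then for each triad move its AC trial
    to the earliest position occupied by that triad's AC/AB/BC trials,
    so the inference is always probed before the direct pairs recur."""
    rng = random.Random(seed)
    trials = [(t, i) for t in ("AC", "AB", "BC", "XY") for i in range(n_triads)]
    rng.shuffle(trials)
    for i in range(n_triads):
        idxs = [j for j, (t, k) in enumerate(trials)
                if k == i and t in ("AC", "AB", "BC")]
        first = min(idxs)
        ac = trials.index(("AC", i))
        if ac != first:                        # move AC ahead of AB/BC
            trials[first], trials[ac] = trials[ac], trials[first]
    return trials
```

Each swap only exchanges trials belonging to the same triad, so earlier triads' constraints are never broken by later repairs.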
Six of the 12 blocks contained all object images and six blocks contained objects and scenes. The blocks alternated, and the type of block to appear first was counterbalanced across participants. Prior to the experiment, participants completed eight practice study trials and eight practice test trials with instructions and were given the opportunity to ask questions about the procedure. AC trials were included in the practice and the indirect nature of the AC associations was explained to participants, as in previous studies (Zeithamova and Preston 2010; Schlichting et al. 2014).
After completing the study, participants completed an informal debriefing interview to assess the strategies used during the experiment. In the debriefing interview, participants were asked if they noticed the two conditions in the study, and if so, which condition they found to be easier, if either. Next, participants were asked what strategies they used, if any, to remember the pairs, including whether they imagined scenarios linking the items or created verbal stories including the items.
Experiment 2
Participants
Thirty-seven adults participated in this version of the study. Five participants’ data were excluded from analyses: two due to technical problems resulting in incomplete data, one due to participation in Experiment 1, one due to the diagnosis of a psychological disorder, and one due to not responding on more than 50% of trials. This left 32 participants (25 female, 7 male; mean age = 22.06, SD = 3.12). All participants were native or fluent speakers of English, with a mean of 15.48 yr of education (SD = 2.15), and reported normal or corrected-to-normal vision and no diagnosis of a psychological or neurological disorder. This study was approved by the Research Ethics Board at Baycrest Health Sciences.
All participants gave informed consent in accordance with the approved protocol and were provided monetary compensation for their participation in the study.
Materials
The study design was identical to Experiment 1, except that a face–object condition was substituted for the object-only condition (Fig. 1). Seventy-two face images were chosen from a data set used in previous studies (Althoff and Cohen 1999; Ryan et al. 2007; Hannula and Ranganath 2009; Heisz and Ryan 2011; Hannula et al. 2012; Liu et al. 2017). Face images each depicted a nonfamous female face on a black background.
Objects, faces and scenes were combined to create overlapping (AB, BC) and nonoverlapping (XY) pairs of images. All A, C, and Y items were objects. In half of the blocks, B and X items were faces, and in the other half, they were scenes. Six blocks contained faces and objects and six blocks contained scenes and objects. The blocks alternated between faces and scenes, and the order was counterbalanced across participants.
Data analysis
Accuracy of responses was analyzed by calculating proportion correct for test trials by condition. Trials in which the incorrect image was chosen or no response was made were considered incorrect. Repeated-measures ANOVAs with factors of condition (object vs. scene) and trial type (AB, BC, AC, XY) were used to compare logit-transformed accuracy scores (correcting for the nonnormal distribution of proportion scores). Effect sizes are reported via generalized eta-squared (η2G). Follow-up Bonferroni-corrected paired t-tests, with 95% confidence intervals (CI) for the difference between the means, were used to further test significant effects, when applicable.
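The logit transform applied to proportion-correct scores can be sketched as below. The small-sample adjustment that keeps proportions of 0 and 1 finite is one common choice; the exact correction used in the reported analysis is an assumption here, as is the function name.

```python
import math

def logit_accuracy(p_correct, n_trials):
    """Logit-transform a proportion-correct score before ANOVA,
    shrinking the proportion slightly away from 0 and 1 so the
    transform stays finite (an illustrative correction)."""
    p_adj = (p_correct * n_trials + 0.5) / (n_trials + 1)
    return math.log(p_adj / (1 - p_adj))
```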
We measured visual sampling by counting the number of fixations made to each item during study and test. Fixations were defined as the samples omitting blinks and saccades, as described above. In order to determine the number of fixations to each image on the screen, regions of interest (ROIs) were defined as square boxes of 300 × 300 pixels surrounding the images (256 × 256 pixels), capturing all fixations made to each image or in a border region just outside the image. For study trials, fixations were counted from the onset of the trial until the JOL scale appeared (3 sec). For test trials, fixations were counted for the full test image display (4 sec). Fixations within the ROIs were counted per trial and analyzed using repeated-measures ANOVAs and follow-up Bonferroni-corrected paired t-tests, when applicable. Analyses of fixations at test are included in the Supplemental Material.
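The ROI-counting step described above amounts to checking whether each fixation's coordinates fall within a 300 × 300 pixel box around each image. A minimal sketch (coordinate convention and function name are assumptions):

```python
def fixations_in_roi(fixations, roi_center, roi_size=300):
    """Count fixations landing inside a square ROI (300 x 300 px by
    default) centered on a 256 x 256 px image, capturing fixations on
    the image itself or in a border region just outside it.
    `fixations` is a list of (x, y) screen coordinates in pixels."""
    cx, cy = roi_center
    half = roi_size / 2.0
    return sum(1 for fx, fy in fixations
               if abs(fx - cx) <= half and abs(fy - cy) <= half)
```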
The relationship between fixations at study and memory performance was modeled using generalized linear mixed-effects modeling (logistic regression with a binomial distribution), using the lme4 package in R (Bates et al. 2015). Models included fixed effects of condition and the number of fixations at study to B and X items (i.e., those that varied in category), with random intercepts for subject and trial, and memory accuracy at test as the dependent variable. The significance of effects was determined by comparing full models, which included the number of fixations and the interaction between fixation number and condition, to reduced models without eye movement measures, using χ2 likelihood ratio tests.
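The model comparison above is a standard likelihood ratio test: twice the difference in log-likelihoods between the nested models is compared against a χ2 distribution whose degrees of freedom equal the number of added parameters (two here: the fixation term and its interaction with condition). A sketch of the test itself, given log-likelihoods from the fitted models (the closed-form survival function is valid only for even degrees of freedom, which suffices for this comparison):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x), closed form for even df:
    exp(-x/2) * sum_{i<df/2} (x/2)^i / i!."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def likelihood_ratio_test(loglik_reduced, loglik_full, df_diff=2):
    """Compare nested models: returns the chi-square statistic
    2 * (LL_full - LL_reduced) and its p-value."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2_sf_even_df(stat, df_diff)
```

In R this corresponds to calling `anova()` on the reduced and full `glmer` fits, which reports the same χ2 statistic and p-value.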
All data and analysis scripts are available on the Open Science Framework (doi: 10.17605/OSF.IO/4AF69).
Acknowledgments
We thank Yumna Farooq, Verena Mikhail, Nahid Iseyas, and Keisha Joseph for their assistance with data collection, and Kirk Geier for technical assistance and training. R.K.O. is funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2017-06178), a New Investigator Grant from the Alzheimer Society of Canada, and Project Grant from the Canadian Institutes of Health (CIHR; PJT 162292). J.R. is funded by a postdoctoral award from the Alzheimer Society of Canada.
Footnotes
[Supplemental material is available for this article.]
Article is online at http://www.learnmem.org/cgi/doi/10.1101/lm.049486.119.
References
- Althoff RR, Cohen NJ. 1999. Eye-movement-based memory effect: a reprocessing effect in face perception. J Exp Psychol Learn Mem Cogn 25: 997–1010. 10.1037/0278-7393.25.4.997
- Arnold KM, McDermott KB, Szpunar KK. 2011. Imagining the near and far future: the role of location familiarity. Mem Cogn 39: 954–967. 10.3758/s13421-011-0076-1
- Bates D, Mächler M, Bolker B, Walker S. 2015. Fitting linear mixed-effects models using lme4. J Stat Softw 67: 1–48. 10.18637/jss.v067.i01
- Brady TF, Konkle T, Alvarez GA, Oliva A. 2008. Visual long-term memory has a massive storage capacity for object details. Proc Natl Acad Sci 105: 14325–14329. 10.1073/pnas.0803390105
- Byrne P, Becker S, Burgess N. 2007. Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychol Rev 114: 340–375. 10.1037/0033-295X.114.2.340
- Chadwick MJ, Mullally SL, Maguire EA. 2013. The hippocampus extrapolates beyond the view in scenes: an fMRI study of boundary extension. Cortex 49: 2067–2079. 10.1016/j.cortex.2012.11.010
- Dalton MA, Zeidman P, McCormick C, Maguire EA. 2018. Differentiable processing of objects, associations, and scenes within the hippocampus. J Neurosci 38: 8146–8159. 10.1523/JNEUROSCI.0263-18.2018
- Davachi L. 2006. Item, context and relational episodic encoding in humans. Curr Opin Neurobiol 16: 693–700. 10.1016/j.conb.2006.10.012
- de Vito S, Gamboz N, Brandimonte MA. 2012. What differentiates episodic future thinking from complex scene imagery? Conscious Cogn 21: 813–823. 10.1016/j.concog.2012.01.013
- Epstein R, Graham KS, Downing PE. 2003. Viewpoint-specific scene representations in human parahippocampal cortex. Neuron 37: 865–876. 10.1016/S0896-6273(03)00117-X
- Hannula DE, Ranganath C. 2009. The eyes have it: hippocampal activity predicts expression of memory in eye movements. Neuron 63: 592–599. 10.1016/j.neuron.2009.08.025
- Hannula DE, Althoff RR, Warren DE, Riggs L, Cohen NJ, Ryan JD. 2010. Worth a glance: using eye movements to investigate the cognitive neuroscience of memory. Front Hum Neurosci 4: 1–16. 10.3389/fnhum.2010.00166
- Hannula DE, Baym CL, Warren DE, Cohen NJ. 2012. The eyes know: eye movements as a veridical index of memory. Psychol Sci 23: 278–287. 10.1177/0956797611429799
- Hassabis D, Maguire EA. 2007. Deconstructing episodic memory with construction. Trends Cogn Sci 11: 299–306. 10.1016/j.tics.2007.05.001
- Hassabis D, Maguire EA. 2009. The construction system of the brain. Philos Trans R Soc Lond B Biol Sci 364: 1263–1271. 10.1098/rstb.2008.0296
- Hassabis D, Kumaran D, Maguire EA. 2007a. Using imagination to understand the neural basis of episodic memory. J Neurosci 27: 14365–14374. 10.1523/JNEUROSCI.4549-07.2007
- Hassabis D, Kumaran D, Vann SD, Maguire EA. 2007b. Patients with hippocampal amnesia cannot imagine new experiences. Proc Natl Acad Sci 104: 1726–1731. 10.1073/pnas.0610561104
- Hebscher M, Levine B, Gilboa A. 2018. The precuneus and hippocampus contribute to individual differences in the unfolding of spatial representations during episodic autobiographical memory. Neuropsychologia 110: 123–133. 10.1016/j.neuropsychologia.2017.03.029
- Heisz JJ, Ryan JD. 2011. The effects of prior exposure on face processing in younger and older adults. Front Aging Neurosci 3: 1–6. 10.3389/fnagi.2011.00015
- Hodgetts CJ, Shine JP, Lawrence AD, Downing PE, Graham KS. 2016. Evidencing a place for the hippocampus within the core scene processing network. Hum Brain Mapp 37: 3779–3794. 10.1002/hbm.23275
- Hodgetts CJ, Voets NL, Thomas AG, Clare S, Lawrence AD, Graham KS. 2017. Ultra-high-field fMRI reveals a role for the subiculum in scene perceptual discrimination. J Neurosci 37: 3150–3159. 10.1523/JNEUROSCI.3225-16.2017
- Horner AJ, Bisby JA, Wang A, Bogus K, Burgess N. 2016. The role of spatial boundaries in shaping long-term event representations. Cognition 154: 151–164. 10.1016/j.cognition.2016.05.013
- Hupbach A, Gomez R, Nadel L. 2011. Episodic memory updating: the role of context familiarity. Psychon Bull Rev 18: 787–797. 10.3758/s13423-011-0117-6
- Kamp S-M, Zimmer HD. 2015. Contributions of attention and elaboration to associative encoding in young and older adults. Neuropsychologia 75: 252–264. 10.1016/j.neuropsychologia.2015.06.026
- Konkle T, Brady TF, Alvarez GA, Oliva A. 2010. Scene memory is more detailed than you think: the role of categories in visual long-term memory. Psychol Sci 21: 1551–1556. 10.1177/0956797610385359
- Koster R, Chadwick MJ, Chen Y, Berron D, Banino A, Düzel E, Hassabis D, Kumaran D. 2018. Big-loop recurrence within the hippocampal system supports integration of information across episodes. Neuron 99: 1342–1354.e6. 10.1016/j.neuron.2018.08.009
- Kumaran D, Hassabis D, Spiers HJ, Vann SD, Vargha-Khadem F, Maguire EA. 2007. Impaired spatial and non-spatial configural learning in patients with hippocampal pathology. Neuropsychologia 45: 2699–2711. 10.1016/j.neuropsychologia.2007.04.007
- Liu Z-X, Shen K, Olsen RK, Ryan JD. 2017. Visual sampling predicts hippocampal activity. J Neurosci 37: 599–609. 10.1523/JNEUROSCI.2610-16.2016
- Liu Z-X, Shen K, Olsen RK, Ryan JD. 2018. Age-related changes in the relationship between visual exploration and hippocampal activity. Neuropsychologia 119: 81–91. 10.1016/j.neuropsychologia.2018.07.032
- Loftus GR. 1972. Eye fixations and recognition memory for pictures. Cogn Psychol 3: 525–551. 10.1016/0010-0285(72)90021-7
- Lucas HD, Duff MC, Cohen NJ. 2019. The hippocampus promotes effective saccadic information gathering in humans. J Cogn Neurosci 31: 186–201. 10.1162/jocn_a_01336
- Maguire EA, Mullally SL. 2013. The hippocampus: a manifesto for change. J Exp Psychol Gen 142: 1180–1189. 10.1037/a0033650
- Maguire EA, Intraub H, Mullally SL. 2016. Scenes, spaces, and memory traces. Neuroscientist 22: 432–439. 10.1177/1073858415600389
- Merriman NA, Ondřej J, Roudaia E, O'Sullivan C, Newell FN. 2016. Familiar environments enhance object and spatial memory in both younger and older adults. Exp Brain Res 234: 1555–1574. 10.1007/s00221-016-4557-0
- Miller JF, Lazarus EM, Polyn SM, Kahana MJ. 2013. Spatial clustering during memory search. J Exp Psychol Learn Mem Cogn 39: 773–781. 10.1037/a0029684
- Mullally SL, Intraub H, Maguire EA. 2012. Attenuated boundary extension produces a paradoxical memory advantage in amnesic patients. Curr Biol 22: 261–268. 10.1016/j.cub.2012.01.001
- O'Keefe J, Nadel L. 1978. The hippocampus as a cognitive map. Oxford University Press, Oxford.
- Olsen RK, Moses SN, Riggs L, Ryan JD. 2012. The hippocampus supports multiple cognitive processes through relational binding and comparison. Front Hum Neurosci 6: 1–13. 10.3389/fnhum.2012.00146
- Olsen RK, Chiew M, Buchsbaum BR, Ryan JD. 2014. The relationship between delay period eye movements and visuospatial memory. J Vis 14: 8. 10.1167/14.1.8
- Olsen RK, Sebanayagam V, Lee Y, Moscovitch M, Grady CL, Rosenbaum RS, Ryan JD. 2016. The relationship between eye movements and subsequent recognition: evidence from individual differences and amnesia. Cortex 85: 182–193. 10.1016/j.cortex.2016.10.007
- Pacheco D, Sánchez-Fibla M, Duff A, Verschure PFMJ. 2017. A spatial-context effect in recognition memory. Front Behav Neurosci 11: 1–9. 10.3389/fnbeh.2017.00143
- Pajkert A, Finke C, Shing YL, Hoffmann M, Sommer W, Heekeren HR, Ploner CJ. 2017. Memory integration in humans with hippocampal lesions. Hippocampus 27: 1230–1238. 10.1002/hipo.22766
- Preston AR, Shrager Y, Dudukovic NM, Gabrieli JDE. 2004. Hippocampal contribution to the novel use of relational information in declarative memory. Hippocampus 14: 148–152. 10.1002/hipo.20009
- Ringo JL, Sobotka S, Diltz MD, Bunce CM. 1994. Eye movements modulate activity in hippocampal, parahippocampal, and inferotemporal neurons. J Neurophysiol 71: 1285–1288. 10.1152/jn.1994.71.3.1285
- Robin J. 2018. Spatial scaffold effects in event memory and imagination. Wiley Interdiscip Rev Cogn Sci 9: e1462. 10.1002/wcs.1462
- Robin J, Moscovitch M. 2014. The effects of spatial contextual familiarity on remembered scenes, episodic memories, and imagined future events. J Exp Psychol Learn Mem Cogn 40: 459–475. 10.1037/a0034886
- Robin J, Wynn J, Moscovitch M. 2016. The spatial scaffold: the effects of spatial context on memory for events. J Exp Psychol Learn Mem Cogn 42: 308–315. 10.1037/xlm0000167
- Robin J, Buchsbaum BR, Moscovitch M. 2018. The primacy of spatial context in the neural representation of events. J Neurosci 38: 2755–2765. 10.1523/JNEUROSCI.1638-17.2018
- Robin J, Rai Y, Valli M, Olsen RK. 2019. Category specificity in the medial temporal lobe: a systematic review. Hippocampus 29: 313–339. 10.1002/hipo.23024
- Rolls ET. 2017. A scientific theory of Ars Memoriae: spatial view cells in a continuous attractor network with linked items. Hippocampus 27: 570–579. 10.1002/hipo.22713
- Ross DA, Sadil P, Wilson DM, Cowell RA. 2018. Hippocampal engagement during recall depends on memory content. Cereb Cortex 28: 2685–2698. 10.1093/cercor/bhx147
- Ryan JD, Althoff RR, Whitlow S, Cohen NJ. 2000. Amnesia is a deficit in relational memory. Psychol Sci 11: 454–461. 10.1111/1467-9280.00288
- Ryan JD, Hannula DE, Cohen NJ. 2007. The obligatory effects of memory on eye movements. Memory 15: 508–525. 10.1080/09658210701391022
- Ryan JD, D'Angelo MC, Kamino D, Ostreicher M, Moses SN, Rosenbaum RS. 2016. Relational learning and transitive expression in aging and amnesia. Hippocampus 26: 170–184. 10.1002/hipo.22501
- Schlichting ML, Zeithamova D, Preston AR. 2014. CA1 subfield contributions to memory integration and inference. Hippocampus 24: 1–37. 10.1002/hipo.22310
- Sheldon S, Chu S. 2017. What versus where: investigating how autobiographical memory retrieval differs when accessed with thematic versus spatial information. Q J Exp Psychol 70: 1909–1921. 10.1080/17470218.2016.1215478
- Shohamy D, Wagner AD. 2008. Integrating memories in the human brain: hippocampal-midbrain encoding of overlapping events. Neuron 60: 378–389. 10.1016/j.neuron.2008.09.023
- Tambini A, Ketz N, Davachi L. 2010. Enhanced brain correlations during rest are related to memory for recent experiences. Neuron 65: 280–290. 10.1016/j.neuron.2010.01.001
- Voss JL, Bridge DJ, Cohen NJ, Walker JA. 2017. A closer look at the hippocampus and memory. Trends Cogn Sci 21: 577–588. 10.1016/j.tics.2017.05.008
- Yates FA. 1966. The art of memory. Routledge & Kegan Paul, London.
- Zalesak M, Heckers S. 2009. The role of the hippocampus in transitive inference. Psychiatry Res 172: 24–30. 10.1016/j.pscychresns.2008.09.008
- Zeidman P, Lutti A, Maguire EA. 2015a. Investigating the functions of subregions within anterior hippocampus. Cortex 73: 240–256. 10.1016/j.cortex.2015.09.002
- Zeidman P, Mullally SL, Maguire EA. 2015b. Constructing, perceiving, and maintaining scenes: hippocampal activity and connectivity. Cereb Cortex 25: 3836–3855. 10.1093/cercor/bhu266
- Zeithamova D, Preston AR. 2010. Flexible memories: differential roles for medial temporal lobe and prefrontal cortex in cross-episode binding. J Neurosci 30: 14676–14684. 10.1523/JNEUROSCI.3250-10.2010
- Zeithamova D, Dominick AL, Preston AR. 2012. Hippocampal and ventral medial prefrontal activation during retrieval-mediated learning supports novel inference. Neuron 75: 168–179. 10.1016/j.neuron.2012.05.010