Abstract
People spontaneously segment continuous ongoing actions into sequences of events. Prior research found that gaze similarity and pupil dilation increase at event boundaries and that older adults segment more idiosyncratically than young adults do. We used eye tracking to explore age-related differences in gaze similarity (i.e., the extent to which individuals look at the same places at the same times as others) and pupil dilation at event boundaries. Older and young adults watched naturalistic videos of actors performing everyday activities while we tracked their eye movements. Afterwards, they segmented the videos into sub-events. Replicating prior work, we found that pupil size and gaze similarity increased at event boundaries; that is, there were fewer individual differences in eye position at boundaries. We also found that young adults had higher gaze similarity than older adults, both across entire videos and at event boundaries. This study is the first to show that age-related differences in how people parse continuous everyday activities into events may be partially explained by individual differences in gaze patterns: those who segment less normatively may do so because they fixate less normative regions. The results have implications for future interventions designed to improve encoding in older adults.
Keywords: Aging, Attentional Selection, Event Segmentation, Eye movements, Film Comprehension
When you observe an everyday activity, such as an instructional video about setting up a video game console, you are exposed to a continuous stream of visual and auditory input. Such activities usually contain few overt pauses that you can use to help structure them. Nevertheless, people cope with this information overload by segmenting the continuous stream into discrete events (Zacks, 2020). For instance, you can parse the instructional video into many actions, such as removing the console from the box, connecting the necessary cables, and connecting the console to the television.
Viewers segment activities by representing the information for ‘what is happening now’ in an event model. Theories of event comprehension, such as Event Segmentation Theory (Zacks et al., 2007) and the Scene Perception and Event Comprehension Theory (Loschky et al., 2019), propose that event models facilitate comprehension by informing both predictions for the near future and backward inferences that connect current information with the recent past (Loschky et al., 2019; Zacks et al., 2011). When predictions fail and/or incoming information is inconsistent with the current event model, viewers incur a cognitive load (Swets & Kurby, 2016) and perceive an event boundary. At these moments, viewers store the previous event model in long-term memory (Bailey & Zacks, 2015; Pettijohn & Radvansky, 2016), shift attention towards new incoming information (Eisenberg & Zacks, 2016), and lay the foundation of a new event model in working memory (Gernsbacher, 1990; Loschky et al., 2019).
In a typical event segmentation study, observers watch videos and press a button whenever they perceive an event boundary (Newtson, 1973). Despite the complex nature of the stimuli, different viewers are strikingly similar in their event segmentation (Zacks, Tversky, et al., 2001), brain activity (Hasson et al., 2004), and gaze patterns (Dorr et al., 2010).
Synchronous event segmentation responses, brain responses, and gaze patterns likely reflect optimal event encoding because individual differences in these variables predict both better comprehension (Hutson et al., 2022; Loschky et al., 2015; Yeshurun et al., 2017) and better memory (Davis et al., 2020; Hasson et al., 2008; Zacks et al., 2006). Importantly, there are also robust age-related differences in event segmentation (Kurby & Zacks, 2011) and in neural synchronization during movie viewing (Campbell et al., 2015). Nevertheless, some prior work has failed to find age-related differences in gaze patterns while people watched videos. Specifically, Davis et al. (2020) tracked young and older adults’ eye movements during a Hollywood-style film for which reduced neural synchrony in older adults had previously been reported (Campbell et al., 2015). Unexpectedly, Davis et al. (2020) found that young and older adults looked at similar screen locations over time. Perhaps aspects of the stimulus drive eye movements regardless of age. Alternatively, perhaps there are meaningful age-related differences in gaze patterns, but the videos used in prior research (i.e., Hollywood-style films, which use filmmaking techniques known to guide attention) masked them. Thus, the current study explored age-related differences in gaze similarity using unedited videos.
Experiment Overview and Hypotheses
We reanalyzed eye tracking data previously published in Smith et al. (2021), who examined the relationship between knowledge and attention to goal-relevant information in videos. We extended this work by investigating age-related differences in gaze similarity and pupil changes at event boundaries. Given age-related differences in neural synchrony (Campbell et al., 2015) and segmentation agreement (Kurby & Zacks, 2011), we predicted that older adults’ gaze similarity would be lower than young adults’, both throughout the videos and at event boundaries (but see Davis et al., 2020). Event boundaries coincide with large changes in motion (Hard et al., 2011; Zacks, Kumar, et al., 2009), goal completion (Kurby & Zacks, 2019), and perceptual change (Hard et al., 2011), as well as with increased brain activity in regions involved in motion processing (i.e., MT+) and eye movements (i.e., the frontal eye fields; Speer et al., 2007; Zacks, Braver, et al., 2001). We predicted that gaze similarity would increase around event boundaries, consistent with Davis et al.’s (2020) finding that gaze similarity was higher after boundaries; we extend this work by examining the time course of changes in gaze similarity around boundaries. Alternatively, gaze similarity could decrease at event boundaries because young adults make fewer predictive eye movements (Eisenberg et al., 2018) and more exploratory eye movements (Eisenberg & Zacks, 2016) at event boundaries.
Lastly, people’s pupils dilate when they experience cognitive load (Kahneman & Beatty, 1966). Some work has found that pupil size increases at event boundaries when people make overt segmentation responses (Clewett et al., 2020), but not when no overt response is required (Eisenberg & Zacks, 2016). Thus, we also conducted an exploratory analysis examining changes in pupil size around event boundaries while participants passively watched videos.
Method
Transparency and Openness
We did not preregister the analyses; however, de-identified data, stimuli, and R analysis scripts are available on OSF. We report the manipulated and measured variables here, and we report a power analysis to justify the sample size. The Institutional Review Board at Kansas State University (Protocol 8915, Effects of Knowledge and Comprehension on Eye Movements) approved this study.
Participants
Sixty-two participants (32 young adults, 30 older adults) completed the experiment during the 2019 school year. The sample was predominantly White (young adults: n = 2 American Indian, n = 2 Black, n = 28 White; older adults: n = 2 American Indian, n = 1 Black, n = 27 White). We ran a power analysis by comparing segmentation agreement between young and older adults using data collected by Smith et al. (2020). With an effect size of d = .67, alpha = .05, and power = .80, G*Power indicated that a sample size of 28 per group should be sufficient to detect an age-related difference in segmentation agreement. We removed the data of one additional young adult from the analyses because of poor eye tracker calibration. We recruited young adults (16 females and 17 males) from Kansas State University’s research pool and cognitively healthy older adults (16 females and 14 males) from the local community. We compensated young adults with course credit and paid older adults $10/hour.
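For readers who want to reproduce the power calculation in R rather than G*Power, a minimal sketch follows. The pwr package is an assumed equivalent (we used G*Power), and the one-tailed test is an assumption that is consistent with the reported group size of 28.

```r
# Sketch of the power analysis using the pwr package (assumed G*Power
# equivalent). A one-tailed, two-sample t-test with d = .67, alpha = .05,
# and power = .80 yields n of approximately 28 per group.
library(pwr)

pwr.t.test(d = 0.67, sig.level = 0.05, power = 0.80,
           type = "two.sample", alternative = "greater")
```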
Materials
Participants watched four videos of college-aged actors performing everyday activities (Figure 1A). Videos were shot from a fixed camera position without panning or zooming, and they contained no sound or edits. Participants also watched a practice video of a man building a boat out of Duplo blocks to familiarize themselves with the procedure.
Figure 1. Frames from the stimuli, gaze similarity, and likelihood of segmenting.
Note. Panel A: Stills from the four videos: Balancing a Checkbook (duration = 258 s), Planting Flowers (duration = 297 s), Installing a Printer (duration = 148 s), and Setting Up a Game Console (duration = 267 s). We blurred the actors’ faces here and in later figures for this publication to conceal the actors’ identities; the faces were not blurred in the videos shown to participants. Some activities were more familiar to young adults (Installing a Printer, Setting Up a Game Console) and some were more familiar to older adults (Balancing a Checkbook, Planting Flowers). More details about these videos are provided in Smith et al. (2021, Table 2). Panel B: Frames show gaze heat maps at two moments with low gaze similarity (time stamps: 21 s and 124 s) and two moments with high gaze similarity (time stamps: 55 s and 80 s). Diagonal black lines connect the four heat maps in B to the matching time points in C. Panel C: Gaze similarity for the first 3 minutes and 19 seconds of the Setting Up a Game Console video. Panel D: The likelihood of perceiving a new event as a function of time in the video. Data points just above the x-axis indicate the moments when individual participants pressed the button to indicate an event boundary. Vertical gray lines through the peaks of event segmentation in Panel D, also shown in Panel C, correspond to participants’ normative event boundaries; ≥20% of participants in the sample pressed the button within 1 second of each peak. Gaze similarity and segmentation probabilities for each video are provided in Supplemental Figures S1 and S2, respectively.
Eye Tracker
Participants’ gaze was tracked monocularly with an EyeLink 1000+ at a sampling rate of 1000 Hz. Videos were shown on a 19-inch ViewSonic CRT monitor (model G90fb) at 60 Hz with a pixel resolution of 1024 × 768. The display subtended 31.4° × 23.89° of visual angle at a viewing distance of 65 cm, which was maintained with a chin and forehead rest.
Procedure
After signing a consent form, participants completed a nine-point eye tracking calibration routine and were asked to watch the videos for a later memory test (awareness of a memory test does not influence gaze similarity; Davis et al., 2021). Participants watched the practice video, followed by all four experimental videos, in a counterbalanced order. Immediately after each video, participants completed a filler test and three measures of memory for the video: free recall, recognition, and order memory. After the final memory test, participants indicated how often they performed each activity (see Pitts et al., 2022, and Smith et al., 2021 for details on the memory tests).
Participants rewatched the videos at the end of the experiment while performing the event segmentation task (Newtson, 1973), so we could evaluate changes in gaze similarity around event boundaries. Participants pressed the spacebar when they judged that one meaningful unit of activity ended and another began. We did not provide an example of how to segment the videos. Participants did, however, practice the segmentation task on the practice video. If the participant identified fewer than 3 event boundaries (a number unknown to participants) during the practice video, we instructed them that most participants identify more units (Zacks et al., 2006). Participants repeated the task until they identified at least 3 units in the practice video.
Gaze Similarity Calculation
Gaze similarity is a measure of the extent to which people look at the same places at the same moments in time. To calculate it, we first cleaned the data by removing saccades, blinks, and moments when eye movements were not tracked. We then compared the spatiotemporal distribution of gaze behavior between young and older adults using gaze heatmaps on each frame (Dorr et al., 2010). Briefly, as shown in Figure 1B, gaze heatmaps represent the probabilistic spatial distribution of raw gaze points.
We used a method based on the Normalized Scanpath Saliency (NSS) to calculate gaze similarity (Dorr et al., 2010). We downsampled the eye tracking data to 25 Hz to express raw gaze locations on each video frame. We fit a Gaussian probability distribution (120 pixels wide, ≈2° of visual angle) around each raw gaze location, so that each pixel in each frame had a fixation probability. Using a leave-one-out procedure, we then averaged the probabilities of all participants except one within a 7-frame (280 ms) moving time window, approximately the average fixation duration when watching videos (Hutson et al., 2017). Next, we sampled the gaze location of the remaining participant and calculated a z-score for it to quantify how well it fit the distribution for that frame. We repeated this leave-one-out procedure until each participant had a z-scored value, referred to as gaze similarity. Finally, we z-scored the values across each video to evaluate how gaze similarity fluctuated over time relative to the mean of the video; this standardization was essential for comparing gaze similarity around event boundaries. Thus, gaze similarity (Figure 1C) reflects both how well each participant’s gaze location on each frame matches the rest of the sample and how much that similarity differs from all other moments in the video. We removed the first second of data from each video, because all participants fixated a central dot just before video onset, producing maximal gaze similarity. We also compared the gaze similarity of young and older adults with their own and the other age group; those analyses, provided in Supplemental Materials Section 2, Figure S5, yielded results analogous to those reported here.
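To make the computation concrete, below is a minimal R sketch of the leave-one-out NSS calculation for a single frame. The data layout and helper functions are hypothetical simplifications, not the published pipeline; the actual analysis scripts are available on OSF.

```r
# Hypothetical sketch of leave-one-out NSS gaze similarity for one frame.
# `gaze` is assumed to be a data frame with columns: participant, frame, x, y
# (gaze samples downsampled to 25 Hz; saccades and blinks already removed).

gaussian_blob <- function(w, h, cx, cy, sigma = 60) {
  # 2D Gaussian centered on one gaze point; sigma of ~60 px makes the blob
  # span roughly 120 px (~2 degrees of visual angle)
  xs <- matrix(rep(1:w, each = h), nrow = h)
  ys <- matrix(rep(1:h, times = w), nrow = h)
  exp(-((xs - cx)^2 + (ys - cy)^2) / (2 * sigma^2))
}

nss_one_frame <- function(gaze, f, left_out, w = 1024, h = 768, half_win = 3) {
  # Heatmap from all OTHER participants within the 7-frame window (f-3 .. f+3)
  others <- subset(gaze, participant != left_out & abs(frame - f) <= half_win)
  blobs  <- mapply(gaussian_blob, cx = others$x, cy = others$y,
                   MoreArgs = list(w = w, h = h), SIMPLIFY = FALSE)
  heat   <- Reduce(`+`, blobs)
  heat   <- (heat - mean(heat)) / sd(heat)   # normalize the map (NSS)
  # Sample the normalized map at the left-out participant's gaze location
  p <- subset(gaze, participant == left_out & frame == f)
  heat[round(p$y[1]), round(p$x[1])]
}

# Repeating nss_one_frame() over participants and frames, then z-scoring each
# participant's values within a video, yields the gaze similarity time series.
```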
Event Boundary Selection
We determined the normative event boundaries from participants’ segmentation responses. We started by removing double presses that were ≤300 ms apart. We then calculated the likelihood of perceiving an event boundary from the density of participants’ button presses over time (see also Sasmita & Swallow, 2022): we centered a 1-second Gaussian kernel on the frame number of each button press and averaged these probabilities across participants. We selected peaks in the resulting distribution if at least 20% of participants segmented within 1 second of the peak (see Figure 1D for density plot examples; Balancing a Checkbook = 14 boundaries; Planting Flowers = 13; Installing a Printer = 13; Setting Up a Game Console = 17). We found similar results using stricter criteria for determining normative event boundaries (see Supplemental Section 3, Figure S4). Age-related differences in segmentation agreement are reported in Smith et al. (2021, Table 1); young adults had significantly higher agreement than older adults.
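A minimal R sketch of this selection procedure is shown below. The data layout and the peak-screening helper are simplified assumptions rather than the exact published pipeline.

```r
# Hypothetical sketch of normative boundary selection. `presses` is assumed to
# be a data frame with columns: participant, time (button presses in seconds,
# with double presses <= 300 ms apart already removed).

boundary_likelihood <- function(presses, duration, step = 0.04) {
  t <- seq(0, duration, by = step)        # 25 Hz time grid
  # Per-participant density: a 1-s Gaussian kernel centered on each press
  dens <- sapply(split(presses$time, presses$participant), function(times)
    rowSums(sapply(times, function(b) dnorm(t, mean = b, sd = 1))))
  rowMeans(dens)                          # average density across participants
}

# Keep a candidate peak only if >= 20% of participants pressed within 1 s of it
keep_peak <- function(peak_time, presses, n_participants) {
  near <- subset(presses, abs(time - peak_time) <= 1)
  length(unique(near$participant)) / n_participants >= 0.20
}
```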
Perceptual Change Calculation
We calculated a measure of frame-to-frame perceptual change to account for the possibility that boundaries are moments of high perceptual change (Hard et al., 2011; Kosie & Baldwin, 2019). To do so, we calculated the 3D Euclidean distance between the RGB values of each pixel in each pair of consecutive frames (e.g., pixel i in frames N and N + 1) and averaged these distance scores across all pixels. We calculated perceptual change for every pair of consecutive frames in each video.
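As an illustration, here is a minimal R sketch of this calculation, assuming each frame is available as an h × w × 3 RGB array (e.g., as returned by png::readPNG):

```r
# Hypothetical sketch of frame-to-frame perceptual change. Frames are assumed
# to be h x w x 3 numeric RGB arrays (e.g., from png::readPNG).
perceptual_change <- function(frame_a, frame_b) {
  sq_diff <- (frame_a - frame_b)^2
  # 3D Euclidean distance per pixel across the R, G, B channels
  dist_per_pixel <- sqrt(sq_diff[, , 1] + sq_diff[, , 2] + sq_diff[, , 3])
  mean(dist_per_pixel)                    # average over all pixels
}

# Applied to every consecutive frame pair in a video:
# change <- mapply(perceptual_change, frames[-length(frames)], frames[-1])
```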
Analyses
We tested the fixed effects of interest using linear mixed effects models (LMMs). We ran all LMMs using the lmer function from the lme4 library (Bates et al., 2014). We determined the random effect structure of each LMM by fitting the ‘maximal model’ first (Barr et al., 2013) and then reducing the model by removing one random effect at a time (Bates et al., 2015). We compared LMMs using likelihood ratio tests and retained the more complex model when it differed significantly from the reduced model. Finally, we calculated Bayes factors (BF10) using the lmBF function from the BayesFactor library (Morey et al., 2015). We estimated BFs by computing the ratio of a model with the fixed effect of interest to a model without it, using default priors.1
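The skeleton below illustrates this workflow for the gaze similarity model reported in the Results; lmer, anova, and lmBF are the functions named above, but the variable names and the particular random terms shown are hypothetical.

```r
library(lme4)
library(BayesFactor)

# Hypothetical data frame `d` with columns: similarity, age_group,
# participant, video (categorical columns coded as factors)

# Maximal model, then a reduced model dropping one random term at a time
m_max <- lmer(similarity ~ age_group + (1 + age_group | video) + (1 | participant),
              data = d)
m_red <- lmer(similarity ~ age_group + (1 | video) + (1 | participant), data = d)
anova(m_red, m_max)   # likelihood ratio test; keep m_max only if it fits better

# Bayes factor for the fixed effect: model with vs. without age_group,
# using the package's default priors
bf_full <- lmBF(similarity ~ age_group + participant + video, data = d,
                whichRandom = c("participant", "video"))
bf_null <- lmBF(similarity ~ participant + video, data = d,
                whichRandom = c("participant", "video"))
bf_full / bf_null     # BF10 for the fixed effect of age group
```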
Results
First, we report gaze similarity averaged across the videos for older and young adults. Then, we report perceptual change and gaze similarity around event boundaries, statistically controlling for the impact that perceptual change had on gaze similarity at event boundaries. Finally, we report the analysis of pupil size around event boundaries.
The model of gaze similarity included the fixed effect of age group and the random intercepts of participant and video. Mean gaze similarity, averaged across the entire video, is shown for each participant in Figure 2A. Overall, young adults had significantly higher gaze similarity (M = 0.06, SE = 0.04) than older adults (M = −0.05, SE = 0.04), F(1, 54.27) = 4.22, p = .04, although the BF10 of 1.48 indicated only weak evidence that young adults looked at the same places at the same times more than older adults did. This difference was numerically smaller in the Balancing a Checkbook and Planting Flowers videos (Figure 2A). Thus, we re-ran the analysis treating video and the Video × Age Group interaction as fixed effects, but we found evidence supporting the null for the interaction, F(3, 158.64) = 0.51, p = .68, BF10 = 0.12. See Supplemental Materials Section 1, Table S1 for details.
Figure 2. Analysis of gaze similarity, perceptual change, and gaze similarity at event boundaries.
Note. Panel A: Mean gaze similarity for young and older adults in each video. Individual data points reflect each participant’s mean gaze similarity. Black dots represent the mean across participants. Panel B: Mean perceptual change before and after event boundaries for each video. Vertical gray lines at time point 0 represent event boundaries. Panel C: Mean gaze similarity, after controlling for perceptual change, relative to event boundaries for young and older adults. Mean gaze similarity around event boundaries for each video is shown in Supplemental Materials Section 1, Figure S3. Error bars correspond to ±1 SE from the estimated means. Panel D: Fixation heatmaps illustrating changes in gaze similarity at one event boundary in the Setting Up a Game Console video. Panel E: Mean pupil size, after controlling for perceptual change, relative to event boundaries for young and older adults.
We also explored whether gaze similarity predicted subsequent memory. We found a positive correlation between gaze similarity and memory, r = 0.20, p = .03, replicating Davis et al. (2020). See Supplemental Materials Section 3, Figure S6.
Next, we evaluated whether we could replicate prior work showing that perceptual change increases at event boundaries (Hard et al., 2011). Figure 2B depicts mean perceptual change within 13 one-second bins from −6 to +6 seconds around the event boundaries in each video. We submitted perceptual change to an LMM containing the fixed effect of time to the event boundary and the random effect of video. The main effect of time to the event boundary was significant, F(12, 36) = 2.39, p = .02, BF10 = 2.75. As shown in Figure 2B, perceptual change increased gradually, peaking just prior to or at the boundary in all but the Setting Up a Game Console video. Boundaries in that video may have involved changes in smaller details (e.g., submitting the Wi-Fi password on the TV screen) than boundaries in the other videos (e.g., picking up the planter).
Finally, we evaluated the extent to which gaze similarity changed around event boundaries. For this, we calculated the mean gaze similarity within each one-second bin from −6 to +6 seconds around each event boundary. We submitted these means to an LMM that included the main effects of age group, time to the event boundary, and perceptual change around event boundaries (centered at its mean), and all their interactions, as fixed effects, with the participant and video intercepts as random effects. We found strong evidence for a main effect of perceptual change, F(1, 2697.23) = 8.92, p = .003, BF10 = 17.32 × 10⁷, such that participants looked at the same places at the same times more as perceptual change increased (Mital et al., 2010). Importantly, as shown in Figure 2C, we found strong evidence for a main effect of time even after statistically controlling for the increased perceptual change around event boundaries, F(12, 2738.18) = 17.47, p < .001, BF10 = 1.38 × 10³⁴². Gaze similarity increased and peaked 1 second before the boundary, then decreased immediately after it. Because we observed these effects after controlling for perceptual change, the modulation of gaze similarity around the event boundary likely reflects event model updating. We also found that young adults (M = 0.11, SE = 0.07) had higher gaze similarity than older adults (M = −0.04, SE = 0.07) at event boundaries, F(1, 55.94) = 9.69, p = .002, BF10 = 6.04. No other effects were significant.
Lastly, we evaluated changes in pupil size at event boundaries. Before conducting these analyses, we z-scored pupil size within each video for each participant. We then averaged the values within the same one-second bins around the boundaries and submitted these averages to an LMM. The model included the same fixed and random effects as the model of gaze similarity. We found a main effect of time, F(12, 2800) = 2.33, p = .006, BF10 = 3.15. Pupil size moderately increased around boundaries (Figure 2E), similar to the pattern reported by Clewett et al. (2020). No other effects were significant.
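For concreteness, here is a minimal base-R sketch of this preprocessing; the column names are hypothetical.

```r
# Hypothetical sketch of pupil preprocessing: z-score within each
# participant x video, then average within 1-s bins from -6 to +6 s around
# each boundary. `pupil` is assumed to have columns: participant, video,
# pupil, time_to_boundary (seconds relative to the nearest boundary).
pupil$pupil_z <- ave(pupil$pupil, pupil$participant, pupil$video,
                     FUN = function(x) (x - mean(x, na.rm = TRUE)) /
                                        sd(x, na.rm = TRUE))
pupil$bin <- round(pupil$time_to_boundary)   # 1-s bins: -6, -5, ..., +6
binned <- aggregate(pupil_z ~ participant + video + bin, data = pupil,
                    FUN = mean)
```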
Discussion
Viewers spontaneously segment continuous activity into events, and older adults segment more idiosyncratically and remember less than young adults (Kurby & Zacks, 2011). We asked whether age-related differences in attention exist during event encoding and whether such differences in attention may be related to these effects. Indeed, the current results supported this hypothesis: older adults had lower gaze similarity than young adults at event boundaries. Such differences may result from older adults’ poorer attentional control during encoding (e.g., Hasher & Zacks, 1988). Further, we found that gaze similarity and pupil size peak just before event boundaries and drop off immediately after.
Although our findings are consistent with prior work showing age-related reductions in neural synchrony (Campbell et al., 2015) and segmentation agreement (Kurby & Zacks, 2011), they are inconsistent with work showing no age-related differences in eye movements during movie viewing (Davis et al., 2020). However, it is noteworthy that Davis et al. (2020) used a Hollywood-style film, and such films typically produce high attentional synchrony (Hutson et al., 2017). The filmmaking techniques used in their film may have driven eye-movement patterns and masked possible age-related differences. Our results show that differences emerge when such filmmaking techniques (e.g., cuts, foregrounding) are absent. Future work could evaluate how different filmmaking techniques influence gaze similarity across age groups.
Finding that gaze similarity and pupil size increased at event boundaries is important for theories of event cognition. These effects are all the more compelling because perceptual change (which increases at event boundaries) did not completely explain them. This critical result suggests that something besides low-level perceptual change guides attention around event boundaries. Event cognition theories, such as the Scene Perception and Event Comprehension Theory and Event Segmentation Theory, suggest that observers shift to create a new event model at event boundaries (Gernsbacher, 1990; Loschky et al., 2019), which produces more exploratory eye movements (Eisenberg & Zacks, 2016). Shifting also increases cognitive load (Swets & Kurby, 2016; Zacks, Speer, et al., 2009), especially for older adults (Bailey & Zacks, 2015). Specifically, at event boundaries people read text more slowly (Swets & Kurby, 2016; Zacks, Speer, et al., 2009), and they are less likely to notice edits made to a video or to detect changes to probes (Crundall et al., 2002; Huff et al., 2012; Newtson, 1973). Increased cognitive load affects eye movements (Stuyven et al., 2000) and pupil size, such that fixation durations and pupil size increase with load (Cronin et al., 2020; Loschky et al., 2014). Segmenting and laying the foundation for a new event model may thus increase cognitive load, causing pupil size to increase and viewers to look at similar locations at event boundaries. Future work could induce event model updating through different task instructions or cueing procedures to see whether such manipulations influence gaze similarity.
Conversely, perhaps the cause-and-effect relationship between event model updating and gaze similarity is reversed: rather than event model updating changing gaze similarity, gaze synchrony may produce normative event model updating. Perhaps those who attend to important information at boundaries are more likely to detect shifts between events and update their event models accordingly. Thus, the direction of the causal relationship is unknown. Future research could cue participants to fixate critical information at event boundaries and investigate how cueing influences gaze patterns and segmentation.
Public Significance Statement
It is commonly said that everyone looks at things differently. In this study, we found differences in where young and older adults looked while watching real-world events, especially at important moments when their understanding changed. In addition, we found that idiosyncrasies in the way some older adults looked at everyday events likely reflected idiosyncrasies in their understanding of those events.
Acknowledgments
We previously reported the results of this study as a poster at the 62nd Annual Meeting of the Psychonomic Society, Virtual Conference. This work was supported by two research grants from the National Institutes of Health: GM113109 and T32AG000030. The authors would like to thank Taylor Capko, Jameson Brehm, and Jennica Rogers for their help with participant recruitment and data collection. We would also like to thank Dr. John Hutson and Dr. Tim Smith for helping with the gaze similarity analyses. Finally, the authors would like to thank Dr. Jeffrey Zacks, Dr. Zachariah Reagh, and the members of the Dynamic Cognition and Complex Memory Labs for discussing this project with us.
Footnotes
All data, analysis scripts, and stimuli can be downloaded at https://osf.io/ztnw8/ (Smith et al., 2023).
For reference, BFs close to 1 provide inconclusive evidence, values between 2 and 3 provide weak evidence, values between 3 and 10 provide moderate evidence, and values greater than 10 provide strong evidence in favor of the alternative (Rouder et al., 2009).
We ran the analyses with and without the continuous covariate of perceptual change at event boundaries. The results were analogous; therefore, the fact that perceptual change decreased at event boundaries in the Game Console video likely had a negligible effect on gaze similarity.
References
- Bailey H, & Zacks JM (2015). Situation model updating in young and older adults: Global versus incremental mechanisms. Psychology and Aging, 30(2), 1–25.
- Barr DJ, Levy R, Scheepers C, & Tily HJ (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278.
- Bates D, Kliegl R, Vasishth S, & Baayen H (2015). Parsimonious mixed models. arXiv preprint arXiv:1506.04967.
- Bates D, Mächler M, Bolker B, & Walker S (2014). Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823.
- Campbell KL, Shafto MA, Wright P, Tsvetanov KA, Geerligs L, Cusack R, Tyler LK, Brayne C, Bullmore E, & Calder A (2015). Idiosyncratic responding during movie-watching predicted by age differences in attentional control. Neurobiology of Aging, 36(11), 3045–3055.
- Clewett D, Gasser C, & Davachi L (2020). Pupil-linked arousal signals track the temporal organization of events in memory. Nature Communications, 11(1), 4007.
- Cronin DA, Peacock CE, & Henderson JM (2020). Visual and verbal working memory loads interfere with scene-viewing. Attention, Perception, & Psychophysics, 82(6), 2814–2820.
- Crundall DE, Underwood G, & Chapman PR (2002). Attending to the peripheral world while driving. Applied Cognitive Psychology, 16(4), 459–475. 10.1002/acp.806
- Davis E, Chemnitz E, Collins TK, Geerligs L, & Campbell KL (2020). Looking the same, but remembering differently: Preserved eye-movement synchrony with age during movie-watching.
- Dorr M, Martinetz T, Gegenfurtner KR, & Barth E (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision, 10(10), Article 28. 10.1167/10.10.28
- Eisenberg ML, & Zacks JM (2016). Ambient and focal visual processing of naturalistic activity. Journal of Vision, 16(2), Article 5, 1–12. 10.1167/16.2.5
- Eisenberg ML, Zacks JM, & Flores S (2018). Dynamic prediction during perception of everyday events. Cognitive Research: Principles and Implications, 3(1), 1–12.
- Gernsbacher MA (1990). Language comprehension as structure building. Lawrence Erlbaum Associates.
- Hard B, Recchia G, & Tversky B (2011). The shape of action. Journal of Experimental Psychology: General, 140(4), 586.
- Hasher L, & Zacks RT (1988). Working memory, comprehension, and aging: A review and a new view. In Psychology of Learning and Motivation (Vol. 22, pp. 193–225). Elsevier.
- Hasson U, Furman O, Clark D, Dudai Y, & Davachi L (2008). Enhanced intersubject correlations during movie viewing correlate with successful episodic encoding. Neuron, 57(3), 452–462.
- Hasson U, Nir Y, Levy I, Fuhrmann G, & Malach R (2004). Intersubject synchronization of cortical activity during natural vision. Science, 303(5664), 1634–1640. 10.1126/science.1089506
- Huff M, Papenmeier F, & Zacks JM (2012). Visual target detection is impaired at event boundaries. Visual Cognition, 20(7), 848–864. 10.1080/13506285.2012.705359
- Hutson JP, Chandran P, Magliano JP, Smith TJ, & Loschky LC (2022). Narrative comprehension guides eye movements in the absence of motion. Cognitive Science, 46(5), e13131.
- Hutson JP, Smith TJ, Magliano JP, & Loschky LC (2017). What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film. Cognitive Research: Principles and Implications, 2(1), 46. 10.1186/s41235-017-0080-5
- Kahneman D, & Beatty J (1966). Pupil diameter and load on memory. Science, 154(3756), 1583–1585.
- Kosie JE, & Baldwin D (2019). Attentional profiles linked to event segmentation are robust to missing information. Cognitive Research: Principles and Implications, 4(1), 8.
- Kurby CA, & Zacks JM (2011). Age differences in the perception of hierarchical structure in events. Memory & Cognition, 39, 75–91.
- Kurby CA, & Zacks JM (2019). Age differences in the perception of goal structure in everyday activity. Psychology and Aging, 34(2), 187.
- Loschky LC, Larson A, Smith TJ, & Magliano JP (2019). The scene perception & event comprehension theory (SPECT) applied to visual narratives. Topics in Cognitive Science, 1–41.
- Loschky LC, Larson AM, Magliano JP, & Smith TJ (2015). What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension. PLoS ONE, 10(11), 1–23. 10.1371/journal.pone.0142474
- Loschky LC, Ringer RV, Johnson AP, Larson AM, Neider M, & Kramer AF (2014). Blur detection is unaffected by cognitive load. Visual Cognition, 22(3/4), 522–547. 10.1080/13506285.2014.884203
- Mital PK, Smith TJ, Hill RL, & Henderson JM (2010). Clustering of gaze during dynamic scene viewing is predicted by motion. Cognitive Computation, 3(1), 5–24. 10.1007/s12559-010-9074-z
- Morey RD, Rouder JN, & Jamil T (2015). Package ‘BayesFactor’. https://cran.r-project.org/web/packages/BayesFactor/
- Newtson D (1973). Attribution and the unit of perception of ongoing behavior. Journal of Personality and Social Psychology, 28(1), 28–38.
- Pettijohn KA, & Radvansky GA (2016). Walking through doorways causes forgetting: Environmental effects. Journal of Cognitive Psychology, 28(3), 329–340.
- Pitts BL, Smith ME, Newberry KM, & Bailey HR (2022). Semantic knowledge attenuates age-related differences in event segmentation and episodic memory. Memory & Cognition, 50(3), 586–600.
- Rouder JN, Speckman PL, Sun D, Morey RD, & Iverson G (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.
- Sasmita K, & Swallow KM (2022). Measuring event segmentation: An investigation into the stability of event boundary agreement across groups. Behavior Research Methods, 1–20.
- Smith ME, Loschky LC, & Bailey HR (2021). Knowledge guides attention to goal-relevant information in older adults. Cognitive Research: Principles and Implications, 6(1), 1–22.
- Smith ME, Loschky LC, & Bailey HR (2023, April 13). Eye movements reveal age-related differences in event model updating [Dataset]. 10.17605/OSF.IO/ZTNW8
- Smith ME, Newberry KM, & Bailey HR (2020). Differential effects of knowledge and aging on the encoding and retrieval of everyday activities. Cognition, 196, 104159.
- Speer NK, Zacks JM, & Reynolds JR (2007). Human brain activity time-locked to narrative event boundaries. Psychological Science, 18(5), 449–455. 10.1111/j.1467-9280.2007.01920.x
- Stuyven E, Van der Goten K, Vandierendonck A, Claeys K, & Crevits L (2000). The effect of cognitive load on saccadic eye movements. Acta Psychologica, 104(1), 69–85.
- Swets B, & Kurby CA (2016). Eye movements reveal the influence of event structure on reading behavior. Cognitive Science, 40(2), 466–480. 10.1111/cogs.12240
- Yeshurun Y, Swanson S, Simony E, Chen J, Lazaridi C, Honey CJ, & Hasson U (2017). Same story, different story: The neural representation of interpretive frameworks. Psychological Science, 28(3), 307–319.
- Zacks JM (2020). Event perception and memory. Annual Review of Psychology, 71(1), 165–191. 10.1146/annurev-psych-010419-051101
- Zacks JM, Braver T, Sheridan M, Donaldson D, Snyder A, Ollinger J, Buckner R, & Raichle M (2001). Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience, 4(6), 651–655. 10.1038/88486
- Zacks JM, Kumar S, Abrams R, & Mehta R (2009). Using movement and intentions to understand human activity. Cognition, 112(2), 201–216. 10.1016/j.cognition.2009.03.007
- Zacks JM, Kurby CA, Eisenberg ML, & Haroutunian N (2011). Prediction error associated with the perceptual segmentation of naturalistic events. Journal of Cognitive Neuroscience, 23(12), 4057–4066.
- Zacks JM, Speer N, & Reynolds J (2009). Segmentation in reading and film comprehension. Journal of Experimental Psychology: General, 138(2), 307–327. 10.1037/a0015305
- Zacks JM, Speer N, Swallow K, Braver T, & Reynolds J (2007). Event perception: A mind-brain perspective. Psychological Bulletin, 133(2), 273–293. 10.1037/0033-2909.133.2.273
- Zacks JM, Speer N, Vettel J, & Jacoby L (2006). Event understanding and memory in healthy aging and dementia of the Alzheimer type. Psychology and Aging, 21(3), 466.
- Zacks JM, Tversky B, & Iyer G (2001). Perceiving, remembering, and communicating structure in events. Journal of Experimental Psychology: General, 130(1), 29–58.