Author manuscript; available in PMC: 2020 Aug 1.
Published in final edited form as: Atten Percept Psychophys. 2019 Aug;81(6):1822–1835. doi: 10.3758/s13414-019-01719-2

Feature-based guidance of attention during postsaccadic selection

Andrew Hollingworth 1, Michi Matsukura 1,2
PMCID: PMC6677607  NIHMSID: NIHMS1526937  PMID: 30980343

Abstract

Current models of transsaccadic perception propose that, after a saccade, the saccade target object must be localized among objects near the landing position. Yet, the nature of the attentional mechanisms supporting this process is currently under debate. In the present study, we tested whether surface properties of the saccade target object automatically bias postsaccadic selection using a variant of the visual search task. Participants executed a saccade to a shape-singleton target in a circular array. During this primary saccade, the array sometimes rotated so that the eyes landed between the target and an adjacent distractor, requiring gaze correction. In addition, each object in the array had an incidental color value. On Switch trials, the target and adjacent distractor switched colors. The accuracy and latency of gaze correction to the target (measures that provide a direct index of target localization) were compared with those in a control condition in which no color switch occurred (No Switch trials). Gaze correction to the target was substantially impaired in the Switch condition. This result was obtained even when participants had substantial incentive to avoid encoding the color of the saccade target. In addition, similar effects were observed when the roles of the two feature dimensions (color and shape) were reversed. The results indicate that saccade target features are automatically encoded before a saccade, are retained in visual working memory (VWM) across the saccade, and instantiate a feature-based selection operation when the eyes land, biasing attention toward objects that match target features.

Keywords: Eye Movements: Mechanisms, eye movements and visual attention, visual working memory


Each saccadic eye movement spatially translates the pattern of activity on the retina and generates a brief disruption in perceptual input. These events introduce a fundamental correspondence problem: How does the visual system establish the mapping between objects visible before and after the saccade to generate the experience of perceptual continuity? Solving the problem of transsaccadic continuity has been central to vision science for many decades and is still a major area of research in visual psychophysics and neuroscience (for recent reviews, see Aagten-Murphy & Bays, in press; Herwig, 2015; Higgins & Rayner, 2015; Marino & Mazer, 2016; Rolfs, 2015; Van der Stigchel & Hollingworth, 2018).

Early, image-based theories held that transsaccadic continuity is achieved by the global translation of an internal visual representation of the scene to anticipate the shift in sensory input, allowing for direct, sensory integration of pre- and post-saccadic visual representations (Jonides, Irwin, & Yantis, 1982; McConkie & Rayner, 1976). However, subsequent work demonstrated that global, image-based integration does not occur across saccades (Bridgeman & Mayer, 1983; Irwin, 1991; Irwin, Yantis, & Jonides, 1983; O’Regan & Lévy-Schoen, 1983). Instead, the perceptual information retained across a saccade is severely limited (Irwin, 1991) and strongly biased toward information near the saccade target location (Currie, McConkie, Carlson-Radvansky, & Irwin, 2000; Irwin, 1992a; McConkie & Currie, 1996). Thus, many current theories of transsaccadic continuity stress a relatively local solution, based on the mapping of one or few discrete object representations across the saccade and with this information limited, primarily, to objects near the saccade target location (Cavanagh, Hunt, Afraz, & Rolfs, 2010; Currie et al., 2000; Deubel, 2004; Deubel, Bridgeman, & Schneider, 1998; Irwin, McConkie, Carlson-Radvansky, & Currie, 1994).

If transsaccadic continuity is generated by a relatively local solution, what is the nature of the saccade target information functional in this process? Clearly, location information is important to the mapping operation, particularly as the saccade target should appear at or near the point of regard after the saccade. Perceptual discontinuity arises if the target is displaced by more than approximately one-third of the distance of the saccade (Bridgeman, Hendry, & Stark, 1975). Moreover, the relative positions of the target and nearby landmark objects have been shown to contribute to the correspondence operation (Deubel, 2004; Gysen, Verfaillie, & De Graef, 2002).

In addition to position information, surface feature properties of the saccade target (e.g., color, shape) are used to establish correspondence across saccades (Demeyer, De Graef, Wagemans, & Verfaillie, 2010; Herwig & Schneider, 2014; Hollingworth, Richard, & Luck, 2008; Richard, Luck, & Hollingworth, 2008; Tas, Moore, & Hollingworth, 2012). Hollingworth and colleagues have argued that the presaccadic shift of spatial attention to the target (Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler, Anderson, Dosher, & Blaser, 1995) leads to the selective encoding of target surface features (T. Moore & Fallah, 2004; Schut, Van der Stoep, Postma, & Van der Stigchel, 2017; Shao et al., 2010; Tas, Luck, & Hollingworth, 2016). This target representation is maintained transsaccadically in visual working memory (VWM). When the eyes land, the VWM representation acts as a template for localizing the target and confirming that the eyes have landed on the appropriate object (Irwin et al., 1994). In the common situation that the eyes fail to land on the target, the VWM representation acts as a template for visual search, biasing attention in a feature-based manner, so that the original object can be selected among other nearby objects and the appropriate corrective saccade can be generated efficiently (Hollingworth et al., 2008).

This last claim of postsaccadic, feature-based guidance was supported by evidence from a series of studies examining gaze correction processes (Hollingworth & Luck, 2009; Hollingworth et al., 2008; see also Schut, Fabius, Van der Stoep, & Van der Stigchel, 2017). The participants executed a primary saccade to a single target object (cued by rapid expansion and contraction of that object) in a circular array of colored objects. On some trials, the array rotated during the primary saccade by one half of the angular distance between adjacent objects, causing the eyes to land between the original target and a distractor. Because the array was spatially regular, and because the rotation was masked by saccadic suppression, the problem of generating the appropriate corrective saccade could be solved only by using stored information about the color of the target. Feature-driven corrective saccades in this paradigm were highly accurate, rapid, and automatic (Hollingworth et al., 2008). Moreover, Hollingworth and Luck (2009) found that an incidental color held in VWM for a concurrent task interacted with feature-based gaze correction, implicating VWM in guiding target localization. Hollingworth and colleagues concluded that the events occurring after a saccade can be conceptualized as an automatized visual search operation, in which transsaccadic VWM for the saccade target features biases the competition between candidate objects for selection.

Yet, this proposal of postsaccadic, feature-based guidance has been challenged recently. Two studies found that the shift of spatial attention to the target before the saccade failed to generate feature-based biases; there was no presaccadic perceptual enhancement at other locations sharing target features (Jonikaitis & Theeuwes, 2013; White, Rolfs, & Carrasco, 2013; but see Born, Ansorge, & Kerzel, 2012). More directly relevant to transsaccadic processes, Eymond, Cavanagh, and Collins (2016) tested feature-based selection after the saccade using a combined eye movement and singleton search task. The participants executed a saccade to a colored object appearing in isolation. During the saccade, the target was replaced by a search array consisting of a singleton color target among uniform colored distractors, randomly arrayed around the expected saccade landing position. Search reaction times (RTs) were no shorter when the search target matched the color of the saccade target than when it did not. Eymond et al. argued that the process of establishing object correspondence across saccades (which they termed a “landmark search operation”) does not depend on feature-based attention.

However, a limitation of the Eymond et al. (2016) study is that the results may have been caused by a failure to generalize feature representations across two substantially different tasks (i.e., simple saccadic orienting and full-array visual search), rather than reflecting the mechanisms inherent to gaze control. Substantial postsaccadic changes in a display can provide unambiguous evidence that the scene has changed, supplanting the processes designed to establish transsaccadic correspondence (Deubel et al., 1998; Deubel, Schneider, & Bridgeman, 1996; Poth & Schneider, 2016; Tas, Moore, et al., 2012). In Eymond et al., the replacement of a single saccade target with a full search array is therefore likely to have disrupted the correspondence operation, minimizing the need to consult encoded feature information. In addition, the orienting behavior used to test feature-based guidance after the saccade (i.e., search through a new array) was not directly related to the process of localizing the original saccade target. Thus, their null effect cannot be taken as unambiguous evidence against feature-based correspondence operations or feature-based guidance of attention to the original saccade target after the saccade.

It is important to acknowledge that the gaze correction results of Hollingworth et al. (2008) and Hollingworth and Luck (2009) are also subject to an alternative interpretation. In these experiments, which support feature-based guidance, array rotations occurred on a substantial proportion of trials. As discussed by Eymond et al. (2016), if the participants anticipated the need to conduct color-based visual search after the saccade, then it is possible that they strategically encoded the saccade target color as a general template to support this search. That is, efficient gaze correction could have reflected the general goals of search rather than the specific processes involved in programming and executing the primary saccade. The Eymond et al. method eliminated this possibility by making the color of the saccade target unpredictive of the search target color in the upcoming search task, so that there would be no reason to strategically configure a general template for search based on the saccade target color.

In the present study, we implemented a strong test of this issue by combining key components of the Hollingworth et al. (2008) and Eymond et al. (2016) approaches. The basic method was built on the gaze correction technique of Hollingworth et al. (see Figure 1). This paradigm was chosen for three reasons. First, the gaze correction method probes postsaccadic guidance within a single eye movement task, rather than depending on a dual-task design (saccade generation, then search through a second array, as in Eymond et al.). Second, unlike the major transsaccadic stimulus change introduced by Eymond et al., the rotation of the array introduces a change that preserves the structure of the display and is unlikely to disrupt correspondence operations. In this method, gaze correction on array rotation trials produces functionally equivalent results to gaze correction for naturally occurring saccade errors without array rotation (Hollingworth et al., 2008), both for the accuracy and latency of corrective saccades. Finally, the gaze correction method provides a direct measure of the behavior of interest: guidance of attention to localize the original saccade target object after the saccade.

Figure 1. Design and sequence of events in a trial of Experiments 1 and 2. Participants executed a saccade to a singleton disk among squares. Each object had an incidental color that varied randomly on a trial-by-trial basis. On a subset of trials, the array rotated during the saccade so that the eyes tended to land between the target and an adjacent distractor, requiring gaze correction. On No-switch trials, the target and adjacent distractor retained their original colors. On Switch trials, the target and adjacent distractor swapped colors during the primary saccade.

The basic method involved changing an incidental surface feature of the saccade target object during the primary saccade (Figure 1). Participants fixated centrally and executed a saccade to a singleton shape target (disk) among distractors (squares) in a circular array. Each object had an incidental color, irrelevant to target selection. During the saccade to the target, the array rotated on a subset of trials. In addition, the colors of the target (disk) and adjacent distractor (square) either remained the same (No-switch condition) or switched (Switch condition). In the latter condition, when the eyes landed, the target still retained the target-defining shape feature, and this feature continued to be a singleton within the display. However, the target changed on the incidental feature dimension; the color associated with the target was now associated with the distractor, and vice versa. We expected attention to be biased, postsaccadically, by the encoded surface features of the target, leading to impaired gaze correction on Switch trials compared with No-switch trials.

In addition, we constructed the experiments so that color was not just unpredictive of the goal of the corrective saccade (Eymond et al., 2016); it was antipredictive, associated with the distractor object on the large majority of trials. Specifically, we included a much larger proportion of Switch trials than No-switch trials, providing incentive to avoid encoding the color of the target and, if encoded, to avoid using that color to guide attention after the saccade. Experiments 1A and 1B implemented this basic design. To provide an even stronger test, in Experiment 2 we reduced the proportion of rotation trials (from 50% to 15.6%), ensuring that the effects were not driven by the expectation of array rotation on a relatively large proportion of trials. Finally, in Experiment 3, we reversed the roles of color and shape: participants executed a primary saccade to a color singleton, and the switch manipulation involved the shapes of the target and adjacent distractor. To preview the results, in all three experiments significant gaze correction interference was observed when the incidental surface feature of the saccade target was associated, postsaccadically, with the adjacent distractor. These results indicate that encoded surface features of the saccade target automatically guide attention when the eyes land.

Experiments 1A and 1B

In Experiment 1A, 75% of trials were Switch and 25% No-switch. In Experiment 1B, all trials were Switch trials, maximizing the disincentive to encode target color. For rotation trials (50% of all trials), gaze correction accuracy (the proportion of trials on which the eyes were directed first to the appropriate target object) and gaze correction latency (saccade latency when a single corrective saccade brought the eyes to the target) were compared between Switch and No-switch trials. Feature-based guidance from a representation of the incidental color of the saccade target should generate interference with selection of the shape-singleton target as the goal of the corrective saccade in the Switch condition, decreasing correction accuracy and increasing correction latency.

Method

Participants.

Participants in all experiments were drawn from the University of Iowa community and were between the ages of 18 and 30. They either received course credit or were paid for participation. All procedures were approved by the University of Iowa Institutional Review Board. To ensure a sufficient sample size, we examined the corresponding correction accuracy effect in Hollingworth and Luck (2009, Experiment 1, N = 16) by comparing one condition in which the array rotated without a color change (No Change condition) and another in which the array rotated with a color change so that the distractor color matched a color maintained in VWM (Related condition), F(1,15) = 19.5, p < .001, ηp² = .565. In Hollingworth and Luck, the color match was between the distractor and a color maintained in VWM for a concurrent, secondary task. In the present experiments, the color match was between the distractor and a feature of the saccade target itself. Thus, we expected the present effects to be as large or larger than the effect observed in Hollingworth and Luck. Power analysis (using G*Power; Faul, Erdfelder, Lang, & Buchner, 2007) of the Hollingworth and Luck experiment indicated that a minimum sample of 9 would be sufficient to achieve 80% power. We established a base N of 10 for the present experiments. Ten participants (all female) completed Experiment 1A, and ten different participants (4 female) completed Experiment 1B.
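
As a check on this calculation, the sketch below converts the reported effect from Hollingworth and Luck (2009) into a standardized within-subject effect size and solves for the required sample size. It is an illustration only: it assumes Python with the statsmodels package rather than the G*Power program actually used, and the F-to-dz conversion (dz = sqrt(F/N) for a 1-df within-subject contrast) stands in for whatever settings were entered into G*Power.

```python
# Minimal sketch of the sample-size calculation, assuming Python with
# statsmodels rather than the G*Power program used in the paper.
from math import sqrt

from statsmodels.stats.power import TTestPower

# Reported within-subject effect from Hollingworth & Luck (2009), Exp. 1:
# F(1, 15) = 19.5 with N = 16.  For a 1-df within-subject contrast,
# F = t^2 and Cohen's dz = t / sqrt(N).
F, N = 19.5, 16
dz = sqrt(F) / sqrt(N)                          # ~1.10

# Number of participants needed for 80% power at alpha = .05
# (paired/within-subject t test, two-sided).
n_required = TTestPower().solve_power(effect_size=dz, nobs=None,
                                      alpha=0.05, power=0.80,
                                      alternative='two-sided')
print(round(dz, 2), n_required)   # dz ~1.10; N just under 9, i.e., minimum of 9
```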

Stimuli.

For all stimulus images, the background was set to a mid-level gray. The initial array consisted of ten objects evenly spaced around a virtual circle (radius 4.9°). The array contained nine squares (1.5° × 1.5°) and one disk (1.8° diameter). The center-to-center distance between adjacent objects was 3.0°. The target disk was equally likely to appear at each of the ten array locations. The entire array was subject to a random angular offset on each trial (between 0 and 35°) to vary the absolute locations of the objects.

The color of each object was chosen randomly from a set of 11 highly discriminable colors, reported using the 1931 CIE color coordinate system: red (x = .65, y = .33, 16.9 cd/m²), blue (x = .15, y = .08, 10.4 cd/m²), green (x = .31, y = .60, 10.5 cd/m²), yellow (x = .43, y = .51, 80.3 cd/m²), magenta (x = .30, y = .15, 29.0 cd/m²), black (<.001 cd/m²), white (81.6 cd/m²), brown (x = .46, y = .42, 10.1 cd/m²), pink (x = .41, y = .31, 36.3 cd/m²), orange (x = .56, y = .40, 27.9 cd/m²), and aqua (x = .22, y = .31, 72.0 cd/m²). There were two constraints on color selection. First, the colors of the target disk and the distractor square (i.e., the square that could swap color with the target during the eye movement) were unique within the display. This constraint ensured that color manipulations between the target and distractor were not complicated by the presence of a matching color among the other array objects. Second, for the remaining objects, color repetitions were possible, but they had to be separated by at least two objects.

On rotation trials (50% of all trials), during the saccade to the target, the entire array was rotated 18° clockwise on half the trials and 18° counterclockwise on the other half (i.e., one half of the angular distance between array objects). When the array rotated, the target and distractor either retained their original colors (No-switch condition) or the target and distractor exchanged colors (Switch condition).

For the no-rotation trials (50% of all trials), the positions of the array objects did not change across the saccade. In the Switch condition, there were two distractor squares flanking the target, and one of these was chosen to swap color with the target (the clockwise distractor on half of these trials and the counterclockwise distractor on the other half). On No-switch trials, the colors of the array objects did not change.
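
The display logic described in this section can be summarized in a short sketch. The code below is illustrative only (the experiment was run in E-Prime, not Python): it uses the parameters given above (ten objects on a 4.9° virtual circle, a 0–35° random offset, 18° intrasaccadic rotations, and the two color constraints), but the function names and the rejection-sampling approach are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of display generation for Experiment 1 (not the authors' code).
import math
import random

COLORS = ["red", "blue", "green", "yellow", "magenta", "black",
          "white", "brown", "pink", "orange", "aqua"]
N_OBJECTS = 10
RADIUS_DEG = 4.9              # radius of the virtual circle
STEP_DEG = 360 / N_OBJECTS    # 36 deg between adjacent objects

def make_array():
    """Return object angles (deg), object colors, and target/distractor indices."""
    offset = random.uniform(0, 35)                       # random angular offset
    angles = [(offset + i * STEP_DEG) % 360 for i in range(N_OBJECTS)]
    target = random.randrange(N_OBJECTS)                 # location of the singleton disk
    distractor = (target + random.choice([-1, 1])) % N_OBJECTS  # square that may swap color

    while True:  # rejection sampling until both color constraints are satisfied
        colors = [random.choice(COLORS) for _ in range(N_OBJECTS)]
        # Constraint 1: target and swap-distractor colors are unique in the display.
        unique_ok = all(colors.count(colors[i]) == 1 for i in (target, distractor))
        # Constraint 2: any repeated color is separated by at least two objects.
        spacing_ok = all(colors[i] != colors[(i + d) % N_OBJECTS]
                         for i in range(N_OBJECTS) for d in (1, 2))
        if unique_ok and spacing_ok:
            return angles, colors, target, distractor

def apply_intrasaccadic_change(angles, colors, target, distractor, rotate, switch):
    """Rotate the array by half the inter-object spacing and/or swap the two colors."""
    if rotate:
        direction = random.choice([-1, 1])               # clockwise or counterclockwise
        angles = [(a + direction * STEP_DEG / 2) % 360 for a in angles]
    if switch:
        colors = list(colors)
        colors[target], colors[distractor] = colors[distractor], colors[target]
    return angles, colors

def to_xy(angle_deg, radius=RADIUS_DEG):
    """Convert a polar position to Cartesian degrees of visual angle from fixation."""
    rad = math.radians(angle_deg)
    return radius * math.cos(rad), radius * math.sin(rad)
```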

Apparatus.

Stimuli were displayed on a 17-in CRT monitor with a 120 Hz refresh rate. The right eye was monitored by an SR Research Eyelink 1000 eyetracker sampling at 1000 Hz. A chin and forehead rest was used to maintain a constant viewing distance of 70 cm and to minimize head movement. The experiment was controlled by E-prime software (Schneider, Eschmann, & Zuccolotto, 2002).

Procedure.

Each trial was initiated by the experimenter. The central fixation cross was then displayed for 500 ms, followed by the circular array. The participants were instructed to generate an eye movement to the target disk as quickly as possible. They were informed about the possibility of array rotations and color changes, but they were told that the task remained the same under these circumstances: they were to fixate the disk as quickly as possible. When the target disk was successfully fixated, it was outlined by a green box for 400 ms to indicate completion of the trial.

Array rotation during the primary saccade was implemented using a boundary technique. After array onset, the computer monitored for an eye position sample beyond 1.3° from the central fixation point. When such a sample was detected, the post-saccade image was written to the screen. Pilot testing ensured that screen changes were completed before the beginning of the next fixation (Hollingworth et al., 2008). The direction of rotation could not be perceived directly during the saccade itself, because of visual suppression during the saccade and masking generated by the post-saccade perceptual input (for a review, see Matin, 1974).
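
The boundary logic can be expressed schematically as follows. This is a minimal sketch, not the authors' E-Prime/EyeLink code: get_gaze_sample and show_image are hypothetical placeholder functions standing in for whatever tracker-polling and display calls a given toolkit provides.

```python
# Schematic of the gaze-contingent boundary technique.  `get_gaze_sample` and
# `show_image` are hypothetical placeholders, not a real eyetracker/display API.
import math
import time

BOUNDARY_DEG = 1.3    # distance from fixation that triggers the display change

def run_boundary_change(get_gaze_sample, show_image,
                        presaccade_image, postsaccade_image,
                        fixation_xy=(0.0, 0.0), timeout_s=2.0):
    """Show the pre-saccade array, then swap to the post-saccade array as soon
    as a gaze sample crosses the 1.3 deg boundary around central fixation."""
    show_image(presaccade_image)
    fx, fy = fixation_xy
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout_s:
        gx, gy = get_gaze_sample()                 # degrees of visual angle
        if math.hypot(gx - fx, gy - fy) > BOUNDARY_DEG:
            # The eyes have left fixation: write the changed display so that it
            # is complete before the saccade ends (verified here by pilot timing).
            show_image(postsaccade_image)
            return True
    return False   # no saccade detected within the timeout
```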

After receiving instructions and initial calibration, the participants first completed 12 practice trials, drawn randomly from the full experiment design. The practice session was followed by an experiment session of 320 trials: 160 no-rotation trials and 160 rotation trials. In Experiment 1A, 75% of trials within each rotation condition were Switch, and 25% were No-switch. In Experiment 1B, all trials were Switch trials. Trials from the aforementioned conditions were randomly intermixed.

Data Analysis.

Eye-tracking data analysis was conducted offline. A combined velocity (30°/s) and acceleration (8000°/s²) threshold was used to define saccades. For the rotation condition, trials were included only if the primary saccade landed within a particular region of the display, illustrated in Figure 2. This region was a 60° segment of an annulus surrounding central fixation, with an inner radius of 2.04° and an outer radius of 6.92°. The segment was centered at the original location of the target disk. In addition, this region excluded landing positions within 1.14° of the center of the target or distractor objects. Thus, we limited the analyses to those trials on which the primary saccade was correctly directed to the vicinity of the target disk without landing on or very close to the post-rotation positions of the target disk or distractor square (see Figure 3, illustrating the primary saccade landing positions of included and excluded trials). In addition, trials were eliminated from the rotation-condition analysis if the participant was not fixating within 1.14° of the central cross when the array appeared or if the latency of the primary saccade was longer than 800 ms or shorter than 90 ms. The majority of eliminated trials were those on which the primary saccade landed on an object, rather than between the target and distractor, reflecting the fact that saccades are often inaccurate. A total of 32.9% and 35.6% of the rotation trials were eliminated from Experiments 1A and 1B, respectively. Although the proportions of eliminated trials were quite large, it is important to note that all the manipulations in the present experiments occurred after the primary saccade had been launched, so there could have been no systematic effect of independent variables on the proportion of eliminated trials. The remaining trials were then analyzed with respect to scoring regions defined around the target and distractor objects (2.28° diameter, see Figure 2), allowing us to calculate correction accuracy (the proportion of trials on which gaze was directed first to the target region after the primary saccade) and correction latency (the latency of the corrective saccade when a single corrective saccade brought the eyes to the target region).
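
For concreteness, the trial-inclusion and scoring criteria described above can be expressed as a short sketch. It assumes positions expressed in degrees of visual angle relative to central fixation; the function names and data layout are illustrative, not the authors' analysis code.

```python
# Sketch of the trial-inclusion and scoring criteria described above.
# Positions are in degrees of visual angle relative to central fixation.
import math

INNER_R, OUTER_R = 2.04, 6.92     # annulus bounds for the landing region
SEGMENT_HALF_DEG = 30.0           # 60 deg segment centered on the original target
EXCLUSION_R = 1.14                # too close to an object center
SCORING_R = 2.28 / 2              # scoring regions around target/distractor

def angular_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def include_trial(landing_xy, original_target_angle_deg, target_xy, distractor_xy):
    """Apply the inclusion criteria to the primary-saccade landing position."""
    x, y = landing_xy
    ecc = math.hypot(x, y)
    direction = math.degrees(math.atan2(y, x)) % 360
    in_annulus = INNER_R <= ecc <= OUTER_R
    in_segment = angular_diff(direction, original_target_angle_deg) <= SEGMENT_HALF_DEG
    near_object = any(math.hypot(x - ox, y - oy) < EXCLUSION_R
                      for ox, oy in (target_xy, distractor_xy))
    return in_annulus and in_segment and not near_object

def first_saccade_correct(endpoint_xy, target_xy, distractor_xy):
    """Score whether the first post-primary saccade landed in the target region."""
    def dist_to(obj):
        return math.hypot(endpoint_xy[0] - obj[0], endpoint_xy[1] - obj[1])
    if dist_to(target_xy) <= SCORING_R:
        return True        # counts toward correction accuracy
    if dist_to(distractor_xy) <= SCORING_R:
        return False       # gaze captured by the distractor
    return None            # landed in neither scoring region
```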

Figure 2. Illustration of the primary saccade landing region for inclusion in the analyses of rotation trials. The trial was included if the primary saccade landed within the shaded region, defined relative to the post-rotation display. Note that on this example trial, the array had rotated clockwise during the saccade so that the eyes tended to land between the target (disk) and the adjacent, counterclockwise distractor (square).

Figure 3. Landing position plots for rotation trials of Experiments 1A, 1B, 2, and 3. Each point indicates the landing position of the first saccade on each trial (for all participants), normalized for depiction as a trial on which the target appeared at the 12 o'clock position before clockwise rotation. Green dots indicate trials in which the landing position was within the acceptable region (Figure 2) and was included in the analysis. Black dots indicate trials that were excluded from the analysis.

Execution of the primary saccade to the target was highly efficient. Mean latency (timed from the onset of the array) was 225 ms in Experiment 1A and 242 ms in Experiment 1B. On rotation trials, the average landing position of this primary saccade was midway between the target and distractor, slightly short of the target eccentricity (see Figure 3). For Experiment 1A, mean landing position was 1.72° from the center of both the target and distractor. In Experiment 1B, mean landing position was 1.72° from the center of the target and 1.71° from the center of the distractor.

No-rotation trials were used as filler, so that an array rotation did not occur on every trial. Although the inclusion of color-switch trials in the no-rotation condition could, in principle, have allowed us to examine the correction of naturally occurring gaze errors, there were too few error trials for such an analysis; in the absence of rotation, the majority of saccades landed on or near the target.

Results

The rotation trials were of central interest for examining postsaccadic, feature-based attention controlled by a stored representation of the saccade target. Figure 4 shows the gaze correction accuracy and latency results in the rotation condition as a function of color switch.

Figure 4. Experiment 1A and 1B results. A) Mean gaze correction accuracy. B) Mean gaze correction latency. C) Distributions of correction latency. For Experiment 1A, error bars are condition-specific, within-subject 95% confidence intervals (Morey, 2008). For Experiment 1B, error bars are standard errors of the means.

Gaze correction accuracy.

Gaze correction accuracy is the proportion of trials on which the eyes were directed first to the target region after the primary saccade (Figure 4A). For Experiment 1A, gaze correction accuracy was perfect in the No-switch condition: on every trial, for every participant, gaze was directed first to the target object. Mean accuracy was reliably lower in the Switch condition (M = .864, SD = .090), F(1,9) = 23.1, p < .001, ηp² = .720. That is, on 13.6% of Switch trials, gaze was corrected first to the square distractor that had the original color of the saccade target rather than to the singleton target disk that had the original color of the distractor.

For Experiment 1B, all trials were Switch trials. Mean gaze correction accuracy was .947 (SD = .033). We compared this correction accuracy with that in the No-switch and Switch conditions of Experiment 1A using a nonparametric approach (due to violations of normality in the Experiment 1A data). Mann-Whitney tests indicated that gaze correction accuracy in Experiment 1B was lower than accuracy for No-switch trials of Experiment 1A [U = 0, Z = −4.04, p < .001] and was higher than accuracy for Switch trials of Experiment 1A [U = 14, Z = −2.72, p = .007]. The latter difference suggests that the larger proportion of Switch trials may have reduced interference from a color switch, potentially indicating some degree of control over the encoding of the task-irrelevant color or over selection of the corrective saccade target. However, the increased accuracy was achieved in the context of an increase in correction latency (Figure 4B), indicating some degree of speed-accuracy tradeoff.
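
For readers less familiar with this test, the between-experiment comparison can be run as in the sketch below; it assumes Python with SciPy, and the per-participant accuracy arrays are hypothetical placeholders rather than the actual data (which produced U = 0 and U = 14, as reported above).

```python
# Between-experiment comparison of correction accuracy (Mann-Whitney U).
# The arrays below are hypothetical placeholders, not the actual data.
import numpy as np
from scipy.stats import mannwhitneyu

acc_1a_switch = np.array([0.90, 0.85, 0.74, 0.95, 0.88,
                          0.80, 0.92, 0.86, 0.97, 0.77])   # Exp. 1A, Switch trials
acc_1b_switch = np.array([0.93, 0.96, 0.91, 0.99, 0.94,
                          0.97, 0.92, 0.95, 0.98, 0.92])   # Exp. 1B (all Switch)

u_stat, p_value = mannwhitneyu(acc_1b_switch, acc_1a_switch,
                               alternative='two-sided')
print(u_stat, p_value)
```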

Gaze correction latency.

Gaze correction latency is the duration of the fixation before the corrective saccade is initiated, when only one corrective saccade is required to fixate the target. Outlier latencies above 700 ms or below 90 ms were eliminated from the analysis (1.2% in Experiment 1A; 2.6% in Experiment 1B), which did not influence the pattern of results in either experiment.

Note that mean corrective saccade latencies in the current paradigm typically fall within the range of 190–250 ms, depending on several factors (Hollingworth et al., 2008). This may be surprising for readers who are familiar with corrective saccades from the double-step paradigm (Becker & Jürgens, 1979), where latencies can be extremely short. However, such short latencies are generated in the double-step paradigm because the shifted target is visible before the primary saccade is launched, and the two saccades can be programmed in parallel. This circumstance is atypical. Corrective saccade latencies in other paradigms—including the correction of errors to single, static targets—tend to be quite similar to those of primary saccades (for a review, see Becker, 1991).

Figure 4B shows mean correction latency as a function of color switch for Experiments 1A and 1B. In the No-switch condition of Experiment 1A, mean correction latency was 222 ms (SD = 32.7 ms). Correction latency significantly increased on Switch trials of Experiment 1A, F(1,9) = 26.7, p < .001, ηp² = .748, with mean latency of 280 ms (SD = 42.8 ms). As illustrated in Figure 4C, the mean latency increase in the Switch condition was driven by a paucity of rapid correction latencies in the range of 130–220 ms and a corresponding increase in the proportion of longer correction latencies in the range of 300–500 ms, likely reflecting competition between target and distractor for selection.

The correction latency data from Experiment 1B were also consistent with the pattern predicted by interference from the task-irrelevant color. As is evident in the distributions plotted in Figure 4C, accurate corrections were no more efficient on Switch trials of Experiment 1B (M = 304 ms, SD = 45.4 ms) than on Switch trials of Experiment 1A, F(1,18) = 1.39, p = .254, η² = .020. Correction latency in Experiment 1B was reliably longer than latency for No-switch trials of Experiment 1A, F(1,18) = 21.2, p < .001, η² = .514, indicating competition from the feature-matching distractor.

Discussion

Both the accuracy and the latency of gaze correction were impaired when an incidental feature of the saccade target (color) changed during the saccade so that it was associated with an adjacent distractor. The participants had substantial disincentive to encode the target color. When the eyes landed, the original target color was associated with the distractor on 75% (Experiment 1A) or 100% (Experiment 1B) of trials. The task could have been solved optimally by consulting only shape information, as the target was always a shape singleton, both before and after the saccade. Thus, the participants should not have strategically encoded color to support gaze correction. Capture by the color-matching distractor therefore suggests that the elementary processes involved in computing and executing the primary saccade led to the encoding of all of the target's features. Following the saccade, this feature information automatically guided selection among objects near the landing position.

Experiment 2

In Experiment 1, there was a clear disincentive to encode the saccade target color. However, the relatively large proportion of rotation trials (50%) may have generated a perceived demand to encode target properties in general, so as to localize a shifted target. That is, the effects may be specific to conditions under which participants expect frequent array rotations, rather than reflecting mechanisms functional under normative conditions. Thus, in Experiment 2, we substantially reduced the proportion of rotation trials to 15.6%. As in Experiment 1A, the colors of the target and an adjacent distractor switched on 75% of trials. If the elementary processes of saccade programming and execution involve the automatic encoding of the saccade target features into VWM (Schut, Van der Stoep, et al., 2017; Shao et al., 2010; Tas et al., 2016), and if these features bias selection after the saccade, then we should nevertheless observe the same type of interference on Switch trials as observed in Experiment 1.

Method

Participants.

Ten new participants (9 female) completed Experiment 2.

Stimuli, Procedure, and Apparatus.

The basic method was identical to Experiment 1A, except in the following respects. Participants completed 640 trials in the experiment session: 75% of these (480) were Switch trials and 25% (160) were No-switch trials. In addition, 15.6% of all trials (100) were rotation trials, evenly divided between Switch and No-switch. Thus, the full breakdown of trial count was as follows: No-rotation / Switch = 430, No-rotation / No-switch = 110, Rotation / Switch = 50, and Rotation / No-switch = 50. This design allowed a sufficient number of trials in the Rotation / No-switch condition while keeping the overall percentage of Switch trials at 75%. Participants were not informed about the possibility of array rotations.

Stimuli were presented on a 100-Hz LCD monitor. All other aspects of the stimuli and apparatus were the same as in Experiment 1A.

Data Analysis.

A total of 42.3% of the rotation trials were eliminated from the analysis for the reasons discussed in Experiment 1. As in the previous experiments, this was due primarily to relatively inaccurate primary saccades that did not land within the acceptable region (see Figure 3). The mean latency of primary saccades was 219 ms. Mean landing position on rotation trials was 1.70° from the center of the target and 1.68° from the center of the distractor.

Results

Figure 5 shows the gaze correction accuracy and latency results as a function of color switch for rotation trials of Experiment 2.

Figure 5. Experiment 2 results. A) Mean gaze correction accuracy. B) Mean gaze correction latency. C) Distributions of correction latency. Error bars are condition-specific, within-subject 95% confidence intervals (Morey, 2008).

Gaze correction accuracy.

Mean gaze correction accuracy was lower in the Switch condition (.694, SD = .163) compared with the No-switch condition (.996, SD = .012), F(1,9) = 32.8, p < .001, ηp² = .785.

Gaze correction latency.

In addition, there was a latency cost associated with incidental color switch, with a mean latency of 211 ms (SD = 29.8 ms) in the No-switch condition and 261 ms (SD = 35.7 ms) in the Switch condition, F(1,9) = 32.2, p < .001, ηp² = .782.

Discussion

Replicating Experiment 1A, gaze correction interference was observed when the target and adjacent distractor traded an incidental surface feature during the primary saccade. These effects were observed even though array rotation trials were rare. Moreover, the effects were of equal or greater magnitude compared with Experiment 1A. Thus, we observed post-saccadic guidance by transsaccadic VWM under circumstances that more closely approach those in the real world (the visual scene rarely changes during a saccade), indicating that these effects were unlikely to have been caused by an idiosyncratic strategy developed in the context of a high probability of array rotation.

Experiment 3

To ensure generalization of post-saccadic guidance to a feature dimension other than color, we replicated Experiment 1A but with the roles of color and shape reversed (Figure 6). Participants searched for a red color singleton among blue distractors, and each object had an incidental shape. On Switch trials, the target and adjacent distractor swapped shapes. We expected to again observe gaze correction interference on Switch trials, but likely of reduced absolute magnitude given that color is more efficient in guiding attention than shape (Rutishauser & Koch, 2007; Williams, 1967).

Figure 6. Design and sequence of events in a trial of Experiment 3. Participants executed a saccade to a singleton color (red). The switch manipulation involved object shape.

Method

Participants.

Ten new participants (8 female) completed Experiment 3.

Stimuli, Procedure, and Apparatus.

The target was always a red singleton (x = .65, y = .33, 16.9 cd/m²) among blue distractors (x = .15, y = .08, 10.4 cd/m²). Each object had one of 11 different shapes (circle, triangle, square, pentagon, hexagon, star, flower, teardrop, hourglass, shield, cross). The size of each shape was chosen to fit within the same analysis region used in Experiment 1A (Figure 2). The assignment of shapes to objects varied randomly on a trial-by-trial basis, with the specific shapes on each trial chosen in the same manner as color was chosen in Experiment 1A. On Switch trials, the shapes of the target and the adjacent distractor swapped. In all other respects, the experiment was identical to Experiment 1A.

Data Analysis.

A total of 41.1% of the rotation trials were eliminated from the analysis for the reasons discussed in Experiment 1. As in the previous experiments, this was due primarily to relatively inaccurate primary saccades that did not land within the acceptable region (see Figure 3). The mean latency of primary saccades was 192 ms. Mean landing position on rotation trials was 1.66° from the center of the target and 1.69° from the center of the distractor.

Results and Discussion

Figure 7 shows the gaze correction accuracy and latency results in the rotation condition as a function of shape switch. Gaze correction accuracy was perfect on No-switch trials. Mean accuracy was reliably lower on Switch trials (.977, SD = .005), F(1,9) = 20.0, p = .002, ηp² = .690. In addition, there was a latency cost associated with incidental shape switch, with a mean latency of 204 ms (SD = 8.61 ms) in the No-switch condition and 223 ms (SD = 12.4 ms) in the Switch condition, F(1,9) = 14.1, p = .005, ηp² = .567. Thus, the Experiment 1 results generalize to feature-based guidance by shape.

Figure 7. Experiment 3 results. A) Mean gaze correction accuracy. B) Mean gaze correction latency. C) Distributions of correction latency. Error bars are condition-specific, within-subject 95% confidence intervals (Morey, 2008).

General Discussion

As discussed in the Introduction, most current theories of object correspondence and perceptual continuity across saccades hold that these operations depend on a relatively local representation, consisting of the saccade target and, at most, a few other landmark objects (Cavanagh et al., 2010; Currie et al., 2000; Deubel, 2004; Deubel et al., 1998; Irwin et al., 1994). Specifically, the shift of attention preceding a saccade leads to the encoding of saccade target properties and the maintenance of these features across the saccade. When the eyes land, this representation is used to localize the target among objects near the landing position. We have proposed that one means of target localization is feature-based: the target representation serves as a postsaccadic template, biasing attention toward items that match target features (Hollingworth & Luck, 2009; Hollingworth et al., 2008; Richard et al., 2008). However, Eymond et al. (2016) found no evidence for feature-based guidance when a colored saccade target was replaced by a singleton search array upon saccade landing. They argued that the mechanisms involved in localizing the target do not necessarily involve feature-based attention.

In three experiments, we observed robust feature-based guidance of attention after saccades, consistent with the original proposal of Hollingworth and colleagues. In each experiment, an incidental feature of the saccade target object (color or shape) was reliably encoded into VWM, was retained across the saccade, and biased the selection of the corrective saccade goal upon landing. In contrast with Eymond et al. (2016), the gaze correction method allowed us to examine target localization mechanisms (a) without changing the entire stimulus across the saccade and (b) without requiring the participant to switch tasks within a trial (both of which may have limited the application of a feature representation in Eymond et al.). In addition, the gaze correction method provides a natural means of assessing postsaccadic target localization, as target localization is the central process required to accurately correct gaze. Moreover, the experimental design employed in the present series of experiments eliminated any incentive to strategically encode saccade target features (particularly in Experiment 2), replicating a key methodological feature of the Eymond et al. (2016) study. The present gaze correction results are consistent with a series of recent findings demonstrating that surface feature representations are used to solve the correspondence problem across saccades (Demeyer et al., 2010; Tas, Moore, et al., 2012), as well as correspondence problems in other domains, such as object motion and brief occlusion (Hein & Cavanagh, 2012; Hollingworth & Franconeri, 2009; C. M. Moore, Stephens, & Hein, 2010; Tas, Dodd, & Hollingworth, 2012).

The results from the present study also provide the strongest evidence to date that features of the saccade target object are automatically encoded into VWM and maintained across the saccade. The target was always a feature singleton, making it relatively easy to discriminate this item from the rest of the array, and the incidental feature of the target was more often associated, postsaccadically, with the distractor than with the target. Thus, the optimal strategy would have been to avoid encoding the specific features of the saccade target and instead guide attention and gaze based on singleton status, both before and after the primary saccade. Nevertheless, the incidental color of the target object (incidental shape in Experiment 3) was encoded into VWM and maintained across the saccade.

Converging evidence for automatic saccade target encoding comes from three recent studies (Schut, Van der Stoep, et al., 2017; Shao et al., 2010; Tas et al., 2016). Each used a dual-task paradigm to examine whether executing a saccade to an object would interfere with the maintenance of items in VWM for a concurrent memory task. Shao et al. found saccade-related interference in orientation memory precision, Tas et al. found interference with color change-detection performance, and Schut et al. found interference with shape feature report. In these cases, memory performance impairment indicates that the saccade target was encoded into VWM (in Schut et al., the impairment was equivalent to a one-item displacement in VWM, presumably caused by the encoding of the saccade target). However, the indirect nature of the dual-task paradigm limits the conclusions from these studies to some degree; it is possible that the observed interference was generated by other factors related to dual-task performance, such as increased demands on executive processes. In the present method, automatic encoding into VWM was assessed in a single task via reference to the specific content of VWM (e.g., the target color), which serves as a more direct measure.

It is important to note that automatic encoding into VWM appears to be specific to the shift of attention that immediately precedes a saccade and does not necessarily extend to covert shifts of attention that are not associated with saccade preparation.1 In Tas et al. (2016), saccades to an object interfered with the maintenance of colors for a primary VWM task. However, no equivalent interference was observed when the participants shifted attention covertly to an object during VWM maintenance. This finding is consistent with the idea that transsaccadic VWM serves the function of bridging the disruption introduced by the saccade (Hollingworth et al., 2008; Irwin, 1992b). Unlike saccades, purely covert shifts of attention do not necessarily introduce a correspondence problem, since they produce neither a perceptual disruption nor a spatial displacement. It is impending saccade execution that creates a demand to encode saccade target features into VWM, given that saccade execution introduces a demand to bridge perceptual disruption and displacement.

The results from the present study clearly indicate that the saccade target color was represented across the saccade. How do we infer that this representation depended on VWM? First, given the disruption in perceptual input, the transsaccadic representation is, by definition, a memory representation. In addition, transsaccadic memory has functional properties that mirror those of VWM (for a review, see Irwin, 1992b), including highly limited capacity (Irwin, 1992a), object-based encoding (Irwin & Andrews, 1996), and a format that is abstracted away from precise image features (Irwin, 1991). Moreover, Hollingworth and Luck (2009) demonstrated that a color held in VWM for a secondary task interacted with corrective saccades in a manner similar to the effect of remembered saccade target color, observed here. Finally, Hollingworth et al. (2008) found that the accuracy and speed of gaze correction was impaired when VWM was engaged in remembering a secondary set of stimuli. Thus, we can be confident that the representation supporting transsaccadic memory and gaze correction depends on the VWM system.

In addition to automatic encoding of the saccade target into VWM, the present results indicate that this encoding is object-based in the following sense: when only one feature of an object is relevant (e.g., shape in Experiment 1), participants cannot exclude other features of the object (color) from VWM. This issue has been investigated outside of the context of eye movements, but with mixed results. Two studies suggested that task-relevant features of an object could be selectively encoded into VWM, with task-irrelevant features efficiently excluded (Serences, Ester, Vogel, & Awh, 2009; Woodman & Vogel, 2008). However, others have found evidence for object-based encoding (Foerster & Schneider, 2018; Gao, Gao, Li, Sun, & Shen, 2011; Hyun, Woodman, Vogel, Hollingworth, & Luck, 2009; Marshall & Bays, 2013; Matsukura & Vecera, 2011; Shen, Tang, Wu, Shui, & Gao, 2013; Yin et al., 2012). The method employed in Experiment 1 provided a clear disincentive to encode color, and thus constituted a particularly strong test of object-based encoding. Object-based encoding of saccade targets may support post-saccade correspondence and perceptual continuity, as all features of the target can contribute to target localization, not just those features that were relevant for the initial selection. In sum, participants appear to have minimal control over the object features encoded into transsaccadic VWM. All features of the saccade target, including task-irrelevant features, are encoded, maintained across the eye movement, and consulted when the visual system locates the target after the saccade.

Conclusion

There exists a close functional relationship between spatial attention, saccades, VWM, and feature-based attention (Van der Stigchel & Hollingworth, 2018). To bridge the perceptual disruption created by the saccade, saccade target properties are automatically encoded into VWM via the presaccadic shift of spatial attention to the target location. This VWM representation then implements a feature-based selection process after the eyes land, biasing attention toward objects that match target features.

Acknowledgments

The research was supported by NIH grant R01EY017356.

Footnotes


1. A large and relatively consistent literature demonstrates that, although saccade execution requires a shift of spatial attention to the saccade target location (Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler et al., 1995), attention can be covertly shifted without saccade preparation (Hunt & Kingstone, 2003; Juan, Shorter-Jacobi, & Schall, 2004; Klein, 1980; Klein & Pontefract, 1994; Schafer & Moore, 2011; Thompson, Biscoe, & Sato, 2005).

References

  1. Aagten-Murphy D, & Bays PM (in press). Functions of memory across saccadic eye movements Current Topics in Behavioral Neurosciences Berlin, Heidelberg: Springer. [DOI] [PubMed] [Google Scholar]
  2. Becker W (1991). Saccades. In Carpenter RHS (Ed.), Vision and visual dysfunction, Vol. 8: Eye movements (pp. 93–137). London: MacMillan. [Google Scholar]
  3. Becker W, & Jürgens R (1979). An analysis of the saccadic system by means of double step stimuli. Vision Research, 19(9), 967–983. doi: 10.1016/0042-6989(79)90222-0 [DOI] [PubMed] [Google Scholar]
  4. Bridgeman B, Hendry D, & Stark L (1975). Failure to detect displacement of the visual world during saccadic eye movements. Vision Research, 15(6), 719–722. doi: 10.1016/0042-6989(75)90290-4 [DOI] [PubMed] [Google Scholar]
  5. Bridgeman B, & Mayer M (1983). Failure to integrate visual information from successive fixations. Bulletin of the Psychonomic Society, 21(4), 285–286. [Google Scholar]
  6. Cavanagh P, Hunt AR, Afraz A, & Rolfs M (2010). Visual stability based on remapping of attention pointers. Trends in Cognitive Sciences, 14(4), 147–153. doi: 10.1016/j.tics.2010.01.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Currie CB, McConkie GW, Carlson-Radvansky LA, & Irwin DE (2000). The role of the saccade target object in the perception of a visually stable world. Perception & Psychophysics, 62(4), 673–683. doi: 10.3758/BF03206914 [DOI] [PubMed] [Google Scholar]
  8. Demeyer M, De Graef P, Wagemans J, & Verfaillie K (2010). Object form discontinuity facilitates displacement discrimination across saccades. Journal of Vision, 10(6), 17. doi: 10.1167/10.6.17 [DOI] [PubMed] [Google Scholar]
  9. Deubel H (2004). Localization of targets across saccades: Role of landmark objects. Visual Cognition, 11(2–3), 173–202. doi: 10.1080/13506280344000284 [DOI] [Google Scholar]
  10. Deubel H, Bridgeman B, & Schneider WX (1998). Immediate post-saccadic information mediates space constancy. Vision Research, 38(20), 3147–3159. doi: 10.1016/S0042-6989(98)00048-0 [DOI] [PubMed] [Google Scholar]
  11. Deubel H, & Schneider WX (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36(12), 1827–1837. doi: 10.1016/0042-6989(95)00294-4 [DOI] [PubMed] [Google Scholar]
  12. Deubel H, Schneider WX, & Bridgeman B (1996). Postsaccadic target blanking prevents saccadic suppression of image displacement. Vision Research, 36(7), 985–996. doi: 10.1016/0042-6989(95)00203-0 [DOI] [PubMed] [Google Scholar]
  13. Eymond C, Cavanagh P, & Collins T (2016). Feature-based attention across saccades and immediate postsaccadic selection. Attention Perception & Psychophysics, 78(5), 1293–1301. doi: 10.3758/s13414-016-1110-y [DOI] [PubMed] [Google Scholar]
  14. Faul F, Erdfelder E, Lang AG, & Buchner A (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. doi: 10.3758/BF03193146 [DOI] [PubMed] [Google Scholar]
  15. Foerster RM, & Schneider WX (2018). Involuntary top-down control by search-irrelevant features: Visual working memory biases attention in an object-based manner. Cognition, 172, 37–45. doi: 10.1016/j.cognition.2017.12.002 [DOI] [PubMed] [Google Scholar]
  16. Gao T, Gao Z, Li J, Sun Z, & Shen M (2011). The perceptual root of object-based storage: An interactive model of perception and visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 37(6), 1803–1823. doi: 10.1037/a0025637 [DOI] [PubMed] [Google Scholar]
  17. Gysen V, Verfaillie K, & De Graef P (2002). Transsaccadic perception of translating objects: Effects of landmark objects and visual field position. Vision Research, 42(14), 1785–1796. doi: 10.1016/S0042-6989(02)00105-0 [DOI] [PubMed] [Google Scholar]
  18. Hein E, & Cavanagh P (2012). Motion correspondence in the Ternus display shows feature bias in spatiotopic coordinates. Journal of Vision, 12(7), 16: 11–14. doi: 10.1167/12.7.16 [DOI] [PubMed] [Google Scholar]
  19. Herwig A (2015). Transsaccadic integration and perceptual continuity. Journal of Vision, 15(16), 7. doi: 10.1167/15.16.7 [DOI] [PubMed] [Google Scholar]
  20. Herwig A, & Schneider WX (2014). Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 143(5), 1903–1922. doi: 10.1037/a0036781 [DOI] [PubMed] [Google Scholar]
21. Higgins E, & Rayner K (2015). Transsaccadic processing: stability, integration, and the potential role of remapping. Attention, Perception, & Psychophysics, 77(1), 3–27. doi: 10.3758/s13414-014-0751-y
22. Hoffman JE, & Subramaniam B (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57(6), 787–795. doi: 10.3758/BF03206794
23. Hollingworth A, & Franconeri SL (2009). Object correspondence across brief occlusion is established on the basis of both spatiotemporal and surface feature cues. Cognition, 113(2), 150–166. doi: 10.1016/j.cognition.2009.08.004
24. Hollingworth A, & Luck SJ (2009). The role of visual working memory (VWM) in the control of gaze during visual search. Attention, Perception, & Psychophysics, 71(4), 936–949. doi: 10.3758/APP.71.4.936
25. Hollingworth A, Richard AM, & Luck SJ (2008). Understanding the function of visual short-term memory: Transsaccadic memory, object correspondence, and gaze correction. Journal of Experimental Psychology: General, 137(1), 163–181. doi: 10.1037/0096-3445.137.1.163
26. Hunt AR, & Kingstone A (2003). Covert and overt voluntary attention: linked or independent? Cognitive Brain Research, 18(1), 102–105. doi: 10.1016/j.cogbrainres.2003.08.006
27. Hyun JS, Woodman GF, Vogel EK, Hollingworth A, & Luck SJ (2009). The comparison of visual working memory representations with perceptual inputs. Journal of Experimental Psychology: Human Perception and Performance, 35(4), 1140–1160. doi: 10.1037/a0015019
28. Irwin DE (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23(3), 420–456. doi: 10.1016/0010-0285(91)90015-G
29. Irwin DE (1992a). Memory for position and identity across eye movements. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(2), 307–317. doi: 10.1037/0278-7393.18.2.307
30. Irwin DE (1992b). Visual memory within and across fixations. In Rayner K (Ed.), Eye movements and visual cognition: Scene perception and reading (pp. 146–165). New York: Springer-Verlag.
31. Irwin DE, & Andrews RV (1996). Integration and accumulation of information across saccadic eye movements. In Inui T & McClelland JL (Eds.), Attention and performance XVI: Information integration in perception and communication (pp. 125–155). Cambridge, MA: MIT Press.
32. Irwin DE, McConkie GW, Carlson-Radvansky LA, & Currie C (1994). A localist evaluation solution for visual stability across saccades. Behavioral and Brain Sciences, 17(2), 265–266. doi: 10.1017/S0140525X00034439
33. Irwin DE, Yantis S, & Jonides J (1983). Evidence against visual integration across saccadic eye movements. Perception & Psychophysics, 34(1), 49–57. doi: 10.3758/BF03205895
34. Jonides J, Irwin DE, & Yantis S (1982). Integrating visual information from successive fixations. Science, 215(4529), 192–194. doi: 10.1126/science.7053571
35. Jonikaitis D, & Theeuwes J (2013). Dissociating oculomotor contributions to spatial and feature-based selection. Journal of Neurophysiology, 110(7), 1525–1534. doi: 10.1152/jn.00275.2013
36. Juan CH, Shorter-Jacobi SM, & Schall JD (2004). Dissociation of spatial attention and saccade preparation. Proceedings of the National Academy of Sciences of the United States of America, 101(43), 15541–15544. doi: 10.1073/pnas.0403507101
37. Klein RM (1980). Does oculomotor readiness mediate cognitive control of visual attention? In Nickerson RS (Ed.), Attention and Performance VIII (pp. 259–276). Hillsdale, NJ: Erlbaum.
38. Klein RM, & Pontefract A (1994). Does oculomotor readiness mediate cognitive control of visual attention? Revisited! In Umilta C & Moscovitch M (Eds.), Attention and Performance XV: Conscious and Nonconscious Information Processing (Vol. 15, pp. 333–350). Cambridge, MA: MIT Press.
39. Kowler E, Anderson E, Dosher B, & Blaser E (1995). The role of attention in the programming of saccades. Vision Research, 35(13), 1897–1916. doi: 10.1016/0042-6989(94)00279-U
40. Marino AC, & Mazer JA (2016). Perisaccadic updating of visual representations and attentional states: Linking behavior and neurophysiology. Frontiers in Systems Neuroscience, 10, 3. doi: 10.3389/fnsys.2016.00003
41. Marshall L, & Bays PM (2013). Obligatory encoding of task-irrelevant features depletes working memory resources. Journal of Vision, 13(2). doi: 10.1167/13.2.21
42. Matin E (1974). Saccadic suppression: A review and an analysis. Psychological Bulletin, 81(12), 899–917. doi: 10.1037/h0037368
43. Matsukura M, & Vecera SP (2011). Object-based selection from spatially-invariant representations: evidence from a feature-report task. Attention, Perception, & Psychophysics, 73(2), 447–457. doi: 10.3758/s13414-010-0039-9
44. McConkie GW, & Currie CB (1996). Visual stability across saccades while viewing complex pictures. Journal of Experimental Psychology: Human Perception and Performance, 22(3), 563–581. doi: 10.1037/0096-1523.22.3.563
45. McConkie GW, & Rayner K (1976). Identifying the span of the effective stimulus in reading: Literature review and theories of reading. In Singer H & Ruddell RB (Eds.), Theoretical Models and Processes in Reading (pp. 137–162). Newark, DE: International Reading Association.
46. Moore CM, Stephens T, & Hein E (2010). Features, as well as space and time, guide object persistence. Psychonomic Bulletin & Review, 17(5), 731–736. doi: 10.3758/pbr.17.5.731
47. Moore T, & Fallah M (2004). Microstimulation of the frontal eye field and its effects on covert spatial attention. Journal of Neurophysiology, 91(1), 152–162. doi: 10.1152/jn.00741.2002
48. Morey RD (2008). Confidence intervals from normalized data: A correction to Cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4, 61–64.
49. O’Regan JK, & Lévy-Schoen A (1983). Integrating visual information from successive fixations: Does trans-saccadic fusion exist? Vision Research, 23(8), 765–768. doi: 10.1016/0042-6989(83)90198-0
50. Poth CH, & Schneider WX (2016). Breaking object correspondence across saccades impairs object recognition: The role of color and luminance. Journal of Vision, 16(11). doi: 10.1167/16.11.1
51. Richard AM, Luck SJ, & Hollingworth A (2008). Establishing object correspondence across eye movements: Flexible use of spatiotemporal and surface feature information. Cognition, 109(1), 66–88. doi: 10.1016/j.cognition.2008.07.004
52. Rolfs M (2015). Attention in active vision: A perspective on perceptual continuity across saccades. Perception, 44(8–9), 900–919. doi: 10.1177/0301006615594965
53. Rutishauser U, & Koch C (2007). Probabilistic modeling of eye movement data during conjunction search via feature-based attention. Journal of Vision, 7(6), 20. doi: 10.1167/7.6.5
54. Schafer RJ, & Moore T (2011). Selective attention from voluntary control of neurons in prefrontal cortex. Science, 332(6037), 1568–1571. doi: 10.1126/science.1199892
55. Schneider W, Eschmann A, & Zuccolotto A (2002). E-Prime user’s guide. Pittsburgh, PA: Psychology Software Tools, Inc.
56. Schut MJ, Fabius JH, Van der Stoep N, & Van der Stigchel S (2017). Object files across eye movements: Previous fixations affect the latencies of corrective saccades. Attention, Perception, & Psychophysics, 79(1), 138–153. doi: 10.3758/s13414-016-1220-6
57. Schut MJ, Van der Stoep N, Postma A, & Van der Stigchel S (2017). The cost of making an eye movement: A direct link between visual working memory and saccade execution. Journal of Vision, 17(6). doi: 10.1167/17.6.15
58. Serences JT, Ester EF, Vogel EK, & Awh E (2009). Stimulus-specific delay activity in human primary visual cortex. Psychological Science, 20(2), 207–214. doi: 10.1111/j.1467-9280.2009.02276.x
59. Shao N, Li J, Shui RD, Zheng XJ, Lu JG, & Shen MW (2010). Saccades elicit obligatory allocation of visual working memory. Memory & Cognition, 38(5), 629–640. doi: 10.3758/mc.38.5.629
60. Shen MW, Tang N, Wu F, Shui RD, & Gao ZF (2013). Robust object-based encoding in visual working memory. Journal of Vision, 13(2), 11. doi: 10.1167/13.2.1
61. Tas AC, Dodd MD, & Hollingworth A (2012). The role of surface feature continuity in object-based inhibition of return. Visual Cognition, 20(1), 29–47. doi: 10.1080/13506285.2011.626466
62. Tas AC, Luck SJ, & Hollingworth A (2016). The relationship between visual attention and visual working memory encoding: A dissociation between covert and overt orienting. Journal of Experimental Psychology: Human Perception and Performance, 42(8), 1121–1138. doi: 10.1037/xhp0000212
63. Tas AC, Moore CM, & Hollingworth A (2012). An object-mediated updating account of insensitivity to transsaccadic change. Journal of Vision, 12(11). doi: 10.1167/12.11.18
64. Thompson KG, Biscoe KL, & Sato TR (2005). Neuronal basis of covert spatial attention in the frontal eye field. Journal of Neuroscience, 25(41), 9479–9487. doi: 10.1523/jneurosci.0741-05.2005
65. Van der Stigchel S, & Hollingworth A (2018). Visuo-spatial working memory as a fundamental component of the eye movement system. Current Directions in Psychological Science, 27(2), 136–143. doi: 10.1177/0963721417741710
66. White AL, Rolfs M, & Carrasco M (2013). Adaptive deployment of spatial and feature-based attention before saccades. Vision Research, 85, 26–35. doi: 10.1016/j.visres.2012.10.017
67. Williams LG (1967). The effects of target specification on objects fixated during visual search. Acta Psychologica, 27, 355–360. doi: 10.1016/0001-6918(67)90080-7
68. Woodman GF, & Vogel EK (2008). Selective storage and maintenance of an object’s features in visual working memory. Psychonomic Bulletin & Review, 15(1), 223–229. doi: 10.3758/pbr.15.1.223
69. Yin J, Zhou J, Xu H, Liang J, Gao Z, & Shen M (2012). Does high memory load kick task-irrelevant information out of visual working memory? Psychonomic Bulletin & Review, 19(2), 218–224. doi: 10.3758/s13423-011-0201-y
