Abstract
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating – a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the “Spatial Congruency Bias”, to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the Spatial Congruency Bias was present for both gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (gabors, shapes, and faces all showed a retinotopic Congruency Bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
Keywords: feature binding, visual stability, object recognition, remapping, retinotopic, spatiotopic
Introduction
Two of the most fundamental questions about human visual processing are 1) how we maintain visual stability across saccades and 2) how we combine information about objects and their locations. Our visual system produces a stable perception of the world despite frequent eye movements that dramatically shift the locations of objects on our retinas. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Meanwhile, it must also combine information about object locations with information about object identities so that we can tell which objects are located at which positions in space—this is the problem of object-location binding (Treisman & Gelade, 1980; Treisman, 1996). An even larger theoretical challenge lies in how these two questions interact: how does our visual system maintain stable representations of objects across saccades? This question has been described as the “hard binding problem” (Cavanagh, Hunt, Afraz, & Rolfs, 2010), and has remained unresolved despite decades of behavioral, neuroimaging, and neurophysiological research.
One reason this is a challenge for our visual system is that it is unclear what type of location information object representations should be “bound” to. In other words, if we assume that representations of object identities are somehow linked, or “bound”, to the corresponding spatial representations, how is visual stability preserved across eye movements? Are object identities bound to spatiotopic or retinotopic locations? If they were bound to spatiotopic locations, they would always be bound to the correct real-world location across an eye movement, which would intuitively seem easier for human behavior. On the other hand, if they were bound to retinotopic locations, they would need to be updated around the time of a saccade to maintain stable perception. This updating could occur in multiple ways. One way could be that the link between objects and their locations is simply reestablished after each saccade; we refer to this possibility as “re-binding”. Alternatively, binding could be updated through spatial “remapping”, a neural updating mechanism in which retinotopic spatial representations are updated, or “remapped”, to compensate for displacements caused by eye movements (Duhamel, Colby, & Goldberg, 1992). It is currently unknown whether object features are remapped along with location information (Lescroart, Kanwisher, & Golomb, 2016), but if so, remapping could be a way to achieve object representations that are spatiotopic in effect, even if the underlying organization of the visual system is retinotopic. Finally, it is possible that objects are bound to retinotopic locations and are not automatically updated after a saccade through either mechanism. In this case, updating may happen only in certain circumstances, or our visual system may rely on other compensatory mechanisms (e.g., visual memory or the use of stable visual landmarks). Support for these different possibilities – and for retinotopic vs. spatiotopic processing in general – has been mixed across many different paradigms, as described below.
Underlying neural representations: Retinotopic or spatiotopic?
In terms of neural evidence for the existence of retinotopic vs. spatiotopic location representations, it is well agreed upon that processing in early visual cortex is retinotopic (Crespi et al., 2011; Gardner, Merriam, Movshon, & Heeger, 2008; Golomb, Nguyen-Phuc, Mazer, McCarthy, & Chun, 2010). However, there is debate over whether higher-level visual areas may support spatiotopic representations. There have been reports of spatiotopic responses in some higher-level areas of human (Crespi et al., 2011; d’Avossa et al., 2007; McKyton & Zohary, 2007) and monkey cortex (Duhamel, Bremmer, BenHamed, & Graf, 1997; Galletti, Battaglini, & Fattori, 1993; Snyder, Grieve, Brotchie, & Andersen, 1998), and some have suggested that cortex may transition from retinotopic representations in lower-level areas to spatiotopic representations in higher-level areas (Andersen, Snyder, Bradley, & Xing, 1997; Melcher & Colby, 2008; Wurtz, 2008).
But these results are mixed—other research has found solely retinotopic information in these same higher-level areas (Gardner et al., 2008; Golomb & Kanwisher, 2012a). Particularly relevant to the question at hand, Golomb & Kanwisher (2012a) found purely retinotopic representations in higher-level ventral stream areas known to support object recognition in humans, such as the Lateral Occipital Complex (LOC) and category-specific face, place, and body areas — if these brain regions contain information about both object identity and retinotopic location, then we might expect to see object-location binding in retinotopic coordinates after a saccade, at least initially. On the other hand, if representations do become more spatiotopic in higher-level areas, we might predict binding in retinotopic coordinates for simple stimuli and in spatiotopic coordinates for more complex stimuli.
Remapping: Updating to spatiotopic coordinates
As noted earlier, remapping is one mechanism that has been proposed to allow for spatiotopic perception, even with underlying retinotopic neural representations. Spatial remapping has been demonstrated in neurons in many cortical areas, with neurons responding in anticipation to the location that the receptive field will occupy after the saccade (Duhamel et al., 1992; Gottlieb, Kusunoki, & Goldberg, 1998; Kusunoki & Goldberg, 2003; Sommer & Wurtz, 2006; Umeno & Goldberg, 1997; Walker, Fitzgibbon, & Goldberg, 1995; but see Zirnsak & Moore, 2014; Zirnsak, Steinmetz, Noudoost, Xu, & Moore, 2014). Spatial remapping has also been demonstrated in human fMRI (Merriam, Genovese, & Colby, 2003, 2007), EEG (Parks & Corballis, 2008), and behavioral (Hunt & Cavanagh, 2011; Mathôt & Theeuwes, 2010a; Pertzov, Zohary, & Avidan, 2010; Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011; Szinte, Wexler, & Cavanagh, 2012; Szinte & Cavanagh, 2011) paradigms. However, the question of whether object features are remapped along with location information is an area of active debate (Cavanagh et al., 2010; Lescroart, Kanwisher, & Golomb, 2016; Subramanian & Colby, 2014; Yao, Treue, & Krishna, 2016).
An emphasis of recent research, especially for human behavior, has been the remapping of spatial attention. While some studies have shown evidence for predictive remapping of attention (e.g., Rolfs et al., 2011), other studies have demonstrated that spatial attention temporarily lingers at the previously attended retinotopic location for up to 150ms after a saccade, a phenomenon termed the “retinotopic attentional trace” (Golomb, Chun, & Mazer, 2008; Golomb et al., 2010). These results have led some to conclude that spatial attention is natively retinotopic and is updated to spatiotopic coordinates in two stages: anticipatory remapping to the new location, and a more delayed decaying of the retinotopic trace (Casarotti, Lisi, Umiltà, & Zorzi, 2012; Golomb et al., 2008; Golomb, Marino, Chun, & Mazer, 2011; Golomb et al., 2010; Jonikaitis, Szinte, Rolfs, & Cavanagh, 2013; Mathôt & Theeuwes, 2010b). A recent study demonstrated that this delayed remapping of attention can have interesting consequences for feature binding, causing a perceptual blending of features appearing at the retinotopic and spatiotopic locations (Golomb, L’Heureux, & Kanwisher, 2014). However, this study only presented objects after the saccade, so it cannot directly address the question of whether features are themselves remapped along with spatial attention.
What happens to an object’s features across a saccade? Visual aftereffects and transsaccadic integration
Visual aftereffects have been one of the primary techniques used to ask specifically about what happens to an object’s features across a saccade. There have been many reports of spatiotopic transfer of motion, orientation, and face aftereffects (Ezzati, Golzar, & Afraz, 2008; Melcher, 2005, 2007; Turi & Burr, 2012; Zimmermann, Morrone, Fink, & Burr, 2013), although it is debated how robust these effects are, with others reporting primarily retinotopic aftereffects (Afraz & Cavanagh, 2009; Knapen, Rolfs, & Cavanagh, 2009; Wenderoth & Wiese, 2008). Aftereffects have also been used to argue for an effect of stimulus complexity, with one study finding increased spatiotopic transfer of aftereffects with increased stimulus complexity, including complete spatiotopic transfer of a face aftereffect (Melcher, 2005; but see Afraz & Cavanagh, 2009; Cavanagh et al., 2010). Other lines of research have focused on integration of visual features across saccades (Davidson, Fox, & Dick, 1973; Demeyer, de Graef, Verfaillie, & Wagemans, 2011; Demeyer, De Graef, Wagemans, & Verfaillie, 2009, 2010; Harrison & Bex, 2014; Hayhoe, Lachter, & Feldman, 1991; Irwin, Brown, & Sun, 1988; McRae, Butler, & Popiel, 1987; Melcher & Morrone, 2003; Oostwoud Wijdenes, Marshall, & Bays, 2015; Prime, Niemeier, & Crawford, 2006; Van Eccelpoel, Germeys, De Graef, & Verfaillie, 2008). Again here, the results offer mixed evidence for retinotopic versus spatiotopic integration. Moreover, in the cases of spatiotopic feature integration, it remains unclear whether the effects are driven by spatiotopic representations, feature remapping, or other higher-level cognitive process such as visual memory.
The Spatial Congruency Bias as a measure of object-location binding
Thus, although there has been a lot of work related to the reference frames of spatial attention and object perception, it remains unknown what type of location is being bound to object representations, and whether it is remapped or re-bound following an eye movement. We tackle these questions using a novel approach: we adapt a behavioral paradigm – the “Spatial Congruency Bias” – introduced by Golomb, Kupitz, & Thiemann (2014) and proposed to reflect object-location binding.
The Spatial Congruency Bias reveals that when two sequential objects are presented in the same location, people are more likely to judge them as the same identity than when the objects are presented in different locations. The effect is robust and seems to reflect an automatic influence of location information on object perception, and a special role for location information during object recognition. Golomb et al. (2014) proposed that the Spatial Congruency Bias is related to object-location binding for a few reasons: First, the locations of stimuli influence object judgments even when this location information is completely uninformative (and may even be harmful) to the object judgments. This suggests that irrelevant location information is automatically encoded with and bound to other object properties, biasing perceptual judgments of those properties. Second, rather than simply facilitating these judgments (as might be expected from enhanced attentional orienting), the Spatial Congruency Bias alters the judgments themselves, causing a shift in perception. Note that this effect appears to occur on a perceptual rather than a response level: first, because the effect remained when subjects responded on a continuous scale rather than responding “same” or “different” (eliminating the obvious response-level conflict); second, because location biased shape and color judgments while these other dimensions did not generate biases; and third, because the strength of the effect was modulated by the perceptual similarity of the objects being judged, contrary to what would be predicted by a response-bias account (Golomb, Kupitz, et al., 2014). Finally, a key principle in many theories of binding is that location serves a critical role as a pointer, index, or “object file” during object recognition (e.g., Huang & Pashler, 2007; Kahneman, Treisman, & Gibbs, 1992; Treisman & Gelade, 1980).
The fact that the Spatial Congruency Bias underscores a special role for location information — that location influences how object characteristics are perceived but not vice versa — suggests this paradigm may serve as a reliable measure of the influence of location information in this type of object-location binding. The Spatial Congruency Bias seems to reveal an underlying assumption of our visual system that stimuli appearing in the same location are likely to be the same object, with the increased tendency to judge two objects as “same” presumably resulting from location serving as an indirect link between the two objects, with each object representation bound to the same location pointer.
For these reasons, the Spatial Congruency Bias paradigm seems well suited to address the theoretical questions described above, particularly how object-location binding accounts for spatial changes across eye movements. If we take the Congruency Bias as an indication of location information being bound to the object representation, we can ask: 1) Are objects bound to retinotopic and/or spatiotopic locations after an eye movement? 2) Does this location binding dynamically remap or re-bind over time? And 3) Does the type of location binding vary with stimulus complexity (e.g., gabors vs. shapes vs. faces)?
Current Study
We first conducted two preliminary no-saccade experiments to serve as a baseline for the saccade experiments and confirm that the original Spatial Congruency Bias findings (with shape stimuli) extended to both lower-level stimuli (gabors - Experiment 1) and higher-level stimuli (faces - Experiment 2). Next, we modified the paradigm by adding a saccade between the first and second stimulus. This allowed us to test the reference frame of object-location binding by creating conditions where the two stimuli shared the same retinotopic (but not spatiotopic) location, and vice versa. If binding reflects more ecologically-relevant coordinates, we would expect a spatiotopic Spatial Congruency Bias; however, if binding is simply tied to low-level location information and not updated after an eye movement, we would expect a retinotopic Spatial Congruency Bias. However, as reviewed above, updating to spatiotopic coordinates may not be an all-or-none process, but could emerge gradually over time or with increased stimulus complexity. Thus, we included two additional manipulations. To investigate whether updating occurred on a similar timeframe to remapping, we manipulated the post-saccade delay (the time from the end of the saccade to the appearance of the second stimulus). Specifically, we presented the second stimulus at one of two post-saccade delays (50ms or 500ms): We hypothesized that while we might still see retinotopic processing at the 50ms delay, if binding does update to spatiotopic coordinates, we should expect a spatiotopic effect by the 500ms delay. Finally, we tested stimuli of different complexities: gabors (Experiment 3), shapes (Experiment 4), and faces (Experiment 5). If representations are increasingly spatiotopic with increased stimulus complexity, we would expect faces and maybe shapes – but not gabors – to show a spatiotopic Congruency Bias.
Methods
Subjects
Each of the five experiments included 16 subjects (with a different set of subjects in each experiment). All subjects reported normal or corrected-to-normal vision and gave informed consent, and study protocols were approved by the Ohio State University Behavioral and Social Sciences Institutional Review Board. Subjects were compensated with course credit or payment.
Experimental setup
Stimuli were presented using Psychtoolbox (Brainard, 1997) for MATLAB (MathWorks), on a 21-in. (53.34-cm) flat-screen CRT monitor with a refresh rate of 85 Hz. Subjects were seated at a chinrest 61 cm from the monitor, and eye position was tracked using an Eyelink 1000 eye-tracking system. If at any point a subject’s eye position deviated by more than 1.5° from fixation (2° for Experiments 2 and 5), the trial was aborted and repeated later in the block. An average of 30% (Expt 1), 15.2% (Expt 2), 31% (Expt 3), 30% (Expt 4), and 26% (Expt 5) of trials were aborted and re-run.
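The fixation-monitoring rule can be sketched as a simple distance check. This is an illustrative Python sketch, not the authors' MATLAB/Psychtoolbox code; the conversion from tracker pixels to degrees of visual angle is assumed to have been done upstream.

```python
import math

def fixation_broken(gaze_xy, fix_xy, threshold_deg=1.5):
    """Return True if gaze is more than threshold_deg from fixation.

    Coordinates are assumed to already be in degrees of visual angle.
    The paper used a 1.5-degree window (2 degrees for Expts 2 and 5);
    a broken fixation aborted the trial, which was re-run later.
    """
    dx = gaze_xy[0] - fix_xy[0]
    dy = gaze_xy[1] - fix_xy[1]
    return math.hypot(dx, dy) > threshold_deg
```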
Experiment 1: Gabors, no-saccade baseline
Stimuli for Experiment 1 were gabor patches with a spatial frequency of 1.07 cycles per degree, 100% contrast, and an orientation between 1 and 180 degrees, presented on a gray background. The gabors were sized 6.25 degrees × 6.25 degrees. Masks were generated using random noise of the same frequency and were 8.13 degrees × 8.13 degrees.
The task was modified from Golomb, Kupitz, and Thiemann (2014). Subjects began each trial by fixating on a black fixation cross in the center of the screen, and they maintained fixation on this location throughout the trial (monitored via eye-tracking). Once subjects had been fixating for 500 ms, a gabor stimulus (Stimulus 1) appeared in one of four locations in the periphery (upper left, upper right, lower left, or lower right of fixation, centered at 7.06 degrees eccentricity). The gabor remained visible for 500 ms, followed by a screen with only the fixation (50 ms) and then a mask (150ms). Next, another fixation-only screen was present for 900ms or 1350ms, during which subjects maintained fixation. The timing of this screen was chosen to correspond with the total duration of the fixation-only screens for the saccade experiments (Figure 1a–b). Stimulus 2 was then presented and masked for the same duration as Stimulus 1.
Figure 1.

Task and conditions. a) Trial timing for No-saccade experiments (Expts 1 and 2; example trial from Expt 2, Faces). b) Trial timing for Saccade experiments (Expts 3–5; example trial from Experiment 4, Shapes). At the first fixation, participants saw a stimulus followed by a mask in their periphery. After a 500ms delay, the fixation moved to a second location, and subjects executed a saccade. After a 50 or 500ms post-saccade delay, subjects saw a second stimulus, followed by a second mask. They were allowed to respond (same/different item) as soon as they saw the second stimulus. Inset shows the four possible fixation locations (crosses) and 9 possible stimulus locations (squares). c) For the saccade experiments, for a given Stimulus 1 location (dotted circle), there were 4 possible locations for Stimulus 2 (indicated by 4 squares): Spatiotopic (same screen position as Stimulus 1), Retinotopic (same location relative to fixation as Stimulus 1), Control Location A (different spatiotopic and retinotopic location), and Control Location B (different spatiotopic and retinotopic location). d) Examples of Same Item and Different Item stimuli for each of the three saccade experiments.
Stimulus 2 was a gabor of either the same or different orientation as Stimulus 1 and appeared either in the same or different location. These four conditions were counterbalanced and equally likely. The location and orientation of Stimulus 1 were randomly assigned for each trial. In the Same Orientation condition, the orientation of Stimulus 2 exactly matched the orientation of Stimulus 1 (Figure 1d). In the Different Orientation condition, the degree of similarity between the two orientations was determined individually for each subject, using a staircase procedure (Watson & Pelli, 1983) targeting 70–75% accuracy. Staircasing took place during the practice block, and the final staircase value was set as the orientation difference for the main task. If necessary, this value was further adjusted between blocks to maintain performance near 70–75% accuracy. In the Same Location condition, Stimulus 2 was presented in the exact same position as Stimulus 1. In the Different Location condition, Stimulus 2 was presented in the horizontally adjacent position; for example, if Stimulus 1 was presented in the upper left stimulus position, Stimulus 2 would be presented in the upper right.
Subjects were instructed to make a non-speeded 2-alternative-forced-choice same/different orientation judgment comparing the two gabor stimuli; location was irrelevant to the task. Subjects responded by button press and were presented with visual feedback (an “X” for an incorrect answer or a “:)” for a correct answer) informing them whether their response was correct. They were also provided with feedback if they broke fixation at any point during the trial: a large red X would appear in the middle of the screen, and the trial was aborted and repeated later in the run.
Subjects completed 32 trials per block (8 trials for each of the 4 location x orientation conditions, in randomized order). Each subject completed 1 practice block and 5 main blocks.
Experiment 2: Faces, no saccade
Procedures were largely the same for Experiment 2, except that stimuli were faces, and participants judged whether the two faces were the same or different. Face stimuli were modified from Pitcher et al.’s stimulus set (Pitcher, Charles, Devlin, Walsh, & Duchaine, 2009). To match the difficulty of the task for each observer, we created 15 families of morphs from a group of 6 distinct faces using Abrosoft FantaMorph. Each morph family contained 50 face morphs varying in similarity. The Stimulus 1 face was randomly chosen on each trial. On Same Face trials, the Stimulus 2 face was the identical image. On Different Face trials, the second face was chosen as a different face from the same morph family (Figure 1d). Difficulty was adjusted using the same staircase procedure as for Experiment 1. Stimulus orientation was never varied.
Stimuli were sized at 7×7 degrees, centered at 7.09 degrees eccentricity, and presented on a white background. Masks were generated by randomly setting pixels to a value between black and white and were presented in a square occupying the same location as the face. For all experiments except gabor experiments, mask duration was 100ms. In this experiment, feedback for the main task was given with a red square (for an incorrect answer) or green square (for a correct answer), and feedback for eye-tracking was the same as in Experiment 1.
Experiment 3: Gabors, saccade
Experiment 3 used the same stimuli as Experiment 1 but added a saccade to the paradigm to create conditions in which the two stimuli shared either the same retinotopic location, the same spatiotopic location, or neither. For saccade experiments, there were four possible fixation locations, centered on the screen and forming the corners of an invisible 10×10 degree square. There were 9 possible stimulus locations, forming a 3×3 grid such that each fixation location had four adjacent stimulus locations of equal eccentricity (7.06 degrees; Figure 1b, inset).
On each trial, the fixation cross began at one fixation location, and Stimulus 1 appeared and was masked with the same timing as in Experiment 1. After a 500ms pre-saccade delay, the fixation cross jumped to either the adjacent horizontal or vertical fixation location. Once subjects successfully completed the saccade, Stimulus 2 was presented after either a 50ms or 500ms post-saccade delay. The timing of each trial was the same as in the no-saccade version, except that the 350ms estimated-saccade delay was replaced with the actual time it took participants to saccade to the new fixation location. We verified that participants accurately completed the saccade for each trial by aborting the trial if participants did not land in a 1.5–2-degree window around the saccade target.
After the saccade, Stimulus 2 could appear either in the same location as Stimulus 1 relative to fixation (Retinotopic Location), the same absolute screen location (Spatiotopic Location), or in one of two control locations at an equal eccentricity from Fixation 2 (see Figure 1c). As in the no-saccade experiments, Stimulus 2 could have either the same or different orientation as Stimulus 1. These eight conditions, along with the post-saccade delay and saccade direction, were counterbalanced and equally likely. The orientation of Stimulus 1 was randomly assigned for each trial, as before, and orientation difference was staircased for each subject. The location of Stimulus 1 was chosen from one of two possible locations for a given Fixation 1 position and saccade direction, such that all four Stimulus 2 location conditions were possible. (For example, if Fixation 1 was the upper-left fixation position, to be followed by a downward saccade, then Stimulus 1 could appear in either the middle-left or middle-middle position on the screen, such that after the saccade, the Retinotopic, Spatiotopic, and control locations were located at equal eccentricity from the final fixation.)
Subjects completed 32 trials per block (4 trials for each of the 8 location x identity conditions, in randomized order), in addition to any trials that were aborted due to eye-tracking errors (which were repeated at the end of the same block). Each subject completed 1 block of practice followed by one block of staircasing. In all three saccade experiments, participants completed 8–10 main blocks. (Participants who were not able to complete the full 10 blocks in the time allotted for the experimental session were still included if they completed at least 8 blocks; criteria defined in advance.)
Experiment 4: Shapes, saccade
Procedures were largely the same as Experiment 3, except that stimuli were novel shapes, and subjects judged whether the two shapes were the same or different. Shape stimuli were the same as those in Golomb, Kupitz, and Thiemann (2014), modified from the Tarr stimulus set (stimulus images courtesy of Michael J. Tarr, Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, http://www.tarrlab.org). Stimuli were drawn from ten families of shape morphs; within each family, the body of the shape remained constant, while the appendages could vary in shape, length, or relative location (Figure 1d). The Stimulus 1 shape was randomly chosen on each trial. On Same Shape trials, the Stimulus 2 shape was the identical image. On Different Shape trials, the second shape was chosen as a different shape from the same morph family (Figure 1d). We used the easiest morph level (the two images with the greatest morph distance within a family) for all subjects instead of individually staircasing task difficulty, since in Golomb, Kupitz, and Thiemann (2014), most participants were already within the desired accuracy range at this easiest morph level (maximum staircase value) in the no-saccade task.
Stimuli were sized 6.25 degrees × 6.25 degrees, and stimulus orientation was never varied. Masks were generated by randomly setting pixels to a value between black and white. Timing was the same as for Experiment 3, except that the mask was presented for 100ms instead of 150ms. There was one block of practice for this experiment, followed by 8–10 main blocks.
Experiment 5: Faces, saccade
Procedures were largely the same as Experiments 3–4, except Experiment 5 used the face stimuli from Experiment 2. Timing was the same as Experiment 4. Before the main experiment, there was one staircasing block and one practice block.
Analysis
Our primary measure for all experiments was the Spatial Congruency Bias (Golomb, Kupitz, et al., 2014). For each participant, we calculated hit and false alarm rates for each location condition. We defined a “hit” as a “same item” response when the stimuli actually were the same (Same Item condition, i.e., same orientation/shape/face), and a “false alarm” as a “same item” response when the stimuli were actually different (Different Item condition). From the hit rate and false alarm rate, we used signal detection theory to calculate bias (criterion measure) for each location condition: c=−[z(hit rate)+z(false alarm rate)]/2.
Note that signal detection theory on its own cannot differentiate between response-level (decision) biases versus other types of bias (e.g., perceptual: see Witt, Taylor, Sugovic, & Wixted, 2015 and Discussion).
To assess the Spatial Congruency Bias (i.e., whether there was a greater bias to report two stimuli as the same gabor/shape/face when they appeared in the same spatial location), we calculated a “Shift in Bias” measure. For no-saccade experiments, we calculated the Shift in Bias by subtracting the bias for the Different Location condition from the bias for the Same Location condition. The Shift in Bias was calculated separately for each subject. A negative shift means an increased bias to respond “same item” in the Same Location condition.
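The bias computations above can be sketched in Python using the standard signal-detection criterion formula, c = −[z(H) + z(FA)]/2, where z is the inverse of the standard normal CDF. This is an illustrative sketch, not the authors' analysis code.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def criterion(hit_rate, fa_rate):
    """Signal detection criterion c = -(z(H) + z(FA)) / 2.

    A more negative c indicates a more liberal bias toward
    responding "same item".
    """
    return -(_z(hit_rate) + _z(fa_rate)) / 2

def shift_in_bias(same_location_c, different_location_c):
    """Shift in Bias: Same Location bias minus Different Location bias.

    A negative value indicates an increased tendency to respond
    "same item" in the Same Location condition, i.e., a Spatial
    Congruency Bias.
    """
    return same_location_c - different_location_c
```

For the saccade experiments, the same subtraction would be applied twice per subject: Retinotopic minus Control A, and Spatiotopic minus Control B.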
For saccade experiments, we wanted to quantify the Shift in Bias for the Retinotopic and Spatiotopic Location conditions separately. Our design included four location conditions: two Same Location conditions (Retinotopic and Spatiotopic) and two Different Location conditions (Control A and Control B). However, to calculate the Shift in Bias for each reference frame, we first had to determine which Different Location (control) conditions should be matched to which Same Location conditions.
A number of previous studies exploring attention across saccades have made a point to choose “mirror control” locations for the spatiotopic and retinotopic locations, to control for factors such as position along the saccadic trajectory (Mathôt & Theeuwes, 2010b; Satel, Wang, Hilchey, & Klein, 2012). This system would match Spatiotopic with Control A and Retinotopic with Control B. However, there is a potential critical problem with this matchup. Golomb, Kupitz, and Thiemann (2014) found that the strength of the Congruency Bias scales with the distance between the two stimuli. Thus, if we were to assume that the Congruency Bias was based entirely on retinotopic location, for example, then a pure distance-based hypothesis would predict the strongest bias for the Retinotopic Location (same location), followed by a weaker bias for both Spatiotopic Location and Control B (different location, short distance), and the weakest bias for Control A (different location, long distance). If we were to choose Control A as the Spatiotopic “mirror” control, then we might find what looks like a small spatiotopic Congruency Bias, but it would be confounded by the distance effect. The same challenge would apply for Retinotopic versus Control B, if the Congruency Bias were based in spatiotopic coordinates.
To control for these distance effects, for our primary analyses we assigned Control A to the Retinotopic Location and Control B to the Spatiotopic Location, and we calculated the Shift in Bias accordingly. However, we also repeated Congruency Bias analyses with the mirror control pairings (Table S1), and we note that the raw bias scores for each of the four location conditions are displayed in Figure 2e–g and tables. Paired t-tests comparing each combination of conditions, as well as the Shift in Bias differences, are reported in the Results.
Figure 2.

Spatial Congruency Bias results collapsed across delay condition. (a–c) Signal detection theory bias scores for the No-Saccade experiments as a function of location condition. A negative bias means participants are more likely to judge two stimuli as the same orientation (a), shape (b), or face (c). Data in (b) are replotted from Golomb, Kupitz, and Thiemann (2014). (d) Shift in Bias (calculated as Same Location bias minus Different Location bias) plotted for all three no-saccade experiments. Asterisks indicate significant Shifts in Bias (p < 0.05). (e–g) Signal detection theory bias scores for the Saccade experiments as a function of location condition. A negative bias means participants are more likely to judge two stimuli as the same orientation (e), shape (f), or face (g). (h) Shift in Bias in retinotopic coordinates (calculated as Retinotopic Location bias minus Control A Location bias) and spatiotopic coordinates (calculated as Spatiotopic Location bias minus Control B Location bias) plotted for all three saccade experiments. Asterisks indicate a significant (p < 0.05) Retinotopic Shift in Bias for all three experiments, and a significant difference between Retinotopic and Spatiotopic effects. Error bars are standard error of the mean; N = 16 for each experiment.
Our main analyses focused on the Spatial Congruency Bias, but we also present data for two measures of facilitation: reaction time (RT) and sensitivity (d′). We calculated d′ using signal detection theory: d′=z(hit rate)−z(false alarm rate).
Statistics
Sample size was chosen to match the no-saccade shape experiment reported in Golomb, Kupitz, and Thiemann (2014), which had an effect size of d = 1.01 and statistical power (1 − β) of 0.96 with N = 16. For each analysis, values for all measures were averaged separately for each subject and condition. For ANOVAs, effect size was calculated using partial eta-squared (ηp²). We followed up each ANOVA with planned, paired, two-tailed t-tests. Effect sizes for t-tests were calculated using Cohen’s d (uncorrected). Trials on which subjects failed to respond, or on which the RT fell more than 2.5 standard deviations above or below the subject’s mean RT, were excluded (less than 3.5% of trials for each experiment). One participant in Experiment 4 was excluded for overall task accuracy below 55% (a predetermined threshold).
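The trial-exclusion rule described above can be sketched as follows (illustrative Python; the function name and the use of None for missed responses are our own conventions, not details from the paper):

```python
from statistics import mean, stdev

def trim_rts(rts, n_sd=2.5):
    """Apply the exclusion rule: drop missed responses (None) and RTs
    more than n_sd standard deviations from the subject's mean RT."""
    responded = [rt for rt in rts if rt is not None]
    m, s = mean(responded), stdev(responded)
    return [rt for rt in responded if abs(rt - m) <= n_sd * s]
```

In practice this would be applied per subject, before averaging by condition.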
Results
No-saccade experiments
Before asking how the Spatial Congruency Bias varies across saccades, it was important to establish a baseline and confirm the existence of a Spatial Congruency Bias for each stimulus type in the absence of saccades. Hit and false alarm rates for gabors (Expt 1) and faces (Expt 2) are shown in Tables 1–2, for Same Location and Different Location conditions. Figure 2a–c illustrates the bias scores for each location condition and experiment. Negative values indicate an increased bias to respond “same item”. The results for gabors and faces replicated and extended the shape results from Golomb, Kupitz, and Thiemann (2014); for both stimulus types tested, participants were more likely to report that two objects were the same if they appeared in the same location (Gabors: t(15) = −3.49, p = .003, d = −.87; Faces: t(15) = −5.19, p < .001, d = −1.30).
Table 1.
Means and standard deviations (in parentheses) for measures associated with conditions in Experiment 1 (gabors, no saccade). Results are collapsed across delay condition.
| Expt 1 (Gabors, no saccade) | Same/Diff Item | Same Loc | Diff Loc |
|---|---|---|---|
| RT (s) | Same Item | 0.76 (0.12) | 0.81 (0.12) |
| | Diff Item | 0.81 (0.10) | 0.84 (0.11) |
| Accuracy | Same Item | 0.91 (0.08) | 0.76 (0.11) |
| | Diff Item | 0.74 (0.12) | 0.74 (0.12) |
| P(“Same Item”) | Same Item | 0.91 (0.08) | 0.76 (0.11) |
| | Diff Item | 0.26 (0.12) | 0.26 (0.12) |
| d-prime | | 2.19 (0.71) | 1.47 (0.51) |
| Bias | | −0.41 (0.27) | −0.03 (0.33) |
Table 2.
Means and standard deviations (in parentheses) for measures associated with conditions in Experiment 2 (faces, no saccade). Results are collapsed across delay condition.
| Expt 2 (Faces, no saccade) | Same/Diff Item | Same Loc | Diff Loc |
|---|---|---|---|
| RT (s) | Same Item | 0.81 (0.11) | 0.87 (0.13) |
| | Diff Item | 0.86 (0.14) | 0.86 (0.13) |
| Accuracy | Same Item | 0.87 (0.07) | 0.70 (0.11) |
| | Diff Item | 0.71 (0.07) | 0.77 (0.07) |
| P(“Same Item”) | Same Item | 0.87 (0.07) | 0.70 (0.11) |
| | Diff Item | 0.29 (0.07) | 0.23 (0.07) |
| d-prime | | 1.72 (0.32) | 1.32 (0.28) |
| Bias | | −0.31 (0.21) | 0.12 (0.25) |
To more easily compare across experiments and reference frames, we next calculated a Shift in Bias measure, defined as the difference in bias between the Same Location and Different Location conditions (Figure 2d). To test whether the magnitude of the Shift in Bias differed significantly across the three stimulus types, we ran a between-subjects ANOVA that included Experiments 1 and 2 and the shapes experiment from Golomb et al. (2014). We found no effect of stimulus type (F(2,45) = 0.24, p = .79, ηp² = .01).
Saccade experiments
We next asked how the Spatial Congruency Bias is affected by an intervening saccade, specifically whether it is preserved in the retinotopic and/or spatiotopic reference frames, and whether this changes as a function of post-saccade delay and/or stimulus complexity. In the saccade experiments, there were four location conditions: Same Retinotopic, Same Spatiotopic, or one of two control locations differing in both reference frames. Average saccadic latency for each experiment was 260ms (Expt 3), 267ms (Expt 4), and 235ms (Expt 5).
Hit and false alarm rates for each of these four location conditions are shown in Tables 3–5 for gabors (Expt 3), shapes (Expt 4), and faces (Expt 5). Data are presented separately for the 50ms and 500ms post-saccade delays, as well as collapsed across delay. ANOVAs including Delay as a factor revealed no significant main effects or interactions with Delay (all F’s < 1.82 and p’s > 0.17), arguing against the hypothesis that the reference frame might dynamically change over time following the saccade. For subsequent analyses and figures, we collapsed across Delay.
Table 3.
Means and standard deviations (in parentheses) for measures associated with conditions in Experiment 3 (gabors, saccade). Data for this table are shown for 50ms delay, 500ms delay, and collapsed across post-saccade delay condition.
| Expt 3 (Gabors, saccade), 50ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.81 (0.16) | 0.86 (0.12) | 0.84 (0.15) | 0.85 (0.16) |
| | Diff Item | 0.85 (0.15) | 0.87 (0.12) | 0.85 (0.15) | 0.88 (0.14) |
| Accuracy | Same Item | 0.86 (0.13) | 0.75 (0.13) | 0.73 (0.10) | 0.75 (0.16) |
| | Diff Item | 0.54 (0.10) | 0.67 (0.12) | 0.63 (0.13) | 0.62 (0.12) |
| P(“Same Item”) | Same Item | 0.86 (0.13) | 0.75 (0.13) | 0.73 (0.10) | 0.75 (0.16) |
| | Diff Item | 0.46 (0.10) | 0.33 (0.12) | 0.37 (0.13) | 0.38 (0.12) |
| d-prime | | 1.30 (0.65) | 1.21 (0.52) | 0.99 (0.55) | 1.10 (0.79) |
| Bias | | −0.54 (0.26) | −0.14 (0.32) | −0.15 (0.24) | −0.23 (0.24) |

| Expt 3 (Gabors, saccade), 500ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.78 (0.17) | 0.85 (0.17) | 0.82 (0.17) | 0.84 (0.18) |
| | Diff Item | 0.83 (0.15) | 0.85 (0.15) | 0.84 (0.16) | 0.84 (0.16) |
| Accuracy | Same Item | 0.83 (0.13) | 0.73 (0.16) | 0.76 (0.16) | 0.76 (0.10) |
| | Diff Item | 0.62 (0.12) | 0.63 (0.11) | 0.62 (0.12) | 0.69 (0.12) |
| P(“Same Item”) | Same Item | 0.83 (0.13) | 0.73 (0.16) | 0.76 (0.16) | 0.76 (0.10) |
| | Diff Item | 0.38 (0.12) | 0.37 (0.11) | 0.38 (0.12) | 0.31 (0.12) |
| d-prime | | 1.41 (0.64) | 1.04 (0.57) | 1.15 (0.58) | 1.26 (0.44) |
| Bias | | −0.39 (0.32) | −0.18 (0.36) | −0.25 (0.35) | −0.12 (0.25) |

| Expt 3 (Gabors, saccade), both delays | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.79 (0.17) | 0.85 (0.14) | 0.83 (0.16) | 0.84 (0.17) |
| | Diff Item | 0.84 (0.14) | 0.86 (0.14) | 0.84 (0.15) | 0.86 (0.15) |
| Accuracy | Same Item | 0.85 (0.12) | 0.74 (0.13) | 0.74 (0.11) | 0.76 (0.12) |
| | Diff Item | 0.58 (0.09) | 0.65 (0.09) | 0.62 (0.10) | 0.65 (0.09) |
| P(“Same Item”) | Same Item | 0.85 (0.12) | 0.74 (0.13) | 0.74 (0.11) | 0.76 (0.12) |
| | Diff Item | 0.42 (0.09) | 0.35 (0.09) | 0.38 (0.10) | 0.35 (0.09) |
| d-prime | | 1.38 (0.65) | 1.07 (0.41) | 1.03 (0.43) | 1.15 (0.52) |
| Bias | | −0.48 (0.28) | −0.15 (0.27) | −0.19 (0.24) | −0.18 (0.20) |
Table 4.
Means and standard deviations (in parentheses) for measures associated with conditions in Experiment 4 (shapes, saccade). Data for this table are shown for 50ms delay, 500ms delay, and collapsed across post-saccade delay condition.
| Expt 4 (Shapes, saccade), 50ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.87 (0.14) | 0.91 (0.14) | 0.88 (0.13) | 0.90 (0.13) |
| | Diff Item | 0.91 (0.14) | 0.91 (0.14) | 0.91 (0.14) | 0.90 (0.14) |
| Accuracy | Same Item | 0.85 (0.14) | 0.71 (0.11) | 0.75 (0.12) | 0.77 (0.11) |
| | Diff Item | 0.69 (0.14) | 0.75 (0.14) | 0.76 (0.07) | 0.70 (0.14) |
| P(“Same Item”) | Same Item | 0.85 (0.14) | 0.71 (0.11) | 0.75 (0.12) | 0.77 (0.11) |
| | Diff Item | 0.31 (0.14) | 0.25 (0.14) | 0.24 (0.07) | 0.30 (0.14) |
| d-prime | | 1.67 (0.75) | 1.36 (0.41) | 1.47 (0.51) | 1.39 (0.63) |
| Bias | | −0.30 (0.31) | 0.09 (0.38) | −0.01 (0.22) | −0.10 (0.30) |

| Expt 4 (Shapes, saccade), 500ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.85 (0.15) | 0.90 (0.15) | 0.87 (0.15) | 0.90 (0.16) |
| | Diff Item | 0.88 (0.14) | 0.89 (0.17) | 0.87 (0.13) | 0.89 (0.14) |
| Accuracy | Same Item | 0.84 (0.10) | 0.68 (0.10) | 0.79 (0.10) | 0.72 (0.13) |
| | Diff Item | 0.70 (0.09) | 0.75 (0.12) | 0.78 (0.11) | 0.75 (0.11) |
| P(“Same Item”) | Same Item | 0.84 (0.10) | 0.68 (0.10) | 0.79 (0.10) | 0.72 (0.13) |
| | Diff Item | 0.30 (0.09) | 0.25 (0.12) | 0.22 (0.11) | 0.25 (0.11) |
| d-prime | | 1.64 (0.55) | 1.24 (0.53) | 1.71 (0.67) | 1.40 (0.63) |
| Bias | | −0.28 (0.25) | 0.13 (0.27) | −0.00 (0.23) | 0.04 (0.32) |

| Expt 4 (Shapes, saccade), both delays | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.86 (0.14) | 0.90 (0.14) | 0.87 (0.14) | 0.90 (0.14) |
| | Diff Item | 0.89 (0.14) | 0.90 (0.15) | 0.89 (0.13) | 0.90 (0.14) |
| Accuracy | Same Item | 0.84 (0.10) | 0.70 (0.09) | 0.77 (0.09) | 0.75 (0.10) |
| | Diff Item | 0.69 (0.08) | 0.75 (0.11) | 0.77 (0.08) | 0.73 (0.10) |
| P(“Same Item”) | Same Item | 0.84 (0.10) | 0.70 (0.09) | 0.77 (0.09) | 0.75 (0.10) |
| | Diff Item | 0.31 (0.08) | 0.25 (0.11) | 0.23 (0.08) | 0.27 (0.10) |
| d-prime | | 1.62 (0.54) | 1.28 (0.44) | 1.55 (0.48) | 1.34 (0.57) |
| Bias | | −0.28 (0.20) | 0.10 (0.26) | 0.01 (0.16) | −0.03 (0.16) |
Table 5.
Means and standard deviations (in parentheses) for measures associated with conditions in Experiment 5 (faces, saccade). Data for this table are shown for 50ms delay, 500ms delay, and collapsed across post-saccade delay condition.
| Expt 5 (Faces, saccade), 50ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.74 (0.11) | 0.78 (0.12) | 0.74 (0.11) | 0.76 (0.11) |
| | Diff Item | 0.75 (0.12) | 0.75 (0.10) | 0.76 (0.10) | 0.76 (0.11) |
| Accuracy | Same Item | 0.81 (0.10) | 0.63 (0.18) | 0.82 (0.12) | 0.76 (0.13) |
| | Diff Item | 0.74 (0.13) | 0.80 (0.07) | 0.81 (0.09) | 0.82 (0.08) |
| P(“Same Item”) | Same Item | 0.81 (0.10) | 0.63 (0.18) | 0.82 (0.12) | 0.76 (0.13) |
| | Diff Item | 0.26 (0.13) | 0.20 (0.07) | 0.19 (0.09) | 0.18 (0.08) |
| d-prime | | 1.61 (0.45) | 1.25 (0.52) | 1.91 (0.35) | 1.75 (0.54) |
| Bias | | −0.12 (0.33) | 0.26 (0.33) | −0.02 (0.34) | 0.10 (0.35) |

| Expt 5 (Faces, saccade), 500ms delay | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.73 (0.12) | 0.74 (0.11) | 0.74 (0.13) | 0.75 (0.12) |
| | Diff Item | 0.74 (0.10) | 0.74 (0.11) | 0.73 (0.10) | 0.74 (0.09) |
| Accuracy | Same Item | 0.86 (0.19) | 0.70 (0.13) | 0.74 (0.12) | 0.77 (0.15) |
| | Diff Item | 0.76 (0.13) | 0.80 (0.11) | 0.77 (0.13) | 0.77 (0.12) |
| P(“Same Item”) | Same Item | 0.86 (0.19) | 0.70 (0.13) | 0.74 (0.12) | 0.77 (0.15) |
| | Diff Item | 0.24 (0.13) | 0.20 (0.11) | 0.23 (0.13) | 0.23 (0.12) |
| d-prime | | 2.05 (0.59) | 1.49 (0.63) | 1.54 (0.39) | 1.64 (0.58) |
| Bias | | −0.24 (0.50) | 0.14 (0.30) | 0.06 (0.45) | −0.00 (0.38) |

| Expt 5 (Faces, saccade), both delays | Same/Diff Item | Retinotopic | Control A | Spatiotopic | Control B |
|---|---|---|---|---|---|
| RT (s) | Same Item | 0.73 (0.11) | 0.76 (0.11) | 0.74 (0.11) | 0.75 (0.11) |
| | Diff Item | 0.75 (0.10) | 0.74 (0.10) | 0.74 (0.10) | 0.75 (0.10) |
| Accuracy | Same Item | 0.83 (0.13) | 0.67 (0.13) | 0.78 (0.11) | 0.77 (0.11) |
| | Diff Item | 0.75 (0.10) | 0.80 (0.07) | 0.79 (0.10) | 0.80 (0.07) |
| P(“Same Item”) | Same Item | 0.83 (0.13) | 0.67 (0.13) | 0.78 (0.11) | 0.77 (0.11) |
| | Diff Item | 0.25 (0.10) | 0.20 (0.07) | 0.21 (0.10) | 0.20 (0.07) |
| d-prime | | 1.77 (0.43) | 1.33 (0.42) | 1.68 (0.22) | 1.64 (0.31) |
| Bias | | −0.18 (0.36) | 0.20 (0.27) | 0.02 (0.36) | 0.04 (0.30) |
Figure 2e–g illustrates the bias score for each of the four location conditions in each of the three stimulus-type experiments. As before, negative values indicate an increased bias to respond “same”. The results reveal a striking and consistent pattern: for all stimulus types, a strong “same” bias was found for the Retinotopic Location compared to the other three locations. That is, participants were more likely to report that two objects were the same identity when they appeared in the same retinotopic location after a saccade. This retinotopic Congruency Bias was driven by an increase in both hits (correctly responding “same” when the two objects were in fact identical) and false alarms (incorrectly responding “same” when the objects were different) in all three experiments (Tables 3–5, bottom panels). The bias at the Retinotopic Location was significantly stronger than at the Spatiotopic Location (Expt 3: t(15) = −3.06, p = .008, d = −0.76; Expt 4: t(15) = −5.95, p < .001, d = −1.49; Expt 5: t(15) = −2.86, p = .01, d = −0.72), the Control A location (Expt 3: t(15) = −3.67, p = .002, d = −0.92; Expt 4: t(15) = −6.08, p < .001, d = −1.52; Expt 5: t(15) = −8.36, p < .001, d = −2.09), and the Control B location (Expt 3: t(15) = −3.59, p = .003, d = −0.90; Expt 4: t(15) = −4.68, p < .001, d = −1.17; Expt 5: t(15) = −4.28, p < .001, d = −1.07).
To calculate the “Shift in Bias” (difference score) in the retinotopic and spatiotopic reference frames, we subtracted the Control A bias from the Retinotopic bias, and the Control B bias from the Spatiotopic bias. Figure 2h illustrates these Shifts in Bias for the Retinotopic and Spatiotopic locations compared to their respective control locations. (Analyses repeated with the other combinations of control locations revealed a similar pattern of results; see Table S1.) For gabors, shapes, and faces alike, a strong Spatial Congruency Bias was found in retinotopic coordinates (similar in magnitude to the no-saccade Congruency Biases), and no Spatial Congruency Bias was found in spatiotopic coordinates. A 2 × 2 × 3 ANOVA with within-subjects factors of Reference frame and Delay and a between-subjects factor of Stimulus type revealed a main effect of Reference frame (F(1,45) = 48.77, p < .001, ηp² = .52), but no other significant main effects or interactions, including no Stimulus type × Reference frame interaction (F(2,45) = 0.685, p = .51, ηp² = .030); thus we found no evidence that the reference frame of the bias changed with increasingly complex stimuli.
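The distance-matched pairing behind the Shift in Bias can be made explicit with a small sketch (Python; the dictionary keys are our own labels). As an illustration, it is applied here to the group-mean, collapsed-delay bias scores for Experiment 3 from Table 3:

```python
def shift_in_bias(bias_scores):
    # Each reference frame is compared against its distance-matched
    # control: Retinotopic vs. Control A, Spatiotopic vs. Control B.
    return {
        "retinotopic": bias_scores["retinotopic"] - bias_scores["control_a"],
        "spatiotopic": bias_scores["spatiotopic"] - bias_scores["control_b"],
    }

# Group-mean bias scores for Experiment 3, both delays (Table 3)
expt3 = {"retinotopic": -0.48, "control_a": -0.15,
         "spatiotopic": -0.19, "control_b": -0.18}
shifts = shift_in_bias(expt3)
# Yields a clearly negative retinotopic shift (-0.33) and a near-zero
# spatiotopic shift (-0.01), matching the pattern in Figure 2h.
```

In the actual analyses these difference scores were computed per subject and then submitted to the ANOVAs and t-tests reported above.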
Other measures: facilitation effects
The primary focus of this study was to assess the reference frame of the Spatial Congruency Bias, using the bias measure described above. We focus on the bias because it reflects a fundamental influence of location on feature judgments, shifting participants’ behavior in a systematic way rather than producing a general boost in performance. However, we can also compare measures of facilitation: d′ and RT (note, however, that the task was not speeded). As above, we primarily report statistics using the “distance” controls but include analyses using the “mirror” controls in the supplement. A 2 × 2 × 3 ANOVA with within-subjects factors of Reference frame and Delay and a between-subjects factor of Stimulus type revealed a significant main effect of Reference frame for both d′ (F(1,45) = 9.00, p = .004, ηp² = .167) and RT (F(1,45) = 6.08, p = .02, ηp² = .119), with no significant interactions. For both measures, t-tests showed facilitation that was generally larger in the retinotopic reference frame across experiments. However, there were also some spatiotopic facilitation effects, in line with the remapping of attention to spatiotopic coordinates after a saccade. For shapes, d′ was marginally larger for the Spatiotopic location than for Control B (t(15) = 1.89, p = .08, d = .47), and for faces, RT was significantly faster for the Spatiotopic location than for Control B (t(15) = −2.16, p = .048, d = −.54). Repeating these analyses with the “mirror” controls instead revealed no main effect of Reference frame for d′ or RT, and post hoc t-tests revealed a few cases of both retinotopic and spatiotopic facilitation. Full d′ and RT results are reported in Tables S2–S3.
Discussion
In these experiments, we investigated the question of how we achieve visual stability of object/feature information across saccades. To do this, we applied a recent behavioral paradigm: the Spatial Congruency Bias. Previous work using this paradigm in the absence of eye movements revealed that participants were more likely to judge two objects as the same if they appeared in the same location compared to different locations, which was proposed to reflect a signature of object-location binding (Golomb, Kupitz, et al., 2014). In the current paper, we present three key findings: 1) The Spatial Congruency Bias is retinotopic; that is, participants are more likely to judge two objects as the same if they appeared in the same retinotopic location across an eye movement. 2) This is true even at longer post-saccade delays, when spatial attention would have had sufficient time to update to spatiotopic coordinates. And 3) This is true both for simple stimuli like gabors and for complex stimuli like faces. These findings carry important implications for understanding both visual stability across eye movements and object-location interactions, as discussed below.
What does the Spatial Congruency Bias reflect?
As discussed in the Introduction, an important aspect of the Spatial Congruency Bias is that it seems to reflect a location-influenced shift that changes how objects are perceived, beyond the simple boosts in performance that might be expected from attentional facilitation. On the surface, it may seem unconventional to draw perceptual conclusions from the bias (criterion) measure used to quantify this effect, since this measure is traditionally associated with changes in response bias. However, bias effects can in fact result from either perceptual or response processes (e.g., Mack, Richler, Gauthier, & Palmeri, 2011; Wixted & Stretch, 2000), even when there is no effect on d′/sensitivity (Morgan, Hole, & Glennerster, 1990; Witt et al., 2015). This is because, in the case of a perceptual shift (e.g., perceiving two stimuli to be more similar than they actually are), both hits and false alarms would increase, creating a change in bias. (Note that a d′ effect would not necessarily be present in this case, since a d′ effect is a shift in hits relative to false alarms.) While Signal Detection Theory on its own cannot differentiate between perceptual and response biases, a number of characteristics of the original Spatial Congruency Bias argue that it is a perceptual bias (Golomb, Kupitz, et al., 2014), including the finding that location still influenced identity judgments in a variation of the paradigm in which the obvious response-level conflict was eliminated and participants rated the objects’ similarity on a continuous scale, and the finding that the effect depended on the perceptual difficulty of the task. (If the bias were a response-level effect, we would expect it to be present regardless of the perceptual similarity of the two objects; in contrast, Golomb, Kupitz, et al. (2014) demonstrated that location only shifted similarity judgments when the two objects were perceptually very similar; when the two objects were very different in shape, location did not influence the similarity judgments.) Finally, the fact that we failed to find a spatiotopic bias in the current experiments argues further against a response-level account: a perceptual bias might occur in either coordinate system, but a response bias would be expected to occur in the coordinates more ecologically relevant for real-world behavior (i.e., spatiotopic).
Because it appears even when location is irrelevant to the task, is specific to location, and does not appear to be due to response-level conflict, the Spatial Congruency Bias seems to reflect a fundamental influence of object location on identity and feature representations (similar in spirit to Fischer & Whitney, 2014; Liberman, Fischer, & Whitney, 2014). The Spatial Congruency Bias not only provides evidence that location is special, but it suggests that this privilege may result in irrelevant location information being automatically encoded with and bound to other object properties.
Note that our results cannot be explained by retinotopic persistence, both because stimuli were masked2 and because the second stimulus was presented approximately one second after the first. We also ensured that location was completely uninformative to the “same/different item” judgment participants performed; i.e., both retinotopic and spatiotopic location were task-irrelevant. Same/Different Item conditions were equally likely to occur for each location condition, and subjects completed exactly the same number of trials in each possible combination of Same/Different Item condition and location condition. Small heterogeneities or borders on the screen, if anything, would have caused spatiotopic rather than retinotopic effects and thus also cannot explain our results. (It is possible that the fixation point itself could have served as a retinotopic landmark, although it is not obvious how this could influence feature perception, especially given that previous research has shown that this factor did not contribute to the dominance of retinotopic spatial effects (Golomb & Kanwisher, 2012b)).
While recent research raises the possibility of (retinotopic) heterogeneities in the visual field (Afraz, Pashkam, & Cavanagh, 2010), we do not believe this can explain our full pattern of results. An explanation based on retinal/neural heterogeneity would indeed predict that participants would be more likely to judge two identical objects as the same when they’re in the same retinotopic location vs. a different retinotopic location. I.e., we might expect representations in the retina, V1, etc. to be more similar when two identical stimuli are presented in the same rather than a different location, since different sets of photoreceptors, V1 cells, etc. might represent the input in slightly different ways. However, this explanation would also predict that participants would be better able to discriminate between different objects when they appear in the same location and the inputs can be directly compared. Instead, we found the opposite pattern for different objects, with participants more likely to make errors and mistakenly judge two different objects as the same when they were in the same retinotopic location (Tables 3–5).
It is still possible that there is some downstream mechanism that takes into account overall similarity of the neural signal coming from the retina/V1 (including the location information); this would go against the idea of later visual areas important for object recognition having more location-tolerant identity representations (Rust & Dicarlo, 2010), but it could reflect a potential neural mechanism for the Spatial Congruency Bias. However, we also note that this explanation based on overall neural similarity would predict that any property that would make the two objects more neurally similar would be expected to result in a Congruency Bias; however, Golomb et al. (2014) showed that this was not the case, and location was the only property that induced a Congruency Bias.
The Spatial Congruency Bias seems to reflect an underlying tendency, or default assumption, of our visual system that stimuli appearing in the same location are likely to be the same object. It is interesting, then, that we find this effect in retinotopic, as opposed to the more ecologically relevant spatiotopic, coordinates. This suggests that if two objects are bound to the same retinotopic location, even if an eye movement intervenes, they will be more likely to be judged as the same object. This does not mean that their features are being directly bound to each other, but rather that location is serving as an indirect link; i.e., the idea that location serves a critical role as a pointer, index, or “object file” during object recognition (e.g., Huang & Pashler, 2007; Kahneman, Treisman, & Gibbs, 1992; Treisman & Gelade, 1980). Interestingly, in other classic paradigms, an inability to report a change or to detect a second stimulus accurately is taken as evidence that the two events have been perceived as a single object or instance (Deubel, Schneider, & Bridgeman, 1996; Kanwisher et al., 1987). This effect is analogous to an increased tendency to judge two different objects as the same in our paradigm. Such an effect (expressed in our paradigm as an increase in false alarms, i.e., reporting “same” when the objects are not) would in fact be consistent with our finding of a shift in bias; the difference is that we find an increase in both hits and false alarms.
Implications of a retinotopic Spatial Congruency Bias
In our main experiments (Experiments 3–5), we used a saccade to differentiate between binding in retinotopic and spatiotopic coordinates. We found that participants were more biased to judge faces, objects, and gabors as the “same item” when they appeared in the same retinotopic but not the same spatiotopic location. Moreover, we found no interaction between reference frame and stimulus complexity or post-saccade delay, demonstrating that not only is this type of object-location binding based natively in retinotopic coordinates, but it does not automatically remap or re-bind to account for spatial shifts during eye movements.
The lack of stimulus type effect argues against the idea that object-location binding transitions from retinotopic to spatiotopic coordinates with increasing stimulus complexity (cf. Andersen et al., 1997; Melcher & Colby, 2008; Melcher, 2005; Wurtz, 2008). Of course, it’s possible that binding could take place in spatiotopic coordinates with even more complex stimuli, and/or with a more complex task (e.g., recognition of facial identity across viewpoints or comparison of emotional expressions). However, Melcher (2005) found spatiotopic face aftereffects even without a more complex task, suggesting that face stimuli alone are complex enough to evoke spatiotopic responses in other scenarios.
The lack of a delay effect argues against the possibility that the Spatial Congruency Bias dynamically transitions to spatiotopic coordinates over time. If features of an object were remapped along with spatial attention, we would expect the Congruency Bias to update to spatiotopic coordinates either predictively (e.g., Duhamel et al., 1992; Rolfs et al., 2011), or at the latest within about 150ms of the completion of a saccade, which is the time it takes for spatial attention to fully remap (Golomb et al., 2008). However, even at a delay of 500ms after an eye movement, the Spatial Congruency Bias did not update to spatiotopic coordinates for any of the three stimulus types.
This complete lack of spatiotopic bias effects was surprising in light of the intuition that spatial representations should at least on some level be based on ecologically relevant coordinates that are functionally tied to behavior, e.g., spatiotopic coordinates. Our results thus support neither an explicit spatiotopic representation nor an anticipatory remapping or rebinding of feature information. Even with a slower remapping or rebinding mechanism, we would have expected this updating to happen within 500ms of the completion of a saccade (otherwise the timecourse would not be practical for real-world stability). Interestingly, our results suggest that this link between object and location representations is based in retinotopic coordinates and may not be automatically updated at all, consistent with the idea that updating to spatiotopic coordinates only occurs when behaviorally relevant or for attentionally salient items (Golomb et al., 2008; Gottlieb et al., 1998; Joiner, Cavanaugh, & Wurtz, 2011). As discussed more below, the implications of such a system may mean that other compensatory mechanisms, such as the use of visual landmarks to stabilize perception (Deubel, 2004; McConkie & Currie, 1996; Verfaillie, 1997), may be required to supplement these lower-level retinotopic representations in order to create a stable spatiotopic percept.
Are our results at odds with previously reported spatiotopic effects?
There has been considerable evidence for spatiotopic visual processing: spatiotopic responses in monkey neurons (Duhamel et al., 1997; Galletti et al., 1993; Snyder et al., 1998) and human cortex (Crespi et al., 2011; d’Avossa et al., 2007; McKyton & Zohary, 2007), spatiotopic transfer of visual aftereffects (Ezzati et al., 2008; Melcher, 2005, 2007; Turi & Burr, 2012; Zimmermann et al., 2013), and spatiotopic integration or interaction of features across saccades (Demeyer et al., 2011, 2009, 2010; Fischer & Whitney, 2014; Hayhoe et al., 1991; Melcher & Morrone, 2003; Oostwoud Wijdenes et al., 2015; Prime et al., 2006; Van Eccelpoel et al., 2008; Wittenberg, Bremmer, & Wachtler, 2008; Wolfe & Whitney, 2015). Why didn’t we see spatiotopic effects in the current paradigm? First, we argue that the lack of a spatiotopic Congruency Bias, even when given time to update and for relatively complex stimuli, carries compelling theoretical implications for object and location stability across saccades, as discussed above. However, we also note that our retinotopic results are not necessarily at odds with previously reported spatiotopic effects, given the mixed record of retinotopic and spatiotopic effects in the literature, including spatiotopic effects that occur only under certain circumstances.
For example, several previously reported spatiotopic effects have been strongly challenged (Cavanagh et al., 2010)—while some papers reported spatiotopic transfer of aftereffects (Ezzati et al., 2008; Melcher, 2005, 2007; Turi & Burr, 2012), others have failed to replicate these results or have found effects that are much smaller in size (Afraz & Cavanagh, 2009; Knapen et al., 2009; Wenderoth & Wiese, 2008) or emerge later in time (Golomb et al., 2008; Mathôt & Theeuwes, 2010b; Zimmermann et al., 2013).
Other apparent discrepancies may be caused by differences in methodology. One recent study using a related paradigm, the Serial Dependence effect (Fischer & Whitney, 2014), showed that the orientation of a gabor can bias the report of a subsequent gabor presented in the same location, with an experiment reported in the supplement claiming evidence for both retinotopic and spatiotopic serial dependence. However, they combined trials in such a way that the “spatiotopic” condition included trials that shared both the same spatiotopic and the same retinotopic location (i.e., [spatiotopic-only + both-same] compared to [retinotopic-only + neither-same], and vice versa for the “retinotopic” condition). This makes it impossible to know for certain whether the spatiotopic and retinotopic effects they found were driven by the spatiotopic-only or retinotopic-only trials, or by the “both” vs. “neither” trials.
Furthermore, some spatiotopic effects may only occur under certain circumstances. Lisi, Cavanagh, & Zorzi (2015) found that presenting objects continuously throughout a saccade allowed for more stable spatiotopic attention than when objects disappeared before a saccade (see also Deubel, Bridgeman, & Schneider, 1998; Deubel, Schneider, & Bridgeman, 1996). Demeyer et al. (2010) similarly found the most spatiotopic integration between subtly different stimuli when pre- and post-saccadic stimuli were continuously present across a saccade, and this integration disappeared entirely when they were separated by a blank and a mask. It is possible that spatiotopic processing may be involved in our task but not reflected in the Spatial Congruency Bias; indeed, we found some evidence for spatiotopic facilitation of RT and accuracy, although these effects were also generally weaker and less systematic than the corresponding retinotopic effects. We focused on the Spatial Congruency Bias because of its reported links to object-location binding (Golomb, Kupitz, et al., 2014). However, even if some neurons appear to have spatiotopic properties (Duhamel et al., 1997; Galletti et al., 1993; Snyder et al., 1998) or show remapping of feature information (Subramanian & Colby, 2014), it is not certain whether these would result in spatiotopic object-location binding, either in general, or specifically with respect to a spatiotopic Congruency Bias. Finally, we cannot rule out the possibility that multiple mechanisms might be involved in object-location binding (Hollingworth & Rasmussen, 2010), and that these mechanisms may differ in their reference frames and/or updating across saccades.
How, then, do we form stable perceptions of objects?
Our results suggest that, at least under certain circumstances, object-location binding is retinotopic and is not automatically updated after each saccade. In terms of neural mechanisms, this would be consistent with findings that object-selective areas are primarily retinotopic (Golomb & Kanwisher, 2012a), although the continued controversy over retinotopic versus spatiotopic effects (e.g., Crespi et al., 2011) underscores that a retinotopic effect here was not a given. Given that these regions contain information about both object identity and location, they are likely candidates for supporting object-location binding within a single brain region (Di Lollo, 2012). It is also possible that object representations in ventral visual cortex are bound to location information in other areas, such as dorsal stream areas. Some of these areas contain location information that is remapped during a saccade (Duhamel et al., 1992; Gottlieb, Kusunoki, & Goldberg, 1998; Kusunoki & Goldberg, 2003; Sommer & Wurtz, 2006; Umeno & Goldberg, 1997; Walker, Fitzgibbon, & Goldberg, 1995; but see Zirnsak & Moore, 2014; Zirnsak, Steinmetz, Noudoost, Xu, & Moore, 2014). However, it may still be the case that while “attentional pointers” (Cavanagh et al., 2010) are remapped, the feature/object representations they are bound to are not. Instead, the binding may need to be reestablished after each eye movement, and this may not happen under all circumstances, especially if an object is not continually present.
If binding is not automatically updated after a saccade, we might rely on mechanisms other than feature remapping or spatiotopic representations to maintain visual stability. For example, location updating might be integrated with processes involving transsaccadic memory, learning, and feature prediction (Herwig & Schneider, 2014). Many studies have also focused on the role of visual landmarks in stability across a saccade (Deubel, 2004; McConkie & Currie, 1996; Verfaillie, 1997). Since our visual environment is rich with regularities and features we could use across saccades, these findings could translate into a perception that the world is stable across saccades even if object-location binding is not automatically updated.
Supplementary Material
Acknowledgments
This work was supported by research grants from the National Institutes of Health (R01-EY025648) and Alfred P. Sloan Foundation (BR-2014-098). We thank Adeel Tausif for assistance with stimulus generation and data collection; Sarah Tower-Richardi for pilot data and discussion; members of the Golomb Lab for assistance with data collection; Xiaoli Zhang for comments on the manuscript; and Sarah Cormiea for helpful discussion.
Footnotes
Note that signal detection theory alone cannot distinguish between perceptual and response-level biases; see Witt, Taylor, Sugovic, & Wixted (2015) and Discussion.
We found the same pattern of results in a pilot data set on a same/different color judgment task where no masks were presented (Tower-Richardi, Golomb, & Kanwisher, VSS 2011).
References
- Afraz A, Cavanagh P. The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations. Journal of Vision. 2009;9(10):10.1–17. doi: 10.1167/9.10.10.
- Afraz A, Pashkam MV, Cavanagh P. Spatial heterogeneity in the perception of face and form attributes. Current Biology. 2010;20(23):2112–2116. doi: 10.1016/j.cub.2010.11.017.
- Andersen RA, Snyder LH, Bradley DC, Xing J. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience. 1997;20:303–330. doi: 10.1146/annurev.neuro.20.1.303.
- Brainard D. The psychophysics toolbox. Spatial Vision. 1997;10:433–436. doi: 10.1163/156856897X00357.
- Casarotti M, Lisi M, Umiltà C, Zorzi M. Paying attention through eye movements: A computational investigation of the premotor theory of spatial attention. Journal of Cognitive Neuroscience. 2012;24(7):1519–1531. doi: 10.1162/jocn_a_00231.
- Cavanagh P, Hunt AR, Afraz A, Rolfs M. Visual stability based on remapping of attention pointers. Trends in Cognitive Sciences. 2010;14(4):147–153. doi: 10.1016/j.tics.2010.01.007.
- Crespi S, Biagi L, D’Avossa G, Burr DC, Tosetti M, Morrone MC. Spatiotopic coding of BOLD signal in human visual cortex depends on spatial attention. PLoS ONE. 2011;6(7):1–14. doi: 10.1371/journal.pone.0021661.
- d’Avossa G, Tosetti M, Crespi S, Biagi L, Burr DC, Morrone MC. Spatiotopic selectivity of BOLD responses to visual motion in human area MT. Nature Neuroscience. 2007;10(2):249–255. doi: 10.1038/nn1824.
- Davidson ML, Fox MJ, Dick AO. Effect of eye movements on backward masking and perceived location. Perception and Psychophysics. 1973;14(1):110–116.
- Demeyer M, De Graef P, Verfaillie K, Wagemans J. Perceptual grouping of object contours survives saccades. PLoS ONE. 2011;6(6):1–8. doi: 10.1371/journal.pone.0021257.
- Demeyer M, De Graef P, Wagemans J, Verfaillie K. Transsaccadic identification of highly similar artificial shapes. Journal of Vision. 2009;9(4):1–14. doi: 10.1167/9.4.28.
- Demeyer M, De Graef P, Wagemans J, Verfaillie K. Parametric integration of visual form across saccades. Vision Research. 2010;50(13):1225–1234. doi: 10.1016/j.visres.2010.04.008.
- Deubel H. Localization of targets across saccades: Role of landmark objects. Visual Cognition. 2004;11(2–3):173–202. doi: 10.1080/13506280344000284.
- Deubel H, Bridgeman B, Schneider WX. Immediate post-saccadic information mediates space constancy. Vision Research. 1998;38(20):3147–3159. doi: 10.1016/S0042-6989(98)00048-0.
- Deubel H, Schneider WX, Bridgeman B. Postsaccadic target blanking prevents saccadic suppression of image displacement. Vision Research. 1996;36(7):985–996. doi: 10.1016/0042-6989(95)00203-0.
- Di Lollo V. The feature-binding problem is an ill-posed problem. Trends in Cognitive Sciences. 2012;16(6):317–321. doi: 10.1016/j.tics.2012.04.007.
- Duhamel JR, Bremmer F, BenHamed S, Graf W. Spatial invariance of visual receptive fields in parietal cortex neurons. Nature. 1997;389(6653):845–848. doi: 10.1038/39865.
- Duhamel JR, Colby CL, Goldberg ME. The updating of the representation of visual space in parietal cortex by intended eye movements. Science. 1992;255(5040):90–92. doi: 10.1126/science.1553535.
- Ezzati A, Golzar A, Afraz ASR. Topography of the motion aftereffect with and without eye movements. Journal of Vision. 2008;8(14):23.1–16. doi: 10.1167/8.14.23.
- Fischer J, Whitney D. Serial dependence in visual perception. Nature Neuroscience. 2014;17(5):738–743. doi: 10.1038/nn.3689.
- Galletti C, Battaglini PP, Fattori P. Parietal neurons encoding spatial locations in craniotopic coordinates. Experimental Brain Research. 1993;96(2):221–229. doi: 10.1007/BF00227102.
- Gardner JL, Merriam EP, Movshon JA, Heeger DJ. Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. The Journal of Neuroscience. 2008;28(15):3988–3999. doi: 10.1523/JNEUROSCI.5476-07.2008.
- Golomb JD, Chun MM, Mazer JA. The native coordinate system of spatial attention is retinotopic. The Journal of Neuroscience. 2008;28(42):10654–10662. doi: 10.1523/JNEUROSCI.2525-08.2008.
- Golomb JD, Kanwisher N. Higher level visual cortex represents retinotopic, not spatiotopic, object location. Cerebral Cortex. 2012a;22(12):2794–2810. doi: 10.1093/cercor/bhr357.
- Golomb JD, Kanwisher N. Retinotopic memory is more precise than spatiotopic memory. Proceedings of the National Academy of Sciences of the United States of America. 2012b;109(5):1796–1801. doi: 10.1073/pnas.1113168109.
- Golomb JD, Kupitz CN, Thiemann CT. The influence of object location on identity: A “Spatial Congruency Bias”. Journal of Experimental Psychology: General. 2014;143(6):2262–2278. doi: 10.1037/xge0000017.
- Golomb JD, L’Heureux ZE, Kanwisher N. Feature-binding errors after eye movements and shifts of attention. Psychological Science. 2014;25(5):1067–1078. doi: 10.1177/0956797614522068.
- Golomb JD, Marino AC, Chun MM, Mazer JA. Attention doesn’t slide: spatiotopic updating after eye movements instantiates a new, discrete attentional locus. Attention, Perception & Psychophysics. 2011;73(1):7–14. doi: 10.3758/s13414-010-0016-3.
- Golomb JD, Nguyen-Phuc AY, Mazer JA, McCarthy G, Chun MM. Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. The Journal of Neuroscience. 2010;30(31):10493–10506. doi: 10.1523/JNEUROSCI.1546-10.2010.
- Gottlieb JP, Kusunoki M, Goldberg ME. The representation of visual salience in monkey parietal cortex. Nature. 1998;391(6666):481–484. doi: 10.1038/35135.
- Harrison WJ, Bex PJ. Integrating retinotopic features in spatiotopic coordinates. Journal of Neuroscience. 2014;34(21):7351–7360. doi: 10.1523/JNEUROSCI.5252-13.2014.
- Hayhoe M, Lachter J, Feldman J. Integration of form across saccadic eye movements. Perception. 1991;20(3):393–402. doi: 10.1068/p200393.
- Herwig A, Schneider WX. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General. 2014;143(5):1903–1922. doi: 10.1037/a0036781.
- Hollingworth A, Rasmussen IP. Binding objects to locations: the relationship between object files and visual working memory. Journal of Experimental Psychology: Human Perception and Performance. 2010;36(3):543–564. doi: 10.1037/a0017836.
- Huang L, Pashler H. A Boolean map theory of visual attention. Psychological Review. 2007;114(3):599–631. doi: 10.1037/0033-295X.114.3.599.
- Hunt AR, Cavanagh P. Remapped visual masking. Journal of Vision. 2011;11(1):1–8. doi: 10.1167/11.1.13.
- Irwin DE, Brown JS, Sun JS. Visual masking and visual integration across saccadic eye movements. Journal of Experimental Psychology: General. 1988;117(3):276–287. doi: 10.1037/0096-3445.117.3.276.
- Joiner WM, Cavanaugh J, Wurtz RH. Modulation of shifting receptive field activity in frontal eye field by visual salience. Journal of Neurophysiology. 2011;106(3):1179–1190. doi: 10.1152/jn.01054.2010.
- Jonikaitis D, Szinte M, Rolfs M, Cavanagh P. Allocation of attention across saccades. Journal of Neurophysiology. 2013;109:1425–1434. doi: 10.1167/12.9.440.
- Kahneman D, Treisman A, Gibbs BJ. The reviewing of object files: object-specific integration of information. Cognitive Psychology. 1992;24(2):175–219. doi: 10.1016/0010-0285(92)90007-O.
- Knapen T, Rolfs M, Cavanagh P. The reference frame of the motion aftereffect is retinotopic. Journal of Vision. 2009;9(5):16.1–7. doi: 10.1167/9.5.16.
- Kusunoki M, Goldberg ME. The time course of perisaccadic receptive field shifts in the lateral intraparietal area of the monkey. Journal of Neurophysiology. 2003;89(3):1519–1527. doi: 10.1152/jn.00519.2002.
- Lescroart MD, Kanwisher N, Golomb JD. No evidence for automatic remapping of stimulus features or location found with fMRI. Frontiers in Systems Neuroscience. 2016;10:53. doi: 10.3389/fnsys.2016.00053.
- Liberman A, Fischer J, Whitney D. Serial dependence in the perception of faces. Current Biology. 2014:1–6. doi: 10.1016/j.cub.2014.09.025.
- Lisi M, Cavanagh P, Zorzi M. Spatial constancy of attention across eye movements is mediated by the presence of visual objects. Attention, Perception, & Psychophysics. 2015;77(4):1159–1169. doi: 10.3758/s13414-015-0861-1.
- Mack ML, Richler JJ, Gauthier I, Palmeri TJ. Indecision on decisional separability. Psychonomic Bulletin & Review. 2011;18:1–9. doi: 10.3758/s13423-010-0017-1.
- Mathôt S, Theeuwes J. Evidence for the predictive remapping of visual attention. Experimental Brain Research. 2010a;200(1):117–122. doi: 10.1007/s00221-009-2055-3.
- Mathôt S, Theeuwes J. Gradual remapping results in early retinotopic and late spatiotopic inhibition of return. Psychological Science. 2010b;21(12):1793–1798. doi: 10.1177/0956797610388813.
- McConkie GW, Currie CB. Visual stability across saccades while viewing complex pictures. Journal of Experimental Psychology: Human Perception and Performance. 1996;22(3):563–581. doi: 10.1037/0096-1523.22.3.563.
- McKyton A, Zohary E. Beyond retinotopic mapping: The spatial representation of objects in the human lateral occipital complex. Cerebral Cortex. 2007;17(5):1164–1172. doi: 10.1093/cercor/bhl027.
- McRae K, Butler BE, Popiel SJ. Spatiotopic and retinotopic components of iconic memory. Psychological Research. 1987;49(4):221–227. doi: 10.1007/BF00309030.
- Melcher D. Spatiotopic transfer of visual-form adaptation across saccadic eye movements. Current Biology. 2005;15(19):1745–1748. doi: 10.1016/j.cub.2005.08.044.
- Melcher D. Predictive remapping of visual features precedes saccadic eye movements. Nature Neuroscience. 2007;10(7):903–907. doi: 10.1038/nn1917.
- Melcher D, Colby CL. Trans-saccadic perception. Trends in Cognitive Sciences. 2008;12(12):466–473. doi: 10.1016/j.tics.2008.09.003.
- Melcher D, Morrone MC. Spatiotopic temporal integration of visual motion across saccadic eye movements. Nature Neuroscience. 2003;6(8):877–881. doi: 10.1038/nn1098.
- Merriam EP, Genovese CR, Colby CL. Spatial updating in human parietal cortex. Neuron. 2003;39(2):361–373. doi: 10.1016/S0896-6273(03)00393-3.
- Merriam EP, Genovese CR, Colby CL. Remapping in human visual cortex. Journal of Neurophysiology. 2007;97(2):1738–1755. doi: 10.1152/jn.00189.2006.
- Morgan MJ, Hole GJ, Glennerster A. Biases and sensitivities in geometrical illusions. Vision Research. 1990;30(11):1793–1810. doi: 10.1016/0042-6989(90)90160-M.
- Oostwoud Wijdenes L, Marshall L, Bays PM. Evidence for optimal integration of visual feature representations across saccades. Journal of Neuroscience. 2015;35(28):10146–10153. doi: 10.1523/JNEUROSCI.1040-15.2015.
- Parks NA, Corballis PM. Electrophysiological correlates of presaccadic remapping in humans. Psychophysiology. 2008;45(5):776–783. doi: 10.1111/j.1469-8986.2008.00669.x.
- Pertzov Y, Zohary E, Avidan G. Rapid formation of spatiotopic representations as revealed by inhibition of return. The Journal of Neuroscience. 2010;30(26):8882–8887. doi: 10.1523/JNEUROSCI.3986-09.2010.
- Pitcher D, Charles L, Devlin JT, Walsh V, Duchaine B. Triple dissociation of faces, bodies, and objects in extrastriate cortex. Current Biology. 2009;19(4):319–324. doi: 10.1016/j.cub.2009.01.007.
- Prime SL, Niemeier M, Crawford JD. Transsaccadic integration of visual features in a line intersection task. Experimental Brain Research. 2006;169(4):532–548. doi: 10.1007/s00221-005-0164-1.
- Rolfs M, Jonikaitis D, Deubel H, Cavanagh P. Predictive remapping of attention across eye movements. Nature Neuroscience. 2011;14(2):252–256. doi: 10.1038/nn.2711.
- Rust NC, DiCarlo JJ. Selectivity and tolerance (“invariance”) both increase as visual information propagates from cortical area V4 to IT. The Journal of Neuroscience. 2010;30(39):12978–12995. doi: 10.1523/JNEUROSCI.0179-10.2010.
- Satel J, Wang Z, Hilchey MD, Klein RM. Examining the dissociation of retinotopic and spatiotopic inhibition of return with event-related potentials. Neuroscience Letters. 2012;524(1):40–44. doi: 10.1016/j.neulet.2012.07.003.
- Snyder LH, Grieve KL, Brotchie P, Andersen RA. Separate body- and world-referenced representations of visual space in parietal cortex. Nature. 1998;394:887–891. doi: 10.1038/29777.
- Sommer MA, Wurtz RH. Influence of the thalamus on spatial visual processing in frontal cortex. Nature. 2006;444(7117):374–377. doi: 10.1038/nature05279.
- Subramanian J, Colby CL. Shape selectivity and remapping in dorsal stream visual area LIP. Journal of Neurophysiology. 2014;111(3):613–627. doi: 10.1152/jn.00841.2011.
- Szinte M, Cavanagh P. Spatiotopic apparent motion reveals local variations in space constancy. Journal of Vision. 2011;11(2):1–20. doi: 10.1167/11.2.4.
- Szinte M, Wexler M, Cavanagh P. Temporal dynamics of remapping captured by peri-saccadic continuous motion. Journal of Vision. 2012;12(7):1–18. doi: 10.1167/12.7.12.
- Tower-Richardi S, Golomb JD, Kanwisher N. Binding of location and color in retinotopic, not spatiotopic, coordinates. Journal of Vision. 2011;11(11):521. doi: 10.1167/11.11.521.
- Treisman A. The binding problem. Current Opinion in Neurobiology. 1996;6(2):171–178. doi: 10.1016/S0959-4388(96)80070-5.
- Treisman A, Gelade G. A feature-integration theory of attention. Cognitive Psychology. 1980;12:97–136. doi: 10.1016/0010-0285(80)90005-5.
- Turi M, Burr D. Spatiotopic perceptual maps in humans: evidence from motion adaptation. Proceedings of the Royal Society B: Biological Sciences. 2012;279(1740):3091–3097. doi: 10.1098/rspb.2012.0637.
- Umeno MM, Goldberg ME. Spatial processing in the monkey frontal eye field. Journal of Neurophysiology. 1997;78(3):1373–1383. doi: 10.1152/jn.1997.78.3.1373.
- Van Eccelpoel C, Germeys F, De Graef P, Verfaillie K. Coding of identity-diagnostic information in transsaccadic object perception. Journal of Vision. 2008;8(14):29.1–16. doi: 10.1167/8.14.29.
- Verfaillie K. Transsaccadic memory for the egocentric and allocentric position of a biological-motion walker. Journal of Experimental Psychology: Learning, Memory, & Cognition. 1997;23(3):739–760. doi: 10.1037//0278-7393.20.3.649.
- Walker MF, Fitzgibbon EJ, Goldberg ME. Neurons in the monkey superior colliculus predict the visual result of impending saccadic eye movements. Journal of Neurophysiology. 1995;73(5):1988–2003. doi: 10.1152/jn.1995.73.5.1988.
- Watson AB, Pelli DG. QUEST: a Bayesian adaptive psychometric method. Perception & Psychophysics. 1983;33(2):113–120. doi: 10.3758/BF03202828.
- Wenderoth P, Wiese M. Retinotopic encoding of the direction aftereffect. Vision Research. 2008;48(19):1949–1954. doi: 10.1016/j.visres.2008.06.013.
- Witt JK, Taylor JET, Sugovic M, Wixted JT. Signal detection measures cannot distinguish perceptual biases from response biases. Perception. 2015;44(3):289–300. doi: 10.1068/p7908.
- Wittenberg M, Bremmer F, Wachtler T. Perceptual evidence for saccadic updating of color stimuli. Journal of Vision. 2008;8(14):9.1–9. doi: 10.1167/8.14.9.
- Wixted JT, Stretch V. The case against a criterion-shift account of false memory. Psychological Review. 2000;107(2):368–376. doi: 10.1037/0033-295X.107.2.368.
- Wolfe BA, Whitney D. Saccadic remapping of object-selective information. Attention, Perception, & Psychophysics. 2015;77:2260–2269. doi: 10.3758/s13414-015-0944-z.
- Wurtz RH. Neuronal mechanisms of visual stability. Vision Research. 2008;48(20):2070–2089. doi: 10.1016/j.visres.2008.03.021.
- Yao T, Treue S, Krishna BS. An attention-sensitive memory trace in macaque MT following saccadic eye movements. PLoS Biology. 2016;14(2):1–17. doi: 10.1371/journal.pbio.1002390.
- Zimmermann E, Morrone MC, Fink GR, Burr D. Spatiotopic neural representations develop slowly across saccades. Current Biology. 2013;23(5):R193–R194. doi: 10.1016/j.cub.2013.01.065.
- Zirnsak M, Moore T. Saccades and shifting receptive fields: anticipating consequences or selecting targets? Trends in Cognitive Sciences. 2014;18(12):621–628. doi: 10.1016/j.tics.2014.10.002.
- Zirnsak M, Steinmetz NA, Noudoost B, Xu KZ, Moore T. Visual space is compressed in prefrontal cortex before eye movements. Nature. 2014;507(7493):504–507. doi: 10.1038/nature13149.