Abstract
In three experiments, we investigated transsaccadic object file representations. In each experiment, participants moved their eyes from a central fixation cross to a saccade target located between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials in which the target identity matched one of the preview objects, its color either matched or did not match the previewed object color. The results indicated that color changes disrupt perceptual continuity, but only for the class of objects for which color is diagnostic of object identity. When the color is not integral to identifying an object (for example, when the object is a letter or an object without a characteristic color), object continuity is preserved regardless of changes to the object's color. These results suggest that object features that are important for defining the object are incorporated into its episodic representation. Furthermore, the results are consistent with previous work showing that the quality of a feature's representation determines its importance in preserving continuity.
When viewing a complex scene, observers acquire information that supports the development of mental representations of much of the scene. Some of these representations might preserve information about abstract scene properties, such as the scene's gist (e.g., Potter, 1976) or a description of a depicted event. An observer might also acquire a representation of the layout of a scene, maintaining some information about the configuration of objects or surfaces within the scene (e.g., Sanocki, 2003). Finally, one might acquire one or more representations of specific objects located within the scene (e.g., Hollingworth, 2006). It is this last representation that will be the focus of the present studies.
As Henderson (1994) has pointed out, objects are represented within at least two representational systems. First, observers maintain a set of long-term “type” representations that define general object classes. For example, one might maintain a representation of “dog” that describes the properties of such an object, but does so without respect to a specific exemplar of such an object category. In contrast, a “token” representation is a relatively temporary representation that describes a specific object that occupies a particular place in time and space (a representation of a particular dog that is currently visible, for example, rather than a representation of dogs in general). Such representations are thought to exist within visual working memory (e.g., Irwin & Andrews, 1996).
Token representations have been proposed to play a critical role in many aspects of visual cognition. In general, they appear to preserve perceptual continuity across change (e.g., Kahneman, Treisman, & Gibbs, 1992). For example, Irwin and his colleagues (e.g., Irwin, 1996; Irwin & Andrews, 1996; Irwin & Gordon, 1998) have argued that transsaccadic memory, which is useful for integrating information across eye movements, consists largely of the representation of a small number of object tokens. Furthermore, some have argued that scene representations are built up out of representations of individual object tokens (e.g., Hollingworth, 2004). Thus, in order to understand the representation of complex visual information, it is necessary to understand the representation of object tokens in visual working memory.
A token representation system that has received considerable experimental support is the object file representation proposed by Kahneman and Treisman (1984). According to Kahneman and Treisman, object files are temporary object representations that are created to integrate and maintain information about specific objects in the environment. As described by Kahneman, Treisman, and Gibbs (1992), object files integrate an object's perceptual features, but may also include non-perceptual object properties (such as identity; Gordon & Irwin, 1996, 2000) if those properties can be acquired from the available perceptual information.
According to object file theory, object files play a crucial role in supporting perceptual continuity within and across fixations. On this account, continuity depends on the outcome of three processing stages. When an object is first attended, an object file is created to represent the object's features. When a display change occurs, a correspondence process establishes links between the objects that are currently visible and the object files that were previously created. Establishing these links likely depends on an analysis of simple spatiotemporal display properties (e.g., Kahneman, Treisman, & Gibbs, 1992; Mitroff & Alvarez, 2007), although some have recently argued that the correspondence process may, in some cases, be influenced by an object's surface properties (Hollingworth, Richard, & Luck, 2008). Finally, in the reviewing stage, object file representations are retrieved from memory and compared with their corresponding objects in the new display. When the comparison fails to produce a match between the current object and the object file, a new object file must be created, and responses to the new object are consequently slowed.
Kahneman et al (1992) provided evidence for this role of object files in supporting perceptual continuity using a reviewing paradigm that they developed. This paradigm forms the foundation for the approach used in the present investigation. It is therefore useful to consider the paradigm in detail, and to explain how the paradigm has been used to make inferences about the nature of episodic object representation. In a typical application of the reviewing paradigm, participants initially view a preview display containing two objects (in many cases, square “frames”) that are equidistant from fixation. A stimulus (typically a letter, picture, or word) is presented within each frame, then the preview stimuli are removed from the display and the frames move continuously to new locations within the display. When the frames stop moving, a single target stimulus is presented within one of the frames. The participant's task is usually to name the target, although in some experiments participants indicate whether the target matches one of the preview stimuli (e.g., Noles & Scholl, 2005; Noles, Scholl, & Mitroff, 2005).
There are typically three target conditions. In the same-object (SO) condition, the target matched the previewed object presented in the same frame; in the different-object (DO) condition, the target matched the preview object in the opposite frame; finally, in the no-match (NM) condition, the target did not match either of the previewed objects. Based on these conditions, Kahneman et al (1992) described two experimental effects of theoretical interest. A non-specific preview benefit was defined as the reaction time (RT) difference between the NM and DO conditions, and was thought to reflect general priming from the preview display. Of greater interest was the object-specific preview benefit, defined as the RT difference between the SO and DO conditions. Although responses to the target should benefit in both conditions from previewing the target object, a mismatch between the target and its object file representation should impair performance in the DO condition, relative to the SO condition. The object-specific benefit thus reflects perceptual continuity in the SO condition and a disruption of perceptual continuity in the DO condition.
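The arithmetic behind the two preview effects can be illustrated with a short sketch. The mean RTs below are hypothetical values chosen for illustration, not data from any experiment:

```python
# Hypothetical mean naming RTs (ms) for the three target conditions.
# These values are illustrative only.
mean_rt = {"SO": 700, "DO": 730, "NM": 760}

# Non-specific preview benefit: general priming from having seen the
# preview display at all (NM minus DO).
nonspecific_benefit = mean_rt["NM"] - mean_rt["DO"]

# Object-specific preview benefit: the cost of a mismatch between the
# target and its object file in the DO condition (DO minus SO).
object_specific_benefit = mean_rt["DO"] - mean_rt["SO"]

print(nonspecific_benefit, object_specific_benefit)  # 30 30
```

A larger object-specific benefit thus indicates greater preserved continuity in the SO condition relative to the DO condition.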
Considerable work has been done to identify the features that are represented in object files (e.g., Gordon & Irwin, 1996, 2000; Gordon, Vollmer, & Frankl, 2008; Henderson, 1994; Henderson & Siefert, 2001). The general approach has been to manipulate the similarity of the preview and target objects, and to examine how changes in some shared feature affect the object-specific benefit. When changing a feature is found not to disrupt perceptual continuity (as indexed by the object-specific benefit), it is concluded that the feature is not part of the object file representation. Henderson (1994), for example, found that changing the case in which a letter was displayed (e.g., a lowercase ‘m’ target following an uppercase ‘M’ preview) did not reduce the object-specific benefit, and concluded that letter case is not a represented feature. Based on similar findings, Gordon & Irwin (1996, 2000) argued that object files represent abstract, post-categorical information (such as the object's identity), but do not represent an object's perceptual features.
Using a transsaccadic preview paradigm, Gordon, Vollmer, and Frankl (2008) examined the representation of object orientation in object files. Though different in some respects from the traditional reviewing paradigm, this approach (developed by Henderson and his colleagues; e.g., Henderson & Siefert, 2001) shares features of the reviewing paradigm that permit measurement of object-specific benefits. Because object files are thought to play a critical role in transsaccadic integration (e.g., Irwin & Andrews, 1996), performance in studies using this paradigm likely draws upon the same representation as the more traditional reviewing studies. In the Gordon et al study, participants began each trial by fixating a fixation cross; a preview display was then presented, consisting of two objects, one above and one below a peripheral fixation cross. The participants were instructed to move their eyes to the peripheral cross as soon as the objects appeared; when the eyes landed, a single target was presented in one of the preview locations. The target object either matched one of the preview objects exactly, or was rotated 60° in depth. As in previous experiments, a significant object-specific benefit was observed. Furthermore, the magnitude of this benefit was unaffected by a change in the object's orientation.
While this result is consistent with Gordon & Irwin's (2000) claim that object file representations are abstract, other data seem to contradict this conclusion. In an additional experiment, for example, the preview objects were presented closer to fixation; under such conditions, changes in an object's orientation did reduce object-specific preview benefits (as had been reported by Henderson and Siefert, 2001). Gordon et al (2008) concluded that the extent to which changing an object feature disrupts continuity depends on the quality of the object's representation; moving the object closer to fixation results in a higher quality representation of the object's perceptual features, enhancing the importance of those features to perceptual continuity.
Even when changes to a feature do not disrupt continuity, however, Gordon et al note that the feature may nonetheless be represented in visual working memory. For example, participants were asked whether or not the orientation of the target matched its orientation in the preview display. Because two objects were present in the preview display, accurate performance on this task requires an object-specific representation of the previewed orientation. Although the data from the object-naming task suggested that such information was not represented for objects far from fixation, Gordon et al (2008) found that participants were able to perform this orientation judgment task very accurately for such objects (78.4% correct). These results suggest that object files may indeed routinely represent object orientation, and therefore conflict with the results obtained from the reviewing paradigm. In order to resolve this conflict, Gordon et al argued for a distinction between what features are represented in object files, and the role of those features in preserving continuity across change.
This argument has resonance in recent interpretations of the marked insensitivity to change that is demonstrated in change blindness studies (e.g., Grimes, 1996; Rensink, O'Regan, & Clark, 1997). While change blindness has often been ascribed to a severely impoverished scene representation (e.g., Rensink, 2000), it has been argued more recently that change blindness results not from a failure to encode scene elements, but from a failure to compare pre- and post-change scene representations (e.g., Angelone, Levin, & Simons, 2003; Hollingworth, 2003; Hollingworth & Henderson, 2002; Mitroff, Simons, & Levin, 2004; Simons, 2000; Zelinsky, 2001, 2003).
Gordon et al (2008) offered a similar account of their data. The results from the explicit judgment task suggest that object files represent orientation (confirming conclusions drawn by Henderson and Siefert, 2001). However, even if orientation is represented, changes of orientation will disrupt perceptual continuity only if object orientation plays a role in the comparison process that supports continuity. If the comparison process described by Kahneman et al (1992) relies only on a comparison of object identities, then orientation change will not produce a mismatch (and disrupt continuity), whether orientation is represented or not. Based on the results from the reviewing paradigm, Gordon et al concluded that orientation is represented, but only plays a role in maintaining perceptual continuity when the representation is of especially high quality. In other cases, they argued that continuity depends primarily on a comparison of the post-change and pre-change objects' identities.
It is possible, however, that other object features may play a role in maintaining perceptual continuity; this may be especially likely if those features are essential to the object's perceptual representation or integral to the object's identity. Orientation is not an example of such a feature, in that it reflects a property of the object's relationship with the observer, rather than a property that is intrinsic to the object itself. That is, orientation is determined by the position of the object and by the position of the observer, and so is not a property of the object per se. In contrast, color is an intrinsic part of the object, and may therefore be essential to its perception and representation (e.g., Naor-Raz, Tarr, & Kersten, 2003; Tanaka & Presnell, 1999; Tanaka, Weiskopf, & Williams, 2001). Naor-Raz, Tarr, and Kersten (2003), for example, have shown that long-term visual representations of some objects necessarily include color information, and that color information is automatically retrieved during the identification of those objects. Given the importance of color to object type representations, it is likely that color information is also included in episodic object file representations. In the experiments reported here, we manipulate the color properties of preview and target objects in a transsaccadic reviewing paradigm, and examine the effects of color change on measures of object continuity.
Experiment 1
In Experiment 1 we examined the episodic representation of object color, and its role in preserving perceptual continuity across a saccade. We did so using a class of objects – letters – that do not have strong associations with any particular colors, in order to determine the importance of color to episodic object representations, when color is an arbitrary property of the object. On each trial participants viewed a fixation cross on the left side of the computer screen, and two letters appeared above and below a second fixation cross on the right side of the screen. Participants moved their eyes to the second fixation cross immediately after the letters appeared; during the saccade, the display was changed so that only a single letter was presented in one of the two preview locations when the eyes landed. The subjects then named the target letter as quickly as possible. On some trials the target's color matched its color in the initial display, while on other trials its color was different. We then analyzed target naming times to measure object-specific preview benefits, and to examine how such preview benefits were affected by a change in color. If color is part of the object's representation, and if it plays a role in preserving perceptual continuity, then changing color across the saccade should reduce or eliminate object-specific preview benefits.
Method
Subjects
Thirty-one undergraduate students at North Dakota State University participated in this study in exchange for course credit. All subjects had normal or corrected-to-normal vision.
Stimuli
The stimuli consisted of 16 letters from the English alphabet. Of these, six (A, N, Q, R, S, and T) were used in practice trials and ten (B, C, D, F, G, H, J, K, M, and P) were used in experimental trials. The average size of the letters was 4.0° vertically and 3.5° horizontally. Letters were presented in one of four colors (blue, green, red, and yellow) against a light gray background.
Apparatus
Stimuli were presented at a resolution of 1024 × 768 pixels on an NEC MultiSync FP2141SB color monitor, with a refresh rate of 75 Hz. Participants viewed the screen from a distance of 57 cm with a chin rest to reduce head movement. At this distance, the total display area subtended 32° vertically and 45° horizontally. The fixation cross on the left side of the screen and the saccade target each had a height and width of 0.6°. The distance from the fixation cross to the saccade target was 13°. The distance from the saccade target to the center of each stimulus was 4.6°. Although the letters were presented rather far from fixation, the results of this and subsequent experiments confirm that the stimuli could be identified at this eccentricity. Furthermore, colors remain discriminable and able to influence object recognition at eccentricities considerably larger than those used in Experiments 1-3 (e.g., Naili, Despretz, & Boucart, 2006).
Participants' eye movements were recorded using a head mounted EyeLink II eyetracker (SR Research Ltd., Mississauga, Ontario, Canada) configured to sample eye position at 250 Hz. The eyetracker was calibrated at the beginning of the experiment and a drift correction was performed at the start of each trial. Participants responded by speaking into a microphone attached to a voice key (Cedrus SV-1 Smart Voice Key, San Pedro, CA) to provide naming latencies.
Procedure
Participants initiated the trial sequence by focusing on a fixation cross on the left side of the screen and pressing the “Enter” key on the keyboard. Following a 500 ms interval, two letters appeared on the right side of the screen, above and below a saccade target (which was identical to the fixation cross). This preview display remained on the screen until a saccade was initiated. A saccade was detected when eye velocity exceeded 30°/sec or eye acceleration exceeded 9500°/sec². Across subjects, the mean saccade latency was 247 ms (SD = 75.1 ms).
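The saccade-detection rule amounts to a simple disjunction of two thresholds. A minimal sketch (the function name is ours, not part of the EyeLink API):

```python
# Velocity/acceleration thresholds for online saccade detection,
# as described in the Procedure.
VELOCITY_THRESHOLD = 30.0       # degrees per second
ACCELERATION_THRESHOLD = 9500.0  # degrees per second squared

def is_saccade(velocity: float, acceleration: float) -> bool:
    """Flag a saccade when either threshold is exceeded."""
    return (velocity > VELOCITY_THRESHOLD
            or acceleration > ACCELERATION_THRESHOLD)
```

Because the rule is disjunctive, a fast-accelerating eye movement is flagged even before peak velocity is reached, which is what allows the display change to complete during the saccade itself.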
During the saccade, the preview display was replaced with a target display that contained a single letter, located either above or below the saccade target. In the same-object (SO) condition, the target letter's identity matched that of the letter presented in the same location in the preview display. In the different-object (DO) condition, the target's identity matched the letter that appeared in the opposite location in the preview display (thus, in the experiments reported here, we manipulate position-continuity – whether a preview and target occupy the same spatial location – rather than object-continuity per se). Because the no-match (NM) condition that has often been used does not yield effects of theoretical interest for the present studies, it was excluded from the design of Experiments 1-3, as it has been from previous studies (e.g., Gordon et al, 2008). The color of the target letter was manipulated as well: when the target appeared, its color either matched or did not match its color in the preview display.
Participants responded by naming the target letter. Reaction time (RT) was measured from the onset of the target until the voice key was triggered. After the participant responded, the experimenter noted whether or not the participant correctly named the target letter. Trials on which the participant spoke too softly to trigger the voice key or triggered the voice key prematurely by making an extraneous noise were excluded from the analyses, but not counted as naming errors. Each subject completed one block of 12 practice trials followed by one block of 160 experimental trials.
Results
Before analyzing RTs, the data were trimmed by eliminating RTs greater than 2 s, or those that differed by more than 2.5 standard deviations from that subject's mean RT for that condition (SO or DO) and color (same or different). These criteria eliminated 5.6% of the trials from analysis. We also eliminated trials containing anticipatory saccades (defined as saccades with latencies less than 50 ms); this criterion eliminated an additional 6.8% of the trials from the analysis. We did not exclude the small number of trials on which the first saccade did not land on the saccade target, because previous research (e.g., Gajewski & Henderson, 2005; Gordon et al, 2008) has indicated that object-specific preview benefits are not contingent upon the landing position of the saccade. The data were then analyzed in a 2 (condition) × 2 (color) ANOVA.
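The trimming procedure can be sketched as follows. This is our own illustrative implementation, not the authors' analysis code; it applies the absolute cutoff first and then the per-cell deviation criterion:

```python
from statistics import mean, stdev

def trim_rts(rts, cutoff_ms=2000, criterion=2.5):
    """Drop RTs above an absolute cutoff, then drop RTs more than
    `criterion` standard deviations from the cell mean (here a cell is
    one subject's condition x color combination)."""
    kept = [rt for rt in rts if rt <= cutoff_ms]
    if len(kept) < 2:  # stdev is undefined for fewer than two values
        return kept
    m, sd = mean(kept), stdev(kept)
    return [rt for rt in kept if abs(rt - m) <= criterion * sd]
```

For example, `trim_rts([700, 710, 720, 730, 5000])` removes only the 5000 ms outlier, since the remaining RTs all fall well within 2.5 SD of their mean.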
Mean RTs and error rates are presented in Table 1. In the analyses that follow, all effects are significant at the 0.05 level, except as otherwise noted. The results of an ANOVA performed on the RT data revealed a main effect of color, F(1, 30) = 6.6, MSE = 856, as participants responded more slowly when letter color changed (M = 727 ms) than when it did not change (M = 713 ms). Importantly, there was a significant object-specific preview benefit, F(1, 30) = 8.2, MSE = 1004, with participants responding faster in the SO condition (M = 712 ms) than in the DO condition (M = 728 ms). This effect did not interact with color change, F(1, 30) < 1, MSE = 801; the object-specific preview benefit was as large when letter color changed (M = 16 ms) as when it stayed the same (M = 17 ms). Planned comparisons confirmed that the object-specific preview benefit was significant in both the same-color, F(1, 30) = 5.6, MSE = 801, and different-color conditions, F(1, 30) = 4.7, MSE = 801.
Table 1. Mean reaction times and preview effects (in ms) and error rates (in percentages) in Experiment 1.
| Condition | Same Color | Different Color |
|---|---|---|
| Same-Object | 705 (0.1) | 719 (0.3) |
| Different-Object | 722 (0.0) | 735 (0.2) |
| Object-Specific Benefit | 17 (-0.1) | 16 (0.1) |
Because the participants' task was to name highly familiar and discriminable objects, error rates were very low. An ANOVA performed on the error data revealed no effect of color, F(1, 30) = 2.8, MSE = 0.41. There was no object-specific preview benefit, and no interaction of the object-specific preview benefit with color change, both Fs < 1. Thus, there was no indication of a speed-accuracy tradeoff that might have contributed to the RT results.
Discussion
As expected, we observed a significant object-specific preview benefit in Experiment 1: participants were faster to name a letter that had been previewed in the same location than to name a letter that had been previewed in the opposite location. Furthermore, changing the color of the letter had no effect on the magnitude of the preview benefit. Such a result might suggest that color information is not represented in object files. Alternatively, color information may be represented, but may not play a substantial role in the processes that underlie object continuity. The results of Experiment 1 support the latter conclusion; although color change did not disrupt perceptual continuity, it did produce an overall effect on naming latency. Thus, it appears that, although object-specific color information is retained, it does not play a role in preserving continuity.
Experiment 2
The results of Experiment 1 may reflect the unique properties of the letters used as preview and target stimuli. As symbolic objects, letters may represent an unusual case in which color is an entirely arbitrary object feature. For most observers, for example, it is unlikely that any particular letter is strongly associated with a specific color. Furthermore, the contours that make up a letter do not represent the surfaces of a concrete object in the real world. Features, such as color, that typically define real object surfaces may, therefore, play no role in defining the essential properties of the letter stimuli (Friedman, 1980). In contrast, color is an intrinsic part of the surfaces that make up real objects, and of the surfaces depicted in pictorial representations or photographs of objects. In addition, color is often integral to defining objects in the real world. This is especially true for the class of objects that have typical or diagnostic colors; a banana, for example, is much more likely to be yellow than to be blue or red. Because Naor-Raz et al (2003) have shown that color is an essential component of the long-term representation of such objects, color may play an important role in their episodic representation as well, and in their perceptual continuity across change. This possibility was examined in Experiment 2, in which the stimuli consisted of photographs of real-world objects.
Method
Subjects
Seventy-five undergraduate students at North Dakota State University participated in this study in exchange for course credit. All subjects had normal or corrected-to-normal vision.
Stimuli
The stimuli consisted of 87 color photographs of common objects (including 12 used exclusively for practice trials and 75 used for experimental trials), taken from the Photo Objects collection (Hemera Technologies, Inc.) and from a database provided by Dr. Michael J. Tarr (http://www.tarrlab.org). These included objects that have a typical or diagnostic color (which we call diagnostic objects) and those that do not (which we call non-diagnostic objects). Two artificially colored versions of each object were created; for diagnostic objects, one version was colored using the object's typical color, while the other version was colored using an implausible color for that object. For non-diagnostic objects, for which a much larger range of colors are possible, both versions were plausibly colored. The size of the objects varied, but all objects fit within a 7° by 7° square. Note that, because of a programming error, there were 38 non-diagnostic objects, but only 37 diagnostic objects, included in the experiment.
In order to confirm the diagnosticity of the objects used in Experiment 2, a separate group of 15 participants completed a norming study. In the norming study, each subject viewed grayscale images of the objects used for the experimental trials in Experiment 2, and typed in the color that they would expect the object to have in the real world. Analysis of the norming data confirmed that there was much better agreement concerning the color of diagnostic objects (M = 87.0%) than the color of non-diagnostic objects (M = 54.7%), t(73) = 7.07, p < 0.01. Furthermore, when asked to identify the most likely color for the object, fewer unique responses were offered for diagnostic objects (M = 2.0) than for non-diagnostic objects (M = 5.0), t(73) = 7.56, p < 0.01. Thus, the results of the norming study confirm the status of our diagnostic and non-diagnostic objects.
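The agreement measure in the norming study amounts to scoring, for each object, the percentage of participants who gave the modal color response. A minimal sketch (our own scoring function, assuming free-text responses have already been normalized to single color names):

```python
from collections import Counter

def color_agreement(responses):
    """Percent of responses matching the modal (most frequent) color name
    for one object."""
    counts = Counter(responses)
    modal_count = counts.most_common(1)[0][1]
    return 100.0 * modal_count / len(responses)
```

Under this scoring, a diagnostic object such as a banana should approach ceiling (nearly all participants respond "yellow"), whereas a non-diagnostic object such as a car should yield many unique responses and correspondingly low agreement.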
Apparatus
Stimuli were presented at a resolution of 1024 × 768 pixels on an NEC MultiSync FP2141SB color monitor, with a refresh rate of 75 Hz. Participants viewed the screen from a distance of 57 cm with a chin rest to reduce head movement. At this distance, the total display area subtended 32° vertically and 45° horizontally. The fixation cross on the left side of the screen and the saccade target each had a height and width of 0.6°. The distance from the fixation cross to the saccade target was 16.0°. The distance from the saccade target to the center of each stimulus was 5.4°. As in Experiment 1, participants' eye movements were recorded using a head mounted EyeLink II eyetracker, and naming latencies were recorded using a Cedrus SV-1 Voice Key.
Procedure
The procedure was essentially the same as that used in Experiment 1: participants began each trial by fixating a fixation cross and then, when the preview display appeared, made a saccade to a peripheral saccade target located between the two preview objects. Across subjects, the mean saccade latency was 282 ms (SD = 107.4 ms). Following the saccade, participants named a single target object presented above or below the saccade target. Each subject completed one block of 12 practice trials followed by one block of 150 experimental trials. For each participant, each object appeared in only two trials. Each color version of each object was used as a target on half of the trials.
Results
Before analyzing RTs, the data were trimmed by eliminating RTs greater than 2 s, or those that differed by more than 2.5 standard deviations from that subject's mean RT for that condition (SO or DO), color (same or different), and diagnosticity (color diagnostic or non-diagnostic). These criteria eliminated 3.4% of the trials from analysis. We also eliminated trials containing anticipatory saccades (defined as saccades with latencies less than 50 ms); this criterion eliminated an additional 5.5% of the trials from the analysis. The data were then analyzed in a 2 (condition) × 2 (color) × 2 (diagnosticity) ANOVA.
Mean RTs and error rates are presented in Table 2. In the analyses that follow, all effects are significant at the 0.05 level, except as otherwise noted. Results from the ANOVA conducted on mean RTs revealed a main effect of condition, F(1, 74) = 31.3, MSE = 6808. Thus, there was a significant object-specific preview benefit, with faster RTs in the SO condition (M = 985 ms) than in the DO condition (M = 1023 ms). There were also significant main effects of diagnosticity, F(1, 74) = 13.2, MSE = 5425, with faster RTs for non-diagnostic objects (M = 993 ms) than diagnostic objects (M = 1015 ms), and color, F(1, 74) = 8.2, MSE = 3948, with faster RTs when the color of the target matched its previewed color (M = 996 ms) than when its color changed (M = 1011 ms). As in Experiment 1, therefore, the results suggest that object color is represented in object files. Furthermore, this effect did not interact with diagnosticity, F(1, 74) < 1, MSE = 3683, suggesting that object files preserve the color of objects, regardless of whether or not the color is diagnostic for the object's identity.
Table 2. Mean reaction times and preview effects (in ms) and error rates (in percentages) in Experiment 2.
| Condition | Same Color | Different Color |
|---|---|---|
| *Diagnostic* | | |
| Same-Object | 973 (5.4) | 1019 (5.6) |
| Different-Object | 1040 (4.3) | 1027 (4.3) |
| Object-Specific Benefit | 67 (-1.1) | 8 (-1.3) |
| *Non-Diagnostic* | | |
| Same-Object | 964 (2.9) | 984 (2.4) |
| Different-Object | 1009 (3.8) | 1015 (4.0) |
| Object-Specific Benefit | 45 (0.9) | 31 (1.6) |
Unlike in Experiment 1, color change significantly disrupted object continuity in Experiment 2, producing a significant color by condition interaction, F(1, 74) = 13.5, MSE = 3909. This interaction reflects the fact that the object-specific preview benefit was greater when the target's color matched its previewed color (M = 56 ms) than when its color changed (M = 19 ms). Thus, the results suggest that object color plays a role in preserving continuity across saccades when the stimuli consist of depictions of concrete objects. The results further suggest that the transsaccadic reviewing paradigm is sufficiently sensitive to detect effects of color change on object continuity, strengthening our conclusion that color did not play a role in preserving continuity in Experiment 1.
The extent to which color plays such a role appears to depend on how integral the color is to defining the object's identity. This is reflected in a significant three-way interaction of condition, color, and diagnosticity, F(1, 74) = 5.1, MSE = 3701. For objects that have a diagnostic color, color changes dramatically reduce the object-specific preview benefit; the benefit is 67 ms when object color is constant, but just 8 ms when the object's color changes across a saccade. In contrast, color changes affect the preview benefit less dramatically for objects whose color is non-diagnostic, reducing the effect from 45 ms in the same-color condition to 31 ms in the different-color condition. Planned comparisons confirmed that the reduction in the object-specific preview benefit was significant for diagnostic objects, F(1, 74) = 53.0, MSE = 3701, but not for non-diagnostic objects, F(1, 74) = 1.1, MSE = 3701.
An ANOVA performed on the error data revealed a main effect of diagnosticity, F(1, 74) = 12.4, MSE = 0.32, with more errors in naming objects with diagnostic colors (M = 4.9%) than objects without diagnostic colors (M = 3.3%). This effect interacted with condition, F(1, 74) = 7.8, MSE = 0.29. For non-diagnostic objects, participants made slightly more errors in the DO condition (M = 3.9%) than in the SO condition (M = 2.7%). For diagnostic objects, in contrast, participants made more errors in the SO condition (M = 5.5%) than in the DO condition (M = 4.3%). None of the other main effects or interactions was significant, all Fs < 1.
Discussion
As in Experiment 1, the results of Experiment 2 revealed a significant object-specific preview benefit. Unlike Experiment 1, this benefit was reduced by a color change across the saccade, suggesting that color is part of the episodic representation of concrete, real-world objects, and that it plays a role in preserving the perceptual continuity of such objects across change. However, the importance of color in preserving perceptual continuity appears to apply primarily to the class of objects that have typical or diagnostic colors. For those objects, changing color disrupts perceptual continuity, eliminating the object-specific preview benefit. In contrast, for objects that are not strongly associated with one color (or with a very small number of colors), color changes do not significantly reduce or eliminate object-specific preview benefits, despite evidence that the color of those objects is represented in visual working memory. The implications of these results are considered in the General Discussion.
Experiment 3
The results of Experiment 2 suggest that, for objects that have diagnostic or typical colors, color is part of the episodic object representation and plays a role in preserving perceptual continuity across change. An interesting question is whether the order in which typical and atypical object versions are viewed matters (that is, whether color changes are more disruptive following a typical or an atypical color preview). If an object's typical color is retrieved from long-term memory and integrated with its object file, then one might expect changes to typically colored objects to be especially disruptive. On the other hand, atypical colors may be particularly salient, and therefore more likely to be represented in object files and to play a role in determining continuity. Alternatively, if the quality of a feature's representation determines its importance to preserving continuity (Gordon et al, 2008), and if both typical and atypical object colors are represented with high quality (relative to neutral colors), then changes to both should be disruptive.
The design of Experiment 2, in which each object appeared only twice, did not permit a separate analysis of trials in which the preview object was typically or atypically colored. In Experiment 3, we systematically manipulated preview color in order to determine whether or not color changes are more disruptive following a typical or atypical preview, or are equally disruptive in either case.
Method
Subjects
Forty undergraduate students at North Dakota State University participated in this study in exchange for course credit. All subjects had normal or corrected-to-normal vision.
Stimuli
The stimuli consisted of 45 diagnostic object photographs (including the 37 diagnostic object photographs used in Experiment 2). Because Experiment 3 was focused on examining the representation of typical or atypical object color, we did not include the non-diagnostic objects from Experiment 2. Subsequent analyses revealed that two of the objects were rarely named correctly by our participants, and the analyses we report below therefore exclude trials in which those objects appeared.
Apparatus
The apparatus was the same as in Experiment 2, except that participants' eye movements were recorded using an Eyelink 1000 eyetracker (SR Research Ltd., Mississauga, Ontario, Canada) configured to sample eye position at 1000 Hz. Stimuli were presented at a resolution of 1024 × 768 pixels on a Viewsonic Graphics Series G225f 21″ color monitor, with a refresh rate of 75 Hz.
Procedure
The procedure was essentially the same as that used in Experiment 2; participants began each trial by fixating on a fixation cross, and then made a saccade to a peripheral target cross located between two preview objects when the preview display appeared. In an attempt to reduce the incidence of anticipatory saccades, in Experiment 3 the SOA between the onset of the initial fixation cross and the onset of the preview display was randomly selected on each trial to be either 500, 750, or 1000 ms. Across subjects, the mean saccade latency was 258 ms (SD = 108.3 ms). Following the saccade, participants named a single target object that was presented above or below the saccade target. Each subject completed one block of 12 practice trials followed by one block of 720 experimental trials. For each participant, each object appeared once in each combination of preview typicality (typical or atypical preview), color change (same or different color), target location (top or bottom), and condition (SO or DO).
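The trial count follows directly from the factorial design: 45 objects crossed with four two-level factors. The following sketch (with placeholder object labels) enumerates the design to confirm the 720 experimental trials described above:

```python
from itertools import product

# Hypothetical enumeration of the Experiment 3 design: each of the 45 objects
# appears once in every combination of the four two-level factors.
objects = [f"object_{i}" for i in range(45)]  # placeholder labels
typicality = ["typical", "atypical"]          # preview typicality
color_change = ["same", "different"]          # color change across the saccade
location = ["top", "bottom"]                  # target location
condition = ["SO", "DO"]                      # same-object / different-object

trials = list(product(objects, typicality, color_change, location, condition))
print(len(trials))  # 720
```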
Results
Before analyzing RTs, the data were trimmed by eliminating RTs greater than 2 s, or those that differed by more than 2.5 standard deviations from that subject's mean RT for that condition, color, and preview typicality. These criteria eliminated 3.8% of the trials from analysis. We also eliminated trials containing anticipatory saccades (defined as saccades with latencies less than 50 ms); this criterion eliminated an additional 11.1% of the trials from the analysis. The data were then analyzed in a 2 (condition) × 2 (color) × 2 (preview typicality) ANOVA. Because preliminary analyses indicated that target location did not interact with the effects of interest in this or other experiments, we did not include it as a variable in the analyses we report below. In order to assess the generalizability of our results to other stimuli, we conducted analyses by items as well as by participants. Analyses by participants are reported below using the subscript ‘1’, while analyses by items are reported using the subscript ‘2’.
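The trimming rule described above can be sketched as follows. This is a minimal illustration with fabricated RTs, and it assumes the cell mean and SD are computed over all of the cell's RTs before any exclusion, a detail the text does not specify:

```python
import statistics

def trim_rts(rts, ceiling=2000.0, sd_cutoff=2.5):
    """Apply the Experiment 3 trimming rule to one subject's RTs (ms) for a
    single combination of condition, color, and preview typicality: drop RTs
    above 2 s or more than 2.5 SDs from the cell mean."""
    mean = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts
            if rt <= ceiling and abs(rt - mean) <= sd_cutoff * sd]

# Illustrative (fabricated) cell of RTs: the 2600 ms value exceeds the 2 s
# ceiling and is removed; the remaining RTs fall within 2.5 SDs of the mean.
cell = [850, 910, 880, 2600, 895, 905]
print(trim_rts(cell))  # [850, 910, 880, 895, 905]
```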
Mean RTs and error rates are presented in Table 3. In the analyses that follow, all effects are significant at the 0.05 level, except as otherwise noted. Results from the ANOVA conducted on mean RTs revealed a main effect of condition, and therefore a significant object-specific preview benefit, F1(1, 39) = 7.5, MSE1 = 977, F2(1, 42) = 6.6, MSE2 = 1058; participants named targets faster in the SO condition (M = 892 ms) than in the DO condition (M = 902 ms). There was also a significant main effect of preview typicality, F1(1, 39) = 4.4, MSE1 = 1478, F2(1, 42) = 8.2, MSE2 = 1757, with faster RTs following typical previews (M = 892 ms) than atypical previews (M = 902 ms). The main effect of color change did not reach significance, F1(1, 39) = 2.0, p > 0.05, MSE1 = 665, F2(1, 42) = 1.1, MSE2 = 1181. There was, however, a significant interaction of preview typicality and color change, F1(1, 39) = 42.0, MSE1 = 1067, F2(1, 42) = 17.9, MSE2 = 2645. Following a typically colored preview, RTs were faster when the color of the object stayed the same (M = 879 ms) than when it changed (M = 907 ms); in contrast, when the preview object was atypically colored, RTs were faster when the color changed (M = 892 ms) than when it stayed the same (M = 911 ms). This result likely reflects the fact that objects are named more quickly overall when they appear in their typical color (e.g., Tanaka & Presnell, 1999).
Table 3. Mean reaction times and preview effects (in ms) and error rates (in percentages) in Experiment 3.
| Preview Typicality | Same Color | Different Color |
|---|---|---|
| Typical | | |
| Same-Object | 870 (2.0) | 904 (1.9) |
| Different-Object | 888 (1.9) | 909 (2.3) |
| Object-Specific Preview Benefit | 18 (-0.1) | 5 (0.4) |
| Atypical | | |
| Same-Object | 904 (1.4) | 891 (1.7) |
| Different-Object | 919 (2.3) | 892 (1.7) |
| Object-Specific Preview Benefit | 15 (0.9) | 1 (0.0) |
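The key comparison in Table 3 is how much a color change reduces the object-specific preview benefit under each preview typicality. As an illustrative check on the table values (a sketch, not part of the original analyses):

```python
# Mean naming RTs (ms) from Table 3 of Experiment 3.
# Each key is (preview typicality, color); each value is
# (same-object RT, different-object RT).
rts = {
    ("typical", "same"): (870, 888),
    ("typical", "different"): (904, 909),
    ("atypical", "same"): (904, 919),
    ("atypical", "different"): (891, 892),
}
benefit = {cell: do - so for cell, (so, do) in rts.items()}

# Reduction in the object-specific preview benefit produced by a color
# change, computed separately for typical and atypical previews.
for typ in ("typical", "atypical"):
    reduction = benefit[(typ, "same")] - benefit[(typ, "different")]
    print(typ, reduction)
# typical 13
# atypical 14
```

The near-identical reductions (13 ms and 14 ms) are what underlie the absent three-way interaction reported in the text.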
As in Experiment 2, color change significantly disrupted object continuity, resulting in a significant color by condition interaction, F1(1, 39) = 4.4, MSE1 = 832, which was marginally significant by items, F2(1, 42) = 3.0, p < 0.10, MSE2 = 1120. This interaction reflects the fact that, as in Experiment 2, the object-specific preview benefit was greater when the target's color matched its previewed color (M = 16 ms) than when its color changed (M = 3 ms).
Critically, the extent to which color change disrupted perceptual continuity did not depend on whether the preview item was typically or atypically colored; there was no three-way interaction of color, condition, and preview typicality, F1(1, 39) < 1, MSE1 = 802, F2(1, 42) < 1, MSE2 = 1053. Indeed, the reduction in the object-specific preview benefit was essentially the same for typically colored preview objects (13 ms) as for atypically colored preview objects (14 ms). The implications of this result are discussed below.
An ANOVA performed on the error data revealed no main effect of preview typicality, F1(1, 39) = 3.2, MSE1 = 1.85, F2(1, 42) = 1.5, MSE2 = 1.95, or of condition, F1(1, 39) = 2.7, MSE1 = 2.33, F2(1, 42) = 1.4, MSE2 = 2.27. As was the case for the RT data, there was no significant three-way interaction, F1(1, 39) = 3.1, MSE1 = 3.32, F2(1, 42) = 3.4, MSE2 = 1.49. None of the other main effects or interactions was significant, all Fs < 1.
Discussion
The results of Experiment 3 replicate the critical finding of Experiment 2: for objects that have diagnostic or typical colors, changing the object's color during a saccade disrupts perceptual continuity, as reflected by significantly reduced object-specific preview benefits. Furthermore, the results extend those findings by demonstrating that color change produces this effect regardless of whether the previewed object is depicted in its typical or atypical color. The implications of these results are considered below, in the General Discussion.
General Discussion
The purpose of the present study was to examine episodic object representations and the use of those representations in preserving object continuity across change. In Experiment 1, participants viewed a preview display consisting of two colored letters, then made a saccade to a location between the two preview letters. When the eyes landed, a target letter was presented in either the same or opposite location as its previewed location. Not surprisingly, we obtained an object-specific preview benefit, with faster responses to a target presented in its preview location.
On some trials, the target's color matched its previewed color, and on other trials it mismatched. Color changes affected naming time, suggesting that an episodic representation of the preview letter's color had been preserved, and that this representation influenced responses to the target. Importantly, however, the object-specific preview benefit was unaffected by color change. This suggests that, even though color may be preserved as part of an object's representation, it does not play a role in the processes that underlie perceptual continuity.
In Experiment 2, we tested whether this conclusion was limited to stimuli, like letters, whose color is arbitrary and not reflective of the object's surface properties. We did so by replicating Experiment 1 with two new sets of objects. All of the objects were photographs of concrete, real-world objects (as opposed to the symbolic letter stimuli used in Experiment 1). One set included diagnostic objects that possess typical colors that are diagnostic for that object's identity (such as a banana, whose typical color is yellow). The other set included non-diagnostic objects (e.g., a car or a pail) that are not strongly associated with a particular color.
The results of Experiment 2 revealed a significant object-specific benefit for each class of objects. Furthermore, the results suggested that object color is part of the episodic representation of objects in each set. The results further revealed, however, that color plays a role in the processes underlying perceptual continuity only for the set of objects that possess a typical or diagnostic color. Changing the color of a diagnostic object during a saccade disrupted its perceptual continuity, completely eliminating the object-specific preview benefit; the results of Experiment 3 demonstrated that this is true regardless of whether the object was previewed in its typical or atypical color. In contrast, changing the color of a non-diagnostic object did not significantly reduce the preview benefit, suggesting that such a change did not disrupt perceptual continuity. Given that object color is an essential part of the representation of diagnostic objects (but not of non-diagnostic objects; Tanaka & Presnell, 1999), the results of the present study are in general agreement with previous findings (e.g., Gordon & Irwin, 1996, 2000; Henderson, 1994) that changes to an object's identity are much more likely to disrupt perceptual continuity than changes that do not alter an object's identity.
Thus, the results suggest that, although episodic representations generally include an object's color, whether or not color plays a role in preserving perceptual continuity depends on the nature of the object itself. The critical factor appears to be whether or not color is critical to defining the object's identity. For diagnostic objects, color is part of the object's long-term memory representation, and thus constitutes a defining feature that distinguishes the object from other object types (e.g., Tanaka & Presnell, 1999; Tanaka et al, 2001). In contrast, color plays a much more limited role in the representation of non-diagnostic objects; while it may be useful for discriminating similar objects, it does not contribute to the identification of the object per se. The results of the present study suggest that these differences in how objects are represented in long-term memory affect the use of their episodic representations as well.
The results of Experiment 3 are especially revealing of the role of long-term memory in episodic representation, and of the process by which color information comes to influence the maintenance of perceptual continuity. On the basis of the results of Experiment 2, for example, it might be assumed that, to the extent that an object has a defining color, that color is routinely retrieved from long-term memory and incorporated with the episodic representation of the object formed from its preview. When the object's color changes during a saccade, the conflict between its new color and its color as represented in visual working memory disrupts perceptual continuity, eliminating the object-specific preview benefits observed in the reviewing paradigm. This may be especially true given that the new color also conflicts with the object's continuing representation in long-term memory.
The results of Experiment 3 do not fully support this account. While changing the color of an object that was previously viewed in its typical color does disrupt continuity, changing the color of an object viewed in an atypical color produces the same effect. This pattern of results is most consistent with a different account, in which the extent to which an object property plays a role in determining continuity is determined by the quality of its representation in visual working memory (e.g., Gordon et al, 2008).
Gordon et al (2008), for example, have previously found that the quality of a feature's representation in working memory determines the likelihood that changing the feature will disrupt perceptual continuity. The pattern of results in Experiments 1-3 suggests that representations of the color of diagnostic objects may be of especially high quality, whether the previewed color is typical or atypical. The source of the representation's quality may differ in each case, however. For typical colors, the representation may be strengthened by its match with the object's long-term color representation. For atypical colors, in contrast, the inconsistency between the object's color and its long-term representation may make its color more salient, thereby enhancing its strength as well. For non-diagnostic objects or letters, for which long-term representations do not include object color, the object's color is neither salient nor supported by long-term memory, and is therefore less likely to play a role in preserving continuity. Thus, changing the color of a letter or of a non-diagnostic object is much less likely to disrupt continuity.
Our finding that color changes disrupt perceptual continuity only for objects that have typical colors appears to be inconsistent with recent work by Hollingworth, Richard, and Luck (2008). In Experiment 1 of their study, participants viewed a display containing a circular array of differently-colored discs. After fixating the center of the array, a brief cue drew the eyes to the location of one of the colored discs. On one-third of the trials, the array shifted during the saccade, so that the eyes landed in a location midway between the targeted disc and another disc. Hollingworth et al examined the accuracy of the automatic corrective saccade that followed, and found that participants were well above chance at making the corrective saccade to the intended saccade target, rather than to the other colored disc. On the basis of this and similar results, Hollingworth et al argued that object color may be used to establish object correspondence across views. Thus, even for colored discs (which presumably have no strong association with any particular color in long-term memory), color appears to be part of the object's representation, and is compared with the currently visible objects to establish correspondence.
It is worth noting that Hollingworth et al (2008) addressed the processes that establish correspondence, while the present study addressed the object file reviewing and comparison processes that follow correspondence. Nonetheless, their results may appear to conflict with our own.
It is possible to reconcile these results, however, by considering factors that may influence the quality of a feature's representation in visual working memory. We have previously described two such factors: support from representations in long-term memory, and feature salience. Although the representation of a disc's color in the Hollingworth et al study is surely not supported by a representation of discs in long-term memory, the second factor – salience – may play a role in producing a high-quality representation of the discs' color. In their experiment, color was the defining feature of each object in the array; indeed, it was the only feature (aside from location) that distinguished each disc from the discs that surrounded it. The target disc's color was therefore its most salient perceptual feature, leading to a high-quality color representation that influences subsequent interactions with the disc. This is in contrast to the colored letters and non-diagnostic objects used in the present study, both of which have distinguishing characteristics (e.g., shape) that are far more diagnostic of the object's identity than their color is. Rather than conflicting with the present results, then, the results of Hollingworth et al (2008) may be seen as converging evidence for the importance of the quality of a feature's representation in determining its influence on perceptual continuity.
The present study is also similar in many respects to a series of experiments reported by Noles and Scholl (2005). In their experiments, which used a within-fixation preview paradigm similar to that used by Kahneman, Treisman, and Gibbs (1992), participants responded to one feature of a target stimulus, while a task-irrelevant feature of the stimulus changed between the preview and target display. In one experiment, for example, participants reported whether an emotion portrayed by a target face was also portrayed by one of two preview faces. On some trials, the same face was shown in the preview and target displays, while on other trials a different target face (which might nonetheless portray the same emotion) was presented. Their results indicated that object-specific preview benefits were eliminated when the face's identity changed, despite the fact that identity was not relevant for performing the task. Thus, the task-irrelevant stimulus property appeared to be represented in object files, and to play a role in preserving continuity. This was not the case, however, in a subsequent experiment in which the task was to indicate whether or not a simple geometric target shape had been present in the preview display. In this case, the object's color either matched or did not match its previewed color. Under such circumstances, a task-irrelevant color change did not reduce or eliminate the object-specific preview benefit, suggesting that color was either not included in the object file representation or was included but was not relevant to maintaining continuity.
On the basis of these results, Noles and Scholl (2005) argued that features in object file representations may be stored in either an integral or separable fashion (e.g., Garner, 1974). Face identity and emotion appear to be represented integrally, while, in their experiment, color and shape are separable. As Noles and Scholl point out, however, whether or not an object's features are stored integrally or separably likely depends on several factors. Among these, they include stimulus complexity, the semantic properties of the stimulus, and the need to process stimuli holistically (as is the case for the faces used in their experiments). Consistent with that, our findings suggest that color and identity are processed in an integral fashion for some classes of objects – namely, those objects for which color is part of their semantic representation – but not for other classes of objects. For other objects, color may nonetheless be represented (as our data suggest), but that representation is separable from the representation of identity and is therefore less likely to influence object continuity processes.
Conclusion
The experiments reported here suggest that episodic object representations incorporate representations of object color. However, whether or not color representations play a role in preserving perceptual continuity across a saccade depends on the quality of the representation, which is influenced by its salience and by the importance of color in the object's long-term representation. The results are therefore consistent with our previous work (Gordon et al, 2008) showing that object files represent an object's perceptual features, but that pre- and post-change comparisons of those features are made under only limited circumstances.
Acknowledgments
This project was supported by Grant Number 1P20 RR020151 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NCRR or NIH. The project was also supported by the National Science Foundation under Grant Number 01322899.
References
- Angelone BL, Levin DT, Simons DJ. The roles of representation and comparison failures in change blindness. Perception. 2003;32:947–962. doi: 10.1068/p5079.
- Friedman RB. Identity without form: Abstract representations of letters. Perception and Psychophysics. 1980;28:53–60. doi: 10.3758/bf03204315.
- Gajewski DA, Brockmole JR. Feature bindings endure without attention: Evidence from an explicit recall task. Psychonomic Bulletin and Review. 2006;13:581–587. doi: 10.3758/bf03193966.
- Gajewski DA, Henderson JM. The role of saccade targeting in the transsaccadic integration of object types and tokens. Journal of Experimental Psychology: Human Perception and Performance. 2005;31:820–830. doi: 10.1037/0096-1523.31.4.820.
- Garner W. The processing of information and structure. Potomac, MD: Erlbaum; 1974.
- Gordon RD, Irwin DE. What's in an object file? Evidence from priming studies. Perception and Psychophysics. 1996;58:1260–1277. doi: 10.3758/bf03207558.
- Gordon RD, Irwin DE. The role of physical and conceptual properties in preserving object continuity. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2000;26:136–150. doi: 10.1037//0278-7393.26.1.136.
- Gordon RD, Vollmer SD, Frankl ML. Object continuity and the transsaccadic representation of form. Perception and Psychophysics. 2008;70:667–679. doi: 10.3758/pp.70.4.667.
- Grimes J. On the failure to detect changes in scenes across saccades. In: Akins K, editor. Perception. Vol. 2. New York: Oxford University Press; 1996. pp. 89–110.
- Henderson JM. Two representational systems in dynamic visual identification. Journal of Experimental Psychology: General. 1994;123:410–426. doi: 10.1037//0096-3445.123.4.410.
- Henderson JM, Siefert ABC. Types and tokens in transsaccadic object identification: Effects of spatial position and left-right orientation. Psychonomic Bulletin and Review. 2001;8:753–760. doi: 10.3758/bf03196214.
- Hollingworth A. Failures of retrieval and comparison constrain change detection in natural scenes. Journal of Experimental Psychology: Human Perception and Performance. 2003;29:388–403. doi: 10.1037/0096-1523.29.2.388.
- Hollingworth A. Constructing visual representations of natural scenes: The roles of short-term and long-term visual memory. Journal of Experimental Psychology: Human Perception and Performance. 2004;30:519–537. doi: 10.1037/0096-1523.30.3.519.
- Hollingworth A. Scene and position specificity in visual memory for objects. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32:58–69. doi: 10.1037/0278-7393.32.1.58.
- Hollingworth A, Henderson JM. Accurate visual memory for previously attended objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance. 2002;28:113–136.
- Hollingworth A, Richard AM, Luck SJ. Understanding the function of visual short-term memory: Transsaccadic memory, object correspondence, and gaze correction. Journal of Experimental Psychology: General. 2008;137:163–181. doi: 10.1037/0096-3445.137.1.163.
- Irwin DE. Integrating information across saccadic eye movements. Current Directions in Psychological Science. 1996;5:94–100.
- Irwin DE, Andrews RV. Integration and accumulation of information across saccadic eye movements. In: Inui T, McClelland JL, editors. Attention and Performance XVI: Information integration in perception and communication. Cambridge, MA: MIT Press; 1996. pp. 125–156.
- Irwin DE, Gordon RD. Eye movements, attention, and transsaccadic memory. Visual Cognition. 1998;5:127–155.
- Kahneman D, Treisman A. Changing views of attention and automaticity. In: Parasuraman R, Davies R, editors. Varieties of Attention. Cambridge, MA: MIT Press; 1984. pp. 29–61.
- Kahneman D, Treisman A, Gibbs BJ. The reviewing of object files: Object-specific integration of information. Cognitive Psychology. 1992;24:175–219. doi: 10.1016/0010-0285(92)90007-o.
- Mitroff SR, Alvarez GA. Space and time, not surface features, guide object persistence. Psychonomic Bulletin and Review. 2007;14:1199–1204. doi: 10.3758/bf03193113.
- Mitroff SR, Simons DJ, Levin DT. Nothing compares 2 views: Change blindness can occur despite preserved access to the changed information. Perception and Psychophysics. 2004;66:1268–1281. doi: 10.3758/bf03194997.
- Naor-Raz G, Tarr MJ, Kersten D. Is color an intrinsic property of object representation? Perception. 2003;32:667–680. doi: 10.1068/p5050.
- Noles NS, Scholl BJ. What's in an object file? Integral vs. separable features [Abstract]. Journal of Vision. 2005;5(8):614–614a. doi: 10.1167/5.8.614. http://journalofvision.org/5/8/614/
- Noles NS, Scholl BJ, Mitroff SR. The persistence of object file representations. Perception and Psychophysics. 2005;67:324–334. doi: 10.3758/bf03206495.
- Potter MC. Short-term conceptual memory for pictures. Journal of Experimental Psychology: Human Learning and Memory. 1976;2:509–522.
- Rensink RA. The dynamic representation of scenes. Visual Cognition. 2000;7:17–42.
- Rensink RA, O'Regan JK, Clark JJ. To see or not to see: The need for attention to perceive changes in scenes. Psychological Science. 1997;8:368–373.
- Sanocki T. Representation and perception of scenic layout. Cognitive Psychology. 2003;47:43–86. doi: 10.1016/s0010-0285(03)00002-1.
- Simons DJ. Current approaches to change blindness. Visual Cognition. 2000;7:1–15.
- Tanaka JW, Presnell LM. Color diagnosticity in object recognition. Perception and Psychophysics. 1999;61:1140–1153. doi: 10.3758/bf03207619.
- Tanaka JW, Weiskopf D, Williams P. The role of color in high-level vision. Trends in Cognitive Sciences. 2001;5:211–215. doi: 10.1016/s1364-6613(00)01626-0.
- Zelinsky GJ. Eye movements during change detection: Implications for search constraints, memory limitations, and scanning strategies. Perception and Psychophysics. 2001;63:209–225. doi: 10.3758/bf03194463.
- Zelinsky GJ. Detecting changes between real-world objects using spatiochromatic filters. Psychonomic Bulletin and Review. 2003;10:533–555. doi: 10.3758/bf03196516.
