Published in final edited form as: Nat Hum Behav. 2017 Mar 8;1(3):0058. doi: 10.1038/s41562-017-0058

Five Factors that Guide Attention in Visual Search

Jeremy M Wolfe 1, Todd S Horowitz 2

Abstract

How do we find what we are looking for? Fundamental limits on visual processing mean that even when the desired target is in our field of view, we often need to search, because it is impossible to recognize everything at once. Searching involves directing attention to objects that might be the target. This deployment of attention is not random. It is guided to the most promising items and locations by five factors discussed here: Bottom-up salience, top-down feature guidance, scene structure and meaning, the previous history of search over time scales from msec to years, and the relative value of the targets and distractors. Modern theories of search need to specify how all five factors combine to shape search behavior. An understanding of the rules of guidance can be used to improve the accuracy and efficiency of socially-important search tasks, from security screening to medical image perception.


How can a texting pedestrian walk right into a pole, even though it is clearly visible 1? At any given moment, our attention and eyes are focused on some aspects of the scene in front of us, while other portions of the visible world go relatively unattended. We deploy this selective visual attention because we are unable to fully process everything in the scene at the same time. We have the impression of seeing everything in front of our eyes, but over most of the visual field we are probably seeing something like visual textures, rather than objects 2,3. Identifying specific objects and apprehending their relationships to each other typically requires attention, as our unfortunate texting pedestrian can attest.

Figure 1 illustrates this point. It is obvious that this image is filled with Ms and Ws in various combinations of red, blue, and yellow, but it takes attentional scrutiny to determine whether or not there is a red and yellow M.

Figure 1:

On first glimpse, you know something about the distribution of colors and shapes but not how those colors and shapes are bound to each other. Find ‘M’s that are red and yellow.

The need to attend to objects in order to recognize them raises a problem. At any given moment, the visual field contains a very large, possibly uncountable number of objects. We can count the Ms and Ws of Figure 1, but imagine looking at your reflection in the mirror. Are you an object? What about your eyes or nose or that small spot on your chin? If object recognition requires attention, and if the number of objects is uncountable, how do we manage to get our attention to a target object in a reasonable amount of time? Attention can process items at a rate of, perhaps, 20-50 items per second. If you were looking for a street sign in an urban setting containing a mere 1000 possible objects (every window, tire, door handle, piece of trash, etc.), it would take 20-50 seconds just to find that sign. It is introspectively obvious that you routinely find what you are looking for in the real world in a fraction of that time. To be sure, there are searches of the needle-in-a-haystack, Where’s Waldo? variety that take significant time, but routine searches for the saltshaker, the light switch, your pen, and so forth, obviously proceed much more quickly. Search is not overwhelmed by the welter of objects in the world because search is guided to a (often very small) subset of all possible objects by several sources of information. The purpose of this article is to briefly review the growing body of knowledge about the nature of that guidance.

We will discuss five forms of guidance:

  1. Bottom-up, stimulus-driven guidance in which the visual properties of some aspects of the scene attract more attention than others.

  2. Top-down, user-driven guidance in which attention is directed to objects with known features of desired targets.

  3. Scene guidance in which attributes of the scene guide attention to areas likely to contain targets.

  4. Guidance based on the perceived value of some items or features.

  5. Guidance based on the history of prior search.

Measuring Guidance

We can operationalize the degree of guidance in a search for a target by asking what fraction of all items can be eliminated from consideration. One of the more straightforward methods is to present observers with visual search displays like those in Figure 2 and measure the reaction time (RT) required for them to report whether or not there is a target (here a ‘T’), as a function of the number of items (set size). The slope of the RT x set size function is a measure of the efficiency of search. For a search for a T among Ls (Fig 2A), the slope would be in the vicinity of 20-50 msec/item 4. We believe that this reflects serial deployment of attention from item to item 5, though this need not be the case 6.

Figure 2:

The basic visual search paradigm. A target (here a ‘T’) is presented amidst a variable number of distractors. Search ‘efficiency’ can be indexed by the slope of the function relating reaction time (RT) to the visual set size. If the target in 2B is a red T, the slope for 2B will be half of that for 2A because attention can be limited to just half of the items in 2B.

In Fig. 2B, the target is a red T. This search would be faster and more efficient 7 because attention can be guided to the red items. If half the items are red (and if guidance is perfect), the slope will be reduced by about half, suggesting that, at least in this straightforward case, slopes index the amount of guidance.

The relationship of slopes to guidance is not entirely simple, even for arrays of items like those in Fig 2 8 (but see 9). Matters become far more complex with real-world scenes, where the visual set size is not easily defined 10,11. However, if the slope is cut in half when half the items acquire some property, like the color red in 2B, it is reasonable to assert that search has been guided by that property 9.
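
To make the slope measure concrete, the sketch below (a minimal Python illustration with invented reaction times, not data from any study cited here) fits the RT x set size function for an unguided and a guided search and compares their slopes.

```python
import numpy as np

# Hypothetical mean reaction times (ms) at several set sizes for an
# unguided search (T among Ls) and a guided search (red T among red and
# black letters). All values are invented for illustration.
set_sizes = np.array([4, 8, 16, 32])
rt_unguided = np.array([620, 720, 920, 1320])
rt_guided = np.array([600, 650, 750, 950])

# Slope of the RT x set size function (ms/item), by least squares.
slope_unguided = np.polyfit(set_sizes, rt_unguided, 1)[0]
slope_guided = np.polyfit(set_sizes, rt_guided, 1)[0]

print(f"unguided slope: {slope_unguided:.1f} ms/item")         # ~25 ms/item
print(f"guided slope:   {slope_guided:.1f} ms/item")           # ~12.5 ms/item
print(f"slope ratio:    {slope_guided / slope_unguided:.2f}")  # ~0.5
```

In this toy example, guidance to the half of the items sharing the target's color yields a fitted slope of roughly half the unguided slope, which is the logic behind using slope ratios to index guidance.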

The problem of distractor rejection

As shown in Figure 2, a stimulus attribute can make search slopes shallower by limiting the number of items in a display that need to be examined. However, guidance of attention is not the only factor that can modulate search slopes. If observers are attending to each item in the display (in series or in parallel), the slope of the RT x set size function can also be altered by changing how long it takes to reject each distractor. Thus, if we markedly reduced the contrast of Figure 2A, the RT x set size function would become steeper, not because of a change in guidance but because it would now take longer to decide if any given item was a T or an L.
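
The distinction can be made explicit with a toy serial, self-terminating search model (our illustrative sketch, with invented parameters): guidance shallows the slope by shrinking the set of candidate items, whereas slower distractor rejection steepens it without any change in guidance.

```python
import numpy as np

def mean_rt(set_size, guided_fraction, reject_ms, base_ms=400):
    """Expected target-present RT under a toy serial self-terminating search.

    guided_fraction: fraction of items that guidance admits as candidates
                     (1.0 = no guidance).
    reject_ms:       time to process each attended item.
    On average, about half of the candidate items are examined before the
    target is found.
    """
    candidates = max(1.0, set_size * guided_fraction)
    return base_ms + reject_ms * (candidates + 1) / 2.0

set_sizes = np.array([4, 8, 16, 32])

for label, guided, reject in [("no guidance, easy distractors", 1.0, 50),
                              ("guidance to half the items",    0.5, 50),
                              ("no guidance, low contrast",     1.0, 90)]:
    rts = [mean_rt(n, guided, reject) for n in set_sizes]
    slope = np.polyfit(set_sizes, rts, 1)[0]
    print(f"{label:32s} slope = {slope:.1f} ms/item")
```

The second case halves the slope by halving the candidate set; the third steepens the slope purely by lengthening distractor rejection, mimicking the low-contrast example above.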

Bottom-up guidance by stimulus salience

Attention is attracted to items that differ from their surroundings, if those differences are large enough and if those differences occur in one of a limited set of attributes that guide attention. The basic principles are illustrated in Figure 3.

Figure 3:

Which items ‘pop-out’ of this display, and why?

Three items ‘pop-out’ of this display. The purple item on the left differs from its neighbors in color. It is identical to the purple item just inside the upper right corner of the image. That second purple item is not particularly salient even though it is the only other item in that shade of purple; its neighbors are close enough in color that the differences in color do not attract attention. The bluish item to its left is salient by virtue of an orientation difference. The square item a bit further to the left is salient because of the presence of a ‘closure’ feature 12 or the absence of a collection of line terminations 13. We call properties like color, orientation, or closure basic (or guiding) features, because they can guide the deployment of attention. Other properties may be striking when one is directly attending to an item, and may be important for object recognition, but they do not guide attention. For example, the one ‘plus’ in the display is not salient, even though it possesses the only X-intersection in the display, because intersection type is not a basic feature 14. The ‘pop-out’ we see in Figure 3 is not just subjective phenomenology. Pop-out refers to extremely effective guidance and is diagnosed by a near-zero slope of the RT x set size function, though there may be systematic variability even in these ‘flat’ slopes 15.

There are two fundamental rules of bottom-up salience 16. Salience of a target increases with its difference from the distractors (target-distractor, TD, heterogeneity) and with the homogeneity of the distractors (distractor-distractor, DD, homogeneity) along basic feature dimensions. Bottom-up salience is the most extensively modeled aspect of visual guidance, nicely reviewed in 17. The seminal modern work on bottom-up salience is Koch and Ullman’s 18 description of a winner-take-all network for deploying attention. Subsequent decades have seen the development of several influential bottom-up models, e.g. 19,20–22. However, bottom-up salience is just one of the factors guiding attention. By itself, it does only modestly well in predicting the deployment of attention (usually indexed by eye fixations). Models do quite well predicting search for salience, but not as well predicting search for other sorts of targets 17. This is quite reasonable. If you are looking for your cat in the bedroom, it would be counterproductive to have your attention visit all the shiny, colorful objects first. Thus, a bottom-up saliency model will not do well if the observer has a clear top-down goal 23. One might think that bottom-up salience would dominate if observers free-viewed a scene in the absence of such a goal, but bottom-up models can be poor at predicting fixations even when observers “free view” scenes without specific instructions 24. It seems that observers generate their own, idiosyncratic tasks, allowing other guiding forces to come into play. It is worth noting that salience models work better if they are not based purely on local features but acknowledge the structure of objects in the field of view 25. For instance, while the most salient spot in an image might be the edge between the cat’s tail and the white sheet on the bed, fixations are more likely to be directed to the middle of the cat 26,27.
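
The two rules stated at the start of this section can be illustrated with a very small local-contrast computation (a schematic sketch in Python; it is not one of the published salience models cited above): each item's salience is taken to be its mean feature difference from the other items, and a winner-take-all step selects the most salient item.

```python
import numpy as np

def salience(features):
    """Mean absolute feature difference between each item and all others
    (a crude stand-in for local feature contrast along one guiding
    dimension, e.g. hue)."""
    f = np.asarray(features, dtype=float)
    return np.abs(f[:, None] - f[None, :]).sum(axis=1) / (len(f) - 1)

target = 0.9
homog_distractors = [0.1] * 7                                # all the same hue
heterog_distractors = [0.1, 0.3, 0.5, 0.6, 0.7, 0.8, 0.85]   # mixed hues

for label, distractors in [("homogeneous distractors", homog_distractors),
                           ("heterogeneous distractors", heterog_distractors)]:
    display = distractors + [target]      # target is the last item
    s = salience(display)
    print(f"{label}: target salience {s[-1]:.2f}, "
          f"max distractor salience {max(s[:-1]):.2f}, "
          f"winner-take-all picks item {int(np.argmax(s))}")
```

With homogeneous distractors, the deviant target wins easily; when the distractors themselves are heterogeneous, the target's salience advantage collapses and nothing stands out, consistent with the DD-homogeneity rule.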

Top-down Feature Guidance

Returning to Figure 1, if you search for Ws with yellow elements, you can guide your attention to yellow items and subsequently determine if they are Ws or Ms 7. This is feature guidance, sometimes referred to as feature-based attention 28. Importantly, it is possible to guide attention to more than one feature at a time. Thus, search for a big, red, vertical item can benefit from our knowledge of its color, size, and orientation 29. Following the TD heterogeneity rule, search efficiency is dependent on the number of features shared by targets and distractors 29, and observers appear to be able to guide to multiple target features simultaneously 30. This finding raises the attractive possibility that search for an arbitrary object among other arbitrary objects would be quite efficient because objects would be represented sparsely in a high-dimensional space. Such sparse coding has been invoked to explain object recognition 31,32. However, search for arbitrary objects turns out not to be particularly efficient 11,33. By itself, guidance to multiple features does not appear to be an adequate account of how we search for objects in the real world (see the section on scene guidance, below).
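
A minimal sketch of this kind of guidance (our illustration, loosely in the spirit of Guided Search; the items, features, and weights are all invented) assigns each item a priority equal to a weighted sum of the target features it shares, and attention then visits items in order of priority.

```python
# Each display item is described by a few guiding features.
items = [
    {"color": "red",   "size": "big",   "orientation": "vertical"},
    {"color": "red",   "size": "small", "orientation": "horizontal"},
    {"color": "green", "size": "big",   "orientation": "vertical"},
    {"color": "green", "size": "small", "orientation": "horizontal"},
]
target = {"color": "red", "size": "big", "orientation": "vertical"}
weights = {"color": 1.0, "size": 0.6, "orientation": 0.6}   # guidance is imperfect

def priority(item):
    """Top-down priority: weighted count of target features the item shares."""
    return sum(w for feat, w in weights.items() if item[feat] == target[feat])

# Attention visits items in descending priority order.
order = sorted(range(len(items)), key=lambda i: -priority(items[i]))
for i in order:
    print(f"item {i}: priority {priority(items[i]):.1f}  {items[i]}")
```

Distractors that share more features with the target end up higher in the queue, which is why search slows as target-distractor feature overlap grows.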

What are the guiding attributes?

Feature guidance bears some metaphorical similarity to your favorite computer search engine. You enter some terms into the search box and an ordered list of places to attend is returned. A major difference between internet search engines and the human visual search engine is that human search uses only a very small vocabulary of search terms (i.e., features). The idea that there might be a limited set of features that could be appreciated “preattentively” 34 was at the heart of Treisman’s “Feature Integration Theory” 35. She predicted that targets defined by unique features would pop out of displays. Subsequent theorists modified this proposal to suggest that features could guide the deployment of attention 7,36.

There are probably only a couple dozen attributes that can guide attention. The visual system can detect and identify a vast number of stimuli, but it cannot use arbitrary properties to guide attention the way that Google or Bing can use arbitrary search terms. A list of guiding attributes is found in Table 1. This article does not list all of the citations that support each entry. Many of these can be found in older versions of the list 37,38. Recent changes to the list are marked in color in Table 1 and citations are given for those.

Table 1:

The guiding attributes for feature search

Undoubted guiding attributes: Color; Motion; Orientation; Size (including length, spatial frequency, and apparent size 39).

Probable guiding attributes: Luminance onset (flicker), but see 40; Luminance polarity; Vernier offset; Stereoscopic depth and tilt; Pictorial depth cues, but see 41; Shape; Line termination; Closure; Curvature; Topological status.

Possible guiding attributes: Lighting direction (shading); Expansion/looming; Number; Glossiness (luster); Aspect ratio; Eye of origin / binocular rivalry.

Doubtful cases: Novelty; Letter identity; Alphanumeric category; Familiarity (over-learned sets, in general 42).

Probably not guiding attributes: Intersection; Optic flow; Color change; 3-D volumes (e.g., geons); Luminosity; Material type; Scene category; Duration; Stare-in-crowd 43,44; Biological motion; Your name; Threat 48; Semantic category (animal, artifact, etc.); Blur 45; Visual rhythm 46; Animacy/chasing 47.

Faces are a complicated issue: Faces among other objects; Familiar faces; Emotional faces; Schematic faces.

Factors that modulate search: Cast shadows; Amodal completion; Apparent depth.

Attributes like color are deemed to be “undoubted” because multiple experiments from multiple labs attest to their ability to guide attention. “Probable” feature dimensions may be merely probable because we are not sure how to define the feature. Shape is the most notable entry here. It seems quite clear that something about shape guides attention 49. It is less clear exactly what that might be, though the success of deep learning algorithms in enabling computers to classify objects may open up new vistas for understanding human search for shape 50.

The attributes described as “possible” await more research. Often these attributes only have a single paper supporting their entry on the list, as in the case of numerosity: Can you direct attention to the pile with “more” elements in it, once you eliminate size, density, and other confounding visual factors? Perhaps 51, but it would be good to have converging evidence. Search for the magnitude of a digit (e.g. “find the highest number”) is not guided by the semantic meaning of the digits but by their visual properties 52.

The list of attributes that do not guide attention is, of course, potentially infinite. Table 1 lists a few plausible candidates that have been tested and found wanting. For example, there has been considerable interest recently in what could be called “evolutionarily motivated” candidates for guidance. What would enhance our survival if we could find it efficiently? Looking at a set of moving dots on a computer screen, we can perceive that one is “chasing” another 53. However, this aspect of animacy does not appear to be a guiding attribute 47. Nor does “threat” (defined by association with electric shock) seem to guide search 48.

Some caution is needed here because a failure to guide is a negative finding and it is always possible that, were the experiment done correctly, the attribute might guide after all. Thus, early research 54 found that binocular rivalry and eye-of-origin information did not guide attention, but more recent work 55,56 suggests that it may be possible to guide attention to interocular conflict, and our own newer data 57 indicates that rivalry may guide attention if care is taken to suppress other signals that interfere with that guidance. Thus, binocular rivalry was listed under “doubtful cases & probable non-features” in 37, but is now listed under “possible guiding attributes” in Table 1.

Faces remain a problematic candidate for feature status, with a substantial literature yielding conflicting results and conclusions. Faces are quite easy to find among other objects 58,59 but there is dispute about whether the guiding feature is “face-ness” or some simpler stimulus attribute 60,61. A useful review by Frischen et al. 62 argues that “preattentive search processes are sensitive to and influenced by facial expressions of emotion”, but this is one of the cases where it is hard to reject the hypothesis that the proposed feature is modulating the processing of attended items, rather than guiding the selection of which items to attend. Suppose that, once attended, it takes 10 msec longer to disengage attention from an angry face than from a neutral face. The result would be that search would go faster (10 msec/item faster) when the distractors were neutral than when they were angry. Consequently, an angry target among neutral distractors would be found more efficiently than a neutral face among angry ones. Evidence for guidance by emotion would be stronger if the more efficient emotion searches were closer to pop-out than to classic inefficient, unguided searches (e.g., T among Ls) 63. Typically, this is not the case. For example, Gerritsen et al. 64 report that “Visual search is not blind to emotion” but, in a representative finding, search for hostile faces produced a slope of 64 msec/item, which is quite inefficient, even if somewhat more efficient than the 82 msec/item for peaceful target faces (p. 1054).
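
The dwell-time account sketched in this paragraph can be written out as a back-of-the-envelope calculation (the numbers are purely illustrative, and the framing ignores the difference between target-present and target-absent slopes):

```python
# Toy serial search with NO guidance by emotion: attention visits faces in a
# random order, and the slope is set by how long it takes to process and
# disengage from each attended distractor (all numbers are illustrative).
reject_neutral_ms = 72    # time to reject a neutral distractor face
extra_disengage_ms = 10   # additional time to disengage from an angry face

slope_angry_target = reject_neutral_ms                         # neutral distractors
slope_neutral_target = reject_neutral_ms + extra_disengage_ms  # angry distractors

print(f"angry target among neutral faces: ~{slope_angry_target} ms/item")
print(f"neutral target among angry faces: ~{slope_neutral_target} ms/item")
# Both values are far from the near-zero slopes that mark genuine pop-out, so
# an angry-vs-neutral asymmetry of this size is weak evidence for guidance.
```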

There are stimulus properties that, while they may not be guiding attributes in their own right, do modulate the effectiveness of other attributes. For example, apparent depth modulates apparent size, and search is guided by that apparent size 65. Finally, there are properties of the display that influence the deployment of attention. These could be considered aspects of “scene guidance” (see the next major section, below). For example, attention tends to be attracted to the center of gravity in a display 66. Elements like arrows direct attention even if they themselves do not pop out 67. As discussed by Rensink 68, these and related factors can inform graphic design and other situations where the creator of an image wants to control how the observer consumes that image.

There have been some general challenges to the enterprise of defining specific features, notably the hypothesis that many of the effects attributed to the presence or absence of basic features are actually produced by crowding in the periphery 3. For example, is efficient search for cubes lit from one side among cubes lit from another side evidence for preattentive processing of 3D shape and lighting 69, or merely a by-product of the way these stimuli are represented in peripheral vision 41? Resolution of this issue requires a set of visual search experiments with stimuli that are “uncrowded”. This probably means using low set sizes; for example, see the evidence that material type is not a guiding attribute 70.

A different challenge to the preattentive feature enterprise is the possibility that too many discrete features are proposed. Perhaps many specific features form a continuum of guidance by a single, more broadly defined attribute. For instance, the cues to the 3D layout of the scene include stereopsis, shading, linear perspective and more. These might be part of a single attribute describing the 3D disposition of an object. Motion, onsets, and flicker might be part of a general dynamic change property 71. Most significantly, we might combine the spatial features of line termination, closure, topological status, orientation, and so forth into a single shape attribute with properties defined by the appropriate layer of the right convolutional neural net (CNN). Such nets have shown themselves capable of categorizing objects, so one could imagine a preattentive CNN guiding attention to objects as well 72. At this writing, such an idea remains a promissory note. Regardless of how powerful CNNs may become, humans cannot guide attention to entirely arbitrary/specific properties in order to find particular types of object 73 and it is unknown if some intermediate representation in a CNN could capture the properties of the human search engine. If it did, we might well find that such a layer represented a space with dimensions corresponding to attributes like size, orientation, line termination, vernier offset, and so forth, but this remains to be seen.
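
To make the promissory note slightly more concrete, here is one speculative way such a preattentive CNN might be probed (entirely our sketch: the network, layer, image files, and similarity measure are assumptions, not anything proposed or tested here): treat the spatial activation map of an intermediate convolutional layer as a candidate shape space and score each location by its similarity to a target template's features.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained network; an intermediate layer's activations stand in for a
# hypothetical "preattentive shape space" (the choice of layer is arbitrary).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
features = {}
net.layer3.register_forward_hook(lambda module, inp, out: features.update(act=out))

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def layer_map(path):
    """Return the (channels, H, W) activation map of the chosen layer."""
    with torch.no_grad():
        net(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
    return features["act"][0]

# Hypothetical image files; substitute a real scene and target-template image.
scene_act = layer_map("scene.jpg")                              # (C, H, W)
target_vec = layer_map("target_template.jpg").mean(dim=(1, 2))  # (C,)

# Priority map: cosine similarity between the target's pooled feature vector
# and the feature vector at each location of the scene's activation map.
scene_vecs = scene_act.flatten(1)                               # (C, H*W)
priority = torch.nn.functional.cosine_similarity(
    scene_vecs, target_vec.unsqueeze(1), dim=0
).reshape(scene_act.shape[1:])

peak = torch.nonzero(priority == priority.max())[0]
print("most target-like location (row, col in feature-map coordinates):", peak.tolist())
```

Whether any such layer behaves like the limited human vocabulary of guiding features, rather than like an arbitrary object recognizer, is exactly the open question raised above.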

Guidance by scene properties

While the field of visual search has largely been built on search for targets in arbitrary 2D arrays of items, most real-world search takes place in structured scenes, and this structure provides a source of guidance. To illustrate, try searching for any humans in Figure 4. Depending on the resolution of the image as you are viewing it, you may or may not be able to see legs poking out from behind the roses by the gate. Regardless, what should be clear is that the places you looked were strongly constrained. Biederman, Mezzanotte, and Rabinowitz 74 suggested a distinction between semantic and syntactic guidance.

Figure 4:

Scene Guidance: Where is attention guided if you are looking for humans? What if the target were a bird?

Syntactic guidance has to do with physical constraints. You don’t look for people on the front surface of the wall or in the sky because people typically need to be supported against gravity. Semantic guidance refers to the meaning of the scene. You don’t look for people on the top of the wall, not because they could not be there but because they are unlikely to be there given your understanding of the scene, whereas you might scrutinize the bench. Scene guidance would be quite different (and less constrained) if the target were a bird. The use of the terms “semantic” and “syntactic” should not be seen as tying scene processing too closely to linguistic processing, nor should the two categories be seen as neatly non-overlapping 75,76. Nevertheless, the distinction between syntactic and semantic factors, as roughly defined here, can be observed in electrophysiological recordings: scenes showing semantic violations (e.g., a bar of soap sitting next to the computer on the desk) produce different neural signatures than scenes showing syntactic violations (e.g., a computer mouse on top of the laptop screen) 77. While salience may have some influence in this task 78, it does not appear to be the major force guiding attention here 24,79. But note that feature guidance and scene guidance work together. People certainly could be on the lawn, but you do not scrutinize the empty lawn in Figure 4 because it lacks the correct target features.

Extending the study of guidance from controlled arrays of distinct items to structured scenes poses some methodological challenges. For example, how do we define the set size of a scene? Is “rose bush” an item in Figure 4, or does each bloom count as an item? In bridging between the world of artificial arrays of items and scenes, perhaps the best we can do is to talk about the “effective set size” 80,10, the number of items/locations that are treated as candidate targets in a scene given a specific task. If you are looking for the biggest flower, each rose bloom is part of the effective set. If you are looking for a human, those blooms are not part of the set. While any estimate of effective set size is imperfect, it is a very useful idea and it is clear that, for most tasks, the effective set size will be much smaller than the set of all possible items 11.

Preview methods have been very useful in examining the mechanisms of scene search 81. A scene is flashed for a fraction of a second and then the observer searches for a target. The primary data are often eye tracking records. Often, these experiments involve searching while the observer’s view of the scene is restricted to a small region around the point of fixation (“gaze-contingent” displays). Very brief exposures (50-75 msec) can guide deployment of the eyes once search begins 82. A preview of the specific scene is much more useful than a preview of another scene of the same category, though the preview scene does not need to be the same size as the search stimulus 81. Importantly, the preview need not contain the target in order to be effective 83. Search appears to be more strongly guided by a relatively specific scene ‘gist’ 80,84, an initial understanding of the scene that does not rely on recognizing specific objects 85. The gist includes both syntactic (e.g., spatial layout) and semantic information, and this combination can provide powerful search guidance. Knowledge about the target provides an independent source of guidance 86,87. These sources of information provide useful ‘priors’ on where targets might be (“If there is a vase present, it’s more likely to be on a table than in the sink”) that are more powerful, in terms of guiding search, than memory for where a target might have been seen 88–90.
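
One common way to formalize how these priors combine with feature guidance (a schematic sketch in the spirit of contextual-guidance models such as ref. 88; all of the numbers are invented) is to represent the scene prior and the feature evidence as maps over locations and multiply them, so that only locations that are both plausible for the target and visually target-like receive high priority.

```python
import numpy as np

# A coarse 4 x 4 grid of scene locations (rows run from top of the scene to bottom).
# Scene prior: where a person is plausible, given the layout and meaning of the
# scene (e.g., near the ground plane, not in the sky). Values are invented.
scene_prior = np.array([
    [0.01, 0.01, 0.01, 0.01],   # sky / top of wall
    [0.05, 0.05, 0.05, 0.05],
    [0.20, 0.30, 0.30, 0.20],   # gate, bench, path
    [0.10, 0.25, 0.25, 0.10],   # lawn and flower beds
])

# Feature evidence: how person-like the local features are (also invented).
feature_map = np.array([
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.2, 0.1, 0.1],
    [0.2, 0.1, 0.8, 0.2],       # person-like features near the gate
    [0.1, 0.1, 0.1, 0.1],
])

priority = scene_prior * feature_map
priority /= priority.sum()      # normalize to a probability map over locations
best = np.unravel_index(np.argmax(priority), priority.shape)
print("first location to scrutinize (row, col):", best)
```

The multiplicative combination captures the intuition from Figure 4: the empty lawn is plausible but not person-like, the sky is neither, and only the region by the gate scores highly on both counts.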

Preview effects may be fairly limited in search of real scenes. If the observer searches a fully visible scene rather than being limited to a gaze-contingent window, guidance by the preview is limited to the first couple of fixations 91. Once search begins, guidance is presumably updated based on the real scene, rendering the preview obsolete. In gaze-contingent search, the effects last longer because this updating cannot occur. This updating can be seen in the work of Hwang et al. 76, where, in the course of normal search, the semantic content of the current fixation in a scene influences the target of the next fixation.

Modulation of search by prior history

In this section, we summarize evidence showing that the prior history of the observer, especially the prior history of search, modulates the guidance of attention. We can organize these effects by their time scale, from within a trial (on the order of 100s of ms) to lifetime learning (on the order of years).

A number of studies have demonstrated the preview benefit: when half of the search array is presented a few hundred msec before the rest of the array, the effective set size is reduced, because attention is guided away from the old, “marked” items (visual marking 92), toward the new items (onset prioritization 93), or both.

On a slightly longer timescale, priming phenomena are observed from trial to trial within an experiment, and can be observed over seconds to weeks. The basic example is “priming of pop-out” 94, in which an observer might be asked to report the shape of the one item of unique color in a display. If that item is the one red shape among green on one trial, responses will be faster if the next trial repeats red among green as compared to a switch to green among red; though the search in both cases will be a highly efficient, color pop-out search. More priming of pop-out is found if the task is harder 95. Note that it is neither the response nor the reporting feature which is repeated in priming of pop-out, but the target-defining or selection feature.

More generally, seeing the features of the target makes search faster than reading a word cue describing the target, even for overlearned targets. This priming by target features takes about 200 msec to develop 96. Priming by the features of a prior stimulus can be entirely incidental; simply repeating the target from trial to trial is sufficient 97. More than one feature can be primed at the same time 97,98 and both target and distractor features can be primed 97,99. Moreover, it is not just that observers are more ready to report targets with the primed feature; priming actually boosts sensitivity (i.e., d’) 100. Such priming can last for at least a week 101.

Observers can also incidentally learn information over the course of an experiment that can guide search. In contextual cueing 102, a subset of the displays are repeated across several blocks of trials. While observers do not notice this repetition, RTs are faster for repeated displays than for novel, unrepeated displays 103. The contextual cueing effect is typically interpreted as an abstract form of scene guidance: just as you learn that, in your friend’s kitchen, the toaster is on the counter next to the coffeemaker, you learn that, in this configuration of rotated Ls, the T is in the bottom left corner. However, evidence for this interpretation is mixed. RT x set size slopes are reduced for repeated displays 102 in some experiments, but not in others 104. Contextual cueing effects can also be observed in cases such as pop-out search 105 and attentionally-cued search 106, where guidance is already nearly perfect. Kunar et al. 104 suggested that contextual cueing reflects response facilitation, rather than guidance. Again, the evidence is mixed. There is a shift towards a more liberal response criterion for repeated displays 107, but this is not correlated with the size of the contextual cueing RT effect. In pop-out search, sensitivity to the target improves for repeated displays without an effect on decision criterion 105. It seems likely that observed contextual cueing effects reflect a combination of guidance effects and response facilitation, the mix depending on the specifics of the task. Oculomotor studies show that the context is often not retrieved and available to guide attention until a search has been underway for several fixations 108,109. Thus, the more efficient the search, the greater the likelihood that the target will be found before the context can be retrieved. Indeed, in simple letter displays, search does not become more efficient even when the same display is repeated several hundred times 110, presumably because searching de novo is always faster than waiting for context to become available. Once the task becomes more complex (e.g., searching for that toaster) 111, it becomes worthwhile to let memory guide search 112,113.

Over years and decades, we become intimately familiar with, for example, the characters of our own written language. There is a long-running debate about whether familiarity (or, conversely, novelty) might be a basic guiding attribute. Much of this work has been conducted with overlearned categories like letters. While the topic is not settled, semantic categories like “letter” probably do not guide attention 114,115, though mirror-reversed letters may stand out against standard letters 116,117. Instead, items made familiar in long-term memory can modulate search 42,118, though there are limits on the effects of familiarity in search 119,120.

Modulation of search by the value of items

In the past few years, there has been increasing interest in the effects of reward or value on search. Value proves to be a strong modulator of guidance. For instance, if observers are rewarded more highly for red items than for green, they will subsequently guide attention toward red, even if this is irrelevant to the task 121. Note that color is the guiding feature here; value modulates its effectiveness. The learned associations of value do not need to be task relevant or salient in order to have their effects 122, and learning can be very persistent, with value-driven effects being seen half a year after acquisition 123. Indeed, the effects of value may be driving some of the long-term familiarity effects described in the previous paragraph 42.

Visual search is mostly effortless. Unless we are scrutinizing aerial photographs for hints to North Korea’s missile program, or hunting for signs of cancer in a chest radiograph, we typically find what we are looking for in seconds or less. This remarkable ability is the result of attentional guidance mechanisms. While thirty-five years or so of research has given us a good grasp of the mechanisms of bottom-up salience, top-down feature-driven guidance and how those factors combine to guide attention 124,125, we are just beginning to understand how attention is guided by the structure of scenes and the sum of our past experiences. Future challenges for the field will include understanding how discrete features might fit together in a continuum of guidance and extending our theoretical frameworks from two-dimensional scenes to immersive, dynamic, three-dimensional environments.

Footnotes

Competing Interests:

JMW occasionally serves as an expert witness or consultant in cases to which this article might be relevant. TSH has no competing interests to declare.

Contributor Information

Jeremy M Wolfe, Brigham and Women’s Hospital / Harvard Medical School.

Todd S Horowitz, National Cancer Institute, NIH.

BIBLIOGRAPHY

  • 1.Hyman IE, Boss SM, Wise BM, McKenzie KE & Caggiano JM Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology 24, 597–607, doi: 10.1002/acp.1638 (2010). [DOI] [Google Scholar]
  • 2.Keshvari S & Rosenholtz R Pooling of continuous features provides a unifying account of crowding. J Vis 16, 39, doi: 10.1167/16.3.39 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Rosenholtz R, Huang J & Ehinger KA Rethinking the role of top-down attention in vision: effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology 3, doi: 10.3389/fpsyg.2012.00013 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Wolfe JM What do 1,000,000 trials tell us about visual search? Psychological Science 9, 33–39 (1998). [Google Scholar]
  • 5.Moran R, Zehetleitner M, Liesefeld H, Müller H & Usher M Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions. Psychonomic Bulletin & Review, 1–16, doi: 10.3758/s13423-015-0978-1 (2015). [DOI] [PubMed] [Google Scholar]
  • 6.Townsend JT & Wenger MJ The serial-parallel dilemma: A case study in a linkage of theory and method. Psychonomic Bulletin & Review 11, 391–418 (2004). [DOI] [PubMed] [Google Scholar]
  • 7.Egeth HE, Virzi RA & Garbart H Searching for conjunctively defined targets. J. Exp. Psychol: Human Perception and Performance 10, 32–39 (1984). [DOI] [PubMed] [Google Scholar]
  • 8.Kristjansson A Reconsidering visual search. i-Perception 6, doi: 10.1177/2041669515614670 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Wolfe JM Visual Search Revived: The Slopes Are Not That Slippery: A comment on Kristjansson (2015). i-Perception May-June 2016, 1–6, doi: 10.1177/2041669516643244 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Neider MB & Zelinsky GJ Exploring set size effects in scenes: Identifying the objects of search. Visual Cognition 16, 1–10 (2008). [Google Scholar]
  • 11.Wolfe JM, Alvarez GA, Rosenholtz R, Kuzmova YI & Sherman AM Visual search for arbitrary objects in real scenes. Atten Percept Psychophys 73, 1650–1671, doi: 10.3758/s13414-011-0153-3 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Kovacs I & Julesz B A closed curve is much more than an incomplete one: effect of closure in figure-ground segmentation. Proc Natl Acad Sci U S A 90, 7495–7497 (1993). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Taylor S & Badcock D Processing feature density in preattentive perception. Perception and Psychophysics 44, 551–562 (1988). [DOI] [PubMed] [Google Scholar]
  • 14.Wolfe JM & DiMase JS Do intersections serve as basic features in visual search? Perception 32, 645–656 (2003). [DOI] [PubMed] [Google Scholar]
  • 15.Buetti S, Cronin DA, Madison AM, Wang Z & Lleras A Towards a Better Understanding of Parallel Visual Processing in Human Vision: Evidence for Exhaustive Analysis of Visual Information. Journal of Experimental Psychology: General 145, 672–707., doi: 10.1037/xge0000163 (2016). [DOI] [PubMed] [Google Scholar]
  • 16.Duncan J & Humphreys GW Visual search and stimulus similarity. Psychological Review 96, 433–458 (1989). [DOI] [PubMed] [Google Scholar]
  • 17.Koehler K, Guo F, Zhang S & Eckstein MP What do saliency models predict? Journal of Vision 14, doi: 10.1167/14.3.14 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Koch C & Ullman S Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology 4, 219–227 (1985). [PubMed] [Google Scholar]
  • 19.Itti L, Koch C & Niebur E A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal.Mach. Intell 20, 1254–1259 (1998). [Google Scholar]
  • 20.Itti L & Koch C A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Res 40, 1489–1506 (2000). [DOI] [PubMed] [Google Scholar]
  • 21.Bruce NDB, Wloka C, Frosst N, Rahman S & Tsotsos JK On computational modeling of visual saliency: Examining what’s right, and what’s left. Vision Research 116, Part B, 95–112, doi: 10.1016/j.visres.2015.01.010 (2015). [DOI] [PubMed] [Google Scholar]
  • 22.Zhang L, Tong MH, Marks TK, Shan H & Cottrell GW SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision 8, 1–20 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Henderson JM, Malcolm GL & Schandl C Searching in the dark: Cognitive relevance drives attention in real-world scenes. Psychon Bull Rev 16, 850–856, doi: 10.3758/PBR.16.5.850 (2009). [DOI] [PubMed] [Google Scholar]
  • 24.Tatler BW, Hayhoe MM, Land MF & Ballard DH Eye guidance in natural vision: Reinterpreting salience. Journal of Vision 11, doi: 10.1167/11.5.5 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Nuthmann A & Henderson JM Object-based attentional selection in scene viewing. J Vis 10, 20, doi: 10.1167/10.8.20 (2010). [DOI] [PubMed] [Google Scholar]
  • 26.Einhäuser W, Spain M & Perona P Objects predict fixations better than early saliency. Journal of Vision 8, 1–26 (2008). [DOI] [PubMed] [Google Scholar]
  • 27.Stoll J, Thrun M, Nuthmann A & Einhäuser W. Overt attention in natural scenes: Objects dominate features. Vision Research 107, 36–48, doi: 10.1016/j.visres.2014.11.006 (2015). [DOI] [PubMed] [Google Scholar]
  • 28.Maunsell JH & Treue S. Feature-based attention in visual cortex. Trends Neurosci 29, 317–322, doi: 10.1016/j.tins.2006.04.001 (2006). [DOI] [PubMed] [Google Scholar]
  • 29.Nordfang M & Wolfe JM Guided Search for Triple Conjunctions Atten Percept Psychophys 76, 1535–1559, doi: 10.3758/s13414-014-0715-2 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Friedman-Hill SR & Wolfe JM Second-order parallel processing: Visual search for the odd item in a subset. J. Experimental Psychology: Human Perception and Performance 21, 531–551 (1995). [DOI] [PubMed] [Google Scholar]
  • 31.Olshausen BA & Field DJ Sparse coding of sensory inputs. Curr Opin Neurobiol 14, 481–487, doi: 10.1016/j.conb.2004.07.007 (2004). [DOI] [PubMed] [Google Scholar]
  • 32.DiCarlo JJ, Zoccolan D & Rust NC How Does the Brain Solve Visual Object Recognition? Neuron 73, 415–434, doi: 10.1016/j.neuron.2012.01.010 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Vickery TJ, King L-W & Jiang Y. Setting up the target template in visual search. J. of Vision 5, 81–92 (2005). [DOI] [PubMed] [Google Scholar]
  • 34.Neisser U. Cognitive Psychology. (Appleton, Century, Crofts, 1967). [Google Scholar]
  • 35.Treisman A & Gelade G. A feature-integration theory of attention. Cognitive Psychology 12, 97–136 (1980). [DOI] [PubMed] [Google Scholar]
  • 36.Wolfe JM, Cave KR & Franzel SL Guided Search: An alternative to the Feature Integration model for visual search. J. Exp. Psychol. - Human Perception and Perf. 15, 419–433 (1989). [DOI] [PubMed] [Google Scholar]
  • 37.Wolfe JM in Oxford Handbook of Attention (eds Nobre AC & Kastner S) 11–55 (Oxford U Press, 2014). [Google Scholar]
  • 38.Wolfe JM & Horowitz TS What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience 5, 495–501 (2004). [DOI] [PubMed] [Google Scholar]
  • 39.Proulx MJ & Green M. Does apparent size capture attention in visual search? Evidence from the Müller-Lyer illusion. Journal of Vision 11, doi: 10.1167/11.13.21 (2011). [DOI] [PubMed] [Google Scholar]
  • 40.Kunar MA & Watson DG When are abrupt onsets found efficiently in complex visual search? Evidence from multielement asynchronous dynamic search. Journal of Experimental Psychology: Human Perception and Performance 40, 232–252, doi: 10.1037/a0033544 (2014). [DOI] [PubMed] [Google Scholar]
  • 41.Zhang X, Huang J, Yigit-Elliott S & Rosenholtz R. Cube search, revisited. Journal of Vision 15, 9–9, doi: 10.1167/15.3.9 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Qin XA, Koutstaal W & Engel S. The Hard Won Benefits of Familiarity on Visual Search — Familiarity Training on Brand Logos Has Little Effect on Search Speed and Efficiency. Atten Percept Psychophys 76, 914–930 (2014). [DOI] [PubMed] [Google Scholar]
  • 43.Shirama A. Stare in the crowd: Frontal face guides overt attention independently of its gaze direction. Perception 41, 447–459 (2012). [DOI] [PubMed] [Google Scholar]
  • 44.von Grunau M & Anston C. The detection of gaze direction: A stare-in-the-crowd effect. Perception 24, 1297–1313 (1995). [DOI] [PubMed] [Google Scholar]
  • 45.Enns JT & MacDonald SC The Role of Clarity and Blur in Guiding Visual Attention in Photographs. Journal of Experimental Psychology: Human Perception and Performance 39, 568–578, doi: 10.1037/a0029877 (2013). [DOI] [PubMed] [Google Scholar]
  • 46.Li H, Bao Y, Poppel E & Su YH A unique visual rhythm does not pop out. Cogn Process 15, 93–97, doi: 10.1007/s10339-013-0581-1 (2014). [DOI] [PubMed] [Google Scholar]
  • 47.Meyerhoff HS, Schwan S & Huff M. Perceptual animacy: Visual search for chasing objects among distractors. Journal of Experimental Psychology: Human Perception and Performance 40, 702–717, doi: 10.1037/a0034846 (2014). [DOI] [PubMed] [Google Scholar]
  • 48.Notebaert L, Crombez G, Van Damme S, De Houwer J & Theeuwes J. Signals of Threat Do Not Capture, but Prioritize, Attention: A Conditioning Approach. Emotion 11, 81–89, doi: 10.1037/a0021286 (2011). [DOI] [PubMed] [Google Scholar]
  • 49.Alexander RG, Schmidt J & Zelinsky GJ Are summary statistics enough? Evidence for the importance of shape in guiding visual search. Visual Cognition 22, 595–609 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Yamins DLK & DiCarlo JJ Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 19, 356–365, doi: 10.1038/nn.4244 (2016). [DOI] [PubMed] [Google Scholar]
  • 51.Reijnen E, Wolfe JM & Krummenacher J. Coarse Guidance by Numerosity in Visual Search. Atten Percept Psychophys 75, 16–28, doi: 10.3758/s13414-012-0379-8. (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Godwin HJ, Hout MC & Menneer T. Visual similarity is stronger than semantic similarity in guiding visual search for numbers. Psychon Bull Rev 21, 689–695, doi: 10.3758/s13423-013-0547-4 (2014). [DOI] [PubMed] [Google Scholar]
  • 53.Gao T, Newman GE & Scholl BJ The psychophysics of chasing: A case study in the perception of animacy. Cogn Psychol 59, 154–179, doi: 10.1016/j.cogpsych.2009.03.001 (2009). [DOI] [PubMed] [Google Scholar]
  • 54.Wolfe JM & Franzel SL Binocularity and visual search. Perception and Psychophysics 44, 81–93 (1988). [DOI] [PubMed] [Google Scholar]
  • 55.Paffen C, Hooge I, Benjamins J & Hogendoorn H. A search asymmetry for interocular conflict. Attention, Perception, & Psychophysics 73, 1042–1053, doi: 10.3758/s13414-011-0100-3 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Paffen CL, Hessels RS & Van der Stigchel S. Interocular conflict attracts attention. Atten Percept Psychophys 74, 251–256 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Zou B, Utochkin IS, Liu Y & Wolfe JM Binocularity and visual search - Revisited. Atten Percept Psychophys, doi: 10.3758/s13414-016-1247-8 (2016). [DOI] [PubMed] [Google Scholar]
  • 58.Hershler O & Hochstein S. At first sight: a high-level pop out effect for faces. Vision Res 45, 1707–1724 (2005). [DOI] [PubMed] [Google Scholar]
  • 59.Golan T, Bentin S, DeGutis JM, Robertson LC & Harel A. Low-level vs. category-specific mechanisms in visual search: Evidence from visual expertise, developmental prosopagnosia, and a computational approach. Atten Percept Psychophys APP12_108r2 (2013). [Google Scholar]
  • 60.VanRullen R. On second glance: still no high-level pop-out effect for faces. Vision Res 46, 3017–3027 (2006). [DOI] [PubMed] [Google Scholar]
  • 61.Hershler O & Hochstein S. With a careful look: Still no low-level confound to face pop-out. Vision Res 46, 3028–3035 (2006). [DOI] [PubMed] [Google Scholar]
  • 62.Frischen A, Eastwood JD & Smilek D. Visual search for faces with emotional expressions. Psychological Bulletin 134, 662–676 (2008). [DOI] [PubMed] [Google Scholar]
  • 63.Dugué L, McLelland D, Lajous M & VanRullen R. Attention searches nonuniformly in space and in time. Proceedings of the National Academy of Sciences 112, 15214–15219, doi: 10.1073/pnas.1511331112 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Gerritsen C, Frischen A, Blake A, Smilek D & Eastwood JD Visual search is not blind to emotion Percept Psychophys 70, 1047–1059 (2008). [DOI] [PubMed] [Google Scholar]
  • 65.Aks DJ & Enns JT Visual search for size is influenced by a background texture gradient. J. Experimental Psychology: Human Perception and Performance 22, 1467–1481 (1996). [DOI] [PubMed] [Google Scholar]
  • 66.Richards W & Kaufman L. ‘Centre-of-gravity’ tendencies for fixations and flow patterns. Perception and Psychophysics 5, 81–84 (1969). [Google Scholar]
  • 67.Kuhn G & Kingstone A. Look away! Eyes and arrows engage oculomotor responses automatically. Atten Percept Psychophys 71, 314–327, doi: 10.3758/APP.71.2.314 (2009). [DOI] [PubMed] [Google Scholar]
  • 68.Rensink RA in Human Attention in Digital Environments (ed Roda C) Ch. Ch 3, 63–92 (Cambridge University Press, 2011). [Google Scholar]
  • 69.Enns JT & Rensink RA Scene based properties influence visual search. Science 247, 721–723 (1990). [DOI] [PubMed] [Google Scholar]
  • 70.Wolfe JM & Myers L. Fur in the midst of the waters: visual search for material type is inefficient. J Vis 10, 8, doi: 10.1167/10.9.8 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Kunar MA & Watson DG Visual search in a multi-element asynchronous dynamic (MAD) world. J Exp Psychol Hum Percept Perform 37, 1017–1031, doi: 10.1037/a0023093 (2011). [DOI] [PubMed] [Google Scholar]
  • 72.Ehinger KA & Wolfe JM How is visual search guided by shape? Using features from deep learning to understand preattentive “shape space”. Poster presented at Vision Sciences Society meeting, St. Petersburg, FL, May 12-18, 2016. (2016). [Google Scholar]
  • 73.Vickery TJ, King LW & Jiang Y. Setting up the target template in visual search. J Vis 5, 81–92, doi: 10.1167/5.1.8 (2005). [DOI] [PubMed] [Google Scholar]
  • 74.Biederman I, Mezzanotte RJ & Rabinowitz JC Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology 14, 143–177 (1982). [DOI] [PubMed] [Google Scholar]
  • 75.Henderson JM Object identification in context: The visual processing of natural scenes. Canadian J of Psychology 46, 319–341 (1992). [DOI] [PubMed] [Google Scholar]
  • 76.Henderson JM & Hollingworth A. High-level scene perception. Annu Rev Psychol 50, 243–271 (1999). [DOI] [PubMed] [Google Scholar]
  • 77.Vo ML & Wolfe JM Differential ERP Signatures Elicited by Semantic and Syntactic Processing in Scenes. Psychological Science 24, 1816–1823, doi: 10.1177/0956797613476955 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.‘t Hart BM, Schmidt HCEF, Klein-Harmeyer I & Einhäuser W. Attention in natural scenes: contrast affects rapid visual processing and fixations alike. Philosophical Transactions of the Royal Society B: Biological Sciences 368, doi: 10.1098/rstb.2013.0067 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Henderson JM, Brockmole JR, Castelhano MS & Mack ML in Eye Movement Research: Insights into Mind and Brain. (eds van Gompel Roger, Fischer Martin, Murray Wayne, & Hill Robin) 537–562 (Elsevier, 2007). [Google Scholar]
  • 80.Rensink RA Seeing, sensing, and scrutinizing. Vision Res 40, 1469–1487 (2000). [DOI] [PubMed] [Google Scholar]
  • 81.Castelhano MS & Henderson JM Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search. J. Exp. Psychol: Human Perception and Performance 33, 753–763 (2007). [DOI] [PubMed] [Google Scholar]
  • 82.Vo ML-H & Henderson JM The time course of initial scene processing for eye movement guidance in natural scene search. Journal of Vision 10, 1–13 (2010). [DOI] [PubMed] [Google Scholar]
  • 83.Hollingworth A. Two forms of scene memory guide visual search: Memory for scene context and memory for the binding of target object to scene location. Visual Cognition 17, 273–291 (2009). [Google Scholar]
  • 84.Oliva A in Neurobiology of attention (eds Itti L, Rees G, & Tsotsos J) 251–257 (Academic Press / Elsevier., 2005). [Google Scholar]
  • 85.Greene MR & Oliva A. The briefest of glances: the time course of natural scene understanding. Psychol Sci 20, 464–472 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Castelhano M & Heaven C. Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin & Review 18, 890–896, doi: 10.3758/s13423-011-0107-8 (2011). [DOI] [PubMed] [Google Scholar]
  • 87.Malcolm GL & Henderson JM Combining top-down processes to guide eye movements during real-world scene search. Journal of Vision 10, 1–11 (2010). [DOI] [PubMed] [Google Scholar]
  • 88.Torralba A, Oliva A, Castelhano MS & Henderson JM Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search. Psychological Review 113, 766–786 (2006). [DOI] [PubMed] [Google Scholar]
  • 89.Vo ML & Wolfe JM When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. J Exp Psychol Hum Percept Perform 38, 23–41, doi: 10.1037/a0024147 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Vo ML-H & Wolfe JM The role of memory for visual search in scenes. Ann. N.Y. Acad. Sci, 1–10 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Hillstrom AP, Scholey H, Liversedge SP & Benson V. The effect of the first glimpse at a scene on eye movements during search. Psychon Bull Rev 19, 204–210, doi: 10.3758/s13423-011-0205-7 (2012). [DOI] [PubMed] [Google Scholar]
  • 92.Watson DG & Humphreys GW Visual marking: Prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90–122 (1997). [DOI] [PubMed] [Google Scholar]
  • 93.Donk M & Theeuwes J. Prioritizing selection of new elements: bottom-up versus top-down control. Percept Psychophys 65, 1231–1242 (2003). [DOI] [PubMed] [Google Scholar]
  • 94.Maljkovic V & Nakayama K. Priming of popout: I. Role of features. Memory & Cognition 22, 657–672 (1994). [DOI] [PubMed] [Google Scholar]
  • 95.Lamy D, Zivony A & Yashar A. The role of search difficulty in intertrial feature priming. Vision Research 51, 2099–2109, doi: 10.1016/j.visres.2011.07.010 (2011). [DOI] [PubMed] [Google Scholar]
  • 96.Wolfe J, Horowitz T, Kenner NM, Hyle M & Vasan N. How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research 44, 1411–1426 (2004). [DOI] [PubMed] [Google Scholar]
  • 97.Wolfe JM, Butcher SJ, Lee C & Hyle M. Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. J Exp Psychol: Human Perception and Performance 29, 483–502 (2003). [DOI] [PubMed] [Google Scholar]
  • 98.Kristjansson A. Simultaneous priming along multiple feature dimensions in a visual search task. Vision Res 46, 2554–2570 (2006). [DOI] [PubMed] [Google Scholar]
  • 99.Kristjansson A & Driver J. Priming in visual search: Separating the effects of target repetition, distractor repetition and role-reversal. Vision Res 48, 1217–1232 (2008). [DOI] [PubMed] [Google Scholar]
  • 100.Sigurdardottir HM, Kristjansson A & Driver J. Repetition streaks increase perceptual sensitivity in visual search of brief displays. Visual Cognition 16, 643–658 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Kruijne W & Meeter M. Long-Term Priming of Visual Search Prevails Against the Passage of Time and Counteracting Instructions. Journal of Experimental Psychology: Learning, Memory, and Cognition 42, 1293–1303, doi: 10.1037/xlm0000233 (2016). [DOI] [PubMed] [Google Scholar]
  • 102.Chun M & Jiang Y. Contextual cuing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology 36, 28–71 (1998). [DOI] [PubMed] [Google Scholar]
  • 103.Chun MM & Jiang Y. Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science 10, 360–365 (1999). [Google Scholar]
  • 104.Kunar MA, Flusberg SJ, Horowitz TS & Wolfe JM Does Contextual Cueing Guide the Deployment of Attention? J Exp Psychol Hum Percept Perform 33, 816–828 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105.Geyer T, Zehetleitner M & Muller HJ Contextual cueing of pop-out visual search: When context guides the deployment of attention. Journal of Vision 10, article 20 (2010). [DOI] [PubMed] [Google Scholar]
  • 106.Schankin A & Schubo A. Contextual cueing effects despite spatially cued target locations. Psychophysiology 47, 717–727, doi: 10.1111/j.1469-8986.2010.00979.x (2010). [DOI] [PubMed] [Google Scholar]
  • 107.Schankin A, Hagemann D & Schubo A. Is contextual cueing more than the guidance of visual-spatial attention? Biol Psychol 87, 58–65, doi: 10.1016/j.biopsycho.2011.02.003 (2011). [DOI] [PubMed] [Google Scholar]
  • 108.Peterson MS & Kramer AF Attentional guidance of the eyes by contextual information and abrupt onsets. Percept Psychophys 63, 1239–1249 (2001). [DOI] [PubMed] [Google Scholar]
  • 109.Tseng YC & Li CS Oculomotor correlates of context-guided learning in visual search. Percept Psychophys 66, 1363–1378 (2004). [DOI] [PubMed] [Google Scholar]
  • 110.Wolfe JM, Klempen N & Dahlen K. Post-attentive vision. Journal of Experimental Psychology:Human Perception & Performance 26, 693–716 (2000). [DOI] [PubMed] [Google Scholar]
  • 111.Brockmole JR & Henderson JM Using real-world scenes as contextual cues for search. Visual Cognition 13, 99–108 (2006). [Google Scholar]
  • 112.Hollingworth A & Henderson JM Accurate visual memory for previously attended objects in natural scenes. J. Exp. Psychol: Human Perception and Performance 28, 113–136 (2002). [Google Scholar]
  • 113.Vo ML-H & Wolfe JM When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. J. Exp. Psychol: Human Perception and Performance 38, 23–41, doi: 10.1037/a0024147 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Flowers JH & Lohr DJ How does familiarity affect visual search for letter strings? Perception and Psychophysics 37, 557–567 (1985). [DOI] [PubMed] [Google Scholar]
  • 115.Krueger LE The category effect in visual search depends on physical rather than conceptual differences. Perception and Psychophysics 35, 558–564 (1984). [DOI] [PubMed] [Google Scholar]
  • 116.Frith U. A curious effect with reversed letters explained by a theory of schema. Perception and Psychophysics 16, 113–116 (1974). [Google Scholar]
  • 117.Wang Q, Cavanagh P & Green M. Familiarity and pop-out in visual search. Perception and Psychophysics 56, 495–500 (1994). [DOI] [PubMed] [Google Scholar]
  • 118.Fan JE & Turk-Browne NB Incidental Biasing of Attention From Visual Long-Term Memory. J Exp Psychol Learn Mem Cogn, doi: 10.1037/xlm0000209 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Huang L. Familiarity does not aid access to features. Psychonomic Bulletin & Review 18, 278–286, doi: 10.3758/s13423-011-0052-6 (2011). [DOI] [PubMed] [Google Scholar]
  • 120.Wolfe JM, Boettcher SEP, Josephs EL, Cunningham CA & Drew T. You look familiar, but I don’t care: Lure rejection in hybrid visual and memory search is not based on familiarity. J. Exp. Psychol: Human Perception and Performance 41, 1576–1587 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Anderson BA, Laurent PA & Yantis S. Value-driven attentional capture. Proceedings of the National Academy of Sciences 108, 10367–10371 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 122.MacLean M & Giesbrecht B. Irrelevant reward and selection histories have different influences on task-relevant attentional selection. Attention, Perception, & Psychophysics 77, 1515–1528, doi: 10.3758/s13414-015-0851-3 (2015). [DOI] [PubMed] [Google Scholar]
  • 123.Anderson BA & Yantis S. Persistence of value-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance 39, 6–9, doi: 10.1037/a0030860 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Moran R, Zehetleitner MH, Mueller HJ & Usher M. Competitive Guided Search: Meeting the challenge of benchmark RT distributions. Journal of Vision 13(8):24, doi: 10.1167/13.8.24 (2013). [DOI] [PubMed] [Google Scholar]
  • 125.Wolfe JM in Integrated Models of Cognitive Systems (ed Wayne Gray) 99–119 (Oxford, 2007). [Google Scholar]
