Abstract
When stimuli are associated with a reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging.
Keywords: selective attention, spatial attention, reward learning, contextual learning
By selectively attending to certain stimuli and not others, organisms prioritize information in the environment for perceptual processing, determining which stimuli guide decision-making and action. Attentional selection has long been characterized as arising from the interplay between goal-directed (e.g., Folk, Remington, & Johnston, 1992) and salience-driven mechanisms (Theeuwes, 1992, 2010; Yantis & Jonides, 1984). In order to promote survival and well-being, however, it is also important that the attention system selects stimuli associated with reward (Anderson, 2013). Recent evidence shows that attentional priority is modulated by the reward associated with visual stimuli (e.g., Della Libera & Chelazzi, 2009; Kiss, Driver, & Eimer, 2009; Krebs, Boehler, & Woldorff, 2010; Raymond & O'Brien, 2009), and that the receipt of high reward strongly primes attentional selection (e.g., Della Libera & Chelazzi, 2006; Hickey, Chelazzi, & Theeuwes, 2010a, 2010b). When a stimulus feature is learned to predict a reward outcome, a bias to attend to stimuli possessing that feature develops such that these stimuli will involuntarily capture attention even when physically non-salient, currently task-irrelevant, and no longer associated with reward (e.g., Anderson, Laurent, & Yantis, 2011a, 2011b; Anderson & Yantis, 2012, 2013; Qi, Zeng, Ding, & Li, 2013). This automatic orienting of attention to stimuli previously associated with reward has been referred to as value-driven attentional capture (Anderson et al., 2011b).
Whether a stimulus feature is predictive of reward will vary according to contingencies that govern which reward-associated objects tend to be found in which contexts. For example, when foraging for food, red berries are likely to be found close to the ground in bushes, whereas yellow bananas are often found above the ground in treetops. Such location-based contingencies are known to exert a strong, largely implicit influence on search strategy. Searched-for targets are found more efficiently when they appear within a familiar spatial configuration of stimuli, a phenomenon referred to as contextual cuing (Chun & Jiang, 1998). Attention is biased towards locations that have been more likely to contain a target in the past, despite a lack of reported awareness of this target–location relationship (Jiang & Swallow, 2013; Jiang, Swallow, Rosenbaum, & Herzig, 2013).
An attentional bias for a particular region of space can also arise as a result of associative reward learning. When selecting a target stimulus in a particular location is associated with a comparatively large reward, targets subsequently appearing in that location are more quickly and accurately reported even when rewards are no longer available (Chelazzi et al., 2014; Sawaki & Raymond, this issue). Although associative reward learning can influence attention to both stimulus features and spatial locations, whether the attention system is sensitive to the confluence of these two sources of visual information in predicting reward (i.e., reward is contingent upon a particular feature appearing in a particular location) is unknown.
Value-driven attentional selection is not limited to cases in which the properties of the stimulus and context match what has been rewarded in the past. Rather, the influence of associative reward learning on attention has been shown to be capable of transferring across stimuli and contexts. In the study by Sawaki and Raymond (this issue), the observed location bias was evident even for stimuli appearing at the previously high-reward location that were themselves never rewarded. In another study in which comparatively high reward was associated with a stimulus feature (color), different objects possessing that color were preferentially attended in a different experimental task (Anderson et al., 2012). Such generalization of value-based attentional priority can be adaptive, allowing the organism to leverage prior learning in newly encountered contexts. However, as previously discussed, the reward value of a particular feature may vary reliably across spatial locations. When this is the case, can the value-driven attentional bias for a particular stimulus feature be location-dependent? Is the attention system only sensitive to the aggregated value of a stimulus feature, abstracted from where it appears in the visual field, or is value-driven attentional priority for stimulus features modulated by learning about the locations in which a particular feature is predictive of reward?
In the present study, participants experienced a training phase in which targets of a particular color were only rewarded when they appeared on a particular side of the display. In Experiment 1A, participants searched for a red target that was only followed by reward when presented on either the left or right side of the display. In the test phase, I examined whether value-driven attentional capture by a red stimulus would be specific to when that stimulus appeared in the location in which it was previously rewarded. Experiment 1B tested this same idea, but with two target colors each of which was only rewarded when appearing on a different side of the display (red on right, green on left or vice versa). In this latter case, neither target color nor target location was itself predictive of reward, which could only be predicted by the conjunction of target color and target location. In both experiments, value-driven attentional capture by a previously reward-associated feature was found to be modulated by whether that feature appeared in a location within which it was rewarded during training.
Methods
Experiment 1A
Participants
Sixteen participants were recruited from the Johns Hopkins University community. All reported normal or corrected-to-normal visual acuity and normal color vision.
Apparatus
A Mac Mini equipped with Matlab software and Psychophysics Toolbox extensions (Brainard, 1997) was used to present the stimuli on an Asus VE247 monitor. The participants viewed the monitor from a distance of approximately 70 cm in a dimly lit room. Manual responses were entered using a standard keyboard.
Training phase
Stimuli
Each trial consisted of a fixation display, a search array, and a feedback display (Figure 1A). The fixation display contained a white fixation cross (.8° × .8° visual angle) presented in the center of the screen against a black background, and the search array consisted of the fixation cross surrounded by six colored circles (each 3.1° × 3.1°), three on each side of fixation. The middle of the three shapes on each side of the display was presented 7.3° center-to-center from fixation, and the two outer shapes were presented 5.7° from the vertical meridian, 5.5° above and below the horizontal meridian.
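For readers implementing a similar display, the stimulus geometry described above can be summarized as a set of (x, y) offsets from fixation in degrees of visual angle. This is a descriptive sketch for convenience, not code from the original experiment.

```python
# (x, y) offsets from fixation in degrees of visual angle, with positive x
# rightward and positive y upward; three stimulus positions on each side.
left_positions  = [(-7.3, 0.0), (-5.7, +5.5), (-5.7, -5.5)]
right_positions = [(+7.3, 0.0), (+5.7, +5.5), (+5.7, -5.5)]
```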
Figure 1.
Sequence and time course of trial events. (A) Targets during the training phase were defined by color, and participants reported the identity of the line segment inside of the target (vertical or horizontal) with a key press. Correct responses were followed by the delivery of monetary reward feedback, which varied based on the combination of target color and target location. (B) During the test phase, the target was defined as the unique shape, and no reward feedback was provided. On half of the trials, one of the non-target items—the distractor—was rendered in the color of a formerly rewarded target.
The target was a red circle, exactly one of which was presented on each trial. The color of each nontarget circle was drawn from the set {green, blue, pink, orange, yellow, white} without replacement. A white bar appeared inside each of the six circles; for the target it was oriented either vertically or horizontally, and for each of the nontarget circles it was tilted at 45° to the left or to the right (randomly determined for each nontarget). The feedback display indicated the amount of monetary reward earned on the current trial, as well as the total accumulated reward.
Design
The target appeared in each of the six possible stimulus positions equally often. Correct identification of the oriented bar within the target was followed by a reward of 10¢ when the target appeared on one side of the display (right or left, counterbalanced across participants) and by 0¢ feedback when it appeared on the other side.
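To make this contingency concrete, the following is a minimal sketch of how the reward rule might be expressed in code. The function and variable names are illustrative assumptions for exposition and are not taken from the experiment scripts.

```python
def reward_cents(target_side, rewarded_side, correct):
    """Return the reward (in cents) for a training-phase trial in Experiment 1A.

    target_side:   'left' or 'right', the side on which the red target appeared.
    rewarded_side: the side assigned to this participant (counterbalanced).
    correct:       whether the oriented-bar report was correct.
    """
    if not correct:
        return 0  # incorrect or timed-out responses earn nothing
    return 10 if target_side == rewarded_side else 0  # 10 cents only on the rewarded side


# Example: a participant rewarded on the right earns nothing for a correct
# response to a left-side target.
assert reward_cents('left', 'right', correct=True) == 0
assert reward_cents('right', 'right', correct=True) == 10
```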
Procedure
The training phase consisted of 360 trials, which were preceded by 48 practice trials. Each trial began with the presentation of the fixation display for a randomly varying interval of 400, 500, or 600 ms. The search array then appeared and remained on screen until a response was made or until 1000 ms had elapsed, at which point the trial timed out. The search array was followed by a blank screen for 1000 ms, the reward feedback display for 1500 ms, and a blank 1000 ms inter-trial interval (ITI).
Participants made a forced-choice target identification by pressing the "z" and "m" keys for vertically and horizontally oriented bars within the target, respectively. Correct responses were followed by monetary reward feedback in which either 10¢ or 0¢ was added to the participant's total earnings, depending on the location of the target as outlined above. Incorrect responses were followed by feedback in which the word "Incorrect" was presented in place of the monetary increment, and responses that were too slow (i.e., no response before the trial timed out) were followed by a 500 ms, 1000 Hz tone and no monetary increment (i.e., only the total earnings were presented in the feedback display).
Test phase
Stimuli
Each trial consisted of a fixation display, a search array, and a feedback display (Figure 1B). The six shapes now consisted of either a diamond among circles or a circle among diamonds, and the target was defined as the unique shape. On a subset of the trials, one of the nontarget shapes was rendered in the color of a formerly reward-associated target from the training phase (referred to as the valuable distractor); the target shape was never the color of a target from the training phase. The feedback display only informed participants if their prior response was correct or not.
Design
Target identity, target location, distractor identity, and distractor location were fully crossed and counterbalanced, and trials were presented in a random order. Thus, both the target shape and the distractor shape varied unpredictably trial-to-trial. Red (i.e., valuable) distractors were presented on 50% of all trials, while the remaining trials contained no red stimulus (distractor absent trials).
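As an illustration of what such a fully crossed design could look like, the sketch below builds one balanced block of test-phase trials, pairing each distractor-present cell with a distractor-absent trial so that the valuable distractor appears on half of the trials. The factor levels and dictionary keys are assumptions for exposition; the sketch does not attempt to reproduce the exact cell counts of the original trial list.

```python
import itertools
import random

positions = list(range(6))               # six stimulus locations
target_shapes = ['circle', 'diamond']    # the unique (target) shape on a trial

trials = []
for shape, t_pos, d_pos in itertools.product(target_shapes, positions, positions):
    if d_pos == t_pos:
        continue  # the colored distractor is a nontarget, so it never shares the target's location
    # one distractor-present trial ...
    trials.append({'target_shape': shape, 'target_pos': t_pos, 'distractor_pos': d_pos})
    # ... matched with one distractor-absent trial, yielding 50% distractor-present overall
    trials.append({'target_shape': shape, 'target_pos': t_pos, 'distractor_pos': None})

random.shuffle(trials)  # trials were presented in a random order
```

A full session would concatenate several such blocks to reach the total trial count.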
Procedure
Participants were instructed to ignore the color of the shapes and to focus on identifying the oriented bar within the unique shape using the same orientation-to-response mapping. The test phase consisted of 480 trials, which were preceded by 32 practice (distractor absent) trials. Trials timed out after 1500 ms. The search array was followed immediately by non-reward feedback (the word "Incorrect") for 1000 ms in the event of an incorrect response (this display was omitted following a correct response) and then by a 500 ms ITI; no monetary rewards were given in the test phase, and the task instructions made no reference to reward. As in the training phase, if the trial timed out, the computer emitted a 500 ms, 1000 Hz tone. Upon completion of the experiment, participants were paid the cumulative reward they had earned in the training phase.
Exit question
At the conclusion of the test phase, participants were asked to select which of three statements they believed best described the reward contingencies in the training phase (see Appendix).
Data analysis
Only correct responses were included in analyses of RT, and RTs more than three SDs above or below the mean of their respective condition for each participant were trimmed. This trimming excluded less than 1% of all trials.
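For concreteness, a minimal sketch of this trimming procedure is given below, assuming the single-trial data sit in a long-format table with 'subject', 'condition', 'correct', and 'rt' columns; these column names are illustrative rather than taken from the original analysis code.

```python
import pandas as pd

def trim_rts(df, sd_cutoff=3.0):
    """Keep correct-trial RTs within `sd_cutoff` SDs of the mean of their
    condition, computed separately for each participant."""
    correct = df[df['correct']].copy()
    grouped = correct.groupby(['subject', 'condition'])['rt']
    z = (correct['rt'] - grouped.transform('mean')) / grouped.transform('std')
    return correct[z.abs() <= sd_cutoff]
```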
Experiment 1B
Participants
Twelve new participants were recruited from the Johns Hopkins University community. All reported normal or corrected-to-normal visual acuity and normal color vision.
Apparatus
The apparatus was identical to that used in Experiment 1A.
Training phase
Stimuli
Each trial consisted of a fixation display, a search array, and a feedback display as in Experiment 1A (Figure 1A). Experiment 1B differed in that the target was now defined as the red or green circle, exactly one of which was present in the display on each trial. The color of each nontarget circle was drawn from the set {cyan, blue, pink, orange, yellow, white} without replacement.
Design
Each color target appeared in each location equally often. The amount of reward that could be earned on each trial was determined by the conjunction of target color and target location. Each color target was rewarded 10¢ for correct identification when it appeared on a particular side of the display, and 0¢ when appearing on the other side of the display. For each participant, one color target (counterbalanced across participants) was rewarded when appearing on the right side of the display while the other was rewarded when appearing on the left side of the display; therefore, neither color nor location alone predicted reward, which was instead predicted only by the conjunction of target color and location (e.g., red on the right and green on the left).
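Extending the hypothetical sketch from Experiment 1A, the conjunction rule can be expressed as a per-participant mapping from target color to its rewarded side. The assignment shown here is one of the two counterbalanced possibilities, and the names are again illustrative rather than taken from the experiment scripts.

```python
# One participant's (hypothetical) assignment; the mapping was counterbalanced
# across participants, so neither color nor side alone predicted reward.
rewarded_side_for_color = {'red': 'right', 'green': 'left'}

def reward_cents(target_color, target_side, correct):
    """10 cents only for the rewarded color-location conjunction."""
    if not correct:
        return 0
    return 10 if target_side == rewarded_side_for_color[target_color] else 0
```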
Procedure
The procedure was identical to that of Experiment 1A, with the exception that the training phase consisted of 480 trials and correct responses were followed by 10¢ or 0¢ according to the contingencies outlined above.
Test phase
Stimuli
Each trial consisted of a fixation display, a search array, and a feedback display as in Experiment 1A (Figure 1B). All that differed in Experiment 1B was that the valuable distractor was now equally often red or green (rather than only red), and cyan was included in the color set as in the preceding training phase.
Design
Target identity, target location, distractor identity, and distractor location were fully crossed and counterbalanced, and trials were presented in a random order. Half of the trials contained a valuable distractor (a red or green nontarget), and half did not (distractor absent trials). Red and green distractors were presented equally often on distractor present trials (i.e., each color on 25% of all trials), with each color distractor appearing equally often in each of the six possible stimulus positions.
Procedure
The procedure was identical to that of Experiment 1A.
Exit question
At the conclusion of the test phase, participants were asked to select which of six statements they believed best described the reward contingencies in the training phase (see Appendix). Due to experimenter error, one of the participants was not administered the exit question.
Data analysis
Only correct responses were included in analyses of RT, and RTs more than three SDs above or below the mean of their respective condition for each participant were trimmed. This trimming excluded less than 2% of all trials.
Results
Experiment 1A
Training phase
Participants were not significantly faster, t(15) = 1.19, p = .255, or more accurate, t(15) = 0.17, p = .867, to report the target when it appeared on the rewarded compared to the unrewarded side of the display (see Table 1). This is consistent with previous findings and suggests that in simple search tasks such as the one used here, top-down goals favor targets regardless of reward value (e.g., Anderson et al., 2011a, 2012, 2013). As the training task emphasized accuracy in order to obtain reward, participants may also have responded conservatively, making RT a potentially insensitive measure to detect value-based effects. Most importantly, however, the training phase provided participants with the opportunity to experience the experimental reward contingencies, and the effect of this experience on involuntary attentional selection was examined in the test phase.
Table 1.
Mean response time and accuracy by target location in the training phase, separately for each experiment.
| | Experiment 1A | | Experiment 1B | |
|---|---|---|---|---|
| | Unrewarded | Rewarded | Unrewarded | Rewarded |
| Response Time (ms) | 545 | 538 | 580 | 583 |
| Accuracy | 96.0% | 96.2% | 95.1% | 94.4% |
Test phase
A repeated measures analysis of variance (ANOVA) with distractor condition (absent, unrewarded location, rewarded location) as a factor revealed a marginally significant main effect, F(2,30) = 2.72, p = .082 (see Figure 2). Planned orthogonal comparisons revealed that RT was significantly slower when the distractor was presented in a location in which it was previously rewarded compared to the other two conditions (averaged together), t(15) = 2.43, p = .028, d = .61, which did not significantly differ, t(15) = 0.34, p = .736. Thus, the red distractor captured attention when presented in a location in which it was previously rewarded, but not when it appeared in a location in which it was never rewarded. There was no main effect of distractor condition evident in accuracy, F(2,30) = 1.02, p = .375 (91.7%, 91.1%, and 92.2% across the absent, unrewarded, and rewarded distractor conditions, respectively).
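As an illustration of how these planned orthogonal comparisons map onto the condition means, the sketch below computes the two contrasts from a per-participant matrix of mean RTs. The data are random placeholders and the paired-samples effect size (dz) is one common convention, so the snippet is illustrative rather than a record of the original analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: rows are participants, columns are mean RTs (ms) in the
# distractor-absent, unrewarded-location, and rewarded-location conditions.
rt = 650 + 30 * rng.standard_normal((16, 3))
absent, unrewarded, rewarded = rt[:, 0], rt[:, 1], rt[:, 2]

# Contrast 1: rewarded-location distractor vs. the other two conditions averaged.
t1, p1 = stats.ttest_rel(rewarded, (absent + unrewarded) / 2)

# Contrast 2 (orthogonal to the first): distractor absent vs. unrewarded location.
t2, p2 = stats.ttest_rel(absent, unrewarded)

# Paired-samples effect size (dz) for the first contrast.
diff = rewarded - (absent + unrewarded) / 2
dz = diff.mean() / diff.std(ddof=1)
```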
Figure 2.
Mean response time by distractor condition in the test phase, separately for each experiment. Error bars reflect the within-subjects S.E.M.
Collapsing across distractor condition, participants were not significantly faster to report the shape target in the test phase when it appeared on the side of the display in which the red target was rewarded during training, mean difference = 7 ms, t(15) = 0.96, p = .352. This suggests that a purely spatial bias, independent of feature information, was weak to nonexistent. Instead, the combination of feature and location had an especially strong effect on attentional selection, above and beyond either alone.
Experiment 1B
Training phase
Participants were not significantly faster, t(11) = −0.98, p = .347, or more accurate, t(11) = −1.00, p = .337, to report the target when it appeared on the side of the display in which its color was rewarded, mirroring the results from Experiment 1A (see Table 1).
Test phase
A repeated measures ANOVA with distractor condition (absent, unrewarded location, rewarded location) as a factor revealed a significant main effect, F(2,22) = 4.73, p = .020 (see Figure 2). As in the prior experiment, planned orthogonal comparisons revealed that RT was significantly slower when the distractor was presented in a location in which it was previously rewarded compared to the other two conditions (averaged together), t(11) = 2.78, p = .018, d = .80, which did not significantly differ, t(11) = −0.71, p = .495. There was no main effect of distractor condition evident in accuracy, F < 1 (92.8%, 93.5%, and 92.6% across the absent, unrewarded, and rewarded distractor conditions, respectively). Thus, even with more complex contingencies in which only the combination of a particular color in a particular location predicts reward, value-driven attentional capture is selective for when this combination matches what has been rewarded in the past.
Combined Analysis
Collapsing across experiment, the location in which a distractor feature had been rewarded had a robust influence on RT in the test phase, F(2,54) = 7.02, p = .002. RT was slower when a distractor was presented in a location in which it was previously rewarded compared to when the very same stimulus was presented in a location in which it was never rewarded, t(27) = 2.70, p = .012, d = .51; while the former captured attention when compared to distractor absent trials, mean difference = 15 ms, t(27) = 4.49, p < .001, d = .85, the latter did not, mean difference = −1 ms, t(27) = −0.18, p = .861.
Exit Question
In Experiment 1A, 11 of the 16 participants indicated that the rewards were random, three indicated the correct contingency, and two indicated the incorrect contingency. In Experiment 1B, seven of the eleven participants who were administered the question indicated that the rewards were random, two indicated the correct contingency, and two indicated an incorrect contingency. Across both experiments, the number of participants indicating the correct contingency was less than what would be expected by random guessing.
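As a rough illustration of the chance comparison (not a calculation reported in the original analysis), the expected number of correct endorsements under uniform guessing over the available response options (three in Experiment 1A, six in Experiment 1B) can be computed as follows.

```python
expected = 16 * (1 / 3) + 11 * (1 / 6)  # about 7.2 correct endorsements expected by guessing
observed = 3 + 2                        # 5 correct endorsements observed across experiments
print(round(expected, 2), observed)     # 7.17 vs. 5
```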
Discussion
The present study demonstrates that when a stimulus feature (in this case, color) is associated with a reward outcome in one spatial location but not another, value-driven attentional capture by a stimulus possessing that feature is modulated by the location within which it appears. Specifically, when the combination of feature and location match what has been rewarded in the past, value-driven attentional capture by that feature is observed. In contrast, when that same feature appears in a location within which it has gone unrewarded, it does not produce evidence of attentional capture.
In Experiment 1A, a single target feature was selectively rewarded on one side of the display during training. In the test phase of this experiment, stimuli possessing this feature only captured attention when appearing in the previously rewarded location. Such selectivity could be explained by either a bias to attend to a particular feature appearing in a particular spatial position, or two separate biases, one for the reward-associated feature and one for the reward-associated location, working in tandem to guide selection. However, in Experiment 1B, each of the two target-defining features and each of the two sides of the display was alone unpredictive of reward, which could only be predicted from the confluence of a particular feature in a particular location. Thus, the selectivity of value-driven attentional capture in the test phase of this experiment can only be explained by a bias that is more narrowly tuned to specific combinations of feature and location information.
Interestingly, in Experiment 1A, the observed value-based attentional bias was found to be specific to the previous target-defining feature. Although rewards were only delivered for stimuli appearing on one particular side of the display, a more general bias to attend to that region of space was not found to significantly benefit the processing of a shape-defined target. On the surface, this conflicts with previous studies reporting attentional biases for stimuli appearing in previously reward-predictive locations (Chelazzi et al., 2014; Sawaki & Raymond, this issue). Several aspects of the present experimental design likely contributed to this discrepancy. First, the target feature during training was consistent across trials in Experiment 1A, making the bound representation of color and location just as predictive of reward as location alone. Second, the target during the test phase was defined by its relative salience (shape singleton), encouraging a broad distribution of attention across the entire stimulus array. The fact that the previously rewarded target feature captured attention in the test phase of Experiment 1A, but only when appearing in a particular location, demonstrates that feature-based attentional biases arising from reward history can be modulated by spatial context, a conclusion corroborated by Experiment 1B.
The findings from the present study provide the first evidence that value-driven attentional priorities can be sensitive to contextual information. Rather than a color being associated with reward without regard to spatial context, which would have produced equivalent attentional capture across all spatial locations, the attentional priority afforded to that color as a function of reward history was contingent upon where the feature appeared in visual space. This contrasts with other studies demonstrating the ability of value-based attentional priorities to generalize across stimuli, locations, and tasks (Anderson et al., 2012; Sawaki & Raymond, this issue). A critical difference is that in these previous studies, reward was entirely predicted by either feature or location alone. Thus, it appears that the attention system defaults to context-general representations of stimulus value when contextual information is itself non-predictive of reward, but is capable of incorporating contextual information when such information predicts whether a feature will be rewarded or not. In this sense, organisms are poised to exploit previous reward learning in newly encountered contexts, but can appropriately limit the influence of that learning based on context when doing so is supported by the reward structure, thereby avoiding overgeneralization of learning.
The mechanisms by which spatial context modulates value-driven attentional biases for stimulus features are unclear. One possibility is that the combination of feature identity and spatial position is necessary to generate a bias signal that guides selection. Another possibility is that a reward-associated feature always generates a bias signal regardless of where it is presented, but this bias signal is suppressed when the context of that feature suggests that expected value should be low. Assessment of the processing of nontargets as a function of spatial context, potentially using neuroimaging methods, might provide insight into this issue by allowing for direct measurement of suppression. A related question concerns the locus of the modulation of attentional priority. The observed contextual modulation is consistent with a top-down influence on value-based attentional priority resulting from feedback from higher-level visual representations, but a biasing of stimulus-driven visual input remains an equally tenable explanation. The brain's representation of elementary visual features such as color is retinotopically organized (e.g., Johnson, Hawken, & Shapley, 2008), and even higher-level representations of complex visual objects are sensitive to the position of these objects in space (e.g., Kravitz, Kriegeskorte, & Baker, 2010; Kravitz, Vinson, & Baker, 2008). To the degree that the observed findings reflect changes in the tuning of stimulus-driven visual processing, value-driven attentional capture should reflect an egocentric, or person-centered, orientation, as is true of attentional biases for high-probability target locations (Jiang & Swallow, 2013). Alternatively, to the degree that the observed modulation of value-driven attention reflects top-down feedback signals, it should be evident for other, more complex forms of contextual information that are not bound to feature representations.
Value-driven attentional capture has been shown to reflect both covert and overt orienting (Anderson et al., 2011a, 2011b; Anderson & Yantis, 2012; Buckner, Belopolsky, & Theeuwes, this issue; Theeuwes & Belopolsky, 2012; Tran, Pearson, Donkin, Most, & Le Pelley, this issue). Indeed, covert and overt attention are interrelated, with covert attention guiding eye movements (e.g., Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Thompson & Bichot, 2005). The slowing of RT observed in the present study might reflect a contribution from either or both of these selection mechanisms. However, it is important to note that these findings cannot be explained by anticipatory eye movements and instead reflect reactive mechanisms of control driven by the relationship between stimulus properties and prior learning. No significant bias was observed for targets appearing in the previously reward-associated location during the test phase of Experiment 1A, and both locations were overall equally predictive of reward in Experiment 1B, precluding the selection of a particular location prior to the onset of the stimulus array as an explanation for the spatially-specific capture observed in the present study.
Interestingly, participants were largely unable to correctly report which of several reward contingencies were in place during the training phase when provided with a forced-choice question, despite the fact that the actual contingency was 100% predictive of reward. This is consistent with the reward learning that automatically guides attention being implicit in nature, relying on the co-occurrence of visual information and reward feedback rather than the establishment of strategic priorities that persist due to reinforcement, as has been suggested previously (e.g., Anderson et al., 2013; Anderson & Yantis, 2013; Buckner et al., this issue; Della Libera, Perlato, & Chelazzi, 2011; Sali, Anderson, & Yantis, in press; Tran et al., this issue). However, it should be noted that the evidence provided by the forced-choice question is only suggestive of implicit learning and cannot rule out awareness of the reward contingencies that either extinguished over the course of the test phase or was not sufficiently strong for participants to endorse the correct contingency over random contingencies.
The present study provides insights into how the attention system supports efficient foraging. Rather than relying exclusively on goals and strategies to inform when and where to search for particular objects, my findings show that reward learning can automatically guide attention in a way that is sensitive to complex, situationally-dependent reward contingencies. By tuning attentional priorities in accordance with the co-occurrence of visual events and reward outcomes, organisms can locate valuable stimuli in the future with minimal effort. Such automatic guidance is surprisingly efficient, taking into account multiple sources of information in biasing selection. This efficiency might help explain why value-driven attention is not easily overridden by goal-directed attentional control mechanisms: the more efficient value-driven attention is, the less need there will be for the organism to override value-based selection.
Acknowledgements
This research was supported by NIH grants F31-DA033754 and R01-DA013165.
Appendix
Questions used to assess awareness of the stimulus–reward contingencies.
Which option do you believe best describes the part of the experiment in which you were earning money (please choose only one):
Experiment 1A
The red circle was generally worth more when it appeared on the right side of the screen
The red circle was generally worth more when it appeared on the left side of the screen
How much money I received was random and unrelated to where the red circle appeared
Experiment 1B
The red circle was generally worth more than the green circle regardless of which side of the screen it appeared on
The green circle was generally worth more than the red circle regardless of which side of the screen it appeared on
The two circles were worth the same overall, but one color was worth more when it appeared on the left side of the screen and the other was worth more when it appeared on the right side of the screen
Both color circles were generally worth more when presented on the left side of the screen
Both color circles were generally worth more when presented on the right side of the screen
How much money I received was random and unrelated to both color and location
References
- Anderson BA. A value-driven mechanism of attentional selection. Journal of Vision. 2013;13(3):1–16. doi: 10.1167/13.3.7.
- Anderson BA, Faulkner ML, Rilee JJ, Yantis S, Marvel CL. Attentional bias for non-drug reward is magnified in addiction. Experimental and Clinical Psychopharmacology. 2013;21:499–506. doi: 10.1037/a0034575.
- Anderson BA, Laurent PA, Yantis S. Learned value magnifies salience-based attentional capture. PLoS ONE. 2011a;6(11):e27926. doi: 10.1371/journal.pone.0027926.
- Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. Proceedings of the National Academy of Sciences, USA. 2011b;108:10367–10371. doi: 10.1073/pnas.1104047108.
- Anderson BA, Laurent PA, Yantis S. Generalization of value-based attentional priority. Visual Cognition. 2012;20:647–658. doi: 10.1080/13506285.2012.679711.
- Anderson BA, Yantis S. Value-driven attentional and oculomotor capture during goal-directed, unconstrained viewing. Attention, Perception, and Psychophysics. 2012;74:1644–1653. doi: 10.3758/s13414-012-0348-2.
- Anderson BA, Yantis S. Persistence of value-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance. 2013;39:6–9. doi: 10.1037/a0030860.
- Brainard DH. The psychophysics toolbox. Spatial Vision. 1997;10:433–436.
- Buckner B, Belopolsky AV, Theeuwes J. Distractors that signal reward capture the eyes: Automatic reward capture without previous goal-directed selection. this issue.
- Chelazzi L, Estocinova J, Calletti R, Lo Gerfo E, Sani I, Della Libera C, Santandrea E. Altering spatial priority maps via reward-based learning. Journal of Neuroscience. 2014;34:8594–8604. doi: 10.1523/JNEUROSCI.0277-14.2014.
- Chun MM, Jiang Y. Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology. 1998;36:28–71. doi: 10.1006/cogp.1998.0681.
- Della Libera C, Chelazzi L. Visual selective attention and the effects of monetary reward. Psychological Science. 2006;17:222–227. doi: 10.1111/j.1467-9280.2006.01689.x.
- Della Libera C, Chelazzi L. Learning to attend and to ignore is a matter of gains and losses. Psychological Science. 2009;20:778–784. doi: 10.1111/j.1467-9280.2009.02360.x.
- Della Libera C, Perlato A, Chelazzi L. Dissociable effects of reward on attentional learning: From passive associations to active monitoring. PLoS ONE. 2011;6(4):e19460. doi: 10.1371/journal.pone.0019460.
- Deubel H, Schneider WX. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research. 1996;36:1827–1837. doi: 10.1016/0042-6989(95)00294-4.
- Hickey C, Chelazzi L, Theeuwes J. Reward changes salience in human vision via the anterior cingulate. Journal of Neuroscience. 2010a;30:11096–11103. doi: 10.1523/JNEUROSCI.1026-10.2010.
- Hickey C, Chelazzi L, Theeuwes J. Reward guides vision when it's your thing: Trait reward-seeking in reward-mediated visual priming. PLoS ONE. 2010b;5(11):e14087. doi: 10.1371/journal.pone.0014087.
- Hoffman JE, Subramaniam B. The role of visual attention in saccadic eye movements. Perception and Psychophysics. 1995;57:787–795. doi: 10.3758/bf03206794.
- Jiang YV, Swallow KM. Spatial reference frame of incidentally learned attention. Cognition. 2013;126:378–390. doi: 10.1016/j.cognition.2012.10.011.
- Jiang YV, Swallow KM, Rosenbaum GM, Herzig C. Rapid acquisition but slow extinction of an attentional bias in space. Journal of Experimental Psychology: Human Perception and Performance. 2013;39:87–99. doi: 10.1037/a0027611.
- Johnson EN, Hawken MJ, Shapley R. The orientation selectivity of color-responsive neurons in macaque V1. Journal of Neuroscience. 2008;28:8096–8106. doi: 10.1523/JNEUROSCI.1404-08.2008.
- Kiss M, Driver J, Eimer M. Reward priority of visual target singletons modulates event-related potential signatures of attentional selection. Psychological Science. 2009;20:245–251. doi: 10.1111/j.1467-9280.2009.02281.x.
- Krebs RM, Boehler CN, Woldorff MG. The influence of reward associations on conflict processing in the Stroop task. Cognition. 2010;117:341–347. doi: 10.1016/j.cognition.2010.08.018.
- Kravitz DJ, Kriegeskorte N, Baker CI. High-level visual object representations are constrained by position. Cerebral Cortex. 2010;20:2916–2925. doi: 10.1093/cercor/bhq042.
- Kravitz DJ, Vinson LD, Baker CI. How position dependent is visual object recognition? Trends in Cognitive Sciences. 2008;12:114–122. doi: 10.1016/j.tics.2007.12.006.
- Qi S, Zeng Q, Ding C, Li H. Neural correlates of reward-driven attentional capture in visual search. Brain Research. 2013;1532:32–43. doi: 10.1016/j.brainres.2013.07.044.
- Raymond JE, O'Brien JL. Selective visual attention and motivation: The consequences of value learning in an attentional blink task. Psychological Science. 2009;20:981–988. doi: 10.1111/j.1467-9280.2009.02391.x.
- Sali AW, Anderson BA, Yantis S. The role of reward prediction in the control of attention. Journal of Experimental Psychology: Human Perception and Performance. in press. doi: 10.1037/a0037267.
- Sawaki R, Raymond JE. Irrelevant spatial value learning modulates visual search. this issue.
- Theeuwes J. Perceptual selectivity for color and form. Perception and Psychophysics. 1992;51:599–606. doi: 10.3758/bf03211656.
- Theeuwes J. Top-down and bottom-up control of visual selection. Acta Psychologica. 2010;135:77–99. doi: 10.1016/j.actpsy.2010.02.006.
- Thompson KG, Bichot NP. A visual salience map in the primate frontal eye field. Progress in Brain Research. 2005;147:251–262. doi: 10.1016/S0079-6123(04)47019-8.
- Tran S, Pearson D, Donkin C, Most SB, Le Pelley M. Cognitive control and counterproductive oculomotor capture by reward-related stimuli. this issue.
- Yantis S, Jonides J. Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance. 1984;10:601–621. doi: 10.1037/0096-1523.10.5.601.


