Author manuscript; available in PMC: 2024 Oct 1.
Published in final edited form as: Q J Exp Psychol (Hove). 2022 Dec 27;76(10):2401–2409. doi: 10.1177/17470218221144318

Generalisation of value-based attentional priority is category-specific

Andrew Clement, Laurent Grégoire, and Brian A. Anderson
PMCID: PMC10319404  NIHMSID: NIHMS1910201  PMID: 36453711

Abstract

A large body of research suggests that previously reward-associated stimuli can capture attention. Recent evidence also suggests that value-driven attentional biases can occur for a particular category of objects. However, it is unclear how broadly these category-level attentional biases can generalise. In the present study, we examined whether value-driven attentional biases can generalise to new exemplars of a category or semantically related categories using a modified version of the value-driven attentional capture paradigm. In an initial training phase, participants searched for two categories of objects and were rewarded for correctly fixating members of one target category. In a subsequent test phase, participants searched for two new categories of objects. A new exemplar of one of the previous target categories or a member of a semantically related category could appear as a critical distractor in this phase. Participants were more likely to initially fixate the critical distractor and fixated the distractor longer when it was a new exemplar of the previously rewarded category. However, similar findings were not observed for members of semantically related categories. Together, these findings suggest that the generalisation of value-based attentional priority is category-specific.

Keywords: Attentional capture, eye movements, value-driven attention, reward, semantic relationships

Introduction

A large body of research suggests that reward learning can influence the allocation of attention. For example, previously reward-associated stimuli have been found to capture attention, even when they are not visually salient and are no longer relevant to observers’ task (Anderson et al., 2011). This phenomenon, known as value-driven attentional capture, suggests that reward associations can influence attentional selection independently of visual salience or observers’ task goals (e.g., Anderson et al., 2021; Awh et al., 2012). Similar value-driven attentional biases have been observed using eye movements (Anderson & Yantis, 2012; Le Pelley et al., 2015; Theeuwes & Belopolsky, 2012) and event-related potentials (ERPs) associated with attentional selection (Kiss et al., 2009; Qi et al., 2013). Moreover, reward associations have been found to modulate performance in other attentional tasks, including intertrial priming (Della Libera & Chelazzi, 2006; Hickey et al., 2010) and the attentional blink (Raymond & O’Brien, 2009). Together, these findings suggest that reward learning plays an important role in the allocation of attention.

Typically, most studies have examined value-driven attentional biases using relatively simple features, such as colour (e.g., Anderson et al., 2011) or orientation (e.g., Laurent et al., 2015; Theeuwes & Belopolsky, 2012). However, real-world objects are rarely defined by such simple features. In such cases, it may be more efficient to search for objects based on their category rather than the specific features of individual exemplars. A growing body of research suggests that observers can efficiently search for a particular category of objects, such as food or furniture (Nako et al., 2014; Wyble et al., 2013; Yang & Zelinsky, 2009). Recent evidence also suggests that value-driven attentional biases can occur for a particular category of objects. For example, previously reward-associated categories have been found to bias attention in real-world scenes (Hickey et al., 2015; Hickey & Peelen, 2015, 2017) and modulate ERPs associated with attentional selection (Donohue et al., 2016). Together, these findings suggest that value-driven attentional biases can not only occur for relatively simple features, but can also occur at the level of object categories.

While the previous findings suggest that value-driven attentional biases can occur for a particular category of objects, it is unclear how broadly these category-level attentional biases can generalise. For example, if observers learn to associate a particular category with reward, can value-driven attentional biases generalise to new exemplars of that category or semantically related categories? A growing body of research suggests that semantic relationships among objects can bias attention (Belke et al., 2008; de Groot et al., 2016; Moores et al., 2003). Recent evidence also suggests that value-driven attentional biases can generalise to semantically related stimuli under certain conditions. For example, Grégoire and Anderson (2019) found that synonyms of previously reward-associated words produced greater interference in a Stroop task. Interestingly, this effect was only observed when participants were unaware of the reward contingencies. Thus, participants who were aware of these contingencies appeared to suppress value-driven attentional biases (see also Leganes-Fonteneau et al., 2019). However, because this study used words as stimuli, it remains unclear whether these biases can generalise to new exemplars of a category or semantically related categories. Moreover, it remains unclear whether these stimuli can capture attention in a visual search task.

In the present study, we examined whether value-driven attentional biases can generalise to new exemplars of a category or semantically related categories using a modified version of the value-driven attentional capture paradigm (e.g., Anderson et al., 2011). In an initial training phase, participants searched for two categories of objects and were rewarded for correctly fixating members of one category. In a subsequent test phase, participants searched for two new categories of objects. A new exemplar of one of the previous target categories or a member of a semantically related category could appear as a critical distractor in this phase. If value-driven attentional biases generalise to new exemplars of a category, participants should be more likely to initially fixate the critical distractor when it is a new exemplar of the previously rewarded category. Moreover, if these biases extend to semantically related categories, participants should be more likely to initially fixate the critical distractor when it is semantically related to the previously rewarded category. Finally, if awareness plays a role in the present findings, we predicted that these biases would only be observed when participants are unaware of the reward contingencies (Grégoire & Anderson, 2019).

Methods

Participants

Twenty-four participants (15 female; mean age = 22.0 years, SD = 3.4 years) were recruited from the Texas A&M community. All participants were between the ages of 18 and 35 and reported normal or corrected-to-normal visual acuity and normal colour vision. All participants received their total earnings from the training phase (minimum = $15.61, maximum = $16.80).

Apparatus and stimuli

Stimuli were adapted from Konkle et al. (2010) and Clement et al. (2022) and consisted of 384 images of objects. Each image belonged to one of 24 object categories, which were further grouped into 12 superordinate categories (see Figure 1). Object categories consisted of different images of the same object (e.g., different images of chairs), while superordinate categories consisted of different object categories that belonged to the same higher-level category (e.g., chairs and couches, which together comprised furniture). Each object category consisted of 16 images. In a previous study, a group of 160 participants rated how closely related the images were to their superordinate category or a different category using a 5-point Likert scale (Clement et al., 2022). Critically, participants rated the images as significantly more related to their superordinate category (M = 4.57, SD = 0.41) than to a different category (M = 1.53, SD = 0.65), t(159) = 40.79, p < .001, ηp² = .913. Half of the images from each object category were presented during the training phase, and the other half were presented during the test phase. All images subtended 5° × 5° and were presented in colour on a white background. The images were arranged into search displays, which consisted of four objects equally spaced around an imaginary circle with a radius of 8°. Eight object categories (pants, shirt, bread, sandwich, chair, couch, hammer, screwdriver) could serve as targets, while the remaining object categories served as distractors. Two object categories were selected as targets for the training phase and two were selected as targets for the test phase, with the constraint that no two target categories could belong to the same superordinate category. The object categories that served as targets in the training and test phases were counterbalanced across participants.
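For concreteness, the target-assignment constraint described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors' code; the pairing of the eight target categories into superordinate pairs follows Figure 1, but the function names and random-assignment details are assumptions.

```python
import random

# Superordinate pairs containing the eight possible target categories
# (Figure 1); the remaining eight pairs served only as distractors.
TARGET_PAIRS = [("pants", "shirt"), ("bread", "sandwich"),
                ("chair", "couch"), ("hammer", "screwdriver")]

def assign_targets(rng):
    """Pick two training and two test targets such that no two targets
    belong to the same superordinate category (one member per pair)."""
    pairs = TARGET_PAIRS[:]
    rng.shuffle(pairs)
    chosen = [rng.choice(pair) for pair in pairs]
    return chosen[:2], chosen[2:]  # (training targets, test targets)

def split_images(rng, n_images=16):
    """Half of each category's 16 images for training, half for test."""
    order = rng.sample(range(n_images), n_images)
    return order[:8], order[8:]

rng = random.Random(2022)
training_targets, test_targets = assign_targets(rng)
print(training_targets, test_targets)  # e.g. ['hammer', 'pants'] ['sandwich', 'chair']
```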

Figure 1.

Example images from each object category in the present study. The object categories in the left column (pants, shirt, bread, sandwich, chair, couch, hammer, screwdriver) served as targets, while the remaining object categories served as distractors.

Stimuli were presented on a 27-inch LCD monitor with a refresh rate of 60 Hz. Participants sat 70 cm from the monitor, such that it subtended 46.4° horizontally and 26.1° vertically. Participants’ eye movements were recorded using an EyeLink 1000 Plus eye-tracking system (SR Research Ltd.) with a sampling rate of 1000 Hz.
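As a quick sanity check on this viewing geometry, the reported visual angles follow from the standard formula θ = 2·arctan(size/2d). A minimal sketch, assuming a 16:9 panel of roughly 59.8 × 33.6 cm (the exact panel dimensions are not given in the text, so small discrepancies from the reported values are expected):

```python
import math

def visual_angle_deg(size_cm, dist_cm=70.0):
    """Visual angle subtended by an object of a given size at a given distance."""
    return 2 * math.degrees(math.atan((size_cm / 2) / dist_cm))

print(visual_angle_deg(59.8))  # ~46.3 deg horizontally (46.4 deg reported)
print(visual_angle_deg(33.6))  # ~27.0 deg vertically (26.1 deg reported)
```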

Training phase

At the beginning of the training phase, participants were instructed to search for members of two object categories (e.g., hammers and pants). Participants were not shown a preview of the target on each trial. At the beginning of each trial, a black fixation cross (0.5° × 0.5°) was presented in the centre of the screen (see Figure 2a). After fixation was registered within 1° of the centre of the fixation cross for a continuous period of 500 ms, an array of four objects appeared on the screen. A member of one of the two target categories (e.g., an image of a hammer or pants) appeared on each trial, and participants were instructed to fixate (“look directly at”) this object. The other objects were randomly selected from the remaining object categories, with the constraints that no two objects could belong to the same superordinate category and no object could belong to the same superordinate category as either of the target categories. Participants received 7¢ for correctly fixating members of one target category and 0¢ for correctly fixating members of the other target category. Participants were not informed of these reward contingencies. A trial ended after 1000 ms or once fixation was registered within 3.75° of the centre of the target for a continuous period of 100 ms. After a 1000 ms blank screen, a feedback display indicating participants’ current and total earnings was presented for 1500 ms. Participants received an error message (the word “miss” presented in place of the reward feedback) if they failed to fixate the target within 1000 ms.
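The gaze-contingent criteria above (a 500 ms hold within 1° to start a trial; a 100 ms hold within 3.75° of the target to end it) reduce to a simple run-length check over gaze samples. A minimal sketch in Python, operating on raw (x, y) samples in degrees rather than on any particular eye-tracker API:

```python
import math

def hold_index(samples, centre, radius_deg, hold_ms, rate_hz=1000):
    """Return the sample index at which gaze has remained within
    radius_deg of centre for a continuous hold_ms, or None."""
    needed = int(hold_ms * rate_hz / 1000)  # e.g. 500 samples at 1000 Hz
    run = 0
    for i, (x, y) in enumerate(samples):
        if math.hypot(x - centre[0], y - centre[1]) <= radius_deg:
            run += 1
            if run >= needed:
                return i
        else:
            run = 0  # the hold must be continuous
    return None

# Trial start: 500 ms within 1 deg of the fixation cross; trial end:
# 100 ms within 3.75 deg of the target (the 1000 ms timeout is left to
# the calling trial loop).
```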

Figure 2.

(a) Example trial sequence in the training phase. (b) Example trial sequence in the test phase.

Participants completed 24 practice trials followed by four blocks of 120 trials, for a total of 480 trials. The two target categories were presented randomly and equally often within a block. As a result, the reward association of the target was counterbalanced across trials. The target also appeared equally often at each of the four locations. Thus, the location of the target was also counterbalanced across trials.
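This counterbalancing amounts to fully crossing target category with target location within each block. A minimal sketch (a hypothetical helper, not the authors' code):

```python
import itertools
import random

def build_block(target_categories, n_trials=120, n_locations=4, seed=None):
    """Cross target category with location and repeat to fill the block,
    so both factors occur equally often (here, 120 / (2 * 4) = 15 each)."""
    cells = list(itertools.product(target_categories, range(n_locations)))
    trials = cells * (n_trials // len(cells))
    random.Random(seed).shuffle(trials)
    return trials  # list of (target_category, target_location) tuples

block = build_block(["hammer", "pants"], seed=1)
assert len(block) == 120
```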

Test phase

The task was the same as in the training phase (see Figure 2b). However, at the beginning of the test phase, participants were instructed to search for members of two new object categories (e.g., sandwiches and chairs). As in the training phase, participants were not shown a preview of the target on each trial. A member of one of the two new target categories (e.g., an image of a sandwich or chair) appeared on each trial, and participants were instructed to fixate this object. Participants were informed that they would not be rewarded for correctly fixating members of either target category. A member of one of four object categories could also appear as a critical distractor on each trial. On same category trials, this distractor was a new exemplar of one of the target categories from the training phase (e.g., a new image of a hammer or pants). On related category trials, this distractor belonged to the same superordinate category as one of these categories (e.g., an image of a screwdriver or shirt). On distractor-absent trials, no critical distractor was presented. All other details of the experimental procedure were identical to those in the training phase.

Participants completed four blocks of 120 trials, for a total of 480 trials. The two target categories and four distractor categories were presented randomly and equally often within a block. As a result, the reward association of the distractor and the distractor condition were counterbalanced across trials. The target and critical distractor also appeared equally often at each of the four locations. Thus, the location of the target and critical distractor were also counterbalanced across trials.

Contingency awareness test

After completing the experiment, participants were asked whether they noticed any difference between the target categories from the training phase, and if so, whether they could explain this difference. Participants were coded as noticing the reward contingencies if they correctly identified the rewarded category. Participants then completed a short test to further assess their awareness of these contingencies. On each trial, participants viewed a search display from the training phase and were asked to indicate whether they thought they would receive 7¢ or 0¢ for correctly fixating the target on this trial. Participants completed a total of 24 trials, and the target category and the location of the target were counterbalanced across trials.

Data analysis

We measured which object was initially fixated on each trial and dwell times on each object, as well as whether the target was fixated within 1000 ms and the time taken to fixate the target. Fixation was registered if eye position fell within 3.75° of the centre of an object for a continuous period of 50 ms. Dwell times were computed as the sum of all fixations on an object. To quantify the probability of initially fixating a distractor on distractor-absent trials, one of the objects on these trials was dummy-coded as the critical distractor; the location of this object was counterbalanced across trials. Response times were measured from the onset of the search display until a valid fixation on the target was registered, from which 100 ms was subtracted to yield the time to initially fixate the target. Response times less than 100 ms or greater than 1000 ms were excluded from analysis.
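A minimal sketch of this scoring scheme, assuming gaze has already been parsed into fixations with a position (in degrees) and a duration; the helper names are hypothetical:

```python
import math

ROI_RADIUS_DEG = 3.75  # distance from an object's centre that counts as fixating it
MIN_FIX_MS = 50        # minimum continuous fixation to register on an object

def fixated_object(fix, object_centres):
    """Index of the object whose ROI contains this fixation, or None.
    fix is (x, y, duration_ms)."""
    x, y, dur = fix
    if dur < MIN_FIX_MS:
        return None
    for i, (ox, oy) in enumerate(object_centres):
        if math.hypot(x - ox, y - oy) <= ROI_RADIUS_DEG:
            return i
    return None

def score_trial(fixations, object_centres):
    """Return the initially fixated object and per-object dwell times
    (the sum of all fixation durations on each object)."""
    hits = [fixated_object(f, object_centres) for f in fixations]
    first = next((h for h in hits if h is not None), None)
    dwell = {}
    for h, (_, _, dur) in zip(hits, fixations):
        if h is not None:
            dwell[h] = dwell.get(h, 0) + dur
    return first, dwell

def keep_rt(rt_ms):
    """Exclude response times below 100 ms or above 1000 ms."""
    return 100 <= rt_ms <= 1000
```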

All dependent variables in the training phase were analysed using paired-samples t-tests, and all dependent variables in the test phase were analysed using 2 (reward: rewarded, unrewarded) × 2 (distractor condition: same category, related category) repeated-measures analyses of variance (ANOVAs). We also conducted planned comparisons in which same category and related category trials for the rewarded and unrewarded categories were compared with each other and with distractor-absent trials. Critically, this allowed us to quantify any attentional biases toward the critical distractor by directly comparing each distractor condition with distractor-absent trials. All analyses were conducted using IBM SPSS Statistics software (IBM Corp.).
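The authors report running these analyses in SPSS; for readers working in Python, an equivalent 2 × 2 repeated-measures ANOVA and paired t-test can be run with the pingouin package. The sketch below uses synthetic data purely to illustrate the model structure:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: 24 participants x 2 (reward) x 2 (distractor).
rng = np.random.default_rng(0)
rows = [(p, r, d) for p in range(24)
        for r in ("rewarded", "unrewarded")
        for d in ("same", "related")]
df = pd.DataFrame(rows, columns=["participant", "reward", "distractor"])
df["rt"] = rng.normal(460, 40, len(df))

# 2 x 2 repeated-measures ANOVA on response times.
aov = pg.rm_anova(data=df, dv="rt", within=["reward", "distractor"],
                  subject="participant")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared

# Paired t-test, as used for the training-phase comparisons.
wide = df.groupby(["participant", "reward"])["rt"].mean().unstack()
print(pg.ttest(wide["rewarded"], wide["unrewarded"], paired=True))
```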

Results

Training phase

Accuracy in the training phase was high (M = 96.97%, SD = 3.82%), indicating that participants were correctly fixating the target. Participants responded significantly faster to members of the rewarded category (M = 427 ms, SD = 42 ms) than to members of the unrewarded category (M = 495 ms, SD = 55 ms), t(23) = −6.04, p < .001, ηp² = .614, and initially fixated members of the rewarded category (M = 75.14%, SD = 13.73%) significantly more often than members of the unrewarded category (M = 59.22%, SD = 17.45%), t(23) = 4.94, p < .001, ηp² = .514. Together, these results suggest that attention was biased toward members of the rewarded category.

Test phase

Accuracy in the test phase was high (M = 97.26%, SD = 1.93%), again indicating that participants were correctly fixating the target. To test whether attention was biased toward members of the rewarded category, we first analysed average response times. The analysis revealed a main effect of distractor condition that approached but did not reach significance, F(1, 23) = 3.91, p = .060, ηp² = .145, with participants responding slower on same category trials (M = 465 ms, SD = 35 ms) than on related category trials (M = 457 ms, SD = 39 ms). There was neither a significant main effect of reward, F(1, 23) = 2.07, p = .164, ηp² = .082, nor a significant interaction between reward and distractor condition, F(1, 23) = 0.43, p = .518, ηp² = .018. Planned comparisons revealed a significant main effect of distractor condition for the rewarded category, F(2, 46) = 5.98, p = .005, ηp² = .206, with participants responding slower on same category trials (M = 471 ms, SD = 50 ms), p = .005, and related category trials (M = 460 ms, SD = 41 ms), p = .049, than on distractor-absent trials (M = 451 ms, SD = 34 ms). Same category trials did not significantly differ from related category trials, p = .091. There was no significant main effect of distractor condition for the unrewarded category, F(2, 46) = 0.98, p = .383, ηp² = .041. Thus, participants responded slower when the distractor was a new exemplar of the previously rewarded category or a member of a semantically related category (see Figure 3). Note, however, that this effect was supported by our planned comparisons but not by the omnibus ANOVA.

Figure 3.

Average response times. The dashed line represents values on distractor-absent trials. Error bars reflect ±1 within-subjects standard error (Cousineau, 2005; Morey, 2008).
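For reference, the within-subjects standard errors in this and the following figure can be computed with the Cousineau (2005) normalisation plus Morey's (2008) correction. A minimal sketch over a participants × conditions array of condition means:

```python
import numpy as np

def within_subject_se(means):
    """Cousineau-Morey within-subjects standard error per condition.
    `means` is an (n_participants, m_conditions) array."""
    means = np.asarray(means, dtype=float)
    n, m = means.shape
    # Cousineau (2005): remove between-subject variability by centring
    # each participant's data on the grand mean.
    normed = means - means.mean(axis=1, keepdims=True) + means.mean()
    se = normed.std(axis=0, ddof=1) / np.sqrt(n)
    # Morey (2008): rescale to correct the underestimated variance.
    return se * np.sqrt(m / (m - 1))
```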

To further test whether attention was biased toward members of the rewarded category, we next analysed the proportion of first fixations on the critical distractor. There was no significant main effect of reward, F(1, 23) = 0.74, p = .399, ηp² = .031. However, there was a significant main effect of distractor condition, F(1, 23) = 7.23, p = .013, ηp² = .239, with participants initially fixating the distractor more often on same category trials (M = 17.36%, SD = 7.56%) than on related category trials (M = 13.46%, SD = 6.24%). Moreover, this effect was qualified by a significant interaction between reward and distractor condition, F(1, 23) = 7.19, p = .013, ηp² = .238. Planned comparisons revealed a significant main effect of distractor condition for the rewarded category, F(2, 46) = 8.39, p = .001, ηp² = .267, with participants initially fixating the distractor more often on same category trials (M = 20.05%, SD = 13.70%) than on related category trials (M = 13.18%, SD = 9.01%), p = .003, and distractor-absent trials (M = 11.64%, SD = 4.06%), p = .005. Related category trials did not significantly differ from distractor-absent trials, p = .378. However, there was no significant main effect of distractor condition for the unrewarded category, F(2, 46) = 1.88, p = .164, ηp² = .076. Thus, while participants were more likely to initially fixate the distractor when it was a new exemplar of the previously rewarded category, this effect was not observed for members of semantically related categories (see Figure 4a).

Figure 4.

(a) The proportion of first fixations on the critical distractor. (b) Average dwell times on the critical distractor. The dashed line represents values on distractor-absent trials. Error bars in both panels reflect ±1 within-subjects standard error (Cousineau, 2005; Morey, 2008).

Finally, to test whether attention was slower to disengage from members of the rewarded category, we analysed average dwell times on the critical distractor. Again, there was no significant main effect of reward, F(1, 22) = 1.68, p = .209, ηp² = .071. However, there was a significant main effect of distractor condition, F(1, 22) = 24.94, p < .001, ηp² = .531, with participants fixating the distractor longer on same category trials (M = 156 ms, SD = 31 ms) than on related category trials (M = 136 ms, SD = 26 ms). Moreover, this effect was qualified by a significant interaction between reward and distractor condition, F(1, 22) = 7.83, p = .010, ηp² = .263. Planned comparisons revealed a significant main effect of distractor condition for the rewarded category, F(2, 44) = 17.60, p < .001, ηp² = .444, with participants fixating the distractor longer on same category trials (M = 168 ms, SD = 39 ms) than on related category trials (M = 133 ms, SD = 27 ms), p < .001, and distractor-absent trials (M = 134 ms, SD = 27 ms), p < .001. Related category trials did not significantly differ from distractor-absent trials, p = .836. However, there was no significant main effect of distractor condition for the unrewarded category, F(2, 44) = 1.40, p = .258, ηp² = .060. Thus, while participants fixated the distractor longer when it was a new exemplar of the previously rewarded category, this effect was not observed for members of semantically related categories (see Figure 4b).

Contingency awareness test

Two participants were excluded for failing to complete the contingency awareness test. Only 7 of the remaining 22 participants reported explicitly noticing the reward contingencies. To further assess participants’ awareness of these contingencies, we analysed the proportion of trials on which participants indicated they would be rewarded for correctly fixating the target using a 2 (noticing: noticed, failed to notice) × 2 (reward: rewarded, unrewarded) mixed-model ANOVA. The analysis revealed a significant main effect of reward, F(1, 20) = 47.78, p < .001, ηp² = .705, with participants indicating they would be rewarded more often for correctly fixating members of the rewarded category (M = 97.22%, SD = 12.55%) than members of the unrewarded category (M = 25.87%, SD = 44.81%). However, there was neither a significant main effect of noticing, F(1, 20) = 0.30, p = .588, ηp² = .015, nor a significant interaction between noticing and reward, F(1, 20) = 1.09, p = .308, ηp² = .052. Thus, while participants appeared to be generally aware of the reward contingencies, awareness was not higher for participants who reported explicitly noticing these contingencies. Finally, to assess whether explicitly noticing the reward contingencies modulated any of our effects, we re-ran all of our analyses with noticing entered as a between-subjects variable. There were no significant effects involving noticing (all ps ≥ .257). Thus, explicitly noticing the reward contingencies did not appear to modulate any of our effects.
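For completeness, this mixed-model ANOVA (noticing as a between-subjects factor, reward as a within-subjects factor) can be expressed in pingouin as follows; again a sketch on synthetic data, since the original analyses were run in SPSS:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# 22 participants (7 noticed, 15 did not) x 2 reward levels; the
# dependent variable is the proportion of "rewarded" judgements.
rng = np.random.default_rng(1)
rows = [(p, "noticed" if p < 7 else "failed to notice", r)
        for p in range(22) for r in ("rewarded", "unrewarded")]
df = pd.DataFrame(rows, columns=["participant", "noticing", "reward"])
df["p_reward"] = rng.uniform(0, 1, len(df))

aov = pg.mixed_anova(data=df, dv="p_reward", within="reward",
                     subject="participant", between="noticing")
print(aov[["Source", "F", "p-unc", "np2"]])
```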

Discussion

A large body of research suggests that previously reward-associated stimuli can capture attention (Anderson et al., 2011; Della Libera & Chelazzi, 2006; Hickey et al., 2010). Recent evidence also suggests that value-based attentional biases can occur for a particular category of objects (Donohue et al., 2016; Hickey et al., 2015; Hickey & Peelen, 2015, 2017). However, it is unclear how broadly these category-level attentional biases can generalise. In the present study, we examined whether value-driven attentional biases can generalise to new exemplars of a category or semantically related categories using a modified version of the value-driven attentional capture paradigm. In an initial training phase, participants searched for two categories of objects and were rewarded for correctly fixating members of one category. In a subsequent test phase, participants searched for two new categories of objects. A new exemplar of one of the previous target categories or a member of a semantically related category could appear as a critical distractor in this phase. Participants were more likely to initially fixate the critical distractor and fixated the distractor longer when it was a new exemplar of the previously rewarded category. However, similar findings were not observed for members of semantically related categories. Together, these findings suggest that the generalisation of value-based attentional priority is category-specific.

Overall, the present findings provide new evidence regarding the scope of value-based attentional priority. A growing body of research suggests that value-driven attentional biases can occur for a particular category of objects (Donohue et al., 2016; Hickey et al., 2015; Hickey & Peelen, 2015, 2017). Recent evidence also suggests that value-driven attentional biases can generalise to semantically related stimuli under certain conditions. For example, synonyms of previously reward-associated words have been found to produce greater interference in a Stroop task (Grégoire & Anderson, 2019). Similar findings have been observed for synonyms of threat-associated words (Grégoire et al., 2021a). In the present study, we found that value-driven attentional biases generalised to new exemplars of a category. However, similar findings were not observed for members of semantically related categories. Thus, while value-driven attentional biases can generalise within a category, these biases do not appear to extend to semantically related categories.

Notably, the present findings also provide new evidence regarding the semantic guidance of attention. A growing body of research suggests that observers can efficiently search for a particular category of objects (Nako et al., 2014; Wyble et al., 2013; Yang & Zelinsky, 2009). Previous evidence also suggests that semantic relationships among objects can bias attention. For example, when observers search for a particular category of objects, attention is often biased toward members of semantically related categories (Belke et al., 2008; de Groot et al., 2016; Moores et al., 2003). Similar findings can be observed when semantic relationships are irrelevant to observers’ task, suggesting that semantic relationships can influence attentional selection independently of observers’ task goals (Malcolm et al., 2016). However, semantic relationships do not appear to be sufficient to influence other processes, such as visual awareness (Clement et al., 2019). In the present study, we found little evidence that value-driven attentional biases generalised to members of semantically related categories. Thus, like visual awareness, semantic relationships do not appear to be sufficient to influence value-driven attentional biases.

The notion that value-driven attentional biases have limited generalisability is consistent with previous evidence regarding the scope of value-based attentional priority. For example, value-driven attentional biases have been shown to be context-specific in cases where observers learn to associate stimuli with reward in a particular context (Anderson, 2015a, 2015b). Similar findings have been observed for threat-related attentional biases (Grégoire et al., 2021b), which are thought to rely on the same attentional learning mechanisms (Anderson et al., 2021). In the present study, we found little evidence that value-driven attentional biases generalised to members of semantically related categories. Together with these previous findings, our results suggest that value-driven attentional biases are largely specific to the stimuli and contexts in which reward associations are learned. Future research should further examine the scope of value-based attentional priority, including whether these category-specific and context-specific effects rely on a similar mechanism.

In the present study, we assume that participants searched for objects based on their category rather than the specific features of individual exemplars. However, because members of the same category are more visually similar than members of different categories, it is possible that the present findings were due to feature-based generalisation. Previous evidence suggests that observers often rely on category-consistent features when searching for a particular category of objects (Reeder & Peelen, 2013; Yu et al., 2016), and attention is often biased toward objects that share features with this category (Alexander & Zelinsky, 2011). Moreover, value-driven attentional biases have been found to generalise to stimuli that share features with previously rewarded stimuli (Anderson, 2017). On this account, value-driven attentional biases generalised to new exemplars that shared features with the previously rewarded category but did not extend to visually dissimilar but semantically related categories. However, because members of a category differed in colour, shape, and texture, value-driven attentional biases are evidently tolerant to variation in at least some features. Future research should attempt to clarify the role of feature-based generalisation in the present findings by examining the extent to which value-driven attentional biases generalise to visually similar but semantically unrelated categories.

Finally, it is worth noting that participants appeared to be generally aware of the reward contingencies. Previous evidence suggests that value-driven attentional biases are often implicit, and can be observed even in the absence of explicit awareness (Anderson et al., 2021). However, there is some evidence that awareness can facilitate the effects of reward learning on spatial attention (Mine et al., 2021; Sisk et al., 2020). Interestingly, Grégoire and Anderson (2019) found that value-driven attentional biases only generalised to semantically related stimuli when participants were unaware of the reward contingencies. Thus, participants who were aware of the reward contingencies appeared to suppress value-driven attentional biases (see also Leganes-Fonteneau et al., 2019). In the present study, awareness did not appear to play a substantial role in our findings. However, unlike Grégoire and Anderson’s (2019) study, participants appeared to be generally aware of the reward contingencies. Thus, it is possible that our analyses were simply underpowered to observe any effects of awareness. Moreover, Grégoire and Anderson (2019) only used semantically related stimuli in the test phase. Thus, the present findings are at least partially consistent with their findings, as value-driven attentional biases did not generalise to semantically related stimuli when participants were aware of the reward contingencies. Nonetheless, future research should attempt to clarify the role of awareness in the present findings, as well as the relationship between awareness and value-driven attentional biases in general (see also Anderson et al., 2021).

In summary, we found that the generalisation of value-based attentional priority is category-specific. Participants were more likely to initially fixate the critical distractor and fixated the distractor longer when it was a new exemplar of the previously rewarded category. However, similar findings were not observed for members of semantically related categories. Moreover, while participants appeared to be generally aware of the reward contingencies, explicitly noticing these contingencies did not appear to modulate the present findings. Together, these findings suggest that while value-driven attentional biases can generalise within a category, these biases do not appear to extend to semantically related categories.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by an NIH Research Project Grant (R01-DA046410) to Brian A. Anderson.

Footnotes

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Data accessibility statement

The data and materials from the present experiment are publicly available at the Open Science Framework website: https://osf.io/d4eau/. The experiment was not preregistered.

References

1. Alexander RG, & Zelinsky GJ (2011). Visual similarity effects in categorical search. Journal of Vision, 11(8), 1–15.
2. Anderson BA (2015a). Value-driven attentional capture is modulated by spatial context. Visual Cognition, 23(1–2), 67–81.
3. Anderson BA (2015b). Value-driven attentional priority is context specific. Psychonomic Bulletin and Review, 22(3), 750–756.
4. Anderson BA (2017). On the feature-specificity of value-driven attention. PLOS ONE, 12(5), 1–10.
5. Anderson BA, Kim H, Kim AJ, Liao M-R, Mrkonja L, Clement A, & Grégoire L (2021). The past, present, and future of selection history. Neuroscience & Biobehavioral Reviews, 130, 326–350.
6. Anderson BA, Laurent PA, & Yantis S (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences of the United States of America, 108(25), 10367–10371.
7. Anderson BA, & Yantis S (2012). Value-driven attentional and oculomotor capture during goal-directed, unconstrained viewing. Attention, Perception, & Psychophysics, 74(8), 1644–1653.
8. Awh E, Belopolsky AV, & Theeuwes J (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16(8), 437–443.
9. Belke E, Humphreys GW, Watson DG, Meyer AS, & Telling AL (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444–1458.
10. Clement A, Lim YI, & Pratt J (2022). Conceptual grouping in visual working memory: The effects of perceptual grouping, category structure, and encoding time. Manuscript submitted for publication.
11. Clement A, Stothart C, Drew T, & Brockmole JR (2019). Semantic associations do not modulate the visual awareness of objects. Quarterly Journal of Experimental Psychology, 72(5), 1224–1232.
12. Cousineau D (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45.
13. de Groot F, Huettig F, & Olivers CNL (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180–196.
14. Della Libera C, & Chelazzi L (2006). Visual selective attention and the effects of monetary rewards. Psychological Science, 17(3), 222–227.
15. Donohue SE, Hopf J-M, Bartsch MV, Schoenfeld MA, Heinze H-J, & Woldorff MG (2016). The rapid capture of attention by rewarded objects. Journal of Cognitive Neuroscience, 28(4), 529–541.
16. Grégoire L, & Anderson BA (2019). Semantic generalization of value-based attentional priority. Learning & Memory, 26(12), 460–464.
17. Grégoire L, Kim AJ, & Anderson BA (2021a). Semantic generalization of punishment-related attentional priority. Visual Cognition, 29(5), 310–317.
18. Grégoire L, Kim H, & Anderson BA (2021b). Punishment-modulated attentional capture is context-specific. Motivation Science, 7(2), 165–175.
19. Hickey C, Chelazzi L, & Theeuwes J (2010). Reward changes salience in human vision via the anterior cingulate. The Journal of Neuroscience, 30(33), 11096–11103.
20. Hickey C, Kaiser D, & Peelen MV (2015). Reward guides attention to object categories in real-world scenes. Journal of Experimental Psychology: General, 144(2), 264–273.
21. Hickey C, & Peelen MV (2015). Neural mechanisms of incentive salience in naturalistic human vision. Neuron, 85(3), 512–518.
22. Hickey C, & Peelen MV (2017). Reward selectively modulates the lingering neural representation of recently attended objects in natural scenes. The Journal of Neuroscience, 37(31), 7297–7304.
23. Kiss M, Driver J, & Eimer M (2009). Reward priority of visual target singletons modulates event-related potential signatures of attentional selection. Psychological Science, 20(2), 245–251.
24. Konkle T, Brady TF, Alvarez GA, & Oliva A (2010). Conceptual distinctiveness supports detailed visual long-term memory for objects. Journal of Experimental Psychology: General, 139(3), 558–578.
25. Laurent PA, Hall MG, Anderson BA, & Yantis S (2015). Valuable orientations capture attention. Visual Cognition, 23, 133–146.
26. Leganes-Fonteneau M, Nikolaou K, Scott R, & Duka T (2019). Knowledge about the predictive value of reward conditioned stimuli modulates their interference with cognitive processes. Learning & Memory, 26(3), 66–76.
27. Le Pelley ME, Pearson D, Griffiths O, & Beesley T (2015). When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General, 144(1), 158–171.
28. Malcolm GL, Rattinger M, & Shomstein S (2016). Intrusive effects of semantic information on visual selective attention. Attention, Perception, & Psychophysics, 78(7), 2066–2078.
29. Mine C, Yokoyama T, & Takeda Y (2021). Awareness is necessary for attentional biases by location-reward association. Attention, Perception, & Psychophysics, 83(5), 2002–2016.
30. Moores E, Laiti L, & Chelazzi L (2003). Associative knowledge controls deployment of visual selective attention. Nature Neuroscience, 6(2), 182–189.
31. Morey RD (2008). Confidence intervals from normalized data: A correction to Cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4(2), 61–64.
32. Nako R, Wu R, & Eimer M (2014). Rapid guidance of visual search by object categories. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 50–60.
33. Qi S, Zeng Q, Ding C, & Li H (2013). Neural correlates of reward-driven attentional capture in visual search. Brain Research, 1532, 32–43.
34. Raymond JE, & O’Brien JL (2009). Selective visual attention and motivation: The consequences of value learning in an attentional blink task. Psychological Science, 20(8), 981–988.
35. Reeder RR, & Peelen MV (2013). The contents of the search template for category-level search in natural scenes. Journal of Vision, 13(3), 1–13.
36. Sisk CA, Remington RW, & Jiang YV (2020). A spatial bias toward highly rewarded locations is associated with awareness. Journal of Experimental Psychology: Human Perception and Performance, 46(4), 669–683.
37. Theeuwes J, & Belopolsky AV (2012). Reward grabs the eye: Oculomotor capture by rewarding stimuli. Vision Research, 74, 80–85.
38. Wyble B, Folk C, & Potter MC (2013). Contingent attentional capture by conceptually relevant images. Journal of Experimental Psychology: Human Perception and Performance, 39(3), 861–871.
39. Yang H, & Zelinsky GJ (2009). Visual search is guided to categorically-defined targets. Vision Research, 49(16), 2095–2103.
40. Yu C-P, Maxfield JT, & Zelinsky GJ (2016). Searching for category-consistent features: A computational approach to understanding visual category representation. Psychological Science, 27(6), 870–884.
