Abstract
This study aimed to determine whether value-based attentional biases learned in the auditory domain can correspondingly shape visual attention. A learning phase established associations between auditory words and monetary rewards via a modified version of the dichotic listening task. In a subsequent test phase, participants performed a Stroop task including written representations of auditory words previously paired with reward and semantic associates of formerly rewarded words. Results support a semantic generalization of value-driven attention from the auditory to the visual domain. The findings provide valuable insight into a critical aspect of adaptation and the understanding of maladaptive behaviors (e.g., addiction).
Keywords: Selective attention, Attentional prioritization, Associative learning, Reward, Generalization
Stimuli previously paired with reward preferentially draw attention, even when they are physically nonsalient, currently task irrelevant, and no longer predictive of reward (e.g., Anderson & Halpern, 2017; Anderson et al., 2011; see Watson et al., 2019, for a review). This value-based attentional priority can generalize to objects perceptually related to rewarded stimuli (e.g., Anderson et al., 2012; Hickey & Peelen, 2015, 2017; Mine & Saiki, 2015, 2018). However, studies about generalization of stimulus–reward associations have mostly focused on perceptual cues presented within a single sensory modality. Semantic and cross-modal generalization of such associations remain largely unexplored, even though real-world learning situations, especially emotional experiences, often entail conceptual knowledge (Dunsmoor & Murphy, 2015).
Neuroimaging studies reported that stimuli previously associated with high reward produce stronger visually evoked responses in the brain, relative to previously unrewarded or less valuable stimuli (see Anderson, 2019, for a review). This effect has been observed in the ventral (object-selective) visual cortex (Barbaro et al., 2017; Hickey & Peelen, 2015, 2017; see also Anderson et al., 2014), the early visual cortex (MacLean & Giesbrecht, 2015; Serences, 2008; Serences & Saproo, 2010), and the caudate tail (Anderson et al., 2014; Kim & Anderson, 2020a, 2020b; Kim et al., 2021c; Yamamoto et al., 2013). In addition, evidence for a causal role of early visual representations in value-driven attention was provided by a study in which transcranial random noise stimulation was applied to the occipital lobe during reward training (van Koningsbruggen et al., 2016). These results suggest that value-driven attention is tied to the sensory representation of rewarded items. A semantic generalization of value-based attentional priority across sensory modalities would thus challenge assumptions about the neural mechanisms of reward history effects. Such an outcome could also provide novel insights into maladaptive behaviors such as addiction, because attentional biases for drug cues play a substantial role in motivating drug-seeking behavior and contribute to relapse (Anderson, 2016a).
Grégoire and Anderson (2019) demonstrated that attentional prioritization of stimuli associated with reward can transfer across conceptual knowledge independently of perceptual features. They devised a Stroop task in which neutral words were paired with high, low, or no monetary reward during a learning phase.1 In a subsequent test phase, participants performed a similar task with semantic associates of words presented in the learning phase. Semantic associates of words paired with high reward produced a Stroop interference effect (i.e., slowed down the color-identifying task), relative to semantic associates of words paired with low or no reward.
The influence of associative reward learning on attention has almost exclusively been investigated in the visual domain (see Anderson, 2016b, 2019, for reviews). Using a modified version of the dichotic listening task, a recent study showed that auditory stimuli paired with reward could also bias attention (Kim et al., 2021b). In a learning phase, a spoken letter and number were simultaneously presented in different auditory streams and participants had to report the letter while ignoring the number. Three letters were paired with high, low, or no monetary reward. In a subsequent test phase, the same auditory stimuli were presented but participants had to report the number while ignoring the letter. In both the learning and test phases, attention was biased in favor of the auditory stimulus associated with high value (see also Asutay & Västfjäll, 2016; Kim et al., 2021a).
The current study aimed to explore novel aspects of value-driven attention by determining (1) whether value-based attentional biases learned in the auditory domain can correspondingly bias visual attention and (2) whether this potential cross-modal generalization could also affect the processing of stimuli that are semantically related to valued items. Participants first completed a dichotic listening task in which they learned to pair three spoken words with high, low, or no monetary reward. The written representations of these three words were then presented in an unrewarded Stroop task, in addition to corresponding semantic associates. For the sake of simplicity, spoken words associated with reward and their written representations were called conditioned stimuli (e.g., fuel presented orally or visually); written words semantically related to conditioned stimuli were called generalized stimuli (e.g., gas). A Stroop effect induced by conditioned stimuli associated with high reward, relative to conditioned stimuli associated with low or no reward, would reflect a generalization of value-based attentional priority from the auditory to the visual domain. A similar pattern for generalized stimuli would reflect a semantic generalization of value-based attentional priority across modalities.
Method
Participants
Assuming a small effect size (f = 0.1) and a moderate correlation between levels of our within-subjects variables (ρ = 0.5), an a priori power analysis indicated that a sample size of 36 participants would be sufficient to detect a two-way interaction between condition and type of stimuli at 80% statistical power. The same power analysis for the main effect of condition (which was of primary interest) indicated a sample size of 24 participants. Written informed consent was obtained from 38 participants, between the ages of 18 and 35 inclusive, from the Texas A&M University community. All were native English speakers and reported normal or corrected-to-normal visual acuity and normal color vision. Data from four participants were discarded due to a low proportion of correct responses in the training phase (below 2.5 standard deviations of the group mean, N = 1; see, e.g., Anderson, 2016c; Grégoire et al., 2021) or because they reported using strategies to avoid reading words in the Stroop task (e.g., squinting, N = 3; Grégoire & Anderson, 2019). The final sample included 34 participants (22 females, mean age = 20.21 years, SD = 3.13). All procedures were approved by the Texas A&M University Institutional Review Board.
Apparatus
A Dell OptiPlex 7040 equipped with MATLAB software and Psychophysics Toolbox extensions (Brainard, 1997) was used to present the stimuli on a Dell P217H monitor. Participants viewed the monitor from a distance of approximately 70 cm in a dimly lit room and wore Beyerdynamic DT 770 Pro 250Ω professional studio headphones. Responses in the Stroop task (test phase) were entered using a 5-button response box (MilliKey MK-5).
Auditory stimuli
All auditory stimuli were recorded using a Spark SL condenser microphone, with an Arrow audio interface. The recordings were sampled and modified using the built-in functions on the Logic Pro X software (Apple Inc.). All recorded samples of the stimuli were cut to begin at exactly the same time, compressed to make the sound intensity equal, and condensed to be 500 ms in duration.
Visual stimuli
Three pairs of semantic associates were selected from The University of South Florida Word Association, Rhyme and Word Fragmentation Norms database of free association (Nelson et al., 1998): clock–time, fuel–gas, pet–dog. The chosen pairs were all rated highly (i.e., above 63%) for frequency of free association when the first word was provided (see Grégoire et al., 2021; Grégoire & Anderson, 2019; Grégoire & Greening, 2020). There was no phonological or orthographic similarity between the two words of each pair. Stroop words were presented in equiluminant red, green, blue, and purple. Throughout the experiment, the background of the screen was dark grey while the fixation cross and feedback appeared in white. Written information was presented in 60-point Arial font.
Learning phase—Dichotic listening task
Each run of the learning phase consisted of 72 trials. The sequence and timing of trial events is presented in Fig. 1a. During the presentation of the auditory target and distractor, participants simultaneously heard a spoken word played to one ear and a spoken number played to the other ear. The possible words were clock, fuel, and pet, and the possible numbers were one, two, three, and nine. These words and numbers were chosen based on their phonetics (not rhyming) and length (one syllable, between three and five letters). In each run, the possible word–number combinations and the ear (left or right) to which each stimulus was presented were fully counterbalanced, and the order of trials was randomized. Participants were instructed to identify the word they heard and press the corresponding key on the keyboard as fast as possible while remaining accurate. The letters U, I, and O of a QWERTY keyboard were labelled clock, fuel, and pet, respectively. Participants were told that correct responses could result in monetary reward, but no information was given about reward–word contingencies. We also specified to participants that they would receive the total monetary reward attained throughout the experiment or the base rate ($10/hr), whichever was higher.
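The fully counterbalanced design described above (3 words × 4 numbers × 2 ear assignments = 24 combinations, each occurring three times per 72-trial run) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; all names are ours.

```python
import itertools
import random

# Illustrative sketch of one learning-phase run: every word-number-ear
# combination appears exactly 3 times, and trial order is randomized.
WORDS = ["clock", "fuel", "pet"]
NUMBERS = ["one", "two", "three", "nine"]
SIDES = ["word_left", "word_right"]  # which ear carries the word

def make_learning_run(rng=random):
    combos = list(itertools.product(WORDS, NUMBERS, SIDES))  # 24 combinations
    trials = combos * 3                                      # 72 trials per run
    rng.shuffle(trials)
    return trials

run = make_learning_run()
```

A reward mapping (one word each to 10¢, 2¢, and 0¢, counterbalanced across participants) would then be applied on top of this trial list.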
Fig. 1.

Sequence of trial events in (A) the learning phase and (B) the test phase. The duration of the initial fixation display varied randomly on each trial within the range indicated. For the test phase (Stroop task), in the event of an incorrect or missed response, a 500-ms blank screen followed by a 1,000-ms feedback display were added in the sequence of trial events after the presentation of the Stroop word.
If participants did not respond before the end of the ISI or pressed the wrong key, they were presented with the words “Too Slow” or “Incorrect,” respectively, and their accumulated total earnings (no sound was presented during such feedback). Each of the three words (clock, fuel, and pet) was paired with high (10¢), low (2¢), or no reward (0¢). The word-to-value mapping was counterbalanced across participants. For correct responses, participants were shown their corresponding reward earnings and their accumulated total earnings, in addition to an audible cue that played for the first 500 ms of feedback (sinewave form, high reward = 650 Hz, low reward = 500 Hz, no reward = 350 Hz). We included the auditory feedback to help ensure that participants robustly processed the feedback, since it was possible to perform the task without actually looking at or otherwise processing the visual display. Each trial terminated with a 1,000-ms interval during which the fixation cross disappeared for the last 200 ms to indicate to participants that the next trial was about to begin.
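The feedback tones described above (sine waves at 650, 500, and 350 Hz, played for 500 ms) can be generated as in the sketch below. The sample rate is our assumption; the paper does not report one.

```python
import numpy as np

# Sketch of the reward-feedback tones: pure sine waves, 500 ms in duration.
# The 44.1-kHz sample rate is an assumption for illustration.
FS = 44100  # Hz, assumed
TONE_HZ = {"high": 650, "low": 500, "none": 350}

def feedback_tone(condition, duration_s=0.5, fs=FS):
    t = np.arange(int(fs * duration_s)) / fs
    return np.sin(2 * np.pi * TONE_HZ[condition] * t)

tone = feedback_tone("high")  # 22,050 samples at 44.1 kHz
```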
Test phase—Stroop task
Each run of the test phase consisted of 96 trials. The sequence and timing of trial events is presented in Fig. 1b. We used a trial-to-trial spatial uncertainty of 100 pixels around the center location to present words in order to limit opportunities for employing strategies (e.g., fixating on a small portion of the print to avoid reading words; Ben-Haim et al., 2014). In each run, all possible combinations of the six words (clock, time, fuel, gas, pet, dog) and the four colors (red, green, blue, and purple) were presented an equal number of times, in a pseudorandom order, excluding immediate repetitions of colors and words. Participants were instructed to report the color of each word as quickly and accurately as possible, ignoring its meaning, using the button box with their dominant hand. Two keys of the button box were labeled “left” and “right.” Participants had to press the “left” key if the word was printed in green or purple or the “right” key if the word was printed in blue or red. Before the test phase, we informed participants that no reward would be delivered during this task. However, to maintain motivation, we indicated that they would receive a $3 bonus if their overall accuracy was higher than 90%.
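The constrained pseudorandom sequence described above (6 words × 4 colors = 24 combinations, each shown four times per 96-trial run, with no immediate repetition of either the word or the color) can be sketched with a greedy pass that restarts on dead ends. The authors' actual sequencing algorithm is not reported; this is one plausible implementation.

```python
import itertools
import random

# Sketch of one test-phase (Stroop) run: each word-color combination appears
# 4 times, and consecutive trials never share a word or a color.
WORDS = ["clock", "time", "fuel", "gas", "pet", "dog"]
COLORS = ["red", "green", "blue", "purple"]

def make_stroop_run(rng=random):
    base = list(itertools.product(WORDS, COLORS)) * 4  # 96 trials
    while True:  # restart if the greedy pass dead-ends near the end
        pool = base[:]
        rng.shuffle(pool)
        seq = []
        for _ in range(len(base)):
            choices = [t for t in pool
                       if not seq or (t[0] != seq[-1][0] and t[1] != seq[-1][1])]
            if not choices:
                break
            pick = rng.choice(choices)
            seq.append(pick)
            pool.remove(pick)
        if len(seq) == len(base):
            return seq

run = make_stroop_run()
```

A plain shuffle-and-reject loop would not work here: with 6 words and 4 colors, roughly a third of adjacent pairs in an unconstrained shuffle collide, so a valid sequence essentially never appears by chance, which is why the trials are built incrementally.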
Contingency-awareness test
Each of the possible word-number combinations was presented once (leading to 24 trials) in the same way as in the learning phase, and stimuli were randomly ordered. Participants were asked: “How much money do you think you would make for a correct response to this item?” and selected 10¢, 2¢, or 0¢ (three-alternative forced choice) by clicking on the amount with the computer mouse.
Procedure
The experiment began with a brief hearing test to ensure adequate volume of stimuli (see Kim et al., 2021b). Participants then completed four runs of the learning phase, three runs of the test phase, and the contingency-awareness test. Finally, participants responded to a short questionnaire to indicate if they used strategies to avoid reading words during the Stroop task.
Data analysis
Response time (RT) was measured from the onset of the target stimulus. Only correct responses were included in the RT analyses. Furthermore, RTs for correct responses beyond 2.5 standard deviations from the mean for a given condition were trimmed (Kim et al., 2021a, 2021b), which led to the removal of 1.50% and 0.89% of RTs in the training and the test phase, respectively.
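The trimming rule above (removing correct-trial RTs beyond 2.5 standard deviations from the condition mean) can be sketched as follows; the data and variable names are illustrative, not the study's.

```python
import numpy as np

# Sketch of the RT-trimming rule: RTs more than 2.5 SDs from the mean of
# their condition are removed (illustrative data).
def trim_rts(rts, n_sd=2.5):
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    keep = np.abs(rts - mean) <= n_sd * sd
    return rts[keep]

rts = [490, 495, 500, 505, 510] * 3 + [1400]  # one obvious outlier
trimmed = trim_rts(rts)  # the 1400-ms trial is removed
```

In practice this would be applied separately within each condition (and participant), since the rule is defined relative to the condition mean.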
Repeated-measures analyses of variance (ANOVA) were conducted with condition (high reward, low reward, no reward) as a within-subject variable, separately for mean RTs and accuracy in the learning phase. The same analyses were performed on the test phase data with type of stimuli (conditioned stimuli, generalized stimuli) added as a second within-subject variable. Sphericity was tested with Mauchly’s test of sphericity, and when the sphericity assumption was violated, degrees of freedom were adjusted using the Greenhouse–Geisser epsilon correction. Subsequent t tests were performed when appropriate. For each t test, data were checked for normality of distribution with the Kolmogorov–Smirnov test. A Wilcoxon signed-ranks test was used when data were not normally distributed. Note that we calculated Cohen’s dz using the formula dz = t/sqrt(n) for paired sample t tests (Lakens, 2013; Rosenthal, 1991). The raw data of this study can be found online (https://osf.io/nk327/).
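The effect-size computation described above, dz = t/sqrt(n) for a paired-sample t test (Lakens, 2013), can be sketched with illustrative data. Note that dz = t/sqrt(n) is algebraically identical to the mean of the difference scores divided by their standard deviation.

```python
import math

# Sketch of Cohen's dz for a paired comparison: compute the paired t statistic
# from the difference scores, then dz = t / sqrt(n). Data are illustrative.
cond_a = [512, 498, 530, 505, 521, 540, 515, 508]
cond_b = [495, 490, 510, 500, 505, 528, 502, 499]

diffs = [a - b for a, b in zip(cond_a, cond_b)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))  # paired t statistic
dz = t / math.sqrt(n)               # equals mean_d / sd_d
```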
Results
Learning phase—Dichotic listening task
The ANOVA performed on mean RTs revealed a significant main effect of condition, F(2, 66) = 22.77, p < .001. Subsequent t tests indicated that RTs were significantly faster in the high-reward condition than in the low- and no-reward conditions, ts > 4.62, ps < .001, dzs > 0.78. RTs were also significantly shorter in the low-reward condition than in the no-reward condition, t(33) = 2.45, p = .020, dz = 0.42 (Fig. 2a).
Fig. 2.

Correct response times as a function of condition (high reward, low reward, no reward) in (A) learning and (B) test phases. Error bars depict within-subjects 95% confidence intervals calculated using the Cousineau method (Cousineau, 2005) with a Morey correction (Morey, 2008). *p < .05, **p < .01, ***p < .001, NS = nonsignificant
The ANOVA performed on accuracy revealed no significant main effect of condition, F(2, 66) = 2.97, p = .058. Accuracy was overall very high (98.27%; see Table 1).
Table 1.
Proportion of correct responses and correct response times (for the test phase) as a function of condition (high reward, low reward, no reward) and type of stimuli (conditioned stimuli, generalized stimuli) in learning and test phases
| | Conditioned stimuli: High reward | Conditioned stimuli: Low reward | Conditioned stimuli: No reward | Generalized stimuli: High reward | Generalized stimuli: Low reward | Generalized stimuli: No reward |
|---|---|---|---|---|---|---|
| Learning phase | | | | | | |
| Proportion of correct responses | 98.68 (1.57) | 98.31 (1.95) | 97.82 (2.17) | – | – | – |
| Test phase | | | | | | |
| Correct response times (ms) | 495.62 (52.51) | 489.33 (52.55) | 486.48 (51.59) | 492.02 (52.43) | 483.56 (49.03) | 489.37 (50.92) |
| Proportion of correct responses | 95.83 (3.36) | 94.00 (4.25) | 94.98 (3.94) | 94.36 (4.77) | 95.53 (3.52) | 94.91 (3.91) |
Note. Standard deviations are in parentheses
Test phase—Stroop task
The ANOVA performed on mean RTs revealed a significant main effect of condition, F(2, 66) = 4.94, p = .010, no significant main effect of type of stimuli, F(1, 33) = 1.11, p = .301, and no significant interaction, F(1.52, 50.25) = 1.85, p = .176. Subsequent t tests indicated that RTs were significantly slower in the high-reward condition than in the low- and no-reward conditions, ts > 2.76, ps < .010, dzs > 0.47. No significant difference was observed between the low-reward condition and the no-reward condition, t(33) = 0.56, p = .577 (Fig. 2b). Analyses performed specifically on conditioned stimuli revealed that RTs were significantly slower in the high-reward condition than in the low- and no-reward conditions, ts > 2.53, ps < .017, dzs > 0.43. Analyses performed for generalized stimuli indicated that RTs were significantly slower in the high-reward condition than in the low-reward condition, t(33) = 2.33, p = .026, dz = 0.40, but did not differ significantly between the high-reward condition and the no-reward condition, t(33) = 0.86, p = .397 (see Table 1). After computing the difference between RTs in the high-reward condition and the low-reward condition for each participant and each type of stimuli (i.e., conditioned and generalized), we observed a significant positive correlation between the Stroop effects measured for conditioned and generalized stimuli, r(32) = .497, p = .003 (Fig. 3). No such correlation was observed for the difference between RTs in the low-reward and no-reward conditions, r(32) = −.067, p = .705, with the difference between the two correlations being significant, z = 2.41, p = .016.
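The comparison between the two correlations can be sketched with a Fisher r-to-z test. The authors do not report their exact method, but plugging the reported values (r₁ = .497, r₂ = −.067, n = 34 in both cases) into the standard Fisher comparison reproduces the reported z = 2.41.

```python
import math

# Sketch of a Fisher r-to-z comparison of two correlation coefficients.
# Note this formula treats the two correlations as independent samples,
# which is an assumption on our part about the method used.
def fisher_z_compare(r1, r2, n1, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)       # Fisher transforms
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))   # SE of the difference
    return (z1 - z2) / se

z = fisher_z_compare(0.497, -0.067, 34, 34)  # ~2.41, matching the text
```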
Fig. 3.

Relationship between Stroop effects (RT high reward minus RT low reward) for conditioned and generalized stimuli
The ANOVA performed on accuracy revealed no significant main effect of condition, F(2, 66) = 0.23, p = .795, no significant main effect of type of stimuli, F(1, 33) < 0.01, p > .99, and a significant interaction between condition and type of stimuli, F(2, 66) = 4.43, p = .016. Analyses performed for conditioned stimuli indicated that accuracy was significantly greater in the high-reward condition than in the low-reward condition, Z = −2.23, p = .026, but no significant difference was observed between the no-reward condition and the other two conditions (ps > .10). None of the pairwise comparisons for the generalized stimuli was significant (ps > .10; see Table 1). Mean accuracy was overall high (94.94%).
Contingency-awareness test
Contingency-awareness measures were analyzed with a binomial test. All the participants correctly reported word–reward contingencies (for the three conditions) with a cumulative probability lower than 0.1% (i.e., significantly above chance). Thus, all the participants were considered aware of the word–reward contingencies.
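With 24 three-alternative judgments, chance performance is p = 1/3, and the binomial test above reduces to an exact upper-tail probability. The sketch below computes it from first principles; the count of 16 correct is illustrative (any number of correct reports that high or higher already has a cumulative probability below 0.1%).

```python
import math

# Exact upper-tail binomial probability P(X >= k) for n three-alternative
# forced-choice judgments at chance level p = 1/3.
def binom_upper_tail(k, n=24, p=1/3):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# e.g., 16 or more correct out of 24 (illustrative count) falls below
# the 0.1% threshold used in the text.
p_val = binom_upper_tail(16)
```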
Discussion
This study aimed to determine whether value-based attentional priority can transfer from the auditory to the visual domain and whether this potential generalization could also affect the processing of stimuli that are semantically related to valued items. A learning phase established associations between auditory words and monetary rewards. In a subsequent test phase, participants performed a visual Stroop task including written representations of auditory words previously paired with reward (i.e., conditioned stimuli) and semantic associates of formerly rewarded words (i.e., generalized stimuli).
In the learning phase, attention was biased toward stimuli associated with high reward, as reflected by faster RTs in the high-reward condition, replicating previous findings (Kim et al., 2021a, 2021b). In the test phase, participants were instructed to identify the color of colored words while ignoring their meaning. Conditioned stimuli associated with high reward generated slower RTs than conditioned stimuli associated with no reward. Thus, the verbal information of conditioned stimuli previously associated with high reward had greater attentional priority (and so was more difficult to inhibit) than the verbal information of conditioned stimuli previously associated with no reward, demonstrating a transfer of value-based attentional priority from the auditory to the visual domain. We observed a similar pattern of results for RTs when the high-reward condition was compared with the low-reward condition, but accuracy was also greater for conditioned stimuli associated with high reward. Although accuracy was overall high (≥94% across all conditions) and previously reported value effects using this paradigm have been exclusively in RT (Grégoire & Anderson, 2019), the comparison between the high- and low-reward conditions might at least in part reflect a speed–accuracy trade-off. The findings for the comparison between the high- and no-reward conditions are more convincing because the RT and accuracy effects go in the same direction. Results also revealed that RTs for generalized stimuli were slower in the high-reward condition than in the low-reward condition, and the magnitude of this effect was significantly correlated with the effect for conditioned stimuli, suggesting that the influence of reward learning on attention can transfer to visual stimuli semantically related to formerly rewarded auditory stimuli.
It is possible that the correlation between conditioned and generalized stimuli stems from idiosyncratic properties of the words, such as their semantic category, irrespective of any value effects. Conditioned and generalized stimuli associated with a specific value (e.g., high reward) come from the same semantic category. Consequently, if the semantic categories themselves influence RTs in the Stroop task, a similar correlation would emerge even without any manipulation of reward. For example, suppose that the words pet and dog produce larger Stroop interference than the other words. When the pair pet–dog is associated with high or low reward, this effect would promote a comparable RT difference (high reward minus low reward) for conditioned and generalized stimuli. Indeed, some studies have reported that animate words generate a larger Stroop effect than inanimate words (e.g., Bugaiska et al., 2019), so the words pet and dog could be more distracting than the other words, which are all inanimate. A way to test this alternative explanation is to examine the correlation between conditioned and generalized stimuli for the RT difference between the low-reward and no-reward conditions; this correlation was not significant. This result argues against the assumption that the significant correlation we observed for the high- and low-reward conditions stems from idiosyncratic properties of the words used in this study, which would predict a comparable correlation when comparing any two groups of words arbitrarily assigned to a value condition.
We hypothesize that the Stroop interference observed in this study could occur at an early stage of information processing. In the learning phase, word–reward pairings might produce associations between the semantic representation of each word and its respective value. The semantic representation of a word can be activated via the auditory or the visual modality. Moreover, generalized stimuli should activate the same semantic representations as conditioned stimuli, given their strong association in memory (Collins & Loftus, 1975). In the Stroop task, written words are thought to induce obligatory processing of semantic information due to the automaticity of word reading (MacLeod, 1991). Thus, the automatic activation of these semantic representations would bias the processing of words formerly associated with high reward, to a larger extent than words previously unrewarded or associated with low reward, by allocating more attention toward these stimuli (due to their higher potential adaptive value; Anderson, 2013, 2021). As a consequence, RTs to identify the color of Stroop stimuli are slower in the high-reward condition than in the low- or no-reward conditions. This hypothesis is consistent with electroencephalographic studies that reported an early attentional effect (i.e., greater P1 amplitudes) for emotional words compared with neutral words using the Stroop paradigm, which suggests greater attention allocation to emotional words (Li et al., 2007; van Hooff et al., 2008). However, our interpretation is speculative insofar as the present study was not primarily designed to examine the specific mechanisms of cross-modal and semantic generalization effects of value-driven attention, but to test the existence of such effects. Further research should inspect the locus of the Stroop interference observed in this study.
Research in psycholinguistics has postulated that the activation of phonological and semantic representations in response to written input is automatic. This hypothesis is supported by reading models (e.g., Harm & Seidenberg, 1999, 2004; Van Orden & Goldinger, 1994) and empirical work (e.g., Rodd, 2004; Ziegler & Jacobs, 1995). The generalization effects observed in the test phase could thus be mediated by the automatic phonological processing of written words. This potential mechanism would not invalidate our main conclusions but might limit the generalizability of our findings. A way to rule out the possibility that the generalization effects are mediated by the phonological processing of written words would be to present visual objects in the test phase.
Contingency-awareness measures performed after the Stroop task indicated that all the participants were aware of stimulus–reward contingencies. Grégoire and Anderson (2019) reported a semantic generalization effect of reward learning on attention specifically for participants who were unaware of the reward contingencies, but they did not manipulate the stimulus presentation modality. Our data show that the influence of reward learning on attention can generalize from the auditory to the visual domain when participants are aware of stimulus–reward contingencies, which does not exclude the possibility that participants unaware of stimulus–reward contingencies could manifest such a generalization effect. In apparent contradiction with our results, Grégoire and Anderson (2019) also reported that aware participants exhibited a reverse Stroop effect in the test phase, potentially reflecting value-based signal suppression (Gaspelin & Luck, 2018). However, this outcome was observed in a subgroup of 13 participants, which does not allow strong conclusions to be drawn given the small sample size. Altogether, the data from both Grégoire and Anderson (2019) and the present study cast some measure of doubt on the idea that awareness modulates the semantic generalization of value-driven attention. One salient difference between the studies is that in Grégoire and Anderson (2019), the stimulus–reward contingencies were incidental to the task during training (which was to report the color of the font written words were presented in), whereas in the present study the (spoken) word stimuli were response-relevant and explicitly reported, which might recruit fundamentally different learning mechanisms. Future research is needed to clarify how awareness modulates the semantic generalization of value-driven attention.
To conclude, our results are consistent with a generalization of value-based attentional priority from the auditory to the visual domain. Our data also seem to support a semantic generalization of reward learning’s influence on attention across modalities, although this outcome should be considered with caution because the effects observed with generalized stimuli were limited to the difference between the high- and low-reward conditions. The findings provide valuable insight into a critical aspect of adaptation (i.e., detecting stimuli associated with reward) and are relevant to the understanding of maladaptive behaviors to which value-based attentional biases contribute (e.g., substance abuse; Anderson, 2016a; Field & Cox, 2008).
Funding
This study was supported by a grant from the NIH [R01-DA046410] to Brian A. Anderson.
Footnotes
Competing interests The authors declare no competing interests.
1. In the Stroop task, participants are asked to identify the color of colored words while ignoring their meaning. Although the instructions specify to disregard word meaning, it is commonly suggested that Stroop stimuli induce obligatory processing of semantic information due to the automaticity of word reading (MacLeod, 1991), which could thus allow participants to learn word–reward associations in this situation.
Data availability
All materials and code are available upon request from the first author.
References
- Anderson BA (2013). A value-driven mechanism of attentional selection. Journal of Vision, 13(3), 1–16. 10.1167/13.3.7
- Anderson BA (2016a). What is abnormal about addiction-related attentional biases? Drug and Alcohol Dependence, 167, 8–14. 10.1016/j.drugalcdep.2016.08.002
- Anderson BA (2016b). The attention habit: How reward learning shapes attentional selection. Annals of the New York Academy of Sciences, 1369(1), 24–39. 10.1111/nyas.12957
- Anderson BA (2016c). Value-driven attentional capture in the auditory domain. Attention, Perception, & Psychophysics, 78(1), 242–250. 10.3758/s13414-015-1001-7
- Anderson BA (2019). Neurobiology of value-driven attention. Current Opinion in Psychology, 29, 27–33. 10.1016/j.copsyc.2018.11.004
- Anderson BA (2021). An adaptive view of attentional control. American Psychologist, 76(9), 1410–1422. 10.1037/amp0000917
- Anderson BA, & Halpern M (2017). On the value-dependence of value-driven attentional capture. Attention, Perception, & Psychophysics, 79(4), 1001–1011. 10.3758/s13414-017-1289-6
- Anderson BA, Laurent PA, & Yantis S (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences of the United States of America, 108(25), 10367–10371. 10.1073/pnas.1104047108
- Anderson BA, Laurent PA, & Yantis S (2012). Generalization of value-based attentional priority. Visual Cognition, 20(6), 647–658. 10.1080/13506285.2012.679711
- Anderson BA, Laurent PA, & Yantis S (2014). Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Research, 1587, 88–96. 10.1016/j.brainres.2014.08.062
- Asutay E, & Västfjäll D (2016). Auditory attentional selection is biased by reward cues. Scientific Reports, 6, Article 36989. 10.1038/srep36989
- Barbaro L, Peelen MV, & Hickey C (2017). Valence, not utility, underlies reward-driven prioritization in human vision. Journal of Neuroscience, 37(43), 10438–10450. 10.1523/JNEUROSCI.1128-17.2017
- Ben-Haim MS, Mama Y, Icht M, & Algom D (2014). Is the emotional Stroop task a special case of mood induction? Evidence from sustained effects of attention under emotion. Attention, Perception, & Psychophysics, 76(1), 81–97. 10.3758/s13414-013-0545-7
- Brainard DH (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436. 10.1163/156856897X00357
- Bugaiska A, Grégoire L, Camblats A-M, Gelin M, Méot A, & Bonin P (2019). Animacy and attentional processes: Evidence from the Stroop task. Quarterly Journal of Experimental Psychology, 72(4), 882–889. 10.1177/1747021818771514
- Collins AM, & Loftus EF (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407–428. 10.1037/0033-295X.82.6.407
- Cousineau D (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorial in Quantitative Methods for Psychology, 1(1), 42–45. 10.20982/tqmp.01.1.p042
- Dunsmoor JE, & Murphy GL (2015). Categories, concepts, and conditioning: How humans generalize fear. Trends in Cognitive Sciences, 19(2), 73–77. 10.1016/j.tics.2014.12.003
- Field M, & Cox WM (2008). Attentional bias in addictive behaviors: A review of its development, causes, and consequences. Drug and Alcohol Dependence, 97(1/2), 1–20. 10.1016/j.drugalcdep.2008.03.030
- Gaspelin N, & Luck SJ (2018). The role of inhibition in avoiding distraction by salient stimuli. Trends in Cognitive Sciences, 22(1), 79–92. 10.1016/j.tics.2017.11.001
- Grégoire L, & Anderson BA (2019). Semantic generalization of value-based attentional priority. Learning & Memory, 26(12), 460–464. 10.1101/lm.050336.119
- Grégoire L, & Greening SG (2020). Fear of the known: Semantic generalisation of fear conditioning across languages in bilinguals. Cognition & Emotion, 34(2), 352–358. 10.1080/02699931.2019.1604319
- Grégoire L, Kim AJ, & Anderson BA (2021). Semantic generalization of punishment-related attentional priority. Visual Cognition, 29(5), 310–317. 10.1080/13506285.2021.1914796 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harm MW, & Seidenberg MS (1999). Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review, 106(3), 491–528. 10.1037/0033-295X.106.3.491 [DOI] [PubMed] [Google Scholar]
- Harm MW, & Seidenberg MS (2004). Computing the meanings of words in reading: Cooperative division of labor between visual and phonological processes. Psychological Review, 111(3), 662–720. 10.1037/0033-295X.111.3.662 [DOI] [PubMed] [Google Scholar]
- Hickey C, & Peelen MV (2015). Neural mechanisms of incentive salience in naturalistic human vision. Neuron, 85(3), 512–518. 10.1016/j.neuron.2014.12.049 [DOI] [PubMed] [Google Scholar]
- Hickey C, & Peelen MV (2017). Reward selectively modulates the lingering neural representation of recently attended objects in natural scenes. Journal of Neuroscience, 37(31), 7297–7304. 10.1523/jneurosci.0684-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim AJ, & Anderson BA (2020a). Arousal-biased competition explains reduced distraction by reward cues under threat. eNeuro, 7(4), ENEURO.0099–20.2020. 10.1523/ENEURO.0099-20.2020 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim AJ, & Anderson BA (2020b). Neural correlates of attentional capture by stimuli previously associated with social reward. Cognitive Neuroscience, 11(1/2), 5–15. 10.1080/17588928.2019.1585338 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim AJ, Grégoire L, & Anderson BA (2021a). Value-biased competition in the auditory system of the brain. Journal of Cognitive Neuroscience, 34(1), 180–191. 10.1162/jocn_a_01785 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim AJ, Lee DS, & Anderson BA (2021b). Previously reward-associated sounds interfere with goal-directed auditory processing. Quarterly Journal of Experimental Psychology, 74 (7), 1257–1263. 10.1177/1747021821990033 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim H, Nanavaty N, Ahmed H, Mathur VA, & Anderson BA (2021c). Motivational salience guides attention to valuable and threatening stimuli: Evidence from behavior and fMRI. Journal of Cognitive Neuroscience, 33(12), 2440–2460. 10.1162/jocn_a_01769 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lakens D (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. 10.3389/fpsyg.2013.00863 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Li W, Zinbargm RE, & Paller KA (2007). Trait anxiety modulates supraliminal and subliminal threat: Brain potential evidence for early and late processing influences. Cognitive, Affective, & Behavioral Neuroscience, 7(1), 25–36. 10.3758/cabn.7.1.25 [DOI] [PubMed] [Google Scholar]
- MacLean MH, & Giesbrecht B (2015). Neural evidence reveals the rapid effects of reward history on selective attention. Brain Research, 1606, 86–94. 10.1016/j.brainres.2015.02.016 [DOI] [PubMed] [Google Scholar]
- MacLeod CM (1991). Half a century of research on the Stroop effect—An integrative review. Psychological Bulletin, 109(2), 163–203. 10.1037//0033-2909.109.2.163 [DOI] [PubMed] [Google Scholar]
- Mine C, & Saiki J (2015). Task-irrelevant stimulus-reward association induces value-driven attentional capture. Attention, Perception, & Psychophysics, 77(6), 1896–1907. 10.3758/s13414-015-0894-5 [DOI] [PubMed] [Google Scholar]
- Mine C, & Saiki J (2018). Pavlovian reward learning elicits attentional capture by reward-associated stimuli. Attention, Perception, & Psychophysics, 80(5), 1083–1095. 10.3758/s13414-018-1502-2 [DOI] [PubMed] [Google Scholar]
- Morey RD (2008). Confidence intervals from normalized data: A correction to Cousineau (2005). Tutorial in Quantitative Methods for Psychology, 4(2), 61–64. 10.20982/tqmp.04.2.p061 [DOI] [Google Scholar]
- Nelson DL, McEvoy CL, & Schreiber TA (1998). The University of South Florida word association, rhyme, and word fragment norms. http://w3.usf.edu/FreeAssociation [DOI] [PubMed]
- Rodd JM (2004). When do leotards get their spots? Semantic activation of lexical neighbors in visual word recognition. Psychonomic Bulletin & Review, 11(3), 434–439. 10.3758/BF03196591 [DOI] [PubMed] [Google Scholar]
- Rosenthal R (1991). Meta-analytic procedures for social research (2nd ed.). SAGE Publications. 10.4135/9781412984997 [DOI] [Google Scholar]
- Serences JT (2008). Value-based modulations in human visual cortex. Neuron, 60(6), 1169–1181. 10.1016/j.neuron.2008.10.051 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Serences JT, & Saproo S (2010). Population response profiles in early visual cortex are biased in favor of more valuable stimuli. Journal of Neurophysiology, 104(1), 76–87. 10.1152/jn.01090.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- van Hooff JC, Dietz KC, Sharma D, & Bowman H (2008). Neural correlates of intrusion of emotion words in a modified Stroop task. International Journal of Psychophysiology, 67(1), 23–34. 10.1016/j.ijpsycho.2007.09.002 [DOI] [PubMed] [Google Scholar]
- van Koningsbruggen MG, Ficarella SC, Battelli L, & Hickey C (2016). Transcranial random noise stimulation of visual cortex potentiates value-driven attentional capture. Social Cognitive and Affective Neuroscience, 11(9), 1481–1488. 10.1093/scan/nsw056 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Van Orden GC, & Goldinger SD (1994). Interdependence of form and function in cognitive systems explains perception of printed words. Journal of Experimental Psychology: Human Perception and Performance, 20(6), 1269–1291. 10.1037/0096-1523.20.6.1269 [DOI] [PubMed] [Google Scholar]
- Watson P, Pearson D, Wiers RW, & Le Pelley ME (2019). Prioritizing pleasure and pain: Attentional capture by reward-related and punishment-related stimuli. Current Opinion in Behavioral Sciences, 26, 107–113. 10.1016/j.cobeha.2018.12.002 [DOI] [Google Scholar]
- Yamamoto S, Kim HF, & Hikosaka O (2013). Reward value-contingent changes of visual responses in the primate caudate tail associated with a visuomotor skill. Journal of Neuroscience, 33(27), 11227–11238. 10.1523/JNEUROSCI.0318-13.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ziegler JC, & Jacobs AM (1995). Phonological information provides early sources of constraint in the processing of letter strings. Journal of Memory and Language, 34(5), 567–593. 10.1006/jmla.1995.1026 [DOI] [Google Scholar]
Associated Data
Data Availability Statement
All materials and code are available upon request from the first author.
