Abstract
Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.
Keywords: Attentional capture, auditory attention, reward, associative learning
Introduction
The world contains vast amounts of information, which our sensory systems relay to the brain. However, the final representation of the surrounding environment is determined by the attention system, which acts as a filter on sensory information (Desimone & Duncan, 1995). Attention has consistently been shown to be biased towards stimuli that have been learned to predict a variety of rewards, including primary rewards such as food and water (e.g., Pool et al., 2014; Seitz et al., 2009), social reward (e.g., Anderson, 2016b, 2017; A. J. Kim & Anderson, 2020), and monetary reward (e.g., Anderson & Yantis, 2013; Failing & Theeuwes, 2017; Hickey et al., 2010; Libera & Chelazzi, 2006; Serences, 2008). This influence of reward history on attention has been shown to persist even when previously reward-associated stimuli are non-salient and task-irrelevant, using what has been referred to as the value-driven attentional capture (VDAC) paradigm (Anderson et al., 2011).
The overwhelming majority of studies examining the influence of reward learning on attention have been conducted in the visual domain (see Anderson, 2016a, 2019, for reviews). Mechanisms of learning-dependent attentional bias in other sensory systems, such as audition, remain relatively unexplored. Many studies of attentional bias that utilise auditory stimuli investigate cross-modal interfacing of the visual and auditory systems, focusing on how the addition of sound stimuli modulates visual processing (e.g., Anderson, 2016c; McDonald et al., 2000, 2005; Sanz et al., 2018; Störmer et al., 2009). Cheng et al. (2020) argued that the reward values of visual and auditory inputs are integrated and that the associative value of vision dominates over that of audition, highlighting a need to investigate learning-dependent auditory attentional capture in isolation.
The ability of sounds to distract and evoke shifts of attention has been well documented in behavioural and electrophysiological studies (e.g., Schröger et al., 2000; see also Parmentier, 2014, for a review). Furthermore, studies using only auditory stimuli have demonstrated that attention can be modulated by experience (Alards-Tomalin et al., 2017; Dyson, 2010; Klein & Stolz, 2015; see also Addleman & Jiang, 2019, for a review). In the context of auditory attention being modulated by learned value, Asutay and Västfjäll (2016) demonstrated that participants selectively attend to an auditory stream previously associated with high value when monitoring for targets (in their study, a gap in an auditory stream). However, the extent to which reward-associated sounds interfere with goal-directed auditory processing, which requires the presentation of entirely task-irrelevant stimuli previously associated with reward, remains to be demonstrated. Entirely task-irrelevant but previously reward-associated sounds produce elevated stimulus-evoked activation of auditory cortex (Folyi et al., 2016; Folyi & Wentura, 2019), but to our knowledge a behavioural cost in the processing of competing auditory information has not been observed. Although previously high-value sounds have been shown to interfere with the identification of a visual target (Anderson, 2016c), it is unclear whether VDAC extends to competition with the processing of stimuli presented auditorily. In this study, we sought direct evidence for impaired processing of auditory stimuli as a function of learned value, thereby extending the theoretical scope of VDAC as a mechanism of biased information processing.
The dichotic listening (DL) task is a commonly used tool to investigate selective attention in the auditory domain (e.g., Ahveninen et al., 2011; Alho et al., 2012; Cherry, 1953; Ross et al., 2010; Sabri et al., 2014; Tallus et al., 2015). In this study, we combined the DL task with the design of the VDAC paradigm to investigate whether associative learning between auditory stimuli and reward results in VDAC in the context of auditory target identification. Consistent with prior studies (e.g., Anderson, 2016c; Asutay & Västfjäll, 2016), we hypothesised that attention would be biased in favour of high-value stimuli during training, reflecting motivated attention and resulting in faster report of high-value targets than of targets in the other value conditions. In addition, we hypothesised that even when the previously high-value sound is task-irrelevant, it would automatically capture attention in the test phase, slowing target report relative to trials on which a previously unrewarded sound was presented as a distractor, reflecting a robust measure of VDAC (e.g., Anderson et al., 2011; Anderson & Halpern, 2017).
Materials and methods
Participants
Thirty-eight participants (22 female, 15 male, 1 no response), whose ages ranged from 18 to 31 years inclusive (M = 20.7, SD = 2.9), were recruited from the Texas A&M University community. All participants were English-speaking and reported normal or corrected-to-normal visual acuity and normal colour vision. All procedures were approved by the Texas A&M Institutional Review Board. Written informed consent was obtained for each participant and all study procedures were conducted in accordance with the principles expressed in the Declaration of Helsinki. Participants were compensated with their earnings in the task. Three participants were eliminated due to poor accuracy in the task (see section “Data analysis”), resulting in a final sample size of 35.
Our sample size was based on a power analysis evaluating the effect of high- versus no-value distractors on response time (RT) in the visual domain (see Anderson & Halpern, 2017), which estimated a minimum sample size of n = 34 to yield power (1 − β) > 0.8. This was more conservative than estimates derived from the magnitude of effects in auditory studies of reward and attention: both the same measure of VDAC in Experiment 1 of Anderson (2016c) and the effect of CS+ versus CS− in Asutay and Västfjäll (2016) indicated power (1 − β) > 0.9 at this sample size.
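For readers who wish to reproduce this estimate, the calculation below is a minimal sketch using Python's statsmodels, assuming the high- versus no-value RT effect corresponds to a within-subject effect size of roughly dz = 0.5 with a two-tailed α of .05; these inputs are illustrative assumptions, not taken from our analysis code.

```python
# Minimal sketch of the sample-size estimate for a paired (within-subject)
# comparison. The effect size dz = 0.5 is an assumed stand-in for the
# high- versus no-value RT effect in Anderson and Halpern (2017).
import math
from statsmodels.stats.power import TTestPower

power_analysis = TTestPower()  # one-sample/paired t-test power
n = power_analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                               alternative='two-sided')
print(f"Estimated minimum n = {math.ceil(n)}")  # yields ~34 participants
```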
Apparatus
A Dell OptiPlex 7040 (Dell, Round Rock, TX, USA) equipped with MATLAB software (MathWorks, Natick, MA, USA) and Psychophysics Toolbox extensions (Brainard, 1997) was used to present the stimuli on a Dell P217H monitor. The participants viewed the monitor from a distance of approximately 70 cm in a dimly lit room. Participants also wore Beyerdynamic DT 770 Pro 250Ω professional studio headphones (Beyerdynamic, Heilbronn, Germany) to listen to all sounds.
Auditory stimuli
All auditory stimuli were recorded using a Spark SL condenser microphone (Baltic Latvian Universal Electronics LLC., Westlake Village, CA, USA), with an Arrow audio interface (Universal Audio Inc., Scotts Valley, CA, USA), on a 2017 MacBook Pro (Apple Inc., Cupertino, CA, USA). The recordings were sampled and modified using the built-in functions on the Logic Pro X software (Apple Inc.). All recorded samples of the numbers and letters were cut to begin at exactly the same time, compressed to make the sound intensity equal, and condensed to be 300 ms in duration to ensure acoustic similarities across all stimuli.
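Although the stimuli were prepared in Logic Pro X, the same standardisation can be expressed programmatically. The sketch below is a hypothetical Python equivalent; the file names, silence threshold, and output scaling are all assumptions rather than part of our pipeline.

```python
# Sketch of the stimulus standardisation described above, done in code rather
# than in Logic Pro X (which was actually used). File names and the silence
# threshold are illustrative assumptions.
import numpy as np
import soundfile as sf

TARGET_MS = 300
THRESHOLD = 0.01  # amplitude below which the pre-speech signal is silence

def standardise(path):
    audio, sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                 # downmix to mono if needed
    onset = np.argmax(np.abs(audio) > THRESHOLD)   # first supra-threshold sample
    audio = audio[onset:]                          # align onsets across stimuli
    audio = audio[: int(sr * TARGET_MS / 1000)]    # condense to 300 ms
    audio /= np.sqrt(np.mean(audio ** 2))          # equate RMS intensity
    return audio * 0.1, sr                         # rescale to safe headroom

for name in ["U", "I", "O", "1", "2", "3", "4"]:   # hypothetical file set
    clip, sr = standardise(f"{name}.wav")
    sf.write(f"{name}_norm.wav", clip, sr)
```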
Training phase
Each run of the training phase consisted of 72 trials. Each trial began with a fixation display (1,800 ms), followed by the auditory stimuli (300 ms), an inter-stimulus interval (ISI), auditory feedback (1,500 ms), and an inter-trial interval (ITI) (see Figure 1). Throughout each trial, a fixation cross (0.7° × 0.7° visual angle) was presented at the centre of the screen. During the presentation of the auditory stimuli, participants simultaneously heard a spoken letter played to one ear and a spoken number played to the other. The possible letters were U, I, and O, and the possible numbers were 1, 2, 3, and 4 (participants were informed of these possibilities beforehand). These letters and numbers were chosen for their phonetics (non-rhyming, with similar intonation) and their close proximity on the keyboard. The letter-number combinations and the ear to which each was presented were fully counterbalanced, and the order of trials was randomised within each run. Participants were instructed to listen for the letter and press the corresponding key on the keyboard. The ISI lasted 1,500, 2,700, or 3,900 ms (equally often, order randomised). Participants were then given feedback based on their response. For each participant, each letter was associated with high (20 cents), low (4 cents), or no reward (0 cents), with the letter-to-value mapping counterbalanced across participants. For correct responses, participants were shown their reward earnings for that trial and their accumulated total earnings, accompanied by a 500-ms audible cue (sine wave form; high reward = 650 Hz, low reward = 500 Hz, no reward = 350 Hz). If the participant pressed the wrong key, the words "Incorrect" and their accumulated total earnings were presented; if they did not respond before the end of the ISI, the words "Too Slow" and their accumulated total earnings were presented (no sound accompanied either type of feedback). Finally, the ITI lasted 900, 2,700, or 4,500 ms (exponentially distributed, with shorter durations more frequent). The fixation cross disappeared for the last 200 ms of the ITI to signal that the next trial was about to begin.
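To make the trial structure concrete, the sketch below illustrates the core logic of a training-phase trial: a dichotic stereo buffer with the letter in one ear and the number in the other, the letter-to-value mapping, and the reward-feedback tones. Our implementation used MATLAB with Psychophysics Toolbox; the Python code, the sounddevice playback, the ITI weighting, and the respond() stub are purely illustrative assumptions.

```python
# Illustrative sketch of one training-phase trial, assuming mono 300-ms clips
# (numpy arrays at SR Hz) already loaded into `clips`. Not the authors' code.
import random
import numpy as np
import sounddevice as sd

SR = 44100
REWARD = {"U": 0.20, "I": 0.04, "O": 0.00}    # one possible letter-to-value
                                              # mapping (counterbalanced in
                                              # the actual experiment)
FEEDBACK_HZ = {0.20: 650, 0.04: 500, 0.00: 350}

def tone(freq, ms, sr=SR):
    t = np.arange(int(sr * ms / 1000)) / sr
    return 0.1 * np.sin(2 * np.pi * freq * t)

def dichotic(letter_clip, number_clip, letter_ear):
    """Stack the letter in one channel and the number in the other."""
    left, right = ((letter_clip, number_clip) if letter_ear == "left"
                   else (number_clip, letter_clip))
    return np.column_stack([left, right])

def run_trial(letter, number, letter_ear, clips, respond):
    sd.play(dichotic(clips[letter], clips[number], letter_ear), SR)
    isi = random.choice([1500, 2700, 3900])   # the three ISIs occurred equally often
    key = respond(timeout_ms=isi)             # keypress collection (stub)
    if key == letter:                         # correct: reward plus feedback tone
        earned = REWARD[letter]
        sd.play(tone(FEEDBACK_HZ[earned], 500), SR)
    else:
        earned = 0.0                          # "Incorrect" / "Too Slow" feedback
    # Shorter ITIs were more frequent; the weights here are an assumed
    # approximation of the exponential distribution described above.
    iti = random.choices([900, 2700, 4500], weights=[4, 2, 1])[0]
    return earned, iti
```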
Figure 1.
Sequence of trial events in the training and test phases. In both phases, a spoken letter and a spoken number were played simultaneously, one to each ear. In the training phase, participants responded to the letter they heard and were presented with monetary feedback. In the test phase, participants responded to the number they heard while trying to ignore the same letters that had served as targets during training.
Test phase
Each run of the test phase consisted of 72 trials. Each trial began with a fixation display (1,800 ms), followed by the auditory stimuli (300 ms) and an ITI (see Figure 1). Throughout each trial, a fixation cross (0.7° × 0.7° visual angle) was presented at the centre of the screen. During the presentation of the auditory stimuli, participants again simultaneously heard a letter and a number (identical in design to the training phase). However, participants were now instructed to listen for the number and press the corresponding number key on the keyboard, with the letters serving as distractors. Finally, the ITI lasted 2,100, 3,900, or 5,700 ms (exponentially distributed, with shorter durations more frequent). The fixation cross again disappeared for the last 200 ms of the ITI to signal that the next trial was about to begin.
Procedure
The experiment began with a brief hearing test in which participants indicated each time they perceived one of five tones spanning 300–700 Hz (sine wave form, 100-Hz increments), presented at intervals that varied randomly between 3,000 and 11,000 ms (2,000-ms increments). Each tone was played to each ear separately, in random order, and the volume was to be adjusted as needed until the participant identified the tones with 100% accuracy. The computer volume was initially set to ~56 dB, and all participants were 100% accurate in the hearing test without adjustment, so the original intensity was retained throughout the experiment in all cases. Participants then completed two runs of the training phase, three runs of the test phase, one more run of the training phase, and finally two more runs of the test phase.
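As an illustration, the hearing-test schedule can be generated as follows; the structure (tone-ear pairs and onset intervals) matches the description above, but the variable names are assumptions.

```python
# Sketch of the hearing-test schedule: five tones (300-700 Hz in 100-Hz
# steps) presented to each ear separately, in random order, at intervals of
# 3,000-11,000 ms in 2,000-ms steps.
import random

freqs = range(300, 701, 100)
trials = [(f, ear) for f in freqs for ear in ("left", "right")]
random.shuffle(trials)
schedule = [(freq, ear, random.choice(range(3000, 11001, 2000)))
            for freq, ear in trials]
```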
Data analysis
RT was measured from the onset of the auditory stimuli. Only correct trials were included in the RT analyses. RTs more than 2.5 standard deviations above or below the mean for a given condition for a given participant were trimmed (Anderson & Yantis, 2013; A. J. Kim & Anderson, 2020). In addition, we excluded as outliers the data of three participants whose mean accuracy fell more than 2.5 standard deviations below, or whose mean RT fell more than 2.5 standard deviations above, the group average (see Anderson, 2016c). Thus, 35 complete data sets were analysed.
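The trimming and exclusion rules can be summarised in a short pandas sketch; the data-frame column names (subject, condition, rt, correct) are assumptions for illustration, and the directional participant-exclusion criterion is simplified to an absolute cut-off.

```python
# Sketch of the data-cleaning rules: keep correct trials, trim RTs beyond
# 2.5 SD of each participant x condition cell, and exclude participants whose
# mean accuracy or mean RT is an outlier relative to the group average.
# Column names are illustrative assumptions; `correct` is a boolean column.
import pandas as pd

def trim_rts(df, sd_crit=2.5):
    df = df[df["correct"]].copy()                  # correct trials only
    g = df.groupby(["subject", "condition"])["rt"]
    z = (df["rt"] - g.transform("mean")) / g.transform("std")
    return df[z.abs() <= sd_crit]

def exclude_outlier_subjects(df, sd_crit=2.5):
    means = df.groupby("subject")[["rt", "correct"]].mean()
    z = (means - means.mean()) / means.std(ddof=1)
    keep = means.index[(z.abs() <= sd_crit).all(axis=1)]
    return df[df["subject"].isin(keep)]
```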
Results
Training phase
In the training phase, a repeated-measures analysis of variance (ANOVA) revealed that RTs significantly differed among the three target conditions, F(2, 68) = 12.39, p < .001. Post hoc comparisons revealed that participants were significantly faster to report high-value targets compared with both unrewarded targets, t(34) = 4.58, p < .001, dz = 0.774, and low-value targets, t(34) = 3.31, p = .002, dz = 0.559, but no significant differences were found comparing low-value and unrewarded targets, t(34) = 1.79, p = .082 (see Figure 2a). Accuracy also differed significantly among the three target conditions, F(2, 68) = 4.23, p = .019. Post hoc comparisons revealed that participants were significantly more accurate in reporting high-value targets compared with unrewarded targets, t(34) = 2.71, p = .011, dz = 0.458, but no differences were found between high- and low-value targets, t(34) = 1.56, p = .129, or between low-value and unrewarded targets, t(34) = 1.52, p = .139 (see Figure 2b).
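The statistical pipeline is the same for both phases and can be sketched as below, with pingouin and SciPy as assumed tools; the data frame holds one mean RT per participant and condition, and the condition labels are illustrative.

```python
# Sketch of the reported analyses: a one-way repeated-measures ANOVA over the
# three value conditions, followed by paired t-tests with Cohen's dz.
# pingouin and the column/condition names are assumptions, not our code.
import pingouin as pg
from scipy import stats

def analyse(df):
    # df: one mean RT per subject x condition, in long format
    print(pg.rm_anova(data=df, dv="rt", within="condition", subject="subject"))
    wide = df.pivot(index="subject", columns="condition", values="rt")
    for a, b in [("high", "none"), ("high", "low"), ("low", "none")]:
        diff = wide[a] - wide[b]
        t, p = stats.ttest_rel(wide[a], wide[b])
        dz = diff.mean() / diff.std(ddof=1)        # Cohen's dz for paired data
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}, dz = {dz:.3f}")
```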
Figure 2.
Behavioural results. (a) Response time and (b) accuracy in the training phase, and (c) response time and (d) accuracy in the test phase. Data are broken down by target-reward contingencies (unrewarded, low-value, high-value) in the training phase and by learned reward-distractor associations (no-value, low-value, high-value) in the test phase. Error bars depict within-subject confidence intervals calculated using the Cousineau method with a Morey correction. *p < .05, ***p < .001.
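The within-subject confidence intervals described in the caption above can be computed as sketched below, given a subjects × conditions table of condition means; this is a generic implementation of the Cousineau normalisation with the Morey correction, not our plotting code.

```python
# Sketch of the within-subject CIs in Figure 2: remove each participant's
# mean, add back the grand mean (Cousineau method), then scale the SEM by
# sqrt(C / (C - 1)) for C conditions (Morey correction). `wide` is an assumed
# pandas DataFrame of condition means, one row per participant.
import numpy as np
from scipy import stats

def cousineau_morey_ci(wide, confidence=0.95):
    n, c = wide.shape
    normed = wide.sub(wide.mean(axis=1), axis=0) + wide.values.mean()
    sem = normed.std(ddof=1) / np.sqrt(n)          # per-condition SEM
    correction = np.sqrt(c / (c - 1))              # Morey correction factor
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return sem * correction * t_crit               # CI half-widths
```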
Test phase
In the test phase, a repeated-measures ANOVA revealed that RTs differed significantly among the three distractor conditions, F(2, 68) = 3.37, p = .040. Post hoc comparisons revealed that RTs were significantly slower on high-value distractor trials compared with no-value distractor trials, t(34) = 2.71, p = .011, dz = 0.458, but no significant differences were found comparing high- to low-value distractor trials, t(34) = 1.63, p = .113, or comparing low- to no-value distractor trials, t(34) = 0.70, p = .490 (see Figure 2c). Accuracy did not significantly differ among the three distractor conditions, F(2, 68) = 1.43, p = .246 (see Figure 2d), with numerical differences mirroring the pattern in RT (and thus inconsistent with a speed-accuracy trade-off).
Discussion
In this study, we investigated whether reward-associated sounds can automatically capture attention and interfere with goal-directed auditory processing. In the training phase, our findings reveal faster responses to the stimulus associated with high reward, demonstrating a potentially strategic attentional bias towards high-value sounds when they are task-relevant. Importantly, our results in the test phase show biased competition in favour of the stimulus previously associated with high value, even when it is task-irrelevant, demonstrating an involuntary attentional bias towards reward-associated sounds in an auditory target identification task.
Previously, Asutay and Västfjäll (2016) concluded that auditory attention can be biased by reward learning. However, in the task that Asutay and Västfjäll (2016) utilised, reward was associated with the absence of sound within an auditory stream, and participants could choose to selectively attend to one of the two streams when monitoring for such gaps. In this study, we extend these findings by demonstrating involuntary auditory attentional capture that interferes with the ability to process competing information presented auditorily. Even though participants knew that spoken letters were entirely task-irrelevant, presentation of the letter previously associated with high value impaired auditory target identification.
Our findings extend the principle of VDAC to stimulus competition evoked by task-relevant and task-irrelevant auditory stimuli; these findings build on the results of Anderson (2016c), which showed auditory attentional capture interfering with visual processing. It is thus apparent that the influence of value-driven attentional processes on stimulus representation is not limited to representations within the visual system and instead reflects a broader principle of how information is represented across sensory systems. With respect to this conclusion, it is important to note that the high-value and no-value distractors were equated in their selection history, having served as a target an equal number of times during training; it thus appears that learned value, rather than reward-independent aspects of prior experience such as history as a task-relevant stimulus and stimulus-response habit learning (see H. Kim & Anderson, 2019), is responsible for the observed slowing of RT (see Anderson & Halpern, 2017; Sha & Jiang, 2016).
In this study, we did not observe significant differences between the low-value condition and either of the other two distractor conditions, consistent with prior reports of a failure to detect differences between the low-value and other conditions in both the visual (e.g., Anderson, 2016b; Anderson et al., 2011, 2013, 2014; Laurent et al., 2015) and auditory domains (Anderson, 2016c). This lack of sensitivity with respect to the low-value distractor condition is likely an issue of statistical power. Effects concerning the low-value distractor are considerably smaller in magnitude (see Anderson & Halpern, 2017, for a power analysis), and our study was powered to detect the more robust difference between the high-value and no-value distractor conditions. Measuring eye movements in the visual domain yields a measure of value-driven attention across differently valued stimuli that is more reliable and thus more readily detected with smaller sample sizes (see Anderson & Kim, 2019), and studies of value-driven attention in the auditory domain might similarly benefit from the development of a more reliable measure of performance. As stated above, the fact that selection history was equated between the high-value and no-value distractor conditions provides clear evidence for a value-modulated effect, albeit one that is more readily detected using stimuli of relatively higher value in the task.
The neural mechanisms of VDAC in the auditory domain remain to be explored. Some studies have investigated whether reward modulates auditory cortical processing using an approach/avoidance task (David et al., 2011) and a monetary incentive delay task in non-human primates (Wikman et al., 2019). Furthermore, electrophysiological studies have shown evidence for elevated stimulus-evoked activation in auditory cortex by previously reward-associated but currently task-irrelevant sounds (Folyi et al., 2016; Folyi & Wentura, 2019). It is unknown whether value-driven attentional processes evoked by auditory stimuli influence stimulus representation in regions of multisensory integration or cross-modal attentional control, or whether such processes are restricted to stimulus-evoked representations in the auditory system (Folyi et al., 2016; Folyi & Wentura, 2019). Furthermore, in this study, we employed auditory stimuli that were alphanumeric in nature (spoken letters and numbers), which may have recruited amodal or otherwise non-auditory stimulus-evoked representations within which competition for selection occurred. Therefore, although this study evidences attentional capture by task-irrelevant auditory stimuli that interferes with the processing of task-relevant auditory stimuli, the role of the auditory system of the brain in mediating this behavioural effect is unclear. This study offers a framework for future neuroimaging studies to compare and contrast how reward modulates attention networks across the visual, auditory, and multisensory systems of the brain.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by grants from the National Institutes of Health (R01-DA046410) to B.A.A.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
- Addleman DA, & Jiang YV (2019). Experience-driven auditory attention. Trends in Cognitive Sciences, 23(11), 927–937.
- Ahveninen J, Hämäläinen M, Jääskeläinen IP, Ahlfors SP, Huang S, Lin F-H, Raij T, Sams M, Vasios CE, & Belliveau JW (2011). Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise. Proceedings of the National Academy of Sciences, USA, 108(10), 4182–4187.
- Alards-Tomalin D, Brosowsky NP, & Mondor TA (2017). Auditory statistical learning: Predictive frequency information affects the deployment of contextually mediated attentional resources on perceptual tasks. Journal of Cognitive Psychology, 29(8), 977–987.
- Alho K, Salonen J, Rinne T, Medvedev SV, Hugdahl K, & Hämäläinen H (2012). Attention-related modulation of auditory-cortex responses to speech sounds during dichotic listening. Brain Research, 1442, 47–54.
- Anderson BA (2016a). The attention habit: How reward learning shapes attentional selection. Annals of the New York Academy of Sciences, 1369, 24–39.
- Anderson BA (2016b). Social reward shapes attentional biases. Cognitive Neuroscience, 7(1–4), 30–36.
- Anderson BA (2016c). Value-driven attentional capture in the auditory domain. Attention, Perception, & Psychophysics, 78(1), 242–250.
- Anderson BA (2017). Counterintuitive effects of negative social feedback on attention. Cognition & Emotion, 31(3), 590–597.
- Anderson BA (2019). Neurobiology of value-driven attention. Current Opinion in Psychology, 29, 27–33.
- Anderson BA, Faulkner ML, Rilee JJ, Yantis S, & Marvel CL (2013). Attentional bias for non-drug reward is magnified in addiction. Experimental and Clinical Psychopharmacology, 21, 499–506.
- Anderson BA, & Halpern M (2017). On the value-dependence of value-driven attentional capture. Attention, Perception, & Psychophysics, 79, 1001–1011.
- Anderson BA, & Kim H (2019). Test-retest reliability of value-driven attentional capture. Behavior Research Methods, 51, 720–726.
- Anderson BA, Laurent PA, & Yantis S (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, USA, 108(25), 10367–10371.
- Anderson BA, Laurent PA, & Yantis S (2014). Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Research, 1587, 88–96.
- Anderson BA, & Yantis S (2013). Persistence of value-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 39(1), 6–9.
- Asutay E, & Västfjäll D (2016). Auditory attentional selection is biased by reward cues. Scientific Reports, 6, 36989.
- Brainard DH (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.
- Cheng P-H, Saglam A, Andre S, & Pooresmaeili A (2020). Cross-modal integration of reward value during oculomotor planning. eNeuro, 7(1), ENEURO.0381-19.2020.
- Cherry EC (1953). Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America, 25, 975–979.
- David SV, Fritz JB, & Shamma SA (2011). Task reward structure shapes rapid receptive field plasticity in auditory cortex. Proceedings of the National Academy of Sciences, USA, 109(6), 2144–2149.
- Desimone R, & Duncan J (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.
- Dyson BJ (2010). Trial after trial: General processing consequences as a function of repetition and change in multidimensional sound. Quarterly Journal of Experimental Psychology, 63(9), 1770–1788.
- Failing M, & Theeuwes J (2017). Don't let it distract you: How information about the availability of reward affects attentional selection. Attention, Perception, & Psychophysics, 79(8), 2275–2298.
- Folyi T, Liesefeld HR, & Wentura D (2016). Attentional enhancement for positive and negative tones at an early stage of auditory processing. Biological Psychology, 114, 23–32.
- Folyi T, & Wentura D (2019). Involuntary sensory enhancement of gain- and loss-associated tones: A general relevance principle. International Journal of Psychophysiology, 138, 11–26.
- Hickey C, Chelazzi L, & Theeuwes J (2010). Reward guides vision when it's your thing: Trait reward-seeking in reward-mediated visual priming. PLOS ONE, 5(11), Article e14087.
- Kim AJ, & Anderson BA (2020). Neural correlates of attentional capture by stimuli previously associated with social reward. Cognitive Neuroscience, 11(1–2), 5–15.
- Kim H, & Anderson BA (2019). Dissociable components of experience-driven attention. Current Biology, 29, 841–845.
- Klein MD, & Stolz JA (2015). Looking and listening: A comparison of intertrial repetition effects in visual and auditory search tasks. Attention, Perception, & Psychophysics, 77(6), 1986–1997.
- Laurent PA, Hall MG, Anderson BA, & Yantis S (2015). Valuable orientations capture attention. Visual Cognition, 23, 133–146.
- Libera CD, & Chelazzi L (2006). Visual selective attention and the effects of monetary rewards. Psychological Science, 17(3), 222–227.
- McDonald JJ, Teder-Sälejärvi WA, Di Russo F, & Hillyard SA (2005). Neural basis of auditory-induced shifts in visual time-order perception. Nature Neuroscience, 8, 1197–1202.
- McDonald JJ, Teder-Sälejärvi WA, & Hillyard SA (2000). Involuntary orienting to sound improves visual perception. Nature, 407, 906–908.
- Parmentier FBR (2014). The cognitive determinants of behavioral distraction by deviant auditory stimuli: A review. Psychological Research, 78, 321–338.
- Pool E, Brosch T, Delplanque S, & Sander D (2014). Where is the chocolate? Rapid spatial orienting toward stimuli associated with primary rewards. Cognition, 130(3), 348–359.
- Ross B, Hillyard SA, & Picton TW (2010). Temporal dynamics of selective attention during dichotic listening. Cerebral Cortex, 20(6), 1360–1371.
- Sabri M, Humphries C, Verber M, Liebenthal E, Binder JR, Mangalathu J, & Desai A (2014). Neural effects of cognitive control load on auditory selective attention. Neuropsychologia, 61, 269–279.
- Sanz LR, Vuilleumier P, & Bourgeois A (2018). Cross-modal integration during value-driven attentional capture. Neuropsychologia, 120, 105–112.
- Schröger E, Giard MH, & Wolff C (2000). Auditory distraction: Event-related potential and behavioral indices. Clinical Neurophysiology, 111(8), 1450–1460.
- Seitz AR, Kim D, & Watanabe T (2009). Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61(5), 700–707.
- Serences JT (2008). Value-based modulations in human visual cortex. Neuron, 60(6), 1169–1181.
- Sha LZ, & Jiang YV (2016). Components of reward-driven attentional capture. Attention, Perception, & Psychophysics, 78(2), 403–414.
- Störmer VS, McDonald JJ, & Hillyard SA (2009). Cross-modal cuing of attention alters appearance and early cortical processing of visual stimuli. Proceedings of the National Academy of Sciences, USA, 106(52), 22456–22461.
- Tallus J, Soveri A, Hämäläinen H, Tuomainen J, & Laine M (2015). Effects of auditory attention training with the dichotic listening task: Behavioural and neurophysiological evidence. PLOS ONE, 10(10), Article e0139318.
- Wikman P, Rinne T, & Petkov CI (2019). Reward cues readily direct monkeys' auditory performance resulting in broad auditory cortex modulation and interaction with sites along cholinergic and dopaminergic pathways. Scientific Reports, 9, 3055.