The Journal of the Acoustical Society of America. 2019 Aug 29;146(2):1475–1491. doi: 10.1121/1.5123391

No effects of attention or visual perceptual load on cochlear function, as measured with stimulus-frequency otoacoustic emissions

Jordan A. Beim, Andrew J. Oxenham, and Magdalena Wojtczak

Abstract

The effects of selectively attending to a target stimulus in a background containing distractors can be observed in cortical representations of sound as an attenuation of the representation of distractor stimuli. The locus in the auditory system at which attentional modulations first arise is unknown, but anatomical evidence suggests that cortically driven modulation of neural activity could extend as peripherally as the cochlea itself. Previous studies of selective attention have used otoacoustic emissions to probe cochlear function under varying conditions of attention with mixed results. In the current study, two experiments combined visual and auditory tasks to maximize sustained attention, perceptual load, and cochlear dynamic range in an attempt to improve the likelihood of observing selective attention effects on cochlear responses. Across a total of 45 listeners in the two experiments, no systematic effects of attention or perceptual load were observed on stimulus-frequency otoacoustic emissions. The results revealed significant between-subject variability in the otoacoustic-emission measure of cochlear function that does not depend on listener performance in the behavioral tasks and is not related to movement-generated noise. The findings suggest that attentional modulation of auditory information in humans arises at stages of processing beyond the cochlea.

I. INTRODUCTION

Selective attention plays a key role in our ability to overcome challenges in perception posed by the myriad of sensory stimuli that constantly surround us. Solving the classical cocktail party problem (Cherry, 1953) in auditory perception depends on the ability of listeners to segregate and attend to a target amongst a background of many concomitant sounds. The mechanisms underlying our ability to successfully complete this task are a subject of intense study. Selective attention produces enhanced neural representations of attended speech relative to unattended speech in the cortex (Mesgarani and Chang, 2012). This enhancement is robust enough to allow researchers to decode the target of auditory attention from the neural signals (Choi et al., 2013; Ding and Simon, 2012; O'Sullivan et al., 2015; Zion Golumbic et al., 2013). Although these cortical changes are well documented, the origins of these changes within the auditory pathways, and the underlying neural mechanisms, remain unknown.

Based on anatomical evidence, it is possible that attention could also modulate signals within the peripheral auditory system, including the cochlea. The medial olivocochlear reflex (MOCR) is a well-studied neural circuit that can modulate cochlear function (Guinan, 2006). Activation of the MOCR reduces the cochlear gain produced by the outer hair cells. Otoacoustic emissions (OAEs; Kemp, 1978) have been used to study effects of MOCR activity in humans, where direct invasive measurement is not possible. Many studies have investigated whether MOCR activity is modulated by attention by examining changes in OAE level under different attentional conditions (Avan and Bonfils, 1992; Beim et al., 2018; de Boer and Thornton, 2007; Giard et al., 1994; Harkrider and Bowers, 2009; Srinivasan et al., 2014; Walsh et al., 2015). This body of research has revealed small and inconsistent effects of selective attention, with attention-related changes in OAE level ranging from fractions of a decibel (dB) to a few dB. The direction of these effects has varied across studies and across listeners within individual studies, making it difficult to ascribe any perceptual importance to them. Differences in measurement technique, as well as in task, may have contributed to the differences observed between studies. In particular, attention has been manipulated in a variety of ways, ranging from simply comparing measurements obtained during passive listening with those obtained under active task demands (Avan and Bonfils, 1992; de Boer and Thornton, 2007; Maison et al., 2001; Puel et al., 1988) to more controlled paradigms, in which effects of attention were compared between performance in a visual task and performance in an auditory task (Beim et al., 2018; Michie et al., 1996; Srinivasan et al., 2012, 2014). However, even in such studies, differences in factors such as perceptual or cognitive load have rarely been considered. These factors may be important in determining the relationship between cochlear effects and selective attention, particularly with respect to whether selection occurs at an early or late stage of processing.

Early debates on the nature of selective attention yielded evidence for both early selection (i.e., selectively filtering out unattended stimuli at an early stage of perceptual processing) and late selection (i.e., filtering only after considerable processing of the unattended stimuli had occurred). Late attentional selection is supported by studies showing that unattended stimuli can influence behavioral responses in attentional tasks (e.g., Miller, 1987; Stroop, 1935), suggesting that unattended stimuli are processed and that attentional filtering occurs shortly before the behavioral response. In contrast, other studies have provided evidence in favor of early selection by showing that certain aspects of unattended stimuli seem to be consciously inaccessible (Broadbent, 1958; Treisman, 1969). The perceptual load theory of selective attention (Lavie, 1995, 2005; Lavie and Tsal, 1994) proposes that the early and late selection theories can be reconciled by considering the perceptual or cognitive load involved in the specific tasks. According to the perceptual load theory, stimuli are processed obligatorily if sufficient attentional resources are available, resulting in late selection. However, when attentional resources are limited by a high perceptual load, selection operates earlier so that more resources can be devoted to attending to the target stimuli. Evidence of the effects of perceptual load on selective attention can be found in many studies in vision. For instance, increasing the perceptual load of a linguistic judgment task eliminated signatures of motion processing in visual cortex produced by moving dots in the participants' peripheral vision, relative to a condition in which the linguistic judgment task had a low perceptual load (Rees et al., 1997). Attenuation of the neural representations of distractor stimuli under high perceptual load has also been demonstrated in subcortical structures (e.g., the lateral geniculate nucleus; O'Connor et al., 2002), indicating that this attentional modulation is not limited to cortical processing. In the auditory system, some form of early selection could potentially occur via MOCR modulation of cochlear gain, which would propagate forward through the auditory pathways, resulting in large changes in the cortical representations of unattended stimuli (e.g., Mesgarani and Chang, 2012).

It is critical to note that perceptual load is not increased by reducing stimulus fidelity or by requiring near-threshold stimulus judgments, but rather by increasing the processing demands (Lavie et al., 2004). In the case of Rees et al. (1997), the high-load task involved determining whether visually presented words were bi-syllabic, which requires phonemic processing of the presented word (an increase in the linguistic processing load). The low-load task was to identify whether words were presented in uppercase letters, which does not even require that participants attempt to read or recognize the word. Increasing the number of stimuli presented to a participant (e.g., increasing the number of possible targets and/or distractors in a set of stimuli) is another way in which perceptual load can be increased (Lavie, 1995, 2005). As it relates to our hypothesis, a high perceptual load could lead to earlier attentional filtering and may therefore produce a more robust medial olivocochlear (MOC) efferent effect than a low perceptual load.

Most previous studies of the effects of selective attention on MOCR activity have used tasks that involve a low perceptual load. Examples of these low-load tasks include detecting a change in the intensity of a single repeated tone pip (Giard et al., 1994), detecting a change in visual grating orientation (Srinivasan et al., 2012), or detecting a target “Q” among a series of “O” stimuli in a serially presented stream of letters (Lukas, 1980). These tasks involve attending to a single stimulus, without the presence of distractors, in order to detect a change in a single feature. Recent studies by Walsh et al. (2014a,b, 2015) involved listening to or reading a series of digits, which imposes more perceptual load than simply detecting a deviant stimulus. This task also imposed a memory load, as subjects had to memorize the series of digits and compare them to a probe. In contrast to a high perceptual load, a high working-memory load has been shown to increase the influence of distractor stimuli (Lavie et al., 2004), suggesting that a high memory load may interfere with early selection. Another recent study (Beim et al., 2018) also involved a memory load, in the form of counting target stimuli in the auditory task and a 1-back task on the visual stimuli, but imposed little in the way of perceptual load. In addition, the task involved gaps of 2 s between targets, which could have led to momentary lapses in attention that would not have been detected. Finally, Beim et al. (2018) selected a low-frequency tone (∼750 Hz) as the probe, because stronger acoustically elicited MOCR effects had been found with probes at lower frequencies (between 500 and 1000 Hz) than at higher frequencies (Lilaonitkul and Guinan, 2012). However, studies of cochlear mechanics also show less cochlear gain at lower frequencies (Gorga et al., 2007; Recio-Spinoso and Oghalai, 2017; Robles and Ruggero, 2001) than at medium and high frequencies, implying that the choice of a low probe frequency may have limited the observable gain reduction produced by MOC efferent activity.

Here we present two experiments designed to (1) better control the attentional demands of the tasks by ensuring a high perceptual load in both the auditory and visual attention tasks, (2) assess cochlear function using higher-frequency stimulus-frequency otoacoustic emissions (SFOAEs) to maximize the observable MOCR effect, and (3) ensure that sustained attention was necessary for optimal performance. Based on the perceptual load theory of selective attention, we hypothesized that tasks involving a high perceptual load would be most likely to produce attentional modulation of cochlear gain.

II. EXPERIMENT 1

A. Rationale

The purpose of this experiment was to determine whether manipulations of attention result in changes in cochlear gain. The experimental design was similar to that used by Beim et al. (2018) but was refined in three key ways: (1) by increasing the perceptual load of the visual task, (2) by measuring cochlear function in both low- and high-frequency regions, and (3) by collecting behavioral responses to target stimuli immediately after detection rather than at the end of a trial. In addition to these changes to the test stimuli and tasks, background noise in the ear contralateral to stimulus presentation was monitored during the behavioral task and SFOAE recordings, as some previous investigations of selective-attention effects on cochlear function have noted differences in physiological noise depending on attention (Walsh et al., 2014a,b). The involvement of efferent effects in these changes in noise level was recently disputed by Francis et al. (2018), who argued that the changes are not likely to be due to MOCR activity but may instead reflect the tendency of participants to reduce movements while performing a behavioral task. By comparing observed changes in SFOAEs with any changes in the noise in the unstimulated ear, it should be possible to determine the degree to which changes in the SFOAE data might be related to noise generated by participant motion.

B. Method

1. Overview

Participants were presented with a series of auditory and visual stimuli that were present in all three attentional conditions. The auditory stimuli comprised short low- and high-frequency tones embedded in a noise with two spectral notches. Noise was used to elicit the MOCR because attention effects have been observed more consistently in conjunction with acoustic efferent activation (Froehlich et al., 1990, 1993; Maison et al., 2001; Michie et al., 1996; Veuillet et al., 1991; Walsh et al., 2014a,b, 2015) than without an acoustic elicitor (Picton et al., 1971; Picton and Hillyard, 1974). The visual stimuli comprised 12 red squares on a black background on a computer screen. At the beginning of each run, participants were instructed to attend to one aspect of the combined stimulus, the low-frequency tones (hereafter termed the AL condition), the high-frequency tones (AH condition), or the visual stimuli (V condition), and to perform a behavioral task, as described in Secs. II B 4 and II B 5. During each of the three attention conditions, SFOAEs were measured at one frequency to investigate task-dependent changes in SFOAE magnitude. Baseline measurements of the SFOAE were made at the beginning of each trial to ensure that a robust SFOAE with a signal-to-noise ratio (SNR) ≥20 dB (Goodman et al., 2013) was evoked by the tone used to monitor attention effects and that no systematic changes in the baseline SFOAE occurred across attention conditions. The three attention conditions were each tested once while the SFOAE was monitored at the lower frequency and again while the SFOAE was monitored at the higher frequency. The effects of attention were examined both within and across listeners by comparing the levels of the SFOAE evoked by the same stimuli across the different attention conditions. In this way it was possible to assess the effects of attention, as well as the effects of probe frequency. The working hypothesis was that a smaller SFOAE would be observed in conditions where attention was directed away from the probe (the V condition and the auditory condition for the non-probe frequency) and a larger SFOAE would be observed in conditions where attention was directed to the SFOAE-evoking probe frequency.

2. Participants

A total of 39 normal-hearing (NH) participants aged 19–61 yr were recruited to participate in experiment 1. Participants were required to pass both an SFOAE screening procedure and training on the behavioral task to be included in the experiment. After screening, a total of 30 participants (7 male, 23 female, aged 19–35 yr, mean age: 23.2 yr) remained in the participant sample. All the participants who failed screening were rejected because robust SFOAEs could not be measured in either the low-frequency (750 Hz) region (3 participants) or the high-frequency (4 kHz) region (6 participants). All participants in this study (experiments 1 and 2) had NH, defined as air-conduction thresholds <20 dB hearing level at octave frequencies between 0.25 and 8 kHz, measured using a calibrated Madsen Conera audiometer (GN Otometrics, Schaumburg, IL). All participants provided written informed consent before participating in the study and were either monetarily compensated for their time or received course credit. All experimental procedures were approved by the Institutional Review Board at the University of Minnesota.

3. Calibration

The SFOAE-evoking stimuli used in both the screening procedure and the experimental tasks were presented at 40 dB forward pressure level (FPL), individually calibrated in each participant's ear. The suppressor stimulus used in the baseline measurement of the SFOAE was presented at 60 dB FPL. Forward-pressure calibration was used to mitigate the effects of standing waves on in-ear calibration. Standing waves can cause large variations in the sound level reaching the cochlea, depending on the insertion depth of the probe and the size of individual participants' ear canals, especially at the higher nominal frequency of 4 kHz (Neely and Gorga, 1998; Scheperle et al., 2008, 2011; Siegel, 1994). In order to decompose the total recorded pressure from the microphone into forward and reverse pressure, we first estimated the Thévenin-equivalent source characteristics using a calibration tool specifically designed for use with the ER10X system ("ER10X Stepper," available from http://audres.org/cel/thev/, last viewed 9/7/2018). Once the source characteristics were estimated, in-ear measurements were made to calculate the forward pressure using version 3.32 of the EMAV software package (Neely and Liu, 1994; available from http://audres.org/rc/emav, last viewed 9/7/2018). In-ear calibration was performed at the beginning of every session, as well as any time participants removed the acoustic probe assembly from their ear.
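
For illustration, the following Matlab sketch shows one common way of computing forward pressure from an in-ear measurement once the Thévenin-equivalent source parameters are known. It is a minimal sketch under our own assumptions (the function name, variable names, and the availability of the ear-canal characteristic impedance Z0), not the EMAV implementation.

function [Pfwd, fpl] = forwardPressureSketch(P, Ps, Zs, Z0)
% Illustrative forward-pressure decomposition (not the EMAV code).
% All inputs are complex-valued, one value per frequency bin:
%   P  - total pressure measured at the probe microphone (Pa)
%   Ps - Thevenin-equivalent source pressure
%   Zs - Thevenin-equivalent source impedance
%   Z0 - characteristic (surge) impedance of the ear canal (assumed known)
Zec  = Zs .* P ./ (Ps - P);          % ear-canal load impedance
R    = (Zec - Z0) ./ (Zec + Z0);     % pressure reflectance
Pfwd = P ./ (1 + R);                 % forward-traveling pressure component
fpl  = 20*log10(abs(Pfwd)/20e-6);    % forward pressure level, dB re 20 uPa
end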

4. Auditory stimuli

The auditory stimuli were used both for the behavioral auditory task and for evoking the SFOAE used to assess the cochlear responses during all three selective-attention conditions (AL, AH, and V). Auditory stimuli were grouped into runs consisting of a baseline SFOAE measurement and a series of stimuli used for the behavioral task. The baseline SFOAE measurement was obtained using a suppression method (Shera and Guinan, 1999). A 10-s probe tone was presented at 40 dB FPL with a suppressor tone 50 Hz above the probe tone. The suppressor tone was gated on and off every 500 ms in alternating polarity, ending at the same time as the probe tone. The frequencies of the probe and suppressor tones were near either 0.75 or 4 kHz, depending on which SFOAE frequency was used to monitor selective-attention effects.

The task-relevant stimuli began after a 2-s silent gap that followed the offset of the probe tone used in the baseline measurement. Task-relevant stimuli consisted of a block of repeating sounds. Each block began with a 1-s cue tone presented at 40 dB FPL at the frequency (nominally 0.75 or 4 kHz) that listeners were instructed to attend. A notched-noise MOCR elicitor began 500 ms after the onset of the cue tone and was presented for 1.7 s with a root-mean-square (RMS) level of 60 dB sound pressure level (SPL). This Gaussian white noise spanned a frequency band from 0.1 to 10 kHz and included two 1-octave spectral notches centered at 0.75 and 4 kHz. Following the offset of the cue tone, a total of four 300-ms tone pips (two at each frequency) were presented at 40 dB FPL. The tones were gated with 5-ms raised-cosine ramps to avoid spectral splatter. The two tones presented at each frequency had random inter-stimulus intervals (ISIs). These random ISIs were used to make the timing of the stimuli less predictable so that listeners would need to attend vigilantly to the sequence of tones to detect the target stimuli. The ISIs were constrained to have a minimum duration of 100 ms and the offset of the final tone in the sequence occurred no later than the end of the noise. Each block ended with a 2-s silent interval to allow for the decay of MOCR effects produced by the noise elicitor. A total of four blocks were presented after each baseline measurement, resulting in a total of eight tone pips at each frequency. Up to three of these eight tone pips at each frequency were amplitude modulated at 10 Hz with 100% modulation depth. The selection of which tones were amplitude modulated was constrained so that only one of the two tones at each frequency could be modulated within a block. This constraint was imposed so that each block contained an unmodulated tone pip, which was used to measure the SFOAE at the test frequency. A schematic representation of the auditory stimuli used in experiment 1 is shown in Fig. 1.
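
As an illustration of the stimulus construction described above, the Matlab sketch below generates a notched-noise elicitor and a 300-ms tone pip with 10-Hz, 100%-depth AM and raised-cosine ramps. Parameter values are taken from the text, but the FFT-based notch filtering and the specific AM and ramp implementations are our assumptions rather than the authors' stimulus code.

% Illustrative stimulus sketch (parameter values from the text; the filtering,
% AM, and ramp implementations are assumptions, not the authors' code).
fs = 48000;                                    % sampling rate (Hz)

% Notched-noise MOCR elicitor: 0.1-10 kHz with 1-octave notches at 0.75 and 4 kHz
durN = 1.7;
x    = randn(1, round(durN*fs));               % Gaussian white noise
X    = fft(x);
f    = (0:numel(X)-1) * fs/numel(X);
fpos = min(f, fs - f);                         % folded frequency axis (covers negative-frequency bins)
keep = fpos >= 100 & fpos <= 10000 & ...
       ~(fpos >= 750/sqrt(2)  & fpos <= 750*sqrt(2)) & ...
       ~(fpos >= 4000/sqrt(2) & fpos <= 4000*sqrt(2));
X(~keep) = 0;                                  % impose the passband and the two spectral notches
elicitor = real(ifft(X));
elicitor = elicitor / sqrt(mean(elicitor.^2)) * 20e-6 * 10^(60/20);  % scale to 60 dB SPL rms (Pa)

% 300-ms tone pip at the nominal 4-kHz probe frequency, with and without 10-Hz AM
durT  = 0.3;  fc = 4000;
t     = (0:round(durT*fs)-1)/fs;
pip   = sin(2*pi*fc*t);
pipAM = (1 + sin(2*pi*10*t)) .* pip;           % 100% modulation depth

% 5-ms raised-cosine onset/offset ramps to limit spectral splatter
nr   = round(0.005*fs);
ramp = 0.5*(1 - cos(pi*(0:nr-1)/nr));
env  = [ramp, ones(1, numel(t) - 2*nr), fliplr(ramp)];
pip  = pip .* env;   pipAM = pipAM .* env;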

FIG. 1. (Color online) Schematic representation of the stimuli used in experiment 1. The top portion of the figure shows time-frequency representations of the auditory stimuli. The SFOAE probe is represented by the solid lines (light blue online) at the beginning of each run. Suppressor presentations are shown in black, and the MOC elicitor noise is illustrated by the shaded gray regions. Tones with AM are highlighted by a change in brightness (red online). The bottom panels show the timing of the visual stimuli. Listener responses to the visual task were collected during the free-response frame. During auditory trials the stimulus display ended 1 s after the free-response window began, and the final frame showing correct responses was not presented.

All auditory stimuli were generated digitally with a sampling frequency of 48 kHz and 24-bit resolution using custom Matlab software (The Mathworks, Natick, MA), converted to analog signals using an RME Fireface UC sound card (RME, Haimhausen, Germany), and delivered to participants' left ears via transducers in the ER10X acoustic probe system (Etymotic Research, Elk Grove Village, IL). Playback and recording were controlled using the Psychophysics Toolbox (Brainard, 1997) in Matlab. Recordings were made using the microphones of the ER10X system, which provided 20 dB of gain to the signal before it was digitized by the RME Fireface UC sound card and saved to the computer hard drive for offline analysis.

5. Visual stimuli

The visual stimuli began concurrently with the sequences of short tones used for the auditory behavioral task. We used a modified version of a classical set of multiple-object-tracking (MOT; Pylyshyn and Storm, 1988) stimuli, adapted from Makovski and Jiang (2009). At the onset of the visual stimulus presentation, 12 squares were presented at random locations on a computer monitor. Each square subtended approximately 0.28° of visual angle. The squares were presented in color on a black background that subtended approximately 15° × 18°. A subset of 5 squares was initially presented in yellow, while the remaining 7 were presented in red. The yellow squares cued participants to the targets that were to be tracked throughout their motion. Two seconds after appearing on the screen, the yellow squares turned red and all squares began moving on random trajectories. Motion trajectories were calculated to avoid collision or overlap between nearby squares. After 15 s, all squares ceased movement simultaneously with the offset of the auditory stimuli. When motion stopped, a mouse cursor appeared so that participants could select the squares that had been yellow at the beginning of the trial. After the participant had selected a total of five squares, the cursor was removed from the screen and feedback was provided by highlighting any correct responses in green; missed targets were not shown. Participants were not constrained to respond within any specific time during the response period. The free-response window and visual feedback were only used when participants were completing the visual task; during the auditory tasks, the visual stimulus presentation stopped 1 s after the end of the motion trajectory. The visual stimuli and their trajectories were generated prior to each trial and presented at a rate of 30 frames per second on a computer monitor. Sample frames depicting the stimuli during various stages of the MOT task are shown at the bottom of Fig. 1.
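
The Matlab sketch below illustrates one simple way to generate MOT-style trajectories of the kind described above. The normalized display coordinates, the per-frame speed, and the minimum-separation rule are our assumptions; the text specifies only that trajectories were random and avoided collision or overlap.

% Illustrative MOT trajectory sketch (geometry, speed, and separation rule assumed).
nSq = 12;  nFrames = 15*30;                    % 12 squares, 15 s at 30 frames/s
pos = rand(nSq, 2);                            % positions in normalized [0,1] display units
vel = 0.004 * (2*rand(nSq, 2) - 1);            % random per-frame velocities
minDist = 0.05;                                % assumed minimum separation between squares
traj = zeros(nSq, 2, nFrames);                 % frame-by-frame positions for playback
for k = 1:nFrames
    pos = pos + vel;
    out = pos < 0 | pos > 1;                   % bounce off the display edges
    vel(out) = -vel(out);
    pos = min(max(pos, 0), 1);
    for i = 1:nSq-1                            % reverse course when two squares get too close
        for j = i+1:nSq
            if norm(pos(i,:) - pos(j,:)) < minDist
                vel([i j], :) = -vel([i j], :);
            end
        end
    end
    traj(:, :, k) = pos;
end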

6. Procedure

All procedures were conducted in a double-walled sound-attenuating chamber (Industrial Acoustics Company, Bronx, NY) with the participants seated in a semi-reclined position in a chair with a head rest to minimize motion-related artifacts. Participants kept a computer keyboard on their lap to make responses during the auditory attention trials and were required to press a key within 1 s of detecting an amplitude-modulated tone at the cued frequency. A mouse was used to record responses to the visual stimuli after the end of the stimulus presentation. Participants were instructed to remain as still and relaxed as possible during each run without closing their eyes.

Before testing began, participants completed a two-part screening procedure to ensure that: (1) any spontaneous otoacoustic emissions were at least 100 Hz away from the experimental frequency range, and (2) high SNR measurements of SFOAE were obtainable within the frequency range of the experimental stimuli. These screening procedures are described in detail in Beim et al. (2018). The frequency of the low-frequency tone used for the SFOAE probe was selected as the frequency between 675 and 825 Hz that produced the largest SFOAE with an SNR greater than 25 dB. The frequency of the high-frequency tone was selected from the range of 3.6–4.4 kHz using the same criterion. Participants passed this screening procedure if at least one valid probe frequency was found in both the low- and high-frequency ranges.

After passing screening, participants completed 15 training runs of the experiment for each of the three attentional conditions. Training runs differed from experimental runs in two key ways: (1) amplitude modulation (AM) was never imposed on the distractor frequency, and (2) the MOCR noise elicitor was not presented. For a response to be counted as valid, participants had to respond to the presence of AM at the target frequency within 1 s of the AM offset. To pass the training, they needed to achieve at least 75% correct detection of the AM targets across the 15 runs for each condition. The screening and training procedures were typically completed within a single 2-h session.

In a second session, typically within 1 week of the first session, participants completed the experimental runs under a total of six conditions (3 attention conditions × 2 SFOAE probe frequency conditions). Participants were instructed to attend to the low-frequency tone (AL), the high-frequency tone (AH), or the visual stimuli (V) and to perform the relevant task for each stimulus (detecting AM for the auditory conditions and performing MOT for the visual stimuli). The experimental runs were grouped by condition, and participants completed 15 runs for each condition. The order in which conditions were completed was counterbalanced across participants to attempt to control for any order effects or longer-term buildup effects. Participants completed all six attention conditions within a single experimental session, during which the probe assembly was not removed from their ears.

While participants were completing the experimental runs, a second ER10X probe was used to record background noise in the contralateral ear. The contralateral recordings were made during half of the total runs (either during the low- or high-frequency SFOAE measurements) for each participant, selected randomly.

7. SFOAE analysis

Before extracting emissions, recordings were manually scanned for noise and movement-related artifacts. Recorded runs that contained visible artifacts were excluded from analysis. Additional recordings were made if the magnitude of the SFOAE in the baseline measurement was less than 25 dB above the noise floor. On average, 12 recordings (range: 6–15) remained after exclusion, yielding on average 120 probe-suppressor pairs for the baseline SFOAE measurement. The baseline measurement was split into 1-s segments that contained 500 ms of the SFOAE probe alone and 500 ms of the probe plus the suppressor. The tone pips presented during the behavioral task that did not contain AM were pooled across blocks and runs for averaging. The recordings for both the baseline SFOAE measurement and the task stimuli were averaged across the accepted runs. A heterodyne procedure was then used to extract the sound pressure at the stimulus frequency (Guinan et al., 2003). The baseline SFOAE magnitude was estimated using the suppression technique (Shera and Guinan, 1999), which takes the vector difference between the ear-canal sound pressure recorded with the SFOAE probe tone alone and the sound pressure recorded with the SFOAE probe tone presented together with the suppressor to estimate the SFOAE residual. A phasor diagram depicting this vector subtraction is shown in Fig. 2. The mean complex-valued sound pressure taken from a 300-ms window, temporally centered on the duration of the suppressor tone, was subtracted from each point of the heterodyned waveform. This yields the magnitude and phase of the SFOAE residual during the first 500 ms of the waveform and the noise floor during the final 500 ms, as shown for a typical participant in Figs. 3(A)–3(D). Next, the same mean sound pressure was subtracted from the segments corresponding to the behavioral task stimuli (unmodulated tones), yielding the SFOAE residual magnitude and phase. For each participant, the mean emission magnitude was calculated from a 200-ms window centered temporally on the task tone-pip waveform. The SFOAE evoked during this window is shown in Fig. 3(E) alongside the baseline SFOAE measurement. Note that the difference between the baseline (gray) and task (blue) emissions reflects both the contribution of the MOCR elicitor and any effect of selective attention.
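
The following Matlab function sketches the heterodyne extraction and vector subtraction described above. The window positions and the simple moving-average low-pass filter are our assumptions, and the function and variable names are illustrative; this is not the authors' analysis code.

function [baseMagDB, taskMagDB] = sfoaeHeterodyneSketch(pBase, pTask, fs, fProbe)
% pBase  : averaged 1-s baseline segment (500 ms probe alone, then 500 ms probe + suppressor)
% pTask  : averaged 300-ms unmodulated task tone pip
% fs     : sampling rate (Hz);  fProbe : SFOAE probe frequency (Hz)
het = @(x) x(:) .* exp(-1i*2*pi*fProbe*(0:numel(x)-1).'/fs);   % shift the probe component to 0 Hz
n   = round(fs/1000);
lp  = @(x) filter(ones(n,1)/n, 1, x);                          % simple 1-ms moving-average low-pass
Hb  = lp(het(pBase));                       % complex ear-canal pressure vs time, baseline segment
Ht  = lp(het(pTask));                       % complex ear-canal pressure vs time, task tone
% mean complex pressure in a 300-ms window centered on the suppressor portion (assumed 600-900 ms)
Psup = mean(Hb(round(0.60*fs):round(0.90*fs)));
resBase = Hb - Psup;                        % vector subtraction: SFOAE residual, baseline
resTask = Ht - Psup;                        % vector subtraction: SFOAE residual, task tone
% mean residual magnitudes over windows within the probe-alone and task-tone portions (assumed)
baseMagDB = 20*log10(mean(abs(resBase(round(0.10*fs):round(0.40*fs))))/20e-6);
taskMagDB = 20*log10(mean(abs(resTask(round(0.05*fs):round(0.25*fs))))/20e-6);
end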

FIG. 2. (Color online) Phasor diagrams and time-frequency schematic illustrations depicting the emission extraction procedure from the averaged waveform of recorded ear-canal pressure. (A) A phasor diagram depicting the extraction of the baseline SFOAE as the resultant of the vector subtraction of the baseline pressure Pb and the suppressed baseline pressure Ps. (B) A phasor diagram depicting the extraction of the SFOAE from the task-relevant stimuli as the vector difference between the pressure during the behavioral task Pt and Ps. (C) and (D) are time-frequency schematics of the stimuli used to extract the SFOAEs depicted in (A) and (B), respectively. Shaded gray regions represent the notched-noise elicitor. Boxes highlight the analysis windows over which averaging was performed for the vector subtraction.

FIG. 3. (Color online) Results of the SFOAE extraction procedure from a single participant using the 4-kHz probe tone. (A)–(D) show the result of the extraction procedure from data recorded during the baseline measurement. Stimulus pressure recorded in the ear canal is decomposed into magnitude and phase in the left panels, while the extracted SFOAE is shown on the right. (E) compares the emission magnitude evoked by the behavioral task stimulus (in blue) with the baseline SFOAE magnitude shown in gray [baseline replotted from (B)].

8. SFOAE bootstrap analysis procedure

To determine whether differences in SFOAE levels across attentional conditions were significant within an individual, a bootstrapping procedure was used to estimate variability in the extracted SFOAE magnitudes. Artifact-free pairs of recorded segments for both the baseline measurement and behavioral task were resampled randomly with replacement, such that the total number of resampled segments was the same as the original number of segments for that participant. The SFOAE extraction procedure described previously was then repeated on each resampled set of recorded data and the mean emission magnitudes were saved. This resampling procedure was repeated 10 000 times for each participant to construct estimated distributions of emission magnitude across the attentional conditions. Two conditions were considered significantly different when the mean magnitude obtained from one bootstrap distribution fell outside the 95% confidence interval of the second distribution.
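
A minimal Matlab sketch of this bootstrap comparison is given below. The resampling unit and the extraction function are stand-ins for the procedure described in the text, and the function names are illustrative.

function sig = bootstrapCompareSketch(segsA, segsB, extractFcn, nBoot)
% segsA, segsB : cell arrays of artifact-free recorded segment pairs for two conditions
% extractFcn   : handle returning the mean SFOAE magnitude (dB) from a set of segments
% nBoot        : number of bootstrap resamples (10 000 in the text)
magA = zeros(nBoot, 1);  magB = zeros(nBoot, 1);
for b = 1:nBoot
    idxA = randi(numel(segsA), numel(segsA), 1);   % resample with replacement
    idxB = randi(numel(segsB), numel(segsB), 1);
    magA(b) = extractFcn(segsA(idxA));
    magB(b) = extractFcn(segsB(idxB));
end
ciA = prctile(magA, [2.5 97.5]);                   % 95% confidence intervals
ciB = prctile(magB, [2.5 97.5]);
% conditions differ significantly if either mean falls outside the other's CI
sig = mean(magB) < ciA(1) || mean(magB) > ciA(2) || ...
      mean(magA) < ciB(1) || mean(magA) > ciB(2);
end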

C. Results

1. Behavioral results

Group-level analysis of the behavioral data confirmed that participants were able to complete the different tasks with high accuracy, suggesting that they were attending to the intended task in each case. Hit and false-alarm rates were used to calculate participants' sensitivity, d′, to the presence of AM on the attended carriers (Green and Swets, 1966). We did not estimate d′ for the visual task because participants were required to make five responses corresponding to five targets, meaning that misses and false alarms would not be independent of one another. Instead, performance on the visual task was scored as the percentage of correctly identified targets.
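
For reference, the sketch below shows the standard d′ computation from hit and false-alarm rates (Green and Swets, 1966) in Matlab. The 1/(2N) correction for perfect hit or zero false-alarm rates is one common convention and is an assumption on our part; the paper does not state which adjustment was used.

function dprime = dprimeSketch(nHits, nTargets, nFA, nNontargets)
% Standard equal-variance Gaussian d' from hit and false-alarm counts.
H = nHits / nTargets;                                       % hit rate
F = nFA / nNontargets;                                      % false-alarm rate
H = min(max(H, 1/(2*nTargets)),    1 - 1/(2*nTargets));     % assumed correction for rates of 0 or 1
F = min(max(F, 1/(2*nNontargets)), 1 - 1/(2*nNontargets));
dprime = norminv(H) - norminv(F);                           % requires the Statistics Toolbox
end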

A paired-samples t-test was used to compare group-level performance on the behavioral tasks between the two auditory attention conditions. There was a significant effect of attended frequency [t(59) = 3.39, p = 0.001], indicating that performance in the AH condition (mean d′: 2.23) was better than in the AL condition (mean d′: 1.50). Performance in the visual condition was similar across the two sets of recorded SFOAE frequencies, with participants correctly identifying 73% of targets on average.

2. SFOAE results

First, attentional effects on the SFOAE were examined at the group level. Mean values of SFOAE magnitude during each attention condition are plotted separately by SFOAE frequency in Fig. 4. A repeated-measures analysis of variance (ANOVA) with factors of probe frequency and attention condition was conducted on the mean SFOAE magnitudes. The ANOVA revealed a significant main effect of probe frequency [F(1,29) = 8.78, p = 0.006], indicating that SFOAE magnitudes were greater at the lower probe frequency, as expected given that SFOAE magnitudes have been shown to decrease with increasing frequency (Dewey and Dhar, 2017). However, there was no significant effect of attention [F(2,58) = 0.361, p = 0.699] and no significant interaction between the two factors [F(1.35,39.3) = 0.18, p = 0.835]. A second ANOVA with the same factors was conducted on the SFOAE phase rather than magnitude to examine whether the phase was influenced by attention. This ANOVA revealed no significant main effects [F(1,29) < 0.798, p > 0.455] and no significant interaction between the factors [F(2,58) = 0.854, p = 0.431].

FIG. 4. (Color online) Mean data (n = 30) for SFOAE magnitudes extracted from the task stimuli in experiment 1. Sets of three bars are grouped by probe frequency. SFOAEs evoked during the auditory attention conditions are shown in shades of blue, while SFOAEs from the visual attention condition are shown in yellow. Error bars denote ±1 standard error of the mean.

The lack of any effect of attention in the mean data might obscure large individual differences, which have been reported previously (e.g., Beim et al., 2018). To explore this possibility, a bootstrap analysis was carried out at the level of individual participants, which revealed that most participants did not exhibit significant differences in SFOAE magnitude across attentional conditions at either probe frequency. The number of participants who did not exhibit significant shifts in emission magnitude for each of the three paired attention comparisons is shown by the gray bars in Fig. 5. Some participants exhibited significant shifts in SFOAE magnitude across attentional conditions, but the direction of the shifts was not consistent across these individuals (blue bars represent cases in which the first condition in the comparison labeled on the x axis yielded a significantly larger SFOAE than the second, and orange bars represent the opposite effect). These within-subjects analyses do not support the hypothesis that a higher SFOAE frequency might better reveal effects of selective attention, because the pattern of results and the total number of subjects exhibiting significant effects are similar across the two probe frequencies (top vs bottom panel in Fig. 5). The bootstrap analysis also failed to provide evidence of any systematic shift in SFOAE levels based on attention condition. Thus, analysis at both the group and individual levels provides no evidence for attentional modulation of SFOAE magnitudes.

FIG. 5. (Color online) Summary results from the experiment 1 bootstrap analysis. Each set of three bars shows the number of participants exhibiting each possible directional relationship between SFOAE magnitudes for the three attentional comparisons of interest. Gray bars show the number of participants for each paired comparison for whom there was no significant difference. Blue bars show the number of participants for whom the first condition produced larger SFOAE magnitudes than the second (e.g., in the first set of bars showing the AL-V comparison, the blue bar indicates the number of participants for whom SFOAEs were significantly larger in the AL condition). Orange bars show a significant effect in the direction opposite to the blue bars. The top panel shows the results using the low-frequency SFOAE probe, while the bottom panel shows results for the high-frequency probe tone.

3. Relationship between behavioral and SFOAE data

To relate the SFOAE data to behavioral performance, we correlated individual performance in each attention condition with the SFOAE magnitude evoked while participants were attending to the target stimulus. If MOCR activity were linked to behavioral performance on these tasks, then we might expect SFOAEs to vary with individual listener performance. Measures of behavioral performance are plotted against SFOAE magnitudes in Fig. 6 for each attention condition. As seen in the figure, there is no systematic relationship between the performance measures and the corresponding SFOAE measures at either stimulus frequency. Correlational analysis revealed no significant correlations between SFOAE magnitudes and behavioral task performance for any of the three attentional conditions at either probe frequency [0.01 < r(29) < 0.23, p > 0.234].

FIG. 6. (Color online) Scatterplots relating SFOAE magnitude to behavioral performance. Each condition is plotted on a separate axis. Correlation coefficients and significance values are shown for each plot. Dashed horizontal lines in the V conditions denote chance performance on the behavioral task.

4. Noise measurements

Analysis of the noise recorded in the ear contralateral to the stimulus presentation was conducted to examine whether changes in the SFOAE level were related to participant-generated noise that was not detected as an artifact but might have contaminated the recordings. To analyze the noise, recorded samples from the unstimulated ear were taken from the same time intervals as the SFOAE measurements, so that the noise levels could be temporally linked to their corresponding SFOAE waveforms in the stimulated ear. A high-pass filter with a 250-Hz cutoff was used to eliminate low-frequency noise that was present within the sound-attenuating chamber. The noise level was calculated from the RMS amplitude of the filtered noise waveform within 200-ms windows at the same times used to analyze the SFOAE stimuli. Mean values of the noise levels in the ear canal measured during the behavioral task are plotted in Fig. 7. A repeated-measures ANOVA with within-subjects factors of task (whether the noise sample was taken during the baseline measurement or during the behavioral task) and attention condition was conducted on the noise values. The ANOVA revealed no significant effect of task [F(1,28) = 3.95, p = 0.057], indicating that noise levels did not change between the baseline measurement of the SFOAEs and the behavioral-task portion of each trial. The ANOVA also revealed no significant effect of attention [F(2,56) = 0.775, p = 0.466] and no significant interaction between the two factors [F(2,56) = 0.776, p = 0.465].
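
The Matlab sketch below illustrates the noise-level computation described above. The text specifies only a 250-Hz high-pass cutoff, a 200-ms analysis window, and an RMS level; the filter order and design, and the expression of the result in dB SPL, are our assumptions.

function noiseDB = contraNoiseSketch(x, fs, winStart)
% x        : recorded contralateral-ear waveform (Pa)
% fs       : sampling rate (Hz)
% winStart : start time of the 200-ms analysis window (s)
[b, a] = butter(4, 250/(fs/2), 'high');        % assumed 4th-order Butterworth high-pass at 250 Hz
xf  = filtfilt(b, a, x(:));                    % zero-phase filtering
idx = round(winStart*fs) + (1:round(0.2*fs));  % 200-ms analysis window
noiseDB = 20*log10(sqrt(mean(xf(idx).^2))/20e-6);   % RMS level in dB SPL
end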

FIG. 7. (Color online) Mean contralateral-ear noise magnitudes across participants for each of the experimental conditions in experiment 1. Magnitudes are estimated from noise sampled during the same time periods as the task SFOAE. Error bars represent standard deviations.

Finally, we conducted a correlational analysis between the noise recorded in the unstimulated ear and the SFOAEs extracted from the same time segments. The analysis revealed no significant correlation between noise levels in the unstimulated ear and SFOAE magnitudes in any attention condition [-0.19 < r(14) < 0.25, p > 0.419] or pooled across attention conditions [0.065 < r(44) < 0.095, p > 0.565].

5. Minimum detectable shifts in SFOAE magnitude

In addition to assessing whether changes in SFOAE magnitude were significant within individual listeners, we used the resampled data from the bootstrap procedure to estimate the minimum detectable shift (MDS) in OAE magnitude that our measurements could resolve, based on the procedure used by Goodman et al. (2013). The mean magnitude from each participant's distribution of resampled data was subtracted from that distribution, and the 95% confidence interval was used to estimate the smallest change in SFOAE magnitude that we could reliably detect. Data from the two SFOAE frequency regions were pooled after a t-test confirmed that there was no significant difference in MDS across frequency regions [t(29) = –0.86, p = 0.397]. Across participants and SFOAE frequency regions, the median value of the MDS was 1.54 dB (range: 0.53–5.96 dB). Although this value is not directly comparable to that of Goodman et al. (2013), due to differences in the number of averaged presentations, stimulus type, and artifact-rejection methodology, it is still a useful evaluation of the variability in our results. Our data do not allow us to examine the shift in OAE magnitude driven purely by the acoustic activation produced by the MOCR elicitor, because the elicitor was only present during the attentional task. The median difference between the SFOAE magnitudes in the baseline and attentional conditions was a factor of 5.35 (range: 0.04–118.21) greater than the MDS for each participant. The large range reflects the differing patterns across participants in both the MDS and the shifts in SFOAE magnitude: some participants had very reliable data with a small MDS and large shifts between the baseline and the task, while others exhibited higher variability and smaller baseline-to-task shifts. The median shift in OAE magnitude between any pair of attention conditions was 2.36 dB when shift direction was ignored, but was only −0.70 dB when the direction of the shift was taken into account. This suggests that, although the absolute shifts in magnitude between attention conditions were smaller than the shifts from baseline to task (median shifts between attention conditions were 1.09 times the MDS, compared with 5.35 times the MDS for baseline-to-task shifts), the changes were still detectable in most participants. It is instead the lack of consistent directionality in the shifts in OAE magnitude that underlies our inability to detect a systematic effect of attention across participants.
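
The sketch below shows how an MDS of this kind can be obtained from the bootstrap distribution, following the logic described above. The exact definition used by Goodman et al. (2013) may differ in detail; this is an illustration, not their procedure.

function mds = minDetectableShiftSketch(bootMagsDB)
% bootMagsDB : bootstrap-resampled SFOAE magnitudes (dB) for one condition and participant
centered = bootMagsDB - mean(bootMagsDB);       % remove the mean magnitude
ci  = prctile(centered, [2.5 97.5]);            % 95% confidence interval about zero
mds = max(abs(ci));                             % smallest shift distinguishable from resampling noise
end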

D. Discussion

Experiment 1 was designed to improve the ability to observe attentional modulation of MOC efferent effects, relative to earlier studies, in two important ways: (1) by using a visual task that involves a high perceptual load, which according to the perceptual load theory of selective attention should lead to early filtering of unattended stimuli (possibly as early as the cochlea); and (2) by using a high-frequency probe tone, which may be subject to greater cochlear gain, thereby providing a wider dynamic range for MOC efferent modulation and making it more likely that efferent-induced changes would be observed. Contrary to our hypothesis regarding the high-frequency probe, the individual analyses revealed that statistically significant shifts in SFOAE magnitude occurred with roughly equal frequency when measuring SFOAEs using low- (approximately 0.75 kHz) or high- (approximately 4 kHz) frequency probe tones. A lack of improvement with high-frequency SFOAE probe tones may not be entirely surprising, however, as there is some evidence in humans indicating that MOC efferent effects measured using SFOAEs are larger at frequencies around 0.5 and 1 kHz than near 4 kHz (Lilaonitkul and Guinan, 2012; Zhao and Dhar, 2012).

In both frequency regions, we were unable to find evidence of a relationship between participants' behavioral performance on the task and the SFOAE magnitudes measured during the task. Experiment 1 also provided no evidence that background noise, influenced by participants' movement, was related to SFOAE magnitudes, indicating that noise levels due to movement are unlikely to explain an attention effect or the lack thereof.

One potentially limiting factor in our experimental design is that the notch width used in the MOCR elicitor noise could allow some intracochlear suppression of the SFOAEs by the elicitor noise. If the effects observed in our study were dominated by intracochlear suppression of the OAE by the spectral edges of the elicitor noise, it could be more difficult to detect acoustic or attentional MOCR effects. Backus and Guinan (2006) used a 2-octave-wide notch in their MOCR elicitor stimuli after empirically assessing the range over which two-tone suppression occurred in their subjects. Although Backus and Guinan (2006) reported some two-tone suppression within a 1-octave range, it would be unlikely to dominate the effects seen with our stimuli: because the overall level of our noise elicitor was equal to the level of the suppressor tone used by Backus and Guinan (2006), but its energy was spread over a broad bandwidth, the elicitor had much less energy within the cochlear filter tuned to the probe. Furthermore, intracochlear suppression would not be expected to produce the roughly equal distribution of significant SFOAE magnitude increases and decreases across attention conditions that contributed to the overall null findings of experiment 1.

III. EXPERIMENT 2

A. Rationale

Our first experiment focused on using tasks with high perceptual load and measuring cochlear function in a high-frequency region, where cochlear gain should be high, to improve the chances of observing an attentional effect. The target tones were relatively brief and were separated by random silent intervals. It is possible that attention lapsed somewhat during these intervals, only to be reengaged by the tone onset. In this experiment, the sequences of tone pips were replaced by long tones that could contain short epochs of AM throughout their duration, eliminating the silent gaps where attention was not required. The continuous auditory stimuli also better matched the visual MOT task used in experiment 1, since both tasks required continuous sustained attention.

This experiment also examined whether the effects of selective attention depend on the presence of an acoustic MOCR elicitor. Previous studies of selective-attention effects on cochlear function, including our experiment 1, have used noise to elicit the MOCR in order to examine whether the effect of the MOCR is modulated by attention (e.g., de Boer and Thornton, 2007; Harkrider and Bowers, 2009; Walsh et al., 2014a,b, 2015). Walsh et al. (2014a) claimed that they observed larger attention effects than previous studies because of the presence of their MOCR elicitor noise. However, it is also possible that corticofugal projections to the superior olivary complex could allow attention to modulate MOC efferent activity in the absence of an acoustic elicitor. Although some studies have examined attention effects on cochlear processing without an acoustic MOCR elicitor (Avan and Bonfils, 1992; Giard et al., 1994; Puel et al., 1988), the findings as they relate to attention are not conclusive: Avan and Bonfils (1992) found no effect of an attentional task on OAEs, whereas Giard et al. (1994) observed larger transient-evoked OAEs when an auditory stimulus was attended. Although studies of distortion-product otoacoustic emission (DPOAE) rapid adaptation (e.g., Smith et al., 2012; Srinivasan et al., 2012, 2014) could be used to examine cochlear responses both before (using the first 25–30 ms) and after MOC-efferent-mediated rapid adaptation begins, no previous study has directly compared the effects of attention measured with and without an acoustic MOCR elicitor in the same participants under otherwise identical conditions.

B. Method

1. Overview

Similar to experiment 1, participants were presented with a series of auditory and visual stimuli that remained identical across three attentional conditions. SFOAEs evoked by tones at a fixed frequency were used to assess cochlear function during the three conditions. The task stimuli consisted of continuous low- and high-frequency tones as well as a set of visual MOT stimuli. During a run, the low- and high-frequency tones were amplitude modulated for a short duration at random intervals. Participants completed the same attentional conditions (AL, AH, and V) as in experiment 1. When attending to the tones, participants were required to respond with a keypress as soon as AM occurred on the tone they were attending. The participants completed the three selective attention conditions with and without a notched-noise MOCR elicitor to examine whether any attention effects are influenced by acoustic activation of the MOCR. The order in which participants completed the attentional conditions was counterbalanced to control for order effects. As in experiment 1, the effect of attention was examined by comparing the levels of the SFOAE evoked by the same stimulus across the attention and MOCR elicitor conditions.

2. Participants

A total of 20 NH participants were recruited for this experiment. Participants needed to pass both an SFOAE screening procedure and training on the behavioral task to be included in the experiment. Five listeners (1 male, 4 female) were excluded from the sample because they did not have SFOAEs with a sufficiently high SNR around the probe frequency used to measure the effect of selective attention. The remaining 15 participants (11 female, 4 male), aged 18–33 yr (mean: 23.5), were included in the final dataset. None of the listeners had previously taken part in experiment 1.

3. Stimuli and procedure

The visual stimuli were identical to those used in experiment 1. The bottom panels of Fig. 8 show the timing of the visual stimulus presentation relative to the auditory stimuli. The auditory stimuli and task were modified to use continuous tones rather than short tone pips. The baseline SFOAE was measured using the suppression paradigm described in experiment 1. Because no effect of probe frequency was found in experiment 1, and to keep data collection short enough that all conditions could be measured in a single session, cochlear function was assessed only in the 750-Hz region, where larger and less variable SFOAE magnitudes were generally observed.

FIG. 8. (Color online) Schematic representation of the stimuli used in experiment 2. The top portion of the figure shows time-frequency representations of the auditory stimuli. The SFOAE probe is represented by the solid lines (dark blue online) at the beginning of each run. Suppressor presentations are shown in black and MOCR elicitor noise is illustrated by the shaded gray regions. AM segments are illustrated by short segments with different brightness (red online) during both the low- and high-frequency tones. The bottom panels show the timing for the visual stimuli.

The task-relevant stimuli began after a 2-s silent gap that followed the offset of the 10-s probe tone. After the silent gap, a 17-s tone at either the low (approximately 0.75 kHz) or the high (4 kHz) frequency was presented. The first 1 s of this tone served as the cue for participants to attend selectively to the low or high frequency (AL or AH condition, respectively). After 1 s, a second, 16-s tone at the other frequency began, so that both tones ended simultaneously. Each tone was presented at 40 dB SPL and could contain multiple segments that were amplitude modulated at full (100%) depth at a rate of 10 Hz for a duration of 300 ms per segment. The timing of the AM segments was randomized for both the high- and low-frequency tones, with a mean interval of 1.7 s between AM segments at a given frequency. The only constraint on the timing of the AM was the inclusion of protected windows that were reserved for extracting SFOAEs. Each tone could have up to five AM segments within a run. Participants were required to attend to one frequency and to respond within 1 s, via button press, to each modulated segment in the attended tone while ignoring the modulations of the other tone.

Attention effects were measured using the stimuli described above, presented alone in one condition and together with an MOCR elicitor in another condition. When present, the elicitor began 500 ms after the onset of the first tone. The elicitor was a 2-s Gaussian white noise that spanned a frequency range from 0.1 to 10 kHz, with two 1-octave spectral notches centered at 0.75 and 4 kHz. The noise was gated on and off every 2 s and was presented at an overall level of 60 dB SPL. A schematic representation of the auditory stimuli is shown in the top half of Fig. 8.

Prior to the experiment, participants completed training as described in experiment 1. During the experiment, a total of six conditions were tested (3 attention conditions × 2 MOCR elicitor conditions). The experimental runs were grouped by condition, and participants completed 15 runs for each condition. The order in which conditions were completed was counterbalanced across participants to attempt to control for any potential order effects. Participants completed all six attention conditions within a single experimental session, during which the probe assembly was not removed from their ears.

The methods for the generation and recording of the stimuli, and the equipment used, were identical to those in experiment 1, except that a Lynx Two-B sound interface (Lynx Studio Technology, Costa Mesa, CA) was used to convert signals between the analog and digital domains. The sound delivery system was calibrated with a type 4153 artificial ear and a 2-cc coupler (Bruel & Kjaer NA, Duluth, GA) to verify the output sound pressure in an average ear canal. FPL calibration was not used, because SFOAEs were only measured at the nominal frequency of 0.75 kHz, for which standing waves do not affect the calibration.

4. SFOAE analysis

The emission-extraction technique was the same as in experiment 1, with adjustments to the averaging windows to accommodate the continuous auditory stimuli. The 16 s of the low-frequency tone presented during the behavioral-task portion of the experiment was divided into four 4-s segments per run. The recordings for both the baseline SFOAE measurement and the task stimuli were averaged across the accepted runs. The same heterodyne procedure described in experiment 1 was used to estimate the ear-canal sound pressure during the segments with the suppressor tone present. Next, the mean of this sound pressure was subtracted from the 4-s segments corresponding to the behavioral task stimuli, yielding the SFOAE residual magnitude and phase. For each participant, the mean emission magnitude was calculated from a 300-ms window centered temporally in the averaged 4-s task stimulus waveform. This window was chosen because the AM segments were distributed such that they never occurred during this portion of the waveform, across 4-s blocks and across runs. Examples of the extracted SFOAE magnitude and phase from a single participant are shown in Fig. 9. The same bootstrapping procedure as outlined in Sec. II B 8 was used to evaluate whether significant shifts in SFOAE magnitude occurred at the level of individual participants.

FIG. 9. (Color online) Example OAE data extracted from a sample participant in experiment 2. Left panels show magnitude and phase during the baseline measurement of SFOAE. In the right panels, extracted SFOAEs are shaded gray during the regions in which AM can occur and are shaded in color during the analysis window. Fluctuations in level due to averaging segments of the tone containing AM can be clearly seen in the gray regions. Different colored traces correspond to differing attention conditions. Note that in the left panels SFOAE magnitudes for each condition overlap but diverge when this participant performs the task (as shown in the right panels).

C. Results

1. Behavioral results

We quantified behavioral performance in experiment 2 using the same methodology as in experiment 1. Across listeners, auditory task performance ranged from d′ values of 0.96 to 4.65 (the latter representing perfect performance after adjustment for cases with a 0% false-alarm or 100% hit rate), with mean values of 2.78 and 2.88 for the AL and AH conditions, respectively. In conditions with the MOCR elicitor present, performance in the auditory tasks was poorer overall, with mean d′ values of 1.59 and 2.05 for the AL and AH conditions, respectively. A repeated-measures ANOVA with factors of MOCR elicitor (present or absent) and attended frequency (low or high) revealed no significant effect of attended frequency [F(1,14) = 1.14, p = 0.304]. There was a significant main effect of the MOCR elicitor on behavioral performance [F(1,14) = 34.5, p < 0.001], confirming poorer performance when the MOCR elicitor was present. There was no significant interaction between the elicitor and attention conditions [F(1,14) = 1.4, p = 0.256].

Visual-task performance was analyzed separately from the auditory-task performance. It ranged from 65% to 96% correct (3.25 to 4.8 of 5 targets correctly identified, on average), with a mean of 80% correct, and was nearly identical between the elicitor-present and elicitor-absent conditions.

2. SFOAE results

Figure 10 shows the mean levels of the SFOAE extracted from the low-frequency tone for each attention condition. Grayscale bars behind each colored bar show the SFOAE levels when the elicitor noise was absent. A repeated-measures ANOVA with factors MOCR elicitor and attention condition revealed a significant effect of the MOCR elicitor [F(1,14) = 35.1, p < 0.001], with lower SFOAE magnitudes when the elicitor was present. However, there was no main effect of attention condition on SFOAE magnitudes [F(2,28) = 2.17, p = 0.133] and no interaction [F(2,28) = 1.40, p = 0.263], suggesting that attention does not modulate SFOAE magnitudes either with or without an acoustic elicitor. There were no significant effects of, or interactions between, attention and MOCR elicitor presence on the baseline SFOAE or on the noise floor recorded during each measurement [F < 1.39, p > 0.258].
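
The group-level test reported here corresponds to a two-way repeated-measures ANOVA. A minimal sketch using statsmodels is shown below; it assumes a long-format table with hypothetical column names and is not the authors' actual analysis pipeline.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM


# Assumed long-format data: one row per participant x condition, with
# columns 'subject', 'elicitor' ('present'/'absent'), 'attention'
# ('AL'/'AH'/'V'), and 'sfoae_db' (extracted SFOAE magnitude in dB SPL).
def rm_anova(df: pd.DataFrame):
    """Two-way repeated-measures ANOVA on SFOAE magnitude with
    within-subject factors MOCR elicitor and attention condition."""
    model = AnovaRM(df, depvar='sfoae_db', subject='subject',
                    within=['elicitor', 'attention'])
    return model.fit()


# print(rm_anova(df))  # F and p values for both main effects and the interaction
```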

FIG. 10.

(Color online) Mean magnitudes across participants of SFOAE extracted from the behavioral task stimuli. Colored bars indicate different attention conditions. Gray bars behind each colored bar indicate SFOAE magnitudes when the MOCR elicitor was not present for the same attentional condition. Error bars denote ±1 standard error of the mean.

To examine the potential role of individual differences in the overall null effect, an individual-participant analysis was carried out using the bootstrap technique described in experiment 1. Figure 11 summarizes this analysis. Most participants did not show significant shifts in OAE magnitude across attention conditions, and those who did show significant effects did not exhibit them in a consistent direction. This was true both with the MOCR elicitor present (noise +) and with it absent (noise −) (Fig. 11).
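
The individual-level bootstrap can be sketched as follows. The resampling unit (per-run SFOAE magnitudes), the number of resamples, and the percentile criterion are assumptions for illustration; they follow the procedure of experiment 1 only in outline.

```python
import numpy as np

rng = np.random.default_rng(1)


def bootstrap_shift(mags_a, mags_b, n_boot=10_000, alpha=0.05):
    """Bootstrap test for a shift in SFOAE magnitude between two attention
    conditions within one participant.

    mags_a, mags_b: arrays of per-run SFOAE magnitudes (dB) for the two
    conditions. Runs are resampled with replacement within each condition,
    the difference of means is recomputed, and the shift is flagged as
    significant when the (1 - alpha) percentile interval excludes zero.
    """
    mags_a, mags_b = np.asarray(mags_a), np.asarray(mags_b)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.choice(mags_a, size=mags_a.size, replace=True)
        b = rng.choice(mags_b, size=mags_b.size, replace=True)
        diffs[i] = a.mean() - b.mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return (lo > 0) or (hi < 0), (lo, hi)
```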

FIG. 11.

(Color online) Bootstrapping summary data for experiment 2. Bars are as described in Fig. 5. The top panel shows results with the MOCR elicitor absent (noise −); the bottom panel shows results with the MOCR elicitor present (noise +).

3. Relationship between behavioral and SFOAE data

Results of the correlational analysis are shown in Fig. 12. There were no significant correlations between performance in the auditory conditions and the magnitudes of the SFOAEs extracted from those trials [|r(13)| < 0.50, p > 0.056], for either MOCR elicitor condition. There was also no significant correlation between performance on the visual task and SFOAE magnitude with the MOCR elicitor present [r(13) = −0.42, p = 0.121] or absent [r(13) = 0.13, p = 0.637].
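
Each correlation reported here is a Pearson correlation across the 15 participants of experiment 2 and therefore carries 13 degrees of freedom. A minimal sketch, with hypothetical variable names, is:

```python
from scipy.stats import pearsonr


def performance_sfoae_correlation(performance, sfoae_db):
    """Pearson correlation between behavioral performance (d' or percent
    correct) and SFOAE magnitude (dB) across participants; with n = 15
    participants the test has n - 2 = 13 degrees of freedom, matching the
    r(13) notation used in the text."""
    r, p = pearsonr(performance, sfoae_db)
    return r, p
```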

FIG. 12.

(Color online) Correlational analysis between behavioral performance in experiment 2 and SFOAE magnitudes extracted from the task stimuli. Data and lines as in Fig. 6.

D. Discussion

This experiment examined whether the presence of an MOCR elicitor noise influenced any effect of selective attention on cochlear responses, and whether sustained attention (via a continuous probe tone) might better reveal attentional modulation of cochlear function. The MOCR elicitor produced the expected reduction in SFOAE magnitude when present, but we found no evidence of attentional modulation in either the individual or the group data, with or without the MOCR elicitor noise. The trend in the correlational analysis toward lower SFOAE magnitudes with better performance on the visual task might be consistent with the reduction in SFOAE observed by Beim et al. (2018), but it was not statistically significant, even before any correction for multiple comparisons. Moreover, the reduction in SFOAE magnitude during the V condition reported by Beim et al. (2018) was not evident in the bootstrap analysis or in the data averaged across attention conditions. Taken as a whole, the data from this experiment suggest that MOCR effects were detectable in our participants while they performed a perceptual task, but that these effects did not differ systematically across attention conditions and were not systematically related to task performance. This outcome suggests that the use of continuous stimuli did not improve our ability to detect attentional changes, as the overall pattern of results mirrors that of experiment 1.

IV. GENERAL DISCUSSION

In this study we conducted two experiments designed to follow up on the study by Beim et al. (2018), in which attentional effects on cochlear processing were investigated using SFOAEs. In experiment 1, we incorporated perceptual load theory into the experimental design, with the hypothesis that a task imposing a high perceptual load would elicit early attentional effects. We also examined whether attentional effects on the MOCR differ at higher frequencies, where larger cochlear gain may provide a greater dynamic range for MOCR effects. Our results with a low-frequency probe tone, showing no effect of attention, replicated the findings of the replication sample in Beim et al. (2018). The higher-frequency SFOAE measurements also failed to reveal any significant effects of selective attention. Finally, we observed no differences in measured background noise between attention conditions, and no evidence of a relationship between noise levels and SFOAE magnitudes, suggesting that participant-generated noise could account neither for the significant effects in some individuals nor for the lack of an effect at the group level.

Our second experiment sought to answer the questions of whether more sustained attention or the absence of the acoustic MOCR elicitor would produce observable attentional effects on SFOAEs. We observed no differences in the pattern of results between the continuous stimuli used in experiment 2 and the discrete stimuli used in experiment 1. The acoustic MOCR elicitor produced the expected decrease in SFOAE levels when present, but attentional effects were not observed either with or without the elicitor.

In both experiments, we observed significant shifts in SFOAE magnitude between attention conditions in a minority of participants. The direction of these shifts was roughly evenly split for each comparison, making it difficult to ascribe any functional importance to them, and the majority of participants showed no significant shifts in SFOAE magnitude between attention conditions. This outcome suggests that the MOC efferents may not play a role in selective attention. The absence of low-level attentional effects is also consistent with work showing no evidence of attentional modulation of electrophysiological signals at the level of the brainstem (Holmes et al., 2018; Kuk and Abbas, 1989; Ruggles et al., 2018; Varghese et al., 2015). The electrophysiological evidence for attentional modulation of peripheral auditory processing remains inconclusive, however, as an animal model demonstrated a reduction in compound action potentials when animals attended to visual stimuli, relative to auditory stimuli (Delano et al., 2007). These differences could reflect species differences in attentional effects, but may also simply arise from methodological differences between studies. Given the large inter-subject variability observed in humans under similar conditions (Beim et al., 2018), it would be important to replicate the results reported by Delano et al. (2007) to demonstrate a robust effect in a different sample of animals.

The findings of this study contradict those of several studies that reported an effect of selective attention on cochlear responses (Maison et al., 2001; Srinivasan et al., 2012, 2014; Walsh et al., 2014a,b, 2015; Wittekindt et al., 2014), but agree with other OAE studies that did not find such effects (Avan and Bonfils, 1992; Michie et al., 1996). Part of the disagreement within the existing literature may be due to differences in stimulus and task design, but may also reflect the smaller sample sizes typical of earlier studies.

The results of the current study are in apparent disagreement with recent work by Walsh et al. (2014a,b, 2015), but there are important differences between our study and this previous work that could explain the discrepancy. Perhaps most critically, the studies by Walsh et al. compared conditions in which participants were not engaged in any task with conditions in which participants selectively attended to task-related stimuli. The differences in OAE magnitudes could therefore be due to differences in general arousal caused by engagement, or lack of engagement, in a task. In contrast, our study compared only conditions in which participants were engaged in a task, which would not be expected to produce strong differences in arousal. Along the same lines, Francis et al. (2018) suggested that the differences in physiological noise in the ear canal observed by Walsh et al. (2014a,b) may have been due to differences in participant movement between conditions. Our study showed no differences in physiological noise between attention conditions, which could indicate that any subtle participant movements were similar across all three attention conditions, as might be expected given similar levels of arousal. However, because we had to filter out low-frequency noise (<250 Hz), we cannot rule out the possibility that physiological noise below this frequency varied between conditions.

Studies of cochlear function that used DPOAEs have reported attention effects that differ in direction. For instance, Srinivasan et al. (2012, 2014) reported decreases in DPOAE magnitude with auditory attention, whereas Wittekindt et al. (2014) observed decreases in DPOAE magnitude with visual attention. This discrepancy in the direction of DPOAE effects has often been explained by the fact that MOCR activation can produce both increases and decreases in DPOAE magnitude in humans and in animals (e.g., Maison and Liberman, 2000; Müller et al., 2005). An alternative interpretation is that significant shifts of DPOAE magnitude in different directions reflect noise in OAE measurements more generally. The current study examined a large sample of listeners (45 participants, 90 measurements of SFOAE during behavioral tasks for each attention condition) and found no statistically significant shifts in SFOAE magnitude under different conditions of selective attention, while nevertheless revealing large individual differences. Given this large variability and the small (or non-existent) effect size, small sample sizes could well produce spurious false-positive results.

There are many possible sources of noise in SFOAE measurements. The current study attempted to control several sources of measurement variability by using only high-SNR measurements of SFOAE, by optimizing stimulus parameters individually for each participant, and by calibrating stimuli using forward pressure level so that the sound level reaching the cochlea was as invariant as possible across listeners. In many participants, these controls allowed us to reliably detect small (<1 dB) changes in OAE magnitude. Being able to reliably detect such small changes is important, as they may reflect larger changes in the afferent signal (Puria et al., 1996). Given this sensitivity to small differences, and the much larger inter-individual differences, our ability to detect meaningful attentional effects does not appear to have been limited by the sensitivity of our measure.

We also used tasks with a high perceptual load, because high load has been shown in the attention literature to produce evidence of attentional selection earlier in cortical processing streams (Schwartz et al., 2005) and at subcortical sites (O'Connor et al., 2002). The perceptual-load manipulations used in the current study are similar to those described as high load in studies of visual attention. For instance, Schwartz et al. (2005) used a conjunction of two visual features in their high-load task and a single feature in their low-load task. Our auditory task required the conjunction of two features (tone frequency and the presence of AM) and contained numerous distractor stimuli. The visual task required attending to multiple targets, and once the task began there were no salient differences between targets and distractors to help observers attend to the correct stimuli. While it may be possible to increase perceptual load even further, the evidence from this study, as well as from our previous work (Beim et al., 2018), suggests that any resulting increase in attentional effects would still be lost in the variability of the attentional MOCR effects across participants.

We analyzed the performance data in conjunction with SFOAE magnitudes to test for any relationship between these variables. For example, if greater cochlear gain reduction were applied while attending to a visual stimulus, we would expect a negative relationship between task performance and SFOAE magnitude (i.e., the best performance accompanied by the smallest SFOAE magnitudes) for the visual condition, but not necessarily for either of the auditory conditions. The correlational analyses were not consistent with this prediction. We also used the performance data to analyze SFOAEs from correct trials only (not shown), but this did not reveal a pattern of results different from that obtained using SFOAEs from both correct and incorrect trials.

Although relatively large, our sample of participants was largely homogeneous, in that most were young, normal-hearing, female university students. Future work with a more diverse range of participants may shed light on the sources of variability observed in the current dataset.

In conclusion, across two experiments designed to optimize the chances of observing an effect of selective attention on cochlear function, we did not observe any robust changes in SFOAEs that could provide such evidence. The results show that even with reliable measurements within participants and a relatively large sample size, it was not possible to detect any attentional effects at the level of the cochlea using SFOAEs.

ACKNOWLEDGMENTS

This work was supported by National Institutes of Health Grant No. R01 DC015462 (M.W.). The authors declare no competing financial interests.

References

  • 1. Avan, P. , and Bonfils, P. (1992). “ Analysis of possible interactions of an attentional task with cochlear micromechanics,” Hear. Res. 57, 269–275. 10.1016/0378-5955(92)90156-H [DOI] [PubMed] [Google Scholar]
  • 2. Backus, B. C. , and Guinan, J. J. (2006). “ Time-course of the human medial olivocochlear reflex,” J. Acoust. Soc. Am. 119, 2889–2904. 10.1121/1.2169918 [DOI] [PubMed] [Google Scholar]
  • 3. Beim, J. A. , Oxenham, A. J. , and Wojtczak, M. (2018). “ Examining replicability of an otoacoustic measure of cochlear function during selective attention,” J. Acoust. Soc. Am. 144, 2882–2895. 10.1121/1.5079311 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. de Boer, J. , and Thornton, A. R. D. (2007). “ Effect of subject task on contralateral suppression of click evoked otoacoustic emissions,” Hear. Res. 233, 117–123. 10.1016/j.heares.2007.08.002 [DOI] [PubMed] [Google Scholar]
  • 5. Brainard, D. H. (1997). “ The psychophysics toolbox,” Spat. Vis. 10, 433–436. 10.1163/156856897X00357 [DOI] [PubMed] [Google Scholar]
  • 6. Broadbent, D. E. (1958). “ Selective listening to speech,” in Perception and Communication ( Pergamon Press, Elmsford: ), pp. 11–35. [Google Scholar]
  • 7. Cherry, E. C. (1953). “ Some experiments on the recognition of speech, with one and with two ears,” J. Acoust. Soc. Am. 25, 975–979. 10.1121/1.1907229 [DOI] [Google Scholar]
  • 8. Choi, I. , Rajaram, S. , Varghese, L. A. , and Shinn-Cunningham, B. G. (2013). “ Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography,” Front. Hum. Neurosci. 7, 115, 1–19 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Delano, P. H. , Elgueda, D. , Hamame, C. M. , and Robles, L. (2007). “ Selective attention to visual stimuli reduces cochlear sensitivity in chinchillas,” J. Neurosci. 27, 4146–4153. 10.1523/JNEUROSCI.3702-06.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Dewey, J. B. , and Dhar, S. (2017). “ Profiles of stimulus-frequency otoacoustic emissions from 0.5 to 20 kHz in humans,” J. Assoc. Res. Otolaryngol. 18, 89–110. 10.1007/s10162-016-0588-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Ding, N. , and Simon, J. Z. (2012). “ Neural coding of continuous speech in auditory cortex during monaural and dichotic listening,” J. Neurophysiol. 107, 78–89. 10.1152/jn.00297.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Francis, N. A. , Zhao, W. , and Guinan, J. J. (2018). “ Auditory attention reduced ear-canal noise in humans by reducing subject motion, not by medial olivocochlear efferent inhibition: Implications for measuring otoacoustic emissions during a behavioral task,” Front. Syst. Neurosci. 12(42), 1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Froehlich, P. , Collet, L. , Chanal, J.-M. , and Morgon, A. (1990). “ Variability of the influence of a visual task on the active micromechanical properties of the cochlea,” Brain Res. 508, 286–288. 10.1016/0006-8993(90)90408-4 [DOI] [PubMed] [Google Scholar]
  • 14. Froehlich, P. , Collet, L. , and Morgon, A. (1993). “ Transiently evoked otoacoustic emission amplitudes change with changes of directed attention,” Physiol. Behav. 53, 679–682. 10.1016/0031-9384(93)90173-D [DOI] [PubMed] [Google Scholar]
  • 15. Giard, M.-H. , Collet, L. , Bouchet, P. , and Pernier, J. (1994). “ Auditory selective attention in the human cochlea,” Brain Res. 633, 353–356. 10.1016/0006-8993(94)91561-X [DOI] [PubMed] [Google Scholar]
  • 16. Goodman, S. S. , Mertes, I. B. , Lewis, J. D. , and Weissbeck, D. K. (2013). “ Medial olivocochlear-induced transient-evoked otoacoustic emission amplitude shifts in individual subjects,” J. Assoc. Res. Otolaryngol. 14, 829–842. 10.1007/s10162-013-0409-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Gorga, M. P. , Neely, S. T. , Dierking, D. M. , Kopun, J. , Jolkowski, K. , Groenenboom, K. , Tan, H. , and Stiegemann, B. (2007). “ Low-frequency and high-frequency cochlear nonlinearity in humans,” J. Acoust. Soc. Am. 122, 1671–1680. 10.1121/1.2751265 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Green, D. , and Swets, J. A. (1966). Signal Detection Theory and Psychophysics ( Wiley, New York: ). [Google Scholar]
  • 19. Guinan, J. J. (2006). “ Olivocochlear efferents: Anatomy, physiology, function, and the measurement of efferent effects in humans,” Ear Hear. 27, 589–607. 10.1097/01.aud.0000240507.83072.e7 [DOI] [PubMed] [Google Scholar]
  • 20. Guinan, J. J. , Backus, B. C. , Lilaonitkul, W. , and Aharonson, V. (2003). “ Medial olivocochlear efferent reflex in humans: Otoacoustic Emission (OAE) measurement issues and the advantages of stimulus frequency OAEs,” J. Assoc. Res. Otolaryngol. 4, 521–540. 10.1007/s10162-002-3037-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Harkrider, A. W. , and Bowers, C. D. (2009). “ Evidence for a cortically mediated release from inhibition in the human cochlea,” J. Am. Acad. Audiol. 20, 208–215. 10.3766/jaaa.20.3.7 [DOI] [PubMed] [Google Scholar]
  • 22. Holmes, E. , Purcell, D. W. , Carlyon, R. P. , Gockel, H. E. , and Johnsrude, I. S. (2018). “ Attentional modulation of envelope-following responses at lower (93-109 Hz) but not higher (217-233 Hz) modulation rates,” J. Assoc. Res. Otolaryngol. 19, 83–97. 10.1007/s10162-017-0641-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Kemp, D. T. (1978). “ Stimulated acoustic emissions from within the human auditory system,” J. Acoust. Soc. Am. 64, 1386–1391. 10.1121/1.382104 [DOI] [PubMed] [Google Scholar]
  • 24. Kuk, F. K. , and Abbas, P. J. (1989). “ Effects of attention on the auditory evoked potentials recorded from the vertex (ABR) and the promontory (CAP) of human listeners,” Neuropsychologia 27, 665–673. 10.1016/0028-3932(89)90111-5 [DOI] [PubMed] [Google Scholar]
  • 25. Lavie, N. (1995). “ Perceptual load as a necessary condition for selective attention,” J. Exp. Psychol. Hum. Percept. Perform. 21, 451–468. 10.1037/0096-1523.21.3.451 [DOI] [PubMed] [Google Scholar]
  • 26. Lavie, N. (2005). “ Distracted and confused?: Selective attention under load,” Trends Cogn. Sci. 9, 75–82. 10.1016/j.tics.2004.12.004 [DOI] [PubMed] [Google Scholar]
  • 27. Lavie, N. , Hirst, A. , de Fockert, J. W. , and Viding, E. (2004). “ Load theory of selective attention and cognitive control,” J. Exp. Psychol. Gen. 133, 339–354. 10.1037/0096-3445.133.3.339 [DOI] [PubMed] [Google Scholar]
  • 28. Lavie, N. , and Tsal, Y. (1994). “ Perceptual load as a major determinant of the locus of selection in visual attention,” Percept. Psychophys. 56, 183–197. 10.3758/BF03213897 [DOI] [PubMed] [Google Scholar]
  • 29. Lilaonitkul, W. , and Guinan, J. J. (2012). “ Frequency tuning of medial-olivocochlear-efferent acoustic reflexes in humans as functions of probe frequency,” J. Neurophysiol. 107, 1598–1611. 10.1152/jn.00549.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Lukas, J. H. (1980). “ Human auditory attention: The olivocochlear bundle may function as a peripheral filter,” Psychophysiology 17, 444–452. 10.1111/j.1469-8986.1980.tb00181.x [DOI] [PubMed] [Google Scholar]
  • 31. Maison, S. , Micheyl, C. , and Collet, L. (2001). “ Influence of focused auditory attention on cochlear activity in humans,” Psychophysiology 38, 35–40. 10.1111/1469-8986.3810035 [DOI] [PubMed] [Google Scholar]
  • 32. Maison, S. F. , and Liberman, M. C. (2000). “ Predicting vulnerability to acoustic injury with a noninvasive assay of olivocochlear reflex strength,” J. Neurosci. 20, 4701–4707. 10.1523/JNEUROSCI.20-12-04701.2000 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Makovski, T. , and Jiang, Y. V. (2009). “ The role of visual working memory in attentive tracking of unique objects,” J. Exp. Psychol. Hum. Percept. Perform. 35, 1687–1697. 10.1037/a0016453 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Mesgarani, N. , and Chang, E. F. (2012). “ Selective cortical representation of attended speaker in multi-talker speech perception,” Nature 485, 233–236. 10.1038/nature11020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Michie, P. T. , LePage, E. L. , Solowij, N. , Haller, M. , and Terry, L. (1996). “ Evoked otoacoustic emissions and auditory selective attention,” Hear. Res. 98, 54–67. 10.1016/0378-5955(96)00059-7 [DOI] [PubMed] [Google Scholar]
  • 36. Miller, J. (1987). “ Priming is not necessary for selective-attention failures: Semantic effects of unattended, unprimed letters,” Percept. Psychophys. 41, 419–434. 10.3758/BF03203035 [DOI] [PubMed] [Google Scholar]
  • 37. Müller, J. , Janssen, T. , Heppelmann, G. , and Wagner, W. (2005). “ Evidence for a bipolar change in distortion product otoacoustic emissions during contralateral acoustic stimulation in humans,” J. Acoust. Soc. Am. 118, 3747–3756. 10.1121/1.2109127 [DOI] [PubMed] [Google Scholar]
  • 38. Neely, S. T. , and Gorga, M. P. (1998). “ Comparison between intensity and pressure as measures of sound level in the ear canal,” J. Acoust. Soc. Am. 104, 2925–2934. 10.1121/1.423876 [DOI] [PubMed] [Google Scholar]
  • 39. Neely, S. T. , and Liu, Z. (1994). “ EMAV: Otoacoustic emission average,” Technical Memo No. 17, Boys Town National Research Hospital, Omaha, NE.
  • 40. O'Connor, D. H. , Fukui, M. M. , Pinsk, M. A. , and Kastner, S. (2002). “ Attention modulates responses in the human lateral geniculate nucleus,” Nat. Neurosci. 5, 1203–1209. 10.1038/nn957 [DOI] [PubMed] [Google Scholar]
  • 41. O'Sullivan, J. A. , Power, A. J. , Mesgarani, N. , Rajaram, S. , Foxe, J. J. , Shinn-Cunningham, B. G. , Slaney, M. , Shamma, S. A. , and Lalor, E. C. (2015). “ Attentional selection in a cocktail party environment can be decoded from single-trial EEG,” Cereb. Cortex 25, 1697–1706. 10.1093/cercor/bht355 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Picton, T. W. , and Hillyard, S. A. (1974). “ Human auditory evoked potentials II: Effects of attention,” Electroencephalogr. Clin. Neurophysiol. 36, 191–200. 10.1016/0013-4694(74)90156-4 [DOI] [PubMed] [Google Scholar]
  • 43. Picton, T. W. , Hillyard, S. A. , Galambos, R. , and Schiff, M. (1971). “ Human auditory attention: A central or peripheral process?,” Science 173, 351–353. 10.1126/science.173.3994.351 [DOI] [PubMed] [Google Scholar]
  • 44. Puel, J.-L. , Bonfils, P. , and Pujol, R. (1988). “ Selective attention modifies the active micromechanical properties of the cochlea,” Brain Res. 447, 380–383. 10.1016/0006-8993(88)91144-4 [DOI] [PubMed] [Google Scholar]
  • 45. Puria, S. , Guinan, J. J. , and Liberman, M. C. (1996). “ Olivocochlear reflex assays: Effects of contralateral sound on compound action potentials versus ear-canal distortion products,” J. Acoust. Soc. Am. 99, 500–507. 10.1121/1.414508 [DOI] [PubMed] [Google Scholar]
  • 46. Pylyshyn, Z. W. , and Storm, R. W. (1988). “ Tracking multiple independent targets: Evidence for a parallel tracking mechanism,” Spat. Vis. 3, 179–197. 10.1163/156856888X00122 [DOI] [PubMed] [Google Scholar]
  • 47. Recio-Spinoso, A. , and Oghalai, J. S. (2017). “ Mechanical tuning and amplification within the apex of the guinea pig cochlea,” J. Physiol. 595, 4549–4561. 10.1113/JP273881 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Rees, G. , Frith, C. D. , and Lavie, N. (1997). “ Modulating irrelevant motion perception by varying attentional load in an unrelated task,” Science 278, 1616–1619. 10.1126/science.278.5343.1616 [DOI] [PubMed] [Google Scholar]
  • 49. Robles, L. , and Ruggero, M. A. (2001). “ Mechanics of the mammalian cochlea,” Physiol. Rev. 81, 1305–1352. 10.1152/physrev.2001.81.3.1305 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Ruggles, D. R. , Tausend, A. N. , Shamma, S. A. , and Oxenham, A. J. (2018). “ Cortical markers of auditory stream segregation revealed for streaming based on tonotopy but not pitch,” J. Acoust. Soc. Am. 144, 2424–2433. 10.1121/1.5065392 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Scheperle, R. A. , Goodman, S. S. , and Neely, S. T. (2011). “ Further assessment of forward pressure level for in situ calibration,” J. Acoust. Soc. Am. 130, 3882–3892. 10.1121/1.3655878 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52. Scheperle, R. A. , Neely, S. T. , Kopun, J. G. , and Gorga, M. P. (2008). “ Influence of in situ, sound-level calibration on distortion-product otoacoustic emission variability,” J. Acoust. Soc. Am. 124, 288–300. 10.1121/1.2931953 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Schwartz, S. , Vuilleumier, P. , Hutton, C. , Maravita, A. , Dolan, R. J. , and Driver, J. (2005). “ Attentional load and sensory competition in human vision: Modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field,” Cereb. Cortex 15, 770–786. 10.1093/cercor/bhh178 [DOI] [PubMed] [Google Scholar]
  • 54. Shera, C. A. , and Guinan, J. J. (1999). “ Evoked otoacoustic emissions arise by two fundamentally different mechanisms: A taxonomy for mammalian OAEs,” J. Acoust. Soc. Am. 105, 782–798. 10.1121/1.426948 [DOI] [PubMed] [Google Scholar]
  • 55. Siegel, J. H. (1994). “ Ear-canal standing waves and high-frequency sound calibration using otoacoustic emission probes,” J. Acoust. Soc. Am. 95, 2589–2597. 10.1121/1.409829 [DOI] [Google Scholar]
  • 56. Smith, D. W. , Aouad, R. K. , and Keil, A. (2012). “ Cognitive task demands modulate the sensitivity of the human cochlea,” Front. Psychol. 3, 30, 1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Srinivasan, S. , Keil, A. , Stratis, K. , Osborne, A. F. , Cerwonka, C. , Wong, J. , Rieger, B. L. , Polcz, V. , and Smith, D. W. (2014). “ Interaural attention modulates outer hair cell function,” Eur. J. Neurosci. 40, 3785–3792. 10.1111/ejn.12746 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58. Srinivasan, S. , Keil, A. , Stratis, K. , Woodruff Carr, K. L. , and Smith, D. W. (2012). “ Effects of cross-modal selective attention on the sensory periphery: Cochlear sensitivity is altered by selective attention,” Neuroscience 223, 325–332. 10.1016/j.neuroscience.2012.07.062 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59. Stroop, J. R. (1935). “ Studies of interference in serial verbal reactions,” J. Exp. Psychol. 18, 643–662. 10.1037/h0054651 [DOI] [Google Scholar]
  • 60. Treisman, A. M. (1969). “ Strategies and models of selective attention,” Psychol. Rev. 76, 282–299. 10.1037/h0027242 [DOI] [PubMed] [Google Scholar]
  • 61. Varghese, L. , Bharadwaj, H. M. , and Shinn-Cunningham, B. G. (2015). “ Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses,” Brain Res. 1626, 146–164. 10.1016/j.brainres.2015.06.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Veuillet, E. , Collet, L. , and Duclaux, R. (1991). “ Effect of contralateral acoustic stimulation on active cochlear micromechanical properties in human subjects: Dependence on stimulus variables,” J. Neurophysiol. 65, 724–735. 10.1152/jn.1991.65.3.724 [DOI] [PubMed] [Google Scholar]
  • 63. Walsh, K. P. , Pasanen, E. G. , and McFadden, D. (2014a). “ Selective attention reduces physiological noise in the external ear canals of humans II: Visual attention,” Hear. Res. 312, 160–167. 10.1016/j.heares.2014.03.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64. Walsh, K. P. , Pasanen, E. G. , and McFadden, D. (2014b). “ Selective attention reduces physiological noise in the external ear canals of humans I: Auditory attention,” Hear. Res. 312, 143–159. 10.1016/j.heares.2014.03.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65. Walsh, K. P. , Pasanen, E. G. , and McFadden, D. (2015). “ Changes in otoacoustic emissions during selective auditory and visual attention,” J. Acoust. Soc. Am. 137, 2737–2757. 10.1121/1.4919350 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Wittekindt, A. , Kaiser, J. , and Abel, C. (2014). “ Attentional modulation of the inner ear: A combined otoacoustic emission and EEG study,” J. Neurosci. 34, 9995–10002. 10.1523/JNEUROSCI.4861-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67. Zhao, W. , and Dhar, S. (2012). “ Frequency tuning of the contralateral medial olivocochlear reflex in humans,” J. Neurophysiol. 108, 25–30. 10.1152/jn.00051.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68. Zion Golumbic, E. M. , Ding, N. , Bickel, S. , Lakatos, P. , Schevon, C. A. , McKhann, G. M. , Goodman, R. R. , Emerson, R. , Mehta, A. D. , Simon, J. Z. , Poeppel, D. , and Schroeder, C. E. (2013). “ Mechanisms underlying selective neuronal tracking of attended speech at a ‘cocktail party’,” Neuron 77, 980–991. 10.1016/j.neuron.2012.12.037 [DOI] [PMC free article] [PubMed] [Google Scholar]
