Author manuscript; available in PMC: 2020 Apr 1.
Published in final edited form as: Int J Audiol. 2019 Jan 25;58(4):213–223. doi: 10.1080/14992027.2018.1551632

Auditory Event-Related Potentials and Function of the Medial Olivocochlear Efferent System in Children with Auditory Processing Disorders

Thierry Morlet a,b,c,*, Kyoko Nagao a,b,d, L Ashleigh Greenwood a, R Matthew Cardinale a, Rebecca G Gaffney a, Tammy Riegner e
PMCID: PMC6430672  NIHMSID: NIHMS1520165  PMID: 30682902

Abstract

Objective:

The objectives were to investigate the function of the central auditory pathways and of the medial olivocochlear efferent system (MOCS).

Design:

Event-related potentials (ERP) were recorded following the delivery of the stimulus /da/ in quiet and in ipsilateral, contralateral, and binaural noise conditions and correlated to the results of the auditory processing disorders (APD) diagnostic test battery. MOCS function was investigated by adding ipsilateral, contralateral, and binaural noise to transient evoked otoacoustic emission recordings. Auditory brainstem responses and pure tone audiogram were also evaluated.

Study Sample:

Nineteen children (7 to 12 years old) with APD compared with 24 age-matched controls.

Results:

Otoacoustic emission and ABR characteristics did not differ between groups, whereas ERP latencies were significantly longer, and amplitudes higher, in children with APD than in controls in both quiet and noise conditions. MOCS suppression was also higher in children with APD.

Conclusions:

Findings indicate that children with APD present with neural deficiencies in both challenging and non-challenging listening environments, with delays in several central auditory processes that correlate with their behavioral performance. Meanwhile, their modulation of the auditory periphery under noisy conditions differs from that of control children, showing greater suppression.

Keywords: auditory processing disorder, event-related potentials, olivocochlear efferent system, speech in noise

Introduction

Auditory processing disorder (APD) represents a family of deficits affecting several auditory processes responsible for sound localization and lateralization, auditory discrimination, temporal resolution, masking, integration, and ordering, as well as for auditory performance in the presence of competing or degraded acoustic signals (ASHA 1996), in both children and adults. A feature frequently described in affected children is their failure to hear well in the presence of competing speech or background noise (Bellis 1996; Chermak et al. 1999; Bamiou et al. 2001; Chermak 2002; Muchnik et al. 2004). Most patients have normal hearing as defined by standard audiometric testing, which involves detecting fairly long pure tones and monosyllabic or disyllabic words delivered one at a time in a very quiet environment. Early in life, APD can impair a child’s receptive speech and language development, leading to listening and learning deficits, and its diagnosis is usually not made until these deficits are well established.

The origin of APD is still a matter of debate. A common theory is that these deficits result from impairments in the processing of information in the auditory modality, at the central and/or peripheral level of the auditory system (Jerger and Musiek 2000). Evidence for a central origin of APD includes abnormalities in the cortical development of auditory areas (Cohen et al. 1989), underdeveloped cells in the left hemisphere and corpus callosum or neural immaturity (Chermak 2002), and abnormal asymmetries in the perisylvian region of the temporal lobe with an absence of left hemispheric advantage for this region (Jernigan et al. 1991; Plante et al. 1991). Functionally, children with APD show smaller middle latency responses than control children (Schochat et al. 2010) and delayed Na latency (Purdy et al. 2002). Significantly delayed P300 latencies have been reported as well, both in children (Jirsa and Clontz 1990) and adults with APD (Krishnamurti 2001), compared with normally hearing controls.

A different hypothesis states that APD is not a disorder specific to auditory modality but arises from a more generalized cognitive and/or developmental disorder affecting attention, memory, or language processing (Moore et al. 2010; Moore 2012). In fact, APD frequently co-exists with other disorders such as language disorders and/or learning disorders. This theory remains controversial, however, as others argued that APD and attentional problems do not seem to always be interrelated (Sharma et al. 2009) and that “attention and memory issues do not cause APD in children who (a) have been diagnosed with the disorder using central auditory tests with documented sensitivity and specificity to this disorder, and (b) have had their auditory processing results interpreted in light of any dominating influence of potential comorbid issues” (Weihing et al. 2013).

Deficits in top-down regulation of peripheral processes, not necessarily exclusive of other deficits in the afferent auditory system and/or cognitive functions including attention, have also been suggested as a potential source of APD. Through the efferent fibers of the medial olivocochlear system (MOCS), the brain regulates the processing of sounds by the auditory periphery. When activated acoustically, the MOCS inhibits the activity of the outer hair cells, as shown by a decrease in the level of otoacoustic emissions, with the strongest efferent effect obtained for binaural stimulation, followed by ipsilateral and contralateral stimuli. The inhibitory function of the MOCS can lead to an improvement in the coding of signals embedded in noise, suggesting an anti-masking role. Furthermore, activation of the MOCS in humans has been shown to improve the detection of tones embedded in ipsilateral noise, speech-in-noise intelligibility in normally hearing adults, and speech-in-noise perception in normal children (Lopez-Poveda, 2018 for a review). In these studies, individuals with better performance in noise tended to demonstrate larger suppression of the otoacoustic emissions than those with lower performance. Other studies, however, have shown either a lack of significant correlation (Wagner et al., 2008; Stuart and Butler, 2012) or an inverse correlation (de Boer et al., 2012) between MOCS activity and speech-in-noise performance. These conflicting results regarding whether MOCS activity is involved in speech-in-noise recognition might be explained by methodological differences, such as the protocols used to evaluate MOCS activity with OAEs, the speech perception tasks employed, and the signal-to-noise ratio of the speech material.
Despite the still incomplete understanding of the role of the MOCS in hearing in noise, it is reasonable to hypothesize that an impairment of the MOCS could be associated with APD, since a common feature of APD is the child’s failure to perceive auditory stimuli in the presence of competing speech or background noise. In clinical populations, studies have shown that despite normal hearing thresholds, some children with APD, language impairment, or even dyslexia exhibit a decrease in the function of the MOCS innervating the outer hair cells (Veuillet et al. 1999; 2007; Morlet et al. 2003; Muchnik et al. 2004; Sanches and Carvallo 2006).

These different theories of the origin of APD are not necessarily mutually exclusive, as bottom-up processes influence top-down regulation and vice versa. Moreover, the APD population is particularly heterogeneous, and in all likelihood different impaired mechanisms can lead to the same outcomes. Understanding the underlying mechanisms of APD is further complicated by the fact that many investigations in children have focused on only one specific part or function of the auditory system at a time. In the present study, we attempted to objectively assess the function of both afferent and efferent auditory pathways in children with APD and in a group of typically developing (TD) children. We investigated the function of the central auditory pathways under quiet and noise conditions using obligatory event-related potentials (ERPs). The MOCS function was evaluated in response to ipsilateral, contralateral, and binaural suppressors, and its input/output function was recorded as well. Last, we examined whether any of these objective measures correlated with the behavioral scores obtained to reach the APD diagnosis.

Materials and Methods

Participants

The group of children with APD was composed of 19 children (12 males) between the ages of 7 and 12 years (mean 8.9 years, standard deviation [SD] 1.6). They were referred specifically for an APD evaluation based on APD-related symptoms and were diagnosed by audiologists at Nemours/Alfred I. duPont Hospital for Children using the APD test battery developed from the recommendations of the American Academy of Audiology (AAA; Jerger and Musiek 2000) and the American Speech-Language-Hearing Association (ASHA 2005). The APD test battery includes the following tests: SCAN-3 for Children (Keith 2009), Bamford-Kowal-Bench Sentences in Noise (Bench et al. 1979), Dichotic Digits Test (Musiek 1983), Frequency Pattern Test/Pitch Pattern Sequence Test (Musiek 1994), Staggered Spondaic Words Test (Katz 1962), Random Gap Detection Test (Keith 2000), Phonemic Synthesis Test (PST) (Katz and Fletcher 1998), and Auditory Continuous Performance Test (Keith 1994). Inclusion in the study occurred one to eight weeks after the diagnosis of APD. Children were diagnosed as having APD based on ASHA guidelines (ASHA 2005; i.e., scores two SDs below published normative data in at least two tests).

The control group was composed of 24 TD children (10 males) ages 7 to 12 years (mean 8.9 years, SD 1.5). These children had normal hearing and no reported difficulties in cognitive development or listening complaints. Their speech-in-noise abilities were assessed using the Bamford-Kowal-Bench Sentences in Noise test (Bench et al. 1979; BKB-SIN, 2005), and all showed normal results for their age. Their language and reading levels were judged appropriate for their age and grade based on parental reports and a questionnaire covering their medical and developmental history, family history, and education, attesting to the absence of any diagnosed learning disability or speech delay. In addition, previous learning disabilities and speech-language intervention were exclusion criteria.

The inclusion criteria for both groups were as follows: normal otoscopy, normal tympanometry at the time of the experiments, present transient otoacoustic emissions as defined below, normal pure-tone thresholds (defined as 20 dB HL or better at 0.25, 0.5, 1, 2, 4, 6, and 8 kHz), normal speech reception threshold in quiet (≤ 20 dB), word recognition score for phonetically balanced monosyllabic words in quiet within a normal range (> 84%; Katz and Fletcher 1998), normal ipsilateral middle ear muscle reflex thresholds (MEMRs) at 0.5, 1, 2, and 4 kHz (<95 dB HL for all frequencies), and normal auditory brainstem responses recorded with positive and negative click polarities to rule out the presence of auditory neuropathy spectrum disorder.

The common exclusion criteria for both groups were autism spectrum disorder, attention deficit hyperactivity disorder as the primary diagnosis, conductive or sensorineural hearing loss, neurological problems, or below average intelligence quotient.

A questionnaire was filled out by all parents to collect information regarding each child’s birth and medical history. The study was approved by the Nemours Institutional Review Board and written consent was obtained from both parents and children.

Procedure

The following objective and behavioral tests were administered during three experimental sessions over a period of one month. The first session was postponed for a couple of participants due to abnormal tympanometry. All tests took place in a sound-proof room.

Objective testing

Children were seated comfortably in a reclining chair and watched a silent movie of their choice during objective testing, to minimize attention to the test stimuli.

Tympanometry and middle ear muscle reflexes

Tympanometry was tested in all participants prior to each experimental session to ensure proper sound conduction, using a 226 Hz probe tone swept from −300 daPa to 200 daPa with the automatic pump speed setting, in which the high pump speed (200 daPa/s) automatically slows to the low pump speed (50 daPa/s) around the peak of the tympanogram (Titan, Interacoustics, Eden Prairie, MN, USA). A tympanogram was considered normal if middle-ear pressure was between −100 and 50 daPa, volume between 0.5 and 1.5 ml, and compliance between 0.2 and 1.6 ml. Abnormal tympanometry resulted in postponing the session. The search for ipsilateral MEMRs was conducted between 80 and 100 dB HL at 0.5, 1, 2, and 4 kHz (Titan, Interacoustics). A change in compliance greater than 0.02 mmho was considered a pass. If a threshold was not obtained at or below 100 dB HL, the response was considered absent.

Transient evoked otoacoustic emissions

Transient evoked otoacoustic emissions (TEOAEs) were recorded with an 80 μs unfiltered electrical pulse presented at a rate of 50/s, at 80 dB peSPL (± 1.5 dB), using SmartTrOAE (Intelligent Hearing Systems, Miami, FL, USA). A non-linear method was used: the stimulus consisted of groups of four clicks, three of equal amplitude and a fourth three times larger and of opposite polarity. The analysis time was 20 milliseconds (ms), with the initial 2.5 ms software-zeroed to eliminate stimulus artifact; the ear-canal signal was band-pass filtered from 0.5 to 6 kHz. One thousand responses were averaged for each recording. The TEOAEs were considered present when the overall signal-to-noise ratio was higher than 6 dB and the reproducibility higher than 90% over the whole spectrum.
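The artifact-cancellation logic of this non-linear protocol can be sketched numerically. This is a toy illustration with invented response functions, not the recording software's implementation: any response that scales linearly with click amplitude cancels when the four responses in a group are summed, while a compressive (nonlinear) cochlear response does not.

```python
import numpy as np

# Non-linear TEOAE protocol: groups of four clicks, three of equal
# amplitude and a fourth three times larger with opposite polarity.
amplitudes = np.array([1.0, 1.0, 1.0, -3.0])

def linear_response(a):
    # Toy linear system: output proportional to input. Any stimulus
    # artifact behaves this way.
    return 0.5 * a

def compressive_response(a):
    # Toy compressive nonlinearity standing in for the cochlear response.
    return np.sign(a) * np.abs(a) ** 0.3

# Summing the four responses in a group cancels the linear part
# (0.5 + 0.5 + 0.5 - 1.5 = 0) but leaves a nonlinear residual.
lin_sum = sum(linear_response(a) for a in amplitudes)
nonlin_sum = sum(compressive_response(a) for a in amplitudes)

print(f"linear artifact after summing: {lin_sum:.6f}")
print(f"nonlinear residual after summing: {nonlin_sum:.3f}")
```

The same cancellation argument explains why the linear method (four clicks of identical polarity, used for the suppression measurements below) retains the full stimulus artifact and therefore requires lower click levels.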

Function of the medial olivocochlear efferent system

The MOCS function was investigated with the same recording system by adding ipsilateral, contralateral, and binaural white noise to TEOAE recordings in a forward masking paradigm (Berlin et al. 1995) to investigate both crossed and uncrossed pathways. Clicks at 65 ± 3 dB peSPL were used to elicit TEOAEs using the linear method (i.e., groups of four clicks of the same polarity). A 400-ms broadband noise (0–16 kHz bandwidth) with a 10-ms inter-stimulus interval was used to stimulate the MOCS. Five hundred sweeps were averaged for each condition. The noise was presented at 65 dB SPL for the ipsilateral and contralateral conditions, and at 65, 50, 40, and 30 dB SPL for the binaural condition while keeping the intensity of the click eliciting TEOAEs constant. Three recordings of TEOAEs without noise were randomly interleaved with the different noise conditions. Root mean square values across the 8 to 18-ms-post-stimulus window were averaged to obtain a single number. The amount of suppression was determined by subtracting the with-noise conditions from the without-noise conditions.
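The suppression measure described above can be sketched as follows. The waveforms here are synthetic stand-ins (the 0.7 attenuation factor is an arbitrary assumption), but the computation mirrors the stated procedure: RMS over the 8–18 ms post-stimulus window, expressed in dB, with suppression taken as the without-noise level minus the with-noise level.

```python
import numpy as np

fs = 16000                        # Hz, assumed sampling rate for this sketch
t = np.arange(0, 0.020, 1 / fs)   # 20-ms analysis window

# Toy TEOAE waveforms (arbitrary units): a decaying tone burst, with the
# with-noise recording attenuated by efferent suppression.
teoae_quiet = 1.0 * np.sin(2 * np.pi * 1500 * t) * np.exp(-t / 0.006)
teoae_noise = 0.7 * np.sin(2 * np.pi * 1500 * t) * np.exp(-t / 0.006)

def rms_db(x, t, t0=0.008, t1=0.018):
    """RMS level (dB re 1) over the 8-18 ms post-stimulus window."""
    win = (t >= t0) & (t < t1)
    return 20 * np.log10(np.sqrt(np.mean(x[win] ** 2)))

# Suppression (dB) = without-noise level minus with-noise level.
suppression = rms_db(teoae_quiet, t) - rms_db(teoae_noise, t)
print(f"suppression: {suppression:.2f} dB")
```

With a uniform 0.7 amplitude scaling the result is exactly 20·log10(1/0.7) ≈ 3.1 dB, in the range of the group means reported in Table 1.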

Event-related potentials

Event-related potentials were recorded following the delivery of the stimulus /da/ in quiet and in noise. The speech stimulus was originally produced by a female speaker. The original recording was resynthesized using STRAIGHT (V40 007) without any modification of parameters. Then the initial 40 ms of the /da/ from the burst, selected with PRAAT version 5.1.31 (Paul Boersma and David Weenink, Institute of Phonetic Sciences, University of Amsterdam, Amsterdam, NL), was used as the speech stimulus. Stimuli were delivered through insert earphones (ER-3A) at 80 dB peSPL and presented in quiet as well as in ipsilateral, contralateral, and binaural continuous white noise (signal-to-noise ratio of +10 dB) using SmartEP (Intelligent Hearing Systems, Miami, FL, USA). The calibration of the equipment and earphones was performed with the Larson Davis System 824 sound level meter and AEC100 coupler (Larson Davis, PCB Piezotronics, Provo, UT).

The various conditions (quiet and noise) were randomly interleaved. Responses were recorded over the left (C3; International 10–20 system) and right (C4) temporal areas in addition to Cz, with mastoids as references. A forehead electrode served as the ground. The recording window was 500 ms, including a 75-ms pre-stimulus period. Responses were amplified (by a factor of 5000) by the Opti-Ampl 8008 amplifier, filtered online with a 1–100 Hz bandpass filter, digitized at a sampling rate of 1666 Hz, and then bandpass filtered offline from 1 to 30 Hz. Sixty responses were averaged three times for each condition in each ear (quiet and three noise conditions). Trials containing artifacts exceeding ±40 μV were automatically rejected from averaging. Waveforms for each condition and each recording site were averaged for each participant and grand-averaged across participants for each group.
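The ±40 μV artifact rejection and averaging steps can be sketched like this. The epochs are synthetic (the background noise level and the simulated artifact are assumptions for illustration), but the rejection rule is the one stated above.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1666                       # Hz, sampling rate from the setup above
n_samples = int(0.575 * fs)     # 500-ms window plus 75-ms pre-stimulus

# Toy single-trial epochs (uV): low-level background EEG, with one trial
# deliberately contaminated by a large offset standing in for an artifact.
trials = rng.normal(0.0, 8.0, size=(60, n_samples))
trials[5] += 120.0              # simulated artifact on trial 5

# Reject any trial whose amplitude exceeds +/-40 uV at any sample,
# then average the surviving trials into a single ERP waveform.
keep = np.all(np.abs(trials) <= 40.0, axis=1)
average = trials[keep].mean(axis=0)

print(f"trials kept: {keep.sum()} of {len(trials)}")
```

Averaging the retained trials attenuates the uncorrelated background noise by roughly the square root of the number of trials, which is why sixty sweeps per run were collected.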

Auditory brainstem responses

Auditory brainstem responses (ABRs) were elicited with 100 μs air-conduction clicks and recorded using the same montage and computer program as for the ERPs. Two recordings of 800 sweeps each were performed at 80 dB normal hearing level (nHL) for each click polarity (condensation and rarefaction), to distinguish the cochlear microphonic from the compound action potential, and were subsequently averaged for analysis. The stimulation rate was 27.7/s, a 100–1500 Hz bandpass filter was used online, and the gain was set at 20,000.

Behavioral testing

Audiogram

Pure tone thresholds were obtained with an audiometer (Equinox, Interacoustics, Eden Prairie, MN, USA) and EAR-3A insert earphones at 0.25, 0.5, 1, 2, 3, 4, 6, and 8 kHz. The children indicated that they heard the sound by raising their hand.

Speech reception threshold

A speech reception threshold (SRT) was obtained in each ear by randomly presenting six spondees via monitored live voice to a child wearing EAR-3A insert earphones. The children indicated their choice by pointing to a picture from a set of six spondee pictures. Word recognition scores in quiet were obtained using the CID W-22 via recorded CD.

Data analysis

The results of the audiometric tests used to assess individual inclusion in the study (i.e., ipsilateral middle ear muscle reflexes, pure tone audiogram, SRT, and word recognition scores in quiet) were in the normal range for all APD and control children included in the study and are not detailed here. The TEOAE amplitude, amount of TEOAE suppression, and latency and amplitude of the ABR and ERP were analyzed between groups, ears, and noise conditions. All subjects were included in the analysis. The ERP data were analyzed according to the recording sites. Finally, results from the APD diagnostic test battery were included in the data analysis. A few ERP waveforms had to be discarded due to excessive noise contamination; however, because ERP recordings were repeated three times for each condition, each subject had at least one analyzable recording per condition per electrode. ABR and ERP waveforms were reviewed by three experienced individuals (TM, KN, and LAG), who rated their morphology and agreed on the marking of the peaks, which had to be repeatable with good morphology across runs. Waves I, III, and V were identified as the first, second, and third most prominent positive peaks, respectively, that replicated, with absolute latencies around 1.6 ms, 3.7 ms, and 5.6 ms, respectively. P2 was defined as the most prominent positive peak in each recording, with a latency around 100 ms; N2 was defined as the following negative peak. The amplitude of each wave was measured from the peak to the following negative trough.
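The P2/N2 marking rules just described can be sketched on a synthetic waveform. The Gaussian bumps below are stand-ins for real data, and the search window around 100 ms is an assumption for illustration; the logic follows the stated definitions (P2 as the prominent positive peak near 100 ms, N2 as the following negative peak, amplitude measured peak to trough).

```python
import numpy as np

fs = 1666.0                          # Hz, ERP sampling rate from above
t = np.arange(-75, 425, 1000 / fs)   # ms: 75-ms pre-stimulus, 500-ms window

# Toy grand-average waveform (uV): a positivity near 110 ms (P2)
# followed by a negativity near 190 ms (N2). Illustrative only.
erp = (6 * np.exp(-((t - 110) / 30) ** 2)
       - 8 * np.exp(-((t - 190) / 35) ** 2))

# P2: most prominent positive peak in a broad window around 100 ms.
p2_win = (t >= 60) & (t <= 160)
p2_idx = np.where(p2_win)[0][np.argmax(erp[p2_win])]
p2_latency = t[p2_idx]

# N2: the negative peak following P2; the amplitude is measured from
# the P2 peak to this trough, as in the analysis described above.
n2_idx = p2_idx + np.argmin(erp[p2_idx:])
n2_latency = t[n2_idx]
p2_n2_amplitude = erp[p2_idx] - erp[n2_idx]

print(f"P2 {p2_latency:.0f} ms, N2 {n2_latency:.0f} ms, "
      f"P2-N2 {p2_n2_amplitude:.1f} uV")
```

In practice the peaks were marked by the three reviewers rather than picked automatically; the sketch only makes the latency-window and peak-to-trough conventions concrete.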

The morphology of the ERP was similar between children with APD and control children, as illustrated in Figure 1 for two representative subjects. In young children, the ERPs are dominated by a broad positivity suggestive of P1 and P2, followed by a large-amplitude negativity, N2 (Sharma et al. 1997; Ponton et al. 2000; Wunderlich and Cone-Wesson 2006). Although P2 seems to emerge early in infancy (Barnet et al. 1975; Kurtzberg et al. 1984; Novak et al. 1989), recent observations suggest that it is not until 5–6 years of age that it can be clearly discerned (Ponton et al. 2000), and N1 is still not clearly apparent in pre-adolescent children (Čeponienė et al. 1998). Čeponienė et al. (2005) suggested that P2 is actually fused with P1 in 7- to 10-year-old children and thus not clearly distinguishable from P1. Therefore, because P1 and N1 were not clearly defined in some children in either group, we focused our attention on N2, which dominates the ERP until adolescence (Ponton et al. 2000), and on the first prominent peak just prior to N2, defined here as P2. Although P2 was labeled as such, it is implied throughout this article that, according to the previously mentioned studies, this peak likely represents the activity of several generators, which may include some of those responsible for P1 and N1. Statistical analysis was conducted using SigmaPlot 12.0 (Systat Software Inc., San Jose, CA, USA) and SPSS 22 (IBM Corp., Armonk, NY, USA).

Figure 1:


Example of event-related potentials recorded in a typically developing child (in red) and a child with APD (in blue) to the stimulus /da/ presented in quiet. Recordings were obtained at C3 (left), Cz (middle), and C4 (right).

Results

Transient evoked otoacoustic emissions

To evaluate the effects of group and ear on TEOAE amplitude and TEOAE noise levels, we performed a linear mixed effects model with group and ear as the fixed-effect factors and subject as the random-effect factor. Means and standard deviations (SDs) of TEOAE amplitude and noise levels are summarized in Table 1 (top). There was no significant main or interaction effect for TEOAE amplitude level (Fs = 0.36–2.91, ps > 0.05) or for TEOAE noise level (Fs = 0.07–1.98, ps > 0.05).

Table 1.

Means (and SDs) of TEOAE amplitudes and noise (top) in dB SPL and TEOAE suppression (bottom) in APD versus control children. Results of suppression are indicated for the different types and intensity of suppressors.

Ear APD Control
TEOAE level L 21.5 (3.3) 20.8 (4.1)
R 22.8 (4.1) 20.5 (4.2)
Noise L 10.3 (4.6) 10.2 (4.7)
R 10.3 (5.7) 10.7 (5.4)

Suppressor Type Suppressor Level (dB SPL) Ear APD Control

Ipsilateral 65 L 2.7 (3.1) 3.0 (3.0)
R 3.1 (3.0) 4.2 (2.5)
Contralateral 65 L 2.7 (2.5) 2.0 (2.1)
R 1.9 (2.1) 2.9 (1.9)
Binaural 65 L 5.3 (5.9) 5.0 (3.2)
R 5.2 (5.4) 4.7 (3.5)
Binaural 50 L 4.2 (4.2) 3.0 (2.8)
R 3.6 (3.6) 2.8 (2.1)
Binaural 40 L 3.4 (2.7) 2.0 (1.8)
R 2.3 (2.0) 2.2 (1.8)
Binaural 30 L 2.5 (1.4) −0.1 (2.3)
R 1.6 (0.9) 0.9 (1.8)

APD = auditory processing disorder; L= left ear; R= right ear; SD = standard deviation; TEOAE = transient evoked otoacoustic emissions.

Medial olivocochlear efferent system function

Means and SDs of TEOAE suppression for the different conditions of stimulation are also indicated in Table 1 (bottom). We performed a linear mixed effects model analysis with group, ear, and suppressor type (ipsilateral, contralateral, and binaural suppressor at an intensity of 65 dB SPL) as the fixed factors and subject as the random factor. The model showed no main effect of group or ear and no interaction effects on the amount of suppression obtained with the three suppressor types at 65 dB SPL. There was a significant effect of suppressor type (F(2, 199)= 16.858, p < 0.0001). A post-hoc analysis revealed that the amount of suppression was significantly larger for the binaural condition (mean 4.9 dB) than for the ipsilateral (mean 3.2 dB) and contralateral conditions (mean 2.3 dB). Another linear mixed effects model analysis was conducted on the binaural suppression data with group, ear, and suppressor level (30, 40, 50, and 65 dB SPL) as the fixed factors and subject as the random factor. The model showed a significant main group effect, with a higher amount of suppression in children with APD than in TD children, as illustrated in Figure 2 (F(1, 255)= 9.085, p < 0.005), and a main effect of suppressor level (F(3, 255)= 20.952, p < 0.0001), but no main effect of ear or any interaction effect. Pairwise comparisons between the different suppressor levels were significant (ps < 0.005), except for the pairs 30 and 40 dB SPL and 40 and 50 dB SPL. Although we did not find a significant ear effect, it is noteworthy that the decrease in the amount of suppression from 65 to 30 dB SPL was less pronounced in the left ear of the APD group than in TD children (Table 1).

Figure 2:


Amount of transient evoked otoacoustic emissions binaural suppression (dB) and standard deviations recorded in both auditory processing disorder (in blue) and control children (in red) according to four different levels of suppressors for both ears combined.

Auditory Brainstem Responses

The means and SDs of the ABR latencies and amplitudes are summarized in Table 2. A linear mixed effects model analysis was conducted on the ABR latency and amplitude data separately, with ear and group as the fixed factors and subject as the random factor. We found no main effect of group or ear and no interaction effect on either latency or amplitude data (Fs = 0.001–0.772, ps > 0.05). The results indicate that both groups had very similar ABR characteristics in both ears.

Table 2:

Means (and SDs) of ABR latencies and amplitudes in children with APD and control children. There is no significant difference between groups.

Latencies (ms) Ear APD Control
Wave I L 1.65 (0.12) 1.68 (0.16)
R 1.67 (0.12) 1.65 (0.17)
Wave III L 3.83 (0.20) 3.90 (0.24)
R 3.83 (0.18) 3.86 (0.18)
Wave V L 5.78 (0.22) 5.82 (0.27)
R 5.84 (0.28) 5.86 (0.23)

Amplitude (μV)
Wave I L 0.10 (0.14) 0.08 (0.18)
R 0.02 (0.18) −0.04 (0.14)
Wave III L 0.13 (0.13) 0.12 (0.22)
R 0.10 (0.17) 0.07 (0.19)
Wave V L 0.44 (0.18) 0.46 (0.24)
R 0.41 (0.23) 0.46 (0.21)

APD = auditory processing disorder; L=left; R=right; SD = standard deviation.

Event-related potentials

Quiet condition

Table 3a shows the means and SDs of P2 and N2 latencies and amplitudes for the quiet and three noise conditions. A linear mixed effects model was used to test the effects of group, ear, and recording site (C3, Cz, and C4) separately on the ERP latencies (P2 and N2) and the peak-to-peak P2-N2 amplitudes. The ERP measures (P2 and N2 latencies and P2-N2 amplitude) did not differ significantly among the three recording sites (C3, Cz, and C4); furthermore, the effect of recording site did not interact with other factors, and therefore data from all sites were averaged for further analysis. Both P2 and N2 latencies obtained in the quiet condition were significantly longer in the APD group than in the control group, as illustrated for a representative subject in Figure 1 (P2 latency: F(1, 190)= 18.479, p < 0.0001; N2 latency: F(1, 179)= 40.497, p < 0.0001). There was no other significant main effect or interaction effect on P2 and N2 latencies in the quiet condition. Similar to the latency results, a linear mixed effects analysis on the peak-to-peak P2-N2 amplitudes in the quiet condition showed a significant main group effect (F(1, 175)= 9.752, p= 0.002) but no other significant main effect or interaction effect: the peak-to-peak P2-N2 amplitudes in the quiet condition were significantly higher in children with APD than in control children.

Table 3a.

Means (and SDs) of ERP latencies (ms) and amplitudes (μV) in children with APD and control children obtained in quiet and three different noise conditions. The latencies and amplitudes have been averaged between the three electrodes sites.

Peak Condition Ear APD Control p value
P2 latency Quiet L 124.1 (24.6) 104.9 (26.1) <0.001
R 117.5 (22.5) 107.5 (22.4) ns
P2 latency Ipsilateral L 147.6 (15.0) 134.8 (19.3) 0.001
R 133.7 (20.1) 137.7 (24.1) ns
P2 latency Contralateral L 116.7 (23.2) 107.2 (25.1) ns
R 112.8 (20.6) 104.4 (18.3) ns
P2 latency Binaural L 139.0 (17.9) 141.5 (21.8) ns
R 135.2 (19.4) 132.1 (21.2) ns

N2 latency Quiet L 206.6 (28.9) 175.6 (32.4) <0.001
R 200.7 (28.3) 166.0 (36.7) <0.001
N2 latency Ipsilateral L 212.2 (18.7) 206.7 (25.2) ns
R 212.4 (29.5) 194.4 (20.9) 0.001
N2 latency Contralateral L 211.2 (28.5) 174.4 (27.5) <0.001
R 199.3 (28.3) 175.0 (31.2) <0.001
N2 latency Binaural L 215.1 (22.8) 205.0 (24.9) ns
R 215.6 (34.8) 196.0 (21.2) 0.002

P2-N2 amplitude Quiet L 7.1 (2.9) 5.9 (2.4) ns
R 7.4 (3.8) 5.7 (2.7) ns
P2-N2 amplitude Ipsilateral L 5.1 (2.2) 4.4 (2.0) ns
R 5.8 (2.8) 5.2 (2.5) ns
P2-N2 amplitude Contralateral L 8.5 (4.6) 7.0 (2.8) ns
R 9.0 (4.5) 7.4 (3.6) ns
P2-N2 amplitude Binaural L 6.5 (2.3) 4.8 (2.2) <0.001
R 7.2 (3.4) 5.7 (3.1) ns

APD = auditory processing disorder; ERP = event-related potentials; L=left; ns = not significant; R=right; SD = standard deviation.

Noise condition

A linear mixed effects model was used to test the effects of group, ear, and noise condition (absent, ipsilateral, contralateral, and binaural) separately on the ERP latencies (P2 and N2) and the peak-to-peak P2-N2 amplitudes. The analysis (group x noise condition x ear) showed significant effects on P2 latency of noise condition (F(3, 520)= 289.984, p<0.0001) and group (F(1, 192)= 5.883, p<0.05), as well as significant group x noise condition (F(3, 520)= 11.991, p<0.0001) and group x ear x noise condition (F(3, 520)= 5.230, p<0.001) interactions. A separate analysis showed significant effects on N2 latency of noise condition (F(3, 505)= 97.526, p<0.0001) and group (F(1, 191)= 43.966, p<0.0001), as well as significant group x noise condition (F(3, 505)= 8.557, p<0.0001) and group x ear x noise condition (F(3, 505)= 2.811, p<0.05) interactions. We did not find a significant main effect of ear, but the group x ear x noise condition interaction was significant for both latencies.

We performed post-hoc pairwise comparisons between conditions within each group and ear for P2 and N2 latencies; the results are presented in Tables 3b and 3c, respectively. Post-hoc tests showed that P2 latencies were longer overall in the ipsilateral and binaural noise conditions than in the quiet and contralateral noise conditions (ps < 0.0001; see Tables 3a and 3b). Detailed analysis revealed that P2 latencies were significantly longer in the APD group than in the TD group for the ipsilateral noise condition in the left ear. N2 latencies were significantly longer in the APD group than in the TD group for all conditions except the ipsilateral and binaural conditions in the left ear (Tables 3a and 3c). A significant difference between ears was noticed within the APD group under the ipsilateral noise condition, which induced a significant delay of P2 latency in the left ear (mean 147.6 ms) compared with the right ear (mean 133.7 ms; p < 0.001). The main effect of noise in TD children is therefore a significant increase in P2 and N2 latencies under ipsilateral or binaural noise, while latencies remain unchanged under contralateral noise. Remarkably, this noise pattern was only observed for P2 in children with APD, as their N2 latencies were not significantly modified by any noise condition (Table 3c).

Table 3b.

Pair-wise comparison of conditions in one-way repeated ANOVA on P2 latencies within group and ear.

Table 3c.

Pair-wise comparison of conditions on N2 latencies within group and ear.


As for the P2-N2 amplitude results, a linear mixed effects analysis showed a main group effect (F(1, 192)= 8.500, p= 0.004), a main effect of noise condition (F(3, 501)= 71.647, p<0.0001), and a significant group x noise condition interaction (F(3, 501)= 3.072, p= 0.027), but no significant effect of ear and no other significant interactions. Means and SDs of P2-N2 amplitudes in each noise condition are presented in Table 3a. Noise affected ERP amplitudes in the same way in both groups: relative to the quiet condition, amplitudes decreased under ipsilateral noise, increased under contralateral noise, and remained relatively stable under binaural noise. The ERP amplitude in children with APD remained higher than in TD children under all noise conditions.

Event-related potentials and behavioral measures

The latencies of ERP responses to /da/ in quiet obtained in children with APD correlated significantly with several behavioral measures from the APD diagnostic test battery. Figure 3 illustrates these findings by showing the correlations between P2 latency and the SCAN-3 auditory figure ground percentile (r(12)= 0.67, p= 0.011) and between N2 latency and the auditory figure ground percentile (r(11)= 0.74, p= 0.006). Other significant correlations included the SCAN-3 composite score (P2: r(11)= 0.62, p= 0.03; N2: r(10)= 0.67, p= 0.024), SCAN-3 competing words percentile rank (N2: r(11)= 0.61, p= 0.033), PST quantitative scores (P2: r(11)= 0.67, p= 0.013; N2: r(10)= 0.75, p= 0.007), and PST qualitative scores (P2: r(11)= 0.58, p= 0.049; N2: r(10)= 0.68, p= 0.021).

Figure 3:


Correlations between the auditory figure ground percentile in children with auditory processing disorder and P2 latency (blue) and N2 latency (red). The lines represent the linear regressions detailed in the text.

The N2 latency was found to be inversely correlated with the latency shift observed under the ipsilateral and binaural noise conditions in both groups of children (Figure 4): noise induced the largest increase in N2 latency in children who had the shortest latencies in quiet. Since N2 latency correlated with the auditory figure ground results, the children with APD who showed the smallest latency increase under noise also obtained the lowest auditory figure ground scores.

Figure 4:


Correlations between N2 latency in quiet and N2 latency difference between the ipsilateral noise condition and the quiet condition in typically developing children (red) and children with auditory processing disorder (blue) for the left ear (left panel; APD: r(11)=0.68, p<0.05; TD children: r(17)=0.59, p<0.05) and the right ear (right panel; APD: r(11)=0.34, p>0.05; TD children: r(17)=0.76, p<0.001). Each line represents the linear regression.
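The correlations above are standard Pearson product-moment coefficients computed per ear between each child's N2 latency in quiet and the noise-induced latency shift. A minimal sketch of that computation follows; all latency values below are hypothetical illustrations, not the study data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def noise_shift(latency_noise, latency_quiet):
    """Per-child N2 latency shift (ms) induced by ipsilateral noise."""
    return [n - q for n, q in zip(latency_noise, latency_quiet)]

# Hypothetical N2 latencies (ms) for five children, quiet vs. ipsilateral noise
quiet = [215.0, 225.0, 235.0, 245.0, 255.0]
noise = [240.0, 245.0, 248.0, 250.0, 252.0]

shift = noise_shift(noise, quiet)  # [25.0, 20.0, 13.0, 5.0, -3.0]
r = pearson_r(quiet, shift)
# r is strongly negative: the shortest quiet latencies show the largest shift,
# mirroring the inverse correlation reported in Figure 4.
print(r)
```

This sketch reproduces only the direction of the reported relationship (shorter quiet latency, larger noise-induced shift), not the study's values.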

Discussion

This study revealed that children with APD differed from typically developing children in several aspects of their neurophysiological responses to auditory signals, even in a quiet environment. Most notable are the differences observed in ERP latencies and amplitudes and at the level of the peripheral efferent pathways. Several factors that could have differentially affected responses in the APD and control groups can be ruled out. First, the experimental and recording settings were identical for all children. Second, our control and APD groups were composed of children in the same age range (identical mean and median), and none of the ERP characteristics correlated with age within each group. Third, children of both groups had the same behavioral thresholds for sound detection in quiet, ruling out a latency/intensity effect of the stimuli on the ERP responses. Last, it is unlikely that a general attentional effect played a major role in these group differences, because the ERP and TEOAE suppression data were obtained passively while participants of both groups were engaged in the same activity (watching a silent movie).

Transient evoked otoacoustic emissions and efferent suppression

Similar to previous findings (Burguetti and Carvallo 2008; Sanches and Carvallo 2006; Butler et al. 2011; but see Muchnik et al. 2004), we did not observe a significant group difference in TEOAE overall levels or noise levels. There was, however, a group difference in TEOAE suppression, with a higher amount of suppression overall in APD children than in controls in the binaural condition. Decreasing the intensity of the binaural suppressor revealed a trend in the left ear: the reduction in the amount of suppression from 65 to 30 dB SPL was less prominent in APD children than in control children. In normally hearing individuals, the threshold of MOCS activation is thought to be 10–20 dB above the threshold of hearing (Collet et al. 1990; Liberman and Guinan 1998). Our observations in control children corroborate this threshold value, with, on average, no suppression in the left ear and minor suppression in the right ear at 30 dB SPL. In APD children, however, the threshold of MOCS activation appears to be abnormally low in the left ear.
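The suppression measure discussed here is conventionally quantified as the drop in TEOAE level (in dB) when the suppressor noise is added to the recording. A sketch of that bookkeeping follows; the emission levels and the activation criterion below are hypothetical illustrations, not values from the study:

```python
def suppression_db(teoae_quiet_db, teoae_with_noise_db):
    """Amount of efferent suppression: positive = noise reduced the TEOAE."""
    return teoae_quiet_db - teoae_with_noise_db

def mocs_activated(suppression, criterion_db=0.5):
    """Crude activation check against a hypothetical criterion in dB."""
    return suppression >= criterion_db

# Hypothetical binaural-suppressor measurements for one ear at two intensities:
s65 = suppression_db(12.3, 10.1)  # 65 dB SPL suppressor -> 2.2 dB suppression
s30 = suppression_db(12.3, 12.2)  # 30 dB SPL suppressor -> 0.1 dB suppression

# With these numbers the efferent system counts as "activated" only at 65 dB SPL,
# illustrating an activation threshold somewhere between the two levels.
print(mocs_activated(s65), mocs_activated(s30))
```

The interest of the group comparison in the text is precisely that, at 30 dB SPL, suppression in APD children remained above such a criterion in the left ear while it did not in controls.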

Because suppressor sounds could potentially activate both the MOCS and the middle ear muscle reflex (MEMR), it is legitimate to ask whether the MEMR could be involved in the group differences observed in this study. Some reports indicated that standard measures of acoustic reflex thresholds, like those used in our study, may overestimate the level at which the stapedius muscle is activated by a suppressor sound (Feeney and Keefe, 2001; Zhao and Dhar, 2010; Schairer et al., 2013). However, the differences in suppression between ears and groups in our study were unlikely to be the consequence of “sub-threshold” middle ear compliance changes from activation of the MEMR, because such changes would be expected to modify responses in all conditions and at all intensities, which was not the case. Finally, the difference in TEOAE suppression between groups observed at the lowest intensity of stimulation (30 dB SPL) was well below the thresholds of MEMR activation, even when measured with a wideband probe. We therefore conclude that the MEMR was unlikely to have played a significant role in the suppression differences observed between the two groups.

The MOCS has been shown to be involved in the detection of signals in noise, acting to minimize the response to long-lasting stimuli (the “noise”) while maximizing the response to novel stimuli (such as speech). Indeed, the auditory efferents, including corticofugal and brainstem portions, play a critical role in speech perception by tuning the cochlear nucleus and cochlea to sounds critical for speech perception (see Lopez-Poveda, 2018 for a review). Because the auditory cortex may be involved in filtering ascending auditory input via the medial efferent system, this cortical regulation could be important for hearing functions such as those involved in the processing of complex signals and/or in complex environments. Previously published studies revealed evidence of an impairment of MOCS function, with either a decrease in TEOAE suppression despite normal hearing thresholds in children with APD (Muchnik et al. 2004; Sanches and Carvallo 2006) and language impairment (Veuillet et al. 1999), or a left ear advantage in TEOAE suppression (Burguetti and Carvallo 2008). In other clinical populations, such as individuals with tinnitus and/or hyperacusis, the MOC efferent system appears instead to be enhanced (Knudson et al., 2014; Sturm and Weisz, 2015). In autistic children, one study reported a weaker olivocochlear efferent system (Danesh and Kaf, 2012), while another observed the opposite trend (Wilson et al., 2017). Here we similarly found significantly stronger MOCS activity for binaural stimulation in APD children. There are a number of possible explanations for the discrepancy in these findings, including differences in the choice of otoacoustic emissions, the characteristics of the sounds used to elicit MOCS activity (bandwidth, intensity), and the criteria used to select specific clinical groups. Further studies are therefore necessary to investigate under which conditions the MOCS operates (e.g., signal-to-noise ratio, attention factor, etc.) and to evaluate whether a weaker or stronger MOCS activity in specific clinical populations contributes to difficulty in everyday listening in noisy environments. It is likely, as suggested by de Boer et al. (2012), that the masking of complex signals, such as speech, by noise can be affected both beneficially and detrimentally by MOCS activation, although the specific conditions for each scenario remain elusive.

Event-related potentials in quiet

The latencies of ERPs are predominantly determined by axon myelination and the maturation of synaptic mechanisms (Eggermont 1988), while ERP amplitudes are linked to the number of activated cells generating the response and/or to the number of synapses. Between 3.5 and 12 years of age, an important decrease of the mean synaptic density in the auditory cortex has been reported (Huttenlocher and Dabholkar 1997). During that time in TD children, N2 amplitude gradually declines, suggesting that this maturational process is due in part to decreasing synaptic count and changes in myelination, resulting in greater efficiency of higher-level processes.

ERPs are known to have very distinct patterns depending on the scalp location at which the responses are recorded (Ponton et al. 2000). We did not observe any significant differences in wave latencies or amplitudes between our three recording sites--C3, Cz, and C4--most likely because they were very close to each other. We found, however, similar trends in both groups, with ERP amplitudes always larger over the side contralateral to the stimulation, which aligns with the known organization of the auditory pathways. Furthermore, these amplitudes were always larger over C3 than over C4 for both right and left monaural stimulation in both groups, again in accordance with previous observations of a general left hemisphere dominance for auditory processing (Devlin et al. 2003).

In response to the stimulus /da/ presented without background noise, P2 and N2 latencies recorded in APD children were much longer than in TD children, corroborating previous investigations (Jirsa and Clontz 1990; Jirsa 1992). The difference was 10 ms for P2 in the right ear and 20 ms in the left ear, and 30 ms for N2 bilaterally. In addition to longer latencies, children with APD exhibited significantly larger ERP amplitudes than TD children, a finding previously observed in children with suspected APD (Liasis et al. 2003).

The longer latencies and higher amplitudes observed in the present study therefore suggest that children with APD present with a delay or impairment in development that affects neuronal transmission along the auditory pathways and/or the connections within or between different cortical areas. These findings indicate that the auditory processes indexed by the P1/P2 complex, believed to reflect the encoding of acoustic sound features, and by N2, representing a synthesis of these features into a sensory representation (Čeponienė et al. 2005), require extra time in children with APD compared with TD children. This longer processing time does not appear to have behavioral consequences in a quiet environment, however, as these children have normal speech processing skills. Consequences in the presence of noise are likely, and are discussed in the following paragraphs.

A previous study found the Na component of the middle latency responses to be delayed as well in children with APD (Purdy et al. 2002), which suggests that the delays observed at the cortical level could be the consequence of accumulative immaturity along the auditory pathways, likely more pronounced after the generation sites of wave V, since there was no significant difference in wave V latency between groups in our study.

The techniques used in the present study are not sensitive enough to specify whether one specific part of the auditory system is more immature or impaired than the others. There might be various degrees of maturation of the different subcortical and cortical pathways between TD children and those with APD, as P2 and N2 are considered to have multiple generators (Näätänen and Picton 1987; Vaughan and Arezzo 1988; Wunderlich and Cone-Wesson 2006), and the different auditory pathways/centers mature at different rates (Ponton et al. 2002; Bishop et al. 2011). Supporting the hypothesis of greater auditory immaturity in children with APD, parental reports indicated that 42% of these children presented with some degree of developmental delay, whereas no such delay was reported in the control group.

Noise effects on event-related potentials and behavioral measures

In healthy adults and children, the presence of background noise is known to degrade neural synchrony, leading to delayed cortical responses and reduced cortical amplitudes (Whiting et al. 1998; Warrier et al. 2004; Androulidakis and Jones 2006; Burkard and Don 2007; Billings et al. 2009; Parbery-Clark et al. 2011; Schochat et al. 2012). Here, cortical amplitudes varied comparably in both groups with the introduction of ipsilateral or binaural noise. As for the timing of cortical responses, the expected noise-induced increases were indeed observed in the TD group for both P2 and N2 latencies under either ipsilateral or binaural noise. In the APD group, however, latency increases with noise were only observed for P2, while N2 latencies remained unchanged. Another remarkable finding is that N2 latencies in quiet in children with APD were equal to those obtained in noise in TD children. Hence, the synthesis of acoustic features into a perceptual representation in quiet, represented by N2, is a process that requires more time in children with APD than in TD children; when the stimulus is embedded in background noise, this process is prolonged in TD children. The fact that it is not comparably prolonged in children with APD under the same noise conditions suggests a limitation of the resources available to them for processing sounds in noise. This hypothesis is reinforced by the significant correlations found between ERP latencies and behavioral measures: the children with APD whose ERP latencies were the longest in quiet, and did not change with the introduction of noise, obtained the lowest behavioral scores. These tasks assessed speech-in-noise abilities, binaural integration, and phonemic decoding.
Similar correlations between electrophysiological recordings and behavioral testing have been observed in normal children and adults, with a positive correlation between N2 latency and the level of difficulty of discrimination tasks (Maiste et al. 1995). Anderson et al. (2010) observed that TD children who performed very poorly in a speech-in-noise perception task had larger N2 amplitudes than those who performed well in the same task. They suggested that the best perceivers of speech in noise might recruit fewer neural resources owing to greater neural efficiency, and that the decreased N2 amplitudes in their good speech-in-noise group may reflect greater inhibitory control, a necessary function for suppressing unwanted background noise. Interestingly, in their group of TD children, the ERPs of individuals with the lowest speech-in-noise scores differed from those of individuals with the highest scores only in the noise condition. In our study, however, the ERP latency and amplitude differences between the control and experimental groups were present even in the quiet condition. This suggests that the auditory processing problem reveals itself behaviorally only when listening conditions become challenging: the neuronal resources available to process sounds appear to be limited in children with APD, but normal speech processing occurs when signals are processed in the absence of noise.

In contrast to the ipsilateral and binaural noise effects, contralateral noise induced an increase in ERP amplitude while latencies remained similar to those obtained in the quiet condition. This contralateral noise effect was observed in both groups, although the amplitude increase was slightly more prominent in the right ear of control children. These results confirm that, although noise generally has a detrimental effect on auditory perception, under certain circumstances low intensity background noise does not degrade neural synchrony and can therefore improve listeners’ ability to detect a weak signal. It is also thought that low intensity background noise may enhance concurrent stream segregation through activation of the efferent system (Giraud et al. 1997; Kumar and Vanaja 2004; Alain et al. 2009). The benefit of contralateral noise is reduced, however, with the addition of ipsilateral noise (i.e., a binaural listening situation), which appears to be more detrimental to children with APD than to TD children.

Auditory processing disorder and cognitive functions

Defining which neuronal network(s) is/are impaired in APD has been a longstanding issue. It is especially difficult to separate purely auditory from cognitive disorders, because it is impossible to suppress cognitive skills when listening tasks use complex stimuli, such as speech, delivered in a noisy environment. Recent evidence supports the contribution of higher order cognitive abilities to listening difficulties in children with suspected APD (Moore et al. 2010, 2013). Farah et al. (2014), in a diffusion tensor imaging study of children with suspected APD, discovered, among other findings, a disruption in fibers connecting to the dominant contralateral pathway, from the right ear to the left auditory cortex, suggesting that right ear input may experience less efficient processing. Based on their results, they suggested that APD, particularly in populations with an atypical left ear advantage, may have its roots in the top-down attentional networks that modulate auditory attention and processing. This supports one of the major hypotheses concerning APD, namely that it stems from a rather global cognitive deficit arising from multimodal processing centers in the brain. Although we cannot completely rule out an attention effect, our findings indicate that children with APD present with basic auditory processing dysfunction even in quiet conditions, with an increase in the timing of the processes leading to the synthesis of the acoustic features of sounds into a sensory representation. Our findings concur with other documented symptoms of APD, such as difficulty following and remembering spoken instructions in quiet (Smoski et al. 1992; Bamiou et al. 2001). Paralleling this, neural differences between good and poor speech-in-noise perceivers have been reported even when speech stimuli are presented in quiet (Chandrasekaran et al. 2009; Anderson et al. 2010). Thus, neural deficiencies in non-challenging listening conditions can still be symptomatic of APD. Also, because MOCS function in children with APD is not identical to that of TD children, it is possible that higher-order processing levels in APD do not provide efficient feedback to lower-order brainstem levels.

Conclusion

The findings from this study indicate that children with APD present with neural deficiencies in both challenging and non-challenging environments, while their control of the auditory periphery under some noisy conditions differs from that of TD children. Because ERP and MOCS testing share only the activation of the same auditory pathways up to a certain level (i.e., at least the peripheral level of the auditory system), it is problematic to explain ERP changes under noisy conditions according to the functional features of the MOCS, and vice versa. The present findings suggest a bottom-up impairment in APD, with prolonged ERP processes in non-challenging acoustic environments, as well as a top-down influence in challenging environments, with differences in MOCS function, particularly at low levels of stimulation. In conclusion, children with APD may have a less efficient auditory system that is unable to dynamically adapt to challenging backgrounds because it uses too many resources in quiet, resources that cannot translate into a greater neural effort to facilitate sound segregation in noise.

Acknowledgments

The authors thank all the children and their families for participating in this study, the Audiology Clinic at Nemours/Alfred I. duPont Hospital for Children, and Jobayer Hossain and Li Xie for statistical consultation.

T.M. designed and performed experiments, analyzed data, and wrote the paper. K.N., L.A.G., and T.R. recruited subjects, performed experiments, analyzed data, and provided critical revision. R.M.C. and R.G.G. performed experiments and analyzed data.

Conflicts of Interest and Source of Funding

This work was supported by NIH 8P20GM103464 (to T.M.). The authors declare no other conflict of interest.

References

  1. Alain C, Quan J, McDonald K, et al. (2009). Noise-induced increase in human auditory evoked neuromagnetic fields. Eur J Neurosci, 30, 132–142.
  2. American Speech-Language-Hearing Association. (1996). Central auditory processing: current status of research and implications for clinical practice. Am J Audiol, 5, 41–52.
  3. American Speech-Language-Hearing Association. (2005). (Central) Auditory Processing Disorders [Technical Report]. Available from www.asha.org/policy.
  4. Anderson S, Chandrasekaran B, Yi HG, et al. (2010). Cortical-evoked potentials reflect speech-in-noise perception in children. Eur J Neurosci, 32, 1407–1413.
  5. Androulidakis AG, Jones SJ (2006). Detection of signals to modulated and unmodulated noise observed using auditory evoked potentials. Clin Neurophysiol, 117, 1783–1793.
  6. Bamiou DE, Musiek FE, Luxon LM (2001). Aetiology and clinical presentations of auditory processing disorders – a review. Arch Dis Child, 85, 361–365.
  7. Barnet AB (1975). Auditory evoked potentials during sleep in normal children from 10 days to 3 years of age. Electroencephalogr Clin Neurophysiol, 39, 29–41.
  8. Bellis TR (1996). Assessment and management of central auditory processing disorders in the educational settings. San Diego (CA): Singular Publishing Group.
  9. Bench J, Kowal Å, Bamford J (1979). The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol, 13, 108–112.
  10. Berlin CI, Hood LJ, Hurley AE, et al. (1995). Binaural noise suppresses linear click-evoked otoacoustic emissions more than ipsilateral or contralateral noise. Hear Res, 87, 96–103.
  11. Billings CJ, Tremblay KL, Stecker GC, et al. (2009). Human evoked cortical activity to signal-to-noise ratio and absolute signal level. Hear Res, 254, 15–24.
  12. Bishop DV, Anderson M, Reid C, et al. (2011). Auditory development between 7 and 11 years: an event-related potential (ERP) study. PLoS ONE, 6, e18993.
  13. BKB-SIN: Bamford-Kowal-Bench Speech in Noise Test. (2005). Elk Grove, IL: Etymotic Research.
  14. Burguetti FA, Carvallo RM (2008). Efferent auditory system: effect on auditory processing. Braz J Otorhinolaryngol, 74, 737–745.
  15. Burkard RF, Don M (2007). The auditory brainstem response. In: Burkard RF, Don M, Eggermont JJ, editors. Auditory evoked potentials: basic principles and clinical applications. Baltimore (MD): Lippincott Williams & Wilkins; p. 229–253.
  16. Butler BE, Purcell DW, Allen P (2011). Contralateral inhibition of distortion product otoacoustic emissions in children with auditory processing disorders. Int J Audiol, 50, 530–539.
  17. Čeponienė R, Alku P, Westerfield M, et al. (2005). ERPs differentiate syllable and nonphonetic sound processing in children and adults. Psychophysiology, 42, 391–406.
  18. Čeponienė R, Cheour M, Näätänen R (1998). Interstimulus interval and auditory event-related potentials in children: evidence for multiple generators. Electroencephalogr Clin Neurophysiol, 108, 345–354.
  19. Chandrasekaran B, Hornickel J, Skoe E, et al. (2009). Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: implications for developmental dyslexia. Neuron, 64, 311–319.
  20. Chermak GD, Hall JW 3rd, Musiek FE (1999). Differential diagnosis and management of central auditory processing disorder and attention deficit hyperactivity disorder. J Am Acad Audiol, 10, 289–303.
  21. Chermak GD (2002). Deciphering auditory processing disorders in children. Otolaryngol Clin North Am, 35, 733–749.
  22. Cohen M, Campbell R, Yaghmai F (1989). Neuropathological abnormalities in developmental dysphasia. Ann Neurol, 25, 567–570.
  23. Collet L, Kemp DT, Veuillet E, et al. (1990). Effect of contralateral auditory stimuli on active cochlear micro-mechanical properties in human subjects. Hear Res, 43, 251–261.
  24. Danesh AA, Kaf WA (2012). DPOAEs and contralateral acoustic stimulation and their link to sound hypersensitivity in children with autism. Int J Audiol, 51, 345–352.
  25. de Boer J, Thornton AR, Krumbholz K (2012). What is the role of the medial olivocochlear system in speech-in-noise processing? J Neurophysiol, 107, 1301–1312.
  26. Devlin JT, Raley J, Tunbridge E, et al. (2003). Functional asymmetry for auditory processing in human primary auditory cortex. J Neurosci, 23, 11516–11522.
  27. Eggermont JJ (1988). On the rate of maturation of sensory evoked potentials. Electroencephalogr Clin Neurophysiol, 70, 293–305.
  28. Farah R, Schmithorst VJ, Keith RW, et al. (2014). Altered white matter microstructure underlies listening difficulties in children suspected of auditory processing disorders: a DTI study. Brain Behav, 4, 531–543.
  29. Feeney MP, Keefe DH (2001). Estimating the acoustic reflex threshold from wideband measures of reflectance, admittance, and power. Ear Hear, 22, 316–332.
  30. Giraud AL, Garnier S, Micheyl C, et al. (1997). Auditory efferents involved in speech-in-noise intelligibility. Neuroreport, 8, 1779–1783.
  31. Huttenlocher PR, Dabholkar AS (1997). Regional differences in synaptogenesis in human cerebral cortex. J Comp Neurol, 387, 167–178.
  32. Jerger J, Musiek F (2000). Report of the consensus conference on the diagnosis of auditory processing disorders in school-aged children. J Am Acad Audiol, 11, 467–474.
  33. Jernigan TL, Hesselink JR, Sowell E, et al. (1991). Cerebral structure on magnetic resonance imaging in language- and learning-impaired children. Arch Neurol, 48, 539–545.
  34. Jirsa RE, Clontz KB (1990). Long latency auditory event-related potentials from children with auditory processing disorders. Ear Hear, 11, 222–232.
  35. Jirsa RE (1992). The clinical utility of the P3 AERP in children with auditory processing disorders. J Speech Hear Res, 35, 903–912.
  36. Katz J (1962). The use of staggered spondaic words for assessing the integrity of the central auditory nervous system. J Audit Res, 2, 327–337.
  37. Katz J, Fletcher C (1998). Central Auditory Processing Tests. Vancouver (WA): Precision Acoustics.
  38. Keith RW (1994). Auditory continuous performance test. San Antonio (TX): Psychological Corporation.
  39. Keith RW (2000). RGDT – random gap detection test. St. Louis (MO): Auditec.
  40. Keith RW (2009). SCAN-3-C tests for auditory processing disorders for children. San Antonio (TX): Pearson.
  41. Knudson IM, Shera CA, Melcher JR (2014). Increased contralateral suppression of otoacoustic emissions indicates a hyperresponsive medial olivocochlear system in humans with tinnitus and hyperacusis. J Neurophysiol, 112, 3197–3208.
  42. Krishnamurti S (2001). P300 auditory event-related potentials in binaural and competing conditions in adults with central auditory processing disorders. Contemp Issues Comm Sci Disord, 28, 40–47.
  43. Kumar UA, Vanaja CS (2004). Functioning of olivocochlear bundle and speech perception in noise. Ear Hear, 25, 142–146.
  44. Kurtzberg D, Hilpert PL, Kreuzer JA, et al. (1984). Differential maturation of cortical auditory evoked potentials to speech sounds in normal fullterm and very low-birthweight infants. Dev Med Child Neurol, 26, 466–475.
  45. Liasis A, Bamiou DE, Campbell P, et al. (2003). Auditory event-related potentials in the assessment of auditory processing disorders: a pilot study. Neuropediatrics, 34, 23–29.
  46. Liberman MC, Guinan JJ Jr. (1998). Feedback control of the auditory periphery: anti-masking effects of middle ear muscles vs. olivocochlear efferents. J Commun Disord, 31, 471–482.
  47. Lopez-Poveda EA (2018). Olivocochlear efferents in animals and humans: from anatomy to clinical relevance. Front Neurol, 9, 197.
  48. Maiste AC, Wiens AS, Hunt MJ, et al. (1995). Event-related potentials and the categorical perception of speech sounds. Ear Hear, 16, 68–90.
  49. Moore DR (2012). Listening difficulties in children: bottom-up and top-down contributions. J Commun Disord, 45, 411–418.
  50. Moore DR, Ferguson MA, Edmondson-Jones AM, et al. (2010). Nature of auditory processing disorder in children. Pediatrics, 126, e382–e390.
  51. Moore DR, Rosen S, Bamiou DE, et al. (2013). Evolving concepts of developmental auditory processing disorder (APD): a British Society of Audiology APD special interest group ‘white paper’. Int J Audiol, 52, 3–13.
  52. Morlet T, Berlin CI, Norman M, et al. (2003). Fast ForWord™: its scientific basis and treatment effects on the human efferent auditory system. In: Berlin CI, Weyand TG, editors. The brain and sensory plasticity: language acquisition and hearing. New York (NY): Delmar Learning; p. 129–148.
  53. Muchnik C, Ari-Even Roth D, Othman-Jebara R, et al. (2004). Reduced medial olivocochlear bundle system function in children with auditory processing disorders. Audiol Neurootol, 9, 107–114.
  54. Musiek FE (1983). Results of three dichotic speech tests on subjects with intracranial lesions. Ear Hear, 4, 318–323.
  55. Musiek FE (1994). Frequency (pitch) and duration pattern tests. J Am Acad Audiol, 5, 265–268.
  56. Näätänen R, Picton TW (1987). The N1 wave of the human electric and magnetic response to sound: a review and analysis of the component structure. Psychophysiology, 24, 375–425.
  57. Novak GP, Kurtzberg D, Kreuzer JA, et al. (1989). Cortical responses to speech sounds and their formants in normal infants: maturational sequence and spatiotemporal analysis. Electroencephalogr Clin Neurophysiol, 73, 295–305.
  58. Parbery-Clark A, Marmel F, Bair J, et al. (2011). What subcortical-cortical relationships tell us about processing speech in noise. Eur J Neurosci, 33, 549–557.
  59. Plante E, Swisher L, Vance R, et al. (1991). MRI findings in boys with specific language impairment. Brain Lang, 41, 52–66.
  60. Ponton CW, Eggermont JJ, Kwong B, et al. (2000). Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Clin Neurophysiol, 111, 220–236.
  61. Purdy SC, Kelly AS, Davies MC (2002). Auditory brainstem response, middle latency response, and late cortical evoked potentials in children with learning disabilities. J Am Acad Audiol, 13, 367–382.
  62. Sanches SG, Carvallo RM (2006). Contralateral suppression of transient evoked otoacoustic emissions in children with auditory processing disorder. Audiol Neurootol, 11, 366–372.
  63. Schairer KS, Feeney MP, Sanford CA (2013). Acoustic reflex measurement. Ear Hear, 34, 43S–47S.
  64. Schochat E, Musiek FE, Alonso R, et al. (2010). Effect of auditory training on the middle latency response in children with (central) auditory processing disorder. Braz J Med Biol Res, 43, 777–785.
  65. Schochat E, Matas CG, Samelli AG, et al. (2012). From otoacoustic emission to late auditory potentials P300: the inhibitory effect. Acta Neurobiol Exp (Wars), 72, 296–308.
  66. Sharma A, Kraus N, McGee TJ, et al. (1997). Developmental changes in P1 and N1 central auditory responses elicited by consonant-vowel syllables. Electroencephalogr Clin Neurophysiol, 104, 540–545.
  67. Sharma M, Purdy SC, Kelly AS (2009). Comorbidity of auditory processing, language, and reading disorders. J Speech Lang Hear Res, 52, 706–722.
  68. Smoski WJ, Brunt MA, Tannahill JC (1992). Listening characteristics of children with central auditory processing disorders. Lang Speech Hear Serv Sch, 23, 145–152.
  69. Sturm JJ, Weisz CJ (2015). Hyperactivity in the medial olivocochlear efferent system is a common feature of tinnitus and hyperacusis in humans. J Neurophysiol, 114, 2551–2554.
  70. Stuart A, Butler AK (2012) Contralateral suppression of transient otoacoustic emissions and sentence recognition in noise in young adults. J Am Acad Audiol, 23, 686–96. [DOI] [PubMed] [Google Scholar]
  71. Vaughan H, Arezzo J. The neural basis of event-related potentials In: Picton TW, editor. Human event-related potentials. Amsterdam (NL): Elsevier Science; 1988. p. 45–96. [Google Scholar]
  72. Veuillet E, Collet L, Bazin F (1999). Objective evidence of peripheral auditory disorders in learning-impaired children. J Audiol Med, 8, 18–29. [Google Scholar]
  73. Veuillet E, Magnan A, Ecalle J, et al. (2007). Auditory processing disorder in children with reading disabilities: effect of audiovisual training. Brain, 130, 2915–2928. [DOI] [PubMed] [Google Scholar]
  74. Wagner W, Frey K, Heppelmann G, et al. (2008) Speech-in-noise intelligibility does not correlate with efferent olivocochlear reflex in humans with normal hearing. Acta Otolaryngol, 128, 53–60. [DOI] [PubMed] [Google Scholar]
  75. Warrier CM, Johnson KL, Hayes EA, et al. (2004). Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise. Exp Brain Res, 157, 431–441. [DOI] [PubMed] [Google Scholar]
  76. Weihing J, Bellis TJ, Chermak GD, et al. Current issues in the diagnosis and treatment of CAPD in children In: Gaffner D, Ross-Swain D, editors. Auditory processing disorders: assessment, management and treatment. San Diego (CA): Plural Publishing, Inc.; 2013. p. 8–32. [Google Scholar]
  77. Whiting KA, Martin BA, Stapells DR (1998). The effects of broadband noise masking on cortical event-related potentials to speech sounds /ba/ and /da/. Ear Hear, 19, 218–231. [DOI] [PubMed] [Google Scholar]
  78. Wilson US, Sadler KM, Hancock KE, et al. (2017) Efferent inhibition strength is a physiological correlate of hyperacusis in children with autism spectrum disorder. J Neurophysiol, 118:1164–1172. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Wunderlich JL, Cone-Wesson BK (2006). Maturation of CAEP in infants and children: a review. Hear Res, 212, 212–223. [DOI] [PubMed] [Google Scholar]
  80. Zhao W, Dhar S (2010). The effect of contralateral acoustic stimulation on spontaneous otoacoustic emissions. J. Assoc. Res. Otolaryngol 11, 53–67. [DOI] [PMC free article] [PubMed] [Google Scholar]