Author manuscript; available in PMC 2021 Jan 1.
Published in final edited form as: Ear Hear. 2020 Jan-Feb;41(1):25–38. doi: 10.1097/AUD.0000000000000804

Middle-ear muscle reflex and word-recognition in “normal hearing” adults: evidence for cochlear synaptopathy?

Anita M Mepani 1, Sarah A Kirk 1, Kenneth E Hancock 1,2, Kara Bennett 3, Victor de Gruttola 3, M Charles Liberman 1,2,4, Stéphane F Maison 1,2,4
PMCID: PMC6934902  NIHMSID: NIHMS1538037  PMID: 31584501

Structured Abstract

Objectives:

Permanent threshold elevation after noise exposure, ototoxic drugs or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle-ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. We hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting difficulties hearing in difficult listening environments in human subjects.

Design:

We recruited 165 normal-hearing, healthy subjects, between the ages of 18 and 63, with no history of ear or hearing problems, no history of neurologic disorders and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: with isolated words from the NU-6 corpus presented a) in noise at 0 dB signal-to-noise ratio, b) with 45% time compression plus reverberation, or c) with 65% time compression plus reverberation, and d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies (EHFs). Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle-ear function and reflexes were assessed with three methods: the clinical acoustic reflex threshold, clinical wideband tympanometry, and a custom wideband method using a pair of click probes flanking an ipsilateral noise elicitor. Other aspects of peripheral auditory function were assessed by measuring click-evoked gross potentials, i.e., the summating potential (SP) and action potential (AP), from ear-canal electrodes.

Results:

After adjusting for age and gender, word-recognition scores were uncorrelated with audiometric or DPOAE thresholds, at either standard or EHFs. MEM reflex thresholds were significantly correlated with scores on isolated-word recognition, but not with the modified version of the QuickSIN. The highest pairwise correlations were seen using the custom assay. AP measures were correlated with some of the word scores, but not as highly as seen for the MEM custom assay, and only if amplitude was measured from SP peak to AP peak, rather than baseline to AP peak. The highest pairwise correlations with word scores, on all four tests, were seen with the SP/AP ratio, followed closely by SP itself. When all predictor variables were combined in a stepwise multivariate regression, SP/AP dominated models for all four word-score outcomes. MEM measures only enhanced the adjusted r-square values for the 45% time compression test. The only other predictors that enhanced model performance (and only for two outcome measures) were measures of interaural threshold asymmetry.

Conclusions:

Results suggest that, among normal-hearing subjects, there is a significant peripheral contribution to diminished hearing performance in difficult listening environments that is not captured by either threshold audiometry or DPOAEs. The significant univariate correlations between word scores and either SP/AP, SP, MEM reflex thresholds, or AP amplitudes (in that order) are consistent with a type of primary neural degeneration. However, interpretation is clouded by uncertainty as to the mix of pre- and post-synaptic contributions to the click-evoked SP. None of the assays presented here has the sensitivity to diagnose neural degeneration on a case-by-case basis; however, these tests may be useful in longitudinal studies to track accumulation of neural degeneration in individual subjects.

Introduction

Acoustic overexposure can lead to hair cell damage, threshold elevation, degraded frequency tuning and loss of important cochlear nonlinearities (e.g., Liberman et al. 1984; Schmiedt 1984). A longstanding dogma was that hair cells are the primary targets of noise, and that cochlear neurons only die as a result of hair cell degeneration (Bohne et al. 2000). Indeed, hair cell loss can be seen within hours of exposure, while loss of spiral ganglion neurons, the primary sensory neurons of the auditory pathway, is not detectable for months to years (Johnsson 1974; Johnsson et al. 1976). According to this old view, cochlear neuropathy is a delayed downstream consequence of noise-induced hair cell loss, so that an exposure causing only a temporary threshold elevation is benign and causes no permanent impairment. This assumption underlies the current damage-risk criteria for occupational noise exposure set by federal agencies (Arenas et al. 2014).

Recent animal studies have altered the view that temporary threshold shift is always benign (Kujawa et al. 2009). Following acoustic exposure, loss of synapses between cochlear neurons and surviving inner hair cells (IHCs) can occur, even if cochlear thresholds return to normal. Loss of synapses between primary afferents and sensory cells is also seen in the aging ear (Sergeyenko et al. 2013) or following exposure to ototoxic drugs (Bourien et al. 2014; Kujawa et al. 2015; Ruan et al. 2014). This cochlear synaptopathy long went undetected, because synapses are not visible in routine histological preparations and the ultimate loss of spiral ganglion cells is extremely slow (Liberman et al. 1978). Cochlear synaptopathy is also “hidden” because neural degeneration per se does not elevate behavioral or electrophysiological thresholds until it becomes extreme (Lobarinas et al. 2013; Woellner et al. 1955). This is true, in part, because the cochlear neurons most vulnerable to both noise and aging are those with high thresholds and low (or medium) spontaneous rates (SRs) (Furman et al. 2013; Schmiedt et al. 1996). These low-SR fibers are major players in the coding of transient stimuli in noisy environments (Costalupes et al. 1984), because the high-SR fibers saturate in noise by virtue of their low thresholds and limited dynamic range. Together, these findings suggest that cochlear synaptopathy could be a contributor to the poorer speech discrimination observed in older or noise-exposed patients (Alvord 1983; Dubno et al. 1984; Kujawa and Liberman 2015; Rajan et al. 2008). It also may be important in limiting psychophysical performance among human listeners with normal hearing sensitivity, as deficits in binaural temporal processing are highly correlated with changes in ABR responses consistent with cochlear synaptopathy (Bharadwaj et al. 2014; Bharadwaj et al. 2015). Cochlear synaptopathy may also be key to the genesis of hyperacusis and tinnitus (Hickox et al. 2014; Knipper et al. 2013), via induction of central gain changes secondary to loss of afferent input to the auditory central nervous system (Hesse et al. 2016).

In animal studies of noise and aging, cochlear synaptopathy has been diagnosed by the supra-threshold amplitude of ABR wave I (Kujawa and Liberman 2009, 2015; Shaheen et al. 2015), which reflects the summed activity of cochlear neurons. The fractional reduction in responses to moderate-level (60 – 80 dB SPL) tone pips is correlated with the fractional reduction in synaptic counts in the appropriate cochlear regions (Sergeyenko et al. 2013). In humans, the diagnostic utility of ABR wave I for cochlear synaptopathy remains controversial (see Grinn et al. 2017; Tufts et al. 2018; Bramhall et al. 2019), possibly because inter-subject variability in human wave I amplitude is considerable, owing to heterogeneity in head shape and size, tissue conductivity, sex, etc. (Nikiforidis et al. 1993). However, recent histopathological studies clearly show that cochlear synaptopathy is widespread in human ears, at least in age-related hearing loss (Wu et al., 2018).

Recent data from animal studies suggest that the middle-ear muscle reflex (MEMR) may be a sensitive metric of cochlear synaptopathy in animals with normal thresholds (Valero et al. 2016; Valero et al. 2018), consistent with the longstanding speculation that low-SR cochlear-nerve fibers may be especially important in driving this sound-evoked feedback (Kobler et al. 1992; Liberman et al. 1984). The test was first introduced for the diagnosis of middle-ear pathology. Years later, it was shown that, in the absence of conductive hearing loss, the MEMR could also be useful to assess 1) “retrocochlear pathology” such as vestibular schwannoma (Stach 1987) or auditory neuropathy (Berlin et al. 2005) and 2) “third-window” lesions of the inner ear such as semicircular canal dehiscence and enlarged vestibular aqueduct (Merchant et al. 2008). Historically, the diagnostic power of the MEMR has seemed limited, given the wide range of test results in “normal hearing” people (Margolis 1993; McGregor et al., 2018). However, some of this “normal” variation could be due to underlying cochlear synaptopathy, as suggested by the correlation, among normal-hearing subjects, between noise-induced tinnitus and MEMR strength (Wojtczak et al. 2017). On the other hand, other recent studies have failed to find a relationship between classic measures of the acoustic reflex and tinnitus or speech-in-noise performance (Guest et al., 2018).

The present study assesses the utility of the MEMR for detecting cochlear synaptopathy in humans by comparing MEMR and electrocochleographic measures with respect to their correlations with word-recognition scores. We recruited normal-hearing subjects with a wide range of performance on challenging word-recognition tasks and found significant correlations between word scores and MEMR thresholds; however, the MEMR tests did not outperform the SP/AP ratio extracted from auditory evoked potentials in predicting word scores. The results are consistent with a contribution of cochlear synaptopathy to the degradation of word recognition in challenging listening environments.

Materials and Methods

Subject Pool, Cognitive Assessment and Inclusion Criteria

A total of 165 subjects in good health, between the ages of 18 and 63, with no history of ear or hearing problems, no history of neurologic disorders and unremarkable otoscopic examinations were recruited. All had normal audiometric thresholds from 0.25 – 8 kHz in both ears (≤ 25 dB HL), no interaural asymmetry and normal middle-ear function. Thresholds were considered asymmetrical if there was an interaural difference of ≥ 10 dB at two test frequencies, or ≥ 15 dB at one test frequency. Tympanometry was performed using the Titan Suite from Interacoustics, with a 226-Hz probe tone and an ear-canal pressure sweep from −300 daPa to +200 daPa in each ear, to ensure that ear-canal volume, tympanic-membrane mobility and middle-ear pressure were within normal limits, as defined by Margolis and Heller (1987). There were no other initial inclusion criteria beyond the ability to give voluntary informed written consent prior to participation. This study was reviewed and approved by the Institutional Review Board of Massachusetts Eye & Ear.
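
For concreteness, the asymmetry rule above can be expressed as a short screening function. This is only an illustrative Python sketch; the function name and data layout are hypothetical, not part of the study's software.

```python
# Illustrative screening rule: thresholds are "asymmetrical" if the interaural
# difference is >= 10 dB at two or more test frequencies, or >= 15 dB at any one.
def is_asymmetric(left_db_hl, right_db_hl):
    """left_db_hl / right_db_hl: air-conduction thresholds (dB HL) at matched frequencies."""
    diffs = [abs(l - r) for l, r in zip(left_db_hl, right_db_hl)]
    return sum(d >= 10 for d in diffs) >= 2 or any(d >= 15 for d in diffs)

# A single 15-dB interaural gap is enough to fail the symmetry criterion.
print(is_asymmetric([10, 5, 10, 15, 20, 10], [10, 5, 10, 0, 20, 10]))  # True
```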

The Montreal Cognitive Assessment (MoCA) was administered to all subjects to screen for mild cognitive dysfunction related to deficits in attention and concentration, executive functions, memory, language, visuo-constructional skills, conceptual thinking, calculations, and orientation. The test was administered following the MoCA Version 8.1 Administration and Scoring Instructions (www.mocatest.org).

Audiometric Thresholds and DPOAEs

Audiometric thresholds were obtained using an Interacoustics Equinox 4.0 audiometer with the High Hz option. Pure-tone air-conduction (AC) thresholds were measured with DD45 headphones at the standard audiometric frequencies from 0.25 to 8 kHz, as well as at the inter-octave frequencies of 3 and 6 kHz. To minimize changes in sound level due to standing waves and to improve intra-subject reliability of threshold estimates above 8 kHz, we measured AC thresholds at extended high frequencies (EHFs: 9, 10, 11.2, 12.5, 14 and 16 kHz) using warble tones delivered via a circumaural HDA200 high-frequency headset.

Distortion product otoacoustic emissions (DPOAEs) provide an objective, rapid and independent measure of cochlear-amplifier function. To complement behavioral audiometry, we measured DPOAEs as amplitude-vs.-level functions using two primary tones f1 and f2 (f2/f1 = 1.22), with f2 = 0.5, 1, 2, 4 or 6 kHz in one ear (randomly selected), using the Interacoustics Titan v.3.4.0. For DPOAEs generated at f2 = 8, 11.2, 12.5, 14 and 16 kHz, stimulus generation and data acquisition were handled by a custom rig based on 24-bit digital input-output boards from National Instruments in a PXI chassis, under custom LabVIEW software control. Stimulus and response waveforms, to and from the input-output boards, were transduced via the microphone and dual sound sources of an ER-10X system (Etymotic Research). These DPOAEs were measured in both ears as amplitude-vs.-level functions in 5 dB steps from 5 dB below threshold to 80 dB SPL. The DPOAE at 2f1–f2 was extracted from the ear-canal sound pressure after both time-domain and spectral averaging. Threshold was defined as the lowest level required to elicit a DPOAE > 5 dB above the noise floor.
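
As a sketch of the threshold rule just described (the lowest primary level at which the 2f1–f2 DPOAE exceeds the noise floor by more than 5 dB), the following Python fragment operates on an already-measured amplitude-vs.-level function; the function name and toy data are illustrative only, not the authors' acquisition software.

```python
import numpy as np

def dpoae_threshold(levels_db_spl, dpoae_db_spl, noise_floor_db_spl):
    """Lowest stimulus level at which the DPOAE exceeds the noise floor by > 5 dB.
    Levels are assumed sorted in ascending order; returns None if no level qualifies."""
    for level, dp, nf in zip(levels_db_spl, dpoae_db_spl, noise_floor_db_spl):
        if dp - nf > 5:
            return level
    return None

levels = np.arange(20, 85, 5)                 # 5-dB steps, toy range
dpoae = np.linspace(-10, 25, len(levels))     # toy DPOAE growth function (dB SPL)
noise = np.full(len(levels), -5.0)            # toy flat noise floor (dB SPL)
print(dpoae_threshold(levels, dpoae, noise))  # -> 40
```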

Middle-Ear Muscle Reflex

Three assays of the MEMR were performed in both ears on each subject. Two are based on commercially available packages designed for clinical use, and a third was a custom assay based on animal experiments showing strong correlations between cochlear synaptopathy and MEMR strength (Valero et al. 2016). All these methods rely on the basic principle that middle-ear muscle contractions evoked by an elicitor stimulus, presented either ipsilaterally or contralaterally, stiffen the ossicular chain and thereby change the ratio of absorbed and reflected sound measured in the ear canal in response to a probe stimulus.

The first clinical test, the acoustic reflex threshold (ART), used the Titan Suite from Interacoustics. It measures the change in acoustic admittance of an ipsilateral 226-Hz probe tone evoked by an ipsilateral elicitor at 0.5, 1, 2 or 4 kHz, presented at increasing sound levels from 65 to 95 dB HL in 1 dB steps. Threshold at each elicitor frequency was defined as the lowest level producing a change in admittance > 0.03 mmho. The final ART was defined as the average of the thresholds obtained at the four elicitor frequencies.
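
The ART computation reduces to a simple rule: for each elicitor frequency, find the lowest elicitor level whose admittance change exceeds 0.03 mmho, then average across the four frequencies. A minimal Python sketch with illustrative values (not clinical data):

```python
import numpy as np

def reflex_threshold(levels_db_hl, admittance_change_mmho, criterion=0.03):
    """Lowest elicitor level producing an admittance change > criterion (mmho)."""
    for level, delta in zip(levels_db_hl, admittance_change_mmho):
        if delta > criterion:
            return level
    return np.nan  # no reflex detected up to the maximum level tested

# Final ART = mean of the per-frequency thresholds at 0.5, 1, 2 and 4 kHz.
per_frequency_thresholds = [80, 85, 85, 90]   # illustrative values, dB HL
print(np.mean(per_frequency_thresholds))      # 85.0
```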

The second clinical test, wideband tympanometry (WBT), was also performed using the Titan Suite from Interacoustics. The program measures changes in the absorbance of an ipsilateral noise probe (~65 dB nHL) evoked by a contralateral noise elicitor. Absorbance was measured (at atmospheric pressure) from 0.226 to 8 kHz, with and without a white-noise elicitor at 95 dB SPL. Each condition was repeated twice in alternation, and the absorbance spectra were averaged for each condition. Reflex strength was obtained by averaging the spectral differences from 500 to 2,000 Hz, where the effects were largest.
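
The WBT reflex-strength metric described above can be sketched as an average of the elicitor-induced change in absorbance over the 500 – 2,000 Hz band. The sign convention and the toy spectra below are assumptions for illustration, not the clinical software's output.

```python
import numpy as np

def wbt_reflex_strength(freqs_hz, absorb_with_elicitor, absorb_without_elicitor,
                        f_lo=500.0, f_hi=2000.0):
    """Average absorbance change in the 500-2000 Hz band, where reflex effects are largest."""
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    diff = absorb_without_elicitor - absorb_with_elicitor  # assumed: contraction reduces low-frequency absorbance
    return diff[band].mean()

freqs = np.linspace(226, 8000, 200)
quiet = 0.5 + 0.3 * np.sin(freqs / 2000.0)                        # toy absorbance spectrum
elicited = quiet - 0.05 * np.exp(-((freqs - 1000) / 800.0) ** 2)  # toy reflex dip near 1 kHz
print(wbt_reflex_strength(freqs, elicited, quiet))
```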

The third assay was a custom method (MEMC) similar to that of Keefe and colleagues (Keefe et al. 2010), recently used to study cochlear synaptopathy in mice (Valero et al. 2015; Valero et al. 2018). Stimulus generation and data acquisition were controlled by the same custom rig used to measure high-frequency DPOAEs, with the ER-10X system as the transducer to deliver and measure sound. As illustrated in Figure 1, this approach measures the change in ear-canal sound pressure to a click probe evoked by an ipsilateral noise elicitor. Specifically, we used a pair of 100-μsec clicks at 95 dB pSPL separated by a 500-msec elicitor (a noise burst with a 2.5-msec ramp) presented 30 msec after the first click and preceding the second by 5 msec. This click-noise-click complex was repeated every 1535 msec, leaving 1 sec of silence between noise bursts to allow relaxation of the MEMs. Four complexes were presented at each elicitor level, and elicitor level was raised in 5 dB steps from 40 to 95 dB SPL. To eliminate click-evoked otoacoustic emissions, the waveforms were truncated at 2 msec after the peak of the click. For each ear, the whole process was repeated three times and averaged. For each average, the spectral difference (gain) between the two click waveforms was computed. Growth functions (gain vs. elicitor level) were then displayed for each 500-Hz window from 500 – 5,000 Hz. Threshold was assessed by visual inspection of these growth functions by two observers blinded to all other test results; it was defined as the lowest elicitor level, in any of the frequency windows, at which the gain emerged from the noise floor. The mean inter-observer difference was 0.71 ± 0.14 dB (SEM). To compute MEMR strength, the absolute values of the gain were summed across all frequencies from 500 to 5000 Hz, at an elicitor level of 95 dB SPL.
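
The core computation of the MEMC assay, the spectral difference ("gain") between the pre- and post-elicitor clicks and its summation from 500 to 5000 Hz, can be sketched as follows. This is a simplified stand-in (FFT-bin resolution rather than 500-Hz windows, an arbitrary sampling rate and toy waveforms), not the authors' acquisition code.

```python
import numpy as np

def memc_gain_db(pre_click, post_click, fs):
    """Spectral difference (dB) between post- and pre-elicitor ear-canal click responses."""
    n = len(pre_click)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    pre = np.abs(np.fft.rfft(pre_click)) + 1e-12   # avoid log(0)
    post = np.abs(np.fft.rfft(post_click)) + 1e-12
    return freqs, 20 * np.log10(post / pre)

def memc_strength(freqs, gain_db, f_lo=500.0, f_hi=5000.0):
    """Sum of |gain| across 500-5000 Hz (reflex strength at the 95-dB SPL elicitor level)."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.abs(gain_db[band]).sum()

fs = 44100
t = np.arange(int(0.002 * fs)) / fs                       # 2-ms truncated click responses
pre = np.exp(-t / 3e-4) * np.sin(2 * np.pi * 2000 * t)    # toy pre-elicitor click response
post = 0.9 * pre                                          # toy post-elicitor response (reflex attenuates it)
f, g = memc_gain_db(pre, post, fs)
print(memc_strength(f, g))
```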

Figure 1: Custom MEMR assay (MEMC) with a click-noise-click paradigm.


A: Schematic of the stimulus complex for measuring MEMR threshold and strength. B-C: Data from one subject. Each curve in B is the spectrum of the difference in sound-pressure waveforms between the pre- and post-elicitor clicks at one elicitor level, color-coded as shown. Gain vs. level functions (C) were derived by summing (at each elicitor level) the absolute spectral values (in dB) within a 500-Hz window, positioned where the signal-to-noise ratio was best. Threshold was defined by visual inspection of these growth functions by two observers blinded to the other test results.

Word Recognition

Of the many available clinical speech-in-noise tests, we used the NU-6 corpus (from Auditec, Inc.) and a modified version of the QuickSIN™ Speech-in-Noise test (from Etymotic Research, Inc.). We randomly selected one ear and presented 4 different NU-6 lists of 50 CNC phonemically balanced words at 55 dB HL (~75 dB SPL) under different conditions: 1) in the absence or presence of an ipsilateral speech-shaped noise masker (weighted random noise with constant amplitude from 125 to 1000 Hz, falling 12 dB/octave from 1000 to 6000 Hz) at 0 dB signal-to-noise ratio (SNR), or 2) sped up (“time compression”) by 45% or 65% with added reverberation (0.3 sec echo) (Noffsinger et al. 1994). Each participant was presented with the same lists in the same order. Our modified (m)QuickSIN test consisted of 4 lists of 6 sentences. The six sentences within each list were presented in four-talker babble at decreasing SNRs (10, 5, 3, 2, 1 and 0 dB). Each sentence contained 5 key words. The first list of six sentences was used as practice. The overall score was obtained by averaging the number of correctly repeated key words (up to 30 per list) across the three subsequent lists.
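
The mQuickSIN bookkeeping just described (six sentences per list at 10, 5, 3, 2, 1 and 0 dB SNR, five key words each, up to 30 correct per list, averaged over the three scored lists) amounts to a trivial calculation; a Python sketch with hypothetical scores:

```python
def mquicksin_score(correct_keywords_per_list):
    """Average number of correctly repeated key words (0-30) across the three
    scored lists; the first (practice) list is excluded before calling this."""
    assert len(correct_keywords_per_list) == 3
    return sum(correct_keywords_per_list) / 3.0

print(mquicksin_score([22, 19, 25]))  # 22.0
```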

Word scores from 26 participants were excluded, because they were not native speakers of English (n=12), were familiar with the word tests (n=2) and/or failed the MoCA with a score < 26 out of 30 (n=12).

Electrocochleography

Stimuli were generated by our custom rig, stimulus waveforms were transduced via ER-3A insert earphones, and data acquisition was handled by the Interacoustics Eclipse system and software. Subjects’ ear canals were prepped by scrubbing with a cotton swab coated in Nuprep®. Electrode gel was applied to the cleaned portion of the canal and over the gold foil of ER3–26A/B tiptrodes before insertion. A horizontal montage was used, with a ground electrode on the forehead at midline, one tiptrode as the inverting electrode and the other, in the opposite ear, as the non-inverting electrode. Low (< 5 kΩ) and balanced impedance readings were obtained, with inter-electrode impedances within 2 kΩ of each other. Acoustic stimuli were delivered via silicone tubing connected to the ER-3A earphones. Stimuli were 100-μs clicks delivered at 125 dB pSPL in alternating polarity at 9.1 Hz. Electrical responses were amplified 100,000×, and 2,000 sweeps were averaged with artifact rejection enabled in the software. Averaged traces acquired by the Eclipse software (passband 3.3 Hz to 5,000 Hz) were exported for further analysis, including (optional) digital filtering with a 10 Hz to 3,000 Hz passband. The summating potential (SP) and action potential (AP) peaks were defined by visual inspection by two observers blinded to all other test results. Inter-observer reliability was assessed: discrepancies, observed in ~10% of cases, were resolved while still blinded to the other test results. The SP peak was defined as the highest inflection point preceding the AP. The AP was defined in two ways: 1) as the difference between baseline and the maximum value from 1 – 2 msec post onset, or 2) as the difference between the SP peak and the maximum value from 1 – 2 msec post onset. SP and AP identities were confirmed by comparing waveforms obtained at repetition rates of 9.1 vs. 40.1 per second: a significant reduction of AP amplitude is seen at the higher presentation rate. The total noise dose for all electrocochleographic measurements was well within OSHA and NIOSH standards.
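
To make the two AP conventions concrete, the following Python sketch extracts SP and AP measures from an averaged click-evoked waveform. The peak-search windows and toy waveform are assumptions for illustration; in the study itself, peaks were identified by visual inspection, not by an automated search.

```python
import numpy as np

def sp_ap_measures(t_ms, wave_uv):
    """Return (SP, AP baseline-to-peak, AP SP-to-peak, SP/AP ratio) from an
    averaged click-evoked ear-canal waveform (time in ms, amplitude in uV)."""
    baseline = wave_uv[t_ms < 0.5].mean()                    # assumed pre-response baseline
    sp_peak = wave_uv[(t_ms >= 0.5) & (t_ms < 1.0)].max()    # assumed SP search window
    ap_peak = wave_uv[(t_ms >= 1.0) & (t_ms <= 2.0)].max()   # 1-2 ms post-onset AP window
    sp = sp_peak - baseline
    ap_from_baseline = ap_peak - baseline
    ap_from_sp = ap_peak - sp_peak
    return sp, ap_from_baseline, ap_from_sp, sp / ap_from_baseline

# Toy waveform: a small SP shoulder near 0.8 ms riding into an AP peak near 1.5 ms.
t = np.linspace(0, 5, 500)
w = 0.3 * np.exp(-((t - 0.8) / 0.3) ** 2) + 1.0 * np.exp(-((t - 1.5) / 0.25) ** 2)
print(sp_ap_measures(t, w))
```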

Statistics

Four speech-recognition measures were considered as outcome variables: the word-recognition score 1) in noise, 2) with 45% time compression plus reverberation, 3) with 65% time compression plus reverberation, and 4) the number of correct key words on the mQuickSIN test. These outcome measures were not ear-specific, so there is only one measure per subject. The following 14 measures were considered as predictors: 1) mean AC thresholds at standard frequencies, 2) mean AC thresholds at EHFs, 3) mean DPOAE thresholds at standard frequencies, 4) mean DPOAE thresholds at EHFs, 5) MEMC threshold, 6) MEMC strength, 7) ART averaged across all four elicitor frequencies, 8) WBT strength summed from 500 – 2000 Hz, 9) WBT strength summed from 226 – 8000 Hz, 10) SP amplitude, 11) AP amplitude defined as the difference between baseline and AP peak, 12) AP amplitude defined as the difference between SP peak and AP peak, 13) SP/AP ratio with AP defined as the difference between baseline and AP peak, and 14) SP/AP ratio with AP defined as the difference between SP peak and AP peak. All variables were measured in both ears except DPOAEs at standard frequencies. For all remaining predictors, we transformed the two measurements (one for each ear) into the mean of the two ears and the difference between the right ear and that mean, thereby including information from both ears in the models while avoiding collinearity.
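
A sketch of the two-ear transform described above (names are illustrative): each bilateral predictor is re-expressed as the across-ear mean plus the right-ear deviation from that mean, so that both ears contribute without introducing collinear left/right columns.

```python
import numpy as np

def two_ear_transform(left, right):
    """Return (mean of the two ears, right-ear deviation from that mean)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    mean = (left + right) / 2.0
    asymmetry = right - mean
    return mean, asymmetry

# Example with hypothetical MEMC thresholds (dB SPL) from three subjects.
m, a = two_ear_transform([70, 65, 80], [75, 65, 90])
print(m, a)  # [72.5 65.  85. ] [2.5 0.  5. ]
```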

Pearson’s correlation coefficients were used to assess the strength of the pairwise correlations between each predictor and each outcome measure. A Fisher r-to-z transformation was used to assess the significance of the differences among correlation coefficients. To investigate combinations of variables in a multivariable regression, stepwise selection methods were then applied using all predictors to determine the best model separately for each outcome variable. The criterion for inclusion or exclusion from the model was a significance level of p = 0.10. Individual predictors were added to the model, in order of decreasing pairwise correlation, until the adjusted R-squared stopped increasing. As a final step, the confounding variables of age (continuous) and gender (binary) were added to the models. Data were analyzed using SAS (SAS Institute, version 9.4).
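
For reference, the Fisher r-to-z comparison used to contrast correlation coefficients can be sketched as below. This is the independent-samples form of the test; strictly, comparing correlations measured in the same subjects calls for a dependent-correlation variant, and the r and n values shown are purely illustrative.

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed p value for the difference between two independent Pearson r's,
    via Fisher's r-to-z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))

print(compare_correlations(-0.44, 100, -0.28, 100))  # ~0.20
```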

Results

Audiometric thresholds

Threshold audiometry was performed on 165 subjects (93 females, 72 males), aged 18 to 63 years. All had normal thresholds (≤ 25 dB HL) from 0.25 – 8 kHz in both ears (Fig. 2A); however, significant threshold variability was observed at extended high frequencies (EHFs; 9 – 16 kHz). Not surprisingly, mean EHF threshold was significantly correlated with age (Fig. 2B), with no obvious effect of gender. As shown in Figure 2A, 137 of the 165 subjects had mean EHF thresholds ≤ 20 dB HL, while the remaining 28 had poorer EHF thresholds. To further probe cochlear function, we measured distortion product otoacoustic emissions (DPOAEs). Mean DPOAE thresholds were highly correlated with mean air-conduction (AC) thresholds at both standard audiometric frequencies (Fig. 2C) and EHFs (Fig. 2D).

Figure 2: Threshold sensitivity was highly variable at extended high frequencies.


A: Air-conduction (AC) thresholds for all subjects, color coded according to mean extended high-frequency (EHF) thresholds (9 – 16 kHz), as shown in the key, along with the number of subjects falling within the mean EHF values cited. B: Mean AC thresholds at EHF were correlated with age. C-D: AC thresholds were correlated with corresponding DPOAE thresholds (f2 matched with audiometric frequency), both for standard audiometric frequencies (C) and EHFs (D). Correlation coefficients are shown for each panel. ***p<0.001.

Word-recognition performance was assessed using the NU-6 corpus, because these lists of phonemically balanced words offer no contextual clues. When a list of 50 NU-6 words was presented monaurally at 55 dB HL in quiet, scores were excellent (≥ 96%) in all subjects (Fig. 3A). However, when word lists were presented in speech-shaped noise at 0 dB SNR, or when words were time compressed (45% or 65%) with added reverberation, a large range of performance was observed (Fig. 3B–D). For instance, for words in noise, scores were as low as 18% and as high as 62% correct (Fig. 3B). A similar range of scores was observed with a modified version of the QuickSIN, a test based on phonemically balanced sentences in increasingly high-level background babble (Fig. 3E), with scores ranging from 10 to 26 correctly repeated key words out of 30.

Figure 3: Distribution of word recognition scores.


Histograms show the score distribution for each word recognition test (A-D) or for the mQuickSIN test (E).

Given the wide range of EHF thresholds, as measured by either air conduction or DPOAEs (Fig. 2A), it was important to examine the effect of thresholds on word scores. As summarized in Figure 4A, there were statistically significant correlations (poorer thresholds, worse scores) between standard-frequency averages (AC only, not DPOAEs) and scores on the 45% time-compressed words and the mQuickSIN. There were stronger correlations between EHF averages (either AC or DPOAE) and both time-compression/reverberation tests and the mQuickSIN test. However, none of these correlations remained statistically significant after adjustment for age, as illustrated for the 65% time-compression test in Figures 4B,C. We also considered the relationship between word scores and three different measures of the pure-tone average (PTA) (0.5-1-2 kHz, 1-2-4 kHz and 1-2-3-4 kHz), as significant correlations with PTA have been observed for the SPRINT, the WIN, and the NU-6 in quiet (Wilson and Cates, 2008). None of these relationships was statistically significant after adjustment for age.

Figure 4: Thresholds were uncorrelated with word recognition scores, after adjustment for age and gender.


A: Table shows correlation coefficients (r) and p values (before (p), and after (pA), adjusting for age and gender) obtained between word scores and thresholds, at standard or extended high frequencies, and as measured by AC or DPOAEs, as indicated. Shaded boxes indicate relations that were significant (p < 0.05) before adjusting for age and gender. None was significant after adjusting. B-C: Word scores vs. mean AC thresholds at EHFs for the 65% time-compression test, before (B) and after adjusting for age (C). Correlation coefficients are shown for each panel.

Middle-Ear Muscle Reflex

MEMR strength and/or threshold were assessed bilaterally in each subject using three methods. First, we used a custom wideband method (MEMC), similar to that of Keefe et al. (2010), in which a pair of click probes flanks an ipsilateral noise elicitor (Fig. 1A). Since the offset time constant of MEM effects is ~100 msec (Pang et al. 1997), the ear-canal response to the second click is modified by lingering effects of MEM contraction on middle-ear reflectance. In addition, we used two clinical tests: the acoustic reflex threshold (ART), in which the acoustic admittance of a low-frequency probe is compared with and without an ipsilateral tonal elicitor, and wideband tympanometry (WBT), in which changes in absorbance elicited by a contralateral white noise are measured with a broadband probe.

MEMC effects differed greatly across our normal-hearing subjects, with thresholds ranging from 45 to 95 dB SPL (Fig. 5A). Subjects with the lowest MEMC thresholds tended to have the highest reflex strength, and vice versa (Fig. 5B), and there was a high degree of correlation between MEMC thresholds in the two ears (Fig. S1). The inter-subject variability in MEMC thresholds was not associated with gender or with thresholds, either at standard frequencies or EHFs, as measured either behaviorally or by DPOAEs (Fig. S2). Inter-subject variation in the ART was likewise uncorrelated with thresholds at standard frequencies or EHFs, as measured behaviorally or by DPOAEs (data not shown).

Figure 5: MEMC thresholds and strengths were highly variable.


A: Box-and-whisker plot of MEMC thresholds from each ear for all subjects, coded as shown in B, defining lower (< 68 dB SPL), middle and upper (> 77 dB SPL) quartile-based groups of MEMC reflex thresholds. B: Box-and-whisker plots of MEMC strength for the three MEMC threshold groups defined in A. **p<0.01, ***p<0.001.

Comparing MEMC thresholds to word scores (Fig. 6) revealed significant negative correlations, even after adjusting for age and gender, for words in noise (Fig. 6A), words with 45% time compression (Fig. 6B) and words with 65% time compression (Fig. 6C). In contrast, the correlation with the mQuickSIN was not significant (Fig. 6D). Similarly, the ART was negatively correlated with performance on all three word tasks after adjusting for age and gender, but not with the mQuickSIN (Fig. 6). Estimated correlations were not increased by separately considering any of the four elicitor frequencies that are combined into the ART measure (Supplementary Table 1).

Figure 6: MEM reflex thresholds were correlated with word recognition scores.


MEM reflex thresholds, as assessed with MEMC or ART for each subject, are plotted against % correct scores on the different word tests or the mQuickSIN, as indicated. Arrows indicate that no response was detected at any elicitor level. Regression lines are plotted only for correlations that remained statistically significant after adjusting the data shown in this figure for age and gender: *p<0.05, ***p<0.001.

Interestingly, the correlations between word scores and MEMR function were lower when measured as strength rather than threshold, either by the custom assay or the WBT test (data not shown). Only for words with 45% time compression were the correlations with MEMR strength statistically significant after adjusting for age and gender (r = 0.27, p<0.05).

Electrocochleography

To further probe the peripheral contributions to deficits in word recognition, we compared word scores with measures of cochlear function seen via electrocochleography. We measured click-evoked potentials from ear-canal electrodes and extracted the amplitudes of both the “summating potential” (SP) and the “action potential” (AP), as illustrated in Figure 7A. In animal studies (Sergeyenko et al., 2013; Shaheen et al., 2015), the suprathreshold AP is reduced by cochlear synaptopathy, while SP is not. These observations are consistent with the classic view that AP represents the summed activity of auditory nerve fibers (some of which are silenced by cochlear synaptopathy), whereas the SP is dominated by pre-synaptic potentials from hair cells (which remain intact) (Zheng et al., 1997; Durrant et al., 1998).

Figure 7: Some electrocochleographic measures were highly correlated with word scores.


A: The mean waveform (±S.E.M.) of the click-evoked responses from all subjects is used to illustrate the two methods for measuring AP amplitude. B-D: SP amplitude (B) and the SP/AP ratio (D) were correlated with scores on the words presented with time compression (65%) and reverberation, whereas AP amplitude, when measured baseline to peak, was not (C). E: AP amplitudes were correlated with word scores when AP was measured shoulder to peak, as illustrated in A. Regression lines are shown for correlations that remained significant after adjusting the data shown in this figure for age and gender. **p<0.01; ***p<0.001.

As noted with the SP/AP ratio in our prior study of word recognition in normal-hearing subjects (Liberman et al., 2016), there was a significant negative correlation between SP amplitude and word scores on all four tests, after adjusting for age and gender. Data for words with 65% time compression are shown in Figure 7B; for the other word-recognition tests, the correlations were as follows: r = −0.34, p < 0.001 for noise at 0 dB SNR; r = −0.34, p < 0.001 for 45% time compression; and r = −0.28, p = 0.002 for mQuickSIN. As in our prior study, the correlations were even higher for the SP/AP ratio. Data for 65% time compression are shown in Figure 7D; for the other outcome measures, the correlations were: r = −0.35, p < 0.001 for noise at 0 dB SNR; r = −0.44, p < 0.001 for 45% time compression; and r = −0.39, p < 0.001 for mQuickSIN.

The strength, and statistical significance, of the correlations between word scores and AP amplitudes depended on how AP was measured. If measured from baseline to peak, there were no significant correlations (Fig. 7C). However, if measured from the SP shoulder to peak, the correlations with word scores were significant (Fig. 7E). Data for 65% time compression are shown in Figure 7E; correlations for the other outcomes were r = 0.14, p = 0.149 for noise at 0 dB SNR; r = 0.30, p = 0.006 for 45% time compression; and r = 0.32, p = 0.003 for mQuickSIN. Measuring from shoulder to peak is more appropriate here, because 1) the use of ear-canal electrodes yields a relatively large SP, and 2) the lower corner of our response filter (10 Hz) leaves the SP “pedestal” largely unattenuated within the 1 – 2 msec latency window of the AP. Given these considerations, expressing the AP amplitude relative to the SP shoulder should better approximate the AP amplitude with minimal contamination from the SP.

Statistical Comparisons and Multivariable Models

A Pearson correlation-coefficient matrix (Fig. 8) summarizes the strength and significance of the pairwise correlations between word scores and the functional assays used in this study. The lack of statistically significant correlations with DPOAE and behavioral thresholds, after adjustment for age and gender, suggests that differences in cochlear-amplifier function do not explain the differences in performance. The correlations between MEM reflex measures and word-recognition performance suggest an anti-masking role for the MEMs and/or a contribution of auditory-nerve loss to the performance deficit. The further association of word recognition with both SP and AP amplitudes (in opposite directions) supports the existence of a peripheral deficit in participants with poor performance and is consistent with a cochlear neuropathy, although a pre-synaptic contribution cannot be excluded based on the overall pattern of results.

Figure 8: Visual representation of the pairwise correlations between 12 assays and four word-recognition scores.


In this matrix of Pearson’s bivariate correlations, the diameter of each disk is proportional to unadjusted r values. Gray disks indicate lack of statistical significance after adjustment for age and gender. Blue and red disks indicate statistical significance after adjusting for age and gender (*p<0.05, **p<0.01, ***p<0.001). The color indicates the slope of the regression (blue, r > 0; red, r < 0).

Multivariable regression (Table 1) showed that, once the SP/AP ratio (the single strongest predictor) is included in the model, the addition of MEMR metrics provides minimal additional predictive power: in only one case (45% time compression with reverberation) do MEMR thresholds (ART) improve the prediction. Thus, the MEMR and SP/AP effects are likely to share the same underlying mechanism. This redundancy suggests that poor performance in word recognition is likely to arise from a degradation of stimulus coding due to cochlear dysfunction that also attenuates the MEMR.

Table 1:

Stepwise multivariate regression identifies the best model for each word test. AP values in these SP/AP ratios are all measured baseline to peak. P values are for the individual pairwise correlation between the predictor and the relevant outcome measure.

Noise 0 dB SNR (n=113)
  Predictors Only                     Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −19.77       <0.0001     0.18
    AC stand freq (L-R asymmetry)       2.45        0.0020
  Confounders Only                    Par. Est.    P value     Adj. r²
    Gender                              0.40        0.8233    −0.018
    Age                                −0.01        0.9098
  Predictors plus Confounders         Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −20.53       <0.0001     0.171
    AC stand freq (L-R asymmetry)       2.50        0.0019
    Gender                              1.07        0.5129
    Age                                 0.06        0.4626

45% Time Compression (n=97)
  Predictors Only                     Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −15.10       <0.0001     0.301
    ART (L-R mean)                     −0.43        0.0004
  Confounders Only                    Par. Est.    P value     Adj. r²
    Gender                              2.27        0.0891     0.025
    Age                                −0.08        0.2921
  Predictors plus Confounders         Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −15.00       <0.0001     0.30
    ART (L-R mean)                     −0.41        0.0009
    Gender                              1.57        0.1689
    Age                                −0.00        0.9509

65% Time Compression (n=97)
  Predictors Only                     Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −36.50       <0.0001     0.361
    AC high freq (L-R mean)            −0.19        0.0021
    AP (SP to Peak) (L-R mean)        −18.33        0.0317
  Confounders Only                    Par. Est.    P value     Adj. r²
    Gender                              0.24        0.8834     0.104
    Age                                −0.32        0.0002
  Predictors plus Confounders         Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                  −37.45       <0.0001     0.367
    AC high freq (L-R mean)            −0.07        0.4144
    AP (SP to Peak) (L-R mean)        −20.39        0.0197
    Gender                              0.91        0.5250
    Age                                −0.18        0.1005

mQuickSIN (n=111)
  Predictors Only                     Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                   −6.72       <0.0001     0.174
    DPOAE EHF (L-R asymmetry)          −0.11        0.0340
  Confounders Only                    Par. Est.    P value     Adj. r²
    Gender                              0.89        0.1047     0.019
    Age                                −0.03        0.3107
  Predictors plus Confounders         Par. Est.    P value     Adj. r²
    SP/AP (L-R mean)                   −6.63       <0.0001     0.185
    DPOAE EHF (L-R asymmetry)          −0.11        0.0326
    Gender                              0.90        0.0694
    Age                                −0.00        0.8864

Discussion

Acoustic injury, cochlear synaptopathy and hearing in noise

Two people with the same hearing sensitivity can have very different speech discrimination scores, particularly in noisy environments (Vermiglio et al. 2012). A contribution of cochlear neural loss to this difference has always been a logical possibility; however, the present results, showing a significant correlation between the SP/AP ratio and word scores in a large normal-hearing cohort, provide strong evidence for a peripheral contribution to these differences. Recent animal work has suggested that de-afferentation of IHCs may be the rule rather than the exception in acquired sensorineural hearing loss, and that hair cell de-afferentation occurs well before threshold elevation in the noise-exposed or aging ear (Kujawa and Liberman 2015; Liberman 2017). Recent human studies have corroborated the finding that the loss of auditory-nerve connections to IHCs greatly outstrips the loss of IHCs themselves in the aging ear (Viana et al. 2015; Wu et al., 2018), and another human study suggests an association between poor word scores in quiet and the loss of auditory-nerve peripheral axons in cases of presbycusis with high-tone hearing loss (Felder and Schrott-Fischer, 1995).

In animal studies of acoustic overexposure and aging, cochlear synaptopathy can be detected using the supra-threshold amplitude of ABR wave I (Kujawa and Liberman 2009, 2015; Shaheen et al. 2015). As long as cochlear thresholds remain normal, the fractional reduction in ABR responses is correlated with the fractional reduction in synaptic counts (Sergeyenko et al. 2013). Attempts to translate these animal results to human subjects with normal hearing sensitivity have produced mixed results (Bramhall et al., 2019). While some studies failed to observe an association between ABR wave I amplitude and noise exposure (Fulbright et al. 2017; Grinn et al. 2017; Guest et al. 2017, 2018; Prendergast et al. 2017; Spankovich et al. 2017), others have found signs of neural damage in aging populations (Johannesen et al., 2019; Grose et al., 2019) or in cohorts likely to have suffered occupational or recreational overexposure (Bramhall et al. 2017, 2018; Grose et al. 2017; Liberman et al. 2016; Ridley et al. 2018; Skoe et al. 2018; Valderrama et al. 2018). Possible reasons for the lack of agreement include 1) difficulties in accurately estimating cumulative noise exposure, 2) significant inter-subject differences in noise vulnerability, 3) lower noise vulnerability in humans compared to other mammals (Dobie et al. 2018), and 4) large variability in human ABR amplitudes, due to differences in head size, electrode impedance, etc. (Nikiforidis et al. 1993).

In an attempt to reduce the inter-subject variability in ABR amplitudes, we tried normalizing the neural peak (wave I or AP) to the SP peak (Liberman et al. 2016), which is thought to be dominated by contributions from the IHCs (Zheng et al. 1997; Durrant et al., 1998). This approach was inspired by animal work on aging and noise exposure, which showed that SP remained stable as wave I amplitude was reduced by synaptopathy (Sergeyenko et al. 2013), and by human electrocochleography showing that a robust SP remains, despite an attenuated AP, in people with genetic deafness arising from IHC synaptic dysfunction (Santarelli et al. 2009).

In a prior study, we noted that SP was enhanced, as AP was reduced, in those subjects assumed to have the most acoustic overexposure (Liberman et al. 2016). These results echo a study of the acute effects of recreational music exposure traumatic enough to cause a 10-dB temporary threshold shift (Kim et al. 2005): click-evoked electrocochleography showed post-exposure enhancement of SP coupled with attenuation of AP. In our prior study, the high-risk group (with elevated SP/AP ratios) also performed more poorly on word-recognition tests (the same battery as used here). Correspondingly, in the present study, elevated SP and SP/AP ratios were the predictors most robustly correlated with word scores (Fig. 8), and the SP/AP ratio dominated the multivariate models derived for each of the word tests (Table 1).

These correlations from electrocochleography suggest a cochlear contribution to the differences in word scores among normal-hearing listeners. But how strong is the link to synaptopathy? The lack of correlation between word scores and thresholds, either behavioral or OAE-based, at either standard frequencies or EHFs (Fig. 8), suggests that the differences cannot be ascribed to cochlear-amplifier function, including the cochlear battery, as powered by the stria vascularis, which is required for normal DPOAEs (Mills, 2006). By this logic, the only viable candidates are the IHC (mechano-electric transduction or synaptic transmission) and/or the auditory nerve. Thus, changes in SP are not inconsistent with cochlear synaptopathy. However, at present, any explanation of SP enhancement would be highly speculative, e.g. that the SP comprises a complex sum of pre-synaptic and post-synaptic potentials with different latencies and/or polarities (Pappa et al., 2019), such that reduction of a post-synaptic component (due to synaptopathy) could lead to an enhancement of the measured SP. A reduction in AP is pathognomonic for synaptopathy, at least when threshold sensitivity is still normal, but the present results suggest that an SP enhancement can mask an AP reduction, depending on the response-filtering protocols and response-measurement algorithms. SP and AP responses from humans with a genetic mutation compromising IHC synaptic transmission (Santarelli et al. 2009), and from animals in which the AP is acutely silenced by round-window application of a neural blocker (Yuan et al. 2014), both strongly suggest that the AP rides on top of the SP when the filter settings do not eliminate the steady-state SP. Such an effect was likely pronounced in this study, because SP is larger with an ear-canal electrode than with an earlobe electrode, for example. Differences in the conventions for measuring AP (or wave I), and the possible masking of an AP/wave I reduction by a simultaneous SP enhancement, could contribute to negative results in other studies assessing the correlations between wave I amplitude and word-recognition scores.

MEM reflexes and hearing in noise

Here, we noted that MEM reflex thresholds were correlated with word scores in difficult listening environments, and data from noise-exposed animals have suggested that the MEM reflex is a sensitive measure of synaptopathy (Valero et al. 2016; Valero et al. 2018) that can be a better predictor of primary neural degeneration than wave I amplitude reduction. The underlying logic is that the high-threshold, low-SR auditory-nerve fibers, which are the first to degenerate in the noise-exposure model (Furman et al. 2013) and the aging model (Lang et al. 2010), may also be important afferent drivers of the MEM feedback loop (Valero et al. 2016; Valero et al. 2018).

There are several clinical assays of MEM reflexes available in commercial audiology equipment. Because these assays are all relatively quick to administer, we chose to use two established tests (ART and WBT) and to design a third (MEMC). There are potentially important differences among the tests with respect to the probe stimuli (tones vs. clicks vs. noise), the elicitor stimuli (ipsilateral vs. contralateral and tones vs. noise), and whether they measure only threshold or also suprathreshold reflex strength. Given data showing smaller MEM effects contralateral to the stimulated ear (Moller 1961), our custom assay uses an ipsilateral noise elicitor and an ipsilateral wideband probe, to maximize sensitivity. Indeed, although the correlation between the MEMR threshold assessment methods was significant (ART vs. MEMC, r = 0.35, p < 0.001), the MEMC test produced the lowest reflex thresholds, regardless of which elicitor frequencies were included in the ART (Fig. 5).

The stapedius muscle, when activated, stiffens the ossicular chain and reduces sound transmission to the inner ear, especially for frequencies < 1 kHz (Rabinowitz 1977). A protective role has been suggested (Brask 1979), and people lacking the reflex are more vulnerable to acoustic injury in the workplace (Borg et al. 1983). However, the most important function of the reflex may be in preventing the upward spread of masking (Liberman et al. 1998; Pang and Guinan 1997).

Patients without an acoustic reflex tend to show poorer speech recognition in quiet at high SPLs vs. moderate SPLs (Borg et al. 1973; French et al. 1947; Hannley et al. 1981; Jerger et al. 1971; McCandless et al. 1974; Wormald et al. 1995), a phenomenon known as “rollover”. For example, patients with vestibular schwannoma and absent MEM reflexes have greater rollover compared to subjects with normal ART (Dorman et al. 1987; Hannley and Jerger 1981). With respect to noise masking, subjects without a measurable acoustic reflex performed more poorly on a sentence-identification test compared to audiometrically matched controls with a functional reflex (Anastasio et al. 2005). Similarly, stapedectomy patients (who lack a functional MEMR) scored lower on speech tests in low-pass noise in the affected vs. the contralateral ear (Weisz et al. 2006), and these inter-aural differences were not seen when words were presented in quiet (Chadwell et al. 1979).

MEM reflexes and cochlear synaptopathy

Here, we show, in audiometrically normal adults, that word recognition scores in difficult listening situations are linked to MEMR thresholds (Fig. 6). This relationship could not be attributed to outer hair cell dysfunction, as no correlation was detected between word scores and thresholds, either in the standard audiometric range or at EHF, as measured behaviorally and with DPOAEs (Fig. 8), after adjusting for age and gender.

Poor listening performance in subjects with high MEMR thresholds could arise either 1) directly from loss of the anti-masking function of the MEMR, arising from dysfunction of the brainstem circuitry driving the reflex, or 2) from the degradation of stimulus coding due to cochlear dysfunction that also attenuates the reflex, or 3) from both mechanisms. In support of option 2), the electrocochleography results discussed above suggest that cochlear pathology may underlie both elevated MEMR thresholds and poor word scores. Further support derives from the observation that, once the SP/AP ratio is included in the multivariate predictor models, the addition of MEMR results only improved the predictive power in one of the four models (Table 1). The apparent redundancy of SP/AP and MEMR results in predicting word scores suggests a common mechanism, and the SP/AP changes suggest that the dysfunction is in the cochlea.

An additional hint that an attenuated MEM reflex may reflect underlying cochlear neural degeneration is provided by a recent study of normal-hearing subjects with and without tinnitus (Wojtczak et al. 2017). When suprathreshold growth of MEM reflex strength was assessed with a click-probe assay similar to that used here, a striking difference was observed: subjects with tinnitus had significantly weaker MEM reflexes than those without, consistent with emerging ideas that the loss of auditory-nerve fibers is a key elicitor of the amplification in “central gain” that leads to the establishment and persistence of the tinnitus (Hesse et al. 2016).

The anti-masking function of the MEMR arises because it acts as a high-pass filter with a cutoff at ~1000 Hz (Rabinowitz 1977). This filtering can minimize the upward spread of masking from low-frequency noise on high-frequency signals (Liberman and Guinan 1998), because, in the absence of the MEMR, low-frequency stimuli strongly suppress the responses of high-frequency auditory-nerve fibers to stimuli near their best frequencies (Delgutte 1990). Indeed, the noise masker used here was low-pass filtered with a cutoff at 1000 Hz and a 12 dB/octave slope from 1000 to 6000 Hz. Thus, it is exactly the type of masker for which the MEMR should be particularly effective. On the other hand, the known anti-masking effects of the MEMR cannot explain improved identification of time-compressed words with reverberation, and scores on both these tests also showed significant correlations with MEMR thresholds (Fig. 6B,C). Together, these observations also support the idea that cochlear pathology underlies the poor word scores, rather than the loss of the MEMR function per se.

The anti-masking effect would be expected to improve performance on the mQuickSIN, which comprises sentences (IEEE 1969) from a female talker in four-talker babble. A robust MEMR should decrease the masking of (high-frequency) consonants by (low-frequency) speech babble, whose spectrum is dominated by vowels and peaks at ~650 Hz (Killion et al. 2004). However, we did not observe a significant correlation between mQuickSIN scores and MEMR thresholds (Fig. 6D). This sentence-based test, with its increased importance of contextual clues, engages more high-level processing, such that differences in central auditory function may obscure the underlying differences in the quality of the signal coded by the auditory nerve.

Another recent study failed to detect a correlation between MEMR thresholds and a speech-in-noise measure among normal-hearing listeners (Guest et al. 2018). There are many possible reasons for a cross-study discrepancy such as this, including the fact that their speech-in-noise test was very different (binaural listening with spatially separated two-talker babble in a 16-alternative forced-choice test, scored as the signal-to-noise ratio yielding 50% performance), their subject pool was smaller (n=67), and their MEMR test was the ART. Here, the ART did not correlate with word scores as strongly as the custom assay did (Fig. 6A vs. 6E). Furthermore, using a bootstrapping approach, we randomly sampled 70-member subsets from our larger dataset and concluded that the correlations would have reached statistical significance in only one of the four speech tests (45% time compression with reverberation).
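
The subset analysis described above can be approximated with a simple resampling sketch: repeatedly draw 70-subject subsets (without replacement) and tally how often the Pearson correlation reaches significance. The exact procedure used in the study is not specified beyond this, so the Python code below is only an illustration on synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def subsample_significance(x, y, subset_size=70, n_draws=1000, alpha=0.05):
    """Fraction of random subject subsets in which corr(x, y) reaches p < alpha."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    hits = 0
    for _ in range(n_draws):
        idx = rng.choice(len(x), size=subset_size, replace=False)
        _, p = stats.pearsonr(x[idx], y[idx])
        hits += p < alpha
    return hits / n_draws

# Synthetic data with a weak underlying correlation (r ~ 0.2) across 100 "subjects".
x = rng.normal(size=100)
y = 0.2 * x + rng.normal(size=100)
print(subsample_significance(x, y))
```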

Future directions

Establishing diagnostic indicators for cochlear synaptopathy in humans is important if we are to understand the prevalence of primary neural degeneration in clinical and “not-yet-clinical” populations. Animal studies show that ear abuse at a young age exacerbates the progression of age-related hearing impairment (Fernandez et al. 2015). Thus, early diagnosis is critical in identifying those with “tender ears” who may already be incurring significant inner-ear damage, long before there is any elevation of standard audiometric thresholds. Furthermore, clarification of the true risks of noise exposure is important to public policy on noise abatement and to raising general consciousness about the dangers of ear abuse. Based on the present results, it seems unlikely that electrocochleography or a MEM reflex assay as presented here could diagnose the presence or absence of mild or moderate cochlear neural degeneration on a case-by-case basis. Although statistically significant, the correlation coefficients were weak, and the power to detect such modest correlations reflects the large number of participants. Interpretation of these tests is even more difficult when hair cell damage and threshold elevation are superimposed on the neural damage. However, despite these difficulties, these tests may be useful in longitudinal studies to track the accumulation of neural degeneration in individual subjects. Recent animal research suggests that reconnecting surviving spiral ganglion cells to hair cells is possible after noise damage, by local delivery of growth factors to the round window (Sly et al. 2016; Suzuki et al. 2016). Thus, assays similar to those used here could conceivably be applied in a future clinical trial to track the repair of synaptic connections in human subjects.

Supplementary Material

Supplemental Figure 1

Figure S1: Interaural symmetry in MEMR thresholds. ART (A) and MEMC thresholds (B) were measured from each ear of each subject. Statistically significant correlations were observed between ears for both assays. Arrows indicate that no ART was obtained at any elicitor level. *** p<0.001

Supplemental Figure 2

Figure S2: MEMC thresholds were uncorrelated with pure-tone threshold sensitivity. MEMR thresholds as assessed by our MEMC assay did not correlate with mean AC thresholds (A,B) nor with mean DPOAE thresholds (C,D) at standard audiometric frequencies (A,C) or EHFs (B,D). Correlation coefficients are given in each panel. None of the correlations was significant, either before adjusting for age and gender (as illustrated here) or after adjusting (not shown).

Supplemental Table 1

Table S1: Effect of elicitor frequency on correlations between ART and word scores. The ART, as conventionally defined, is averaged across ipsilateral elicitors at 0.5, 1, 2 and 4 kHz. Here we show that no subset of elicitor frequencies is a significantly better predictor of word scores than the ensemble average, by assessing correlation coefficients (columns 2, 3 and 4) for alternate combinations of elicitors (column 1). No pairwise comparison of these subset coefficients was statistically significant, using a Fisher r to z transformation.

Acknowledgements

We gratefully acknowledge Mrs. Inge Knudson for coordinating subject recruitment. We thank Drs. J.J. Guinan Jr., S.G. Kujawa and M.D. Valero for their comments on earlier versions of this manuscript. This work was supported by the NIH – NIDCD P50 DC015857 and the Lauer Tinnitus Research Center at the Massachusetts Eye & Ear. We also gratefully acknowledge a gift from Decibel Therapeutics for the purchase of the commercial audiometric equipment. MCL is a scientific founder of Decibel Therapeutics.

AMM and SAK performed the experiments and contributed equally to this work. KEH developed software for data acquisition and analysis. KB and VDG ran the statistical analyses. MCL and SFM designed the study and wrote the paper. SFM also performed experiments and data analysis.

Conflicts of Interest and Source of Funding:

This research was funded by the NIH – NIDCD P50 DC015857 (SFM, Project PI) and the Lauer Tinnitus Research Center at the Massachusetts Eye & Ear (SFM, PI). We also gratefully acknowledge a gift from Decibel Therapeutics for the purchase of the commercial audiometric equipment.

References

  1. Alvord LS (1983). Cochlear dysfunction in “normal-hearing” patients with history of noise exposure. Ear Hear, 4, 247–250. [DOI] [PubMed] [Google Scholar]
  2. Anastasio AR, Momensohn-Santos TM (2005). [Synthetic sentence identification (SSI) and contralateral acoustic stapedius reflex]. Pro Fono, 17, 355–366. [DOI] [PubMed] [Google Scholar]
  3. Arenas JP, Suter AH (2014). Comparison of occupational noise legislation in the Americas: an overview and analysis. Noise Health, 16, 306–319. [DOI] [PubMed] [Google Scholar]
  4. Berlin CI, Hood LJ, Morlet T, et al. (2005). Absent or elevated middle ear muscle reflexes in the presence of normal otoacoustic emissions: a universal finding in 136 cases of auditory neuropathy/dys-synchrony. J Am Acad Audiol, 16, 546–553. [DOI] [PubMed] [Google Scholar]
  5. Bharadwaj HM, Verhulst S, Shaheen L, et al. (2014). Cochlear neuropathy and the coding of supra-threshold sound. Front Syst Neurosci, 8, 26. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bharadwaj HM, Masud S, Mehraei G, et al. (2015). Individual differences reveal correlates of hidden hearing deficits. J Neurosci, 35, 2161–2172. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bohne BA, Harding GW (2000). Degeneration in the cochlea after noise damage: primary versus secondary events. Am J Otol, 21, 505–509. [PubMed] [Google Scholar]
  8. Borg E, Nilsson R, Engstrom B (1983). Effect of the acoustic reflex on inner ear damage induced by industrial noise. Acta Otolaryngol, 96, 361–369. [DOI] [PubMed] [Google Scholar]
  9. Borg E, Zakrisson JE (1973). Letter: Stapedius reflex and speech features. J Acoust Soc Am, 54, 525–527. [DOI] [PubMed] [Google Scholar]
  10. Bourien J, Tang Y, Batrel C, et al. (2014). Contribution of auditory nerve fibers to compound action potential of the auditory nerve. J Neurophysiol, 112, 1025–1039. [DOI] [PubMed] [Google Scholar]
  11. Bramhall NF, Konrad-Martin D, McMillan GP, et al. (2017). Auditory Brainstem Response Altered in Humans With Noise Exposure Despite Normal Outer Hair Cell Function. Ear Hear, 38, e1–e12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bramhall NF, Konrad-Martin D, McMillan GP, et al. (2018). Tinnitus and auditory perception after a history of noise exposure: relationship to auditory brainstem response measures. Ear Hear, 39, 881–894. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bramhall N, Beach EF, Epp B, et al. (2019) The search for noise-induced cochlear synaptopathy in humans: Mission impossible? Hear Res, 377, 88–103. [DOI] [PubMed] [Google Scholar]
  14. Brask T (1979). The noise protection effect of the stapedius reflex. Acta Otolaryngol Suppl, 360, 116–117. [DOI] [PubMed] [Google Scholar]
  15. Chadwell DL, Greenberg HJ (1979). Speech intelligibility in stapedectomized individuals. Am J Otol, 1, 103–108. [PubMed] [Google Scholar]
  16. Costalupes JA, Young ED, Gibson DJ (1984). Effects of continuous noise backgrounds on rate response of auditory nerve fibers in cat. J Neurophysiol, 51, 1326–1344. [DOI] [PubMed] [Google Scholar]
  17. Delgutte B (1990). Physiological mechanisms of psychophysical masking: observations from auditory-nerve fibers. J Acoust Soc Am, 87, 791–809. [DOI] [PubMed] [Google Scholar]
  18. Dobie RA, Clark WW, Kallogjeri D et al. (2018). Exchange Rate and Risk of Noise-Induced Hearing Loss in Construction Workers. Ann Work Expo Health, 62, 1176–1178. [DOI] [PubMed] [Google Scholar]
  19. Dorman MF, Lindholm JM, Hannley MT, et al. (1987). Vowel intelligibility in the absence of the acoustic reflex: performance-intensity characteristics. J Acoust Soc Am, 81, 562–564. [DOI] [PubMed] [Google Scholar]
  20. Dubno JR, Dirks DD, Morgan DE (1984). Effects of age and mild hearing loss on speech recognition in noise. J Acoust Soc Am, 76, 87–96. [DOI] [PubMed] [Google Scholar]
  21. Durrant JD, Wang J, Ding DL, et al. (1998). Are inner or outer hair cells the source of summating potentials recorded from the round window? J Acoust Soc Am, 104, 370–377. [DOI] [PubMed] [Google Scholar]
  22. Institute of Electrical and Electronics Engineers (1969). IEEE recommended practice for speech quality measurements. Boulder, CO: Global Engineering Documents. [Google Scholar]
  23. Felder E, Schrott-Fischer A (1995) Quantitative evaluation of myelinated nerve fibres and hair cells in cochleae of humans with age-related high-tone hearing loss. Hear Res, 91, 19–32. [DOI] [PubMed] [Google Scholar]
  24. Fernandez KA, Jeffers PW, Lall K, et al. (2015). Aging after noise exposure: acceleration of cochlear synaptopathy in “recovered” ears. J Neurosci, 35, 7509–7520. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. French N, Steinberg J (1947). Factors governing the intelligibility of speech sounds. J Acoust Soc Am, 19, 90–119. [Google Scholar]
  26. Fulbright ANC, LePrell CG, Griffiths SK et al. (2017) Effect of recreational noise on threshold and suprathreshold measures of auditory function. Semin Hear, 38, 298–318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Furman AC, Kujawa SG, Liberman MC (2013). Noise-induced cochlear neuropathy is selective for fibers with low spontaneous rates. J Neurophysiol, 110, 577–586. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Grinn SK, Wiseman KB, Baker JA, et al. (2017). Hidden Hearing Loss? No Effect of Common Recreational Noise Exposure on Cochlear Nerve Response Amplitude in Humans. Front Neurosci, 11, 465. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Grose JH, Buss E, Hall JW 3rd. (2017). Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences. Trends Hear, 21, 2331216517737417. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Grose JH, Buss E, Elmore H (2019). Age-Related Changes in the Auditory Brainstem Response and Suprathreshold Processing of Temporal and Spectral Modulation. Trends Hear, 23, 2331216519839615. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Guest H, Munro KJ, Plack CJ (2017). Tinnitus with a normal audiogram: Role of high-frequency sensitivity and reanalysis of brainstem-response measures to avoid audiometric over-matching. Hear Res, 356, 116–117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Guest H, Munro KJ, Plack CJ (2018). Acoustic Middle-Ear-Reflex Thresholds in Humans with Normal Audiograms: No Relations to Tinnitus, Speech Perception in Noise, or Noise Exposure. Neuroscience, in press. [DOI] [PubMed] [Google Scholar]
  33. Guest H, Munro KJ, Prendergast G, et al. (2018). Impaired speech perception in noise with normal audiogram: no evidence for cochlear synaptopathy and no relation to lifetime noise exposure. Hear Res, 364, 142–151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Hannley M, Jerger J (1981). PB rollover and the acoustic reflex. Audiology, 20, 251–258. [DOI] [PubMed] [Google Scholar]
  35. Hesse LL, Bakay W, Ong HC, et al. (2016). Non-Monotonic Relation between Noise Exposure Severity and Neuronal Hyperactivity in the Auditory Midbrain. Front Neurol, 7, 133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Hickox AE, Liberman MC (2014). Is noise-induced cochlear neuropathy key to the generation of hyperacusis or tinnitus? J Neurophysiol, 111, 552–564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Jerger J, Jerger S (1971). Diagnostic significance of PB word functions. Arch Otolaryngol, 93, 573–580. [DOI] [PubMed] [Google Scholar]
  38. Johannesen PT, Buzo BC, Lopez-Poveda EA (2019). Evidence for age-related cochlear synaptopathy in humans unconnected to speech-in-noise intelligibility deficits. Hear Res,374, 35–48. [DOI] [PubMed] [Google Scholar]
  39. Johnsson LG (1974). Sequence of degeneration of Corti’s organ and its first-order neurons. Ann Otol Rhinol Laryngol, 83, 294–303. [DOI] [PubMed] [Google Scholar]
  40. Johnsson LG, Hawkins JE Jr. (1976). Degeneration patterns in human ears exposed to noise. Ann Otol Rhinol Laryngol, 85, 725–739. [DOI] [PubMed] [Google Scholar]
  41. Keefe DH, Fitzpatrick D, Liu YW, et al. (2010). Wideband acoustic-reflex test in a test battery to predict middle-ear dysfunction. Hear Res, 263, 52–65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Killion MC, Niquette PA, Gudmundsen GI, et al. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am, 116, 2395–2405. [DOI] [PubMed] [Google Scholar]
  43. Kim JS, Nam E-C, Park SI (2005). Electrocochleography is more sensitive than distortion-product otoacoustic emission test for detecting noise-induced temporary threshold shift. Otolaryngol Head Neck Surg, 133, 619–624. [DOI] [PubMed] [Google Scholar]
  44. Knipper M, Van Dijk P, Nunes I, et al. (2013). Advances in the neurobiology of hearing disorders: recent developments regarding the basis of tinnitus and hyperacusis. Prog Neurobiol, 111, 17–33. [DOI] [PubMed] [Google Scholar]
  45. Kobler JB, Guinan JJ Jr., Vacher SR, et al. (1992). Acoustic reflex frequency selectivity in single stapedius motoneurons of the cat. J Neurophysiol, 68, 807–817. [DOI] [PubMed] [Google Scholar]
  46. Kujawa SG, Liberman MC (2009). Adding insult to injury: cochlear nerve degeneration after “temporary” noise-induced hearing loss. J Neurosci, 29, 14077–14085. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Kujawa SG, Liberman MC (2015). Synaptopathy in the noise-exposed and aging cochlea: Primary neural degeneration in acquired sensorineural hearing loss. Hear Res, 330, 191–199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Lang H, Jyothi V, Smythe NM, Dubno JR, Schulte BA, Schmiedt RA (2010). Chronic reduction of endocochlear potential reduces auditory nerve activity: further confirmation of an animal model of metabolic presbyacusis. JARO, 11, 419–434. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Liberman MC (2017). Noise-induced and age-related hearing loss: new perspectives and potential therapies. F1000Res, 6, 927. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Liberman MC, Kiang NY (1978). Acoustic trauma in cats. Cochlear pathology and auditory-nerve activity. Acta Otolaryngol Suppl, 358, 1–63. [PubMed] [Google Scholar]
  51. Liberman MC, Dodds LW (1984). Single-neuron labeling and chronic cochlear pathology. III. Stereocilia damage and alterations of threshold tuning curves. Hear Res, 16, 55–74. [DOI] [PubMed] [Google Scholar]
  52. Liberman MC, Guinan JJ Jr. (1998). Feedback control of the auditory periphery: anti-masking effects of middle ear muscles vs. olivocochlear efferents. J Commun Disord, 31, 471–482; quiz 483; 553. [DOI] [PubMed] [Google Scholar]
  53. Liberman MC, Epstein MJ, Cleveland SS, et al. (2016). Toward a Differential Diagnosis of Hidden Hearing Loss in Humans. PLoS One, 11, e0162726. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Lobarinas E, Salvi R, Ding D (2013). Insensitivity of the audiogram to carboplatin induced inner hair cell loss in chinchillas. Hear Res. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Margolis RH (1993). Detection of hearing impairment with the acoustic stapedius reflex. Ear Hear, 14, 3–10. [DOI] [PubMed] [Google Scholar]
  56. Margolis RH, Heller JW (1987). Screening tympanometry: criteria for medical referral. Audiology, 26, 197–208. [DOI] [PubMed] [Google Scholar]
  57. McCandless GA, Goering DM (1974). Changes in loudness after stapedectomy. Arch Otolaryngol, 100, 344–350. [DOI] [PubMed] [Google Scholar]
  58. McGregor KD, Flamme GA, Tasko SM, et al. (2018). Acoustic reflexes are common but not pervasive: evidence using a diagnostic middle ear analyser. Int J Audiol, 57, 42–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Merchant SN, Rosowski JJ (2008). Conductive hearing loss caused by third-window lesions of the inner ear. Otol Neurotol, 29, 282–289. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Mills DM (2006). Determining the cause of hearing loss: differential diagnosis using a comparison of audiometric and otoacoustic emission responses. Ear Hear, 27, 508–525. [DOI] [PubMed] [Google Scholar]
  61. Moller AR (1961). Bilateral contraction of the tympanic muscles in man. Ann Otol Rhinol Laryngol, 70, 735–752. [DOI] [PubMed] [Google Scholar]
  62. Nikiforidis GC, Koutsojannis CM, Varakis JN, et al. (1993). Reduced variance in the latency and amplitude of the fifth wave of auditory brain stem response after normalization for head size. Ear Hear, 14, 423–428. [DOI] [PubMed] [Google Scholar]
  63. Noffsinger D, Wilson RH, Musiek FE (1994). Department of Veterans Affairs compact disc recording for auditory perceptual assessment: background and introduction. J Am Acad Audiol, 5, 231–235. [PubMed] [Google Scholar]
  64. Pang XD, Guinan JJ Jr. (1997). Effects of stapedius-muscle contractions on the masking of auditory-nerve responses. J Acoust Soc Am, 102, 3576–3586. [DOI] [PubMed] [Google Scholar]
  65. Pappa AK, Hutson KA, Scott WC, et al. (2019). Hair Cell and Neural Contributions to the Cochlear Summating Potential. J Neurophysiol, in press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Prendergast G, Guest H, Munro KJ, et al. (2017). Effects of noise exposure on young adults with normal audiograms I: Electrophysiology. Hear Res, 344, 68–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Rabinowitz WM (1977). Acoustic-reflex effects on the input admittance and transfer characteristics of the human middle ear. Ph.D. dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. [Google Scholar]
  68. Rajan R, Cainer KE (2008). Ageing without hearing loss or cognitive impairment causes a decrease in speech intelligibility only in informational maskers. Neuroscience, 154, 784–795. [DOI] [PubMed] [Google Scholar]
  69. Ridley CL, Kopun JG, Neely ST et al. (2018). Using thresholds in noise to identify hidden hearing loss in humans. Ear Hear, 39, 829–844. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Ruan Q, Ao H, He J, et al. (2014). Topographic and quantitative evaluation of gentamicin-induced damage to peripheral innervation of mouse cochleae. Neurotoxicology, 40, 86–96. [DOI] [PubMed] [Google Scholar]
  71. Santarelli R, Del Castillo I, Rodriguez-Ballesteros M et al. (2009). Abnormal Cochlear Potentials from Deaf Patients with Mutations in the Otoferlin Gene. JARO, 10, 545–556. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Schmiedt RA (1984). Acoustic injury and the physiology of hearing. J Acoust Soc Am, 76, 1293–1317. [DOI] [PubMed] [Google Scholar]
  73. Schmiedt RA, Mills JH, Boettcher FA (1996). Age-related loss of activity of auditory-nerve fibers. J Neurophysiol, 76, 2799–2803. [DOI] [PubMed] [Google Scholar]
  74. Sergeyenko Y, Lall K, Liberman MC, et al. (2013). Age-related cochlear synaptopathy: an early-onset contributor to auditory functional decline. J Neurosci, 33, 13686–13694. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Shaheen LA, Valero MD, Liberman MC (2015). Towards a Diagnosis of Cochlear Neuropathy with Envelope Following Responses. J Assoc Res Otolaryngol, 16, 727–745. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Skoe E, Tufts J (2018). Evidence of noise-induced subclinical hearing loss using auditory brainstem responses and objective measures of noise exposure in humans. Hear Res, 361, 80–91. [DOI] [PubMed] [Google Scholar]
  77. Sly DJ, Campbell L, Uschakov A, et al. (2016). Applying Neurotrophins to the Round Window Rescues Auditory Function and Reduces Inner Hair Cell Synaptopathy After Noise-induced Hearing Loss. Otol Neurotol, 37, 1223–1230. [DOI] [PubMed] [Google Scholar]
  78. Spankovich C, LePrell CG, Lobarinas E, et al. (2017) Noise history and auditory function in young adults with and without type 1 diabetes mellitus. Ear Hear, 38, 724–735. [DOI] [PubMed] [Google Scholar]
  79. Stach BA (1987). The acoustic reflex in diagnostic audiology: from Metz to present. Ear Hear, 8, 36S–42S. [DOI] [PubMed] [Google Scholar]
  80. Suzuki J, Corfas G, Liberman MC (2016). Round-window delivery of neurotrophin 3 regenerates cochlear synapses after acoustic overexposure. Sci Rep, 6, 24907. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Tufts JB, Skoe E (2018). Examining the noisy life of the college musician: weeklong noise dosimetry of music and non-music activities. Int J Audiol, 57, S20–S27. [DOI] [PubMed] [Google Scholar]
  82. Valderrama JT, Beach EF, Yeend I et al. (2018). Effects of lifetime noise exposure on the middle-age human auditory brainstem response, tinnitus and speech-in-noise intelligibility. Hear Res, 365, 36–48. [DOI] [PubMed] [Google Scholar]
  83. Valero MD, Hancock KE, Liberman MC (2016). The middle ear muscle reflex in the diagnosis of cochlear neuropathy. Hear Res, 332, 29–38. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Valero MD, Hancock KE, Maison SF, et al. (2018). Effects of cochlear synaptopathy on middle-ear muscle reflexes in unanesthetized mice. Hear Res. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Vermiglio AJ, Soli SD, Freed DJ, et al. (2012). The relationship between high-frequency pure-tone hearing loss, hearing in noise test (HINT) thresholds, and the articulation index. J Am Acad Audiol, 23, 779–788. [DOI] [PubMed] [Google Scholar]
  86. Viana LM, O’Malley JT, Burgess BJ, et al. (2015). Cochlear neuropathy in human presbycusis: Confocal analysis of hidden hearing loss in post-mortem tissue. Hear Res, 327, 78–88. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Weisz N, Hartmann T, Dohrmann K, et al. (2006). High-frequency tinnitus without hearing loss does not mean absence of deafferentation. Hear Res, 222, 108–114. [DOI] [PubMed] [Google Scholar]
  88. Wilson RH, Cates WB (2008) A comparison of two word-recognition tasks in multitalker babble: Speech Recognition in Noise test (SPRINT) and Words-in-Noise Test (WIN). J Am Acad Audiol, 19, 548–556. [DOI] [PubMed] [Google Scholar]
  89. Woellner RC, Schuknecht HF (1955). Hearing loss from lesions of the cochlear nerve: an experimental and clinical study. Trans Am Acad Ophthalmol Otolaryngol, 59, 147–149. [PubMed] [Google Scholar]
  90. Wojtczak M, Beim JA, Oxenham AJ (2017). Weak Middle-Ear-Muscle Reflex in Humans with Noise-Induced Tinnitus and Normal Hearing May Reflect Cochlear Synaptopathy. eNeuro, 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Wormald PJ, Rogers C, Gatehouse S (1995). Speech discrimination in patients with Bell’s palsy and a paralysed stapedius muscle. Clin Otolaryngol Allied Sci, 20, 59–62. [DOI] [PubMed] [Google Scholar]
  92. Wu PZ, Liberman LD, Bennett K, et al. (2018). Primary neural degeneration in the human cochlea: evidence for hidden hearing loss in the aging ear. Neuroscience, S0306–4522, 30537–2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Yuan Y, Shi F, Yin Y et al. (2014). Ouabain-induced cochlear nerve degeneration: synaptic loss and plasticity in a mouse model of auditory neuropathy. J Assoc Res Otolaryngol, 15, 31–43. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Zheng X-Y, Ding D-L, McFadden SL, et al. (1997). Evidence that inner hair cells are the major source of cochlear summating potentials. Hear Res, 113, 76–88. [DOI] [PubMed] [Google Scholar]
