Abstract
Objective
Older adults often have trouble adjusting to hearing aids when they start wearing them for the first time. Probe-microphone measurements verify appropriate levels of amplification up to the tympanic membrane. Little is known, however, about the effects of amplification on auditory evoked responses to speech stimuli during initial hearing aid use. The present study assesses the effects of amplification on neural encoding of a speech signal in older adults using hearing aids for the first time. It was hypothesized that amplification results in improved stimulus encoding (higher amplitudes, improved phase locking, and earlier latencies), with greater effects for the regions of the signal that are less audible.
Design
Thirty-seven adults, ages 60 to 85 with mild to severe sensorineural hearing loss and no prior hearing aid use, were bilaterally fit with Widex Dream 440 receiver-in-the-canal hearing aids. Probe-microphone measures were used to adjust the gain of the hearing aids and verify the fitting. Unaided and aided frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) to the stimulus /ga/ were recorded in sound field over the course of two days for three conditions: 65 dB SPL and 80 dB SPL in quiet, and 80 dB SPL in 6-talker babble (+10 dB signal-to-noise ratio).
Results
Responses from midbrain were analyzed in the time regions corresponding to the consonant transition (18–68 ms) and the steady-state vowel (68–170 ms). Generally, amplification increased phase locking and amplitude, and decreased latency, for the region and presentation conditions that had lower stimulus amplitudes: the transition region and the 65 dB SPL level. Responses from cortex showed decreased latency for P1 but an unexpected decrease in N1 amplitude. Previous studies have demonstrated an exaggerated cortical representation of speech in older adults compared to younger adults, possibly due to an increase in the neural resources necessary to encode the signal. Therefore, a decrease in N1 amplitude with amplification and with increased presentation level may suggest that amplification decreases the neural resources necessary for cortical encoding.
Conclusion
Increased phase locking and amplitude and decreased latency in midbrain suggest that amplification may improve neural representation of the speech signal in new hearing aid users. The improvement with amplification was also found in cortex, and, in particular, decreased P1 latencies and lower N1 amplitudes may indicate greater neural efficiency. Further investigations will evaluate changes in subcortical and cortical responses during the first six months of hearing aid use.
Keywords: Amplification, frequency-following response, cortical auditory-evoked potential, phase locking, older adults, hearing loss
Introduction
The primary treatment option for most people with mild to moderate sensorineural hearing loss is the use of hearing aids. However, the benefit received from hearing aids varies greatly from person to person, regardless of the degree and configuration of the hearing loss (Kochkin 2012). Hearing aid benefit may be reduced by age- and hearing-related central changes that affect the quality of the speech signal reaching the central auditory nervous system. Current amplification strategies do not compensate for auditory temporal processing deficits that have been demonstrated in behavioral (Fitzgibbons et al. 2006; Grose et al. 2010; Pichora-Fuller et al. 2007) and electrophysiological studies of aging (Harris et al. 2010; Presacco et al. 2015; Presacco et al. 2016b; Tremblay et al. 2003). Hearing impairment may also lead to downstream changes in central auditory processing, including changes in tonotopicity (Thai-Van et al. 2010; Willott 1991) and in the balance of excitatory and inhibitory neurotransmission (Dong et al. 2009; Felix et al. 2007; Mossop et al. 2000). Changes in neural encoding associated with aging and hearing loss may affect speech perception. Thus, efforts to examine hearing aid benefit should consider taking into account amplification effects on speech encoding beyond the cochlea. The current method of hearing aid verification, probe-microphone measurement, does not address effects of amplification beyond the tympanic membrane. An understanding of amplification effects at higher levels of the auditory system (both auditory cortex and midbrain) may inform amplification algorithms or strategies, and thus improve the success of hearing aid fittings.
Few studies to date have assessed the effects of amplification on auditory evoked responses. Billings et al. (2007) investigated the effects of amplification on cortical auditory evoked potentials (CAEPs) of young normal-hearing listeners to tonal stimuli. Normal-hearing listeners were used to control for the effects of hearing loss on cortical responses. These listeners were fit with behind-the-ear hearing aids to the right ear, and probe microphone measurements were used to ensure that the hearing aids provided 20 dB of gain to input signals of varying intensities. Comparisons of intensity-growth functions for both conditions revealed no differences in CAEP morphology for aided and unaided conditions. Further investigation into individual in-the-canal intensity measurements found that listeners with more favorable signal-to-noise ratios (SNRs) had larger CAEP amplitudes and shorter latencies than listeners with less favorable SNRs, suggesting that the SNR may have a greater effect on cortical potentials than amplitude only. A follow-up study investigated differences in aided and unaided CAEP intensity-growth functions when hearing aid gain was 0, 10, 20, and 30 dB (Billings et al. 2011). Intensity of the input tonal signal was fixed at 40 dB SPL for aided conditions and was 40, 50, 60, and 70 dB SPL for unaided conditions to ensure equal in-the-canal intensities for aided and unaided conditions. Although in-the-canal intensities were equal for the aided and unaided conditions, aided responses had prolonged latencies and reduced amplitudes when compared to unaided responses, possibly due to lower in-the-canal SNRs in aided compared to unaided conditions as a result of the noise produced by the hearing aid. The Billings et al. (2011) study examined effects of hearing aid gain on CAEPs in young adults with normal hearing. In contrast, Van Dun et al. 
(2016) compared effects of amplification, audibility, and SNR on CAEPs of young normal hearing listeners and older listeners with hearing loss to the speech stimuli /m/, /g/, and /t/ presented in sound field. Unaided stimulus levels were 55, 65, and 75 dB SPL and the aided stimulus level was 55 dB SPL. Amplification resulted in increased CAEP amplitudes for listeners with hearing loss, but no effects were seen in normal hearing listeners. Furthermore, in the listeners with hearing loss, CAEP amplitudes were positively correlated with audibility but were negatively correlated with SNR. The lower SNRs occurred for aided versus the unaided conditions, but the internal noise of the hearing aid was likely inaudible to the hearing impaired listeners. Therefore, audibility is likely the most significant factor in the listeners with hearing loss. These studies indicate the importance of recruiting listeners with hearing loss when investigating the viability of using evoked potentials to verify amplification effects.
Evoked potentials may also be used to verify audibility in infants and other difficult-to-test populations and to evaluate the effects of hearing aid technology. Glista et al. (2012) investigated effects of hearing aid frequency compression technology on CAEPs to tonal stimuli in children with moderately severe high-frequency sensorineural hearing loss (SNHL). Frequency compression compresses high-frequency signals into a narrower bandwidth at lower frequencies with better hearing thresholds to increase audibility. They found that P1-N1-P2 responses to high-frequency tones were present when frequency compression was activated and were absent when it was turned off. These results verified that frequency compression improves detection of high-frequency sounds in the auditory cortex and that cortical evoked potentials are sensitive to changes in signal processing of hearing aids.
Feasibility studies have also been conducted to investigate use of the frequency-following response (FFR) with hearing aids. The FFR is an evoked potential to periodic stimuli arising largely from brainstem and midbrain for modulation frequencies associated with the fundamental frequency of the human voice (Chandrasekaran et al. 2010; Smith et al. 1975) with possible cortical contributions (Coffey et al. 2016). As the FFR waveform closely resembles the stimulus waveform (Galbraith et al. 1995; Greenberg 1980), the FFR can be used to assess precision of midbrain encoding of temporal and spectral features of speech (Skoe et al. 2010). Evaluation of amplification effects on the FFR may provide insight into the perceptual changes that listeners with hearing loss experience when using hearing aids. Bellier et al. (2015) recorded aided and unaided FFR responses to the speech syllable /ba/ in four listeners with normal hearing. The signal was presented at 80 dB SPL via insert earphones (unaided), through wireless transmission to the hearing aids at two different gain levels, and to three “muted” conditions (microphones turned off), resulting in a total of five listening conditions. Clear responses were collected for the insert earphone condition and both hearing aid conditions and no responses were present in the muted conditions, verifying the feasibility of collecting viable FFRs when the signal is presented through wireless transmission.
Amplification effects on the FFR have also been investigated in individuals with hearing loss. In a 75-year-old individual with hearing loss, FFRs to a speech syllable /da/ presented through a speaker demonstrated changes in speech encoding in unaided vs. aided conditions and with different hearing aid settings (S. Anderson and N. Kraus 2013). Easwar et al. (2015) used direct audio input (DAI) to investigate aided and unaided FFRs to a male-spoken token /susa∫i/ representing a wide range of frequencies in older listeners with hearing loss. Amplification resulted in increased detectability and amplitude of the response. Increasing hearing aid bandwidth to 4 kHz further increased detectability, suggesting that the FFR may be used to verify audibility and to evaluate the effects of manipulating hearing aid parameters. Taken together, these studies demonstrate the potential usefulness of the FFR for assessing hearing aid benefit on midbrain encoding of speech signals in individuals with hearing loss.
The purpose of the current study was to assess the effects of amplification on FFRs and CAEPs to a speech syllable /ga/ presented in sound field in first-time hearing aid users at input levels that approximate normal listening conditions. This study addresses the first of two aims in a larger project that investigated amplification effects on central auditory processing and plasticity changes with hearing aid use over a six-month period. While previous studies have focused on increasing detectability of the signal, this study investigated the effects of amplification on suprathreshold speech processing. It was hypothesized that amplification results in improved encoding of the speech signal in the midbrain (higher response amplitudes, improved phase locking, and earlier latencies) due to an increase in audibility. Based on the findings of Van Dun et al. (2016) for listeners with hearing loss, it was also expected that amplification would result in higher CAEP amplitudes and decreased latencies. These hypotheses will be tested by comparing FFRs and CAEPs to a speech syllable presented at different intensity levels in aided and unaided conditions. The interacting effects of noise and amplification will also be evaluated. The evaluation of amplification effects on central processing using ecologically valid listening conditions is a first step in determining the usefulness of these measures for improving hearing aid outcomes.
Materials and Method
Participants
Thirty-seven older adults with sensorineural hearing loss [22 Females, ages 60 to 88 years, mean ± sd: 73.97 ± 5.79 years] were recruited from the Washington D.C. metro area through the use of flyers distributed across the University of Maryland campus, local senior living communities, and through Craigslist advertisements. Participants had hearing levels ranging from mild to severe, with pure-tone averages ≥ 25 dB HL from 500–4000 Hz, no pure-tone thresholds ≥ 90 dB HL at any one frequency, no air-bone gaps of 15 dB HL or greater at two or more adjacent frequencies, and no interaural asymmetries of 15 dB HL or greater at two or more frequencies. Refer to Figure 1 for individual hearing thresholds in right and left ears and for average thresholds at each frequency. All subjects had normal click-evoked auditory brainstem response (ABR) latencies for age and hearing loss (wave V < 6.8 ms; Otto et al. 1982), measured by a 100 μs click stimulus presented at 80 dB SPL (peak equivalent) at a rate of 21.1 Hz. In one participant, data were not obtained for the noise condition due to equipment difficulties, and in two other participants data were not obtained for CAEPs due to time limitations.
Figure 1.
Individual pure-tone air-conduction thresholds for participants (n=35) from 125 Hz–8000 Hz for right and left ears, shown in gray. The solid black line indicates group average pure-tone thresholds.
Participants had normal IQs (≥ 85) as evaluated using the Wechsler Abbreviated Scale of Intelligence (mean ± sd: 113.05 ± 14.76; WASI; Zhu et al. 1999) and were screened for dementia using a criterion score of 22/30 on the Montreal Cognitive Assessment (mean ± sd: 25.74 ± 2.27; MOCA; Nasreddine et al. 2005). All participants were native speakers of English, had no history of neurological disorders, and had no previous experience with hearing aid use. As music training may have an effect on subcortical auditory processing (Bidelman et al. 2010; Parbery-Clark et al. 2012), professional musicians were excluded from the study. All procedures were approved by the Institutional Review Board of the University of Maryland. Participants provided informed consent and were compensated for their time.
Hearing aid fitting
Each participant was fit bilaterally with Widex Dream 440 receiver-in-the-canal hearing aids with size M receivers with open domes (individual thresholds for 250–500 Hz < 30 dB HL) or tulip domes (individual thresholds for 250–500 Hz ≥ 30 dB HL). The Widex Dream 440 Fusion hearing aids accommodate hearing losses up to 85 dB HL from 125–8000 Hz when coupled with M receivers. Although there is greater variability in the amount of low-frequency gain provided by the hearing aids when open domes are utilized, their use was necessary in terms of patient comfort and compliance. As this study was also part of a longer project to assess central plasticity associated with hearing aid use, it was imperative that the patients be comfortable enough with the hearing aids to wear them eight hours per day over the course of six months. Although the hearing aids can be programmed with five manual programs, only one automatic program was used for the purposes of this study. This program had an extended input dynamic range of 113 dB SPL, 15 frequency channels, wide dynamic range compression, directional microphones, and noise reduction technology. The hearing aids were linked using ear-to-ear communication technology for compression, speech enhancer, and feedback cancellation.
Individual real-ear measurements were performed to verify the fitting. Real-ear-to-coupler differences were first obtained and then the hearing aids were adjusted to match NAL-NL2 prescriptive targets for International Speech Test Signal stimuli (Holube et al. 2010) presented at 55 dB SPL, 65 dB SPL, and 75 dB SPL. Each participant received a pair of hearing aids that were programmed to match his or her NAL-NL2 prescriptive targets based on each individual’s audiogram. Table 1 reports the group average aided SPLs obtained from real-ear testing and the differences between actual and target SPLs for 250 to 4000 Hz at 55, 65, and 75 dB SPL in each ear. A goodness of fit test was performed to determine how well measured outputs matched expected outputs based on NAL-NL2 prescriptive targets. As shown in the Table 1, gain measures fit well to expected values, with the exception of 4000 Hz. The average thresholds at 4000 Hz were in the moderately severe range, and fitting to the target for this frequency was not possible without causing feedback or patient discomfort when listening to high-frequency sounds. Maximum power output (MPO) measurements were performed to ensure that the hearing aids were not uncomfortably loud.
Table 1.
Real-ear measurement values are summarized. Mean output levels and mean differences from NAL-NL2 targets in dB SPL are displayed along with standard deviations for the right and left ears at 55, 65, and 75 dB SPL input levels. The levels are generally within a few dB of the target values, except for the 55 and 65 dB levels at 4000 Hz, at which it was not possible to provide sufficient gain to meet the targets. Output values in regular font meet the goodness-of-fit test (F > 6.0, p < 0.02), and values in lighter font do not (F < 4.0, p > 0.05).
| dB SPL | Ear | 250 Hz Output | Target Diff. | 500 Hz Output | Target Diff. | 1000 Hz Output | Target Diff. | 2000 Hz Output | Target Diff. | 4000 Hz Output | Target Diff. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 55 | Right | 48.17 (2.47) | 0.28 (4.26) | 49.97 (4.49) | −1.59 (5.26) | 55.48 (5.32) | 1.83 (5.97) | 62.79 (6.58) | −2.90 (5.16) | 58.76 (5.47) | −8.97 (7.82) |
| 55 | Left | 48.21 (3.52) | 0.07 (4.71) | 50.26 (4.92) | −2.10 (4.59) | 55.96 (8.19) | 0.64 (5.72) | 63.75 (8.17) | −3.07 (4.74) | 60.52 (7.45) | −9.32 (7.65) |
| 65 | Right | 56.50 (2.06) | 0.89 (1.62) | 58.89 (4.00) | 0.29 (4.67) | 63.21 (5.12) | 3.86 (4.62) | 69.39 (6.70) | 0.61 (3.54) | 65.86 (5.92) | −7.29 (5.27) |
| 65 | Left | 56.96 (3.57) | 1.21 (3.99) | 59.41 (3.82) | 0.11 (3.28) | 62.63 (6.25) | 2.25 (4.04) | 69.46 (7.10) | −1.89 (4.04) | 68.11 (7.80) | −4.93 (7.27) |
| 75 | Right | 55.77 (3.69) | −9.18 (3.80) | 63.00 (3.95) | −4.18 (3.98) | 70.55 (5.58) | 5.95 (4.87) | 77.59 (7.20) | 4.05 (4.28) | 74.23 (7.19) | −2.05 (6.02) |
| 75 | Left | 55.71 (4.04) | −9.38 (3.96) | 62.10 (4.98) | −4.95 (4.93) | 70.05 (6.23) | 6.00 (5.20) | 77.71 (6.51) | 3.62 (4.62) | 76.71 (7.70) | 0.24 (7.75) |

Values are mean (SD) in dB SPL.
Hearing aid audibility
As the use of open domes results in variable gain values for low-frequency inputs, further investigation into the audibility of the signal was conducted using open and closed domes. After reviewing audiograms of each participant, we determined that each hearing loss fell into one of three configurations: gradually sloping mild to moderate SNHL, gradually sloping mild to severe SNHL, and mild sharply sloping to severe SNHL (Fig. 1). This information was utilized in the collection of KEMAR® measurements to ensure the hearing aids were providing adequate audibility for the input signals used in the protocol. Hearing aids were programmed for each of the three categories of hearing losses described above, and experimental stimuli were presented to KEMAR at ear level at a distance of two meters from the loudspeaker at 0° azimuth for all presentation levels, an identical collection paradigm to that used in the study. In-ear intensity levels were measured for frequencies from 125–8000 Hz. Table 2 reports the degree to which the aided in-ear levels of the [ga] syllable exceed audiometric thresholds for each frequency for the 65 dB SPL, 80 dB SPL in quiet, and 80 dB SPL in noise presentation conditions for open and tulip (closed) dome fittings for the three types of sensorineural hearing losses. Positive values indicate that the aided levels exceed audiometric threshold, verifying adequate audibility through 2000 Hz for most conditions for both the tulip dome and open dome fitting configurations.
Table 2.
Aided sensation levels above thresholds based on one-third octave band measurements obtained from KEMAR® to simulate electrophysiological recordings. Stimuli were presented from a loudspeaker placed at 0° azimuth at ear level at a distance of two meters from the mannequin. A Widex Dream 440 hearing aid was programmed with three different hearing loss configurations representing the average hearing loss types in the study (gradually sloping mild to severe SNHL, gradually sloping mild to moderate SNHL, and mild sharply sloping to severe SNHL) with open dome or tulip dome ear tips. As denoted by positive values, audibility was achieved through 2000 Hz for all listening conditions and hearing aid configurations.
**Gradually sloping mild to severe SNHL**

| SPL | Dome | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 65 dB | Open | 22 | 37 | 28 | 17 | −4 | −11 | −27 | −37 | −55 |
| 65 dB | Tulip | 21 | 30 | 33 | 13 | 5 | −7 | −15 | −34 | −55 |
| 80 dB | Open | 38 | 48 | 32 | 25 | 11 | 8 | −2 | −26 | −47 |
| 80 dB | Tulip | 35 | 43 | 44 | 22 | 16 | 10 | −7 | −35 | −50 |
| Noise | Open | 38 | 49 | 31 | 24 | 14 | 9 | 16 | −14 | −40 |
| Noise | Tulip | 37 | 45 | 40 | 22 | 16 | 4 | −6 | −27 | −51 |

**Gradually sloping mild to moderate SNHL**

| SPL | Dome | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 65 dB | Open | 32 | 36 | 18 | 21 | 8 | −1 | −2 | −27 | −49 |
| 65 dB | Tulip | 22 | 37 | 32 | 20 | 10 | 1 | 0 | −22 | −49 |
| 80 dB | Open | 40 | 50 | 32 | 31 | 19 | 9 | 5 | −20 | −48 |
| 80 dB | Tulip | 36 | 48 | 42 | 35 | 19 | 15 | 5 | −19 | −42 |
| Noise | Open | 43 | 50 | 31 | 28 | 19 | 10 | 3 | −21 | −42 |
| Noise | Tulip | 40 | 58 | 37 | 29 | 20 | 9 | 7 | −18 | −40 |

**Mild sharply sloping to severe SNHL**

| SPL | Dome | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 65 dB | Open | 31 | 38 | 33 | 39 | 19 | −1 | −26 | −37 | −57 |
| 65 dB | Tulip | 22 | 36 | 46 | 48 | 17 | −1 | −25 | −34 | −59 |
| 80 dB | Open | 41 | 50 | 46 | 51 | 33 | 10 | −15 | −30 | −53 |
| 80 dB | Tulip | 42 | 50 | 50 | 30 | 29 | 11 | −19 | −31 | −55 |
| Noise | Open | 42 | 49 | 47 | 48 | 37 | 11 | −16 | −30 | −53 |
| Noise | Tulip | 32 | 49 | 58 | 47 | 35 | 26 | −15 | −31 | −54 |

Column headings are frequencies in Hz; values are aided sensation levels in dB.
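The sensation-level values in Table 2 reduce to a subtraction of the audiometric threshold (expressed in dB SPL) from the aided in-ear level at each frequency. The sketch below illustrates this arithmetic with made-up example numbers; none of the values are the study's measurements.

```python
# Hypothetical illustration of the sensation-level calculation behind Table 2:
# aided in-ear level (dB SPL) minus audiometric threshold (in dB SPL).
# All numbers are invented for illustration only.

freqs = [250, 500, 1000, 2000, 4000]     # Hz
aided_in_ear = [58, 55, 52, 48, 40]      # measured on KEMAR, dB SPL (example)
threshold_spl = [30, 28, 35, 45, 60]     # audiometric thresholds, dB SPL (example)

# sensation level: how far the aided signal exceeds threshold at each frequency
sensation_level = [a - t for a, t in zip(aided_in_ear, threshold_spl)]
audible = {f: sl > 0 for f, sl in zip(freqs, sensation_level)}
```

A positive sensation level at a given frequency indicates that the aided signal exceeds threshold there, which is the audibility criterion used in the table.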
Electrophysiology
Stimuli and Recording
A 170-ms speech syllable /ga/ (Fig. 2) synthesized with a Klatt-based synthesizer (Boersma et al. 2009) at 20 kHz was the chosen stimulus. The stimulus was characterized by a 10-ms onset burst followed by a 50-ms consonant-vowel transition and a steady-state vowel region from 60 to 170 ms. Voicing was constant for the duration of the stimulus with a fundamental frequency (F0) of 100 Hz. The transition region was characterized by rapidly changing formants: the first formant rose from 400 Hz to 720 Hz, the second formant fell from 2480 Hz to 1240 Hz, and the third formant fell from 2580 Hz to 2500 Hz; all three formants stabilized for the steady-state region of the syllable. The fourth through sixth formants remained constant over the entire duration of the syllable at 3300, 3750, and 4900 Hz, respectively. This stimulus was chosen to investigate amplification effects on audibility of the higher frequency information present in the transition region of the syllable. In addition to frequency differences, we also note that the stimulus regions differ in relative power. A root-mean-square (RMS) power calculation revealed values of 0.08 V² for the transition region and 0.10 V² for the steady-state region. The syllable's waveform and its spectral energy are represented in Figure 2.
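The region-wise RMS comparison can be sketched as follows (Python here for illustration; the study's analyses were performed in MATLAB, and a plain sinusoid stands in for the /ga/ waveform):

```python
import numpy as np

fs = 20_000  # stimulus synthesis sampling rate (20 kHz), from the text

def region_rms(signal, fs, start_ms, end_ms):
    """RMS amplitude of a waveform segment whose boundaries are given in ms."""
    start = int(start_ms * fs / 1000)
    end = int(end_ms * fs / 1000)
    segment = signal[start:end]
    return np.sqrt(np.mean(segment ** 2))

# A 170-ms, 100-Hz sinusoid stands in for the /ga/ waveform here.
t = np.arange(0, 0.170, 1 / fs)
wave = np.sin(2 * np.pi * 100 * t)

transition_rms = region_rms(wave, fs, 18, 68)   # consonant transition window
steady_rms = region_rms(wave, fs, 68, 170)      # steady-state vowel window
```

The same windows (18–68 ms and 68–170 ms) are used for the FFR response analyses described below in the Methods.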
Figure 2.
A: Spectrogram of the stimulus /ga/. B: Stimulus waveform with horizontal lines marking the transition (18–68 ms) and the steady state (68–170 ms) regions. The onsets of the waveform and spectrogram are temporally aligned with the response. C: Grand average response waveform to the unaided /ga/ syllable presented at 80 dB SPL in sound field.
All testing was conducted in a sound-treated, electrically-shielded booth with the lights off to reduce electrical interference. The /ga/ stimulus was presented through a speaker placed two meters from the participants at 0° azimuth via Presentation software (Neurobehavioral Systems, Inc.). The stimulus was presented in sound field rather than via direct input to allow processing through the hearing aid microphones and to simulate ecologically valid listening situations. The /ga/ was presented in three listening conditions: 1) 65 dB SPL in quiet; 2) 80 dB SPL in quiet; and 3) 80 dB SPL in the presence of 70 dB SPL 6-talker babble (herein referred to as 80 dB SPL in noise). The 6-talker babble was taken from the Words-in-Noise (WIN) sentence lists (Wilson et al. 2003) and was played continuously on a 4.6-second loop. Prior to recording, the /ga/ and noise stimuli were calibrated to within ± 1 dB of the desired presentation level using a Larson Davis System 824 sound level meter at ear level.
Frequency-Following Response
Recording
The /ga/ stimulus was presented with alternating polarities at a rate of 4 Hz. A standard vertical montage of five electrodes (Cz active, two forehead ground, unlinked earlobe references) was used with all offsets < 50 μV. Responses were recorded using the Biosemi ActiABR–200 acquisition system (BioSemi B.V., Amsterdam, Netherlands) with a sampling frequency of 16,384 Hz. A single run of 2300 sweeps was collected for each condition. During recording, participants were seated in an upright position so that the microphones of the hearing aids were in the same plane as the speaker, at a relative elevation angle of 0°. They watched a silent movie with subtitles playing on a projector screen to promote relaxation and a state of calm wakefulness and to minimize head movement. All three conditions were recorded consecutively during one test session in both aided and unaided conditions, resulting in a total of six listening conditions per participant. Order of condition presentation was randomized.
Data Reduction
The sweeps were averaged and processed off-line using MATLAB (MathWorks, version R2011b). The time window for each sweep was −50 to 185 ms referenced to the stimulus onset. The stimulus onset in the aided conditions was adjusted by 2 ms to allow for hearing aid processing time (based on frequency-specific values for hearing aid processing delays provided by Widex USA). Responses were digitally bandpass-filtered from 70 to 2000 Hz using a 4th-order Butterworth filter to minimize the effects of low-frequency signals originating from cortex (Dinse et al. 1997). A criterion of ± 30 μV was used for offline artifact rejection. A final average response was created by averaging the first 2000 artifact-free sweeps of the two polarities (1000 per polarity) in order to minimize the influence of cochlear microphonic and stimulus artifact on the response and to maximize the envelope response (Aiken et al. 2008; Campbell et al. 2012; Gorga et al. 1985). SNR in dB was calculated using the following formula:

SNR (dB) = 20 · log10(RMSpost / RMSpre),

where RMSpost is the RMS amplitude over the post-stimulus period (5 to 190 ms) and RMSpre is the RMS amplitude over the pre-stimulus (noise) period (−40 to 0 ms). All unaided and aided responses had SNR values > 1 dB.
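A minimal sketch of this SNR computation, assuming the formula is the standard dB conversion of the post-stimulus to pre-stimulus RMS ratio (the segment boundaries are those stated in the text):

```python
import numpy as np

def snr_db(sweep, fs, t_start_ms=-50.0):
    """SNR in dB of an averaged FFR sweep whose time axis begins at t_start_ms."""
    def segment_rms(t0_ms, t1_ms):
        # convert millisecond boundaries to sample indices
        i0 = int(round((t0_ms - t_start_ms) * fs / 1000))
        i1 = int(round((t1_ms - t_start_ms) * fs / 1000))
        return np.sqrt(np.mean(sweep[i0:i1] ** 2))

    rms_post = segment_rms(5.0, 190.0)   # post-stimulus (response) period
    rms_pre = segment_rms(-40.0, 0.0)    # pre-stimulus (noise) period
    return 20.0 * np.log10(rms_post / rms_pre)
```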
Response amplitude and latency
Root-mean-square (RMS) amplitude was calculated for the transition (18–68 ms) and steady-state (68–170 ms) regions for each condition (aided and unaided, for a total of six conditions). An automatic peak-picking algorithm was run in MATLAB that identified the peak that was closest to the expected latency (within 2 ms), based on average latencies obtained in previous studies (Anderson et al. 2012; S. Anderson, T White-Schwoch, et al. 2013; Presacco et al. 2015). A trained peak picker confirmed each peak identification and made changes where appropriate. The first consistently identifiable peak of the consonant transition (~31 ms) was used in the analysis.
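The automatic peak-picking step can be sketched as follows. This is a simplified illustration, not the authors' MATLAB implementation: it finds local maxima within ±2 ms of an expected latency and returns the one closest to it.

```python
import numpy as np

def pick_peak(response, fs, expected_ms, window_ms=2.0, t_start_ms=-50.0):
    """Latency (ms) and amplitude of the local maximum closest to expected_ms."""
    i0 = int(round((expected_ms - window_ms - t_start_ms) * fs / 1000))
    i1 = int(round((expected_ms + window_ms - t_start_ms) * fs / 1000))
    segment = response[i0:i1]
    # candidate peaks: samples greater than both neighbors
    candidates = [i for i in range(1, len(segment) - 1)
                  if segment[i] > segment[i - 1] and segment[i] >= segment[i + 1]]
    if not candidates:                 # nothing found: flag for manual review
        return None
    to_ms = lambda i: (i0 + i) * 1000.0 / fs + t_start_ms
    best = min(candidates, key=lambda i: abs(to_ms(i) - expected_ms))
    return to_ms(best), segment[best]
```

In the study, every automatic identification was additionally confirmed by a trained peak picker, which a sketch like this cannot replace.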
Phase Locking Factor (PLF)
Complex Morlet wavelets were used to decompose the signal between 80 and 800 Hz and to analyze the PLF of single trials in the time-frequency domain (Tallon-Baudry et al. 1996). The PLF evaluates inter-trial phase consistency by extracting the phase from each of the N sweeps recorded and then averaging the N phases. The phase is extracted for each frequency bin (1 Hz) at each point in time. The normalized energy was calculated for each sweep by dividing the convolution of the complex wavelet w(t, f) with the signal s(t) by its absolute value, P(t, f) = (w(t, f) ∗ s(t)) / |w(t, f) ∗ s(t)|, leading to a complex value that describes the phase distribution at each frequency and point in time. The final PLF was represented by the modulus of the average across sweeps of this complex value, which ranges from 1 (phase-locked) to 0 (non-phase-locked). The mean values for the fundamental frequency (F0) were averaged in a 10 Hz bin across the transition (18–68 ms) and steady-state (68–170 ms) regions. Two participants were eliminated from the dataset due to PLF values that were three standard deviations above the mean.
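The PLF computation can be sketched in a few lines. The 7-cycle wavelet width below is an assumption for illustration; the study's analysis used a 1 Hz frequency grid from 80 to 800 Hz, and only a single frequency is shown here.

```python
import numpy as np

def morlet(freq, fs, n_cycles=7):
    """Complex Morlet wavelet centered at `freq` Hz."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

def plf(sweeps, freq, fs):
    """Phase-locking factor over time at one frequency.

    sweeps: (n_sweeps, n_samples) array of single trials.
    Returns values from 0 (random phase) to 1 (perfect phase locking).
    """
    w = morlet(freq, fs)
    phasors = []
    for sweep in sweeps:
        analytic = np.convolve(sweep, w, mode="same")
        # normalize each time point to a unit phasor (epsilon avoids 0/0)
        phasors.append(analytic / (np.abs(analytic) + 1e-20))
    return np.abs(np.mean(phasors, axis=0))
```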
Cortical Response
Recording
The /ga/ stimulus was presented at a rate of 1 Hz, and responses were recorded at a sampling frequency of 2048 Hz using a 32-channel electrode cap that incorporated a subset of the International 10–20 system (Jasper 1958), with average earlobes (A1 and A2) serving as references. A single run of 600 sweeps was collected for each condition.
Data Processing and Analyses
Responses were offline bandpass filtered from 1–30 Hz with a 4th order Butterworth filter. Eye movements were removed from filtered data using a regression-based electrooculography reduction method (Romero et al. 2006; Schlögl et al. 2007). The time window for each sweep was −100 to 400 ms referenced to the stimulus onset. A final average response was extracted with the first 500 artifact-free sweeps.
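The offline bandpass filtering can be sketched with SciPy. Whether the original MATLAB filtering was zero-phase is not stated in the text, so the use of `filtfilt` here is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, low_hz, high_hz, order=4):
    """Zero-phase Butterworth bandpass (order refers to the underlying design)."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, data)

# Cortical responses: 1-30 Hz passband at the 2048 Hz sampling rate
# (the FFR analysis used a 70-2000 Hz passband at 16,384 Hz).
fs = 2048
t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 100 * t)  # 10 Hz + 100 Hz
cortical = bandpass(raw, fs, 1, 30)  # the 100 Hz component is strongly attenuated
```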
Denoising source separation (DSS)
Artifact-free data from each of the 32 recorded channels were decomposed into N signal components (N ≤ 32) using the denoising source separation (DSS) algorithm (de Cheveigne et al. 2008; Särelä et al. 2005). The first DSS component, which accounts for the greatest variability in our data and therefore provides the best signal-to-noise ratio for our ERPs, was then used for the final analysis. Amplitude was calculated for the expected time region for each of the prominent cortical peaks: P1 (35–75 ms), N1 (75–130 ms), and P2 (130–250 ms) in the quiet conditions (65 and 80 dB SPL) and P1 (35–75 ms), N1 (125–175 ms), and P2 (225–275 ms) in the noise condition. Latency was calculated for the highest peak in each of these time ranges.
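The core of the DSS step can be sketched as a generalized eigendecomposition that favors trial-reproducible (evoked) activity. This is a condensed illustration of the approach described by de Cheveigne et al. (2008), not the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import eigh

def first_dss_component(epochs):
    """First DSS component of epochs shaped (n_trials, n_channels, n_samples).

    Finds the channel weighting that maximizes the ratio of evoked
    (trial-averaged) power to total power, then projects the evoked average.
    """
    n_trials, n_ch, n_samp = epochs.shape
    x = epochs.transpose(1, 0, 2).reshape(n_ch, -1)
    c0 = x @ x.T / x.shape[1]          # covariance of the raw data (total power)
    evoked = epochs.mean(axis=0)       # trial average, per channel
    c1 = evoked @ evoked.T / n_samp    # covariance of the average (evoked power)
    _, evecs = eigh(c1, c0)            # generalized eigenproblem, ascending order
    w = evecs[:, -1]                   # weighting with the best evoked-to-total ratio
    return w @ evoked
```

The component returned this way has an arbitrary sign and scale, so peak polarity and amplitude units require a normalization convention in practice.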
Statistical Analyses
FFR
To test effects of amplification, individual two-way repeated-measures (RM) ANOVAs (stimulus region, two levels: transition and steady state; amplification, two levels: aided, unaided) were performed for each presentation condition (65 dB SPL, 80 dB SPL in quiet, and 80 dB SPL in noise) for the FFR PLF using SPSS version 21.0. RM ANOVAs were also performed with two within-subjects independent variables (stimulus region and amplification) for RMS and one within-subjects variable (amplification) for latency for each presentation condition (65 dB SPL, 80 dB SPL in quiet, and 80 dB SPL in noise). In addition, level effects were tested with a three-way RM ANOVA (65 dB SPL in quiet vs. 80 dB SPL in quiet, transition vs. steady-state regions, aided vs. unaided), and noise effects were tested with a three-way RM ANOVA (80 dB SPL in quiet vs. 80 dB SPL in noise, transition vs. steady-state regions, aided vs. unaided) for the PLF, RMS, and latency analyses.
Cortical
To test amplification effects on DSS amplitude and latency, individual two-way RM ANOVAs (peak, three levels: P1, N1, and P2; and amplification, two levels: aided, unaided) were performed for each presentation condition (65 dB SPL in quiet, 80 dB SPL in quiet, and 80 dB SPL in noise). In addition, three-way RM ANOVAs were used to evaluate effects of level (65 dB SPL in quiet vs. 80 dB SPL in quiet; P1, N1, and P2; aided vs. unaided) and noise (80 dB SPL in quiet vs. 80 dB SPL in noise; P1, N1, and P2; aided vs. unaided). Post-hoc paired t-tests were used to evaluate within-subject differences when interactions were noted for single variables (FFR RMS, latency, and cortical analyses). The false discovery rate (FDR) procedure (Benjamini et al. 1995) was applied to control for multiple comparisons for main effects.
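The FDR control step can be sketched with a standard Benjamini–Hochberg implementation (written here for illustration; the study's exact correction workflow within SPSS is not described):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of p-values that
    survive false discovery rate control at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m   # step-up critical values
    below = p[order] <= thresh
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))  # largest rank passing
        keep[order[:k + 1]] = True
    return keep
```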
RESULTS
Frequency-Following Response (FFR)
Table 3 provides means and standard deviations of PLF, RMS, and latency values, and F statistics for main effects and interactions, for the different presentation conditions. The specific details are as follows:
Table 3.
Unaided sensation levels (re: threshold) based on one-third octave band measurements of stimuli presented from a loudspeaker placed at 0° azimuth at ear level, two meters from the KEMAR® mannequin, to simulate the electrophysiological recordings. Values are provided for each of the three hearing loss groups (gradually sloping mild to severe SNHL, gradually sloping mild to moderate SNHL, and mild sharply sloping to severe SNHL).
| Gradually sloping mild to severe SNHL | |||||||||
| SPL | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
| 65 dB | 20 | 30 | 21 | −4 | −20 | −31 | −27 | −43 | −75 |
| 80 dB | 36 | 44 | 35 | 25 | 8 | −5 | −2 | −30 | −75 |
| Gradually sloping mild to moderate SNHL | |||||||||
| SPL | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
| 65 dB | 22 | 34 | 20 | 0 | −11 | −21 | −27 | −52 | −60 |
| 80 dB | 35 | 45 | 35 | 13 | −5 | −16 | −19 | −38 | −53 |
| Mild sharply sloping to severe SNHL | |||||||||
| SPL | 125 | 250 | 500 | 1000 | 2000 | 3000 | 4000 | 6000 | 8000 |
| 65 dB | 21 | 35 | 38 | 30 | 7 | −23 | −55 | −67 | −75 |
| 80 dB | 42 | 51 | 44 | 44 | 20 | −8 | −40 | −54 | −73 |
PLF
Amplification Effects
In response to the 65 dB SPL presentation level, there was a main effect of amplification [F(1,34) = 6.052, p = 0.019, η2 = 0.151]. The region × aided interaction was not significant [F(1,34) = 2.234, p = 0.144, η2 = 0.062] (Fig. 3). In response to 80 dB SPL in quiet and 80 dB SPL in noise, there were no main effects of amplification [80 quiet: F(1,34) = 1.546, p = 0.222, η2 = 0.044; 80 noise: F(1,34) = 1.074, p = 0.307, η2 = 0.031] (Figs. 4 and 5).
Figure 3.
Amplification increased phase locking to the speech syllable /ga/ at 65 dB SPL. A. PLF in the time-frequency domain for group average unaided and aided responses. B. Unaided (black) and aided (red) PLF at the F0. We note that phase cancellation occurred as a result of averaging across subjects, so that the color intensity is less than that shown in the means displayed in the line graphs. The scale of the colormap in Panel A is reduced compared to the line graphs to enhance the color contrasts in the PLF. Error bars=1 S.E.
Figure 4.
There were no effects of amplification on phase locking at 80 dB SPL in quiet. A. PLF in the time-frequency domain for group average unaided and aided responses. B. Unaided (black) and aided (red) PLF at the F0. We note that phase cancellation occurred as a result of averaging across subjects, so that the color intensity is less than that shown in the means displayed in the line graphs. The scale of the colormap in Panel A is reduced compared to the line graphs to enhance the color contrasts in the PLF. Error bars=1 S.E.
Figure 5.
There were no effects of amplification on phase locking at 80 dB SPL in noise. A. PLF in the time-frequency domain for group average unaided and aided responses. B. Unaided (black) and aided (red) PLF at the F0. We note that phase cancellation occurred as a result of averaging across subjects, so that the color intensity is less than that shown in the means displayed in the line graphs. The scale of the colormap in Panel A is reduced compared to the line graphs to enhance the color contrasts in the PLF. Error bars=1 S.E.
Level Effects
The PLF was significantly higher in response to the 80 dB SPL level than to the 65 dB SPL level [F(1,34) = 7.645, p = 0.009, η2 = 0.184], and there was no level × aided interaction [F(1,34) = 0.145, p = 0.706, η2 = 0.004] or level × region interaction [F(1,34) = 2.068, p = 0.160, η2 = 0.057].
Noise Effects
The effect of noise on the PLF of the envelope was not significant [F(1,34) = 1.544, p = 0.223, η2 = 0.043].
RMS Amplitude
Amplification Effects
In response to the 65 dB SPL level, there was a main effect of amplification [F(1,34) = 4.343, p = 0.045, η2 = 0.113], and a significant aided × region interaction [F(1,34) = 7.407, p = 0.010, η2 = 0.179], driven by a significant increase in amplitude in the transition but not the steady-state regions [transition: t(34) = 3.079, p = 0.004, η2 = 0.218; steady state: t(34) = 0.527, p = 0.601, η2 = 0.008]. There was no main effect of amplification in response to 80 dB SPL in quiet or noise [80 dB quiet: F(1,34) = 1.698, p = 0.201, η2 = 0.048; 80 dB noise: F(1,34) = 1.212, p = 0.279, η2 = 0.035]. Refer to Figure 6 for a display of unaided and aided time domain waveforms and bar graphs representing RMS values at different presentation conditions.
Figure 6.
Amplification increased response amplitude in the consonant transition region at 65 dB SPL, but not at the other presentation conditions. Latency decreased for 65 and 80 dB SPL in quiet but not for 80 dB SPL in noise. A. Time domain waveforms for unaided (black) and aided (red) responses. The asterisks indicate significant latency decreases. B. Bar graphs demonstrating RMS increases in the consonant transition region at 65 dB SPL (top panel) but no changes for other presentation levels or for the steady-state vowel. The asterisks indicate amplitude changes. **p < 0.01, ***p < 0.001. Error bars = 1 S.E.
Level Effects
Response amplitude was significantly higher in response to the 80 dB SPL level than to the 65 dB SPL level [F(1,34) = 8.342, p = 0.007, η2 = 0.197], and there was a level × region interaction [F(1,34) = 5.040, p = 0.031, η2 = 0.129]. Amplitude was larger for the 80 dB SPL level in quiet in both regions across amplification conditions, but the effect was larger for the steady state [transition: F(1,34) = 7.106, p = 0.012, η2 = 0.173; steady state: F(1,34) = 10.695, p = 0.002, η2 = 0.239]. The level × aided interaction was not significant [F(1,34) = 0.713, p = 0.404, η2 = 0.021].
Noise Effects
Noise had no effect on response amplitude [F(1,33) = 0.000, p = 0.994, η2 = 0.000].
Latency
Amplification Effects
Amplification resulted in significant latency decreases at 65 dB SPL [t(34) = 5.187, p < 0.001, η2 = 0.442] and at 80 dB in quiet [t(34) = 3.717, p = 0.001, η2 = 0.289], but not at 80 dB in noise [t(33) = 1.582, p = 0.123, η2 = 0.070].
Level Effects
Latencies were earlier at 80 dB SPL in quiet than at 65 dB SPL [F(1,34) = 13.098, p = 0.001, η2 = 0.278], and the level × aided interaction was not significant [F(1,34) = 2.459, p = 0.126, η2 = 0.067].
Noise Effects
Noise had no effect on response latency [F(1,33) = 0.507, p = 0.481, η2 = 0.015].
Cortical
Table 4 provides means and standard deviations of amplitude and latency values and F statistics for main effects and interactions for the different presentation conditions. The specific details are as follows:
Table 4.
Aided and unaided means and standard deviations for PLF (μV), RMS (μV), and latency (ms) are displayed for three presentation conditions (65 dB SPL, 80 dB SPL in quiet, and 80 dB SPL in noise). In addition, the F statistic is provided for main effects of amplification, level, and noise, and region × aided, aided × level, and region × level interactions.
|   |   | PLF (μV) Mean (S.D.) |   |   |   | Amplitude (μV) Mean (S.D.) |   | Latency (ms) Mean (S.D.) |   |   |   |
|---|---|---|---|---|---|---|---|---|---|---|---|
| dB SPL |   | Transition 100 Hz | Transition 200 Hz | Steady State 100 Hz | Steady State 200 Hz | Transition | Steady State |   |   |   |   |
| 65 dB | Unaided | 0.041 (0.018) | 0.027 (0.009) | 0.046 (0.022) | 0.025 (0.005) | 0.076 (0.027) | 0.074 (0.025) | 32.05 (0.90) | |||
| Aided | 0.061 (0.039) | 0.037 (0.022) | 0.058 (0.038) | 0.031 (0.016) | 0.086 (0.031) | 0.076 (0.027) | 31.03 (0.74) | ||||
| Aided | 8.043** | Aided | 4.343 | Aided | 12.91** | ||||||
| Region × Aided | 4.985 | Region × Aided | 7.407* | ||||||||
| Region × Harmonic | 1.886 | ||||||||||
| 80 dB | Unaided | 0.061 (0.027) | 0.040 (0.017) | 0.058 (0.024) | 0.037 (0.014) | 0.096 (0.029) | 0.085 (0.026) | 31.39 (0.68) | |||
| Aided | 0.077 (0.065) | 0.047 (0.033) | 0.067 (0.077) | 0.034 (0.018) | 0.131 (0.141) | 0.085 (0.033) | 30.77 (0.84) | ||||
| Aided | 1.391 | Aided | 1.698 | Aided | 13.819** | ||||||
| Level | 12.510** | Level | 8.342** | Level | 13.098 ** | ||||||
| Aided × Level | 0.622 | Aided × Level | 0.713 | Aided × Level | 2.459 | ||||||
| Region × Level | 2.673 | Region × Level | 5.040 | ||||||||
| Noise | Unaided | 0.065 (0.029) | 0.038 (0.016) | 0.068 (0.044) | 0.037 (0.016) | 0.098 (0.034) | 0.092 (0.033) | 31.34 (0.60) | |||
| Aided | 0.078 (0.058) | 0.038 (0.026) | 0.068 (0.048) | 0.029 (0.010) | 0.121 (0.101) | 0.088 (0.031) | 31.06 (0.94) | ||||
| Aided | 0.115 | Aided | 1.212 | Aided | 2.502 | ||||||
| Noise | 0.014 | Noise | 0.000 | Noise | 0.507 | ||||||
*p < 0.02, **p < 0.004 (corrected α levels using the FDR procedure).
Amplitude
Amplification Effects
In response to the 65 dB SPL in quiet presentation level, there was no main effect of amplification [F(1,34) = 0.782, p = 0.383, η2 = 0.022]. At 80 dB SPL in quiet there was a main effect of amplification [F(1,34) = 4.767, p = 0.007, η2 = 0.309] and a significant aided × peak interaction [F(2,33) = 7.305, p = 0.002, η2 = 0.027], driven by an increase in amplitude for P1, a decrease in amplitude for N1, and no change in P2 [P1: t(34) = 2.952, p = 0.006, η2 = 0.204; N1: t(34) = 2.794, p = 0.008, η2 = 0.187; P2: t(34) = 0.423, p = 0.675, η2 = 0.027]. In the noise condition there was no main effect of amplification [F(1,34) = 0.109, p = 0.743, η2 = 0.003]. Refer to Figure 7 for a display of unaided and aided time domain waveforms and bar graphs representing amplitude levels at different presentation conditions.
Level Effects
There was no main effect of level [F(1,34) = 0.369, p = 0.547, η2 = 0.011] but there was a level × peak interaction [F(1,34) = 9.148, p = 0.001, η2 = 0.357]. P1 amplitude increased significantly with presentation level [F(1,34) = 6.137, p = 0.018, η2 = 0.153], but in contrast, N1 amplitude decreased significantly with presentation level [F(1,34) = 16.102, p < 0.001, η2 = 0.321]. There was no main effect of level for P2 [F(1,34) = 0.263, p = 0.611, η2 = 0.008]. The aided × level interaction was not significant [F(1,34) = 0.245, p = 0.624, η2 = 0.007].
Noise Effects
There was a main effect of noise [F(1,34) = 57.469, p < 0.001, η2 = 0.628] but no peak × noise interaction [F(1,34) = 2.905, p = 0.069, η2 = 0.150] or aided × noise interaction [F(1,34) = 0.491, p = 0.488, η2 = 0.014].
Latency
Amplification Effects
In response to the 65 dB SPL presentation level, amplification resulted in earlier latencies [F(1,34) = 8.994, p < 0.001, η2 = 0.428], and there was no aided × peak interaction [F(1,34) = 2.519, p = 0.096, η2 = 0.132]. There was a main effect of amplification at 80 dB SPL in quiet [F(1,34) = 9.323, p < 0.001, η2 = 0.466], and there was also a significant aided × peak interaction [F(1,34) = 8.338, p = 0.001, η2 = 0.336], driven by an amplification-related latency decrease for P1 that was not present for N1 or P2 [P1: t(34) = 5.029, p < 0.001, η2 = 0.427; N1: t(34) = 0.612, p = 0.545, η2 = 0.011; P2: t(34) = 2.034, p = 0.095, η2 = 0.080]. Amplification did not affect peak latencies in noise [F(1,34) = 0.573, p = 0.637, η2 = 0.051].
Level Effects
There was no main effect of level [F(1,34) = 1.304, p = 0.261, η2 = 0.037], but there was a level × peak interaction [F(1,34) = 8.880, p = 0.001, η2 = 0.350]. There was a significant latency decrease in the 80 dB SPL in quiet vs. the 65 dB SPL conditions for P1 [F(1,34) = 15.303, p < 0.001, η2 = 0.310] and N1 [F(1,34) = 14.792, p = 0.001, η2 = 0.303] but not for P2 [F(1,34) = 1.063, p = 0.310, η2 = 0.030]. The aided × level interaction was not significant [F(1,34) = 2.236, p = 0.144, η2 = 0.062].
Noise Effects
There was a main effect of noise on peak latencies [F(1,34) = 254.001, p < 0.001, η2 = 0.882] and a significant noise × peak interaction [F(1,34) = 63.864, p < 0.001, η2 = 0.795]. Noise had no effect on the latency of the P1 peak [F(1,34) = 1.609, p = 0.213, η2 = 0.045] but significantly delayed the latency of N1 [F(1,34) = 131.171, p < 0.001, η2 = 0.795] and P2 [F(1,34) = 79.130, p < 0.001, η2 = 0.699]. There was no noise × aided interaction [F(1,34) = 0.754, p = 0.391, η2 = 0.022].
Discussion
This study investigated hearing aid amplification effects on FFRs and CAEPs to a speech syllable in first-time hearing aid users at levels that approximated typical listening conditions. Overall results suggest that amplification may improve subcortical representation of the speech syllable /ga/. More notably, the findings throughout the study suggest that this improvement may, in part, be due to increased audibility. While previous studies found minimal amplification effects on CAEPs in individuals with normal hearing (Billings et al. 2011; Billings et al. 2007), the current investigation found differences in CAEP responses between aided and unaided conditions, consistent with Van Dun et al. (2016). These results suggest the importance of using participants with sensorineural hearing loss when investigating the efficacy of incorporating evoked potentials in the hearing aid fitting.
Frequency-Following Response
Phase Locking Factor and RMS Amplitude
Amplification effects on the FFR were similar for the phase locking factor and RMS amplitude. More consistent phase locking and larger amplitudes were observed in aided compared with unaided responses, but only at the 65 dB SPL presentation level, not at the 80 dB SPL level in quiet or in noise (Figs. 3–5). Furthermore, the increase in RMS amplitude was stronger in the transition region than in the steady-state region (Fig. 6). The relatively higher frequency transition region has lower RMS power than the steady-state region (Fig. 2).
Level effects were also observed in the form of improved phase locking and increased RMS amplitudes at higher presentation levels for both the transition and steady-state regions. The increases in phase locking and amplitude from 65 dB SPL to 80 dB SPL in quiet suggest, in part, improved midbrain processing due to increased audibility of the signal. We found no significant aided × level interactions for either RMS or PLF, suggesting that level effects were similar for unaided and aided conditions. These results are similar to the level effects found by Easwar et al. (2015a), who noted that increasing the test level from 50 to 65 dB SPL resulted in an increase in response amplitudes. In their follow-up study, Easwar et al. (2015b) did find a level × aided interaction for response amplitude: the differences between aided and unaided responses to the first formant of the vowel /a/ were greater at 50 dB SPL than at 65 dB SPL, although a main effect of amplification was present at both levels. In our study, we presented stimuli at 65 dB SPL and 80 dB SPL; amplification effects might have been more pronounced had we included stimulus levels below 65 dB SPL.
Latency
Increases in audibility may also explain the reductions seen in FFR latencies (Fig. 6). Decreased latencies were seen in aided compared to unaided responses for both 65 and 80 dB SPL in quiet presentation levels. We used a 2-ms correction for latency to account for hearing aid processing time, corresponding to the delay noted for the low-frequency components of the signal (frequency-specific delays provided by Widex USA). If the latency reduction was attributable to this correction, we would have expected a more uniform latency decrease across conditions. However, we found that the latency decrease was greatest for 65 dB SPL in quiet, was smaller for 80 dB SPL in quiet, and was not significant for 80 dB SPL in noise. The latency results may provide support for the idea that improved midbrain processing is the result of increased audibility. However, differences in latency changes between listening conditions may also be influenced by smaller changes in sensation level at the 80 dB SPL presentation level due to the compression in the hearing aids or to increased audibility of the stimulus at 80 dB SPL in the unaided condition.
Noise Effects
A number of factors may account for the lack of significant noise effects on any of the FFR variables. In this study we used a relatively favorable SNR of +10 dB, which may not have degraded neural synchrony sufficiently to affect midbrain processing (Easwar et al. 2015b). Li et al. (2011) evaluated the effects of noise on FFRs to lexical tones in young adults and found that midbrain processing was relatively unaffected at SNRs of 6 and 12 dB, but the F0 amplitude and other measures of neural fidelity were significantly decreased at SNRs of −6 and −12 dB. These results suggest the need to use more unfavorable SNRs when evaluating effects of noise on midbrain processing.
Minimal effects of noise might also be attributed to hearing loss. Noise-related amplitude reductions and latency shifts in the auditory brainstem response have been demonstrated in young adults (Burkard et al. 2002; Hecox et al. 1989), but these effects are not as pronounced in older adults with hearing loss. Even older adults with normal hearing show reduced effects of noise compared to younger adults. In a comparison of FFRs to a speech syllable presented in one-talker babble at SNRs varying from −6 to +3 dB, Presacco et al. (2016b) found that noise minimally affected the response amplitudes of older adults with normal hearing compared to younger adults with normal hearing. These differences may arise from age-related cochlear synaptopathy. A study investigating the feasibility of using click-evoked Wave-V latency shifts in noise as a measure of cochlear synaptopathy in humans found that Wave-V latency shifts in noise mirrored changes in Wave-I amplitude (Mehraei et al. 2016). The study also found that greater Wave-V latency shifts correlated with better performance on a temporal processing task (discrimination of interaural time differences). Taken together, these results suggest that larger Wave-V shifts in noise are an indication of healthier auditory nerve function. Therefore, the lack of noise effects on any of the FFR measures in this study may be due to reduced auditory nerve function associated with hearing loss.
Cortical Response
Effects of amplification
The results of our cortical analyses only partially supported our hypothesis. Changes in the P1 peak between aided and unaided responses suggest effects of increased audibility associated with amplification, specifically increased P1 amplitude and decreased P1 latency. The P1 component likely reflects a nonspecific sensory response to an acoustic stimulus (Čeponienė et al. 2005; Sharma et al. 2002; Shtyrov et al. 1998); therefore, increased amplitude/decreased latency with amplification may indicate increased detection (Fig. 7). These results are consistent with those of Billings et al. (2012) who found that CAEPs may reflect physiological detection of hearing-aid processed signals.
Figure 7.
Different effects of amplification were noted for different cortical components. A. DSS rectified waveforms for unaided (black) and aided (red) responses at three presentation levels with the P1, N1 and P2 peaks. The asterisks indicate significant latency decreases. B. Amplitude of the cortical peaks for unaided (black) and aided (red) response for three presentation levels. A significant increase in P1 amplitude and a significant decrease in N1 amplitude from unaided to aided responses was noted in the 80 dB SPL condition. The asterisks indicate significant amplitude changes. *p < 0.05, **p < 0.01, Error bars=1 S.E.
The results for the N1 peak did not support our hypothesis of larger amplitudes with amplification. N1 amplitudes were smaller for aided than unaided responses at 80 dB SPL in quiet, and decreased with higher presentation levels. These results contrast with those of Van Dun et al. (2016), who found that N1 amplitude increased with amplification. Our study differed from that of Van Dun et al. in that we assessed overall cortical activity, whereas the Van Dun study focused its analyses on the Cz electrode only. We found the amplitude decrease only when we analyzed overall electrode activity using the DSS analysis, not when we compared amplitudes for the Cz electrode alone. The results in the unaided condition may reflect aging and hearing loss effects that lead to exaggerated amplitudes of the N1 component due to inefficient resource allocation. Using CAEPs, Billings et al. (2015) found that the older group with hearing loss had larger N1 amplitudes than either the younger or older groups with normal hearing, while this effect was not seen for other components. Using magnetoencephalography, several studies have found over-representation of the N1 and P2 components in older adults, both with normal hearing and with hearing loss, compared to young adults with normal hearing (Alain et al. 2014; Presacco et al. 2016a, 2016b; Sörös et al. 2009). Imaging studies have suggested that cortical network connectivity is reduced in older adults, resulting in redundant processing of the same stimulus by neighboring cortical areas (Peelle et al. 2010). This redundant processing may contribute to over-representation of the N1 component. A reduction in N1 amplitude may therefore indicate that amplification decreases redundant processing.
Effects of level
Increasing the presentation level resulted in increased P1 amplitudes and decreased N1 amplitudes and decreased P1 and N1 latencies for both aided and unaided conditions. These results are consistent with those of Billings et al. (2015), who found effects of level in older participants with hearing loss but not in the normal hearing younger or older groups. Previous studies revealed minimal effects of level on the CAEPs of individuals with normal hearing (Billings et al. 2013; Billings et al. 2009), but the fact that increased level results in changes in individuals with hearing loss suggests that audibility is a key factor in these changes. The direction of the amplitude change for P1 and N1 is consistent with the interpretation of amplification effects. Larger P1 amplitudes suggest that increased level leads to increased detectability for this sensory component. On the other hand, the N1 component, which is believed to reflect early triggering of attention to auditory signals (Ceponiene et al. 2002; Näätänen 1990), decreases in amplitude with level, suggesting that less neural activity is required to trigger attention to the signal at louder input levels. The P2 component was unaffected by level. This component may represent a later stage of auditory processing than signal detection, possibly auditory object formation (Ross et al. 2013), and may be minimally affected by level.
Effects of noise
In contrast to the FFR, significant effects of noise were noted for aided and unaided CAEPs (Fig. 7). Significant increases in latency were noted for the N1 and P2 components, and the P2 component showed a significant reduction in amplitude, consistent with the findings of previous studies (Billings et al. 2015; Kuruvilla-Mathew et al. 2015; Sharma et al. 2014). The differences in noise effects between midbrain and cortex in individuals with hearing loss may arise from how these signals are processed. Precise synchrony is required to accurately represent signals in the brainstem and midbrain, even when those signals are presented in quiet (Kraus et al. 2000), but cortical responses may be present even in cases of complete desynchronization of the auditory nerve or lower levels of the auditory system (Chambers et al. 2016; Kraus et al. 2000). Nevertheless, cortical responses are more vulnerable to the effects of noise in individuals with auditory neuropathy than in individuals with normal hearing (Michalewski et al. 2009). In our study, the FFRs of the participants (who all had hearing loss) may have been affected by desynchronization to some degree, such that the addition of noise (at least at a favorable SNR) did not significantly affect an already degraded system, yet their cortical responses remained vulnerable to noise effects. As noted by Sharma et al. (2014), however, it is important to include multiple stimulation levels and SNRs to make any reasoned interpretations about noise effects on cortical processing.
Limitations
Speech stimuli were presented in sound field to simulate an ecologically valid listening situation. When direct audio input is used, as in the Easwar et al. (2015b) study, the hearing aid microphone is bypassed. However, sound field presentation introduces the possibility of jitter contaminating the response through slight movements on the part of the participants. The participants were encouraged to remain still while watching a subtitled movie, and their movements were monitored through observation of electrical activity and through a webcam. Nevertheless, slight movements may have reduced the temporal precision of the responses. This possibility likely does not change interpretation of the results, as robust responses above the noise floor were obtained in all of the participants (response SNRs > 1 dB).
Another aspect of the method that may limit the interpretation of the findings is the use of open-fit hearing aids. The effect of this fitting is that the unaided speech signal enters the ear canal through the open dome, along with an aided signal that has been processed through the hearing aid circuitry. The aided signal has a processing delay that varies depending on the frequency content, and this delay results in a degree of temporal smearing that will also affect the recording. This temporal smearing may minimize amplification effects on evoked potentials. Nevertheless, the resulting somewhat distorted signal represents what the hearing aid listener becomes accustomed to during every day listening.
Participants were part of a larger plasticity study; therefore, to mimic the typical clinic fitting, we used traditional real-ear measures to verify that the hearing aids appropriately met NAL-NL2 targets. However, controlled comparisons of aided and unaided responses using in-situ measures during EEG testing in each individual participant would have allowed us to make more definitive interpretations of our data.
Although the hearing aids are designed to amplify stimuli from 100–7300 Hz, there was a significant roll-off of hearing aid output above 2000 Hz. When gain was increased to match the higher frequency targets, the participants experienced feedback or discomfort because of the level of the high-frequency sounds. As indicated by Tables 1 and 2, adequate audibility above 2000 Hz was not achieved.
The listeners had never worn hearing aids before participating in this experiment; therefore, these results may not apply to experienced hearing aid users. As mentioned previously, work is underway to determine if and when these effects change with hearing aid use over time. Twelve weeks of hearing aid use did not result in changes in the click-evoked auditory brainstem response (Dawes et al. 2013), but it may be possible to see neuroplasticity with a more complex stimulus, as has been demonstrated in auditory training studies (Anderson et al. 2013a, 2013b).
Conclusions
As hearing aid technology advances, so too must the assessment of the hearing aid fitting. Because speech perception is influenced by both peripheral and central components of the auditory system, it is important that verification of hearing aid benefit be confirmed beyond the tympanic membrane at higher levels of auditory processing. The current study demonstrates that it is feasible to collect aided sound field responses to ecologically valid signals in listeners with sensorineural hearing loss, and to identify key differences between aided and unaided responses that may indicate increased audibility with amplification. This information contributes to the understanding of the central effects of amplification, and may lead to the development of improved verification measures and hearing aid algorithms to improve speech intelligibility for listeners with sensorineural hearing loss. However, future studies should consider study designs and stimulus parameters that might provide better control of amplification contrasts, such as the use of direct audio input or wireless transmission, real-ear measures during EEG testing, and a greater range of stimulus level and frequency differences. Furthermore, substantial inter-subject variability was noted, possibly related to different degrees or etiologies of hearing loss, and it will be important to identify sources of individual variability in responses. Hearing aid manufacturers employ different strategies to maximize speech clarity, and evoked potentials may provide a useful tool for evaluating the effects of these strategies on neural speech processing, taking into consideration the likelihood of large variability in this population.
In this study, clear differences were noted on both FFRs and CAEPs between aided and unaided responses, at least at suprathreshold levels. The comparison of aided and unaided FFRs and CAEPs may provide information, in part, regarding audibility of the speech signal at central levels of the auditory system. This study also verifies the importance of testing participants with hearing loss when determining amplification effects on central processing. Therefore, it provides important first steps into determining how auditory evoked responses can be utilized to enhance hearing aid outcomes, as well as improve methods of investigation into the possible uses of auditory evoked potentials in future aural rehabilitation treatment plans.
Table 5.
Aided and unaided means and standard deviations for cortical DSS amplitude (μV), and latency (ms) values are displayed for three presentation conditions (65 dB SPL, 80 dB SPL in quiet, and 80 dB SPL in noise). In addition, the F statistic is provided for main effects of amplification, level, and noise, and peak × aided interactions.
| Amplification Effects |   |   |   |   |   |   |   |   |   |
|---|---|---|---|---|---|---|---|---|---|
|   |   | Amplitude (μV) Mean (S.D.) |   |   |   | Latency (ms) Mean (S.D.) |   |   |   |
| dB SPL |   | P1 | N1 | P2 | F statistic | P1 | N1 | P2 | F statistic |
| 65 | Unaided | 3.08 (1.39) | 3.42 (1.10) | 3.09 (0.59) | 59.41 (6.58) | 107.79 (13.82) | 213.83 (19.98) | ||
| Aided | 3.09 (1.29) | 3.36 (1.14) | 2.90 (0.68) | 55.51 (6.56) | 108.21 (12.72) | 210.59 (12.70) | | | |
| Aided | 0.782 | Aided | 8.994** | ||||||
| Peak × Aided | Peak × Aided | 2.519 | |||||||
| 80 | Unaided | 3.13 (1.40) | 3.16 (1.06) | 2.94 (0.70) | 56.22 (9.18) | 99.33 (13.43) | 211.89 (12.05) | ||
| Aided | 3.63 (1.43) | 2.80 (1.01) | 2.90 (0.64) | 49.66 (8.13) | 100.84 (16.17) | 217.48 (15.94) | |||
| Aided | 4.767* | Aided | 9.332** | ||||||
| Peak × Aided | 7.305** | Peak × Aided | 8.338** | ||||||
| Noise | Unaided | 2.59 (1.12) | 3.01 (0.93) | 1.68 (0.85) | 56.28 (10.89) | 150.02(30.68) | 251.27 (18.37) | ||
| Aided | 2.41 (1.10) | 3.08 (1.02) | 1.72 (0.87) | 55.11 (8.56) | 156.66(25.06) | 252.78 (16.95) | |||
| Aided | 0.109 | Aided | 0.573 | ||||||
| Level Effects | |||||||||
| F statistic | Amplitude | Latency | |||||||
| P1 | N1 | P2 | P1 | N1 | P2 | ||||
| 6.137* | 16.102** | 0.263 | 15.303** | 14.792** | 2.523 | ||||
| Noise Effects | |||||||||
| F statistic | Amplitude | Latency | |||||||
| P1 | N1 | P2 | P1 | N1 | P2 | ||||
| 13.193** | 0.292 | 91.067* | 1.609 | 131.171** | 79.130** | ||||
p < 0.03,
p < 0.005 (corrected α levels using the FDR procedure).
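The corrected α levels above come from the false discovery rate (FDR) procedure of Benjamini and Hochberg. As a minimal illustrative sketch of how such per-comparison thresholds are obtained (the p-values below are hypothetical examples, not values from this study):

```python
def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure.

    Returns a boolean list marking which hypotheses are rejected
    while controlling the expected false discovery rate at q.
    """
    m = len(pvals)
    # Rank the p-values from smallest to largest, remembering positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k such that p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # Reject every hypothesis ranked at or below k_max.
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

# Hypothetical p-values from six comparisons (not the study's data).
pvals = [0.001, 0.004, 0.019, 0.030, 0.20, 0.45]
print(fdr_bh(pvals, q=0.05))  # the first four survive correction
```

Note that the step-up rule rejects all tests ranked below the largest passing rank, even if an individual p-value exceeds its own threshold, which is what makes FDR correction less conservative than Bonferroni for families of related comparisons like the peak-wise tests reported here.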
Acknowledgments
We would like to thank Widex USA for providing hearing aids, participant funds, and KEMAR measurements. We would also like to thank Dr. Richard Wilson for providing the six-talker babble used in the noise presentation condition. We also wish to acknowledge Lauren Evans, Arielle Abrams, Alyson Schapiro, Andrea Kaplanges, Alanna Schloss, and other lab members for their help with data collection and analysis. This study was funded by the Hearing Health Foundation and by NIH NIDCD T32DC000046.
Footnotes
Authors’ Contributions
S.A. designed the experiment; K.J., C.F., A.P., and S.A. collected and analyzed the data; K.J., C.F., A.P., and S.A. wrote the paper.
Financial Disclosures/Conflicts of Interest: We have no conflict of interest to report. This study was funded by University of Maryland’s Department of Hearing and Speech Sciences, the Hearing Health Foundation, NIH-NIDCD Grant T32DC000046, and Widex USA, Inc. who provided hearing aids for the duration of the study and contributed to subject compensation.