Abstract
Objectives:
The goal of this study was to identify the effects of auditory deprivation (age-related hearing loss; ARHL) and auditory stimulation (history of hearing aid use) on the neural registration of sound across two stimulus presentation conditions: 1) equal sound pressure level (SPL) and 2) equal sensation level (SL).
Design:
We used a between-groups design involving three groups of 14 older adults (n = 42, 62–84 years): 1) clinically defined normal hearing (≤ 25 dB HL from 250–8000 Hz, bilaterally), 2) bilateral mild to moderate/moderately severe sensorineural hearing loss with no history of hearing aid use, and 3) bilateral mild to moderate/moderately severe sensorineural hearing loss with bilateral hearing aid use for at least the past 2 years.
Results:
There were significant delays in the auditory P1-N1-P2 complex in older adults with hearing loss compared with their normal hearing peers when equal SPLs were used for all participants. However, when the degree and configuration of hearing loss were accounted for through the presentation of equal SL stimuli, no latency delays were observed. These results suggest that stimulus audibility modulates P1-N1-P2 morphology and should be controlled for when defining deprivation- and stimulation-related neuroplasticity in people with hearing loss. Moreover, a history of auditory stimulation, in the form of hearing aid use, does not appreciably alter the neural registration of unaided auditory evoked brain activity as quantified by the P1-N1-P2.
Conclusions:
When comparing auditory cortical responses in older adults with and without hearing loss, reduced stimulus audibility, and not hearing loss-related neurophysiological change, results in delayed response latencies for those with age-related hearing loss. Future studies should carefully consider stimulus presentation levels when drawing conclusions about deprivation- and stimulation-related neuroplasticity. Additionally, auditory stimulation, in the form of a history of hearing aid use, does not significantly affect the neural registration of sound when quantified using the P1-N1-P2 evoked response.
INTRODUCTION
As the world’s population ages, age-related sensorineural hearing loss (ARHL), also referred to as presbycusis, will affect a growing number of individuals globally. Most concerning is that only a small fraction of older adults with hearing loss pursue treatment with hearing aids (Chien & Lin, 2012). Left untreated, progressive ARHL has been shown to correlate with decreased speech understanding, both in quiet and in noise (Dubno et al., 2008; Akeroyd, 2008), social withdrawal (Singh & Pichora-Fuller, 2010), an increased risk of falls (Viljanen et al., 2009), depression (Huang et al., 2010), and cognitive decline, including decreased working memory function and dementia (Lin et al., 2011; Lin, 2011; Lin et al., 2013; Gurgel et al., 2014; Deal et al., 2015). Collectively, these findings suggest that the effects of sound deprivation in older adults may extend beyond sensory aspects of the auditory system, and that a large proportion of the population is at risk of experiencing such consequences. For these reasons, there is growing interest in understanding the effects of auditory deprivation (e.g., due to ARHL) and auditory stimulation (e.g., via hearing aid use) on cortical processing in older adults (see Grady, 2012; Albers et al., 2015, for reviews).
Hearing loss related to peripheral auditory damage is known to affect the integrity of sound information arriving at the cortex. Such central effects of peripheral pathology (CEPP), identified in animal models, include synaptic and neuronal losses along the auditory pathway and in the auditory cortex (Willott et al., 1991; Kazee et al., 1995; Willott & Bross, 1996), changes in auditory cortical frequency representation (Willott et al., 1993; Carlson & Willott, 1996), abnormal temporal processing (Zhong et al., 2014; Eggermont, 2016), and dysfunction of the excitatory and inhibitory neurotransmitter systems (Profant et al., 2013; Alvarado et al., 2014). Central auditory system changes subsequent to ARHL have additionally been reported in human subjects using various neuroimaging techniques, including EEG and fMRI. These studies described auditory cortex atrophy and brain volume decline (Eckert et al., 2012; Lin et al., 2014), altered functional connectivity between sensory cortices (Puschmann & Thiel, 2016), changes in glutamate and lactate levels in the auditory cortex (Profant et al., 2013), downregulation of neural activity during speech processing (Peelle et al., 2011), and altered subcortical speech encoding (Anderson et al., 2013; Millman et al., 2017) in groups with auditory sensory deficits. Thus, the reduced fidelity of auditory input, which affects the way sound is relayed to and processed in the brain, is believed to contribute significantly to auditory communication difficulties (see Peelle & Wingfield, 2016, for a further review). As a result, there is keen interest in defining how degraded sound input associated with ARHL, as well as the reintroduction of sound to the auditory system through hearing aids, may affect the auditory system and higher-level neural networks.
Such results could help define the limits and benefits of clinical interventions, such as hearing aids, by providing electrophysiological evidence of neuroplasticity (or the lack thereof) following treatment.
The auditory P1-N1-P2 cortical evoked response has been used to study the central effects of ARHL and amplification (for reviews, see Picton & Durieux-Smith, 1988; Picton, 2013; Tremblay, 2015). The P1-N1-P2 reflects sound detection at the level of the auditory cortex (Hillyard & Kutas, 1983). It consists of a series of positive and negative deflections in the scalp-recorded EEG between 50–250 milliseconds after stimulus presentation and is sensitive to the parameters of the sound stimulus (e.g., intensity, frequency, duration), with increases in stimulus intensity generally leading to increased response amplitude and decreased latency (Adler & Adler, 1989; Hyde, 1997). While some studies have identified increased response latencies, interpreted as evidence of inefficiencies in cortical sound processing (Campbell & Sharma, 2013), others reported no effects of ARHL on P1-N1-P2 latencies (Tremblay et al., 2003; Bertoli et al., 2005; Harkrider et al., 2005; Bertoli et al., 2011; Alain et al., 2014). Several studies reported increased component amplitudes, possibly related to decreased neural inhibition (Tremblay et al., 2003; Harkrider et al., 2005; Bertoli et al., 2011; Campbell & Sharma, 2013; Alain et al., 2014), while others revealed equivalent component amplitudes in older adults with hearing loss compared with their normal hearing peers, suggesting that sound encoding up to the level of the cortex may be minimally affected by auditory deprivation associated with ARHL (Tremblay et al., 2003; Bertoli et al., 2005).
Mixed findings and interpretations can be partially attributed to the limited number of studies investigating ARHL effects on cortical sound processing, and are also likely related to methodological differences, such as varied inter-stimulus intervals, stimulus type (e.g., tone, speech), recording context (e.g., passive recording vs. active tasks), stimulus audibility, and study population (e.g., individuals with untreated or treated hearing loss). For example, previously published results suggest that the auditory P1-N1-P2 could serve as a biomarker of cognitive resource reallocation related to hearing loss (Campbell & Sharma, 2013; Lister et al., 2016) and reduced working memory function (Karawani et al., 2018). Campbell and Sharma (2013) recorded auditory P1-N1-P2 responses to a speech syllable (/ba/). They focused a portion of their analysis on a subset of fronto-central electrodes and found that adults with hearing loss had longer P2 latencies and larger P2 response amplitudes when compared to their normal hearing peers. Their interpretation was that these latency and amplitude changes, in addition to topographical changes, represented cortical resource re-allocation resulting from auditory deprivation. One important limitation of Campbell and Sharma (2013) was the use of the same stimulus sound pressure level for all participants, irrespective of hearing status. It is well known that decreased stimulus intensity generates P1-N1-P2 responses that are prolonged in latency and reduced in amplitude (Adler & Adler, 1989). In other words, signal attenuation due to hearing loss decreases stimulus audibility, resulting in smaller and later responses for adults with hearing loss (Oates et al., 2002). Therefore, the prolonged P2 latencies reported by Campbell and Sharma (2013) may reflect reduced stimulus audibility rather than deprivation-related neuroplastic changes associated with ARHL.
There are several ways to avoid the confound of stimulus audibility when recording auditory evoked potentials. Ross et al. (2007), for example, accounted for individual differences in hearing thresholds by presenting stimuli 60 dB above each participant’s threshold for the specific stimulus used in the study. Others have presented stimuli at a particular sensation level (SL), relative to an individual’s pure-tone average or threshold at a specific frequency (e.g., Bidelman et al., 2014), or applied spectral shaping to enhance specific aspects of the sound stimuli (Harkrider et al., 2005; Harkrider et al., 2006). An additional way to account for differences in auditory thresholds is to simulate an equal amount of hearing loss for all participants. While this latter approach has been used in behavioral studies involving older adults (see Humes, 2007), it has seldom been used to study the effects of ARHL on auditory evoked responses. To summarize, when there are dissimilarities in auditory evoked response timing or strength between older individuals with and without hearing loss, it is unclear whether the differences are related to the research question (e.g., speech perception in noise or cognitive decline) or, rather, confounded by the use of a single stimulus level, resulting in unequal stimulus audibility.
Finally, only a handful of studies have looked at possible cortical neural correlates of hearing aid use in adults with age-related hearing loss (Bertoli et al., 2011; Dawes et al., 2014; Giroud et al., 2017; Karawani et al., 2018). For example, Dawes et al. (2014) measured auditory evoked P1-N1-P2 responses in first-time hearing aid users prior to and following 12 weeks of unilateral or bilateral hearing aid use. They found no significant changes in P1-N1-P2 amplitudes or latencies; however, their analyses were limited to a single vertex electrode. Bertoli et al. (2011) examined the effects of long-term unilateral and bilateral hearing aid use on the P1-N1-P2 response in older adults with bilaterally symmetrical sensorineural hearing loss, as compared to their normal hearing peers. They found larger P2 amplitudes for unilateral hearing aid users, as compared to bilateral users and normal hearing peers, which they interpreted as more effortful listening in the unilateral group. They did not, however, find significant differences in amplitude for the P1 or N1, or differences in latency for any of the response components. Karawani et al. (2018) reported increased N1 and P2 amplitudes in aided cortical responses following 24 weeks of hearing aid use, which correlated with improvement on a working memory task. They interpreted their findings to suggest that hearing aid experience resulted in enhanced cortical sound processing and improved cognitive function. However, they note in their discussion that the experimental group was seen four times more often than the control group, to check hearing aid usage/compliance and for EEG sessions. Consequently, increased amplitudes, particularly for the P2 component, may have been affected by increased stimulus exposure, as reported in auditory training studies (Sheehan et al., 2005; Tremblay et al., 2010; Tremblay et al., 2014).
Therefore, the effects of hearing loss treatment, in the form of consistent daily hearing aid use, on the neural registration of sound remain unclear.
In this study, we set out to characterize the effects of auditory deprivation in the form of ARHL, as well as auditory stimulation in the form of a history of hearing aid use, on the neural registration of sound in three groups of older adults, using electroencephalography (EEG). Group 1 included older adults with clinically-defined normal hearing (NH). Individuals in group 2 had bilateral, mild sloping to moderate/moderately-severe sensorineural hearing loss and had never worn hearing aids (untreated hearing loss, u-HL). Group 3 included those with a similar amount of hearing loss as group 2, but who had used bilateral amplification consistently for at least the past two years (treated hearing loss, t-HL). Most importantly, this study controlled for hearing loss as well as stimulus audibility, with two stimulus conditions, which we describe as equal sound pressure level (SPL) and equal sensation level (SL). We hypothesized that ARHL would negatively impact the speed and strength of the neural registration of sound because of reduced audibility and that these effects would be resolved when stimuli were presented at equal SL. Additionally, because of the well documented auditory stimulation- and deprivation-related neuroplastic changes seen in animal models (e.g., Willott et al., 1993; Pienkowski & Eggermont, 2011), we also hypothesized that groups with untreated and treated hearing loss would show morphological differences in the P1-N1-P2 response.
MATERIALS AND METHODS
All procedures were approved by the Institutional Review Board of the University of Washington. Participants were recruited from the University of Washington Speech and Hearing Clinic, the University of Washington Communication Studies Participant Pool, study fliers posted at Seattle area businesses, and word of mouth. All participants gave written, informed consent prior to participation and were paid for their time.
Participants
Participants were monolingual English-speaking non-musicians, age 62–84 years, with no reported history of neurological disorders. Group 1 consisted of individuals with normal hearing (NH), defined as pure tone thresholds 25 dB HL or lower from 250–8000 Hz (n=14, age 62–70, M = 66, SD = 2, 12 female). Group 2 included individuals with bilaterally symmetrical (< 15 dB inter-aural difference at any 3 consecutive frequencies or < 20 dB inter-aural difference at any single frequency), mild to moderate/moderately-severe sensorineural hearing loss (u-HL) who had never worn hearing aids (n=14, age 60–72, M = 68, SD = 4, 11 female). Individuals in group 3 also had bilaterally symmetrical mild to moderate/moderately-severe sensorineural hearing loss but had been treated with hearing aids (t-HL) for at least the past two years and reported wearing their devices for at least 6 hours/day (n=14, age 62–84, M = 69, SD = 6, 6 female). All individuals in the t-HL group wore bilateral receiver-in-the-ear (RITE) style digital hearing aids with universal domes or custom earmolds. See Figure 1 for group averaged audiometric data, averaged between ears, at octave frequencies from 250–8000 Hz. There was a slight but significant difference in mean Pure Tone Average (PTA; average thresholds at 500, 1000, and 2000 Hz) between hearing loss groups, with an average PTA for the t-HL of 35 dB HL compared to 26 dB HL for the u-HL group (t (26) = 2.923, p = .007).
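The bilateral-symmetry criterion above is mechanical enough to express in code. The sketch below is an illustrative Python check (the study did not publish code; the function name and frequency list are assumptions):

```python
import numpy as np

# Assumed audiometric test frequencies (Hz) for the symmetry check
FREQS_HZ = [250, 500, 1000, 2000, 3000, 4000, 6000, 8000]

def is_symmetric(left_db, right_db):
    """True if thresholds meet the bilateral-symmetry criterion:
    no single-frequency interaural difference >= 20 dB, and no run of
    3 consecutive frequencies with differences >= 15 dB."""
    d = np.abs(np.asarray(left_db, float) - np.asarray(right_db, float))
    if np.any(d >= 20):                                   # single-frequency rule
        return False
    # three-consecutive-frequency rule
    return not any(np.all(d[i:i + 3] >= 15) for i in range(len(d) - 2))
```

For example, a 25 dB asymmetry at a single frequency, or a 15 dB asymmetry at three consecutive frequencies, would exclude a candidate.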
Figure 1.
Group-averaged audiometric thresholds. NH: normal hearing, u-HL: untreated hearing loss, t-HL: treated hearing loss. Shaded area represents standard deviation from the mean.
Participants were screened for mild cognitive impairment using the Montreal Cognitive Assessment Test (MoCA; Nasreddine et al., 2005). All participants enrolled in this study scored above 23 points (M = 26.7, SD = 1.7, Range 24–30). This cutoff score is known to have 96% sensitivity and 95% specificity for mild cognitive impairment (Luis et al., 2009; Carson et al., 2018). Groups did not significantly differ on MoCA score (F (2,39) = .761, p = .474).
Procedure
Prior to neurophysiology testing, all participants completed an audiological assessment. Pure-tone air and bone conduction audiometry (Madsen Astera, Otometrics: Taastrup, Denmark, or Grason-Stadler Inc. (GSI): Eden Prairie, USA) was performed using the modified Hughson-Westlake procedure (Carhart & Jerger, 1959). Word recognition was tested in quiet (Auditec recordings of Northwestern University Auditory Test Number Six (NU-6) materials) at 40 dB SL re: PTA or louder, with masking used when appropriate to avoid crossover (Yacullo, 1999), and tympanometry confirmed normal middle ear function (Tympstar, GSI: Eden Prairie, USA). To ensure that individuals in the t-HL group received significant daily amplification of sound, their devices’ frequency responses to the standard speech passage at 55, 65, and 70 dB SPL, as well as maximum power output (MPO), were recorded using the Verifit I system (Audioscan, Etymonic Design Inc.: Dorchester, Ontario, Canada) in on-ear mode via a probe-tube microphone inserted into the participant’s ear canal. Hearing aid responses were compared to National Acoustic Laboratories non-linear 2 (NAL-NL2) targets calculated by the Verifit I system. All participants were receiving a significant amount of gain from their hearing aids: all but two participants’ devices were within 10 dB of NAL-NL2 targets from 250–2000 Hz, and the remaining two had a single frequency more than 10 dB below target but were otherwise well fit. Devices were not adjusted under this protocol. Combined audiometric and hearing aid evaluation times approximated 60 minutes.
Stimuli
Two stimuli were presented in a block design, one at a consistent sound pressure level (SPL) for all participants, the other at an equal sensation level (SL). The use of two stimulus presentation conditions allowed us to study the effects of hearing loss and stimulus audibility on cortical auditory evoked responses.
The equal SPL stimuli were 180 ms Klatt synthesized /ba/ speech tokens, used in previous studies (see Tremblay et al., 1997 through 2014). SPL stimuli were presented diotically at 71 dBC (broadband C-weighted) through insert earphones (ER4B, Etymotic Research: Elk Grove, IL).
The equal SL stimuli were 180 ms Klatt synthesized /ba/ speech tokens (same as previously described for SPL), filtered according to the following method to approximate 40 dB SL. Filtering was executed through a custom MATLAB program. First, a target audiogram was created, representing a mild to severe hearing loss, typical of ARHL. Specifically, the target thresholds were 40 dB HL from .25–1 kHz, 60 dB HL at 2 kHz, 65 dB HL at 3 kHz, 70 dB HL at 4 kHz, and 85 dB HL at 5 kHz. The sampling rate of the /ba/ (10 kHz) resulted in no energy above the Nyquist frequency of 5 kHz. Participants’ audiometric thresholds averaged between ears at octave frequencies from .25–5 kHz were then subtracted from the target audiogram values. Thresholds at 5 kHz were approximated by averaging thresholds at 4 and 6 kHz, as the participants with hearing loss had audiometric profiles that were downward sloping at these frequencies. These differences (in dB HL) were interpolated across frequency to create a spectral filter. The spectral filter was applied to the amplitude spectrum of the original /ba/ stimulus, to simulate an equivalent amount and shape of hearing loss across all participants. The newly generated stimulus was reconstructed in the time domain via an inverse discrete fast Fourier transform with a 5 ms Hanning window added to both the onset and offset of the stimulus. This process resulted in a spectrally filtered, individualized stimulus for each participant.
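The spectral-shaping steps above (target audiogram minus individual thresholds, interpolated across frequency and applied to the amplitude spectrum) were implemented in a custom MATLAB program. The Python sketch below re-creates the logic under the paper's stated parameters; the function names and the simple rfft/irfft treatment of the spectrum are assumptions about the original implementation.

```python
import numpy as np

# Target audiogram from the text: 40 dB HL from .25-1 kHz, rising to 85 dB HL at 5 kHz
TARGET_HZ = np.array([250, 500, 1000, 2000, 3000, 4000, 5000], float)
TARGET_DB = np.array([40,  40,  40,   60,   65,   70,   85], float)

def shaping_filter_db(participant_thresholds_db, fft_freqs_hz):
    """Attenuation (dB) at each FFT bin: target audiogram minus the
    participant's thresholds, interpolated across frequency."""
    diff_db = TARGET_DB - np.asarray(participant_thresholds_db, float)
    return np.interp(fft_freqs_hz, TARGET_HZ, diff_db)

def apply_shaping(signal, fs, participant_thresholds_db, ramp_ms=5):
    """Filter the stimulus in the frequency domain, then re-apply
    5 ms Hanning onset/offset ramps as in the original stimulus."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    atten_db = shaping_filter_db(participant_thresholds_db, freqs)
    shaped = np.fft.irfft(spectrum * 10 ** (-atten_db / 20.0), n)
    nr = int(fs * ramp_ms / 1000)
    ramp = np.hanning(2 * nr)
    shaped[:nr] *= ramp[:nr]
    shaped[-nr:] *= ramp[nr:]
    return shaped
```

Note that a participant whose thresholds already match the target audiogram receives no additional attenuation, while a normal-hearing listener (0 dB HL everywhere) receives the full simulated loss.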
Each participant’s stimulus was then calibrated relative to a 46.2 dBC reference stimulus, representing 40 dB SL for an ideal listener with thresholds of 0 dB HL at all frequencies (PTA at .5, 1, 2 kHz = 6.2 dB SPL). Calibration was performed with a Larson Davis System 824 sound level meter and an occluded ear simulator (AEC204). The level of the SL stimuli for the NH, u-HL, and t-HL groups averaged 57.3 dB SPL (SD = 4.7), 70.2 dB SPL (SD = 7.5), and 80.1 dB SPL (SD = 9.1), respectively.
Stimulus Presentation
Stimuli were presented using a custom MATLAB program (version R2013b, The Mathworks, Inc., Natick, MA) and System 2 Real Time Processor (TDT-RP2; Tucker Davis Technologies, Alachua FL). Stimuli were routed through a microphone amplifier (TDT-MA2) to a programmable sound attenuator (TDT-PA5) through a headphone buffer (TDT-HB6) and presented diotically through insert earphones (Etymotic Research, ER-4B). Two blocks of 410 stimuli (i.e., one SPL block and one SL block), approximately 15 minutes per block, were presented with a randomly jittered inter-stimulus interval (offset to onset) of 2–2.25 seconds to avoid anticipatory responses. Presentation order of the two blocks was counterbalanced across subjects within each group.
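A minimal sketch of the presentation timing and counterbalancing described above (illustrative names; the actual control code ran in MATLAB with TDT hardware):

```python
import numpy as np

def jittered_isis(n_trials=410, lo=2.0, hi=2.25, seed=0):
    """Offset-to-onset intervals (s) drawn uniformly from [lo, hi),
    jittered to avoid anticipatory responses."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lo, hi, n_trials)

def block_order(subject_index):
    """Counterbalance SPL/SL block order across subjects within a group
    (alternation is an assumed counterbalancing scheme)."""
    return ['SPL', 'SL'] if subject_index % 2 == 0 else ['SL', 'SPL']
```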
Electrophysiology (EEG) Data Acquisition
EEG recordings were completed in a sound-attenuated booth with participants seated comfortably in a reclining chair. A passive EEG paradigm was used, such that participants were instructed to watch a closed-captioned movie of their choice while staying still and alert and were told to ignore the sounds being played through the earphones. Following the first block of stimuli, each participant was given a 3–5 minute break, during which they could stand and stretch, if desired.
Continuous EEG signals were recorded using a custom 64-channel electrode cap (Electro-cap International), and Neuroscan system software (SCAN, version 4.5) with a Synamps2 amplifier (Compumedics). The electrode montage followed an extended 10–20 system, including 4 ocular (EOG) channels to monitor horizontal and vertical eye movement, a vertex reference channel (CZ), and a ground channel (AFZ). See Appendix for electrode montage. Electrode impedance was kept below 5 kΩ. EEG signals were bandpass filtered from .1 to 100 Hz with 12 dB/octave roll off, amplified with a gain of 2010x and digitized at a sampling rate of 1000 Hz.
Analysis
Offline analysis was completed using a custom MATLAB program in combination with EEGLAB (version 12.0.2.5b; Delorme & Makeig, 2004). Continuous EEG files were imported and down-sampled to 250 Hz. EOG channels were removed, leaving 60 channels. The data were epoched into segments of −300 to 596 ms relative to stimulus onset. Independent component analysis (ICA) was used to remove ocular artifacts, with 1–4 independent components (NH: M = 2.14, SD = .36; u-HL: M = 1.92, SD = .73; t-HL: M = 2.29, SD = .61) removed from each participant’s data. The data were re-referenced to the average reference, and CZ was reconstructed, resulting in 61 total channels. The data were baseline corrected to the interval −300 to 0 ms, and threshold artifact rejection was applied using ±150 μV bounds. Epochs were then sorted according to stimulus condition (SL, SPL). Auditory evoked potentials (AEPs) were generated by averaging across trials for each participant, for each stimulus condition (SL, SPL), at each electrode (1–61). A custom least-squares linear-phase finite impulse response (FIRLS) filter of order 255 was created in MATLAB. The AEPs were zero-padded prior to filtering, and zero-phase digital filtering (MATLAB filtfilt command) was used to low-pass filter the sorted AEP data at 30 Hz. The AEPs were analyzed in two ways: 1) examination of P1-N1-P2 peak latencies and amplitudes at electrodes CZ and FZ; and 2) a whole-scalp approach using cluster-based permutation tests.
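The 30 Hz zero-phase low-pass can be sketched as follows. The paper used a MATLAB FIRLS filter of order 255 applied to zero-padded AEPs; this Python approximation uses SciPy's `firls` (which requires an odd tap count, so 255 taps ≈ order 254) and `filtfilt`'s built-in edge padding. The 35 Hz stopband edge is an assumption, as the original transition band was not reported.

```python
import numpy as np
from scipy.signal import firls, filtfilt

FS = 250.0  # Hz, sampling rate after down-sampling

def lowpass_30hz(aep, fs=FS, numtaps=255):
    """Least-squares FIR low-pass (passband to 30 Hz, stopband from an
    assumed 35 Hz), applied forward and backward for zero phase."""
    taps = firls(numtaps, [0, 30, 35, fs / 2], [1, 1, 0, 0], fs=fs)
    padlen = min(3 * numtaps, len(aep) - 1)  # edge padding for filtfilt
    return filtfilt(taps, 1.0, aep, padlen=padlen)
```

Applied to a signal containing 10 Hz and 80 Hz components, this passes the 10 Hz component essentially unchanged and removes the 80 Hz component.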
For the peak analyses, the peak latency and amplitude were determined for each component (P1, N1, P2) at electrodes CZ and FZ, using a custom MATLAB script. First, the AEPs were grand-averaged across groups to determine the latency of the grand (across-groups) mean P1, N1, and P2 components. Next, these grand mean latency values were used to select the P1, N1, and P2 peak amplitude and latency values for each individual participant, using a window of ± 20 ms around the grand mean latency values. These individual latency and amplitude values were selected automatically in MATLAB and confirmed manually. One-way ANOVAs were used to determine the effect of Group (NH, u-HL, t-HL) on amplitude and latency for each response component (P1, N1, P2) at each stimulus level (SPL, SL). See the Appendix for single-channel group mean latency and amplitude values.
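The windowed peak-picking step can be sketched as below (illustrative Python; the original used a custom MATLAB script). Polarity is +1 for the positive components (P1, P2) and −1 for N1.

```python
import numpy as np

def pick_peak(aep, times_ms, grand_latency_ms, polarity, window_ms=20):
    """Return (latency_ms, amplitude) of the most positive (polarity=+1)
    or most negative (polarity=-1) point within +/- window_ms of the
    grand-mean component latency."""
    idx = np.where(np.abs(times_ms - grand_latency_ms) <= window_ms)[0]
    best = idx[np.argmax(aep[idx] * polarity)]
    return times_ms[best], aep[best]
```

For example, an N1-like trough at 100 ms is recovered even when the grand-mean latency guess is a few milliseconds off, as long as the true peak falls inside the window.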
A whole-scalp analysis was completed using cluster-based permutation tests, as implemented in FieldTrip software (Maris & Oostenveld, 2007; Oostenveld et al., 2011). We used data-driven permutation tests, in addition to traditional peak-picking methods, to determine whether correcting for multiple comparisons (across many time points and 61 channels) and minimizing the bias associated with pre-selecting electrodes (e.g., focusing on CZ or FZ) would reveal group differences not previously reported. Data from all 61 electrodes, from 0 to 300 ms post-sound-onset, were input into the statistical analysis. First, univariate independent-samples t-tests (two-tailed) were conducted at each sample of the true data. Samples with univariate p-values less than 0.05 were aggregated into clusters across neighboring time points and channels. Because cluster formation requires knowing which channels neighbor which, FieldTrip’s built-in “triangulation” method was used to define each channel’s neighbors. Once clusters were formed, FieldTrip’s weighted cluster mass option was used to compute cluster-level statistics, with the “wcm_weight” parameter set to 2000 to emphasize small, intense clusters (e.g., small groups of channels and time points). To form the null distribution, the data were randomly partitioned (i.e., group labels shuffled) and the same steps detailed above (two-tailed independent-samples t-tests at the sample level, cluster formation, calculation of cluster statistics) were repeated for the randomly partitioned data. This process was repeated 5,000 times (the number of permutations), and the resulting null distribution was used to threshold the true cluster-level statistics. The above steps were repeated for four contrasts of interest. Contrasts 1 (NH vs. u-HL, equal SPL) and 2 (NH vs. u-HL, equal SL) assessed the effects of hearing loss before (1) and after (2) accounting for stimulus audibility. Contrasts 3 (u-HL vs. t-HL, equal SPL) and 4 (u-HL vs. t-HL, equal SL) assessed the effects of a history of hearing aid use before (3) and after (4) accounting for stimulus audibility. At the cluster level, an alpha of 0.0125 (0.05/4) was used to correct for the four contrasts tested. Cluster-level (Monte Carlo) p-values are reported in the Results section.
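For intuition, the cluster-based permutation logic can be sketched for a simplified single-channel (time-only) case. The actual analysis used FieldTrip, clustered over channels as well as time via triangulation neighbors, and used weighted cluster mass rather than the plain summed |t| mass shown here.

```python
import numpy as np
from scipy import stats

def clusters_1d(tvals, pvals, alpha=0.05):
    """Contiguous runs of time points with p < alpha; each cluster's
    mass is the sum of |t| over the run. Returns (start, stop, mass)."""
    sig = pvals < alpha
    out, start = [], None
    for i, s in enumerate(np.append(sig, False)):  # sentinel closes a final run
        if s and start is None:
            start = i
        elif not s and start is not None:
            out.append((start, i, np.abs(tvals[start:i]).sum()))
            start = None
    return out

def max_cluster_mass(a, b, alpha=0.05):
    t, p = stats.ttest_ind(a, b, axis=0)  # t-test at every time point
    cl = clusters_1d(t, p, alpha)
    return max((m for *_, m in cl), default=0.0)

def cluster_permutation_p(a, b, n_perm=1000, seed=0):
    """Monte Carlo p-value for the largest observed cluster, formed by
    shuffling group labels and rebuilding the null distribution."""
    rng = np.random.default_rng(seed)
    observed = max_cluster_mass(a, b)
    pooled = np.vstack([a, b])
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(len(pooled))
        null[k] = max_cluster_mass(pooled[perm[:len(a)]], pooled[perm[len(a):]])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

With two groups of 14 "subjects" and a sustained amplitude offset over a run of time points, the observed cluster mass far exceeds the shuffled-label null, yielding a small Monte Carlo p-value.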
RESULTS
Single Electrode Peak Measures
To compare our results with previous studies, peak latency and amplitude values were analyzed using ANOVA for the P1, N1, and P2 components for both stimulus conditions (SL and SPL), to determine differences across the three groups (NH, u-HL, t-HL) at electrodes CZ and FZ.
Figure 2 includes group-averaged waveforms for the SPL (top) and SL (bottom) conditions for all three groups at electrodes CZ (left) and FZ (right).
Figure 2.
Group averaged waveforms for the SPL (top) and SL (bottom) stimulus conditions for all three groups (NH, u-HL, t-HL) at vertex Cz (left) and frontocentral channel Fz (right). Asterisks denote peaks identified by ANOVA as significantly different across groups.
Equal Sound Pressure Level (SPL)
Amplitude
Group mean P1, N1, and P2 amplitudes were not significantly different at electrodes CZ or FZ for the equal SPL stimulus level. See Table 1 for summary statistics.
Table 1.
Single Channel Peak Analysis
**Amplitude**

| | | Equal SPL | | | Equal SL | | |
|---|---|---|---|---|---|---|---|
| | | F (2,39) | p | ηp2 | F (2,39) | p | ηp2 |
| CZ | P1 | 2.62 | 0.09 | 0.12 | 1.35 | 0.27 | 0.07 |
| | N1 | 0.39 | 0.68 | 0.02 | 4.81 | **0.014** | 0.20 |
| | P2 | 1.37 | 0.27 | 0.07 | 1.28 | 0.29 | 0.06 |
| FZ | P1 | 1.33 | 0.28 | 0.06 | 0.62 | 0.55 | 0.03 |
| | N1 | 0.07 | 0.50 | 0.04 | 4.25 | **0.021** | 0.18 |
| | P2 | 0.78 | 0.46 | 0.04 | 1.68 | 0.20 | 0.08 |

**Latency**

| | | Equal SPL | | | Equal SL | | |
|---|---|---|---|---|---|---|---|
| | | F (2,39) | p | ηp2 | F (2,39) | p | ηp2 |
| CZ | P1 | 0.91 | 0.41 | 0.04 | 5.97 | **0.005** | 0.23 |
| | N1 | 4.51 | **0.017** | 0.19 | 0.82 | 0.45 | 0.04 |
| | P2 | 0.06 | 0.94 | 0.003 | 0.04 | 0.96 | 0.002 |
| FZ | P1 | 0.57 | 0.57 | 0.03 | 3.77 | **0.032** | 0.16 |
| | N1 | 6.61 | **0.005** | 0.24 | 0.72 | 0.49 | 0.04 |
| | P2 | 0.20 | 0.82 | 0.01 | 0.58 | 0.56 | 0.03 |
Results of ANOVA: F-statistics, p-values, and effect sizes (ηp2) for P1, N1, and P2 component amplitude and latency at electrodes CZ and FZ for the equal Sound Pressure Level (SPL) and equal Sensation Level (SL) stimuli. Significant values are highlighted in bold.
Latency
For N1 latency in response to the equal SPL stimulus, there was a main effect of group at electrodes CZ (F (2,39) = 4.51, p = .017) and FZ (F (2,39) = 6.61, p = .005). Post-hoc pairwise comparisons following Bonferroni correction revealed shorter N1 latency for the NH group compared to the t-HL group at CZ (p = .019) and FZ (p = .004), and marginally shorter latency when compared to the u-HL group at CZ (p = .076) and FZ (p = .064). There was no significant difference in mean N1 latency between u-HL and t-HL groups at CZ (p = .819) or FZ (p = .510). Groups did not exhibit significant differences in P1 or P2 latency at either electrode.
In summary, for a sound played at 71 dB SPL, individuals with hearing loss, especially experienced hearing aid users (t-HL group), showed delayed N1 latency at both CZ and FZ, compared to their NH peers.
Equal Sensation Level (SL)
Amplitude
Mean N1 amplitudes were significantly different across groups at electrodes CZ (F (2,39) = 4.81, p = .014) and FZ (F (2,39) = 4.25, p = .021). Post-hoc pairwise comparisons following Bonferroni correction revealed significantly larger N1 amplitudes for the u-HL group, compared to the NH group, at CZ (p = .016) and FZ (p = .020), with marginally increased N1 amplitude for the t-HL group compared to the NH group at CZ (p = .055). N1 amplitude was not significantly different between the u-HL and t-HL groups (CZ: p = .864, FZ: p = .708). Groups did not differ in mean amplitude for P1 or P2 at electrodes CZ or FZ.
Latency
Mean latency for P1 was significantly different across groups at CZ (F (2,39) = 5.97, p = .005) and FZ (F (2,39) = 3.77, p = .032). Post-hoc pairwise comparisons following Bonferroni correction revealed significantly shorter P1 latency for the t-HL group compared to the u-HL group at CZ (p = .009) and FZ (p = .046) and compared to the NH group at CZ (p = .019) with marginally shorter P1 latency at FZ (p = .072). NH and u-HL groups did not differ in P1 latency (CZ: p = .954, FZ: p = .978). Groups did not exhibit significant differences in N1 or P2 latency at either electrode (See Table 1 for summary statistics).
Overall, P1-N1-P2 responses to equal SL stimuli revealed larger N1 amplitudes for the groups with hearing loss compared to the NH group. Additionally, the group of older adults with a history of hearing aid use had significantly earlier P1 onset compared to the NH group and, particularly, compared to the u-HL group.
Therefore, single channel analysis revealed significant differences in the P1-N1-P2 response across three groups of older adults for sound stimuli played at an equal SPL and after accounting for the shape and severity of hearing loss through individualized, equal SL stimuli. Importantly, these group differences were not the same for each stimulus. For stimuli presented at the same SPL across groups, delayed latencies of the N1 component were present in those with hearing loss compared to persons with NH. However, once hearing loss was taken into account, larger N1 amplitudes were present in the u-HL group, compared with NH, and the t-HL group showed shortened P1 latencies at CZ and FZ.
Whole-Scalp Analysis via Permutation Tests
Permutation tests, comparing NH to u-HL (effects of hearing loss) and u-HL to t-HL (effects of a history of hearing aid use), were used to identify significant differences across the whole scalp at time points between stimulus onset and response offset.
Equal Sound Pressure Level (SPL)
NH vs. u-HL
Figure 3 displays results for equal SPL NH vs. u-HL, including the time waveforms and topographies for each cluster identified during permutation testing. Permutation tests revealed significant differences between the NH and u-HL groups at time points consistent with P1 onset (Cluster 1, 25–52 ms, p < .001) at right temporal electrodes (F4, F6, FC6, T8, C4, C6, CP6, TP8) and during the N1-P2 transition, both at frontocentral electrodes (Cluster 2, 112–140 ms, p < .001; AF4, F3, F1, FZ, F2, F4, F6, FC5, FC1, FCZ, FC2, C3, C1, CZ, C2, C4, CP1, CPZ, CP2) and at occipitoparietal electrodes showing the response reversal (Cluster 3, 116–144 ms, p < .001; F9, FT9, T7, TP9, TP7, CB1, P7, P5, O1, OZ, O2, IZ, TP8, TP10, P8, CB2). Further inspection of all three cluster-averaged waveforms revealed a shift of the entire response complex to later latencies for the u-HL group compared to the group with NH. Furthermore, as can be seen in Figure 3, the NH and u-HL groups have different scalp patterns. In particular, the NH group had a bi-focal frontocentral distribution for the P1 onset and N1-P2 transition (with two lateralized foci of activity), whereas the u-HL group showed a more unifocal frontocentral distribution.
Figure 3.
Equal SPL: NH vs. u-HL. A. Group-averaged time waveforms at the three clusters identified during permutation tests. Insert shows scalp location of channels included in each cluster. In this and subsequent figures, group mean amplitude ± across-subjects standard error amplitude calculated at each time point is represented by thin grey lines. B. Topographies during time points showing a significant difference between groups.
To summarize, for stimuli played at an equal SPL, significant group differences were observed at time points consistent with P1 onset and N1-P2 transition at a number of central, right temporal, and occipitoparietal electrodes.
u-HL vs. t-HL
Cluster-based permutation tests comparing u-HL to t-HL revealed no significant differences between groups. Because there were no identified clusters, Figure 4 displays group averaged waveforms for u-HL and t-HL at the same electrode clusters from the previous contrast (Figure 3. NH vs. u-HL) for comparison, representing right temporal (Cluster 1), frontocentral (Cluster 2), and occipitoparietal (Cluster 3) scalp regions.
Figure 4.
Equal SPL: u-HL vs. t-HL. Permutation tests revealed no significant group differences. Group-averaged time waveforms for channels included in Clusters 1–3 for the previous contrast (Figure 3) are displayed for consistency and comparison.
In summary, when sounds were played at the same level for all participants (an equal SPL), differences were present between those with NH and u-HL across several scalp regions, including right temporal electrodes at timepoints associated with P1 onset, and frontocentral and occipitoparietal clusters at timepoints consistent with the N1-P2 transition. There were no significant differences in activity between the groups with treated and untreated hearing loss.
Equal Sensation Level (SL)
NH vs. u-HL
Figure 5 displays results for NH vs. u-HL, including the group averaged time waveforms and topographies for the clusters identified during permutation testing. Permutation tests revealed significant differences between the NH and u-HL groups at time points consistent with N1 (Cluster 1, 100–136 ms, p < .001) at frontocentral electrodes (AF4, F4, F2, FZ, F1, F3, F5, FC5, FC1, FCZ, FC2, C3, C1, CZ, C2, C4, CP1, CPZ, CP2). Unlike the SPL analysis, for the SL comparison there was no clear shift to later latencies for the u-HL group, but rather, mean N1 amplitude was greater for the u-HL group compared to the NH group, consistent with the outcome of the single channel ANOVAs at CZ and FZ.
Figure 5.
Equal SL: NH vs. u-HL. A. Group-averaged time waveforms at the cluster identified during permutation tests. Insert shows scalp location of channels included in the cluster. B. Topographies during time points showing a significant difference between groups.
u-HL vs. t-HL
Cluster-based permutation tests revealed no significant group differences between the u-HL and t-HL groups. Figure 6 shows representative waveforms averaged across channels included in significant Cluster 1 for the NH vs u-HL SL contrast (Figure 5), for comparison.
Figure 6.
Equal SL: u-HL vs. t-HL. Permutation tests revealed no significant group differences. Group-averaged time waveforms for channels included in Cluster 1 for the previous contrast (Figure 5) are displayed for consistency and comparison.
To summarize, latency delays for individuals with hearing loss were only seen in the equal SPL condition, when the stimulus was played at the same sound level for all participants. With equal SL stimuli, which were individualized for each participant based on the amount and shape of hearing loss across frequency, larger N1 amplitudes were present for individuals with u-HL compared with NH peers. Cluster-based permutation tests indicated no significant differences between the untreated and treated hearing loss groups.
DISCUSSION
The goal of this study was to examine the effects of auditory deprivation, in the form of ARHL, and auditory stimulation through a history of daily use of bilateral hearing aids, on the morphology of the P1-N1-P2 auditory evoked response in older adults. Two stimulus presentation conditions were used to avoid any potential confounds related to stimulus audibility. Furthermore, in addition to single channel analyses, permutation tests were performed in order to determine group differences across the whole scalp and to statistically account for multiple comparisons.
Effects of Auditory Deprivation in the form of Age-Related Hearing Loss (ARHL)
Auditory deprivation, in the form of ARHL, affected the way sound was encoded in the brain. Specifically, increased neural conduction time, evident through longer response latency of one or more components of the P1-N1-P2 complex, was seen at electrodes CZ and FZ and in the whole-scalp analysis. Delayed latencies were observed at time points consistent with the P1 and the N1-P2 transition for individuals with intact cognitive function and mild to moderate/moderately-severe sensorineural hearing loss. Furthermore, the whole-scalp analysis revealed different topographical patterns between the u-HL and NH groups. One interpretation of these topographical differences is that individuals with u-HL and NH may have underlying differences at the neural level (e.g., in the neural sources involved in generating the P1 and N1, or in the trajectory/angle of the neuroelectrical activity arising from these sources). Stimulus audibility, and not cognitive function, appears to drive these latency delays, because they were not observed when stimuli were presented at an equal SL. This finding is encouraging because it suggests that the capacity to transmit sound from the ear to the cortex is retained in individuals with ARHL once stimuli are presented at a sensation level similar to that of their normal-hearing peers.
When audibility was controlled (i.e., the equal SL condition), only the N1 amplitude was larger in the two groups with hearing loss (both treated and untreated) compared with their NH peers, with no significant differences in P1 or P2 amplitude across groups. Enhanced N1 amplitudes, generated in primary auditory cortex, have previously been attributed to diminished central inhibition (Tremblay et al., 2003; Harkrider et al., 2005) and possibly tinnitus, a variable not intentionally explored or controlled for in this study (Roberts et al., 2015). Alternatively, stimulus intensity may have contributed to the increased N1 amplitudes for the hearing loss groups. SL stimulus levels were, on average, 20 dB higher for the groups with hearing loss than for the NH group. Because the hearing loss groups had normal hearing to only mild hearing loss in the low frequencies, the increased amplitude may reflect this retained sensitivity to the low-frequency energy within the /ba/ stimulus. Additionally, previous studies reported enhanced P2 amplitude in older adults with hearing loss or those with suspected mild cognitive impairment (Campbell & Sharma, 2013; Lister et al., 2016). In contrast, the present study controlled for stimulus audibility and cognitive status and found that P2 amplitudes were not significantly different across individuals with and without hearing loss, similar to previous reports (Tremblay et al., 2003; Bertoli et al., 2005). Taken together, these results suggest that the relationship between P2 amplitude and sensory impairment is still not well defined.
Effects of Auditory Stimulation in the form of a History of Hearing Aid Use
In the present study, we did not observe a robust effect of auditory stimulation, in the form of a history of hearing aid use, on the neural registration of sound. P1-N1-P2 morphology did not differ between the groups with hearing loss, even though the group with treated hearing loss had worn well-fit hearing aids for over two years. One possible explanation is that daily auditory stimulation through amplification did not alter the neurophysiological representation of sound at the level of the auditory cortex. This interpretation is supported by Dawes et al. (2014), who found no significant differences in cortical response morphology prior to or following 12 weeks of hearing aid use in a group of older individuals. However, it contrasts with the findings of Karawani et al. (2018), who reported increased N1 and P2 amplitudes in aided cortical responses following 24 weeks of hearing aid use, which correlated with improved performance on a working memory task. However, as previously noted, their results could have been confounded by the experimental group receiving four additional study sessions. Increased P2 amplitude across multiple test sessions might reflect increased stimulus exposure (Sheehan et al., 2005; Tremblay et al., 2010; Tremblay et al., 2014).
Our study eliminated these potential confounds by controlling for stimulus audibility and the amount of stimulus exposure, and by including high-density EEG analyses. With these variables accounted for, well-fit hearing aids, worn on a consistent basis, did not appear to produce stimulation-related changes in the morphology of the P1-N1-P2. This point is interesting because previous studies, which controlled for stimulus audibility while studying the maturation of the auditory system among pediatric cochlear implant users, identified P1-N1-P2 latency and amplitude changes resulting from auditory stimulation in previously deafened ears (Ponton et al., 2002). A possible inference is that the mature auditory system is less responsive to sound amplification following the gradual hearing decline characteristic of age-related high-frequency hearing loss.
In the present study, with stimulus audibility accounted for, P1 latency was earlier for the group of experienced hearing aid users compared with peers who had a similar degree of hearing loss but had never worn hearing aids, and compared with those with normal hearing. This observation was based on single electrodes CZ and FZ, but was not substantiated by permutation tests, which statistically control for multiple comparisons. In their consensus report describing recording standards and publication guidelines for using and interpreting human event-related potentials, Picton et al. (2000) caution against drawing conclusions based on single electrode analyses. Therefore, the P1 latency results observed here should be interpreted with caution.
Limitations and Caveats
One limitation of this study is the heterogeneity of P1-N1-P2 timing characteristics, both within each participant (on a trial-by-trial basis) and within each group, which likely influenced the averaged group results. Consequently, averaging over multiple trials, as well as across individuals in the determination of group effects, may have resulted in temporal smearing and an insensitivity to stimulation-related neuroplastic effects present at an individual level (see Giroud et al., 2017 and Karawani et al., 2018 for stimulation-related findings within individuals). Another caveat of the present study is that the SPL and SL stimuli were both, on average, approximately 71 dB SPL for the u-HL group. However, the mean levels for the SPL and SL stimuli differed by approximately 9 and 14 dB SPL for the t-HL and NH groups, respectively. Therefore, this study did not address the question of how untreated and treated hearing loss may affect auditory processing at various suprathreshold levels, which may be addressed in future studies.
Benefits of Whole Scalp Analysis
In addition to single-channel analyses, we used a data-driven, whole-scalp approach to contrast scalp-recorded auditory evoked activity across groups, to increase statistical rigor and to survey all available data points. Cluster-based permutation testing is statistically robust and appropriate for both within- and between-subjects study designs. It is a powerful and flexible procedure for the analysis of large sets of data, such as highly sampled continuous EEG (Frömer et al., 2018).
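The logic of cluster-based permutation testing (Maris & Oostenveld, 2007) can be sketched in a few lines: compute a t-value at each time point, sum |t| over contiguous supra-threshold runs to form cluster masses, then build a null distribution of the maximum cluster mass by repeatedly shuffling group labels. The analyses reported here spanned the full electrode montage; the toy version below is single-channel, uses synthetic waveforms, and an assumed threshold of |t| > 2, purely to illustrate the core idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_time = 14, 200          # 14 subjects per group, 200 time samples

# Synthetic evoked waveforms: group b has a larger deflection ~ samples 90-130
a = rng.normal(0, 1, (n_sub, n_time))
b = rng.normal(0, 1, (n_sub, n_time))
b[:, 90:130] += 1.5

def tmap(x, y):
    """Independent-samples t-value at every time point."""
    return (x.mean(0) - y.mean(0)) / np.sqrt(
        x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))

def max_cluster_mass(t, thresh=2.0):
    """Largest summed |t| over contiguous supra-threshold runs."""
    best = run = 0.0
    for above, size in zip(np.abs(t) > thresh, np.abs(t)):
        run = run + size if above else 0.0
        best = max(best, run)
    return best

observed = max_cluster_mass(tmap(a, b))

# Null distribution: shuffle group labels and recompute the max cluster mass
pooled = np.vstack([a, b])
null = []
for _ in range(500):
    perm = rng.permutation(len(pooled))
    null.append(max_cluster_mass(tmap(pooled[perm[:n_sub]], pooled[perm[n_sub:]])))

# Monte Carlo p-value for the observed cluster against the permutation null
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"observed cluster mass = {observed:.1f}, p = {p:.3f}")
```

Because each permutation keeps only its single largest cluster mass, the resulting p-value is automatically corrected for multiple comparisons across time points, which is the property that makes the approach attractive for highly sampled EEG.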
In our study, these whole-scalp measures revealed several groups of electrodes at which a range of response timepoints differed significantly between those with normal hearing and untreated age-related hearing loss. For example, the cluster-based permutation tests revealed differences between the NH and u-HL groups, due to robust differences in topographical patterns (see Figure 3), which were not observed in the single-channel analysis. Furthermore, the single-channel analysis failed to identify the delayed P1 latency for individuals with hearing loss that was focused at right temporal electrodes, revealing an additional shortcoming of single-channel analysis: data exclusion. Therefore, our results illustrate that differences across all timepoints of the response, and at non-vertex electrodes, are also important to evaluate, as whole-scalp analyses can reveal differences in scalp topography that would otherwise be missed. However, while traditional peak analyses can formally test for differences in both peak latency and amplitude, the whole-scalp analysis, as implemented in this study, does not directly test for latency differences. Thus, single-channel and whole-scalp approaches can serve as complementary windows into the data, as exemplified in the present study.
Conclusions
In summary, previous reports of the effects of auditory deprivation (ARHL) and auditory stimulation (a history of hearing aid use) on the physiological capacity to encode sound, and the association of these effects with higher-level cognitive changes, likely reflect the confound of stimulus audibility. Once audibility is accounted for through the presentation of equal SL stimuli, the delayed latencies previously reported and attributed to neural consequences of hearing loss are no longer observable. This suggests that the relationship between the P1-N1-P2 and higher-level cognitive processes requires more rigorous investigation.
Furthermore, no significant differences were observed between older adults with a history of treated or untreated hearing loss, as defined by the P1-N1-P2 response morphology, suggesting that auditory stimulation in the form of a history of hearing aid use may not significantly alter sound registration at the level of the auditory cortex. Because the P1-N1-P2 complex reflects early stages in signal detection, and not how the (amplified) sound is used during higher-level cognitive tasks, future studies are needed to compare the effects of treated and untreated hearing loss on the cognitive processing of sound and speech.
Supplementary Material
Figure A1. Electrode montage
Table A1. Latency and amplitude values for channels CZ and FZ. Significant outcomes are highlighted in bold.
ACKNOWLEDGEMENTS
The authors would like to thank Dr. Christopher Bishop for his technical advice and support, Drs. Michael Lee and Ashley Moore, and Brain and Behavior Lab members Drs. Cornetta Mosley, Janice Vong, and Lauren Langley, for assistance with participant recruitment and data collection. This study was supported by NIH T32-DC005361 (K.S.M., K.C.B.), F30-DC10297 (K.S.M.), and a Student Investigator Research Grant provided by the American Academy of Audiology Foundation (K.S.M.). Portions of this article were presented at the 7th International and Interdisciplinary Research Conference on Aging and Speech Communication, November 6th, 2017, Tampa FL, and the 41st Annual MidWinter Meeting of the Association for Research in Otolaryngology, February 12th, 2018. Correspondence: Katrina (Kate) S. McClannahan, Washington University in St. Louis, Department of Psychological and Brain Sciences, 1 Brookings Drive, Box 1125, St. Louis, MO 63130, USA. k.mcclannahan@wustl.edu
Conflicts of Interest and Source of Funding
No relevant conflicts of interest for KM, KB or KT.
NIH: T32-DC005361 (KM and KB), F30-DC10297 (KM)
American Academy of Audiology Student Investigator Research Grant (KM)
K.S.M designed and performed experiments, analyzed data and wrote the paper; K.C.B performed experiments, analyzed data, and wrote the paper; K.L.T assisted in the design of the experiments and wrote the paper.
REFERENCES
- Adler G, & Adler J (1989). Influence of stimulus intensity on AEP components in the 80- to 200-millisecond latency range. Audiology, 28(6), 316–324.
- Akeroyd MA (2008). Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol, 47 Suppl 2, S53–71.
- Alain C, Roye A, & Salloum C (2014). Effects of age-related hearing loss and background noise on neuromagnetic activity from auditory cortex. Front Syst Neurosci, 8, 8.
- Albers MW, Gilmore GC, Kaye J, Murphy C, Wingfield A, Bennett DA, … Zhang LI (2015). At the interface of sensory and motor dysfunctions and Alzheimer’s disease. Alzheimers Dement, 11(1), 70–98.
- Alvarado JC, Fuentes-Santamaría V, Gabaldón-Ull MC, Blanco JL, & Juiz JM (2014). Wistar rats: a forgotten model of age-related hearing loss. Front Aging Neurosci, 6, 29.
- Anderson S, Parbery-Clark A, White-Schwoch T, Drehobl S, & Kraus N (2013). Effects of hearing loss on the subcortical representation of speech cues. J Acoust Soc Am, 133(5), 3030–3038.
- Bertoli S, Probst R, & Bodmer D (2011). Late auditory evoked potentials in elderly long-term hearing-aid users with unilateral or bilateral fittings. Hear Res, 280(1–2), 58–69.
- Bertoli S, Smurzynski J, & Probst R (2005). Effects of age, age-related hearing loss, and contralateral cafeteria noise on the discrimination of small frequency changes: psychoacoustic and electrophysiological measures. J Assoc Res Otolaryngol, 6(3), 207–222.
- Bidelman GM, Villafuerte JW, Moreno S, & Alain C (2014). Age-related changes in the subcortical-cortical encoding and categorical perception of speech. Neurobiol Aging, 35(11), 2526–2540.
- Campbell J, & Sharma A (2013). Compensatory changes in cortical resource allocation in adults with hearing loss. Front Syst Neurosci, 7, 71.
- Carhart R, & Jerger J (1959). Preferred methods for clinical determination of pure-tone thresholds. J Speech Hear Res, 24, 330–345.
- Carlson S, & Willott JF (1996). The behavioral salience of tones as indicated by prepulse inhibition of the startle response: relationship to hearing loss and central neural plasticity in C57BL/6J mice. Hear Res, 99(1–2), 168–175.
- Carson N, Leach L, & Murphy KJ (2018). A re-examination of Montreal Cognitive Assessment (MoCA) cutoff scores. Int J Geriatr Psychiatry, 33(2), 379–388.
- Chien W, & Lin F (2012). Prevalence of hearing aid use among older adults in the United States. Arch Intern Med, 172, 292–293.
- Dawes P, Munro KJ, Kalluri S, & Edwards B (2014). Auditory acclimatization and hearing aids: late auditory evoked potentials and speech recognition following unilateral and bilateral amplification. J Acoust Soc Am, 135(6), 3560–3569.
- Deal JA, Sharrett AR, Albert MS, Coresh J, Mosley TH, Knopman D, … Lin FR (2015). Hearing impairment and cognitive decline: a pilot study conducted within the atherosclerosis risk in communities neurocognitive study. Am J Epidemiol, 181(9), 680–690.
- Dubno JR, Lee FS, Matthews LJ, Ahlstrom JB, Horwitz AR, & Mills JH (2008). Longitudinal changes in speech recognition in older persons. J Acoust Soc Am, 123(1), 462–475.
- Eckert MA, Cute SL, Vaden KI, Kuchinsky SE, & Dubno JR (2012). Auditory cortex signs of age-related hearing loss. J Assoc Res Otolaryngol, 13(5), 703–713.
- Eggermont JJ (2016). Acquired hearing loss and brain plasticity. Hear Res.
- Frömer R, Maier M, & Abdel Rahman R (2018). Group-level EEG-processing pipeline for flexible single trial-based analyses including linear mixed models. Front Neurosci, 12, 48.
- Giroud N, Lemke U, Reich P, Matthes KL, & Meyer M (2017). The impact of hearing aids and age-related hearing loss on auditory plasticity across three months - An electrical neuroimaging study. Hear Res, 353, 162–175.
- Grady C (2012). The cognitive neuroscience of ageing. Nat Rev Neurosci, 13(7), 491–505.
- Gurgel RK, Ward PD, Schwartz S, Norton MC, Foster NL, & Tschanz JT (2014). Relationship of hearing loss and dementia: a prospective, population-based study. Otol Neurotol, 35(5), 775–781.
- Harkrider AW, Plyler PN, & Hedrick MS (2005). Effects of age and spectral shaping on perception and neural representation of stop consonant stimuli. Clin Neurophysiol, 116(9), 2153–2164.
- Harkrider AW, Plyler PN, & Hedrick MS (2006). Effects of hearing loss and spectral shaping on identification and neural response patterns of stop-consonant stimuli. J Acoust Soc Am, 120(2), 915–925.
- Hillyard SA, & Kutas M (1983). Electrophysiology of cognitive processing. Annu Rev Psychol, 34, 33–61.
- Huang CQ, Dong BR, Lu ZC, Yue JR, & Liu QX (2010). Chronic diseases and risk for depression in old age: a meta-analysis of published literature. Ageing Res Rev, 9(2), 131–141.
- Humes LE (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. J Am Acad Audiol, 18(7), 590–603.
- Hyde M (1997). The N1 response and its applications. Audiol Neurootol, 2(5), 281–307.
- Karawani H, Jenkins K, & Anderson S (2018). Restoration of sensory input may improve cognitive and neural function. Neuropsychologia, 114, 203–213.
- Kazee AM, Han LY, Spongr VP, Walton JP, Salvi RJ, & Flood DG (1995). Synaptic loss in the central nucleus of the inferior colliculus correlates with sensorineural hearing loss in the C57BL/6 mouse model of presbycusis. Hear Res, 89(1–2), 109–120.
- Lin FR (2011). Hearing loss and cognition among older adults in the United States. J Gerontol A Biol Sci Med Sci, 66(10), 1131–1136.
- Lin FR, Ferrucci L, An Y, Goh JO, Doshi J, Metter EJ, … Resnick SM (2014). Association of hearing impairment with brain volume changes in older adults. Neuroimage, 90, 84–92.
- Lin FR, Ferrucci L, Metter EJ, An Y, Zonderman AB, & Resnick SM (2011). Hearing loss and cognition in the Baltimore Longitudinal Study of Aging. Neuropsychology, 25(6), 763–770.
- Lin FR, Yaffe K, Xia J, Xue QL, Harris TB, Purchase-Helzner E, … Health, A. B. C. S. G. (2013). Hearing loss and cognitive decline in older adults. JAMA Intern Med, 173(4), 293–299.
- Lister JJ, Harrison Bush AL, Andel R, Matthews C, Morgan D, & Edwards JD (2016). Cortical auditory evoked responses of older adults with and without probable mild cognitive impairment. Clin Neurophysiol, 127(2), 1279–1287.
- Luis CA, Keegan AP, & Mullan M (2009). Cross validation of the Montreal Cognitive Assessment in community dwelling older adults residing in the Southeastern US. Int J Geriatr Psychiatry, 24(2), 197–201.
- Maris E, & Oostenveld R (2007). Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods, 164(1), 177–190.
- Millman RE, Mattys SL, Gouws AD, & Prendergast G (2017). Magnified neural envelope coding predicts deficits in speech perception in noise. J Neurosci, 37(32), 7727–7736.
- Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, … Chertkow H (2005). The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc, 53(4), 695–699.
- Oates PA, Kurtzberg D, & Stapells DR (2002). Effects of sensorineural hearing loss on cortical event-related potential and behavioral measures of speech-sound processing. Ear Hear, 23(5), 399–415.
- Oostenveld R, Fries P, Maris E, & Schoffelen JM (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput Intell Neurosci, 2011, 156869.
- Peelle JE, Troiani V, Grossman M, & Wingfield A (2011). Hearing loss in older adults affects neural systems supporting speech comprehension. J Neurosci, 31(35), 12638–12643.
- Peelle JE, & Wingfield A (2016). The neural consequences of age-related hearing loss. Trends Neurosci, 39(7), 486–497.
- Picton T (2013). Hearing in time: evoked potential studies of temporal processing. Ear Hear, 34(4), 385–401.
- Picton TW, Bentin S, Berg P, Donchin E, Hillyard SA, Johnson R, … Taylor MJ (2000). Guidelines for using human event-related potentials to study cognition: recording standards and publication criteria. Psychophysiology, 37(2), 127–152.
- Picton TW, & Durieux-Smith A (1988). Auditory evoked potentials in the assessment of hearing. Neurol Clin, 6(4), 791–808.
- Pienkowski M, & Eggermont JJ (2011). Cortical tonotopic map plasticity and behavior. Neurosci Biobehav Rev, 35(10), 2117–2128.
- Ponton C, Eggermont JJ, Khosla D, Kwong B, & Don M (2002). Maturation of human central auditory system activity: separating auditory evoked potentials by dipole source modeling. Clin Neurophysiol, 113(3), 407–420.
- Profant O, Balogová Z, Dezortová M, Wagnerová D, Hájek M, & Syka J (2013). Metabolic changes in the auditory cortex in presbycusis demonstrated by MR spectroscopy. Exp Gerontol, 48(8), 795–800.
- Puschmann S, & Thiel CM (2016). Changed crossmodal functional connectivity in older adults with hearing loss. Cortex, 86, 109–122.
- Roberts LE, Bosnyak DJ, Bruce IC, Gander PE, & Paul BT (2015). Evidence for differential modulation of primary and nonprimary auditory cortex by forward masking in tinnitus. Hear Res, 327, 9–27.
- Ross B, Fujioka T, Tremblay KL, & Picton TW (2007). Aging in binaural hearing begins in mid-life: evidence from cortical auditory-evoked responses to changes in interaural phase. J Neurosci, 27(42), 11172–11178.
- Sheehan KA, McArthur GM, & Bishop DVM (2005). Is discrimination training necessary to cause changes in the P2 auditory event-related brain potential to speech sounds? Cognitive Brain Research, 25(2), 547–553.
- Singh G, & Pichora-Fuller K (2010). Older adults’ performance on the speech, spatial, and qualities of hearing scale (SSQ): Test-retest reliability and a comparison of interview and self-administration methods. Int J Audiol, 49(10), 733–740.
- Tremblay KL (2015). The ear-brain connection: older ears and older brains. Am J Audiol.
- Tremblay KL, Inoue K, McClannahan K, & Ross B (2010). Repeated stimulus exposure alters the way sound is encoded in the human brain. PLoS One, 5(4), e10283.
- Tremblay KL, Kraus N, Carrell TD, & McGee T (1997). Central auditory system plasticity: generalization to novel stimuli following listening training. J Acoust Soc Am, 102(6), 3762–3773.
- Tremblay KL, Piskosz M, & Souza P (2003). Effects of age and age-related hearing loss on the neural representation of speech cues. Clin Neurophysiol, 114(7), 1332–1343.
- Tremblay KL, Ross B, Inoue K, McClannahan K, & Collet G (2014). Is the auditory evoked P2 response a biomarker of learning? Front Syst Neurosci, 8, 28.
- Viljanen A, Kaprio J, Pyykkö I, Sorri M, Pajala S, Kauppinen M, … Rantanen T (2009). Hearing as a predictor of falls and postural balance in older female twins. J Gerontol A Biol Sci Med Sci, 64(2), 312–317.
- Willott JF, Aitkin LM, & McFadden SL (1993). Plasticity of auditory cortex associated with sensorineural hearing loss in adult C57BL/6J mice. J Comp Neurol, 329(3), 402–411.
- Willott JF, & Bross LS (1996). Morphological changes in the anteroventral cochlear nucleus that accompany sensorineural hearing loss in DBA/2J and C57BL/6J mice. Brain Res Dev Brain Res, 91(2), 218–226.
- Willott JF, Parham K, & Hunter KP (1991). Comparison of the auditory sensitivity of neurons in the cochlear nucleus and inferior colliculus of young and aging C57BL/6J and CBA/J mice. Hear Res, 53(1), 78–94.
- Yacullo WS (1999). Clinical masking in speech audiometry: a simplified approach. Am J Audiol, 8(2), 106–116.
- Zhong Z, Henry KS, & Heinz MG (2014). Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas. Hear Res, 309, 55–62.