Introduction
Thirty years ago, cochlear implantation was expected to increase the ease of face-to-face communication for postlingually deafened adults. Today, conversing over a cell phone is a realistic expectation for many cochlear implant (CI) recipients. Although performance outcomes have improved over time, a great deal of unexplained individual variability remains. A number of factors are known to be associated with performance. Biographic and audiologic factors such as duration of severe to profound hearing loss (SPHL), age at SPHL, duration of hearing loss, hearing aid (HA) use, residual hearing, age at CI, pre-operative speech recognition, and post-operative implant experience have each been shown to correlate, either positively or negatively, with performance (1–6). Device and surgical factors, such as brand of implant, percentage of available electrodes programmed (4), residual hearing preservation (7,8), and electrode position within the cochlea (3,9–12), have also been related to performance. Specifically, better outcomes have been associated with electrodes that are: 1) located in scala tympani (ST); 2) inserted to the intended design depth of the array, i.e., not over- or under-inserted; and 3) proximal to the modiolar wall. The extent to which biographic, audiologic, and device-related factors interact and individually contribute to speech understanding is not fully understood, especially when speech must be understood in more realistic and complex listening environments, i.e., in background noise. Controlling for some factors should allow a more focused examination of others. The purpose of this study was to identify primary biographic and audiologic factors contributing to CI performance variability in quiet and noise by controlling electrode array type and electrode position within the cochlea.
Materials and Methods
Participants
This study was approved by the Human Research Protection Office (ID #: 201408112) at Washington University in St. Louis, School of Medicine (WUSM). Adults implanted with a Cochlear Nucleus (Cochlear Limited, Sydney, Australia) perimodiolar electrode array with all 22 electrodes in ST were invited to participate. Thirty-nine individuals (19 men, 20 women) with at least six months of CI use (mean = 4.8 years) were enrolled. These participants were among a larger group who met the inclusion criteria. Reasons for enrollment included flexible schedules and/or the ability to coordinate research testing with other CI-related appointments. Participants were tested using a unilateral CI only. Thirteen participants were implanted bilaterally; nine were tested using the one implant that met inclusion criteria. The remaining four bilateral users met inclusion criteria in both ears, and all four had similar speech understanding between ears. Three of these were tested using their second CI only, and the fourth was tested with each CI (the two test sessions occurred four months apart). Therefore, a total of 40 test ears were included in the data analysis. Participant ages ranged from 35–83 years (mean age at study = 64.0 years). For the test ear, mean age at implantation was 58.9 years, mean age at SPHL was 49.2 years, and mean duration of SPHL was 9.9 years. The mean duration of SPHL for the non-test ear was 9.3 years. For the test and non-test ears respectively, the mean pre-operative four-frequency (0.5, 1, 2, 4 kHz) pure tone averages (PTA) were 96.3 dB HL and 88.6 dB HL, and the mean pre-operative monosyllabic word scores were 8.6% and 21.7%. Thirty-one participants used bilateral HAs prior to implantation. Of the participants who were not aided bilaterally, three used a HA on the ear to be implanted, three used a HA on the contralateral ear, and two never wore amplification in either ear because one ear had SPHL while the other ear had normal hearing from 250–2000 Hz. Table 1 summarizes participants' demographic information.
Table 1.
Participant demographic information
| Test Ear | Range | Mean | SD |
|---|---|---|---|
| Age at study | 35–83 yrs | 64.0 yrs | 11.3 yrs |
| Age at CI | 24–75 yrs | 58.9 yrs | 11.4 yrs |
| Age at SPHL | 5–72 yrs | 49.2 yrs | 16.9 yrs |
| Duration CI use | 0.5–13 yrs | 4.8 yrs | 3.1 yrs |
| Duration of hearing loss | 3–59 yrs | 28.9 yrs | 16.5 yrs |
| Duration of HA use | 0–54 yrs | 17.0 yrs | 14.76 yrs |
| Duration of SPHL | 0.5–45 yrs | 9.9 yrs | 10.4 yrs |
| Pre-CI HL at 250 Hz | 15–120+ dB HL | 69.5 dB HL | 27.1 dB HL |
| Pre-CI 4-freq PTA | 70–120+ dB HL | 96.3 dB HL | 14.9 dB HL |
| Pre-CI CNC word score | 0–44% | 8.6% | 10.8% |

| Non-Test Ear | Range | Mean | SD |
|---|---|---|---|
| Duration of SPHL | 0–45 yrs | 9.3 yrs | 10.1 yrs |
| Pre-CI 4-freq PTA | 24–120+ dB HL | 88.6 dB HL | 23.3 dB HL |
| Pre-CI CNC word score | 0–93% | 21.7% | 23.1% |
CI = cochlear implant, CNC = Consonant-Vowel Nucleus-Consonant, dB = decibel, freq = frequency, HA = hearing aid, HL = hearing level, PTA = pure tone average, SPHL = severe to profound hearing loss, yrs = years, 120+ = no response at limits of audiometer
Electrode Position and Speech Processor Programs
3D reconstruction of pre- and post-operative CT scans, based on the technique developed by Skinner et al. (11) and verified by Teymouri et al. (13), showed that for each participant all 22 electrodes were in ST, i.e., not in scala vestibuli. The group mean insertion angle was 22° for the most basal electrode and 394° for the most apical electrode. Each participant was implanted with a perimodiolar electrode array, as required for inclusion, and the mediolateral position, or electrode array wrapping factor (WF), was calculated (3). The WF provides an indirect metric of the proximity of the electrode array to the modiolar wall. The metric is defined as WF = L_EL/L_LW, where L_EL is the length along the electrode trajectory from the most basal to the most apical electrode, and L_LW is the lateral wall length from the insertion angle of the most basal electrode to the insertion angle of the most apical electrode (see Figure 1 in Holden et al.) (3). A value close to 1.0 represents an array near the lateral wall. The group mean WF for the perimodiolar arrays in the current study was .60 (Range = .55 – .68, SD = .03). For comparison, mean WFs for individuals implanted at WUSM with lateral wall arrays and all electrodes in ST were .81 (Range = .77 – .85, SD = .02, n = 14) for the Advanced Bionics (Valencia, CA) HiRes 90K HiFocus 1j and .83 (Range = .77 – .86, SD = .03, n = 8) for the MED-EL (Innsbruck, Austria) Concert Medium or Synchrony Flex 24. Table 2 summarizes CT information for the test ear.
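For illustration, the sketch below computes the wrapping factor from ordered point samples along the two trajectories; the coordinate arrays and function names are hypothetical stand-ins for measurements extracted from a CT reconstruction, not part of the authors' published method.

```python
import numpy as np

def path_length(points):
    """Sum of Euclidean distances between consecutive points along a trajectory."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def wrapping_factor(electrode_points, lateral_wall_points):
    """WF = L_EL / L_LW: length along the electrode trajectory (most basal to most
    apical electrode) divided by the lateral wall length spanning the same insertion
    angles. Values near 1.0 indicate an array lying close to the lateral wall; the
    perimodiolar arrays in this study averaged about 0.60."""
    return path_length(electrode_points) / path_length(lateral_wall_points)
```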
Table 2.
Device location based on 3D reconstruction of pre- and post-operative CT scans
| CT Information | Range | Mean | SD |
|---|---|---|---|
| Insertion Angle, Basal Electrode | 4° – 71° | 22° | 11.6° |
| Insertion Angle, Apical Electrode | 317° – 475° | 394° | 31.0° |
| Wrapping Factor | .55 – .68 | .60 | .03 |
All participants received device programming and aural rehabilitation (AR) from audiologists at WUSM. Our standard clinical protocol aims to optimize each recipient's audibility and speech understanding; consequently, study participants were typically seen weekly during the first several months after device activation. Programming took place over the four to six weeks after initial activation, and AR continued for an additional two to four weeks depending on the individual's needs and goals. All study participants used the Advanced Combination Encoder (ACE) speech coding strategy (14), monopolar stimulation mode (MP1+2), and a pulse width of 25 µs per phase; a variety of stimulation rates were used to optimize each recipient's speech understanding. The majority of participants used 18 to 22 active electrodes in their speech processor program. Table 3 provides a summary of device and program information for the test ear.
Table 3.
Device and speech processor program information
| Cochlear Implant | N | Speech Processor | N | Processing Strategy | N | Stimulation Rate (pps/ch) | N | Num. of Active Electrodes | N |
|---|---|---|---|---|---|---|---|---|---|
| CI24R N24 Contour | 2 | Freedom | 7 | ACE | 40 | 250 | 1 | 22 | 18 |
| CI24RE Contour | 1 | 810 (N5) | 24 | | | 500 | 6 | 21 | 5 |
| CI24RE Contour Advance | 26 | 910 (N6) | 9 | | | 720 | 1 | 20 | 10 |
| CI512 | 11 | | | | | 900 | 12 | 19 | 2 |
| | | | | | | 1200 | 10 | 18 | 3 |
| | | | | | | 1800 | 9 | 16 | 1 |
| | | | | | | 2400 | 1 | 15 | 1 |
ACE = Advanced Combination Encoder, N = number of test ears, N5 = Nucleus 5, N6 = Nucleus 6, Num. = number, pps/ch = pulses per second per channel
Procedures
During a single session, a test battery was administered in the unilateral CI condition. Participants' preferred speech processor program and settings were used. If the participant wore a HA (n = 20) or an implant (n = 13) in the non-test ear, it was turned off prior to testing. For participants with usable residual hearing in the non-test ear (four-frequency PTA ≤ 60 dB HL, n = 3), the non-test ear was plugged and muffed prior to testing. Test materials, stored as audio files on a PC, were presented via a 24-bit studio sound card (Lynx Studio Technology L22) and power amplifier (Crown D-150A) to a sound-field loudspeaker (JBL LSR32) in a double-walled sound attenuating booth (IAC). Sound-field thresholds for frequency-modulated (FM) tones were obtained from 250 to 6000 Hz. Outcome measures included monosyllabic words presented in quiet and sentences presented in quiet and noise; a spectral discrimination test was administered to determine participants' ability to resolve spectral cues.
Monosyllabic word testing consisted of the Consonant-Vowel Nucleus-Consonant (CNC) Word Lists (15) presented at 60 dB SPL. AzBio Sentences (16) were presented at 60 dB SPL in both quiet and noise using a signal-to-noise ratio (SNR) of +8 dB (4-talker babble). Both CNC words and AzBio sentences were presented through a loudspeaker at 0° azimuth and 1.5 meters from the center of the participant's head.

To better understand participants' listening abilities in commonly encountered noisy environments, testing was also completed in the R-SPACE™ test environment (17,18). Participants were surrounded by eight loudspeakers spaced 45° apart. Restaurant noise was presented through all loudspeakers at 60 dB SPL, and Hearing in Noise Test (HINT) sentences (19) were presented through the front loudspeaker. Sentence level varied adaptively depending on participant responses (i.e., level increased 2 dB with incorrect responses and decreased 2 dB with correct responses). The outcome measure was the SNR at which 50% of the sentences could be repeated correctly. Two lists of each speech perception test were given.

To assess spectral discrimination ability, the spectral-temporally modulated ripple test (SMRT) (20) was administered at 65 dB SPL. For each trial, participants heard three stimuli (two reference, one target) and used a touchscreen to indicate which stimulus sounded different. The stimuli differed spectrally in their number of ripples per octave (RPO). Reference stimuli were fixed at a relatively high spectral-ripple density (20 RPO), while the target stimulus initially had a ripple density of 0.5 RPO. The ripple density of the target stimulus varied adaptively depending on participant responses (i.e., ripple density decreased 0.2 RPO with incorrect responses and increased 0.2 RPO with correct responses). The ripple repetition rate was fixed throughout at 5 Hz (20). The test stopped after ten reversals, and values from the last six reversals were averaged to give an SMRT threshold (in RPO) for that adaptive run. The procedure was administered three times, and thresholds from the last two runs were averaged for a final SMRT threshold.
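To make the adaptive rule concrete, the sketch below simulates a single one-up/one-down SMRT track as described above (0.2 RPO step, stop after ten reversals, threshold = mean of the last six reversal values). The `present_trial` callback is a hypothetical stand-in for the actual three-interval listening trial, and the trial cap and lower bound on ripple density are safety assumptions rather than part of the published procedure; the HINT track in the R-SPACE™ works analogously, with 2 dB SNR steps in place of RPO steps.

```python
def run_smrt_track(present_trial, start_rpo=0.5, step=0.2,
                   n_reversals=10, n_average=6, max_trials=200):
    """One-up/one-down adaptive track for the SMRT.

    present_trial(target_rpo) should return True when the listener correctly
    picks the target (the stimulus differing from the two 20-RPO references).
    """
    rpo = start_rpo
    reversal_values = []
    previous_direction = None          # +1 = getting harder (density up), -1 = easier
    for _ in range(max_trials):        # trial cap is a safety guard, not from the paper
        direction = 1 if present_trial(rpo) else -1
        if previous_direction is not None and direction != previous_direction:
            reversal_values.append(rpo)            # ripple density at each reversal
        previous_direction = direction
        # Correct responses raise the ripple density one step; incorrect responses lower it.
        rpo = max(0.1, rpo + step * direction)     # 0.1 RPO floor assumed
        if len(reversal_values) >= n_reversals:    # stop after ten reversals
            break
    # Threshold = mean ripple density over the last six reversals.
    return sum(reversal_values[-n_average:]) / n_average
```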
Results
Outcome measures
Group mean sound-field thresholds for FM tones at 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, and 6.0 kHz were 18, 22, 19, 18, 23, 20, and 18 dB HL, respectively. These levels indicate participants could hear conversational and soft speech at a typical (1.5 meter) speaking distance. Figure 1 shows individual and group mean scores for each outcome measure ranked by performance on CNC words (Panel A). The group mean CNC word score was 76% (Range = 52% – 94%, SD = 11.6%). Panel B shows individual and group mean scores for AzBio sentences in quiet (Mean = 87%, Range = 36% – 99%, SD = 14.7%) and in noise (Mean = 52%, Range = 0% – 96%, SD = 24.2%). Sentence scores in quiet from two older participants (ages 79 and 81 years), who were not the oldest in the group, were considerably lower than the rest. Excluding these two scores, the remaining sentence scores in quiet ranged from 61% to 99%. Sentence scores in noise were lower and more varied than in quiet, with an average decrease of 35 percentage points (Range = 3 – 60 percentage points) when noise was added. Results from the R-SPACE™, which simulated listening in a busy restaurant, are shown in Panel C; lower SNRs indicate better ability to understand sentences in noise. The group mean SNR was 6.8 dB (SD = 5.35 dB), and, as with AzBio sentences in noise, there was a sizeable range in scores (−1.6 dB to 22 dB). Panel D shows the SMRT thresholds. The group mean threshold was 3.8 RPO, with a range of 1.5 – 6.2 RPO (SD = 1.3 RPO). Pilot testing in our lab with 26 normal-hearing adults yielded an average threshold of 8.1 RPO (Range = 5.3 – 11.2 RPO, SD = 1.6 RPO), consistent with the trimmed mean score of approximately 8.5 RPO reported by Aronoff & Landsberger (20) for eight normal-hearing participants.
Figure 1.
Individual and group mean scores for CNC words (Panel A), AzBio sentences in quiet and in noise (Panel B), HINT sentences in the R-SPACE™ (Panel C), and spectral ripple discrimination (Panel D). CNC word scores in Panel A are ranked by performance. Panels B – D use the same participant order as Panel A. Error bars indicate ± one standard deviation.
Correlations to outcome measures
Biographic and audiologic factors examined were those with potential to impact CI outcomes: age at CI, age at study, age at SPHL, duration of SPHL, duration of hearing loss, duration of HA use, pre-operative four-frequency PTA, and pre-operative hearing level at 250 Hz. Audiologic factors included in the analysis were those for the test ear. Because many of these factors are highly correlated with each other, a principal components (PC) analysis was completed to reduce them to a smaller number of meaningful components. After entering the relevant variables, the PC analysis indicated three distinct factors: PC1 Age (age at CI, age at study, age at SPHL), PC2 Duration (duration of hearing loss, duration of HA use, duration of SPHL), and PC3 Pre-op Hearing (pre-operative four-frequency PTA and hearing level at 250 Hz). Pearson product-moment correlations were computed between these three PCs and the five outcome measures. PC1 Age was moderately and negatively correlated with speech perception outcomes (CNC words: r = −0.32, p ≤ 0.05; AzBio Quiet: r = −0.45, p ≤ 0.01; AzBio Noise: r = −0.37, p ≤ 0.05). No other correlations were significant. As noted previously, two older participants had the lowest AzBio scores in quiet. Analyses without these two participants resulted in the same three PCs; PC1 Age still correlated with AzBio scores in quiet (r = −0.34, p ≤ 0.05) but not with the other two measures. None of the three PC factors correlated with SMRT thresholds. However, similar to results of previous studies (21–24), participants' ability to resolve spectral cues (SMRT thresholds) was correlated with speech understanding in quiet (CNC words: r = .62, p ≤ .001; AzBio Quiet: r = .60, p ≤ .001) and noise (AzBio Noise: r = .60, p ≤ .001; R-SPACE™: r = −.48, p < .005).
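For readers who want to apply this style of analysis to their own data, a minimal sketch is given below, assuming the predictors and outcomes sit in a pandas DataFrame. The column names, the use of scikit-learn's unrotated PCA, and the standardization step are illustrative assumptions, not a description of the authors' exact statistical procedure.

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical column names for the eight predictors and five outcome measures.
FACTORS = ["age_at_ci", "age_at_study", "age_at_sphl",
           "dur_hearing_loss", "dur_ha_use", "dur_sphl",
           "preop_pta_4freq", "preop_hl_250"]
OUTCOMES = ["cnc_words", "azbio_quiet", "azbio_noise", "rspace_snr", "smrt_rpo"]

def pc_outcome_correlations(df: pd.DataFrame, n_components: int = 3):
    """Reduce correlated predictors to principal components, then correlate
    each component's scores with each outcome measure (Pearson r and p)."""
    z = StandardScaler().fit_transform(df[FACTORS])      # z-score the predictors
    pca = PCA(n_components=n_components)
    pc_scores = pca.fit_transform(z)
    # Loadings show which original factors group onto each component
    # (e.g., the three age variables loading together would correspond to "PC1 Age").
    loadings = pd.DataFrame(pca.components_.T, index=FACTORS,
                            columns=[f"PC{i + 1}" for i in range(n_components)])
    correlations = {}
    for i in range(n_components):
        for outcome in OUTCOMES:
            r, p = pearsonr(pc_scores[:, i], df[outcome])
            correlations[(f"PC{i + 1}", outcome)] = (r, p)
    return loadings, correlations
```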
Discussion
The purpose of this study was to identify primary biographic and audiologic factors contributing to CI performance variability in quiet and noise by controlling electrode array type and electrode position within the cochlea. All participants were implanted with a perimodiolar electrode array and had all electrodes located in ST. Curiously, PC1 Age was the only factor significantly correlated with speech recognition for this group. Age has been shown to correlate with outcomes in some studies (1,3,25,26), while others have shown no relation (2,5,12,27). Eighteen participants in the current study were ≥65 years of age (Mean = 74 years, Range = 65 – 83 years); their average pre-operative CNC score was 8.4%, and their average CNC score obtained during the study was 72%. Despite the significant correlation between PC1 Age and performance, older individuals received considerable benefit from cochlear implantation, a finding similar to other reports (26,28). Duration of SPHL, duration of hearing loss, and duration of HA use comprised PC2 Duration, which did not correlate with study outcomes. This was unexpected, as duration of SPHL has routinely been shown to correlate with CI performance (1–3,5,6,27). One reason for the lack of correlation between PC2 Duration and speech outcomes may be that extensive HA use by this group during the period of SPHL mitigated the effects of duration of SPHL. Notably, 37 participants in this study used amplification prior to implantation. Only two participants did not use amplification; however, both had normal hearing in the low and mid frequencies in the non-implanted ear. Lazard et al. (4) found that participants who continued HA use during SPHL had a slower decline in speech perception performance and better overall speech perception with a CI compared to participants who discontinued HA use. The use of amplification, and the resulting reduction of auditory deprivation during SPHL, might protect central auditory pathways from the negative effects of cognitive reorganization (29,30). Therefore, it is possible that the lack of auditory deprivation during SPHL for this group reduced the effect of duration of SPHL. Another possible reason may be the relative homogeneity of word and sentence scores in quiet (Figure 1, Panels A and B). Participants' word scores ranged from 52% to 94%, and sentence scores ranged from 36% to 99%. Previous studies relating duration of SPHL to unilateral CI speech recognition in quiet reported individual scores that spanned the range of all possible scores (2,3,5,6,27). Lastly, PC3 Pre-op Hearing (pre-operative four-frequency PTA and hearing level at 250 Hz) did not correlate with outcome measures. Given the group's limited pre-operative hearing in the test ear (Table 1), it is not surprising that PC3 Pre-op Hearing did not influence performance. This lack of correlation between pre-operative hearing and post-operative outcomes is consistent with a number of studies (4,5,31).
Speech recognition scores in quiet were high for the majority of study participants; the mean CNC word score of 76% was higher than reported in previous studies. Older studies reported average post-operative CNC scores of 40–45% (32–34); more recent publications report mean word scores of approximately 55–60% (3,35–37). We speculate that the position of the electrodes examined in this study (insertion angle, ST, perimodiolar) contributed to the high speech recognition performance in quiet. Insertion angles of the most basal and most apical electrodes were consistent across the group, with all electrodes in the cochlea, specifically in ST, and no overly deep insertions (Table 2). Because the array was not inserted past its intended design depth and was located in ST, trauma to cochlear structures may have been minimal, resulting in relatively high speech recognition scores in quiet (3,9–12,38,39). Furthermore, a perimodiolar electrode array is presumed to be situated closer to surviving ganglion cells than a lateral wall array. Electrodes near target neural structures may have reduced current requirements and improved spatial selectivity compared to more distant electrodes, leading to reduced channel interactions and possibly better outcomes (3,39–43).
Despite consistent electrode position, understanding speech in noise was challenging and variable for this group. AzBio sentence scores in noise decreased substantially, by an average of 35 percentage points, compared to scores in quiet. Furthermore, the impact of noise varied widely among participants, with the decrement in scores ranging from 3 to 60 percentage points. R-SPACE™ testing also showed wide variability for sentence understanding in noise, with SNRs ranging from −1.6 dB to 22 dB. The group mean SNR (6.8 dB) was nearly 12 dB poorer than the SNR obtained by normal hearing listeners (n = 23, Mean SNR = −4.9 dB, Firszt et al., submitted). Listening in noise is particularly difficult when there is unilateral input, as was the case in this study (listening with a unilateral CI). Bilateral listening, either with a HA or a CI in the other ear, would likely improve these listeners' performance in noise (44,45). Regardless, these results underscore the importance of evaluating CI recipients in both quiet and noise to determine function in daily life, as high scores in quiet did not necessarily equate to high scores in noise (Figure 1, Panel B). Correlation analysis showed that PC1 Age was the only examined factor with a significant, albeit modest, correlation with speech understanding in noise when all participants were included (AzBio Noise: r = −0.37, p ≤ 0.05). As noted above, when two participants were removed from the analysis, PC1 Age was no longer correlated with sentence recognition in noise. Perhaps differences in auditory/cognitive processing, known to decline with age, contributed to outcome variability, especially in noise (46–48). Given the degrading effects of noise on speech understanding for most participants, it is likely that difficult listening situations require more cognitive resources than listening in quiet, regardless of age. Accordingly, individual differences in verbal processing, verbal learning, and working memory could account for a portion of the variance in noise among CI users (49–51).
Interestingly, the substantial inter-subject variability and poorer-than-normal spectral resolution of these study participants (Figure 1, Panel D) occurred even though the SMRT requires no language processing; furthermore, spectral resolution ability did not correlate with any of the PC factors examined. Nevertheless, spectral ripple thresholds were correlated with performance in quiet (CNC words: r = .62, p ≤ .001; AzBio Quiet: r = .60, p ≤ .001) and in noise (AzBio Noise: r = .60, p ≤ .001; R-SPACE™: r = −.48, p < .005). The ability to resolve spectral cues, which is particularly important for speech understanding in noise, is generally poor in CI recipients due to the limited number of channels providing spectral cues (22,23,52,53). The number of "effective" channels is limited by the number of electrodes available for implantation and by the overlap in electrical fields among these electrodes (channel interaction). CI channel interactions are influenced by the spacing among electrode contacts, the distances from electrodes to surviving spiral ganglion cells, and the number of surviving cells. Generally, the higher the current levels required for excitation, the greater the current spread and channel interactions (52,53). Jones et al. (54) showed a negative correlation between channel interactions and spectral discrimination thresholds in a group of seven CI users. Noble et al. (55) showed improved spectral resolution for CI users when electrodes thought to be contributing to channel interactions were deactivated. Hence, device designs that further limit channel interactions may improve spectral resolution. Findings from this and other studies (21–24) suggest that improved spectral resolution may result in increased speech understanding in noise.
Conclusion
For this group of CI recipients with consistent electrode array type and electrode position within the cochlea, PC1 Age was the only examined biographic or audiologic factor that correlated, though modestly, with speech recognition. Consistent electrode position (insertion angle, ST, perimodiolar) may have contributed to the high speech recognition performance in quiet; however, understanding in noise was challenging for most participants. The impact of noise varied widely among participants, with the decrement in sentence scores ranging from 3 to 60 percentage points when noise was added. Although integral to the study design, unilateral rather than bilateral listening may have influenced performance in noise. Furthermore, individual differences in auditory/cognitive processing may have contributed to this group's decreased and variable speech recognition in noise. Spectral resolution ability had a strong correlation with speech measures in both quiet and noise, suggesting that device designs that improve spectral resolution may in turn improve CI outcomes, particularly in noise. Continued research examining the peripheral and central mechanisms that affect speech recognition may refine and individualize device programming and auditory training techniques to improve overall CI performance.
Acknowledgments
Conflicts of Interest and Sources of Funding: Laura Holden, AuD, is a member of the audiology advisory board for Advanced Bionics and receives a consulting fee. Jill Firszt, PhD, is a member of the audiology advisory board for Cochlear Americas and Advanced Bionics and receives a consulting fee. WUSM has received payment from Cochlear Americas for work performed by Timothy Holden specific to research studies at Cochlear Americas. For the remaining authors, none were declared. This study was funded in part by Cochlear Americas (Centennial, CO) and by the National Institute on Deafness and Other Communication Disorders Grant R01 DC009010 (JBF).
References
1. Blamey P, Artieres F, Baskent D, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: An update with 2251 patients. Audiol Neurootol. 2013;18:36–47. doi: 10.1159/000343189.
2. Green KM, Bhatt Y, Mawman DJ, et al. Predictors of audiological outcome following cochlear implantation in adults. Cochlear Implants Int. 2007;8:1–11. doi: 10.1179/cim.2007.8.1.1.
3. Holden LK, Finley CC, Firszt JB, et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear. 2013;34:342–360. doi: 10.1097/AUD.0b013e3182741aa7.
4. Lazard DS, Vincent C, Venail F, et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: A new conceptual model over time. PLoS One. 2012;7:e48739. doi: 10.1371/journal.pone.0048739.
5. Plant K, McDermott H, van Hoesel R, et al. Factors predicting postoperative unilateral and bilateral speech recognition in adult cochlear implant recipients with acoustic hearing. Ear Hear. 2016;37:153–163. doi: 10.1097/AUD.0000000000000233.
6. Rubinstein JT, Parkinson WS, Tyler RS, et al. Residual speech recognition and cochlear implant performance: Effects of implantation criteria. Am J Otol. 1999;20:445–452.
7. Carlson ML, Driscoll CLW, Gifford RH, et al. Implications of minimizing trauma during conventional cochlear implantation. Otol Neurotol. 2011;32:962–968. doi: 10.1097/MAO.0b013e3182204526.
8. Gifford RH, Dorman MF, Skarzynski H, et al. Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments. Ear Hear. 2013;34:413–425. doi: 10.1097/AUD.0b013e31827e8163.
9. Aschendorff A, Kromeier J, Klenzner T, et al. Quality control after insertion of the Nucleus Contour and Contour Advance electrode in adults. Ear Hear. 2007;28:75S–79S. doi: 10.1097/AUD.0b013e318031542e.
10. Finley CC, Holden TA, Holden LK, et al. Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otol Neurotol. 2008;29:920–928. doi: 10.1097/MAO.0b013e318184f492.
11. Skinner MW, Holden TA, Whiting BR, et al. In vivo estimates of the position of Advanced Bionics' electrode arrays in the human cochlea. Ann Otol Rhinol Laryngol. 2007;116:1–24.
12. Wanna GB, Noble JH, Carlson ML, et al. Impact of electrode design and surgical approach on scalar location and cochlear implant outcomes. Laryngoscope. 2014;124:S1–S7. doi: 10.1002/lary.24728.
13. Teymouri J, Hullar TE, Holden TA, et al. Verification of computed tomographic estimates of cochlear implant array position: A micro-CT and histologic analysis. Otol Neurotol. 2011;32:980–986. doi: 10.1097/MAO.0b013e3182255915.
14. Vandali AE, Whitford LA, Plant KL, et al. Speech perception as a function of electrical stimulation rate: Using the Nucleus 24 cochlear implant system. Ear Hear. 2000;21:608–624. doi: 10.1097/00003446-200012000-00008.
15. Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord. 1962;27:62–70. doi: 10.1044/jshd.2701.62.
16. Spahr AJ, Dorman MF, Litvak LM, et al. Development and validation of the AzBio sentence lists. Ear Hear. 2012;33:112–117. doi: 10.1097/AUD.0b013e31822c2549.
17. Compton-Conley CL, Neuman AC, Killion MC, et al. Performance of directional microphones for hearing aids: Real-world versus simulation. J Am Acad Audiol. 2004;15:440–455. doi: 10.3766/jaaa.15.6.5.
18. Revit LJ, Schulein RB, Julstrom SD. Toward accurate assessment of real-world hearing aid benefit. Hear Rev. 2002;9:34–38, 51.
19. Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95:1085–1099. doi: 10.1121/1.408469.
20. Aronoff JM, Landsberger DM. The development of a modified spectral ripple test. J Acoust Soc Am. 2013;134:EL217–EL222. doi: 10.1121/1.4813802.
21. Drennan WR, Anderson ES, Won JH, et al. Validation of a clinical assessment of spectral-ripple resolution for cochlear-implant users. Ear Hear. 2014;35:e92–e98. doi: 10.1097/AUD.0000000000000009.
22. Henry BA, Turner CW, Behrens A. Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners. J Acoust Soc Am. 2005;118:1111–1121. doi: 10.1121/1.1944567.
23. Jeon EK, Turner CW, Karsten SA, et al. Cochlear implant users' spectral ripple resolution. J Acoust Soc Am. 2015;138:2350–2358. doi: 10.1121/1.4932020.
24. Won JH, Drennan WR, Rubinstein JT. Spectral-ripple resolution correlates with speech reception in noise in cochlear implant users. J Assoc Res Otolaryngol. 2007;8:384–392. doi: 10.1007/s10162-007-0085-8.
25. Friedland DR, Runge-Samuelson C, Baig H, et al. Case-control analysis of cochlear implant performance in elderly patients. Arch Otolaryngol Head Neck Surg. 2010;136:432–438. doi: 10.1001/archoto.2010.57.
26. Zwolan TA, Henion K, Segel P, et al. The role of age on cochlear implant performance, use, and health utility: A multicenter clinical trial. Otol Neurotol. 2014;35:1560–1568. doi: 10.1097/MAO.0000000000000583.
27. Leung J, Wang NY, Yeagle JD, et al. Predictive models for cochlear implantation in elderly candidates. Arch Otolaryngol Head Neck Surg. 2005;131:1049–1054. doi: 10.1001/archotol.131.12.1049.
28. Budenz CL, Cosetti MK, Coelho DH, et al. The effects of cochlear implantation on speech perception in older adults. J Am Geriatr Soc. 2011;59:446–453. doi: 10.1111/j.1532-5415.2010.03310.x.
29. Lazard DS, Giraud AL, Truy E, et al. Evolution of non-speech sound memory in postlingual deafness: Implications for cochlear implant rehabilitation. Neuropsychologia. 2011;49:2475–2482. doi: 10.1016/j.neuropsychologia.2011.04.025.
30. Lazard DS, Lee HJ, Gaebler M, et al. Phonological processing in post-lingual deafness and cochlear implant outcome. NeuroImage. 2010;49:3443–3451. doi: 10.1016/j.neuroimage.2009.11.013.
31. Gifford RH, Dorman MF, Shallop JK, et al. Evidence for the expansion of adult cochlear implant candidacy. Ear Hear. 2010;31:186–194. doi: 10.1097/AUD.0b013e3181c6b831.
32. Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear. 2004;25:375–387. doi: 10.1097/01.aud.0000134552.22205.ee.
33. Parkinson AJ, Arcaroli J, Staller SJ, et al. The Nucleus 24 Contour cochlear implant system: Adult clinical trial results. Ear Hear. 2002;23:41S–48S. doi: 10.1097/00003446-200202001-00005.
34. Zwolan T, Kileny PR, Smith S, et al. Adult cochlear implant patient performance with evolving electrode technology. Otol Neurotol. 2001;22:844–849. doi: 10.1097/00129492-200111000-00022.
35. Balkany T, Hodges A, Menapace C, et al. Nucleus Freedom North American clinical trial. Otolaryngol Head Neck Surg. 2007;136:757–762. doi: 10.1016/j.otohns.2007.01.006.
36. Gifford RH, Shallop JK, Peterson AM. Speech recognition materials and ceiling effects: Considerations for cochlear implant programs. Audiol Neurootol. 2008;13:193–205. doi: 10.1159/000113510.
37. Skinner MW, Holden LK, Fourakis MS, et al. Evaluation of equivalency in two recordings of monosyllabic words. J Am Acad Audiol. 2006;17:350–366. doi: 10.3766/jaaa.17.5.5.
38. Adunka O, Kiefer J. Impact of electrode insertion depth on intracochlear trauma. Otolaryngol Head Neck Surg. 2006;135:374–382. doi: 10.1016/j.otohns.2006.05.002.
39. Roland JT Jr. A model for cochlear implant electrode insertion and force evaluation: Results with a new electrode design and insertion technique. Laryngoscope. 2005;115:1325–1339. doi: 10.1097/01.mlg.0000167993.05007.35.
40. Gordon KA, Papsin BC. From Nucleus 24 to 513: Changing cochlear implant design affects auditory response thresholds. Otol Neurotol. 2013;34:436–442. doi: 10.1097/MAO.0b013e3182804784.
41. Long CJ, Holden TA, McClelland GH, et al. Examining the electro-neural interface of cochlear implant users using psychophysics, CT scans, and speech understanding. J Assoc Res Otolaryngol. 2014;15:293–304. doi: 10.1007/s10162-013-0437-5.
42. Saunders E, Cohen L, Aschendorff A, et al. Threshold, comfortable level and impedance changes as a function of electrode-modiolar distance. Ear Hear. 2002;23:28S–40S. doi: 10.1097/00003446-200202001-00004.
43. van der Beek FB, Boermans PPBM, Verbist BM, et al. Clinical evaluation of the Clarion CII HiFocus 1 with and without positioner. Ear Hear. 2005;26:577–592. doi: 10.1097/01.aud.0000188116.30954.21.
44. Potts LG, Skinner MW, Litovsky RA, et al. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J Am Acad Audiol. 2009;20:353–373. doi: 10.3766/jaaa.20.6.4.
45. Reeder RM, Firszt JB, Holden LK, et al. A longitudinal study in adults with sequential bilateral cochlear implants: Time course for individual ear and bilateral performance. J Speech Lang Hear Res. 2014;57:1108–1126. doi: 10.1044/2014_JSLHR-H-13-0087.
46. Gates GA, Feeney MP, Mills D. Cross-sectional age-changes of hearing in the elderly. Ear Hear. 2008;29:865–874. doi: 10.1097/aud.0b013e318181adb5.
47. Pichora-Fuller MK, Schneider BA, Daneman M. How young and old adults listen to and remember speech in noise. J Acoust Soc Am. 1995;97:593–608. doi: 10.1121/1.412282.
48. Schneider BA, Pichora-Fuller K, Daneman M. Effects of senescent changes in audition and cognition on spoken language comprehension. In: Gordon-Salant S, Frisina DR, Popper NA, Fay RR, editors. The Aging Auditory System. New York: Springer; 2010. pp. 167–210.
49. Besser J, Koelewijn T, Zekveld AA, et al. How linguistic closure and verbal working memory relate to speech recognition in noise – a review. Trends Amplif. 2013;17:75–93. doi: 10.1177/1084713813495459.
50. Gordon-Salant S, Cole S. Effects of age and working memory capacity on speech recognition performance in noise among listeners with normal hearing. Ear Hear. 2016. doi: 10.1097/AUD.0000000000000316. (in press)
51. Heydebrand G, Hale S, Potts L, et al. Cognitive predictors of improvements in adults' spoken word recognition six months after cochlear implant activation. Audiol Neurootol. 2007;12:254–264. doi: 10.1159/000101473.
52. Fu QJ, Nogaki G. Noise susceptibility of cochlear implant users: The role of spectral resolution and smearing. J Assoc Res Otolaryngol. 2005;6:19–27. doi: 10.1007/s10162-004-5024-3.
53. Wilson BS, Dorman MF. Cochlear implants: Current designs and future possibilities. J Rehabil Res Dev. 2008;45:695–730. doi: 10.1682/jrrd.2007.10.0173.
54. Jones GL, Won JH, Drennan WR, et al. Relationship between channel interaction and spectral-ripple discrimination in cochlear implant users. J Acoust Soc Am. 2013;133:425–433. doi: 10.1121/1.4768881.
55. Noble JH, Gifford RH, Hedley-Williams AJ, et al. Clinical evaluation of an image-guided cochlear implant programming strategy. Audiol Neurootol. 2014;19:400–411. doi: 10.1159/000365273.