Author manuscript; available in PMC: 2024 Jan 1.
Published in final edited form as: Audiol Neurootol. 2022 Nov 30;28(3):151–157. doi: 10.1159/000527671

Acoustic change complex recorded in Hybrid cochlear implant users

Eun Kyung Jeon a, Bruna S Mussoi b, Carolyn J Brown a,c, Paul J Abbas a,c
PMCID: PMC10227181  NIHMSID: NIHMS1854165  PMID: 36450234

Abstract

Introduction:

Expanding CI candidacy criteria and advances in electrode arrays and soft surgical techniques have increased the number of CI recipients who have residual low frequency hearing. Objective measures such as obligatory cortical auditory evoked potentials (CAEPs) may help clinicians make more tailored recommendations to recipients regarding optimal listening mode. As a step toward this goal, this study investigated how CAEPs measured from Nucleus Hybrid cochlear implant users differ in two listening modes: acoustic alone (A-alone) vs. acoustic plus electric (A+E).

Methods:

Eight successful Hybrid CI users participated in this study. Two CAEPs, the onset P1-N1-P2 and the acoustic change complex (ACC), were measured simultaneously in response to the onset of, and an acoustic change within, a series of spectrally complex acoustic signals, in each of the two listening modes (A-alone and A+E). We examined the effects of listening mode and stimulus type on the N1-P2 amplitudes and peak latencies of the onset response and the ACC.

Results:

ACC amplitudes in Hybrid CI users significantly differed as a function of listening mode and stimulus type. ACC responses in A+E were larger than those in the A-alone mode. This was most evident for stimuli involving a change from low to high frequency.

Conclusions:

Results of this study showed that the ACC varies with listening mode and stimulus type. This finding suggests that the ACC can be used as a physiologic, objective measure of the benefit of Hybrid CIs, potentially supporting clinicians in making individualized recommendations about listening mode or in documenting subjective preference for a given listening mode. Further research into this potential clinical application in a range of Hybrid recipients and/or long electrode users who have residual low frequency hearing is warranted.

Keywords: Cochlear implant, Nucleus Hybrid cochlear implant, Electric acoustic stimulation, Cortical auditory evoked potential, Acoustic change complex

Introduction

Individuals with steeply sloping, high-frequency hearing loss often complain of poor word recognition [e.g., Kamm et al., 1985; Ching et al., 1998; Hogan and Turner, 1998]. Extensive damage to the hair cells in the basal turn of the cochlea makes perception of high-frequency information in the acoustic signal challenging, even with well fit amplification [Liberman and Dodds, 1984; Moore 2004].

Hybrid cochlear implants (CIs), also referred to as electric and acoustic stimulation (EAS) CIs, allow both electrical and acoustic stimulation in the same ear and can help address the needs of listeners with steeply sloping high frequency hearing loss. Low-frequency sounds are amplified as with a conventional hearing aid, while high-frequency sounds are processed electrically, bypassing the damaged hair cells and stimulating the auditory nerve directly. Studies show that speech perception scores in quiet and noise are improved when a Hybrid CI is used, compared to use of a conventional hearing aid or a CI alone [for reviews, see Woodson et al., 2010; Schaefer et al., 2021; Gantz et al., 2022]. Hybrid CI users also display better music perception and instrument recognition [Gfeller et al., 2006; Dorman et al., 2008].

As CI technology has evolved, performance has improved and candidacy requirements have expanded [for a review, see Zwolan & Basura, 2021]. In addition, from a public health perspective, there has been increased awareness of the need to improve access to hearing healthcare, as only a small proportion of listeners who could benefit from hearing devices pursue them [Huddle et al., 2017; Buchman et al., 2020]. To support these efforts, it is important to have objective data documenting benefit from CIs. While rates of hearing preservation in CI users have increased over time, not all CI recipients who could benefit from use of an EAS device continue to use the acoustic component routinely [Perkins et al., 2021; Spitzer et al., 2021]. Objective measures such as cortical auditory evoked potentials (CAEPs) could therefore be useful to clinicians who work with hearing impaired listeners, allowing them to make more tailored recommendations based on the added benefit a given listener may (or may not) be receiving from use of the acoustic component of the device.

As a step toward this goal, we examined the effect of stimulation mode (acoustic alone vs. acoustic + electric) on CAEPs recorded from successful Hybrid CI users. The paradigm we used allowed us to record both an onset P1-N1-P2 response and an acoustic change complex (ACC). Both are obligatory, cortically generated evoked responses that are related to detection and discrimination of sounds, respectively [Hyde, 1997; Martin & Boothroyd, 2000]. Both are recorded reliably from normal hearing subjects [e.g., Martin and Boothroyd, 2000], hearing aid users [e.g., Billings et al., 2011; Kirby & Brown, 2015], and standard CI users [e.g., Friesen and Tremblay, 2006; Won et al., 2011], and both can be evoked using a wide range of acoustic signals. In our previous study, ACC responses were successfully recorded using speech-like stimuli in Nucleus Hybrid CI users [Brown et al., 2015]. In that study, listening in the acoustic + electric (A+E) mode led to larger ACC responses than listening in the acoustic alone (A-alone) mode, but only for one of the two vowel contrasts used. The current study extends those findings by contrasting within-subject recordings of the ACC in two listening modes (A-alone and A+E), using a variety of spectrally complex stimuli. Our overall goal was to use an objective measure (CAEPs) to explore the benefit of Hybrid CI device use over acoustic stimulation alone.

Materials and Methods

Subjects

Eight Cochlear Nucleus Hybrid (Cochlear Ltd., Australia) CI users (mean age = 59.08 years, range 30.5–76.3 years; 4 females and 4 males) participated in this study. All received their CIs at the University of Iowa Hospitals and Clinics (UIHC) and were considered by their audiologists to be successful, full-time Hybrid CI users, with at least 1 year of CI experience prior to testing. Study participants used different internal electrode arrays: 4 used the S8 array with 6 active electrodes, 2 used the S12 array with 10 active electrodes, and 2 used the L24 array with 18 electrode contacts. Although these arrays differ in length and number of electrode contacts, the audiometric criteria for candidacy are similar across them, as are the typical rates of low frequency hearing preservation and speech perception outcomes [Gantz et al., 2022].

Figure 1 shows the audiometric configuration of the implanted ear of each study participant at the time of testing. The grey solid line indicates the average threshold at each frequency, across all subjects. For all participants, the frequency responses of the acoustic and electric components partially overlapped, by an average of 381.7 Hz. The frequency cutoff of the acoustic component was typically set to the highest frequency at which hearing aid prescription targets were met, following clinical best practices.

Fig. 1.

Individual audiometric configuration for the Hybrid cochlear implant subjects, from 125 to 4000 Hz. The grey solid line indicates the mean audiometric thresholds.

Procedures

CAEPs were recorded from each subject using a range of acoustic stimuli presented in the sound field at 65 dBA from a loudspeaker located at 0° azimuth, approximately 4 feet from the listener. All subjects were tested twice: once using both the acoustic and electric components of the Hybrid device (i.e., the A+E mode) and once using the acoustic component of the device alone (i.e., the A-alone mode). The non-test ear was plugged and muffled during testing. The order of stimuli and listening modes was randomized across subjects. A custom MATLAB script (MathWorks, Inc., US; R2011b) was used to present the stimuli and to trigger the averaging computer. To minimize the effects of adaptation, a relatively slow, jittered stimulation rate was used, varying randomly from 2.8 to 3.8 stimuli per second.

During testing, all participants were encouraged to read, play with an iPad, or watch captioned videos in order to stay alert. The researchers monitored each participant's state of alertness using two video cameras mounted in the sound booth. The test session lasted approximately 4 hours, including 30 minutes of preparation and breaks.

Stimuli

Acoustic stimuli used to evoke the cortical potentials were generated digitally using Adobe Audition or MATLAB, at a sampling rate of 44,100 samples/second. All acoustic change stimuli were 800 ms long, created by concatenating two 400-ms segments that were RMS-balanced but differed in timbre, pitch, speech segment, or spectral ripple phase. Digital editing was used to ensure that there were no audible pops or clicks at the transition point in the acoustic stimulus. The specific stimuli used in this study are described below, for a total of seven stimulus contrasts.
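The construction step shared by all contrasts can be illustrated with a minimal MATLAB sketch, shown here before the individual contrasts are described. The two placeholder tone segments, the variable names, and the brief 5-ms ramps used to avoid a click at the transition are illustrative assumptions; the exact digital editing applied to the stimuli is not specified in the text.

    % Minimal sketch: RMS-balance two 400-ms segments and concatenate them into
    % one 800-ms change stimulus (placeholder segments; assumed 5-ms ramps).
    fs   = 44100;                               % sampling rate, as in the text
    t    = (0:1/fs:0.4 - 1/fs)';                % 400-ms time vector
    seg1 = sin(2*pi*262*t);                     % placeholder first segment (e.g., C4)
    seg2 = sin(2*pi*311*t);                     % placeholder second segment (e.g., D#4)

    rmsval = @(x) sqrt(mean(x.^2));             % segment RMS
    seg2   = seg2 * (rmsval(seg1) / rmsval(seg2));   % RMS-balance the second segment

    % One way to avoid an audible click at the 400-ms junction: short raised-cosine
    % ramps at the end of the first segment and the start of the second.
    nR   = round(0.005*fs);                     % 5-ms ramp (assumed)
    ramp = 0.5*(1 - cos(pi*(0:nR-1)'/(nR-1)));  % rises from 0 to 1
    seg1(end-nR+1:end) = seg1(end-nR+1:end) .* flipud(ramp);
    seg2(1:nR)         = seg2(1:nR) .* ramp;
    stim = [seg1; seg2];                        % 800-ms acoustic change stimulus
    % sound(stim, fs)                           % audition the result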

• Pitch change:

A musical note played by a clarinet that included a change in pitch from middle C (C4) to D#4. This corresponded to a change in fundamental frequency from 262 Hz to 310 Hz. A second stimulus was created with the two notes in opposite order (i.e., D#4 changing to C4, both played by a clarinet).

• Timbre change:

A musical note (C4) played by a clarinet changed to an oboe playing the same note, corresponding to a change in timbre. A second stimulus was created with the two instruments in opposite order (i.e., an oboe changing to a clarinet, both playing C4).

• Vowel change:

The vowel /u/ changed into the vowel /i/, as a result of the second formant frequency being shifted from 1178 Hz to 2270 Hz. A second stimulus was created with the two vowels in opposite order (i.e., /i/ changing into /u/).

• Spectral ripple phase change:

Spectral ripples changed in phase by 180 degrees, which was described by most subjects as a change in pitch. The ripple density was 1 ripple/octave, and the modulation depth was 40 dB.
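For the spectral ripple contrast, one possible construction is sketched below in MATLAB. The ripple density (1 ripple/octave), the 40-dB modulation depth, and the 180-degree phase shift are taken from the text; the passband, the number of tonal components, and the use of random component phases are assumptions, since the synthesis of the ripple noise is not described here.

    % Minimal sketch: spectral ripple segments built as a sum of log-spaced tones
    % whose levels follow a sinusoid on a log-frequency axis; shifting the ripple
    % phase by pi (180 degrees) swaps the spectral peaks and valleys.
    fs      = 44100;
    t       = (0:1/fs:0.4 - 1/fs)';                 % one 400-ms segment
    fLo     = 100; fHi = 5000;                      % assumed passband (Hz)
    nTones  = 400;                                  % assumed number of components
    density = 1;                                    % ripples per octave, as in the text
    depthdB = 40;                                   % peak-to-valley depth (dB), as in the text

    f   = logspace(log10(fLo), log10(fHi), nTones); % log-spaced carrier frequencies
    phi = 2*pi*rand(1, nTones);                     % random component starting phases

    makeSeg = @(riplPhase) ...
        sin(2*pi*t*f + repmat(phi, numel(t), 1)) * ...
        (10.^((depthdB/2) * sin(2*pi*density*log2(f/fLo) + riplPhase) / 20))';

    seg1 = makeSeg(0);                              % standard ripple
    seg2 = makeSeg(pi);                             % ripple phase shifted by 180 degrees
    seg2 = seg2 * (sqrt(mean(seg1.^2)) / sqrt(mean(seg2.^2)));  % RMS balance
    stim = [seg1; seg2] / max(abs([seg1; seg2]));   % concatenate and normalize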

Electrophysiologic measures

Nine standard disposable disc electrodes were applied to the scalp. The recording electrode sites included the vertex (Cz), both mastoids (M1 and M2), the contralateral temporal site (T3 or T4), the high forehead (Fz), and the inion (Oz). The ground electrode was located off center on the forehead. Two additional electrodes were placed above and below, lateral to one eye, to monitor eye blinks. Online artifact rejection was used to eliminate any sweeps containing voltage excursions greater than 80 μV in one or more of the six recording channels.

An optically isolated, Intelligent Hearing Systems differential amplifier (IHS8000) was used to amplify (gain = 10,000) and band-pass filter (1–30 Hz) the raw EEG activity. A National Instruments Data Acquisition board (DAQ card-6062E) was used to sample the ongoing EEG activity at a rate of 10,000 Hz per channel. Custom-designed LabVIEW software (National Instruments Corp.) was used to compute and display the averaged waveforms. Two recordings of 100 sweeps each were obtained for each of the 7 stimulus contrasts, in each of the two listening modes. These two recordings were then combined off-line to create a single evoked potential response.
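A minimal MATLAB sketch of the off-line epoching, artifact rejection, and averaging steps is shown below. The acquisition and averaging were actually performed with the custom LabVIEW software described above; the epoch window, baseline period, and simulated single-channel data used here are illustrative assumptions.

    % Simulated continuous EEG (in microvolts) and trigger times stand in for one
    % recording channel and one 100-sweep run.
    fsEEG = 10000;                                   % EEG sampling rate, as in the text
    eeg   = 5*randn(120*fsEEG, 1);                   % 120 s of simulated EEG
    trig  = round(linspace(2, 118, 100)*fsEEG)';     % 100 simulated stimulus onsets

    preMs = 100; postMs = 1000;                      % assumed epoch window re: onset (ms)
    win   = round(-preMs/1000*fsEEG) : round(postMs/1000*fsEEG);

    epochs = zeros(numel(win), numel(trig));
    for k = 1:numel(trig)
        epochs(:, k) = eeg(trig(k) + win);           % cut one sweep per trigger
    end
    base   = mean(epochs(win <= 0, :), 1);           % pre-stimulus baseline, per sweep
    epochs = epochs - repmat(base, numel(win), 1);   % baseline correction

    keep    = max(abs(epochs), [], 1) <= 80;         % reject sweeps exceeding +/- 80 uV
    avgRun1 = mean(epochs(:, keep), 2);              % averaged waveform for this run
    % The two 100-sweep runs per condition were then combined off-line, e.g.:
    % caep = (avgRun1 + avgRun2) / 2;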

Data Analysis

Peak latencies and peak-to-peak amplitudes of the N1 and P2 components were analyzed off-line for all stimuli in the two listening modes. A repeated-measures analysis of variance (ANOVA) with Tukey-Kramer post hoc tests was conducted to test the effects of listening mode (2 levels) and stimulus type (7 levels) on the peak latencies and N1-P2 peak-to-peak amplitudes of the onset and ACC responses. Paired t-tests were used to compare individual responses obtained in the two listening modes.
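Continuing the sketch above, the N1 and P2 peaks could be identified within fixed search windows relative to the stimulus onset (for the onset response) and to the 400-ms change point (for the ACC). The specific windows below are assumptions; the windows actually used for peak picking are not reported here.

    % Minimal sketch of the N1/P2 peak measurements (uses win, fsEEG, and avgRun1
    % from the previous sketch; search windows are assumed).
    tMs = win / fsEEG * 1000;                        % epoch time axis in ms

    % Onset response: N1 and P2 relative to stimulus onset (0 ms)
    n1Win = tMs >= 80  & tMs <= 200;                 % assumed N1 search window
    p2Win = tMs >= 150 & tMs <= 300;                 % assumed P2 search window
    [n1Amp, i1] = min(avgRun1(n1Win));               % N1: most negative point
    [p2Amp, i2] = max(avgRun1(p2Win));               % P2: most positive point
    tN1 = tMs(n1Win);  onsetN1Lat = tN1(i1);         % N1 peak latency (ms)
    tP2 = tMs(p2Win);  onsetP2Lat = tP2(i2);         % P2 peak latency (ms)
    onsetN1P2 = p2Amp - n1Amp;                       % onset N1-P2 amplitude (uV)

    % ACC: same measures, in windows shifted by the 400-ms change point
    accN1Win = tMs >= 480 & tMs <= 600;
    accP2Win = tMs >= 550 & tMs <= 700;
    accN1P2  = max(avgRun1(accP2Win)) - min(avgRun1(accN1Win));  % ACC N1-P2 amplitude (uV)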

Results

Figure 2 shows spectrograms and grand average waveforms recorded from the eight Hybrid cochlear implant users tested in the two listening modes, in response to four of the spectral contrasts. Red dotted lines represent responses obtained in the A-alone mode and black solid lines represent responses obtained in the A+E mode. Vertical lines at 0 and 400 ms mark the stimulus onset and the point at which the change (in pitch, timbre, vowel, or spectral ripple phase) occurs. Data were missing for one participant in response to the pitch contrasts (both listening conditions) and for another participant in response to the /i/-/u/ contrast (both listening conditions).

Fig. 2.

Spectrograms of stimuli and grand mean waveforms recorded from the 8 Hybrid cochlear implant users tested in both listening modes: A-alone and A+E, represented by a red-dotted line and a black solid line, respectively (scaling shown in the top right panel). The stimulus contrast is indicated at the top of each spectrogram.

ACC N1-P2 amplitudes in the A-alone condition were not significantly correlated with the average low frequency threshold (125–1000 Hz; r = 0.395, p = 0.333). Therefore, hearing thresholds were not entered as a covariate in the analyses that follow.

A repeated-measures ANOVA revealed that for ACC N1-P2 peak-to-peak amplitudes, the effect of listening mode was significant (F(1,7) = 11.00, p = 0.0128), as was the effect of stimulus type (F(6,41) = 4.99, p = 0.0006). No significant interaction between listening mode and stimulus type was found for the ACC (p > 0.05). Post-hoc tests showed that ACC amplitudes were larger in the A+E listening mode than in the A-alone listening mode (t(7) = −3.32, p = 0.0128).

Additionally, the ACC amplitude advantage for A+E was significant when the stimulus contained a change from low to high frequency. For example, the ACC amplitude in A+E was significantly larger than in A-alone for the C4 to D#4 contrast (t(7) = −2.72, p = 0.0296), but not for the D#4 to C4 contrast (t(7) = −1.12, p = 0.2998). Similarly, the ACC amplitude in A+E was significantly larger for the /u/ to /i/ contrast (t(6) = −4.88, p = 0.0028), but not for the /i/ to /u/ contrast (t(5) = −1.32, p = 0.2439). In addition, all music- and speech-like stimuli elicited significantly larger ACC amplitudes than the spectral ripple noise stimulus (C4 to D#4: t(21) = 4.78, p = 0.0001; oboe to clarinet: t(21) = 4.38, p = 0.0003; and /u/ to /i/: t(21) = 2.70, p = 0.0134).

For onset N1-P2 amplitudes, there were no significant differences as a function of listening mode or stimulus type (all p > 0.05). Finally, latencies of N1 and P2 components of the onset and the ACC responses were also analyzed with a repeated-measures ANOVA. Neither listening mode nor stimulus type had a significant effect on N1 or P2 latencies (all p > 0.05). This was true for both the onset and ACC responses.

Box plots in Figure 3 show the distributions of the onset and ACC N1-P2 peak-to-peak amplitudes, measured in the A-alone and A+E listening modes for the stimuli described in Figure 2. As discussed above, ACC amplitudes were significantly larger in the A+E mode than in the A-alone mode for stimuli with a change from oboe to clarinet, from C4 to D#4, and from /u/ to /i/. However, this was not true for the spectral ripple noise stimulus.

Fig. 3.

Distribution of the onset and ACC N1-P2 peak-to-peak amplitudes measured using different stimuli (indicated at the top of each graph), in both listening modes (A-alone, A+E). The mean and median are shown as thick and thin black lines, respectively. The top and bottom of each box indicate the 3rd and 1st quartiles, respectively. Asterisks (*) indicate statistically significant differences.

Discussion

The goal of this study was to make a within-subject comparison of CAEPs recorded in response to spectrally complex stimuli presented to Hybrid CI listeners in two different stimulation modes (A-alone and A+E). The results demonstrate that the ACC can be reliably recorded from Hybrid cochlear implant users and is sensitive to differences in stimulation mode. ACC responses measured in the A+E mode were significantly larger in amplitude than those measured in the A-alone condition using spectrally complex stimuli. This finding was not surprising, as the addition of electrical stimulation should allow for perception of a broader range of frequencies, making the change in the stimuli more salient to recipients. The exception was the spectral ripple noise stimulus, which also elicited the smallest amplitude responses. Past research shows that noise-like stimuli typically lead to smaller CAEP amplitudes than tonal stimuli [e.g., He et al., 2012]. However, evidence suggests that better behavioral and electrophysiological responses to spectral ripples are associated with increased spectral resolution [Won et al., 2011]. Thus, we expected the spectral ripple stimuli to lead to larger ACC amplitudes in the A+E condition. It is possible that our choice of ripple density (1 ripple per octave) was too close to the ACC threshold for at least some of the participants [Won et al., 2011], which would be consistent with the small ACC responses observed in both listening conditions in this study.

In addition, speech- and music-like stimuli that shifted from low to high frequency elicited larger ACC amplitudes in the A+E listening mode than in the A-alone listening mode. This result likely reflects the larger contribution of electric excitation during the second half of the acoustic stimulus, which contained higher frequency information than the first half. This finding confirms and extends our previous observation that, in Hybrid CI users, ACC amplitudes were larger for vowel contrasts involving a change from low to high frequency [Brown et al., 2015]. It also agrees with previous studies showing larger ACC N1-P2 amplitudes in response to pure tones of lower base frequency, when compared to higher frequency base tones [Dimitrijevic et al., 2018; McGuire et al., 2021].

The finding of larger ACC responses in the A+E mode in response to the pitch change stimuli was somewhat surprising. Because the fundamental frequencies of both musical notes (262 Hz and 310 Hz) are within the range of frequencies coded acoustically in Hybrid devices, and because access to low frequency information is thought to be the main driver of the advantage of Hybrid stimulation for music perception [e.g., Gfeller et al., 2006], we anticipated that we might not see a significant benefit of adding the electrical stimulation for this stimulus contrast. The fact that we found such a benefit suggests that, even with fewer electrodes to code higher frequency information, the Hybrid CI recipients in this study were able to take advantage of the additional spectral cues in the higher formants provided by the electrical stimulation, as evidenced by their increased cortical responses.

Another interesting finding was the lack of an effect of listening mode on the onset N1-P2 amplitudes or latencies. Onset N1-P2 responses primarily reflect encoding of sound detection, and as such, they are known to be relatively stable within listeners [e.g., He et al., 2012]. Our results suggest that even when sound access is restricted to the low frequencies (as in the A-alone condition), the encoding of sound detection at the level of the auditory cortex is robust and comparable to when there is greater access to spectral information (as in the A+E condition).

Finally, these results also suggest that the ACC response may provide a viable method of objectively assessing the benefit that different listening modes provide to Hybrid CI users. While we did not assess perception of the stimuli used for ACC recordings, results from previous studies suggest that ACC recordings are associated with perception [e.g., Won et al, 2011; Brown et al., 2017]. Therefore, the ACC could theoretically be used to guide clinicians in making recommendations as to optimal listening modes for individual CI recipients, or to confirm subjective reports of benefit (or lack of benefit) from a particular listening mode. Further research into this potential clinical application in a range of Hybrid recipients and/or long electrode users who have residual low frequency hearing is warranted.

One limitation of this study is that we did not control how signals were processed (e.g., amplitude compression, time delays) by the acoustic and electric components of participants' devices. Studies show that changes made by different hearing aid signal processing schemes can affect ACC recordings [e.g., Billings et al., 2011; Kirby & Brown, 2015]. On the other hand, because recipients were using their typical device settings, our recordings were more representative of everyday listening and were not affected by any need to acclimatize to new device settings.

Acknowledgement

The authors would like to thank Brittany E. James for assistance with data collection. We are also grateful to the subjects who generously gave their time to participate in this study. Finally, we acknowledge the adult CI team at the University of Iowa Hospitals and Clinics for scheduling and collecting baseline speech perception data from our subjects.

Funding Sources

This study was funded by a grant from the NIH/NIDCD (P50 DC000242).

Footnotes

Conflict of Interest Statement

The authors have no conflicts of interest to declare.

Statement of Ethics

This study protocol was reviewed and approved by the University of Iowa Institutional Review Board, approval number 200011075. All participants provided written informed consent to participate in this study and were compensated for their time.

Data Availability Statement

The data that support the findings of this study are available in Open Science Framework at http://doi.org/10.17605/OSF.IO/967YV

References

  1. Billings CJ, Tremblay KL, Miller CW. Aided cortical auditory evoked potentials in response to changes in hearing aid gain. Int J Audiol 2011;50:459–467.
  2. Brown CJ, Jeon EK, Chiou LK, Kirby B, Karsten SA, Turner CW, et al. Cortical auditory evoked potentials recorded from Nucleus Hybrid cochlear implant users. Ear Hear 2015;36:723–732.
  3. Brown CJ, Jeon EK, Driscoll V, Mussoi B, Deshpande SB, Gfeller K, et al. Effects of long-term musical training on cortical auditory evoked potentials. Ear Hear 2017;38:e74–e84.
  4. Buchman CA, Gifford RH, Haynes DS, Lenarz T, O'Donoghue G, Adunka A, et al. Unilateral cochlear implants for severe, profound, or moderate sloping to profound bilateral sensorineural hearing loss – A systematic review and consensus statements. JAMA Otolaryngol Head Neck Surg 2020;146:942–953.
  5. Ching TY, Dillon H, Byrne D. Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am 1998;103:1128–1140.
  6. Dimitrijevic A, Michalewski HJ, Zeng F-G, Pratt H, Starr A. Frequency changes in a continuous tone: Auditory cortical potentials. Clin Neurophysiol 2018;119:2111–2124.
  7. Dorman MF, Gifford RH, Spahr AJ, McKarns SA. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiol Neurotol 2008;13:105–112.
  8. Friesen LM, Tremblay KL. Acoustic change complexes recorded in adult cochlear implant listeners. Ear Hear 2006;27:678–685.
  9. Gantz BJ, Hansen M, Dunn CC. Review: Clinical perspective on hearing preservation in cochlear implantation, the University of Iowa Experience. Hear Res 2022;108487 (online ahead of print).
  10. Gfeller KE, Olszewski C, Turner C, Gantz B, Oleson J. Music perception with cochlear implants and residual hearing. Audiol Neurotol 2006;11:12–15.
  11. He S, Grose JH, Buchman CA. Auditory discrimination: The relationship between psychophysical and electrophysiological measures. Int J Audiol 2012;51:771–782.
  12. Hogan CA, Turner CW. High-frequency audibility: Benefits for hearing-impaired listeners. J Acoust Soc Am 1998;104:432–441.
  13. Huddle MG, Goman AM, Kernizan FC, Foley DM, Price C, Frick KD, et al. The economic impact of adult hearing loss – A systematic review. JAMA Otolaryngol Head Neck Surg 2017;143:1040–1048.
  14. Hyde M. The N1 response and its applications. Audiol Neurotol 1997;2:281–307.
  15. Kamm CA, Dirks DD, Bell TS. Speech recognition and the Articulation Index for normal and hearing-impaired listeners. J Acoust Soc Am 1985;77:281–288.
  16. Kirby BJ, Brown CJ. Effects of nonlinear frequency compression on ACC amplitude and listener performance. Ear Hear 2015;36:e261–e270.
  17. Liberman MC, Dodds LW, Pierce S. Afferent and efferent innervation of the cat cochlea: quantitative analysis with light and electron microscopy. J Comp Neurol 1990;301:443–460.
  18. Martin BA, Boothroyd A. Cortical, auditory, evoked potentials in response to changes of spectrum and amplitude. J Acoust Soc Am 2000;107:2155–2161.
  19. McGuire K, Firestone GM, Zhang N, Zhang F. The acoustic change complex in response to frequency changes and its correlation to cochlear implant speech outcomes. Front Hum Neurosci 2021;15:757254.
  20. Moore BC, Glasberg BR, Stone MA. New version of the TEN test with calibration in dB HL. Ear Hear 2004;25:478–487.
  21. Perkins E, Lee J, Manzoor N, O'Malley M, Bennett M, Labadie R, et al. The reality of hearing preservation in cochlear implantation: Who is utilizing EAS? Otol Neurotol 2021;42:832–837.
  22. Schaefer S, Sahwan M, Metryka A, Kluk K, Bruce IA. The benefits of preserving residual hearing following cochlear implantation: a systematic review. Int J Audiol 2021;60(8):561–577.
  23. Spitzer ER, Waltzman SB, Landsberger DM, Friedmann DR. Acceptance and benefits of electro-acoustic stimulation for conventional-length electrode arrays. Audiol Neurotol 2021;26(1):17–26.
  24. Won JH, Clinard CG, Kwon S, Dasika VK, Nie K, Drennan WR, et al. Relationship between behavioral and physiological spectral-ripple discrimination. J Assoc Res Otolaryngol 2011;12:375–393.
  25. Woodson EA, Reiss LA, Turner CW, Gfeller K, Gantz BJ. The Hybrid cochlear implant: a review. Adv Otorhinolaryngol 2010;67:125–134.
  26. Zwolan TA, Basura G. Determining cochlear implant candidacy in adults: Limitations, expansions, and opportunities for improvement. Semin Hear 2021;42:331–341.
