Author manuscript; available in PMC: 2018 Mar 12.
Published in final edited form as: Cochlear Implants Int. 2014 Oct 20;16(3):159–167. doi: 10.1179/1754762814Y.0000000101

Bimodal benefit depends on the performance difference between a cochlear implant and a hearing aid

Yang-Soo Yoon 1, You-Ree Shin 2, Jae-Sook Gho 3, Qian-Jie Fu 4
PMCID: PMC5847325  NIHMSID: NIHMS947758  PMID: 25329752

Abstract

Objectives

The present study characterizes the relationship between bimodal benefit and hearing aid (HA) performance, cochlear implant (CI) performance, and the difference in the performances of the two devices.

Methods

Fourteen adult bimodal listeners participated in the study. Consonant, vowel, and sentence recognition were measured in quiet and in noise (at +5 and +10 dB signal-to-noise ratios (SNRs)) with an HA alone, a CI alone, and with the combined use of an HA and a CI in each listener. Speech and noise were presented directly in front of the listener.

Results

The correlation analyses showed that bimodal benefit was significantly associated with the difference between CI and HA performance for all testing materials, with HA-alone performance in vowel recognition, and with CI-alone performance in sentence recognition. Regression analyses further showed that the independent contribution of the across-ear difference in performance to bimodal benefit was significant, irrespective of testing material or SNR: the smaller the difference, the greater the benefit. In contrast, once the effect of the difference between CI and HA performance was removed from the model, neither HA-alone nor CI-alone performance made a significant independent contribution to predicting bimodal benefit for any testing material or SNR.

Conclusion

The results suggest that bimodal benefit is limited by how effectively the modalities integrate, rather than HA-only or CI-alone performance, and that this integration is facilitated when the performances of the modalities are similar.

Keywords: Hearing aid, Cochlear implant, Bimodal hearing, Electric-acoustic stimulation

1. Introduction

Bimodal hearing produces a significant enhancement in speech understanding in noise (Dorman et al., 2008; Gifford et al., 2007; Mok et al., 2006). In theory, bimodal hearing is effective because speech cues processed acoustically by a hearing aid (HA) in a lower spectral region are integrated with those processed electrically by a cochlear implant (CI) in a higher spectral region. It is generally assumed that the effectiveness of the integration process is directly linked with the degree of residual hearing (Dorman et al., 2008). However, the evidence supporting this assumption is unconvincing. Furthermore, the integration process is highly listener-specific; some bimodal patients receive significant bimodal benefit in speech perception while others receive little or none (Kiefer et al., 2005). In addition, some patients experience bimodal interference, poorer performance with the combined use of a CI and HA than with the better ear alone (Litovsky et al., 2006; Mok et al., 2006). Such mixed results from bimodal fittings suggest that the integration process may be facilitated only when certain conditions are met.

In a recent study of acoustic simulation of bilateral CIs, Yoon et al. (2011a) reported that greater bilateral benefit occurred when the performance of the two ears was similar. Bilateral and unilateral speech recognition performance was measured by presenting consonants, vowels, and sentences in quiet and in noise. In that report, binaural advantage increased as the symmetry in the unilateral performances of the ears increased. This functional relationship was evident across different speech materials and signal-to-noise ratios (SNRs). The results of Yoon et al. (2011a) led to the prediction that speech information processed by a CI and an HA may be integrated in a manner similar to that of bilateral CIs. Thus, in the present study, we tested the hypothesis that bimodal benefit is directly associated with the relative difference in performance between the two modalities as a function of SNR. The results of this study could provide information regarding the underlying mechanism of bimodal hearing. The subsets of data evaluated here were previously published for unrelated analyses (Yoon et al., 2012), which demonstrated that speech information embedded in both low- and high-frequency regions contributes to bimodal benefit.

2. Methods

2.1 Subjects

Fourteen adult bimodal listeners (six females and eight males) participated in the study. Subjects were native speakers of American English between 24 and 84 years of age (mean age = 62 years). All subjects but one (S2) were post-lingually deafened. All subjects used both an HA and a CI full time, and all had previous experience with similar speech perception tests. Subjects were recruited from the House Clinic in Los Angeles, CA, and no specific limitations were placed on their audiograms. Subjects' demographic information, along with unaided and aided pure-tone average thresholds at 0.25, 0.5, 1, and 2 kHz, is shown in Table 1. Seven of the 14 subjects had less than 3 years of experience with their bimodal fittings, four had between 3 and 7 years, and three had more than 10 years. Because all subjects had worn a contralateral HA since the initial activation of their implant, the length of bimodal experience equals years of CI use in all cases. All subjects had at least a year of experience using a CI. All subjects provided informed consent, and all procedures were approved by the Saint Vincent Institutional Review Board.

Table 1.

Subject demographics. Pure-tone average (PTA) threshold (dB HL) was computed over 0.25, 0.5, 1, and 2 kHz. ACE = advanced combination encoder, SPEAK = spectral peak, HiRes = high resolution. For bimodal listening, the presentation level for all subjects was 65 dBA, except for S7 (70 dBA).

ID (age, sex, etiology) PTA (dB HL) in non-implanted ear CI: processor, strategy, years of use
HA: device, years of use
Presentation level (dBA)
S1 (77, F, unknown) Unaided: 92 CI: Freedom, ACE, 11 CI alone: 70
Aided: 85 HA: Phonak Savia, 34 HA alone: 70
S2 (44, M, rubella) Unaided: 101 CI: Freedom, SPEAK, 13 CI alone: 70
Aided: 63 HA: Phonak 313, 43 HA alone: 70
S3 (77, M, unknown) Unaided: 91 CI: 120 Harmony, HiRes, 11 CI alone: 65
Aided: 50 HA: Widex Senso Diva, 11 HA alone: 70
S4 (73, M, unknown) Unaided: 71 CI: Freedom, ACE, 1 CI alone: 65
Aided: 60 HA: Phonex CROS, 36 HA alone: 70
S5 (71, M, otosclerosis) Unaided: 100 CI: Freedom, ACE, 3 CI alone: 65
Aided: 81 HA: Siemens BTE, 35 HA alone: 70
S6 (25, M, meningitis) Unaided: 78 CI: Freedom, ACE, 1.5 CI alone: 65
Aided: 33 HA: Phonak Supero, 6 HA alone: 65
S7 (56, F, ototoxicity) Unaided: 86 CI: C2 Harmony, HiRes, 2 CI alone: 70
Aided: 52 HA: Widex Body, 8 HA alone: 70
S8 (69, M, unknown) Unaided: 81 CI: Freedom, ACE, 3 CI alone: 65
Aided: 34 HA: Unitron 360+, 50 HA alone: 70
S9 (47, M, unknown) Unaided: 85 CI: Freedom, ACE, 2 CI alone: 65
Aided: 43 HA: Siemens Centra, 42 HA alone: 70
S10 (67, F, hereditary) Unaided: 76 CI: 3G Freedom, ACE, 7 CI alone: 65
Aided: 70 HA: Resound Metrix, 37 HA alone: 70
S11 (84, F, hereditary) Unaided: 75 CI: HiRes 90K, HiRes 120, 1 CI alone: 65
Aided: 43 HA: Interton Bionic-bignano, 30 HA alone: 70
S12 (69, F, hereditary) Unaided: 65 CI: HiRes 90K, HiRes 120, 1 CI alone: 70
Aided: 54 HA: Widex Senso Diva, 19 HA alone: 70
S13 (24, F, unknown) Unaided: 95 CI: Freedom, ACE, 4.5 CI alone: 65
Aided: 58 HA: Siemens Traiano SP, 10 HA alone: 70
S14 (71, M, unknown) Unaided: 89 CI: Freedom, ACE, 1.5 CI alone: 65
Aided: 60 HA: Siemens Traiano SP, 50 HA alone: 70

2.2 Stimuli

Consonant, vowel, and sentence recognition were measured in quiet and in noise (speech-shaped steady noise) at +5 and +10 dB SNRs. Sixteen consonants were presented in an /a/-consonant-/a/ context (‘aba’, ‘ada’, ‘aga’, ‘apa’, ‘ata’, ‘aka’, ‘ama’, ‘ana’, ‘afa’, ‘asa’, ‘asha’, ‘ava’, ‘aza’, ‘atha’, ‘acha’, ‘aja’), produced by five male and five female talkers (Shannon et al., 1999). Twelve vowels, including 10 monophthongs (/i/, /I/, /ε/, /æ/, /u/, /ʊ/, /ɑ/, /ʌ/, /ɔ/, /ɝ/) and 2 diphthongs (/əʊ/, /eI/), were presented in an /h/-vowel-/d/ context (‘heed’, ‘hid’, ‘head’, ‘had’, ‘who’d’, ‘hood’, ‘hod’, ‘hud’, ‘hawed’, ‘heard’, ‘hoed’, ‘hayed’), also produced by five male and five female talkers (Hillenbrand et al., 1995). Sentence recognition was measured using Hearing-In-Noise Test (HINT) sentences (Nilsson et al., 1994).
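The noise conditions can be made concrete with a short sketch. The hypothetical Python snippet below scales speech-shaped noise so that the speech-to-noise RMS ratio equals the target SNR before mixing; the function name and calibration details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 20*log10(rms(speech)/rms(scaled noise))
    equals `snr_db`, then mix (an illustrative sketch; the paper does
    not describe its exact calibration routine)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    gain = rms(speech) / (rms(noise) * 10.0 ** (snr_db / 20.0))
    return speech + gain * noise

# The two noise conditions used in the study:
# mixed_5  = mix_at_snr(token, speech_shaped_noise, 5.0)
# mixed_10 = mix_at_snr(token, speech_shaped_noise, 10.0)
```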

2.3 Procedure

Speech recognition was measured for each subject using an HA alone, a CI alone, and the HA and CI in combination. Each subject was seated in a double-walled sound-treated booth (IAC) directly facing a loudspeaker (0° azimuth) placed 1 m away. Subjects were tested using their clinical devices and settings. Stimuli were presented at 65 dBA for bimodal listening for all subjects except S7, for whom the presentation level was 70 dBA. For HA-alone and CI-alone listening, the presentation level was 65 or 70 dBA (see Table 1 for the exact level for each subject); this level, the level at which the subject was most comfortable, was determined with the Cox loudness rating scale (Cox, 2005) in response to consonant stimuli presented in quiet. For HA-alone listening, the CI was turned off. For CI-alone listening, the HA was removed and the ear was plugged. The three listening conditions (HA alone, CI alone, and CI + HA) were evaluated in random order within and across subjects.
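As a rough illustration of the design, the sketch below enumerates the 3 listening conditions × 3 noise levels × 3 materials test matrix with the condition order randomized per subject. All names are hypothetical; the paper states only that condition order was randomized, not how.

```python
import itertools
import random

CONDITIONS = ["HA alone", "CI alone", "CI + HA"]
LEVELS = ["+5 dB SNR", "+10 dB SNR", "quiet"]
MATERIALS = ["consonants", "vowels", "sentences"]

def test_blocks(subject_seed):
    """Return all 27 test blocks for one subject, with the
    listening-condition order randomized per subject."""
    rng = random.Random(subject_seed)
    conditions = CONDITIONS.copy()
    rng.shuffle(conditions)
    return [(cond, level, material)
            for cond in conditions
            for level, material in itertools.product(LEVELS, MATERIALS)]
```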

During testing, a stimulus was randomly selected from the relevant stimulus set on each trial. The subject responded by clicking on 1 of 16 response boxes for consonant recognition or 1 of 12 response boxes for vowel recognition. Each consonant and vowel syllable was presented 20 times (10 talkers × 2 repetitions) at each SNR for each of the three listening conditions. For sentence recognition, subjects were asked to repeat what they heard as accurately as possible. Three sets of 10 sentences (30 sentences in total) were tested for each listening condition at each SNR. No training or trial-by-trial feedback was provided. The complete test protocol took approximately 8 hours per subject.
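Scoring for the closed-set tasks reduces to percent correct over the response log. A minimal sketch follows (a hypothetical helper, not the authors' test software):

```python
def percent_correct(trials):
    """`trials` is a list of (presented, responded) label pairs from one
    closed-set run; returns percent correct over the run."""
    hits = sum(presented == responded for presented, responded in trials)
    return 100.0 * hits / len(trials) if trials else 0.0

# e.g. 16 consonants x 20 presentations = 320 trials
# per listening condition per SNR.
```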

Throughout this article, bimodal benefit refers to the difference between performance with CI + HA and performance with the better ear alone. Since there was no case in which the ‘better ear alone’ was the HA side, bimodal benefit = (CI + HA performance) − (CI-alone performance).
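The definition reduces to a one-liner. In this data set the CI ear was always the better ear, so the max() below always selects the CI score; the function name is an illustrative assumption.

```python
def bimodal_benefit(ci_ha, ci, ha):
    """Bimodal benefit re: the better ear alone (percentage points).
    Here ci >= ha for every subject, so this equals ci_ha - ci."""
    return ci_ha - max(ci, ha)
```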

3. Results

Figure 1 shows individual consonant recognition scores along with the subjects' average scores with the HA alone (white bar), CI alone (gray bar), and CI + HA (black bar) at +5 dB SNR (top panel), +10 dB SNR (middle panel), and in quiet (bottom panel). Average bimodal benefit in consonant recognition was approximately 5 percentage points at each SNR. Bimodal benefit appears to be influenced by HA-alone performance. For example, six subjects (S1, S2, S5, S7, S10, and S13) with relatively poor HA-alone speech recognition (<25% correct on all test materials across SNRs) received little or no bimodal benefit, while three subjects (S6, S8, and S14) with better HA-alone performance (between 25 and 40% correct) received relatively greater benefit. However, the presence of bimodal benefit cannot be fully explained by HA-alone performance. Subject S2 received bimodal benefit even though HA-alone consonant recognition was relatively low (<20%) across SNRs, while subject S4 received little bimodal benefit even though the subject's HA-alone performance was higher than 25% correct. For these subjects, including S8 and S9, bimodal benefit seemed to be influenced by the difference between HA and CI performance.

Figure 1.


Individual performance scores with average for consonant recognition with HA alone (white bar), CI alone (gray bar), and CI + HA (black bar) at +5 dB SNR (top panel), +10 dB SNR (middle panel), and in quiet (bottom panel). No bar is shown if the score is zero.

Figure 2 shows individual vowel recognition scores and the average scores of all subjects. Average bimodal benefit in vowel recognition was approximately 8 percentage points at each SNR. HA-alone performance seems to influence bimodal benefit more in vowel recognition than in consonant recognition. Four subjects (S3, S6, S9, and S14) who achieved more than 25% correct across SNRs with an HA alone received the greatest bimodal benefit, while six subjects (S1, S7, S8, S11, S12, and S13) who achieved less than 25% correct across SNRs with an HA alone received little or no bimodal benefit. As with consonant recognition, however, bimodal benefit in vowel recognition cannot be fully explained by HA-alone performance. S2 and S5 scored less than 25% in noise with an HA alone but received greater bimodal benefit, while S4 and S10 scored higher than 25% with an HA alone and received little or no bimodal benefit. As was seen in consonant recognition, bimodal benefit in vowel recognition is clearly influenced by the difference between the performances of the two devices. Individual sentence scores are presented in Figure 3. The average bimodal benefit was approximately 12 percentage points in noise but less than 5 percentage points in quiet. As was observed in consonant and vowel recognition, bimodal benefit seems to be related to HA-alone performance as well as to the difference in the performances of the two devices.

Figure 2.


Individual performance scores with average for vowel recognition with HA alone (white bar), CI alone (gray bar), and CI + HA (black bar) at +5 dB SNR (top panel), +10 dB SNR (middle panel), and in quiet (bottom panel). No bar is shown if the score is zero.

Figure 3.


Individual performance scores with average for sentence recognition with HA alone (white bar), CI alone (gray bar), and CI + HA (black bar) at +5 dB SNR (top panel), +10 dB SNR (middle panel), and in quiet (bottom panel). No bar is shown if the score is zero.

The goal of the present study was to characterize the relationship between bimodal benefit and HA-alone performance, CI-alone performance, and the difference in performance (CI score–HA score) between ears. To investigate these relationships, Pearson product-moment correlation analyses were performed using SigmaStat® Version 3.1. Assuming medium effects (delta = 1; Cohen, 1988), the sample size (n = 14) should provide adequate power (>80%) to detect significance (two-tailed, alpha = 0.05) in the conditions tested, despite the expected inter-subject variability.
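For readers who want to reproduce this kind of analysis, a minimal sketch of a Pearson correlation with a two-tailed P value follows, using SciPy rather than SigmaStat. The arrays hold dummy values for illustration only, not the study's data.

```python
import numpy as np
from scipy import stats

# Dummy per-subject values (n = 14) for illustration only; NOT the study's data.
benefit     = np.array([ 2,  5,  0,  8,  6, -1,  3, 10,  4,  7,  1,  9,  2,  6], float)
ci_minus_ha = np.array([60, 40, 70, 20, 30, 75, 50, 10, 45, 25, 65, 15, 55, 35], float)

r, p = stats.pearsonr(ci_minus_ha, benefit)  # two-tailed by default
print(f"r = {r:.3f}, P = {p:.3f}")
```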

The results of the correlation analyses are given in Table 2. For consonant recognition, bimodal benefit is significantly negatively correlated with CI score–HA score, as indicated with an asterisk: the smaller the difference between the CI and HA scores, the greater the bimodal benefit. This holds true across SNRs. Note that the benefit is not significantly correlated with HA-alone performance at any SNR or with CI-alone performance in noise, but it is significantly correlated with CI-alone performance in quiet (r = 0.561, P = 0.037). Bimodal benefit in vowel recognition is significantly correlated at all SNRs with both CI score–HA score and HA-alone performance. In contrast, the benefit is independent of CI-alone performance. In sentence recognition, the benefit is significantly correlated with both CI score–HA score and CI-alone performance at all SNRs. However, the benefit is independent of HA-alone performance except at +5 dB SNR (r = 0.537, P = 0.048). Figure 4 illustrates the functional relationship between bimodal benefit and CI score–HA score for each testing material (row) at each SNR (column), along with the corresponding linear regression lines and coefficients. Again, the smaller the difference between HA and CI performance, the greater the bimodal benefit.

Table 2.

Summary of correlation analyses between bimodal benefit and each of three variables (n = 14)

                        +5 dB SNR           +10 dB SNR          Quiet
Variable                r       P           r       P           r       P
Consonant  HA alone     0.397   0.160       0.429   0.126       0.427   0.128
           CI alone    −0.506   0.053      −0.350   0.220       0.561   0.037*
           CI–HA       −0.622   0.010**    −0.517   0.050*     −0.660   0.010**
Vowel      HA alone     0.600   0.023*      0.570   0.034*      0.614   0.020*
           CI alone    −0.276   0.340       0.014   0.962       0.026   0.931
           CI–HA       −0.774   0.001**    −0.830   0.001**    −0.793   0.001**
Sentence   HA alone     0.537   0.048*      0.437   0.118       0.019   0.947
           CI alone    −0.625   0.017*     −0.674   0.001**    −0.647   0.012*
           CI–HA       −0.687   0.001**    −0.654   0.011*     −0.657   0.011*

Significant correlations are indicated with asterisks.

* P < 0.05.

** P < 0.01.

Figure 4.


Scatter plot for bimodal benefit versus difference in performance between a CI and an HA for consonant (first row), vowel (second row), and sentence (third row) at +5 dB SNR (first column), at +10 dB SNR (second column), and in quiet (third column). Linear regression line is given as a solid line for each panel, along with correlation coefficient.

In creating our definition of bimodal benefit (bimodal performance minus CI-alone performance), three variables were considered: HA-alone performance, CI-alone performance, and bimodal performance. These three variables contributed to the results of the correlation analyses above. Multiple regression models were used to quantify the independent contribution of each predictor to bimodal benefit. Since our main interest is in what drives bimodal benefit when an HA is added to a unilateral CI, two predictors were entered into the model: HA-alone performance and CI score–HA score. A multiple regression model was tested for each testing material at each SNR (i.e. nine models). When using multiple predictors, it is possible that the predictors do not operate independently but exhibit multicollinearity, which obscures the influence of the individual predictors. The collinearity tolerance was >0.2 in all nine models, indicating no violation of the assumption that the predictors operated independently.

The summary of the regression analyses, along with coefficients for the variables predicting bimodal benefit (n = 14), is presented in Table 3. In the table, the partial correlation coefficient is the quantity most relevant to our primary interest. The partial correlation for the first predictor (CI score–HA score) represents the relationship between bimodal benefit and that predictor after the variance shared with the second predictor (HA-alone performance) is removed from both. Thus, the partial correlation indicates what the relationship between the benefit and CI score–HA score would be if neither were correlated with HA-alone performance. Table 3 shows that CI score–HA score is a significant predictor of bimodal benefit; the main finding of the correlation analysis, that a smaller difference between CI and HA performance indicates a greater bimodal benefit, remains true. In contrast, HA-alone performance was not a significant independent predictor for any testing material or SNR except one (sentence recognition in quiet) once the variance shared with CI score–HA score was removed.
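The two quantities discussed here can be computed with the standard residual-based construction of the partial correlation plus a collinearity-tolerance check. The sketch below is illustrative only, not the SigmaStat procedure the authors used.

```python
import numpy as np
from scipy import stats

def _residuals(a, b):
    # Residuals of the simple regression a ~ b.
    slope, intercept = np.polyfit(b, a, 1)
    return a - (slope * b + intercept)

def partial_corr(y, x1, x2):
    """Correlation between y and x1 with x2 partialled out of both:
    correlate the residuals of y ~ x2 with the residuals of x1 ~ x2."""
    return stats.pearsonr(_residuals(y, x2), _residuals(x1, x2))

def tolerance(x1, x2):
    """Collinearity tolerance of x1 given x2 (1 - R^2 of x1 ~ x2);
    values above ~0.2 are conventionally taken as acceptable."""
    r, _ = stats.pearsonr(x1, x2)
    return 1.0 - r ** 2

# With per-subject arrays as in the earlier sketch (ha_alone hypothetical):
# pc, p = partial_corr(benefit, ci_minus_ha, ha_alone)
```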

Table 3.

Summary of regression analysis for variables predicting bimodal benefit (n = 14)

                        +5 dB SNR           +10 dB SNR          Quiet
Variable                Beta    pc          Beta    pc          Beta    pc
Consonant  CI–HA       −0.18   −0.52*      −0.16   −0.48*      −0.16   −0.56*
           HA alone     0.03    0.05        0.11    0.16       −0.03   −0.07
           R2                   0.39                0.37                0.44
           F(2, 11)             3.59*               3.08*               4.31*
Vowel      CI–HA       −0.36   −0.64*      −0.38   −0.73**     −0.23   −0.64*
           HA alone     0.09    0.23       −0.01   −0.04        0.01    0.05
           R2                   0.62                0.69                0.63
           F(2, 11)             8.94**             12.15**              9.36**
Sentence   CI–HA       −0.28   −0.64*      −0.21   −0.54*      −0.28   −0.66*
           HA alone     0.01    0.45        0.05   −0.05        0.21   −0.62*
           R2                   0.58                0.43                0.44
           F(2, 11)             7.54*               4.13*               4.21*

pc = partial correlation.

* P < 0.05.

** P < 0.01.

4. Discussion

The present data reveal an important functional relationship between the magnitude of bimodal benefit and the difference in performance across ears: the smaller the difference, the greater the benefit. The difference between the performances of the two modalities is a significant predictor of the benefit, irrespective of testing materials and SNRs. Multiple regression models show that the independent contribution of this difference to bimodal benefit is significant across testing materials and SNRs, whereas that of HA-alone performance is not. These results suggest that bimodal benefit depends more on the interaction between the two modalities than on the acoustic ear alone. Indeed, the integration process in bimodal hearing appears to be facilitated when the speech recognition scores of the two modalities are similar.

Bimodal hearing involves the detection of low-frequency cues by an HA, the transmission of high-frequency cues by a CI, and the integration of these cues. Several factors may affect the detection of low frequency cues in the acoustic ear, including the degree of residual hearing and the appropriate gain provided by an HA. Similarly, many parametric variations may affect the transmission of high-frequency cues in electric hearing, including variability in the site of surviving healthy auditory neurons, the number of electrodes, and frequency-to-electrode mapping.

Three types of integration can occur: integration of similar (redundant) speech information, integration of complementary speech information, or a combination of the two. The present study was not designed to determine which integration mechanism is most responsible for bimodal benefit. In a study of bilateral CI performance using acoustic simulation, Yoon et al. (2011b) demonstrated that a significant binaural benefit occurred when the binaural spectral mismatch, generated by different insertion depths between the ears, was 1 mm or less. This suggests that bilateral benefit depends more on the presence of similar speech information across the ears than on distinctive speech information.

On the other hand, Kong and Braida (2011) argued that bimodal benefit is correlated with the ability to integrate complementary speech cues across the ears. This conclusion was drawn from the observation that, in vowel recognition, the speech information extracted from the CI and the HA by subjects who received a significant bimodal benefit was highly complementary, whereas in consonant recognition, the speech information extracted by subjects who did not receive a significant bimodal benefit was highly redundant.

However, the results of two previous studies (Mok et al., 2006; Yoon et al., 2012) support the idea that a combination of redundant and complementary integration is responsible for bimodal benefit. Both studies showed that adding an HA to a CI enhanced the transmission of acoustic and phonetic speech cues embedded in both low- and high-spectral regions. Thus, it remains unclear whether complementary integration alone, redundant integration alone, or a combination of the two is responsible for bimodal benefit. It should be emphasized that the results of the present study strongly support the idea that integration is the primary mechanism of bimodal hearing. Previous literature (Kong and Braida, 2011) has discussed the possibility that listeners have a strong perceptual bias toward the cues presented by the dominant modality and tend to ignore cues from the other modality rather than integrating available cues from both. However, Figure 4 shows that bimodal interference, a decrement in performance with bimodal hearing compared to performance with the better ear alone, consistently occurred across test materials and SNRs when performance between the two ears differed by more than 30 percentage points, particularly in consonant and vowel recognition. This interference suggests that the auditory system attends to both ears for integration, but that the integration process is hindered by a large difference between HA and CI performance (Litovsky et al., 2006; Mok et al., 2006). If such ‘perceptual bias’ were one of the mechanisms of bimodal hearing, bimodal interference should not occur.

The integration process in bimodal hearing could also be influenced by a variety of other factors: the hearing threshold of the non-implanted ear, the level of auditory function, and the efficiency of the transmission pathways on both sides. However, previous studies have shown that bimodal benefit exists independently of each of these variables. Multiple studies report a non-significant correlation between the hearing threshold of the non-implanted ear and bimodal benefit (Ching et al., 2004; Mok et al., 2010). Previous research has also demonstrated that bimodal benefit occurs independently of auditory functions such as frequency resolution, temporal resolution, and non-linear cochlear function (Gifford et al., 2007). Research has likewise confirmed that listeners' demographic information, such as length of experience with an HA, a CI, or both, is a poor predictor of bimodal benefit (Ching et al., 2006; Cullington and Zeng, 2012). Ching et al. (2006) measured sentence perception with spatial cues and a horizontal-plane localization task in 21 adult and 29 pediatric bimodal users; their correlation analysis showed that the duration of bimodal device use was not predictive of bimodal speech benefit. Based on our reading of the present data (Figs. 1–3), it is unlikely that bimodal hearing experience is a consistent factor in bimodal speech benefit.

It is likely that the variability in bimodal benefit across subjects is related to the difference in performance between the two devices. This implies that the variability reflects differences in subjects' ability to fuse speech information from the two modalities. A similar functional relationship has been documented in studies of bilateral amplification and electric hearing. The benefit of binaural amplification for speech discrimination in noise is generally greater for those with more symmetrical hearing loss (interaural difference <15 dB hearing level (HL)) between 1 and 4 kHz (Firszt et al., 2008). This relationship has also been reported in previous studies that employed larger samples of bilateral CI users (Litovsky et al., 2006; Yoon et al., 2011a).

The present study has two major limitations. The first is that bimodal benefit was assessed without spatial cues. When speech and noise are co-located, the source of bimodal benefit is limited to binaural redundancy (or summation). Measuring binaural redundancy is one way to evaluate how speech signals processed by an HA and a CI are integrated in bimodal hearing. However, a nonspatial listening task does not require subjects to take advantage of head shadow and squelch. More importantly, how head shadow and squelch affect the integration process cannot be evaluated. It would be interesting to see how the patterns of bimodal benefit would differ from the current findings if spatial cues were tested.

The second limitation of this study is the nonoptimal bimodal mapping. All participants were tested using their clinical settings for each device because the study's main purpose was to evaluate bimodal benefit in the everyday use of an HA and CI, not to evaluate the effect of optimal HA or CI fittings on bimodal benefit. However, given that four of the 14 participants (S1, S2, S5, and S10) had aided thresholds poorer than 60 dB HL and presentation levels of 65 or 70 dBA, the spectral gain prescribed for their HAs may have been suboptimal. This possibility is supported by the results of Ching et al. (2001), who showed that 15 of the 16 children studied required 6 dB more gain than prescribed by National Acoustic Laboratories-Revised to balance the loudness of the implanted ear for speech presented at 65 dB sound pressure level (SPL). That result suggests that adjusting the HA to match the loudness of the implanted ear may facilitate the integration of speech information across ears, leading to greater bimodal benefit. To test this possibility, one subject (S4, pure-tone average (PTA) = 60 dB HL) was retested unilaterally and bilaterally on sentence recognition in noise and in quiet at a fixed presentation level of 80 dB SPL. This level provided the subject with approximately 30 dB sensation level between 0.25 and 1 kHz and approximately 15 dB sensation level between 1 and 3 kHz. The results (not shown here) indicated that bimodal benefit was not directly linked to increased audibility: HA-alone performance improved at +10 dB SNR and in quiet, but not at +5 dB SNR. Moreover, bimodal interference (bimodal performance poorer than CI-alone performance) of approximately 30 percentage points occurred, due to clipping in both the CI-alone and CI + HA conditions in noise. This result suggests that optimal bimodal loudness balancing is needed to facilitate bimodal benefit; some degree of bimodal benefit would likely be observed with presentation levels of 80 dB SPL on the HA side and 65 dB SPL on the CI side.

Acknowledgments

We thank our participants for their time and sincere effort. We would like to thank Christopher Hall for his editorial help. We also thank Darren Kadis for his statistical comments. This work was supported by NIH grants R01-DC004993 and R01-DC004792.

References

  1. Ching TY, Incerti P, Hill M. Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear and Hearing. 2004;25(1):9–21. doi: 10.1097/01.AUD.0000111261.84611.C8.
  2. Ching TY, Incerti P, Hill M, van Wanrooy E. An overview of binaural advantages for children and adults who use binaural/bimodal hearing devices. Audiology & Neuro-otology. 2006;11:6–11. doi: 10.1159/000095607.
  3. Ching TY, Psarros C, Hill M, Dillon H, Incerti P. Should children who use cochlear implants wear hearing aids in the opposite ear? Ear and Hearing. 2001;22:365–380. doi: 10.1097/00003446-200110000-00002.
  4. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  5. Cox RM. Using loudness data for hearing aid selection: the IHAFF approach. The Hearing Journal. 2005;48:39–44.
  6. Cullington HE, Zeng FG. Comparison of bimodal and bilateral cochlear implant users on speech recognition with competing talker, music perception, affective prosody discrimination and talker identification. Ear and Hearing. 2012;32(1):16–30. doi: 10.1097/AUD.0b013e3181edfbd2.
  7. Dorman MF, Gifford RH, Spahr AJ, McKarns SA. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiology & Neuro-otology. 2008;13(2):105–112. doi: 10.1159/000111782.
  8. Firszt JB, Reeder RM, Skinner MW. Restoring hearing symmetry with two cochlear implants or one cochlear implant and a contralateral hearing aid. Journal of Rehabilitation Research and Development. 2008;45(5):749–767. doi: 10.1682/jrrd.2007.08.0120.
  9. Gifford RH, Dorman MF, McKarns SA, Spahr AJ. Combined electric and contralateral acoustic hearing: word and sentence recognition with bimodal hearing. Journal of Speech, Language, and Hearing Research. 2007;50(4):835–843. doi: 10.1044/1092-4388(2007/058).
  10. Hillenbrand J, Getty L, Clark M, Wheeler K. Acoustic characteristics of American English vowels. The Journal of the Acoustical Society of America. 1995;97:3099–3111. doi: 10.1121/1.411872.
  11. Kiefer J, Pok M, Adunka O, Stürzebecher E, Baumgartner W, Tillein J, et al. Combined electric and acoustic stimulation of the auditory system: results of a clinical study. Audiology & Neuro-otology. 2005;10(3):134–144. doi: 10.1159/000084023.
  12. Kong Y, Braida LD. Cross-frequency integration for consonant and vowel identification in bimodal hearing. Journal of Speech, Language, and Hearing Research. 2011;54(3):959–980. doi: 10.1044/1092-4388(2010/10-0197).
  13. Litovsky RY, Johnstone PM, Godar S. Benefits of bilateral cochlear implants and/or hearing aids in children. International Journal of Audiology. 2006;45(Suppl 1):S78–S91. doi: 10.1080/14992020600782956.
  14. Mok M, Galvin KL, Dowell RC, McKay CM. Speech perception benefit for children with a cochlear implant and a hearing aid in opposite ears and children with bilateral cochlear implants. Audiology & Neuro-otology. 2010;15(1):44–56. doi: 10.1159/000219487.
  15. Mok M, Grayden D, Dowell RC, Lawrence D. Speech perception for adults who use hearing aids in conjunction with cochlear implants in opposite ears. Journal of Speech, Language, and Hearing Research. 2006;49(2):338–351. doi: 10.1044/1092-4388(2006/027).
  16. Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. The Journal of the Acoustical Society of America. 1994;95:1085–1099. doi: 10.1121/1.408469.
  17. Shannon RV, Jensvold A, Padilla M, Robert M, Wang X. Consonant recordings for speech testing. The Journal of the Acoustical Society of America. 1999;106:L71–L74. doi: 10.1121/1.428150.
  18. Yoon YS, Li YX, Fu QJ. Speech recognition and acoustic features in combined electric and acoustic stimulation. Journal of Speech, Language, and Hearing Research. 2012;55(1):105–124. doi: 10.1044/1092-4388(2011/10-0325).
  19. Yoon YS, Li YX, Kang HY, Fu QJ. The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users. International Journal of Audiology. 2011a;50(8):554–565. doi: 10.3109/14992027.2011.580785.
  20. Yoon YS, Liu A, Fu QJ. Binaural benefit for speech recognition with spectral mismatch across ears in simulated electric hearing. The Journal of the Acoustical Society of America. 2011b;130(2):EL94–EL100. doi: 10.1121/1.3606460.
