Abstract
To compare sentence recognition scores in quiet and noise in 8- to 15-year-old children using bimodal hearing, tested in the CI only condition and in the bimodal (BM) condition (CI + HA). Twenty prelingually deafened participants (8–15 years) using a cochlear implant in one ear and a hearing aid in the other ear were recruited. Sentence recognition was assessed in six conditions: CI Quiet, CI + 15 dB SNR, CI + 8 dB SNR, BM Quiet, BM + 15 dB SNR and BM + 8 dB SNR. The highest sentence recognition scores were obtained in quiet, followed by the + 15 dB SNR condition and then the + 8 dB SNR condition, in both the CI only and BM conditions. Scores in the BM condition were significantly better than in the CI only condition. The unaided and aided PTA of the non-implanted ear correlated significantly with the sentence recognition scores in the BM conditions. This study was conducted on an Indian population, for which no published data were available to date. It is recommended that all school-going children using unilateral cochlear implants be fitted with a hearing aid in the contralateral ear. This practice will give them the binaural benefits of better listening in noise, localization, spatial release from masking and improved pitch perception compared with unilateral CI use. Moreover, it will help keep the auditory nerve viable for future implantation, an important consideration for children who derive limited benefit from the contralateral hearing aid.
Keywords: Bimodal hearing, Cochlear implants, Sentence recognition, Noise
Introduction
The bimodal hearing solution is designed to combine the benefits of a hearing aid (HA) (acoustic hearing) and a cochlear implant (CI) (electric hearing) so that the user can benefit from the “best of both worlds”. Bimodal hearing has shown several advantages over use of a unilateral CI alone. Firstly, the addition of a contralateral HA has been shown to improve localization abilities and the perception of speech, especially in noise [1, 2]. This advantage can be explained in particular by the fact that phonetic information and pitch cues are more accurately preserved in the low-frequency range of the acoustic signal. The additional pitch cues provided by this low-frequency acoustic hearing through the HA have been associated with improved pitch perception [3, 4]. Secondly, the provision of amplification in the non-implanted ear might help prevent possible effects of auditory deprivation in that ear, as a lack of auditory stimulation may lead to deterioration of speech perception abilities in the unaided ear [5, 6].
In a country like India, it is often more economical and feasible to opt for bimodal stimulation than for bilateral cochlear implantation, for reasons such as the financial burden, fear of a second surgery, the wish to await newer technology and the possible complications of a CI in the other ear. Despite the reported benefits, bimodal stimulation is still not used by many Indian CI recipients. This can be attributed to financial constraints, lack of knowledge about the benefits of bimodal stimulation and the parental impression that once their child is implanted there is no need for a hearing aid on the other side.
Children with unilateral implants, especially those of school-going age, need to listen in acoustically challenging environments such as noisy classrooms, outdoor settings and situations with increased distance from the talker, and they must use their listening for learning. Most studies of speech perception in bimodal hearing users have been carried out abroad [1, 2, 4, 7, 8], and limited research is available in India. Current estimates indicate that there are more than 25,000 CI recipients across the country. Since the number of users is large and growing rapidly, such information can be readily applied clinically. Data on Indian recipients of school age can be used directly as clinical evidence supporting regular use of bimodal stimulation. Hence the present study aimed to assess sentence recognition in quiet and noise in the CI only and bimodal conditions.
Aim
To compare sentence recognition scores in quiet and noise in 8- to 15-year-old children using bimodal hearing, in the CI only condition and in the bimodal (BM) condition (CI + HA).
Methods
The protocol for the study was approved by the Ethics Committee of Ali Yavar Jung National Institute of Speech and Hearing Disabilities. All procedures were in strict adherence to the protocol.
Participants
Twenty prelingually deafened children (11 males and 9 females) aged 8–15 years using bimodal stimulation were recruited for the study. Participants used a verbal mode of communication, studied in schools where the primary medium of instruction was English, and used English as their primary language for communication. All participants had used a multichannel cochlear implant in one ear for a minimum of two and a half years and a digital or analog hearing aid in the contralateral ear (with hearing loss ranging from moderate to profound) for a minimum of 2 years. All participants were screened on the Milestones of Early Communication by Rhea Paul (2001) to confirm that expressive and receptive language age was greater than 7 years.
Participants were excluded if they had abnormal otoscopic findings, middle ear infection, any cochlear or auditory nerve anomaly, a non-verbal mode of communication, considerable deficiency in language and communication skills, inability to write English together with poor speech intelligibility, or associated impairments of any type such as intellectual disability, visual impairment, developmental neuro-motor disabilities, attention deficit hyperactivity disorder or pervasive developmental disorders. Biographical data are provided in Table 1.
Table 1.
Biographical data of participants
| Sr. no | Age (Y;m) | Sex | Age at implantation (Y;m) | Duration of implant use (Y;m) | Duration of HA use (Y;m) | Speech processor used | Unaided PTA (500, 1, 2 kHz) (dB HL) | Aided PTA (500, 1, 2 kHz) (dB HL) | Unaided PTA (250, 500, 1 kHz) (dB HL) | Aided PTA (250, 500, 1 kHz) (dB HL) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 12;3 | F | 2;7 | 9;7 | 9;7 | SPrint | 103 | 45 | 93 | 33 |
| 2 | 13;3 | F | 9;8 | 3;7 | 2;0 | CP810 | 98 | 47 | 90 | 42 |
| 3 | 11;2 | M | 4;11 | 6;3 | 6;0 | Freedom | 78 | 48 | 55 | 40 |
| 4 | 9;11 | M | 7;6 | 2;6 | 2;6 | CP810 | 80 | 35 | 59 | 25 |
| 5 | 13;11 | M | 5;6 | 8;6 | 8;6 | CP810 | 98 | 40 | 85 | 35 |
| 6 | 10;1 | F | 2;0 | 8;1 | 8;0 | CP810 | 93 | 75 | 80 | 40 |
| 7 | 11;10 | F | 1;6 | 10;1 | 10;0 | CP810 | 73 | 23 | 52 | 13 |
| 8 | 8;4 | M | 4;5 | 3;1 | 3;11 | CP810 | 80 | 23 | 67 | 18 |
| 9 | 9;0 | M | 4;5 | 3;7 | 2;0 | SPrint | 97 | 42 | 80 | 30 |
| 10 | 9;6 | M | 2;9 | 7;9 | 4;6 | CP810 | 106 | 78 | 88 | 63 |
| 11 | 11;0 | M | 3;9 | 7;3 | 3;0 | SPrint | 112 | 75 | 103 | 70 |
| 12 | 8;5 | F | 3;4 | 5;1 | 2;0 | SPrint | 113 | 48 | 98 | 37 |
| 13 | 9;4 | F | 5;5 | 3;1 | 3;11 | SPrint | 100 | 47 | 97 | 47 |
| 14 | 9;3 | M | 3;2 | 6;1 | 6;0 | SPrint | 107 | 58 | 92 | 47 |
| 15 | 12;5 | F | 5;1 | 7;4 | 7;3 | Freedom | 98 | 33 | 87 | 20 |
| 16 | 11;7 | M | 4;3 | 7;4 | 7;3 | CP810 | 78 | 38 | 60 | 27 |
| 17 | 10;3 | F | 3;6 | 6;9 | 2;0 | CP810 | 112 | 85 | 98 | 68 |
| 18 | 9;4 | M | 5;3 | 4;1 | 4;0 | SPrint | 108 | 52 | 95 | 40 |
| 19 | 14;3 | M | 6;2 | 8;1 | 6;0 | SPrint | 102 | 82 | 98 | 70 |
| 20 | 14;4 | F | 4;8 | 9;8 | 7;0 | CP810 | 111 | 46 | 96 | 37 |
| M | 11;0 | | 4;6 | 6;6 | 5;3 | | 95.5 | 51 | 82.05 | 40.1 |
M = Mean, Y = years, m = months
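PTA in the last four columns denotes the pure-tone average: the arithmetic mean of the audiometric thresholds at the frequencies given in the column heading. For the low-frequency average used later to quantify access to low-frequency information, for instance,

$$\mathrm{PTA}_{250,\,500,\,1000} = \frac{T_{250} + T_{500} + T_{1000}}{3}\ \text{dB HL},$$

where $T_f$ is the (unaided or aided, as appropriate) threshold at frequency $f$ Hz.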
Stimulus Material and Noise
The test material consisted of the English sentences developed by Kumar [9], which are revised BKB (Bamford–Kowal–Bench) sentence lists of uniform sentence length using vocabulary familiar to the Indian population. The material comprises six lists of 10 sentences each. The competing stimulus was the four-talker babble developed by Punnoose et al. [10].
Procedure
The participants were briefed about the test procedure. Written consent and assent for participation in the study were obtained from the parent and the child, respectively, before testing started. Tympanometry (GSI 38 middle ear analyzer) was performed with a 226 Hz probe tone to ascertain normal middle ear status. A calibrated dual-channel diagnostic audiometer (GSI-61) was used for both pure tone audiometry and sound field testing. The test material was played from a laptop (Sony Vaio VPCCB15FG) connected to the external A and external B inputs of the GSI-61 audiometer using a stereo cable.
A standard two-room sound-treated audiometric test setup was used for the study. The presentation level for the stimulus material was 60 dB HL. The tracks were presented through one loudspeaker placed at 0° azimuth at a distance of 1 m from the participant. The six lists were randomly assigned to the six listening conditions: CI Quiet, CI + 15 dB SNR, CI + 8 dB SNR, BM Quiet, BM + 15 dB SNR and BM + 8 dB SNR.
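The article does not describe exactly how the babble level was set relative to the speech to realize the + 15 and + 8 dB SNR conditions (this can be done on the audiometer channels or digitally). Purely as an illustration of the general principle, a minimal NumPy sketch of scaling babble to a target SNR based on RMS levels (hypothetical function names; not the study's calibration procedure) might look like this:

```python
import numpy as np

def _rms(x: np.ndarray) -> float:
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(x ** 2)))

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the babble so the speech-to-babble RMS ratio equals the target SNR,
    then add it to the speech. Illustration only; assumes both signals are mono
    and share the same sampling rate."""
    n = min(len(speech), len(babble))
    speech, babble = speech[:n].astype(float), babble[:n].astype(float)
    gain = _rms(speech) / (_rms(babble) * 10 ** (snr_db / 20.0))
    return speech + gain * babble

# e.g. mixed = mix_at_snr(sentence_track, four_talker_babble, snr_db=8.0)
```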
Each participant was first presented a practice list and was asked to repeat each sentence heard and also to write it down on the form provided. If the participant's repetition was intelligible, the remainder of the testing used repetition as the response mode; otherwise, the written mode of response was employed.
Scoring
Each list consists of 10 sentences with a total of 30 key words. A score of 1 was given for each correctly identified key word, 1/2 for a phonetically equivalent form, and 0 for an incorrect response, giving a maximum obtainable score of 30 per list.
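Scoring in the study was done by hand on the response forms; purely as an illustration of the rule above, a minimal sketch (hypothetical labels and function, not the authors' procedure) is:

```python
from typing import Iterable

# Per-key-word judgements: "correct", "phonetic" (phonetically equivalent form) or "incorrect".
KEY_WORD_SCORES = {"correct": 1.0, "phonetic": 0.5, "incorrect": 0.0}

def list_score(judgements: Iterable[str]) -> float:
    """Sum the key-word scores for one list of 30 key words (maximum score 30)."""
    return sum(KEY_WORD_SCORES[j] for j in judgements)

# Example: 25 correct, 3 phonetically equivalent and 2 incorrect key words -> 26.5 out of 30
print(list_score(["correct"] * 25 + ["phonetic"] * 3 + ["incorrect"] * 2))
```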
Statistics
The study used a single-sample, repeated-measures, quasi-experimental design. The dependent variable was the sentence recognition score, while the independent variables were the listening condition (quiet and two SNRs) and the hearing device used (CI only and CI + HA). Repeated-measures ANOVA was used to test the statistical significance of differences between sentence recognition scores in quiet and in noise in children using bimodal hearing. Pearson's correlation was applied to examine the relationships between the scores and participant variables.
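The paper does not state which statistical software was used, and it reports pairwise F(1,19) contrasts rather than a single factorial model. Purely as an illustration of how such an analysis could be set up, the sketch below runs a two-factor (device × listening condition) repeated-measures ANOVA and a Pearson correlation with statsmodels and SciPy on synthetic stand-in data; all column names and values are invented for the example.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

# Synthetic stand-in data: one row per participant x device x listening condition (values invented).
rng = np.random.default_rng(1)
conditions = {"Quiet": 22, "+15 dB SNR": 15, "+8 dB SNR": 9}
rows = [{"subject": s, "device": dev, "condition": cond,
         "score": float(np.clip(rng.normal(mu + (2 if dev == "BM" else 0), 4), 0, 30))}
        for s in range(1, 21) for dev in ("CI", "BM") for cond, mu in conditions.items()]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with device (CI vs BM) and listening condition as within-subject factors.
print(AnovaRM(df, depvar="score", subject="subject", within=["device", "condition"]).fit())

# Pearson correlation between bimodal +8 dB SNR scores and (invented) non-implanted-ear PTA values.
bm8 = df[(df.device == "BM") & (df.condition == "+8 dB SNR")].sort_values("subject")
unaided_pta = rng.uniform(55, 115, size=len(bm8))  # placeholder PTA values in dB HL
print(pearsonr(bm8["score"].to_numpy(), unaided_pta))
```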
Results
Performance with the implant alone as a function of listening condition is shown in Table 2. The highest sentence recognition score was obtained in quiet (mean = 21.725), followed by the + 15 dB SNR condition (mean = 13.225) and the + 8 dB SNR condition (mean = 7.375). Statistically significant differences were seen for quiet versus + 15 dB SNR [F(1,19) = 81.469, p < .001], quiet versus + 8 dB SNR [F(1,19) = 282.442, p < .001] and + 15 dB SNR versus + 8 dB SNR [F(1,19) = 52.427, p < .001]. None of the participants obtained the maximum score of 30, even in the best listening condition, i.e. quiet.
Table 2.
Mean scores obtained in three listening conditions during CI only and bimodal stimulation
| Device | Listening condition | Mean score | Range of scores | SD |
|---|---|---|---|---|
| CI only | Quiet | 21.725 | 12–29 | 4.2565 |
| | + 15 dB SNR | 13.225 | 5–26 | 5.8635 |
| | + 8 dB SNR | 7.375 | 0.5–17 | 4.3008 |
| Bimodal stimulation | Quiet | 23.825 | 17–28 | 2.5817 |
| | + 15 dB SNR | 18.400 | 12–28 | 4.0282 |
| | + 8 dB SNR | 12.75 | 5–20 | 4.529 |
The sentence recognition scores in the bimodal stimulation (BM) condition as a function of listening condition (quiet, + 15 dB SNR, + 8 dB SNR) were obtained by averaging the scores of the 20 participants. The highest score was obtained in quiet (mean = 23.825), followed by the + 15 dB SNR condition (mean = 18.400) and the + 8 dB SNR condition (mean = 12.75), as shown in Table 2. This trend is the same as in the CI only condition, and again none of the participants obtained the maximum score of 30, even in the best listening condition, i.e. quiet. Statistically significant differences were seen for quiet versus + 15 dB SNR [F(1,19) = 81.551, p < .001], quiet versus + 8 dB SNR [F(1,19) = 170.643, p < .001] and + 15 dB SNR versus + 8 dB SNR [F(1,19) = 76.269, p < .001].
A comparison was then made between the CI only and BM conditions. The difference between the sentence recognition scores in the BM condition and the CI only condition was largest at + 8 dB SNR (5.375), followed by + 15 dB SNR (5.175), and smallest in quiet (2.1). The BM scores were significantly higher than the CI only scores in quiet [F(1,19) = 4.366, p = .050], at + 15 dB SNR [F(1,19) = 32.551, p < .001] and at + 8 dB SNR [F(1,19) = 47.029, p < .001].
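These differences follow directly from the Table 2 means (BM minus CI):

$$23.825 - 21.725 = 2.1 \;(\text{quiet}),\qquad 18.400 - 13.225 = 5.175 \;(+15\ \text{dB SNR}),\qquad 12.750 - 7.375 = 5.375 \;(+8\ \text{dB SNR}).$$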
To assess whether the sentence recognition scores in the six listening conditions correlated with any of the co-variables (age, age at implantation, duration of implant use, duration of hearing aid use, unaided PTA of the non-implanted ear and aided PTA of the non-implanted ear), Pearson's correlation was applied. The results are displayed in Table 3, which shows that significantly better sentence recognition scores were obtained with lower unaided PTA at BM + 15 dB SNR (p = .049) as well as BM + 8 dB SNR (p = .011). Also, participants with lower aided PTA scored significantly higher in BM quiet (p = .039) as well as BM + 15 dB SNR (p = .048).
Table 3.
Correlation between the different listening conditions across the CI only and BM condition with the different co-variables
| Parameter | Statistic | CI quiet | CI + 15 dB SNR | CI + 8 dB SNR | BM quiet | BM + 15 dB SNR | BM + 8 dB SNR |
|---|---|---|---|---|---|---|---|
| Age | Pearson correlation | .204 | .221 | .300 | .312 | .120 | −.018 |
| | Sig. (2-tailed) | .389 | .348 | .198 | .180 | .614 | .939 |
| Age at implantation | Pearson correlation | −.117 | −.085 | −.082 | −.194 | −.204 | −.248 |
| | Sig. (2-tailed) | .624 | .720 | .732 | .412 | .388 | .292 |
| Duration of implant use | Pearson correlation | .301 | .240 | .334 | .406 | .206 | .179 |
| | Sig. (2-tailed) | .197 | .307 | .150 | .076 | .383 | .451 |
| Duration of HA use | Pearson correlation | .152 | .151 | .199 | .276 | .283 | .268 |
| | Sig. (2-tailed) | .523 | .526 | .401 | .241 | .227 | .253 |
| Non-implanted ear unaided PTA | Pearson correlation | −.197 | −.332 | −.327 | −.256 | −.446* | −.555* |
| | Sig. (2-tailed) | .406 | .153 | .159 | .276 | .049 | .011 |
| Non-implanted ear aided PTA | Pearson correlation | .145 | −.022 | .080 | −.465* | −.447* | −.265 |
| | Sig. (2-tailed) | .541 | .926 | .738 | .039 | .048 | .259 |
*Correlation is significant at the .05 level (2-tailed)
Discussion
Results of the present study indicate that, for both the CI only and BM conditions, the sentence recognition scores in quiet were greater than those obtained at + 15 dB SNR, which in turn were greater than those at + 8 dB SNR. This is as expected: in general, speech perception ability is highest at favorable SNRs and decreases as the SNR is reduced [11–14]. Finitzo-Hieber and Tillman [12] studied speech recognition scores using monosyllabic words in children with normal hearing and children with sensorineural hearing loss at + 12 dB SNR, + 6 dB SNR, 0 dB SNR and in quiet. Children with sensorineural hearing loss obtained a score of 83% in quiet, 70% at + 12 dB SNR, 59.5% at + 6 dB SNR and 39% at 0 dB SNR with a reverberation time of 0 s. Similar results were obtained when sentence recognition scores were measured in multi-talker babble [15]. Fetterman and Domico [16] evaluated ninety-six CI users with the City University of New York (CUNY) sentences presented at 70 dB in quiet and at signal-to-noise ratios (SNRs) of + 10 and + 5 dB. A similar trend was seen, with 88% of words correct in quiet, 73% correct at an SNR of + 10 dB, and 47% correct at an SNR of + 5 dB. These studies support the effect of different SNRs in both the CI only and BM conditions as found in the present study. A possible explanation for the poorer scores at lower SNRs is the greater masking of the target speech by the competing stimulus.
In the present study, the sentence recognition scores obtained in the BM condition were greater than those obtained in the CI only condition for all three listening conditions. Dorman et al. [4] assessed monosyllabic word and sentence recognition scores in quiet and at + 10 and + 5 dB SNR in 11 children with CI in four conditions (pre-implant, HA only, CI only and BM). They found that BM performance was significantly higher than CI only performance for monosyllabic word recognition and for sentence recognition at + 10 dB SNR; in quiet and at + 5 dB SNR, the mean BM scores were nonetheless 14–15 percentage points higher than the mean CI only scores. Luntz et al. [17] assessed the benefit of using an HA in the contralateral ear in 12 unilateral CI users. On a sentence recognition task in background noise, the mean score was 34.9% for CI only versus 41.1% for the BM condition in the first session, and 60.6% with the CI alone versus 75.5% with both devices in the second session. Similar results were reported by Potts et al. [18] and Hamzavi et al. [19]. From these studies it is clear that BM use leads to improved perception of speech, especially in noise, and the current findings are in accordance with this literature. The improvement is most likely due to the improved perception of low-frequency sounds from the hearing aid along with the binaural benefits obtained by the combined use of the contralateral hearing aid and the CI. As can be seen from Table 1, all 20 participants in the current study had aided access to low-frequency information in the non-implanted ear (mean aided PTA of 40.1 dB HL for 250, 500 and 1 kHz). Hence the significantly greater scores obtained in the BM condition can be attributed to this access to low frequencies.
In the present study, correlations were computed between the sentence recognition scores in the CI only and BM conditions and various co-variables (age of the child, age at implantation, duration of CI use, duration of HA use, non-implanted ear unaided PTA and non-implanted ear aided PTA) to assess their effect on the sentence recognition scores. A statistically significant moderate correlation was seen between the unaided PTA of the non-implanted ear and the scores at BM + 15 dB SNR as well as at BM + 8 dB SNR. A statistically significant correlation was also seen between the aided PTA of the non-implanted ear and the scores in BM quiet as well as at BM + 15 dB SNR. In all four instances the correlation was negative, indicating that as the unaided and aided PTA values decreased (i.e., thresholds improved), the children's sentence recognition scores increased.
In a study by Yoon et al. [20], ten bimodal users between the ages of 25 and 77 years were assessed for speech recognition in the CI alone, HA alone, and CI + HA conditions. The participants were separated into two groups on the basis of the aided pure-tone average (PTA) at audiometric frequencies ≤ 1 kHz with the HA: good (aided PTA < 55 dB HL) and poor (aided PTA ≥ 55 dB HL). The good aided PTA group derived a clear bimodal benefit for vowel and sentence recognition in noise, while the poor aided PTA group received little benefit across speech tests and SNRs. The results also showed that a better aided PTA helped in processing cues embedded in both low and high frequencies; none of these cues were significantly perceived by the poor aided PTA group. Potts et al. [18] measured speech recognition in 19 adult Cochlear Nucleus 24 implant recipients to determine the variables moderating speech recognition performance; the significant moderators were the unaided hearing thresholds measured under headphones and the aided sound-field thresholds with the HA. Ching [21] analyzed data from three NAL studies on bimodal users, combining hearing thresholds and binaural speech perception in noise (with speech and noise from 0° azimuth) from two studies of children and one study of adults, and concluded that people with less hearing loss at 500 Hz in the non-implanted ear derived greater binaural speech-perception benefits than those with more severe hearing loss. In the current study, the unaided and aided PTA of the non-implanted ear were calculated using the frequencies 250, 500 and 1 kHz to ascertain how much access to low-frequency information the participants had. It is clear from the above studies that low-frequency information below 1 kHz delivered by the contralateral hearing aid contributes to the benefits obtained with bimodal stimulation; the correlations obtained in the current study between the unaided and aided PTA and the BM scores can be explained on this basis.
In the current study, the co-variables related to demographic details (age, age at implantation, duration of CI use and duration of HA use) did not have a significant influence on the sentence recognition scores. These findings are supported by the above-mentioned studies. The literature also indicates that the unaided and aided PTA of the non-implanted ear have a significant influence on speech recognition scores, and similar findings were seen in the current study. However, this influence was seen in only two of the three listening conditions for both the unaided and the aided PTA. A possible explanation is the small sample size and the inherent heterogeneity within the group of BM users; a larger, more homogeneous sample may help derive a more precise pattern of influence across these co-variables.
Conclusions
The sentence recognition scores of school-aged children were highest in quiet, followed by + 15 dB SNR, and lowest at + 8 dB SNR in both the CI only and BM listening conditions. Participants performed significantly better in the BM condition than in the CI only condition for all listening conditions studied here, and the benefit of bimodal stimulation was larger in noise than in quiet. This study was conducted on an Indian population, for which no published data were available to date, and it confirms the findings of several earlier studies reported from western countries. Thus, all school-going children using unilateral cochlear implants should be recommended to use a hearing aid in the contralateral ear, even when the perceived benefit is limited. This practice will help them to receive the binaural benefits of better listening in noise, localization, spatial release from masking and pitch perception in comparison to unilateral CI use. Moreover, it will help keep the auditory nerve viable for future implantation, which is an important implication for children who have limited benefit from the contralateral hearing aid.
References
- 1. Tyler RS, Parkinson AJ, Wilson BS, Witt S, Preece JP, Noble W. Patients utilizing a hearing aid and a cochlear implant: speech perception and localization. Ear Hear. 2002;23(2):98–105. doi: 10.1097/00003446-200204000-00003.
- 2. Kong YY, Stickney GS, Zeng FG. Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am. 2005;117(3):1351–1361. doi: 10.1121/1.1857526.
- 3. McDermott HJ, Sucher CM. Perceptual dissimilarities among acoustic stimuli and ipsilateral electric stimuli. Hear Res. 2006;218(1–2):81–88. doi: 10.1016/j.heares.2006.05.002.
- 4. Dorman MF, Gifford RH, Spahr AJ, McKarns SA. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiol Neurotol. 2008;13(2):105–112. doi: 10.1159/000111782.
- 5. Silman S, Gelfand SA, Silverman CA. Late-onset auditory deprivation: effects of monaural versus binaural hearing aids. J Acoust Soc Am. 1984;76(5):1357–1362. doi: 10.1121/1.391451.
- 6. Silverman CA, Silman S, Emmer MB, Schoepflin JR, Lutolf JJ. Auditory deprivation in adults with asymmetric, sensorineural hearing impairment. J Am Acad Audiol. 2006;17(10):747–762. doi: 10.3766/jaaa.17.10.6.
- 7. Ching TY, Incerti P, Hill M, van Wanrooy E. An overview of binaural advantages for children and adults who use binaural/bimodal hearing devices. Audiol Neurotol. 2006;11(Suppl. 1):6–11. doi: 10.1159/000095607.
- 8. Cullington HE, Zeng FG. Bimodal hearing benefit for speech recognition with competing voice in cochlear implant subject with normal hearing in contralateral ear. Ear Hear. 2010;31(1):70. doi: 10.1097/AUD.0b013e3181bc7722.
- 9. Kumar A (2004) Development of hearing in noise test in Indian English. Dissertation, Bangalore University.
- 10. Punnoose MM, Arya R, Nandurkar AN. Speech perception in noise among children with learning disabilities. Int J Commun Health Med Res. 2017;3(1):24–31.
- 11. Crum D (1974) The effects of noise, reverberation, and speaker to listener distance on speech understanding. Dissertation, Northwestern University.
- 12. Finitzo-Hieber T, Tillman TW. Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. J Speech Lang Hear Res. 1978;21(3):440. doi: 10.1044/jshr.2103.440.
- 13. Nabelek AK, Pickett JM. Monaural and binaural speech perception through hearing aids under noise and reverberation with normal and hearing-impaired listeners. J Speech Lang Hear Res. 1974;17(4):724. doi: 10.1044/jshr.1704.724.
- 14. Nabelek AK, Pickett JM. Reception of consonants in a classroom as affected by monaural and binaural listening, noise, reverberation, and hearing aids. J Acoust Soc Am. 1974;56:628. doi: 10.1121/1.1903301.
- 15. Crandell CC, Smaldino JJ, Flexer C (1995) Sound-field FM amplification: theory and practical applications. Thomson Learning, Independence, KY.
- 16. Fetterman BL, Domico EH. Speech recognition in background noise of cochlear implant patients. Otolaryngol Head Neck Surg. 2002;126(3):257–263. doi: 10.1067/mhn.2002.123044.
- 17. Luntz M, Shpak T, Weiss H. Binaural-bimodal hearing: concomitant use of a unilateral cochlear implant and a contralateral hearing aid. Acta Otolaryngol. 2005;125(8):863–869. doi: 10.1080/00016480510035395.
- 18. Potts LG, Skinner MW, Litovsky RA, Strube MJ, Kuk F. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the non-implanted ear (bimodal hearing). J Am Acad Audiol. 2009;20(6):353. doi: 10.3766/jaaa.20.6.4.
- 19. Hamzavi J, Marcel PS, Gstoettner W, Baumgartner WD. Speech perception with a cochlear implant used in conjunction with a hearing aid in the opposite ear. Int J Audiol. 2004;43(2):61–65. doi: 10.1080/14992020400050010.
- 20. Yoon YS, Li Y, Fu QJ. Speech recognition and acoustic features in combined electric and acoustic stimulation. J Speech Lang Hear Res. 2012;55(1):105. doi: 10.1044/1092-4388(2011/10-0325).
- 21. Ching TY. The evidence calls for making binaural-bimodal fittings routine. Hear J. 2005;58(11):32–34. doi: 10.1097/01.HJ.0000286404.64930.a8.
