Otol Neurotol. 2018 Mar 13;39(4):417–421. doi: 10.1097/MAO.0000000000001756

Right Ear Advantage of Speech Audiometry in Single-sided Deafness

Vincent G Wettstein 1, Rudolf Probst 1
PMCID: PMC5882291  PMID: 29533329

Abstract

Background:

Postlingual single-sided deafness (SSD) is defined as normal hearing in one ear and severely impaired hearing in the other ear. A right ear advantage and dominance of the left hemisphere are well established findings in individuals with normal hearing and speech processing. Therefore, it seems plausible that a right ear advantage would exist in patients with SSD.

Methods:

The audiometric database was searched to identify patients with SSD. Results from the German Freiburg monosyllabic word test and the four-syllable number test in quiet were evaluated. Results of patients with right-sided SSD were compared with those of patients with left-sided SSD. Statistical calculations were done with the Mann–Whitney U test.

Results:

Four hundred and six patients with SSD were identified, 182 with right-sided and 224 with left-sided SSD. The two groups had similar pure-tone thresholds without significant differences. All test parameters of speech audiometry had better values for right ears (SSD left) than for left ears (SSD right). Statistically significant differences (p < 0.05) were found for a weighted score (social index, 98.2 ± 4% right and 97.5 ± 4.7% left, p = 0.026), for word understanding at 60 dB SPL (95.2 ± 8.7% right and 93.9 ± 9.1% left, p = 0.035), and for the level at which 100% understanding was reached on the performance-level function (61.5 ± 10.1 dB SPL right and 63.8 ± 11.1 dB SPL left, p = 0.022).

Conclusion:

A right ear advantage of speech audiometry was found in patients with SSD in this retrospective study of audiometric test results.

Keywords: Hearing loss, Right ear advantage, Side difference


Left-hemispheric dominance for speech processing has been known since the second half of the 19th century. Early anatomical observations as well as neurophysiological studies identified dominant left-hemispheric areas for speech perception and production, notably the inferior frontal gyrus, called Broca's area, described in 1861, and the posterior part of the superior temporal gyrus, called Wernicke's area, described in 1874. More recently, the classical model of motor and sensory speech centers has expanded into a larger and more complex system that includes frontal, temporal, and parietal language areas (1). Correspondingly, a right ear advantage for speech processing has been identified and is widely accepted as a reflection of the left-hemisphere dominance (2). Ascending auditory projections pass through the brainstem and end in the primary auditory cortex of the ipsi- and contralateral hemispheres, with a predominant representation on the side opposite to the originating ear (3). Right ear input is therefore transferred directly to the left-hemispheric areas of speech perception, whereas stimuli to the left ear must first be transferred from the right hemisphere to the left side through the corpus callosum (3). Right ear dominance has also been found in otoacoustic emissions and auditory brainstem responses (4), and a right ear advantage has been identified in patients with bilaterally impaired hearing and unilateral cochlear implants (CI) (5–7).

Against this background, it seems possible that a right ear advantage would also exist for patients with single-sided deafness (SSD). Postlingual SSD is defined as normal hearing in one ear and severely impaired hearing in the other ear. The population of individuals with SSD has gained increasing interest in light of the ongoing successful provision of unilateral hearing devices, such as bone-anchored devices or cochlear implants (8–10).

Several publications have shown a disadvantage in verbal development and in selected speech tests for children with congenital or prelingually acquired right-sided SSD compared with their left-sided SSD counterparts (11,12). However, little is known about side differences in speech processing and understanding of spoken language in patients with late-acquired SSD. Therefore, our goal was to evaluate routine speech audiometric test performance of patients with right- versus left-sided SSD.

METHODS

The digital audiometric database of the Department of Otorhinolaryngology of the University Hospital Zurich was searched for patients with SSD. The database begins with records from 1953 and contains around 200,000 pure-tone audiograms. We retrospectively reviewed all audiograms entered up to December 2014. The following criteria were used for SSD: pure-tone audiogram (PTA) air-conduction thresholds of 20 dB HL or better at 0.5, 1, and 2 kHz and of 25 dB HL or better at 4 kHz for the healthy ear, and masked thresholds of 75 dB HL or poorer at these frequencies for the impaired ear. Speech audiometry (SA) had to be performed on the same day as pure-tone testing.

The standard SA testing included the German Freiburg test, which consists of two parts. The first part is a number recognition test, in which sets of 10 two-digit, four-syllable numbers are repeated by the patient. The second part is a word recognition test, in which lists of 20 monosyllabic words have to be repeated by the patient. Scores are the percentage of correct responses at different presentation levels, which increase in 10 dB SPL steps until 100% is reached; these scores are used to construct a performance-level function. The speech recognition threshold (SRT) is defined as the point at which this function crosses 50% correct. A weighted score, the social index (SI), is calculated as the average of the percentages of speech understanding at 60, 75, and 90 dB SPL. Because no measurement is carried out at 75 dB SPL, the value at this level is interpolated from the performances at 70 and 80 dB SPL. The SI is always calculated because it is the basis for determining social insurance payments for hearing loss treatment in Switzerland. An SI of 100% represents no impairment, and 0% is equivalent to complete functional hearing loss.
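For illustration, the following minimal Python sketch shows how the SI, the SRT, and the lowest level of 100% understanding can be derived from a measured performance-level function. It is not part of the study's clinical software; the function names and the example scores are hypothetical.

```python
# Illustrative sketch only (not the authors' software). Assumes word scores
# (% correct) were measured in 10 dB SPL steps; all names and data are made up.

def social_index(scores):
    """SI: mean of % correct at 60, 75, and 90 dB SPL.
    75 dB SPL is not measured; it is interpolated from 70 and 80 dB SPL."""
    s75 = (scores[70] + scores[80]) / 2.0
    return (scores[60] + s75 + scores[90]) / 3.0

def srt(scores):
    """SRT: level where the performance-level function crosses 50% correct
    (linear interpolation between adjacent measured levels)."""
    levels = sorted(scores)
    for lo, hi in zip(levels, levels[1:]):
        if scores[lo] < 50.0 <= scores[hi]:
            frac = (50.0 - scores[lo]) / (scores[hi] - scores[lo])
            return lo + frac * (hi - lo)
    return None  # 50% not crossed within the measured range

def level_for_full_understanding(scores):
    """Lowest presentation level at which 100% word understanding is reached."""
    return min((lvl for lvl, pct in scores.items() if pct >= 100.0), default=None)

# Hypothetical patient: % correct words at each presentation level (dB SPL).
example = {40: 45.0, 50: 80.0, 60: 95.0, 70: 100.0, 80: 100.0, 90: 100.0}
print(social_index(example))                   # (95 + 100 + 100) / 3 ≈ 98.3
print(srt(example))                            # ≈ 41.4 dB SPL
print(level_for_full_understanding(example))   # 70 dB SPL
```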

Children under the age of 10 years were excluded. In patients with multiple data sets, the earliest set fulfilling the inclusion criteria was used. Data collected before digital record-keeping became standard had previously been entered into the database, giving access to records back to 1953. Apart from age and sex, no further patient data were collected. The study was approved by the local ethics commission (KEK-ZH-Nr. 2014-0075). Data analysis was performed with SPSS (IBM SPSS Statistics for Windows, Version 22.0; IBM Corp, Armonk, NY) and GraphPad (GraphPad Prism for Windows, GraphPad Software, La Jolla, CA). Both the unpaired t test and the Mann–Whitney U test were considered for the analysis; all statistical calculations reported here were done with the Mann–Whitney U test.
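As an illustration of this kind of unpaired group comparison, a minimal sketch using scipy's Mann–Whitney U test could look as follows; the score arrays are random placeholders with the group sizes of this study, not the study data themselves.

```python
# Illustrative only: comparing a speech-audiometry score between the two
# SSD groups with the Mann-Whitney U test. Placeholder data, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
rhe_scores = rng.normal(95.2, 8.7, size=224)   # right hearing ear group (n = 224)
lhe_scores = rng.normal(93.9, 9.1, size=182)   # left hearing ear group (n = 182)

stat, p = mannwhitneyu(rhe_scores, lhe_scores, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3f}")
```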

RESULTS

Patient Data

Out of the approximately 200,000 pure-tone audiograms in our patient database, 3,641 data sets fulfilled our criteria for SSD based on the PTA, and 406 patients had SA tested on the same day as the PTA. There were 224 patients (55%) with SSD on the left side (right hearing ear, RHE) and 182 (45%) with SSD on the right side (left hearing ear, LHE). The earliest matching complete data set was from 1961 (a single set), whereas all other data sets fulfilling the inclusion criteria were from 1969 onwards. Mean age was 40 years in both SSD groups, and the proportion of female patients was 51% in the RHE group and 53% in the LHE group.

Pure-Tone Audiogram (PTA)

There were no significant differences in pure-tone thresholds (dB HL ± SD) at any PTA frequency between the LHE and RHE: 10.1 ± 5.3 versus 9.9 ± 5.0 for 500 Hz (p = 0.498), 10.0 ± 5.1 versus 10.2 ± 5.0 for 1000 Hz (p = 0.616), 10.2 ± 5.5 versus 9.7 ± 6.1 for 2000 Hz (p = 0.536), 14.1 ± 6.9 versus 13.8 ± 6.9 for 4000 Hz (p = 0.651). Pure-tone average of the four frequencies (0.5, 1, 2, 4 kHz) also showed no significant threshold differences between LHE and RHE: 11.1 ± 6.0 versus 10.9 ± 6.0 (p = 0.531). Results are shown graphically in Figure 1.

FIG. 1.

Pure-tone thresholds of the 182 left hearing ear (LHE, blue) and 224 right hearing ear (RHE, red) patients. Bars indicate ±1 SD. Left and right sides at each frequency are graphically separated for better visualization. SD indicates standard deviation.

Speech Audiometry

All tests of speech audiometry revealed better scores for RHE than for LHE. Figure 2 and Table 1 display our findings. One hundred percent word understanding was reached at a significantly lower presentation level for RHE than for LHE (61.5 ± 10.1 [SD] dB SPL RHE and 63.8 ± 11.1 dB SPL LHE, p = 0.022). The SRT for the word test was also lower on the right, although the difference was not significant. Similarly, the SRT for the number test was lower for RHE without statistical significance. Word understanding was non-significantly better for RHE at the presentation levels of 40 dB SPL (LHE 46.1% ± 27.5 versus RHE 50.8% ± 27.6; p = 0.084), 50 dB SPL (80.6% ± 19.0 versus 82.6% ± 17.6; p = 0.378), and 70 dB SPL (98.0% ± 5.3 versus 98.6% ± 4.2; p = 0.239). The difference was significant at 60 dB SPL (95.2 ± 8.7% RHE and 93.9 ± 9.1% LHE, p = 0.035). One hundred percent word understanding at 60 dB SPL was reached by 127 patients (66%) with RHE, but by only 86 (53%) with LHE. The same effect was seen for the social index: an SI of 100% was reached by 168 patients (77%) with RHE, but by only 116 patients (66%) with LHE. Calculation of the SI revealed a significantly better mean value for RHE (98.2 ± 4% for RHE and 97.5 ± 4.7% for LHE; p = 0.026).

FIG. 2.

Word understanding at different presentation levels. Patients with a right hearing ear (RHE, red) performed better at all levels. Scores for left and right sides at each level are graphically separated for better visualization.

TABLE 1.

Side differences in speech audiometry

Speech Audiometry                       LHE Mean (±SD)   RHE Mean (±SD)   p
100% word understanding (dB SPL)        63.8 (±11.1)     61.5 (±10.1)     0.022
SRT for word test (dB SPL)              45.7 (±4.5)      45.1 (±4.2)      0.298
SRT for number test (dB SPL)            12.1 (±8.4)      11.3 (±7.0)      0.621
Word understanding at 60 dB SPL (%)     93.9 (±9.1)      95.2 (±8.7)      0.035
Social index (SI) (%)                   97.5 (±4.7)      98.2 (±4.0)      0.026

LHE indicates left hearing ear; RHE, right hearing ear; SD, standard deviation; SRT, speech recognition threshold.

DISCUSSION

In our analysis, patients with single-sided deafness and a right hearing ear performed better on all parameters of speech audiometry than their left hearing ear counterparts. Given that there were no significant differences in pure-tone thresholds between the two groups, we are confident that our results document a right ear advantage for speech in SSD. Even though the side differences in our SA results are subtle, and running multiple comparisons carries the risk of spurious significant findings, the advantage of the right side was consistent across all tested measures of speech understanding. Moreover, the finding matches the well-known general right ear advantage in audiometry.

The most convincing evidence for the right ear advantage may be the clear graphic difference in the performance-level functions of speech understanding illustrated in Figure 2. The significant difference in the SI fits well with this finding, because the SI, as a weighted score, is more sensitive in describing differences across the entire performance-level function than parameters based on single values such as the SRT. Interestingly, differences were also significant for speech understanding at 60 dB SPL. One of the criteria based on the Freiburg speech test used in Germany for the indication of hearing aids is derived from a presentation level of 65 dB SPL (13). The difference in the level needed to reach 100% speech understanding was also significant.

The non-significant advantage of the right side for the SRT seems less important in this context, even though the SRT is possibly the single most widely used parameter in speech audiometry. It seems possible that the SRT would have shown a significant difference with a larger sample size, which was not possible with the design of our study using only our in-house database. On the other hand, our strict inclusion criteria for pure-tone thresholds in the healthy and impaired ear, together with the requirement of same-day testing of speech and pure-tone audiometry, reduced the number of cases to 406 from a total of 3,641 SSD data sets in the database. Moreover, the strict inclusion criteria for pure-tone thresholds also prevented the identification of any ear-related pure-tone threshold advantage. Further, our methods did not allow assessment of either the duration or the course of onset (gradual versus sudden) of the one-sided hearing loss. Both could have influenced performance on speech audiometry through adaptive changes such as brain plasticity. However, an equal distribution of hearing loss duration and onset between right and left ears seems a reasonable assumption.

Another restriction of our approach was the inability to evaluate speech understanding in noise. Speech tests in noise were not carried out routinely during the entire time span of the database but have since become more commonly used. We would expect an even clearer right ear advantage for speech tests in noise, because differences in speech understanding, including side differences, can be expected to be more pronounced and more relevant in difficult listening situations such as noisy or reverberant environments. Saliba et al. (14) found a right ear advantage in SSD patients when speech understanding was tested in the sound field with presentation from the front and noise at 60 dB in the hearing ear. Other studies, mainly focusing on binaural hearing and bilaterally impaired hearing, have shown a relevant right ear advantage (5–7,15–17), whereas Morris et al. (18) found no side difference related to the site of cochlear implantation in patients with bilateral hearing loss.

Our finding of a right ear advantage in SSD does not by itself reveal its clinical relevance for patients in daily life or for hearing rehabilitation. It is possible that patients with SSD on the left side have a subtle advantage in using hearing aids, but this cannot be determined from our study, given that all patients had normal hearing in the better ear.

The right-side advantage in speech understanding in SSD should be seen in the broader context of a general phenomenon of the auditory system. A right-side advantage is well described for peripheral auditory findings such as pure-tone thresholds (19–21) and otoacoustic emissions (22,23). In the central auditory system, a right ear advantage resulting from the crossing of ascending auditory projections in the brainstem and the left-hemisphere dominance for speech processing (2) is well established. However, brain laterality of auditory processing is not strictly left-sided across the population. Among other influencing factors, such as the integrity of the corpus callosum (24–27), left-hemispheric dominance is associated with right-handedness. Around 90% of right-handed persons have left-hemispheric dominance, whereas in left-handed persons left-hemispheric dominance is present in only around 70% (28,29). Right-hemispheric language dominance has been found in 4% of the right-handed and in 10.5 to 27% of the left-handed population (28,29). There is also a certain percentage of people with bilateral cerebral language representation (28,29). We could not assess handedness in this retrospective study of an audiometric database. It could play a role in speech recognition and processing in patients with SSD, and future studies should examine this factor.

CONCLUSION

This study of speech audiometry in patients with SSD found a right ear advantage across all parameters. With regard to ongoing developments in technological hearing rehabilitation, the clinical implications of this finding should be investigated in future studies.

Acknowledgments

The authors would like to thank Dr. René Holzreuter for his critical role in the data mining and extraction process. Also, they are very grateful to Prof. Burkhardt Seifert, who provided essential support in the biostatistical data analysis, and to Dr. Fran Harris for providing editing and professional expertise.

Both authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Footnotes

The authors disclose no conflicts of interest.

REFERENCES

1. Fujii M, Maesawa S, Ishiai S, Iwami K, Futamura M, Saito K. Neural basis of language: an overview of an evolving model. Neurol Med Chir (Tokyo) 2016;56:379–386.
2. Lazard DS, Collette JL, Perrot X. Speech processing: from peripheral to hemispheric asymmetry of the auditory system. Laryngoscope 2012;122:167–173.
3. Westerhausen R, Hugdahl K. The corpus callosum in dichotic listening studies of hemispheric asymmetry: a review of clinical and experimental evidence. Neurosci Biobehav Rev 2008;32:1044–1054.
4. Keefe DH, Gorga MP, Jesteadt W, Smith LM. Ear asymmetries in middle-ear, cochlear, and brainstem responses in human infants. J Acoust Soc Am 2008;123:1504–1512.
5. Henkin Y, Taitelbaum-Swead R, Hildesheimer M, Migirov L, Kronenberg J, Kishon-Rabin L. Is there a right cochlear implant advantage? Otol Neurotol 2008;29:489–494.
6. Sharpe RA, Camposeo EL, Muzaffar WK, Holcomb MA, Dubno JR, Meyer TA. Effects of age and implanted ear on speech recognition in adults with unilateral cochlear implants. Audiol Neurootol 2016;21:223–230.
7. Budenz CL, Cosetti MK, Coelho DH, et al. The effects of cochlear implantation on speech perception in older adults. J Am Geriatr Soc 2011;59:446–453.
8. Probst R. [Cochlear implantation for unilateral deafness?]. HNO 2008;56:886–888.
9. Peters JP, van Zon A, Smit AL, et al. CINGLE-trial: cochlear implantation for siNGLE-sided deafness, a randomised controlled trial and economic evaluation. BMC Ear Nose Throat Disord 2015;15:3.
10. Laske RD, Roosli C, Pfiffner F, Veraguth D, Huber AM. Functional results and subjective benefit of a transcutaneous bone conduction device in patients with single-sided deafness. Otol Neurotol 2015;36:1151–1156.
11. Hartvig Jensen J, Johansen PA, Borre S. Unilateral sensorineural hearing loss in children and auditory performance with respect to right/left ear differences. Br J Audiol 1989;23:207–213.
12. Niedzielski A, Humeniuk E, Blaziak P, Gwizda G. Intellectual efficiency of children with unilateral hearing loss. Int J Pediatr Otorhinolaryngol 2006;70:1529–1532.
13. Richtlinie des Gemeinsamen Bundesausschusses über die Verordnung von Hilfsmitteln in der vertragsärztlichen Versorgung, C § 21,22. SGB V:17-18.
14. Saliba I, Nader ME, El Fata F, Leroux T. Bone anchored hearing aid in single sided deafness: outcome in right-handed patients. Auris Nasus Larynx 2011;38:570–576.
15. Poelmans H, Luts H, Vandermosten M, Ghesquiere P, Wouters J. Hemispheric asymmetry of auditory steady-state responses to monaural and diotic stimulation. J Assoc Res Otolaryngol 2012;13:867–876.
16. Foundas AL, Corey DM, Hurley MM, Heilman KM. Verbal dichotic listening in right and left-handed adults: laterality effects of directed attention. Cortex 2006;42:79–86.
17. Henkin Y, Swead RT, Roth DA, et al. Evidence for a right cochlear implant advantage in simultaneous bilateral cochlear implantation. Laryngoscope 2014;124:1937–1941.
18. Morris LG, Mallur PS, Roland JT Jr, Waltzman SB, Lalwani AK. Implication of central asymmetry in speech processing on selecting the ear for cochlear implantation. Otol Neurotol 2007;28:25–30.
19. Chung DY, Mason K, Gannon RP, Willson GN. The ear effect as a function of age and hearing loss. J Acoust Soc Am 1983;73:1277–1282.
20. McFadden D. A speculation about the parallel ear asymmetries and sex differences in hearing sensitivity and otoacoustic emissions. Hear Res 1993;68:143–151.
21. Pirila T. Left-right asymmetry in the human response to experimental noise exposure. I. Interaural correlation of the temporary threshold shift at 4 kHz frequency. Acta Otolaryngol 1991;111:677–683.
22. Ari-Even Roth D, Hildesheimer M, Roziner I, Henkin Y. Evidence for a right-ear advantage in newborn hearing screening results. Trends Hear 2016;20:2331216516681168.
23. Snihur AW, Hampson E. Sex and ear differences in spontaneous and click-evoked otoacoustic emissions in young adults. Brain Cogn 2011;77:40–47.
24. Westerhausen R, Woerner W, Kreuder F, Schweiger E, Hugdahl K, Wittling W. The role of the corpus callosum in dichotic listening: a combined morphological and diffusion tensor imaging study. Neuropsychology 2006;20:272–279.
25. Clarke JM, Lufkin RB, Zaidel E. Corpus callosum morphometry and dichotic listening performance: individual differences in functional interhemispheric inhibition? Neuropsychologia 1993;31:547–557.
26. Benavidez DA, Fletcher JM, Hannay HJ, et al. Corpus callosum damage and interhemispheric transfer of information following closed head injury in children. Cortex 1999;35:315–336.
27. Mataró M, Poca MA, Matarín M, Sahuquillo J, Sebastián N, Junqué C. Corpus callosum functioning in patients with normal pressure hydrocephalus before and after surgery. J Neurol 2006;253:625–630.
28. Khedr EM, Hamed E, Said A, Basahi J. Handedness and language cerebral lateralization. Eur J Appl Physiol 2002;87:469–473.
29. Knecht S, Drager B, Deppe M, et al. Handedness and hemispheric language dominance in healthy humans. Brain 2000;123:2512–2518.
