Abstract
This study compared spatial speech-in-noise performance in two cochlear implant (CI) patient groups: bimodal listeners, who use a hearing aid contralaterally to support their impaired acoustic hearing, and listeners with contralateral normal hearing, i.e., who were single-sided deaf before implantation. Using a laboratory setting that controls for head movements and that simulates spatial acoustic scenes, speech reception thresholds were measured for frontal speech-in-stationary noise from the front, the left, or the right side. Spatial release from masking (SRM) was then extracted from speech reception thresholds for monaural and binaural listening. SRM was found to be significantly lower in bimodal CI than in CI single-sided deaf listeners. Within each listener group, the SRM extracted from monaural listening did not differ from the SRM extracted from binaural listening. In contrast, a normal-hearing control group showed a significant improvement in SRM when using two ears in comparison to one. Neither CI group showed a binaural summation effect; that is, their performance was not improved by using two devices instead of the best monaural device in each spatial scenario. The results confirm a “listening with the better ear” strategy in the two CI patient groups, where patients benefited from using two ears/devices instead of one by selectively attending to the better one. Which one is the better ear, however, depends on the spatial scenario and on the individual configuration of hearing loss.
Keywords: cochlear implant, spatial hearing, single-sided deafness, bimodal
Introduction
In the last decade, more and more people with unilateral deafness and varying degrees of contralateral acoustic hearing have been implanted with a cochlear implant (CI), leading to various combinations of electrical hearing on one side and normal-to-aided acoustic hearing on the other side (Blamey et al., 2015; van Zon, Peters, Stegeman, Smit, & Grolman, 2015). The combination of acoustic and electric hearing shows manifold benefits, including better speech intelligibility (Gifford, Dorman, Sheffield, Teece, & Olund, 2014), improved quality of life (van Zon et al., 2015), improved localization (Arndt et al., 2011; Veugen, Chalupper, Snik, van Opstal, & Mens, 2016), and tinnitus reduction (van de Heyning et al., 2008). However, as shown by Ching, van Wanrooy, and Dillon (2007), there is large interindividual variability in the benefit obtained, and some listeners show no improvement or even interference when using both ears compared with the better ear alone.
This large variability may arise from the different hearing strategies employed by the listeners and from differences in the information available across ears. The hearing strategies for spatial hearing in acoustic and electric listening modes may be influenced by access to interaural cues (interaural time difference [ITD] and interaural level difference [ILD]) and by access to redundant or complementary information across the differently stimulated ears. Access to interaural cues can significantly improve speech-in-noise performance. For example, in normal-hearing (NH) listeners, spatially separating speech and noise sources leads to spatial unmasking and thus to better speech intelligibility (Cherry, 1953). This spatial unmasking, termed spatial release from masking (SRM), arises from an interplay of monaural and binaural cues.
Another reason for the large interindividual variability may be the varying degrees of contralateral acoustic hearing and the use of different loudspeaker setups in most of the clinical studies addressing this research question. While early studies mainly included patients with poor contralateral acoustic hearing (Armstrong, Pegg, James, & Blamey, 1997; Ching, Incerti, & Hill, 2004; Dunn, Tyler, & Witt, 2005), more recent studies have investigated speech perception across acoustic and electric stimulation even in patients with normal or near-normal hearing contralateral to their CI (Arndt et al., 2011; Grossmann et al., 2016; Zeitler et al., 2015). A comparison across studies, however, is difficult because they used different speech material, different noises, different spatial settings in free field, and different room acoustics for testing the patients. In addition, small head movements of the subjects can have a large influence on SRM (Grange & Culling, 2016). In our study, these variables are controlled: Virtual acoustics are used to render the spatial scenes, and the stimuli are presented via earphone and CI audio cable. This eliminates the influence of room acoustics, loudspeaker positions, head movements, and microphone positions across devices. The subject’s hearing aid (HA) is replaced with a simulation of HA processing in the computer. In addition, the Oldenburg sentence test used here is standardized and optimized for speech reception threshold (SRT) measurements. This test is available in different languages (Kollmeier et al., 2015), such that this study also provides normative data for bimodal and single-sided deaf (SSD) CI listeners that may be transferrable across languages.
The main goal of this study was to quantify and compare spatial speech-in-noise performance in two patient groups who differ in their degree of acoustic hearing using measurement settings that control for head movements. The patient groups were bimodal CI users who use a HA in the ear contralateral to their CI and CI SSD users who have (nearly) NH in their contralateral ear. In addition, NH listeners were included as a control group. SRM was extracted from the individual SRTs, and the contribution of the different monaural and binaural cues, measured by binaural summation, in these patient groups was compared with that of the NH control group.
Methods
Sixteen CI users, eight bimodal CI users and eight CI SSD users, participated in this two-center study at the University of Oldenburg and the University of Freiburg Medical Center. Demographic details and etiology of hearing loss of all subjects are provided in Table 1. Six CI users were tested in Oldenburg (S01–S09) and 10 in Freiburg (P01–P12). As a control group, 11 NH listeners (6 females and 5 males, aged between 21.2 and 37.5 years, median: 26.1 years) were tested in Oldenburg. Inclusion criteria for the CI users were at least 1 year of experience with the CI (of any manufacturer), the ability to perform speech-in-noise experiments with either ear alone, and, for bimodal CI users, regular HA usage in the contralateral ear. Subjects with additional handicaps, for example, blindness or dementia, were not included. In all participants, (monaural) hearing thresholds were assessed with standard audiometry before testing. NH listeners were included in the control group if their (air-conduction) pure-tone thresholds were equal to or below 20 dB HL at all audiometric frequencies between 125 Hz and 8 kHz in both ears. In the CI SSD group, the threshold criterion for the acoustic (normal) ear was slightly less restrictive: All pure-tone thresholds had to be equal to or less than 30 dB HL at the audiometric frequencies between 125 Hz and 4 kHz (in agreement with the criterion for NH from Arndt et al., 2017). Bimodal CI listeners were included if, in the acoustic ear, their pure-tone thresholds were below (i.e., better than) 100 dB HL at 1 kHz and below 80 dB HL at 500 Hz. Unaided hearing thresholds of the ears contralateral to the CI are displayed in Figure 1 for both patient groups.
Table 1.
Demographic Details and Etiology of Hearing Loss of the Included CI Subjects.
| ID | Age | Etiology | Sex | Duration of deafness (years) | HA use (years) | CI ear | Implant type | CI processor | CI use (years) | Group |
|---|---|---|---|---|---|---|---|---|---|---|
| P01 | 61 | Morbus Menière | M | 6 | 8.7 | R | CI512 | CP910 | 2.2 | Bimodal |
| P02 | 74 | Sudden hearing loss | M | 3.4 | 9.8 | R | CI422 | CP810 | 5.8 | Bimodal |
| P03 | 59 | Unknown | M | 23.9 | 31.8 | R | CI512 | CP810 | 7.8 | Bimodal |
| P04 | 41 | Unknown | M | 0.3 | N/A | L | CI422 | CP910 | 3.1 | SSD |
| P05 | 56 | Unknown | F | 33.6 | N/A | L | CI422 | CP810 | 4.9 | Bimodal |
| P06 | 64 | Morbus Menière | F | 0.1 | N/A | R | CI24RE (CA) | CP910 | 3.7 | SSD |
| P08 | 77 | Unknown | F | 26.1 | 27.8 | R | CI512 | CP810 | 6.6 | Bimodal |
| P09 | 60 | Labyrinthitis | F | 3.3 | N/A | R | CI512 | CP810 | 7.5 | SSD |
| P10 | 61 | Sudden hearing loss | M | 0.7 | N/A | R | CI422 | CP910 | 4 | SSD |
| P12 | 48 | Otitis media | F | 1.7 | 4.5 | R | CI512 | CP910 | 2.4 | SSD |
| S01 | 27 | Ototoxicity | M | 3 | 25.8 | R | Hybrid L | CP910 | 10.7 | Bimodal |
| S02 | 19 | Unknown | F | 6 | N/A | L | Concerto | Opus 2 | 4.4 | SSD |
| S06 | 65 | Sudden hearing loss | M | 1 | 8.8 | L | CI422 | CP910 | 1.8 | Bimodal |
| S07 | 64 | Sudden hearing loss | M | N/A | 19.8 | R | HiRes 90 K HiFocus | Naída CI Q70 | 5.8 | Bimodal |
| S08 | 66 | Sudden hearing loss | F | N/A | N/A | R | N/A | RONDO | 2.8 | SSD |
| S09 | 50 | Otosclerosis | M | N/A | N/A | R | CI24RE (CA) | CP810 | 3.2 | SSD |
Note. Duration of deafness: Time between onset of severe hearing loss/deafness on implanted ear and cochlear implantation. N/A = not applicable; SSD = single-sided deaf; M = male; F = female; HA = hearing aid; CI = cochlear implant.
Figure 1.

Individual pure-tone (air-conduction) hearing thresholds for the ear contralateral to the CI of the bimodal CI group (blue) and the CI SSD group (pink). For the three steeply sloping hearing losses in the lower left of the graph, no threshold (up to 120 dB HL) was measurable for higher frequencies.
All subjects volunteered to participate in the study, received travel reimbursement, and gave written informed consent. In Oldenburg, subjects were additionally paid for their participation. The study was approved by the (medical) ethics committees of the Universities of Oldenburg and Freiburg.
Pretest
Prior to assessing speech intelligibility in the bimodal CI and CI SSD listener groups, a loudness balancing was conducted, as described by Veugen et al. (2016): The subject listened to stationary speech-shaped noise (equal to the long-term-averaged spectrum of all sentences used in the speech intelligibility test; olnoise, Wagener, Kühnel, & Kollmeier, 1999) that was convolved with a head-related impulse response (HRIR) for frontal incident angle, delivered first to the acoustically hearing ear via an insert earphone at 65 dB sound pressure level (SPL) and then to the patient’s own CI via an audio cable. Their task was then to adjust the loudness of the signal transmitted to the CI ear by pressing either a “louder” or “softer” button on a touch screen, until the signal transferred to the CI and the signal transferred to the insert earphone were perceived as being equally loud. The loudness was changed by adjusting the broadband gain on the CI ear in a one-up one-down adaptive procedure, first in steps of 5 dB and then, after the first reversal, in steps of 1 dB. After the subject was satisfied with the loudness balancing, this gain setting on the CI ear was used in all further experiments.
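For illustration, the following minimal Python sketch mirrors the balancing logic described above; it is not the study’s actual software, and `play_to_ci` and `get_button_press` are hypothetical stand-ins for the audio routing and the touch-screen interface.

```python
def balance_loudness(play_to_ci, get_button_press):
    """One-up/one-down loudness balancing: adjust the broadband gain on the
    CI side until it matches the 65 dB SPL reference on the acoustic ear."""
    gain_db = 0.0
    step_db = 5.0                      # coarse steps before the first reversal
    last_direction = None
    while True:
        play_to_ci(gain_db)            # present the gain-adjusted noise to the CI
        answer = get_button_press()    # 'louder', 'softer', or 'done'
        if answer == 'done':           # subject accepts the current balance
            return gain_db
        direction = +1 if answer == 'louder' else -1
        if last_direction is not None and direction != last_direction:
            step_db = 1.0              # fine steps after the first reversal
        gain_db += direction * step_db
        last_direction = direction
```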
Speech Intelligibility Assessment
The German matrix sentence test (Oldenburger Satztest, OlSa; Wagener et al., 1999) was used to measure SRTs, which were defined as the signal-to-noise ratio (SNR) corresponding to 50% speech intelligibility score (per sentence), in the presence of stationary noise. This speech test consists of sentences made up of five words in a fixed order: subject, verb, numeral, adjective, and object. For example, “Peter kauft drei nasse Sessel,” which translates as, “Peter buys three wet chairs.” The sentences are spoken by a male speaker. Lists of 20 sentences were used, and an SNR of 10 dB was chosen as a starting value in all test conditions. Listeners’ responses were obtained using a touch screen that showed all possible response alternatives for each word (position) of the sentence test (closed set response procedure; see Warzybok, Rennies, Brand, Doclo, & Kollmeier, 2013).
The spectrum of the stationary noise equals the long-term-averaged spectrum of all sentences used in this test (olnoise, Wagener et al., 1999). During the course of the measurement, the speech level was held constant at a comfortable loudness (corresponding to 65 dB SPL at the acoustic side), and the noise level (and thus the SNR) was varied to measure the SRT using an adaptive procedure (A1 procedure from Brand & Kollmeier, 2002). In brief, the adaptive procedure converges to the desired threshold by changing the noise level from one sentence to the next in small steps. The step size depends on the current sentence number and the number of correctly understood words in the previous sentence and is halved after every second reversal. The SRT is obtained by fitting a psychometric function, using a maximum likelihood criterion, to all responses of the subject during one sentence list, treating the subject’s response to each presented word as a Bernoulli trial. Note that by varying the noise level instead of the speech level, the speech dynamic range could be optimally placed within the narrow input dynamic range of the patient’s CI sound processor (and within the HA input dynamic range), thereby ensuring that the speech information was optimally transmitted throughout the course of the measurement.
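To make the adaptive logic concrete, here is a deliberately simplified Python sketch: the noise level is varied from sentence to sentence, and the step size is halved after every second reversal. It is not the exact A1 rule of Brand and Kollmeier (2002), which additionally scales each step by the deviation of the word score from the target; `present_sentence` is a hypothetical callback returning the proportion of correctly repeated words.

```python
def adaptive_track(present_sentence, n_sentences=20, target=0.5, start_snr=10.0):
    """Simplified adaptive SRT track toward `target` intelligibility."""
    snr, step = start_snr, 4.0                # dB; illustrative initial step size
    last_sign, reversals, track = 0, 0, []
    for _ in range(n_sentences):
        correct = present_sentence(snr)       # proportion of words correct (0..1)
        track.append((snr, correct))
        sign = 1 if correct < target else -1  # too hard -> raise SNR (lower noise)
        if last_sign and sign != last_sign:
            reversals += 1
            if reversals % 2 == 0:
                step /= 2.0                   # halve step every second reversal
        snr += sign * step
        last_sign = sign
    return track                              # afterwards used for the ML fit
```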
To be able to infer complete psychometric functions in addition to the SRT50%, the SRT80% and the SRT20% were measured as well (Akeroyd et al., 2015) by specifying the different target speech intelligibility in the A1 adaptive procedure. If the SRT80% was not measurable, the percentage correct score at 20 dB SNR was measured instead (constant stimuli procedure).
Using the measured SNRs for SRT20%, SRT50%, and SRT80%, the slope of the psychometric function was calculated individually for each subject by fitting the psychometric function in Equation 1 (Jürgens & Brand, 2009) to the data points. The fitted function was found by minimizing the root-mean-square error between the measured data and the model function. The variable g denotes the guessing probability for each word, which is 10% (because there are 10 possible response alternatives for each word in the sentence).
$$p(\mathrm{SNR}) = g + (1 - g) \cdot \frac{1}{1 + \exp\left(4\, s_{50}\, (\mathrm{SRT}_{50} - \mathrm{SNR})\right)} \quad (1)$$

where $s_{50}$ denotes the slope parameter of the psychometric function at the SRT50%.
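As an illustration, the following sketch fits SRT50 and the slope parameter by minimizing the root-mean-square error between Equation 1 and three measured data points, as described above; the example thresholds are hypothetical and serve for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

def psychometric(snr, srt50, s50, g=0.1):
    """Equation 1: logistic intelligibility function with guessing
    probability g (10%, given ten response alternatives per word)."""
    return g + (1 - g) / (1 + np.exp(4 * s50 * (srt50 - snr)))

def fit_psychometric(snrs, scores, g=0.1):
    """Minimize the RMS error between the model and the measured points."""
    snrs, scores = np.asarray(snrs, float), np.asarray(scores, float)
    def rms(params):
        return np.sqrt(np.mean((psychometric(snrs, *params, g) - scores) ** 2))
    return minimize(rms, x0=[np.median(snrs), 0.1], method='Nelder-Mead').x

# Hypothetical measured points: SRT20%, SRT50%, SRT80% in dB SNR
srt50, s50 = fit_psychometric([-6.0, -3.5, -1.0], [0.2, 0.5, 0.8])
print(f"SRT50 ≈ {srt50:.1f} dB SNR, slope parameter ≈ {100 * s50:.1f} %/dB")
```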
In both CI groups, three different listening modes were tested: (speech) performance with the NH/HA ear alone, performance with the CI alone, and binaural performance. Each of these listening modes was tested in three different spatial scenarios (noise from −90°, 0°, or 90°), while the speech was always presented from the front (0°). Spatial rendering was done using anechoic KEMAR HRIRs from the Kayser et al. (2009) database (80 cm distance between loudspeaker and microphone, 0° elevation, −90°, 0°, or 90° azimuth, frontal microphone of the behind-the-ear [BTE] casing).
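A minimal sketch of this kind of HRIR-based rendering is shown below (the actual study used the AFC/MHA toolchain described under Measurement Procedure; the speech, noise, and two-channel HRIR arrays here are hypothetical placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(speech, noise, hrir_speech, hrir_noise, snr_db):
    """Render a two-channel scene: speech convolved with the frontal HRIR,
    noise with the HRIR of its azimuth, mixed at the requested SNR.
    hrir_* are (n_taps, 2) impulse responses, e.g., from the Kayser
    et al. (2009) BTE database."""
    sp = np.stack([fftconvolve(speech, hrir_speech[:, ch]) for ch in (0, 1)], axis=1)
    no = np.stack([fftconvolve(noise, hrir_noise[:, ch]) for ch in (0, 1)], axis=1)
    # Scale the noise only; the speech level stays fixed, as in the adaptive procedure.
    gain = 10 ** (-snr_db / 20) * np.sqrt(np.mean(sp ** 2) / np.mean(no ** 2))
    n = min(len(sp), len(no))
    return sp[:n] + gain * no[:n]
```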
The subjects did not all have their CI on the same side. To better display the results and for the subsequent SRM analysis, SRTs of subjects with CI on the left side were swapped, such that all SRTs are displayed as if all subjects had their CI on the right side and acoustic hearing on the left side. This was possible because, due to the anechoic conditions, both unilateral HRIRs were almost symmetrical (Kayser et al., 2009).
SRM and Binaural Summation
Using the different spatial scenarios and the different listening modes, SRM was calculated for each subject individually as follows:
$$\mathrm{SRM}_{A,\mathrm{mon}} = \mathrm{SRT}_{A,\mathrm{mon}}(\mathrm{N}0^{\circ}) - \mathrm{SRT}_{A,\mathrm{mon}}(\mathrm{N}90^{\circ}) \quad (2)$$

$$\mathrm{SRM}_{E,\mathrm{mon}} = \mathrm{SRT}_{E,\mathrm{mon}}(\mathrm{N}0^{\circ}) - \mathrm{SRT}_{E,\mathrm{mon}}(\mathrm{N}{-}90^{\circ}) \quad (3)$$

$$\mathrm{SRM}_{\mathrm{bin}} = \mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}0^{\circ}) - \min\left[\mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}{-}90^{\circ}),\ \mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}90^{\circ})\right] \quad (4)$$
where A stands for listening with the acoustic ear, E for listening with the electrically stimulated (CI) ear, mon denotes monaural listening and bin binaural listening. The degree values denote the noise incident direction. These formulae implicitly assume that acoustic hearing is at the left ear and CI hearing at the right ear in line with the way chosen to display the data.
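Expressed as code, Equations 2 to 4 reduce to a few lines; the sketch below assumes a hypothetical dictionary of measured SRTs keyed by listening mode ('A', 'E', or 'bin') and noise azimuth in degrees.

```python
def compute_srm(srt):
    """SRM per Equations 2-4 (acoustic ear left, CI right, as in the text):
    colocated SRT minus the best SRT with spatially separated noise."""
    return {
        'A_mon': srt[('A', 0)] - srt[('A', 90)],    # Eq. 2: noise contralateral to acoustic ear
        'E_mon': srt[('E', 0)] - srt[('E', -90)],   # Eq. 3: noise contralateral to CI ear
        'bin': srt[('bin', 0)] - min(srt[('bin', -90)], srt[('bin', 90)]),  # Eq. 4
    }
```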
Binaural summation, which is the difference in SRT between listening with two ears and listening with the better monaural ear, was calculated for each subject individually using the following formulae:
$$\mathrm{BS}(\mathrm{N}{-}90^{\circ}) = \min\left[\mathrm{SRT}_{A,\mathrm{mon}}(\mathrm{N}{-}90^{\circ}),\ \mathrm{SRT}_{E,\mathrm{mon}}(\mathrm{N}{-}90^{\circ})\right] - \mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}{-}90^{\circ}) \quad (5)$$

$$\mathrm{BS}(\mathrm{N}0^{\circ}) = \min\left[\mathrm{SRT}_{A,\mathrm{mon}}(\mathrm{N}0^{\circ}),\ \mathrm{SRT}_{E,\mathrm{mon}}(\mathrm{N}0^{\circ})\right] - \mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}0^{\circ}) \quad (6)$$

$$\mathrm{BS}(\mathrm{N}90^{\circ}) = \min\left[\mathrm{SRT}_{A,\mathrm{mon}}(\mathrm{N}90^{\circ}),\ \mathrm{SRT}_{E,\mathrm{mon}}(\mathrm{N}90^{\circ})\right] - \mathrm{SRT}_{\mathrm{bin}}(\mathrm{N}90^{\circ}) \quad (7)$$

where BS denotes the binaural summation; positive values indicate a binaural benefit.
Note that in most studies in the literature, binaural summation is only defined for 0° azimuth. To evaluate whether binaural benefit also occurs at lateral angles, we decided to also calculate binaural summation for noise directions 90° and −90°. For both angles, the best monaural SRT was used as a baseline to avoid inflated binaural summation due to, for example, adding a considerably better ear to a weaker ear and using the weaker ear as a baseline. For lateral angles, the definition of summation effects given in Equations 5 and 7 corresponds to the definition of the classical binaural squelch if the ear ipsilateral to the noise is also the weaker ear, that is, has a higher SRT (cf., e.g., van Hoesel, 2012).
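A corresponding sketch for Equations 5 to 7, using the same hypothetical SRT dictionary as above; positive values indicate that the binaural SRT is lower (better) than the best monaural SRT.

```python
def binaural_summation(srt):
    """Equations 5-7: binaural benefit relative to the better monaural ear,
    evaluated separately for each noise azimuth."""
    return {azimuth: min(srt[('A', azimuth)], srt[('E', azimuth)]) - srt[('bin', azimuth)]
            for azimuth in (-90, 0, 90)}
```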
Measurement Procedure
A portable measurement setup was designed for use at both participating study locations (Oldenburg and Freiburg) to control the acoustic scene without the need for loudspeaker setups. The hardware consisted of a laptop (Microsoft Surface 3) connected to a headphone amplifier (FiiO E12, cross-fading off, bass-boost off, gain = 0 dB) and a y-shaped cable from stereo jack to two mono jacks. The mono jacks were connected either to an Etymotic ER4PT in-ear (insert) earphone or, via a CI-manufacturer-specific audio input cable, to the subject’s own CI speech processor (see Figure 2). The laptop ran AFC version 1.4 (Ewert, 2013) to control the Oldenburg sentence test and the master hearing aid (MHA; Grimm, Herzke, Berg, & Hohmann, 2006), software that simulates HA processing with low processing latency. Sound output to the headphone amplifier was accomplished using SoundMexPro (HörTech gGmbH). The Jack Audio Connection Kit version 1.9.10 was used to route the signals from AFC to the MHA and then to headphone playback in real time. The complete system could achieve a maximum output level of 115 dB SPL with the Etymotic ER4PT insert earphone, but output was limited to 105 dB SPL in the MHA to protect participants from excessively high sound levels.
Figure 2.

Flowchart of the measurement setup used for assessing speech intelligibility in noise in a controlled acoustic environment. CI = cochlear implant; MHA = master hearing aid; HA = hearing aid; OlSa = Oldenburg sentence test; HRIR = head-related impulse response; BTE = behind-the-ear.
Calibration of the measurement setup was performed using a 2 cc coupler (Brüel & Kjaer 4157) attached to the insert earphone. Frequency-specific real-ear-to-coupler differences from Dillon (2001) were used to convert SPL at the coupler to SPL at the eardrum. In addition, a frequency-independent correction factor of 2 dB, specific to the Etymotic ER4PT insert earphones, was needed to achieve correct broadband levels after taking the coupler differences into account.
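The conversion itself is simple additive dB arithmetic; a sketch with hypothetical per-frequency values:

```python
def eardrum_spl(coupler_spl_db, recd_db, broadband_correction_db=2.0):
    """Estimate eardrum SPL from 2 cc coupler SPL using frequency-specific
    real-ear-to-coupler differences (RECD; Dillon, 2001) plus the broadband
    correction for the insert earphone."""
    return [c + r + broadband_correction_db
            for c, r in zip(coupler_spl_db, recd_db)]

# Hypothetical example: coupler levels and RECD values at a few frequencies
print(eardrum_spl([60.0, 62.0, 58.0], [3.0, 5.5, 9.0]))
```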
If the participants had an HA, they were asked to switch it off and take it out during testing. The audio stimuli were presented via insert earphone to the acoustic ear and via audio cable connected to the subject’s own CI speech processor. The software on the laptop simulated HA processing for the bimodal CI patient group and created and played back the spatially rendered audio stimuli. All tests were conducted either in soundproof booths or in quiet office rooms. To avoid any unintended changes in SRTs due to sound-processor-specific noise reduction, all preprocessing algorithms in the CIs were switched off for the duration of the tests, and the mixing ratio of audio cable to speech processor microphone input was set to the highest possible value. In the CI, the everyday fitting map was used, and the subjects used their own speech processors during testing. Correct functioning of the CIs was verified by comparing training SRTs with available SRTs of the same participant for the Oldenburg sentence test in noise at S0°N0° presented via loudspeaker. Only one subject (P10) showed markedly reduced performance compared with the available clinical data. This was resolved by creating a new map; testing with the new map was conducted 1 month later.
Fitting of the MHA
The MHA was used as a substitute for the subject’s own HA to control for differences in preprocessing algorithms, differences in automatic gain controls, or differences in microphones across the subjects’ own HAs. The fitting formula CAMFIT (Moore, Alcántara, Stone, & Glasberg, 1999) was used to fit the MHA to the impaired ear of the bimodal subjects. In contrast to NAL-NL1 (Byrne et al., 2001), CAMFIT provides more gain at low frequencies (especially below 250 Hz), which is the frequency region most important for the majority of bimodal CI users, and was thus chosen to sufficiently aid the profound hearing loss in the acoustically hearing ear of the bimodal subjects.
In addition, CAMFIT was modified as suggested by Haumann et al. (2012): The maximum power output was restricted to 105 dB SPL, and a band-selection process was performed such that bands with very poor hearing thresholds (above 90 dB HL, indicating possible dead regions) received no amplification (in line with Zhang, Dorman, Gifford, & Moore, 2014). Furthermore, as specified in CAMFIT, frequencies above 5 kHz were not aided.
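A sketch of this band-selection step applied to a prescribed gain table (function and parameter names are hypothetical; the 105 dB SPL maximum power output is enforced separately in the compressor/limiter and is not shown):

```python
import numpy as np

def restrict_gains(gains_db, thresholds_hl, freqs_hz,
                   dead_region_hl=90.0, max_freq_hz=5000.0):
    """Zero the prescribed gain in bands with thresholds above 90 dB HL
    (possible dead regions) and in bands above 5 kHz, per the
    modifications described above."""
    gains = np.asarray(gains_db, dtype=float).copy()
    gains[np.asarray(thresholds_hl) > dead_region_hl] = 0.0
    gains[np.asarray(freqs_hz) > max_freq_hz] = 0.0
    return gains
```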
Timing Between Acoustic and Electric Stimulation
Differences in signal-processing latency between actual CIs and HAs, together with the transmission latency of the peripheral auditory system (outer, middle, and inner ear), which is bypassed by electric hearing with a CI, may create interaural stimulation timing differences in bimodal or CI SSD users. To take these timing differences between the two stimulation modes of this study into account, auditory brainstem response data of Zirn, Arndt, Aschendorff, and Wesarg (2015) were evaluated. These data suggest virtually the same latency for stimulation with the CI as for acoustic stimulation without an HA, and a 5 to 7 ms increase in latency for acoustic stimulation with an HA, which is frequency-dependent and also varies across HA models. Therefore, CI and acoustic stimuli were presented time-synchronously to the CI SSD listeners. Because the subject’s HA (and its delay) was replaced by the MHA, a deliberate 6 ms latency was imposed on the HA side for the bimodal CI listeners in order to reproduce an average HA-related processing delay.
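A sketch of imposing this fixed delay on the HA channel of a two-channel stimulus; a single whole-sample delay is used here, deliberately ignoring the frequency dependence reported by Zirn et al. (2015), just as the study used one average value.

```python
import numpy as np

def delay_ha_channel(stereo, fs, delay_ms=6.0, ha_channel=0):
    """Delay only the acoustic (HA) channel by delay_ms; the CI channel
    is left untouched (6 ms = 288 samples at 48 kHz, for example)."""
    n = int(round(delay_ms * 1e-3 * fs))
    out = stereo.copy()
    if n > 0:
        out[:, ha_channel] = np.concatenate([np.zeros(n), stereo[:-n, ha_channel]])
    return out
```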
Statistical Analysis
Statistical analysis was performed using SPSS V25. Normal distribution of the data was assessed using the Kolmogorov–Smirnov test, and effect sizes are given for significant results. The values of p were Bonferroni-corrected for multiple comparisons. To determine the necessary number of subjects, a sample size analysis was conducted a priori using G*Power 3.1 (Faul, Erdfelder, Buchner, & Lang, 2009). Based on data from Gifford et al. (2014), the binaural SRM in bimodal CI users was estimated to have an effect size of 1 (average SRM: 4 dB; interindividual standard deviation: 4 dB), leading to n = 8 with α = .05 and a statistical power of 80%. Based on the data from Hoppe, Hocke, and Digeser (2018), the binaural summation for S0°N0° in bimodal CI users was estimated to have an effect size of 0.56 (average binaural summation: 1.3 dB; interindividual standard deviation: 2.32 dB), which would require n = 21. As this study concentrates on the assessment and comparison of SRM, statements about binaural summation should be considered explorative, without the necessary statistical power.
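For reference, the two sample-size estimates can be approximated in Python with statsmodels instead of G*Power; treating the test as a one-sample t-test with a one-sided alternative is our assumption here, chosen because it reproduces the reported n.

```python
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
# SRM: effect size 1.0 (mean 4 dB / SD 4 dB), alpha = .05, power = .80
n_srm = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.8,
                             alternative='larger')
# Binaural summation: effect size 0.56 (mean 1.3 dB / SD 2.32 dB)
n_sum = analysis.solve_power(effect_size=0.56, alpha=0.05, power=0.8,
                             alternative='larger')
print(f"n ≈ {n_srm:.1f} and {n_sum:.1f}")  # close to the n = 8 and n = 21 in the text
```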
Results
Speech Reception Thresholds (SRTs)
The goal of the study was to measure and compare the SRM in two patient groups (bimodal CI listeners and CI SSD listeners) using a controlled measurement setup and compare the results to the SRM obtained from an NH control group. To obtain the SRM, SRTs were measured for each spatial scenario and each patient group. The SRT50%s for the three groups are displayed in Figure 3. For the two patient groups, the SRTs are grouped by listening mode for listening with left ear alone, with right ear alone, and binaurally (coded by different colors and indicated by head symbols at the top). For each of the listening modes, the SRTs are further divided according to the three spatial scenarios with different noise incident angles (N−90°, N0°, and N90°, as indicated on the abscissa).
Figure 3.
SRT50% boxplots for the bimodal CI, CI SSD, and NH listeners. Head symbols denote listening mode (monaural or binaural, specified by the text on each side: CI = cochlear implant, HA = hearing aid, NH = normal hearing). For each listening mode, three different spatial scenarios were tested, as displayed on the abscissa: noise from −90°, 0°, and 90°. SSD = single-sided deaf; SRT = speech reception threshold.
The different spatial scenarios led to the following patterns, depending on the listening mode: For listening with only one ear (e.g., monaural NH, black color in the right panel), the SRT improved (decreased) when the noise moved from a frontal position to a position contralateral to the listening ear (here: N90°). The SRT increased when the noise moved from the frontal position to a position ipsilateral to the listening ear (here: N−90°). This diagonal pattern is visible for all monaural listening modes, independent of subject group, and for the binaural listening mode of the CI SSD listeners. The bimodal CI+HA (blue color) and the binaural NH (black color) listening modes show a different pattern. Here, SRTs decreased (improved) from N0° toward both sides (N−90° and N90°). In particular, in the binaural NH listening mode (black color, right panel), SRTs improved greatly when the noise source moved to the side, irrespective of the noise direction. For the bimodal CI+HA mode, this SRT improvement was much smaller.
The different listening modes also influenced the absolute SRTs, which differed widely across individual patients. While the SRT variability for the monaural NH ear in the CI SSD group was small, the variability was much larger for the monaural HA in the bimodal CI group and the monaural CI in the CI SSD group. Note that the SRT variability of the monaural CI in the bimodal CI listener group was also noticeably smaller than that of the monaural CI in the CI SSD group.
To investigate whether the visible median differences in SRTs were statistically significant, a two-factor repeated measures analysis of variance (ANOVA) of the SRTs was conducted for each subject group separately with the factors listening mode (left ear alone, right ear alone, and binaural) and spatial scenario (N−90°, N0°, and N90°): The listening mode was found to have a significant effect on SRT for the CI SSD group, F(2, 14) = 31.719, p < .001, and the NH control group, F(1, 10) = 1,445.284, p < .001, but not for the bimodal CI user group, F(2, 14) = 3.263, p = .069. The factor spatial scenario was also found to be significant for the CI SSD, F(2, 14) = 13.662, p = .001, and the NH group, F(2, 20) = 809.430, p < .001, but not for the bimodal CI group, F(2, 14) = 0.7, p = .513. In all three listener groups, there was a significant interaction between the factors listening mode and spatial scenario, bimodal CI: F(4, 28) = 48.331, p < .001; CI SSD: F(4, 28) = 295.249, p < .001; NH: F(2, 20) = 794.127, p < .001. To untangle these interactions, for each listening mode, a one-way repeated measures ANOVA with the factor spatial scenario was conducted for each subject group. The factor spatial scenario was always significant, except in the CI+HA listening mode, Greenhouse-Geisser-corrected, F(1.145, 8.016) = 0.049, p = .861. All other spatial configurations were significantly different from each other when compared pairwise with Bonferroni correction (see Table 2). This was true for all patient groups and listening modes, except for the binaural listening mode in the CI SSD group, where N−90° was not significantly different from N0° (p = .089), and the binaural listening mode in the NH group, where N−90° was not significantly different from N90° (p = .259). Thus, the binaural NH listening mode showed the expected SRT improvement when noise and speech were spatially separated, independent of whether the noise was on the right or left.
Table 2.
Statistical Results From the One-Factor Repeated Measures ANOVA and Post Hoc Tests to Investigate the Effect of Spatial Scenario for Different Listening Modes per Subject Group.
| Patient group | Listening mode (L + R) | ANOVA: F | ANOVA: p | Post hoc: N−90° versus N0° | Post hoc: N0° versus N90° | Post hoc: N−90° versus N90° |
|---|---|---|---|---|---|---|
| Bimodal CI | HA + — | F(2/14) = 187.08 | <.001 | <0.001 | 0.003 | <0.001 |
| | — + CI | F(2/14) = 213.98 | <.001 | 0.001 | <0.001 | <0.001 |
| | HA + CI | F(1.145/8.016) = 0.049 | .861 | 1 | 1 | 1 |
| CI SSD | NH + — | F(2/14) = 419.672 | <.001 | <0.001 | <0.001 | <0.001 |
| | — + CI | F(1.143/8.003) = 225.894 | <.001 | 0.001 | <0.001 | <0.001 |
| | NH + CI | F(1.043/7.299) = 49.693 | <.001 | 0.089 | <0.001 | <0.001 |
| NH | NH + — | F(2/20) = 631.427 | <.001 | <0.001 | <0.001 | <0.001 |
| | NH + NH | F(2/20) = 942.322 | <.001 | <0.001 | <0.001 | 0.259 |
Note. Bold values indicate statistically significant differences (after Bonferroni correction). ANOVA = analysis of variance; CI = cochlear implant; HA = hearing aid; NH = normal hearing; SSD = single-sided deaf.
The effect of listening mode on SRT for a given spatial scenario was untangled by applying a one-way repeated measures ANOVA with the factor listening mode for each spatial scenario separately in each subject group (Table 3). The listening mode was found to significantly affect SRT for most spatial scenarios in all subject groups, indicating a clear difference, particularly between monaural and binaural SRTs. No significant effect of listening mode was found for the N0° spatial scenario in either the bimodal CI group or the NH group, or for the N−90° spatial scenario in the CI SSD group. In addition, post hoc testing in the bimodal CI group showed a statistically significant difference between HA-only and HA+CI (p = .031) and no difference between HA+CI and CI-only in the N−90° spatial condition (p = 1). In the N90° spatial condition, HA+CI was significantly different from CI-only (p = .013), but HA+CI did not differ from HA-only (p = 1). This indicates a better-ear listening strategy, where the SRT for binaural listening is the same as that of the better monaural listening mode, although the better monaural mode can be CI or HA, depending on the noise direction.
Table 3.
Statistical Results From the One-Factor Repeated Measures ANOVA and Post Hoc Tests to Investigate the Effect of Listening Mode for Different Spatial Scenarios per Subject Group.
| Patient group | Spatial scenario | ANOVA: F | ANOVA: p | Post hoc: HA/NH versus HA/NH+CI | Post hoc: CI versus HA/NH+CI | Post hoc: HA/NH versus CI |
|---|---|---|---|---|---|---|
| Bimodal CI | N−90° | F(2, 14) = 12.134 | .001 | 0.031 | 1 | 0.017 |
| | N0° | F(1.171, 8.194) = 2.127 | .183 | 0.467 | 0.027 | 1 |
| | N90° | F(1.101, 7.708) = 12.233 | .008 | 1 | 0.013 | 0.047 |
| CI SSD | N−90° | F(1.193, 8.352) = 2.759 | .131 | 0.165 | 0.132 | 1 |
| | N0° | F(1.109, 7.764) = 26.278 | .001 | 1 | 0.003 | 0.004 |
| | N90° | F(1.044, 7.306) = 93.297 | <.001 | 1 | <0.001 | <0.001 |
| NH | N−90° | F(1, 10) = 2,120.753 | <.001 | – | – | – |
| | N0° | F(1, 10) = 4.031 | .072 | – | – | – |
| | N90° | F(1, 10) = 316.892 | <.001 | – | – | – |
Note. Bold values indicate statistically significant differences (after Bonferroni correction). ANOVA = analysis of variance; HA = hearing aid; NH = normal hearing; CI = cochlear implant; SSD = single-sided deaf.
Spatial Release From Masking (SRM)
SRMs were extracted from the individual SRTs of each listener as the difference between the colocated SRT and the best SRT in a spatially separated scenario (see Equations 2–4). The motivation here was to determine whether there was a difference between monaural and binaural SRMs across patient groups. SRMs extracted from SRT50% for each patient group and each listening mode are shown in Figure 4. The SRM in the binaural NH (black) listening mode was, at 8.8 dB, significantly larger than the SRM of any other bilateral listening mode, t(binaural NH vs. NH+CI, df = 17) = 18.109, p = .001, d = 8.9093; t(binaural NH vs. bimodal HA+CI, df = 11.505) = 15.249, p = .001, d = 6.847, and larger than the SRM of the monaural NH listening mode, t(binaural NH vs. monaural NH, df = 10) = 17.708, p = .005, d = 5.34. The 3.4 dB SRM in the NH+CI (magenta) listening mode of the CI SSD group was significantly larger than the 2.3 dB SRM in the HA+CI (blue) listening mode of the bimodal group, t(NH+CI vs. HA+CI, df = 8.174) = 3.301, p = .033, d = 0.565. For the CI SSD and the bimodal CI patient groups, the binaural SRMs were not significantly different from the best monaural SRMs, t(HA+CI vs. CI-only, df = 7) = 2.823, p = .13; t(HA+CI vs. HA-only, df = 7) = 2.971, p = .105; t(NH+CI vs. CI-only, df = 7) = 0.548, p = 1; t(NH+CI vs. NH-only, df = 7) = 1.311, p = 1. Thus, the bimodal CI and the CI SSD patient groups did not receive more release from masking when using both ears compared with monaural listening, whereas the NH control group received a substantial release-from-masking benefit of 5.1 dB.
Figure 4.

Individually calculated SRM from SRT50% displayed as boxplots for the bimodal CI and CI SSD patient groups and for the NH listeners, for the left ear, the right ear, and both ears, according to the head symbols. CI = cochlear implant; HA = hearing aid; NH = normal hearing; SRM = spatial release from masking; SSD = single-sided deaf.
Binaural Summation
The SRM only captures the benefit from the spatial separation of speech and noise; SRM in the binaural listening modes does not distinguish between the use of redundant information, complementary information across the ears, or interaural cues (such as ITDs or ILDs). To assess the binaural redundancy and the use of complementary information across the ears, the binaural summation was calculated for the N0°, N−90°, and N90° spatial scenarios from SRT50% (see Figure 5). The comparison between the spatial scenarios allows an assessment of the usage of interaural cues. Neither patient group showed binaural summation on average (not significantly different from zero), independent of spatial scenario. In contrast, the NH control group showed a significant binaural summation of more than 5 dB for N−90° and N90°, both t(binaural summation N−90°/90° vs. 0 dB, df = 10) = 17.8, p < .001, d = 5.4.
Figure 5.

Individually extracted binaural summation from SRT50% for the bimodal CI and CI SSD patient groups and for the NH control group. In each group, the binaural summation was calculated for each noise direction. CI = cochlear implant; NH = normal hearing; SSD = single-sided deaf.
Another measure used to assess performance in a speech-in-noise test is the slope of the psychometric function of speech intelligibility at the SRT50%. Average slope and standard deviation for each patient group and each listening mode were extracted by fitting psychometric functions to the three data points SRT{20%,50%,80%} and are listed in Table 4. Averaging was done by calculating the slope for each subject individually and then averaging individual slopes across subjects for each group and listening mode.
Table 4.
Means and Standard Deviations of Slopes (in %/dB) of the Psychometric Functions for Each Patient Group and Listening Mode.
| Listening mode | Mean (%/dB) | Standard deviation (%/dB) |
|---|---|---|
| Bimodal | ||
| HA | 11.8 | ±4.8 |
| CI | 10.7 | ±2.6 |
| HA+CI | 13.0 | ±6.7 |
| SSD | ||
| NH | 16.1 | ±5.0 |
| CI | 10.4 | ±3.6 |
| NH+CI | 16.6 | ±3.6 |
| NH | ||
| NH | 14.3 | ±4.1 |
| NH+NH | 16.2 | ±4.1 |
Note. The slopes are calculated from the individually fitted psychometric functions, based on the SRT20%, SRT50%, and SRT80%. HA = hearing aid; NH = normal hearing; CI = cochlear implant; SSD = single-sided deaf.
The monaural slopes were shallower than the binaural slopes. Also, the slopes of the CI listening modes were much shallower (10.7%/dB and 10.4%/dB for the monaural CI in the bimodal and CI SSD patient groups, respectively) than the slopes extracted for NH ears (16.1%/dB for monaural NH in the CI SSD group, 14.3%/dB for NH monaural, and 16.2%/dB for NH binaural).
To summarize, in both patient groups, the SRM of each monaural listening mode did not differ from the SRM of the binaural listening mode. Furthermore, there was, on average, no binaural summation in either group. The results of the SRM and binaural summation measures, therefore, do not support the idea that binaural processing of the two monaural inputs improves speech-in-noise perception beyond what the best monaural ear provides. In other words, the results suggest that the CI SSD and bimodal CI patients used selective better-ear listening for spatial speech intelligibility in stationary noise. In contrast, the NH control group showed evidence of binaural processing in the form of strongly increased SRM when using both ears rather than one.
Discussion
This study systematically assessed spatial speech intelligibility in a controlled measurement setup and extracted SRMs for the CI SSD and bimodal CI patient groups and an NH control group. The main finding was a selective better-ear listening strategy employed by both patient groups: These subjects attended to the better monaural ear when listening with both ears. The better ear was not always the same ear, but rather switched depending on the noise direction. Furthermore, the individually calculated SRM in the binaural listening mode was found to be smaller for the bimodal CI than for the CI SSD patient group. The better-ear listening strategy was confirmed by the results of the SRM and binaural summation analyses: Monaural SRMs were not significantly different from binaural SRMs in either patient group, and the SRM in the CI SSD patient group was similar to the monaural SRM of NH listeners. Whereas a binaural summation of more than 5 dB was seen for the NH control group, the bimodal CI and CI SSD patient groups showed no benefit from listening with two ears compared with listening with the better monaural ear.
Traditionally, spatial unmasking is assessed by monaural measures, such as head shadow, and by binaural measures, such as binaural summation and binaural squelch (Litovsky, 2012). However, for subject groups with large interindividual variability, these measures are difficult to compare across studies. One example is binaural squelch, which is often defined as the benefit in speech intelligibility that is obtained when adding an ear with a poorer SNR. This definition implicitly assumes that both ears perform equally at S0°N0°, which may not be the case for bimodal CI users or people with severe asymmetric hearing loss. Here, we follow the argumentation of van Hoesel (2012) and Gartrell et al. (2014), who showed that this comparison problem can be partly resolved by measuring the SRM and comparing monaural and binaural SRMs across patient groups. In this way, it is possible to obtain a measure of the contribution of monaural and interaural cues in spatial speech-in-noise listening conditions. As stated in the introduction, listening with both ears may allow the combination of binaural spatial cues, such as ILDs and ITDs, with monaural spatial cues arising from the SNR improvement at the ears. Another factor to consider is the combination of electric and acoustic hearing: This combination may allow the use of complementary information across the ears and access to redundant information. In the following, the possible usage of these cues for speech intelligibility in noise is discussed.
Contribution of Binaural Spatial Cues
A possible explanation for exclusive better ear listening in CI SSD and bimodal CI patients could be the patients’ inability to utilize binaural spatial cues, such as ITDs and ILDs. CI users have access to ILD cues, but are unable to access ITD information at pulse rates higher than 300 pps (Laback, Egger, & Majdak, 2015). The transmitted ILDs are probably distorted by the different frequency-specific loudness growth functions in the ears (Francart & McDermott, 2013). In addition, Francart and McDermott (2013) also observed that perception of ITDs may only be possible if the interaural processing delay between stimulation of the auditory nerve fibers due to acoustic hearing (via HA/NH) and stimulation of the auditory nerve fibers due to electric stimulation (via CI) is compensated for. This study did not provide this compensation because we wanted to remain as close as possible to the everyday listening situation of bimodal CI listeners. The compensation is nontrivial: A major factor is the frequency dependency of the traveling wave along the basilar membrane; additionally, the sound processors of different CI and HA manufacturers have different processing delays (Francart, Wiebe, & Wesarg, 2018; Zirn et al., 2015).
Other factors compromising ITD perception in bimodal and CI SSD listeners are also unresolved, such as the physical location mismatch between frequency information transmitting channels in the acoustic ear and transmitting electrodes in the electrically stimulated ear (Francart et al., 2018; Landsberger et al., 2015). Using vocoders, that is, acoustic simulations of CI information transmission, Wess, Brungart, and Bernstein (2017) showed that this kind of stimulation site mismatch between the acoustically and electrically stimulated ears decreases binaural unmasking and that a compensation for the simulated spectral mismatch could restore binaural unmasking, at least partly. In bilateral CI users, this stimulation site mismatch can be compensated by performing an electrode pairing based on the binaural interaction component with single-electrode stimuli (Hu & Dietz, 2015). However, the interaural compensation for complex multielectrode stimuli like speech is still an unresolved issue (see Dietz, 2016, for a review).
One main outcome of this study was the observation that the better ear can switch in a speech-in-noise task depending on the spatial scene. This has also been found for other tasks: Crew, Galvin, Landsberger, and Fu (2015) showed a task-dependent better ear for speech perception in babble noise compared with melodic contour identification.
Contribution of Listener-Specific Factors to SRM
Spatially separating speech and noise will lead to one ear having a better SNR than the other. The improvement due to better ear listening can be assessed using the monaural SRM.
The monaural HA-only SRM of the bimodal CI patient group (about 2 dB) was probably lower than that of the CI SSD users (about 4 dB) because of the limited residual aided hearing in the acoustic ear of the bimodal CI users. This hypothesis is supported by results from vocoder studies (Williges, Dietz, Hohmann, & Jürgens, 2015), in which simulated severe hearing loss led to a lower SRM than monaural NH listening. For bilateral HA users, Glyde, Cameron, Dillon, Hickson, and Seeto (2013) also showed that a more severe hearing loss had a negative influence on binaural SRM.
Age is another factor that may contribute to the differences in SRM between the groups. The bimodal CI (median age: 62.5 years) and CI SSD (median age: 55 years) patient groups were nearly age-matched but were substantially older than the NH control group (median age: 26.1 years). The literature provides ambiguous evidence on this aspect. While Glyde et al. (2013) did not find a correlation between age and binaural SRM, Gallun, Diedesch, Kampel, and Jakien (2013) found that SRM declines with age. Both studies, however, used more complex masking signals, in the form of competing talkers, compared with the stationary noise applied in this study, and their noises were symmetrically placed in spatially separated conditions. It is therefore difficult to compare their results with the results of this study. Füllgrabe, Moore, and Stone (2015) found a 10% decline in speech intelligibility due to age in NH subjects but no influence of age on SRM. Given the equivocal indications in the literature and the close age matching of the bimodal CI and CI SSD groups, it seems reasonable to assume that the subjects’ age only weakly affected the comparisons with the NH control group, if at all.
Füllgrabe et al. (2015), however, showed that the decline in speech intelligibility with age in NH subjects correlates with cognitive measures. Although cognitive tests were not used to screen for or to control cognitive dysfunction beforehand, all participants were carefully selected to ensure that they did not have any other severe impairments, physically (e.g., restricted or uncorrected sight) or mentally (e.g., dementia). Furthermore, the relatively low complexity of the speech-in-noise task (five words only, fixed grammar, and no competing talker) imposes relatively little cognitive load upon listeners.
The absolute size of the SRM is difficult to compare across studies because the studies used different speech materials, maskers, and acoustic scenes. However, similar parameter changes should result in similar trends across studies. Grossmann et al. (2016) used competing talkers instead of a stationary noise masker to assess SRM. For CI SSD, they found a monaural SRM of 4.3 dB (using the NH ear) in line with the 4 dB found for monaural NH of the CI SSD subjects in this study. For binaural conditions, they showed an SRM of 7.2 dB, which is larger than the SRM found here (3.4 dB). For bimodal CI users, Gifford et al. (2014) reported a binaural SRM of 4 dB when using multitalker babble noise, whereas in this study, an SRM of 2 dB was found for bimodal CI users. One explanation may be that there is a release from masking when using the acoustic and CI ears in combination even for slightly fluctuating noises such as babble noise but no release from masking for stationary noise signals (Bernstein, Goupell, Schuchman, Rivera, & Brungart, 2016). This release from masking could be due to “listening in the dips,” or it could be based on more complex principles of auditory scene analysis for separating speech and noise (Josupeit & Hohmann, 2017).
The SRM also differs depending on the HRIR used. Most studies use loudspeaker setups (van Hoesel, 2012). This means that NH ears pick up the sound at the eardrums, including pinna and concha-spatial cues, whereas CIs and HAs usually use BTE microphones, which do not include these cues, in addition to the individual subject-specific variance of (non in-ear) HRIRs (Denk, Ernst, Ewert, & Kollmeier, 2018). This study used HRIRs recorded from BTE microphones mounted on KEMAR pinnas, which is similar to where state-of-the-art hearing devices pick up their input signals. The same HRIRs were also used for the NH reference group. When in-ear HRIRs were used, SRMs of NH subjects were about 3 dB higher (Williges et al., 2015) than reported here. However, note that the spectral resolution and interaural timing of the BTE microphone signals were still sufficient to substantially increase the SRM from 4 dB (monaural) to 9 dB (binaural) in the NH control group. This increase may be related to “effective” binaural processing (Beutelmann & Brand, 2006).
Contribution of Redundant/Complementary Information
Access to redundant and complementary information can be assessed by calculating the binaural summation for frontal sound incidence (S0°N0°), that is, by subtracting the SRT of the binaural listening mode from that of the best monaural listening mode. This benefit is usually small for the sentence test used here (1.1 dB for NH listeners, Williges et al., 2015; 2.1 dB for bilateral CI users, Schleich, Nopp, & D’Haese, 2004) and did not differ significantly from 0 dB for any subject group in this study. Across groups, individual values ranged from 2.1 dB to −5.8 dB.
In the bimodal CI and CI SSD patient groups, binaural summation measures not only the integration of redundant information across the ears but also the access to complementary information that arises from the combination of acoustic and electric information. Again, no benefit from the use of complementary or redundant information was found in either patient group. This was surprising and contradicts parts of the existing literature on bimodal CI and CI SSD: Multiple studies have shown a binaural summation effect (bimodal benefit) in bimodal CI patients (Armstrong et al., 1997; Berrettini, Passetti, Giannarelli, & Forli, 2010; Ching et al., 2004; Crew et al., 2015; Dorman, Gifford, Spahr, & McKarns, 2008; Dunn et al., 2005; Gifford et al., 2014; Kong & Braida, 2010; Luntz, Shpak, & Weiss, 2005; Mok, Grayden, Dowell, & Lawrence, 2006; Morera et al., 2005; Neuman et al., 2017; Potts, Skinner, Litovsky, Strube, & Kuk, 2009; Zhang, Spahr, Dorman, & Saoji, 2013). In these studies, the bimodal benefit was found to vary across subjects, with many patients performing similarly whether they used both ears or their best monaural ear, and some benefiting from using both ears. Ching et al. (2007) and Neuman et al. (2017) even reported individual cases of binaural interference in bimodal CI users. In addition, all these studies used some form of babble or competing-talker noise, or assessed speech perception in quiet, whereas our study used stationary noise. A few studies have investigated binaural summation in bimodal CI users using stationary noise (Hoppe et al., 2018; Illg, Bojanowicz, Lesinski-Schiedat, Lenarz, & Büchner, 2014; Morera et al., 2012; Yoon, Shin, Gho, & Fu, 2015). Morera et al. (2012, n = 15) did not observe a bimodal benefit. Yoon et al. (2015, n = 14) observed an average binaural summation of 12%, and Hoppe et al. (2018, n = 148) found a binaural summation of 0.8 to 1.8 dB, depending on the hearing loss in the acoustically hearing ear. Illg et al. (2014, n = 141) found a larger binaural summation (6% on average) when using a competing talker than when using a stationary noise masker.
Several other studies also reported no binaural summation for S0°N0° in CI SSD users (Arndt et al., 2011, n = 11; Dorbeau et al., 2018, n = 18; Grossmann et al., 2016, n = 12; Mertens, Kleine Punte, De Bodt, & Van de Heyning, 2015, n = 12; Zeitler et al., 2015, n = 9); it is possible that a larger sample size is needed (n = 8 in this study). In contrast, Arndt et al. (2017) observed a binaural summation of almost 1 dB in CI SSD users (n = 45). Dirks et al. (2019, n = 8) found a larger binaural benefit in fluctuating maskers than in stationary maskers. If we pool the SRT20%, SRT50%, and SRT80% data and thereby artificially increase the number of cases to n = 24, we still do not see a bimodal benefit, except for a slight binaural summation for S0°N−90°, t(23) = 2.597, p = .064, d = 0.53, which is significant only without Bonferroni correction. A power analysis and estimation of the effect size of the binaural summation can be found in the Supplementary Material.
According to Yoon et al. (2015), the bimodal benefit is smaller when one ear has a substantially better SRT than the other. This hypothesis is based on the negative correlation (r = −.65) observed between the performance difference between the two unilaterally stimulated ears and the binaural benefit. When applying the same analysis (Yoon et al., 2015) to our data, no significant correlation was found in the bimodal CI group. In the CI SSD group, a significant correlation was found when the noise was presented to the NH ear (r = −.5547, p = .0049) or the CI ear (r = −.6095, p = .0016), but not when the noise was presented from the front (r = .0573, p = .7904). One reason our outcomes differed from those of Yoon et al. (2015) may be that their study correlated percentage scores at a fixed SNR, so they looked at different points on the psychometric functions from one listener to the next, whereas in our study, the correlation analysis was performed at the SRT50% for all listeners.
Nonoptimal fitting of the HA may also have reduced the bimodal benefit measured here. This study used the CAMFIT fitting rule for the HA side; this rule focuses especially on audibility at the low frequencies, which are usually better preserved and therefore may deviate from the fitting that the patient is familiar with. Vroegop and Goedegebure (2018) reviewed the effect of different HA fitting paradigms on bimodal outcomes but could not draw a clear recommendation for the best fitting, despite gain changes of several dB in some studies. Therefore, although it is possible that the HA fitting had an influence on speech performance, further research is needed to investigate this question more thoroughly.
Another factor that might explain the missing binaural benefit is missing binaural fusion. Binaural fusion occurs when a single sound source is perceived as one auditory object by the bimodal CI or CI SSD listener, even though the acoustic and electric cues at the two ears differ widely. Reiss, Eggleston, Walker, and Oh (2016) showed that binaural fusion for vowels is highly subject-dependent, just as binaural summation was found to be highly subject-dependent in this study. Binaural fusion was not explicitly investigated with the listeners here; therefore, a direct link between fusion and binaural summation could not be assessed.
One difference between this study and other clinical studies addressing spatial speech perception in bimodal CI users is the application of virtual acoustics instead of loudspeaker setups: In clinical studies using free-field setups, even small head movements can influence the SNRs at the ears by several dB (Grange & Culling, 2016), which may lead to SRT benefits, particularly for spatially separated speech and noise. As the SRM is a difference between SRTs in colocated and separated speech and noise scenarios, the SRM may also be affected. Schleich et al. (2004), for instance, measured SRM for unilateral CI use in bilateral CI users in an anechoic chamber with the same sentence test and noise material as used in this study and found an average SRM of around 4 dB, whereas we found an average SRM of around 3.4 dB. Jacob et al. (2011) measured CI SSD subjects in a quiet room and found an SRM of 4.5 dB (our study: 3–4 dB) and an SRM of 9 dB for NH subjects (our study: 8.8 dB). Note that Jacob et al. varied the speech angle and kept the noise angle fixed at 0° azimuth. Employing virtual acoustics in NH listeners, Beutelmann and Brand (2006) reported an SRM of around 12 dB (for 80° spatial separation of speech and noise), and Kollmeier, Brand, and Meyer (2008) found an SRM of 10 dB; both studies used head-related transfer functions with the microphone at the “eardrum” of KEMAR, not BTE microphones mounted on KEMAR pinnas.
Despite the methodological differences, our SRT and SRM results are fairly consistent with previous studies employing virtual acoustics in NH listeners and with previous clinical studies using loudspeaker setups. However, the differences between virtual acoustics and free-field setups cannot explain why we did not observe binaural summation in colocated speech and noise at S0°N0°, because regardless of setup, the SNR at each ear cannot change at S0°N0° if the participant performs small head movements.
Conclusions
This study compared speech-in-noise performance, SRM, and binaural summation in two patient groups: bimodal CI listeners and CI SSD listeners.
Bimodal CI and CI SSD listeners benefit from using both ears simultaneously, by selectively attending to the better ear, especially in spatially separated speech-in-noise conditions.
The monaural and binaural SRM was lower in the bimodal CI patient group (2–3 dB) than in the CI SSD patient group (3–4 dB), possibly due to the lower audibility in the acoustically stimulated ear of the bimodal CI users.
No binaural summation effect was observed in either group. Possible reasons include the relatively small number of participants, the use of stationary noise instead of more complex noise backgrounds, and the use of relatively simple acoustic scenes.
Supplemental Material
Supplemental Material for Spatial Speech-in-Noise Performance in Bimodal and Single-Sided Deaf Cochlear Implant Users by Ben Williges, Thomas Wesarg, Lorenz Jung, Leontien I. Geven, Andreas Radeloff and Tim Jürgens in Trends in Hearing
Acknowledgments
The authors cordially thank the study participants who listened to the speech material for several hours in total. The authors would also like to thank the audiologist teams in Oldenburg and Freiburg and the Förderverein Taube Kinder lernen Hören e.V. Freiburg for support and Jennifer Truempler for proofreading the manuscript.
Authors’ Note
Portions of this study were presented at the 8th Speech in Noise Workshop 2017, Oldenburg, and the 40th Annual MidWinter Meeting of the Association for Research in Otolaryngology 2017, Baltimore, MD.
Declaration of Conflicting Interests
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: A. R. has received compensation for travel expenses from Cochlear and MED-EL. T. W. reports grants from Advanced Bionics, grants from MED-EL, and grants from Phonak Communications, outside the submitted work, and T. W. has also received compensation for travel expenses from Advanced Bionics, Cochlear, MED-EL, Oticon Medical, and Phonak Communications.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by Deutsche Forschungsgemeinschaft (JU2858/2-1).
Data Accessibility
Data will be made available upon reasonable request.
References
- Akeroyd M. A., Arlinger S., Bentler R. A., Boothroyd A., Dillier N., Dreschler W. A., Kollmeier B. (2015) International Collegium of Rehabilitative Audiology (ICRA) recommendations for the construction of multilingual speech tests: ICRA Working Group on Multilingual Speech Tests. International Journal of Audiology 54(Suppl 2): 17–22. doi:10.3109/14992027.2015.1030513.
- Armstrong M., Pegg P., James C., Blamey P. (1997) Speech perception in noise with implant and hearing aid. The American Journal of Otology 18(6 Suppl): S140–S141.
- Arndt S., Aschendorff A., Laszig R., Beck R., Schild C., Kroeger S., Wesarg T. (2011) Comparison of pseudobinaural hearing to real binaural hearing rehabilitation after cochlear implantation in patients with unilateral deafness and tinnitus. Otology & Neurotology 32(1): 39–47. doi:10.1097/MAO.0b013e3181fcf271.
- Arndt S., Laszig R., Aschendorff A., Hassepass F., Beck R., Wesarg T. (2017) Cochlear implant treatment of patients with single-sided deafness or asymmetric hearing loss. HNO 65(Suppl 2): 98–108. doi:10.1007/s00106-016-0297-5.
- Bernstein J. G., Goupell M. J., Schuchman G. I., Rivera A. L., Brungart D. S. (2016) Having two ears facilitates the perceptual separation of concurrent talkers for bilateral and single-sided deaf cochlear implantees. Ear and Hearing 37(3): 289–302. doi:10.1097/AUD.0000000000000284.
- Berrettini S., Passetti S., Giannarelli M., Forli F. (2010) Benefit from bimodal hearing in a group of prelingually deafened adult cochlear implant users. American Journal of Otolaryngology 31(5): 332–338. doi:10.1016/j.amjoto.2009.04.002.
- Beutelmann R., Brand T. (2006) Prediction of speech intelligibility in spatial noise and reverberation for normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America 120(1): 331–342. doi:10.1121/1.2202888.
- Blamey P. J., Maat B., Başkent D., Mawman D., Burke E., Dillier N., Lazard D. S. (2015) A retrospective multicenter study comparing speech perception outcomes for bilateral implantation and bimodal rehabilitation. Ear and Hearing 36(4): 408–416. doi:10.1097/AUD.0000000000000150.
- Brand T., Kollmeier B. (2002) Efficient adaptive procedures for threshold and concurrent slope estimates for psychophysics and speech intelligibility tests. The Journal of the Acoustical Society of America 111(6): 2801–2810. doi:10.1121/1.1479152.
- Byrne D., Dillon H., Ching T., Katsch R., Keidser G. (2001) NAL-NL1 procedure for fitting nonlinear hearing aids: Characteristics and comparisons with other procedures. Journal of the American Academy of Audiology 12(1): 37–51.
- Cherry E. C. (1953) Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America 25(5): 975–979. doi:10.1121/1.1907229.
- Ching T. Y. C., Incerti P., Hill M. (2004) Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear and Hearing 25(1): 9–21. doi:10.1097/01.AUD.0000111261.84611.C8.
- Ching T. Y. C., van Wanrooy E., Dillon H. (2007) Binaural-bimodal fitting or bilateral implantation for managing severe to profound deafness: A review. Trends in Amplification 11(3): 161–192. doi:10.1177/1084713807304357.
- Crew J. D., Galvin J. J. III, Landsberger D. M., Fu Q.-J. (2015) Contributions of electric and acoustic hearing to bimodal speech and music perception. PLoS One 10(3): e0120279. doi:10.1371/journal.pone.0120279.
- Denk F., Ernst S. M. A., Ewert S. D., Kollmeier B. (2018) Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends in Hearing 22: 2331216518779313. doi:10.1177/2331216518779313.
- Dietz M. (2016) Models of the electrically stimulated binaural system: A review. Network: Computation in Neural Systems 27(2–3): 186–211. doi:10.1080/0954898X.2016.1219411.
- Dillon H. (2001) Hearing aids (p. 99, Table 4.6). Sydney, Australia: Boomerang Press.
- Dirks C., Nelson P. B., Sladen D. P., Oxenham A. J. (2019) Mechanisms of localization and speech perception with colocated and spatially separated noise and speech maskers under single-sided deafness with a cochlear implant. Ear and Hearing. Advance online publication. doi:10.1097/AUD.0000000000000708.
- Dorbeau C., Galvin J., Fu Q.-J., Legris E., Marx M., Bakhos D. (2018) Binaural perception in single-sided deaf cochlear implant users with unrestricted or restricted acoustic hearing in the non-implanted ear. Audiology and Neurotology 23: 187–197. doi:10.1159/000490879.
- Dorman M. F., Gifford R. H., Spahr A. J., McKarns S. A. (2008) The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiology and Neurotology 13(2): 105–112. doi:10.1159/000111782.
- Dunn C. C., Tyler R. S., Witt S. A. (2005) Benefit of wearing a hearing aid on the unimplanted ear in adult users of a cochlear implant. Journal of Speech, Language and Hearing Research 48(3): 668–680. doi:10.1044/1092-4388(2005/046).
- Ewert S. D. (2013) AFC—A modular framework for running psychoacoustic experiments and computational perception models. In Proceedings of the International Conference on Acoustics (AIA-DAGA), Merano (pp. 1326–1329).
- Faul F., Erdfelder E., Buchner A., Lang A.-G. (2009) Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41(4): 1149–1160. doi:10.3758/BRM.41.4.1149.
- Francart T., McDermott H. J. (2013) Psychophysics, fitting, and signal processing for combined hearing aid and cochlear implant stimulation. Ear and Hearing 34: 685–700. doi:10.1097/AUD.0b013e31829d14cb.
- Francart T., Wiebe K., Wesarg T. (2018) Interaural time difference perception with a cochlear implant and a normal ear. Journal of the Association for Research in Otolaryngology 19(6): 703–715. doi:10.1007/s10162-018-00697-w.
- Füllgrabe C., Moore B. C. J., Stone M. A. (2015) Age-group differences in speech identification despite matched audiometrically normal hearing: Contributions from auditory temporal processing and cognition. Frontiers in Aging Neuroscience 6: 347. doi:10.3389/fnagi.2014.00347.
- Gallun F. J., Diedesch A. C., Kampel S. D., Jakien K. M. (2013) Independent impacts of age and hearing loss on spatial release in a complex auditory environment. Frontiers in Neuroscience 7: 252. doi:10.3389/fnins.2013.00252.
- Gartrell B. C., Jones H. G., Kan A., Buhr-Lawler M., Gubbels S. P., Litovsky R. Y. (2014) Investigating long-term effects of cochlear implantation in single-sided deafness: A best practice model for longitudinal assessment of spatial hearing abilities and tinnitus handicap. Otology & Neurotology 35(9): 1525–1532. doi:10.1097/MAO.0000000000000437.
- Gifford R. H., Dorman M. F., Sheffield S. W., Teece K., Olund A. P. (2014) Availability of binaural cues for bilateral implant recipients and bimodal listeners with and without preserved hearing in the implanted ear. Audiology and Neurotology 19(1): 57–71. doi:10.1159/000355700.
- Glyde H., Cameron S., Dillon H., Hickson L., Seeto M. (2013) The effects of hearing impairment and aging on spatial processing. Ear and Hearing 34(1): 15–28. doi:10.1097/AUD.0b013e3182617f94.
- Grange J. A., Culling J. F. (2016) Head orientation benefit to speech intelligibility in noise for cochlear implant users and in realistic listening conditions. The Journal of the Acoustical Society of America 140(6): 4061–4072. doi:10.1121/1.4968515.
- Grimm G., Herzke T., Berg D., Hohmann V. (2006) The master hearing aid: A PC-based platform for algorithm development and evaluation. Acta Acustica United with Acustica 92(4): 618–628.
- Grossmann W., Brill S., Moeltner A., Mlynski R., Hagen R., Radeloff A. (2016) Cochlear implantation improves spatial release from masking and restores localization abilities in single-sided deaf patients. Otology & Neurotology 37(6): 658–664. doi:10.1097/MAO.0000000000001043.
- Haumann S., Hohmann V., Meis M., Herzke T., Lenarz T., Büchner A. (2012) Indication criteria for cochlear implants and hearing aids: Impact of audiological and non-audiological findings. Audiology Research 2(1): e12. doi:10.4081/audiores.2012.e12.
- Hoppe U., Hocke T., Digeser F. (2018) Bimodal benefit for cochlear implant listeners with different grades of hearing loss in the opposite ear. Acta Oto-Laryngologica 138(8): 713–721. doi:10.1080/00016489.2018.1444281.
- Hu H., Dietz M. (2015) Comparison of interaural electrode pairing methods for bilateral cochlear implants. Trends in Hearing 19: 2331216515617143. doi:10.1177/2331216515617143.
- Illg A., Bojanowicz M., Lesinski-Schiedat A., Lenarz T., Büchner A. (2014) Evaluation of the bimodal benefit in a large cohort of cochlear implant subjects using a contralateral hearing aid. Otology & Neurotology 35(9): e240–e244. doi:10.1097/MAO.0000000000000529.
- Jacob R., Stelzig Y., Nopp P., Schleich P. (2011) Audiological results with cochlear implants for single-sided deafness. HNO 59: 453–460. doi:10.1007/s00106-011-2321-0.
- Josupeit A., Hohmann V. (2017) Modeling speech localization, talker identification, and word recognition in a multi-talker setting. The Journal of the Acoustical Society of America 142(1): 35–54. doi:10.1121/1.4990375.
- Jürgens T., Brand T. (2009) Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model. The Journal of the Acoustical Society of America 126: 2635–2648. doi:10.1121/1.3224721.
- Kayser H., Ewert S. D., Anemüller J., Rohdenburg T., Hohmann V., Kollmeier B. (2009) Database of multichannel in-ear and behind-the-ear head-related and binaural room impulse responses. EURASIP Journal on Advances in Signal Processing 2009(1): 298605. doi:10.1155/2009/298605.
- Kollmeier B., Brand T., Meyer B. (2008) Perception of speech and sound. In Benesty J., Sondhi M. M., Huang Y. (Eds.), Springer handbook of speech processing (pp. 61–82). Berlin, Germany: Springer.
- Kollmeier B., Warzybok A., Hochmuth S., Zokoll M. A., Uslar V., Brand T., Wagener K. C. (2015) The multilingual matrix test: Principles, applications, and comparison across languages: A review. International Journal of Audiology 54: 3–16. doi:10.3109/14992027.2015.1020971.
- Kong Y.-Y., Braida L. D. (2010) Cross-frequency integration for consonant and vowel identification in bimodal hearing. Journal of Speech, Language, and Hearing Research 54(3): 959–980. doi:10.1044/1092-4388(2010/10-0197).
- Laback B., Egger K., Majdak P. (2015) Perception and coding of interaural time differences with bilateral cochlear implants. Hearing Research 322: 138–150. doi:10.1016/j.heares.2014.10.004.
- Landsberger D. M., Svrakic M., Roland J. T. Jr., Svirsky M. (2015) The relationship between insertion angles, default frequency allocations, and spiral ganglion place pitch in cochlear implants. Ear and Hearing 36: e207–e213. doi:10.1097/AUD.0000000000000163.
- Litovsky R. Y. (2012) Spatial release from masking. Acoustics Today 8(2): 18–25.
- Luntz M., Shpak T., Weiss H. (2005) Binaural-bimodal hearing: Concomitant use of a unilateral cochlear implant and a contralateral hearing aid. Acta Oto-Laryngologica 125: 863–869. doi:10.1080/00016480510035395.
- Mertens G., Kleine Punte A., De Bodt M., Van de Heyning P. (2015) Binaural auditory outcomes in patients with postlingual profound unilateral hearing loss: 3 years after cochlear implantation. Audiology and Neurotology 20(1): 67–72. doi:10.1159/000380751.
- Mok M., Grayden D., Dowell R. C., Lawrence D. (2006) Speech perception for adults who use hearing aids in conjunction with cochlear implants in opposite ears. Journal of Speech, Language and Hearing Research 49(2): 338–351. doi:10.1044/1092-4388(2006/027).
- Moore B. C. J., Alcántara J. I., Stone M. A., Glasberg B. R. (1999) Use of a loudness model for hearing aid fitting: II. Hearing aids with multi-channel compression. British Journal of Audiology 33(3): 157–170. doi:10.3109/03005369909090095.
- Morera C., Cavalle L., Manrique M., Huarte A., Angel R., Osorio A., Morera-Ballester C. (2012) Contralateral hearing aid use in cochlear implanted patients: Multicenter study of bimodal benefit. Acta Oto-Laryngologica 132(10): 1084–1094. doi:10.3109/00016489.2012.677546.
- Morera C., Manrique M., Ramos A., Garcia-Ibanez L., Cavalle L., Huarte A., Estrada E. (2005) Advantages of binaural hearing provided through bimodal stimulation via a cochlear implant and a conventional hearing aid: A 6-month comparative study. Acta Oto-Laryngologica 125: 596–606. doi:10.1080/00016480510027493.
- Neuman A. C., Waltzman S. B., Shapiro W. H., Neukam J. D., Zeman A. M., Svirsky M. A. (2017) Self-reported usage, functional benefit, and audiologic characteristics of cochlear implant patients who use a contralateral hearing aid. Trends in Hearing 21: 2331216517699530. doi:10.1177/2331216517699530.
- Potts L. G., Skinner M. W., Litovsky R. A., Strube M. J., Kuk F. (2009) Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). Journal of the American Academy of Audiology 20: 353–373. doi:10.3766/jaaa.20.6.4.
- Reiss L. A. J., Eggleston J. L., Walker E. P., Oh Y. (2016) Two ears are not always better than one: Mandatory vowel fusion across spectrally mismatched ears in hearing-impaired listeners. Journal of the Association for Research in Otolaryngology 17: 341–356. doi:10.1007/s10162-016-0570-z.
- Schleich P., Nopp P., D’Haese P. (2004) Head shadow, squelch, and summation effects in bilateral users of the MED-EL COMBI 40/40+ cochlear implant. Ear and Hearing 25(3): 197–204. doi:10.1097/01.AUD.0000130792.43315.97.
- van de Heyning P., Vermeire K., Diebl M., Nopp P., Anderson I., De Ridder D. (2008) Incapacitating unilateral tinnitus in single-sided deafness treated by cochlear implantation. Annals of Otology, Rhinology & Laryngology 117(9): 645–652. doi:10.1177/000348940811700903.
- van Hoesel R. J. M. (2012) Contrasting benefits from contralateral implants and hearing aids in cochlear implant users. Hearing Research 288(1–2): 100–113. doi:10.1016/j.heares.2011.11.014.
- van Zon A., Peters J. P. M., Stegeman I., Smit A. L., Grolman W. (2015) Cochlear implantation for patients with single-sided deafness or asymmetrical hearing loss: A systematic review of the evidence. Otology & Neurotology 36(2): 209–219. doi:10.1097/MAO.0000000000000681.
- Veugen L. C. E., Chalupper J., Snik A. F. M., van Opstal A. J., Mens L. H. M. (2016) Frequency-dependent loudness balancing in bimodal cochlear implant users. Acta Oto-Laryngologica 136(8): 775–781. doi:10.3109/00016489.2016.1155233.
- Vroegop J. L., Goedegebure A. (2018) How to optimally fit a hearing aid for bimodal cochlear implant users: A systematic review. Ear and Hearing 39(6): 1039–1045. doi:10.1097/AUD.0000000000000577.
- Wagener K. C., Kühnel V., Kollmeier B. (1999) Entwicklung und Evaluation eines Satztests für die deutsche Sprache I: Design des Oldenburger Satztests [Development and evaluation of a German sentence test I: Design of the Oldenburg sentence test]. Zeitschrift für Audiologie 38(1): 4–15.
- Warzybok A., Rennies J., Brand T., Doclo S., Kollmeier B. (2013) Effects of spatial and temporal integration of a single early reflection on speech intelligibility. The Journal of the Acoustical Society of America 133(1): 269–282. doi:10.1121/1.4768880.
- Wess J. M., Brungart D. S., Bernstein J. G. (2017) The effect of interaural mismatches on contralateral unmasking with single-sided vocoders. Ear and Hearing 38(3): 374–386. doi:10.1097/AUD.0000000000000374.
- Williges B., Dietz M., Hohmann V., Jürgens T. (2015) Spatial release from masking in simulated cochlear implant users with and without access to low-frequency acoustic hearing. Trends in Hearing 19: 2331216515616940. doi:10.1177/2331216515616940.
- Yoon Y.-S., Shin Y.-R., Gho J.-S., Fu Q.-J. (2015) Bimodal benefit depends on the performance difference between a cochlear implant and a hearing aid. Cochlear Implants International 16(3): 159–167. doi:10.1179/1754762814Y.0000000101.
- Zeitler D. M., Dorman M. F., Natale S. J., Loiselle L., Yost W. A., Gifford R. H. (2015) Sound source localization and speech understanding in complex listening environments by single-sided deaf listeners after cochlear implantation. Otology & Neurotology 36: 1467–1471. doi:10.1097/MAO.0000000000000841.
- Zhang T., Dorman M. F., Gifford R., Moore B. C. (2014) Cochlear dead regions constrain the benefit of combining acoustic stimulation with electric stimulation. Ear and Hearing 35(4): 410–417. doi:10.1097/AUD.0000000000000032.
- Zhang T., Spahr A. J., Dorman M. F., Saoji A. (2013) Relationship between auditory function of nonimplanted ears and bimodal benefit. Ear and Hearing 34(2): 133–141. doi:10.1097/AUD.0b013e31826709af.
- Zirn S., Arndt S., Aschendorff A., Wesarg T. (2015) Interaural stimulation timing in single sided deaf cochlear implant users. Hearing Research 328: 148–156. doi:10.1016/j.heares.2015.08.010.