Abstract
Purpose
For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads.
Method
Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation.
Results
In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs.
Conclusion
Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients.
Spatial resolution is generally poor for singly implanted cochlear implant (CI) users (e.g., Dorman et al., 2016; Grantham et al., 2007, 2008; Seeber et al., 2004). Bilateral implantation typically improves sound source localization (e.g., Grantham et al., 2007; Nopp et al., 2004; van Hoesel & Tyler, 2003). For example, Dorman et al. (2016) reported a mean root-mean-square (RMS) error of 29° for bilaterally implanted CI listeners, whereas listeners with severe hearing loss in both ears and only one implant were generally at chance in sound source localization in the frontal hemifield. By comparison, the mean RMS localization error for normal-hearing (NH) listeners was 6° (see also Yost et al., 2013). While the sound source localization acuity of CI listeners may be reduced compared to that of NH listeners (e.g., Dorman et al., 2016), the ability to localize sound sources even this well appears to confer benefit to CI listeners. It is therefore increasingly common to implant not just one but both ears. Most bilaterally implanted listeners report improvements in quality of life that are often (but not always) mirrored in observed improvements in the ability to localize sound sources in the laboratory (e.g., Bichey & Miyamoto, 2008).
Loss of Cochlear Amplification and Compression
In the cochlea of a listener with profound hearing impairment who would be appropriate for cochlear implantation, the hair cells are absent or nearly so; therefore, both efferent and afferent connections between the peripheral and central auditory system are lost (Moore, 2003). Cochlear implantation bypasses the inner hair cells to directly stimulate the auditory nerve using electrodes in the scala tympani. Therefore, the amplification and compression that come from active outer hair cell processes are not recovered with cochlear implantation. In addition, bypassing the electromechanical processes in hair cells removes the refractoriness normally imposed at the hair cell–auditory nerve synapse, so that individual auditory nerve neurons can be driven at greater rates than with acoustic stimulation. The result of this loss of compression on the basilar membrane and loss of a refractory period is that the range of electrical stimulation between that which can be detected and that which causes pain is very small, typically between 3 and 20 dB (e.g., Shannon, 1983). CIs therefore require the use of automatic gain control (AGC) in order to compress the roughly 100-dB dynamic range used by NH listeners into a range that can be mapped to the 10- to 20-dB range required for electrical stimulation (e.g., Zeng et al., 2002). In the processors used with CIs manufactured by Advanced Bionics (AB), AGC involves two related systems of compression. The first system is responsible for most of the compression and has relatively slow onset and offset times coupled with a relatively low compression threshold; these settings have been shown to benefit speech identification because speech envelopes remain largely intact with slower compression (see Boyle et al., 2009, for further details). The compression ratio for this first "slow" system is relatively high, at 12:1 (e.g., compared to MED-EL, which typically uses a 3:1 compression ratio; note that Cochlear Corporation uses nearly infinite compression). Sudden, impulsive sounds would not be compressed by the slow compressor and could cause considerable discomfort to listeners, so a second, fast limiter, with a compression threshold 8 dB higher than that of the slow compressor, is included to limit impulsive sounds to within the comfortable range of electrical stimulation. This "dual-loop" system chooses the lesser of the two systems' instantaneous gains and scales electrical stimulation accordingly. Details of the compression settings for this experiment are given in the Method section of this article.
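To make the dual-loop gain rule concrete, the following Python sketch illustrates the selection of the lesser of two gains. It is an illustration under simplifying assumptions, not AB's implementation: the one-pole envelope followers, the 60-dB broadband knee point, and all function names are our own choices, while the attack/release times and the 12:1 slow-loop ratio follow the values reported in the Method section.

```python
import numpy as np

def envelope_db(x, fs, attack_ms, release_ms):
    """One-pole attack/release envelope follower, returned in dB.
    Levels are relative to an arbitrary digital reference; a calibration
    offset would be needed to express them in dB SPL."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = att if v > level else rel   # fast rise, slower decay
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return 20.0 * np.log10(np.maximum(env, 1e-12))

def dual_loop_gain_db(x, fs, knee_db=60.0, ratio=12.0):
    """Instantaneous gain (dB): the lesser of a slow 12:1 compressor
    and a fast limiter whose knee point sits 8 dB higher."""
    slow_env = envelope_db(x, fs, attack_ms=139.0, release_ms=383.0)
    fast_env = envelope_db(x, fs, attack_ms=0.33, release_ms=46.2)
    # Slow loop: output = knee + (input - knee) / ratio above the knee.
    slow_gain = np.minimum(0.0, -(slow_env - knee_db) * (1.0 - 1.0 / ratio))
    # Fast loop: hard limiting (infinite ratio) above knee + 8 dB.
    fast_gain = np.minimum(0.0, -(fast_env - (knee_db + 8.0)))
    return np.minimum(slow_gain, fast_gain)  # the lesser gain wins
```

The compressed waveform would then be `10 ** (gain_db / 20) * x`, followed by the (also compressive) mapping to electrical stimulation levels.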
AGCs Not Linked
The AGCs in the left and right processors are generally not synchronized or linked, and so both slow and fast compression can distort interaural level differences (ILDs) considerably (Archer-Boyd & Carlyon, 2019; Dorman et al., 2014). By "distortion" of ILDs, we mean that the pattern of ILDs across frequency and across time, especially as sound sources and listeners move, is disrupted so as to be different in magnitude and form from that which would be measured between the entrances of the two ear canals over time. This can occur under two basic scenarios. First, a high-frequency sound stimulus is head shadowed, so that the level on one side is below the compression threshold (no compression occurs) while the level on the side ipsilateral to the sound source exceeds the threshold (compression does occur). In this case, the ILD is reduced. In some cases, the low-pass characteristics of head shadowing can produce a situation where the ILD in low-frequency components of the sound is actually reversed in sign (see Dorman et al., 2014). This ILD reversal at low frequencies becomes more likely as a side effect of the high-frequency pre-emphasis that is applied before the AGCs apply broadband compression (Archer-Boyd & Carlyon, 2019). Second, the sound stimulus exceeds the compression threshold on both sides, and the ILD is again reduced at the output of the AGCs. If the level at both ears is below the compression threshold, then the ILD will be faithfully reproduced at the output of the AGCs. However, regardless of the effect of the AGCs, the output of the AGCs is then compressively mapped to the actual levels of electrical stimulation, which will generally further reduce the magnitude of ILDs. See Archer-Boyd and Carlyon (2019) for an excellent discussion of this dual-loop system and its effects on the ILDs available to the CI listener.
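The static consequences of these two scenarios can be illustrated with a back-of-the-envelope calculation. The sketch below assumes a steady-state broadband compressor with a 60-dB knee point and a 12:1 ratio in each ear (values consistent with the Method section); real AGCs are dynamic, so actual distortions also depend on attack and release behavior.

```python
def static_output_level(level_db, knee_db=60.0, ratio=12.0):
    """Steady-state output level (dB) of a simple broadband compressor."""
    if level_db <= knee_db:
        return level_db
    return knee_db + (level_db - knee_db) / ratio

# Scenario 1: near ear above the knee, head-shadowed far ear below it.
near, far = 68.0, 58.0   # input ILD = 10 dB
print(static_output_level(near) - static_output_level(far))  # ~2.7 dB

# Scenario 2: both ears above the knee.
near, far = 75.0, 65.0   # input ILD = 10 dB
print(static_output_level(near) - static_output_level(far))  # ~0.8 dB
```

In both scenarios, the 10-dB input ILD is reduced to a small fraction of its original value, even before the compressive mapping to electrical stimulation levels shrinks it further.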
For NH listeners, the interaural time difference (ITD) cue is generally used for low-frequency stimulus components, and ILDs are used primarily for high-frequency components. However, ITDs are essentially unavailable to CI listeners (e.g., see Gifford et al., 2018) because (a) the implant signal processors largely discard the temporal fine structure of the signal and instead use a constant rate of pulse stimulation, so that the envelope but not the fine structure is transmitted, and thus ITD information is rather poorly represented; (b) the two implant processors are usually independent and not time synchronized with each other; (c) electrodes are often spectrally mismatched between ears; and (d) even under experimental conditions where ITDs are directly manipulated, listeners still usually demonstrate very high (roughly 300–1500 μs) thresholds for ITDs, especially at the relatively high (roughly 800–900 pulses per second) stimulation rates that are prevalent in clinical CI processors. Therefore, the fine-structure ITD cue provided to CI users is largely useless for sound source localization (Gifford et al., 2018; Grantham et al., 2007, 2008; Kerber & Seeber, 2013; Laback et al., 2004). For these reasons, bilateral CI users generally rely on ILDs caused by head shadowing of high frequencies for sound source localization (Dorman et al., 2015; Grantham et al., 2007; van Hoesel & Tyler, 2003). The perceived lateral position of sound sources has been shown to be affected by compression from AGC (Schwartz & Shinn-Cunningham, 2013; Wiggins & Seeber, 2011). While gain mismatch between the left and right ears has been shown to have little effect on hearing aid listeners, as they still have access to ITDs (Keidser et al., 2011), the effect on horizontal sound source localization for bilateral CI patients can be expected to be considerable (Potts et al., 2019). Furthermore, ILD thresholds have also been shown to increase as a result of even relatively modest AGC compression (Grantham et al., 2008). Dorman et al. (2014) investigated the magnitude of ILDs in CIs with a relatively low compression ratio (MED-EL) and found that the distortions and reductions of ILDs that result from the AGCs and from mapping processor output to electrical stimulation were correlated with bilateral CI listeners' inferior (though perhaps adequate) sound source localization performance. See the introduction of Potts et al. (2019) for an excellent, detailed review of the issues involved with AGC compression.
Head Movement
While most testing of CI patients in laboratory settings has involved stationary conditions, both listeners and sound sources often move in real-world environments, and these “dynamic” listening scenarios have recently gained more attention (e.g., Brimijoin & Akeroyd, 2012, 2014; Brimijoin et al., 2013; Mueller et al., 2014; Pastore et al., 2018, 2020). Head movements have been shown to (a) allow NH listeners to improve their localization acuity under conditions where the sound stimulus does not provide adequate information to determine its front–back location (Perrett & Noble, 1997; Wallach, 1939, 1940; Wightman & Kistler, 1999), (b) improve listeners' ability to hear out one source from several by improving their signal-to-noise ratio using head orientation (Deshpande & Braasch, 2017; Grange & Culling, 2016), and (c) reduce horizontal localization errors in the frontal hemifield (Honda et al., 2013; Perrett & Noble, 1997; Thurlow & Runge, 1967; Toshima et al., 2008).
Similar results have been found for some bilateral CI patients. For example, Pastore et al. (2018) found that, for listeners using bilaterally implanted MED-EL CIs, head movements could reduce the rate of front–back reversals from 41.9% to 6.7% (see also Mueller et al., 2014). The results of Pastore et al. (2018) suggested that the lack of synchronization or linking of the two AGCs did not impair ILD cues enough to render them useless under dynamic listening conditions that included head movement. However, the compression ratio utilized in the AGC for the left and right processors is relatively “moderate” at 3:1 for the MED-EL processors used by the CI listeners tested in Pastore et al. (2018). It is also possible that head movements improved listeners' ability to determine front–back sound source location at the expense of horizontal acuity in the frontal hemifield—the effect of head movements on horizontal localization acuity was not tested in Pastore et al. (2018).
Effects of High Compression Ratios
In the case of CI AGCs with higher compression ratios, such as the 12:1 ratio used by AB processors and those used by Cochlear Corporation, asymmetrical compression from independent AGCs may lead to greater distortions of the ILD cue. As listeners and/or sound sources move, or as sounds change in level, these distortions could be expected to change in ways that may be difficult to predict, rendering the ILD cue difficult or impossible for listeners to use effectively. Archer-Boyd and Carlyon (2019) investigated this, comparing ILDs at the inputs of bilateral AB CI processors with independent AGCs to the ILDs at the outputs under stationary and head movement conditions. Results showed that the high compression ratio of the AGCs reduced the magnitude of ILDs considerably, and the ILD values depended more on the velocity and angular range of head movements than on the angular position of the sound source itself. The slow time constant of the AGCs also introduced further distortions of the ILD. Furthermore, the sign and direction of ILDs in single channels were often different for high-frequency versus low-frequency channels for the same broadband inputs, and this distortion was also exacerbated by head movements.
The results of the simulations in Archer-Boyd and Carlyon (2019) suggest that listeners using CI processors with such a high compression ratio would have great difficulties localizing sound sources, especially when head movements are involved. However, without behavioral data, and given results for unlinked bilateral implants with head movements at relatively low compression ratios (e.g., Mueller et al., 2014; Pastore et al., 2018), we cannot be sure how sound source localization performance would be affected with the higher compression ratios.
Wiggins and Seeber (2013) showed promising results suggesting that speech intelligibility in spatially separated noise could be improved by linking AGCs, which would imply improved spatial acuity. Encouragingly, Chen (2017) reported, for AB bilateral CI patients, an average RMS error of 16° with synchronized AGCs versus 37° with unsynchronized AGCs. Recently, Potts et al. (2019) investigated the effect of linking AGCs for Cochlear Corporation implants and found that listener RMS error decreased by 8° across eight loudspeakers spaced 20° apart in the frontal hemifield. At the loudspeaker locations where localization was most affected by independent AGC compression, the improvement was 19°. Also, speech reception thresholds improved by 2.5 dB when AGCs were linked.
However, it is unclear how linked AGCs might improve listeners' ability to use ILD cues under conditions where ILDs change, such as during head movements, because of the time course of AGC onsets and offsets and how they could change overall level during head movements. That is, the simulations of Archer-Boyd and Carlyon (2019) suggest that whatever localization ability CI listeners have with unsynchronized, high-compression AGCs is likely to be lost with any head movements. In addition, given that head movement and sound source movement are common occurrences in everyday life, the improvement (or lack thereof) in localization that occurs for synchronized AGCs would hopefully be robust to head movements. In other words, if synchronizing AGCs improved localization performance only when listeners do not move at all, the improvement would not be of nearly the practical value that it would be if listeners were able to move freely and still retain the benefits of AGC synchronization (assuming there were any).
On the other hand, allowing listeners to orient their heads advantageously may be helpful under these circumstances. Head movements may improve sound source localization accuracy by giving the listener “multiple looks” (i.e., multiple relative head-centric sound source angles over the time course of the stimulus). For example, if ILDs are only reliable to the extent that listeners can determine “sidedness” rather than precise location, we might conjecture that head movements could be used to narrow down the range of possible sound source locations. Such advantages would only be available if ILD cues change with the correct trajectory as listeners move their heads. We tested the sound source localization performance in the frontal hemifield of listeners implanted with AB CIs in both ears under conditions where (a) their AGCs were either synchronized or not and (b) head movements were allowed or not.
Method
Subjects
Seven listeners (four males and three females between the ages of 59 and 75 years, M = 68.5 years), all bilaterally implanted with AB CIs, participated in the study. All patients used the T-Mic as their everyday microphone. All CI participants had at least 1 year of bilateral experience with an average of 4.6 years of experience. See Table 1 for characteristics of the individual CI subjects with their implant and device settings. Participants were compensated for their participation. All procedures reported in this study were approved by the Arizona State University Institutional Review Board.
Table 1.
Subject demographics.
| Subject | Processor model (everyday/tested) | Processing strategy | Age (years) | Duration of deafness, left ear (years) | Duration of deafness, right ear (years) | Duration of CI use, left ear (years) | Duration of CI use, right ear (years) |
|---|---|---|---|---|---|---|---|
| 1 | Naida Q70/Naida Q90 | HiRes Optima-S | 67.8 | 54.4 | 54.4 | 6.9 | 1.0 |
| 2 | Naida Q90/Naida Q90 | HiRes Optima-P | 71.3 | 64.0 | 64.0 | 2.7 | 2.6 |
| 3 | Naida Q90/Naida Q90 | HiRes Optima-S | 63.5 | 19.0 | 19.0 | 1.5 | 2.4 |
| 4 | Naida Q70/Naida Q90 | HiRes-S w/Fidelity 120 | 59.0 | 59.0 | 59.0 | 11.7 | 18.1 |
| 5 | Naida Q90/Naida Q90 | HiRes Optima-S | 75.0 | 16.8 | 16.8 | 11.4 | 3.9 |
| 6 | Naida Q90/Naida Q90 | HiRes Optima-S | 66.2 | 17.2 | 17.2 | 7.9 | 7.3 |
| 7 | Naida Q90/Naida Q90 | HiRes Optima-P | 69.8 | 23.0 | 9.0 | 5.4 | 4.2 |
Note. All listeners are bilaterally implanted. CI = cochlear implant.
Stimuli
Three-second Gaussian noise bursts were bandpass filtered to 250–8000 Hz with a four-pole Butterworth filter implemented using MATLAB's filtfilt function (therefore effectively an eight-pole, zero-phase filter). The resulting signal was then windowed with 20-ms, cosine-squared onset and offset ramps. Sound stimuli were presented at 70 dBA as measured at the center of the room, where the listener's head would be. This presentation level was above the AGC threshold of approximately 60 dBA and was therefore expected to trigger AGC compression (before mapping to electrical stimulation) in one or both processors for the unlinked-AGC conditions (see Boyle et al., 2009), or in both processors for the linked-AGC conditions.
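For illustration, a Python analogue of this stimulus generation might look as follows (the original processing was done in MATLAB and was not published as code; the random seed and variable names here are our own):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100                                  # playback sample rate (Hz)
dur, f_lo, f_hi = 3.0, 250.0, 8000.0        # duration (s) and passband (Hz)

rng = np.random.default_rng(0)
noise = rng.standard_normal(int(dur * fs))  # Gaussian noise burst

# Four-pole Butterworth bandpass; filtfilt runs it forward and backward,
# giving an effectively eight-pole, zero-phase filter.
b, a = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs)
stim = filtfilt(b, a, noise)

# 20-ms cosine-squared onset and offset ramps.
n_ramp = int(0.020 * fs)
ramp = np.sin(0.5 * np.pi * np.linspace(0.0, 1.0, n_ramp)) ** 2
stim[:n_ramp] *= ramp
stim[-n_ramp:] *= ramp[::-1]
```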
CI Settings
All listeners used the in-ear AB T-Mic for both ears in all conditions. The AB CI processors use a dual-loop system with a slow-acting compressor to retain as much of the speech envelope as possible and a fast-acting compressor to protect the wearer from sudden increases in sound intensity. The signal is subject to a pre-emphasis filter that increases the level of relatively high frequencies, also to aid in speech comprehension. The broadband compression threshold is therefore affected by this pre-emphasis filtering. For the slow compressor, the AGC compression threshold, or knee point, is 75 dB SPL for a 200-Hz pure tone, 65 dB SPL for a 500-Hz tone, 60 dB SPL for a 1-kHz tone, 56.5 dB SPL for a 2-kHz tone, and 55.5 dB SPL for a 4-kHz tone. The knee points for the fast loop are 8 dB higher at all frequencies. The synchronized AGC program was implemented on the research processors used in this experiment. To synchronize the left and right AGCs, the digitized signal of each AB Naida Q90 CI research processor was transmitted to the opposite ear using Phonak's Hearing Instrument Body Area Network wireless system. In this way, both AGCs had both signals to work with. The two (left/right) signals were combined, using a max operator, after the pre-emphasis filter was applied, to create a signal used solely for determining the compression for both AGCs. This "common compression signal" was composed of the maximum envelope (time-domain RMS value in each 1.5-ms interval) of the left/right signals. Since both AGCs used this "common compression signal" for setting their compression gains and both AGCs applied this compression gain to their own original signals, the ILD was largely preserved. Also, the delay in wireless transmission between left and right processors (~5 ms) was accounted for by adding the same delay to the ipsilateral side, so that the instantaneous ILD was not distorted; rather, an overall delay of ~5 ms was added to the binaural signal. In other words, while the compression level varied with the signal envelope maxima over time, the same compression was always applied to the signal in both ears, and so the ILD was left largely intact. The slow-loop attack time was 139 ms, and the release time was 383 ms. The fast-loop attack time was 0.33 ms, and its release time was 46.2 ms. For both the fast and slow loops, there was infinite steady-state compression above threshold for the research processors used in this set of experiments. Since the 12:1 ratio used in commercially available AB processors is effectively near-infinite compression, results were not expected to be substantially affected by this small change. Note also that, as a result of the slow onset and offset of the slow loop, a speech envelope would still be largely preserved. See Boyle et al. (2009) for further details.
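The essence of the linked-AGC scheme, one gain trajectory derived from the louder of the two ears and applied to both, can be sketched compactly. In the Python sketch below, `gain_from_level_db` is a hypothetical callback mapping level in dB to gain in dB (for instance, a 12:1-above-knee rule), levels are on an arbitrary dB scale, and the wireless link is modeled only as a fixed 5-ms delay; none of this is AB's actual firmware.

```python
import numpy as np

def frame_rms_db(x, fs, frame_ms=1.5):
    """RMS level (dB) in consecutive 1.5-ms frames, repeated per sample."""
    n = max(1, int(fs * frame_ms / 1000.0))
    pad = (-len(x)) % n
    frames = np.pad(x, (0, pad)).reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    db = 20.0 * np.log10(np.maximum(rms, 1e-12))
    return np.repeat(db, n)[:len(x)]

def linked_agc(left, right, fs, gain_from_level_db):
    """Compress both ears with one gain derived from the max envelope,
    preserving the ILD at the AGC output."""
    # The ~5-ms wireless transmission delay is mirrored on the ipsilateral
    # side so the instantaneous ILD is not skewed; only an overall delay
    # is added to the binaural signal.
    d = int(0.005 * fs)
    left = np.pad(left, (d, 0))[:len(left)]
    right = np.pad(right, (d, 0))[:len(right)]
    # "Common compression signal": maximum of the two frame envelopes.
    common = np.maximum(frame_rms_db(left, fs), frame_rms_db(right, fs))
    gain = 10.0 ** (gain_from_level_db(common) / 20.0)
    return gain * left, gain * right
```

Because a single gain trajectory multiplies both waveforms, any level difference between the two inputs passes through unchanged, which is why the output ILD is largely preserved.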
The most commonly used map for each ear was taken from Soundwave 3.2 and converted for use with the BEPsnet research software provided by Advanced Bionics. Using Naida Q90 research processors, BEPsnet was used to create a map for unlinked AGCs and an identical map for linked AGCs (except for the AGC settings). For the unlinked-AGC condition, each microphone signal was connected only to the AGC on that side, and so compression was independent for the left and right processors, resulting in distorted ILDs that would not necessarily relate in a clear way to the ILD at the listener's ears.
Test Environment for Localization
The Spatial Hearing Laboratory, also used in Pastore et al. (2018, 2020), was used for this group of experiments. The room measures 10 ft × 15 ft × 10 ft, with all six surfaces covered by 4-in.-thick acoustic foam. The broadband reverberation time (RT60) is 102 ms. Twenty-four loudspeakers (Boston Acoustics 100×) are spaced 15° apart on the azimuth plane, on a 5-ft-radius circle at approximately the same height as the listeners' pinnae. Stimuli were presented from the front 13 loudspeakers (−90° to +90°). The loudspeakers were clearly labeled 1–13, as shown in Figure 1. A camera and intercom system allowed the experimenter to monitor the listener's head position and communicate instructions to the listener from the remote control room. Listeners entered their responses on a numerical keypad. All sounds were presented via a 32-channel digital-to-analog converter (PreSonus Quantum 4848) at a rate of 44,100 samples/second/channel.
Figure 1.
Schematic illustration of the experimental apparatus. Speakers are spaced 15° apart.
Procedure
The procedure and loudspeaker apparatus are essentially the same as those described in Yost and Zhong (2014)—detailed considerations regarding loudspeaker spacing and sound source identification are explained in that article (see also Hartmann et al., 1998). For all conditions, listeners sat in a stationary chair, facing the loudspeaker at 0° (labeled #7; see Figure 1), directly in front of them. Listeners were told that a sound would be presented from one of the 13 loudspeakers they could see in front of them. Then, a sound stimulus was played at each speaker location in order from left to right (and listeners were informed of this order) to familiarize listeners with the task and the loudspeaker locations. For stationary conditions, listeners were asked to keep their heads fixed, focused on a red dot on Loudspeaker 7. For the head movement conditions, listeners were asked to make horizontal, rotational head movements within the range of ±30° to help them localize the sound source. Listeners were encouraged to use whatever rotational head movements and/or series of different head orientations they preferred as part of their localization strategy—the 3-s stimulus duration gave ample time for several approaches. Listeners were not naïve to the purpose of head turns and were encouraged to find a motion they felt helped them perform at their best. Continuous observation, via webcam, confirmed that motion was at most ±30° and only horizontal in nature. After each trial in all conditions, listeners were again asked to return to facing the center loudspeaker (#7) and focus on the red dot—this was monitored and confirmed over webcam by the experimenter before the next stimulus presentation began. Since the particular details of listener head motion were not the focus of these experiments, head tracking was not employed. No feedback was offered regarding listeners' responses. Once a stimulus was presented, the listener entered the number of the loudspeaker that they perceived to be the source. The next presentation occurred only after the listener indicated their response to the previous presentation. It should be noted that, while all 13 loudspeakers looked identical, the loudspeakers were not hidden, and localization of the sound source may therefore have been assisted by visual cues. In informal interviews after the full experiment was completed, no subject mentioned any front–back reversals—given the instructions and the visual presence of the loudspeakers in front of the subjects, this is not surprising. For each condition, the stimulus was presented 12 times from each of the 13 loudspeakers, in randomized order, for a total of 156 presentations. Each condition formed a separate block, and the blocks were presented in random order for each listener.
The four blocked conditions were:
TurnNSyncN: No head turns (listener's head is stationary), AGCs are not synchronized.
TurnYSyncN: Free head movements within ±30° are encouraged, AGCs are not synchronized.
TurnNSyncY: No head turns (listener's head is stationary), AGCs are synchronized.
TurnYSyncY: Free head movements within ±30° are encouraged, AGCs are synchronized.
Results
Figure 2 shows the raw results pooled across all seven CI listeners for each of the four conditions. These group data are first considered in a qualitative way, and then in a more detailed, quantitative way in Figure 3. Then, individual data are considered in Figures 3, 5, and 6, and in Table 2.
Figure 2.
Sound source localization responses for seven bilaterally implanted patients using Advanced Bionics cochlear implants. The angular location of the loudspeaker presenting the stimulus, in relation to the listener's midline, is represented along the horizontal axis. Listener responses are represented along the vertical axis in terms of the same loudspeaker positions. The size of each bubble is linearly proportional to the percent of all responses at that perceived location to stimuli from the presenting loudspeaker. The figure legend shows example bubble size for cases where 25%, 50%, and 100% of responses to a particular presenting loudspeaker were perceived to be at one particular loudspeaker position. Correct responses are plotted along the positively sloped diagonal in filled black circles. Responses within ±30° are represented with empty black circles and shown within the shaded area, as indicated in the figure legend. Response errors greater than ±30° are represented by red empty circles in the unshaded areas.
Figure 3.
The mean absolute error (MAE) for each individual listener. The mean MAE across listeners is shown with black filled circles. Error bars show the standard error of the mean.
Figure 5.
Individual listener performance for Listeners 1–3. Otherwise, read the same as Figure 2.
Figure 6.
Individual listener performance for Listeners 4–7. Otherwise, read the same as Figure 2.
Table 2.
Individual and mean-across-listeners mean absolute error (MAE) and root-mean-square (RMS) error, in degrees, for each condition. Values are given as MAE (RMS).
| Subject | TurnNSyncN | TurnYSyncN | TurnNSyncY | TurnYSyncY |
|---|---|---|---|---|
| L.1 | 47.2 (61.5) | 48.9 (62.9) | 39.0 (50.3) | 38.5 (51.1) |
| L.2 | 76.6 (95.0) | 40.7 (55.9) | 30.3 (37.5) | 17.9 (21.8) |
| L.3 | 49.6 (69.2) | 57.6 (76.7) | 20.7 (28.7) | 14.3 (28.6) |
| L.4 | 31.3 (40.5) | 29.5 (37.7) | 22.6 (29.1) | 19.8 (28.6) |
| L.5 | 23.0 (32.8) | 15.9 (23.1) | 21.5 (27.7) | 14.7 (26.7) |
| L.6 | 21.2 (27.2) | 19.1 (24.4) | 10.4 (16.1) | 1.6 (5.0) |
| L.7 | 17.4 (25.0) | 22.0 (28.2) | 27.9 (34.9) | 19.7 (24.2) |
| Mean | 38.0 (50.1) | 33.4 (44.1) | 24.6 (32.1) | 18.1 (26.6) |
| SD | 21.2 (26.0) | 16.0 (21.1) | 9.0 (10.5) | 10.9 (13.6) |
In Figure 2, the pooled data are presented in a confusion matrix with the presented loudspeaker location along the horizontal axis and listeners' reported speaker location along the vertical axis. Correct responses are plotted along the positively sloped diagonal in filled black circles. Responses within ±30° are represented with empty black circles and shown within the shaded area. Response errors greater than ±30° are represented by red empty circles in the unshaded areas. The size of each circle is linearly proportional to the relative frequency of each combination of presentation and response. The raw data for each individual listener are shown in Figures 5 (Listeners 1–3) and 6 (Listeners 4–7).
A comparison of the figure panels in the left column of Figure 2 (Panels A and B, unlinked AGCs) with those in the right column (Panels C and D, linked AGCs) allows for an evaluation of the effect of linking AGCs on localization performance under stationary head conditions (top row, Panels A and C) and when head movements were encouraged (Panels B and D). Linking AGCs leads to a clear, if relatively modest, improvement in sound source localization acuity regardless of whether head movements are permitted or not. This improvement can be seen in the reduced spread of the data about the positive diagonal when comparing the data in the right column versus the left column for both the upper and lower rows of panels.
Looking at Panel A, where listeners maintained a stationary head and AGCs were independent, localization performance is fairly poor, with localization errors often so large as to be on the wrong side of the head. When synchronization of AGCs is introduced in Panel C, localization errors remain, but their average magnitude is reduced for presentations from loudspeakers at relatively lateral positions, with most responses within roughly 30° of the target loudspeaker position. However, for stimuli presented from more centrally located loudspeakers, the spread of listener responses remains roughly the same with and without synchronization of AGCs.
Comparing the data shown in the upper two panels (A and C) with those in the lower two panels (B and D) reveals the effects of allowing head movements on sound source localization acuity. Comparing Panels A and B, where AGCs were independent of each other, introducing head movements appears to make little difference in listeners' overall acuity. However, when AGCs were synchronized (right column, comparing Panels C and D), head movements do appear to improve localization acuity to a relatively small degree with a reduced spread in responses. That is, listeners' responses are more closely associated with the position of the sound source, even if errors do remain.
Using mean absolute error (MAE) values (the mean absolute value of the difference between presented and reported loudspeaker location, in degrees) and listeners as a random variable, a two-way repeated-measures analysis of variance showed a significant main effect of synchronization of AGCs, F(1, 6) = 6.01, p = .049, ηp² = .500. The main effect of head turns was not significant, F(1, 6) = 2.92, p = .138, ηp² = .327, and the interaction between head turns and AGC synchronization was not significant, F(1, 6) = 0.159, p = .703, ηp² = .026, meaning that the effect of AGC synchronization did not depend on head movements. The interaction between subjects and synchronization of AGCs was also significant, F(6, 6) = 6.08, p = .023, meaning that the degree of change brought about by synchronization of AGCs was highly variable between listeners. Planned comparisons for the effect of synchronization showed that, when head movements were not allowed, the effect of synchronizing AGCs was not statistically significant (t = 1.92, p = .052); when head movements were allowed, it was (t = 2.79, p = .016). It is worth noting that the individual data (see Figures 5 and 6 and Table 2) show that all but Listener 7 showed at least some improvement in localization when AGCs were synchronized. Even for Listener 7, AGC synchronization did reduce the MAE when free head movements were allowed. Also, Table 2 shows that, for all listeners, allowing head movements resulted in at least a small reduction in MAE when AGCs were synchronized. That is, when AGCs were synchronized, head movements did not increase the MAE for any listener, and therefore head movements appear not to interfere with or degrade sound source identification accuracy when AGCs are synchronized. A post hoc comparison of the performance of stationary listeners with independent AGCs to their performance with head movements and synchronized AGCs was significant (t = 2.55, p = .022), and all but one listener showed a decrease in the error rate (Listener 7 showed an increase in MAE of 2.3° and a decrease in RMS error of 0.8°). Although the magnitude of the improvement was very small for several listeners, this result nevertheless suggests that, with synchronization of AGCs, the ILD cue remained at least as useful when head movements occurred as when listeners remained stationary.
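For concreteness, the analysis just described can be sketched in a few lines of Python using the repeated-measures ANOVA in statsmodels. The file and column names are hypothetical placeholders for the trial-level data; this illustrates the design (two within-subject factors, listeners as the random variable), not the authors' original analysis script.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per trial: subject, sync ('Y'/'N'), turn ('Y'/'N'),
# presented_deg, responded_deg.  (Hypothetical file and column names.)
trials = pd.read_csv('localization_trials.csv')
trials['abs_err'] = (trials['responded_deg'] - trials['presented_deg']).abs()

# Per-subject MAE in each of the four conditions.
mae = (trials.groupby(['subject', 'sync', 'turn'], as_index=False)['abs_err']
             .mean()
             .rename(columns={'abs_err': 'mae'}))

# Two-way repeated-measures ANOVA with listeners as the random variable.
res = AnovaRM(mae, depvar='mae', subject='subject',
              within=['sync', 'turn']).fit()
print(res)
```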
Figure 3 summarizes the data shown in Figure 2 in terms of the MAE (see also the Supplemental Material S1 for a figure that breaks down group and individual listeners' error rates by loudspeaker). Solid black circles show the group mean, and error bars show the standard error of the mean. Individual listeners' MAE is shown slightly to the left of the group means, with individual listeners' symbols indicated in the figure legend. RMS error, in degrees, for each listener and the group are included in Table 2 for comparison with other studies that used the RMS error metric.
Several overall group trends become clearer in Figure 3. First, the mean MAE and the variability about that mean (see also Table 2) are reduced considerably with synchronization of the AGCs, both with and without head turns (recall that the main effect of synchronization of AGCs was statistically significant). When listeners' heads remain stationary, synchronization of AGCs brings the mean down from 38.0° to 24.6°. Similarly, when head movement was encouraged, synchronization of AGCs brings the mean down from 33.4° to 18.1°. Second, allowing head movement reduces the mean MAE by only a relatively modest amount both when AGCs are synchronized and when they are not—this is not surprising given that both the main effect of head movements and the interaction of head movements with synchronization of AGCs were not statistically significant. When AGCs are not synchronized, the mean MAE decreases from 38.0° to 33.4° when listeners are able to move their heads. Allowing head movements results in a somewhat larger reduction when AGCs are synchronized, from 24.6° to 18.1°. Most importantly, head movements apparently did not disrupt the effectiveness of the synchronization of AGCs.
We now turn to trends in the mean MAE for the individual listeners shown in Figure 3 (see also the corresponding individual raw data shown in Figures 5 [Listeners 1–3] and 6 [Listeners 4–7]). Listener performance can, loosely, be broken down into two “groups” of listeners: those who performed near chance in the TurnNSyncN condition (Listeners 1–3) and those who were at least able to reliably determine which side the sound source was on in the TurnNSyncN condition (Listeners 4–7). Without synchronization of AGCs, Listeners 1–3 (shown with symbols made of lines: +, *, and ×, in Figure 3) have the highest MAE of all listeners, and this remains so for Listener 1 even with synchronization of AGCs. However, with the synchronization of AGCs, all three listeners are able to reliably determine the correct side of the sound source, and Listeners 2 and 3 attain localization performance that is near the group mean.
Looking at the individual raw data in Figure 5, with synchronization of AGCs and the addition of free head movements, Listener 3 is able to attribute sounds presented from near the midline to that same area and actually attains the second lowest MAE of all seven listeners. With AGC synchronization, Listeners 1 and 2 appear sensitive only to "sidedness" and respond almost exclusively near the more lateral positions and not to the center—this is nevertheless a substantial improvement over what appears to be guessing when AGCs were not synchronized. When free head movements are allowed, Listener 1 appears to lose much (but not all) of the gains associated with synchronization of AGCs, though at least a rough sense of sidedness remains, whereas responses without synchronization of AGCs were clearly based on guessing.
Listeners 4–7 (symbols made of filled shapes in Figure 3) could at least determine sidedness without the synchronization of AGCs (see Figure 6). These listeners generally gain the least from synchronization of AGCs. This is presumably (at least in part) because they have less room for improvement, as they perform reasonably well without synchronized AGCs (though still not as well as would be expected for NH listeners). When free head movements are also allowed, these listeners are able to localize most presented sounds within ±30° of the actual sound source. With synchronization of AGCs and free head movements, Listener 6 localizes as well as an NH listener (1.6° MAE, 5° RMS error) under these laboratory conditions. Interestingly, Listener 7 appears to perceive most sounds at the extreme lateral sound source positions once AGCs are synchronized.
A summary metric of the improvement in sound source localization, if any, that came with head turns and synchronization of AGCs is offered in Figure 4. First, the percent correct localization, where listeners reported the correct loudspeaker as the source of the sound stimulus, is shown with the leftmost, light gray bars for each condition. With no synchronization, localization acuity was quite poor regardless of whether head turns were allowed, at roughly 20% correct. Synchronizing AGCs led to an improvement in percent correct for stationary listeners from 18.6% to 24.2%. For listeners who were allowed free head movements, synchronizing AGCs led to an improvement from 19.1% to 37.7% correct. While these improvements are considerable, listeners still, at best, identified the correct loudspeaker on fewer than 40% of trials. However, for many real-life conditions, sound source localization that is roughly correct may be all that is required to gain many of the essential benefits of auditory localization (Bichey & Miyamoto, 2008; Dorman et al., 2016). If responses that are within ±15° of the correct loudspeaker are included together with correct responses (i.e., responses one loudspeaker position to the right and left of the correct loudspeaker are also counted as "correct"), then localization performance is still fairly poor with no head movements and no synchronization of AGCs, but localization acuity for this measure reaches roughly 70% "correct" when both synchronization and head movements are included. In order to accrue at least some of the benefits of auditory sound source localization (e.g., to direct a listener's visual attention to a salient source of sound; Van Opstal, 2016), we might consider that only approximate localization accuracy could be sufficient, and thus count all responses within ±30° of the correct loudspeaker as "correct" (i.e., two loudspeakers to each side of the correct loudspeaker). In this case, localization performance is somewhat better than chance for stationary listeners without AGC synchronization, at roughly 65% "correct." However, including both AGC synchronization and head movements for this "correct within ±30°" metric leads to performance that is 85% "correct."
Figure 4.
Mean and standard deviation of listener accuracy in terms of percent correct for each of the four conditions. Light bars show performance for exact percent correct. Medium-shaded bars show performance when responses to the one loudspeaker immediately to the left or right of the correct loudspeaker are also counted as "correct." For example, if Loudspeaker #7 at 0° presented the stimulus, then responses of −15°, 0°, and +15° would be counted as "correct." The darkest bars represent performance when the two loudspeakers immediately to the left and right of the target loudspeaker were counted as "correct." Note that, for sound stimuli presented from peripheral loudspeaker positions, there may be no loudspeaker available one or two positions further to the periphery (i.e., more than 90° to the listener's left or right), and so the ±15° and ±30° percent-"correct" statistics may be somewhat undervalued.
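The three scoring rules in Figure 4 differ only in how far a response may fall from the presenting loudspeaker and still be counted as "correct." A minimal Python sketch of this tolerance-based scoring (function and variable names are our own) is:

```python
import numpy as np

def percent_correct(presented_deg, responded_deg, tol_speakers=0,
                    spacing_deg=15.0):
    """Percent of trials with the response within tol_speakers positions
    (tol_speakers * spacing_deg degrees) of the presenting loudspeaker."""
    presented = np.asarray(presented_deg, dtype=float)
    responded = np.asarray(responded_deg, dtype=float)
    hits = np.abs(responded - presented) <= tol_speakers * spacing_deg
    return 100.0 * np.mean(hits)

# Exact, within-±15°, and within-±30° scoring of one condition:
# for tol in (0, 1, 2):
#     print(percent_correct(pres_deg, resp_deg, tol_speakers=tol))
```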
Discussion
The high compression ratio used in the AGCs of some unlinked bilateral CIs (AB in this case) has been shown to distort the ILD cue (Archer-Boyd & Carlyon, 2019; Dorman et al., 2014) for stimuli that engage unlinked AGC compression, and since this is the only binaural cue that is useful to bilateral CI listeners, localization was expected to be negatively impacted. Furthermore, the trajectory of dynamic ILD cues that arise as a result of listener head movements and/or sound source movements was also expected to be distorted and perhaps lead to further sound source localization impairments. Linking and synchronizing AGCs offers the possibility of preserving correct ILD cues, at least before the signal is mapped to electrical stimulation at the CI electrodes. We investigated the ability of bilaterally implanted CI users to localize in the frontal hemifield with and without AGC synchronization under conditions where the listeners were stationary and where they were allowed to move their heads.
Across listeners, localization acuity was improved by synchronization of AGCs, though to varying degrees depending on the listener. This is consistent with the previous reports by Chen (2017) and Potts et al. (2019). We now compare the RMS error results (in degrees) in this experiment to those of others (see Table 2 for conversion between the MAE used in this report and the RMS error discussed in comparisons with other studies in this section). Under conditions where the listener remained stationary, we found a mean RMS error across listeners of 50.1°. Synchronization of AGCs led to a reduction in RMS error to 32.1°. This is a reduction of 18°, as compared to the 21° reported by Chen (2017), also for AB implants. Potts et al. (2019) reported a reduction of 8° after synchronizing the AGCs of Cochlear Corporation CIs. Note that Potts et al. used eight loudspeakers spaced 20° apart, whereas this study used 13 loudspeakers spaced 15° apart, so direct comparison must be made with care. For example, the most laterally placed loudspeakers in Potts et al. were at ±70°, whereas they were at ±90° in this study—the greater error at lateral positions that is commonly reported (e.g., see Dorman et al., 2014) may have elevated the RMS error in this study relative to that reported in Potts et al.
In this study, under synchronized AGC conditions where free head movements were allowed, a further reduction in MAE of 5.5° was realized, for an RMS error of 26.6°. For sound source identification (using the same experimental setup as this study) with stationary NH listeners, Dorman et al. (2016), Yost and Zhong (2014), and Yost et al. (2013) reported RMS errors of 6°, 5.98°, and 6.2°, respectively. We are unaware of any published data for AGC synchronization with free head movements for comparison.
Improvement with synchronization of AGCs was not uniform across listeners. In general, those listeners who already localized reasonably well with independent AGCs gained the least from their synchronization, though, in several cases, this may have been because performance was already quite good, and so there was relatively less "room for improvement." Those listeners who performed worst with independent AGCs gained the most from synchronization. In addition, no listener performed worse with synchronization than without.
The stationary, unlinked RMS error of 50.1° is relatively high compared to the 29° RMS error reported by Dorman et al. (2016) for a mixed group of 32 listeners wearing AB, MED-EL, and Cochlear Corporation CIs. Clearly, this high RMS value is driven by Listeners 1–3 (see Table 2). Comparing the performance of the three best performers (Listeners 5–7, mean RMS error = 28.33°) to that of the three worst performers (Listeners 1–3, mean RMS error = 75.23°) in light of the duration of bilateral CI use and the duration of bilateral deafness may shed some light on the difference in their performance. Listeners 1–3 had use of bilateral implants for an average of 1.7 years, whereas Listeners 5–7 had use of bilateral implants for an average of 15.4 years. At the same time, the duration of deafness for Listeners 1–3 was an average of 45.8 years, whereas the duration of deafness for Listeners 5–7 was an average of 14.3 years. To summarize, the worst performers had an RMS error roughly triple that of the best performers, roughly triple the duration of deafness, and roughly one ninth the time spent using bilateral CIs. Even though the number of listeners is small, this might suggest that the RMS error of the worst three listeners was as much or more a function of their demographics than of the particular CI they were using.
All seven listeners demonstrated a reduced RMS error in the free head turns, synchronized AGCs condition as compared to the no head movements, independent AGCs condition. However, the amount of reduction in RMS error was highly variable (a range of 0.8°–73.2° decrease in RMS error), and the main effect of head turns was not statistically significant at the group level. The main outcome here is that head movements did not generally impede the effectiveness of synchronizing AGCs (but note that the RMS error of Listener 5 increased by 3.6° when AGCs were synchronized vs. not, under conditions where free head movements were allowed). Altogether, these results argue in favor of AGC synchronization as a means of improving localization performance for listeners using bilateral CIs with high ratios of AGC compression. It would seem likely that gains would also be available to those using CIs with relatively lower compression ratios.
Without synchronization of AGCs, Listeners 5, 6, and 7 demonstrated somewhat laterally compressive localization patterns, meaning that sounds were often perceived closer to the midline than the location from which they were presented. This response pattern has been noted before and has been often attributed to a greater reduction in ILDs from AGC compression in response to sounds coming from more lateral positions (e.g., Dorman et al., 2014; Kerber & Seeber, 2012; Seeber & Fastl, 2008).
While listeners appear able to at least approximately localize (i.e., within the 60° range shown in Figure 4 as “percent correct ±30°”), the usefulness of these cues over stimulus duration is unclear. That is, would listeners be able to monitor the change in ILDs that accompany head movements to resolve spatial ambiguities such as front–back reversals? With lower rates of compression, listeners have been shown to be able to utilize the dynamic ILD cues that come from head turns to their advantage (e.g., Pastore et al., 2018). Whether the synchronization of AGCs would extend this benefit to users of CIs with higher ratios of AGC compression remains unclear—this is the subject of a related study with the same group of subjects (Pastore et al., under review). This issue is important; although stationary localization conditions may be favorable with synchronized AGCs, this does not necessarily mean that the resulting cues will be useful under dynamic conditions where sound sources and/or listeners move, as in everyday life.
Limitations
This article only offers results for localization of a single stationary, broadband sound source presented for a relatively long time (3 s). In real-world scenarios, there are often many sources sounding at nearly the same time, and often their frequency spectra are different, as are, presumably, their locations. Especially in the case of broadband compression, as used in most CIs, it is not clear how well the ILDs relating to the various, nearly simultaneous sources will be preserved after AGC compression, especially across frequency. Future studies should test localization in conditions that more closely approximate multiple sound source “cocktail party” listening conditions.
It is also unclear what effect reverberation would have on localization even with linked AGCs. It may be that the spatial widening found by Hassager et al. (2017) with hearing aid compression would also apply to AGC compression in CIs as well; however, the slower compression onsets of the AB AGCs may avoid the reduced direct-to-reverberant ratio and grouping of different sound sources into the same reconstructed envelope that can occur for fast compression (cf. Hassager et al., 2017). This consideration would seem to be worth studying under conditions with multiple sound sources.
It is also not clear why localization acuity improved less for central sound source positions, although head movements, in combination with synchronization, did further reduce errors at medial locations. It may be that, for some listeners, the ILD is so "noisy" that localization based on ILD is largely reduced to determining "sidedness." The small ILDs from sound sources near the median plane may be smaller than the "noise" in the ILD representation, so head movements may serve to place sound sources further to the listener's side for a more stable ILD estimate, thereby improving sound source localization (e.g., Listeners 3 and 4 and, to a lesser extent, Listener 2). Future investigations, including modeling, will be necessary to test this hypothesis and understand this result.
Stakhovskaya and Goupell (2017) found that the mapping procedures used for bilateral CIs can produce mismatches between perceived location across single or multiple electrodes and the actual physical ILDs. They further found that, by using the CI listener's perceived intracranial location for single-electrode pairs, they could adjust the mapping so that localization performance improved considerably. It seems reasonable to expect that such a mapping procedure, in combination with synchronization of AGCs, could yield better results than either on its own might. Such a combination might also help reduce errors in the central area around listeners' midline.
In this experiment, listeners had very little time to learn how to use synchronized AGCs for localization. It is possible that listeners' performance might improve with longer, everyday use of synchronized AGCs (perhaps a month or more), and perhaps also some training in their use. It is also possible that different types of head movements may work better for different listeners. These questions are left for a future study.
Finally, during head movements and sound source movement, the time course of compression, even when AGCs are synchronized, can lead to the perceived loudness of the sound source changing as stimulus level reaches the compression threshold from above and below. That is, the relatively slow attack and release times for the 12:1 AGC compression could lead to anomalies in the level of the signal coming out of the AGCs, especially when they are synchronized, such that the overall intensity of the stimulus and the perceived loudness of the stimulus become decorrelated from each other. How this would affect localization, especially in terms of distance perception, and how this might interfere with listeners' ability to use the ILD cue, is not clear. This will be the subject of future research and modeling.
Overall, those listeners whose RMS/MAE error was greatest under stationary conditions with no synchronization tended to show the greatest improvements when AGCs were synchronized. Those who had the lowest RMS/MAE error benefited the least from synchronization of AGCs. For the best performing listener (Listener 6), synchronization of AGC led to localization performance within the same RMS error range as NH listeners. It therefore appears that the synchronization of AGCs “does no harm” to the performance of those who already localize sound sources effectively, while helping those who do not.
Conclusions
The following are the conclusions of this study:
At the group level (n = 7), synchronization of AGCs led to a statistically significant decrease in the error rate of sound source identification in the frontal hemifield. Synchronization of AGCs led to an average reduction in the MAE of 13.4° when head movements were not allowed and a 15.3° reduction in MAE when head movements were allowed. Allowing head movements and synchronizing AGCs led to a 19.9° reduction in MAE as compared to performance with independent AGCs and no head turns.
Head movements did not lead to a significant difference in performance, and the interaction between head movements and synchronization of AGCs was not significant.
When AGCs were not synchronized, head movements led to decreased error rates for four listeners and increased error rates for three listeners. The magnitude of these changes also varied across listeners.
When AGCs were synchronized, head movements did not elevate the error rate of any listener, though again, the magnitude of the change was variable across listeners.
When free head movements were allowed, all seven listeners demonstrated sound source identification with a lower error rate with synchronized AGCs than with independent AGCs, though the magnitude of the change varied considerably between listeners.
When free head movements were not allowed, synchronization of AGCs led to a decrease in error rate for six listeners and an increase in error rate for one listener. The magnitude of the change varied considerably between listeners.
Acknowledgments
Research was supported by National Institute on Deafness and Communication Disorders Grant 5R01DC015214 (W. A. Y.) and Facebook Reality Labs (W. A. Y. and M. T. P.) and National Institute on Deafness and Communication Disorders Grant F32DC017676 (M. T. P.). Expenses related to testing cochlear implant patients were paid for in a grant from Advanced Bionics to Michael F. Dorman. Chen Chen is an employee of Advanced Bionics, and only participated in the programming of the cochlear implant processors and some aspects of experimental design (not in data collection, data analysis, or interpretation). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
- Archer-Boyd, A. W., & Carlyon, R. P. (2019). Simulations of the effect of unlinked cochlear-implant automatic gain control and head movement on interaural level differences. The Journal of the Acoustical Society of America, 145(3), 1389–1400. https://doi.org/10.1121/1.5093623
- Bichey, B. G., & Miyamoto, R. T. (2008). Outcomes in bilateral cochlear implantation. Otolaryngology—Head & Neck Surgery, 138(5), 655–661. https://doi.org/10.1016/j.otohns.2007.12.020
- Boyle, P., Büchner, A., Stone, M., Lenarz, T., & Moore, B. (2009). Comparison of dual-time-constant and fast-acting automatic gain control (AGC) systems in cochlear implants. International Journal of Audiology, 48(4), 211–221. https://doi.org/10.1080/14992020802581982
- Brimijoin, W. O., & Akeroyd, M. A. (2012). The role of head movements and signal spectrum in an auditory front/back illusion. i-Perception, 3(3), 179–182. https://doi.org/10.1068/i7173sas
- Brimijoin, W. O., & Akeroyd, M. A. (2014). The moving minimum audible angle is smaller during self motion than during source motion. Frontiers in Neuroscience, 8, 1–8. https://doi.org/10.3389/fnins.2014.00273
- Brimijoin, W. O., Boyd, A. W., & Akeroyd, M. A. (2013). The contribution of head movement to the externalization and internalization of sounds. PLOS ONE, 8(12), Article e83068. https://doi.org/10.1371/journal.pone.0083068
- Chen, C. (2017). Effects of synchronizing automatic gain controls for bilateral cochlear implant users. Poster presented at the Conference on Implantable Auditory Prostheses.
- Deshpande, N., & Braasch, J. (2017). Blind localization and segregation of two sources including a binaural head movement model. JASA Express Letters, 142(1), 113–117. https://doi.org/10.1121/1.4986800
- Dorman, M. F., Loiselle, L. H., Cook, S. J., Yost, W. A., & Gifford, R. H. (2016). Sound source localization by normal hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiology and Neurotology, 21, 127–131. https://doi.org/10.1159/000444740
- Dorman, M. F., Loiselle, L. H., Stohl, J., Yost, W. A., Spahr, A. J., Brown, C. A., & Cook, S. J. (2014). Interaural level differences and sound source localization for bilateral cochlear implant patients. Ear and Hearing, 35(6), 633–640. https://doi.org/10.1097/AUD.0000000000000057
- Dorman, M. F., Zeitler, D. M., Cook, S. J., Loiselle, L. H., Yost, W. A., Wanna, G. B., & Gifford, R. H. (2015). Interaural level difference cues determine sound source localization by single-sided deaf patients fit with a cochlear implant. Audiology and Neurotology, 20(3), 183–188. https://doi.org/10.1159/000375394
- Gifford, R. H., Loiselle, L. H., Natale, S. J., Sheffield, S. W., Sunderhaus, L. W., Dietrich, M. S., & Dorman, M. F. (2018). Speech understanding in noise for adults with cochlear implants: Effects of hearing configuration, source location certainty, and head movement. Journal of Speech, Language, and Hearing Research, 61(5), 1306–1321. https://doi.org/10.1044/2018_JSLHR-H-16-0444
- Grange, J. A., & Culling, J. F. (2016). The benefit of head orientation to speech intelligibility in noise. The Journal of the Acoustical Society of America, 139(2), 703–712. https://doi.org/10.1121/1.4941655
- Grantham, D. W., Ashmead, D. H., Ricketts, T. A., Haynes, D. S., & Labadie, R. F. (2008). Interaural time and level difference thresholds for acoustically presented signals in post-lingually deafened adults fitted with bilateral cochlear implants using CIS+ processing. Ear and Hearing, 29(1), 33–44. https://doi.org/10.1097/AUD.0b013e31815d636f
- Grantham, D. W., Ashmead, D. H., Ricketts, T. A., Labadie, R. F., & Haynes, D. S. (2007). Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear and Hearing, 28(4), 524–541. https://doi.org/10.1097/AUD.0b013e31806dc21a
- Hartmann, W. M., Rakerd, B., & Gaalaas, J. B. (1998). On the source-identification method. The Journal of the Acoustical Society of America, 104(6), 3546–3557. https://doi.org/10.1121/1.423936
- Hassager, H., Wiinberg, A., & Dau, T. (2017). Effects of hearing-aid dynamic range compression on spatial perception in a reverberant environment. The Journal of the Acoustical Society of America, 141(4), 2556–2568. https://doi.org/10.1121/1.4979783
- Honda, A., Shibata, H., Hidaka, S., Gyoba, J., Iwaya, Y., & Suzuki, Y. (2013). Effects of head movement and proprioceptive feedback in training of sound localization. i-Perception, 4(4), 253–264. https://doi.org/10.1068/i0522
- Keidser, G., Convery, E., & Hamacher, V. (2011). The effect of gain mismatch on horizontal localization performance. The Hearing Journal, 64(2), 26–33. https://doi.org/10.1097/01.HJ.0000394541.95207.c7
- Kerber, S., & Seeber, B. U. (2012). Sound localization in noise by normal-hearing listeners and cochlear implant users. Ear and Hearing, 33(4), 445–457. https://doi.org/10.1097/AUD.0b013e318257607b
- Kerber, S., & Seeber, B. U. (2013). Localization in reverberation with cochlear implants: Predicting performance from basic psychophysical measures. Journal of the Association for Research in Otolaryngology, 14(3), 379–392. https://doi.org/10.1007/s10162-013-0378-z
- Laback, B., Pok, S. M., Baumgartner, W.-D., Deutsch, W. A., & Schmid, K. (2004). Sensitivity to interaural level and envelope time differences of two bilateral cochlear implant listeners using clinical sound processors. Ear and Hearing, 25(5), 488–500. https://doi.org/10.1097/01.aud.0000145124.85517.e8
- Moore, B. C. J. (2003). Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants. Otology & Neurotology, 24(2), 243–254. https://doi.org/10.1097/00129492-200303000-00019
- Mueller, M. F., Meisenbacher, K., Lai, W.-K., & Dillier, N. (2014). Sound localization with bilateral cochlear implants in noise: How much do head movements contribute to localization? Cochlear Implants International, 15(1), 36–42. https://doi.org/10.1179/1754762813Y.0000000040
- Nopp, P., Schleich, P., & D’Haese, P. (2004). Sound localization in bilateral users of MED-EL COMBI 40/40+ cochlear implants. Ear and Hearing, 25(3), 205–214. https://doi.org/10.1097/01.AUD.0000130793.20444.50
- Pastore, M. T., Natale, S. J., Clayton, C., Dorman, M. F., Yost, W. A., & Zhou, Y. (2020). Effects of head movements on sound-source localization in single-sided deaf patients with their cochlear implant on versus off. Ear and Hearing, 41(6), 1660–1674. https://doi.org/10.1097/AUD.0000000000000882
- Pastore, M. T., Natale, S. J., Yost, W. A., & Dorman, M. F. (2018). Head movements allow listeners bilaterally implanted with cochlear implants to resolve front–back confusions. Ear and Hearing, 39(6), 1224–1231. https://doi.org/10.1097/AUD.0000000000000882
- Pastore, M. T., Pulling, K. R., Chen, C., Yost, W. A., & Dorman, M. F. (under review). Individuals’ baseline sound-source localization acuity constrains the value of synchronizing automatic gain control in bilateral implants. Ear and Hearing.
- Perrett, S., & Noble, W. (1997). The effect of head rotations on vertical plane sound localization. The Journal of the Acoustical Society of America, 102(4), 2325–2332. https://doi.org/10.1121/1.419642
- Potts, W. B., Ramanna, L., Perry, T., & Long, C. J. (2019). Improving localization and speech reception in noise for bilateral cochlear implant recipients. Trends in Hearing, 23, 1–18. https://doi.org/10.1177/2331216519831492
- Schwartz, A. H., & Shinn-Cunningham, B. G. (2013). Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners. The Journal of the Acoustical Society of America, 133(4), 2329–2339. https://doi.org/10.1121/1.4794386
- Seeber, B. U., Baumann, U., & Fastl, H. (2004). Localization ability with bimodal hearing aids and bilateral cochlear implants. The Journal of the Acoustical Society of America, 116(3), 1698. https://doi.org/10.1121/1.1776192
- Seeber, B. U., & Fastl, H. (2008). Localization cues with bilateral cochlear implants. The Journal of the Acoustical Society of America, 123(2), 1030–1042. https://doi.org/10.1121/1.2821965
- Shannon, R. V. (1983). Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics. Hearing Research, 11(2), 157–189. https://doi.org/10.1016/0378-5955(83)90077-1
- Stakhovskaya, O. A., & Goupell, M. J. (2017). Lateralization of interaural level differences with multiple electrode stimulation in bilateral cochlear-implant listeners. Ear and Hearing, 38(1), e22–e38. https://doi.org/10.1097/AUD.0000000000000360
- Thurlow, W. R., & Runge, P. S. (1967). Effect of induced head movements on localization of direction of sounds. The Journal of the Acoustical Society of America, 42(2), 480–488. https://doi.org/10.1121/1.1910604
- Toshima, I., Aoki, S., & Hirahara, T. (2008). Sound localization using an acoustical telepresence robot: TeleHead II. Presence: Teleoperators and Virtual Environments, 17(4), 392–404. https://doi.org/10.1162/pres.17.4.392
- van Hoesel, R. J. M., & Tyler, R. S. (2003). Speech perception, localization, and lateralization with bilateral cochlear implants. The Journal of the Acoustical Society of America, 113(3), 1617–1630. https://doi.org/10.1121/1.1539520
- Van Opstal, A. J. (2016). The auditory system and human sound-localization behavior (1st ed.). Academic Press.
- Wallach, H. (1939). On sound localization. The Journal of the Acoustical Society of America, 10(4), 270–274. https://doi.org/10.1121/1.1915985
- Wallach, H. (1940). The role of head movements and vestibular and visual cues in sound localization. Journal of Experimental Psychology, 27(4), 339–368. https://doi.org/10.1037/h0054629
- Wiggins, I. M., & Seeber, B. U. (2011). Dynamic-range compression affects the lateral position of sounds. The Journal of the Acoustical Society of America, 130(6), 3939–3953. https://doi.org/10.1121/1.3652887
- Wiggins, I. M., & Seeber, B. U. (2013). Linking dynamic-range compression across the ears can improve speech intelligibility in spatially separated noise. The Journal of the Acoustical Society of America, 133(2), 1004–1016. https://doi.org/10.1121/1.4773862
- Wightman, F. L., & Kistler, D. J. (1999). Resolution of front–back ambiguity in spatial hearing by listener and source movement. The Journal of the Acoustical Society of America, 105(5), 2841–2853. https://doi.org/10.1121/1.426899
- Yost, W. A., Loiselle, L. H., Dorman, M. F., Burns, J., & Brown, C. A. (2013). Sound source localization of filtered noises by listeners with normal hearing: A statistical analysis. The Journal of the Acoustical Society of America, 133(5), 2876–2882. https://doi.org/10.1121/1.4799803
- Yost, W. A., & Zhong, X. (2014). Sound source localization identification accuracy: Bandwidth dependencies. The Journal of the Acoustical Society of America, 136(5), 2737–2746. https://doi.org/10.1121/1.4898045
- Zeng, F.-G., Grant, G., Niparko, J., Galvin, J., Shannon, R., Opie, J., & Segel, P. (2002). Speech dynamic range and its effect on cochlear implant performance. The Journal of the Acoustical Society of America, 111(1), 377–386. https://doi.org/10.1121/1.1423926