Abstract
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation and inhibition of signals from the left and right ears, varying with the location of the sound in space, and instantiated in the lateral superior olive of the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
Keywords: asymmetrical hearing, binaural hearing, spatial hearing, cochlear implants, auditory deprivation
Introduction
Current clinical practice aims to preserve bilateral sound input, which has resulted in a growing number of patients who receive bilateral cochlear implants (BiCIs; Gifford & Dorman, 2018; Turton et al., 2020). Bilateral implantation is recommended because it can lead to improved sound source localization and speech understanding in noise relative to unilateral implantation (e.g., Litovsky et al., 2006; Loizou et al., 2009; van Hoesel & Tyler, 2003). Much research on BiCIs implicitly assumes interaural symmetry and does not explicitly consider the effects of a weaker ear contributing to bilateral perception. However, due to factors associated with hearing loss (e.g., deterioration of the auditory periphery; Shepherd & Hardie, 2001) and surgical implantation (e.g., electrode placement; Goupell et al., 2022), interaural asymmetry is pervasive and its causes are varied. Recent research has shown that interaural asymmetry limits the benefits of BiCIs (e.g., Bakal et al., 2021; Burg et al., 2022; Ihlefeld et al., 2015; Mosnier et al., 2009; Yoon et al., 2011) and contributes to difficulty navigating complex sound environments (e.g., Bakal et al., 2021; Bernstein et al., 2016; Goupell et al., 2016, 2018a). As discussed in greater detail below, most studies aiming to understand the impact of interaural asymmetry on patient performance have focused on identifying the sources of asymmetry and, in particular, on how to counteract those sources. However, the terminology and points of view across different topic areas vary widely, making it difficult to reach a consensus about how to optimize patient outcomes.
The binaural hearing literature has also historically assumed that inputs from the two ears are processed independently, and that binaural hearing involves an optimal selection of binaural and monaural sources of information (Durlach, 1963). In other words, this assumption implies that information from one ear can be ignored at will because each ear is treated like a separate “channel” by the auditory system.
Experimental paradigms using binaural stimulation demonstrate that information from both ears is inextricably linked by the auditory system, such that processing of information from the two ears often does not occur in two independent channels. For example, listening with two ears rather than one in conditions where the input to both ears is spectro-temporally degraded impairs task performance in listeners with normal hearing (NH; DeRoy Milvae et al., 2021; Gallun et al., 2007; Goupell et al., 2021) and hearing loss (Bakal et al., 2021; Bernstein et al., 2016, 2020; Goupell et al., 2016; Oh et al., 2019). There are several other examples, not necessarily connected in the literature, of how contralateral stimulation changes perception in the ipsilateral ear. Contralateral stimulation disrupts the ability to detect sounds in listeners with NH (Zwislocki, 1971) and BiCIs (Lin et al., 2013), a phenomenon called “central masking,” though its physiological manifestations or causes may differ between groups. Central masking occurs most often for stimuli with a common onset time and similar place-of-stimulation in each ear (Lin et al., 2013; Zwislocki, 1971). Sensitivity to monaural temporal cues can decrease due to the presence of confounding binaural cues in listeners with NH (Piechowiak et al., 2007; Schimmel et al., 2008). Speech information can be disruptively fused together, resulting in changes in vowel perception for listeners with hearing loss (Reiss et al., 2016; Reiss & Molis, 2021).
Understanding that bilateral auditory processing arises from two interdependent pathways and recognizing the pervasiveness of interaural asymmetry are both essential for devising strategies to improve patient outcomes. The goal of the present manuscript is to review the literature concerning interaural asymmetry for listeners with BiCIs and present a conceptual framework that integrates the findings from a wide variety of studies and methodologies, including human psychophysics and animal electrophysiology. Because the literature concerning interaural asymmetry has focused on its specific causes (e.g., unilateral stimulation during development) or manifestations (i.e., interaural mismatch in electrode placement), we group results into categories of bottom-up or top-down processing. First, we review the findings on bottom-up and top-down binaural processing using psychophysics or physiology. Then, we summarize the findings and propose evidence-based best practices for researchers and clinicians assessing the outcomes of patients with BiCIs. Finally, we suggest important directions for future research.
Definitions and Conceptual Framework
Before proceeding further, it is important to provide several operational definitions. The phrase “interaural asymmetry” is used here to refer to any difference in the representation or perception of identical sounds presented to the left and right ear. Interaural asymmetry includes differences in hearing thresholds, loudness growth, speech understanding scores, place-of-stimulation of each auditory nerve, and spectro-temporal patterns conveyed by each auditory nerve. While this review focuses primarily on listeners with BiCIs, individuals with hearing loss who use other hearing assistive technologies such as bilateral hearing aids, hybrid CIs, or a unilateral CI and hearing aid are also likely to be impacted by these kinds of asymmetries. Thus, this review aims to characterize the impacts of many different kinds of interaural asymmetry, such that they can be generalized to numerous clinical populations.
Binaural benefits are perceptual advantages that can occur when listeners have access to sound from two ears instead of one. They are usually measured under conditions in which sound sources lie on the horizontal plane, with measurements focusing on sound source localization and on the improvement in speech understanding in noise when target and masking sounds are spatially separated compared to when they are co-located (i.e., spatial release from masking). There are some monaural components involved in sound source localization (loudness cues and spectral cues) and in improvements in speech understanding in noise (head shadow). Binaural benefits include binaural redundancy or summation, which results from having access to the auditory signal through both ears (Carhart, 1965), and binaural squelch, which requires a comparison of the signals in the left and right ears (Harris, 1965), for example when the signal phase differs between the ears or when target and masker are spatially separated. Benefits of bilateral hearing include those driven by binaural cues (e.g., Middlebrooks & Green, 1991; Swaminathan et al., 2016), the two primary binaural cues being interaural level and time differences (ILDs and ITDs, respectively). These cues result from differences in a sound's level (ILDs) and time of arrival (ITDs, conveyed in the temporal envelope and temporal fine-structure) at the two ears, which depend on the source's horizontal location relative to the head.
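To give a sense of the physical scale of these cues, the short Python sketch below estimates the largest ITD produced by a typical adult head using the standard spherical-head (Woodworth) approximation; the head radius and speed of sound are assumed round numbers and are not taken from the studies cited in this review.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_mps=343.0):
    """Approximate ITD (in seconds) for a far-field source at a given azimuth,
    using the spherical-head (Woodworth) model: ITD = (a / c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_mps) * (theta + np.sin(theta))

if __name__ == "__main__":
    for azimuth in (0, 30, 60, 90):
        print(f"azimuth {azimuth:>2} deg -> ITD ~ {woodworth_itd(azimuth) * 1e6:5.0f} us")
```

At 90° this yields roughly 650–700 µs, consistent with the range of ITDs described later in this review as physiologically plausible (≤800 µs).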
Binaural cues (i.e., ITDs and ILDs) help to disambiguate and localize independent sound sources in complex auditory environments. Throughout this manuscript, “cues” refer to acoustical qualities present in a sound that may or may not be well-represented by the auditory system, but are suspected to be useful for distinguishing sound sources. In contrast, we will also discuss features, which refer to the internal representation or perception of sounds that can be used to distinguish between sound sources. Examples of sound features related to spatial cues are perceived location and perceived width of a sound image. Thus, cues refer to aspects of a signal (i.e., external to the listener) and features refer to perceptual qualities (i.e., internal perceptions), both of which are intimately related and involved with segregation of sound sources in complex environments. The next two sections of this review will elaborate on these ideas in the context of completed research.
Bottom-Up Processing: The Superior Olive
Overview of the Lateral Superior Olive
Binaural cues must be represented by the auditory system in order to be useful. The circuits thought to encode binaural cues in the NH system are well-characterized (for review, see Yin et al., 2019). However, there are differences between listeners with BiCIs and those with NH that warrant distinct consideration of their bottom-up processing. Firstly, most commercially available CI processing strategies discard temporal fine-structure (Loizou, 2006), rendering these cues inaccessible to listeners with CIs. Recent measurements evaluating the output of CI processors confirm that only ILDs and ITDs in the temporal envelope are preserved (Gray et al., 2021). Secondly, even when temporal fine-structure is available using research devices and processing strategies, listeners with BiCIs do not seem to benefit from fine-structure ITDs during localization and related tasks (Ausili et al., 2020; Dennison et al., 2022; Fischer et al., 2021), although they do show sensitivity to ITDs in the envelope as well as the fine structure during discrimination tasks (e.g., Anderson et al., 2019a; van Hoesel & Tyler, 2003). Thus, localization instead seems to be facilitated primarily by ILDs for listeners with BiCIs (Aronoff et al., 2010; Grantham et al., 2008; Seeber & Fastl, 2008). Thirdly, most CI arrays do not extend to the most apical portion of the cochlea. If fine-structure processing in the spiral ganglion is indeed place-specific (for review, see Moore, 2008), the depth to which the electrode array is inserted may be insufficient (Goupell et al., 2022). Fourthly, stimulation provided by CIs is made up of transient electrical pulses, which are amplitude-modulated by the envelope of the sound being represented (Loizou, 2006). Listeners with BiCIs also show reliable evidence of “rate limitations,” meaning that envelope ITD sensitivity declines at pulse rates (i.e., fine-structure rates) or amplitude modulation (AM) rates above 300 Hz (Anderson et al., 2019a; Kan & Litovsky, 2015; Laback et al., 2015).
One circuit in the auditory brainstem, the lateral superior olive (LSO; Figure 1), seems particularly equipped to represent binaural cues for listeners with BiCIs. The LSO corresponds to the second and third synapse from the ipsilateral and contralateral sides of the brainstem, respectively, indicating that binaural cues are encoded early in auditory processing. The LSO is thought to have evolved to encode ITDs in transient sounds and to perform preprocessing for ILD coding in the inferior colliculus (Brown & Tollin, 2016; Joris & Trussell, 2018). Accordingly, it shows sensitivity to ITDs conveyed by broadband transients (Beiderbeck et al., 2018; Franken et al., 2021) and temporal envelopes (Brown & Tollin, 2016; Joris & Yin, 1995). The LSO receives excitatory and inhibitory input from the ipsilateral and contralateral ear, respectively (Boudreau & Tsuchitani, 1968), and acts as an anticoincidence detector, responding with the lowest spike rates around 0-µs ITDs and showing ITD-dependent firing rates (Beiderbeck et al., 2018). Time constants in the LSO circuit are among the fastest in the central nervous system (Brown & Tollin, 2016; Franken et al., 2018), as are those in the medial superior olive, which is suspected to code for fine-structure ITDs (Golding & Oertel, 2012). Most cells in the LSO have high characteristic frequencies (Guinan et al., 1972; Sanes et al., 1989; Tsuchitani, 1977, 1997). High-frequency cells in the LSO show a low-pass characteristic for envelope ITD sensitivity (Joris, 1996; Joris & Yin, 1998; Remme et al., 2014), consistent with the rate limitations observed in listeners with BiCIs, as well as those observed in listeners with NH listening to high-frequency transients (Anderson et al., 2019a; Bernstein & Trahiotis, 2014; Monaghan et al., 2015). These perceptual rate limitations are also present for monaural temporal discrimination in listeners with CIs (Ihlefeld et al., 2015; Kong et al., 2009; Kong & Carlyon, 2010). Monaural temporal sensitivity is predictive of binaural sensitivity at the same rates (Ihlefeld et al., 2015), suggesting a common rate-limiting mechanism that parallels processing in the LSO. The LSO is also sensitive to changes in ILDs, with ILDs favoring the ipsilateral and contralateral ear resulting in greater and lesser spike rates, respectively (Boudreau & Tsuchitani, 1968; Tollin & Yin, 2002). Thus, deafness or CI stimulation, which affect LSO processing, also likely result in changes to the encoding of ITDs conveyed by transients, ITDs in the temporal envelope, and ILDs. Finally, LSO responses adapt to preceding ITDs and ILDs over time, indicating a mechanism that could compensate for consistently biased input (Beiderbeck et al., 2018; Gleiss et al., 2019).
Figure 1.
(A) Bottom-up binaural processing via the LSO. Stimulation arrives at the auditory nerve via electrical pulses and travels through action potentials to the LSO. (B) LSO output for accurately encoded binaural cues. The x-axis corresponds to ILD in dB or ITD in µs. The y-axis corresponds to the spike rate measured by single unit recording. Spike rates are modulated by binaural cue, showing distinct responses for different ILD or ITD magnitudes. (C) LSO output for poorly encoded binaural cues. The x-axis corresponds to ILD in dB or ITD in µs. The y-axis corresponds to the spike rate measured by single unit recording. Spike rates are weakly modulated by binaural cue, showing indistinct responses for different ILD or ITD magnitudes. ILD=interaural level difference; ITD=interaural time difference; LSO=lateral superior olive.
It is important to mention the other major contributor to localization in humans: the medial superior olive (MSO). The MSO is an anatomical neighbor of the LSO and receives its input at a similar stage in the auditory pathway. The MSO is tuned primarily to low frequencies (Guinan et al., 1972; Pecka et al., 2008) and demonstrates sensitivity to low-frequency ITDs in the fine-structure of sounds for animals with NH (Goldberg & Brown, 1969; Yin & Chan, 1990). Listeners with NH show sensitivity to fine-structure ITDs up to about 1500 Hz (e.g., Brughera et al., 2013). Biophysically modeled MSO responses to ITDs become less distinct at high stimulation rates (∼1500 Hz; e.g., Brughera et al., 2013), a limit that exceeds the rate limitations of the LSO. Because most CI electrode arrays only stimulate mid- to higher-frequency neurons (Goupell et al., 2022), it is unlikely that they stimulate the MSO. In animals, the MSO also shows poorer sensitivity to ITDs of transients compared to the LSO (Franken et al., 2021), suggesting that its role in ITD processing with BiCIs may be limited. Despite this, studies of ITD processing among listeners with BiCIs often include data from “star” listeners, most notably those who can discriminate temporal pitch or ITDs at very high pulse rates (e.g., Goupell, 2015; Kong & Carlyon, 2010; Laback et al., 2015). Because these “star” performers’ results on spatial tasks may be more consistent with MSO processing than LSO processing, we consider it crucial to recognize the possible role of the MSO for localization in listeners with BiCIs. Throughout this manuscript we focus primarily on the LSO because its properties correspond more closely with the performance of a typical listener with BiCIs.
Figure 1 shows an illustration of the LSO circuit. The cells in the cochlear nucleus that provide excitatory ipsilateral input to the LSO are still debated (Cant & Casseday, 1986; Doucet & Ryugo, 2003); spherical bushy cells are one candidate. The contralateral input arrives through the medial nucleus of the trapezoid body (MNTB) and is conveyed first by globular bushy cells (Smith et al., 1991, 1998). Compared to the auditory nerve, both spherical and globular bushy cells show improved phase locking to the temporal fine-structure (Joris et al., 1994; Joris & Smith, 2008; Joris & Yin, 1998) and envelope (Rhode & Greenberg, 1994) of stimuli. These cells act as monaural coincidence detectors, firing when inputs are temporally coincident within some window of sensitivity. Monaural and binaural coincidence detectors have characteristically low input resistances, resulting from low-voltage-gated potassium channels, which give rise to very short time constants (Golding & Oertel, 2012). For spherical bushy cells, it is thought that precisely timed inhibition facilitates spiking, as inputs capitalize on the repolarization channels that are already open (Keine & Rübsamen, 2015). This same sort of early, facilitatory inhibition is present in LSO cells and gives rise to their ITD response properties (Beiderbeck et al., 2018). For globular bushy cells, high temporal precision is achieved by combining inputs from many auditory nerve fibers (Rothman et al., 1993; Spirou et al., 2005). The role of inhibition in globular bushy cells is less clear, but blocking inhibition increases their overall spike rate (e.g., Caspary et al., 1994). Excitatory signals from globular bushy cells become inhibitory upon reaching the MNTB, and the MNTB shares extremely similar temporal characteristics with globular bushy cells (Banks & Smith, 1992). Thus, shifts in the number of auditory nerve fibers or in the balance between excitation and inhibition in this circuitry would likely result in poorer phase locking of bushy cells, and therefore less temporal precision among inputs to the LSO. The balance of excitation and inhibition is also clearly important for the LSO itself.
Figure 1B and C show illustrations of the types of changes in output that one would expect with deterioration of the LSO circuit, or more generally the encoding system. Of particular concern is the distinctness, in the output of the encoder, between varying magnitudes of ITD or ILD. In recordings that measure neuronal firing rates, the distinctness between neural responses associated with different spatial cues is usually quantified via neural d' (Smith & Delgutte, 2007) or mutual information (Buck et al., 2021; Thornton et al., 2021). Neural d' is computed as the difference between the mean firing rates of two distributions divided by their pooled standard deviation, based on the ubiquitous d' statistic from the psychophysics literature (Green & Swets, 1966). Mutual information is an information-theoretic measure that quantifies, in bits, how much the output of the encoder conveys about the binaural cue being presented; in other words, it tells the researcher how much can be known about the binaural cue based solely on the encoder's output.
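To make these two metrics concrete, the following minimal Python sketch (illustrative only, not code from the cited studies) simulates Poisson-distributed spike counts from a hypothetical LSO-like unit for two ITD values, once with well-separated firing rates (as in Figure 1B) and once with nearly overlapping rates (as in Figure 1C), and then computes neural d' and a histogram-based estimate of mutual information; the firing rates and trial counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_dprime(counts_a, counts_b):
    """Neural d': difference in mean spike count divided by the pooled standard deviation."""
    pooled_sd = np.sqrt(0.5 * (np.var(counts_a, ddof=1) + np.var(counts_b, ddof=1)))
    return abs(np.mean(counts_a) - np.mean(counts_b)) / pooled_sd

def mutual_information_bits(counts_by_cue):
    """Estimate I(cue; spike count) in bits from spike-count histograms,
    assuming each cue value is presented equally often."""
    max_count = max(c.max() for c in counts_by_cue)
    bins = np.arange(max_count + 2)
    joint = np.array([np.histogram(c, bins=bins)[0] for c in counts_by_cue], dtype=float)
    joint /= joint.sum()                        # joint distribution P(cue, count)
    p_cue = joint.sum(axis=1, keepdims=True)    # marginal over spike counts
    p_count = joint.sum(axis=0, keepdims=True)  # marginal over cues
    nonzero = joint > 0
    return float(np.sum(joint[nonzero] * np.log2(joint[nonzero] / (p_cue @ p_count)[nonzero])))

n_trials = 200
# Hypothetical mean spike counts for two ITD values: well separated for a distinct
# encoder (Figure 1B) versus nearly overlapping for an indistinct one (Figure 1C).
for label, (rate_itd1, rate_itd2) in {"distinct": (5, 25), "indistinct": (14, 16)}.items():
    counts_1 = rng.poisson(rate_itd1, n_trials)
    counts_2 = rng.poisson(rate_itd2, n_trials)
    print(f"{label:10s}  d' = {neural_dprime(counts_1, counts_2):4.2f}   "
          f"MI ~ {mutual_information_bits([counts_1, counts_2]):4.2f} bits")
```

With these made-up parameters, the distinct encoder yields a large d' and close to 1 bit of information about which ITD was presented, whereas the indistinct encoder yields values near zero.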
No published studies have directly investigated the effects of different sources of interaural asymmetry in the auditory periphery on LSO responses. This probably has to do with the technical challenges associated with making recordings in the superior olivary complex, where the LSO is housed. However, our understanding of its circuitry can be leveraged to make reasonable predictions about how LSO output would change based upon deterioration of its inputs. The LSO and any neurons sensitive to microsecond ITDs have such specialized mechanisms that any decrement to structures earlier in the auditory pathway is suspected to lead to poorer binaural processing. Various studies have been conducted concerning sources of interaural asymmetry that are directly relevant for LSO processing. The rest of this section on bottom-up processing is dedicated to a discussion of these sources.
Auditory Deprivation
Auditory deprivation for patients with BiCIs has received considerable attention in the literature. This section is divided into two parts because auditory deprivation early versus late in life is suspected to have unique impacts on the auditory periphery and brainstem. Most listeners with early onset of deafness and BiCIs are either children or young adults because pediatric implantation at a young age has only become common within the past 10 to 20 years; decision making regarding age at implantation has been influenced heavily by factors such as financial constraints and, for patients in the United States, indications approved by the Food and Drug Administration. In contrast, listeners who receive CIs in adulthood may have experienced either short or long periods of auditory deprivation. This indicates at least three clinical subpopulations. Further, many listeners with BiCIs undergo sequential implantation (Holder et al., 2018; Peters et al., 2010), meaning that each ear experiences a different period of auditory deprivation. As a guiding principle, we will distinguish between bilateral and unilateral auditory deprivation throughout the text.
Early in Development: Central Auditory Changes
The auditory brainstem is thought to undergo activity-dependent reorganization during a critical window of early development (Sanes & Bao, 2009; Takesian et al., 2009). This reorganization coincides with increasing head size during a child's development, which changes the relationship between binaural cues and physical locations in space, requiring the auditory system to remain plastic and remap cues to spatial locations (Anbuhl et al., 2016; Clifton et al., 1988; Litovsky & Ashmead, 1997). Accordingly, ITD discrimination thresholds in listeners with NH are elevated during periods of head growth (Litovsky & Ashmead, 1997) and later decrease once the head stops growing.
Early exposure to coherent ITDs in the temporal fine structure of a signal is likely important for development of brainstem circuits that are sensitive to ITDs (Seidl & Grothe, 2005). For listeners with BiCIs, an early, bilateral onset of deafness is associated with greater errors in sound source localization (Anderson et al., 2022a; Asp et al., 2015; Grieco-Calub & Litovsky, 2010; Killan et al., 2019; Litovsky et al., 2010; Steffens et al., 2008; Strøm-Roum et al., 2012; Van Deun et al., 2010) and poorer sensitivity to ITDs (Ehlers et al., 2017; Litovsky et al., 2010). However, these listeners retain or develop sensitivity to ILDs, which do not require the same temporal precision as ITDs (Brown & Tollin, 2016). One challenge in these types of correlational studies is that many factors predictive of binaural performance in children are interrelated (e.g., experience with BiCIs, age at onset of deafness, duration without CI stimulation), making it difficult to disentangle which factors result in worse performance. Animal models help to illuminate specific mechanisms.
These issues have also been addressed in part by animal models of early, bilateral onset of deafness. In rabbits stimulated with BiCIs, early bilateral deafness is associated with poorer tuning to ITDs in neurons recorded from the inferior colliculus (Chung et al., 2019; Hancock et al., 2013). The inferior colliculus receives direct projections from the LSO and MSO (Schofield, 2005) and shows similar ILD and ITD tuning characteristics (Brown & Tollin, 2016), suggesting that it is reflective of earlier stages of binaural processing. Evidence for the importance of providing coherent ITDs via BiCI stimulation early in development has emerged from studies in congenitally deaf rabbits and rats; it appears that sensitivity to ITDs is preserved or recovered in these animals with sufficient sensory experience or training (Buck et al., 2021; Rosskothen-Kuhl et al., 2021; Sunwoo et al., 2021).
Humans and other animals who experienced intermittent or prolonged periods of asymmetric hearing loss early in life also show evidence of compromised binaural processing. Human listeners with congenital, unilateral hearing loss, who did not receive coherent ITDs during childhood, show reduced sensitivity to binaural masking level differences for up to 2 years after the hearing loss is surgically corrected (Wilmington et al., 1994). Guinea pigs and ferrets with intermittent, asymmetric hearing loss induced by earplugs showed poorer localization after the earplugs were removed (Anbuhl, 2017; Clements & Kelly, 1978; Moore et al., 1999). Using the same paradigm, chinchillas showed poorer correspondence between ILD and spike rate in the inferior colliculus when the earplug was removed (Thornton et al., 2021). Critically, these prolonged periods of poor sensitivity to spatial cues do not correspond with the millisecond- to second-long periods of neural adaptation to stimuli observed in the LSO (Beiderbeck et al., 2018; Gleiss et al., 2019), MSO (Lingner et al., 2018; Stange et al., 2013), and inferior colliculus (Dahmen et al., 2010), suggesting that these changes are pathophysiological. Results from studies of early deafness on responses in the inferior colliculus suggest a shift in the balance of excitation and inhibition: inhibition in the inferior colliculus appears to increase (i.e., the ratio between excitation and inhibition, or “suppression,” shifts; Chung et al., 2019). Moreover, the proportion of cells showing sensitivity to binaural stimulation decreases drastically and is accompanied by a proportional increase in sensitivity to monaural stimulation (Thornton et al., 2021). Children who grow up with bilateral deafness and receive a unilateral CI show a similar bias in their auditory brainstem responses (Gordon et al., 2014; Polonenko et al., 2015). This last result will be discussed in more detail when considering hemispheric lateralization.
Throughout the Lifespan: Peripheral Deterioration
Recent evidence from adults with BiCIs demonstrated that ITD sensitivity is predicted by the interaction between the duration of bilateral hearing impairment (i.e., the period of time between hearing loss onset and age at second implant) and the period of time with at least one CI (Thakkar et al., 2020). In other words, the decrement in performance associated with prolonged bilateral hearing impairment was mitigated by a longer period of time with at least one CI, and vice versa. This suggests that auditory input, and especially bilateral auditory input, is important for preserving binaural circuitry even after development.
In cases where deafness occurs early in life, proper binaural connectivity may never form. In contrast, when deafness occurs later in life, there is a risk of deterioration of the existing circuitry. Particular attention has been paid to changes in monaural processing that might contribute to poorer binaural sensitivity. Long periods of auditory deprivation in humans and other animals are associated with deterioration of dendritic processes and cell death (Leake & Hradek, 1988; Nadol et al., 1989; Nadol, 1997; Shepherd & Javel, 1997; Spoendlin & Schrott, 1989). This cell death can occur somewhat uniformly across the auditory nerve or in large subpopulations called “dead regions” (Moore, 2004). Prolonged auditory deprivation also leads to demyelination of auditory nerve axons (Shepherd & Hardie, 2001; Spoendlin & Schrott, 1989). Studies of human temporal bones harvested postmortem suggest that the etiology and onset of hearing loss are associated with differing amounts of auditory nerve fiber survival, with listeners who experienced early deafness demonstrating the least auditory nerve fiber survival (Nadol et al., 1989). However, this may be confounded with development or prolonged auditory deprivation. Noise-induced hearing loss has been associated with demyelination of auditory nerve fibers (Tagoe et al., 2014; Wan & Corfas, 2017). Thus, demyelination of auditory nerve fibers, deterioration of dendrites, and death of neurons may represent the pathophysiological changes that occur in response to hearing loss and auditory deprivation.
The aforementioned structural changes are associated with temporal response changes in the auditory nerve. Loss of peripheral dendrites is correlated with shifts in the latency of action potential initiation in models of auditory nerve fibers with CI stimulation (Goldwyn et al., 2010). Loss and demyelination of peripheral dendrites is also associated with poorer refractory properties of the auditory nerves of mice and rats (Shepherd et al., 2004; Zhou et al., 1995). Since bushy cells refine the temporal precision of auditory nerve fibers, it seems likely that the temporal fidelity of their output would decrease as the temporal fidelity of the input also decreases. In particular, loss of auditory nerve fibers has been related to poorer temporal response properties of models of globular bushy cells (Ashida et al., 2019). Shifts in excitation and inhibition may be more likely to impact the responses of spherical bushy cells, and loss of auditory nerve fibers may be more likely to affect globular bushy cells. Because bushy cells refine tuning before providing input to binaural neurons (Joris et al., 1994), poorer temporal response properties of bushy cells on one or both sides are suspected to degrade binaural processing.
Most of the changes discussed so far have involved the peripheral auditory system, but some changes above the level of the auditory nerve likely occur in listeners who experience a later onset of deafness. The amount of myelin on MNTB axons is regulated by activity during development and into adulthood (Sinclair et al., 2017). Axon diameter also increases in an activity-dependent fashion, but only during development (Sinclair et al., 2017). Axon diameter and myelination both affect the conduction velocity of action potentials, with more heavily myelinated and wider axons conveying action potentials more quickly (e.g., Gillespie & Stein, 1983). Both of these factors were also associated with decreased maximum firing rate of MNTB cells in a computational model (Sinclair et al., 2017). Demyelination of axons in a computational model of the MSO has been associated with poorer response properties to ITDs (Li et al., 2022), suggesting that myelination may be a key factor in maintaining the exquisite timing properties required of binaural circuitry. Peripheral degradation may therefore limit the encoding of temporal information as it arrives at the nerve. Other factors involving the CI array may act prior to and at the auditory nerve to prevent this information from being accurately or symmetrically represented in the first place; these factors are discussed in the rest of this section.
Intracochlear Implant Array Position and Interaural Place Mismatch
The distance between the CI electrode array and auditory nerve fibers varies depending upon the electrode type, cochlear anatomy, and surgical outcomes (Chakravorti et al., 2019; Goupell et al., 2022; Wanna et al., 2014). Greater distance from the auditory nerve fibers and modiolus results in higher detection thresholds (Schvartz-Leyzac et al., 2020) and is also thought to lead to increased spatial spread as the current travels from CI electrodes to auditory nerve fibers. Because the temporal fluctuations on neighboring electrodes are not necessarily related, current spread also results in temporal smearing. Therefore, multi-electrode stimulation that is used in clinical processing strategies can result in spectro-temporal smearing of the signal, referred to as channel interaction. Reduced spectro-temporal fidelity is associated with poorer speech understanding in listeners with CIs (Croghan et al., 2017; Friesen et al., 2001). In simulations with listeners who have NH, reduced spectro-temporal fidelity leads to decreased spatial unmasking of speech whether it is interaurally symmetric (Gallun et al., 2007) or asymmetric (Goupell et al., 2021).
Electrode arrays are surgically placed in the scala tympani of the cochlea, a location in close proximity to the auditory nerve with minimal physical obstructions between the current source and auditory nerve fibers. Of particular concern are translocations of CI electrode arrays into other cochlear scalae. Scalar translocations are one of the strongest predictors of poor monaural speech intelligibility (Chakravorti et al., 2019; Wanna et al., 2014). Scalar translocations result in substantially greater distance between the electrode array and auditory nerve fibers, requiring that current travel through additional tissue, which increases electrical resistance (Dong et al., 2021). One study evaluating the prevalence of unilateral translocations found them for at least one electrode in one ear in 50% of listeners (Goupell et al., 2022). Critically, because translocations are strongly associated with speech understanding, these differences in the placement of the electrodes within the cochlea can result in differences in the fidelity of speech information that impair bilateral benefits and even lead to interference. Translocations can also smear or alter the spectro-temporal relationship between the ears, limiting the LSO's ability to compute binaural cues.
Previous studies attempting to optimize ITD sensitivity for patients with BiCIs have strived to match interaural place-of-stimulation (Bernstein et al., 2021; Hu & Dietz, 2015; Kan et al., 2013; Long et al., 2003; van Hoesel, 2008; van Hoesel & Tyler, 2003). Even when not stated explicitly in these studies, attempting to match place-of-stimulation was most likely motivated by physiological findings supporting the axiom that the MSO processes ITDs and relies on frequency-matched inputs (Day & Semple, 2011; Goldberg & Brown, 1969), and by classical cross-correlation based models of binaural processing (Jeffress, 1948; Yin & Chan, 1990). The LSO also relies on frequency-matched input from the two ears (Boudreau & Tsuchitani, 1968). A recent study investigating the binaural interaction component of the auditory brainstem response (thought to reflect activity in the LSO; Benichoux et al., 2018; Laumen et al., 2016) in chinchillas (Brown et al., 2019) and humans (Sammeth et al., 2023) found that it was modulated by ITD and interaural place-of-stimulation mismatch: the effect of ITD on the binaural interaction component decreased with increasing interaural place-of-stimulation mismatch. Similar effects have been documented in listeners with BiCIs (He et al., 2010; Hu & Dietz, 2015). This finding corresponds strongly with decreased ITD sensitivity and poorer intracranial lateralization ranges in listeners with BiCIs (Goupell et al., 2022; Kan et al., 2013, 2019). Increased interaural place-of-stimulation mismatch can induce poorer sensitivity to interaural decorrelation (Goupell, 2015), which is associated with less spatial fusion or more frequent reports of multiple sound images (Kan et al., 2013, 2019). Place-of-stimulation mismatch also results in reduced spatial release from masking in simulations with listeners who have NH (Goupell et al., 2018b).
Unlike other sources of interaural asymmetry, place-of-stimulation mismatch likely reflects the binaural system having fewer “looks” (i.e., fewer neurons processing binaural cues) at the binaural cues conveyed in the stimulus. Thus, it seems highly likely that spectro-temporal degradations would add a secondary, orthogonal effect, making it extremely challenging for the LSO and other frequency-matched binaural structures like the MSO to make accurate binaural computations. Consistent with this hypothesis, a recent study with simulations in listeners with NH showed that reducing AM depth and increasing interaural place-of-stimulation mismatch have additive effects on ITD lateralization (Anderson, 2022). Other studies showed reduced interaural decorrelation detection with increasing interaural place-of-stimulation mismatch in listeners with NH and BiCIs (Goupell, 2015). That is, image width, thought to be the primary perceptual feature used in decorrelation detection (Whitmer et al., 2014), may be much more difficult to distinguish when the sound image is already quite diffuse. For listeners who receive interaurally decorrelated inputs due to differences in loudness growth between the ears, a sound with 100% interaural correlation in the signal would have a correlation closer to zero in the brainstem. Listeners with NH show poorer sensitivity in decorrelation detection experiments with a reference condition of 0% interaural correlation compared to a reference condition of 100% (Goupell, 2012, 2015; Goupell & Litovsky, 2013). Listeners with BiCIs show reduced sensitivity to interaural decorrelation compared to listeners with NH, making it difficult to assess whether there are different patterns between reference conditions in either group (Goupell, 2015; Kan et al., 2015).
System-Level Measures of Electrode–Neuron Interface
It is sometimes more efficient to consider the state of encoding at the auditory nerve at the system level, that is, as more or less “healthy” or ideal when indexed using a perceptual task, rather than trying to detect or diagnose specific causes. Loudness growth has been proposed as a measure of the electrode–neuron interface (Bierer & Nye, 2014) and differs depending upon the electrode being stimulated within the same ear (Bierer & Nye, 2014). For some patients, the levels resulting in a perceptually centered image during bilateral stimulation closely match those measured monaurally (Fitzgerald et al., 2015); for others, these levels do not result in a centered image (Fitzgerald et al., 2015). Presumably, differences in loudness growth would also be observed between the electrodes in either ear, though this has not been tested systematically by measuring bilateral loudness growth or unilateral loudness growth in both ears of the same individual. Studies showing decreased sensitivity to ITDs for high-rate, amplitude-modulated compared to low-rate, constant-amplitude pulse trains (Anderson et al., 2019a; van Hoesel & Tyler, 2003) provide indirect evidence of differences in loudness growth between the electrodes of a pair. This is because listeners with BiCIs do not show improved ITD discrimination for sharp compared to shallow or sloping temporal onsets in the envelope (Laback et al., 2011), unlike listeners with NH (Bernstein & Trahiotis, 2002). Balancing of bilateral loudness at a comfortable level does not alter lateralization of ITDs, but does improve lateralization of ILDs under conditions of large interaural place-of-stimulation mismatch (Goupell et al., 2013a). To our knowledge, no attempts have been made to balance loudness growth between ears for listeners with BiCIs. Instead, CI processing algorithms may apply the same loudness growth function to each electrode based upon its threshold and comfortable loudness levels, which could contribute to spurious ILDs or disruption of envelope ITD coding.
While interaural differences in loudness growth have not been evaluated, there are known contributions of dynamic range, that is, the amount of electrical current corresponding to a comfortably loud level minus the amount of current required to detect a sound in quiet, for listeners with BiCIs. A smaller dynamic range has been associated with poorer sensitivity to ITDs within the same listener (Todd et al., 2017); that is, the ITD sensitivity of a particular electrode pair can be predicted by the dynamic ranges of both electrodes. Similarly, as the AM depth of a stimulus decreases, so does ITD sensitivity for listeners with BiCIs (Ihlefeld et al., 2014; van Hoesel & Tyler, 2003). Studies in listeners with NH have shown that interaurally symmetric or asymmetric reductions in AM depth are associated with less sensitivity to interaural phase differences and ITDs in the envelope (Anderson et al., 2019b; Anderson, 2022). It is well-documented that the dynamic range of CI electrodes varies across the electrode array (e.g., Long et al., 2014; Todd et al., 2017). This can be due to many factors, including the distance from the electrode to the modiolus and the health of the auditory nerve (Schvartz-Leyzac et al., 2020). Thresholds measured using “focused stimulation” strategies, which employ negative currents on neighboring electrodes to restrict the neural populations being excited, have been proposed as an index of the quality of the interface between CI electrodes and auditory nerve fibers (Bierer, 2010), and may be related to the dynamic range of the electrodes (Long et al., 2014).
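As a simple worked illustration of the dynamic range measure described above, the hypothetical sketch below computes per-electrode dynamic range in each ear and the resulting interaural difference; the threshold (T) and comfortable (C) levels and the “clinical units” are invented values, and real devices differ in their current units and fitting conventions.

```python
# Hypothetical threshold (T) and comfortable (C) levels, in arbitrary clinical
# current units, for three interaurally matched electrode pairs.
levels = {
    "left":  {"T": [100, 110, 120], "C": [180, 200, 170]},
    "right": {"T": [105, 130, 115], "C": [160, 210, 200]},
}

def dynamic_range(t_levels, c_levels):
    """Dynamic range per electrode: comfortable level minus threshold."""
    return [c - t for t, c in zip(t_levels, c_levels)]

dr_left = dynamic_range(levels["left"]["T"], levels["left"]["C"])
dr_right = dynamic_range(levels["right"]["T"], levels["right"]["C"])

for pair, (dl, dr) in enumerate(zip(dr_left, dr_right), start=1):
    print(f"electrode pair {pair}: DR left = {dl:3d}, DR right = {dr:3d}, "
          f"interaural DR difference = {abs(dl - dr):3d}")
```

Under the findings above, pairs in which either electrode has a small dynamic range would be expected to show poorer ITD sensitivity.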
Differences in loudness growth and dynamic range between ears may be analogous to stimulus-independent, time-varying differences in sound level in each ear for stimuli with complex temporal envelopes like speech. Random, time-varying amplitude fluctuations increase the amount of interaural decorrelation, obscuring ITDs and ILDs in the signal at any moment in time. The effect of interaural decorrelation on binaural processing has been studied in listeners with NH. These studies found that as the amount of interaural decorrelation increases, ITD sensitivity declines (Buchholz et al., 2018) and the ability to understand speech in noise decreases (Swaminathan et al., 2016). Similarly, as interaural decorrelation increases, the perceived width of a sound image increases or the sound image becomes more diffuse (Whitmer et al., 2014), which could contribute to the limited binaural benefits observed under such conditions.
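The following minimal simulation (a sketch under arbitrary assumptions about modulation rate, segment length, and fluctuation size, not a model from the cited studies) illustrates this point numerically: imposing independent, random level fluctuations on otherwise identical envelopes at the two ears reduces the normalized interaural correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

def interaural_correlation(left, right):
    """Normalized correlation between the two ear signals (1.0 = identical)."""
    return float(np.corrcoef(left, right)[0, 1])

fs = 16000                                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)                       # 500-ms stimulus
envelope = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))   # 10-Hz amplitude modulation

n_segments = 50
segment_len = len(t) // n_segments
for fluctuation_sd_db in (0, 1, 3, 6):
    # Independent, slowly varying level fluctuations (in dB) in each ear, standing in
    # for interaural differences in loudness growth or dynamic range.
    left, right = envelope.copy(), envelope.copy()
    for i in range(n_segments):
        segment = slice(i * segment_len, (i + 1) * segment_len)
        left[segment] *= 10 ** (rng.normal(0, fluctuation_sd_db) / 20)
        right[segment] *= 10 ** (rng.normal(0, fluctuation_sd_db) / 20)
    print(f"level fluctuation sd = {fluctuation_sd_db} dB -> "
          f"interaural correlation = {interaural_correlation(left, right):.3f}")
```

As the fluctuations grow, the correlation falls below 1, mirroring the decline in ITD sensitivity and speech-in-noise performance described above.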
Summary
Binaural processing in the LSO shows excellent correspondence with trends observed in listeners with BiCIs. This section overviewed sources of interaural asymmetry, their suspected impacts on the inputs to the LSO and its processing of binaural cues, and effects on perception. It is unlikely that these sources of interaural asymmetry occur in isolation and only recently has evidence begun to accumulate suggesting that they may interact with one another to produce especially deleterious effects on patient outcomes (Anderson et al., 2019b; Anderson, 2022).
Top-Down Processing: Predictive Coding
Overview of Predictive Coding
The goal of this section is to use a broad approach to encapsulate changes in top-down auditory processing associated with interaural asymmetry. In contrast to the earlier section, where we proposed one circuit that is primarily responsible for bottom-up encoding of binaural cues, in this section we discuss several bodies of literature that describe various top-down contributions to bilateral auditory processing. Perhaps the broadest framework useful for understanding top-down processing is predictive coding, which consists of a cascading series of circuits. Predictive coding supposes that the brain generates predictions about the outside world that are updated in light of new, aberrant sensory input (Rao & Ballard, 1999). The network of circuits responsible for predictive coding in the auditory domain likely includes at least the auditory cortex, inferior temporal cortex, intraparietal lobule, posterior dorsal field, and hippocampus, as evidenced in particular by neuroimaging studies of the time course of activation through the auditory system (Bizley & Cohen, 2013). An extremely detailed model of this network is not necessary to generate predictions about the relationship between interaural asymmetry and top-down processing. Models of predictive coding have already been used to describe changes associated with hearing loss (e.g., Kral et al., 2017). Predictive coding frameworks have also been applied to speech and language processing (Lupyan & Clark, 2015) and language development (Ylinen et al., 2017), suggesting that predictive coding may represent a general scheme for information processing in the brain.
Predictive coding lends itself to other conceptual models of auditory scene analysis. For example, Shinn-Cunningham (2008) described auditory scene analysis in terms of auditory objects that compete for attention. In that model, a listener can attend to a source of interest using cues encoded during earlier stages of processing (e.g., pitch and location cues). In a predictive coding framework, an auditory object is represented by a unique collection of neurons whose receptive fields correspond to different acoustic cues and whose activity can be modulated by attention. However, as the representations of those features become more or less distinct, so too do the boundaries between different sound sources. Mathematically, this can be represented using Bayesian inference, where a prior prediction (i.e., the internal perceptual model) is updated according to the data (i.e., aberrant sensory input; Rao & Ballard, 1999) in feedforward and feedback directions. Groups of neurons further downstream in auditory processing, then, represent hyperparameters of other distributions or probabilistic processes (i.e., distributions of distributions).
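To make the Bayesian description concrete, the following illustrative Python sketch (the candidate objects, likelihoods, and observations are invented for the dog-park example that follows, not taken from Rao and Ballard, 1999) updates a prior over two candidate sound sources as successive pitch observations arrive, once with distinct feature likelihoods and once with indistinct ones:

```python
import numpy as np

# Candidate auditory objects and a prior belief based on context (a dog park).
objects = ["dog", "bird"]
prior = np.array([0.7, 0.3])

# Likelihood of observing a "low" versus "high" pitch feature given each object.
# The "distinct" set mimics sharp feature representations (Figure 2, left panel);
# the "indistinct" set mimics smeared representations (right panel). All values
# are illustrative assumptions.
likelihoods = {
    "distinct":   {"low": np.array([0.80, 0.10]), "high": np.array([0.20, 0.90])},
    "indistinct": {"low": np.array([0.55, 0.45]), "high": np.array([0.45, 0.55])},
}

def bayes_update(belief, likelihood):
    """One update step: posterior proportional to likelihood times prior."""
    unnormalized = likelihood * belief
    return unnormalized / unnormalized.sum()

for condition, likelihood_set in likelihoods.items():
    belief = prior.copy()
    for observation in ["low", "low", "high"]:   # two barks, then a sudden chirp
        belief = bayes_update(belief, likelihood_set[observation])
    print(f"{condition:10s} -> " + ", ".join(f"P({o}) = {p:.2f}" for o, p in zip(objects, belief)))
```

With distinct likelihoods the posterior becomes confident, whereas with indistinct likelihoods it barely moves from the prior; the latter case parallels the lower prediction confidence attributed to listeners with CIs later in this section.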
Consider the following schematic example accompanying Figure 2. A listener is walking through a dog park. Based on the context, the auditory system will have a set of candidate objects that becomes narrowed over time via upstream and downstream predictions. When a dog barks, the group of neurons that represent lower pitch are excited, refining the upstream and downstream predictions. Activating similar low pitch neurons over time excites the neurons that represent flat pitch contours downstream in the hierarchy. When combined with other sensory information, the neurons that represent “dog” are excited. Predictions are propagated upstream and when sensory input conforms to these predictions, it generates minimal neural responsiveness. This process is ongoing such that predictions of the object as well as the confidence of those predictions are being updated in each group of neurons over time and in light of additional incoming stimulation. A similar process would unfold if the individual actively listens for or attends to a “dog,” making refined predictions. This scheme of sensory representation is highly efficient as it avoids redundancy at different feature levels and sensory input simply needs to be tested against predictions. Groups of neurons therefore respond primarily to prediction errors. If the listener knows a dog is present, the internal model of perception does not need all of the features to determine that the sound source was a dog. However, if a feature violates the expectation of “dog,” and instead corresponds more closely to “bird,” the system can update its internal model accordingly. Such would be the case if a bird suddenly starts chirping. The internal representation will look more like the case on the left in Figure 2 for a listener with good frequency resolution and sensitivity to pitch while the dog is barking.
Figure 2.
Schematic representations of distinct and indistinct stimuli. Units representing sound features corresponding to the stimulus are predicted, and units representing features different from the stimulus are not predicted. Distinctness corresponds to greater prediction confidence. Lines between blocks are meant to represent feedforward and feedback connections, providing excitatory or inhibitory input. Connections between blocks within a single layer (e.g., sideband inhibition) and connections between one block and itself are excluded to avoid visually cluttering the figure.
If instead the listener has CIs, which are known to convey frequency information with lower resolution and accordingly limit CI users' sensitivity to pitch (e.g., Reiss et al., 2018), their internal representation will look more like the example on the right. It will be more difficult to narrow down candidate objects, and the level of confidence in predictions will be lower overall. Note that if the auditory system were chronically stimulated as in the example on the right, neurons might begin to deteriorate due to understimulation. Overall excitation could increase to offset this problem (e.g., an increase in “central gain”; Auerbach et al., 2014), neurons could be repurposed for other sensory processing (e.g., from the opposite hemisphere if stimulation is asymmetric; Kral et al., 2013a; Polonenko et al., 2018a), or a reweighting of the inputs to the representations of “dog” and “bird” could occur (e.g., greater reliance on visual cues; Moberly et al., 2020). Poor object representation therefore results from reduced distinctiveness between features, not from low or high overall excitation. These maps of auditory objects to stimulus features may be distorted in children who have limited access to high-fidelity auditory cues, as suggested by poor performance in behavioral tasks for individuals who had limited acoustic experience (Anderson et al., 2022a; Thakkar et al., 2020). Similarly, the ability to rely on maps learned with high-fidelity auditory cues may be compromised (e.g., after sudden-onset hearing loss during adulthood), as suggested by variable or poorer performance in comparison to listeners with NH (e.g., Anderson et al., 2022a; Thakkar et al., 2020). Obviously, there are a considerable number of features that could be used to identify perceptual objects. However, the sensory information pictured in Figure 2 could become particularly important in certain contexts (e.g., if the sound source is out of the field of vision or in the presence of background noise). Note that this very simple example does not include the effects of temporal cues at varying durations (e.g., shape of the temporal envelope, F0 cues, call duration), which would very likely play a role. The intention here was not to be comprehensive, but to provide a simplified example of the interplay between some perceptual cues within a predictive coding framework. Moreover, the representation of the object (i.e., the dog) is not likely to “end” at a single neural locus representing all barking dogs; instead, it is likely distributed across many different neural loci representing a rich set of information about the object.
Rather than focusing on specific insults that lead to interaural asymmetry, the rest of this section addresses systems or processes thought to be involved directly or indirectly in auditory predictive coding. Some specific contributors to interaural asymmetry (e.g., auditory deprivation) will be discussed within each subsection.
Auditory Object Formation and Fusion
Auditory object formation is the process by which various features of a sound are combined into a singular, fundamental unit of auditory perception (for review, see Griffiths & Warren, 2004; Shinn-Cunningham, 2008; Shinn-Cunningham et al., 2017). Auditory objects carry information that can be derived or parsed. Auditory object formation is thought to occur on two different time scales: over short windows of time, when spectro-temporal components bind or “fuse” together based on similar properties, and over longer time scales, through the formation of auditory “streams” that can be tracked over time. Common onset time has been identified as particularly important for grouping which spectro-temporal segments belong to the same sound source (Darwin, 1997). Auditory streams are updated over time according to the internal predictive model. Both spectro-temporal fusion and auditory stream formation are important for perception, though less work has focused on auditory stream formation in the binaural literature.
As mentioned previously, the ears represent interdependent channels, meaning that the signals at one ear interact with those at the other, both acoustically before reaching the ears and neurally within binaural neurons of the central auditory system. Studies using a wide variety of paradigms suggest that information from each ear may be mandatorily integrated, even when this is disadvantageous to performance, in listeners with NH (Gallun et al., 2007; Kidd et al., 2003; Piechowiak et al., 2007; Schimmel et al., 2008; Zwislocki, 1971) and hearing loss (Bernstein et al., 2016, 2020; Goupell et al., 2016, 2018a; Lin et al., 2013; Oh et al., 2019; Reiss et al., 2016; Reiss & Molis, 2021). In headphone experiments, worsening of discrimination, identification, or speech understanding performance in the presence of contralateral stimulation is most widely referred to as contralateral interference, meaning that information in one ear interferes with the other. In listeners with BiCIs, as well as those who use hearing aids, it is difficult to distinguish between failures of attention to one side of the head and failures of auditory object formation.
Listeners with BiCIs may fuse the pitch of bilaterally presented stimuli over very disparate places-of-stimulation, corresponding to frequency differences greater than an octave (Reiss et al., 2018). The same pattern is observed in listeners who use hearing aids (Reiss et al., 2014a, 2017). Recent evidence shows a negative linear relationship between the pitch fusion range and the benefit attained from differences in fundamental frequency between a target and an interfering masker for listeners who use hearing aids (Oh et al., 2019). In other words, as the range of frequencies fused between ears increases, the benefit of frequency differences between speakers decreases. Additionally, there is a positive linear relationship between the fusion of pitch and of vowels with interaural frequency disparities in listeners who use hearing aids (Reiss & Molis, 2021), implying that listeners adversely fuse different vowels. Fusing stimuli that represent different words provides one perceptual mechanism that may contribute to contralateral interference and poor speech-in-noise performance for listeners with hearing loss. These experiments could not explicitly account for the broader ranges of interaural pitch and vowel fusion on the basis of unilateral pitch perception, suggesting a central auditory mechanism. Listeners with hearing loss who use hearing aids and/or CIs also show different psychometric functions associated with vowel continua (e.g., a nine-step continuum of mixtures of the vowels /IH/ and /EH/) between the ears (Reiss et al., 2016). When stimuli are presented bilaterally, the psychometric function does not always correspond to the psychometric function of either ear. This implies that the spectro-temporal profiles assigned to specific vowels differ between ears and are decoded differently when listening unilaterally compared to bilaterally. This is consistent with simulations of interaurally asymmetric temporal fidelity in listeners with NH, in which listeners were more likely to perceive a single word when stimuli were temporally degraded and more likely to misunderstand the word being presented (Anderson et al., 2023). Interestingly, very short onset time differences also modulate the number of vowels reported by listeners with hearing loss, but to a lesser extent than differences in fundamental frequency (Eddolls et al., 2022).
The literature concerning spatial fusion in listeners who use BiCIs is not consistent. Adults with BiCIs demonstrate slightly greater amounts of fusion across interaural place-of-stimulation mismatch in ITD lateralization experiments compared to listeners with NH (Kan et al., 2013, 2019; Long et al., 2003; van Hoesel & Clark, 1995, 1997). They also show fusion for very large ITDs (e.g., 4 ms; van Hoesel & Clark, 1995) compared to listeners with NH. Similarly, for listeners with BiCIs, very large ITDs (≥2000 µs) can be used to lateralize stimuli further to the left and right than smaller, more physiologically plausible ITDs that arise from real sound sources (≤800 µs) (Anderson et al., 2019a; Baumgärtel et al., 2017; Litovsky et al., 2010). These ITDs are also larger in magnitude than those at which listeners with NH begin to report hearing two sounds (e.g., Sayers, 1964). This insensitivity to interaural frequency and temporal cues might suggest that over-fusing sounds underlies poorer sound source localization (Suneel et al., 2017) and spatial release from masking (Goupell et al., 2018b). This is also supported by wider central masking functions compared to listeners with NH, whereby electrodes at disparate places-of-stimulation in the ear opposite the target result in higher detection thresholds (Lin et al., 2013; van Hoesel & Clark, 1997). On the other hand, adults with BiCIs show less fusion compared to listeners with NH for time-delayed pairs of stimuli with opposite ITDs (i.e., echo thresholds of lead-lag pairs) in precedence effect experiments (Brown et al., 2015). Less fusion corresponds with poorer lateralization of ITDs in adults when interaural place-of-stimulation mismatch is present (Goupell et al., 2013b; Kan et al., 2013, 2019). Moreover, children with BiCIs, particularly those with the earliest onset of deafness, tend not to fuse interaurally place-matched stimuli even with 0-µs ITDs (Salloum et al., 2010; Steel et al., 2015).
Assuming that place-based pitch is not a very salient cue for distinguishing sounds, and that ITDs as conveyed by clinical processors are not very useful, both sets of findings could be indicative of a similar underlying problem. That is, because the conveyed ITDs cannot be used effectively to segregate sound sources, the auditory system relies more heavily on other cues (e.g., ILDs) and may default to treating inputs from the two ears as always originating from the same source or always from different sources. The perceptual categorization of "one" or "two" sounds when the presented cues lack salience may therefore be somewhat arbitrary, as implied by Figure 2. In that framework, this corresponds to holding multiple candidate objects open rather than forming the distinct percept of a single auditory object when features are not salient.
Attention
Attentional shifting and modulation begin after auditory object formation and eventually work in parallel with object formation for ongoing stimuli. Auditory streams compete for attention, which is allocated using the features of auditory objects. Thorough reviews of object-based auditory attention have been published previously (Fritz et al., 2007; Shinn-Cunningham et al., 2017), and some of their findings are summarized here. Allocation of attention depends upon accurate representation of cues. Thus, when the features of different sounds are less distinct or attention is drawn elsewhere, performance in difficult listening conditions becomes poorer. Auditory attention can use the features of auditory objects to sort them into the foreground or background. This may manifest as suppression of neural responses related to objects in the background, as well as amplification and sharpened tuning of neural responses related to objects in the foreground. It may be that when multiple sound sources are present, the ability to listen for a desired source using a predictive network similar to Figure 2 is compromised in listeners with interaural asymmetry, such that the network makes poorer predictions or is biased toward the more salient source.
Interaural asymmetry of attention in listeners with BiCIs has only recently been explored. Results suggest attentional differences between the ears under sufficiently challenging task conditions. Many of these studies have used binaural unmasking tasks, where a mixture of target and masking speech is presented to one ear and compared against conditions where a copy of the masker is also presented to the other ear. Listeners with NH usually show an improvement in the latter condition, presumably because the masking speech is fused across the ears (e.g., Gallun et al., 2007; Goupell et al., 2021), resulting in a centered image (ITD = 0 µs, ILD = 0 dB), while the target speech is perceived toward the ear in which it is presented (effectively infinite ITD and ILD). Listeners with BiCIs who experienced a prolonged period of deafness, on the other hand, frequently show a decrement in performance when a copy of the masker is added to the other ear (i.e., contralateral interference; Goupell et al., 2018a). In this and related studies, the effect of adding a copy of the masker to the opposite ear depends upon the ear to which the target is presented. When target speech is presented to the better ear (defined by speech recognition scores), some listeners show improvements in speech perception, although these are typically smaller than those observed in listeners with NH. When target speech is presented to the poorer ear and there is significant interaural asymmetry, contralateral interference occurs for listeners with BiCIs (Bernstein et al., 2016; Goupell et al., 2016, 2018a), simulations in NH (Goupell et al., 2021), and listeners with a CI in one ear and NH in the other ear (Bernstein et al., 2020). Simulations in NH under similar conditions show evidence of increased listening effort via pupillometry (DeRoy Milvae et al., 2021). This is consistent with the idea of a mandatory shift in attention toward the more salient source.
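To make the paradigm concrete, the sketch below builds the two stereo conditions from monaural target and masker waveforms: the target-plus-masker mixture in one ear alone, and the same mixture with a copy of the masker added to the opposite ear. This is a generic illustration of the condition structure, not any specific study's implementation; the waveforms are placeholders.

```python
# Minimal sketch of the binaural unmasking / contralateral interference
# paradigm: stimuli are stereo arrays with shape (n_samples, 2).
# The "speech" waveforms here are placeholders for real recordings.
import numpy as np

fs = 16000
t = np.arange(fs) / fs
target = 0.1 * np.sin(2 * np.pi * 440 * t)                      # placeholder target
masker = 0.1 * np.random.default_rng(0).standard_normal(fs)     # placeholder masker

def monaural_condition(target, masker, target_ear=0):
    """Target and masker mixed in one ear; the other ear is silent."""
    stereo = np.zeros((len(target), 2))
    stereo[:, target_ear] = target + masker
    return stereo

def contralateral_masker_condition(target, masker, target_ear=0):
    """Same mixture, plus a copy of the masker in the opposite ear."""
    stereo = monaural_condition(target, masker, target_ear)
    stereo[:, 1 - target_ear] = masker
    return stereo

# Listeners with NH typically improve in the second condition (the fused,
# centered masker is separated from the lateralized target); listeners with
# BiCIs and large asymmetries often show interference instead.
cond_a = monaural_condition(target, masker, target_ear=0)
cond_b = contralateral_masker_condition(target, masker, target_ear=0)
```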
Interestingly, patients who have experienced stroke demonstrate a similar difficulty allocating attention to the ear contralateral to the lesion when there is simultaneous stimulation of the ear ipsilateral to the lesion. This is referred to as auditory extinction (Deouell & Soroker, 2000; Tanabe et al., 1986). Auditory extinction is modulated by the relative onset time of stimulation in either ear (Witte et al., 2012), where the ear stimulated first is more likely to be reported correctly, especially if it is ipsilateral to the lesion. Contralateral interference in listeners with BiCIs may therefore be indicative of a failure of the attentional network that presents like auditory extinction. That is, in order to detect this failure of attention, both ears must be stimulated and the worse ear must contain the target. It may also be that certain information in the signal(s) is prioritized. Unconscious shifting of attention toward particular information in sounds is supported by a study showing that identification of whether or not a sound is moving is poorer for stimuli carrying the temporal envelopes of speech than for stimuli retaining the original temporal envelopes of noise (i.e., an acoustic chimera) in CI simulations with listeners with NH (Warnecke & Litovsky, 2021). This suggests that the speech processing and localization pathways may compete for resources during behavioral tasks.
An alternative to the attention-based explanation of contralateral interference presented above is that, when stimuli are sufficiently spectro-temporally degraded, changes in auditory object formation occur. This could be due to over-fusion of unrelated speech between the ears, resulting in an obscured sound image that is more difficult to parse for speech information. It could also occur due to under-fusion of the masking speech in each ear, effectively introducing an additional masker. One important intermediate step has been to explore the fusion of speech stimuli for monosyllabic words (Anderson et al., 2023) and vowels (Eddolls et al., 2022; Reiss et al., 2016; Reiss & Molis, 2021). Results from the former imply that over-fusion occurs when stimuli are spectro-temporally degraded, that the worse ear cannot simply be ignored, and that it interferes with access to speech information in the better ear.
In conclusion, it seems likely that both auditory object formation and attention are affected by interaural asymmetry. Particularly indicative of an attentional problem are studies showing contralateral interference in listeners who have a CI in one ear and NH in the other (Bernstein et al., 2020) and in simulations with listeners with NH (Goupell et al., 2021); in these cases, the signals in the two ears are so different that over-fusion seems unlikely. Together, these findings suggest that changes in object formation and problems allocating attention likely interact to produce poorer bilateral speech outcomes when listeners demonstrate interaural asymmetry. They also suggest that it may be necessary to tax the attentional system by introducing masking stimuli in spatially distinct channels (e.g., the opposite ear) before interaural asymmetry becomes apparent (Goupell et al., 2016, 2018a), much like the special case of hemispheric neglect (auditory extinction) observed in some patients who have experienced stroke.
Cortical Lateralization and Specialization
In listeners with NH, some hemispheric specialization of speech and language function is present. A review on this topic has been published previously (Hiscock & Kinsbourne, 2011), and its results are summarized here. Auditory hemispheric lateralization in listeners with NH is typically associated with a right ear advantage in listening experiments (Hugdahl et al., 2008; Wood et al., 2000) and with greater activation identified in cortical imaging (Tanaka et al., 2021). Classic studies have also shown a right ear advantage associated with temporary hemispheric anesthetization (the Wada test; Kimura, 1967). The right ear advantage is slightly less prevalent in individuals who are left-handed, but both groups tend to show a right ear advantage rather than a left ear advantage (Hiscock et al., 2000). Interestingly, this advantage persists even when ILDs favor the left ear (Hugdahl et al., 2008) and when stimuli are offset by ITDs of up to ∼60 to 90 ms (Wood et al., 2000). The amount of advantage depends highly upon the task used and is sometimes difficult to reproduce in the same groups of listeners (Hiscock et al., 2000; Voyer & Flight, 2001; Voyer & Ingram, 2005). An interesting parallel in the binaural literature is “earedness,” or the tendency for a listener to reliably perceive a spatially ambiguous sound on one side of the head (Zhang & Hartmann, 2008).
Many listeners with BiCIs have a perceptually evident “better ear” (e.g., Burg et al., 2022; Goupell et al., 2018a; Ihlefeld et al., 2015; Litovsky et al., 2006; Mosnier et al., 2009); this is one type of interaural asymmetry. Most often, the better ear corresponds to the first-implanted ear. Different factors predict speech understanding in either ear, largely stemming from differences associated with the auditory periphery. Thus, whereas the section above discussed asymmetries in attention, this section is dedicated to the asymmetric representation of sounds on either side of the cortical and subcortical auditory structures.
Two reviews have addressed the effects of unilateral auditory deprivation during development (Gordon & Kral, 2019; Kumpik & King, 2019) and are partially summarized here. Children with BiCIs who have an extensive history of unilateral stimulation show an extreme shift toward bilateral cortical representation of stimuli from the first-implanted ear (Gordon et al., 2015; Polonenko et al., 2018a; Yamazaki et al., 2017). The same finding has been replicated in congenitally deaf cats (Gordon & Kral, 2019; Kral et al., 2013a). These studies show that auditory deprivation during early stages of development is associated with a repurposing of cells in the contralateral cortical hemisphere to represent ipsilateral input, and that this reorganization can only be mitigated by early cochlear implantation (Kral et al., 2013a, 2013b). At lower levels of processing, the same types of reorganization are seen in single units of the inferior colliculus in chinchillas that experienced mild, intermittent hearing loss via earplugging (Thornton et al., 2021) and in asymmetries of brainstem function observed in children who are congenitally deaf and receive a unilateral CI (Polonenko et al., 2015). Thus, hemispheric reorganization during sensitive periods of development may represent one pathophysiological mechanism that underlies sensory asymmetry. In Figure 2, this type of reorganization would be visualized as fewer processing units, or a bias within each unit, favoring the earlier-implanted ear. Accordingly, research in children indicates that the best outcomes are observed in listeners with the least history of asymmetric auditory input (Polonenko et al., 2018b).
An important finding is that no consistent ear advantage is found in children (Koopmann et al., 2020) or adults (Litovsky et al., 2006; Mosnier et al., 2009) who are simultaneously, bilaterally implanted. That is, ear advantage in listeners with BiCIs may have more to do with neural health or auditory experience than with implantation order per se. Some differences in speech understanding take time to emerge as listeners learn to use their CIs (Mosnier et al., 2009), suggesting that listeners may adapt to one ear over time. Thus, rather than attention as in listeners with NH, other factors associated with the auditory periphery likely contribute to differences in hemispheric representations of sound. One example that is likely to have a strong effect on interaural asymmetries in speech outcomes is scalar translocation (Chakravorti et al., 2019; Wanna et al., 2014). It should be noted that differences in psychophysical sensitivity and predictors of auditory nerve health vary with place-of-stimulation even within the same ear, as well as across the ears (e.g., Chatterjee & Oberzut, 2011; Chatterjee & Peng, 2008; Garadat et al., 2012; Ihlefeld et al., 2015; Kong et al., 2009; Long et al., 2014; Schvartz-Leyzac et al., 2020; Zhou & Pfingst, 2012). Thus, interaural asymmetry and unilateral deprivation are not synonymous. Unilateral auditory deprivation, especially during development, is a condition that has an extraordinarily high chance of resulting in interaural asymmetry and whose pathophysiology has been well documented. Because many of the same peripheral issues likely occur for patients who are sequentially implanted and patients who are simultaneously implanted, it is important not to overlook how peripheral factors contribute to interaural asymmetry for listeners with prolonged unilateral deprivation. One particularly important question is whether psychophysically based predictors of neural health are sensitive to the pathophysiological changes associated with auditory deprivation. It may be possible to save time and resources if these predictors provide similar information about spectro-temporal processing as more complicated and intensive assessments of cortical lateralization. It remains unclear whether differences in sensitivity to spectro-temporal cues between the ears are reflected in cortical activity for most listeners with BiCIs who experience asymmetries from varied sources. In other words, interaural asymmetry in the auditory periphery leading to poorer binaural outcomes may not necessarily be reflected in all measures of cortical activity.
Auditory Experience and Training
Listeners with BiCIs who are implanted in adulthood show a period of plasticity during which they adapt to the new mode of stimulation. With respect to pitch, this is characterized by adaptation of the place-of-stimulation-to-pitch map, which differs between patients with CIs and individuals with acoustic hearing (Reiss et al., 2014b, 2015). With respect to speech understanding, accuracy increases and seems to plateau around 3 to 5 years of CI use (Blamey et al., 2012). This plasticity does not necessarily imply that both ears will improve at a similar rate. For example, adult listeners who were simultaneously implanted demonstrated significant differences in performance between ears that emerged only after 1 year of experience (Mosnier et al., 2009). Children with BiCIs tend to have the best outcomes when they are implanted shortly after the onset of deafness, improving over time in sound source localization or lateralization (Killan et al., 2019; Steffens et al., 2008; Strøm-Roum et al., 2012; Zheng et al., 2015) and speech understanding (Dunn et al., 2014), but not in spatial release from masking (Litovsky & Misurelli, 2016). These factors are often interrelated in children with BiCIs, so it is difficult to discern exactly how deafness and cochlear implantation interact with the normal developmental trajectory. Together with the earlier sections, these studies imply that auditory deprivation, especially during early development, may interact with experience to produce interaural asymmetries. This could be illustrated in Figure 2 by modulating the connections between units that are learned over time.
Recent evidence suggests that providing coherent ITDs via CIs to the developed auditory system might restore access to ITD sensitivity later in life (Buck et al., 2021; Rosskothen-Kuhl et al., 2021; Sunwoo et al., 2021). In these experiments, animals were deafened early in life, underwent a period of deafness, and were then trained via operant conditioning to use ITDs delivered through bilateral CIs to localize sounds later in life. Thus, it may be that the utility of ITDs and the sensitivity of the midbrain depended as much on training as on exposure to coherent cues. One important future direction, especially in attempts to restore access to spatial cues for human listeners with BiCIs, may be to incorporate training. A lack of training or experience may help explain why benefits are not observed in listeners with BiCIs when ITDs in the temporal fine structure are preserved (Ausili et al., 2020; Dennison et al., 2022; Fischer et al., 2021), particularly because these listeners may have learned to rely on other cues for sound source localization.
When considered in the context of a patient with significant interaural asymmetry, it is not surprising that many listeners report removing one or both hearing devices during the day (Cox et al., 2011; Fitzpatrick & Leblanc, 2010; McArdle et al., 2012; Walden & Walden, 2005), as they may feel that their worse ear interferes with or frustrates listening. Thus, rather than asking patients to passively listen in these difficult configurations with the hope that device compliance and performance improve, auditory training may provide a promising and motivating alternative. Results from listeners with unilateral CIs and from simulated asymmetric hearing loss via earplugging suggest that listeners are able to re-weight cues through auditory training (Firszt et al., 2015; Keating et al., 2016). Pilot testing of another training procedure suggests that it can also lead to improvements in spatial hearing outcomes for listeners with BiCIs (Tyler et al., 2010).
Summary
Top-down processing was proposed as a process whereby ongoing perceptual predictions are refined by new, unexpected sensory input. This process is thought to operate by using the features associated with different auditory objects to allocate attention to a desired source of interest. In listeners with interaural asymmetry, auditory object formation may be compromised and attention may be drawn toward the clearer signal. Both of these processes may be facilitated by an overrepresentation of the better ear throughout the ipsilateral hemisphere, and a lack of training or relevant experience may maintain or compound interaural asymmetry. While unilateral deprivation has received considerable attention, it is unlikely to be the only factor contributing to overrepresentation of the better ear and is not the sole predictor of ear advantage.
Implications for Researchers and Clinicians
Interaural asymmetry, evidenced by differences in neural health, psychophysical sensitivity, hemispheric representation, and speech understanding outcomes, as well as mismatches in the placement of the electrode arrays, is a common problem for listeners with BiCIs. Differences between each ear's auditory periphery (or two poorly performing peripheries) lead to poorer encoding of the binaural cues used to distinguish between sound sources. Poor encoding, in turn, leads to poorer auditory object formation, challenges allocating attention to sources of interest, and hemispheric reorganization favoring the better ear. While the sources may overlap or vary, the manifestation is the same: one better-performing ear and poorer binaural outcomes.
It is important to note that even listeners who demonstrate large interaural asymmetries in speech understanding generally do not experience a decrement from using both CIs compared to a unilateral CI in more realistic, free-field speech understanding tests (Bakal et al., 2021), even though they might face some challenges with vocal production (Aronoff et al., 2018). Thus, bilateral implantation is unlikely to result in worse outcomes than unilateral implantation, but its benefits may be mediated by interaural asymmetry. The remaining benefit may be due to more favorable “looks” at the target stimuli or the provision of some limited binaural benefits. It is our hope that the studies outlined in this review can be used to maximize the benefit of bilateral device use and facilitate better patient outcomes.
Interrelated Sources of Interaural Asymmetry
While the literature has focused on specific conditions that induce interaural asymmetry, a central idea of this review is that different sources of interaural asymmetry produce similar manifestations in bottom-up and top-down processing. Although we aimed to address each topic separately, by no means do these sources of interaural asymmetry occur in isolation. For example, the number of surviving nerve fibers associated with the etiology of hearing loss predicts listeners' speech outcomes (Blamey et al., 2012; Nadol, 1997). Patients with early onset of hearing loss and delayed implantation will experience deterioration of the auditory periphery, brainstem changes, and hemispheric reorganization, and will also have less bilateral experience relative to age-matched peers. These listeners may be just as likely as other patients to have experienced scalar translocations and interaural place-of-stimulation mismatch.
The factor affecting interaural asymmetry that has received the most attention in the literature is auditory deprivation. There is consensus across clinicians and laboratory studies in animals and humans that providing consistent bilateral input, especially during development, is important for binaural outcomes (Gifford & Dorman, 2018; Gordon et al., 2014; Polonenko et al., 2018b; Turton et al., 2020). Auditory deprivation is associated with poorer temporal response properties beginning at the level of the auditory periphery and with the development of bilateral processing that favors the better ear. Recent research shows promising results that binaural processing may be improved (and performance decrements mitigated) by experience with accurate binaural cues (Buck et al., 2021; Rosskothen-Kuhl et al., 2021; Sunwoo et al., 2021). While it would be ideal if deprivation could be avoided in the first place, an audiologist can incorporate auditory training into a patient's rehabilitative plan to leverage any remaining plasticity.
Some issues associated with the auditory periphery may be improved with technological advances and improvements in surgical techniques. For example, robot-assisted electrode insertion significantly reduces the number of translocations and the amount of mechanical trauma to the cochlea during surgery (Kaufmann et al., 2020). The type of CI array also influences outcomes, with precurved arrays being more likely to result in translocation (Goupell et al., 2022) but also producing the smallest distance from the modiolus (Chakravorti et al., 2019). Deactivating electrodes suspected of smearing a signal's fluctuations (and thus degrading its spectro-temporal representation) improves speech understanding (DeVries et al., 2016; Garadat et al., 2013; Noble et al., 2014; Schvartz-Leyzac et al., 2017; Zhou & Pfingst, 2012). With new imaging approaches, it may also be possible to match the interaural placement of CI arrays (Bernstein et al., 2021), or at least to use insight from these techniques to reallocate frequencies to electrodes with similar interaural places-of-stimulation (Goupell et al., 2022).
Differences in loudness growth or dynamic range have not been systematically addressed in the literature. It has been hypothesized that differences in loudness growth result in increased interaural decorrelation (Goupell, 2015; Goupell & Litovsky, 2015), implying that resolving these differences could improve patient outcomes; this is a promising avenue for future research. Similarly, linking the automatic gain control in each ear may improve binaural outcomes (Archer-Boyd & Carlyon, 2019; Potts et al., 2019) and reduce the occurrence of spurious spatial cues introduced by current clinical processors (Gray et al., 2021). Recent strategies have been devised to provide fine-structure ITDs, but their use yields inconsistent benefits on spatial hearing tasks (Ausili et al., 2020; Dennison et al., 2022; Fischer et al., 2021). Improvements to processors that result in coherent and consistent cues may be most beneficial early in BiCI experience or may become more beneficial with training.
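As a toy illustration of the decorrelation hypothesis (our sketch, not a model taken from the cited studies), applying different compressive loudness-growth functions to the same acoustic envelope in each ear reduces the interaural envelope correlation even though the acoustic input is identical. The power-law exponents below are arbitrary stand-ins for mismatched mapping functions.

```python
# Minimal sketch: identical acoustic envelopes passed through different
# power-law "loudness growth" functions in each ear become decorrelated.
# Exponents are arbitrary placeholders for mismatched mapping functions.
import numpy as np

rng = np.random.default_rng(1)
envelope = rng.random(10000)          # shared acoustic envelope, values in 0..1

left = envelope ** 0.6                # milder compression in the left ear
right = envelope ** 0.2               # stronger compression in the right ear

rho = np.corrcoef(left, right)[0, 1]
print(f"Interaural envelope correlation after mismatched compression: {rho:.3f}")
# The correlation falls below 1 even though both ears received the same input.
```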
Assessing Interaural Asymmetry
Depending on the time and equipment available to researchers and clinicians, it may not be possible to intensively assess the sources of interaural asymmetry for every listener. It may also be difficult to determine the level at which interaural asymmetry should be assessed. One particularly helpful trend in the literature is a systems-based approach, where the behavioral responses (i.e., “output”) of the patient (i.e., “system”) are used to make inferences about the relevant underlying problems. One example is measuring temporal sensitivity in each ear or at each electrode and using it to predict monaural or binaural outcomes (e.g., Garadat et al., 2012; Ihlefeld et al., 2015; Zhou & Pfingst, 2012). Another example is the various forms of the spectral ripple test, which are meant to provide a proxy for spectro-temporal resolution and have mainly been used to predict unilateral speech understanding (e.g., Anderson et al., 2012; Croghan et al., 2017). Such a task can be completed remotely with limited instructions or within minutes in the clinic, making it cost-effective and efficient for researchers, clinicians, and patients.
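For readers unfamiliar with the stimulus, one common way to construct a spectral ripple is to sum many log-spaced pure tones whose levels follow a sinusoidal envelope on a log-frequency axis; discrimination is then tested between a standard and a phase-inverted ripple. The sketch below loosely follows this general recipe with illustrative parameter values; it is not the implementation of any specific published test.

```python
# Minimal sketch of a spectral ripple stimulus: many log-spaced tones whose
# amplitudes follow a sinusoidal spectral envelope (in dB) on a log2-frequency
# axis. Parameter values are illustrative, not from any specific test.
import numpy as np

def spectral_ripple(ripples_per_octave=1.0, phase=0.0, depth_db=20.0,
                    f_lo=200.0, f_hi=8000.0, n_tones=200, dur=0.5, fs=44100):
    t = np.arange(int(dur * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)             # log-spaced components
    octaves = np.log2(freqs / f_lo)
    level_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10.0 ** (level_db / 20.0)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)           # random component phases
    signal = np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]),
                    axis=0)
    return signal / np.max(np.abs(signal))                # normalize to +/- 1

standard = spectral_ripple(phase=0.0)
inverted = spectral_ripple(phase=np.pi)                   # ripple-phase-flipped comparison
```

The highest ripple density at which a listener can distinguish the standard from the inverted stimulus serves as the proxy for spectral resolution in each ear.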
While this systems-based approach to investigating issues with encoding is relatively straightforward, a few additional considerations should be noted regarding top-down processing. Most importantly, some listeners with BiCIs do not demonstrate interaural asymmetries until they are given a sufficiently difficult task or are stimulated in both ears simultaneously (Goupell et al., 2016, 2018a). Measures need to be taken to demonstrate that the issues assumed to be associated with top-down processing are not in fact a bottom-up problem in disguise. Assessments of contralateral masking and interference, in which speech understanding measured in each ear alone is compared against speech understanding measured during simultaneous stimulation of both ears, provide one useful approach to this dissociation. Finally, like the interrelated sources of interaural asymmetry, bottom-up and top-down problems are likely to manifest together, since decoded features depend upon encoded cues. A disproportionate decrement in performance in response to no more than a slight increase in task difficulty may be indicative of top-down problems compounding bottom-up problems.
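A minimal way to express this dissociation numerically (our formulation, with hypothetical scores rather than data from any study) is to compare speech understanding for the target ear alone against understanding of the same target when the opposite ear also receives the masker; a negative difference indicates contralateral interference rather than a purely bottom-up deficit in that ear.

```python
# Minimal sketch: quantify contralateral benefit/interference from hypothetical
# percent-correct speech scores. Negative values indicate interference.
def contralateral_effect(score_target_ear_alone, score_with_contralateral_masker):
    """Change in speech understanding when the masker is added to the other ear."""
    return score_with_contralateral_masker - score_target_ear_alone

# Hypothetical example: 60% correct with the poorer ear alone drops to 45%
# when the masker is also presented to the better ear -> -15 points (interference).
print(contralateral_effect(60.0, 45.0))
```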
Worse Ear or Degree of Asymmetry?
One important topic that has not been addressed so far is whether the poorer binaural outcomes associated with interaural asymmetry result from irreconcilable differences between the ears or from the worse ear acting as a limiting factor. Results from patients are mixed, suggesting either that the worse ear is predictive of poorer sensitivity to binaural cues (Ihlefeld et al., 2015) or that the degree of difference and the worse ear are similarly predictive (Anderson et al., 2022b). The best way to address this problem would be to test listeners who have all combinations of good or poor, symmetric or asymmetric hearing outcomes. In practice, most participants in laboratory studies are high performers, so they tend to have at least one “good” ear. Thus, to investigate this question we conducted a series of studies simulating interaurally symmetric and asymmetric conditions with high and low temporal fidelity (by varying the amplitude modulation depth, where lower depth provides less fidelity), then measured lateralization and speech perception. The worse ear predicted the extent of lateralization of envelope ITDs, and results did not differ when both ears had low temporal fidelity (Anderson, 2022), suggesting that the poorer ear limits localization abilities. Increasing interaural asymmetry and decreasing the average degree of temporal fidelity both negatively affected speech outcomes (Anderson et al., 2023), suggesting that the worse ear and the degree of asymmetry each play a role. In particular, it seems that interaural asymmetry may direct attention toward the better ear, interfering with performance (Anderson et al., 2023; Goupell et al., 2021). Poorly directed attention may be facilitated by fusion of sounds when one or both ears have low temporal fidelity (Anderson et al., 2019b, 2022b, 2023). The degree of asymmetry in patients with BiCIs is predictive of binaural redundancy (Figure 2 of Burg et al., 2022), whereas the performance of the worse ear (data not shown) was not related to binaural redundancy, suggesting that fusion of even interaurally coherent speech may be affected by interaural asymmetry. Ultimately, it seems that both factors, the degree of difference between the ears and the performance of the poorer ear, play a role whose extent differs depending upon the stimulus paradigm.
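The sketch below illustrates, in simplified form and with arbitrary parameter values, the kind of manipulation described above: sinusoidally amplitude-modulated envelopes with a different modulation depth in each ear simulate interaurally asymmetric temporal fidelity, and the difference between the per-ear modulation indices provides one crude index of the imposed asymmetry.

```python
# Minimal sketch: simulate interaurally asymmetric temporal fidelity by
# applying different amplitude-modulation depths to each ear's envelope.
# Parameter values are arbitrary and for illustration only.
import numpy as np

fs, dur, fm = 16000, 0.5, 10.0               # sample rate, duration (s), mod rate (Hz)
t = np.arange(int(fs * dur)) / fs

def modulated_envelope(depth, fm=fm, t=t):
    """Sinusoidally amplitude-modulated envelope with modulation depth 0..1."""
    return 1.0 + depth * np.sin(2 * np.pi * fm * t)

left = modulated_envelope(depth=1.0)          # high temporal fidelity
right = modulated_envelope(depth=0.3)         # reduced fidelity (shallower modulation)

def measured_depth(env):
    """Modulation index m = (max - min) / (max + min)."""
    return (env.max() - env.min()) / (env.max() + env.min())

asymmetry = measured_depth(left) - measured_depth(right)
print(f"Depth left = {measured_depth(left):.2f}, right = {measured_depth(right):.2f}, "
      f"asymmetry = {asymmetry:.2f}")
```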
Challenges and Suggested Future Directions
The present review attempts to bring together research on interaural asymmetry conducted using human or animal behavior, physiology, and computational modeling. This approach has strengths and weaknesses. One significant weakness is the implicit assumption that bottom-up and top-down processing are independent of one another and occur in distinct physiological structures along the auditory pathway. As research on the efferent auditory system develops, it is becoming increasingly clear that bottom-up and top-down processing occur in parallel, and that bottom-up processing is shaped by ongoing top-down processing. Similarly, recordings from the binaural brainstem suggest that even at this low level, neurons are sensitive to changes rather than explicit cues (Gleiss et al., 2019; Lingner et al., 2018). No single circuit or set of circuits will be able to explain all of auditory, or more generally perceptual, processing. Thus, the framework proposed here should be treated as a means to make predictions and devise solutions for patients, not as a complete model of auditory perception.
One of the largest challenges associated with addressing interaural asymmetry is that the prevalence of different kinds of asymmetry is often unknown or ill-defined. A main argument of the present manuscript is that interaural asymmetry is not a single dimension running from symmetric to asymmetric, nor a dichotomous symmetric/asymmetric state. Instead, it represents a collection of states or continua (e.g., health of the auditory nerve in each ear, distance between CI electrodes and auditory nerve fibers, interaural place-of-stimulation mismatch, degree of cortical lateralization) that generate similar outcomes (e.g., differences in speech understanding, difficulty using spatial cues, challenges segregating speech from noise). These different “types” of interaural asymmetry can be described in terms of similar manifestations in bottom-up and top-down processing. A natural conclusion of such a framework is that no listener has purely interaurally symmetric hearing, a view supported by research showing modest right ear advantages and earedness in listeners with NH. The question researchers should ask instead is whether it is reasonable to assume interaural symmetry; if not, we hope that the present review can serve as a helpful guide for how to proceed.
Acknowledgments
The authors would like to thank the numerous individuals in the field of auditory science who have engaged us in conversations about interaural asymmetry over the years. In particular, Dr. Matthew Goupell, Dr. Lina Reiss, Dr. Karen Gordon, Dr. Melissa Polonenko, and Dr. Joshua Bernstein have been drivers of interesting debate and discussion. We would also like to thank Dr. Andrew Vandali for his comments on a previous version of this manuscript. SRA would like to thank former and present members of the Binaural Hearing and Speech Lab, who have been invaluable to his understanding of binaural hearing and cochlear implants.
Footnotes
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: SRA was employed by Cochlear, Ltd. at the time of the submission of this manuscript. RYL serves on the scientific advisory board of Hemedeina (https://hemideina.com/).
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded by the National Institute on Deafness and Other Communication Disorders and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (grant numbers F31 DC018483-01A1, R01 DC003083, and P30 HD03352).
ORCID iDs: Sean R. Anderson https://orcid.org/0000-0002-1638-0197
Ruth Y. Litovsky https://orcid.org/0000-0002-6221-937X
References
- Anbuhl K. L. (2017). Early temporary hearing loss impairs behavioral and neural sensitivity to sound location. Dissertation. [Aurora, CO] University of Colorado Anschutz Medical Campus.
- Anbuhl K. L., Uhler K., Werner L. A., Tollin D. J. (2016). Early development of the human auditory system. In Pollin R. A., Abman S. H., Rowitch D., Benitz W. E. (Eds.), Fetal and neonatal physiology (3rd ed., Vol. 2–2, pp. 1803–1819). Amsterdam. 10.1016/B978-0-323-35214-7.00138-4
- Anderson E. S., Oxenham A. J., Nelson P. B., Nelson D. A. (2012). Assessing the role of spectral and intensity cues in spectral ripple detection and discrimination in cochlear-implant users. Journal of the Acoustical Society of America, 132(6), 3925–3934. 10.1121/1.4763999
- Anderson S. R. (2022). Mechanisms that underlie poorer binaural outcomes in patients with asymmetrical hearing and bilateral cochlear implants. Dissertation. [Madison, WI] University of Wisconsin-Madison.
- Anderson S. R., Easter K., Goupell M. J. (2019a). Effects of rate and age in processing interaural time and level differences in normal-hearing and bilateral cochlear-implant listeners. Journal of the Acoustical Society of America, 146(5), 3232–3254. 10.1121/1.5130384
- Anderson S. R., Kan A., Litovsky R. Y. (2019b). Asymmetric temporal envelope encoding: Implications for within- and across-ear envelope comparison. Journal of the Acoustical Society of America, 146(2), 1189–1206. 10.1121/1.5121423
- Anderson S. R., Gallun F. J., Litovsky R. Y. (2023). Interaural asymmetry of dynamic range: Abnormal fusion, bilateral interference, and shifts in attention. Frontiers in Neuroscience, 16, 1–24. 10.3389/fnins.2022.1018190
- Anderson S. R., Jocewicz R., Kan A., Zhu J., Tzeng S., Litovsky R. Y. (2022a). Sound source localization patterns and bilateral cochlear implants: Age at onset of deafness effects. PLoS ONE, 17(2), 1–30. 10.1371/journal.pone.0263516
- Anderson S. R., Kan A., Litovsky R. Y. (2022b). Asymmetric temporal envelope sensitivity: Within- and across-ear envelope comparisons in listeners with bilateral cochlear implants. Journal of the Acoustical Society of America, 152(6), 3294–3312. 10.1121/10.0016365
- Archer-Boyd A. W., Carlyon R. P. (2019). Simulations of the effect of unlinked cochlear-implant automatic gain control and head movement on interaural level differences. Journal of the Acoustical Society of America, 145(3), 1389–1400. 10.1121/1.5093623
- Aronoff J. M., Kirchner A., Abbs E., Harmon B. (2018). When singing with cochlear implants, are two ears worse than one for perilingually/postlingually deaf individuals? Journal of the Acoustical Society of America, 143(6), EL503–EL508. 10.1121/1.5043093
- Aronoff J. M., Yoon Y., Freed D. J., Vermiglio A. J., Pal I., Soli S. D. (2010). The use of interaural time and level difference cues by bilateral cochlear implant users. Journal of the Acoustical Society of America, 127(3), EL87–EL92. 10.1121/1.3298451
- Ashida G., Heinermann H. T., Kretzberg J. (2019). Neuronal population model of globular bushy cells covering unit-to-unit variability. PLoS Computational Biology, 15(12), 1–38. 10.1371/journal.pcbi.1007563
- Asp F., Mäki-Torkko E., Karltorp E., Harder H., Hergils L., Eskilsson G., Stenfelt S. (2015). A longitudinal study of the bilateral benefit in children with bilateral cochlear implants. International Journal of Audiology, 54(2), 77–88. 10.3109/14992027.2014.973536
- Auerbach B. D., Rodrigues P. V., Salvi R. J. (2014). Central gain control in tinnitus and hyperacusis. Frontiers in Neurology, 5(206), 1–21. 10.3389/fneur.2014.00206
- Ausili S. A., Agterberg M. J. H., Engel A., Voelter C., Thomas J. P., Brill S. … & Mylanus E. A. M. (2020). Spatial hearing by bilateral cochlear implant users with temporal fine-structure processing. Frontiers in Neurology, 11(915), 1–13. 10.3389/fneur.2020.00915
- Bakal T. A., Milvae K. D. R., Chen C., Goupell M. J. (2021). Head shadow, summation, and squelch in bilateral cochlear-implant users with linked automatic gain controls. Trends in Hearing, 25, 1–17. 10.1177/23312165211018147
- Banks M. I., Smith P. H. (1992). Intracellular recordings from neurobiotin-labeled cells in brain slices of the rat medial nucleus of the trapezoid body. Journal of Neuroscience, 12(7), 2819–2837. 10.1523/jneurosci.12-07-02819.1992
- Baumgärtel R. M., Hu H., Kollmeier B., Dietz M. (2017). Extent of lateralization at large interaural time differences in simulated electric hearing and bilateral cochlear implant users. Journal of the Acoustical Society of America, 141(4), 2338–2352. 10.1121/1.4979114
- Beiderbeck B., Myoga M. H., Müller N. I. C., Callan A. R., Friauf E., Grothe B., Pecka M. (2018). Precisely timed inhibition facilitates action potential firing for spatial coding in the auditory brainstem. Nature Communications, 9(1771), 1–13. 10.1038/s41467-018-04210-y
- Benichoux V., Ferber A., Hunt S., Hughes E., Tollin D. (2018). Across species “natural ablation” reveals the brainstem source of a noninvasive biomarker of binaural hearing. Journal of Neuroscience, 38(40), 8563–8573. 10.1523/JNEUROSCI.1211-18.2018
- Bernstein J. G. W., Goupell M. J., Schuchman G. I., Rivera A. L., Brungart D. S. (2016). Having two ears facilitates the perceptual separation of concurrent talkers for bilateral and single-sided deaf cochlear implantees. Ear and Hearing, 37(3), 289–302. 10.1097/AUD.0000000000000284
- Bernstein J. G. W., Jensen K. K., Stakhovskaya O. A., Noble J. H., Hoa M., Kim H. J., Goupell M. J. (2021). Interaural place-of-stimulation mismatch estimates using CT scans and binaural perception, but not pitch, are consistent in cochlear-implant users. Journal of Neuroscience, 41(49), 10161–10178. 10.1523/JNEUROSCI.0359-21.2021
- Bernstein J. G. W., Stakhovskaya O. A., Jensen K. K., Goupell M. J. (2020). Acoustic hearing can interfere with single-sided deafness cochlear-implant speech perception. Ear and Hearing, 41(4), 747–761. 10.1097/AUD.0000000000000805
- Bernstein L. R., Trahiotis C. (2002). Enhancing sensitivity to interaural delays at high frequencies by using “transposed stimuli.” Journal of the Acoustical Society of America, 112(3), 1026–1036. 10.1121/1.1497620
- Bernstein L. R., Trahiotis C. (2014). Sensitivity to envelope-based interaural delays at high frequencies: Center frequency affects the envelope rate-limitation. Journal of the Acoustical Society of America, 135(2), 808–816. 10.1121/1.4861251
- Bierer J. A. (2010). Probing the electrode-neuron interface with focused cochlear implant stimulation. Trends in Amplification, 14(2), 84–95. 10.1177/1084713810375249
- Bierer J. A., Nye A. D. (2014). Comparisons between detection threshold and loudness perception for individual cochlear implant channels. Ear and Hearing, 35(6), 641–651. 10.1097/AUD.0000000000000058
- Bizley J. K., Cohen Y. E. (2013). The what, where and how of auditory-object perception. Nature Reviews Neuroscience, 14(10), 693–707. 10.1038/nrn3565
- Blamey P., Artieres F., Başkent D., Bergeron F., Beynon A., Burke E. … & Lazard D. S. (2012). Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: An update with 2251 patients. Audiology and Neurootology, 18(1), 36–47. 10.1159/000343189
- Boudreau J. C., Tsuchitani C. (1968). Binaural interaction in the cat superior olive S segment. Journal of Neurophysiology, 31(3), 442–454. 10.1152/jn.1968.31.3.442
- Brown A. D., Anbuhl K. L., Gilmer J. I., Tollin D. J. (2019). Between-ear sound frequency disparity modulates a brain stem biomarker of binaural hearing. Journal of Neurophysiology, 122(3), 1110–1122. 10.1152/jn.00057.2019
- Brown A. D., Jones H. G., Kan A., Thakkar T., Stecker G. C., Goupell M. J., Litovsky R. Y. (2015). Evidence for a neural source of the precedence effect in sound localization. Journal of Neurophysiology, 114(5), 2991–3001. 10.1152/jn.00243.2015
- Brown A. D., Tollin D. J. (2016). Slow temporal integration enables robust neural coding and perception of a cue to sound source location. Journal of Neuroscience, 36(38), 9908–9921. 10.1523/JNEUROSCI.1421-16.2016
- Brughera A., Dunai L., Hartmann W. M. (2013). Human interaural time difference thresholds for sine tones: The high-frequency limit. Journal of the Acoustical Society of America, 133(5), 2839–2855. 10.1121/1.4795778
- Buchholz J. M., Le Goff N., Dau T. (2018). Localization of broadband sounds carrying interaural time differences: Effects of frequency, reference location, and interaural coherence. Journal of the Acoustical Society of America, 144(4), 2225–2237. 10.1121/1.5058776
- Buck A. N., Rosskothen-Kuhl N., Schnupp J. W. (2021). Sensitivity to interaural time differences in the inferior colliculus of cochlear implanted rats with or without hearing experience. Hearing Research, 408(108305), 1–16. 10.1016/j.heares.2021.108305
- Burg E. A., Thakkar T. D., Litovsky R. Y. (2022). Interaural speech asymmetry predicts bilateral speech intelligibility but not listening effort in adults with bilateral cochlear implants. Frontiers in Neuroscience, 16, 1–13. 10.3389/fnins.2022.1038856
- Cant N. B., Casseday J. H. (1986). Projections from the anteroventral cochlear nucleus to the lateral and medial superior olivary nuclei. Journal of Comparative Neurology, 247(4), 457–476. 10.1002/cne.902470406
- Carhart R. (1965). Monaural and binaural discrimination against competing sentences. International Audiology, 4(3), 5–10. 10.1121/1.1939552
- Caspary D. M., Backoff P. M., Finlayson P. G., Palombi P. S. (1994). Inhibitory inputs modulate discharge rate within frequency receptive fields of anteroventral cochlear nucleus neurons. Journal of Neurophysiology, 72(5), 2124–2133. 10.1152/jn.1994.72.5.2124
- Chakravorti S., Noble J. H., Gifford R. H., Dawant B. M., O’Connell B. P., Wang J., Labadie R. F. (2019). Further evidence of the relationship between cochlear implant electrode positioning and hearing outcomes. Otology & Neurotology, 40(5), 617–624. 10.1097/MAO.0000000000002204
- Chatterjee M., Oberzut C. (2011). Detection and rate discrimination of amplitude modulation in electrical hearing. Journal of the Acoustical Society of America, 130(3), 1567–1580. 10.1121/1.3621445
- Chatterjee M., Peng S.-C. (2008). Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition. Hearing Research, 235(1–2), 143–156. 10.1016/j.heares.2007.11.004
- Chung Y., Buechel B. D., Sunwoo W., Wagner J. D., Delgutte B. (2019). Neural ITD sensitivity and temporal coding with cochlear implants in an animal model of early-onset deafness. Journal of the Association for Research in Otolaryngology, 20(1), 37–56. 10.1007/s10162-018-00708-w
- Clements M., Kelly J. B. (1978). Auditory spatial responses of young Guinea pigs (Cavia porcellus) during and after ear blocking. Journal of Comparative and Physiological Psychology, 92(1), 34–44. 10.1037/h0077424
- Clifton R. K., Gwiazda J., Bauer J. A., Clarkson M. G., Held R. M. (1988). Growth in head size during infancy: Implications for sound localization. Developmental Psychobiology, 24(4), 477–483. 10.1037/0012-1649.24.4.477
- Cox R. M., Schwartz K. S., Noe C. M., Alexander G. C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing, 32(2), 181–197. 10.1097/AUD.0b013e3181f8bf6c
- Croghan N. B. H., Duran S. I., Smith Z. M. (2017). Re-examining the relationship between number of cochlear implant channels and maximal speech intelligibility. Journal of the Acoustical Society of America, 142(6), EL537–EL543. 10.1121/1.5016044
- Dahmen J. C., Keating P., Nodal F. R., Schulz A. L., King A. J. (2010). Adaptation to stimulus statistics in the perception and neural representation of auditory space. Neuron, 66(6), 937–948. 10.1016/j.neuron.2010.05.018
- Darwin C. J. (1997). Auditory grouping. Trends in Cognitive Sciences, 1(9), 327–333. 10.1016/b978-012505626-7/50013-3
- Day M. L., Semple M. N. (2011). Frequency-dependent interaural delays in the medial superior olive: Implications for interaural cochlear delays. Journal of Neurophysiology, 106(4), 1985–1999. 10.1152/jn.00131.2011
- Dennison S. R., Jones H. G., Kan A., Litovsky R. Y. (2022). The impact of synchronized cochlear implant sampling and stimulation on free-field spatial hearing outcomes: Comparing the ciPDA research processor to clinical processors. Ear and Hearing, 43(4), 1262–1272. 10.1097/AUD.0000000000001179
- Deouell L. Y., Soroker N. (2000). What is extinguished in auditory extinction? NeuroReport, 11(13), 3059–3062. 10.1097/00001756-200009110-00046
- DeRoy Milvae K., Kuchinsky S. E., Stakhovskaya O. A., Goupell M. J. (2021). Dichotic listening performance and effort as a function of spectral resolution and interaural symmetry. Journal of the Acoustical Society of America, 150(2), 920–935. 10.1121/10.0005653
- DeVries L., Scheperle R., Bierer J. A. (2016). Assessing the electrode-neuron interface with the electrically evoked compound action potential, electrode position, and behavioral thresholds. Journal of the Association for Research in Otolaryngology, 17(3), 237–252. 10.1007/s10162-016-0557-9
- Dong Y., Briaire J. J., Siebrecht M., Stronks H. C., Frijns J. H. M. (2021). Detection of translocation of cochlear implant electrode arrays by intracochlear impedance measurements. Ear and Hearing, 42(5), 1397–1404. 10.1097/AUD.0000000000001033
- Doucet J. R., Ryugo D. K. (2003). Axonal pathways to the lateral superior olive labeled with biotinylated dextran amine injections in the dorsal cochlear nucleus of rats. Journal of Comparative Neurology, 461(4), 452–465. 10.1002/cne.10722
- Dunn C. C., Walker E. A., Oleson J., Kenworthy M., Van Voorst T., Tomblin J. B. … & Gantz B. J. (2014). Longitudinal speech perception and language performance in pediatric cochlear implant users: The effect of age at implantation. Ear and Hearing, 35(2), 148–160. 10.1097/AUD.0b013e3182a4a8f0
- Durlach N. I. (1963). Equalization and cancellation theory of binaural masking-level differences. Journal of the Acoustical Society of America, 35(8), 1206–1218. 10.1121/1.1918675
- Eddolls M. S., Molis M. R., Reiss L. A. J. (2022). Onset asynchrony: Cue to aid dichotic vowel segregation in listeners with normal hearing and hearing loss. Journal of Speech, Language, and Hearing Research, 65(7), 2709–2719. 10.1044/2022_jslhr-21-00411
- Ehlers E., Goupell M. J., Zheng Y., Godar S. P., Litovsky R. Y. (2017). Binaural sensitivity in children who use bilateral cochlear implants. Journal of the Acoustical Society of America, 141(6), 4264–4277. 10.1121/1.4983824
- Firszt J. B., Reeder R. M., Dwyer N. Y., Burton H., Holden L. K. (2015). Localization training results in individuals with unilateral severe to profound hearing loss. Hearing Research, 319, 48–55. 10.1016/j.heares.2014.11.005
- Fischer T., Schmid C., Kompis M., Mantokoudis G., Caversaccio M., Wimmer W. (2021). Effects of temporal fine structure preservation on spatial hearing in bilateral cochlear implant users. Journal of the Acoustical Society of America, 150(2), 673–686. 10.1121/10.0005732
- Fitzgerald M. B., Kan A., Goupell M. J. (2015). Bilateral loudness balancing and distorted spatial perception in recipients of bilateral cochlear implants. Ear and Hearing, 36(5), e225–e236. 10.1097/AUD.0000000000000174
- Fitzpatrick E. M., Leblanc S. (2010). Exploring the factors influencing discontinued hearing aid use in patients with unilateral cochlear implants. Trends in Amplification, 14(4), 199–210. 10.1177/1084713810396511
- Franken T. P., Bondy B. J., Haimes D. B., Goldwyn J. H., Golding N. L., Smith P. H., Joris P. X. (2021). Glycinergic axonal inhibition subserves acute spatial sensitivity to sudden increases in sound intensity. ELife, 10(e62183), 1–33. 10.7554/eLife.62183
- Franken T. P., Joris P. X., Smith P. H. (2018). Principal cells of the brainstem’s interaural sound level detector are temporal differentiators rather than integrators. ELife, 7(e33854), 1–25. 10.7554/eLife.33854
- Friesen L. M., Shannon R. V., Baskent D., Wang X. (2001). Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants. Journal of the Acoustical Society of America, 110(2), 1150–1163. 10.1121/1.1381538
- Fritz J. B., Elhilali M., David S. V., Shamma S. A. (2007). Auditory attention - focusing the searchlight on sound. Current Opinion in Neurobiology, 17(4), 437–455. 10.1016/j.conb.2007.07.011
- Gallun F. J., Mason C. R., Kidd G. J. (2007). The ability to listen with independent ears. Journal of the Acoustical Society of America, 122(5), 2814–2825. 10.1121/1.2780143
- Garadat S. N., Zwolan T. A., Pfingst B. E. (2012). Across-site patterns of modulation detection: Relation to speech recognition. Journal of the Acoustical Society of America, 131(5), 4030–4041. 10.1121/1.3701879
- Garadat S. N., Zwolan T. A., Pfingst B. E. (2013). Using temporal modulation sensitivity to select stimulation sites for processor MAPs in cochlear implant listeners. Audiology and Neurootology, 18(4), 247–260. 10.1159/000351302
- Gifford R. H., Dorman M. F. (2018). Bimodal hearing or bilateral cochlear implants? Ask the patient. Ear and Hearing, 40(3), 501–516. 10.1097/AUD.0000000000000657
- Gillespie M. J., Stein R. B. (1983). The relationship between axon diameter, myelin thickness and conduction velocity during atrophy of mammalian peripheral nerves. Brain Research, 259(1), 41–56. 10.1016/0006-8993(83)91065-X
- Gleiss H., Encke J., Lingner A., Jennings T. R., Brosel S., Kunz L. … & Pecka M. (2019). Cooperative population coding facilitates efficient sound-source separability by adaptation to input statistics. PLoS Biology, 17(7), 1–24. 10.1371/journal.pbio.3000150
- Goldberg J. M., Brown P. B. (1969). Response of binaural neurons of dog superior olivary complex to dichotic tonal stimuli: Some physiological mechanisms of sound localization. Journal of Neurophysiology, 32(4), 613–636. 10.1007/978-1-4612-2700-7_3
- Golding N. L., Oertel D. (2012). Synaptic integration in dendrites: Exceptional need for speed. Journal of Physiology, 590(22), 5563–5569. 10.1113/jphysiol.2012.229328
- Goldwyn J. H., Bierer S. M., Bierer J. A. (2010). Modeling the electrode-neuron interface of cochlear implants: Effects of neural survival, electrode placement, and the partial tripolar configuration. Hearing Research, 268(1–2), 93–104.
- Gordon K., Henkin Y., Kral A. (2015). Asymmetric hearing during development: The aural preference syndrome and treatment options. Pediatrics, 136(1), 141–153. 10.1542/peds.2014-3520
- Gordon K., Kral A. (2019). Animal and human studies on developmental monaural hearing loss. Hearing Research, 380, 60–74. 10.1016/j.heares.2019.05.011
- Gordon K. A., Jiwani S., Papsin B. C. (2014). What is the optimal timing for bilateral cochlear implantation in children? Cochlear Implants International, 12(S2), S8–S14. 10.1179/146701011X13074645127199
- Goupell M. J. (2012). The role of envelope statistics in detecting changes in interaural correlation. Journal of the Acoustical Society of America, 132(3), 1561–1572. 10.1121/1.4740498
- Goupell M. J. (2015). Interaural envelope correlation change discrimination in bilateral cochlear implantees: Effects of mismatch, centering, and onset of deafness. Journal of the Acoustical Society of America, 137(3), 1282–1297. 10.1121/1.4908221
- Goupell M. J., Eisenberg D., DeRoy Milvae K. (2021). Dichotic listening performance with cochlear-implant simulations of ear asymmetry is consistent with difficulty ignoring clearer speech. Attention, Perception, & Psychophysics, 83(5), 2083–2101. 10.3758/s13414-021-02244-x
- Goupell M. J., Kan A., Litovsky R. Y. (2016). Spatial attention in bilateral cochlear-implant users. Journal of the Acoustical Society of America, 140(3), 1652–1662. 10.1121/1.4962378
- Goupell M. J., Kan A., Litovsky R. Y. (2013a). Mapping procedures can produce non-centered auditory images in bilateral cochlear implantees. Journal of the Acoustical Society of America, 133(2), EL101–EL107. 10.1121/1.4776772
- Goupell M. J., Litovsky R. Y. (2013). The effect of interaural fluctuation rate on correlation change discrimination. Journal of the Association for Research in Otolaryngology, 15, 115–129. 10.1007/s10162-013-0426-8
- Goupell M. J., Litovsky R. Y. (2015). Sensitivity to interaural envelope correlation changes in bilateral cochlear-implant users. Journal of the Acoustical Society of America, 137(1), 335–349. 10.1121/1.4904491
- Goupell M. J., Noble J. H., Phatak S. A., Kolberg E., Cleary M., Stakhovskaya O. A. … & Bernstein J. G. W. (2022). Computed-tomography estimates of interaural mismatch in insertion depth and scalar location in bilateral cochlear-implant users. Otology & Neurotology, 43(6), 666–675. 10.1097/MAO.0000000000003538
- Goupell M. J., Stakhovskaya O. A., Bernstein J. G. W. (2018a). Contralateral interference caused by binaurally presented competing speech in adult bilateral cochlear-implant users. Ear and Hearing, 39(1), 110–123. 10.1097/AUD.0000000000000470
- Goupell M. J., Stoelb C., Kan A., Litovsky R. Y. (2013b). Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. Journal of the Acoustical Society of America, 133(4), 2272–2287. 10.1121/1.4792936
- Goupell M. J., Stoelb C. A., Kan A., Litovsky R. Y. (2018b). The effect of simulated interaural frequency mismatch on speech understanding and spatial release from masking. Ear and Hearing, 39(5), 895–905. 10.1097/AUD.0000000000000541
- Grantham D. W., Ashmead D. H., Ricketts T. A., Haynes D. S., Labadie R. F. (2008). Interaural time and level difference thresholds for acoustically presented signals in post-lingually deafened adults fitted with bilateral cochlear implants using CIS+ processing. Ear and Hearing, 29(1), 33–44. 10.1097/AUD.0b013e31815d636f
- Gray W. O., Mayo P. G., Goupell M. J., Brown A. D. (2021). Transmission of binaural cues by bilateral cochlear implants: Examining the impacts of bilaterally independent spectral peak-picking, pulse timing, and compression. Trends in Hearing, 25, 1–23. 10.1177/23312165211030411
- Green D. M., Swets J. A. (1966). Signal detection theory and psychophysics (1st ed). Peninsula Publishing.
- Grieco-Calub T. M., Litovsky R. Y. (2010). Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing. Ear and Hearing, 31(5), 645–656. 10.1097/AUD.0b013e3181e50a1d
- Griffiths T. D., Warren J. D. (2004). What is an auditory object? Nature Reviews Neuroscience, 5, 887–892. 10.1038/nrn1538
- Guinan J. J., Norris B. E., Guinan S. S. (1972). Single auditory units in the superior olivary complex: II: Locations of unit categories and tonotopic organization. International Journal of Neuroscience, 4(4), 147–166. 10.3109/00207457209164756
- Hancock K. E., Chung Y., Delgutte B. (2013). Congenital and prolonged adult-onset deafness cause distinct degradations in neural ITD coding with bilateral cochlear implants. Journal of the Association for Research in Otolaryngology, 14, 393–411. 10.1007/s10162-013-0380-5
- Harris J. D. (1965). Monaural and binaural speech intelligibility and the stereophonic effect based upon temporal cues. The Laryngoscope, 75, 428–446. 10.1288/00005537-196503000-00003
- He S., Brown C. J., Abbas P. J. (2010). Effects of stimulation level and electrode pairing on the binaural interaction component of the electrically evoked auditory brain stem response. Ear and Hearing, 31(4), 457–470. 10.1097/AUD.0b013e3181d5d9bf
- Hiscock M., Cole L. C., Benthall J. G., Carlson V. L., Ricketts J. M. (2000). Toward solving the inferential problem in laterality research: Effects of increased reliability on the validity of the dichotic listening right-ear advantage. Journal of the International Neuropsychological Society, 6, 539–547. 10.1017/S1355617700655030
- Hiscock M., Kinsbourne M. (2011). Attention and the right-ear advantage: What is the connection? Brain and Cognition, 76(2), 263–275. 10.1016/j.bandc.2011.03.016 [DOI] [PubMed] [Google Scholar]
- Holder J. T., Reynolds S. M., Sunderhaus L. W., Gifford R. H. (2018). Current profile of adults presenting for preoperative cochlear implant evaluation. Trends in Hearing, 22, 1–16. 10.1177/2331216518755288 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hu H., Dietz M. (2015). Comparison of interaural electrode pairing methods for bilateral cochlear implants. Trends in Hearing, 19, 1–22. 10.1177/2331216515617143 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hugdahl K., Westerhausen R., Alho K., Medvedev S., Hämäläinen H. (2008). The effect of stimulus intensity on the right ear advantage in dichotic listening. Neuroscience Letters, 431(1), 90–94. 10.1016/j.neulet.2007.11.046 [DOI] [PubMed] [Google Scholar]
- Ihlefeld A., Carlyon R. P., Kan A., Churchill T. H., Litovsky R. Y. (2015). Limitations on monaural and binaural temporal processing in bilateral cochlear implant listeners. Journal of the Association for Research in Otolaryngology, 16(5), 641–652. 10.1007/s10162-015-0527-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ihlefeld A., Kan A., Litovsky R. Y. (2014). Across-frequency combination of interaural time difference in bilateral cochlear implant listeners. Frontiers in Systems Neuroscience, 8(22), 1–12. 10.3389/fnsys.2014.00022 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jeffress L. A. (1948). A place theory of sound localization. Journal of Comparative and Physiological Psychology, 41(1), 35–39. 10.1037/h0061495 [DOI] [PubMed] [Google Scholar]
- Joris P. X. (1996). Envelope coding in the lateral superior olive. II. Characteristic delays and comparison with responses in the medial superior olive. Journal of Neurophysiology, 76(4), 2137–2156. 10.1152/jn.1996.76.4.2137 [DOI] [PubMed] [Google Scholar]
- Joris P. X., Carney L. H., Smith P. H., Yin T. C. T. (1994). Enhancement of neural synchronization in the anteroventral cochlear nucleus. I. Responses to tones at the characteristic frequency. Journal of Neurophysiology, 71(3), 1022–1036. 10.1152/jn.1994.71.3.1022 [DOI] [PubMed] [Google Scholar]
- Joris P. X., Smith P. H. (2008). The volley theory and the spherical cell puzzle. Neuroscience, 154, 65–76. 10.1016/j.neuroscience.2008.03.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Joris P. X., Trussell L. O. (2018). The calyx of held: A hypothesis on the need for reliable timing in an intensity-difference encoder. Neuron, 100(3), 534–549. 10.1016/j.neuron.2018.10.026 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Joris P. X., Yin T. C. T. (1995). Envelope coding in the lateral superior olive. I. Sensitivity to interaural time differences. Journal of Neurophysiology, 73(3), 1043–1062. 10.1152/jn.1995.73.3.1043 [DOI] [PubMed] [Google Scholar]
- Joris P. X., Yin T. C. T. (1998). Envelope coding in the lateral superior olive. III. Comparison with afferent pathways. Journal of Neurophysiology, 79(1), 253–269. 10.1152/jn.1998.79.1.253 [DOI] [PubMed] [Google Scholar]
- Kan A., Goupell M. J., Litovsky R. Y. (2019). Effect of channel separation and interaural mismatch on fusion and lateralization in normal-hearing and cochlear-implant listeners. Journal of the Acoustical Society of America, 146(2), 1448–1463. 10.1121/1.5123464 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kan A., Litovsky R. Y. (2015). Binaural hearing with electrical stimulation. Hearing Research, 322, 127–137. 10.1016/j.heares.2014.08.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kan A., Litovsky R. Y., Goupell M. J. (2015). Effects of interaural pitch matching and auditory image centering on binaural sensitivity in cochlear implant users. Ear and Hearing, 36(3), e62–e68. 10.1097/AUD.0000000000000135 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kan A., Stoelb C., Litovsky R. Y., Goupell M. J. (2013). Effect of mismatched place-of-stimulation on binaural fusion and lateralization in bilateral cochlear-implant users. Journal of the Acoustical Society of America, 134(4), 2923–2936. 10.1121/1.4820889 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kaufmann C. R., Henslee A. M., Claussen A., Hansen M. R. (2020). Evaluation of insertion forces and cochlea trauma following robotics-assisted cochlear implant electrode array insertion. Otology & Neurotology, 41(5), 631–638. 10.1097/MAO.0000000000002608 [DOI] [PubMed] [Google Scholar]
- Keating P., Rosenior-Patten O., Dahmen J. C., Bell O., King A. J. (2016). Behavioral training promotes multiple adaptive processes following acute hearing loss. ELife, 5, e12264. 10.7554/eLife.12264 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Keine C., Rübsamen R. (2015). Inhibition shapes acoustic responsiveness in spherical bushy cells. Journal of Neuroscience, 35(22), 8579–8592. 10.1523/JNEUROSCI.0133-15.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kidd G., Mason C. R., Arbogast T. L., Brungart D. S., Simpson B. D. (2003). Informational masking caused by contralateral stimulation. Journal of the Acoustical Society of America, 113(3), 1594–1603. 10.1121/1.1547440 [DOI] [PubMed] [Google Scholar]
- Killan C., Scally A., Killan E., Totten C., Raine C. (2019). Factors affecting sound-source localization in children with simultaneous or sequential bilateral cochlear implants. Ear and Hearing, 40(4), 870–877. 10.1097/AUD.0000000000000666 [DOI] [PubMed] [Google Scholar]
- Kimura D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3(2), 163–178. 10.1016/s0010-9452(67)80010-8 [DOI] [Google Scholar]
- Kong Y.-Y., Carlyon R. P. (2010). Temporal pitch perception at high rates in cochlear implants. Journal of the Acoustical Society of America, 127(5), 3114–3123. 10.1121/1.3372713 [DOI] [PubMed] [Google Scholar]
- Kong Y.-Y., Deeks J. M., Axon P. R., Carlyon R. P. (2009). Limits of temporal pitch in cochlear implants. Journal of the Acoustical Society of America, 125(3), 1649–1657. 10.1121/1.3068457 [DOI] [PubMed] [Google Scholar]
- Koopmann M., Lesinski-Schiedat A., Illg A. (2020). Speech perception, dichotic listening, and ear advantage in simultaneous bilateral cochlear implanted children. Otology & Neurotology, 41(2), e208–e215. 10.1097/MAO.0000000000002456 [DOI] [PubMed] [Google Scholar]
- Kral A., Heid S., Hubka P., Tillein J. (2013a). Unilateral hearing during development: Hemispheric specificity in plastic reorganizations. Frontiers in Systems Neuroscience, 7(93), 1–13. 10.3389/fnsys.2013.00093 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kral A., Hubka P., Heid S., Tillein J. (2013b). Single-sided deafness leads to unilateral aural preference within an early sensitive period. Brain, 136(1), 180–193. 10.1093/brain/aws305 [DOI] [PubMed] [Google Scholar]
- Kral A., Yusuf P. A., Land R. (2017). Higher-order auditory areas in congenital deafness: Top-down interactions and corticocortical decoupling. Hearing Research, 343, 50–63. 10.1016/j.heares.2016.08.017 [DOI] [PubMed] [Google Scholar]
- Kumpik D. P., King A. J. (2019). A review of the effects of unilateral hearing loss on spatial hearing. Hearing Research, 372, 17–28. 10.1016/j.heares.2018.08.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Laback B., Egger K., Majdak P. (2015). Perception and coding of interaural time differences with bilateral cochlear implants. Hearing Research, 322, 138–150. 10.1016/j.heares.2014.10.004 [DOI] [PubMed] [Google Scholar]
- Laback B., Zimmermann I., Majdak P., Baumgartner W.-D., Pok S.-M. (2011). Effects of envelope shape on interaural envelope delay sensitivity in acoustic and electric hearing. Journal of the Acoustical Society of America, 130(3), 1515–1529. 10.1121/1.3613704 [DOI] [PubMed] [Google Scholar]
- Laumen G., Ferber A. T., Klump G. M., Tollin D. J. (2016). The physiological basis and clinical use of the binaural interaction component of the auditory brainstem response. Ear and Hearing, 37(5), e276–e290. 10.1097/AUD.0000000000000301 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Leake P. A., Hradek G. T. (1988). Cochlear pathology of long term neomycin induced deafness in cats. Hearing Research, 33(1), 11–33. 10.1016/0378-5955(88)90018-4 [DOI] [PubMed] [Google Scholar]
- Li B. Z., Pun S. H., Vai M. I., Lei T. C., Klug A. (2022). Predicting the influence of axon myelination on sound localization precision using a spiking neural network model of auditory brainstem. Frontiers in Neuroscience, 16(840983), 1–13. 10.3389/fnins.2022.840983 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lin P., Lu T., Zeng F.-G. (2013). Central masking with bilateral cochlear implants. Journal of the Acoustical Society of America, 133(2), 962–969. 10.1121/1.4773262 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lingner A., Pecka M., Leibold C., Grothe B. (2018). A novel concept for dynamic adjustment of auditory space. Scientific Reports, 8(8335), 1–12. 10.1038/s41598-018-26690-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Litovsky R., Parkinson A., Arcaroli J., Sammeth C. (2006). Simultaneous bilateral cochlear implantation in adults: A multicenter clinical study. Ear and Hearing, 27(6), 714–731. 10.1097/01.aud.0000246816.50820.42 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Litovsky R. Y., Ashmead D. H. (1997). Development of binaural and spatial hearing. In Gilkey R. H., Anderson T. R. (Eds.), Binaural and spatial hearing in real and virtual environments (pp. 163–195). Lawrence Erlbaum Associates. [Google Scholar]
- Litovsky R. Y., Jones G. L., Agrawal S., van Hoesel R. (2010). Effect of age at onset of deafness on binaural sensitivity in electric hearing in humans. Journal of the Acoustical Society of America, 127(1), 400–414. 10.1121/1.3257546 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Litovsky R. Y., Misurelli S. M. (2016). Does bilateral experience lead to improved spatial unmasking of speech in children who use bilateral cochlear implants? Otology & Neurotology, 37(2), e35–e42. 10.1097/MAO.0000000000000905 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loizou P. C. (2006). Speech processing in vocoder-centric cochlear implants. In Møller A. R. (Ed.), Cochlear and brainstem implants (Vol. 64, pp. 109–143). Karger. 10.1159/000094648 [DOI] [PubMed] [Google Scholar]
- Loizou P. C., Hu Y., Litovsky R., Yu G., Peters R., Lake J., Roland P. (2009). Speech recognition by bilateral cochlear implant users in a cocktail-party setting. Journal of the Acoustical Society of America, 125(1), 372–383. 10.1121/1.3036175 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Long C. J., Eddington D. K., Colburn H. S., Rabinowitz W. M. (2003). Binaural sensitivity as a function of interaural electrode position with a bilateral cochlear implant user. Journal of the Acoustical Society of America, 114(3), 1565–1574. 10.1121/1.1603765 [DOI] [PubMed] [Google Scholar]
- Long C. J., Holden T. A., McClelland G. H., Parkinson W. S., Shelton C., Kelsall D. C., Smith Z. M. (2014). Examining the electro-neural interface of cochlear implant users using psychophysics, CT scans, and speech understanding. Journal of the Association for Research in Otolaryngology, 15(2), 293–304. 10.1007/s10162-013-0437-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lupyan G., Clark A. (2015). Words and the world: Predictive coding and the language-perception-cognition interface. Current Directions in Psychological Science, 24(4), 279–284. 10.1177/0963721415570732 [DOI] [Google Scholar]
- McArdle R. A., Killion M., Mennite M. A., Chisolm T. H. (2012). Are two ears not better than one? Journal of the American Academy of Audiology, 23(3), 171–181. 10.3766/jaaa.23.3.4 [DOI] [PubMed] [Google Scholar]
- Middlebrooks J. C., Green D. M. (1991). Sound localization by human listeners. Annual Review of Psychology, 42(1), 135–159. 10.1146/annurev.ps.42.020191.001031 [DOI] [PubMed] [Google Scholar]
- Moberly A. C., Vasil K. J., Ray C. (2020). Visual reliance during speech recognition in cochlear implant users and candidates. Journal of the American Academy of Audiology, 31(1), 30–39. 10.3766/jaaa.18049 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Monaghan J. J. M., Bleeck S., McAlpine D. (2015). Sensitivity to envelope interaural time differences at high modulation rates. Trends in Hearing, 19, 1–14. 10.1177/2331216515619331 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore B. C. J. (2004). Dead regions in the cochlea: Conceptual foundations, diagnosis, and clinical applications. Ear and Hearing, 25(2), 98–116. 10.1097/01.AUD.0000120359.49711.D7 [DOI] [PubMed] [Google Scholar]
- Moore B. C. J. (2008). The role of temporal fine structure processing in pitch perception, masking, and speech perception for normal-hearing and hearing-impaired people. Journal of the Association for Research in Otolaryngology, 9(4), 399–406. 10.1007/s10162-008-0143-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore D. R., Hine J. E., Jiang Z. D., Matsuda H., Parsons C. H., King A. J. (1999). Conductive hearing loss produces a reversible binaural hearing impairment. Journal of Neuroscience, 19(19), 8704–8711. 10.1523/JNEUROSCI.19-19-08704.1999 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mosnier I., Sterkers O., Bebear J. P., Godey B., Robier A., Deguine O. … & Ferrary E. (2009). Speech performance and sound localization in a complex noisy environment in bilaterally implanted adult patients. Audiology and Neurootology, 14(2), 106–114. 10.1159/000159121 [DOI] [PubMed] [Google Scholar]
- Nadol J. B. J., Young Y.-S., Glynn R. J. (1989). Survival of spiral ganglion cells in profound sensorineural hearing loss: Implications for cochlear implantation. Annals of Otology, Rhinology & Laryngology, 98(6), 411–416. 10.1177/000348948909800602 [DOI] [PubMed] [Google Scholar]
- Nadol J. J. (1997). Patterns of neural degeneration in the human cochlea and auditory nerve: Implications for cochlear implantation. Otolaryngology–Head and Neck Surgery, 117(3), 220–228. 10.1016/S0194-5998(97)70178-5 [DOI] [PubMed] [Google Scholar]
- Noble J. H., Gifford R. H., Hedley-Williams A. J., Dawant B. M., Labadie R. F. (2014). Clinical evaluation of an image-guided cochlear implant programming strategy. Audiology and Neurootology, 19(6), 400–411. 10.1159/000365273 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Oh Y., Hartling C. L., Srinivasan N. K., Eddolls M., Diedesch A. C., Gallun F. J., Reiss L. A. J. (2019). Broad binaural fusion impairs segregation of speech based on voice pitch differences in a ‘cocktail party’ environment. bioRxiv, 1–39. 10.1101/805309 [DOI] [Google Scholar]
- Pecka M., Brand A., Behrend O., Grothe B. (2008). Interaural time difference processing in the mammalian medial superior olive: The role of glycinergic inhibition. Journal of Neuroscience, 28(27), 6914–6925. 10.1523/JNEUROSCI.1660-08.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peters B. R., Wyss J., Manrique M. (2010). Worldwide trends in bilateral cochlear implantation. The Laryngoscope, 120(5), S17–S44. 10.1002/lary.20859 [DOI] [PubMed] [Google Scholar]
- Piechowiak T., Ewert S. D., Dau T. (2007). Modeling comodulation masking release using an equalization-cancellation mechanism. Journal of the Acoustical Society of America, 121(4), 2111–2126. 10.1121/1.2534227 [DOI] [PubMed] [Google Scholar]
- Polonenko M. J., Papsin B. C., Gordon K. A. (2015). The effects of asymmetric hearing on bilateral brainstem function: Findings in children with bimodal (electric and acoustic) hearing. Audiology and Neurootology, 20(suppl 1), 13–20. 10.1159/000380743 [DOI] [PubMed] [Google Scholar]
- Polonenko M. J., Papsin B. C., Gordon K. A. (2018a). Delayed access to bilateral input alters cortical organization in children with asymmetric hearing. Neuroimage: Clinical, 17, 415–425. 10.1016/j.nicl.2017.10.036 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Polonenko M. J., Papsin B. C., Gordon K. A. (2018b). Limiting asymmetric hearing improves benefits of bilateral hearing in children using cochlear implants. Scientific Reports, 8(1), 1–17. 10.1038/s41598-018-31546-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Potts W. B., Ramanna L., Perry T., Long C. J. (2019). Improving localization and speech reception in noise for bilateral cochlear implant recipients. Trends in Hearing, 23, 1–18. 10.1177/2331216519831492 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rao R. P. N., Ballard D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. 10.1038/4580 [DOI] [PubMed] [Google Scholar]
- Reiss L. A. J., Eggleston J. L., Walker E. P., Oh Y. (2016). Two ears are not always better than one: Mandatory vowel fusion across spectrally mismatched ears in hearing-impaired listeners. Journal of the Association for Research in Otolaryngology, 17, 341–356. 10.1007/s10162-016-0570-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Fowler J. R., Hartling C. L., Oh Y. (2018). Binaural pitch fusion in bilateral cochlear implant users. Ear and Hearing, 39(2), 390–397. 10.1097/AUD.0000000000000497 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Ito R. A., Eggleston J. L., Liao S., Becker J. J., Lakin C. E. … & Mcmenomey S. O. (2015). Pitch adaptation patterns in bimodal cochlear implant users: Over time and after experience. Ear and Hearing, 36(2), e23–e34. 10.1097/AUD.0000000000000114 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Ito R. A., Eggleston J. L., Wozny D. R. (2014a). Abnormal binaural spectral integration in cochlear implant users. Journal of the Association for Research in Otolaryngology, 15, 235–248. 10.1007/s10162-013-0434-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Molis M. R. (2021). Abnormal fusion of dichotic vowels across different fundamental frequencies in hearing-impaired listeners: An alternative explanation for difficulties with speech in background talkers. Journal of the Association for Research in Otolaryngology, 22(4), 443–461. 10.1007/s10162-021-00790-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Shayman C. S., Walker E. P., Bennett K. O., Fowler J. R., Hartling C. L. … & Oh Y. (2017). Binaural pitch fusion: Comparison of normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 141(3), 1909–1920. 10.1121/1.4978009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reiss L. A. J., Turner C. W., Karsten S. A., Gantz B. J. (2014b). Plasticity in human pitch perception induced by tonotopically mismatched electro-acoustic stimulation. Neuroscience, 256, 43–52. 10.1016/j.neuroscience.2013.10.024 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Remme M. W. H., Donato R., Mikiel-Hunter J., Ballestero J. A., Foster S., Rinzel J., McAlpine D. (2014). Subthreshold resonance properties contribute to the efficient coding of auditory spatial cues. Proceedings of the National Academy of Sciences of the USA, 111(22), 2339–2348. 10.1073/pnas.1316216111 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rhode W. S., Greenberg S. (1994). Encoding of amplitude modulation in the cochlear nucleus of the cat. Journal of Neurophysiology, 71(5), 1797–1825. 10.1152/jn.1994.71.5.1797 [DOI] [PubMed] [Google Scholar]
- Rosskothen-Kuhl N., Buck A. N., Li K., Schnupp J. W. H. (2021). Microsecond interaural time difference discrimination restored by cochlear implants after neonatal deafness. ELife, 10, e59300. 10.7554/eLife.59300 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rothman J. S., Young E. D., Manis P. B. (1993). Convergence of auditory nerve fibers onto bushy cells in the ventral cochlear nucleus: Implications of a computational model. Journal of Neurophysiology, 70(6), 2562–2583. 10.1152/jn.1993.70.6.2562 [DOI] [PubMed] [Google Scholar]
- Salloum C. A. M., Valero J., Wong D. D. E., Papsin B. C., Van Hoesel R., Gordon K. A. (2010). Lateralization of interimplant timing and level differences in children who use bilateral cochlear implants. Ear and Hearing, 31(4), 441–456. 10.1097/AUD.0b013e3181d4f228 [DOI] [PubMed] [Google Scholar]
- Sammeth C. A., Brown A. D., Greene N. T., Tollin D. J. (2023). Interaural frequency mismatch jointly modulates neural brainstem binaural interaction and behavioral interaural time difference sensitivity in humans. Hearing Research, 437, 108839. 10.1016/j.heares.2023.108839 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sanes D. H., Bao S. (2009). Tuning up the developing auditory CNS. Current Opinion in Neurobiology, 19(2), 188–199. 10.1016/j.conb.2009.05.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sanes D. H., Merickel M., Rubel E. W. (1989). Evidence for an alteration of the tonotopic map in the gerbil cochlea during development. Journal of Comparative Neurology, 279(3), 436–444. 10.1002/cne.902790308 [DOI] [PubMed] [Google Scholar]
- Sayers B. M. (1964). Image lateralization judgments with binaural tones. Journal of the Acoustical Society of America, 36(5), 923–926. 10.1121/1.1919121 [DOI] [Google Scholar]
- Schimmel O., van de Par S., Breebaart J., Kohlrausch A. (2008). Sound segregation based on temporal envelope structure and binaural cues. Journal of the Acoustical Society of America, 124(2), 1130–1145. 10.1121/1.2945159 [DOI] [PubMed] [Google Scholar]
- Schofield B. R. (2005). Superior olivary complex and lateral lemniscal connections of the auditory midbrain. In Winer J. A., Schreiner C. E. (Eds.), The inferior colliculus (pp. 132–154). Springer. 10.1007/0-387-27083-3_4 [DOI] [Google Scholar]
- Schvartz-Leyzac K. C., Holden T. A., Zwolan T. A., Arts H. A., Firszt J. B., Buswinka C. J., Pfingst B. E. (2020). Effects of electrode location on estimates of neural health in humans with cochlear implants. Journal of the Association for Research in Otolaryngology, 21(3), 259–275. 10.1007/s10162-020-00749-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schvartz-Leyzac K. C., Zwolan T. A., Pfingst B. E. (2017). Effects of electrode deactivation on speech recognition in multichannel cochlear implant recipients. Cochlear Implants International, 18(6), 324–334. 10.1080/14670100.2017.1359457 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seeber B. U., Fastl H. (2008). Localization cues with bilateral cochlear implants. Journal of the Acoustical Society of America, 123(2), 1030–1042. 10.1121/1.2821965 [DOI] [PubMed] [Google Scholar]
- Seidl A. H., Grothe B. (2005). Development of sound localization mechanisms in the Mongolian gerbil is shaped by early acoustic experience. Journal of Neurophysiology, 94, 1028–1036. 10.1152/jn.01143.2004 [DOI] [PubMed] [Google Scholar]
- Shepherd R. K., Hardie N. A. (2001). Deafness-induced changes in the auditory pathway: Implications for cochlear implants. Audiology and Neurootology, 6(6), 305–318. 10.1159/000046843 [DOI] [PubMed] [Google Scholar]
- Shepherd R. K., Javel E. (1997). Electrical stimulation of the auditory nerve. I. Correlation of physiological responses with cochlear status. Hearing Research, 108(1–2), 112–144. 10.1016/S0378-5955(97)00046-4 [DOI] [PubMed] [Google Scholar]
- Shepherd R. K., Roberts L. A., Paolini A. G. (2004). Long-term sensorineural hearing loss induces functional changes in the rat auditory nerve. European Journal of Neuroscience, 20(11), 3131–3140. 10.1111/j.1460-9568.2004.03809.x [DOI] [PubMed] [Google Scholar]
- Shinn-Cunningham B., Best V., Lee A. K. C. (2017). Auditory object formation and selection. In Middlebrooks J. C., Simon J., Popper A. N., Fay R. R. (Eds.), The auditory system at the cocktail party. Springer handbook of auditory research (Vol. 60, pp. 7–40). Springer. 10.1007/978-3-319-51662-2_2 [DOI] [Google Scholar]
- Shinn-Cunningham B. G. (2008). Object-based auditory and visual attention. Trends in Cognitive Sciences, 12(5), 182–186. 10.1016/j.tics.2008.02.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sinclair J. L., Fischl M. J., Alexandrova O., Heβ M., Grothe B., Leibold C., Kopp-Scheinpflug C. (2017). Sound-evoked activity influences myelination of brainstem axons in the trapezoid body. Journal of Neuroscience, 37(34), 8239–8255. 10.1523/JNEUROSCI.3728-16.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Smith P. H., Joris P. X., Carney L. H., Yin T. C. T. (1991). Projections of physiologically characterized globular bushy cell axons from the cochlear nucleus of the cat. Journal of Comparative Neurology, 304, 387–407. 10.1002/cne.903040305 [DOI] [PubMed] [Google Scholar]
- Smith P. H., Joris P. X., Yin T. C. T. (1998). Anatomy and physiology of principal cells of the medial nucleus of the trapezoid body (MNTB) of the cat. Journal of Neurophysiology, 79(6), 3127–3142. 10.1152/jn.1998.79.6.3127 [DOI] [PubMed] [Google Scholar]
- Smith Z. M., Delgutte B. (2007). Sensitivity to interaural time differences in the inferior colliculus with bilateral cochlear implants. Journal of Neuroscience, 27(25), 6740–6750. 10.1523/JNEUROSCI.0052-07.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Spirou G. A., Rager J., Manis P. B. (2005). Convergence of auditory-nerve fiber projections onto globular bushy cells. Neuroscience, 136(3), 843–863. 10.1016/j.neuroscience.2005.08.068 [DOI] [PubMed] [Google Scholar]
- Spoendlin H., Schrott A. (1989). Analysis of the human auditory nerve. Hearing Research, 43, 25–38. 10.1016/0378-5955(89)90056-7 [DOI] [PubMed] [Google Scholar]
- Stange A., Myoga M. H., Lingner A., Ford M. C., Alexandrova O., Felmy F. … & Grothe B. (2013). Adaptation in sound localization: From GABA B receptor-mediated synaptic modulation to perception. Nature Neuroscience, 16(12), 1840–1847. 10.1038/nn.3548 [DOI] [PubMed] [Google Scholar]
- Steel M. M., Papsin B. C., Gordon K. A. (2015). Binaural fusion and listening effort in children who use bilateral cochlear implants: A psychoacoustic and pupillometric study. PLoS ONE, 10(2), e0117611. 10.1371/journal.pone.0117611 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Steffens T., Lesinski-Schiedat A., Strutz J., Aschendorff A., Klenzner T., Rühl S. … & Lenarz T. (2008). The benefits of sequential bilateral cochlear implantation for hearing-impaired children. Acta Oto-Laryngologica, 128(2), 164–176. 10.1080/00016480701411528 [DOI] [PubMed] [Google Scholar]
- Strøm-Roum H., Rødvik A. K., Osnes T. A., Fagerland M. W., Wie O. B. (2012). Sound localising ability in children with bilateral sequential cochlear implants. International Journal of Pediatric Otorhinolaryngology, 76(9), 1245–1248. 10.1016/j.ijporl.2012.05.013 [DOI] [PubMed] [Google Scholar]
- Suneel D., Staisloff H., Shayman C. S., Stelmach J., Aronoff J. M. (2017). Localization performance correlates with binaural fusion for interaurally mismatched vocoded speech. Journal of the Acoustical Society of America, 142(3), EL276–EL280. 10.1121/1.5001903 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sunwoo W., Delgutte B., Chung Y. (2021). Chronic bilateral cochlear implant stimulation partially restores neural binaural sensitivity in neonatally-deaf rabbits. Journal of Neuroscience, 41(16), 3651–3664. 10.1523/JNEUROSCI.1076-20.2021 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Swaminathan J., Mason C. R., Streeter T. M., Best V., Roverud E., Kidd G. (2016). Role of binaural temporal fine structure and envelope cues in cocktail-party listening. Journal of Neuroscience, 36(31), 8250–8257. 10.1523/JNEUROSCI.4421-15.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tagoe T., Barker M., Jones A., Allcock N., Hamann M. (2014). Auditory nerve perinodal dysmyelination in noise-induced hearing loss. Journal of Neuroscience, 34(7), 2684–2688. 10.1523/JNEUROSCI.3977-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Takesian A. E., Kotak V. C., Sanes D. H. (2009). Developmental hearing loss disrupts synaptic inhibition: Implications for auditory processing. Future Neurology, 4(3), 331–349. 10.2217/FNL.09.5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tanabe H., Nishikawa T., Okuda J.-i., Shiraishi J. (1986). Auditory extinction to nonverbal and verbal stimuli. Acta Neurologica Scandinavica, 73(2), 173–179. 10.1111/j.1600-0404.1986.tb03260.x [DOI] [PubMed] [Google Scholar]
- Tanaka K., Ross B., Kuriki S., Harashima T., Obuchi C., Okamoto H. (2021). Neurophysiological evaluation of right-ear advantage during dichotic listening. Frontiers in Psychology, 12(696263), 1–12. 10.3389/fpsyg.2021.696263 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thakkar T., Anderson S. R., Kan A., Litovsky R. Y. (2020). Evaluating the impact of age, acoustic exposure, and electrical stimulation on binaural sensitivity in adult bilateral cochlear implant patients. Brain Sciences, 10(406), 1–26. 10.3390/brainsci10060406 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thornton J. L., Anbuhl K. L., Tollin D. J. (2021). Temporary unilateral hearing loss impairs spatial auditory information processing in neurons in the central auditory system. Frontiers in Neuroscience, 15(721922), 1–8. 10.3389/fnins.2021.721922 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Todd A. E., Goupell M. J., Litovsky R. Y. (2017). The relationship between intensity coding and binaural sensitivity in adults with cochlear implants. Ear and Hearing, 38(2), e128–e141. 10.1097/AUD.0000000000000382 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tollin D. J., Yin T. C. T. (2002). The coding of spatial location by single units in the lateral superior olive of the cat. II. The determinants of spatial receptive fields in azimuth. Journal of Neuroscience, 22(4), 1468–1479. 10.1523/jneurosci.22-04-01468.2002 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tsuchitani C. (1977). Functional organization of lateral cell groups of cat superior olivary complex. Journal of Neurophysiology, 40(2), 296–318. 10.1152/jn.1977.40.2.296 [DOI] [PubMed] [Google Scholar]
- Tsuchitani C. (1997). Input from the medial nucleus of trapezoid body to an interaural level detector. Hearing Research, 105, 165–176. 10.1016/s0378-5955(96)00212-2 [DOI] [PubMed] [Google Scholar]
- Turton L., Souza P., Thibodeau L., Hickson L., Gifford R., Bird J. … & Timmer B. (2020). Guidelines for best practice in the audiological management of adults with severe and profound hearing loss. Seminars in Hearing, 41(3), 141–245. 10.1055/s-0040-1714744 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tyler R. S., Witt S. A., Dunn C. C., Wang W. (2010). Initial development of a spatially separated speech-in-noise and localization training program. Journal of the American Academy of Audiology, 21(6), 390–403. 10.3766/jaaa.21.6.4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Van Deun L., van Wieringen A., Scherf F., Deggouj N., Desloovere C., Offeciers F. E. … & Wouters J. (2010). Earlier intervention leads to better sound localization in children with bilateral cochlear implants. Audiology and Neurootology, 15(1), 7–17. 10.1159/000218358 [DOI] [PubMed] [Google Scholar]
- van Hoesel R. J. M. (2008). Observer weighting of level and timing cues in bilateral cochlear implant users. Journal of the Acoustical Society of America, 124(6), 3861–3872. 10.1121/1.2998974 [DOI] [PubMed] [Google Scholar]
- van Hoesel R. J. M., Clark G. M. (1995). Fusion and lateralization study with two binaural cochlear implant patients. Annals of Otology, Rhinology & Laryngology, 104(166), 233–235. [PubMed] [Google Scholar]
- van Hoesel R. J. M., Clark G. M. (1997). Psychophysical studies with two binaural cochlear implant subjects. Journal of the Acoustical Society of America, 102(1), 495–507. 10.1121/1.419611 [DOI] [PubMed] [Google Scholar]
- van Hoesel R. J. M., Tyler R. S. (2003). Speech perception, localization, and lateralization with bilateral cochlear implants. Journal of the Acoustical Society of America, 113(3), 1617–1630. 10.1121/1.1539520 [DOI] [PubMed] [Google Scholar]
- Voyer D., Flight J. I. (2001). Reliability and magnitude of auditory laterality effects: The influence of attention. Brain and Cognition, 46(3), 397–413. 10.1006/brcg.2001.1298 [DOI] [PubMed] [Google Scholar]
- Voyer D., Ingram J. D. (2005). Attention, reliability, and validity of perceptual asymmetries in the fused dichotic words test. Laterality, 10(6), 545–561. 10.1080/13576500442000292 [DOI] [PubMed] [Google Scholar]
- Walden T. C., Walden B. E. (2005). Unilateral versus bilateral amplification for adults with impaired hearing. Journal of the American Academy of Audiology, 16(8), 574–584. 10.3766/jaaa.16.8.6 [DOI] [PubMed] [Google Scholar]
- Wan G., Corfas G. (2017). Transient auditory nerve demyelination as a new mechanism for hidden hearing loss. Nature Communications, 8(14487), 1–13. 10.1038/ncomms14487 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wanna G. B., Noble J. H., Carlson M. L., Gifford R. H., Dietrich M. S., Haynes D. S. … & Labadie R. F. (2014). Impact of electrode design and surgical approach on scalar location and cochlear implant outcomes. The Laryngoscope, 124(S6), S1–S7. 10.1002/lary.24728 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Warnecke M., Litovsky R. Y. (2021). Signal envelope and speech intelligibility differentially impact auditory motion perception. Scientific Reports, 11(1), 1–10. 10.1038/s41598-021-94662-y [DOI] [PMC free article] [PubMed] [Google Scholar]
- Whitmer W. M., Seeber B. U., Akeroyd M. A. (2014). The perception of apparent auditory source width in hearing-impaired adults. Journal of the Acoustical Society of America, 135(6), 3548–3559. 10.1121/1.4875575 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wilmington D., Gray L., Jahrsdoerfer R. (1994). Binaural processing after corrected congenital unilateral conductive hearing loss. Hearing Research, 74, 99–114. 10.1016/0378-5955(94)90179-1 [DOI] [PubMed] [Google Scholar]
- Witte C., Grube M., Cramon D. Y. v., Rübsamen R. (2012). Auditory extinction and spatio-temporal order judgment in patients with left- and right-hemisphere lesions. Neuropsychologia, 50(5), 892–903. 10.1016/j.neuropsychologia.2012.01.029 [DOI] [PubMed] [Google Scholar]
- Wood S., Hiscock M., Widrig M. (2000). Selective attention fails to alter the dichotic listening lag effect: Evidence that the lag effect is preattentional. Brain and Language, 71(3), 373–390. 10.1006/brln.1999.2271 [DOI] [PubMed] [Google Scholar]
- Yamazaki H., Easwar V., Polonenko M. J., Jiwani S., Wong D. D. E., Papsin B. C., Gordon K. A. (2017). Cortical hemispheric asymmetries are present at young ages and further develop into adolescence. Human Brain Mapping, 39(2), 941–954. 10.1002/hbm.23893 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yin T. C. T., Chan J. C. K. (1990). Interaural time sensitivity in medial superior olive of cat. Journal of Neurophysiology, 64(2), 465–488. 10.1152/jn.1990.64.2.465 [DOI] [PubMed] [Google Scholar]
- Yin T. C. T., Smith P. H., Joris P. X. (2019). Neural mechanisms of binaural processing in the auditory brainstem. Comprehensive Physiology, 9(4), 1503–1575. 10.1002/cphy.c180036 [DOI] [PubMed] [Google Scholar]
- Ylinen S., Bosseler A., Junttila K., Huotilainen M. (2017). Predictive coding accelerates word recognition and learning in the early stages of language development. Developmental Science, 20(6), 1–13. 10.1111/desc.12472 [DOI] [PubMed] [Google Scholar]
- Yoon Y. S., Li Y., Kang H. Y., Fu Q. J. (2011). The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users. International Journal of Audiology, 50(8), 554–565. 10.3109/14992027.2011.580785 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang P. X., Hartmann W. M. (2008). Lateralization of Huggins pitch. Journal of the Acoustical Society of America, 124(6), 3873–3887. 10.1121/1.2977683 [DOI] [PubMed] [Google Scholar]
- Zheng Y., Godar S. P., Litovsky R. Y. (2015). Development of sound localization strategies in children with bilateral cochlear implants. PLoS ONE, 10(8), e0135790. 10.1371/journal.pone.0135790 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhou N., Pfingst B. E. (2012). Psychophysically based site selection coupled with dichotic stimulation improves speech recognition in noise with bilateral cochlear implants. Journal of the Acoustical Society of America, 132(2), 994–1008. 10.1121/1.4730907 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhou R., Abbas P. J., Assouline J. G. (1995). Electrically evoked auditory brainstem response in peripherally myelin-deficient mice. Hearing Research, 88(1–2), 98–106. 10.1016/0378-5955(95)00105-D [DOI] [PubMed] [Google Scholar]
- Zwislocki J. J. (1971). A theory of central auditory masking and its partial validation. Journal of the Acoustical Society of America, 52(2), 644–659. 10.1121/1.1913154 [DOI] [Google Scholar]


