Abstract
Adults with unilateral hearing loss often demonstrate decreased sound localization ability and report that situations requiring spatial hearing are especially challenging. Few studies have evaluated localization abilities combined with training in this population. The present pilot study examined whether localization of two sound types would improve after training, and explored the relation between localization ability or training benefit and demographic factors. Eleven participants with unilateral severe to profound hearing loss attended five training sessions; localization cues gradually decreased across sessions. Localization ability was assessed pre- and post-training. Assessment stimuli were monosyllabic words and spectral and temporal random spectrogram sounds. Root mean square errors for each participant and stimulus type were used in group and correlation analyses; individual data were examined with ordinary least squares regression. Mean pre- to post-training test results were significantly different for all stimulus types. Among the participants, eight significantly improved following training on at least one localization measure, whereas three did not. Participants with the poorest localization ability improved the most; likewise, those with the best pre-training ability showed the least training benefit. Correlation results suggested that test age, age at onset of severe to profound hearing loss and better ear high frequency audibility may contribute to localization ability. Results support the need for continued investigation of localization training efficacy and consideration of localization training within rehabilitation protocols for individuals with unilateral severe to profound hearing loss.
Keywords: Unilateral severe to profound hearing loss, localization ability, training, single-sided deafness
1.1 INTRODUCTION
Normal hearing (NH) individuals localize sound in the horizontal plane primarily through binaural auditory input. Because of the spatial separation of the ears, interaural time differences (ITDs) and interaural level differences (ILDs) provide critical cues to horizontal (azimuthal) sound localization. These binaural cues are frequency-dependent; low frequency stimuli (< ~1500 Hz) are the primary source of ITD cues, whereas higher frequency stimuli (> ~1500 Hz) are the primary source of ILD cues (Rayleigh, 1907; Zwislocki and Feldman, 1956; Middlebrooks and Green, 1991). Judgments of sound source location in the vertical plane and of front/back position rely on spectral cues produced through high frequency filtering by the pinnae, head, and torso (Middlebrooks, 1992; Algazi et al., 2001).
Horizontal sound localization accuracy declines when normal binaural localization mechanisms are disrupted, for example by unilateral hearing loss or malformation of peripheral auditory structures. This has been demonstrated in animal models following monaural plugging (King et al., 2000; Kacelnik et al., 2006; Irving et al., 2011), deactivation of areas of auditory cortex (Nodal et al., 2012) and lesions to auditory cortex (Nodal et al., 2010). In humans, artificially altering the pinnae or other ear structures (e.g., occluding the concha with a custom earmold) degrades localization accuracy (Wightman and Kistler, 1997; Hofman et al., 1998; Van Wanrooij and Van Opstal, 2004; Irving and Moore, 2011; Agterberg et al., 2012). Furthermore, decreased azimuthal localization ability has been shown in NH adults under simulated hearing loss conditions, such as insertion of a monaural plug (Gustafson and Hamill, 1995; Van Wanrooij and Van Opstal, 2007; Kumpik et al., 2010; Agterberg et al., 2011). Decreased localization accuracy also has been demonstrated in adults with unilateral severe to profound hearing loss (SPHL); these individuals have particular difficulty localizing azimuthal sounds in a complex auditory environment containing multiple targets (Slattery and Middlebrooks, 1994; Van Wanrooij and Van Opstal, 2004; Wazen et al., 2005; Rothpletz et al., 2012). Collectively, these studies indicate that structural or sensory impairment of the auditory system reduces horizontal sound localization accuracy.
Reported data also show considerable variability in the degree to which localization accuracy is affected by monaural hearing. For example, relatively good localization has been found in some adults with various degrees of unilateral hearing loss (UHL), whereas others cannot localize at all (Slattery and Middlebrooks, 1994; Van Wanrooij and Van Opstal, 2004; Wazen et al., 2005; Agterberg et al., 2011; Firszt et al., 2012; Rothpletz et al., 2012). The variable impact of hearing loss on localization ability, even in individuals with unilateral SPHL, indicates that hearing loss in the poorer ear alone does not eradicate the potential to localize sound. Rather, this variability suggests that malleable processes in higher level structures might be modified through training to improve localization accuracy. Support for this possibility includes the adaptive changes in source cues that occur naturally. As children grow and develop, their sound localization adjusts to the modified binaural and spectral cues that result from changes in head (Clifton et al., 1988; Ashmead et al., 1991) and ear size (Niemitz et al., 2007; Otte et al., 2013). Furthermore, animal studies have shown that the localization mechanism adjusts to modified spectral cues known as head-related transfer functions (HRTFs), which are based on the spatial relationships between the head, ears and body/torso, relationships that change with growth and development (Carlile, 1991; Campbell et al., 2008). Adaptive changes can also be induced artificially. Studies have shown that adults can adapt to artificial modifications, such as monaural pinna or concha reshaping (Van Wanrooij and Van Opstal, 2005). Adaptation is facilitated by learning to use differences in perceived spectra, which is more effective when the sound spectrum is constant and known (Wightman and Kistler, 1997). When sound spectra are more diverse, spectral cues are less reliable, which leads to dependence on changes in proximal sound levels, or on the much less precise head-shadow effects (Van Wanrooij and Van Opstal, 2004). In a study of adults with unilateral SPHL, Agterberg et al. (2014) attributed much of the inter-subject variability in horizontal plane localization to the varying degrees of high frequency hearing in the better ear of participants. The participants with better high frequency hearing were also better able to use pinna-induced spectral-shape cues of the better hearing ear to aid sound localization. However, there was still considerable variation in ability among UHL listeners with low thresholds in the high frequencies.
To date, very few studies have assessed localization skills in combination with training in individuals with sensorineural hearing loss. Even scarcer are studies that have examined training to improve localization accuracy in individuals with unilateral SPHL. Tyler and colleagues (2010) developed a localization training program for individuals with hearing loss and reported data from three study participants, all of whom were bilateral cochlear implant users. One participant's performance improved following training, while the other two did not. A case study by Nawaz et al. (2014) reported the localization ability of an individual with single-sided deafness who received a cochlear implant and auditory training. Yet, despite the scarcity of training studies, available data support the need for greater investigation into the utility of localization training. Gatehouse and Noble (2004) developed the Speech, Spatial and Qualities of Hearing Scale (SSQ) to evaluate effects of hearing loss in terms of disability and communication functioning. Results of their initial study (of over 150 individuals with hearing loss) showed that speech understanding and spatial hearing were the most difficult listening contexts and that spatial hearing was a significant contributor to hearing handicap (Gatehouse and Noble, 2004). In a follow-up, retrospective study, individuals with hearing loss had difficulty with speech understanding; however, those with asymmetric hearing had the additional disadvantage of poorer spatial hearing than those with symmetric hearing loss (Noble and Gatehouse, 2004). Not surprisingly, the inability to localize sound is a stated deficit and area of frustration for individuals with unilateral SPHL (Bateman et al., 2000; Subramaniam et al., 2005). Dwyer et al. (2014) administered the SSQ to 31 individuals with unilateral SPHL and found that situations requiring spatial hearing were rated the most difficult everyday listening environments. It is becoming more apparent that poor self-perceived localization and spatial hearing can have serious ramifications for hearing handicap and quality of life, perhaps to a greater extent than initially imagined. It is unknown whether everyday localization accuracy can be facilitated by training and whether improvements identified in a laboratory setting can be sustained and generalized.
We conducted a pilot study of localization-specific training in adults with unilateral SPHL. The primary objective was to determine whether a period of training would improve localization of two sound types: monosyllabic words and broadband random spectrogram sounds (RSS). As in Litovsky et al. (2006), a speech stimulus was used because it is a naturally occurring signal and salient to everyday communication and functioning. Furthermore, several studies suggest that localization might be more accurate when speech stimuli are used compared to noise (Verschuur et al., 2005; Litovsky et al., 2006; Grantham et al., 2007). Reasons for this improvement are not entirely clear but are likely due in part to factors known to increase localization accuracy, including broader spectral composition and longer stimulus durations compared to noise bursts and tones (Middlebrooks and Green, 1991). In addition, the preference for a speech stimulus was based on the varied results obtained when using noise bursts or other non-speech stimuli (e.g., Van Wanrooij and Van Opstal, 2004; Van Wanrooij and Van Opstal, 2005; Wazen et al., 2005; Agterberg et al., 2012), particularly in training studies. A lack of training improvement with Gaussian noise and with 1 kHz and 4 kHz tones has been reported by Recanzone et al. (1998), whereas others have found that localization accuracy improved with training for 4 kHz and broadband stimuli (Abel and Paik, 2004). RSS stimuli were used in the training regime to explore whether targeting specific stimulus attributes, for example spectral information, leads to generalized improvement in the localization of words.
A secondary objective was to explore the relation between demographic variables (e.g., length of deafness or age) and localization ability or training benefit. Determining whether training improves localization accuracy carries clinical implications, including guidance for aural rehabilitation and potential patient benefit in everyday situations where directionality assists communication.
2.1 METHODS
All participants provided informed consent in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) and guidelines approved by the Human Research Protection Office at Washington University in St. Louis (ID # 201108278).
2.1.1 Participants
Eleven individuals with unilateral SPHL participated, five males and six females. The SPHL was in the left ear for seven participants and the right ear for four participants. Mean age at test was 54.2 years (SD 16.2; range 27–73 years), and mean duration of SPHL was 26.1 years (SD 21.6; range 2 months–73 years). Based on a four frequency pure tone average (FFPTA) at 0.5, 1, 2 and 4 kHz, mean hearing in the better ear was 14.3 dB HL (SD 8.8; range 5.0–32.5 dB HL). Several participants had some high frequency hearing loss in the better ear; the group mean of the 6 and 8 kHz threshold average was 32.0 dB HL (SD 21.4; range 10.0–62.5 dB HL). The mean FFPTA in the poorer ear was 111.5 dB HL (SD 12.7; range 75.0–118.8 dB HL). Only one participant had measurable hearing in the poorer ear at 6 and 8 kHz; P11's high frequency thresholds were at 80 dB HL. Etiology was as follows: acoustic neuroma (n=3), meningitis (n=1), mumps (n=1) and unknown (n=6). Table 1 provides the demographic information for each participant.
Table 1.
Participant demographics
Participant | Gender | Etiology | Age at Test (years) | AAO SPHL (years) | Length of Deafness (years) | Poorer Ear | FF PTA Better Ear (dB HL) | FF PTA Poorer Ear (dB HL) | HF PTA Better Ear (dB HL) |
---|---|---|---|---|---|---|---|---|---|
P01 | M | Unknown | 72 | 72 | 0.2 | Left | 32.5 | 75.0 | 62.5 |
P02 | F | AN | 51 | 48 | 3 | Right | 12.5 | NR | 37.5 |
P03 | M | AN | 72 | 48 | 24 | Right | 23.8 | NR | 55.0 |
P04 | M | AN | 56 | 33 | 23 | Right | 13.8 | NR | 57.5 |
P05 | F | Meningitis | 41 | 37 | 4 | Left | 5.0 | NR | 10.0 |
P06 | M | Unknown | 27 | Birth | 27 | Right | 5.0 | 111.3 | 12.5 |
P07 | F | Unknown | 46 | 30 | 16 | Left | 5.0 | 112.3 | 10.0 |
P08 | F | Unknown | 49 | Birth | 49 | Left | 10.0 | 113.8 | 12.5 |
P09 | M | Unknown | 37 | 3 | 34 | Left | 21.3 | 112.5 | 12.5 |
P10 | F | Unknown | 52 | 18 | 34 | Left | 17.5 | 110.0 | 50.0 |
P11 | F | Unknown | 73 | Birth | 73 | Left | 11.3 | NR | 32.5 |
AAO SPHL = age at onset of severe to profound hearing loss, AN = acoustic neuroma, dB = decibel, F = female, FF = four-frequency (.5, 1, 2, 4 kHz), HA = hearing aid, HF = high frequency (6 and 8 kHz); HL = hearing level, M = male, NR = no response at limits of audiometer, PTA = pure tone average
2.1.2 Procedures
Participants attended seven sessions: a pre-training assessment, five sound localization training sessions and a post-training assessment. All sessions were conducted in a double-walled sound booth (IAC 404-A) with participants seated facing an arc of 15 loudspeakers, numbered 1–15 and spaced 10° apart from −70° to 70° azimuth. Participants sat 54 inches from the center loudspeaker (#8); loudspeakers were at approximately head level. A MatLab script controlled stimulus presentation, pseudorandomizing the source location with equal numbers of presentations from each active loudspeaker. For pre- and post-training assessments, the test administrator was blinded to the loudspeaker source and recorded participant responses into the MatLab program, which compared presentation and response loudspeaker locations. Stimuli included monosyllabic words (Skinner et al., 2006) and two types of broadband (250–16,000 Hz) random spectrogram sounds (RSS) (Burton et al., 2012; Schönwiesner et al., 2005). The monosyllabic words were from CNC (consonant-nucleus-consonant) lists created at the University of Melbourne based on the selection criteria of the CNC word test by Peterson and Lehiste (1962). The American English recordings of these 30 CNC lists were spoken by the same male talker as the 10 Peterson and Lehiste CNC lists. The carrier word "ready," 433 ms in duration, preceded each presented word. The monosyllabic words averaged 528 ms in duration (SD 90 ms). RSS stimuli were introduced by Schönwiesner and colleagues (2005) and originally developed for use in functional magnetic resonance imaging studies exploring asymmetries in hemispheric processing of temporal and spectral information. RSS stimuli are complex sounds that vary independently in spectral or temporal complexity without changing bandwidth, energy or duration (referred to here as spectral RSS and temporal RSS stimuli). Although RSS stimuli share features of speech, they do not resemble speech (see Burton et al., 2012, for a complete description of the RSS stimuli). A single spectral RSS stimulus was used that had 10 spectral regions and a 3 Hz temporal modulation rate. Likewise, a single temporal RSS stimulus was used that had 3 spectral regions and a 15 Hz temporal modulation rate. The duration of both RSS stimulus types was 2 sec.
2.1.2.1 Pre- and Post-Training Assessments
Pre- and post-training assessments were conducted with 10 of the 15 loudspeakers active. Stimuli were presented through loudspeakers positioned at −70, −50, −30, −20, −10, 10, 20, 30, 50 and 70 degrees azimuth. Participants were unaware that five loudspeakers were inactive. There were three localization tests, one for each stimulus type (words, spectral RSS, and temporal RSS), with 100 trials per test, 10 from each active loudspeaker. The presentation order of stimulus type was pseudorandomized across participants and test sessions. Speech stimuli were presented at 60 dB SPL (intensity roved ± 3 dB). Prior to the onset of each trial, participants were instructed to keep their gaze fixed at 0° azimuth (center loudspeaker #8) until the word "ready" was heard, after which they could turn their head. Participants indicated the loudspeaker number from which they heard the word. After stating the loudspeaker number, they repositioned their gaze to the center loudspeaker. No feedback about response accuracy was provided. Assessment with RSS stimuli was similar to that with monosyllabic words: each RSS test consisted of 100 stimuli presented at 60 dB SPL (intensity roved ± 3 dB), 10 from each of the 10 active loudspeakers. Participants responded by stating the loudspeaker number from which they heard the sound; as with monosyllabic words, there was no response feedback.
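For illustration, the following Python sketch shows one way the balanced, pseudorandomized assessment trial list described above could be generated. The original study used a MatLab script that is not reproduced here; the function and variable names, and the assumption of a uniformly distributed level rove, are illustrative only.

```python
import random

# 15-loudspeaker arc, numbered 1-15, 10 degrees apart from -70 to +70 degrees azimuth.
SPEAKER_AZIMUTH = {n: -80 + 10 * n for n in range(1, 16)}    # e.g., speaker 8 -> 0 degrees

# The ten loudspeakers active during the pre- and post-training assessments.
ACTIVE_SPEAKERS = [n for n, az in SPEAKER_AZIMUTH.items()
                   if az in (-70, -50, -30, -20, -10, 10, 20, 30, 50, 70)]

def make_assessment_trials(trials_per_speaker=10, base_level_db=60.0, rove_db=3.0, seed=None):
    """Balanced, pseudorandomized trial list: equal presentations per active
    loudspeaker, presentation level roved around base_level_db.
    (A uniform rove is assumed here; the paper states only +/- 3 dB.)"""
    rng = random.Random(seed)
    trials = [{"speaker": n,
               "azimuth_deg": SPEAKER_AZIMUTH[n],
               "level_db_spl": base_level_db + rng.uniform(-rove_db, rove_db)}
              for n in ACTIVE_SPEAKERS
              for _ in range(trials_per_speaker)]
    rng.shuffle(trials)                                      # pseudorandom presentation order
    return trials

trials = make_assessment_trials(seed=1)
print(len(trials))                                           # 100 trials, 10 per active loudspeaker
```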
2.1.2.2 Localization Training Sessions
Five one-hour training sessions followed the pre-training assessment and were spaced at least three days but no more than one week apart. The equipment set-up was identical to that of the pre- and post-training assessments; however, the procedure was altered. Training sessions were conducted with all 15 loudspeakers active, only spectral and temporal RSS stimuli were used, and the presentation level was not roved. Each training session consisted of four runs, two with spectral and two with temporal RSS stimuli. Each run presented five stimuli from each of the 15 active loudspeakers for a total of 75 presentations. The stimulus type alternated across runs within a training session, and the initial stimulus type alternated across the 11 participants based on study enrollment order (i.e., temporal, spectral, temporal, spectral or spectral, temporal, spectral, temporal). Source loudspeaker location cues and accuracy feedback were provided at each training session via a touch screen monitor positioned directly in front of the participant at lap level. Initially the monitor displayed a diagram of the 15 numbered loudspeaker array and a "Play" button (Figure 1, Panel A). Upon touching "Play," the numbers disappeared and the stimulus location was cued by a 1 sec color change of the source loudspeaker on the diagram (Figure 1, Panel B). The auditory stimulus was presented 0.5 sec after the cue ended. After stimulus presentation, the numbers reappeared on the diagram and the participant identified the source by touching the loudspeaker location number on the monitor. After each response, accuracy feedback was signaled with three black flashes of the correct source loudspeaker on the diagram.
Figure 1.
Display diagrams used during localization training sessions. Panel A is the diagram as viewed by the participant with the loudspeaker location set-up and play button. Panels B–E depict the progression of cue specificity, from exact source (B) to no cues (E). Examples of cued loudspeaker locations are indicated in black (B–D).
Training session 1 provided maximum cues. The exact source loudspeaker was cued prior to every stimulus presentation (100% cue frequency) (Figure 1, Panel B). Before each stimulus trial, the participant was instructed to look for the cue on the diagram, and then orient their gaze to the center loudspeaker for the stimulus presentation. Accuracy feedback was provided for every response. In training session 2, the procedures were identical to training session 1 except that “no cue” trials randomly occurred for 10% of the presentations. In session 3, the exact source loudspeaker was not cued; instead, a group of three adjacent loudspeakers was cued, one of which was the actual source (Figure 1, Panel C). Sets of three adjacent loudspeakers were pre-determined based on loudspeaker number such that each loudspeaker only belonged to one set. This resulted in five loudspeaker sets: (1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12) and (13, 14, 15). Thus, if the source loudspeaker was #5, the cluster of loudspeakers 4, 5 and 6 was cued; or if the source loudspeaker was #9, the cluster of loudspeakers 7, 8 and 9 was cued. As in session 2, 10% of the presentations were not cued; however, accuracy feedback was still provided for every response. In session 4, the side of the source loudspeaker was cued on the diagram, i.e., right or left of 0° azimuth including center loudspeaker #8 (Figure 1, Panel D). Again, “no cue” trials randomly occurred for 10% of the presentations. In the fifth and final training session, no source cues were provided (Figure 1, Panel E); however, consistent with all other training sessions, accuracy feedback was provided after every response. The post-training assessment session occurred approximately one week following the fifth training session.
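The cue progression across the five training sessions can be summarized in a small data structure. The Python sketch below encodes the schedule and the fixed three-loudspeaker cluster sets described above; names and structure are illustrative, not the authors' MatLab implementation.

```python
# Session-by-session cue schedule as described in the text. "no_cue_rate" is the
# proportion of trials presented without a location cue; accuracy feedback
# followed every response in every session.
CUE_SCHEDULE = {
    1: {"cue": "exact_speaker",         "no_cue_rate": 0.0},
    2: {"cue": "exact_speaker",         "no_cue_rate": 0.1},
    3: {"cue": "three_speaker_cluster", "no_cue_rate": 0.1},
    4: {"cue": "side_of_array",         "no_cue_rate": 0.1},
    5: {"cue": "none",                  "no_cue_rate": 1.0},
}

# Pre-determined clusters of three adjacent loudspeakers used in session 3;
# each loudspeaker belongs to exactly one cluster.
CLUSTERS = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)]

def cluster_for(speaker: int) -> tuple:
    """Return the fixed three-loudspeaker cluster containing the source loudspeaker."""
    return next(c for c in CLUSTERS if speaker in c)

print(cluster_for(5))   # (4, 5, 6)
print(cluster_for(9))   # (7, 8, 9)
```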
2.1.3 Data analysis
Group data were analyzed using the root mean square (RMS) error obtained for each participant and stimulus type. The RMS error was computed from the response errors, i.e., the differences in degrees between the actual sound source and the participant's response on each trial. These differences were squared and summed, the sum was divided by the total number of trials, and the RMS error score was the square root of that quotient. A lower RMS error score indicated less localization error, or greater accuracy. Chance performance on this measure is approximately 59 degrees (see Footnote 1). A priori comparisons were evaluated with paired t-tests to identify significant differences between group mean pre- and post-training RMS error scores for each stimulus type. RMS error is useful because the unit of degrees is practical and can readily be compared across individuals; however, the RMS error value does not intrinsically provide information about localization accuracy as a function of source location, nor does it account for response variance. Therefore, ordinary least squares (OLS) regression, with standard errors corrected for unequal variance between conditions, was used to compare pre- and post-training performance for individual participants. In this analysis, response location was regressed on source location, and accuracy is reflected in the slope and fit of the best-fit line; the closer this line is to a perfect linear relationship between response and source locations, the more accurate the responses. A value of p < 0.001 was used to determine significant individual differences. Finally, RMS error values were used in Pearson correlations to identify possible relations between localization ability and length of deafness, age at onset of hearing loss, age at test, and better ear audibility. The two PTA measures of audibility were: 1) the FFPTA at 0.5, 1, 2 and 4 kHz; and 2) a high frequency PTA at 6 and 8 kHz.
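The following Python sketch illustrates the core calculations described above: the RMS error computation, a paired t-test on pre- versus post-training RMS error, OLS regression of response on source location with heteroskedasticity-robust standard errors, and a Pearson correlation. All data values are placeholders, and the specific robust-error estimator (HC3) is an assumption; the original analyses were implemented in other software and are not necessarily reproduced exactly here.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def rms_error(source_deg, response_deg):
    """RMS localization error in degrees: square the source-response differences,
    average across trials, and take the square root."""
    source = np.asarray(source_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return np.sqrt(np.mean((response - source) ** 2))

# Group analysis: paired t-test on pre- vs post-training RMS error for one stimulus
# type (one value per participant, n = 11). The arrays below are placeholders.
pre_rms = np.array([55., 40., 62., 48., 20., 15., 30., 18., 25., 50., 58.])
post_rms = np.array([40., 30., 45., 35., 19., 14., 28., 17., 22., 38., 44.])
t_stat, p_val = stats.ttest_rel(pre_rms, post_rms)

# Individual analysis: regress response location on source location with
# heteroskedasticity-robust standard errors (HC3 is an illustrative choice).
source = np.repeat([-70, -50, -30, -20, -10, 10, 20, 30, 50, 70], 10)    # 100 assessment trials
response = source + np.random.default_rng(0).normal(0, 25, source.size)  # placeholder responses
fit = sm.OLS(response, sm.add_constant(source)).fit(cov_type="HC3")
print(fit.params)   # intercept and slope; a slope near 1 with a tight fit = accurate localization

# Correlations between RMS error and demographic variables (ages below are placeholders).
age_at_test = np.linspace(27, 73, 11)
r, p_r = stats.pearsonr(pre_rms, age_at_test)
```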
3.1 RESULTS
Figure 2 shows results from each participant with source location along the x-axis and response location along the y-axis (both in degrees azimuth). For ease of interpretation, all data are plotted as if the participant's left ear had SPHL. A source location of −70° indicates the stimulus originated from the leftmost loudspeaker (#1), on the SPHL side; a source location of 70° indicates the stimulus originated from the rightmost loudspeaker (#15), on the NH side. Complete accuracy would result in a straight diagonal line from the lower left to the upper right corner of the plot. There are three plots for each participant, organized in columns by stimulus type: monosyllabic words (squares, left column), spectral RSS (triangles, center column) and temporal RSS (circles, right column). Pre-training scores are represented by the grey symbols, post-training scores by the black symbols. Of the 11 participants, eight showed significant improvement in localization accuracy for at least one stimulus, whereas three (P05, P07 and P08) did not improve significantly for any stimulus. Six participants improved for monosyllabic word localization, six for spectral RSS stimuli, and seven for temporal RSS. Four participants improved for all three stimulus types, three improved for two, and one participant improved for a single stimulus. For some participants, the changes were smaller and limited to part of the array (e.g., P06, P09), whereas for others the changes were much larger and occurred across most of the array (e.g., P01, P04).
Figure 2.
Individual participant localization results. Results are shown for Words (left column of panels, squares), Spectral RSS stimuli (middle column of panels, triangles) and Temporal RSS stimuli (right column of panels, circles). For each participant and stimulus type, scores in grey-filled symbols are pre-training results and those in black-filled symbols are post-training results. Symbols represent mean responses in degrees azimuth. The x-axis represents the stimulus loudspeaker location and the y-axis represents the response loudspeaker selection. Significant differences between the pre- and post-training conditions for a given stimulus type are indicated with asterisks in the lower right corner of each panel.
Figure 3 shows the group mean RMS error (in degrees) for each stimulus (monosyllabic words, spectral RSS and temporal RSS) at pre-training (light grey bars) and post-training (dark grey bars). Post-training RMS error scores were significantly lower (better localization) than pre-training scores for all stimuli: monosyllabic words, t(10) = 2.3, p < 0.05; spectral RSS, t(10) = 3.1, p < 0.05; and temporal RSS, t(10) = 3.4, p < 0.01. (Note that corrections were not made for multiple comparisons because of the planned nature of these three comparisons.)
Figure 3.
Group mean RMS error scores. Group means are shown for Words and Spectral and Temporal RSS stimuli. Scores in grey are pre-training results and in black are post-training results.
* p < 0.05; ** p < 0.01.
Correlation analyses, computed with RMS error scores, found that age at test correlated with pre-training temporal RSS RMS error (r = 0.65), and with post-training monosyllabic word (r = 0.71), spectral RSS (r = 0.64) and temporal RSS (r = 0.72) RMS error. Age at onset of hearing loss correlated with pre-training monosyllabic word (r = 0.66) and spectral RSS (r = 0.63) RMS error, and with post-training spectral RSS (r = 0.71) RMS error. The FFPTA of the better ear correlated with post-training spectral (r = 0.64) and temporal (r = 0.65) RSS RMS error scores. The better ear high frequency PTA correlated with pre-training monosyllabic word (r = 0.80), spectral RSS (r = 0.86), and temporal RSS (r = 0.83) RMS error scores, and with post-training monosyllabic word (r = 0.74), spectral RSS (r = 0.66), and temporal RSS (r = 0.76) RMS error scores. The correlations reported here were all significant at the 0.01 level, except post-training temporal RSS (p < 0.05). Length of deafness did not correlate with pre- or post-training RMS error for any stimulus (p > 0.05). There were significant correlations between pre- and post-training RMS error scores for all stimuli (r values ranged from 0.64 to 0.74, all ps < 0.05). The difference between pre- and post-training RMS error, i.e., the amount of improvement in localization accuracy following training, correlated with pre-training scores for all stimulus types (r values ranged from 0.67 to 0.85, all ps < 0.05).
4.1 DISCUSSION
The primary objective was to investigate whether sound localization of two sound types would improve following a training regime in individuals with unilateral SPHL. In addition, we collected and analyzed demographic information to determine whether demographic factors, such as age or length of deafness, underlie monaural sound localization ability or training benefit.
Several studies show that training promotes localization accuracy when a normal auditory system is artificially modified (Shinn-Cunningham et al., 1998; Kumpik et al., 2010; Irving and Moore, 2011). Decrements in localization ability have been produced by fitting a unilateral hearing aid to delay input, which disrupts binaural timing cues (Javer and Schwarz, 1995); monaural plugging, which interferes with binaural ITD and ILD cues (Irving and Moore, 2011); and employing head-related transfer functions and a nonlinear transform, which skews the relationship between actual-source and perceived-source locations (Shinn-Cunningham et al., 1998). In animal studies, monaural plugging in adulthood immediately degraded localization accuracy, which then recovered over time (King et al., 2000) and recovered at a faster rate when plugging was combined with training (Kacelnik et al., 2006). Similarly, in adults with NH, localization accuracy significantly declined immediately after modifications such as a monaural plug, and then recovered over time with experience (Javer and Schwarz, 1995; Kumpik et al., 2010) and training (Shinn-Cunningham et al., 1998; Kumpik et al., 2010; Irving and Moore, 2011). These studies indicate that sound localization in a disrupted auditory system can improve with experience, and that improvement can be facilitated by training. However, the applicability of these findings in NH individuals with simulated, temporary conductive hearing loss to individuals living with unilateral SPHL remains unclear. Studies that plugged one ear of NH participants may not have reduced hearing levels sufficiently to render the stimuli inaudible to the plugged ear (Wightman and Kistler, 1997). Thus, despite alteration of the ITD and ILD cues, it is likely that participants still had access to binaural ITD cues provided by low frequencies, which are attenuated less effectively by an ear plug than high frequencies. Learning to attend to these diminished binaural cues could have occurred with training. In contrast, hearing levels in individuals with unilateral SPHL are so extreme that reliance on monaural cues is mandatory. These monaural cues may include spectral information from high frequency filtering by the pinna, concha and ear canal, or proximal intensity levels arising from head shadow effects (Wightman and Kistler, 1997; Van Wanrooij and Van Opstal, 2004; Van Wanrooij and Van Opstal, 2005).
In the current study, adults with unilateral SPHL, who have no access to binaural cues, showed significantly better group mean post- than pre-training scores for all three stimulus types: improvements of 10 degrees for words, 14 degrees for temporal RSS and 16 degrees for spectral RSS. The 10-degree improvement for words was greater than the mean test-retest difference of 4.62 degrees (SD = 3.39) found for a group of 26 untrained individuals (mean age 49.1 years; SD 12.8 years) with unilateral SPHL who were tested with the same monosyllabic words. Improved accuracy that exceeds expected test-retest differences supports the efficacy of localization training. However, as in other studies of adults with UHL, pre-training performance ranged considerably, from a few participants who localized fairly well to others who had RMS error scores worse than chance. All participants, however, had poorer word localization accuracy than a group of 23 NH listeners (mean age 49.7 years; SD 11.6 years) who had a mean RMS error score of 3.41 degrees (SD = 2.49) (Firszt et al., 2013). Although individual abilities varied, participants with good word localization accuracy tended to have good accuracy localizing RSS stimuli, and participants with poor word localization accuracy typically had poor localization for all measures (e.g., see Figure 2, P06 and P04, respectively). Individuals with better accuracy also had less variance in their responses per loudspeaker location. Of the 11 participants, eight significantly improved on at least one localization measure following training, and three did not. High correlations between pre-training scores and the change in score (the difference between pre- and post-training RMS error scores) suggested that participants with the highest RMS error scores (poorest localization) benefitted the most; likewise, individuals with the best pre-training accuracy benefitted the least from training (although P06 and P09 did demonstrate some improvements for some stimuli). Notably, those with the poorest pre-training localization ability were also those with the greatest room for improvement, and vice versa. Why some individuals localized better than others prior to training, and whether any factors unrelated to pre-training localization ability predict training benefit, remains unclear.
Several studies have found that localization accuracy declines with age even after accounting for age-related hearing loss, presumably as a result of a declining ability to process auditory spectral and temporal cues (Abel et al., 2000; Dobreva et al., 2011; Freigang et al., 2014). Other studies have examined effects of audibility and found that hearing loss does impair localization ability (Neher et al., 2011). Noble and colleagues (1994) found that hearing loss in the mid and low frequencies impaired frontal horizontal plane localization, although the correlation between threshold levels and localization performance was only moderate. These studies differed from the current one in that the hearing impaired participants had some bilateral hearing. Correlation results in the current study sample with unilateral SPHL suggest that test age, age at onset of SPHL, and high frequency audibility may contribute to localization ability. Thus, because better-ear high-frequency thresholds were correlated with localization ability, it is possible that training assisted participants in attending to the salient information for improved localization. In particular, the training paradigm may have allowed participants to attend to high-frequency cues because of the controlled and quiet environment, conditions not experienced in everyday activities. If the usefulness of these cues was made evident through training, it might explain why improvements followed training rather than accruing over these listeners' years of experience with UHL. It is also possible that these improvements do not generalize to everyday listening, which occurs in uncontrolled and rarely quiet settings.
Although this study did not have sufficient participants to differentiate the performance of those with congenital SPHL from those with later onset, it is possible that age at onset of hearing loss contributes to localization ability. King and colleagues (2000) found that adult ferrets monaurally plugged in infancy performed a localization task as well as their NH counterparts, presumably as a result of adaptation to the unilateral conductive hearing loss produced by early-onset, long-term occlusion. Slattery and Middlebrooks (1994) tested five patients with long-term congenital unilateral hearing loss (four had no measurable hearing and one had moderate to profound hearing loss in the poor ear). Three of the five demonstrated near-normal localization, whereas none of seven NH plugged controls could localize. In the present study there were three individuals with congenital onset of SPHL; two of the three had the best localization ability (see Figure 2, P06 and P08). There were no correlations, however, between demographic factors and the amount of benefit from training.
While the current study protocol was not designed to determine which monaural cue or combination of cues aided localization accuracy, it is possible that performance differences reflect individual salience of head shadow or spectral cues. Head shadow effects are easily learned when the stimulus level is fairly constant, but they are not highly accurate; therefore, participants who relied solely on them would be prone to greater RMS error scores. Spectral cues can be spatially accurate when stimuli are similar in level across a broad frequency band with no variation in spectrum (Van Wanrooij and Van Opstal, 2004); however, training was completed with RSS stimuli while outcomes were assessed with words, stimuli that are neither fixed in spectrum nor similar to each other. Finally, it is possible that some participants learned to combine head shadow effects and spectral cues, though this is highly unlikely since doing so would require extracting and learning spectral cues during training and then applying them to a different and spectrally varied set of stimuli (words).
In summary, the results demonstrated improvements in sound localization with a localization-specific training protocol and indicate the need for continued investigation of localization training efficacy and its future utility in rehabilitation programs for individuals with UHL. Though the study was limited by the lack of a control group, the data indicated that a majority of individuals benefited from training. Determining whether certain demographic factors predispose some individuals to have better pre-training localization or to benefit more from training will require a larger study cohort. Future work should address the contributions of age and of the degree and configuration of better-ear hearing loss to localization ability, as well as the impact of hearing history and cognitive factors (e.g., processing speed and visuospatial memory) on localization training efficacy. It is worth acknowledging that improvement on the specific localization task used in this study does not necessarily mean that sound localization abilities in daily life changed for these participants. Training benefit should be evaluated for persistence and for generalization to non-experimental stimuli and environments; experimental design factors such as the intervals and duration of training, stimulus frequency and intensity characteristics, and the contributions of external auditory structural features should also be explored.
Highlights.
Adults with unilateral hearing loss had improved localization ability after training
Participants’ pre- and post-training localization abilities varied greatly
Post-training localization abilities varied less than pre-training localization abilities
Participants with the poorest localization ability improved the most from training
Further study of localization training is needed in this population
Acknowledgments
This research was supported by the Washington University Institute of Clinical and Translational Sciences Grant UL1 TR000448 from the National Center for Advancing Translational Sciences (NCATS) and by Grant R01 DC009010 from the National Institute on Deafness and Other Communication Disorders, both of the National Institutes of Health. We thank Tim Holden and Rosalie Uchanski for study design and programming assistance, Dorina Kallogjeri for statistical consultation, and Laura Czarniak for assistance with data collection.
Footnotes
1. Eleven adults (mean age 59 years; SD 2.9 years) were tested without sound to determine chance performance. Two modifications were made to the sound localization task described in the Methods section: all loudspeakers were inactive to ensure word presentations were inaudible, and hand-held vibrotactile stimulation cued participants to respond. Chance RMS error scores were calculated from the presentation loudspeaker locations identified by the MatLab program (even though the loudspeakers were inactive) and the participants' responses (guesses) as to sound source location. Each chance participant completed the task twice and results were averaged. The group mean RMS error score was 58.88 degrees (SD 2.91 degrees; range 55.90–63.85 degrees).
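As an informal cross-check of this empirically measured chance level, the Python sketch below simulates pure guessing under the assumption that sources are drawn uniformly from the 10 assessment loudspeaker locations and responses uniformly from all 15 loudspeaker positions; the simulated RMS error is close to the measured value. This is an illustration only, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

active_sources = np.array([-70, -50, -30, -20, -10, 10, 20, 30, 50, 70])  # assessment loudspeakers
response_options = np.arange(-70, 71, 10)                                 # all 15 numbered loudspeakers

# Simulate many 100-trial runs of pure guessing and compute each run's RMS error.
n_runs, n_trials = 10_000, 100
sources = rng.choice(active_sources, size=(n_runs, n_trials))
guesses = rng.choice(response_options, size=(n_runs, n_trials))
rms_per_run = np.sqrt(np.mean((guesses - sources) ** 2, axis=1))

print(rms_per_run.mean())   # roughly 60 degrees, consistent with the measured chance score
```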
Contributor Information
Jill B. Firszt, Email: FirsztJ@ent.wustl.edu.
Ruth M. Reeder, Email: ReederR@ent.wustl.edu.
Noël Y. Dwyer, Email: DwyerN@ent.wustl.edu.
Harold Burton, Email: Harold@pcg.wustl.edu.
Laura K. Holden, Email: HoldenL@ent.wustl.edu.
References
- Abel SM, Paik JES. The benefit of practice for sound localization without sight. Appl Acoust. 2004;65:229–241. doi: 10.1016/j.apacoust.2003.10.003.
- Abel SM, Giguère C, Consoli A, Papsin BC. The effect of aging on horizontal plane sound localization. J Acoust Soc Am. 2000;108:743–752. doi: 10.1121/1.429607.
- Agterberg MJ, Snik AF, Hol MK, van Esch TE, Cremers CW, Van Wanrooij MM, Van Opstal AJ. Improved horizontal directional hearing in bone conduction device users with acquired unilateral conductive hearing loss. J Assoc Res Otolaryngol. 2011;12:1–11. doi: 10.1007/s10162-010-0235-2.
- Agterberg MJH, Snik AFM, Hol MKS, Van Wanrooij MM, Van Opstal AJ. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: Improved directional hearing with a bone-conduction device. Hear Res. 2012;286:9–18. doi: 10.1016/j.heares.2012.02.012.
- Agterberg MJH, Hol MKS, Van Wanrooij MM, Van Opstal AJ, Snik AFM. Single-sided deafness and directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear. Front Neurosci. 2014;8. doi: 10.3389/fnins.2014.00188.
- Algazi VR, Avendano C, Duda RO. Elevation localization and head-related transfer function analysis at low frequencies. J Acoust Soc Am. 2001;109:1110–1122. doi: 10.1121/1.1349185.
- Ashmead DH, Davis DL, Whalen T, Odom RD. Sound localization and sensitivity to interaural time differences in human infants. Child Dev. 1991;62:1211–1226. doi: 10.1111/j.1467-8624.1991.tb01601.x.
- Bateman N, Nikolopoulos TP, Robinson K, O'Donoghue GM. Impairments, disabilities, and handicaps after acoustic neuroma surgery. Clin Otolaryngol Allied Sci. 2000;25:62–65. doi: 10.1046/j.1365-2273.2000.00326.x.
- Burton H, Firszt JB, Holden T, Agato A, Uchanski RM. Activation lateralization in human core, belt, and parabelt auditory fields with unilateral deafness compared to normal hearing. Brain Res. 2012;1454:33–47. doi: 10.1016/j.brainres.2012.02.066.
- Campbell RAA, King AJ, Nodal FR, Schnupp JWH, Carlile S, Doubell TP. Virtual adult ears reveal the roles of acoustical factors and experience in auditory space map development. J Neurosci. 2008;28:11557–11570. doi: 10.1523/JNEUROSCI.0545-08.2008.
- Carlile S. The auditory periphery of the ferret: Postnatal development of acoustic properties. Hear Res. 1991;51:265–278. doi: 10.1016/0378-5955(91)90043-9.
- Clifton RK, Gwiazda J, Bauer JA, Clarkson MG, Held RM. Growth in head size during infancy: Implications for sound localization. Dev Psychol. 1988;24:477–483.
- Dobreva MS, O'Neill WE, Paige GD. Influence of aging on human sound localization. J Neurophysiol. 2011;105:2471–2486. doi: 10.1152/jn.00951.2010.
- Dwyer NY, Firszt JB, Reeder RM. Effects of unilateral input and mode of hearing in the better ear: Self-reported performance using the Speech, Spatial and Qualities of Hearing Scale. Ear Hear. 2014;35:126–136. doi: 10.1097/AUD.0b013e3182a3648b.
- Firszt JB, Reeder RM, Holden LK. Speech recognition in noise and localization abilities in adults with unilateral hearing loss: Implications for cochlear implant candidacy. Conference on Implantable Auditory Prostheses; Lake Tahoe, CA. 2013.
- Firszt JB, Holden LK, Reeder RM, Waltzman SB, Arndt S. Auditory abilities after cochlear implantation in adults with unilateral deafness: a pilot study. Otol Neurotol. 2012;33:1339–1346. doi: 10.1097/MAO.0b013e318268d52d.
- Freigang C, Schmiedchen K, Nitsche I, Rübsamen R. Free-field study on auditory localization and discrimination performance in older adults. Exp Brain Res. 2014;232:1157–1172. doi: 10.1007/s00221-014-3825-0.
- Gatehouse S, Noble W. The Speech, Spatial and Qualities of Hearing Scale (SSQ). Int J Audiol. 2004;43:85–99. doi: 10.1080/14992020400050014.
- Grantham DW, Ashmead DH, Ricketts TA, Labadie RF, Haynes DS. Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear Hear. 2007;28:524–541. doi: 10.1097/AUD.0b013e31806dc21a.
- Gustafson TJ, Hamill TA. Differences in localization ability in cases of right versus left unilateral simulated conductive hearing loss. J Am Acad Audiol. 1995;6:124–128.
- Hofman PM, Van Riswick JGA, Van Opstal AJ. Relearning sound localization with new ears. Nat Neurosci. 1998;1:417–421. doi: 10.1038/1633.
- Irving S, Moore DR. Training sound localization in normal hearing listeners with and without a unilateral ear plug. Hear Res. 2011;280:100–108. doi: 10.1016/j.heares.2011.04.020.
- Irving S, Moore DR, Liberman MC, Sumner CJ. Olivocochlear efferent control in sound localization and experience-dependent learning. J Neurosci. 2011;31:2493–2501. doi: 10.1523/JNEUROSCI.2679-10.2011.
- Javer AR, Schwarz DWF. Plasticity in human directional hearing. J Otolaryngol. 1995;24:111–117.
- Kacelnik O, Nodal FR, Parsons CH, King AJ. Training-induced plasticity of auditory localization in adult mammals. PLoS Biol. 2006;4:627–638. doi: 10.1371/journal.pbio.0040071.
- King AJ, Parsons CH, Moore DR. Plasticity in the neural coding of auditory space in the mammalian brain. Proc Natl Acad Sci U S A. 2000;97:11821–11828. doi: 10.1073/pnas.97.22.11821.
- Kumpik DP, Kacelnik O, King AJ. Adaptive reweighting of auditory localization cues in response to chronic unilateral earplugging in humans. J Neurosci. 2010;30:4883–4894. doi: 10.1523/JNEUROSCI.5488-09.2010.
- Litovsky RY, Johnstone PM, Godar SP. Benefits of bilateral cochlear implants and/or hearing aids in children. Int J Audiol. 2006;45(Suppl 1):S78–91. doi: 10.1080/14992020600782956.
- Middlebrooks JC. Narrow-band sound localization related to external ear acoustics. J Acoust Soc Am. 1992;92:2607–2624. doi: 10.1121/1.404400.
- Middlebrooks JC, Green DM. Sound localization by human listeners. Annu Rev Psychol. 1991;42:135–159. doi: 10.1146/annurev.ps.42.020191.001031.
- Nawaz S, McNeill C, Greenberg SL. Improving sound localization after cochlear implantation and auditory training for the management of single-sided deafness. Otol Neurotol. 2014;35:271–276. doi: 10.1097/MAO.0000000000000257.
- Neher T, Laugesen S, Jensen NS, Kragelund L. Can basic auditory and cognitive measures predict hearing-impaired listeners' localization and spatial speech recognition abilities? J Acoust Soc Am. 2011;130:1542–1558. doi: 10.1121/1.3608122.
- Niemitz C, Nibbrig M, Zacher V. Human ears grow throughout the entire lifetime according to complicated and sexually dimorphic patterns -- conclusions from a cross-sectional analysis. Anthropol Anz. 2007;65:391–413. doi: 10.2307/29542890.
- Noble W, Gatehouse S. Interaural asymmetry of hearing loss, Speech, Spatial and Qualities of Hearing Scale (SSQ) disabilities, and handicap. Int J Audiol. 2004;43:100–114. doi: 10.1080/14992020400050015.
- Noble W, Byrne D, Lepage B. Effects on sound localization of configuration and type of hearing impairment. J Acoust Soc Am. 1994;95:992–1005. doi: 10.1121/1.408404.
- Nodal FR, Bajo VM, King AJ. Plasticity of spatial hearing: Behavioural effects of cortical inactivation. J Physiol. 2012;590:3965–3986. doi: 10.1113/jphysiol.2011.222828.
- Nodal FR, Kacelnik O, Bajo VM, Bizley JK, Moore DR, King AJ. Lesions of the auditory cortex impair azimuthal sound localization and its recalibration in ferrets. J Neurophysiol. 2010;103:1209–1225. doi: 10.1152/jn.00991.2009.
- Otte R, Agterberg MH, Van Wanrooij M, Snik AM, Van Opstal AJ. Age-related hearing loss and ear morphology affect vertical but not horizontal sound-localization performance. J Assoc Res Otolaryngol. 2013;14:261–273. doi: 10.1007/s10162-012-0367-7.
- Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord. 1962;27:62–70. doi: 10.1044/jshd.2701.62.
- Rayleigh L. On our perception of sound direction. Philos Mag. 1907;13:214–232.
- Recanzone GH, Makhamra SD, Guard DC. Comparison of relative and absolute sound localization ability in humans. J Acoust Soc Am. 1998;103:1085–1097. doi: 10.1121/1.421222.
- Rothpletz AM, Wightman FL, Kistler DJ. Informational masking and spatial hearing in listeners with and without unilateral hearing loss. J Speech Lang Hear Res. 2012;55:511–531. doi: 10.1044/1092-4388(2011/10-0205).
- Schönwiesner M, Rübsamen R, von Cramon DY. Hemispheric asymmetry for spectral and temporal processing in the human antero-lateral auditory belt cortex. Eur J Neurosci. 2005;22:1521–1528. doi: 10.1111/j.1460-9568.2005.04315.x.
- Shinn-Cunningham BG, Durlach NI, Held RM. Adapting to supernormal auditory localization cues. I. Bias and resolution. J Acoust Soc Am. 1998;103:3656–3666. doi: 10.1121/1.423088.
- Skinner MW, Holden LK, Fourakis MS, Hawks JW, Holden T, Arcaroli J, Hyde M. Evaluation of equivalency in two recordings of monosyllabic words. J Am Acad Audiol. 2006;17:350–366. doi: 10.3766/jaaa.17.5.5.
- Slattery WH, Middlebrooks JC. Monaural sound localization: acute versus chronic unilateral impairment. Hear Res. 1994;75:38–46. doi: 10.1016/0378-5955(94)90053-1.
- Subramaniam K, Eikelboom RH, Eager KM, Atlas MD. Unilateral profound hearing loss and the effect on quality of life after cerebellopontine angle surgery. Otolaryngol Head Neck Surg. 2005;133:339–346. doi: 10.1016/j.otohns.2005.05.017.
- Tyler RS, Witt SA, Dunn CC, Perreau A, Parkinson AJ, Wilson BS. An attempt to improve bilateral cochlear implants by increasing the distance between electrodes and providing complementary information to the two ears. J Am Acad Audiol. 2010;21:52–65. doi: 10.3766/jaaa.21.1.7.
- Van Wanrooij MM, Van Opstal AJ. Contribution of head shadow and pinna cues to chronic monaural sound localization. J Neurosci. 2004;24:4163–4171. doi: 10.1523/JNEUROSCI.0048-04.2004.
- Van Wanrooij MM, Van Opstal AJ. Relearning sound localization with a new ear. J Neurosci. 2005;25:5413–5424. doi: 10.1523/JNEUROSCI.0850-05.2005.
- Van Wanrooij MM, Van Opstal AJ. Sound localization under perturbed binaural hearing. J Neurophysiol. 2007;97:715–726. doi: 10.1152/jn.00260.2006.
- Verschuur CA, Lutman ME, Ramsden R, Greenham P, O'Driscoll M. Auditory localization abilities in bilateral cochlear implant recipients. Otol Neurotol. 2005;26:965–971. doi: 10.1097/01.mao.0000185073.81070.07.
- Wazen JJ, Ghossaini SN, Spitzer JB, Kuller M. Localization by unilateral BAHA users. Otolaryngol Head Neck Surg. 2005;132:928–932. doi: 10.1016/j.otohns.2005.03.014.
- Wightman FL, Kistler DJ. Monaural sound localization revisited. J Acoust Soc Am. 1997;101:1050–1063. doi: 10.1121/1.418029.
- Zwislocki J, Feldman RS. Just noticeable differences in dichotic phase. J Acoust Soc Am. 1956;28:860–864.