eLife. 2016 Mar 23;5:e12264. doi: 10.7554/eLife.12264

Behavioral training promotes multiple adaptive processes following acute hearing loss

Peter Keating 1,*, Onayomi Rosenior-Patten 1, Johannes C Dahmen 1, Olivia Bell 1, Andrew J King 1
Editor: Thomas D Mrsic-Flogel2
PMCID: PMC4841776  PMID: 27008181

Abstract

The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.

DOI: http://dx.doi.org/10.7554/eLife.12264.001

Research Organism: Human, Other

eLife digest

The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound’s origin. Hearing loss in one ear disrupts these between-ear comparisons, causing listeners to make errors when judging where sounds come from. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately.

Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways. The ferrets can either learn to remap the altered between-ear comparisons onto their new locations, or they can learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species.

Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to a hearing loss in one ear and recover their ability to localize sounds.

Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on the cues from the unplugged ear to locate sound. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, and so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear using each strategy. These insights may aid the development of new therapies to treat hearing loss.

DOI: http://dx.doi.org/10.7554/eLife.12264.002

Introduction

A major challenge faced by the brain is to maintain stable and accurate representations of the world despite changes in sensory input. This is important because the statistical structure of sensory experience varies across different environments (Mlynarski and Jost, 2014; Qian et al., 2012; Seydell et al., 2010), but also because long-term changes in sensory input result from a range of sensory impairments (Feldman and Brecht, 2005; Keating and King, 2015; Sengpiel, 2014). Adaptation to altered inputs has been demonstrated in different sensory systems, particularly during development, and serves to shape neural circuits to the specific inputs experienced by the individual (Margolis et al., 2014; Mendonca, 2014; Schreiner and Polley, 2014; Sur et al., 2013). However, many ecologically important aspects of neural processing require the integration of multiple sensory cues, either within or across different sensory modalities (Seilheimer et al., 2014; Seydell et al., 2010). A specific change in sensory input may therefore have a considerable impact on some cues whilst leaving others intact. In such cases, adaptation can be achieved in two distinct ways, as demonstrated by recent studies of sound localization following monaural deprivation during infancy (Keating et al., 2013; 2015).

Monaural deprivation alters the binaural spatial cues that normally determine the perceived location of a sound in the horizontal plane (Figure 1A) (Kumpik et al., 2010; Lupo et al., 2011). Adaptation can therefore be achieved by learning the altered relationships between particular cue values and spatial locations (Gold and Knudsen, 2000; Keating et al., 2015; Knudsen et al., 1984), a process referred to as cue remapping. However, at least in mammals, monaural spectral cues are also available to judge sound source location in both the horizontal and vertical planes (Carlile et al., 2005). These spectral cues arise from the acoustic properties of the head and external ears, which filter sounds in a direction-dependent way (Figure 1B). Monaural deprivation has no effect on the spectral cues available to the non-deprived ear. This means it is possible to adapt by learning to rely more on these unchanged spectral cues, whilst learning to ignore the altered binaural cues (Kacelnik et al., 2006; Keating et al., 2013; Kumpik et al., 2010), a form of adaptation referred to as cue reweighting.

Figure 1. Effect of training on localization of broadband noise stimuli in the horizontal plane by monaurally deprived human listeners.

(A) When one of these sounds is presented on one side of the head, it will be louder and arrive earlier at the ipsilateral ear (blue), producing interaural time and level differences, which are respectively the primary cues to sound location at low and high frequencies. (B) Because of acoustic filtering by the head and ears, the spectrum of a sound at the tympanic membrane (post-filtering, color) differs from that of the original sound (pre-filtering, black) and varies with location (amplitude in dB is plotted as a function of frequency; color indicates different locations). These spectral cues make it possible to localize sounds using a single ear, but only for sounds that have relatively flat spectra (solid lines) and are sufficiently broadband (shape of spectra in narrow frequency bands varies little with location – see shaded gray region). When spectral features are artificially added to the pre-filtered sound source (dotted lines), these added features can be misattributed to the filtering effects of the head and ears. This produces sound localization errors (e.g. dotted green spectrum is more easily confused with solid turquoise spectrum because of additional peak at high frequencies). The extent of these errors allows us to infer subjects’ reliance on spectral cues. (C, D) Joint distribution of stimulus and response obtained from the first (C) and last (D) training session for an individual subject with an earplug in the right ear. Grayscale indicates the number of trials corresponding to each stimulus-response combination. Data are shown for trials on which flat-spectrum stimuli were used (i.e. all spatial cues were available). (E) Sound localization performance (% correct) as a function of training session for the same subject. Scores for each session (dots) were fitted using linear regression (lines) to calculate slope values, which quantified the rate of adaptation. Relative to flat-spectrum stimuli (blue), much less adaptation occurred with random-spectrum stimuli (pink), which limit the usefulness of spectral cues to sound location (Figure 1—figure supplement 1). (F) Adaptation rate is shown for flat- and random-spectrum stimuli for each subject (gray lines; n = 11). Positive values indicate improvements in localization performance with training. Mean adaptation rates across subjects (± bootstrapped 95% confidence intervals) are shown in blue and pink. Similar results are observed if front-back errors are excluded and changes in error magnitude are calculated (Figure 1—figure supplement 2). Dotted black lines indicate adaptation rates observed previously in humans (Kumpik et al., 2010; total adaptation reported divided by number of sessions, n = 8).

DOI: http://dx.doi.org/10.7554/eLife.12264.003

Figure 1—figure supplement 1. Experimental setup and stimuli.

(A) Schematic illustrating the circular loudspeaker array used for sound localization training. Subjects sat at the centre of this array, facing in the direction indicated by the arrow. (B) Spectral profile for a random-spectrum stimulus (black). Spectra were filtered to eliminate abrupt spectral transitions to which the auditory system is insensitive (see Materials and methods). The overall amount of spectral randomization was also fixed on each trial (SD = 10 dB). Although the spectrum varied considerably across trials (many different examples are shown in gray), the mean spectrum was relatively flat (red). (C) Our randomization procedure allowed us to set the amount of randomization and overall level of each stimulus, but these parameters could still vary within individual frequency bands. We can measure this for a single stimulus by dividing its spectrum (gray) into one-octave bands and calculating the mean ± SD amplitude values for each band (black). This indicates that the level (mean) and amount of randomization (SD) of each frequency band fluctuates on each trial. (D,E) To determine whether these differences have an impact on sound localization, we compared the random-spectrum stimuli presented on correct (pink) and incorrect (blue) trials. Relative to mislocalized stimuli, we found that the amount of spectral randomization was smaller for correctly localized stimuli, but only at higher frequencies (D, indicated by asterisk; significant interaction between frequency and correctness of response, P < 0.001, ANOVA; P < 0.05, post hoc test). In other words, the spectra of correctly-localized sounds tended to be relatively flat at high frequencies. No differences in sound level were observed between stimuli on correct and incorrect trials (E). Data show mean ± SEM. (F) To understand the implications of this for sound localization, we subdivided trials into groups (deciles) based on the amount of spectral randomization at high frequencies (1st decile represents 10% of trials with the smallest amount of spectral randomization) and quantified sound localization accuracy (% correct) for each group. This indicates that some of our random-spectrum stimuli were more difficult to localize than others, with performance declining as spectral randomization increased at high frequencies. It also supports the view that increasing amounts of spectral randomization progressively degrade the usefulness of spectral cues, which are most prominent at high frequencies.

Figure 1—figure supplement 2. Effect of training on localization by human listeners of broadband stimuli using same analysis method as for narrowband stimuli in Figure 2.

(A–D) Joint distributions of stimulus and response obtained from the first (A,C) and last (B,D) training sessions for flat- (A,B) and random-spectrum (C,D) stimuli. Data are shown for an individual subject wearing an earplug in the left ear, with grayscale indicating the number of trials corresponding to each stimulus-response combination. Stimulus- and response-locations in the front and rear hemifields have been collapsed to provide a measure of sound localization that is insensitive to front-back errors. (E) Mean error magnitude plotted as a function of training session for the same subject shown in A–D. Data are plotted separately for flat- (turquoise) and random-spectrum (red) stimuli. Scores for each session (dots) were fitted using linear regression (lines) to calculate slope values, which quantified the change in error magnitude (Δ error) with training. Improved performance was associated with a reduction in error magnitude, producing negative values for Δ error. (F) Δ error for flat- and random-spectrum stimuli plotted for each subject (gray lines; n = 11). Mean values for Δ error across subjects (± bootstrapped 95% confidence intervals) are shown in color. Adaptation occurs for both flat- and random-spectrum stimuli (Δ error values are significantly <0; p<0.01, bootstrap test), but the extent of adaptation is greater for flat-spectrum stimuli (p<0.01, bootstrap test on the within-subject differences in Δ error). (G) Bias in sound localization responses plotted as a function of training session for the subject in E. Positive values indicate that responses were biased toward the side of the open ear. Data are plotted separately for flat- (turquoise) and random-spectrum (red) stimuli. Scores for each session (dots) were fitted using linear regression (lines) to calculate slope values, which quantified the change in response bias (Δ bias) with training. Negative values of Δ bias indicate an adaptive shift in response bias toward the side of the plugged ear. (H) Δ bias for flat- and random-spectrum stimuli plotted for each subject (gray lines; n = 11). Mean values for Δ bias across subjects (± bootstrapped 95% confidence intervals) are shown in color. No changes in bias were observed for either stimulus type (Δ bias values do not deviate significantly from 0; p>0.05, bootstrap test).

Developmental studies of sound localization plasticity following monaural deprivation have found evidence for both cue remapping (Gold and Knudsen, 2000; Keating et al., 2015; Knudsen et al., 1984) and cue reweighting (Keating et al., 2013), but it is not known whether these adaptive processes can occur simultaneously. Indeed, until recently, it was thought that monaural deprivation might induce different adaptive processes in different species (Keating and King, 2013; Shamma, 2015). However, whilst we now know that ferrets use both cue remapping and reweighting to adapt to monaural deprivation experienced during development (Keating et al., 2013; 2015), it is not known whether the same neural populations are involved in each case.

It is also not known whether the ability to use both adaptive processes is restricted to specific species or developmental epochs. Although the mature auditory system can adapt to monaural deprivation using cue reweighting (Kumpik et al., 2010), conflicting evidence for cue remapping has been obtained in adult humans fitted with an earplug in one ear for several days (Florentine, 1976; McPartland et al., 1997). To the extent that adaptive changes in binaural cue sensitivity are possible in adulthood, as suggested by other sensory manipulations (Trapeau and Schonwiesner, 2015), these may occur at the expense of cue reweighting. It is therefore unclear whether the same adult individuals can adapt to a unilateral hearing loss using multiple adaptive processes. Although numerous studies have shown that spatial hearing is more plastic early in life (Keating and King, 2013; Knudsen et al., 1984; Popescu and Polley, 2010), behavioral training can facilitate accommodation to altered cues in adulthood (Carlile, 2014; Carlile et al., 2014; Kacelnik et al., 2006; Shinn-Cunningham et al., 1998). Here, we show that adult humans are equally capable of using both adaptive processes, provided they are given appropriate training. Moreover, our results suggest that cue remapping and reweighting are neurophysiologically distinct, which we confirmed by recording from auditory cortical neurons in ferrets reared with an intermittent hearing loss in one ear.

Results

Adult humans were trained to localize sounds from 12 loudspeakers in the horizontal plane (Figure 1—figure supplement 1A) whilst wearing an earplug in one ear (~5600 trials split into 7 sessions completed in < 3 weeks). In order to directly measure the efficacy of training, earplugs were worn only during training sessions. This contrasts with previous work in which adult humans received minimal training, but were required to wear earplugs for extended periods of everyday life (Florentine, 1976; Kumpik et al., 2010; McPartland et al., 1997). On ~50% of trials, subjects were required to localize flat-spectrum broadband noise (0.5–20 kHz), which provides all of the available auditory spatial cues (Blauert, 1997). With these cue-rich stimuli, trials were repeated following incorrect responses (“correction trials”) and subjects were given performance feedback. Across training sessions, sound localization performance (% correct) gradually improved (Figure 1C–F; slope values >0; bootstrap test, p<0.01; Cohen’s d = 1.43), indicating that relatively short periods of training are sufficient to drive adaptation.
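
As a concrete illustration of this slope-based measure, the sketch below fits a line to hypothetical per-session % correct scores in MATLAB (the language used for stimulus generation and the response GUI in this study). The scores and variable names are illustrative only; the significance testing applied to the resulting slopes is described under Statistical analyses.

```matlab
% Hypothetical per-session scores for one subject and one stimulus type
pctCorrect = [22 28 31 35 38 41 45];        % % correct in sessions 1-7 (illustrative)
sessions   = 1:numel(pctCorrect);

% Adaptation rate = slope of a linear regression of score against session
coeffs = polyfit(sessions, pctCorrect, 1);  % returns [slope, intercept]
adaptationRate = coeffs(1);                 % % correct gained per session

fprintf('Adaptation rate: %.2f %% correct per session\n', adaptationRate);
```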

To determine the relative contributions of cue remapping and reweighting to these changes in localization accuracy, we measured the extent of adaptation for two additional stimulus types that restrict the availability of specific cues. For these cue-restricted stimuli, which were randomly interleaved with cue-rich stimuli, correction trials were not used and no feedback was given. The first of these additional stimulus types comprised broadband noise with a random spectral profile that varied across trials (Figure 1—figure supplement 1B). These stimuli disrupt spectral localization cues because it is unclear whether specific spectral features are produced by the filtering effects of the head and ears or are instead properties of the sound itself (Figure 1B) (Keating et al., 2013). Consequently, if subjects adapt to asymmetric hearing loss by giving greater weight to the spectral cues provided by the non-deprived ear, we would expect to see less improvement in sound localization performance for random-spectrum sounds than for flat-spectrum sounds. This is precisely what we found (Figure 1E,F; random-spectrum slope values < flat-spectrum slope values; bootstrap test, p<0.01; Cohen’s d = 1.18; see also Figure 1—figure supplement 2), indicating that adaptation involves learning to rely more on spectral cues.

However, if adaptation were solely dependent on this type of cue reweighting, we would expect no improvement in sound localization for narrowband sounds, such as pure tones. This is because spectral cues require a comparison of sound energy at different frequencies, which is not possible for these sounds (Figure 1B) (Carlile et al., 2005). Improved localization of pure tones would therefore indicate adaptive processing of binaural cues. Because interaural time differences (ITDs) and interaural level differences (ILDs) are respectively the primary cues for localizing low- (<1.5 kHz) and high-frequency (≥1.5 kHz) tones (Blauert, 1997), we tested each of these stimuli separately. To detect changes in binaural sensitivity, and facilitate comparison with previous work (Keating et al., 2015; Kumpik et al., 2010), stimulus and response locations in the front and rear hemifields were collapsed. This produces a measure of performance that is insensitive to front-back errors, which reflect failures in spectral, rather than binaural, processing. We observed improvements in subjects’ ability to localize both low- and high-frequency pure tones over time, demonstrated by a decline in error magnitude (Figure 2E,F; Δ error <0; bootstrap test, p<0.01). The initial bias toward the side of the open ear was also reduced (Figure 2G,H; Δ bias <0; bootstrap test, p<0.01; low-frequency, Cohen’s d = 0.7; high-frequency, Cohen’s d = 0.96). Adaptation therefore involves a shift in the mapping of altered binaural cues onto spatial location. Together, these results show that subjects adapted to monaural deprivation using a combination of both cue remapping and cue reweighting.

Figure 2. Effect of training on localization of pure tone stimuli in the horizontal plane by monaurally deprived human listeners.

(A–D) Joint distributions of stimulus and response obtained from the first (A,C) and last (B,D) training sessions for low- (A,B) and high-frequency (C,D) tones. Data are shown for an individual subject wearing an earplug in the left ear, with grayscale indicating the number of trials corresponding to each stimulus-response combination. Because pure tones can be accurately localized only by using binaural spatial cues, which are susceptible to front-back errors, data from the front and rear hemifields have been collapsed. (E) Mean error magnitude plotted as a function of training session for the same subject shown in A–D. Data are plotted separately for low- (1 kHz, dark blue) and high-frequency (8 kHz, light blue) tones. Scores for each session (dots) were fitted using linear regression (lines) to calculate slope values, which quantified the change in error magnitude (Δ error) with training. Improved performance was associated with a reduction in error magnitude, producing negative values for Δ error. (F) Δ error for low- and high-frequency tones plotted for each subject (gray lines; n = 11). Mean values for Δ error across subjects (± bootstrapped 95% confidence intervals) are shown in blue. Although there are pronounced individual differences for the adaptation observed at the two tone frequencies, almost all values are <0, indicating that error magnitude declined over the training sessions. Dotted red line shows Δ error values that would have been observed if subjects had adapted as well as ferrets reared with a unilateral earplug (Keating et al. 2015; total Δ error reported for ferrets was divided by the number of training sessions used in the present study, n = 7; normalization used in previous work has been removed to facilitate comparison). (G) Bias in sound localization responses plotted as a function of training session for the subject in E. Positive values indicate that responses were biased toward the side of the open ear. Data are plotted separately for low- (1 kHz, dark blue) and high-frequency (8 kHz, light blue) tones. Scores for each session (dots) were fitted using linear regression (lines) to calculate slope values, which quantified the change in response bias (Δ bias) with training. Negative values of Δ bias indicate a shift in response bias toward the side of the plugged ear. (H) Δ bias for low- and high-frequency tones plotted for each subject (gray lines; n = 11). Mean values for Δ bias across subjects (± bootstrapped 95% confidence intervals) are shown in blue.

DOI: http://dx.doi.org/10.7554/eLife.12264.006

We next considered the relationship between these two adaptive processes. Although cue remapping and cue reweighting share a similar time-course (significant correlation between the amount of remapping and reweighting across sessions; Figure 3A, r = 0.81, p = 0.028), the overall amount of cue remapping exhibited by each subject was independent of the amount of cue reweighting (Figure 3B, r = 0.03, p = 0.90). This inter-subject variability was not attributable to differences in the effectiveness of earplugs used (Figure 3—figure supplement 1). Instead, we found that these two adaptive processes are affected by the frequency composition of the stimulus in different ways (Figure 3C, interaction between sound frequency and adaptation type, p = 0.005, permutation test). As expected, cue reweighting was greater for frequencies where spectral cues are most prominent in humans (≥4 kHz, Figure 3C, p<0.05, post-hoc test; Figure 3—figure supplement 2) (Blauert, 1997; Hofman and Van Opstal, 2002), whereas equal amounts of cue remapping were observed for tones above and below 4 kHz (Figure 3C, p>0.05, post-hoc test).

Figure 3. Relationship between different adaptive processes.

(A) Time-course of behavioral adaptation for adult humans, measured by the amount of cue reweighting (pink) and remapping (blue). Data are normalized (z scores) to facilitate comparison between different adaptation measures. All data have been averaged across subjects. (B) Comparison between the amount of behavioral cue reweighting and remapping for individual human subjects (black dots; n = 11). Variation in the degree of adaptation across subjects was not attributable to differences in earplug effectiveness (Figure 3—figure supplement 1). (C) Amount of cue remapping (blue) and cue reweighting (pink) observed at frequencies above (lighter shades) and below (darker shades) 4 kHz. Greater reweighting of spectral cues (more positive values) is observed >4 kHz, which is where spectral cues are most prominent in humans. Frequency-specific measures of cue reweighting were determined using reverse correlation (see Materials and methods, Figure 3—figure supplement 2). (D) Bilateral extracellular recordings were performed in the primary auditory cortex of ferrets reared with an earplug in one ear. These data were then compared with controls to obtain measures of cue reweighting and cue remapping (see Materials and methods). (E) Cue reweighting versus cue remapping, with each dot representing either a single neuron or small multi-unit cluster (n = 505). (F) Amount of cue remapping (blue) and cue reweighting (pink) observed for neurons tuned to frequencies above (lighter shades) or below (darker shades) 8 kHz. To facilitate comparison between measures of cue reweighting and remapping at different frequencies, these values were normalized separately so that they each had an overall mean of 0 and a variance of 1. Greater reweighting of spectral cues (more positive values) is observed > 8 kHz, which is where spectral cues are most prominent in ferrets. Relative to humans, spectral cues in ferrets are shifted toward higher frequencies because of differences in head and external ear morphology.

DOI: http://dx.doi.org/10.7554/eLife.12264.007

Figure 3—figure supplement 1. Variation across subjects in the degree of adaptation to acute asymmetric hearing loss is not related to differences in earplug effectiveness.

(A) Effect of earplug on mean hearing threshold (Δ threshold ± SD) is plotted as a function of frequency. Positive values indicate thresholds were higher when an earplug was worn. Data from Kumpik et al. (2010) are replotted (red) alongside those from the present study (black). For visualization purposes, symmetric displacements along the x-axis have been introduced to each dataset. (B) A significant correlation (p<0.05) was observed between Δ threshold and the initial drop in sound localization performance when an earplug was worn during the first training session (Δ performance; change in % correct relative to normal hearing conditions averaged across all stimulus types). In other words, initial sound localization deficits were more extensive when the earplug produced greater attenuation. Each dot represents an individual subject. (C,D) No obvious relationship was observed between Δ threshold and the degree of remapping (C) or reweighting (D) observed in individual subjects (dots).

Figure 3—figure supplement 2. Determining the behavioral importance of spectral features at different frequencies using reverse correlation.

(A) Although the mean spectrum of random-spectrum stimuli was close to zero when averaged across all trials (black), distinct spectral features emerged when averaging was restricted to trials on which subjects responded to a particular location (gray/color). This provides insight into which spectral features influence sound localization behavior. To reduce the noise in this estimate, a threshold was applied (mean ± 1.5 SD, dashed lines) and this process was repeated for each response location to construct a reverse correlation map. (B) Reverse correlation map showing the mean stimulus spectrum associated with each response location. Color is proportional to spectral amplitude, as illustrated in A. In order to quantify the behavioral importance of spectral features in different frequency bands, we calculated the ‘feature strength’ by averaging the unsigned magnitude of these spectral features across locations. (C) Cue reweighting plotted as a function of frequency. Cue reweighting was estimated by calculating training-induced changes in feature strength (i.e. feature strength values obtained in the first session were subtracted from those obtained in subsequent sessions; these differences in feature strength were then averaged). Positive cue reweighting values indicate an increase in feature strength, which reflects increased behavioral importance of spectral cues. Dotted line shows the upper 95% confidence interval for cue reweighting values that would be expected under the null hypothesis that cue reweighting did not occur. Values above this line (red symbols) indicate cue reweighting values that are significantly greater than chance.

This indicates that these adaptive processes are relatively independent of one another and suggests that they may depend on distinct neural substrates. This motivated us to revisit neurophysiological measures of cue reweighting and remapping in ferrets reared with an intermittent hearing loss in one ear (Figure 3D) (Keating et al., 2013; Keating et al., 2015). In common with our human behavioral data, we found no correlation between the degree of cue reweighting and remapping in cortical neurons recorded from ferrets raised with one ear plugged (Figure 3E, r = 0.08, p = 0.073). The type of plasticity observed also depended on the frequency preference of the neurons (Figure 3F, interaction between unit characteristic frequency and adaptation process, p = 0.012, permutation test). Greater cue reweighting was found in neurons tuned to frequencies where spectral cues are most prominent in ferrets (>8 kHz, Figure 3F, p<0.05, post-hoc test; frequency tuning bandwidth at 10 dB above threshold (µ ± SD) = 0.97 ± 0.51 octaves) (Carlile and King, 1994; Keating et al., 2013), whereas equal amounts of cue remapping occurred in neurons tuned to low and high frequencies (Figure 3F, p>0.05, post-hoc test). Thus, different neurons can exhibit cue remapping and reweighting in a relatively independent manner.

Discussion

We have shown that adult humans can adapt to asymmetric hearing loss by both learning to rely more on the unchanged spectral localization cues available and by remapping the altered binaural cues onto appropriate spatial locations. Recent work has shown that both adaptive processes occur in response to monaural deprivation during development (Keating et al., 2013; 2015). Our results suggest that this flexibility is likely to be a general feature of neural processing that also occurs in adulthood. Moreover, we show that these two forms of adaptation emerge together and that remapping of binaural spatial cues occurs at low as well as high frequencies, indicating plasticity in the processing of both ITDs and ILDs.

Although adaptive changes in sound localization have previously been observed when human subjects wear an earplug for prolonged periods of everyday life (Kumpik et al., 2010), we found here that much shorter periods of training are sufficient to induce adaptation to an episodic hearing loss. Our results also demonstrate that subjects adapt using a combination of cue remapping and cue reweighting. In contrast, previous work has shown that cue remapping did not occur when subjects wore an earplug most of the time for several days, and were therefore able to interact with their natural environments under these hearing conditions, but received relatively little training (Kumpik et al., 2010). This suggests that the nature of adaptation may depend on the behavioral or environmental context in which it occurs. Consequently, it should be possible to devise training protocols that would help subjects to adapt to altered auditory inputs in ways that do not ordinarily occur, or occur more slowly, during the course of everyday life.

When both adaptive processes occurred together, whether observed behaviorally in adult humans or neurophysiologically in monaurally-deprived ferrets, there was no obvious relationship between the amount of cue remapping and reweighting. This is at least in part because the spatial cues involved differ in their frequency dependence. Whereas equal amounts of binaural cue remapping occurred at different frequencies, spanning the range where both ITDs and ILDs are available, reweighting of spectral cues was restricted to those frequencies where these cues are most prominent. This suggests that the neural substrates for cue remapping and reweighting are at least partially distinct, with separate populations of cortical neurons displaying different types of spatial plasticity depending on their frequency preferences and sensitivity to different spatial cues.

It is not known, however, whether remapping and reweighting occur at different stages of the processing hierarchy. Although experience-dependent plasticity in the processing of binaural cues has been observed at multiple levels of the auditory pathway (Keating et al., 2015; Popescu and Polley, 2010; Seidl and Grothe, 2005), the changes induced by unilateral hearing loss during development are more extensive in the cortex than in the midbrain (Popescu and Polley, 2010). Much less is known about the neural processing of spectral localization cues and how this might be affected by experience (Carlile et al., 2005; Keating et al., 2013). However, reweighting of these cues is likely to reflect a change in the way they are integrated with other cues, which is thought to occur in the inferior colliculus (Chase and Young, 2005). This is consistent with the finding that adaptive changes in sound localization behavior in monaurally deprived adult ferrets rely on descending projections from the cortex to the inferior colliculus (Bajo et al., 2010). It is likely therefore that adaptive plasticity emerges via dynamic interactions between different stages of processing (Keating and King, 2015).

Although we found evidence for both cue reweighting and cue remapping in our human behavioral and ferret neurophysiological data, the nature of the episodic hearing loss in each case was very different. Whereas ferrets had one ear occluded for ~80% of the time over the course of several months of development (Keating et al., 2013; 2015), adult human subjects wore an earplug for only ~7 hr in total (1 hr every 1–3 days). It is not known whether comparable physiological changes to those observed in the ferrets are responsible for the rapid shifts in localization strategy in adult human listeners following these brief periods of acute hearing loss. Nevertheless, the close similarity in the results obtained in each species has important implications for the generality of our findings.

Our results emphasize the flexibility of neural systems when changes in sensory input affect ethologically important aspects of sensory processing, such as sound localization. They also reveal individual differences in the adaptive strategy adopted (Figure 3B). Further work is needed to understand the causes of these differences and to determine whether knowing how different individuals adapt to hearing loss could help tailor rehabilitation strategies. Our results also highlight the importance of training in promoting multiple adaptive processes, and this is likely to be relevant to other aspects of sensory processing (Feldman and Brecht, 2005; Keating and King, 2015; Sengpiel, 2014), particularly in situations where changes in sensory input affect some cues but not others.

Materials and methods

All procedures involving human listeners conformed to ethical standards approved by the Central University Research Ethics Committee (CUREC) at the University of Oxford. All work involving animals was approved by the local ethical review committee and performed under licenses granted by the UK Home Office under the Animals (Scientific Procedures) Act of 1986. Eleven audiologically normal human subjects (2 male, 9 female; aged 18–30) took part in the behavioral study. Sample size was determined on the basis of previous work, in which effect sizes of 2–4.6 were observed in human subjects who adapted to an earplug in one ear (Kumpik et al., 2010). To achieve a desired power of 0.8 with an alpha level of 0.001, 6–10 subjects were therefore required. All subjects provided written informed consent and were paid for their participation. Neurophysiological data were obtained from 13 ferrets (6 male, 7 female), seven of which were reared with an intermittent unilateral hearing loss, the details of which have been described previously (Keating et al., 2013). Briefly, earplugs were first introduced to the left ear of ferrets between postnatal day 25 and 29, shortly after the age of hearing onset in these animals. From then on, an earplug was worn ~80% of the time within any 15-day period, with normal hearing experienced otherwise. To achieve this, earplugs were monitored routinely and replaced or removed as necessary. All remaining ferrets were reared under normal hearing conditions. Expected effect sizes were less clear for neurophysiological changes, so sample sizes were chosen based on previous studies in our lab (Dahmen et al., 2010).

For both human and animal subjects, hearing loss was induced by inserting an earplug into one ear (EAR Classic), which attenuated (low-pass filter, attenuation of 20–40 dB in humans, and 15–45 dB in ferrets) and delayed (150 µs in humans and 110 µs in ferrets) acoustical input (Keating et al., 2013; Kumpik et al., 2010). For 10 of the 11 human subjects tested, we measured hearing thresholds at 1–8 kHz in octave steps and assessed the impact on those thresholds of wearing an earplug in the trained ear (Figure 3—figure supplement 1). This yielded very similar results to those reported previously in humans (Kumpik et al., 2010).

Human behavior

Apparatus

All human behavioral testing was performed in a double-walled sound attenuating chamber. Stimuli were presented to subjects using a circular array (1 m radius) of 12 loudspeakers (Audax TW025M0) placed at approximately head height, with loudspeakers positioned at 30° intervals (Figure 1—figure supplement 1A). This testing apparatus was similar to that used previously for both humans (Kumpik et al., 2010) and ferrets (Keating et al., 2015). Subjects sat at the mid-point of the loudspeaker array, with their head positioned on a chin-rest, and indicated the perceived location of each sound by using a mouse to click on a custom Matlab (Mathworks, Natick, MA) GUI that represented the locations of different loudspeakers. All stimuli were generated in Matlab, sent to a real-time processor (RP2; Tucker Davis Technologies), then amplified and routed to a particular loudspeaker using a power multiplexer (PM2R; Tucker Davis Technologies).

Stimuli

Stimuli consisted of either pure tones (varying in frequency from 1–8 kHz in one-octave steps) or broadband noise. All stimuli were 100 ms in duration (including 10 ms cosine ramps), generated with a sampling rate of 97.6 kHz, and presented at 49–77 dB SPL in increments of 7 dB. Different intensities and stimulus types were randomly interleaved across trials. Broadband noise stimuli (0.5–20 kHz) either had a flat spectral profile (flat-spectrum) or a spectral profile that varied randomly across trials (random-spectrum). Spectral randomization was produced by adding a vector to the logarithmic representation of the source spectrum (Figure 1—figure supplement 1B). This vector was created by low-pass filtering the spectra of Gaussian noise stimuli so that all energy was removed at frequencies > 3 cycles/octave (Keating et al., 2013). This removed abrupt spectral transitions to which humans are relatively insensitive (i.e. the width of any remaining peaks and notches exceeded 1/6th of an octave) (Hofman and Van Opstal, 2002). The RMS of this vector was then normalized to 10 dB.
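
A minimal sketch of the spectral-randomization step under the constraints stated above: a Gaussian random vector defined on a log-frequency (dB) axis, low-pass filtered so that no spectral modulations above 3 cycles/octave remain, and scaled to an RMS of 10 dB. The specific filtering implementation (zeroing Fourier components of the dB spectrum along the log-frequency axis) is one reasonable realization and not necessarily the exact routine used.

```matlab
% Random spectral vector defined on a log-frequency axis spanning 0.5-20 kHz
nBins   = 512;
octaves = log2(20e3 / 0.5e3);                    % ~5.3 octaves
binsPerOctave = nBins / octaves;                 % "sampling rate" along the spectrum

rawSpectrum = randn(1, nBins);                   % Gaussian noise in dB

% Remove spectral modulations above 3 cycles/octave by zeroing the
% corresponding Fourier components of the dB spectrum
S = fft(rawSpectrum);
modFreq = (0:nBins-1) / nBins * binsPerOctave;   % modulation frequency of each bin
modFreq = min(modFreq, binsPerOctave - modFreq); % fold to a two-sided representation
S(modFreq > 3) = 0;
smoothSpectrum = real(ifft(S));

% Scale so that the random spectral vector has an RMS of 10 dB
smoothSpectrum = smoothSpectrum / sqrt(mean(smoothSpectrum.^2)) * 10;

% This dB vector would then be added to the log-magnitude spectrum of the
% flat broadband noise before the waveform is synthesized.
```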

These random-spectrum stimuli allowed us to determine which spectral features are behaviorally important (see Figure 3—figure supplement 2), whilst making very few assumptions about the nature of these features in advance. Their unpredictable nature also prevented subjects from learning which spectral features were properties of the sound source. If subjects had learned that particular spectral features were invalid cues to sound location (i.e. they were not caused by the filtering effects of the head and ears and were instead properties of the sound source), they might have learned to ignore these features when judging sound location. This would have prevented us from measuring cue reweighting because our ability to do so requires subjects to misattribute spectral properties of the stimulus to the filtering effects of the head and ears.

Training

Subjects were initially familiarized with the task under normal hearing conditions, receiving feedback for all stimuli. Once an asymptotic level of performance was reached, they were trained to localize sounds whilst wearing an earplug in either the left (8 subjects) or right (3 subjects) ear. Subjects completed 7 training sessions over ~3 weeks, with no more than 2 days between each session. Each session comprised ~800 trials and lasted ~45 min, with short breaks provided every ~15 min. Whilst undergoing training with an earplug in place, feedback was only provided for flat-spectrum broadband noise stimuli.

On trials where feedback was provided, correct responses were followed by a brief period during which the GUI background flashed green, with the GUI background flashing red for incorrect responses. The overall % correct score achieved for all feedback trials was also displayed by the GUI. Where feedback was given, incorrect responses were followed by “correction trials” on which the same stimulus was presented. Successive errors made on correction trials were followed by “easy trials”, on which the stimulus was repeated continuously until subjects made a response. Recent work has shown that head-movements may enhance adaptation to changes in auditory spatial cues (Carlile et al., 2014). On easy trials, subjects were therefore allowed to move their heads freely until a response was made. Subjects were also not allowed to respond during the first 3 s of easy trials (i.e. any responses made during this period were ignored), which was visually indicated to subjects by the GUI background turning blue. In previous work, ferrets received broadly similar feedback when performing a sound localization task (i.e. incorrect trials were followed by correction trials and easy trials, with the latter allowing for the possibility of head-movements) (Keating et al., 2013; 2015). However, instead of a GUI, ferrets received a small water reward for physically approaching the correct speaker location whilst the absence of water reward indicated an incorrect response.
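
The feedback rules can be summarized as a simple state machine. The sketch below is only an illustration of the logic described above: the helper functions are hypothetical stand-ins for the real task code, and what follows an error on an easy trial is not specified in the text, so the sketch simply keeps the easy-trial state.

```matlab
% Hypothetical stand-ins for the real task code, so the sketch runs end-to-end
presentTrial    = @(mode) randi(12);                 % returns the target speaker (1-12)
getResponse     = @() randi(12);                     % returns the speaker clicked on the GUI
flashBackground = @(colour) fprintf('%s\n', colour); % feedback flash

nTrials = 200;                                       % illustrative session length
state = 'standard';
for t = 1:nTrials
    switch state
        case 'standard'
            target = presentTrial('new');            % new stimulus, random location
        case 'correction'
            target = presentTrial('repeat');         % same stimulus as the previous trial
        case 'easy'
            target = presentTrial('continuous');     % stimulus loops until a response is made
            pause(3);                                % responses ignored for the first 3 s;
                                                     % head movements permitted
    end
    response = getResponse();
    if response == target
        flashBackground('green');
        state = 'standard';
    else
        flashBackground('red');
        if strcmp(state, 'standard')
            state = 'correction';                    % repeat the same stimulus
        else
            state = 'easy';                          % successive errors -> easy trial
        end
    end
end
```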

Analyses

Sound localization performance for pure tones was calculated by first collapsing stimulus and response locations in the front and rear hemifields. This was done to provide a measure of performance that is unaffected by front-back errors, which primarily reflect a failure in spectral, rather than binaural, processing. To facilitate comparison with previous work (Keating et al., 2015), the average error magnitude (mean unsigned error) was then used to quantify the precision of these sound localization responses. The mean signed error was also calculated to provide a measure of sound localization bias (Kumpik et al., 2010). Although we measured cue remapping at a number of frequencies above 1.5 kHz (2, 4 and 8 kHz), we found comparable training-induced changes in both bias (Kruskal-Wallis test, p = 0.18) and error magnitude (Kruskal-Wallis test, p = 0.58). These data were therefore pooled to facilitate comparison between cue remapping for tones above and below 1.5 kHz, which should respectively reflect changes in ILD and ITD processing (Blauert, 1997).
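
A minimal sketch of these measures, assuming target and response locations are coded in degrees azimuth (0° straight ahead): rear locations are mirrored onto the front hemifield before the unsigned and signed errors are taken. The example azimuths are hypothetical, and the sign convention for bias (positive toward the open ear) would be applied afterwards.

```matlab
% Hypothetical target and response azimuths in degrees (-180 to 180, 0 = straight ahead)
target   = [30 150 -90 -120 60];
response = [60 120 -60 -150 30];

% Collapse front and rear hemifields: mirror rear locations (|azimuth| > 90 deg)
% onto the front hemifield so that front-back confusions are ignored
collapse = @(az) sign(az) .* min(abs(az), 180 - abs(az));
tC = collapse(target);
rC = collapse(response);

err            = rC - tC;             % signed error in degrees
errorMagnitude = mean(abs(err));      % mean unsigned error (precision)
bias           = mean(err);           % mean signed error (response bias)
```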

To assess the extent of cue reweighting at different frequencies, we used a method based on reverse correlation, which reveals the frequencies where spectral cues become more behaviorally important with training (Figure 3—figure supplement 2) (Keating et al., 2013). Note that the scale of the reverse correlation map (RCM) does not necessarily resemble that of the head-related transfer function (HRTF) because the RCM is affected by the amount of spectral randomization present in stimuli (greater randomization typically produces larger RCM features) as well as by the dependence on individual spectral features for localizing sounds in particular directions (i.e. if responses to a particular location can be induced by multiple spectral features or cues, then any given feature will not always be present when responses are made to that location; averaging over these data therefore reduces the scale of features detected by reverse correlation).
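
A sketch of the reverse-correlation step as described here, using hypothetical data: random spectral vectors are averaged over trials grouped by response location, features within ±1.5 SD of each averaged spectrum are zeroed, and 'feature strength' is the unsigned feature magnitude averaged across locations. The exact way the threshold was computed in the original analysis is an assumption.

```matlab
% Hypothetical inputs: one random spectral vector (dB) per trial, and the
% response location (1-12) chosen on that trial
nTrials = 2000; nFreqBins = 128; nLoc = 12;
spectra   = 10 * randn(nTrials, nFreqBins);
responses = randi(nLoc, nTrials, 1);

rcm = zeros(nLoc, nFreqBins);                         % reverse correlation map
for loc = 1:nLoc
    meanSpec = mean(spectra(responses == loc, :), 1); % average spectrum for this response
    lo = mean(meanSpec) - 1.5 * std(meanSpec);        % threshold at mean +/- 1.5 SD
    hi = mean(meanSpec) + 1.5 * std(meanSpec);
    meanSpec(meanSpec > lo & meanSpec < hi) = 0;      % keep only supra-threshold features
    rcm(loc, :) = meanSpec;
end

% Feature strength: unsigned magnitude of spectral features averaged across
% response locations, as a function of frequency
featureStrength = mean(abs(rcm), 1);
```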

This analysis showed that training increased the behavioral importance of spectral cues at frequencies ≥4 kHz, but not below (Figure 3—figure supplement 2). In other words, we found greater reweighting of spectral cues at higher frequencies. This is consistent with human head-related transfer functions, which show that spectral cues are most prominent at frequencies ≥4 kHz (Blauert, 1997; Hofman and Van Opstal, 2002). For frequencies above and below 4 kHz, we therefore calculated the average change in spectral feature strength separately, which provided a low- and high-frequency measure of cue reweighting. To facilitate comparison between different adaptive measures, we also separately calculated the average amount of cue remapping for tones above and below 4 kHz. Measures for different adaptive processes, which are expressed in different units, were then standardized by converting them to z scores.

Neurophysiology

All neurophysiological procedures have been previously described in detail (Keating et al., 2013; 2015). Bilateral extracellular recordings were made under medetomidine/ketamine anaesthesia from primary auditory cortex units (n = 505) in response to virtual acoustic space stimuli generated from acoustical measurements in each animal. These stimuli recreated the acoustical conditions associated with either normal hearing or an earplug in the left ear and were used to manipulate individual spatial cues independently of one another.

Cue weights were determined by calculating the mutual information between neuronal responses and individual spatial cues. A weighting index was then used to calculate the weight given by each neuron to spectral cues provided by the right ear (i.e. contralateral to the developmentally-occluded ear) relative to all other available cues. The mapping between binaural spatial cues and neurophysiological responses was measured by determining the best ILD for each unit, which represented the ILD corresponding to the peak of the binaural interaction function (see Keating et al., 2015 for more details). Best ILDs and weighting index values were converted to z scores using the corresponding means and standard deviations of data obtained from controls. Data were normalized separately for each hemisphere and different frequency bands. Measures of cue reweighting and remapping for each unit therefore respectively reflected changes in weighting index values and best ILDs relative to those observed in controls. These values were then normalized again so that measures of reweighting and remapping had the same overall mean (0) and variance (1) prior to comparing the amount of each form of adaptation in different frequency bands.
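
A heavily simplified sketch of the kind of computation involved: mutual information between a unit's spike counts and each cue estimated from a joint histogram, followed by a weighting index that expresses the spectral-cue information relative to the other cues. The data, binning, and the exact form of the index are assumptions; the published procedure (including bias correction and the best-ILD analysis) is given in Keating et al. (2013; 2015).

```matlab
function demoCueWeighting
% Illustrative mutual-information-based cue weights for one unit (all data hypothetical)
nTrials = 500;
spikes      = randi(10, nTrials, 1);       % spike counts per trial
cueSpectral = randn(nTrials, 1);           % spectral cue provided by the right ear
cueILD      = randn(nTrials, 1);           % interaural level difference
cueITD      = randn(nTrials, 1);           % interaural time difference

wSpec  = mutualInfo(spikes, cueSpectral, 8);
wOther = mutualInfo(spikes, cueILD, 8) + mutualInfo(spikes, cueITD, 8);

% One plausible form of weighting index: spectral-cue information relative to
% all available cues (assumed, not necessarily the published definition)
weightingIndex = (wSpec - wOther) / (wSpec + wOther);
fprintf('Weighting index: %.2f\n', weightingIndex);
end

function I = mutualInfo(resp, cue, nBins)
% Plug-in mutual information (bits) from a joint histogram of discretized values
r = discretize(resp, linspace(min(resp), max(resp), nBins + 1));
c = discretize(cue,  linspace(min(cue),  max(cue),  nBins + 1));
joint = accumarray([r c], 1, [nBins nBins]) / numel(resp);
pr = sum(joint, 2);                        % marginal over response bins
pc = sum(joint, 1);                        % marginal over cue bins
expected = pr * pc;                        % joint distribution under independence
nz = joint > 0;
I = sum(joint(nz) .* log2(joint(nz) ./ expected(nz)));
end
```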

Frequency tuning was calculated using 50-ms tones (0.5–32 kHz in 0.25 octave steps, varying between 30 – 80 dB SPL in increments of 10 dB). Characteristic frequency (CF) and bandwidth were calculated in a manner similar to that described previously (Bartlett et al., 2011; Bizley et al., 2005). Briefly, firing rates were averaged across stimulus repetitions (n = 30) of each combination of frequency and level. This matrix was then smoothed with a boxcar function 0.75 octaves wide, following which a threshold was applied that was equal to the spontaneous rate plus 20% of the maximum firing rate. CF was defined as the frequency that elicited the greatest response at threshold. Bandwidth was measured at 10 dB above threshold by first calculating the area underneath the tuning curve. We then identified a rectangle that had the same area but constrained its height to be equal to the maximum firing rate. The width of this rectangle then provided a measure of bandwidth that approximates the width at half-maximum for a Gaussian tuning curve (Bartlett et al., 2011).
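
A sketch of the CF and bandwidth computation applied to a synthetic frequency-response area (rows = levels, columns = frequencies in 0.25-octave steps). The 0.75-octave boxcar corresponds to three frequency bins at this resolution; whether the tuning curve is threshold-subtracted before its area is taken is an assumption.

```matlab
% Synthetic frequency-response area for one unit (illustration only)
freqs  = 2.^(log2(500):0.25:log2(32000));        % 0.5-32 kHz in 0.25-octave steps
levels = 30:10:80;                               % dB SPL
spontRate = 2;                                   % spikes/s
[F, L] = meshgrid(log2(freqs/8000), levels);     % unit tuned near 8 kHz
rates = spontRate + 40 .* exp(-F.^2) .* (L - 20)/60;

% Smooth along frequency with a boxcar 0.75 octaves (three bins) wide
smoothed = conv2(rates, ones(1, 3)/3, 'same');

% Threshold: spontaneous rate plus 20% of the maximum firing rate
thr = spontRate + 0.2 * max(smoothed(:));

% Threshold level = lowest level with a supra-threshold response;
% CF = frequency giving the largest response at that level
threshIdx = find(any(smoothed > thr, 2), 1, 'first');
[~, cfIdx] = max(smoothed(threshIdx, :));
CF = freqs(cfIdx);

% Bandwidth at 10 dB above threshold: area under the tuning curve divided by
% its maximum (width of an equal-area rectangle), expressed in octaves
bwIdx = min(threshIdx + 1, numel(levels));       % levels are spaced 10 dB apart
curve = max(smoothed(bwIdx, :) - thr, 0);        % supra-threshold tuning curve (assumed)
bandwidthOct = 0.25 * sum(curve) / max(curve);

fprintf('CF = %.1f kHz, bandwidth = %.2f octaves\n', CF/1000, bandwidthOct);
```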

Statistical analyses

Confidence intervals at the 95% level were estimated empirically for different measures using 10,000 bootstrapped samples, each of which was obtained by re-sampling with replacement from the original data. These samples were then used to construct bootstrapped distributions of the desired measure, from which confidence intervals were derived. A bootstrap procedure was also used to assess the significance of group differences. First, the difference between two groups was measured using an appropriate statistic (e.g. difference in means, t-statistic, or rank-sum statistic). The data from different groups were then pooled and re-sampled with replacement to produce two new samples, and the difference between these samples was measured using the same statistic as before. This procedure was subsequently repeated 10,000 times, which provided an empirical estimate of the distribution that would be expected for the statistic of interest under the null hypothesis. This bootstrapped distribution was then used to derive a P value for the difference observed in the original sample. In all cases, two-sided tests of significance were used, with Bonferroni correction used to correct for multiple comparisons. Cohen’s d was also calculated to provide a measure of the effect size for different types of adaptation in adult humans.
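
A sketch of the two bootstrap procedures described above, using hypothetical per-subject values and the difference in means as the test statistic; the actual analyses apply the same resampling logic to the adaptation measures defined earlier, with Bonferroni correction where needed.

```matlab
rng(1);                                   % reproducibility of this illustration only
x = randn(11, 1) + 0.5;                   % hypothetical adaptation measure, group 1
y = randn(11, 1);                         % hypothetical adaptation measure, group 2
nBoot = 10000;

% 95% bootstrap confidence interval for the mean of x (percentile method)
bootMeans = zeros(nBoot, 1);
for b = 1:nBoot
    bootMeans(b) = mean(x(randi(numel(x), numel(x), 1)));   % resample with replacement
end
sortedMeans = sort(bootMeans);
ci = sortedMeans(round([0.025 0.975] * nBoot));

% Bootstrap test of a group difference: pool the data, resample two groups,
% and recompute the statistic to build its null distribution
obsDiff  = mean(x) - mean(y);
pooled   = [x; y];
nullDiff = zeros(nBoot, 1);
for b = 1:nBoot
    xStar = pooled(randi(numel(pooled), numel(x), 1));
    yStar = pooled(randi(numel(pooled), numel(y), 1));
    nullDiff(b) = mean(xStar) - mean(yStar);
end
pValue = mean(abs(nullDiff) >= abs(obsDiff));   % two-sided p value
```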

The significance of factor interactions was also assessed using permutation tests (Manly, 2007). This involved randomly permuting observations across different factors and calculating an F statistic for each factor and interaction (i.e. the proportion of variance explained relative to the proportion of unexplained variance). This procedure was repeated many times in order to assess the percentage of repetitions that produce F values greater than those obtained for the non-permuted data. This percentage then provided an estimate of the P values associated with each effect under the null hypothesis. Precise details of the permutation procedure used have been described elsewhere (Manly, 2007). Additional comparisons between conditions were made using appropriate post-hoc tests corrected for multiple comparisons. Although bootstrap and permutation tests were used because they make fewer distributional assumptions about the data, conventional parametric and non-parametric statistical tests were also performed and produced very similar results (not reported).
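
A sketch of a permutation test for a two-factor interaction in a balanced design (for example, adaptation type × frequency band): the interaction F statistic is computed for the observed data and for many random reassignments of the observations across cells, and the p value is the fraction of permuted F values that exceed the observed one. The cell structure and data are hypothetical, and this simple scheme stands in for the exact procedure of Manly (2007).

```matlab
function pValue = permutationInteractionDemo
% Hypothetical balanced 2 x 2 design with 11 observations per cell
rng(2);
y = randn(2, 2, 11);                    % factor A x factor B x replicates
Fobs = interactionF(y);

nPerm = 10000;
Fperm = zeros(nPerm, 1);
for k = 1:nPerm
    shuffled = reshape(y(randperm(numel(y))), size(y));  % permute observations across cells
    Fperm(k) = interactionF(shuffled);
end
pValue = mean(Fperm >= Fobs);           % fraction of permutations with a larger F
end

function F = interactionF(y)
% Interaction F statistic for a balanced two-way design
[a, b, n] = size(y);
grand = mean(y(:));
cellM = mean(y, 3);                               % cell means
rowM  = repmat(mean(cellM, 2), 1, b);             % factor A means
colM  = repmat(mean(cellM, 1), a, 1);             % factor B means
ssAB  = n * sum(sum((cellM - rowM - colM + grand).^2));
resid = y - repmat(cellM, 1, 1, n);
ssE   = sum(resid(:).^2);
F     = (ssAB / ((a - 1) * (b - 1))) / (ssE / (a * b * (n - 1)));
end
```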

Acknowledgements

This work was supported by the Wellcome Trust through a Principal Research Fellowship (WT076508AIA, WT108369/Z/15/Z) to AJK.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Funding Information

This paper was supported by the following grants:

  • Wellcome Trust WT076508AIA to Andrew J King.

  • Wellcome Trust WT108369/Z/15/Z to Andrew J King.

Additional information

Competing interests

AJK: Reviewing editor, eLife.

The other authors declare that no competing interests exist.

Author contributions

PK, Conception and design, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article.

OR-P, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article.

JCD, Acquisition of data, Drafting or revising the article.

OB, Acquisition of data, Drafting or revising the article.

AJK, Conception and design, Drafting or revising the article.

Ethics

Human subjects: All procedures conformed to ethical standards approved by the Central University Research Ethics Committee (CUREC) at the University of Oxford. All human subjects provided informed written consent.

Animal experimentation: All procedures conformed to ethical standards approved by the Committee on Animal Care and Ethical Review at the University of Oxford. All work involving animals was performed under licenses granted by the UK Home Office under the Animals (Scientific Procedures) Act of 1986.

References

  1. Bajo VM, Nodal FR, Moore DR, King AJ. The descending corticocollicular pathway mediates learning-induced auditory plasticity. Nature Neuroscience. 2010;13:253–260. doi: 10.1038/nn.2466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bartlett EL, Sadagopan S, Wang X. Fine frequency tuning in monkey auditory cortex and thalamus. Journal of Neurophysiology. 2011;106:849–859. doi: 10.1152/jn.00559.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bizley JK, Nodal FR, Nelken I, King AJ. Functional organization of ferret auditory cortex. Cerebral Cortex. 2005;15:1637–1653. doi: 10.1093/cercor/bhi042. [DOI] [PubMed] [Google Scholar]
  4. Blauert J. Spatial Hearing: The Psychophysics of Human Sound Localization. London: MIT Press; 1997. [Google Scholar]
  5. Carlile S, King AJ. Monaural and binaural spectrum level cues in the ferret: acoustics and the neural representation of auditory space. Journal of Neurophysiology. 1994;71:785–801. doi: 10.1152/jn.1994.71.2.785. [DOI] [PubMed] [Google Scholar]
  6. Carlile S, Martin R, McAnally K. Spectral information in sound localization. International Review of Neurobiology. 2005;70:399–434. doi: 10.1016/S0074-7742(05)70012-X. [DOI] [PubMed] [Google Scholar]
  7. Carlile S. The plastic ear and perceptual relearning in auditory spatial perception. Frontiers in Neuroscience. 2014;8:237. doi: 10.3389/fnins.2014.00237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Carlile S, Balachandar K, Kelly H. Accommodating to new ears: the effects of sensory and sensory-motor feedback. The Journal of the Acoustical Society of America. 2014;135:2002–2011. doi: 10.1121/1.4868369. [DOI] [PubMed] [Google Scholar]
  9. Chase SM, Young ED. Limited segregation of different types of sound localization information among classes of units in the inferior colliculus. Journal of Neuroscience. 2005;25:7575–7585. doi: 10.1523/JNEUROSCI.0915-05.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Dahmen JC, Keating P, Nodal FR, Schulz AL, King AJ. Adaptation to stimulus statistics in the perception and neural representation of auditory space. Neuron. 2010;66:937–948. doi: 10.1016/j.neuron.2010.05.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Feldman DE, Brecht M. Map plasticity in somatosensory cortex. Science. 2005;310:810–815. doi: 10.1126/science.1115807. [DOI] [PubMed] [Google Scholar]
  12. Florentine M. Relation between lateralization and loudness in asymmetrical hearing losses. Journal of the American Audiology Society. 1976;1:243–251. [PubMed] [Google Scholar]
  13. Gold JI, Knudsen EI. Abnormal auditory experience induces frequency-specific adjustments in unit tuning for binaural localization cues in the optic tectum of juvenile owls. Journal of Neuroscience. 2000;20:862–877. doi: 10.1523/JNEUROSCI.20-02-00862.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Hofman PM, Van Opstal AJ. Bayesian reconstruction of sound localization cues from responses to random spectra. Biological Cybernetics. 2002;86:305–316. doi: 10.1007/s00422-001-0294-x. [DOI] [PubMed] [Google Scholar]
  15. Kacelnik O, Nodal FR, Parsons CH, King AJ. Training-induced plasticity of auditory localization in adult mammals. PLoS Biology. 2006;4:627–638. doi: 10.1371/journal.pbio.0040071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Keating P, Dahmen JC, King AJ. Context-specific reweighting of auditory spatial cues following altered experience during development. Current Biology. 2013;23:1291–1299. doi: 10.1016/j.cub.2013.05.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Keating P, King AJ. Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Frontiers in Systems Neuroscience. 2013;7:e12264. doi: 10.3389/fnsys.2013.00123. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Keating P, Dahmen JC, King AJ. Complementary adaptive processes contribute to the developmental plasticity of spatial hearing. Nature Neuroscience. 2015;18:185–187. doi: 10.1038/nn.3914. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Keating P, King AJ. Sound localization in a changing world. Current Opinion in Neurobiology. 2015;35:35–43. doi: 10.1016/j.conb.2015.06.005. [DOI] [PubMed] [Google Scholar]
  20. Knudsen EI, Esterly SD, Knudsen PF. Monaural occlusion alters sound localization during a sensitive period in the barn owl. Journal of Neuroscience. 1984;4:1001–1011. doi: 10.1523/JNEUROSCI.04-04-01001.1984. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Kumpik DP, Kacelnik O, King AJ. Adaptive reweighting of auditory localization cues in response to chronic unilateral earplugging in humans. Journal of Neuroscience. 2010;30:4883–4894. doi: 10.1523/JNEUROSCI.5488-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Lupo JE, Koka K, Thornton JL, Tollin DJ. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear. Hearing Research. 2011;272:30–41. doi: 10.1016/j.heares.2010.11.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Manly BFJ. Randomization, Bootstrap, and Monte Carlo Methods in Biology. 3rd ed. London: Chapman & Hall; 2007. [Google Scholar]
  24. Margolis DJ, Lütcke H, Helmchen F. Microcircuit dynamics of map plasticity in barrel cortex. Current Opinion in Neurobiology. 2014;24:76–81. doi: 10.1016/j.conb.2013.08.019. [DOI] [PubMed] [Google Scholar]
  25. McPartland JL, Culling JF, Moore DR. Changes in lateralization and loudness judgements during one week of unilateral ear plugging. Hearing Research. 1997;113:165–172. doi: 10.1016/S0378-5955(97)00142-1. [DOI] [PubMed] [Google Scholar]
  26. Mendonça C. A review on auditory space adaptations to altered head-related cues. Frontiers in Neuroscience. 2014;8:219. doi: 10.3389/fnins.2014.00219. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Młynarski W, Jost J. Statistics of natural binaural sounds. PLoS ONE. 2014;9:e108968. doi: 10.1371/journal.pone.0108968. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Popescu MV, Polley DB. Monaural deprivation disrupts development of binaural selectivity in auditory midbrain and cortex. Neuron. 2010;65:718–731. doi: 10.1016/j.neuron.2010.02.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Qian T, Jaeger TF, Aslin RN. Learning to represent a multi-context environment: more than detecting changes. Frontiers in Psychology. 2012;3:228. doi: 10.3389/fpsyg.2012.00228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Schreiner CE, Polley DB. Auditory map plasticity: diversity in causes and consequences. Current Opinion in Neurobiology. 2014;24:143–156. doi: 10.1016/j.conb.2013.11.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Seidl AH, Grothe B. Development of sound localization mechanisms in the Mongolian gerbil is shaped by early acoustic experience. Journal of Neurophysiology. 2005;94:1028–1036. doi: 10.1152/jn.01143.2004. [DOI] [PubMed] [Google Scholar]
  32. Seilheimer RL, Rosenberg A, Angelaki DE. Models and processes of multisensory cue combination. Current Opinion in Neurobiology. 2014;25:38–46. doi: 10.1016/j.conb.2013.11.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Sengpiel F. Plasticity of the visual cortex and treatment of amblyopia. Current Biology. 2014;24:R936–R940. doi: 10.1016/j.cub.2014.05.063. [DOI] [PubMed] [Google Scholar]
  34. Seydell A, Knill DC, Trommershäuser J. Adapting internal statistical models for interpreting visual cues to depth. Journal of Vision. 2010;10:1–27. doi: 10.1167/10.4.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Shamma SA. A convergent tale of two species. Nature Neuroscience. 2015;18:168–169. doi: 10.1038/nn.3928. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Shinn-Cunningham BG, Durlach NI, Held RM. Adapting to supernormal auditory localization cues. I. Bias and resolution. The Journal of the Acoustical Society of America. 1998;103:3656–3666. doi: 10.1121/1.423088. [DOI] [PubMed] [Google Scholar]
  37. Sur M, Nagakura I, Chen N, Sugihara H. Mechanisms of plasticity in the developing and adult visual cortex. Progress in Brain Research. 2013;207:243–254. doi: 10.1016/B978-0-444-63327-9.00002-3. [DOI] [PubMed] [Google Scholar]
  38. Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. NeuroImage. 2015;118:26–38. doi: 10.1016/j.neuroimage.2015.06.006. [DOI] [PubMed] [Google Scholar]
eLife. 2016 Mar 23;5:e12264. doi: 10.7554/eLife.12264.010

Decision letter

Editor: Thomas D Mrsic-Flogel

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your work entitled "Behavioral training promotes multiple adaptive processes following hearing loss" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by Thomas Mrsic-Flogel as the Reviewing Editor and Gary Westbrook as the Senior Editor.

The following individuals involved in review of your submission have agreed to reveal their identity: Simon Carlile (peer reviewer).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

Your manuscript reports a largely human psychophysical study examining accommodation to transient monaural ear plugging and localisation training. The principal finding is that (i) cue reweighting toward the monaural spectral cues of the intact, open ear and (ii) cue remapping of the distorted ILD and ITD binaural cues to new spatial locations both occur in this process of accommodation. Some previous electrophysiological data from ferret cortex are also included that concur with the behavioural findings in humans. The reviewers agree that this is a well-crafted study that represents a logical step in a sequence of nice experiments from the King group at Oxford over the last few years. The experiments are well executed and the analysis is sound. The reviewers found the results not overly surprising, but they thought they were nonetheless important and worthy of wide dissemination.

Essential revisions:

The reviewers raised several important issues that we would like you to address in your revision.

1) The reviewers thought that the adult human psychophysics data were the most novel part of the manuscript. For this reason, they deserve to be showcased more prominently, including expanding the analyses. For example, what is the advantage of inferring the degree of cue reweighting by presenting stimuli with random spectrum as opposed to stimuli where the spectrum is consistently adjusted to deviate from the flat spectrum by a known quantity? With random spectra, stimuli will occasionally provide a rich basis for determining azimuth location and other times not at all. The reviewers ask you to mine the dataset for performance differences in the random-spectrum condition for subsets of trials that offered rich vs. impoverished localization cues.

Alternatively, in order to make a much stronger statement about the contribution of cue reweighting versus remapping, experiments could be designed to bias the subjects towards one strategy or the other by training them with sounds that explicitly devalued monaural spectral cues or dichotic cues. If these data exist, please include them in the manuscript.

For comparison, it might be useful to compare the acute plugging human data to previous data on humans with chronic plugging. Please add if these data are available (additional experiments are not required).

2) There are significant and important differences in behavioural methods when comparing the human and ferret data. On the one hand, the manuscript draws attention to the fact that human accommodation training is based on a small number of relatively brief training episodes (in contrast to the chronic occlusion of previous studies). The demonstration of effective accommodation to this episodic exposure to the distorted cues is itself the most interesting finding. On the other hand, the ferret electrophysiological data were obtained from animals reared with a chronic earplug since the onset of their hearing. Presumably, the training that these animals received also involved some form of feedback; however, those methods are not mentioned in the manuscript. More critical, however, are the differences in the chronic vs. transient nature of the exposure to the distorted cues. While it may be that both approaches are sufficient to induce the sort of cue reweighting and remapping that is argued for, these differences and resulting potential confounds do need to be grappled with more directly in the Discussion. An extension of the discussion about the differences between the ferret and human data, and how this impacts the main conclusions, is a crucial point that all reviewers felt needs far more attention.

eLife. 2016 Mar 23;5:e12264. doi: 10.7554/eLife.12264.011

Author response


Essential revisions:

The reviewers raised several important issues that we would like you to address in your revision. 1) The reviewers thought that the adult human psychophysics data were the most novel part of the manuscript. For this reason, they deserve to be showcased more prominently, including expanding the analyses. For example, what is the advantage of inferring the degree of cue reweighting by presenting stimuli with random spectrum as opposed to stimuli where the spectrum is consistently adjusted to deviate from the flat spectrum by a known quantity?

We agree that it would be helpful to expand upon the advantages of random spectrum stimuli and describe their synthesis more clearly.

Firstly, our approach rests upon the fact that subjects find it difficult to determine whether monaural spectral features are properties of the sound source itself or whether they are caused by the acoustical filtering of the head and ears. It is this ambiguity that causes subjects to misinterpret spectral properties of the sound source as cues to sound location. And it is this misinterpretation that causes subjects to make systematic errors in sound localization. The extent and nature of these errors then allows us to determine how much weight subjects give to monaural spectral cues.

However, if we had used stimuli where the spectrum was consistently adjusted to deviate from a flat spectrum by a fixed quantity, subjects might have learned that this fixed deviation was a property of the sound source itself, rather than a result of the filtering effects of the head and ears. Consequently, subjects might have learned that a constant deviation from a flat spectrum was not a valid localization cue and ignored it when judging sound location. This would have prevented us from determining the weight given to spectral cues. In addition, adding a fixed spectral deviation to stimuli would have required us to make assumptions about the behavioural relevance of specific spectral features (i.e. we would need to know in advance which spectral deviations are likely to influence behaviour).

By varying the spectrum randomly across trials, and recording the spectrum used on each trial, we were able to use reverse correlation to empirically determine which spectral features are most relevant for sound localization. Spectral randomization also made it very difficult for subjects to learn which spectral features were properties of the sound source.

These advantages of spectral randomization are now stated clearly in the Methods:

“These random-spectrum stimuli allowed us to determine which spectral features are behaviorally important (see Figure 3—figure supplement 2), whilst making very few assumptions about the nature of these features in advance. […] This would have prevented us from measuring cue reweighting because our ability to do so requires subjects to misattribute spectral properties of the stimulus to the filtering effects of the head and ears.”
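
For readers who prefer to see the reverse-correlation logic in computational form, the sketch below illustrates one way such an analysis could be implemented. This is not the analysis code used in the study; the function name, the use of signed localization errors, and the bootstrap are illustrative assumptions.

```python
import numpy as np

def reverse_correlate_spectra(spectra_db, signed_errors, n_boot=1000, rng=None):
    """Illustrative reverse correlation for random-spectrum localization data.

    spectra_db    : (n_trials, n_bands) per-band amplitude perturbations in dB,
                    drawn randomly around a flat spectrum on each trial.
    signed_errors : (n_trials,) signed localization error (e.g. in degrees,
                    positive = response biased toward one side).

    Returns the per-band difference between the mean spectrum on trials with
    positive errors and trials with negative errors (a spectral 'kernel'
    indicating which bands drive localization bias), plus a bootstrap SE.
    """
    rng = np.random.default_rng() if rng is None else rng
    pos = spectra_db[signed_errors > 0]
    neg = spectra_db[signed_errors < 0]
    kernel = pos.mean(axis=0) - neg.mean(axis=0)

    # Bootstrap over trials to gauge the reliability of each band's weight.
    boots = np.empty((n_boot, spectra_db.shape[1]))
    for b in range(n_boot):
        p = pos[rng.integers(0, len(pos), len(pos))]
        n = neg[rng.integers(0, len(neg), len(neg))]
        boots[b] = p.mean(axis=0) - n.mean(axis=0)
    return kernel, boots.std(axis=0)
```

The key idea is that averaging the random spectra conditioned on the sign of the localization error isolates the frequency bands whose trial-to-trial perturbations systematically biased responses, without specifying in advance which spectral features should matter.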

With random spectra, stimuli will occasionally provide a rich basis for determining azimuth location and other times not at all. The reviewers ask you to mine the dataset for performance differences in the random-spectrum condition for subsets of trials that offered rich vs. impoverished localization cues.

As noted by the reviewers, the usefulness of spatial cues can vary across trials when random spectrum stimuli are used. We attempted to minimize this trial-to-trial variability by constraining the overall amount of spectral randomization on each trial (SD = 10 dB). Nevertheless, within individual frequency bands, the amount of energy (sound level) and spectral randomization (SD of the amplitude values within a given frequency band) fluctuated across trials. This could have made some stimuli more difficult to localize than others by impoverishing certain spatial cues to varying degrees.

To test this, we separated our dataset into trials on which subjects responded correctly and trials on which they made errors. We then asked whether there were differences between the stimuli presented on correct versus incorrect trials. We found that the spectra of correctly localized stimuli were flatter than average (i.e. had lower spectral randomization), but only at high frequencies. We found no relationship between trial accuracy and variations of sound level within individual frequency bands. Together, these results confirm our claim that spectral randomization impairs sound localization primarily by limiting the usefulness of high-frequency spectral cues. This provides additional validation for our experimental approach and strengthens the case for using random-spectrum stimuli. We have therefore added this analysis to Figure 1—figure supplement 1.
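
A minimal sketch of this trial-splitting analysis is given below, assuming the per-trial spectra are stored as a matrix alongside a boolean accuracy vector; the band boundaries, variable names, and choice of a non-parametric test are assumptions for illustration rather than a description of the analysis reported in Figure 1—figure supplement 1.

```python
import numpy as np
from scipy import stats

def compare_correct_vs_incorrect(spectra_db, correct, band_slices):
    """Compare within-band spectral randomization and level across trial outcomes.

    spectra_db  : (n_trials, n_bins) per-bin amplitudes in dB.
    correct     : (n_trials,) boolean array, True where the sound was localized correctly.
    band_slices : dict mapping band name -> slice of frequency bins.
    """
    results = {}
    for name, sl in band_slices.items():
        band = spectra_db[:, sl]
        randomization = band.std(axis=1)   # within-band spectral SD on each trial
        level = band.mean(axis=1)          # within-band sound level on each trial
        results[name] = {
            "randomization": stats.mannwhitneyu(randomization[correct],
                                                randomization[~correct]),
            "level": stats.mannwhitneyu(level[correct], level[~correct]),
        }
    return results

# Hypothetical frequency bands; in practice these would follow the stimulus synthesis.
bands = {"low": slice(0, 10), "mid": slice(10, 20), "high": slice(20, 30)}
```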

Alternatively, in order to make a much stronger statement about the contribution of cue reweighting versus remapping, experiments could be designed to bias the subjects towards one strategy or the other by training them with sounds that explicitly devalued monaural spectral cues or dichotic cues. If these data exist, please include them in the manuscript.

We agree that this is an excellent idea and represents the next logical step in our attempts to understand these adaptive processes. In the present manuscript, we elected to focus on what happens when monaural spectral and dichotic cues are both useful, as is often the case in natural environments. In particular, we wanted to know what subjects would do spontaneously, without us providing feedback that biased them toward one adaptive strategy or the other. However, it would be interesting to determine what happens when specific cues are explicitly devalued during training. We do not currently have data of this type, and we believe it would require an entirely separate study to do justice to this approach. For example, there are a variety of different factors (e.g. frequency content) which could bias subjects towards using one strategy or the other, but it is unclear which of these is important. It is also possible that the importance of these different factors varies as a function of the task performed, or the environment in which it occurs. Addressing these issues properly represents a key challenge for future work but is beyond the scope of the current manuscript.

For comparison, it might be useful to compare the acute plugging human data to previous data on humans with chronic plugging. Please add if these data are available (additional experiments are not required).

We agree that it would be helpful to make direct comparisons with previous data, including the effects of chronic plugging in humans. We have therefore amended Figures 1 and 2 to facilitate comparison with previous work in humans (Kumpik et al., 2010) and ferrets (Keating et al., 2015).

2) There are significant and important differences in behavioural methods when comparing the human and ferret data. On the one hand, the manuscript draws attention to the fact that human accommodation training is based on a small number of relatively brief training episodes (in contrast to the chronic occlusion of previous studies). The demonstration of effective accommodation to this episodic exposure to the distorted cues is itself the most interesting finding. On the other hand, the ferret electrophysiological data were obtained from animals reared with a chronic earplug since the onset of their hearing. Presumably, the training that these animals received also involved some form of feedback; however, those methods are not mentioned in the manuscript. More critical, however, are the differences in the chronic vs. transient nature of the exposure to the distorted cues. While it may be that both approaches are sufficient to induce the sort of cue reweighting and remapping that is argued for, these differences and resulting potential confounds do need to be grappled with more directly in the Discussion. An extension of the discussion about the differences between the ferret and human data, and how this impacts the main conclusions, is a crucial point that all reviewers felt needs far more attention.

We agree that there are considerable differences between the ferret neurophysiology experiments and the human behavioural experiments, and that these differences should be discussed at greater length in the manuscript. Our intention was not to imply that the kinds of neurophysiological changes observed in ferrets necessarily underpin the behavioural changes we see in humans. Nevertheless, despite the methodological differences between our ferret neurophysiology and human behaviour, a surprising aspect of our data is the broad similarity of the findings from these experiments. All the same, whilst our neurophysiological data do provide a plausible neural substrate for our behavioural results, we primarily included these data because they have important implications for the generality of our findings.

The methodological differences between the human and ferret studies are now spelled out more clearly in the Discussion:

“Although we found evidence for both cue reweighting and cue remapping in our human behavioral and ferret neurophysiological data, the nature of the episodic hearing loss in each case was very different. […] Nevertheless, the close similarity in the results obtained in each species has important implications for the generality of our findings.”

The feedback given to ferrets was conceptually very similar to that given to humans. In particular, incorrect trials were followed by ‘correction trials’ on which the same stimulus was presented. Incorrect responses to correction trials were also followed by ‘easy trials’, on which the stimulus was repeatedly presented until the ferret made its response. In addition, whilst ferrets were reared with a chronic earplug in one ear, they also experienced brief intermittent periods of normal hearing approximately 20% of the time. Both ferrets and humans therefore experienced episodic hearing loss, with the key difference being the temporal parameters that were used in each case.

These details are now included in the Methods:

“In previous work, ferrets received broadly similar feedback [to humans] when performing a sound localization task (i.e. incorrect trials were followed by correction trials and easy trials, with the latter allowing for the possibility of head-movements) (Keating et al., 2013; Keating et al., 2015). However, instead of a GUI, ferrets received a small water reward for physically approaching the correct speaker location whilst the absence of water reward indicated an incorrect response.”
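
For concreteness, the correction-trial and easy-trial sequencing described above can be summarized in a short sketch; the function and trial-type labels are hypothetical, as the training software itself is not part of the manuscript.

```python
def next_trial_type(prev_type, prev_correct):
    """Illustrative sequencing rule for the feedback scheme described above.

    An incorrect standard trial is followed by a correction trial (same stimulus);
    an incorrect correction trial is followed by an easy trial, on which the
    stimulus repeats until a response is made.
    """
    if prev_correct:
        return "standard"
    if prev_type == "standard":
        return "correction"
    return "easy"
```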

“Neurophysiological data were obtained from 13 ferrets (6 male, 7 female), seven of which were reared with an intermittent unilateral hearing loss, the details of which have been described previously (Keating et al., 2013). Briefly, earplugs were first introduced to the left ear of ferrets between postnatal day 25 and 29, shortly after the age of hearing onset in these animals. From then on, an earplug was worn ~80% of the time within any 15-day period, with normal hearing experienced otherwise. To achieve this, earplugs were monitored routinely and replaced or removed as necessary.”

