Published in final edited form as: Trends Amplif. 2008 Dec;12(4):300–315. doi: 10.1177/1084713808325880

Evaluating the benefit of hearing aids in solving the cocktail party problem

Nicole Marrone 1, Christine R Mason 1, Gerald Kidd Jr 1

Abstract

The benefit of wearing hearing aids in multitalker, reverberant listening environments was evaluated in a study of speech-on-speech masking with two groups of listeners with hearing loss (younger/older). Listeners selectively attended a known spatial location in two room conditions (low/high reverberation) and identified target speech in the presence of two competing talkers that were either colocated or symmetrically spatially separated from the target. The amount of spatial release from masking (SRM) with bilateral aids was similar to that when listening unaided at or near an equivalent sensation level and was negatively correlated with the amount of hearing loss. When using a single aid, SRM was reduced and was related to the level of the stimulus in the unaided ear. Increased reverberation also reduced SRM in all listening conditions. Results suggest a complex interaction between hearing loss, hearing aid use, reverberation, and performance in auditory selective attention tasks.

Keywords: Hearing aids, informational masking, reverberation, spatial benefit, selective attention

Introduction

Difficulty understanding speech in background noise is a common complaint of listeners with hearing loss seeking care in audiology clinics. Often the noise that patients describe actually refers to a background of speech, such as having a conversation in a busy restaurant or trying to talk to one’s spouse while the kids carry on their own conversation. When the issue is selectively attending to one talker in the presence of multiple competing talkers, researchers often describe it as the “cocktail party problem” (Cherry, 1953), which has been the focus of recent literature reviews (e.g., Yost, 1997; Bronkhorst, 2000; Ebata, 2003).

In considering the literature concerning the cocktail party problem, it is striking that there have been relatively few studies of selective listening in multisource acoustic environments for listeners wearing hearing aids. However, it is well known that listeners with hearing loss frequently experience difficulty understanding speech in these situations, and there is strong evidence that this problem is not fully remediated by amplification. For example, ratings of perceived difficulty of listening in complex and dynamic auditory environments on the Speech, Spatial, and Qualities of Hearing Scale (SSQ), such as following simultaneous or rapidly alternating speech streams, were highly correlated with a self-assessment of experiencing social limitations and emotional distress as a consequence of hearing loss (Gatehouse and Noble, 2004). Also, Harkins and Tucker (2007) surveyed a group of over 400 adults with hearing loss (78% wore hearing aids, 22% had cochlear implants) to identify situations in which listeners continue to have difficulty understanding speech with amplification, to determine how often they are in those situations, and to determine whether they use assistive listening devices in addition to their hearing aids/cochlear implants. The situation in which the greatest percentage of respondents reported continued communication difficulty was a noisy group environment (94%), and 41% of the respondents reported that they were in such a situation often or very often. Around 65% of the respondents reported experiencing difficulty in noisy situations when conversing with one or two people, with 82% indicating that they were frequently in this situation. The motivation for the present study is to obtain a better understanding of the cause of these communication difficulties and the benefit provided by hearing aids.

In multitalker environments, listeners (regardless of whether they are wearing hearing aids) typically focus attention on a target talker but also maintain awareness of the full auditory scene in order to shift attention to another source (cf. Broadbent, 1958). This is a complex problem of speech-on-speech masking, where the reception of the target talker is adversely affected by the presence of the competing talkers. In order to communicate successfully in such situations the listener must perceptually segregate the target talker and direct attention to him or her. This is not a simple task, however, and the interference between talkers has both peripheral and central components. When the target and masker occur in the same frequency region at the same time, the target may be “energetically masked” due to an overlap of the target and masker representations in the peripheral auditory system. In ongoing speech, however, this masking varies from moment to moment because of fluctuations of the sounds in frequency and amplitude. A second component of masking that cannot be accounted for by peripheral masking effects is referred to as “informational masking.” In this type of masking the target and masker are both audible, but the listener cannot perceptually segregate the target from the background or is unable to successfully direct attention to the target. This may occur if there is a high degree of similarity between the background talkers and the target talker, if the background is highly uncertain, or if the background is simply difficult to ignore (see review by Kidd et al., 2008).

When differentiating between multiple competing talkers, there are a number of acoustic cues that may facilitate source segregation (see reviews by Bregman, 1990, and Darwin and Carlyon, 1995), such as differences in fundamental frequency, spatial location, onsets and offsets, prosody, and intensity levels. There are also higher-level factors that may provide a benefit, such as a priori knowledge about the sources or the message content that may help direct the focus of attention. In the current study, we examined the use of separation of sources in azimuth as a means for providing spatial release from masking (SRM). Differences in spatial location between sources produce binaural cues including interaural time differences (ITDs) and interaural level differences (ILDs) that form the basis for SRM. Recent evidence has suggested that SRM for multiple simultaneous talkers may involve both energetic masking and informational masking (e.g., Freyman et al., 1999; Arbogast et al., 2002; Shinn-Cunningham et al., 2005; Colburn et al., 2006; Darwin, 2008). Historically, however, it has been the factors involved in reducing energetic masking that have been most commonly studied in the context of producing SRM. These include the better-ear advantage (an improvement in target-to-masker ratio in one ear due to head shadow) and binaural analysis (the within-channel masking level difference, MLD). The higher-level factors related to informational masking and release from informational masking have received considerably less study and are less clearly understood. Kidd et al. (2005a) found that speech identification in a highly uncertain three-talker listening situation was significantly improved by a priori knowledge about target location. When listeners were uncertain about the location of the target, speech identification performance was, on average, about 67% correct, but when the listeners knew where to direct their attention, performance improved to greater than 90% correct. These results were replicated recently by Singh et al. (2008), who found similar advantages of a priori knowledge in both younger and older listener groups, but generally poorer performance overall for the older group. This suggests that age may be a factor in some SRM conditions.
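
As an illustration of the binaural cues involved, the magnitude of the ITD produced by the masker locations used later in this study (±90°) can be approximated with the Woodworth spherical-head formula. The sketch below is illustrative only and is not taken from the paper; the head radius and speed of sound are assumed nominal values.

```python
import math

# Minimal sketch (illustrative assumption, not from the paper): the Woodworth
# spherical-head approximation for the interaural time difference (ITD) produced
# by a far-field source at a given azimuth.

HEAD_RADIUS_M = 0.0875   # assumed average head radius (m)
SPEED_OF_SOUND = 343.0   # assumed speed of sound (m/s)

def woodworth_itd_us(azimuth_deg):
    """ITD in microseconds for a source at azimuth 0-90 degrees."""
    theta = math.radians(azimuth_deg)
    return 1e6 * (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg azimuth -> ITD ~ {woodworth_itd_us(az):.0f} us")
# At +/-90 deg the ITD approaches roughly 650 us, whereas the colocated
# configuration (0 deg) provides no interaural difference at all.
```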

In a previous speech-on-speech masking study in normal-hearing young-adult listeners, Marrone et al. (2008a) concluded that a large component of SRM was a reduction in informational masking. In their experiment, the speech materials - the target and both maskers - were sentences from the Coordinate Response Measure (CRM) test (Bolia et al., 2000) and were presented through loudspeakers in a sound field. The CRM test has been shown to produce large amounts of informational masking under certain conditions (cf. Arbogast et al., 2002; Brungart et al., 2006). The target was always presented from a loudspeaker directly in front of the listener, while the two independent speech maskers were either colocated with the target or were symmetrically spatially separated from the target. The maximum SRM (difference in speech reception thresholds for colocated versus spatially separated maskers) found was about 13 dB.

In order to determine whether the same trends reported by Marrone et al. (2008a) would be observed in listeners with hearing loss, a similar approach was taken in a second study (Marrone et al., 2008b) that was intended to create large amounts of informational masking in a multitalker speech identification task. In that study, a total of forty listeners were tested, twenty of whom had sensorineural hearing loss. The other twenty listeners were age-matched normal-hearing controls. The stimuli and procedures were the same as in the earlier study, although only a subset of spatial separations was tested. Consistent with the report by Marrone et al. (2008a), listeners with normal hearing demonstrated a large benefit in performance when the sources were spatially separated. This effect was obtained without the availability of a simple better-ear advantage - as revealed by the absence of any SRM in a control condition that simulated monaural listening - and was relatively robust with respect to increased reverberation. However, the listeners with bilateral sensorineural hearing loss received significantly less benefit from spatial separation of sources than their normal-hearing counterparts. Both listener groups had similar speech identification thresholds when the three talkers were colocated. However, when the talkers were spatially separated, listeners with hearing loss required much higher target-to-masker ratios (T/Ms) at threshold than normal-hearing listeners, particularly in the reverberant environment. While a few listeners with hearing loss had SRM within the range of the normal-hearing listeners, for others, performance was as poor in the spatially separated condition as in the colocated condition. There was a strong inverse relationship between the amount of hearing loss (as estimated by the listener’s threshold for speech in quiet) and the benefit of spatial separation between the talkers.

Several possible explanations were offered for the much reduced SRM in listeners with hearing loss. One possibility is increased energetic masking. Generally, conditions in which energetic masking dominates yield less benefit from the perceptual cues that normally provide a release from informational masking (cf. Kidd et al., 2008). Evidence for increased energetic masking in listeners with sensorineural hearing loss, and a concomitant reduction in SRM, has been reported by Arbogast et al. (2005). In that study, which used speech targets and maskers processed into mutually exclusive narrow frequency bands, SRM occurred when a single masker talker was separated from the target talker by 90° azimuth. The authors concluded that the reduced SRM in the hearing-impaired listeners could have been due to wider auditory filters that smeared the representations of the target and masker causing greater energetic masking. Another possibility is that the degraded spectral and temporal representation of the stimuli affected the ability of the listeners with hearing loss to segregate the target stream and maintain it over time.

One question raised by the results from listeners with hearing loss in conditions producing large amounts of informational masking (Arbogast et al., 2005; Marrone et al. 2008b) is whether the benefit of spatial separation between talkers observed unaided1 would be different when wearing hearing aids. Beyond the obvious benefit hearing aids provide in restoring the audibility of sounds that is the prerequisite to comprehension, several questions remain regarding whether and to what extent hearing aids allow listeners to make use of spatial cues, especially in complex and highly uncertain multisource environments. In the current study, the same listeners from Marrone et al. (2008b) were tested with their personal hearing aids to determine if amplification would alter the benefit of spatial separation observed. Listeners wore their personal hearing aids at user-adjusted settings, assuring that they were accustomed to the amplification provided.

Because current hearing aids in a bilateral fitting process incoming sounds independently of one another, differences between the aids in compression, noise reduction, and other adaptive algorithms could alter the natural interaural level and timing cues. There is evidence that these types of distortions can negatively impact localization abilities (Van den Bogaert et al., 2006; Keidser et al., 2006; Byrne and Noble, 1998). Since performance in the experimental task is dependent upon binaural processing and the effective use of spatial cues (cf. Marrone et al., 2008a), aided performance could be worse than unaided performance as a consequence of the hearing aids operating independently at each ear. Alternatively, the frequency-specific gain applied by the hearing aids could, for example, improve performance by restoring high-frequency audibility. As in Marrone et al. (2008b), here the target speech was presented at a fixed sensation level above the listener’s quiet (and in this case, aided) threshold so that the speech was highly intelligible in isolation. Given that hearing aids provide benefit for speech recognition in noise by amplifying low level sounds that would otherwise be inaudible to the listener, the question in the current study was more specifically whether at a supra-threshold level there would be a performance difference attributable to the frequency-specific amplification and/or other processing by the instrument(s). The assumption underlying this hypothesis is that improved high-frequency audibility may translate into improved binaural cues that would facilitate perceptual segregation of the talkers based on their perceived spatial location. Dubno et al. (2002) found that listeners with sensorineural hearing loss showed little benefit from spatially separated speech and noise that were high-pass filtered as compared to normal-hearing listeners in the same condition and hypothesized that this was a consequence of reduced high-frequency head-shadow cues (ILDs) since the listeners had the most hearing loss in the high frequencies. Thus, high-frequency amplification could restore these cues and potentially yield better representations of the spatial locations of the sources, which in turn may improve perceptual segregation of the sound sources. Preliminary results by Ahlstrom et al. (2006) suggest that the high-frequency audibility that is restored by hearing aids leads to an improvement in the benefit of spatial separation between a target talker (located directly ahead) and multitalker babble (to the side). When the low-pass cutoff frequency of the speech stimuli was increased and mid-to-high frequency speech information was restored with amplification, listeners had improved speech recognition in the spatially separated condition. They suggested that the improved audibility likely restores head shadow cues in the high frequencies.
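
To make the concern about independent processing concrete, the sketch below shows how two compressors acting independently at the two ears can shrink an ILD. This is a minimal, hypothetical single-band static model, not a simulation of any listener's actual hearing aids; all parameter values are assumptions chosen only for illustration.

```python
# Minimal sketch (hypothetical parameters): independent wide-dynamic-range
# compression (WDRC) at each ear reduces an interaural level difference (ILD),
# because the ear receiving the higher level gets less gain.

def wdrc_output_level(input_db, gain_db=20.0, threshold_db=50.0, ratio=3.0):
    """Static input/output rule of a simple single-band compressor (dB in -> dB out)."""
    if input_db <= threshold_db:
        return input_db + gain_db                       # linear gain below the kneepoint
    compressed = threshold_db + (input_db - threshold_db) / ratio
    return compressed + gain_db                         # compressed growth above the kneepoint

near_ear, far_ear = 70.0, 60.0                          # a 10 dB ILD at the two eardrums
out_near = wdrc_output_level(near_ear)
out_far = wdrc_output_level(far_ear)

print(f"ILD before the aids: {near_ear - far_ear:.1f} dB")                 # 10.0 dB
print(f"ILD after independent compression: {out_near - out_far:.1f} dB")   # ~3.3 dB
```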

There has been little previous study of SRM in listeners wearing hearing aids. Festen and Plomp (1986) examined the benefit of spatial separation between a speech target presented from straight ahead and a masking noise shaped to match the long-term average of the speech stimuli and presented from either straight ahead, to the right side, or to the left side. Listeners were tested with their personal hearing aids at user-adjusted settings in an anechoic room and listened either unaided, with one aid, or with both hearing aids. Overall, performance was about 2 dB better unaided than in any of the aided conditions. The amount of SRM was, on average, 6.5 dB for listeners with pure tone averages (PTA) less than 50 dB HL and 5.5 dB for listeners with a PTA greater than 50 dB HL. This was significantly reduced as compared to results from normal-hearing listeners in identical conditions reported by Plomp and Mimpen (1981). In that study, Plomp and Mimpen found a 10 dB SRM for speech masked by a single noise source. By comparison, Festen and Plomp (1986) reported that SRM in the unilateral and bilateral amplification conditions was 4.5 dB on average. However, when the noise source was located on the same side as the hearing aid, listeners with a PTA greater than 50 dB HL showed less SRM using one aid than two aids. Recently, Kalluri and Edwards (2007) studied the effect of compression on SRM in normal-hearing listeners. They found that compression acting independently at the ears reduced the benefit of spatial separation between multiple talkers in an ILD-only condition, but not when the sounds were spatialized using the head-related transfer function (with ITD and ILD cues).

In the present study, we also examined whether the benefit of spatial separation between talkers while using amplification would be affected by an increase in room reverberation. Based on previous findings (Marrone et al., 2008a, b; Novick et al., 2001; Nabelek and Pickett, 1974), the underlying hypothesis was that an increase in reverberation would negatively impact performance. Second, since the choice of using one versus two hearing aids in noisy situations affects binaural hearing, we compared conditions of bilateral amplification to unilateral amplification. This tested the hypothesis that a bilateral advantage would be observed, since the experimental task is fundamentally based on binaural listening. In Noble and Gatehouse (2006), SSQ questions that are related to the listening situation tested in the current experiment (e.g., having a conversation in an echoic environment, ignoring an interfering voice of the same pitch) revealed benefit from a unilateral fitting and additional benefit from a bilateral fitting in groups of experienced hearing aid wearers. However, in Walden and Walden (2005), listeners (ages 50–90 years) performed better when using a single hearing aid than when using two hearing aids on the QuickSIN (Quick Speech-In-Noise) test, where the task was to repeat keywords from sentences in the presence of four-talker babble. In their study, the target and maskers were presented from the same loudspeaker located directly in front of the listener, so there were no spatial difference cues available. There was a trend for better performance with a single aid as listener age increased. Consequently, our approach was to recruit both younger and older listeners with hearing loss in order to examine the effect of age. Finally, the focus of the current study was on a sample of listeners using their personal hearing aids with omnidirectional microphones, and thus we did not investigate the processing parameters, directionality, or other settings that might optimize performance on the task.

Methods

Listeners

Forty listeners were recruited from the university and the greater Boston area via newspaper advertisements and word-of-mouth. There were two groups of twenty subjects: a group of listeners with hearing loss (HL) who reported regular use of bilateral hearing aids in daily life, and an age-matched normal-hearing (NH) group. These were the same listeners as tested in Marrone et al. (2008b). All listeners were native English speakers, and in both the NH and HL groups there were 10 younger (ages 19–42) and 10 older (ages 57–80) participants. Audiometric measurements were conducted in a sound-treated double-walled booth to determine eligibility for participation. Criteria for participation included essentially symmetric hearing (asymmetry was defined as a >10 dB HL difference between ears at two or more audiometric test frequencies) and no significant air-bone gap (≤ 10 dB HL difference between air and bone conduction thresholds at any audiometric test frequency). Table 1 gives demographic information for the listeners with hearing loss, including sex, age, etiology of hearing loss, duration of hearing loss, duration of hearing aid use, pure tone average for the right and left ears (average threshold at 500, 1000, and 2000 Hz), and audiometric slope for each ear. The HL listeners had mild to moderately-severe symmetric sensorineural hearing loss that was either flat or gradually sloping in configuration. The NH listeners had audiometric thresholds ≤ 25 dB HL at octave frequencies from 250–4000 Hz. In the aided conditions, the HL listeners were tested with their own aids at their normal settings in omnidirectional mode.

Table 1.

Listener demographics for the HL group, including: sex, age (years), etiology of hearing loss, duration of hearing loss (years), duration of hearing aid use (years), pure tone average (PTA: average air conduction threshold in dB HL at 500, 1000, and 2000 Hz), and average slope over the range between 250–4000 Hz (dB/octave). Listeners are sorted by age and are divided into two sub-groups (younger/older).

| ID | Sex | Age | Etiology | HL duration (years) | HA use (years) | PTA, R (dB HL) | PTA, L (dB HL) | Slope, R (dB/octave) | Slope, L (dB/octave) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Y1 | M | 19 | Alport’s Syndrome | 11 | 11 | 58 | 62 | 6.25 | 6.25 |
| Y2 | M | 19 | Hereditary | 19 | 11 | 42 | 45 | 7.5 | 6.25 |
| Y3 | M | 20 | Unknown | 20 | 15 | 63 | 60 | 8.75 | 6.3 |
| Y4 | F | 21 | Unknown | 16 | 11 | 50 | 38 | 8.75 | 9.25 |
| Y5 | F | 22 | Meniere’s Disease | 22 | 7 | 42 | 47 | −1.25 | −2.5 |
| Y6 | F | 27 | Unknown | 27 | 23 | 68 | 67 | 7.5 | 6.25 |
| Y7 | F | 36 | Scarlet Fever | 34 | 20 | 48 | 43 | 5 | 8.75 |
| Y8 | F | 38 | Unknown | 38 | 33 | 62 | 62 | 8.75 | 11.25 |
| Y9 | F | 41 | Hereditary | 41 | 35 | 45 | 45 | 6.25 | 7.5 |
| Y10 | F | 42 | Unknown | 42 | 10 | 43 | 42 | 10 | 10 |
| O1 | F | 57 | Unknown | 50 | 22 | 58 | 53 | 15 | 15 |
| O2 | F | 59 | Meniere’s Disease | 14 | 1 | 47 | 50 | 8.75 | 10 |
| O3 | F | 63 | Rubella | 63 | 55 | 52 | 53 | 7.5 | 6.25 |
| O4 | F | 66 | Unknown | 55 | 42 | 70 | 68 | 7.5 | 7.5 |
| O5 | F | 71 | Unknown | 65 | 45 | 58 | 58 | 6.25 | 6.25 |
| O6 | F | 72 | Presbycusis | 3 | 1 | 52 | 45 | 8.75 | 7.5 |
| O7 | F | 75 | Unknown | 20 | 17 | 60 | 67 | 2.5 | 3.75 |
| O8 | F | 78 | Presbycusis | 3 | 1 | 47 | 50 | 1.25 | 1.25 |
| O9 | F | 78 | Presbycusis | 9 | 8 | 58 | 62 | 3.75 | 5 |
| O10 | M | 80 | Presbycusis | 4 | 2 | 38 | 40 | 10 | 8.75 |

Stimuli

The stimuli were recordings of the four female talkers from the Coordinate Response Measure (CRM) corpus (Bolia et al., 2000). Every sentence in this corpus has the structure, “Ready [callsign] go to [color] [number] now.” The corpus has sentences with all combinations of eight callsigns (Arrow, Baron, Charlie, Eagle, Hopper, Laker, Ringo, Tiger), four colors (blue, green, red, white), and eight numbers (digits 1–8). On every trial, the listener heard three sentences spoken by different talkers. The talkers and sentences varied from trial to trial. The target sentence was identified by the callsign “Baron.” The masker sentences were also from the CRM corpus, spoken by two different female talkers. The masker talkers, callsigns, colors, and numbers were different from the target and from each other.

Description of amplification used and electroacoustic measures

The listeners wore their own hearing aids with their regular fitting set in omnidirectional mode. None of the other characteristics of the aids were adjusted. Consequently, a variety of manufacturers, models, and styles were represented in the group. Eighty percent were digital signal processing devices. The majority were custom in-the-ear devices (23/40) and the remainder were behind-the-ear devices, three of which were open canal fittings. The hearing aids were required to be in working order but no attempt was made to change or improve the fitting. This was based on the decision to have a sample that was representative of the hearing aids in current use by the listeners recruited for the study.

Electroacoustic measurements of each hearing aid were made at each of the two listening sessions. Coupler measurements were made using a Frye Systems 7000 test box and hearing aid analyzer to characterize the frequency response, determine input/output transfer functions, assess attack/release time and processing delay, and to determine gain at user-adjusted settings. Of the 40 hearing aids tested, 11 provided linear amplification and none of these reached an output limit within the range of sound levels presented in the current experiment. The remaining 29 hearing aids were nonlinear, multiband compression instruments. The time constants for this type of hearing aid can be broadly described as fitting one of two classes, as outlined by Moore (2008): fast- versus slow-acting compression. In the current group, most of the hearing aids (19/29) had fast-acting compression (an attack time of 0.5–20 ms and a recovery time of 5–200 ms).
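
A minimal sketch of how an aid might be assigned to one of the two classes from its measured time constants, using the fast-acting attack and recovery ranges quoted above; this is only an illustration of the classification rule, not the authors' analysis code.

```python
# Minimal sketch (illustrative, not the authors' code): tagging a compression hearing
# aid as fast- or slow-acting from its measured attack and recovery times, using the
# fast-acting ranges quoted in the text (attack 0.5-20 ms, recovery 5-200 ms).

def compression_class(attack_ms, recovery_ms):
    """Return 'fast' if both time constants fall in the fast-acting ranges, else 'slow'."""
    fast_attack = 0.5 <= attack_ms <= 20.0
    fast_recovery = 5.0 <= recovery_ms <= 200.0
    return "fast" if (fast_attack and fast_recovery) else "slow"

print(compression_class(attack_ms=5, recovery_ms=50))    # -> 'fast'
print(compression_class(attack_ms=10, recovery_ms=800))  # -> 'slow'
```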

Following otoscopic examination, probe microphone measurements in the listener’s ear were made using the modulated noise test signal (“digital speech”) on the Frye Systems 7000 real ear analyzer. These measurements were conducted in a sound-treated, double-walled audiometric test booth using the test protocol for real ear verification described by Hawkins and Mueller (1998). The real ear unaided response was measured using a 65 dB SPL input signal. The real ear aided response measurements were made using 50, 65, and 80 dB SPL signals. The aided measurements were made with the hearing aid set at the listener’s preferred settings in omnidirectional mode, the same settings that were then used for the full experiment. The measured insertion gain was compared to the target insertion gain values that would be prescribed based on the NAL-RP prescriptive method (Byrne et al., 1990). The average measured and target real ear insertion gain (REIG) values (in dB) at audiometric frequencies are given for the right and left ears in Table 2. With the exception of one listener (Y5) who had measured REIG across frequencies that exceeded NAL-RP prescribed gain by 12–30 dB, most listeners tended to use settings that were below the prescriptive targets based on their current hearing thresholds.
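
The comparison in Table 2 rests on the relation REIG = REAR − REUR at each frequency, with the deviation from target defined as measured REIG minus prescribed REIG. The sketch below restates that arithmetic; all numeric values are hypothetical, and real NAL-RP targets would normally come from fitting software rather than being hard-coded.

```python
# Minimal sketch (hypothetical values, not the study data): deriving real-ear insertion
# gain (REIG) from probe-microphone measurements and comparing it to a prescriptive target.

freqs_hz = [250, 500, 1000, 2000, 4000, 6000]
reur_db = [2, 3, 5, 12, 14, 8]            # real-ear unaided response (hypothetical)
rear_db = [8, 16, 28, 36, 30, 16]         # real-ear aided response at 65 dB SPL (hypothetical)
target_reig_db = [3, 14, 26, 26, 26, 26]  # illustrative prescriptive target values

for f, reur, rear, target in zip(freqs_hz, reur_db, rear_db, target_reig_db):
    measured_reig = rear - reur           # REIG = REAR - REUR
    deviation = measured_reig - target    # negative values mean gain below target
    print(f"{f:5d} Hz: measured REIG {measured_reig:3d} dB, deviation from target {deviation:+3d} dB")
```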

Table 2.

Average and standard deviations for measured real ear insertion gain (REIG), target REIG calculated using the NAL-RP prescriptive method, and the deviation from target (measured REIG – target REIG) for right and left ears.

| Frequency | Right: Measured REIG | Right: Target REIG | Right: Difference | Left: Measured REIG | Left: Target REIG | Left: Difference |
| --- | --- | --- | --- | --- | --- | --- |
| 250 Hz | 5.8 (6.2) | 3.2 (3.9) | 2.6 (5.7) | 6.7 (9.2) | 3.6 (4.2) | 3.0 (8.2) |
| 500 Hz | 11.3 (9.3) | 14.2 (5.0) | −2.9 (8.8) | 11.2 (10.8) | 14.2 (5.7) | −3.0 (8.6) |
| 1000 Hz | 21.6 (10.1) | 26.1 (5.0) | −4.5 (8.7) | 20.6 (12.0) | 26.1 (5.2) | −5.5 (10.2) |
| 2000 Hz | 23.4 (9.7) | 25.6 (4.8) | −2.2 (8.5) | 22.9 (10.4) | 25.4 (4.3) | −2.5 (9.4) |
| 4000 Hz | 16.1 (9.0) | 25.9 (4.9) | −9.8 (9.1) | 15.6 (7.7) | 25.6 (4.8) | −10.0 (6.6) |
| 6000 Hz | 8.2 (11.8) | 26.3 (5.7) | −18.1 (14.0) | 6.3 (14.6) | 26.7 (5.9) | −20.5 (13.5) |

Room conditions

The study was conducted at the Soundfield Laboratory at Boston University. This space includes a single-walled IAC sound booth (12’4” long, 13’ wide, and 7’6” high) that was designed to allow changing the sound absorption characteristics of the soundfield. This is done by covering all surfaces (ceiling, walls, floor, and door) with panels of different acoustic reflectivity, such as acoustic foam or Plexiglas©. For the current experiment, there was a low-reverberation and a high-reverberation condition. In the low-reverberation condition, the room configuration was that of a standard IAC booth: the ceiling, walls, and door had a perforated metal surface and the floor was carpeted (referred to as the “BARE” room condition since no surface coverings were applied). In the more reverberant condition, all surfaces were covered with reflective Plexiglas© panels (the “PLEX” room condition), creating a noticeable increase in reverberation. The measured reverberation time increased from about 60 to 250 ms while the direct-to-reverberant ratio decreased from about 6.3 to −0.9 dB. Furthermore, for a given voltage input to the loudspeakers, the SPL in the PLEX room was approximately 3 dB higher than in the BARE room. Additional acoustic measurements in this room for the different surfaces are described in Kidd et al. (2005b). The listener was seated in the center of a semicircle of seven loudspeakers positioned at head height and located 5’ from the approximate center of the listener’s head. Only the loudspeakers directly in front of the listener (0° azimuth) and to the right and left of the listener (±90° azimuth) were used in the experiment.

Procedures

The task was 1-interval 4×8-alternative forced-choice with feedback. Listeners used a handheld keypad with liquid crystal display (Q-term II) to enter their responses and receive feedback on each trial. They were instructed to identify the color and number from the sentence with the callsign “Baron” and were informed that this sentence would always be presented from the loudspeaker directly ahead. At the beginning of each trial, the word “Listen” appeared on the display. After stimulus presentation, the prompts “Color [B R W G]?” and “Number [1–8]?” appeared and the listener registered their choice of each. Responses were scored as correct only if the listener identified both the color and number accurately. Feedback consisted of a message indicating whether the response was correct and what the target color and number had been on that trial (for example, “Incorrect, it was red two.”). Listeners completed a short practice block of target identification in quiet at a comfortable listening level to familiarize them with the procedures and keypad.
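
For concreteness, a trial was scored as correct only when both the reported color and the reported number matched the target, so chance performance in this 4 × 8 task is 1/32 (about 3%). The snippet below is a hypothetical restatement of that scoring rule, not the software used in the study.

```python
# Minimal sketch (illustrative, not the authors' software): scoring one trial of the
# 4 (color) x 8 (number) identification task.

COLORS = ("blue", "green", "red", "white")
NUMBERS = tuple(range(1, 9))

def score_trial(target, response):
    """target and response are (color, number) tuples; correct only on an exact match."""
    return target[0] == response[0] and target[1] == response[1]

print(score_trial(("red", 2), ("red", 2)))   # True  -> "Correct"
print(score_trial(("red", 2), ("red", 5)))   # False -> "Incorrect, it was red two."
print(f"chance level: {1 / (len(COLORS) * len(NUMBERS)):.1%}")
```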

For the NH listeners, there were two monaural earplug plus ear muff conditions (right ear occluded, left ear occluded). The plug and muff condition was intended as a “monaural” control, even though it is not strictly monaural listening, and is a potentially useful comparison for the unilateral aided conditions in the HL group. There were three aided listening conditions tested for the HL listeners: bilateral aided, right ear aided, and left ear aided.

These conditions were tested in two experimental sessions, one for each room condition. The two sessions typically occurred within one week of each other. The order of the listening conditions and room conditions was counter-balanced across participants. Audiometric testing was completed during the first listening session. Thresholds were re-checked at the second listening session for one listener who had a history of fluctuating hearing loss (Y5). Electroacoustic measurements were completed at both listening sessions to confirm that the hearing aids were set in the same way for both sessions.

Testing in each room condition began with two quiet conditions. First, unmasked identification thresholds for the target CRM sentences at the target location (0°) were obtained in both unaided and aided conditions. A one-up, one-down adaptive procedure was used to estimate the 50% correct point on the psychometric function (Levitt, 1971). Each adaptive track continued until a minimum of 30 trials were tested and at least 10 reversals had been obtained. The initial step size was 4 dB and was reduced to 2 dB after three reversals. The threshold estimate was computed after discarding the first three or four reversals (whichever produced an even number for averaging) and thus was based on at least the last six reversals. Two estimates of threshold in quiet were measured and averaged. If the two threshold estimates differed by more than 5 dB, an additional two estimates were collected and all four were averaged.
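
The tracking rule can be summarized in a short simulation. The sketch below is not the authors' software; it substitutes a simulated listener for real responses, and the same one-up, one-down logic (with the masker level rising after correct responses instead) applies to the masked thresholds described later.

```python
import random

# Minimal sketch (illustrative, not the authors' code) of the one-up, one-down rule:
# 4 dB steps until the third reversal, 2 dB thereafter; at least 30 trials and 10
# reversals; threshold = mean of the reversals remaining after discarding the first
# three or four (whichever leaves an even number), i.e., at least the last six.

def run_track(respond, start_level=40.0):
    level, step = start_level, 4.0
    reversals, last_direction, trials = [], None, 0
    while trials < 30 or len(reversals) < 10:
        trials += 1
        correct = respond(level)
        direction = -1 if correct else +1          # down after correct, up after incorrect
        if last_direction is not None and direction != last_direction:
            reversals.append(level)                # a reversal: track changed direction
            if len(reversals) == 3:
                step = 2.0                         # smaller steps after the third reversal
        last_direction = direction
        level += direction * step
    discard = 3 if (len(reversals) - 3) % 2 == 0 else 4
    used = reversals[discard:]
    return sum(used) / len(used)                   # estimate of the 50%-correct point

def fake_listener(level, true_threshold=25.0):
    """Simulated listener: more likely correct as the level rises above a 'true' threshold."""
    p_correct = 1.0 / (1.0 + 10 ** ((true_threshold - level) / 5.0))
    return random.random() < p_correct

print(f"estimated threshold: {run_track(fake_listener):.1f} dB")
```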

Next, percent correct performance in an unmasked fixed-level identification task was measured for the target at the level at which it would be presented in the masked conditions. For the NH listeners, the target level in the masked conditions was set to 60 dB SPL. For the HL group, the target was set to 30 dB sensation level (SL) re: quiet aided speech identification thresholds whenever possible and at lower SLs in a few cases to avoid excessively high masker levels (at 20 dB SL for listeners O4, O7, and O8). This same sensation level was used across aided listening conditions and across room conditions with two exceptions (listeners Y3 and Y8, who were tested at 20 dB SL in one of the unilateral aided conditions due to an asymmetry in unilateral aided thresholds). For most listeners, the fixed level for the target stimulus in the aided listening conditions was comparable to that of normal conversational speech (e.g., 65 dB SPL ±5 dB). Quiet speech identification performance was nearly perfect at the test level of the target for all listeners (98.6% correct on average, standard deviation (SD) = 2.7%).

For the measurements of masked speech identification thresholds, listeners heard three sentences played concurrently (one target with two independent maskers) in every trial. The maskers were either colocated with the target at 0° or symmetrically spatially separated (target at 0°, masker 1 from −90°, masker 2 from +90°). The target level was fixed and the level of the maskers was varied adaptively to estimate threshold. The two masker talkers were presented at the same rms level on each trial of the adaptive track. The initial masker level was 20 dB below the target. The masker level adapted in 4 dB steps initially and was reduced to 2 dB after the third reversal. The threshold estimate was computed after discarding the first three or four reversals (whichever produced an even number) and thus was based on at least the last six reversals. Threshold estimates were averaged over four tracks per condition.

In the plug and muff conditions, the NH listeners wore commercially available hearing protectors on one ear and were tested in quiet and in the masked speech conditions at 0° and ±90°. The hearing protectors used were disposable E-A-R® plugs and the AOSafety® Economy Earmuff, both manufactured by the Aearo Company. In order to estimate the amount of attenuation obtained, listeners wore earplugs in both ears and both earmuffs while speech identification threshold estimates were obtained in quiet. If they did not achieve at least 35 dB of attenuation (± 5 dB) relative to their unoccluded speech thresholds, the earplugs were reinserted, the earmuffs were repositioned, and new threshold estimates were obtained. An earplug was then removed from one ear and the earmuff from that ear was removed from the headband, which had been modified so that it could be positioned comfortably but still tightly on the listener’s head. Listeners were tested in a monaural right and monaural left condition in each of the two room conditions (order counterbalanced across listeners).

Results

Preliminary measurements: identification in quiet

Speech identification thresholds in quiet for the three aided listening conditions are given in Table 3, along with unaided thresholds for comparison. On average, bilateral aided thresholds were 13 dB better than unaided thresholds with a range of −1 to 28 dB improvement in the low-reverberant room and were 15 dB lower than unaided thresholds in the more reverberant room on average, with a range of 3–29 dB improvement. Bilateral aided thresholds were 2 dB better on average than the unilateral aided thresholds in both room conditions.

Table 3.

Means and standard deviations in dB SPL for speech identification thresholds in quiet. Unaided thresholds from Marrone et al. (2008b) are given for comparison.

| HL listener group | Room condition | Unaided | Bilateral aided | Right aided | Left aided |
| --- | --- | --- | --- | --- | --- |
| Younger (n=10) | BARE | 45.0 (10.4) | 35.5 (5.6) | 37.4 (6.2) | 36.2 (8.2) |
| Younger (n=10) | PLEX | 48.4 (9.2) | 36.4 (5.7) | 38.5 (5.0) | 37.6 (7.3) |
| Older (n=10) | BARE | 55.1 (13.4) | 38.2 (6.5) | 40.3 (6.0) | 41.3 (9.8) |
| Older (n=10) | PLEX | 58.2 (12.0) | 40.0 (5.5) | 41.4 (4.6) | 43.8 (7.5) |

The speech identification thresholds in quiet in the low-reverberation room were highly correlated with both the audiometric pure-tone average (PTA; r = 0.94, p < 0.001) and the standard audiometric speech recognition threshold (SRT; r = 0.96, p < 0.001) obtained using recorded spondaic words (the PTA and SRT values used for analysis were the average values for the right and left ears). As described in the Methods section, speech identification performance in quiet was nearly perfect when the CRM sentences were presented at the test level used for the target in both room conditions and for all listeners.

Speech identification thresholds with two competing talkers

The masked speech identification thresholds are expressed in terms of T/M and SRM in dB. The T/M was calculated by subtracting the level of the individual maskers at threshold from the fixed target level. SRM is the T/M at threshold in the colocated condition minus the T/M at threshold in the spatially separated condition, so that a positive SRM indicates a benefit of spatial separation.
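
Stated as formulas (a restatement of the definitions above, with L_T the fixed target level and L_M the level of each individual masker at threshold):

```latex
\[
  \mathrm{T/M} \;=\; L_{T} - L_{M},
  \qquad
  \mathrm{SRM} \;=\; \bigl(\mathrm{T/M}\bigr)_{\text{colocated}} \;-\; \bigl(\mathrm{T/M}\bigr)_{\text{separated}} ,
\]
```

so SRM is positive whenever spatial separation allows the listener to reach threshold at a lower T/M.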

The group-mean results in T/M are contained in Table 4 for all listeners and conditions. Data from unaided conditions for the same listeners, stimuli and procedures from Marrone et al. (2008b) are given for comparison. In the unaided conditions from Marrone et al. (2008b), the target was set to 30 dB SL re: speech identification threshold in quiet, which is roughly equivalent to applying linear uniform gain of 30 dB. Inspection of Table 4 indicates that when the three talkers were colocated, group mean performance was similar across listening conditions and room conditions. Threshold T/Ms ranged from 4.5 to 6.5 dB indicating that the target was just higher in level than the combined 2-talker masker (i.e., above 3 dB). However, a somewhat larger range of performance was observed when the three talkers were spatially separated. In that case, the lowest (best) T/Ms at threshold occurred in the low-reverberation room while the highest (poorest) T/Ms at threshold occurred in the unilateral aided conditions in the more reverberant room. In general, the older listeners had slightly higher T/Ms at threshold than the younger listeners (~1–3 dB across conditions).

Table 4.

Means and standard deviations for T/Ms at CRM identification threshold (in dB) in the colocated and spatially separated conditions for the bilateral aided, unilateral aided, and unaided listening conditions. SRM in dB is also given (T/M in the colocated condition minus T/M in the spatially separated condition, calculated on an individual listener basis and then averaged). Data for the unaided condition are from Marrone et al. (2008b). This condition shows performance without hearing aids, although uniform gain was applied to the loudspeakers to raise the target level to 30 dB SL re: speech identification threshold in quiet.

| Measure | HL Group | Unaided, low reverb | Unaided, high reverb | Bilateral aided, low reverb | Bilateral aided, high reverb | Unilateral aided, low reverb | Unilateral aided, high reverb |
| --- | --- | --- | --- | --- | --- | --- | --- |
| T/M at 0° | Younger | 4.8 (1.5) | 4.9 (2.3) | 4.8 (1.7) | 4.6 (1.9) | 4.5 (1.5) | 4.8 (2.3) |
| T/M at 0° | Older | 6.4 (2.2) | 6.3 (1.8) | 5.5 (1.1) | 6.5 (1.5) | 5.4 (1.2) | 6.1 (1.4) |
| T/M at ±90° | Younger | −0.1 (3.5) | 1.9 (4.0) | 0.4 (3.3) | 1.9 (4.0) | 1.3 (3.6) | 3.7 (3.5) |
| T/M at ±90° | Older | 3.5 (4.4) | 4.9 (2.3) | 3.7 (3.9) | 5.9 (3.3) | 4.2 (3.5) | 6.0 (3.0) |
| SRM | Younger | 4.9 (2.5) | 3.0 (2.1) | 4.4 (2.3) | 2.7 (2.5) | 3.2 (2.9) | 1.1 (1.7) |
| SRM | Older | 2.9 (2.7) | 1.4 (1.9) | 1.8 (2.9) | 0.6 (2.4) | 1.3 (2.8) | 0.1 (2.1) |

Spatial benefit

The benefit of spatial separation between talkers was measured on an individual listener basis by subtracting the T/M at threshold when the talkers were spatially separated from the T/M at threshold when the talkers were colocated. This gives the amount of SRM for an individual listener and was calculated for each aided listening condition in both room conditions. Since there was no difference in SRM between the right and left aided conditions in either room condition (paired t-tests, BARE: t = 0.19, p = 0.849; PLEX: t = 1.52, p = 0.144), data were collapsed across ear condition to simplify group comparisons between the unilateral aided condition and the other listening conditions.

The amount of SRM was used as the dependent variable in statistical analysis by repeated-measures ANOVA. The within-subjects factors were room reverberation (BARE/PLEX) and listening condition (unaided, bilateral aided, unilateral aided); the between-subjects factor was listener age (younger/older). The unaided data were from the Marrone et al. (2008b) study. There were significant main effects of reverberation [F (1, 18) = 29.24, p < 0.001] and listening condition [F (2, 36) = 10.9, p < 0.001]. The effect of listener age approached but did not reach statistical significance [F (1, 18) = 4.16, p = 0.056] and therefore will not be considered as a separate factor in SRM in the discussion below. There were no statistically significant interactions. Post-hoc pairwise comparisons with a Bonferroni correction between the listening conditions revealed that there was not a significant difference (p = 0.26) in SRM between the bilateral aided condition and the unaided condition (at a similar sensation level) from Marrone et al. (2008b). Listeners showed a small but statistically significant advantage (1 dB on average) in SRM when using two aids instead of one (p = 0.028). There was also a slight but significant difference between listening with one hearing aid and the unaided condition (p = 0.001), where listening with one aid was 1.6 dB worse on average.
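
A minimal sketch of the post-hoc step described above, using hypothetical per-listener SRM values; the omnibus mixed-design ANOVA itself would require a dedicated statistics package, so only the Bonferroni-corrected pairwise comparisons are shown here.

```python
from itertools import combinations
from scipy import stats

# Minimal sketch (hypothetical SRM values in dB, not the study data): Bonferroni-corrected
# pairwise comparisons of SRM across the three listening conditions, paired within listeners.
srm_db = {
    "unaided":    [5.1, 2.0, 3.4, 0.8, 4.2],
    "bilateral":  [4.6, 1.5, 3.9, 0.2, 3.8],
    "unilateral": [2.9, 0.4, 2.1, -0.5, 2.6],
}

pairs = list(combinations(srm_db, 2))
alpha_corrected = 0.05 / len(pairs)            # Bonferroni correction for 3 comparisons

for a, b in pairs:
    t, p = stats.ttest_rel(srm_db[a], srm_db[b])   # paired comparison within listeners
    print(f"{a} vs {b}: t = {t:.2f}, uncorrected p = {p:.3f}, "
          f"significant at corrected alpha ({alpha_corrected:.4f}): {p < alpha_corrected}")
```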

On an individual subject basis, there was a moderately strong and statistically significant correlation between the amount of SRM in the unaided condition and the amount of SRM in the bilateral aided condition in both the low-reverberant room (r = 0.74, p < 0.001) and in the more reverberant room (r = 0.68, p < 0.001). This is illustrated in Figure 1, showing individual listener data for the amount of SRM in the unaided condition from Marrone et al. (2008b) along the abscissa and SRM in the bilateral aided condition along the ordinate for both room conditions (in separate panels). The dashed line represents where the values would fall if the amount of SRM was equivalent in both listening conditions. Fourteen of the 20 listeners had less than a 2 dB difference between the bilateral aided condition and the unaided condition at an equivalent sensation level. There was not a relationship between the listener’s threshold in quiet (estimate of the amount of hearing loss) and the difference in performance between the bilateral aided condition and the unaided condition in the low-reverberation room (r = −0.04, p = 0.88). The mild inverse correlation in the more reverberant room did not reach statistical significance (r = −0.43, p = 0.062).

Figure 1.

A comparison of bilateral aided SRM and unaided SRM from Marrone et al. (2008b). Results were obtained at a comparable sensation level on an individual listener basis. The dashed line along the diagonal illustrates equivalent SRM in the two conditions. The solid line represents the best fit to the data.

Given that there was not a significant difference in spatial benefit observed in the bilateral aided condition and the unaided condition at a comparable sensation level, the question arose whether the uniform gain provided at the input to the loudspeakers was similar to the gain applied by the hearing aids. An analysis of the difference in level at the eardrum between the two conditions was performed. A comparison of the long-term average spectrum for the Frye digital speech signal used in the real ear measurements and the CRM stimuli used in the experiment revealed that the spectra were quite similar and in particular, had the same high-frequency roll-off (6 dB per octave after 1000 Hz). Consequently, the real-ear measurements were used to estimate the frequency response in the ear at the target levels used in the two listening conditions. The real-ear unaided response (REUR) at the target level used in the unaided condition was subtracted from the real-ear aided response (REAR) at the target level used in the bilateral aided condition. For all listeners, the aided levels were higher than those in the unaided condition for frequencies above 1000 Hz, meaning that the hearing aids provided frequency-specific improvements in audibility over the uniform gain provided by the amplifier and loudspeaker in the unaided condition.

To test the hypothesis that the amount of SRM could be related to the additional high-frequency amplification in the aided condition, a correlation analysis was conducted. In order to have a single value to use as an estimate of the difference in high-frequency response between the conditions (“high-frequency benefit”), the real-ear unaided response values at the target level for 2 and 3 kHz for each ear were averaged and subtracted from the average real-ear aided response values at the target level for 2 and 3 kHz for each ear. The correlation was stronger for the more reverberant room, but neither reached statistical significance (BARE: r = 0.2, p = 0.393; PLEX: r = 0.43, p = 0.058).
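
The "high-frequency benefit" metric and the correlation step can be restated in a few lines. The values below are hypothetical placeholders, not the study data; the sketch is only meant to show the arithmetic.

```python
from scipy.stats import pearsonr

# Minimal sketch (hypothetical values): high-frequency benefit = real-ear aided response
# (REAR) averaged over 2 and 3 kHz minus the real-ear unaided response (REUR) averaged
# over the same frequencies, then correlated with each listener's SRM.

def high_frequency_benefit(rear_2k, rear_3k, reur_2k, reur_3k):
    """Average aided minus average unaided response at 2 and 3 kHz (dB)."""
    return (rear_2k + rear_3k) / 2 - (reur_2k + reur_3k) / 2

# Hypothetical per-listener values (dB), purely to make the example runnable.
hf_benefit = [high_frequency_benefit(34, 30, 14, 12),
              high_frequency_benefit(28, 25, 15, 13),
              high_frequency_benefit(22, 18, 13, 11),
              high_frequency_benefit(30, 27, 16, 14)]
srm_db = [4.5, 3.0, 1.2, 2.6]                  # hypothetical SRM values for the same listeners

r, p = pearsonr(hf_benefit, srm_db)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```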

Figure 2 shows individual data for the unilateral-bilateral aided comparison for both room conditions. Each listener has two points in each panel; the right aided values are represented by circles and the left aided values are represented by crosses. In these panels, note that some listeners are still able to achieve a small amount of SRM with one hearing aid. The amount of spatial release in the unilateral aided condition was correlated with that in the bilateral aided condition in both room conditions (BARE: r = 0.74, p < 0.001; PLEX: r = 0.67, p < 0.001). There was considerable inter-individual variability in the amount of unilateral aided SRM (ranging from −2.6 to 7.9 dB in BARE and from −2 to 4 dB in PLEX). One possible explanation for this was that the amount of unilateral aided SRM was dependent upon how audible the target was in the unaided ear. This was explored in an analysis presented in Figure 3, where the amount of SRM in the unilateral aided condition is presented as a function of the sensation level of the target in the unaided ear (re: unaided threshold in quiet). Data are shown for the low-reverberant condition; results for the more reverberant room were equivalent. Individual values for the right aided condition are represented as circles and the crosses represent the left aided condition. The square with error bars represents the mean and standard deviation for the normal-hearing group listening with an earplug and earmuff on one ear (n = 36 ears). There is a horizontal dashed line drawn at 0 dB SRM for reference and the solid line represents the best-fit regression line to the data. There was a moderately strong correlation between the amount of unilateral aided SRM and the sensation level of the target in the unaided ear (r = 0.62, p < 0.001).

Figure 2.

The relationship between the amount of SRM in the bilateral aided condition and the amount of release in the unilateral aided conditions. The dashed line along the diagonal represents performance that is equivalent in the two conditions. The solid line represents the best-fit for the data.

Figure 3.

The amount of SRM in the unilateral aided condition as a function of the sensation level of the stimuli in the unaided ear. The square symbol represents the mean performance for the normal-hearing listeners when using an earplug and earmuff to occlude one ear and the error bars represent ±1 standard error. The solid line represents the best-fit line to the data and the dashed line is a reference line to illustrate 0 dB SRM.

To investigate whether there was a relationship between listener age and unilateral SRM, additional correlation analyses were performed. There was not a correlation between unilateral aided SRM and listener age in either room condition (BARE: r = −0.28, p = 0.24; PLEX: r = −0.25, p = 0.28). There was also not a correlation between age and the difference in the amount of SRM in the bilateral and unilateral aided conditions in either room (BARE: r = −0.18, p = 0.45; PLEX: r = −0.34, p = 0.14).

Discussion

The purpose of the current study was to examine the benefit of spatial separation between multiple talkers in a sample of experienced hearing aid users wearing their personal hearing aids. On average, the amount of SRM with bilateral hearing aids was slightly less than, but not significantly different from, the SRM obtained without hearing aids at a comparable sensation level. This result should not be interpreted as implying that the hearing aids do not provide benefit in this type of listening situation, since the presentation level in the unaided condition was equivalent to providing a significant amount of uniform amplification and is therefore not representative of unaided listening in daily life. There was slightly, but significantly, less SRM for the unilateral aided condition as compared to either the bilateral aided condition or the unaided condition.

Overall, the listeners’ hearing aids did provide additional high-frequency gain as compared to the relatively uniform gain provided by the loudspeakers in the unaided condition. However, this was not related to the amount of SRM observed, although there was a trend toward improved spatial benefit in the more reverberant room with more high-frequency gain (and presumably better audibility). The measured high-frequency gain at the listeners’ preferred hearing aid settings was often lower than a widely-used prescriptive target (NAL-RP) that was calculated based on their current hearing thresholds. It is therefore possible that the performance observed in this group, while representative of their current hearing aid use, may not reflect their best possible performance.

Although we might have hoped that bilateral amplification would increase SRM by improving perceptual segregation based on improved high-frequency audibility, it could also have had the opposite effect if the two aids altered important interaural timing differences or were poorly matched such that integration of information across the ears was adversely affected. Instead, the current result is somewhat encouraging and suggests that improvements in hearing aid design, fitting, and use may lead to greater benefits. Furthermore, Neher et al. (2007) have recently found that some listeners can achieve aided performance benefits in multitalker spatially separated conditions through training. These possible benefits, acoustic and otherwise, would only be realized, though, if the fundamental limitation on SRM is not determined by the degree and nature of the hearing impairment. It remains unclear why listeners with hearing loss do not benefit more from spatial separation between multiple talkers. In comparison to the normal-hearing listeners in Marrone et al. (2008b), bilateral aided spatial benefit is around 8 dB less on average. In this task, there is greatly reduced opportunity to benefit from “better ear listening” (cf. Marrone et al., 2008a). We speculate that under these conditions, SRM is reduced in listeners with hearing loss due to poorer perceptual segregation of sources and/or a reduced ability to focus attention on a point in space. It may also be limited by increased energetic masking. Consequently, it may be that the best spatial benefit that can occur with hearing aid fittings is to preserve whatever SRM is present without hearing aids, unless the amplification provided can strengthen access to perceptual segregation cues.

The symmetric placement of the maskers eliminates the complication of whether a single hearing aid is on the acoustically better or poorer ear. Listeners were able to achieve a small amount of spatial release with one hearing aid and performance with one aid was correlated with performance with two aids. Overall, the unilateral aided results suggest that listeners are using input from the unaided ear to obtain a small binaural benefit. This was in contrast to the “monaural” control condition, in which normal-hearing listeners with one ear occluded by an earplug and earmuff did not show SRM. Several other studies have shown binaural benefit with interaural asymmetries in the input signals (MacKeith and Coles, 1971; Festen and Plomp, 1986; McCullough and Abbas, 1992). Of particular relevance, Festen and Plomp (1986) found that listeners were able to use input from the unaided ear when only one ear was aided, particularly when the listener’s PTA was less than 50 dB HL such that the noise level was always above threshold in the unaided ear. Two aids were better than one when the noise was on the same side as the hearing aid and the listener’s PTA was greater than 50 dB HL. In the current data, there was a positive correlation between the sensation level of the target in the unaided ear and unilateral SRM. Stated differently, this means that the degree to which listeners are able to use information from two ears with only one hearing aid is dependent on the amount of hearing loss in the unaided ear. Similarly, an interaction between hearing level and the effect of unilateral versus bilateral amplification has been reported in studies of horizontal localization. For example, Byrne et al. (1992) found that localization is relatively good when listeners are fit either unilaterally or bilaterally for mild hearing losses up to 50 dB HL, but that poorer localization accuracy was apparent at greater degrees of loss for unilateral fittings than for bilateral fittings.

Overall, the trends in the current data are the same as those observed in the study of spatial separation between speech and a single speech-shaped noise masker by Festen and Plomp (1986). When comparing threshold T/Ms, those in the Festen and Plomp study were lower than in the current study, likely as a result of differences in the number and type of masking sources used. Despite these differences, it is interesting to note that the interactions between spatial benefit and listening condition were similar between the two studies. Finally, listeners in both studies used their own hearing aids at their preferred settings. Much like the current findings, Festen and Plomp found that, on average, the listeners chose a level of applied gain that did not necessarily compensate for their hearing loss in some situations. In fact, they had several listeners who used an average gain close to zero.

Based in part on the results of Novick et al. (2001), it was hypothesized that the bilateral advantage might have been larger in this study in the more reverberant condition. However, there was no difference in bilateral advantage between the two room conditions tested here. In the Novick et al. study, ten listeners with mild-to-moderate sensorineural hearing loss were tested on the hearing in noise test (HINT) and speech in noise (SPIN) test in two room conditions, an anechoic room and a reverberant classroom (reverberation time listed at 0.67 sec.), while listening with hearing aids (Oticon DigiFocus ITE). They found no difference between unilateral and bilateral fittings on the HINT in either room or on the SPIN in the anechoic chamber. However, in the reverberant room, listeners performed significantly better on the SPIN test with two hearing aids as compared to one. They speculated that this was because the sound field is asymmetric in the reverberant room, so that cues at one ear are not the same as those at the other ear. Specifically, since the multitalker babble in the SPIN was not spectrally matched to the target talker, they hypothesized that the listener might be able to make use of small differences between the target and the babble at the two ears. They concluded that a bilateral fitting may therefore be more important in a reverberant environment. In the current study, SRM was on average approximately 1 dB better in the bilateral aided condition than in the unilateral aided condition for both reverberation conditions. Across individual subjects, this difference ranged from −2.9 to 4.5 dB SRM in the low-reverberant room and from −2.1 dB to 3.0 dB SRM in the more reverberant room. However, there was an interaction between the aided listening condition and room reverberation on T/M at threshold. The best aided T/Ms were obtained for bilateral fittings in the low-reverberant condition and the worst T/Ms at threshold across the aided conditions occurred when listeners used a single hearing aid in the more reverberant room. This result is contrary to the position that listeners would benefit by removing one hearing aid in a noisy environment (e.g., Walden and Walden, 2005). Furthermore, there was not a relationship between increasing age and the difference between SRM with a single hearing aid and SRM with two hearing aids. While the older listeners in the current study tended to show less SRM than their younger counterparts, their performance was essentially the same across listening conditions and the effect of age approached but did not reach statistical significance.

Summary

This study evaluated the effect of amplification on the benefit of spatial separation between multiple talkers in rooms with reverberation. We compared aided performance with one or two hearing aids to an unaided condition (Marrone et al., 2008b) in which the speech was presented at or near an equivalent sensation level. A summary of the results, including data from Marrone et al. (2008b) for comparison, is given in Figure 4. Overall, with or without hearing aids, hearing-impaired listeners showed much less SRM than age-matched normal-hearing listeners. There was no significant difference in SRM between listening to speech with uniform gain applied to the source and listening with personal hearing aid amplification at user-adjusted settings. There was a small but significant advantage in SRM of using bilateral aids as compared to unilateral aids. Further, the amount of SRM obtained with a unilateral aid was strongly related to the amount of spatial release obtained with bilateral hearing aids. Listeners with milder hearing loss were able to make use of binaural information from the unaided ear and achieve a small SRM in the unilateral aided listening condition. In a control condition with normal-hearing listeners, SRM was not observed when one ear was occluded by an earplug and earmuff that greatly reduced the sensation level of the stimuli in that ear. Finally, when the talkers were spatially separated, there was an interaction between aided listening condition and the amount of room reverberation such that the best performance with hearing aids was obtained for a bilateral fitting in low reverberation and the worst performance occurred when listening with a single hearing aid in higher reverberation.

Figure 4.

A summary of the amount of SRM in the aided listening conditions for the two levels of room reverberation (BARE and PLEX). The NH binaural and HL binaural unaided data are replotted from Marrone et al. (2008b) for comparison. Error bars represent ±1 standard error of the mean.

Acknowledgements

The authors wish to thank the listeners who participated in the experiment for their time and effort, as well as Virginia Best, Nathaniel Durlach, Barbara Shinn-Cunningham, H. Steven Colburn, and Melanie Matthies for many helpful discussions of the results.

This work was supported by grants from the National Institute on Deafness and Other Communication Disorders (NIDCD). Portions of the data were presented at the 153rd meeting of the Acoustical Society of America in Salt Lake City, UT, June 4–8, 2007 and at the International Symposium on Auditory and Audiological Research in Helsingør, Denmark, August 29–31, 2007.

Footnotes

1. The term "unaided" as used in this article means that listeners were not wearing their personal hearing aids. However, under many conditions the stimuli were presented through loudspeakers at specified suprathreshold levels, providing essentially uniform "amplification".

References

1. Ahlstrom JB, Horwitz AR, Dubno JR. Spatial benefit of hearing aids. Presented at the International Hearing Aids Research Conference; August 2006; Lake Tahoe, CA.
2. Arbogast TL, Mason CR, Kidd G Jr. The effect of spatial separation on informational and energetic masking of speech. Journal of the Acoustical Society of America. 2002;112:2086–2098. doi: 10.1121/1.1510141.
3. Arbogast TL, Mason CR, Kidd G Jr. The effect of spatial separation on informational masking of speech in normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America. 2005;117:2169–2180. doi: 10.1121/1.1861598.
4. Bolia RS, Nelson WT, Ericson MA, Simpson BD. A speech corpus for multitalker communications research. Journal of the Acoustical Society of America. 2000;107:1065–1066. doi: 10.1121/1.428288.
5. Bregman A. Auditory scene analysis: The perceptual organization of sound. Cambridge, MA: MIT Press; 1990.
6. Broadbent DE. Perception and Communication. London: Pergamon Press; 1958.
7. Bronkhorst A. The cocktail party phenomenon: A review of research on speech intelligibility in multiple-talker conditions. Acustica Acta Acustica. 2000;86:117–128.
8. Byrne D, Parkinson A, Newall P. Hearing aid gain and frequency response requirements for the severely/profoundly hearing impaired. Ear and Hearing. 1990;11:40–49. doi: 10.1097/00003446-199002000-00009.
9. Byrne D, Noble W, LePage B. Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. Journal of the American Academy of Audiology. 1992;3:369–382.
10. Byrne D, Noble W. Optimizing sound localization with hearing aids. Trends in Amplification. 1998;3:51–73. doi: 10.1177/108471389800300202.
11. Cherry EC. Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America. 1953;25:975–979.
12. Colburn HS, Shinn-Cunningham BA, Kidd G Jr, Durlach NI. The perceptual consequences of binaural hearing. International Journal of Audiology. 2006;45:34–44. doi: 10.1080/14992020600782642.
13. Darwin CJ. Spatial hearing and perceiving sources. In: Yost WA, Popper AN, Fay RR, editors. Auditory Perception of Sound Sources. New York, NY: Springer Science + Business Media LLC; 2008. pp. 215–232.
14. Darwin CJ, Carlyon RP. Auditory grouping. In: Moore BCJ, editor. Hearing. London: Academic Press; 1995. pp. 387–424.
15. Dubno JR, Ahlstrom JB, Horwitz AR. Spectral contributions to the benefit from spatial separation of speech and noise. Journal of Speech, Language, and Hearing Research. 2002;45:1297–1310. doi: 10.1044/1092-4388(2002/104).
16. Ebata M. Spatial unmasking and attention related to the cocktail party problem. Acoustical Science and Technology. 2003;24:208–219.
17. Festen JM, Plomp R. Speech-reception threshold in noise with one and two hearing aids. Journal of the Acoustical Society of America. 1986;79:465–471. doi: 10.1121/1.393534.
18. Freyman RL, Helfer KS, McCall DD, Clifton RK. The role of perceived spatial separation in the unmasking of speech. Journal of the Acoustical Society of America. 1999;106:3578–3588. doi: 10.1121/1.428211.
19. Gatehouse S, Noble W. The Speech, Spatial, and Qualities of Hearing Scale (SSQ). International Journal of Audiology. 2004;43:85–99. doi: 10.1080/14992020400050014.
20. Harkins J, Tucker P. An internet survey of individuals with hearing loss regarding assistive listening devices. Trends in Amplification. 2007;11:91–100. doi: 10.1177/1084713807301322.
21. Hawkins DB, Mueller HG. Procedural considerations in probe-microphone measurements. In: Mueller HG, Northern JL, Hawkins DB, editors. Probe Microphone Measurements: Hearing Aid Selection and Assessment. San Diego: Singular Publishing; 1998. pp. 67–89.
22. Kalluri S, Edwards B. Impact of hearing impairment and hearing aids on benefits due to binaural hearing. Proceedings of the International Congress on Acoustics; September 2007; Madrid, Spain.
23. Keidser G, Rohrseitz K, Dillon H, et al. The effect of multi-channel wide dynamic range compression, noise reduction, and the directional microphone on horizontal localization performance in hearing aid wearers. International Journal of Audiology. 2006;45:563–579. doi: 10.1080/14992020600920804.
24. Kidd G Jr, Arbogast TL, Mason CR, Gallun FJ. The advantage of knowing where to listen. Journal of the Acoustical Society of America. 2005a;118:3804–3815. doi: 10.1121/1.2109187.
25. Kidd G Jr, Mason CR, Brughera A, Hartmann WM. The role of reverberation in release from masking due to spatial separation of sources for speech identification. Acustica Acta Acustica. 2005b;91:526–536.
26. Kidd G Jr, Mason CR, Richards VM, Gallun FJ, Durlach N. Informational masking. In: Yost WA, Popper AN, Fay RR, editors. Auditory Perception of Sound Sources. New York, NY: Springer Science + Business Media LLC; 2008. pp. 143–190.
27. Levitt H. Transformed up-down methods in psychoacoustics. Journal of the Acoustical Society of America. 1971;49:467–477.
28. MacKeith NW, Coles RR. Binaural advantages in hearing of speech. Journal of Laryngology and Otology. 1971;85:213–232. doi: 10.1017/s0022215100073369.
29. Marrone N, Mason CR, Kidd G Jr. Tuning in the spatial dimension: Evidence from a masked speech identification task. Journal of the Acoustical Society of America. 2008a;124:1146–1158. doi: 10.1121/1.2945710.
30. Marrone N, Mason CR, Kidd G Jr. Effects of hearing loss and age on the benefit of spatial separation between multiple talkers in reverberant rooms. Journal of the Acoustical Society of America. 2008b;124:3064–3075. doi: 10.1121/1.2980441.
31. McCullough JA, Abbas PJ. Effects of interaural speech-recognition differences on binaural advantage for speech in noise. Journal of the American Academy of Audiology. 1992;3:255–261.
32. Moore BCJ. The choice of compression speed in hearing aids: Theoretical and practical considerations and the role of individual differences. Trends in Amplification. 2008;12:103–112. doi: 10.1177/1084713808317819.
33. Nabelek A, Pickett JM. Monaural and binaural speech perception through hearing aids under noise and reverberation with normal and hearing-impaired listeners. Journal of Speech and Hearing Research. 1974;17:724–739. doi: 10.1044/jshr.1704.724.
34. Neher T, Behrens T, Kragelund L, Petersen AS. Spatial unmasking in aided hearing-impaired listeners and the need for training. Poster presented at the International Symposium on Auditory and Audiological Research; August 2007; Marienlyst, Denmark.
35. Noble W, Gatehouse S. Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the Speech, Spatial, and Qualities of Hearing scale (SSQ). International Journal of Audiology. 2006;45:172–181. doi: 10.1080/14992020500376933.
36. Novick ML, Bentler RA, Dittberner A, Flamme G. Effects of release time and directionality on unilateral and bilateral hearing aid fittings in complex sound fields. Journal of the American Academy of Audiology. 2001;12:534–544.
37. Plomp R, Mimpen AM. Effect of the orientation of the speaker's head and the azimuth of a noise source on the speech-reception threshold for sentences. Acustica. 1981;48:325–328.
38. Shinn-Cunningham BG, Ihlefeld A, Satyavarta, Larson E. Bottom-up and top-down influences on spatial unmasking. Acustica Acta Acustica. 2005;91:967–979.
39. Singh G, Pichora-Fuller K, Schneider B. The effect of age on auditory spatial attention in conditions of real and simulated spatial separation. Journal of the Acoustical Society of America. 2008;124:1294–1305. doi: 10.1121/1.2949399.
40. Van den Bogaert T, Klasen TJ, Moonen M, Van Deun L, Wouters J. Horizontal localization with bilateral hearing aids: without is better than with. Journal of the Acoustical Society of America. 2006;119:515–526. doi: 10.1121/1.2139653.
41. Walden TC, Walden BE. Unilateral versus bilateral amplification for adults with impaired hearing. Journal of the American Academy of Audiology. 2005;16:574–584. doi: 10.3766/jaaa.16.8.6.
42. Yost WA. The cocktail party problem: Forty years later. In: Gilkey RA, Anderson TR, editors. Binaural and Spatial Hearing in Real and Virtual Environments. Mahwah, NJ: Lawrence Erlbaum; 1997. pp. 329–348.