The Journal of the Acoustical Society of America
2019 Apr 22; 145(4): EL284–EL290. doi: 10.1121/1.5097377

Psychometric function slope for speech-in-noise and speech-in-speech: Effects of development and aging

Kathryn A Sobon 1, Nardine M Taleb 2, Emily Buss 1,a), John H Grose 1, Lauren Calandruccio 2
PMCID: PMC6910021  PMID: 31046371

Abstract

Masked sentence recognition was evaluated in normal-hearing children (8.8–10.5 years), young adults (18–28 years), and older adults (60–71 years). Consistent with published data, speech recognition thresholds were poorer for young children and older adults than for young adults, particularly when the masker was composed of speech. Psychometric function slopes were steeper for young children and older adults than for young adults when the masker was two-talker speech, but not when it was speech-shaped noise. Multiple factors are implicated in the age effects observed for speech-in-speech recognition at low signal-to-noise ratios.

1. Introduction

School-age children and older adults tend to perform more poorly than young adults on masked speech recognition tasks, particularly when the masker is itself composed of speech (e.g., Goossens et al., 2017; Wightman and Kistler, 2005). A recent study by Buss et al. (2018) compared the performance of children, young adults, and older adults with normal or near-normal hearing on a masked sentence recognition task, where the masker was either speech-shaped noise or two-talker speech. As expected, speech reception thresholds (SRTs) were elevated in children and older adults compared to young adults, and this age effect was larger for the two-talker speech masker than the speech-shaped noise masker. In addition, there was a non-significant trend for shallower psychometric function slopes in the two-talker speech masker for young adults compared to children and older adults. The purpose of the present report was to evaluate the age effect on psychometric function slopes for speech-in-speech recognition, and to consider factors that might limit the ability to recognize speech in a speech masker. Interest in studying speech-in-speech recognition across the lifespan is fueled by the observation that such recognition is an important aspect of functional hearing ability (Phatak et al., 2018), and yet it is unclear to what extent effects related to development and aging are based on the same factors (e.g., poor stream segregation) or different factors (e.g., immature linguistic knowledge vs age-related auditory deficits).

Previous data from young adult listeners indicate that psychometric functions tend to be shallower for speech-in-speech than speech-in-noise recognition (MacPherson and Akeroyd, 2014), an observation that has been attributed to the increased opportunity for glimpsing in the speech masker. Glimpsing refers to the ability to recognize speech using cues that occur during time/frequency epochs of favorable signal-to-noise ratio (SNR; Cooke, 2006). The presumed relationship between glimpsing and psychometric function slope is illustrated in Fig. 1. This schematic shows a target sentence (The clown had a funny face) at a range of SNRs, indicated via shading. The target is presented in either a speech-shaped noise masker (top) or a two-talker speech masker (bottom), shown in gray. Masking is depicted when the masker visually occludes the target. For the two-talker masker, cues associated with some words are audible at the lowest SNR, and others remain inaudible at the highest SNR. In contrast, for the speech-shaped noise masker, all cues are masked at the two lowest SNRs, but cues associated with all words are audible at the highest SNR. If listeners are able to use the audible speech cues provided in these two contexts, then the psychometric function slope should be steeper for the speech-shaped noise than for the two-talker speech masker.

Fig. 1.

(Color online) Illustration of glimpsing and psychometric function slope. The target sentence, indicated at the top of the panel, is presented at a range of intensities as reflected in waveform shading, from −15 dB SNR (light) to 5 dB SNR (dark). The speech-shaped noise masker (top) and two-talker speech masker (bottom) are shown in gray. Panels at the left show the proportion correct (pc), by keyword, as a function of SNR; these data are hypothetical. In this example, a subset of speech cues is audible at −15 dB SNR for the speech masker but not for the noise masker.

The relationship between psychometric function slope and glimpsing illustrated in Fig. 1 omits many important details necessary to fully understand speech-in-speech recognition, such as auditory filtering, auditory stream segregation, and the number and quality of cues required for recognition. For the speech masker, failure of segregation and selective attention would result in the integration of audible speech cues from both the target and the masker. At negative SNRs this would result in poor performance, but at increasingly positive SNRs the target would dominate the mixture, and performance would improve irrespective of glimpsing ability. The reduced importance of glimpsing at high SNRs could result in rapid improvements in performance around 0 dB SNR, particularly for poor performers.

The current study tested the hypothesis that the psychometric function for speech-in-speech recognition is steeper for children and older adults than for young adults, a trend that was observed by Buss et al. (2018), but which did not reach significance in that dataset. Data from children and young adults are a subset of those previously reported by Miller et al. (2018), where the focus was on the relationship between speech recognition and reading ability. That study noted that children had steeper functions than adults in the two-talker masker, but did not evaluate the relationship between slope and SRT, and did not offer any hypotheses regarding the factors responsible for this age effect. Data for older adults were collected specifically for the current report. Whereas the study of Buss et al. (2018) was designed primarily to evaluate age effects on SRTs as a function of semantic context, several aspects of the current study design increase the likelihood of observing a significant difference in slope. The present study used a larger number of sentences to assess performance in each masker (64 sentences compared to 30 sentences), and children were clustered in a relatively restricted age range (n = 42, 8–10 years compared to n = 42, 5–16 years), precluding the need to account for auditory development within the child group.

2. Methods

There were three groups of listeners: 42 school-age children (21 females, 8.8–10.5 years, mean 9.2 years), 15 young adults (10 females, 18–28 years, mean 21 years), and 12 older adults (9 females, 60–71 years, mean 64 years). All listeners were native American-English speakers. Children and young adults had pure-tone detection thresholds of 25 dB hearing level (HL) or less bilaterally at octave frequencies 250–8000 Hz. Older adults met this criterion in their better ear, but had thresholds up to 45 dB HL in the contralateral ear. Older adults in the present study also received a passing score of 26 or better on the Montreal Cognitive Assessment (maximum score of 30; Nasreddine et al., 2005). Forty-nine older adults were screened in order to identify the 12 who met the inclusion criteria.

Target speech was Bamford–Kowal–Bench sentences (Bench et al., 1979) produced by a female talker. The masker was either two-talker speech, composed of two other females reading separate passages from Jack and the Beanstalk, or a spectrally matched speech-shaped noise (Calandruccio et al., 2014). The masker turned on 500 ms before the target sentence and turned off 500 ms after the end of the target sentence. The masker level was 65 dB sound pressure level, and the target level was adjusted to control the SNR. Prior to testing in each masker, two sentences were presented at an SNR of +10 dB to familiarize the listeners with the task and the target voice. The listener was asked to repeat back the target sentence, and a researcher scored each keyword as correct or incorrect. The SNR was adjusted in two interleaved adaptive tracks, both following a one-down, one-up stepping rule, but with different criteria for a correct response. One track required one or more keywords correct to count the sentence as correct, whereas the other track required three or more keywords correct. Each of the two interleaved tracks contained 32 sentences and a mean of 116 keywords.
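The behavior of a one-down, one-up track like the ones described above can be sketched with a simple simulation. This is an illustrative sketch only: the simulated listener, whose sentence-level accuracy is assumed to follow a logistic function of SNR, the 2-dB step size, and the starting SNR are all hypothetical choices, not values taken from the study.

```python
import math
import random

def simulate_track(alpha, beta, n_sentences=32, start_snr=10.0, step=2.0, seed=0):
    """Simulate a one-down, one-up adaptive track for a hypothetical listener.

    The probability of a 'correct' sentence at a given SNR is modeled as a
    logistic: p = 1 / (1 + exp(-(snr - alpha) / beta)). A one-down, one-up
    rule converges on the 50% point of whatever response rule defines 'correct'.
    """
    rng = random.Random(seed)
    snr = start_snr
    reversals, last_dir = [], None
    for _ in range(n_sentences):
        p = 1.0 / (1.0 + math.exp(-(snr - alpha) / beta))
        correct = rng.random() < p
        direction = -1 if correct else +1  # down after correct, up after incorrect
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)
        last_dir = direction
        snr += direction * step
    # Threshold: mean of reversals after the first four (as in footnote 1)
    usable = reversals[4:]
    return sum(usable) / len(usable) if usable else snr

est = simulate_track(alpha=-6.0, beta=3.0)
```

Because each track scores the same responses against a different keyword criterion (one vs three keywords correct), the two tracks converge on different points along the underlying word-level psychometric function.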

The reason for using two tracks, one with a lax criterion and the other with a strict criterion, was to ensure that performance spanned a range of keyword recognition levels. For these stimuli and procedures, the two tracks converged on SNRs associated with approximately 30% and 55% correct, respectively.1 At the end of each threshold estimation run, word-level data were fitted with a logit function defined as y = 1/{1 + exp[−(x − α)/β]}, where α is the SNR associated with 50% correct, β is the slope parameter, x is the level in dB SNR, and y is the proportion of words correct. The order of test conditions and the selection of sentence lists were randomized for each listener, with the caveat that listeners never heard the same sentence twice. The experiment was controlled by a custom MATLAB script. Stimuli were played out of a soundcard (M-Track 2×2; M-Audio, Cumberland, RI) and presented diotically over headphones (HD25; Sennheiser, Wedemark, Germany). Testing was conducted in a quiet room.
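A minimal sketch of such a word-level fit, assuming the logit form y = 1/{1 + exp[−(x − α)/β]}. The data points below are hypothetical, and the brute-force grid search is a simple stand-in for whatever least-squares or maximum-likelihood routine was actually used.

```python
import math

def logit(x, alpha, beta):
    """Logit psychometric function: alpha = SNR at 50% correct;
    beta = slope parameter (larger beta -> shallower function)."""
    return 1.0 / (1.0 + math.exp(-(x - alpha) / beta))

# Hypothetical word-level data: SNR (dB) and proportion of keywords correct
snr = [-12.0, -9.0, -6.0, -3.0, 0.0, 3.0]
pc  = [0.05, 0.15, 0.42, 0.71, 0.90, 0.97]

def fit_logit(snr, pc):
    """Least-squares fit by brute-force grid search over (alpha, beta)."""
    best = (float("inf"), None, None)
    for a10 in range(-120, 21):       # alpha from -12.0 to 2.0 in 0.1-dB steps
        for b10 in range(5, 61):      # beta from 0.5 to 6.0 in 0.1 steps
            a, b = a10 / 10.0, b10 / 10.0
            err = sum((logit(x, a, b) - p) ** 2 for x, p in zip(snr, pc))
            if err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

alpha, beta = fit_logit(snr, pc)
```

For these hypothetical data the fitted α falls near −5 dB SNR and β near 2, i.e., a relatively steep function of the kind observed for the noise masker.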

Effects of age group and masker type were evaluated using a regression model that accommodated different group variances and correlations for within subject variables. The model was implemented in R (Pinheiro et al., 2016; R Core Team, 2016), and results were evaluated using F-tests. Simple effects were evaluated with two-tailed Welch t-tests with Bonferroni correction. The association between SRT (α) and slope (β) was evaluated using Pearson Correlation. A log transformation was applied to estimates of slope prior to statistical analysis to normalize variance across age groups.

Speech cue audibility was characterized using the extended speech intelligibility index (ESII; Rhebergen and Versfeld, 2005). This model computes the average speech intelligibility index over sequential temporal segments of the stimulus, characterizing effects related to variability in SNR across time and frequency. The ESII was computed for critical bandwidths and the speech-in-noise band importance function, as implemented in R using the SII package (Warnes, 2018). The ESII values reported below are based on 30-s stimulus samples. This model does not incorporate effects related to forward masking (Rhebergen et al., 2006) or masker-dependent differences in the weights applied to different cues (Song et al., 2018), but it is thought to provide a good first-order approximation of the audibility of cues that support speech recognition. Preliminary analyses of the stimuli were qualitatively consistent with the illustration in Fig. 1; the ESII was higher for the two-talker speech masker than the speech-shaped noise masker at low SNRs (0.39 and 0.14 at −10 dB SNR), but values converged at higher SNRs (0.88 and 0.79 at 10 dB SNR).
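The segment-averaging idea behind the ESII can be illustrated with a deliberately simplified sketch. This is a single-band toy with hypothetical short-time SNRs, not the full multi-band model with its band importance function; its only purpose is to show why a fluctuating masker yields a higher index than a steady masker at a comparable overall level.

```python
def band_audibility(snr_db):
    """Standard SII-style audibility mapping for one band:
    0 below -15 dB SNR, 1 above +15 dB SNR, linear in between."""
    return min(1.0, max(0.0, (snr_db + 15.0) / 30.0))

def esii_like(segment_snrs_db):
    """Average segment-wise audibility (single-band simplification of
    the segment averaging performed by the ESII)."""
    return sum(band_audibility(s) for s in segment_snrs_db) / len(segment_snrs_db)

# Hypothetical short-time SNRs (dB) for a target near -10 dB overall SNR:
steady      = [-10.0] * 8                   # steady noise masker: no dips
fluctuating = [-25.0, 5.0, -25.0, 5.0] * 2  # fluctuating masker with dips

steady_idx = esii_like(steady)       # (-10 + 15)/30 = 0.167 in every segment
fluct_idx  = esii_like(fluctuating)  # dips contribute 0.667, peaks 0: mean 0.333
```

The fluctuating masker scores roughly twice the audibility index of the steady masker here, mirroring the direction of the ESII values reported above for the two-talker speech vs speech-shaped noise maskers at −10 dB SNR.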

3. Results

Psychometric function fits to the word-level data were quite good, with median values of r2 ranging from 0.89 to 0.97 across the six group-by-masker combinations; the minimum value for any individual in any condition was r2 = 0.65. Figure 2 shows the psychometric function slope plotted as a function of SRT for individual listeners, with results for speech-shaped noise on the left and those for the two-talker speech masker on the right. Symbols reflect listener group, and marginal boxplots show the distributions of scores for each group.

Fig. 2.

(Color online) Psychometric function slope plotted as a function of the SRT in dB SNR for speech-shaped noise (left panel) and two-talker speech (right panel) maskers. Symbols reflect age group, as defined in the legend. Boxplots indicating the distribution of SRT and slope by group are shown below and to the left of each scatterplot, respectively. Boxes indicate the 25th, 50th, and 75th percentiles, whiskers span the 10th to 90th percentiles, plus signs indicate minimum and maximum values, and filled symbols show the mean SRT and the median slope values.

As expected, mean SRTs expressed in dB SNR were lower for young adults than either children or older adults. For the speech-shaped noise masker, average values were −7.1 dB for children, −8.2 dB for young adults, and −7.3 dB for older adults. For the two-talker speech masker, mean SRTs were −1.2 dB for children, −6.2 dB for young adults, and −3.3 dB for older adults. Regression analysis indicates a significant main effect of masker type (F1,132 = 17.86, p < 0.001), a significant main effect of age group (F2,132 = 21.38, p < 0.001), and a significant interaction between masker type and group (F2,132 = 37.39, p < 0.001). For the speech-shaped noise masker, SRTs were significantly higher for children and older adults than for young adults (p ≤ 0.037), but they did not differ significantly for children and older adults (p = 1). For the two-talker speech masker, SRTs were higher for children than older adults (p < 0.001), and SRTs were higher for older adults than for young adults (p < 0.001).

The distributions of slope for each age group and masker are shown in the marginal boxplots along the ordinate of the two panels of Fig. 2. Large values of β represent shallower function slopes. For example, an increase from 25% to 75% correct corresponds to an increase in SNR of 4.4 dB for β = 2 and 8.8 dB for β = 4. For the speech-shaped noise masker, slope was relatively consistent across age groups, with a geometric mean of β ≈ 1.8 for all three groups. For the two-talker speech masker, however, slopes were steeper for children (β = 1.9) and older adults (β = 2.2) than for young adults (β = 3.7). For slope, regression analysis indicates a significant main effect of masker type (F1,132 = 36.73, p < 0.001), a significant main effect of group (F2,132 = 3.44, p = 0.035), and a significant interaction between masker type and group (F2,132 = 18.11, p < 0.001). For the speech-shaped noise masker, slope did not differ significantly between any of the three groups (p ≥ 0.184). For the two-talker masker, slopes were steeper for children and older adults than for young adults (p ≤ 0.024), but they did not differ significantly for children and older adults (p = 1).
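The 4.4- and 8.8-dB figures follow directly from the logit form y = 1/{1 + exp[−(x − α)/β]}: solving for the SNRs at 25% and 75% correct gives a span of 2β ln 3. A quick check:

```python
import math

def span_25_to_75(beta):
    """SNR increase (dB) needed to go from 25% to 75% correct for the logit
    y = 1 / (1 + exp(-(x - alpha) / beta)). Inverting the logit gives
    x(p) = alpha + beta * ln(p / (1 - p)), so the span is 2 * beta * ln(3)."""
    return 2.0 * beta * math.log(3.0)

span2 = round(span_25_to_75(2.0), 1)  # 4.4 dB, as in the text
span4 = round(span_25_to_75(4.0), 1)  # 8.8 dB
```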

When all three groups are analyzed together there is no evidence of an association between SRT and psychometric function slope for the speech-shaped noise masker (r = 0.09, p = 0.456), but there is a robust correlation for the two-talker speech masker (r = −0.73, p < 0.001). One question of interest is whether this association between SRT and slope for the two-talker speech masker is based entirely on the age group effects observed above, or whether it also exists within listener groups. Analyses by group indicated no significant correlation between SRT and slope in the two-talker masker for children (r = −0.08, p = 0.607), a non-significant trend for an association in older adults (r = −0.54, p = 0.070), and a significant association in young adults (r = −0.63, p = 0.012). These results indicate that better-performing adults (those with lower SRTs) tended to have shallower psychometric function slopes, consistent with better ability to use low-level speech cues coincident with epochs of favorable SNR.

The ability to utilize audible speech cues in the two masker conditions was evaluated by comparing performance with respect to the ESII. The mean ESII at SRT was more similar across listener groups for the speech-shaped noise masker (0.23 for children, 0.20 for young adults, and 0.23 for older adults; standard deviation = 0.02–0.03) than for the two-talker speech masker (0.64 for children, 0.50 for young adults, and 0.58 for older adults; standard deviation = 0.03–0.06). This indicates that the ability to make use of audible speech cues depended on both masker type and listener age; if audibility were the dominant factor determining performance in the two maskers, then the ESII at threshold should be constant across maskers. The contribution of audibility to the group differences observed for the two-talker speech masker was assessed by predicting each listener's SRT for the two-talker speech masker from their ESII at threshold for the speech-shaped noise masker, then comparing those predictions to the observed SRTs. The results appear in Fig. 3, which shows the distributions of prediction error in dB, plotted by age group. Predictions based on the ESII at threshold in the noise masker underestimated SRTs for the two-talker speech masker for all listeners, but the magnitude of prediction error differed across age groups (F2,67 = 32.50, p < 0.001); on average, prediction error was 1.6 dB smaller for young adults than older adults (p = 0.007), and 1.7 dB smaller for older adults than children (p < 0.001). This result is consistent with the idea that young adults are more efficient than older adults and children at using audible speech cues to identify speech, even after accounting for individual differences in the audibility required to recognize speech in the speech-shaped noise masker.

Fig. 3.

(Color online) Distribution of prediction error in dB (observed-predicted) for SRTs in the two-talker speech masker. Difference scores are plotted separately for the three listener groups, as indicated on the abscissa. Boxes indicate the 25th, 50th, and 75th percentiles, and whiskers span the 10th to 90th percentiles.

4. Discussion

One goal of the present study was to test the hypothesis that young children and older adults have steeper psychometric functions for speech-in-speech recognition compared to young adults. Masked SRTs were higher for children and older adults than for young adults, and this effect was more pronounced in the two-talker speech masker than the speech-shaped noise masker. These results replicate published data (Buss et al., 2018; Goossens et al., 2017; Wightman and Kistler, 2005). The psychometric function slope for the speech-shaped noise masker did not differ significantly across age groups, but functions for the two-talker speech masker were steeper for children and older adults than for young adults. Psychometric function slopes for speech-in-speech recognition observed in the current study were broadly similar to those observed by Buss et al. (2018) for semantically meaningful target sentences. The children in that study ranged from 5 to 16 years of age; for comparison with the present data on 8- to 10-year-olds, a linear fit was used to estimate the psychometric function slope at 9 years of age in the earlier dataset. For the data of Buss et al. (2018), group-mean estimates of slope were β = 3.1 (9 years), β = 2.8 (older adults), and β = 4.3 (young adults). For comparison, group-mean estimates of slope in the present study were β = 1.9 (children), β = 2.2 (older adults), and β = 3.7 (young adults).

The present study broadly corroborates the trend in Buss et al. (2018) that both children and older adults have steeper psychometric function slopes for speech-in-speech recognition than young adults. This result is consistent with the idea that both groups are less adept than young adults at recognizing speech based on sparse glimpses of target speech in the context of a two-talker masker (Fig. 1). However, implicating glimpsing in the effects of development and aging does not isolate the particular factor or factors responsible for poor performance for these two age groups. There are a number of possible reasons why children and older adults might not perform as well as young adults under these conditions, including effects related to short-term memory, auditory stream segregation, and selective auditory attention. In addition, there are age-specific factors that could affect children and older adults differently, including linguistic knowledge, extended high-frequency sensitivity, and age-related reductions in neural synchrony, which could be associated with hidden hearing loss.

Comparisons of performance across maskers at comparable ESII suggest that a range of factors contribute to the age effects observed for the two-talker speech masker. The ESII at threshold for speech-shaped noise was used to predict SRTs for the two-talker speech masker. Those predictions severely overestimated sensitivity for all listeners, with observed-predicted differences of 9.0–17.6 dB. However, those predictions also indicated that young adults require less additional audibility to recognize speech in the speech masker than children and older adults, with group differences of 3.3 dB (child vs young adult) and 1.6 dB (older vs young adult). These group differences are consistent with age effects related to segregation and selective attention. In contrast, differences in the ESII at threshold are consistent with group differences related to linguistic knowledge, high-frequency sensitivity, and greater reliance on redundant speech cues.

Although the present results are consistent with the idea that young children and older adults are poorer than young adults at recognizing speech based on sparse speech cues when the task is diotic speech-in-speech recognition, it is unclear whether these results generalize to other stimulus conditions. Speech recognition in a noise masker improves when the masker is amplitude modulated due to the introduction of opportunities for glimpsing. Some studies indicate that children and older adults derive comparable benefit from amplitude modulation of a noise masker compared to young adults (e.g., Fullgrabe et al., 2014; Stuart, 2008), whereas other studies indicate effects of maturation and aging (e.g., Goossens et al., 2017; Grose et al., 2009; Hall et al., 2012). Discrepancies in findings across studies could indicate that age effects are more modest for modulated noise maskers than speech maskers; this might occur if auditory stream segregation and selective attention were less challenging for noise maskers compared to speech maskers. If segregation and selective attention could be facilitated for the two-talker speech masker by the introduction of a binaural difference cue, the prediction would be for lower SRTs and shallower psychometric functions in all three age groups. Work in this area is ongoing.

At a practical level, the finding that psychometric slope differs across listener groups for speech-in-speech recognition highlights the limitations associated with characterizing performance with SRT alone. If psychometric function slopes are not parallel, then age effects documented at one level of performance do not generalize to other levels. For the present dataset, the SRT at 50% correct is 5 dB higher for children than young adults, and it is 2.9 dB higher for older adults than young adults. These group differences are smaller when the SRT is defined at 79% correct, with differences of 2.6 and 0.9 dB, respectively. This example demonstrates the importance of estimating both the SRT and psychometric function slope when evaluating speech-in-speech recognition across the lifespan. The technique used in this study, of adaptively tracking performance at two points along the psychometric function and fitting the results to estimate slope and SRT, appears to be a robust and effective procedure for accomplishing this goal.
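The group differences quoted above at 50% and 79% correct can be reproduced from the group-mean parameters reported in the Results, assuming the logit form used for the fits, y = 1/{1 + exp[−(x − α)/β]}, which inverts to x(p) = α + β ln[p/(1 − p)]:

```python
import math

def srt_at(alpha, beta, pc):
    """SNR (dB) at which the logit psychometric function reaches proportion
    correct pc: x = alpha + beta * ln(pc / (1 - pc))."""
    return alpha + beta * math.log(pc / (1.0 - pc))

# Group-mean (alpha, beta) for the two-talker masker, from the Results section:
groups = {
    "children":     (-1.2, 1.9),
    "young adults": (-6.2, 3.7),
    "older adults": (-3.3, 2.2),
}

# Child vs young-adult gap shrinks from 5.0 dB at 50% correct...
gap_50_child = srt_at(*groups["children"], 0.50) - srt_at(*groups["young adults"], 0.50)
# ...to about 2.6 dB at 79% correct, because the functions are not parallel.
gap_79_child = srt_at(*groups["children"], 0.79) - srt_at(*groups["young adults"], 0.79)
# Older-adult vs young-adult gap at 79% correct is about 0.9 dB.
gap_79_older = srt_at(*groups["older adults"], 0.79) - srt_at(*groups["young adults"], 0.79)
```

At 50% correct the β terms cancel and the gap is just the difference in α; at any other performance level the non-parallel slopes change the apparent size of the age effect, which is the practical point made above.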

Acknowledgments

This work was supported by NIH Grant No. NIDCD R03 DC015074 (L.C.) and Grant No. R01 DC001507 (J.H.G.). Dr. Jacob Oleson provided advice on the statistical analyses. Dr. Lori Leibold, Dr. Ryan McCreery, and three anonymous reviewers provided helpful suggestions on this work.

Footnotes

1. These estimates were obtained by averaging reversals after the first four to estimate threshold in dB SNR, and then using the psychometric function fit to determine the performance level associated with that threshold.

References and links

  • 1. Bench, J., Kowal, A., and Bamford, J. (1979). "The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children," Br. J. Audiol. 13, 108–112. doi: 10.3109/03005367909078884
  • 2. Buss, E., Hodge, S. E., Calandruccio, L., Leibold, L. J., and Grose, J. H. (2018). "Masked sentence recognition in children, young adults, and older adults: Age-dependent effects of semantic context and masker type," Ear Hear. (published online 2018). doi: 10.1097/AUD.0000000000000692
  • 3. Calandruccio, L., Gomez, B., Buss, E., and Leibold, L. J. (2014). "Development and preliminary evaluation of a pediatric Spanish-English speech perception task," Am. J. Audiol. 23, 158–172. doi: 10.1044/2014_AJA-13-0055
  • 4. Cooke, M. (2006). "A glimpsing model of speech perception in noise," J. Acoust. Soc. Am. 119, 1562–1573. doi: 10.1121/1.2166600
  • 5. Fullgrabe, C., Moore, B. C., and Stone, M. A. (2014). "Age-group differences in speech identification despite matched audiometrically normal hearing: Contributions from auditory temporal processing and cognition," Front. Aging Neurosci. 6, 347. doi: 10.3389/fnagi.2014.00347
  • 6. Goossens, T., Vercammen, C., Wouters, J., and van Wieringen, A. (2017). "Masked speech perception across the adult lifespan: Impact of age and hearing impairment," Hear. Res. 344, 109–124. doi: 10.1016/j.heares.2016.11.004
  • 7. Grose, J. H., Mamo, S. K., and Hall, J. W., III (2009). "Age effects in temporal envelope processing: Speech unmasking and auditory steady state responses," Ear Hear. 30, 568–575. doi: 10.1097/AUD.0b013e3181ac128f
  • 8. Hall, J. W., Buss, E., Grose, J. H., and Roush, P. A. (2012). "Effects of age and hearing impairment on the ability to benefit from temporal and spectral modulation," Ear Hear. 33, 340–348. doi: 10.1097/AUD.0b013e31823fa4c3
  • 9. MacPherson, A., and Akeroyd, M. A. (2014). "Variations in the slope of the psychometric functions for speech intelligibility: A systematic survey," Trends Hear. 18, 1–26.
  • 10. Miller, G., Lewis, B., Benchek, P., Buss, E., and Calandruccio, L. (2018). "Masked speech recognition and reading ability in school-age children: Is there a relationship?," J. Speech Lang. Hear. Res. 61, 776–788. doi: 10.1044/2017_JSLHR-H-17-0279
  • 11. Nasreddine, Z. S., Phillips, N. A., Bedirian, V., Charbonneau, S., Whitehead, V., Collin, I., Cummings, J. L., and Chertkow, H. (2005). "The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment," J. Am. Geriatr. Soc. 53, 695–699. doi: 10.1111/j.1532-5415.2005.53221.x
  • 12. Phatak, S. A., Brungart, D. S., Zion, D. J., and Grant, W. (2018). "Clinical assessment of functional hearing deficits: Speech-in-noise performance," Ear Hear. 40, 426–436.
  • 13. Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D., and R Core Team (2016). "nlme: Linear and nonlinear mixed effects models," R package version 3.1-125.
  • 14. R Core Team (2016). "R: A language and environment for statistical computing," R Foundation for Statistical Computing, Vienna, Austria.
  • 15. Rhebergen, K. S., and Versfeld, N. J. (2005). "A Speech Intelligibility Index-based approach to predict the speech reception threshold for sentences in fluctuating noise for normal-hearing listeners," J. Acoust. Soc. Am. 117, 2181–2192. doi: 10.1121/1.1861713
  • 16. Rhebergen, K. S., Versfeld, N. J., and Dreschler, W. A. (2006). "Extended speech intelligibility index for the prediction of the speech reception threshold in fluctuating noise," J. Acoust. Soc. Am. 120, 3988–3997. doi: 10.1121/1.2358008
  • 17. Song, M., Chen, F., Wu, X., and Chen, J. (2018). "A time-weighted method for predicting the intelligibility of speech in the presence of interfering sounds," in 2018 International Conference on Acoustics, Speech, and Signal Processing, Calgary, Alberta.
  • 18. Stuart, A. (2008). "Reception thresholds for sentences in quiet, continuous noise, and interrupted noise in school-age children," J. Am. Acad. Audiol. 19, 135–146. doi: 10.3766/jaaa.19.2.4
  • 19. Warnes, G. R. (2018). "Calculating speech intelligibility index (SII) using R," v 1.0.3.1.
  • 20. Wightman, F. L., and Kistler, D. J. (2005). "Informational masking of speech in children: Effects of ipsilateral and contralateral distracters," J. Acoust. Soc. Am. 118, 3164–3176. doi: 10.1121/1.2082567
