Abstract
Two amplification features were examined using auditory tasks that varied in stimulus familiarity. It was expected that the benefits of certain amplification features would increase as familiarity with the stimuli decreased. A total of 20 children and 15 adults with normal hearing as well as 21 children and 17 adults with mild to severe hearing loss participated. Three models of ear-level devices were selected based on the quality of the high-frequency amplification or the digital noise reduction (DNR) they provided. The devices were fitted to each participant and used during testing only. Participants completed three tasks: (a) word recognition, (b) repetition and lexical decision of real and nonsense words, and (c) novel word learning. Performance improved significantly with amplification for both the children and the adults with hearing loss. Performance improved further with wideband amplification, more so for the children than for the adults. In steady-state noise and multitalker babble, performance decreased for both groups, with little to no benefit from amplification or from the use of DNR. When compared with the listeners with normal hearing, significantly poorer performance was observed for both the children and adults with hearing loss on all tasks, with few exceptions. Finally, analysis of across-task performance confirmed the hypothesis that benefit increased as the familiarity of the stimuli decreased for wideband amplification but not for DNR. However, users who prefer DNR for listening comfort are not likely to jeopardize their ability to detect and learn new information when using this feature.
Keywords: children, adults, hearing loss, word learning, nonword detection, amplification, bandwidth, digital noise reduction
Introduction
The introduction of digital signal processing in hearing aids allowed for the development of amplification features that increased user benefit relative to that of analog devices. For example, wide dynamic-range compression, directional microphone technology, and device connectivity (e.g., FM, Bluetooth) improved hearing-aid function for a large portion of hearing-aid users. Nevertheless, several issues remain and have motivated the further development of specific amplification features. These include the extension of amplification bandwidth and the refinement of digital noise reduction (DNR). However, technological advances in these areas have not been associated with significant improvements in speech perception. For example, some ear-level hearing devices now provide effective bandwidths ≥8 kHz for losses between 60 and 80 dB HL and slightly narrower bandwidths for hearing losses > 80 dB HL (Kimlinger, McCreery, & Lewis, 2015). The benefit of an increased bandwidth for speech perception, however, appears to be minimal. Because early research showed little to no word-recognition improvement for bandwidths extending beyond 4 kHz in listeners with hearing loss (Ching, Dillon, & Byrne, 1998; Hogan & Turner, 1998), the search for benefit turned to other areas including perception in noise or reverberation (Levy, Freed, Nilsson, Moore, & Puria, 2015; Plyler & Fleck, 2006; Turner & Henry, 2002), perception of phonemes, consonants, and vowels (John et al., 2014; Lau, Kuk, Keenan, & Schumacher, 2014; Wolfe et al., 2015), and listener preference (Brennan et al., 2014; Lau et al., 2014). Some benefits have been reported in these areas, most notably a significant preference for high-frequency amplification (Brennan et al., 2014).
Similar improvements in DNR technology were motivated by frequent complaints from individuals with hearing loss regarding their intolerance of background noise while using hearing aids (Kochkin, 2000). As with extended bandwidth, however, little to no improvement in word recognition has been reported with DNR (Bentler, Wu, Kettel, & Hurtig, 2008; Brons, Houben, & Dreschler, 2015; Mueller, Weber, & Hornsby, 2006; Nordrum, Erler, Garstecki, & Dhar, 2006). Hearing-aid users do, however, prefer this feature for listening comfort (Brons et al., 2015; McCreery, Venediktov, Coleman, & Leech, 2012; Ricketts & Hornsby, 2005; Stelmachowicz et al., 2010). Given listener preference for DNR and wideband amplification, it is possible that word-recognition tests are insensitive to the perceptual benefits these features provide.
New Measures for New Hearing Aids
It is often the case that advances in hearing-aid technology proceed more rapidly than the development of the objective and subjective measures needed to assess them. While some researchers have dedicated considerable effort to developing hearing-aid measures and fitting procedures for clinical use (Alexander, 2014; Byrne, Dillon, Ching, Katsch, & Keidser, 2001; Keidser, Dillon, Carter, & O'Brien, 2012; Scollie et al., 2005; Seewald, Moodie, Sinclair, & Scollie, 1999), the development of objective behavioral measures to validate advanced hearing-aid features has received less attention and therefore has proceeded more slowly. Just as advances in hearing-aid technology are increasingly novel, equally novel approaches may be necessary to evaluate these new features, first in a research environment and ultimately in a clinical setting.
A common clinical measure of hearing-aid benefit is word recognition. These tests have been standard components of diagnostic audiometry since phonetically balanced word lists were compiled in the mid-20th century (PAL PB-50, Egan, 1948; W-22, Hirsh et al., 1952; CNC, Peterson & Lehiste, 1962; NU-6, Tillman & Carhart, 1966). While several new word-recognition tests have been developed in recent years, each test contains similar stimuli (see Wilson & McArdle, 2005, for a historical review). That is, word-recognition tests involve the perception of a small set of highly familiar words. But the vocabularies of both children and adults continue to grow beyond these words for many decades (until ∼60 years of age), with variable decline in vocabulary knowledge after the age of 75 years (Verhaeghen, 2003). New entries into the vocabulary include words created to accommodate advances in science, medicine, technology, and social interactions, and just about any new item or concept that needs a unique label. Indeed, the Oxford Dictionary of English is updated annually to include more than 1,000 new entries as well as modifications to the definitions of many existing words. Because vocabulary is essential for effective communication, measures of new-word detection and learning may provide a more comprehensive assessment of the benefit of certain amplification features (Stelmachowicz, Lewis, Choi, & Hoover, 2007; Stelmachowicz et al., 2008; Stelmachowicz, Pittman, Hoover, & Lewis, 2001, 2002; Stelmachowicz, Pittman, Hoover, Lewis, & Moeller, 2004).
Word Recognition and Learning
The experimental tasks used in this study were based on a theoretical framework of word recognition and learning (Pittman & Rash, 2016). This framework is shown in Figure 1 and is a merging of the Neighborhood Activation Model by Luce and Pisoni (1998; shaded boxes) and components thought to be involved in learning new words (open box and dashed lines; Gray, Pittman, & Weinhold, 2014; Leach & Samuel, 2007; Storkel & Lee, 2011). During word learning, the process of configuration integrates input from the acoustic or phonetic pattern activation component (i.e., the sounds within the word) and from the higher level lexical input component (i.e., semantic interpretation of the word derived from the context in which it was perceived). This reciprocal exchange between the higher level lexical information and configuration culminates in a representation of a novel word within the lexicon. Put simply, if an unknown word is detected by the listener, the sounds and the meaning of the word are gleaned from the input, joined together, and added to the vocabulary. This process is repeated each time the word is encountered until a stable acoustic and semantic representation has been achieved. This process has been referred to as “configuration” (Leach & Samuel, 2007), “encoding” (McGregor, 2014; McGregor et al., 2013), or simply “learning” (Storkel, 2015).
Figure 1.
Framework of familiar word recognition and new word learning created by combining the Neighborhood Activation Model (shaded boxes and solid lines; Luce & Pisoni, 1998) and an emerging framework of word learning (open box and dashed lines; Gray et al., 2014; Leach & Samuel, 2007; Storkel & Lee, 2011).
Purpose
This framework guided the creation of tasks that target discrete and increasingly difficult processes involved in word recognition and learning. These tasks include the recognition of known words, the repetition of and lexical decision regarding real and nonsense words, and the rapid learning of nonsense words. The overall purpose of this study was to examine the effect of certain amplification features on the perception of familiar words and the detection and learning of new words. It was hypothesized that if these amplification features provide the listener with a significantly improved acoustic signal, then amplification benefit would be minimal for tasks involving known words and greater for tasks involving unfamiliar words. This hypothesis was tested by comparing the performance of listeners with hearing loss across unaided and aided conditions both with and without the feature under investigation. It was also of interest to compare the aided performance of the listeners with hearing loss to that of their normal-hearing peers to identify areas of residual deficit between groups.
General Method
Participants
A total of 75 individuals participated in this project. They included 15 adults and 20 children with normal hearing as well as 18 adults and 22 children with hearing loss. Listeners with normal hearing participated in one test session, while those with hearing loss participated in one unaided session and up to three sessions involving specific hearing-aid features. No participants were excluded, and only 1 adult and 1 child with hearing loss were lost to follow-up after the first unaided test session, leaving a total of 73 participants. All of the children with hearing loss were current hearing-aid users, and 13 of the 18 adults with hearing loss were current or previous hearing-aid users; the remaining adults had not been fitted with hearing aids prior to enrollment. Table 1 shows the number of participants in each condition by age group and hearing status. Also shown are the age ranges and the male-to-female ratios. The final column shows the number of missing data points relative to the total number of data points (in parentheses) collected for that group (NH: number of participants × 3 tasks; HL: number of participants × 3 tasks × 3 listening conditions). Missing data were the result of procedural errors or refinement of the test procedures at the commencement of the project.
Table 1.
Enrollment, Average Age, and Age Ranges for Each Group and Hearing-Aid Feature.
| Experiment | Age group | Hearing status | n | Mean age (years) | Minimum age (years) | Maximum age (years) | M:F | Missing data |
|---|---|---|---|---|---|---|---|---|
| 1. Bandwidth | Children | NH | 20 | 10.1 | 8 | 12 | 8:12 | 7 (60) |
| | | HL | 18 | 9.5 | 8 | 12 | 4:14 | 1 (162) |
| | Adults | NH | 15 | 57.6 | 50 | 67 | 5:10 | 0 (45) |
| | | HL | 12 | 64.8 | 52 | 78 | 7:5 | 0 (108) |
| 2. Digital noise reduction in steady-state noise | Children | NH | 11 | 10.4 | 8 | 12 | 4:7 | 0 (33) |
| | | HL | 13 | 9.8 | 8 | 12 | 3:10 | 0 (117) |
| | Adults | NH | 7 | 57.4 | 51 | 67 | 2:5 | 2 (21) |
| | | HL | 13 | 66.3 | 52 | 78 | 7:6 | 0 (117) |
| 3. Digital noise reduction in multitalker babble | Children | NH | 9 | 9.8 | 8 | 12 | 4:5 | 1 (27) |
| | | HL | 8 | 9.0 | 8 | 10 | 4:4 | 0 (72) |
| | Adults | NH | 8 | 57.8 | 50 | 66 | 3:5 | 0 (24) |
| | | HL | 8 | 65.5 | 60 | 69 | 5:3 | 0 (72) |
Note. NH = Normal Hearing; HL = Hearing Loss; M = Male; F = Female. Missing data were the result of procedural errors or refinement of the test procedures at the start of the project.
Behavioral hearing thresholds for octave frequencies between 250 and 8,000 Hz were obtained for all participants during the first test session. Figure 2 shows the right- and left-ear hearing thresholds obtained under earphones for the adults (left panels) and the children (right panels) as well as the binaural unaided and aided thresholds obtained in the sound field for each hearing-aid condition. The shaded area in each panel is the minimum and maximum range of hearing thresholds for the normal-hearing peers. The hearing tests were used to confirm normal hearing or to determine candidacy for each hearing-aid feature. For example, listeners with thresholds < 80 dB HL at 4 and 8 kHz were provided wideband amplification, whereas listeners with thresholds > 80 dB HL at these frequencies were not; the latter were nevertheless included in the evaluation of DNR. All listeners were assigned to one of the two DNR conditions (steady-state noise or multitalker babble), although four adults participated in both.
Figure 2.
Average unaided right and left ear hearing thresholds for the adults (left column) and the children (right column) with hearing loss participating in each hearing-aid condition. The shaded areas in each panel are the minimum and maximum hearing thresholds of the normal-hearing peers. Also shown are average (±1 SD) unaided binaural hearing thresholds as well as aided thresholds obtained in each hearing-aid condition and setting.
This study was approved by the institutional review board at Arizona State University. Written consent was obtained from each adult participant, whereas informed assent was obtained from the children with written consent from their parents. Each test session lasted approximately 2 hr. Participants were paid $15 per hour for their participation and were allowed to keep the customized earmolds made for them during the study. No other incentives were provided.
Stimuli
The stimuli for all tasks were recorded at a sampling rate of 44.1 kHz with 16-bit resolution using a microphone (AKG C535 EB) with a flat frequency response from 100 Hz to 10 kHz (±2 dB). All stimuli were produced by the same adult female talker with a standard American-English dialect. The speech samples were digitized, edited into individual .wav files using Adobe Audition v1.5, and equated for RMS level by the experimental software upon presentation. More stimulus lists (eight per task) were created than were required in order to accommodate participants who completed more than one listening or hearing-aid condition; thus, no lists were repeated. Because the lists were not known to be equivalent in difficulty for some tasks, lists were counterbalanced across listening conditions.
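For illustration, the RMS equating step might be implemented as in the following sketch. The file names, target level, and use of the soundfile library are assumptions made for the example, not details reported in the study.

```python
import numpy as np
import soundfile as sf  # assumed I/O library; any .wav reader would do

def equate_rms(signal, target_rms=0.05):
    """Scale a signal so that its RMS level matches a common target."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms)

# Hypothetical stimulus files; each is scaled to the same RMS level so
# that level differences across recordings cannot influence performance.
for path in ["word_001.wav", "word_002.wav"]:
    audio, fs = sf.read(path)  # recordings were 44.1 kHz, 16 bit
    sf.write(path.replace(".wav", "_eq.wav"), equate_rms(audio), fs)
```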
Behavioral Tasks
At each visit, participants completed three behavioral tasks. The first two tasks required verbal responses which were captured with a digital audio recorder (Olympus, WS 801/802) coupled to a head-worn microphone (Shure, WH20) positioned approximately 2 inches from the corner of the speaker’s mouth. Responses were scored as either correct or incorrect by two independent examiners. The third task did not require scoring by examiners. Participants interacted with custom experimental software via a computer monitor.
Word recognition
Twenty-five words from lists 1 to 4 of the Northwestern University NU-6 word recognition test were administered in each hearing-aid condition. These test materials are widely used in clinical settings to assess a patient's perception of familiar words. Participants repeated each word aloud as it was presented. No reinforcement was provided for correct responses. Overall performance was scored in percent correct by an independent examiner based on the recorded responses.
Auditory lexical decision
Twenty-four words were randomly presented to the participant, who repeated each word aloud and then judged whether the word was real or nonsense. Half of the words in each list were real, and the other half were nonsense. Lexical judgments were made by selecting the appropriately labeled button ("real" or "not real") displayed on a computer monitor. Visual reinforcement in a videogame format was provided for correct lexical categorization but not for verbal repetition. A single percent-correct score was calculated for each participant representing the combined accuracy of both the categorical selection and the verbal repetition (see Pittman & Rash, 2016, for a detailed description of the stimuli, task, and scoring).
Rapid word learning
Participants learned the singular and plural forms of three nonsense words associated with three novel images. Each novel image was displayed on a computer monitor in singular and plural forms for a total of six images. For the plural stimuli, the same token of the phoneme /s/ was appended to the end of each word to avoid natural variations in talker production (e.g., minimization, coarticulation). Each of the six words was presented 20 times for a total of 120 randomized trials. The listeners played an interactive game in which they learned to associate the novel words with the correct novel images through a process of trial and error. The interactive game provided visual reinforcement for correct selections only.
Overall learning rate for the six words was expressed in units of speed representing the number of trials necessary to reach criterion performance. A detailed description of the task and analyses can be found in Pittman (2008, 2011). The number of trials to criterion was log transformed and limited to no more than 1,000 (calculated by extrapolating from the function derived from 120 trials). This resulted in a scale of 0 to 3 for which a learning speed of 0 denoted 1,000 trials to criterion (no learning), whereas a learning speed of 3 denoted only 1 trial to criterion (flawless learning).
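The description above is consistent with the transformation speed = 3 − log10(trials to criterion), with trials capped at 1,000. A minimal sketch follows (the extrapolation used to estimate trials to criterion beyond the 120 presented is omitted):

```python
import math

def learning_speed(trials_to_criterion):
    """Convert trials-to-criterion to the 0-3 learning-speed scale described
    above: 1,000 trials -> 0 (no learning); 1 trial -> 3 (flawless learning).
    Extrapolated values beyond 1,000 trials are capped before transforming.
    """
    trials = min(trials_to_criterion, 1000)
    return 3.0 - math.log10(trials)

print(learning_speed(1))     # 3.0
print(learning_speed(100))   # 1.0
print(learning_speed(5000))  # 0.0 after capping
```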
Amplification Parameters
A total of six hearing devices were used for this project: one pair each for the wideband, steady-state noise, and multitalker babble conditions. The same pair of devices was programmed and fit binaurally to each participant during testing to control for potential processing differences across manufacturers and devices. The devices were selected from among those offered by six manufacturers. An inversion technique was used to determine which devices provided the optimal signal-to-noise ratio (SNR) in steady-state noise and multitalker babble when the DNR feature was enabled (Hagerman & Olofsson, 2004; Souza, Jenstad, & Boike, 2006). The hearing aids were preprogrammed according to DSL adult or child fitting parameters using the participant's hearing thresholds and real-ear-to-coupler differences obtained during the unaided test session. All fittings were verified at the beginning of the aided testing sessions and adjusted as necessary using real-ear measurements for soft, average, and loud speech inputs (Verifit, Audioscan). Hearing-aid output was within 5 dB of the aided targets. All other features within the devices were disabled (e.g., feedback reduction, directional microphones), as were all buttons (e.g., program, volume control).
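The inversion technique estimates the aided SNR by recording the output of the hearing aid twice, once with the noise waveform inverted in polarity; summing the recordings cancels the noise, and subtracting them cancels the speech. The sketch below illustrates the core computation only, assuming two sample-aligned recordings of equal length (time alignment, calibration, and frequency-specific analysis are omitted):

```python
import numpy as np

def snr_by_inversion(rec_speech_plus_noise, rec_speech_minus_noise):
    """Estimate SNR at the hearing-aid output via phase inversion
    (after Hagerman & Olofsson, 2004). The first recording is the aid's
    output for speech + noise; the second is its output for speech +
    polarity-inverted noise. Both are assumed to be numpy arrays that
    are sample-aligned and of equal length.
    """
    speech = (rec_speech_plus_noise + rec_speech_minus_noise) / 2.0  # noise cancels
    noise = (rec_speech_plus_noise - rec_speech_minus_noise) / 2.0   # speech cancels
    return 10.0 * np.log10(np.sum(speech ** 2) / np.sum(noise ** 2))
```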
For the extended bandwidth condition, participants were fitted binaurally with behind-the-ear receiver-in-the-canal devices. Gain at 8 kHz was adjusted to provide at least 5 dB of sensation level for a soft input (55 dB SPL). For this amplification feature, only one hearing-aid memory was programmed. Bandwidth was varied during testing by low-pass filtering the stimuli at two cutoff frequencies representing narrowband amplification (4 kHz) and wideband amplification (10 kHz).
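The stimulus filtering might be implemented as in the sketch below; the filter order and type are assumptions for illustration, as the exact filter design is not reported:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_limit(signal, fs, cutoff_hz):
    """Low-pass filter a stimulus to simulate the narrowband (4 kHz) or
    wideband (10 kHz) condition. An 8th-order Butterworth filter applied
    forward and backward (zero phase) is assumed here.
    """
    sos = butter(8, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100                      # sampling rate of the recordings
stimulus = np.random.randn(fs)  # placeholder 1-s signal for the example
narrowband = band_limit(stimulus, fs, 4000)
wideband = band_limit(stimulus, fs, 10000)
```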
For the DNR conditions, participants were fitted binaurally with one pair of behind-the-ear instruments for the steady-state noise condition and a different pair of devices for the multitalker babble condition. Custom earmolds with #13 tubing were made for each participant. The devices contained two memories: one with DNR enabled at its maximum setting and one with this feature disabled. The aids were then paired to a remote control. To reduce examiner bias, one research assistant programmed the hearing aids and randomly assigned the amplification feature to one of the two memories. A second research assistant conducted the behavioral tests without knowledge of the content of the memories. The remote control was operated by the examiner at all times. In addition to the visual display on the remote control, the participant confirmed that the desired memory was enabled via the audible indicator (i.e., one or two beeps, "Program 1," "Program 2"). The content of the memories was revealed to the participant and the examiner after testing was completed.
The magnitude of the DNR was confirmed on the day of testing using the noise-reduction measurement feature of the real-ear verification system (Verifit). A 60 dB SPL broadband noise (air-conditioner noise) was presented to each aid for 30 s. On average, the output of the hearing instruments was reduced with DNR by 11.1 dB for the adults (range: 8 to 15 dB) and by 10.1 dB for the children (range: 6 to 14 dB).
Procedure
Testing for each condition was conducted in the same sound-treated room equipped with Nucleus Micro loudspeakers as well as a computer monitor and mouse. The listeners were seated 1 m from a loudspeaker at 0° azimuth. In this way, the only parameters free to vary across tasks were the individual participants, the presence or absence of the hearing aids, and the settings of the hearing aids. All stimuli were presented in the free field at 54 dB SPL (re: calibrated position) with bandwidths of 4 and 10 kHz in quiet. In the steady-state noise and multitalker babble conditions, the stimuli were presented at 54 dB SPL at a +3 dB SNR based on the results of previous work in the Pediatric Amplification Lab with these tasks. These presentation levels successfully avoided ceiling effects for the listeners with normal hearing and floor effects for the listeners with hearing loss.
Custom software was used in each condition to present the stimuli and record responses. The examiner entered the responses for the word recognition task (correct, incorrect), while the participants interacted directly with the software for the other two tasks. On each trial, the software provided a 15-s response window. The interstimulus interval was 1 s if a response was entered or 15 s with no response. In this way, the software allowed the listener to proceed at his or her own pace. The parameters of each trial (stimulus, presentation level, listening condition) and participant response (response category, correct/incorrect/no response) were stored automatically after each trial for later analyses.
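The timing rules can be summarized in pseudocode form. The two callbacks below are hypothetical stand-ins for the custom software's stimulus playback and response collection, not the study's actual implementation:

```python
import time

RESPONSE_WINDOW_S = 15    # maximum time allowed for a response
ISI_AFTER_RESPONSE_S = 1  # interstimulus interval once a response is entered

def run_trial(present_stimulus, poll_response):
    """Illustrative trial loop matching the timing described above."""
    present_stimulus()
    start = time.monotonic()
    while time.monotonic() - start < RESPONSE_WINDOW_S:
        response = poll_response()  # returns a response or None
        if response is not None:
            time.sleep(ISI_AFTER_RESPONSE_S)
            return response
        time.sleep(0.05)  # brief poll interval to avoid busy-waiting
    # No response: the elapsed 15-s window itself serves as the interval.
    return None
```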
Statistical Analyses
Performance for the word recognition and lexical decision tasks was scored in percent correct. These values were arcsine transformed prior to statistical analyses to equalize the variance over the range of scores (Studebaker, 1985). Performance for the word-learning task was expressed in values of speed (re: trials to criterion) as described earlier. Although the breadth of the data collected would support extensive analyses, the focus of these analyses was limited to comparison across amplification conditions. Differences between adults and children were not examined directly.
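As commonly implemented, Studebaker's (1985) transform converts a proportion correct to rationalized arcsine units (RAU). A sketch is shown below; whether the raw or rationalized form of the arcsine transform was used is not specified in the text:

```python
import math

def rationalized_arcsine(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985). Stabilizes the
    variance of percent-correct scores near 0% and 100%; output spans
    roughly -23 to +123 RAU.
    """
    t = (math.asin(math.sqrt(correct / (total + 1)))
         + math.asin(math.sqrt((correct + 1) / (total + 1))))
    return (146.0 / math.pi) * t - 23.0

print(round(rationalized_arcsine(20, 25), 1))  # 20/25 correct -> ~78.6 RAU
```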
Repeated-measures analyses of variance (ANOVA) and pairwise comparisons were conducted separately for each task with group (normal hearing, hearing loss) as the between-subjects factor and listening condition (unaided, aided, and aided + feature) as the within-subjects factor. Post hoc analyses identified the amplification conditions associated with improved performance. To compare performance across tasks directly, percent correct and speed values were converted to Z scores. Pairwise comparisons based on repeated-measures ANOVA were conducted to reveal the relative differences between performance for each task in the aided and aided + feature conditions.
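For the across-task comparison, each task's scores can be standardized and the aided + feature minus aided difference computed per participant. A minimal sketch of one plausible standardization follows (the paper does not specify the reference distribution used for the Z scores):

```python
import numpy as np

def feature_benefit_z(aided, aided_plus):
    """Standardize one task's scores (pooled over the two conditions) and
    return the aided+feature minus aided difference for each participant.
    Inputs are hypothetical per-participant arrays for a single task.
    """
    pooled = np.concatenate([aided, aided_plus])
    mean, sd = pooled.mean(), pooled.std(ddof=1)
    return (aided_plus - mean) / sd - (aided - mean) / sd

# Example with hypothetical percent-correct scores for three participants
benefit = feature_benefit_z(np.array([60., 72., 68.]), np.array([66., 75., 74.]))
```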
Finally, the performance of the adults and children with hearing loss was compared with that of their normal-hearing counterparts through univariate ANOVA. Significant differences indicate that the performance of the listeners with hearing loss was poorer than that of their normal-hearing peers despite the use of amplification. For all analyses, the degrees of freedom were adjusted as necessary using the Greenhouse–Geisser method to accommodate any lack of sphericity in the data. Significance was indicated by p < .05, although Bonferroni adjustments were applied for multiple comparisons.
Results
In addition to unaided hearing thresholds, Figure 2 also shows aided thresholds obtained with and without the feature of interest (filled stars and triangles). The free-field aided thresholds confirmed that, on average, the participants with hearing loss received sufficient and equivalent amplification in each hearing-aid condition, such that differences in task performance would be the result of the amplification feature rather than differences in the hearing-aid fitting.
Figures 3 to 5 show the overall performance of the children and adults with the bandwidth, noise reduction in steady-state noise, and noise reduction in multitalker babble features, respectively. Average performance (+1 SE) is plotted as a function of task for the unaided, aided, and aided + feature conditions. Because word learning was calculated in units of speed, those results are shown on a log scale in separate panels to the right. The vertical gray bars at the top of each panel represent the 95% confidence intervals for the listeners with normal hearing. To facilitate interpretation of the statistical analyses, the results of the unaided versus aided and the aided versus aided + feature comparisons are shown in each figure. The "+" symbol indicates conditions for which performance was significantly greater than in the condition to the immediate left. In Figure 4, an asterisk denotes the significant improvement in performance with the amplification feature compared with the unaided condition.
Figure 3.
Average (+1 SE) performance as a function of task for the children and adults participating in the extended bandwidth condition. The parameter in each panel is listening condition. The vertical gray bars represent the 95% confidence intervals of the performance of the children and adults with normal hearing. WR = word recognition; LD = lexical decision; WL = word learning.
Figure 4.
Average (+1 SE) performance as a function of task for the children and adults participating in the digital-noise reduction, steady-state noise condition. The parameter in each panel is listening condition. The vertical gray bars represent the 95% confidence intervals of the performance of the children and adults with normal hearing. WR = word recognition; LD = lexical decision; WL = word learning.
Figure 5.
Average (+1 SE) performance as a function of task for the children and adults participating in the digital-noise reduction, multi-talker babble condition. The parameter in each panel is listening condition. The vertical gray bars represent the 95% confidence intervals of the performance of the children and adults with normal hearing. WR = word recognition; LD = lexical decision; WL = word learning.
Amplification Feature
Wideband amplification
Performance was poorest in the unaided condition for both the children and adults and improved with narrowband amplification for all but one task (Figure 3). The children's performance improved further with wideband amplification for all tasks, while the adults' performance improved further for word learning only. These observations were confirmed with the repeated-measures ANOVA and pairwise comparisons shown in Table 2. A significant main effect of amplification condition (Unaided, Aided, Aided+) was observed for both adults and children for all tasks, while pairwise comparisons revealed that performance improved significantly between the Unaided and Aided conditions for most tasks and between the Unaided and Aided+ conditions for all tasks. In summary, the children received more overall benefit from wideband amplification than the adults, who appeared to benefit only in the most difficult task (word learning).
Table 2.
Repeated-Measures ANOVA and Pairwise Comparisons in Each Hearing-Aid Condition.
| Feature | Group | Task | p | ηp² | Aid vs. unaided | Aid+ vs. unaided | Aid+ vs. aid |
|---|---|---|---|---|---|---|---|
| Wideband | Children | Word recognition | **<.001** | .647 | * | * | * |
| | | Lexical decision | **<.001** | .709 | * | * | * |
| | | Word learning | **<.001** | .671 | | * | * |
| | Adults | Word recognition | **<.001** | .750 | * | * | |
| | | Lexical decision | **<.001** | .677 | * | * | |
| | | Word learning | **<.001** | .615 | * | * | * |
| DNR SSN | Children | Word recognition | **<.001** | .590 | * | * | |
| | | Lexical decision | **.003** | .446 | * | | |
| | | Word learning | **.011** | .364 | | * | |
| | Adults | Word recognition | **<.001** | .608 | * | * | |
| | | Lexical decision | **.007** | .464 | | | |
| | | Word learning | .073 | .196 | | | |
| DNR MTB | Children | Word recognition | **.002** | .601 | * | * | |
| | | Lexical decision | **.003** | .697 | * | * | |
| | | Word learning | .088 | .293 | | | |
| | Adults | Word recognition | .180 | .248 | | | |
| | | Lexical decision | .052 | .345 | | | |
| | | Word learning | **.042** | .364 | | | |
Note. DNR = digital noise reduction; SSN = steady-state noise; MTB = multitalker babble. Bold indicates significance at p < .05.
DNR in steady-state noise
Performance for both groups decreased 20% to 30% in noise compared with performance in quiet (Figure 4). Repeated-measures ANOVA and the pairwise comparisons (Table 2) revealed significantly improved performance with amplification for word recognition in both groups and for the children's lexical decisions. Word learning did not improve with amplification for either group. No additional benefit from DNR was observed; however, pairwise comparisons revealed a significant increase with DNR relative to unaided performance for the children only (asterisk in Figure 4). It should be noted that word learning was already near the normal range for both groups; thus, little improvement was expected for this task.
DNR in multitalker babble
Overall performance for both groups was also poorer in multitalker babble, by 10% to 25%, compared with quiet (Figure 5). The results of the repeated-measures ANOVA (Table 2) revealed that the children's performance for word recognition and lexical decisions increased significantly with amplification, with no further increase (or decrease) in performance when the DNR was activated. Interestingly, no benefit from either type of amplification was observed for the adults. A significant main effect of listening condition was found for word learning, but this was not borne out in the pairwise comparisons. Again, performance for lexical decision and word learning was near the normal range for this listening condition, so little improvement was expected.
Across-Task Performance
The relative benefit of each hearing-aid feature across tasks was calculated by taking the difference in performance between the aided and aided + feature conditions after converting to Z scores. These differences were subjected to repeated-measures ANOVA with task as the within-subjects factor. For the wideband condition, a significant main effect of task was revealed for both the adults, F(2, 32) = 6.696, p = .004, and the children, F(2, 52) = 13.287, p < .001. Pairwise comparisons indicated a significant increase in benefit from wideband amplification between the most familiar (word recognition) and the least familiar (word learning) stimuli for the children (p < .001) and for the adults (p = .005). For the DNR in steady-state noise condition, the ANOVA failed to show any increase or decrease in performance when the feature was activated for the children, F(2, 38) = .001, p = .999, or for the adults, F(2, 38) = .714, p = .751. Similar results were observed for DNR in multitalker babble for the children, F(2, 23) = .423, p = .661, and the adults, F(2, 21) = .393, p = .681. These results support the hypothesis that amplification benefit would increase as the familiarity of the stimuli decreased; this was true for wideband amplification but not for DNR.
Performance Relative to Normal
Finally, Table 3 shows the results of univariate ANOVA comparing the performance of the children and adults with hearing loss to that of their age-matched peers with normal hearing. For each analysis, performance in the aided+feature condition was compared with the normal range. With wideband amplification, the performance of the adults with hearing loss remained significantly below that of their normal-hearing peers on all three tasks, whereas only word recognition was poorer for the children with hearing loss. In steady-state noise, the adults' word recognition and lexical decisions were poorer than those of the normal-hearing adults, whereas the children with hearing loss performed more poorly on all three tasks. Finally, in multitalker babble, performance was significantly poorer for both groups on all tasks except the adults' word learning and the children's lexical decisions. Overall, word recognition and learning continue to be adversely affected by the combination of hearing loss and age despite optimal amplification.
Table 3.
Univariate ANOVA Comparing the Performance of the Listeners With Hearing Loss in Each Hearing-Aid Condition to that of the Listeners With Normal Hearing.
| Feature | Task | Children p | Children ηp² | Adults p | Adults ηp² |
|---|---|---|---|---|---|
| Wideband | Word recognition | **.00** | .26 | **.00** | .57 |
| | Lexical decision | .06 | .11 | **.00** | .66 |
| | Word learning | .05 | .82 | **.00** | .31 |
| DNR SSN | Word recognition | **.00** | .34 | **.00** | .67 |
| | Lexical decision | **.00** | .79 | **.00** | .66 |
| | Word learning | **.02** | .22 | .08 | .15 |
| DNR MTB | Word recognition | **.00** | .61 | **.00** | .48 |
| | Lexical decision | .27 | .08 | **.01** | .40 |
| | Word learning | **.03** | .28 | .40 | .05 |
Note. DNR = digital noise reduction; SSN = steady-state noise; MTB = multitalker babble. Bold indicates significant differences (p < .05).
Discussion
In this project, the effects of two amplification features on discrete processes involved in the perception of familiar words and the detection and learning of new words were examined. The results showed that the children's word recognition and lexical decisions improved significantly with amplification in quiet, noise, and babble. Their performance improved further in quiet with wideband amplification for all tasks (including word learning), while no additional benefit or decrement was observed when using DNR. Like the children, the adults' word recognition, detection, and learning also improved with amplification in quiet, although little to no improvement with amplification was found in noise or babble. No additional benefit or decrement was observed when using wideband amplification or DNR, with one exception: a significant improvement in word-learning speed with wideband amplification in quiet.
The central tenet of this research was that traditional word-recognition tasks (e.g., 25-word lists) would be relatively unaffected by the amplification features under examination because the stimuli are so familiar that their perception can withstand significant signal degradation. The benefit of a particular amplification feature may instead be evident for less familiar speech stimuli that require a greater reliance on auditory perception. It was therefore hypothesized that if an amplification feature was truly beneficial to perception, that benefit would increase as familiarity with the stimuli decreased. The data for the children and adults in the wideband amplification condition support this hypothesis. That is, more benefit from wideband amplification was observed for word learning than for word recognition. These results suggest that wideband amplification provides an acoustic signal sufficient to support tasks that require a higher level of acoustic precision. Moreover, when an optimal level of acoustic precision was achieved, the children performed at levels similar to those of normal-hearing children. The adults, on the other hand, did not demonstrate the same advantage even though they received more benefit from wideband amplification for learning new words than they did for word recognition. Although efforts were made to equate the aided hearing thresholds of both age groups, the nearly 20 dB difference at 8 kHz may have limited the adults' ability to use high-frequency amplification as effectively as the children. The unique configurations of loss also represent age-specific etiologies of hearing loss across groups, which may have contributed to the differences in performance. Another possibility is that the adults with hearing loss were unable to learn new words as rapidly because they were out of practice; however, the adults with normal hearing performed as well as or better than the children with normal hearing despite a nearly 50-year age difference. Thus, the poorer word recognition and learning speed displayed by the adults with hearing loss may simply be a consequence of age overlaid with hearing loss.
The lack of benefit from wideband amplification for word recognition is consistent with previous reports regarding this feature (Ching et al., 1998; Hogan & Turner, 1998; John et al., 2014; Lau et al., 2014; Levy et al., 2015; Plyler & Fleck, 2006; Turner & Henry, 2002; Wolfe et al., 2015). However, the 14% improvement in performance for the children with hearing loss was somewhat higher than reported previously. Because the listening conditions were counterbalanced, this improvement was not due to order or learning effects. It is more likely that the use of a receiver-in-the-canal device provided more high-frequency energy than that provided in previous studies, resulting in improved performance for all tasks in the wideband condition. The results for word learning are consistent with one previous study showing significantly faster learning in wideband (9 kHz) compared with narrowband amplification (4 kHz; Pittman, 2008).
The Noise Problem
The results for DNR are consistent with previous research showing little to no benefit from this feature (McCreery et al., 2012; Nordrum et al., 2006; Pittman & Hiipakka, 2013; Ricketts & Hornsby, 2005; Stelmachowicz et al., 2010). In the present study, improvement with DNR was observed only for the children engaged in the word-learning task. No other benefit or detriment from DNR was found. Interestingly, the potential for improvement (re: normal hearing) was smaller for word learning than for word recognition or nonword detection.
A puzzling aspect of these results is that the SNR improvement offered by DNR in the physical signal did not affect performance in any way. A number of studies have shown significant, incremental improvements in word recognition with incremental increases in SNR in both children and adults (Beattie, 1989; Cooper & Cutts, 1971; Crandell, 1993; Crandell & Smaldino, 2000; Keith & Talis, 1972). Based on this evidence, one would expect to see at least some improvement in performance with DNR for one or more of the behavioral tasks. But this was not the case in previous research focusing on relatively simple word recognition, nor in the present study, which focused on tasks designed to rely more heavily on any acoustic benefits that DNR might provide. Put simply, the use of these tasks did not identify previously undetected benefits of DNR.
It is possible that, although the speech signal is somewhat preserved in noise with DNR, the remaining distortion from the noise or the listener’s poor auditory integrity may be sufficient to cancel benefits to perception that would be expected from an improved SNR. Another possibility is that speech perception fails to improve when the noise reduction feature itself produces distortion, despite improved SNR measured in the physical signal (Jorgensen & Dau, 2011). On the other hand, the 10 dB average reduction in hearing-aid output with DNR may improve listening comfort without sacrificing performance (Wu & Stangl, 2013). This may offer the listener more opportunities to use their hearing devices in listening environments that might otherwise cause them to reduce or suspend amplification use altogether.
Making Use of Nonsense
Because new words are equivalent to nonsense the first time they are heard, verbal repetition of nonsense words or syllables would appear to be an adequate test for estimating the impact of hearing loss on new-word detection and learning. Indeed, a number of nonsense-syllable and nonsense-word lists have been available for some time (Edgerton & Danhauer, 1979; Ewing & Ewing, 1946; Fletcher & Steinberg, 1929; Levitt & Resnick, 1978). These tests were created to reduce the contribution of context to performance and to provide a more accurate appraisal of hearing loss with regard to the perception of individual phonemes. A recent contribution to these lists is the ORCA Nonsense Syllable Test (Kuk et al., 2010), which provides a thorough analysis of perception along six stimulus parameters (plosives, fricatives, affricates, approximates, nasals, and vowels). For practical reasons, however, these tests are rarely used in clinical settings. As advised by Kuk et al. (2010), the examiner must be proficient with the language from which the nonsense items were derived, he or she should be trained in phonetic transcription, the patient's responses should be amplified and free of distortion, and the patient's face must be in full view of the examiner. One could add to this list the necessity for the examiner to have normal hearing and for the patient's speech production to be free of anomalies. If the patient is provided with an alternate response format (e.g., written responses or selection from a closed set of options), the patient must be capable of engaging with these materials, which generally excludes young children, adults with minimal literacy skills, and foreign-language speakers, among others.
The results of this project suggest that it may be possible to eliminate the error associated with a second perceiver through automation of auditory tasks involving nonsense words. For example, examiner input was not required for the word-learning task used in this study, and the results were most sensitive to the effects of wideband amplification. The development of similar automated tests of nonsense word detection and categorization would be an interesting focus of future research, especially if that research provided rapid, comprehensive, and sensitive clinical tests.
Limitations for Consideration
There are at least two limitations within the current project that should be mentioned. First, the results reported may not reflect the results that might be obtained with different devices (i.e., models) from the same or other manufacturers. This is especially true for DNR given the variability in implementation of this feature across manufacturers. Although substantial improvements in SNR were observed (∼10 dB) in the devices selected for this study, those improvements did not result in improved word recognition, non-word detection, or word learning. This suggests that the measures used to determine acoustic changes in SNR (i.e., inversion technique) may be generous estimates of SNR improvement and that other properties of the physical signal matter for perception (Jorgensen & Dau, 2011).
Second, the word-learning task was largely dependent on the perception of the high-frequency content of the stimuli. Recall that the listener's task was to learn the singular and plural forms of three novel words paired with three novel images. That is, half the tokens contained the high-frequency phoneme /s/ and half did not. The learning rate reported here, calculated across all six words, was significantly reduced by signal distortion resulting from hearing loss, noise, or the combination of the two. This may have been particularly true for the listeners with greater high-frequency loss: their learning rates may have been reduced artificially by an inability to perceive /s/ rather than an inability to learn the words.
Summary
The results of this project revealed significant benefits from wideband amplification for word recognition, non-word detection, and word learning in children with hearing loss. Adults also benefited from wideband amplification for word learning. DNR, on the other hand, neither improved nor detracted from either group’s performance in a systematic fashion. Hearing-aid users who prefer noise reduction features for listening comfort or clarity are not likely to jeopardize their ability to perceive familiar words or detect and learn new words when this feature is enabled.
Acknowledgments
Gratitude is extended to the staff and students working in the Auditory Prosthesis Laboratory at Arizona State University, including Ashley Wright and Emily Venskytis, as well as the directors of research at the six major hearing-aid manufacturers who generously supported this project with their expertise and resources: Stefan Launer, Brent Edwards, Graham Naylor, Joel Beilin, Andrew Dittberner, and Lars Sunesen. The authors would also like to thank the children and adults who took time out of their busy schedules to participate in this research.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was supported by a grant awarded to the first author from the Hearing Industry Research Consortium (IRC).
References
- Alexander J. M. (2014) How to use probe microphone measures with frequency-lowering hearing aids. Audiology Practices 6: 8–13.
- Beattie R. C. (1989) Word recognition functions for the CID W-22 test in multitalker noise for normally hearing and hearing-impaired subjects. The Journal of Speech and Hearing Disorders 54(1): 20–32.
- Bentler R., Wu Y. H., Kettel J., Hurtig R. (2008) Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology 47(8): 447–460.
- Brennan M. A., McCreery R., Kopun J., Hoover B., Alexander J., Lewis D., Stelmachowicz P. G. (2014) Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss. Journal of the American Academy of Audiology 25(10): 983–998. doi:10.3766/jaaa.25.10.7.
- Brons I., Houben R., Dreschler W. A. (2015) Acoustical and perceptual comparison of noise reduction and compression in hearing aids. Journal of Speech, Language, and Hearing Research. doi:10.1044/2015_JSLHR-H-14-0347.
- Byrne D., Dillon H., Ching T., Katsch R., Keidser G. (2001) NAL-NL1 procedure for fitting nonlinear hearing aids: Characteristics and comparisons with other procedures. Journal of the American Academy of Audiology 12(1): 37–51.
- Ching T. Y., Dillon H., Byrne D. (1998) Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. The Journal of the Acoustical Society of America 103(2): 1128–1140.
- Cooper J. C., Jr, Cutts B. P. (1971) Speech discrimination in noise. Journal of Speech and Hearing Research 14(2): 332–337.
- Crandell C. C. (1993) Speech recognition in noise by children with minimal degrees of sensorineural hearing loss. Ear and Hearing 14(3): 210–216.
- Crandell C. C., Smaldino J. J. (2000) Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech and Hearing Services in Schools 31: 362–370.
- Edgerton B., Danhauer J. L. (1979) Clinical implications of speech discrimination testing using nonsense stimuli. Baltimore, MD: University Park Press.
- Egan J. P. (1948) Articulation testing methods. Laryngoscope 58(9): 955–991. doi:10.1288/00005537-194809000-00002.
- Ewing A. W., Ewing I. R. (1946) The handicap of deafness. London, England: Longmans Green & Co. Ltd.
- Fletcher H., Steinberg J. C. (1929) Articulation testing methods. Bell System Technical Journal 8: 806–854.
- Gray S., Pittman A., Weinhold J. (2014) Effect of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with typical development and specific language impairment. Journal of Speech, Language, and Hearing Research 57(3): 1011–1025. doi:10.1044/2014_JSLHR-L-12-0282.
- Hagerman B., Olofsson A. (2004) A method to measure the effect of noise reduction algorithms using simultaneous speech and noise. Acta Acustica united with Acustica 90: 356–361.
- Hirsh I. J., Davis H., Silverman S. R., Reynolds E. G., Eldert E., Benson R. W. (1952) Development of materials for speech audiometry. Journal of Speech and Hearing Disorders 17(3): 321–337.
- Hogan C. A., Turner C. W. (1998) High-frequency audibility: Benefits for hearing-impaired listeners. The Journal of the Acoustical Society of America 104(1): 432–441.
- John A., Wolfe J., Scollie S., Schafer E., Hudson M., Woods W., Neumann S. (2014) Evaluation of wideband frequency responses and nonlinear frequency compression for children with cookie-bite audiometric configurations. Journal of the American Academy of Audiology 25(10): 1022–1033. doi:10.3766/jaaa.25.10.10.
- Jorgensen S., Dau T. (2011) Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing. The Journal of the Acoustical Society of America 130(3): 1475–1487. doi:10.1121/1.3621502.
- Keidser G., Dillon H., Carter L., O'Brien A. (2012) NAL-NL2 empirical adjustments. Trends in Amplification 16(4): 211–223. doi:10.1177/1084713812468511.
- Keith R. W., Talis H. P. (1972) The effects of white noise on PB scores of normal and hearing-impaired listeners. Audiology 11(3): 177–186.
- Kimlinger C., McCreery R., Lewis D. (2015) High-frequency audibility: The effects of audiometric configuration, stimulus type, and device. Journal of the American Academy of Audiology 26(2): 128–137. doi:10.3766/jaaa.26.2.3.
- Kochkin S. (2000) "Why my hearing aids are in the drawer": The consumer's perspective. The Hearing Journal 53: 34–41.
- Kuk F., Lau C. C., Korhonen P., Crose B., Peeters H., Keenan D. (2010) Development of the ORCA nonsense syllable test. Ear and Hearing 31(6): 779–795. doi:10.1097/AUD.0b013e3181e97bfb.
- Lau C. C., Kuk F., Keenan D., Schumacher J. (2014) Amplification for listeners with a moderately severe high-frequency hearing loss. Journal of the American Academy of Audiology 25(6): 562–575. doi:10.3766/jaaa.25.6.6.
- Leach L., Samuel A. G. (2007) Lexical configuration and lexical engagement: When adults learn new words. Cognitive Psychology 55(4): 306–353. doi:10.1016/j.cogpsych.2007.01.001.
- Levitt H., Resnick S. B. (1978) Speech reception by the hearing-impaired: Methods of testing and the development of new tests. Scandinavian Audiology. Supplementum 6: 107–130.
- Levy S. C., Freed D. J., Nilsson M., Moore B. C., Puria S. (2015) Extended high-frequency bandwidth improves speech reception in the presence of spatially separated masking speech. Ear and Hearing. doi:10.1097/AUD.0000000000000161.
- Luce P. A., Pisoni D. B. (1998) Recognizing spoken words: The neighborhood activation model. Ear and Hearing 19(1): 1–36.
- McCreery R. W., Venediktov R. A., Coleman J. J., Leech H. M. (2012) An evidence-based systematic review of directional microphones and digital noise reduction hearing aids in school-age children with hearing loss. American Journal of Audiology 21(2): 295–312. doi:10.1044/1059-0889(2012/12-0014).
- McGregor K. K. (2014) What a difference a day makes: Change in memory for newly learned word forms over 24 hours. Journal of Speech, Language, and Hearing Research 57(5): 1842–1850. doi:10.1044/2014_JSLHR-L-13-0273.
- McGregor K. K., Licandro U., Arenas R., Eden N., Stiles D., Bean A., Walker E. (2013) Why words are hard for adults with developmental language impairments. Journal of Speech, Language, and Hearing Research 56(6): 1845–1856. doi:10.1044/1092-4388(2013/12-0233).
- Mueller H. G., Weber J., Hornsby B. W. (2006) The effects of digital noise reduction on the acceptance of background noise. Trends in Amplification 10(2): 83–93.
- Nordrum S., Erler S., Garstecki D., Dhar S. (2006) Comparison of performance on the hearing in noise test using directional microphones and digital noise reduction algorithms. American Journal of Audiology 15(1): 81–91.
- Peterson G. E., Lehiste I. (1962) Revised CNC lists for auditory tests. The Journal of Speech and Hearing Disorders 27: 62–70.
- Pittman A. L. (2008) Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. Journal of Speech, Language, and Hearing Research 51(3): 785–797.
- Pittman A. (2011) Age-related benefits of digital noise reduction for short-term word learning in children with hearing loss. Journal of Speech, Language, and Hearing Research 54(5): 1448–1463. doi:10.1044/1092-4388(2011/10-0341).
- Pittman A. L., Hiipakka M. M. (2013) Hearing impaired children's preference for, and performance with, four combinations of directional microphone and digital noise reduction technology. Journal of the American Academy of Audiology 24(9): 832–844. doi:10.3766/jaaa.24.9.7.
- Pittman A. L., Rash M. A. (2016) Auditory lexical decision and repetition in children: Effects of acoustic and lexical constraints. Ear and Hearing 37(2): e119–e128. doi:10.1097/AUD.0000000000000230.
- Plyler P. N., Fleck E. L. (2006) The effects of high-frequency amplification on the objective and subjective performance of hearing instrument users with varying degrees of high-frequency hearing loss. Journal of Speech, Language, and Hearing Research 49(3): 616–627.
- Ricketts T. A., Hornsby B. W. (2005) Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology 16(5): 270–277.
- Scollie S., Seewald R., Cornelisse L., Moodie S., Bagatto M., Laurnagaray D., Pumford J. (2005) The Desired Sensation Level multistage input/output algorithm. Trends in Amplification 9(4): 159–197.
- Seewald R. C., Moodie K. S., Sinclair S. T., Scollie S. D. (1999) Predictive validity of a procedure for pediatric hearing instrument fitting. American Journal of Audiology 8(2): 143–152.
- Souza P. E., Jenstad L. M., Boike K. T. (2006) Measuring the acoustic effects of compression amplification on speech in noise. The Journal of the Acoustical Society of America 119(1): 41–44.
- Stelmachowicz P., Lewis D., Hoover B., Nishi K., McCreery R., Woods W. (2010) Effects of digital noise reduction on speech perception for children with hearing loss. Ear and Hearing 31(3): 345–355.
- Stelmachowicz P. G., Lewis D. E., Choi S., Hoover B. (2007) Effect of stimulus bandwidth on auditory skills in normal-hearing and hearing-impaired children. Ear and Hearing 28(4): 483–494.
- Stelmachowicz P. G., Nishi K., Choi S., Lewis D. E., Hoover B. M., Dierking D., Lotto A. (2008) Effects of stimulus bandwidth on the imitation of English fricatives by normal-hearing children. Journal of Speech, Language, and Hearing Research 51(5): 1369–1380.
- Stelmachowicz P. G., Pittman A. L., Hoover B. M., Lewis D. E. (2001) Effect of stimulus bandwidth on the perception of /s/ in normal- and hearing-impaired children and adults. The Journal of the Acoustical Society of America 110(4): 2183–2190.
- Stelmachowicz P. G., Pittman A. L., Hoover B. M., Lewis D. E. (2002) Aided perception of /s/ and /z/ by hearing-impaired children. Ear and Hearing 23(4): 316–324.
- Stelmachowicz P. G., Pittman A. L., Hoover B. M., Lewis D. E., Moeller M. P. (2004) The importance of high-frequency audibility in the speech and language development of children with hearing loss. Archives of Otolaryngology—Head & Neck Surgery 130(5): 556–562.
- Storkel H. L. (2015) Learning from input and memory evolution: Points of vulnerability on a pathway to mastery in word learning. International Journal of Speech Language Pathology 17(1): 1–12. doi:10.3109/17549507.2014.987818.
- Storkel H. L., Lee S. Y. (2011) The independent effects of phonotactic probability and neighborhood density on lexical acquisition by preschool children. Language and Cognitive Processes 26(2): 191–211. doi:10.1080/01690961003787609.
- Studebaker G. A. (1985) A "rationalized" arcsine transform. Journal of Speech and Hearing Research 28(3): 455–462.
- Tillman T. W., Carhart R. (1966) An expanded test for speech discrimination utilizing CNC monosyllabic words: Northwestern University Auditory Test No. 6. Technical Report SAM-TR-66-55: 1–12.
- Turner C. W., Henry B. A. (2002) Benefits of amplification for speech recognition in background noise. The Journal of the Acoustical Society of America 112(4): 1675–1680.
- Verhaeghen P. (2003) Aging and vocabulary scores: A meta-analysis. Psychology and Aging 18(2): 332–339.
- Wilson R. H., McArdle R. (2005) Speech signals used to evaluate functional status of the auditory system. Journal of Rehabilitation Research and Development 42(4 Suppl 2): 79–94.
- Wolfe J., John A., Schafer E., Hudson M., Boretzki M., Scollie S., Neumann S. (2015) Evaluation of wideband frequency responses and non-linear frequency compression for children with mild to moderate high-frequency hearing loss. International Journal of Audiology 54(3): 170–181. doi:10.3109/14992027.2014.943845.
- Wu Y. H., Stangl E. (2013) The effect of hearing aid signal-processing schemes on acceptable noise levels: Perception and prediction. Ear and Hearing 34(3): 333–341. doi:10.1097/AUD.0b013e31827417d4.