Abstract
Purpose
This study examined differences in voiced CV perception in older listeners with normal hearing, and in two groups of older listeners with matched hearing losses: those with good and those with poor word recognition scores.
Method
Thirty-six participants identified natural CV syllables beginning with the voiced consonants /b, d, g, m, n, ð, v, z/ in three vowel contexts (/a, o, u/), spoken by a male and a female talker, in an 8-item closed-set task.
Results
The listeners with hearing loss and poor word recognition made more of the same types of errors made by listeners with hearing loss and good word recognition, as well as errors the latter group did not make. For these listeners, errors above chance rates were highest in the context of /a/ and similar in the contexts of /o/ and /u/. Sequential information analyses (SINFA) verified that information was transmitted least efficiently in the context of /a/. The results yielded a list of consonant confusions unique to listeners with poor word recognition scores (WRS).
Conclusions
Listeners with poor WRS have more difficulty identifying voiced initial consonants in CV syllables than listeners with good WRS. These listeners made some systematic errors, but most were nonsystematic, perhaps due to the low level of feature information transmitted.
While the majority of older listeners with hearing loss perform well on tests of speech perception when stimuli are presented at levels high enough for maximum audibility, there is a group of older people who have unexpectedly poor word recognition. Listeners with similar audiograms may have very different speech perception abilities. It is possible that these speech perception deficits are the result of cochlear dead zones. If listeners with poor word recognition scores (WRS) had cochlear dead zones, their thresholds could still be similar to those of listeners with good WRS, because the hair cells on the periphery of these dead zones process the test tones (Moore, Glasberg & Stone, 2004). Another potential cause of poor speech perception is auditory neuropathy. In auditory neuropathy, outer hair cells function normally and produce otoacoustic emissions, but acoustic reflexes are absent (Berlin et al., 2005). However, the signal from the inner hair cells or the neural connections from them is compromised, as has been seen in animal studies (Bussoli, Kelly & Steel, 1997; Harrison, 1998). Vinay and Moore (2007) have suggested that neural dys-synchrony may cause a lack of processing efficiency. Auditory neuropathy is not limited to early onset: Kumar and Jayaram (2006) found that 37 of 61 cases in their Indian population had an onset age of 16 years or more.
The listener with hearing loss and poor word recognition presents a dilemma to the audiologist, both in terms of amplification and aural rehabilitation needs. Previous research has examined what types of perceptual errors in speech are common in listeners with hearing loss but has not controlled for word recognition ability (Dubno, Dirks, & Langhofer, 1982; Lindholm, Dorman, Taylor & Hannley, 1988; Van Tasell, Hagen, Koblas, & Penner, 1982). There have been no investigations to date of the consonant identification abilities of listeners with poor word recognition. Do they simply make more of the same kinds of errors made by listeners with hearing loss and good word recognition, or do they make different kinds of errors in consonant recognition? Without an understanding of the specific perceptual difficulties encountered by those with poor word recognition, it has not been possible to apply amplification and aural rehabilitation strategies that would improve the quality of life for these people.
Hustedde and Wiley (1991) have reported that consonant error patterns differed between listeners with hearing loss and good consonant recognition ability and those with poor consonant recognition ability in both number and types of error. The authors reported significant differences in performance for the two groups on syllables with both low-frequency and high-frequency spectra. The listeners in the two groups exhibited large differences in auditory sensitivity, with differences in pure-tone thresholds of 8–15 dB in the low frequencies, and as much as 24 dB in the high frequencies. Even with a presentation level of 30 dB above the threshold at 2000 Hz, sounds above 2000 Hz would have been very near threshold for the listeners with poorer hearing and consonant perception.
The typical hearing-impaired adult tends to miss the ends of words, in part due to lower audibility (Dubno, Dirks, & Langhofer, 1982; Helfer & Huntley, 1991), and also tends to confuse speech sounds in the higher frequencies, where the typical loss of hearing sensitivity occurs. Many consonants are composed predominantly of high-frequency energy, and the inability to perceive these consonants contributes greatly to problems with speech understanding for the hearing-impaired listener. Consonants that are voiced (/b/, /d/) are easier to detect and identify than consonants that are not voiced (/p/, /t/), because there is more power in voiced speech sounds. The typical hearing-impaired listener does not confuse /b/ with /d/ in quiet or reverberation, but may do so in noise (Helfer & Huntley, 1991; Van Tasell, Hagen, Koblas & Penner, 1982). Consonants in the initial position are easier to identify than consonants in the final position (Dubno et al., 1982; Helfer & Huntley, 1991).
Previous studies have used the Nonsense Syllable Test (NST; Resnick, Dubno, Hoffnung & Levitt, 1975), which has a limited set of initial CVs. A re-examination of NST data from a previous study (Phillips, Gordon-Salant, Fitzgibbons & Yeni-Komshian, 1994) showed that older listeners with hearing loss and poor WRS had difficulty identifying voiced initial consonants when compared with older listeners with good WRS. The present investigation examined voiced CV identification for /b, d, g, m, n, ð, v, z/ in normal-hearing older adults and two groups of hearing-impaired older adults matched for hearing sensitivity: those with good WRS and those with poor WRS. The aims of the present experiments were to characterize the consonant confusion patterns of listeners with poor word recognition and to determine the effects of talker gender and vowel context (/a, o, u/) upon consonant identification.
Method
Participants
Participants were aged 60–80 years, with a mild to moderately severe sensorineural hearing loss from 500–4000 Hz and normal acoustic immittance bilaterally. They were recruited from two university clinic databases and placed into two groups. The first group (N = 12; mean age 76 years, range 68–80 years) demonstrated good WRS (≥90%) when tested in quiet at 75 dB HL using the NU-6 word lists. The second group (N = 12; mean age 73 years, range 67–80 years) demonstrated poor WRS (≤70%) when tested under the same conditions. The NU-6 word lists were used for this purpose because 14 of their 50 words begin with the voiced consonants used in the present study. All listeners with hearing loss were experienced hearing aid users, though participants did not use amplification devices in these experiments. Participants in these two groups were matched for hearing sensitivity within 10 dB from 500–4000 Hz (see Figure 1). Two participants in the group with poor WRS became ill and were unable to complete all vowel conditions. A control group of normal-hearing listeners aged 60–80 years (N = 12; mean age 66 years, range 61–78 years), whose thresholds were ≤20 dB HL (re: ANSI, 1996) from 500–4000 Hz, was also included in the experiment. All participants passed the Short Portable Mental Status Questionnaire (Pfeiffer, 1975), were native speakers of English, and were in good general health with no history of neurologic pathology. All listeners were paid $10/hour for their participation. Each participant signed a consent form approved by the appropriate university.
Figure 1.
Mean thresholds with standard deviations for matched older listeners with good and poor word recognition scores.
Stimuli: CV syllables
A set of 24 natural CVs was chosen from a larger set of utterances recorded by a male and a female talker with a Shure BG 1.1 microphone mounted at a distance of 6 inches from the talker's mouth. All recordings were made in a sound-treated booth, using Cool Edit Pro software and a Tucker-Davis MA3 interface. Three vowels (/a/, /o/, /u/) were combined with eight voiced consonants in the initial position (/b/, /d/, /g/, /m/, /n/, /v/, /z/, /ð/) in a CV format. Stimuli were recorded at a 48,828 Hz sampling rate in 16-bit stereo, equated for average RMS power, and edited to a set duration of 400 ms, with a fade-out beginning at a zero-crossing point near the end of the steady-state portion of the vowel. All CVs were saved as separate .wav files.
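The equating and editing steps just described can be sketched in a few lines of signal-processing code. This is a minimal illustration, not the authors' actual pipeline: the file names, target RMS level, and 20-ms fade length are assumptions, and it relies on the `numpy` and `soundfile` packages.

```python
# Illustrative stimulus post-processing: equate average RMS power across
# tokens and apply a fade-out that begins at a zero crossing near the end.
import numpy as np
import soundfile as sf

TARGET_RMS = 0.05   # arbitrary reference level (assumption)
DUR_S = 0.400       # set duration of 400 ms, as in the text
FADE_S = 0.020      # fade-out length (assumption; not specified in the text)

def process_token(in_path, out_path):
    x, fs = sf.read(in_path)
    if x.ndim > 1:
        x = x.mean(axis=1)          # collapse to mono for simplicity
    x = x[: int(DUR_S * fs)]        # trim to the set 400-ms duration

    # Scale so the token's average RMS power matches the reference level.
    rms = np.sqrt(np.mean(x ** 2))
    x = x * (TARGET_RMS / max(rms, 1e-12))

    # Begin the fade at the last zero crossing before the fade window.
    n_fade = int(FADE_S * fs)
    head = np.signbit(x[: len(x) - n_fade]).astype(np.int8)
    zc = np.nonzero(np.diff(head))[0]
    start = int(zc[-1]) if zc.size else len(x) - n_fade
    x[start:] = x[start:] * np.linspace(1.0, 0.0, len(x) - start)

    sf.write(out_path, x, fs)

process_token("ba_male.wav", "ba_male_eq.wav")  # hypothetical file names
```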
Procedure
The experiment assessed consonant confusions using identification of eight voiced consonant-vowel syllables in a closed-set task in three vowel contexts (/a/, /o/, and /u/). Stimuli were played by the EcosWin software through a Tucker-Davis system and presented to the listener monaurally via an insert earphone (ER 3-A) in a sound-treated booth (IAC) at 95 dB SPL to ensure audibility (Kamm, Morgan & Dirks, 1983). Both facilities were equipped with identical systems. No participant judged the sound level to be uncomfortable. Each CV was presented 10 times in randomized order within an 80-item test run organized in two blocks: in one block the CVs were spoken by the male voice, and in the other by the female voice. Presentation order of the blocks was randomized.
Listeners viewed a computer screen with 8 boxes in which the 8 consonant choices were represented in enlarged orthographic form. Their task was to use a computer mouse to click on the box representing the sound they heard in each trial. Participants completed a practice run to familiarize them with the task and were instructed to guess when uncertain. No feedback was provided. The initial hearing evaluation and subsequent experiments were conducted over two 2-hour sessions, with breaks every half-hour. Five of the 80-item runs were completed to provide 50 targets for each CV (Dubno & Dirks, 1982). The five runs were repeated for each of the three vowel contexts, which took one half-hour each. The order of vowel presentation was counterbalanced across listeners. A percent correct score and mean proportion of errors were determined for each CV. Group effects were determined using a mixed-model logistic regression analysis of error rates. The types of errors made by each group were determined by examination of confusion matrices, and the causes of group differences were examined through a Sequential Information Analysis (SINFA) as described by Wang and Bilger (1973).
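As a concrete illustration of this scoring, the sketch below tallies trial-level responses into an 8 × 8 confusion matrix and computes a percent correct score. The trial format is a hypothetical list of (stimulus, response) pairs; no claim is made about the original software.

```python
# Illustrative scoring: build an 8x8 confusion matrix (rows = consonant
# presented, columns = consonant response) and compute percent correct.
import numpy as np

CONSONANTS = ["b", "d", "g", "m", "n", "ð", "v", "z"]
INDEX = {c: i for i, c in enumerate(CONSONANTS)}

def confusion_matrix(trials):
    """trials: iterable of (stimulus, response) consonant pairs."""
    m = np.zeros((8, 8), dtype=int)
    for stim, resp in trials:
        m[INDEX[stim], INDEX[resp]] += 1
    return m

def percent_correct(m):
    return 100.0 * np.trace(m) / m.sum()

# Example: two correct /b/ trials and one /b/ -> /v/ confusion.
m = confusion_matrix([("b", "b"), ("b", "b"), ("b", "v")])
print(round(percent_correct(m), 1))  # 66.7
```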
Results
Syllable Identification Error Rates
Older listeners with normal hearing had a mean overall percent correct score of 88%. The mean percent correct for older listeners with a hearing loss and good word recognition was 75%, and for those with poor word recognition it was 53%. The statistical model was fit with group as a between-subjects factor and consonant, vowel context, and talker gender as within-subjects factors, using a mixed-model logistic regression analysis. The response variable was the number of errors made out of the 25 observations under each of the 48 combinations (8 × 3 × 2) of consonant, vowel context and talker gender. Type III fixed effects showed significant main effects for group [F(2, 34.06) = 62.55, p < .0001], vowel context [F(2, 1624) = 15.51, p < .0001], talker gender [F(1, 1624) = 87.91, p < .0001] and stimulus consonant [F(7, 1624) = 308.09, p < .0001].
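As a rough illustration of how such a model can be fit to cell-level error counts, the sketch below uses statsmodels' binomial GLM with an errors/correct response. It is fixed-effects only: the per-subject random effect of the published mixed model is omitted, and the data file and column names are hypothetical.

```python
# Fixed-effects sketch of the logistic regression on error counts.
# Hypothetical long-format data: one row per subject x consonant x vowel x
# talker cell, with `errors` out of `n_trials` presentations.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cv_errors.csv")              # hypothetical file
df["correct"] = df["n_trials"] - df["errors"]

formula = ("errors + correct ~ group * consonant + group * vowel"
           " + group * talker + consonant * vowel + consonant * talker"
           " + vowel * talker")
fit = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()
print(fit.summary())
```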
Significant two-way interactions were found for group x consonant [F(14, 1624) = 105.06, p<.0001], group x vowel context [F(4, 1624) = 11.34, p<.0001], group x talker gender [F(2, 1624) = 5.28 p<.005], consonant x vowel context [F(14, 1624) = 123.74, p<.0001], consonant x talker gender [F(7, 1624) = 98.54, p<.0001] and vowel context by talker gender [F(2, 1624) = 15.13, p<.0001].
Table 1 shows the mean proportion of errors for each of the 48 CVs by group, vowel context and talker gender. The proportions of errors for 45 of the 48 CVs for normal-hearing listeners were in the .00–.34 range. For older listeners with hearing loss and good word recognition, the error rates for 39 of the 48 CVs were within that range. For older listeners with hearing loss and poor word recognition, the proportions of errors were within that range for only 14 of the 48 CVs. Table 1 also shows that error rates tended to be higher for CVs spoken by the female voice, with the exception of /v/, where errors tended to occur more often for the male speaker. Listeners with normal hearing performed more poorly for the female speaker in 12/24 conditions. Listeners with hearing loss and good word recognition performed similarly, with poorer performance for the female speaker in 11/24 conditions. Listeners with hearing loss and poor word recognition performed more poorly for the female speaker in 14/24 conditions. These error differences were small (.00–.09) for normal-hearing listeners for all but /g/, where the mean difference was .13, with more errors for the female voice. Listeners with hearing impairment and good WRS exceeded the .09 vocal gender difference for /n/ (.24), /ð/ (.38) and /v/ (.17), with the higher number of errors for /d/ and /v/ occurring for the male voice. Listeners with hearing impairment and poor WRS exceeded this difference for /b/ (.13), /n/ (.23), /ð/ (.25), and /z/ (.10). Differences in performance due to talker gender were therefore similar for the two hearing-impaired groups for /n/, with less of a difference for /ð/ and more of a difference for /b/ and /z/ for those with poor WRS.
Table 1.
Observed proportion of consonant errors by group, vowel and talker gender.
| Group | Vowel | b ♀ | b ♂ | d ♀ | d ♂ | g ♀ | g ♂ | m ♀ | m ♂ | n ♀ | n ♂ | ð ♀ | ð ♂ | v ♀ | v ♂ | z ♀ | z ♂ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 Normal hearing | a | 0.34 | 0.56 | 0.21 | 0.19 | 0.18 | 0.03 | 0.09 | 0.00 | 0.02 | 0.02 | 0.12 | 0.06 | 0.12 | 0.11 | 0.01 | 0.01 |
| | o | 0.32 | 0.13 | 0.01 | 0.13 | 0.13 | 0.05 | 0.04 | 0.00 | 0.01 | 0.01 | 0.25 | 0.01 | 0.16 | 0.44 | 0.01 | 0.00 |
| | u | 0.14 | 0.29 | 0.02 | 0.06 | 0.31 | 0.16 | 0.00 | 0.05 | 0.02 | 0.01 | 0.30 | 0.58 | 0.05 | 0.06 | 0.01 | 0.01 |
| | Group mean | 0.27 | 0.33 | 0.08 | 0.13 | 0.21 | 0.08 | 0.04 | 0.03 | 0.02 | 0.01 | 0.22 | 0.22 | 0.11 | 0.20 | 0.01 | 0.01 |
| 2 Hearing impaired, good WRS | a | 0.26 | 0.34 | 0.13 | 0.22 | 0.04 | 0.05 | 0.11 | 0.12 | 0.39 | 0.08 | 0.58 | 0.29 | 0.30 | 0.23 | 0.22 | 0.15 |
| | o | 0.40 | 0.31 | 0.08 | 0.22 | 0.01 | 0.01 | 0.21 | 0.25 | 0.25 | 0.09 | 0.70 | 0.23 | 0.52 | 0.83 | 0.34 | 0.06 |
| | u | 0.21 | 0.29 | 0.04 | 0.07 | 0.11 | 0.13 | 0.17 | 0.28 | 0.29 | 0.03 | 1.00 | 0.62 | 0.17 | 0.45 | 0.07 | 0.02 |
| | Group mean | 0.29 | 0.31 | 0.08 | 0.17 | 0.05 | 0.06 | 0.16 | 0.22 | 0.31 | 0.07 | 0.76 | 0.38 | 0.33 | 0.50 | 0.21 | 0.08 |
| 3 Hearing impaired, poor WRS | a | 0.22 | 0.24 | 0.64 | 0.47 | 0.49 | 0.59 | 0.33 | 0.25 | 0.75 | 0.40 | 0.79 | 0.55 | 0.53 | 0.53 | 0.51 | 0.50 |
| | o | 0.32 | 0.34 | 0.40 | 0.38 | 0.28 | 0.31 | 0.65 | 0.65 | 0.44 | 0.39 | 0.71 | 0.30 | 0.73 | 0.81 | 0.63 | 0.45 |
| | u | 0.26 | 0.61 | 0.24 | 0.40 | 0.31 | 0.16 | 0.67 | 0.52 | 0.36 | 0.07 | 0.93 | 0.83 | 0.42 | 0.56 | 0.24 | 0.12 |
| | Group mean | 0.27 | 0.40 | 0.43 | 0.42 | 0.36 | 0.35 | 0.55 | 0.47 | 0.52 | 0.29 | 0.81 | 0.56 | 0.56 | 0.63 | 0.46 | 0.36 |
Table 2 contains odds ratio comparisons between normal-hearing and hearing-impaired listeners for making an error. The odds of making a consonant error for older listeners with hearing loss and good word recognition are significantly higher than those for older listeners with normal hearing for all but the stop consonants (p < .0001); for /g/, the odds were significantly lower. For every consonant except /b/, the odds of making a consonant error for older listeners with hearing loss and poor word recognition are even higher compared with those for normal-hearing older listeners (p < .001). Table 3 shows the odds ratios for making an error for the listeners with poor word recognition compared with those with good word recognition. For listeners with poor word recognition, odds ratios for the likelihood of an error are significantly higher than those for listeners with good word recognition for all consonants except /b/. For /d/, /g/, /m/ and /n/, the odds of an error for listeners with poor word recognition were approximately 2–6.5 times as great as those for listeners with good word recognition (p < .001).
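For intuition about how such odds ratios relate to the error proportions in Table 1, the short example below forms an odds ratio from pooled group-mean error rates. This is a back-of-the-envelope check; Tables 2–3 report model-based ratios that average over consonant × vowel × talker cells, so the published values differ somewhat.

```python
# Worked example: odds ratio for /d/ errors, poor vs. good WRS, computed
# from the pooled group-mean error proportions in Table 1. Table 3's
# model-based value for /d/ is 6.18.
def odds(p):
    return p / (1.0 - p)

p_good = (0.08 + 0.17) / 2   # /d/ group means, female and male talkers
p_poor = (0.43 + 0.42) / 2

print(round(odds(p_poor) / odds(p_good), 2))  # ~5.17
```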
Table 2.
Overall odds of making an error for each of the two hearing impaired groups compared to the normal hearing group, for each consonant.
| Consonant | Good WRS | Poor WRS |
|---|---|---|
| b | 1.03 | 1.26 |
| d | 1.41 | 8.72* |
| g | 0.38‡ | 2.49* |
| m | 7.54* | 25.31* |
| n | 20.76* | 42.75* |
| ð | 7.62* | 8.85* |
| v | 4.85* | 7.33* |
| z | 26.25* | 69.37* |
* Significantly higher rate of error for the hearing-impaired group (p < .001)
‡ Significantly lower rate of error for the hearing-impaired group (p < .001)
Table 3.
From Table 2, the ratio of the Poor WRS odds ratio to the Good WRS odds ratio, for each consonant.
| Consonant | Poor WRS/Good WRS |
|---|---|
| /b/ | 1.22 |
| /d/ | 6.18** |
| /g/ | 6.55** |
| /m/ | 3.36** |
| /n/ | 2.06** |
| /ð/ | 1.16* |
| /v/ | 1.51* |
| /z/ | 2.64* |
* Significantly higher rate of error for Poor WRS, compared to normal, than for Good WRS (p < .05)
** Significantly higher rate of error for Poor WRS, compared to normal, than for Good WRS (p < .001)
Table 2 and Table 3 also present odds ratio calculations for consonants grouped into plosive, nasal and fricative manner of articulation categories. The odds of making an error within the plosive category are low for both normal-hearing listeners and listeners with hearing loss and good word recognition, while the odds of making a plosive error (for /d/ and /g/) for those with poor word recognition were about 6 times as high as for those with good word recognition (p < .001). The effect of hearing loss alone upon the perception of nasal consonants is seen in the higher odds of misidentification in the group with good word recognition compared with those with normal hearing (p < .001). Listeners with poor word recognition have further increased odds of making a nasal error: twofold for /n/ and threefold for /m/ (p < .001). The effect of hearing impairment is greatest for the perception of /z/, with 26 times the odds of making this error when compared with normal-hearing older listeners (p < .001). Poor word recognition increased these odds by a further 2.6 times (p < .001).
Listeners with normal hearing were four times more likely to make errors for CVs in the /a/ context than in the /o/ or /u/ contexts. Listeners with hearing loss and good WRS were equally likely to make errors in the /a/ and /u/ contexts, with more /a/ errors than /o/ errors. Compared with listeners with normal hearing, they were three times more likely to make errors for CVs in the /a/ context, 5.3 times more likely in the /o/ context, and 4.5 times more likely in the /u/ context. When compared with listeners with normal hearing, listeners with hearing loss and poor WRS were 13 times more likely to make errors for CVs in the /a/ context, 20 times more likely in the /o/ context and 16.7 times more likely in the /u/ context. In comparison with listeners with hearing loss and good WRS, they were 4.5 times more likely to make errors in the context of /a/, 3.5 times more likely in the context of /o/, and 3.7 times more likely in the context of /u/. These group differences are considered in the sections on consonant confusions and the SINFA analysis.
Consonant confusions
Table 4–Table 6 present, as a sample, the consonant confusion matrices for normal-hearing older listeners in the three vowel contexts. With 8 possible choices, the chance error rate was 12.5%; error rates above that level are shown in bold. For normal-hearing older listeners, most of the errors tended to cluster within each of the manner categories: plosives, nasals, fricatives. Confusions across the plosive, fricative and nasal categories were termed manner errors; confusions based on the point of constriction were termed place errors; and errors across both categories were termed place/manner errors. Examination of confusion matrices for each listener group revealed errors that were common to all groups, though at varying rates, as well as error patterns specific to each group.
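This taxonomy is easy to make explicit in code. The sketch below uses conventional phonetic place and manner assignments for these eight consonants (an assumption for illustration; the SINFA features in Table 7 follow Wang and Bilger's definitions instead) and flags error rates above the 12.5% chance level.

```python
# Classify a stimulus/response confusion as a place, manner, or
# place/manner error, and flag confusion-matrix cells above chance.
MANNER = {"b": "plosive", "d": "plosive", "g": "plosive",
          "m": "nasal", "n": "nasal",
          "ð": "fricative", "v": "fricative", "z": "fricative"}
PLACE = {"b": "bilabial", "m": "bilabial", "v": "labiodental",
         "ð": "dental", "d": "alveolar", "n": "alveolar", "z": "alveolar",
         "g": "velar"}
CHANCE = 1.0 / 8.0  # 8-alternative closed set -> 12.5%

def classify(stim, resp):
    if stim == resp:
        return "correct"
    if MANNER[stim] == MANNER[resp]:
        return "place"            # same manner, different constriction point
    if PLACE[stim] == PLACE[resp]:
        return "manner"           # same place, different manner
    return "place/manner"         # both dimensions differ

def above_chance(count, n_presentations):
    return count / n_presentations > CHANCE

print(classify("b", "v"), above_chance(201, 600))  # place/manner True
```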
Table 4.
Consonant confusion matrix in the /a/ vowel context for normal-hearing older listeners. Confusions in bold are above chance levels.
Rows show the consonant presented; columns show the consonant response.

| Consonant presented | b | d | g | m | n | ð | v | z | Total |
|---|---|---|---|---|---|---|---|---|---|
| b | 330 | 22 | 8 | 9 | 4 | 26 | 201 | 0 | 600 |
| d | 8 | 482 | 83 | 0 | 0 | 25 | 1 | 1 | 600 |
| g | 0 | 54 | 537 | 0 | 1 | 5 | 2 | 1 | 600 |
| m | 0 | 0 | 0 | 573 | 24 | 1 | 1 | 1 | 600 |
| n | 1 | 2 | 0 | 1 | 588 | 5 | 0 | 3 | 600 |
| ð | 1 | 0 | 2 | 0 | 0 | 546 | 49 | 2 | 600 |
| v | 0 | 0 | 1 | 0 | 1 | 39 | 529 | 30 | 600 |
| z | 0 | 2 | 0 | 0 | 2 | 2 | 0 | 594 | 600 |
| Total | 340 | 562 | 631 | 583 | 620 | 649 | 783 | 632 | 4800 |
Table 6.
Consonant confusion matrix in the /u/ vowel context for normal-hearing older listeners. Confusions in bold are above chance levels.
Rows show the consonant presented; columns show the consonant response.

| Consonant presented | b | d | g | m | n | ð | v | z | Total |
|---|---|---|---|---|---|---|---|---|---|
| b | 470 | 6 | 21 | 0 | 2 | 9 | 92 | 0 | 600 |
| d | 7 | 575 | 0 | 0 | 14 | 1 | 3 | 0 | 600 |
| g | 75 | 7 | 459 | 1 | 0 | 12 | 46 | 0 | 600 |
| m | 1 | 0 | 2 | 584 | 8 | 2 | 3 | 0 | 600 |
| n | 1 | 1 | 0 | 6 | 591 | 1 | 0 | 0 | 600 |
| ð | 0 | 68 | 18 | 2 | 1 | 335 | 125 | 51 | 600 |
| v | 0 | 2 | 0 | 2 | 2 | 24 | 565 | 5 | 600 |
| z | 0 | 0 | 0 | 1 | 2 | 0 | 1 | 596 | 600 |
| Total | 554 | 659 | 500 | 596 | 620 | 384 | 835 | 652 | 4800 |
The most common error made by older normal-hearing listeners in all vowel contexts was mistaking /v/ for /b/, a place/manner error. This error was greatest in the /a/ vowel context, accounting for 33.5% of presentations, compared with 10.7% in the /o/ context and 15% in the /u/ context. The place error of /ð/ for /v/ was common, particularly in the /o/ context (28% of presentations), with the reverse error common in the /u/ context (21% of presentations). Only five confusions exceeded chance level: /v/ for /b/ in the /a/ and /u/ contexts, /g/ for /d/ in the /a/ context, /ð/ for /v/ in the /o/ context, and /v/ for /ð/ in the /u/ context. All other errors were below chance level.
Hearing impaired listeners with good word recognition had 14 consonant confusions above the chance level across vowel contexts. There were a few place/manner error responses; most were below chance level. These place/manner errors included /v/ for /b/ in all vowel contexts, though only just above chance for the /a/ context, /g/ for /d/ in the /o/ context, and /g/ for /ð/ in the /u/ vowel context. Nasals were never mistaken for any consonant outside of the nasal category, and confused for each other only just above chance levels. Mistaking /ð/ for /v/ and /v/ for /ð/ were the most common errors in this group. For two CV combinations, these listeners made more errors than correct responses, choosing /ð/ for /v/ in the /o/ vowel context for 55% of the presentations, and choosing /g/ for 38% and /v/ for 18% of the presentations of /ð/ in the /u/ vowel context.
The listeners with hearing loss and poor word recognition made more error responses than correct responses for 6 of the 8 consonants in the /a/ context, 4 of the 8 consonants in the /o/ context and 2 of the 8 consonants in the /u/ context. The lowest correct response level was 59/500 for /ð/ in the /u/ context, where participants appeared to have guessed across the various response possibilities with /v/, /z/ and /g/ as the only errors rising above chance level. No systematic errors occurred for /ð/ in the /o/ context. The most common error for /ð/ in the /a/ context was /v/, as was the case for listeners with good word recognition. Place/manner errors were common but were not systematic or above chance levels. Listeners in this group made more of the same types of errors made by listeners with good word recognition ability, including: /ða/ for /za/ and /go/ for /do/ (2.2 times as many errors), /ma/ for /na/, /no/ for /mo/ and /vu/ for /bu/ (2.4 times as many errors), and /nu/ for /mu/ (2.5 times as many errors). Error types that were above chance level and unique to listeners with poor word recognition included:
- /a/ context: /ba/ for /da/, /ba/ for /ga/, /da/ for /ga/, /na/ for /ma/, /za/ for /va/, /ða/ for /za/, /va/ for /za/
- /o/ context: /do/ for /go/, /mo/ for /no/
- /u/ context: /vu/ for /bu/, /zu/ for /ðu/, /zu/ for /vu/
The most consistent pattern of errors for this group was confusion within the plosive category. Listeners with good WRS made this type of error only in mistaking /g/ for /d/ in the /o/ context. Note that the group with poor WRS made symmetric m/n and v/z confusions across vowel contexts. While listeners with good word recognition also made m/n confusions, those were close to the chance level. The other two groups did not make v/z confusions.
Overall, this group had 22 consonant confusions above the chance level; errors were diffuse across the consonant choices. This was particularly true for the /a/ vowel context and for the consonant /ð/ across vowel contexts. The highest error rates, which indicated errors that were more systematic, were for those error types held in common with listeners with good word recognition. These listeners therefore make not only more of the same misidentifications as hearing impaired listeners with good word recognition, but also have high error rates for CVs the other two groups could identify relatively well.
SINFA analyses
The SINFA analyses provide another, qualitative picture describing the independent information transmitted and received for each feature. Table 7 presents the SINFA feature definitions chosen from Wang and Bilger (1973) for this analysis. Feature information was calculated for the following features as defined by Wang and Bilger (1973): high/anterior/back, coronal, nasal, strident, fricative/open, and sibilant/duration. Stimulus information for the sibilant/duration and the fricative/open features is identical for each consonant, so these features are joined. Stimulus information for the high/anterior/back feature is redundant: all consonants that are high are also non-anterior and back. The feature with the highest amount of transmitted information is identified in the first iteration. In the second and each subsequent iteration, the features already selected are held constant while the feature with the highest remaining transmission rate is identified. This partialing out of features affects the information transmission computed for the remaining features.
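The quantity tabulated in Tables 8–10, transmitted information for a feature, is in essence the mutual information between the feature value of the stimulus and that of the response, computed from a confusion matrix. The sketch below shows that first-iteration computation for a single binary feature; the conditional partialing out performed in later SINFA iterations is not shown, and the toy matrix is illustrative only.

```python
# First-iteration SINFA quantity: information transmitted about one binary
# feature, from an 8x8 confusion matrix (rows = stimulus, cols = response).
import numpy as np

ORDER = ["b", "d", "g", "m", "n", "ð", "v", "z"]

def feature_transmission(conf, feature):
    """feature maps each consonant in ORDER to 0 or 1."""
    vals = np.array([feature[c] for c in ORDER])
    p = conf / conf.sum()
    # Collapse the 8x8 matrix onto the feature's two values.
    q = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            q[a, b] = p[np.ix_(vals == a, vals == b)].sum()
    px, py = q.sum(axis=1), q.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = q * np.log2(q / np.outer(px, py))
    return float(np.nansum(terms))   # bits of transmitted information

# Toy example with mostly-correct responses and the nasal feature.
toy = np.full((8, 8), 2)
np.fill_diagonal(toy, 60)
nasal = {c: int(c in ("m", "n")) for c in ORDER}
print(round(feature_transmission(toy, nasal), 3))
```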
Table 7.
Features specified for the sequential information analysis.
| | High | Anterior | Back | Coronal | Nasal | Strid | Contin | Open | Fric | Duration | Sibil |
|---|---|---|---|---|---|---|---|---|---|---|---|
| b | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| d | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| g | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| m | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| n | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| v | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 |
| ð | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
| z | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
Table 8–Table 10 present the results of the SINFA analyses of consonant confusions for each of the three listener groups by vowel context. The tables present the amount of information, in bits, received for each feature as well as the percent of information transmitted. It is important to consider the amount of transmitted information in the first column for each group when examining the percent of transmitted (or received) information. Receiving a high percentage of a low-transmitting feature does not necessarily provide much benefit to the listener.
Table 8.
Summary of the SINFA for responses to consonant stimuli in the /a/ context for the three listening groups. The first column for each listener group indicates transmitted information in bits and the second column the percent of that transmitted information that is received by the listeners.
| Feature | Normal Hearing: trans. info (bits) | Normal Hearing: prop. received | Good WRS: trans. info (bits) | Good WRS: prop. received | Poor WRS: trans. info (bits) | Poor WRS: prop. received |
|---|---|---|---|---|---|---|
| Nasal | 0.757 | 0.933 | 0.66 | 0.814 | 0.49 | 0.604 |
| Duration/Sibilant | 0.437 | 0.896 | 0.161 | 0.471 | 0.047 | 0.143 |
| Strident | 0.442 | 0.642 | 0 | 0 | 0 | 0 |
| Continuant/Fricative/Open | 0.303 | 0.75 | 0.379 | 0.632 | 0.319 | 0.429 |
| High/Back/Anterior | 0.166 | 0.528 | 0.38 | 0.781 | 0.072 | 0.212 |
| Coronal | 0.133 | 0.693 | 0.283 | 0.393 | 0.066 | 0.09 |
| Total bits sent | 2.238 | | 1.863 | | 0.994 | |
| Total bits received | 1.789 | | 1.261 | | 0.461 | |
| % info received | 79.92% | | 67.66% | | 46.35% | |
Table 10.
Summary of the SINFA for responses to consonant stimuli in the /u/ context for the three listening groups. The first column for each listener group indicates transmitted information in bits and the second column the percent of that transmitted information that is received by the listeners.
| Feature | Normal Hearing: trans. info (bits) | Normal Hearing: prop. received | Good WRS: trans. info (bits) | Good WRS: prop. received | Poor WRS: trans. info (bits) | Poor WRS: prop. received |
|---|---|---|---|---|---|---|
| Nasal | 0.75 | 0.924 | 0.681 | 0.84 | 0.508 | 0.626 |
| Duration/Sibilant | 0.424 | 0.871 | 0.338 | 0.694 | 0.218 | 0.45 |
| Strident | 0.469 | 0.67 | 0 | 0 | 0 | 0 |
| Continuant/Fricative/Open | 0.186 | 0.612 | 0.221 | 0.478 | 0.088 | 0.192 |
| High/Back/Anterior | 0.122 | 0.493 | 0.186 | 0.418 | 0.146 | 0.34 |
| Coronal | 0.281 | 0.745 | 0.308 | 0.462 | 0.12 | 0.176 |
| Total bits sent | 2.232 | | 1.734 | | 1.08 | |
| Total bits received | 1.760 | | 1.132 | | 0.504 | |
| % info received | 78.85% | | 65.30% | | 46.64% | |
For the group with normal hearing, the /a/ and /u/ contexts were very similar in percent of total information received, at 80% and 79% respectively, with a slightly higher 84.5% in the /o/ context. The /o/ context also had the highest total bits sent of the three vowel contexts, though these amounts are within 10% of each other. For the listeners with hearing loss and good WRS, the percent of information transferred was similar across vowel contexts, ranging from 65–68%, and the total bits of transmitted information were similar as well (1.734–1.863). Listeners with poor WRS received 46–51% of the information transmitted, with /o/ again providing the strongest vowel context for performance and the highest total bits of transmitted information, within a slightly broader range (.994–1.131).
For the /a/ context, the initial SINFA iteration determined that the nasal feature carried the highest amount of independently transmitted information across listener groups. Table 1 shows that the proportion of errors for nasal identification was very low (.01–.04) for listeners with normal hearing. Listeners with hearing loss and good WRS received 12% less of the transmitted information than listeners with normal hearing, and the amount of transmitted information declined by .097 bits. This small loss of information is not surprising, since the F1 and F2 information for nasals and vowels lies in a frequency range at which even the listeners with hearing loss have good hearing. Table 1 shows that the proportion of errors for listeners with good WRS was higher (.07–.31) for nasal CV identification. The transfer of nasal information to listeners with poor WRS declined by an additional 21%, with a much lower transmission rate; error rates for nasal CV identification for this group ranged from .29–.55.
With the nasal feature held constant, the Duration/Sibilant feature (related within this group of consonants to the discrimination of /z/) transmitted the highest independent amount of received information (nearly 90%) for listeners with normal hearing. The proportion of errors on /z/ for these listeners was very low at .01. The amount of transferred information to listeners with hearing loss and good WRS was much less at 47% of a far lower bit rate, and ranked fourth in independently transmitted information for this group. The proportion of errors on /z/ for this group was only .08 for the male talker and .21 for the female talker. For the listeners with poor WRS, transmitted information was only 14% for this feature in the fourth iteration, and the error rate was much higher than that for listeners with good WRS. Related Strident information was affected by partialing out sibilance, and ranked fifth for listeners with normal hearing at 64%. Strident information was reduced in each iteration for the two groups with hearing loss to a negligible level in the sixth iteration.
For listeners with normal hearing, the Continuant/Fricative/Open feature ranked first in the third iteration in amount of independently transmitted information at 75% of an again lowered bit rate of transmission. This information was audible for the listeners with good WRS, and ranked highest in the third iteration also, at 63%. Although this feature ranked highest in the second iteration for listeners with poor WRS, the percentage of transmitted information was low at 43%. Continuants are sounds that involve incomplete closure of the vocal tract, and therefore this feature distinguished stops and nasals from the fricatives among the consonants in this study. The only confusion of this type was the confusion of /v/ for /b/, made by the listeners with normal hearing and those with good WRS.
The High/Back/Anterior feature ranked highest in the second iteration for the listeners with hearing loss and good WRS at 78% of a low bit rate, and in the third iteration for listeners with hearing loss and poor WRS at only 21% of a very low bit rate. It ranked highest only in the sixth iteration for listeners with normal hearing; the percentage of transmitted information was 53% of a very low bit rate. This feature relates to discrimination of /b/ and /d/ from /g/, an error that did not occur at above chance levels for listeners with hearing loss and good WRS, but did occur at above chance levels for listeners with normal hearing in the confusion of /g/ for /d/. Listeners with hearing loss and poor WRS also had high levels of confusions of both /b/ and /d/ for /g/.
Lastly, the Coronal feature ranked first in the fourth iteration for listeners with normal hearing at 69% of a very low rate of independently transmitted information. Both groups of listeners with hearing loss had a first ranking for this feature in the fifth iteration, with 39% of transmitted information for the group with good WRS, but only 9% of an extremely low bit rate for the group with poor WRS. This feature refers to the height of the tongue position, and helps to discriminate /m/ from /n/, /d/ from /b/ and /g/, and /v/ from /ð/ and /z/. Listeners with normal hearing did not have an above-chance number of such errors with the exception of the previously mentioned /g/ for /d/. Listeners with good WRS confused /m/ for /n/, /v/ for /ð/, and /ð/ for /v/ in above-chance numbers, and listeners with poor WRS had 10 different confusions in this category that were above chance levels.
SINFA analyses for the /o/ vowel context tell a somewhat different story, with nasal and sibilant information switching places for the highest amount of information transferred for those with normal hearing. In the first iteration for listeners with normal hearing, the Duration/Sibilant feature ranked highest, with 97% of information transmitted, though the bit rate remained lower than that for nasal information. For those with hearing loss and good WRS, this feature did not rank highest until the fourth iteration, with 38% of information transmitted at a very low bit rate, as was seen in the /a/ context as well. This feature did not rank highest until the fifth iteration for those with poor WRS, with 10% of information transmitted at an extremely low bit rate. The performance of each listener group on /z/ identification in Table 1 supports these results. Nasal information transmission in the /o/ context was similar to that in the /a/ context for all listener groups. The relative relationship between these features is also similar between the /a/ and /o/ contexts, but shifted to the second and fifth iterations for /o/ from the first and fourth iterations for /a/.
The next feature with high rankings was the Continuant/Fricative/Open feature, which was highest in the first iteration for the listeners with poor WRS (63% of a medium bit rate). This feature ranked highest in the second iteration for those with good WRS (75% of a medium bit rate) and in the third iteration for listeners with normal hearing (83% of a medium bit rate). There were no consonant confusions across the categories of fricative, stop or nasal for any listener group.
The High/Back/Anterior feature ranked highest in the middle iterations for all three groups: fourth for those with normal hearing and third for the two groups with hearing loss. Transfer of information was fair for those with normal hearing (73% of a low bit rate) and for those with good WRS (64% of a low bit rate), but very poor (18% of a very low bit rate) for those with poor WRS. Listeners with normal hearing did not confuse /b/ or /d/ with /g/ at above-chance levels, but those with good WRS confused /g/ for /d/ at above-chance levels. Listeners with poor WRS made the same confusion in both directions.
The Coronal feature surfaced in the fifth iteration for listeners with normal hearing and good WRS, and in the fourth iteration for those with poor WRS. Transfer of information was fair for those with normal hearing at 65% of a medium bit rate, poor for those with hearing loss and good WRS at 36% of a low bit rate and very poor for those with poor WRS at 17% of a very low bit rate. The only above-chance confusion for this feature for those with normal hearing was /ð/ for /v/. Listeners with hearing loss and good WRS confused /v/ for /ð/ and vice versa, and /ð/ for /z/ at above chance levels.
The SINFA analyses for the /u/ vowel context present a third picture, in which the only resemblance to the other vowels is to the /a/ context for nasal information across listener groups. As with the /a/ context, nasal information carried the highest amount of information in the first iteration, at very similar percentages.
The second iteration determined the Duration/Sibilant feature to have the highest transmitted information for all listener groups, with 87% of a medium bit rate for those with normal hearing, 69% of a lower bit rate for those with hearing loss and good WRS, and 45% of a low bit rate for those with hearing loss and poor WRS. The only confusions for /z/ in the /u/ context were for both /ð/ and /v/ in the listeners with poor WRS. Again, related Strident information resurfaced for the listeners with normal hearing only in the fourth iteration.
The third iteration found the Coronal feature with the highest place for listeners with normal hearing, with 74.5% of independently transmitted information at a low bit rate. For those with hearing loss and good WRS, 46% of a similar bit rate was transmitted independently in the fourth iteration. Listeners with poor WRS received only 18% of this information at a very low bit rate in the fifth iteration. These within category confusions were seen in those with normal hearing as /v/ for /ð/ errors, in those with hearing loss and good WRS as /m/-/n/ and /v/-/ð/ confusions, and for those with poor WRS as /m/-/n/ confusions, /z/ for /ð/ and /v/, and /v/ for /ð/ errors.
Sixty-one percent of Continuant/Fricative/Open information was transmitted independently to listeners with normal hearing in the fifth iteration; this relatively low amount is reflected in that group's confusion of /v/ for /b/. Listeners with hearing loss received the highest amounts of this information in the third iteration, with 48% of the transmitted information for those with good WRS and only 19% for those with poor WRS. For listeners with good WRS, the confusion of /g/ for /ð/ was above chance levels. For listeners with poor WRS, both of the confusions made by the other two groups occurred at above-chance levels.
The High/Anterior/Back feature surfaced as highest in the sixth iteration for those with normal hearing, in the fifth iteration for those with hearing loss and good WRS, and in the fourth iteration for those with poor WRS. Bit rates of transmitted information were very low for all listener groups. The percent of transmitted information was in reverse order, at 34% for those with poor WRS, 42% for those with good WRS and 49% for those with normal hearing.
Discussion
The principal aim of this experiment was to determine the types of consonant confusions experienced by older listeners with hearing loss and poor word recognition. The results of this experiment suggest that older listeners with hearing loss and poor word recognition abilities not only make more of the same kinds of consonant confusions for voiced initial consonants as older listeners with hearing loss and good word recognition, they also make different kinds of errors. Although some CV identification errors were systematic, such as /ba/ for /da/, most errors were not systematic, and included all the possible CV choices. This was most evident for the consonant /ð/, a consonant that was relatively more difficult for all listeners.
Overall percent correct scores for normal-hearing older listeners were in agreement with previous results (Gelfand, Piper & Silman, 1986). Error rates for these voiced initial consonants were generally low for normal-hearing listeners and for hearing-impaired listeners with good word recognition. Error rates for stop consonant identification were similar for these two groups, in agreement with results reported by Van Tasell et al. (1982). Selected consonant confusions from the listeners with good word recognition are similar to those reported by Dubno et al. (1982) for listeners with gradually sloping sensorineural hearing loss.
Bilger and Wang (1976) did not use the same set of consonants as the present study, nor did they control for amount of hearing loss. Nevertheless, they used the SINFA analysis to divide participants into categories which revolved to some extent around type of hearing loss. The group that had normal hearing to mild hearing loss performed similarly to the listeners with normal hearing in the present study, in terms of which features were most salient. The group they designated as having a high frequency hearing loss performed similarly to the listeners with hearing loss in this study.
The order in which features became prominent through the iterative process of the SINFA was similar for both groups of listeners with hearing loss across vowel contexts. Listeners with normal hearing received more transmitted information, both in bits and in percentage of those bits received. For most features, listeners with hearing loss had fewer bits of transmitted information, and received a lower percentage of that transmitted information. The difference between the two groups of listeners with hearing loss involved both the amount of information transmitted in bits, and the percentage of that transmitted information received, which was much lower for the listeners with poor WRS when compared with those with good WRS.
The present results demonstrate that older listeners with hearing loss and poor word recognition abilities have extraordinary difficulty with voiced initial consonant identification when compared with older listeners with similar hearing acuity and good word recognition skills. One of the more striking errors made by those with poor WRS was the confusion of /z/ with other fricatives. Listeners with good WRS were better able to use the strong energy present above 4000 Hz in /z/ for identification, even though hearing acuity was similar in the high frequencies to listeners with poor WRS. SINFA results corroborated this, suggesting that information for this consonant is transmitted poorly. This suggests the possibility of cochlear dead zones in the listeners with poor WRS. Future examination of this possibility using the TEN test or psychophysical tuning curves could provide illumination on this issue.
Other cues that these listeners seem to miss systematically may involve differences in F2 configuration, particularly for the plosives and nasals. In this case, providing more mid-frequency information to older listeners with hearing loss and poor word recognition might improve their ability to understand speech. This has been suggested by other investigators (Vickers, Moore & Baer 2001; Baer, Moore & Kluk, 2002), in studies examining the effects of low-pass filtered speech on listeners with and without cochlear “dead zones,” as determined by psychophysical tuning curves. Turner and Brus (2001) used CVs in a low-pass filtered speech perception experiment in which they found that for frequencies under 2800 Hz, amplification improved speech perception in every case. Not all studies, however, have found a benefit to limiting high frequency information (Mackersie, Crocker & Davis, 2004; Amos & Humes, 2007). Previous findings with older listeners with good and poor WRS do suggest degraded frequency resolution for complex stimuli in this population when compared with listeners with good word recognition (Phillips, Gordon-Salant, Fitzgibbons & Yeni-Komshian, 2000).
Alternatively, the confusions unique to those with poor WRS involve consonants that are easily visible for speechreading. Recent studies on the efficacy of speechreading training have had encouraging results, with older adults benefiting as much as younger adults from combined auditory and visual speech cues (Grant, Walden & Seitz, 1998; Sommers, Tye-Murray & Spehar, 2005). Perhaps for this subgroup of the hearing-impaired population, speechreading training that focuses on these clearly discernible visemes would be beneficial.
Table 5.
Consonant confusion matrix in the /o/ vowel context for normal-hearing older listeners. Confusions in bold are above chance levels.
Rows show the consonant presented; columns show the consonant response.

| Consonant presented | b | d | g | m | n | ð | v | z | Total |
|---|---|---|---|---|---|---|---|---|---|
| b | 464 | 7 | 51 | 0 | 1 | 13 | 64 | 0 | 600 |
| d | 1 | 558 | 38 | 0 | 0 | 1 | 0 | 2 | 600 |
| g | 0 | 50 | 545 | 0 | 0 | 3 | 1 | 0 | 599 |
| m | 0 | 0 | 0 | 588 | 1 | 11 | 0 | 0 | 600 |
| n | 0 | 0 | 0 | 3 | 596 | 0 | 1 | 0 | 600 |
| ð | 1 | 0 | 0 | 2 | 6 | 522 | 65 | 4 | 600 |
| v | 1 | 2 | 1 | 4 | 0 | 172 | 420 | 0 | 600 |
| z | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 597 | 599 |
| Total | 468 | 617 | 635 | 598 | 604 | 722 | 551 | 603 | 4798 |

Frequency missing = 2.
Table 9.
Summary of the SINFA for responses to consonant stimuli in the /o/ context for the three listening groups. The first column for each listener group indicates transmitted information in bits and the second column the percent of that transmitted information that is received by the listeners.
| Feature | Normal Hearing: trans. info (bits) | Normal Hearing: prop. received | Good WRS: trans. info (bits) | Good WRS: prop. received | Poor WRS: trans. info (bits) | Poor WRS: prop. received |
|---|---|---|---|---|---|---|
| Nasal | 0.71 | 0.94 | 0.646 | 0.797 | 0.446 | 0.55 |
| Duration/Sibilant | 0.528 | 0.972 | 0.127 | 0.381 | 0.021 | 0.097 |
| Strident | 0 | 0 | 0 | 0 | 0 | 0 |
| Continuant/Fricative/Open | 0.501 | 0.828 | 0.555 | 0.749 | 0.463 | 0.627 |
| High/Back/Anterior | 0.168 | 0.729 | 0.21 | 0.636 | 0.062 | 0.182 |
| Coronal | 0.539 | 0.648 | 0.257 | 0.363 | 0.139 | 0.172 |
| Total bits sent | 2.446 | | 1.795 | | 1.131 | |
| Total bits received | 2.067 | | 1.206 | | 0.573 | |
| % info received | 84.51% | | 67.18% | | 50.65% | |
Acknowledgements
This research was supported by a grant from the National Institute on Deafness and Other Communication Disorders (R03-DC004948 01). The author would like to thank Karma Bullock and Diane Eshleman for their assistance in data collection.
Contributor Information
Susan L. Phillips, Department of Communication Disorders, University of North Carolina at Greensboro, P.O. Box 26170, Greensboro, NC 27402
Scott J. Richter, Department of Statistics, University of North Carolina at Greensboro, P.O. Box 26170, Greensboro, NC 27402
David McPherson, Department of Communication Disorders, Brigham Young University, 129 TLRB, Provo, UT 84602.
References
- Agresti A. An Introduction to Categorical Data Analysis. Hoboken, NJ: Wiley; 2007.
- American National Standards Institute. American national standard specification for audiometers (ANSI S3.6-1996). New York: ANSI; 1996.
- Amos NE, Humes LE. Contribution of high frequencies to speech recognition in quiet and noise in listeners with varying degrees of high-frequency sensorineural hearing loss. Journal of Speech, Language, and Hearing Research. 2007;50(4):819–834. doi:10.1044/1092-4388(2007/057)
- Baer T, Moore BC, Kluk K. Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 2002;112(3 Pt 1):1133–1144. doi:10.1121/1.1498853
- Berlin CJ, Hood LJ, Morlet T, Wilensky D, St. John P, et al. Absent or elevated middle ear muscle reflexes in the presence of normal otoacoustic emissions: A universal finding in 136 cases of auditory neuropathy/dys-synchrony. Journal of the American Academy of Audiology. 2005;16:546–553. doi:10.3766/jaaa.16.8.3
- Bussoli TJ, Kelly A, Steel P. Localisation of the bronx waltzer (bv) deafness gene to mouse chromosome 5. Mammalian Genome. 1997;10:714–717. doi:10.1007/s003359900552
- Dubno J, Dirks D, Langhofer L. Evaluation of hearing-impaired listeners using a Nonsense-Syllable Test. II. Syllable recognition and consonant confusion patterns. Journal of Speech and Hearing Research. 1982;25(1):141–148. doi:10.1044/jshr.2501.141
- Gelfand SA, Piper N, Silman S. Consonant recognition in quiet and noise with aging among normal hearing listeners. Journal of the Acoustical Society of America. 1986;80(6):1589–1598. doi:10.1121/1.394323
- Grant KW, Walden BE, Seitz PF. Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration. Journal of the Acoustical Society of America. 1998;103(5 Pt 1):2677–2690. doi:10.1121/1.422788
- Harrison RV. An animal model of auditory neuropathy. Ear and Hearing. 1998;19:355–361. doi:10.1097/00003446-199810000-00002
- Helfer KS, Huntley RA. Aging and consonant errors in reverberation and noise. Journal of the Acoustical Society of America. 1991;90(4 Pt 1):1786–1796. doi:10.1121/1.401659
- Hustedde C, Wiley T. Consonant-recognition patterns and self-assessment of hearing handicap. Journal of Speech and Hearing Research. 1991;34(6):1397–1409. doi:10.1044/jshr.3406.1397
- Kamm C, Morgan D, Dirks D. Accuracy of adaptive procedure estimates of PB-max level. Journal of Speech and Hearing Disorders. 1983;48(2):202–209. doi:10.1044/jshd.4802.202
- Kumar UA, Jayaram AA. Prevalence and audiological characteristics in individuals with auditory neuropathy/auditory dys-synchrony. International Journal of Audiology. 2006;45(6):360–366. doi:10.1080/14992020600624893
- Lindholm JM, Dorman M, Taylor BE, Hannley MT. Stimulus factors influencing the identification of voiced stop consonants by normal-hearing and hearing-impaired adults. Journal of the Acoustical Society of America. 1988;83(4):1608–1614. doi:10.1121/1.395915
- Moore BCJ, Glasberg BR, Stone MA. Dead regions in the cochlea. Ear and Hearing. 2004;25(5):478–487. doi:10.1097/01.aud.0000145992.31135.89
- Pfeiffer E. A short portable mental status questionnaire for the assessment of organic brain deficit in elderly patients. Journal of the American Geriatrics Society. 1975;23:433–441. doi:10.1111/j.1532-5415.1975.tb00927.x
- Phillips SL, Gordon-Salant S, Fitzgibbons PJ, Yeni-Komshian G. Frequency and temporal resolution in elderly listeners with good and poor word recognition. Journal of Speech, Language, and Hearing Research. 2000;43:217–228. doi:10.1044/jslhr.4301.217
- Resnick S, Dubno JR, Hoffnung S, Levitt H. Phoneme errors on a nonsense syllable test. Journal of the Acoustical Society of America. 1975;58:S114.
- Sommers MS, Tye-Murray N, Spehar B. Auditory-visual speech perception and auditory-visual enhancement in normal-hearing younger and older adults. Ear and Hearing. 2005;26(3):263–275. doi:10.1097/00003446-200506000-00003
- Turner CW, Brus SL. Providing low- and mid-frequency speech information to listeners with sensorineural hearing loss. Journal of the Acoustical Society of America. 2001;109(6):2999–3006. doi:10.1121/1.1371757
- Van Tasell D, Hagen L, Koblas L, Penner S. Perception of short-term spectral cues for stop consonant place by normal and hearing-impaired subjects. Journal of the Acoustical Society of America. 1982;72(6):1771–1780. doi:10.1121/1.388650
- Vickers DA, Moore BC, Baer T. Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America. 2001;110(2):1164–1175. doi:10.1121/1.1381534
- Vinay, Moore BC. TEN(HL)-test results and psychophysical tuning curves for subjects with auditory neuropathy. International Journal of Audiology. 2007;46(1):39–46. doi:10.1080/14992020601077992
- Wang MD, Bilger RC. Consonant confusions in noise: a study of perceptual features. Journal of the Acoustical Society of America. 1973;54(5):1248–1266. doi:10.1121/1.1914417