Seminars in Hearing. 2018 Oct 26;39(4):349–363. doi: 10.1055/s-0038-1670698

The Physiologic and Psychophysical Consequences of Severe-to-Profound Hearing Loss

Pamela Souza 1, Eric Hoover 2
PMCID: PMC6235680  PMID: 30443103

Abstract

Substantial loss of cochlear function is required to elevate pure-tone thresholds to the severe hearing loss range; yet, individuals with severe or profound hearing loss continue to rely on hearing for communication. Despite the impairment, sufficient information is encoded at the periphery to make acoustic hearing a viable option. However, the probability of significant cochlear and/or neural damage associated with the loss has consequences for sound perception and speech recognition. These consequences include degraded frequency selectivity, which can be assessed with tests including psychoacoustic tuning curves and broadband rippled stimuli. Because speech recognition depends on the ability to resolve frequency detail, a listener with severe hearing loss is likely to have impaired communication in both quiet and noisy environments. However, the extent of the impairment varies widely among individuals. A better understanding of the fundamental abilities of listeners with severe and profound hearing loss and the consequences of those abilities for communication can support directed treatment options in this population.

Keywords: hearing loss, frequency resolution, temporal resolution, dead region, speech recognition


Learning Outcomes: As a result of this activity, the participant will be able to (1) describe the typical effects of severe-to-profound hearing loss on frequency resolution; (2) describe the expected effects of severe-to-profound hearing loss on speech recognition.

Severe hearing loss has been variously defined as loss with hearing thresholds exceeding 60 to 70 dB hearing level (HL), and profound loss as thresholds exceeding 80 to 90 dB HL. 1 2 3 4 For the purposes of this article, the focus will be on listeners who have hearing loss greater than 60 dB HL across all or most of the speech frequency range (∼500 Hz and above), and on the psychoacoustic and perceptual abilities of this group. From an understanding of audibility and the acoustic information contained in conversational speech, it is anticipated that most listeners with severe hearing loss will be unable to carry on a conversation without some amplification, such as hearing aids or assistive devices. Even amplified speech transmitted via well-fit hearing aids or cochlear implants is unlikely to be easily understood. 2 Consequently, listeners with severe hearing loss must rely more on visual cues, 5 or on cognitive repair strategies. 6 In sum, communication with severe hearing loss is an effortful process for the listener.

Listeners with severe or profound loss are estimated to comprise approximately 10% of all listeners with hearing loss. 7 8 However, the number of individuals with severe hearing loss may be higher in some populations including hearing aid wearers (36% a with severe hearing loss) 9 and veterans over 65 years of age (17% a with severe hearing loss). 10 While relatively low across the lifespan, the incidence of severe hearing loss is highest in the young (children < 2 years) and old (adults > 65 years), 11 12 reflecting congenital and perinatal causes in infants and the increased likelihood of having experienced systemic disease, ototoxicity, or substantial noise exposure in older adults. Congenital or early-onset severe or profound loss in children represents a special case because it impairs speech and language development. Fortunately, infant hearing screening programs have been highly successful at improving early identification of severe hearing loss, 12 13 leading to earlier remediation and treatment. 14

Substantial loss of cochlear function is required to elevate pure-tone thresholds to the severe hearing loss range; yet, individuals with severe or profound hearing loss continue to rely on hearing for communication. Despite the impairment, sufficient information is encoded at the periphery to make acoustic hearing a viable option. Treatment of severe hearing loss can be framed as a problem of maximizing the use of the limited capacity of the existing physiology. This task is made more difficult by the fact that it is impossible to directly assess the function of the cochlea. Along the basilar membrane, the integrity of various structures may vary from nearly healthy to complete loss of function, and the combination of factors resulting in severe hearing loss may differ from one cochlear place to another. For example, a listener may rely on certain frequency ranges where good survival of inner hair cells and low spontaneous rate afferent neurons can provide an accurate representation of the signal despite elevated detection thresholds.

Because it is not possible to look inside the cochlea of listeners with severe or profound hearing loss, estimates of the type and extent of cochlear damage are obtained from animal studies or models, or educated guesses based on indirect measures. 15 16 17 However, some direct data are available from temporal bone studies. 18 19 20 21 22 Those data indicate that cochlear structures may be damaged to various extents by inflammatory disease or ototoxicity. For listeners with severe hearing loss, significant loss of both outer and inner hair cells is expected. 23 24 25 26 Listeners with severe or profound loss also show loss of spiral ganglion cells. The extent of the ganglion cell loss varies (anywhere from 15 to 70% of the normal complement of cells), with the extent of damage associated with the specific etiology. 20 21 In some cases, the ganglion cells may be directly affected by inflammatory processes; in others, their deterioration may be a consequence of inner hair cell damage. 27 28 Because both inner hair cell loss and spiral ganglion fiber loss will affect transmission of auditory input, even audible signals are likely to be distorted.

Some information may not be transmitted at all where inner hair cells are missing or very sparse, the so-called dead regions. 29 Dead regions are more common in listeners with severe or profound loss than in listeners with mild or moderate loss. The prevalence of dead regions in severe hearing loss has been variously reported as ranging from 21 to 76%. 3 30 31 32 33 Dead regions are expected to cause distortions in sound perception 34 when the signal is transmitted off frequency by adjacent cochlear regions with remaining hair cells. For example, some listeners with severe high-frequency loss report that pure tones at high frequencies sound like a click or buzz. Speech information obtained via off-frequency listening is also likely to be distorted. 35 36 Each of these effects will have consequences for speech recognition.

Frequency Selectivity in Severe Hearing Loss

Testing suprathreshold auditory processing can provide a better understanding of the capabilities of the listener given their individual auditory system. The ability to resolve signals that differ in frequency is a basic function of the auditory system that underlies many complex tasks. Those tasks include speech understanding in quiet and in noise, perception of prosodic cues, and music perception. Psychophysical measures of frequency selectivity rely on a combination of peripheral and central mechanisms, but in listeners with peripheral hearing loss the primary limiting factor is damage to the active processes in the cochlea. Frequency selectivity is provided to a first approximation by the variation in mass and stiffness of the basilar membrane. The active process of the outer hair cells and their afferent and efferent innervation serves to sharpen the tuning around a frequency of interest by suppressing the spread of excitation from energy at neighboring frequencies. Because the active process is also responsible for the detection of low-intensity sounds, listeners with tone detection thresholds in the severe hearing loss range will necessarily have impaired frequency selectivity.

Although there is broad agreement that sensorineural hearing loss results in degraded frequency selectivity, the relationship between pure-tone threshold elevation and frequency selectivity is not straightforward. The measurement of frequency selectivity in listeners with hearing loss is complicated by several factors. First, the presentation level of the signals must be increased to accommodate hearing loss. At high presentation levels, frequency selectivity is broader in listeners with normal hearing, and this broadening affects the low-frequency side disproportionately. 37 38 Second, differences in the distribution of damage along the basilar membrane may allow a listener to use relatively good hearing in an off-frequency region away from the test frequency to perform the task. This issue is exacerbated by the use of high presentation levels that can produce a spread of excitation over a broad range of the cochlea even in listeners with normal hearing. Several studies have shown that frequency selectivity, as quantified by the bandwidth of a filter representing the range of frequencies influencing the perception of a tone of a given frequency, increases with audiometric threshold. 39 40 However, listeners with similar hearing may vary widely in the extent to which frequency selectivity is impaired. 38 39 41 42 43 44 45 This variability suggests that measuring frequency selectivity can provide useful information about listeners with severe hearing loss and may be useful to direct audiological care.

Several methods have been used to measure frequency resolution for nonspeech signals. Examples include psychophysical tuning curves, which can be measured by presenting a tone at a fixed level and measuring the amplitude of a bandpass or notched noise that just masks the tone; masking patterns, which are measured by presenting a fixed noise and adjusting the level of tones at different frequencies to find the detection threshold; and use of high-rate spectral modulation in a detection or ripple phase-reversal task. The following section reviews two such measures: spectral ripple phase-reversal and psychophysical tuning curves.

Spectral Ripple

Spectral ripple evaluates the highest spectral modulation rate at which the listener is able to detect an inversion of the phase of the modulator of a band-limited carrier. 46 47 48 Spectral modulation at high rates is thought to reflect the frequency selectivity of the cochlea because the listener must detect a difference in amplitude in a narrow range of frequencies. 49 The higher the ripples per octave (rpo) of the stimulus, the more difficult it will be to accurately discriminate ripple phase. Increasing the spectral modulation rate until the listener can no longer hear that difference indicates that the rpo has exceeded the listener's ability to resolve frequency information and serves as an indirect measure of cochlear tuning. This has been validated in correlational studies, in which spectral ripple has been shown to relate to other behavioral estimates of frequency selectivity including tone detection in notched noise and high-rate spectral modulation detection. 50 51 Spectral ripple is particularly suitable to assess frequency selectivity in listeners with cochlear implants and severe hearing impairment because the stimuli can be presented in a narrow dynamic range. All other measures of frequency selectivity require increasing the level of a signal until another signal that differs in frequency or time becomes inaudible, and the increase in level can approach the loudness discomfort level of a listener with a limited dynamic range.
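To make the task concrete, the following sketch generates a rippled-spectrum noise and its phase-inverted counterpart for a chosen ripple density. It is an illustration only, assuming NumPy; the component count, depth, bandwidth, and duration are placeholder values rather than the exact parameters of any cited study.

```python
"""Illustrative sketch only: generate a rippled-spectrum noise and its
phase-inverted counterpart for a chosen ripple density (rpo).  Component
count, depth, bandwidth, and duration are placeholder values, not the exact
parameters of any cited study."""
import numpy as np

def ripple_stimulus(rpo, ripple_phase=0.0, f_lo=100.0, f_hi=5000.0,
                    n_components=800, depth_db=30.0, fs=22050, dur=0.5,
                    rng=None):
    """Sum of log-spaced sinusoids whose level follows a sinusoidal ripple
    on a log-frequency axis; ripple_phase=np.pi gives the inverted version."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(int(fs * dur)) / fs
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_components)
    octaves = np.log2(freqs / f_lo)                       # component position in octaves
    level_db = (depth_db / 2.0) * np.sin(2 * np.pi * rpo * octaves + ripple_phase)
    amps = 10.0 ** (level_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_components)      # random starting phases
    sig = np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]),
                 axis=0)
    return sig / np.max(np.abs(sig))                      # normalize for presentation

standard = ripple_stimulus(rpo=2.0, ripple_phase=0.0)
inverted = ripple_stimulus(rpo=2.0, ripple_phase=np.pi)   # the interval the listener must pick
```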

Spectral modulation has proven to be an important predictor of performance on complex tasks including speech understanding in noise. Detection of low-rate spectral modulation predicted speech in quiet and noise for listeners with cochlear implants. 52 53 54 For listeners with acoustic hearing, spectral modulation predicts individual differences in speech understanding in quiet 48 50 and in noise. 55 56 Spectral ripple discrimination is an effective means of assessing the residual hearing ability of people fit with cochlear implants, predicting speech understanding in quiet and noise. 46 57 However, there is considerable evidence to suggest that it is not spectral ripple discrimination thresholds per se that relate to speech, but the ability to detect spectral modulation at rates of 0.25 to 2 rpo.

Spectral ripple assesses the highest spectral modulation rate the listener can use to discriminate a modulation phase reversal, and studies that found a relationship between spectral ripple and speech had many listeners for whom that maximum modulation rate was below 2 rpo. 3 46 48 Studies that directly compared ripple discrimination with detection at 2 rpo showed that 2 rpo thresholds—not spectral ripple—predicted speech understanding in quiet and noise for cochlear implant (CI) listeners. 53 58 Davies-Venn et al 50 evaluated spectral ripple, spectral modulation detection, and frequency selectivity using psychophysical tuning curves. For listeners with normal hearing, ripple thresholds were high (interquartile range: 7–8 rpo). For listeners with hearing impairment, ripple thresholds were lower but above 2 rpo for most listeners (interquartile range: 2–4 rpo). The authors found that detection at 2 rpo was significantly correlated with all frequency selectivity and spectral modulation detection measures and was the best predictor of speech in quiet and noise. Adding ripple discrimination threshold and 2,000-Hz tuning curve bandwidth marginally improved the prediction. Therefore, the benefit of assessing spectral ripple in listeners with severe hearing loss may be in screening for those whose frequency selectivity is so poor as to affect the coding of important speech cues at low spectral modulation rates.

In a recent study, 3 frequency selectivity was evaluated in listeners with severe hearing loss using a spectral ripple discrimination task. 46 48 Ripple stimuli were generated from 800 sinusoidal components logarithmically spaced between 100 and 5,000 Hz. Sinusoid amplitudes were scaled to produce a spectrum with sinusoidal modulation in logarithmic frequency and linear amplitude, and the overall spectral shape was adjusted to match the long-term average spectral shape of speech. The depth of spectral modulation was 30 dB. In a three-alternative forced-choice task, modulation rate was adaptively varied using a two-down, one-up rule to find the highest modulation rate at which an inversion in the phase of spectral modulation could be detected. The presentation level was set to a level rated “comfortable but slightly loud” using the Contour test. 59 Stimuli and procedures were similar to those used by Won and colleagues 46 to assess spectral ripple discrimination in listeners with cochlear implants. For the 35 listeners with severe hearing loss in our study, the mean maximum spectral modulation rate detected was 1.77 rpo, with a range of 0.33 to 4.97 rpo. This is a very low rate, consistent with the mean (1.73 rpo) and range (0.60–4.87 rpo) of thresholds obtained by Won et al 46 in listeners with cochlear implants. For the many listeners with severe hearing loss with spectral ripple discrimination thresholds below 2 rpo, it is likely that frequency selectivity is a limiting factor in the perception of low-rate spectral modulation important for speech.
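The adaptive logic of such a task can be sketched as follows. The `run_trial` function is a hypothetical stand-in for a single three-alternative forced-choice trial, and the step size and stopping rule are illustrative assumptions rather than the exact procedure of the cited studies.

```python
"""Minimal sketch of a two-down, one-up adaptive track over ripple density.
`run_trial(rpo)` is a hypothetical stand-in for one three-alternative
forced-choice trial (True if the listener correctly picks the phase-inverted
interval).  Step size and stopping rule are illustrative assumptions."""
import numpy as np

def adaptive_ripple_threshold(run_trial, start_rpo=0.5, step_factor=2**0.5,
                              n_reversals=8, max_trials=80):
    rpo, direction = start_rpo, 0
    correct_streak, reversals = 0, []
    for _ in range(max_trials):
        if run_trial(rpo):
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row: make the task harder
                correct_streak = 0
                if direction == -1:
                    reversals.append(rpo)      # direction changed: record a reversal
                direction = +1
                rpo *= step_factor
        else:
            correct_streak = 0
            if direction == +1:
                reversals.append(rpo)
            direction = -1
            rpo /= step_factor                 # one wrong: make the task easier
        if len(reversals) >= n_reversals:
            break
    # common convention: geometric mean of the final reversals
    return float(np.exp(np.mean(np.log(reversals[-6:])))) if reversals else rpo
```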

As an estimate of frequency resolution, spectral ripple has several limitations. One limitation is that the listener can use any part of the carrier frequency range to detect a change in the spectral shape, and thus the threshold obtained represents the best frequency resolution within the carrier bandwidth. Another limitation is that spectral ripple is unable to provide information about the bandwidth or shape of the tuning curve at a given frequency. Spectral ripple can be useful as a gross estimate of the best frequency resolution of the impaired auditory system, and to predict a deficit in speech perception when thresholds are so low as to affect the perception of speech cues, but not for deciding whether or not to provide gain at a given frequency, or for estimating how much a specific signal will mask the detection of another signal. For these decisions, a direct estimate of tuning curves may offer more useful guidance.

Estimating Auditory Filter Width

Psychophysical tuning curves and masking patterns can be measured using simultaneous or non-simultaneous signals, varying the extent to which the signal and masker overlap in both frequency and time. These methods result in the characteristic tuning curve (or inverted tuning curve masking pattern) with a rounded tip and steeply sloping high- and low-frequency sides. When controlling for off-frequency listening, tuning curves for listeners with hearing loss tend to be broader and show increased masking on the low-frequency side. In listeners with severe hearing loss, dead regions can lead to a shift in the tip of the tuning curve away from the signal frequency, or a loss of a well-defined best frequency resembling a “W” rather than a “V” shape. 60 61

Traditionally, auditory filter shapes are estimated in the following way. The listener completes a series of test conditions in which she or he is asked to detect a probe tone in the presence of a notched noise. The maskers are placed symmetrically or asymmetrically around the tone frequency, creating a variety of notch widths. 62 At each notch width, the tone level is fixed and the masker level is varied adaptively to obtain thresholds. A modeling procedure 63 is used to calculate the auditory filter parameters and (estimated) equivalent rectangular bandwidth. Such methods have long been considered the gold standard for assessing frequency resolution. However, they require significant time and listener training, and accurate modeling depends on the fidelity of the underlying data.
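For illustration, the sketch below fits a symmetric roex(p) filter to hypothetical notched-noise thresholds with the power-spectrum model and converts the fitted slope to an equivalent rectangular bandwidth. The data values are invented, and published procedures typically allow asymmetric filter slopes and correct for off-frequency listening.

```python
"""Simplified sketch of a notched-noise auditory filter fit (power-spectrum
model with a symmetric roex(p) filter).  The threshold values are hypothetical;
published procedures typically allow asymmetric slopes and account for
off-frequency listening."""
import numpy as np
from scipy.optimize import curve_fit

fc = 2000.0                                      # probe frequency, Hz
g = np.array([0.0, 0.1, 0.2, 0.3, 0.4])          # notch half-width as a proportion of fc
masker_at_threshold_db = np.array([48.0, 51.0, 55.0, 59.0, 63.0])  # hypothetical data

def power_spectrum_model(g, p, const):
    """Masker level needed to just mask the fixed probe, to within a constant."""
    noise_passed = (g + 2.0 / p) * np.exp(-p * g)   # roex(p) filter integrated over one noise band
    return const - 10.0 * np.log10(noise_passed)

popt, _ = curve_fit(power_spectrum_model, g, masker_at_threshold_db, p0=(20.0, 50.0))
p_hat = popt[0]
erb_hz = 4.0 * fc / p_hat                        # ERB of a symmetric roex(p) filter
print(f"fitted p = {p_hat:.1f}, estimated ERB = {erb_hz:.0f} Hz")
```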

A more time-efficient method of obtaining auditory filter shapes is fast psychophysical tuning curves (fast-PTCs). 64 65 Unlike other methods of estimating auditory filter shapes, fast-PTC can estimate the filter at a single frequency in less than 5 minutes. The task relies on Békésy tracking of a narrow band of noise that gradually sweeps across a range of center frequencies. The listener is asked to hold down a button when a pulsed probe tone is audible, and to release the button when the tone becomes inaudible due to being masked by the noise. The result is a jagged track of the presentation level of the noise as a function of frequency. Because it relies on Békésy tracking, fast-PTC can be susceptible to response bias and inattention, but it has the advantage of not assuming any particular tuning curve shape. The assumptions about tuning curve shape used to fit data obtained with other methods may not be valid for listeners with significant cochlear impairment and an unknown distribution of healthy physiology along the basilar membrane.
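A minimal sketch of the Békésy-tracking logic is shown below. The `tone_audible` function stands in for the listener's button state, and the sweep range, update rate, and step size are illustrative assumptions rather than the parameters of any published implementation.

```python
"""Minimal sketch of the Bekesy-tracking logic behind a fast-PTC run.
`tone_audible(noise_cf, noise_level)` stands in for the listener's button
state; sweep range, update rate, and step size are illustrative, and a real
implementation also handles calibration, audio output, and pulsing the probe."""
import numpy as np

def fast_ptc_track(tone_audible, probe_hz=2000.0, sweep_s=240.0, update_hz=2.0,
                   level_step_db=1.0, start_db=50.0, max_db=90.0):
    n_steps = int(sweep_s * update_hz)
    # noise center frequency sweeps from one octave below to one-half octave above the probe
    cf = np.logspace(np.log2(probe_hz / 2.0), np.log2(probe_hz * 2.0**0.5),
                     n_steps, base=2.0)
    level, track = start_db, []
    for f in cf:
        # button held (tone audible): raise the masker; released: lower it
        level += level_step_db if tone_audible(f, level) else -level_step_db
        level = min(level, max_db)               # conservative output limit
        track.append((f, level))
    return np.array(track)                       # columns: noise center frequency (Hz), level (dB)
```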

As part of the battery of tests administered to participants with severe hearing loss reported in the study of Souza et al, 3 fast-PTCs were measured in 35 adults with severe hearing loss for probe frequencies of 500 and 2,000 Hz presented at 10 dB sensation level (SL). The masking noise was a band of Gaussian noise 100 or 180 Hz wide (respectively), generated in the time domain by adding successive windows of bandpass-filtered noise varying gradually in center frequency from one octave below to one-half octave above the probe frequency. 66 To prevent loudness discomfort from the continuously presented noise, the maximum presentation level of the noise was conservatively set at 90 dB SPL. Békésy tracks were smoothed using local polynomial regression fitting 67 with a smoothing parameter of 0.25. The width of each resulting curve, in Hz, was measured at a point 6 dB above its minimum, and the frequency of the minimum was divided by that width to yield an estimate of the quality factor at 6 dB (Q6). Q6 estimates were compared qualitatively with raw tracking data to manually reject tracks for which clipping affected the result. Fig. 1 shows example curves obtained from listeners with severe hearing loss, including successful and unsuccessful attempts to obtain an estimate of Q6. Note that the conservative 90 dB SPL output limit, chosen to prevent potentially harmful exposure over the duration of this and other suprathreshold measures presented at high intensities, resulted in fewer acceptable curves than has been reported in other published work. 68 The small number of acceptable curves illustrates the difficulty of obtaining accurate fast-PTCs in this population while maintaining loudness comfort.
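The sketch below illustrates the Q6 calculation just described, using the lowess implementation in statsmodels as a stand-in for the local polynomial regression of the original analysis; manual inspection and rejection of clipped tracks are omitted.

```python
"""Sketch of the Q6 calculation described above, using statsmodels' lowess as a
stand-in for the local polynomial regression of the original analysis; manual
inspection and rejection of clipped tracks are omitted."""
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def q6_from_track(freq_hz, level_db, frac=0.25):
    """Estimate Q6 from one Bekesy track (masker level vs. masker center frequency)."""
    logf = np.log2(np.asarray(freq_hz, dtype=float))
    smoothed = lowess(np.asarray(level_db, dtype=float), logf, frac=frac, return_sorted=True)
    f_fit, l_fit = 2.0 ** smoothed[:, 0], smoothed[:, 1]
    tip = int(np.argmin(l_fit))                  # tip of the tuning curve (lowest masker level)
    criterion = l_fit[tip] + 6.0                 # 6 dB above the tip

    def crossing(low_side):
        idx = np.arange(len(l_fit))
        candidates = idx[:tip + 1] if low_side else idx[tip:]
        at_or_above = candidates[l_fit[candidates] >= criterion]
        if at_or_above.size == 0:
            return None                          # track clipped: no crossing on this side
        i = at_or_above[-1] if low_side else at_or_above[0] - 1
        j = i + 1
        # linear interpolation in log frequency between the bracketing samples
        t = (criterion - l_fit[i]) / (l_fit[j] - l_fit[i])
        return float(2.0 ** (smoothed[i, 0] + t * (smoothed[j, 0] - smoothed[i, 0])))

    f_low, f_high = crossing(True), crossing(False)
    if f_low is None or f_high is None:
        return None                              # cannot compute an unbiased Q6
    return f_fit[tip] / (f_high - f_low)         # Q6 = tip frequency / 6-dB bandwidth
```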

Figure 1. Individual fast psychophysical tuning curve (fast-PTC) data for six listeners with severe hearing loss. Audiometric thresholds for the test ear are shown in dB SPL (thick black line). The target tone was presented at 10 dB SL at 500 and 2,000 Hz (circle). Each fast-PTC track attempted for each target frequency is shown (thin black lines), including tracks that failed or were incomplete. Partial data were collected for nearly all of the 35 listeners who participated in the study, but estimates of bandwidth 6 dB above the minima could be calculated for only 10. Successful tuning curves were typically obtained only when audiometric thresholds were 60 dB SPL or better at the target frequency. Partial, clipped, and incomplete tracks were due to a variety of factors, including the limited dynamic range of the listeners and the output limit of 90 dB SPL, as demonstrated by several of the examples shown. Even in the heavily clipped conditions, the partial outline of a tuning curve can be observed in nearly all cases.

Of the 35 listeners evaluated, estimates of tuning curve sharpness 6 dB above the tip (Q6) could be calculated for only 10 participants. Results show variable but generally broader tuning, consistent with other reports for listeners with severe or profound loss. 44 45 69 Indeed, Faulkner et al 44 reported that some participants—those with the poorest thresholds in the profound hearing loss range—had essentially no ability to distinguish sounds in frequency.

For the majority of participants, the output limit of 90 dB SPL was too low to mask the probe tone for some or all of the frequency range presented. Partial tuning curves were obtained for most listeners, but clipping of the tuning curve prevented an unbiased estimate of Q6. Visual inspection of the partial data obtained for many subjects suggests wide variability in bandwidth, best frequency, and shape of tuning curves in severe hearing loss, as shown in the examples in Fig. 1. A method of obtaining tuning curves that does not require uncomfortable and potentially dangerous presentation levels would considerably improve our understanding of the idiosyncrasies of individuals with severe hearing loss.

Temporal Resolution in Severe Hearing Loss

Temporal resolution can be broadly divided into two categories: fast and slow. Fast temporal resolution refers to the ability of the auditory system to code the instantaneous amplitude of the signal, or temporal fine structure (TFS). Slow temporal resolution refers to the ability of the auditory system to process changes in the intensity of a sound over time, integrated over multiple cycles of a carrier signal, or temporal envelope (TE). TFS and TE are represented in the cochlea and peripheral auditory brainstem up to at least the medial superior olivary complex, 70 71 72 73 where TFS is compared between the ears to establish the azimuth of an auditory source. It remains controversial whether TFS is maintained at higher levels of the auditory system, and exactly how TFS coding contributes to the processing of monaural signal features like pitch and intensity. TE is coded in the cochlea as the probability of auditory neuron firing given the amplitude of displacement at a cochlear place, and phase locking to the TE has been observed from the cochlea up to the auditory cortex. 74 75
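The distinction between TE and TFS can be illustrated with a Hilbert-transform decomposition of a narrowband signal. The test signal and band edges below are arbitrary choices for the example and are not drawn from any cited analysis.

```python
"""Illustrative decomposition of a narrowband signal into temporal envelope (TE)
and temporal fine structure (TFS) via the Hilbert transform.  The test signal
and band edges are arbitrary choices for the example."""
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
# a 1-kHz tone with 10-Hz amplitude modulation stands in for one narrow band of speech
x = (1 + 0.8 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)

# restrict to a single analysis band before the decomposition
sos = butter(4, [700, 1400], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, x)

analytic = hilbert(band)
envelope = np.abs(analytic)              # TE: slow changes in amplitude over time
tfs = np.cos(np.angle(analytic))         # TFS: rapid fluctuations near the carrier frequency
```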

Information in acoustic signals is conveyed primarily through changes in amplitude as a function of frequency over time, and TE represents this information after an initial stage of cochlear filtering. Many basic auditory features rely on TE local to a specific frequency region, including loudness, pitch and source location, and auditory object formation. 76 77 78 In general, the small number of studies that included listeners with severe or profound loss found that, provided the signal bandwidth was audible, TE processing (assessed via gap detection or temporal modulation detection) was essentially unimpaired. 45 69 79 80 This is fortunate because listeners with severe hearing loss, as a consequence of their poorer frequency selectivity, are expected to rely to a greater extent on TE than listeners with mild to moderate loss. Amplification for listeners with severe or profound loss who have a small dynamic range necessarily requires amplitude compression to reduce the range of speech input levels to an acceptable output range, and that compression will alter the TE to varying degrees depending on the amplification parameters. Some authors have expressed concern that such alteration might affect cues on which the listener depends. This issue is discussed later in relation to speech recognition.

Speech Recognition in Listeners with Severe Hearing Loss

Speech recognition depends on the ability to distinguish sounds in frequency and time, as well as to track dynamic variations (such as formant trajectories) which require the ability to resolve frequency detail. Accordingly, a listener with severe hearing loss (who will almost certainly have broadened auditory filters) can expect to receive a degraded speech signal. How degraded will it be? Table 1 summarizes data from studies that have attempted to characterize the speech-recognition abilities in this population. All of the studies noted, on average, poor recognition, even with appropriate amplification. Most also noted very high individual variability, with as much as an 80% range about the mean score. 2 81 82 To illustrate this variability, consider data from Souza et al 3 representing individual performance for speech in quiet ( Fig. 2 ) and in noise ( Fig. 3 ) for listeners with a range of hearing thresholds from normal hearing (open bars) to severe hearing loss. Each individual's score is labeled with his or her pure-tone average. Note the high quiet speech recognition scores for listeners with normal hearing, with decreasing performance (and increasing heterogeneity) for the listeners with hearing loss exceeding 60 dB HL. All of the listeners with severe hearing loss had at least a moderate signal-to-noise ratio (SNR) loss, and the majority had a severe SNR loss. 83 To put this in context, consider that many everyday listening situations have SNRs on the order of 0 to 10 dB, including restaurants, public transportation, hospitals, and shopping centers. 84 85 86 Some of the listeners shown in Fig. 3 would require a 10 dB or greater improvement in SNR to be able to communicate successfully in those environments. That level of SNR improvement is likely to require assistive devices (i.e., remote microphones) in addition to hearing aids with directional microphones.

Table 1. Summary of Studies Examining Speech Recognition in Severe Hearing Loss.

Study | Number of participants | Mean pure-tone average (dB HL) | Mean age (y) | Age range (y) | Words in quiet (% correct) | Sentences in noise (dB SNR threshold)
Choi et al 82 | 27 | 86 | 49 | 25–87 | 42 | Not tested
Souza et al 3 | 36 | 69 | 79 | 54–93 | 52 | 17.5
Flynn et al 2 | 34 | 76 | 66 | 19–87 | 55 | Not tested
Davies-Venn and Souza 51 | 22 | 72 | 60 | 22–89 | Not tested | 12.7
Souza et al 102 | 13 | 74 | 58 | 19–88 | 61 | Not tested
Kishon-Rabin et al 69 | 12 | 91 | 17 | 16–18 | 40 | Not tested

Abbreviations: HL, hearing level; SNR, signal-to-noise ratio.

Figure 2. Individual scores for speech recognition in quiet (NU6 monosyllables, presented at 30 dB SL re: pure-tone average). The values on the x-axis show the pure-tone average for each individual. Open bars show results for listeners with normal hearing.

Figure 3. Individual signal-to-noise ratio (SNR) threshold, obtained using the Quick Speech-in-Noise test. 111 The test was administered per its instructions, with speech set to a “loud but okay” level for each participant.

Research has shown that most listeners with hearing loss are accurate reporters of their level of communication difficulty. 87 Fig. 4 summarizes responses of 36 participants with severe hearing loss to individual questions from the speech subscale of the Speech, Spatial, and Quality (SSQ) questionnaire. 88 For each question, the participants were asked to rate their ability to hear on a continuous scale from 0 (not at all) to 10 (perfectly). For this representative sample of listeners with severe hearing loss, the highest-rated communication ability was for three situations with no background noise: a single talker in a quiet, carpeted room; a group in a quiet room, but where only one person speaks at a time; and the ability to follow a phone conversation. The lowest-rated situations were those with simultaneous speakers, such as speaking on the telephone with another talker in the room, or talking to someone while watching television. The latter situations certainly involve both informational and energetic masking, but may also reflect the consequences of poor frequency selectivity. The ability to segregate and track auditory streams based on pitch differences is fundamental to the ability to focus on one talker in the presence of a competing talker. 89 Hearing loss is known to impair this ability, albeit with individual variability. 90 91 It is likely that the listeners presented in Fig. 4, many of whom have very poor frequency selectivity, have greater difficulty in this regard.

Figure 4. Mean ratings for individual questions in the Speech, Spatial, and Quality (SSQ) speech subscale. A score of 10 indicates that the listener feels s/he hears “perfectly” in that scenario. A score of 0 indicates that the listener feels s/he hears “not at all” in that scenario. Error bars indicate 95% confidence intervals.

Treatment of severe hearing loss with amplification devices focuses on improving audibility through fast-acting, wide dynamic range compression and improving SNR through directional microphones and noise reduction algorithms. A consequence of these signal-processing strategies is distortion of the temporal cues to speech understanding. 92 93 94 95 96 97 Several authors have inferred that listeners with severe hearing loss rely to a greater extent on TE cues, based on their reduced benefit from compression amplification even when that amplification would be expected to improve recognition on the basis of improved audibility. 98 99 100 Some of those authors also have noted variability among individuals, such that some seem more sensitive to the effect than others. 98 101 Others have noted that the reduced benefit of amplitude compression is associated with phoneme confusions that depend on TE 51 102 and that the effect is reduced by using longer time constants, which minimize envelope distortion. 103 Following from such findings, some commercial hearing aids now employ envelope-preserving processing for listeners with severe or profound loss. 104 While there are few data on the overall effectiveness of such strategies, they are a reasonable response to the known effects of severe hearing loss.
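The effect of compression time constants on the TE can be illustrated with a simple single-channel compressor sketch. The threshold, ratio, and time constants below are illustrative assumptions and do not represent any commercial algorithm or prescriptive fitting.

```python
"""Single-channel compressor sketch showing why release time matters for the TE:
a fast release tracks and flattens the envelope, while a slow release leaves it
largely intact.  Threshold, ratio, and time constants are illustrative only and
do not represent any commercial algorithm."""
import numpy as np

def wdrc(x, fs, threshold_db=50.0, ratio=3.0, attack_ms=5.0, release_ms=50.0,
         ref_db=100.0):
    """Apply level-dependent gain with one-pole attack/release smoothing."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    smoothed, out = -120.0, np.empty_like(x)
    for n, sample in enumerate(x):
        inst_db = ref_db + 20.0 * np.log10(abs(sample) + 1e-9)   # crude input level estimate
        coef = a_att if inst_db > smoothed else a_rel
        smoothed = coef * smoothed + (1.0 - coef) * inst_db
        gain_db = 0.0
        if smoothed > threshold_db:                              # compress only above threshold
            gain_db = (threshold_db - smoothed) * (1.0 - 1.0 / ratio)
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out

# the same amplitude-modulated input processed with fast vs. slow release
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
fast = wdrc(x, fs, release_ms=30.0)    # fast-acting: modulation depth of the TE is reduced
slow = wdrc(x, fs, release_ms=800.0)   # slow-acting: TE largely preserved
```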

To what extent is overall communication ability related to the deficits (as characterized by psychoacoustic tests) that are a consequence of specific auditory damage patterns? In the study of Souza et al, 3 regression models indicated that amount of hearing loss, spectral resolution, and extent of cochlear dead regions each accounted for approximately 20% of the variability in the quiet word recognition score. Importantly, the psychoacoustic measures accounted for additional variance after hearing thresholds were taken into account. Similar conclusions were drawn by Choi et al, 82 who found significant correlations between monosyllabic word recognition in quiet and a measure of spectrotemporal modulation, and by Rosen and colleagues, 45 who reported that auditory filter width corresponded with the ability to distinguish spectrally different vowel sounds in a small group of listeners with profound loss. To summarize these findings in another way, it is not just that listeners with more hearing loss have poorer spectral resolution. Instead, imagine two listeners with similar amounts of hearing loss but with different spectral resolution ability. In that case, the listener with poorer spectral resolution will also have poorer speech recognition. In other words, if the spectral resolution ability of each individual with severe-to-profound hearing loss were known, it would (partially) explain why some have relatively good abilities to understand speech, and others have more difficulty communicating.
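The logic of that comparison can be illustrated with a hierarchical regression on simulated data. The values below are invented for illustration and are not the data of any cited study: the question is simply whether a psychoacoustic measure explains variance in word recognition beyond the pure-tone average.

```python
"""Sketch of the hierarchical-regression logic described above, on simulated
data (values are invented for illustration and are not the study data): does a
psychoacoustic measure add explained variance beyond the pure-tone average?"""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 35
pta = rng.uniform(60, 95, n)                       # pure-tone average, dB HL
ripple = rng.uniform(0.3, 5.0, n)                  # spectral ripple threshold, rpo
words = np.clip(90 - 0.6 * (pta - 60) + 6.0 * np.log2(ripple) + rng.normal(0, 8, n), 0, 100)

base = sm.OLS(words, sm.add_constant(pta)).fit()                            # audiogram only
full = sm.OLS(words, sm.add_constant(np.column_stack([pta, ripple]))).fit() # add ripple threshold
print(f"R2, PTA only:        {base.rsquared:.2f}")
print(f"R2, PTA + ripple:    {full.rsquared:.2f}")
print(f"additional variance: {full.rsquared - base.rsquared:.2f}")
```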

Summary

Listeners with severe or profound hearing loss are a unique and varied population, deserving of research and clinical attention. In general, listeners with severe hearing loss can expect to have impaired auditory abilities, particularly poor spectral resolution and the resultant consequences for speech recognition. However, the extent of impairment on psychoacoustic measures varies widely among individuals. Similarly, there is wide variability in speech recognition in quiet. The degree of communication difficulty is linked to spectral resolution and the extent of cochlear damage (e.g., dead regions), but is also influenced by other factors.

There is strong potential for the use of tests of suprathreshold auditory processing in the assessment and treatment of severe hearing loss. At this time, there is little guidance for clinicians beyond gain targets for compression amplification derived from the audiogram. Clinical tools to assess suprathreshold deficits (including both speech and nonspeech tasks) that are suitable for presentation in a severely reduced dynamic range would be a useful addition to the audiometric battery. Further research is needed to understand the individual differences among listeners with severe hearing loss and how those differences can be leveraged to guide treatment.

Severe or profound loss causes difficulty with everyday communication that has implications for work, social activities, and overall health. As a consequence of communication difficulty, listeners with severe hearing loss report higher levels of anxiety and stress, 105 and greater reluctance to participate in social occasions. 106 With more difficulty communicating, listeners with severe or profound hearing loss may realize lower levels of education or experience limitations in the work environment. 11 107 According to recent epidemiology data, listeners with severe hearing loss are five times more likely than those with normal hearing to develop dementia. 108 The extent to which these issues can be ameliorated by prompt and effective auditory treatment including well-fit hearing aids or cochlear implants is unclear and is a source of continued research.

While hearing aids cannot address all of the difficulties of severe hearing loss, they will dramatically improve speech audibility. Despite the obvious need for amplification, not all listeners who identify as having severe hearing loss report hearing aid use. 9 109 Some individuals with severe or profound loss can benefit from cochlear implants, although not all such individuals will be aware that this is an option or have access to the necessary evaluation process. And, despite the expected benefit of hearing-assistive technology such as remote microphones for improving communication in noise, these devices are underused in this population. 3 110 A better understanding of the fundamental abilities of listeners with severe and profound hearing loss and the consequences of those abilities for communication can facilitate patient education, improve counseling, and better direct rehabilitation options.

Acknowledgments

This work was supported by the National Institutes of Health (R01 DC006014).

Conflict of Interest The authors have no conflict of interest associated with this work.

a: Based on self-reported responses to hearing loss survey questions. Such measures are reliable relative to actual pure-tone thresholds.

References

  • 1.Van Tasell D J. Hearing loss, speech, and hearing aids. J Speech Hear Res. 1993;36(02):228–244. doi: 10.1044/jshr.3602.228. [DOI] [PubMed] [Google Scholar]
  • 2.Flynn M C, Dowell R C, Clark G M. Aided speech recognition abilities of adults with a severe or severe-to-profound hearing loss. J Speech Lang Hear Res. 1998;41(02):285–299. doi: 10.1044/jslhr.4102.285. [DOI] [PubMed] [Google Scholar]
  • 3.Souza P, Hoover E, Blackburn M, Gallun F J. The characteristics of adults with severe hearing loss. J Am Acad Audiol. 2018;29(08):764–779. doi: 10.3766/jaaa.17050. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.World Health Organization. Grades of hearing impairment. 2018. Available at: http://www.who.int/pbd/deafness/hearing_impairment_grades/en/. Accessed August 20, 2018
  • 5.Atcherson S R, Mendel L L, Baltimore W J et al. The effect of conventional and transparent surgical masks on speech understanding in individuals with and without hearing loss. J Am Acad Audiol. 2017;28(01):58–67. doi: 10.3766/jaaa.15151. [DOI] [PubMed] [Google Scholar]
  • 6.Lyxell B, Andersson U, Borg E, Ohlsson I-S. Working-memory capacity and phonological processing in deafened adults and individuals with a severe hearing impairment. Int J Audiol. 2003;42 01:S86–S89. doi: 10.3109/14992020309074628. [DOI] [PubMed] [Google Scholar]
  • 7.Cruickshanks K J, Wiley T L, Tweed T S et al. Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin. Am J Epidemiol. 1998;148(09):879–886. doi: 10.1093/oxfordjournals.aje.a009713. [DOI] [PubMed] [Google Scholar]
  • 8.Margolis R H, Saly G L. Distribution of hearing loss characteristics in a clinical population. Ear Hear. 2008;29(04):524–532. doi: 10.1097/AUD.0b013e3181731e2e. [DOI] [PubMed] [Google Scholar]
  • 9.Kochkin S. MarkeTrak VIII: 25-year trends in the hearing health market. Hearing Review. 2009;16:12–31. [Google Scholar]
  • 10.Centers for Disease Control and Prevention. Severe hearing impairment among military veterans - United States, 2010. MMWR Morb Mortal Wkly Rep. 2011;60:955–958. [PubMed] [Google Scholar]
  • 11.Mohr P E, Feldman J J, Dunbar J L et al. The societal costs of severe to profound hearing loss in the United States. Int J Technol Assess Health Care. 2000;16(04):1120–1135. doi: 10.1017/s0266462300103162. [DOI] [PubMed] [Google Scholar]
  • 12.Thompson D C, McPhillips H, Davis R L, Lieu T L, Homer C J, Helfand M. Universal newborn hearing screening: summary of evidence. JAMA. 2001;286(16):2000–2010. doi: 10.1001/jama.286.16.2000. [DOI] [PubMed] [Google Scholar]
  • 13.Williams T R, Alam S, Gaffney M; Centers for Disease Control and Prevention (CDC). Progress in identifying infants with hearing loss—United States, 2006-2012. MMWR Morb Mortal Wkly Rep. 2015;64(13):351–356. [PMC free article] [PubMed] [Google Scholar]
  • 14.Uus K, Bamford J. Effectiveness of population-based newborn hearing screening in England: ages of interventions and profile of cases. Pediatrics. 2006;117(05):e887–e893. doi: 10.1542/peds.2005-1064. [DOI] [PubMed] [Google Scholar]
  • 15.Prieve B A, Gorga M P, Neely S T. Otoacoustic emissions in an adult with severe hearing loss. J Speech Hear Res. 1991;34(02):379–385. doi: 10.1044/jshr.3402.379. [DOI] [PubMed] [Google Scholar]
  • 16.Psarommatis I M, Tsakanikos M D, Kontorgianni A D, Ntouniadakis D E, Apostolopoulos N K. Profound hearing loss and presence of click-evoked otoacoustic emissions in the neonate: a report of two cases. Int J Pediatr Otorhinolaryngol. 1997;39(03):237–243. doi: 10.1016/s0165-5876(97)01491-2. [DOI] [PubMed] [Google Scholar]
  • 17.Lopez-Poveda E A, Johannesen P T. Behavioral estimates of the contribution of inner and outer hair cell dysfunction to individualized audiometric loss. J Assoc Res Otolaryngol. 2012;13(04):485–504. doi: 10.1007/s10162-012-0327-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Nelson E G, Hinojosa R. Presbycusis: a human temporal bone study of individuals with downward sloping audiometric patterns of hearing loss and review of the literature. Laryngoscope. 2006;116(9, Pt 3, Suppl 112):1–12. [DOI] [PubMed] [Google Scholar]
  • 19.Hinojosa R, Marion M. Histopathology of profound sensorineural deafness. Ann N Y Acad Sci. 1983;405:459–484. doi: 10.1111/j.1749-6632.1983.tb31662.x. [DOI] [PubMed] [Google Scholar]
  • 20.Santos F, Nadol J B. Temporal bone histopathology of furosemide ototoxicity. Laryngoscope Investig Otolaryngol. 2017;2(05):204–207. doi: 10.1002/lio2.108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Nadol J B Jr. Patterns of neural degeneration in the human cochlea and auditory nerve: implications for cochlear implantation. Otolaryngol Head Neck Surg. 1997;117(3, Pt 1):220–228. [DOI] [PubMed] [Google Scholar]
  • 22.Schmidt J M. Cochlear neuronal populations in developmental defects of the inner ear: implications for cochlear implantation. Acta Otolaryngol. 1985;99(1-2):14–20. [DOI] [PubMed] [Google Scholar]
  • 23.Stebbins W C, Hawkins J E, Jr, Johnson L G, Moody D B. Hearing thresholds with outer and inner hair cell loss. Am J Otolaryngol. 1979;1(01):15–27. doi: 10.1016/s0196-0709(79)80004-6. [DOI] [PubMed] [Google Scholar]
  • 24.Hamernik R P, Patterson J H, Turrentine G A, Ahroon W A. The quantitative relation between sensory cell loss and hearing thresholds. Hear Res. 1989;38(03):199–211. doi: 10.1016/0378-5955(89)90065-8. [DOI] [PubMed] [Google Scholar]
  • 25.Cheatham M A, Dallos P. The dynamic range of inner hair cell and organ of Corti responses. J Acoust Soc Am. 2000;107(03):1508–1520. doi: 10.1121/1.428437. [DOI] [PubMed] [Google Scholar]
  • 26.Ryan A, Dallos P. Effect of absence of cochlear outer hair cells on behavioural auditory threshold. Nature. 1975;253(5486):44–46. [DOI] [PubMed] [Google Scholar]
  • 27.Kujawa S G, Liberman M C. Adding insult to injury: cochlear nerve degeneration after “temporary” noise-induced hearing loss. J Neurosci. 2009;29(45):14077–14085. doi: 10.1523/JNEUROSCI.2845-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Xu S-A, Shepherd R K, Chen Y, Clark G M. Profound hearing loss in the cat following the single co-administration of kanamycin and ethacrynic acid. Hear Res. 1993;70(02):205–215. doi: 10.1016/0378-5955(93)90159-x. [DOI] [PubMed] [Google Scholar]
  • 29.Moore B C, Huss M, Vickers D A, Glasberg B R, Alcántara J I. A test for the diagnosis of dead regions in the cochlea. Br J Audiol. 2000;34(04):205–224. doi: 10.3109/03005364000000131. [DOI] [PubMed] [Google Scholar]
  • 30.Vinay, Moore B C. Prevalence of dead regions in subjects with sensorineural hearing loss. Ear Hear. 2007;28(02):231–241. [DOI] [PubMed] [Google Scholar]
  • 31.Moore B C, Killen T, Munro K J. Application of the TEN test to hearing-impaired teenagers with severe-to-profound hearing loss. Int J Audiol. 2003;42(08):465–474. doi: 10.3109/14992020309081516. [DOI] [PubMed] [Google Scholar]
  • 32.Preminger J E, Carpenter R, Ziegler C H. A clinical perspective on cochlear dead regions: intelligibility of speech and subjective hearing aid benefit. J Am Acad Audiol. 2005;16(08):600–613, quiz 631–632. [DOI] [PubMed] [Google Scholar]
  • 33.Ahadi M, Milani M, Malayeri S. Prevalence of cochlear dead regions in moderate to severe sensorineural hearing impaired children. Int J Pediatr Otorhinolaryngol. 2015;79(08):1362–1365. doi: 10.1016/j.ijporl.2015.06.013. [DOI] [PubMed] [Google Scholar]
  • 34.Huss M, Moore B C. Dead regions and pitch perception. J Acoust Soc Am. 2005;117(06):3841–3852. doi: 10.1121/1.1920167. [DOI] [PubMed] [Google Scholar]
  • 35.Won J H, Jones G L, Moon I J, Rubinstein J T. Spectral and temporal analysis of simulated dead regions in cochlear implants. J Assoc Res Otolaryngol. 2015;16(02):285–307. doi: 10.1007/s10162-014-0502-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Vinay B T, Baer T, Moore B C. Speech recognition in noise as a function of highpass-filter cutoff frequency for people with and without low-frequency cochlear dead regions. J Acoust Soc Am. 2008;123(02):606–609. doi: 10.1121/1.2823497. [DOI] [PubMed] [Google Scholar]
  • 37.Glasberg B R, Moore B C, Patterson R D, Nimmo-Smith I. Dynamic range and asymmetry of the auditory filter. J Acoust Soc Am. 1984;76(02):419–427. doi: 10.1121/1.391584. [DOI] [PubMed] [Google Scholar]
  • 38.Tyler R S, Hall J W, Glasberg B R, Moore B C, Patterson R D. Auditory filter asymmetry in the hearing impaired. J Acoust Soc Am. 1984;76(05):1363–1368. doi: 10.1121/1.391452. [DOI] [PubMed] [Google Scholar]
  • 39.Glasberg B R, Moore B C. Auditory filter shapes in subjects with unilateral and bilateral cochlear impairments. J Acoust Soc Am. 1986;79(04):1020–1033. doi: 10.1121/1.393374. [DOI] [PubMed] [Google Scholar]
  • 40.Laroche C, Hétu R, Quoc H T, Josserand B, Glasberg B. Frequency selectivity in workers with noise-induced hearing loss. Hear Res. 1992;64(01):61–72. doi: 10.1016/0378-5955(92)90168-m. [DOI] [PubMed] [Google Scholar]
  • 41.Zwicker E, Schorn K. Psychoacoustical tuning curves in audiology. Audiology. 1978;17(02):120–140. doi: 10.3109/00206097809080039. [DOI] [PubMed] [Google Scholar]
  • 42.Festen J M, Plomp R. Relations between auditory functions in impaired hearing. J Acoust Soc Am. 1983;73(02):652–662. doi: 10.1121/1.388957. [DOI] [PubMed] [Google Scholar]
  • 43.Dubno J R, Dirks D D, Ellison D E. Stop-consonant recognition for normal-hearing listeners and listeners with high-frequency hearing loss. I: The contribution of selected frequency regions. J Acoust Soc Am. 1989;85(01):347–354. doi: 10.1121/1.397686. [DOI] [PubMed] [Google Scholar]
  • 44.Faulkner A, Rosen S, Moore B C. Residual frequency selectivity in the profoundly hearing-impaired listener. Br J Audiol. 1990;24(06):381–392. doi: 10.3109/03005369009076579. [DOI] [PubMed] [Google Scholar]
  • 45.Rosen S, Faulkner A, Smith D A. The psychoacoustics of profound hearing impairment. Acta Otolaryngol Suppl. 1990;469:16–22. [PubMed] [Google Scholar]
  • 46.Won J-H, Drennan W R, Rubinstein J T. Spectral-ripple resolution correlates with speech reception in noise in cochlear implant users. J Assoc Res Otolaryngol. 2007;8(03):384–392. doi: 10.1007/s10162-007-0085-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Won J H, Clinard C G, Kwon S et al. Relationship between behavioral and physiological spectral-ripple discrimination. J Assoc Res Otolaryngol. 2011;12(03):375–393. doi: 10.1007/s10162-011-0257-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Henry B A, Turner C W, Behrens A. Spectral peak resolution and speech recognition in quiet: normal hearing, hearing impaired, and cochlear implant listeners. J Acoust Soc Am. 2005;118(02):1111–1121. doi: 10.1121/1.1944567. [DOI] [PubMed] [Google Scholar]
  • 49.Eddins D A, Bero E M. Spectral modulation detection as a function of modulation frequency, carrier bandwidth, and carrier frequency region. J Acoust Soc Am. 2007;121(01):363–372. doi: 10.1121/1.2382347. [DOI] [PubMed] [Google Scholar]
  • 50.Davies-Venn E, Nelson P, Souza P. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing. J Acoust Soc Am. 2015;138(01):492–503. doi: 10.1121/1.4922700. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Davies-Venn E, Souza P. The role of spectral resolution, working memory, and audibility in explaining variance in susceptibility to temporal envelope distortion. J Am Acad Audiol. 2014;25(06):592–604. doi: 10.3766/jaaa.25.6.9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Litvak L M, Spahr A J, Saoji A A, Fridman G Y. Relationship between perception of spectral ripple and speech recognition in cochlear implant and vocoder listeners. J Acoust Soc Am. 2007;122(02):982–991. doi: 10.1121/1.2749413. [DOI] [PubMed] [Google Scholar]
  • 53.Saoji A A, Litvak L, Spahr A J, Eddins D A. Spectral modulation detection and vowel and consonant identifications in cochlear implant listeners. J Acoust Soc Am. 2009;126(03):955–958. doi: 10.1121/1.3179670. [DOI] [PubMed] [Google Scholar]
  • 54.Anderson S, Parbery-Clark A, White-Schwoch T, Kraus N. Aging affects neural precision of speech encoding. J Neurosci. 2012;32(41):14156–14164. doi: 10.1523/JNEUROSCI.2176-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Mehraei G, Gallun F J, Leek M R, Bernstein J G. Spectrotemporal modulation sensitivity for hearing-impaired listeners: dependence on carrier center frequency and the relationship to speech intelligibility. J Acoust Soc Am. 2014;136(01):301–316. doi: 10.1121/1.4881918. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Bernstein J G, Mehraei G, Shamma S, Gallun F J, Theodoroff S M, Leek M R. Spectrotemporal modulation sensitivity as a predictor of speech intelligibility for hearing-impaired listeners. J Am Acad Audiol. 2013;24(04):293–306. doi: 10.3766/jaaa.24.4.5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Gifford R H, Hedley-Williams A, Spahr A J. Clinical assessment of spectral modulation detection for adult cochlear implant recipients: a non-language based measure of performance outcomes. Int J Audiol. 2014;53(03):159–164. doi: 10.3109/14992027.2013.851800. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Anderson S, Parbery-Clark A, Yi H G, Kraus N. A neural basis of speech-in-noise perception in older adults. Ear Hear. 2011;32(06):750–757. doi: 10.1097/AUD.0b013e31822229d3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Cox R M, Alexander G C, Taylor I M, Gray G A. The contour test of loudness perception. Ear Hear. 1997;18(05):388–400. doi: 10.1097/00003446-199710000-00004. [DOI] [PubMed] [Google Scholar]
  • 60.Thornton A R, Abbas P J, Abbas P J. Low-frequency hearing loss: perception of filtered speech, psychophysical tuning curves, and masking. J Acoust Soc Am. 1980;67(02):638–643. doi: 10.1121/1.383888. [DOI] [PubMed] [Google Scholar]
  • 61.Kluk K, Moore B C. Factors affecting psychophysical tuning curves for hearing-impaired subjects with high-frequency dead regions. Hear Res. 2005;200(1-2):115–131. [DOI] [PubMed] [Google Scholar]
  • 62.Stone M A, Glasberg B R, Moore B C. Simplified measurement of auditory filter shapes using the notched-noise method. Br J Audiol. 1992;26(06):329–334. doi: 10.3109/03005369209076655. [DOI] [PubMed] [Google Scholar]
  • 63.Patterson R D, Nimmo-Smith I, Weber D L, Milroy R. The deterioration of hearing with age: frequency selectivity, the critical ratio, the audiogram, and speech threshold. J Acoust Soc Am. 1982;72(06):1788–1803. doi: 10.1121/1.388652. [DOI] [PubMed] [Google Scholar]
  • 64.Sek A, Alcántara J, Moore B C, Kluk K, Wicher A. Development of a fast method for determining psychophysical tuning curves. Int J Audiol. 2005;44(07):408–420. doi: 10.1080/14992020500060800. [DOI] [PubMed] [Google Scholar]
  • 65.Sęk A, Moore B C. Implementation of a fast method for measuring psychophysical tuning curves. Int J Audiol. 2011;50(04):237–242. doi: 10.3109/14992027.2010.550636. [DOI] [PubMed] [Google Scholar]
  • 66.Charaziak K K, Souza P, Siegel J H. Time-efficient measures of auditory frequency selectivity. Int J Audiol. 2012;51(04):317–325. doi: 10.3109/14992027.2011.625982. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Cleveland W S. Robust locally weighted regression and smoothing scatterplots. J Am Stat Assn. 1979;74:829–836. [Google Scholar]
  • 68.Kluk K, Moore B C. Detecting dead regions using psychophysical tuning curves: a comparison of simultaneous and forward masking. Int J Audiol. 2006;45(08):463–476. doi: 10.1080/14992020600753189. [DOI] [PubMed] [Google Scholar]
  • 69.Kishon-Rabin L, Segal O, Algom D. Associations and dissociations between psychoacoustic abilities and speech perception in adolescents with severe-to-profound hearing loss. J Speech Lang Hear Res. 2009;52(04):956–972. doi: 10.1044/1092-4388(2008/07-0072). [DOI] [PubMed] [Google Scholar]
  • 70.Smalt C J, Heinz M G, Strickland E A. Modeling the time-varying and level-dependent effects of the medial olivocochlear reflex in auditory nerve responses. J Assoc Res Otolaryngol. 2014;15(02):159–173. doi: 10.1007/s10162-013-0430-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Kale S, Heinz M G. Temporal modulation transfer functions measured from auditory-nerve responses following sensorineural hearing loss. Hear Res. 2012;286(1-2):64–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Swaminathan J, Heinz M G. Psychophysiological analyses demonstrate the importance of neural envelope coding for speech perception in noise. J Neurosci. 2012;32(05):1747–1756. doi: 10.1523/JNEUROSCI.4493-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Henry K S, Heinz M G. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery. Hear Res. 2013;303:39–47. doi: 10.1016/j.heares.2013.01.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Otsuka S, Furukawa S, Yamagishi S, Hirota K, Kashino M. Relation between cochlear mechanics and performance of temporal fine structure-based tasks. J Assoc Res Otolaryngol. 2016;17(06):541–557. doi: 10.1007/s10162-016-0581-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Aushana Y, Souffi S, Edeline J M, Lorenzi C, Huetz C. Robust neuronal discrimination in primary auditory cortex despite degradations of spectro-temporal acoustic details: comparison between guinea pigs with normal hearing and mild age-related hearing loss. J Assoc Res Otolaryngol. 2018;19(02):163–180. doi: 10.1007/s10162-017-0649-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Ardoint M, Agus T, Sheft S, Lorenzi C. Importance of temporal-envelope speech cues in different spectral regions. J Acoust Soc Am. 2011;130(02):EL115–EL121. doi: 10.1121/1.3602462. [DOI] [PubMed] [Google Scholar]
  • 77.Rosen S. Temporal information in speech: acoustic, auditory and linguistic aspects. Philos Trans R Soc Lond B Biol Sci. 1992;336:367–373. doi: 10.1098/rstb.1992.0070. [DOI] [PubMed] [Google Scholar]
  • 78.Grimault N, Bacon S P, Micheyl C. Auditory stream segregation on the basis of amplitude-modulation rate. J Acoust Soc Am. 2002;111(03):1340–1348. doi: 10.1121/1.1452740. [DOI] [PubMed] [Google Scholar]
  • 79.Souza P E, Boike K T. Combining temporal-envelope cues across channels: effects of age and hearing loss. J Speech Lang Hear Res. 2006;49(01):138–149. doi: 10.1044/1092-4388(2006/011). [DOI] [PubMed] [Google Scholar]
  • 80.Turner C W, Souza P E, Forget L N. Use of temporal envelope cues in speech recognition by normal and hearing-impaired listeners. J Acoust Soc Am. 1995;97(04):2568–2576. doi: 10.1121/1.411911. [DOI] [PubMed] [Google Scholar]
  • 81.Seldran F, Gallego S, Micheyl C, Veuillet E, Truy E, Thai-Van H. Relationship between age of hearing-loss onset, hearing-loss duration, and speech recognition in individuals with severe-to-profound high-frequency hearing loss. J Assoc Res Otolaryngol. 2011;12(04):519–534. doi: 10.1007/s10162-011-0261-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Choi J E, Hong S H, Won J H et al. Evaluation of cochlear implant candidates using a non-linguistic spectrotemporal modulation detection test. Sci Rep. 2016;6:35235. doi: 10.1038/srep35235. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Etymotic Research. QuickSIN Speech-in-Noise Test. Elk Grove Village, IL: Etymotic Research; 2006. [Google Scholar]
  • 84.Hodgson M, Steininger G, Razavi Z. Measurement and prediction of speech and noise levels and the Lombard effect in eating establishments. J Acoust Soc Am. 2007;121(04):2023–2033. doi: 10.1121/1.2535571. [DOI] [PubMed] [Google Scholar]
  • 85.Olsen W O. Average speech levels and spectra in various speaking/listening conditions: a summary of the Pearson, Bennett & Fidell (1977) report. Am J Audiol. 1998;7(02):21–25. doi: 10.1044/1059-0889(1998/012). [DOI] [PubMed] [Google Scholar]
  • 86.Pope D S, Gallun F J, Kampel S. Effect of hospital noise on patients' ability to hear, understand, and recall speech. Res Nurs Health. 2013;36(03):228–241. doi: 10.1002/nur.21540. [DOI] [PubMed] [Google Scholar]
  • 87.Nondahl D M, Cruickshanks K J, Wiley T L, Tweed T S, Klein R, Klein B E. Accuracy of self-reported hearing loss. Audiology. 1998;37(05):295–301. doi: 10.3109/00206099809072983. [DOI] [PubMed] [Google Scholar]
  • 88.Gatehouse S, Noble W. The speech, spatial and qualities of hearing scale (SSQ) Int J Audiol. 2004;43(02):85–99. doi: 10.1080/14992020400050014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Brokx J P, Noteboom S G. Intonation and the perceptual segregation of competing voices. J Phonetics. 1982;10:23–36. [Google Scholar]
  • 90.Shen J, Souza P E. The effect of dynamic pitch on speech recognition in temporally modulated noise. J Speech Lang Hear Res. 2017;60(09):2725–2739. doi: 10.1044/2017_JSLHR-H-16-0389. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Shen J, Souza P E. Do older listeners with hearing loss benefit from dynamic pitch for speech recognition in noise? Am J Audiol. 2017;26(3S):462–466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Jenstad L M, Souza P E. Quantifying the effect of compression hearing aid release time on speech acoustics and intelligibility. J Speech Lang Hear Res. 2005;48(03):651–667. doi: 10.1044/1092-4388(2005/045). [DOI] [PubMed] [Google Scholar]
  • 93.Souza P E, Jenstad L M, Boike K T. Measuring the acoustic effects of compression amplification on speech in noise. J Acoust Soc Am. 2006;119(01):41–44. doi: 10.1121/1.2108861. [DOI] [PubMed] [Google Scholar]
  • 94.Souza P E, Turner C W. Effect of single-channel compression on temporal speech information. J Speech Hear Res. 1996;39(05):901–911. doi: 10.1044/jshr.3905.901. [DOI] [PubMed] [Google Scholar]
  • 95.Souza P E, Turner C W. Multichannel compression, temporal cues, and audibility. J Speech Lang Hear Res. 1998;41(02):315–326. doi: 10.1044/jslhr.4102.315. [DOI] [PubMed] [Google Scholar]
  • 96.Drullman R, Festen J M, Plomp R. Effect of reducing slow temporal modulations on speech reception. J Acoust Soc Am. 1994;95(5, Pt 1):2670–2680. [DOI] [PubMed] [Google Scholar]
  • 97.Noordhoek I M, Drullman R. Effect of reducing temporal intensity modulations on sentence intelligibility. J Acoust Soc Am. 1997;101(01):498–502. doi: 10.1121/1.417993. [DOI] [PubMed] [Google Scholar]
  • 98.Boothroyd A, Springer N, Smith L, Schulman J. Amplitude compression and profound hearing loss. J Speech Hear Res. 1988;31(03):362–376. doi: 10.1044/jshr.3103.362. [DOI] [PubMed] [Google Scholar]
  • 99.De Gennaro S, Braida L D, Durlach N I. Multichannel syllabic compression for severely impaired listeners. J Rehabil Res Dev. 1986;23(01):17–24. [PubMed] [Google Scholar]
  • 100.Souza P E, Bishop R D. Improving speech audibility with wide dynamic range compression in listeners with severe sensorineural loss. Ear Hear. 1999;20(06):461–470. doi: 10.1097/00003446-199912000-00002. [DOI] [PubMed] [Google Scholar]
  • 101.Drullman R, Smoorenburg G F. Audio-visual perception of compressed speech by profoundly hearing-impaired subjects. Audiology. 1997;36(03):165–177. doi: 10.3109/00206099709071970. [DOI] [PubMed] [Google Scholar]
  • 102.Souza P E, Jenstad L M, Folino R. Using multichannel wide-dynamic range compression in severely hearing-impaired listeners: effects on speech recognition and quality. Ear Hear. 2005;26(02):120–131. doi: 10.1097/00003446-200504000-00002. [DOI] [PubMed] [Google Scholar]
  • 103.Souza P, Brennan M A, Davies-Venn E. Scottsdale, AZ: American Auditory Society; 2007. Using Short vs. Long Time Constants in Severe Loss. [Google Scholar]
  • 104.Weile J N, Behrens T, Wagener K. An improved option for people with severe to profound hearing losses. Hearing Review. 2011:32–45. [Google Scholar]
  • 105.Gevonden M J, Myin-Germeys I, van den Brink W, van Os J, Selten J P, Booij J. Psychotic reactions to daily life stress and dopamine function in people with severe hearing impairment. Psychol Med. 2015;45(08):1665–1674. doi: 10.1017/S0033291714002797. [DOI] [PubMed] [Google Scholar]
  • 106.Lucas L, Katiri R, Kitterick P T. The psychological and social consequences of single-sided deafness in adulthood. Int J Audiol. 2018;57(01):21–30. doi: 10.1080/14992027.2017.1398420. [DOI] [PubMed] [Google Scholar]
  • 107.Detaille S I, Haafkens J A, van Dijk F JH. What employees with rheumatoid arthritis, diabetes mellitus and hearing loss need to cope at work. Scand J Work Environ Health. 2003;29(02):134–142. doi: 10.5271/sjweh.715. [DOI] [PubMed] [Google Scholar]
  • 108.Lin F R, Metter E J, O'Brien R J, Resnick S M, Zonderman A B, Ferrucci L. Hearing loss and incident dementia. Arch Neurol. 2011;68(02):214–220. doi: 10.1001/archneurol.2010.362. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Aazh H, Prasher D, Nanchahal K, Moore B C. Hearing-aid use and its determinants in the UK National Health Service: a cross-sectional study at the Royal Surrey County Hospital. Int J Audiol. 2015;54(03):152–161. doi: 10.3109/14992027.2014.967367. [DOI] [PubMed] [Google Scholar]
  • 110.Harkins J, Tucker P. An internet survey of individuals with hearing loss regarding assistive listening devices. Trends Amplif. 2007;11(02):91–100. doi: 10.1177/1084713807301322. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Killion M C, Niquette P A, Gudmundsen G I, Revit L J, Banerjee S. Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am. 2004;116(4, Pt 1):2395–2405. [DOI] [PubMed] [Google Scholar]
