Abstract
The topic of compression has been discussed quite extensively in the last 20 years (eg, Braida et al., 1982; Dillon, 1996, 2000; Dreschler, 1992; Hickson, 1994; Kuk, 2000 and 2002; Kuk and Ludvigsen, 1999; Moore, 1990; Van Tasell, 1993; Venema, 2000; Verschuure et al., 1996; Walker and Dillon, 1982). However, the latest comprehensive update by this journal was published in 1996 (Kuk, 1996). Since that time, use of compression hearing aids has increased dramatically, from half of hearing aids dispensed only 5 years ago to four out of five hearing aids dispensed today (Strom, 2002b). Most of today's digital and digitally programmable hearing aids are compression devices (Strom, 2002a). It is probable that within a few years, very few patients will be fit with linear hearing aids. Furthermore, compression has increased in complexity, with greater numbers of parameters under the clinician's control. Ideally, these changes will translate to greater flexibility and precision in fitting and selection. However, they also increase the need for information about the effects of compression amplification on speech perception and speech quality. As evidenced by the large number of sessions at professional conferences on fitting compression hearing aids, clinicians continue to have questions about compression technology and when and how it should be used. How does compression work? Who are the best candidates for this technology? How should adjustable parameters be set to provide optimal speech recognition? What effect will compression have on speech quality? These and other questions continue to drive our interest in this technology. This article reviews the effects of compression on the speech signal and the implications for speech intelligibility, quality, and design of clinical procedures.
Categorizing Compression
With a linear hearing aid, a constant gain is applied to all input levels until the hearing aid's saturation limit is reached. Because daily speech includes such a wide range of intensity levels, from low-intensity consonants such as /f/ to high-intensity vowels such as /i/, and from whispered speech to shouting, the benefit of a linear hearing aid is restricted when the amplification needed to make low-intensity sounds audible amplifies high-intensity sounds to the point of discomfort. In other words, linear hearing aids have a limited capacity to maximize audibility across a range of input intensities. The smaller the dynamic range (ie, the difference between hearing threshold and loudness discomfort level) of the listener, the more difficult it is to make speech (and other daily sounds) audible in a variety of situations.
To solve this problem, most hearing aids now offer some form of compression in which gain is automatically adjusted based on the intensity of the input signal. The higher the input intensity, the more gain is reduced. This seems like a reasonable strategy. High-intensity signals (such as shouted speech) require less gain to be heard by the listener than low-intensity signals (such as whispered speech). We might expect patients wearing compression hearing aids to perform better than those wearing linear peak clipping aids in listening conditions that include a wide range of speech levels. However, the benefits of compression are not clear-cut. We begin by describing the characteristics of compression hearing aids.
Compression hearing aids are generally described according to a set of fixed or adjustable compression parameters. The compression threshold or kneepoint is the lowest level at which gain reduction occurs. Linear gain is usually applied below this level. Alternatively, some digital hearing aids use expansion rather than linear gain below the compression threshold. With expansion, the lower the input level, the less gain is applied. The intent is to reduce amplification of microphone noise or low-level ambient noise (eg, Kuk, 2001).
For example, a hearing aid with a compression threshold of 80 dB SPL could apply constant (linear) gain below the compression threshold and reduce its gain automatically for signals exceeding 80 dB SPL. In contrast, a hearing aid with a compression threshold of 40 dB SPL would have variable gain over nearly the entire intensity range of speech. For the purposes of this article, compression threshold is described as low (50 dB SPL or less), moderate (approximately 55–70 dB SPL) and high (75 dB SPL or greater). Hearing aids with low compression thresholds are referred to as wide-dynamic range compression (WDRC) (eg, Dillon, 1996 and 2000; Kuk, 2000) or full-dynamic range compression (FDRC) aids (eg, Kuk, 2000; Verschuure, 1997). Hearing aids with high compression thresholds are referred to as compression limiting aids (Walker and Dillon, 1982).
The compression ratio determines the magnitude of gain reduction. The compression ratio is the ratio of the increase in input level to the increase in output level. For example, a compression ratio of 2:1 means that for every 2 dB increase in the input signal, the output signal increases by 1 dB. Figure 1 shows an example of an input-output function for a compression hearing aid. Linear gain of 30 dB is applied below the compression threshold of 40 dB SPL. Above this input level, a compression ratio of 2:1 is applied.
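For readers who wish to verify these relationships numerically, the input-output function of Figure 1 can be expressed as a short calculation. The Python sketch below is purely illustrative; the function name, the hard kneepoint, and the parameter values are our own simplifications rather than a description of any particular hearing aid.

```python
def output_level(input_db, threshold_db=40.0, ratio=2.0, linear_gain_db=30.0):
    """Static input-output function for a simple compressor (illustrative only).

    Below the compression threshold, constant (linear) gain is applied.
    Above it, every `ratio` dB of input increase yields 1 dB of output increase.
    """
    if input_db <= threshold_db:
        return input_db + linear_gain_db
    # Output at the kneepoint plus the compressed portion above it
    return (threshold_db + linear_gain_db) + (input_db - threshold_db) / ratio

# A 40 dB SPL input receives the full 30 dB of gain (output 70 dB SPL);
# a 60 dB SPL input is amplified to only 80 dB SPL (effective gain 20 dB).
for level in (30, 40, 60, 80):
    print(level, "->", round(output_level(level), 1), "dB SPL")
```

Above the kneepoint, each additional 2 dB of input produces only 1 dB of additional output, exactly as defined by the 2:1 compression ratio.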
Compression ratios for WDRC aids are typically low (<5:1), while compression ratios for compression limiting aids are usually high (>8:1) (Walker and Dillon, 1982). Often, both features are combined in the same aid, with a low compression ratio for low-to-moderate level signals and a high compression ratio to limit saturation as the output level approaches the listener's discomfort threshold.
Figure 2 shows examples of input-output functions of four different circuit configurations. Figure 3 shows the gain plotted as a function of input level for the same four circuits.
An important parameter of a compression hearing aid is the speed with which it adjusts its gain to changes in input levels. Attack time refers to the time it takes the hearing aid to stabilize to the reduced gain state following an abrupt increase in input level. For measurement purposes, the attack time is defined as the time it takes the output to drop to within 3 dB of the steady-state level after a 2000 Hz sinusoidal input changes from 55 dB SPL to 90 dB SPL (ANSI, 1996). Because compression should respond quickly to reduce gain in the presence of high-level sounds that might otherwise exceed the listener's discomfort threshold, attack times are usually short. An informal review of commonly prescribed WDRC hearing aids shows that most have attack times of less than 5 milliseconds (Buyer's Guide, 2001).
Release time refers to the time it takes the hearing aid to recover to linear gain following an abrupt decrease in input level. For measurement purposes, the release time is defined as the time it takes the 2000 Hz sinusoidal output to stabilize to within 4 dB of the steady-state level after input changes from 90 dB SPL to 55 dB SPL (ANSI, 1996). Clinicians can choose among hearing aids with release times ranging from a few milliseconds to several seconds.
Attack and release time are illustrated in Figure 4. Compression amplifiers are traditionally classified based on their time constants as slow-acting (release times greater than 200 milliseconds) or fast-acting (release times less than 200 milliseconds) (Dreschler, 1992; Walker and Dillon, 1982). This nomenclature has become somewhat blurred in current use, with some aids referred to as fast-acting even with release times greater than 200 milliseconds. Fast-acting compression systems can serve two distinct purposes. In conjunction with a high compression threshold, they act as output limiters, limiting output while preventing saturation distortion. This is referred to as compression limiting. In conjunction with a low compression threshold, they act on syllable-length speech sounds and are referred to as syllabic compressors because they reduce the level differences between syllables or phonemes (Braida et al., 1982). Although technically any compressor with a release time shorter than a syllable (about 200 milliseconds) can be termed a syllabic compressor, in practice syllabic compression uses release times of 150 milliseconds or less (Hickson, 1994).
Figure 5 shows an example of speech processed with syllabic compression (compression threshold of 45 dB SPL, attack time of 3 milliseconds, release time of 50 milliseconds). The upper panel shows the unprocessed sentences “Joe took father's shoe bench out. She was seated at my lawn”. In the unprocessed version, there is a marked contrast between low-intensity (typically consonants) and high-intensity (typically vowels) inputs. The lower panel shows the same sentences after processing by the syllabic compression circuit. The most obvious effect is the reduced amplitude variation; high-intensity phonemes are reduced in level relative to low-intensity phonemes, resulting in an overall smoothing of amplitude variations. This is also known as amplitude smearing.
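Processing of the kind illustrated in Figure 5 can be approximated in software with a level detector whose smoothing constants correspond to the attack and release times. The sketch below uses the same nominal parameters as Figure 5 (45 dB SPL threshold, 3-millisecond attack, 50-millisecond release) but is a minimal illustration only; the first-order smoothing, the 94 dB reference for converting amplitude to sound pressure level, and the omission of linear gain below threshold are our assumptions, not features of any commercial circuit.

```python
import numpy as np

def compress(signal, fs, threshold_db=45.0, ratio=2.0,
             attack_ms=3.0, release_ms=50.0, ref_db=94.0):
    """Minimal fast-acting compressor (single channel, illustrative only).

    A smoothed level estimate rises with the attack time constant and decays
    with the release time constant; gain reduction is applied only while the
    estimated level exceeds the compression threshold.  Linear gain below the
    threshold is omitted for simplicity.
    """
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros(len(signal))
    for i, x in enumerate(signal):
        mag = abs(x)
        coeff = atk if mag > env else rel        # fast rise, slower decay
        env = coeff * env + (1.0 - coeff) * mag
        level_db = ref_db + 20.0 * np.log10(max(env, 1e-9))  # assumed SPL reference
        excess = max(level_db - threshold_db, 0.0)
        gain_db = -excess * (1.0 - 1.0 / ratio)  # reduce gain above threshold
        out[i] = x * 10.0 ** (gain_db / 20.0)
    return out
```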
Slow-acting compression uses a long release time. The intent is to maintain a relatively constant output level, thus avoiding the need for frequent adjustments of the volume control. One potential problem with a long release time is that if the hearing aid has just responded to a high-intensity sound by decreasing gain, it may not be able to provide sufficient gain for a low-intensity sound that occurs while the aid is still at reduced gain. A release time that is very short can compensate quickly for changes in input levels but often causes an unpleasant “pumping” sensation as the aid cycles rapidly in and out of compression. As a compromise, some hearing aids have used an adaptive release time in which the release time depends on the duration of the activating signals. For brief, intense sounds (such as a door slamming), the release time is short, allowing the hearing aid to increase gain quickly to amplify successive low-intensity speech sounds. For longer intense sounds (such as a raised voice), the release time is long, allowing the hearing aid to maintain a comfortable output level. In these types of hearing aids, the typical release time is about 200 milliseconds for most daily situations.
When we consider the effect of compression hearing aids for our patients, it is important to keep in mind that the listed compression characteristics are measured using a steady-state signal (ANSI, 1996). Such measurements do not adequately describe the effects of compression on complex waveforms such as speech (Stelmachowicz et al., 1995). For example, the effective compression ratio for speech will be lower than the compression ratio noted in the hearing aid specifications (Stone and Moore, 1992). This happens when the modulation period of the input signal does not exceed the predefined attack and/or release times of the compression hearing aid, because the circuit only reaches maximum compression at the end of the attack time and only remains at maximum compression as long as the signal does not drop below the compression threshold. Because the level of the speech signal fluctuates from moment to moment, maximum compression is seldom achieved with everyday inputs. The higher the specified compression ratio and the longer the attack and release times, the greater the discrepancy between specified and actual (or effective) compression ratio. For example, a compression aid with a specified compression ratio of 5:1 may provide an effective ratio closer to 3.5:1 for actual speech, depending on the time constants used (Stelmachowicz et al., 1994).
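This discrepancy can be demonstrated with the compressor sketch above by processing a modulated test signal and comparing the range of short-term input levels with the range of short-term output levels. The test signal, modulation rate, and analysis window below are arbitrary choices for illustration, and the exact effective ratio obtained depends on those choices; the point is only that it falls below the nominal 5:1.

```python
import numpy as np

def level_range_db(signal, fs, win_ms=50.0):
    """Range (max - min) of short-term RMS levels, in dB."""
    win = int(fs * win_ms / 1000.0)
    levels = []
    for start in range(0, len(signal) - win, win):
        frame = signal[start:start + win]
        levels.append(20.0 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12))
    return max(levels) - min(levels)

fs = 16000
t = np.arange(int(fs * 2.0)) / fs
# A 1 kHz tone whose amplitude fluctuates at a roughly syllabic (4 Hz) rate
modulated = np.sin(2 * np.pi * 1000 * t) * (0.55 + 0.45 * np.sin(2 * np.pi * 4 * t))

# Reuses the compress() sketch shown earlier
processed = compress(modulated, fs, threshold_db=30.0, ratio=5.0,
                     attack_ms=5.0, release_ms=150.0)
effective = level_range_db(modulated, fs) / level_range_db(processed, fs)
print("effective compression ratio ~", round(effective, 1), "versus nominal 5:1")
```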
Single-channel compression systems vary gain across the entire frequency range of the signal. Thus, they cannot accommodate variations in the listener's dynamic range that may occur for different frequency regions. For example, many listeners with a sloping loss have a normal or near-normal dynamic range for low-frequency sounds but a sharply reduced dynamic range for high-frequency sounds where hearing loss is more severe. In some single-channel systems, an intense low-frequency sound can decrease overall gain and cause high-frequency sounds to become inaudible, although inclusion of an appropriate prefilter can minimize this problem (Kuk, 1996).
In a multichannel compression hearing aid, the incoming speech signal is filtered into two or more frequency channels. Compression is then performed independently within each channel prior to summing the output of all channels. The cutoff frequency between channels is termed the crossover frequency, and it may be either fixed or adjusted by the clinician. It is also important to consider whether each compression channel can be controlled independently. In some hearing aids, shallow filter slopes and/or preset interdependence between channels effectively limits how much one channel can be adjusted without affecting other channels. In general, digital hearing aids with a capability for steeper filter slopes provide greater channel independence.
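The basic signal flow of a multichannel system (band-split, compress each channel independently, then sum) can be sketched as follows. This example reuses the single-channel compress() function shown earlier; the crossover frequency, filter order, and per-channel settings are arbitrary illustrative values, and real devices may use very different filter structures.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multichannel_compress(signal, fs, crossover_hz=1500.0):
    """Two-channel WDRC sketch: band-split, compress per channel, then sum."""
    sos_low = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_high = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    low = sosfilt(sos_low, signal)
    high = sosfilt(sos_high, signal)
    # Independent compression in each channel; here the high-frequency channel
    # is compressed more, as might suit a sloping loss with a narrower
    # dynamic range at high frequencies.
    low_out = compress(low, fs, threshold_db=50.0, ratio=1.5)
    high_out = compress(high, fs, threshold_db=40.0, ratio=3.0)
    return low_out + high_out
```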
A recent report on the state of the hearing aid industry notes, “virtually all high-performance products … are exclusively multichannel, nonlinear processing devices” (Strom, 2002a). Commercially available multichannel aids offer between 2 and 20 channels, although two-channel or three-channel systems are still the most likely choice. A survey of product lines available from major manufacturers indicates that about one third are single-channel, one third are two-channel, and the remaining one third are divided equally between three-channel systems and those with more than three channels (Buyer's Guide, 2001).
A multichannel compression hearing aid may be better able than a single-channel compression hearing aid to accommodate variations in hearing threshold at each frequency (Venema, 2000), especially for atypical audiometric configurations. For example, a listener with a “cookie bite” configuration could be fit with a three-channel compression system with amplification in each channel that is precisely suited to her hearing loss. From the clinician's perspective, this is easier to accomplish if each channel can be independently controlled. However, it is not clear whether a larger number of channels will result in greater hearing aid benefit and/or higher listener satisfaction. These issues are discussed later in this article.
Finally, compression can be described as input or output compression. Unlike the compression parameters previously described, this is not a parameter within the control of the clinician but instead is determined by the circuit configuration of the hearing aid. In a compression hearing aid, a level detector monitors signal level. The level detector may rely on peak or average amplitude, on the root mean square level of the signal, or on some statistical property of the signal (Kuk and Ludvigsen, 1999). The output of the level detector is then connected via a feedback loop to the amplifier, whose gain is controlled by this output level. In a simple compression circuit with gain controlled by a level detector, gain is automatically varied once the level of the input signal exceeds the compression threshold. The distinction between input and output compression refers to the position of the level detector relative to the volume control.
In output compression, also called AGC-O, the volume control is positioned before the level detector (Figure 6). Activation of compression (ie, gain reduction) is based on the output level from the amplifier (which is determined by the input level plus the volume control setting on the hearing aid). Thus in output compression, the maximum output level is not influenced by volume control setting (Figure 7). For this reason, compression limiting is most often implemented in an output compression circuit.
In input compression, the volume control is positioned after the level detector (Figure 6). Activation of compression is based on the level of the input signal. The compressed signal is then amplified according to the frequency–gain response of the hearing aid before it is modified by the volume control setting. Thus in input compression, the volume control setting influences the maximum output level received by the listener (Figure 7).
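The practical consequence of the detector's position can be illustrated with two static input-output calculations, one with the volume-control gain applied before the level detector (output compression) and one with it applied after (input compression). The parameter values below are illustrative only and do not represent any particular device.

```python
def output_compression(input_db, volume_gain_db, limit_threshold_db=100.0, ratio=10.0):
    """AGC-O sketch: the detector sees the post-volume-control level, so the
    limited maximum output is nearly independent of the volume setting."""
    level = input_db + volume_gain_db            # volume control before detector
    if level <= limit_threshold_db:
        return level
    return limit_threshold_db + (level - limit_threshold_db) / ratio

def input_compression(input_db, volume_gain_db, threshold_db=45.0, ratio=2.0):
    """AGC-I sketch: the detector sees the input level, and the volume control
    scales the already-compressed signal, so it shifts the maximum output."""
    if input_db <= threshold_db:
        compressed = input_db
    else:
        compressed = threshold_db + (input_db - threshold_db) / ratio
    return compressed + volume_gain_db           # volume control after detector

# For a 90 dB SPL input, raising the volume-control gain from 20 to 30 dB
# raises the AGC-I output by the full 10 dB but the AGC-O output by only 1 dB.
for vol in (20, 30):
    print(vol, round(output_compression(90, vol), 1), round(input_compression(90, vol), 1))
```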
Compression is available in analog, digitally programmable analog, and digital hearing aids. Thus far, there is no evidence that compression implemented in a digital aid is superior to compression in a digitally programmable analog aid (Walden et al., 2000), although digital implementation may allow more control over compression parameters. As with other amplification features available in both digital and analog aids (Valente et al., 1999), it is the signal processing strategy that matters rather than the underlying digital or analog “hardware”.
Using Compression for Output Limiting
Compression limiting refers to compression with a high compression threshold, high compression ratio, and fast attack time. The purpose of compression limiting is to serve as an output limiter to prevent discomfort—or hearing damage—from high-level signals while limiting saturation distortion. Thus, this type of compression is an alternative to peak clipping. In peak clipping, maximum electric output is controlled by instantaneously limiting the output of the hearing aid and, thereby, clipping the peaks of the signal. When the input signal plus gain is below the saturation point, a hearing aid with peak clipping is expected to perform similarly to one with output limiting.
Peak clipping causes undesirable distortion as input increases beyond the output limit of the hearing aid. Electroacoustically, there are marked differences in distortion levels between linear peak clipping and compression limiting aids once input level plus gain exceeds the saturation threshold. Listeners perceive this distortion as degraded speech clarity and sound quality (eg, Larson et al., 2000). The greater the amount of saturation, the stronger the preference for compression limiting over peak clipping (Hawkins and Naidoo, 1993; Stelmachowicz et al., 1999; Storey et al., 1998).
Figure 8 shows results of electroacoustic tests of the same programmable hearing aid, set for peak clipping (top panel) or compression limiting (lower panel). In each case, the input signal was a 90 dB SPL pure-tone sweep. When the aid is driven into saturation with peak clipping, harmonic distortion is high (18.4%); when it operates as a compression limiter, harmonic distortion is very low (1.1%). The increased distortion that occurs in a peak clipping aid when the speech level increases relative to the aid's maximum output is associated with reduced speech intelligibility, whereas output limiting in a compression limiting aid has little effect on intelligibility (Crain and Van Tasell, 1994; Dreschler, 1988a). Therefore, for most listeners, compression limiting should be used rather than peak clipping. One potential exception is listeners with a severe-to-profound loss. These listeners, who require maximum power and often are accustomed to wearing high-gain linear aids in saturation, may report (at least initially) that compression limiting aids are not loud enough (Dawson et al., 1991).
Using Compression to Normalize Loudness
One characteristic of sensorineural hearing loss is a steeper-than-normal loudness growth curve. The goal of loudness normalization is consistent with this observation and with the idea that patients with a sensorineural hearing impairment lose the compressive nonlinearity that is part of a normally functioning cochlea (Dillon, 1996; Moore, 1996). Use of WDRC has been proposed as a means to compensate for abnormal loudness growth, and several fitting procedures have been developed in accordance with this philosophy (eg, Allen et al., 1990; Cox, 1995 and 1999; Kiessling et al., 1997; Kiessling et al., 1996; Ricketts, 1996; Ricketts et al., 1996). The intent of these procedures is to set compression parameters such that a listener wearing a WDRC aid will perceive changes in loudness in the same way as a normal-hearing listener (Kuk, 2000). Some of these procedures recommend that the patient's own loudness growth functions be measured in one or more frequency bands prior to the hearing aid fitting. Alternatively, loudness growth functions can be estimated based on the wearer's hearing threshold measurements, without the need to conduct specific loudness growth measures at fitting (Jenstad et al., 2000; Moore, 2000; Moore et al., 1999).
Recent data confirm that WDRC amplification can normalize loudness growth better than linear amplification (Fortune, 1999; Jenstad et al., 2000). It is less clear whether a fitting based on normalized loudness is superior in terms of speech intelligibility and/or speech quality to a fitting based on some other criterion, such as speech audibility. Kiessling et al. (1996) demonstrated improved speech recognition with a loudness-normalization-based fitting procedure (ScalAdapt) versus a threshold-based fitting strategy. In contrast, Keidser and Grant (2001a) found superior speech recognition in noise with NAL-NL1, an audibility maximization strategy, over IHAFF, which is based on normalizing loudness growth. Given that loudness normalization is implemented differently in each fitting method, it is likely that the differences in hearing aid benefit seen in research studies are due to the individual procedure rather than the underlying philosophy of loudness normalization.
The idea of placing amplified speech within the listener's loudness comfort range seems reasonable and is implicit within many nonlinear prescriptive procedures. However, there is little direct evidence that normalizing loudness will provide optimal amplification characteristics (eg, Byrne, 1996; Van Tasell, 1993). In fact, Byrne (1996) argues against strict loudness normalization, pointing out that normal-hearing subjects can easily adjust to situational variations in loudness. Byrne (1996) also notes that hearing-impaired listeners might do better with compression parameters that explicitly do not normalize loudness growth, such as equalizing loudness across frequency (Byrne et al., 2001).
Finally, clinical measures of loudness growth rely on brief, steady-state signals which are used to determine the desired compression ratio. However, the effective compression ratio obtained with a complex, time-varying speech signal will be lower than that specified for a static signal such as a pure tone (Stone and Moore, 1992). Thus, normalizing loudness for individual frequency bands or pure tones does not mean loudness growth will be normalized for more complex, broad-band signals such as speech (Dillon, 1996; Moore, 1990 and 2000), although recent work (Moore, Vickers et al., 1999) suggests that loudness judgments using steady-state sounds should be adequate for predicting loudness with time-varying signals in compression systems with long time constants.
Using Compression to Improve Speech Intelligibility
Studies of the effects of compression amplification on speech intelligibility have generally taken one of two forms: laboratory-based research and clinical trials. Clinical trials may provide the most realistic assessment because the subjects wear the hearing aids in the home environment for several weeks or months and then complete one or more outcome measures. The disadvantage of clinical trials is that many variables are manipulated simultaneously, making it difficult to isolate specific effects. For example, a number of clinical trials compared aids that differed not only in compression characteristics but also in frequency-gain responses, microphones, receivers, acoustic modifications, and/or fitting algorithms (eg, Knebel and Bentler, 1998; Newman and Sandridge, 1998; Valente et al., 1998 and 1997; Walden et al., 2000). In such studies, it is difficult to attribute differences between hearing aids to differences in compression processing versus other amplification variables.
Laboratory studies provide better control over experimental variables, so the results can be interpreted in a more straightforward manner. However, some laboratory-based studies may not incorporate variables inherent in wearable hearing aids, such as venting or earmold acoustics; or the acoustic test environment used in the laboratory may be dissimilar to that encountered by the subjects in everyday life. Both types of research are needed to understand the benefits and limitations of nonlinear amplification.
In a much-anticipated study, a large-scale clinical trial was conducted with 360 patients recruited from audiology clinics at eight Department of Veterans Affairs medical centers (Larson et al., 2000). An important feature of this study was double-blinding, in which neither the subjects nor the test audiologists knew which type of circuit was being tested. All patients wore three different hearing aids (peak clipping, compression limiting, and WDRC) for 3 months each. The compression limiting aid had an 8:1 compression ratio and adaptive release time; the WDRC hearing aid had a compression threshold of 52 dB SPL, a compression ratio between 1.1:1 and 2.7:1, and a 50-millisecond release time.
After each 3-month trial period, the patient completed a series of outcome measures, including speech recognition in quiet and in noise, speech quality ratings, and a questionnaire assessing overall communication performance. At the end of the three trials, each patient rank-ordered the three aids. As expected, the WDRC circuit provided more favorable loudness comfort for a range of input levels than the other circuits.
Although there were significant differences in speech intelligibility among circuits for selected conditions, there was no consistent pattern and the mean differences in performance between circuits were small. In the rank-order test, the patients preferred compression limiting (41.6%), followed by WDRC (29.8%) and peak clipping (28.6%).
In a similar but smaller trial, Humes et al. (1999) fit 55 hearing-impaired adults with linear peak clipping (fit according to linear, NAL-R targets) and two-channel WDRC aids (fit according to nonlinear, DSL[i/o] targets). All patients wore the linear aids for 2 months, followed by the WDRC aids for 2 months. At the end of each 2-month trial period, a battery of outcome measures was completed that included word recognition in quiet and in noise at various presentation levels, judgments of sound quality, and subjective ratings of hearing aid benefit. In general, results showed better speech intelligibility with the WDRC aid at all but high-level inputs. Patients also reported that the WDRC hearing aids provided greater ease of listening for low-level speech in quiet. The authors attributed these results to the greater gain at low input levels provided by the WDRC circuit and the higher DSL target gain levels for the WDRC aid.
Many focused studies have compared linear and compression amplification in a controlled environment, using either simulated hearing aid responses or wearable hearing aids. A good example of this is recent work by Jenstad, Seewald, et al. (1999). Five conditions were included, representing different speech spectra that varied in level and frequency response. The same hearing aid was used for both linear and WDRC conditions, with targets generated using the same prescriptive procedure (DSL[i/o]). Outcome measures included sentence and nonsense syllable intelligibility and speech loudness ratings. For average speech levels, both circuits provided similar loudness comfort and speech intelligibility. For low and high speech levels, the WDRC aid provided better intelligibility and loudness comfort.
Bentler and Duve (2000) tested a variety of hearing aids that represented advances in amplification technology during the 20th century. Among the devices were a linear peak clipping analog aid, a single-channel analog compression aid, a two-channel analog WDRC aid, and two digital multichannel WDRC aids, all in behind-the-ear versions. Each device was fit using its recommended prescriptive procedure: NAL-R for the linear aid, FIG6 for the single-channel compression hearing aid, and the manufacturers' proprietary fitting algorithms for the remaining devices. Despite the differences in circuitry, speech recognition scores in quiet and in noise were similar across devices. The exception was poorer performance at very high speech levels (93 dB SPL) for the linear aid. This is not a surprising result given the distortion generated by peak clipping at such high input levels.
Moore and his colleagues (eg, Laurence et al., 1983; Moore and Glasberg, 1986; Moore et al., 1985 and 1992) worked extensively with an amplification system that applies first-stage, slow-acting compression with a compression threshold of 75 dB SPL to compensate for overall level variations, followed by fast-acting compression amplifiers acting independently in two frequency channels. Results showed improved speech reception thresholds in quiet and in noise (Moore, 1987) and improved speech intelligibility, particularly at low input levels (Moore and Glasberg, 1986; Laurence et al., 1983), when compared to linear amplification or to slow-acting compression.
An important issue is the ability of compression amplification to improve speech intelligibility in noise. Although initially expected as a benefit of nonlinear amplification, compression does not appear to provide substantial benefit in noise compared to linear amplification (eg, Boike and Souza, 2000a; Dreschler et al., 1984; Hohmann and Kollmeier, 1995; Kam and Wong, 1999; Nabalek, 1983; Stone et al., 1997; van Buuren et al., 1999; van Harten-de Bruijn et al., 1997). This stands in contrast to directional microphones, which do provide measurable benefit in noise (Ricketts, 2001; Valente, 1999; Yueh et al., 2001).
More recently, some investigators have suggested that the modulation properties of the background noise may influence the benefit of compression (Boike and Souza, 2000b; Moore et al., 1999; Stone et al., 1999; Verschuure et al., 1998). Specifically, compression may improve intelligibility when the background noise is modulated instead of unmodulated. This may be related to improved speech audibility during the noise “dips”.
In summary, WDRC provides the greatest advantage over linear amplification for low-level speech in quiet. In background noise, WDRC and linear amplification provide similar benefit. Several factors emerge as possible explanations for the disparate results seen across research studies. First, in some studies, performance with recently fitted and validated compression aids was compared to performance with the patients' own (linear) hearing aids (Benson et al., 1992; Schuchman et al., 1996). In addition to an expectation bias, in which subjects anticipated better performance from the “new” aid, the patients' own aids may have differed in other ways, such as a narrower frequency response or higher distortion. Second, the ability of compression to improve speech intelligibility in noise may be linked to the characteristics of the background noise (eg, Moore et al., 1999) or to the specific signal-to-noise ratio (Yund and Buckles, 1995a). Third, in some studies, potential differences in speech audibility were not accounted for and may have affected results of comparisons among amplification conditions. This issue is discussed in the next section.
Effects of Compression on Speech Audibility
A primary goal of compression is to place greater amounts of the speech signal within the listener's dynamic range (ie, between threshold and loudness discomfort level) without the wearer adjusting the volume control. This is particularly true of fast-acting WDRC, which can improve audibility of short-term speech components by providing gain suited to each syllable or phoneme. The well-known “speech banana” should actually become narrower, with the amplified output varying across a smaller intensity range than the unamplified input.
A few investigators have compared measured distributions of short-term RMS speech levels for linear and compressed speech (Verschuure et al., 1996; Souza and Turner, 1996, 1998, 1999). Such studies show that the range of speech levels is indeed reduced by compression. The magnitude of the reduction depends on the compression parameters of the amplification system, most notably the compression ratio, as well as on the release time and the length of the measurement window. With multichannel compression, the speech level distribution is reduced across frequency in accordance with the compression ratio in each channel (Souza and Turner, 1999). The higher the compression ratio, the greater the effect on the speech level distribution. Even for a single-channel compressor, the speech level distribution is unevenly affected across frequency (Verschuure et al., 1996).
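Measurements of this kind can be approximated by slicing the speech waveform into consecutive analysis windows and computing the RMS level of each window; the resulting set of levels is the short-term level distribution. The sketch below uses an arbitrary 125-millisecond window and a 10th-to-90th percentile span as the summary statistic; as noted above, the choice of window length affects the measured range, and the cited studies each used their own analysis parameters.

```python
import numpy as np

def short_term_levels(signal, fs, win_ms=125.0):
    """Distribution of short-term RMS levels (dB re full scale), illustrative only."""
    win = int(fs * win_ms / 1000.0)
    levels = []
    for start in range(0, len(signal) - win, win):
        frame = signal[start:start + win]
        levels.append(20.0 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12))
    return np.array(levels)

def level_span_db(levels_db):
    """Width of the level distribution (10th to 90th percentile), in dB."""
    return float(np.percentile(levels_db, 90) - np.percentile(levels_db, 10))

# Comparing level_span_db() for the same recording before and after compression
# quantifies how much the range of speech levels has been reduced.
```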
Figure 9 shows the measured speech level distribution for linear (top panel) and compressed (lower panel) speech. In each panel, the filled circles represent audiometric thresholds for a listener with a moderate-to-severe loss. The filled diamonds are the loudness discomfort levels for the same listener. The thick line shows the long-term speech level, and the thin lines represent the range of short-term speech levels. For the linearly amplified speech, when the level is adjusted to avoid discomfort from the peaks, the lower speech levels are inaudible. In contrast, compression reduces the speech level distribution, allowing the entire range of speech levels to be audible.
Another possible explanation for the conflicting results seen among previous studies of compression hearing aids is that compression did not always improve audibility. In many studies, the subject was allowed to adjust the volume control or choose the presentation level (eg, Dreschler et al., 1984; Laurence et al., 1983; Tyler and Kuk, 1989). While this is the most realistic procedure because it reflects the way listeners will use the aids in everyday communication, adjustment of the volume control could minimize the difference between the linear and compressed conditions. For example, listeners might show similar performance with a linear versus a compression hearing aid if they adjusted hearing aid output to the same level in both conditions.
How do hearing aid wearers adjust the speech presentation level in a compression aid relative to a linear aid? Souza and Kitch (2001b) measured speech audibility at the volume setting chosen by mild-to-moderately impaired listeners. The hearing aid was a programmable single-channel aid, programmed (in sequential order) for peak clipping, compression limiting, or wide-dynamic range compression processing. For each amplification condition, listeners were instructed to adjust the volume control while listening to a variety of different test signals. Regardless of the speech input level or the background (quiet or noise), listeners adjusted each circuit configuration to similar output levels. In essence, the volume control adjustment eliminated the natural audibility advantage of WDRC hearing aids.
Of course, the subjects in this study were specifically directed to adjust volume to accommodate changes in input level. We expect that in everyday use of compression hearing aids, subjects will also choose to make volume adjustments to accommodate changes in the listening environment. This is supported by research showing that subjects fit with WDRC amplification prefer to have a manual volume control (Knebel and Bentler, 1998; Kochkin, 2000; Valente et al., 1998) and that once one is provided, most experienced hearing aid wearers report using it (Barker and Dillon, 1999).
However, there are numerous situations in which use of a manual volume control is not possible or practical. For example, completely-in-the-canal (CIC) hearing aids rarely include manual volume adjustments and instead rely on compression to accommodate changes in the communication environment. A manual volume control is also of little use if the patient cannot physically manipulate it because of poor manual dexterity, loss of physical control from stroke or arthritis, or impaired cognitive functioning from Alzheimer's disease or developmental delay, or if the patient is a young child. How well does compression work when a range of speech levels is presented at a fixed volume control setting?
Several investigators have noted improved performance with compression amplification relative to linear amplification only at low speech levels and/or when a wide range of speech levels was processed with a single volume control setting (Jenstad, Seewald, et al., 1999; Kam and Wong, 1999; Laurence et al., 1983; Mare et al., 1992; Moore and Glasberg, 1986; Peterson et al., 1990; Stelmachowicz et al., 1995). The recent large-scale study sponsored by the National Institute on Deafness and Communication Disorders and the Department of Veterans Affairs found minimal differences in speech intelligibility between WDRC and linear (compression limiting or peak clipping) hearing aids when hearing aid volume was adjusted to National Acoustic Laboratories (NAL-R) targets (Larson et al., 2000). Presumably, NAL-R targets were similar across amplification types for moderate input levels, resulting in similar speech audibility and no net advantage for the WDRC hearing aid in that situation.
In a more direct test of the relationship between audibility and the benefit of compression for speech intelligibility, Souza and Turner (1996, 1998) measured the distribution of short-term RMS speech levels for a set of nonsense syllables. Speech identification scores were then measured for a two-channel WDRC amplification scheme and for a linear amplification scheme under conditions of varying speech audibility. Speech audibility was defined according to the proportion of the short-term RMS level distribution that was above the listener's hearing threshold. Listeners with a mild-to-moderate loss performed better with compression as long as it improved speech audibility. When compression and linear amplification provided equivalent audibility, there was no difference in performance.
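In computational terms, an audibility definition of this kind reduces to the fraction of the short-term level distribution (as computed in the earlier sketch) that exceeds the listener's threshold. The helper below is our own illustrative formulation, not the authors' procedure, and assumes levels and threshold are expressed on the same decibel scale.

```python
import numpy as np

def proportion_audible(levels_db, threshold_db):
    """Fraction of the short-term level distribution above the hearing threshold."""
    return float(np.mean(np.asarray(levels_db) > threshold_db))
```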
In summary, wide-dynamic range compression has been shown to improve audibility of speech components at low input levels (eg, Moore and Glasberg, 1986; Souza and Turner, 1998; Larson et al., 2000). One caution is that improved audibility is usually defined according to an optimal volume setting determined by the dispensing clinician. However, audibility could vary considerably in everyday listening situations where the patient controls the volume setting. This may partially explain the poor correlation between improved speech audibility measured in the clinic and everyday communication benefit described by the patient (Souza et al., 2000).
We cannot explain the results of all compression research in terms of speech audibility. Some studies have shown no improvements with compression even under conditions where compression clearly increased the amount of audible information in the speech signal (eg, DeGennaro et al., 1986). To account for such results, numerous investigators have speculated that essential cues for intelligibility are disrupted by WDRC (eg, Boothroyd et al., 1988; Dreschler, 1989; Festen et al., 1990; Plomp, 1988; Verschuure et al., 1996). This issue is discussed in the next section.
Effects of Compression on Acoustic Cues for Speech Identification
Speech intelligibility is determined by the listener's ability to identify acoustic cues essential to each sound. Implicit in this process is accurate transmission of these cues by the hearing aid. Certainly audibility of specific speech cues is a major factor in speech intelligibility. However, it is also important to consider whether acoustic cues are distorted or enhanced by compression amplification.
The work of DeGennaro et al. (1986) provides a convincing demonstration that more than simple audibility changes are involved. These investigators began by measuring the distribution of short-term RMS levels at each frequency. They then processed speech with compression systems that placed progressively greater amounts of the range of amplitude distributions above the subject's hearing threshold. Interestingly, no subject showed a consistent improvement with compression, although from an audibility perspective some improvement would be expected as greater amounts of auditory information exceeded detection thresholds and thus became audible.
It is possible that compression distorts some speech cues, offsetting the benefits of improved audibility, at least for some compression systems and for some listeners. Recently, there has been renewed interest in the importance of temporal cues for speech intelligibility (eg, Shannon et al., 1995; Turner et al., 1995; van der Horst et al., 1999; Van Tasell et al., 1987 and 1992) and speculation that these cues are disrupted by fast-acting WDRC (eg, Boothroyd et al., 1988; Dreschler, 1989; Festen et al., 1990; Plomp, 1988; Verschuure et al., 1996). Temporal cues include the variations in speech amplitude over time and range from the very slow variations of the amplitude envelope to the rapid “fine-structure” fluctuations in formant patterns or voicing pulses (Rosen, 1992). With regard to compression, most attention has focused on fluctuations in the amplitude envelope, in part because alteration of the amplitude envelope is the most prominent temporal effect of fast-acting WDRC. The amplitude envelope contains information about manner and voicing (Rosen, 1992; Van Tasell et al., 1992) as well as some cues to prosody and other suprasegmental features of speech (Rosen, 1992). Compression alters the variations in the amplitude envelope and reduces the contrast between high-intensity and low-intensity speech sounds. Of course, the reduced intensity variation is a desirable effect of compression. However, because both normal-hearing and hearing-impaired listeners can extract identification information from amplitude envelope variations (Turner et al., 1995), it is possible that alterations of these cues could affect speech intelligibility.
This has not been a simple issue to resolve. Because most studies use natural speech, which simultaneously varies in spectral and temporal content, it is difficult to separate the effect of altered temporal variations from other possible consequences, such as spectral distortion. The most direct method is to use a speech signal processed to limit spectral information. One processing technique is to digitally multiply the time-intensity variations of the speech signal by a broad-band noise (eg, Van Tasell et al., 1992; Turner et al., 1995). Naïve listeners describe these signals as “robotic” or “noisy” speech. Although more difficult to understand than natural speech, these signals can provide some identification information. Such signals can then be used to compare speech intelligibility for linear and compressed speech, while focusing on transmission of temporal cues. Results of such studies show that WDRC reduces consonant (Souza, 2000; Souza and Turner, 1996 and 1998) and sentence (Souza and Kitch, 2001a; Van Tasell and Trine, 1996) intelligibility. This effect is greater for higher compression ratios and/or short time constants (Souza and Kitch, 2001a).
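Signals of this kind can be generated by extracting the amplitude envelope of the speech and using it to modulate a broad-band noise carrier. The sketch below shows one simple way to do this; the rectify-and-low-pass envelope extraction, the 50 Hz envelope cutoff, and the RMS matching are our assumptions, and the cited studies used their own specific processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def envelope_noise(speech, fs, envelope_cutoff_hz=50.0, seed=0):
    """Replace spectral detail with noise while preserving the amplitude envelope.

    The envelope is extracted by full-wave rectification and low-pass smoothing,
    then used to modulate a broad-band Gaussian noise carrier.  The overall RMS
    is matched to the original so that only the carrier differs.
    """
    sos = butter(4, envelope_cutoff_hz, btype="low", fs=fs, output="sos")
    envelope = np.clip(sosfiltfilt(sos, np.abs(speech)), 0.0, None)
    noise = np.random.default_rng(seed).standard_normal(len(speech))
    modulated = envelope * noise
    scale = np.sqrt(np.mean(speech ** 2)) / (np.sqrt(np.mean(modulated ** 2)) + 1e-12)
    return modulated * scale
```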
These studies measured the effects of compression under conditions where the listener was forced to rely on temporal cues. Of course, the natural speech signal also contains spectral information. What is the clinical impact of these findings? The impact is probably minimal when one considers conversational speech presented in quiet to listeners with a mild-to-moderate loss who wear WDRC hearing aids with a small number of channels and a low compression ratio (<3:1). These listeners have normal or near-normal spectral discrimination ability (Moore, 1996; Van Tasell, 1993) and should be able to extract sufficient spectral and contextual information to compensate for altered temporal cues. The clinical impact may be greater for listeners who depend to a greater extent on temporal cues—most obviously, listeners with a severe-to-profound loss (Lamore et al., 1990; Moore 1996; Van Tasell et al., 1987).
Are all sounds equally susceptible to distortions of temporal cues, or are some sounds affected more than others? Today's sophisticated digital algorithms could, in theory, allow hearing aids to be programmed to provide compression characteristics customized to each phoneme. To reach this point, we need to understand what aspects of each phoneme are enhanced or degraded by multichannel WDRC and which compression parameters will preserve these cues optimally. We might expect the greatest effect for sounds where critical information is carried by variations in sound amplitude over time. For example, important features of the stop consonants (/p, t, k, b, d, g/) include a stop gap (usually 50 to 100 milliseconds in duration) followed by a noise burst (5 to 40 milliseconds in duration). Voiced stops (/b, d, g/) are distinguished from voiceless stops (/p, t, k/) by the onset of voicing relative to the start of the burst. For syllable-initial stops, voice onset time (VOT) ranges from close to 0 milliseconds for voiced stops to 25 milliseconds or more for voiceless stops (Kent and Read, 1992). Perception of stop consonants can therefore be modeled as a series of temporal cues (ie, a falling or rising burst spectrum followed by a late or early onset of voicing) (Kewley-Port, 1983).
Because stop consonant identification depends on transmission of temporal cues (Turner et al., 1992; van der Horst et al., 1999) we expect these sounds to be especially susceptible to WDRC-induced alterations in the amplitude envelope. For example, hearing-impaired listeners may place more weight on the relative amplitude between the consonant burst and the following vowel (Hedrick and Younger, 2001), a cue that can be significantly changed by WDRC. The little data available do suggest WDRC can have negative effects on stop consonant intelligibility. In one study, single-channel, fast-acting compression applied to synthetic speech increased the amplitude of the consonant burst, resulting in erroneous perception (/t/ for /p/) (Hedrick and Rice, 2000). Similarly, Sreenivas et al. (1997) noted that a two-channel syllabic compressor increased the amplitude of the consonant burst, particularly in the mid-frequency region, resulting in more errors of /g/ for /d/ (for unprocessed speech, the peaks of /g/ are more prominent in the 1–2 kHz range, with the spectral peaks for /d/ mainly in the 4–5 kHz range). Alternatively, stop consonant errors could occur if the attack time overshoot is mistaken for a burst (Franck et al., 1999), so errors might be reduced by careful selection of time constants.
As another example, the affricates /dʒ, tʃ/ are distinguished from stops by the rise-time of their noise energy and the duration of frication noise (Howell and Rosen, 1983). Specifically, the rise-time of affricates is intermediate between the short rise-time of stop consonants and the long rise-time of fricatives. Because one effect of WDRC would be to alter the rise-time pattern of the phoneme, it is possible that fast-acting WDRC systems would be detrimental (Dreschler, 1988b). Indeed, recent data from our laboratory suggest that affricate perception is impaired in multichannel WDRC systems, and that the most common error is a stop consonant (Jenstad and Souza, 2002b).
In summary, we expect multichannel WDRC to have diverse effects on acoustic cues that depend on the individual phoneme and on the salient cues for identification of that phoneme. Additionally, we expect these effects will be contingent on the parameters of the compressor. Compression with a high compression ratio and short time constants will produce the most dramatic alterations. These changes should be considered in conjunction with the improved speech audibility possible with compression amplification.
Effects of WDRC on Speech Quality
There is increased interest in using sound-quality judgments as an aid to the hearing aid fitting process (eg, Byrne, 1996; Mueller, 1996). Sound-quality judgments appeal to clinicians for a number of reasons. In the absence of a standard protocol for adjusting compression parameters, they can be used to guide compression settings according to patient preference; they can be completed in a short amount of time; and they involve the patient in the fitting process (Iskowitz, 1999). Of course, they are also subject to the patient's previous experience or biases.
Studies of speech quality have used paired comparisons (Byrne and Walker, 1982; Neuman et al., 1994; Kam and Wong, 1999), sound quality ratings (Neuman et al., 1998) or ratings of speech intelligibility (Bentler and Nelson, 1997). Generally, patients prefer the quality of speech with the least complex processing. Specifically, they prefer lower compression ratios (Boike and Souza, 2000a; Neuman et al., 1994; Neuman et al., 1998; van Buuren et al., 1999); longer release times (Hansen, 2002; Neuman et al., 1998) and smaller numbers (<3) of compression channels (Souza et al., 2000). Listeners with sloping loss show a slight preference for two-channel over single-channel compression (Keidser and Grant, 2001b; Preminger et al., 2001).
Before using sound-quality judgments to determine whether compression is “better” than linear amplification, it is important to consider the relationship between sound quality and speech intelligibility. Previous research has shown that listeners who were asked to choose a system on the basis of sound quality did not necessarily choose the system that maximized speech intelligibility. An example of this is the frequency-gain response used in a linear aid; generally, patients prefer a frequency response with greater low-frequency gain, while greater high-frequency gain usually provides better speech intelligibility (Punch and Beck, 1980).
With regard to compression, a few studies have examined the relationship between intelligibility and sound quality for compressed speech. Boike and Souza (2000a) measured speech intelligibility and sound-quality ratings for speech processed with single-channel WDRC amplification at a range of compression ratios. Quality ratings and speech intelligibility were highest for linear speech and decreased with increasing compression ratio. Higher quality ratings were significantly correlated with higher speech intelligibility scores. Souza et al. (2001) asked patients with a severe loss to choose among four digitally simulated amplification conditions: linear peak clipping, compression limiting, two-channel WDRC, and three-channel WDRC in a paired-comparison test. The WDRC systems used a compression threshold of 45 dB SPL, a compression ratio of 3:1, and attack and release times of 3 and 25 milliseconds, respectively. Speech intelligibility was also measured in each condition. The pattern of preference rankings paralleled that of speech intelligibility, with compression limiting preferred most often and providing the highest intelligibility. The least preferred (and least intelligible) was the three-channel compression system.
In studies with wearable hearing aids, patients in the large-scale clinical trial sponsored by the Department of Veterans Affairs and the National Institute on Deafness and Communication Disorders preferred compression limiting over peak clipping or single-channel, fast-acting WDRC hearing aids (Larson et al., 2000). Humes et al. (1999) reported that 76% of patients tested preferred a two-channel WDRC aid to a linear peak clipping aid. Given the higher distortion from peak clipping aids at a high input level, a preference for WDRC is not surprising. Additionally, Humes et al. (1999) noted that the WDRC aid was the last circuit option used and may have represented a “new” aid to some of the subjects. Kam and Wong (1999) found WDRC was preferred over compression limiting for “loudness appropriateness” and for “pleasantness” of high-level signals in paired-comparison testing.
In summary, current research studies indicate that patients prefer simpler sound processing strategies to those with large numbers of compression channels and/or high compression ratios. In the few studies that assessed patient preference and speech intelligibility, subjects also selected amplification systems that provided good speech intelligibility. These results are encouraging because they imply that sound-quality judgments or patient preference could be used clinically to select compression parameters without compromising speech intelligibility.
Setting Compression Parameters
The electroacoustic parameters of conventional compression hearing aids were generally fixed by the manufacturer or were adjustable only within a small range. Today's digital and digitally programmable aids are increasingly adjustable by the clinician. How, then, should compression parameters be adjusted for a particular patient?
The hearing aid fitting procedures recommended by the American Speech-Language-Hearing Association rely heavily on probe microphone testing (ASHA, 1998). However, this approach provides little guidance in setting compression parameters. Differences in attack and release time, for example, would not be evident with standard probe microphone procedures but could have significant effects on speech intelligibility and/or speech quality.
In a recent study (Jenstad, Van Tasell et al., 1999), clinicians were surveyed about their solutions to common fitting problems noted by the patient, including lack of clarity, excessive loudness, and complaints about background noise. Some problems received consistent responses. For example, the majority of clinicians surveyed would adjust either maximum output or gain in response to complaints that the aid was too loud. Knowledge of the effect of compression adjustments was much less consistent. Only about half the respondents answered the questions about compression parameters. Of those who did, responses were often inconsistent or conflicting. For example, survey respondents sometimes stated that they would solve the same fitting problem in opposite ways (ie, both by increasing the release time and by decreasing the release time). Given the obvious uncertainty about setting compression parameters, how should these aids be set in the clinic?
One approach is to accept the manufacturer's default settings. In programmable aids, these are usually applied automatically when the manufacturer's fitting algorithm is used. While this may provide a good starting point, it does not account for the effects of later adjustments in response to patient complaints about intelligibility or sound quality. Additionally, many manufacturers' fitting recommendations are based on the audiogram only and may not address individual differences in suprathreshold processing, loudness growth, or preference.
Obviously, there is a need for guidance in setting compression parameters. Fortunately, a number of researchers have directly or indirectly addressed these questions (eg, Barker and Dillon, 1999; Boike and Souza, 2000a; Fikret-Pasa, 1994; Hansen, 2002; Hornsby and Ricketts, 2001; Neuman et al., 1994). Although it is difficult to compare results directly across studies, as each study used different amplification systems, subject populations, and fitting procedures, this research can provide some guidance for setting compression characteristics.
Setting Compression Threshold and Compression Ratio
For compression limiting aids, the primary goal is to avoid discomfort for high-level inputs without saturation distortion. Assuming a hearing aid that processes sound linearly below the compression threshold, the compression ratio per se is less important than setting the aid appropriately to prevent loudness discomfort. For example, the NAL-R prescriptive procedure recommends setting compression limiting parameters (compression threshold and/or compression ratio, depending on the hearing aid) so that maximum output is halfway between the listener's loudness discomfort level and a level that allows a 75 dB SPL input to be amplified without saturation (Dillon and Storey, 1998).
Interestingly, Fortune and Scheller (2000) found that a hearing aid with a low compression threshold, low compression ratio, and slow attack time produced loudness discomfort levels that varied with signal duration (ie, increasing loudness discomfort levels with decreasing signal duration). Use of compression limiting resulted in flat loudness discomfort functions (ie, signal duration did not affect the loudness discomfort level). The authors suggested that use of such parameters rather than the conventional high compression threshold, high compression ratio, and short attack time of a compression limiting aid might allow greater signal audibility without discomfort for brief speech components.
For wide-dynamic range compression amplification, the compression threshold and compression ratio are usually low (Walker and Dillon, 1982). Generally, the lower the compression threshold, the more audibility is improved for low-level speech (Souza and Turner, 1998). Some recently introduced hearing aids use compression thresholds as low as 0 dB HL, in contrast to the 40–50 dB SPL compression thresholds normally used in WDRC aids. However, most hearing aid wearers prefer substantially higher compression thresholds (Dillon et al., 1998; Barker and Dillon, 1999). Barker and Dillon, as well as Ricketts (in Mueller, 1999) speculate that listeners reject a low compression threshold because it amplifies low-level background noise, resulting in undesirable sound quality. A low compression threshold might be more acceptable if the hearing aid used expansion instead of linear processing below the compression threshold.
In many programmable aids, compression ratio is the primary adjustment available to the clinician. How should this parameter be adjusted? In a recent study, Boike and Souza (2000a) measured sentence recognition for compression ratios ranging from 1:1 to 10:1. For each condition, listeners also rated clarity, pleasantness, ease of listening, and overall sound quality. Sentences were presented in quiet and in noise at a +10 dB signal-to-noise ratio.
In quiet, increasing compression ratio had no effect on speech intelligibility. In background noise, there was a decrease in performance as compression ratio increased. Goedegebure et al., (2001) and Hohmann and Kollmeier (1995) reported similar findings for speech in noise. Speech level may also have an effect. Hornsby and Ricketts (2001) found decreased speech intelligibility at increased compression ratios ranging up to 6:1 for conversational-level speech. However, there were only minimal effects of increasing compression ratio for high-level (95 dB SPL) speech.
In the laboratory studies previously described, speech audibility was purposely held constant. In many compression aids, the compression ratio is controlled by adjusting gain separately for low-intensity versus high-intensity input signals. Most often, the gain for high-intensity sounds is fixed according to the listener's loudness discomfort level, and the compression ratio is increased by increasing the gain for low-intensity speech. In other words, a higher compression ratio could also improve speech audibility. This may account for the results of some studies with wearable aids in which increasing the compression ratio did not show a detrimental effect (eg, Fikret-Pasa, 1994); perhaps the improved audibility offset any negative effects. If we consider improved speech audibility as a primary goal, in conjunction with the potential for reduced speech intelligibility and decreased preference at higher compression ratios, it seems reasonable to use the lowest possible compression ratio that will maximize audibility across a range of speech levels. This may require decreasing gain for soft sounds and/or increasing gain for loud sounds, as long as soft sounds remain audible and loud sounds do not cause discomfort.
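The sketch below makes this relationship concrete. The input levels and gains are assumed values chosen only to show that raising gain for soft speech, while holding gain for loud speech fixed, increases the compression ratio.

```python
def compression_ratio(input_low_db, gain_low_db, input_high_db, gain_high_db):
    """Static compression ratio between two points on the input-output function:
    change in input level divided by change in output level."""
    output_low_db = input_low_db + gain_low_db
    output_high_db = input_high_db + gain_high_db
    return (input_high_db - input_low_db) / (output_high_db - output_low_db)

# Gain for loud inputs (here, 10 dB at 80 dB SPL) held fixed by the loudness discomfort level
print(compression_ratio(50, 25, 80, 10))   # 2.0  -> 2:1
# Raising gain for soft speech improves audibility and raises the compression ratio
print(compression_ratio(50, 32, 80, 10))   # 3.75 -> roughly 3.8:1
```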
Setting Attack and Release Time
Attack time does not vary much and usually cannot be adjusted by the clinician. A short attack time is important to allow the hearing aid to respond quickly to increases in sound level. However, an attack time that is too short (<1 millisecond) could be perceived by the listener as a click. Walker and Dillon (1982) recommend attack times between 2 and 5 milliseconds.
Release times have a wide range of possible settings and may have an impact on speech intelligibility and sound quality. There is no consensus about the optimal release time (Dillon, 1996; Hickson, 1994; Jenstad et al., 1999). Shorter release times minimize intensity differences between speech peaks (eg, high-intensity vowels) and valleys (eg, low-intensity consonants) and therefore can provide greater speech audibility (eg, Jenstad and Souza, 2002a). As an example, recall that a compression hearing aid applies greater gain for low-intensity input signals, and less gain to high-intensity input signals. Consider a speech signal with a high-intensity vowel followed by a low-intensity consonant. The variable gain amplifier would respond to the high-intensity vowel with increased compression (ie, decreased gain). With a short release time, gain recovers quickly for the low-intensity consonant, allowing for greater consonant audibility.
Next, consider the same speech stimulus processed with a long release time. Again, the variable gain amplifier would respond to the high-intensity vowel with increased compression (ie, decreased gain). In this case, however, the hearing aid recovers its gain slowly; that is, it takes a longer time to return to higher gain (and output). With gain still low, the low-intensity consonant receives little gain, and may be inaudible to the listener.
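These dynamics can be illustrated with a deliberately simplified, single-channel compressor sketch. This is not any manufacturer's algorithm; the threshold, ratio, and time constants are assumed values, and levels are expressed relative to digital full scale for convenience.

```python
import numpy as np

def compress(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Track the signal level with separate attack/release time constants,
    then reduce gain when the estimated level exceeds the threshold."""
    x = np.asarray(x, dtype=float)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level_db = -100.0                      # running level estimate (dB re full scale)
    y = np.zeros_like(x)
    for i, sample in enumerate(x):
        in_db = 20.0 * np.log10(abs(sample) + 1e-10)
        coeff = a_att if in_db > level_db else a_rel
        level_db = coeff * level_db + (1.0 - coeff) * in_db
        # Above threshold, the output rises at 1/ratio the rate of the input
        gain_db = 0.0
        if level_db > threshold_db:
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        y[i] = sample * 10.0 ** (gain_db / 20.0)
    return y
```

With a short release_ms, the level estimate (and hence the gain) recovers quickly after an intense vowel, so a following low-intensity consonant receives nearly full gain; lengthening release_ms leaves the gain reduced throughout the consonant, as described above.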
We can use acoustic measurements to illustrate the consequences of release time for speech audibility. Figure 10 summarizes the effect of input level and release time on consonant audibility. The most obvious effect is a systematic increase in output level at higher input levels. Additionally, the distribution of consonant levels (ie, from lowest-intensity consonant to highest-intensity consonant) is reduced with shorter release times, both within an input level and across input levels.
In addition to improved audibility for brief, low-intensity speech components, a short release time may even out small amplitude fluctuations, making it easier to detect gaps in the signal (Glasberg and Moore, 1992; Moore et al., 2001). However, fast-acting compression also alters the normal amplitude contrast between adjacent speech sounds, which may be an invariant cue for speech identification (Hickson and Byrne, 1997; Hickson et al., 1999; Plomp, 1988). Thus, shorter release times can also minimize or distort temporal cues.
A hearing aid with a long release time cannot respond quickly to changes in level between individual phonemes. However, with longer release times, the natural amplitude contrast between the vowel and consonant is preserved (Jenstad and Souza, 2002a). Therefore, compression with longer release times may be a better choice for listeners who rely on variations in speech amplitude.
Acoustic measurements can also be used to describe the effect of release time on speech amplitude variations. Figure 11 compares the amplitude envelope for the syllable /ip/ for a linear circuit (thick line) and a WDRC circuit with a 12-millisecond release time (thin line). Amplitude was normalized to emphasize the differences in amplitude envelope rather than differences in overall level, which also depend on the individual gain prescription and volume control setting. Acoustically, both the amplitude contrast between the consonant and vowel and the rise-time of the initial portion of the consonant are significantly altered by WDRC. Specifically, WDRC reduces the vowel level and increases the consonant level relative to the original speech token.
To describe these effects, it is helpful to use an index that quantifies the degree of temporal change to the signal. One available measure is the envelope difference index (EDI; Fortune et al., 1994). The EDI is an index of change to the temporal envelope between the two signals, designed to describe temporal effects of amplification. Briefly, this involves obtaining the amplitude envelope of two signals (by rectifying and low-pass filtering) and calculating the difference between them on a scale from 0 to 1, with 0 representing complete correspondence between the waveforms and 1 representing no correspondence between the waveforms. For the syllables compared in Figure 11, the calculated EDI is 0.14. How much the speech is altered depends not only on the release time, but also on input level, compression threshold, compression ratio, frequency response, and number of compression channels. Nonetheless, we can use the EDI to systematically describe the effect of varying release time on individual phonemes.
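A minimal sketch of this type of calculation follows. The rectify-and-filter parameters and the normalization are assumptions made for illustration; the published EDI (Fortune et al., 1994) may define these steps somewhat differently.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=50.0):
    """Full-wave rectify, then low-pass filter to extract the amplitude envelope."""
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(x))

def envelope_difference_index(x1, x2, fs):
    """Compare the temporal envelopes of two equal-length signals on a 0-1 scale:
    0 = identical envelopes, 1 = no correspondence."""
    e1 = amplitude_envelope(x1, fs)
    e2 = amplitude_envelope(x2, fs)
    e1 /= np.mean(e1)        # equate average envelope amplitude before comparing
    e2 /= np.mean(e2)
    return np.sum(np.abs(e1 - e2)) / np.sum(e1 + e2)
```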
The relationship between release time and EDI is plotted in Figure 12 for a variety of consonants presented in an /iC/ format. Shorter release times altered the amplitude envelope more than longer release times, although there was little difference between release times of 12 and 100 milliseconds for most consonants. These effects were also phoneme-dependent, with the greatest alteration for voiceless fricatives and affricates and minimal effects for nasals and glides.
Evidently, varying release time can introduce significant differences in speech audibility or temporal cues. The consequences of these acoustic changes for intelligibility are less clear. Based on research data available at the time, Walker and Dillon (1982) suggested that a release time of between 60 and 150 milliseconds would provide the best speech intelligibility. More recent studies show little effect of varying release time on sentence intelligibility for release times up to 200 milliseconds in listeners with a mild-to-moderate loss (Bentler and Nelson, 1997; Jerlvall and Lindblad, 1978; Novick et al., 2001). However, changes in release time may have more subtle effects. The amplitude envelope has been shown to carry important properties for identification of some phonemes (eg, Turner et al., 1995), and changes in these properties may affect phoneme recognition for listeners who rely on these cues (Souza et al., 2001). Finally, the issue is complicated by use of an adaptive release time, which can respond in different ways depending on the duration, intensity, and crest factor of the triggering signal (Fortune, 1997).
Changes in release time may also affect speech quality. Results have been mixed; some studies show no distinct preference among release times (Neuman et al., 1994; Bentler and Nelson, 1997), while others show that speech processed with longer release times (>200 milliseconds) is rated as more pleasant or less noisy than speech processed with shorter release times (Hansen, 2002; Neuman et al., 1998).
In summary, there is a possible discrepancy between a preference for longer release times and improved speech intelligibility for shorter release times. It is also important to consider the interaction between release time and other processing features such as the number of channels and the compression thresholds of the hearing aids. For example, because a lower compression threshold can improve audibility, a hearing aid with a slow release time may improve intelligibility if paired with a lower compression threshold. Also, the combination of multiple compression channels with short release times may cause significant temporal and spectral smearing (eg, Moore and Glasberg, 1986). Thus, in a multichannel hearing aid, a longer release time may improve intelligibility over a shorter release time.
Number of Compression Channels
How many channels should be used? Presumably, the more channels, the more control the clinician has over signal characteristics and speech audibility. However, with a large number of compression channels, relative differences in level across frequency (ie, spectral peak-to-valley differences) will be reduced. Therefore, use of more than two or three channels may substantially reduce spectral contrast in the speech signal, potentially degrading temporal and spectral cues (Bustamente and Braida, 1987; Dreschler, 1992; Moore and Glasberg, 1986). Any negative effects of increasing numbers of channels are likely to have the greatest consequences for sounds that carry pertinent information in the spectral domain, among them vowels and the nasal consonants /m, n, ŋ/ (Kent and Read, 1992). For example, the most important cue for vowel identity is detection of spectral peaks relative to the surrounding frequency components. Even if overall audibility of the sound is improved, these changes may reduce intelligibility. Differences in the number of channels could explain differences in results between investigators who demonstrate improved vowel intelligibility using WDRC with a small number of channels (eg, Dreschler et al., 1988b and 1989; Stelmachowicz et al., 1995) and those who show a detrimental effect. For example, Franck et al., (1999) showed vowels were harder to identify via an eight-channel compression hearing aid than with a single-channel compression hearing aid.
In a review of published data on multichannel amplification prior to 1994, Hickson (1994) concluded that the best results were obtained with compression systems having three or fewer channels. For speech intelligibility in general, recent data suggest that multichannel systems with up to four channels are equivalent to, but not superior to, single-channel systems (eg, Keidser and Grant, 2001b; van Buuren et al., 1999).
For studies that demonstrated improved performance with greater numbers of channels, the advantage appears to be one of improved audibility rather than the number of channels per se. For example, Yund and Buckles (1995b) demonstrated improved nonsense syllable recognition in noise as the number of channels increased from four to eight. Comparisons of consonant confusions and frequency responses for the different numbers of channels were consistent with improved high-frequency audibility. The authors note that results of multichannel compression experiments should be interpreted in the context of the stimuli used. In this case, no additional improvement was seen with more than eight channels, perhaps because the eight-channel system already provided sufficient information for recognition of high-frequency consonants. Similarly, Braida et al., (1982) pointed out that some early studies that showed a large advantage for multichannel compression likely provided improved high-frequency audibility relative to a linear condition.
For most audiometric configurations, two-channel or three-channel compression hearing aids seem to offer a good compromise between customized manipulation of the hearing aid response and providing coherent spectral contrast. For more unusual audiometric configurations (ie, rising or cookie bite audiograms), larger numbers of channels are appealing. Available data on larger numbers of channels is mixed, although larger numbers of channels should be most advantageous when adequate frequency shaping is provided (Crain and Yund, 1995); when adding more channels improves speech audibility over a smaller number of channels; and when compression ratios are low enough to avoid distortion of speech components (Yund and Buckles, 1995b). Larger numbers of channels also have potential benefits for feedback cancellation. The audibility advantage of multichannel compression may be most effective for listeners with a mild-to-moderate loss (Yund and Buckles, 1995a).
Candidacy for Compression Amplification
Is one type of compression system best for every patient? No, almost certainly not. In research studies, results are usually reported in favor of the majority. For example, in a recent study, 7 of 16 subjects demonstrated improved performance for a compression aid versus a linear aid, 5 showed no difference, and 4 showed degraded performance. The overall conclusion was that WDRC was superior to linear amplification (Yund and Buckles, 1995a). While such statistical conclusions follow accepted research standards, the underlying individual variability is of great interest to clinicians, whose goal is to determine the optimal hearing aid processing for an individual patient. Numerous studies show such differences in performance across subjects, with improved scores with compression for some listeners but not for others (eg, Benson et al., 1992; Laurence et al., 1983; Moore, Johnson et al., 1992; Tyler and Kuk, 1989; Walker et al., 1984). Individual performance differences are noted even across listeners with the same amount of hearing loss (eg, Boothroyd et al., 1988). Clearly more research is needed to relate individual audiometric characteristics, suprathreshold processing ability, and previous hearing aid experience to performance with nonlinear amplification. For example, differences in compression benefit may be related to individual differences in dynamic range across subjects (Moore, Johnson et al., 1992; Peterson et al., 1990), to the configuration of the audiogram (Souza and Bishop, 2000), or to the perceptual weights individual listeners place on different portions of the signal (Doherty and Turner, 1996). The next sections review recent research on the benefits of compression amplification for specific audiometric groups.
Use of WDRC for Severe-to-Profound Loss
Until recently, most listeners with a severe-to-profound loss were fit with either linear peak clipping or compression limiting aids, both of which operate linearly in most listening situations. The availability of high-gain wide-dynamic range compression aids offers new options for those with greater degrees of hearing loss. Nearly all of the major programmable and digital product lines are now available in a power behind-the-ear style (Buyer's Guide, 2001). However, most research has focused on listeners with mild-to-moderate hearing losses. According to dispensing professionals, 23% of hearing aids were dispensed to listeners with hearing thresholds exceeding 70 dB HL (Strom, 2002). Clearly, there is a need for research that is focused on this special group.
A drastically reduced dynamic range is characteristic of a severe-to-profound hearing loss. The range from threshold to the loudness discomfort level can be as small as 5 dB at some frequencies. Because conversational speech varies over a range of 30 dB or more, it is difficult to place the full range of speech components into the audible range of the listener using only linear amplification. In theory, WDRC amplification could be used to solve this problem.
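A quick calculation with the approximate figures above shows why the required compression can be so extreme; both values are illustrative.

```python
speech_range_db = 30      # approximate level range of conversational speech
residual_range_db = 5     # dynamic range at some frequencies in severe-to-profound loss

# Compression ratio needed to map the full speech range into the residual range
required_ratio = speech_range_db / residual_range_db
print(required_ratio)     # 6.0, ie, roughly a 6:1 compression ratio
```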
It has long been accepted that listeners with a severe loss require different linear amplification characteristics than listeners with a mild to moderate loss (Byrne, 1978; Byrne et al., 1990; Schwartz et al., 1988; Van Tasell, 1993). For WDRC amplification, recall that one effect is alteration of the natural time-intensity variations of the speech signal. For listeners with a mild-to-moderate loss who presumably depend to a greater extent on spectral cues, these changes in time-intensity variations do not significantly offset the benefits of improved speech audibility (Souza and Turner, 1996, 1998 and 1999). Because of their broader auditory filters (Faulkner et al., 1990), listeners with a severe-to-profound loss may not be able to take full advantage of spectral information (eg, Erber, 1972; Rosen, 1992) and must rely to a greater extent on temporal cues, which are altered by WDRC amplification (Lamore et al., 1990; Moore 1996; Van Tasell et al., 1987).
Several studies have shown that for listeners with a severe-to-profound hearing loss, multichannel compression can decrease intelligibility compared to linear amplification, even under conditions where it improves speech audibility. Boothroyd et al., (1988) used a two-channel compression system to place speech into the dynamic range of nine listeners with a severe-to-profound sensorineural hearing loss. Only one subject showed improved recognition, and the remaining eight listeners performed more poorly even though more auditory information was available. The authors suggested that temporal distortion resulting from a high compression ratio might have reduced speech intelligibility.
DeGennaro et al., (1986) compared speech intelligibility for two severely impaired listeners for linear amplification versus a 16-channel laboratory-based compression system that placed increasing amounts of the speech amplitude distribution within the listener's dynamic range. Compression ratios were typical of those in WDRC hearing aids, ranging from an average across bands of 1.1:1 to 4.2:1. Although some improvement was expected, neither listener showed a consistent improvement with compression as greater amounts of auditory information were made available.
Souza et al., (2001) measured nonsense syllable identification in listeners with a severe sensorineural loss for four amplification conditions: linear peak clipping, compression limiting, two-channel WDRC, and three-channel WDRC. Presentation level was the same for all conditions. The best performance was obtained for the compression limiting condition, with slightly (although not significantly) worse performance for the peak clipping and two-channel compression condition, and significantly poorer performance for the three-channel WDRC condition. The authors suggested that the severe loss group, already at a disadvantage due to their limited discrimination abilities, were more susceptible to subtle alterations in the acoustic signal introduced by the three-channel WDRC system.
Even some studies that found improved performance using compression amplification in listeners with a severe loss did so with reservations. Souza and Bishop (1999) found that when compression increased audibility over linear amplification, speech intelligibility did improve, but not to the same extent as for a control group of mild-to-moderately impaired listeners. Verschuure et al., (1998) found improved performance with compression only for specific types of background noise.
In summary, compression amplification implemented with a low compression threshold can, in theory, supply more audible speech information than linear amplification for listeners with a severe-to-profound loss. However, the advantages of improved audibility may be offset by alteration of speech cues, particularly in the temporal domain. The critical issue with this population is to preserve amplitude variations that contain usable information (Kuk and Ludvigsen, 2000; Rosen et al., 1990). Available data support this view: the studies most favorable to use of WDRC in severely impaired listeners used compression with low compression ratios and few channels (eg, Barker et al., 2001; Verschuure et al., 1998), while negative results were found with high compression ratios (Boothroyd et al., 1988) or larger numbers of compression channels (DeGennaro et al., 1986; Souza et al., 2000). This idea runs counter to the current industry trend of providing greater flexibility through larger numbers of channels.
Use of WDRC in Older Listeners
About 70% of hearing aid wearers are over 65 years old (Strom, 2002). Although manufacturers provide some adaptations such as larger controls, the processing strategy is chosen in the same way for a 70-year-old hearing aid wearer as for a 40-year-old hearing aid wearer. However, in addition to the expected changes in hearing sensitivity with age, older listeners also show changes in suprathreshold processing. One specific problem is a decline in the ability to discriminate variations in signal amplitude over time (for a review, see Fitzgibbons and Gordon-Salant, 1996). Included in this group of speech cues are variations in amplitude envelope, which can be significantly altered by compression hearing aids. If older listeners have difficulty discriminating these cues (Turner et al., 1995), might they respond differently to compression hearing aids than younger listeners?
One approach to study this issue is to compare intelligibility of WDRC-amplified speech for older and younger listeners. To understand whether the consequences of compression processing are different for older listeners, it is important to distinguish between the effects of age per se versus the changes in hearing thresholds that occur with age. One frequently used approach is to test older and younger listeners with normal hearing. However, this presents practical difficulties because of the limited availability of older listeners with normal hearing. Additionally, results from normal-hearing listeners may not generalize to the hearing-impaired listeners who will actually be using compression hearing aids.
A second approach is to compare younger and older hearing-impaired listeners whose audiograms are matched as closely as possible. This method controls for differences in hearing sensitivity so that any remaining group differences can be attributed to aging. If the older and younger groups perform differently, we can conclude that older listeners are a “special” group with regard to use of compression. Similar performance between the groups suggests that selection and fitting techniques for compression hearing aids can be applied regardless of patient age.
In general, results of research studies show that while older listeners consistently perform worse than younger listeners, the effects of WDRC on speech intelligibility are similar across the age range (Souza, 2000; Souza and Kitch, 2001a). The exception is listeners with normal hearing who are presented with speech processed to restrict spectral information, which forces them to rely on temporal cues for speech identification. In that case, the addition of WDRC processing degraded speech scores more for older than for younger listeners. Clinically, these data suggest that the choice between linear or compression amplification can be made regardless of listener age as long as the listener has adequate spectral resolution. Currently, most tests of spectral resolution are designed for use in a research environment. Additional work is needed to develop clinically feasible tests of spectral resolution and to define the relationship between individual spectral and temporal discrimination abilities and use of compression amplification.
Candidacy for WDRC Based on Auditory Ecology
Recent innovative work by Gatehouse et al., (2000) may provide a framework to determine candidacy for compression. In this study, subjects were fit with five different hearing aid processing strategies: single-channel linear, two-channel linear, two-channel slow-acting compression (ie, automatic volume control), two-channel fast-acting WDRC, and a two-channel hybrid with fast time constants in the low-frequency channel and slow time constants in the high-frequency channel, intended to preserve high-frequency cues and minimize upward spread of masking. A manual volume control was available only in the single-channel linear hearing aid. Subjects wore each aid for 10 weeks. Outcome measures included several subjective questionnaires; a closed-set speech-in-noise test; psychoacoustic tests related to upward spread of masking and to temporal and spectral discrimination; noise dosimetry; and an “auditory lifestyle” questionnaire that asked about specific auditory situations (eg, listening when two or more people are talking simultaneously), how often each situation was encountered, and how important it was to the subject's everyday life.
Four outcome domains were identified using factor analysis: listening comfort, satisfaction, rated intelligibility, and a speech test factor based on the speech-in-noise test. Across subjects, the linear hearing aids scored more poorly on all outcome dimensions. For listening comfort, the slow-acting AVC hearing aid received highest ratings. For intelligibility and speech-in-noise, the fast-acting WDRC and hybrid compression hearing aids performed best. For satisfaction, the three compression hearing aids were rated higher than the two linear hearing aids. Results of the auditory lifestyle questionnaire and the noise dosimetry tests were strong predictors of the hearing aid outcome measures.
This study is notable for several reasons. It is the first to suggest that “auditory ecology” (defined by the authors as the listening environments in which people function, the tasks to be undertaken in these environments, and their importance to everyday living) is a defining characteristic in selecting an amplification strategy. It also demonstrates a consistent advantage to fast-acting compression over linear amplification for both measured and perceived speech intelligibility. Finally, it highlights the importance of individually chosen compression characteristics based on psychoacoustic abilities as well as lifestyle and listening requirements.
Prescriptive Procedures for Compression Amplification
With increased use of compression aids comes the need for clinical fitting and verification procedures suited for this technology. Most established clinical prescriptive procedures, including NAL-R (Byrne and Dillon, 1986), Prescription of Gain/Output (POGO) (McCandless and Lyregaard, 1983), and Berger (Berger et al., 1989) formulas, specify desired gain at a single input level. These procedures were designed and validated with linear aids, which provided the same gain at every input up to the saturation limit of the aid. Such procedures are not appropriate for use with wide-dynamic range compression aids, which provide a different gain at each input level. Fortunately, several new fitting strategies have been introduced that specify the compression characteristics for a particular patient. These procedures have several common features. They provide target gain values for multiple input levels, usually a conversational-level input (65–70 dB SPL), a low-level input (45–55 dB SPL), and a high-level input (80–95 dB SPL). They specify target compression ratios but do not specify attack or release times. Because of the complexity of these formulas, they are implemented in computer programs, either as stand-alone software, as built-in software to probe microphone test systems, or within the manufacturer's software module in NOAH.
The current version of the Desired Sensation Level (DSL[i/o]) method (Cornelisse et al., 1994 and 1995) can be used with either linear (ie, peak clipping or compression limiting) or WDRC hearing aids. The program provides target amplification values expressed as 2cc coupler levels or as dB SPL measured at the tympanic membrane (ie, the real ear aided response or real ear aided gain, not real ear insertion gain) for low-input, moderate-input, and high-input levels (Figure 13). Target compression ratios are also given for each of nine frequencies between 250 Hz and 6000 Hz. In practice, many hearing aids do not allow control over compression ratios within those exact frequency ranges, but the target compression ratios can still be used as a guideline for hearing aid selection and adjustment of compression parameters. Compression threshold is not given as a target value but is selected a priori by the clinician. The DSL [i/o] program offers a number of customizable options, including measurement of loudness discomfort levels, real ear unaided response (REUR) and real-ear-to-coupler difference (RECD). A significant advantage is the inclusion of age-appropriate predicted values for RECD and REUR. Currently, DSL [i/o] is available as a stand-alone computer program and, in a convenient option for clinicians, incorporated within two probe microphone systems. It is also incorporated within several NOAH manufacturers' modules.
As noted above, the NAL-R prescriptive procedure widely used in clinics is not intended for use with WDRC amplification where gain is varied depending on input level. A recent addition is the NAL-NL1 procedure for nonlinear hearing aids (Dillon, 1999; Keidser et al., 1999). This prescriptive procedure, currently available only in a stand-alone computer program, provides target gain and output values at multiple input levels. These values can be viewed in a number of ways, including 2cc coupler levels or real ear values. Like the DSL [i/o] procedure, compression threshold is selected a priori by the clinician. The program provides target compression ratios at each frequency. NAL-NL1 also provides target crossover frequencies, based on the audiometric configuration. The clinician specifies the number of compression channels. The prescription can be customized according to the patient's age, and measured REUR and RECD values can also be used. Unlike DSL[i/o], there is no option for entering measured loudness discomfort levels.
FIG6 (Killion and Fikret-Pasa, 1993) is similar in format to the DSL[i/o] and NAL-NL1 programs but provides fewer customizable options. For example, the conversions from real ear values to 2cc coupler values in FIG6 are based on an adult ear; thus, this procedure is not appropriate for use with children. FIG6 displays frequency-specific insertion gain or 2cc coupler gain targets for low-input (45 dB SPL), moderate-input (65 dB SPL) and high-input (95 dB SPL) levels. Compression threshold is assumed to be low and cannot be changed by the clinician. Target compression ratios are also specified for a low-frequency (500–1000 Hz) and a high-frequency (2000–4000 Hz) range. If the aid being fit is a single-channel aid, these compression ratios can be averaged, as the difference between them is usually small. Maximum output targets are based on predicted rather than measured loudness discomfort levels. An example of a FIG6 target is shown in Figure 14.
The Independent Hearing Aid Fitting Forum (IHAFF) fitting protocol uses a different format. This protocol, essentially a fitting philosophy for nonlinear amplification, incorporates six elements, only one of which generates a hearing aid prescription (Cox, 1995 and 1999; Valente and Van Vliet, 1997). This prescription method, termed the Visual Input-Output Loudness Algorithm, or VIOLA, was based on the idea that amplification should normalize loudness growth. That is, a sound that is perceived as soft by a normal-hearing listener should be perceived as soft by a hearing-impaired listener wearing amplification; a sound that is perceived as comfortable by a normal-hearing listener should be perceived as comfortable by a hearing-impaired listener wearing amplification; and a sound that is perceived as loud by a normal-hearing listener should be perceived as loud by a hearing aid wearer.
As part of the IHAFF protocol, a standardized test of loudness perception called the Contour test (Cox et al., 1997) is administered to the patient. In this test, the listener rates the loudness of a series of pulsed tones presented at different levels. If the patient cannot complete this task, predicted loudness values can be used. The resulting data is used to generate amplification targets, which are displayed in the form of input-output functions. To use this prescription, the clinician enters data about a particular hearing aid and compares the generated input-output function for that hearing aid to the target input-output function. The closer the match, the more appropriate the hearing aid based on the IHAFF method. This system, in which the “best match” is chosen from hearing aids picked by the clinician, is different from the three other methods described, in which the formula specifies the desired static compression characteristics.
At present, the IHAFF protocol does not seem to have been as widely accepted as some of the other procedures (Medwetsky et al., 1999b). This may be because few clinicians routinely measure loudness judgments (Medwetsky et al., 1999a); because loudness functions can be reliably estimated from hearing thresholds (Jenstad et al., 2000; Moore, 2000; Moore et al., 1999); because more time is required initially (Lindley and Palmer, 1997); because this program is available only as a stand-alone program and is therefore less convenient; or through simple unfamiliarity with the method.
In summary, a variety of prescriptive procedures are now available for fitting low-compression threshold hearing aids. Unlike prescriptive procedures for linear aids that provide a single target gain curve, nonlinear prescriptive procedures provide targets at multiple input levels. At present, these procedures are underused compared to linear prescriptive procedures such as NAL-R. Although NAL-R is used in over 90% of clinics, clinicians report using the various nonlinear procedures only about 10% of the time (Medwetsky et al., 1999b). Use of nonlinear prescriptive procedures may be increased by continued clinical education and by inclusion of these formulas within hearing aid programming software or probe microphone systems.
Adapting Linear Prescriptions to Nonlinear Aids
As an alternative to methods designed for nonlinear aids, some clinicians have suggested using a linear prescription (eg, NAL-R) to select gain for average speech (eg, ASHA, 1998; Mueller, 1997). Appropriate compression characteristics, such as compression ratio or increased gain for low-level inputs, could then be based on the listener's dynamic range. For example, a commonly used clinical technique is to set amplification characteristics according to prescribed gain for average speech, then adjust compression characteristics until low-level speech is audible (Ontario Rehabilitation Technology Consortium, 2000). This can be verified using probe microphone measurements to ensure that the REAR for a low-level input (typically 50 dB SPL) is above threshold. However, this method does not provide specific frequency-gain targets for low-level and high-level speech or recommend an appropriate compression ratio.
Comparing Prescriptive Procedures for Compression Aids
At present, there is no consensus as to which prescription is best. Since we have yet to reach agreement on the “best” prescriptive procedure for linear aids (eg, Hamill and Barron, 1992; Humes and Hackett, 1990), it is hardly surprising that we have not reached this point with prescriptions for nonlinear aids, which are more complex and were only recently introduced. The different formulas will result in differences in prescribed frequency-gain response, maximum output, and compression characteristics (Byrne et al., 2001; Lindley and Palmer, 1997; Ricketts, 1996; Stelmachowicz et al., 1998). Assuming targets are met, these will translate to differences in speech audibility. For example, DSL [i/o] tends to prescribe more gain, and hence better predicted speech audibility, than other methods such as NAL-NL1. NAL-NL1 prescribes less low-frequency gain for flat hearing losses, less high-frequency gain for steeply sloping losses, and less compression overall than FIG6 or IHAFF (Byrne et al., 2001). To further complicate the issues, use gain may be considerably lower than prescribed gain, and the difference between use gain and target gain is specific to the fitting formula (Stelmachowicz, 1998). Without additional research, it is unclear whether any particular fitting formula will lead to greater speech intelligibility and/or user satisfaction. It is possible that different formulas will be appropriate for different patients, types of hearing aids, or listening situations.
How, then, to decide which fitting formula to use? A key consideration should be the underlying rationale of the method. For example, the IHAFF procedure is designed to normalize loudness perception for amplified speech; NAL-NL1, on the other hand, is intended to provide a frequency-gain response that maximizes speech audibility while restricting overall loudness so that it is no greater than that perceived by a normal-hearing person presented with the same sound (Dillon, 1999). A second consideration is the patient population. For fitting children, a formula that includes corrections for age, such as DSL [i/o] or NAL-NL1, is most appropriate. Finally, most audiologists will also consider efficient use of clinical time (Dillon and So, 2000). Fitting formulas that are incorporated within hearing aid programming software and/or probe microphone measurement systems are likely to be used more often than those that require transfer of data from stand-alone computer programs.
Electroacoustic Measurements of Compression Aids
When making electroacoustic measurements in the coupler, clinicians can choose between a composite-noise signal shaped to represent the average speech spectrum and a swept pure-tone signal. For compression hearing aids, a composite signal should be used. This broad-band signal, with energy spread across the frequency range, most closely mimics the reaction of a compression hearing aid to speech. With a pure-tone sweep, energy is concentrated within a single frequency component. Because compression is activated at different levels as a function of frequency, results of a pure-tone sweep can appear as a broadened frequency response compared to results obtained with a composite signal (Preves et al., 1989; Stelmachowicz et al., 1990).
Most hearing aid test systems allow measurement of input-output functions at different frequencies. These measurements can be used to characterize the static performance of multichannel compression aids or to compare measured responses to target input-output responses.
When interpreting results of any electroacoustic tests completed in the coupler, it is important to remember that the hearing aid will behave differently in the patient's ear. For example, the presence of a vent, especially an IROS vent, reduces the effective compression ratio relative to that measured in the coupler (Fortune, 1997). When the hearing aid is worn, the type of microphone (directional versus omnidirectional) does not appear to interact with compression processing (Ricketts et al., 2001).
Probe Microphone Measurements of Compression Aids
The American Speech-Language-Hearing Association recommends the use of probe microphone measures as the primary method of verifying hearing aid performance (ASHA, 1998). The standard protocol for linear aids includes measurement of real-ear insertion gain (or real-ear aided response) at an input level equivalent to average speech (typically 70 dB SPL) and measurement of the real-ear saturation response at a 90 dB SPL input level. Measured and target values are then compared, and any necessary adjustments are made. WDRC hearing aids, which vary gain based on input level, require verification of REIG (or REAG) at multiple input levels. This is most easily accomplished using a probe microphone system that allows viewing of multiple input levels on the same screen, as shown in Figure 15. Alternatively, some fitting software allows the tester to enter measured gain and/or output values at each frequency and view them in graphic format compared to target values. With nonlinear amplification, it is important to use a broad-band signal (such as composite noise) rather than a pure-tone sweep, as the aid will respond differently to a broad-band signal (Dreschler, 1992) and this more closely mimics its performance for speech. The newest probe microphone systems also allow use of a time-varying speech stimulus as the input signal.
Effectiveness of Compression in Everyday Environments
Research studies have shown measurable differences between compression and linear amplification in audibility, intelligibility, and sound quality of speech. Such studies are often designed to measure differences between compression and linear amplification in an experimental setting. In the clinic, we are interested in the benefit the average patient receives under ordinary conditions. Simply put, do patients notice differences in communication, satisfaction, and benefit, when wearing compression aids in their everyday environments?
In recent years, a number of self-assessment inventories have been developed which can be used to measure treatment effectiveness. These include questionnaires focused on communication ability, such as the Abbreviated Profile of Hearing Aid Benefit (APHAB) (Cox and Alexander, 1995), on hearing aid satisfaction, such as the Satisfaction with Amplification in Daily Life (SADL) scale (Cox and Alexander, 1999) or on quality of life, such as the Hearing Handicap Inventory for the Elderly (HHIE) (Newman and Weinstein, 1988). Humes (1999) recommended that outcome measurements should include a measure of subjective benefit, satisfaction, or use, in addition to measures of objective speech intelligibility and subjective sound quality.
Most comparisons of subjective outcome measures have found no significant differences between compression and linear aids (eg, Humes et al., 1999; Souza et al., 2002). An example from a recent study (Souza et al., 2002) is shown in Figure 16. Ratings were taken from a group of 75 adult hearing-impaired patients who were fit binaurally with compression limiting or WDRC hearing aids for 3 months. At the end of the 3-month period, all patients completed subjective ratings of their aid's performance using the APHAB questionnaire. Results of ratings for the WDRC aid showed some expected trends of a variable-gain processor: most notably, better ratings of aversive sounds (AV) and improved communication in quiet (EC). However, there were no significant differences between patient ratings of communication with a compression limiting versus a wide-dynamic range compression aid.
Cox et al., (1991) pointed out that subjective measures were less sensitive to differences between amplification conditions than objective measures. This may be because use of hearing aids in the everyday environment depends on many uncontrolled factors, including the speaker's voice, distance from the speaker, the amount of background noise or reverberation, and adherence (amount of time the aid is used), all of which have unknown and overlapping effects on the patient's ratings of the aid. Another issue specific to compression hearing aids is that most self-assessment questionnaires have been designed to assess global benefit rather than to distinguish differences among hearing aids.
Some clinicians and researchers raise the question of whether patients need time to become accustomed to use of compression amplification, particularly if they are previous users of linear amplification. Acclimatization refers to an improvement in speech intelligibility over time as the listener learns to more effectively use available cues in the amplified speech. For linear aids, acclimatization effects are small, typically a few percentage points at the most (see Bentler et al., 1999 and Turner et al., 1996 for reviews). Fewer studies have addressed acclimatization in patients accustomed to linear amplification and newly fit with wide-dynamic range compression hearing aids, but tests of single-channel or two-channel compression have generally found no acclimatization effect (eg, Keidser and Grant, 2001a; Saunders and Cienkowski, 1997; Surr et al., 1998).
Conversely, Yund and Buckles (1995c) found improved performance over time for nonsense syllables processed with 8, 12, or 16 channels, even though the subjects received exposure to the multichannel processed speech only in a laboratory environment. It is possible that more complex processing schemes that significantly alter speech cues require more experience before optimal performance is achieved. Kuk and his colleagues (Kuk, 2001; Kuk et al., in press) provide some support for this idea. Subjects with severe-to-profound loss and previous experience with linear amplification were fit binaurally with a three-channel low-compression threshold hearing aid. At the initial evaluation, few subjects performed better, and some performed worse with the multichannel compression hearing aid than with their previous aids for low-level (50 dB SPL) speech. At 3 months, most of the subjects performed better, and none performed worse with the new aid compared to their previous aids. The pattern was similar for high-level speech. Thus, the data suggest that subjects accustomed to linear hearing aids and newly fit with complex processing schemes, or with more than two compression channels, may require additional time and/or counseling by the clinician to achieve maximum hearing aid benefit.
Use of Compression in Children
Little data is available regarding the use of compression amplification in children. This is one of the key research needs identified by the Pediatric Working Group (1996). Therefore, ideas about potential benefits for children are by necessity derived from research on adults, in conjunction with the theoretical and practical issues unique to pediatric amplification. Prescribed amplification characteristics for young children may be based on incomplete information; for example, threshold data may be available at only a few frequencies. Children may not be able to provide loudness judgment data. These factors limit the use of fitting formulas that rely on these measures.
Setting conservative output limits on hearing aids is important for children who may not be able to provide feedback regarding the appropriateness of the set maximum output. In infants and young children, closer proximity to the speaker can substantially increase input levels (Stelmachowicz et al., 1993). The combination of higher input levels and a lower maximum output increases the potential for saturation distortion if peak clipping is used. Acoustic distortion can be minimized by use of compression limiting rather than peak clipping (Clark, 1996). Use of compression limiting over peak clipping in children also improves speech recognition at high presentation levels (Christensen and Thomas, 1997). Interestingly, unlike adults, young children do not show a clear preference for compression limiting over peak clipping (Stelmachowicz et al., 1999).
Children may be exposed to a wider range of input levels due to variations in their distance and position relative to the speaker (Stelmachowicz et al., 1993). Unlike adults, young children cannot make manual volume adjustments to compensate for situational changes in input levels. The variable-gain strategy used in WDRC hearing aids can improve speech audibility over a wider range of input levels than linear amplification. For example, where an adult conversation partner would likely position herself at a consistent distance from the speaker, a child may move about the room, including having her back turned to the speaker. Low-threshold compression should, at least in theory, provide an advantage in this situation by automatically compensating for changes in input level and maintaining speech output levels within the listener's audible range (Kuk, 1998; Stelmachowicz, 1996).
WDRC amplification has some potential drawbacks. As a practical concern, Stelmachowicz (1996) cautioned that the increased gain for low-level inputs with WDRC amplification may increase the risk of feedback. The potential for feedback is already greater in children, who require frequent earmold remakes due to growth. Stelmachowicz (1996) also pointed out that WDRC is usually implemented in an input compression system. With input compression, changes in the volume control setting will also change the maximum output level, a potential problem if a young child adjusts the volume control accidentally.
An additional concern is the potential for WDRC amplification, particularly when implemented with short time constants, to alter the temporal and/or spectral characteristics of the speech signal. Young children learning to identify speech sounds require a consistent input signal, and the cues they use are different from those used by adults. In contrast to adults with normal hearing or a mild hearing loss, who rely heavily on spectral cues, children may have more difficulty discriminating spectral details (eg, Eisenberg et al., 2000). It is possible that children rely to a greater extent on temporal variations, which are the cues most susceptible to being altered by WDRC. Therefore, altering these cues may negatively impact speech identification in young children (Kuk, 1998).
Compared with the adult literature, few data are available to address these issues. Christensen and Thomas (1997) found no difference in speech intelligibility between a compression limiting and a WDRC hearing aid for children aged 9–14 years. Bamford et al., (2000) found that children aged 6–15 performed better on a speech-in-noise test with a two-channel compression aid (with compression in the low-frequency channel and linear amplification in the high-frequency channel) than with their own aids. Jenstad, Seewald et al., (1999) found that adolescents performed better with single-channel WDRC than with linear amplification for soft speech.
Clearly, it is difficult to draw definitive conclusions from this small number of studies, which differed in subject population, amplification system, and research methodology. However, the limited data available suggest that school-age children perform as well as or better with WDRC amplification compared to linear amplification. At the present time, no data is available regarding use of WDRC amplification in infants.
It is important to consider how hearing aid outcome will be assessed. In adults, potential outcome measures include probe microphone measures, functional gain, aided speech recognition, and subjective measures of benefit and satisfaction. Fewer options are available to assess hearing aid outcome in children. Young children cannot provide reliable measures of aided speech recognition or respond to self-assessment instruments (Stelmachowicz, 1999). Although some assessment questionnaires are designed for parents or teachers (eg, Smaldino and Anderson, 1997), second-party observations of a child's communication ability are unlikely to be sensitive enough to discriminate between different processing strategies. Limited information can be obtained using probe microphone measures. For young children who cannot cooperate long enough to complete a full series of probe microphone tests, the RECD can be used in conjunction with coupler measurements. In this procedure, a probe microphone system is used to measure the frequency response of a signal delivered to the child's ear canal through a hearing aid or an insert earphone. The same signal and transducer are then used to make the same measurement in a 2cc coupler. The difference between the ear and coupler measurements is calculated and used as a correction factor for more extensive coupler measurements of hearing aid gain and output (Seewald, 1997). Although this technique can quantify gain (and, by extension, audibility) for different input levels, it cannot provide information about speech intelligibility or quality.
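A schematic example of the RECD arithmetic is shown below; the frequencies and levels are hypothetical and chosen only to illustrate the correction.

```python
# Hypothetical levels in dB SPL, measured with the same signal and transducer
frequencies_hz  = [500, 1000, 2000, 4000]
real_ear_levels = [72, 78, 85, 80]     # measured in the child's ear canal
coupler_levels  = [68, 72, 76, 68]     # measured in the 2cc coupler

# RECD: ear-canal level minus coupler level at each frequency
recd = [ear - cpl for ear, cpl in zip(real_ear_levels, coupler_levels)]
print(recd)                            # [4, 6, 9, 12]

# Later coupler measurements of the hearing aid can be corrected to estimated
# real-ear levels by adding the RECD back in
aided_coupler_levels = [95, 100, 102, 98]
estimated_real_ear = [c + r for c, r in zip(aided_coupler_levels, recd)]
print(estimated_real_ear)              # [99, 106, 111, 110]
```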
Another useful tool is the Situational Hearing Aid Response Profile (SHARP) program developed by Stelmachowicz et al., (1994). This program, now available in Microsoft Windows format, provides a graphic representation of speech audibility in different listening environments, which takes into account the child's hearing thresholds and the processing characteristics of the hearing aid, including frequency-gain response, maximum output, and compression ratio. Clinicians can use the SHARP program to evaluate the effect of different processing characteristics, such as the decision to use linear or WDRC amplification. It also serves as a useful counseling tool for parents or caregivers.
An example of the information provided by SHARP is shown in Figure 17. It includes the listener's audiogram, the range of speech levels, and the portion of the speech spectrum that is above threshold. SHARP also calculates the expected Audibility Index (AI). This is an index of audibility, ranging from 0.0 (inaudible) to 1.0 (fully audible) and weighted according to frequency; the frequency bands most critical to speech recognition receive greater weights. The left panel shows the expected unaided response, which in this case is virtually inaudible, with an AI of only 0.02. Audibility is improved, but not complete, for the aided response shown in the right panel (AI = 0.53).
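The weighted-audibility idea behind the AI can be sketched as follows; the band-importance weights and audibility values here are invented for illustration and are not the weights used by SHARP.

```python
# Hypothetical band-importance weights (summing to 1.0) and the proportion of
# the speech range that is audible in each band (0.0 = inaudible, 1.0 = fully audible)
importance = {500: 0.15, 1000: 0.25, 2000: 0.35, 4000: 0.25}
audibility = {500: 0.80, 1000: 0.60, 2000: 0.40, 4000: 0.20}

# AI: audibility in each band weighted by that band's importance, then summed
ai = sum(importance[f] * audibility[f] for f in importance)
print(round(ai, 2))   # 0.46
```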
Conclusions and Areas for Future Research
Compression amplification is a complex processing scheme that can be applied in a number of ways, each with inherent advantages and limitations. The core feature of compression is automatic adjustment of hearing aid gain in response to changes in input levels. If carefully implemented, this strategy can maintain speech audibility over a wide range of input levels, resulting in improved speech intelligibility, quality, and loudness comfort. However, there is also the potential for reduced speech intelligibility or quality if a large number of compression channels are used in conjunction with high compression ratios. The consequences of these effects may be greatest for listeners with severe-to-profound hearing loss.
Many studies of the effects of compression have been completed; however, important questions remain. Pressing needs for future investigation include:
The effects of compression amplification on the acoustics of speech;
The development of candidacy guidelines for wide-dynamic range compression;
The validation of fitting procedures and prescription of compression characteristics, including dynamic properties such as attack and release times; and
The development of subjective outcome measures sensitive to differences in hearing aid technology.
Acknowledgments
The author thanks Lorienne Jenstad and Kumiko Boike for their help in preparing this article, an anonymous reviewer for helpful comments, and especially Francis Kuk for his many thoughtful suggestions. The author's recent work on the acoustic effects of compression is supported by the University of Washington Royalty Research Fund.
References
- Allen JB, Hall JL, Jeng PS. Loudness growth in 1/2-octave bands (LGOB): A procedure for the assessment of loudness. J Acoust Soc Am 88: 745–753, 1990 [DOI] [PubMed] [Google Scholar]
- American National Standards Institute. Specification of hearing aid characteristics. (ANSI S3.22-1996). New York: ANSI, 1996 [Google Scholar]
- ASHA Ad Hoc Committee on Hearing Aid Selection and Fitting. Guidelines for hearing aid fitting for adults. Am J Audiol 7: 5–13, 1998 [Google Scholar]
- Bamford J, McCracken W, Peers I, Grayson P. Trial of a two-channel hearing aid (low-frequency compression-high-frequency linear amplification) with school age children. Ear Hear 20: 290–298, 2000 [DOI] [PubMed] [Google Scholar]
- Barker C, Dillon H. Client preferences for compression threshold in single-channel wide dynamic range compression hearing aids. Ear Hear 20: 127–139, 1999 [DOI] [PubMed] [Google Scholar]
- Barker C, Dillon H, Newall P. Fitting low ratio compression to people with severe and profound hearing losses. Ear Hear 22: 130–141, 2001 [DOI] [PubMed] [Google Scholar]
- Benson D, Clark TM, Johnson JS. Patient experiences with full dynamic range compression. Ear Hear 13: 320–330, 1992 [PubMed] [Google Scholar]
- Bentler RA, Duve MR. Comparison of hearing aids over the 20th century. Ear Hear 21: 625–639, 2000 [DOI] [PubMed] [Google Scholar]
- Bentler R, Holte L, Turner C. An update on the acclimatization issues. Hear J 52: 44–48, 1999 [Google Scholar]
- Bentler RA, Nelson JA. Assessing release-time options in a two-channel AGC hearing aid. Am J Audiol 6: 43–51, 1997 [Google Scholar]
- Berger K, Hagberg E, Rane R. Prescription of hearing aids: Rationale, procedures and results (5th ed.). Kent, OH: Herald Publishing House, 1989
- Boike KT, Souza PE. Effect of compression ratio on speech recognition and speech-quality ratings with wide dynamic range compression amplification. J Speech Lang Hear Res 43: 456–468, 2000a [DOI] [PubMed] [Google Scholar]
- Boike KT, Souza PE. Effect of compression ratio on speech recognition in temporally complex background noise. Presented at the International Hearing Aid Conference, Lake Tahoe, CA, 2000b
- Boothroyd A, Springer N, Smith L, Schulman J. Amplitude compression and profound hearing loss. J Speech Hear Res 31: 362–376, 1988 [DOI] [PubMed] [Google Scholar]
- Braida LD, Durlach NI, DeGennaro SV, Peterson PM, Bustamente DK. Review of recent research on multi-band amplitude compression for the hearing impaired. In Studebaker GA, Bess FH. eds: The Vanderbilt Hearing Aid Report: State of the Art Research Needs. Monographs in Contemporary Audiology, 1982
- Bustamente DK, Braida LD. Multiband compression limiting for severely impaired listeners. J Rehabil Res Dev 24: 149–160, 1987 [PubMed] [Google Scholar]
- Buyer's guide for programmable DSP hearing instruments. Hearing Review 8: 38–47, 2001 [Google Scholar]
- Byrne D. Selecting hearing aids for severely deaf children. Br J Audiol 12: 9–22, 1978 [DOI] [PubMed] [Google Scholar]
- Byrne D. Hearing aid selection for the 1990s: Where to? J Am Acad Audiol 7: 377–395, 1996 [PubMed] [Google Scholar]
- Byrne D, Dillon H. The National Acoustic Laboratories' new procedure for selecting the gain and frequency response of a hearing aid. Ear Hear 7: 257–265, 1986 [DOI] [PubMed] [Google Scholar]
- Byrne D, Walker G. The effects of multichannel compression and expansion amplification on perceived quality of speech. Aust J Audiol 4: 1–8, 1982 [Google Scholar]
- Byrne D, Dillon H, Ching T, Katsch R, Keidser G. The NAL-NL1 procedure for fitting non-linear hearing aids: Characteristics and comparisons with other procedures. J Am Acad Audiol 12: 37–51, 2001 [PubMed] [Google Scholar]
- Byrne D, Parkinson A, Newall P. Hearing aid gain and frequency response requirements for the severely/profoundly hearing impaired. Ear Hear 11: 40–49, 1990 [DOI] [PubMed] [Google Scholar]
- Christensen LA, Thomas TE. The use of multiple-memory hearing-aid technology in children. Presented at the Hearing Aid Research and Development Conference, Bethesda, MD, 1997
- Clark JG. Pediatric amplification: selection and verification. In Martin FN, Clark JG. Hearing care for children. Boston: Allyn and Bacon, pp. 213–232, 1996 [Google Scholar]
- Cornelisse LE, Seewald RC, Jamieson DG. Wide-dynamic range compression hearing aids: The DSL [i/o] approach. Hear J 47: 23–26, 1994 [Google Scholar]
- Cornelisse LE, Seewald RC, Jamieson DG. The input/output formula: A theoretical approach to the fitting of personal amplification devices. J Acoust Soc Am 97: 1854–1864, 1995 [DOI] [PubMed] [Google Scholar]
- Cox RM. Using loudness data for hearing aid selection: The IHAFF approach. Hear J 48(10):39–44, 1995 [Google Scholar]
- Cox RM. Five years later: An update on the IHAFF fitting protocol. Hear J 52: 10–18, 1999 [Google Scholar]
- Cox RM, Alexander GC. Measuring Satisfaction with Amplification in Daily Life: The SADL scale. Ear Hear 20: 306–320, 1999 [DOI] [PubMed] [Google Scholar]
- Cox RM, Alexander GC. The abbreviated profile of hearing aid benefit. Ear Hear 16: 176–186, 1995 [DOI] [PubMed] [Google Scholar]
- Cox RM, Alexander GC, Rivera IM. Comparison of objective and subjective measures of speech intelligibility in elderly hearing-impaired listeners. J Speech Hear Res 34: 904–915, 1991 [DOI] [PubMed] [Google Scholar]
- Cox RM, Alexander GC, Taylor IM, Gray GA. The Contour test of loudness perception. Ear Hear 18: 388–400, 1997 [DOI] [PubMed] [Google Scholar]
- Crain TR, Van Tasell DJ. Effect of peak clipping on speech recognition threshold. Ear Hear 15: 443–453, 1994 [DOI] [PubMed] [Google Scholar]
- Crain TR, Yund EW. The effect of multichannel compression on vowel and stop-consonant discrimination in normal-hearing and hearing-impaired subjects. Ear Hear 16: 529–543, 1995 [DOI] [PubMed] [Google Scholar]
- Dawson P, Dillon H, Battaglia J. Output limiting compression for the severely and profoundly deaf. Aust J Audiol 13: 1–12, 1991 [Google Scholar]
- DeGennaro S, Braida LD, Durlach NI. Multichannel syllabic compression for severely impaired listeners. J Rehabil Res Dev 23: 17–24, 1986 [PubMed] [Google Scholar]
- Dillon H. Compression? Yes, but for low or high frequencies, for low or high intensities, and with what response times? Ear Hear 17: 287–307, 1996 [DOI] [PubMed] [Google Scholar]
- Dillon H. NAL-NL1: A new procedure for fitting non-linear hearing aids. Hear J 52: 10–16, 1999 [Google Scholar]
- Dillon H. Compression systems in hearing aids. In: Dillon H. Hearing aids. New York: Thieme Medical Publishers, pp. 159–186, 2000 [Google Scholar]
- Dillon H, So M. Incentives and obstacles to the routine use of outcome measures by clinicians. Ear Hear 21: 2S–6S, 2000 [DOI] [PubMed] [Google Scholar]
- Dillon H, Storey L. The National Acoustic Laboratories' procedure for selecting the saturation sound pressure level of hearing aids: Theoretical derivation. Ear Hear 19: 255–266, 1998 [DOI] [PubMed] [Google Scholar]
- Dillon H, Storey L, Grant F, et al. Preferred compression threshold with 2:1 wide dynamic range compression in everyday environments. Aust J Audiol 20: 33–44, 1998 [Google Scholar]
- Doherty KA, Turner CW. Use of a correlational method to establish a listener's weighting function for speech. J Acoust Soc Am 100: 3769–3773, 1996 [DOI] [PubMed] [Google Scholar]
- Dreschler WA. Dynamic-range reduction by peak clipping or compression and its effects on phoneme perception in hearing-impaired listeners. Scand Audiol 17: 35–43, 1988a [DOI] [PubMed] [Google Scholar]
- Dreschler WA. The effect of specific compression settings on phoneme identification in hearing-impaired subjects. Scand Audiol 17: 35–43, 1988b [DOI] [PubMed] [Google Scholar]
- Dreschler WA. Phoneme perception via hearing aids with and without compression and the role of temporal resolution. Audiology 28: 49–60, 1989 [DOI] [PubMed] [Google Scholar]
- Dreschler WA. Fitting multichannel-compression hearing aids. Audiology 31: 121–131, 1992 [DOI] [PubMed] [Google Scholar]
- Dreschler WA, Eberhardt D, Melk PW. The use of single-channel compression for the improvement of speech intelligibility. Scand Audiol 13: 231–236, 1984 [DOI] [PubMed] [Google Scholar]
- Eisenberg LS, Shannon RV, Martinez AS, Wygonski J, Boothroyd A. Speech recognition with reduced spectral cues as a function of age. J Acoust Soc Am 107: 2704–2710, 2000 [DOI] [PubMed] [Google Scholar]
- Erber NP. Speech-envelope cues as an acoustic aid to lipreading for profoundly deaf children. J Acoust Soc Am 51: 1224–1227, 1972 [DOI] [PubMed] [Google Scholar]
- Faulkner A, Rosen S, Moore BC. Reduced frequency selectivity in the profoundly hearing-impaired listener. Br J Audiol 24: 381–392, 1990 [DOI] [PubMed] [Google Scholar]
- Festen JM, van Dijkhuizen JN, Plomp R. Considerations on adaptive gain and frequency response in hearing aids. Acta Otolaryngologica Suppl 469: 196–201, 1990 [PubMed] [Google Scholar]
- Fikret-Pasa S. The effect of compression ratio on speech intelligibility and quality. J Acoust Soc Am 95:2992, 1994 [Google Scholar]
- Fitzgibbons PJ, Gordon-Salant S. Auditory temporal processing in elderly listeners. J Am Acad Audiol 7: 183–189, 1996 [PubMed] [Google Scholar]
- Fortune T. Real ear compression ratios: The effects of venting and adaptive release time. Am J Audiol 6: 55–63, 1997 [Google Scholar]
- Fortune T. Aided growth of masking for speech and non-speech signals. Ear Hear 20: 214–227, 1999 [DOI] [PubMed] [Google Scholar]
- Fortune T, Scheller T. Duration, compression, and the aided loudness discomfort level. Ear Hear 21: 329–341, 2000 [DOI] [PubMed] [Google Scholar]
- Fortune TW, Woodruff BD, Preves DA. A new technique for quantifying temporal envelope contrasts. Ear Hear 15: 93–99, 1994 [DOI] [PubMed] [Google Scholar]
- Franck BA, van Kreveld-Bos CS, Dreschler WA, Verschuure H. Evaluation of spectral enhancement in hearing aids, combined with phonemic compression. J Acoust Soc Am 106: 1452–1464, 1999 [DOI] [PubMed] [Google Scholar]
- Gatehouse S, Elberling C, Naylor G. Aspects of auditory ecology and psychoacoustic function as determinants of benefits from and candidature for nonlinear processing in hearing aids. Paper presented at the International Hearing Aid Conference, Lake Tahoe, CA, 2000
- Glasberg BR, Moore BCJ. Effects of envelope fluctuations on gap detection. Hearing Res 64: 81–92, 1992 [DOI] [PubMed] [Google Scholar]
- Goedegebure A, Hulshof M, Maas RJ, Dreschler WA, Verschuure H. Effects of single-channel phonemic compression schemes on the understanding of speech by hearing-impaired listeners. Audiology 40: 10–25, 2001 [PubMed] [Google Scholar]
- Hamill TA, Barron TP. Frequency response differences of four gain-equalized hearing aid prescription formulae. Audiology 31: 87–94, 1992 [DOI] [PubMed] [Google Scholar]
- Hansen M. Effects of multi-channel compression time constants on subjectively perceived sound quality and speech intelligibility. Ear Hear 23: 369–380, 2002 [DOI] [PubMed] [Google Scholar]
- Hawkins DB, Naidoo SV. Comparison of sound quality and clarity with asymmetrical peak clipping and output limiting compression. J Am Acad Audiol 4: 221–228, 1993 [PubMed] [Google Scholar]
- Hedrick MS, Rice T. Effect of a single-channel wide dynamic range compression circuit on perception of stop consonant place of articulation. J Speech Hear Res 43: 1174–1184, 2000 [DOI] [PubMed] [Google Scholar]
- Hedrick M, Younger MS. Perceptual weighting of relative amplitude and formant transition cues in aided CV syllables. J Speech Hear Res 44: 964–974, 2001 [DOI] [PubMed] [Google Scholar]
- Hickson LMH. Compression amplification in hearing aids. Am J Audiol 3: 51–65, 1994 [DOI] [PubMed] [Google Scholar]
- Hickson LMH, Byrne D. Consonant perception in quiet: Effect of increasing the consonant-vowel ratio with compression amplification. J Am Acad Audiol 8: 322–332, 1997 [PubMed] [Google Scholar]
- Hickson L, Thyer N, Bates D. Acoustic analysis of speech through a hearing aid: Consonant-vowel ratio effects with two-channel compression amplification. J Am Acad Audiol 10: 549–556, 1999 [PubMed] [Google Scholar]
- Hohmann V, Kollmeier B. The effect of multichannel dynamic compression on speech intelligibility. J Acoust Soc Am 97: 1191–1195, 1995 [DOI] [PubMed] [Google Scholar]
- Hornsby BWY, Ricketts TA. The effects of compression ratio, signal-to-noise ratio, and level on speech recognition in normal-hearing listeners. J Acoust Soc Am 109: 2964–2973, 2001 [DOI] [PubMed] [Google Scholar]
- Howell P, Rosen S. Production and perception of rise time in the voiceless affricate/fricative distinction. J Acoust Soc Am 73: 976–984, 1983 [DOI] [PubMed] [Google Scholar]
- Humes LE, Christensen L, Thomas T, Bess FH, Hedley-Williams A, Bentler R. A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. J Speech Lang Hear Res 42: 65–79, 1999 [DOI] [PubMed] [Google Scholar]
- Humes L, Hackett T. Comparison of frequency response and aided speech-recognition performance for hearing aids selected by three different prescriptive methods. J Am Acad Audiol 1: 101–108, 1990 [PubMed] [Google Scholar]
- Humes LE. Dimensions of hearing aid outcome. J Am Acad Audiol 10: 26–39, 1999 [PubMed] [Google Scholar]
- Iskowitz M. Back to the future in fitting. Advance March 15: 7–9, 1999 [Google Scholar]
- Jenstad LM, Pumford J, Seewald RC, Cornelisse LE. Comparison of linear gain and wide dynamic range compression hearing aid circuits II: Aided loudness measures. Ear Hear 21: 32–44, 2000 [DOI] [PubMed] [Google Scholar]
- Jenstad LM, Seewald RC, Cornelisse LE, Shantz J. Comparison of linear gain and wide dynamic range compression hearing aid circuits: Aided speech perception measures. Ear Hear 20: 117–126, 1999 [DOI] [PubMed] [Google Scholar]
- Jenstad LM, Souza PE. Quantifying the effect of release time from compression on the temporal cues of speech. Presented at the International Hearing Aid Conference, Lake Tahoe, CA, 2002a
- Jenstad L, Souza P. Speech information transmitted by four amplification systems to listeners with severe hearing loss. Poster presented at the annual meeting of the Association for Research in Otolaryngology, St. Petersburg Beach, FL, 2002b
- Jenstad LM, Van Tasell D, Ewert C, Seewald R. My hearing aid sounds funny! Patients' descriptions of hearing-aid processed sound. Poster presented at the American Academy of Audiology convention, Miami Beach, FL, 1999
- Jerlvall LB, Lindblad AC. The influence of attack and release time on speech intelligibility: A study of the effects of AGC on normal hearing and hearing impaired subjects. Scand Audiol Suppl 6: 341–353, 1978 [PubMed] [Google Scholar]
- Kam AC, Wong LL. Comparison of performance with wide dynamic range compression and linear amplification. J Am Acad Audiol 10: 445–457, 1999 [PubMed] [Google Scholar]
- Keidser G, Grant F. Comparing loudness normalization (IHAFF) with speech intelligibility maximization (NAL-NL1) when implemented in a two-channel device. Ear Hear 22: 501–515, 2001a [DOI] [PubMed] [Google Scholar]
- Keidser G, Grant F. The preferred number of channels (one, two, or four) in NAL-NL1 prescribed wide dynamic range compression (WDRC) devices. Ear Hear 22: 516–527, 2001b [DOI] [PubMed] [Google Scholar]
- Keidser G, Dillon H, Brewer S. Using the NAL-NL1 prescriptive procedure with advanced hearing instruments. Hearing Review 6: 8–20, 1999 [Google Scholar]
- Kent RD, Read C. The acoustic analysis of speech. San Diego: Singular Publishing Group, Inc, 1992 [Google Scholar]
- Kewley-Port D. Time-varying features as correlates of place of articulation in stop consonants. J Acoust Soc Am 73: 322–335, 1983 [DOI] [PubMed] [Google Scholar]
- Kiessling J, Pfreimer C, Dyrlund O. Clinical evaluation of three different loudness scaling protocols. Scand Audiol 26: 117–121, 1997 [DOI] [PubMed] [Google Scholar]
- Kiessling J, Schubert M, Archut A. Adaptive fitting of hearing instruments by categorical loudness scaling (ScalAdapt). Scand Audiol 25: 153–160, 1996 [DOI] [PubMed] [Google Scholar]
- Killion MC, Fikret-Pasa S. The 3 types of sensorineural hearing loss: Loudness and intelligibility considerations. Hear J 46: 31–36, 1993 [Google Scholar]
- Knebel SB, Bentler RA. Comparison of two digital hearing aids. Ear Hear 19: 280–289, 1998 [DOI] [PubMed] [Google Scholar]
- Kochkin S. MarkeTrak V: Consumer satisfaction revisited. Hear J 53: 38–55, 2000 [Google Scholar]
- Kuk FK. Theoretical and practical considerations in compression hearing aids. Trends in Amplification 1: 5–39, 1996 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kuk FK. Hearing aid design considerations for optimally fitting the youngest patients. Hear J 52: 48–55, 1999 [Google Scholar]
- Kuk FK. Recent approaches to fitting nonlinear hearing aids. In Valente M, Hosford-Dunn H, Roeser RJ. eds: Audiology Treatment. New York: Thieme Medical Publishers, pp. 261–289, 2000 [Google Scholar]
- Kuk FK. Adaptation to enhanced dynamic range compression (EDRC): Examples from the Senso P38 digital hearing aid. Sem Hear 22: 161–171, 2001 [Google Scholar]
- Kuk FK. Considerations in modern multichannel nonlinear hearing aids. In Valente M. ed., Hearing Aids: Standards, Options and Limitations (2nd ed.). New York: Thieme Medical Publishers, pp. 178–213, 2002 [Google Scholar]
- Kuk FK, Ludvigsen C. Variables affecting the use of prescriptive formulae to fit modern nonlinear hearing aids. J Am Acad Audiol 10: 453–465, 1999 [PubMed] [Google Scholar]
- Kuk FK, Ludvigsen C. Hearing aid design and fitting solutions for persons with severe-to-profound loss. Hear J 53: 29–37, 2000 [Google Scholar]
- Kuk FK, Potts L, Valente M, Lee L, Picirrili J. Evidence of acclimatization in subjects with severe-to-profound hearing loss. J Am Acad Audiol (in press) [DOI] [PubMed]
- Lamore PJJ, Verweij C, Brocaar MP. Residual hearing capacity of severely hearing-impaired subjects. Acta Otolaryngol 469(Suppl):7–15, 1990 [DOI] [PubMed] [Google Scholar]
- Larson VD, Williams DW, Henderson WG, et al. Efficacy of three commonly used hearing aid circuits. JAMA 284: 1806–1813, 2000 [DOI] [PubMed] [Google Scholar]
- Laurence RF, Moore BCJ, Glasberg BR. A comparison of behind-the-ear high fidelity linear hearing aids and two-channel compression aids, in the laboratory and in everyday life. Br J Audiol 17: 31–48, 1983 [DOI] [PubMed] [Google Scholar]
- Lindley GA, Palmer CV. Fitting wide dynamic range compression hearing aids: DSL [i/o], the IHAFF protocol, and FIG6. Am J Audiol 6: 19–28, 1997 [Google Scholar]
- Mare MJ, Dreschler WA, Verschuure H. The effects of input-output configuration in syllabic compression on speech perception. J Speech Hear Res 35: 675–685, 1992 [DOI] [PubMed] [Google Scholar]
- McCandless GA, Lyregaard PE. Prescription of Gain/Output (POGO) for hearing aids. Hear Instrum 35: 16–21, 1983 [Google Scholar]
- Medwetsky L, Sanderson D, Young D. A national survey of audiology clinical practices, Part 1. Hear Rev 6: 24–32, 1999a [Google Scholar]
- Medwetsky L, Sanderson D, Young D. A national survey of audiology clinical practices, Part 2. Hear Rev 6: 14–22, 1999b [Google Scholar]
- Moore BCJ. Design and evaluation of a two-channel compression hearing aid. J Rehabil Res Dev 24: 181–192, 1987 [PubMed] [Google Scholar]
- Moore BCJ. How much do we gain by gain control in hearing aids? Acta Otolaryngol Suppl 469: 250–256, 1990 [PubMed] [Google Scholar]
- Moore BCJ. Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear Hear 17: 133–161, 1996 [DOI] [PubMed] [Google Scholar]
- Moore BCJ. Use of a loudness model for hearing aid fitting. IV: Fitting hearing aids with multichannel compression so as to restore 'normal' loudness for speech at different levels. Br J Audiol 34: 165–177, 2000 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Alcantara JI, Stone MA, Glasberg BR. Use of a loudness model for hearing aid fitting: II. Hearing aids with multi-channel compression. Br J Audiol 33: 157–170, 1999 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Glasberg BR. A comparison of two-channel and single-channel compression hearing aids. Audiology 25: 210–226, 1986 [PubMed] [Google Scholar]
- Moore BCJ, Glasberg BR, Alcantara JI, Launer S, Kuehnel V. Effects of slow and fast-acting compression on the detection of gaps in narrow bands of noise. Br J Audiol 35: 365–374, 2001 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Johnson JS, Clark TM, Pluvinage V. Evaluation of a dual-channel full dynamic range compression system for people with sensorineural hearing loss. Ear Hear 13: 349–369, 1992 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Laurence RF, Wright D. Improvements in speech intelligibility in quiet and in noise produced by two-channel compression hearing aids. Br J Audiol 19: 175–187, 1985 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Lynch C, Stone MA. Effects of the fitting parameters of a two-channel compression system on the intelligibility of speech in quiet and in noise. Br J Audiol 26: 369–379, 1992 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Peters RW, Stone MA. Benefits of linear amplification and multichannel compression for speech comprehension in backgrounds with spectral and temporal dips. J Acoust Soc Am 105: 400–411, 1999 [DOI] [PubMed] [Google Scholar]
- Moore BCJ, Vickers DA, Baer T, Launer S. Factors affecting the loudness of modulated sounds. J Acoust Soc Am 105: 2757–2772, 1999 [DOI] [PubMed] [Google Scholar]
- Mueller HG. Hearing aids and people: Strategies for a successful match. Hear J 49: 13–28, 1996 [Google Scholar]
- Mueller HG. 20 Questions: Prescriptive fitting methods: The next generation. Hear J 50: 10–19, 1997 [Google Scholar]
- Mueller HG. Experts debate key fitting issues: LDLs, kneepoints, and high frequencies. Hear J 52: 21–32, 1999 [Google Scholar]
- Nabelek IV. Performance of hearing-impaired listeners under various types of amplitude compression. J Acoust Soc Am 74: 776–791, 1983 [DOI] [PubMed] [Google Scholar]
- Neuman AC, Bakke MH, Hellman S, Levitt H. Effect of compression ratio in a slow-acting compression hearing aid: Paired comparison judgments of quality. J Acoust Soc Am 96: 1471–1478, 1994 [DOI] [PubMed] [Google Scholar]
- Neuman AC, Bakke MH, Mackersie C, Hellman S, Levitt H. The effect of compression ratio and release time on the categorical rating of sound quality. J Acoust Soc Am 103: 2273–2281, 1998 [DOI] [PubMed] [Google Scholar]
- Newman CW, Sandridge SA. Benefit from, satisfaction with, and cost-effectiveness of three different hearing aid technologies. Am J Audiol 7: 1–14, 1998 [DOI] [PubMed] [Google Scholar]
- Newman CW, Weinstein BE. The Hearing Handicap Inventory for the Elderly as a measure of hearing aid benefit. Ear Hear 9: 81–85, 1988 [DOI] [PubMed] [Google Scholar]
- Novick ML, Bentler RA, Dittberner A, Flamme GA. Effects of release time and directionality on unilateral and bilateral hearing aid fittings in complex sound fields. J Am Acad Audiol 10: 534–544, 2001 [PubMed] [Google Scholar]
- Ontario Rehabilitation Technology Consortium. The DSL Report. London, Ontario: National Centre for Audiology, 2000
- Pediatric Working Group. Amplification for infants and children with hearing loss. Am J Audiol 5: 53–68, 1996 [Google Scholar]
- Peterson ME, Feeney MP, Yantis PA. The effect of automatic gain control in hearing-impaired listeners with different dynamic ranges. Ear Hear 11: 185–194 1990 [DOI] [PubMed] [Google Scholar]
- Plomp R. The negative effect of amplitude compression in multichannel hearing aids in the light of the modulation-transfer function. J Acoust Soc Am 83: 2322–2327, 1988 [DOI] [PubMed] [Google Scholar]
- Preminger JE, Neuman AC, Cunningham DR. The selection and validation of output sound pressure level in multichannel hearing aids. Ear Hear 22: 487–500, 2001 [DOI] [PubMed] [Google Scholar]
- Preves DA, Beck LB, Burnett ED, Teder H. Input stimuli for obtaining frequency responses of automatic gain control hearing aids. J Speech Hear Res 32: 189–194, 1989 [DOI] [PubMed] [Google Scholar]
- Punch JL, Beck EL. Low frequency response of hearing aids and judgments of aided speech quality. J Speech Hear Disord 45: 325–335, 1980 [DOI] [PubMed] [Google Scholar]
- Ricketts TA. Fitting hearing aids to individual loudness perception measures. Ear Hear 17: 124–132, 1996 [DOI] [PubMed] [Google Scholar]
- Ricketts TA. Directional hearing aids. Trends in Amplification 5: 139–176, 2001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ricketts TA, Bentler RA. The effect of test signal type and bandwidth on the categorical scaling of loudness. J Acoust Soc Am 99: 2281–2287, 1996 [DOI] [PubMed] [Google Scholar]
- Ricketts TA, Lindley G, Henry P. Impact of compression and hearing aid style on directional hearing and benefit and performance. Ear Hear 22: 348–361, 2001 [DOI] [PubMed] [Google Scholar]
- Rosen S. Temporal information in speech: Acoustic, auditory and linguistic aspects. Philosophical Transactions of the Royal Society of London B 336: 367–373, 1992 [DOI] [PubMed] [Google Scholar]
- Rosen S, Faulkner A, Smith D. The psychoacoustics of profound hearing impairment. Acta Otolarygol Suppl 469: 16–22, 1990 [PubMed] [Google Scholar]
- Saunders GH, Cienkowski KM. Acclimatization in hearing aids. Ear Hear 18: 129–138, 1997 [DOI] [PubMed] [Google Scholar]
- Schuchman G, Franqui M, Beck LB. Comparison of performance with a conventional and a two-channel hearing aid. J Am Acad Audiol 7: 15–22, 1996 [PubMed] [Google Scholar]
- Schwartz D, Lyregaard PE, Lundh P. Hearing aid selection for severe-to-profound hearing loss. Hear J 41: 13–17, 1988 [Google Scholar]
- Seewald RC. Amplification: A child-centered approach. Hear J 50:61, 1997 [Google Scholar]
- Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M. Speech recognition with primarily temporal cues. Science 270: 303–304, 1995 [DOI] [PubMed] [Google Scholar]
- Smaldino J, Anderson K. Development of the Listening Inventory for Education. Poster presented at the Hearing Aid Research and Development Conference, Bethesda, MD, 1997
- Souza P, Jenstad L, Folino R. Use of new amplification strategies in listeners with severe loss. Presented at the American Speech-Language-Hearing Association convention, New Orleans, LA, 2001
- Souza PE. Older listeners' use of temporal cues altered by nonlinear amplification. J Speech Lang Hear Res 43: 661–674, 2000 [DOI] [PubMed] [Google Scholar]
- Souza PE, Bishop RD. Improving speech audibility with wide-dynamic range compression in listeners with severe sensorineural loss. Ear Hear 20: 461–470, 1999 [DOI] [PubMed] [Google Scholar]
- Souza PE, Bishop RD. Improving audibility with nonlinear amplification for listeners with high frequency loss. J Am Acad Audiol 11: 214–223, 2000 [PubMed] [Google Scholar]
- Souza PE, Kitch V. The contribution of amplitude envelope cues to sentence identification in young and aged listeners. Ear Hear 22: 112–119, 2001a [DOI] [PubMed] [Google Scholar]
- Souza PE, Kitch VJ. Effect of preferred volume setting on speech audibility for linear peak clipping, compression limiting, and wide dynamic range compression amplification. J Am Acad Audiol 12: 415–422, 2001b [PubMed] [Google Scholar]
- Souza PE, Turner CW. Effect of single-channel compression on temporal speech information. J Speech Hear Res 39: 901–911 1996 [DOI] [PubMed] [Google Scholar]
- Souza PE, Turner CW. Multichannel compression, temporal cues and audibility. J Speech Lang Hear Res 41: 315–326, 1998 [DOI] [PubMed] [Google Scholar]
- Souza PE, Turner CW. Quantifying the contribution of audibility to recognition of compression-amplified speech. Ear Hear 20: 12–20, 1999 [DOI] [PubMed] [Google Scholar]
- Souza PE, Yueh B, Collins M, et al. Sensitivity of self-assessment questionnaires to differences in hearing aid technology. Presented at the International Hearing Aid Conference, Lake Tahoe, CA, 2002
- Souza PE, Yueh B, Sarubbi M, Loovis C. Fitting hearing aids with the Articulation Index: Impact on hearing aid effectiveness. J Rehabil Res Dev 37: 473–481, 2000 [PubMed] [Google Scholar]
- Sreenivas C, Fourakis M, Davidson S. Effect of varying release times on speech perception using syllabic compression. Poster session presented at the annual meeting of the American Academy of Audiology, Fort Lauderdale, FL, 1997
- Stelmachowicz PG. Current issues in pediatric amplification. Hear J 49: 10–20, 1996 [Google Scholar]
- Stelmachowicz PG. Hearing aid outcome measures for children. J Am Acad Audiol 10: 14–25, 1999 [PubMed] [Google Scholar]
- Stelmachowicz PG, Dalzell S, Peterson D, Kopun J, Lewis DL, Hoover BE. A comparison of threshold-based fitting strategies for nonlinear hearing aids. Ear Hear 19: 131–138, 1998 [DOI] [PubMed] [Google Scholar]
- Stelmachowicz PG, Kopun J, Mace A, Lewis D. The perception of amplified speech by listeners with hearing loss: Acoustic correlates. J Acoust Soc Am 98: 1388–1399, 1995 [DOI] [PubMed] [Google Scholar]
- Stelmachowicz PG, Lewis DE, Hoover B, Keefe DH. Subjective effects of peak clipping and compression limiting in normal and hearing-impaired children and adults. J Acoust Soc Am 105: 412–422, 1999 [DOI] [PubMed] [Google Scholar]
- Stelmachowicz PG, Lewis D, Kalberer A, Creutz T. Situational Hearing Aid Response Profile Users Manual (SHARP, v. 2.0). Omaha, NE: Boys Town National Research Hospital, 1994
- Stelmachowicz PG, Lewis DE, Seewald RC, Hawkins DB. Complex and pure-tone signals in the evaluation of hearing-aid characteristics. J Speech Hear Res 33: 380–385, 1990 [DOI] [PubMed] [Google Scholar]
- Stelmachowicz PG, Mace AL, Kopun JG, Carney E. Long-term and short-term characteristics of speech: Implications for hearing aid selection for young children. J Speech Hear Res 36: 609–620, 1993 [DOI] [PubMed] [Google Scholar]
- Stone MA, Moore BCJ. Syllabic compression: Effective compression ratios for signals modulated at different rates. Br J Audiol 26: 351–356, 1992 [DOI] [PubMed] [Google Scholar]
- Stone MA, Moore BCJ, Alcantara JI, Glasberg BR. Comparison of different forms of compression using wearable digital hearing aids. J Acoust Soc Am 106: 3603–3619, 1999 [DOI] [PubMed] [Google Scholar]
- Stone MA, Moore BCJ, Wojtczak M, Gudgin E. Effects of fast-acting high-frequency compression on the intelligibility of speech in steady and fluctuating background sounds. Br J Audiol 31: 257–273, 1997 [DOI] [PubMed] [Google Scholar]
- Storey L, Dillon H, Yeend I, Wigney D. The National Acoustic Laboratories' procedure for selecting the saturation sound pressure level of hearing aids: Experimental validation. Ear Hear 19: 267–279, 1998 [DOI] [PubMed] [Google Scholar]
- Strom KE. DSP: Past, present, and future. Part 1: The evolution of advanced hearing solutions. Hear Rev 9: 12–52, 2002a [Google Scholar]
- Strom KE. The HR 2002 dispenser survey. Hear Rev 9: 14–32, 2002b [Google Scholar]
- Surr RK, Cord MT, Walden BE. Long-term versus short-term hearing aid benefit. J Am Acad Audiol 9: 165–171, 1998 [PubMed] [Google Scholar]
- Turner CW, Horwitz AR, Souza PE. Identification and discrimination of stop consonants: Formants versus spectral peaks. In Cazals Y, Demanyu L, Horner K. eds: Auditory Physiology and Perception. Oxford, UK: Pergammon Press, pp. 463–470, 1992
- Turner CW, Humes LE, Bentler RA, Cox RM. A review of past research on changes in hearing aid benefit over time. Ear Hear 17: 14S–25S, 1996 [DOI] [PubMed] [Google Scholar]
- Turner CW, Souza PE, Forget LN. Use of temporal envelope cues in speech recognition by normal and hearing-impaired listeners. J Acoust Soc Am 97: 2568–2576, 1995 [DOI] [PubMed] [Google Scholar]
- Tyler RS, Kuk FK. The effects of “noise suppression” hearing aids on consonant recognition in speech babble and low-frequency noise. Ear Hear 10: 243–249, 1989 [DOI] [PubMed] [Google Scholar]
- Valente M. Use of microphone technology to improve user performance in noise. Trends in Amplification 4: 112–135, 1999 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Valente M, Van Vliet D. The Independent Hearing Aid Fitting Forum (IHAFF) protocol. Trends in Amplification 2: 1–30, 1997 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Valente M, Fabry DA, Potts LG, Sandlin RE. Comparing the performance of the Widex Senso digital hearing aid with analog hearing aids. J Am Acad Audiol 9: 342–360, 1998 [PubMed] [Google Scholar]
- Valente M, Sammeth CA, Potts LG, Wynne MK, Wagner-Escobar M, Coughlin M. Differences in performance between Oticon MultiFocus Compact and Resound BT2-E hearing aids. J Am Acad Audiol 8: 280–293, 1997 [PubMed] [Google Scholar]
- Valente M, Sweetow R, Potts LG, Bingea B. Digital versus analog signal processing: Effect of directional microphone. J Am Acad Audiol 10: 133–150, 1999 [Google Scholar]
- Van Buuren RA, Festen JM, Houtgast T. Compression and expansion of the temporal envelope: Evaluation of speech intelligibility and sound quality. J Acoust Soc Am 105: 2903–2913, 1999 [DOI] [PubMed] [Google Scholar]
- van der Horst R, Leeuw AR, Dreschler WA. Importance of temporal-envelope cues in consonant recognition. J Acoust Soc Am 105: 1801–1809, 1999 [DOI] [PubMed] [Google Scholar]
- Van Harten-de Bruijn HE, van Kreveld-Bos CS, Dreschler WA, Verschuure H. Design of two syllabic nonlinear multichannel signal processors and the results of speech tests in noise. Ear Hear 18: 26–33, 1997 [DOI] [PubMed] [Google Scholar]
- Van Tasell DJ. Hearing loss, speech, and hearing aids. J Speech Hear Res 36: 228–244, 1993 [DOI] [PubMed] [Google Scholar]
- Van Tasell DJ, Greenfield DG, Logemann JJ, Nelson DA. Temporal cues for consonant recognition: Training, talker generalization, and use in evaluation of cochlear implants. J Acoust Soc Am 92: 1247–1257, 1992 [DOI] [PubMed] [Google Scholar]
- Van Tasell DJ, Soli SD, Kirby VM, Widin GP. Speech waveform envelope cues for consonant recognition. J Acoust Soc Am 82: 1152–1160, 1987 [DOI] [PubMed] [Google Scholar]
- Van Tasell DJ, Trine TD. Effects of single-band syllabic amplitude compression on temporal speech information in nonsense syllables and in sentences. J Speech Hear Res 39: 912–922, 1996 [DOI] [PubMed] [Google Scholar]
- Venema TH. The many faces of compression. In Sandlin RE. ed: Hearing Aid Amplification: Technical and Clinical Considerations. San Diego, CA: Singular Publishing Group, pp. 209–246, 2000 [Google Scholar]
- Verschuure J, Benning FJ, van Cappellen M, Dreschler WA, Boeremans PP. Speech intelligibility in noise with fast compression hearing aids. Audiology 37: 127–150, 1998 [DOI] [PubMed] [Google Scholar]
- Verschuure J, Maas AJJ, Stikvoort E, de Jong RM, Goedegebure A, Dreschler WA. Compression and its effect on the speech signal. Ear Hear 17: 162–175, 1996 [DOI] [PubMed] [Google Scholar]
- Walden BE, Surr RK, Cord MT, Edwards B, Olson L. Comparison of benefits provided by different hearing aid technologies. J Am Acad Audiol 11: 540–560, 2000 [PubMed] [Google Scholar]
- Walker G, Byrne D, Dillon H. The effects of multichannel compression/expansion on the intelligibility of nonsense syllables in noise. J Acoust Soc Am 76: 746–757, 1984 [DOI] [PubMed] [Google Scholar]
- Walker G, Dillon H. Compression in hearing aids: An analysis, a review and some recommendations. National Acoustic Laboratories Report No. 90. Canberra, Australia: Australian Government Publishing Service, 1982 [Google Scholar]
- Yueh B, Souza PE, McDowell JA, et al. Randomized trial of amplification strategies. Arch Otolaryngol Head Neck Surg 127: 1197–1204, 2001 [DOI] [PubMed] [Google Scholar]
- Yund EW, Buckles KM. Enhanced speech perception at low signal-to-noise ratios with multichannel compression hearing aids. J Acoust Soc Am 97: 1224–1240, 1995a [DOI] [PubMed] [Google Scholar]
- Yund EW, Buckles KM. Multichannel compression hearing aids: Effect of number of channels on speech discrimination in noise. J Acoust Soc Am 97: 1206–1223, 1995b [DOI] [PubMed] [Google Scholar]
- Yund EW, Buckles KM. Discrimination of multichannel compressed speech in noise: Long-term learning in hearing-impaired subjects. Ear Hear 16: 417–427, 1995c [DOI] [PubMed] [Google Scholar]