Trends in Amplification. 2006 Jun;10(2):67–82. doi: 10.1177/1084713806289514

Digital Noise Reduction: An Overview

Ruth Bentler, Li-Kuei Chiou

Abstract

Digital noise reduction schemes are used in most hearing aids currently marketed. Unlike the earlier analog schemes, these manufacturer-specific algorithms are designed to acoustically analyze the incoming signal and alter the gain/output characteristics according to predetermined rules. Although most are modulation-based schemes (ie, differentiating speech from noise based on temporal characteristics), spectral subtraction techniques are being applied as well. The purpose of this article is to provide an overview of these schemes in terms of their differences and similarities.

Keywords: digital noise reduction, amplification, algorithms, onset time


Simply stated, digital noise reduction (DNR) schemes are intended to reduce hearing aid output in the presence of noise. What is noise? Unwanted sound. That oversimplification does not take into account the reality that individual perception and preference determine which sounds are wanted and which are not. In addition, environments, moods, and circumstances can result in different judgments of unwanted sounds for a given person. Speech, music, and even environmental signals can serve as wanted or unwanted sounds, depending on all these factors. To this end, the hearing aid industry has been challenged to develop schemes and algorithms that provide hearing-impaired hearing aid users some relief from those unwanted sounds. In this article, an overview of attempts to date at DNR is provided, along with some data on the efficacy and effectiveness of several of the algorithms.

Historical Review

Noise reduction as a feature has been available in hearing aids since the 1970s. Early analog versions on behind-the-ear styles of hearing aids included a tone switch (eg, N-H) designed to engage a filter that reduced low-frequency amplification of background interference; in fact, that option was available several decades earlier on one body hearing aid! Other early processing schemes that were marketed as noise reduction included adaptive filtering (eg, Manhattan circuit, Argosy, St Paul, Minn), Adaptive Compression (Telex Communications, Minneapolis, Minn), and low-frequency compression (ASP, Siemens Hearing Instruments Inc, Piscataway, NJ), but these did not provide the anticipated improvement in speech-perception ability in background noise.1 In most of these early designs, the reduced gain/output occurred in the lower frequency region, presumably to keep higher energy, low-frequency sounds from either (1) triggering the compressor and thus reducing gain in the entire frequency range, or (2) increasing the likelihood of upward spread of masking. Although multitalker babble noise does have higher energy in the low frequencies, other environmental noise may not. A sample of the environmental sounds judged to be annoying or unpleasant in some manner is shown in Figure 1. These stimuli, along with 7 others, such as a motorcycle accelerating, a telephone ringing, and glass breaking, were analyzed spectrally and temporally to determine whether any relationships existed between physical characteristics (temporal characteristics, peak energy, bandwidth, frequency roll-off, etc) and ratings of loudness, annoyance, harshness, tinniness, and noisiness. Relating these negative sound quality attributes to physical characteristics of the signal, so that gain reduction can be applied appropriately, is important in algorithm development and critical to hearing aid success. Two factors were apparent in the report by Warner and Bentler2: (1) individual persons differ in perceptual ratings of these noise stimuli, and (2) negative sound-quality ratings are not always predictable from spectral and temporal characteristics, results that further emphasize the complexity of the situation.

Figure 1.

Examples of one-third octave band spectra for typical environmental sounds. From Warner RL, Bentler RA. Threshold of discomfort for complex stimuli: acoustic and sound quality predictors. J Speech Hear Res. 2002; 45: 1016–1026. Copyright 2002 by American Speech-Language-Hearing Association. Reprinted with permission.

The early noise reduction schemes were generally intended to filter out noise or reduce the gain. Doing so provided reduced loudness, even some “easier listening,”3 but the marketing claims of “better speech in noise” were not realized. Ultimately, such claims only served to draw the attention of the Food and Drug Administration.

Various investigators examined the impact of analog noise reduction schemes on sound quality and speech perception.1,3–8 In all studies, reducing gain for noise environments (in the laboratory) resulted in a concomitant reduction in speech-perception ability, an outcome that seems obvious now. Perhaps more disconcerting was the finding that the various analog schemes promoted as noise reduction also reduced sound quality judgments, compared to ratings obtained from a control group of hearing aid users with linear schemes.3 Figure 2 shows average ratings from subjects using these early versions of noise reduction across 9 bipolar pairs of sound quality dimensions.9 The subjects were asked to listen to running speech and rate the sound quality of their hearing aid for each bipolar pair (as shown in Figure 2) after 6 months and 1 year of hearing aid use. It is apparent that these analog noise-reduction schemes were prone to reducing the various dimensions of sound quality compared to the control hearing aids without noise reduction (referred to as No-NS). Kuk et al7 reported some reduction in hollowness perception with the same noise reduction schemes, however.

Figure 2.

Mean qualitative judgment ratings obtained at 6 months for each of the circuit types investigated by Bentler et al.1,3 No-NS = no noise suppression or the control group, Zeta = Zeta Noise Blocker (Intellitech Inc Corp, Northbrook, Ill), AF = adaptive filter, FDIC = frequency-dependent input compression, AC = Adaptive Compression (Telex Communications, Minneapolis, Minn). All positive attributes of the bipolar pairs are shown on the left. A rating of 5 indicated no judged polarity. From Bentler RA, Anderson CV, Niebuhr D, Getta J. A longitudinal study of noise reduction circuits, II: subjective measures. J Speech Hear Res. 1993; 36: 820–831. Copyright 1993 by American Speech-Language-Hearing Association. Reprinted with permission.

First-Generation Digital Algorithms

Analog attempts at noise reduction were restricted by other technological limitations of the time. Except for the Zeta Noise Blocker (Intellitech Inc Corp), the schemes tended to be implemented in a single channel, the gain reduction was restricted to the capabilities of the analog filters used, and gain reduction (often all or none) was typically based on input level only. With the advent of DNR began the evolution of increasingly complex algorithms that employ decision rules capable of defining what constitutes noise, how much gain reduction is appropriate, and in which frequency ranges the gain reduction should be implemented.

A variety of schemes has been considered to achieve the primary goal of improving the signal-to-noise ratio for the hearing-impaired hearing aid user. If environmental noise and the speech signal differed spectrally, the solution would be straightforward: reduce the gain in the frequency region of the interfering noise source. Because environmental noise is time variant and spectrally overlaps the intended speech signal, that solution is not a plausible one. Other adaptive filtering schemes have been considered. The Wiener filter, first described in the 1940s,10 produces an estimate of the original signal by minimizing the mean square error between the estimate and the original. It has also been used to restore images corrupted by noise and/or blurring (eg, motion blur, atmospheric turbulence, or out-of-focus blur). This filter, however, requires that the spectra of both the intended signal (speech) and the noise be stationary, a requirement rarely met in real-world situations. In their prototype device, Levitt et al11 showed evidence that a short-term Wiener filter (assuming both speech and noise are relatively stable over short periods of time) provided some benefit for persons with sensorineural hearing loss. Currently, at least one manufacturer implements a Wiener filter in its overall scheme.
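For readers less familiar with the formulation, the classical frequency-domain statement of the Wiener gain is sketched below; this is the textbook filter, not the specific implementation used by any hearing aid manufacturer:

$$
H(f) = \frac{\Phi_{ss}(f)}{\Phi_{ss}(f) + \Phi_{nn}(f)}
$$

where \(\Phi_{ss}(f)\) and \(\Phi_{nn}(f)\) are the power spectral densities of the (assumed stationary) speech and noise. The gain approaches 1 in regions where speech dominates and 0 where noise dominates, which is why the stationarity requirement noted above matters.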

Spectral subtraction is another digital scheme that has been proposed for hearing aid use. In this approach, the short-term noise spectrum can be obtained during pauses in the speech and can be subtracted from the speech-plus-noise spectrum when speech is again present. Earlier efforts using this approach revealed audible distortions (processing noise) that counteracted the potential benefits.12,13 Each of these schemes has been used successfully in larger sound systems.
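As a rough illustration of the spectral subtraction idea (not any manufacturer's algorithm), the sketch below estimates the noise magnitude spectrum from an assumed speech pause at the start of the recording and subtracts it from the short-term magnitude spectra, with a spectral floor to limit the audible processing noise noted above; the function name and parameter values are ours.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_sec=0.5, floor=0.05):
    # Short-term spectra of the noisy input
    f, t, X = stft(x, fs, nperseg=256)
    # Average noise magnitude, estimated from an assumed speech pause
    noise_mag = np.abs(X[:, t < noise_sec]).mean(axis=1, keepdims=True)
    # Subtract the noise estimate; the floor limits "musical noise" artifacts
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))
    # Resynthesize, reusing the noisy phase
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs, nperseg=256)
    return y
```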

By the mid-1990s, hearing aids with digital signal processing were marketed in the United States, many with some form of noise reduction. Although the marketing focus at the time was on digital control of gain, frequency response, and output, the noise reduction capability may have been the real advantage of that early era. All sounds entering the hearing aid could now be analyzed (online) and defined by their spectrum, as well as by their level and temporal characteristics. Because of the time-variant nature of the world in which we live and because speech has known temporal patterns, called modulations, noise reduction algorithms changed from analog filtering of targeted frequency regions to digital filtering based (primarily) on the temporal characteristics of the environmental signals.

Temporal characteristics of the speech signal have been studied for many years. An idealized temporal waveform of a single talker (Figure 3) shows an envelope of amplitude modulation that is easily discernable. Plomp14 determined that temporal modulations relevant to speech occur roughly from 0.1 to 40 times a second (Hz). Most speech modulations occur around 3 Hz, with the midfrequency range having the most fluctuation of amplitude (from peak to valley); clean speech modulations show a range of amplitude fluctuations of approximately 30 to 50 dB. That is, speech has a lower modulation rate (Hz) and a greater modulation depth (dB) than do most noiselike stimuli. With this acoustic basis, noise reduction algorithms were developed to distinguish speech from noise. In Figure 4A, a modulation spectrum of a sentence is shown. The primary frequency of modulations is around 4 Hz. In Figure 4B, the modulation spectrum for jet engine noise is shown. It is obvious in that figure that the primary frequencies of modulation are much higher, mostly greater than 30 Hz. An algorithm based on modulation spectra might alter gain in the manner shown in Figure 5; that is, for modulation frequencies less than 10 Hz, gain will not be reduced, whereas for modulation rates similar to those of the jet noise from Figure 4B, 6 to 8 dB of gain reduction will occur. It is important to remember that these decision rules can be applied to any or all channels of the digital hearing aid.
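A minimal sketch of how a modulation spectrum like those in Figure 4 can be computed is shown below, assuming an envelope extracted with a Hilbert transform; it illustrates the concept only and is not the analysis used in any particular hearing aid.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(x, fs):
    # Amplitude envelope of the signal
    envelope = np.abs(hilbert(x))
    # Remove the mean so the overall level does not dominate the spectrum
    envelope = envelope - envelope.mean()
    # Spectrum of the envelope = modulation spectrum
    spectrum = np.abs(np.fft.rfft(envelope))
    mod_freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return mod_freqs, spectrum

# Speech concentrates its envelope energy near a few hertz, whereas jet engine
# noise spreads that energy to much higher modulation rates (cf. Figure 4).
```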

Figure 3.

Example of a speech time waveform. The obvious fluctuations of amplitude (called modulations) can be analyzed by frequency (modulations per second) and depth or range (in dB). Courtesy of Starkey Laboratories Inc (Eden Prairie, Minn).

Figure 4.

Modulation spectra of a sentence (A) and jet engine noise (B). Adapted from Powers TA, Holube I, Wesselkamp M. The use of digital filters to combat background noise. In: Kochkin S, Strom KE, eds. High performance hearing solutions. Hear Rev. 1999; 3(suppl): 36–39. Copyright 1999 by The Hearing Journal and Lippincott Williams & Wilkins. Reprinted with permission.

Figure 5.

An example of how gain reduction may be based on modulation frequency of the environmental signal. Adapted from Powers TA, Holube I, Wesselkamp M. The use of digital filters to combat background noise. In: Kochkin S, Strom KE, eds. High performance hearing solutions. Hear Rev. 1999; 3(suppl): 36–39. Copyright 1999 by The Hearing Journal and Lippincott Williams & Wilkins. Reprinted with permission.

Another parameter of the modulations that is used in algorithm design is the modulation depth (dB). Figure 6 shows examples of the amplitude fluctuation differences that are typical in everyday stimuli. It is apparent that the range (in dB) is narrow for the jet noise (approximately 5 dB), whereas speech babble (approximately 15–20 dB) and a single talker (35–50 dB) have more amplitude fluctuation over time. If the decision rule is based on modulation range or depth (as shown in Figure 7), considerably more gain reduction would be applied to the jet noise than to speech or babble. A number of manufacturers established their gain reduction rules based on some combination of modulation frequency and modulation depth. Figure 8 shows an example of in-house measures of modulation characteristics from one manufacturer. From that graphic, it is apparent that clean speech has considerably more amplitude fluctuations than speech in a babble or speech-shaped noise. Background steady-state noise (even the circuit or microphone noise) is easily discernable from those signals carrying speech information. Again, it is important to point out that some of the early algorithms were based only on modulation depth differences, whereas others included modulation rate information, overall level of the environment, and so on.15 The decision rules of those algorithms further differed from manufacturer to manufacturer in terms of how much gain reduction would occur (and in which frequencies or channels), the speed with which that gain reduction occurred (time constants), and finally, the signal-to-noise ratio that would trigger the activation of that gain reduction.
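The sketch below shows one hypothetical way such a decision rule could be written for a single channel, using the modulation-depth ranges cited above as break points; the thresholds and gain reductions are invented for illustration and do not correspond to any manufacturer's proprietary rules.

```python
import numpy as np
from scipy.signal import hilbert

def gain_reduction_db(channel_signal, fs):
    # Envelope in dB; the 5th-95th percentile span approximates modulation depth
    env_db = 20 * np.log10(np.maximum(np.abs(hilbert(channel_signal)), 1e-6))
    depth = np.percentile(env_db, 95) - np.percentile(env_db, 5)
    if depth > 30:    # deep modulations: treat as clean speech, no reduction
        return 0.0
    if depth > 15:    # moderate modulations: speech in babble, mild reduction
        return 3.0
    return 8.0        # shallow modulations: steady noise (e.g., jet noise)
```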

Figure 6.

Time waveforms of jet noise, speech in quiet, and speech in babble noise. The different modulation depths (or ranges) across the stimuli are easily discernable. SPL = sound pressure level.

Figure 7.

An example of how gain reduction may be based on modulation depth. Adapted from Powers TA, Holube I, Wesselkamp M. The use of digital filters to combat background noise. In: Kochkin S, Strom KE, eds. High performance hearing solutions. Hear Rev. 1999; 3 (suppl): 36–39. Copyright 1999 by The Hearing Journal and Lippincott Williams & Wilkins. Reprinted with permission.

Figure 8.

Modulations (in dB) for different signals including clean speech and speech in different noise backgrounds. Adapted from Edwards BW, Hou Z, Struck CJ, Dharan P. Signal processing algorithms for a new, software-based, digital hearing device. Hear J. 1998; 51: 44–52. Copyright 1998 by The Hearing Journal and Lippincott Williams & Wilkins. Reprinted with permission.

How Much Gain Reduction?

One area of great variability among manufacturers is the amount of gain reduction provided by the scheme. Although the intention (or hope) for all manufacturers has been that gain for speech signals will not be altered, the decision rules for noiselike inputs show considerable variability. Several examples are shown in Figure 9. In each case, the hearing aid was programmed (manufacturer's default formula) for a 50-dB flat hearing loss. All other features such as directional microphones, feedback management, and even expansion were turned off. A library of 1-minute sound files was assembled for a variety of sounds, including speech, speech in noise, and various noise types, so that the same files could be used for different products and settings. After approximately 30 seconds of processing time, to allow the DNR effect to stabilize, the output of the hearing aid was measured and compared to the output obtained when the DNR feature was turned off. The graphics indicate the difference in gain between the DNR-off versus DNR-on conditions.

Figure 9.

Examples of how 2 different algorithms respond to 4 stimuli. Each hearing aid was set to the same 50-dB hearing loss, with all other features disengaged. Both recognized clear speech and International Collegium for Rehabilitative Audiology (ICRA) noise as speechlike, with no resulting gain reduction. Each responded differently to random and babble noises.

Although the actual decision rules are typically proprietary for most manufacturers, it is apparent in both of the modulation-based schemes shown in Figure 9 that these algorithms correctly identified speech, with no resultant gain reduction in any frequencies. Both also identified the International Collegium for Rehabilitative Audiology (ICRA) noise as speechlike and provided no gain reduction. The differences in implementation are obvious for random and babble noise sources. Although both recognized random (white) noise as less speechlike than babble, the amount of gain reduction is different across schemes. One could argue that more gain reduction for true noise inputs is better; that, however, assumes accurate and reliable classification schemes for each algorithm. A perceptual consequence might be too much reduction of audibility for the more severe hearing losses. Several manufacturers have decision rules implemented to limit the maximum noise reduction allowed across channels in the presence of noise in all channels.

How Fast Does the Algorithm Engage?

Several time constants have been defined for these algorithms. Chung17 identifies 4 different time constants for DNR:

  • the time between the noise reduction algorithm detecting noise in any channel and the time at which the gain begins to decrease;

  • the time between the beginning of gain reduction and maximum gain reduction;

  • the time between the noise reduction algorithm detecting the absence of noise in any channel and the time at which the gain begins to increase;

  • the time between the start of the gain recovery and 0-dB gain reduction.

The difficulty with defining (precisely) those time constants lies in the fact that each varies depending on the starting point of the incoming signal; that is, if silence is interrupted by loud noise, a different time constant is measured in many schemes than if speech is interrupted by loud noise. For some algorithms, this is related to the expansion in the system being on (or off). For others, the analysis time is different for different stimuli. For some, the analysis time is even different for different audiograms.

The primary analysis window, the first time constant, gathers data about the immediate environment (overall level, signal-to-noise ratio, etc) and is generally a few seconds long.15 In most systems, this primary analysis window moves forward in time, and the data are averaged over a period of 10 to 15 seconds. We have used the term onset time to represent the time an algorithm takes to complete the noise reduction process, because that is the delay the hearing aid user actually experiences during hearing aid use. That is, our onset time incorporates both the analysis window and the activation time of the particular manufacturer. In Figure 10, an 85-dB random noise signal was fed to each represented processor, and the time until the output came within 3 dB of its steady-state level was recorded and defined as onset time. In this set of examples, the high-level noise activated the system; that is, the input went from silence to 85-dB random noise. The measured onset times range from a couple of seconds (Sonic Innovations Inc, Salt Lake City, UT) to more than 30 seconds (Widex A/S, Værløse, Denmark). The effect would be different if there had been an original, low-level input rather than silence prior to the high-level noise input. One could argue that an onset of DNR that is too slow may not respond quickly enough to changes in the environment for a given person. One could also argue that sudden onsets (with offsets) could have negative perceptual consequences in the same environments. The impact of the different onset times has only recently been investigated19 and does not appear to influence performance or preference for the majority of hearing-impaired listeners. In that double-blind investigation, conditions of onset ranged from approximately 5 to 20 seconds, a range that may have been too narrow for perceptual consequences to be noted.
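A sketch of that onset-time measurement follows: the output is tracked in short analysis windows, and the onset time is taken as the first point after the noise begins at which the level comes within 3 dB of its eventual steady-state value. The window length and the region used to define steady state are our assumptions, not part of any standard.

```python
import numpy as np

def onset_time_s(output, fs, noise_start_s, win_s=0.125, steady_last_s=5.0):
    # Short-term level (dB) in consecutive windows
    win = int(win_s * fs)
    frames = output[:len(output) // win * win].reshape(-1, win)
    level_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    t = (np.arange(len(level_db)) + 0.5) * win_s
    # Steady-state level estimated from the last few seconds of the recording
    steady_db = level_db[t > t[-1] - steady_last_s].mean()
    # First window after noise onset within 3 dB of steady state
    settled = (t > noise_start_s) & (np.abs(level_db - steady_db) <= 3.0)
    return t[settled][0] - noise_start_s if settled.any() else None
```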

Figure 10.

Examples of different onset times across 4 manufacturers. The hearing aids were set to provide gain for a flat 50-dB hearing level (HL) hearing loss. An 85-dB random noise was used as the stimulus (starting from silence), and the resultant waveform was analyzed to determine the point in time at which the level was 3 dB from the steady-state level. That time was defined as the onset time.

Fewer data are available as to the offset, or recovery, time implications for the listener. As was the case with compression time constants, a long recovery (from gain reduction) may have negative perceptual consequences for many hearing aid users. Offset time is more difficult to quantify, but the dependence on activating factors (speech vs noise, level, etc) holds for this time constant as well.

What Is the Threshold (Signal-to-Noise Ratio) of Activation?

All manufacturers face the same dilemma of setting the threshold of activation for speech-in-noise environments too low (limiting audibility) or too high (allowing too much noise). One way to compare that threshold across manufacturers was to look at the effect of the DNR for speech-plus-noise inputs of different signal-to-noise ratios. As shown in the example in Figure 11, varying the signal-to-noise ratio of the input has the effect of reducing low-frequency gain, and that effect is not altered by level, a decision rule used by only a few manufacturers. Most manufacturers base the amount of gain reduction on input level as well. In Figure 12, the impact is somewhat different. Again, each of the hearing aids was set for a 50-dB flat loss with all other features disengaged. Although the Unitron Conversa (Unitron Industries Ltd, Kitchener, Ontario, Canada) shows a predictable increase in the magnitude of the gain reduction as the signal-to-noise ratio of the input worsens, that effect is also a low-frequency one. The Innova (Sonic Innovations Inc) algorithm shows less impact in the low-frequency region, with a slight increase in the gain for the middle and higher frequencies. The same increase in gain is not present for less-modulated stimuli. Which is the better approach: to boost the higher frequency gain or to reduce the lower frequency gain? Although the overall effect might be the same (especially when coupled to input compression), it is interesting to see how the signal-to-noise ratio impact varies across products.

Figure 11.

Examples of how one algorithm responds to 4 signal-to-noise ratios (speech in babble noise) presented at 2 levels (70- and 85-dB sound pressure level [SPL]). Each hearing aid was set to the same 50-dB hearing loss, with all other features disengaged. In this case, the Wiener filter could not be disengaged for any of the measures, so the outcome reflects the impact of the modulation-based noise reduction algorithm only.

Figure 12.

Examples of how 2 different algorithms respond to 4 signal-to-noise ratios (speech in babble noise). Each hearing aid was set to the same 50-dB hearing loss, with all other features disengaged.

For each manufacturer, determination of the “statistic” or ratio of speech to noise that activates the DNR is dependent on the accuracy of the classification scheme in the first place. The impact could be too much gain reduction (potentially limiting audibility of important information) or too little gain reduction (potentially causing dissatisfaction with the hearing aid). Several manufacturers have considered band importance from articulation index theory20 in their schemes. The impact is to have less gain reduction in the middle frequency range than at either end, to preserve any important speech cues that may be discernable. The relationship between audibility and speech perception in quiet is well documented; that relationship is less clear for speech in noise.21 Nonetheless, at least one study suggests that patients who achieve higher audibility report using their hearing aids more frequently.22

Additional Gain Reduction Processing

Not all schemes are intended to work in this modulation-based manner. One manufacturer (Oticon, Smørum, Denmark) first introduced an algorithm referred to as synchronous morphology. Rather than base the gain reduction on the modulation characteristics of the speech envelope, this algorithm was based on comodulation, whereby the absence of harmonic structure, rather than modulation depth and count, is the primary trigger for the gain reduction. An obvious effect of that scheme would be the classification of stimuli with harmonic structure as wanted rather than unwanted background noise. Figure 13 shows examples of this algorithm's impact on musical passages. The harmonic structure of most musical passages results in no noise reduction with that scheme. By comparison, the Starkey algorithm (Starkey Laboratories Inc, Eden Prairie, Minn) gives the intended outcome for a modulation-based algorithm. Again, neither is right or wrong; rather, they are different implementations of DNR, and the difference should be a consideration in the clinical management of any hearing loss.

Figure 13.

Examples of how 2 different algorithms respond to 5 different stimuli. Each hearing aid was set to the same 50-dB hearing loss, with all other features disengaged. Both recognized clear speech with no resulting gain reduction. The ADAPTO (Oticon, Smørum, Denmark) uses an algorithm called synchronous morphology, wherein stimuli with harmonic structure are classified as speech with no resultant gain reduction. The impact of the random noise stimulus is also significantly different for the 2 algorithms.

There are other uses of gain reduction that some manufacturers consider key to their DNR scheme's success. Although these might not be DNR per se, it is still important to consider their impact on the amplified signal.

Expansion. Expansion is often referred to as noise reduction for low-level inputs. (For many systems, expansion will also be speech reduction if the speech signal falls below the expansion kneepoint.) Whereas compression is designed to decrease gain as a function of increasing input, expansion is intended to decrease gain as a function of decreasing input when those inputs are below the kneepoint or threshold of the compressor. The intention has been to reduce the audibility of low-level environmental noises as well as internal noise generated by the hearing aid itself. There has been little research published on the effect of this signal-processing scheme, especially when implemented with DNR. Plyler et al23 evaluated expansion in a single-channel amplifier. They reported improved preference in quiet listening situations but degraded speech perception in both quiet and noise when input levels were at or below the activation threshold. The findings were not related to the configuration of the loss, as has been suggested by some manufacturers, but were related to the time constant of the expander. In a follow-up study, the impact of multiple-channel expansion was assessed.24 The 20 hearing-impaired subjects showed better speech perception ability in quiet and in noise with expansion off, although satisfaction and overall preference ratings were higher for expansion on in 4 channels and expansion limited to the lower 2 channels (ie, 2000 Hz and below).
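A minimal sketch of an expansion input/gain rule is given below, assuming a single channel, a fixed kneepoint, and a 1:2 expansion ratio (output falls 2 dB for each 1-dB drop in input below the kneepoint); the numbers are illustrative, not any manufacturer's defaults.

```python
def expansion_gain_db(input_level_db, kneepoint_db=45.0,
                      nominal_gain_db=20.0, expansion_ratio=2.0):
    # Above the kneepoint, expansion is not engaged
    if input_level_db >= kneepoint_db:
        return nominal_gain_db
    # Below the kneepoint, gain is reduced as the input level drops, so
    # low-level environmental and circuit noise receives less amplification
    shortfall_db = kneepoint_db - input_level_db
    return nominal_gain_db - shortfall_db * (expansion_ratio - 1.0)
```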

Wind noise reduction. With the reemergence of directional microphones has come the increased dilemma of wind noise. Whether using a dual-omni or a single-case directional scheme, the turbulence of the wind increases the SPL output of a hearing aid by as much as 20 to 25 dB, depending on the head's angle relative to the origin of the wind. Some manufacturers incorporate wind noise algorithms to deal with this resultant noise. In the Siemens Acuris (Siemens Hearing Instruments Inc), for example, the signals from each microphone are compared and correlated. If wind noise is detected (uncorrelated microphone signals), then (1) the directional microphone is switched to omni mode (fading with a 2-second time constant) and (2) the gain of the low-frequency channels (less than 1000 Hz) is reduced. The usefulness of this scheme is apparent for many hearing aid users.
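Our reading of that dual-microphone wind detection logic is sketched below (an illustration of the published description, not the manufacturer's actual code): wind turbulence is largely uncorrelated at the two closely spaced microphones, whereas acoustic sources arriving at both are highly correlated, so a low correlation coefficient can serve as the wind flag.

```python
import numpy as np

def wind_detected(front_mic, rear_mic, threshold=0.5):
    # Normalized correlation between the two microphone signals
    r = np.corrcoef(front_mic, rear_mic)[0, 1]
    # Low correlation suggests turbulence (wind) rather than an acoustic
    # source; the scheme described above would then fade to omni mode and
    # reduce low-frequency (<1000 Hz) gain.
    return r < threshold
```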

Verification of the Digital Noise Reduction Feature

With the introduction of digital hearing aids came some concern about the usefulness of coupler and/or probe microphone measures to accurately represent gain and output characteristics. The real issue at the time, though often not clearly understood, was that of taking measurements using stimuli that would activate the compression and/or noise reduction within the hearing aid. Because several of the popular measurement systems used a multitone complex or some speech-shaped noise signal, the gain/output measures were often not stable enough for interpretation. As clinicians began to use more interrupted or modulated speechlike stimuli, the previous concern about instability of probe microphone measures used for verification purposes became less warranted. Still, verification of how the noise reduction scheme actually works for a given patient is an important step in the hearing aid fitting process.

To ascertain the effect of the various DNR schemes in more typical environments, we looked at overall gain reduction when a speech signal precedes a random noise signal. Three hearing aids were set to the manufacturer's National Acoustic Laboratories–nonlinear version 1 (NAL-NL1) targets for a 50-dB flat hearing loss. Figure 14 shows the expected time waveforms that were captured for clean speech (85 dB), followed by a noise stimulus (85 dB). After the expected analysis time and activation time (referred to as onset time previously), an overall level reduction of 4.25 dB was recorded with the first aid (Natura, Sonic Innovations Inc). With the same methodology, an overall level reduction of 13.6 dB was recorded for the second hearing aid (Axent, Starkey Laboratories Inc). The third hearing aid (not identified here) showed no gain reduction for the 85-dB random noise; in fact, the overall level increased by almost 3 dB. Similarly unexpected findings have been reported by Dreschler et al,16 wherein gain for a clear speech signal actually decreased when the DNR was activated. Just as the electroacoustic performance of the hearing aids should be verified at various stages in the fitting process, so must the features themselves be verified. Most clinical environments rely on coupler or probe microphone measurements to determine such outcomes.

Figure 14.

These time waveforms show a clean speech signal (85 dB), followed by a random noise signal (85 dB), followed by the clean speech signal. The shaded area in the top panel indicates where the root mean square (rms) level was calculated for the speech; the shaded area in the bottom panel indicates where the rms was calculated for the noise after the noise reduction scheme had engaged. The difference between the 2 represents the level difference and is discussed in the text.
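The level-difference calculation illustrated in Figure 14 can be expressed as a simple rms comparison, sketched below; the segment boundaries are assumptions supplied by whoever makes the measurement, and the function names are ours.

```python
import numpy as np

def rms_db(x):
    x = np.asarray(x, dtype=float)
    return 10 * np.log10(np.mean(x ** 2) + 1e-12)

def dnr_level_difference_db(output, fs, speech_span_s, settled_noise_span_s):
    # rms level of the speech segment minus rms level of the noise segment,
    # the latter measured after the noise reduction has fully engaged
    # (the shaded regions in Figure 14)
    s0, s1 = (int(t * fs) for t in speech_span_s)
    n0, n1 = (int(t * fs) for t in settled_noise_span_s)
    return rms_db(output[s0:s1]) - rms_db(output[n0:n1])
```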

Several examples of clinical verification outcomes are shown in Figures 15 and 16. In Figure 15, the effect of DNR off is shown alongside the measured effect of minimum, medium, and maximum settings of DNR for a particular manufacturer using a steady-state random noise stimulus. It is clear that each increased setting provides a predictable change in measured gain across frequencies. In contrast, Figure 16 shows the effect of DNR off versus DNR on for another manufacturer, with the same level and type of signal. In this case, it is clear that the effect is limited to the low frequencies and that the magnitude of gain reduction is significantly greater.

Figure 15.

An example of the measured effect (via probe microphone measurements) on gain for minimum, medium, and maximum settings of digital noise reduction compared to the off position. The x–x represents the desired gain. Courtesy of Bill Cole (Audioscan, Kitchener, Ontario, Canada).

Figure 16.

An example of the measured effect (via probe microphone measurements) on gain for a different algorithm of digital noise reduction compared to the off position. The x–x represents the desired gain. Courtesy of Bill Cole (Audioscan, Kitchener, Ontario, Canada).

Next-Generation Algorithms

The complexity of current (second- and third-generation) digital hearing aid processing makes defining the function of any one feature difficult. Whereas earlier schemes were relatively straightforward to define, evaluate, and compare, current schemes are not. Many current processors use some form of environmental classification to determine the when, where, how much, and how fast of gain reduction in the presence of noise. Whether this series of rules is called Artificial Intelligence (Oticon), Auto-Pilot (Phonak AG, Stäfa, Switzerland), Environmental Classification (Siemens Hearing Instruments Inc), and so on, the goal is the same. By continually assessing the spectral, temporal, level, and even angular (directional) characteristics of the input in the listener's communication environment, appropriate "steering" of features such as directional microphones and noise reduction can be performed. In general, combining the microphone scheme and the DNR scheme is becoming more common. Algorithms incorporating adaptive beamforming25 and adaptive optimal filtering26 show promise, as more complexity becomes possible within the power consumption limits of current ear-worn devices. Future generations of noise reduction schemes are already on the horizon for many manufacturers.

Evidence of Effectiveness

Outcome data for digital attempts at environmental noise reduction are only starting to emerge. Early investigations tended to focus on other features, such as directional microphone effectiveness, with the noise reduction feature considered concurrently. As a result, the evidence that DNR improves speech perception, sound quality, or listening ease has been sparse.27 On the other hand, given the negative findings of the analog noise reduction era relative to decreased speech perception and sound quality, one could argue that even equivocal findings are positive findings.

Data are only starting to emerge relative to actual benefits of current DNR algorithms. Ricketts and Hornsby28 used a paired-comparison approach to determine preference for both directional and DNR features. Subjects also provided a strength-of-preference rating for their decision. Even though speech perception was not affected, their results indicated a significant and strong preference for the DNR in both low-level and high-level noise. Because the instructions were to choose the setting of preference, one could argue the subjects were responding to listening comfort rather than quality, as the authors suggest. The authors note that their data are in contrast to those of other investigators (eg, Alcantara et al29), who found no preference for the DNR feature, but in agreement with others (eg, Boymans and Dreschler30) using similar paired-comparison measures. These data provide further evidence that using DNR in any hearing aid does not imply similar subject (or patient) outcomes, especially in light of the differences in the way manufacturers implement the scheme.

Early analog noise reduction schemes showed some evidence of the listening ease that has been reported anecdotally in the present era.3 If specific subscales of self-report inventories such as the Abbreviated Profile of Hearing Aid Benefit (APHAB)31 can be construed as measures of listening ease, some evidence to support the feature is indicated in the digital era as well. Boymans and Dreschler30 found 3 items on the aversiveness subscale supporting the usefulness of DNR: speech recognition in car noise, sudden loud sounds, and traffic noises. Further (and confounding) evidence of DNR affecting ratings of annoyance and improving acceptable noise levels can be found in companion articles.32,33

Conclusion

Several things should be obvious in this overview: First, all systems are not created equal, and second, the outcomes reported herein should not be construed as better or worse than one another. The clinician cannot assume that DNR is a simple, uniform feature implemented the same way across the various manufacturers. The right algorithm for one patient may be the wrong algorithm for the next patient. In fact, the right algorithm in one environment may be the wrong algorithm in another environment for the same patient. The clinician should, however, understand how the particular manufacturers that they represent implement DNR in their various product models. That understanding comes from inquiry, experience, and a willingness to challenge marketing literature. The clinician should also verify (electroacoustically, in a coupler or in the ear) that the intended effect is actually realized for the patient at hand. Self-report measures also provide information across a variety of domains that can be useful in the management process. This feature is complex; that is why our patients seek our professional help.

Acknowledgment

We acknowledge the help of each of the manufacturers of these noise reduction schemes. Their input and support in making this information both technically accurate and easy to understand are greatly appreciated. In addition, the help of Yu-Hsiang Wu in the measurement and graphic development stages has been greatly appreciated.

Footnotes

Intellitech Inc Corp developed the Zeta Noise Blocker algorithm, unique for the era in that it used a digital chip incorporated into the hearing aid circuit (and marketed by Maico). Incoming signals were analyzed for rate of modulation, with attenuation applied by 4 analog filters in the presence of noise.

The International Collegium for Rehabilitative Audiology (ICRA) introduced these noise signals with long-term average energy levels and temporal modulations similar to those of real speech.16

This was derived from the American National Standard Specification of Hearing Aid Characteristics18 definition of attack time (TA) in a compression system.

References

1. Bentler RA, Anderson CV, Niebuhr D, Getta J. A longitudinal study of noise reduction circuits, I: objective measures. J Speech Hear Res. 1993;36:808–819.
2. Warner RL, Bentler RA. Threshold of discomfort for complex stimuli: acoustic and sound quality predictors. J Speech Hear Res. 2002;45:1016–1026.
3. Bentler RA, Anderson CV, Niebuhr D, Getta J. A longitudinal study of noise reduction circuits, II: subjective measures. J Speech Hear Res. 1993;36:820–831.
4. Bentler RA. "Satisfaction" with current noise-reduction circuits. Am J Audiol. 1993;2:51–53.
5. Dillon H, Lovegrove R. Single microphone noise reduction systems for hearing aids: a review and evaluation. In: Studebaker GA, Hochberg I, eds. Acoustical Factors Affecting Hearing Aid Performance. Boston, Mass: Allyn and Bacon; 1993:353–372.
6. Kuk F, Tyler R, Mims L. Subjective ratings of noise reduction hearing aids. Scand Audiol. 1990;19:237–244.
7. Kuk FK, Plager A, Pape NM. Hollowness perception with noise-reduction hearing aids. J Am Acad Audiol. 1992;3(1):39–45.
8. Van Tasell DJ, Larsen SY, Fabry DA. Effects of an adaptive filter hearing aid on speech recognition in noise by hearing impaired subjects. Ear Hear. 1988;36:808–819.
9. Gabrielsson A, Lindstrom B. Perceived sound quality of high fidelity loudspeakers. J Audio Eng Soc. 1985;33:33–53.
10. Wiener N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York, NY: J Wiley; 1949.
11. Levitt H, Bakke M, Kates J, Neuman A, Weiss M. Advanced signal processing hearing aids. In: Beilen J, Jensen GR, eds. Recent Developments in Hearing Instrument Technology: Proceedings of the 15th Danavox Symposium. Copenhagen, Denmark: Stougaard Jensen; 1993:247–254.
12. Weiss M, Aschkenasy E. Automatic Detection and Enhancement of Speech Signals. Griffiss Air Force Base, NY: Rome Air Development Center; 1975. Report No. RADC-TR-75-77.
13. Boll SF. Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans Acoust Speech. 1979;27:113–120.
14. Plomp R. Perception of speech as a modulated signal. In: Proceedings of the 10th International Congress of Phonetic Sciences, Utrecht, 1–6 August, 1983. Utrecht, the Netherlands: The Congress; 1983:29–40.
15. Mueller HG, Ricketts TA. Digital noise reduction: much ado about something? Hear J. 2005;58:10–17.
16. Dreschler WA, Verschuure H, Ludvigsen C, Westerman S. ICRA noises: artificial noise signals with speech-like spectral and temporal properties for hearing instrument assessment. Audiology. 2001;40:148–157.
17. Chung K. Challenges in recent developments in hearing aids, I: speech understanding in noise, microphone technologies and noise reduction algorithms. Trends Amplif. 2004;8:83–124.
18. American National Standards Institute (ANSI). American National Standard Specification of Hearing Aid Characteristics. New York, NY: ANSI; 1996.
19. Bentler RA. Noise reduction: how and how much? Presented at: American Auditory Society; March 5–7, 2006; Scottsdale, Ariz.
20. Studebaker GA, Marincovich PJ. Importance weighted audibility and the recognition of hearing aid-processed speech. Ear Hear. 1989;10:101–108.
21. Humes LE, Riker S. Evaluation of two clinical versions of the articulation index. Ear Hear. 1992;13:406–409.
22. Souza PE, Yueh B, Sarubbi MS, Loovis CF. Fitting hearing aids with the Articulation Index: impact on hearing aid effectiveness. J Rehabil Res Dev. 2000;37(4):1–13.
23. Plyler PN, Hill AB, Trine TD. The effects of expansion on the objective and subjective performance of hearing instrument users. J Am Acad Audiol. 2005;16:101–113.
24. Plyler PN, Hill AB, Trine TD. The effects of expansion time constants on the objective performance of hearing instrument users. J Am Acad Audiol. 2005;16:614–621.
25. Wouters J. Multimicrophone and adaptive strategies. Hear J. 2003;56(11):48–51.
26. Maj J-B, Moonen M, Wouters J. SVD-based optimal filtering technique for noise reduction in hearing aids using two microphones. J Appl Signal Proc. 2002;4:432–443.
27. Bentler RA. Effectiveness of directional microphones and noise reduction schemes in hearing aids: a systematic review of the evidence. J Am Acad Audiol. 2005;16:477–488.
28. Ricketts TA, Hornsby BWY. Sound quality measures for speech in noise through a commercial hearing aid implementing "digital noise reduction." J Am Acad Audiol. 2005;16:270–277.
29. Alcantara JI, Moore BC, Kuhnel V, Launer S. Evaluation of the noise reduction system in a commercial digital hearing aid. Int J Audiol. 2003;42:34–42.
30. Boymans M, Dreschler WA. Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology. 2000;39:260–268.
31. Cox RM, Alexander GC. The abbreviated profile of hearing aid benefit. Ear Hear. 1995;16:176–186.
32. Mueller HG, Weber J, Hornsby BWY. The effects of digital noise reduction on the acceptance of background noise. Trends Amplif. 2006;10:83–93.
33. Palmer CV, Bentler R, Mueller HG. Amplification with digital noise reduction and the perception of annoying and aversive sounds. Trends Amplif. 2006;10:95–104.
