Abstract
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized.
1. Introduction
About 10% of the world's population suffers from hearing loss. For these individuals, the most common amplification choice is hearing aids. The hearing aids of today are vastly different from their predecessors because of the application of digital signal processing technologies. With the advances in digital chip designs and the reduction in current consumption, many of the historically unachievable concepts have now been put into practice. This article reviews current hearing aid design and concepts that are specifically attempting to meet the variety of amplification needs of hearing aid users. Some of these technologies and algorithms have been introduced to the consumer market very recently. Validation data on the effectiveness of these technologies, if available, are also discussed.
2. Basics of Hearing Aid Signal Processing Technologies
Four basic concepts in hearing aids and digital signal processing underlie today's advanced signal processing technologies.
2.1. Differentiating Hearing Aids
The first and most basic concept is the differentiation among analog, analog programmable, and digital programmable hearing aids:
In conventional analog hearing aids, the acoustic signal is picked up by the microphone and converted to an electric signal. The level and the frequency response of the microphone output are then altered by a set of analog filters, and the signal is sent to the receiver. The signal in analog hearing aids remains continuous throughout the signal processing path.
In analog programmable hearing aids, the electric signal is normally split into two or more frequency channels. The level of the signal in each channel is amplified and processed under the control of a digital circuit, whose parameters are programmed via hearing aid fitting software. The signal, however, remains continuous throughout the signal processing path in analog programmable hearing aids.
In digital programmable hearing aids (or simply digital hearing aids), the output of the microphone is sampled, quantized, and converted into discrete numbers by an analog-to-digital converter. All the signal processing is then carried out in the digital domain by digital filters and algorithms. Upon completion of the digital signal processing, the digital signal is converted back to the analog domain by a digital-to-analog converter or a demodulator.
For a detailed explanation of the differences among these hearing aid types, please refer to Schweitzer (1997).
2.2. Channels and Bands
Another basic concept of hearing aids is the differentiation between channels and bands. In general, a channel refers to a signal processing unit for algorithms such as compression, noise reduction, and feedback reduction; the gain control and other functions within each channel operate independently of those in other channels. A band, on the other hand, refers to a frequency-shaping band that is mainly used to control the amount of time-invariant gain in a frequency region. A given channel may have several frequency bands, each of which is subject to the signal processing of the channel in which it resides. Some digital hearing aids have an equal number of channels and bands; for example, the Natura by Sonic Innovations has nine signal processing channels and nine frequency-shaping bands. Other digital hearing aids, however, may have different numbers of channels and bands; for example, the Adapto by Oticon has two signal processing channels and seven frequency-shaping bands.
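To make the distinction concrete, the sketch below models channels and bands as a minimal, hypothetical data structure; it does not represent any manufacturer's actual architecture. Dynamic behavior such as compression lives at the channel level, while each band carries only a fixed, fitted gain.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Band:
    """A frequency-shaping band: a fixed, fitted gain over a frequency range."""
    lo_hz: float
    hi_hz: float
    gain_db: float          # time-invariant gain set during fitting

@dataclass
class Channel:
    """A signal processing channel: dynamic algorithms (compression, noise
    reduction) operate at this level, on every band the channel contains."""
    bands: List[Band] = field(default_factory=list)
    compression_ratio: float = 1.0

# A hypothetical 2-channel, 7-band layout (all numbers are illustrative)
low_channel = Channel(
    bands=[Band(125, 250, 10), Band(250, 500, 12), Band(500, 1000, 8)],
    compression_ratio=2.0)
high_channel = Channel(
    bands=[Band(1000, 2000, 15), Band(2000, 3000, 18),
           Band(3000, 4000, 20), Band(4000, 8000, 16)],
    compression_ratio=3.0)
```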
In some cases, the distinction between channels and bands is blurred for lack of a better term to describe the complexity of the signal processing algorithms implemented in hearing aids. The Oticon Syncro, for example, has four channels of signal processing in its adaptive directional microphone algorithm and eight channels of signal processing for its compression and noise reduction algorithms. To distinguish the two, Oticon chooses to describe the multichannel adaptive directional microphone as a “multiband adaptive directional microphone” and reserves the term channel for its compression system and noise reduction algorithm.
With the advances in signal processing technologies, some hearing aids may have many channels and bands, which can become difficult to manage in the hearing aid fitting process. Some manufacturers have therefore grouped the channels and bands into a smaller number of fitting regions to simplify the fitting process. For example, the Canta7 by GNReSound has 64 frequency-shaping bands and 14 signal processing channels; these are grouped into controls at six frequency regions in the fitting software.
2.3. The Building Blocks of Advanced Signal Processing Algorithms
Recently, many adaptive or automatic features have been implemented in digital hearing aids and most of these features are accomplished by signal processing algorithms. These signal processing algorithms typically have three basic building blocks: a signal detection and analysis unit, a set of decision rules, and an action unit. The signal detection and analysis unit usually has either one or several detectors. These detectors observe the signal for a period of time in an analysis time window and then analyze for the presence or absence of certain pertinent characteristics or calculate a value of relevant characteristics. The output of the detection and analysis unit is subsequently compared with the set of predetermined decision rules. The action unit then executes the corresponding actions according to the decision rules.
An analogy for the building blocks of digital signal processing algorithms can be drawn from the operation of compression systems. The signal detection and analysis unit of the compression system is the level detector, which detects and estimates the level of the incoming signal. The set of decision rules in the compression system comprises the input-output function and the time constants: the input-output function specifies the amount of gain at different input levels, whereas the attack and release times determine how fast the change occurs. The action unit applies the gain change, which is reflected in the output of the compression system.
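The three building blocks can be sketched in code. The following is a schematic illustration only, with invented numbers (a 60 dB SPL compression threshold, a 2:1 ratio, and an assumed full-scale reference of 100 dB SPL), not any manufacturer's algorithm:

```python
import math

class LevelDetector:
    """Signal detection and analysis unit: estimates the input level over an
    analysis window, assuming 0 dB full scale corresponds to 100 dB SPL."""
    def analyze(self, window):
        rms = math.sqrt(sum(x * x for x in window) / len(window))
        return 100 + 20 * math.log10(max(rms, 1e-10))

def decision_rules(level_db):
    """Decision rules: a toy input-output function giving 20 dB of gain below
    a 60 dB SPL threshold and 2:1 compression above it."""
    return 20.0 if level_db <= 60 else 20.0 - (level_db - 60) / 2

def action_unit(window, gain_db):
    """Action unit: executes the decision by applying the chosen gain."""
    g = 10 ** (gain_db / 20)
    return [g * x for x in window]

detector = LevelDetector()
window = [0.1 * math.sin(2 * math.pi * 1000 * n / 16000) for n in range(160)]
level = detector.analyze(window)                      # about 77 dB SPL
output = action_unit(window, decision_rules(level))   # gain reduced by 2:1 rule
```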
2.4. Time Constants of the Advanced Signal Processing Algorithms
Another important concept for advanced signal processing algorithms is the time constants that govern the speed of action. The concept of time constants of an adaptive or automatic algorithm can be described using the example of the time constants of a compression system. In a compression system, the attack and release times are defined as the time taken for a predetermined gain change to occur after a specific change in input level. In other words, they tell us how quickly the gain of a compression system changes when the level of the input changes.
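As an illustration of how time constants govern the speed of gain changes, the sketch below smooths a target gain trajectory with separate attack and release constants. The 5-ms and 100-ms values are arbitrary examples, not values from any product:

```python
import math

def smooth_gain(target_gain_db, fs=16000, attack_ms=5.0, release_ms=100.0):
    """Smooth a per-sample target gain with separate time constants:
    a fast 'attack' constant when the gain must drop (the input got louder)
    and a slow 'release' constant when the gain is allowed to rise again."""
    a_attack = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_release = math.exp(-1.0 / (fs * release_ms / 1000.0))
    g = target_gain_db[0]
    smoothed = []
    for target in target_gain_db:
        a = a_attack if target < g else a_release  # falling gain -> attack
        g = a * g + (1.0 - a) * target             # one-pole smoother
        smoothed.append(g)
    return smoothed

# A sudden 10-dB gain reduction is reached quickly; the return is much slower.
steps = [20.0] * 100 + [10.0] * 800 + [20.0] * 700
smoothed = smooth_gain(steps)
```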
Similarly, the time constants of an adaptive or automatic algorithm tell us the time that the algorithm takes to switch from the default state to another signal processing state (i.e., attack/adaptation/engaging time) and the time that the algorithm takes to switch back to the default state (i.e., release/disengaging time) when the acoustic environment changes. For example, the attack/adaptation/engaging time for an automatic directional microphone algorithm is the time for the algorithm to switch from the default omni-directional microphone to the directional microphone mode when the hearing aid user walks from a quiet street into a noisy restaurant. The release/disengaging time is the time for the algorithm to switch from the directional microphone back to the omni-directional microphone mode when the user exits the noisy restaurant to the quiet street.
For some algorithms, the time constants can also be the switching time from one mode to another. An example is the switching time from one polar pattern to another polar pattern in an adaptive directional microphone algorithm. In addition, time constants can also be associated with tracking speed (e.g., the speed with which a feedback reduction algorithm tracks a change in the feedback path).
The proprietary algorithms from different manufacturers have different time constants, depending on factors such as their hearing aid fitting philosophy, interactions or synergy among other signal processing units, and limitations on signal processing speed. Similar to the dilemma in choosing the appropriate release time in a compression system, there are pros and cons associated with the choices of fast or slow time constants in advanced signal processing algorithms:
Fast engaging and disengaging times can act on the changes in the incoming signal very quickly. Yet they may be overly active and create undesirable artifacts (e.g., the pumping effect generated by a noise reduction algorithm with fast time constants).
Slow engaging and disengaging times may be more stable and have fewer artifacts. However, they may appear to be sluggish and allow the undesirable components of the signal to linger a little longer before any signal processing action is taken.
The general trend in the hearing aid industry is to have variable engaging and disengaging times, similar to the concept of variable release times in a compression system. The exact value of the time constants depends on the characteristics of the incoming signal, the lifestyle of the hearing aid user, and the style and model of the hearing aid, among others.
3. Challenges and Recent Developments in Hearing Aids
Challenge No. 1: Enhancing Speech Understanding and Listening Comfort in Background Noise
Difficulty in understanding speech in noise has been one of the most common complaints of hearing aid users. People with hearing loss often have more difficulty understanding speech in noise than do people with normal hearing. When the ability to understand speech in noise is expressed as the signal-to-noise ratio (SNR) needed for understanding 50% of speech (SNR-50), the SNRs-50 of people with hearing loss may be as much as 30 dB higher than those of people with normal hearing. This means that for a given background noise, the speech needs to be as much as 30 dB higher for people with hearing loss to achieve the same level of understanding as people with normal hearing (Baer and Moore, 1994; Dirks et al., 1982; Duquesnoy, 1983; Eisenberg et al., 1995; Festen and Plomp, 1990; Killion, 1997a; Killion and Niquette, 2000; Peters et al., 1998; Plomp, 1994; Tillman et al., 1970; Soede, 2000). The difference in SNRs-50 between people with normal hearing and people with hearing loss is called SNR-loss (Killion, 1997b). The exact amount of SNR-loss depends on the degree and type of hearing loss, the speech materials, and the temporal and spectral characteristics of the background noise.
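In symbols, with $\mathrm{SNR}_{50}$ denoting the SNR needed for 50% correct:

$$\text{SNR-loss} = \mathrm{SNR}_{50,\,\text{hearing loss}} - \mathrm{SNR}_{50,\,\text{normal hearing}}$$

For example (illustrative numbers only), if a listener with normal hearing reaches 50% correct at an SNR of 2 dB while a listener with hearing loss requires 12 dB, the SNR-loss is 12 − 2 = 10 dB: speech must be 10 dB more intense, relative to the same noise, for the second listener to perform equally well.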
From the signal processing point of view, the relationship between speech and noise can be characterized by their relative occurrences in the temporal, spectral, and spatial domains. Temporally, speech and noise can occur at the same instant or at different instants. Spectrally, speech and noise can have similar frequency content, overlap only slightly, or occupy different primary frequency regions. Spatially, noise may originate from the same direction as the targeted speech or from a different spatial angle. Further, speech and noise can have a constant spatial relationship, or their relative positions may vary over time. When a constant spatial relationship exists between speech and noise, both components are fixed in space or both are moving at the same velocity. When their relative position varies over time, the talker, the noise, or both may be moving in space.
One of the most challenging tasks of engineers who design hearing aids is to reduce background noise and to increase speech intelligibility without introducing undesirable distortions. Multiple technologies have been developed in the long history of hearing aids to enhance speech understanding and listening comfort for people with hearing loss. The following section reviews some of the recent developments in two broad categories of noise reduction strategies: directional microphones and noise reduction algorithms.
3.1. Noise Reduction Strategy No. 1: Directional Microphones
Directional microphones are designed to take advantage of the spatial differences between speech and noise. They are second only to personal frequency modulation (FM) or infrared listening systems in improving the SNR for hearing aid users. Directional microphones are more sensitive to sounds coming from the front than sounds coming from the back and the sides. The assumption is that when the hearing aid user engages in conversation, the talker(s) is usually in front and sounds from other directions are undesirable.
In the last several years, many new algorithms have been developed to maintain the performance of directional microphones over time and to maximally attenuate moving or fixed noise source(s) from the back hemisphere. In addition, second-order directional microphones and array microphones with higher directional effects are available to further attenuate noise originating from the back hemisphere. The following section reviews the basics of first-order directional microphones, updates the current research findings, and discusses some of the recent developments in directional microphones.
3.1.1. First-Order Directional Microphones
First-order directional microphones have been implemented in behind-the-ear hearing aids since the 1970s. The performance of modern directional microphones has been greatly improved compared to the earlier generations of directional microphones marketed in the 1970s and 1980s (Killion, 1997b). Now, first-order directional microphones are implemented not only in behind-the-ear hearing aids but also in in-the-ear and in-the-canal hearing aids.
3.1.1.1. How They Work
First-order directional microphones are implemented with either a single-microphone design or a dual/twin-microphone design. In the single-microphone design, the directional microphone has an anterior and a posterior microphone port. The acoustic signal entering the posterior port is acoustically delayed and subtracted from the signal entering the anterior port at the diaphragm of the microphone.
The rationale is that if a sound comes from the front, it reaches the anterior port first and then reaches the posterior port a few tens of microseconds later. Because the sound entering the posterior port is delayed by both the traveling time between the two microphone ports (i.e., the external delay) and the acoustic delay network (i.e., the internal delay), the two versions of the signal reach the diaphragm out of alignment and cancellation is incomplete; sound from the front is therefore minimally affected, and the directional microphone has high sensitivity to sounds from the front. However, if a sound comes from the back, it reaches the posterior port first and continues to travel to the anterior port. If the internal delay equals the external delay, the sounds entering the posterior port and the anterior port reach the diaphragm at the same time but on opposite sides of the diaphragm, and thus they are cancelled. The sensitivity of the directional microphone to sounds from the back is therefore greatly reduced.
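A minimal numerical sketch of the delay-and-subtract principle is given below. The 12-mm port spacing, the 192-kHz simulation rate, and the 2-kHz test tone are assumptions chosen for illustration; with matched internal and external delays (a cardioid pattern), a tone from the back cancels almost exactly, whereas a tone from the front passes with only modest attenuation:

```python
import math

C = 343.0        # speed of sound (m/s)
D = 0.012        # assumed port spacing (m)
FS = 192_000     # simulation rate high enough to represent a ~35-us delay
EXT = round(D / C * FS)   # external delay in samples (here 7, about 36 us)
INT = EXT                 # matched internal delay -> cardioid pattern

def delay(sig, n):
    """Delay a signal by n samples, zero-padding the front."""
    return [0.0] * n + sig[:-n]

def directional(front_port, rear_port):
    """Delay the rear-port signal internally and subtract it at the diaphragm."""
    return [f - r for f, r in zip(front_port, delay(rear_port, INT))]

def rms(sig):
    return math.sqrt(sum(v * v for v in sig) / len(sig))

tone = [math.sin(2 * math.pi * 2000 * n / FS) for n in range(4000)]

# Front sound: hits the anterior port first, the posterior port EXT samples later.
from_front = directional(tone, delay(tone, EXT))
# Back sound: hits the posterior port first, the anterior port EXT samples later.
from_back = directional(delay(tone, EXT), tone)

print(f"front: {rms(from_front):.3f}   back: {rms(from_back):.6f}")  # back ~ 0
```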
The sensitivity of the directional microphone to sounds coming from different azimuths is usually displayed in a polar pattern. Directional microphones exhibit four distinct types of polar patterns: bipolar (or bidirectional, dipole), hypercardioid, supercardioid, and cardioid (Figure 1A). The least sensitive microphone locations (i.e., nulls) of these polar patterns are at different azimuths relative to the most sensitive location (0° azimuth). Notice that these measurements are made when the directional microphones are free-hanging in a free field where the sound field is uniform, free from boundaries, free from the disturbance of other sound sources, and nonreflective. When the directional microphones are measured in three dimensions, their sensitivity patterns to sounds from different locations are called directional sensitivity patterns (Figure 1B).
The directional sensitivity patterns of directional microphones are generated by altering the ratio between the internal and external delays. The internal delay is determined by the acoustic delay network placed close to the entrance of the back microphone port. The external delay is determined by the port spacing between the front and back ports, which in turn, is determined by the available space and considerations of the amount of low-frequency gain reduction and the amount of high-frequency directivity (Figure 2).
In the dual-microphone design, the directional microphone is composed of two omni-directional microphones that are matched in frequency response and phase (Figure 3). The two omni-directional microphone outputs are combined using delay-and-subtract processing, similar to single-microphone directional microphones: the electrical signal generated by the posterior microphone is electrically delayed and subtracted from that of the anterior microphone in an integrated circuit (Buerkli-Halevy, 1987; Preves, 1999; Ricketts and Mueller, 1999). By varying the ratio between the internal and external delays, dual-microphone directional microphones can also produce bipolar, cardioid, hypercardioid, or supercardioid patterns.
Although the performances of single-microphone and dual-microphone directional microphones are comparable, most of the high-performance digital hearing aids use dual-microphone directional microphones because of their flexibility. Single-microphone directional microphones have fixed polar patterns after being manufactured because neither the external delay nor the internal delay can be altered. However, dual-microphone directional microphones can have variable polar patterns because their internal delays can be varied by signal processing algorithms. The ability to vary the polar pattern after the hearing aid is made opens doors to the implementation of advanced signal processing algorithms (e.g., adaptive directional microphone algorithms).
The directional effect of the directional microphones can be quantified in several ways:
The front-back ratio is the difference in microphone sensitivity, in dB, between sounds coming from 0° azimuth and sounds coming from 180° azimuth.
The directivity index is the ratio of the microphone output for sounds coming from 0° azimuth to the average of microphone output for sounds from all other directions in a diffuse/reverberant field (Beranek, 1954).
The articulation index-weighted directivity index (AI-DI) is the average of the directivity indexes across frequency, with each frequency band weighted by its articulation index importance for speech intelligibility (Killion et al., 1998); a numerical sketch of both indexes follows this list.
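Under idealized free-field assumptions, both indexes can be computed from the standard first-order pattern s(θ) = a + (1 − a)cos θ. The per-band directivity indexes and articulation index weights below are made-up placeholders, not the standardized band-importance values:

```python
import math

def di_3d(a):
    """Directivity index (dB) of the first-order free-field pattern
    s(theta) = a + (1 - a)cos(theta), using the closed-form spherical
    average <s^2> = a^2 + (1 - a)^2 / 3 and s(0) = 1."""
    mean_sq = a ** 2 + (1.0 - a) ** 2 / 3.0
    return 10.0 * math.log10(1.0 / mean_sq)

def ai_di(di_per_band, weights):
    """AI-DI: average of per-band directivity indexes weighted by the
    articulation-index importance of each band."""
    return sum(d * w for d, w in zip(di_per_band, weights)) / sum(weights)

print(f"cardioid      DI = {di_3d(0.50):.1f} dB")   # about 4.8 dB
print(f"hypercardioid DI = {di_3d(0.25):.1f} dB")   # about 6.0 dB

bands_di = [4.0, 5.5, 6.0, 5.0]        # hypothetical per-band DIs
weights = [0.2, 0.3, 0.3, 0.2]         # placeholder AI weights
print(f"AI-DI = {ai_di(bands_di, weights):.1f} dB")
```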
For a review on the design and evaluation of first-order directional microphones, please refer to the reviews by Ricketts (2001) and Valente (1999).
3.1.1.2. Updates on the Clinical Verification of First-Order Directional Microphones
Many factors affect the benefits of directional microphones. Research studies on the effect of directional microphones on speech recognition conducted in laboratory settings showed a large range of SNR-50 improvements, from 1 to 16 dB. The amount of benefit experienced by hearing aid users depends on the directivity index of the directional microphone; the number, placement, and type of noise sources; the room and environmental acoustics; the relative distance between the talker and listener; the location of the noise relative to the listener; and the vent size, among others (Beck, 1983; Hawkins and Yacullo, 1984; Gravel et al., 1999; Kuk et al., 1999; Nielsen and Ludvigsen, 1978; Preves et al., 1999; Ricketts, 2000a; Ricketts and Dhar, 1999; Studebaker et al., 1980; Valente et al., 1995; Wouters et al., 1999).
Normally, the higher the directivity index and AI-DI, the greater the directional benefit provided by the directional microphones (Ricketts, 2000b; Ricketts et al., 2001). Studies by various researchers have also shown that the directivity index or AI-DI can be used to predict the improvements in SNRs-50 provided by directional microphones in environments with multiple noise sources or relatively diffuse noise (Laugesen and Schmidtke, 2004; Mueller and Ricketts, 2000; Ricketts, 2000a).
The amount of directional benefit of hearing aids is affected by the number of noise sources and the testing environment. Studies conducted with one noise source placed at the null of the directional microphone (Agnew and Block, 1997; Gravel et al., 1999; Lurquin and Rafhay, 1996; Mueller and John, 1979; Valente et al., 1995; Wouters et al., 2002) showed greater SNR-50 improvements than studies conducted with multiple noise sources or with testing materials recorded in real-world environments (Killion et al., 1998; Preves et al., 1999; Pumford et al., 2000; Ricketts, 2000a; Ricketts and Dhar, 1999; Valente et al., 2000). In general, 3 to 5 dB of improvement in the SNR-50 is reported in real-world environments with multiple noise sources (Amlani, 2001; Ricketts et al., 2001; Valente et al., 2000; Wouters et al., 1999).
In addition, greater improvements are generally observed in less reverberant environments than in more reverberant environments (Hawkins and Yacullo, 1984; Killion et al., 1998; Ricketts, 2000b; Ricketts and Dhar, 1999; Studebaker et al., 1980; Ricketts and Henry, 2002). Reverberation reduces directional effects because sounds are reflected from surfaces in all directions. The reflected sounds make it impossible for directional microphones to take advantage of the spatial separation between speech and noise. Research studies have also shown that directional microphones are more effective if speech, noise, or both speech and noise are within the critical distance (Ricketts, 2000a; Leeuw and Dreschler, 1991; Ricketts and Hornsby, 2003). Critical distance is the distance at which the level of the direct sound is equal to the level of the reverberant sound. Within the critical distance, the level of the direct sound is higher than the level of the reverberant sound.
Further, the directional effect in the low-frequency region decreases as the vent size increases because vents tend to reduce the gain of the hearing aid below 1000 Hz and allow unprocessed signals from all directions to reach the ear canal. However, when the articulation index weightings are considered, the decrease in AI-DI was only about 0.4 dB for each 1-mm increase in vent size up to a diameter of 2 mm (Ricketts, 2001; Ricketts and Dittberner, 2002). Although a larger decrease in AI-DI (i.e., 0.8 dB) was observed when the vent size increased from 2 mm to an open fitting, the open earmold fitting would still have about a 4 dB higher AI-DI than the omni-directional mode. In general, vents have the greatest effect on hearing aids with high directivity indexes at low frequencies (Ricketts, 2001).
Factors that do not affect the benefit of directional microphones are compression and hearing aid style (Pumford et al., 2000; Ricketts, 2000b; Ricketts et al., 2001). At first glance, the actions of compression and directional microphones seem to work in opposite directions: directional microphones reduce background noise, which is usually softer than speech, whereas compression amplifies softer sounds more than louder sounds. In practice, however, sounds from multiple sources occur at the same time, and the gain of the compression circuit is determined by the most dominant source or the overall level. If both speech and noise occur at the same instant with a positive SNR, the gain of the hearing aid is determined by the level of the speech, not the noise. Research studies comparing the directional benefits of linear and compression hearing aids did not show any difference in the speech understanding ability of hearing aid users when speech and noise coexist (Ricketts et al., 2001).
Another factor that does not affect the performance of directional microphones is the hearing aid style (Pumford et al., 2000; Ricketts et al., 2001). Previous research studies have shown that the omni-directional microphones of in-the-ear hearing aids have higher directivity indexes than those of behind-the-ear hearing aids because of the pinna effect (Fortune, 1997; Olsen and Hagerman, 2002), and the SNRs-50 of subjects concur with this finding (Pumford et al., 2000). However, the SNRs-50 of subjects were not significantly different for the directional microphones of the two hearing aid styles (Pumford et al., 2000; Ricketts et al., 2001). This implies that directional microphones provide less improvement relative to the omni-directional mode in an in-the-ear hearing aid than in a behind-the-ear hearing aid. In other words, although the omni-directional microphones of in-the-ear hearing aids are more directional than those of behind-the-ear hearing aids, the performance of the directional microphones implemented in the two hearing aid styles was not significantly different (Ricketts, 2001).
Most laboratory tests have shown measurable directional benefits and many hearing aids users in field evaluation studies also report perceived directional benefit. However, a number of recent field studies reported that a significant percentage of hearing aid users might not perceive the benefits of directional amplification in their daily lives even if the signal processing, venting, and hearing aid style are kept the same in the field trials and laboratory tests (Cord et al., 2002; Mueller et al., 1983; Ricketts et al., 2003; Surr et al., 2002; Walden et al., 2000).
According to the researchers, the possible reasons for the discrepancies can be attributed to the relative locations of the signal and noise, the acoustic environments, the type and location of noise encountered, subjects’ willingness to switch between directional and omni-directional microphones, and the percentage of time the use of a directional microphone is indicated, among others (Cord et al., 2002; Surr et al., 2002; Walden et al., 2000; Walden et al., 2004).
Specifically, directional microphones are designed to be more sensitive to sounds coming from the front than to sounds coming from other directions. Many laboratory tests showing the benefit of directional microphones were conducted with speech presented at 0° azimuth and noise from the sides or the back, with both speech and noise in close proximity to the hearing aid user. However, hearing aid users reported that the desired signal did not come from the front as much as 20% of the time in daily life (Walden et al., 2003). Studies have indicated that when speech comes from directions other than the front, the use of a directional microphone may have a positive, neutral, or negative effect on speech intelligibility, especially for low-level speech from the back (Kuk, 1996; Kuk et al., 2005; Lee et al., 1998; Ricketts et al., 2003).
Two other possible reasons for the discrepancies between laboratory tests and field trials are the acoustic environments and the type(s) and location(s) of noise encountered. Most laboratory tests are conducted in environments with a reverberation time of less than 600 milliseconds (Amlani, 2001). In daily life, however, a wide range of environments with often higher reverberation times may be encountered. Because directional benefits diminish as reverberation increases, hearing aid users may not be able to detect the benefits of directional microphones in their everyday lives. In addition, the use of non-real-world noise in the laboratory (e.g., speech spectrum noise) at fixed, examiner-determined locations may have exaggerated the benefits of directional microphones.
The need for the user to switch between omni-directional and directional microphones and the percentage of time or situations in which the use of a directional microphone is indicated in daily life can also partly account for the differences between laboratory and field evaluations. Cord and colleagues (2002) reported that about 23% of the subjects left their hearing aids in the default omni-directional mode during the field trials because they did not notice much difference in their first few trials of the directional microphones. Further, Cord and colleagues reported that subjects who actively switched between the omni-directional and directional microphones used the directional microphones only about 22% of the time. This indicates that omni-directional microphones were sufficient in 78% of daily listening situations and that subjects may not have accumulated enough time in the directional mode to realize its benefits.
In a subsequent study, Cord and colleagues (2004) investigated the factors differentiating subjects who regularly switched between the omni-directional and directional modes from those who left the hearing aids in the default position. They reported that the two groups did not differ significantly in their degree or configuration of hearing loss, hearing aid settings, directional benefit measured in the test booth, or likelihood of encountering situations with bothersome background noise. In other words, there is no reliable evidence that can be used to predict which hearing aid users will switch between the omni-directional and directional microphones and which will leave the hearing aids in the default omni-directional mode. In addition, previous studies also failed to predict directional benefits from hearing aid users’ audiometric test results (Jespersen and Olsen, 2003; Ricketts and Mueller, 2000).
3.1.1.3. Updates on the Limitations of First-Order Directional Microphones
With the increase in directional microphone usage in recent years, the limitations of directional microphones have become more apparent to hearing aid engineers and the audiology community. These limitations include relatively higher internal noise, low-frequency gain reduction (roll-off), higher sensitivity to wind noise, and reduced audibility of soft sounds from the back (Kuk et al., 2005; Lee et al., 1998; Ricketts and Henry, 2002; Thompson, 1999).
Two factors contribute to the problem of higher internal noise for the dual-microphone directional microphones. First, the internal noise of the modern omni-directional microphones is about 28 dB SPL. When two omni-directional microphones are combined to make a dual-microphone directional microphone in the delay-and-subtract process, the internal noise of dual microphones is about 3 dB higher than the internal noise of omni-directional microphones (Thompson, 1999). This internal noise is normally masked by environmental sounds and is inaudible to hearing aid users, even in quiet environments. However, the problem arises when a hearing aid manufacturer tries to accommodate the second factor, low-frequency roll-off.
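The 3-dB figure follows from the power addition of two uncorrelated noise floors of equal level $L$:

$$L_{\text{dual}} = 10\log_{10}\!\left(10^{L/10} + 10^{L/10}\right) = L + 10\log_{10}2 \approx L + 3\ \text{dB},$$

so two 28 dB SPL microphone noise floors combine to roughly 31 dB SPL.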
The low-frequency roll-off occurs because low-frequency sounds reach the two omni-directional microphones at nearly the same phase, so the subtraction largely cancels them. The amount of low-frequency roll-off is about 6 dB/octave for first-order directional microphones (Thompson, 1999; Ricketts, 2001). The perceptual consequences of the low-frequency roll-off are a “tinny” sound quality and under-amplification of low-frequency sounds for hearing aid users with low-frequency hearing loss (Ricketts and Henry, 2002; Walden et al., 2004).
The common practice to solve this problem is to provide low-frequency equalization so that the low-frequency responses of the directional microphones are similar to that of the omni-directional microphones. Unfortunately, by matching the gain between omni-directional and directional modes, the internal microphone noise is also amplified. Some hearing aid users may find this increase in microphone noise objectionable, especially in quiet environments (Lee and Geddes, 1998; Macrae and Dillon, 1996).
Two practices are adopted to circumvent this dilemma. First, instead of fully compensating for the 6-dB/octave low-frequency roll-off, hearing aid manufacturers may decide to provide partial low-frequency compensation (i.e., 3 dB/octave). Second, the consensus in the audiology community is to stay in the omni-directional mode in quiet environments. Field studies have also shown that subjects either preferred the omni-directional mode or showed no preference between the two modes in quiet environments (Mueller et al., 1983; Preves et al., 1999; Walden et al., 2004).
Directional microphones are also more susceptible to wind noise because they have a higher sensitivity to near field signals. When the wind curves around the head, turbulence is created very close to the head. Because directional microphones have higher sensitivity to sounds in the near field (i.e., sounds within 30 cm), the wind noise level picked up by a directional microphone can be as much as 20 to 30 dB higher than that picked up by an omni-directional microphone (Figure 4) (Kuk et al., 2000; Thompson, 1999). Because wind noise has dominant energy at low frequencies, its negative effect is further exacerbated if the directional microphone has low-frequency gain compensation. Again, the strategy is to use the omni-directional microphone mode should wind noise be the dominant signal at the microphone input. In addition, some algorithms automatically reduce low-frequency amplification when wind noise is detected (Siemens Audiology Group, 2004).
Although it is a design objective, the reduced sensitivity of directional microphones to speech or environmental sounds coming from the back hemisphere, especially at low levels, can also be a limitation (Kuk et al., 2005; Lee et al., 1998). Directional microphones should therefore be used with caution in environments in which audibility of sounds or warning signals from the back hemisphere is desirable.
3.1.1.4. Working with Directional Microphones
Despite these limitations, directional microphones are currently the most effective noise reduction strategy (second to personal FM or infrared systems) available in hearing aids. Several cautions should be exercised when clinicians fit directional microphones:
First, the performance of directional microphones decreases near reflective surfaces, such as a wall or a hand, or in reverberant environments. Hearing aid users therefore need to be counseled to move away from reflective surfaces or to converse at a place with less reverberation, if possible.
Second, the polar patterns of a directional hearing aid and the locations of the nulls when the hearing aid is worn on the user's head can be very different from an anechoic chamber measurement in which the directional microphone is free-hanging in space (Chung and Neuman, 2003; Neuman et al., 2002). Depending on the hearing aid style, the most sensitive angle of first-order directional microphones may vary from 30° to 45° for in-the-ear hearing aids to 90° for behind-the-ear hearing aids (Fortune, 1997; Neuman et al., 2002; Ricketts, 2000b). Whenever possible, clinicians should use the polar patterns measured with the hearing aids worn on the ear to counsel hearing aid users to position themselves so that the most sensitive direction of the directional microphone points toward the desired signal and the most intense noise falls in the direction of least sensitivity.
Third, clinicians need to be aware that some hearing aids automatically provide low-frequency compensation for the directional microphone mode. Others require the clinician to select the low-frequency compensation in the fitting software. Clinicians also need to determine if low-frequency compensation for the directional microphone mode is appropriate given the hearing aid user's listening needs.
Fourth, Walden and colleagues (2004) recently reported that hearing aid users who actively switch between the omni-directional and directional microphones preferred the omni-directional mode more often in relatively quiet listening situations. When noise existed, they preferred the omni-directional mode when the signal source was relatively far away. On the other hand, hearing aid users tended to prefer the directional mode in noisy environments, when speech came from the front, or when the signal was relatively close to them. Walden and colleagues also noted that counseling hearing aid users to switch to the appropriate microphone mode might increase the success rate of directional hearing aid fittings.
Fifth, although a number of studies have shown that children can also benefit from directional microphones when listening to speech in noise (Condie et al., 2002; Gravel et al., 1999; Bohnert and Brantzen, 2004), the use of directional microphones that require manual switching should be approached with caution in very young children. Very young children who are developing auditory, speech, and language skills need every opportunity to access auditory stimuli. Because directional microphones attenuate sounds from the sides and back, they may reduce the incidental learning opportunities that help children acquire and develop speech and language skills. In addition, young children probably will not be able to switch between microphone modes, requiring parents or caregivers to assume this responsibility among their other care-giving duties.
As mentioned before, always listening in the directional mode may reduce the chance of detecting warning signals or soft speech from behind, which is crucial to a child's safety. The American Academy of Audiology recommended that directional microphones be used with caution in children, especially young children who cannot switch between the directional and omni-directional modes (American Academy of Audiology, 2003).
3.1.2. Adaptive Directional Microphones
In the past, all directional microphones had fixed polar patterns; the azimuths of the nulls were kept constant. Noise in the real world, however, may come from different locations, and the relative locations of speech and noise may change over time. A directional microphone with a fixed polar pattern may not provide the optimal directional effect in all situations. With the advances in digital technology, directional microphones with variable polar patterns (i.e., adaptive directional microphones) are available in many digital hearing aids. These adaptive directional microphones can vary their polar patterns depending on the location of the noise. The goal is to always have maximum sensitivity to sounds from the frontal hemisphere and minimum sensitivity to sounds from the back hemisphere in noisy environments (Kuk et al., 2002a; Powers and Hamacher, 2004; Ricketts and Henry, 2002). It should be noted that adaptive directional microphones are not the same as the switchless directional microphones implemented in some hearing aids: adaptive directional microphones automatically vary their polar pattern, whereas switchless directional microphones automatically switch between the omni-directional and directional modes. Most adaptive directional microphones on the market, however, automatically switch both between polar patterns and between microphone modes.
3.1.2.1. How They Work
Most of the adaptive directional microphones implemented in commercially available hearing aids are first-order directional microphones. The physical construction of adaptive directional microphones is identical to that of the dual-microphone directional microphones. The difference is that the signal processing algorithm of the adaptive directional microphones can take advantage of the independent microphone outputs of the omni-directional microphones and vary the internal delay of the posterior microphone. As mentioned before, the polar pattern of a directional microphone can be changed by varying the ratio of the internal and external delays. Because the external delay (determined by the microphone spacing) is fixed after the hearing aid is manufactured, the ratio of the internal and external delays can be changed by varying the internal delay of the posterior microphone. When the ratio is changed from 0 to 1, the polar pattern is varied from bidirectional to cardioid (Powers and Hamacher, 2004).
Ideally, adaptive directional microphones should always adopt the polar pattern that places the nulls at the azimuths of the dominant noise sources. For example, the adaptive directional microphone should adopt the bidirectional pattern if the dominant noise source is located at the 90° or 270° azimuths and adopt the cardioid pattern if the dominant noise source is located at 180° azimuth. In practice, different hearing aid manufacturers use different calculation methods to estimate the location of the dominant noise source and to vary the internal delay of the directional microphones accordingly. The actual location of the null may vary, depending on the calculation method and the existence of other noise and sounds in the environment.
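Under the idealized free-field delay-and-subtract model, the null azimuth follows directly from the delay ratio: cancellation occurs where cos θ = −Ti/Te. The sketch below (a simplification, not a manufacturer's algorithm) computes the null for a given ratio and inverts the relation to steer the null toward an estimated noise azimuth:

```python
import math

def null_azimuth_deg(ratio):
    """Null angle of a first-order directional microphone as a function of
    the internal-to-external delay ratio Ti/Te (0 <= ratio <= 1). In the
    free-field delay-and-subtract model, cancellation occurs where
    cos(theta) = -Ti/Te."""
    return math.degrees(math.acos(-ratio))

def ratio_for_null(noise_azimuth_deg):
    """Invert the relation: choose the internal delay ratio that places a
    null at the estimated azimuth of the dominant noise source."""
    return -math.cos(math.radians(noise_azimuth_deg))

for r in (0.0, 0.577, 1.0):  # bidirectional, ~hypercardioid, cardioid
    print(f"Ti/Te = {r:.3f} -> nulls at +/-{null_azimuth_deg(r):.1f} deg")

print(f"noise at 180 deg -> Ti/Te = {ratio_for_null(180):.2f}")  # 1.00, cardioid
```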
The adaptive ability of the adaptive directional microphones is achieved in three to four steps:
signal detection and analysis;
determination of the appropriate operational mode (i.e., omni-directional mode or directional mode);
determination of the appropriate polar pattern; and
execution of the decision.
Table 1 summarizes the characteristics and strategies implemented in adaptive directional microphones of some hearing aids. Notice that the determination of the operational mode is user-determined for GNReSound Canta but automatic for other hearing aids. Another point worth noting is that most adaptive directional microphone algorithms process the signal in a single band. More recently, multichannel adaptive directional hearing aids have been introduced. First introduced in the Oticon Syncro hearing aids, this technology allows different directional sensitivity patterns to occur within multiple channels at the same time.
Table 1. Characteristics and strategies implemented in the adaptive directional microphones of selected hearing aids.

| | Oticon Syncro | Phonak Perseo | ReSound Canta | Siemens Triano | Widex Diva |
|---|---|---|---|---|---|
| Signal detection and analysis | … | … | Front-back ratio detector to estimate the location of the dominant sound source | … | … |
| Decision rules for determining the microphone mode | Surround Mode: …; Split-Directionality Mode: …; Full-Directionality Mode: … | Omni Mode: speech only. Directional Mode: the decision rules for switching to directional microphones can be adjusted by the clinician on the basis of user priority for speech audibility or comfort: … | User determined | Omni Mode: …; Directional Mode: … | Omni Mode: …; Directional Mode: … |
| Adaptation speed for the omni-directional/directional switch | 2–4 sec, depending on the hearing aid's Identity setting (i.e., the lifestyle of the hearing aid user in the fitting software) | Variable/programmable by the clinician, from 4–10 sec, based on “Audibility” or “Comfort” selections in the hearing aid fitting software | Not applicable because the switch is user determined | 6–12 sec, depending on the settings of the listening program | 5–10 sec, depending on the settings of the listening program |
| Decision rules for determining the polar pattern (all hearing aids: any polar pattern with nulls between 90° and 270° is possible) | … | The internal delay that yields the minimum power output from the directional microphone is adopted | The internal delay that yields the minimum output from the directional microphone is adopted | The weighted sum of a bidirectional and a cardioid pattern is calculated, and the internal delay that yields the minimum output (weighted sum) from the directional microphone is adopted | … |
| Adaptation speed between different polar patterns | 2 sec/90°; speed may vary depending on the hearing aid's Identity setting | 100 ms between polar patterns | Analysis of the environment every 4 ms; change of polar pattern every 10 ms | 50 ms/90° | Typically less than 5 sec |
| Polar pattern when multiple noise sources exist | … | Cardioid | Hypercardioid | Hypercardioid | Hypercardioid |
| Low-frequency equalization | Automatic | Programmable in fitting software via the “Contrast” feature | Programmable in fitting software | Automatic | Automatic for each polar pattern |
| Information source(s) | Oticon, 2004; Flynn, 2004, personal communication | www.Phonak.com (a); Ricketts and Henry (2002); Fabry (2004), personal communication | Groth (2004), personal communication | Powers (2004), personal communication; Powers and Hamacher (2004) | Kuk et al., 2002a; Kuk, 2004, personal communication |
| Clinical verification | Flynn (2004): compared with the first-order fixed directional microphone implemented in the Adapto, Syncro's Full-Directionality mode combined with its noise reduction algorithm yielded about 1–2 dB better SNR-50s for hearing aid users with multiple broadband noise sources in the back hemisphere; it is unclear how much of the improvement is generated by the adaptive directional microphone alone | Unavailable; see text for the evaluation of the first-order adaptive directional microphone implemented in the Phonak Claro | Unavailable | Bentler et al. (2004a): the hybrid second-order adaptive directional microphone improved hearing aid users' SNR-50s by 4 dB; no significant difference in SNR-50s between the first-order and the hybrid second-order adaptive directional microphones. Ricketts et al. (2003): significant benefit was observed with the second-order adaptive directional microphone compared with its fixed directionality mode in moving noise | Valente and Mispagel (2004): compared with the omni-directional mode, the adaptive directional microphone improved SNR-50s by 7.2 dB when a single noise source was located at 180°; the improvement decreased to 5.1 dB and 4.5 dB when noise was presented at 90°+270° and at 90°+180°+270°, respectively |
These hearing aids are selected to demonstrate the range and the differences in implementation methods of adaptive directional microphone algorithms in commercially available hearing aids. SNR = signal-to-noise ratio.
It is apparent in Table 1 that hearing aid manufacturers use different strategies to implement their adaptive directional microphone algorithms. The following discussion explains the similarities and differences among the adaptive directional microphone algorithms from different hearing aid manufacturers or models.
a. Signal Detection and Analysis
In the signal detection and analysis unit, algorithms implemented in different hearing aids may have a different number of signal detectors to analyze different aspects of the incoming signal. Some of the most common detectors are the level detector, modulation detector, spectral content analyzer, wind noise detector, and front-back ratio detector, among others.
i. Level Detector
The level detector in adaptive directional microphone algorithms estimates the level of the incoming signal. Many adaptive directional microphones switch to the directional mode only when the level of the signal exceeds a certain predetermined level. At lower levels, the algorithm assumes that the hearing aid user is in a quiet environment and the directional microphone is not needed; the hearing aid thus stays in the omni-directional mode.
ii. Modulation Detector
The modulation detector is commonly used in hearing aids to infer the presence or absence of speech sounds in the incoming signal and to estimate the SNR of the incoming signal. The rationale is that the amplitude of speech has a constantly varying envelope with a modulation rate between 2 and 50 Hz (Rosen, 1992), with a center modulation rate of 4 to 6 Hz (Houtgast and Steeneken, 1985). Noise, on the other hand, usually has a modulation rate outside of this range (e.g., ocean waves at the beach may have a modulation rate of around 0.5 Hz) or it occurs at a relatively steady or unmodulated level (e.g., fan noise).
The speech modulation rate of 4 to 6 Hz is associated with the closing and opening of the vocal tract and the mouth. Speech in quiet may have a modulation depth of more than 30 dB, which reflects the difference between the softest consonant (i.e., the voiceless /θ/) and the loudest vowel (i.e., /u/). Modulation depth is the level difference between the peaks and troughs of a waveform plotted in the amplitude-time domain (Figure 5). If a competing signal (noise or speech babble) is present in the incoming signal, the modulation depth decreases. Because the amount of amplitude modulation normally decreases as the noise level increases, the signal detection and analysis unit uses the modulation depth of signals with modulation rates centered at 4 to 6 Hz to estimate the SNR of the incoming signal: the greater the modulation depth, the higher the SNR. Notice that if the competing signal (noise) is a single talker or has a modulation rate close to that of speech, the signal detection and analysis unit cannot differentiate between the desired speech and the noise.
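A schematic modulation-depth detector is sketched below. The frame length and the 2–20 Hz modulation band are illustrative assumptions; commercial detectors differ in their filtering and statistics:

```python
import numpy as np

def modulation_depth_db(x, fs, band=(2.0, 20.0), frame_ms=10.0):
    """Estimate the envelope modulation depth (dB) within a modulation-rate
    band: compute a frame-RMS envelope, keep only the in-band envelope
    components, and report the peak-to-trough ratio."""
    hop = int(fs * frame_ms / 1000.0)
    env = np.sqrt([np.mean(x[i:i + hop] ** 2) + 1e-12
                   for i in range(0, len(x) - hop, hop)])
    env_fs = 1000.0 / frame_ms                     # envelope sample rate
    spec = np.fft.rfft(env - env.mean())
    rates = np.fft.rfftfreq(len(env), 1.0 / env_fs)
    keep = (rates >= band[0]) & (rates <= band[1])
    banded = np.fft.irfft(np.where(keep, spec, 0.0), len(env)) + env.mean()
    banded = np.clip(banded, 1e-6, None)
    return 20.0 * np.log10(banded.max() / banded.min())

fs = 16000
rng = np.random.default_rng(0)
steady = 0.1 * rng.standard_normal(2 * fs)                    # fan-like noise
t = np.arange(2 * fs) / fs
speechy = steady * (1.0 + 0.9 * np.sin(2 * np.pi * 4.0 * t))  # 4-Hz envelope

# Deeper 2-20 Hz modulation -> higher inferred SNR
print(modulation_depth_db(steady, fs))    # small (a few dB)
print(modulation_depth_db(speechy, fs))   # large (tens of dB)
```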
The modulation detector is used in the adaptive directional microphone algorithms of the Oticon Syncro and Phonak Perseo digital hearing aids. However, the results of the modulation detectors are used to make different decisions in the algorithm.
The modulation detector of Perseo analyzes the modulation pattern and the spectral center of gravity to estimate the presence or absence of speech and noise. An analogy for the spectral center of gravity is the center of gravity of an object: the center of gravity refers to the weight center of the object, whereas the spectral center of gravity refers to the frequency center of a sound. The result of the modulation detector in Perseo is then combined with the priority setting (i.e., Audibility or Comfort) and used to determine the appropriate operational mode at that instant.
Syncro, on the other hand, uses the results of the modulation detector to calculate the SNR at the output of the directional microphone. The signal processing algorithm is programmed to seek the operational mode (i.e., the Surround, Split-Directionality, or Full-Directionality Mode) and the polar patterns that maximize the SNRs of the four frequency bands at the microphone output (Flynn, 2004a). Syncro defines speech as signals with modulation rates ranging from 2 to 20 Hz.
iii. Wind Noise Detector
The wind noise detector is used to detect the presence and the level of wind noise. Although the exact mechanisms used in the algorithms from different manufacturers are unknown, it is possible that wind noise detectors make use of several physical characteristics of wind noise and hearing aid microphones to infer the presence or absence of wind. First, directional microphones are more sensitive to sounds coming from the near field than to sounds coming from the far field, whereas omni-directional microphones have similar sensitivity to sounds from both. For a dual-microphone directional microphone, the near field refers to sounds coming from a distance of less than 10 times the microphone spacing; the far field refers to sounds coming from a distance of more than 100 times the microphone spacing. Sounds coming from a distance of between 10 and 100 times the microphone spacing have the properties of both a near field and a far field (Thompson, 2004, personal communication).
When a sound comes from the far field, the outputs generated at the two omni-directional microphones that form the directional microphone are highly correlated. If the outputs are 100% correlated, the peaks and valleys of the waveforms from the two microphone outputs coincide when an appropriate delay is applied to one of the microphone outputs during the cross-correlation process. The amount of delay applied depends on the direction of the sound. In other words, the outputs of the two omni-directional microphones have a constant relationship and similar amplitudes for a sound coming from the far field.
Because the microphone outputs are highly correlated for sounds from the far field, when they are delayed and subtracted to form a directional microphone, the amplitude of the signal is reduced if the signal comes from the sides or the back hemisphere and is not much affected if the signal comes from the front hemisphere. In addition, the directional microphone exhibits a 6-dB/octave roll-off in the low-frequency region for sounds coming from any direction. Assuming that the frequency response of the hearing aid is compensated for the low-frequency roll-off, the output of the omni-directional microphone mode is comparable to the output of the directional microphone mode for sounds coming from the far field (Edwards, 2004, personal communication; Thompson, 2004, personal communication).
When wind blows, turbulence and eddies are generated close to the head; wind noise is therefore a sound from the near field. For a sound coming from the near field, the outputs of the two omni-directional microphones that form a directional microphone are poorly correlated. When the outputs of the omni-directional microphones are delayed and subtracted, little reduction in amplitude results no matter which direction the sound comes from. In fact, the wind noise entering the two microphones adds, further increasing the sensitivity of the directional microphone to wind noise, especially at high frequencies. In addition, the output of the directional microphone does not exhibit a 6-dB/octave roll-off in the low-frequency region; that is, the frequency response of the sound is similar for the directional and omni-directional modes. For the same hearing aid with low-frequency compensation, the output of the directional microphone is now much higher than the output of the omni-directional microphone for this near field sound because of the increased sensitivity and the low-frequency gain compensation (Edwards, 2004, personal communication; Thompson, 2004, personal communication).
Although the exact mechanisms of wind noise detectors are proprietary to each hearing aid manufacturer, it is possible that one characteristic the wind noise detector monitors is the difference between the outputs of the omni-directional and directional microphones (Edwards, 2004, personal communication). Using the example with equalized low-frequency gain, the outputs of the omni-directional and directional microphones are comparable for sounds coming from the far field, but the output of the directional microphone is much higher than that of the omni-directional microphone for sounds coming from the near field (wind noise). On the other hand, if the low-frequency gain is not equalized, the output of the directional microphone is lower than that of the omni-directional microphone for sounds coming from the far field but higher for sounds coming from the near field.
Another possible strategy to detect wind noise is to use the correlation coefficient to infer the presence of wind noise. The correlation coefficient can be determined by applying several delays to the output of one of the omni-directional microphones and calculating the correlation coefficient between the outputs of the two microphones for each delay time. As mentioned previously, if the microphone outputs are correlated 100%, the peaks and valleys of the waveforms coincide perfectly. If the peaks and valleys of the waveform are slightly mismatched in amplitude or phase, the outputs are said to have a lower correlation coefficient. For sounds in the near field, the correlation coefficient can be close to 0%.
The wind noise detector can make inferences based on the degree of correlation between the outputs of the two omni-directional microphones. If the outputs have a high correlation coefficient, the wind noise detector infers that wind noise is absent. If the outputs have a low correlation coefficient, the algorithm infers that wind noise is present (Thompson, 2004, personal communication; Siemens Audiology Group, 2004). According to Oticon, the wind noise detectors in the Syncro hearing aids detect uncorrelated signals between the microphone outputs that are consistent with the spectral pattern of wind noise to infer the presence or absence of wind noise (Flynn, 2004, personal communication).
In addition, it is possible that a wind noise detector can set different correlation criteria for the coefficients at low- and high-frequency regions for wind noise reduction. High-frequency eddies are normally generated by finer structures around the head (e.g., pinna, battery door of an in-the-ear hearing aid) and low-frequency eddies are generated by larger structures (e.g., the head and the shoulders). As the finer structures are much closer to the hearing aid microphones (in the near field) and the larger structures are further away from the microphone (in the mixed field), high-frequency sounds tend to have a lower correlation coefficient than low-frequency sounds at the microphone output (Thompson, 2004, personal communication). A sample decision rule for the wind noise detector to make use of this acoustic phenomenon can be: wind noise is present in the microphone output if the correlation coefficient is less than 20% at the low-frequency region and less than 35% at the high-frequency region.
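To make this mechanism concrete, the following sketch implements the sample decision rule above. It is an illustration of the published principle only, not any manufacturer's implementation; the 1-kHz band split, the delay search range, and the correlation window are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def max_correlation(front, back, max_delay=8):
    """Maximum normalized cross-correlation over a range of small delays."""
    best = 0.0
    for d in range(-max_delay, max_delay + 1):
        a = front[max(d, 0):len(front) + min(d, 0)]
        b = back[max(-d, 0):len(back) + min(-d, 0)]
        denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
        if denom > 0.0:
            best = max(best, float(np.sum(a * b) / denom))
    return best

def wind_noise_present(front, back, fs, split_hz=1000.0,
                       low_thresh=0.20, high_thresh=0.35):
    """Apply the sample rule: infer wind noise when the inter-microphone
    correlation is below 20% in the low band and below 35% in the high band."""
    lo = butter(4, split_hz, 'lowpass', fs=fs, output='sos')
    hi = butter(4, split_hz, 'highpass', fs=fs, output='sos')
    r_low = max_correlation(sosfilt(lo, front), sosfilt(lo, back))
    r_high = max_correlation(sosfilt(hi, front), sosfilt(hi, back))
    return r_low < low_thresh and r_high < high_thresh
```

In practice, the correlation would be tracked over short, overlapping frames and smoothed before a mode decision is made.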
When wind noise is detected, many hearing aids with adaptive directional microphones either remain at or switch to the omni-directional microphone mode to reduce annoyance of the wind noise or to increase the audibility of speech, or both (Kuk et al., 2002b; Oticon, 2004a, Siemens Audiology Group, 2004).
iv. Front-Back Detector
Some adaptive directional microphone algorithms also have a front-back ratio detector that detects the level differences between the front and back microphones and estimates the location of dominant signals (Fabry, 2004, personal communication; Groth, 2004, personal communication; Kuk, 2004, personal communication; Oticon, 2004a). For example, the front-back detector of Oticon Syncro combines the analysis results of the front-back ratio detector and the modulation detector to determine whether the dominant speech is located at the back. If a higher modulation depth is detected at the output of the back microphone, the algorithm remains at or switches to the omni-directional mode (Oticon, 2004a).
b. Determination of Operational Mode
As mentioned, the automatic switching between the omni-directional and directional mode, strictly speaking, can be classified as a different hearing aid feature in addition to adaptive directional microphones. Most hearing aids, however, have incorporated the automatic switching function into their adaptive directional microphone algorithms.
Every hearing aid has its own set of decision rules to determine whether the hearing aid should operate in the omni-directional mode or the directional mode at any given moment (Table 1). Some hearing aids have simple switching rules. For example, the switching is user-determined in GNReSound Canta, whereas Siemens Triano switches to the directional mode when the level of the incoming signal reaches a predetermined level.
Other adaptive directional microphone algorithms take more factors into account in the decision-making process, such as the level of the wind noise, the location of the dominating signal, and the level of environmental noise (Kuk et al., 2002a; Oticon, 2004a). The omni-directional mode is often chosen if wind noise dominates the microphone input, if the front-back ratio detector indicates that the dominant signal is located at the back of the hearing aid user, or if the level of the environmental noise or overall signal is below a predetermined level. The predetermined level is usually between 50 and 68 dB SPL, depending on the particular algorithm (Kuk, 2004, personal communication; Oticon, 2004a; Powers, 2004, personal communication).
Some adaptive directional microphone algorithms have more complex decision rules to determine the switching between the omni-directional and the directional mode. For example, the switching rules of Phonak Perseo can be changed by the clinician based on the hearing aid user's preference for audibility of speech (Audibility) or listening comfort (Comfort) (Table 1). If audibility is chosen, the hearing aid switches to directional mode only when speech-in-noise is detected in the incoming signal. If speech-only or noise-only is detected, the hearing aid remains in the omni-directional mode. However, if comfort is chosen, the hearing aid switches to the directional mode whenever noise is detected in the incoming signal. This means that the hearing aid remains in the omni-directional microphone mode only if speech-only is detected.
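A minimal sketch of this kind of switching logic, assuming binary speech and noise flags from the signal classifiers, is shown below; the function and flag names are hypothetical.

```python
from enum import Enum

class MicMode(Enum):
    OMNI = "omni-directional"
    DIRECTIONAL = "directional"

def select_mode(speech: bool, noise: bool, priority: str = "audibility") -> MicMode:
    """Audibility: go directional only for speech-in-noise.
    Comfort: go directional whenever noise is detected."""
    if priority == "audibility":
        return MicMode.DIRECTIONAL if (speech and noise) else MicMode.OMNI
    return MicMode.DIRECTIONAL if noise else MicMode.OMNI
```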
The adaptive directional microphone algorithm implemented in Oticon Syncro has the most complex decision rules (Table 1). Syncro operates at three distinctive directionality modes, namely, surround mode (i.e., omni-directional in all four bands), split-directionality mode (i.e., omni-directional at the lowest band and directional at the upper three bands), and full-directionality mode (i.e., directional in all four bands).
In the decision-making process, the algorithm uses the information from the level detector and the modulation detector in each of the frequency bands as well as two alarm detectors (i.e., the front-back ratio detector and the wind noise detector). The information provided by the alarm detectors takes precedence in the microphone mode selection process. As mentioned before, the signal processing algorithm implemented in Syncro seeks to maximize the SNR at the directional microphone output. Specifically, the algorithm stays in the surround mode if the omni-directional mode provides the best SNR at the microphone output, if the level of the incoming signal is soft to moderate, if the dominant speaker is located at the back, or if strong wind is detected.
The algorithm switches to the split-directionality mode if speech is detected in background noise, if the combination of the omni-directional mode in the lowest band and the directional mode in the upper three bands yields the highest SNR, if the incoming signal is at a moderate level, or if a moderate amount of wind noise is detected. The algorithm switches to the full-directionality mode if speech from the front is detected in a high level of background noise, if the SNR is highest with all four bands in the directional mode, and if no or only a low level of wind noise is detected (Flynn, 2004a).
c. Determination of Polar Pattern(s)
After the adaptive directional microphone algorithm decides that the hearing aid should operate in the directional mode, it needs to decide which polar pattern to adopt at that moment. The common rule for all adaptive directional microphone algorithms is that the polar pattern always has its most sensitive beam pointing to the front of the hearing aid user. To determine the polar pattern, many algorithms adjust the internal delay so that the resultant output or power is at a minimum (Fabry, 2004, personal communication; Groth, 2004, personal communication; Powers and Hamacher, 2004; Kuk et al., 2002). Oticon Syncro, on the other hand, uses the estimated SNR to guide the choice of polar patterns in the four frequency bands. Specifically, the adaptive directional microphone algorithm of Syncro calculates the SNR of each polar pattern with nulls from 180° to 270° at 1° intervals in the four frequency bands. The polar pattern that yields the highest SNR at the directional microphone output in each frequency band is chosen. As most adaptive directional microphones do not limit their calculations to bidirectional, hypercardioid, supercardioid, or cardioid patterns, they are capable of generating polar patterns with nulls at any angle(s) from 90° to 270°.
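The power-minimization strategy can be illustrated numerically. The sketch below models a delay-and-subtract pair at a single frequency and scans candidate null angles from 90° to 270°, choosing the internal delay that minimizes the power arriving from the observed noise directions; the 12-mm port spacing and single-tone noise model are simplifying assumptions.

```python
import numpy as np

C = 343.0    # speed of sound (m/s)
D = 0.012    # assumed port spacing (m)

def pattern_gain(azimuth_deg, tau, freq=1000.0):
    """Response of a delay-and-subtract pair to a tone from azimuth_deg
    (0 deg = front), with internal delay tau applied to the back microphone."""
    w = 2.0 * np.pi * freq
    external = (D / C) * np.cos(np.radians(azimuth_deg))
    return np.abs(1.0 - np.exp(-1j * w * (tau + external)))

def best_internal_delay(noise_azimuths_deg, freq=1000.0):
    """Scan null angles from 90 to 270 deg; return the internal delay that
    minimizes the summed power from the noise directions."""
    best_tau, best_power = 0.0, np.inf
    for null_deg in range(90, 271):
        tau = -(D / C) * np.cos(np.radians(null_deg))  # places a null at null_deg
        power = sum(pattern_gain(a, tau, freq) ** 2 for a in noise_azimuths_deg)
        if power < best_power:
            best_tau, best_power = tau, power
    return best_tau

tau = best_internal_delay([150.0])   # single noise source behind and to the side
```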
d. Execution of Decision
After the algorithm decides which operational mode or which polar pattern to adopt, the appropriate action is executed. A very important parameter in this execution process is the set of time constants of the adaptive directional microphone algorithm. Similar to the attack-and-release times in compression systems, each adaptive directional microphone algorithm has adaptation/engaging times and release/disengaging times that govern the duration between changes in microphone modes or polar pattern choices. Adaptive directional microphone algorithms implemented in different hearing aids have one set of time constants to switch from the omni-directional to the directional microphone mode and another set of time constants to adapt to different polar patterns (Table 1). The adaptation time for the algorithms to switch from the omni-directional to the directional mode generally varies from 4 to 10 seconds, depending on the particular algorithm. The adaptation time for an algorithm to change from one polar pattern to another is usually much shorter; it varies from 10 milliseconds to less than 5 seconds, depending on the particular algorithm.
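The sketch below shows one plausible way such time constants could be realized, smoothing a 0-to-1 omni-to-directional mix parameter with first-order filters; the 6-second engaging and 0.5-second disengaging values are hypothetical choices within the ranges quoted above.

```python
import numpy as np

def smooth_mix(target, fs, engage_tau=6.0, release_tau=0.5):
    """First-order smoothing of a 0..1 omni-to-directional mix with
    separate engaging and disengaging time constants (in seconds)."""
    target = np.asarray(target, dtype=float)
    out = np.empty_like(target)
    state = target[0]
    for n, t in enumerate(target):
        tau = engage_tau if t > state else release_tau
        alpha = np.exp(-1.0 / (fs * tau))   # per-sample smoothing coefficient
        state = alpha * state + (1.0 - alpha) * t
        out[n] = state
    return out
```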
One feature of the adaptive directional microphones worth noting is that their adaptation times vary with other settings in the hearing aid listening program. For example, the time constants of Siemens Triano and Widex Diva change with the listening program, whereas the time constants of Phonak Perseo change with the Audibility or Comfort setting. A set of faster time constants is adopted if audibility is chosen as the priority of the hearing aid use, and a set of slower time constants is used if comfort is chosen to increase listening comfort.
In addition, the time constants of Oticon Syncro change with the Identity setting of the hearing aid program. The Identity setting is chosen by the clinician during the hearing aid fitting session based on the degree of hearing loss, age, life style, amplification experience, listening preference, and etiology of hearing loss of the hearing aid user. It controls the time constants for the adaptive directional microphones as well as many variables in the compression and noise reduction systems. In general, faster time constants are adopted if the Identity is set at Energetic and slower time constants are adopted if the Identity is set at Calm.
Unlike the adaptive release times implemented in compression systems, none of the time constants of the adaptive directional microphone algorithms varies in response to changes in the characteristics of the incoming acoustic signal. In other words, the time constants of the adaptive directional microphones are preset with the hearing aid settings; they do not vary with the acoustic environment. Further, the time constants of the adaptive directional microphones are not directly adjustable by the clinician. They are preset with different programming/priority choices but are not available as a stand-alone parameter in the fitting software.
3.1.2.2. Verification and Limitations
a. Clinical Verification
Several researchers have conducted studies comparing the performance of single-band adaptive directional microphones with regular directional microphones with fixed polar patterns (Bentler et al., 2004b; Ricketts and Henry, 2002; Valente and Mispagel, 2004). Several inferences can be drawn from these research studies:
The adaptive directional microphones are superior to the fixed directional microphones if noise comes from a relatively narrow spatial angle (Ricketts and Henry, 2002).
The adaptive directional microphones perform similarly to the fixed directional microphones if noise sources span a wide spatial angle or multiple noise sources from different azimuths coexist (Bentler et al., 2004a). According to Ricketts (personal communication, 2004), when multiple noise sources from different azimuths coexist, the single noise source needs to be at least 15 dB greater than the total level of all other noise sources to obtain a measurable adaptive advantage in at least two hearing aids.
When multiple noise sources from different azimuths coexist or the noise field is diffuse, adaptive directional microphones resort to a fixed cardioid or hypercardioid pattern (Table 1). Thus, the relative performance of the adaptive and fixed directional microphones in a diffuse field and for noise from a particular direction depends on the polar pattern of the fixed directional microphone. For example, compared to a fixed directional microphone with a cardioid pattern, the adaptive directional microphone yields better speech understanding if the noise comes from the side (i.e., it changes to bidirectional pattern) and yields similar speech understanding if the noise comes from the back (i.e., it changes to the cardioid pattern) (Ricketts and Henry, 2002).
Adaptive directional microphones have not been reported to be worse than the fixed directional microphones.
Subjective ratings using the Abbreviated Profile of Hearing Aid Benefit (APHAB) scales have shown higher ratings for the adaptive directional microphones compared with the omni-directional microphones after a 4-week trial in real life environments (Valente and Mispagel, 2004). Oticon has conducted a clinical trial to compare the performance of its hearing aids with a single-band, first-order, fixed directional microphone (Adapto) and a multiband first-order adaptive directional microphone with the noise reduction and compression system active (Syncro) (Flynn, 2004b). The SNRs-50 of hearing aid users were tested when speech was presented from 0° azimuth and uncorrelated broadband noises were presented from four locations in the back hemisphere. Flynn reported approximately a 1-dB improvement in the SNR-50 of hearing aid users between the omni-directional modes of the two hearing aids and approximately 2 dB of improvement between the directional modes of the two hearing aids. However, as the noise reduction algorithm was active for the multi-band adaptive directional microphones and the two hearing aids have different compression systems, it is unclear how much of the differences were solely due to the differences in the directional microphones.
b. Time Constants
The optimum adaptation speeds between the omni-directional and directional modes or among different polar patterns have not been systematically explored. As noted before, adaptive directional microphone algorithms implemented in different hearing aids have different speeds of adaptation for switching microphone modes and polar patterns. Some take several seconds to adapt and others claim to adapt almost instantaneously (i.e., in 4 to 5 milliseconds) (Kuk et al., 2000; Powers and Hamacher, 2002; Ricketts and Henry, 2002; Groth, 2004, personal communication).
Similar to the attack-and-release times of a compression system, there are pros and cons associated with faster or slower adaptation times for the adaptive directional microphones. For example, a system with a fast adaptation time can change its polar pattern for maximum noise reduction when the head moves or when a noise source travels in the back hemisphere of the hearing aid user. The fast adaptation may be overly active, however, and it may change its polar pattern during the stressed and unstressed portions of natural speech when a competing speaker and a noise source are located at different locations in the back hemisphere of the hearing aid user. The advantage of a slower adaptation time is that it does not act on every small change in the environment, yet it may not be able to quickly and effectively attenuate a moving noise source, for example, a truck moving from one side to the other behind the hearing aid user.
3.1.3. Second-Order Directional Microphones
Although first-order directional microphones generally provide 3 to 5 dB of improvement in SNR for speech understanding in real-world environments, people with hearing loss often experience a much higher degree of SNR-loss. This means that the benefits provided by first-order directional microphones are insufficient to close the gap between the speech understanding ability of people with hearing loss and that of people with normal hearing in background noise. This limitation prompted the development of a number of instruments, such as second-order directional microphones and array microphones, to provide higher directionality.
Second-order directional microphones are composed of three matched omni-directional microphones, and they usually have a higher directivity index than the first-order directional microphones. The only commercially available second-order directional microphones to date are implemented in the behind-the-ear Siemens Triano hearing aids. According to Siemens, Triano is implemented with a first-order directional microphone for frequencies below 1000 Hz and a second-order directional microphone above 1000 Hz (Figure 6) (Powers and Hamacher, 2002).
The reason for this particular setup is that the second-order directional microphone is implemented using delay-and-subtract processing, which yields higher internal microphone noise and a low-frequency roll-off of 12 dB/octave. The steep low-frequency roll-off makes it difficult to amplify the low-frequency region, and any effort to compensate for the roll-off would exacerbate the internal noise.
The first-order directional microphone is used to circumvent the problem by keeping the internal noise manageable. It can also preserve the ability of the hearing aid to provide low-frequency amplification. The second-order directional microphone is used to take advantage of its higher directionality. The directional microphones of Triano can also be programmed to have adaptive directionality.
3.1.3.1. Verification and Limitations
Bentler and colleagues (2004a) measured a random sample of behind-the-ear Triano hearing aids with the hybrid second-order directional microphones and reported that the free-field average directivity index (DI-a) values ranged from 6.5 to 7.8 dB. The DI-a values were calculated from the sum-average of the DI values from 500 to 5000 Hz without frequency weighting. When the Triano hearing aids were worn on a Knowles Electronics Manikin for Acoustic Research (KEMAR), the DI-a values ranged from 4.5 to 6.0 dB.
Several research studies have investigated the effectiveness of the hybrid second-order directional microphones in stationary and moving noises. Bentler and colleagues (2004a) compared the speech understanding performance of subjects with normal hearing and subjects with an average of 30 to 65 dB HL hearing loss from 250 to 8000 Hz. Subjects with normal hearing listened to the Hearing in Noise Test (HINT, Nilsson et al., 1994) in stationary and moving noises to serve as the standard. Subjects with hearing loss were fit with a pair of Triano hearing aids and a pair of another hearing aid with a first-order adaptive directional microphone by the same manufacturer. They listened to the HINT sentences in omni-directional and directional modes in a stationary noise field and in omni-directional, directional, and adaptive directional modes in a moving noise field.
The results indicated that subjects with hearing loss exhibited an aided SNR-loss of 4 dB in stationary noise and slightly less than 5 dB in moving noise. The performance of subjects with hearing loss showed roughly 4 dB of SNR improvement in stationary noise when using Triano compared with the hearing aids with first-order directional microphones. Subjects with hearing loss also obtained a little more than 3 dB of SNR improvement using the hearing aids with the first-order adaptive directional microphone and approximately 4 dB of improvement using the Triano adaptive directional microphone. No significant differences were found in subjects’ performance between Triano and the hearing aid with first-order directional microphones in either noise field.
Ricketts and colleagues (2003) have reported that the hybrid second-order adaptive directional microphones yielded approximately 2 dB lower SNR-50 than the same hearing aid at the fixed directional microphone mode in the presence of a moving noise source at the back hemisphere. They also reported that the fixed and adaptive directional microphones generated about 5.7 dB and 7.6 dB lower SNRs-50 than the omni-directional mode of the hearing aid.
3.1.4. Microphone-Matching Algorithms
Most directional microphones implemented in digital hearing aids use the dual-microphone design because of its flexibility for adaptive signal-processing options. The challenge is that the sensitivities and phases of the two omni-directional microphones forming the dual-microphone directional microphone have to be matched to within 0.02 dB and 1°, respectively, to ensure good directional performance (Schmitt, 2003, personal communication).
The matching of the omni-directional microphones for directional microphone application requires several steps. The first matching is conducted in the factory where the microphone is manufactured. The frequency and phase of the omni-directional microphones from the same lot are measured and matched for the directional microphone application. According to Thompson (2004, personal communication), a simple predictable relationship exists between the sensitivity and the phase of the microphone across frequency regions. Therefore, if two omni-directional microphones are matched for four parameters, they should be sufficiently matched for directional microphone applications. These four parameters are the phase at a low frequency (e.g., 250 Hz), the sensitivity at a mid frequency (e.g., 1000 Hz), and the peak frequency and amplitude of the microphone resonance (e.g., normally at 5000 to 6000 Hz).
The second matching is performed when the directional microphone is built into a hearing aid in the manufacturing facility. This procedure is accomplished by measuring the frequency response and the phase of the two omni-directional microphones and using a digital filter(s) to correct the discrepancies.
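A magnitude-only version of this correction, which, as discussed below, is all that many algorithms perform, might look like the following sketch; the averaged spectra and the helper name are assumptions for illustration.

```python
import numpy as np

def matching_gains(front_avg_mag, back_avg_mag, eps=1e-12):
    """Per-band gains that equalize the back microphone's long-term average
    magnitude spectrum to the front microphone's (sensitivity-only matching)."""
    return front_avg_mag / (back_avg_mag + eps)

# front_avg_mag and back_avg_mag are magnitude spectra averaged over a long
# observation window (assumed given); the gains are applied to the back
# microphone's spectrum before the delay-and-subtract stage:
#   back_spec_corrected = matching_gains(front_avg_mag, back_avg_mag) * back_spec
```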
Despite these matching processes, the frequency responses and phase relationship of the two omni-directional microphones may drift apart when the microphones are exposed to extreme temperature changes, humidity, vibration, or some other environmental factors (Dittberner, 2003, Kuk et al., 2000; Matsui and Lemons, 2001). Microphone drift can also occur in the natural aging process of the microphones. Matsui and Lemons (2001) reported an average of 1 dB decrease in the directivity index when 13 dual-microphone directional microphones were stored in an office for just 3 months. Therefore, some manufacturers use aged omni-directional microphones to improve the performance stability of the resulting directional microphones.
Microphone drift can happen in both the frequency and the phase domains. It poses a challenge to the maintenance of the directional effect over time. In addition, if the characteristics of the microphones drift by the same amount in the high- and low-frequency regions (e.g., 1–2 dB), the degradation in the directivity index is often greater in the low-frequency region than in the high-frequency region. Figure 7 illustrates the effects of frequency drift at low- and high-frequency regions and two examples of the effects of phase drift.
When a directional microphone with a hypercardioid pattern has perfectly matched frequency responses, it has a directivity index of 6 dB and its polar pattern has two nulls at about 110° and 250° (Figure 7A). If the frequency responses of the two omni-directional microphones have a mismatch of 1 or 2 dB at 1000 Hz, the directivity index is reduced to 4.4 dB or 2.7 dB, respectively (Figure 7A) (Edwards, 1998). However, if a much smaller mismatch (0.25 dB) occurs at 250 Hz, the nulls in the polar pattern disappear and the directivity index decreases to 4.1 dB (Figure 7B).
Deleterious effects can also be observed when the phases of the two microphones drift apart (Kuk et al., 2000). The hypercardioid polar pattern at 250 Hz is changed to a cardioid pattern and the directivity index is decreased to 4.6 dB when the front microphone lags the back microphone by 2° (Figure 7C). A more detrimental effect is seen if the back microphone lags the front microphone by 2°: the polar pattern is changed to a reverse hypercardioid pattern, where the nulls point to the front and the most sensitive beam of the directional microphone points to 180° at the back (Figure 7C) (Kuk et al., 2000).
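Effects of this kind can be reproduced with a simple free-field model. The sketch below computes the directivity index of an idealized delay-and-subtract hypercardioid with a mismatched back microphone; the 12-mm spacing is assumed, head effects are ignored, and the numbers will therefore differ somewhat from the published Figure 7 values.

```python
import numpy as np

C, D = 343.0, 0.012    # speed of sound (m/s), assumed port spacing (m)

def directivity_index_db(freq, gain_mismatch_db=0.0, phase_mismatch_deg=0.0):
    """DI of an ideal delay-and-subtract hypercardioid (internal delay
    (D/C)/3, nulls near 110/250 deg) with a mismatched back microphone."""
    tau = (D / C) / 3.0
    w = 2.0 * np.pi * freq
    g = 10.0 ** (gain_mismatch_db / 20.0) * np.exp(1j * np.radians(phase_mismatch_deg))
    theta = np.linspace(0.0, np.pi, 2001)
    r2 = np.abs(1.0 - g * np.exp(-1j * w * (tau + (D / C) * np.cos(theta)))) ** 2
    # spherical average of the squared response (pattern is axisymmetric)
    mean_sq = np.sum(r2 * np.sin(theta)) * (theta[1] - theta[0]) / 2.0
    return 10.0 * np.log10(r2[0] / mean_sq)

print(directivity_index_db(250.0))                           # ~6 dB when matched
print(directivity_index_db(250.0, gain_mismatch_db=0.25))    # DI degraded
print(directivity_index_db(250.0, phase_mismatch_deg=-2.0))  # pattern tends toward
                                                             # cardioid (sign is
                                                             # model-specific)
```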
These examples illustrate that exact matching of the microphones is essential to the performance of directional microphones. Fortunately, phase mismatch at frequencies higher than 250 Hz has a much less detrimental effect than in the frequency region below 250 Hz because the sensitivity-phase relationship is more stable at higher frequencies. In other words, phase can be relatively well matched if the sensitivity of the microphones is matched in the higher frequency regions.
If microphone drift happens after the directional hearing aid is fit to its user, the hearing aid user may experience good directional benefit at first but later may report no differences between the omni-directional and directional modes. To meet the challenge of maintaining directional performance over time, engineers have developed microphone matching algorithms.
Like other signal processing algorithms, microphone matching algorithms are implemented in various ways. Because of the predictable relationship between the sensitivity and phase of the microphone, many microphone matching algorithms only match the sensitivity of the microphones (Flynn, 2004, personal communication; Hamacher, 2004, personal communication). A few algorithms also match the phase of the microphones (Kuk, 2004, personal communication). If a difference is detected between the microphones, the microphone matching algorithm generates a digital filter to match the two microphones (Groth, 2004, personal communication). One important component of this matching process is that the output of each microphone is digitized separately so that the frequency response and the phase of the microphones can be adjusted independently (Edwards et al., 1998; Kuk et al., 2000).
Microphone matching algorithms can also differ in their speed of action. Depending on the signal processing power of the hearing aid chip and/or the priority of the microphone matching algorithm set among the signal processing algorithms, some microphone-matching algorithms monitor the output of the two omni-directional microphones over a relatively long window of several hours (Groth, 2004, personal communication) and others match the sensitivity of the microphones in the order of seconds (Hamacher, 2004, personal communication; Kuk et al., 2002a) or in the order of milliseconds (Flynn, 2004, personal communication).
3.1.4.1. Verification and Limitations
A properly functioning microphone matching algorithm should provide individualized in situ matching of the microphones and maximize the directional performance of the directional microphone throughout the lifetime of the hearing aid. The rationale of microphone matching algorithms sounds very logical and promising; however, the exact procedures used by different manufacturers are unknown to the public. To date, no verification data on the effectiveness of these algorithms are available.
The limitation of the microphone matching algorithm is that it cannot protect against factors other than microphone drift (e.g., clogged microphone ports). In fact, Thompson (2003) argued that the degradation in performance of directional microphones is often due to the accumulation of debris on the microphone screens or clogged microphone ports rather than microphone drift. As factors other than microphone mismatch may determine the performance of directional microphones over time, constant monitoring of the microphones’ physical condition is very important. Clinicians need to check the two omni-directional microphones under a microscope or magnifying lens during regular clinic visits to ensure that the microphone openings are free of debris and that the microphone screens are clearly seen and well defined, even when the hearing aids are equipped with microphone-matching algorithms (Ricketts, 2001).
3.1.5. Microphone Arrays as Assistive Listening Devices
Because some people with hearing loss experience more than 15 dB of SNR loss, the benefits provided by directional microphones may not be enough to compensate for their SNR loss. The traditional solution is to resort to the use of personal FM systems. An FM system is very useful in classrooms or in one-on-one communications. The microphone of the FM system greatly reduces background noise by significantly reducing the distance between the talker and the hearing aid user. However, FM systems are limited in their effectiveness to pick up multiple speakers in a conversation. They are not practical to use in daily life where listening to multiple talkers is essential.
Several companies have marketed array microphones that are designed to bypass the hearing aid microphone(s) and to provide higher directional effects. These array microphones are implemented in either head-worn or hand-held units. When array microphones are used in conjunction with hearing aids, sounds from the environment are pre-processed by the array microphone and then sent to the hearing aids via a telecoil, a direct audio input, or an FM receiver. The advantage of array microphones over traditional FM systems is that the talker does not need to wear the microphone or transmitter unit. The hearing aid user can choose to listen to different talkers by facing or pointing to the desired talker. Some array microphones are only compatible with hearing aids from their own manufacturers (e.g., SmartLink SX from Phonak); others are compatible with hearing aids from multiple manufacturers. The following section focuses on the latter.
a. Head-Worn Array Microphones
Head-worn array microphones can be implemented as either an end-fire or a broadside array. An end-fire array has its most sensitive beam parallel to the microphone array, such as an array microphone implemented along an arm of a pair of eyeglasses. A broadside array has its most sensitive beam perpendicular to the microphone array, such as an array microphone implemented above the lenses of a pair of eyeglasses.
Etymotic Research designed and marketed an end-fire array microphone, Link.It. Link.It uses delay-and-sum processing to combine the outputs of three single-microphone directional microphones. The directional microphones are spaced 25 mm apart, and the outputs of the second and third directional microphones are delayed by 75 and 150 microseconds, respectively. When sounds come from the front, the outputs of the microphones are in phase after accounting for the traveling time and the delay circuit. The sum of the outputs from the three microphones is then three times as large as a single directional microphone output.
When sounds come from the sides, the outputs of the three microphones are 180° out of phase because of the delays added to the outputs of the microphones (Christensen et al., 2002). According to Etymotic Research (http://www.etymotic.com/ha/linkit-ts.asp), single-microphone directional microphones (instead of omni-directional microphones) are used to optimize the performance of the array microphone over time and to minimize the need to monitor and match the sensitivity and phase of the microphones.
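The delay-and-sum principle can be sketched as follows, using three omni-directional elements for simplicity rather than Link.It's directional elements; the 25-mm spacing matches the description above, and the inter-element delay is the corresponding acoustic travel time.

```python
import numpy as np

C = 343.0            # speed of sound (m/s)
SPACING = 0.025      # element spacing (m)
T = SPACING / C      # inter-element travel time, roughly 73 microseconds

def endfire_response(theta_deg, freq, n_mics=3):
    """Magnitude response of an n-element delay-and-sum end-fire array.
    Element 0 is front-most; rear elements get less internal delay, so
    frontal sound (theta = 0) sums exactly in phase."""
    w = 2.0 * np.pi * freq
    cos_t = np.cos(np.radians(theta_deg))
    delays = [(n_mics - 1 - k) * T + k * T * cos_t for k in range(n_mics)]
    return abs(sum(np.exp(-1j * w * d) for d in delays)) / n_mics

print(endfire_response(0.0, 2000.0))      # 1.0: coherent sum from the front
print(endfire_response(90.0, 2000.0))     # reduced from the side
print(endfire_response(180.0, 2000.0))    # strongly reduced from the back
```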
Link.It sends its processed signal to the hearing aid wirelessly via telecoil (Figure 8A). If necessary, the output of Link.It can instead be fed into the direct audio input of the hearing aid. Link.It has a relatively flat frequency response from 200 Hz to 4000 Hz. It yields an AI-DI of 7 dB when measured on KEMAR in an anechoic chamber and 8 dB in the free field (Christensen et al., 2002).
b. Hand-Held Array Microphones
Most hand-held array microphones are implemented in the end-fire array (i.e., shot-gun microphone array). Recently, a new hand-held array microphone, Lexis, has been introduced. Lexis has a hand-held unit with an array microphone and a built-in FM transmitter. Signals from Lexis are sent to the hearing aid via an FM receiver plugged into the direct audio input of the hearing aid.
The hand-held unit of Lexis is composed of four single-microphone directional microphones aligned on the side of the unit (Figure 8B). Again, single-microphone directional microphones are used to maintain the directional effect over time while minimizing the need to monitor and match the sensitivity and phase of the microphone components. The port spacing between these single-microphone directional microphones is 15 mm. According to Oticon (2004b), 15 mm was chosen as a compromise between the amount of low-frequency roll-off (the larger the port spacing, the less the low-frequency roll-off) and high directivity at the high-frequency range (the smaller the port spacing, the higher the high-frequency directivity index).
Lexis has three user-switchable directionality modes: omni-directional, focus, and superfocus. The superfocus mode has a narrower sensitive beam to the front than the focus mode. The AI-DI is reported to be 8.5 dB in the superfocus mode and 5.9 dB in the focus mode. Lexis has a relatively flat frequency response from 600 Hz to 5000 Hz (Oticon, 2004b). During one-on-one communication or listening, the hand-held unit can be worn around the neck of the talker like the microphone and transmitter unit of other FM systems.
3.1.5.1. Verification and Limitations
Clinical trials of Link.It (Christensen et al., 2002) and studies carried out during its developmental stages (Bilsen et al., 1993; Soede et al., 1993) reported a 7 to 10 dB SNR improvement for people with hearing impairment in noisy, reverberant environments.
Oticon (2004b) conducted a “just-follow-the-conversation” test in a laboratory setting. During the test, speech was fixed at 65 dB SPL and subjects were asked to adjust the level of the noise so that they could understand 50% of the information. The results indicated that five subjects with moderate-to-profound hearing loss obtained 5.6 and 8.7 dB of directional benefit for the focus and superfocus modes relative to the omni-directional mode, respectively. An interesting observation is that when Lexis is used in the hand-held position, the omni-directional mode of Lexis is about 4 dB better than the omni-directional mode of the subjects’ own hearing aids because of the body baffle effect. Significant improvement was also reported in all subtests of the Abbreviated Profile of Hearing Aid Benefit (APHAB, Cox and Alexander 1995) when the subjects’ own hearing aids were compared with the superfocus mode of Lexis.
Super-directionality is a double-edged sword. On one hand, a super-directional device has very high sensitivity to sounds from a very narrow beam to the front and can reduce background noise significantly. This feature is especially useful for listening to a talker located in a fixed direction or moving along a predictable path. On the other hand, if several talkers are participating in a discussion or conversation, say at a round table, it may be extremely hard for the user to zoom in on the correct talker because the beam is so narrow. If the talker is not within the sensitive beam of the directional microphone, the user has to rely on visual cues to locate the talker and then point the hand-held unit toward the talker. The user may miss the first several words whenever the talker changes. In such a case, the focus mode may be more appropriate because it has a wider sensitivity beam than the superfocus mode. The drawback is that it is less directional, and thus its noise reduction ability is less than that of the superfocus mode. Another caution when using Lexis is that highly directional devices may reduce the user's ability to hear warning sounds coming from directions with low microphone sensitivity, such as the sides.
It is worth noting that although array microphones can provide up to 7 to 8 dB of improvement in SNR, FM systems have been shown to provide 10 to 20 dB of improvement (Crandell and Smaldino, 2001; Lewis et al., 2004). FM systems can remarkably improve the SNR because the microphone is usually located near the mouth of the talker, thus significantly reducing the effects of reverberation, distance, and noise. Therefore, in situations where the voice of one talker is desired (e.g., one-on-one conversation in classrooms or lecture halls), the use of FM systems or array microphones configured to function as FM systems (i.e., the hand-held unit of Lexis worn around the talker’s neck) is recommended.
3.1.6. General Remarks
Directional microphones and array microphones have made significant advances in the past several years. With all of the advances in directional hearing aids, counseling is essential. Clinicians need to be knowledgeable about the benefits and the limitations of the hearing aids with directional microphones and counsel the hearing aid users accordingly. Hearing aid users need to be informed of what to expect from their hearing aids and how to obtain the maximum benefit from different directional products. Topics for additional discussions with users also need to include how to position themselves to receive maximum directional benefit, how much low-frequency equalization is appropriate and acceptable, how to get used to directional microphones, when to switch between directional and omni-directional modes, and when is it appropriate to deactivate automatic signal processing options, among others.
3.2. Noise-Reduction Strategy No. 2: Noise-Reduction Algorithms
Whereas directional microphones are designed to take advantage of the spatial separation between speech and noise, noise reduction algorithms are designed to take advantage of the temporal separation and spectral differences between speech and noise. The ultimate goals of noise reduction algorithms are to increase listening comfort and speech intelligibility. Noise reduction algorithms differ from speech enhancement algorithms in that noise reduction algorithms aim to reduce noise interference, whereas speech enhancement algorithms are designed to enhance the contrast between vowels and consonants (Bunnel, 1990; Cheng and O'Shaughnessy, 1991). Most high-performance hearing aids have some type of noise reduction algorithm, whereas only a few (e.g., GNReSound Canta) have speech-enhancement algorithms. The following discussion concentrates on the mechanisms and features of noise reduction algorithms.
All noise reduction algorithms are proprietary to the hearing aid manufacturers. They have different signal detection methods, decision rules, and time constants. The only common feature among these algorithms is the detection of modulation in the incoming signal to infer the presence or absence of the speech signal and to estimate the SNR in the microphone output.
Speech has a modulation rate centered at 4 to 6 Hz. Noise in most listening environments has either a constant temporal characteristic or a modulation rate outside the range of speech. Further, speech exhibits co-modulation, another type of modulation that is generated by the opening and closing of the vocal folds during the voicing of vowels and voiced consonants (Rosen, 1992). The rate of co-modulation is the fundamental frequency of the person's voice.
Depending on the type of modulation detection used, noise reduction algorithms are divided into two categories: multichannel adaptive noise reduction algorithms that detect the slow modulation in speech, and synchrony-detection noise reduction algorithms that detect the co-modulation in speech.
3.2.1. Multichannel Adaptive Noise-Reduction Algorithms
Most of the noise reduction algorithms in commercial hearing aids use the multichannel adaptive noise reduction strategy. These algorithms are intended to reduce noise interference in frequency channels dominated by noise. In theory, multichannel adaptive noise reduction algorithms are most effective when there are spectral differences between speech and noise. The major limitation of these noise reduction algorithms is that they cannot differentiate between the desired signal and the unwanted noise if speech is the competing noise. Table 2 summarizes the characteristics of noise reduction algorithms implemented in some hearing aids.
Table 2.

| | Oticon Syncro | GNReSound Canta | Sonic Innovations Natura | Siemens Triano | Widex Diva |
|---|---|---|---|---|---|
| No. of channels | 8 | 14 | 9 | 16 | 15 |
| Type of noise reduction | Synchrony detection + multichannel adaptive | Multichannel adaptive | Multichannel adaptive | Multichannel adaptive | Multichannel adaptive |
| Signal detection and analysis | Modulation detector to detect the modulation in the envelope of the incoming signal in each frequency channel | Modulation detector (per channel); maxima modulation detector to follow the maxima in the input signal (attempts to reduce noise in running speech without reducing audibility); minima modulation detector to follow the minima in the input signal (provides the baseline for determining the modulation and estimates the level of noise) | Modulation detector (per channel); noise detector to estimate the steady-state noise based on modulation rate (the target modulation rate changes with frequency channel); SNR calculation based on the noise estimate vs. the amplitude of the entire signal | Modulation detector (per channel); modulation detection block to determine the modulation rate | Modulation detector (per channel); signal detector to detect the intensity pattern of the incoming signal in a 30–60-sec window within a frequency channel; signal detector to monitor the spectral-intensity-temporal patterns of the incoming signal across frequency channels; level detector to estimate the sound pressure level in each channel |
| Decision rules | | | | | |
| Adaptation speed / speed of gain reduction | | | The noise detector is a sliding 1.2-sec calculation; gain changes are based on the estimated SNR. Speed of gain reduction equals the attack time of the compression system (i.e., between 2 and 50 ms across frequency channels) | Initial gain reduction within 2 sec; maximum gain reduction achieved within 6–8 sec | 5 sec for a 10-dB gain change |
| Release speed / speed of gain recovery | | | Equals the release time of the compression system (i.e., between 2 and 50 ms across frequency channels) | Less than 1 sec | 0.5 sec |
| Information source(s) | Oticon, 2004; Flynn, 2004, personal communication | Groth, 2004, personal communication; Smriga and Groth, 1999 | Nilsson, personal communication; US Patent 06757395; Johns et al., 2002 | Powers, 2004, personal communication | Kuk et al., 2002b; Kuk, 2004, personal communication |
| Clinical verification | Unavailable | Unavailable | Bray and Nilsson, 2000; Bray et al., 2002; Johns et al., 2002; Galster and Ricketts, 2004: SNR improvement of 1–1.8 dB | Unavailable | Unavailable |

These hearing aids were selected to demonstrate the range and the differences in implementation of noise reduction algorithms in commercially available hearing aids. AGC = automatic gain control; SNR = signal-to-noise ratio.
3.2.1.1. How They Work
a. Signal Detection and Analysis
The first and foremost action of a multichannel adaptive noise reduction algorithm is the classification of speech and noise in the incoming signal. Noise-reduction algorithms may monitor one or several aspects of the incoming signal for characteristics that resemble speech or noise, or both. Multichannel adaptive noise reduction algorithms use speech detection strategies similar to those of the adaptive directional microphone algorithms. They have detectors to estimate the modulation rate and the modulation depth within each frequency channel to infer the presence of speech, noise, or both, and the SNR within the frequency channel (Boymans and Dreschler, 2000; Van Dijkhuizen et al., 1991; Edwards, 1998; Fang and Nilsson, 2004; Mueller, 2002; Powers and Hamacher, 2002; Walden et al., 2000).
Some noise reduction algorithms may also detect other dimensions of the incoming signal, such as the intensity-modulation-temporal changes within each frequency channel (Tellier et al., 2003) or the spectral-intensity-temporal patterns of the incoming signal across frequency channels (Kuk et al., 2002b). For example, Widex Diva detects the modulation, the intensity patterns of the incoming signal in each channel, and the spectral-intensity-temporal patterns across frequency channels (Kuk et al., 2002b). The intensity distribution of the signal is monitored over 10- to 15-second periods in each frequency channel. The assumptions are that the level of noise is relatively stable within and across frequency channels, whereas the level of speech varies rapidly within and across frequency channels.
Another important task of the signal detection and analysis unit is to estimate the SNR within each frequency channel. As mentioned in the section on adaptive directional microphones, the estimation of the SNR is usually accomplished by calculating the modulation depth of the incoming signals with a modulation rate resembling speech. If the modulation depth is high, say 30 dB, the signal detection and analysis unit assumes that the SNR is high in the frequency channel and that speech is the dominant signal in the frequency channel. If the modulation is moderate or low, the unit assumes that the SNR is moderate or low in the frequency channel. The actual implementation of the signal detection and analysis unit in hearing aids from different manufacturers or among different models may vary. Table 2 summarizes simplified versions of the signal detection mechanisms used by different hearing aid manufacturers.
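The maxima/minima-tracking scheme listed in Table 2 can be sketched as follows. The frame size and tracker coefficients are hypothetical; the structure (a fast-rising/slow-falling maxima follower and a slow-rising/fast-falling minima follower whose ratio gives the modulation depth) follows the descriptions above.

```python
import numpy as np

def modulation_depth_db(x, fs, frame_ms=8.0):
    """Estimate envelope modulation depth with maxima/minima trackers."""
    hop = max(1, int(fs * frame_ms / 1000.0))
    frames = x[: len(x) // hop * hop].reshape(-1, hop)
    env = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12   # frame RMS envelope
    fast, slow = 0.5, 0.999                             # hypothetical coefficients
    mx = mn = env[0]
    mx_sum = mn_sum = 0.0
    for e in env:
        mx = fast * e + (1 - fast) * mx if e > mx else slow * mx + (1 - slow) * e
        mn = slow * mn + (1 - slow) * e if e > mn else fast * e + (1 - fast) * mn
        mx_sum += mx
        mn_sum += mn
    # high depth (e.g., ~30 dB) suggests clean speech; near 0 dB, steady noise
    return 20.0 * np.log10(mx_sum / mn_sum)
```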
b. Decision Rules
The decision rules of a noise reduction algorithm may depend on several factors. The most common of these include the estimated modulation depth/SNR, frequency importance function, the level of the incoming signal, and the degree of noise reduction selected in the hearing aid fitting software. The amount of gain reduction in each channel is usually inversely proportional to the SNR estimated in the frequency channel (Kuk et al., 2002b; Powers and Hamacher, 2002; Johns et al., 2002; Edwards et al., 1998; Latzel, Kiessling and Margolf-Hackl, 2003; Schum, 2003; Walden et al., 2000).
This approach is based on the rationale that if the signal-detection and analysis unit estimates a high SNR in a frequency channel, the algorithm assumes that speech-in-quiet is detected in that channel and the action unit should let the signal pass without attenuation. If the unit estimates a moderate or low SNR in the frequency channel, the algorithm assumes that either speech coexists with noise or noise dominates the channel; thus, the gain of the frequency channel should be reduced to decrease the noise interference. When no modulation is detected in a frequency channel, the analysis unit assumes that no speech is present in that channel and maximum attenuation should be applied.
The modulation depth at which gain reduction starts to be applied in a frequency channel is sometimes called the “modulation threshold for noise reduction activation” (Groth, 2004, personal communication). The modulation threshold for noise reduction activation and the exact amount of gain reduction applied at different SNRs differ among the noise reduction algorithms. Figure 9 illustrates the relationship between the estimated SNR and the amount of gain reduction in two commercially available digital hearing aids.
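A minimal version of such a mapping, with a hypothetical 6-dB activation threshold and 12-dB maximum reduction (actual curves, as Figure 9 shows, differ across products), is:

```python
def gain_reduction_db(est_snr_db, activation_snr_db=6.0, max_reduction_db=12.0):
    """Reduction grows as the estimated channel SNR falls below the
    activation threshold, up to a fixed cap (all values hypothetical)."""
    return max(0.0, min(max_reduction_db, activation_snr_db - est_snr_db))
```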
Another common consideration in the decision rules of the multichannel adaptive noise reduction algorithms is the frequency-importance weighting of the frequency region for speech understanding. One approach is to set the amount of gain reduction inversely proportional to the articulation index of the frequency region (Kuk et al., 2002b; Alcantara et al., 2003; Boymans and Dreschler, 2000). The assumption is that as the articulation-index weighting increases, the importance of the frequency channel for speech understanding also increases; therefore, less gain reduction should be applied to these frequency channels (Kuk et al., 2002b; Oticon, 2004a).
Other manufacturers may also use different sets of gain reduction rules to account for the importance of speech information in the frequency channel (Alcantara et al., 2003; Tellier et al., 2003). For example, Phonak Claro only reduces the gain at frequency channels below 1 kHz and above 2 kHz. The rationale is that frequencies between 1 and 2 kHz are very important for speech understanding; therefore, the gain is not reduced regardless of the modulation depth of the incoming signal at those frequencies (Alcantara et al., 2003). Another form of frequency-dependent gain reduction is found in Siemens’ Prisma, in which the amount of maximum gain reduction in a frequency channel can be programmed by the clinician in the hearing aid fitting software (Powers et al., 1999).
In addition to the modulation depth and the importance of speech content in the frequency channel, some manufacturers may add another dimension to their gain reduction decision rules: the sound pressure level of the incoming signal or the sound pressure level of the noise. For example, if a particular modulation depth is detected within a frequency channel, the Widex Diva starts to reduce the gain of the frequency channel only if the input level exceeds 50 to 60 dB. The amount of reduction also increases as the level of the incoming signal increases. If the modulation depth is high and the level is low, no gain reduction is applied (Kuk et al., 2002b). The assumptions are that noise reduction is not needed in quiet or in environments with low levels of noise and a higher amount of noise reduction is needed in a noisier environment.
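Combining the factors discussed so far (estimated SNR, the channel's frequency-importance weight, and input level) gives a decision rule like the sketch below; every numeric value is hypothetical, and real products weight these factors differently.

```python
def channel_reduction_db(est_snr_db, input_db_spl, ai_weight,
                         level_floor_db_spl=55.0,
                         activation_snr_db=6.0, max_reduction_db=12.0):
    """ai_weight: 0..1 articulation-index importance of the channel."""
    if input_db_spl < level_floor_db_spl:            # quiet scene: no reduction
        return 0.0
    snr_term = max(0.0, min(max_reduction_db, activation_snr_db - est_snr_db))
    return snr_term * (1.0 - ai_weight)              # spare important channels
```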
Many multichannel adaptive noise reduction algorithms allow the clinician to choose the degree of noise reduction in the fitting software. As the degree of noise reduction increases, the maximum gain reduction also increases. This maximum gain reduction is usually applied across frequency channels without affecting the frequency weighting of the particular channel (Tellier et al., 2003).
c. Execution of Gain Reduction
After the noise reduction algorithm “determines” that a certain amount of gain reduction is needed for a given frequency channel, the gain reduction is carried out. In this final stage of the noise reduction signal processing, the time constants for action are crucial factors that determine the effectiveness of the noise reduction algorithm and the amount of artifact, if any, generated. Four different time constants govern the multichannel adaptive noise reduction algorithms:
the engaging/adaptation/attack time (i.e., the time between the noise reduction algorithm detecting the presence of noise in a frequency channel and the time that the gain of the frequency channel starts to be reduced);
the speed of gain reduction (i.e., the time between the beginning of the gain reduction and the maximum gain reduction);
the disengaging/release time (i.e., the time between the noise reduction algorithm detecting the absence of noise in a frequency channel and the time that the gain of the frequency channel starts to recover); and
the speed of gain recovery (i.e., the time between the starting of the gain recovery and 0 dB gain reduction).
The determination of the appropriate time constants is an art as well as a science. If a noise reduction algorithm has very fast attack and release time constants or very fast gain reduction or recovery times, it may treat transient speech components such as stop or fricative consonants as noise and suppress them. This may result in reduced speech intelligibility or create other artifacts. On the other hand, if a noise reduction algorithm has very slow time constants or speed of action, the algorithm may not respond to sudden changes in the environment and brief noises may not be detected (Tellier et al., 2003).
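One way to realize these four time constants is to slew the per-channel reduction toward its target at separate rates, as in the sketch below; the rates are hypothetical but lie within the ranges reported in Table 2, and the attack/release hold-off timers are omitted for brevity.

```python
import numpy as np

def slew_gain_reduction(target_db, frame_rate_hz,
                        reduce_db_per_s=5.0, recover_db_per_s=20.0):
    """Move the applied gain reduction toward its frame-by-frame target,
    limiting the speed of reduction and of recovery separately."""
    out = np.empty(len(target_db))
    state = 0.0
    max_down = reduce_db_per_s / frame_rate_hz   # dB of extra reduction per frame
    max_up = recover_db_per_s / frame_rate_hz    # dB of recovery per frame
    for n, t in enumerate(target_db):
        state += float(np.clip(t - state, -max_up, max_down))
        out[n] = state
    return out
```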
Table 2 summarizes the time constants of the noise reduction algorithms implemented in different hearing aids. Notice that some algorithms use the same time constants as the compression system in the hearing aid, whereas others may have different time constants for the two systems.
3.2.1.2. Verification and Limitations
a. Evaluation of Noise Reduction Algorithms
Multichannel adaptive noise reduction algorithms have been evaluated for their effectiveness in improving speech understanding and perceived sound quality of hearing aid users. Many research studies reported that the noise reduction algorithms implemented in hearing aids increased subjective listening comfort, naturalness of speech, sound quality, and/or listening preference in background noise (Boymans et al., 1999; Boymans and Dreschler, 2000; Bray and Nilsson, 2001; Levitt, 2001; Mueller, 2002; Valente et al., 1998; Walden et al., 2000). A few studies reported no benefits on sound quality ratings (Alcantara et al., 2003).
In theory, multichannel adaptive noise reduction algorithms work best when there are spectral differences between speech and noise. If noise exists only in a very narrow frequency region, the multichannel adaptive noise reduction algorithm can reduce the gain of the hearing aid in that particular region without affecting the speech components in other frequency regions. Lurquin and colleagues (2001) reported that the noise reduction algorithm of the Phonak Claro, a 20-channel digital hearing aid, increased speech understanding in octave band noises centered at 250 Hz or 500 Hz. However, Alcantara and colleagues (2003) tested the same hearing aid and reported no significant improvement in speech understanding in car noise, a noise with a much wider bandwidth than the low-frequency octave band noises.
Most studies on noise reduction algorithms did not report any benefit for speech understanding in broadband noises, such as car noise or speech spectrum noise (Alcantara et al., 2003; Boymans and Dreschler, 2000; Ricketts and Dhar, 1999; Walden et al., 2000). The reason is that if the noise reduction algorithm reduces the gain in frequency channels dominated by noise, it also reduces the audibility of speech information in those channels; thus, the user's speech understanding is not enhanced. Nevertheless, some studies conducted by researchers at Sonic Innovations and by independent researchers reported that Natura hearing aids improved the SNR-50 of subjects with hearing loss by 1 to 1.8 dB (Bray and Nilsson, 2000; Bray et al., 2002; Johns et al., 2002; Galster and Ricketts, 2004). Chung and colleagues (2004) also observed an improvement in speech understanding scores in cochlear implant users when Natura was switched from the directional microphone mode to the directional microphone plus noise reduction mode.
Some studies investigated the combined effect of directional microphones and multichannel adaptive noise reduction algorithms (Ricketts and Dhar, 1999; Boymans and Dreschler, 2000; Walden et al., 2000). The results indicated no additional benefit from the noise reduction algorithms implemented in various hearing aids when assessing speech understanding in noise.
It should be noted that the benefits of the noise reduction algorithms, if any, on speech understanding or listening comfort are observed in steady-state noise (e.g., speech spectrum noise, narrow-band noise) but not in noise that has the modulation patterns of a speech signal (e.g., a single-talker competing signal, speech babble, or the International Collegium for Rehabilitative Audiology noise). This is because multichannel adaptive noise reduction algorithms rely heavily on the detection of modulation to infer the presence of speech. If the competing noise has modulation and acoustic patterns similar to those of the desired speech, the noise reduction algorithm cannot differentiate between the two. In general, the larger the differences in acoustic characteristics between speech and noise, the more effective the noise reduction algorithm (Levitt, 2001).
b. Interaction Between Multichannel Adaptive Noise Reduction Algorithm and Wide Dynamic Range Compression
A caution for fitting hearing aids with wide dynamic range compression and noise reduction is that wide dynamic range compression may reduce the effectiveness of noise reduction algorithms. Interactions between wide dynamic range compression and the noise reduction algorithm can arise when these two signal processing units are implemented in series. Specifically, interactions exist if the level detector of the compression system uses the output of the noise reduction algorithm to decide the amount of gain that is applied to the signal, or if the noise reduction algorithm uses the output of the compression system to make decisions about noise level and modulation.
To illustrate the interactions, Figure 10A displays the amplitude envelope of two sentences presented in a diffuse sound field at an SNR of 3 dB. Figures 10B and 10C show the same sentences processed by a digital hearing aid with its noise reduction algorithm activated and the compression system programmed to linear and 3:1 compression, respectively. The compression system has fast time constants and was implemented in series with the noise reduction algorithm in the signal processing path. The frequency responses in the compression and linear modes were matched at the presentation level. Compared with the envelope of the sentence processed in the linear mode, the envelope of the sentence processed in the compression mode exhibits a lower modulation amplitude. The amplitude of the noise between the sentences was also higher in the compression mode than in the linear mode. The combination of lower modulation depth and higher noise level indicates that the 3:1 compression reduced the modulation depth, and thus the SNR, of the processed signal. The noise level increases because wide dynamic range compression provides more gain for soft sounds (i.e., the noise in this case) and less gain for loud sounds (i.e., the speech in this case).
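The arithmetic behind this effect can be sketched with a static compression rule (the threshold and input levels below are hypothetical; real compressors also have attack and release dynamics):

```python
import numpy as np

def wdrc_gain_db(level_db, threshold_db=45.0, ratio=3.0):
    """Static 3:1 compression rule (illustrative): above threshold,
    every `ratio` dB of input growth yields only 1 dB at the output."""
    excess = np.maximum(level_db - threshold_db, 0.0)
    return -excess * (1.0 - 1.0 / ratio)

speech_db, noise_db = 75.0, 60.0                  # hypothetical levels
out_speech = speech_db + wdrc_gain_db(speech_db)  # 55 dB
out_noise = noise_db + wdrc_gain_db(noise_db)     # 50 dB
print(speech_db - noise_db)    # 15 dB envelope contrast at the input
print(out_speech - out_noise)  # 5 dB at the output: contrast / ratio
```

In this sketch, a 15 dB speech-to-noise envelope contrast at the input shrinks to 5 dB at the output, mirroring the reduced modulation depth seen in Figure 10C.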
To date, few research studies have investigated the perceptual effects of the interactions between wide dynamic range compression and noise reduction algorithms on speech understanding and sound quality. Research studies are also needed to explore whether any interaction exists between noise reduction algorithms and compression systems when the two systems are implemented in parallel in the signal processing path.
In a parallel implementation, by contrast, the signal detectors of both the noise reduction algorithm and the compression system analyze the signal at the microphone output and make their decisions independently. Clinicians need to keep in mind that the interaction between the noise reduction algorithm and a wide dynamic range compression system with a high compression ratio might reduce the effectiveness of the noise reduction algorithm.
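The difference between the two topologies can be sketched as follows, using deliberately crude toy detectors (the gain rules and thresholds are invented for illustration):

```python
import numpy as np

def nr_gain(sig, fs=16000, frame_ms=125):
    """Toy noise-reduction gain: full gain only when the envelope is
    strongly modulated (speech-like); otherwise attenuate."""
    n = int(frame_ms / 1000 * fs)
    env = np.abs(sig[: len(sig) // n * n]).reshape(-1, n).mean(axis=1)
    peak, valley = np.percentile(env, [95, 5])
    depth_db = 20 * np.log10((peak + 1e-12) / (valley + 1e-12))
    return 1.0 if depth_db > 6.0 else 0.5

def wdrc_gain(sig, ref_rms=0.1, ratio=3.0):
    """Toy compressor: more gain for softer signals (3:1 static rule)."""
    rms = np.sqrt(np.mean(sig ** 2)) + 1e-12
    return (ref_rms / rms) ** (1.0 - 1.0 / ratio)

x = 0.05 * np.random.randn(16000)          # steady noise input

# Series: the compressor's detector sees the NR output. Because NR made
# the signal softer, the compressor adds gain back -- the interaction
# described above partially undoes the noise reduction.
y_series = x * nr_gain(x)
y_series = y_series * wdrc_gain(y_series)

# Parallel: both detectors analyze the microphone signal x directly and
# the two gains are combined once, so neither decision biases the other.
y_parallel = x * (nr_gain(x) * wdrc_gain(x))
```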
c. Number of Channels and Processing Delay
In theory, multichannel hearing aids with more channels are better choices than those with only a few channels for the application of multichannel adaptive noise reduction algorithms (Edwards, 2000; Mueller, 2002; Kuk et al., 2002b). If a hearing aid has only two or three channels and the noise reduction algorithm decides to turn down the gain of one or two channels, the gain over a large proportion of the speech spectrum is also reduced. On the other hand, a hearing aid with nine or ten channels can provide finer tuning in the noise reduction process. The negative effect on the overall speech spectrum is much smaller when the gain in only one or two channels out of nine is reduced.
In practice, digital hearing aids with many channels may have longer processing delays than analog hearing aids or digital hearing aids with fewer channels (Dillon et al., 2003; Henrickson and Frye, 2003; Stone and Moore, 1999). Processing delay, sometimes referred to as group delay or time delay, is the time between the entry of an acoustic signal at the microphone and the exit of the same signal from the receiver. A processing delay of 6 to 8 milliseconds can be noticeable to some listeners (Agnew, 1997). A delay of 10 milliseconds is likely to be annoying to most hearing aid users because an echoing effect may be created. This echoing effect can be caused by two types of mismatch: (1) a mismatch between the bone-conducted and the air-conducted signals during speech production, and (2) a mismatch between the hearing aid-processed sound and the direct sound entering the ear canal via the vent while the hearing aid user is listening to others (Agnew and Thornton, 2000; Stone and Moore, 2002; Stone and Moore, 2003).
Several researchers measured the processing delay of some commercially available digital hearing aids and reported delays of 1.1 to 11.2 milliseconds (Dillon et al., 2003; Henrickson and Frye, 2003). It is possible that the long processing delay of some commercial hearing aids with a high number of frequency channels may be rated as objectionable by some hearing aid users. In addition, a previous research study suggested that hearing aid users with less hearing loss or good low-frequency hearing are more likely to detect processing delays, and to find shorter delays objectionable, than are hearing aid users with more severe hearing loss (Stone and Moore, 1999).
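For reference, the overall delay can be estimated by cross-correlating the signal entering the microphone with the signal leaving the receiver, which is, in principle, what hearing aid analyzers do. A minimal sketch, using a simulated 6-millisecond device delay:

```python
import numpy as np

def processing_delay_ms(mic_in, aid_out, fs):
    """Estimate overall processing delay as the lag that maximizes the
    cross-correlation between microphone input and receiver output."""
    corr = np.correlate(aid_out, mic_in, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_in) - 1)
    return 1000.0 * lag / fs

fs = 44100
x = np.random.randn(fs)                  # 1-second probe signal
d = int(0.006 * fs)                      # simulate a 6-ms device delay
y = np.concatenate([np.zeros(d), x])[: len(x)]
print(processing_delay_ms(x, y, fs))     # approximately 6.0 ms
```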
Another form of processing delay is the across-frequency processing delay, which is the relative processing delay among the frequency channels of a hearing aid. The low-frequency channels may have a longer processing delay than the high-frequency channels. Research showed that an across-channel processing delay of 15 milliseconds could significantly reduce nonsense vowel-consonant-vowel identification (Stone and Moore, 2003). Stone and Moore (2002, 2003) also showed that an overall processing delay of 8 to 10 milliseconds was preferable to the same amount of across-frequency delay. Fortunately, most digital hearing aids have across-channel processing delays below these objectionable values (Dillon et al., 2003).
In clinical practice, clinicians need to test the processing delay of digital hearing aids and choose hearing aids that balance signal processing complexity against the amount of processing delay. A short processing delay is especially important for users with good low-frequency hearing or when hearing aids with large vents are used. In addition, different brands and models of hearing aids may have different amounts of processing delay.
Extra care must be taken during binaural hearing aid fitting. The processing delay and the phase relationship of the two hearing aids must be matched for good localization ability and for the avoidance of objectionable echoing effects due to the differences in processing delay in the two hearing aids (Henrickson and Frye, 2003). This measurement can be made with the AudioScan or Frye Hearing Aid Analyzer.
3.2.2. Synchrony-Detection Noise Reduction Algorithms
The second category of noise reduction algorithms detects the fast modulation of speech across frequency channels and takes advantage of the temporal separation between speech and noise. The rationale is that the energy of speech sounds is co-modulated by the opening and closing of the vocal folds during the voicing of vowels and voiced consonants (i.e., the fast modulation of speech). Noise, on the other hand, is rarely co-modulated.
The co-modulated nature of speech is revealed as the vertical striations in a spectrogram (Figure 11). The colored vertical stripes of the spectrogram depict the instances with higher energy content, such as when the vocal folds are open. The darker stripes show the instances with no energy emitted from the mouth, such as when the vocal folds are closed. These vertical striations thus indicate that speech contains periodic and synchronous energy bursts across the speech frequency spectrum. In other words, the speech components across the speech frequency spectrum are modulated by the opening and closing of the vocal folds at the same rate and at the same instant (i.e., co-modulated). The rate of co-modulation is the fundamental frequency of the human voice, which ranges from 100 to 250 Hz for adults and up to 500 Hz for children.
3.2.2.1. How It Works
The synchrony detection noise reduction algorithm makes use of the co-modulated/synchronous nature of speech sounds to detect the presence of speech (Elberling, 2002; Schum, 2003). The signal detection unit of the noise reduction algorithm constantly monitors the incoming signal in the high-frequency bands (i.e., the upper three bands in Adapto) for energy that is synchronous at the rate of the fundamental frequencies of human voices. According to Oticon, the signal detection unit is capable of detecting synchronous energy at SNRs down to −4 dB (Flynn, 2004, personal communication).
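A rough software analogue of such a detector is the band-envelope correlation sketched below. The real detector looks for energy that is synchronous at the fundamental-frequency rate, which this sketch only approximates, and the band edges are invented for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def synchrony_score(x, fs, band_edges=((1000, 2000), (2000, 4000), (4000, 7000))):
    """Mean pairwise correlation between high-frequency band envelopes.

    Voiced speech (glottal pulses) drives all bands together, so its
    band envelopes are highly correlated; most noises are not.
    """
    envs = []
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, x))))
    c = np.corrcoef(np.array(envs))
    k = len(band_edges)
    return (c.sum() - k) / (k * (k - 1))   # average off-diagonal entry
```

A higher score suggests voiced speech is present; a score near zero suggests noise alone.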
The synchrony detection noise reduction algorithm is implemented in Oticon Adapto hearing aids. If synchronous energy in the upper three frequency bands is not detected, the noise reduction algorithm assumes that no speech signal is present, and the noise reduction unit gradually reduces the overall gain by decreasing the gain at high input levels (i.e., the compression ratio is increased) in all frequency bands. When synchronous energy is detected, the hearing aid returns to its normal settings instantaneously and allows the signal to pass without attenuation. In other words, the detection of synchronous energy across the frequency bands deactivates the actions of the noise reduction algorithm (Schum, 2003; Bachler et al., 1995; Elberling, 2002).
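The asymmetric decision rule described above (slow onset of attenuation, instantaneous release on detecting speech) can be sketched as follows, with hypothetical ramp and maximum-attenuation values:

```python
def update_attenuation(sync_detected, atten_db,
                       max_atten_db=10.0, ramp_db_per_block=0.05):
    """Decision rule sketched from the description above: attenuation
    creeps up slowly while no synchronous energy is detected and is
    released instantaneously the moment speech synchrony reappears."""
    if sync_detected:
        return 0.0                               # instant release
    return min(atten_db + ramp_db_per_block, max_atten_db)

atten = 0.0
for sync in [False] * 40 + [True]:   # 40 noise-only blocks, then speech
    atten = update_attenuation(sync, atten)
print(atten)                          # 0.0: gain restored at speech onset
```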
3.2.2.2. Verifications and Limitations
The synchrony-detection noise reduction algorithm is designed to take advantage of the temporal separation between speech and noise: it acts at the instances when speech is absent and allows the signal to pass when speech is present. The goal of this algorithm is to increase listening comfort in the absence of speech signals. It does not, however, provide any benefit in listening comfort or speech understanding when speech and noise coexist or when speech is the competing signal. The synchrony-detection noise reduction algorithm is implemented solely in hearing aids by Oticon. No validation data are available on its effectiveness.
3.2.3. Combination of the Two Types of Noise Reduction Algorithms
Most of the commercially available hearing aids implement either a multichannel adaptive noise reduction algorithm (e.g., GNReSound Canta, Widex Diva, Phonak Perseo, Sonic Innovations Natura) or a synchrony detection noise reduction algorithm (i.e., Oticon Adapto). Oticon has recently launched Syncro, which incorporates a combination of the multichannel adaptive and the synchrony detection noise reduction algorithms.
3.2.3.1. How it Works
The noise reduction algorithm of Syncro has three detectors in the signal detection and analysis unit: a synchrony detector, a modulation detector, and a level detector (Table 2). The synchrony detector monitors the presence or the absence of synchronous energy across the upper four frequency channels to infer the presence or absence of speech in the incoming signal. The modulation detector monitors the modulation depth and the noise level of the incoming signal within each frequency channel. The noise level detector determines the noise level in the incoming signal.
The Syncro Optimization Equation integrates the information from these detectors and determines whether the incoming signal contains speech only, speech with noise, or noise only. It then examines the current instrument settings, calculates the output for each of the three states, and decides the amount of gain reduction that should be applied to each frequency channel to maximize the SNR at that particular instant (Table 2). In general, the amount of gain reduction applied to each frequency channel depends on the modulation depth, the noise level, and the articulation index weighting of the frequency channel (Figure 12; Flynn, 2004a; Oticon, 2004a).
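Because the actual optimization equation is proprietary, the sketch below only illustrates the general form described above: gain reduction per channel grows with the noise level, shrinks with modulation depth, and is scaled down in channels with high articulation index weight. All numbers are invented:

```python
import numpy as np

# Hypothetical articulation index weights for a 5-channel sketch.
AI_WEIGHT = np.array([0.05, 0.15, 0.30, 0.30, 0.20])

def channel_gain_reduction(mod_depth_db, noise_db, speech_detected,
                           max_reduction_db=12.0):
    """Toy stand-in for the optimization step: no reduction in the
    speech-only state; otherwise reduction grows with noise level and
    is scaled down in channels carrying more speech information."""
    if speech_detected and mod_depth_db > 10.0:      # speech-only state
        return np.zeros_like(AI_WEIGHT)
    base = np.clip((noise_db - 50.0) / 2.0, 0.0, max_reduction_db)
    return base * (1.0 - AI_WEIGHT)                  # spare key channels

print(channel_gain_reduction(mod_depth_db=4.0, noise_db=70.0,
                             speech_detected=False))
```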
3.2.3.2. Verification and Limitations
Syncro was recently introduced into the hearing aid market. No verification data on the effectiveness of its noise reduction algorithms are available.
3.2.4. Working with Noise Reduction Algorithms
First, although many noise reduction algorithms are reported to enhance listening comfort and sound quality in noise, clinicians need to be careful not to create unrealistically high expectations that these algorithms can enhance speech understanding in broadband noise, which is typical of many daily listening situations. Unrealistic expectations often lead to user dissatisfaction or disappointment.
Second, most manufacturers provide a choice of the degree of noise reduction. It should be noted that a higher degree of noise reduction (i.e., higher allowable maximum gain reduction) does not necessarily imply better sound quality or better speech understanding than the lower degree of noise reduction (Johns et al., 2002).
Third, noise reduction algorithms may misclassify music as noise because music generally exhibits a higher modulation rate than speech. Clinicians need to deactivate the noise reduction algorithm in music programs and advise hearing aid users to use these programs, without the noise reduction algorithm, to enhance music appreciation.
Fourth, clinicians need to choose appropriate test signals when checking the electroacoustic characteristics or making real ear measurements of hearing aids with noise reduction algorithms. Noise reduction algorithms may classify some conventional test signals, such as pure tones or composite noise, as noise and reduce the gain applied to the test signal. Thus, these test signals yield the gain/output frequency responses of the hearing aid with the noise reduction algorithm engaged. To obtain the frequency response of the hearing aid when the noise reduction algorithm is not engaged, clinicians can turn off the noise reduction feature. Yet this is not desirable because interactions among the signal processing algorithms may exist and alter the test results. To test the frequency response of the hearing aid with the noise reduction algorithm on but not engaged, the clinician needs to choose a test signal that is not classified as noise by the noise reduction algorithm (e.g., the digital speech noise in Frye Hearing Aid Analyzers and the filtered speech or ICRA noise in the AudioScan).
Acknowledgment
I sincerely thank Jennifer Groth at GNReSound and Drs David Fabry at Phonak, Mark Flynn at Oticon, Francis Kuk at Widex, Michael Nilsson at Sonic Innovations, and Volkmar Hamacher and Tom Powers at Siemens Hearing for providing technical information on their hearing products and for checking the accuracy of the tables. I am very grateful to Dr Brent Edwards at Starkey and Steve Thompson at Knowles Electronics for explaining the properties of microphones and the mechanisms of wind noise detectors. I would also like to thank Drs Robert Novak and Jennifer Simpson, Jessica Daw, and the Doctor of Audiology students in the Department of Speech, Language and Hearing Sciences at Purdue University for their editorial help.
References
- Agnew J. (1997). An overview of digital signal processing in hearing instruments. Hear Rev 4 (7): 8, 12, 16, 18, 66
- Agnew J, Block M. (1997). HINT thresholds for dual-microphone BTE. Hear Rev 4 (26): 29–30
- Agnew J, Thornton JM. (2000). Just noticeable and objectionable group delays in digital hearing aids. J Am Acad Audiol 11 (6): 330–336
- American Academy of Audiology (2003). Pediatric Amplification Protocol. http://www.audiology.org/professional/positions/pedamp.pdf. Last accessed Dec 2, 2004
- Amlani AM. (2001). Efficacy of directional microphone hearing aids: A meta-analytic perspective. J Am Acad Audiol 12 (4): 202–214
- Alcantara JI, Moore BCJ, Kuhnel V, et al. (2003). Evaluation of the noise reduction system in a commercial digital hearing aid. Int J Audiol 42 (1): 34–42
- Bachler H, Knecht WG, Launer S, et al. (1995). Audibility, intelligibility, sound quality and comfort. High Perform Hear Sol 2: 31–36
- Baer T, Moore BCJ. (1994). Effects of spectral smearing on the intelligibility of sentences in the presence of interfering speech. J Acoust Soc Am 95: 2277–2280
- Beck L. (1983). Assessment of directional hearing aid characteristics. Audiol Acoust 22: 187–191
- Bentler RA, Palmer C, Dittberner AB. (2004a). Hearing-in-noise: Comparisons of listeners with normal and (aided) impaired hearing. J Am Acad Audiol 15 (3): 216–225
- Bentler RA, Tubbs JL, Egge JLM, et al. (2004b). Evaluation of an adaptive directional system in a DSP hearing aid. Am J Audiol 13 (1): 73–79
- Beranek LL. (1954). Acoustics. McGraw-Hill Electrical and Electronic Engineering Series. New York: McGraw Hill
- Bilsen FA, Soede W, Berkhout AJ. (1993). Development and assessment of two fixed-array microphones for use with hearing aids. J Rehab Res Develop 30 (1): 73–81
- Bohnert A, Brantzen P. (2004). Experiences when fitting children with a digital directional hearing aid. Hear Rev 11 (2): 50, 52, 54–55
- Boymans M, Dreschler WA. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology 39: 260–268
- Boymans M, Dreschler W, Schoneveld P, et al. (1999). Clinical evaluation of a full-digital in-the-ear hearing instrument. Audiology 38 (2): 99–108
- Bray V, Nilsson M. (2001). Additive SNR benefits of signal processing features in a directional DSP aid. Hear Rev 8 (12): 48–51, 62
- Buerkli-Halevy O. (1987). The directional microphone advantage. Hearing Instruments 38 (8): 34–38
- Bunnell HT. (1990). On enhancement of spectral contrast in speech for hearing-impaired listeners. J Acoust Soc Am 88 (6): 2546–56
- Cheng YM, O'Shaughnessy D. (1991). Speech enhancement based conceptually on auditory evidence. IEEE Trans Signal Processing 39: 1943–1954
- Christensen LA, Helmink D, Soede W, et al. (2002). Complaints about hearing in noise: a new answer. Hear Rev 9 (6): 34–36
- Chung K, Zeng F-G, Waltzman S. (2004). Utilizing hearing aid directional microphones and noise reduction algorithms to improve speech understanding and listening preferences for cochlear implant users. Proceedings of the 8th International Cochlear Implant Conference. International Congress Series, Vol 1273, Nov. 2004, pp. 89–92
- Condie RK, Scollie SD, Checkley P. (2002). Children's performance: analog vs digital adaptive dual microphone instruments. Hear Rev 9 (6): 40–43, 56
- Cord MT, Surr RK, Walden BE, et al. (2002). Performance of directional microphone hearing aids in everyday life. J Am Acad Audiol 13 (6): 295–307
- Cord MT, Surr RK, Walden BE, et al. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. J Am Acad Audiol 15: 353–364
- Cox RM, Alexander GC. (1995). The abbreviated profile of hearing aid benefit. Ear Hear 16 (2): 176–86
- Crandell CC, Smaldino J. (2001). Improving classroom acoustics: utilizing hearing assistive technology and communication strategies in the educational setting. Volta Review 101: 47–62
- Dillon H, Keidser G, O'Brien A, et al. (2003). Sound quality comparisons of advanced hearing aids. Hear J 56 (4): 30–40
- Dirks DD, Morgan DE, Dubno JR. (1982). A procedure for quantifying the effects of noise on speech recognition. J Speech Hear Disord 47: 114–123
- Dittberner AB. (2003). Page Ten: What's new in directional-microphone systems? Hear J 56 (10): 14–18
- Duquesnoy AJ. (1983). Effect of a single interfering noise or speech source on the binaural sentence intelligibility of aged persons. J Acoust Soc Am 74: 739–743
- Edwards BW. (2000). Beyond amplification: Signal processing techniques for improving speech intelligibility in noise with hearing aids. Semin Hear 21 (2): 137–156
- Edwards BW, Struck CJ, Dharan P, Hou Z. (1998). New digital processor for hearing loss compensation based on the auditory system. Hear J 51 (8): 38–49
- Eisenberg LS, Dirks DD, Bell TS. (1995). Speech recognition in amplitude-modulated noise of listeners with normal and listeners with impaired hearing. J Speech Hear Res 38 (1): 222–233
- Elberling C. (2002). About the VoiceFinder. News from Oticon, January
- Etymotic Research. (1996). FIG6 Hearing Aid Fitting Protocol. Operating manual. Elk Grove Village, IL: Etymotic Research
- Fang X, Nilsson MJ. (2004). Noise reduction apparatus and method. US Patent No. 6,757,395 B1
- Festen JM, Plomp R. (1990). Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. J Acoust Soc Am 88: 1725–1736
- Flynn M. (2004a). Maximizing the voice-to-noise ratio (VNR) via voice priority processing. Hear Rev 11 (4): 54–59
- Flynn M. (2004b). Clinical evidence for the benefits of Oticon Syncro. Oticon Syncro White Paper
- Fortune TW. (1997). Real-ear polar patterns and aided directional sensitivity. J Am Acad Audiol 8: 119–131
- Gravel J, Fausel N, Liskow C, Chobot J. (1999). Children's speech recognition in noise using omni-directional and dual-microphone hearing aid technology. Ear Hear 20 (1): 1–11
- Hawkins DB, Yacullo WS. (1984). Signal-to-noise ratio advantage of binaural hearing aids and directional microphones under different levels of reverberation. J Speech Hear Disord 49: 278–286
- Hellgren J, Lunner T, Arlinger S. (1999). Variations in the feedback of hearing aids. J Acoust Soc Am 106 (5): 2821–2833
- Henrickson LK, Frye G. (2003). Processing delay in digital hearing aids: Measurement and perception. Presentation at American Speech-Language and Hearing Association Convention, Chicago, IL
- Houtgast T, Steeneken HJM. (1985). A review of the MTF concept in room acoustics and its use for estimating speech intelligibility in auditoria. J Acoust Soc Am 77: 1069–1077
- Jespersen CT, Olsen SO. (2003). Hearing research: Does directional benefit vary systematically with omnidirectional performance? Hear Rev 10 (11): 16, 18, 20, 22, 24, 62
- Johns M, Bray V, Nilsson M. (2002). Effective noise reduction. www.audiologyonline.com, Jan 03, 2003
- Killion MC. (1997a). Hearing aids, past, present, future: Moving toward normal conversations in noise. Br J Audiol 31: 141–148
- Killion MC. (1997b). Circuits haven't solved the hearing-in-noise problem. Hear J 51 (10): 28–32
- Killion MC, Schulien R, Christensen L, et al. (1998). Real world performance of an ITE directional microphone. Hear J 51: 24–38
- Killion MC, Niquette PA. (2000). What can the pure-tone audiogram tell us about a patient's SNR loss? Hear J 53 (3): 46–53
- Kuk F. (1996). Subjective preference for microphone types in daily listening environments. Hear J 49 (4): 29–34
- Kuk F, Baekgaard L, Ludvigsen C. (2000). Design considerations in directional microphones. Hear Rev 7 (9): 58–63
- Kuk FK, Kollofski C, Brown S, et al. (1999). Use of a digital hearing aid with directional microphones in school-aged children. J Am Acad Audiol 10: 535–548
- Kuk F, Keenan D, Lau C, Ludvigsen C. (2005). Performance of a fully adaptive directional microphone to signals presented from various azimuths. J Am Acad Audiol. Accepted for publication in June 2005
- Kuk F, Baekgaard L, Ludvigsen C. (2002a). Using digital signal processing to enhance the performance of dual microphones. Hear J 55 (1): 35–43
- Kuk F, Ludvigsen C, Paludan-Muller C. (2002b). Improving hearing aid performance in noise: Challenges and strategies. Hear J 55 (4): 34–46
- Laugesen S, Schmidtke T. (2004). Improving on the speech-in-noise problem with wireless array technology. News from Oticon, 3–23
- Latzel M, Kiessling J, Margolf-Hackl S. (2003). Optimizing noise suppression and comfort in hearing instruments. Hear Rev 10 (3): 76–82
- Lee LW, Geddes ER. (1998). Perception of microphone noise in hearing instruments. J Acoust Soc Am 104: 41–56
- Lee L, Lau C, Sullivan D. (1998). The advantage of a low compression threshold in directional microphones. Hear Rev 5 (8): 30–32
- Leeuw AR, Dreschler WA. (1991). Advantages of directional hearing aid microphones related to room acoustics. Audiology 30 (6): 330–344
- Levitt H. (2001). Noise reduction in hearing aids: A review. J Rehab Res Dev 38 (1): 111–121
- Lewis MS, Crandell CC, Valente M, et al. (2004). Speech perception in noise: Directional microphones versus frequency modulation (FM) systems. J Am Acad Audiol 15: 426–439
- Lurquin P, Rafhay S. (1996). Intelligibility in noise using multi-microphone hearing aids. Acta Otorhinolaryngol Belg 50: 103–109
- Lurquin P, Delacressonniere C, May A. (2001). Examination of a multi-band noise cancellation system. Hear Rev 8 (1): 48–54, 60
- Macrae JH, Dillon H. (1996). An equivalent noise level criterion for hearing aids. J Rehab Res Dev 33: 355–362
- Matsui G, Lemons T. (2001). A special report on new digital hearing instrument technology. Hear Rev 8 (4 suppl): 7–31
- Mueller HG. (2002). A candid round-table discussion on modern digital hearing aids and their features. Hear J 55 (10): 23–35
- Mueller HG, John RM. (1979). The effects of various front-to-back ratios on the performance of directional microphone hearing aids. J Am Audiol Soc 5: 30–33
- Mueller HG, Grimes AM, Erdman SA. (1983). Directional microphone. Hear Instrum 34 (2): 14–16, 47–48
- Mueller HG, Ricketts TA. (2000). Directional-microphone hearing aids: An update. Hear J 53 (5): 10–19
- Neuman AC, Chung K, Bakke M, et al. (2002). The Directional Hearing Aid Analyzer: An in-situ measurement system. Presentation at International Hearing Aid Conference, Lake Tahoe, CA
- Nielsen H, Ludvigsen C. (1978). Effects of hearing aids with directional microphones in different acoustic environments. Scand Audiol Suppl 7: 217–224
- Nilsson MJ, Soli SD, Sullivan J. (1994). Development of a hearing in noise test for the measurement of speech reception threshold. J Acoust Soc Am 95: 1085–1099
- Olsen HL, Hagerman B. (2002). Directivity of different hearing aid microphone locations. Int J Audiol 41: 48–56
- Oticon (2004a). The Syncro Audiological Concept
- Oticon (2004b). Improving on the speech-in-noise problem with wireless array technology. News from Oticon
- Peters RW, Moore BCJ, Baer T. (1998). Speech reception thresholds in noise with and without spectral and temporal dips for hearing-impaired and normally hearing people. J Acoust Soc Am 103: 577–587
- Plomp R. (1994). Noise, amplification and compression: Considerations of three main issues in hearing aid design. Ear Hear 15: 2–12
- Powers TA, Hamacher V. (2002). Three-microphone instrument is designed to extend benefits of directionality. Hear J 55 (10): 38–45
- Powers T, Hamacher V. (2004). Proving adaptive directional technology works: A review of studies. Hear Rev 46: 48–49, 69
- Powers T, Holube I, Wesselkamp M. (1999). The use of digital features to combat background noise. Hear Rev 3 (suppl): 36–39
- Preves D. (1999). Directional microphone use in ITE hearing instruments. Hear Rev 4 (7): 21–27
- Preves DA, Sammeth CA, Wynne MK. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument. J Am Acad Audiol 10 (5): 273–284
- Pumford JM, Seewald RC, Scollie S, et al. (2000). Speech recognition with in-the-ear and behind-the-ear dual microphone hearing instruments. J Am Acad Audiol 11: 23–35
- Ricketts TA. (2000a). Impact of noise source configuration on directional hearing aid benefit and performance. Ear Hear 21: 194–205
- Ricketts T. (2000b). Directivity quantification in hearing aids: Fitting and measurement effects. Ear Hear 21: 45–58
- Ricketts TA. (2001). Directional hearing aids. Trends Amplif 5 (4): 139–176
- Ricketts TA, Dhar S. (1999). Aided benefit across directional and omni-directional hearing aid microphones for behind-the-ear hearing aids. J Am Acad Audiol 10 (4): 180–189
- Ricketts TA, Dittberner AB. (2002). Directional amplification for improved signal-to-noise ratio: Strategies, measurement, and limitations. In Valente M. (ed.): Strategies for Selecting and Verifying Hearing Aid Fittings, 2nd edition. New York: Thieme; 274–345
- Ricketts TA, Henry P. (2002). Evaluation of an adaptive directional-microphone hearing aid. Int J Audiol 41: 100–112
- Ricketts T, Henry P, Gnewikow D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear Hear 24 (5): 424–439
- Ricketts TA, Hornsby BW. (2003). Distance and reverberation effects on directional benefit. Ear Hear 24 (6): 472–484
- Ricketts T, Lindley G, Henry P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear Hear 22 (4): 348–361
- Ricketts T, Mueller HG. (1999). Making sense of directional microphone hearing aids. Am J Audiol 8: 117–127
- Ricketts T, Mueller HG. (2000). Predicting directional hearing aid benefit for individual listeners. J Am Acad Audiol 11 (10): 561–574
- Rosen S. (1992). Temporal information in speech: acoustic, auditory and linguistic aspects. Phil Trans R Soc Lond 336: 367–373
- Schum D. (2003). Noise-reduction circuitry in hearing aids: Goals and current strategies. Hear J 56 (6): 32–40
- Schweitzer C. (1997). Development of digital hearing aids. Trends Amplif 2 (2): 41–77
- Siemens Audiology Group (2004). http://factsandfigures.hearing-siemens.com/englisch/allgemein/ueberblick_ido-hdo/triano/direktionales_mikro1.jsp. Last accessed November 20, 2004
- Soede W. (2000). The array mic designed for people who want to communicate in noise. Etymotic Research, Elk Grove Village, IL
- Soede W, Bilsen FA, Berkhout AJ, et al. (1993). Directional hearing aid based on array technology. Scand Audiol Suppl (Copen) 38: 20–27
- Stone MA, Moore BCJ. (1999). Tolerable hearing aid delays. I. Estimation of limits imposed by the auditory path alone using simulated hearing losses. Ear Hear 20 (3): 182–192
- Stone MA, Moore BC. (2002). Tolerable hearing aid delays. II. Estimation of limits imposed during speech production. Ear Hear 23 (4): 325–338
- Stone MA, Moore BC. (2003). Tolerable hearing aid delays. III. Effects on speech production and perception of across-frequency variation in delay. Ear Hear 24 (2): 175–183
- Studebaker G, Cox R, Formby C. (1980). The effect of environment on the directional performance of head-worn hearing aids. In Studebaker G, Hochberg I. (eds): Acoustical Factors Affecting Hearing Aid Performance. Baltimore, MD: University Park Press
- Surr RK, Walden BE, Cord MT, et al. (2002). Influence of environmental factors on hearing aid microphone preference. J Am Acad Audiol 13 (6): 308–322
- Tellier N, Arndt H, Luo H. (2003). Speech or noise? Using signal detection and noise reduction. Hear Rev 10 (5): 48–51
- Tillman TW, Carhart R, Olsen WO. (1970). Hearing aid efficiency in a competing speech situation. J Speech Hear Res 13 (4): 789–811
- Thompson SC. (1999). Dual microphones or directional-plus-omni: Which is best? In Kochkin S, Strom KE (eds): High Performance Hearing Solutions, 3 (Suppl) to Hearing Review 6 (1): 31–35
- Thompson SC. (2003). Tutorial on microphone technologies for directional hearing aids. Hear J 56 (11): 14–21
- Valente M. (1999). Use of microphone technology to improve user performance in noise. Trends Amplif 4 (3): 112–135
- Valente M, Mispagel KM. (2004). Performance of an automatic adaptive dual-microphone ITC digital hearing aid. Hear Rev 11 (2): 42–46, 71
- Valente M, Fabry D, Potts L. (1995). Recognition of speech in noise with hearing aids using dual microphones. J Am Acad Audiol 6: 440–449
- Valente M, Fabry D, Potts L, Sandlin R. (1998). Comparing the performance of the Widex Senso digital hearing aid with analog hearing aids. J Am Acad Audiol 9 (5): 342–360
- Valente M, Schuchman G, Potts LG, Beck LB. (2000). Performance of dual-microphone in-the-ear hearing aids. J Am Acad Audiol 11 (4): 181–189
- Van Dijkhuizen JN, Festen JM, Plomp R. (1991). The effect of frequency-selective attenuation on the speech-reception threshold of sentences in conditions of low-frequency noise. J Acoust Soc Am 90 (2 Pt 1): 885–894
- Walden BE, Surr RK, Cord MT, et al. (2004). Predicting hearing aid microphone preference in everyday listening. J Am Acad Audiol 15: 365–396
- Walden BE, Surr RK, Cord MT, et al. (2000). Comparison of benefits provided by different hearing aid technologies. J Am Acad Audiol 11 (10): 540–560
- Wouters J, Vanden Berghe J, Maj J-B. (2002). Adaptive noise suppression for a dual-microphone hearing aid. Int J Audiol 41: 401–407
- Wouters J, Litere L, Van Wieringen A. (1999). Speech intelligibility in noisy environments with one and two microphone hearing aids. Audiology 38: 91–98