Trends in Amplification. 1998 Sep;3(3):82–118. doi: 10.1177/108471389800300302

The Causes and Effects of Distortion and Internal Noise in Hearing Aids

Jeremy Agnew 1

INTRODUCTION

When fitting hearing aids, there are three guidelines that should always be followed by the dispenser:

  1. restore audibility, so that the amplified sound is above the user's threshold,

  2. limit the output, so that the amplified signal does not exceed the user's discomfort level, and

  3. do no harm, so that the amplified signal is not unintentionally or undesirably altered by the hearing aid.

The first two of these three guidelines are obvious. The first is to present sounds above the user's threshold of hearing in order to create an acoustic environment with the maximum amount of audible speech cues. The second ensures that discomfort level is not exceeded, so that the user does not wince or remove the hearing aids when loud sounds are present.

The third guideline—do no harm—is less obvious. This indicates that it is important that the hearing aids produce no alteration of the signal other than that which the fitter intends. This is a subtle way of saying that, among other things, well-fitted hearing aids should not distort the desired sound. The topic of distortion is the subject of this issue of Trends in Amplification. The intention of this issue is to explain why distortion occurs in some hearing aids; to explain how distortion is measured; to summarize what can be done to prevent distortion; and to summarize the perception of distortion by the hearing aid wearer.

Distortion in hearing aids can be broadly defined as the generation of undesired audible components that are present in the output, but which are not present in the input. When distortion occurs, hearing aids produce undesired elements at the output through the interaction of the processed signal with some internal non-linear mechanism. These undesired components may interfere to some degree or other with the reception of sound by the listener. If these added elements are small compared to the overall signal level, they may effectively cause no interference at all. If they are large, they can be so disruptive to the listener that the desired sound becomes irritating or even incomprehensible.

All audio systems inevitably contain some amount of distortion. The practical problem for the listener is the type of distortion that is present and the level that is acceptable or tolerable in hearing aids before the distortion becomes disruptive to speech intelligibility and sound quality. It is known that highly distorted speech in quiet can remain intelligible (Licklider, 1946); however, as dispensers often experience, intelligibility is not the only measure of whether or not a user will accept and wear hearing aids. Gabrielsson and Sjogren (1979b) have shown that overall perceived sound quality is important to users when selecting a hearing aid. Punch (1978) has shown that listeners with mild to moderate sensorineural hearing losses retain the same ability to make distinctions in sound quality judgments as listeners with normal hearing. This implies that good sound quality is just as important to, and desired by, listeners with a hearing impairment as it is by listeners with normal hearing.

Three characteristics of distortion in hearing aids modify the sound delivered to the user:

  1. the type of distortion,

  2. the relative amount of distortion at different frequencies, and

  3. the variation of distortion with different input levels.

Since the most important goal for fitting hearing aids is to restore or facilitate communication ability, undistorted sound is important for optimum speech intelligibility and sound quality. However, it is important to also recognize that there are other reasons to provide undistorted amplification of sound. For example, undistorted and pleasant reproduction of music may be an important sensory experience for some listeners.

This issue concentrates on different types of undesired modification to the waveshape of the sound, primarily by total harmonic distortion (THD) and intermodulation distortion (IMD). However, in a more general sense, any undesired component added to the sound by hearing aids is a form of distortion. Thus, this issue also discusses sound generated by hearing aids with no signal present at the microphone. This artifact is more commonly called internal noise. Internal noise fits within the definition of distortion given earlier, since internal noise is an undesired product generated in the output of hearing aids that is not present at the input. In some cases, distortion and internal noise are so closely linked that they cannot always be separated into discrete entities. For example, instability in the frequency and shape of the clock pulses in a Class D output stage (often loosely called clock jitter) results in distortion of the signal. However, this artifact may be audibly revealed in the output sound primarily as noise, rather than as distortion of the waveform.

According to this broad definition of distortion, undesired noises such as “motorboating” and other acoustic feedback oscillations can also be considered to be forms of distortion. However, acoustic feedback and other similar audible artifacts occurring within hearing aids will not be discussed here. These types of distortion have been discussed in detail by Agnew (1996b) in a previous issue of Trends in Amplification.

In this issue distortion is discussed in two general categories:

  1. Undesired modification of the waveform of the incoming sound by some mechanism occurring within the hearing aids. The severity of this effect may or may not vary with the acoustic level of the incoming sound. This type of audible artifact is generally what most dispensers refer to as “distortion”. This will be the subject of Part I of this issue, and will follow the definition used for distortion in this text.

  2. The production of undesired audible components in the output sound, regardless of whether or not there is any acoustic input to the hearing aid. This is generally called “noise”. This will be the subject of Part II of this issue.

The first part of this issue discusses the causes and effects of distortion. First, different types of distortion are described, followed by an explanation of how they are measured. This is followed by a discussion of the sources of distortion that may be present in hearing aids and the various mechanisms by which distortion is created. The final section of Part I discusses the effects of distortion on the hearing aid wearer. For the reader having difficulty understanding the technical concepts in the first part of this tutorial, a good overview from a different perspective has been presented by Kuk (1996).

The second part of this issue discusses the causes and effects of internal noise in hearing aids. This part starts with a discussion of sources of internal noise and continues with a description of methods for the measurement of internal noise. This part concludes with a discussion of the perception of internal noise and its effects on the user of hearing aids.

At the end of the text is an intentionally long list of references, which is intended to serve as a reasonably comprehensive resource for the reader seeking further information on distortion and noise in hearing aids.

TYPES OF DISTORTION

There are two fundamental and significant forms of distortion that occur in hearing aids. These are harmonic distortion (HD), usually more broadly called total harmonic distortion (THD), and intermodulation distortion (IMD). Both of these result from non-linearities in the amplifying system, and are sometimes collectively called amplitude distortion (Langford-Smith, 1960). Harmonic distortion occurs when a single frequency is presented to the input of a hearing aid and the output contains the original frequency plus additional undesired frequencies that are harmonically related to the original frequency. Intermodulation distortion occurs when two frequencies are presented simultaneously to a hearing aid and the output contains one or more frequencies that are related to the sum and difference of the two input frequencies.

There are other forms of distortion that may be present in the output of hearing aids. For example, transient intermodulation distortion (TIM) occurs when a large abrupt change occurs in the level of the input sound and creates IMD products in the output. Such distortions will be discussed in more detail later. However, these other distortions are not generally perceived as readily by a listener as the effects of THD and IMD, and are thus not considered to be as important for this discussion. Thus most of the discussion presented here will center around THD and IMD.

A frequent source of confusion is that there is no such thing as a single, inherent level of distortion in a hearing aid. Rather, the level and type of distortion present in the output sound delivered to the user depend on the applied test conditions and the characteristics of the hearing aid being tested (Burnett, 1967). Thus, the type and level of input signal applied and the associated test conditions must be carefully considered when evaluating reported distortion performance.

One final comment should be made before addressing the details of distortion. This discussion will treat distortion from the point of view that distortion is an undesirable characteristic of amplified sound. While this is generally true for listeners with normal hearing, and for those with a mild to moderate hearing loss, it is possible that some amount of distortion may be useful for some listeners with severe or profound hearing losses.

It has been theorized that distortion may increase the perception of loudness of the signal through the creation of additional harmonic energy and, through these additional harmonics, may provide additional identification cues for speech understanding for some listeners with hearing losses. This viewpoint has been discussed among hearing scientists, but there has not been an unequivocal conclusion.

One technique based on this idea adds frequencies to the processed sound at one-half and double the frequencies present in the 1000 Hz to 2000 Hz range (DuPret and Lefevre, 1991). These additional frequency components could be considered distortion under the definition used for this discussion, because audible components are being added to the output even though, in this case, they are intentionally added. Although this technique has shown promise when evaluated in wearable devices on patients, only limited clinical data are available on its usefulness (Parent et al, 1997).

It may be helpful to refer to Figure 1 during the following discussion of distortion and its measurement. Figure 1 presents an overview of the three major types of distortion measurement commonly used for objective measurement of hearing aid distortion, and then shows methods by which each is accomplished.

Figure 1. An overview of the three major types of distortion measurement commonly used for quantifying hearing aid distortion and the methods by which each is accomplished.

Total Harmonic Distortion

Harmonic distortion is present when undesired frequencies that are harmonics of an input frequency are created in the output. For example, if the input is 1000 Hz, an output with harmonic distortion could contain 2000 Hz, 3000 Hz, 4000 Hz, and other multiples (harmonics) of 1000 Hz, as well as the original signal at 1000 Hz. The even-numbered multiples of the fundamental (2 × 1000, 4 × 1000, 6 × 1000, etc.) are called even-order harmonics. The odd-numbered multiples of the fundamental (3 × 1000, 5 × 1000, 7 × 1000, etc.) are called odd-order harmonics. The fundamental, the original frequency, is sometimes called the first harmonic.

Figure 2 is a graph of output versus frequency for a hearing aid with an input signal of 1000 Hz, showing the harmonics present in the output. The graph shows that 2nd and 3rd harmonics are present. Numerical values for these harmonics are listed in Table 1. The harmonic values in Table 1 may be converted to a percentage of distortion at each harmonic frequency, as shown in the last column of the table. This conversion is made by determining how far the level of the harmonic of interest is below the level of the fundamental, and then looking up this difference in Table 2. In Table 1 the level of the 1000 Hz frequency was 89.5 dB SPL. The second harmonic, 2000 Hz, was 50.5 dB SPL. Thus, the second harmonic is 39 dB down from (i.e., lower than) the fundamental. Looking at Table 2, a difference of 39 dB corresponds to a distortion level of 1.1%. Similarly, the 3rd harmonic is 47 dB down from the fundamental, which corresponds to a distortion level of 0.45%.

Figure 2. Harmonic distortion components present in a hearing aid adjusted below saturation with an input signal of 1000 Hz. This graph clearly reveals 2nd and 3rd harmonics present at 2000 Hz and 3000 Hz, respectively. Table 1 lists the corresponding numerical values for the three output frequencies.

Table 1.

Levels of distortion present in Figure 2.

Frequency (Hz) Output Level (dB SPL) Level down from fundamental (dB) Level of distortion (%)
1000 89.5
2000 50.5 39.0 1.10
3000 42.5 47.0 0.45
noise floor at 3500 Hz 29.7 59.8 equivalent to 0.10

Table 2.

Conversion from decibels below the signal level of the fundamental to percentage of distortion.

Decibels Percent Decibels Percent Decibels Percent
1 89.1 28 4.0 55 0.18
2 79.4 29 3.5 56 0.16
3 70.8 30 3.2 57 0.14
4 63.1 31 2.8 58 0.13
5 56.2 32 2.5 59 0.11
6 50.1 33 2.2 60 0.10
7 44.7 34 2.0 61 0.09
8 39.8 35 1.8 62 0.08
9 35.5 36 1.6 63 0.07
10 31.6 37 1.4 64 0.06
11 28.2 38 1.3 65 0.06
12 25.1 39 1.1 66 0.05
13 22.4 40 1.0 67 0.04
14 19.9 41 0.89 68 0.04
15 17.8 42 0.79 69 0.03
16 15.8 43 0.71 70 0.03
17 14.1 44 0.63 71 0.03
18 12.6 45 0.56 72 0.02
19 11.2 46 0.50 73 0.02
20 10.0 47 0.45 74 0.02
21 8.9 48 0.40 75 0.02
22 7.9 49 0.35 76 0.02
23 7.1 50 0.32 77 0.01
24 6.3 51 0.28 78 0.01
25 5.6 52 0.25 79 0.01
26 5.0 53 0.22 80 0.01
27 4.5 54 0.20

When considering these types of graphs, two useful numbers that are easy to remember are that harmonics that are 20 dB down from the fundamental correspond to 10% distortion, and 40 dB down correspond to 1%. As will be discussed in the later section on the perception of distortion, distortion below 1% is probably negligible for hearing aid purposes; values above 10% are probably audible and may be objectionable. Values between 1% and 10% are considered on a case-by-case basis.
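As a quick numerical check of this rule of thumb, the conversion in Table 2 is simply the decibel relationship restated as a ratio: a harmonic that is d dB below the fundamental has an amplitude of 10^(−d/20) times the fundamental. The short Python sketch below is an illustration added for this purpose (not part of the original article); the function name is arbitrary.

```python
# Illustrative sketch: percentage of distortion for a component d dB below the
# fundamental, i.e. 100 * 10**(-d / 20). Values correspond to Table 2 (rounded).

def db_down_to_percent(db_down: float) -> float:
    return 100.0 * 10.0 ** (-db_down / 20.0)

if __name__ == "__main__":
    for d in (20, 39, 40, 47):
        print(f"{d} dB down -> {db_down_to_percent(d):.2f}% distortion")
    # 20 dB -> 10.00%, 39 dB -> 1.12%, 40 dB -> 1.00%, 47 dB -> 0.45%
```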

If the fundamental frequency is filtered from the output and all the other harmonic frequencies are measured and summed, the resulting combination is a measure of the amount of all the harmonic distortion present. Since all the harmonics are summed together into one measurement, the resulting figure is called the total harmonic distortion (THD) that is present. THD is usually expressed as an (undesired) percentage of the desired signal level.

The basic concept for the formula for calculating THD is:

THD = (output − fundamental) / (output)

thus,

THD(%) = 100 × √(f2² + f3² + f4² + …) / √(f1² + f2² + f3² + f4² + …)

where f1 is the level of the fundamental and f2, f3, f4, and so forth, are the levels of the harmonics present. Note that this calculation will also include any circuit noise that may be present. If the levels of the harmonics are comparable to the inherent noise, the internal noise will be a significant factor in the calculations.
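For illustration, the following Python sketch applies this formula to the measured levels in Table 1, ignoring the noise floor. The dB SPL values are first converted to relative linear amplitudes; the function name and structure are illustrative only, not the article's measurement procedure.

```python
import math

def thd_percent(levels_db_spl):
    """levels_db_spl[0] is the fundamental; the remaining entries are harmonics."""
    # Convert each dB SPL level to a relative linear pressure amplitude.
    amps = [10.0 ** (level / 20.0) for level in levels_db_spl]
    harmonic_energy = sum(a * a for a in amps[1:])
    total_energy = sum(a * a for a in amps)
    return 100.0 * math.sqrt(harmonic_energy / total_energy)

# Levels from Table 1: fundamental 89.5 dB SPL, harmonics 50.5 and 42.5 dB SPL.
print(f"THD = {thd_percent([89.5, 50.5, 42.5]):.2f}%")   # about 1.2%
```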

The standard method of measuring THD in hearing aids in the United States is according to ANSI standard S3.22 (1996). This is a measure of the THD present at 500 Hz with an input level of 70 dB SPL, at 800 Hz with 70 dB SPL, and at 1600 Hz with 65 dB SPL, or at three special-purpose frequencies with the same levels. In the event that the frequency response curve rises 12 dB or more between any test frequency and its second harmonic, the test may be omitted at that particular frequency. For example, if the frequency response curve rises 15 dB between 500 Hz and 1000 Hz, the distortion test at 500 Hz may be omitted.

ANSI standard S3.22 (1996) is intended to be a quality control standard for manufacturers to test their product, and this measurement is easy to perform with automated equipment. An example of results from this type of testing is shown in Figure 3, which shows a hearing aid tested according to this standard. The results of distortion testing at the three required frequencies are in the text section at the top.

Figure 3. Example of a printout from the measurement of a hearing aid according to ANSI standard S3.22 (1996).

In countries outside the United States, the IEC (1983a) standard method for the characterization of the electroacoustic characteristics of hearing aids specifies sweeping the test frequency from 200 Hz to 5000 Hz at an input level of 70 dB SPL, then plotting the THD content of the output at each frequency. Examples of this type of graph will be shown and discussed later.

One limitation of the ANSI standard S3.22 (1996) method is that it only requires tests for distortion at three frequencies. However, this is not necessarily inappropriate, because ANSI (1996) is intended to be a quality control standard for manufacturers to test their product, not an indication of how the hearing aid will perform on the user.

A more complete method for measuring THD sweeps the input test frequency across the whole range, in order to find possible distortion spikes that could occur between the three designated ANSI measuring frequencies. An example of a swept-frequency THD graph is shown in Figure 4. This graph shows the level of THD that is generated by the hearing aid between 200 Hz and 6200 Hz with an input level of 70 dB SPL. Immediately below the graph are the data gathered from the same hearing aid according to ANSI (1996). In this example, even though the ANSI data showed that distortion levels were less than 3% at the three ANSI frequencies measured, the hearing aid contained a distortion peak of 12.8% at 2500 Hz. The presence of this distortion peak could degrade the performance of the hearing aid for the user. This frequency is not routinely measured by the ANSI test, unless the hearing aid is a special-purpose hearing aid and requires measurement at this frequency. See ANSI (1996) for further details on the definition and measurement of special-purpose hearing aids.

Figure 4. Data from a hearing aid showing the measured three-frequency ANSI data, compared to a graph showing the levels of THD that occurred as the distortion was measured continuously between 200 Hz and 6200 Hz.

In general, THD measurements are meaningful up to about half the upper frequency limit of the hearing aid, which means in practice that THD measurements are valid up to about 3000 Hz. The reason for this is related to the limited bandwidth of most hearing aids. Hearing aids are typically severely limited in high frequency response above about 6000 Hz, due to falling high frequency response characteristics inherent in the receiver. Thus the second harmonic of 3000 Hz and frequencies beyond are so severely attenuated that any harmonics present will be buried in either the measurement system noise or the internal hearing aid noise. The noise floor of a typical measurement is shown in the graph in Figure 2. The noise floor of the measurement, which is the wavy line between about 30 dB and 40 dB SPL at frequencies other than 1000, 2000 and 3000 Hz, is due to internal noise present in the hearing aid. At 3500 Hz the level of the noise is 29.7 dB SPL, which is 12.8 dB lower than the harmonic signal being measured at 3000 Hz. This illustrates the point that eventually any high frequency harmonics present will become buried in the internal noise.

A further factor in the measurement of THD is that the combination of 2 cm3 coupler and the associated microphone that are used for hearing aid measurements also produce severe attenuation of the signal in the higher frequencies. This makes reliable measurements of high frequency harmonics very difficult.

Intermodulation Distortion

Intermodulation distortion is created when two frequencies (f1 and f2) are present simultaneously at the input of the hearing aid and the output contains various multiples of the sum (f2+f1) and difference (f2-f1) of these two frequencies. Thus IMD can result in the creation of many frequencies that occur across the frequency spectrum. For example, if the input frequencies are 1000 Hz and 1200 Hz, the output might contain added distortion frequencies at 200 Hz (the difference frequency), 400 Hz (twice the difference frequency), and other frequencies spaced every 200 Hz across the spectrum. In addition, the output may contain 2200 Hz (the sum of the frequencies), 4400 Hz (twice the sum of the frequencies), and many other frequencies related to the sum and difference of the input frequencies. Direct harmonics of the frequencies may also be present at 2000 Hz and 2400 Hz. In practice, the distortion products of most interest for hearing aid measurement are the difference frequency (f2–f1) and the 3rd order product (f2 – 2f1). Distortion products higher than third order typically rapidly decrease in intensity and do not contribute significantly to the final distortion value (Brockbank and Wass, 1945).
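A short sketch may help make the bookkeeping of these sum and difference products concrete. The Python function below is hypothetical (not from the article); it enumerates the intermodulation products |m·f1 + n·f2| up to fourth order for the 1000 Hz and 1200 Hz example.

```python
def imd_products(f1, f2, max_order=4):
    """Frequencies |m*f1 + n*f2| for nonzero integers m, n with |m| + |n| <= max_order."""
    products = set()
    for m in range(-max_order, max_order + 1):
        for n in range(-max_order, max_order + 1):
            if m == 0 or n == 0 or abs(m) + abs(n) > max_order:
                continue
            freq = abs(m * f1 + n * f2)
            if freq > 0:
                products.add(round(freq))
    return sorted(products)

print(imd_products(1000, 1200))
# Includes 200 Hz (f2 - f1), 400 Hz (twice the difference), 2200 Hz (f1 + f2)
# and 4400 Hz (twice the sum), along with other 3rd- and 4th-order products.
```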

Because many difference frequencies at the input occur within the passband of hearing aids, audible IMD products tend to appear in the middle and higher frequencies, though they may also commonly appear in the low frequencies. A measurement of IMD is probably a more realistic measure of hearing aid distortion than THD, since speech and music consist of the equivalent of multiple frequencies applied to hearing aids simultaneously, rather than a single frequency. Cabot (1988) has stated that this type of testing may be the most likely to measure what the ear might hear.

Figure 5 shows how an intense high frequency sibilant sound can overload a linear hearing aid and produce many IMD products at lower frequencies. Figure 6 is the acoustic spectrum of a test signal that resembles an extended “…sssss…” sound at a level equivalent to that in a spoken word. This type of sound is often received at the microphone at a level that is high enough to overdrive and saturate a hearing aid. This energy is concentrated in a band of frequencies primarily between 3500 Hz and 8500 Hz.

Figure 5. Output of a linear hearing aid tested with the signal shown in Figure 6, with the test signal subtracted in order to reveal residual products in the output. The output shown in this graph is the resulting IMD with energy primarily in the 500 Hz to 2000 Hz region.

Figure 6. Test signal consisting of a band of noise with energy primarily between 3500 Hz and 8500 Hz, used to simulate an extended sibilant speech sound.

Figure 5 shows the resulting output from a particular hearing aid, after the test signal has been electronically subtracted. This graph shows that a large number of IMD products not present in the input signal have been created between about 500 Hz and 2000 Hz. The level of the test signal at the input of the hearing aid was 68 dB SPL, showing that even relatively low levels of high frequency energy can produce significant IMD in some hearing aids.

This knowledge can be used in a dispenser's office as a rough test for the presence of IMD. If a suspect hearing aid is held fairly close to the mouth and the word “tesssssssssst” is spoken slowly into the microphone, the sound level on the sibilant portion will probably be on the order of 75 to 80 dB SPL, which is enough to saturate many hearing aids. If a raspy, harsh or buzzing quality is simultaneously heard in the output of the hearing aid, then the hearing aid may produce the same type of sound quality under conditions of intense sibilant input sounds. A loud talker's own voice can easily reach these levels at the hearing aid microphone.

There are two primary methods for measuring IMD in audio systems. Both methods apply two frequencies to the input of the hearing aid and measure the resulting distortion products at the output. The first of these methods is generally not suitable for hearing aid measurement; however, it is important to understand why, so it will be discussed briefly for completeness.

The first standardized method of testing for IMD in audio amplifiers is called the SMPTE (Society of Motion Picture and Television Engineers) test. This is a test for frequency modulation (FM) distortion, which is sometimes incorrectly called Doppler distortion. This type of distortion occurs when one sound at the input modulates another. An example of a possible situation that could cause this problem is listening to choral music accompanied by a sustained low note on an organ. If the sound reproduction quality is poor, the sound of the voices perceived by the listener may contain a wavering quality because the singing has been modulated by the low organ note. This example is similar to how the test is applied in practice. A low frequency of 60 Hz is applied to the amplifier under test, along with a high test frequency of 7000 Hz (Metzler, 1993). The amplitude of the low frequency is applied at a level four times higher than that of the high frequency. The resulting distortion is measured and recorded as a single number.

The SMPTE test for IMD is not considered suitable for hearing aid testing (Burnett, 1967), because the 60 Hz low frequency and the 7000 Hz high frequency are both outside the passband (amplifying frequency range) of most hearing aids. Also, if the test is performed by sweeping the high frequency, the relationship of the amplitudes of the low and high frequencies would vary drastically according to the frequency response of the hearing aid, thus making the resulting measurement difficult to interpret.

The other method for testing for IMD that has been standardized in the audio industry is useful for hearing aid measurement. The test is variously known as the CCIF (International Telephonic Consultative Committee) test, the twin-tone test, the CCITT (International Telephone and Telegraph Consultative Committee) test, or the IHF (Institute of High Fidelity) intermodulation distortion test. Common frequencies used for testing audio equipment are fixed at 13 kHz and 14 kHz, or at 19 kHz and 20 kHz (Metzler, 1993), though frequencies from 1000 Hz to 9000 Hz with difference frequencies from 50 Hz to 500 Hz have also been used (Langford-Smith, 1960).

As when measuring THD, a more complete method for measuring IMD in practice is to sweep the two input frequencies across the whole frequency range, while maintaining a constant difference frequency between them. The IEC (1983a) standard for characterizing hearing aids uses a difference frequency of 125 Hz, and sweeps the test frequencies over a range from 350 Hz to 5000 Hz, with an input sound level of 64 dB SPL. This small frequency difference ensures that the frequencies within the hearing aid will be maintained at approximately the same level for both frequencies while they are swept across the frequency range. If the relative amplitudes of the two test frequencies vary with respect to each other due to the normal variations in the frequency response of the hearing aid, errors in the measurement result may occur. Specific techniques for the measurement of IMD in hearing aids in the setting of an acoustics laboratory are described in Thomsen and Moller (1975) and White (1977).
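The effect such a twin-tone measurement is trying to capture can be mimicked in a few lines of code. The rough Python/NumPy simulation below is an illustration under simplifying assumptions (not an IEC test procedure): it passes two tones spaced 125 Hz apart through an asymmetric peak clipper standing in for a saturating hearing aid and reads the levels of the main IMD products from the spectrum. All amplitudes and clipping thresholds are arbitrary digital values.

```python
import numpy as np

fs = 48_000                         # sample rate (Hz); 1 s record -> 1 Hz FFT bins
t = np.arange(fs) / fs
f1, f2 = 2000.0, 2125.0             # twin tones with a 125 Hz difference
x = 0.6 * np.sin(2 * np.pi * f1 * t) + 0.6 * np.sin(2 * np.pi * f2 * t)

y = np.clip(x, -0.5, 0.9)           # asymmetric peak clipping mimics saturation

spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)    # single-sided amplitudes

def level_db(freq_hz):
    return 20 * np.log10(spectrum[int(round(freq_hz))])

ref = level_db(f1)                  # level of one of the test tones
for freq, name in [(f2 - f1, "f2 - f1"), (2 * f1 - f2, "2f1 - f2"),
                   (2 * f2 - f1, "2f2 - f1")]:
    print(f"{name} at {freq:.0f} Hz: {level_db(freq) - ref:+.1f} dB re test tone")
```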

Useful as the measurement and interpretation of IMD would appear to be, the routine measurement and reporting of IMD by manufacturers on user brochures and specification sheets is not currently widespread. One obstacle is that equipment for the routine measurement of IMD is not readily available to the clinician. Also, this measurement is not required by ANSI standard S3.22 (1996) or by IEC (1983b), which are quality control standards intended for testing hearing aids during manufacturing.

Other Types of Distortion

There are several other forms of distortion that may be present in hearing aids, but which are generally of lesser practical importance than THD and IMD. Some of these distortions are discussed in more detail in Agnew (1988).

Frequency distortion is the unequal amplification of different parts of the spectrum. In hearing aids, this type of “distortion” is introduced deliberately as a method of compensating for hearing loss that varies with frequency. This is sometimes also called spectral distortion.

Phase distortion is the alteration of timing relationships between input and output, and between different frequencies that exist simultaneously in a particular sound. This should be distinguished from a phase shift that is proportional to frequency, and which does not cause phase distortion (Langford-Smith, 1960). Phase distortion occurs in almost all hearing aids due to the use of capacitors and inductors for amplifying and tailoring the frequency response in the circuitry. For example, band-split filters, commonly used in hearing aids with signal processing in two frequency bands, produce large alterations in phase at the cross-over frequency. Though phase relationships between the two ears are important for localization of sound (Batteau, 1967; Rodgers, 1981; Blauert, 1983), the significance for a listener of changes in relative phase within a complex sound at a single ear is uncertain. The ear may be unable to detect phase shifts in continuous tones (Scroggie, 1958), though phase is important to undistorted reproduction of transient sounds (Langford-Smith, 1960; Moller, 1978a; 1978b). Changes in phase between harmonically-related components of a complex sound can be perceived, and can change the perception of timbre and pitch (Moore, 1982). Phase distortion is probably not significant at frequencies of interest for hearing aids (Douglas-Young, 1981; Moore, 1982). Killion (1979) has indicated that phase changes of less than 90° per octave are generally inaudible.

Transient distortion occurs when a hearing aid or other audio system cannot respond rapidly enough to sounds that either change very rapidly or which have short duration, such as drums and cymbal clashes. This type of distortion is related to an inaccurate response to phase changes. A typical symptom of poor transient response is an oscillation that continues for a brief period of time after the test signal has ceased. This is called ringing, and causes a blurred quality to be introduced into the sound, which degrades the clarity and sharpness of transient sounds.

Crossover distortion occurs primarily in Class B (push-pull) amplifiers. It results from a discontinuity in amplification around the zero crossings of the wave, when the amplifier switches from one side of the Class B output to the other side. For this reason, what are loosely called Class B amplifiers in hearing aids are, in reality, Class AB amplifiers. This type of amplifier is not a true Class B amplifier, but contains some forward bias as in a Class A amplifier—hence the addition of the “A” to create Class AB. This creates a smooth transition between the two sides of the amplifier and reduces crossover distortion.

Frequency Modulation (FM) distortion occurs when a low frequency modulates a higher frequency or frequencies. One method for testing for FM distortion was described earlier as the SMPTE method.

The preceding distortions are forms of distortion that are static with a constant input signal. There are also dynamic forms of distortion that change the characteristics of the processed signal as the input varies. For example, hearing aids may distort the amplitude relationship between sound levels within speech. Thus the relationship of the relative sound levels between a loud sound (perhaps a vowel) followed closely by a soft sound (perhaps a consonant) is not maintained and speech identification cues may be altered. This phenomenon is common in compression hearing aids, which deliberately distort this relationship in order to fit sounds into the residual dynamic range of a hearing aid user.

Little work has been reported in the literature to quantify the results of this type of distortion on intelligibility and sound quality. Indications that certain compression settings can adversely affect the signal-to-distortion ratio (SDR) have been reported by Kates (1992).

Because of space limitations, it is only possible here to present a broad picture of distortion measurement. Many specific improvements in measurement techniques have been proposed. The interested reader is referred for more details to Corliss et al (1968), Leinonen et al (1977), Cordell (1983), Skritek (1983; 1987), Thiele (1983), Small (1986), Levitt et al (1987), Williamson et al (1987), Schneider and Jamieson (1995), and Anderson et al (1996).

COHERENCE

A different technique used for the measurement of distortion in hearing aids is the use of coherence. Coherence shows the degree to which the output from a hearing aid is correlated to the input (ANSI, 1992). For a random noise test signal, coherence is degraded by non-linearity and by system noise. Coherence is not degraded by steady-state magnitude (i.e., gain) changes, because a distortionless change occurring in gain is a linear property. Thus, in its most fundamental sense, coherence is a measure of how well the output signal of a system, such as a hearing aid, is linearly related to the input signal.

Coherence is reported as a dimensionless quantity between 0 and 1. If the measured coherence is 0, then the output signal is completely unrelated to the input signal. If the coherence is 1, then the output is linearly related to the input with no corrupting influences. If the coherence is between 0 and 1, then there is some amount of distortion in the signal.

The basic formula for calculating the percentage of distortion from a coherence measurement is (Preves, 1994):

Distortion (%) = 100 / √SDR

where SDR is the signal-to-distortion ratio:

SDR = (coherence) / (1 − coherence)

The percentage of distortion for selected coherence values between 0 and 1, calculated by this formula, are listed in Table 3. Note that 100% distortion occurs for a coherence value of 0.5. In other words, for a coherence of 0.5, equal contributions come from distortion and from the signal. As the coherence value decreases below 0.5 and approaches 0, the output signal has less and less resemblance to the input signal. Finally, as the coherence value reaches 0, the input signal has been totally degraded by the time that it appears at the output. Further tutorial information and specific details of hearing aid test methodology are described in ANSI (1992), Kates (1992), and IEC (1997).
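A compact sketch of this conversion (illustrative only, added here for clarity) is shown below; the printed values can be compared with Table 3.

```python
import math

def distortion_percent_from_coherence(coherence: float) -> float:
    sdr = coherence / (1.0 - coherence)        # signal-to-distortion ratio
    return 100.0 / math.sqrt(sdr)

for c in (0.99, 0.95, 0.90, 0.80, 0.50):
    print(f"coherence {c:.2f} -> {distortion_percent_from_coherence(c):.0f}% distortion")
# 10%, 23%, 33%, 50%, 100%, matching the corresponding rows of Table 3.
```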

Table 3.

Percentage of distortion for selected coherence values.

Coherence value Percentage of distortion
1.00 0
0.99 10
0.97 17
0.95 23
0.90 33
0.85 42
0.80 50
0.70 65
0.60 82
0.50 100

Coherence is an overall measure of anything that makes the output different from the input, except for any system gain that may be present. Thus, with one measurement of coherence it is possible to quantify the level of all the artifacts that cause the output to be different from the input, a situation which meets the definition of distortion given in the introduction to this issue. However, because coherence is a global measurement, the results do not distinguish the individual characteristics that make the output and input different. While this difference may well be due to THD or IMD, differences may also be due to phase shift (time delay), internal oscillations, or internal noise. The specific nature of these differences is not clear from the measurement of coherence. For example, coherence may decrease around the frequencies of the resonant peaks in a hearing aid receiver; however, this is due to differences in group delay (phase shift) and not to what may be traditionally considered to be distortion.

In general, for low input levels, such as 50 dB SPL, a low coherence value in a hearing aid measurement often indicates high system noise. This may be due either to inherent noise within the hearing aid or to external noise in the test environment corrupting the measurement results. For high input levels, such as 80 or 90 dB SPL, a low coherence value often indicates that saturation distortion is present.

Two examples of coherence measured across frequency are shown in Figure 7. Figure 7a shows a coherence measurement for what would generally be considered to be a “good” hearing aid. It can be seen that the coherence is almost 1 across the entire frequency range, particularly above about 500 Hz. This indicates a very high correlation between output and input. Figure 7b shows a coherence measurement for what would generally be considered to be a “poor” hearing aid. In the low frequencies, particularly below about 2000 Hz, the coherence drops to about 0.7, then drops even further at frequencies below 1000 Hz. This indicates very poor correlation between output and input in these lower frequencies. However, Figure 7b also illustrates the difficulty that may be encountered when interpreting coherence measurements. This hearing aid had a very large decrease in amplification in the low frequencies. This decrease probably resulted in a degraded signal-to-noise ratio (SNR) in the low frequencies which, in turn, may have resulted in the poor coherence below 2000 Hz because the measurement does not distinguish between distortion and noise.

Figure 7. Example of coherence measurements from two hearing aids.

Though a considerable amount of effort has been expended on coherence measurements for hearing aids (Dyrlund, 1989; 1992; Preves and Newton, 1989; Preves, 1990; Preves and Woodruff, 1990; Kates, 1992; Dyrlund et al, 1994; Schneider and Jamieson, 1995) and the ANSI S3.48 hearing aid working group has studied the application of the measurement in detail, hearing aid manufacturers have been slow to adopt coherence for reporting individual hearing aid data to the dispenser.

In a similar effort to improve hearing aid distortion reporting, Kates (1990) has proposed a measure for distortion similar to the Articulation Index (AI). The signal-to-distortion ratio (SDR) of the hearing aid is measured in each auditory critical band. Each value obtained is then limited to a lower value of 0 dB SDR and an upper value of 30 dB SDR. The resultant set of SDR values are summed and then divided by 30 times the number of critical bands. The result yields a number between 0 and 1, where 0 is the poorest and 1 is the best.
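The following Python sketch restates that computation; the per-band SDR values are made-up placeholders, since the article does not give a worked example.

```python
def kates_distortion_index(sdr_db_per_band):
    """Clamp each band's SDR to 0-30 dB, then normalize: 1 is best, 0 is poorest."""
    clamped = [min(max(sdr, 0.0), 30.0) for sdr in sdr_db_per_band]
    return sum(clamped) / (30.0 * len(clamped))

example_sdrs = [28, 25, 31, 18, 12, 7, 3]      # hypothetical per-band SDRs (dB)
print(f"distortion index = {kates_distortion_index(example_sdrs):.2f}")
```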

SOURCES OF DISTORTION

Distortion that occurs in hearing aids may be categorized into two different types, depending on the level of the input signal present. The first type is a fixed level of distortion that is present at all levels of input, but which is primarily observed with low levels of input. The exact input level below which this may be observed will vary according to the hearing aid undergoing measurement and the composition of the input signal; however, in general, “low level” in this context refers to input levels that are below about 70 dB SPL. The distortion resulting from low levels of input is inherent in the design and operation of the amplifier.

The second type is distortion that occurs primarily with higher levels of input (i.e., higher than about 70 dB SPL) and typically varies in measured level with the level of the input signal. This is often called saturation distortion because saturation of some part of the circuit results in overload and the generation of distortion. This type of distortion is also called non-linear distortion because it results from non-linearities in the hearing aids, and may be called amplitude distortion or overload distortion, because it occurs and worsens as the amplitude of the input signal increases. This results in the production of high levels of THD and IMD in the output. These generic categories of distortion are illustrated as a diagram in Figure 8.

Figure 8. Diagram illustrating the relationship of different categories of distortion to some of the possible causes.

Distortion Caused by Low-level Inputs

Due to inherent limitations in hearing aid circuits and transducers, some small percentage of distortion normally occurs in most hearing aids. This is known and accepted by hearing aid designers, fitters and users. This distortion may have several causes, from slight inaccuracies in the matching and linearity of components to minor electromechanical non-linearities in receivers. In Class A circuits distortion may occur due to non-linearities and output loading effects. In Class B circuits distortion may occur due to cross-over distortion. In Class D circuits distortion may arise from clock jitter. Due to non-linearities in electromechanical operation, hearing aid receivers produce inherent distortion. All these contributions to overall distortion usually result in less than 5% THD and IMD across the frequency spectrum. This low level of distortion is typically unnoticed by the user and has no significant effect on hearing aid use.

Objectionable levels of distortion in hearing aids with low level inputs are primarily caused by two mechanisms: inadequate design or hearing aid failure. Distortion due to poor design is rare in modern hearing aids. Occurrence of this problem may be easily observed by performing a distortion frequency sweep, as shown in Figure 4.

A more likely cause for persistent objectionable distortion in hearing aids with low-level inputs is that some component in the hearing aid has failed, or that the hearing aid has been damaged. For example, accidentally dropping a hearing aid onto a hard surface, such as a tiled floor, may damage the receiver and produce levels of distortion that are higher than normal. Typically this type of problem is not a subtle effect, but results in large and very noticeable increases in distortion. This distortion will most likely be present with all inputs, all the time. A diagnosis can easily be made by performing standard distortion tests. The obvious solution is to return the hearing aid for repair.

Fully digital hearing aids may have additional potential sources of low-level distortion, such as A/D converter non-linearities, aliasing, clock instability, quantization errors, quantization noise and phase distortion (Pohlmann, 1992). To date, very little work has been performed to determine how significant these sources of distortion will be to the user with a hearing impairment. Those readers who are unfamiliar with digital signal processing (DSP) in hearing aids may wish to refer to Levitt (1987), Staab (1990), Williamson and Punch (1990), Agnew (1991), and Murray and Hansen (1992) for comprehensive background information. General information on digital signal processing at a reasonably-readable technical level for the non-technical reader may be found in Bloom (1985) and Pohlmann (1991, 1992).
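As one concrete example of these digital error sources, the theoretical signal-to-quantization-noise ratio of an ideal N-bit converter driven by a full-scale sine is about 6.02·N + 1.76 dB. This is a standard digital-audio result (see, e.g., Pohlmann, 1992), not a figure from this article, and the word lengths below are purely illustrative.

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal A/D converter for a full-scale sine input."""
    return 6.02 * bits + 1.76

for bits in (12, 16, 20):
    print(f"{bits}-bit converter: about {quantization_snr_db(bits):.0f} dB SNR")
```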

Distortion Caused by High-level Inputs

Distortion that occurs with low-level inputs is usually reduced to a minimum by the hearing aid circuit designer, and is not usually a particular problem. A more significant type of distortion occurs when high levels of input are presented to the hearing aid. This type of distortion is often not constant, but varies with the level of the input, typically becoming more severe with higher levels of input. This problem is due to saturation distortion, so called because some internal part of the hearing aid has saturated and overloaded.

This phenomenon is also known as peak-clipping, as illustrated in Figure 9. An undistorted sine wave is shown in Figure 9a. Figure 9b and 9c show the effects of peak-clipping on a sine wave. Figure 9b shows clipping on one half of the sine wave; this is called asymmetrical peak clipping. Figure 9c shows equal clipping on both halves of the sine wave; this is called symmetrical peak clipping.
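The distinction between the two clipping conditions can be seen in a simple simulation. The Python/NumPy sketch below is illustrative (not a hearing aid measurement): it clips a 1000 Hz sine symmetrically and asymmetrically and reports the resulting harmonic levels as a percentage of the fundamental. The clipping thresholds are arbitrary.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs              # 1 s record -> 1 Hz FFT bins
x = np.sin(2 * np.pi * 1000 * t)

def harmonic_levels(y, harmonics=(2, 3, 4, 5)):
    spec = np.abs(np.fft.rfft(y))
    fundamental = spec[1000]
    return {h: round(100 * spec[1000 * h] / fundamental, 2) for h in harmonics}

print("symmetric clipping: ", harmonic_levels(np.clip(x, -0.5, 0.5)))
print("asymmetric clipping:", harmonic_levels(np.clip(x, -1.0, 0.5)))
# Symmetric clipping produces mainly odd harmonics (3rd, 5th, ...), while
# asymmetric clipping adds even harmonics (2nd, 4th, ...) as well.
```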

Figure 9. The effects of peak clipping on a sine wave.

Peak-clipping is used deliberately as a simple and inexpensive method of limiting the output of a hearing aid in response to loud sounds. The threshold of the clipping is set such that the peaks of the amplified waves are cut off, or clipped, at a level that produces the desired reduction in acoustic output, as shown in Figure 9c. Hawkins and Naidoo (1993), reporting on a survey of hearing aid manufacturers, stated that, in 1990, 82% of the hearing aids sold in the United States used peak-clipping as a method of output limitation.

The side-effect of peak clipping as a form of output limitation is the production of THD and IMD. Because of the deleterious effects of this on sound quality, it has also been half-humorously called “crummy” peak clipping. Revit (1994) has described how a lack of smoothness in the appearance of the family of hearing aid output curves obtained by varying the input level of composite noise between 50 dB SPL and 80 dB SPL, in 10 dB steps, may be used to indicate the presence of peak-clipping in linear hearing aids.

An example of the increase in distortion that may occur when a hearing aid saturates is illustrated in Figure 10a. This graph shows output versus frequency for the same hearing aid that was shown below saturation in Figure 2. The corresponding numerical results are shown in Table 4. The graph shows large increases in the levels of the 2nd and 3rd harmonics and the corresponding distortion. The 4th through the 9th harmonics have now appeared on the graph, though the distortion figures in Table 4 indicate that the 5th through the 9th harmonics are effectively negligible because they are so small. Figure 10a shows the harmonic distortion products for the condition under which the hearing aid was driven slightly into saturation. In Figure 10b, the hearing aid was driven further into saturation by increasing the level of the input signal. In this instance, the level of the 2nd harmonic has increased, but the levels of the 3rd and 4th have decreased. The other harmonics may be neglected. The difference between 10a and 10b shows that types and levels of harmonic distortion products, even during saturation, can vary widely.

Figure 10. Graph of output versus frequency for the hearing aid measured in Figure 2, showing HD present when the hearing aid was driven into saturation.

Table 4.

Levels of distortion present in Figures 10a and 10b.

Frequency (Hz) Output level (dB SPL) Level down from fundamental (dB) Level of distortion (%)
Figure 10a
1000 101
2000 87 14 19.90
3000 75 26 5.00
4000 71 30 3.20
5000 57 44 0.63
6000 60 41 0.89
7000 42 59 0.11
8000 31 70 0.03
9000 19 82 0.01
Figure 10b
1000 102
2000 92 10 31.60
3000 65 37 1.40
4000 68 34 2.00
5000 61 41 0.89
6000 61 41 0.89
7000 43 59 0.11
8000 41 61 0.09
9000 31 71 0.03

As will be discussed further in the section on the perception of distortion, the presence and relative levels of different harmonics contribute to the perception of the annoyance of the distorted sound. Even though Figure 10b shows measurements with the hearing aid further into saturation than does Figure 10a and shows a higher level of second harmonic distortion to be present, the perception of the condition in Figure 10a may actually be worse for a listener, due to the higher level of third harmonic present.

Saturation distortion occurs primarily because there is only a limited amount of amplification that can be obtained from the battery used to power hearing aids. At some combination of input level and amplification, something in the hearing aid, either microphone, amplifier, or receiver, reaches the limit of what it can amplify and deliver to the ear without distortion. Overload can occur at any stage in the hearing aid.

Contemporary microphones in hearing aids operate with very low distortion in the range of sound levels typically present in the environment. Distortion figures on the order of less than 0.5% are typical for electret microphones with an input frequency of 1000 Hz at 60 dB SPL. Distortion levels remain on the order of 1% up to about 105 dB SPL input, then increase to about 10% by 120 dB SPL. Coherence typically remains essentially 1 up to about 110 dB SPL input with a broadband noise input signal. Even though these levels are higher than those required to reproduce the normal range of speech, sounds exceeding speech levels commonly occur in a listener's environment. Table 5 illustrates some of the high levels of sound that may be encountered in household environments. Teder (1995) measured crowd noise at a baseball game to be as high as 120 dB SPL. Peaks of orchestral music may reach 120–125 dB SPL.

Table 5.

Levels of loud sounds commonly encountered in everyday environments.

Common kitchen sounds (compiled from Teder, 1995)
Source Peak sound level (dB SPL)
Cupboard door closing 84
Pots and pans put in cupboard 89
Setting plate in sink 91
Dropped pot lid 102
Fork dropped on plate 104
Spoon tapped on cup 104
Common environmental sounds (Agnew, 1995)
Source Peak sound level (dB SPL)
Electric hair dryer (slow setting) 82–88
Conversational speech 85–90
Man's electric shaver 89
Electric hair dryer (fast setting) 90–98
3/8″ electric drill in wood 95
Gasoline-powered lawn mower 105
7″ circular saw cutting wood 110
Hammer driving nail in wood 125–135

A linear preamplifier is limited by its supply voltage in the amount of amplification that it can provide without distortion. If the gain of a Class A preamplifier causes the signal at its output to exceed this signal-swing limit, the preamplifier will overload, saturate and distort.

Similar to the overload and distortion in the preamplifier that occurs as the input signal increases, the electrical interaction of the output amplifier and receiver can also produce significant distortion if the input level to the amplifier is high enough. The level where saturation will occur is determined by the type and design of the output amplifier and receiver that are specified by the hearing aid designer. A Class A output stage can produce a voltage swing of slightly less than two times the battery voltage (Agnew et al, 1997). A Class B output amplifier can produce a swing of four times the battery voltage. This is one reason that a Class B amplifier is typically used for high-power hearing aids: it can provide twice the output voltage swing available from a Class A amplifier.

An erroneous assumption that is often made is that low distortion amplification may be achieved up to the maximum SSPL90 value of a hearing aid. (Note that the term SSPL90 has been replaced by the newer term OSPL90 in ANSI standard S3.22 (1996) in order to harmonize with IEC (1983b) specifications; however, the older term SSPL90 will be retained in this issue due to widespread contemporary usage). The value for SSPL90 defines the maximum level of hearing aid output with a 90 dB SPL input; however, this is not necessarily either the maximum output of the hearing aid, or the point of the onset of distortion. Maximum output of a hearing aid may occur a few decibels above the maximum SSPL90 value if the input signal is greater than 90 dB SPL, and may occur below this value at different frequencies. The test signal of 90 dB SPL input is used as a convenient and consistent input value for testing a hearing aid for quality control purposes, which is the intent of ANSI Standard S3.22 (1996).

Amplifier Headroom

Saturation distortion basically occurs because of a lack of headroom in hearing aids. In sound system engineering, headroom is defined as the difference, in decibels, between the highest amplified level present in a given output signal and the maximum output level that the system can produce without noticeable distortion (White, 1993). This maximum level is the upper end of the dynamic range of the amplifying system (Foreman, 1987). High quality audio amplifiers are capable of being designed to have minimal distortion until clipping occurs due to saturation, because of the ready availability of high enough voltages from the power supply.

Headroom and distortion level definitions are more complex in a hearing aid because significant distortion occurs in most Class A and Class D hearing aid output stages below the hearing aid saturation level. Thus, it is more appropriate to say that hearing aid amplifier headroom is the amount of amplification range remaining between the instantaneous signal output level and the maximum undistorted output capability of the hearing aid, bearing in mind that this maximum undistorted output level varies with frequency and is not necessarily the SSPL90 value of the hearing aid.

Though saturation and distortion do not necessarily only occur in the output stage, the onset of receiver saturation is often assumed to be the SSPL90 value of the hearing aid. However, the onset of distortion does not necessarily occur at this value. The onset of distortion in a linear hearing aid usually occurs below the values plotted in the SSPL90 graph and the amount of distortion usually varies with frequency. An example illustrating this is shown in Figure 11. The solid black line is the SSPL90 curve of a typical linear ITE hearing aid with a Class A output amplifier stage. This hearing aid had 40 dB of peak gain, a peak SSPL90 of 108 dB SPL, and a rising frequency response curve of about 12 dB from 500 Hz to the first peak at 1500 Hz. The dashed line is the output level of the hearing aid at which 10% distortion occurs. From 200 Hz to about 1000 Hz, this level is about 6 dB to 8 dB lower than the SSPL90 curve. At 2000 Hz, the 10% distortion level is only about 2 dB below the SSPL90 value. Thus low frequency sounds will tend to saturate this hearing aid sooner than will high frequency sounds, and will therefore produce harmonic distortion components that spread upwards into the higher frequencies.

Figure 11. Graph showing the SSPL90 of a hearing aid (black line), and the output level (dashed line) at which 10% distortion occurred.

Linear hearing aids with Class D output stages have been presented as having more headroom, and thus lower distortion, than linear hearing aids with Class A output stages. The maximum voltage swing across the receiver in a linear Class A or Class D output amplifier is approximately twice the voltage available from the battery. In both types of output stage there are some losses of output drive capability due to finite resistances in the output transistors that do not allow the use of the full battery voltage to drive the receiver (Grebene, 1984). However, due to different concepts used to drive the receiver in the two types of output stage, the levels of the output signal that may be obtained without appreciable distortion also differ.

Depending on the receiver and the design of the output stage, an empirical rule-of-thumb is that the voltage swing that appears across the receiver without appreciable distortion in a linear Class D output stage is approximately 3 dB less than twice the full battery voltage. Beyond this point the levels of distortion rise very rapidly. In a linear Class A output stage, similar empirical findings are that the maximum undistorted voltage swing across the receiver is approximately 6 dB less than twice the battery voltage. Therefore the difference between Class A and Class D maximum undistorted voltage swing across the receiver is approximately 3 dB, when using receivers of the same type and impedance (Agnew et al, 1997). Thus, when configured to produce the same SSPL90, a Class D output stage will have approximately 3 dB more headroom than will the equivalent Class A output stage, and the Class D will not reach the same distortion levels as the equivalent Class A output stage until a 3 dB higher output level is achieved.
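A short worked example of these rule-of-thumb figures is given below. The 1.3 V battery voltage is an illustrative assumption; the 3 dB and 6 dB offsets are the empirical values quoted above, not exact design limits.

```python
import math

def db_below(reference_volts: float, db: float) -> float:
    return reference_volts * 10 ** (-db / 20)

v_batt = 1.3                          # typical zinc-air cell voltage (illustrative)
v_ceiling = 2 * v_batt                # theoretical swing: twice the battery voltage
v_class_d = db_below(v_ceiling, 3.0)  # roughly 3 dB below the ceiling
v_class_a = db_below(v_ceiling, 6.0)  # roughly 6 dB below the ceiling

print(f"ceiling {v_ceiling:.2f} V, Class D {v_class_d:.2f} V, Class A {v_class_a:.2f} V")
print(f"Class D headroom advantage: {20 * math.log10(v_class_d / v_class_a):.1f} dB")
```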

Agnew et al (1997) showed that, for low and medium level input signals, both Class A and Class D output stages can perform equally well. As will be discussed further in the section on the perception of distortion, Palmer et al (1995) noted that when the design of the Class A amplifier that they tested was changed to increase the current drive through the receiver, comparable sound quality performance of the Class A circuit to the Class D circuit was obtained with no obvious audible difference. Johnson and Killion (1994) have stated that “all other things being equal, competently designed amplifiers of any class cannot be distinguished from one another on the basis of even the most careful listening tests”.

Because the headroom phenomenon is noticeable at levels approaching maximum amplification and output, it is only at higher input levels that the Class A circuit will distort sooner than a Class D circuit with equivalent SSPL90; in other words, there is additional headroom available in the Class D circuit. When the signal is amplified close to the saturation level, however, either hearing aid will distort, whether the output stage is Class A or Class D.

It is important to recognize that this discussion relates to Class A and Class D output stages with equal SSPL90 values, and to understand how this relates to the amount of headroom available in both. Inadequate headroom in any hearing aid can result from high gain with a low SSPL90, a combination that causes clipping and other types of nonlinear distortion at high input levels (Palmer et al, 1995). Increased headroom reduces the generation of these distortion products that degrade the coherence and sound quality of a hearing aid (Preves, 1990). However, the specific value of SSPL90 is set primarily by the receiver type and the configuration of the output stage chosen by the design engineer, not by the generic type of output stage. In practice, a hearing aid with a Class D or a Class A output stage will have the combination of SSPL90 and gain that the circuit designer chooses. This could include a combination of high gain and high SSPL90, low gain and low SSPL90, low gain and high SSPL90, or high gain and low SSPL90, though it is unlikely that this last combination would be found in an appropriately-fitted hearing aid.

There is also another important factor in this discussion that should not be overlooked: a Class D amplifier is inherently more efficient than a Class A amplifier (Carlson, 1988). Thus, for a particular SSPL90, a hearing aid with a Class D output stage will typically draw less current than an equivalent hearing aid with a Class A output stage. Agnew et al (1997) studied the sound quality performance of a hearing aid with a Class A output stage with 1.2 mA of current drain. These researchers found equivalent distortion performance was obtained with a hearing aid with a Class D output stage that had an idle current drain of 0.27 mA, increasing to a current drain of 0.51 mA at 90 dB SPL input (Agnew et al, 1997). Depending on the design, a Class D amplifier may dissipate as little as one-fourth the power of a Class AB design for the same output power (Subbarao, 1974).

Preves and Woodruff (1990) have presented an example of how increasing the headroom can improve coherence measurements in linear hearing aids. They compared a linear hearing aid with a Class A output stage, 35 dB of peak gain and an HFA SSPL90 of 103 dB (hearing aid A) to one with a different linear amplifier and a Class D output stage and receiver that provided 40 dB of peak gain and an HFA SSPL90 of 117 dB (hearing aid B). They showed that coherence measurements were improved for hearing aid B, which had the higher SSPL90 value. Hearing aid B used a Class D output stage; however, the same experiment could have been performed by comparing two Class A output stages with different SSPL90 values, configured to provide increased headroom in one of them. The important outcome was not the comparison between Class A and Class D as such, but that hearing aid B had increased headroom due to a different amplifier and a different receiver. The result was a higher HFA SSPL90 for hearing aid B than for hearing aid A; the greater headroom thus produced lower distortion in hearing aid B than in hearing aid A.

Preves and Newton (1989), and Preves and Woodruff (1990) have presented expanded explanations of the concept of headroom and the distortion problems that can occur in hearing aids due to the lack of headroom.

Multiple Input Levels for Distortion Testing

Since a hearing aid user may enter many different acoustic environments, it is important to characterize hearing aid distortion performance at different input levels. The sound levels encountered may range from soft conversational speech (60 dB SPL), through average speech (70 dB SPL) and intense speech (80 dB SPL), to shouted speech and loud music (90 dB SPL). Distortion performance should therefore be determined across this range of levels.

Though it might seem that distortion should not occur during normal speech communication, since conversational speech occurs at an average level of around 65 dB SPL, peaks in speech occur 12 dB to 20 dB above this average level. A potential peak level of 85 dB SPL can quickly drive a hearing aid amplifier into saturation and produce distortion. In addition, many other common sounds in the environment occur at levels greater than 85 dB SPL, as shown in Table 5.

Hearing aids that have the same specifications when measured according to ANSI S3.22 (1996) may have very different distortion performance when the input levels are varied. (Note that this is not necessarily a shortcoming of the standard; ANSI standard S3.22 (1996) is intended to be a quality control standard for manufacturers to test their product, not an indication of how the hearing aid will perform on the user.) Thus, one method used to comprehensively characterize the distortion performance of a hearing aid is to perform swept measurements of THD and IMD at input levels of 60, 70, 80 and 90 dB SPL. Agnew (1994) has described measurements made on three different hearing aids that had the same specifications and exhibited low levels of distortion when measured according to ANSI test methods. Distortion performance was very similar for the three hearing aids with a test input level of 60 dB SPL. However, the three aids had significantly different levels of distortion when tested with a swept frequency at 75 dB SPL input. One aid exhibited a distortion peak of 27%, compared to the other two, which peaked at 7% and 17% respectively.
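For readers who wish to experiment with such measurements, the sketch below shows one common way of estimating a THD percentage from a sampled output waveform for a pure-tone input. It is a simplified illustration, not the ANSI S3.22 procedure; the sampling rate, window, harmonic count and synthetic test signal are assumptions made for the example.

```python
# A minimal sketch of estimating THD from a sampled output, assuming a pure-tone input.
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """Estimate total harmonic distortion of `signal` for a test tone at f0 Hz."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

    def amplitude_at(f):
        # Take the largest bin near the target frequency to tolerate slight drift.
        idx = np.argmin(np.abs(freqs - f))
        lo, hi = max(idx - 2, 0), idx + 3
        return spectrum[lo:hi].max()

    fundamental = amplitude_at(f0)
    harmonics = [amplitude_at(k * f0) for k in range(2, n_harmonics + 2)
                 if k * f0 < fs / 2]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Example: a 1 kHz tone with 10% second-harmonic content gives a THD close to 10%.
fs = 16000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 2000 * t)
print(f"THD = {thd_percent(test, fs, 1000):.1f}%")
```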

Graphs obtained from THD measurement of these three hearing aids are shown in Figure 12. Figure 12a shows a comparison of THD graphs measured with 70 dB SPL input level. Under this condition, all three hearing aids exhibited similar low distortion performance. However, when measured with 80 dB SPL input level with no adjustment to the hearing aids, hearing aid #1 exhibited significantly lower levels of THD than did hearing aids #2 and #3, as shown in Figure 12b.

Figure 12. Graphs of a comparison of THD on a scale of 0% to 50% measured on three hearing aids, showing the difference in distortion levels that were obtained when measured with two different input levels.

A further example of graphs obtained from hearing aid measurements with different input levels is shown in Figure 13a and 13b. Figure 13a shows swept-frequency THD measurements made on a linear hearing aid with 60, 70, 80 and 90 dB SPL input. With 60 dB and 70 dB SPL input, the distortion levels are low, being less than 2% across most of the frequency range. However, with 80 dB SPL input, the distortion suddenly rises, peaking at over 50% at 900 Hz. With 90 dB SPL input, the distortion is 100% at 1300 Hz. These results indicate that it is probable that, at some input level between 70 dB and 80 dB SPL, some part of the hearing aid amplifier has saturated, thus resulting in a rapid increase in measured distortion. Similar performance is observed in Figure 13b, which shows swept-frequency IMD measurements on the same hearing aid with 60, 70, 80 and 90 dB SPL input and a 200 Hz separation of the test frequencies.

Figure 13. An example of graphs of the results of swept-frequency measurements for a linear hearing aid, using input levels of 60, 70, 80 and 90 dB SPL.

Reduction of Saturation Distortion

One solution to the amplifier saturation problem is to limit the amplifier gain at various stages with compression, such that the processed signal never exceeds the limits imposed by the available gain, the battery voltage, the output amplifier and the receiver. For example, if appropriate compression is added to the preamplifier and output amplifier circuits, these circuits will operate in the linear region until the limit of undistorted amplification is approached; the circuitry will then limit the available gain to the maximum that can be achieved without distortion (Agnew, 1997a).
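The idea can be sketched in simple input/output terms. The gain, knee point, compression ratio and saturation ceiling below are hypothetical values chosen for illustration, and the sketch ignores attack and release behavior and circuit placement; it is not the specific circuit described by Agnew (1997a).

```python
# A minimal input/output sketch (assumed gain, knee, ratio and ceiling values)
# of how compression can keep the processed signal below a saturation ceiling.

def compression_gain_change_db(input_db, knee_db=60.0, ratio=10.0):
    """Static compression rule: unity gain below the knee, 10:1 above it."""
    if input_db <= knee_db:
        return 0.0
    return -(input_db - knee_db) * (1.0 - 1.0 / ratio)

linear_gain_db = 30.0        # assumed linear gain
ceiling_db = 95.0            # assumed output level at which the amplifier saturates

for input_db in (60, 70, 80, 90):
    out_linear = input_db + linear_gain_db
    out_compressed = out_linear + compression_gain_change_db(input_db)
    status = "clips" if out_linear > ceiling_db else "ok"
    print(f"{input_db} dB SPL in: linear -> {out_linear:.0f} dB SPL ({status}), "
          f"compressed -> {out_compressed:.0f} dB SPL")
```

With these assumed numbers, the linear configuration would attempt to deliver 120 dB SPL for a 90 dB SPL input and clip at the 95 dB SPL ceiling, while the compressed configuration requests only about 93 dB SPL and stays within the undistorted range.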

Agnew (1995) has presented data that compared three hearing aids: a linear hearing aid, a hearing aid with input compression, and a hearing aid with multiple compression functions configured to prevent saturation. Swept-frequency THD graphs at 60 and 70 dB SPL input showed that the distortion performance was similar for all three circuits, and was very low (only about 2% or 3% at maximum). However, with 80 and 90 dB SPL input the distortion performance diverged considerably. With 80 dB SPL input the linear hearing aid peaked at 50% distortion, and with 90 dB SPL input was well over 50%. Distortion in the hearing aid with input compression peaked at about 20% with 80 dB SPL input and about 40% with 90 dB SPL input. Thus the use of input compression lowered the distortion compared to the linear hearing aid. The hearing aid with the circuitry to prevent saturation was superior in distortion performance to both the linear hearing aid and the hearing aid with input compression, and maintained the measured distortion levels consistently at less than 5% for all input levels.

THE PERCEPTION OF DISTORTION

It is difficult to state definitively which type of distortion is perceived by a listener as being worst; however, it is recognized that most of the unpleasant sound quality that occurs when distortion is present in an audio system is due to the presence of IMD, and not THD (Scroggie, 1954; Thomsen and Moller, 1975). IMD typically generates dissonant frequencies, or ones that are not pleasantly related to musical intervals. Thus, IMD is often considered strident and more objectionable to listeners than are THD products (Durrant and Lovrinic, 1984). Also, a given non-linearity in a system will typically produce more IMD components than THD components, and ones that are often higher in amplitude (Thomsen and Moller, 1975). Schweitzer et al (1977) have expressed the belief that IMD is at least as important as THD for specifying hearing aid performance and suggested that the presence of IMD is a pronounced possibility in every spoken syllable.

The audible effect of IMD has been described as making the signal sound “blurred”, “fuzzy”, “tinny”, “harsh”, “rattling”, “shrill”, “mushy”, “raucous”, “muddy”, “grating”, “rasping”, “buzzing” and “rough” (Scroggie, 1958; Langford-Smith, 1960). Though sound containing IMD products may be intelligible to a hearing aid user, listening to these objectionable-sounding components for long periods of time may lead to auditory fatigue (Ashley, 1976).

Fortune et al (1991) showed that linear hearing aids that generate saturation distortion produce a lower loudness discomfort level (LDL) than hearing aids with a high enough headroom to reduce saturation distortion. These authors contend that, to combat this, wearers of these hearing aids may turn down the gain of their aids to the point of receiving inadequate gain for beneficial amplification.

Levels of THD in a hearing aid of less than approximately 10% may often not be perceived as audible or particularly objectionable (Killion, 1979; Dillon and Macrae, 1984; Agnew, 1988; Cole 1993). There are two possible reasons for this. One is that, by definition, THD products in the output are harmonically-related to the input frequencies.

The second harmonic of a frequency is twice the frequency, or one octave above that frequency. Tones an octave apart sound very similar in pitch (Handel, 1993), and the ear tolerates a fairly large percentage of the second harmonic frequency in a sound before its presence becomes objectionable (Scroggie, 1958). Indeed, it is precisely because of the presence of these harmonic relationships in sound that music and voices have a rich-sounding timbre. The second possible reason that low levels of harmonic distortion in hearing aids may not be particularly objectionable is that many of the harmonic products fall above the passband of the hearing aid and are not audible.

The Effect of Distortion on Sound Quality Judgments

Though distorted sounds may remain highly intelligible (Licklider, 1946), the distortion present may severely degrade a listener's perception of sound quality. Criteria used in judgments of sound quality are commonly based on those of Gabrielsson (Gabrielsson and Sjogren, 1979a; 1979b; Gabrielsson and Lindstrom, 1985; Gabrielsson et al, 1988). These sound quality judgments are based on a 10 point scale of descriptors, such as clarity, fullness and spaciousness, using contrasts such as sharpness/softness, fullness/thinness and brightness/dullness (Gabrielsson and Sjogren, 1979a; 1979b; Gabrielsson and Lindstrom, 1985; Gabrielsson et al, 1988). Examples of scales used to record subject responses are shown in Figure 14. Gabrielsson and Sjogren (1979b) found that there was good agreement between the sound quality perception of hearing aid users and similar sound quality experiments using sound engineers and normal listeners as subjects.

Figure 14. Examples of scales used to record subject responses in sound quality testing.

Punch et al (1980) found that there was a high correlation between the low cut-off frequency of the amplified spectrum and the perceived sound quality. The lower cut-off frequencies were preferred by their subjects. Similarly, Franks (1982) found that though subjects with hearing impairments were not able to detect or appreciate the high frequency components of music, low frequencies were perceived and appreciated.

Yanick (1977), reporting on the results from 12 subjects wearing hearing aids with and without transient intermodulation distortion, concluded that the hearing aid amplifier which minimized distortion was consistently more effective in improving the clarity of speech sounds. Witter and Goldstein (1971), using 30 listeners with normal hearing to compare frequency response, THD, IMD and transient response, concluded that transient response was the best predictor of the listener's judgments of sound quality.

Kates and Kozma-Spytek (1994), studying the responses of eight listeners with normal hearing, showed that speech quality was significantly affected by peak-clipping. They went on to propose good sound quality as a design goal for hearing aids because of the ability of their subjects to detect small amounts of distortion and the significant effect of peak-clipping on speech quality. Hence these authors concluded that clipping distortion should be minimized in all stages of a hearing instrument. Van Tasell and Crain (1992), in a study of adaptive frequency response hearing aids, suggested completely avoiding peak-clipping as a method of output limitation. Hawkins and Naidoo (1993) studied 12 subjects with mild-to-moderate hearing loss and found that the subjects preferred the sound quality and clarity of compression limiting as a method of output reduction, as compared to asymmetrical peak clipping.

Several studies have reported on distortion relative to different types of output stages used in hearing aids. Kochkin and Ballad (1991) reported that participants in a focus group study and 110 participants in listening tests at a trade show exhibit felt that a Class D output stage produced higher sound quality than a Class A output stage. This difference was theorized to be the result of the higher headroom available in the hearing aids with the Class D output stage (Longwell and Gawinski, 1992). However, electroacoustic data on the hearing aids were not reported, and to evaluate these results properly it is important to know whether the test devices had identical gain, frequency responses, saturation levels and peak frequencies (Johnson and Killion, 1994).

Agnew (1997a) compared the sound quality perceptions of seventeen listeners with hearing impairments using two hearing aids with Class D output stages: one containing circuitry to prevent saturation distortion, and a linear one that was allowed to amplify into saturation. The stimuli used were a female talker, orchestral music and solo piano music presented at input levels of 60, 80 and 90 dB SPL. For 60 dB SPL input, when the measured distortion was low, there appeared to be no particular preference for either hearing aid and the differences noted were not statistically significant. For 80 dB SPL and 90 dB SPL inputs the listeners preferred the sound quality of the hearing aid with the antisaturation compression, which had significantly lower distortion than the standard hearing aid. At these higher input levels the results were statistically significant.

Agnew and Mayhugh (1997) reported the results of a perception study using fourteen listeners with normal hearing to determine whether it was possible to perceive distortion differences between a linear hearing aid with a Class D output stage and a hearing aid with a Class D output stage and identical ANSI specifications, but with compression added to minimize saturation distortion. The stimuli were recordings made through the hearing aids at different input levels, then equalized to remove the effect of loudness differences on the judgments. In three-alternative forced-choice trials, the listeners consistently and correctly identified the hearing aid with the higher measured distortion.

Palmer et al (1995) reported on sound quality judgments obtained during a comparison of hearing aids with a starved Class A output stage to hearing aids with a Class D output stage. The term “starved” in this context was used to indicate a Class A output stage that was deliberately biased for a low current drain and was thereby inadequate to minimize distortion. The subjects rated the Class D circuit as having superior sound quality. Palmer et al (1995) noted, however, that this study was not intended to compare Class D and Class A amplifiers, but to compare a Class D output stage to a “starved” Class A output stage. The authors also noted that when the biasing of the Class A amplifier was changed such that the current drain was significantly increased, performance of the Class A circuit comparable to the Class D was obtained with no obvious audible difference.

Agnew et al (1997) studied the relationship between headroom and perceived sound quality measures for a Class A output stage compared to a Class D output stage with the same measured SSPL90 value. Measured THD and IMD levels were approximately the same for both output stages at low input levels; however, at higher input levels, the output stage with lower headroom produced higher distortion. When the hearing aids were matched to have equal SSPL90, the hearing aid with the Class A output stage produced higher levels of distortion than the Class D output stage, particularly for input levels of 75 dB SPL and 90 dB SPL. When the SSPL90 of the hearing aid with the Class A output stage was raised 4 dB over that of the Class D in order to provide equivalent headroom in the two output stages, the Class D circuit had higher levels of distortion than the Class A circuit. Subjective preferences of sound quality obtained from four hearing aid wearers coincided with the trends in objective electroacoustic measurements of THD and IMD made for both conditions. There was generally a preference for the hearing aid circuit that exhibited lower distortion, particularly with high input levels.

Though the studies reported in the literature show a link between increasing distortion and a perception of decreased sound quality, a definitive correlation has not yet been established. For example, the study of Agnew et al (1997) showed that even though the distortion was lower for the Class D than for the Class A circuit under the same conditions, there was a subjective preference for the Class A. Trends observed in the data indicated that there are apparently other subtle features of sound quality perception that were not quantified by the rating scales used in this study, which made a definitive link between the distortion measurements and the sound quality perceptions inconclusive.

The Effect of Distortion on Speech Intelligibility

There have also been attempts to link increased hearing aid distortion to reduced speech intelligibility. However, a clear-cut link between hearing aid distortion and intelligibility has not yet been established (Peters and Burkhard, 1968; Curran, 1974; Williamson et al, 1987). This is probably because speech is highly redundant and a highly disruptive combination of corrupting influences must be present in the signal, the hearing mechanism and the hearing aid before intelligibility is significantly degraded.

The classic study of intelligibility and distortion was performed by Licklider (1946), who found that severely clipped and distorted speech remained highly intelligible for young adults with normal hearing. Though distorted speech remained intelligible for tests of speech reception in quiet with these listeners, intelligibility was reduced for tests performed with an ambient noise background, due to intermodulation effects between the speech and the noise. The author also stated that most types of noise, especially those with intense low-frequency components, have a severely detrimental effect on intelligibility.

Bode and Kasten (1971), studying 34 normal-hearing listeners under conditions of varying distortion, showed that consonant identification decreased by 15% to 29% as distortion levels increased. The introduction of moderate levels of distortion alone did not significantly decrease intelligibility. However, when combined with background noise and with the bandwidth reduced to simulate a hearing aid circuit, increasing levels of THD affected speech recognition scores, particularly the identification of final consonants. LaCroix et al (1979) found that minimal levels of distortion, when combined with low-pass filtered sound, caused statistically-significant marked decreases in speech comprehension.

Krebs (1972) quoted a study by Hartman that showed that a level of 30% THD significantly reduced sentence intelligibility as compared to the no-distortion condition for a group of subjects with sensorineural hearing loss.

Singer (1981) showed that the addition of 12% and 30% IMD under several conditions degraded speech intelligibility by an average of 6.7%. This was attributed to a masking effect, particularly on the second formant of the test words.

LaCroix et al (1979) performed speech intelligibility tests using test signal disruptions of low-pass filtering, time compression, temporal interruption and masking by speech-shaped noise on young men with normal hearing. They found that each of these distortions decreased speech comprehension and, if all the distortions were present simultaneously, the decrease was significantly extended.

Jirsa and Norris (1982) studied the effects of IMD on speech intelligibility in quiet using listeners with sensorineural hearing loss, and also in noise using listeners with normal hearing and with sensorineural hearing loss. They found that high levels of IMD occurring below 1000 Hz significantly interfered with speech intelligibility for subjects with sensorineural hearing impairment listening to sentences in quiet. From their results, they theorized that the upward spread of masking from the introduced distortion interfered with high frequency speech cues.

Both Teder (1990) and Hawkins and Naidoo (1993) have hypothesized that some of the difficulties mentioned by hearing aid users in noisy situations may be due to saturation-induced distortion. Distortion products generated by circuit saturation fill in the temporal structure of speech and degrade the syllabic distinctions (Teder, 1993). Since these additional products are frequencies that are not present in the input signal, saturation distortion is effectively generating masking noise. This added “noise” can easily mask quiet speech cues (Killion, 1993). A series of experiments by Crain and Van Tasell (1994) showed that speech reception threshold (SRT) became higher (i.e. poorer) with increased levels of peak-clipping, especially when the level of clipping was greater than 18 dB. The level of the clipping at which the SRT was affected was also where the listeners judged the sound quality to become unacceptable. The authors concluded that the addition of distortion products was responsible for these changes.

Using peak clipping as a method of output limitation results in little reduction of intelligibility in quiet, and articulation scores as high as 90% may be obtained (Langford-Smith, 1960; Moore, 1982). However, as several of the papers discussed above have shown, intelligibility may be reduced by the interaction and intermodulation of sounds when competing noise is present.

Acceptable Levels of Distortion

Tolerable or acceptable levels of distortion in hearing aids are hard to define. One reason that this is difficult is that the perception of “good” or “bad” depends on many factors, among them:

  1. whether the distortion is primarily THD or IMD,

  2. the bandwidth of the sound to which the user is listening,

  3. the spectrum of the input sound (e.g. pure tone versus speech versus music),

  4. the levels of the harmonic components present in the output sound, and

  5. the order of the harmonics and how they combine.

As an example of the potential significance of differences in the composition of distortion, it has been claimed that crossover distortion of 0.01% in a push-pull amplifier may sound worse than 10% THD due to soft peak clipping (Moller, 1978a; Moore, 1982). As another example, Killion (1979) has stated that just-audible distortion levels for musical and speech material are at least ten times greater than just-audible distortion levels for pure-tones.

Table 6, compiled from data presented in Langford-Smith (1960), illustrates the effect of bandwidth on listeners' perceptions of THD. The table shows that, as the bandwidth increases, the percentage of THD that is perceptible, tolerable or objectionable decreases. In all cases, the percentages were lower for music than for speech. The reason for the bandwidth effect was not given, but it can be hypothesized that, as the system bandwidth is progressively limited, offensive harmonics fall above the upper limit of the frequency response of the system and become inaudible; thus a higher percentage of distortion can be tolerated.

Table 6.

Effects of bandwidth on the perception of THD (compiled from Langford-Smith, 1960).

Bandwidth   Material   Perceptible    Tolerable   Objectionable
3750 Hz     speech     1.5%           8.8%        12.8%
            music      1.1%           5.6%        10.8%
5000 Hz     speech     not recorded   5.2%        8.8%
            music      not recorded   4.0%        6.0%
7500 Hz     speech     1.2%           4.0%        6.4%
            music      0.9%           3.2%        4.0%
15,000 Hz   speech     0.9%           1.9%        3.0%
            music      0.7%           1.4%        2.0%

In general, the higher the bandwidth, the lower the level of THD that is tolerated in high quality audio equipment. Higher harmonics of odd-order, such as the 7th, 9th, and 11th, are dissonant (Langford-Smith, 1960). A fraction of a percent of the 11th harmonic introduces noticeable harshness (Scroggie, 1958). In hearing aids, though, most of these harmonics are above the upper band limit of the receiver and are inaudible. As an example of the possible significance of the relative levels and types of harmonics present in sound, Class B push-pull output stages generally have lower levels of THD than do the equivalent Class A output stages. This is due to the cancellation of even-order harmonics in the two sides of the Class B amplifier (Agnew, 1988). However, even though the overall measured level of THD may be lower in the Class B output than in the Class A output, there are relatively more odd harmonics present in the Class B output, most of which tend to add dissonant and irritating components to the sound. For further insight into the complex relationship of harmonics in sound, Handel (1993) offers a detailed description and explanation for the perception of harmonic relationships as applied to music.

Permissible levels of distortion for a hearing aid have not yet been clearly defined. Dillon and Macrae (1984) have suggested that THD in hearing aids should be less than 10%, and preferably less than 5%. Killion (1988) has suggested that maximum THD or IMD should be less than 2% between 50 dB SPL and 90 dB SPL at the eardrum, but can be as high as 10% for sound levels less than 50 dB and greater than 90 dB SPL. Lotterman and Kasten (1967) in a study of 367 new hearing aid users, found that though most of the hearing aids exhibited less than 10% distortion across the frequency range, distortion levels of greater than 20% were not uncommon.

Table 7 contains various opinions, gathered from different sources within the literature, on the amount of distortion that may be allowable in an amplified signal. This table should be taken only as a guideline, because descriptions of the conditions and the types of distortion varied widely in the original sources, thus making accurate comparisons difficult.

Table 7.

Opinions on the acceptability of distortion.

OPINION SOURCE
For high fidelity:
 some types may be detected at 0.1 % Moore (1982)
 0.3% is detectable in a sustained tone Ward (1970)
 0.62% to 2.6% is just perceptible Shorter (1950)
 less than 1% is good and may not be noticed Moore (1982); Douglas-Young (1981)
 2% may be acceptable Ward (1970)
 2% to 3% may not be noticed Moore (1982)
 2% to 3% may not be objectionable Olson (1972)
 2.3% to 3.7% is bad Shorter (1950)
 3% may affect music Ward (1970)
 4% may be allowable Langford-Smith (1960)
For hearing aids:
 below 2% generally inaudible Killion (1979)
 should be less than 2% Killion (1988)
 3% is generally not noticeable Agnew (1988)
 preferably less than 5% Dillon and Macrae (1984)
 below 6% is generally not objectionable Agnew (1988)
 6% to 12% may not be objectionable Killion (1979)
 less than 10% is a good compromise Cole (1993)
 should be less than 10% Dillon and Macrae (1984)

INTERNAL NOISE

The second part of this issue discusses internal noise in hearing aids. Internal noise is considered to be a form of distortion because it is an undesired product generated within the hearing aid that is audible at the output, though not present at the input. Typically, this undesired sound is perceived as an audible hiss produced by steady-state broadband noise generated by the internal electronic circuitry.

Internal amplifier noise generated within any audio system has always been looked upon as objectionable, since it adds undesired “coloration” to the reproduced sound. Because in practice it is not possible to realize a totally noiseless hearing aid amplifier, it is important for the dispenser to be aware of potential problems that can be associated with internal noise.

When discussing “noise” in hearing aids, a distinction must be made between internal noise and external noise. Internal noise refers to noise generated by circuitry within the hearing aid that is audible to the user at the output. This noise is present in the output of the hearing aids whether or not there is any acoustic input. External noise, by contrast, is noise generated outside the hearing aids in the environment and which is picked up and amplified by the hearing aids, in this way becoming audible to the user. Frequently these two types of noise are confused by the hearing aid user who simply complains that “these hearing aids are too noisy”. It may well be that the hearing aids themselves are not at fault, but that environmental sounds that were previously below the threshold of audibility are now being heard by the user. Often such sounds consist of fan and air conditioner noise; motor noise, such as from a refrigerator; audible hum from fluorescent lights; distant traffic noises; or even garden noise, such as the wind rustling the leaves on a bush or a tree.

When troubleshooting user complaints, one of two easy methods may be used to distinguish between internal and external noise. One is to take the hearing aid user into an audiometric sound booth or anechoic room and ask if the perception of the noise has diminished or disappeared. If the user says “yes”, then the problem is due to an external noise source. Possibly this problem can be helped through counseling to explain that these sounds were present but previously not audible, or by alteration of the amplification characteristics of the hearing aid. A similar test may be performed in any quiet room by temporarily blocking the microphone port with putty. Again, if the apparent noise disappears, then the noise source is external. If the perception of the noise does not diminish, then the source of the noise is internal to the hearing aid. This type of unresolved noise is the subject of this part of this issue.

Internal noise becomes audible when the ambient environmental noise level is lower than the output SPL of the internal noise. If the problem is due to internal noise, it may only be audible to the user in very quiet listening situations, such as in an audiometric sound booth or in a quiet bedroom. In environments with higher noise levels, the ambient noise may be enough to mask any perception of internal noise. For reference, some typical ambient sound levels that may occur in very quiet environments are listed in Table 8.

Table 8.

Typical sound levels in quiet environments.

Source Sound level (dBA SPL)
Threshold of normal hearing in mid-frequencies 7–10
Broadcast studio 15
Quiet forest 15–20
Quiet bedroom at night 25
Library, conference room 35
Private office 40
Suburban living room 45
Secretarial pool office 55
Normal conversational speech 65

If internally-generated hearing aid noise occurs at a very high level, it has three undesirable characteristics:

  1. its presence may be objectionable to the hearing aid user,

  2. its presence in the output may directly mask weak speech sounds, and

  3. the internal noise may interact with low level input sounds to produce IMD products that either sound undesirable or further mask desired low-level external sounds and important speech cues.

The amount of degradation of speech cues will depend on the level at which internal noise occurs in the output of the hearing aid. If the internal noise occurs at a high enough level, it may be louder than some quiet segments of speech and may directly mask out low level speech sounds, thus degrading speech cues presented to the listener (Jirsa and Norris, 1982; Moore, 1986; Teder, 1993). Fielder (1985) has stated that noise with energy concentrated in the region from 1000 Hz to 3000 Hz tends to produce the most effective masking. Even if the internal hearing aid noise does not occur at a level high enough to disrupt communication, many listeners object to its presence as a distracting audible artifact when in a quiet listening situation.

SOURCES OF INTERNAL CIRCUIT NOISE

Inherent Noise at the Component Level

Most inherent electronic noise in hearing aid amplifiers is generated by components which amplify, such as transistors and integrated circuits. Other components in a hearing aid amplifier, such as resistors and capacitors, are also capable of generating electronic noise, but are not as significant a factor in the overall noise level. Details of the physics of semiconductor noise generation are not appropriate for this discussion; however, the generation of inherent electronic circuit noise can be categorized into three different types: (a) white noise produced by thermal and shot mechanisms, (b) burst noise, and (c) flicker noise (Fish, 1994).

Thermal noise and shot noise have different physical mechanisms, but both produce white noise, which sounds like a constant low-level hiss in the output of the hearing aid. Burst noise (also known as popcorn noise) adds intermittent popping or crackling sounds. This is a particularly objectionable type of noise because of its intermittent nature and abrupt onset. Flicker noise is a random noise whose level is inversely related to frequency and, because of this, is also commonly known as 1/f noise (pronounced “one-over-eff” noise). Flicker noise produces its highest output level at low frequencies, sometimes within the lower end of the audio frequency amplifying range. Because most sensorineural hearing losses leave hearing at low frequencies within normal limits, the low frequency content of 1/f noise may be high enough to become audible to the wearer of a hearing aid. Unfortunately, modern integrated circuits fabricated in complementary metal oxide semiconductor (CMOS) technology can have 3 to 10 times higher equivalent input noise than the same circuits implemented with low-noise bipolar devices (Gregorian and Temes, 1986).

Resistors are also a source of inherent noise in electronic circuits; the larger the resistance, the more noise it may produce. Resistors generate thermal and flicker noise. In practice, inherent noise generated by modern film and diffused resistors is small when compared to that generated by other semiconductor devices. Capacitors should ideally be noiseless; however, real capacitors effectively have parasitic series and parallel resistances associated with them. The parallel resistance, which is the theoretical equivalent of electrical leakage in the capacitor, produces thermal noise. The series resistance is a source of both thermal and flicker noise. In practice, capacitor noise is also negligible when compared to other inherent noise sources in the circuit.

Inherent Noise at the System Level

Internal noise can occur at several places in a hearing aid. Though inherent noise is generated by all semiconductor devices, the effect of a particular device on the overall output noise level of the amplifier may vary, depending on the type and location of the noisy device in the signal pathway. Noise generated at the beginning of the signal path will be amplified more than noise at the end of the signal path. Also the level of the perceived noise will vary, depending on the particular setting of the gain control and on how much noise is generated before and after the gain control in the signal path. The spectrum of the noise may be modified by the frequency response settings of the hearing aid, depending on the location of the dominant source of noise in the circuit.
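The effect of source location can be illustrated with a simple power summation over a few hypothetical stages; the stage names, noise levels and gains below are invented for the example and are not measurements of any particular circuit.

```python
# A simple illustration of why noise sources early in the signal path dominate:
# each source is amplified by all of the gain that follows it, and uncorrelated
# sources add on a power basis. All values are made up for the example.
import math

# (stage, noise referred to that point in uV rms, gain in dB from there to the output)
stages = [
    ("microphone",       5.0, 40.0),
    ("preamplifier",     2.0, 40.0),
    ("tone control",     2.0, 25.0),
    ("output amplifier", 3.0,  0.0),
]

total_power = 0.0
for name, noise_uv, gain_after_db in stages:
    at_output = noise_uv * 10 ** (gain_after_db / 20)
    total_power += at_output ** 2
    print(f"{name:16s} contributes {at_output:7.1f} uV at the output")

print(f"total output noise: {math.sqrt(total_power):.0f} uV rms")
```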

The practical result of the generation of internal noise in hearing aids is the production of a broad band of noise that is spectrally modified by the internal frequency response characteristics of the hearing aid, and dominated by the acoustic frequency response and passband of the receiver. Thus the long-term spectrum of the internal noise of a hearing aid has essentially the same shape as its acoustic frequency response, but occurs at a much lower level. An example of a typical long-term spectrum of internal hearing aid noise, as measured in a 2 cm3 coupler and averaged over 100 samples, is shown in Figure 15. This figure shows that the noise delivered to the listener consists of broadband noise that is low-pass filtered by the receiver characteristics.

Figure 15. Example of a noise spectrum measurement of the internal noise of a hearing aid with 40 dB of gain.

Different hearing aids may produce different internal noise spectra, depending on the particular amplifier used and the frequency response prescribed. Figure 16 shows an acoustic measurement of the broadband noise generated by two different linear hearing aids and illustrates the effects of different amplifier designs on noise performance. Each hearing aid had the same gain, SSPL90 and frequency response and used the same microphone and receiver. It can be seen that the internal noise generated by one of the amplifiers occurs at a considerably higher level and has a different spectrum from that of the other. Agnew (1988) showed similar graphs for the inherent electrical noise generated by two bipolar integrated circuits from two different manufacturers of hearing aid amplifiers.

Figure 16. Acoustic output noise measured for two hearing aids with the same ANSI specifications, but using different amplifiers.

The following paragraphs will describe internal noise as it relates to the location of the noise source in the signal path. Figure 17 contains a diagram of the blocks that will be discussed.

Figure 17. Generic block diagram of a hearing aid, for reference in the discussion of saturation distortion.

Inherent noise generated within the microphone is amplified by the gain of the hearing aid. Thus, inherent noise generated by the microphone plays a significant role in total output noise. Typical equivalent input noise figures for modern electret microphones are about 23 dB to 27 dBA SPL, though some newer models of microphone are 2 dB to 3 dB quieter. In electrical terms this relates to about 5 μV to 6 μV rms of output noise. In absolute terms, the microphone would seem to be a very small source of noise. However, since this noise occurs at the beginning of the signal amplification pathway, by the time that it is amplified and appears at the output it becomes significant in relative terms compared to other inherent sources of noise present in the circuit.

Careful design of the preamplifier input stage is particularly important in order to minimize noise generated at the beginning of the signal path. The electrical equivalent input noise (EIN) of a well-designed analog amplifier is about 2 μV rms. Electrical EIN is a theoretical figure that represents what the input noise level would be if all of the amplifier noise were assumed to come from a single electrical noise source placed at the input of an otherwise noiseless amplifier. This figure is calculated by measuring the level of the electrical output noise and dividing it by the gain of the amplifier. For a microphone with an inherent noise level of about 5 μV, the microphone noise is then about 8 dB higher than preamplifier noise at an EIN level of 2 μV. Thus, the microphone is typically the dominant noise source in hearing aids. Frequency shaping may or may not be incorporated into the preamplifier. Since this function may also occur later in the signal path, it is not as significant for noise performance as the preamplifier input stages.
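The 8 dB figure follows directly from the voltage ratio of the two noise levels, and the two uncorrelated sources combine on a power basis. A quick check using the representative values quoted above (the values themselves are illustrative, not measurements of a specific device):

```python
# Comparing the representative 5 uV microphone noise with a 2 uV amplifier EIN.
import math

mic_noise_uv = 5.0
amp_ein_uv = 2.0

difference_db = 20 * math.log10(mic_noise_uv / amp_ein_uv)
combined_uv = math.sqrt(mic_noise_uv ** 2 + amp_ein_uv ** 2)

print(f"microphone noise exceeds amplifier EIN by {difference_db:.1f} dB")  # about 8 dB
print(f"combined equivalent input noise: {combined_uv:.1f} uV rms")         # about 5.4 uV
```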

The output noise level of the microphone basically determines the lower end of the dynamic range of the hearing aid, because the level of this noise limits the amplification of low level sounds. Stated another way, if the equivalent input noise level of the microphone is 25 dBA SPL, then any incoming sound below 25 dB SPL will be lower than the noise of the microphone, making the effective low limit of amplification 25 dB SPL. As discussed in the section on saturation distortion, the supply voltage limits the maximum level of a signal that may be amplified by linear hearing aids. If this maximum level is the equivalent of 85 dB SPL, the dynamic range of these hearing aids, or the range over which the hearing aids will amplify above the noise and below the saturation level, will be 85 dB minus 25 dB, or 60 dB. If the maximum signal is 85 dB and the noise level is 25 dB, then the maximum possible signal-to-noise ratio (SNR) is 60 dB. This is true whether the hearing aids are analog or digital.

Figure 18 compares the approximate dynamic range of the ear, a typical linear hearing aid, a compact disc (CD), and analog tape. In all cases the dynamic range is limited by inherent noise at the lower end, and by system saturation and distortion at the upper end. This concept also applies to the human ear, which is limited by inherent biological noise on the low end of its listening range, and by biological and biomechanical overload at the upper end.

Figure 18. Approximate dynamic range of the ear, a typical linear hearing aid, a compact disc (CD), and analog tape.

Settings of the volume control (see Figure 17) can affect the amount of noise perceived in the output of the hearing aid. One distinct condition occurs when the gain control is fully on; the other when the gain control is fully off. With the gain control fully on, noise generated in the early stages of the signal path, including the microphone, will be amplified and presented to the user at the output. With the gain control in the fully off position, the microphone, preamplifier and frequency shaping are effectively isolated from the output amplifier and the noise from the early sections of the signal path will be completely attenuated and will not be audible. The noise will be only that generated by the output amplifier. In order for the hearing aids to appear quiet in this second condition, it is important for the gain and noise of the output amplifier to be carefully controlled.

The gain control itself does not act as an inherent noise source. However, transient bursts of noise may sometimes be audible when rotating the control, due to the mechanical movement of the wiper on the resistive element. Today this typically applies only to older carbon-type gain controls or to gain controls with defective elements. Modern metal-film gain controls tend to be very quiet in operation. Digital gain controls do not have any moving mechanical resistive elements to create noise; however, reports are occasionally heard of click sounds in the output of particular hearing aids when the control switches between gain steps.

A well-designed output amplifier (see Figure 17) will add very little noise to the total noise of hearing aids. If the receiver is a passive component, it merely acts as a transducer for electrical to acoustic energy conversion, and does not actively contribute any inherent noise to the total system noise. Integrated Class D receivers are different, because they contain an active amplifier inside the receiver housing (Carlson, 1988). Class D receivers operate via pulse-width modulation, which requires an internal analog-to-digital (A/D) conversion that may introduce additional sources of noise. For example, the Class D receiver circuit incorporates a clock to produce timing pulses for synchronization. Clock jitter or other non-linearities that result in slight inconsistencies in the positioning of the pulses will have the effect of adding noise to the conversion process and thus raising the overall noise level.

With the introduction of digital hearing aids, many dispensers assume that digitizing the signal will eliminate all internal hearing aid noise, similar to the lack of noise when playing back music from pre-recorded compact discs. This may turn out to be true, but at this early stage of their introduction it may be an optimistic assumption. Digital hearing aids still contain the same microphones as analog hearing aids. As discussed earlier, the microphone is usually the dominant noise source in hearing aids, thus the overall inherent dynamic range of the system should not be expected to improve beyond that of quiet analog hearing aids. In addition, there is still usually some analog amplifying circuitry in digital hearing aids, at least at the beginning of the signal path, that could again be a major contributor to the overall circuit noise.

There are additional sources of internal noise in digital hearing aids that are not found in analog hearing aids. For example, the A/D converter is an inherent noise source that occurs at the beginning of the signal path. The conversion process may also introduce additional sources of noise due to quantization error, clock jitter, round-off errors, and granulation noise. A detailed discussion of these problems may be found in Pohlmann (1992). These potential noise sources must be minimized during the design of the circuit in order to keep the overall noise at an acceptable level for the user.

It may, however, be possible to reduce the perception of internal noise in DSP hearing aids through sophisticated signal processing techniques. For example, it may be possible to reduce the gain or perform some type of DSP squelch function to reduce the perception of noise during periods when there is no input signal.

MEASUREMENT OF INTERNAL NOISE

Measurement Methods

Acoustic noise is usually specified by one of four general measurement methods:

  1. weighted noise, typically an A-weighted measurement stated as a single figure for the total spectrum,

  2. the complete noise spectrum, displayed as a graph,

  3. the one-third octave noise spectrum, displayed either as a bar graph or as a chart with a series of numbers, or

  4. equivalent input noise EIN, stated either as a single figure for the total spectrum or at 1/3 octave frequencies.

These measurements may be made with the gain control turned to the full-on position, to the full-off position, or to any position in between (such as the reference test gain position), depending on the desired hearing aid setting. Internal noise measurements are made with no input signal.

These four measurement methods are generic and may be used for any type of noise at any hearing aid setting. In addition, there are methods for measuring hearing aid noise with specific control settings that are detailed in ANSI (1996) and IEC (1983a; 1983b). Annex C of ANSI (1996) describes an optional method of specifying the hearing aid output noise spectrum and EIN in 1/3-octave bands.

The simplest method of expressing internal hearing aid noise is with a weighted noise figure. To obtain the measurement, the hearing aid is connected through the appropriate coupling to a sound level meter that has an attached 2 cm3 coupler or ear simulator (e.g., Zwislocki or IEC-711 coupler). The measurement coupler used will usually be stated along with the measured noise value, because the type of coupler used will affect the value of the measurement due to the differing frequency responses of couplers. In addition, a Zwislocki ear simulator has a smaller effective internal volume, which will raise the overall measured sound pressure as compared to the 2 cm3 coupler. If no coupler is stated, a 2 cm3 coupler can probably be assumed. The value given will also be stated as either A-weighted or C-weighted. These standardized weighting factors will affect the measured level because of their effect on the frequency response of the measurement. Information on weighting filters and their application to sound level measurements may be found in any standard text on acoustic measurement, such as Broch (1971).
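As a rough illustration of how a single A-weighted figure can be formed, the sketch below applies the standard analytic A-weighting curve to a few hypothetical 1/3-octave band levels and sums the weighted band powers. The band levels are invented for the example and are not measurements from this article.

```python
# A hedged sketch of combining band levels into one A-weighted figure.
import math

def a_weight_db(f):
    """A-weighting in dB at frequency f (standard analytic approximation)."""
    ra = (12194 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194 ** 2)
    )
    return 20 * math.log10(ra) + 2.0

# Hypothetical unweighted 1/3-octave noise levels (dB SPL) at band centers (Hz).
bands = {500: 18.0, 1000: 20.0, 2000: 22.0, 4000: 16.0}

# Weight each band, then sum the band powers into one overall figure.
weighted_power = sum(10 ** ((level + a_weight_db(f)) / 10) for f, level in bands.items())
print(f"overall level: {10 * math.log10(weighted_power):.1f} dBA SPL")
```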

A more complete method of specifying internal noise performance is with a noise spectrum measurement. This is performed by attaching the hearing aid to a measuring microphone with a 2 cm3 coupler or ear simulator and then plotting the resulting frequency spectrum with a tracking filter or a spectrum analyzer. Such a graph was shown in Figure 15.

One-third octave noise measurements are similar to noise spectrum measurements. The measurement is made by attaching the hearing aid, through a 2 cm3 coupler or ear simulator, to a spectrum analyzer or to a sound level meter which has the capability of measuring the spectrum in 1/3-octave bands. The internal noise level is then measured and recorded in each 1/3-octave band. The results of such a measurement are shown in Figure 19, which was made on the same hearing aid measured in Figure 15. For reference, Table 9 lists preferred center frequencies and band limits for 1/3-octave passbands.

Figure 19. Example of 1/3-octave measurement of the internal noise of a hearing aid, showing values for the hearing aid measured in Figure 15.

Table 9.

Preferred 1/3-octave passbands.

Center Frequency (Hz) Lower Limit (Hz) Upper Limit (Hz)
160 141 178
200 178 224
250 224 282
315 282 355
400 355 447
500 447 562
630 562 708
800 708 891
1000 891 1120
1260 1120 1410
1600 1410 1780
2000 1780 2240
2500 2240 2820
3150 2820 3550
4000 3550 4470
5000 4470 5620
6300 5620 7080
8000 7080 8910
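The limits in Table 9 follow from the usual base-ten definition of preferred 1/3-octave bands: the exact center of band n is 1000 × 10^(n/10) Hz and the edges lie a factor of 10^(1/20) below and above it. The short sketch below reproduces the table to within preferred-number rounding; treat it as a convenience check, not as text from any standard.

```python
# Recomputing the 1/3-octave band limits of Table 9 from the base-ten definition.
for n in range(-8, 10):                      # nominal centers from 160 Hz to 8000 Hz
    fc = 1000 * 10 ** (n / 10)               # exact band center
    lower, upper = fc / 10 ** 0.05, fc * 10 ** 0.05
    print(f"{fc:7.0f} Hz: {lower:6.0f} - {upper:6.0f} Hz")
```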

Equivalent Input Noise EIN

The preceding three noise measurements are absolute noise measurements. The level of the noise is recorded in decibels SPL. The EIN figure is a different type of noise specification because it is a relative figure. The EIN figure is calculated by subtracting the gain of the hearing aid from the measurement of the acoustic output noise. EIN figures may be calculated either for the total spectrum or in 1/3-octave bands.

The specification for internal noise provided on hearing aid data sheets is the EIN level or Ln, measured according to the ANSI standard method (ANSI, 1996). For a linear hearing aid, this figure is obtained from the formula:

Ln = L2 − (Lav − 60) dB

where L2 is the sound pressure level of the internal noise in decibels, measured in a 2 cm3 coupler by removing the input signal and measuring the output due to the inherent noise within the hearing aid. Lav is the mean sound pressure level in decibels in the coupler, measured with pure tone input signals of 60 dB SPL at 1000, 1600, and 2500 Hz. Both measurements are made with the hearing aid gain control set to the reference test gain position.
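A brief worked example of this formula, using made-up coupler readings rather than values from any figure or table in this article:

```python
# Hypothetical coupler readings for a linear hearing aid at reference test gain.
L2 = 58.0     # coupler SPL of the internal noise with no input signal (dB SPL)
Lav = 92.0    # mean coupler SPL for 60 dB SPL tones at 1000, 1600 and 2500 Hz

gain = Lav - 60.0                # average gain at the reference test position (32 dB here)
Ln = L2 - gain                   # equivalent input noise
print(f"Ln = {Ln:.0f} dB SPL")   # 58 - 32 = 26 dB SPL
```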

ANSI standard S3.22 (1996) notes that for AGC hearing aids this computation method may be misleading. The presence of an input signal may cause the gain to decrease while measuring the reference test gain, then to increase again when measuring the noise with no input signal, thus providing a misleading EIN value. For example, measuring a compression hearing aid that has a low knee-point may result in an artificially high EIN value. This is because the gain under these conditions is highest for no input sound, thus the value for L2 is very high, while Lav is low because the compression is active when measuring the gain value to use in the formula for Lav. Thus the EIN figure may not be indicative of performance on the user.

Another situation in which a misleading EIN figure may be observed is with an unusual hearing aid frequency response, such as one with amplification only in the very high frequencies and very little amplification at the frequencies used to calculate Lav. This again may result in an artificially high EIN figure that does not accurately reflect internal noise levels perceived during hearing aid use. In this situation the measurement should be performed using special purpose average (SPA) frequencies instead of the standard frequencies (ANSI, 1996).

The purpose for calculating an EIN figure is to allow a comparison of the inherent circuit noise of two hearing aids independent of the gain of the hearing aids. A high gain hearing aid will have a higher absolute level of output noise than will a low gain hearing aid. Thus, by subtracting the gain, a comparison may be made between hearing aids without the influence of gain.

IEC (1983a; 1983b) standards for the measurement of hearing aid electroacoustic characteristics calculate EIN in a similar manner, except that only 1600 Hz is used to calculate the hearing aid gain. A method for calculating equivalent input noise in 1/3-octave bands is also described in these standards.

All of the above measurements should be made in an appropriately quiet measurement environment. The levels of internal noise are typically so low that any competing noise in the environment can influence the measurement. As a rule of thumb, the noise floor during the measurement should be more than 10 dB lower than the level of the measurement being made. Thus, after the desired hearing aid noise measurement is made, the noise measurement should be repeated with the hearing aid battery removed in order to determine the influence of ambient noise in the test environment. The observed noise level should be at least 10 dB lower than the noise level measured from the hearing aid in order to consider the measurement uncontaminated by ambient noise. Armstrong (1995) has further described some of the complexities of specifying and understanding internal circuit noise in hearing aids.
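The 10 dB rule of thumb can be checked with a one-line power summation; the levels below are arbitrary illustrative values.

```python
# An ambient floor 10 dB below the quantity being measured inflates the
# measured level by well under 0.5 dB.
import math

hearing_aid_noise = 30.0                   # dB SPL, the level to be measured
ambient_floor = hearing_aid_noise - 10.0   # dB SPL, 10 dB lower

combined = 10 * math.log10(10 ** (hearing_aid_noise / 10) + 10 ** (ambient_floor / 10))
print(f"measured level: {combined:.2f} dB SPL")   # about 30.4 dB SPL
```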

THE PERCEPTION OF INTERNAL NOISE

Very little has appeared in the literature concerning the audibility and perception of internal circuit noise by hearing aid users. The ANSI standard S3.22 (1996) describes a method for calculating a single-figure EIN level for a hearing aid. However, because this is intended to be a quality control standard for hearing aid manufacturing, no attempt is made to relate this calculated noise figure to clinical or psychoacoustic effects. The standard adds, as an optional method in Annex C, a procedure for calculating 1/3-octave output noise measurements and 1/3-octave EIN levels. Although the data obtained could be useful in terms of overall noise perception for the wearer of the hearing aids being measured, no attempt is made in the standard to relate these measurements to psychoacoustic or clinical effects. The corresponding section of the IEC hearing aid measurement standard is essentially the same (IEC, 1983a).

There are two difficulties with relating EIN to perception. First, the hearing aid wearer does not listen to a single EIN figure, but to the absolute output noise: the higher the output noise, the more likely the wearer is to hear it. In practice, however, high gain hearing aids are normally fitted only to users with severe hearing losses, so the higher output noise associated with higher gain may not be perceived by those users.
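Because EIN is the output noise referred back through the gain, the absolute output noise is approximately EIN plus gain. The short sketch below uses two hypothetical aids with the same EIN to show how different gains lead to very different noise levels at the ear.

    # Sketch of the point above: identical EIN, different gain, very different
    # absolute output noise. All values are hypothetical.

    def output_noise_db_spl(ein_db, gain_db):
        """Approximate absolute output noise as EIN plus gain (both in dB)."""
        return ein_db + gain_db

    low_gain_aid = output_noise_db_spl(ein_db=28.0, gain_db=25.0)    # 53 dB SPL
    high_gain_aid = output_noise_db_spl(ein_db=28.0, gain_db=60.0)   # 88 dB SPL

    # The 88 dB SPL noise of the high gain aid may still be inaudible if the
    # wearer's (severe-loss) thresholds lie above it, as noted in the text.
    print(low_gain_aid, high_gain_aid)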

The second difficulty is that a single figure noise measurement does not take into account the spectral shape of the internal noise. The same single EIN figure may be obtained from two hearing aids though the noise spectra may be quite different. Agnew (1996a) has shown that internal circuit noise becomes audible when a 1/3-octave band of noise reaches the user's audiometric threshold at a particular frequency. Thus for two different hearing aids with the same single EIN figure, but different spectra, one may be audible and one may not, depending on the spectra and the configuration of hearing loss involved. This will be discussed further below.

Dillon and Macrae (1984), and Macrae and Dillon (1986) have described a criterion for maximum equivalent input noise of hearing aids, based on an acceptable SNR in 1/3-octave bands. The specification was derived from measurements of noise deemed to be acceptable to hearing aid wearers when listening to a speech signal with a long-term level of 65 dB SPL. The criterion may be relaxed for hearing aids with high gain because, as described above, the internal noise will probably be below the threshold of audibility for individuals with severe hearing losses. Further work reported by Macrae and Dillon (1996) specified a variation of allowable noise according to the coupler gain of the hearing aids, with a caution added that the values relate to hearing aids with linear amplification and not to hearing aids with wide dynamic range compression (WDRC). While providing useful information for hearing aid specifications, these data were based on an acceptable SNR with a speech input present. The criteria described do not apply to the situation where internal circuit noise from hearing aids alone is the stimulus. In practice, hearing aid users may complain of internal circuit noise when there is no additional external sound present to mask the internal noise.

Stuart (1994) has presented a review of the underlying psychoacoustic mechanisms of noise detection and threshold phenomena, and contrasted various measures of human auditory frequency selectivity. The discussion related to normal hearing and a method of modeling the human auditory process, and was not extended to listeners with hearing impairment. Killion (1976) has compared the noise level of a subminiature research hearing aid microphone to the internal noise of the human ear. However, no attempt was made to study the audibility of the microphone noise.

Other research on the effects of noise has concentrated primarily on speech understanding in environments with noise external to the hearing aid. That is not the focus of this discussion; the intention here is to better understand the perception of internally generated noise.

Audible and Objectionable Levels of Internal Noise

It is a complex problem to specify reasonable levels of internal noise performance because the user of hearing aids has impaired hearing. Low levels of internal noise that are audible to a listener with normal hearing may be inaudible to a listener with a hearing impairment if the levels are below auditory threshold. However, as the internal noise level increases, it eventually exceeds the listener's elevated hearing threshold and becomes audible. As the internal noise becomes louder still, at some level it will become distracting and objectionable. With further increases, the noise may become more than objectionable; it may eventually disrupt the ability to understand desired speech.

A simplified diagram showing the pathway of hearing aid noise from its generation within the hearing aid to the auditory cortex is shown in Figure 20. The top line of the figure (row A), reading from left to right, outlines the sequence of the major components of the signal path. This shows the progression of the internal circuit noise from its generation in the hearing aid, through amplification and conversion by the receiver into an acoustic signal. The noise is then coupled into the external ear and passes to the cochlea and on to the auditory cortex.

Figure 20.

Simplified schematic representation of the perception of internally-generated hearing aid noise traveling from the amplifier noise source to the auditory cortex. Row A illustrates the signal path, row B represents the transfer functions of the components of the signal path, and row C represents the effect of the transfer functions on the noise.

As the noise passes through this electronic, acoustic and biological pathway, it is modified by various transfer functions. The pictorials in Figure 20 simplistically illustrate these transfer functions (row B) and the resulting effect on the signal (row C). In this simplified viewpoint, the internal noise is idealistically depicted as being due to a noise generator at the beginning of the pathway. In practice this is a fairly accurate representation because, as mentioned earlier, the dominant noise source in hearing aids is usually the microphone. As shown in Figure 17, the microphone is connected to the input of the preamplifier, which usually has lower inherent noise levels than the microphone. The hearing aid amplifier frequency response is shown in row B of Figure 20 as being flat except in the extreme low and high frequency regions. In practice, there is additional frequency response shaping in order to compensate for the specific hearing loss of the user. Spectral shaping of the noise is also strongly affected by the transfer function of the receiver, which typically acts as a low-pass filter with a peak around the roll-off frequency, as shown in Figure 15.

After being amplified, hearing aid circuit noise is converted by the receiver into sound in the external ear. As the sound passes to the cochlea, its perception is modified by the hearing loss of the listener, represented in Figure 20 by an audiogram with a high frequency sensorineural hearing loss. This loss results in a lowered sensitivity to high-frequency sounds. Thus, the low frequency components of the noise may be audible, whereas the high frequency components may fall below the listener's threshold of hearing. Finally, the sound is perceived by the listener as though presented through a parallel bank of overlapping level-dependent filters with bandwidths equivalent to critical bands (Fletcher, 1940). Critical bands are roughly 1/3-octave wide over a fairly wide frequency range (Moore, 1982), though this approximation of critical bands by 1/3-octave filters is only acceptable above about 300 Hz (Zwicker and Fastl, 1990).
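In dB terms, the shaping illustrated in rows B and C of Figure 20 amounts to adding the gain of each transfer function to the noise spectrum, band by band. The sketch below illustrates this with a flat hypothetical noise floor and made-up amplifier and receiver responses; none of the values are taken from the figure.

    # Sketch of the spectral shaping in Figure 20: the noise spectrum after each
    # stage is the previous spectrum plus that stage's gain in dB (hypothetical values).

    bands_hz       = [250, 500, 1000, 2000, 4000]
    mic_noise_db   = [25,  25,  25,   25,   25]   # flat input-referred noise floor
    amplifier_gain = [30,  35,  40,   40,   35]   # band-limited amplifier response
    receiver_gain  = [0,   2,   5,    3,   -6]    # low-pass receiver with a peak

    output_noise_db = [n + a + r for n, a, r in zip(mic_noise_db, amplifier_gain, receiver_gain)]

    for fc, level in zip(bands_hz, output_noise_db):
        print(f"{fc:>5} Hz: {level} dB SPL")
    # The shaped spectrum is what reaches the ear and is then weighed against
    # the listener's threshold, as described below.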

Agnew (1996a) described a study that related the audibility of internal hearing aid circuit noise to the hearing thresholds of eight listeners with moderate hearing losses. While listening to a hearing aid whose level of internal circuit noise could be varied, each participant adjusted the level of the noise and indicated the point at which the internal circuit noise just became audible. Following this judgment, the spectrum of the noise was measured in 1/3-octave bands at the chosen setting. The audiometric thresholds of the listener were then converted to the corresponding SPL values and superimposed on the individual graph of 1/3-octave bands of noise. This showed that the internal hearing aid noise became audible when any one 1/3-octave band of noise reached the audiometric threshold of the listener expressed in SPL. Figure 21 shows data from one typical test subject. Figure 21a shows the audiogram. Figure 21b shows the audiometric loss converted to SPL values, plotted on the same graph as the 1/3-octave noise from the hearing aid at the level at which the noise just became audible to the subject. The line connecting the solid squares is the audiometric threshold converted to SPL, and the open bars are the noise of the test hearing aid measured in 1/3-octave bands. The graph shows that the noise became audible when the 1/3-octave band at 1000 Hz reached the level of the converted audiometric threshold.

Figure 21.

Example of plotting the SPL audiogram on 1/3-octave band measurements of output noise from a hearing aid.
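A minimal sketch of this audibility criterion follows. The thresholds are assumed to have already been converted from dB HL to dB SPL (a conversion that depends on transducer-specific reference levels not reproduced here), and the noise levels are hypothetical 1/3-octave band measurements.

    # Sketch of the audibility rule from Agnew (1996a): the internal noise is
    # judged audible if any 1/3-octave band reaches the listener's threshold.

    def audible_bands(band_centers_hz, noise_db_spl, thresholds_db_spl):
        """Return the band centers at which the noise meets or exceeds the
        listener's threshold (both expressed in dB SPL)."""
        return [fc for fc, n, t in zip(band_centers_hz, noise_db_spl, thresholds_db_spl) if n >= t]

    centers    = [250, 500, 1000, 2000, 4000]   # Hz
    noise      = [38,  42,  47,   40,   30]     # hypothetical 1/3-octave noise, dB SPL
    thresholds = [45,  48,  46,   60,   75]     # listener thresholds converted to dB SPL

    bands = audible_bands(centers, noise, thresholds)
    print(bands)        # [1000] -- the noise reaches threshold at 1000 Hz
    print(bool(bands))  # True: the noise is predicted to be just audible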

Agnew (1997b) also measured the level of internal hearing aid noise that was perceived to be objectionable by eight listeners with moderate sensorineural hearing impairments. The descriptor “objectionable” was defined for the purposes of this study as the level of noise that the subject felt would be unpleasant to listen to for an extended period of time. The results showed that noise levels between 4 dB and 15 dB (mean value 8.8 dB) above the 1/3-octave audible level became objectionable to the listeners. This was similar to the earlier study reported by Agnew (1996a), which concluded that a mean noise level of about 10 dB above the audible level started to become objectionable to the test subjects. However, because the measured level had a wide variation, it was theorized that the descriptor “objectionable” was subject to wide interpretation by different individuals and was not a reliable measure of the perceived level of annoyance.

Pitch and Level of Audible Internal Noise

Agnew and Block (1997) commented that reports on the perception of internal circuit noise received from hearing aid users were not consistent. Though the characteristics of inherent hearing aid amplifier noise can be measured objectively, wearers differ in their subjective reporting of the perceived pitch and loudness of the noise.

The fundamental physical attributes of a steady tonal sound are its frequency, intensity and duration (Handel, 1993). The corresponding psychoacoustic attributes that make up the perception of a sound are its pitch, loudness and duration. Beyond these three, there is a residual combination of sensations that is commonly and collectively called timbre (Zwicker and Fastl, 1990). Timbre has been defined as “that attribute of auditory sensation in terms of which a listener can judge that two sounds having the same loudness and pitch are dissimilar” (Moore, 1995). Timbre depends on both spectral and temporal aspects of sound, and involves tonality as well as more nebulous sensations, such as “roughness”, “sharpness” and “sensory pleasantness” (Zwicker and Fastl, 1990). Differing perceptions of timbre may well contribute to differing degrees of annoyance in different individuals listening to the same hearing aid circuit noise.

Agnew and Block (1997) performed a study to determine whether or not the auditory perception of internal circuit noise had characteristics that could be associated with a quantifiable physical variable. Four listeners with normal hearing matched their perceptions of the pitch and loudness of the noise from five ITE hearing aids to 1/3-octave bands of noise. An analysis of the results according to the method of Agnew (1996a), in which the listeners' audiograms converted to SPL were superimposed on the 1/3-octave band measurements of hearing aid noise, confirmed that the noise was audible to the listeners under all test conditions. The loudness of the hearing aid noise was quantified by matching its perceived loudness to the intensity of the 1/3-octave bands of noise. Subsequent measurement of the noise with a sound level meter showed a reasonable match between perceived and measured values.

Although the hearing aid noise delivered to the listeners was broadband in nature, the listeners most often matched the perceived pitch of the noise to the frequency of their most sensitive hearing. Egan and Meyer (1950) and de Boer (1962) have speculated that the area on the basilar membrane with the highest SNR determines the perceived pitch of a sound. Since the highest SNR should occur at the frequency of the most sensitive hearing, a pitch match at this frequency would be expected. This concept is also reasonable in light of modern loudness models based on excitation patterns (Zwicker and Fastl, 1990; Humes, 1994; Moore, 1995).
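As an illustration of this idea (not the analysis used in the study itself), the sketch below predicts the pitch match as the band in which the noise stands highest above the listener's threshold, that is, where the effective SNR relative to threshold is greatest; all values are hypothetical.

    # Sketch of the Egan and Meyer (1950) / de Boer (1962) notion: the perceived
    # pitch of a broadband noise is predicted at the frequency where the noise
    # lies furthest above the listener's threshold. Hypothetical values only.

    def predicted_pitch_match_hz(band_centers_hz, noise_db_spl, thresholds_db_spl):
        """Return the band center with the largest noise-minus-threshold margin."""
        margins = [n - t for n, t in zip(noise_db_spl, thresholds_db_spl)]
        best = max(range(len(margins)), key=lambda i: margins[i])
        return band_centers_hz[best]

    centers    = [250, 500, 1000, 2000, 4000]   # Hz
    noise      = [50,  52,  50,   48,   45]     # dB SPL per 1/3-octave band
    thresholds = [20,  15,  10,   25,   40]     # most sensitive hearing near 1000 Hz

    print(predicted_pitch_match_hz(centers, noise, thresholds))  # 1000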

Acceptable Levels of Internal Noise

Although the ANSI (1996) and IEC (1983a; 1983b) specifications describe how to measure the EIN figure, neither standard recommends a target figure or an allowable maximum EIN for hearing aid noise. When evaluating EIN, the noise performance of the hearing aid is assumed to improve as the calculated Ln figure becomes smaller. The EIN of most hearing aids is measured to be between about 24 dB and 28 dB. Some hearing aids may be lower; some may be higher. A typical maximum allowable figure used by manufacturers is an EIN of 30 dB. Sweetow (1990) reported that one manufacturer considered an EIN of greater than 35 dB to be unacceptable for a Class A linear hearing aid. Kuk (1996) states that an EIN below 25 dB is acceptable, but that EIN may exceed 30 dB for some programmable hearing aids.

In general, internal noise tends to reach levels that are objectionable to the listener in hearing aids with EIN figures of over about 30 dB. However, this guideline is not always useful. For example, because power hearing aids have very high gain, the calculated EIN may be quite low while the absolute output noise is very high. Depending on the configuration and severity of the hearing loss, the user may or may not perceive the internal noise. Also, for users with hearing losses that drop sharply in the high frequencies, hearing aids with low EIN figures may still be objectionable because hearing sensitivity is often normal in the low frequencies, where the noise therefore remains audible.

CONCLUSIONS

Though there is still controversy about definitive links between distortion and intelligibility or between distortion and sound quality, a few generalizations can be made. It can reasonably be concluded that high levels of distortion will degrade sound quality by transforming pleasant sounds into ones that are unpleasant to listen to. In many cases, extended periods of listening to these distorted sounds will be fatiguing for the listener.

Distortion can also affect intelligibility. Highly distorted sounds generally remain intelligible in quiet, though the sound quality will probably be unpleasant. However, the addition of noise, whether internal circuit noise, external environmental noise, or other forms of distortion, can significantly reduce the intelligibility of speech.

Finally, high levels of internal circuit noise can be objectionable for a hearing aid user in a quiet listening situation. If loud enough, this noise can disrupt intelligibility, either through direct masking of low level sounds or through the creation of additional noise products.

REFERENCES

1. Agnew J. (1988). Hearing instrument distortion: what does it mean for the listener? Hear Instr 39 (10):10, 12, 14, 16, 20, 61
2. Agnew J. (1991). Advanced digital signal processing schemes for ITEs. Hear Instr 42 (9):13–14, 16–17
3. Agnew J. (1994). Measurement of distortion levels in hearing aids; it's not a simple matter. Hear J 47 (5):25–27, 30–32
4. Agnew J. (1995). The saturation distortion problem in hearing aids: causes and a solution. Hear J 48 (8):33–34, 36–38
5. Agnew J. (1996a). Perception of internally-generated noise in hearing amplification. J Am Acad Audiol 7 (4): 296–303
6. Agnew J. (1996b). Acoustic feedback and other audible artifacts in hearing aids. Trends Ampl 1 (2): 45–82
7. Agnew J. (1997a). Sound quality evaluation of anti-saturation circuitry in a hearing aid. Scand Audiol 26: 15–22
8. Agnew J. (1997b). Audible circuit noise in hearing aid amplifiers. J Acoust Soc Am 102 (5): 2793–2799
9. Agnew J, Block M. (1997). The perception of internal circuit noise in hearing aids by listeners with normal hearing. J Speech Lang Hear Res 40: 1177–1191
10. Agnew J, Mayhugh CM. (1997). The Causes and Effects of Saturation Distortion in Hearing Aids. Instructional Course #806. 9th Annual Convention of the American Academy of Audiology, Fort Lauderdale, FL: April 17–20, 1997
11. Agnew J, Potts LG, Valente M. (1997). Sound quality judgments in class A and class D hearing aids. Am J Audiol 6 (2): 33–44
12. American National Standards Institute (ANSI). (1992). Testing Hearing Aids with a Broad-band Noise Signal. American National Standard ANSI S3.42–1992. Acoustical Society of America, New York, NY
13. American National Standards Institute (ANSI). (1996). Specification of Hearing Aid Characteristics. American National Standard ANSI S3.22–1996. Acoustical Society of America, New York, NY
14. Anderson BA, Hagen LT, Peterson TS, Preves DA, Roberts RW. (1996). Hearing instrument distortion. Hear Rev 3 (11):18, 22, 24, 26, 69
15. Armstrong S. (1995). Finding and reducing the causes of circuit noise. Hear Rev 2 (2): 9–11
16. Ashley R. (1976). On the audibility of distortion in loudspeakers. IEEE Conference on ASSP, Philadelphia, PA
17. Batteau DW. (1967). The role of the pinna in human localization. Proceedings of the Royal Society, Series B 168: 158–180
18. Blauert J. (1983). Spatial Hearing. The MIT Press, Cambridge, MA
19. Bloom PJ. (1985). High-quality digital audio in the entertainment industry: an overview of achievements and challenges. IEEE ASSP Magazine 2 (4): 2–25
20. Bode DL, Kasten RN. (1971). Hearing aid distortion and consonant identification. J Speech Hear Res 14: 323–331
21. Broch JT. (1971). Acoustic Noise Measurements. Bruel & Kjaer, Copenhagen, Denmark
22. Brockbank RA, Wass CAA. (1945). Nonlinear distortion in transmission systems. J Inst Elec Eng 12: 45–56
23. Burnett ED. (1967). A new method for the measurement of non-linear distortion using a random noise test signal. Bull Prosth Res (Spring).
24. Cabot RC. (1988). Audio tests and measurements. In: Benson KB. (ed). Audio Engineering Handbook. McGraw-Hill Book Company, New York, 16.1–16.72
25. Carlson EV. (1988). An output amplifier whose time has come. Hear Instr 39 (10):30, 32
26. Cole WA. (1993). Current design options and criteria for hearing aids. In: Jamieson DG. (ed). Matching technology to the needs of the hearing impaired. J Speech Lang Path Audiol, Monogr Suppl 1 (Jan): 7–14
27. Cordell RR. (1983). Phase intermodulation distortion instrumentation and measurements. J Audio Eng Soc 31 (3): 114–123
28. Corliss ELR, Burnett ED, Kobal MT, Bassin MA. (1968). The relative importance of frequency distortion and changes in time constants in the intelligibility of speech. IEEE Tr Audio Electroacoust AU-16 (1): 36–39
29. Crain TR, Van Tasell DJ. (1994). Effect of peak clipping on speech recognition threshold. Ear Hear 15: 443–453
30. Curran JR. (1974). Harmonic distortion and intelligibility. Hear Aid J 27 (6):12, 39
31. de Boer E. (1962). Note on the critical bandwidth. J Acoust Soc Am 34: 985–986
32. Dillon H, Macrae J. (1984). Derivation of design specifications for hearing aids. National Acoustics Laboratories Report Number 102. Australian Government Publishing Service, Canberra, Australia, 1–129
33. Douglas-Young J. (1981). Illustrated Encyclopedic Dictionary of Electronics. Parker Publishing Co., Inc., West Nyack, NY
34. DuPret JP, Lefevre F. (1991). Principles of the signal processing of the EMILY device. In: A New Approach for Auditory Rehabilitation. Translation of excerpted articles from the French audiological magazine Les Cahiers de l'Audition, June 1991. Laboratoire d'Audiologie DuPret-Lefevre, Montbeliard, France, 45–51
35. Durrant JD, Lovrinic JH. (1984). Bases of Hearing Science. Williams and Wilkins, Baltimore, MD
36. Dyrlund O. (1989). Characterization of non-linear distortion in hearing aids using coherence analysis. Scand Audiol 18: 143–148
37. Dyrlund O. (1992). Coherence measurements in hearing instruments, using different broad-band signals. Scand Audiol 21: 73–78
38. Dyrlund O, Ludvigsen C, Olofsson A, Poulsen T. (1994). Hearing aid measurement with speech and noise signals. Scand Audiol 23: 153–157
39. Egan JP, Meyer DR. (1950). Changes in pitch of tones of low frequency as a function of the pattern of excitation produced by a band of noise. J Acoust Soc Am 22: 827–833
40. Fielder LD. (1985). The audibility of modulation noise in floating-point conversion systems. J Audio Eng Soc 33 (10): 770–781
41. Fish PJ. (1994). Electronic Noise and Low Noise Design. McGraw-Hill, Inc., New York, NY
42. Fletcher H. (1940). Auditory patterns. Rev Mod Phys 12: 47–65
43. Foreman C. (1987). Sound system design. In: Ballou G. (ed). Handbook for Sound Engineers. Howard W. Sams & Company, Indianapolis, IN, 995–1089
44. Fortune TW, Preves DA, Woodruff BD. (1991). Saturation-induced distortion and its effects on aided LDL. Hear Instr 42 (10):37, 40, 42
45. Franks JR. (1982). Judgments of hearing aid processed music. Ear Hear 3 (1): 18–23
46. Gabrielsson A, Lindstrom B. (1985). Perceived sound quality of high-fidelity loudspeakers. J Audio Eng Soc 33 (1/2): 33–53
47. Gabrielsson A, Sjogren H. (1979a). Perceived sound quality of sound reproducing systems. J Acoust Soc Am 65 (4): 1019–1033
48. Gabrielsson A, Sjogren H. (1979b). Perceived sound quality of hearing aids. Scand Audiol 8: 159–169
49. Gabrielsson A, Schenkman BN, Hagerman B. (1988). The effects of different frequency responses on sound quality judgments and speech intelligibility. J Speech Hear Res 31: 166–177
50. Grebene AB. (1984). Bipolar and MOS Analog Integrated Circuit Design. John Wiley & Sons, New York, NY
51. Gregorian R, Temes GC. (1986). Analog MOS Integrated Circuits for Signal Processing. John Wiley and Sons, New York, NY
52. Handel S. (1993). Listening: An Introduction to the Perception of Auditory Events. The MIT Press, Cambridge, MA
53. Hawkins DB, Naidoo SV. (1993). Comparison of sound quality and clarity with asymmetrical peak clipping and output limiting compression. J Am Acad Audiol 4: 221–228
54. Humes LE. (1994). Psychoacoustic considerations in clinical audiology. In: Katz J. (ed). Handbook of Clinical Audiology. Fourth Edition. Williams & Wilkins, Baltimore, MD, 56–72
55. International Electrotechnical Commission (IEC). (1983a). Hearing Aids. Part 1: Measurement of Electroacoustical Characteristics. Publication 118–0. International Electrotechnical Commission, Geneva, Switzerland
56. International Electrotechnical Commission (IEC). (1983b). Hearing Aids. Part 7: Measurement of the Performance Characteristics of Hearing Aids for Quality Inspection for Delivery Purposes. Publication 118–7. International Electrotechnical Commission, Geneva, Switzerland
57. International Electrotechnical Commission (IEC). (1997). Amendment 2: Measurement of Frequency Input Signals, to IEC 118–2 (1993): Hearing Aids—Part 2: Hearing Aids with Automatic Gain Control Circuits. International Electrotechnical Commission, Geneva, Switzerland
58. Jirsa RE, Norris TW. (1982). Effects of intermodulation distortion on speech intelligibility. Ear Hear 3 (5): 251–256
59. Johnson WA, Killion MC. (1994). Amplification: is class D better than class B? Am J Audiol (March): 11–13
60. Kates JM. (1990). A test suite for hearing aid evaluation. J Rehab Res Dev 27 (3): 255–278
61. Kates JM. (1992). On using coherence to measure distortion in hearing aids. J Acoust Soc Am 91 (4): 2236–2244
62. Kates JM, Kozma-Spytek L. (1994). Quality ratings for frequency-shaped peak-clipped speech. J Acoust Soc Am 95 (6): 3586–3593
63. Killion MC. (1976). Noise of ears and microphones. J Acoust Soc Am 59 (2): 424–433
64. Killion MC. (1979). Design and Evaluation of High Fidelity Hearing Aids. Doctoral Dissertation, Northwestern University, Evanston, IL
65. Killion MC. (1988). Principles of high fidelity amplification. In: Sandlin RE. (ed). Handbook of Hearing Aid Amplification, Volume 1: Theoretical and Technical Considerations. College-Hill Press, Boston, MA, 45–79
66. Killion MC. (1993). The K-Amp hearing aid: an attempt to present high fidelity for persons with impaired hearing. Am J Audiol 2: 52–74
67. Kochkin S, Ballad WJ. (1991). Dispenser sound quality perceptions of class-D integrated receivers. Hear Instr 42 (4):25, 28
68. Krebs DF. (1972). Distortion and methods of output limiting. Scand Audiol 1 (4): 167–175
69. Kuk FK. (1996). The effects of distortion on user satisfaction with hearing aids. In: Valente M. (ed). Hearing Aids: Standards, Options, and Limitations. Thieme Medical Publishers, Inc., New York, NY, 327–367
70. Lacroix PG, Harris JD, Randolph KJ. (1979). Multiplicative effects on sentence comprehension for combined acoustic distortions. J Speech Hear Res 22: 259–269
71. Langford-Smith F. (1960). Fidelity and distortion. In: Langford-Smith F. (ed). Radiotron Designer's Handbook. Fourth Edition. Radio Corporation of America, Harrison, NJ, 603–634
72. Leinonen E, Otala M, Curl J. (1977). A method for measuring transient intermodulation distortion (TIM). J Audio Eng Soc 25 (4): 170–177
73. Levitt H. (1987). Digital hearing aids: a tutorial review. J Rehab Res Dev 24 (4): 7–20
74. Levitt H, Cudahy E, Hwang W, Kennedy E, Link C. (1987). Towards a general measure of distortion. J Rehab Res Dev 24 (4): 283–292
75. Licklider JCR. (1946). Effects of amplitude distortion upon the intelligibility of speech. J Acoust Soc Am 18 (2): 429–434
76. Longwell TF, Gawinski MJ. (1992). Fitting strategies for the 90s: class-D amplification. Hear J 45 (9):26, 28, 30–31
77. Lotterman SH, Kasten RN. (1967). Nonlinear distortion in modern hearing aids. J Speech Hear Res 10: 586–592
78. Macrae J, Dillon H. (1986). Updated performance requirements for hearing aids. National Acoustics Laboratories Report Number 109. Australian Government Publishing Service, Canberra, Australia, 1–31
79. Macrae J, Dillon H. (1996). An equivalent input noise level criterion for hearing aids. J Rehab Res Dev 33 (4): 355–362
80. Metzler B. (1993). Audio Measurement Handbook. Audio Precision, Inc., Beaverton, OR
81. Moller H. (1978a). Loudspeaker phase measurements, transient response and audible quality. Application Note #17–198. Bruel & Kjaer, Cleveland, OH
82. Moller H. (1978b). Multidimensional audio. Application Note #17–206. Bruel & Kjaer, Cleveland, OH
83. Moore BCJ. (1982). An Introduction to the Psychology of Hearing. Academic Press, London
84. Moore BCJ. (1986). Frequency Selectivity in Hearing. Academic Press, London
85. Moore BCJ. (1995). Perceptual Consequences of Cochlear Damage. Oxford University Press, Oxford
86. Murray DJ, Hansen JV. (1992). Application of digital signal processing to hearing aids: a critical survey. J Am Acad Audiol 3: 145–152
87. Olson HF. (1972). Modern Sound Reproduction. Van Nostrand Reinhold, New York, NY
88. Palmer CV, Killion MC, Wilber LA, Ballad WJ. (1995). Comparison of two hearing aid receiver-amplifier combinations using sound quality judgments. Ear Hear 16 (6): 587–598
89. Parent TC, Chmiel R, Jerger J. (1997). Comparison of performance with frequency transposition hearing aids and conventional hearing aids. J Am Acad Audiol 8 (5): 355–365
90. Peters RW, Burkhard MD. (1968). On noise and harmonic distortion measurements. Report No 10350–1. Industrial Research Products, Inc., Elk Grove Village, IL, 1–16
91. Pohlmann KC. (ed). (1991). Advanced Digital Audio. Sams, Carmel, IN
92. Pohlmann KC. (1992). Principles of Digital Audio. Sams, Carmel, IN
93. Preves DA. (1990). Expressing hearing aid noise and distortion with coherence measurements. Am Speech Lang Hear Assoc (June/July): 56–59
94. Preves DA. (1994). Future trends in hearing aid technology. In: Valente M. (ed). Strategies for Selecting and Verifying Hearing Aid Fittings. Thieme Medical Publishers, New York, NY, 363–396
95. Preves DA, Newton JR. (1989). The headroom problem and hearing aid performance. Hear J 42 (10):19–21, 24–26
96. Preves DA, Woodruff BD. (1990). Some methods of improving and assessing hearing aid headroom. Au-decibel (Summer): 8–13
97. Punch JL. (1978). Quality judgments of hearing-aid-processed speech and music by normal and otopathologic listeners. J Am Audiol Soc 3 (4): 179–188
98. Punch JL, Montgomery AA, Schwartz DM, Walden BE, Prosek RA, Howard MT. (1980). Multidimensional scaling of quality judgments of speech signals processed by hearing aids. J Acoust Soc Am 68 (2): 458–466
99. Revit LJ. (1994). Using coupler tests in the fitting of hearing aids. In: Valente M. (ed). Strategies for Selecting and Verifying Hearing Aid Fittings. Thieme Medical Publishers, New York, NY, 64–87
100. Rodgers CAP. (1981). Pinna transformations and sound reproduction. J Audio Eng Soc 29 (4): 226–233
101. Schneider T, Jamieson DG. (1995). Using maximum length sequence coherence for broadband distortion measurements on hearing aids. J Acoust Soc Am 97 (4): 2282–2292
102. Schweitzer HC, Causey GD, Tolton MC. (1977). Nonlinear distortion in hearing aids: the need for reevaluation of measurement philosophy and technique. Ear Hear (2) 4: 132–141
103. Scroggie MG. (1954). Radio Laboratory Handbook. Iliffe & Sons, Ltd., London
104. Scroggie MG. (1958). Foundations of Wireless. Iliffe & Sons, Ltd., London
105. Shorter DEL. (1950). The influence of high order products on non-linear distortion. Electronic Eng 22226: 152
106. Singer J. (1981). Some effects of intermodulation distortion on speech intelligibility. J Aud Res 21: 201–206
107. Skritek P. (1983). Dynamic distortion measurements of tape recorders and electroacoustic transducers. J Audio Eng Soc 31 (7): 512–516
108. Skritek P. (1987). A combined measurement method for both dynamic intermodulation and static nonlinear distortions. J Audio Eng Soc 35 (1/2): 31–37
109. Small RH. (1986). Total difference-frequency distortion: practical measurements. J Audio Eng Soc 34 (6): 427–436
110. Staab WJ. (1990). Digital/programmable hearing aids—an eye towards the future. Brit J Audiol 24: 243–256
111. Stuart JR. (1994). Noise: methods for estimating detectability and threshold. J Audio Eng Soc 42 (3): 124–139
112. Subbarao WV. (1974). Boost audio-amplifier efficiencies. Elec Des 8: April 12, 96–98
113. Sweetow RW. (1990). Determining permissible internal noise levels. Hear Instr 41 (2):33
114. Teder H. (1990). Noise and speech levels in noisy environments. Hear Instr 41 (4): 32–33
115. Teder H. (1993). Compression in the time domain. Am J Audiol 2 (2): 41–46
116. Teder H. (1995). Common transient sounds: the kitchen is a very noisy place. Hear Rev 2 (1):10–11, 49
117. Thiele AN. (1983). Measurement of nonlinear distortion in a band-limited system. J Audio Eng Soc 31 (6): 443–445
118. Thomsen C, Moller H. (1975). Swept measurements of harmonic, difference frequency and intermodulation distortion. Application Note 15–098. Bruel & Kjaer, Cleveland, OH
119. Van Tasell D, Crain T. (1992). Noise reduction hearing aids: release from masking and release from distortion. Ear Hear 13: 114–121
120. White GD. (1993). The Audio Dictionary. (2nd Ed). University of Washington Press, Seattle, WA
121. White PS. (1977). Swept measurements of difference frequency intermodulation and harmonic distortion of hearing aids. Application Note 16–017. Bruel & Kjaer, Cleveland, OH
122. Williamson MJ, Cummins KL, Hecox KE. (1987). Speech distortion measures for hearing aids. J Rehab Res Dev 24 (4): 277–282
123. Williamson MJ, Punch JL. (1990). Speech enhancement in digital hearing aids. Sem Hear 11 (1): 68–78
124. Witter HL, Goldstein DP. (1971). Quality judgments of hearing aid transduced speech. J Speech Hear Res 14: 312–322
125. Yanick P. (1977). Transient distortion and hearing aid circuits. Hear Instr 28 (1): 8–9
126. Zwicker E, Fastl H. (1990). Psychoacoustics, Facts and Models. Springer-Verlag, Berlin
