. Author manuscript; available in PMC: 2021 Mar 18.
Published in final edited form as: IEEE/ACM Trans Audio Speech Lang Process. 2020 Aug 14;28:2489–2499. doi: 10.1109/taslp.2020.3016487

On Cross-Corpus Generalization of Deep Learning Based Speech Enhancement

Ashutosh Pandey 1, DeLiang Wang 2
PMCID: PMC7971413  NIHMSID: NIHMS1626101  PMID: 33748327

Abstract

In recent years, supervised approaches using deep neural networks (DNNs) have become the mainstream for speech enhancement. It has been established that DNNs generalize well to untrained noises and speakers if trained using a large number of noises and speakers. However, we find that DNNs fail to generalize to new speech corpora in low signal-to-noise ratio (SNR) conditions. In this work, we establish that the lack of generalization is mainly due to the channel mismatch, i.e., different recording conditions between trained and untrained corpora. Additionally, we observe that traditional channel normalization techniques are not effective in improving cross-corpus generalization. Further, we evaluate publicly available datasets that are promising for generalization. We find one particular corpus to be significantly better than others. Finally, we find that using a smaller frame shift in short-time processing of speech can significantly improve cross-corpus generalization. The proposed techniques to address cross-corpus generalization include channel normalization, a better training corpus, and a smaller frame shift in the short-time Fourier transform (STFT). Together, these techniques significantly improve objective intelligibility and quality scores on untrained corpora.

Keywords: Speech enhancement, channel generalization, deep learning, cross-corpus generalization, robust enhancement

I. Introduction

Speech signals in real-world environments are degraded by background noise. Degraded speech can severely hurt the performance of speech-based applications such as automatic speech recognition (ASR), speaker identification, and hearing aids. Speech enhancement is concerned with improving the intelligibility and quality of a speech signal degraded by additive noise, and it is commonly used as a preprocessor in speech-based applications to improve their performance in noisy environments.

In real-world environments, speech signals are varied or distorted [1]. Sources of variations include background noise, room reverberation, speaker, language, accent, and communication channel. Ideally a speech enhancement algorithm should work well in different acoustic conditions. However, developing a general algorithm that works in all conditions remains a technical challenge.

Traditional approaches to speech enhancement include spectral subtraction [2], Wiener filtering [3], statistical model-based methods [4], and nonnegative matrix factorization [5]. These approaches work well for stationary noises but have difficulty in handling nonstationary noises or a large number of speakers. In recent years, deep learning-based approaches have become the mainstream for speech enhancement (see [6] for an overview). Among the most popular deep learning approaches are fully-connected networks [7], [8], recurrent neural networks (RNNs) [9], [10] and convolutional neural networks (CNNs) [11], [12], [13].

In [14], Chen et al. demonstrated that fully connected feedforward networks trained on a single speaker, using a large number of noises, can generalize to untrained noises. However, such a network has difficulty generalizing to both untrained speakers and untrained noises, even when trained using a large number of noises and speakers [10]. In [10], an RNN with long short-term memory (LSTM) is employed to develop a speaker- and noise-independent model for speech enhancement. This was achieved by training a four-layer RNN model using utterances from 77 speakers mixed with 10000 different noises.

In the last few years, speech enhancement research has aimed to improve the performance of speaker- and noise-independent models. In [12], the authors propose a CNN with gated and dilated convolutions for magnitude-spectrum enhancement. A recent trend is the enhancement of phase, which obtains better speech enhancement than magnitude-only approaches. The two popular approaches are complex-spectrogram enhancement [15], [16], [17], [18], [19] and time-domain enhancement [20], [13], [21], [22], [23], [24].

The common practice in DNN-based approaches is to train a DNN using utterances of different speakers from a single corpus and evaluate it on untrained speakers from the same corpus. However, we find that when evaluated on utterances from untrained corpora, DNN performance can degrade significantly. This behavior has not been revealed and analyzed before. To be suitable for real-world applications, speech enhancement has to work on noisy utterances recorded in an unknown fashion, i.e., on any untrained corpus.

In this work, we perform an experimental study to understand the cross-corpus generalization of DNNs. Our key observation is that the generalization gap is severe in low SNR conditions and is mainly due to the channel mismatch between different speech corpora. We examine the effectiveness of traditional channel normalization techniques for speech enhancement in low SNR conditions.

The general behavior of traditional channel normalization methods used in ASR or speaker identification systems, such as cepstral mean subtraction (CMS) [25], [26] or RASTA filtering [27], [28], is unknown for supervised speech enhancement. In supervised approaches to speech enhancement, a noisy utterance is generated by adding a noise segment to a clean speech utterance. It is highly unlikely that the channels of the clean speech and the noise will be similar. This creates a channel situation different from those in ASR and speaker recognition, where the noise channel is not a main concern. In other words, a noisy utterance captures two kinds of channel effects, one for speech and the other for noise. This implies that the channel predicted from a noisy utterance may be inaccurate in noise-dominant segments. To verify this analysis, we have evaluated two different channel normalization methods, mean subtraction and RASTA filtering, in the log-spectrum domain. We choose the log-spectrum domain because most DNN-based speech enhancement systems use either the spectrum or the log-spectrum as input features. We observe improved enhancement using channel normalization; however, the improvements are limited in low SNR conditions.

Further, we evaluate different corpora that are promising for cross-corpus generalization. A corpus that is recorded using many microphones or in different acoustic conditions would be promising, as it exposes the underlying DNN model to different channels. LibriSpeech [29] and VoxCeleb2 [30] are two such corpora. The utterances in LibriSpeech are extracted from audiobooks that are read by different volunteers across the globe. This implies that the utterances recorded by different volunteers have different channel characteristics. VoxCeleb2 utterances are extracted from the audio in YouTube videos and hence are recorded in different conditions and using different devices. We find LibriSpeech to be significantly better than VoxCeleb2 and WSJ [31], the last of which is commonly used to train speaker-independent enhancement models.

Additionally, we investigate the use of smaller frame shifts in STFT, as smaller shifts may lead to better cross-corpus generalization because of the averaging effect in the overlap-and-add stage of inverse STFT. This turns out to be a very simple and effective technique for improving cross-corpus generalization.

Finally, we combine all the proposed techniques: channel normalization, a better training corpus, and a smaller frame shift. This combination substantially improves objective intelligibility and quality scores. The short-time objective intelligibility (STOI) [32] and perceptual evaluation of speech quality (PESQ) [33] scores at −5 dB SNR for babble noise are improved by 13.9 percentage points and 0.59, respectively, for the utterances of a male speaker in the challenging IEEE corpus [34].

To our knowledge, this is the first systematic study on cross-corpus generalization in DNN-based speech enhancement. The results of this study, we believe, represent a major step towards robust speech enhancement in real-world conditions. The rest of the paper is organized as follows. In Section II, we describe the speech enhancement framework used in this study. Section III explains the notion of a corpus channel. Section IV illustrates the corpus fitting problem in speech enhancement. In Section V, we describe the techniques explored in this study to improve cross-corpus generalization. Experimental settings are given in Section VI, and Section VII presents the results. Concluding remarks are given in Section VIII.

II. Deep Learning Based Speech Enhancement

A. Problem Definition

Given a clean speech signal $x$ and a noise signal $n$, the noisy speech signal is formed by additive mixing as follows:

$$y = x + n \tag{1}$$

where $\{y, x, n\} \in \mathbb{R}^{M \times 1}$ and $M$ represents the number of samples in the signal. The goal of a speech enhancement algorithm is to obtain a close estimate, $\hat{x}$, of $x$ given $y$.

B. Data Generation

Given a speech corpus $C$ containing $N_{tr}$ training utterances $\{x_{tr}^{1}, x_{tr}^{2}, \ldots, x_{tr}^{N_{tr}}\}$ and $N_{te}$ test utterances $\{x_{te}^{1}, x_{te}^{2}, \ldots, x_{te}^{N_{te}}\}$, we denote $C_{tr}$ as the set of training utterances and $C_{te}$ as the set of test utterances in corpus $C$.

The noisy utterances are generated by artificially adding noises to the utterances in Ctr and Cte.

$$y_{tr}^{i} = x_{tr}^{i} + n_{tr}^{i}, \quad i = 1, 2, \ldots, N_{tr} \tag{2}$$
$$y_{te}^{j} = x_{te}^{j} + n_{te}^{j}, \quad j = 1, 2, \ldots, N_{te} \tag{3}$$

In general, to assess noise generalization, $n_{tr}^{i}$ and $n_{te}^{j}$ are set to be either different noises or different segments of nonstationary noises. Similarly, to assess speaker generalization, the speakers in $C_{tr}$ and $C_{te}$ are set to be different.

In this work, we evaluate DNN based speech enhancement models for cross-corpus generalization. We train different models on corpora $\{C_{tr}^{1}, C_{tr}^{2}, \ldots, C_{tr}^{P_{tr}}\}$ but evaluate them on utterances from untrained corpora $\{\hat{C}_{te}^{1}, \hat{C}_{te}^{2}, \ldots, \hat{C}_{te}^{P_{te}}\}$. $P_{tr}$ and $P_{te}$ denote the numbers of training and test corpora, respectively.

C. Feature Extraction and Training Targets

The pairs {x, y, n} are transformed to the time-frequency (T-F) representation using STFT.

$$X = \mathrm{STFT}(x) \tag{4}$$
$$Y = \mathrm{STFT}(y) \tag{5}$$
$$N = \mathrm{STFT}(n) \tag{6}$$

where $\{X, Y, N\} \in \mathbb{C}^{T \times F}$, and $T$ and $F$ represent the number of frames and the number of frequency bins. In this study, we use either the STFT magnitude $|Y|$ or the logarithm of the STFT magnitude, $\log|Y|$, as the input feature.

There are many training targets studied in the literature such as the ideal ratio mask (IRM) [35], STFT magnitude [8], and spectral magnitude mask (SMM) [35]. We use the IRM in this study, defined as:

$$\mathrm{IRM}(t, f) = \sqrt{\frac{|X(t, f)|^{2}}{|X(t, f)|^{2} + |N(t, f)|^{2}}} \tag{7}$$

where $X(t, f)$, $N(t, f)$ and $\mathrm{IRM}(t, f)$, respectively, denote the values of $X$, $N$ and the IRM at the corresponding T-F unit.
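As a concrete illustration, the IRM above can be computed in a few lines of NumPy (a minimal sketch; the `eps` floor is our addition to avoid division by zero in silent T-F units, and the square-root form follows the IRM definition of [35]):

```python
import numpy as np

def ideal_ratio_mask(X_mag, N_mag, eps=1e-8):
    """Compute the IRM (Eq. 7) from STFT magnitudes of speech and noise.

    X_mag, N_mag: arrays of shape (T, F) holding |X(t, f)| and |N(t, f)|.
    Returns a mask of the same shape with values in [0, 1].
    """
    speech_energy = X_mag ** 2
    noise_energy = N_mag ** 2
    return np.sqrt(speech_energy / (speech_energy + noise_energy + eps))
```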

D. Model Architecture

We use a 4-layer bidirectional LSTM (BLSTM) network with 512 hidden units in each direction. One fully-connected layer with 512 units is used before the BLSTM, which is followed by a fully-connected layer at the output with sigmoidal nonlinearity.

E. Loss Function

The BLSTM network takes as input the feature, $|Y|$ or $\log|Y|$, and outputs the estimated IRM, $\widehat{\mathrm{IRM}}$. A mean squared error (MSE) loss is used between $\mathrm{IRM}$ and $\widehat{\mathrm{IRM}}$. The utterance-level MSE loss is given below.

$$L = \frac{1}{TF} \sum_{t=1}^{T} \sum_{f=1}^{F} \left[\mathrm{IRM}(t, f) - \widehat{\mathrm{IRM}}(t, f)\right]^{2} \tag{8}$$

F. Time Domain Reconstruction

The trained model is used for predicting the IRM of noisy utterances in the test set. $\widehat{\mathrm{IRM}}$ is multiplied element-wise with the noisy STFT magnitude, $|Y|$, to obtain the enhanced STFT magnitude, $|\hat{X}|$.

$$|\hat{X}| = |Y| \otimes \widehat{\mathrm{IRM}} \tag{9}$$

where ⊗ denotes element-wise multiplication.

The estimated STFT magnitude is combined with the noisy STFT phase to obtain the estimated STFT.

$$\hat{X} = |\hat{X}| \, e^{j \angle Y} \tag{10}$$

where ∠Y represents the noisy phase. Finally, inverse STFT is used to obtain the enhanced waveform.

$$\hat{x} = \mathrm{ISTFT}(\hat{X}) \tag{11}$$
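Eqs. 9-11 together form the standard masking-and-resynthesis pipeline, which can be sketched with SciPy's STFT routines (a minimal sketch; the function name is illustrative, and the 512-sample frame with 256-sample shift corresponds to 32 ms and 16 ms at 16 kHz):

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(y, mask, fs=16000, nperseg=512, noverlap=256):
    """Apply an estimated T-F mask to a noisy waveform (Eqs. 9-11).

    y: 1-D noisy waveform. mask: array of shape (F, T) matching the
    layout of scipy.signal.stft output.
    """
    _, _, Y = stft(y, fs=fs, nperseg=nperseg, noverlap=noverlap)
    X_hat_mag = np.abs(Y) * mask                   # Eq. 9: masked magnitude
    X_hat = X_hat_mag * np.exp(1j * np.angle(Y))   # Eq. 10: attach noisy phase
    _, x_hat = istft(X_hat, fs=fs, nperseg=nperseg, noverlap=noverlap)  # Eq. 11
    return x_hat
```

With an all-ones mask this pipeline reduces to an STFT round trip, which is a convenient sanity check.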

III. Corpus Channel

A speech corpus generally contains different utterances spoken by many speakers. The utterances are recorded in a controlled environment so that the recording is clean and suitable for speech-based applications. The different controlled environments used for different corpora may lead to different stationary components in the utterances. For example, if the recording microphones are different, a sentence spoken by the same person can be very different in quality. We refer to the stationary component of a corpus as the corpus channel.

An algorithm developed and shown to be effective for one corpus may not work when evaluated on a corpus recorded in a different condition. To illustrate this, Fig. 1 plots the log-spectrum of an utterance from the TIMIT corpus [36] that is convolved with two different microphone impulse response (MIR) functions. We can observe that the energy patterns in the two spectra are very different. The left spectrum has higher energy around the 100th frequency bin and lower energy around the 0th bin compared to the right spectrum. This type of difference in distribution may cause an algorithm to degrade on untrained corpora. A stationary channel can be defined as a linear, time-invariant filter given in the following equation,

$$x = s * h, \quad x[n] = \sum_{k=0}^{K-1} s[n-k] \, h[k] \tag{12}$$

where $*$ denotes the convolution operator, $x$ and $s$ are discrete signals indexed by $n$, and $h$ is a digital filter with $K$ taps. When the underlying signal, $s$, is a time-varying speech signal, Equation 12 can be transformed into the following form using the STFT.

$$X(t, f) = S(t, f) \, H(f) \tag{13}$$

where $H$ is the time-invariant but frequency-dependent gain introduced by the channel. Note that $H(f)$ does not contain a time index, implying the stationarity of the channel. Taking the logarithm of the complex magnitude on both sides of Equation 13, we get

$$\log|X(t, f)| = \log|S(t, f)| + \log|H(f)| \tag{14}$$

Fig. 1: Differences in the energy distribution of a spectrum convolved using different MIR functions. The frequency responses of the MIRs are shown in the top row.

A straightforward method to remove a stationary channel from a speech signal is log-spectral mean subtraction (LSMS). In this method, the long-term average of the log-spectrum is subtracted from the log-spectrum to obtain a channel-removed log-spectrum. Taking the average over time in Equation 14, we get

$$\frac{1}{T}\sum_{t}\log|X(t, f)| = \frac{1}{T}\sum_{t}\log|S(t, f)| + \log|H(f)| \tag{15}$$

Now, we define the channel of a corpus, V, using the following equation.

$$\log V(f) = \frac{1}{N_{tr} T}\sum_{i=1}^{N_{tr}}\sum_{t=1}^{T}\log|X_{tr}^{i}(t, f)| = \frac{1}{N_{tr} T}\sum_{i=1}^{N_{tr}}\sum_{t=1}^{T}\left[\log|S_{tr}^{i}(t, f)| + \log|H(f)|\right] = \log \bar{S}(f) + \log|H(f)| \tag{16}$$

Thus the defined corpus channel consists of two components, where $H$ corresponds to the recording channel and $\bar{S}$ corresponds to the average log-spectrum over the corpus. It is important to note that channel differences between corpora are primarily caused by $H$, as the long-term average speech spectrum is similar across different dialects of the same language and even across different languages [37].

Further subtracting Equation 16 from Equation 14, we get

$$\log|X(t, f)| - \log V(f) = \log|S(t, f)| - \log \bar{S}(f) \tag{17}$$

The above equation shows that removing the defined corpus channel from an utterance of a corpus gives a normalized utterance with both channel and speech-mean effects removed.
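As an illustration of Eqs. 16-17, a corpus channel can be estimated and removed as follows (a minimal NumPy sketch; the helper name and the list-of-utterances interface are our assumptions):

```python
import numpy as np

def remove_corpus_channel(log_mags):
    """Estimate the corpus channel (Eq. 16) and subtract it (Eq. 17).

    log_mags: list of (T_i, F) arrays of log-STFT magnitudes, one per
    utterance of the corpus. Returns the channel-normalized log
    magnitudes and the per-frequency channel estimate log V(f).
    """
    # Average the log-spectra over all frames of all utterances (Eq. 16).
    stacked = np.concatenate(log_mags, axis=0)    # (sum_i T_i, F)
    log_V = stacked.mean(axis=0)                  # (F,)
    # Subtract the estimated channel from every utterance (Eq. 17).
    normalized = [lm - log_V for lm in log_mags]
    return normalized, log_V
```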

We use Equation 16 to estimate the spectral magnitudes of the corpus channels of three popular corpora utilized for speech enhancement: WSJ SI-84, TIMIT, and IEEE [34]. A frame of 20 ms with a shift of 10 ms is used for STFT computation. The channel estimates are plotted in Fig. 2. We can observe that the channels are quite different from each other. Even though the peaks occur at nearby frequencies, the decay rates differ considerably: the decay is fastest for IEEE and slowest for TIMIT. TIMIT and WSJ exhibit two peaks whereas IEEE shows only one.

Fig. 2: The estimated spectral magnitudes of the channels of three speech corpora.

IV. Corpus Fitting

In this section, we demonstrate that models trained on one corpus fail to generalize to untrained corpora. Further, we show that the corpus channel is one of the factors that reduce the performance on untrained corpora.

We evaluate three different types of models: an IRM-based BLSTM model described in Section II, a complex-spectrum based model proposed in [38], and two time-domain models proposed in [13], [24]. The models are trained on the WSJ corpus and are evaluated on three different corpora: WSJ, TIMIT, and IEEE. These corpora have been widely utilized in deep learning based speech enhancement studies. IEEE has a large number of utterances but few speakers, and is commonly used to train speaker-dependent models by using utterances of a single speaker [14], [15]; TIMIT has been used for small-scale training of noise-dependent and noise-independent models [35], [11], [20], [39], [18]; and WSJ has been used to train speaker- and noise-independent models [10], [12], [13], [16]. We select one male and one female speaker from IEEE and treat them as two different corpora, denoted IEEE Male and IEEE Female, respectively. A detailed description of test data preparation is given in Section VI-A. The evaluation results in terms of STOI (%) and PESQ, for babble noise at SNRs of −5 dB and −2 dB, are given in Table I.

TABLE I:

STOI and PESQ comparisons between different test corpora for four deep learning based speech enhancement methods.

| Metric | Model | WSJ, −5 dB | WSJ, −2 dB | TIMIT, −5 dB | TIMIT, −2 dB | IEEE Male, −5 dB | IEEE Male, −2 dB | IEEE Female, −5 dB | IEEE Female, −2 dB |
|---|---|---|---|---|---|---|---|---|---|
| STOI (%) | Mixture | 58.6 | 65.5 | 54.0 | 60.9 | 55.0 | 62.3 | 55.5 | 62.9 |
| STOI (%) | BLSTM | 77.4 | 83.0 | 64.7 | 73.3 | 60.4 | 74.0 | 62.5 | 73.5 |
| STOI (%) | CRN [38] | 80.3 | 86.8 | 59.0 | 69.6 | 52.6 | 65.5 | 51.6 | 68.0 |
| STOI (%) | AECNN-SM [13] | 81.0 | 88.3 | 60.8 | 72.0 | 51.5 | 65.2 | 61.1 | 75.8 |
| STOI (%) | TCNN [24] | 82.7 | 88.9 | 61.6 | 72.9 | 57.2 | 69.9 | 56.5 | 74.1 |
| PESQ | Mixture | 1.54 | 1.69 | 1.46 | 1.63 | 1.46 | 1.63 | 1.12 | 1.32 |
| PESQ | BLSTM | 1.97 | 2.22 | 1.70 | 2.00 | 1.52 | 1.89 | 1.26 | 1.66 |
| PESQ | CRN [38] | 2.17 | 2.50 | 1.33 | 1.73 | 1.07 | 1.50 | 0.91 | 1.50 |
| PESQ | AECNN-SM [13] | 2.19 | 2.60 | 1.40 | 1.78 | 1.13 | 1.50 | 1.28 | 1.83 |
| PESQ | TCNN [24] | 2.19 | 2.53 | 1.33 | 1.74 | 1.18 | 1.61 | 1.01 | 1.64 |

One can observe that the performance on the trained corpus, WSJ, is excellent: STOI at −5 dB is improved by 18.8 to 24.1 percentage points across the models. However, the improvements are much reduced on the untrained corpora, TIMIT, IEEE Male, and IEEE Female. For the IEEE Male speaker, AECNN-SM and CRN even degrade STOI compared to unprocessed mixtures. Similarly, PESQ is also degraded in many cases. The results suggest that the BLSTM model is better in terms of generalization, even though its within-corpus enhancement results are not as good as those of the more recent models. Therefore, we choose this model for comparisons in the rest of the paper.

Next, we illustrate the behavior of the BLSTM model for different types of noises and at different SNR conditions. The plots of STOI improvement (%) are shown in the first row of Fig. 3. We observe that for all the noises, the gap between trained and untrained corpora is largest at −5 dB and gradually narrows with increasing SNR. This illustrates that cross-corpus generalization is a severe issue in low SNR conditions. Similarly, the generalization gap at low SNRs is in decreasing order of babble, cafeteria, factory, and engine noise.

Fig. 3: Effects of the corpus channel on cross-corpus generalization. The first row plots ΔSTOI (%) obtained using original WSJ utterances. The second row plots ΔSTOI (%) using channel-removed utterances.

Finally, we design an experiment to demonstrate that the corpus channel is a major culprit for the cross-corpus generalization issue. We use Equation 17 to obtain the corpus-channel-removed spectra of utterances in a corpus. The corpus-channel-removed spectrum is used for time-domain reconstruction using Eqs. 10 and 11. For a given corpus $C$, we use $C_{tr}$ for corpus channel estimation, and use the estimate to obtain corpus-channel-removed utterances in both $C_{tr}$ and $C_{te}$. We use a frame size of 2048 samples and a frame shift of 32 samples in the STFT, as we find that this setting introduces negligible artifacts in the modified utterances.

We show the effect of corpus channel normalization on sample utterances from different corpora in Fig. 4. One can observe that the energy distribution in different frequency bins becomes more prominent, especially in the high-frequency range where the corpus channel has a large attenuation factor.

Fig. 4: Effects of channel normalization. The spectrograms of one utterance from each of the three corpora are plotted in the first column. The corresponding channel-removed spectrograms are plotted in the second column.

We use corpus-channel-normalized utterances to generate a new training corpus from WSJ and new test corpora from WSJ, TIMIT, IEEE Male, and IEEE Female. The BLSTM model is trained on the new WSJ corpus and evaluated on all the test corpora for four different noises. The improvements in STOI (%) are plotted in the second row of Fig. 3. These improvements are significantly higher than those in the first row. For example, ΔSTOI for babble noise at −5 dB changes from 5% to 18% for IEEE Male, and from 7% to 18% for IEEE Female. In addition, ΔSTOI improves for all the noises and in all SNR conditions. This demonstrates that the corpus channel is one of the main causes of the cross-corpus generalization issue, and channel differences need to be accounted for in order to improve cross-corpus generalization.

V. Improving cross-corpus generalization

In this section, we describe different techniques investigated in this study to improve cross-corpus generalization.

A. Modified Loss Function

We find that computing the loss over high-energy T-F units only is better for cross-corpus generalization. We compute the loss over T-F units within 20 dB of the maximum-amplitude T-F unit. A similar loss function has been utilized in speaker separation methods, such as deep clustering [40]. The modified utterance-level loss is given as

$$L = \frac{\sum_{t=1}^{T}\sum_{f=1}^{F}\left[\mathrm{IRM}(t, f) - \widehat{\mathrm{IRM}}(t, f)\right]^{2} M(t, f)}{\sum_{t=1}^{T}\sum_{f=1}^{F} M(t, f)} \tag{18}$$

where,

$$M(t, f) = \begin{cases} 1, & |Y(t, f)| \geq 0.01 \cdot \max(|Y|) \\ 0, & \text{otherwise} \end{cases} \tag{19}$$
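A minimal NumPy sketch of the modified loss in Eqs. 18-19 (the function name is illustrative, and the `floor` parameter corresponds to the 0.01 threshold above):

```python
import numpy as np

def masked_mse_loss(irm, irm_hat, Y_mag, floor=0.01):
    """MSE over high-energy T-F units only (Eqs. 18-19).

    irm, irm_hat, Y_mag: (T, F) arrays. Units whose magnitude falls
    below `floor` times the maximum magnitude of the utterance are
    excluded from the loss.
    """
    M = (Y_mag >= floor * Y_mag.max()).astype(float)     # Eq. 19
    return np.sum(((irm - irm_hat) ** 2) * M) / M.sum()  # Eq. 18
```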

B. Channel Normalization

We have discussed in Section IV that removing the corpus channel can be helpful in improving cross-corpus generalization. We evaluate the following channel normalization techniques in this study.

1). Log-Spectral Mean Subtraction:

Given a noisy utterance $y$, the channel can be estimated by taking the average of the log-spectra over all the frames in the utterance

$$\log \hat{V}(f) = \frac{1}{T}\sum_{t=1}^{T}\log|Y(t, f)| \tag{20}$$

The channel normalized log-spectrum is defined as

$$\log|Y'(t, f)| = \log|Y(t, f)| - \log \hat{V}(f) \tag{21}$$

We use $\log|Y'(t, f)|$ as the input feature in this case. Note that estimating the channel from noisy utterances may not be as accurate as from clean utterances, because the noise and the speech in the data are likely to be recorded in different conditions and using different kinds of devices. Nevertheless, it can give a good approximation for the frequency bins dominated by speech. We add a small positive constant $\epsilon$ before applying the logarithm operator.
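Per-utterance LSMS (Eqs. 20-21) can be sketched as follows (a minimal NumPy sketch assuming the log magnitudes are precomputed; the function name is illustrative):

```python
import numpy as np

def lsms_features(log_Y):
    """Log-spectral mean subtraction for one utterance (Eqs. 20-21).

    log_Y: (T, F) log-STFT magnitudes of a noisy utterance. The channel
    estimate is the per-frequency mean over all frames (Eq. 20), which
    is subtracted from every frame (Eq. 21).
    """
    log_V_hat = log_Y.mean(axis=0, keepdims=True)  # Eq. 20
    return log_Y - log_V_hat                       # Eq. 21
```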

2). RASTA Filter:

The RASTA filter has been shown to attenuate channel effects and improve the generalization of ASR systems [41]. The RASTA filter is applied over the log-spectral magnitude and is given by

$$\log|Y'(t, f)| = \log|Y(t, f)| - \log|Y(t-1, f)| + C \log|Y'(t-1, f)| \tag{22}$$

where C is a parameter that is set to 0.97.
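The recursion in Eq. 22 amounts to a first-order high-pass filter along time, sketched below in NumPy (the initialization of the first output frame is unspecified in the text and is our assumption):

```python
import numpy as np

def rasta_highpass(log_Y, C=0.97):
    """First-order high-pass filtering of log magnitudes along time (Eq. 22).

    log_Y: (T, F) log-STFT magnitudes. Each output frame is the frame
    difference plus a leaky accumulation of the previous output, which
    progressively attenuates any stationary (channel) component.
    """
    out = np.zeros_like(log_Y)
    out[0] = log_Y[0]  # initialization choice (assumption)
    for t in range(1, log_Y.shape[0]):
        out[t] = log_Y[t] - log_Y[t - 1] + C * out[t - 1]
    return out
```

For a perfectly stationary input, the output decays geometrically as $C^t$, illustrating the channel-suppression property.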

C. Training Corpus

We evaluate the following corpora to understand cross-corpus generalization behavior.

1). WSJ:

We use the WSJ0-SI-84 corpus as the baseline since this corpus has been used in the past to train speaker- and noise-independent models [10], [12], [13].

2). VoxCeleb2:

The VoxCeleb2 corpus is promising for cross-corpus generalization for the following reasons. First, it is very large, with around 1.1 million utterances from about 6000 speakers. Second, it is extracted from YouTube, and therefore has the potential of generalizing to different channels, as videos uploaded to YouTube are usually recorded in different conditions and using different devices.

3). LibriSpeech:

LibriSpeech is a corpus derived from read audiobooks from the LibriVox project. It contains around 0.25 million utterances from 2.1k speakers. It is promising for cross-corpus generalization because the English utterances are spoken by different volunteers across the globe. This implies that utterances recorded by different volunteers are typically recorded over different channels.

We have evaluated three different versions of LibriSpeech: LibriClean, LibriOther, and LibriAll. LibriClean contains relatively clean utterances compared to LibriOther. LibriAll is the combination of LibriClean and LibriOther. We list the sizes of the different corpora in Table II.

TABLE II:

Different corpus sizes used in this study.

| Corpus | WSJ | VoxCeleb2 | LibriClean | LibriOther | LibriAll |
|---|---|---|---|---|---|
| # of speakers | 77 | 5994 | 921 | 1166 | 2087 |
| # of utterances | 6385 | 1092009 | 104014 | 148688 | 252702 |
| # of hours | 12 | 2318 | 360 | 500 | 860 |

D. Frame Shift

In short-time processing of speech, a frame shift equal to half of the frame size is typically used, and overlap-and-add is applied during final reconstruction in the time domain. However, when the frame shift is smaller, there are multiple predictions (more than 2) of a single T-F unit from neighboring frames. This leads to averaging multiple predictions of a sample in the overlap-and-add stage. We find that the simple idea of using a smaller frame shift leads to a significant improvement in cross-corpus generalization. We fix the frame size to 32 ms and evaluate frame shifts of 16 ms, 8 ms, 4 ms, and 2 ms.
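The averaging effect can be illustrated by counting how many frames cover a given sample for different shifts (an illustrative sketch; with a 32 ms frame, a 16 ms shift yields 2 overlapping predictions per interior sample, while a 4 ms shift yields 8):

```python
import numpy as np

def coverage(signal_len, frame_size, frame_shift):
    """Count how many analysis frames cover each sample.

    In overlap-and-add reconstruction, this count is the number of
    per-frame predictions that get averaged at each sample.
    """
    counts = np.zeros(signal_len, dtype=int)
    for start in range(0, signal_len - frame_size + 1, frame_shift):
        counts[start:start + frame_size] += 1
    return counts
```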

VI. Experimental Settings

A. Data Preparation

We train corpus-dependent models on the WSJ, TIMIT, IEEE Male, and IEEE Female corpora. Corpus-independent models are trained on WSJ, VoxCeleb2, LibriClean, LibriOther, and LibriAll. For training, we use all 4620 utterances of the TIMIT corpus and 576 utterances, randomly chosen out of 720, for each of IEEE Male and IEEE Female. All clean utterances are resampled to 16 kHz. For WSJ training utterances, we remove all frames at the beginning and end that are not within 20 dB of the maximum frame energy.

Noisy utterances are created at training time by randomly adding noise segments to all the utterances in a batch. For training noises, we use 10000 non-speech sounds from a sound effect library (www.sound-ideas.com) as in [14]. For each utterance longer than 4 seconds, we cut a random 4-second segment. A random noise segment is added to the utterance at a random SNR drawn from {−5 dB, −4 dB, −3 dB, −2 dB, −1 dB, 0 dB}. For a corpus containing fewer than 100000 utterances, an epoch is defined as the model having seen around 100000 noisy utterances. This corresponds to 174, 22, and 16 noisy utterances per clean utterance in one epoch for IEEE, TIMIT, and WSJ, respectively.
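Mixing a noise segment at a target SNR can be sketched as follows (a minimal NumPy sketch; the function name is illustrative, and SNR is measured over the whole segment):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals
    snr_db (in dB), then add it to the speech."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```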

The WSJ test set consists of 150 utterances of 6 speakers not included in WSJ training. The TIMIT test set consists of 192 utterances from the core test set. The IEEE Male and IEEE Female test sets both consist of the 144 clean utterances not included in their training sets. A test set is generated from 4 different noises: babble, cafeteria, factory and engine, at the SNRs of {−5 dB, −2 dB, 0 dB}. The babble and cafeteria noises are from Auditec CD (available at http://www.auditec.com). Factory and engine noises are from Noisex [42].

All noisy utterances are normalized to the range [−1, 1], and the corresponding clean utterances are scaled accordingly to maintain the SNR. A frame size of 32 ms with the Hamming window is used for STFT.

B. Training Methodology

The models trained on TIMIT and IEEE use a dropout rate of 0.5 for each layer except for the output. The models are trained for 10 epochs on TIMIT and IEEE, 100 epochs on LibriSpeech, and 20 epochs on VoxCeleb2.

The Adam optimizer [43] is used with a learning rate schedule given in Table III. A batch size of 32 utterances is used. All the utterances that are shorter than the longest utterance in a batch are padded with zero at the end. The loss values computed over the outputs corresponding to zero-padded inputs are ignored.

TABLE III:

Learning rate schedule. E denotes the maximum number of epochs of training.

| Epoch | 1 to 0.6E | (0.6E + 1) to 0.9E | (0.9E + 1) to E |
|---|---|---|---|
| Learning rate | 0.0002 | 0.0001 | 0.00005 |

C. Evaluation Metrics

In our experiments, models are evaluated using STOI [32] and PESQ [33], which represent the standard metrics for speech enhancement. STOI has a typical value range from 0 to 1, which can be roughly interpreted as percent correct. PESQ values range from −0.5 to 4.5.

D. Baseline

For the baseline, we train the BLSTM model on WSJ using the loss function given in Equation 8. The STFT magnitude is used as the feature, with mean-subtraction channel normalization (cf. Equations 20-21) applied to the STFT magnitude instead of the log magnitude. We call this model SMS, standing for spectral mean subtraction (see Fig. 5 and Table IV).

Fig. 5: STOI and PESQ comparisons between the baseline, modified loss, LSMS, and RASTA on WSJ.

TABLE IV:

Performance improvements on babble noise by gradually incorporating different techniques proposed in this study.

| Metric | Model | WSJ, −5 dB | WSJ, −2 dB | TIMIT, −5 dB | TIMIT, −2 dB | IEEE Male, −5 dB | IEEE Male, −2 dB | IEEE Female, −5 dB | IEEE Female, −2 dB |
|---|---|---|---|---|---|---|---|---|---|
| STOI (%) | Mixture | 58.6 | 65.5 | 54.0 | 60.9 | 55.0 | 62.3 | 55.5 | 62.9 |
| STOI (%) | Baseline | 77.4 | 83.0 | 64.7 | 73.3 | 60.4 | 74.0 | 62.5 | 73.5 |
| STOI (%) | + Modified loss | 78.3 | 83.5 | 65.7 | 74.3 | 64.8 | 75.1 | 63.8 | 75.2 |
| STOI (%) | + LSMS | 78.6 | 83.6 | 68.4 | 76.4 | 64.4 | 76.6 | 66.0 | 76.7 |
| STOI (%) | + 4 ms frame shift | 82.8 | 87.5 | 71.9 | 79.9 | 66.2 | 80.8 | 69.5 | 81.1 |
| STOI (%) | + LibriAll | 82.4 | 87.3 | 75.1 | 82.1 | 74.3 | 83.2 | 74.8 | 84.3 |
| STOI (%) | Same Corpus | – | – | 73.5 | 80.7 | 77.9 | 82.6 | 75.9 | 83.2 |
| PESQ | Mixture | 1.54 | 1.69 | 1.46 | 1.63 | 1.46 | 1.63 | 1.12 | 1.32 |
| PESQ | Baseline | 1.97 | 2.22 | 1.70 | 2.00 | 1.52 | 1.89 | 1.26 | 1.66 |
| PESQ | + Modified loss | 2.00 | 2.23 | 1.73 | 2.04 | 1.63 | 1.92 | 1.31 | 1.74 |
| PESQ | + LSMS | 2.02 | 2.25 | 1.82 | 2.12 | 1.64 | 2.00 | 1.39 | 1.81 |
| PESQ | + 4 ms frame shift | 2.45 | 2.72 | 2.09 | 2.43 | 1.80 | 2.33 | 1.67 | 2.22 |
| PESQ | + LibriAll | 2.43 | 2.70 | 2.20 | 2.52 | 2.11 | 2.47 | 1.94 | 2.41 |
| PESQ | Same Corpus | – | – | 2.12 | 2.42 | 2.14 | 2.38 | 2.03 | 2.40 |

VII. Results and Discussions

First, we evaluate the modified loss function (Section V-A) and the two channel normalization methods (Section V-B), and compare them with the baseline model. The models are trained on the WSJ corpus with a frame shift of 16 ms. We denote the baseline as SMS and the model with the modified loss as SMS_MOD. Average STOI and PESQ over the four test noises at SNRs of −5 dB, −2 dB, and 0 dB are plotted in Fig. 5.

We observe that SMS_MOD is consistently better than SMS. The improvement is largest at −5 dB for all the corpora, and the largest gain is observed on the IEEE Male corpus. The objective scores indicate that training a model with a loss over all T-F units leads to overfitting on the training corpus, whereas a loss computed over only high-energy T-F units achieves better generalization. All the following models trained in this study, except for SMS, use the modified loss function.

The objective scores for the two normalization schemes show that LSMS and RASTA are both better than SMS and SMS_MOD on all untrained corpora. LSMS is consistently better than RASTA for all corpora and at all SNR conditions.

Next, we examine different training corpora on the four test noises. The models are trained using LSMS with a frame shift of 16 ms. The average STOI and PESQ over the four test noises are plotted in Fig. 6. The general trend for STOI and PESQ scores is LibriAll > LibriOther > LibriClean > VoxCeleb2 > WSJ, except on TIMIT, where VoxCeleb2 is worse than WSJ.

Fig. 6: STOI and PESQ comparisons between different training corpora with a frame shift of 16 ms.

A key observation from the corpora comparisons is that corpus content, not corpus size, is important for better generalization. A corpus with multiple channel sources, LibriAll, is very effective for generalization. However, a similar corpus, VoxCeleb2, containing 4.3 times more utterances, is not as effective. This observation is further supported by the fact that no dramatic performance differences exist between LibriClean (104014 utterances), LibriOther (148688 utterances), and LibriAll (252702 utterances), all of which contain utterances from the LibriSpeech corpus.

Perhaps surprisingly, VoxCeleb2 does not obtain good generalization. This might be due to the types of utterances in VoxCeleb2: most include some sort of reverberation, cross-talk, or background noise, which may make the corpus unsuitable for enhancing utterances from clean corpora. More research is needed to explain the cross-corpus generalization behavior of VoxCeleb2.

Further, we compare models trained with frame shifts of 16 ms, 8 ms, 4 ms, and 2 ms. All models are trained on LibriAll using LSMS with a frame size of 32 ms. Average STOI and PESQ scores are plotted in Fig. 7. We observe a clear improvement in the objective scores when moving from 16 ms to 8 ms, and from 8 ms to 4 ms. However, the performances at 4 ms and 2 ms are very similar, suggesting diminishing returns from further reducing the frame shift. Similar performance improvements are obtained with all the training corpora, suggesting that a small frame shift is an effective technique regardless of the training corpus. Performance also improves on the trained corpus, WSJ in this case, when training with smaller frame shifts. This is an important observation because an improvement on the trained corpus does not necessarily translate to an improvement on untrained corpora, as reported in Table I.
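The cost of a smaller frame shift is more frames to process. With the paper's 32 ms frames at 16 kHz sampling, the per-second frame counts can be sketched as follows (the function name is illustrative):

```python
def frame_count(num_samples, shift_ms, frame_ms=32, fs=16000):
    """Number of full analysis frames for a given frame shift,
    with no padding; 32 ms frames at 16 kHz as in the paper."""
    frame_len = fs * frame_ms // 1000   # 512 samples per frame
    hop = fs * shift_ms // 1000         # samples advanced per frame
    if num_samples < frame_len:
        return 0
    return 1 + (num_samples - frame_len) // hop

for shift in (16, 8, 4, 2):
    print(f"{shift} ms shift: {frame_count(16000, shift)} frames/second")
# 16 ms -> 61, 8 ms -> 122, 4 ms -> 243, 2 ms -> 485
```

Halving the shift roughly doubles the frame count, so a 4 ms shift costs about four times the computation of the 16 ms baseline, which makes the similar performance of 4 ms and 2 ms a practical stopping point.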

Fig. 7: STOI and PESQ comparisons between different frame shifts on LibriAll.

We also compare all the training corpora using the smaller frame shift of 4 ms; the results are plotted in Fig. 8. We obtain the same performance trend as with the frame shift of 16 ms, implying that a smaller frame shift and a better training corpus are two independent techniques for improving cross-corpus generalization.

Fig. 8: STOI and PESQ comparisons between different training corpora with a frame shift of 4 ms.

Furthermore, we report results on babble noise as the different techniques for improving channel generalization are gradually incorporated into the baseline model. The results are given in Table IV. The bold scores in the last row of each of the STOI and PESQ sections, Same Corpus, are obtained by training a model on the same corpus as the test corpus. Note that the Same Corpus results for TIMIT and IEEE represent benchmarks where the number of unique training utterances is small: the IEEE corpus has only 576 training utterances, and TIMIT has 4620 utterances in which many speakers speak the same set of sentences. A good model should be able to match the scores obtained with Same Corpus training.

We observe that the most effective technique is the use of LibriAll, which improves STOI at −5 dB by 3.2% on TIMIT, 8.1% on IEEE Male, and 5.3% on IEEE Female, while obtaining performance on WSJ similar to that of a model trained on WSJ itself. The smaller frame shift is also very effective, improving STOI at −5 dB by 3.5% on TIMIT, 1.8% on IEEE Male, and 3.5% on IEEE Female.

All the proposed techniques are trained and evaluated on corpora with negligible room reverberation. Speech enhancement in the presence of both reverberation and background noise at low SNRs, such as −5 dB, is an extremely difficult problem and generally requires training with noisy-reverberant utterances [44]. To examine the generality of the proposed techniques, we further evaluate them on noisy-reverberant speech. To create reverberant utterances, we use the real room impulse responses (RIRs) in [45], specifically all 74 RIRs for the room with a reverberation time of 0.32 seconds. A given clean utterance is convolved with a randomly selected RIR, followed by noise addition. The results are reported in Table V, where anechoic speech serves as the reference signal in the evaluation. Note that the models trained without reverberation are tested without retraining, so the improvements are expected to be smaller than those in Table IV. Nevertheless, we observe a similar trend of cross-corpus generalization, except that the modified loss is worse than the baseline. The model trained on LibriAll using LSMS with a frame shift of 4 ms performs best in this case as well.
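The mixture creation described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the SNR is defined on the reverberant speech, the convolution tail is truncated to the clean length, and RIR scaling details are not specified in the text.

```python
import numpy as np

def make_noisy_reverberant(clean, rir, noise, snr_db):
    """Convolve clean speech with a room impulse response, then add noise
    scaled so the reverberant-speech-to-noise ratio equals `snr_db`.
    Truncation to the clean length is an assumption of this sketch."""
    reverberant = np.convolve(clean, rir)[: len(clean)]
    noise = noise[: len(reverberant)]
    speech_pow = np.mean(reverberant ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
    return reverberant + scale * noise
```

For the evaluation in Table V, the anechoic clean utterance, not the reverberant one, would serve as the reference signal when scoring the enhanced output.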

TABLE V:

Performance improvements on reverberant speech mixed with babble noise by gradually incorporating different techniques.

                        WSJ              TIMIT            IEEE Male        IEEE Female
 Test SNR           −5 dB   −2 dB    −5 dB   −2 dB    −5 dB   −2 dB    −5 dB   −2 dB

 STOI (%)
  Mixture           53.26   57.10    50.07   54.67    53.98   59.27    52.98   57.75
  Baseline          65.1    68.4     54.3    60.4     57.6    66.4     54.9    61.1
  + Modified loss   64.2    67.8     54.9    59.6     56.8    65.2     55.0    61.9
  + LSMS            67.5    71.0     57.8    64.8     57.3    67.6     56.7    63.3
  + Frame shift 4 ms 69.8   73.3     59.7    65.7     61.6    70.1     57.4    65.3
  + LibriAll        70.8    73.5     61.4    68.4     63.9    73.2     63.7    70.3
  Same Corpus       -       -        62.8    68.5     65.2    71.5     65.3    71.4

 PESQ
  Mixture           1.40    1.53     1.36    1.49     1.39    1.56     1.03    1.22
  Baseline          1.65    1.87     1.45    1.67     1.45    1.73     1.09    1.35
  + Modified loss   1.61    1.82     1.44    1.65     1.45    1.73     1.09    1.36
  + LSMS            1.80    2.02     1.58    1.88     1.49    1.88     1.18    1.50
  + Frame shift 4 ms 1.99   2.23     1.66    1.97     1.64    1.97     1.25    1.67
  + LibriAll        2.09    2.28     1.77    2.09     1.78    2.15     1.57    1.94
  Same Corpus       -       -        1.84    2.11     1.77    2.05     1.66    2.02

VIII. Concluding Remarks

This work reveals a robustness problem with deep learning based speech enhancement algorithms. We have shown that a model trained on a given corpus fails to generalize to utterances from an untrained corpus. The problem is more severe at low SNR levels, where speech enhancement is needed the most. We have established that the cross-corpus generalization issue is mainly due to the channel mismatch between the trained and untrained corpora.

We have examined traditional channel normalization methods and found that, while they improve performance on untrained corpora, the improvement is limited; hence, other techniques need to be developed to further improve generalization.

We have proposed two effective methods to significantly improve cross-corpus generalization. The first technique is to use a corpus obtained using crowd-sourced audio recordings such as LibriSpeech and VoxCeleb. We found LibriSpeech to be significantly better than VoxCeleb. The second technique is the use of a smaller frame shift in STFT and ISTFT layers.

Further research is needed to evaluate the effectiveness of LibriSpeech and of a smaller frame shift for complex-domain and time-domain speech enhancement models. The behavior of VoxCeleb, which we found not very effective for generalization, needs further exploration for a better understanding of cross-corpus generalization.

Acknowledgments

This research was supported in part by two NIDCD grants (R01DC012048 and R01DC015521) and the Ohio Supercomputer Center.

Footnotes

1. The two MIRs are obtained from https://www.audiothing.net/impulses/vintage-mics/

Contributor Information

Ashutosh Pandey, Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210 USA.

DeLiang Wang, Department of Computer Science and Engineering and the Center for Cognitive and Brain Sciences, The Ohio State University, Columbus, OH 43210 USA.

References

[1] Benzeghiba M, De Mori R, Deroo O, Dupont S, Erbes T, Jouvet D, Fissore L, Laface P, Mertins A, Ris C et al., "Automatic speech recognition and speech variability: A review," Speech Communication, vol. 49, no. 10-11, pp. 763–786, 2007.
[2] Boll S, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 27, no. 2, pp. 113–120, 1979.
[3] Scalart P et al., "Speech enhancement based on a priori signal to noise estimation," in ICASSP, vol. 2, 1996, pp. 629–632.
[4] Loizou PC, Speech Enhancement: Theory and Practice, 2nd ed. Boca Raton, FL, USA: CRC Press, 2013.
[5] Mohammadiha N, Smaragdis P, and Leijon A, "Supervised and unsupervised speech enhancement using nonnegative matrix factorization," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 10, pp. 2140–2151, 2013.
[6] Wang DL and Chen J, "Supervised speech separation based on deep learning: An overview," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, pp. 1702–1726, 2018.
[7] Wang Y and Wang DL, "Towards scaling up classification-based speech separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 7, pp. 1381–1390, 2013.
[8] Xu Y, Du J, Dai L-R, and Lee C-H, "A regression approach to speech enhancement based on deep neural networks," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 1, pp. 7–19, 2015.
[9] Weninger F, Erdogan H, Watanabe S, Vincent E, Le Roux J, Hershey JR, and Schuller B, "Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR," in International Conference on Latent Variable Analysis and Signal Separation. Springer, 2015, pp. 91–99.
[10] Chen J and Wang DL, "Long short-term memory for speaker generalization in supervised speech separation," The Journal of the Acoustical Society of America, vol. 141, no. 6, pp. 4705–4714, 2017.
[11] Fu S-W, Tsao Y, and Lu X, "SNR-aware convolutional neural network modeling for speech enhancement," in INTERSPEECH, 2016, pp. 3768–3772.
[12] Tan K, Chen J, and Wang DL, "Gated residual networks with dilated convolutions for monaural speech enhancement," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 1, pp. 189–198, 2018.
[13] Pandey A and Wang DL, "A new framework for CNN-based speech enhancement in the time domain," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 7, pp. 1179–1188, 2019.
[14] Chen J, Wang Y, Yoho SE, Wang DL, and Healy EW, "Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises," The Journal of the Acoustical Society of America, vol. 139, no. 5, pp. 2604–2612, 2016.
[15] Williamson DS, Wang Y, and Wang DL, "Complex ratio masking for monaural speech separation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 3, pp. 483–492, 2016.
[16] Tan K and Wang DL, "Learning complex spectral mapping with gated convolutional recurrent networks for monaural speech enhancement," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 380–390, 2019.
[17] Fu S-W, Hu T-Y, Tsao Y, and Lu X, "Complex spectrogram enhancement by convolutional neural network with multi-metrics learning," in International Workshop on Machine Learning for Signal Processing. IEEE, 2017, pp. 1–6.
[18] Pandey A and Wang DL, "Exploring deep complex networks for complex spectrogram enhancement," in ICASSP, 2019, pp. 6885–6889.
[19] Choi H-S, Kim J-H, Huh J, Kim A, Ha J-W, and Lee K, "Phase-aware speech enhancement with deep complex U-Net," arXiv preprint arXiv:1903.03107, 2019.
[20] Fu S-W, Tsao Y, Lu X, and Kawai H, "Raw waveform-based speech enhancement by fully convolutional networks," arXiv preprint arXiv:1703.02205, 2017.
[21] Pascual S, Bonafonte A, and Serrà J, "SEGAN: Speech enhancement generative adversarial network," in INTERSPEECH, 2017, pp. 3642–3646.
[22] Qian K, Zhang Y, Chang S, Yang X, Florêncio D, and Hasegawa-Johnson M, "Speech enhancement using Bayesian WaveNet," in INTERSPEECH, 2017, pp. 2013–2017.
[23] Rethage D, Pons J, and Serra X, "A WaveNet for speech denoising," in ICASSP, 2018, pp. 5069–5073.
[24] Pandey A and Wang DL, "TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain," in ICASSP, 2019, pp. 6875–6879.
[25] Atal BS, "Automatic recognition of speakers from their voices," Proceedings of the IEEE, vol. 64, no. 4, pp. 460–475, 1976.
[26] Furui S, "Cepstral analysis technique for automatic speaker verification," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 2, pp. 254–272, 1981.
[27] Hermansky H, Morgan N, Bayya A, and Kohn P, "Compensation for the effect of the communication channel in auditory-like analysis of speech (RASTA-PLP)," in European Conference on Speech Communication and Technology, 1991.
[28] Hermansky H and Morgan N, "RASTA processing of speech," IEEE Transactions on Speech and Audio Processing, vol. 2, no. 4, pp. 578–589, 1994.
[29] Panayotov V, Chen G, Povey D, and Khudanpur S, "LibriSpeech: An ASR corpus based on public domain audio books," in ICASSP, 2015, pp. 5206–5210.
[30] Chung JS, Nagrani A, and Zisserman A, "VoxCeleb2: Deep speaker recognition," in INTERSPEECH, 2018.
[31] Paul DB and Baker JM, "The design for the Wall Street Journal-based CSR corpus," in Workshop on Speech and Natural Language, 1992, pp. 357–362.
[32] Taal CH, Hendriks RC, Heusdens R, and Jensen J, "An algorithm for intelligibility prediction of time–frequency weighted noisy speech," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2125–2136, 2011.
[33] Rix AW, Beerends JG, Hollier MP, and Hekstra AP, "Perceptual evaluation of speech quality (PESQ) - a new method for speech quality assessment of telephone networks and codecs," in ICASSP, 2001, pp. 749–752.
[34] IEEE, "IEEE recommended practice for speech quality measurements," IEEE Transactions on Audio and Electroacoustics, vol. 17, pp. 225–246, 1969.
[35] Wang Y, Narayanan A, and Wang DL, "On training targets for supervised speech separation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 12, pp. 1849–1858, 2014.
[36] Garofolo JS, Lamel LF, Fisher WM, Fiscus JG, and Pallett DS, "DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST Speech Disc 1-1.1," NASA STI/Recon Technical Report N, vol. 93, 1993.
[37] Byrne D, Dillon H, Tran K, Arlinger S, Wilbraham K, Cox R, Hagerman B, Hetu R, Kei J, Lui C et al., "An international comparison of long-term average speech spectra," The Journal of the Acoustical Society of America, vol. 96, no. 4, pp. 2108–2120, 1994.
[38] Tan K and Wang DL, "Complex spectral mapping with a convolutional recurrent network for monaural speech enhancement," in ICASSP, 2019, pp. 6865–6869.
[39] Pandey A and Wang DL, "On adversarial training and loss functions for speech enhancement," in ICASSP, 2018, pp. 5414–5418.
[40] Hershey JR, Chen Z, Le Roux J, and Watanabe S, "Deep clustering: Discriminative embeddings for segmentation and separation," in ICASSP, 2016, pp. 31–35.
[41] Murveit H, Butzberger J, and Weintraub M, "Reduced channel dependence for speech recognition," in Workshop on Speech and Natural Language, 1992, pp. 280–284.
[42] Varga A and Steeneken HJ, "Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems," Speech Communication, vol. 12, no. 3, pp. 247–251, 1993.
[43] Kingma D and Ba J, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[44] Zhao Y, Wang Z-Q, and Wang DL, "Two-stage deep learning for noisy-reverberant speech enhancement," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 1, pp. 53–62, 2018.
[45] Hummersone C, Mason R, and Brookes T, "Dynamic precedence effect modeling for source separation in reverberant environments," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1867–1871, 2010.
