Abstract
Convolutive and under-determined blind audio source separation from noisy recordings is a challenging problem, and several computational strategies have been proposed to address it. This study is concerned with several modifications to the expectation-maximization-based algorithm, which iteratively estimates the mixing and source parameters. This strategy models each entry of a source spectrogram using superimposed Gaussian components, which are mutually and individually independent across frequency and time bins; this independence assumption ignores the locally smooth temporal and frequency structure of real audio signals. In our approach, we resolve this issue by enforcing a locally smooth temporal and frequency structure in the source power spectrograms. Local smoothness is enforced by incorporating a Gibbs prior into the complete-data likelihood function, which models the interactions between neighboring spectrogram bins using a Markov random field. Simulations using audio files derived from the stereo audio source separation evaluation campaign (SiSEC 2008) demonstrate the high efficiency of the proposed improvement.
Keywords: Blind source separation, Nonnegative matrix factorization, Expectation-maximization, Markov random field, Simultaneous auto-regression
Introduction
Blind source separation (BSS) aims to recover unknown source signals from observed mixtures given no, or only very limited, information about their mixing process. BSS problems have been addressed in many previous studies, for example, [1–12], motivated by a variety of real-world applications.
In a cocktail-party problem, microphones receive noisy mixtures of acoustic signals that propagate along multiple paths from their sources. In a real scenario, the number of audio sources may be greater than the number of microphones, audio sources may have different timbres and similar pitches, and audio signals may be only locally stationary.
A convolutive and under-determined mixing model therefore needs to be adopted for this problem. There are several techniques for solving convolutive unmixing problems [13]. Some of them [14] operate in the time domain by solving the alternative finite impulse response (FIR) inverse model using independent component analysis (ICA) methods [2]. Another approach is to extract meaningful features from the time-frequency (TF) representations of the mixtures, which tends to be more efficient than ICA-based techniques, especially when the number of microphones is lower than the number of sources. Acoustic signals are usually sparse in the TF domain, so the source signals can be separated efficiently even if they partially overlap and the problem is under-determined. These features can be extracted using several techniques, including TF masking [15, 16], frequency bin-wise clustering with permutation alignment (FBWC-PA) [17, 18], subspace projection [19], hidden Markov models (HMM) [20], interaural phase difference (IPD) [21], nonnegative matrix factorization (NMF) [22, 23], and nonnegative tensor factorization (NTF) [24].
Nonnegative matrix factorization [25] is a feature extraction method with many real-world applications [26]. A convolutive NMF-based unmixing model was proposed by Smaragdis [22]. Ozerov and Févotte [23] developed the EM-NMF algorithm, which is suitable for unsupervised convolutive and possibly under-determined unmixing of audio sources using only stereo observations. Their model of the sources was based on the generalized Wiener filtering model [27–29], which assumes that each source is locally stationary and can be expressed in terms of superimposed amplitude-modulated Gaussian components. Thus, the power spectrogram of each source can be factorized into lower-rank nonnegative matrices, which facilitates the use of NMF for estimating the frequency and temporal profiles of each latent source component. In the TF representation, the latent components are mutually and individually independent across frequency and time bins. However, this assumption is weak for adjacent bins because real audio signals have locally smooth frequency and temporal structures.
Motivated by several papers on smoothness [26, 28, 30, 31] in BSS models, we attempt to further improve the EM-NMF algorithm by enforcing local smoothness in both the frequency and temporal profiles of the NMF factors. Similar to [28, 30, 32], we introduce a priori knowledge to the NMF-based model using a Bayesian framework, although our approach is based on a Gibbs prior with a Markov random field (MRF) model to describe pairwise interactions among adjacent bins in spectrograms. As demonstrated in [33], the MRF model with Green's function, which is well known in many tomographic image reconstruction applications [34], can improve the EM-NMF algorithm. In this paper, we extend the results presented in [33] using other smoothing functions, particularly a more flexible simultaneous autoregressive (SAR) model that is more appropriate in terms of hyperparameter estimation and computational complexity.
The rest of this paper is organized as follows. The next section reviews the underlying separation model. Section 3 is concerned with MRF smoothing. The optimization algorithm is described in Sect. 4. Audio source separation experiments are presented in Sect. 5. Finally, the conclusions are provided in Sect. 6.
Model
Let I microphones receive signals that can be modeled as a noisy convolutive mixture of J audio signals. The signal received by the i-th microphone ($i = 1, \ldots, I$) can be expressed as

$$x_{it} = \sum_{j=1}^{J} \sum_{\tau=0}^{L-1} a_{ij\tau}\, s_{j,t-\tau} + n_{it}, \qquad (1)$$

where $a_{ij\tau}$ represents the corresponding mixing filter coefficient, $s_{jt}$ is the j-th source signal ($j = 1, \ldots, J$), $n_{it}$ is the additive noise, and L is the length of the mixing filter.
In the TF domain, the model (1) can be expressed as

$$x_{ift} \cong \sum_{j=1}^{J} a_{ijf}\, s_{jft} + n_{ift}, \qquad (2)$$

where $a_{ijf}$ is the frequency response of the mixing filter from the j-th source to the i-th microphone, $t = 1, \ldots, T$ is the index of a time frame, and $f = 1, \ldots, F$ is the index of a frequency bin.
The noise $n_{ift}$ is assumed to be stationary and spatially uncorrelated, i.e.,

$$\mathbf{n}_{ft} = [n_{1ft}, \ldots, n_{Ift}]^T \sim \mathcal{N}_c\!\left(\mathbf{0},\, \boldsymbol{\Sigma}_{n,f}\right), \qquad (3)$$

where $\boldsymbol{\Sigma}_{n,f} = \mathrm{diag}\{\sigma_{1f}^2, \ldots, \sigma_{If}^2\}$, and $\mathcal{N}_c(\mathbf{0}, \boldsymbol{\Sigma})$ is a proper complex Gaussian distribution with a zero mean and the covariance matrix $\boldsymbol{\Sigma}$.
Benaroya et al. [27] described an audio source $s(t)$ as a superimposed amplitude-modulated Gaussian process:

$$s(t) = \sum_{r=1}^{R} h_r(t)\, c_r(t), \qquad (4)$$

where $h_r(t)$ is a slowly varying amplitude parameter in the r-th component ($r = 1, \ldots, R$), and $c_r(t)$ is a stationary zero-mean Gaussian process with the power spectral density $\sigma_r^2(f)$. The TF representation of (4) leads to

$$s_{ft} = \sum_{r=1}^{R} c_{rft}, \qquad c_{rft} \sim \mathcal{N}_c\!\left(0,\, h_{rt}\, \sigma_r^2(f)\right). \qquad (5)$$
The power spectrogram of (5) is given by $|s_{ft}|^2 = \sum_{r=1}^{R} w_{fr}\, h_{rt}$, where $w_{fr} = \sigma_r^2(f)$. Thus, the spectrogram of the j-th source can be factorized as follows:
$$\mathbf{P}_j = |\mathbf{S}_j|^{\cdot 2} \cong \mathbf{W}_j \mathbf{H}_j, \qquad \mathbf{W}_j \in \mathbb{R}_+^{F \times R_j}, \quad \mathbf{H}_j \in \mathbb{R}_+^{R_j \times T}, \qquad (6)$$

where $R_j$ is the number of latent components in the j-th source, and $\mathbb{R}_+$ is the nonnegative orthant of the Euclidean space. The column vectors of $\mathbf{W}_j$ represent the frequency profiles of the j-th source, while the row vectors of $\mathbf{H}_j$ are the temporal profiles.
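The factorization (6) can be illustrated numerically. The sketch below (illustrative only, not the EM algorithm developed in this paper) fits W and H to a power spectrogram with the standard multiplicative updates for the IS divergence [26, 28]; the matrix sizes, iteration count, and helper name `is_nmf` are arbitrary choices:

```python
import numpy as np

def is_nmf(P, R, n_iter=500, eps=1e-12, seed=0):
    """Factorize a power spectrogram P (F x T) as W @ H with
    multiplicative updates for the Itakura-Saito divergence."""
    rng = np.random.default_rng(seed)
    F, T = P.shape
    W = rng.uniform(0.5, 1.5, (F, R))
    H = rng.uniform(0.5, 1.5, (R, T))
    for _ in range(n_iter):
        V = W @ H + eps
        W *= ((P / V**2) @ H.T) / ((1.0 / V) @ H.T)
        V = W @ H + eps
        H *= (W.T @ (P / V**2)) / (W.T @ (1.0 / V))
    return W, H

# Example: a spectrogram that is exactly rank-3 is fitted closely.
rng = np.random.default_rng(1)
P = rng.uniform(0.1, 1, (16, 3)) @ rng.uniform(0.1, 1, (3, 20))
W, H = is_nmf(P, R=3)
```

The updates keep W and H nonnegative by construction, which is the property exploited by the factorization (6).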
Févotte et al. [28] transformed the model (5) to the following form:

$$c_{rft} \sim \mathcal{N}_c\!\left(0,\, w_{fr}\, h_{rt}\right), \qquad (7)$$

where $w_{fr} = \sigma_r^2(f)$. Thus,

$$s_{jft} = \sum_{r \in \mathcal{R}_j} c_{rft} \sim \mathcal{N}_c\!\Big(0,\, \sum_{r \in \mathcal{R}_j} w_{fr}\, h_{rt}\Big), \qquad (8)$$

and $\mathbf{s}_{ft} = [s_{1ft}, \ldots, s_{Jft}]^T$, where $\mathcal{R}_j$ is the set of indices of the latent components assigned to the j-th source. Consequently, the model (2) can be expressed as

$$\mathbf{x}_{ft} = \tilde{\mathbf{A}}_f\, \mathbf{c}_{ft} + \mathbf{n}_{ft}, \qquad (9)$$

where $R = \sum_{j=1}^{J} R_j$ is the number of entries in the set $\mathcal{R} = \bigcup_{j} \mathcal{R}_j$, $\mathbf{c}_{ft} = [c_{1ft}, \ldots, c_{Rft}]^T$, and $\tilde{\mathbf{A}}_f \in \mathbb{C}^{I \times R}$ is created from the columns of the matrix $\mathbf{A}_f = [a_{ijf}] \in \mathbb{C}^{I \times J}$. For example, assuming $\mathcal{R}_1 = \{1, 2\}$, the first two columns of $\tilde{\mathbf{A}}_f$ are both copies of the first column of $\mathbf{A}_f$; $\tilde{\mathbf{A}}_f$ is the augmented mixing matrix [23] created from the matrices $\mathbf{A}_f$. From (8) and (9), we have $\mathbf{x}_{ft} \sim \mathcal{N}_c(\mathbf{0}, \boldsymbol{\Sigma}_{x,ft})$, where $\boldsymbol{\Sigma}_{x,ft} = \tilde{\mathbf{A}}_f \boldsymbol{\Sigma}_{c,ft} \tilde{\mathbf{A}}_f^H + \boldsymbol{\Sigma}_{n,f}$ and $\boldsymbol{\Sigma}_{c,ft} = \mathrm{diag}\{w_{fr} h_{rt}\}_{r=1}^{R}$.
To estimate the parameters $\boldsymbol{\theta} = \{\tilde{\mathbf{A}} = [\tilde{\mathbf{A}}_f], \mathbf{W}, \mathbf{H}\}$, and $\boldsymbol{\Sigma}_n = [\boldsymbol{\Sigma}_{n,f}]$, we formulate the following posterior:

$$P(\boldsymbol{\theta} \mid \mathbf{X}) \propto P(\mathbf{X} \mid \boldsymbol{\theta})\, P(\mathbf{W})\, P(\mathbf{H}), \qquad (10)$$

from which we obtain

$$\log P(\boldsymbol{\theta} \mid \mathbf{X}) = \log P(\mathbf{X} \mid \boldsymbol{\theta}) + \log P(\mathbf{W}) + \log P(\mathbf{H}) + \mathrm{const}. \qquad (11)$$

From (3) and (9), we have the joint conditional PDF for $\mathbf{X} = [\mathbf{x}_{ft}]$:

$$P(\mathbf{X} \mid \mathbf{C}, \tilde{\mathbf{A}}, \boldsymbol{\Sigma}_n) = \prod_{f,t} \frac{1}{\det\!\left(\pi \boldsymbol{\Sigma}_{n,f}\right)} \exp\!\left\{ -\left(\mathbf{x}_{ft} - \tilde{\mathbf{A}}_f \mathbf{c}_{ft}\right)^H \boldsymbol{\Sigma}_{n,f}^{-1} \left(\mathbf{x}_{ft} - \tilde{\mathbf{A}}_f \mathbf{c}_{ft}\right) \right\}. \qquad (12)$$

Based on (12), the log-likelihood term in (11) can be expressed as

$$\log P(\mathbf{X} \mid \mathbf{C}, \tilde{\mathbf{A}}, \boldsymbol{\Sigma}_n) = -\sum_{f,t} \left[ \left(\mathbf{x}_{ft} - \tilde{\mathbf{A}}_f \mathbf{c}_{ft}\right)^H \boldsymbol{\Sigma}_{n,f}^{-1} \left(\mathbf{x}_{ft} - \tilde{\mathbf{A}}_f \mathbf{c}_{ft}\right) + \log \det\!\left(\pi \boldsymbol{\Sigma}_{n,f}\right) \right], \qquad (13)$$

where $\mathbf{C} = [\mathbf{c}_{ft}]$, and $\boldsymbol{\Sigma}_{n,f} = \mathrm{diag}\{\sigma_{if}^2\}_{i=1}^{I}$.
The joint conditional PDF for $\mathbf{C}$ comes from the model (5):

$$P(\mathbf{C} \mid \mathbf{W}, \mathbf{H}) = \prod_{r,f,t} \frac{1}{\pi\, w_{fr} h_{rt}} \exp\!\left\{ -\frac{|c_{rft}|^2}{w_{fr} h_{rt}} \right\}. \qquad (14)$$

From (14), we have the log-likelihood functional for $\mathbf{C}$:

$$\log P(\mathbf{C} \mid \mathbf{W}, \mathbf{H}) = -\sum_{r,f,t} \left[ \frac{|c_{rft}|^2}{w_{fr} h_{rt}} + \log\!\left(\pi\, w_{fr} h_{rt}\right) \right]. \qquad (15)$$

Up to additive terms independent of W and H, the negative log-likelihood in (15) is the Itakura-Saito (IS) divergence between $|c_{rft}|^2$ and $w_{fr}h_{rt}$ [35], which is particularly useful for measuring the goodness of fit between spectrograms. The IS divergence is the special case of the β-divergence obtained for β = 0 [26, 28].
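This relationship can be checked numerically. The snippet below evaluates the IS divergence directly and through the β-divergence at a small β (in the convention of [28], where IS is the limit β → 0; the function names are our illustrative choices):

```python
import numpy as np

def is_div(x, y):
    """Itakura-Saito divergence d_IS(x | y), summed elementwise."""
    r = x / y
    return np.sum(r - np.log(r) - 1.0)

def beta_div(x, y, beta):
    """Beta-divergence for beta not in {0, 1} (convention with IS at beta -> 0)."""
    return np.sum((x**beta + (beta - 1.0) * y**beta
                   - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0)))

x = np.array([1.0, 2.0, 0.5])
y = np.array([1.2, 1.8, 0.7])
d_is = is_div(x, y)
d_b = beta_div(x, y, 1e-6)   # approaches the IS divergence as beta -> 0
```

Note that $d_{IS}(x \mid x) = 0$ and that the divergence is scale-invariant, which is why it suits power spectrograms with large dynamic ranges.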
The priors $P(\mathbf{W})$ and $P(\mathbf{H})$ in (10) can be determined in many ways. Févotte et al. [28] proposed the determination of priors using Markov chains and the inverse Gamma distribution. In our approach, we propose to model the priors with the Gibbs distribution, which is particularly useful for enforcing local smoothness in images.
MRF Smoothing
Let us assume that prior information on the total smoothness of the estimated components W and H is modeled using the following Gibbs distributions:

$$P(\mathbf{W}) = \frac{1}{Z_W} \exp\!\left\{ -\alpha_W\, U(\mathbf{W}) \right\}, \qquad P(\mathbf{H}) = \frac{1}{Z_H} \exp\!\left\{ -\alpha_H\, U(\mathbf{H}) \right\}, \qquad (16)$$

where $Z_W$ and $Z_H$ are partition functions, $\alpha_W$ and $\alpha_H$ are regularization parameters, and $U(\mathbf{P})$ is a total energy function, which measures the total roughness in $\mathbf{P}$. The function $U(\mathbf{P})$ is often formulated with respect to the MRF model, which is commonly used in image reconstruction for modeling local smoothness.
The functions $U(\mathbf{W})$ and $U(\mathbf{H})$ can be determined for the matrices W and H in the following way:

$$U(\mathbf{W}) = \sum_{f=1}^{F} \sum_{r=1}^{R} \sum_{l \in S_f} \nu_{fl}\, \psi\!\left( \frac{w_{fr} - w_{lr}}{\delta_W} \right), \qquad (17)$$

$$U(\mathbf{H}) = \sum_{r=1}^{R} \sum_{t=1}^{T} \sum_{l \in S_t} \nu_{tl}\, \psi\!\left( \frac{h_{rt} - h_{rl}}{\delta_H} \right). \qquad (18)$$

In the first-order interactions (nearest neighborhood), we have $S_f = \{f-1, f+1\}$ with the weighting factor $\nu_{fl} = 1$, and $S_t = \{t-1, t+1\}$ with $\nu_{tl} = 1$. In the second-order interactions, $S_f = \{f-2, f-1, f+1, f+2\}$ and $S_t = \{t-2, t-1, t+1, t+2\}$. The parameters $\delta_W$ and $\delta_H$ are scaling factors, while $\psi(\xi)$ is a potential function of ξ that can take different forms. The potential functions that can be applied to the EM-NMF algorithm are listed in Table 1.
Table 1.
Potential functions
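Assuming the pairwise-difference form of the energy functions above, a roughness measure of this kind can be sketched as follows (the helper `roughness`, its defaults, and the quadratic potential are our illustrative choices):

```python
import numpy as np

def roughness(W, order=1, delta=1.0, psi=lambda x: x**2):
    """Total roughness U(W): sum of psi((w_f - w_l) / delta) over all
    neighbor pairs l in {f - order, ..., f + order} \\ {f}, unit weights."""
    U = 0.0
    for d in range(1, order + 1):
        diff = (W[d:, :] - W[:-d, :]) / delta
        # Each unordered pair appears twice in the double sum over f and l.
        U += 2.0 * np.sum(psi(diff))
    return U

smooth = np.tile(np.linspace(0.0, 1.0, 32)[:, None], (1, 2))   # smooth profiles
rough = np.random.default_rng(0).uniform(0.0, 1.0, (32, 2))    # noisy profiles
```

A smooth frequency profile yields a much smaller energy than a noisy one, which is exactly what the Gibbs priors (16) reward.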
According to Lange [41], a robust potential function in the Gibbs prior should have the following properties: it is nonnegative, even, equal to 0 at ξ = 0, strictly increasing for ξ > 0, unbounded, and convex with a bounded first derivative. Of the functions listed in Table 1, Green's function satisfies all of these properties, and consequently, it was selected for the tests in [33]. Unfortunately, applying Green's function to both matrices W and H demands the determination of two hyperparameters, δW and δH, and two penalty parameters, αW and αH. Moreover, data-driven hyperparameter estimation usually involves an approximation of the partition functions $Z_W$ and $Z_H$, which is not easy in this task.
The Gaussian function $\psi(\xi) = \xi^2$, as shown in Table 1, does not have a bounded first derivative, but its scaling parameter δ may be merged with a penalty parameter α. Consequently, only two parameters need to be determined. The MRF model with a Gaussian potential function is actually the SAR model [42–44], which is widely used in many scientific fields [44, 45] to represent the interactions among spatial data with Gaussian noise. Let $\mathbf{w}_r \in \mathbb{R}^F$ be the r-th column of the matrix W, and $\mathbf{h}_r \in \mathbb{R}^T$ be the r-th row of the matrix H. The random variables in the vectors $\mathbf{w}_r$ and $\mathbf{h}_r$ can be modeled using the following stochastic equations:

$$(\mathbf{I} - \mathbf{B}_W)\, \mathbf{w}_r = \boldsymbol{\epsilon}_r, \qquad (\mathbf{I} - \mathbf{B}_H)\, \mathbf{h}_r^T = \boldsymbol{\eta}_r, \qquad (19)$$

where $\mathbf{B}_W \in \mathbb{R}^{F \times F}$ and $\mathbf{B}_H \in \mathbb{R}^{T \times T}$ are symmetric matrices of spatial dependencies between the random variables, $\boldsymbol{\epsilon}_r$ (and likewise $\boldsymbol{\eta}_r$) is an i.i.d. Gaussian noise vector, and I is an identity matrix of the corresponding size.
According to [45, 46], the spatial dependence matrices can be expressed as $\mathbf{B}_W = \gamma \mathbf{Z}_W$ and $\mathbf{B}_H = \gamma \mathbf{Z}_H$, where γ is a constant that ensures that the matrices $\mathbf{I} - \mathbf{B}_W$ and $\mathbf{I} - \mathbf{B}_H$ are positive-definite, while $\mathbf{Z}_W = [z^{(W)}_{mf}] \in \{0,1\}^{F \times F}$ and $\mathbf{Z}_H = [z^{(H)}_{tn}] \in \{0,1\}^{T \times T}$ are binary symmetric band matrices indicating the neighboring entries in $\mathbf{w}_r$ and $\mathbf{h}_r$, respectively. In the first-order interactions, we have $z^{(W)}_{1,2} = z^{(W)}_{F,F-1} = z^{(W)}_{m,m-1} = z^{(W)}_{m,m+1} = 1$ for $m = 2, \ldots, F-1$ (and analogously for $\mathbf{Z}_H$), and $z^{(W)}_{mf} = z^{(H)}_{tn} = 0$ otherwise. In the P-order interactions, each entry $w_{fr}$ and $h_{rt}$ has the corresponding sets of neighbors $\{w_{f-\nu,r}\}$, $\{w_{f+\nu,r}\}$, $\{h_{r,t-\nu}\}$, and $\{h_{r,t+\nu}\}$ with $\nu = 1, \ldots, P$. As a consequence, $\mathbf{Z}_W$ and $\mathbf{Z}_H$ are symmetric band matrices with P sub-diagonals and P super-diagonals, whose entries are equal to one inside the band and zero otherwise. The matrices $\mathbf{I} - \mathbf{B}_W$ and $\mathbf{I} - \mathbf{B}_H$ are positive-definite if $\gamma < (2P)^{-1}$ for P-order interactions [45, 46]. We selected $\gamma = (2P)^{-1} - \epsilon$, where ε is a small positive constant.
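The positive-definiteness condition $\gamma < (2P)^{-1}$ can be verified numerically; the sketch below builds the binary band matrix for P = 2 (the sizes and the value of ε are arbitrary illustrative choices):

```python
import numpy as np

def band_matrix(n, P):
    """Binary symmetric band matrix with P sub- and P super-diagonals of ones."""
    Z = np.zeros((n, n))
    for d in range(1, P + 1):
        Z += np.diag(np.ones(n - d), d) + np.diag(np.ones(n - d), -d)
    return Z

n, P = 50, 2
Z = band_matrix(n, P)
gamma = 1.0 / (2 * P) - 1e-3      # gamma < (2P)^-1
M = np.eye(n) - gamma * Z         # I - B_W
min_eig = np.linalg.eigvalsh(M).min()
```

By Gershgorin's theorem, the eigenvalues of Z lie in [−2P, 2P], so every eigenvalue of I − γZ is at least 1 − 2Pγ > 0, i.e., the matrix is positive-definite.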
In the SAR model, the Gibbs priors (16) may be expressed as the joint multivariate Gaussian priors:

$$P(\mathbf{W}) = \frac{1}{Z_W} \exp\!\Big\{ -\alpha_W \sum_{r=1}^{R} \left\| (\mathbf{I} - \mathbf{B}_W)\, \mathbf{w}_r \right\|_2^2 \Big\}, \qquad (20)$$

$$P(\mathbf{H}) = \frac{1}{Z_H} \exp\!\Big\{ -\alpha_H \sum_{r=1}^{R} \left\| (\mathbf{I} - \mathbf{B}_H)\, \mathbf{h}_r^T \right\|_2^2 \Big\}, \qquad (21)$$

where $Z_W = \pi^{RF/2}\, \alpha_W^{-RF/2} \prod_{f=1}^{F} \left(1 - \gamma \lambda_f^{(W)}\right)^{-R}$, and $\lambda_f^{(W)}$ is the f-th eigenvalue of the matrix $\mathbf{Z}_W$. Similarly, $Z_H = \pi^{RT/2}\, \alpha_H^{-RT/2} \prod_{t=1}^{T} \left(1 - \gamma \lambda_t^{(H)}\right)^{-R}$. Since γ is fixed, both partition functions are available in closed form as functions of $\alpha_W$ and $\alpha_H$, which simplifies the hyperparameter estimation.
Algorithm
The EM algorithm [47] is applied to maximize the posterior (11). To calculate the E-step, and noting that $\tilde{\mathbf{A}}_f \mathbf{c}_{ft} = \mathbf{A}_f \mathbf{s}_{ft}$, the log-likelihood functional (13) is transformed to the following form:

$$\log P(\mathbf{X} \mid \mathbf{S}, \mathbf{A}, \boldsymbol{\Sigma}_n) = -T \sum_{f} \left\{ \mathrm{tr}\!\left[ \boldsymbol{\Sigma}_{n,f}^{-1} \left( \mathbf{R}_{xx,f} - \mathbf{A}_f \mathbf{R}_{xs,f}^H - \mathbf{R}_{xs,f} \mathbf{A}_f^H + \mathbf{A}_f \mathbf{R}_{ss,f} \mathbf{A}_f^H \right) \right] + \log \det\!\left(\pi \boldsymbol{\Sigma}_{n,f}\right) \right\}, \qquad (22)$$

where the correlation matrices are given by $\mathbf{R}_{xx,f} = \frac{1}{T} \sum_{t} \mathbf{x}_{ft} \mathbf{x}_{ft}^H$ and $\mathbf{R}_{ss,f} = \frac{1}{T} \sum_{t} \mathbf{s}_{ft} \mathbf{s}_{ft}^H$, and the cross-correlation $\mathbf{R}_{xs,f} = \frac{1}{T} \sum_{t} \mathbf{x}_{ft} \mathbf{s}_{ft}^H$.

Ozerov et al. [23] observed that the set $\{\mathbf{R}_{xx,f}, \mathbf{R}_{xs,f}, \mathbf{R}_{ss,f}, |c_{rft}|^2\}$ provides sufficient statistics for the exponential family [47], so the sources $\mathbf{s}_{ft}$ and the latent components $c_{rft}$ can be estimated by computing the conditional expectations of the natural statistics. According to [23], we have the following posterior estimates:

$$\hat{\mathbf{s}}_{ft} = \boldsymbol{\Sigma}_{s,ft}\, \mathbf{A}_f^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{x}_{ft}, \qquad (23)$$

$$\widehat{\mathbf{R}}_{ss,ft} = \hat{\mathbf{s}}_{ft} \hat{\mathbf{s}}_{ft}^H + \left( \mathbf{I} - \boldsymbol{\Sigma}_{s,ft}\, \mathbf{A}_f^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{A}_f \right) \boldsymbol{\Sigma}_{s,ft}. \qquad (24)$$

Similarly, for the latent components, we have

$$\hat{c}_{rft} = w_{fr} h_{rt}\, \tilde{\mathbf{a}}_{rf}^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{x}_{ft}, \qquad (25)$$

$$\widehat{|c_{rft}|^2} = |\hat{c}_{rft}|^2 + w_{fr} h_{rt} \left( 1 - w_{fr} h_{rt}\, \tilde{\mathbf{a}}_{rf}^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \tilde{\mathbf{a}}_{rf} \right). \qquad (26)$$

The conditional expectations for the sufficient statistics are as follows:

$$\hat{\mathbf{R}}_{xx,f} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{x}_{ft} \mathbf{x}_{ft}^H, \qquad (27)$$

$$\hat{\mathbf{R}}_{xs,f} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{x}_{ft} \hat{\mathbf{s}}_{ft}^H, \qquad (28)$$

$$\hat{\mathbf{R}}_{ss,f} = \frac{1}{T} \sum_{t=1}^{T} \widehat{\mathbf{R}}_{ss,ft}. \qquad (29)$$

Detailed derivations of the formulae (23)–(26) are presented in the "Appendix".
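According to [23], the posterior source estimates (23) and (24) amount to multichannel Wiener filtering at each TF bin. A minimal single-bin sketch under the Gaussian model (all dimensions, variances, and variable names are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
I_mic, J_src = 2, 3                              # stereo, three sources
A = rng.normal(size=(I_mic, J_src)) + 1j * rng.normal(size=(I_mic, J_src))
var_s = rng.uniform(0.5, 2.0, J_src)             # per-source variances sum_r w_fr h_rt
Sigma_s = np.diag(var_s).astype(complex)
Sigma_n = 0.1 * np.eye(I_mic, dtype=complex)

# Simulate one TF bin: complex Gaussian sources plus noise.
s = (rng.normal(size=J_src) + 1j * rng.normal(size=J_src)) * np.sqrt(var_s / 2)
n = (rng.normal(size=I_mic) + 1j * rng.normal(size=I_mic)) * np.sqrt(0.05)
x = A @ s + n

Sigma_x = A @ Sigma_s @ A.conj().T + Sigma_n     # mixture covariance
G = Sigma_s @ A.conj().T @ np.linalg.inv(Sigma_x)  # Wiener gain
s_hat = G @ x                                    # posterior mean of the sources
R_ss = np.outer(s_hat, s_hat.conj()) + (np.eye(J_src) - G @ A) @ Sigma_s
```

The second term of `R_ss` is the posterior covariance, so the conditional second-order statistic stays Hermitian and positive semidefinite even when the problem is under-determined (I < J).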
From the M-step, we have $\frac{\partial}{\partial \mathbf{A}_f} \log P(\boldsymbol{\theta} \mid \mathbf{X}) = \mathbf{0}$, which gives $\mathbf{A}_f = \hat{\mathbf{R}}_{xs,f}\, \hat{\mathbf{R}}_{ss,f}^{-1}$. From

$$\frac{\partial}{\partial \boldsymbol{\Sigma}_{n,f}} \log P(\boldsymbol{\theta} \mid \mathbf{X}) = \mathbf{0},$$

we have

$$\boldsymbol{\Sigma}_{n,f} = \mathrm{diag}\!\left( \hat{\mathbf{R}}_{xx,f} - \mathbf{A}_f \hat{\mathbf{R}}_{xs,f}^H - \hat{\mathbf{R}}_{xs,f} \mathbf{A}_f^H + \mathbf{A}_f \hat{\mathbf{R}}_{ss,f} \mathbf{A}_f^H \right).$$

From $\frac{\partial}{\partial w_{fr}} \log P(\boldsymbol{\theta} \mid \mathbf{X}) = 0$, we have

$$\frac{1}{w_{fr}^2} \sum_{t=1}^{T} \frac{\widehat{|c_{rft}|^2}}{h_{rt}} = \frac{T}{w_{fr}} + \alpha_W\, \frac{\partial U(\mathbf{W})}{\partial w_{fr}}. \qquad (30)$$

Similarly, from $\frac{\partial}{\partial h_{rt}} \log P(\boldsymbol{\theta} \mid \mathbf{X}) = 0$, we get

$$\frac{1}{h_{rt}^2} \sum_{f=1}^{F} \frac{\widehat{|c_{rft}|^2}}{w_{fr}} = \frac{F}{h_{rt}} + \alpha_H\, \frac{\partial U(\mathbf{H})}{\partial h_{rt}}. \qquad (31)$$

The terms $\frac{\partial U(\mathbf{W})}{\partial w_{fr}}$ and $\frac{\partial U(\mathbf{H})}{\partial h_{rt}}$ in (30) and (31) take the following forms with respect to the potential functions:

- Gaussian (SAR model):

$$\frac{\partial U(\mathbf{W})}{\partial w_{fr}} = 2 \left[ (\mathbf{I} - \mathbf{B}_W)^T (\mathbf{I} - \mathbf{B}_W)\, \mathbf{w}_r \right]_f, \qquad (32)$$

$$\frac{\partial U(\mathbf{H})}{\partial h_{rt}} = 2 \left[ (\mathbf{I} - \mathbf{B}_H)^T (\mathbf{I} - \mathbf{B}_H)\, \mathbf{h}_r^T \right]_t. \qquad (33)$$
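For the Gaussian (SAR) potential, the prior energy of one column is $U(\mathbf{w}) = \|(\mathbf{I} - \mathbf{B}_W)\mathbf{w}\|_2^2$, whose gradient is $2(\mathbf{I} - \mathbf{B}_W)^T(\mathbf{I} - \mathbf{B}_W)\mathbf{w}$. Assuming that form, the gradient can be confirmed with a finite-difference check (all sizes and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
Z = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # first-order band
M = np.eye(n) - 0.45 * Z          # I - B_W with gamma = 0.45 < 1/2
w = rng.uniform(0.1, 1.0, n)

U = lambda v: np.sum((M @ v) ** 2)        # SAR energy of one column
grad = 2.0 * M.T @ M @ w                  # analytic gradient
h = 1e-6
fd = np.array([(U(w + h * e) - U(w - h * e)) / (2 * h)   # central differences
               for e in np.eye(n)])
err = np.max(np.abs(grad - fd))
```

Because the energy is quadratic, the central difference is exact up to rounding, so `err` is tiny; this is the quantity that enters the stationary conditions of the M-step as the smoothing term.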
Experiments
Experiments were conducted using selected sound recordings taken from the stereo audio source separation evaluation campaign (SiSEC)1 in 2008. This campaign aimed to evaluate the performance of source separation algorithms using stereo under-determined mixtures. We selected the benchmarks given in Table 2, which include speech recordings (three male voices, male3, and three female voices, female3), three nonpercussive music sources (nodrums), and three music sources that include drums (wdrums). The mixed signals were 10 s recordings sampled at 16 kHz (the standard settings of the recordings from the "Under-determined speech and music mixtures" datasets in SiSEC2008). For each benchmark, the number of true sources was three (J = 3), but only two microphones (I = 2) were available, that is, stereo recordings. Thus, in each case, we faced an under-determined BSS problem. All instantaneous mixtures were obtained using the same mixing matrix with positive coefficients. Synthetic convolutive mixtures were obtained for a meeting room with a 250 ms reverberation time using omnidirectional microphones with 1 m spacing.
Table 2.
Benchmarks
| Instantaneous | Convolutive |
|---|---|
| male3_inst_mix | male3_synthconv_250ms_1m_mix |
| female3_inst_mix | female3_synthconv_250ms_1m_mix |
| nodrums_inst_mix | nodrums_synthconv_250ms_1m_mix |
| wdrums_inst_mix | wdrums_synthconv_250ms_1m_mix |
The spectrograms were obtained by a short-time Fourier transform (STFT) using half-overlapping sine windows. To create the spectrograms and recover the time-domain signals from the STFT coefficients, we used the corresponding stft_multi and istft_multi Matlab functions from the SiSEC2008 webpage2 [48]. For the instantaneous and convolutive mixtures, the window lengths were set to 1,024 and 2,048 samples, respectively.
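The actual stft_multi/istft_multi routines are not reproduced here; the sketch below shows the principle of a half-overlapping sine-window STFT, for which sin² + cos² = 1 guarantees exact overlap-add reconstruction of the interior samples (the window length and test signal are arbitrary choices):

```python
import numpy as np

def stft_sine(x, N):
    """STFT with a sine window and 50% overlap (hop = N/2)."""
    w = np.sin(np.pi * (np.arange(N) + 0.5) / N)
    hop = N // 2
    frames = [w * x[k:k + N] for k in range(0, len(x) - N + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames]), w

def istft_sine(S, w, length):
    """Windowed overlap-add synthesis with the same sine window."""
    N, hop = len(w), len(w) // 2
    x = np.zeros(length)
    for m, spec in enumerate(S):
        x[m * hop:m * hop + N] += w * np.fft.irfft(spec, N)
    return x

x = np.random.default_rng(0).normal(size=1024)
S, w = stft_sine(x, 128)
y = istft_sine(S, w, len(x))
```

Interior samples are covered by exactly two frames whose squared windows sum to one, so `y` matches `x` away from the un-windowed edges.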
The EM-NMF algorithm was taken from Ozerov’s homepage3, while the MRF-EM-NMF algorithm was coded and extensively tested by Ochal [49].
The proposed algorithm is based on an alternating optimization scheme, which is intrinsically non-convex, and hence its initialization plays an important role. An incorrect initialization may result in slow convergence and early stagnation at an unfavorable local minimum of the objective function. As in many NMF algorithms, the factors W and H are initialized with uniformly distributed random numbers, whereas the entries in the matrix A are drawn from a zero-mean complex Gaussian distribution. After W and H have been initialized, the source covariances given by (8) can be computed. A noise covariance matrix is also needed to update the E-step. Ozerov and Févotte [23] tested several techniques for determining this matrix. The E-step in MRF-EM-NMF is identical to that in EM-NMF [23], and hence all of these techniques can be used in this experiment. The initial noise covariance matrix was determined based on the empirical variance of the observed power spectrograms.
The MRF-EM-NMF and EM-NMF algorithms were initialized using the same random values and run for 1,500 iterations.
The choice of the parameters {αW, αH, γW, γH} used in the Gibbs distributions also affects the performance. The regularization parameters can be fixed or changed with the iterations. Motivated by iterative thresholding strategies [26], we used the following strategies:

- Linear thresholding: ![]()
- Nonlinear thresholding: ![]()
- Fixed thresholding: $\alpha^{(k)} = 0$ for $k < k_1$, and $\alpha^{(k)} = \alpha$ for $k \geq k_1$,

where k is the current iteration, $k_{\max}$ is the maximum number of iterations, $k_1$ is the threshold, and α can be equal to αW or αH (the linear and nonlinear rules additionally involve a shape parameter and a shift parameter). All of the above thresholding strategies aim to relax smoothing during the early iterations, when the descent directions in the updates are sufficiently steep, and to emphasize smoothing when noisy perturbations become significantly detrimental to the overall smoothness. These strategies are motivated by the standard regularization rules applied to ill-posed problems. We tested all of the thresholding strategies using instantaneous and convolutive mixtures, and we obtained the best performance with fixed thresholding using $k_1 = k_{\max}/2$.
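The schedules can be sketched as follows; the fixed rule follows the description above (zero smoothing before k₁), while the linear ramp shown is only one plausible form of the linear rule, since its exact formula is not reproduced here:

```python
import numpy as np

def alpha_fixed(k, k1, alpha):
    """Fixed thresholding: no smoothing before iteration k1, full after."""
    return 0.0 if k < k1 else alpha

def alpha_linear(k, k1, k_max, alpha):
    """A plausible linear ramp from 0 at k1 up to alpha at k_max (assumed form)."""
    return alpha * float(np.clip((k - k1) / (k_max - k1), 0.0, 1.0))

k_max = 1500
fixed_vals = [alpha_fixed(k, k_max // 2, 0.01) for k in (0, 750, 1500)]
linear_vals = [alpha_linear(k, k_max // 2, k_max, 0.01) for k in (0, 750, 1500)]
```

With $k_1 = k_{\max}/2$ the fixed rule switches the penalty on exactly at mid-run, which matched the best performance reported above.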
The parameters δW and δH in the MRF models can be estimated using standard marginalization procedures or by maximizing the Type II ML estimate for (10). However, these techniques incur a huge computational cost for the nonlinear potential functions in the MRF models. For practical reasons, they are not very useful for the GR or HL functions.
In this study, we tested all of the benchmarks in Table 2 with the following potential functions: the first- and second-order Gaussian, GR, and HL. For the Gaussian functions, we tested all combinations of the regularization parameters αW and αH from the discrete set {0.001, 0.005, 0.01, 0.05, 0.1}. For GR and HL, the regularization parameters could take only two values, {0.001, 0.01}, whereas the parameters δW and δH were tested with the following values: {0.1, 1, 10}. The optimal values of the smoothing parameters are summarized in Table 3.
Table 3.
Parameters of the MRF-EM-NMF algorithm for each test case shown in Fig. 1
| Benchmark | Smoothing | R (inst.) | αW (inst.) | αH (inst.) | δW (inst.) | δH (inst.) | R (conv.) | αW (conv.) | αH (conv.) | δW (conv.) | δH (conv.) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Male | GR | 12 | 0.01 | 0.01 | 1 | 1 | 4 | 0.01 | 0.01 | 0.1 | 10 |
| Male | HL | 12 | 0.001 | 0.001 | 1 | 10 | 4 | 0.001 | 0.01 | 1 | 1 |
| Male | 1-Gaussian | 12 | 0.001 | 0.01 | – | – | 4 | 0.05 | 0.05 | – | – |
| Male | 2-Gaussian | 12 | 0.001 | 0.01 | – | – | 4 | 0.05 | 0.01 | – | – |
| Female | GR | 12 | 0.01 | 0.01 | 10 | 10 | 4 | 0.01 | 0.01 | 1 | 1 |
| Female | HL | 12 | 0.001 | 0.001 | 1 | 10 | 4 | 0.001 | 0.001 | 0.1 | 10 |
| Female | 1-Gaussian | 12 | 0.001 | 0.001 | – | – | 4 | 0.1 | 0.001 | – | – |
| Female | 2-Gaussian | 12 | 0.001 | 0.001 | – | – | 4 | 0.05 | 0.005 | – | – |
| Nodrums | GR | 4 | 0.01 | 0.01 | 10 | 1 | 4 | 0.01 | 0.01 | 10 | 0.1 |
| Nodrums | HL | 4 | 0.01 | 0.001 | 1 | 10 | 4 | 0.01 | 0.01 | 0.1 | 0.1 |
| Nodrums | 1-Gaussian | 4 | 0.001 | 0.01 | – | – | 4 | 0.001 | 0.05 | – | – |
| Nodrums | 2-Gaussian | 4 | 0.01 | 0.001 | – | – | 4 | 0.005 | 0.01 | – | – |
| Wdrums | GR | 4 | 0.01 | 0.01 | 1 | 10 | 4 | 0.01 | 0.01 | 1 | 1 |
| Wdrums | HL | 4 | 0.01 | 0.001 | 1 | 10 | 4 | 0.001 | 0.01 | 1 | 0.1 |
| Wdrums | 1-Gaussian | 4 | 0.001 | 0.001 | – | – | 4 | 0.001 | 0.1 | – | – |
| Wdrums | 2-Gaussian | 4 | 0.001 | 0.001 | – | – | 4 | 0.005 | 0.1 | – | – |
The notations "1-Gaussian" and "2-Gaussian" represent the first- and second-order Gaussian functions, respectively
The separation results were evaluated in terms of the signal-to-distortion ratio (SDR) and the signal-to-interference ratio (SIR) [50]. Figure 1 shows the SDRs and SIRs averaged over the sources, which were estimated using EM-NMF and MRF-EM-NMF with various smoothing functions based on the instantaneous and convolutive mixing models. For each sample in Table 2 and each smoothing function, the smoothing parameters were tuned optimally for a given fixed initializer. This unsupervised learning approach evaluated the efficiency of the smoothing functions with respect to a given recording scenario. In practice, however, the smoothing parameters need to be determined within a supervised learning framework. To test this option, each recording in Table 2 was divided into two 5 s excerpts, used in the training and testing stages. For each training excerpt, the smoothing parameters and initializer were selected to maximize the SDR performance. Testing was performed on the other excerpt with the same initializer. The results obtained during the testing stage with the instantaneous mixtures are shown in Fig. 2.
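For orientation, a simplified, scale-invariant SDR can be computed as below; the full BSS Eval SDR [50] additionally allows a filtering distortion of the reference, so this sketch is only an approximation (the helper name `sdr_simple` is ours):

```python
import numpy as np

def sdr_simple(ref, est):
    """Simplified SDR (dB): projects est onto ref to allow a gain factor,
    then compares the target energy with the residual energy."""
    alpha = np.dot(est, ref) / np.dot(ref, ref)   # optimal gain
    target = alpha * ref
    return 10.0 * np.log10(np.sum(target**2) / np.sum((est - target)**2))

rng = np.random.default_rng(0)
s = rng.normal(size=16000)                 # 1 s reference at 16 kHz
est = s + 0.1 * rng.normal(size=16000)     # estimate with -20 dB distortion
val = sdr_simple(s, est)
```

Here the distortion power is 1% of the signal power, so `val` comes out close to 20 dB, the expected SDR for this synthetic case.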
Fig. 1.
Source separation results obtained with the MRF-EM-NMF (first- and second-order Gaussian, GR, and HL functions) and EM-NMF (no smoothing) algorithms after 1,500 iterations: a mean SDR (dB) for instantaneous mixture, b mean SDR (dB) for convolutive mixture, c mean SIR (dB) for instantaneous mixture, d mean SIR (dB) for convolutive mixture. The smoothing parameters were tuned separately for each mixture in Table 2
Fig. 2.
Source separation results obtained in the testing stage with the MRF-EM-NMF (first- and second-order Gaussian, GR, and HL functions) and EM-NMF (no smoothing) algorithm after 1,500 iterations: a mean SDR (dB), b mean SIR (dB). The smoothing parameters were determined during the training stage. 5 s excerpts were used in the training and testing stages
For comparison, Table 4 shows the average SDR results produced and the running time taken when using several state-of-the-art algorithms, which were applied to the mixtures in Table 2. The generalized Gaussian prior (GGP) algorithm [51] and the statistically sparse decomposition principle (SSDP) algorithms [52] were applied to the instantaneous mixtures. The convolutive mixtures were unmixed with the IPD [21], two versions of the FBWC-PA [17, 18] algorithm, and the Convolutive NMF [22]. Note that the last method in this list was based on supervised learning, whereas the others were unsupervised learning algorithms. In this case, the first 8 s excerpts of the 10 s source recordings were used for learning, while the remainder was used for testing.
Table 4.
Mean SDR (dB) and running time (s) for sources estimated from the mixtures shown in Table 2
| Algorithm | Mixture | Male | Female | Nodrums | Wdrums | Time |
|---|---|---|---|---|---|---|
| MRF-EM-NMF (HL) | inst | 8.06 | 9.95 | 24.07 | 21.72 | 2487 |
| MRF-EM-NMF (GR) [33] | inst | 7.69 | 8.86 | 26.65 | 21.28 | 2498 |
| EM-NMF [23] | inst | 2.62 | 6.5 | 11.7 | 19.87 | 2456 |
| GGP [51] | inst | 8.4 | 8.57 | 13.9 | 10.3 | 5 |
| SABM+SSDP [52] | inst | 4.25 | 3.82 | 5.83 | 9.43 | 2 |
| MRF-EM-NMF (HL) | conv | 1.06 | 2.2 | 1.17 | 1.7 | 2760 |
| MRF-EM-NMF (GR) [33] | conv | 1.4 | 2.1 | 1.2 | 1.56 | 2762 |
| EM-NMF [23] | conv | 0.95 | 1.6 | 0.2 | 0.44 | 2720 |
| IPD [21] | conv | 1.53 | 1.43 | 2.2 | −2.7 | 1200 |
| FBWC-PA [17] | conv | −0.1 | 4.43 | 0.77 | −2.53 | 40 |
| Generalized FBWC-PA [18] | conv | 5.95 | 7.45 | 1.2 | −0.69 | 8 |
| ConvNMF [22] | conv | −0.7 | −0.47 | 3.85 | 8.13 | 347 |
The average elapsed time measured using Matlab 2008a for 1,500 iterations, executed on a 64-bit Intel Quad Core CPU (3 GHz) with 8 GB RAM, was almost the same for the MRF-EM-NMF and EM-NMF algorithms (see Table 4).
The simulations demonstrate that MRF smoothing improved the source separation results in almost all test cases. The results confirmed that instantaneous mixtures were considerably easier to separate than convolutive ones. The MRF-EM-NMF algorithm delivered the best mean SDR performance of all the algorithms tested with instantaneous mixtures. The highest SDR values were produced with instantaneously mixed non-percussive music sources. This was justified by the smooth frequency and temporal structures of non-percussive music spectrograms. If the source spectrograms were not very smooth (as with the percussive audio recordings), MRF smoothing gave only a slight improvement (see Figs. 1, 2) in the first-order MRF interactions, and even a slight deterioration in the higher-order MRF interactions. According to Fig. 1, the HL function delivered the most promising SDR results, which were stable with a wide range of parameters. In each case with the instantaneous mixtures, the best results were produced with the same hyperparameter values, δW = 1 and δH = 10, and almost the same penalty parameter values, αW and αH. The SAR model also improved the results compared with the standard EM-NMF algorithm. Moreover, the SAR model was tuned using only two penalty parameters, and the partition function of the associated Gibbs prior could be derived using a closed-form expression, which might be very useful for data-driven hyperparameter estimation.
The source separation results produced with the MRF-EM-NMF algorithm for convolutive and under-determined mixtures were better than those obtained with the EM-NMF algorithm. Unfortunately, the SDR values showed that these results were still a long way from being perfect, even after 1,500 iterations, and thus, further research is needed in this field. It is likely that some additional prior information could be imposed, especially on a mixing operator, which might increase the efficiency considerably.
It should be noted that the SDR performance with both mixtures could still be improved by refining the associated parameters, especially in the MRF models, and by using more efficient initializers.
Conclusions
This study demonstrated that imposing MRF smoothing on the power spectrograms of audio sources estimated from under-determined unmixing problems may improve the quality of estimated audio sounds considerably. This was justified because any type of meaningful prior information improves the performance, especially with under-determined problems. This study addressed the application of MRF smoothing in the EM-NMF algorithm, but this type of smoothing could be applied to many other related BSS algorithms based on feature extraction from power spectrograms. Thus, the theoretical results presented in this paper may have broad practical applications. Clearly, further studies are needed to improve this technique for convolutive mixtures and to integrate regularization parameter estimation techniques in the main algorithm.
Acknowledgments
This work was supported by habilitation grant N N515 603139 (2010–2012) from the Ministry of Science and Higher Education, Poland. The author would like to thank the reviewers for their valuable comments.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Appendix
The conditional expectations of the natural statistics can be derived from the a posteriori distributions $P(\mathbf{s}_{ft} \mid \mathbf{x}_{ft}; \boldsymbol{\theta})$ and $P(\mathbf{c}_{ft} \mid \mathbf{x}_{ft}; \boldsymbol{\theta})$. Thus,

$$P(\mathbf{s}_{ft} \mid \mathbf{x}_{ft}; \boldsymbol{\theta}) \propto \exp\!\left\{ -\left(\mathbf{x}_{ft} - \mathbf{A}_f \mathbf{s}_{ft}\right)^H \boldsymbol{\Sigma}_{n,f}^{-1} \left(\mathbf{x}_{ft} - \mathbf{A}_f \mathbf{s}_{ft}\right) - \mathbf{s}_{ft}^H \boldsymbol{\Sigma}_{s,ft}^{-1} \mathbf{s}_{ft} \right\}, \qquad (38)$$

where $\boldsymbol{\Sigma}_{s,ft} = \mathbb{E}\{\mathbf{s}_{ft}\mathbf{s}_{ft}^H\}$. By completing the square, we can transform the exponent in (38) into the following form:

$$-\left(\mathbf{s}_{ft} - \hat{\mathbf{s}}_{ft}\right)^H \boldsymbol{\Omega}_{ft}^{-1} \left(\mathbf{s}_{ft} - \hat{\mathbf{s}}_{ft}\right) + \mathrm{const}, \qquad \boldsymbol{\Omega}_{ft} = \left( \mathbf{A}_f^H \boldsymbol{\Sigma}_{n,f}^{-1} \mathbf{A}_f + \boldsymbol{\Sigma}_{s,ft}^{-1} \right)^{-1}.$$

Using the Woodbury matrix identity, we have

$$\boldsymbol{\Omega}_{ft} = \boldsymbol{\Sigma}_{s,ft} - \boldsymbol{\Sigma}_{s,ft} \mathbf{A}_f^H \left( \mathbf{A}_f \boldsymbol{\Sigma}_{s,ft} \mathbf{A}_f^H + \boldsymbol{\Sigma}_{n,f} \right)^{-1} \mathbf{A}_f \boldsymbol{\Sigma}_{s,ft},$$

and finally,

$$P(\mathbf{s}_{ft} \mid \mathbf{x}_{ft}; \boldsymbol{\theta}) = \mathcal{N}_c\!\left( \hat{\mathbf{s}}_{ft}, \boldsymbol{\Omega}_{ft} \right), \qquad (39)$$

where

$$\hat{\mathbf{s}}_{ft} = \boldsymbol{\Sigma}_{s,ft}\, \mathbf{A}_f^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{x}_{ft}, \qquad (40)$$

$$\boldsymbol{\Omega}_{ft} = \left( \mathbf{I} - \boldsymbol{\Sigma}_{s,ft}\, \mathbf{A}_f^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{A}_f \right) \boldsymbol{\Sigma}_{s,ft}. \qquad (41)$$

Thus, $\mathbb{E}\{\mathbf{s}_{ft}\mathbf{s}_{ft}^H \mid \mathbf{x}_{ft}\} = \hat{\mathbf{s}}_{ft}\hat{\mathbf{s}}_{ft}^H + \boldsymbol{\Omega}_{ft}$. From (5), it follows that $\mathbb{E}\{|s_{jft}|^2\} = \sum_{r \in \mathcal{R}_j} w_{fr} h_{rt}$, so $\boldsymbol{\Sigma}_{s,ft} = \mathrm{diag}\big\{\sum_{r \in \mathcal{R}_j} w_{fr} h_{rt}\big\}_{j=1}^{J}$. Since the zero-mean noise $\mathbf{n}_{ft}$ from (3) is not correlated with $\mathbf{s}_{ft}$, we have

$$\boldsymbol{\Sigma}_{x,ft} = \mathbf{A}_f\, \boldsymbol{\Sigma}_{s,ft}\, \mathbf{A}_f^H + \boldsymbol{\Sigma}_{n,f}. \qquad (42)$$

Inserting (42) and $\boldsymbol{\Sigma}_{s,ft}$ into (40) and (41), we obtain the update rules (23) and (24), respectively.

Analyzing $P(\mathbf{c}_{ft} \mid \mathbf{x}_{ft}; \boldsymbol{\theta})$ in the same way, one can obtain $\hat{\mathbf{c}}_{ft} = \boldsymbol{\Sigma}_{c,ft}\, \tilde{\mathbf{A}}_f^H\, \boldsymbol{\Sigma}_{x,ft}^{-1}\, \mathbf{x}_{ft}$ with $\boldsymbol{\Sigma}_{c,ft} = \mathrm{diag}\{w_{fr} h_{rt}\}$, which yields the update rules (25) and (26).
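The Woodbury step used in the appendix can be verified numerically (the dimensions and covariances below are arbitrary illustrative choices):

```python
import numpy as np

# Check: (A^H Sn^-1 A + Ss^-1)^-1 = Ss - Ss A^H (A Ss A^H + Sn)^-1 A Ss
rng = np.random.default_rng(0)
I_mic, J_src = 2, 3
A = rng.normal(size=(I_mic, J_src)) + 1j * rng.normal(size=(I_mic, J_src))
Ss = np.diag(rng.uniform(0.5, 2.0, J_src)).astype(complex)   # source covariance
Sn = np.diag(rng.uniform(0.1, 0.5, I_mic)).astype(complex)   # noise covariance

lhs = np.linalg.inv(A.conj().T @ np.linalg.inv(Sn) @ A + np.linalg.inv(Ss))
rhs = Ss - Ss @ A.conj().T @ np.linalg.inv(A @ Ss @ A.conj().T + Sn) @ A @ Ss
diff = np.max(np.abs(lhs - rhs))
```

The identity replaces a J × J inversion by an I × I one, which is why it is convenient here: in the under-determined case I < J, the mixture covariance is the smaller matrix.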
Footnotes
References
- 1.Cichocki A, Amari SI. Adaptive blind signal and image processing (new revised and improved edition) New York: Wiley; 2003. [Google Scholar]
- 2.Hyvrinen A, Karhunen J, Oja E. Independent component analysis. New York: Wiley; 2001. [Google Scholar]
- 3.Comon P, Jutten C. Handbook of blind source separation: independent component analysis and applications. 1st ed. Burlington, MA: Academic Press, Elsevier; 2010, ISBN: 0123747260, 9780123747266.
- 4.Naik GR, Kumar DK. Dimensional reduction using blind source separation for identifying sources. Int J Innov Comput Inf Control (IJICIC) 2011;7(2):989–1000. [Google Scholar]
- 5.Popescu TD. A new approach for dam monitoring and surveillance using blind source separation. Int J Innov Comput Inf Control (IJICIC) 2011;7(6):3811–3824. [Google Scholar]
- 6.Zhang Z, Miyake T, Imamura T, Enomoto T, Toda H. Blind source separation by combining independent component analysis with the complex discrete wavelet transform. Int J Innov Comput Inf Control (IJICIC) 2010;6(9):4157–4172. [Google Scholar]
- 7.Khosravy M, Asharif MR, Yamashita K. A PDF-matched short-term linear predictability approach to blind source separation. Int J Innov Comput Inf Control (IJICIC) 2009;5(11(A)):3677–3690. [Google Scholar]
- 8.Yang Z, Zhou G, Ding S, Xie S. Nonnegative blind source separation by iterative volume maximization with fully nonnegativity constraints. ICIC Express Lett. 2010;4(6(B)):2329–2334. [Google Scholar]
- 9.Pao TL, Liao WY, Chen YT, Wu TN. Mandarin audio-visual speech recognition with effects to the noise and emotion. Int J Innov Comput Inf Control (IJICIC) 2010;6(2):711–724. [Google Scholar]
- 10.Lin SD, Huang CC, Lin JH. A hybrid audio watermarking technique in cepstrum domain. ICIC Express Lett. 2010;4(5(A)):1597–1602. [Google Scholar]
- 11.Zin TT, Hama H, Tin P, Toriu T. HOG embedded markov chain model for pedestrian detection. ICIC Express Lett. 2010;4(6(B)):2463–2468. [Google Scholar]
- 12.Virtanen T. Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria. IEEE Trans Audio Speech Lang Process. 2007;15(3):1066–1074. doi: 10.1109/TASL.2006.885253. [DOI] [Google Scholar]
- 13.Pedersen MS, Larsen J, Kjems U, Parra LC. Convolutive blind source separation methods. In: Benesty J, Huang Y, Sondhi M, editors. Springer handbook of speech processing. Berlin: Springer; 2008. p. 1065−94, ISBN: 978-3-540-49125-5.
- 14.Parra L, Spence C. Convolutive blind separation of non-stationary sources. IEEE Trans Speech Audio Process. 2000;8(3):320–327. doi: 10.1109/89.841214. [DOI] [Google Scholar]
- 15.Yilmaz O, Rickard S. Blind separation of speech mixtures via time-frequency masking. IEEE Trans Signal Process. 2004;52(7):1830–1847. doi: 10.1109/TSP.2004.828896. [DOI] [Google Scholar]
- 16.Reju VG, Koh SN, Soon IY. Underdetermined convolutive blind source separation via time-frequency masking. IEEE Trans Audio Speech Lang Process. 2010;18(1):101–116. doi: 10.1109/TASL.2009.2024380. [DOI] [Google Scholar]
- 17.Sawada H, Araki S, Makino S. Measuring dependence of bin-wise separated signals for permutation alignment in frequency-domain bss. In: ISCAS; 2007. p. 3247–3250.
- 18.Sawada H, Araki S, Makino S. Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment. IEEE Trans Audio Speech Lang Process. 2011;19(3):516–527. doi: 10.1109/TASL.2010.2051355. [DOI] [Google Scholar]
- 19.Aïssa-El-Bey A, Abed-Meraim K, Grenier Y. Blind separation of underdetermined convolutive mixtures using their time-frequency representation. IEEE Trans Audio Speech Lang Process. 2007;15(5):1540–1550. doi: 10.1109/TASL.2007.898455. [DOI] [Google Scholar]
- 20.Weiss RJ, Ellis DPW. Speech separation using speaker-adapted eigenvoice speech models. Comput Speech Lang. 2010;24(1):16–29. doi: 10.1016/j.csl.2008.03.003. [DOI] [Google Scholar]
- 21.Mandel MI, Ellis DPW, Jebara T. An EM algorithm for localizing multiple sound sources in reverberant environments. In: Schölkopf B, Platt J, Hoffman T, editors. Advances in neural information processing systems 19. Cambridge: MIT Press; 2007. pp. 953–960. [Google Scholar]
- 22.Smaragdis P. Convolutive speech bases and their application to supervised speech separation. IEEE Trans Audio Speech Lang Process. 2007;15(1):1–12. doi: 10.1109/TASL.2006.876726. [DOI] [Google Scholar]
- 23.Ozerov A, Févotte C. Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation. IEEE Trans Audio Speech Lang Process. 2010;18(3):550–563. doi: 10.1109/TASL.2009.2031510. [DOI] [Google Scholar]
- 24.Ozerov A, Févotte C, Blouet R, Durrieu JL (2011) Multichannel nonnegative tensor factorization with structured constraints for user-guided audio source separation. In: ICASSP; p. 257–260.
- 25.Lee DD, Seung HS. Learning the parts of objects by non-negative matrix factorization. Nature. 1999;401:788–791. doi: 10.1038/44565. [DOI] [PubMed] [Google Scholar]
- 26.Cichocki A, Zdunek R, Phan AH, Amari SI. Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation. Chichester, UK: Wiley and Sons; 2009. [Google Scholar]
- 27.Benaroya L, Gribonval R, Bimbot F. Non-negative sparse representation for Wiener based source separation with a single sensor. In: Proceedings of the IEEE international conference on acoustics, speech and signal processing (ICASSP’03), Hong Kong; 2003. p. 613–616.
- 28.Févotte C, Bertin N, Durrieu JL. Nonnegative matrix factorization with the Itakura-Saito divergence: with application to music analysis. Neural Comput. 2009;21(3):793–830. doi: 10.1162/neco.2008.04-08-771. [DOI] [PubMed] [Google Scholar]
- 29.Duong NQK, Vincent E, Gribonval R. Under-determined reverberant audio source separation using a full-rank spatial covariance model. IEEE Trans Audio Speech Lang Process. 2010;18(7):1830–1840. doi: 10.1109/TASL.2010.2050716. [DOI] [Google Scholar]
- 30.Zdunek R, Cichocki A. Blind image separation using nonnegative matrix factorization with Gibbs smoothing. In: Ishikawa M, Doya K, Miyamoto H, Yamakawa T, editors. Neural information processing, vol 4985 of Lecture notes in computer science. Berlin: Springer; 2008. p. 519–528. ICONIP 2007.
- 31.Zdunek R, Cichocki A. Improved M-FOCUSS algorithm with overlapping blocks for locally smooth sparse signals. IEEE Trans Signal Process. 2008;56(10):4752–4761. doi: 10.1109/TSP.2008.928160. [DOI] [Google Scholar]
- 32.Ozerov A, Vincent E, Bimbot F. A general flexible framework for the handling of prior information in audio source separation. IEEE Trans Audio Speech Lang Process. 2012;20(4):1118–1133. doi: 10.1109/TASL.2011.2172425. [DOI] [Google Scholar]
- 33.Zdunek R. Convolutive nonnegative matrix factorization with Markov random field smoothing for blind unmixing of multichannel speech recordings. In: Travieso-Gonzalez CM, Alonso-Hernandez JB, editors. Advances in nonlinear speech processing, vol 7015 of Lecture notes in artificial intelligence (LNAI). Berlin: Springer; 2011. p. 25–32. NOLISP 2011.
- 34.Green PJ. Bayesian reconstruction from emission tomography data using a modified EM algorithm. IEEE Trans Med Imaging. 1990;9:84–93. doi: 10.1109/42.52985. [DOI] [PubMed] [Google Scholar]
- 35.Itakura F, Saito S. An analysis-synthesis telephony based on the maximum likelihood method, vol c-5-5. In: Proceedings of the 6th International Congress on Acoustics, Tokyo, Japan. New York: Elsevier; 1968. p. 17–20.
- 36.Besag J. Toward Bayesian image analysis. J Appl Stat. 1989;16:395–407. doi: 10.1080/02664768900000049. [DOI] [Google Scholar]
- 37.Bouman CA, Sauer K. A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans Image Process. 1993;2:296–310. doi: 10.1109/83.236536. [DOI] [PubMed] [Google Scholar]
- 38.Geman S, McClure D. Statistical methods for tomographic image reconstruction. Bull Int Stat Inst. 1987;LII-4:5–21. [Google Scholar]
- 39.Geman S, Reynolds G. Constrained parameters and the recovery of discontinuities. IEEE Trans Pattern Anal Mach Intell. 1992;14:367–383. doi: 10.1109/34.120331. [DOI] [Google Scholar]
- 40.Hebert T, Leahy R. A generalized EM algorithm for 3-D Bayesian reconstruction from poisson data using Gibbs priors. IEEE Trans Med Imaging. 1989;8:194–202. doi: 10.1109/42.24868. [DOI] [PubMed] [Google Scholar]
- 41.Lange K. Convergence of EM image reconstruction algorithms with Gibbs smoothing. IEEE Trans Med Imaging. 1990;9(4):439–446. doi: 10.1109/42.61759. [DOI] [PubMed] [Google Scholar]
- 42.Whittle P. On stationary processes in the plane. Biometrika. 1954;41(3):434–449. [Google Scholar]
- 43.Besag J. Spatial interactions and the statistical analysis of lattice systems. J R Stat Soc Ser B. 1974;36:192–236. [Google Scholar]
- 44.Ripley BD. Spatial statistics. New York: Wiley; 1981. [Google Scholar]
- 45.Molina R, Katsaggelos A, Mateos J. Bayesian and regularization methods for hyperparameter estimation in image restoration. IEEE Trans Image Process. 1999;8(2):231–246. doi: 10.1109/83.743857. [DOI] [PubMed] [Google Scholar]
- 46.Galatsanos N, Mesarovic V, Molina R, Katsaggelos A. Hierarchical Bayesian image restoration for partially-known blurs. IEEE Trans Image Process. 2000;9(10):1784–1797. doi: 10.1109/83.869189. [DOI] [PubMed] [Google Scholar]
- 47.Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc. 1977;39(1):1–38. [Google Scholar]
- 48.Vincent E, Araki S, Theis FJ, Nolte G, Bofill P, Sawada H, Ozerov A, Gowreesunker BV, Lutter D, Duong QKN. The signal separation evaluation campaign (2007–2010): achievements and remaining challenges. Signal Process. 2012;92:1928–1936. doi: 10.1016/j.sigpro.2011.10.007. [DOI] [Google Scholar]
- 49.Ochal P. Application of convolutive nonnegative matrix factorization for separation of musical instrument sounds from multichannel polyphonic recordings. M.Sc. thesis (supervised by Dr. R. Zdunek), Wroclaw University of Technology, Poland; 2010 (in Polish).
- 50.Vincent E, Gribonval R, Févotte C. Performance measurement in blind audio source separation. IEEE Trans Audio Speech Lang Process. 2006;14(4):1462–1469. doi: 10.1109/TSA.2005.858005. [DOI] [Google Scholar]
- 51.Vincent E. Complex nonconvex lp norm minimization for underdetermined source separation. In: Proceedings of the 7th international conference on Independent component analysis and signal separation. ICA’07. Berlin: Springer; 2007. p. 430–437.
- 52.Xiao M, Xie S, Fu Y. A statistically sparse decomposition principle for underdetermined blind source separation. In: Proceedings of 2005 international symposium on intelligent signal processing and communication systems (ISPACS 2005); 2005. p. 165–168.