. 2021 Apr 28;23(5):547. doi: 10.3390/e23050547

Improved Approach for the Maximum Entropy Deconvolution Problem

Shay Shlisel 1, Monika Pinchas 1,*
Editor: Gwanggil Jeon
PMCID: PMC8146814  PMID: 33925207

Abstract

The probability density function (pdf) valid for the Gaussian case is often applied to describe the convolutional noise pdf in the blind adaptive deconvolution problem, although it is known to be applicable only at the latter stages of the deconvolution process, where the convolutional noise pdf tends to be approximately Gaussian. Recently, the convolutional noise pdf was approximated with the Edgeworth Expansion and with the Maximum Entropy density function for the 16 Quadrature Amplitude Modulation (QAM) input. However, for the hard channel case, the equalization algorithm based on the Maximum Entropy density function approach for the convolutional noise pdf showed no equalization performance improvement compared with the original Maximum Entropy algorithm, while the Edgeworth Expansion approximation technique required additional predefined parameters in the algorithm. In this paper, the Generalized Gaussian density (GGD) function and the Edgeworth Expansion are applied to approximate the convolutional noise pdf for the 16 QAM input case, with no need for additional predefined parameters in the obtained equalization method. Simulation results indicate that, for the hard channel case, our new proposed equalization method based on the new model for the convolutional noise pdf improves the convergence time by approximately 15,000 symbols compared to the original Maximum Entropy algorithm. By convergence time, we mean the number of symbols required to reach a residual inter-symbol interference (ISI) for which reliable decisions can be made on the equalized output sequence.

Keywords: maximum entropy, deconvolution, blind equalization, Edgeworth Expansion, Generalized Gaussian Distribution (GGD), Laplace integral

1. Introduction

In this paper, the blind adaptive deconvolution problem (blind adaptive equalizer) is considered, where we observe the output of an unknown linear system (channel) from which we want to recover its input, using an adaptive blind equalizer (adaptive linear filter) [1,2,3,4,5,6]. The linear system (channel) is often modeled as a finite impulse response (FIR) filter. Since the channel coefficients are unknown, the equalizer's coefficients used in the deconvolution process are only approximated values, leading to an error signal that is added to the source signal at the output of the deconvolution process. In the following, we refer to this error signal throughout the paper as the convolutional noise. The Gaussian pdf is often applied in the literature [1,7,8,9,10,11] for approximating the convolutional noise pdf when calculating the conditional expectation of the source input given the equalized output sequence, based on Bayes rules. However, according to [8], the convolutional noise pdf tends approximately to a Gaussian pdf only at the latter stages of the iterative deconvolution process, where the equalizer has converged to a relatively low residual ISI (where the convolutional noise is relatively low). In the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the input sequence and the convolutional noise sequence are strongly correlated and the convolutional noise pdf is more uniform than Gaussian [8,12]. Recently [3,4], the convolutional noise pdf was approximated with the Maximum Entropy density approximation technique [1,2,13,14] with Lagrange multipliers up to order four and with the Edgeworth Expansion series [15,16] up to order six, to obtain the conditional expectation of the source signal (16 QAM input case), given the equalized output via Bayes rules.
However, as demonstrated in [3], the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation based on approximating the convolutional noise pdf with the Maximum Entropy density approximation technique achieved, for the hard channel case (named the channel4 case in [3]), the same equalization performance from the residual ISI and convergence time point of view as the original blind adaptive equalization algorithm [1], where the convolutional noise pdf was approximated with the Gaussian pdf to obtain the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules. The equalization performance obtained with the Edgeworth Expansion approach [4] was indeed improved compared with the original blind adaptive equalization algorithm [1]. However, this equalization method [4] needed two additional predefined parameters (in addition to the predefined step-size parameter involved in the equalizer's coefficients update mechanism) in the algorithm. These two additional predefined parameters were used in the approximation of the fourth and sixth moments of the convolutional noise. Since the convolutional noise is channel dependent, the various moments of the convolutional noise are also channel dependent, which causes the two additional predefined parameters in [4] to be channel dependent as well. As already implied earlier, the shape of the convolutional noise pdf changes during the iterative deconvolution process.
Thus, if we could have an approximation for the convolutional noise pdf that is close to optimal, we could have a closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules that may lead to improved equalization performance from the residual ISI and convergence time point of view compared to existing methods based on the closed-form approximated conditional expectation expression [1,3,4]. According to [17,18,19], the GGD provides a flexible and suitable tool for data modeling and simulation. The GGD [17,18] is based on a shape parameter that changes the pdf, which may be a Laplacian (double exponential) distribution, a Gaussian distribution or a uniform distribution for a shape parameter equal to one, two and infinity, respectively. The shape of the convolutional noise pdf changes as a function of the residual ISI. Thus, in order to apply the GGD for the convolutional noise pdf approximation task, the shape parameter related to the GGD presentation must be a function of the residual ISI. Recently [20], the shape parameter related to the GGD presentation [17,18] was given as a function of the residual ISI.

In this paper, we deal with the 16QAM input case, where we use the GGD presentation [17,18] with the results obtained from [20] to approximate the convolutional noise pdf involved in the calculation of the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules. Since the shape parameter related to the GGD presentation [17,18] may also have fractional values during the iterative deconvolution process, the integral involved in the conditional expectation calculation may not lead to a closed-form approximated expression. Thus, in this work we use the Edgeworth Expansion series [15,16] up to order six for approximating the GGD presentation applicable for the convolutional noise pdf, where the fourth and sixth moments of the convolutional noise sequence are approximated with the GGD technique [17,18]. By applying the GGD [17,18], the Edgeworth Expansion [15,16] and the results from [20] (the relationship between the shape parameter and the residual ISI), a new closed-form approximated expression is proposed for the conditional expectation of the source signal given the equalized output via Bayes rules, with no need for additional predefined parameters in the obtained equalization method, as is the case in [4]. Simulation results indicate that with our new proposed equalization method, based on the new model for the convolutional noise pdf, we have:

  • Improved equalization performance from the convergence time point of view for the easy [6] as well as for the hard channel case, compared to the original Maximum Entropy algorithm [1]. The improvement in the convergence time for the hard channel case is approximately 15,000 symbols, while for the easy channel case the improvement in the convergence time is approximately 250 symbols. In both cases, the improvement in the convergence time is approximately one third of the convergence time of the original Maximum Entropy algorithm [1].

  • Based on [3], the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation based on approximating the convolutional noise pdf with the Maximum Entropy density approximation technique achieved, for the hard channel case, the same equalization performance from the residual ISI and convergence time point of view as was achieved with the original Maximum Entropy algorithm [1]. Thus, the improvement in the convergence time with our new proposed method compared with the algorithm in [3] is also approximately 15,000 symbols for the hard channel case.

  • The new proposed equalization method does not need additional predefined parameters (in addition to the predefined step-size parameter involved in the equalizer's coefficients update mechanism) in order to obtain an improved convergence time compared to the original Maximum Entropy algorithm [1], unlike the algorithm in [4], where the convolutional noise pdf was approximated with the Edgeworth Expansion series.

  • For the easy channel case and an SNR of 26 dB, the new proposed equalization method has improved equalization performance from the residual ISI and convergence time point of view compared to the recently proposed methods [2,5], which are versions of the original Maximum Entropy algorithm [1]. From the residual ISI point of view, the improvement is approximately 4 dB, while the improvement in the convergence time is approximately one third of the convergence time achieved by the equalization methods presented in [2,5].

The paper is organized as follows: after describing the system under consideration in Section 2, the systematic way of obtaining the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules, based on the GGD and Edgeworth Expansion series, is given in Section 3. In Section 4 we introduce our simulation results. Finally, the conclusion is presented in Section 5.

2. System Description

In the following (Figure 1), we recall the system under consideration used in [1,3,4], where we apply the same assumptions made in [1,3,4]:

  • The input sequence x[n] is a 16QAM source (a modulation using ± {1,3} levels for in-phase and quadrature components) which can be written as x[n]=x1[n]+jx2[n] where x1[n] and x2[n] are the real and imaginary parts of x[n], respectively. x1[n] and x2[n] are independent and E[x[n]]=0 (where E[·] denotes the expectation operator on (·)).

  • The unknown channel h[n] is a possibly non-minimum phase linear time-invariant filter in which the transfer function has no “deep zeros”; namely, the zeros lie sufficiently far from the unit circle.

  • The filter c[n] is a tap-delay line.

  • The channel noise w[n] is additive white Gaussian noise.

  • The function T[·] is a memoryless nonlinear function that satisfies the additivity condition:
    T[z1[n]+jz2[n]]=T[z1[n]]+jT[z2[n]], (1)
    where z1[n], z2[n] are the real and imaginary parts of the equalized output, respectively.

Figure 1. A block diagram for baseband communication transmission.

The input to the equalizer is given by:

y[n] = x[n] ∗ h[n] + w[n], (2)

where “∗” stands for the convolutional operation. Based on (2), the equalized output is obtained via:

z[n] = y[n] ∗ c[n] = x[n] ∗ s̃[n] + w̃[n] = x[n] + p[n] + w̃[n], (3)

where

s̃[n] = c[n] ∗ h[n] = δ[n] + ξ[n]; p[n] = x[n] ∗ ξ[n], (4)

where ξ[n] stands for the difference (error) between the ideal value and the value of c[n] used according to (6), δ[n] is the Kronecker delta function, w̃[n] = w[n] ∗ c[n] and p[n] is the convolutional noise. The ISI is expressed by:

ISI = \frac{\sum_{\tilde{m}}|\tilde{s}[\tilde{m}]|^2 - |\tilde{s}|_{max}^2}{|\tilde{s}|_{max}^2}, (5)

where |s̃|max is the component of s̃, given in (4), having the maximal absolute value. The function T[z[n]] is an estimate of x[n], where d[n] = T[z[n]]. The equalizer is updated according to:

\underline{c}[n+1] = \underline{c}[n] + \mu\big(T[z[n]] - z[n]\big)\underline{y}^*[n], (6)

where (·)* is the conjugate operation on (·), μ is the step-size parameter and c_[n] is the equalizer's tap vector, where the input vector is y_[n] = [y[n], …, y[n−N+1]]^T. The operator (·)^T denotes the transpose of (·), and N is the equalizer's tap length.
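The signal model (2)-(3) and the role of the convolutional noise p[n] can be illustrated with a minimal numerical sketch. The channel, equalizer and data lengths below are arbitrary illustrative choices, not values from the simulations of Section 4:

```python
import numpy as np

def qam16_source(n, rng):
    """Draw n 16QAM symbols: independent +/-1, +/-3 levels on the real and
    imaginary parts, zero mean, as assumed in Section 2."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    return rng.choice(levels, n) + 1j * rng.choice(levels, n)

def equalized_output(x, h, c, w=None):
    """Equations (2)-(3): y[n] = x[n]*h[n] + w[n], z[n] = y[n]*c[n],
    where '*' denotes linear convolution."""
    y = np.convolve(x, h)
    if w is not None:
        y = y + w
    return np.convolve(y, c)

rng = np.random.default_rng(0)
x = qam16_source(1000, rng)
# Ideal, noiseless cascade: h and c are unit impulses, so the combined
# response s~[n] is a unit impulse, z[n] = x[n], and the convolutional
# noise p = z - x of (14) vanishes.
z = equalized_output(x, np.array([1.0]), np.array([1.0]))
p = z - x
```

With any mismatch between c[n] and the ideal equalizer, s̃[n] is no longer a unit impulse and p[n] becomes nonzero.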

3. GGD Based Closed-Form Approximated Expression for the Conditional Expectation

In this section, we present a systematic approach for obtaining the conditional expectation (E[x[n]|z[n]]) based on approximating the convolutional noise pdf with the GGD [17,18] and Edgeworth Expansion [15,16] presentations. For simplicity, in the following we use x, y, p for x[n], y[n] and p[n], respectively.

Theorem 1.

For the noiseless and 16QAM input case, the closed-form approximated expression for the conditional expectation (E[x|z]) is given by:

E[x|z] \approx \frac{u_1}{f_1} + j\,\frac{u_2}{f_2}, (7)

where, for i = 1, 2, K = 4 and k = 2, 4, and with the shorthand

S_1 = \sum_{k=2}^{K} k z_i^{k-1}\lambda_k;\quad S_2 = \sum_{k=2}^{K} k(k-1) z_i^{k-2}\lambda_k;\quad S_3 = \sum_{k=2}^{K} k(k-1)(k-2) z_i^{k-3}\lambda_k;\quad S_4 = \sum_{k=2}^{K} k(k-1)(k-2)(k-3) z_i^{k-4}\lambda_k,

u_i = z_i + \frac{1}{2}\Big[2(3T-15V+1)S_1 + (3T-15V+1)z_i\big(S_1^2+S_2\big) - z_i\Big(\frac{12T}{\sigma_{p_i}^2}-\frac{90V}{\sigma_{p_i}^2}\Big)\Big]\big(\sigma_{z_i}^2-\sigma_{x_i}^2\big) + \frac{1}{8}\Big[4(3T-15V+1)\big(S_1^3+3S_1S_2+S_3\big) - 12S_1\Big(\frac{12T}{\sigma_{p_i}^2}-\frac{90V}{\sigma_{p_i}^2}\Big) + z_i\Big(\frac{24T}{\sigma_{p_i}^4}-\frac{360V}{\sigma_{p_i}^4}\Big) + (3T-15V+1)z_i\big(S_1^4+6S_1^2S_2+3S_2^2+4S_1S_3+S_4\big) - 6z_i\big(S_1^2+S_2\big)\Big(\frac{12T}{\sigma_{p_i}^2}-\frac{90V}{\sigma_{p_i}^2}\Big)\Big]\big(\sigma_{z_i}^2-\sigma_{x_i}^2\big)^2

f_i = 1 + \frac{1}{2}\big(S_1^2+S_2\big)\big(\sigma_{z_i}^2-\sigma_{x_i}^2\big) + \frac{1}{8}\big(S_1^4+6S_1^2S_2+3S_2^2+4S_1S_3+S_4\big)\big(\sigma_{z_i}^2-\sigma_{x_i}^2\big)^2, with \sigma_{p_i}^2 = \sigma_{z_i}^2-\sigma_{x_i}^2,

where

T = \frac{\Gamma(1/\beta)\,\Gamma^{-2}(3/\beta)\,\Gamma(5/\beta) - 3}{4!};\qquad V = \frac{\Gamma^{2}(1/\beta)\,\Gamma^{-3}(3/\beta)\,\Gamma(7/\beta) - 15\,\Gamma(1/\beta)\,\Gamma^{-2}(3/\beta)\,\Gamma(5/\beta) + 30}{6!}, (8)

and where Γ is the Gamma function and β is given by [20]:

\beta \approx 1.1938\times10^{-5}\,\mathrm{ISI}_{dB}^{4} - 7.3370\times10^{-4}\,\mathrm{ISI}_{dB}^{3} - 0.0146\,\mathrm{ISI}_{dB}^{2} - 0.0693\,\mathrm{ISI}_{dB} + 2.6266;\qquad \mathrm{ISI}_{dB} = 10\log_{10}\mathrm{ISI}. (9)
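A direct implementation of (9) is a one-line polynomial evaluation. Since some minus signs in (9) may have been lost in reproduction, the coefficient signs below are assumptions that should be checked against [20]; only the constant term (the value of β at ISI_dB = 0, i.e., ISI = 1) is unambiguous:

```python
import math

def beta_from_isi(isi):
    """Polynomial fit (9) for the GGD shape parameter as a function of the
    residual ISI.  NOTE: the signs of the four leading coefficients are
    assumptions taken from the reconstruction of (9); verify against [20]."""
    isi_db = 10.0 * math.log10(isi)   # ISI_dB = 10*log10(ISI)
    return (1.1938e-5 * isi_db**4
            - 7.3370e-4 * isi_db**3
            - 0.0146 * isi_db**2
            - 0.0693 * isi_db
            + 2.6266)
```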

In this work the ISI is expressed as:

ISI = \frac{\sigma_{p_1}^2}{\sigma_{x_1}^2} (10)

and the Lagrange multipliers for k=2,4 (λ2, λ4) are calculated according to [1]:

1 + 4\lambda_2 m_2 + 8\lambda_4 m_4 = 0;\qquad 3m_2 + 8\lambda_4 m_6 + 4\lambda_2 m_4 = 0, (11)

where

m_k = E\big[x_1^k\big]. (12)

Proof of Theorem 1. 

For the two independent quadrature carrier case where the 16QAM modulation is a special case of it, the conditional expectation (E[x|z]) can be given according to [9] as:

E[x|z]=E[x1|z1]+jE[x2|z2]. (13)

Thus, the real and imaginary parts of the data are to be estimated separately on the basis of the real and imaginary parts of the equalizer's output sequence. For the noiseless case, (3) may be written as:

p = z − x. (14)

In the following, we denote p1 and p2 as the real and imaginary parts of p. Based on (14) and under the assumption that the blind adaptive equalizer leaves the system with a relatively low residual ISI, for which the input signal x and the convolutional noise signal p can be considered as independent [8], we may write for the 16QAM modulation case:

\sigma_p^2 = \sigma_z^2 - \sigma_x^2 = 2\sigma_{p_1}^2 = 2\sigma_{p_2}^2 = 2\sigma_{z_1}^2 - 2\sigma_{x_1}^2 = 2\sigma_{z_2}^2 - 2\sigma_{x_2}^2 \;\Rightarrow\; \sigma_{p_1}^2 = \sigma_{z_1}^2 - \sigma_{x_1}^2. (15)

Based on (3), the variance of the real part of the equalized output signal σz12 can be written for the noiseless case as:

\sigma_{z_1}^2 = \sigma_{x_1}^2\sum_{\tilde{m}}|\tilde{s}[\tilde{m}]|^2. (16)

Next, based on (16), (15) and (5) we may write:

2\sigma_{p_1}^2 = 2\sigma_{x_1}^2\sum_{\tilde{m}}|\tilde{s}[\tilde{m}]|^2 - 2\sigma_{x_1}^2 = 2\sigma_{x_1}^2\Big(\sum_{\tilde{m}}|\tilde{s}[\tilde{m}]|^2 - 1\Big) \;\Rightarrow\; 2\sigma_{p_1}^2 = 2\sigma_{x_1}^2\,\mathrm{ISI}\ \text{for}\ |\tilde{s}|_{max} = 1 \;\Rightarrow\; \frac{\sigma_{p_1}^2}{\sigma_{x_1}^2} = \mathrm{ISI}\ \text{for}\ |\tilde{s}|_{max} = 1. (17)
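The variance relations (15)-(17) can be checked by simulation: filtering an independent 4-level sequence through a combined response s̃[n] should give σ_{z1}² = σ_{x1}²Σ|s̃[m̃]|², and for |s̃|max = 1 the excess variance corresponds to σ_{x1}²·ISI. The combined response used below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
x1 = rng.choice(levels, 200_000)            # real part of the source sequence
s_tilde = np.array([1.0, 0.3, -0.1])        # illustrative s~[n], |s~|max = 1

z1 = np.convolve(x1, s_tilde)[:len(x1)]
var_pred = x1.var() * np.sum(s_tilde**2)    # sigma_z1^2 per eq. (16)
var_emp = z1.var()                          # empirical output variance
# ISI of the combined response per eq. (5):
isi = (np.sum(s_tilde**2) - np.max(s_tilde**2)) / np.max(s_tilde**2)
```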

Next, we show the systematic approach for calculating the conditional expectation E[x1|z1]. The conditional expectation E[x1|z1] is defined by:

E[x_1|z_1] = \int_{-\infty}^{+\infty} x_1\, f_{x_1|z_1}(x_1|z_1)\, dx_1, (18)

where fx1|z1x1|z1 is the conditional pdf. Based on Bayes rules we may write:

f_{x_1|z_1}(x_1|z_1) = \frac{f_{z_1|x_1}(z_1|x_1)\, f_{x_1}(x_1)}{f_{z_1}(z_1)} = \frac{f_{z_1|x_1}(z_1|x_1)\, f_{x_1}(x_1)}{\int_{-\infty}^{+\infty} f_{z_1|x_1}(z_1|x_1)\, f_{x_1}(x_1)\, dx_1}. (19)

Now, by substituting (19) into (18) we obtain:

E[x_1|z_1] = \frac{\int_{-\infty}^{+\infty} x_1\, f_{z_1|x_1}(z_1|x_1)\, f_{x_1}(x_1)\, dx_1}{\int_{-\infty}^{+\infty} f_{z_1|x_1}(z_1|x_1)\, f_{x_1}(x_1)\, dx_1}. (20)

As was already mentioned earlier in this paper, we would like to use the GGD [17,18] presentation for approximating the real part of the convolutional noise pdf. Thus, based on the GGD [17,18] the real part of the convolutional noise pdf is approximately given by:

f_{p_1}(p_1) \approx \frac{1}{2\,\Gamma\big(1+\frac{1}{\beta}\big)\,B(\beta,\sigma)}\exp\Big(-\Big|\frac{p_1}{B(\beta,\sigma)}\Big|^{\beta}\Big), (21)

with

B(\beta,\sigma) = \Big(\sigma_{p_1}^2\,\frac{\Gamma(1/\beta)}{\Gamma(3/\beta)}\Big)^{\frac{1}{2}}, (22)

where β is defined as the shape parameter of the pdf presentation. Thus, based on [17,18], (21) and (14), the conditional pdf fz1|x1z1|x1 can be expressed by:

f_{z_1|x_1}(z_1|x_1) \approx \frac{1}{2\,\Gamma\big(1+\frac{1}{\beta}\big)\,B(\beta,\sigma)}\exp\Big(-\Big|\frac{z_1-x_1}{B(\beta,\sigma)}\Big|^{\beta}\Big). (23)
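The GGD model (21)-(22) is straightforward to evaluate numerically. The following sketch checks, by crude numerical integration with illustrative values of β and σ_{p1}², that the pdf integrates to one and indeed has variance σ_{p1}², which is what the normalization B(β,σ) is chosen to guarantee:

```python
import math

def ggd_pdf(p, beta, sigma2):
    """GGD approximation (21)-(22) for the convolutional noise pdf:
    f(p) ~ exp(-|p/B|^beta) / (2*Gamma(1 + 1/beta)*B),
    with B = sqrt(sigma2 * Gamma(1/beta) / Gamma(3/beta))."""
    B = math.sqrt(sigma2 * math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    return math.exp(-abs(p / B) ** beta) / (2.0 * math.gamma(1.0 + 1.0 / beta) * B)

# Crude Riemann-sum check of normalization and variance for a fractional beta.
beta, sigma2, dp = 1.5, 0.09, 1e-3
grid = [-8.0 + k * dp for k in range(int(16.0 / dp) + 1)]
mass = sum(ggd_pdf(p, beta, sigma2) for p in grid) * dp
var = sum(p * p * ggd_pdf(p, beta, sigma2) for p in grid) * dp
```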

Following [1,3,4], we use the Maximum Entropy density approximation technique [13,14] with Lagrange multipliers up to order four, for approximating the pdf of the real part input sequence:

f_{x_1}(x_1) \approx A\exp\big(\lambda_2 x_1^2 + \lambda_4 x_1^4\big), (24)

where λ2 and λ4 are the Lagrange multipliers and A is a constant. Next, by substituting (23) and (24) into (20), some problems are noticed in carrying out the integrals involved in (20) to achieve a closed-form approximated expression for the conditional expectation E[x1|z1], due to the fact that the shape parameter β changes during the iterative blind deconvolution process and may also take non-integer values. Thus, to overcome the problem, we apply the Edgeworth Expansion series [15,16] up to order six for approximating the real part of the convolutional noise pdf, where the higher moments of the convolutional noise sequence are calculated via the GGD [17,18] technique:

f_{p_1}(p_1) \approx \frac{\exp\big(-\frac{p_1^2}{2\sigma_{p_1}^2}\big)}{\sqrt{2\pi}\,\sigma_{p_1}}\Bigg[1 + \frac{E[p_1^4]-3(\sigma_{p_1}^2)^2}{4!\,(\sigma_{p_1}^2)^2}\Big(\frac{p_1^4}{(\sigma_{p_1}^2)^2}-\frac{6p_1^2}{\sigma_{p_1}^2}+3\Big) + \frac{E[p_1^6]-15\sigma_{p_1}^2E[p_1^4]+30(\sigma_{p_1}^2)^3}{6!\,(\sigma_{p_1}^2)^3}\Big(\frac{p_1^6}{(\sigma_{p_1}^2)^3}-\frac{15p_1^4}{(\sigma_{p_1}^2)^2}+\frac{45p_1^2}{\sigma_{p_1}^2}-15\Big)\Bigg],

with E[p_1^6] = \Big(\sigma_{p_1}^2\,\frac{\Gamma(1/\beta)}{\Gamma(3/\beta)}\Big)^{3}\frac{\Gamma(7/\beta)}{\Gamma(1/\beta)};\qquad E[p_1^4] = \Big(\sigma_{p_1}^2\,\frac{\Gamma(1/\beta)}{\Gamma(3/\beta)}\Big)^{2}\frac{\Gamma(5/\beta)}{\Gamma(1/\beta)}. (25)
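The GGD moment expressions in (25) and the resulting Edgeworth coefficients T and V of (8) can be sanity-checked at β = 2, where the GGD reduces to a Gaussian, so E[p⁴] = 3σ⁴, E[p⁶] = 15σ⁶ and both correction coefficients must vanish:

```python
import math

def ggd_moments(beta, sigma2):
    """E[p^4] and E[p^6] of the GGD noise model, as used in (25)."""
    g = math.gamma
    r = sigma2 * g(1.0 / beta) / g(3.0 / beta)
    return (r**2 * g(5.0 / beta) / g(1.0 / beta),
            r**3 * g(7.0 / beta) / g(1.0 / beta))

def edgeworth_T_V(beta):
    """Coefficients T and V of (8)."""
    g = math.gamma
    m4n = g(1.0 / beta) * g(5.0 / beta) / g(3.0 / beta) ** 2        # E[p^4]/sigma^4
    m6n = g(1.0 / beta) ** 2 * g(7.0 / beta) / g(3.0 / beta) ** 3   # E[p^6]/sigma^6
    return ((m4n - 3.0) / math.factorial(4),
            (m6n - 15.0 * m4n + 30.0) / math.factorial(6))

# Gaussian special case beta = 2 with sigma^2 = 0.25.
m4_g, m6_g = ggd_moments(2.0, 0.25)
T_g, V_g = edgeworth_T_V(2.0)
```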

Thus, based on the Edgeworth Expansion series technique [15,16] and (25) we have:

f_{z_1|x_1}(z_1|x_1) \approx \frac{\exp\big(-\frac{(z_1-x_1)^2}{2\sigma_{p_1}^2}\big)}{\sqrt{2\pi}\,\sigma_{p_1}}\Bigg[1 + \frac{E[p_1^4]-3(\sigma_{p_1}^2)^2}{4!\,(\sigma_{p_1}^2)^2}\Big(\frac{(z_1-x_1)^4}{(\sigma_{p_1}^2)^2}-\frac{6(z_1-x_1)^2}{\sigma_{p_1}^2}+3\Big) + \frac{E[p_1^6]-15\sigma_{p_1}^2E[p_1^4]+30(\sigma_{p_1}^2)^3}{6!\,(\sigma_{p_1}^2)^3}\Big(\frac{(z_1-x_1)^6}{(\sigma_{p_1}^2)^3}-\frac{15(z_1-x_1)^4}{(\sigma_{p_1}^2)^2}+\frac{45(z_1-x_1)^2}{\sigma_{p_1}^2}-15\Big)\Bigg] (26)

with Ep16 and Ep14 given in (25). Now, substituting (26) and (24) into (20) yields:

E[x_1|z_1] \approx \frac{\int_{-\infty}^{\infty} g_1(x_1)\exp\big(-\Psi(x_1)/\rho\big)\, dx_1}{\int_{-\infty}^{\infty} g(x_1)\exp\big(-\Psi(x_1)/\rho\big)\, dx_1}, (27)

where

\rho = 2\sigma_{p_1}^2;\quad \Psi(x_1) = (z_1-x_1)^2;\quad g_1(x_1) = x_1 g(x_1);

g(x_1) = \tilde{g}(x_1)\Bigg[1 + \frac{E[p_1^4]-3(\sigma_{p_1}^2)^2}{4!\,(\sigma_{p_1}^2)^2}\Big(\frac{(z_1-x_1)^4}{(\sigma_{p_1}^2)^2}-\frac{6(z_1-x_1)^2}{\sigma_{p_1}^2}+3\Big) + \frac{E[p_1^6]-15\sigma_{p_1}^2E[p_1^4]+30(\sigma_{p_1}^2)^3}{6!\,(\sigma_{p_1}^2)^3}\Big(\frac{(z_1-x_1)^6}{(\sigma_{p_1}^2)^3}-\frac{15(z_1-x_1)^4}{(\sigma_{p_1}^2)^2}+\frac{45(z_1-x_1)^2}{\sigma_{p_1}^2}-15\Big)\Bigg];

\tilde{g}(x_1) = \exp\big(\lambda_2 x_1^2 + \lambda_4 x_1^4\big). (28)

In order to obtain closed-form expressions for the integrals involved in (27), Laplace's method [21] is applied, as was also done in [1,3,4]. According to [21], Laplace's method is a general technique for obtaining the asymptotic behavior as ρ→0 of integrals in which the large parameter 1/ρ appears in the exponent. The main idea of Laplace's method is: if the continuous function Ψ(x1) has its minimum at x0, which lies between minus infinity and infinity, then it is only the immediate neighborhood of x1 = x0 that contributes to the full asymptotic expansion of the integral for large 1/ρ. Thus, according to [1,3,4,21]:

\int_{-\infty}^{\infty} g(x_1)\exp\big(-\Psi(x_1)/\rho\big)\, dx_1 \approx \exp\Big(-\frac{\Psi(x_0)}{\rho}\Big)\sqrt{\frac{2\pi\rho}{\Psi''(x_0)}}\Bigg[g(x_0) + \frac{g''(x_0)}{2}\frac{\rho}{\Psi''(x_0)} + \frac{g''''(x_0)}{8}\Big(\frac{\rho}{\Psi''(x_0)}\Big)^2 + O\Big(\frac{\rho}{\Psi''(x_0)}\Big)^3\Bigg], (29)
\int_{-\infty}^{\infty} g_1(x_1)\exp\big(-\Psi(x_1)/\rho\big)\, dx_1 \approx \exp\Big(-\frac{\Psi(x_0)}{\rho}\Big)\sqrt{\frac{2\pi\rho}{\Psi''(x_0)}}\Bigg[g_1(x_0) + \frac{g_1''(x_0)}{2}\frac{\rho}{\Psi''(x_0)} + \frac{g_1''''(x_0)}{8}\Big(\frac{\rho}{\Psi''(x_0)}\Big)^2 + O\Big(\frac{\rho}{\Psi''(x_0)}\Big)^3\Bigg], (30)

where (·)″ and (·)⁗ stand for the second and fourth derivative of (·), respectively, and O(x) is defined such that lim_{x→0} O(x)/x = r_const, where r_const is a constant. The expressions for Ψ″(x0) and x0 are given by:

\Psi'(x_1) = -2(z_1-x_1);\quad \Psi''(x_1) = 2 \;\Rightarrow\; \Psi''(x_0) = 2;\quad \Psi'(x_0) = -2(z_1-x_0) = 0 \;\Rightarrow\; x_0 = z_1. (31)
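The Laplace expansion (29) with Ψ(x1) = (z1−x1)² can be verified on a test function with a known closed-form integral. For g(x) = cos(x) (an arbitrary smooth test function, not the g of (28)), ∫cos(x)·exp(−(z−x)²/ρ)dx = √(πρ)·cos(z)·exp(−ρ/4), and the bracketed series reproduces this to O(ρ³):

```python
import math

def laplace_rhs(g, g2, g4, z, rho):
    """Right-hand side of (29) for Psi(x) = (z - x)^2, i.e. Psi'' = 2, x0 = z:
    sqrt(2*pi*rho/Psi'') * [g + (g''/2)(rho/Psi'') + (g''''/8)(rho/Psi'')^2]."""
    r = rho / 2.0                       # rho / Psi''(x0)
    return math.sqrt(math.pi * rho) * (g(z) + 0.5 * g2(z) * r + 0.125 * g4(z) * r * r)

# For g = cos: g'' = -cos, g'''' = cos, and the exact Gaussian-weighted
# integral is sqrt(pi*rho)*cos(z)*exp(-rho/4).
z, rho = 0.7, 0.01
approx = laplace_rhs(math.cos, lambda x: -math.cos(x), math.cos, z, rho)
exact = math.sqrt(math.pi * rho) * math.cos(z) * math.exp(-rho / 4.0)
```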

Now, by substituting (29) and (30) into (27), dividing both the numerator and the denominator by the function g̃(z1) given in (28) (with z1 instead of x1), and using x0 = z1 and Ψ″(x0) = 2 from (31), ρ = 2σ_{p1}² from (28) and σ_{p1}² = σ_{z1}² − σ_{x1}² from (15), we obtain:

E[x_1|z_1] \approx \frac{E[x_1|z_1]_{up}}{E[x_1|z_1]_{down}} (32)
E[x_1|z_1]_{up} = z_1 + \frac{g_1''(z_1)}{2\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big) + \frac{g_1''''(z_1)}{8\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big)^2;\qquad E[x_1|z_1]_{down} = 1 + 3T - 15V + \frac{g''(z_1)}{2\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big) + \frac{g''''(z_1)}{8\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big)^2. (33)

Next, in order to reduce the computational complexity, we notice that the denominator of (32) (Ex1|z1down from (33)) can be approximated by:

E[x_1|z_1]_{down} \approx 1 + \frac{\tilde{g}''(z_1)}{2\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big) + \frac{\tilde{g}''''(z_1)}{8\tilde{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big)^2, (34)

where g̃″(z1) and g̃⁗(z1) are the second and fourth derivatives of g̃(z1), respectively. Please note that (34) is valid for the Gaussian convolutional noise pdf case. By using (32) with E[x1|z1]_down and E[x1|z1]_up from (34) and (33), respectively, and the following derivatives:

k = 2,4;\; K = 4:
\tilde{g}'(z_1) = \tilde{g}(z_1)\sum_{k=2}^{K} k z_1^{k-1}\lambda_k
\tilde{g}''(z_1) = \tilde{g}(z_1)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big)^2 + \tilde{g}(z_1)\sum_{k=2}^{K} k(k-1) z_1^{k-2}\lambda_k
\tilde{g}'''(z_1) = \tilde{g}(z_1)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big)^3 + 3\tilde{g}(z_1)\Big(\sum_{k=2}^{K} k(k-1) z_1^{k-2}\lambda_k\Big)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big) + \tilde{g}(z_1)\sum_{k=2}^{K} k(k-1)(k-2) z_1^{k-3}\lambda_k
\tilde{g}''''(z_1) = 3\tilde{g}(z_1)\Big(\sum_{k=2}^{K} k(k-1) z_1^{k-2}\lambda_k\Big)^2 + 6\tilde{g}(z_1)\Big(\sum_{k=2}^{K} k(k-1) z_1^{k-2}\lambda_k\Big)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big)^2 + \tilde{g}(z_1)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big)^4 + 4\tilde{g}(z_1)\Big(\sum_{k=2}^{K} k(k-1)(k-2) z_1^{k-3}\lambda_k\Big)\Big(\sum_{k=2}^{K} k z_1^{k-1}\lambda_k\Big) + \tilde{g}(z_1)\sum_{k=2}^{K} k(k-1)(k-2)(k-3) z_1^{k-4}\lambda_k (35)
g_1''(z_1) = 2\tilde{g}'(z_1)(3T-15V+1) + z_1\tilde{g}''(z_1)(3T-15V+1) - z_1\tilde{g}(z_1)\Big(\frac{12T}{\sigma_{p_1}^2}-\frac{90V}{\sigma_{p_1}^2}\Big)
g_1''''(z_1) = 4\tilde{g}'''(z_1)(3T-15V+1) - 12\tilde{g}'(z_1)\Big(\frac{12T}{\sigma_{p_1}^2}-\frac{90V}{\sigma_{p_1}^2}\Big) + z_1\tilde{g}(z_1)\Big(\frac{24T}{\sigma_{p_1}^4}-\frac{360V}{\sigma_{p_1}^4}\Big) + z_1\tilde{g}''''(z_1)(3T-15V+1) - 6z_1\tilde{g}''(z_1)\Big(\frac{12T}{\sigma_{p_1}^2}-\frac{90V}{\sigma_{p_1}^2}\Big), (36)

the expression of u1/f1 from (7) is obtained. Now, by using (13), the expression from (7) is obtained. □
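For the 16QAM alphabet, the real part takes the values ±1, ±3 with equal probability, so the moments (12) are m2 = 5, m4 = 41 and m6 = 365. As written, (11) is linear in the pair (λ2, λ4), so the Lagrange multipliers can be obtained with a single 2×2 solve; the sketch below assumes exactly the system stated in (11):

```python
import numpy as np

# Real part of a 16QAM symbol: levels +/-1, +/-3 with equal probability,
# so the moments m_k = E[x_1^k] of (12) are:
m2 = (1**2 + 3**2) / 2.0    # 5
m4 = (1**4 + 3**4) / 2.0    # 41
m6 = (1**6 + 3**6) / 2.0    # 365

# (11) rearranged as a linear system in (lambda_2, lambda_4):
#   4*m2*lambda2 + 8*m4*lambda4 = -1
#   4*m4*lambda2 + 8*m6*lambda4 = -3*m2
A = np.array([[4 * m2, 8 * m4],
              [4 * m4, 8 * m6]])
b = np.array([-1.0, -3.0 * m2])
lam2, lam4 = np.linalg.solve(A, b)
```

The solve yields λ2 > 0 and λ4 < 0, consistent with a sub-Gaussian source model exp(λ2x² + λ4x⁴) peaked away from the origin.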

4. Simulation

In this section, we use the 16QAM input case with two different channels to show, via simulation results, the usefulness of our new proposed model for the convolutional noise pdf based on the GGD [17,18] and Edgeworth Expansion [15,16] compared to the Gaussian case. For the equalization performance comparison, we use the MaxEnt algorithm [1], where the conditional expectation is derived by assuming the Gaussian model for the convolutional noise pdf and the source pdf is approximated with the Maximum Entropy density approximation technique [13,14], as is done with our new proposed equalization method. Thus, the difference between the two approximated expressions for the conditional expectation ([1] and (7)) is only due to the different model used for the convolutional noise pdf. In addition, we use for the equalization performance comparison two additional equalization methods [2,5], which we name MaxEntBNEW and MaxEntANEW, respectively. These methods ([2,5]) are versions of the original MaxEnt algorithm [1], where the convolutional noise pdf was also approximated with the Gaussian model.

The equalizer’s taps for the Maximum Entropy algorithm (MaxEnt) [1] were updated according to:

c_l[n+1] = c_l[n] + \mu_{me} W\, y^*[n-l], (37)

with:

W = \big(E[x_1|z_1] - z_1[n]\big) + j\big(E[x_2|z_2] - z_2[n]\big), (38)

where μme is a positive step-size parameter and

E[x_1|z_1] = \frac{z_1 + \frac{\hat{g}_1''(z_1)}{2\hat{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big) + \frac{\hat{g}_1^{(4)}(z_1)}{8\hat{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big)^2}{1 + \frac{\hat{g}''(z_1)}{2\hat{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big) + \frac{\hat{g}^{(4)}(z_1)}{8\hat{g}(z_1)}\big(\sigma_{z_1}^2-\sigma_{x_1}^2\big)^2};\qquad E[x_2|z_2] = \frac{z_2 + \frac{\hat{g}_1''(z_2)}{2\hat{g}(z_2)}\big(\sigma_{z_2}^2-\sigma_{x_2}^2\big) + \frac{\hat{g}_1^{(4)}(z_2)}{8\hat{g}(z_2)}\big(\sigma_{z_2}^2-\sigma_{x_2}^2\big)^2}{1 + \frac{\hat{g}''(z_2)}{2\hat{g}(z_2)}\big(\sigma_{z_2}^2-\sigma_{x_2}^2\big) + \frac{\hat{g}^{(4)}(z_2)}{8\hat{g}(z_2)}\big(\sigma_{z_2}^2-\sigma_{x_2}^2\big)^2}, (39)

where:

k = 2,4;\; K = 4;\; s = 1,2:
\hat{g}(z_s) = \exp\Big(\sum_{k=2}^{K}\lambda_k x_s^k\Big)\Big|_{x_s=z_s};\qquad \hat{g}''(z_s) = \frac{d^2}{dx_s^2}\exp\Big(\sum_{k=2}^{K}\lambda_k x_s^k\Big)\Big|_{x_s=z_s};\qquad \hat{g}^{(4)}(z_s) = \frac{d^4}{dx_s^4}\exp\Big(\sum_{k=2}^{K}\lambda_k x_s^k\Big)\Big|_{x_s=z_s};
\hat{g}_1''(z_s) = \frac{d^2}{dx_s^2}\Big[x_s\exp\Big(\sum_{k=2}^{K}\lambda_k x_s^k\Big)\Big]\Big|_{x_s=z_s};\qquad \hat{g}_1^{(4)}(z_s) = \frac{d^4}{dx_s^4}\Big[x_s\exp\Big(\sum_{k=2}^{K}\lambda_k x_s^k\Big)\Big]\Big|_{x_s=z_s} (40)

and σx12,σx22 are the variances of the real and imaginary parts of the source signal respectively. The variances of the real and imaginary parts of the equalized output are defined as σz12 and σz22 respectively and estimated by [1]:

\langle z_s^2\rangle[n] = (1-\beta_{me})\langle z_s^2\rangle[n-1] + \beta_{me}\big(z_s[n]\big)^2, (41)

where ⟨·⟩ stands for the estimated expectation, ⟨z_s²⟩[0] > 0, l stands for the l-th tap of the equalizer and β_me is a positive step-size parameter. The Lagrange multipliers λk from (40) are given in (11). According to [1], the equalizer's taps are updated only if N̂_s > ε, where ε is a small positive parameter and \hat{N}_s = 1 + \frac{\hat{g}''(z_s)}{2\hat{g}(z_s)}\big(\sigma_{z_s}^2-\sigma_{x_s}^2\big) + \frac{\hat{g}^{(4)}(z_s)}{8\hat{g}(z_s)}\big(\sigma_{z_s}^2-\sigma_{x_s}^2\big)^2. In the following, we denote our new proposed equalization method based on the GGD [17,18] as GGD, where the equalizer's taps are updated according to:

c_l[n+1] = c_l[n] + \mu W\, y^*[n-l], (42)

where μ is a positive step size parameter and W is given in (38) with:

E[x_1|z_1] = \frac{u_1}{f_1};\qquad E[x_2|z_2] = \frac{u_2}{f_2}, (43)

where u1f1 and u2f2 are given in (7). The variances of the real and imaginary parts of the convolutional noise (σp12 and σp22) are given by:

s = 1,2:\quad \sigma_{p_s}^2 = \sigma_{z_s}^2 - \sigma_{x_s}^2;\qquad \langle z_s^2\rangle[n] = (1-\beta)\langle z_s^2\rangle[n-1] + \beta\big(z_s[n]\big)^2, (44)

where β is a positive step size parameter. It should be pointed out that the equalizer’s taps related to the GGD algorithm are updated only when f1>ε and f2>ε similar to the MaxEnt algorithm. The equalizer’s taps related to the MaxEntANEW algorithm are updated according to [5]:

\tilde{c}_l[n+1] = c_l[n] + \mu_{ANEW} W\, y^*[n-l], (45)

where μANEW is a positive step size parameter and W is given in (38) with:

E[x_1|z_1] \approx \frac{z_1 + \frac{\sigma_{p_1}^2}{2}\frac{g_1''(z_1)}{g(z_1)} + \frac{(\sigma_{p_1}^2)^2}{8}\frac{g_1''''(z_1)}{g(z_1)}}{1 + \big(\varepsilon_{01}+\varepsilon_{21}z_1^2+\varepsilon_{41}z_1^4\big) + \frac{1}{2}\big(\varepsilon_{01}+\varepsilon_{21}z_1^2+\varepsilon_{41}z_1^4\big)^2};\qquad E[x_2|z_2] \approx \frac{z_2 + \frac{\sigma_{p_2}^2}{2}\frac{g_1''(z_2)}{g(z_2)} + \frac{(\sigma_{p_2}^2)^2}{8}\frac{g_1''''(z_2)}{g(z_2)}}{1 + \big(\varepsilon_{02}+\varepsilon_{22}z_2^2+\varepsilon_{42}z_2^4\big) + \frac{1}{2}\big(\varepsilon_{02}+\varepsilon_{22}z_2^2+\varepsilon_{42}z_2^4\big)^2},

where, for s = 1,2:

\frac{g_1''(z_s)}{g(z_s)} = 2z_s\big(8z_s^6\lambda_4^2 + 8z_s^4\lambda_2\lambda_4 + 2z_s^2\lambda_2^2 + 10z_s^2\lambda_4 + 3\lambda_2\big);

\frac{g_1''''(z_s)}{g(z_s)} = 4z_s\big(64z_s^{12}\lambda_4^4 + 128z_s^{10}\lambda_2\lambda_4^3 + 96z_s^8\lambda_2^2\lambda_4^2 + 352z_s^8\lambda_4^3 + 32z_s^6\lambda_2^3\lambda_4 + 432z_s^6\lambda_2\lambda_4^2 + 4z_s^4\lambda_2^4 + 168z_s^4\lambda_2^2\lambda_4 + 348z_s^4\lambda_4^2 + 20z_s^2\lambda_2^3 + 180z_s^2\lambda_2\lambda_4 + 15\lambda_2^2 + 30\lambda_4\big);

\sigma_{p_s}^2 = \sigma_{z_s}^2 - \sigma_{x_s}^2 (46)

and

σxs2=E[xs2]. (47)

According to [5]:

σzs2=E[zs2] (48)

and given by:

\langle z_s^2\rangle[n] = (1-\beta_{ANEW})\langle z_s^2\rangle[n-1] + \beta_{ANEW}\big(z_s[n]\big)^2, (49)

where ⟨z_s²⟩[0] > 0, and β_ANEW and μ_ANEW are positive step-size parameters. ε_{0s}, ε_{2s}, ε_{4s}, λ2 and λ4 were set according to [5] as

\varepsilon_{0s} = 2\lambda_2\sigma_{p_s}^2;\qquad \varepsilon_{2s} = \sigma_{p_s}^2\big(4\lambda_2^2 + 12\lambda_4\big);\qquad \varepsilon_{4s} = 16\lambda_2\lambda_4\sigma_{p_s}^2 (50)
\lambda_2 \approx \frac{1}{4\bar{m}_2}\cdot\frac{20736\bar{m}_4^2 + 1280\bar{m}_2\bar{m}_6}{41472\bar{m}_4^2 + 2560\bar{m}_2\bar{m}_6 - 144\bar{m}_4\big(480\bar{m}_2^2 + 288\bar{m}_4\big)};\qquad \lambda_4 \approx \frac{480\bar{m}_2^2 + 288\bar{m}_4}{20736\bar{m}_4^2 + 1280\bar{m}_2\bar{m}_6}, (51)

where

E\big[x_1^G\big] = \bar{m}_G. (52)

In order to get equalization gain of one, the following gain control was used according to [5]:

c_l[n] = \frac{\tilde{c}_l}{\sum_l|\tilde{c}_l|^2}, (53)

where c_l[n] is the l-th tap after the normalization and c_l[0] is some reasonable initial guess. The equalizer's taps related to the MaxEntBNEW algorithm are updated according to [2]:

\tilde{c}_l[n+1] = c_l[n] + \mu_{BNEW} W\, y^*[n-l], (54)

where μBNEW is a positive step size parameter and W is given in (38) with:

E[x_1|z_1] = \frac{z_1 + \frac{\hat{g}_1''(z_1)}{2\hat{g}(z_1)}\sigma_{p_1}^2}{1 + \frac{\hat{g}''(z_1)}{2\hat{g}(z_1)}\sigma_{p_1}^2};\qquad E[x_2|z_2] = \frac{z_2 + \frac{\hat{g}_1''(z_2)}{2\hat{g}(z_2)}\sigma_{p_2}^2}{1 + \frac{\hat{g}''(z_2)}{2\hat{g}(z_2)}\sigma_{p_2}^2},

where, for s = 1,2:

\frac{\hat{g}_1''(z_s)}{2\hat{g}(z_s)} = z_s\big(8z_s^6\lambda_4^2 + 8z_s^4\lambda_2\lambda_4 + 2z_s^2\lambda_2^2 + 10z_s^2\lambda_4 + 3\lambda_2\big);\qquad \frac{\hat{g}''(z_s)}{2\hat{g}(z_s)} = 8z_s^6\lambda_4^2 + 8z_s^4\lambda_2\lambda_4 + 2z_s^2\lambda_2^2 + 6z_s^2\lambda_4 + \lambda_2;\qquad \sigma_{p_s}^2 = \sigma_{z_s}^2 - \sigma_{x_s}^2 (55)
\lambda_2 = \frac{1}{4\hat{m}_2}\cdot\frac{64\hat{m}_4^2 - 64\hat{m}_2\hat{m}_6}{64\hat{m}_2\hat{m}_6 - 64\hat{m}_4^2 + 8\hat{m}_4\big(8\hat{m}_4 - 24\hat{m}_2^2\big)};\qquad \lambda_4 = \frac{8\hat{m}_4 - 24\hat{m}_2^2}{64\hat{m}_4^2 - 64\hat{m}_2\hat{m}_6},

with

\hat{m}_2 = \bar{m}_2\Big(1 + \frac{1}{SNR\sum_{k=0}^{R-1}h_k^2}\Big);\qquad \hat{m}_4 = \bar{m}_2^2\Bigg(\frac{3}{\big(SNR\sum_{k=0}^{R-1}h_k^2\big)^2} + \frac{6}{SNR\sum_{k=0}^{R-1}h_k^2} + \frac{\bar{m}_4}{\bar{m}_2^2}\Bigg);

\hat{m}_6 = \bar{m}_2^3\Bigg(\frac{15}{\big(SNR\sum_{k=0}^{R-1}h_k^2\big)^3} + \frac{45}{\big(SNR\sum_{k=0}^{R-1}h_k^2\big)^2} + \frac{15}{SNR\sum_{k=0}^{R-1}h_k^2}\cdot\frac{\bar{m}_4}{\bar{m}_2^2} + \frac{\bar{m}_6}{\bar{m}_2^3}\Bigg), (56)

where

SNR = \frac{\bar{m}_2}{\sigma_{w_r}^2}. (57)

σzs2 was estimated by

\langle z_s^2\rangle[n] = (1-\beta_{BNEW})\langle z_s^2\rangle[n-1] + \beta_{BNEW}\big(z_s[n]\big)^2, (58)

where ⟨z_s²⟩[0] > 0, and β_BNEW and μ_BNEW are positive step-size parameters. The equalizer's taps in (54) were updated only if N̂_s > ε1, where ε1 is a small positive parameter and

\hat{N}_s = 1 + \frac{\hat{g}''(z_s)}{2\hat{g}(z_s)}\sigma_{p_s}^2. (59)

In addition, the gain control was applied according to (53).
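All four algorithms estimate the second moment of the equalized output with the same exponential-smoothing recursion ((41), (44), (49) and (58)), which can be sketched as follows (the input sequence and the step size below are illustrative):

```python
def smoothed_second_moment(z_seq, beta_step, init=1.0):
    """Exponentially smoothed estimate of E[z_s^2], the common recursion of
    (41), (44), (49) and (58): <z^2>[n] = (1 - b)*<z^2>[n-1] + b*z[n]^2."""
    est = init                  # <z^2>[0] > 0
    for z in z_seq:
        est = (1.0 - beta_step) * est + beta_step * z * z
    return est

# For a constant input the estimate converges to the squared value.
est = smoothed_second_moment([3.0] * 2000, beta_step=0.01)
```

The step size trades tracking speed against estimation noise, which is why each algorithm uses its own predefined value (β_me, β, β_ANEW, β_BNEW).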

Two different channels were considered:

  • Easy channel case, Channel1 (initial ISI = 0.44): the channel parameters were determined according to [6]: h_n = 0 for n < 0; h_0 = −0.4; h_n = 0.84·0.4^{n−1} for n > 0.

  • Hard channel case, Channel2 (initial ISI = 1.402): the channel parameters were taken according to [22]: h_n = (0.2258, 0.5161, 0.6452, 0.5161).
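The quoted initial ISI values follow from (5) applied to the raw channel taps (i.e., with the equalizer initialized to a unit impulse). The sketch below reproduces the quoted 1.402 for Channel2; for Channel1 the infinite response must be truncated (the truncation length and the sign of the first tap are our assumptions), which lands near the quoted 0.44:

```python
def initial_isi(h):
    """Equation (5) applied to the raw channel taps:
    ISI = (sum |h|^2 - max |h|^2) / max |h|^2."""
    sq = [abs(v) ** 2 for v in h]
    peak = max(sq)
    return (sum(sq) - peak) / peak

# Hard channel (Channel2), taps from [22]:
isi2 = initial_isi([0.2258, 0.5161, 0.6452, 0.5161])

# Easy channel (Channel1): response truncated to 40 taps; the -0.4 first
# tap and the truncation length are assumptions for illustration.
h1 = [-0.4] + [0.84 * 0.4 ** (n - 1) for n in range(1, 40)]
isi1 = initial_isi(h1)
```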

For Channel1 and Channel2, we used an equalizer with 13 and 21 taps, respectively. In the simulation, the equalizers were initialized by setting the center tap equal to one and all others to zero [1]. The step size parameters μ, β, μme and βme were chosen for fast convergence with a low steady-state ISI, where the values for μme and βme were taken from [1]. Figure 2 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel1 case for a signal-to-noise ratio (SNR) of 26 dB according to [1].

Figure 2. Equalization performance comparison between the GGD and MaxEnt methods for a 16QAM input going through channel1. The averaged results were obtained from 100 Monte Carlo trials for an SNR of 26 dB. The step size parameters were set to μ = 6×10⁻⁴, β = 1×10⁻⁴, μme = 3×10⁻⁴, βme = 2×10⁻⁴. In addition we set ε = 0.

Please note that the main purpose of a blind adaptive equalizer is to reach, as fast as possible, a residual ISI that is low enough for sending the equalized output sequence to the decision device in order to get reliable decisions on the input data. Reliable decisions can be made on the equalized output sequence when the equalizer leaves the system with a residual ISI that is lower than −16 dB. According to Figure 2, the new algorithm (GGD) achieves the residual ISI of −16 dB faster than the MaxEnt algorithm [1]. Thus, the GGD has a faster convergence rate compared to the MaxEnt [1] method, which means that the equalized output sequence can be sent earlier to the decision device with the GGD algorithm than with the MaxEnt method [1]. Figure 3 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel2 case for an SNR of 30 dB according to [1].

Figure 3. Equalization performance comparison between the GGD and MaxEnt methods for a 16QAM input going through channel2. The averaged results were obtained from 50 Monte Carlo trials for an SNR of 30 dB. The step size parameters were set to μ = 3×10⁻⁴, β = 2×10⁻⁶, μme = 2×10⁻⁴, βme = 2×10⁻⁶. In addition we set ε = 0.5.

According to Figure 3, the GGD algorithm reaches the residual ISI of −16 dB approximately 15,000 symbols faster than the MaxEnt [1] algorithm does, while leaving the system with approximately the same residual ISI at the convergence state as the MaxEnt [1] method.

It should be pointed out that the equalization performance obtained with the GGD algorithm is very similar to that obtained in [4], where the Edgeworth Expansion up to order six was used for approximating the convolutional noise pdf. However, in [4], two additional predefined step-size parameters were needed in the deconvolution process. Those step-size parameters are channel dependent and are not needed in the GGD algorithm. Thus, the GGD algorithm is preferable over the algorithm proposed in [4]. The GGD algorithm also has improved equalization performance for the hard channel case (Channel2) compared to the equalization method proposed in [3], where the Maximum Entropy density approximation technique [13,14] was used for approximating the convolutional noise pdf with Lagrange multipliers up to order four. Please note that according to [3], the MaxEnt method [1] and the equalization algorithm proposed in [3] have the same equalization performance for the hard channel case (Channel2). Figure 4 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], the MaxEntANEW method [5] and the MaxEntBNEW method [2], for the 16QAM input and Channel1 case for an SNR of 26 dB. According to Figure 4, the GGD algorithm has improved equalization performance from the residual ISI and convergence time point of view compared to the MaxEntANEW [5] and MaxEntBNEW [2] methods. From the residual ISI point of view, the improvement is approximately 4 dB, while the improvement in the convergence time is approximately one third of the convergence time achieved by the equalization methods presented in [2,5].

Figure 4. Equalization performance comparison between the GGD, MaxEnt, MaxEntANEW and MaxEntBNEW methods for a 16QAM input going through channel1. The averaged results were obtained from 100 Monte Carlo trials for an SNR of 26 dB. The step size parameters were set to μ = 6×10⁻⁴, β = 1×10⁻⁴, μme = 3×10⁻⁴, βme = 2×10⁻⁴, μANEW = 3×10⁻⁴, βANEW = 2×10⁻⁵, μBNEW = 3×10⁻⁴, βBNEW = 2×10⁻⁴. In addition we set ε = 0, ε1 = 0.5.

Although the GGD algorithm was obtained for the 16QAM constellation input, it can be extended to other two independent quadrature carrier inputs with Lagrange multipliers up to order four by using another function for β (9) that fits the new input constellation. In addition, if more than four Lagrange multipliers are needed for properly approximating the input sequence pdf, (7) should be used with k = 2, 4, 6, …, K, and the Lagrange multipliers should be calculated as given in [1] for the general order case.

5. Conclusions

In this paper, the blind adaptive deconvolution problem was considered, where the GGD function and the Edgeworth Expansion up to order six were applied for approximating the convolutional noise pdf for the 16 QAM input case. A new closed-form approximated expression was derived for the conditional expectation, which led to a new blind adaptive equalization method. This new proposed algorithm does not need additional channel-dependent predefined parameters, unlike the blind adaptive equalization method known from the literature that is based on the conditional expectation expression where the convolutional noise pdf was approximated with the Edgeworth Expansion up to order six. Simulation results demonstrated that improved equalization performance is obtained with our new proposed equalization method based on the new model for the convolutional noise pdf compared to the original Maximum Entropy algorithm and to the two recently obtained versions of the original Maximum Entropy algorithm for the easy channel and high SNR case. Since the original Maximum Entropy algorithm has the same equalization performance for the hard channel case as the equalization method based on the conditional expectation expression where the convolutional noise pdf was approximated with the Maximum Entropy density technique, the new proposed method also has improved equalization performance for the hard channel case compared with that equalization method. This paper demonstrated that improved equalization performance can be obtained if a proper approximation is applied for the convolutional noise pdf in the calculation of the expression for the conditional expectation via Bayes rules. The new proposed algorithm is valid only for the high SNR case, due to the fact that the noise was not taken into account in our derivations.
Please note that the original Maximum Entropy algorithm and the two equalization methods based on the conditional expectation via Bayes rules, where the convolutional noise pdf was approximated with the Maximum Entropy density technique and with the Edgeworth Expansion approach, are valid also only for the high SNR case.

Acknowledgments

We would like to thank the anonymous reviewers for their helpful comments.

Author Contributions

Conceptualization, M.P. and S.S.; methodology, M.P.; software, S.S.; validation, M.P. and S.S.; formal analysis, M.P. and S.S.; writing—original draft preparation, M.P. and S.S.; writing—review and editing, M.P.; visualization, M.P.; supervision, M.P.; project administration, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the required data is given in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Pinchas M., Bobrovsky B.Z. A Maximum Entropy approach for blind deconvolution. Signal Process. (Eurasip) 2006;86:2913–2931. doi: 10.1016/j.sigpro.2005.12.009. [DOI] [Google Scholar]
  • 2.Pinchas M. A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio. Entropy. 2019;21:72. doi: 10.3390/e21010072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Freiman A., Pinchas M. A Maximum Entropy inspired model for the convolutional noise PDF. Digit. Signal Process. 2015;39:35–49. doi: 10.1016/j.dsp.2014.12.011. [DOI] [Google Scholar]
  • 4.Rivlin Y., Pinchas M. Edgeworth Expansion Based Model for the Convolutional Noise pdf. Math. Probl. Eng. 2014 doi: 10.1155/2014/951927. [DOI] [Google Scholar]
  • 5.Pinchas M. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case. Entropy. 2016;18:65. doi: 10.3390/e18030065. [DOI] [Google Scholar]
  • 6.Shalvi O., Weinstein E. New criteria for blind deconvolution of nonminimum phase systems (channels). IEEE Trans. Inf. Theory. 1990;36:312–321. doi: 10.1109/18.52478. [DOI] [Google Scholar]
  • 7.Pinchas M., Bobrovsky B.Z. A Novel HOS Approach for Blind Channel Equalization. IEEE Wireless Commun. J. 2007;6:875–886. doi: 10.1109/TWC.2007.04404. [DOI] [Google Scholar]
  • 8.Haykin S. Blind Deconvolution. In: Haykin S., editor. Adaptive Filter Theory. Chapter 20. Prentice-Hall; Englewood Cliffs, NJ, USA: 1991. [Google Scholar]
  • 9.Bellini S. Bussgang techniques for blind equalization. IEEE Global Telecommun. Conf. Record. 1986;3:1634–1640. [Google Scholar]
  • 10.Bellini S. Blind Equalization. Alta Frequenza. 1988;57:445–450. [Google Scholar]
  • 11.Fiori S. A contribution to (neuromorphic) blind deconvolution by flexible approximated Bayesian estimation. Signal Process. (Eurasip) 2001;81:2131–2153. doi: 10.1016/S0165-1684(01)00108-6. [DOI] [Google Scholar]
  • 12.Godfrey R., Rocca F. Zero memory non-linear deconvolution. Geophys. Prospect. 1981;29:189–228. doi: 10.1111/j.1365-2478.1981.tb00401.x. [DOI] [Google Scholar]
  • 13.Jumarie G. Nonlinear filtering: A weighted mean squares approach and a Bayesian one via the Maximum Entropy principle. Signal Process. 1990;21:323–338. doi: 10.1016/0165-1684(90)90102-5. [DOI] [Google Scholar]
  • 14.Papoulis A. Probability, Random Variables, and Stochastic Processes. 2nd ed. McGraw-Hill; New York, NY, USA: 1984. p. 536. International Edition. Chapter 15. [Google Scholar]
  • 15.Assaf S.A., Zirkle L.D. Approximate analysis of nonlinear stochastic systems. Int. J. Control. 1976;23:477–492. doi: 10.1080/00207177608922174. [DOI] [Google Scholar]
  • 16.Bover D.C.C. Moment equation methods for nonlinear stochastic systems. J. Math. Anal. Appl. 1978;65:306–320. doi: 10.1016/0022-247X(78)90182-8. [DOI] [Google Scholar]
  • 17.Armando Domínguez-molina J., González-farías G., Rodríguez-dagnino R.M. A Practical Procedure to Estimate the Shape Parameter in the Generalized Gaussian Distribution. [(accessed on 28 March 2021)]; Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.329.2835; https://www.cimat.mx/reportes/enlinea/I-01-18_eng.pdf.
  • 18.González-Farías G., Domínguez-Molina J.A., Rodríguez-Dagnino R.M. Efficiency of the Approximated Shape Parameter Estimator in the Generalized Gaussian Distribution. IEEE Trans. Vehicular Technol. 2009;8:4214–4223. doi: 10.1109/TVT.2009.2021270. [DOI] [Google Scholar]
  • 19.Novey M., Adali T., Roy A. A complex generalized Gaussian distribution-characterization, generation and estimation. IEEE Trans. Signal Process. 2010;58:1427–1433. doi: 10.1109/TSP.2009.2036049. [DOI] [Google Scholar]
  • 20.Golberg H., Pinchas M. A Novel Technique for Achieving the Approximated ISI at the Receiver for a 16QAM Signal Sent via a FIR Channel Based Only on the Received Information and Statistical Techniques. Entropy. 2020;22:708. doi: 10.3390/e22060708. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Orszag S.A., Bender C.M. Advanced Mathematical Methods for Scientists and Engineers. International Series in Pure and Applied Mathematics. McGraw-Hill; New York, NY, USA: 1978. Chapter 6. [Google Scholar]
  • 22.Lazaro M., Santamaria I., Erdogmus D., Hild K.E., Pantaleon C., Principe J.C. Stochastic blind equalization based on pdf fitting using Parzen estimator. IEEE Trans. Signal Process. 2005;53:696–704. doi: 10.1109/TSP.2004.840767. [DOI] [Google Scholar]

Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)