Entropy
. 2022 Jun 17;24(6):838. doi: 10.3390/e24060838

A Generic Formula and Some Special Cases for the Kullback–Leibler Divergence between Central Multivariate Cauchy Distributions

Nizar Bouhlel 1,*, David Rousseau 2
Editors: Karagrigoriou Alexandros, Makrides Andreas
PMCID: PMC9222751  PMID: 35741558

Abstract

This paper introduces a closed-form expression for the Kullback–Leibler divergence (KLD) between two central multivariate Cauchy distributions (MCDs), which have recently been used in different signal and image processing applications where non-Gaussian models are needed. In this overview, the MCDs are surveyed and some new results and properties of the KLD are derived and discussed. In addition, the KLD for MCDs is shown to be expressible in terms of the Lauricella $D$-hypergeometric series $F_D^{(p)}$. Finally, a comparison is made between the Monte Carlo sampling method used to approximate the KLD and the numerical value of the closed-form expression of the latter. The Monte Carlo approximation of the KLD is shown to converge to its theoretical value as the number of samples goes to infinity.

Keywords: Multivariate Cauchy distribution (MCD), Kullback–Leibler divergence (KLD), multiple power series, Lauricella D-hypergeometric series

1. Introduction

The multivariate Cauchy distribution (MCD) belongs to the elliptically symmetric distributions [1] and is a special case of the multivariate t-distribution [2] and of the multivariate stable distribution [3]. The MCD has recently been used in several signal and image processing applications for which non-Gaussian models are needed, for instance in speckle denoising, color image denoising, watermarking and speech enhancement, among others. Sahu et al. [4] presented a denoising method for speckle noise removal applied to retinal optical coherence tomography (OCT) images. The method was based on the wavelet transform, where the sub-band coefficients were modeled using a Cauchy distribution. In [5], a dual tree complex wavelet transform (DTCWT)-based despeckling algorithm was proposed for synthetic aperture radar (SAR) images, where the DTCWT coefficients in each subband were modeled with a multivariate Cauchy distribution. In [6], a new color image denoising method in the contourlet domain was suggested for reducing noise in images corrupted by Gaussian noise, where the contourlet subband coefficients were described by the heavy-tailed MCD. Sadreazami et al. [7] put forward a novel multiplicative watermarking scheme in the contourlet domain, where the watermark detector was based on the bivariate Cauchy distribution and designed to capture the across-scale dependencies of the contourlet coefficients. Fontaine et al. [8] proposed a semi-supervised multichannel speech enhancement system where both speech and noise follow the heavy-tailed multivariate complex Cauchy distribution.

Kullback–Leibler divergence (KLD), also called relative entropy, is one of the most fundamental and important measures in information theory and statistics [9,10]. The KLD was first introduced and studied by Kullback and Leibler [11] and Kullback [12] to measure the divergence between two probability mass functions in the case of discrete random variables, and between two univariate or multivariate probability density functions in the case of continuous random variables. In the literature, numerous entropy and divergence measures have been suggested for measuring the similarity between probability distributions, such as the Rényi [13] divergence, the Sharma and Mittal [14] divergence, the Bhattacharyya [15,16] divergence and the Hellinger divergence measures [17]. Other general divergence families have also been introduced and studied, like the ϕ-divergence family of divergence measures defined simultaneously by Csiszár [18] and Ali and Silvey [19], of which the KLD is a special case, the Bregman divergence family [20], the R-divergences introduced by Burbea and Rao [21,22,23], the statistical f-divergences [24,25] and, recently, a new family of generalized divergences called the (h,ϕ)-divergence measures introduced and studied by Menéndez et al. [26]. Readers are referred to [10] for details about these families of divergence measures.

The KLD has a specific interpretation in coding theory [27] and is therefore the most popular and most widely used divergence. Since information-theoretic divergences, and the KLD in particular, are ubiquitous in the information sciences [28,29], it is important to establish closed-form expressions for them [30]. An analytical expression of the KLD between two univariate Cauchy distributions was presented in [31,32]. To date, the KLD of MCDs has no known explicit form, and in practice it is either estimated using expensive Monte Carlo stochastic integration or approximated. Monte Carlo sampling can efficiently estimate the KLD provided that a large number of independent and identically distributed samples is available. Nevertheless, Monte Carlo integration is too slow a process to be useful in many applications. The main contribution of this paper is to derive a closed-form expression for the KLD between two central MCDs in the general case, to benchmark future approaches while avoiding approximation by expensive Monte Carlo (MC) estimation techniques. The paper is organized as follows. Section 2 introduces the MCD and the KLD. Section 3 gives some definitions and propositions related to a multiple power series used to compute the closed-form expression of the KLD between two central MCDs. In Section 4 and Section 5, expressions of some expectations related to the KLD are developed by exploiting the propositions presented in the previous section. Section 6 states the final results on the KLD computed for the central MCD. Section 7 presents some particular results such as the KLD for the univariate and the bivariate Cauchy distribution. Section 8 presents the implementation procedure of the KLD and a comparison with the Monte Carlo sampling method. A summary and some conclusions are provided in the final section.

2. Multivariate Cauchy Distribution and Kullback–Leibler Divergence

Let $X$ be a random vector of $\mathbb{R}^p$ which follows the MCD, characterized by the following probability density function (pdf) [2]:

$$ f_X(x\mid\mu,\Sigma,p)=\frac{\Gamma\big(\frac{1+p}{2}\big)}{\pi^{\frac{p}{2}}\,\Gamma\big(\frac{1}{2}\big)}\,\frac{1}{|\Sigma|^{\frac12}}\,\frac{1}{\big[1+(x-\mu)^{T}\Sigma^{-1}(x-\mu)\big]^{\frac{1+p}{2}}}. \tag{1} $$

This holds for any $x\in\mathbb{R}^p$, where $p$ is the dimensionality of the sample space, $\mu$ is the location vector, $\Sigma$ is a symmetric, positive definite $(p\times p)$ scale matrix and $\Gamma(\cdot)$ is the Gamma function. Let $X_1$ and $X_2$ be two random vectors that follow central MCDs with pdfs $f_{X_1}(x\mid\Sigma_1,p)=f_{X_1}(x\mid 0,\Sigma_1,p)$ and $f_{X_2}(x\mid\Sigma_2,p)=f_{X_2}(x\mid 0,\Sigma_2,p)$ given by (1). The KLD provides an asymmetric measure of the similarity of the two pdfs. Indeed, the KLD between the two central MCDs is given by

$$ KL(X_1\|X_2)=\int_{\mathbb{R}^p}\ln\frac{f_{X_1}(x\mid\Sigma_1,p)}{f_{X_2}(x\mid\Sigma_2,p)}\,f_{X_1}(x\mid\Sigma_1,p)\,dx \tag{2} $$
$$ =\mathbb{E}_{X_1}\{\ln f_{X_1}(X)\}-\mathbb{E}_{X_1}\{\ln f_{X_2}(X)\}. \tag{3} $$

Since the KLD is the relative entropy defined as the difference between the cross-entropy and the entropy, we have the following relation:

$$ KL(X_1\|X_2)=H(f_{X_1},f_{X_2})-H(f_{X_1}) \tag{4} $$

where $H(f_{X_1},f_{X_2})=-\mathbb{E}_{X_1}\{\ln f_{X_2}(X)\}$ denotes the cross-entropy and $H(f_{X_1})=-\mathbb{E}_{X_1}\{\ln f_{X_1}(X)\}$ the entropy. Therefore, the determination of the KLD requires the expressions of the entropy and the cross-entropy. It should be noted that the smaller $KL(X_1\|X_2)$, the more similar $f_{X_1}(x\mid\Sigma_1,p)$ and $f_{X_2}(x\mid\Sigma_2,p)$ are. The symmetric KL similarity measure between $X_1$ and $X_2$ is $d_{KL}(X_1,X_2)=KL(X_1\|X_2)+KL(X_2\|X_1)$. In order to compute the KLD, we have to derive the analytical expressions of $\mathbb{E}_{X_1}\{\ln f_{X_1}(X)\}$ and $\mathbb{E}_{X_1}\{\ln f_{X_2}(X)\}$, which depend, respectively, on $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$ and $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$. Consequently, the closed-form expression of the KLD between two zero-mean MCDs is given by

$$ KL(X_1\|X_2)=\frac12\log\frac{|\Sigma_2|}{|\Sigma_1|}-\frac{1+p}{2}\Big(\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}-\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}\Big). \tag{5} $$

To provide the expression of these two expectations, some tools based on the multiple power series are required. The next section presents some definitions and propositions used for this goal.
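Before any closed form is available, the KLD of Equation (2) can be estimated numerically. The sketch below is an illustration added to this presentation, not code from the paper: it samples a central MCD through the standard multivariate Student-t representation with one degree of freedom, $X=Z/\sqrt{W}$ with $Z\sim\mathcal{N}(0,\Sigma)$ and $W\sim\chi^2_1$, and averages log-density differences.

```python
import numpy as np
from math import lgamma, log, pi

def mcd_logpdf(x, Sigma):
    # log-density of a central MCD, Eq. (1) with mu = 0, for rows of x
    p = Sigma.shape[0]
    Sinv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    q = np.einsum('ij,jk,ik->i', x, Sinv, x)          # x^T Sigma^{-1} x per sample
    const = lgamma((1 + p) / 2) - (p / 2) * log(pi) - lgamma(0.5)
    return const - 0.5 * logdet - (1 + p) / 2 * np.log1p(q)

def sample_mcd(Sigma, n, rng):
    # X = Z / sqrt(W): multivariate t with 1 degree of freedom, i.e. a central MCD
    p = Sigma.shape[0]
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    W = rng.chisquare(1, size=(n, 1))
    return Z / np.sqrt(W)

def kld_mc(Sigma1, Sigma2, n=200_000, seed=0):
    # Monte Carlo estimate of Eq. (2): mean of log f1 - log f2 under f1
    rng = np.random.default_rng(seed)
    x = sample_mcd(Sigma1, n, rng)
    return float(np.mean(mcd_logpdf(x, Sigma1) - mcd_logpdf(x, Sigma2)))
```

For $p=1$ the estimate can be checked against the known univariate closed form $\log\big((\gamma_1+\gamma_2)^2/(4\gamma_1\gamma_2)\big)$ of [31].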

3. Definitions and Propositions

This section presents some definitions and states some propositions related to the multiple power series used to derive the closed-form expressions of the expectations $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$ and $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$, and, as a consequence, the KLD between two central MCDs.

Definition 1.

The Humbert series of $n$ variables, denoted $\Phi_2^{(n)}$, is defined for all $x_i\in\mathbb{C}$, $i=1,\dots,n$, by the following multiple power series (Section 1.4 in [33]):

$$ \Phi_2^{(n)}(b_1,\dots,b_n;c;x_1,\dots,x_n)=\sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{\sum_{i=1}^{n}m_i}}\prod_{i=1}^{n}\frac{x_i^{m_i}}{m_i!}. \tag{6} $$

The Pochhammer symbol $(q)_i$ indicates the $i$-th rising factorial of $q$, i.e., for an integer $i>0$,

$$ (q)_i=q(q+1)\cdots(q+i-1)=\prod_{k=0}^{i-1}(q+k)=\frac{\Gamma(q+i)}{\Gamma(q)}. \tag{7} $$
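For quick numeric checks of the series used in this section, the rising factorial of Equation (7) is easy to implement. The helper below is an illustrative sketch, not part of the paper; it follows the product form and agrees with the Gamma-ratio form.

```python
from math import gamma

def poch(q, i):
    # rising factorial (q)_i = q (q+1) ... (q+i-1), Eq. (7); (q)_0 = 1
    out = 1.0
    for k in range(i):
        out *= q + k
    return out
```

By Equation (7) this coincides with $\Gamma(q+i)/\Gamma(q)$ whenever the Gamma function is finite at both points.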

3.1. Integral Representation for $\Phi_2^{(n)}$

Proposition 1.

The following integral representation is true for $\Re\{c\}>\Re\{\sum_{i=1}^{n}b_i\}>0$ and $\Re\{b_i\}>0$, where $\Re\{\cdot\}$ denotes the real part of a complex number:

$$ \int\!\cdots\!\int_{\Delta}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}\prod_{i=1}^{n}u_i^{b_i-1}e^{x_iu_i}\,du_i=B\Big(b_1,\dots,b_n,c-\sum_{i=1}^{n}b_i\Big)\,\Phi_2^{(n)}(b_1,\dots,b_n;c;x_1,\dots,x_n) \tag{8} $$

where $\Delta=\{(u_1,\dots,u_n)\mid 0\le u_i\le1,\ i=1,\dots,n;\ 0\le u_1+\cdots+u_n\le1\}$ and the multivariate beta function $B$ is the extension of the beta function to more than two arguments (also called the Dirichlet function), defined as (Section 1.6.1 in [34])

$$ B(b_1,\dots,b_n,b_{n+1})=\frac{\prod_{i=1}^{n+1}\Gamma(b_i)}{\Gamma\big(\sum_{i=1}^{n+1}b_i\big)}. \tag{9} $$

Proof. 

The power series of exponential function is given by

$$ e^{x_iu_i}=\sum_{m_i=0}^{\infty}\frac{x_i^{m_i}}{m_i!}u_i^{m_i}. \tag{10} $$

By substituting the expression of the exponential into the multiple integrals we have

$$ \int\!\cdots\!\int_{\Delta}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}\prod_{i=1}^{n}u_i^{b_i-1}e^{x_iu_i}\,du_i=\int\!\cdots\!\int_{\Delta}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}\prod_{i=1}^{n}\sum_{m_i=0}^{\infty}\frac{x_i^{m_i}}{m_i!}u_i^{m_i+b_i-1}\,du_i=\sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\prod_{i=1}^{n}\frac{x_i^{m_i}}{m_i!}\times I_D \tag{11} $$

where the multivariate integral ID, which is a generalization of a beta integral, is the type-1 Dirichlet integral (Section 1.6.1 in [34]) given by

$$ I_D=\int\!\cdots\!\int_{\Delta}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}\prod_{i=1}^{n}u_i^{m_i+b_i-1}\,du_i=\frac{\prod_{i=1}^{n}\Gamma(b_i+m_i)\,\Gamma\big(c-\sum_{i=1}^{n}b_i\big)}{\Gamma\big(c+\sum_{i=1}^{n}m_i\big)}. \tag{12} $$

Knowing that $\Gamma(b_i+m_i)=\Gamma(b_i)(b_i)_{m_i}$, the expression of $I_D$ can be rewritten as

$$ I_D=\frac{\prod_{i=1}^{n}\Gamma(b_i)\,\Gamma\big(c-\sum_{i=1}^{n}b_i\big)}{\Gamma(c)}\,\frac{\prod_{i=1}^{n}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n}m_i}}. \tag{13} $$

Finally, plugging (13) back into (11) leads to the final result

$$ \frac{\Gamma\big(c-\sum_{i=1}^{n}b_i\big)\prod_{i=1}^{n}\Gamma(b_i)}{\Gamma(c)}\sum_{m_1,\dots,m_n=0}^{+\infty}\frac{\prod_{i=1}^{n}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n}m_i}}\prod_{i=1}^{n}\frac{x_i^{m_i}}{m_i!}=B\Big(b_1,\dots,b_n,c-\sum_{i=1}^{n}b_i\Big)\,\Phi_2^{(n)}(b_1,\dots,b_n;c;x_1,\dots,x_n).\;\square \tag{14} $$

Given Proposition 1, we consider the particular cases n={1,2} one by one as follows:

Case $n=1$:

$$ \frac{1}{B(b_1,c-b_1)}\int_0^1u_1^{b_1-1}e^{x_1u_1}(1-u_1)^{c-b_1-1}\,du_1=\sum_{m_1=0}^{\infty}\frac{(b_1)_{m_1}}{(c)_{m_1}}\frac{x_1^{m_1}}{m_1!}=\Phi_2^{(1)}(b_1;c;x_1)={}_1F_1(b_1,c;x_1) \tag{15} $$

where 1F1(.) is the confluent hypergeometric function of the first kind (Section 9.21 in [35]).

Case $n=2$:

$$ \frac{1}{B(b_1,b_2,c-b_1-b_2)}\iint_{\substack{u_1\ge0,\,u_2\ge0\\ u_1+u_2\le1}}u_1^{b_1-1}u_2^{b_2-1}e^{x_1u_1+x_2u_2}(1-u_1-u_2)^{c-b_1-b_2-1}\,du_1du_2=\sum_{m_1=0}^{\infty}\sum_{m_2=0}^{\infty}\frac{(b_1)_{m_1}(b_2)_{m_2}}{(c)_{m_1+m_2}}\frac{x_1^{m_1}}{m_1!}\frac{x_2^{m_2}}{m_2!}=\Phi_2^{(2)}(b_1,b_2;c;x_1,x_2)=\Phi_2(b_1,b_2,c;x_1,x_2) \tag{16} $$

where the double series $\Phi_2$ is one of the Humbert series of two variables [36] that generalize Kummer's confluent hypergeometric series ${}_1F_1$ of one variable. The double series $\Phi_2$ converges absolutely for any $x_1,x_2\in\mathbb{C}$.
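A truncated evaluation of the Humbert series (6) is straightforward. The sketch below is illustrative (the truncation bound N is an assumption of this edit); it checks the $n=1$ reduction to ${}_1F_1$ of Equation (15) against SciPy's implementation, including the degenerate case where one argument is zero.

```python
import math
from itertools import product
from scipy.special import poch, hyp1f1

def phi2(b, c, x, N=40):
    # truncated Humbert series Phi_2^{(n)} of Eq. (6); each index m_i runs to N-1
    n = len(b)
    total = 0.0
    for m in product(range(N), repeat=n):
        term = 1.0 / poch(c, sum(m))
        for bi, xi, mi in zip(b, x, m):
            term *= poch(bi, mi) * xi ** mi / math.factorial(mi)
        total += term
    return total
```

For $n=1$ the sum reduces to ${}_1F_1(b_1,c;x_1)$, and setting $x_2=0$ in the $n=2$ case collapses the double series to the same single series.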

3.2. Multiple Power Series $F_N^{(n)}$

Definition 2.

We define a new multiple power series, denoted by $F_N^{(n)}$ and given by

$$ F_N^{(n)}(a;b_1,\dots,b_n;c,c_n;x_1,\dots,x_n)=x_n^{-a}\sum_{m_1,\dots,m_n=0}^{+\infty}\frac{(a)_{\sum_{i=1}^{n}m_i}(a-c_n+1)_{\sum_{i=1}^{n}m_i}}{(a+b_n-c_n+1)_{\sum_{i=1}^{n}m_i}}\,\frac{\prod_{i=1}^{n-1}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n-1}m_i}}\prod_{i=1}^{n-1}\frac{(x_ix_n^{-1})^{m_i}}{m_i!}\,\frac{(1-x_n^{-1})^{m_n}}{m_n!}. \tag{17} $$

The multiple power series (17) is absolutely convergent on the region $|x_ix_n^{-1}|+|1-x_n^{-1}|<1$, $i\in\{1,\dots,n-1\}$, in $\mathbb{C}^n$.

The multiple power series $F_N^{(n)}(\cdot)$ can also be transformed into two other expressions as follows:

$$ F_N^{(n)}(a;b_1,\dots,b_n;c,c_n;x_1,\dots,x_n)=\sum_{m_1,\dots,m_n=0}^{+\infty}\frac{(a-c_n+1)_{\sum_{i=1}^{n-1}m_i}(b_n)_{m_n}(a)_{\sum_{i=1}^{n}m_i}}{(a+b_n-c_n+1)_{\sum_{i=1}^{n}m_i}}\,\frac{\prod_{i=1}^{n-1}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n-1}m_i}}\prod_{i=1}^{n-1}\frac{x_i^{m_i}}{m_i!}\,\frac{(1-x_n)^{m_n}}{m_n!}, \tag{18} $$
$$ =x_n^{1-c_n}\sum_{m_1,\dots,m_n=0}^{+\infty}\frac{(a-c_n+1)_{\sum_{i=1}^{n}m_i}(b_n-c_n+1)_{m_n}(a)_{\sum_{i=1}^{n-1}m_i}}{(a+b_n-c_n+1)_{\sum_{i=1}^{n}m_i}}\,\frac{\prod_{i=1}^{n-1}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n-1}m_i}}\prod_{i=1}^{n-1}\frac{x_i^{m_i}}{m_i!}\,\frac{(1-x_n)^{m_n}}{m_n!}. \tag{19} $$

By Horn's rule for the determination of the convergence region (see [37], Section 5.7.2), the multiple power series (18) and (19) are absolutely convergent on the region $|x_i|<1$, $i\in\{1,\dots,n-1\}$, $|1-x_n|<1$ in $\mathbb{C}^n$.

Equation (18) can then be deduced from (17) by using the following development, where the $F_N^{(n)}$ function can be written as

$$ F_N^{(n)}(a;b_1,\dots,b_n;c,c_n;x_1,\dots,x_n)=x_n^{-a}\sum_{m_1,\dots,m_{n-1}=0}^{+\infty}\frac{(a)_{\sum_{i=1}^{n-1}m_i}(a-c_n+1)_{\sum_{i=1}^{n-1}m_i}}{(a+b_n-c_n+1)_{\sum_{i=1}^{n-1}m_i}}\,\frac{\prod_{i=1}^{n-1}(b_i)_{m_i}}{(c)_{\sum_{i=1}^{n-1}m_i}}\times\prod_{i=1}^{n-1}\frac{(x_ix_n^{-1})^{m_i}}{m_i!}\sum_{m_n=0}^{\infty}\frac{(\alpha)_{m_n}(\alpha-c_n+1)_{m_n}}{(\alpha+b_n-c_n+1)_{m_n}}\frac{(1-x_n^{-1})^{m_n}}{m_n!} \tag{20} $$

where $\alpha=a+\sum_{i=1}^{n-1}m_i$ is used here to alleviate notation. Using the definition of Gauss' hypergeometric series ${}_2F_1(\cdot)$ [34] and the Pfaff transformation [38], we can write

$$ \sum_{m_n=0}^{\infty}\frac{(\alpha)_{m_n}(\alpha-c_n+1)_{m_n}}{(\alpha+b_n-c_n+1)_{m_n}}\frac{(1-x_n^{-1})^{m_n}}{m_n!}={}_2F_1\big(\alpha,\alpha-c_n+1;\alpha+b_n-c_n+1;1-x_n^{-1}\big) \tag{21} $$
$$ =x_n^{\alpha}\,{}_2F_1\big(\alpha,b_n;\alpha+b_n-c_n+1;1-x_n\big) \tag{22} $$
$$ =x_n^{\alpha}\sum_{m_n=0}^{\infty}\frac{(\alpha)_{m_n}(b_n)_{m_n}}{(\alpha+b_n-c_n+1)_{m_n}}\frac{(1-x_n)^{m_n}}{m_n!}. \tag{23} $$

By substituting (23) into (20), and using the following two relations:

$$ (a)_{\sum_{i=1}^{n-1}m_i}(\alpha)_{m_n}=(a)_{\sum_{i=1}^{n}m_i}, \tag{24} $$
$$ (a+b_n-c_n+1)_{\sum_{i=1}^{n-1}m_i}(\alpha+b_n-c_n+1)_{m_n}=(a+b_n-c_n+1)_{\sum_{i=1}^{n}m_i}, \tag{25} $$

we get (18).

The second transformation is given as follows:

$$ {}_2F_1\big(\alpha,\alpha-c_n+1;\alpha+b_n-c_n+1;1-x_n^{-1}\big)=x_n^{\alpha-c_n+1}\,{}_2F_1\big(b_n-c_n+1,\alpha-c_n+1;\alpha+b_n-c_n+1;1-x_n\big) \tag{26} $$
$$ =x_n^{\alpha-c_n+1}\sum_{m_n=0}^{\infty}\frac{(\alpha-c_n+1)_{m_n}(b_n-c_n+1)_{m_n}}{(\alpha+b_n-c_n+1)_{m_n}}\frac{(1-x_n)^{m_n}}{m_n!}. \tag{27} $$

By substituting (27) into (20), we get (19).

Lemma 1.

The multiple power series $F_N^{(n)}$ is equal to the Lauricella $D$-hypergeometric function $F_D^{(n)}$ (see Appendix A) [39] when $a-c_n+1=c$, and is given as follows:

$$ F_N^{(n)}(a;b_1,\dots,b_n;c,c_n;x_1,\dots,x_n)=\sum_{m_1,\dots,m_n=0}^{+\infty}\frac{(a)_{\sum_{i=1}^{n}m_i}\prod_{i=1}^{n}(b_i)_{m_i}}{(a+b_n-c_n+1)_{\sum_{i=1}^{n}m_i}}\prod_{i=1}^{n-1}\frac{x_i^{m_i}}{m_i!}\,\frac{(1-x_n)^{m_n}}{m_n!} \tag{28} $$
$$ =F_D^{(n)}\big(a,b_1,\dots,b_n;a+b_n-c_n+1;x_1,\dots,x_{n-1},1-x_n\big). \tag{29} $$

Proof. 

By using Equation (18) of the multiple power series $F_N^{(n)}$ and cancelling $(a-c_n+1)_{\sum_{i=1}^{n-1}m_i}$ in the numerator against $(c)_{\sum_{i=1}^{n-1}m_i}$ in the denominator, which are equal when $a-c_n+1=c$, we get the result. □

3.3. Integral Representation for $F_N^{(n+1)}$

Proposition 2.

The following integral representation is true for $\Re\{a\}>0$, $\Re\{a-c_{n+1}+1\}>0$ and $\Re\{a-c_{n+1}+b_{n+1}+1\}>0$:

$$ \frac{\Gamma(a)\,\Gamma(a-c_{n+1}+1)}{\Gamma(a-c_{n+1}+b_{n+1}+1)}\,F_N^{(n+1)}(a;b_1,\dots,b_{n+1};c,c_{n+1};x_1,\dots,x_{n+1})=\int_0^{\infty}e^{-r}r^{a-1}\,\Phi_2^{(n)}(b_1,\dots,b_n;c;rx_1,\dots,rx_n)\,U(b_{n+1},c_{n+1};rx_{n+1})\,dr \tag{30} $$

where $U(\cdot)$ is the confluent hypergeometric function of the second kind (Section 9.21 in [35]), defined for $\Re\{b\}>0$, $\Re\{z\}>0$ by the following integral representation:

$$ U(b,c;z)=\frac{1}{\Gamma(b)}\int_0^{\infty}e^{-zt}t^{b-1}(1+t)^{c-b-1}\,dt \tag{31} $$

and Φ2(n)(·) is defined by Equation (6).

Proof. 

The multiple power series $\Phi_2^{(n)}$ and the confluent hypergeometric function $U(\cdot)$ are absolutely convergent on $[0,+\infty)$. Using these functions in the above integral and changing the order of integration and summation, which is easily justified by absolute convergence, we get

$$ \int_0^{\infty}e^{-r}r^{a-1}\,\Phi_2^{(n)}(b_1,\dots,b_n;c;rx_1,\dots,rx_n)\,U(b_{n+1},c_{n+1};rx_{n+1})\,dr=\sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{\sum_{i=1}^{n}m_i}}\prod_{i=1}^{n}\frac{x_i^{m_i}}{m_i!}\times I \tag{32} $$

where integral I is defined as follows

$$ I=\int_0^{\infty}e^{-r}r^{a-1+\sum_{i=1}^{n}m_i}\,U(b_{n+1},c_{n+1};rx_{n+1})\,dr. \tag{33} $$

Substituting the integral expression of $U(\cdot)$ into the previous equation and setting $\alpha=a+\sum_{i=1}^{n}m_i$ to alleviate notation, we have

$$ I=\frac{1}{\Gamma(b_{n+1})}\int_0^{\infty}\!\!\int_0^{\infty}e^{-(1+x_{n+1}t)r}\,r^{\alpha-1}t^{b_{n+1}-1}(1+t)^{c_{n+1}-b_{n+1}-1}\,dr\,dt. \tag{34} $$

Knowing that [35]

$$ \int_0^{\infty}e^{-(1+x_{n+1}t)r}\,r^{\alpha-1}\,dr=\frac{\Gamma(\alpha)}{(1+x_{n+1}t)^{\alpha}} \tag{35} $$

and

$$ \int_0^{\infty}\frac{t^{b_{n+1}-1}(1+t)^{c_{n+1}-b_{n+1}-1}}{(1+x_{n+1}t)^{\alpha}}\,dt=\frac{\Gamma(b_{n+1})\,\Gamma(\alpha-c_{n+1}+1)}{\Gamma(\alpha+b_{n+1}-c_{n+1}+1)}\,{}_2F_1\big(\alpha,b_{n+1};\alpha+b_{n+1}-c_{n+1}+1;1-x_{n+1}\big), \tag{36} $$

the new expression of $I$ is then given by

$$ I=\frac{\Gamma(\alpha)\,\Gamma(\alpha-c_{n+1}+1)}{\Gamma(\alpha+b_{n+1}-c_{n+1}+1)}\sum_{m_{n+1}=0}^{+\infty}\frac{(\alpha)_{m_{n+1}}(b_{n+1})_{m_{n+1}}}{(\alpha+b_{n+1}-c_{n+1}+1)_{m_{n+1}}}\frac{(1-x_{n+1})^{m_{n+1}}}{m_{n+1}!}. \tag{37} $$

Using the fact that $\Gamma(\alpha)=\Gamma(a)(a)_{\sum_{i=1}^{n}m_i}$ and $(a)_{\sum_{i=1}^{n}m_i}(\alpha)_{m_{n+1}}=(a)_{\sum_{i=1}^{n+1}m_i}$, and applying the same method to $\Gamma(\alpha-c_{n+1}+1)$ and $\Gamma(\alpha+b_{n+1}-c_{n+1}+1)$, the final complete expression of the integral is then given by

$$ \frac{\Gamma(a)\,\Gamma(a-c_{n+1}+1)}{\Gamma(a+b_{n+1}-c_{n+1}+1)}\sum_{m_1=0}^{\infty}\cdots\sum_{m_{n+1}=0}^{\infty}\frac{(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{\sum_{i=1}^{n}m_i}}\,\frac{(a-c_{n+1}+1)_{\sum_{i=1}^{n}m_i}(b_{n+1})_{m_{n+1}}(a)_{\sum_{i=1}^{n+1}m_i}}{(a+b_{n+1}-c_{n+1}+1)_{\sum_{i=1}^{n+1}m_i}}\prod_{i=1}^{n}\frac{x_i^{m_i}}{m_i!}\times\frac{(1-x_{n+1})^{m_{n+1}}}{m_{n+1}!}=\frac{\Gamma(a)\,\Gamma(a-c_{n+1}+1)}{\Gamma(a-c_{n+1}+b_{n+1}+1)}\,F_N^{(n+1)}(a;b_1,\dots,b_{n+1};c,c_{n+1};x_1,\dots,x_{n+1}).\;\square \tag{38} $$

4. Expression of $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$

Proposition 3.

Let $X_1$ be a random vector that follows a central MCD with pdf given by $f_{X_1}(x\mid\Sigma_1,p)$. The expectation $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$ is given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big) \tag{39} $$

where ψ(.) is the digamma function defined as the logarithmic derivative of the Gamma function (Section 8.36 in [35]).

Proof. 

The expectation $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$ is developed as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=\frac{A}{|\Sigma_1|^{\frac12}}\int_{\mathbb{R}^p}\frac{\ln[1+x^T\Sigma_1^{-1}x]}{[1+x^T\Sigma_1^{-1}x]^{\frac{1+p}{2}}}\,dx \tag{40} $$

where $A=\Gamma\big(\frac{1+p}{2}\big)/\pi^{\frac{1+p}{2}}$. Utilizing the property $\int\log(x)f(x)\,dx=\frac{\partial}{\partial a}\int x^af(x)\,dx\big|_{a=0}$, the expectation is given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=\frac{A}{|\Sigma_1|^{\frac12}}\frac{\partial}{\partial a}\int_{\mathbb{R}^p}\big[1+x^T\Sigma_1^{-1}x\big]^{a-\frac{1+p}{2}}\,dx\,\Big|_{a=0}. \tag{41} $$

Consider the transformation $y=\Sigma_1^{-1/2}x$ where $y=[y_1,y_2,\dots,y_p]^T$. The Jacobian determinant gives $dy=|\Sigma_1|^{-1/2}dx$ (Theorem 1.12 in [40]). The new expression of the expectation is given by

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=A\,\frac{\partial}{\partial a}\int_{\mathbb{R}^p}\big[1+y^Ty\big]^{a-\frac{1+p}{2}}\,dy\,\Big|_{a=0}. \tag{42} $$

Let $u=y^Ty$ be a transformation whose Jacobian determinant gives (Lemma 13.3.1 in [41])

$$ dy=\frac{\pi^{\frac{p}{2}}}{\Gamma\big(\frac{p}{2}\big)}\,u^{\frac{p}{2}-1}\,du. \tag{43} $$

The new expectation is as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=\frac{\Gamma\big(\frac{1+p}{2}\big)}{\pi^{1/2}\,\Gamma\big(\frac{p}{2}\big)}\frac{\partial}{\partial a}\int_0^{+\infty}u^{\frac{p}{2}-1}(1+u)^{a-\frac{1+p}{2}}\,du\,\Big|_{a=0}. \tag{44} $$

Using the definition of the beta function, we can write

$$ \int_0^{+\infty}u^{\frac{p}{2}-1}(1+u)^{a-\frac{1+p}{2}}\,du=\frac{\Gamma\big(\frac{p}{2}\big)\,\Gamma\big(\frac12-a\big)}{\Gamma\big(\frac{1+p}{2}-a\big)}. \tag{45} $$

The derivative of the last integral w.r.t. $a$ is given by

$$ \frac{\partial}{\partial a}\int_0^{+\infty}u^{\frac{p}{2}-1}(1+u)^{a-\frac{1+p}{2}}\,du\,\Big|_{a=0}=\frac{\Gamma\big(\frac{p}{2}\big)\,\Gamma\big(\frac12\big)}{\Gamma\big(\frac{1+p}{2}\big)}\Big[\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)\Big]. \tag{46} $$

Finally, the expression of $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_1^{-1}X]\}$ is given by

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_1^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big).\;\square \tag{47} $$
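Proposition 3 is easy to validate numerically. The snippet below is an illustrative check, not part of the paper: it evaluates Equation (39) with SciPy's digamma and cross-checks it by Monte Carlo for $p=2$, where $\psi(3/2)-\psi(1/2)=2$ exactly.

```python
import numpy as np
from scipy.special import digamma

def expect_log_quad(p):
    # Eq. (39): E{ln(1 + X^T Sigma_1^{-1} X)} = psi((1+p)/2) - psi(1/2)
    return digamma((1 + p) / 2) - digamma(0.5)

# Monte Carlo cross-check for p = 2 and Sigma_1 = I:
# X = Z / sqrt(W) with Z ~ N(0, I), W ~ chi^2_1 gives central MCD samples
rng = np.random.default_rng(1)
Z = rng.standard_normal((200_000, 2))
W = rng.chisquare(1, size=(200_000, 1))
X = Z / np.sqrt(W)
mc = float(np.mean(np.log1p(np.sum(X * X, axis=1))))
```

Note that the expectation does not depend on $\Sigma_1$, consistent with the transformation $y=\Sigma_1^{-1/2}x$ used in the proof.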

5. Expression of $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$

Proposition 4.

Let $X_1$ and $X_2$ be two random vectors that follow central MCDs with pdfs given, respectively, by $f_{X_1}(x\mid\Sigma_1,p)$ and $f_{X_2}(x\mid\Sigma_2,p)$. The expectation $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$ is given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)+\ln\lambda_p-\frac{\partial}{\partial a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)\Big|_{a=0} \tag{48} $$

where $\lambda_1,\dots,\lambda_p$ are the eigenvalues of the real matrix $\Sigma_1\Sigma_2^{-1}$, and $F_D^{(p)}(\cdot)$ represents the Lauricella $D$-hypergeometric function defined for $p$ variables.

Proof. 

To prove Proposition 4, different steps are necessary. They are described in the following:

5.1. First Step: Eigenvalue Expression

The expectation $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$ is computed as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=\frac{A}{|\Sigma_1|^{\frac12}}\int_{\mathbb{R}^p}\frac{\ln[1+x^T\Sigma_2^{-1}x]}{[1+x^T\Sigma_1^{-1}x]^{\frac{1+p}{2}}}\,dx \tag{49} $$

where $A=\Gamma\big(\frac{1+p}{2}\big)/\pi^{\frac{1+p}{2}}$. Consider the transformation $y=\Sigma_1^{-1/2}x$ where $y=[y_1,y_2,\dots,y_p]^T$. The Jacobian determinant gives $dy=|\Sigma_1|^{-1/2}dx$ (Theorem 1.12 in [40]), and the matrix $\Sigma=\Sigma_1^{\frac12}\Sigma_2^{-1}\Sigma_1^{\frac12}$ is real symmetric since $\Sigma_1$ and $\Sigma_2$ are real symmetric matrices. Then, the expectation is evaluated as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=A\int_{\mathbb{R}^p}\frac{\ln[1+y^T\Sigma y]}{[1+y^Ty]^{\frac{1+p}{2}}}\,dy. \tag{50} $$

The matrix $\Sigma$ can be diagonalized by an orthogonal matrix $P$ with $P^{-1}=P^T$ and $\Sigma=PDP^{-1}$, where $D$ is a diagonal matrix composed of the eigenvalues of $\Sigma$. Considering that $y^T\Sigma y=\mathrm{tr}(\Sigma yy^T)=\mathrm{tr}(PDP^Tyy^T)=\mathrm{tr}(DP^Tyy^TP)$, the expectation can be written as

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=A\int_{\mathbb{R}^p}\frac{\ln[1+\mathrm{tr}(DP^Tyy^TP)]}{[1+y^Ty]^{\frac{1+p}{2}}}\,dy. \tag{51} $$

Let $z=P^Ty$ with $z=[z_1,z_2,\dots,z_p]^T$ be a transformation whose Jacobian determinant gives $dz=|P^T|dy=dy$. Using the fact that $\mathrm{tr}(DP^Tyy^TP)=\mathrm{tr}(Dzz^T)=z^TDz$ and $y^Ty=z^TP^TPz=z^Tz$, the previous expectation (51) is given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=A\int_{\mathbb{R}^p}\frac{\ln[1+z^TDz]}{[1+z^Tz]^{\frac{1+p}{2}}}\,dz \tag{52} $$
$$ =A\int_{\mathbb{R}}\cdots\int_{\mathbb{R}}\frac{\ln\big[1+\sum_{i=1}^{p}\lambda_iz_i^2\big]}{\big[1+\sum_{i=1}^{p}z_i^2\big]^{\frac{1+p}{2}}}\,dz_1\cdots dz_p \tag{53} $$

where $\lambda_1,\dots,\lambda_p$ are the eigenvalues of $\Sigma_1\Sigma_2^{-1}$.

5.2. Second Step: Polar Decomposition

Let the independent real variables $z_1,\dots,z_p$ be transformed to the general polar coordinates $r,\theta_1,\dots,\theta_{p-1}$ as follows, where $r>0$, $-\pi/2<\theta_j\le\pi/2$, $j=1,\dots,p-2$, $-\pi<\theta_{p-1}\le\pi$ [40]:

$$ z_1=r\sin\theta_1 \tag{54} $$
$$ z_2=r\cos\theta_1\sin\theta_2 \tag{55} $$
$$ z_j=r\cos\theta_1\cos\theta_2\cdots\cos\theta_{j-1}\sin\theta_j,\quad j=2,3,\dots,p-1 \tag{56} $$
$$ z_p=r\cos\theta_1\cos\theta_2\cdots\cos\theta_{p-1}. \tag{57} $$

The Jacobian determinant according to Theorem 1.24 in [40] is

$$ dz_1\cdots dz_p=r^{p-1}\prod_{j=1}^{p-1}|\cos\theta_j|^{p-j-1}\,dr\,d\theta_j. \tag{58} $$

It is clear that with the last transformations we get $\sum_{i=1}^{p}z_i^2=r^2$, and the multiple integral in (53) is then given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=A\int_0^{+\infty}\frac{r^{p-1}}{[1+r^2]^{\frac{1+p}{2}}}\int_{-\pi/2}^{\pi/2}\!\!\cdots\int_{-\pi}^{\pi}\prod_{j=1}^{p-1}|\cos\theta_j|^{p-j-1}\times\ln\Big[1+r^2\big(\lambda_1\sin^2\theta_1+\cdots+\lambda_p\cos^2\theta_1\cdots\cos^2\theta_{p-1}\big)\Big]\,dr\prod_{j=1}^{p-1}d\theta_j. \tag{59} $$

By replacing $\sin^2\theta_j$ with $1-\cos^2\theta_j$ for $j=1,\dots,p-1$, we have the following expression:

$$ \lambda_1\sin^2\theta_1+\cdots+\lambda_p\cos^2\theta_1\cdots\cos^2\theta_{p-1}=\lambda_1+(\lambda_2-\lambda_1)\cos^2\theta_1+\cdots+(\lambda_p-\lambda_{p-1})\cos^2\theta_1\cos^2\theta_2\cdots\cos^2\theta_{p-1}. \tag{60} $$

Let $x_i=\cos^2\theta_i$ be the next transformation, where $dx_i=-2x_i^{1/2}(1-x_i)^{1/2}\,d\theta_i$. Then the expectation given by the multiple integral over all $\theta_j$, $j=1,\dots,p-1$, is as follows:

$$ 2A\int_0^{+\infty}\frac{r^{p-1}}{[1+r^2]^{\frac{1+p}{2}}}\int_0^1\cdots\int_0^1\prod_{j=1}^{p-1}x_j^{\frac{p-j}{2}-1}(1-x_j)^{-\frac12}\ln\big[1+r^2B_p(x_1,\dots,x_{p-1})\big]\,dr\,dx_1\cdots dx_{p-1} \tag{61} $$

where $B_p(x_1,\dots,x_{p-1})=\lambda_1+(\lambda_2-\lambda_1)x_1+\cdots+(\lambda_p-\lambda_{p-1})x_1x_2\cdots x_{p-1}$ for $p>1$, and $B_1=\lambda_1$. In the following, we write $B_p$ instead of $B_p(x_1,\dots,x_{p-1})$ to alleviate notation.

Let $t=r^2$ be the next transformation. Then, one can write

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=A\int_0^{+\infty}\frac{t^{\frac{p}{2}-1}}{[1+t]^{\frac{1+p}{2}}}\int_0^1\cdots\int_0^1\prod_{j=1}^{p-1}x_j^{\frac{p-j}{2}-1}(1-x_j)^{-\frac12}\ln[1+tB_p]\,dt\,dx_1\cdots dx_{p-1}. \tag{62} $$

In order to solve the integral in (62), we consider the property $\int\log(x)f(x)\,dx=\frac{\partial}{\partial a}\int x^af(x)\,dx\big|_{a=0}$ together with the following equation:

$$ (1+B_pt)^{-a}=\frac{1}{\Gamma(a)}\int_0^{+\infty}y^{a-1}e^{-(1+B_pt)y}\,dy. \tag{63} $$

Making use of the above equation, and noting that $\ln(1+B_pt)=-\frac{\partial}{\partial a}(1+B_pt)^{-a}\big|_{a=0}$, we obtain a new expression of (62) given as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=-\frac{\partial}{\partial a}\frac{A}{\Gamma(a)}\int_0^{+\infty}\frac{t^{\frac{p}{2}-1}}{[1+t]^{\frac{1+p}{2}}}\int_0^{+\infty}y^{a-1}e^{-(1+B_pt)y}\int_0^1\cdots\int_0^1\prod_{j=1}^{p-1}x_j^{\frac{p-j}{2}-1}(1-x_j)^{-\frac12}\,dx_j\,dy\,dt\,\Big|_{a=0} \tag{64} $$
$$ =-\frac{\partial}{\partial a}\frac{A}{\Gamma(a)}\int_0^{+\infty}\frac{t^{\frac{p}{2}-1}}{[1+t]^{\frac{1+p}{2}}}\int_0^{+\infty}y^{a-1}e^{-y}H(t,y)\,dy\,dt\,\Big|_{a=0} \tag{65} $$

where $H(t,y)$ is defined as

$$ H(t,y)=\int_0^1\cdots\int_0^1e^{-B_pty}\prod_{j=1}^{p-1}x_j^{\frac{p-j}{2}-1}(1-x_j)^{-\frac12}\,dx_j. \tag{66} $$

5.3. Third Step: Expression for H(t,y) by Humbert and Beta Functions

Let $x_i\to1-x_i$, $i=1,\dots,p-1$, be the substitutions to use. Then

$$ (\lambda_2-\lambda_1)x_1\longrightarrow(\lambda_2-\lambda_1)(1-x_1) \tag{67} $$
$$ (\lambda_3-\lambda_2)x_1x_2\longrightarrow(\lambda_3-\lambda_2)(1-x_1)(1-x_2) \tag{68} $$
$$ (\lambda_4-\lambda_3)x_1x_2x_3\longrightarrow(\lambda_4-\lambda_3)(1-x_1)(1-x_2)(1-x_3),\ \dots \tag{69} $$
$$ (\lambda_p-\lambda_{p-1})\prod_{i=1}^{p-1}x_i\longrightarrow(\lambda_p-\lambda_{p-1})\prod_{i=1}^{p-1}(1-x_i). \tag{70} $$

Adding Equations (67) to (70), we can state that the new expression of the function $B_p$ becomes

$$ B_p=\lambda_p-(\lambda_p-\lambda_1)x_1-(\lambda_p-\lambda_2)(1-x_1)x_2-(\lambda_p-\lambda_3)(1-x_1)(1-x_2)x_3-\cdots-(\lambda_p-\lambda_{p-1})(1-x_1)\cdots(1-x_{p-2})x_{p-1}. \tag{71} $$

Then, the multiple integral $H(t,y)$ given by (66) can be rewritten as

$$ H(t,y)=\int_0^1\cdots\int_0^1e^{-B_pty}\prod_{j=1}^{p-1}(1-x_j)^{\frac{p-j}{2}-1}x_j^{-\frac12}\,dx_1\cdots dx_{p-1}. \tag{72} $$

Let the real variables $x_1,x_2,\dots,x_{p-1}$ be transformed to the real variables $u_1,u_2,\dots,u_{p-1}$ as follows:

$$ u_1=x_1 \tag{73} $$
$$ u_2=(1-x_1)x_2=(1-u_1)x_2 \tag{74} $$
$$ u_3=(1-x_1)(1-x_2)x_3=(1-u_1-u_2)x_3 \tag{75} $$
$$ u_{p-1}=\prod_{i=1}^{p-2}(1-x_i)\,x_{p-1}=\Big(1-\sum_{i=1}^{p-2}u_i\Big)x_{p-1}. \tag{76} $$

The Jacobian determinant is given by

$$ du_1\cdots du_{p-1}=\prod_{j=1}^{p-1}\Big(1-\sum_{i=1}^{j-1}u_i\Big)\,dx_1\cdots dx_{p-1}. \tag{77} $$

Accordingly, the new expression of $B_p$ becomes

$$ B_p=\lambda_p-\sum_{i=1}^{p-1}(\lambda_p-\lambda_i)u_i. \tag{78} $$

As a consequence, the new domain of the multiple integral (72) is $\Delta=\{(u_1,u_2,\dots,u_{p-1})\in\mathbb{R}^{p-1};\ 0\le u_1\le1,\ 0\le u_2\le1-u_1,\ 0\le u_3\le1-u_1-u_2,\ \dots,\ 0\le u_{p-1}\le1-u_1-u_2-\cdots-u_{p-2}\}$, and the expression of $H(t,y)$ is given as follows:

$$ H(t,y)=\int_{\Delta}e^{-B_pty}\prod_{j=1}^{p-1}\Big(1-\sum_{i=1}^{j-1}u_i\Big)^{-1}\Big[\frac{u_j}{1-\sum_{i=1}^{j-1}u_i}\Big]^{-\frac12}\Big[\frac{1-\sum_{i=1}^{j}u_i}{1-\sum_{i=1}^{j-1}u_i}\Big]^{\frac{p-j}{2}-1}\,du_j \tag{79} $$
$$ =\int_{\Delta}e^{-B_pty}\prod_{j=1}^{p-1}u_j^{-\frac12}\Big(1-\sum_{i=1}^{j}u_i\Big)^{\frac{p-j}{2}-1}\Big(1-\sum_{i=1}^{j-1}u_i\Big)^{\frac12-\frac{p-j}{2}}\,du_1\cdots du_{p-1} \tag{80} $$
$$ =\int_{\Delta}e^{-B_pty}\Big(1-\sum_{i=1}^{p-1}u_i\Big)^{-\frac12}\prod_{j=1}^{p-1}u_j^{-\frac12}\,du_j \tag{81} $$
$$ =e^{-\lambda_pty}\int_{\Delta}\Big(1-\sum_{i=1}^{p-1}u_i\Big)^{-\frac12}\prod_{i=1}^{p-1}u_i^{-\frac12}e^{(\lambda_p-\lambda_i)u_ity}\,du_i. \tag{82} $$

Using Proposition 1, we subsequently find that

$$ H(t,y)=e^{-\lambda_pty}\,B\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p}\Big)\,\Phi_2^{(p-1)}\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p-1};\tfrac{p}{2};(\lambda_p-\lambda_1)ty,(\lambda_p-\lambda_2)ty,\dots,(\lambda_p-\lambda_{p-1})ty\Big) \tag{83} $$

where $\Phi_2^{(p-1)}(\cdot)$ is the Humbert series of $p-1$ variables and $B(\frac12,\dots,\frac12)$ is the multivariate beta function. Applying the two successive transformations $r=ty$ ($dr=t\,dy$) and $u=1/t$ ($du=-u^2\,dt$), the new expression of the expectation given by (65) is written as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=-\frac{\partial}{\partial a}\Big\{\frac{A}{\Gamma(a)}B\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p}\Big)\int_0^{+\infty}r^{a-1}e^{-\lambda_pr}\times\Phi_2^{(p-1)}\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p-1};\tfrac{p}{2};(\lambda_p-\lambda_1)r,\dots,(\lambda_p-\lambda_{p-1})r\Big)\int_0^{+\infty}\frac{u^{a-\frac12}}{(1+u)^{\frac{1+p}{2}}}e^{-ru}\,du\,dr\Big\}\Big|_{a=0}. \tag{84} $$

5.4. Final Step

The last integral is related to the confluent hypergeometric function of the second kind $U(\cdot)$ as follows:

$$ \int_0^{+\infty}\frac{u^{a-\frac12}}{(1+u)^{\frac{1+p}{2}}}e^{-ru}\,du=\Gamma\big(a+\tfrac12\big)\,U\big(a+\tfrac12,a+1-\tfrac{p}{2};r\big). \tag{85} $$

As a consequence, the new expression is

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=-\frac{\partial}{\partial a}\Big\{\frac{A\,\Gamma(a+\frac12)}{\Gamma(a)}B\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p}\Big)\times\int_0^{+\infty}r^{a-1}e^{-\lambda_pr}\,\Phi_2^{(p-1)}\Big(\underbrace{\tfrac12,\dots,\tfrac12}_{p-1};\tfrac{p}{2};(\lambda_p-\lambda_1)r,\dots,(\lambda_p-\lambda_{p-1})r\Big)\,U\big(a+\tfrac12,a+1-\tfrac{p}{2};r\big)\,dr\Big\}\Big|_{a=0}. \tag{86} $$

Using the transformation $r\to\lambda_pr$ and Proposition 2, and taking into account the expression of $A$, the new expression becomes

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=-\frac{\partial}{\partial a}\Big\{\frac{B\big(a+\frac12,\frac{p}{2}\big)}{B\big(\frac12,\frac{p}{2}\big)}\,\lambda_p^{-a}\times F_N^{(p)}\Big(a;\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};\tfrac{p}{2},a-\tfrac{p}{2}+1;1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},\lambda_p^{-1}\Big)\Big\}\Big|_{a=0}. \tag{87} $$

Knowing that

$$ \frac{\partial}{\partial a}\frac{B\big(\frac{p}{2},a+\frac12\big)}{B\big(\frac{p}{2},\frac12\big)}\Big|_{a=0}=\psi\Big(\frac12\Big)-\psi\Big(\frac{1+p}{2}\Big),\quad\text{and} \tag{88} $$
$$ F_N^{(p)}\Big(a;\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};\tfrac{p}{2},a-\tfrac{p}{2}+1;1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},\lambda_p^{-1}\Big)\Big|_{a=0}=1, \tag{89} $$

the new expression of $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$ becomes

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)-\frac{\partial}{\partial a}\lambda_p^{-a}F_N^{(p)}\Big(a;\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};\tfrac{p}{2},a-\tfrac{p}{2}+1;1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},\lambda_p^{-1}\Big)\Big|_{a=0}. \tag{90} $$

Applying expression (18) of Definition 2 and relying on Lemma 1, the final result corresponds to the $D$-hypergeometric function of Lauricella $F_D^{(p)}(\cdot)$ given by

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)-\frac{\partial}{\partial a}\lambda_p^{-a}\sum_{m_1,\dots,m_p=0}^{+\infty}\frac{(a)_{\sum_{i=1}^{p}m_i}\big(a+\frac12\big)_{m_p}\prod_{i=1}^{p-1}\big(\frac12\big)_{m_i}}{\big(a+\frac{1+p}{2}\big)_{\sum_{i=1}^{p}m_i}}\prod_{i=1}^{p-1}\frac{\big(1-\frac{\lambda_i}{\lambda_p}\big)^{m_i}}{m_i!}\,\frac{\big(1-\lambda_p^{-1}\big)^{m_p}}{m_p!}\,\Big|_{a=0} \tag{91} $$
$$ =\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)-\frac{\partial}{\partial a}\lambda_p^{-a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)\Big|_{a=0}. \tag{92} $$

The final development of the previous expression is as follows:

$$ \mathbb{E}_{X_1}\big\{\ln[1+X^T\Sigma_2^{-1}X]\big\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac12\Big)+\ln\lambda_p-\frac{\partial}{\partial a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)\Big|_{a=0}.\;\square \tag{93} $$

In this section, we presented the exact expression of $\mathbb{E}_{X_1}\{\ln[1+X^T\Sigma_2^{-1}X]\}$. In addition, the multiple power series $F_D^{(p)}$, which appears here as a special case of $F_N^{(p)}$, enjoys many properties and numerous transformations (see Appendix A) that ease the convergence of the multiple power series. In the next section, we establish the closed-form expression of the KLD based on the expression of the latter expectation.

6. KLD between Two Central MCDs

Plugging (39) and (93) into (5) yields the closed-form expression of the KLD between two central MCDs with pdfs fX1(x|Σ1,p) and fX2(x|Σ2,p). This result is presented in the following theorem.

Theorem 1.

Let $X_1$ and $X_2$ be two random vectors that follow central MCDs with pdfs given, respectively, by $f_{X_1}(x\mid\Sigma_1,p)$ and $f_{X_2}(x\mid\Sigma_2,p)$. The Kullback–Leibler divergence between central MCDs is

$$ KL(X_1\|X_2)=-\frac12\log\prod_{i=1}^{p}\lambda_i+\frac{1+p}{2}\Big[\log\lambda_p-\frac{\partial}{\partial a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)\Big|_{a=0}\Big] \tag{94} $$

where $\lambda_1,\dots,\lambda_p$ are the eigenvalues of the real matrix $\Sigma_1\Sigma_2^{-1}$, and $F_D^{(p)}(\cdot)$ represents the Lauricella $D$-hypergeometric function defined for $p$ variables.

Lauricella [39] gave several transformation formulas (see Appendix A), whose relations (A5)–(A7) and (A9) are applied to $F_D^{(p)}(\cdot)$ in (94). The results of the transformations are as follows:

$$ F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)=\lambda_p^{a+\frac{p}{2}}\prod_{i=1}^{p-1}\lambda_i^{-\frac12}\,F_D^{(p)}\Big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\Big) \tag{95} $$
$$ =\Big(\tfrac{\lambda_1}{\lambda_p}\Big)^{-a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_2}{\lambda_1},1-\tfrac{1}{\lambda_1}\Big) \tag{96} $$
$$ =\lambda_p^{a}\,F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda_1,1-\lambda_2,\dots,1-\lambda_p\Big) \tag{97} $$
$$ =\lambda_p^{a}\prod_{i=1}^{p}\lambda_i^{-\frac12}\,F_D^{(p)}\Big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},1-\tfrac{1}{\lambda_2},\dots,1-\tfrac{1}{\lambda_p}\Big). \tag{98} $$

Considering the above equations, it is straightforward to provide the different expressions of $KL(X_1\|X_2)$ shown in Table 1. The derivative of the Lauricella $D$-hypergeometric series with respect to $a$ goes through the derivation of the following expression:

$$ \frac{\partial}{\partial a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\Big)\Big|_{a=0} \tag{99} $$
$$ =\sum_{m_1,\dots,m_p=0}^{+\infty}\frac{\partial}{\partial a}\Big[\frac{(a)_{\sum_{i=1}^{p}m_i}\big(a+\frac12\big)_{m_p}}{\big(a+\frac{1+p}{2}\big)_{\sum_{i=1}^{p}m_i}}\Big]\Big|_{a=0}\prod_{i=1}^{p-1}\big(\tfrac12\big)_{m_i}\frac{\big(1-\frac{\lambda_i}{\lambda_p}\big)^{m_i}}{m_i!}\,\frac{\big(1-\frac{1}{\lambda_p}\big)^{m_p}}{m_p!}. \tag{100} $$

Table 1.

KLD and KL distance computed when $X_1$ and $X_2$ are two random vectors following central MCDs with pdfs $f_{X_1}(x\mid\Sigma_1,p)$ and $f_{X_2}(x\mid\Sigma_2,p)$.

$$ KL(X_1\|X_2)=-\frac12\log\prod_{i=1}^{p}\lambda_i+\frac{1+p}{2}\Big[\log\lambda_p-\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0}\Big] \tag{106} $$
$$ =-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\lambda_p^{\frac{p}{2}}\prod_{i=1}^{p-1}\lambda_i^{-\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\big)\Big|_{a=0} \tag{107} $$
$$ =-\frac12\log\prod_{i=1}^{p}\lambda_i+\frac{1+p}{2}\Big[\log\lambda_1-\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_2}{\lambda_1},1-\tfrac{1}{\lambda_1}\big)\Big|_{a=0}\Big] \tag{108} $$
$$ =-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda_1,\dots,1-\lambda_p\big)\Big|_{a=0} \tag{109} $$
$$ =-\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\prod_{i=1}^{p}\lambda_i^{-\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\dots,1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0} \tag{110} $$

$$ KL(X_2\|X_1)=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\Big[\log\lambda_p+\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\big)\Big|_{a=0}\Big] \tag{111} $$
$$ =\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\lambda_p^{-\frac{p}{2}}\prod_{i=1}^{p-1}\lambda_i^{\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0} \tag{112} $$
$$ =\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\Big[\log\lambda_1+\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_1}{\lambda_2},1-\lambda_1\big)\Big|_{a=0}\Big] \tag{113} $$
$$ =\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\,\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\dots,1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0} \tag{114} $$
$$ =\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\prod_{i=1}^{p}\lambda_i^{\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda_1,\dots,1-\lambda_p\big)\Big|_{a=0} \tag{115} $$

$$ d_{KL}(X_1,X_2)=\frac{1+p}{2}\Big[\log\lambda_p-\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0}-\lambda_p^{-\frac{p}{2}}\prod_{i=1}^{p-1}\lambda_i^{\frac12}\times\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_1}{\lambda_p},\dots,1-\tfrac{\lambda_{p-1}}{\lambda_p},1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0}\Big] \tag{116} $$
$$ =-\frac{1+p}{2}\Big[\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda_1,\dots,1-\lambda_p\big)\Big|_{a=0}+\prod_{i=1}^{p}\lambda_i^{\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda_1,\dots,1-\lambda_p\big)\Big|_{a=0}\Big] \tag{117} $$
$$ =-\frac{1+p}{2}\Big[\prod_{i=1}^{p}\lambda_i^{-\frac12}\,\frac{\partial}{\partial a}F_D^{(p)}\big(\tfrac{1+p}{2},\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\dots,1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0}+\frac{\partial}{\partial a}F_D^{(p)}\big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{1}{\lambda_1},\dots,1-\tfrac{1}{\lambda_p}\big)\Big|_{a=0}\Big] \tag{118} $$

The derivative with respect to $a$ of the Lauricella $D$-hypergeometric series and its transformations goes through the following expressions, where $\sum m_i$ stands for $\sum_{i=1}^{p}m_i$; (101) and (102) hold for $\sum m_i\ge1$, the term with all $m_i=0$ having zero derivative (see Appendix B for the demonstration):

$$ \frac{\partial}{\partial a}\frac{(a)_{\sum m_i}\big(a+\frac12\big)_{m_p}}{\big(a+\frac{1+p}{2}\big)_{\sum m_i}}\Big|_{a=0}=\frac{\big(\frac12\big)_{m_p}\,(1)_{\sum m_i}}{\big(\frac{1+p}{2}\big)_{\sum m_i}\,\big(\sum m_i\big)}, \tag{101} $$
$$ \frac{\partial}{\partial a}\frac{(a)_{\sum m_i}}{\big(a+\frac{1+p}{2}\big)_{\sum m_i}}\Big|_{a=0}=\frac{(1)_{\sum m_i}}{\big(\frac{1+p}{2}\big)_{\sum m_i}\,\big(\sum m_i\big)}, \tag{102} $$
$$ \frac{\partial}{\partial a}\frac{\big(a+\frac12\big)_{m_p}}{\big(a+\frac{1+p}{2}\big)_{\sum m_i}}\Big|_{a=0}=\frac{\big(\frac12\big)_{m_p}}{\big(\frac{1+p}{2}\big)_{\sum m_i}}\Big[\sum_{k=0}^{m_p-1}\frac{1}{k+\frac12}-\sum_{k=0}^{\sum m_i-1}\frac{1}{k+\frac{1+p}{2}}\Big], \tag{103} $$
$$ \frac{\partial}{\partial a}\frac{1}{\big(a+\frac{1+p}{2}\big)_{\sum m_i}}\Big|_{a=0}=-\frac{1}{\big(\frac{1+p}{2}\big)_{\sum m_i}}\sum_{k=0}^{\sum m_i-1}\frac{1}{k+\frac{1+p}{2}}. \tag{104} $$
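Expressions such as (101) can be verified by finite differences. The sketch below is illustrative, not part of the paper; the test point $\sum m_i=3$, $m_p=1$, $p=2$ is an arbitrary choice, and the central difference of the Pochhammer ratio is compared with the closed form.

```python
from scipy.special import poch

def ratio(a, M, mp, p):
    # the Pochhammer ratio differentiated in Eq. (101), with M = sum_i m_i
    return poch(a, M) * poch(a + 0.5, mp) / poch(a + (1 + p) / 2, M)

def deriv101(M, mp, p):
    # closed form of Eq. (101), valid for M >= 1
    return poch(0.5, mp) * poch(1.0, M) / (poch((1 + p) / 2, M) * M)

# central finite difference at a = 0
h = 1e-6
fd = (ratio(h, 3, 1, 2) - ratio(-h, 3, 1, 2)) / (2 * h)
```

The factor $(a)_{\sum m_i}$ vanishes linearly at $a=0$, which is why the derivative reduces to the stated product of Pochhammer symbols.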

To derive the closed-form expression of $d_{KL}(X_1,X_2)$, we have to evaluate the expression of $KL(X_2\|X_1)$. The latter can easily be deduced from $KL(X_1\|X_2)$ as follows:

$$ KL(X_2\|X_1)=\frac12\log\prod_{i=1}^{p}\lambda_i-\frac{1+p}{2}\Big[\log\lambda_p+\frac{\partial}{\partial a}F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12,a+\tfrac12}_{p};a+\tfrac{1+p}{2};1-\tfrac{\lambda_p}{\lambda_1},\dots,1-\tfrac{\lambda_p}{\lambda_{p-1}},1-\lambda_p\Big)\Big|_{a=0}\Big]. \tag{105} $$

Proceeding in the same way by using Lauricella transformations, different expressions of KL(X2||X1) are provided in Table 1. Finally, given the above results, it is straightforward to compute the symmetric KL similarity measure dKL(X1,X2) between X1 and X2. Technically, any combination of the KL(X1||X2) and KL(X2||X1) expressions is possible to compute dKL(X1,X2). However, we choose the same convergence region for the two divergences for the calculation of the distance. Some expressions of dKL(X1,X2) are given in Table 1.

7. Particular Cases: Univariate and Bivariate Cauchy Distribution

7.1. Case of p=1

This case corresponds to the univariate Cauchy distribution. The KLD is given by

KL(X1||X2)=12logλa2F1(a,12;a+1;1λ)|a=0 (119)

where ${}_2F_1$ is the Gauss hypergeometric function. The expression of the derivative of ${}_2F_1$ is given as follows (see Appendix C.1 for details of the computation):

$$ \frac{\partial}{\partial a}{}_2F_1\big(a,\tfrac12;a+1;1-\lambda\big)\Big|_{a=0}=\sum_{n=1}^{\infty}\frac{\big(\tfrac12\big)_n}{n}\frac{(1-\lambda)^n}{n!}=-2\ln\frac{1+\lambda^{1/2}}{2}. \tag{120} $$

Accordingly, the KLD is then expressed as

$$ KL(X_1\|X_2)=\log\frac{\big(1+\lambda^{\frac12}\big)^2}{4\lambda^{\frac12}} \tag{121} $$
$$ =\log\frac{\big(1+\lambda^{-\frac12}\big)^2}{4\lambda^{-\frac12}}=KL(X_2\|X_1). \tag{122} $$

We conclude that the KLD between univariate Cauchy densities is always symmetric. Interestingly, this is consistent with the result presented in [31].
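The univariate closed form (121) is compact enough to test directly. The helper below is an illustrative sketch, not code from the paper; it also exhibits the symmetry of Equation (122) under $\lambda\to1/\lambda$.

```python
import math

def kld_univariate_cauchy(lam):
    # Eq. (121): KLD between central univariate Cauchy laws,
    # lam = Sigma_1 / Sigma_2 (ratio of the squared scale parameters)
    s = math.sqrt(lam)
    return math.log((1 + s) ** 2 / (4 * s))
```

With scale parameters $\gamma_1,\gamma_2$ and $\lambda=\gamma_1^2/\gamma_2^2$, this coincides with the known form $\log\big((\gamma_1+\gamma_2)^2/(4\gamma_1\gamma_2)\big)$ of [31].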

7.2. Case of p=2

This case corresponds to the bivariate Cauchy distribution. The KLD is then given by

$$ KL(X_1\|X_2)=-\frac12\log\lambda_1\lambda_2-\frac32\frac{\partial}{\partial a}F_1\big(a,\tfrac12,\tfrac12;a+\tfrac32;1-\lambda_1,1-\lambda_2\big)\Big|_{a=0} \tag{123} $$

where $F_1$ is the Appell hypergeometric function (see Appendix A). The expression of the derivative of $F_1$ can be further developed:

$$ \frac{\partial}{\partial a}F_1\big(a,\tfrac12,\tfrac12;a+\tfrac32;1-\lambda_1,1-\lambda_2\big)\Big|_{a=0}=\sum_{\substack{n,m=0\\ n+m\ge1}}^{+\infty}\frac{(1)_{m+n}\big(\tfrac12\big)_n\big(\tfrac12\big)_m}{\big(\tfrac32\big)_{m+n}}\,\frac{1}{m+n}\,\frac{(1-\lambda_1)^n}{n!}\frac{(1-\lambda_2)^m}{m!}. \tag{124} $$

In addition, when the eigenvalue λi for i=1,2 takes some particular values, the expression of the KLD becomes very simple. In the following, we show some cases:

  • $(\lambda_1=1,\lambda_2\ne1)$ or $(\lambda_2=1,\lambda_1\ne1)$

For this particular case, we have

$$ \frac{\partial}{\partial a}F_1\big(a,\tfrac12,\tfrac12;a+\tfrac32;1-\lambda_i,0\big)\Big|_{a=0}=\frac{\partial}{\partial a}{}_2F_1\big(a,\tfrac12;a+\tfrac32;1-\lambda_i\big)\Big|_{a=0} \tag{125} $$
$$ =-\ln\lambda_i+\frac{1}{\sqrt{1-\lambda_i}}\ln\frac{1-\sqrt{1-\lambda_i}}{1+\sqrt{1-\lambda_i}}+2. \tag{126} $$

The demonstration of the derivation is shown in Appendix C.2. The KLD then becomes equal to

$$ KL(X_1\|X_2)=\ln\lambda_i-\frac32\,\frac{1}{\sqrt{1-\lambda_i}}\ln\frac{1-\sqrt{1-\lambda_i}}{1+\sqrt{1-\lambda_i}}-3. \tag{127} $$
  • $\lambda_1=\lambda_2=\lambda$

For this particular case, we have

$$ \frac{\partial}{\partial a}F_1\big(a,\tfrac12,\tfrac12;a+\tfrac32;1-\lambda,1-\lambda\big)\Big|_{a=0}=\frac{\partial}{\partial a}{}_2F_1\big(a,1;a+\tfrac32;1-\lambda\big)\Big|_{a=0} \tag{128} $$
$$ =-2\sqrt{\frac{\lambda}{\lambda-1}}\ln\big(\sqrt{\lambda}+\sqrt{\lambda-1}\big)+2. \tag{129} $$

For more details about the demonstration, see Appendix C.3. The KLD becomes equal to

$$ KL(X_1\|X_2)=-\ln\lambda+3\sqrt{\frac{\lambda}{\lambda-1}}\ln\big(\sqrt{\lambda}+\sqrt{\lambda-1}\big)-3. \tag{130} $$

It is easy to deduce that

$$ KL(X_2\|X_1)=\ln\lambda+3\sqrt{\frac{\lambda^{-1}}{\lambda^{-1}-1}}\ln\big(\sqrt{\lambda^{-1}}+\sqrt{\lambda^{-1}-1}\big)-3, \tag{131} $$

obtained by the formal substitution $\lambda\to\lambda^{-1}$ in (130); for $\lambda>1$, the right-hand side evaluates to the real expression $\ln\lambda+\frac{3}{\sqrt{\lambda-1}}\arctan\sqrt{\lambda-1}-3$. This result can be demonstrated using the same process as for $KL(X_1\|X_2)$. It is worth noticing that $KL(X_1\|X_2)\ne KL(X_2\|X_1)$, which leads us to conclude that the property of symmetry observed in the univariate case is no longer valid in the multivariate case. Nielsen et al. [32] reached the same conclusion.
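The two bivariate divergences can be evaluated numerically. The sketch below is illustrative, not code from the paper; the arctan evaluation of the reverse direction for λ > 1 is the equivalent real form assumed here. It makes the asymmetry of the bivariate case visible.

```python
import math

def kl12(lam):
    # Eq. (130): KL(X1||X2) for p = 2 with lambda_1 = lambda_2 = lam > 1
    r = math.sqrt(lam - 1)
    return -math.log(lam) + 3 * math.sqrt(lam) / r * math.log(math.sqrt(lam) + r) - 3

def kl21(lam):
    # Eq. (131) evaluated through its real form for lam > 1 (arctan branch)
    r = math.sqrt(lam - 1)
    return math.log(lam) + 3 / r * math.atan(r) - 3
```

Both quantities tend to 0 as $\lambda\to1$, as they should for identical distributions, yet they differ for $\lambda\ne1$.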

8. Implementation and Comparison with Monte Carlo Technique

In this section, we show how we practically compute the numerical values of the KLD, especially when we have several equivalent expressions which differ in the region of convergence. To reach this goal, the eigenvalues of Σ1Σ21 are rearranged in a descending order λp>λp1>>λ1>0. This operation is justified by Equation (53) where it can be seen that the permutation of the eigenvalues does not affect the expectation result. Three cases can be identified from the expressions of KLD.
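The sorted eigenvalues of $\Sigma_1\Sigma_2^{-1}$ can be obtained without forming the (generally non-symmetric) product explicitly. The helper below is an illustrative sketch, not code from the paper; it solves the generalized symmetric eigenproblem $\Sigma_1v=\lambda\Sigma_2v$, which has the same spectrum.

```python
import numpy as np
from scipy.linalg import eigh

def sorted_eigenvalues(Sigma1, Sigma2):
    # eigenvalues of Sigma1 Sigma2^{-1}, sorted in descending order; they are
    # real and positive since both scale matrices are symmetric positive definite
    lam = eigh(Sigma1, Sigma2, eigvals_only=True)   # solves Sigma1 v = lam Sigma2 v
    return np.sort(lam)[::-1]
```

As noted above, their product equals $|\Sigma_1|/|\Sigma_2|$, which gives a cheap consistency check.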

8.1. Case 1>λp>λp1>>λ1>0

The expression of KL(X1||X2) is given by Equation (109) and KL(X2||X1) is given by (115).

8.2. Case λp>λp1>>λ1>1

KL(X1||X2) is given by the Equation (110) and KL(X2||X1) is given by (114).

8.3. Case λp>1 and λ1<1

This case guarantees that $0\le1-\lambda_j/\lambda_p<1$, $j=1,\dots,p-1$, and $0\le1-1/\lambda_p<1$. The expression of $KL(X_1\|X_2)$ is given by Equation (106) and $KL(X_2\|X_1)$ is given by (112) or (113). To evaluate the quality of the numerical approximation of the derivative of the Lauricella series, we consider a case where an exact and simple expression of $\frac{\partial}{\partial a}F_D^{(p)}(\cdot)\big|_{a=0}$ is possible. The case $\lambda_1=\cdots=\lambda_p=\lambda$ makes the Lauricella series equivalent to the Gauss hypergeometric function:

$$ F_D^{(p)}\Big(a,\underbrace{\tfrac12,\dots,\tfrac12}_{p};a+\tfrac{1+p}{2};1-\lambda,\dots,1-\lambda\Big)={}_2F_1\Big(a,\tfrac{p}{2};a+\tfrac{1+p}{2};1-\lambda\Big). \tag{132} $$

This relation allows us to compare the computational accuracy of the approximation of the Lauricella series with respect to the Gauss function. In addition, to compute the numerical value, the indices of the series evolve from 0 to N instead of infinity; N is chosen to ensure a good approximation of the Lauricella series. Table 2 shows the computation of the derivative of $F_D^{(p)}(\cdot)$ and ${}_2F_1(\cdot)$, along with the absolute error $|\epsilon|$, for $p=2$ and $N=\{20,30,40\}$. The exact expression of $\frac{\partial}{\partial a}{}_2F_1(\cdot)\big|_{a=0}$ when $p=2$ is given by Equation (129). We can draw the following conclusions. First, the error is reasonably low and decreases as N increases. Second, the error increases for values of $1-\lambda$ close to 1, as expected, since this corresponds to the limit of the convergence region.

Table 2.

Computation of $A=\frac{\partial}{\partial a}{}_2F_1(\cdot)\big|_{a=0}$ and $B=\frac{\partial}{\partial a}F_D^{(p)}(\cdot)\big|_{a=0}$ when $p=2$ and $\lambda_1=\cdots=\lambda_p=\lambda$.

1−λ  | A      | B (N=20) | |ϵ| (N=20)     | B (N=30) | |ϵ| (N=30)     | B (N=40) | |ϵ| (N=40)
0.1  | 0.0694 | 0.0694   | 9.1309 × 10⁻¹⁶ | 0.0694   | 9.1309 × 10⁻¹⁶ | 0.0694   | 9.1309 × 10⁻¹⁶
0.3  | 0.2291 | 0.2291   | 3.7747 × 10⁻¹⁴ | 0.2291   | 1.1102 × 10⁻¹⁶ | 0.2291   | 1.1102 × 10⁻¹⁶
0.5  | 0.4292 | 0.4292   | 2.6707 × 10⁻⁹  | 0.4292   | 1.2458 × 10⁻¹² | 0.4292   | 6.6613 × 10⁻¹⁶
0.7  | 0.7022 | 0.7022   | 5.9260 × 10⁻⁶  | 0.7022   | 8.2678 × 10⁻⁸  | 0.7022   | 1.3911 × 10⁻⁹
0.9  | 1.1673 | 1.1634   | 0.0038         | 1.1665   | 7.2760 × 10⁻⁴  | 1.1671   | 1.6081 × 10⁻⁴
0.99 | 1.7043 | 1.5801   | 0.1241         | 1.6267   | 0.0776         | 1.6514   | 0.0529

In the following, we compare the Monte Carlo sampling approximation of the KLD with the numerical value of its closed-form expression. The Monte Carlo method draws a large number of samples and replaces the integral with a sample average. Here, for each sample size, the experiment is repeated 2000 times. The elements of $\Sigma_1$ and $\Sigma_2$ are given in Table 3. Figure 1 depicts the absolute value of the bias, the mean square error (MSE) and the box plot of the difference between the approximated symmetric KL value and its theoretical value, plotted versus the sample size. As the sample size increases, the bias and the MSE decrease. Accordingly, the approximated value comes very close to the theoretical KLD when the number of samples is very large. The computation times of the proposed approximation and of the classical Monte Carlo sampling method were recorded using Matlab on a 1.6 GHz processor with 16 GB of memory. For the proposed numerical approximation, the computation time is about 1.56 s with $N=20$; $N$ can be increased to further improve the accuracy, at the cost of a longer computation time. For the Monte Carlo sampling method, the mean times at sample sizes of {65,536; 131,072; 262,144} are {2.71; 5.46; 10.78} seconds, respectively.

Table 3.

Parameters Σ1 and Σ2 used to compute KLD for central MCD.

$\Sigma$      $\Sigma_{11}$, $\Sigma_{22}$, $\Sigma_{33}$, $\Sigma_{12}$, $\Sigma_{13}$, $\Sigma_{23}$
$\Sigma_1$    1, 1, 1, 0.6, 0.2, 0.3
$\Sigma_2$    1, 1, 1, 0.3, 0.1, 0.4
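For illustration, the Monte Carlo baseline can be sketched in a few lines of NumPy. This is our own sketch, not the paper's attached Matlab code, and `sample_mcd` and `log_ratio` are names of our choosing. It exploits the fact that a central MCD is a multivariate t-distribution with one degree of freedom, and that the normalizing constants of the two densities cancel in the log ratio except for the $|\Sigma|$ terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mcd(Sigma, n):
    """Draw n samples from a central MCD, i.e., a multivariate
    t-distribution with one degree of freedom."""
    p = Sigma.shape[0]
    z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    chi = np.abs(rng.standard_normal(n))  # sqrt of a chi-square with 1 dof
    return z / chi[:, None]

def log_ratio(x, S1, S2):
    """log f1(x) - log f2(x) for two central MCDs of the same dimension p;
    the Gamma and pi normalizing factors cancel in the ratio."""
    p = S1.shape[0]
    q1 = np.einsum('ij,jk,ik->i', x, np.linalg.inv(S1), x)
    q2 = np.einsum('ij,jk,ik->i', x, np.linalg.inv(S2), x)
    return (0.5 * (np.log(np.linalg.det(S2)) - np.log(np.linalg.det(S1)))
            + 0.5 * (1 + p) * (np.log1p(q2) - np.log1p(q1)))

# Table 3 parameters
S1 = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]])
S2 = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.4], [0.1, 0.4, 1.0]])

x = sample_mcd(S1, 100_000)
kl_est = log_ratio(x, S1, S2).mean()  # Monte Carlo estimate of KL(X1||X2)
```

Averaging $\log f_1-\log f_2$ over samples drawn from $f_1$ estimates $KL(X_1\|X_2)$; the symmetric KL adds the estimate obtained by swapping the roles of $\Sigma_1$ and $\Sigma_2$.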

Figure 1.


Top row: bias (left) and MSE (right) of the difference between the approximated and theoretical symmetric KL for MCD. Bottom row: box plot of the error; the mean error is the bias. Outliers are larger than $Q_3+1.5\times IQR$ or smaller than $Q_1-1.5\times IQR$, where $Q_1$, $Q_3$ and $IQR$ are the 25th percentile, the 75th percentile and the interquartile range, respectively.

To further encourage the dissemination of these results, we provide code as an attachment to this paper. It is written in Matlab for the specific case $p=3$, and can easily be extended to any value of $p$ thanks to the general closed-form expression established in this paper.

[Matlab code listings, included as images in the original article.]

9. Conclusions

Since MCDs have various applications in signal and image processing, the KLD between central MCDs addresses an important problem for future work in statistics, machine learning and other related fields of computer science. In this paper, we derived a closed-form expression for the KLD and the distance between two central MCDs. The similarity measure can be expressed as a function of the Lauricella $D$-hypergeometric series $F_D^{(p)}$. We also proposed a simple scheme to compute the Lauricella series easily and to bypass its convergence constraints. Codes and examples for the numerical calculations are presented and explained in detail. Finally, a comparison shows that the Monte Carlo sampling method gives approximations close to the theoretical KLD value. As a final note, it is also possible to extend these results on the KLD to the multivariate t-distribution, since the MCD is a particular case of that distribution.

Acknowledgments

The authors gratefully acknowledge the PHENOTIC platform node of the French national infrastructure on plant phenotyping ANR PHENOME 11-INBS-0012. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions.

Appendix A. Lauricella Function

In 1893, G. Lauricella [39] investigated the properties of four series $F_A^{(n)}$, $F_B^{(n)}$, $F_C^{(n)}$, $F_D^{(n)}$ of $n$ variables. When $n=2$, these functions coincide with Appell's $F_2$, $F_3$, $F_4$, $F_1$, respectively. When $n=1$, they all coincide with Gauss' ${}_2F_1$. We present here only the Lauricella series $F_D^{(n)}$, given as follows

$$F_D^{(n)}(a,b_1,\dots,b_n;c;x_1,\dots,x_n)=\sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{(a)_{m_1+\cdots+m_n}(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{m_1+\cdots+m_n}}\,\frac{x_1^{m_1}}{m_1!}\cdots\frac{x_n^{m_n}}{m_n!} \qquad (A1)$$

where $|x_1|,\dots,|x_n|<1$. The Pochhammer symbol $(q)_i$ denotes the $i$-th rising factorial of $q$, i.e.,

$$(q)_i=q(q+1)\cdots(q+i-1)=\frac{\Gamma(q+i)}{\Gamma(q)}\quad\text{if } i=1,2,\dots \qquad (A2)$$

If $i=0$, $(q)_i=1$. The function $F_D^{(n)}(\cdot)$ can be expressed in terms of multiple integrals as follows [42]

$$F_D^{(n)}(a,b_1,\dots,b_n;c;x_1,\dots,x_n)=\frac{\Gamma(c)}{\Gamma\big(c-\sum_{i=1}^{n}b_i\big)\prod_{i=1}^{n}\Gamma(b_i)}\int_{\Omega}\prod_{i=1}^{n}u_i^{b_i-1}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}\Big(1-\sum_{i=1}^{n}x_iu_i\Big)^{-a}\prod_{i=1}^{n}du_i \qquad (A3)$$

where $\Omega=\{(u_1,u_2,\dots,u_n):\ 0\le u_i\le 1,\ i=1,\dots,n,\ \text{and}\ 0\le u_1+u_2+\cdots+u_n\le 1\}$, $\operatorname{Re}(b_i)>0$ for $i=1,\dots,n$ and $\operatorname{Re}(c-b_1-\cdots-b_n)>0$. Lauricella's $F_D$ can be written as a one-dimensional Euler-type integral for any number $n$ of variables. The integral form of $F_D^{(n)}(\cdot)$ is given as follows when $\operatorname{Re}(a)>0$ and $\operatorname{Re}(c-a)>0$

$$F_D^{(n)}(a,b_1,\dots,b_n;c;x_1,\dots,x_n)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_{0}^{1}u^{a-1}(1-u)^{c-a-1}(1-ux_1)^{-b_1}\cdots(1-ux_n)^{-b_n}\,du. \qquad (A4)$$
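As a sanity check (ours, not part of the paper), the Euler integral (A4) can be compared against the truncated series for $n=2$ with a plain trapezoidal rule; the parameter values below are arbitrary but satisfy $\operatorname{Re}(a)>0$ and $\operatorname{Re}(c-a)>0$.

```python
import math

def poch(q, k):
    """Pochhammer symbol (q)_k."""
    r = 1.0
    for i in range(k):
        r *= q + i
    return r

def fd_series(a, bs, c, xs, N=60):
    """Truncated two-variable Lauricella F_D series."""
    total = 0.0
    for m1 in range(N + 1):
        for m2 in range(N + 1):
            s = m1 + m2
            total += (poch(a, s) / poch(c, s)
                      * poch(bs[0], m1) * xs[0] ** m1 / math.factorial(m1)
                      * poch(bs[1], m2) * xs[1] ** m2 / math.factorial(m2))
    return total

def fd_euler(a, bs, c, xs, K=200_000):
    """Trapezoidal quadrature of the Euler integral (A4) for n = 2."""
    pref = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
    h = 1.0 / K
    total = 0.0
    for i in range(1, K):  # the integrand vanishes at u = 0 and u = 1 here
        u = i * h
        total += (u ** (a - 1) * (1 - u) ** (c - a - 1)
                  * (1 - u * xs[0]) ** (-bs[0]) * (1 - u * xs[1]) ** (-bs[1]))
    return pref * h * total

a, bs, c, xs = 1.5, (0.5, 0.5), 3.0, (0.3, 0.4)
series_val = fd_series(a, bs, c, xs)
euler_val = fd_euler(a, bs, c, xs)
```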

Lauricella gave several transformation formulas, of which we use the following relationships. More details can be found in Exton's book [43] on hypergeometric equations.

$$F_D^{(n)}(a,b_1,\dots,b_n;c;x_1,\dots,x_n)$$
$$=\prod_{i=1}^{n}(1-x_i)^{-b_i}\,F_D^{(n)}\Big(c-a,b_1,\dots,b_n;c;\frac{x_1}{x_1-1},\dots,\frac{x_n}{x_n-1}\Big) \qquad (A5)$$
$$=(1-x_1)^{-a}\,F_D^{(n)}\Big(a,c-\sum_{i=1}^{n}b_i,b_2,\dots,b_n;c;\frac{x_1}{x_1-1},\frac{x_1-x_2}{x_1-1},\dots,\frac{x_1-x_n}{x_1-1}\Big) \qquad (A6)$$
$$=(1-x_n)^{-a}\,F_D^{(n)}\Big(a,b_1,\dots,b_{n-1},c-\sum_{i=1}^{n}b_i;c;\frac{x_n-x_1}{x_n-1},\frac{x_n-x_2}{x_n-1},\dots,\frac{x_n-x_{n-1}}{x_n-1},\frac{x_n}{x_n-1}\Big) \qquad (A7)$$
$$=(1-x_1)^{c-a}\prod_{i=1}^{n}(1-x_i)^{-b_i}\,F_D^{(n)}\Big(c-a,c-\sum_{i=1}^{n}b_i,b_2,\dots,b_n;c;x_1,\frac{x_2-x_1}{x_2-1},\dots,\frac{x_n-x_1}{x_n-1}\Big) \qquad (A8)$$
$$=(1-x_n)^{c-a}\prod_{i=1}^{n}(1-x_i)^{-b_i}\,F_D^{(n)}\Big(c-a,b_1,\dots,b_{n-1},c-\sum_{i=1}^{n}b_i;c;\frac{x_1-x_n}{x_1-1},\dots,\frac{x_{n-1}-x_n}{x_{n-1}-1},x_n\Big). \qquad (A9)$$
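The first transformation (A5) is easy to verify numerically for $n=2$, where $F_D$ reduces to Appell's $F_1$. The sketch below, with arbitrary admissible parameters, is ours and is not part of the paper.

```python
import math

def poch(q, k):
    """Pochhammer symbol (q)_k."""
    r = 1.0
    for i in range(k):
        r *= q + i
    return r

def fd2(a, b1, b2, c, x1, x2, N=80):
    """Truncated two-variable Lauricella F_D (Appell F1) series."""
    total = 0.0
    for m1 in range(N + 1):
        for m2 in range(N + 1):
            s = m1 + m2
            total += (poch(a, s) / poch(c, s)
                      * poch(b1, m1) * x1 ** m1 / math.factorial(m1)
                      * poch(b2, m2) * x2 ** m2 / math.factorial(m2))
    return total

# arbitrary admissible parameters
a, b1, b2, c = 0.7, 0.5, 1.2, 2.5
x1, x2 = 0.4, 0.2

lhs = fd2(a, b1, b2, c, x1, x2)
rhs = ((1 - x1) ** (-b1) * (1 - x2) ** (-b2)
       * fd2(c - a, b1, b2, c, x1 / (x1 - 1), x2 / (x2 - 1)))
```

Note that the transformed arguments $x_i/(x_i-1)$ are negative here, so both truncated series lie inside the convergence region.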

Appendix B. Demonstration of Derivative

Appendix B.1. Demonstration

We use the notation $\alpha=\sum_{i=1}^{p}m_i$ to lighten the equations. Knowing that $\frac{\partial}{\partial c}(c)_k=(c)_k\big[\psi(c+k)-\psi(c)\big]$, $\psi(c+k)-\psi(c)=\sum_{\ell=0}^{k-1}\frac{1}{c+\ell}$ and $(c)_k=\prod_{i=0}^{k-1}(c+i)$, we can state that

$$\frac{\partial}{\partial a}\frac{(a)_\alpha}{(a+1+\frac{p}{2})_\alpha}=\frac{(a)_\alpha\big[\psi(a+\alpha)-\psi(a)-\psi(a+1+\frac{p}{2}+\alpha)+\psi(a+1+\frac{p}{2})\big]}{(a+1+\frac{p}{2})_\alpha}=\frac{\prod_{k=0}^{\alpha-1}(a+k)\sum_{k=0}^{\alpha-1}\Big[\frac{1}{a+k}-\frac{1}{a+1+\frac{p}{2}+k}\Big]}{(a+1+\frac{p}{2})_\alpha}. \qquad (A10)$$

Using the fact that

$$\prod_{k=0}^{\alpha-1}(a+k)\sum_{k=0}^{\alpha-1}\frac{1}{a+k}=\prod_{k=1}^{\alpha-1}(a+k)+\prod_{\substack{k=0\\k\neq 1}}^{\alpha-1}(a+k)+\cdots+\prod_{k=0}^{\alpha-2}(a+k) \qquad (A11)$$

we can state that

$$\frac{\partial}{\partial a}\frac{(a)_\alpha}{(a+1+\frac{p}{2})_\alpha}\bigg|_{a=0}=\frac{(\alpha-1)!}{(1+\frac{p}{2})_\alpha}=\frac{(1)_\alpha}{(1+\frac{p}{2})_\alpha}\frac{1}{\alpha}. \qquad (A12)$$
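Identity (A12), together with (A20) further below, can be verified numerically with a central finite difference. This is a minimal Python sketch of ours, with helper names of our choosing.

```python
import math

def poch(q, k):
    """Pochhammer symbol (q)_k."""
    r = 1.0
    for i in range(k):
        r *= q + i
    return r

def central_diff(f, a=0.0, h=1e-6):
    """Central finite-difference approximation of f'(a)."""
    return (f(a + h) - f(a - h)) / (2 * h)

p, alpha = 3, 5
c0 = 1 + p / 2  # the lower parameter at a = 0

# (A12): derivative of (a)_alpha / (a+1+p/2)_alpha at a = 0
lhs_a12 = central_diff(lambda a: poch(a, alpha) / poch(a + c0, alpha))
rhs_a12 = math.factorial(alpha - 1) / poch(c0, alpha)

# (A20): derivative of 1 / (a+1+p/2)_alpha at a = 0
lhs_a20 = central_diff(lambda a: 1.0 / poch(a + c0, alpha))
rhs_a20 = -sum(1.0 / (c0 + k) for k in range(alpha)) / poch(c0, alpha)
```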

Appendix B.2. Demonstration

$$\frac{\partial}{\partial a}\frac{(a)_\alpha(a+\frac12)_{m_p}}{(a+1+\frac{p}{2})_\alpha}=\frac{(a+\frac12)_{m_p}(a)_\alpha\big[\psi(a+\alpha)-\psi(a)+\psi(a+\frac12+m_p)-\psi(a+\frac12)\big]}{(a+1+\frac{p}{2})_\alpha}-\frac{(a)_\alpha(a+\frac12)_{m_p}\big[\psi(a+1+\frac{p}{2}+\alpha)-\psi(a+1+\frac{p}{2})\big]}{(a+1+\frac{p}{2})_\alpha} \qquad (A13)$$
$$=\frac{(a+\frac12)_{m_p}\prod_{k=0}^{\alpha-1}(a+k)\Big\{\sum_{k=0}^{\alpha-1}\Big[\frac{1}{a+k}-\frac{1}{a+1+\frac{p}{2}+k}\Big]+\sum_{k=0}^{m_p-1}\frac{1}{a+\frac12+k}\Big\}}{(a+1+\frac{p}{2})_\alpha}. \qquad (A14)$$

By developing the previous expression we can state that

$$\frac{\partial}{\partial a}\frac{(a)_\alpha(a+\frac12)_{m_p}}{(a+1+\frac{p}{2})_\alpha}\bigg|_{a=0}=\frac{(\frac12)_{m_p}(\alpha-1)!}{(1+\frac{p}{2})_\alpha}=(\tfrac12)_{m_p}\frac{(1)_\alpha}{(1+\frac{p}{2})_\alpha}\frac{1}{\alpha}. \qquad (A15)$$

Appendix B.3. Demonstration

$$\frac{\partial}{\partial a}\frac{(a+\frac12)_{m_p}}{(a+1+\frac{p}{2})_\alpha}=\frac{(a+\frac12)_{m_p}}{(a+1+\frac{p}{2})_\alpha}\Bigg[\sum_{k=0}^{m_p-1}\frac{1}{a+\frac12+k}-\sum_{k=0}^{\alpha-1}\frac{1}{a+1+\frac{p}{2}+k}\Bigg]. \qquad (A16)$$

As a consequence,

$$\frac{\partial}{\partial a}\frac{(a+\frac12)_{m_p}}{(a+1+\frac{p}{2})_\alpha}\bigg|_{a=0}=\frac{(\frac12)_{m_p}}{(1+\frac{p}{2})_\alpha}\Bigg[\sum_{k=0}^{m_p-1}\frac{1}{\frac12+k}-\sum_{k=0}^{\alpha-1}\frac{1}{1+\frac{p}{2}+k}\Bigg]. \qquad (A17)$$

Appendix B.4. Demonstration

$$\frac{\partial}{\partial a}\frac{1}{(a+1+\frac{p}{2})_\alpha}=-\frac{\psi(a+1+\frac{p}{2}+\alpha)-\psi(a+1+\frac{p}{2})}{(a+1+\frac{p}{2})_\alpha} \qquad (A18)$$
$$=-\frac{1}{(a+1+\frac{p}{2})_\alpha}\sum_{k=0}^{\alpha-1}\frac{1}{a+1+\frac{p}{2}+k}. \qquad (A19)$$

Finally,

$$\frac{\partial}{\partial a}\frac{1}{(a+1+\frac{p}{2})_\alpha}\bigg|_{a=0}=-\frac{1}{(1+\frac{p}{2})_\alpha}\sum_{k=0}^{\alpha-1}\frac{1}{1+\frac{p}{2}+k}. \qquad (A20)$$

Appendix C. Computations of Some Equations

Appendix C.1. Computation

Let f be a function of λ defined as follows:

$$f(\lambda)=\sum_{n=1}^{\infty}\frac{(\frac12)_n}{n}\frac{(1-\lambda)^n}{n!}. \qquad (A21)$$

Multiplying the derivative of $f$ with respect to $\lambda$ by $(1-\lambda)$ gives

$$(1-\lambda)\frac{\partial}{\partial\lambda}f(\lambda)=-\sum_{n=1}^{\infty}(\tfrac12)_n\frac{(1-\lambda)^n}{n!}=1-\lambda^{-1/2}. \qquad (A22)$$

As a consequence,

$$\frac{\partial}{\partial\lambda}f(\lambda)=\frac{1-\lambda^{-1/2}}{1-\lambda}=\frac{-\lambda^{-1/2}}{1+\lambda^{1/2}}. \qquad (A23)$$

Finally,

$$f(\lambda)=-2\ln\Big(\frac{1+\lambda^{1/2}}{2}\Big). \qquad (A24)$$

Appendix C.2. Computation

$$\frac{\partial}{\partial a}\,{}_2F_1\Big(a,\frac12;a+\frac32;1-\lambda_i\Big)\bigg|_{a=0}=\sum_{n=1}^{\infty}\frac{(\frac12)_n(1)_n}{(\frac32)_n\,n}\frac{(1-\lambda_i)^n}{n!}=f(\lambda_i) \qquad (A25)$$

where $f$ is a function of $\lambda_i$. Multiplying the derivative of $f$ with respect to $\lambda_i$ by $(1-\lambda_i)$ gives

$$(1-\lambda_i)\frac{\partial}{\partial\lambda_i}f(\lambda_i)=-\sum_{n=1}^{\infty}\frac{(\frac12)_n(1)_n}{(\frac32)_n}\frac{(1-\lambda_i)^n}{n!} \qquad (A26)$$
$$=-{}_2F_1\Big(\frac12,1;\frac32;1-\lambda_i\Big)+1. \qquad (A27)$$

Knowing that

$${}_2F_1\Big(\frac12,1;\frac32;1-\lambda_i\Big)=\frac{\arctan\big(\sqrt{\lambda_i-1}\big)}{\sqrt{\lambda_i-1}} \qquad (A28)$$
$$=\frac{1}{2\sqrt{1-\lambda_i}}\ln\frac{1+\sqrt{1-\lambda_i}}{1-\sqrt{1-\lambda_i}} \qquad (A29)$$

we can deduce an expression of

$$\frac{\partial}{\partial\lambda_i}f(\lambda_i)=\frac{\arctan\big(\sqrt{\lambda_i-1}\big)}{(\lambda_i-1)^{3/2}}+\frac{1}{1-\lambda_i}. \qquad (A30)$$

Accordingly,

$$f(\lambda_i)=-\ln\lambda_i-\frac{2\arctan\big(\sqrt{\lambda_i-1}\big)}{\sqrt{\lambda_i-1}}+2 \qquad (A31)$$
$$=-\ln\lambda_i+\frac{1}{\sqrt{1-\lambda_i}}\ln\frac{1-\sqrt{1-\lambda_i}}{1+\sqrt{1-\lambda_i}}+2. \qquad (A32)$$
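The closed form (A31) can be checked numerically against the series in (A25). Since $(\frac12)_n/(\frac32)_n=1/(2n+1)$ and $(1)_n/n!=1$, each term of that series reduces to $(1-\lambda)^n/\big(n(2n+1)\big)$, which gives a compact check (our sketch, not part of the paper).

```python
import math

def f_series(lam, N=5000):
    """Series in (A25): each term reduces to (1-lam)^n / (n(2n+1))."""
    z = 1.0 - lam
    return sum(z ** n / (n * (2 * n + 1)) for n in range(1, N + 1))

def f_closed(lam):
    """Closed form (A31), valid for lam > 1."""
    u = math.sqrt(lam - 1.0)
    return -math.log(lam) - 2.0 * math.atan(u) / u + 2.0
```

At $\lambda=2$ the series is alternating, so the truncation error is bounded by the first omitted term.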

Appendix C.3. Computation

$$\frac{\partial}{\partial a}\,{}_2F_1\Big(a,1;a+\frac32;1-\lambda\Big)\bigg|_{a=0}=\sum_{n=1}^{\infty}\frac{(1)_n(1)_n}{(\frac32)_n\,n}\frac{(1-\lambda)^n}{n!}=f(\lambda) \qquad (A33)$$

where $f$ is a function of $\lambda$. Multiplying the derivative of $f$ with respect to $\lambda$ by $(1-\lambda)$ gives

$$(1-\lambda)\frac{\partial}{\partial\lambda}f(\lambda)=-\sum_{n=1}^{\infty}\frac{(1)_n(1)_n}{(\frac32)_n}\frac{(1-\lambda)^n}{n!} \qquad (A34)$$
$$=-{}_2F_1\Big(1,1;\frac32;1-\lambda\Big)+1. \qquad (A35)$$

Knowing that

$${}_2F_1\Big(1,1;\frac32;1-\lambda\Big)=\frac{1}{\sqrt{\lambda}}\frac{\arcsin\big(\sqrt{1-\lambda}\big)}{\sqrt{1-\lambda}} \qquad (A36)$$

we can state that

$$\frac{\partial}{\partial\lambda}f(\lambda)=-\frac{1}{\sqrt{\lambda}}\frac{\arcsin\big(\sqrt{1-\lambda}\big)}{(1-\lambda)^{3/2}}+\frac{1}{1-\lambda}. \qquad (A37)$$

As a consequence,

$$f(\lambda)=-\frac{2\sqrt{\lambda}\,\arcsin\big(\sqrt{1-\lambda}\big)}{\sqrt{1-\lambda}}+2 \qquad (A38)$$
$$=-\frac{2\sqrt{\lambda}}{\sqrt{\lambda-1}}\ln\big(\sqrt{\lambda}+\sqrt{\lambda-1}\big)+2. \qquad (A39)$$
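Similarly, the closed form (A38) can be checked against the series in (A33) for $0<\lambda<1$; in this sketch (ours, not part of the paper) the running ratio tracks $(1)_n/(\frac32)_n$.

```python
import math

def f_series(lam, N=400):
    """Series in (A33); the coefficient (1)_n / (3/2)_n is updated recursively."""
    z = 1.0 - lam
    total, ratio = 0.0, 1.0
    for n in range(1, N + 1):
        ratio *= n / (n + 0.5)  # after the update, ratio = (1)_n / (3/2)_n
        total += ratio * z ** n / n
    return total

def f_closed(lam):
    """Closed form (A38), valid for 0 < lam < 1."""
    v = math.sqrt(1.0 - lam)
    return -2.0 * math.sqrt(lam) * math.asin(v) / v + 2.0
```

For example, at $\lambda=0.5$ both expressions evaluate to $2-\pi/2$.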

Author Contributions

Conceptualization, N.B.; methodology, N.B.; software, N.B.; writing, original draft preparation, N.B.; writing, review and editing, N.B. and D.R.; supervision, D.R. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Ollila E., Tyler D.E., Koivunen V., Poor H.V. Complex Elliptically Symmetric Distributions: Survey, New Results and Applications. IEEE Trans. Signal Process. 2012;60:5597–5625. doi: 10.1109/TSP.2012.2212433. [DOI] [Google Scholar]
  • 2.Kotz S., Nadarajah S. Multivariate T-Distributions and Their Applications. Cambridge University Press; Cambridge, UK: 2004. [DOI] [Google Scholar]
  • 3.Press S. Multivariate stable distributions. J. Multivar. Anal. 1972;2:444–462. doi: 10.1016/0047-259X(72)90038-3. [DOI] [Google Scholar]
  • 4.Sahu S., Singh H.V., Kumar B., Singh A.K. Statistical modeling and Gaussianization procedure based de-speckling algorithm for retinal OCT images. J. Ambient. Intell. Humaniz. Comput. 2018:1–14. doi: 10.1007/s12652-018-0823-2. [DOI] [Google Scholar]
  • 5.Ranjani J.J., Thiruvengadam S.J. Generalized SAR Despeckling Based on DTCWT Exploiting Interscale and Intrascale Dependences. IEEE Geosci. Remote Sens. Lett. 2011;8:552–556. doi: 10.1109/LGRS.2010.2089780. [DOI] [Google Scholar]
  • 6.Sadreazami H., Ahmad M.O., Swamy M.N.S. Color image denoising using multivariate cauchy PDF in the contourlet domain; Proceedings of the 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE); Vancouver, BC, Canada. 15–18 May 2016; pp. 1–4. [DOI] [Google Scholar]
  • 7.Sadreazami H., Ahmad M.O., Swamy M.N.S. A Study of Multiplicative Watermark Detection in the Contourlet Domain Using Alpha-Stable Distributions. IEEE Trans. Image Process. 2014;23:4348–4360. doi: 10.1109/TIP.2014.2339633. [DOI] [PubMed] [Google Scholar]
  • 8.Fontaine M., Nugraha A.A., Badeau R., Yoshii K., Liutkus A. Cauchy Multichannel Speech Enhancement with a Deep Speech Prior; Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO); A Coruna, Spain. 2–6 September 2019; pp. 1–5. [Google Scholar]
  • 9.Cover T.M., Thomas J.A. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing) Wiley-Interscience; Hoboken, NJ, USA: 2006. [DOI] [Google Scholar]
  • 10.Pardo L. Statistical Inference Based on Divergence Measures. CRC Press; Abingdon, UK: 2005. [Google Scholar]
  • 11.Kullback S., Leibler R.A. On Information and Sufficiency. Ann. Math. Stat. 1951;22:79–86. doi: 10.1214/aoms/1177729694. [DOI] [Google Scholar]
  • 12.Kullback S. Information Theory and Statistics. Wiley; New York, NY, USA: 1959. [Google Scholar]
  • 13.Rényi A. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. Volume 1. University of California Press; Berkeley, CA, USA: 1961. On Measures of Entropy and Information; pp. 547–561. [Google Scholar]
  • 14.Sharma B.D., Mittal D.P. New non-additive measures of relative information. J. Comb. Inf. Syst. Sci. 1977;2:122–132. [Google Scholar]
  • 15.Bhattacharyya A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943;35:99–109. [Google Scholar]
  • 16.Kailath T. The Divergence and Bhattacharyya Distance Measures in Signal Selection. IEEE Trans. Commun. Technol. 1967;15:52–60. doi: 10.1109/TCOM.1967.1089532. [DOI] [Google Scholar]
  • 17.Giet L., Lubrano M. A minimum Hellinger distance estimator for stochastic differential equations: An application to statistical inference for continuous time interest rate models. Comput. Stat. Data Anal. 2008;52:2945–2965. doi: 10.1016/j.csda.2007.10.004. [DOI] [Google Scholar]
  • 18.Csiszár I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hung. Acad. Sci. Ser. A. 1963;8:85–108. [Google Scholar]
  • 19.Ali S.M., Silvey S.D. A General Class of Coefficients of Divergence of One Distribution from Another. J. R. Stat. Soc. Ser. B (Methodol.) 1966;28:131–142. doi: 10.1111/j.2517-6161.1966.tb00626.x. [DOI] [Google Scholar]
  • 20.Bregman L. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967;7:200–217. doi: 10.1016/0041-5553(67)90040-7. [DOI] [Google Scholar]
  • 21.Burbea J., Rao C. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inf. Theory. 1982;28:489–495. doi: 10.1109/TIT.1982.1056497. [DOI] [Google Scholar]
  • 22.Burbea J., Rao C. On the convexity of higher order Jensen differences based on entropy functions (Corresp.) IEEE Trans. Inf. Theory. 1982;28:961–963. doi: 10.1109/TIT.1982.1056573. [DOI] [Google Scholar]
  • 23.Burbea J., Rao C. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. J. Multivar. Anal. 1982;12:575–596. doi: 10.1016/0047-259X(82)90065-3. [DOI] [Google Scholar]
  • 24.Csiszar I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung. 1967;2:229–318. [Google Scholar]
  • 25.Nielsen F., Nock R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2014;21:10–13. doi: 10.1109/LSP.2013.2288355. [DOI] [Google Scholar]
  • 26.Menéndez M.L., Morales D., Pardo L., Salicrú M. Asymptotic behaviour and statistical applications of divergence measures in multinomial populations: A unified study. Stat. Pap. 1995;36:1–29. doi: 10.1007/BF02926015. [DOI] [Google Scholar]
  • 27.Cover T.M., Thomas J.A. Information theory and statistics. Elem. Inf. Theory. 1991;1:279–335. [Google Scholar]
  • 28.MacKay D.J.C. Information Theory, Inference and Learning Algorithms. Cambridge University Press; Cambridge, UK: 2003. [Google Scholar]
  • 29.Ruiz F.E., Pérez P.S., Bonev B.I. Information Theory in Computer Vision and Pattern Recognition. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2009. [Google Scholar]
  • 30.Nielsen F. Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences. Entropy. 2022;24:421. doi: 10.3390/e24030421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Chyzak F., Nielsen F. A closed-form formula for the Kullback–Leibler divergence between Cauchy distributions. arXiv. 2019 arXiv:1905.10965 [Google Scholar]
  • 32.Nielsen F., Okamura K. On f-divergences between Cauchy distributions. arXiv. 2021 arXiv:2101.12459 [Google Scholar]
  • 33.Srivastava H., Karlsson P.W. Multiple Gaussian Hypergeometric Series. Horwood Halsted Press; Chichester, UK: West Sussex, UK: New York, NY, USA: 1985. (Ellis Horwood Series in Mathematics and Its Applications Statistics and Operational Research, E). [Google Scholar]
  • 34.Mathai A.M., Haubold H.J. Special Functions for Applied Scientists. Springer Science+Business Media; New York, NY, USA: 2008. [Google Scholar]
  • 35.Gradshteyn I., Ryzhik I. Table of Integrals, Series, and Products. 7th ed. Academic Press (Elsevier); Cambridge, MA, USA: 2007. [Google Scholar]
  • 36.Humbert P. The Confluent Hypergeometric Functions of Two Variables. Proc. R. Soc. Edinb. 1922;41:73–96. doi: 10.1017/S0370164600009810. [DOI] [Google Scholar]
  • 37.Erdélyi A. Higher Transcendental Functions. Volume I McGraw-Hill; New York, NY, USA: 1953. [Google Scholar]
  • 38.Koepf W. Hypergeometric Summation an Algorithmic Approach to Summation and Special Function Identities. 2nd ed. Universitext, Springer; London, UK: 2014. [Google Scholar]
  • 39.Lauricella G. Sulle funzioni ipergeometriche a piu variabili. Rend. Del Circ. Mat. Palermo. 1893;7:111–158. doi: 10.1007/BF03012437. [DOI] [Google Scholar]
  • 40.Mathai A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument. World Scientific; Singapore: 1997. [Google Scholar]
  • 41.Anderson T.W. An Introduction to Multivariate Statistical Analysis. John Wiley & Sons; Hoboken, NJ, USA: 2003. [Google Scholar]
  • 42.Hattori A., Kimura T. On the Euler integral representations of hypergeometric functions in several variables. J. Math. Soc. Jpn. 1974;26:1–16. doi: 10.2969/jmsj/02610001. [DOI] [Google Scholar]
  • 43.Exton H. Multiple Hypergeometric Functions and Applications. Wiley; New York, NY, USA: 1976. [Google Scholar]



Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
