Entropy. 2019 May 11; 21(5): 485. doi: 10.3390/e21050485

On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means

Frank Nielsen 1
PMCID: PMC7514974  PMID: 33267199

Abstract

The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrating example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.

Keywords: Jensen–Shannon divergence, Jeffreys divergence, resistor average distance, Bhattacharyya distance, f-divergence, Jensen/Burbea–Rao divergence, Bregman divergence, abstract weighted mean, quasi-arithmetic mean, mixture family, statistical M-mixture, exponential family, Gaussian family, Cauchy scale family, clustering

1. Introduction and Motivations

1.1. Kullback–Leibler Divergence and Its Symmetrizations

Let (X,\mathcal{A}) be a measurable space [1] where X denotes the sample space and \mathcal{A} the σ-algebra of measurable events. Consider a positive measure μ (usually the Lebesgue measure μ_L with Borel σ-algebra \mathcal{B}(\mathbb{R}^d) or the counting measure μ_c with power set σ-algebra 2^X). Denote by \mathcal{P} the set of probability distributions.

The Kullback–Leibler Divergence [2] (KLD) KL: \mathcal{P}\times\mathcal{P} \to [0,\infty] is the most fundamental distance [2] between probability distributions, defined by:

KL(P:Q) := \int p \log \frac{p}{q}\, d\mu, (1)

where p and q denote the Radon–Nikodym derivatives of probability measures P and Q with respect to μ (with P,Qμ). The KLD expression between P and Q in Equation (1) is independent of the dominating measure μ. Table A1 summarizes the various distances and their notations used in this paper.

The KLD is also called the relative entropy [2] because it can be written as the difference of the cross-entropy minus the entropy:

KL(p:q) = h^{\times}(p:q) - h(p), (2)

where h× denotes the cross-entropy [2]:

h^{\times}(p:q) := \int p \log \frac{1}{q}\, d\mu, (3)

and

h(p) := \int p \log \frac{1}{p}\, d\mu = h^{\times}(p:p), (4)

denotes the Shannon entropy [2]. Although the formula of the Shannon entropy in Equation (4) unifies both the discrete case and the continuous case of probability distributions, the behavior of entropy in the discrete case and the continuous case is very different: When μ=μ_c, Equation (4) yields the discrete Shannon entropy which is always positive and upper bounded by \log|X|. When μ=μ_L, Equation (4) defines the Shannon differential entropy which may be negative and unbounded [2] (e.g., the differential entropy of the Gaussian distribution N(m,σ) is \frac{1}{2}\log(2\pi e\sigma^2)). See also [3] for further important differences between the discrete case and the continuous case.

In general, the KLD is an asymmetric distance (i.e., KL(p:q) \neq KL(q:p), hence the argument separator notation using the delimiter ‘:’). In information theory [2], it is customary to use the double bar notation ‘‖’ instead of the comma ‘,’ notation to avoid confusion with joint random variables. The reverse KL divergence or dual KL divergence is:

KL^*(P:Q) := KL(Q:P) = \int q \log \frac{q}{p}\, d\mu. (5)

In general, the reverse distance or dual distance for a distance D is written as:

D*(p:q):=D(q:p). (6)

One way to symmetrize the KLD is to consider the Jeffreys Divergence [4] (JD; named after Sir Harold Jeffreys (1891–1989), a British statistician):

J(p;q) := KL(p:q) + KL(q:p) = \int (p-q)\log\frac{p}{q}\, d\mu = J(q;p). (7)

However, this symmetric distance is not upper bounded, and its sensitivity can raise numerical issues in applications. Here, we used the optional argument separator notation ‘;’ to emphasize that the distance is symmetric but not necessarily a metric distance. This notation matches the notational convention of the mutual information of two joint random variables in information theory [2].

The symmetrization of the KLD may also be obtained using the harmonic mean instead of the arithmetic mean, yielding the resistor average distance [5] R(p;q):

\frac{1}{R(p;q)} = \frac{1}{2}\left( \frac{1}{KL(p:q)} + \frac{1}{KL(q:p)} \right), (8)
R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{KL(p:q) + KL(q:p)} = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}. (9)

Another famous symmetrization of the KLD is the Jensen–Shannon Divergence [6] (JSD) defined by:

JS(p;q) := \frac{1}{2}\left( KL\left(p : \frac{p+q}{2}\right) + KL\left(q : \frac{p+q}{2}\right) \right), (10)
= \frac{1}{2}\int\left( p\log\frac{2p}{p+q} + q\log\frac{2q}{p+q} \right) d\mu. (11)

This distance can be interpreted as the total divergence to the average distribution (see Equation (10)). The JSD can be rewritten as a Jensen divergence (or Burbea–Rao divergence [7]) for the negentropy generator h (called Shannon information):

JS(p;q) = h\left(\frac{p+q}{2}\right) - \frac{h(p)+h(q)}{2}. (12)

An important property of the Jensen–Shannon divergence compared to the Jeffreys divergence is that this distance is always bounded:

0 \leq JS(p:q) \leq \log 2. (13)

This follows from the fact that

KL\left(p : \frac{p+q}{2}\right) = \int p\log\frac{2p}{p+q}\, d\mu \leq \int p\log\frac{2p}{p}\, d\mu = \log 2. (14)

Finally, the square root of the JSD (i.e., \sqrt{JS}) yields a metric distance satisfying the triangular inequality [8,9]. The JSD has found applications in many fields such as bioinformatics [10] and social sciences [11], just to name a few. Recently, the JSD has gained attention in the deep learning community with the Generative Adversarial Networks (GANs) [12]. In computer vision and pattern recognition, one often relies on information-theoretic techniques to perform registration and recognition tasks. For example, in [13], the authors use a mixture of Principal Axes Registrations (mPAR) whose parameters are estimated by minimizing the KLD between the considered two-point distributions. In [14], the authors parameterize both shapes and deformations using Gaussian Mixture Models (GMMs) to perform non-rigid shape registration. The lack of closed-form formula for the KLD between GMMs [15] spurred the use of other statistical distances which admit a closed-form expression for GMMs. For example, in [16], shape registration is performed by using the Jensen-Rényi divergence between GMMs. See also [17] for other information-theoretic divergences that admit closed-form formula for some statistical mixtures extending GMMs.
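As a small numerical illustration, the following Python sketch (with our own helper functions; it is not code from the paper) computes the KLD, the Jeffreys divergence and the JSD between two discrete distributions, and checks the bound of Equation (13):

import numpy as np

def kl(p, q):
    # discrete Kullback-Leibler divergence KL(p:q) = sum_i p_i log(p_i/q_i)
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jeffreys(p, q):
    # Jeffreys divergence J(p;q) = KL(p:q) + KL(q:p), Equation (7)
    return kl(p, q) + kl(q, p)

def jsd(p, q):
    # Jensen-Shannon divergence: total KL to the average mixture, Equation (10)
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * (kl(p, m) + kl(q, m))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.8])
print(kl(p, q), jeffreys(p, q))
print(jsd(p, q), "<=", np.log(2))   # the JSD never exceeds log 2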

In information geometry [18], the KLD, JD and JSD are invariant divergences which satisfy the property of information monotonicity [18]. The class of (separable) distances satisfying the information monotonicity are exhaustively characterized as Csiszár’s f-divergences [19]. An f-divergence is defined for a convex generator function f strictly convex at 1 (with f(1) = f'(1) = 0) by:

I_f(p:q) = \int p\, f\left(\frac{q}{p}\right) d\mu. (15)

The Jeffreys and Jensen–Shannon f-generators are:

f_J(u) := (u-1)\log u, (16)
f_{JS}(u) := -(u+1)\log\frac{1+u}{2} + u\log u. (17)

1.2. Statistical Distances and Parameter Divergences

In information and probability theory, the term “divergence” informally means a statistical distance [2]. However in information geometry [18], a divergence has a stricter meaning of being a smooth parametric distance (called a contrast function in [20]) from which a dual geometric structure can be derived [21,22].

Consider parametric distributions p_\theta belonging to a parametric family of distributions \{p_\theta : \theta \in \Theta\} (e.g., Gaussian family or Cauchy family), where Θ denotes the parameter space. Then a statistical distance D between distributions p_\theta and p_{\theta'} amounts to an equivalent parameter distance:

P(\theta:\theta') := D(p_\theta : p_{\theta'}). (18)

For example, the KLD between two distributions belonging to the same exponential family (e.g., Gaussian family) amounts to a reverse Bregman divergence for the cumulant generator F of the exponential family [23]:

KL(p_\theta : p_{\theta'}) = B_F^*(\theta:\theta') = B_F(\theta':\theta). (19)

A Bregman divergence BF is defined for a strictly convex and differentiable generator F as:

B_F(\theta:\theta') := F(\theta) - F(\theta') - \langle \theta - \theta', \nabla F(\theta') \rangle, (20)

where \langle \cdot,\cdot \rangle is an inner product (usually the Euclidean dot product for vector parameters).

Similar to the interpretation of the Jensen–Shannon divergence (statistical divergence) as a Jensen divergence for the negentropy generator, the Jensen–Bregman divergence [7] JB_F (parametric divergence JBD) amounts to a Jensen divergence J_F for a strictly convex generator F: \Theta \to \mathbb{R}:

JB_F(\theta:\theta') := \frac{1}{2}\left( B_F\left(\theta : \frac{\theta+\theta'}{2}\right) + B_F\left(\theta' : \frac{\theta+\theta'}{2}\right) \right), (21)
= \frac{F(\theta)+F(\theta')}{2} - F\left(\frac{\theta+\theta'}{2}\right) =: J_F(\theta:\theta'). (22)

Let us introduce the notation (\theta_p\theta_q)_\alpha := (1-\alpha)\theta_p + \alpha\theta_q to denote the linear interpolation (LERP) of the parameters. Then we have more generally that the skew Jensen–Bregman divergence JB_F^\alpha(\theta:\theta') amounts to a skew Jensen divergence J_F^\alpha(\theta:\theta'):

JB_F^\alpha(\theta:\theta') := (1-\alpha)\, B_F(\theta : (\theta\theta')_\alpha) + \alpha\, B_F(\theta' : (\theta\theta')_\alpha), (23)
= (F(\theta) F(\theta'))_\alpha - F((\theta\theta')_\alpha) =: J_F^\alpha(\theta:\theta'). (24)
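The identity between Equations (23) and (24) holds because the gradient terms of the two Bregman divergences cancel at the interpolated parameter (\theta\theta')_\alpha. The following Python sketch checks it numerically for an illustrative strictly convex generator of our choosing (any other smooth strictly convex F would do):

import numpy as np

def F(theta):
    # illustrative strictly convex generator on the positive orthant
    return float(np.sum(theta * np.log(theta)))

def gradF(theta):
    return np.log(theta) + 1.0

def bregman(t1, t2):
    # B_F(t1:t2) = F(t1) - F(t2) - <t1 - t2, grad F(t2)>, Equation (20)
    return F(t1) - F(t2) - float(np.dot(t1 - t2, gradF(t2)))

def skew_jensen_bregman(t1, t2, alpha):
    m = (1 - alpha) * t1 + alpha * t2            # (theta theta')_alpha
    return (1 - alpha) * bregman(t1, m) + alpha * bregman(t2, m)

def skew_jensen(t1, t2, alpha):
    return (1 - alpha) * F(t1) + alpha * F(t2) - F((1 - alpha) * t1 + alpha * t2)

t1 = np.array([0.2, 1.5, 3.0])
t2 = np.array([1.0, 0.4, 2.0])
for alpha in (0.25, 0.5, 0.9):
    print(skew_jensen_bregman(t1, t2, alpha), skew_jensen(t1, t2, alpha))   # identical values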

1.3. J-Symmetrization and JS-Symmetrization of Distances

For any arbitrary distance D(p:q), we can define its skew J-symmetrization for \alpha \in [0,1] by:

JD_\alpha(p:q) := (1-\alpha)\, D(p:q) + \alpha\, D(q:p), (25)

and its JS-symmetrization by:

JSD_\alpha(p:q) := (1-\alpha)\, D(p : (1-\alpha)p + \alpha q) + \alpha\, D(q : (1-\alpha)p + \alpha q), (26)
= (1-\alpha)\, D(p : (pq)_\alpha) + \alpha\, D(q : (pq)_\alpha). (27)

Usually, \alpha=\frac{1}{2}, and for notational brevity, we drop the superscript: JSD(p:q) := JSD_{\frac{1}{2}}(p:q). The Jeffreys divergence is twice the J-symmetrization of the KLD, and the Jensen–Shannon divergence is the JS-symmetrization of the KLD.

The J-symmetrization of an f-divergence I_f is obtained by taking the generator

f_\alpha^J(u) = (1-\alpha) f(u) + \alpha f^*(u), (28)

where f^*(u) = u f\left(\frac{1}{u}\right) is the conjugate generator:

I_{f^*}(p:q) = I_f^*(p:q) = I_f(q:p). (29)

The JS-symmetrization of an f-divergence

I_f^\alpha(p:q) := (1-\alpha)\, I_f(p : (pq)_\alpha) + \alpha\, I_f(q : (pq)_\alpha), (30)

with (pq)_\alpha = (1-\alpha)p + \alpha q is obtained by taking the generator

f_\alpha^{JS}(u) := (1-\alpha)\, f(\alpha u + 1 - \alpha) + \alpha\, u\, f\left(\alpha + \frac{1-\alpha}{u}\right). (31)

We check that we have:

I_f^\alpha(p:q) = (1-\alpha)\, I_f(p : (pq)_\alpha) + \alpha\, I_f(q : (pq)_\alpha) = I_f^{1-\alpha}(q:p) = I_{f_\alpha^{JS}}(p:q). (32)

A family of symmetric distances unifying the Jeffreys divergence with the Jensen–Shannon divergence was proposed in [24]. Finally, let us mention that once we have symmetrized a distance D, we may also metrize this symmetric distance by choosing (when it exists) the largest exponent \delta > 0 such that D^\delta becomes a metric distance [8,25,26,27,28].

1.4. Contributions and Paper Outline

The paper is organized as follows:

Section 2 reports the special case of mixture families in information geometry [18] for which the Jensen–Shannon divergence can be expressed as a Bregman divergence (Theorem 1), and highlights the lack of closed-form formula when considering exponential families. This fact precisely motivated this work.

Section 3 introduces the generalized Jensen–Shannon divergences using statistical mixtures derived from abstract weighted means (Definitions 2 and 5), presents the JS-symmetrization of statistical distances, and reports a sufficient condition to get bounded JS-symmetrizations (Property 1).

In Section 4.1, we consider the calculation of the geometric JSD between members of the same exponential family (Theorem 2) and instantiate the formula for the multivariate Gaussian distributions (Corollary 1). We discuss applications to k-means clustering in Section 4.1.2. In Section 4.2, we illustrate the method with another example that calculates in closed form the harmonic JSD between scale Cauchy distributions (Theorem 4).

Finally, we wrap up and conclude this work in Section 5.

2. Jensen–Shannon Divergence in Mixture and Exponential Families

We are interested in calculating the JSD between densities belonging to parametric families of distributions.

A trivial example is when p=(p_0,\ldots,p_D) and q=(q_0,\ldots,q_D) are categorical distributions: The average distribution \frac{p+q}{2} is again a categorical distribution, and the JSD is expressed plainly as:

JS(p,q) = \frac{1}{2}\sum_{i=0}^{D}\left( p_i\log\frac{2p_i}{p_i+q_i} + q_i\log\frac{2q_i}{p_i+q_i} \right). (33)

Another example is when p=mθp and q=mθq both belong to the same mixture family [18] M:

\mathcal{M} := \left\{ m_\theta(x) = \left(1 - \sum_{i=1}^{D}\theta_i\right) p_0(x) + \sum_{i=1}^{D}\theta_i p_i(x) : \theta_i > 0, \sum_i \theta_i < 1 \right\}, (34)

for linearly independent component distributions p0,p1,,pD. We have [29]:

KL(m_{\theta_p} : m_{\theta_q}) = B_F(\theta_p : \theta_q), (35)

where B_F is a Bregman divergence defined in Equation (20) obtained for the convex negentropy generator [29] F(\theta) = -h(m_\theta). The proof that F(\theta) is a strictly convex function is not trivial [30].

The mixture families include the family of categorical distributions over a finite alphabet X=\{E_0,\ldots,E_D\} (the D-dimensional probability simplex) since those categorical distributions form a mixture family with p_i(x) := \Pr(X=E_i) = \delta_{E_i}(x). Beware that mixture families require the component distributions to be prescribed. Therefore, a density of a mixture family is a special case of statistical mixtures (e.g., GMMs) with prescribed component distributions.

The mathematical identity of Equation (35) does not yield a practical formula since F(\theta) is usually not itself available in closed form. Worse, the Bregman generator can be non-analytic [31]. Nevertheless, this identity is useful for computing the right-sided Bregman centroid (left KL centroid of mixtures) since this centroid is equivalent to the center of mass, and independent of the Bregman generator [29].

Since the mixture of mixtures is also a mixture, specifically

\frac{m_{\theta_p} + m_{\theta_q}}{2} = m_{\frac{\theta_p+\theta_q}{2}} \in \mathcal{M}, (36)

it follows that we get a closed-form expression for the JSD between mixtures belonging to M.

Theorem 1 (JSD between mixtures).

The Jensen–Shannon divergence between two distributions p=mθp and q=mθq belonging to the same mixture family M is expressed as a Jensen–Bregman divergence for the negentropy generator F:

JS(m_{\theta_p}, m_{\theta_q}) = \frac{1}{2}\left( B_F\left(\theta_p : \frac{\theta_p+\theta_q}{2}\right) + B_F\left(\theta_q : \frac{\theta_p+\theta_q}{2}\right) \right). (37)

This amounts to calculating the Jensen divergence:

JS(m_{\theta_p}, m_{\theta_q}) = J_F(\theta_p;\theta_q) = (F(\theta_p) F(\theta_q))_{\frac{1}{2}} - F((\theta_p\theta_q)_{\frac{1}{2}}), (38)

where (v_1 v_2)_\alpha := (1-\alpha)v_1 + \alpha v_2.
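Theorem 1 can be checked numerically on the family of categorical distributions, which is a mixture family with Dirac component distributions. The Python sketch below (illustrative code with our own function names) compares the JSD computed directly with the Jensen divergence of the negentropy generator:

import numpy as np

def mix(theta):
    # categorical density m_theta = (1 - sum(theta), theta_1, ..., theta_D)
    theta = np.asarray(theta, float)
    return np.concatenate(([1.0 - theta.sum()], theta))

def F(theta):
    # negentropy generator F(theta) = -h(m_theta) = sum_i m_i log m_i
    m = mix(theta)
    return float(np.sum(m * np.log(m)))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    m = 0.5 * (p + q)
    return 0.5 * (kl(p, m) + kl(q, m))

theta_p = np.array([0.2, 0.3])   # D = 2, alphabet of size 3
theta_q = np.array([0.5, 0.1])
jensen_F = 0.5 * (F(theta_p) + F(theta_q)) - F(0.5 * (theta_p + theta_q))
print(jsd(mix(theta_p), mix(theta_q)))   # direct Jensen-Shannon divergence
print(jensen_F)                          # Jensen divergence of the negentropy: same value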

Now, consider distributions p=eθp and q=eθq belonging to the same exponential family [18] E:

\mathcal{E} := \left\{ e_\theta(x) = \exp\left(\theta^\top x - F(\theta)\right) : \theta \in \Theta \right\}, (39)

where

\Theta := \left\{ \theta \in \mathbb{R}^D : \int \exp(\theta^\top x)\, d\mu < \infty \right\}, (40)

denotes the natural parameter space. We have [18]:

KL(e_{\theta_p} : e_{\theta_q}) = B_F(\theta_q : \theta_p), (41)

where F denotes the log-normalizer or cumulant function of the exponential family [18].

However, \frac{e_{\theta_p}+e_{\theta_q}}{2} does not belong to \mathcal{E} in general, except for the case of the categorical/multinomial family which is both an exponential family and a mixture family [18].

For example, the mixture of two Gaussian distributions with distinct components is not a Gaussian distribution. Thus, it is not obvious to get a closed-form expression for the JSD in that case. This limitation precisely motivated the introduction of generalized JSDs defined in the next section.

Notice that in [32,33], it is shown how to express or approximate the f-divergences using expansions of power χ pseudo-distances. These power chi distances can all be expressed in closed form when dealing with isotropic Gaussians. This result holds for the JSD since the JSD is a f-divergence [33].

3. Generalized Jensen–Shannon Divergences

We first define abstract means M, and then generic statistical M-mixtures from which generalized Jensen–Shannon divergences are built thereof.

Definitions

Consider an abstract mean [34] M. That is, a continuous bivariate function M(\cdot,\cdot): I \times I \to I on an interval I \subset \mathbb{R} that satisfies the following in-betweenness property:

\inf\{x,y\} \leq M(x,y) \leq \sup\{x,y\}, \quad \forall x,y \in I. (42)

Using the unique dyadic expansion of real numbers, we can always build a corresponding weighted mean M_\alpha(p,q) (with \alpha \in [0,1]) following the construction reported in [34] (page 3) such that M_0(p,q)=p and M_1(p,q)=q. In the remainder, we consider I=(0,\infty).

Examples of common weighted means are:

  • the arithmetic mean A_\alpha(x,y) = (1-\alpha)x + \alpha y,

  • the geometric mean G_\alpha(x,y) = x^{1-\alpha} y^{\alpha}, and

  • the harmonic mean H_\alpha(x,y) = \frac{xy}{(1-\alpha)y + \alpha x}.

These means can be unified using the concept of quasi-arithmetic means [34] (also called Kolmogorov–Nagumo means):

M_\alpha^h(x,y) := h^{-1}\left( (1-\alpha) h(x) + \alpha h(y) \right), (43)

where h is a strictly monotone function. For example, the geometric mean G_\alpha(x,y) is obtained as M_\alpha^h(x,y) for the generator h(u)=\log(u). Rényi used the concept of quasi-arithmetic means instead of the arithmetic mean to define axiomatically the Rényi entropy [35] of order α in information theory [2].
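The following short Python sketch illustrates how the arithmetic, geometric and harmonic weighted means are recovered as quasi-arithmetic means for different generators h (an illustrative implementation, not library code):

import numpy as np

def quasi_arithmetic_mean(x, y, alpha, h, h_inv):
    # weighted quasi-arithmetic (Kolmogorov-Nagumo) mean M_alpha^h(x, y) of Equation (43)
    return h_inv((1 - alpha) * h(x) + alpha * h(y))

x, y, alpha = 2.0, 8.0, 0.5
print(quasi_arithmetic_mean(x, y, alpha, lambda u: u, lambda v: v))      # h(u)=u: arithmetic mean, 5.0
print(quasi_arithmetic_mean(x, y, alpha, np.log, np.exp))                # h(u)=log u: geometric mean, 4.0
print(quasi_arithmetic_mean(x, y, alpha, lambda u: 1/u, lambda v: 1/v))  # h(u)=1/u: harmonic mean, 3.2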

For any abstract weighted mean, we can build a statistical mixture called a M-mixture as follows:

Definition 1 (M-mixture).

The M_\alpha-interpolation (pq)_\alpha^M (with \alpha \in [0,1]) of densities p and q with respect to a mean M is an α-weighted M-mixture defined by:

(pq)_\alpha^M(x) := \frac{M_\alpha(p(x), q(x))}{Z_\alpha^M(p:q)}, (44)

where

Z_\alpha^M(p:q) = \int_{t \in X} M_\alpha(p(t), q(t))\, d\mu(t) =: \langle M_\alpha(p,q) \rangle (45)

is the normalizer function (or scaling factor) ensuring that (pq)_\alpha^M \in \mathcal{P}. (The bracket notation \langle f \rangle denotes the integral of f over X.)

The A-mixture (pq)_\alpha^A(x) = (1-\alpha)p(x) + \alpha q(x) (‘A’ standing for the arithmetic mean) represents the usual statistical mixture [36] (with Z_\alpha^A(p:q)=1). The G-mixture (pq)_\alpha^G(x) = \frac{p(x)^{1-\alpha} q(x)^{\alpha}}{Z_\alpha^G(p:q)} of two distributions p(x) and q(x) (‘G’ standing for the geometric mean G) is an exponential family of order [37] 1:

(pq)_\alpha^G(x) = \exp\left( (1-\alpha)\log p(x) + \alpha\log q(x) - \log Z_\alpha^G(p:q) \right). (46)

The two-component M-mixture can be generalized to a k-component M-mixture with \alpha \in \Delta_{k-1}, the (k-1)-dimensional standard simplex:

(p_1 \cdots p_k)_\alpha^M := \frac{p_1(x)^{\alpha_1} \times \cdots \times p_k(x)^{\alpha_k}}{Z_\alpha(p_1,\ldots,p_k)}, (47)

where Z_\alpha(p_1,\ldots,p_k) := \int_X p_1(x)^{\alpha_1} \times \cdots \times p_k(x)^{\alpha_k}\, d\mu(x).

For a given pair of distributions p and q, the set {Mα(p(x),q(x)):α[0,1]} describes a path in the space of probability density functions. This density interpolation scheme was investigated for quasi-arithmetic weighted means in [38,39,40]. In [41], the authors study the Fisher information matrix for the α-mixture models (using α-power means).

We call (pq)αM the α-weighted M-mixture, thus extending the notion of α-mixtures [42] obtained for power means Pα. Notice that abstract means have also been used to generalize Bregman divergences using the concept of (M,N)-convexity [43].

Let us state a first generalization of the Jensen–Shannon divergence:

Definition 2 (M-Jensen–Shannon divergence).

For a mean M, the skew M-Jensen–Shannon divergence (for α[0,1]) is defined by

JS^{M_\alpha}(p:q) := (1-\alpha)\, KL(p : (pq)_\alpha^M) + \alpha\, KL(q : (pq)_\alpha^M). (48)

When Mα=Aα, we recover the ordinary Jensen–Shannon divergence since Aα(p:q)=(pq)α (and ZαA(p:q)=1).

We can extend the definition to the JS-symmetrization of any distance:

Definition 3 (M-JS symmetrization).

For a mean M and a distance D, the skew M-JS symmetrization of D (for α[0,1]) is defined by

JS_D^{M_\alpha}(p:q) := (1-\alpha)\, D(p : (pq)_\alpha^M) + \alpha\, D(q : (pq)_\alpha^M). (49)

By notation, we have JSMα(p:q)=JSKLMα(p:q). That is, the arithmetic JS-symmetrization of the KLD is the JSD.

Let us define the α-skew K-divergence [6,44] Kα(p:q) as

K_\alpha(p:q) := KL(p : (1-\alpha)p + \alpha q) = KL(p : (pq)_\alpha), (50)

where (pq)_\alpha(x) := (1-\alpha)p(x) + \alpha q(x). Then the Jensen–Shannon divergence and the Jeffreys divergence can be rewritten [24] as

JS(p;q) = \frac{1}{2}\left( K_{\frac{1}{2}}(p:q) + K_{\frac{1}{2}}(q:p) \right), (51)
J(p;q) = K_1(p:q) + K_1(q:p), (52)

since KL(p:q) = K_1(p:q). Then JS_\alpha(p:q) = (1-\alpha) K_\alpha(p:q) + \alpha K_{1-\alpha}(q:p). Similarly, we can define the generalized skew K-divergence:

K_D^{M_\alpha}(p:q) := D(p : (pq)_\alpha^M). (53)

The success of the JSD compared to the JD in applications is partially due to the fact that the JSD is upper bounded by log2. So, one question to ask is whether those generalized JSDs are upper bounded or not?

To report a sufficient condition, let us first introduce the dominance relationship between means: We say that a mean M dominates a mean N when M(x,y) \geq N(x,y) for all x,y \geq 0, see [34]. In that case we write concisely M \geq N. For example, the Arithmetic-Geometric-Harmonic (AGH) inequality states that A \geq G \geq H.

Consider the term

KL(p : (pq)_\alpha^M) = \int p(x)\log\frac{p(x)\, Z_\alpha^M(p,q)}{M_\alpha(p(x),q(x))}\, d\mu(x), (54)
= \log Z_\alpha^M(p,q) + \int p(x)\log\frac{p(x)}{M_\alpha(p(x),q(x))}\, d\mu(x). (55)

When mean Mα dominates the arithmetic mean Aα, we have

\int p(x)\log\frac{p(x)}{M_\alpha(p(x),q(x))}\, d\mu(x) \leq \int p(x)\log\frac{p(x)}{A_\alpha(p(x),q(x))}\, d\mu(x),

and

\int p(x)\log\frac{p(x)}{A_\alpha(p(x),q(x))}\, d\mu(x) \leq \int p(x)\log\frac{p(x)}{(1-\alpha)p(x)}\, d\mu(x) = -\log(1-\alpha).

Notice that Z_\alpha^A(p:q)=1 (when M=A is the arithmetic mean), and we recover the fact that the α-skew Jensen–Shannon divergence is upper bounded by -\log(1-\alpha) (e.g., \log 2 when \alpha=\frac{1}{2}).

We summarize the result in the following property:

Property 1 (Upper bound on M-JSD).

The M-JSD is upper bounded by \log\frac{Z_\alpha^M(p,q)}{1-\alpha} when M \geq A.

Let us observe that dominance of means can be used to define distances: For example, the celebrated α-divergences

I_\alpha(p:q) = \int\left( \alpha p(x) + (1-\alpha) q(x) - p(x)^{\alpha} q(x)^{1-\alpha} \right) d\mu(x), \quad \alpha \in \mathbb{R}\setminus\{0,1\}, (56)

can be interpreted as a difference of two means, the arithmetic mean and the geometric mean:

I_\alpha(p:q) = \int\left( A_\alpha(q(x):p(x)) - G_\alpha(q(x):p(x)) \right) d\mu(x). (57)

We can also define the generalized Jeffreys divergence as follows:

Definition 4 (N-Jeffreys divergence).

For a mean N, the skew N-Jeffreys divergence (for β[0,1]) is defined by

J^{N_\beta}(p:q) := N_\beta\left( KL(p:q), KL(q:p) \right). (58)

This definition includes the (scaled) resistor average distance [5] R(p;q), obtained for the harmonic mean N=H for the KLD with skew parameter β=12:

\frac{1}{R(p;q)} = \frac{1}{2}\left( \frac{1}{KL(p:q)} + \frac{1}{KL(q:p)} \right), (59)
R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}. (60)

In [5], the factor 12 is omitted to keep the spirit of the original Jeffreys divergence.

We can further extend this definition for any arbitrary divergence D as follows:

Definition 5 (Skew (M,N)-D divergence).

The skew (M,N)-divergence with respect to weighted means M_\alpha and N_\beta is defined as follows:

JS_D^{M_\alpha,N_\beta}(p:q) := N_\beta\left( D(p : (pq)_\alpha^M), D(q : (pq)_\alpha^M) \right). (61)

We now show how to choose the abstract mean according to the parametric family of distributions to obtain some closed-form formula for some statistical distances.

4. Some Closed-Form Formula for the M-Jensen–Shannon Divergences

Our motivation to introduce these novel families of M-Jensen–Shannon divergences is to obtain closed-form formulas when probability densities belong to some given parametric families P_\Theta. We shall illustrate the principle of the method to choose the right abstract mean for the considered parametric family, and report the corresponding formulas for the following two case studies:

  1. The geometric G-Jensen–Shannon divergence for the exponential families (Section 4.1), and

  2. the harmonic H-Jensen–Shannon divergence for the family of Cauchy scale distributions (Section 4.2).

Recall that the arithmetic A-Jensen–Shannon divergence is well-suited for mixture families (Theorem 1).

4.1. The Geometric G-Jensen–Shannon Divergence

Consider an exponential family [37] EF with log-normalizer F:

E_F = \left\{ p_\theta(x)\, d\mu = \exp\left(\theta^\top x - F(\theta)\right) d\mu : \theta \in \Theta \right\}, (62)

and natural parameter space

\Theta = \left\{ \theta : \int_X \exp(\theta^\top x)\, d\mu < \infty \right\}. (63)

The log-normalizer (a log-Laplace function also called log-partition or cumulant function) is a real analytic convex function.

We seek a mean M such that the weighted M-mixture density (p_{\theta_1} p_{\theta_2})_\alpha^M of two densities p_{\theta_1} and p_{\theta_2} of the same exponential family yields another density of that exponential family (e.g., p_{(\theta_1\theta_2)_\alpha}). When considering exponential families, choose the weighted geometric mean G_\alpha for the abstract mean M_\alpha(x,y): M_\alpha(x,y) = G_\alpha(x,y) = x^{1-\alpha} y^{\alpha}, for x,y>0. Indeed, it is well-known that the normalized weighted product of distributions belonging to the same exponential family also belongs to this exponential family [45]:

\forall x \in X, \quad (p_{\theta_1} p_{\theta_2})_\alpha^G(x) := \frac{G_\alpha(p_{\theta_1}(x), p_{\theta_2}(x))}{\int G_\alpha(p_{\theta_1}(t), p_{\theta_2}(t))\, d\mu(t)} = \frac{p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x)}{Z_\alpha^G(p:q)}, (64)
= p_{(\theta_1\theta_2)_\alpha}(x), (65)

where the normalization factor is

Z_\alpha^G(p:q) = \exp\left( -J_F^\alpha(\theta_1:\theta_2) \right), (66)

for the skew Jensen divergence JFα defined by:

J_F^\alpha(\theta_1:\theta_2) := (F(\theta_1) F(\theta_2))_\alpha - F((\theta_1\theta_2)_\alpha). (67)

Notice that since the natural parameter space Θ is convex, the distribution p_{(\theta_1\theta_2)_\alpha} \in E_F (since (\theta_1\theta_2)_\alpha \in \Theta).

Thus, it follows that we have:

KL\left( p_\theta : (p_{\theta_1} p_{\theta_2})_\alpha^G \right) = KL\left( p_\theta : p_{(\theta_1\theta_2)_\alpha} \right), (68)
= B_F\left( (\theta_1\theta_2)_\alpha : \theta \right). (69)

This allows us to conclude that the G-Jensen–Shannon divergence admits the following closed-form expression between densities belonging to the same exponential family:

JS^{G_\alpha}(p_{\theta_1}:p_{\theta_2}) := (1-\alpha)\, KL\left( p_{\theta_1} : (p_{\theta_1}p_{\theta_2})_\alpha^G \right) + \alpha\, KL\left( p_{\theta_2} : (p_{\theta_1}p_{\theta_2})_\alpha^G \right), (70)
= (1-\alpha)\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_1 \right) + \alpha\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_2 \right). (71)

Please note that since (\theta_1\theta_2)_\alpha - \theta_1 = \alpha(\theta_2 - \theta_1) and (\theta_1\theta_2)_\alpha - \theta_2 = (1-\alpha)(\theta_1 - \theta_2), it follows that (1-\alpha) B_F(\theta_1 : (\theta_1\theta_2)_\alpha) + \alpha B_F(\theta_2 : (\theta_1\theta_2)_\alpha) = J_F^\alpha(\theta_1:\theta_2).

The dual divergence [46] D^* (with respect to the reference argument) or reverse divergence of a divergence D is defined by swapping the calling arguments: D^*(\theta:\theta') := D(\theta':\theta).

Thus, if we define the Jensen–Shannon divergence for the dual KL divergence KL^*(p:q) := KL(q:p)

JS_{KL^*}(p:q) := \frac{1}{2}\left( KL^*\left(p : \frac{p+q}{2}\right) + KL^*\left(q : \frac{p+q}{2}\right) \right), (72)
= \frac{1}{2}\left( KL\left(\frac{p+q}{2} : p\right) + KL\left(\frac{p+q}{2} : q\right) \right), (73)

then we obtain:

JS_{KL^*}^{G_\alpha}(p_{\theta_1}:p_{\theta_2}) := (1-\alpha)\, KL\left( (p_{\theta_1}p_{\theta_2})_\alpha^G : p_{\theta_1} \right) + \alpha\, KL\left( (p_{\theta_1}p_{\theta_2})_\alpha^G : p_{\theta_2} \right), (74)
= (1-\alpha)\, B_F\left( \theta_1 : (\theta_1\theta_2)_\alpha \right) + \alpha\, B_F\left( \theta_2 : (\theta_1\theta_2)_\alpha \right) = JB_F^\alpha(\theta_1:\theta_2), (75)
= (1-\alpha) F(\theta_1) + \alpha F(\theta_2) - F((\theta_1\theta_2)_\alpha), (76)
= J_F^\alpha(\theta_1:\theta_2). (77)

Please note that JS_{D^*} \neq JS_D^* in general.

In general, the JS-symmetrization for the reverse KL divergence is

JS_{KL^*}(p;q) = \frac{1}{2}\left( KL\left(\frac{p+q}{2} : p\right) + KL\left(\frac{p+q}{2} : q\right) \right), (78)
= \int m \log\frac{m}{\sqrt{pq}}\, d\mu = \int A(p,q)\log\frac{A(p,q)}{G(p,q)}\, d\mu, (79)

where m = \frac{p+q}{2} = A(p,q) and G(p,q) = \sqrt{pq}. Since A \geq G (arithmetic-geometric inequality), it follows that JS_{KL^*}(p;q) \geq 0.

Theorem 2 (G-JSD and its dual JS-symmetrization in exponential families).

The α-skew G-Jensen–Shannon divergence JSGα between two distributions pθ1 and pθ2 of the same exponential family EF is expressed in closed form for α(0,1) as:

JS^{G_\alpha}(p_{\theta_1}:p_{\theta_2}) = (1-\alpha)\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_1 \right) + \alpha\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_2 \right), (80)
JS_{KL^*}^{G_\alpha}(p_{\theta_1}:p_{\theta_2}) = JB_F^\alpha(\theta_1:\theta_2) = J_F^\alpha(\theta_1:\theta_2). (81)
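Theorem 2 can be verified numerically. The Python sketch below instantiates the closed form for two univariate Gaussian distributions (the 1D case of Corollary 1 below) and compares it against a brute-force quadrature of Definition (70), where the geometric mixture is normalized on a grid:

import numpy as np

def log_gauss(x, mu, var):
    return -0.5 * (x - mu) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

def kl_gauss(mu1, var1, mu2, var2):
    # closed-form KL between univariate Gaussians (1D case of Equation (97))
    return 0.5 * (var1 / var2 + (mu2 - mu1) ** 2 / var2 + np.log(var2 / var1) - 1.0)

mu1, var1, mu2, var2, alpha = -1.0, 0.5, 2.0, 3.0, 0.3

# parameters of the normalized geometric mixture (1D case of Equations (113)-(114))
var_a = 1.0 / ((1 - alpha) / var1 + alpha / var2)
mu_a = var_a * ((1 - alpha) * mu1 / var1 + alpha * mu2 / var2)

# closed-form alpha-skew G-JSD of Theorem 2
closed = (1 - alpha) * kl_gauss(mu1, var1, mu_a, var_a) + alpha * kl_gauss(mu2, var2, mu_a, var_a)

# brute force: normalize the weighted geometric mean on a grid, then integrate the two KL terms
x = np.linspace(-25.0, 25.0, 250001)
dx = x[1] - x[0]
lp1, lp2 = log_gauss(x, mu1, var1), log_gauss(x, mu2, var2)
lg = (1 - alpha) * lp1 + alpha * lp2
lg -= np.log(np.sum(np.exp(lg)) * dx)        # log of the normalized G-mixture
p1, p2 = np.exp(lp1), np.exp(lp2)
brute = (1 - alpha) * np.sum(p1 * (lp1 - lg)) * dx + alpha * np.sum(p2 * (lp2 - lg)) * dx
print(closed, brute)                          # the two values agree up to the grid error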

4.1.1. Case Study: The Multivariate Gaussian Family

Consider the exponential family [18,37] of multivariate Gaussian distributions [47,48,49]

\{N(\mu,\Sigma) : \mu \in \mathbb{R}^d, \Sigma \succ 0\}. (82)

The multivariate Gaussian family is also called the multivariate normal family in the literature, or MVN family for short.

Let λ:=(λv,λM)=(μ,Σ) denote the composite (vector,matrix) parameter of an MVN. The d-dimensional MVN density is given by

p_\lambda(x;\lambda) := \frac{1}{(2\pi)^{d/2}\sqrt{|\lambda_M|}} \exp\left( -\frac{1}{2}(x-\lambda_v)^\top \lambda_M^{-1}(x-\lambda_v) \right), (83)

where |·| denotes the matrix determinant. The natural parameters θ are also expressed using both a vector parameter θv and a matrix parameter θM in a compound object θ=(θv,θM). By defining the following compound inner product on a composite (vector,matrix) object

\langle \theta, \theta' \rangle := \theta_v^\top \theta_v' + \mathrm{tr}\left( \theta_M^\top \theta_M' \right), (84)

where tr(·) denotes the matrix trace, we rewrite the MVN density of Equation (83) in the canonical form of an exponential family [37]:

p_\theta(x;\theta) := \exp\left( \langle t(x), \theta \rangle - F_\theta(\theta) \right) = p_\lambda(x;\lambda(\theta)), (85)

where

\theta = (\theta_v, \theta_M) = \left( \Sigma^{-1}\mu, \frac{1}{2}\Sigma^{-1} \right) = \theta(\lambda) = \left( \lambda_M^{-1}\lambda_v, \frac{1}{2}\lambda_M^{-1} \right), (86)

is the compound natural parameter and

t(x) = (x, -x x^\top) (87)

is the compound sufficient statistic. The function Fθ is the strictly convex and continuously differentiable log-normalizer defined by:

F_\theta(\theta) = \frac{1}{2}\left( d\log\pi - \log|\theta_M| + \frac{1}{2}\theta_v^\top \theta_M^{-1}\theta_v \right). (88)

The log-normalizer can be expressed using the ordinary parameters, λ=(μ,Σ), as:

F_\lambda(\lambda) = \frac{1}{2}\left( \lambda_v^\top \lambda_M^{-1}\lambda_v + \log|\lambda_M| + d\log 2\pi \right), (89)
= \frac{1}{2}\left( \mu^\top \Sigma^{-1}\mu + \log|\Sigma| + d\log 2\pi \right). (90)

The moment/expectation parameters [18,49] are

\eta = (\eta_v, \eta_M) = E[t(x)] = \nabla F(\theta). (91)

We report the conversion formula between the three types of coordinate systems (namely the ordinary parameter λ, the natural parameter θ and the moment parameter η) as follows:

\theta_v(\lambda) = \lambda_M^{-1}\lambda_v = \Sigma^{-1}\mu, \quad \theta_M(\lambda) = \frac{1}{2}\lambda_M^{-1} = \frac{1}{2}\Sigma^{-1}, \quad \lambda_v(\theta) = \frac{1}{2}\theta_M^{-1}\theta_v = \mu, \quad \lambda_M(\theta) = \frac{1}{2}\theta_M^{-1} = \Sigma, (92)
\eta_v(\theta) = \frac{1}{2}\theta_M^{-1}\theta_v, \quad \eta_M(\theta) = -\frac{1}{2}\theta_M^{-1} - \frac{1}{4}(\theta_M^{-1}\theta_v)(\theta_M^{-1}\theta_v)^\top, \quad \theta_v(\eta) = -(\eta_M + \eta_v\eta_v^\top)^{-1}\eta_v, \quad \theta_M(\eta) = -\frac{1}{2}(\eta_M + \eta_v\eta_v^\top)^{-1}, (93)
\lambda_v(\eta) = \eta_v = \mu, \quad \lambda_M(\eta) = -\eta_M - \eta_v\eta_v^\top = \Sigma, \quad \eta_v(\lambda) = \lambda_v = \mu, \quad \eta_M(\lambda) = -\lambda_M - \lambda_v\lambda_v^\top = -\Sigma - \mu\mu^\top. (94)

The dual Legendre convex conjugate [18,49] is

F_\eta^*(\eta) = -\frac{1}{2}\left( \log\left(1 + \eta_v^\top \eta_M^{-1}\eta_v\right) + \log|-\eta_M| + d(1+\log 2\pi) \right), (95)

and \theta = \nabla_\eta F_\eta^*(\eta).

We check the Fenchel–Young equality when \eta = \nabla F(\theta) and \theta = \nabla F^*(\eta):

F_\theta(\theta) + F_\eta^*(\eta) - \langle \theta, \eta \rangle = 0. (96)

The Kullback–Leibler divergence between two d-dimensional Gaussian distributions p_{(\mu_1,\Sigma_1)} and p_{(\mu_2,\Sigma_2)} (with \Delta\mu = \mu_2 - \mu_1) is

KL\left( p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)} \right) = \frac{1}{2}\left( \mathrm{tr}(\Sigma_2^{-1}\Sigma_1) + \Delta\mu^\top \Sigma_2^{-1}\Delta\mu + \log\frac{|\Sigma_2|}{|\Sigma_1|} - d \right) = KL(p_{\lambda_1} : p_{\lambda_2}). (97)

We check that KL(p_{(\mu,\Sigma)} : p_{(\mu,\Sigma)}) = 0 since \Delta\mu = 0 and \mathrm{tr}(\Sigma^{-1}\Sigma) = \mathrm{tr}(I) = d. Notice that when \Sigma_1 = \Sigma_2 = \Sigma, we have

KL\left( p_{(\mu_1,\Sigma)} : p_{(\mu_2,\Sigma)} \right) = \frac{1}{2}\Delta\mu^\top \Sigma^{-1}\Delta\mu = \frac{1}{2} D_{\Sigma^{-1}}(\mu_1,\mu_2), (98)

that is half the squared Mahalanobis distance for the precision matrix \Sigma^{-1} (a positive-definite matrix: \Sigma^{-1} \succ 0), where the Mahalanobis distance is defined for any positive-definite matrix Q \succ 0 as follows:

D_Q(p_1:p_2) = (p_1 - p_2)^\top Q (p_1 - p_2). (99)

The Kullback–Leibler divergence between two probability densities of the same exponential family amounts to a Bregman divergence [18]:

KL\left( p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)} \right) = KL(p_{\lambda_1} : p_{\lambda_2}) = B_F(\theta_2 : \theta_1) = B_{F^*}(\eta_1 : \eta_2), (100)

where the Bregman divergence is defined by

B_F(\theta:\theta') := F(\theta) - F(\theta') - \langle \theta - \theta', \nabla F(\theta') \rangle, (101)

with \eta = \nabla F(\theta). Define the canonical divergence [18]

A_F(\theta_1:\eta_2) = F(\theta_1) + F^*(\eta_2) - \langle \theta_1, \eta_2 \rangle = A_{F^*}(\eta_2:\theta_1), (102)

since F**=F. We have BF(θ1:θ2)=AF(θ1:η2).

Now, observe that p_\theta(0;\theta) = \exp(-F(\theta)) when \langle t(0), \theta \rangle = 0. In particular, this holds for the multivariate normal family. Thus, we have the following proposition.

Proposition 1. 

For the MVN family, we have

p_\theta(x; (\theta_1\theta_2)_\alpha) = \frac{p_\theta(x;\theta_1)^{1-\alpha}\, p_\theta(x;\theta_2)^{\alpha}}{Z_\alpha^G(p_{\theta_1}:p_{\theta_2})}, (103)

with the scaling normalization factor:

Z_\alpha^G(p_{\theta_1}:p_{\theta_2}) = \exp\left( -J_F^\alpha(\theta_1:\theta_2) \right) = \frac{p_\theta(0;\theta_1)^{1-\alpha}\, p_\theta(0;\theta_2)^{\alpha}}{p_\theta(0;(\theta_1\theta_2)_\alpha)}. (104)

More generally, we have for a k-dimensional weight vector α belonging to the (k1)-dimensional standard simplex:

Z_\alpha^G(p_{\theta_1},\ldots,p_{\theta_k}) = \frac{\prod_{i=1}^{k} p_\theta(0;\theta_i)^{\alpha_i}}{p_\theta(0;\bar{\theta})}, (105)

where \bar{\theta} = \sum_{i=1}^{k} \alpha_i \theta_i.

Finally, we state the formulas for the G-JS divergence between MVNs for the KL and reverse KL, respectively:

Corollary 1 (G-JSD between Gaussians).

The skew G-Jensen–Shannon divergence JS^{G_\alpha} and the dual skew G-Jensen–Shannon divergence JS_*^{G_\alpha} between two multivariate Gaussians N(\mu_1,\Sigma_1) and N(\mu_2,\Sigma_2) are

JS^{G_\alpha}\left( p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)} \right) = (1-\alpha)\, KL\left( p_{(\mu_1,\Sigma_1)} : p_{(\mu_\alpha,\Sigma_\alpha)} \right) + \alpha\, KL\left( p_{(\mu_2,\Sigma_2)} : p_{(\mu_\alpha,\Sigma_\alpha)} \right), (106)
= (1-\alpha)\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_1 \right) + \alpha\, B_F\left( (\theta_1\theta_2)_\alpha : \theta_2 \right), (107)
= \frac{1}{2}\left( \mathrm{tr}\left( \Sigma_\alpha^{-1}\left( (1-\alpha)\Sigma_1 + \alpha\Sigma_2 \right) \right) + \log\frac{|\Sigma_\alpha|}{|\Sigma_1|^{1-\alpha}|\Sigma_2|^{\alpha}} + (1-\alpha)(\mu_\alpha - \mu_1)^\top \Sigma_\alpha^{-1}(\mu_\alpha - \mu_1) + \alpha(\mu_\alpha - \mu_2)^\top \Sigma_\alpha^{-1}(\mu_\alpha - \mu_2) - d \right), (108)
JS_*^{G_\alpha}\left( p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)} \right) = (1-\alpha)\, KL\left( p_{(\mu_\alpha,\Sigma_\alpha)} : p_{(\mu_1,\Sigma_1)} \right) + \alpha\, KL\left( p_{(\mu_\alpha,\Sigma_\alpha)} : p_{(\mu_2,\Sigma_2)} \right), (109)
= (1-\alpha)\, B_F\left( \theta_1 : (\theta_1\theta_2)_\alpha \right) + \alpha\, B_F\left( \theta_2 : (\theta_1\theta_2)_\alpha \right), (110)
= J_F^\alpha(\theta_1:\theta_2), (111)
= \frac{1}{2}\left( (1-\alpha)\mu_1^\top\Sigma_1^{-1}\mu_1 + \alpha\mu_2^\top\Sigma_2^{-1}\mu_2 - \mu_\alpha^\top\Sigma_\alpha^{-1}\mu_\alpha + \log\frac{|\Sigma_1|^{1-\alpha}|\Sigma_2|^{\alpha}}{|\Sigma_\alpha|} \right), (112)

where

\Sigma_\alpha = (\Sigma_1\Sigma_2)_\alpha^{\Sigma} = \left( (1-\alpha)\Sigma_1^{-1} + \alpha\Sigma_2^{-1} \right)^{-1}, (113)

(matrix harmonic barycenter) and

\mu_\alpha = (\mu_1\mu_2)_\alpha^{\mu} = \Sigma_\alpha\left( (1-\alpha)\Sigma_1^{-1}\mu_1 + \alpha\Sigma_2^{-1}\mu_2 \right). (114)
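The formulas of Corollary 1 are straightforward to implement. The following Python sketch (illustrative code relying only on NumPy) computes the skew G-Jensen–Shannon divergence between two bivariate Gaussians via Equations (106), (113) and (114):

import numpy as np

def kl_mvn(mu1, S1, mu2, S2):
    # Equation (97): KL between two d-dimensional Gaussians
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    dmu = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + dmu @ S2inv @ dmu
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)) - d)

def g_jsd_mvn(mu1, S1, mu2, S2, alpha=0.5):
    S1inv, S2inv = np.linalg.inv(S1), np.linalg.inv(S2)
    Sa = np.linalg.inv((1 - alpha) * S1inv + alpha * S2inv)          # Equation (113)
    mua = Sa @ ((1 - alpha) * S1inv @ mu1 + alpha * S2inv @ mu2)     # Equation (114)
    return ((1 - alpha) * kl_mvn(mu1, S1, mua, Sa)
            + alpha * kl_mvn(mu2, S2, mua, Sa))                      # Equation (106)

mu1, S1 = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 0.5]])
mu2, S2 = np.array([1.0, -1.0]), np.array([[2.0, -0.3], [-0.3, 1.0]])
print(g_jsd_mvn(mu1, S1, mu2, S2, alpha=0.5))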

Notice that the α-skew Bhattacharyya distance [7]:

B_\alpha(p:q) = -\log \int_X p^{1-\alpha} q^{\alpha}\, d\mu (115)

between two members of the same exponential family amounts to an α-skew Jensen divergence between the corresponding natural parameters:

Bα(pθ1:pθ2)=JFα(θ1:θ2). (116)

A simple proof follows from the fact that

\int p_{(\theta_1\theta_2)_\alpha}(x)\, d\mu(x) = 1 = \int \frac{p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x)}{Z_\alpha^G(p_{\theta_1}:p_{\theta_2})}\, d\mu(x). (117)

Therefore, we have

\log 1 = 0 = \log \int p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x)\, d\mu(x) - \log Z_\alpha^G(p_{\theta_1}:p_{\theta_2}), (118)

with Z_\alpha^G(p_{\theta_1}:p_{\theta_2}) = \exp\left( -J_F^\alpha(\theta_1:\theta_2) \right). Thus, it follows that

B_\alpha(p_{\theta_1}:p_{\theta_2}) = -\log \int p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x)\, d\mu(x), (119)
= -\log Z_\alpha^G(p_{\theta_1}:p_{\theta_2}), (120)
= J_F^\alpha(\theta_1:\theta_2). (121)
Corollary 2. 

The JS-symmetrization of the reverse Kullback–Leibler divergence between densities of the same exponential family amounts to calculating a Jensen/Burbea–Rao divergence between the corresponding natural parameters.

4.1.2. Applications to k-Means Clustering

Let P={p1,,pn} denote a point set, and C={c1,,ck} denote a set of k (cluster) centers. The generalized k-means objective [23] with respect to a distance D is defined by:

E_D(P,C) = \frac{1}{n}\sum_{i=1}^{n} \min_{j \in \{1,\ldots,k\}} D(p_i : c_j). (122)

By defining the distance D(p,C)=minj{1,,k}D(p:cj) of a point to a set of points, we can rewrite compactly the objective function as ED(P,C)=1ni=1nD(pi,C). Denote by ED*(P,k) the minimum objective loss for a set of k=|C| clusters: ED*(P,k)=min|C|=kED(P,C). It is NP-hard [50] to compute ED*(P,k) when k>1 and the dimension d>1. The most common heuristic is Lloyd’s batched k-means [23] that yields a local minimum.

The performance of the probabilistic k-means++ initialization [51] has been extended to arbitrary distances in [52] as follows:

Theorem 3 

(Generalized k-means++ performance, [53]). Let κ1 and κ2 be two constants such that κ1 defines the quasi-triangular inequality property:

D(x:z) \leq \kappa_1\left( D(x:y) + D(y:z) \right), \quad \forall x,y,z \in \Delta_d, (123)

and κ2 handles the symmetry inequality:

D(x:y) \leq \kappa_2 D(y:x), \quad \forall x,y \in \Delta_d. (124)

Then the generalized k-means++ seeding guarantees with high probability a configuration C of cluster centers such that:

E_D(P,C) \leq 2\kappa_1^2(1+\kappa_2)(2+\log k)\, E_D^*(P,k). (125)
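The seeding procedure itself is simple to sketch: each new center is sampled with probability proportional to its divergence to the closest center already chosen. The Python snippet below is a generic illustration for an arbitrary divergence D (here an extended KL divergence between positive vectors, chosen only as an example); it is not the authors' implementation:

import numpy as np

def kmeanspp_seed(points, k, div, seed=0):
    # generalized k-means++ seeding: divergence-proportional sampling of the next center
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = [points[rng.integers(n)]]
    for _ in range(k - 1):
        d = np.array([min(div(p, c) for c in centers) for p in points])
        centers.append(points[rng.choice(n, p=d / d.sum())])
    return np.array(centers)

def extended_kl(p, q):
    # extended KL divergence between positive vectors (illustrative choice of D)
    return float(np.sum(p * np.log(p / q) - p + q))

rng = np.random.default_rng(1)
points = rng.random((200, 3)) + 0.1      # positive vectors
print(kmeanspp_seed(points, k=4, div=extended_kl))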

To bound the constants κ1 and κ2, we rewrite the generalized Jensen–Shannon divergences using quadratic form expressions: That is, using a squared Mahalanobis distance:

D_Q(p:q) = (p-q)^\top Q (p-q), (126)

for a positive-definite matrix Q0. Since the Bregman divergence can be interpreted as the tail of a first-order Taylor expansion, we have:

B_F(\theta_1:\theta_2) = \frac{1}{2}(\theta_1-\theta_2)^\top \nabla^2 F(\xi)\, (\theta_1-\theta_2), (127)

for some \xi \in \Theta (open convex). Similarly, the Jensen divergence can be interpreted as a Jensen–Bregman divergence, and thus we have

J_F(\theta_1:\theta_2) \leq \frac{1}{2}(\theta_1-\theta_2)^\top \nabla^2 F(\xi')\, (\theta_1-\theta_2), (128)

for some \xi' \in \Theta. More precisely, for a prescribed point set \{\theta_1,\ldots,\theta_n\}, we have \xi, \xi' \in \mathrm{CH}(\{\theta_1,\ldots,\theta_n\}), where CH denotes the closed convex hull. We can therefore upper bound \kappa_1 and \kappa_2 using the ratio of the maximal to the minimal eigenvalues of \nabla^2 F(\theta) over \mathrm{CH}(\{\theta_1,\ldots,\theta_n\}). See [54] for further details.

A centroid for a set of parameters θ1,,θn is defined as the minimizer of the functional

E_D(\theta) = \frac{1}{n}\sum_i D(\theta_i : \theta). (129)

In particular, the symmetrized Bregman centroids have been studied in [55] (for JSGα), and the Jensen centroids (for JS*Gα) have been investigated in [7] using the convex-concave iterative procedure.

4.2. The Harmonic Jensen–Shannon Divergence (H-JS)

The principle to get closed-form formulas for generalized Jensen–Shannon divergences between distributions belonging to a parametric family P_\Theta = \{p_\theta : \theta \in \Theta\} consists of finding an abstract mean M such that the M-mixture (p_{\theta_1} p_{\theta_2})_\alpha^M belongs to the family P_\Theta. In particular, when Θ is a convex domain, we seek a mean M such that (p_{\theta_1} p_{\theta_2})_\alpha^M = p_{(\theta_1\theta_2)_\alpha} with (\theta_1\theta_2)_\alpha \in \Theta.

Let us consider the weighted harmonic mean [34] (induced by the harmonic mean) H:

H_\alpha(x,y) := \frac{1}{(1-\alpha)\frac{1}{x} + \alpha\frac{1}{y}} = \frac{xy}{(1-\alpha)y + \alpha x} = \frac{xy}{(xy)_{1-\alpha}}, \quad \alpha \in [0,1]. (130)

The harmonic mean is a quasi-arithmetic mean H_\alpha(x,y) = M_\alpha^h(x,y) obtained for the monotone (decreasing) generator h(u) = \frac{1}{u} (or equivalently for the increasing monotone generator h(u) = -\frac{1}{u}).

This harmonic mean is well-suited for the scale family C of Cauchy probability distributions (also called Lorentzian distributions):

C_\Gamma := \left\{ p_\gamma(x) = \frac{1}{\gamma} p_{\mathrm{std}}\left(\frac{x}{\gamma}\right) = \frac{\gamma}{\pi(\gamma^2 + x^2)} : \gamma \in \Gamma = (0,\infty) \right\}, (131)

where γ denotes the scale and p_{\mathrm{std}}(x) = \frac{1}{\pi(1+x^2)} the standard Cauchy distribution.

Using the computer algebra system Maxima (http://maxima.sourceforge.net/) we find that (see Appendix B)

(p_{\gamma_1} p_{\gamma_2})_\alpha^H(x) = \frac{H_\alpha(p_{\gamma_1}(x) : p_{\gamma_2}(x))}{Z_\alpha^H(\gamma_1,\gamma_2)} = p_{(\gamma_1\gamma_2)_\alpha}(x), (132)

where the normalizing coefficient is

Z_\alpha^H(\gamma_1,\gamma_2) := \frac{\sqrt{\gamma_1\gamma_2}}{\sqrt{(\gamma_1\gamma_2)_\alpha\, (\gamma_1\gamma_2)_{1-\alpha}}} = \frac{\sqrt{\gamma_1\gamma_2}}{\sqrt{(\gamma_1\gamma_2)_\alpha\, (\gamma_2\gamma_1)_\alpha}}, (133)

since we have (\gamma_1\gamma_2)_{1-\alpha} = (\gamma_2\gamma_1)_\alpha.

The H-Jensen–Shannon symmetrization of a distance D between distributions writes as:

JS_D^{H_\alpha}(p:q) = (1-\alpha)\, D(p : (pq)_\alpha^H) + \alpha\, D(q : (pq)_\alpha^H), (134)

where H_\alpha denotes the weighted harmonic mean. When D is available in closed form for distributions belonging to the scale Cauchy distributions, so is JS_D^{H_\alpha}(p:q).

For example, consider the KL divergence formula between two scale Cauchy distributions:

KL(p_{\gamma_1}:p_{\gamma_2}) = 2\log\frac{A(\gamma_1,\gamma_2)}{G(\gamma_1,\gamma_2)} = 2\log\frac{\gamma_1+\gamma_2}{2\sqrt{\gamma_1\gamma_2}}, (135)

where A and G denote the arithmetic and geometric means, respectively. The formula initially reported in [56] has been corrected by the authors. Since A \geq G (and \frac{A}{G} \geq 1), it follows that KL(p_{\gamma_1}:p_{\gamma_2}) \geq 0. Notice that the KL divergence is symmetric for Cauchy scale distributions. We note in passing that for exponential families, the KL divergence is symmetric only for the location Gaussian family (since the only symmetric Bregman divergences are the squared Mahalanobis distances [57]). The cross-entropy between scale Cauchy distributions is h^{\times}(p_{\gamma_1}:p_{\gamma_2}) = \log\frac{\pi(\gamma_1+\gamma_2)^2}{\gamma_2}, and the differential entropy is h(p_\gamma) = h^{\times}(p_\gamma:p_\gamma) = \log(4\pi\gamma).

Then the H-JS divergence between p=pγ1 and q=pγ2 is:

JS^H(p:q) = \frac{1}{2}\left( KL\left(p : (pq)_{\frac{1}{2}}^H\right) + KL\left(q : (pq)_{\frac{1}{2}}^H\right) \right), (136)
JS^H(p_{\gamma_1}:p_{\gamma_2}) = \frac{1}{2}\left( KL\left(p_{\gamma_1} : p_{\frac{\gamma_1+\gamma_2}{2}}\right) + KL\left(p_{\gamma_2} : p_{\frac{\gamma_1+\gamma_2}{2}}\right) \right), (137)
= \log\frac{(3\gamma_1+\gamma_2)(3\gamma_2+\gamma_1)}{8\sqrt{\gamma_1\gamma_2}\,(\gamma_1+\gamma_2)}. (138)

We check that when γ1=γ2=γ, we have JSHα(pγ:pγ)=0.

Theorem 4 (Harmonic JSD between scale Cauchy distributions).

The harmonic Jensen–Shannon divergence between two scale Cauchy distributions p_{\gamma_1} and p_{\gamma_2} is JS^H(p_{\gamma_1}:p_{\gamma_2}) = \log\frac{(3\gamma_1+\gamma_2)(3\gamma_2+\gamma_1)}{8\sqrt{\gamma_1\gamma_2}\,(\gamma_1+\gamma_2)}.

Let us report some numerical examples: Consider \gamma_1=0.1 and \gamma_2=0.5; we find that JS^H(p_{\gamma_1}:p_{\gamma_2}) \approx 0.176. When \gamma_1=0.2 and \gamma_2=0.8, we find that JS^H(p_{\gamma_1}:p_{\gamma_2}) \approx 0.129.
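These values are reproduced by evaluating the closed form of Theorem 4, as in the short Python sketch below (illustrative code); the last line checks Equation (137) by averaging the two KL divergences of Equation (135) to the Cauchy density of scale (γ1+γ2)/2:

import numpy as np

def kl_cauchy(g1, g2):
    # Equation (135): KL between scale Cauchy distributions (symmetric)
    return 2.0 * np.log((g1 + g2) / (2.0 * np.sqrt(g1 * g2)))

def hjs_cauchy(g1, g2):
    # Theorem 4: harmonic Jensen-Shannon divergence between scale Cauchy distributions
    return np.log((3 * g1 + g2) * (3 * g2 + g1) / (8 * np.sqrt(g1 * g2) * (g1 + g2)))

print(hjs_cauchy(0.1, 0.5))   # ~0.176
print(hjs_cauchy(0.2, 0.8))   # ~0.129
g1, g2 = 0.1, 0.5
m = 0.5 * (g1 + g2)
print(0.5 * (kl_cauchy(g1, m) + kl_cauchy(g2, m)))   # same value via Equation (137)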

Notice that the KL formula is scale-invariant, and this property holds for any scale family:

Lemma 1. 

The Kullback–Leibler divergence between two distributions p_{s_1} and p_{s_2} belonging to the same scale family \{p_s(x) = \frac{1}{s}p\left(\frac{x}{s}\right)\}_{s \in (0,\infty)} with standard density p is scale-invariant: KL(p_{\lambda s_1} : p_{\lambda s_2}) = KL(p_{s_1} : p_{s_2}) = KL(p : p_{\frac{s_2}{s_1}}) = KL(p_{\frac{s_1}{s_2}} : p) for any \lambda > 0.

A direct proof follows from a change of variable in the KL integral with y = \frac{x}{\lambda} and dx = \lambda\, dy. Please note that although the KLD between scale Cauchy distributions is symmetric, it is not the case for all scale families: For example, the Rayleigh distributions form a scale family with the KLD amounting to an asymmetric Itakura–Saito (Bregman) divergence between parameters [37].

Instead of the KLD, we can choose the total variation distance for which a formula has been reported in [38] between two Cauchy distributions. Notice that the Cauchy distributions are α-stable distributions for α=1 and q-Gaussian distributions for q=2 ([58], p. 104). A closed-form formula for the divergence between two q-Gaussians is given in [58] when q<2. The definite integral h_q(p) = \int_{-\infty}^{+\infty} p(x)^q\, d\mu is available in closed form for Cauchy distributions. When q=2, we have h_2(p_\gamma) = \frac{1}{2\pi\gamma}.

We refer to [38] for yet other illustrative examples considering the family of Pearson type VII distributions and central multivariate t-distributions which use the power means (quasi-arithmetic means Mh induced by h(u)=uα for α>0) for defining mixtures.

Table 1 summarizes the various examples introduced in the paper.

Table 1.

Summary of the weighted means M chosen according to the parametric family in order to ensure that the family is closed under M-mixtures: (p_{\theta_1} p_{\theta_2})_\alpha^M = p_{(\theta_1\theta_2)_\alpha}.

JS^{M_\alpha} | Mean M | Parametric Family | Z_\alpha^M(p:q)
JS^{A_\alpha} | arithmetic A | mixture family | Z_\alpha^A(\theta_1:\theta_2) = 1
JS^{G_\alpha} | geometric G | exponential family | Z_\alpha^G(\theta_1:\theta_2) = \exp(-J_F^\alpha(\theta_1:\theta_2))
JS^{H_\alpha} | harmonic H | Cauchy scale family | Z_\alpha^H(\theta_1:\theta_2) = \frac{\sqrt{\theta_1\theta_2}}{\sqrt{(\theta_1\theta_2)_\alpha (\theta_1\theta_2)_{1-\alpha}}}

4.3. The M-Jensen–Shannon Matrix Distances

In this section, we consider distances between matrices which play an important role in quantum computing [59,60]. We refer to [61] for the matrix Jensen–Bregman logdet divergence. The Hellinger distance can be interpreted as the difference of an arithmetic mean A and a geometric mean G:

D_H(p,q) = \sqrt{1 - \int_X \sqrt{p(x)q(x)}\, d\mu(x)} = \sqrt{\int_X \left( A(p(x),q(x)) - G(p(x),q(x)) \right) d\mu(x)}. (139)

Notice that since A \geq G, we have D_H(p,q) \geq 0. The scaled and squared Hellinger distance is an α-divergence I_\alpha for \alpha=0. Recall that the α-divergence can be interpreted as the difference of a weighted arithmetic mean minus a weighted geometric mean.

In general, if a mean M1 dominates a mean M2, we may define the distance as

D_{M_1,M_2}(p,q) = \int_X \left( M_1(p,q) - M_2(p,q) \right) d\mu(x). (140)

When considering matrices [62], there is not a unique definition of a geometric matrix mean, and thus we have different notions of matrix Hellinger distances [62], some of them are divergences (smooth distances defining a dualistic structure in information geometry).

We define the matrix M-Jensen–Shannon divergence for a matrix divergence [63,64] D as follows:

JS_D^M(X_1,X_2) = \frac{1}{2}\left( D(X_1 : M(X_1,X_2)) + D(X_2 : M(X_1,X_2)) \right) = JS_D^M(X_2,X_1). (141)

For example, we can choose the von Neumann matrix divergence [63]:

D_{\mathrm{vN}}(X_1:X_2) := \mathrm{tr}\left( X_1\log X_1 - X_1\log X_2 - X_1 + X_2 \right), (142)

or the LogDet matrix divergence [63]:

D_{\mathrm{ld}}(X_1:X_2) := \mathrm{tr}(X_1 X_2^{-1}) - \log|X_1 X_2^{-1}| - d, (143)

where square matrices X1 and X2 have dimension d.
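As an illustration, the two matrix divergences and the matrix JS-symmetrization of Equation (141) can be evaluated with a few lines of NumPy; the sketch below (our own illustrative code) uses the arithmetic matrix mean M(X1,X2)=(X1+X2)/2 as one possible choice of matrix mean:

import numpy as np

def logm_spd(X):
    # matrix logarithm of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def d_von_neumann(X1, X2):
    # Equation (142): von Neumann matrix divergence
    return float(np.trace(X1 @ logm_spd(X1) - X1 @ logm_spd(X2) - X1 + X2))

def d_logdet(X1, X2):
    # Equation (143): LogDet matrix divergence
    d = X1.shape[0]
    M = X1 @ np.linalg.inv(X2)
    return float(np.trace(M) - np.log(np.linalg.det(M)) - d)

def matrix_js(X1, X2, div):
    # Equation (141) with the arithmetic matrix mean M(X1, X2) = (X1 + X2)/2
    M = 0.5 * (X1 + X2)
    return 0.5 * (div(X1, M) + div(X2, M))

X1 = np.array([[2.0, 0.5], [0.5, 1.0]])
X2 = np.array([[1.0, -0.2], [-0.2, 3.0]])
print(matrix_js(X1, X2, d_von_neumann), matrix_js(X1, X2, d_logdet))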

5. Conclusions and Perspectives

We introduced a generalization of the celebrated Jensen–Shannon divergence [6], termed the (M,N)-Jensen–Shannon divergences, based on M-mixtures derived from abstract means M. This new family of divergences includes the ordinary Jensen–Shannon divergence when both M and N are set to the arithmetic mean. We reported closed-form expressions of the M Jensen–Shannon divergences for mixture families and exponential families in information geometry by choosing the arithmetic and geometric weighted mean, respectively. The α-skewed geometric Jensen–Shannon divergence (G-Jensen–Shannon divergence) between densities pθ1 and pθ2 of the same exponential family with cumulant function F is

JS_{KL}^{G_\alpha}[p_{\theta_1}:p_{\theta_2}] = JS_{B_F^*}^{A_\alpha}(\theta_1:\theta_2).

Here, we used the bracket notation to emphasize that the statistical distance JS_{KL}^{G_\alpha} is between densities, and the parenthesis notation to emphasize that the distance JS_{B_F^*}^{A_\alpha} is between parameters. We also have JS_{KL^*}^{G_\alpha}[p_{\theta_1}:p_{\theta_2}] = J_F^\alpha(\theta_1:\theta_2). We also showed how to get a closed-form formula for the harmonic Jensen–Shannon divergence of Cauchy scale distributions by taking harmonic mixtures.

For an arbitrary distance D, we define the skew N-Jeffreys symmetrization:

JDNβ(p1:p2)=Nβ(D(p1:p2),D(p2:p1)), (144)

and the skew (M,N)-JS-symmetrization:

JSDMα,Nβ(p1:p2)=Nβ(D(p1:(p1p2)αM),D(p2:(p1p2)αM)). (145)

A Java™ source code for computing the geometric Jensen–Shannon divergence between multivariate Gaussian distributions is available online at https://franknielsen.github.io/M-JS/.

Appendix A. Summary of Distances and Their Notations

Table A1 lists the main distances with their notations.

Table A1.

Summary of Distances and Their Notations.

Weighted mean M_\alpha, \alpha \in (0,1)
Arithmetic mean A_\alpha(x,y) = (1-\alpha)x + \alpha y
Geometric mean G_\alpha(x,y) = x^{1-\alpha} y^{\alpha}
Harmonic mean H_\alpha(x,y) = \frac{xy}{(1-\alpha)y + \alpha x}
Power mean P_\alpha^p(x,y) = ((1-\alpha)x^p + \alpha y^p)^{\frac{1}{p}}, p \in \mathbb{R}\setminus\{0\}, \lim_{p\to 0} P_\alpha^p = G_\alpha
Quasi-arithmetic mean M_\alpha^f(x,y) = f^{-1}((1-\alpha)f(x) + \alpha f(y)), f strictly monotone
M-mixture (pq)_\alpha^M(x) = \frac{M_\alpha(p(x),q(x))}{Z_\alpha^M(p,q)}
with Z_\alpha^M(p,q) = \int_{t\in X} M_\alpha(p(t),q(t))\, d\mu(t)
Statistical distance D(p:q)
Dual/reverse distance D^* D^*(p:q) := D(q:p)
Kullback-Leibler divergence KL(p:q) = \int p(x)\log\frac{p(x)}{q(x)}\, d\mu(x)
reverse Kullback-Leibler divergence KL^*(p:q) = KL(q:p) = \int q(x)\log\frac{q(x)}{p(x)}\, d\mu(x)
Jeffreys divergence J(p;q) = KL(p:q) + KL(q:p) = \int (p(x)-q(x))\log\frac{p(x)}{q(x)}\, d\mu(x)
Resistor divergence \frac{1}{R(p;q)} = \frac{1}{2}\left(\frac{1}{KL(p:q)} + \frac{1}{KL(q:p)}\right), R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}
skew K-divergence K_\alpha(p:q) = \int p(x)\log\frac{p(x)}{(1-\alpha)p(x)+\alpha q(x)}\, d\mu(x)
Jensen-Shannon divergence JS(p,q) = \frac{1}{2}\left(KL\left(p:\frac{p+q}{2}\right) + KL\left(q:\frac{p+q}{2}\right)\right)
skew Bhattacharyya divergence B_\alpha(p:q) = -\log\int_X p(x)^{1-\alpha} q(x)^{\alpha}\, d\mu(x)
Hellinger distance D_H(p,q) = \sqrt{1 - \int_X \sqrt{p(x)q(x)}\, d\mu(x)}
α-divergences I_\alpha(p:q) = \int\left(\alpha p(x) + (1-\alpha)q(x) - p(x)^{\alpha} q(x)^{1-\alpha}\right) d\mu(x), \alpha \in \mathbb{R}\setminus\{0,1\}
I_\alpha(p:q) = \int\left(A_\alpha(q:p) - G_\alpha(q:p)\right) d\mu
Mahalanobis distance D_Q(p:q) = (p-q)^\top Q (p-q) for a positive-definite matrix Q \succ 0
f-divergence I_f(p:q) = \int p(x) f\left(\frac{q(x)}{p(x)}\right) d\mu(x), with f(1) = f'(1) = 0,
f strictly convex at 1
reverse f-divergence I_f^*(p:q) = \int q(x) f\left(\frac{p(x)}{q(x)}\right) d\mu(x) = I_{f^*}(p:q)
for f^*(u) = u f\left(\frac{1}{u}\right)
J-symmetrized f-divergence J_f(p;q) = \frac{1}{2}\left(I_f(p:q) + I_f(q:p)\right)
JS-symmetrized f-divergence I_f^\alpha(p;q) := (1-\alpha) I_f(p:(pq)_\alpha) + \alpha I_f(q:(pq)_\alpha) = I_{f_\alpha^{JS}}(p:q)
for f_\alpha^{JS}(u) := (1-\alpha) f(\alpha u + 1-\alpha) + \alpha u f\left(\alpha + \frac{1-\alpha}{u}\right)
Parameter distance
Bregman divergence B_F(\theta:\theta') := F(\theta) - F(\theta') - \langle\theta - \theta', \nabla F(\theta')\rangle
skew Jeffreys-Bregman divergence S_F^\alpha(\theta:\theta') = (1-\alpha) B_F(\theta:\theta') + \alpha B_F(\theta':\theta)
skew Jensen divergence J_F^\alpha(\theta:\theta') := (F(\theta) F(\theta'))_\alpha - F((\theta\theta')_\alpha)
Jensen-Bregman divergence JB_F(\theta;\theta') = \frac{1}{2}\left(B_F\left(\theta:\frac{\theta+\theta'}{2}\right) + B_F\left(\theta':\frac{\theta+\theta'}{2}\right)\right) = J_F(\theta;\theta')
Generalized Jensen-Shannon divergences
skew J-symmetrization JD_\alpha(p:q) := (1-\alpha) D(p:q) + \alpha D(q:p)
skew JS-symmetrization JSD_\alpha(p:q) := (1-\alpha) D(p:(1-\alpha)p + \alpha q) + \alpha D(q:(1-\alpha)p + \alpha q)
skew M-Jensen-Shannon divergence JS^{M_\alpha}(p:q) := (1-\alpha) KL(p:(pq)_\alpha^M) + \alpha KL(q:(pq)_\alpha^M)
skew M-JS-symmetrization JS_D^{M_\alpha}(p:q) := (1-\alpha) D(p:(pq)_\alpha^M) + \alpha D(q:(pq)_\alpha^M)
N-Jeffreys divergence J^{N_\beta}(p:q) := N_\beta(KL(p:q), KL(q:p))
N-J D divergence J_D^{N_\beta}(p:q) = N_\beta(D(p:q), D(q:p))
skew (M,N)-D JS divergence JS_D^{M_\alpha,N_\beta}(p:q) := N_\beta\left(D(p:(pq)_\alpha^M), D(q:(pq)_\alpha^M)\right)

Appendix B. Symbolic Calculations in Maxima

The program below calculates the normalizer Z for the harmonic H-mixtures of Cauchy distributions (Equation (133)).

assume(gamma>0);
Cauchy(x,gamma) := gamma/(%pi*(x**2+gamma**2));
assume(alpha>0);
assume(alpha<1);
h(x,y,alpha) := (x*y)/((1-alpha)*y+alpha*x);
assume(gamma1>0);
assume(gamma2>0);
m(x,alpha) := ratsimp(h(Cauchy(x,gamma1),Cauchy(x,gamma2),alpha));
/* calculate Z */
integrate(m(x,alpha),x,-inf,inf);

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  • 1.Billingsley P. Probability and Measure. John Wiley & Sons; Hoboken, NJ, USA: 2008. [Google Scholar]
  • 2.Cover T.M., Thomas J.A. Elements of Information Theory. John Wiley & Sons; Hoboken, NJ, USA: 2012. [Google Scholar]
  • 3.Ho S.W., Yeung R.W. On the discontinuity of the Shannon information measures; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Adelaide, Australia. 4–9 September 2005; pp. 159–163. [Google Scholar]
  • 4.Nielsen F. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Process. Lett. 2013;20:657–660. doi: 10.1109/LSP.2013.2260538. [DOI] [Google Scholar]
  • 5.Johnson D., Sinanovic S. Symmetrizing the Kullback-Leibler Distance. [(accessed on 11 May 2019)];2001 Technical report of Rice University (US) Available online: https://scholarship.rice.edu/handle/1911/19969.
  • 6.Lin J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory. 1991;37:145–151. doi: 10.1109/18.61115. [DOI] [Google Scholar]
  • 7.Nielsen F., Boltz S. The Burbea-Rao and Bhattacharyya centroids. IEEE Trans. Inf. Theory. 2011;57:5455–5466. doi: 10.1109/TIT.2011.2159046. [DOI] [Google Scholar]
  • 8.Vajda I. On metric divergences of probability measures. Kybernetika. 2009;45:885–900. [Google Scholar]
  • 9.Fuglede B., Topsoe F. Jensen-Shannon divergence and Hilbert space embedding; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Waikiki, HI, USA. 29 June–4 July 2014; p. 31. [Google Scholar]
  • 10.Sims G.E., Jun S.R., Wu G.A., Kim S.H. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc. Natl. Acad. Sci. USA. 2009;106:2677–2682. doi: 10.1073/pnas.0813249106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.DeDeo S., Hawkins R.X., Klingenstein S., Hitchcock T. Bootstrap methods for the empirical study of decision-making and information flows in social systems. Entropy. 2013;15:2246–2276. doi: 10.3390/e15062246. [DOI] [Google Scholar]
  • 12.Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Advances in Neural Information Processing Systems. Curran Associates, Inc.; Red Hook, NY, USA: 2014. Generative adversarial nets; pp. 2672–2680. [Google Scholar]
  • 13.Wang Y., Woods K., McClain M. Information-theoretic matching of two point sets. IEEE Trans. Image Process. 2002;11:868–872. doi: 10.1109/TIP.2002.801120. [DOI] [PubMed] [Google Scholar]
  • 14.Peter A.M., Rangarajan A. Information geometry for landmark shape analysis: Unifying shape representation and deformation. IEEE Trans. Pattern Anal. Mach. Intell. 2009;31:337–350. doi: 10.1109/TPAMI.2008.69. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Nielsen F., Sun K. Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy. 2016;18:442. doi: 10.3390/e18120442. [DOI] [Google Scholar]
  • 16.Wang F., Syeda-Mahmood T., Vemuri B.C., Beymer D., Rangarajan A. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) Springer; Berlin, Germany: 2009. Closed-form Jensen-Rényi divergence for mixture of Gaussians and applications to group-wise shape registration; pp. 648–655. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Nielsen F. Closed-form information-theoretic divergences for statistical mixtures; Proceedings of the IEEE 21st International Conference on Pattern Recognition (ICPR2012); Tsukuba, Japan. 11–15 November 2012; pp. 1723–1726. [Google Scholar]
  • 18.Amari S.I. Information Geometry and Its Applications. Springer; Berlin, Germany: 2016. [Google Scholar]
  • 19.Csiszár I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung. 1967;2:229–318. [Google Scholar]
  • 20.Eguchi S. Geometry of minimum contrast. Hiroshima Math. J. 1992;22:631–647. doi: 10.32917/hmj/1206128508. [DOI] [Google Scholar]
  • 21.Amari S.I., Cichocki A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. Tech. Sci. 2010;58:183–195. doi: 10.2478/v10175-010-0019-1. [DOI] [Google Scholar]
  • 22.Ciaglia F.M., Di Cosmo F., Felice D., Mancini S., Marmo G., Pérez-Pardo J.M. Hamilton-Jacobi approach to potential functions in information geometry. J. Math. Phys. 2017;58:063506. doi: 10.1063/1.4984941. [DOI] [Google Scholar]
  • 23.Banerjee A., Merugu S., Dhillon I.S., Ghosh J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005;6:1705–1749. [Google Scholar]
  • 24.Nielsen F. A family of statistical symmetric divergences based on Jensen’s inequality. arXiv. 20101009.4004 [Google Scholar]
  • 25.Chen P., Chen Y., Rao M. Metrics defined by Bregman divergences. Commun. Math. Sci. 2008;6:915–926. doi: 10.4310/CMS.2008.v6.n4.a6. [DOI] [Google Scholar]
  • 26.Chen P., Chen Y., Rao M. Metrics defined by Bregman divergences: Part 2. Commun. Math. Sci. 2008;6:927–948. doi: 10.4310/CMS.2008.v6.n4.a7. [DOI] [Google Scholar]
  • 27.Kafka P., Österreicher F., Vincze I. On powers of f-divergences defining a distance. Stud. Sci. Math. Hung. 1991;26:415–422. [Google Scholar]
  • 28.Österreicher F., Vajda I. A new class of metric divergences on probability spaces and its applicability in statistics. Ann. Inst. Stat. Math. 2003;55:639–653. doi: 10.1007/BF02517812. [DOI] [Google Scholar]
  • 29.Nielsen F., Nock R. On the geometry of mixtures of prescribed distributions; In Proceeding of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Calgary, AB, Canada. 15–20 Aprli 2018; pp. 2861–2865. [Google Scholar]
  • 30.Nielsen F., Hadjeres G. Monte Carlo Information Geometry: The dually flat case. arXiv. 20181803.07225 [Google Scholar]
  • 31.Watanabe S., Yamazaki K., Aoyagi M. Kullback information of normal mixture is not an analytic function. IEICE Tech. Rep. Neurocomput. 2004;104:41–46. [Google Scholar]
  • 32.Nielsen F., Nock R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2014;21:10–13. doi: 10.1109/LSP.2013.2288355. [DOI] [Google Scholar]
  • 33.Nielsen F., Hadjeres G. On power chi expansions of f-divergences. arXiv. 20191903.05818 [Google Scholar]
  • 34.Niculescu C., Persson L.E. Convex Functions and Their Applications. 2nd ed. Springer; Berlin, Germany: 2018. [Google Scholar]
  • 35.Rényi A. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California; Oakland, CA, USA: 1961. On measures of entropy and information. [Google Scholar]
  • 36.McLachlan G.J., Lee S.X., Rathnayake S.I. Finite mixture models. Ann. Rev. Stat. Appl. 2019;6:355–378. doi: 10.1146/annurev-statistics-031017-100325. [DOI] [Google Scholar]
  • 37.Nielsen F., Garcia V. Statistical exponential families: A digest with flash cards. arXiv. 20090911.4863 [Google Scholar]
  • 38.Nielsen F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett. 2014;42:25–34. doi: 10.1016/j.patrec.2014.01.002. [DOI] [Google Scholar]
  • 39.Eguchi S., Komori O. Geometric Science of Information (GSI) Springer; Cham, Switzerland: 2015. Path connectedness on a space of probability density functions; pp. 615–624. [Google Scholar]
  • 40.Eguchi S., Komori O., Ohara A. Information Geometry and its Applications IV. Springer; Berlin, Germany: 2016. Information geometry associated with generalized means; pp. 279–295. [Google Scholar]
  • 41.Asadi M., Ebrahimi N., Kharazmi O., Soofi E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory. 2019;65:2316–2321. doi: 10.1109/TIT.2018.2877608. [DOI] [Google Scholar]
  • 42.Amari S.I. Integration of stochastic models by minimizing α-divergence. Neural Comput. 2007;19:2780–2796. doi: 10.1162/neco.2007.19.10.2780. [DOI] [PubMed] [Google Scholar]
  • 43.Nielsen F., Nock R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017;24:1123–1127. doi: 10.1109/LSP.2017.2712195. [DOI] [Google Scholar]
  • 44.Lee L. Measures of distributional similarity; Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, Association for Computational Linguistics; Stroudsburg, PA, USA. 20–26 June 1999; pp. 25–32. [DOI] [Google Scholar]
  • 45.Nielsen F. The statistical Minkowski distances: Closed-form formula for Gaussian mixture models. arXiv. 20191901.03732 [Google Scholar]
  • 46.Zhang J. Reference duality and representation duality in information geometry. AIP Conf. Proc. 2015;1641:130–146. [Google Scholar]
  • 47.Yoshizawa S., Tanabe K. Dual differential geometry associated with the Kullback-Leibler information on the Gaussian distributions and its 2-parameter deformations. SUT J. Math. 1999;35:113–137. [Google Scholar]
  • 48.Nielsen F., Nock R. A closed-form expression for the Sharma–Mittal entropy of exponential families. J. Phys. A Math. Theor. 2011;45:032003. doi: 10.1088/1751-8113/45/3/032003. [DOI] [Google Scholar]
  • 49.Nielsen F. An elementary introduction to information geometry. arXiv. 2018 doi: 10.3390/e22101100.1808.08271 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Nielsen F., Nock R. Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE Signal Process. Lett. 2014;21:1289–1292. doi: 10.1109/LSP.2014.2333001. [DOI] [Google Scholar]
  • 51.Arthur D., Vassilvitskii S. Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. ACM; New York, NY, USA: 2007. k-means++: The advantages of careful seeding; pp. 1027–1035. [Google Scholar]
  • 52.Nielsen F., Nock R., Amari S.I. On clustering histograms with k-means by using mixed α-divergences. Entropy. 2014;16:3273–3301. doi: 10.3390/e16063273. [DOI] [Google Scholar]
  • 53.Nielsen F., Nock R. Total Jensen divergences: definition, properties and clustering; Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Brisbane, QLD, Australia. 19–24 August 2015; pp. 2016–2020. [Google Scholar]
  • 54.Ackermann M.R., Blömer J. Scandinavian Workshop on Algorithm Theory. Springer; Berlin, Germany: 2010. Bregman clustering for separable instances; pp. 212–223. [Google Scholar]
  • 55.Nielsen F., Nock R. Sided and symmetrized Bregman centroids. IEEE Trans. Inf. Theory. 2009;55:2882–2904. doi: 10.1109/TIT.2009.2018176. [DOI] [Google Scholar]
  • 56.Tzagkarakis G., Tsakalides P. A statistical approach to texture image retrieval via alpha-stable modeling of wavelet decompositions; Proceedings of the 5th International Workshop on Image Analysis for Multimedia Interactive Services, Instituto Superior Técnico; Lisboa, Portugal. 21–23 April 2004; pp. 21–23. [Google Scholar]
  • 57.Boissonnat J.D., Nielsen F., Nock R. Bregman Voronoi diagrams. Discrete Comput. Geom. 2010;44:281–307. doi: 10.1007/s00454-010-9256-1. [DOI] [Google Scholar]
  • 58.Naudts J. Generalised Thermostatistics. Springer Science & Business Media; Berlin, Germany: 2011. [Google Scholar]
  • 59.Briët J., Harremoës P. Properties of classical and quantum Jensen-Shannon divergence. Phys. Rev. A. 2009;79:052311. doi: 10.1103/PhysRevA.79.052311. [DOI] [Google Scholar]
  • 60.Audenaert K.M. Quantum skew divergence. J. Math. Phys. 2014;55:112202. doi: 10.1063/1.4901039. [DOI] [Google Scholar]
  • 61.Cherian A., Sra S., Banerjee A., Papanikolopoulos N. Jensen-Bregman logdet divergence with application to efficient similarity search for covariance matrices. IEEE Trans. Pattern Anal. Mach. Intell. 2013;35:2161–2174. doi: 10.1109/TPAMI.2012.259. [DOI] [PubMed] [Google Scholar]
  • 62.Bhatia R., Jain T., Lim Y. Strong convexity of sandwiched entropies and related optimization problems. Rev. Math. Phys. 2018;30:1850014. doi: 10.1142/S0129055X18500149. [DOI] [Google Scholar]
  • 63.Kulis B., Sustik M.A., Dhillon I.S. Low-rank kernel learning with Bregman matrix divergences. J. Mach. Learn. Res. 2009;10:341–376. [Google Scholar]
  • 64.Nock R., Magdalou B., Briys E., Nielsen F. Matrix Information Geometry. Springer; Berlin, Germany: 2013. Mining matrix data with Bregman matrix divergences for portfolio selection; pp. 373–402. [Google Scholar]
