Abstract
The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrative example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.
Keywords: Jensen–Shannon divergence, Jeffreys divergence, resistor average distance, Bhattacharyya distance, f-divergence, Jensen/Burbea–Rao divergence, Bregman divergence, abstract weighted mean, quasi-arithmetic mean, mixture family, statistical M-mixture, exponential family, Gaussian family, Cauchy scale family, clustering
1. Introduction and Motivations
1.1. Kullback–Leibler Divergence and Its Symmetrizations
Let $(\mathcal{X},\mathcal{F})$ be a measurable space [1], where $\mathcal{X}$ denotes the sample space and $\mathcal{F}$ the $\sigma$-algebra of measurable events. Consider a positive measure $\mu$ (usually the Lebesgue measure with the Borel $\sigma$-algebra or the counting measure with the power set $\sigma$-algebra). Denote by $\mathcal{P}$ the set of probability distributions.
The Kullback–Leibler Divergence [2] (KLD) is the most fundamental distance [2] between probability distributions, defined by:
$$\mathrm{KL}(P:Q) := \int_{\mathcal{X}} p(x)\,\log\frac{p(x)}{q(x)}\,\mathrm{d}\mu(x), \qquad (1)$$
where p and q denote the Radon–Nikodym derivatives of the probability measures P and Q with respect to $\mu$ (with $p=\frac{\mathrm{d}P}{\mathrm{d}\mu}$ and $q=\frac{\mathrm{d}Q}{\mathrm{d}\mu}$). The KLD expression between P and Q in Equation (1) is independent of the dominating measure $\mu$. Table A1 summarizes the various distances and their notations used in this paper.
The KLD is also called the relative entropy [2] because it can be written as the difference of the cross-entropy minus the entropy:
$$\mathrm{KL}(P:Q) = h^{\times}(P:Q) - h(P), \qquad (2)$$
where $h^{\times}(P:Q)$ denotes the cross-entropy [2]:
$$h^{\times}(P:Q) := -\int_{\mathcal{X}} p(x)\,\log q(x)\,\mathrm{d}\mu(x), \qquad (3)$$
and
$$h(P) := -\int_{\mathcal{X}} p(x)\,\log p(x)\,\mathrm{d}\mu(x) \qquad (4)$$
denotes the Shannon entropy [2]. Although the formula of the Shannon entropy in Equation (4) unifies both the discrete case and the continuous case of probability distributions, the behavior of entropy in the discrete case and the continuous case is very different: When $\mu$ is the counting measure, Equation (4) yields the discrete Shannon entropy, which is always positive and upper bounded by $\log|\mathcal{X}|$. When $\mu$ is the Lebesgue measure, Equation (4) defines the Shannon differential entropy, which may be negative and unbounded [2] (e.g., the differential entropy of the Gaussian distribution with standard deviation $\sigma$ is $\frac{1}{2}\log(2\pi e\sigma^2)$, which tends to $-\infty$ as $\sigma\to 0$). See also [3] for further important differences between the discrete case and the continuous case.
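To make this difference between the two cases concrete, the short Python sketch below (an added illustration; the function names and numerical values are ours, not part of the original paper) contrasts the bounded discrete Shannon entropy with the Gaussian differential entropy, which becomes negative for a small standard deviation.

```python
import numpy as np

def discrete_entropy(p):
    """Shannon entropy (in nats) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def gaussian_differential_entropy(sigma):
    """Differential entropy (in nats) of a Gaussian with standard deviation sigma."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)

# Discrete case: nonnegative and bounded by the log of the alphabet size.
p = np.array([0.7, 0.2, 0.1])
print(discrete_entropy(p), "<=", np.log(len(p)))

# Continuous case: the differential entropy becomes negative for small sigma.
print(gaussian_differential_entropy(0.05))
```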
In general, the KLD is an asymmetric distance (i.e., $\mathrm{KL}(P:Q)\neq\mathrm{KL}(Q:P)$, hence the argument separator notation using the delimiter ':'). In information theory [2], it is customary to use the double bar notation '‖' instead of the comma ',' notation to avoid confusion with the notation of joint random variables. The reverse KL divergence or dual KL divergence is:
$$\mathrm{KL}^{*}(P:Q) := \mathrm{KL}(Q:P) = \int_{\mathcal{X}} q(x)\,\log\frac{q(x)}{p(x)}\,\mathrm{d}\mu(x). \qquad (5)$$
In general, the reverse distance or dual distance for a distance D is written as:
$$D^{*}(P:Q) := D(Q:P). \qquad (6)$$
One way to symmetrize the KLD is to consider the Jeffreys Divergence [4] (JD, Sir Harold Jeffreys (1891–1989) was a British statistician.):
$$J(P;Q) := \mathrm{KL}(P:Q) + \mathrm{KL}(Q:P) = \int_{\mathcal{X}}\big(p(x)-q(x)\big)\log\frac{p(x)}{q(x)}\,\mathrm{d}\mu(x). \qquad (7)$$
However, this symmetric distance is not upper bounded, and its sensitivity can raise numerical issues in applications. Here, we used the optional argument separator notation ';' to emphasize that the distance is symmetric but not necessarily a metric distance. This notation matches the notational convention of the mutual information $I(X;Y)$ of two joint random variables in information theory [2].
The symmetrization of the KLD may also be obtained using the harmonic mean instead of the arithmetic mean, yielding the resistor average distance [5] :
$$\frac{1}{R(P;Q)} := \frac{1}{\mathrm{KL}(P:Q)} + \frac{1}{\mathrm{KL}(Q:P)}, \qquad (8)$$
$$R(P;Q) = \frac{\mathrm{KL}(P:Q)\,\mathrm{KL}(Q:P)}{\mathrm{KL}(P:Q)+\mathrm{KL}(Q:P)}. \qquad (9)$$
Another famous symmetrization of the KLD is the Jensen–Shannon Divergence [6] (JSD) defined by:
$$\mathrm{JS}(P;Q) := \frac{1}{2}\left(\mathrm{KL}\!\left(P:\frac{P+Q}{2}\right) + \mathrm{KL}\!\left(Q:\frac{P+Q}{2}\right)\right) \qquad (10)$$
$$= \frac{1}{2}\int_{\mathcal{X}}\left(p(x)\log\frac{2p(x)}{p(x)+q(x)} + q(x)\log\frac{2q(x)}{p(x)+q(x)}\right)\mathrm{d}\mu(x). \qquad (11)$$
This distance can be interpreted as the total divergence to the average distribution (see Equation (10)). The JSD can be rewritten as a Jensen divergence (or Burbea–Rao divergence [7]) for the negentropy generator (called Shannon information):
$$\mathrm{JS}(P;Q) = h\!\left(\frac{P+Q}{2}\right) - \frac{h(P)+h(Q)}{2}. \qquad (12)$$
An important property of the Jensen–Shannon divergence compared to the Jeffreys divergence is that this distance is always bounded:
$$0 \leq \mathrm{JS}(P;Q) \leq \log 2. \qquad (13)$$
This follows from the fact that
$$\mathrm{KL}\!\left(P:\frac{P+Q}{2}\right) = \int_{\mathcal{X}} p(x)\log\frac{2p(x)}{p(x)+q(x)}\,\mathrm{d}\mu(x) \leq \int_{\mathcal{X}} p(x)\log\frac{2p(x)}{p(x)}\,\mathrm{d}\mu(x) = \log 2. \qquad (14)$$
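To make the bound concrete, the following minimal Python sketch (added here for illustration; the distributions are arbitrary examples of ours) computes the Jensen–Shannon divergence between two discrete distributions from Equation (10) and checks that it never exceeds $\log 2$, even when the Kullback–Leibler divergence is infinite.

```python
import numpy as np

def kl(p, q):
    """Discrete Kullback-Leibler divergence KL(p:q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jsd(p, q):
    """Jensen-Shannon divergence: average KL divergence to the mid-mixture."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.9, 0.1, 0.0])
q = np.array([0.0, 0.1, 0.9])
# The JSD stays finite and below log 2 even though KL(p:q) is infinite here.
print(jsd(p, q), "<=", np.log(2.0))
```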
Finally, the square root of the JSD (i.e., $\sqrt{\mathrm{JS}(P;Q)}$) yields a metric distance satisfying the triangle inequality [8,9]. The JSD has found applications in many fields such as bioinformatics [10] and social sciences [11], just to name a few. Recently, the JSD has gained attention in the deep learning community with the Generative Adversarial Networks (GANs) [12]. In computer vision and pattern recognition, one often relies on information-theoretic techniques to perform registration and recognition tasks. For example, in [13], the authors use a mixture of Principal Axes Registrations (mPAR) whose parameters are estimated by minimizing the KLD between the two considered point distributions. In [14], the authors parameterize both shapes and deformations using Gaussian Mixture Models (GMMs) to perform non-rigid shape registration. The lack of a closed-form formula for the KLD between GMMs [15] spurred the use of other statistical distances which admit a closed-form expression for GMMs. For example, in [16], shape registration is performed by using the Jensen–Rényi divergence between GMMs. See also [17] for other information-theoretic divergences that admit closed-form formulas for some statistical mixtures extending GMMs.
In information geometry [18], the KLD, JD and JSD are invariant divergences which satisfy the property of information monotonicity [18]. The class of (separable) distances satisfying information monotonicity is exhaustively characterized as the class of Csiszár's f-divergences [19]. An f-divergence is defined for a convex generator function f, strictly convex at 1 (with $f(1)=0$), by:
$$I_f(P:Q) := \int_{\mathcal{X}} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)\mathrm{d}\mu(x). \qquad (15)$$
The Jeffreys and Jensen–Shannon f-generators are:
$$f_J(u) := (u-1)\log u, \qquad (16)$$
$$f_{\mathrm{JS}}(u) := \frac{1}{2}\left(u\log u - (u+1)\log\frac{u+1}{2}\right). \qquad (17)$$
1.2. Statistical Distances and Parameter Divergences
In information and probability theory, the term “divergence” informally means a statistical distance [2]. However in information geometry [18], a divergence has a stricter meaning of being a smooth parametric distance (called a contrast function in [20]) from which a dual geometric structure can be derived [21,22].
Consider parametric distributions $p_{\theta_1}$ and $p_{\theta_2}$ belonging to a parametric family of distributions $\{p_\theta : \theta\in\Theta\}$ (e.g., the Gaussian family or the Cauchy family), where $\Theta$ denotes the parameter space. Then a statistical distance D between distributions $p_{\theta_1}$ and $p_{\theta_2}$ amounts to an equivalent parameter distance:
| (18) |
For example, the KLD between two distributions belonging to the same exponential family (e.g., the Gaussian family) amounts to a reverse Bregman divergence for the cumulant generator F of the exponential family [23]:
$$\mathrm{KL}(p_{\theta_1}:p_{\theta_2}) = B_F(\theta_2:\theta_1). \qquad (19)$$
A Bregman divergence is defined for a strictly convex and differentiable generator F as:
$$B_F(\theta_1:\theta_2) := F(\theta_1) - F(\theta_2) - \langle\theta_1-\theta_2, \nabla F(\theta_2)\rangle, \qquad (20)$$
where $\langle\cdot,\cdot\rangle$ is an inner product (usually the Euclidean dot product for vector parameters).
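As a small illustration (a sketch added here, not part of the original paper), the following Python code implements Equation (20) for a generic differentiable generator and recovers two classic instances: half the squared Euclidean distance for $F(x)=\frac{1}{2}\|x\|^2$, and the (extended) Kullback–Leibler divergence for the negative Shannon entropy generator.

```python
import numpy as np

def bregman(F, gradF, theta1, theta2):
    """Bregman divergence B_F(theta1:theta2) = F(t1) - F(t2) - <t1 - t2, grad F(t2)>."""
    t1, t2 = np.asarray(theta1, float), np.asarray(theta2, float)
    return F(t1) - F(t2) - np.dot(t1 - t2, gradF(t2))

# F(x) = 0.5 ||x||^2  ->  half the squared Euclidean distance.
sq = lambda x: 0.5 * np.dot(x, x)
grad_sq = lambda x: x

# F(x) = sum x_i log x_i (negative Shannon entropy on positive vectors)
#      -> extended KL divergence, which reduces to the KLD on normalized vectors.
negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1.0

a, b = np.array([0.2, 0.3, 0.5]), np.array([0.4, 0.4, 0.2])
print(bregman(sq, grad_sq, a, b))          # equals 0.5 * ||a - b||^2
print(bregman(negent, grad_negent, a, b))  # equals sum_i a_i log(a_i / b_i) here
```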
Similar to the interpretation of the Jensen–Shannon divergence (statistical divergence) as a Jensen divergence for the negentropy generator, the Jensen–Bregman divergence [7] (parametric divergence JBD) amounts to a Jensen divergence for a strictly convex generator :
$$\mathrm{JB}_F(\theta_1;\theta_2) := \frac{1}{2}\left(B_F\!\left(\theta_1:\frac{\theta_1+\theta_2}{2}\right) + B_F\!\left(\theta_2:\frac{\theta_1+\theta_2}{2}\right)\right) \qquad (21)$$
$$= J_F(\theta_1;\theta_2) := \frac{F(\theta_1)+F(\theta_2)}{2} - F\!\left(\frac{\theta_1+\theta_2}{2}\right). \qquad (22)$$
Let us introduce the notation $(\theta_1\theta_2)_\alpha := (1-\alpha)\theta_1 + \alpha\theta_2$ to denote the linear interpolation (LERP) of the parameters. Then we have more generally that the skew Jensen–Bregman divergence amounts to a skew Jensen divergence:
$$\mathrm{JB}^{\alpha}_F(\theta_1:\theta_2) := (1-\alpha)\,B_F\big(\theta_1:(\theta_1\theta_2)_\alpha\big) + \alpha\,B_F\big(\theta_2:(\theta_1\theta_2)_\alpha\big) \qquad (23)$$
$$= J^{\alpha}_F(\theta_1:\theta_2) := (1-\alpha)F(\theta_1) + \alpha F(\theta_2) - F\big((\theta_1\theta_2)_\alpha\big). \qquad (24)$$
1.3. J-Symmetrization and -Symmetrization of Distances
For any arbitrary distance D, we can define its skew J-symmetrization for $\alpha\in[0,1]$ by:
$$D^{J}_{\alpha}(p:q) := (1-\alpha)\,D(p:q) + \alpha\,D(q:p), \qquad (25)$$
and its JS-symmetrization by:
$$D^{\mathrm{JS}}_{\alpha}(p:q) := (1-\alpha)\,D\big(p:(1-\alpha)p+\alpha q\big) + \alpha\,D\big(q:(1-\alpha)p+\alpha q\big) \qquad (26)$$
$$= (1-\alpha)\,D\big(p:(pq)_\alpha\big) + \alpha\,D\big(q:(pq)_\alpha\big), \quad \text{with } (pq)_\alpha := (1-\alpha)p+\alpha q. \qquad (27)$$
Usually, we set $\alpha=\frac{1}{2}$, and for notational brevity, we drop the skew parameter: $D^{\mathrm{JS}} := D^{\mathrm{JS}}_{\frac{1}{2}}$ and $D^{J} := D^{J}_{\frac{1}{2}}$. The Jeffreys divergence is twice the J-symmetrization of the KLD, and the Jensen–Shannon divergence is the JS-symmetrization of the KLD.
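Both symmetrization schemes are purely mechanical and apply to any base distance. The Python sketch below (an illustration added here; the weighting convention follows the reconstruction above) applies them to the discrete KLD and recovers half the Jeffreys divergence and the ordinary Jensen–Shannon divergence.

```python
import numpy as np

def kl(p, q):
    """Discrete Kullback-Leibler divergence KL(p:q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

def j_symmetrization(D, p, q, alpha=0.5):
    """Skew J-symmetrization: a convex combination of the two sided distances."""
    return (1.0 - alpha) * D(p, q) + alpha * D(q, p)

def js_symmetrization(D, p, q, alpha=0.5):
    """Skew JS-symmetrization: distances to the alpha-mixture (1-alpha)p + alpha q."""
    m = (1.0 - alpha) * np.asarray(p, float) + alpha * np.asarray(q, float)
    return (1.0 - alpha) * D(p, m) + alpha * D(q, m)

p, q = np.array([0.5, 0.3, 0.2]), np.array([0.1, 0.6, 0.3])
print(j_symmetrization(kl, p, q))   # half of the Jeffreys divergence
print(js_symmetrization(kl, p, q))  # the ordinary Jensen-Shannon divergence
```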
The J-symmetrization of an f-divergence is obtained by taking the generator
| (28) |
where $f^{\diamond}$ denotes the conjugate generator:
$$f^{\diamond}(u) := u\,f\!\left(\frac{1}{u}\right). \qquad (29)$$
The JS-symmetrization of an f-divergence
| (30) |
with is obtained by taking the generator
| (31) |
We check that we have:
| (32) |
A family of symmetric distances unifying the Jeffreys divergence with the Jensen–Shannon divergence was proposed in [24]. Finally, let us mention that once we have symmetrized a distance D, we may also metrize this symmetric distance by choosing (when it exists) the largest exponent $\delta$ such that $D^{\delta}$ becomes a metric distance [8,25,26,27,28].
1.4. Contributions and Paper Outline
The paper is organized as follows:
Section 2 reports the special case of mixture families in information geometry [18] for which the Jensen–Shannon divergence can be expressed as a Bregman divergence (Theorem 1), and highlights the lack of a closed-form formula when considering exponential families. This fact precisely motivated this work.
Section 3 introduces the generalized Jensen–Shannon divergences using statistical mixtures derived from abstract weighted means (Definitions 2 and 5), presents the JS-symmetrization of statistical distances, and reports a sufficient condition to get bounded JS-symmetrizations (Property 1).
In Section 4.1, we consider the calculation of the geometric JSD between members of the same exponential family (Theorem 2) and instantiate the formula for the multivariate Gaussian distributions (Corollary 1). We discuss applications to k-means clustering in Section 4.1.2. In Section 4.2, we illustrate the method with another example that calculates in closed form the harmonic JSD between scale Cauchy distributions (Theorem 4).
Finally, we wrap up and conclude this work in Section 5.
2. Jensen–Shannon Divergence in Mixture and Exponential Families
We are interested in calculating the JSD between densities belonging to parametric families of distributions.
A trivial example is when p and q are categorical distributions: The average distribution $\frac{p+q}{2}$ is again a categorical distribution, and the JSD is expressed plainly as:
$$\mathrm{JS}(p;q) = \frac{1}{2}\sum_{i}\left(p_i\log\frac{2p_i}{p_i+q_i} + q_i\log\frac{2q_i}{p_i+q_i}\right). \qquad (33)$$
Another example is when p and q both belong to the same mixture family [18]:
| (34) |
for prescribed linearly independent component distributions. We have [29]:
$$\mathrm{KL}(m_{\theta_1}:m_{\theta_2}) = B_F(\theta_1:\theta_2), \qquad (35)$$
where $B_F$ is the Bregman divergence defined in Equation (20), obtained for the convex negentropy generator $F(\theta) = -h(m_\theta)$ [29]. The proof that F is a strictly convex function is not trivial [30].
The mixture families include the family of categorical distributions over a finite alphabet (the D-dimensional probability simplex), since those categorical distributions form a mixture family. Beware that mixture families require prescribing the component distributions. Therefore, a density of a mixture family is a special case of statistical mixtures (e.g., GMMs) with prescribed component distributions.
The mathematical identity of Equation (35) does not yield a practical formula since the negentropy generator F is usually not itself available in closed form. Worse, the Bregman generator can be non-analytic [31]. Nevertheless, this identity is useful for computing the right-sided Bregman centroid (left KL centroid of mixtures) since this centroid is equivalent to the center of mass, and is independent of the Bregman generator [29].
Since the mixture of mixtures is also a mixture, specifically
$$\frac{m_{\theta_1}+m_{\theta_2}}{2} = m_{\frac{\theta_1+\theta_2}{2}}, \qquad (36)$$
it follows that we get a closed-form expression for the JSD between mixtures belonging to the same mixture family.
Theorem 1 (JSD between mixtures).
The Jensen–Shannon divergence between two distributions $m_{\theta_1}$ and $m_{\theta_2}$ belonging to the same mixture family is expressed as a Jensen–Bregman divergence for the negentropy generator F:
$$\mathrm{JS}(m_{\theta_1};m_{\theta_2}) = \mathrm{JB}_F(\theta_1;\theta_2). \qquad (37)$$
This amounts to calculating the Jensen divergence:
$$\mathrm{JS}(m_{\theta_1};m_{\theta_2}) = J_F(\theta_1;\theta_2) = \frac{F(\theta_1)+F(\theta_2)}{2} - F\!\left(\frac{\theta_1+\theta_2}{2}\right), \qquad (38)$$
where $F(\theta) = -h(m_\theta)$ is the Shannon negentropy.
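For the categorical family, the simplest mixture family, Theorem 1 can be checked numerically. The Python sketch below (added for illustration; the sample distributions are ours) confirms that the JSD computed from Equation (10) coincides with the Jensen divergence of the negentropy generator.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

def jsd_direct(p, q):
    """JSD from its definition: average KL to the mid-mixture."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def jsd_jensen_negentropy(p, q):
    """Jensen divergence for the negentropy generator F = -h (Theorem 1 on categoricals)."""
    F = lambda t: -entropy(t)
    return 0.5 * (F(p) + F(q)) - F(0.5 * (np.asarray(p, float) + np.asarray(q, float)))

p, q = np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.2, 0.6])
print(jsd_direct(p, q), jsd_jensen_negentropy(p, q))  # identical values
```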
Now, consider distributions $p_{\theta_1}$ and $p_{\theta_2}$ belonging to the same exponential family [18]:
| (39) |
where
| (40) |
denotes the natural parameter space. We have [18]:
| (41) |
where F denotes the log-normalizer or cumulant function of the exponential family [18].
However, the arithmetic mixture $\frac{p_{\theta_1}+p_{\theta_2}}{2}$ does not belong, in general, to the exponential family, except for the case of the categorical/multinomial family, which is both an exponential family and a mixture family [18].
For example, the mixture of two Gaussian distributions with distinct components is not a Gaussian distribution. Thus, it is not obvious how to obtain a closed-form expression for the JSD in that case. This limitation precisely motivated the introduction of the generalized JSDs defined in the next section.
Notice that in [32,33], it is shown how to express or approximate the f-divergences using expansions of power pseudo-distances. These power chi distances can all be expressed in closed form when dealing with isotropic Gaussians. This result holds for the JSD since the JSD is a f-divergence [33].
3. Generalized Jensen–Shannon Divergences
We first define abstract means M, and then generic statistical M-mixtures from which generalized Jensen–Shannon divergences are built thereof.
Definitions
Consider an abstract mean [34] M, that is, a continuous bivariate function on an interval $I\subset\mathbb{R}$ that satisfies the following in-betweenness property:
$$\min(x,y) \leq M(x,y) \leq \max(x,y). \qquad (42)$$
Using the unique dyadic expansion of real numbers, we can always build a corresponding weighted mean $M_\alpha(x,y)$ (with $\alpha\in[0,1]$) following the construction reported in [34] (page 3) such that $M_0(x,y)=x$ and $M_1(x,y)=y$. In the remainder, we consider $\alpha\in(0,1)$.
Examples of common weighted means are:
the arithmetic mean $A_\alpha(x,y) := (1-\alpha)x + \alpha y$,
the geometric mean $G_\alpha(x,y) := x^{1-\alpha}\,y^{\alpha}$, and
the harmonic mean $H_\alpha(x,y) := \frac{xy}{(1-\alpha)y + \alpha x}$.
These means can be unified using the concept of quasi-arithmetic means [34] (also called Kolmogorov–Nagumo means):
$$M^{h}_{\alpha}(x,y) := h^{-1}\big((1-\alpha)h(x) + \alpha h(y)\big), \qquad (43)$$
where h is a strictly monotone function. For example, the geometric mean is obtained for the generator $h(u)=\log u$. Rényi used the concept of quasi-arithmetic means instead of the arithmetic mean to define axiomatically the Rényi entropy [35] of order $\alpha$ in information theory [2].
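The following Python sketch (added for illustration; the weighting convention matches the reconstruction above) recovers the three classical weighted means from Equation (43) by plugging in different generators h.

```python
import numpy as np

def quasi_arithmetic_mean(x, y, alpha, h, h_inv):
    """Weighted quasi-arithmetic mean: h^{-1}((1-alpha) h(x) + alpha h(y))."""
    return h_inv((1.0 - alpha) * h(x) + alpha * h(y))

x, y, alpha = 2.0, 8.0, 0.5
# h(u) = u      -> arithmetic mean
print(quasi_arithmetic_mean(x, y, alpha, lambda u: u, lambda u: u))          # 5.0
# h(u) = log u  -> geometric mean
print(quasi_arithmetic_mean(x, y, alpha, np.log, np.exp))                    # 4.0
# h(u) = 1/u    -> harmonic mean
print(quasi_arithmetic_mean(x, y, alpha, lambda u: 1.0 / u, lambda u: 1.0 / u))  # 3.2
```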
For any abstract weighted mean, we can build a statistical mixture called a M-mixture as follows:
Definition 1 (M-mixture).
The M-interpolation $(pq)^{M}_{\alpha}$ (with $\alpha\in(0,1)$) of densities p and q with respect to a mean M is the α-weighted M-mixture defined by:
$$(pq)^{M}_{\alpha}(x) := \frac{M_{\alpha}\big(p(x),q(x)\big)}{Z^{M}_{\alpha}(p:q)}, \qquad (44)$$
where
$$Z^{M}_{\alpha}(p:q) := \int_{\mathcal{X}} M_{\alpha}\big(p(x),q(x)\big)\,\mathrm{d}\mu(x) \qquad (45)$$
is the normalizer function (or scaling factor) ensuring that $(pq)^{M}_{\alpha}\in\mathcal{P}$. (The bracket notation $\langle f\rangle := \int_{\mathcal{X}} f(x)\,\mathrm{d}\mu(x)$ denotes the integral of f over $\mathcal{X}$.)
The A-mixture ('A' standing for the arithmetic mean) represents the usual statistical mixture [36] (with $Z^{A}_{\alpha}(p:q)=1$). The G-mixture of two distributions p and q ('G' standing for the geometric mean G) is an exponential family of order 1 [37]:
| (46) |
The two-component M-mixture can be generalized to a k-component M-mixture with , the -dimensional standard simplex:
| (47) |
where .
For a given pair of distributions p and q, the set describes a path in the space of probability density functions. This density interpolation scheme was investigated for quasi-arithmetic weighted means in [38,39,40]. In [41], the authors study the Fisher information matrix for the -mixture models (using -power means).
We call $(pq)^{M}_{\alpha}$ the α-weighted M-mixture, thus extending the notion of α-mixtures [42] obtained for power means. Notice that abstract means have also been used to generalize Bregman divergences using the concept of comparative convexity [43].
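In general, the normalizer of Equation (45) has to be evaluated numerically. The Python sketch below (an added illustration; the densities and the harmonic weighting convention follow the Maxima code of Appendix B, and the chosen parameters are arbitrary) builds normalized geometric and harmonic mixtures by quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, cauchy

def m_mixture(p, q, alpha, mean, support=(-np.inf, np.inf)):
    """Return the normalized alpha-weighted M-mixture of densities p and q, and its normalizer."""
    unnormalized = lambda x: mean(p(x), q(x), alpha)
    Z, _ = quad(unnormalized, *support)            # normalizer Z (Equation (45))
    return (lambda x: unnormalized(x) / Z), Z

geometric = lambda a, b, alpha: a ** (1 - alpha) * b ** alpha
harmonic = lambda a, b, alpha: (a * b) / ((1 - alpha) * b + alpha * a)

# Geometric mixture of two Gaussian densities.
p = norm(loc=-1.0, scale=1.0).pdf
q = norm(loc=2.0, scale=0.5).pdf
gm, Zg = m_mixture(p, q, 0.5, geometric)
print("geometric-mixture normalizer:", Zg)

# Harmonic mixture of two scale Cauchy densities.
p2 = cauchy(scale=1.0).pdf
q2 = cauchy(scale=3.0).pdf
hm, Zh = m_mixture(p2, q2, 0.5, harmonic)
print("harmonic-mixture normalizer:", Zh)
```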
Let us state a first generalization of the Jensen–Shannon divergence:
Definition 2 (M-Jensen–Shannon divergence).
For a mean M, the skew M-Jensen–Shannon divergence (for $\alpha\in(0,1)$) is defined by
$$\mathrm{JS}^{M}_{\alpha}(p:q) := (1-\alpha)\,\mathrm{KL}\big(p:(pq)^{M}_{\alpha}\big) + \alpha\,\mathrm{KL}\big(q:(pq)^{M}_{\alpha}\big). \qquad (48)$$
When $\alpha=\frac{1}{2}$, we recover the ordinary Jensen–Shannon divergence since $(pq)^{A}_{\frac{1}{2}} = \frac{p+q}{2}$ (and $Z^{A}_{\frac{1}{2}}(p:q)=1$).
We can extend the definition to the -symmetrization of any distance:
Definition 3 (M-JS symmetrization).
For a mean M and a distance D, the skew M-JS symmetrization of D (for $\alpha\in(0,1)$) is defined by
$$D^{\mathrm{JS}}_{M,\alpha}(p:q) := (1-\alpha)\,D\big(p:(pq)^{M}_{\alpha}\big) + \alpha\,D\big(q:(pq)^{M}_{\alpha}\big). \qquad (49)$$
By notation, we have $\mathrm{JS}^{M}_{\alpha} = \mathrm{KL}^{\mathrm{JS}}_{M,\alpha}$. That is, the arithmetic JS-symmetrization of the KLD is the JSD.
Let us define the α-skew K-divergence [6,44] as
$$K_{\alpha}(p:q) := \mathrm{KL}\big(p:(1-\alpha)p + \alpha q\big), \qquad (50)$$
where $\alpha\in[0,1]$. Then the Jensen–Shannon divergence and the Jeffreys divergence can be rewritten [24] as
$$\mathrm{JS}(p;q) = \frac{1}{2}\Big(K_{\frac{1}{2}}(p:q) + K_{\frac{1}{2}}(q:p)\Big), \qquad (51)$$
$$J(p;q) = K_{1}(p:q) + K_{1}(q:p), \qquad (52)$$
since $K_{1}(p:q) = \mathrm{KL}(p:q)$. Similarly, we can define the generalized skew K-divergence:
$$K^{M}_{\alpha}(p:q) := \mathrm{KL}\big(p:(pq)^{M}_{\alpha}\big). \qquad (53)$$
The success of the JSD compared to the JD in applications is partially due to the fact that the JSD is upper bounded by $\log 2$. So, one question to ask is whether these generalized JSDs are upper bounded or not.
To report a sufficient condition, let us first introduce the dominance relationship between means: We say that a mean M dominates a mean N when $M(x,y)\geq N(x,y)$ for all $x,y$; see [34]. In that case, we write concisely $M\geq N$. For example, the Arithmetic–Geometric–Harmonic (AGH) inequality states that $A\geq G\geq H$.
Consider the term
$$\mathrm{KL}\big(p:(pq)^{M}_{\alpha}\big) = \int_{\mathcal{X}} p(x)\log\frac{p(x)}{(pq)^{M}_{\alpha}(x)}\,\mathrm{d}\mu(x) \qquad (54)$$
$$= \int_{\mathcal{X}} p(x)\log\frac{p(x)}{M_{\alpha}\big(p(x),q(x)\big)}\,\mathrm{d}\mu(x) + \log Z^{M}_{\alpha}(p:q). \qquad (55)$$
When the mean M dominates the arithmetic mean A, we have $M_{\alpha}(p(x),q(x)) \geq (1-\alpha)p(x)+\alpha q(x) \geq (1-\alpha)p(x)$, so that the first term of Equation (55) is at most $\log\frac{1}{1-\alpha}$, and $Z^{M}_{\alpha}(p:q) \leq \int_{\mathcal{X}}\big(p(x)+q(x)\big)\mathrm{d}\mu(x) = 2$ since $M(x,y)\leq\max(x,y)\leq x+y$. Notice that $Z^{A}_{\alpha}(p:q)=1$ when M is the arithmetic mean A, and we recover the fact that the skew Jensen–Shannon divergence is upper bounded (by $\log 2$ when $\alpha=\frac{1}{2}$).
We summarize the result in the following property:
Property 1 (Upper bound on M-JSD).
The M-JSD $\mathrm{JS}^{M}_{\alpha}$ is upper bounded when the mean M dominates the arithmetic mean A (i.e., when $M\geq A$).
Let us observe that dominance of means can be used to define distances: For example, the celebrated -divergences
| (56) |
can be interpreted as a difference of two means, the arithmetic mean and the geometric mean:
| (57) |
We can also define the generalized Jeffreys divergence as follows:
Definition 4 (N-Jeffreys divergence).
For a mean N, the skew N-Jeffreys divergence (for $\alpha\in(0,1)$) is defined by
$$J^{N}_{\alpha}(p;q) := N_{\alpha}\big(\mathrm{KL}(p:q),\, \mathrm{KL}(q:p)\big). \qquad (58)$$
This definition includes the (scaled) resistor average distance [5], obtained for the harmonic mean N = H applied to the KLD with skew parameter $\alpha=\frac{1}{2}$:
| (59) |
| (60) |
In [5], the factor is omitted to keep the spirit of the original Jeffreys divergence.
We can further extend this definition for any arbitrary divergence D as follows:
Definition 5 (Skew (M,N)-D divergence).
The skew JS-type symmetrization of a distance D with respect to two weighted means $M_\alpha$ and $N_\beta$ is defined as follows:
(61)
We now show how to choose the abstract mean M according to the parametric family of distributions in order to obtain closed-form formulas for some statistical distances.
4. Some Closed-Form Formula for the M-Jensen–Shannon Divergences
Our motivation for introducing these novel families of M-Jensen–Shannon divergences is to obtain closed-form formulas when the probability densities belong to some given parametric families. We shall illustrate the principle of the method used to choose the right abstract mean for the considered parametric family, and report the corresponding formulas for the following two case studies:
The geometric G-Jensen–Shannon divergence for the exponential families (Section 4.1), and
the harmonic H-Jensen–Shannon divergence for the family of Cauchy scale distributions (Section 4.2).
Recall that the arithmetic A-Jensen–Shannon divergence is well-suited for mixture families (Theorem 1).
4.1. The Geometric G-Jensen–Shannon Divergence
Consider an exponential family [37] with log-normalizer F:
$$p_{\theta}(x) = \exp\big(\langle t(x),\theta\rangle - F(\theta) + k(x)\big), \qquad (62)$$
and natural parameter space
$$\Theta := \left\{\theta : \int_{\mathcal{X}} \exp\big(\langle t(x),\theta\rangle + k(x)\big)\,\mathrm{d}\mu(x) < \infty\right\}, \qquad (63)$$
The log-normalizer (a log-Laplace function also called log-partition or cumulant function) is a real analytic convex function.
We seek a mean M such that the weighted M-mixture density of two densities $p_{\theta_1}$ and $p_{\theta_2}$ of the same exponential family yields another density of that exponential family. When considering exponential families, we choose the weighted geometric mean $G_\alpha$ for the abstract mean M: $G_\alpha(x,y) = x^{1-\alpha}y^{\alpha}$, for $\alpha\in(0,1)$. Indeed, it is well-known that the normalized weighted product of distributions belonging to the same exponential family also belongs to this exponential family [45]:
$$(p_{\theta_1}p_{\theta_2})^{G}_{\alpha}(x) := \frac{p_{\theta_1}(x)^{1-\alpha}\,p_{\theta_2}(x)^{\alpha}}{Z^{G}_{\alpha}(\theta_1:\theta_2)} \qquad (64)$$
$$= p_{(1-\alpha)\theta_1+\alpha\theta_2}(x), \qquad (65)$$
where the normalization factor is
$$Z^{G}_{\alpha}(\theta_1:\theta_2) = \int_{\mathcal{X}} p_{\theta_1}(x)^{1-\alpha}\,p_{\theta_2}(x)^{\alpha}\,\mathrm{d}\mu(x) = \exp\big(-J_{F,\alpha}(\theta_1:\theta_2)\big), \qquad (66)$$
for the skew Jensen divergence defined by:
$$J_{F,\alpha}(\theta_1:\theta_2) := (1-\alpha)F(\theta_1) + \alpha F(\theta_2) - F\big((1-\alpha)\theta_1+\alpha\theta_2\big). \qquad (67)$$
Notice that since the natural parameter space $\Theta$ is convex, the distribution $p_{(1-\alpha)\theta_1+\alpha\theta_2}$ belongs to the exponential family (since $(1-\alpha)\theta_1+\alpha\theta_2\in\Theta$).
Thus, it follows that we have:
| (68) |
| (69) |
This allows us to conclude that the G-Jensen–Shannon divergence admits the following closed-form expression between densities belonging to the same exponential family:
$$\mathrm{JS}^{G}_{\alpha}(p_{\theta_1}:p_{\theta_2}) = (1-\alpha)\,\mathrm{KL}\big(p_{\theta_1}:p_{(\theta_1\theta_2)_\alpha}\big) + \alpha\,\mathrm{KL}\big(p_{\theta_2}:p_{(\theta_1\theta_2)_\alpha}\big) \qquad (70)$$
$$= (1-\alpha)\,B_F\big((\theta_1\theta_2)_\alpha:\theta_1\big) + \alpha\,B_F\big((\theta_1\theta_2)_\alpha:\theta_2\big). \qquad (71)$$
Please note that since and , it follows that .
The dual divergence [46] (with respect to the reference argument), or reverse divergence, of a divergence D is defined by swapping the calling arguments: $D^{*}(p:q) := D(q:p)$.
Thus, if we defined the Jensen–Shannon divergence for the dual KL divergence
| (72) |
| (73) |
then we obtain:
| (74) |
| (75) |
| (76) |
| (77) |
Please note that .
In general, the JS-symmetrization for the reverse KL divergence is
| (78) |
| (79) |
where and . Since (arithmetic-geometric inequality), it follows that .
Theorem 2 (G-JSD and its dual JS-symmetrization in exponential families).
The α-skew G-Jensen–Shannon divergence and its dual JS-symmetrization between two distributions $p_{\theta_1}$ and $p_{\theta_2}$ of the same exponential family are expressed in closed form for $\alpha\in(0,1)$ as:
$$\mathrm{JS}^{G}_{\alpha}(p_{\theta_1}:p_{\theta_2}) = (1-\alpha)\,B_F\big((\theta_1\theta_2)_\alpha:\theta_1\big) + \alpha\,B_F\big((\theta_1\theta_2)_\alpha:\theta_2\big), \qquad (80)$$
$$\mathrm{JS}^{G*}_{\alpha}(p_{\theta_1}:p_{\theta_2}) = (1-\alpha)\,B_F\big(\theta_1:(\theta_1\theta_2)_\alpha\big) + \alpha\,B_F\big(\theta_2:(\theta_1\theta_2)_\alpha\big) = J_{F,\alpha}(\theta_1:\theta_2). \qquad (81)$$
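The closed forms above can be checked numerically on any univariate exponential family. The Python sketch below (an added illustration under the conventions of the reconstruction above, not part of the paper) uses the exponential distribution family, whose cumulant function is $F(\theta)=-\log(-\theta)$ for natural parameter $\theta=-\lambda$: it compares the quadrature-based definition of the G-JSD with the Bregman expression, and the dual JS-symmetrization with the skew Jensen divergence.

```python
import numpy as np
from scipy.integrate import quad

# Exponential family Exp(lam): density lam*exp(-lam*x) on (0, inf).
# Natural parameter theta = -lam, cumulant F(theta) = -log(-theta).
F = lambda th: -np.log(-th)
gradF = lambda th: -1.0 / th
bregman = lambda t1, t2: F(t1) - F(t2) - (t1 - t2) * gradF(t2)

def kl_quad(p, q):
    """KL divergence between two densities on (0, inf) by numerical quadrature."""
    return quad(lambda x: p(x) * np.log(p(x) / q(x)), 0.0, np.inf)[0]

lam1, lam2, alpha = 1.0, 5.0, 0.5
th1, th2 = -lam1, -lam2
th_a = (1 - alpha) * th1 + alpha * th2            # geometric mixture stays in the family

p1 = lambda x: lam1 * np.exp(-lam1 * x)
p2 = lambda x: lam2 * np.exp(-lam2 * x)
pa = lambda x: -th_a * np.exp(th_a * x)           # density of the normalized geometric mixture

# G-JSD: quadrature-based definition vs. Bregman divergences on the natural parameters.
js_G_num = (1 - alpha) * kl_quad(p1, pa) + alpha * kl_quad(p2, pa)
js_G_closed = (1 - alpha) * bregman(th_a, th1) + alpha * bregman(th_a, th2)
print(js_G_num, js_G_closed)                      # match

# JS-symmetrization of the *reverse* KL: reduces to the skew Jensen divergence of F.
js_G_dual_num = (1 - alpha) * kl_quad(pa, p1) + alpha * kl_quad(pa, p2)
jensen_closed = (1 - alpha) * F(th1) + alpha * F(th2) - F(th_a)
print(js_G_dual_num, jensen_closed)               # match
```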
4.1.1. Case Study: The Multivariate Gaussian Family
Consider the exponential family [18,37] of multivariate Gaussian distributions [47,48,49]
| (82) |
The multivariate Gaussian family is also called the multivariate normal family in the literature, or MVN family for short.
Let $\lambda := (\mu,\Sigma)$ denote the composite (vector, matrix) parameter of an MVN. The d-dimensional MVN density is given by
$$p_{\lambda}(x) := \frac{1}{(2\pi)^{\frac{d}{2}}\sqrt{|\Sigma|}}\,\exp\!\left(-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right), \qquad (83)$$
where denotes the matrix determinant. The natural parameters are also expressed using both a vector parameter and a matrix parameter in a compound object . By defining the following compound inner product on a composite (vector,matrix) object
| (84) |
where denotes the matrix trace, we rewrite the MVN density of Equation (83) in the canonical form of an exponential family [37]:
| (85) |
where
| (86) |
is the compound natural parameter and
| (87) |
is the compound sufficient statistic. The function is the strictly convex and continuously differentiable log-normalizer defined by:
| (88) |
The log-normalizer can be expressed using the ordinary parameters, , as:
| (89) |
| (90) |
The moment/expectation parameters [18,49] are
| (91) |
We report the conversion formula between the three types of coordinate systems (namely the ordinary parameter , the natural parameter and the moment parameter ) as follows:
| (92) |
| (93) |
| (94) |
The dual Legendre convex conjugate [18,49] is
| (95) |
and .
We check the Fenchel-Young equality when and :
| (96) |
The Kullback–Leibler divergence between two d-dimensional Gaussian distributions $p_{(\mu_1,\Sigma_1)}$ and $p_{(\mu_2,\Sigma_2)}$ (with $\Sigma_1,\Sigma_2\succ 0$) is
$$\mathrm{KL}\big(p_{(\mu_1,\Sigma_1)}:p_{(\mu_2,\Sigma_2)}\big) = \frac{1}{2}\left(\operatorname{tr}\big(\Sigma_2^{-1}\Sigma_1\big) + (\mu_2-\mu_1)^{\top}\Sigma_2^{-1}(\mu_2-\mu_1) - d + \log\frac{|\Sigma_2|}{|\Sigma_1|}\right). \qquad (97)$$
We check that $\mathrm{KL}\big(p_{(\mu,\Sigma)}:p_{(\mu,\Sigma)}\big)=0$ since $\operatorname{tr}(\Sigma^{-1}\Sigma)=d$ and $\log\frac{|\Sigma|}{|\Sigma|}=0$. Notice that when $\Sigma_1=\Sigma_2=\Sigma$, we have
$$\mathrm{KL}\big(p_{(\mu_1,\Sigma)}:p_{(\mu_2,\Sigma)}\big) = \frac{1}{2}(\mu_2-\mu_1)^{\top}\Sigma^{-1}(\mu_2-\mu_1), \qquad (98)$$
that is, half the squared Mahalanobis distance obtained for the precision matrix $\Sigma^{-1}$ (a positive-definite matrix), where the Mahalanobis distance is defined for any positive-definite matrix $Q\succ 0$ as follows:
$$M_Q(\theta_1,\theta_2) := \sqrt{(\theta_1-\theta_2)^{\top}Q\,(\theta_1-\theta_2)}. \qquad (99)$$
The Kullback–Leibler divergence between two probability densities of the same exponential family amounts to a Bregman divergence [18]:
$$\mathrm{KL}(p_{\theta_1}:p_{\theta_2}) = B_F(\theta_2:\theta_1), \qquad (100)$$
where the Bregman divergence is defined by
$$B_F(\theta:\theta') := F(\theta) - F(\theta') - \langle\theta-\theta', \nabla F(\theta')\rangle, \qquad (101)$$
with . Define the canonical divergence [18]
| (102) |
since . We have .
Now, observe that when . In particular, this holds for the multivariate normal family. Thus, we have the following proposition.
Proposition 1.
For the MVN family, we have
(103) with the scaling normalization factor:
(104)
More generally, we have for a k-dimensional weight vector belonging to the -dimensional standard simplex:
| (105) |
where .
Finally, we state the formulas for the G-JS divergence between MVNs for the KL and reverse KL, respectively:
Corollary 1 (G-JSD between Gaussians).
The skew G-Jensen–Shannon divergence and the dual skew G-Jensen–Shannon divergence between two multivariate Gaussians and is
(106)
(107)
(108)
(109)
(110)
(111)
(112) where
(113) (matrix harmonic barycenter) and
(114)
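For concreteness, the Python sketch below follows the same recipe as the corollary (it is an illustration added here, not the author's reference Java implementation linked in Section 5): the normalized geometric mixture of two Gaussians is the Gaussian whose precision matrix is the weighted arithmetic barycenter of the two precisions (equivalently, the matrix harmonic barycenter of the covariances mentioned above), and the G-JSD is then a weighted sum of two closed-form Gaussian KL divergences.

```python
import numpy as np

def kl_mvn(mu1, S1, mu2, S2):
    """KL(N(mu1,S1) : N(mu2,S2)) using the standard closed-form expression."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    dmu = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + dmu @ S2inv @ dmu - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

def geometric_mixture_mvn(mu1, S1, mu2, S2, alpha):
    """Normalized geometric mixture of two Gaussians: interpolate the natural parameters."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)    # precision matrices
    Pa = (1 - alpha) * P1 + alpha * P2               # harmonic barycenter of the covariances
    Sa = np.linalg.inv(Pa)
    mua = Sa @ ((1 - alpha) * P1 @ mu1 + alpha * P2 @ mu2)
    return mua, Sa

def g_jsd_mvn(mu1, S1, mu2, S2, alpha=0.5):
    """Skew geometric Jensen-Shannon divergence between two multivariate Gaussians."""
    mua, Sa = geometric_mixture_mvn(mu1, S1, mu2, S2, alpha)
    return (1 - alpha) * kl_mvn(mu1, S1, mua, Sa) + alpha * kl_mvn(mu2, S2, mua, Sa)

mu1, S1 = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 1.0]])
mu2, S2 = np.array([1.0, -1.0]), np.array([[2.0, -0.3], [-0.3, 0.5]])
print(g_jsd_mvn(mu1, S1, mu2, S2))        # finite, closed-form value
print(g_jsd_mvn(mu1, S1, mu1, S1))        # 0 when the two Gaussians coincide
```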
Notice that the α-skew Bhattacharyya distance [7]:
$$B_{\alpha}(p:q) := -\log\int_{\mathcal{X}} p(x)^{1-\alpha}\,q(x)^{\alpha}\,\mathrm{d}\mu(x) \qquad (115)$$
between two members of the same exponential family amounts to an α-skew Jensen divergence between the corresponding natural parameters:
$$B_{\alpha}(p_{\theta_1}:p_{\theta_2}) = J_{F,\alpha}(\theta_1:\theta_2). \qquad (116)$$
A simple proof follows from the fact that
$$\int_{\mathcal{X}} p_{\theta_1}(x)^{1-\alpha}\,p_{\theta_2}(x)^{\alpha}\,\mathrm{d}\mu(x) = \exp\big(-J_{F,\alpha}(\theta_1:\theta_2)\big) = Z^{G}_{\alpha}(\theta_1:\theta_2). \qquad (117)$$
Therefore, we have
| (118) |
with . Thus, it follows that
| (119) |
| (120) |
| (121) |
Corollary 2.
The JS-symmetrization of the reverse Kullback–Leibler divergence between densities of the same exponential family amounts to calculating a Jensen/Burbea–Rao divergence between the corresponding natural parameters.
4.1.2. Applications to k-Means Clustering
Let $\Lambda = \{\lambda_1,\ldots,\lambda_n\}$ denote a point set, and $C=\{c_1,\ldots,c_k\}$ denote a set of k (cluster) centers. The generalized k-means objective [23] with respect to a distance D is defined by:
$$E_D(\Lambda, C) := \frac{1}{n}\sum_{i=1}^{n}\,\min_{j\in\{1,\ldots,k\}} D(\lambda_i : c_j). \qquad (122)$$
By defining the distance $D(\lambda, C) := \min_{c\in C} D(\lambda:c)$ of a point to a set of points, we can rewrite the objective function compactly as $E_D(\Lambda,C) = \frac{1}{n}\sum_{i=1}^{n} D(\lambda_i, C)$. Denote by $E^{*}_{D}(\Lambda,k)$ the minimum objective loss over all sets of k cluster centers. It is NP-hard [50] to compute this optimal loss when $k>1$ and the dimension $d>1$. The most common heuristic is Lloyd's batched k-means [23], which yields a local minimum.
The performance of the probabilistic k-means++ initialization [51] has been extended to arbitrary distances in [52] as follows:
Theorem 3
(Generalized k-means++ performance, [53]). Let $\kappa_1$ and $\kappa_2$ be two constants such that $\kappa_1$ defines the quasi-triangle inequality property:
$$D(x:z) \leq \kappa_1\big(D(x:y) + D(y:z)\big), \quad \forall x,y,z, \qquad (123)$$
and $\kappa_2$ handles the symmetry inequality:
$$D(x:y) \leq \kappa_2\, D(y:x), \quad \forall x,y. \qquad (124)$$
Then the generalized k-means++ seeding guarantees with high probability a configuration C of cluster centers such that:
(125)
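The seeding step itself is distance-agnostic: only the evaluation of D changes. The Python sketch below (an illustration added here; the sampling-weight convention, data, and function names are ours, not from [51,52,53]) shows the generalized k-means++ seeding where each remaining point is sampled with probability proportional to its divergence to the closest center already chosen, here instantiated with the discrete Jensen–Shannon divergence on histograms.

```python
import numpy as np

def generalized_kmeanspp_seeding(points, k, D, rng=None):
    """k-means++-style seeding for an arbitrary divergence D(point, center)."""
    rng = np.random.default_rng(rng)
    n = len(points)
    centers = [points[rng.integers(n)]]                  # first center chosen uniformly
    for _ in range(k - 1):
        # Divergence to the closest current center plays the role of the squared distance.
        d = np.array([min(D(x, c) for c in centers) for x in points])
        centers.append(points[rng.choice(n, p=d / d.sum())])
    return np.array(centers)

def kl(p, q):
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

def jsd(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
hists = rng.dirichlet(np.ones(5), size=200)              # 200 random histograms
centers = generalized_kmeanspp_seeding(hists, k=3, D=jsd, rng=0)
print(centers.shape)                                      # (3, 5)
```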
To bound the constants $\kappa_1$ and $\kappa_2$, we rewrite the generalized Jensen–Shannon divergences using quadratic form expressions, that is, using a squared Mahalanobis distance:
| (126) |
for a positive-definite matrix . Since the Bregman divergence can be interpreted as the tail of a first-order Taylor expansion, we have:
| (127) |
for (open convex). Similarly, the Jensen divergence can be interpreted as a Jensen–Bregman divergence, and thus we have
| (128) |
for . More precisely, for a prescribed point set , we have , where denotes the closed convex hull. We can therefore upper bound and using the ratio . See [54] for further details.
A centroid for a set of parameters is defined as the minimizer of the functional
| (129) |
In particular, the symmetrized Bregman centroids have been studied in [55] (for ), and the Jensen centroids (for ) have been investigated in [7] using the convex-concave iterative procedure.
4.2. The Harmonic Jensen–Shannon Divergence (H-)
The principle used to obtain closed-form formulas for generalized Jensen–Shannon divergences between distributions belonging to a parametric family consists of finding an abstract mean M such that the M-mixture belongs to that family. In particular, when the parameter space is a convex domain, we seek a mean M such that the M-mixture of two members of the family is, after normalization, again a member of the family.
Let us consider the weighted harmonic mean [34] (induced by the harmonic mean) H:
$$H_{\alpha}(x,y) := \frac{xy}{(1-\alpha)y + \alpha x}, \quad \alpha\in(0,1). \qquad (130)$$
The harmonic mean is a quasi-arithmetic mean obtained for the monotone (decreasing) generator $h(u)=\frac{1}{u}$ (or equivalently for the increasing monotone generator $h(u)=-\frac{1}{u}$).
This harmonic mean is well-suited for the scale family of Cauchy probability distributions (also called Lorentzian distributions):
$$p_{\gamma}(x) := \frac{\gamma}{\pi(x^2+\gamma^2)} = \frac{1}{\gamma}\,p_1\!\left(\frac{x}{\gamma}\right), \quad x\in\mathbb{R}, \qquad (131)$$
where $\gamma>0$ denotes the scale and $p_1(x) = \frac{1}{\pi(1+x^2)}$ the standard Cauchy distribution.
Using the computer algebra system Maxima (http://maxima.sourceforge.net/) we find that (see Appendix B)
$$(p_{\gamma_1}p_{\gamma_2})^{H}_{\alpha}(x) = p_{\gamma_\alpha}(x), \quad \text{with } \gamma_\alpha := \sqrt{\frac{\gamma_1\gamma_2\,\big((1-\alpha)\gamma_1+\alpha\gamma_2\big)}{(1-\alpha)\gamma_2+\alpha\gamma_1}}, \qquad (132)$$
where the normalizing coefficient is
$$Z^{H}_{\alpha}(\gamma_1,\gamma_2) = \frac{\sqrt{\gamma_1\gamma_2}}{\sqrt{\big((1-\alpha)\gamma_1+\alpha\gamma_2\big)\big((1-\alpha)\gamma_2+\alpha\gamma_1\big)}}, \qquad (133)$$
since we have $\int_{-\infty}^{+\infty} p_{\gamma}(x)\,\mathrm{d}x = 1$.
The H-Jensen–Shannon symmetrization of a distance D between distributions writes as:
| (134) |
where $H_\alpha$ denotes the weighted harmonic mean. When D is available in closed form for distributions belonging to the scale Cauchy family, so is its harmonic JS-symmetrization.
For example, consider the KL divergence formula between two scale Cauchy distributions:
$$\mathrm{KL}(p_{\gamma_1}:p_{\gamma_2}) = \log\frac{(\gamma_1+\gamma_2)^2}{4\gamma_1\gamma_2} = 2\log\frac{A(\gamma_1,\gamma_2)}{G(\gamma_1,\gamma_2)}, \qquad (135)$$
where A and G denote the arithmetic and geometric means, respectively. The formula initially reported in [56] has been corrected by the authors. Since $A(\gamma_1,\gamma_2)\geq G(\gamma_1,\gamma_2)$ (with equality if and only if $\gamma_1=\gamma_2$), it follows that $\mathrm{KL}(p_{\gamma_1}:p_{\gamma_2})\geq 0$. Notice that the KL divergence is symmetric for Cauchy scale distributions. We note in passing that, for exponential families, the KL divergence is symmetric only for the location Gaussian family (since the only symmetric Bregman divergences are the squared Mahalanobis distances [57]). The cross-entropy between scale Cauchy distributions is $h^{\times}(p_{\gamma_1}:p_{\gamma_2}) = \log\frac{\pi(\gamma_1+\gamma_2)^2}{\gamma_2}$, and the differential entropy is $h(p_{\gamma}) = \log(4\pi\gamma)$.
Then the H-JS divergence between $p_{\gamma_1}$ and $p_{\gamma_2}$ (for $\alpha=\frac{1}{2}$, for which $\gamma_{\frac{1}{2}}=\sqrt{\gamma_1\gamma_2}$) is:
$$\mathrm{JS}^{H}(p_{\gamma_1};p_{\gamma_2}) = \frac{1}{2}\Big(\mathrm{KL}\big(p_{\gamma_1}:p_{\sqrt{\gamma_1\gamma_2}}\big) + \mathrm{KL}\big(p_{\gamma_2}:p_{\sqrt{\gamma_1\gamma_2}}\big)\Big) \qquad (136)$$
$$= \frac{1}{2}\log\frac{\big(\gamma_1+\sqrt{\gamma_1\gamma_2}\big)^2\big(\gamma_2+\sqrt{\gamma_1\gamma_2}\big)^2}{16\,\gamma_1^2\gamma_2^2} \qquad (137)$$
$$= \log\frac{A(\gamma_1,\gamma_2)+G(\gamma_1,\gamma_2)}{2\,G(\gamma_1,\gamma_2)}. \qquad (138)$$
We check that when $\gamma_1=\gamma_2$, we have $A=G$ and hence $\mathrm{JS}^{H}(p_{\gamma};p_{\gamma})=0$.
Theorem 4 (Harmonic JSD between scale Cauchy distributions).
The harmonic Jensen–Shannon divergence between two scale Cauchy distributions $p_{\gamma_1}$ and $p_{\gamma_2}$ is $\mathrm{JS}^{H}(p_{\gamma_1};p_{\gamma_2}) = \log\frac{A(\gamma_1,\gamma_2)+G(\gamma_1,\gamma_2)}{2\,G(\gamma_1,\gamma_2)}$, where A and G denote the arithmetic and geometric means of the scales.
Let us report some numerical examples: Consider and , we find that . When and , we find that .
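The harmonic JSD for scale Cauchy distributions can also be verified numerically. The Python sketch below (an added illustration; the scale values are arbitrary) evaluates the definition by quadrature, building the normalized mid harmonic mixture explicitly, and compares it with the closed form obtained by combining the Cauchy KL formula with the fact that the mid harmonic mixture is the scale Cauchy of scale $\sqrt{\gamma_1\gamma_2}$.

```python
import numpy as np
from scipy.integrate import quad

def cauchy_pdf(x, gamma):
    """Scale Cauchy (Lorentzian) density with scale gamma."""
    return gamma / (np.pi * (x ** 2 + gamma ** 2))

def kl_quad(p, q):
    return quad(lambda x: p(x) * np.log(p(x) / q(x)), -np.inf, np.inf)[0]

def h_jsd_quad(g1, g2):
    """Harmonic JSD from its definition: KLs to the normalized harmonic mid-mixture."""
    p1 = lambda x: cauchy_pdf(x, g1)
    p2 = lambda x: cauchy_pdf(x, g2)
    unnorm = lambda x: (p1(x) * p2(x)) / (0.5 * p2(x) + 0.5 * p1(x))
    Z = quad(unnorm, -np.inf, np.inf)[0]
    m = lambda x: unnorm(x) / Z
    return 0.5 * kl_quad(p1, m) + 0.5 * kl_quad(p2, m)

def kl_cauchy_scale(g1, g2):
    """Closed-form KL between scale Cauchy densities: log((g1+g2)^2 / (4 g1 g2))."""
    return np.log((g1 + g2) ** 2 / (4.0 * g1 * g2))

def h_jsd_closed(g1, g2):
    """Closed form: the mid harmonic mixture is the Cauchy of scale sqrt(g1*g2)."""
    gm = np.sqrt(g1 * g2)
    return 0.5 * kl_cauchy_scale(g1, gm) + 0.5 * kl_cauchy_scale(g2, gm)

g1, g2 = 1.0, 4.0
print(h_jsd_quad(g1, g2), h_jsd_closed(g1, g2))   # the two values agree
```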
Notice that the KL formula is scale-invariant, and this property holds for any scale family:
Lemma 1.
The Kullback–Leibler divergence between two distributions $p_{\gamma_1}$ and $p_{\gamma_2}$ belonging to the same scale family with standard density p is scale-invariant: $\mathrm{KL}(p_{\lambda\gamma_1}:p_{\lambda\gamma_2}) = \mathrm{KL}(p_{\gamma_1}:p_{\gamma_2})$ for any $\lambda>0$.
A direct proof follows from a change of variable in the KL integral with $y=x/\lambda$ and $\mathrm{d}y=\mathrm{d}x/\lambda$. Please note that although the KLD between scale Cauchy distributions is symmetric, this is not the case for all scale families: For example, the Rayleigh distributions form a scale family for which the KLD amounts to computing an asymmetric Itakura–Saito (Bregman) divergence between the parameters [37].
Instead of the KLD, we can choose the total variation distance for which a formula has been reported in [38] between two Cauchy distributions. Notice that the Cauchy distributions are alpha-stable distributions for and q Gaussian distributions for ([58], p. 104). A closed-form formula for the divergence between two q-Gaussians is given in [58] when . The definite integral is available in closed form for Cauchy distributions. When , we have .
We refer to [38] for yet other illustrative examples considering the family of Pearson type VII distributions and central multivariate t-distributions which use the power means (quasi-arithmetic means induced by for ) for defining mixtures.
Table 1 summarizes the various examples introduced in the paper.
Table 1.
Summary of the weighted means M chosen according to the parametric family in order to ensure that the family is closed under M-mixtures.
| Mean M | Parametric Family | ||
|---|---|---|---|
| arithmetic A | mixture family | ||
| geometric G | exponential family | ||
| harmonic H | Cauchy scale family |
4.3. The M-Jensen–Shannon Matrix Distances
In this section, we consider distances between matrices which play an important role in quantum computing [59,60]. We refer to [61] for the matrix Jensen–Bregman logdet divergence. The Hellinger distance can be interpreted as the difference of an arithmetic mean A and a geometric mean G:
$$D_{\mathrm{H}}(p,q) := \sqrt{\int_{\mathcal{X}}\Big(A\big(p(x),q(x)\big) - G\big(p(x),q(x)\big)\Big)\,\mathrm{d}\mu(x)} = \sqrt{1 - \int_{\mathcal{X}}\sqrt{p(x)q(x)}\,\mathrm{d}\mu(x)}. \qquad (139)$$
Notice that since $A\geq G$, we have $D_{\mathrm{H}}(p,q)\geq 0$. The scaled and squared Hellinger distance is an f-divergence. Recall that the α-divergence can be interpreted as the difference of a weighted arithmetic mean minus a weighted geometric mean.
In general, if a mean M dominates a mean N (i.e., $M\geq N$), we may define the distance
$$D_{M,N}(p,q) := \int_{\mathcal{X}}\Big(M\big(p(x),q(x)\big) - N\big(p(x),q(x)\big)\Big)\,\mathrm{d}\mu(x). \qquad (140)$$
When considering matrices [62], there is not a unique definition of a geometric matrix mean, and thus we have different notions of matrix Hellinger distances [62], some of them are divergences (smooth distances defining a dualistic structure in information geometry).
We define the matrix M-Jensen–Shannon divergence for a matrix divergence [63,64] D as follows:
| (141) |
For example, we can choose the von Neumann matrix divergence [63]:
$$D_{\mathrm{vN}}(X:Y) := \operatorname{tr}\big(X(\log X - \log Y) - X + Y\big), \qquad (142)$$
or the LogDet matrix divergence [63]:
$$D_{\ell\mathrm{d}}(X:Y) := \operatorname{tr}\big(XY^{-1}\big) - \log\big|XY^{-1}\big| - d, \qquad (143)$$
where the square matrices X and Y have dimension $d\times d$.
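A minimal Python sketch of the matrix JS-symmetrization is given below (an added illustration; it uses the plain arithmetic matrix mean for simplicity and the two standard Bregman matrix divergences of Equations (142) and (143), with example matrices of ours).

```python
import numpy as np
from scipy.linalg import logm

def logdet_div(X, Y):
    """LogDet (Burg) matrix divergence: tr(X Y^{-1}) - log det(X Y^{-1}) - d."""
    d = X.shape[0]
    XYinv = X @ np.linalg.inv(Y)
    return np.trace(XYinv) - np.log(np.linalg.det(XYinv)) - d

def von_neumann_div(X, Y):
    """von Neumann matrix divergence: tr(X log X - X log Y - X + Y)."""
    return np.trace(X @ logm(X) - X @ logm(Y) - X + Y).real

def matrix_a_jsd(D, X, Y):
    """Matrix JS-symmetrization of D with the arithmetic matrix mean (X + Y)/2."""
    M = 0.5 * (X + Y)
    return 0.5 * D(X, M) + 0.5 * D(Y, M)

X = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive-definite matrices
Y = np.array([[1.0, -0.2], [-0.2, 3.0]])
print(matrix_a_jsd(logdet_div, X, Y))
print(matrix_a_jsd(von_neumann_div, X, Y))
```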
5. Conclusions and Perspectives
We introduced a generalization of the celebrated Jensen–Shannon divergence [6], termed the (M, N) Jensen–Shannon divergences, based on M-mixtures derived from abstract means M. This new family of divergences includes the ordinary Jensen–Shannon divergence when both M and N are set to the arithmetic mean. We reported closed-form expressions of the M-Jensen–Shannon divergences for mixture families and exponential families in information geometry by choosing the arithmetic and geometric weighted means, respectively. The α-skewed geometric Jensen–Shannon divergence (G-Jensen–Shannon divergence) between densities $p_{\theta_1}$ and $p_{\theta_2}$ of the same exponential family with cumulant function F is
$$\mathrm{JS}^{G}_{\alpha}[p_{\theta_1}:p_{\theta_2}] = (1-\alpha)\,B_F\big((\theta_1\theta_2)_\alpha:\theta_1\big) + \alpha\,B_F\big((\theta_1\theta_2)_\alpha:\theta_2\big), \quad (\theta_1\theta_2)_\alpha := (1-\alpha)\theta_1+\alpha\theta_2.$$
Here, we used the bracket notation to emphasize that the statistical distance is between densities, and the parenthesis notation to emphasize that the distance is between parameters. We also reported the corresponding closed-form expression for the dual JS-symmetrization of the reverse Kullback–Leibler divergence, which amounts to a skew Jensen divergence between the natural parameters. We also showed how to get a closed-form formula for the harmonic Jensen–Shannon divergence of Cauchy scale distributions by taking harmonic mixtures.
For an arbitrary distance D, we define the skew N-Jeffreys symmetrization:
| (144) |
and the skew -JS-symmetrization:
| (145) |
A Java™ source code for computing the geometric Jensen–Shannon divergence between multivariate Gaussian distributions is available online at https://franknielsen.github.io/M-JS/.
Appendix A. Summary of Distances and Their Notations
Table A1 lists the main distances with their notations.
Table A1.
Summary of Distances and Their Notations.
| Weighted mean | , |
| Arithmetic mean | |
| Geometric mean | |
| Harmonic mean | |
| Power mean | , |
| Quasi-arithmetic mean | , f strictly monotonous |
| M-mixture |
with |
| Statistical distance | |
| Dual/reverse distance | |
| Kullback-Leibler divergence | |
| reverse Kullback-Leibler divergence | |
| Jeffreys divergence | |
| Resistor divergence | . |
| skew K-divergence | |
| Jensen-Shannon divergence | |
| skew Bhattacharrya divergence | |
| Hellinger distance | |
| -divergences |
|
| Mahalanobis distance | for a positive-definite matrix |
| f-divergence |
, with f strictly convex at 1 |
| reverse f-divergence |
for |
| J-symmetrized f-divergence | |
| JS-symmetrized f-divergence |
for |
| Parameter distance | |
| Bregman divergence | |
| skew Jeffreys-Bregman divergence | |
| skew Jensen divergence | |
| Jensen-Bregman divergence | . |
| Generalized Jensen-Shannon divergences | |
| skew J-symmetrization | |
| skew -symmetrization | |
| skew M-Jensen-Shannon divergence | |
| skew M--symmetrization | |
| N-Jeffreys divergence | |
| N-J D divergence | |
| skew -D JS divergence |
Appendix B. Symbolic Calculations in Maxima
The program below calculates the normalizer Z for the harmonic H-mixtures of Cauchy distributions (Equation (133)).
```maxima
assume(gamma>0);
Cauchy(x,gamma) := gamma/(%pi*(x**2+gamma**2));
assume(alpha>0);
assume(alpha<1);
h(x,y,alpha) := (x*y)/((1-alpha)*y+alpha*x);
assume(gamma1>0);
assume(gamma2>0);
m(x,alpha) := ratsimp(h(Cauchy(x,gamma1),Cauchy(x,gamma2),alpha));
/* calculate Z */
integrate(m(x,alpha),x,-inf,inf);
```
Funding
This research received no external funding.
Conflicts of Interest
The author declares no conflict of interest.
References
- 1.Billingsley P. Probability and Measure. John Wiley & Sons; Hoboken, NJ, USA: 2008. [Google Scholar]
- 2.Cover T.M., Thomas J.A. Elements of Information Theory. John Wiley & Sons; Hoboken, NJ, USA: 2012. [Google Scholar]
- 3.Ho S.W., Yeung R.W. On the discontinuity of the Shannon information measures; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Adelaide, Australia. 4–9 September 2005; pp. 159–163. [Google Scholar]
- 4.Nielsen F. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Process. Lett. 2013;20:657–660. doi: 10.1109/LSP.2013.2260538. [DOI] [Google Scholar]
- 5.Johnson D., Sinanovic S. Symmetrizing the Kullback-Leibler Distance. [(accessed on 11 May 2019)];2001 Technical report of Rice University (US) Available online: https://scholarship.rice.edu/handle/1911/19969.
- 6.Lin J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory. 1991;37:145–151. doi: 10.1109/18.61115. [DOI] [Google Scholar]
- 7.Nielsen F., Boltz S. The Burbea-Rao and Bhattacharyya centroids. IEEE Trans. Inf. Theory. 2011;57:5455–5466. doi: 10.1109/TIT.2011.2159046. [DOI] [Google Scholar]
- 8.Vajda I. On metric divergences of probability measures. Kybernetika. 2009;45:885–900. [Google Scholar]
- 9.Fuglede B., Topsoe F. Jensen-Shannon divergence and Hilbert space embedding; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Chicago, IL, USA. 27 June–2 July 2004; p. 31. [Google Scholar]
- 10.Sims G.E., Jun S.R., Wu G.A., Kim S.H. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc. Natl. Acad. Sci. USA. 2009;106:2677–2682. doi: 10.1073/pnas.0813249106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.DeDeo S., Hawkins R.X., Klingenstein S., Hitchcock T. Bootstrap methods for the empirical study of decision-making and information flows in social systems. Entropy. 2013;15:2246–2276. doi: 10.3390/e15062246. [DOI] [Google Scholar]
- 12.Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Advances in Neural Information Processing Systems. Curran Associates, Inc.; Red Hook, NY, USA: 2014. Generative adversarial nets; pp. 2672–2680. [Google Scholar]
- 13.Wang Y., Woods K., McClain M. Information-theoretic matching of two point sets. IEEE Trans. Image Process. 2002;11:868–872. doi: 10.1109/TIP.2002.801120. [DOI] [PubMed] [Google Scholar]
- 14.Peter A.M., Rangarajan A. Information geometry for landmark shape analysis: Unifying shape representation and deformation. IEEE Trans. Pattern Anal. Mach. Intell. 2009;31:337–350. doi: 10.1109/TPAMI.2008.69. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Nielsen F., Sun K. Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy. 2016;18:442. doi: 10.3390/e18120442. [DOI] [Google Scholar]
- 16.Wang F., Syeda-Mahmood T., Vemuri B.C., Beymer D., Rangarajan A. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) Springer; Berlin, Germany: 2009. Closed-form Jensen-Rényi divergence for mixture of Gaussians and applications to group-wise shape registration; pp. 648–655. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Nielsen F. Closed-form information-theoretic divergences for statistical mixtures; Proceedings of the IEEE 21st International Conference on Pattern Recognition (ICPR2012); Tsukuba, Japan. 11–15 November 2012; pp. 1723–1726. [Google Scholar]
- 18.Amari S.I. Information Geometry and Its Applications. Springer; Berlin, Germany: 2016. [Google Scholar]
- 19.Csiszár I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung. 1967;2:229–318. [Google Scholar]
- 20.Eguchi S. Geometry of minimum contrast. Hiroshima Math. J. 1992;22:631–647. doi: 10.32917/hmj/1206128508. [DOI] [Google Scholar]
- 21.Amari S.I., Cichocki A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. Tech. Sci. 2010;58:183–195. doi: 10.2478/v10175-010-0019-1. [DOI] [Google Scholar]
- 22.Ciaglia F.M., Di Cosmo F., Felice D., Mancini S., Marmo G., Pérez-Pardo J.M. Hamilton-Jacobi approach to potential functions in information geometry. J. Math. Phys. 2017;58:063506. doi: 10.1063/1.4984941. [DOI] [Google Scholar]
- 23.Banerjee A., Merugu S., Dhillon I.S., Ghosh J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005;6:1705–1749. [Google Scholar]
- 24.Nielsen F. A family of statistical symmetric divergences based on Jensen’s inequality. arXiv 2010, arXiv:1009.4004. [Google Scholar]
- 25.Chen P., Chen Y., Rao M. Metrics defined by Bregman divergences. Commun. Math. Sci. 2008;6:915–926. doi: 10.4310/CMS.2008.v6.n4.a6. [DOI] [Google Scholar]
- 26.Chen P., Chen Y., Rao M. Metrics defined by Bregman divergences: Part 2. Commun. Math. Sci. 2008;6:927–948. doi: 10.4310/CMS.2008.v6.n4.a7. [DOI] [Google Scholar]
- 27.Kafka P., Österreicher F., Vincze I. On powers of f-divergences defining a distance. Stud. Sci. Math. Hung. 1991;26:415–422. [Google Scholar]
- 28.Österreicher F., Vajda I. A new class of metric divergences on probability spaces and its applicability in statistics. Ann. Inst. Stat. Math. 2003;55:639–653. doi: 10.1007/BF02517812. [DOI] [Google Scholar]
- 29.Nielsen F., Nock R. On the geometry of mixtures of prescribed distributions; In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Calgary, AB, Canada. 15–20 April 2018; pp. 2861–2865. [Google Scholar]
- 30.Nielsen F., Hadjeres G. Monte Carlo Information Geometry: The dually flat case. arXiv 2018, arXiv:1803.07225. [Google Scholar]
- 31.Watanabe S., Yamazaki K., Aoyagi M. Kullback information of normal mixture is not an analytic function. IEICE Tech. Rep. Neurocomput. 2004;104:41–46. [Google Scholar]
- 32.Nielsen F., Nock R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2014;21:10–13. doi: 10.1109/LSP.2013.2288355. [DOI] [Google Scholar]
- 33.Nielsen F., Hadjeres G. On power chi expansions of f-divergences. arXiv 2019, arXiv:1903.05818. [Google Scholar]
- 34.Niculescu C., Persson L.E. Convex Functions and Their Applications. 2nd ed. Springer; Berlin, Germany: 2018. [Google Scholar]
- 35.Rényi A. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California; Oakland, CA, USA: 1961. On measures of entropy and information. [Google Scholar]
- 36.McLachlan G.J., Lee S.X., Rathnayake S.I. Finite mixture models. Ann. Rev. Stat. Appl. 2019;6:355–378. doi: 10.1146/annurev-statistics-031017-100325. [DOI] [Google Scholar]
- 37.Nielsen F., Garcia V. Statistical exponential families: A digest with flash cards. arXiv 2009, arXiv:0911.4863. [Google Scholar]
- 38.Nielsen F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett. 2014;42:25–34. doi: 10.1016/j.patrec.2014.01.002. [DOI] [Google Scholar]
- 39.Eguchi S., Komori O. Geometric Science of Information (GSI) Springer; Cham, Switzerland: 2015. Path connectedness on a space of probability density functions; pp. 615–624. [Google Scholar]
- 40.Eguchi S., Komori O., Ohara A. Information Geometry and its Applications IV. Springer; Berlin, Germany: 2016. Information geometry associated with generalized means; pp. 279–295. [Google Scholar]
- 41.Asadi M., Ebrahimi N., Kharazmi O., Soofi E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory. 2019;65:2316–2321. doi: 10.1109/TIT.2018.2877608. [DOI] [Google Scholar]
- 42.Amari S.I. Integration of stochastic models by minimizing α-divergence. Neural Comput. 2007;19:2780–2796. doi: 10.1162/neco.2007.19.10.2780. [DOI] [PubMed] [Google Scholar]
- 43.Nielsen F., Nock R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017;24:1123–1127. doi: 10.1109/LSP.2017.2712195. [DOI] [Google Scholar]
- 44.Lee L. Measures of distributional similarity; Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, Association for Computational Linguistics; Stroudsburg, PA, USA. 20–26 June 1999; pp. 25–32. [DOI] [Google Scholar]
- 45.Nielsen F. The statistical Minkowski distances: Closed-form formula for Gaussian mixture models. arXiv 2019, arXiv:1901.03732. [Google Scholar]
- 46.Zhang J. Reference duality and representation duality in information geometry. AIP Conf. Proc. 2015;1641:130–146. [Google Scholar]
- 47.Yoshizawa S., Tanabe K. Dual differential geometry associated with the Kullback-Leibler information on the Gaussian distributions and its 2-parameter deformations. SUT J. Math. 1999;35:113–137. [Google Scholar]
- 48.Nielsen F., Nock R. A closed-form expression for the Sharma–Mittal entropy of exponential families. J. Phys. A Math. Theor. 2011;45:032003. doi: 10.1088/1751-8113/45/3/032003. [DOI] [Google Scholar]
- 49.Nielsen F. An elementary introduction to information geometry. arXiv 2018, arXiv:1808.08271. doi: 10.3390/e22101100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Nielsen F., Nock R. Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE Signal Process. Lett. 2014;21:1289–1292. doi: 10.1109/LSP.2014.2333001. [DOI] [Google Scholar]
- 51.Arthur D., Vassilvitskii S. Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. ACM; New York, NY, USA: 2007. k-means++: The advantages of careful seeding; pp. 1027–1035. [Google Scholar]
- 52.Nielsen F., Nock R., Amari S.I. On clustering histograms with k-means by using mixed α-divergences. Entropy. 2014;16:3273–3301. doi: 10.3390/e16063273. [DOI] [Google Scholar]
- 53.Nielsen F., Nock R. Total Jensen divergences: Definition, properties and clustering; Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Brisbane, QLD, Australia. 19–24 April 2015; pp. 2016–2020. [Google Scholar]
- 54.Ackermann M.R., Blömer J. Scandinavian Workshop on Algorithm Theory. Springer; Berlin, Germany: 2010. Bregman clustering for separable instances; pp. 212–223. [Google Scholar]
- 55.Nielsen F., Nock R. Sided and symmetrized Bregman centroids. IEEE Trans. Inf. Theory. 2009;55:2882–2904. doi: 10.1109/TIT.2009.2018176. [DOI] [Google Scholar]
- 56.Tzagkarakis G., Tsakalides P. A statistical approach to texture image retrieval via alpha-stable modeling of wavelet decompositions; Proceedings of the 5th International Workshop on Image Analysis for Multimedia Interactive Services, Instituto Superior Técnico; Lisboa, Portugal. 21–23 April 2004; pp. 21–23. [Google Scholar]
- 57.Boissonnat J.D., Nielsen F., Nock R. Bregman Voronoi diagrams. Discrete Comput. Geom. 2010;44:281–307. doi: 10.1007/s00454-010-9256-1. [DOI] [Google Scholar]
- 58.Naudts J. Generalised Thermostatistics. Springer Science & Business Media; Berlin, Germany: 2011. [Google Scholar]
- 59.Briët J., Harremoës P. Properties of classical and quantum Jensen-Shannon divergence. Phys. Rev. A. 2009;79:052311. doi: 10.1103/PhysRevA.79.052311. [DOI] [Google Scholar]
- 60.Audenaert K.M. Quantum skew divergence. J. Math. Phys. 2014;55:112202. doi: 10.1063/1.4901039. [DOI] [Google Scholar]
- 61.Cherian A., Sra S., Banerjee A., Papanikolopoulos N. Jensen-Bregman logdet divergence with application to efficient similarity search for covariance matrices. IEEE Trans. Pattern Anal. Mach. Intell. 2013;35:2161–2174. doi: 10.1109/TPAMI.2012.259. [DOI] [PubMed] [Google Scholar]
- 62.Bhatia R., Jain T., Lim Y. Strong convexity of sandwiched entropies and related optimization problems. Rev. Math. Phys. 2018;30:1850014. doi: 10.1142/S0129055X18500149. [DOI] [Google Scholar]
- 63.Kulis B., Sustik M.A., Dhillon I.S. Low-rank kernel learning with Bregman matrix divergences. J. Mach. Learn. Res. 2009;10:341–376. [Google Scholar]
- 64.Nock R., Magdalou B., Briys E., Nielsen F. Matrix Information Geometry. Springer; Berlin, Germany: 2013. Mining matrix data with Bregman matrix divergences for portfolio selection; pp. 373–402. [Google Scholar]
