Abstract
This work studies mixtures of probability measures on $\mathbb{R}^n$ and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures, provided that each component satisfies the functional inequality and both components are close in the $\chi^2$-distance. The estimation of those constants for a mixture can be far more subtle than it is for its components. Even mixing Gaussian measures may produce a measure whose Hamiltonian potential possesses multiple wells, leading to metastability and large constants in Sobolev type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log–Sobolev constant may blow up as the mixture ratio goes to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. For a class of examples, this behavior is shown not to be a mere artifact of the method.
Keywords: Poincaré inequality, log–Sobolev inequality, relative entropy, Fisher information, Dirichlet form, mixture, finite Gaussian mixtures
1. Introduction
A mixture of two probability measures $\mu_0$ and $\mu_1$ on $\mathbb{R}^n$ is, for some parameter $p \in [0,1]$, the probability measure defined by

(1)
$$\mu_p := p\,\mu_0 + (1-p)\,\mu_1 .$$

Hereby, both measures $\mu_0$ and $\mu_1$ are assumed to be absolutely continuous with respect to the Lebesgue measure, and their supports are nested, i.e., $\operatorname{supp}\mu_0 \subseteq \operatorname{supp}\mu_1$ or $\operatorname{supp}\mu_1 \subseteq \operatorname{supp}\mu_0$. Under these assumptions, at least one measure is absolutely continuous with respect to the other one,
$$\mu_0 \ll \mu_1 \qquad\text{or}\qquad \mu_1 \ll \mu_0 ,$$
which implies that at least one of the measures has a density with respect to the other one,
$$d\mu_0 = \frac{d\mu_0}{d\mu_1}\,d\mu_1 \qquad\text{or}\qquad d\mu_1 = \frac{d\mu_1}{d\mu_0}\,d\mu_0 .$$
This work establishes criteria to check in a simple way under which conditions a mixture of measures satisfies a Poincaré or log–Sobolev inequality with constants $\varrho_p$ and $\alpha_p$, respectively, provided that each of the components satisfies one.
Definition 1
(PI(ϱ) and LSI(α)). A probability measure μ on $\mathbb{R}^n$ satisfies the Poincaré inequality with constant $\varrho > 0$, if for all functions $f : \mathbb{R}^n \to \mathbb{R}$

PI(ϱ)
$$\operatorname{Var}_\mu(f) := \int \Big(f - \int f \, d\mu\Big)^2 d\mu \;\le\; \frac{1}{\varrho} \int |\nabla f|^2 \, d\mu .$$

A probability measure μ satisfies the log–Sobolev inequality with constant $\alpha > 0$, if for all functions $f : \mathbb{R}^n \to \mathbb{R}$ holds

LSI(α)
$$\operatorname{Ent}_\mu(f^2) := \int f^2 \log \frac{f^2}{\int f^2 \, d\mu} \, d\mu \;\le\; \frac{2}{\alpha} \int |\nabla f|^2 \, d\mu .$$

By the change of variable $f^2 \rightsquigarrow f$, the log–Sobolev inequality is equivalent to

(2)
$$\operatorname{Ent}_\mu(f) \le \frac{1}{2\alpha} \int \frac{|\nabla f|^2}{f} \, d\mu \qquad\text{for all } f \ge 0 .$$
The question of how $\varrho_p$ and $\alpha_p$ in PI($\varrho_p$) and LSI($\alpha_p$) depend for a mixture on the parameter $p$ was first studied by Chafaï and Malrieu [1] for measures on $\mathbb{R}$. The aim is to deduce simple criteria under which the measure (1) satisfies PI($\varrho_p$) and LSI($\alpha_p$), knowing that $\mu_0$ and $\mu_1$ satisfy PI($\varrho_0$), LSI($\alpha_0$) and PI($\varrho_1$), LSI($\alpha_1$), respectively. The approach by Chafaï and Malrieu [1] is based on a functional depending on the distribution functions of the measures $\mu_0$ and $\mu_1$, which then leads to bounds on the Poincaré and log–Sobolev constants of the mixture in one dimension.
This work generalizes part of the results from Chafaï and Malrieu [1] to the multidimensional case by a simple argument. The estimates on the Poincaré and log–Sobolev constants hold in the case when the $\chi^2$-distance between $\mu_0$ and $\mu_1$ is bounded (see (5) for its definition). For this to be true, at least one of the measures $\mu_0$ and $\mu_1$ needs to be absolutely continuous to the other one, which is also a necessary condition for the mixture having connected support. The resulting bound is optimal in the scaling behavior of the mixture parameter $p$, i.e., a logarithmic blow-up in $p$ for the log–Sobolev constant, whereas the Poincaré constant stays bounded. This different behavior of the Poincaré and log–Sobolev constants was also observed in the setting of metastability in ([2], Remark 2.20).
Let us first introduce the principle for the Poincaré inequality in Section 2 and then for the log–Sobolev inequality in Section 3. Then, the procedure is illustrated on specific examples of mixtures in Section 4.
2. Poincaré Inequality
To keep the presentation concise, the following notation for the mean of a function $f$ with respect to a measure μ is introduced:
$$\mathbb{E}_\mu(f) := \int f \, d\mu .$$
In this way, the variance in PI(ϱ) and the relative entropy in LSI(α) become
$$\operatorname{Var}_\mu(f) = \mathbb{E}_\mu\big((f - \mathbb{E}_\mu(f))^2\big) \qquad\text{and}\qquad \operatorname{Ent}_\mu(f) = \mathbb{E}_\mu\Big(f \log \frac{f}{\mathbb{E}_\mu(f)}\Big) .$$
Likewise, the covariance of two functions $f$ and $g$ is defined by
$$\operatorname{Cov}_\mu(f,g) := \mathbb{E}_\mu\big((f - \mathbb{E}_\mu(f))\,(g - \mathbb{E}_\mu(g))\big) = \mathbb{E}_\mu(f g) - \mathbb{E}_\mu(f)\,\mathbb{E}_\mu(g) .$$
The Cauchy–Schwarz inequality for the covariance now takes the form
$$\operatorname{Cov}_\mu(f,g)^2 \le \operatorname{Var}_\mu(f)\,\operatorname{Var}_\mu(g) .$$
The argument is based on an easy but powerful observation for measures $\mu_0$ and $\mu_1$ with joint support.
Lemma 1
(Mean-difference as covariance). If $\mu_0$ and $\mu_1$ are mutually absolutely continuous, then for any $s \in [0,1]$ and any function $f$ holds

(3)
$$\mathbb{E}_{\mu_0}(f) - \mathbb{E}_{\mu_1}(f) = s \operatorname{Cov}_{\mu_1}\!\Big(f, \frac{d\mu_0}{d\mu_1}\Big) - (1-s) \operatorname{Cov}_{\mu_0}\!\Big(f, \frac{d\mu_1}{d\mu_0}\Big) .$$
Proof.
The change of measure formula yields that the covariances above are just the difference of the expectations on the right-hand side:
$$\operatorname{Cov}_{\mu_1}\!\Big(f, \frac{d\mu_0}{d\mu_1}\Big) = \mathbb{E}_{\mu_1}\!\Big(f \,\frac{d\mu_0}{d\mu_1}\Big) - \mathbb{E}_{\mu_1}(f)\, \mathbb{E}_{\mu_1}\!\Big(\frac{d\mu_0}{d\mu_1}\Big) = \mathbb{E}_{\mu_0}(f) - \mathbb{E}_{\mu_1}(f),$$
and likewise $\operatorname{Cov}_{\mu_0}\big(f, \tfrac{d\mu_1}{d\mu_0}\big) = \mathbb{E}_{\mu_1}(f) - \mathbb{E}_{\mu_0}(f)$. Taking the convex combination of the two identities with weights $s$ and $1-s$ gives (3). □
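The identity (3) is easy to check numerically. The following snippet (an added illustration, not part of the original text) verifies it for two one-dimensional Gaussians by quadrature; the measures, the test function and the tolerance are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

# mu_0 = N(0,1), mu_1 = N(m,1) and an arbitrary test function f
m = 1.3
rho0 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
rho1 = lambda x: np.exp(-(x - m)**2 / 2) / np.sqrt(2 * np.pi)
f = lambda x: np.sin(x) + 0.1 * x**2

def mean(w, g):
    return quad(lambda x: g(x) * w(x), -20, 20)[0]

def cov(w, g, h):
    return mean(w, lambda x: g(x) * h(x)) - mean(w, g) * mean(w, h)

E0, E1 = mean(rho0, f), mean(rho1, f)
cov1 = cov(rho1, f, lambda x: rho0(x) / rho1(x))  # Cov_{mu_1}(f, dmu_0/dmu_1)
cov0 = cov(rho0, f, lambda x: rho1(x) / rho0(x))  # Cov_{mu_0}(f, dmu_1/dmu_0)

for s in (0.0, 0.3, 1.0):  # identity (3) holds for every interpolation parameter s
    assert abs((E0 - E1) - (s * cov1 - (1 - s) * cov0)) < 1e-7
print("identity (3) verified, mean-difference =", E0 - E1)
```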
The subsequent strategy is based on the identity (3): a Cauchy–Schwarz inequality leads to a product of two variances, then PI(ϱ) or LSI(α) can be applied, and the parameter $s$ leaves freedom to optimize the resulting expression. This allows for proving the following theorem, which is the generalization of ([1], Theorem 4.4) to the multidimensional case for the Poincaré inequality, provided $\mu_0$ and $\mu_1$ are absolutely continuous to each other.
Theorem 1 (PI for absolutely continuous mixtures).
Let $\mu_0$ and $\mu_1$ satisfy PI($\varrho_0$) and PI($\varrho_1$), respectively, and let both measures be absolutely continuous to each other. Then, for all $p \in [0,1]$ and $q = 1-p$, the mixture measure $\mu_p = p\,\mu_0 + q\,\mu_1$ satisfies PI($\varrho_p$) with

(4)
$$\frac{1}{\varrho_p} \le \begin{cases} \dfrac{1}{\varrho_1}, & \text{if } \varrho_0 \ge \varrho_1 \,(1 + q\,C_{1,0}), \\[4pt] \dfrac{1}{\varrho_0}, & \text{if } \varrho_1 \ge \varrho_0 \,(1 + p\,C_{0,1}), \\[4pt] \dfrac{p\,C_{0,1} + q\,C_{1,0} + p\,q\,C_{0,1}\,C_{1,0}}{p\,C_{0,1}\,\varrho_0 + q\,C_{1,0}\,\varrho_1}, & \text{otherwise}, \end{cases}$$
where

(5)
$$C_{0,1} := \mathbb{E}_{\mu_0}\!\Big(\frac{d\mu_0}{d\mu_1}\Big) - 1 \qquad\text{and}\qquad C_{1,0} := \mathbb{E}_{\mu_1}\!\Big(\frac{d\mu_1}{d\mu_0}\Big) - 1 .$$
Proof.
The variance of f with respect to $\mu_p$ is decomposed to
$$\operatorname{Var}_{\mu_p}(f) = p \operatorname{Var}_{\mu_0}(f) + q \operatorname{Var}_{\mu_1}(f) + p\,q\,\big(\mathbb{E}_{\mu_0}(f) - \mathbb{E}_{\mu_1}(f)\big)^2 .$$
Hereby, the first two terms are just the expectation of the conditional variances. The third term is the variance of a Bernoulli random variable taking the values $\mathbb{E}_{\mu_0}(f)$ and $\mathbb{E}_{\mu_1}(f)$ with probabilities $p$ and $q$. Now, the mean-difference is rewritten by Lemma 1, and the square is estimated with the Young inequality, introducing an additional parameter $\tau > 0$:
$$\big(\mathbb{E}_{\mu_0}(f) - \mathbb{E}_{\mu_1}(f)\big)^2 \le (1+\tau)\, s^2 \operatorname{Cov}_{\mu_1}\!\Big(f, \frac{d\mu_0}{d\mu_1}\Big)^2 + (1+\tau^{-1})\,(1-s)^2 \operatorname{Cov}_{\mu_0}\!\Big(f, \frac{d\mu_1}{d\mu_0}\Big)^2 .$$
Then, the Cauchy–Schwarz inequality is applied to the covariances, noting that $\operatorname{Var}_{\mu_1}\big(\tfrac{d\mu_0}{d\mu_1}\big) = C_{0,1}$ and $\operatorname{Var}_{\mu_0}\big(\tfrac{d\mu_1}{d\mu_0}\big) = C_{1,0}$, and PI($\varrho_0$) and PI($\varrho_1$) are used to obtain
(6)
$$\operatorname{Var}_{\mu_p}(f) \le \max\left\{ \frac{1 + q\,(1+\tau^{-1})(1-s)^2\, C_{1,0}}{\varrho_0},\ \frac{1 + p\,(1+\tau)\,s^2\, C_{0,1}}{\varrho_1} \right\} \int |\nabla f|^2 \, d\mu_p .$$
The resulting maximum is now minimized in $s$ and $\tau$. To do so, without loss of generality, $\varrho_0 \le \varrho_1$ is assumed. The other case can always be obtained by interchanging the roles of $\mu_0$ and $\mu_1$. If $\varrho_0 \le \varrho_1$, then $s = 1$ and $\tau \to 0$ is optimal as long as
$$\varrho_1 \ge \varrho_0\,(1 + p\,C_{0,1}) .$$
This corresponds to the second case in (4). By symmetry, the first case follows if $\varrho_0 \ge \varrho_1\,(1 + q\,C_{1,0})$.
Now, in the case $\varrho_1 < \varrho_0\,(1 + p\,C_{0,1})$ and $\varrho_0 < \varrho_1\,(1 + q\,C_{1,0})$, there exists by monotonicity for every $\tau$ a unique $s$ such that both terms in the max of the right-hand side in (6) are equal and hence the max is minimal. Since the first term contains $(1+\tau^{-1})(1-s)^2$ and the second one $(1+\tau)s^2$, the sum of the coefficients in the front is given by $h(\tau) = (1+\tau)\,s^2 + (1+\tau^{-1})\,(1-s)^2$ as a function of $\tau$. The minimization of h in $\tau$ leads to $\tau = (1-s)/s$ and
$$(1+\tau)\,s^2 = s \qquad\text{and}\qquad (1+\tau^{-1})\,(1-s)^2 = 1-s$$
holds. Hence, in this case, the parameter $\tau$ is coupled to $s$ and $h(\tau) = 1$. Thus, the problem can be rephrased: Find $s \in [0,1]$ which solves
$$\frac{1 + q\,(1-s)\,C_{1,0}}{\varrho_0} = \frac{1 + p\,s\,C_{0,1}}{\varrho_1} .$$
The solution is given by
$$s^\ast = \frac{\varrho_1\,(1 + q\,C_{1,0}) - \varrho_0}{p\,C_{0,1}\,\varrho_0 + q\,C_{1,0}\,\varrho_1} .$$
For this value of $s^\ast$, the value of the max in (6) is given by
$$\frac{1 + p\,s^\ast\,C_{0,1}}{\varrho_1} = \frac{p\,C_{0,1} + q\,C_{1,0} + p\,q\,C_{0,1}\,C_{1,0}}{p\,C_{0,1}\,\varrho_0 + q\,C_{1,0}\,\varrho_1} .$$
□
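As a quick plausibility check of the case distinction in (4), the following snippet (an added illustration, not from the original text) implements the bound and confirms that the interpolation case matches the second case continuously at the boundary $\varrho_1 = \varrho_0(1 + p\,C_{0,1})$; all numerical values are arbitrary.

```python
def poincare_mixture_bound(p, rho0, rho1, C01, C10):
    """Upper bound on 1/rho_p from (4)."""
    q = 1 - p
    if rho0 >= rho1 * (1 + q * C10):   # first case
        return 1 / rho1
    if rho1 >= rho0 * (1 + p * C01):   # second case
        return 1 / rho0
    num = p * C01 + q * C10 + p * q * C01 * C10   # interpolation case
    den = p * C01 * rho0 + q * C10 * rho1
    return num / den

p, C01, C10, rho0 = 0.3, 2.0, 1.5, 1.0
rho1 = rho0 * (1 + p * C01)            # boundary of the second case
inside = poincare_mixture_bound(p, rho0, rho1 * (1 - 1e-9), C01, C10)
print(abs(inside - 1 / rho0) < 1e-6)   # True: the two branches agree
```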
Remark 1.
The constants $C_{0,1}$ and $C_{1,0}$ can be rewritten, if $\mu_0$ and $\mu_1$ are mutually absolutely continuous, as
$$C_{0,1} = \operatorname{Var}_{\mu_1}\!\Big(\frac{d\mu_0}{d\mu_1}\Big) = \int \Big(\frac{d\mu_0}{d\mu_1} - 1\Big)^2 d\mu_1 =: \chi^2(\mu_0 \mid \mu_1) \qquad\text{and likewise}\qquad C_{1,0} = \chi^2(\mu_1 \mid \mu_0) .$$
This quantity is also known as the $\chi^2$-distance on the space of probability measures (cf. [3]). The $\chi^2$-distance is a rather strong distance and therefore bounds many other probability distances. Among them is also the relative entropy. Indeed, by the concavity of the logarithm and the Jensen inequality follows
$$H(\mu_0 \mid \mu_1) = \int \log \frac{d\mu_0}{d\mu_1} \, d\mu_0 \le \log \int \frac{d\mu_0}{d\mu_1}\, d\mu_0 = \log\big(1 + C_{0,1}\big) \le C_{0,1} .$$
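For two one-dimensional Gaussians, both sides of this chain of inequalities are explicit, which gives a cheap consistency check (an added illustration; the closed forms $H(\mathcal N(a,1)\mid\mathcal N(0,1)) = a^2/2$ and $C_{0,1} = e^{a^2}-1$ are standard):

```python
import numpy as np

# H = a^2/2 and C01 = exp(a^2) - 1 for mu_0 = N(a,1), mu_1 = N(0,1)
for a in np.linspace(0.1, 2.0, 5):
    H, C01 = a**2 / 2, np.expm1(a**2)
    assert H <= np.log1p(C01) <= C01   # H <= log(1 + C01) <= C01
print("relative entropy vs. chi^2 bound confirmed")
```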
Remark 2.
The proof of Theorem 1 shows that the expression for $\varrho_p^{-1}$ in the last case of (4) can be bounded above and below by

(7)
$$\frac{1}{\max\{\varrho_0, \varrho_1\}} \le \frac{1}{\varrho_p} \le \frac{1 + \min\{p\,C_{0,1},\, q\,C_{1,0}\}}{\min\{\varrho_0, \varrho_1\}} .$$
In the case where $\varrho_0 = \varrho_1 = \varrho$, the formula (4) simplifies to

(8)
$$\frac{1}{\varrho_p} \le \frac{1}{\varrho}\left(1 + \frac{p\,q\,C_{0,1}\,C_{1,0}}{p\,C_{0,1} + q\,C_{1,0}}\right) \le \frac{1 + \min\{p\,C_{0,1},\, q\,C_{1,0}\}}{\varrho} .$$
Corollary 1.
Let $\mu_0$ and $\mu_1$ satisfy PI($\varrho_0$) and PI($\varrho_1$), respectively. Then, for all $p \in [0,1]$ with $q = 1-p$, if $C_{1,0} < \infty$, the mixture measure $\mu_p$ satisfies PI($\varrho_p$) with
$$\frac{1}{\varrho_p} \le \max\left\{\frac{1 + q\,C_{1,0}}{\varrho_0},\ \frac{1}{\varrho_1}\right\} .$$
Likewise, if $C_{0,1} < \infty$, then it holds
$$\frac{1}{\varrho_p} \le \max\left\{\frac{1}{\varrho_0},\ \frac{1 + p\,C_{0,1}}{\varrho_1}\right\} .$$
Proof.
The proof is a simple consequence of Lemma 1 with $s = 0$ and $s = 1$, respectively, and a similar line of estimates as in (6). □
3. Log–Sobolev Inequality
In this section, a criterion for LSI($\alpha_p$) is established. It will be convenient to establish it in the form (2). For a function $f \ge 0$ and two probability measures $\mu_0$ and $\mu_1$, the averaged function $\bar f : \{0,1\} \to \mathbb{R}$ is defined by
$$\bar f(0) := \mathbb{E}_{\mu_0}(f) \qquad\text{and}\qquad \bar f(1) := \mathbb{E}_{\mu_1}(f) .$$
Moreover, the mixture of two Dirac measures $\delta_0$ and $\delta_1$ is by slight abuse of notation denoted by $\bar\mu_p = p\,\delta_0 + q\,\delta_1$ for $p \in [0,1]$ and $q = 1-p$. Then, the entropy of the mixture is given by

(9)
$$\operatorname{Ent}_{\mu_p}(f) = p \operatorname{Ent}_{\mu_0}(f) + q \operatorname{Ent}_{\mu_1}(f) + \operatorname{Ent}_{\bar\mu_p}(\bar f) .$$
The following discrete log–Sobolev inequality for a Bernoulli random variable is used to estimate the entropy of the averaged function $\bar f$. The optimal log–Sobolev constant was found by Higuchi and Yoshida [4] and by Diaconis and Saloff-Coste ([5], Theorem A.2) at the same time.
Lemma 2 (Optimal log–Sobolev inequality for Bernoulli measures).
A Bernoulli measure on $\{0,1\}$, i.e., a mixture of two Dirac measures $\bar\mu_p = p\,\delta_0 + q\,\delta_1$ with $p \in [0,1]$ and $q = 1-p$, satisfies the discrete log–Sobolev inequality
$$\operatorname{Ent}_{\bar\mu_p}(f) \le \frac{p\,q}{\Lambda(p,q)}\, \big(\sqrt{f(0)} - \sqrt{f(1)}\big)^2 \qquad\text{for all } f : \{0,1\} \to \mathbb{R}_{\ge 0},$$
where $\Lambda$ is the logarithmic mean defined by
$$\Lambda(p,q) := \frac{p - q}{\log p - \log q} \quad\text{for } p \ne q \qquad\text{and}\qquad \Lambda(p,p) := p .$$
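The sharpness of the constant in Lemma 2 can be probed numerically. The following sketch (an added illustration, not from the original text) scans Bernoulli parameters and two-point functions $f = (1, t)$ and checks that the ratio of the entropy to the right-hand side never exceeds 1:

```python
import numpy as np

def Lam(p, q):
    return p if p == q else (p - q) / (np.log(p) - np.log(q))

def ent(p, f0, f1):  # Ent of f under the Bernoulli measure (p, 1-p)
    m = p * f0 + (1 - p) * f1
    return sum(w * v * np.log(v / m) for w, v in ((p, f0), (1 - p, f1)) if v > 0)

worst = 0.0
for p in np.linspace(0.05, 0.95, 19):
    for t in np.logspace(-4, 4, 201):   # two-point function f = (1, t)
        rhs = p * (1 - p) / Lam(p, 1 - p) * (1 - np.sqrt(t))**2
        if rhs > 0:
            worst = max(worst, ent(p, 1.0, t) / rhs)
print("largest observed ratio:", worst)   # stays below (and close to) 1
```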
The above result allows for estimating the coarse-grained entropy in (9).
Lemma 3 (Estimate of the coarse-grained entropy).
Let $\bar f$ be given by $\bar f(i) = \mathbb{E}_{\mu_i}(f)$ for $i \in \{0,1\}$. Then, for all $p \in [0,1]$ and $q = 1-p$,

(10)
$$\operatorname{Ent}_{\bar\mu_p}(\bar f) \le \frac{p\,q}{\Lambda(p,q)} \Big( \operatorname{Var}_{\mu_0}\big(\sqrt f\big) + \operatorname{Var}_{\mu_1}\big(\sqrt f\big) + \big(\mathbb{E}_{\mu_0}(\sqrt f) - \mathbb{E}_{\mu_1}(\sqrt f)\big)^2 \Big)$$
holds.
Proof.
Lemma 2 applied to $\bar f$ yields

(11)
$$\operatorname{Ent}_{\bar\mu_p}(\bar f) \le \frac{p\,q}{\Lambda(p,q)} \Big(\sqrt{\mathbb{E}_{\mu_0}(f)} - \sqrt{\mathbb{E}_{\mu_1}(f)}\Big)^2 .$$
The square-root-mean-difference on the right-hand side of (11) can be estimated by using the fact that the function $(a,b) \mapsto (\sqrt a - \sqrt b)^2$ is jointly convex on $\mathbb{R}_{\ge 0}^2$. Indeed, by introducing the functions on the product space $(\mathbb{R}^n \times \mathbb{R}^n, \mu_0 \otimes \mu_1)$ defined by $g_0(x,y) := f(x)$ and $g_1(x,y) := f(y)$, an application of the Jensen inequality yields the estimate

(12)
$$\Big(\sqrt{\mathbb{E}_{\mu_0}(f)} - \sqrt{\mathbb{E}_{\mu_1}(f)}\Big)^2 \le \iint \big(\sqrt{f(x)} - \sqrt{f(y)}\big)^2 \, d\mu_0(x)\, d\mu_1(y) = \operatorname{Var}_{\mu_0}\big(\sqrt f\big) + \operatorname{Var}_{\mu_1}\big(\sqrt f\big) + \big(\mathbb{E}_{\mu_0}(\sqrt f) - \mathbb{E}_{\mu_1}(\sqrt f)\big)^2 . \qquad\square$$
The decomposition (9) together with (10) yields that a mixture $\mu_p = p\,\mu_0 + q\,\mu_1$ for $p \in [0,1]$ and $q = 1-p$ satisfies

(13)
$$\operatorname{Ent}_{\mu_p}(f) \le p \operatorname{Ent}_{\mu_0}(f) + q \operatorname{Ent}_{\mu_1}(f) + \frac{p\,q}{\Lambda(p,q)} \Big( \operatorname{Var}_{\mu_0}\big(\sqrt f\big) + \operatorname{Var}_{\mu_1}\big(\sqrt f\big) + \big(\mathbb{E}_{\mu_0}(\sqrt f) - \mathbb{E}_{\mu_1}(\sqrt f)\big)^2 \Big) .$$
The right-hand side of (13) consists of quantities which can be estimated under the assumption that $\mu_0$ and $\mu_1$ satisfy LSI($\alpha_0$) and LSI($\alpha_1$). The following theorem provides an extension of the result ([1], Theorem 4.4) to the multidimensional case for the log–Sobolev inequality.
Theorem 2 (LSI for absolutely continuous mixtures).
Let $\mu_0$ and $\mu_1$ satisfy LSI($\alpha_0$) and LSI($\alpha_1$), respectively, and let both measures be absolutely continuous to each other. Then, for all $p \in [0,1]$ and $q = 1-p$, the mixture measure $\mu_p$ satisfies LSI($\alpha_p$) with

(14)
$$\frac{1}{\alpha_p} \le \begin{cases} \dfrac{1}{\tilde\alpha_1}, & \text{if } \tilde\alpha_0 \ge \tilde\alpha_1 \big(1 + \tilde C_{1,0}\big), \\[4pt] \dfrac{1}{\tilde\alpha_0}, & \text{if } \tilde\alpha_1 \ge \tilde\alpha_0 \big(1 + \tilde C_{0,1}\big), \\[4pt] \dfrac{\tilde C_{0,1} + \tilde C_{1,0} + \tilde C_{0,1}\, \tilde C_{1,0}}{\tilde\alpha_0\, \tilde C_{0,1} + \tilde\alpha_1\, \tilde C_{1,0}}, & \text{otherwise}. \end{cases}$$
Hereby, $C_{0,1}$ and $C_{1,0}$ are given in (5), the reduced constants $\tilde\alpha_0, \tilde\alpha_1$ and $\tilde C_{0,1}, \tilde C_{1,0}$ are defined in (16) and (17) below, and $\Lambda(p,q)^{-1}$ is used for the inverse logarithmic mean
$$\Lambda(p,q)^{-1} = \frac{\log p - \log q}{p - q} .$$
Proof.
The starting point is the splitting obtained from (13). The variances and the mean-difference in (13) can be estimated in the same way as in the proof (6) of Theorem 1. Additionally, the fact [6] that LSI(α) implies PI(α) is used to derive, for any $s \in [0,1]$ and any $\tau > 0$,

(15)
$$\operatorname{Ent}_{\mu_p}(f) \le \frac{2p}{\alpha_0}\left(1 + \frac{q}{2\Lambda(p,q)}\Big(1 + (1+\tau^{-1})(1-s)^2\, C_{1,0}\Big)\right) \int |\nabla \sqrt f|^2 d\mu_0 + \frac{2q}{\alpha_1}\left(1 + \frac{p}{2\Lambda(p,q)}\Big(1 + (1+\tau)\,s^2\, C_{0,1}\Big)\right) \int |\nabla \sqrt f|^2 d\mu_1 .$$
By introducing reduced log–Sobolev constants

(16)
$$\tilde\alpha_0 := \alpha_0 \left(1 + \frac{q}{2\Lambda(p,q)}\right)^{-1} \qquad\text{and}\qquad \tilde\alpha_1 := \alpha_1 \left(1 + \frac{p}{2\Lambda(p,q)}\right)^{-1},$$
as well as defining the constants $\tilde C_{0,1}$ and $\tilde C_{1,0}$ by

(17)
$$\tilde C_{0,1} := \frac{p\,C_{0,1}}{p + 2\Lambda(p,q)} \qquad\text{and}\qquad \tilde C_{1,0} := \frac{q\,C_{1,0}}{q + 2\Lambda(p,q)},$$
the bound (15) takes the form

(18)
$$\operatorname{Ent}_{\mu_p}(f) \le 2 \max\left\{\frac{1 + (1+\tau^{-1})(1-s)^2\, \tilde C_{1,0}}{\tilde\alpha_0},\ \frac{1 + (1+\tau)\,s^2\, \tilde C_{0,1}}{\tilde\alpha_1}\right\} \int |\nabla \sqrt f|^2 \, d\mu_p .$$
The estimate (18) has the same structure as the estimate (6), where $\tilde\alpha_0, \tilde\alpha_1$ play the role of $\varrho_0, \varrho_1$, and $\tilde C_{0,1}, \tilde C_{1,0}$ the roles of $p\,C_{0,1}, q\,C_{1,0}$. Hence, the optimization procedure from the proof of Theorem 1 applies to this case, and the last step consists of translating the constants $\tilde\alpha_0, \tilde\alpha_1$ and $\tilde C_{0,1}, \tilde C_{1,0}$ back to the original ones. □
Remark 3.
Let the bound for $\alpha_p^{-1}$ in the last case of (14) be denoted by $\tilde A_p$. Then, the proof shows that it can be bounded above and below in the same way as in (7) in terms of the reduced constants (16) and (17):
$$\frac{1}{\max\{\tilde\alpha_0, \tilde\alpha_1\}} \le \tilde A_p \le \frac{1 + \min\{\tilde C_{0,1},\, \tilde C_{1,0}\}}{\min\{\tilde\alpha_0, \tilde\alpha_1\}} .$$
In the case $\alpha_0 = \alpha_1 = \alpha$, the simplified bound

(19)
$$\frac{1}{\alpha_p} \le \frac{1}{\alpha} \left(1 + \frac{p\,q}{2\,\Lambda(p,q)} \cdot \frac{C_{0,1} + C_{1,0} + C_{0,1}\,C_{1,0}}{p\,C_{0,1} + q\,C_{1,0}}\right)$$
holds. The inverse logarithmic mean blows up logarithmically for $p \to 0$ and $p \to 1$. Hence, even in the case $\alpha_0 = \alpha_1$, the bound (19) diverges logarithmically; for instance, if $C_{1,0} = \infty$, the right-hand side of (19) becomes $\frac{1}{\alpha}\big(1 + \frac{p\,(1 + C_{0,1})}{2\Lambda(p,q)}\big)$, which blows up like $\log\frac{1}{q}$ as $p \to 1$. This logarithmic divergence looks at first sight artificial, especially in comparison to (8), showing that the Poincaré constant is bounded. However, the next section with examples shows that this blow-up may actually occur. Hence, the bound in (14) is actually optimal on this level of generality.
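The logarithmic divergence of the inverse logarithmic mean is immediate to observe numerically (an added illustration):

```python
import numpy as np

Lam = lambda p, q: (p - q) / (np.log(p) - np.log(q))
for p in (1e-2, 1e-4, 1e-8):
    # 1/Lambda(p, 1-p) grows like log(1/p) as p -> 0
    print(p, 1 / Lam(p, 1 - p), np.log(1 / p))
```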
An analogous statement to Corollary 1 for the Poincaré constant is obtained for the log–Sobolev constant, whose proof follows along the same lines and is omitted.
Corollary 2.
Let $\mu_0$ and $\mu_1$ satisfy LSI($\alpha_0$) and LSI($\alpha_1$), respectively. Then, for any $p \in [0,1]$ and $q = 1-p$, if $C_{1,0} < \infty$, the mixture measure $\mu_p$ satisfies LSI($\alpha_p$) with
$$\frac{1}{\alpha_p} \le \max\left\{\frac{1}{\alpha_0}\left(1 + \frac{q\,(1 + C_{1,0})}{2\,\Lambda(p,q)}\right),\ \frac{1}{\alpha_1}\left(1 + \frac{p}{2\,\Lambda(p,q)}\right)\right\} .$$
Likewise, if $C_{0,1} < \infty$, then
$$\frac{1}{\alpha_p} \le \max\left\{\frac{1}{\alpha_0}\left(1 + \frac{q}{2\,\Lambda(p,q)}\right),\ \frac{1}{\alpha_1}\left(1 + \frac{p\,(1 + C_{0,1})}{2\,\Lambda(p,q)}\right)\right\}$$
holds.
4. Examples
The results of Theorems 1 and 2 are illustrated for some specific examples and also compared to the results of ([1], Section 4.5), which, however, are restricted to one-dimensional measures. Although the criterion of Theorems 1 and 2 can only give upper bounds in the multidimensional case when at least one of the mixture components is absolutely continuous to the other, it is still possible to obtain the optimal results in terms of scaling in the mixture parameter $p$.
4.1. Mixture of Two Gaussian Measures with Equal Covariance Matrix
Let us consider the mixture of two Gaussians $\mu_0 = \mathcal{N}(m_0, \Sigma)$ and $\mu_1 = \mathcal{N}(m_1, \Sigma)$ for some $m_0, m_1 \in \mathbb{R}^n$ and a strictly positive definite covariance matrix $\Sigma \le \sigma^2 \operatorname{Id}$ in the sense of quadratic forms for some $\sigma > 0$. Then, $\mu_0$ and $\mu_1$ satisfy PI($\sigma^{-2}$) and LSI($\sigma^{-2}$) by the Bakry–Émery criterion (Theorem A1), i.e., $\varrho_0 = \varrho_1 = \alpha_0 = \alpha_1 = \sigma^{-2}$. Furthermore, the $\chi^2$-distance between $\mu_0$ and $\mu_1$ can be explicitly calculated as a Gaussian integral (see also [7]):
$$C_{0,1} = C_{1,0} = e^{(m_0 - m_1)\cdot \Sigma^{-1} (m_0 - m_1)} - 1 .$$
Then, the bound from Theorem 1 in the form (8) yields

(20)
$$\frac{1}{\varrho_p} \le \sigma^2 \left(1 + p\,q \left(e^{(m_0 - m_1)\cdot \Sigma^{-1} (m_0 - m_1)} - 1\right)\right) .$$
Likewise, the log–Sobolev constant following from Theorem 2 in the form (19) leads to
$$\frac{1}{\alpha_p} \le \sigma^2 \left(1 + \frac{p\,q}{\Lambda(p,q)} \left(1 + \tfrac{1}{2}\left(e^{(m_0 - m_1)\cdot \Sigma^{-1} (m_0 - m_1)} - 1\right)\right)\right) .$$
By noting that $p\,q \le \Lambda(p,q)$, both constants stay uniformly bounded in p. The large exponential factor in the $\chi^2$-distance cannot be avoided on this level of generality, since the mixed measure has a bimodal structure leading to metastable effects ([2], Remark 2.20).
The result ([1], Corollary 4.7) deduced the following bound on $\varrho_p$ for the mixture of two one-dimensional standard Gaussians $\mu_0 = \mathcal{N}(0,1)$ and $\mu_1 = \mathcal{N}(m,1)$, for which (20) reads $\varrho_p^{-1} \le 1 + p\,q\,(e^{m^2} - 1)$:

(21)
$$\frac{1}{\varrho_p} \le 1 + \sqrt{p\,q}\; m^2\, e^{m^2} ,$$
where $q = 1-p$. The elementary inequalities $p\,q \le \sqrt{p\,q}$ and $e^{x} - 1 \le x\, e^{x}$ for all $x \ge 0$ show that the bound (20) is better than the bound (21) for all parameter values $p$ and $m$.
Hence, this example shows that, for mixtures with components that are absolutely continuous to each other and whose tail behavior is controlled in terms of the $\chi^2$-distance, Theorems 1 and 2 even improve the bound of [1] and generalize it to the multidimensional case.
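The comparison of (20) and (21) can also be made numerically over the whole parameter range (an added illustration; the form of (21) used here is the one reconstructed above):

```python
import numpy as np

p = np.linspace(1e-3, 1 - 1e-3, 999)
q = 1 - p
for m in (0.5, 1.0, 2.0):
    b20 = 1 + p * q * np.expm1(m**2)                 # bound (20) with sigma = 1
    b21 = 1 + np.sqrt(p * q) * m**2 * np.exp(m**2)   # bound (21)
    assert np.all(b20 <= b21)
print("bound (20) dominates bound (21) on the tested grid")
```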
4.2. Mixture of a Gaussian and Sub-Gaussian Measure
Let us consider $\mu_0 = \mathcal{N}(m, \Sigma)$, where $\Sigma$ is strictly positive definite. In addition, let the density of $\mu_1$ with respect to $\mu_0$ be bounded uniformly by some $M \ge 1$; that is, the relative density satisfies $\frac{d\mu_1}{d\mu_0} \le M$ almost everywhere on $\mathbb{R}^n$. By the Bakry–Émery criterion (Theorem A1), $\varrho_0 = \alpha_0 = \|\Sigma\|^{-1}$ holds, where $\|\Sigma\|$ denotes the largest eigenvalue of $\Sigma$. Furthermore, an upper bound for $C_{1,0}$ is obtained by the assumption on the bound on the relative density:
$$C_{1,0} = \mathbb{E}_{\mu_1}\!\Big(\frac{d\mu_1}{d\mu_0}\Big) - 1 \le M - 1 .$$
Provided that $\mu_1$ satisfies PI($\varrho_1$), the Poincaré constant of the mixture satisfies by Corollary 1 the estimate
$$\frac{1}{\varrho_p} \le \max\left\{\|\Sigma\|\,\big(1 + q\,(M - 1)\big),\ \frac{1}{\varrho_1}\right\} .$$
Similarly, Corollary 2 provides, whenever $\mu_1$ satisfies LSI($\alpha_1$), the following bound for the log–Sobolev constant of the mixture measure:
$$\frac{1}{\alpha_p} \le \max\left\{\|\Sigma\|\left(1 + \frac{q\,M}{2\,\Lambda(p,q)}\right),\ \frac{1}{\alpha_1}\left(1 + \frac{p}{2\,\Lambda(p,q)}\right)\right\} .$$
In this case, the logarithmic blow-up of the log–Sobolev constant cannot be ruled out for $p \to 0$ or $p \to 1$ without any further information on $\mu_1$.
4.3. Mixture of Two Centered Gaussians with Different Variance
For $\mu_0 = \mathcal{N}(0, \sigma_0^2 \operatorname{Id})$ and $\mu_1 = \mathcal{N}(0, \sigma_1^2 \operatorname{Id})$, the Bakry–Émery criterion (Theorem A1) implies PI($\sigma_0^{-2}$), LSI($\sigma_0^{-2}$) and PI($\sigma_1^{-2}$), LSI($\sigma_1^{-2}$), respectively. The calculation of the $\chi^2$-distance can be done using the spherical symmetry and is reduced to the one-dimensional integral
$$C_{0,1} + 1 = \frac{(2\pi\sigma_1^2)^{n/2}}{(2\pi\sigma_0^2)^{n}}\,\omega_{n-1} \int_0^\infty r^{n-1} \exp\left(-\frac{2\sigma_1^2 - \sigma_0^2}{2\,\sigma_0^2\,\sigma_1^2}\, r^2\right) dr .$$
Hereby, $\omega_{n-1} = \mathcal{H}^{n-1}(\mathbb{S}^{n-1})$ denotes the $(n-1)$-dimensional Hausdorff measure of the sphere $\mathbb{S}^{n-1}$. The integral does only exist for $\sigma_0^2 < 2\sigma_1^2$. In this case, it can be evaluated and simplified to
$$C_{0,1} = \left(\frac{\sigma_1^4}{\sigma_0^2\,(2\sigma_1^2 - \sigma_0^2)}\right)^{n/2} - 1 .$$
The bound for the constant $C_{1,0}$ follows by duality under the substitution $\sigma_0 \leftrightarrow \sigma_1$ and is given by

(22)
$$C_{1,0} = \left(\frac{\sigma_0^4}{\sigma_1^2\,(2\sigma_0^2 - \sigma_1^2)}\right)^{n/2} - 1 \qquad\text{for } \sigma_1^2 < 2\sigma_0^2 .$$
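In one dimension, the closed form (22) is quickly cross-checked against a direct quadrature of the defining integral (an added illustration):

```python
import numpy as np
from scipy.integrate import quad

def chi2_centered(s0, s1):  # C_{0,1} for N(0, s0^2) vs N(0, s1^2), n = 1
    return np.sqrt(s1**4 / (s0**2 * (2 * s1**2 - s0**2))) - 1

s0, s1 = 1.0, 0.9          # requires s0^2 < 2 s1^2
r0 = lambda x: np.exp(-x**2 / (2 * s0**2)) / np.sqrt(2 * np.pi * s0**2)
r1 = lambda x: np.exp(-x**2 / (2 * s1**2)) / np.sqrt(2 * np.pi * s1**2)
numeric = quad(lambda x: r0(x)**2 / r1(x), -30, 30)[0] - 1
print(numeric, chi2_centered(s0, s1))   # both agree
```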
If $C_{1,0} < \infty$, that is, for $\sigma_1^2 < 2\sigma_0^2$, the bound given in Corollary 1 yields
$$\frac{1}{\varrho_p} \le \max\left\{\sigma_0^2\,\big(1 + q\,C_{1,0}\big),\ \sigma_1^2\right\} .$$
Similarly, if $C_{0,1} < \infty$, that is, for $\sigma_0^2 < 2\sigma_1^2$, the bound becomes
$$\frac{1}{\varrho_p} \le \max\left\{\sigma_0^2,\ \sigma_1^2\,\big(1 + p\,C_{0,1}\big)\right\} .$$
In the case $\frac{1}{2} < \frac{\sigma_0^2}{\sigma_1^2} < 2$, both constants are finite and the interpolation bound (4) of Theorem 1 could be applied. However, the scaling behavior for the Poincaré constant can already be observed with the estimate (7) in Remark 2, where, again thanks to the symmetry under $\sigma_0 \leftrightarrow \sigma_1$,

(23)
$$\frac{1}{\varrho_p} \le \max\{\sigma_0^2,\, \sigma_1^2\}\,\big(1 + \min\{p\,C_{0,1},\, q\,C_{1,0}\}\big)$$
holds. Hence, the Poincaré constant stays bounded for the full range of parameters $p \in [0,1]$ and $\frac{1}{2} < \frac{\sigma_0^2}{\sigma_1^2} < 2$.
In the case $\sigma_1^2 < 2\sigma_0^2$ for the log–Sobolev constant, the bound from Corollary 2 gives

(24)
$$\frac{1}{\alpha_p} \le \max\left\{\sigma_0^2 \left(1 + \frac{q\,(1 + C_{1,0})}{2\,\Lambda(p,q)}\right),\ \sigma_1^2 \left(1 + \frac{p}{2\,\Lambda(p,q)}\right)\right\} .$$
The bound (24) blows up logarithmically for $p \to 0$ or $p \to 1$ in general. However, in the special case $\sigma_0 = \sigma_1$, although trivially (the two measures then coincide), the combined bound $\alpha_p^{-1} \le \sigma_0^2$ holds, which stays bounded. This behavior can be extended to the range $\frac{1}{2} < \frac{\sigma_0^2}{\sigma_1^2} < 2$ thanks to (22) and the interpolation bound of Theorem 2.
The result (23) can be compared with the one of ([1], Section 4.5.2), which states that, for some constant $C > 0$, all $p \in [0,1]$ and all $\sigma_0, \sigma_1 > 0$,

(25)
$$\frac{1}{\varrho_p} \le C \max\{\sigma_0^2,\, \sigma_1^2\}$$
holds. In general, depending on the constant C, the bound (23) is better for $\min\{p\,C_{0,1},\, q\,C_{1,0}\}$ small, whereas the scaling in the variance ratio $\sigma_0^2/\sigma_1^2$ is better for (25), namely linear in $\max\{\sigma_0^2, \sigma_1^2\}$ instead of carrying the additional factor (22) entering (23).
4.4. Mixture of Uniform and Gaussian Measure
Let $\mu_0 = \mathcal{N}(0, \sigma^2 \operatorname{Id})$ and $\mu_1 = \mathcal{U}(B_1)$ with $B_1 = \{x \in \mathbb{R}^n : |x| \le 1\}$ the unit ball around zero. Then, $\varrho_0 = \alpha_0 = \sigma^{-2}$ holds by the Bakry–Émery criterion (Theorem A1) and $\varrho_1 = \pi^2/4$ by the result of [8]. Furthermore, since $\mu_1 \ll \mu_0$, the $\chi^2$-distance between $\mu_1$ and $\mu_0$ becomes, thanks to the spherical symmetry,

(26)
$$C_{1,0} + 1 = \mathbb{E}_{\mu_1}\!\Big(\frac{d\mu_1}{d\mu_0}\Big) = \frac{(2\pi\sigma^2)^{n/2}}{|B_1|^2}\, \omega_{n-1} \int_0^1 r^{n-1}\, e^{\frac{r^2}{2\sigma^2}}\, dr .$$
The volume of the unit ball and the surface area of the unit sphere satisfy the following relations:

(27)
$$|B_1| = \frac{\pi^{n/2}}{\Gamma\big(\frac{n}{2} + 1\big)} \qquad\text{and}\qquad \omega_{n-1} = \mathcal{H}^{n-1}(\mathbb{S}^{n-1}) = n\,|B_1| .$$
The integral on the right-hand side in (26) can be bounded below by $\frac{1}{n}$ and above by $\frac{1}{n}\, e^{\frac{1}{2\sigma^2}}$, which altogether yields
$$\frac{(2\pi\sigma^2)^{n/2}}{|B_1|} \le C_{1,0} + 1 \le \frac{(2\pi\sigma^2)^{n/2}}{|B_1|}\, e^{\frac{1}{2\sigma^2}} .$$
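These bounds on $C_{1,0}$ are straightforward to confirm numerically (an added illustration; the dimension and variance are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

n, sigma = 3, 1.0
vol = np.exp((n / 2) * np.log(np.pi) - gammaln(n / 2 + 1))   # |B_1| from (27)
omega = n * vol                                              # H^{n-1}(S^{n-1})
I = quad(lambda r: r**(n - 1) * np.exp(r**2 / (2 * sigma**2)), 0, 1)[0]
C10 = (2 * np.pi * sigma**2)**(n / 2) * omega * I / vol**2 - 1   # from (26)
low = (2 * np.pi * sigma**2)**(n / 2) / vol - 1
up = (2 * np.pi * sigma**2)**(n / 2) * np.exp(1 / (2 * sigma**2)) / vol - 1
print(low <= C10 <= up, (low, C10, up))
```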
Corollary 1 implies that the Poincaré constant of the mixture satisfies

(28)
$$\frac{1}{\varrho_p} \le \max\left\{\sigma^2\,\big(1 + q\,C_{1,0}\big),\ \frac{4}{\pi^2}\right\} \le \max\left\{\sigma^2\, \frac{(2\pi\sigma^2)^{n/2}}{|B_1|}\, e^{\frac{1}{2\sigma^2}},\ \frac{4}{\pi^2}\right\},$$
where the last inequality follows from $q\,C_{1,0} \le C_{1,0}$ together with the upper bound on $C_{1,0}$, valid for all $\sigma > 0$ and all $n \in \mathbb{N}$.
The estimate of the log–Sobolev constant uses $\alpha_0 = \sigma^{-2}$ by the Bakry–Émery criterion (Theorem A1) and $\alpha_1 \ge \frac{2}{e}$ from (A1). Then, Corollary 2 yields the bound

(29)
$$\frac{1}{\alpha_p} \le \max\left\{\sigma^2 \left(1 + \frac{q\,(1 + C_{1,0})}{2\,\Lambda(p,q)}\right),\ \frac{1}{\alpha_1}\left(1 + \frac{p}{2\,\Lambda(p,q)}\right)\right\} .$$
There is a logarithmic blow-up of the bound for $p \to 0$ as well as for $p \to 1$.
The blow-up for $p \to 1$ is artificial, which can be shown by a combination of the Bakry–Émery criterion and the Holley–Stroock perturbation principle. To do so, the Hamiltonian of $\mu_p$ is decomposed into a convex function and some error term:

(30)
$$-\log\left(p\,\frac{e^{-\frac{|x|^2}{2\sigma^2}}}{(2\pi\sigma^2)^{n/2}} + q\,\frac{\mathbb{1}_{B_1}(x)}{|B_1|}\right) = \frac{|x|^2}{2\sigma^2} - \log\Big(p\,(2\pi\sigma^2)^{-n/2}\Big) + \psi_p(x),$$
where
$$\psi_p(x) = -\log\left(1 + \frac{q\,(2\pi\sigma^2)^{n/2}}{p\,|B_1|}\, e^{\frac{|x|^2}{2\sigma^2}}\, \mathbb{1}_{B_1}(x)\right) .$$
The function $\psi_p$ is radially monotone towards the boundary of $B_1$, which yields for its oscillation the bound

(31)
$$\operatorname{osc}\psi_p = \sup\psi_p - \inf\psi_p \le \log\left(1 + \frac{q}{p}\, \frac{(2\pi\sigma^2)^{n/2}}{|B_1|}\, e^{\frac{1}{2\sigma^2}}\right) .$$
From (30), the Hamiltonian is compared with the convex potential $\frac{|x|^2}{2\sigma^2}$ with the bound (31) on the perturbation $\psi_p$. This together yields, by the Bakry–Émery criterion (Theorem A1) and the Holley–Stroock perturbation principle (Theorem A2), that $\mu_p$ satisfies PI($\varrho_p$) and LSI($\alpha_p$) with

(32)
$$\varrho_p,\ \alpha_p \ge \sigma^{-2}\, e^{-\operatorname{osc}\psi_p} \ge \sigma^{-2}\left(1 + \frac{q}{p}\, \frac{(2\pi\sigma^2)^{n/2}}{|B_1|}\, e^{\frac{1}{2\sigma^2}}\right)^{-1},$$
where $|B_1|$ is the same constant as in (27). This bound only blows up for $p \to 0$. However, the blow-up is like $p^{-1}$. Furthermore, the bound on the Poincaré constant is worse than the one from (28). Therefore, both approaches need to be combined.
The combination of the bounds obtained in (29) and (32) results in the improved bound

(33)
$$\frac{1}{\alpha_p} \le \min\left\{\max\left\{\sigma^2\Big(1 + \frac{q\,(1 + C_{1,0})}{2\Lambda(p,q)}\Big),\ \frac{1}{\alpha_1}\Big(1 + \frac{p}{2\Lambda(p,q)}\Big)\right\},\ \sigma^2\left(1 + \frac{q}{p}\,\frac{(2\pi\sigma^2)^{n/2}}{|B_1|}\, e^{\frac{1}{2\sigma^2}}\right)\right\},$$
which only blows up logarithmically for $p \to 0$.
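The asymptotics of the combined bound can be illustrated numerically (an added sketch using the bounds (29) and (32) as reconstructed above, in dimension $n = 1$ with $\sigma = 1$):

```python
import numpy as np

sigma, vol = 1.0, 2.0                              # n = 1: |B_1| = 2
C10 = np.sqrt(2 * np.pi) / vol * np.exp(0.5) - 1   # upper bound on C_{1,0}
a1 = 2 / np.e                                      # alpha_1 from (A1)
Lam = lambda p, q: (p - q) / (np.log(p) - np.log(q))

def b29(p):
    q = 1 - p
    return max(sigma**2 * (1 + q * (1 + C10) / (2 * Lam(p, q))),
               (1 + p / (2 * Lam(p, q))) / a1)

def b32(p):
    return sigma**2 * (1 + (1 - p) / p * np.sqrt(2 * np.pi) / vol * np.exp(0.5))

for p in (1e-2, 1e-4, 1e-6):
    print(p, min(b29(p), b32(p)) / np.log(1 / p))  # ratio stabilizes: log blow-up
```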
This example shows that the Poincaré constant and the log–Sobolev constant may have different scaling behavior for $p \to 0$. Indeed, Ref. [1] shows, for this specific mixture in the one-dimensional case, that the log–Sobolev constant can be bounded below by
$$\frac{1}{\alpha_p} \ge C \log\frac{1}{p}$$
for p small enough and a constant C independent of p. In one dimension, lower bounds are accessible via the functional introduced by Bobkov–Götze [9]. Hence, the bound (33) is optimal in the one-dimensional case, which strongly indicates optimality also for the higher-dimensional case in terms of scaling in the mixture ratio p.
To conclude, the Bakry–Émery criterion in combination with the Holley–Stroock perturbation principle is effective for detecting blow-ups of the log–Sobolev constant for mixtures but has, in general, the wrong scaling behavior in the mixing parameter p. On the other hand, the criterion presented in Theorem 2 provides the right scaling of the blow-up but may give artificial blow-ups if the components of the mixture become singular in the sense of the $\chi^2$-distance.
5. Conclusions
Recently, the investigation of mixtures can be found in many different applications, and the main results of this work may be useful for the investigation of asymmetric Kalman filter estimates [10], the study of asymmetric mixtures in marine biology [11] and econometrics [12], gradient-quadratic and fixed-point iteration algorithms [7], and estimates of multivariate Gaussian mixtures [13].
Theorems 1 and 2 provide a simple estimate of the Poincaré and log–Sobolev constants of a two-component mixture measure if the $\chi^2$-distance of $\mu_0$ and $\mu_1$ is bounded and each of the components satisfies a Poincaré or log–Sobolev inequality. Section 4 reviews several examples with the following findings:
For mixtures with components that are mutually absolutely continuous and whose tail behavior is mutually controlled in terms of the $\chi^2$-distance, Theorems 1 and 2 are very effective.
If only one of the components is absolutely continuous to the other one with bounded density, then it is still possible to obtain a bound on the Poincaré and log–Sobolev constants. However, the log–Sobolev constant blows up logarithmically in the mixture parameter p approaching 0 or 1. It is shown for specific examples that this blow-up is, at least for one of the limits $p \to 0$ or $p \to 1$, not an artifact of the applied method.
A necessary condition for the finiteness of the $\chi^2$-distance between two measures is that at least one of the measures $\mu_0$ and $\mu_1$ is absolutely continuous to the other one, which in particular provides a mixture with connected support. This condition is too strong, since one can easily decompose a measure into a mixture where the joint support of the components is a null set. In this case, the present approach would not be helpful, even though the mixture may still satisfy both functional inequalities.
Future work could overcome the limits of the present approach by revisiting the crucial ingredient for both the Poincaré and the log–Sobolev inequality, which was the representation of the mean-difference in Lemma 1 in terms of covariances. Formula (3) from Lemma 1 applies only in the case where both measures are mutually absolutely continuous. However, the idea of an interpolation bound can be generalized to suitable weighted Sobolev spaces. For this, since $\int d(\mu_0 - \mu_1) = 0$, one can formally write and estimate

(34)
$$\mathbb{E}_{\mu_0}(f) - \mathbb{E}_{\mu_1}(f) = \int f \, d(\mu_0 - \mu_1) \le \|f\|_{H^1(\mu_p)}\, \|\mu_0 - \mu_1\|_{H^{-1}(\mu_p)} .$$
Hereby, $H^1(\mu_p)$ is the homogeneous weighted Sobolev space with norm $\|f\|_{H^1(\mu_p)}^2 = \int |\nabla f|^2 \, d\mu_p$ and $H^{-1}(\mu_p)$ is its dual space with norm
$$\|\mu_0 - \mu_1\|_{H^{-1}(\mu_p)} = \sup\left\{\int f \, d(\mu_0 - \mu_1) : \|f\|_{H^1(\mu_p)} \le 1\right\} .$$
The representation (34) is fruitful for many more applications in which the components of the mixture do not need to be absolutely continuous. Similar ideas for estimating mean-differences were successfully applied in the metastable setting [2,14], in which suitable bounds on the $H^{-1}$-norm are obtained. In this regard, the bound (34) promises many interesting new insights for future studies.
Acknowledgments
This work is based on part of the Ph.D. thesis [15] written under the supervision of Stephan Luckhaus at the University of Leipzig. The author thanks the Max-Planck-Institute for Mathematics in the Sciences in Leipzig for providing excellent working conditions. The author thanks Georg Menz for many discussions on mixtures and metastability.
Appendix A. Bakry–Émery Criterion and Holley–Stroock Perturbation Principle
Two classical conditions for Poincaré and log–Sobolev inequalities are stated in this part of the appendix. The Bakry–Émery criterion relates the convexity of the Hamiltonian of a measure and positive curvature of the underlying space to constants for the Poincaré and log–Sobolev inequalities. Although the result is classical for the case of $\mathbb{R}^n$, the result for a general convex domain was established in ([16], Theorem 2.1).
Theorem A1
(Bakry–Émery criterion ([17], Proposition 3, Corollary 2), ([16], Theorem 2.1)). Let $\Omega \subseteq \mathbb{R}^n$ be convex and let $H : \Omega \to \mathbb{R}$ be a Hamiltonian with Gibbs measure $\mu(dx) = Z_\mu^{-1}\, e^{-H(x)}\, dx$, and assume that $\nabla^2 H(x) \ge \kappa > 0$ for all $x \in \Omega$ in the sense of quadratic forms. Then, μ satisfies PI(ϱ) and LSI(α) with
$$\varrho \ge \kappa \qquad\text{and}\qquad \alpha \ge \kappa .$$
The second condition is the Holley–Stroock perturbation principle, which allows one to show Poincaré and log–Sobolev inequalities for a very large class of measures.
Theorem A2
(Holley–Stroock perturbation principle ([18], p. 1184)). Let $\Omega \subseteq \mathbb{R}^n$, let $H : \Omega \to \mathbb{R}$, and let $\psi : \Omega \to \mathbb{R}$ be a bounded function. Let μ and $\tilde\mu$ be the Gibbs measures with Hamiltonians H and $H + \psi$, respectively:
$$\mu(dx) = \frac{e^{-H(x)}}{Z_\mu}\, dx \qquad\text{and}\qquad \tilde\mu(dx) = \frac{e^{-H(x) - \psi(x)}}{Z_{\tilde\mu}}\, dx .$$
Then, if μ satisfies PI(ϱ) and LSI(α), then $\tilde\mu$ satisfies PI($\tilde\varrho$) and LSI($\tilde\alpha$), respectively. Hereby, the constants satisfy
$$\tilde\varrho \ge e^{-\operatorname{osc}\psi}\, \varrho \qquad\text{and}\qquad \tilde\alpha \ge e^{-\operatorname{osc}\psi}\, \alpha,$$
where $\operatorname{osc}\psi := \sup\psi - \inf\psi$.
Proofs of Theorems A1 and A2 relying on semigroup theory can be found in the exposition by Ledoux ([6], Corollary 1.4, Corollary 1.6 and Lemma 1.2).
Example A1 (Uniform measure on the ball).
The measure $\mu = \mathcal{U}(B_1)$, with $B_1 = \{x \in \mathbb{R}^n : |x| \le 1\}$ the unit ball around zero, satisfies LSI(α) with

(A1)
$$\alpha \ge \frac{2}{e} .$$
The proof compares the measure μ with the family of measures
$$\mu_\sigma(dx) = \frac{1}{Z_\sigma}\, e^{-\frac{|x|^2}{2\sigma^2}}\, \mathbb{1}_{B_1}(x)\, dx, \qquad \sigma > 0 .$$
Then, it holds that $\mu_\sigma$ satisfies LSI($\sigma^{-2}$) by the Bakry–Émery criterion (Theorem A1) on the convex domain $B_1$. Moreover, μ is obtained from $\mu_\sigma$ by the bounded perturbation $\psi_\sigma = -\frac{|x|^2}{2\sigma^2}$ with $\operatorname{osc}\psi_\sigma \le \frac{1}{2\sigma^2}$ on $B_1$, and hence μ satisfies LSI($\sigma^{-2} e^{-\frac{1}{2\sigma^2}}$) by the Holley–Stroock perturbation principle (Theorem A2) for all $\sigma > 0$. Optimizing the expression in σ, which is maximal at $\sigma^2 = \frac{1}{2}$, gives the bound (A1).
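The one-line optimization at the end of Example A1 can be confirmed numerically (an added illustration):

```python
import numpy as np

sig2 = np.linspace(0.05, 5.0, 100000)           # candidate variances sigma^2
alpha = (1 / sig2) * np.exp(-1 / (2 * sig2))    # LSI constant from Theorem A2
i = alpha.argmax()
print(sig2[i], alpha[i], 2 / np.e)              # optimum sigma^2 = 1/2, value 2/e
```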
Funding
This research received no external funding.
Conflicts of Interest
The author declares no conflict of interest.
References
- 1.Chafaï D., Malrieu F. On fine properties of mixtures with respect to concentration of measure and Sobolev type inequalities. Annales de l’Institut Henri Poincaré Probabilités et Statistiques. 2010;46:72–96. doi: 10.1214/08-AIHP309. [DOI] [Google Scholar]
- 2.Menz G., Schlichting A. Poincaré and logarithmic Sobolev inequalities by decomposition of the energy landscape. Ann. Probab. 2014;42:1809–1884. doi: 10.1214/14-AOP908. [DOI] [Google Scholar]
- 3.Gibbs A.L., Su F.E. On Choosing and Bounding Probability Metrics. Int. Stat. Rev. 2002;70:419–435. doi: 10.1111/j.1751-5823.2002.tb00178.x. [DOI] [Google Scholar]
- 4.Higuchi Y., Yoshida N. Analytic Conditions and Phase Transition for Ising Models. 1995. Unpublished lecture notes in Japanese.
- 5.Diaconis P., Saloff-Coste L. Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Probab. 1996;6:695–750. doi: 10.1214/aoap/1034968224. [DOI] [Google Scholar]
- 6.Ledoux M. Logarithmic Sobolev Inequalities for Unbounded Spin Systems Revisited. Springer; Berlin, Germany: 1999. pp. 167–194. Séminaire de Probabilités XXXV. [DOI] [Google Scholar]
- 7.Carreira-Perpinan M.A. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1318–1323. doi: 10.1109/34.888716. [DOI] [Google Scholar]
- 8.Payne L.E., Weinberger H.F. An optimal Poincaré inequality for convex domains. Arch. Ration. Mech. Anal. 1960;5:286–292. doi: 10.1007/BF00252910. [DOI] [Google Scholar]
- 9.Bobkov S.G., Götze F. Exponential Integrability and Transportation Cost Related to Logarithmic Sobolev Inequalities. J. Funct. Anal. 1999;163:1–28. doi: 10.1006/jfan.1998.3326. [DOI] [Google Scholar]
- 10.Nurminen H., Ardeshiri T., Piche R., Gustafsson F. Skew-t Filter and Smoother with Improved Covariance Matrix Approximation. IEEE Trans. Signal Process. 2018;66:5618–5633. doi: 10.1109/TSP.2018.2865434. [DOI] [Google Scholar]
- 11.Contreras-Reyes J., López Quintero F., Yáñez A. Towards Age Determination of Southern King Crab (Lithodes santolla) Off Southern Chile Using Flexible Mixture Modeling. J. Mar. Sci. Eng. 2018;6:157. doi: 10.3390/jmse6040157. [DOI] [Google Scholar]
- 12.Tasche D. Exact Fit of Simple Finite Mixture Models. J. Risk Financ. Manag. 2014;7:150–164. doi: 10.3390/jrfm7040150. [DOI] [Google Scholar]
- 13.McLachlan G., Peel D. Finite Mixture Models. John Wiley & Sons, Inc.; Hoboken, NJ, USA: 2000. (Wiley Series in Probability and Statistics). [DOI] [Google Scholar]
- 14.Schlichting A., Slowik M. Poincaré and logarithmic Sobolev constants for metastable Markov chains via capacitary inequalities. arXiv. 2017. 1705.05135
- 15.Schlichting A. Ph.D. Thesis. Universität Leipzig; Leipzig, Germany: 2012. The Eyring-Kramers Formula for Poincaré and Logarithmic Sobolev Inequalities. [Google Scholar]
- 16.Kolesnikov A.V., Milman E. Riemannian metrics on convex sets with applications to Poincaré and log–Sobolev inequalities. Calc. Var. Part. Differ. Equ. 2016;55:1–36. doi: 10.1007/s00526-016-1018-3. [DOI] [Google Scholar]
- 17.Bakry D., Émery M. Diffusions Hypercontractives. Springer; Berlin, Germany: 1985. pp. 177–206. Séminaire de Probabilités, XIX. [Google Scholar]
- 18.Holley R., Stroock D. Logarithmic Sobolev inequalities and stochastic Ising models. J. Stat. Phys. 1987;46:1159–1194. doi: 10.1007/BF01011161. [DOI] [Google Scholar]