Entropy. 2020 Nov 1;22(11):1244. doi: 10.3390/e22111244

A Two-Moment Inequality with Applications to Rényi Entropy and Mutual Information

Galen Reeves 1,2
PMCID: PMC7712232  PMID: 33287012

Abstract

This paper explores some applications of a two-moment inequality for the integral of the $r$th power of a function, where $0<r<1$. The first contribution is an upper bound on the Rényi entropy of a random vector in terms of two different moments. When one of the moments is the zeroth moment, these bounds recover previous results based on maximum entropy distributions under a single moment constraint. More generally, evaluation of the bound with two carefully chosen nonzero moments can lead to significant improvements with a modest increase in complexity. The second contribution is a method for upper bounding mutual information in terms of certain integrals with respect to the variance of the conditional density. The bounds have a number of useful properties arising from the connection with variance decompositions.

Keywords: information inequalities, mutual information, Rényi entropy, Carlson–Levin inequality

1. Introduction

The interplay between inequalities and information theory has a rich history, with notable examples including the relationship between the Brunn–Minkowski inequality and the entropy power inequality as well as the matrix determinant inequalities obtained from differential entropy [1]. In this paper, the focus is on a “two-moment” inequality that provides an upper bound on the integral of the $r$th power of a function. Specifically, if $f$ is a nonnegative function defined on $\mathbb{R}^n$ and $p,q,r$ are real numbers satisfying $0<r<1$ and $p<1/r-1<q$, then

$$\left(\int f(x)^r\, dx\right)^{\frac{1}{r}} \le C_{n,p,q,r}\, \left(\int \|x\|^{np} f(x)\, dx\right)^{\frac{qr+r-1}{(q-p)r}} \left(\int \|x\|^{nq} f(x)\, dx\right)^{\frac{1-r-pr}{(q-p)r}}, \quad (1)$$

where the best possible constant Cn,p,q,r is given exactly; see Propositions 2 and 3 ahead. The one-dimensional version of this inequality is a special case of the classical Carlson–Levin inequality [2,3,4], and the multidimensional version is a special case of a result presented by Barza et al. [5]. The particular formulation of the inequality used in this paper was derived independently in [6], where the proof follows from a direct application of Hölder’s inequality and Jensen’s inequality.

In the context of information theory and statistics, a useful property of the two-moment inequality is that it provides a bound on a nonlinear functional, namely the $r$-quasi-norm $\|\cdot\|_r$, in terms of integrals that are linear in $f$. Consequently, this inequality is well suited to settings where $f$ is a mixture of simple functions whose moments can be evaluated. We note that this reliance on moments to bound a nonlinear functional is closely related to bounds obtained from variational characterizations such as the Donsker–Varadhan representation of the Kullback–Leibler divergence [7] and its generalizations to Rényi divergence [8,9].
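As an added numerical aside (not part of the original article), Inequality (1) can be sanity-checked for a simple test function. The sketch below, which assumes NumPy, uses $n=1$, $f(x)=e^{-|x|}$, $r=1/2$, $p=0$, $q=2$, and takes the constant in the product form $(\omega(S)\,\psi_r(p,q))^{(1-r)/r}$ that appears later in Proposition 3.

```python
import numpy as np

# Numerical sanity check of the two-moment inequality for n = 1 with
# f(x) = exp(-|x|), r = 1/2, p = 0, q = 2.  The constant is taken in the
# form (omega(S) * psi_r(p, q))^{(1-r)/r} used in Proposition 3.
r, p, q = 0.5, 0.0, 2.0
lam = (q + 1.0 - 1.0 / r) / (q - p)   # = 1/2 for these parameters
omega = 2.0                            # omega(R) = 2

# psi_{1/2}(p, q) from Equation (8)
psi = np.pi * lam ** (-lam) * (1 - lam) ** (-(1 - lam)) / ((q - p) * np.sin(np.pi * lam))

# trapezoidal integration on a uniform grid
x = np.linspace(-50.0, 50.0, 400_001)
dx = x[1] - x[0]
integrate = lambda y: dx * (y.sum() - 0.5 * (y[0] + y[-1]))

f = np.exp(-np.abs(x))
lhs = integrate(f ** r) ** (1.0 / r)   # ||f||_r, equal to 16 in closed form
Mp = integrate(np.abs(x) ** p * f)     # zeroth moment, equal to 2
Mq = integrate(np.abs(x) ** q * f)     # second moment, equal to 4
rhs = (omega * psi) ** ((1 - r) / r) * Mp ** lam * Mq ** (1 - lam)

print(lhs, rhs)
```

Here the left-hand side is $\|f\|_{1/2} = 16$ and the right-hand side evaluates to $2\pi\sqrt{8} \approx 17.77$, so the bound holds with a modest gap.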

The first application considered in this paper concerns the relationship between the entropy of a probability measure and its moments. This relationship is fundamental to the principle of maximum entropy, which originated in statistical physics and has since been applied to statistical inference problems [10]. It also plays a prominent role in information theory and estimation theory, where the fact that the Gaussian distribution maximizes differential entropy under second-moment constraints ([11], Theorem 8.6.5) is used extensively. Moment–entropy inequalities for Rényi entropy were studied in a series of works by Lutwak et al. [12,13,14], as well as related works by Costa et al. [15,16] and Johnson and Vignat [17], in which it is shown that, under a single moment constraint, Rényi entropy is maximized by a family of generalized Gaussian distributions. The connection between these moment–entropy inequalities and the Carlson–Levin inequality was noted recently by Nguyen [18].

In this direction, one of the contributions of this paper is a new family of moment–entropy inequalities. This family of inequalities follows from applying Inequality (1) in the setting where f is a probability density function, and thus there is a one-to-one correspondence between the integral of the rth power and the Rényi entropy of order r. In the special case where one of the moments is the zeroth moment, this approach recovers the moment–entropy inequalities given in previous work. More generally, the additional flexibility provided by considering two different moments can lead to stronger results. For example, in Proposition 6, it is shown that if f is the standard Gaussian density function defined on Rn, then the difference between the Rényi entropy and the upper bound given by the two-moment inequality (equivalently, the ratio between the left- and right-hand sides of (1)) is bounded uniformly with respect to n under the following specification of the moments:

$$p_n = \frac{1-r}{r} - \sqrt{\frac{2(1-r)}{r(n+1)}}, \qquad q_n = \frac{1-r}{r} + \sqrt{\frac{2(1-r)}{r(n+1)}}. \quad (2)$$

Conversely, if one of the moments is restricted to be equal to zero, as is the case in the usual moment–entropy inequalities, then the difference between the Rényi entropy and the upper bound diverges with n.

The second application considered in this paper is the problem of bounding mutual information. In conjunction with Fano’s inequality and its extensions, bounds on mutual information play a prominent role in establishing minimax rates of statistical estimation [19] as well as the information-theoretic limits of detection in high-dimensional settings [20]. In many cases, one of the technical challenges is to provide conditions under which the dependence between the observations and an underlying signal or model parameters converges to zero in the limit of high dimension.

This paper introduces a new method for bounding mutual information, which can be described as follows. Let $P_{X,Y}$ be a probability measure on $\mathcal{X} \times \mathcal{Y}$ such that $P_{Y|X=x}$ and $P_Y$ have densities $f(y|x)$ and $f(y)$ with respect to the Lebesgue measure on $\mathbb{R}^n$. We begin by showing that the mutual information between $X$ and $Y$ satisfies the upper bound

$$I(X;Y) \le \int \sqrt{\mathrm{Var}\big(f(y\,|\,X)\big)}\; dy, \quad (3)$$

where $\mathrm{Var}(f(y\,|\,X)) = \int \big(f(y|x) - f(y)\big)^2\, dP_X(x)$ is the variance of $f(y\,|\,X)$; see Proposition 8 ahead. In view of (3), an application of the two-moment Inequality (1) with $r=1/2$ leads to an upper bound with respect to the moments of the variance of the density:

$$\int \|y\|^{ns}\, \mathrm{Var}\big(f(y\,|\,X)\big)\, dy, \quad (4)$$

where this expression is evaluated at $s \in \{p, q\}$ with $p<1<q$. A useful property of this bound is that the integrated variance is quadratic in $P_X$, and thus Expression (4) can be evaluated by swapping the integration over $y$ with the expectation over two independent copies of $X$. For example, when $P_{X,Y}$ is a Gaussian scale mixture, this approach provides closed-form upper bounds in terms of the moments of the Gaussian density. An early version of this technique is used to prove Gaussian approximations for random projections [21] arising in the analysis of a random linear estimation problem appearing in wireless communications and compressed sensing [22,23].

2. Moment Inequalities

Let $L^p(S)$ be the space of Lebesgue measurable functions from $S$ to $\mathbb{R}$ whose $p$th power is absolutely integrable, and for $p \ne 0$, define

$$\|f\|_p := \left(\int_S |f(x)|^p\, dx\right)^{\frac{1}{p}}.$$

Recall that $\|\cdot\|_p$ is a norm for $p \ge 1$ but only a quasi-norm for $0<p<1$ because it does not satisfy the triangle inequality. The $s$th moment of $f$ is defined as

$$M_s(f) := \int_S \|x\|^s\, |f(x)|\, dx,$$

where $\|\cdot\|$ denotes the standard Euclidean norm on vectors.

The two-moment Inequality (1) can be derived straightforwardly using the following argument. For $r \in (0,1)$, the mapping $f \mapsto \|f\|_r$ is concave on the subset of nonnegative functions and admits the variational representation

$$\|f\|_r = \inf\left\{ \|fg\|_1\, \|g\|_{r^*}^{-1} \,:\, g \in L^{r^*} \right\}, \quad (5)$$

where $r^* = r/(r-1) \in (-\infty, 0)$ is the Hölder conjugate of $r$. Consequently, each $g \in L^{r^*}$ leads to an upper bound on $\|f\|_r$. For example, if $f$ has bounded support $S$, choosing $g$ to be the indicator function of $S$ leads to the basic inequality $\|f\|_r \le (\mathrm{Vol}(S))^{(1-r)/r}\, \|f\|_1$. The upper bound on $\|f\|_r$ given in Inequality (1) can be obtained by restricting the minimum in Expression (5) to the parametric class of functions of the form $g(x) = \nu_1 \|x\|^{np} + \nu_2 \|x\|^{nq}$ with $\nu_1, \nu_2 > 0$ and then optimizing over the parameters $(\nu_1, \nu_2)$. Here, the constraints on $p,q$ are necessary and sufficient to ensure that $g \in L^{r^*}(\mathbb{R}^n)$.
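The indicator-function choice of $g$ can be made concrete with a small numerical check (an added illustration, not part of the original article): for $f(x) = \sin(\pi x/2)$ on $S = [0, 2]$, the basic inequality $\|f\|_r \le (\mathrm{Vol}(S))^{(1-r)/r}\, \|f\|_1$ holds for every $r \in (0,1)$.

```python
import numpy as np

# Check ||f||_r <= Vol(S)^{(1-r)/r} * ||f||_1 for f(x) = sin(pi*x/2) on S = [0, 2],
# which is the bound obtained from (5) with g = indicator function of S.
x = np.linspace(0.0, 2.0, 200_001)
dx = x[1] - x[0]
integrate = lambda y: dx * (y.sum() - 0.5 * (y[0] + y[-1]))

f = np.sin(np.pi * x / 2)
vol = 2.0
f1 = integrate(f)                      # ||f||_1 = 4 / pi

results = {}
for r in (0.25, 0.5, 0.75):
    lhs = integrate(f ** r) ** (1.0 / r)      # ||f||_r
    rhs = vol ** ((1 - r) / r) * f1           # indicator-function bound
    results[r] = (lhs, rhs)
print(results)
```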

In the following sections, we provide a more detailed derivation, starting with the problem of maximizing fr under multiple moment constraints and then specializing to the case of two moments. For a detailed account of the history of the Carlson type inequalities as well as some further extensions, see [4].

2.1. Multiple Moments

Consider the following optimization problem:

$$\text{maximize } \|f\|_r \quad \text{subject to } f(x) \ge 0 \text{ for all } x \in S, \qquad M_{s_i}(f) \le m_i \text{ for } 1 \le i \le k.$$

For $r \in (0,1)$, this is a convex optimization problem because $\|\cdot\|_r^r$ is concave and the moment constraints are linear. By standard theory in convex optimization (e.g., [24]), it can be shown that if the problem is feasible and the maximum is finite, then the maximizer has the form

$$f^*(x) = \left(\sum_{i=1}^k \nu_i^*\, \|x\|^{s_i}\right)^{\frac{1}{r-1}}, \quad \text{for all } x \in S.$$

The parameters $\nu_1^*, \dots, \nu_k^*$ are nonnegative and the $i$th moment constraint holds with equality for all $i$ such that $\nu_i^*$ is strictly positive; that is, $\nu_i^* > 0 \implies M_{s_i}(f^*) = m_i$. Consequently, the maximum can be expressed in terms of a linear combination of the moments:

$$\|f^*\|_r^r = \left\|(f^*)^r\right\|_1 = \left\|f^* \cdot (f^*)^{r-1}\right\|_1 = \sum_{i=1}^k \nu_i^*\, m_i.$$

For the purposes of this paper, it is useful to consider a relative inequality in terms of the moments of the function itself. Given a number $0<r<1$ and vectors $s \in \mathbb{R}^k$ and $\nu \in \mathbb{R}_+^k$, the function $c_r(\nu, s)$ is defined according to

$$c_r(\nu, s) = \left(\int_0^\infty \left(\sum_{i=1}^k \nu_i\, x^{s_i}\right)^{-\frac{r}{1-r}} dx\right)^{\frac{1-r}{r}},$$

if the integral exists. Otherwise, $c_r(\nu, s)$ is defined to be positive infinity. It can be verified that $c_r(\nu, s)$ is finite provided that there exist $i,j$ such that $\nu_i$ and $\nu_j$ are strictly positive and $s_i < (1-r)/r < s_j$.

The following result can be viewed as a consequence of the constrained optimization problem described above. We provide a different and very simple proof that depends only on Hölder’s inequality.

Proposition 1.

Let $f$ be a nonnegative Lebesgue measurable function defined on the positive reals $\mathbb{R}_+$. For any number $0<r<1$ and vectors $s \in \mathbb{R}^k$ and $\nu \in \mathbb{R}_+^k$, we have

$$\|f\|_r \le c_r(\nu, s) \sum_{i=1}^k \nu_i\, M_{s_i}(f).$$

Proof. 

Let $g(x) = \sum_{i=1}^k \nu_i\, x^{s_i}$. Then, we have

$$\|f\|_r^r = \left\|g^{-r} (fg)^r\right\|_1 \le \left\|g^{-r}\right\|_{\frac{1}{1-r}} \left\|(fg)^r\right\|_{\frac{1}{r}} = \left\|g^{-1}\right\|_{\frac{r}{1-r}}^{r}\, \left\|fg\right\|_1^{r} = \left(c_r(\nu, s) \sum_{i=1}^k \nu_i\, M_{s_i}(f)\right)^{r},$$

where the second step is Hölder's inequality with conjugate exponents $1/(1-r)$ and $1/r$. □

2.2. Two Moments

For a,b>0, the beta function B(a,b) and gamma function Γ(a) are given by

$$B(a,b) = \int_0^1 t^{a-1} (1-t)^{b-1}\, dt, \qquad \Gamma(a) = \int_0^\infty t^{a-1} e^{-t}\, dt,$$

and satisfy the relation $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ for $a,b>0$. To lighten the notation, we define the normalized beta function

$$\tilde{B}(a,b) = B(a,b)\, \frac{(a+b)^{a+b}}{a^a\, b^b}. \quad (6)$$

Properties of these functions are provided in Appendix A.

The next result follows from Proposition 1 for the case of two moments.

Proposition 2.

Let $f$ be a nonnegative Lebesgue measurable function defined on $[0,\infty)$. For any numbers $p,q,r$ with $0<r<1$ and $p<1/r-1<q$,

$$\|f\|_r \le \big(\psi_r(p,q)\big)^{\frac{1-r}{r}}\, [M_p(f)]^{\lambda}\, [M_q(f)]^{1-\lambda},$$

where $\lambda = (q+1-1/r)/(q-p)$ and

$$\psi_r(p,q) = \frac{1}{q-p}\, \tilde{B}\!\left(\frac{r\lambda}{1-r},\, \frac{r(1-\lambda)}{1-r}\right), \quad (7)$$

where B˜(·,·) is defined in Equation (6).

Proof. 

Letting $s=(p,q)$ and $\nu = (\gamma^{1-\lambda}, \gamma^{-\lambda})$ with $\gamma>0$, we have

$$[c_r(\nu,s)]^{\frac{r}{1-r}} = \int_0^\infty \left(\gamma^{1-\lambda} x^p + \gamma^{-\lambda} x^q\right)^{-\frac{r}{1-r}} dx.$$

Making the change of variable $x \to (\gamma u)^{\frac{1}{q-p}}$ leads to

$$[c_r(\nu,s)]^{\frac{r}{1-r}} = \frac{1}{q-p} \int_0^\infty \frac{u^{b-1}}{(1+u)^{a+b}}\, du = \frac{B(a,b)}{q-p},$$

where $a = \frac{r}{1-r}\lambda$ and $b = \frac{r}{1-r}(1-\lambda)$ and the second step follows from recognizing the integral representation of the beta function given in Equation (A3). Therefore, by Proposition 1, the inequality

$$\|f\|_r \le \left(\frac{B(a,b)}{q-p}\right)^{\frac{1-r}{r}} \left(\gamma^{1-\lambda}\, M_p(f) + \gamma^{-\lambda}\, M_q(f)\right)$$

holds for all $\gamma>0$. Evaluating this inequality with

$$\gamma = \frac{\lambda\, M_q(f)}{(1-\lambda)\, M_p(f)}$$

leads to the stated result. □

The special case r=1/2 admits the simplified expression

$$\psi_{1/2}(p,q) = \frac{\pi\, \lambda^{-\lambda}\, (1-\lambda)^{-(1-\lambda)}}{(q-p)\, \sin(\pi\lambda)}, \quad (8)$$

where we have used Euler's reflection formula for the beta function ([25], Theorem 1.2.1).
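The equality between Equations (7) and (8) at $r = 1/2$ can be confirmed numerically; the snippet below (an added aside, not part of the original article) evaluates both expressions for a few admissible pairs $(p, q)$.

```python
import math

def B(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def B_tilde(a, b):                       # normalized beta function, Equation (6)
    return B(a, b) * (a + b) ** (a + b) / (a ** a * b ** b)

def psi(r, p, q):                        # Equation (7)
    lam = (q + 1.0 - 1.0 / r) / (q - p)
    a, b = r * lam / (1 - r), r * (1 - lam) / (1 - r)
    return B_tilde(a, b) / (q - p)

def psi_half(p, q):                      # Equation (8), valid for r = 1/2
    lam = (q - 1.0) / (q - p)
    return (math.pi * lam ** (-lam) * (1 - lam) ** (-(1 - lam))
            / ((q - p) * math.sin(math.pi * lam)))

pairs = [(0.0, 2.0), (-1.0, 3.0), (0.5, 4.0)]    # each satisfies p < 1 < q
vals = [(psi(0.5, p, q), psi_half(p, q)) for p, q in pairs]
print(vals)
```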

Next, we consider an extension of Proposition 2 for functions defined on Rn. Given any measurable subset S of Rn, we define

$$\omega(S) = \mathrm{Vol}\big(B^n \cap \mathrm{cone}(S)\big), \quad (9)$$

where $B^n = \{u \in \mathbb{R}^n : \|u\| \le 1\}$ is the $n$-dimensional Euclidean ball of radius one and

$$\mathrm{cone}(S) = \{x \in \mathbb{R}^n : tx \in S \text{ for some } t > 0\}.$$

The function $\omega(S)$ is proportional to the surface measure of the projection of $S$ on the Euclidean sphere and satisfies

$$\omega(S) \le \omega(\mathbb{R}^n) = \frac{\pi^{\frac{n}{2}}}{\Gamma\!\left(\frac{n}{2}+1\right)}, \quad (10)$$

for all $S \subseteq \mathbb{R}^n$. Note that $\omega(\mathbb{R}_+) = 1$ and $\omega(\mathbb{R}) = 2$.

Proposition 3.

Let $f$ be a nonnegative Lebesgue measurable function defined on a subset $S$ of $\mathbb{R}^n$. For any numbers $p,q,r$ with $0<r<1$ and $p<1/r-1<q$,

$$\|f\|_r \le \big(\omega(S)\, \psi_r(p,q)\big)^{\frac{1-r}{r}}\, [M_{np}(f)]^{\lambda}\, [M_{nq}(f)]^{1-\lambda},$$

where $\lambda = (q+1-1/r)/(q-p)$ and $\psi_r(p,q)$ is given by Equation (7).

Proof. 

Let $f$ be extended to $\mathbb{R}^n$ using the rule $f(x)=0$ for all $x$ outside of $S$ and let $g : \mathbb{R}_+ \to \mathbb{R}_+$ be defined according to

$$g(y) = \frac{1}{n} \int_{S^{n-1}} f(y^{1/n} u)\, d\sigma(u),$$

where $S^{n-1} = \{u \in \mathbb{R}^n : \|u\| = 1\}$ is the Euclidean sphere of radius one and $\sigma(u)$ is the surface measure of the sphere. In the following, we will show that

$$\|f\|_r \le \big(\omega(S)\big)^{\frac{1-r}{r}}\, \|g\|_r \quad (11)$$
$$M_{ns}(f) = M_s(g). \quad (12)$$

Then, the stated inequality follows from applying Proposition 2 to the function $g$.

To prove Inequality (11), we begin with a transformation into polar coordinates:

$$\|f\|_r^r = \int_0^\infty \int_{S^{n-1}} f(tu)^r\, t^{n-1}\, d\sigma(u)\, dt. \quad (13)$$

Letting 1cone(S)(x) denote the indicator function of the set cone(S), the integral over the sphere can be bounded using:

$$\int_{S^{n-1}} f(tu)^r\, d\sigma(u) = \int_{S^{n-1}} 1_{\mathrm{cone}(S)}(u)\, f(tu)^r\, d\sigma(u) \overset{(a)}{\le} \left(\int_{S^{n-1}} 1_{\mathrm{cone}(S)}(u)\, d\sigma(u)\right)^{1-r} \left(\int_{S^{n-1}} f(tu)\, d\sigma(u)\right)^{r} \overset{(b)}{=} n\, \big(\omega(S)\big)^{1-r}\, g^r(t^n), \quad (14)$$

where: (a) follows from Hölder's inequality with conjugate exponents $\frac{1}{1-r}$ and $\frac{1}{r}$, and (b) follows from the definition of $g$ and the fact that

$$\omega(S) = \int_0^1 \int_{S^{n-1}} 1_{\mathrm{cone}(S)}(u)\, t^{n-1}\, d\sigma(u)\, dt = \frac{1}{n} \int_{S^{n-1}} 1_{\mathrm{cone}(S)}(u)\, d\sigma(u).$$

Plugging Inequality (14) back into Equation (13) and then making the change of variable ty1n yields

$$\|f\|_r^r \le n\, \big(\omega(S)\big)^{1-r} \int_0^\infty g^r(t^n)\, t^{n-1}\, dt = \big(\omega(S)\big)^{1-r}\, \|g\|_r^r.$$

The proof of Equation (12) follows along similar lines. We have

$$M_{ns}(f) \overset{(a)}{=} \int_0^\infty \int_{S^{n-1}} t^{ns}\, f(tu)\, t^{n-1}\, d\sigma(u)\, dt \overset{(b)}{=} \frac{1}{n} \int_0^\infty \int_{S^{n-1}} y^{s}\, f(y^{\frac{1}{n}} u)\, d\sigma(u)\, dy = M_s(g),$$

where (a) follows from a transformation into polar coordinates and (b) follows from the change of variable $t \to y^{\frac{1}{n}}$.

Having established Inequality (11) and Equation (12), an application of Proposition 2 completes the proof. □

3. Rényi Entropy Bounds

Let $X$ be a random vector that has a density $f(x)$ with respect to the Lebesgue measure on $\mathbb{R}^n$. The differential Rényi entropy of order $r \in (0,1) \cup (1,\infty)$ is defined according to [11]:

$$h_r(X) = \frac{1}{1-r} \log \int_{\mathbb{R}^n} f^r(x)\, dx.$$

Throughout this paper, it is assumed that the logarithm is defined with respect to the natural base and entropy is measured in nats. The Rényi entropy is continuous and nonincreasing in $r$. If the support set $S = \{x \in \mathbb{R}^n : f(x) > 0\}$ has finite measure, then the limit as $r$ converges to zero is given by $h_0(X) = \log \mathrm{Vol}(S)$. If the support does not have finite measure, then $h_r(X)$ increases to infinity as $r$ decreases to zero. The case $r=1$ is given by the Shannon differential entropy:

$$h_1(X) = -\int_S f(x) \log f(x)\, dx.$$

Given a random variable $X$ that is not identically zero and numbers $p,q,r$ with $0<r<1$ and $p<1/r-1<q$, we define the function

$$L_r(X; p, q) = \frac{r\lambda}{1-r} \log \mathbb{E}\big[|X|^p\big] + \frac{r(1-\lambda)}{1-r} \log \mathbb{E}\big[|X|^q\big],$$

where $\lambda = (q+1-1/r)/(q-p)$.

The next result, which follows directly from Proposition 3, provides an upper bound on the Rényi entropy.

Proposition 4.

Let $X$ be a random vector with a density on $\mathbb{R}^n$. For any numbers $p,q,r$ with $0<r<1$ and $p<1/r-1<q$, the Rényi entropy satisfies

$$h_r(X) \le \log \omega(S) + \log \psi_r(p,q) + L_r\big(\|X\|^n; p, q\big), \quad (15)$$

where ω(S) is defined in Equation (9) and ψr(p,q) is defined in Equation (7).

Proof. 

This result follows immediately from Proposition 3 and the definition of Rényi entropy. □
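As an added numerical aside (not part of the original article), Inequality (15) can be evaluated in closed form for a standard Gaussian on $\mathbb{R}$ with $r=1/2$, $p=0$, $q=2$; the Gaussian absolute-moment formula $\mathbb{E}|X|^s = 2^{s/2}\Gamma(\frac{1+s}{2})/\sqrt{\pi}$ used below is standard.

```python
import math

r, p, q = 0.5, 0.0, 2.0
lam = (q + 1.0 - 1.0 / r) / (q - p)

def abs_moment(s):                       # E|X|^s for X ~ N(0, 1)
    return 2.0 ** (s / 2) * math.gamma((1 + s) / 2) / math.sqrt(math.pi)

def B_tilde(a, b):                       # normalized beta function, Equation (6)
    Bab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return Bab * (a + b) ** (a + b) / (a ** a * b ** b)

a, b = r * lam / (1 - r), r * (1 - lam) / (1 - r)
psi = B_tilde(a, b) / (q - p)            # Equation (7)

# L_r(|X|; p, q) with n = 1, so ||X||^n = |X|
L = (r * lam / (1 - r)) * math.log(abs_moment(p)) \
    + (r * (1 - lam) / (1 - r)) * math.log(abs_moment(q))

bound = math.log(2.0) + math.log(psi) + L                    # omega(R) = 2
h_exact = 0.5 * math.log(2 * math.pi * r ** (1 / (r - 1)))   # Renyi entropy of N(0,1)
print(bound, h_exact)
```

For these parameters the bound is $\log(2\pi) \approx 1.838$ nats against the exact $h_{1/2}(X) = \frac{1}{2}\log(8\pi) \approx 1.612$ nats.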

The relationship between Proposition 4 and previous results depends on whether the moment p is equal to zero:

  • One-moment inequalities: If p=0, then there exists a distribution such that Inequality (15) holds with equality. This is because the zero-moment constraint ensures that the function that maximizes the Rényi entropy integrates to one. In this case, Proposition 4 is equivalent to previous results that focused on distributions that maximize Rényi entropy subject to a single moment constraint [12,13,15]. With some abuse of terminology, we refer to these bounds as one-moment inequalities. (A more accurate name would be two-moment inequalities under the constraint that one of the moments is the zeroth moment.)

  • Two-moment inequalities: If p0, then the right-hand side of Inequality (15) corresponds to the Rényi entropy of a nonnegative function that might not integrate to one. Nevertheless, the expression provides an upper bound on the Rényi entropy for any density with the same moments. We refer to the bounds obtained using a general pair (p,q) as two-moment inequalities.

The contribution of two-moment inequalities is that they lead to tighter bounds. To quantify the tightness, we define Δr(X;p,q) to be the gap between the right-hand side and left-hand side of Inequality (15) corresponding to the pair (p,q)—that is,

$$\Delta_r(X; p, q) = \log \omega(S) + \log \psi_r(p,q) + L_r\big(\|X\|^n; p, q\big) - h_r(X).$$

The gaps corresponding to the optimal two-moment and one-moment inequalities are defined according to

$$\Delta_r(X) = \inf_{p,q} \Delta_r(X; p, q), \qquad \tilde{\Delta}_r(X) = \inf_{q} \Delta_r(X; 0, q).$$

3.1. Some Consequences of These Bounds

By Lyapunov's inequality, the mapping $s \mapsto \frac{1}{s} \log \mathbb{E}|X|^s$ is nondecreasing on $[0,\infty)$, and thus

$$L_r(X; p, q) \le L_r(X; 0, q) = \frac{1}{q} \log \mathbb{E}\big[|X|^q\big], \qquad p \ge 0. \quad (16)$$

In other words, the case p=0 provides an upper bound on Lr(X;p,q) for nonnegative p. Alternatively, we also have the lower bound

$$L_r(X; p, q) \ge \frac{r}{1-r} \log \mathbb{E}\Big[|X|^{\frac{1-r}{r}}\Big], \quad (17)$$

which follows from the convexity of $s \mapsto \log \mathbb{E}|X|^s$.

A useful property of Lr(X;p,q) is that it is additive with respect to the product of independent random variables. Specifically, if X and Y are independent, then

$$L_r(XY; p, q) = L_r(X; p, q) + L_r(Y; p, q). \quad (18)$$

One consequence is that multiplication by a bounded random variable cannot increase the Rényi entropy by an amount that exceeds the gap of the two-moment inequality with nonnegative moments.

Proposition 5.

Let $Y$ be a random vector on $\mathbb{R}^n$ with finite Rényi entropy of order $0<r<1$, and let $X$ be an independent random variable that satisfies $0 < X \le t$. Then,

$$h_r(XY) \le h_r(tY) + \Delta_r(Y; p, q),$$

for all $0 \le p < 1/r - 1 < q$.

Proof. 

Let $Z = XY$ and let $S_Z$ and $S_Y$ denote the support sets of $Z$ and $Y$, respectively. The assumption that $X$ is nonnegative means that $\mathrm{cone}(S_Z) = \mathrm{cone}(S_Y)$. We have

$$h_r(Z) \overset{(a)}{\le} \log \omega(S_Z) + \log \psi_r(p,q) + L_r\big(\|Z\|^n; p, q\big) \overset{(b)}{=} h_r(Y) + L_r\big(|X|^n; p, q\big) + \Delta_r(Y; p, q) \overset{(c)}{\le} h_r(Y) + n \log t + \Delta_r(Y; p, q),$$

where (a) follows from Proposition 4, (b) follows from Equation (18) and the definition of $\Delta_r(Y; p, q)$, and (c) follows from Inequality (16) and the assumption $|X| \le t$. Finally, recalling that $h_r(tY) = h_r(Y) + n \log t$ completes the proof. □

3.2. Example with Log-Normal Distribution

If $W \sim \mathcal{N}(\mu, \sigma^2)$, then the random variable $X = \exp(W)$ has a log-normal distribution with parameters $(\mu, \sigma^2)$. The Rényi entropy is given by

$$h_r(X) = \mu + \frac{1}{2}\, \frac{1-r}{r}\, \sigma^2 + \frac{1}{2} \log\left(2\pi\, r^{\frac{1}{r-1}}\, \sigma^2\right),$$

and the logarithm of the $s$th moment is given by

$$\log \mathbb{E}\big[|X|^s\big] = \mu s + \frac{1}{2} \sigma^2 s^2.$$

With a bit of work, it can be shown that the gap of the optimal two-moment inequality does not depend on the parameters (μ,σ2) and is given by

$$\Delta_r(X) = \log \tilde{B}\!\left(\frac{r}{2(1-r)},\, \frac{r}{2(1-r)}\right) + \frac{1}{2} \log\frac{r}{4(1-r)} + \frac{1}{2} - \frac{1}{2} \log\left(2\pi\, r^{\frac{1}{r-1}}\right). \quad (19)$$

The details of this derivation are given in Appendix B.1. Meanwhile, the gap of the optimal one-moment inequality is given by

$$\tilde{\Delta}_r(X) = \inf_{q} \left\{ \log \tilde{B}\!\left(\frac{r}{1-r} - \frac{1}{q},\, \frac{1}{q}\right) + \log\frac{1}{q} + \frac{1}{2} q \sigma^2 - \frac{1}{2}\, \frac{1-r}{r}\, \sigma^2 - \frac{1}{2} \log\left(2\pi\, r^{\frac{1}{r-1}}\, \sigma^2\right) \right\}. \quad (20)$$

The functions $\Delta_r(X)$ and $\tilde{\Delta}_r(X)$ are illustrated in Figure 1 as functions of $r$ for various values of $\sigma^2$. The function $\Delta_r(X)$ is bounded uniformly with respect to $r$ and converges to zero as $r$ increases to one. The tightness of the two-moment inequality in this regime follows from the fact that the log-normal distribution maximizes Shannon entropy subject to constraints on the mean and variance of $\log X$. By contrast, the function $\tilde{\Delta}_r(X)$ varies with the parameter $\sigma^2$. For any fixed $r \in (0,1)$, it can be shown that $\tilde{\Delta}_r(X)$ increases to infinity if $\sigma^2$ converges to zero or infinity.

Figure 1.


Comparison of upper bounds on Rényi entropy in nats for the log-normal distribution as a function of the order $r$ for various values of $\sigma^2$.
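The claim that $\Delta_r(X)$ does not depend on $(\mu, \sigma^2)$ and equals (19) can be probed numerically; the sketch below (an added aside, not part of the original article) minimizes $\Delta_r(X; p, q)$ over a grid for two parameter pairs and compares the result against the closed form.

```python
import math
import numpy as np

r = 0.5

def B_tilde(a, b):                       # normalized beta function, Equation (6)
    Bab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return Bab * (a + b) ** (a + b) / (a ** a * b ** b)

def gap(p, q, mu, s2):
    """Delta_r(X; p, q) for X log-normal(mu, s2); omega(R_+) = 1."""
    lam = (q + 1.0 - 1.0 / r) / (q - p)
    a, b = r * lam / (1 - r), r * (1 - lam) / (1 - r)
    psi = B_tilde(a, b) / (q - p)
    logE = lambda s: mu * s + 0.5 * s2 * s * s      # log E[X^s]
    L = (r * lam / (1 - r)) * logE(p) + (r * (1 - lam) / (1 - r)) * logE(q)
    h = mu + 0.5 * ((1 - r) / r) * s2 + 0.5 * math.log(2 * math.pi * r ** (1 / (r - 1)) * s2)
    return math.log(psi) + L - h

# closed form, Equation (19)
c = r / (2 * (1 - r))
delta_closed = (math.log(B_tilde(c, c)) + 0.5 * math.log(r / (4 * (1 - r)))
                + 0.5 - 0.5 * math.log(2 * math.pi * r ** (1 / (r - 1))))

mins = []
for mu, s2 in [(0.0, 1.0), (1.0, 4.0)]:
    best = min(gap(p, q, mu, s2)
               for p in np.arange(-1.0, 0.999, 0.01)
               for q in np.arange(1.001, 4.0, 0.01))
    mins.append(best)
print(delta_closed, mins)
```

Both grid minima agree with the closed form (about 0.0326 nats at $r=1/2$), independently of $(\mu, \sigma^2)$.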

3.3. Example with Multivariate Gaussian Distribution

Next, we consider the case where $Y \sim \mathcal{N}(0, I_n)$ is an $n$-dimensional Gaussian vector with mean zero and identity covariance. The Rényi entropy is given by

$$h_r(Y) = \frac{n}{2} \log\left(2\pi\, r^{\frac{1}{r-1}}\right),$$

and the $s$th moment of the magnitude $\|Y\|$ is given by

$$\mathbb{E}\big[\|Y\|^s\big] = 2^{\frac{s}{2}}\, \frac{\Gamma\!\left(\frac{n+s}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}.$$

The next result shows that as the dimension n increases, the gap of the optimal two-moment inequality converges to the gap for the log-normal distribution. Moreover, for each r(0,1), the following choice of moments is optimal in the large-n limit:

$$p_n = \frac{1-r}{r} - \sqrt{\frac{2(1-r)}{r(n+1)}}, \qquad q_n = \frac{1-r}{r} + \sqrt{\frac{2(1-r)}{r(n+1)}}. \quad (21)$$

The proof is given in Appendix B.3.

Proposition 6.

If $Y \sim \mathcal{N}(0, I_n)$, then, for each $r \in (0,1)$,

$$\lim_{n \to \infty} \Delta_r(Y) = \lim_{n \to \infty} \Delta_r(Y; p_n, q_n) = \Delta_r(X),$$

where X has a log-normal distribution and (pn,qn) are given by (21).

Figure 2 provides a comparison of $\Delta_r(Y)$, $\Delta_r(Y; p_n, q_n)$, and $\tilde{\Delta}_r(Y)$ as functions of $n$ for $r=0.1$. Here, we see that both $\Delta_r(Y)$ and $\Delta_r(Y; p_n, q_n)$ converge rapidly to the asymptotic limit given by the gap of the log-normal distribution. By contrast, the gap of the optimal one-moment inequality $\tilde{\Delta}_r(Y)$ increases without bound.

Figure 2.


Comparison of upper bounds on Rényi entropy in nats for the multivariate Gaussian distribution N(0,In) as a function of the dimension n with r=0.1. The solid black line is the gap of the optimal two-moment inequality for the log-normal distribution.

3.4. Inequalities for Differential Entropy

Proposition 4 can also be used to recover some known inequalities for differential entropy by considering the limiting behavior as r converges to one. For example, it is well known that the differential entropy of an n-dimensional random vector X with finite second moment satisfies

$$h(X) \le \frac{n}{2} \log\left(2\pi e\, \mathbb{E}\Big[\tfrac{1}{n} \|X\|^2\Big]\right), \quad (22)$$

with equality if and only if the entries of X are i.i.d. zero-mean Gaussian. A generalization of this result in terms of an arbitrary positive moment is given by

$$h(X) \le \log \frac{\Gamma\!\left(\frac{n}{s}+1\right)}{\Gamma\!\left(\frac{n}{2}+1\right)} + \frac{n}{2} \log \pi + \frac{n}{s} \log\left(e s\, \mathbb{E}\Big[\tfrac{1}{n} \|X\|^s\Big]\right), \quad (23)$$

for all s>0. Note that Inequality (22) corresponds to the case s=2.

Inequality (23) can be proved as an immediate consequence of Proposition 4 and the fact that hr(X) is nonincreasing in r. Using properties of the beta function given in Appendix A, it is straightforward to verify that

$$\lim_{r \to 1} \psi_r(0, q) = (e q)^{\frac{1}{q}}\, \Gamma\!\left(\frac{1}{q}+1\right), \quad \text{for all } q > 0.$$

Combining this result with Proposition 4 and Inequality (16) leads to

$$h(X) \le \log \omega(S) + \log \Gamma\!\left(\frac{1}{q}+1\right) + \frac{1}{q} \log\left(e q\, \mathbb{E}\big[\|X\|^{nq}\big]\right).$$

Using Inequality (10) and making the substitution s=nq leads to Inequality (23).
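For a standard Gaussian, the right-hand side of (23) has a closed form, and $s=2$ recovers (22) with equality; the sketch below (an added aside, not part of the original article) checks this for a few moments, using the chi-distribution moment formula from Section 3.3.

```python
import math

lg = math.lgamma

def bound_23(n, s):
    """Right-hand side of Inequality (23) for X ~ N(0, I_n)."""
    # E||X||^s = 2^{s/2} Gamma((n+s)/2) / Gamma(n/2)  (chi-distribution moment)
    log_m = (s / 2) * math.log(2) + lg((n + s) / 2) - lg(n / 2)
    return (lg(n / s + 1) - lg(n / 2 + 1) + (n / 2) * math.log(math.pi)
            + (n / s) * (1 + math.log(s) + log_m - math.log(n)))

def h_gauss(n):                          # exact differential entropy of N(0, I_n)
    return (n / 2) * math.log(2 * math.pi * math.e)

vals = {(n, s): (bound_23(n, s), h_gauss(n)) for n in (1, 3) for s in (1.0, 2.0, 3.0)}
print(vals)
```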

Another example follows from the fact that the log-normal distribution maximizes the differential entropy of a positive random variable X subject to constraints on the mean and variance of log(X), and hence

$$h(X) \le \mathbb{E}[\log X] + \frac{1}{2} \log\big(2\pi e\, \mathrm{Var}(\log X)\big), \quad (24)$$

with equality if and only if X is log-normal. In Appendix B.4, it is shown how this inequality can be proved using our two-moment inequalities by studying the behavior as both p and q converge to zero as r increases to one.

4. Bounds on Mutual Information

4.1. Relative Entropy and Chi-Squared Divergence

Let P and Q be distributions defined on a common probability space that have densities p and q with respect to a dominating measure μ. The relative entropy (or Kullback–Leibler divergence) is defined according to

$$D(P \,\|\, Q) = \int p \log\frac{p}{q}\, d\mu,$$

and the chi-squared divergence is defined as

$$\chi^2(P \,\|\, Q) = \int \frac{(p-q)^2}{q}\, d\mu.$$

Both of these divergences can be seen as special cases of the general class of $f$-divergence measures, and there exists a rich literature on comparisons between different divergences [8,26,27,28,29,30,31,32]. The chi-squared divergence can be viewed as the squared $L^2(\mu)$ distance between $p/\sqrt{q}$ and $\sqrt{q}$, and it can also be interpreted, up to a factor of two, as the first non-zero term in the power series expansion of the relative entropy ([26], Lemma 4). More generally, the chi-squared divergence provides an upper bound on the relative entropy via

$$D(P \,\|\, Q) \le \log\big(1 + \chi^2(P \,\|\, Q)\big). \quad (25)$$

The proof of this inequality follows straightforwardly from Jensen’s inequality and the concavity of the logarithm; see [27,31,32] for further refinements.

Given a random pair (X,Y), the mutual information between X and Y is defined according to

$$I(X;Y) = D(P_{X,Y} \,\|\, P_X P_Y).$$

From Inequality (25), we see that the mutual information can always be upper bounded using

$$I(X;Y) \le \log\big(1 + \chi^2(P_{X,Y} \,\|\, P_X P_Y)\big). \quad (26)$$

The next section provides bounds on the mutual information that can improve upon this inequality.

4.2. Mutual Information and Variance of Conditional Density

Let $(X,Y)$ be a random pair such that the conditional distribution of $Y$ given $X$ has a density $f_{Y|X}(y|x)$ with respect to the Lebesgue measure on $\mathbb{R}^n$. Note that the marginal density of $Y$ is given by $f_Y(y) = \mathbb{E}[f_{Y|X}(y|X)]$. To simplify notation, we will write $f(y|x)$ and $f(y)$ where the subscripts are implicit. The support set of $Y$ is denoted by $S_Y$.

The measure of the dependence between X and Y that is used in our bounds can be understood in terms of the variance of the conditional density. For each y, the conditional density f(y|X) evaluated with a random realization of X is a random variable. The variance of this random variable is given by

$$\mathrm{Var}\big(f(y|X)\big) = \mathbb{E}\Big[\big(f(y|X) - f(y)\big)^2\Big], \quad (27)$$

where we have used the fact that the marginal density f(y) is the expectation of f(y|X). The sth moment of the variance of the conditional density is defined according to

$$V_s(Y|X) = \int_{S_Y} \|y\|^s\, \mathrm{Var}\big(f(y|X)\big)\, dy. \quad (28)$$

The variance moment Vs(Y|X) is nonnegative and equal to zero if and only if X and Y are independent.

The function κ(t) is defined according to

$$\kappa(t) = \sup_{u \in (0,\infty)} \frac{\log(1+u)}{u^t}, \qquad t \in (0,1]. \quad (29)$$

The proof of the following result is given in Appendix C. The behavior of κ(t) is illustrated in Figure 3.

Figure 3.


Graphs of $\kappa(t)$ and $t\,\kappa(t)$ as functions of $t$.

Proposition 7.

The function $\kappa(t)$ defined in Equation (29) can be expressed as

$$\kappa(t) = \frac{\log(1+u)}{u^t}, \qquad t \in (0,1],$$

where

$$u = \exp\left(W\!\left(-\tfrac{1}{t} \exp\left(-\tfrac{1}{t}\right)\right) + \tfrac{1}{t}\right) - 1,$$

and $W(\cdot)$ denotes Lambert's $W$-function, i.e., $W(z)$ is the unique solution to the equation $z = w \exp(w)$ on the interval $[-1, \infty)$. Furthermore, the function $g(t) = t\, \kappa(t)$ is strictly increasing on $(0,1]$ with $\lim_{t \to 0} g(t) = 1/e$ and $g(1) = 1$, and thus

$$\frac{1}{e t} \le \kappa(t) \le \frac{1}{t}, \qquad t \in (0,1],$$

where the lower bound $1/(et)$ is tight for small values of $t$ and the upper bound $1/t$ is tight for values of $t$ close to 1.
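Proposition 7 can be verified numerically; the sketch below (an added aside, not part of the original article) implements the principal branch of Lambert's $W$ by Newton iteration (a hand-rolled helper, not a library call) and compares the closed form against a brute-force maximization of Equation (29).

```python
import math
import numpy as np

def lambert_w(z, iters=100):
    """Principal branch W_0 via Newton iteration (adequate for z in [-1/e, 0))."""
    w = -0.5
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1 + w) + 1e-300)
    return w

def kappa_closed(t):                     # closed form from Proposition 7
    u = math.exp(lambert_w(-(1 / t) * math.exp(-1 / t)) + 1 / t) - 1
    return math.log(1 + u) / u ** t

u_grid = np.logspace(-4, 4, 2_000_001)
def kappa_grid(t):                       # brute-force sup over a log-spaced grid
    return float(np.max(np.log1p(u_grid) / u_grid ** t))

checks = {t: (kappa_closed(t), kappa_grid(t)) for t in (0.2, 0.4, 0.5, 0.8)}
print(checks)
```

At $t = 1/2$ both evaluations give the value $\kappa(\frac{1}{2}) \approx 0.8047$ quoted with Proposition 10.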

We are now ready to give the main results of this section, which are bounds on the mutual information. We begin with a general upper bound in terms of the variance of the conditional density.

Proposition 8.

For any $0 < t \le 1$, the mutual information satisfies

$$I(X;Y) \le \kappa(t) \int_{S_Y} f(y)^{1-2t}\, \Big(\mathrm{Var}\big(f(y|X)\big)\Big)^{t}\, dy.$$

Proof. 

We use the following series of inequalities:

$$I(X;Y) \overset{(a)}{=} \int f(y)\, D\big(P_{X|Y=y} \,\|\, P_X\big)\, dy \overset{(b)}{\le} \int f(y) \log\left(1 + \chi^2\big(P_{X|Y=y} \,\|\, P_X\big)\right) dy \overset{(c)}{=} \int f(y) \log\left(1 + \frac{\mathrm{Var}(f(y|X))}{f^2(y)}\right) dy \overset{(d)}{\le} \kappa(t) \int f(y) \left(\frac{\mathrm{Var}(f(y|X))}{f^2(y)}\right)^{t} dy,$$

where (a) follows from the definition of mutual information, (b) follows from Inequality (25), and (c) follows from Bayes' rule, which allows us to write the chi-square in terms of the variance of the conditional density:

$$\chi^2\big(P_{X|Y=y} \,\|\, P_X\big) = \mathbb{E}\left[\left(\frac{f(y|X)}{f(y)} - 1\right)^2\right] = \frac{\mathrm{Var}\big(f(y|X)\big)}{f^2(y)}.$$

Inequality (d) follows from the nonnegativity of the variance and the definition of κ(t). □

Evaluating Proposition 8 with $t=1$ recovers the well-known inequality $I(X;Y) \le \chi^2(P_{X,Y} \,\|\, P_X P_Y)$. The next two results follow from the cases $0 < t < \frac{1}{2}$ and $t = \frac{1}{2}$, respectively.

Proposition 9.

For any $0<r<1$, the mutual information satisfies

$$I(X;Y) \le \kappa(t)\, \Big(e^{h_r(Y)}\, V_0(Y|X)\Big)^{t},$$

where $t = (1-r)/(2-r)$.

Proof. 

Starting with Proposition 8 and applying Hölder's inequality with conjugate exponents $1/(1-t)$ and $1/t$ leads to

$$I(X;Y) \le \kappa(t) \left(\int f^r(y)\, dy\right)^{1-t} \left(\int \mathrm{Var}\big(f(y|X)\big)\, dy\right)^{t} = \kappa(t)\, e^{t\, h_r(Y)}\, V_0^t(Y|X),$$

where we have used the fact that $r = (1-2t)/(1-t)$. □

Proposition 10.

For any $p<1<q$, the mutual information satisfies

$$I(X;Y) \le C(\lambda)\, \sqrt{\frac{\omega(S_Y)\, V_{np}^{\lambda}(Y|X)\, V_{nq}^{1-\lambda}(Y|X)}{q-p}},$$

where $\lambda = (q-1)/(q-p)$ and

$$C(\lambda) = \kappa\big(\tfrac{1}{2}\big)\, \sqrt{\frac{\pi\, \lambda^{-\lambda}\, (1-\lambda)^{-(1-\lambda)}}{\sin(\pi\lambda)}},$$

with $\kappa(\tfrac{1}{2}) \approx 0.80477$.

Proof. 

Evaluating Proposition 8 with $t = 1/2$ gives

$$I(X;Y) \le \kappa\big(\tfrac{1}{2}\big) \int_{S_Y} \sqrt{\mathrm{Var}\big(f(y|X)\big)}\; dy.$$

Evaluating Proposition 3 with $r = \frac{1}{2}$ leads to

$$\left(\int_{S_Y} \sqrt{\mathrm{Var}\big(f(y|X)\big)}\; dy\right)^{2} \le \omega(S_Y)\, \psi_{1/2}(p,q)\, V_{np}^{\lambda}(Y|X)\, V_{nq}^{1-\lambda}(Y|X).$$

Combining these inequalities with the expression for ψ1/2(p,q) given in Equation (8) completes the proof. □

The contribution of Propositions 9 and 10 is that they provide bounds on the mutual information in terms of quantities that can be easy to characterize. One application of these bounds is to establish conditions under which the mutual information corresponding to a sequence of random pairs $(X_k, Y_k)$ converges to zero. In this case, Proposition 9 provides a sufficient condition in terms of the Rényi entropy of $Y_k$ and the function $V_0(Y_k|X_k)$, while Proposition 10 provides a sufficient condition in terms of $V_s(Y_k|X_k)$ evaluated with two different values of $s$. These conditions are summarized in the following result.

Proposition 11.

Let $(X_k, Y_k)$ be a sequence of random pairs such that the conditional distribution of $Y_k$ given $X_k$ has a density on $\mathbb{R}^n$. The following are sufficient conditions under which the mutual information $I(X_k; Y_k)$ converges to zero as $k$ increases to infinity:

  1. There exists $0<r<1$ such that
     $$\lim_{k \to \infty} e^{h_r(Y_k)}\, V_0(Y_k|X_k) = 0.$$
  2. There exists $p<1<q$ such that
     $$\lim_{k \to \infty} V_{np}^{q-1}(Y_k|X_k)\, V_{nq}^{1-p}(Y_k|X_k) = 0.$$

4.3. Properties of the Bounds

The variance moment Vs(Y|X) has a number of interesting properties. The variance of the conditional density can be expressed in terms of an expectation with respect to two independent random variables X1 and X2 with the same distribution as X via the decomposition:

$$\mathrm{Var}\big(f(y|X)\big) = \mathbb{E}\big[f(y|X)\, f(y|X) - f(y|X_1)\, f(y|X_2)\big].$$

Consequently, by swapping the order of the integration and expectation, we obtain

$$V_s(Y|X) = \mathbb{E}\big[K_s(X, X) - K_s(X_1, X_2)\big], \quad (30)$$

where

$$K_s(x_1, x_2) = \int \|y\|^s\, f(y|x_1)\, f(y|x_2)\, dy.$$

The function Ks(x1,x2) is a positive definite kernel that does not depend on the distribution of X. For s=0, this kernel has been studied previously in the machine learning literature [33], where it is referred to as the expected likelihood kernel.

The variance of the conditional density also satisfies a data processing inequality. Suppose that $U \to X \to Y$ forms a Markov chain. Then, the square of the conditional density of $Y$ given $U$ can be expressed as

$$f_{Y|U}^2(y|u) = \mathbb{E}\big[f_{Y|X}(y|X_1)\, f_{Y|X}(y|X_2) \mid U = u\big],$$

where $(U, X_1, X_2) \sim P_U\, P_{X_1|U}\, P_{X_2|U}$. Combining this expression with Equation (30) yields

$$V_s(Y|U) = \mathbb{E}\big[K_s(X_1, X_2)\big] - \mathbb{E}\big[K_s(X_1', X_2')\big], \quad (31)$$

where $(X_1, X_2)$ are conditionally independent given $U$ as above, and $(X_1', X_2')$ are independent copies of $X$.

Finally, it is easy to verify that the function $V_s(Y|X)$ satisfies

$$V_s(aY \,|\, X) = |a|^{s-n}\, V_s(Y|X), \qquad \text{for all } a \ne 0.$$

Using this scaling relationship, we see that the sufficient conditions in Proposition 11 are invariant to scaling of Y.

4.4. Example with Additive Gaussian Noise

We now provide a specific example of our bounds on the mutual information. Let $X \in \mathbb{R}^n$ be a random vector with distribution $P_X$ and let $Y$ be the output of a Gaussian noise channel

$$Y = X + W, \quad (32)$$

where $W \sim \mathcal{N}(0, I_n)$ is independent of $X$. If $X$ has finite second moment, then the mutual information satisfies

$$I(X;Y) \le \frac{n}{2} \log\left(1 + \frac{1}{n} \mathbb{E}\big[\|X\|^2\big]\right), \quad (33)$$

where equality is attained if and only if X has zero-mean isotropic Gaussian distribution. This inequality follows straightforwardly from the fact that the Gaussian distribution maximizes differential entropy subject to a second moment constraint [11]. One of the limitations of this bound is that it can be loose when the second moment is dominated by events that have small probability. In fact, it is easy to construct examples for which X does not have a finite second moment, and yet I(X;Y) is arbitrarily close to zero.

Our results provide bounds on $I(X;Y)$ that are less sensitive to the effects of rare events. Let $\phi_n(x) = (2\pi)^{-n/2} \exp(-\|x\|^2/2)$ denote the density of the standard Gaussian distribution on $\mathbb{R}^n$. The product of the conditional densities can be factored according to

$$f(y|x_1)\, f(y|x_2) = \phi_{2n}\!\left(\begin{bmatrix} y - x_1 \\ y - x_2 \end{bmatrix}\right) = \phi_{2n}\!\left(\begin{bmatrix} \sqrt{2}\, y - (x_1+x_2)/\sqrt{2} \\ (x_1 - x_2)/\sqrt{2} \end{bmatrix}\right) = \phi_n\!\left(\sqrt{2}\, y - \frac{x_1+x_2}{\sqrt{2}}\right) \phi_n\!\left(\frac{x_1 - x_2}{\sqrt{2}}\right),$$

where the second step follows because ϕ2n(·) is invariant to orthogonal transformations. Integrating with respect to y leads to

$$K_s(x_1, x_2) = 2^{-\frac{n+s}{2}}\; \mathbb{E}\left[\left\|W + \frac{x_1 + x_2}{\sqrt{2}}\right\|^{s}\right] \phi_n\!\left(\frac{x_1 - x_2}{\sqrt{2}}\right),$$

where we recall that $W \sim \mathcal{N}(0, I_n)$. For the case $s=0$, we see that $K_0(x_1, x_2)$ is a Gaussian kernel, and thus

$$V_0(Y|X) = (4\pi)^{-\frac{n}{2}} \left(1 - \mathbb{E}\Big[e^{-\frac{1}{4}\|X_1 - X_2\|^2}\Big]\right). \quad (34)$$

A useful property of $V_0(Y|X)$ is that the conditions under which it converges to zero are weaker than the conditions needed for other measures of dependence. Observe that the expectation in Equation (34) is bounded uniformly with respect to $(X_1, X_2)$. In particular, for every $\epsilon > 0$ and $x \in \mathbb{R}^n$, we have

$$1 - \mathbb{E}\Big[e^{-\frac{1}{4}\|X_1 - X_2\|^2}\Big] \le \epsilon^2 + 2\, \mathbb{P}\big(\|X - x\| \ge \epsilon\big),$$

where we have used the inequality $1 - e^{-u} \le \min(u, 1)$ and the fact that $\mathbb{P}\big(\|X_1 - X_2\| \ge 2\epsilon\big) \le 2\, \mathbb{P}\big(\|X - x\| \ge \epsilon\big)$. Consequently, $V_0(Y|X)$ converges to zero whenever $X$ converges to a constant value $x$ in probability.
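Equation (34) can be checked by Monte Carlo; the sketch below (an added aside, not part of the original article) takes $n = 1$ and $X \sim \mathcal{N}(0, \sigma^2)$, for which the expectation in (34) has the closed form $(1 + \sigma^2)^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0
n_samples = 1_000_000

# two independent copies of X ~ N(0, sigma2)
x1 = rng.normal(0.0, np.sqrt(sigma2), n_samples)
x2 = rng.normal(0.0, np.sqrt(sigma2), n_samples)

# Monte Carlo estimate of V_0(Y|X) from Equation (34) with n = 1
v0_mc = (4 * np.pi) ** (-0.5) * (1 - np.mean(np.exp(-0.25 * (x1 - x2) ** 2)))

# closed form: E exp(-(X1 - X2)^2 / 4) = (1 + sigma2)^{-1/2} since X1 - X2 ~ N(0, 2*sigma2)
v0_exact = (4 * np.pi) ** (-0.5) * (1 - (1 + sigma2) ** (-0.5))
print(v0_mc, v0_exact)
```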

To study some further properties of these bounds, we now focus on the case where $X$ is a Gaussian scale mixture generated according to

$$X = A\, U, \qquad A \sim \mathcal{N}(0,1), \quad U \ge 0, \quad (35)$$

with $A$ and $U$ independent. In this case, the expectations with respect to the kernel $K_s(x_1, x_2)$ can be computed explicitly, leading to

$$V_s(Y \mid X) = \frac{\Gamma\big(\frac{1+s}{2}\big)}{2\pi}\, \mathbb{E}\left[(1 + 2U)^{\frac{s}{2}} - \frac{(1 + U_1)^{\frac{s}{2}}(1 + U_2)^{\frac{s}{2}}}{\big(1 + \frac{1}{2}(U_1 + U_2)\big)^{\frac{s+1}{2}}}\right], \qquad (36)$$

where (U1,U2) are independent copies of U. It can be shown that this expression depends primarily on the magnitude of U. This is not surprising given that X converges to a constant if and only if U converges to zero.
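When $U$ is supported on finitely many points, the expectation in Equation (36) is a finite sum, and it can be cross-checked against direct numerical integration of $\int |y|^s\,\mathrm{Var}(f(y|X))\,dy$. The sketch below assumes our reading of Equations (35) and (36) (scalar channel, $X = A\sqrt{U}$); function names are ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def vs_closed_form(s, u, w):
    """Equation (36) for discrete U with support u and probabilities w."""
    c = gamma((1 + s) / 2) / (2 * np.pi)
    diag = np.sum(w * (1 + 2 * u) ** (s / 2))
    U1, U2 = np.meshgrid(u, u)
    cross = np.sum(np.outer(w, w) * (1 + U1) ** (s / 2) * (1 + U2) ** (s / 2)
                   / (1 + 0.5 * (U1 + U2)) ** ((s + 1) / 2))
    return c * (diag - cross)

def vs_numeric(s, u, w):
    """Direct evaluation of int |y|^s Var(f(y|X)) dy for the same mixture,
    using E[f(y|X)^2] = (2 sqrt(pi))^-1 E_U[N(y; 0, U + 1/2)] and
    E[f(y|X)] = E_U[N(y; 0, 1 + U)]."""
    def npdf(y, v):
        return np.exp(-y ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    def integrand(y):
        second = np.sum(w * npdf(y, u + 0.5)) / (2.0 * np.sqrt(np.pi))
        first = np.sum(w * npdf(y, 1.0 + u))
        return np.abs(y) ** s * (second - first ** 2)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 2.0 * val  # the integrand is even in y
```

The two evaluations agree to quadrature precision, which is a useful sanity check on the closed form.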

Our results can also be used to bound the mutual information $I(U;Y)$ by noting that $U \to X \to Y$ forms a Markov chain and taking advantage of the characterization provided in Equation (31). Letting $X_1 = A_1\sqrt{U}$ and $X_2 = A_2\sqrt{U}$ with $(A_1, A_2, U)$ mutually independent leads to

$$V_s(Y \mid U) = \frac{\Gamma\big(\frac{1+s}{2}\big)}{2\pi}\, \mathbb{E}\left[(1 + U)^{\frac{s-1}{2}} - \frac{(1 + U_1)^{\frac{s}{2}}(1 + U_2)^{\frac{s}{2}}}{\big(1 + \frac{1}{2}(U_1 + U_2)\big)^{\frac{s+1}{2}}}\right]. \qquad (37)$$

In this case, Vs(Y|U) is a measure of the variation in U. To study its behavior, we consider the simple upper bound

$$V_s(Y \mid U) \le \frac{\Gamma\big(\frac{1+s}{2}\big)}{2\pi}\, \mathbb{P}(U_1 \ne U_2)\, \mathbb{E}\Big[(1 + U)^{\frac{s-1}{2}}\Big], \qquad (38)$$

which follows from noting that the term inside the expectation in Equation (37) is zero on the event $U_1 = U_2$. This bound shows that if $s \le 1$, then $V_s(Y|U)$ is bounded uniformly with respect to the distribution of $U$, and if $s > 1$, then $V_s(Y|U)$ is bounded in terms of the $\frac{s-1}{2}$-th moment of $U$.
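For a discrete $U$, both Equation (37) and the bound in Equation (38) reduce to finite sums, which makes the comparison easy to sketch numerically (assuming our reconstruction of the two displays above; names are ours):

```python
import numpy as np
from scipy.special import gamma

def vs_given_u(s, u, w):
    """Equation (37) for discrete U with support u and probabilities w."""
    c = gamma((1 + s) / 2) / (2 * np.pi)
    diag = np.sum(w * (1 + u) ** ((s - 1) / 2))
    U1, U2 = np.meshgrid(u, u)
    cross = np.sum(np.outer(w, w) * (1 + U1) ** (s / 2) * (1 + U2) ** (s / 2)
                   / (1 + 0.5 * (U1 + U2)) ** ((s + 1) / 2))
    return c * (diag - cross)

def vs_given_u_bound(s, u, w):
    """Equation (38): the bracket in (37) vanishes on {U1 = U2}, leaving
    P(U1 != U2) times the moment term."""
    c = gamma((1 + s) / 2) / (2 * np.pi)
    return c * (1.0 - np.sum(w ** 2)) * np.sum(w * (1 + u) ** ((s - 1) / 2))
```

With a two-point distribution for $U$, the exact value from Equation (37) is nonnegative and sits below the bound of Equation (38), as expected.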

In conjunction with Propositions 9 and 10, the function $V_s(Y|U)$ provides bounds on the mutual information $I(U;Y)$ that can be expressed in terms of simple expectations involving two independent copies of $U$. Figure 4 illustrates the upper bound in Proposition 10 for the case where $U$ is a discrete random variable supported on two points, and $X$ and $Y$ are generated according to Equations (32) and (35). This example shows that there exist sequences of distributions for which our upper bounds on the mutual information converge to zero while the chi-squared divergence between $P_{XY}$ and $P_X P_Y$ is bounded away from zero.

Figure 4. Bounds on the mutual information $I(U;Y)$ in nats when $U \sim (1-\epsilon)\delta_1 + \epsilon\,\delta_{a(\epsilon)}$, with $a(\epsilon) = 1 + 1/\epsilon$, and $X$ and $Y$ are generated according to Equations (32) and (35). The bound from Proposition 10 is evaluated with $p = 0$ and $q = 2$.

5. Conclusions

This paper provides bounds on Rényi entropy and mutual information that are based on a relatively simple two-moment inequality. Extensions to inequalities with more moments are worth exploring. Another potential application is to provide a refined characterization of the “all-or-nothing” behavior seen in a sparse linear regression problem [34,35], where the current methods of analysis depend on a complicated conditional second moment method.

Appendix A. The Gamma and Beta Functions

This section reviews some properties of the gamma and beta functions. For $x > 0$, the gamma function is defined according to $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\, dt$. Binet's formula for the logarithm of the gamma function ([25], Theorem 1.6.3) gives

$$\log\Gamma(x) = \Big(x - \frac{1}{2}\Big)\log x - x + \frac{1}{2}\log(2\pi) + \theta(x), \qquad (A1)$$

where the remainder term $\theta(x)$ is convex and nonincreasing with $\lim_{x \to 0} \theta(x) = \infty$ and $\lim_{x \to \infty} \theta(x) = 0$. Euler's reflection formula ([25], Theorem 1.2.1) gives

$$\Gamma(x)\Gamma(1-x) = \frac{\pi}{\sin(\pi x)}, \qquad 0 < x < 1. \qquad (A2)$$

For x,y>0, the beta function can be expressed as follows

$$B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \int_0^1 t^{x-1}(1-t)^{y-1}\, dt = \int_0^\infty \frac{u^{x-1}}{(1+u)^{x+y}}\, du, \qquad (A3)$$

where the second integral expression follows from the change of variables $t \mapsto u/(1+u)$. Recall that $\tilde{B}(x,y) = B(x,y)\, (x+y)^{x+y} x^{-x} y^{-y}$. Using Equation (A1) leads to

$$\log\left(\tilde{B}(x,y)\sqrt{\frac{xy}{2\pi(x+y)}}\right) = \theta(x) + \theta(y) - \theta(x+y). \qquad (A4)$$

It can also be shown that ([36], Equation (2), p. 2)

$$\tilde{B}(x,y) \ge \frac{x+y}{xy}. \qquad (A5)$$
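Both (A4) and (A5) are easy to verify numerically from log-gamma evaluations. The sketch below defines $\theta$ by solving Equation (A1); the helper names are ours:

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_b_tilde(x, y):
    """log of B~(x, y) = B(x, y) (x + y)^(x + y) x^-x y^-y."""
    return betaln(x, y) + (x + y) * np.log(x + y) - x * np.log(x) - y * np.log(y)

def theta(x):
    """Binet remainder, solved from Equation (A1)."""
    return gammaln(x) - (x - 0.5) * np.log(x) + x - 0.5 * np.log(2 * np.pi)

# Example evaluation at (x, y) = (0.3, 2.7):
identity_lhs = log_b_tilde(0.3, 2.7) + 0.5 * np.log(0.3 * 2.7 / (2 * np.pi * 3.0))
identity_rhs = theta(0.3) + theta(2.7) - theta(3.0)
```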

Appendix B. Details for Rényi Entropy Examples

This appendix studies properties of the two-moment inequalities for Rényi entropy described in Section 3.

Appendix B.1. Log-Normal Distribution

Let X be a log-normal random variable with parameters (μ,σ2) and consider the parametrization

$$p = \frac{1-r}{r} - (1-\lambda)\sqrt{\frac{(1-r)u}{r\lambda(1-\lambda)}}, \qquad q = \frac{1-r}{r} + \lambda\sqrt{\frac{(1-r)u}{r\lambda(1-\lambda)}},$$

where $\lambda \in (0,1)$ and $u \in (0,\infty)$. Then, we have

$$\psi_r(p,q) = \tilde{B}\Big(\frac{r\lambda}{1-r}, \frac{r(1-\lambda)}{1-r}\Big)\sqrt{\frac{r\lambda(1-\lambda)}{(1-r)u}}, \qquad L_r(X;p,q) = \mu + \frac{1}{2}\cdot\frac{1-r}{r}\,\sigma^2 + \frac{1}{2} u\sigma^2.$$

Combining these expressions with Equation (A4) leads to

$$\Delta_r(X;p,q) = \theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - \theta\Big(\frac{r}{1-r}\Big) + \frac{1}{2}u\sigma^2 - \frac{1}{2}\log(u\sigma^2) - \frac{1}{2}\log\big(r^{\frac{1}{r-1}}\big). \qquad (A6)$$

We now characterize the minimum with respect to the parameters $(\lambda, u)$. Note that the mapping $\lambda \mapsto \theta\big(\frac{r\lambda}{1-r}\big) + \theta\big(\frac{r(1-\lambda)}{1-r}\big)$ is convex and symmetric about the point $\lambda = 1/2$. Therefore, the minimum with respect to $\lambda$ is attained at $\lambda = 1/2$. Meanwhile, the mapping $u \mapsto u\sigma^2 - \log(u\sigma^2)$ is convex and attains its minimum at $u = 1/\sigma^2$. Evaluating Equation (A6) with these values, we see that the optimal two-moment inequality can be expressed as

$$\Delta_r(X) = 2\theta\Big(\frac{r}{2(1-r)}\Big) - \theta\Big(\frac{r}{1-r}\Big) + \frac{1}{2}\log\big(e\, r^{\frac{1}{1-r}}\big).$$

By Equation (A4), this expression is equivalent to Equation (A1). Moreover, the fact that $\Delta_r(X)$ decreases to zero as $r$ increases to one follows from the fact that $\theta(x)$ decreases to zero as $x$ increases to infinity.
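Using the Binet remainder $\theta$ from Equation (A1), the optimal gap $\Delta_r(X)$ is a one-line computation, and its positivity and decay as $r \to 1$ can be observed numerically. This is a sketch under our reconstruction of the display above; names are ours:

```python
import numpy as np
from scipy.special import gammaln

def theta(x):
    """Binet remainder from Equation (A1)."""
    return gammaln(x) - (x - 0.5) * np.log(x) + x - 0.5 * np.log(2 * np.pi)

def lognormal_gap(r):
    """Delta_r(X) for log-normal X; the value does not depend on (mu, sigma)."""
    a = r / (1 - r)
    # (1/2) log(e * r^(1/(1-r))) = (1/2) * (1 + log(r) / (1 - r))
    return 2 * theta(a / 2) - theta(a) + 0.5 * (1.0 + np.log(r) / (1 - r))
```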

Next, we express the gap in terms of the pair (p,q). Comparing the difference between Δr(X;p,q) and Δr(X) leads to

$$\Delta_r(X;p,q) = \Delta_r(X) + \frac{1}{2}\varphi\Big(\frac{r\lambda(1-\lambda)}{1-r}(q-p)^2\sigma^2\Big) + \theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - 2\theta\Big(\frac{r}{2(1-r)}\Big),$$

where $\varphi(x) = x - \log(x) - 1$. In particular, if $p = 0$, then we obtain the simplified expression

$$\Delta_r(X;0,q) = \Delta_r(X) + \frac{1}{2}\varphi\Big(\Big(q - \frac{1-r}{r}\Big)\sigma^2\Big) + \theta\Big(\frac{r}{1-r} - \frac{1}{q}\Big) + \theta\Big(\frac{1}{q}\Big) - 2\theta\Big(\frac{r}{2(1-r)}\Big).$$

This characterization shows that the gap of the optimal one-moment inequality $\tilde{\Delta}_r(X)$ increases to infinity in the limit as either $\sigma^2 \to 0$ or $\sigma^2 \to \infty$.

Appendix B.2. Multivariate Gaussian Distribution

Let $Y \sim \mathcal{N}(0, I_n)$ be an $n$-dimensional Gaussian vector and consider the parametrization

$$p = \frac{1-r}{r} - \frac{1-\lambda}{r}\sqrt{\frac{2(1-r)z}{\lambda(1-\lambda)n}}, \qquad q = \frac{1-r}{r} + \frac{\lambda}{r}\sqrt{\frac{2(1-r)z}{\lambda(1-\lambda)n}},$$

where $\lambda \in (0,1)$ and $z \in (0,\infty)$. We can write

$$\log\omega(S_Y) = \frac{n}{2}\log\pi - \log\frac{n}{2} - \log\Gamma\Big(\frac{n}{2}\Big), \qquad \psi_r(p,q) = \tilde{B}\Big(\frac{r\lambda}{1-r}, \frac{r(1-\lambda)}{1-r}\Big)\sqrt{\frac{r\lambda(1-\lambda)}{1-r}\cdot\frac{nr}{2z}}.$$

Furthermore, if

$$(1-\lambda)\sqrt{\frac{2(1-r)z}{\lambda(1-\lambda)n}} < 1, \qquad (A7)$$

then $L_r(Y;p,q)$ is finite and is given by

$$L_r(Y;p,q) = Q_{r,n}(\lambda,z) + \frac{n}{2}\log 2 + \frac{r}{1-r}\Big(\log\Gamma\Big(\frac{n}{2r}\Big) - \log\Gamma\Big(\frac{n}{2}\Big)\Big),$$

where

$$Q_{r,n}(\lambda,z) = \frac{r\lambda}{1-r}\log\Gamma\Big(\frac{n}{2r} - \frac{1-\lambda}{r}\sqrt{\frac{(1-r)nz}{2\lambda(1-\lambda)}}\Big) + \frac{r(1-\lambda)}{1-r}\log\Gamma\Big(\frac{n}{2r} + \frac{\lambda}{r}\sqrt{\frac{(1-r)nz}{2\lambda(1-\lambda)}}\Big) - \frac{r}{1-r}\log\Gamma\Big(\frac{n}{2r}\Big). \qquad (A8)$$

Here, we note that the scaling in Equation (21) corresponds to $\lambda = 1/2$ and $z = n/(n+1)$, and thus condition (A7) is satisfied for all $n \ge 1$. Combining the above expressions and then using Equations (A1) and (A4) leads to

$$\Delta_r(Y;p,q) = \theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - \theta\Big(\frac{r}{1-r}\Big) + Q_{r,n}(\lambda,z) - \frac{1}{2}\log z - \frac{1}{2}\log\big(r^{\frac{1}{r-1}}\big) + \frac{r}{1-r}\theta\Big(\frac{n}{2r}\Big) - \frac{1}{1-r}\theta\Big(\frac{n}{2}\Big). \qquad (A9)$$

Next, we study some properties of Qr,n(λ,z). By Equation (A1), the logarithm of the gamma function can be expressed as the sum of convex functions:

$$\log\Gamma(x) = \varphi(x) + \frac{1}{2}\log\frac{1}{x} + \frac{1}{2}\log(2\pi) - 1 + \theta(x),$$

where $\varphi(x) = x\log x + 1 - x$. Starting with the definition of $Q_{r,n}(\lambda,z)$ and then using Jensen's inequality yields

$$Q_{r,n}(\lambda,z) \ge \frac{r\lambda}{1-r}\varphi\Big(\frac{n}{2r} - \frac{1-\lambda}{r}\sqrt{\frac{(1-r)nz}{2\lambda(1-\lambda)}}\Big) + \frac{r(1-\lambda)}{1-r}\varphi\Big(\frac{n}{2r} + \frac{\lambda}{r}\sqrt{\frac{(1-r)nz}{2\lambda(1-\lambda)}}\Big) - \frac{r}{1-r}\varphi\Big(\frac{n}{2r}\Big) = \frac{\lambda}{a}\varphi\Big(1 - \sqrt{\tfrac{1-\lambda}{\lambda}az}\Big) + \frac{1-\lambda}{a}\varphi\Big(1 + \sqrt{\tfrac{\lambda}{1-\lambda}az}\Big),$$

where $a = 2(1-r)/n$. Using the inequality $\varphi(x) \ge \frac{3}{2}(x-1)^2/(x+2)$ leads to

$$Q_{r,n}(\lambda,z) \ge \frac{z}{2}\left[(1-\lambda)\Big(1 - \sqrt{\tfrac{1-\lambda}{\lambda}bz}\Big)^{-1} + \lambda\Big(1 + \sqrt{\tfrac{\lambda}{1-\lambda}bz}\Big)^{-1}\right] \ge \frac{z}{2}\Big(1 + \sqrt{\tfrac{\lambda}{1-\lambda}bz}\Big)^{-1}, \qquad (A10)$$

where $b = 2(1-r)/(9n)$.

Observe that the right-hand side of Inequality (A10) converges to $z/2$ as $n$ increases to infinity. It turns out this limiting behavior is tight. Using Equation (A1), it is straightforward to show that $Q_{r,n}(\lambda,z)$ converges pointwise to $z/2$ as $n$ increases to infinity, that is,

$$\lim_{n\to\infty} Q_{r,n}(\lambda,z) = \frac{1}{2}z, \qquad (A11)$$

for any fixed pair $(\lambda,z) \in (0,1)\times(0,\infty)$.
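The convergence in Equation (A11) is easy to observe numerically from the definition in Equation (A8), using log-gamma for stability. This sketch assumes our reconstruction of (A8); names are ours:

```python
import numpy as np
from scipy.special import gammaln

def q_rn(r, n, lam, z):
    """Q_{r,n}(lambda, z) from Equation (A8)."""
    t = np.sqrt((1 - r) * n * z / (2 * lam * (1 - lam)))
    a1 = n / (2 * r) - (1 - lam) / r * t
    a2 = n / (2 * r) + lam / r * t
    c = r / (1 - r)
    return c * (lam * gammaln(a1) + (1 - lam) * gammaln(a2) - gammaln(n / (2 * r)))

# Q_{r,n}(1/2, z) approaches z/2 as n grows, for fixed r and z.
errors = [abs(q_rn(0.5, n, 0.5, 1.0) - 0.5) for n in (10 ** 2, 10 ** 4, 10 ** 6)]
```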

Appendix B.3. Proof of Proposition 6

Let $D = (0,1)\times(0,\infty)$. For fixed $r \in (0,1)$, we use $Q_n(\lambda,z)$ to denote the function $Q_{r,n}(\lambda,z)$ defined in Equation (A8), and we use $G_n(\lambda,z)$ to denote the right-hand side of Equation (A9). These functions are defined to be equal to positive infinity for any pair $(\lambda,z) \in D$ such that condition (A7) does not hold.

Note that the terms $\theta(n/(2r))$ and $\theta(n/2)$ converge to zero in the limit as $n$ increases to infinity. In conjunction with Equation (A11), this shows that $G_n(\lambda,z)$ converges pointwise to a limit $G(\lambda,z)$ given by

$$G(\lambda,z) = \theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - \theta\Big(\frac{r}{1-r}\Big) + \frac{1}{2}z - \frac{1}{2}\log z - \frac{1}{2}\log\big(r^{\frac{1}{r-1}}\big).$$

At this point, the correspondence with the log-normal distribution can be seen from the fact that $G(\lambda,z)$ is equal to the right-hand side of Equation (A6) evaluated with $u\sigma^2 = z$.

To show that the gap corresponding to the log-normal distribution provides an upper bound on the limit, we use

$$\limsup_{n\to\infty} \Delta_r(Y) = \limsup_{n\to\infty}\, \inf_{(\lambda,z)\in D} G_n(\lambda,z) \le \inf_{(\lambda,z)\in D}\, \limsup_{n\to\infty} G_n(\lambda,z) = \inf_{(\lambda,z)\in D} G(\lambda,z) = \Delta_r(X). \qquad (A12)$$

Here, the last equality follows from the analysis in Appendix B.1, which shows that the minimum of $G(\lambda,z)$ is attained at $\lambda = 1/2$ and $z = 1$.

To prove the lower bound requires a bit more work. Fix any $\epsilon \in (0,1)$ and let $D_\epsilon = (0, 1-\epsilon]\times(0,\infty)$. Using the lower bound on $Q_n(\lambda,z)$ given in Inequality (A10), it can be verified that

$$\liminf_{n\to\infty}\, \inf_{(\lambda,z)\in D_\epsilon} \Big\{ Q_n(\lambda,z) - \frac{1}{2}\log z \Big\} \ge \frac{1}{2}.$$

Consequently, we have

$$\liminf_{n\to\infty}\, \inf_{(\lambda,z)\in D_\epsilon} G_n(\lambda,z) = \inf_{(\lambda,z)\in D_\epsilon} G(\lambda,z) \ge \Delta_r(X). \qquad (A13)$$

To complete the proof, we will show that, for any sequence $\lambda_n$ that converges to one as $n$ increases to infinity, we have

$$\liminf_{n\to\infty}\, \inf_{z\in(0,\infty)} G_n(\lambda_n, z) = \infty. \qquad (A14)$$

To see why this is the case, note that by Equation (A4) and Inequality (A5),

$$\theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - \theta\Big(\frac{r}{1-r}\Big) \ge \frac{1}{2}\log\Big(\frac{1-r}{2\pi r\lambda(1-\lambda)}\Big).$$

Therefore, we can write

$$G_n(\lambda,z) \ge Q_n(\lambda,z) - \frac{1}{2}\log\big(\lambda(1-\lambda)z\big) + c_n, \qquad (A15)$$

where $c_n$ is bounded uniformly for all $n$. Making the substitution $u = \lambda(1-\lambda)z$, we obtain

$$\inf_{z>0} G_n(\lambda,z) \ge \inf_{u>0}\Big\{ Q_n\Big(\lambda, \frac{u}{\lambda(1-\lambda)}\Big) - \frac{1}{2}\log u \Big\} + c_n.$$

Next, let $b_n = 2(1-r)/(9n)$. The lower bound in Inequality (A10) leads to

$$\inf_{u>0}\Big\{ Q_n\Big(\lambda, \frac{u}{\lambda(1-\lambda)}\Big) - \frac{1}{2}\log u \Big\} \ge \inf_{u>0}\Big\{ \frac{u}{2\lambda}\big(1 - \lambda + \sqrt{b_n u}\big)^{-1} - \frac{1}{2}\log u \Big\}. \qquad (A16)$$

The limiting behavior in Equation (A14) can now be seen as a consequence of Inequality (A15) and the fact that, for any sequence $\lambda_n$ converging to one, the right-hand side of Inequality (A16) increases without bound as $n$ increases. Combining Inequalities (A12) and (A13) with Equation (A14) establishes that the large-$n$ limit of $\Delta_r(Y)$ exists and is equal to $\Delta_r(X)$. This concludes the proof of Proposition 6.

Appendix B.4. Proof of Inequality (24)

Given any $\lambda \in (0,1)$ and $u \in (0,\infty)$, let

$$p(r) = \frac{1-r}{r} - \sqrt{\frac{1-r}{r}\cdot\frac{1-\lambda}{\lambda}\, u}, \qquad q(r) = \frac{1-r}{r} + \sqrt{\frac{1-r}{r}\cdot\frac{\lambda}{1-\lambda}\, u}.$$

We need the following results, which characterize the terms in Proposition 4 in the limit as r increases to one.

Lemma A1.

The function ψr(p(r),q(r)) satisfies

$$\lim_{r\to 1} \psi_r(p(r), q(r)) = \sqrt{\frac{2\pi}{u}}.$$

Proof. 

Starting with Equation (A4), we can write

$$\psi_r(p,q) = \frac{1}{q-p}\sqrt{\frac{2\pi(1-r)}{r\lambda(1-\lambda)}}\exp\Big(\theta\Big(\frac{r\lambda}{1-r}\Big) + \theta\Big(\frac{r(1-\lambda)}{1-r}\Big) - \theta\Big(\frac{r}{1-r}\Big)\Big).$$

As $r$ converges to one, the terms in the exponent converge to zero. Noting that $q(r) - p(r) = \sqrt{u(1-r)/(r\lambda(1-\lambda))}$ completes the proof. □

Lemma A2.

If $X$ is a random variable such that $s \mapsto \mathbb{E}|X|^s$ is finite in a neighborhood of zero, then $\mathbb{E}[\log|X|]$ and $\mathrm{Var}(\log|X|)$ are finite, and

$$\lim_{r\to 1} L_r(X; p(r), q(r)) = \mathbb{E}[\log|X|] + \frac{u}{2}\mathrm{Var}(\log|X|).$$

Proof. 

Let $\Lambda(s) = \log\mathbb{E}|X|^s$. The assumption that $\mathbb{E}|X|^s$ is finite in a neighborhood of zero means that $\mathbb{E}[(\log|X|)^m]$ is finite for all positive integers $m$, and thus $\Lambda(s)$ is real analytic in a neighborhood of zero. Hence, there exist constants $\delta > 0$ and $C < \infty$, depending on the distribution of $X$, such that

$$\big|\Lambda(s) - as - bs^2\big| \le C|s|^3, \qquad \text{for all } |s| \le \delta,$$

where $a = \mathbb{E}[\log|X|]$ and $b = \frac{1}{2}\mathrm{Var}(\log|X|)$. Consequently, for all $r$ such that $-\delta < p(r) < (1-r)/r < q(r) < \delta$, it follows that

$$\Big| L_r(X; p(r), q(r)) - a - \Big(\frac{1-r}{r} + u\Big)b \Big| \le C\, \frac{r}{1-r}\Big(\lambda|p(r)|^3 + (1-\lambda)|q(r)|^3\Big).$$

Taking the limit as r increases to one completes the proof. □

We are now ready to prove Inequality (24). Combining Proposition 4 with Lemma A1 and Lemma A2 yields

$$\limsup_{r\to 1} h_r(X) \le \frac{1}{2}\log\frac{2\pi}{u} + \mathbb{E}[\log|X|] + \frac{u}{2}\mathrm{Var}(\log|X|).$$

The stated inequality follows from evaluating the right-hand side with $u = 1/\mathrm{Var}(\log|X|)$, recalling that $h(X)$ corresponds to the limit of $h_r(X)$ as $r$ increases to one.
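With this choice of $u$, the resulting bound is $h(X) \le \frac{1}{2}\log\big(2\pi e\,\mathrm{Var}(\log|X|)\big) + \mathbb{E}[\log|X|]$, which holds with equality for a log-normal $X$. That gives a convenient numerical check by quadrature (an illustrative sketch; names are ours):

```python
import numpy as np
from scipy.integrate import quad

MU, SIGMA = 0.2, 0.8

def neg_f_log_f(x):
    """-f(x) log f(x) for the log-normal(MU, SIGMA^2) density f."""
    logf = (-(np.log(x) - MU) ** 2 / (2 * SIGMA ** 2)
            - np.log(x * SIGMA * np.sqrt(2 * np.pi)))
    return -np.exp(logf) * logf

h_numeric, _ = quad(neg_f_log_f, 1e-12, np.inf, limit=500)
# Bound evaluated with E[log X] = MU and Var(log X) = SIGMA^2.
h_bound = 0.5 * np.log(2 * np.pi * np.e * SIGMA ** 2) + MU
```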

Appendix C. Proof of Proposition 7

The function $\kappa : (0,1] \to \mathbb{R}_+$ can be expressed as

$$\kappa(t) = \sup_{u\in(0,\infty)} \rho_t(u), \qquad (A17)$$

where $\rho_t(u) = \log(1+u)/u^t$. For $t = 1$, the bound $\log(1+u) \le u$ implies that $\rho_1(u) \le 1$. Noting that $\lim_{u\to 0}\rho_1(u) = 1$, we conclude that $\kappa(1) = 1$.

Next, we consider the case $t \in (0,1)$. The function $\rho_t$ is continuously differentiable on $(0,\infty)$ with

$$\mathrm{sgn}\big(\rho_t'(u)\big) = \mathrm{sgn}\big(u - t(1+u)\log(1+u)\big). \qquad (A18)$$

Under the assumption $t \in (0,1)$, we see that $\rho_t(u)$ is increasing for all $u$ sufficiently close to zero and decreasing for all $u$ sufficiently large, and thus the supremum is attained at a stationary point of $\rho_t$ on $(0,\infty)$. Making the substitution $w = \log(1+u) - 1/t$ leads to

$$\rho_t'(u) = 0 \quad\Longleftrightarrow\quad w e^w = -\frac{1}{t} e^{-\frac{1}{t}}.$$

For $t \in (0,1)$, it follows that $-\frac{1}{t}e^{-\frac{1}{t}} \in (-e^{-1}, 0)$, and thus $\rho_t'(u)$ has a unique root on $(0,\infty)$, which can be expressed as

$$u_t^* = \exp\Big(W\Big(-\frac{1}{t}e^{-\frac{1}{t}}\Big) + \frac{1}{t}\Big) - 1,$$

where Lambert's function $W(z)$ denotes the solution to the equation $z = we^w$ on the interval $[-1, \infty)$.

Lemma A3.

The function $g(t) = t\,\kappa(t)$ is strictly increasing on $(0,1]$ with $\lim_{t\to 0} g(t) = 1/e$ and $g(1) = 1$.

Proof. 

The fact that g(1)=1 follows from κ(1)=1. By the envelope theorem [37], the derivative of g(t) can be expressed as

$$g'(t) = \frac{d}{dt}\big[t\,\rho_t(u)\big]\Big|_{u=u_t^*} = \frac{\log(1+u_t^*)}{(u_t^*)^t} - \frac{t\log(u_t^*)\log(1+u_t^*)}{(u_t^*)^t}.$$

In view of Equation (A18), it follows that ρt(ut*)=0 can be expressed equivalently as

$$\frac{u_t^*}{(1+u_t^*)\log(1+u_t^*)} = t, \qquad (A19)$$

and thus

$$\mathrm{sgn}\big(g'(t)\big) = \mathrm{sgn}\left(1 - \frac{u_t^*\log u_t^*}{(1+u_t^*)\log(1+u_t^*)}\right). \qquad (A20)$$

Noting that $u\log u < (1+u)\log(1+u)$ for all $u \in (0,\infty)$, it follows that $g'(t)$ is strictly positive, and thus $g(t)$ is strictly increasing.

To prove the small t limit, we use Equation (A19) to write

$$\log g(t) = \log\frac{u_t^*}{1+u_t^*} - \frac{u_t^*\log u_t^*}{(1+u_t^*)\log(1+u_t^*)}. \qquad (A21)$$

Now, as $t$ decreases to zero, Equation (A19) shows that $u_t^*$ increases to infinity. By Equation (A21), it then follows that $\log g(t)$ converges to $-1$, which proves the desired limit. □
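The closed form for $u_t^*$ via the Lambert $W$ function can be cross-checked against direct numerical maximization of $\rho_t$, and the monotonicity of $g(t) = t\,\kappa(t)$ from Lemma A3 can be observed numerically. A sketch (names are ours; SciPy's `lambertw` returns the principal branch by default, which is the branch with $w \ge -1$ used above):

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

def u_star(t):
    """Maximizer of rho_t(u) = log(1+u)/u^t on (0, inf) for t in (0, 1),
    via the principal branch of the Lambert W function."""
    w = lambertw(-np.exp(-1.0 / t) / t).real
    return np.exp(w + 1.0 / t) - 1.0

def kappa(t):
    """kappa(t) = sup_u log(1+u)/u^t; see Equation (A17)."""
    if t == 1.0:
        return 1.0
    u = u_star(t)
    return np.log1p(u) / u ** t

def kappa_numeric(t):
    """Direct numerical maximization, for comparison."""
    res = minimize_scalar(lambda u: -np.log1p(u) / u ** t,
                          bounds=(1e-9, 1e6), method='bounded')
    return -res.fun
```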

Funding

This research was supported in part by the National Science Foundation under Grant 1750362 and in part by the Laboratory for Analytic Sciences (LAS). Any opinions, findings, conclusions, and recommendations expressed in this material are those of the author and do not necessarily reflect the views of the sponsors.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Dembo A., Cover T.M., Thomas J.A. Information Theoretic Inequalities. IEEE Trans. Inf. Theory. 1991;37:1501–1518. doi: 10.1109/18.104312.
  • 2. Carlson F. Une inégalité. Ark. Mat. Astron. Fys. 1934;25:1–5.
  • 3. Levin V.I. Exact constants in inequalities of the Carlson type. Doklady Akad. Nauk. SSSR (N. S.) 1948;59:635–638.
  • 4. Larsson L., Maligranda L., Persson L.E., Pečarić J. Multiplicative Inequalities of Carlson Type and Interpolation. World Scientific Publishing Company; Singapore: 2006.
  • 5. Barza S., Burenkov V., Pečarić J.E., Persson L.E. Sharp multidimensional multiplicative inequalities for weighted Lp spaces with homogeneous weights. Math. Inequalities Appl. 1998;1:53–67. doi: 10.7153/mia-01-04.
  • 6. Reeves G. Two-Moment Inequalities for Rényi Entropy and Mutual Information; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Aachen, Germany, 25–30 June 2017; pp. 664–668.
  • 7. Gray R.M. Entropy and Information Theory. Springer-Verlag; Berlin/Heidelberg, Germany: 2013.
  • 8. van Erven T., Harremoës P. Rényi Divergence and Kullback–Leibler Divergence. IEEE Trans. Inf. Theory. 2014;60:3797–3820. doi: 10.1109/TIT.2014.2320500.
  • 9. Atar R., Chowdhary K., Dupuis P. Robust Bounds on Risk-Sensitive Functionals via Rényi Divergence. SIAM/ASA J. Uncertain. Quantif. 2015;3:18–33. doi: 10.1137/130939730.
  • 10. Rosenkrantz R., editor. E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics. Springer; Berlin/Heidelberg, Germany: 1989.
  • 11. Cover T.M., Thomas J.A. Elements of Information Theory. 2nd ed. Wiley-Interscience; Hoboken, NJ, USA: 2006.
  • 12. Lutwak E., Yang D., Zhang G. Moment-entropy inequalities. Ann. Probab. 2004;32:757–774. doi: 10.1214/aop/1079021463.
  • 13. Lutwak E., Yang D., Zhang G. Moment-Entropy Inequalities for a Random Vector. IEEE Trans. Inf. Theory. 2007;53:1603–1607. doi: 10.1109/TIT.2007.892780.
  • 14. Lutwak E., Lv S., Yang D., Zhang G. Affine Moments of a Random Vector. IEEE Trans. Inf. Theory. 2013;59:5592–5599. doi: 10.1109/TIT.2013.2258457.
  • 15. Costa J.A., Hero A.O., Vignat C. A Characterization of the Multivariate Distributions Maximizing Rényi Entropy; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Lausanne, Switzerland, 30 June–5 July 2002.
  • 16. Costa J.A., Hero A.O., Vignat C. A Geometric Characterization of Maximum Rényi Entropy Distributions; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Seattle, WA, USA, 9–14 July 2006; pp. 1822–1826.
  • 17. Johnson O., Vignat C. Some results concerning maximum Rényi entropy distributions. Ann. de l'Institut Henri Poincaré (B) Probab. Stat. 2007;43:339–351. doi: 10.1016/j.anihpb.2006.05.001.
  • 18. Nguyen V.H. A simple proof of the Moment-Entropy inequalities. Adv. Appl. Math. 2019;108:31–44. doi: 10.1016/j.aam.2019.03.006.
  • 19. Barron A., Yang Y. Information-theoretic determination of minimax rates of convergence. Ann. Stat. 1999;27:1564–1599. doi: 10.1214/aos/1017939142.
  • 20. Wu Y., Xu J. Statistical problems with planted structures: Information-theoretical and computational limits. In: Rodrigues M.R.D., Eldar Y.C., editors. Information-Theoretic Methods in Data Science. Cambridge University Press; Cambridge, UK: 2020. Chapter 13.
  • 21. Reeves G. Conditional Central Limit Theorems for Gaussian Projections; Proceedings of the IEEE International Symposium on Information Theory (ISIT); Aachen, Germany, 25–30 June 2017; pp. 3055–3059.
  • 22. Reeves G., Pfister H.D. The Replica-Symmetric Prediction for Random Linear Estimation with Gaussian Matrices is Exact. IEEE Trans. Inf. Theory. 2019;65:2252–2283. doi: 10.1109/TIT.2019.2891664.
  • 23. Reeves G., Pfister H.D. Understanding Phase Transitions via Mutual Information and MMSE. In: Rodrigues M.R.D., Eldar Y.C., editors. Information-Theoretic Methods in Data Science. Cambridge University Press; Cambridge, UK: 2020. Chapter 7.
  • 24. Rockafellar R.T. Convex Analysis. Princeton University Press; Princeton, NJ, USA: 1970.
  • 25. Andrews G.E., Askey R., Roy R. Special Functions. Vol. 71, Encyclopedia of Mathematics and its Applications. Cambridge University Press; Cambridge, UK: 1999.
  • 26. Nielsen F., Nock R. On the Chi Square and Higher-Order Chi Distances for Approximating f-Divergences. IEEE Signal Process. Lett. 2014;21:10–13.
  • 27. Sason I., Verdú S. f-Divergence Inequalities. IEEE Trans. Inf. Theory. 2016;62:5973–6006. doi: 10.1109/TIT.2016.2603151.
  • 28. Sason I. On the Rényi Divergence, Joint Range of Relative Entropy, and a Channel Coding Theorem. IEEE Trans. Inf. Theory. 2016;62:23–34. doi: 10.1109/TIT.2015.2504100.
  • 29. Sason I., Verdú S. Improved Bounds on Lossless Source Coding and Guessing Moments via Rényi Measures. IEEE Trans. Inf. Theory. 2018;64:4323–4326. doi: 10.1109/TIT.2018.2803162.
  • 30. Sason I. On f-divergences: Integral representations, local behavior, and inequalities. Entropy. 2018;20:383. doi: 10.3390/e20050383.
  • 31. Melbourne J., Madiman M., Salapaka M.V. Relationships between certain f-divergences; Proceedings of the Allerton Conference on Communication, Control, and Computing; Monticello, IL, USA, 24–27 September 2019; pp. 1068–1073.
  • 32. Nishiyama T., Sason I. On Relations Between the Relative Entropy and χ2-Divergence, Generalizations and Applications. Entropy. 2020;22:563. doi: 10.3390/e22050563.
  • 33. Jebara T., Kondor R., Howard A. Probability Product Kernels. J. Mach. Learn. Res. 2004;5:818–844.
  • 34. Reeves G., Xu J., Zadik I. The All-or-Nothing Phenomenon in Sparse Linear Regression; Proceedings of the Conference on Learning Theory (COLT); Phoenix, AZ, USA, 25–28 June 2019.
  • 35. Reeves G., Xu J., Zadik I. All-or-nothing phenomena from single-letter to high dimensions; Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP); Guadeloupe, France, 15–18 December 2019.
  • 36. Grenié L., Molteni G. Inequalities for the beta function. Math. Inequalities Appl. 2015;18:1427–1442. doi: 10.7153/mia-18-111.
  • 37. Milgrom P., Segal I. Envelope Theorems for Arbitrary Choice Sets. Econometrica. 2002;70:583–601. doi: 10.1111/1468-0262.00296.
