Entropy. 2018 Mar 9;20(3):185. doi: 10.3390/e20030185

A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications

Arnaud Marsiglietti 1,*, Victoria Kostina 2

Abstract

We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and the distortion measure $d(x, \hat{x}) = |x - \hat{x}|^r$, with $r \geq 1$, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most $\log\sqrt{\pi e} \approx 1.5$ bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most $\log\sqrt{\frac{\pi e}{2}} \approx 1$ bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most $\log\sqrt{\frac{\pi e}{2}} \approx 1$ bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry.

Keywords: differential entropy, reverse entropy power inequality, rate-distortion function, Shannon lower bound, channel capacity, log-concave distribution, hyperplane conjecture

1. Introduction

It is well known that the differential entropy among all zero-mean random variables with the same second moment is maximized by the Gaussian distribution:

$h(X) \leq \log\sqrt{2\pi e\,\mathbb{E}\left[|X|^2\right]}$. (1)

More generally, the differential entropy under p-th moment constraint is upper bounded as (see e.g., [1] (Appendix 2)), for p>0,

$h(X) \leq \log\left(\alpha_p\,\|X\|_p\right)$, (2)

where

$\alpha_p \triangleq 2 e^{\frac{1}{p}}\,\Gamma\!\left(1+\frac{1}{p}\right) p^{\frac{1}{p}}, \qquad \|X\|_p \triangleq \left(\mathbb{E}\left[|X|^p\right]\right)^{\frac{1}{p}}$. (3)

Here, Γ denotes the Gamma function. Of course, if $p = 2$, $\alpha_p = \sqrt{2\pi e}$, and Equation (2) reduces to Equation (1). A natural question to ask is whether a matching lower bound on h(X) can be found in terms of the p-norm of X, $\|X\|_p$. The quest is meaningless without additional assumptions on the density of X, as $h(X) = -\infty$ is possible even if $\|X\|_p$ is finite. In this paper, we show that if the density of X, $f_X(x)$, is log-concave (that is, $\log f_X(x)$ is concave), then h(X) stays within a constant of the upper bound in Equation (2) (see Theorem 3 in Section 2 below):

$h(X) \geq \log\frac{2\,\|X - \mathbb{E}[X]\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$, (4)

where $p \geq 1$. Moreover, the bound (4) tightens for $p = 2$, where we have

$h(X) \geq \frac{1}{2}\log\left(4\operatorname{Var}[X]\right)$. (5)

The bound (4) actually holds for every $p > -1$ if, in addition to being log-concave, X is symmetric (that is, $f_X(-x) = f_X(x)$); see Theorem 1 in Section 2 below.
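As a quick numerical illustration (not part of the original paper), the bounds (2) and (4) can be checked by Monte Carlo for a concrete log-concave density, for instance the exponential distribution with unit rate, whose differential entropy equals exactly 1 nat. A minimal sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.special import gamma

# Exp(1) is log-concave with h(X) = 1 nat and E[X] = 1.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=2_000_000)
h_true = 1.0  # nats

for p in (1.0, 2.0, 3.0):
    norm_p = np.mean(x ** p) ** (1 / p)                     # ||X||_p (X >= 0 here)
    cnorm_p = np.mean(np.abs(x - 1.0) ** p) ** (1 / p)      # ||X - E[X]||_p
    alpha_p = 2 * np.exp(1 / p) * gamma(1 + 1 / p) * p ** (1 / p)   # alpha_p from (3)
    upper = np.log(alpha_p * norm_p)                         # upper bound (2), in nats
    lower = np.log(2 * cnorm_p / gamma(p + 1) ** (1 / p))    # lower bound (4), valid for p >= 1
    print(f"p={p:.0f}: {lower:.3f} <= h(X)={h_true:.3f} <= {upper:.3f}")
```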

The class of log-concave distributions is rich and contains important distributions in probability, statistics and analysis. Gaussian distribution, Laplace distribution, uniform distribution on a convex set, chi distribution are all log-concave. The class of log-concave random vectors has good behavior under natural probabilistic operations: namely, a famous result of Prékopa [2] states that sums of independent log-concave random vectors, as well as marginals of log-concave random vectors, are log-concave. Furthermore, log-concave distributions have moments of all orders.

Together with the classical bound in Equation (2), the bound in (4) tells us that entropy and moments of log-concave random variables are comparable.

Using a different proof technique, Bobkov and Madiman [3] recently showed that the differential entropy of a log-concave X satisfies

$h(X) \geq \frac{1}{2}\log\left(\frac{1}{2}\operatorname{Var}[X]\right)$. (6)

Our results in (4) and (5) tighten (6), in addition to providing a comparison with other moments.

Furthermore, this paper generalizes the lower bound on the differential entropy in (4) to random vectors. If the random vector $X = (X_1, \dots, X_n)$ consists of independent random variables, then the differential entropy of X is equal to the sum of the differential entropies of the component random variables, and one can trivially apply (4) component-wise to obtain a lower bound on h(X). In this paper, we show that, even for non-independent components, as long as the density of the random vector X is log-concave and satisfies a symmetry condition, its differential entropy is bounded from below in terms of the covariance matrix of X (see Theorem 4 in Section 2 below). As noted in [4], such a generalization is related to the famous hyperplane conjecture in convex geometry. We also extend our results to a more general class of random variables, namely, the class of γ-concave random variables, with $\gamma < 0$.

The bound (4) on the differential entropy allows us to derive reverse entropy power inequalities with explicit constants. The fundamental entropy power inequality of Shannon [5] and Stam [6] states that for all independent continuous random vectors X and Y in Rn,

$N(X+Y) \geq N(X) + N(Y)$, (7)

where

$N(X) = e^{\frac{2}{n}h(X)}$ (8)

denotes the entropy power of X. It is of interest to characterize distributions for which a reverse form of (7) holds. In this direction, it was shown by Bobkov and Madiman [7] that, given any continuous log-concave random vectors X and Y in Rn, there exist affine volume-preserving maps u1,u2 such that a reverse entropy power inequality holds for u1(X) and u2(Y):

$N(u_1(X) + u_2(Y)) \leq c\left(N(u_1(X)) + N(u_2(Y))\right) = c\left(N(X) + N(Y)\right)$, (9)

for some universal constant $c \geq 1$ (independent of the dimension).

In applications, it is important to know the precise value of the constant c that appears in (9). It was shown by Cover and Zhang [8] that, if X and Y are identically distributed (possibly dependent) log-concave random variables, then

$N(X+Y) \leq 4N(X)$. (10)

Inequality (10) easily extends to random vectors (see [9]). A similar bound for the difference of i.i.d. log-concave random vectors was obtained in [10], and reads as

$N(X-Y) \leq e^2 N(X)$. (11)

Recently, a new form of reverse entropy power inequality was investigated in [11], and a general reverse entropy power-type inequality was developed in [12]. For further details, we refer to the survey paper [13]. In Section 5, we provide explicit constants for non-identically distributed and uncorrelated log-concave random vectors (possibly dependent). In particular, we prove that as long as log-concave random variables X and Y are uncorrelated,

$N(X+Y) \leq \frac{\pi e}{2}\left(N(X) + N(Y)\right)$. (12)

A generalization of (12) to arbitrary dimension is stated in Theorem 8 in Section 2 below.

The bound (4) on the differential entropy is essential in the study of the difference between the rate-distortion function and the Shannon lower bound that we describe next. Given a nonnegative number d, the rate-distortion function $R_X(d)$ under the r-th moment distortion measure is given by

$R_X(d) = \inf_{P_{\hat X | X}\colon\ \mathbb{E}\left[|X - \hat X|^r\right] \leq d} I(X; \hat X)$, (13)

where the infimum is over all transition probability kernels $\mathbb{R} \to \mathbb{R}$ satisfying the moment constraint. The celebrated Shannon lower bound [14] states that the rate-distortion function is lower bounded by

$R_X(d) \geq \underline{R}_X(d) \triangleq h(X) - \log\left(\alpha_r\,d^{\frac{1}{r}}\right)$, (14)

where αr is defined in (3). For mean-square distortion (r=2), (14) simplifies to

$R_X(d) \geq h(X) - \log\sqrt{2\pi e\,d}$. (15)

The Shannon lower bound states that the rate-distortion function is lower bounded by the difference between the differential entropy of the source and a term that increases with the target distortion d, explicitly linking the storage requirements for X to the information content of X (measured by h(X)) and the desired reproduction distortion d. As shown in [15,16,17] under progressively less stringent assumptions (Koch [17] showed that (16) holds as long as $H(\lfloor X \rfloor) < \infty$), the Shannon lower bound is tight in the limit of low distortion,

$0 \leq R_X(d) - \underline{R}_X(d) \xrightarrow[d \to 0]{} 0$. (16)

The speed of convergence in (16) and its finite blocklength refinement were recently explored in [18]. Due to its simplicity and tightness in the high resolution/low distortion limit, the Shannon lower bound can serve as a proxy for the rate-distortion function $R_X(d)$, which rarely has an explicit representation. Furthermore, the tightness of the Shannon lower bound at low d is linked to the optimality of simple lattice quantizers [18], an insight which has evident practical significance. Gish and Pierce [19] showed that, for mean-square error distortion, the difference between the entropy rate of a scalar quantizer, $H_1$, and the rate-distortion function $R_X(d)$ converges to $\frac{1}{2}\log\frac{2\pi e}{12} \approx 0.254$ bit/sample in the limit $d \to 0$. Ziv [20] proved that $\tilde{H}_1 - R_X(d)$ is bounded by $\frac{1}{2}\log\frac{2\pi e}{6} \approx 0.754$ bit/sample, universally in d, where $\tilde{H}_1$ is the entropy rate of a dithered scalar quantizer.

In this paper, we show that the gap between $R_X(d)$ and $\underline{R}_X(d)$ is bounded universally in d, provided that the source density is log-concave: for mean-square error distortion ($r = 2$ in (13)), we have

$R_X(d) - \underline{R}_X(d) \leq \log\sqrt{\frac{\pi e}{2}} \approx 1.05 \text{ bits}$. (17)

Besides leading to the reverse entropy power inequality and the reverse Shannon lower bound, the new bounds on the differential entropy allow us to bound the capacity of additive noise memoryless channels, provided that the noise follows a log-concave distribution.

The capacity of a channel that adds a memoryless noise Z is given by (see e.g., [21] (Chapter 9)),

$C_Z(P) = \sup_{X\colon\ \mathbb{E}[|X|^2] \leq P} I(X; X+Z)$, (18)

where P is the power allotted for the transmission. As a consequence of the entropy power inequality (7) (or more elementary as a consequence of the worst additive noise lemma, see [22,23]), it holds that

$C_Z(P) \geq C^G_Z(P) = \frac{1}{2}\log\left(1 + \frac{P}{\operatorname{Var}[Z]}\right)$, (19)

for arbitrary noise Z, where $C^G_Z(P)$ denotes the capacity of the additive white Gaussian noise channel with noise variance $\operatorname{Var}[Z]$. This fact is well known (see e.g., [21] (Chapter 9)), and is referred to as the saddle-point condition.

In this paper, we show that, whenever the noise Z is log-concave, the difference between the capacity CZ(P) and the capacity of a Gaussian channel with the same noise power satisfies

$C_Z(P) - C^G_Z(P) \leq \log\sqrt{\frac{\pi e}{2}} \approx 1.05 \text{ bits}$. (20)

Let us mention a similar result by Zamir and Erez [24], who showed that the capacity of an arbitrary memoryless additive noise channel is well approximated by the mutual information between the Gaussian input and the output of the channel:

$C_Z(P) - I(X^*; X^* + Z) \leq \frac{1}{2}$ bit, (21)

where X* is a Gaussian input satisfying the power constraint. The bounds (20) and (21) are not directly comparable.

The rest of the paper is organized as follows. Section 2 presents and discusses our main results: the lower bounds on the differential entropy in Theorems 1, 3 and 4, the reverse entropy power inequalities with explicit constants in Theorems 7 and 8, the upper bounds on $R_X(d) - \underline{R}_X(d)$ in Theorems 9 and 10, and the bounds on the capacity of memoryless additive channels in Theorems 12 and 13. The convex geometry tools used to prove the bounds on the differential entropy in Theorems 1, 3 and 4 are presented in Section 3. In Section 4, we extend our results to the class of γ-concave random variables. The reverse entropy power inequalities in Theorems 7 and 8 are proven in Section 5. The bounds on the rate-distortion function in Theorems 9 and 10 are proven in Section 6. The bounds on the channel capacity in Theorems 12 and 13 are proven in Section 7.

2. Main Results

2.1. Lower Bounds on the Differential Entropy

A function $f \colon \mathbb{R}^n \to [0, +\infty)$ is log-concave if $\log f \colon \mathbb{R}^n \to [-\infty, +\infty)$ is a concave function. Equivalently, f is log-concave if for every $\lambda \in [0,1]$ and for every $x, y \in \mathbb{R}^n$, one has

$f\left((1-\lambda)x + \lambda y\right) \geq f(x)^{1-\lambda}\,f(y)^{\lambda}$. (22)

We say that a random vector X in Rn is log-concave if it has a probability density function fX with respect to Lebesgue measure in Rn such that fX is log-concave.

Our first result is a lower bound on the differential entropy of symmetric log-concave random variable in terms of its moments.

Theorem 1.

Let X be a symmetric log-concave random variable. Then, for every $p > -1$,

$h(X) \geq \log\frac{2\,\|X\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (23)

Moreover, (23) holds with equality for the uniform distribution in the limit $p \to -1$.

As we will see in Theorem 3, for $p = 2$, the bound (23) tightens to

$h(X) \geq \log\left(2\,\|X\|_2\right)$. (24)

The difference between the upper bound in (2) and the lower bound in (23) grows as $\log(p)$ as $p \to +\infty$, grows as $\frac{1}{p}$ as $p \to 0^+$, and reaches its minimum value of $\log(e) \approx 1.44$ bits at $p = 1$.

The next theorem, due to Karlin, Proschan and Barlow [25], shows that the moments of a symmetric log-concave random variable are comparable, and demonstrates that the bound in Theorem 1 tightens as $p \to -1$.

Theorem 2.

Let X be a symmetric log-concave random variable. Then, for every $-1 < p \leq q$,

$\frac{\|X\|_q}{\Gamma(q+1)^{\frac{1}{q}}} \leq \frac{\|X\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (25)

Moreover, the Laplace distribution satisfies (25) with equality [25].

Combining Theorem 2 with the well-known fact that $\|X\|_p$ is non-decreasing in p, we deduce that for every symmetric log-concave random variable X, for every $-1 < p < q$,

$\|X\|_p \leq \|X\|_q \leq \frac{\Gamma(q+1)^{\frac{1}{q}}}{\Gamma(p+1)^{\frac{1}{p}}}\,\|X\|_p$. (26)
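As a quick sanity check (not in the original text), the Laplace distribution can be used to verify (25)–(26) numerically: for a standard Laplace variable, $\mathbb{E}|X|^p = \Gamma(p+1)$, so the ratio $\|X\|_p/\Gamma(p+1)^{1/p}$ equals 1 for every p and (25) holds with equality. A minimal sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

# Standard Laplace density f(x) = exp(-|x|)/2: symmetric and log-concave.
def abs_moment(p):
    # E|X|^p = int_0^inf x^p exp(-x) dx = Gamma(p+1)
    val, _ = quad(lambda x: x ** p * np.exp(-x), 0, np.inf)
    return val

for p, q in [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0)]:
    norm_p, norm_q = abs_moment(p) ** (1 / p), abs_moment(q) ** (1 / q)
    lhs = norm_q / gamma(q + 1) ** (1 / q)   # left side of (25)
    rhs = norm_p / gamma(p + 1) ** (1 / p)   # right side of (25); both should be ~1
    sandwich = norm_p <= norm_q <= gamma(q + 1) ** (1 / q) / gamma(p + 1) ** (1 / p) * norm_p  # (26)
    print(f"p={p}, q={q}: {lhs:.4f} <= {rhs:.4f}, (26) holds: {sandwich}")
```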

Using Theorem 1 and (24), we immediately obtain the following upper bound for the relative entropy D(X||GX) between a symmetric log-concave random variable X and a Gaussian GX with same variance as that of X.

Corollary 1.

Let X be a symmetric log-concave random variable. Then, for every $p > -1$,

$D(X\|G_X) \leq \log\sqrt{\pi e} + \Delta_p$, (27)

where $G_X \sim \mathcal{N}\left(0, \|X\|_2^2\right)$, and

$\Delta_p \triangleq \begin{cases} \log\left(\dfrac{\Gamma(p+1)^{\frac{1}{p}}}{\sqrt{2}}\,\dfrac{\|X\|_2}{\|X\|_p}\right), & p \neq 2, \\[2mm] -\log\sqrt{2}, & p = 2. \end{cases}$ (28)

Remark 1.

The uniform distribution achieves equality in (27) in the limit $p \to -1$. Indeed, if U is uniformly distributed on a symmetric interval, then

$\Delta_p = \log\frac{\Gamma(p+2)^{\frac{1}{p}}}{\sqrt{6}} \xrightarrow[p \to -1]{} \frac{1}{2}\log\frac{1}{6}$, (29)

and so, in the limit $p \to -1$, the upper bound in Corollary 1 coincides with the true value of $D(U\|G_U)$:

$D(U\|G_U) = \frac{1}{2}\log\frac{\pi e}{6}$. (30)
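For the reader's convenience, here is the short computation behind (30) (a standard calculation, added here for completeness): if U is uniform on $\left[-\frac{a}{2}, \frac{a}{2}\right]$, then $h(U) = \log a$ and $\operatorname{Var}[U] = \frac{a^2}{12}$, and since $D(U\|G_U) = h(G_U) - h(U)$ when $G_U$ is Gaussian with the same mean and variance as U,

$D(U\|G_U) = \frac{1}{2}\log\left(2\pi e\,\frac{a^2}{12}\right) - \log a = \frac{1}{2}\log\frac{\pi e}{6} \approx 0.254 \text{ bits}.$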

We next provide a lower bound for the differential entropy of log-concave random variables that are not necessarily symmetric.

Theorem 3.

Let X be a log-concave random variable. Then, for every $p \geq 1$,

$h(X) \geq \log\frac{2\,\|X - \mathbb{E}[X]\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (31)

Moreover, for $p = 2$, the bound (31) tightens to

$h(X) \geq \log\left(2\sqrt{\operatorname{Var}[X]}\right)$. (32)

The next proposition is an analog of Theorem 2 for log-concave random variables that are not necessarily symmetric.

Proposition 1.

Let X be a log-concave random variable. Then, for every $1 \leq p \leq q$,

$\frac{\|X - \mathbb{E}[X]\|_q}{\Gamma(q+1)^{\frac{1}{q}}} \leq \frac{2\,\|X - \mathbb{E}[X]\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (33)

Remark 2.

Contrary to Theorem 2, we do not know whether there exists a distribution that realizes equality in (33).

Using Theorem 3, we immediately obtain the following upper bound for the relative entropy D(X||GX) between an arbitrary log-concave random variable X and a Gaussian GX with same variance as that of X. Recall the definition of Δp in (28).

Corollary 2.

Let X be a zero-mean, log-concave random variable. Then, for every $p \geq 1$,

$D(X\|G_X) \leq \log\sqrt{\pi e} + \Delta_p$, (34)

where $G_X \sim \mathcal{N}\left(0, \|X\|_2^2\right)$. In particular, by taking $p = 2$, we necessarily have

$D(X\|G_X) \leq \log\sqrt{\frac{\pi e}{2}}$. (35)

For a given distribution of X, one can optimize over p to further tighten (35), as seen in (29) for the uniform distribution.

We now present a generalization of the bound in Theorem 1 to random vectors satisfying a symmetry condition. A function $f \colon \mathbb{R}^n \to \mathbb{R}$ is called unconditional if, for every $(x_1, \dots, x_n) \in \mathbb{R}^n$ and every $(\varepsilon_1, \dots, \varepsilon_n) \in \{-1, 1\}^n$, one has

$f(\varepsilon_1 x_1, \dots, \varepsilon_n x_n) = f(x_1, \dots, x_n)$. (36)

For example, the probability density function of the standard Gaussian distribution is unconditional. We say that a random vector X in Rn is unconditional if it has a probability density function fX with respect to Lebesgue measure in Rn such that fX is unconditional.

Theorem 4.

Let X be a symmetric log-concave random vector in $\mathbb{R}^n$, $n \geq 2$. Then,

$h(X) \geq \frac{n}{2}\log\frac{|K_X|^{\frac{1}{n}}}{c(n)}$, (37)

where $|K_X|$ denotes the determinant of the covariance matrix of X, and $c(n) = \frac{e^2 n^2}{4\sqrt{2}\,(n+2)}$. If, in addition, X is unconditional, then $c(n) = \frac{e^2}{2}$.

By combining Theorem 4 with the well-known upper bound on the differential entropy, we deduce that, for every symmetric log-concave random vector X in Rn,

$\frac{n}{2}\log\frac{|K_X|^{\frac{1}{n}}}{c(n)} \leq h(X) \leq \frac{n}{2}\log\left(2\pi e\,|K_X|^{\frac{1}{n}}\right)$, (38)

where $c(n) = \frac{e^2 n^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ if, in addition, X is unconditional.

Using Theorem 4, we immediately obtain the following upper bound for the relative entropy D(X||GX) between a symmetric log-concave random vector X and a Gaussian GX with the same covariance matrix as that of X.

Corollary 3.

Let X be a symmetric log-concave random vector in Rn. Then,

$D(X\|G_X) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\right)$, (39)

where $G_X \sim \mathcal{N}(0, K_X)$, with $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ when X is unconditional.

For isotropic unconditional log-concave random vectors (whose definition we recall in Section 3.3 below), we extend Theorem 4 to other moments.

Theorem 5.

Let $X = (X_1, \dots, X_n)$ be an isotropic unconditional log-concave random vector. Then, for every $p > -1$,

$h(X) \geq \max_{i \in \{1, \dots, n\}} n\log\frac{2\,\|X_i\|_p}{\Gamma(p+1)^{\frac{1}{p}}\,c}$, (40)

where $c = \sqrt{6}\,e$. If, in addition, $f_X$ is invariant under permutations of the coordinates, then $c = e$.

2.2. Extension to γ-Concave Random Variables

The bound in Theorem 1 can be extended to a larger class of random variables than log-concave, namely the class of γ-concave random variables that we describe next.

Let $\gamma < 0$. We say that a probability density function $f \colon \mathbb{R}^n \to [0, +\infty)$ is γ-concave if $f^{\gamma}$ is convex. Equivalently, f is γ-concave if for every $\lambda \in [0,1]$ and every $x, y \in \mathbb{R}^n$, one has

$f\left((1-\lambda)x + \lambda y\right) \geq \left((1-\lambda)f(x)^{\gamma} + \lambda f(y)^{\gamma}\right)^{\frac{1}{\gamma}}$. (41)

As $\gamma \to 0$, (41) agrees with (22), and thus 0-concave distributions correspond to log-concave distributions. The class of γ-concave distributions has been deeply studied in [26,27].

Since for fixed $a, b \geq 0$ the function $\left((1-\lambda)a^{\gamma} + \lambda b^{\gamma}\right)^{\frac{1}{\gamma}}$ is non-decreasing in γ, we deduce that any log-concave distribution is γ-concave, for any $\gamma < 0$.

For example, extended Cauchy distributions, that is, distributions of the form

$f_X(x) = \frac{C_{\gamma}}{\left(1 + |x|\right)^{n - \frac{1}{\gamma}}}, \qquad x \in \mathbb{R}^n$, (42)

where Cγ is the normalization constant, are γ-concave distributions (but are not log-concave).

We say that a random vector X in Rn is γ-concave if it has a probability density function fX with respect to Lebesgue measure in Rn such that fX is γ-concave.

We derive the following lower bound on the differential entropy for one-dimensional symmetric γ-concave random variables, with $\gamma \in (-1, 0)$.

Theorem 6.

Let $\gamma \in (-1, 0)$. Let X be a symmetric γ-concave random variable. Then, for every $p \in \left(-1, -1 - \frac{1}{\gamma}\right)$,

$h(X) \geq \log\left(\frac{2\,\|X\|_p}{\Gamma(p+1)^{\frac{1}{p}}} \cdot \frac{\Gamma\!\left(-1-\frac{1}{\gamma}\right)^{1+\frac{1}{p}}}{\Gamma\!\left(-\frac{1}{\gamma}\right)\,\Gamma\!\left(-\frac{1}{\gamma}-(p+1)\right)^{\frac{1}{p}}}\right)$. (43)

Notice that (43) reduces to (23) as $\gamma \to 0$. Theorem 6 implies the following relation between entropy and second moment, for any $\gamma \in \left(-\frac{1}{3}, 0\right)$.

Corollary 4.

Let $\gamma \in \left(-\frac{1}{3}, 0\right)$. Let X be a symmetric γ-concave random variable. Then,

$h(X) \geq \frac{1}{2}\log\left(2\,\|X\|_2^2\,\frac{\Gamma\!\left(-1-\frac{1}{\gamma}\right)^{3}}{\Gamma\!\left(-\frac{1}{\gamma}\right)^{2}\,\Gamma\!\left(-\frac{1}{\gamma}-3\right)}\right) = \frac{1}{2}\log\left(2\,\|X\|_2^2\,\frac{(2\gamma+1)(3\gamma+1)}{(\gamma+1)^{2}}\right)$. (44)
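As an illustration (not from the paper), the bound (44) can be checked numerically for a one-dimensional density of the form (42). Below we take the hypothetical choice $\gamma = -\frac{1}{4} \in \left(-\frac{1}{3}, 0\right)$, for which (42) becomes $f_X(x) = 2(1+|x|)^{-5}$, and compare $h(X)$ with the right-hand side of (44); a sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import quad

g = -0.25                         # gamma in (-1/3, 0)
s = 1 - 1 / g                     # tail exponent n - 1/gamma with n = 1, here 5
C = (s - 1) / 2                   # normalization of f(x) = C (1+|x|)^(-s), here 2
f = lambda x: C * (1 + abs(x)) ** (-s)

h, _ = quad(lambda x: -2 * f(x) * np.log(f(x)), 0, np.inf)   # h(X) in nats (symmetric density)
m2, _ = quad(lambda x: 2 * x ** 2 * f(x), 0, np.inf)         # ||X||_2^2
bound = 0.5 * np.log(2 * m2 * (2 * g + 1) * (3 * g + 1) / (g + 1) ** 2)   # RHS of (44), nats
print(f"h(X) = {h:.3f} nats  >=  bound (44) = {bound:.3f} nats")
```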

2.3. Reverse Entropy Power Inequality with an Explicit Constant

As an application of Theorems 3 and 4, we establish in Theorems 7 and 8 below a reverse form of the entropy power inequality (7) with explicit constants, for uncorrelated log-concave random vectors. Recall the definition of the entropy power (8).

Theorem 7.

Let X and Y be uncorrelated log-concave random variables. Then,

$N(X+Y) \leq \frac{\pi e}{2}\left(N(X) + N(Y)\right)$. (45)

As a consequence of Corollary 4, reverse entropy power inequalities for more general distributions can be obtained. In particular, for any uncorrelated symmetric γ-concave random variables X and Y, with $\gamma \in \left(-\frac{1}{3}, 0\right)$,

$N(X+Y) \leq \pi e\,\frac{(\gamma+1)^2}{(2\gamma+1)(3\gamma+1)}\left(N(X) + N(Y)\right)$. (46)

One cannot have a reverse entropy power inequality in higher dimensions for arbitrary log-concave random vectors. Indeed, just consider X uniformly distributed on $\left[-\frac{\varepsilon}{2}, \frac{\varepsilon}{2}\right] \times \left[-\frac{1}{2}, \frac{1}{2}\right]$ and Y uniformly distributed on $\left[-\frac{1}{2}, \frac{1}{2}\right] \times \left[-\frac{\varepsilon}{2}, \frac{\varepsilon}{2}\right]$ in $\mathbb{R}^2$, with $\varepsilon > 0$ small enough so that N(X) and N(Y) are arbitrarily small compared to N(X+Y). Hence, we need to put X and Y in a certain position so that a reverse form of (7) is possible. While the isotropic position (discussed in Section 3) will work, it can be relaxed to the weaker condition that the covariance matrices are proportional. Recall that we denote by $K_X$ the covariance matrix of X.

Theorem 8.

Let X and Y be uncorrelated symmetric log-concave random vectors in $\mathbb{R}^n$ such that $K_X$ and $K_Y$ are proportional. Then,

$N(X+Y) \leq \frac{\pi e^3 n^2}{2\sqrt{2}\,(n+2)}\left(N(X) + N(Y)\right)$. (47)

If, in addition, X and Y are unconditional, then

$N(X+Y) \leq \pi e^3\left(N(X) + N(Y)\right)$. (48)

2.4. New Bounds on the Rate-distortion Function

As an application of Theorems 1 and 3, we show in Corollary 5 below that in the class of one-dimensional log-concave distributions, the rate-distortion function does not exceed the Shannon lower bound by more than $\log\sqrt{\pi e} \approx 1.55$ bits (which can be refined to $\log e \approx 1.44$ bits when the source is symmetric), independently of d and $r \geq 1$. Denote for brevity

$\beta_r \triangleq \sqrt{1 + \frac{r^{\frac{2}{r}}\,\Gamma\!\left(\frac{3}{r}\right)}{\Gamma\!\left(\frac{1}{r}\right)}}$, (49)

and recall the definition of αr in (3).

We start by giving a bound on the difference between the rate-distortion function and the Shannon lower bound, which applies to general, not necessarily log-concave, random variables.

Theorem 9.

Let $d \geq 0$ and $r \geq 1$. Let X be an arbitrary random variable.

(1) Let $r \in [1, 2]$. If $\|X\|_2 > d^{\frac{1}{r}}$, then

$R_X(d) - \underline{R}_X(d) \leq D(X\|G_X) + \log\frac{\alpha_r}{\sqrt{2\pi e}}$. (50)

If $\|X\|_2 \leq d^{\frac{1}{r}}$, then $R_X(d) = 0$.

(2) Let $r > 2$. If $\|X\|_2 \geq d^{\frac{1}{r}}$, then

$R_X(d) - \underline{R}_X(d) \leq D(X\|G_X) + \log\beta_r$. (51)

If $\|X\|_r \leq d^{\frac{1}{r}}$, then $R_X(d) = 0$. If $\|X\|_r > d^{\frac{1}{r}}$ and $\|X\|_2 < d^{\frac{1}{r}}$, then $R_X(d) \leq \log\frac{\sqrt{2\pi e}\,\beta_r}{\alpha_r}$.

Remark 3.

For Gaussian X and r=2, the upper bound in (50) is 0, as expected.

The next result refines the bounds in Theorem 9 for symmetric log-concave random variables when r>2.

Theorem 10.

Let $d \geq 0$ and $r > 2$. Let X be a symmetric log-concave random variable.

If $\|X\|_2 \geq d^{\frac{1}{r}}$, then

$R_X(d) - \underline{R}_X(d) \leq D(X\|G_X) + \min\left\{\log\beta_r,\ \log\frac{\alpha_r\,\Gamma(r+1)^{\frac{1}{r}}}{2\sqrt{\pi e}}\right\}$. (52)

If $\|X\|_r \leq d^{\frac{1}{r}}$ or $\|X\|_2 \leq \frac{\sqrt{2}\,d^{\frac{1}{r}}}{\Gamma(r+1)^{\frac{1}{r}}}$, then $R_X(d) = 0$. If $\|X\|_r > d^{\frac{1}{r}}$ and $\|X\|_2 \in \left(\frac{\sqrt{2}\,d^{\frac{1}{r}}}{\Gamma(r+1)^{\frac{1}{r}}},\ d^{\frac{1}{r}}\right)$, then $R_X(d) \leq \min\left\{\log\frac{\sqrt{2\pi e}\,\beta_r}{\alpha_r},\ \log\frac{\Gamma(r+1)^{\frac{1}{r}}}{\sqrt{2}}\right\}$.

To bound $R_X(d) - \underline{R}_X(d)$ independently of the distribution of X, we apply the bound (35) on $D(X\|G_X)$ to Theorems 9 and 10:

Corollary 5.

Let X be a log-concave random variable. For $r \in [1,2]$, we have

$R_X(d) - \underline{R}_X(d) \leq \log\frac{\alpha_r}{2}$. (53)

For r>2, we have

$R_X(d) - \underline{R}_X(d) \leq \log\left(\sqrt{\frac{\pi e}{2}}\,\beta_r\right)$. (54)

If, in addition, X is symmetric, then, for r>2, we have

$R_X(d) - \underline{R}_X(d) \leq \min\left\{\log\frac{\alpha_r\,\Gamma(r+1)^{\frac{1}{r}}}{2\sqrt{2}},\ \log\left(\sqrt{\frac{\pi e}{2}}\,\beta_r\right)\right\}$. (55)

Figure 1a presents our bound for different values of r. Regardless of r and d,

$R_X(d) - \underline{R}_X(d) \leq \log\sqrt{\pi e} \approx 1.55 \text{ bits}$. (56)

Figure 1. The bound on the difference between the rate-distortion function under the r-th moment distortion measure and the Shannon lower bound, stated in Corollary 5.

The bounds in Figure 1a tighten for symmetric log-concave sources when $r \in (2, 4.3)$. Figure 1b presents this tighter bound for different values of r. Regardless of r and d,

$R_X(d) - \underline{R}_X(d) \leq \log e \approx 1.44 \text{ bits}$. (57)

One can see that the graph in Figure 1b is continuous at r=2, contrary to the graph in Figure 1a. This is because Theorem 2, which applies to symmetric log-concave random variables, is strong enough to imply the tightening of (51) given in (52), while Proposition 1, which provides a counterpart of Theorem 2 applicable to all log-concave random variables, is insufficient to derive a similar tightening in that more general setting.
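The curves behind Figure 1 can be reproduced directly from Corollary 5. The following short script (a sketch, assuming NumPy/SciPy) evaluates the bounds (53)-(55) in bits as a function of r:

```python
import numpy as np
from scipy.special import gamma

alpha = lambda r: 2 * np.exp(1 / r) * gamma(1 + 1 / r) * r ** (1 / r)      # alpha_r from (3)
beta = lambda r: np.sqrt(1 + r ** (2 / r) * gamma(3 / r) / gamma(1 / r))   # beta_r from (49)

def bound_general(r):     # (53) for r in [1,2], (54) for r > 2, in bits
    return np.log2(alpha(r) / 2) if r <= 2 else np.log2(np.sqrt(np.pi * np.e / 2) * beta(r))

def bound_symmetric(r):   # (55), symmetric log-concave source, r > 2, in bits
    return min(np.log2(alpha(r) * gamma(r + 1) ** (1 / r) / (2 * np.sqrt(2))),
               np.log2(np.sqrt(np.pi * np.e / 2) * beta(r)))

for r in (1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 6.0, 10.0):
    tail = f", symmetric: {bound_symmetric(r):.3f}" if r > 2 else ""
    print(f"r = {r:4.1f}: general {bound_general(r):.3f} bits{tail}")
```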

Remark 4.

While Corollary 5 bounds the difference $R_X(d) - \underline{R}_X(d)$ by a universal constant independent of the distribution of X, tighter bounds can be obtained if one is willing to relinquish such universality. For example, for mean-square distortion ($r = 2$) and a uniformly distributed source U, using Remark 1, we obtain

$R_U(d) - \underline{R}_U(d) \leq \frac{1}{2}\log\frac{2\pi e}{12} \approx 0.254 \text{ bits}$. (58)

Theorem 9 easily extends to a random vector X in $\mathbb{R}^n$, $n \geq 2$, with a similar proof, the only difference being an extra term of $\frac{n}{2}\log\frac{\frac{1}{n}\|X\|_2^2}{|K_X|^{\frac{1}{n}}}$ that will appear on the right-hand side of (50) and (51), and will come from the upper bound on the differential entropy (38). Here,

$\|X\|_p \triangleq \left(\mathbb{E}\left[\sum_{i=1}^{n}|X_i|^p\right]\right)^{\frac{1}{p}}$.

As a result, the bound on $R_X(d) - \underline{R}_X(d)$ can be arbitrarily large in higher dimensions because of the term $\frac{\frac{1}{n}\|X\|_2^2}{|K_X|^{\frac{1}{n}}}$. However, for isotropic random vectors (whose definition we recall in Section 3.3 below), one has $\frac{1}{n}\|X\|_2^2 = |K_X|^{\frac{1}{n}}$. Hence, using the bound (39) on $D(X\|G_X)$, we can bound $R_X(d) - \underline{R}_X(d)$ independently of the distribution of the isotropic log-concave random vector X in $\mathbb{R}^n$, $n \geq 2$.

Corollary 6.

Let X be an isotropic log-concave random vector in $\mathbb{R}^n$, $n \geq 2$. Then,

$R_X(d) - \underline{R}_X(d) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\right)$, (59)

where $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ if, in addition, X is unconditional.

Let us consider the rate-distortion function under the determinant constraint for random vectors in $\mathbb{R}^n$, $n \geq 2$:

$R_X^{\mathrm{cov}}(d) = \inf_{P_{\hat X|X}\colon\ |K_{X-\hat X}|^{\frac{1}{n}} \leq d} I(X; \hat X)$, (60)

where the infimum is taken over all joint distributions satisfying the determinant constraint $|K_{X-\hat X}|^{\frac{1}{n}} \leq d$. For this distortion measure, we have the following bound.

Theorem 11.

Let X be a symmetric log-concave random vector in $\mathbb{R}^n$. If $|K_X|^{\frac{1}{n}} > d$, then

$0 \leq R_X^{\mathrm{cov}}(d) - \underline{R}_X(d) \leq D(X\|G_X) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\right)$, (61)

with $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$. If, in addition, X is unconditional, then $c(n) = \frac{e^2}{2}$. If $|K_X|^{\frac{1}{n}} \leq d$, then $R_X^{\mathrm{cov}}(d) = 0$.

2.5. New Bounds on the Capacity of Memoryless Additive Channels

As another application of Theorem 3, we compare the capacity $C_Z$ of a channel with log-concave additive noise Z with the capacity of the Gaussian channel. Recall that the capacity of the Gaussian channel with the same noise variance is

$C^G_Z(P) = \frac{1}{2}\log\left(1 + \frac{P}{\operatorname{Var}[Z]}\right)$. (62)

Theorem 12.

Let Z be a log-concave random variable. Then,

$0 \leq C_Z(P) - C^G_Z(P) \leq \log\sqrt{\frac{\pi e}{2}} \approx 1.05 \text{ bits}$. (63)

Remark 5.

Theorem 12 tells us that the capacity of a channel with log-concave additive noise exceeds the capacity of a Gaussian channel with the same noise variance by no more than 1.05 bits.
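To make Theorem 12 concrete, here is a toy computation (an illustrative sketch, not from the paper) that brackets the capacity of a channel whose additive noise is any log-concave random variable, at a hypothetical signal-to-noise ratio $P/\operatorname{Var}[Z] = 1$:

```python
import numpy as np

P, var_Z = 1.0, 1.0
c_gauss = 0.5 * np.log2(1 + P / var_Z)                  # Gaussian-noise capacity (62), bits
c_upper = c_gauss + np.log2(np.sqrt(np.pi * np.e / 2))  # upper bound of Theorem 12, bits
print(f"{c_gauss:.3f} bits <= C_Z(P) <= {c_upper:.3f} bits")
```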

As an application of Theorem 4, we can provide bounds for the capacity of a channel with log-concave additive noise Z in $\mathbb{R}^n$, $n \geq 1$. The formula (18) for the capacity generalizes to dimension n as

$C_Z(P) = \sup_{X\colon\ \frac{1}{n}\|X\|_2^2 \leq P} I(X; X+Z)$. (64)

Theorem 13.

Let Z be a symmetric log-concave random vector in Rn. Then,

$0 \leq C_Z(P) - \frac{n}{2}\log\left(1 + \frac{P}{|K_Z|^{\frac{1}{n}}}\right) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\,\frac{\frac{1}{n}\|Z\|_2^2 + P}{|K_Z|^{\frac{1}{n}} + P}\right)$, (65)

where $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$. If, in addition, Z is unconditional, then $c(n) = \frac{e^2}{2}$.

The upper bound in Theorem 13 can be arbitrarily large, as can be seen by inflating the ratio $\frac{\frac{1}{n}\|Z\|_2^2}{|K_Z|^{\frac{1}{n}}}$. For isotropic random vectors (whose definition is recalled in Section 3.3 below), one has $\frac{1}{n}\|Z\|_2^2 = |K_Z|^{\frac{1}{n}}$, and the following corollary follows.

Corollary 7.

Let Z be an isotropic log-concave random vector in Rn. Then,

$0 \leq C_Z(P) - \frac{n}{2}\log\left(1 + \frac{P}{|K_Z|^{\frac{1}{n}}}\right) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\right)$, (66)

where $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$. If, in addition, Z is unconditional, then $c(n) = \frac{e^2}{2}$.

3. New Lower Bounds on the Differential Entropy

3.1. Proof of Theorem 1

The key to our development is the following result for one-dimensional log-concave distributions, well-known in convex geometry. It can be found in [28], in a slightly different form.

Lemma 1.

The function

$F(r) = \frac{1}{\Gamma(r+1)}\int_0^{+\infty} x^r f(x)\,dx$ (67)

is log-concave on $[-1, +\infty)$, whenever $f \colon [0, +\infty) \to [0, +\infty)$ is log-concave [28].
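Lemma 1 is easy to probe numerically (a check we add for illustration, not part of the original text). For the log-concave function $f(x) = e^{-x^2/2}$ on $[0,+\infty)$, the integral in (67) has the closed form $F(r) = 2^{\frac{r-1}{2}}\Gamma\!\left(\frac{r+1}{2}\right)/\Gamma(r+1)$, and its logarithm should exhibit non-positive discrete second differences on $(-1, +\infty)$:

```python
import numpy as np
from scipy.special import gammaln

# F(r) = (1/Gamma(r+1)) int_0^inf x^r exp(-x^2/2) dx = 2^((r-1)/2) Gamma((r+1)/2) / Gamma(r+1)
def logF(r):
    return 0.5 * (r - 1) * np.log(2) + gammaln((r + 1) / 2) - gammaln(r + 1)

r = np.linspace(-0.95, 6.0, 200)
second_diff = np.diff(logF(r), 2)       # discrete analogue of (log F)''
print("largest second difference:", second_diff.max())   # expected <= 0 (log-concavity)
```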

Proof of Theorem 1.

Let $p > 0$. Applying Lemma 1 to the values $-1, 0, p$, we have

$F(0) = F\!\left(\frac{p}{p+1}\cdot(-1) + \frac{1}{p+1}\cdot p\right) \geq F(-1)^{\frac{p}{p+1}}\,F(p)^{\frac{1}{p+1}}$. (68)

The bound in Theorem 1 follows by computing the values $F(-1)$, $F(0)$ and $F(p)$ for $f = f_X$.

One has

$F(0) = \frac{1}{2}, \qquad F(p) = \frac{\|X\|_p^p}{2\,\Gamma(p+1)}$. (69)

To compute $F(-1)$, we first provide a different expression for $F(r)$. Notice that

$F(r) = \frac{1}{\Gamma(r+1)}\int_0^{+\infty} x^r \int_0^{f_X(x)} dt\,dx = \frac{r+1}{\Gamma(r+2)}\int_0^{\max f_X}\int_{\{x \geq 0\,:\, f_X(x) \geq t\}} x^r\,dx\,dt$. (70)

Denote the generalized inverse of $f_X$ by $f_X^{-1}(t) \triangleq \sup\{x \geq 0 : f_X(x) \geq t\}$, $t \geq 0$. Since $f_X$ is log-concave and

$f_X(x) \leq f_X(0) = \max f_X$, (71)

it follows that $f_X$ is non-increasing on $[0, +\infty)$. Therefore, $\{x \geq 0 : f_X(x) \geq t\} = [0, f_X^{-1}(t)]$. Hence,

$F(r) = \frac{r+1}{\Gamma(r+2)}\int_0^{f_X(0)}\int_0^{f_X^{-1}(t)} x^r\,dx\,dt = \frac{1}{\Gamma(r+2)}\int_0^{f_X(0)}\left(f_X^{-1}(t)\right)^{r+1} dt$. (72)

We deduce that

$F(-1) = f_X(0)$. (73)

Plugging (69) and (73) into (68), we obtain

$f_X(0) \leq \frac{\Gamma(p+1)^{\frac{1}{p}}}{2\,\|X\|_p}$. (74)

It follows immediately that

$h(X) = \int f_X(x)\log\frac{1}{f_X(x)}\,dx \geq \log\frac{1}{f_X(0)} \geq \log\frac{2\,\|X\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (75)

For $p \in (-1, 0)$, the bound is obtained similarly by applying Lemma 1 to the values $-1, p, 0$.

We now show that equality is attained, in the limit $p \to -1$, by U uniformly distributed on a symmetric interval $\left[-\frac{a}{2}, \frac{a}{2}\right]$, for some $a > 0$. In this case, we have

$\|U\|_p^p = \frac{\left(\frac{a}{2}\right)^{p}}{p+1}$. (76)

Hence,

$\frac{1}{p}\log\frac{2^p\,\|U\|_p^p}{\Gamma(p+1)} = \log\frac{a}{\Gamma(p+2)^{\frac{1}{p}}} \xrightarrow[p \to -1]{} \log(a) = h(U)$. (77)

Remark 6.

From (71) and (74), we see that the following statement holds: for every symmetric log-concave random variable $X \sim f_X$, for every $p > -1$, and for every $x \in \mathbb{R}$,

$f_X(x) \leq \frac{\Gamma(p+1)^{\frac{1}{p}}}{2\,\|X\|_p}$. (78)

Inequality (78) is the main ingredient in the proof of Theorem 1. It is instructive to provide a direct proof of inequality (78) without appealing to Lemma 1, the ideas going back to [25]:

Proof of inequality (78)

By considering $X \mid X \geq 0$, where X is symmetric log-concave, it is enough to show that for every log-concave density f supported on $[0, +\infty)$, one has

$f(0)\left(\int_0^{+\infty} x^p f(x)\,dx\right)^{\frac{1}{p}} \leq \Gamma(p+1)^{\frac{1}{p}}$. (79)

By a scaling argument, one may assume that $f(0) = 1$. Take $g(x) = e^{-x}$. If $f = g$, then the result follows by a straightforward computation. Assume that $f \neq g$. Since $f \neq g$ and $\int f = \int g$, the function $f - g$ changes sign at least once. However, since $f(0) = g(0)$, f is log-concave and g is log-affine, the function $f - g$ changes sign exactly once. It follows that there exists a unique point $x_0 > 0$ such that for every $0 < x < x_0$, $f(x) \geq g(x)$, and for every $x > x_0$, $f(x) \leq g(x)$. We deduce that for every $x > 0$ and every $p \neq 0$,

$\frac{1}{p}\left(f(x) - g(x)\right)\left(x^p - x_0^p\right) \leq 0$. (80)

Integrating over x>0, we arrive at

$\frac{1}{p}\left(\int_0^{+\infty} x^p f(x)\,dx - \Gamma(p+1)\right) = \frac{1}{p}\int_0^{+\infty}\left(x^p - x_0^p\right)\left(f(x) - g(x)\right)dx \leq 0$, (81)

which yields the desired result. ☐

Actually, the powerful and versatile result of Lemma 1, which implies (78), is also proved using the technique in (79)–(81). In the context of information theory, Lemma 1 has been previously applied to obtain reverse entropy power inequalities [7], as well as to establish optimal concentration of the information content [29]. In this paper, we make use of Lemma 1 to prove Theorem 1. Moreover, Lemma 1 immediately implies Theorem 2. Below, we recall the argument for completeness.

Proof of Theorem 2.

The result follows by applying Lemma 1 to the values 0,p,q. If 0<p<q, then

$F(p) = F\!\left(\left(1 - \frac{p}{q}\right)\cdot 0 + \frac{p}{q}\cdot q\right) \geq F(0)^{1-\frac{p}{q}}\,F(q)^{\frac{p}{q}}$. (82)

Hence,

$\frac{\|X\|_p^p}{\Gamma(p+1)} \geq \left(\frac{\|X\|_q^q}{\Gamma(q+1)}\right)^{\frac{p}{q}}$, (83)

which yields the desired result. The bound is obtained similarly if p<q<0 or if p<0<q. ☐

3.2. Proof of Theorem 3 and Proposition 1

The proof leverages the ideas from [10].

Proof of Theorem 3.

Let Y be an independent copy of X. Jensen’s inequality yields

$h(X) = -\int f_X\log(f_X) \geq -\log\int f_X^2 = -\log\left(f_{X-Y}(0)\right)$. (84)

Since $X - Y$ is symmetric and log-concave, we can apply inequality (74) to $X - Y$ to obtain

$\frac{1}{f_{X-Y}(0)} \geq \frac{2\,\|X-Y\|_p}{\Gamma(p+1)^{\frac{1}{p}}} \geq \frac{2\,\|X - \mathbb{E}[X]\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$, (85)

where the last inequality again follows from Jensen’s inequality. Combining (84) and (85) leads to the desired result:

$h(X) \geq \log\frac{1}{f_{X-Y}(0)} \geq \log\frac{2\,\|X - \mathbb{E}[X]\|_p}{\Gamma(p+1)^{\frac{1}{p}}}$. (86)

For p=2, one may tighten (85) by noticing that

$\|X - Y\|_2^2 = 2\operatorname{Var}[X]$. (87)

Hence,

$h(X) \geq \log\frac{1}{f_{X-Y}(0)} \geq \log\left(\sqrt{2}\,\|X-Y\|_2\right) = \log\left(2\sqrt{\operatorname{Var}[X]}\right)$. (88)

Proof of Proposition 1.

Let Y be an independent copy of X. Since $X - Y$ is symmetric and log-concave, we can apply Theorem 2 to $X - Y$. Jensen's inequality and the triangle inequality yield:

$\|X - \mathbb{E}[X]\|_q \leq \|X - Y\|_q \leq \frac{\Gamma(q+1)^{\frac{1}{q}}}{\Gamma(p+1)^{\frac{1}{p}}}\,\|X - Y\|_p \leq \frac{2\,\Gamma(q+1)^{\frac{1}{q}}}{\Gamma(p+1)^{\frac{1}{p}}}\,\|X - \mathbb{E}[X]\|_p$. (89)

3.3. Proof of Theorem 4

We say that a random vector $X \sim f_X$ is isotropic if X is symmetric and for all unit vectors θ, one has

$\mathbb{E}\left[\langle X, \theta\rangle^2\right] = m_X^2$, (90)

for some constant mX>0. Equivalently, X is isotropic if its covariance matrix KX is a multiple of the identity matrix In,

$K_X = m_X^2\,I_n$, (91)

for some constant mX>0. The constant

$L_X \triangleq f_X(0)^{\frac{1}{n}}\,m_X$ (92)

is called the isotropic constant of X.

It is well known that $L_X$ is bounded from below by a positive constant independent of the dimension [30]. A long-standing conjecture in convex geometry, the hyperplane conjecture, asks whether the isotropic constant of an isotropic log-concave random vector is also bounded from above by a universal constant (independent of the dimension). This conjecture holds under additional assumptions, but, in full generality, $L_X$ is known to be bounded only by a constant that depends on the dimension. For further details, we refer the reader to [31]. We will use the following upper bounds on $L_X$ (see [32] for the best dependence on the dimension to date).

Lemma 2.

Let X be an isotropic log-concave random vector in $\mathbb{R}^n$, with $n \geq 2$. Then, $L_X^2 \leq \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$. If, in addition, X is unconditional, then $L_X^2 \leq \frac{e^2}{2}$.

If X is uniformly distributed on a convex set, these bounds hold without factor e2.

Even though the bounds in Lemma 2 are well known, we could not find a reference in the literature. We thus include a short proof for completeness.

Proof. 

It was shown by Ball [30] (Lemma 8) that if X is uniformly distributed on a convex set, then $L_X^2 \leq \frac{n^2}{4\sqrt{2}\,(n+2)}$. If X is uniformly distributed on a convex set and is unconditional, then it is known that $L_X^2 \leq \frac{1}{2}$ (see e.g., [33] (Proposition 2.1)). Now, one can pass from uniform distributions on convex sets to log-concave distributions at the expense of an extra factor $e^2$, as shown by Ball [30] (Theorem 7). ☐

We are now ready to prove Theorem 4.

Proof of Theorem 4.

Let $\tilde X \sim f_{\tilde X}$ be an isotropic log-concave random vector. Notice that $f_{\tilde X}(0)^{\frac{2}{n}}\,|K_{\tilde X}|^{\frac{1}{n}} = L_{\tilde X}^2$; hence, using Lemma 2, we have

$h(\tilde X) = \int f_{\tilde X}(x)\log\frac{1}{f_{\tilde X}(x)}\,dx \geq \log\frac{1}{f_{\tilde X}(0)} \geq \frac{n}{2}\log\frac{|K_{\tilde X}|^{\frac{1}{n}}}{c(n)}$, (93)

with $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$. If, in addition, $\tilde X$ is unconditional, then again by Lemma 2, $c(n) = \frac{e^2}{2}$.

Now consider an arbitrary symmetric log-concave random vector X. One can apply a change of variables to put X in isotropic position. Indeed, by defining $\tilde X = K_X^{-\frac{1}{2}}X$, one has for every unit vector θ,

$\mathbb{E}\left[\langle \tilde X, \theta\rangle^2\right] = \mathbb{E}\left[\langle X, K_X^{-\frac{1}{2}}\theta\rangle^2\right] = \langle K_X K_X^{-\frac{1}{2}}\theta,\ K_X^{-\frac{1}{2}}\theta\rangle = 1$. (94)

It follows that $\tilde X$ is an isotropic log-concave random vector with $m_{\tilde X} = 1$. Therefore, we can use (93) to obtain

$h(\tilde X) \geq \frac{n}{2}\log\frac{1}{c(n)}$, (95)

where $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ when X is unconditional. We deduce that

$h(X) = h(\tilde X) + \frac{n}{2}\log|K_X|^{\frac{1}{n}} \geq \frac{n}{2}\log\frac{|K_X|^{\frac{1}{n}}}{c(n)}$. (96)

3.4. Proof of Theorem 5

First, we need the following lemma.

Lemma 3.

Let $X \sim f_X$ be an isotropic unconditional log-concave random vector. Then, for every $i \in \{1, \dots, n\}$,

$f_{X_i}(0) \geq \frac{f_X(0)^{\frac{1}{n}}}{c}$, (97)

where $f_{X_i}$ is the marginal distribution of the i-th component of X, i.e., for every $t \in \mathbb{R}$,

$f_{X_i}(t) = \int_{\mathbb{R}^{n-1}} f_X(x_1, \dots, x_{i-1}, t, x_{i+1}, \dots, x_n)\,dx_1\cdots dx_{i-1}\,dx_{i+1}\cdots dx_n$. (98)

Here, $c = \sqrt{6}\,e$. If, in addition, $f_X$ is invariant under permutations of the coordinates, then $c = e$ [33] (Proposition 3.2).

Proof of Theorem 5.

Let $i \in \{1, \dots, n\}$. We have

$\|X_i\|_p^p = \int_{\mathbb{R}} |t|^p\,f_{X_i}(t)\,dt$. (99)

Since fX is unconditional and log-concave, it follows that fXi is symmetric and log-concave, so inequality (74) applies to fXi:

$\int_{\mathbb{R}} |t|^p\,f_{X_i}(t)\,dt \geq \frac{\Gamma(p+1)}{2^p\,f_{X_i}(0)^p}$. (100)

We apply Lemma 3 to pass from fXi to fX in the right side of (100):

$f_X(0)^{\frac{1}{n}}\,\|X_i\|_p \leq \frac{\Gamma(p+1)^{\frac{1}{p}}\,c}{2}$. (101)

Thus,

$h(X) \geq \log\frac{1}{f_X(0)} \geq n\log\frac{2\,\|X_i\|_p}{\Gamma(p+1)^{\frac{1}{p}}\,c}$. (102)

4. Extension to γ-Concave Random Variables

In this section, we prove Theorem 6, which extends Theorem 1 to the class of γ-concave random variables, with γ<0. First, we need the following key lemma, which extends Lemma 1.

Lemma 4.

Let $f \colon [0, +\infty) \to [0, +\infty)$ be a γ-concave function, with $\gamma < 0$. Then, the function

$F(r) = \frac{\Gamma\!\left(-\frac{1}{\gamma}\right)}{\Gamma\!\left(-\frac{1}{\gamma} - (r+1)\right)}\cdot\frac{1}{\Gamma(r+1)}\int_0^{+\infty} t^r f(t)\,dt$ (103)

is log-concave on $\left[-1, -1 - \frac{1}{\gamma}\right)$ [34] (Theorem 7).

One can recover Lemma 1 from Lemma 4 by letting γ tend to 0 from below.

Proof of Theorem 6.

Let us first consider the case $p \in (-1, 0)$. Let us denote by $f_X$ the probability density function of X. By applying Lemma 4 to the values $-1, p, 0$, we have

$F(p) = F\!\left((-1)\cdot(-p) + 0\cdot(p+1)\right) \geq F(-1)^{-p}\,F(0)^{p+1}$.

From the proof of Theorem 1, we deduce that $F(-1) = f_X(0)$. In addition, notice that, for $\gamma \in (-1, 0)$,

$F(0) = \frac{1}{2}\,\frac{\Gamma\!\left(-\frac{1}{\gamma}\right)}{\Gamma\!\left(-\frac{1}{\gamma} - 1\right)}$. (104)

Hence,

$f_X(0)^{-p} \leq \frac{2^p\,\|X\|_p^p}{\Gamma(p+1)}\cdot\frac{\Gamma\!\left(-1-\frac{1}{\gamma}\right)^{p+1}}{\Gamma\!\left(-\frac{1}{\gamma}\right)^{p}\,\Gamma\!\left(-\frac{1}{\gamma}-(p+1)\right)}$, (105)

and the bound on differential entropy follows:

$h(X) \geq \log\frac{1}{f_X(0)} \geq \frac{1}{p}\log\left(\frac{2^p\,\|X\|_p^p}{\Gamma(p+1)}\cdot\frac{\Gamma\!\left(-1-\frac{1}{\gamma}\right)^{p+1}}{\Gamma\!\left(-\frac{1}{\gamma}\right)^{p}\,\Gamma\!\left(-\frac{1}{\gamma}-(p+1)\right)}\right)$. (106)

For the case $p \in \left(0, -1 - \frac{1}{\gamma}\right)$, the bound is obtained similarly by applying Lemma 4 to the values $-1, 0, p$. ☐

5. Reverse Entropy Power Inequality with Explicit Constant

5.1. Proof of Theorem 7

Proof. 

Using the upper bound on the differential entropy (1), we have

$h(X+Y) \leq \frac{1}{2}\log\left(2\pi e\operatorname{Var}[X+Y]\right) = \frac{1}{2}\log\left(2\pi e\left(\operatorname{Var}[X] + \operatorname{Var}[Y]\right)\right)$, (107)

the last equality being valid since X and Y are uncorrelated. Hence,

$N(X+Y) \leq 2\pi e\left(\operatorname{Var}[X] + \operatorname{Var}[Y]\right)$. (108)

Using inequality (32), we conclude that

$N(X+Y) \leq \frac{\pi e}{2}\left(N(X) + N(Y)\right)$. (109)
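As a sanity check (not part of the proof), Theorem 7 can be verified in closed form for two independent uniform random variables on [0,1]: they are log-concave and uncorrelated, $h(X) = h(Y) = 0$, and X + Y has the triangular density on [0,2] with $h(X+Y) = \frac{1}{2}$ nat.

```python
import numpy as np

h_X, h_sum = 0.0, 0.5            # nats: Uniform[0,1] and the triangular density on [0,2]
N = lambda h: np.exp(2 * h)      # entropy power (8) with n = 1

lhs = N(h_sum)                               # N(X+Y) = e
rhs = np.pi * np.e / 2 * (N(h_X) + N(h_X))   # (pi e / 2)(N(X) + N(Y)) = pi e
print(f"{lhs:.3f} <= {rhs:.3f}: {lhs <= rhs}")
```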

5.2. Proof of Theorem 8

Proof. 

Since X and Y are uncorrelated and $K_X$ and $K_Y$ are proportional,

$|K_{X+Y}|^{\frac{1}{n}} = |K_X + K_Y|^{\frac{1}{n}} = |K_X|^{\frac{1}{n}} + |K_Y|^{\frac{1}{n}}$. (110)

Using (110) and the upper bound on the differential entropy (38), we obtain

$h(X+Y) \leq \frac{n}{2}\log\left(2\pi e\,|K_{X+Y}|^{\frac{1}{n}}\right) = \frac{n}{2}\log\left(2\pi e\left(|K_X|^{\frac{1}{n}} + |K_Y|^{\frac{1}{n}}\right)\right)$. (111)

Using Theorem 4, we conclude that

$N(X+Y) \leq 2\pi e\left(|K_X|^{\frac{1}{n}} + |K_Y|^{\frac{1}{n}}\right) \leq 2\pi e\,c(n)\left(N(X) + N(Y)\right)$, (112)

where $c(n) = \frac{e^2 n^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ if X and Y are unconditional. ☐

6. New Bounds on the Rate-Distortion Function

6.1. Proof of Theorem 9

Proof. 

Under mean-square error distortion ($r = 2$), the result is implicit in [21] (Chapter 10). Denote for brevity $\sigma \triangleq \|X\|_2$.

(1) Let $r \in [1,2]$. Assume that $\sigma > d^{\frac{1}{r}}$. We take

$\hat X = \left(1 - \frac{d^{\frac{2}{r}}}{\sigma^2}\right)(X+Z)$, (113)

where $Z \sim \mathcal{N}\!\left(0, \frac{\sigma^2 d^{\frac{2}{r}}}{\sigma^2 - d^{\frac{2}{r}}}\right)$ is independent of X. This choice of $\hat X$ is admissible since

$\mathbb{E}\left[|X-\hat X|^r\right] \leq \|X - \hat X\|_2^r = \left(\left(\frac{d^{\frac{2}{r}}}{\sigma^2}\right)^2\sigma^2 + \left(1 - \frac{d^{\frac{2}{r}}}{\sigma^2}\right)^2\|Z\|_2^2\right)^{\frac{r}{2}} = d$, (114)

where we used $r \leq 2$ and the left-hand side of inequality (26). Upper-bounding the rate-distortion function by the mutual information between X and $\hat X$, we obtain

$R_X(d) \leq I(X;\hat X) = h(X+Z) - h(Z)$, (115)

where we used homogeneity of differential entropy for the last equality. Invoking the upper bound on the differential entropy (1), we have

$h(X+Z) - h(Z) \leq \frac{1}{2}\log\left(2\pi e\left(\sigma^2 + \frac{\sigma^2 d^{\frac{2}{r}}}{\sigma^2 - d^{\frac{2}{r}}}\right)\right) - h(Z) = \underline{R}_X(d) + D(X\|G_X) + \log\frac{\alpha_r}{\sqrt{2\pi e}}$, (116)

and (50) follows.

If $\|X\|_2 \leq d^{\frac{1}{r}}$, then $\|X\|_r \leq \|X\|_2 \leq d^{\frac{1}{r}}$, and setting $\hat X \equiv 0$ leads to $R_X(d) = 0$.

(2) Let $r > 2$. The argument presented here works for every $r \geq 1$. However, for $r \in [1,2]$, the argument in part (1) provides a tighter bound. Assume that $\sigma \geq d^{\frac{1}{r}}$. We take

$\hat X = X + Z$, (117)

where Z is independent of X and realizes the maximum differential entropy under the r-th moment constraint $\|Z\|_r^r = d$. The probability density function of Z is given by

$f_Z(x) = \frac{r^{1-\frac{1}{r}}}{2\,\Gamma\!\left(\frac{1}{r}\right)\,d^{\frac{1}{r}}}\,e^{-\frac{|x|^r}{rd}}, \qquad x \in \mathbb{R}$. (118)

Notice that

$\|Z\|_2^2 = d^{\frac{2}{r}}\,\frac{r^{\frac{2}{r}}\,\Gamma\!\left(\frac{3}{r}\right)}{\Gamma\!\left(\frac{1}{r}\right)}$. (119)

We have

$h(X+Z) - h(Z) \leq \frac{1}{2}\log\left(2\pi e\left(\sigma^2 + \|Z\|_2^2\right)\right) - \log\left(\alpha_r d^{\frac{1}{r}}\right)$ (120)
$\leq \underline{R}_X(d) + \log\left(\sqrt{2\pi e}\,\beta_r\,\sigma\right) - h(X)$, (121)

where βr is defined in (49). Hence,

$R_X(d) - \underline{R}_X(d) \leq D(X\|G_X) + \log\beta_r$. (122)

If $\|X\|_r^r \leq d$, then setting $\hat X \equiv 0$ leads to $R_X(d) = 0$. Finally, if $\|X\|_r^r > d$ and $\sigma < d^{\frac{1}{r}}$, then, from (120), we obtain

$R_X(d) \leq \log\left(\sqrt{2\pi e}\,\beta_r\,d^{\frac{1}{r}}\right) - \log\left(\alpha_r d^{\frac{1}{r}}\right) = \log\frac{\sqrt{2\pi e}\,\beta_r}{\alpha_r}$. (123)

6.2. Proof of Theorem 10

Proof. 

Denote for brevity $\sigma \triangleq \|X\|_2$, and recall that X is a symmetric log-concave random variable.

Assume that $\sigma \geq d^{\frac{1}{r}}$. We take

$\hat X = \left(1 - \frac{\delta}{\sigma^2}\right)(X+Z), \qquad \delta \triangleq \frac{2\,d^{\frac{2}{r}}}{\Gamma(r+1)^{\frac{2}{r}}}$, (124)

where $Z \sim \mathcal{N}\!\left(0, \frac{\sigma^2\delta}{\sigma^2 - \delta}\right)$ is independent of X. This choice of $\hat X$ is admissible since

$\mathbb{E}\left[|X-\hat X|^r\right] \leq \|X - \hat X\|_2^r\,\frac{\Gamma(r+1)}{2^{\frac{r}{2}}} = \delta^{\frac{r}{2}}\,\frac{\Gamma(r+1)}{2^{\frac{r}{2}}} = d$, (125)

where we used $r > 2$ and Theorem 2. Using the upper bound on the differential entropy (1), we have

$h(X+Z) - h(Z) \leq \frac{1}{2}\log\left(2\pi e\left(\sigma^2 + \frac{\sigma^2\delta}{\sigma^2 - \delta}\right)\right) - h(Z) = \frac{1}{2}\log\frac{\sigma^2}{\delta}$. (126)

Hence,

$R_X(d) - \underline{R}_X(d) \leq D(X\|G_X) + \log\frac{\alpha_r\,\Gamma(r+1)^{\frac{1}{r}}}{2\sqrt{\pi e}}$. (127)

If $\sigma^2 \leq \delta$, then, by Theorem 2, $\|X\|_r^r \leq d$, hence $R_X(d) = 0$. Finally, if $\|X\|_r^r > d$ and $\sigma^2 \in \left(\delta, d^{\frac{2}{r}}\right)$, then, from (126), we obtain

$R_X(d) \leq \frac{1}{2}\log\frac{\sigma^2}{\delta} \leq \frac{1}{2}\log\frac{\Gamma(r+1)^{\frac{2}{r}}}{2}$. (128)

Remark 7.

1) Let us explain the strategy in the proof of Theorems 9 and 10. By definition, $R_X(d) \leq I(X;\hat X)$ for any $\hat X$ satisfying the constraint. In our study, we chose $\hat X$ of the form $\lambda(X+Z)$, with $\lambda \in [0,1]$, where Z is independent of X. To find the best bounds possible with this choice of $\hat X$, we need to minimize $\|X - \hat X\|_r^r$ over λ. Notice that if $\hat X = \lambda(X+Z)$ and Z is symmetric, then $\|X - \hat X\|_r^r = \|(1-\lambda)X + \lambda Z\|_r^r$.

To estimate $\|(1-\lambda)X + \lambda Z\|_r^r$ in terms of $\|X\|_r$ and $\|Z\|_r$, one can use the triangle inequality and the convexity of $|\cdot|^r$ to get the bound

$\|(1-\lambda)X + \lambda Z\|_r^r \leq 2^{r-1}\left((1-\lambda)^r\|X\|_r^r + \lambda^r\|Z\|_r^r\right)$, (129)

or one can apply Jensen’s inequality directly to get the bound

$\|(1-\lambda)X + \lambda Z\|_r^r \leq (1-\lambda)\|X\|_r^r + \lambda\|Z\|_r^r$. (130)

A simple study shows that (130) provides a tighter bound than (129). This justifies choosing $\hat X$ as in (117) in the proof of (51).

To justify the choice of $\hat X$ in (113) (and also in (124)), which leads to the tightening of (51) for $r \in [1,2]$ in (50) (and also in (52)), we bound the r-th norm by the second norm, and we note that, by the independence of X and Z,

$\|(1-\lambda)X + \lambda Z\|_2^2 \leq (1-\lambda)^2\|X\|_2^2 + \lambda^2\|Z\|_2^2$. (131)

A simple study shows that (131) provides a tighter bound than (130).

2) Using Corollary 2, if $r = 2$, one may rewrite our bound in terms of the rate-distortion function of a Gaussian source as follows:

$R_X(d) \geq R_{G_X}(d) - \log\sqrt{\pi e} - \Delta_p$, (132)

where Δp is defined in (28), and where

$R_{G_X}(d) = \frac{1}{2}\log\frac{\sigma^2}{d}$ (133)

is the rate-distortion function of a Gaussian source with the same variance $\sigma^2$ as X. It is well known that for an arbitrary source and mean-square distortion (see e.g., [21] (Chapter 10))

$R_X(d) \leq R_{G_X}(d)$. (134)

By taking $p = 2$ in (132), we obtain

$0 \leq R_{G_X}(d) - R_X(d) \leq \frac{1}{2}\log\frac{\pi e}{2}$. (135)

The bounds in (134) and (135) tell us that the rate-distortion function of any log-concave source is well approximated by that of a Gaussian source. In particular, approximating $R_X(d)$ of an arbitrary log-concave source by

$\hat R_X(d) = \frac{1}{2}\log\frac{\sigma^2}{d} - \frac{1}{4}\log\frac{\pi e}{2}$, (136)

we guarantee an approximation error $|R_X(d) - \hat R_X(d)|$ of at most $\frac{1}{4}\log\frac{\pi e}{2} \approx \frac{1}{2}$ bit.
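A minimal helper illustrating the approximation (136) (the function name and the base-2 convention are ours):

```python
import numpy as np

def rate_distortion_approx(sigma2, d):
    """Approximation (136) to R_X(d), in bits, for a log-concave source with variance sigma2;
    by (134)-(135) it is accurate to within (1/4) log2(pi e / 2) ~ 0.52 bits for 0 < d < sigma2."""
    return 0.5 * np.log2(sigma2 / d) - 0.25 * np.log2(np.pi * np.e / 2)

print(rate_distortion_approx(1.0, 0.01))   # ~ 2.80 bits
```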

6.3. Proof of Theorem 11

Proof. 

If $|K_X|^{\frac{1}{n}} > d$, then we choose $\hat X = \left(1 - \frac{d}{|K_X|^{\frac{1}{n}}}\right)(X+Z)$, where $Z \sim \mathcal{N}\!\left(0, \frac{d}{|K_X|^{\frac{1}{n}} - d}\,K_X\right)$ is independent of X. This choice is admissible by the independence of X and Z and the fact that $K_X$ and $K_Z$ are proportional. Upper-bounding the rate-distortion function by the mutual information between X and $\hat X$, we have

$R_X^{\mathrm{cov}}(d) \leq h(X+Z) - h(Z) \leq \frac{n}{2}\log\frac{|K_X|^{\frac{1}{n}}}{d}$. (137)

Since the Shannon lower bound for determinant constraint coincides with that for the mean-square error constraint,

$R_X^{\mathrm{cov}}(d) \geq \underline{R}_X(d) = h(X) - \frac{n}{2}\log(2\pi e\,d)$. (138)

On the other hand, using (137), we have

$R_X^{\mathrm{cov}}(d) - \underline{R}_X(d) \leq D(X\|G_X) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\right)$, (139)

where the last inequality in (139) follows from Corollary 3.

If $|K_X|^{\frac{1}{n}} \leq d$, then we put $\hat X \equiv 0$, which leads to $R_X^{\mathrm{cov}}(d) = 0$. ☐

7. New Bounds on the Capacity of Memoryless Additive Channels

Recall that the capacity of such a channel is

$C_Z(P) = \sup_{X\colon\ \frac{1}{n}\|X\|_2^2 \leq P} I(X; X+Z) = \sup_{X\colon\ \frac{1}{n}\|X\|_2^2 \leq P}\left(h(X+Z) - h(Z)\right)$. (140)

We compare the capacity CZ of a channel with log-concave additive noise with the capacity of the Gaussian channel.

7.1. Proof of Theorem 12

Proof. 

The lower bound is well known, as mentioned in (19). To obtain the upper bound, we first use the upper bound on the differential entropy (1) to conclude that

$h(X+Z) \leq \frac{1}{2}\log\left(2\pi e\left(P + \operatorname{Var}[Z]\right)\right)$, (141)

for every random variable X such that $\|X\|_2^2 \leq P$. By combining (140), (141) and (32), we deduce that

$C_Z(P) \leq \frac{1}{2}\log\left(2\pi e\left(P + \operatorname{Var}[Z]\right)\right) - \frac{1}{2}\log\left(4\operatorname{Var}[Z]\right) = \frac{1}{2}\log\left(\frac{\pi e}{2}\left(1 + \frac{P}{\operatorname{Var}[Z]}\right)\right)$, (142)

which is the desired result. ☐

7.2. Proof of Theorem 13

Proof. 

The lower bound is well known, as mentioned in (19). To obtain the upper bound, we write

$h(X+Z) - h(Z) \leq \frac{n}{2}\log\left(2\pi e\,|K_{X+Z}|^{\frac{1}{n}}\right) - h(Z) \leq \frac{n}{2}\log\left(2\pi e\,c(n)\left(\frac{\frac{1}{n}\|Z\|_2^2}{|K_Z|^{\frac{1}{n}}} + \frac{P}{|K_Z|^{\frac{1}{n}}}\right)\right)$, (143)

where $c(n) = \frac{n^2 e^2}{4\sqrt{2}\,(n+2)}$ in general, and $c(n) = \frac{e^2}{2}$ if Z is unconditional. The first inequality in (143) is obtained from the upper bound on the differential entropy (38). The last inequality in (143) is obtained by applying the arithmetic-geometric mean inequality and Theorem 4. ☐

8. Conclusions

Several recent results show that the entropy of log-concave probability densities has nice properties. For example, reverse, strengthened and stable versions of the entropy power inequality were recently obtained for log-concave random vectors (see e.g., [3,11,35,36,37,38]). This line of developments suggests that, in some sense, log-concave random vectors behave like Gaussians.

Our work follows this line of results, by establishing a new lower bound on differential entropy for log-concave random variables in (4), for log-concave random vectors with possibly dependent coordinates in (37), and for γ-concave random variables in (43). We made use of the new lower bounds in several applications. First, we derived reverse entropy power inequalities with explicit constants for uncorrelated, possibly dependent log-concave random vectors in (12) and (47). We also showed a universal bound on the difference between the rate-distortion function and the Shannon lower bound for log-concave random variables in Figure 1a and Figure 1b, and for log-concave random vectors in (59). Finally, we established an upper bound on the capacity of memoryless additive noise channels when the noise is a log-concave random vector in (20) and (66).

Under the Gaussian assumption, information-theoretic limits in many communication scenarios admit simple closed-form expressions. Our work demonstrates that, at least in three such scenarios (source coding, channel coding and joint source-channel coding), the information-theoretic limits admit a closed-form approximation with at most 1 bit of error if the Gaussian assumption is relaxed to the log-concave one. We hope that the approach will be useful in gaining insights into those communication and data processing scenarios in which the Gaussianity of the observed distributions is violated but the log-concavity is preserved.

Acknowledgments

This work is supported in part by the National Science Foundation (NSF) under Grant CCF-1566567, and by the Walter S. Baer and Jeri Weiss CMI Postdoctoral Fellowship. The authors would also like to thank an anonymous referee for pointing out that the bound (23) and, up to a factor 2, the bound (25) also apply to the non-symmetric case if $p \geq 1$.

Author Contributions

Arnaud Marsiglietti and Victoria Kostina contributed equally to the research and writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zamir R., Feder M. On Universal Quantization by Randomized Uniform/Lattice Quantizers. IEEE Trans. Inf. Theory. 1992;32:428–436. doi:10.1109/18.119699.
2. Prékopa A. On logarithmic concave measures and functions. Acta Sci. Math. 1973;34:335–343.
3. Bobkov S., Madiman M. The entropy per coordinate of a random vector is highly constrained under convexity conditions. IEEE Trans. Inf. Theory. 2011;57:4940–4954. doi:10.1109/TIT.2011.2158475.
4. Bobkov S., Madiman M. Entropy and the hyperplane conjecture in convex geometry. Proceedings of the 2010 IEEE International Symposium on Information Theory (ISIT); Austin, TX, USA, 13–18 June 2010; pp. 1438–1442.
5. Shannon C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948;27:379–423, 623–656. doi:10.1002/j.1538-7305.1948.tb01338.x.
6. Stam A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control. 1959;2:101–112. doi:10.1016/S0019-9958(59)90348-1.
7. Bobkov S., Madiman M. Reverse Brunn-Minkowski and reverse entropy power inequalities for convex measures. J. Funct. Anal. 2012;262:3309–3339. doi:10.1016/j.jfa.2012.01.011.
8. Cover T.M., Zhang Z. On the maximum entropy of the sum of two dependent random variables. IEEE Trans. Inf. Theory. 1994;40:1244–1246. doi:10.1109/18.335945.
9. Madiman M., Kontoyiannis I. Entropy bounds on abelian groups and the Ruzsa divergence. IEEE Trans. Inf. Theory. 2018;64:77–92. doi:10.1109/TIT.2016.2620470.
10. Bobkov S., Madiman M. On the problem of reversibility of the entropy power inequality. In: Limit Theorems in Probability, Statistics and Number Theory. Springer Proceedings in Mathematics and Statistics, Volume 42. Springer; Berlin/Heidelberg, Germany: 2013; pp. 61–74.
11. Ball K., Nayar P., Tkocz T. A reverse entropy power inequality for log-concave random vectors. Studia Math. 2016;235:17–30. doi:10.4064/sm8418-6-2016.
12. Courtade T.A. Links between the Logarithmic Sobolev Inequality and the convolution inequalities for Entropy and Fisher Information. arXiv 2016, arXiv:1608.05431.
13. Madiman M., Melbourne J., Xu P. Forward and Reverse Entropy Power Inequalities in Convex Geometry. In: Carlen E., Madiman M., Werner E., editors. Convexity and Concentration. The IMA Volumes in Mathematics and Its Applications, Volume 161. Springer; New York, NY, USA: 2017; pp. 427–485.
14. Shannon C.E. Coding theorems for a discrete source with a fidelity criterion. IRE Int. Conv. Rec. 1959;7:142–163. Reprinted with changes in Information and Decision Processes; Machol, R.E., Ed.; McGraw-Hill: New York, NY, USA, 1960; pp. 93–126.
15. Linkov Y.N. Evaluation of ϵ-entropy of random variables for small ϵ. Probl. Inf. Transm. 1965;1:18–26.
16. Linder T., Zamir R. On the asymptotic tightness of the Shannon lower bound. IEEE Trans. Inf. Theory. 1994;40:2026–2031. doi:10.1109/18.340474.
17. Koch T. The Shannon Lower Bound is Asymptotically Tight. IEEE Trans. Inf. Theory. 2016;62:6155–6161. doi:10.1109/TIT.2016.2604254.
18. Kostina V. Data compression with low distortion and finite blocklength. IEEE Trans. Inf. Theory. 2017;63:4268–4285. doi:10.1109/TIT.2017.2676811.
19. Gish H., Pierce J. Asymptotically efficient quantizing. IEEE Trans. Inf. Theory. 1968;14:676–683. doi:10.1109/TIT.1968.1054193.
20. Ziv J. On universal quantization. IEEE Trans. Inf. Theory. 1985;31:344–347. doi:10.1109/TIT.1985.1057034.
21. Cover T.M., Thomas J.A. Elements of Information Theory. John Wiley & Sons; Hoboken, NJ, USA: 2012.
22. Ihara S. On the capacity of channels with additive non-Gaussian noise. Inf. Control. 1978;37:34–39. doi:10.1016/S0019-9958(78)90413-8.
23. Diggavi S.N., Cover T.M. The worst additive noise under a covariance constraint. IEEE Trans. Inf. Theory. 2001;47:3072–3081. doi:10.1109/18.959289.
24. Zamir R., Erez U. A Gaussian input is not too bad. IEEE Trans. Inf. Theory. 2004;50:1340–1353. doi:10.1109/TIT.2004.828153.
25. Karlin S., Proschan F., Barlow R.E. Moment inequalities of Pólya frequency functions. Pac. J. Math. 1961;11:1023–1033. doi:10.2140/pjm.1961.11.1023.
26. Borell C. Convex measures on locally convex spaces. Ark. Mat. 1974;12:239–252. doi:10.1007/BF02384761.
27. Borell C. Convex set functions in d-space. Period. Math. Hungar. 1975;6:111–136. doi:10.1007/BF02018814.
28. Borell C. Complements of Lyapunov's inequality. Math. Ann. 1973;205:323–331. doi:10.1007/BF01362702.
29. Fradelizi M., Madiman M., Wang L. Optimal concentration of information content for log-concave densities. In: High Dimensional Probability VII, Volume 71. Birkhäuser; Cham, Germany: 2016; pp. 45–60.
30. Ball K. Logarithmically concave functions and sections of convex sets in ℝn. Studia Math. 1988;88:69–84. doi:10.4064/sm-88-1-69-84.
31. Brazitikos S., Giannopoulos A., Valettas P., Vritsiou B.H. Geometry of Isotropic Convex Bodies. Mathematical Surveys and Monographs, 196. American Mathematical Society; Providence, RI, USA: 2014.
32. Klartag B. On convex perturbations with a bounded isotropic constant. Geom. Funct. Anal. 2006;16:1274–1290. doi:10.1007/s00039-006-0588-1.
33. Bobkov S., Nazarov F. On convex bodies and log-concave probability measures with unconditional basis. In: Geometric Aspects of Functional Analysis. Springer; Berlin/Heidelberg, Germany: 2003; pp. 53–69.
34. Fradelizi M., Guédon O., Pajor A. Thin-shell concentration for convex measures. Studia Math. 2014;223:123–148. doi:10.4064/sm223-2-2.
35. Ball K., Nguyen V.H. Entropy jumps for isotropic log-concave random vectors and spectral gap. Studia Math. 2012;213:81–96. doi:10.4064/sm213-1-6.
36. Toscani G. A concavity property for the reciprocal of Fisher information and its consequences on Costa's EPI. Physica A. 2015;432:35–42. doi:10.1016/j.physa.2015.03.018.
37. Toscani G. A strengthened entropy power inequality for log-concave densities. IEEE Trans. Inf. Theory. 2015;61:6550–6559. doi:10.1109/TIT.2015.2495302.
38. Courtade T.A., Fathi M., Pananjady A. Wasserstein Stability of the Entropy Power Inequality for Log-Concave Densities. arXiv 2016, arXiv:1610.07969.
