Entropy. 2020 Sep 19;22(9):1048. doi: 10.3390/e22091048

Expected Logarithm and Negative Integer Moments of a Noncentral χ2-Distributed Random Variable

Stefan M Moser 1,2
PMCID: PMC7597108  PMID: 33286817

Abstract

Closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ2-distributed random variable are presented in the cases of both even and odd degrees of freedom. Moreover, some basic properties of these expectations are derived and tight upper and lower bounds on them are proposed.

Keywords: central χ2 distribution, chi-square distribution, expected logarithm, exponential distribution, negative integer moments, noncentral χ2 distribution, squared Rayleigh distribution, squared Rice distribution

1. Introduction

The noncentral χ2 distribution is a family of probability distributions of wide interest. It appears in situations where one or several independent Gaussian random variables (RVs) of equal variance (but potentially different means) are squared and summed together. The noncentral χ2 distribution contains as special cases, among others, the central χ2 distribution, the exponential distribution (which is equivalent to a squared Rayleigh distribution), and the squared Rice distribution.

In this paper, we present closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ2-distributed RV with even or odd degrees of freedom. Note that while the probability density function (PDF), the moment-generating function (MGF), and the moments of a noncentral χ2-distributed RV are well-known, the expected logarithm and the negative integer moments have only been derived relatively recently for even degrees of freedom [1,2,3,4,5,6], but—to the best of our knowledge—for odd degrees of freedom they have been completely unknown so far. These expectations have many interesting applications. So, for example, in the field of information theory, there is a close relationship between the expected logarithm and entropy, and thus the expected logarithm of a noncentral χ2-distributed RV plays an important role, e.g., in the description of the capacity of multiple-input, multiple-output noncoherent fading channels [1,2]. Many more examples in the field of information theory can be found in [7].

We will see that the expected logarithm and the negative integer moments can be expressed using two families of functions $g_m(\cdot)$ and $h_n(\cdot)$ that will be defined in Section 3. Not unexpectedly, $g_m(\cdot)$ and $h_n(\cdot)$ are not elementary, but contain special functions like the exponential integral function ([8], Sec. 8.21), the imaginary error function [9], or a generalized hypergeometric function ([8], Sec. 9.14). While numerically this does not pose any problem, as the required special functions are commonly implemented in many mathematical programming environments, working with them analytically can be cumbersome. We thus investigate $g_m(\cdot)$ and $h_n(\cdot)$ in more detail, present important properties, and derive tight elementary upper and lower bounds on them.

The structure of this paper is as follows. After a few comments about our notation, we will formally define the noncentral χ2 distribution in the following Section 2 and also state some fundamental properties of the expected logarithm and the negative integer moments. In Section 3 we present the two families of functions gm(·) and hn(·) that are needed for our main results in Section 4. Section 5 summarizes properties of gm(·) and hn(·), and Section 6 presents tight upper and lower bounds on them. Many proofs are deferred to the appendices.

We use upper-case letters to denote random quantities, e.g., U, and the corresponding lower-case letter for their realization, e.g., u. The expectation operator is denoted by $\mathbb{E}[\cdot]$; $\ln(\cdot)$ is the natural logarithm; $\Gamma(\cdot)$ denotes the Gamma function ([8], Sec. 8.31–8.33); and $\mathrm{i}$ is the imaginary unit, $\mathrm{i} \triangleq \sqrt{-1}$. We use $\mathbb{N}_{\mathrm{even}}$ to denote the set of all even natural numbers:

$\mathbb{N}_{\mathrm{even}} \triangleq \{2, 4, 6, 8, \ldots\}.$ (1)

Accordingly, $\mathbb{N}_{\mathrm{odd}} \triangleq \mathbb{N} \setminus \mathbb{N}_{\mathrm{even}}$ is the set of all odd natural numbers.

For a function $\xi \mapsto f(\xi)$, $f^{(\ell)}(\xi)$ denotes its $\ell$th derivative:

$f^{(\ell)}(\xi) \triangleq \frac{\mathrm{d}^{\ell} f(\xi)}{\mathrm{d}\xi^{\ell}}.$ (2)

The real Gaussian distribution of mean $\mu \in \mathbb{R}$ and variance $\sigma^2 > 0$ is denoted by $\mathcal{N}_{\mathbb{R}}\!\left(\mu, \sigma^2\right)$, while $\mathcal{N}_{\mathbb{C}}\!\left(\eta, \sigma^2\right)$ describes the complex Gaussian distribution of mean $\eta \in \mathbb{C}$ and variance $\sigma^2 > 0$. Thus, if $X_1$ and $X_2$ are independent standard Gaussian RVs, $X_1, X_2 \sim \mathcal{N}_{\mathbb{R}}(0,1)$, $X_1 \perp X_2$, then

$Z \triangleq \sqrt{\tfrac{1}{2}}\, X_1 + \mathrm{i}\, \sqrt{\tfrac{1}{2}}\, X_2$ (3)

is circularly symmetric complex Gaussian, $Z \sim \mathcal{N}_{\mathbb{C}}(0,1)$.

2. The Noncentral χ2 Distribution

Definition 1.

For some $n \in \mathbb{N}$, let $\{X_k\}_{k=1}^{n}$ be independent and identically distributed (IID) real, standard Gaussian RVs, $X_k \sim \mathcal{N}_{\mathbb{R}}(0,1)$, let $\{\mu_k\}_{k=1}^{n} \subset \mathbb{R}$ be real constants, and define

$\tau \triangleq \sum_{k=1}^{n} \mu_k^2.$ (4)

Then the nonnegative RV

$U \triangleq \sum_{k=1}^{n} \left(X_k + \mu_k\right)^2$ (5)

is said to have a noncentral χ2 distribution with n degrees of freedom and noncentrality parameter τ. Note that the distribution of U depends on the constants {μk} only via the sum of their squares (4). The corresponding PDF is ([10], Ch. 29)

$p_U(u) = \frac{1}{2}\left(\frac{u}{\tau}\right)^{\frac{n-2}{4}} e^{-\frac{\tau+u}{2}}\, I_{\frac{n}{2}-1}\!\left(\sqrt{\tau u}\right), \quad u \ge 0,$ (6)

where $I_{\nu}(\cdot)$ denotes the modified Bessel function of the first kind of order $\nu \in \mathbb{R}$ ([8], Eq. 8.445):

$I_{\nu}(x) \triangleq \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma(\nu+k+1)} \left(\frac{x}{2}\right)^{\nu+2k}, \quad x \ge 0.$ (7)

For τ=0 we obtain the central χ2 distribution, for which the PDF (6) simplifies to

$p_U(u) = \frac{1}{2^{\frac{n}{2}}\,\Gamma\!\left(\frac{n}{2}\right)}\, u^{\frac{n}{2}-1}\, e^{-\frac{u}{2}}, \quad u \ge 0.$ (8)

Note that in this work any RV U will always be defined as given in (5). Sometimes we will write U[n,τ] to clarify the degrees of freedom n and the noncentrality parameter τ of U.

If the number of degrees of freedom n is even (i.e., if n=2m for some natural number m), there exists a second, slightly different definition of the noncentral χ2 distribution that is based on complex Gaussian random variables.

Definition 2.

For some $m \in \mathbb{N}$, let $\{Z_j\}_{j=1}^{m}$ be IID $\mathcal{N}_{\mathbb{C}}(0,1)$, let $\{\eta_j\}_{j=1}^{m} \subset \mathbb{C}$ be complex constants, and define

$\lambda \triangleq \sum_{j=1}^{m} |\eta_j|^2.$ (9)

Then the nonnegative RV

$V \triangleq \sum_{j=1}^{m} \left|Z_j + \eta_j\right|^2$ (10)

is said to have a noncentral χ2 distribution with 2m degrees of freedom and noncentrality parameter λ. It has a PDF

$p_V(v) = \left(\frac{v}{\lambda}\right)^{\frac{m-1}{2}} e^{-v-\lambda}\, I_{m-1}\!\left(2\sqrt{\lambda v}\right), \quad v \ge 0,$ (11)

which in the central case of λ=0 simplifies to

$p_V(v) = \frac{1}{\Gamma(m)}\, v^{m-1}\, e^{-v}, \quad v \ge 0.$ (12)

Note that in this work any RV V will always be defined as given in (10). Sometimes we will write V[m,λ] to clarify the degrees of freedom 2m and the noncentrality parameter λ of V.

Lemma 1.

Let $n \in \mathbb{N}_{\mathrm{even}}$ be an even natural number and $\tau \ge 0$ a nonnegative constant. Then

$U_{[n,\tau]} \overset{\mathscr{L}}{=} 2\, V_{[n/2,\,\tau/2]},$ (13)

where "$\overset{\mathscr{L}}{=}$" denotes equality in probability law.

Proof. 

Let $\{X_k\}_{k=1}^{n}$ be IID $\mathcal{N}_{\mathbb{R}}(0,1)$ and $\{\mu_k\}_{k=1}^{n} \subset \mathbb{R}$ as given in Definition 1. Define $m \triangleq n/2 \in \mathbb{N}$ and, for all $j \in \{1, \ldots, m\}$,

$Z_j \triangleq \sqrt{\tfrac{1}{2}}\, X_{2j-1} + \mathrm{i}\, \sqrt{\tfrac{1}{2}}\, X_{2j},$ (14)
$\eta_j \triangleq \sqrt{\tfrac{1}{2}}\, \mu_{2j-1} + \mathrm{i}\, \sqrt{\tfrac{1}{2}}\, \mu_{2j}.$ (15)

Then

$\lambda \triangleq \sum_{j=1}^{m} |\eta_j|^2$ (16)
$= \sum_{j=1}^{m} \left( \tfrac{1}{2}\mu_{2j-1}^2 + \tfrac{1}{2}\mu_{2j}^2 \right)$ (17)
$= \tfrac{1}{2} \sum_{k=1}^{n} \mu_k^2 = \tfrac{1}{2}\,\tau$ (18)

and

$V_{[m,\lambda]} = \sum_{j=1}^{m} \left|Z_j + \eta_j\right|^2$ (19)
$= \sum_{j=1}^{m} \left| \sqrt{\tfrac{1}{2}}X_{2j-1} + \mathrm{i}\sqrt{\tfrac{1}{2}}X_{2j} + \sqrt{\tfrac{1}{2}}\mu_{2j-1} + \mathrm{i}\sqrt{\tfrac{1}{2}}\mu_{2j} \right|^2$ (20)
$= \tfrac{1}{2} \sum_{j=1}^{m} \left( \left(X_{2j-1}+\mu_{2j-1}\right)^2 + \left(X_{2j}+\mu_{2j}\right)^2 \right)$ (21)
$= \tfrac{1}{2} \sum_{k=1}^{n} \left(X_k+\mu_k\right)^2$ (22)
$= \tfrac{1}{2}\, U_{[n,\tau]}.$ (23)

 □
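As a quick numerical illustration (not part of the paper), Lemma 1 can be checked by Monte Carlo simulation: samples of U[n,τ] and of 2V[n/2,τ/2] should show the same statistics, e.g., the same mean n + τ from (29). The parameter choices, sample size, and tolerance below are arbitrary.

```python
import math, random

random.seed(1)
n, tau = 4, 3.0            # even degrees of freedom and noncentrality
m, lam = n // 2, tau / 2   # parameters of V according to Lemma 1
N = 200_000

# Any choice of means with sum of squares equal to tau (resp. lambda) works,
# since the distribution depends on them only via (4) (resp. (9)).
mu = [math.sqrt(tau)] + [0.0] * (n - 1)
eta = [math.sqrt(lam)] + [0.0] * (m - 1)

def sample_U():
    # U = sum_k (X_k + mu_k)^2 with X_k standard real Gaussians, eq. (5)
    return sum((random.gauss(0.0, 1.0) + mu[k]) ** 2 for k in range(n))

def sample_2V():
    # V = sum_j |Z_j + eta_j|^2, eq. (10); Z_j has IID N(0, 1/2) real and
    # imaginary parts, so that it is circularly symmetric with unit variance
    s = 0.0
    for j in range(m):
        re = random.gauss(0.0, math.sqrt(0.5)) + eta[j]
        im = random.gauss(0.0, math.sqrt(0.5))
        s += re * re + im * im
    return 2.0 * s

mean_U = sum(sample_U() for _ in range(N)) / N
mean_2V = sum(sample_2V() for _ in range(N)) / N
print(mean_U, mean_2V)     # both approximately n + tau = 7 by (29)
```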

Proposition 1

(Existence of Negative Integer Moments). For $\ell \in \mathbb{N}$, the negative $\ell$th moment of a noncentral χ2-distributed RV of $n \in \mathbb{N}$ degrees of freedom and noncentrality parameter $\tau \ge 0$,

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right],$ (24)

is finite if, and only if,

$\ell < \frac{n}{2}.$ (25)

Proof. 

See Appendix A.1. □

Proposition 2

(Monotonicity in Degrees of Freedom). The expected logarithm of a noncentral χ2-distributed RV of $n \in \mathbb{N}$ degrees of freedom and noncentrality parameter $\tau \ge 0$,

$\mathbb{E}\!\left[\ln U_{[n,\tau]}\right],$ (26)

is monotonically strictly increasing in n (for fixed τ).

Similarly, for any $\ell \in \mathbb{N}$ with $\ell \le \frac{n}{2}-1$, the negative $\ell$th moment of U[n,τ],

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right],$ (27)

is monotonically strictly decreasing in n (for fixed τ).

Proof. 

See Appendix A.2. □

Proposition 3

(Continuity in Noncentrality Parameter). For a fixed n, the expected logarithm (26) is continuous in τ for every finite $\tau \ge 0$.

Proof. 

See Appendix A.3. □

For completeness, we present here the positive integer moments of the noncentral χ2 distribution.

Proposition 4

(Positive Integer Moments). For any $\ell \in \mathbb{N}$, the positive $\ell$th moment of U[n,τ] is given recursively as

$\mathbb{E}\!\left[U_{[n,\tau]}^{\ell}\right] = 2^{\ell-1}\,(\ell-1)!\,(n+\ell\tau) + \sum_{j=1}^{\ell-1} \frac{(\ell-1)!\; 2^{j-1}}{(\ell-j)!}\,(n+j\tau)\; \mathbb{E}\!\left[U_{[n,\tau]}^{\ell-j}\right].$ (28)

Thus, the first two moments are

$\mathbb{E}\!\left[U_{[n,\tau]}\right] = n+\tau,$ (29)
$\mathbb{E}\!\left[U_{[n,\tau]}^{2}\right] = (n+\tau)^2 + 2n + 4\tau.$ (30)

The corresponding expressions for V[m,λ] follow directly from Lemma 1.

Proof. 

See, e.g., [10]. □

For the special case of the central χ2 distribution (i.e., the case when τ=λ=0), it is straightforward to compute the expected logarithm and the negative integer moments by evaluating the corresponding integrals.

Proposition 5

(Expected Logarithm and Negative Integer Moments for Central χ2 Distribution). For any $n, m \in \mathbb{N}$, we have

$\mathbb{E}\!\left[\ln U_{[n,0]}\right] = \ln(2) + \psi\!\left(\frac{n}{2}\right),$ (31)
$\mathbb{E}\!\left[\ln V_{[m,0]}\right] = \psi(m),$ (32)

where $\psi(\cdot)$ denotes the digamma function ([8], Sec. 8.36) (see also (37) and (51) below).

Moreover, for any $n, m, \ell \in \mathbb{N}$,

$\mathbb{E}\!\left[U_{[n,0]}^{-\ell}\right] = \begin{cases} \dfrac{\Gamma\!\left(\frac{n}{2}-\ell\right)}{2^{\ell}\,\Gamma\!\left(\frac{n}{2}\right)} & \text{if } n \ge 2\ell+1, \\ \infty & \text{if } n \le 2\ell, \end{cases}$ (33)
$\mathbb{E}\!\left[V_{[m,0]}^{-\ell}\right] = \begin{cases} \dfrac{\Gamma(m-\ell)}{\Gamma(m)} & \text{if } m \ge \ell+1, \\ \infty & \text{if } m \le \ell. \end{cases}$ (34)

Proof. 

These results follow directly by evaluating the corresponding integrals using the PDFs (8) and (12), respectively. See also (A2) in Appendix A.1 and (A46) in Appendix B. □
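As a numerical illustration (not part of the paper), the central-case formulas (31) and (32) can be checked by Monte Carlo: the sample mean of ln U[n,0] should match ln 2 + ψ(n/2), with ψ evaluated via (37) for integers and (51) for half-integers. Sample size and tolerance are arbitrary choices.

```python
import math, random

GAMMA = 0.5772156649015329          # Euler's constant

def psi_of_half(two_x):
    """Digamma psi(x) at x = two_x/2, using (37) for integer x and (51)
    for half-integer x."""
    if two_x % 2 == 0:
        m = two_x // 2
        return -GAMMA + sum(1.0 / j for j in range(1, m))
    k = (two_x - 1) // 2            # x = k + 1/2
    return -GAMMA - 2.0 * math.log(2.0) + sum(1.0 / (j - 0.5)
                                              for j in range(1, k + 1))

random.seed(2)
N = 400_000
results = {}
for n in (3, 4):
    # U[n,0] is a sum of n squared standard Gaussians (central case)
    mc = sum(math.log(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)))
             for _ in range(N)) / N
    results[n] = (mc, math.log(2.0) + psi_of_half(n))
print(results)
```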

3. Two Families of Functions

3.1. The Family of Functions gm(·)

The following family of functions will be essential for the expected logarithm and the negative integer moments of a noncentral χ2-distributed RV of even degrees of freedom.

Definition 3.

([1,2]) For an arbitrary $m \in \mathbb{N}$, we define the function $g_m \colon \mathbb{R}_0^+ \to \mathbb{R}$,

$\xi \mapsto g_m(\xi) \triangleq \begin{cases} \ln(\xi) - \mathrm{Ei}(-\xi) + \displaystyle\sum_{j=1}^{m-1} (-1)^j \left[e^{-\xi}\,(j-1)! - \frac{(m-1)!}{j\,(m-1-j)!}\right] \frac{1}{\xi^j} & \text{if } \xi > 0, \\ \psi(m) & \text{if } \xi = 0. \end{cases}$ (35)

Here, $\mathrm{Ei}(\cdot)$ denotes the exponential integral function ([8], Sec. 8.21),

$\mathrm{Ei}(-x) \triangleq -\int_x^{\infty} \frac{e^{-t}}{t}\, \mathrm{d}t, \quad x > 0,$ (36)

and $\psi(\cdot)$ is the digamma function ([8], Sec. 8.36) that for natural values takes the form

$\psi(m) = -\gamma + \sum_{j=1}^{m-1} \frac{1}{j},$ (37)

with $\gamma \approx 0.577$ being Euler's constant.

Note that in spite of the case distinction in its definition, $g_m(\xi)$ actually is continuous for all $\xi \ge 0$. In particular,

$\lim_{\xi \downarrow 0}\left\{\ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m-1} (-1)^j \left[e^{-\xi}\,(j-1)! - \frac{(m-1)!}{j\,(m-1-j)!}\right] \frac{1}{\xi^j}\right\} = \psi(m)$ (38)

for all $m \in \mathbb{N}$. This will follow from Proposition 3 once we have shown the connection between $g_m(\cdot)$ and the expected logarithm (see Theorem 1).

Therefore, its first derivative is defined for all $\xi \ge 0$ and can be evaluated to

$g_m^{(1)}(\xi) = \begin{cases} \dfrac{(-1)^m\,(m-1)!}{\xi^m}\left(e^{-\xi} - \displaystyle\sum_{j=0}^{m-1} \frac{(-1)^j}{j!}\,\xi^j\right) & \text{if } \xi > 0, \\ \dfrac{1}{m} & \text{if } \xi = 0. \end{cases}$ (39)

Using the following expression for the incomplete Gamma function [11],

$\Gamma(m, z) = (m-1)!\; e^{-z} \sum_{j=0}^{m-1} \frac{z^j}{j!},$ (40)

the expression (39) can also be rewritten as

$g_m^{(1)}(\xi) = \begin{cases} \dfrac{(-1)^m}{\xi^m}\; e^{-\xi}\left(\Gamma(m) - \Gamma(m, -\xi)\right) & \text{if } \xi > 0, \\ \dfrac{1}{m} & \text{if } \xi = 0. \end{cases}$ (41)

Note that also $g_m^{(1)}(\cdot)$ is continuous and that in particular

$\lim_{\xi \downarrow 0} \frac{(-1)^m\,(m-1)!}{\xi^m}\left(e^{-\xi} - \sum_{j=0}^{m-1} \frac{(-1)^j}{j!}\,\xi^j\right) = \frac{1}{m}.$ (42)

This can be checked directly by using the series expansion of the exponential function to write

$g_m^{(1)}(\xi) = \frac{(-1)^m\,(m-1)!}{\xi^m} \sum_{j=m}^{\infty} \frac{(-1)^j}{j!}\,\xi^j$ (43)
$= (m-1)! \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+m)!}\,\xi^k$ (44)

and plugging in ξ = 0, or it follows from (63a) in Theorem 3, which shows that $g_m^{(1)}(\xi)$ can be written as a difference of two continuous functions.
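As a quick numerical sanity check (not part of the paper), the closed form (39) and the power series (44) for $g_m^{(1)}$ must agree; the following sketch verifies this for a few values of m and ξ (tolerances are arbitrary).

```python
import math

def g_deriv_closed(m, xi):
    # closed form (39); the case xi = 0 is handled separately
    if xi == 0.0:
        return 1.0 / m
    partial = sum((-1.0) ** j * xi ** j / math.factorial(j) for j in range(m))
    return (-1.0) ** m * math.factorial(m - 1) / xi ** m \
        * (math.exp(-xi) - partial)

def g_deriv_series(m, xi):
    # power series (44): (m-1)! sum_k (-1)^k xi^k / (k+m)!,
    # computed term by term to avoid huge factorials
    term = 1.0 / m                  # k = 0 term equals (m-1)!/m! = 1/m
    total = 0.0
    k = 0
    while abs(term) > 1e-18:
        total += term
        k += 1
        term *= -xi / (k + m)
    return total

for m in (1, 2, 4):
    for xi in (0.0, 0.5, 2.0, 5.0):
        assert abs(g_deriv_closed(m, xi) - g_deriv_series(m, xi)) < 1e-9
print("ok")
```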

Figure 1 and Figure 2 depict gm(·) and gm(1)(·), respectively, for various values of m.

Figure 1.


The functions $g_m(\cdot)$ and $h_n(\cdot)$ for $m \in \{1,2,3,4\}$ and $n \in \{1,3,5,7\}$. (Increasing n and m results in increasing values.)

Figure 2.


The functions $g_m^{(1)}(\cdot)$ and $h_n^{(1)}(\cdot)$ for $m \in \{1,2,3,4\}$ and $n \in \{1,3,5,7\}$. (Increasing n or m results in decreasing values.)

Note that the $\ell$th derivative of $g_m(\cdot)$ can be expressed as a finite sum of the functions $g_{m+j}(\cdot)$ or of $g_{m+j}^{(1)}(\cdot)$; see Corollary 2 in Section 5.

3.2. The Family of Functions hn(·)

The following family of functions will be essential for the expected logarithm and the negative integer moments of a noncentral χ2-distributed RV of odd degrees of freedom.

Definition 4.

For an arbitrary odd $n \in \mathbb{N}_{\mathrm{odd}}$, we define the function $h_n \colon \mathbb{R}_0^+ \to \mathbb{R}$,

$\xi \mapsto h_n(\xi) \triangleq \begin{cases} -\gamma - 2\ln(2) + 2\xi \cdot {}_2F_2\!\left(1,1;\tfrac{3}{2},2;-\xi\right) + \displaystyle\sum_{j=1}^{\frac{n-1}{2}} (-1)^{j-1}\, \Gamma\!\left(j-\tfrac{1}{2}\right)\left[\sqrt{\xi}\, e^{-\xi}\, \mathrm{erfi}\!\left(\sqrt{\xi}\right) + \sum_{i=1}^{j-1} \frac{(-1)^i\,\xi^i}{\Gamma\!\left(i+\tfrac{1}{2}\right)}\right] \frac{1}{\xi^j} & \text{if } \xi > 0, \\ \psi\!\left(\dfrac{n}{2}\right) & \text{if } \xi = 0. \end{cases}$ (45)

Here $\gamma \approx 0.577$ denotes Euler's constant, $\psi(\cdot)$ is the digamma function ([8], Sec. 8.36), ${}_2F_2(\cdot)$ is a generalized hypergeometric function ([8], Sec. 9.14),

${}_2F_2\!\left(1,1;\tfrac{3}{2},2;-\xi\right) = \frac{\sqrt{\pi}}{2} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma\!\left(\tfrac{3}{2}+k\right)(k+1)}\,\xi^k,$ (46)

and $\mathrm{erfi}(\cdot)$ denotes the imaginary error function [9],

$\mathrm{erfi}\!\left(\sqrt{\xi}\right) \triangleq \frac{2}{\sqrt{\pi}} \int_0^{\sqrt{\xi}} e^{t^2}\, \mathrm{d}t.$ (47)

Note that one can also use Dawson's function [12],

$D(\xi) \triangleq e^{-\xi^2} \int_0^{\xi} e^{t^2}\, \mathrm{d}t,$ (48)

to write

$e^{-\xi}\, \mathrm{erfi}\!\left(\sqrt{\xi}\right) = \frac{2}{\sqrt{\pi}}\, D\!\left(\sqrt{\xi}\right).$ (49)

This often turns out to be numerically more stable.

Note that $h_n(\xi)$ is continuous for all $\xi \ge 0$; in particular,

$\lim_{\xi \downarrow 0} h_n(\xi) = \psi\!\left(\frac{n}{2}\right)$ (50)

for all $n \in \mathbb{N}_{\mathrm{odd}}$. This will follow from Proposition 3 once we have shown the connection between $h_n(\cdot)$ and the expected logarithm (see Theorem 1).

Moreover, note that ([8], Eq. 8.366-3)

$h_n(0) = \psi\!\left(\frac{n}{2}\right) = -\gamma - 2\ln(2) + \sum_{j=1}^{\frac{n-1}{2}} \frac{1}{j-\frac{1}{2}}.$ (51)

The first derivative of $h_n(\cdot)$ is defined for all $\xi \ge 0$ and can be evaluated to

$h_n^{(1)}(\xi) = \begin{cases} \dfrac{(-1)^{\frac{n-1}{2}}\,\Gamma\!\left(\frac{n}{2}\right)}{\xi^{\frac{n}{2}}}\left[e^{-\xi}\,\mathrm{erfi}\!\left(\sqrt{\xi}\right) + \displaystyle\sum_{j=1}^{\frac{n-1}{2}} \frac{(-1)^j}{\Gamma\!\left(j+\frac{1}{2}\right)}\,\xi^{\,j-\frac{1}{2}}\right] & \text{if } \xi > 0, \\ \dfrac{2}{n} & \text{if } \xi = 0. \end{cases}$ (52)

Note that also $h_n^{(1)}(\cdot)$ is continuous and that in particular

$\lim_{\xi \downarrow 0} h_n^{(1)}(\xi) = \frac{2}{n}$ (53)

for all $n \in \mathbb{N}_{\mathrm{odd}}$. Checking this directly is rather cumbersome. It is easier to deduce it from (76a) in Theorem 5, which shows that $h_n^{(1)}(\xi)$ can be written as a difference of two continuous functions.

Figure 1 and Figure 2 depict hn(·) and hn(1)(·), respectively, for various values of n.

Note that the $\ell$th derivative of $h_n(\cdot)$ can be expressed as a finite sum of the functions $h_{n+2j}(\cdot)$ or of $h_{n+2j}^{(1)}(\cdot)$; see Corollary 6 in Section 5.
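As a quick numerical cross-check (not part of the paper), the Dawson-function form (49) of $h_1^{(1)}$ can be compared against the power series $e^{-\xi}\sum_k \xi^k/(k!\,(k+n/2))$ for $h_n^{(1)}$ that appears in the proofs (cf. (A61) in Appendix B). Both Dawson's function and the series are implemented here from scratch with arbitrary truncation thresholds.

```python
import math

def h_deriv_series(n, xi):
    """Power series e^{-xi} sum_k xi^k / (k! (k + n/2)) for h_n^{(1)}(xi)
    (cf. (A61) in Appendix B)."""
    half_n = n / 2.0
    term, total, k = 1.0, 0.0, 0    # term tracks xi^k / k!
    while term > 1e-18:
        total += term / (k + half_n)
        k += 1
        term *= xi / k
    return math.exp(-xi) * total

def dawson(x):
    # Maclaurin series of Dawson's function (48); adequate for small |x|
    term, total = x, 0.0
    for k in range(200):
        total += term
        term *= -2.0 * x * x / (2 * k + 3)
        if abs(term) < 1e-18:
            break
    return total

# For n = 1, (52) with (49) gives h_1^{(1)}(xi) = 2 D(sqrt(xi)) / sqrt(xi).
for xi in (0.25, 1.0, 3.0):
    closed = 2.0 * dawson(math.sqrt(xi)) / math.sqrt(xi)
    assert abs(h_deriv_series(1, xi) - closed) < 1e-9
print("ok")
```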

4. Expected Logarithm and Negative Integer Moments

We are now ready for our main results. We will show how the functions hn(·) and gm(·) from Section 3 are connected to the expected logarithm and the negative integer moments of noncentral χ2-distributed random variables.

Theorem 1

(Expected Logarithm). For some $n \in \mathbb{N}$ and $\tau \ge 0$, let U[n,τ] be as in Definition 1. Then

$\mathbb{E}\!\left[\ln U_{[n,\tau]}\right] = \begin{cases} \ln(2) + h_n\!\left(\dfrac{\tau}{2}\right) & \text{if } n \in \mathbb{N}_{\mathrm{odd}}, \\ \ln(2) + g_{n/2}\!\left(\dfrac{\tau}{2}\right) & \text{if } n \in \mathbb{N}_{\mathrm{even}}. \end{cases}$ (54)

Similarly, for some $m \in \mathbb{N}$ and $\lambda \ge 0$, let V[m,λ] be as in Definition 2. Then

$\mathbb{E}\!\left[\ln V_{[m,\lambda]}\right] = g_m(\lambda).$ (55)

Theorem 2

(Negative Integer Moments). For some $n \in \mathbb{N}$ and $\tau \ge 0$, let U[n,τ] be as in Definition 1. Then, for any $\ell \in \mathbb{N}$,

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right] = \begin{cases} \dfrac{(-1)^{\ell-1}}{(\ell-1)!\; 2^{\ell}}\; h_{n-2\ell}^{(\ell)}\!\left(\dfrac{\tau}{2}\right) & \text{if } n \ge 2\ell+1 \text{ and } n \in \mathbb{N}_{\mathrm{odd}}, \\ \dfrac{(-1)^{\ell-1}}{(\ell-1)!\; 2^{\ell}}\; g_{\frac{n}{2}-\ell}^{(\ell)}\!\left(\dfrac{\tau}{2}\right) & \text{if } n \ge 2\ell+1 \text{ and } n \in \mathbb{N}_{\mathrm{even}}, \\ \infty & \text{if } n \le 2\ell. \end{cases}$ (56)

Similarly, for some $m \in \mathbb{N}$ and $\lambda \ge 0$, let V[m,λ] be as in Definition 2. Then, for any $\ell \in \mathbb{N}$,

$\mathbb{E}\!\left[V_{[m,\lambda]}^{-\ell}\right] = \begin{cases} \dfrac{(-1)^{\ell-1}}{(\ell-1)!}\; g_{m-\ell}^{(\ell)}(\lambda) & \text{if } m \ge \ell+1, \\ \infty & \text{if } m \le \ell. \end{cases}$ (57)

In particular, this means that for any $n \ge 3$,

$\mathbb{E}\!\left[\frac{1}{U_{[n,\tau]}}\right] = \begin{cases} \dfrac{1}{2}\, h_{n-2}^{(1)}\!\left(\dfrac{\tau}{2}\right) & \text{if } n \in \mathbb{N}_{\mathrm{odd}}, \\ \dfrac{1}{2}\, g_{\frac{n}{2}-1}^{(1)}\!\left(\dfrac{\tau}{2}\right) & \text{if } n \in \mathbb{N}_{\mathrm{even}}, \end{cases}$ (58)

and for any $m \ge 2$,

$\mathbb{E}\!\left[\frac{1}{V_{[m,\lambda]}}\right] = g_{m-1}^{(1)}(\lambda).$ (59)

A proof for these two main theorems can be found in Appendix B.
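As a numerical illustration (not part of the paper), Theorem 1 for odd n can be checked by Monte Carlo: the sample mean of ln U[n,τ] should match ln 2 + h_n(τ/2). Below, h_n is evaluated from the series (46) for h_1 and the first-derivative series used in the proofs (cf. (A61)), combined via the recursion of Theorem 5; sample size and tolerance are arbitrary.

```python
import math, random

GAMMA = 0.5772156649015329

def h1(xi):
    # h_1(xi) = -gamma - 2 ln 2 + 2 xi 2F2(1,1;3/2,2;-xi), via the series (46)
    term = xi / math.gamma(1.5)     # k = 0 term of 2 xi 2F2 / sqrt(pi)
    total, k = 0.0, 0
    while abs(term) > 1e-18:
        total += term
        term *= -xi * (k + 1) / ((1.5 + k) * (k + 2))
        k += 1
    return -GAMMA - 2.0 * math.log(2.0) + math.sqrt(math.pi) * total

def h_deriv(n, xi):
    # series e^{-xi} sum_k xi^k / (k! (k + n/2)) for h_n^{(1)} (cf. (A61))
    half_n, term, total, k = n / 2.0, 1.0, 0.0, 0
    while term > 1e-18:
        total += term / (k + half_n)
        k += 1
        term *= xi / k
    return math.exp(-xi) * total

def h(n, xi):
    # Theorem 5 applied recursively: h_n = h_1 + sum_j h_{2j-1}^{(1)}
    return h1(xi) + sum(h_deriv(2 * j - 1, xi) for j in range(1, (n + 1) // 2))

random.seed(3)
n, tau, N = 5, 4.0, 400_000
mu = math.sqrt(tau)                  # put all noncentrality into X_1
mc = sum(math.log((random.gauss(0.0, 1.0) + mu) ** 2
                  + sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n - 1)))
         for _ in range(N)) / N
closed = math.log(2.0) + h(n, tau / 2.0)
print(mc, closed)
```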

5. Properties

We next investigate the two families of functions gm(·) and hn(·) more closely and state some useful properties.

5.1. Properties of gm(·)

Proposition 6.

For any $m \in \mathbb{N}$, the function $\xi \mapsto g_m(\xi)$ is monotonically strictly increasing and strictly concave ($\xi \in \mathbb{R}_0^+$).

Proof. 

Using the expression (A104), we have

$g_m^{(1)}(\xi) = e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^k}{k!} \cdot \frac{1}{k+m},$ (60)
$g_m^{(2)}(\xi) = -e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^k}{k!} \cdot \frac{1}{(k+m)(k+m+1)},$ (61)

i.e., the first derivative of $g_m(\cdot)$ is positive and the second derivative is negative. □

Proposition 7.

For any $\xi \ge 0$, the function $m \mapsto g_m(\xi)$ is monotonically strictly increasing ($m \in \mathbb{N}$).

Proof. 

This follows directly from Theorem 1 and Proposition 2. □

Proposition 8.

For any $m \in \mathbb{N}$, the function $\xi \mapsto g_m^{(1)}(\xi)$ is positive, monotonically strictly decreasing, and strictly convex ($\xi \in \mathbb{R}_0^+$).

Proof. 

The positivity and the monotonicity follow directly from (60) and (61). To see the convexity, use (A104) to write

$g_m^{(3)}(\xi) = e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^k}{k!} \cdot \frac{2}{(k+m)(k+m+1)(k+m+2)},$ (62)

which is positive. □

Proposition 9.

For any $\xi \ge 0$, the function $m \mapsto g_m^{(1)}(\xi)$ is monotonically strictly decreasing ($m \in \mathbb{N}$).

Proof. 

This follows directly from Theorem 2 and Proposition 2. □

Theorem 3.

For all $m \in \mathbb{N}$, $\xi \ge 0$, and $\ell \in \mathbb{N}$, we have the following relations:

$g_{m+1}(\xi) = g_m(\xi) + g_m^{(1)}(\xi),$ (63a)
$g_{m+1}^{(\ell)}(\xi) = g_m^{(\ell)}(\xi) + g_m^{(\ell+1)}(\xi).$ (63b)

Proof. 

See Appendix C.1. □

Corollary 1.

For any $m > 1$,

$\mathbb{E}\!\left[\frac{1}{V_{[m,\lambda]}}\right] = g_m(\lambda) - g_{m-1}(\lambda).$ (64)

Proof. 

This follows directly from (63a) and (59). □

Corollary 2.

The $\ell$th derivative $g_m^{(\ell)}(\cdot)$ can be written with the help of either the first derivative $g_m^{(1)}(\cdot)$ or of $g_m(\cdot)$ in the following ways:

$g_m^{(\ell)}(\xi) = \sum_{j=0}^{\ell-1} (-1)^{\ell+j+1} \binom{\ell-1}{j}\, g_{m+j}^{(1)}(\xi)$ (65)
$= \sum_{j=0}^{\ell} (-1)^{\ell+j} \binom{\ell}{j}\, g_{m+j}(\xi).$ (66)

Proof. 

Using $g_m^{(0)}(\cdot)$ as an equivalent expression for $g_m(\cdot)$, we rewrite (63) as

$g_m^{(\ell)}(\xi) = g_{m+1}^{(\ell-1)}(\xi) - g_m^{(\ell-1)}(\xi),$ (67)

and recursively apply this relation. □

Corollary 3.

For all $m \in \mathbb{N}$ and $\xi > 0$,

$g_m(\xi) = g_1(\xi) + \sum_{j=1}^{m-1} g_j^{(1)}(\xi)$ (68)
$= \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m-1} g_j^{(1)}(\xi).$ (69)

Proof. 

We recursively apply (63a) to obtain the relation

$g_m(\xi) = g_{m-1}(\xi) + g_{m-1}^{(1)}(\xi)$ (70)
$= g_{m-2}(\xi) + g_{m-2}^{(1)}(\xi) + g_{m-1}^{(1)}(\xi)$ (71)
$= \cdots = g_1(\xi) + \sum_{j=1}^{m-1} g_j^{(1)}(\xi).$ (72)

 □

Theorem 4.

We have the following relation:

$g_{m+1}^{(1)}(\xi) = \frac{1}{\xi}\left(1 - m\, g_m^{(1)}(\xi)\right)$ (73)

for all $m \in \mathbb{N}$ and all $\xi > 0$.

Proof. 

See Appendix C.2. □

Corollary 4.

For any $m > 1$,

$\mathbb{E}\!\left[\frac{1}{V_{[m,\lambda]}}\right] = \frac{1 - \lambda\, g_m^{(1)}(\lambda)}{m-1}.$ (74)

Proof. 

This follows directly from (73) and (59). □
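As a numerical sanity check (not part of the paper), the recurrence of Theorem 4 can be verified against the power series (44) for $g_m^{(1)}$; tolerances and test points are arbitrary.

```python
import math

def g_deriv(m, xi):
    # power series (44) for g_m^{(1)}: (m-1)! sum_k (-1)^k xi^k / (k+m)!
    term = 1.0 / m          # k = 0 term
    total, k = 0.0, 0
    while abs(term) > 1e-18:
        total += term
        k += 1
        term *= -xi / (k + m)
    return total

# Theorem 4: g_{m+1}^{(1)}(xi) = (1 - m g_m^{(1)}(xi)) / xi  for xi > 0
for m in (1, 2, 5):
    for xi in (0.5, 1.0, 4.0):
        lhs = g_deriv(m + 1, xi)
        rhs = (1.0 - m * g_deriv(m, xi)) / xi
        assert abs(lhs - rhs) < 1e-12
print("ok")
```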

5.2. Properties of hn(·)

Proposition 10.

For any $n \in \mathbb{N}_{\mathrm{odd}}$, the function $\xi \mapsto h_n(\xi)$ is monotonically strictly increasing and strictly concave ($\xi \in \mathbb{R}_0^+$).

Proof. 

From (A95) and (A98) we see that $h_n^{(1)}(\xi) > 0$ and $h_n^{(2)}(\xi) < 0$. □

Proposition 11.

For any $\xi \ge 0$, the function $n \mapsto h_n(\xi)$ is monotonically strictly increasing ($n \in \mathbb{N}_{\mathrm{odd}}$).

Proof. 

This follows directly from Theorem 1 and Proposition 2. □

Proposition 12.

For any $n \in \mathbb{N}_{\mathrm{odd}}$, the function $\xi \mapsto h_n^{(1)}(\xi)$ is positive, monotonically strictly decreasing, and strictly convex ($\xi \in \mathbb{R}_0^+$).

Proof. 

The positivity and the monotonicity follow directly from (A95) and (A98). To see the convexity, we use (A99) to write

$h_n^{(3)}(\xi) = e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^k}{k!} \cdot \frac{2}{\left(k+\frac{n}{2}\right)\left(k+1+\frac{n}{2}\right)\left(k+2+\frac{n}{2}\right)},$ (75)

which is positive. □

Proposition 13.

For any $\xi \ge 0$, the function $n \mapsto h_n^{(1)}(\xi)$ is monotonically strictly decreasing ($n \in \mathbb{N}_{\mathrm{odd}}$).

Proof. 

This follows directly from Theorem 2 and Proposition 2. □

Theorem 5.

For all $n \in \mathbb{N}_{\mathrm{odd}}$, $\xi \ge 0$, and $\ell \in \mathbb{N}$, we have the following relations:

$h_{n+2}(\xi) = h_n(\xi) + h_n^{(1)}(\xi),$ (76a)
$h_{n+2}^{(\ell)}(\xi) = h_n^{(\ell)}(\xi) + h_n^{(\ell+1)}(\xi).$ (76b)

Proof. 

See Appendix C.1. □

Corollary 5.

For any $n \in \mathbb{N}_{\mathrm{odd}}$, $n \ge 3$,

$\mathbb{E}\!\left[\frac{1}{U_{[n,\tau]}}\right] = \frac{1}{2}\, h_n\!\left(\frac{\tau}{2}\right) - \frac{1}{2}\, h_{n-2}\!\left(\frac{\tau}{2}\right).$ (77)

Proof. 

This follows directly from (76a) and (58). □

Corollary 6.

The $\ell$th derivative $h_n^{(\ell)}(\cdot)$ can be written with the help of either the first derivative $h_n^{(1)}(\cdot)$ or of $h_n(\cdot)$ in the following ways:

$h_n^{(\ell)}(\xi) = \sum_{j=0}^{\ell-1} (-1)^{\ell+j+1} \binom{\ell-1}{j}\, h_{n+2j}^{(1)}(\xi)$ (78)
$= \sum_{j=0}^{\ell} (-1)^{\ell+j} \binom{\ell}{j}\, h_{n+2j}(\xi).$ (79)

Proof. 

We rewrite (76) as

$h_n^{(\ell)}(\xi) = h_{n+2}^{(\ell-1)}(\xi) - h_n^{(\ell-1)}(\xi)$ (80)

(where $h_n^{(0)}$ is understood as being equivalent to $h_n$) and recursively apply this relation. □

Corollary 7.

For all $n \in \mathbb{N}_{\mathrm{odd}}$ and $\xi \ge 0$,

$h_n(\xi) = h_1(\xi) + \sum_{j=1}^{\frac{n-1}{2}} h_{2j-1}^{(1)}(\xi)$ (81)
$= -\gamma - 2\ln(2) + 2\xi \cdot {}_2F_2\!\left(1,1;\tfrac{3}{2},2;-\xi\right) + \sum_{j=1}^{\frac{n-1}{2}} h_{2j-1}^{(1)}(\xi).$ (82)

Proof. 

This follows by recursive application of (76a) in the same way as Corollary 3 follows from (63a). □

Theorem 6.

We have the following relation:

$h_{n+2}^{(1)}(\xi) = \frac{1}{\xi}\left(1 - \frac{n}{2}\, h_n^{(1)}(\xi)\right)$ (83)

for all $n \in \mathbb{N}_{\mathrm{odd}}$ and all $\xi > 0$.

Proof. 

See Appendix C.2. □

Corollary 8.

For any $n \in \mathbb{N}_{\mathrm{odd}}$, $n \ge 3$,

$\mathbb{E}\!\left[\frac{1}{U_{[n,\tau]}}\right] = \frac{1 - \frac{\tau}{2}\, h_n^{(1)}\!\left(\frac{\tau}{2}\right)}{n-2}.$ (84)

Proof. 

This follows directly from (83) and (58). □

5.3. Additional Properties

Proposition 14.

For all $\xi \ge 0$: if $n \in \mathbb{N}_{\mathrm{odd}}$,

$h_n(\xi) < g_{\frac{n+1}{2}}(\xi),$ (85)

and if $n \in \mathbb{N}_{\mathrm{even}}$,

$g_{\frac{n}{2}}(\xi) < h_{n+1}(\xi).$ (86)

Similarly, for all $\xi \ge 0$: if $n \in \mathbb{N}_{\mathrm{odd}}$,

$h_n^{(1)}(\xi) > g_{\frac{n+1}{2}}^{(1)}(\xi),$ (87)

and if $n \in \mathbb{N}_{\mathrm{even}}$,

$g_{\frac{n}{2}}^{(1)}(\xi) > h_{n+1}^{(1)}(\xi).$ (88)

Proof. 

The relations (85) and (86) follow from (54) and Proposition 2; and (87) and (88) follow from (58) and Proposition 2. See also Figure 1 and Figure 2 for a graphical representation of this relationship. □

Lemma 2.

For any $m \in \mathbb{N}$, the function $\xi \mapsto g_m\!\left(\frac{1}{\xi}\right)$ is monotonically strictly decreasing and convex. Similarly, for any $n \in \mathbb{N}_{\mathrm{odd}}$, the function $\xi \mapsto h_n\!\left(\frac{1}{\xi}\right)$ is monotonically strictly decreasing and convex.

Proof. 

Since

$\frac{\partial}{\partial \xi}\, g_m\!\left(\frac{1}{\xi}\right) = -\frac{1}{\xi^2}\, g_m^{(1)}\!\left(\frac{1}{\xi}\right)$ (89)

and because (by Proposition 8) $g_m^{(1)}(\cdot) > 0$, we conclude that $g_m\!\left(\frac{1}{\xi}\right)$ is monotonically strictly decreasing.

To check convexity, we use Theorem 3 to rewrite (89) as

$\frac{\partial}{\partial \xi}\, g_m\!\left(\frac{1}{\xi}\right) = -\frac{1}{\xi^2}\, g_{m+1}\!\left(\frac{1}{\xi}\right) + \frac{1}{\xi^2}\, g_m\!\left(\frac{1}{\xi}\right)$ (90)

such that

$\frac{\partial^2}{\partial \xi^2}\, g_m\!\left(\frac{1}{\xi}\right) = \frac{2}{\xi^3}\, g_{m+1}\!\left(\frac{1}{\xi}\right) + \frac{1}{\xi^4}\, g_{m+1}^{(1)}\!\left(\frac{1}{\xi}\right) - \frac{2}{\xi^3}\, g_m\!\left(\frac{1}{\xi}\right) - \frac{1}{\xi^4}\, g_m^{(1)}\!\left(\frac{1}{\xi}\right)$ (91)
$= \frac{2}{\xi^3}\, g_m^{(1)}\!\left(\frac{1}{\xi}\right) + \frac{1}{\xi^4}\left(\xi - m\,\xi\, g_m^{(1)}\!\left(\frac{1}{\xi}\right)\right) - \frac{1}{\xi^4}\, g_m^{(1)}\!\left(\frac{1}{\xi}\right)$ (92)
$= g_m^{(1)}\!\left(\frac{1}{\xi}\right) \cdot \frac{2\xi - m\xi - 1}{\xi^4} + \frac{1}{\xi^3}$ (93)
$\ge \frac{1}{\frac{1}{\xi} + m} \cdot \frac{2\xi - m\xi - 1}{\xi^4} + \frac{1}{\xi^3}$ (94)
$= \frac{2}{\xi^2\,(m\xi + 1)} > 0.$ (95)

Here, in the second equality we use Theorems 3 and 4; and the inequality follows from the lower bound (105) in Theorem 8 below. (Note that while the derivations of the bounds in Section 6 rely strongly on the properties derived in Section 5, the results of this Lemma 2 are not needed there.)

The derivation for $h_n\!\left(\frac{1}{\xi}\right)$ is completely analogous. In particular, using Theorems 5 and 6 one shows that

$\frac{\partial^2}{\partial \xi^2}\, h_n\!\left(\frac{1}{\xi}\right) = h_n^{(1)}\!\left(\frac{1}{\xi}\right) \cdot \frac{2\xi - \frac{n}{2}\xi - 1}{\xi^4} + \frac{1}{\xi^3}$ (96)
$\ge \frac{1}{\frac{1}{\xi} + \frac{n}{2}} \cdot \frac{2\xi - \frac{n}{2}\xi - 1}{\xi^4} + \frac{1}{\xi^3}$ (97)
$= \frac{2}{\xi^2\left(\frac{n}{2}\xi + 1\right)} > 0,$ (98)

where the inequality follows from (117) in Theorem 10 below. □

6. Bounds

Finally, we derive some elementary upper and lower bounds on gm(·) and hn(·) and their first derivative.

6.1. Bounds on gm(·) and gm(1)(·)

Theorem 7.

For any $m \in \mathbb{N}$ and $\xi \in \mathbb{R}_0^+$, $g_m(\xi)$ is lower-bounded as follows:

$g_m(\xi) \ge \ln(\xi + m - 1),$ (99)
$g_m(\xi) \ge \ln(\xi + m) - \ln(m) + \psi(m),$ (100)

and upper-bounded as follows:

$g_m(\xi) \le \ln(\xi + m),$ (101)
$g_m(\xi) \le \frac{m+1}{m}\,\ln\!\left(\frac{\xi + m + 1}{m + 1}\right) + \psi(m).$ (102)

Proof. 

See Appendix D.1. □

Note that the bounds (101) and (99) are tighter for larger values of ξ, and they are exact asymptotically as $\xi \to \infty$:

$\lim_{\xi \to \infty}\left\{\ln(\xi + m) - \ln(\xi + m - 1)\right\} = 0.$ (103)

In contrast, the bounds (102) and (100) are better for small values of ξ and are exact for ξ = 0:

$\lim_{\xi \downarrow 0} \frac{\frac{m+1}{m}\,\ln\!\left(\frac{\xi+m+1}{m+1}\right) + \psi(m)}{\ln(\xi+m) - \ln(m) + \psi(m)} = 1.$ (104)

In general, the tightness of the bounds increases with increasing m.
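As a numerical sanity check (not part of the paper), the four bounds of Theorem 7 can be verified against $g_m$ evaluated through the series for $g_1(\xi) = \ln(\xi) - \mathrm{Ei}(-\xi) = -\gamma + \sum_{k\ge1}(-1)^{k+1}\xi^k/(k\,k!)$ and the derivative series (44), combined as in Corollary 3. Test points and slacks are arbitrary.

```python
import math

GAMMA = 0.5772156649015329

def g_deriv(m, xi):
    # power series (44) for g_m^{(1)}
    term, total, k = 1.0 / m, 0.0, 0
    while abs(term) > 1e-18:
        total += term
        k += 1
        term *= -xi / (k + m)
    return total

def g1(xi):
    # g_1(xi) = -gamma + sum_{k>=1} (-1)^{k+1} xi^k / (k k!)
    total, term = 0.0, 1.0
    for k in range(1, 400):
        term *= xi / k               # term = xi^k / k!
        total += (-1.0) ** (k + 1) * term / k
        if term < 1e-18:
            break
    return -GAMMA + total

def g(m, xi):
    # Corollary 3: g_m = g_1 + sum_{j=1}^{m-1} g_j^{(1)}
    return g1(xi) + sum(g_deriv(j, xi) for j in range(1, m))

def psi(m):
    return -GAMMA + sum(1.0 / j for j in range(1, m))

for m in (1, 2, 5):
    for xi in (0.1, 1.0, 5.0):
        val = g(m, xi)
        assert val >= math.log(xi + m - 1) - 1e-9                      # (99)
        assert val >= math.log(xi + m) - math.log(m) + psi(m) - 1e-9   # (100)
        assert val <= math.log(xi + m) + 1e-9                          # (101)
        assert val <= (m + 1) / m * math.log((xi + m + 1) / (m + 1)) \
            + psi(m) + 1e-9                                            # (102)
print("ok")
```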

The bounds of Theorem 7 are depicted in Figure 3 and Figure 4 for the cases of m=1, m=2, and m=5.

Figure 3.


Upper and lower bounds on $g_1(\cdot)$ from Theorem 7 (m = 1). For small ξ, (102) and (100) are tight, while (101) and (99) are tight for larger ξ. In particular, (99) is extremely tight for $\xi \ge 2$, and (102) for $\xi \le 2$.

Figure 4.


Upper and lower bounds on gm(·) from Theorem 7 for m=2 and m=5.

Theorem 8.

For any $m \in \mathbb{N}$ and $\xi \in \mathbb{R}_0^+$, $g_m^{(1)}(\xi)$ is lower-bounded as follows:

$g_m^{(1)}(\xi) \ge \frac{1}{\xi + m},$ (105)

and upper-bounded as follows:

$g_m^{(1)}(\xi) \le \frac{m+1}{m\,(\xi + m + 1)},$ (106)
$g_m^{(1)}(\xi) \le \frac{1}{\xi + m - 1}.$ (107)

Proof. 

See Appendix D.1. □

Note that the lower bound (105) is exact for ξ=0 and asymptotically when ξ. The upper bound (106) is tighter for small values of ξ and is exact for ξ=0, while (107) is better for larger values of ξ and is exact asymptotically when ξ. Concretely, we have

$\lim_{\xi \to \infty}\left\{\frac{1}{\xi + m - 1} - \frac{1}{\xi + m}\right\} = 0$ (108)

and

$\lim_{\xi \downarrow 0} \frac{\frac{m+1}{m\,(\xi+m+1)}}{\frac{1}{\xi+m}} = 1.$ (109)

In general, also here it holds that the tightness of the bounds increases with increasing m.

The bounds of Theorem 8 are depicted in Figure 5 for the cases of m=1, m=3, and m=8.

Figure 5.


Upper and lower bounds on $g_m^{(1)}(\cdot)$ from Theorem 8 for m = 1, m = 3, and m = 8. Note that for $\xi < m+1$, (106) is tighter than (107), while for $\xi > m+1$, (107) is tighter than (106).

6.2. Bounds on hn(·) and hn(1)(·)

Theorem 9.

For any $\xi \in \mathbb{R}_0^+$, $h_n(\xi)$ is lower-bounded as follows:

$h_n(\xi) \ge \ln\!\left(\xi + \frac{n}{2} - 1\right) \qquad (n \in \mathbb{N}_{\mathrm{odd}},\ n \ge 3),$ (110a)
$h_1(\xi) \ge \ln\!\left(\xi + \frac{1}{2}\right) - \frac{2}{\xi}\left(1 - e^{-\xi}\right) \qquad (n = 1).$ (110b)

Moreover, for any $n \in \mathbb{N}_{\mathrm{odd}}$,

$h_n(\xi) \ge \ln\!\left(\xi + \frac{n}{2}\right) - \ln\!\left(\frac{n}{2}\right) + \psi\!\left(\frac{n}{2}\right).$ (111)

For any $\xi \in \mathbb{R}_0^+$ and any $n \in \mathbb{N}_{\mathrm{odd}}$, $h_n(\xi)$ is upper-bounded as follows:

$h_n(\xi) \le \ln\!\left(\xi + \frac{n}{2}\right),$ (112)
$h_n(\xi) \le \frac{n+2}{n}\,\ln\!\left(\frac{\xi + \frac{n}{2} + 1}{\frac{n}{2} + 1}\right) + \psi\!\left(\frac{n}{2}\right).$ (113)

Proof. 

See Appendix D.2. □

Note that the bounds (112) and (110) are tighter for larger values of ξ, and they are exact asymptotically as $\xi \to \infty$:

$\lim_{\xi \to \infty}\left\{\ln\!\left(\xi + \frac{n}{2}\right) - \ln\!\left(\xi + \frac{n}{2} - 1\right)\right\} = 0$ (114)

and

$\lim_{\xi \to \infty}\left\{\ln\!\left(\xi + \frac{1}{2}\right) - \left[\ln\!\left(\xi + \frac{1}{2}\right) - \frac{2}{\xi}\left(1 - e^{-\xi}\right)\right]\right\} = 0,$ (115)

respectively.

In contrast, the bounds (113) and (111) are better for small values of ξ and are exact for ξ = 0:

$\lim_{\xi \downarrow 0} \frac{\frac{n+2}{n}\,\ln\!\left(\frac{\xi+\frac{n}{2}+1}{\frac{n}{2}+1}\right) + \psi\!\left(\frac{n}{2}\right)}{\ln\!\left(\xi+\frac{n}{2}\right) - \ln\!\left(\frac{n}{2}\right) + \psi\!\left(\frac{n}{2}\right)} = 1.$ (116)

In general, the tightness of the bounds increases with increasing n.

The bounds of Theorem 9 are depicted in Figure 6 and Figure 7 for the cases of n=1, n=3, and n=9.

Figure 6.


Upper and lower bounds on h1(·) from Theorem 9 (n=1).

Figure 7.


Upper and lower bounds on hn(·) from Theorem 9 for n=3 and n=9.

Theorem 10.

For any nNodd and ξR0+, hn(1)(ξ) is lower-bounded as follows:

hn(1)(ξ)1ξ+n2, (117)

and upper-bounded as follows:

hn(1)(ξ)n+2nξ+n2+1. (118)

Moreover,

hn(1)(ξ)1ξ+n21(nNodd,n3), (119a)
h1(1)(ξ)2ξ1eξ2ξ(n=1). (119b)

Proof. 

See Appendix D.2. □

Note that the lower bound (117) is exact for ξ = 0 and asymptotically as $\xi \to \infty$. The upper bound (118) is tighter for small values of ξ and is exact for ξ = 0, while (119) is better for larger values of ξ and is exact asymptotically as $\xi \to \infty$. Concretely, we have

$\lim_{\xi \to \infty}\left\{\frac{1}{\xi + \frac{n}{2} - 1} - \frac{1}{\xi + \frac{n}{2}}\right\} = 0$ (120)

or

$\lim_{\xi \to \infty}\left\{\frac{2}{\xi}\left(1 - e^{-\xi}\right) - \frac{1}{\xi + \frac{1}{2}}\right\} = 0,$ (121)

respectively, and

$\lim_{\xi \downarrow 0} \frac{\frac{n+2}{n\left(\xi+\frac{n}{2}+1\right)}}{\frac{1}{\xi+\frac{n}{2}}} = 1.$ (122)

In the special case n = 1, the improved upper bound (119b) is exact also for ξ = 0, but it is still less tight for small ξ than (118).

In general, also here it holds that the tightness of the bounds increases with increasing n.
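As a numerical sanity check (not part of the paper), the bounds of Theorem 10 can be tested against the power series $e^{-\xi}\sum_k \xi^k/(k!\,(k+n/2))$ for $h_n^{(1)}$ used in the proofs (cf. (A61)); test points and slacks are arbitrary.

```python
import math

def h_deriv(n, xi):
    # power series for h_n^{(1)} (cf. (A61) in Appendix B)
    half_n = n / 2.0
    term, total, k = 1.0, 0.0, 0
    while term > 1e-18:
        total += term / (k + half_n)
        k += 1
        term *= xi / k
    return math.exp(-xi) * total

for n in (1, 3, 9):
    for xi in (0.05, 1.0, 6.0):
        val = h_deriv(n, xi)
        assert val >= 1.0 / (xi + n / 2.0) - 1e-9                     # (117)
        assert val <= (n + 2.0) / (n * (xi + n / 2.0 + 1.0)) + 1e-9   # (118)
        if n >= 3:
            assert val <= 1.0 / (xi + n / 2.0 - 1.0) + 1e-9           # (119a)
# (119b) for n = 1 at one sample point:
assert h_deriv(1, 2.0) <= (2.0 / 2.0) * (1.0 - math.exp(-2.0)) + 1e-9
print("ok")
```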

These bounds are depicted in Figure 8 and Figure 9 for the cases n=1, n=3, and n=9.

Figure 8.


Upper and lower bounds on h1(1)(·) from Theorem 10 (n=1).

Figure 9.


Upper and lower bounds on $h_n^{(1)}(\cdot)$ from Theorem 10 for n = 3 and n = 9. Note that for $\xi < \frac{n}{2}+1$, (118) is tighter than (119a), while for $\xi > \frac{n}{2}+1$, (119a) is tighter than (118).

7. Discussion

We have shown that the expected logarithm and the negative integer moments of a noncentral χ2-distributed RV can be expressed with the help of two families of functions gm(·) and hn(·), depending on whether the degrees of freedom are even or odd. While these two families of functions are very similar in many respects, they are actually surprisingly different in their description. The case of odd degrees of freedom thereby turns out to be quite a bit more complicated than the situation of even degrees of freedom (which explains why gm(·) was defined in [1] already, while hn(·) is newly introduced in this work).

We have also provided a whole new set of properties of both families of functions and derived new tight upper and lower bounds that are solely based on elementary functions.

It is intuitively pleasing that $U^{-\ell}$, being proportional to the $\ell$th derivative of $\ln(U)$ with respect to U, has an expectation that is related to the $\ell$th derivative of the function describing the expectation of the logarithm.

The recently proposed trick of representing the logarithm by an integral [7] turned out to be very helpful in the proof of the continuity of the expected logarithm (see Appendix A.3). While in general very well behaved, the logarithmic function nevertheless is a fickle beast due to its unboundedness both at zero and infinity.

Acknowledgments

I would like to express a big thank you for the accurate and very helpful comments of the two anonymous reviewers.

Appendix A. Proofs of Section 2

Appendix A.1. Proof of Proposition 1

We first look at the case τ = 0. Using the PDF (8), we compute

$\mathbb{E}\!\left[U_{[n,0]}^{-\ell}\right] = \int_0^{\infty} u^{-\ell+\frac{n}{2}-1}\, \frac{1}{2^{\frac{n}{2}}\,\Gamma\!\left(\frac{n}{2}\right)}\, e^{-\frac{u}{2}}\, \mathrm{d}u$ (A1)
$= \frac{\Gamma\!\left(\frac{n}{2}-\ell\right)}{2^{\ell}\,\Gamma\!\left(\frac{n}{2}\right)} < \infty,$ (A2)

where (A2) follows from ([8], Eq. 3.381-4) as long as

$-\ell + \frac{n}{2} > 0.$ (A3)

On the other hand, if $n \le 2\ell$, then (A1) can be bounded as follows:

$\int_0^{\infty} u^{-\ell+\frac{n}{2}-1}\, \frac{1}{2^{\frac{n}{2}}\Gamma\!\left(\frac{n}{2}\right)}\, e^{-\frac{u}{2}}\, \mathrm{d}u > \frac{1}{2^{\frac{n}{2}}\Gamma\!\left(\frac{n}{2}\right)} \int_0^1 u^{-\ell+\frac{n}{2}-1}\, e^{-\frac{u}{2}}\, \mathrm{d}u$ (A4)
$\ge \frac{1}{2^{\frac{n}{2}}\Gamma\!\left(\frac{n}{2}\right)}\, e^{-\frac{1}{2}} \int_0^1 u^{-\ell+\frac{n}{2}-1}\, \mathrm{d}u = \infty,$ (A5)

where the first inequality holds because all terms in the integral are positive; where in the second inequality we have bounded

$e^{-\frac{u}{2}} \ge e^{-\frac{1}{2}}, \quad u \in [0,1];$ (A6)

and where the integral is infinite because from $n \le 2\ell$ it follows that

$-\ell + \frac{n}{2} - 1 \le -1.$ (A7)

Next, assume that τ > 0. Using the PDF (6), we write the negative moment as an integral and make a change of integration variable $x \triangleq \sqrt{\tau u}$:

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right] = \int_0^{\infty} u^{-\ell} \cdot \frac{1}{2}\left(\frac{u}{\tau}\right)^{\frac{n-2}{4}} e^{-\frac{\tau+u}{2}}\, I_{\frac{n}{2}-1}\!\left(\sqrt{\tau u}\right) \mathrm{d}u$ (A8)
$= \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}} \int_0^{\infty} x^{-2\ell+\frac{n}{2}}\, e^{-\frac{x^2}{2\tau}}\, I_{\frac{n}{2}-1}(x)\, \mathrm{d}x$ (A9)
$= \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}} \int_0^{\infty} x^{-2\ell+\frac{n}{2}}\, e^{-\frac{x^2}{2\tau}} \sum_{k=0}^{\infty} \frac{x^{\frac{n}{2}-1+2k}}{k!\,\Gamma\!\left(k+\frac{n}{2}\right)\, 2^{\frac{n}{2}-1+2k}}\, \mathrm{d}x$ (A10)
$= 2^{1-\frac{n}{2}}\, \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}} \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma\!\left(k+\frac{n}{2}\right)\, 2^{2k}} \int_0^{\infty} x^{2k+n-2\ell-1}\, e^{-\frac{x^2}{2\tau}}\, \mathrm{d}x,$ (A11)

where in (A10) we have relied on the series representation (7) of the modified Bessel function.

Now, if we again assume that $\ell < \frac{n}{2}$, we can use ([8], Eq. 3.381-4) to evaluate the integral and obtain

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right] = 2^{1-\frac{n}{2}}\, \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}} \sum_{k=0}^{\infty} \frac{(2\tau)^{k+\frac{n}{2}-\ell}\, \Gamma\!\left(k+\frac{n}{2}-\ell\right)}{2\,k!\,\Gamma\!\left(k+\frac{n}{2}\right)\, 2^{2k}}$ (A12)
$= 2^{-\ell}\, e^{-\frac{\tau}{2}} \sum_{k=0}^{\infty} \frac{\Gamma\!\left(k+\frac{n}{2}-\ell\right)}{k!\,\Gamma\!\left(k+\frac{n}{2}\right)} \left(\frac{\tau}{2}\right)^{k}.$ (A13)

Using $\Gamma(z+1) = z\,\Gamma(z)$ and noting that because $\ell < \frac{n}{2}$ we must have $\frac{n}{2} - \ell \ge \frac{1}{2}$, we bound

$\frac{\Gamma\!\left(k+\frac{n}{2}-\ell\right)}{\Gamma\!\left(k+\frac{n}{2}\right)} = \frac{1}{\left(k+\frac{n}{2}-1\right)\left(k+\frac{n}{2}-2\right)\cdots\left(k+\frac{n}{2}-\ell\right)}$ (A14)
$\le 2.$ (A15)

Thus, using the series expansion of the exponential function, we obtain from (A13),

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right] \le 2^{-\ell}\, e^{-\frac{\tau}{2}} \sum_{k=0}^{\infty} \frac{2}{k!} \left(\frac{\tau}{2}\right)^{k}$ (A16)
$= 2^{-\ell+1}\, e^{-\frac{\tau}{2}}\, e^{\frac{\tau}{2}}$ (A17)
$= 2^{-\ell+1} < \infty.$ (A18)

On the other hand, if $n \le 2\ell$, we bound (A10) by reducing the integration boundaries and by dropping all terms in the sum apart from k = 0:

$\mathbb{E}\!\left[U_{[n,\tau]}^{-\ell}\right] > 2^{1-\frac{n}{2}}\, \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}}\, \frac{1}{\Gamma\!\left(\frac{n}{2}\right)} \int_0^1 x^{n-2\ell-1}\, e^{-\frac{x^2}{2\tau}}\, \mathrm{d}x$ (A19)
$\ge 2^{1-\frac{n}{2}}\, \tau^{\ell-\frac{n}{2}}\, e^{-\frac{\tau}{2}}\, e^{-\frac{1}{2\tau}}\, \frac{1}{\Gamma\!\left(\frac{n}{2}\right)} \int_0^1 x^{n-2\ell-1}\, \mathrm{d}x = \infty,$ (A20)

where the second inequality follows because

$e^{-\frac{x^2}{2\tau}} \ge e^{-\frac{1}{2\tau}}, \quad x \in [0,1],$ (A21)

and where the integral is infinite because (A7) holds. This concludes the proof of Proposition 1.

Appendix A.2. Proof of Proposition 2

We fix some $\tau \ge 0$, two arbitrary natural numbers $n_1, n_2 \in \mathbb{N}$ such that $n_1 < n_2$, and some $\ell \in \mathbb{N}$ such that $\ell < \frac{n_2}{2}$. We choose $\mu_1 = \sqrt{\tau}$, $\mu_2 = \cdots = \mu_{n_2} = 0$, and let $\{X_k\}_{k=1}^{n_2}$ be IID $\mathcal{N}_{\mathbb{R}}(0,1)$. Then

$\mathbb{E}\!\left[U_{[n_2,\tau]}^{-\ell}\right] = \mathbb{E}\!\left[\left(\sum_{k=1}^{n_2} \left(X_k+\mu_k\right)^2\right)^{-\ell}\right]$ (A22)
$= \mathbb{E}\!\left[\left(\sum_{k=1}^{n_1} \left(X_k+\mu_k\right)^2 + \sum_{k=n_1+1}^{n_2} \left(X_k+\mu_k\right)^2\right)^{-\ell}\right]$ (A23)
$< \mathbb{E}\!\left[\left(\sum_{k=1}^{n_1} \left(X_k+\mu_k\right)^2\right)^{-\ell}\right]$ (A24)
$= \mathbb{E}\!\left[U_{[n_1,\tau]}^{-\ell}\right],$ (A25)

where the first equality follows from (5); the subsequent equality from splitting the sum into two parts; the subsequent inequality from the monotonicity of $\xi \mapsto \xi^{-\ell}$ and from dropping some terms that with probability 1 are positive; and the final equality again from (5). This proves the (decreasing) monotonicity of the negative integer moments in n.

The derivation of the (increasing) monotonicity of the expected logarithm is identical, except that we rely on the (increasing) monotonicity of the logarithm instead of the (decreasing) monotonicity of $\xi \mapsto \xi^{-\ell}$.

Appendix A.3. Proof of Proposition 3

To prove that $\mathbb{E}\!\left[\ln U_{[n,\tau]}\right]$ is continuous in τ, we need to show that we are allowed to swap the order of a limit on τ and the expectation. This could be done using the Monotone Convergence Theorem or the Dominated Convergence Theorem [13]. Unfortunately, neither can be applied directly because $\xi \mapsto \ln(\xi)$ is not nonnegative and is unbounded both above and below.

Instead we rely on a trick recently presented in [7] that allows us to write the expected logarithm with the help of the MGF:

$\mathbb{E}[\ln(U)] = \int_0^{\infty} \frac{e^{-t} - \mathbb{E}\!\left[e^{-tU}\right]}{t}\, \mathrm{d}t.$ (A26)

So, using the MGF of U[n,τ],

$\mathbb{E}\!\left[e^{-t U_{[n,\tau]}}\right] = \frac{e^{-\frac{\tau t}{1+2t}}}{(1+2t)^{\frac{n}{2}}}, \quad t \ge 0,$ (A27)

we have

$\mathbb{E}\!\left[\ln U_{[n,\tau]}\right] = \int_0^{\infty} \left(\frac{e^{-t}}{t} - \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}\right) \mathrm{d}t.$ (A28)

We use this to prove continuity as follows. Assume that $0 \le \tau \le K$ for some arbitrarily large, but finite constant K. Then

$\lim_{\epsilon \to 0} \mathbb{E}\!\left[\ln U_{[n,\tau+\epsilon]}\right] = \lim_{\epsilon \to 0} \int_0^{\infty} \left(\frac{e^{-t}}{t} - \frac{e^{-\frac{(\tau+\epsilon) t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}\right) \mathrm{d}t$ (A29)
$= \int_0^{\infty} \lim_{\epsilon \to 0} \left(\frac{e^{-t}}{t} - \frac{e^{-\frac{(\tau+\epsilon) t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}\right) \mathrm{d}t$ (A30)
$= \int_0^{\infty} \left(\frac{e^{-t}}{t} - \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}\right) \mathrm{d}t$ (A31)
$= \mathbb{E}\!\left[\ln U_{[n,\tau]}\right].$ (A32)

It only remains to justify the swap of integration and limit in (A30). To that goal we rely on the Dominated Convergence Theorem applied to the function

$f_{\tau,n}(t) \triangleq \frac{e^{-t}}{t} - \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}.$ (A33)

Note that if

$\frac{e^{-t}}{t} \ge \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}},$ (A34)

we have from $\tau \le K$

$|f_{\tau,n}(t)| = f_{\tau,n}(t) = \frac{e^{-t}}{t} - \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}}$ (A35)
$\le \frac{e^{-t}}{t} - \frac{e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}}.$ (A36)

On the other hand, if

$\frac{e^{-t}}{t} < \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}},$ (A37)

we have

$|f_{\tau,n}(t)| = -f_{\tau,n}(t) = \frac{e^{-\frac{\tau t}{1+2t}}}{t\,(1+2t)^{\frac{n}{2}}} - \frac{e^{-t}}{t}$ (A38)
$\le \frac{1}{t\,(1+2t)^{\frac{1}{2}}} - \frac{e^{-t}}{t}.$ (A39)

Thus, for all $t \ge 0$,

$|f_{\tau,n}(t)| \le \max\!\left\{\frac{e^{-t}}{t} - \frac{e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}},\ \frac{1}{t\sqrt{1+2t}} - \frac{e^{-t}}{t}\right\} \triangleq F_n(t).$ (A40)

Since both functions in the maximum of $F_n(t)$ are nonnegative for all $t \ge 0$, we can bound the maximum by the sum:

$F_n(t) \le \frac{e^{-t}}{t} - \frac{e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}} + \frac{1}{t\sqrt{1+2t}} - \frac{e^{-t}}{t}$ (A41)
$= \frac{1}{t\sqrt{1+2t}} - \frac{e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}}$ (A42)
$= \frac{(1+2t)^{\frac{n-1}{2}} - e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}}$ (A43)

and therefore

$\int_0^{\infty} F_n(t)\, \mathrm{d}t \le \int_0^{\infty} \frac{(1+2t)^{\frac{n-1}{2}} - e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}}\, \mathrm{d}t < \infty,$ (A44)

where the finiteness of the integral is obvious once we realize that the integrand is finite for all $t \ge 0$, in particular,

$\lim_{t \downarrow 0} \frac{(1+2t)^{\frac{n-1}{2}} - e^{-Kt}}{t\,(1+2t)^{\frac{n}{2}}} = K + n - 1,$ (A45)

and that for $t \ge 1$ the integrand decays like $t^{-\frac{3}{2}}$. Thus, all conditions needed for the Dominated Convergence Theorem are satisfied and the swap in (A30) is proven.
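As a numerical illustration (not part of the paper), the integral representation (A26) with the MGF (A27) can be evaluated by simple quadrature and compared, for τ = 0, against the closed form ln 2 + ψ(n/2) from Proposition 5. The substitution t = e^s and the grid parameters are arbitrary numerical choices.

```python
import math

def integrand(t, n, tau):
    # (e^{-t} - E[e^{-tU}]) / t with the MGF (A27)
    mgf = math.exp(-tau * t / (1.0 + 2.0 * t)) / (1.0 + 2.0 * t) ** (n / 2.0)
    return (math.exp(-t) - mgf) / t

def expected_log(n, tau, s_lo=-30.0, s_hi=9.5, steps=6000):
    # trapezoidal rule after substituting t = e^s (dt = e^s ds)
    hstep = (s_hi - s_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        s = s_lo + i * hstep
        t = math.exp(s)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * integrand(t, n, tau) * t
    return total * hstep

psi_3_2 = 2.0 - 0.5772156649015329 - 2.0 * math.log(2.0)   # psi(3/2), eq. (51)
print(expected_log(3, 0.0), math.log(2.0) + psi_3_2)
```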

Appendix B. Derivations of the Main Results (Theorems 1 and 2)

Appendix B.1. Odd Degrees of Freedom

Appendix B.1.1. Expected Logarithm

Let n be odd and assume first τ=0. Taking the PDF (8) and using ([8], Eq. 4.352-1), we obtain

Eln(U)=0ln(u)·12n2Γn2un21eu2du=ψn2+ln(2), (A46)

which proves the result for τ=0.

For τ>0, we need to evaluate the following integral:

Eln(U)=0ln(u)·12uτn24eτ+u2In21τudu. (A47)

Expressing In21(·) as the power series (7) we obtain:

Eln(U)=0ln(u)12τ12n4eτ2u2un412k=0τk+n412uk+n412k!Γk+n222k+n21du (A48)
=0ln(u)12eτ2k=0τ2kk!Γk+n2u2k+n21eu2du (A49)
=eτ2k=0τ2kk!Γk+n22k+n20ln(u)uk+n21eu2du (A50)
=eτ2k=01k!τ2kψk+n2+ln(2) (A51)
=eτ2k=01k!τ2kγln(2)+j=1n12+k1j12 (A52)
=γln(2)+eτ2k=01k!τ2kj=1n12+k1j12. (A53)

Here the interchange of summation and integral in (A50) is valid because we show in Appendix E that the sum converges uniformly for all u0; in (A51) we use the result from (A46) with n2 replaced by k+n2; (A52) follows from (51); and the last equality (A53) from the series expansion of the exponential function. We introduce the shorthand

ξτ2 (A54)

and define the function

h˜n(ξ)γ2ln(2)+eξk=0ξkk!j=1n12+k1j12, (A55)

such that

Eln(U)=ln(2)+h˜nτ2. (A56)

The proof will be concluded once we show that in fact h˜n(ξ)=hn(ξ).

To that goal, we compute the derivative of (A55) by interchanging the derivative and the infinite summation (which again is valid due to uniform convergence proven in Appendix E):

h˜n(1)(ξ)=k=01k!j=1n12+k1j12eξξk+eξkξk1 (A57)
=eξk=1j=1n12+kξk1(k1)!j12eξk=0j=1n12+kξkk!j12 (A58)
=eξk=0j=1n12+k+1ξkk!j12eξk=0j=1n12+kξkk!j12 (A59)
=eξk=0ξkk!n12+k+112 (A60)
=eξk=01k!n2+kξk (A61)
=eξξn+12=n+121n+12!12ξ. (A62)

Here, in (A59) we shift k by 1 in the first sum; and (A62) follows from the substitution k+n+12.

Using the relation

$\sum_{j=0}^{\frac{n+1}2-1} \frac{(-1)^j}{\Gamma(j+\frac12)\,(\ell-j)!} = \frac{(-1)^{\frac{n+1}2-1}}{\Gamma(\frac n2)}\cdot \frac{1}{\big(\ell-\frac{n+1}2\big)!\,\big(\ell-\frac12\big)} - \frac{1}{2\sqrt\pi}\cdot\frac{1}{\big(\ell-\frac12\big)\,\ell!}$,  (A63)

we thus obtain from (A62)

$\tilde h_n^{(1)}(\xi) = e^{-\xi}(-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \sum_{\ell=\frac{n+1}2}^\infty \frac{(-1)^{\frac{n+1}2-1}}{\Gamma(\frac n2)}\cdot\frac{\xi^\ell}{\big(\ell-\frac{n+1}2\big)!\,\big(\ell-\frac12\big)}$  (A64)

$= e^{-\xi}(-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \sum_{\ell=\frac{n+1}2}^\infty \Bigg[\sum_{j=0}^{\frac{n+1}2-1} \frac{(-1)^j}{\Gamma(j+\frac12)\,(\ell-j)!} + \frac{1}{2\sqrt\pi}\cdot\frac{1}{\big(\ell-\frac12\big)\,\ell!}\Bigg]\,\xi^\ell$  (A65)

$= e^{-\xi}(-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \Bigg(\underbrace{\sum_{\ell=\frac{n+1}2}^\infty \sum_{j=0}^{\frac{n+1}2-1} \frac{(-1)^j\,\xi^\ell}{\Gamma(j+\frac12)\,(\ell-j)!}}_{\triangleq\, S} + \frac{1}{2\sqrt\pi} \sum_{\ell=\frac{n+1}2}^\infty \frac{\xi^\ell}{\big(\ell-\frac12\big)\,\ell!}\Bigg)$.  (A66)

We next swap the order of the two sums in the term $S$ and shift the counter $\ell$ by $j$, i.e., $k \triangleq \ell-j$:

$S = \sum_{j=0}^{\frac{n+1}2-1} \sum_{k=\frac{n+1}2-j}^{\infty} \frac{(-1)^j}{\Gamma(j+\frac12)\,k!}\,\xi^{k+j}$.  (A67)

Now, the counters k and j cover the values shown by the black dots in Figure A1.

Figure A1. The black dots depict the values covered by the counters $k$ and $j$ in the double sum of $S$ in (A67).

We investigate the missing “triangle” of red dots, where we reorder the double sum to have an inner sum going along the “diagonals” (see again Figure A1) and an outer sum counting the diagonals:

$\sum_{k=0}^{\frac{n+1}2-1} \sum_{j=0}^{\frac{n+1}2-1-k} \frac{(-1)^j}{\Gamma(j+\frac12)\,k!}\,\xi^{k+j} = \sum_{\ell=0}^{\frac{n+1}2-1} \sum_{t=0}^{\ell} \frac{(-1)^{\ell-t}}{\Gamma(\ell-t+\frac12)\,t!}\,\xi^{\ell}$  (A68)

$= \sum_{\ell=0}^{\frac{n+1}2-1} (-1)^\ell\,\xi^\ell \sum_{t=0}^{\ell} \frac{(-1)^{t}}{\Gamma(\ell-t+\frac12)\,t!}$  (A69)

$= \sum_{\ell=0}^{\frac{n+1}2-1} (-1)^\ell\,\xi^\ell\cdot \frac{(-1)^{\ell+1}}{2\sqrt\pi\,\big(\ell-\frac12\big)\,\ell!}$  (A70)

$= -\frac{1}{2\sqrt\pi} \sum_{\ell=0}^{\frac{n+1}2-1} \frac{\xi^\ell}{\big(\ell-\frac12\big)\,\ell!}$,  (A71)

where in (A68) we set $\ell \triangleq k+j$ (in a diagonal the sum of $k$ and $j$ is constant!) and $t \triangleq k$; and (A70) follows from

$\sum_{t=0}^{\ell} \frac{(-1)^t}{\Gamma(\ell-t+\frac12)\,t!} = \frac{(-1)^{\ell+1}}{2\sqrt\pi\,\big(\ell-\frac12\big)\,\ell!}$.  (A72)

Thus, we can rewrite $S$ in (A67) as follows:

$S = \sum_{j=0}^{\frac{n+1}2-1} \sum_{k=0}^{\infty} \frac{(-1)^j}{\Gamma(j+\frac12)\,k!}\,\xi^{k+j} + \frac{1}{2\sqrt\pi} \sum_{\ell=0}^{\frac{n+1}2-1} \frac{\xi^\ell}{\big(\ell-\frac12\big)\,\ell!}$,  (A73)

and we therefore obtain from (A66)

$\tilde h_n^{(1)}(\xi) = e^{-\xi}(-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \Bigg(\sum_{j=0}^{\frac{n+1}2-1} \sum_{k=0}^{\infty} \frac{(-1)^j}{\Gamma(j+\frac12)\,k!}\,\xi^{k+j} + \frac{1}{2\sqrt\pi} \sum_{\ell=0}^{\infty} \frac{\xi^\ell}{\big(\ell-\frac12\big)\,\ell!}\Bigg)$  (A74)

$= e^{-\xi}(-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \Bigg(\sum_{j=0}^{\frac{n+1}2-1} \frac{(-1)^j\,\xi^j}{\Gamma(j+\frac12)} \underbrace{\sum_{k=0}^\infty \frac{\xi^k}{k!}}_{=\,e^{\xi}} + \underbrace{\sqrt\xi\,\mathrm{erfi}\big(\sqrt\xi\big) - \frac{1}{\sqrt\pi}\,e^{\xi}}_{\text{see (A77)–(A81)}}\Bigg)$  (A75)

$= (-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac n2}\, e^{-\xi}\,\mathrm{erfi}\big(\sqrt\xi\big) + (-1)^{\frac{n+1}2-1}\,\Gamma\big(\tfrac n2\big)\,\xi^{-\frac{n+1}2} \sum_{j=1}^{\frac{n+1}2-1} \frac{(-1)^j\,\xi^j}{\Gamma(j+\frac12)}$.  (A76)

Here, in the last equality we used that $\Gamma(\frac12)=\sqrt\pi$, so that the $j=0$ term of the sum cancels against the term $-\frac{1}{\sqrt\pi}e^{\xi}$; and (A75) can be shown as follows:

$\frac{1}{2\sqrt\pi} \sum_{\ell=0}^{\infty} \frac{\xi^\ell}{\big(\ell-\frac12\big)\,\ell!} = \frac{1}{\sqrt\pi} \sum_{\ell=0}^{\infty} \frac{\ell-\big(\ell-\frac12\big)}{\big(\ell-\frac12\big)\,\ell!}\,\xi^\ell$  (A77)

$= \frac{1}{\sqrt\pi} \sum_{\ell=0}^{\infty} \bigg[\frac{\ell}{\big(\ell-\frac12\big)\,\ell!} - \frac{1}{\ell!}\bigg]\,\xi^\ell$  (A78)

$= \frac{1}{\sqrt\pi} \sum_{\ell=1}^{\infty} \frac{\xi^\ell}{\big(\ell-\frac12\big)\,(\ell-1)!} - \frac{1}{\sqrt\pi} \sum_{\ell=0}^{\infty} \frac{\xi^\ell}{\ell!}$  (A79)

$= \sqrt\xi\cdot\frac{2}{\sqrt\pi} \sum_{\ell=0}^{\infty} \frac{\big(\sqrt\xi\big)^{2\ell+1}}{(2\ell+1)\,\ell!} - \frac{1}{\sqrt\pi}\,e^{\xi}$  (A80)

$= \sqrt\xi\,\mathrm{erfi}\big(\sqrt\xi\big) - \frac{1}{\sqrt\pi}\,e^{\xi}$,  (A81)

where (A80) follows from the series expansion of the exponential function, and (A81) from the series expansion of the imaginary error function [14].

It thus remains to integrate the expression (A76). We only attempt this for the case $n=1$. Using the substitution $z \triangleq \sqrt\xi$, we obtain:

$\tilde h_1(\xi) = \int \sqrt{\frac{\pi}{\xi}}\, e^{-\xi}\,\mathrm{erfi}\big(\sqrt\xi\big)\,\mathrm{d}\xi$  (A82)

$= 4 \int \frac{\sqrt\pi}{2}\, e^{-z^2}\,\mathrm{erfi}(z)\,\mathrm{d}z$  (A83)

$= 4 \int D(z)\,\mathrm{d}z$  (A84)

$= 2z^2\cdot {}_2F_2\big(1,1;\tfrac32,2;-z^2\big) + c$  (A85)

$= 2\xi\cdot {}_2F_2\big(1,1;\tfrac32,2;-\xi\big) + c$,  (A86)

where we used the indefinite integral of Dawson’s function [12]. From the fact that $\tilde h_1(\xi)$ is continuous in $\xi$ and that the expected logarithm is continuous in $\tau$ (Proposition 3), and because the expected logarithm of a central $\chi^2$-distributed RV of one degree of freedom is $-\gamma-\ln(2)$ (see (A46)), it follows from (A56) that the integration constant $c$ in (A86) is

$c = -\gamma - 2\ln(2)$.  (A87)

(One could also take (A55) and evaluate it for $\xi=0$ to see that $\tilde h_1(0) = -\gamma-2\ln(2)$.)
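The equality between the series form (A55) for $n=1$ and the closed form (A86) with constant (A87) can be confirmed numerically; the following sketch (our own addition, with our function names, using the mpmath library for the generalized hypergeometric function ${}_2F_2$) evaluates both:

```python
import mpmath as mp

def h1_series(xi, terms=100):
    # series form (A55) with n = 1:
    # -gamma - 2 ln 2 + e^{-xi} * sum_k xi^k/k! * sum_{j=1}^{k} 1/(j - 1/2)
    xi = mp.mpf(xi)
    inner, total = mp.mpf(0), mp.mpf(0)
    for k in range(terms):
        if k >= 1:
            inner += 1 / (k - mp.mpf('0.5'))
        total += xi**k / mp.factorial(k) * inner
    return -mp.euler - 2*mp.log(2) + mp.exp(-xi) * total

def h1_closed(xi):
    # closed form (A86) with the constant c from (A87)
    xi = mp.mpf(xi)
    return 2*xi*mp.hyp2f2(1, 1, mp.mpf(3)/2, 2, -xi) - mp.euler - 2*mp.log(2)

for xi in (0.1, 1.0, 5.0):
    print(xi, h1_series(xi), h1_closed(xi))
```

Both forms also agree at $\xi=0$, where they reduce to the constant $-\gamma-2\ln(2)$.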

Comparing $\tilde h_1(\cdot)$ with $h_1(\cdot)$ now proves that $\tilde h_1(\xi)=h_1(\xi)$ for all $\xi\ge0$, and thus (54) holds true for $n=1$.

To prove $\tilde h_n(\xi)=h_n(\xi)$ for general $n\in\mathbb N_{\mathrm{odd}}$, we first point out that by comparing (A76) with (52) we see that $\tilde h_n^{(1)}(\xi)=h_n^{(1)}(\xi)$ for all $\xi>0$ and all $n\in\mathbb N_{\mathrm{odd}}$. The case $\xi=0$ follows trivially from (A61).

Next, we use the derivation shown in (A105)–(A108) applied to $\tilde h_n(\cdot)$ and $\tilde h_n^{(1)}(\cdot)$ to show that

$\tilde h_n(\xi) = \tilde h_{n-2}(\xi) + \tilde h_{n-2}^{(1)}(\xi)$.  (A88)

A recursive application of this relation now proves that for all odd $n\ge3$,

$\tilde h_n(\xi) = \tilde h_1(\xi) + \sum_{j=1}^{\frac{n-1}2} \tilde h_{2j-1}^{(1)}(\xi)$  (A89)

(also compare with Corollary 7). Plugging (A86) and (A76) into this, and comparing with (45), proves $\tilde h_n(\xi)=h_n(\xi)$ and thus (54) for all odd $n$.

Appendix B.1.2. Negative Integer Moments

To prove the expression of the negative integer moments, fix some $\ell\in\mathbb N$ with $\ell \le \frac{n-1}2$. (Note that the result for $\ell > \frac{n-1}2$ follows directly from Proposition 1.) We directly focus on $\xi>0$. We need to evaluate

$\mathrm{E}\big[U^{-\ell}\big] = \int_0^\infty u^{-\ell}\cdot \frac12\big(\tfrac u\tau\big)^{\frac{n-2}4} e^{-\frac{\tau+u}2}\, I_{\frac n2-1}\big(\sqrt{\tau u}\big)\,\mathrm{d}u$.  (A90)

Again using the power series (7), we obtain:

$\mathrm{E}\big[U^{-\ell}\big] = e^{-\frac\tau2} \sum_{k=0}^\infty \frac{(\frac\tau2)^k}{k!\,\Gamma(k+\frac n2)\,2^{k+\frac n2}} \int_0^\infty u^{k+\frac n2-\ell-1} e^{-\frac u2}\,\mathrm{d}u$  (A91)

$= e^{-\frac\tau2} \sum_{k=0}^\infty \frac{(\frac\tau2)^k}{k!\,\Gamma(k+\frac n2)\,2^{k+\frac n2}}\; 2^{k+\frac n2-\ell}\,\Gamma\big(k+\tfrac n2-\ell\big)$  (A92)

$= 2^{-\ell}\, e^{-\xi} \sum_{k=0}^\infty \frac{\Gamma(k+\frac n2-\ell)}{k!\,\Gamma(k+\frac n2)}\,\xi^k$  (A93)

$= 2^{-\ell}\, e^{-\xi} \sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{1}{\big(k-1+\frac n2\big)\big(k-2+\frac n2\big)\cdots\big(k-\ell+\frac n2\big)}$.  (A94)

Here, (A92) follows from ([8], Eq. 3.381-4); in (A93) we again use the shorthand (A54); and the last equality (A94) follows because $\Gamma(z+1)=z\,\Gamma(z)$.

Now recall from the equivalence $\tilde h_n^{(1)}(\xi)=h_n^{(1)}(\xi)$ and from (A61) that

$h_n^{(1)}(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{1}{k!\,\big(k+\frac n2\big)}\,\xi^k$.  (A95)

Thus, the second derivative can be computed to be (uniform convergence of the summation in (A95) can be shown similarly to (A221)–(A228)):

$h_n^{(2)}(\xi) = -e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!\,\big(k+\frac n2\big)} + e^{-\xi}\sum_{k=1}^\infty \frac{\xi^{k-1}}{(k-1)!\,\big(k+\frac n2\big)}$  (A96)

$= e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\bigg(\frac{1}{k+1+\frac n2}-\frac{1}{k+\frac n2}\bigg)$  (A97)

$= -e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{1}{\big(k+\frac n2\big)\big(k+1+\frac n2\big)}$,  (A98)

and, in general, the $\ell$th derivative is

$h_n^{(\ell)}(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{(-1)^{\ell-1}\,(\ell-1)!}{\big(k+\frac n2\big)\big(k+1+\frac n2\big)\cdots\big(k+\ell-1+\frac n2\big)}$.  (A99)

The claim (56) for $n\in\mathbb N_{\mathrm{odd}}$ now follows by comparing (A99) with (A94).
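The negative-moment series (A94) lends itself to a quick numerical verification (our own addition; function names and tolerances are ours). At $\tau=0$ it reduces to the familiar central-$\chi^2$ value $\mathrm{E}[U^{-1}]=\frac{1}{n-2}$, and for $\tau>0$ it can be checked by Monte Carlo:

```python
import numpy as np

def neg_moment_series(n, ell, tau, terms=200):
    """Truncation of series (A94): E[U^{-ell}] for odd n, ell <= (n-1)/2."""
    xi = tau / 2.0
    total, term = 0.0, 1.0          # term = xi^k / k!
    for k in range(terms):
        denom = 1.0
        for i in range(1, ell + 1):
            denom *= (k - i + n / 2.0)
        total += term / denom
        term *= xi / (k + 1)
    return 2.0 ** (-ell) * np.exp(-xi) * total

rng = np.random.default_rng(1)
n, ell, tau = 5, 1, 4.0
u = rng.noncentral_chisquare(df=n, nonc=tau, size=2_000_000)
mc = float(np.mean(1.0 / u))
print(neg_moment_series(n, ell, tau), mc)
```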

Appendix B.2. Even Degrees of Freedom

Note that all results regarding $U[n,\tau]$ with $n\in\mathbb N_{\mathrm{even}}$ follow directly from the corresponding results for $V[m,\lambda]$ using Lemma 1.

Appendix B.2.1. Expected Logarithm

The derivation of (55) has been published before in ([1], Lem. 10.1) and ([2], Lem. A.6) (see also [3]). It is similar to the derivation shown in Appendix B.1.1, but easier because in (A51) $\psi(\cdot)$ has an integer argument instead of an integer plus $\frac12$. This leads to an expression corresponding to (A62) with only integers and thus to a much simpler version of (A63):

$\sum_{j=0}^{m-1} \frac{(-1)^j}{j!\,(\ell-j)!} = \frac{(-1)^{m-1}}{\ell\,(m-1)!\,(\ell-m)!}$,  (A100)

containing only one term on the right. The change of variables is similar to the one shown in Figure A1, but again simpler because the sum over the red values in Figure A1 actually equals zero. We omit further details.

Appendix B.2.2. Negative Integer Moments

The derivation of (57) is fully analogous to the derivation shown in Appendix B.1.2. We need to evaluate

$\mathrm{E}\big[V^{-\ell}\big] = \int_0^\infty v^{-\ell}\cdot \big(\tfrac v\lambda\big)^{\frac{m-1}2} e^{-v-\lambda}\, I_{m-1}\big(2\sqrt{\lambda v}\big)\,\mathrm{d}v$.  (A101)

Using the power series (7), we obtain from ([8], Eq. 3.351-3) (using that $m>\ell$)

$\mathrm{E}\big[V^{-\ell}\big] = \lambda^{-\frac{m-1}2}\, e^{-\lambda} \sum_{k=0}^\infty \frac{1}{k!\,\Gamma(k+m)}\,\lambda^{k+\frac{m-1}2} \int_0^\infty v^{k+m-\ell-1} e^{-v}\,\mathrm{d}v$  (A102)

$= e^{-\lambda} \sum_{k=0}^\infty \frac{\lambda^k}{k!}\cdot \frac{1}{(k+m-1)\cdots(k+m-\ell)}$.  (A103)

Using the corresponding expression for the $\ell$th derivative of $g_m(\cdot)$, which is derived similarly to (A98),

$g_m^{(\ell)}(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{(-1)^{\ell-1}\,(\ell-1)!}{(k+m)(k+m+1)\cdots(k+m+\ell-1)}$,  (A104)

we obtain the claimed result.
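The even-degree series (A103) can be checked the same way (our own addition; names and tolerances are ours). Here $V$ can be sampled as $V = U/2$ with $U$ a noncentral $\chi^2$ RV with $2m$ degrees of freedom and noncentrality $2\lambda$:

```python
import numpy as np

def neg_moment_even(m, ell, lam, terms=200):
    """Truncation of series (A103): E[V^{-ell}], requires ell < m."""
    total, term = 0.0, 1.0          # term = lam^k / k!
    for k in range(terms):
        denom = 1.0
        for i in range(1, ell + 1):
            denom *= (k + m - i)
        total += term / denom
        term *= lam / (k + 1)
    return np.exp(-lam) * total

rng = np.random.default_rng(2)
m, ell, lam = 4, 2, 1.5
v = rng.noncentral_chisquare(df=2 * m, nonc=2 * lam, size=1_000_000) / 2.0
mc_even = float(np.mean(v ** -2.0))
print(neg_moment_even(m, ell, lam), mc_even)
```

At $\lambda=0$ the series reduces to the central value $\Gamma(m-\ell)/\Gamma(m)$, which serves as an exact cross-check.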

Appendix C. Proofs of Section 5

Appendix C.1. Proof of Theorems 3 and 5

We start with $h_n(\cdot)$. To prove (76a), we use (A55) to write

$h_{n+2}(\xi) - h_n(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!} \sum_{j=1}^{\frac{n+1}2+k} \frac{1}{j-\frac12} - e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!} \sum_{j=1}^{\frac{n-1}2+k} \frac{1}{j-\frac12}$  (A105)

$= e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{1}{\frac{n+1}2+k-\frac12}$  (A106)

$= e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{1}{\frac n2+k}$  (A107)

$= h_n^{(1)}(\xi)$,  (A108)

where the last equality follows from (A95).

To prove (76b), we use (A99) to write

$h_{n+2}^{(\ell)}(\xi) - h_n^{(\ell)}(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\,(-1)^{\ell-1}(\ell-1)! \bigg[\frac{1}{\big(k+\frac n2+1\big)\big(k+2+\frac n2\big)\cdots\big(k+\ell+\frac n2\big)}$
$\qquad\qquad - \frac{1}{\big(k+\frac n2\big)\big(k+1+\frac n2\big)\cdots\big(k+\ell-1+\frac n2\big)}\bigg]$  (A109)

$= (-1)^{\ell-1}(\ell-1)!\; e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{\big(k+\frac n2\big)-\big(k+\ell+\frac n2\big)}{\big(k+\frac n2\big)\big(k+1+\frac n2\big)\cdots\big(k+\ell+\frac n2\big)}$  (A110)

$= (-1)^{\ell}\,\ell!\; e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\cdot \frac{1}{\big(k+\frac n2\big)\big(k+1+\frac n2\big)\cdots\big(k+\ell+\frac n2\big)}$  (A111)

$= h_n^{(\ell+1)}(\xi)$.  (A112)

The derivations for $g_m(\cdot)$ are fully analogous. In particular, we can use the equivalent of (A55), i.e.,

$g_m(\xi) = -\gamma + e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!} \sum_{j=1}^{k+m-1} \frac1j$,  (A113)

to rewrite the corresponding version of (A105)–(A108). For the interested reader we show a different, slightly more cumbersome proof that directly relies on the definitions of $g_m(\xi)$ and $g_m^{(1)}(\xi)$ in (35) and (39), respectively:

$g_m(\xi) + g_m^{(1)}(\xi) = \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m-1} (-1)^j e^{-\xi} (j-1)!\,\frac{1}{\xi^j} - \sum_{j=1}^{m-1} (-1)^j \frac{(m-1)!}{j\,(m-1-j)!}\,\frac{1}{\xi^j} + (-1)^m (m-1)!\,\frac{e^{-\xi}}{\xi^{m}} - \sum_{i=0}^{m-1} (-1)^{i+m} \frac{(m-1)!}{i!}\,\xi^{i-m}$  (A114)

$= \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m} (-1)^j e^{-\xi} (j-1)!\,\frac{1}{\xi^j} - \sum_{j=1}^{m-1} (-1)^j \frac{(m-1)!}{j\,(m-1-j)!}\,\frac{1}{\xi^j} - \sum_{j=1}^{m} \underbrace{(-1)^{j+2m}}_{=\,(-1)^j} \frac{(m-1)!}{(m-j)!}\,\frac{1}{\xi^{j}}$  (A115)

$= \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m} (-1)^j e^{-\xi} (j-1)!\,\frac{1}{\xi^j} - \sum_{j=1}^{m-1} (-1)^j (m-1)! \underbrace{\bigg[\frac{1}{j\,(m-1-j)!} + \frac{1}{(m-j)!}\bigg]}_{=\,\frac{m}{j\,(m-j)!}} \frac{1}{\xi^j} - (-1)^m \frac{(m-1)!}{0!}\,\frac{1}{\xi^m}$  (A116)

$= \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m} (-1)^j e^{-\xi} (j-1)!\,\frac{1}{\xi^j} - \sum_{j=1}^{m-1} (-1)^j \frac{(m-1)!\,m}{j\,(m-j)!}\,\frac{1}{\xi^j} - (-1)^m \frac{m!}{m\cdot 0!}\,\frac{1}{\xi^m}$  (A117)

$= \ln(\xi) - \mathrm{Ei}(-\xi) + \sum_{j=1}^{m} (-1)^j e^{-\xi} (j-1)!\,\frac{1}{\xi^j} - \sum_{j=1}^{m} (-1)^j \frac{m!}{j\,(m-j)!}\,\frac{1}{\xi^j}$  (A118)

$= g_{m+1}(\xi)$.  (A119)

Here, the first equality follows from the definitions given in (35) and (39); in the subsequent equality we combine the second-last term with the first sum and reorder the last summation by introducing a new counter variable $j \triangleq m-i$; the subsequent three equalities follow from arithmetic rearrangements; and the final equality follows again from definition (35). This proves (63a).

To prove (63b), we use (A104) to write

$g_{m+1}^{(\ell)}(\xi) - g_m^{(\ell)}(\xi) = e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\,(-1)^{\ell-1}(\ell-1)! \bigg[\frac{1}{(k+m+1)\cdots(k+m+\ell)} - \frac{1}{(k+m)\cdots(k+m+\ell-1)}\bigg]$  (A120)

$= e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\,(-1)^{\ell-1}(\ell-1)!\; \frac{(k+m)-(k+m+\ell)}{(k+m)(k+m+1)\cdots(k+m+\ell)}$  (A121)

$= e^{-\xi}\sum_{k=0}^\infty \frac{\xi^k}{k!}\; \frac{(-1)^{\ell}\,\ell!}{(k+m)\cdots(k+m+\ell)}$  (A122)

$= g_m^{(\ell+1)}(\xi)$.  (A123)

Appendix C.2. Proof of Theorems 4 and 6

Using (60) we have

$\frac1\xi - \frac m\xi\, g_m^{(1)}(\xi) = \frac1\xi\, e^{-\xi}\cdot e^{\xi} - \frac m\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{1}{k+m}\cdot\xi^k$  (A124)

$= \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{\xi^k}{k!} - \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{m}{k+m}\cdot\xi^k$  (A125)

$= \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\Big(1-\frac{m}{k+m}\Big)\xi^k$  (A126)

$= e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{k}{k+m}\cdot\xi^{k-1}$  (A127)

$= e^{-\xi} \sum_{k=1}^\infty \frac{1}{(k-1)!}\cdot\frac{1}{k+m}\cdot\xi^{k-1}$  (A128)

$= e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{1}{k+m+1}\cdot\xi^{k}$  (A129)

$= g_{m+1}^{(1)}(\xi)$.  (A130)

Here, the first equality follows from (60); in the subsequent equality we use the series expansion of $e^{\xi}$; the subsequent two equalities follow from algebraic rearrangements; in the next equality we note that for $k=0$ the term in the sum equals zero; the second-last equality then follows from renumbering the terms; and the last equality follows again from (60). This proves (73).
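The recursion just proved, $g_{m+1}^{(1)}(\xi) = \frac1\xi - \frac m\xi\, g_m^{(1)}(\xi)$, is easy to confirm numerically from the series representation (60). The sketch below is our own illustration (function name `gm1` is ours):

```python
import math

def gm1(m, xi, terms=120):
    """Truncation of series (60): g_m^{(1)}(xi) = e^{-xi} sum_k xi^k/(k!(k+m))."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term / (k + m)
        term *= xi / (k + 1)
    return math.exp(-xi) * s

# recursion (73): g_{m+1}^{(1)}(xi) = (1 - m * g_m^{(1)}(xi)) / xi
for m in (1, 2, 5):
    for xi in (0.3, 1.0, 4.0):
        print(m, xi, gm1(m + 1, xi), (1.0 - m * gm1(m, xi)) / xi)
```

For $m=1$ the series also reproduces the elementary closed form $g_1^{(1)}(\xi) = \frac{1-e^{-\xi}}{\xi}$ from (39).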

The derivation of (83) is completely analogous, with $m$ replaced by $\frac n2$ in most places and relying on (A61) instead of (60):

$\frac1\xi - \frac{n}{2\xi}\, h_n^{(1)}(\xi) = \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{\xi^k}{k!} - \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{\frac n2}{k+\frac n2}\cdot\xi^k$  (A131)

$= \frac1\xi\, e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\Big(1-\frac{\frac n2}{k+\frac n2}\Big)\xi^k$  (A132)

$= e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{k}{k+\frac n2}\cdot\xi^{k-1}$  (A133)

$= e^{-\xi} \sum_{k=1}^\infty \frac{1}{(k-1)!}\cdot\frac{1}{k+\frac n2}\cdot\xi^{k-1}$  (A134)

$= e^{-\xi} \sum_{k=0}^\infty \frac{1}{k!}\cdot\frac{1}{k+\frac n2+1}\cdot\xi^{k}$  (A135)

$= h_{n+2}^{(1)}(\xi)$.  (A136)

Appendix D. Proofs of Section 6

Appendix D.1. Proof of Theorems 7 and 8

We start with the proof of Theorem 8, because the derivation of the bounds on $g_m(\cdot)$ depends strongly on the bounds on $g_m^{(1)}(\cdot)$.

We start with the observation that (105) holds with equality for $\xi=0$. Moreover, we notice that the bound is asymptotically tight, too:

$\lim_{\xi\to\infty} g_m^{(1)}(\xi) = 0$,  (A137)
$\lim_{\xi\to\infty} \frac{1}{\xi+m} = 0$  (A138)

(the first equality follows directly from (39)). Since additionally both functions $\xi\mapsto g_m^{(1)}(\xi)$ and $\xi\mapsto\frac{1}{\xi+m}$ are monotonically strictly decreasing and strictly convex, they cannot cross. So, it suffices to find some $\xi$ for which (105) is satisfied. We pick $\xi=1$ and check:

$g_m^{(1)}(1) = (-1)^m (m-1)! \Bigg[e^{-1} - \sum_{j=0}^{m-1} \frac{(-1)^j}{j!}\Bigg]$  (A139)

$= (-1)^m (m-1)! \Bigg[\sum_{j=0}^{\infty} \frac{(-1)^j}{j!} - \sum_{j=0}^{m-1} \frac{(-1)^j}{j!}\Bigg]$  (A140)

$= (-1)^m (m-1)! \sum_{j=m}^{\infty} \frac{(-1)^j}{j!}$  (A141)

$= \sum_{j=m,\,m+2,\,m+4,\ldots} \bigg[\frac{(-1)^{j+m}\,(m-1)!}{j!} + \frac{(-1)^{j+1+m}\,(m-1)!}{(j+1)!}\bigg]$  (A142)

$= \sum_{j=m,\,m+2,\,m+4,\ldots} \bigg[\frac{(m-1)!}{j!} - \frac{(m-1)!}{(j+1)!}\bigg]$  (A143)

$= \sum_{j=m,\,m+2,\,m+4,\ldots} \frac{(m-1)!\;j}{(j+1)!}$  (A144)

$> \frac{(m-1)!\;m}{(m+1)!}$  (A145)

$= \frac{1}{m+1}$.  (A146)

Here, (A141) follows from the series expansion of the exponential function; in (A142) we split the sum into two sums over the even and odd values of $j$; (A143) holds because $j+m$ is even and $j+m+1$ is odd; and the inequality (A145) follows from dropping all terms in the sum (they are all positive!) apart from the first.
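The check at $\xi=1$, together with the two bounds (105) and (107), can be verified numerically from the series representation of $g_m^{(1)}(\cdot)$; the sketch below is our own illustration (the function name `g_deriv` is ours):

```python
import math

def g_deriv(m, xi, terms=120):
    """Series representation g_m^{(1)}(xi) = e^{-xi} sum_k xi^k/(k!(k+m))."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term / (k + m)
        term *= xi / (k + 1)
    return math.exp(-xi) * s

for m in (1, 2, 6):
    # (A139)-(A146): the value at xi = 1 exceeds 1/(m+1)
    print(m, g_deriv(m, 1.0), 1.0 / (m + 1))
```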

Next, we turn to (106). From Theorem 4 we have for any $m\in\mathbb N$,

$g_m^{(1)}(\xi) = \frac{1 - \xi\, g_{m+1}^{(1)}(\xi)}{m}$  (A147)

$\le \frac{1 - \xi\,\frac{1}{\xi+m+1}}{m}$  (A148)

$= \frac{m+1}{m\,(\xi+m+1)}$,  (A149)

where the inequality follows from (105).

To derive (107), we first look at the case $m\ge2$ and consider the difference between the expression of the upper bound and $g_m^{(1)}(\xi)$:

$\frac{1}{\xi+m-1} - g_m^{(1)}(\xi) = \frac{1}{\xi+m-1} - \frac1\xi + \frac{m-1}{\xi}\, g_{m-1}^{(1)}(\xi)$  (A150)

$\ge \frac{1}{\xi+m-1} - \frac1\xi + \frac{m-1}{\xi}\cdot\frac{1}{\xi+m-1}$  (A151)

$= 0$,  (A152)

where the first equality follows from Theorem 4 and the subsequent inequality from the lower bound (105). For $m=1$ and $\xi=0$, (107) holds trivially, so it remains to show the case $m=1$ and $\xi>0$. This follows directly from (39):

$g_1^{(1)}(\xi) = \frac{1-e^{-\xi}}{\xi} \le \frac1\xi$.  (A153)

We next address the claims in Theorem 7.

The upper bound (101) has been proven before in ([15], App. B) and is based on Jensen’s inequality:

$g_m(\lambda) = \mathrm{E}\big[\ln V[m,\lambda]\big]$  (A154)

$\le \ln \mathrm{E}\big[V[m,\lambda]\big]$  (A155)

$= \ln(m+\lambda)$.  (A156)

The lower bound (99) follows from a slightly more complicated argument: Note that both $\xi\mapsto g_m(\xi)$ and $\xi\mapsto\ln(\xi+m-1)$ are monotonically strictly increasing and strictly concave functions (see Proposition 6). Hence, they can cross at most twice. Asymptotically as $\xi\to\infty$ the two functions coincide, i.e., this corresponds to one of these “crossings.” (This can be seen directly from (A156).) So, they can cross at most once more for finite $\xi$. For $\xi=0$, we have

$g_m(0) = \psi(m) > \ln(m-1)$  (A157)

for all $m\in\mathbb N$ (where for $m=1$ we take $\ln(0)=-\infty$); see, e.g., ([16], Eq. (94)).

By contradiction, let us assume for the moment that there is another crossing at a finite value. At that value, the slope of $\xi\mapsto\ln(\xi+m-1)$ is larger than the slope of $\xi\mapsto g_m(\xi)$. Since asymptotically the two functions coincide again, there must exist some value $\xi_0$ such that for $\xi>\xi_0$ the slope of $\xi\mapsto\ln(\xi+m-1)$ is strictly smaller than the slope of $\xi\mapsto g_m(\xi)$. We know from (107), however, that

$\frac{\partial}{\partial\xi}\ln(\xi+m-1) = \frac{1}{\xi+m-1} \ge g_m^{(1)}(\xi), \qquad \xi\ge0$,  (A158)

which leads to a contradiction. Thus, there cannot be another crossing, and $\ln(\xi+m-1)$ must be strictly smaller than $g_m(\xi)$ for all $\xi\ge0$.

The lower and upper bounds (100) and (102) rely on the fundamental theorem of calculus:

$g_m(\xi) - g_m(0) = \int_0^\xi g_m^{(1)}(t)\,\mathrm{d}t$  (A159)

$\ge \int_0^\xi \frac{1}{t+m}\,\mathrm{d}t$  (A160)

$= \Big[\ln(t+m)\Big]_0^\xi$  (A161)

$= \ln(\xi+m) - \ln(m)$,  (A162)

where the inequality follows from (105). Thus,

$g_m(\xi) \ge \ln(\xi+m) - \ln(m) + g_m(0)$  (A163)

$= \ln(\xi+m) - \ln(m) + \psi(m)$.  (A164)

Similarly,

$g_m(\xi) = \int_0^\xi g_m^{(1)}(t)\,\mathrm{d}t + g_m(0)$  (A165)

$\le \int_0^\xi \frac{m+1}{m\,(t+m+1)}\,\mathrm{d}t + \psi(m)$  (A166)

$= \frac{m+1}{m}\,\ln(\xi+m+1) - \frac{m+1}{m}\,\ln(m+1) + \psi(m)$,  (A167)

where the inequality follows from (106).
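The resulting sandwich $\ln(\xi+m-1) < g_m(\xi) \le \ln(\xi+m)$ can be observed numerically from the series representation (A113); the following sketch is our own illustration (the function name `g_series` is ours):

```python
import math

def g_series(m, xi, terms=120):
    """Truncation of series (A113): g_m(xi) = -gamma + e^{-xi} sum_k xi^k/k! * H_{k+m-1}."""
    gamma_em = 0.5772156649015329
    s, term = 0.0, 1.0
    harm = sum(1.0 / j for j in range(1, m))   # harmonic number H_{m-1}
    for k in range(terms):
        s += term * harm
        harm += 1.0 / (k + m)                  # becomes H_{k+m} for the next k
        term *= xi / (k + 1)
    return -gamma_em + math.exp(-xi) * s

for m in (1, 3, 10):
    for xi in (0.5, 2.0, 10.0):
        print(m, xi, math.log(xi + m - 1), g_series(m, xi), math.log(xi + m))
```

At $\xi=0$ the series reduces to $g_m(0)=\psi(m)$, e.g. $g_1(0)=-\gamma$.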

Appendix D.2. Proof of Theorems 9 and 10

We start with the proof of Theorem 10, because the derivation of the bounds on $h_n(\cdot)$ depends strongly on the bounds on $h_n^{(1)}(\cdot)$.

We start with the observation that (117) holds with equality for $\xi=0$. Moreover, we notice that the bound is asymptotically tight, too:

$\lim_{\xi\to\infty} h_n^{(1)}(\xi) = 0$,  (A168)
$\lim_{\xi\to\infty} \frac{1}{\xi+\frac n2} = 0$  (A169)

(the first equality follows directly from (52)). Since additionally both functions $\xi\mapsto h_n^{(1)}(\xi)$ and $\xi\mapsto\frac{1}{\xi+\frac n2}$ are monotonically strictly decreasing and strictly convex, they cannot cross. So, it suffices to find some $\xi$ for which (117) is satisfied. We pick $\xi=1$ and check:

$h_n^{(1)}(1) = (-1)^{\frac{n-1}2}\,\Gamma\big(\tfrac n2\big) \Bigg[e^{-1}\,\mathrm{erfi}(1) + \sum_{j=1}^{\frac{n-1}2} \frac{(-1)^j}{\Gamma(j+\frac12)}\Bigg]$  (A170)

$= (-1)^{\frac{n-1}2}\,\Gamma\big(\tfrac n2\big) \Bigg[\sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+\frac32)} - \sum_{k=0}^{\frac{n-3}2} \frac{(-1)^k}{\Gamma(k+\frac32)}\Bigg]$  (A171)

$= (-1)^{\frac{n-1}2}\,\Gamma\big(\tfrac n2\big) \sum_{k=\frac{n-1}2}^{\infty} \frac{(-1)^k}{\Gamma(k+\frac32)}$  (A172)

$= \sum_{k=\frac{n-1}2}^{\infty} \frac{(-1)^{k+\frac{n-1}2}\,\Gamma(\frac n2)}{\Gamma(k+\frac32)}$  (A173)

$= \sum_{k=\frac{n-1}2,\,\frac{n-1}2+2,\,\frac{n-1}2+4,\ldots} \Bigg[\frac{\Gamma(\frac n2)}{\Gamma(k+\frac32)} - \frac{\Gamma(\frac n2)}{\Gamma(k+\frac52)}\Bigg]$  (A174)

$= \sum_{k=\frac{n-1}2,\,\frac{n-1}2+2,\,\frac{n-1}2+4,\ldots} \frac{\big(k+\frac32-1\big)\,\Gamma(\frac n2)}{\big(k+\frac32\big)\,\Gamma\big(k+\frac32\big)}$  (A175)

$> \frac{\big(\frac{n-1}2+\frac12\big)\,\Gamma(\frac n2)}{\big(\frac{n-1}2+\frac32\big)\,\Gamma\big(\frac{n-1}2+\frac32\big)}$  (A176)

$= \frac{\frac n2\,\Gamma(\frac n2)}{\big(\frac n2+1\big)\,\Gamma\big(\frac n2+1\big)}$  (A177)

$= \frac{\Gamma(\frac n2+1)}{\big(\frac n2+1\big)\,\Gamma\big(\frac n2+1\big)}$  (A178)

$= \frac{1}{\frac n2+1}$.  (A179)

Here, (A171) follows from the series expansion of Dawson’s function [12],

$\frac{\sqrt\pi}{2}\, e^{-\xi}\,\mathrm{erfi}\big(\sqrt\xi\big) = D\big(\sqrt\xi\big) = \frac{\sqrt\pi}{2} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+\frac32)}\,\xi^{k+\frac12}$,  (A180)

and from the substitution $k \triangleq j-1$; in (A174) we split the sum into two sums over the even and odd values of $k$; in (A175) we combine the terms using the relation $z\,\Gamma(z)=\Gamma(z+1)$; the inequality (A176) follows from dropping all terms in the sum (they are all positive!) apart from the first; and (A178) follows again from $z\,\Gamma(z)=\Gamma(z+1)$.

Next we turn to (118). From Theorem 6 we have for any $n\in\mathbb N_{\mathrm{odd}}$,

$h_n^{(1)}(\xi) = \frac{1 - \xi\, h_{n+2}^{(1)}(\xi)}{\frac n2}$  (A181)

$\le \frac{1 - \xi\,\frac{1}{\xi+\frac{n+2}2}}{\frac n2}$  (A182)

$= \frac{n+2}{n\,\big(\xi+\frac n2+1\big)}$,  (A183)

where the inequality follows from (117).

To derive (119a), we consider the difference between the expression of the upper bound and $h_n^{(1)}(\xi)$:

$\frac{1}{\xi+\frac n2-1} - h_n^{(1)}(\xi) = \frac{1}{\xi+\frac n2-1} - \frac1\xi + \frac{\frac n2-1}{\xi}\, h_{n-2}^{(1)}(\xi)$  (A184)

$\ge \frac{1}{\xi+\frac n2-1} - \frac1\xi + \frac{\frac n2-1}{\xi}\cdot \frac{1}{\xi+\frac n2-1}$  (A185)

$= 0$,  (A186)

where the first equality follows from Theorem 6 (with $n\ge3$) and the subsequent inequality from the lower bound (117). For a derivation of (119b), we start with (A61):

$h_1^{(1)}(\xi) = e^{-\xi} \sum_{k=0}^{\infty} \frac{1}{k!\,\big(k+\frac12\big)}\cdot\xi^k$  (A187)

$= \frac2\xi\, e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^{k+1}}{k!\,(2k+1)}$  (A188)

$\le \frac2\xi\, e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^{k+1}}{k!\,(k+1)}$  (A189)

$= \frac2\xi\, e^{-\xi} \sum_{k=0}^{\infty} \frac{\xi^{k+1}}{(k+1)!}$  (A190)

$= \frac2\xi\, e^{-\xi} \sum_{k=1}^{\infty} \frac{\xi^{k}}{k!}$  (A191)

$= \frac2\xi\, e^{-\xi} \Bigg(\sum_{k=0}^{\infty} \frac{\xi^{k}}{k!} - 1\Bigg)$  (A192)

$= \frac2\xi\, e^{-\xi}\big(e^{\xi}-1\big)$  (A193)

$= \frac2\xi\big(1-e^{-\xi}\big)$.  (A194)

Here, the inequality holds because $2k+1 \ge k+1$; and in (A193) we again rely on the series expansion of the exponential function. The weaker version of (119b) follows directly from this because $e^{-\xi} \ge 0$. This finishes the proof of Theorem 10.
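The chain (A187)–(A194) can be spot-checked numerically; the sketch below (our own addition, with our function name `h1_deriv`) evaluates the series (A61) for $n=1$ and compares it against the bound $\frac2\xi(1-e^{-\xi})$:

```python
import math

def h1_deriv(xi, terms=120):
    """Truncation of series (A61) for n = 1: h_1^{(1)}(xi) = e^{-xi} sum_k xi^k/(k!(k+1/2))."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term / (k + 0.5)
        term *= xi / (k + 1)
    return math.exp(-xi) * s

for xi in (0.2, 1.0, 3.0, 10.0):
    print(xi, h1_deriv(xi), 2.0 / xi * (1.0 - math.exp(-xi)))
```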

We next address the claims in Theorem 9.

The upper bound (112) is based on Jensen’s inequality:

$\ln(2) + h_n\big(\tfrac\tau2\big) = \mathrm{E}\big[\ln U[n,\tau]\big]$  (A195)

$\le \ln \mathrm{E}\big[U[n,\tau]\big]$  (A196)

$= \ln(n+\tau)$.  (A197)

Thus,

$h_n(\xi) \le \ln(n+2\xi) - \ln(2) = \ln\big(\tfrac n2+\xi\big)$.  (A198)

The lower bound (110a) follows from a slightly more complicated argument. Note that for $n\ge3$, both $\xi\mapsto h_n(\xi)$ and $\xi\mapsto\ln(\xi+\frac n2-1)$ are monotonically strictly increasing and strictly concave functions (see Proposition 10). Hence, they can cross at most twice. Asymptotically as $\xi\to\infty$ the two functions coincide (this can be seen directly from (A198)), i.e., this corresponds to one of these “crossings.” So, they can cross at most once more for finite $\xi$. For $\xi=0$, we have

$h_n(0) = \psi\big(\tfrac n2\big) > \ln\big(\tfrac n2-1\big)$  (A199)

for all $n\ge3$; see, e.g., ([16], Eq. (94)). By contradiction, let us assume for the moment that there is another crossing at a finite value. At that value, the slope of $\xi\mapsto\ln(\xi+\frac n2-1)$ is larger than the slope of $\xi\mapsto h_n(\xi)$. Since asymptotically the two functions coincide again, there must exist some value $\xi_0$ such that for $\xi>\xi_0$ the slope of $\xi\mapsto\ln(\xi+\frac n2-1)$ is strictly smaller than the slope of $\xi\mapsto h_n(\xi)$. We know from (119a), however, that

$\frac{\partial}{\partial\xi}\ln\big(\xi+\tfrac n2-1\big) = \frac{1}{\xi+\frac n2-1} \ge h_n^{(1)}(\xi), \qquad \xi\ge0$,  (A200)

which leads to a contradiction. Thus, there cannot be another crossing, and $\ln(\xi+\frac n2-1)$ must be strictly smaller than $h_n(\xi)$ for all $\xi\ge0$ and $n\ge3$.

To derive the lower bound (110b), we use (76a) in Theorem 5 and apply (110a) and (119b):

$h_1(\xi) = h_3(\xi) - h_1^{(1)}(\xi)$  (A201)

$\ge \ln\big(\xi+\tfrac12\big) - \frac2\xi\big(1-e^{-\xi}\big)$.  (A202)

The upper and lower bounds (111) and (113) rely on the fundamental theorem of calculus:

$h_n(\xi) - h_n(0) = \int_0^\xi h_n^{(1)}(t)\,\mathrm{d}t$  (A203)

$\ge \int_0^\xi \frac{1}{t+\frac n2}\,\mathrm{d}t$  (A204)

$= \Big[\ln\big(t+\tfrac n2\big)\Big]_0^\xi$  (A205)

$= \ln\big(\xi+\tfrac n2\big) - \ln\big(\tfrac n2\big)$,  (A206)

where the inequality follows from (117). Thus,

$h_n(\xi) \ge \ln\big(\xi+\tfrac n2\big) - \ln\big(\tfrac n2\big) + h_n(0)$  (A207)

$= \ln\big(\xi+\tfrac n2\big) - \ln\big(\tfrac n2\big) + \psi\big(\tfrac n2\big)$.  (A208)

Similarly,

$h_n(\xi) = \int_0^\xi h_n^{(1)}(t)\,\mathrm{d}t + h_n(0)$  (A209)

$\le \int_0^\xi \frac{n+2}{n\,\big(t+\frac n2+1\big)}\,\mathrm{d}t + \psi\big(\tfrac n2\big)$  (A210)

$= \frac{n+2}{n}\,\ln\big(\xi+\tfrac n2+1\big) - \frac{n+2}{n}\,\ln\big(\tfrac n2+1\big) + \psi\big(\tfrac n2\big)$,  (A211)

where the inequality follows from (118).
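Analogously to the even case, the sandwich $\ln(\xi+\frac n2-1) < h_n(\xi) \le \ln(\frac n2+\xi)$ for odd $n\ge3$ can be observed numerically from the series (A55); the sketch below is our own illustration (the function name `h_series` is ours):

```python
import math

def h_series(n, xi, terms=150):
    """Truncation of series (A55): h_n(xi) for odd n."""
    gamma_em = 0.5772156649015329
    s, term = 0.0, 1.0
    inner = sum(1.0 / (j - 0.5) for j in range(1, (n - 1) // 2 + 1))
    for k in range(terms):
        s += term * inner
        inner += 1.0 / ((n - 1) // 2 + k + 0.5)  # extend inner sum for next k
        term *= xi / (k + 1)
    return -gamma_em - 2.0 * math.log(2.0) + math.exp(-xi) * s

for n in (3, 5, 11):
    for xi in (0.5, 2.0, 20.0):
        print(n, xi, math.log(xi + n/2 - 1), h_series(n, xi), math.log(n/2 + xi))
```

At $\xi=0$ the series reduces to $h_n(0)=\psi(\frac n2)$, e.g. $h_5(0)=\psi(2.5)\approx0.70316$.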

Appendix E. Uniform Convergence

In the following we prove uniform convergence using Weierstrass’ M-test ([13], Sec. 8.11): an infinite sum $\sum_{k=0}^\infty f_k(x)$ converges uniformly for all $x\in\mathcal X\subseteq\mathbb R$ if we can find constants $M_k$ that do not depend on $x$, that satisfy

$|f_k(x)| \le M_k, \qquad \forall\, x\in\mathcal X$,  (A212)

and whose sum converges:

$\sum_{k=0}^\infty M_k < \infty$.  (A213)

The condition (A213) can be confirmed by d’Alembert’s ratio test [17]: if

$\lim_{k\to\infty} \frac{M_{k+1}}{M_k} < 1$,  (A214)

then the sum in (A213) indeed converges.

Appendix E.1. Uniform Convergence of (A49)

We assume $u\ge0$ and note that $x\mapsto x^{k+\frac n2-1}e^{-x}$ attains its maximum at $x=k+\frac n2-1$. Thus,

$\Bigg|\frac{(\frac\tau2)^k}{k!\,\Gamma(k+\frac n2)}\big(\tfrac u2\big)^{k+\frac n2-1} e^{-\frac u2}\Bigg| \le \frac{(\frac\tau2)^k}{k!\,\Gamma(k+\frac n2)}\,\big(k+\tfrac n2-1\big)^{k+\frac n2-1} e^{-(k+\frac n2-1)}$  (A215)

$= \frac{\big(\frac\tau2\big)^k\, e^{(k+\frac n2-1)\ln(k+\frac n2-1)-(k+\frac n2-1)}}{k!\,\Gamma(k+\frac n2)} \triangleq M_k$.  (A216)

Next, we verify that

$\lim_{k\to\infty} \frac{M_{k+1}}{M_k} = \lim_{k\to\infty} \frac{\big(\frac\tau2\big)^{k+1}\, e^{(k+\frac n2)\ln(k+\frac n2)-(k+\frac n2)}}{(k+1)!\,\Gamma(k+\frac n2+1)}\cdot \frac{k!\,\Gamma(k+\frac n2)}{\big(\frac\tau2\big)^{k}\, e^{(k+\frac n2-1)\ln(k+\frac n2-1)-(k+\frac n2-1)}}$  (A217)

$= \lim_{k\to\infty} \frac\tau2\cdot \frac{e^{(k+\frac n2)\ln(k+\frac n2)-(k+\frac n2-1)\ln(k+\frac n2-1)-1}}{(k+1)\big(k+\frac n2\big)}$  (A218)

$= \lim_{k\to\infty} \frac\tau2\,\big(k+\tfrac n2-1\big)\, \frac{e^{(k+\frac n2)\ln\big(1+\frac{1}{k+n/2-1}\big)-1}}{(k+1)\big(k+\frac n2\big)} = 0$,  (A219)

because

$\lim_{k\to\infty} e^{(k+\frac n2)\ln\big(1+\frac{1}{k+n/2-1}\big)-1} = 1$.  (A220)

Thus, we see that Weierstrass’ M-test is satisfied and that therefore (A49) converges uniformly for all $u\ge0$.

Appendix E.2. Uniform Convergence of (A55)

We note that for any $0\le\xi\le\Xi$:

$\Bigg|e^{-\xi}\,\frac{\xi^k}{k!} \sum_{j=1}^{\frac{n-1}2+k} \frac{1}{j-\frac12}\Bigg| \le \frac{\Xi^k}{k!} \sum_{j=1}^{\frac{n-1}2+k} \frac{1}{j-\frac12}$  (A221)

$= \frac{\Xi^k}{k!} \Bigg(2 + \sum_{j=2}^{\frac{n-1}2+k} \frac{1}{j-\frac12}\Bigg)$  (A222)

$\le \frac{\Xi^k}{k!} \Bigg(2 + \sum_{j=2}^{\frac{n-1}2+k} \frac{1}{j-1}\Bigg)$  (A223)

$= \frac{\Xi^k}{k!} \Bigg(2 + \sum_{j=1}^{\frac{n-1}2+k-1} \frac{1}{j}\Bigg)$  (A224)

$\le \frac{\Xi^k}{k!} \Big(2 + \frac{n-1}2+k-1\Big)$  (A225)

$= \frac{\Xi^k}{k!} \Big(k+\frac n2+\frac12\Big) \triangleq M_k$.  (A226)

Since

$\lim_{k\to\infty} \frac{M_{k+1}}{M_k} = \lim_{k\to\infty} \frac{\Xi^{k+1}\,\big(k+\frac n2+\frac32\big)}{(k+1)!}\cdot \frac{k!}{\Xi^{k}\,\big(k+\frac n2+\frac12\big)}$  (A227)

$= \lim_{k\to\infty} \frac{\Xi\,\big(k+\frac n2+\frac32\big)}{(k+1)\big(k+\frac n2+\frac12\big)} = 0$,  (A228)

we see that Weierstrass’ M-test is satisfied and that therefore (A55) converges uniformly on every interval $0\le\xi\le\Xi$ with finite $\Xi$.

Funding

This work was started while the author stayed at NCTU and was at that time supported by the Industrial Technology Research Institute (ITRI), Zhudong, Taiwan, under JRC NCTU-ITRI and by the National Science Council under NSC 95-2221-E-009-046.

Conflicts of Interest

The author declares no conflict of interest.

References

  • 1. Lapidoth A., Moser S.M. Capacity Bounds via Duality with Applications to Multiple-Antenna Systems on Flat Fading Channels. IEEE Trans. Inf. Theory. 2003;49(10):2426–2467. doi: 10.1109/TIT.2003.817449.
  • 2. Moser S.M. Duality-Based Bounds on Channel Capacity. Ph.D. Thesis, ETH Zürich, Zürich, Switzerland, 2004. Diss. ETH No. 15769.
  • 3. Lapidoth A., Moser S.M. The Expected Logarithm of a Noncentral Chi-Square Random Variable. Available online: https://moser-isi.ethz.ch/explog.html (accessed on 17 September 2020).
  • 4. Moser S.M. Some Expectations of a Non-Central Chi-Square Distribution With an Even Number of Degrees of Freedom. In Proceedings of the IEEE International Region 10 Conference (TENCON), Taipei, Taiwan, 31 October–2 November 2007.
  • 5. Moser S.M. Expectations of a Noncentral Chi-Square Distribution With Application to IID MIMO Gaussian Fading. In Proceedings of the IEEE International Symposium on Information Theory and Its Applications, Auckland, New Zealand, 7–10 December 2008; pp. 495–500.
  • 6. Lozano A., Tulino A.M., Verdú S. High-SNR Power Offset in Multiantenna Communication. IEEE Trans. Inf. Theory. 2005;51(12):4134–4151. doi: 10.1109/TIT.2005.858937.
  • 7. Merhav N., Sason I. An Integral Representation of the Logarithmic Function with Applications in Information Theory. Entropy. 2020;22:51. doi: 10.3390/e22010051.
  • 8. Gradshteyn I.S., Ryzhik I.M. Table of Integrals, Series, and Products, 7th ed.; Jeffrey A., Zwillinger D., Eds.; Academic Press: San Diego, CA, USA, 2007.
  • 9. Weisstein E.W. Erfi. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/Erfi.html (accessed on 17 September 2020).
  • 10. Johnson N.L., Kotz S., Balakrishnan N. Continuous Univariate Distributions, 2nd ed.; Volume 2; Wiley: New York, NY, USA, 1995.
  • 11. Weisstein E.W. Incomplete Gamma Function. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/IncompleteGammaFunction.html (accessed on 17 September 2020).
  • 12. Weisstein E.W. Dawson’s Integral. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/DawsonsIntegral.html (accessed on 17 September 2020).
  • 13. Priestley H.A. Introduction to Integration; Oxford University Press: Oxford, UK, 1997.
  • 14. Weisstein E.W. Maclaurin Series. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/MaclaurinSeries.html (accessed on 17 September 2020).
  • 15. Moser S.M. The Fading Number of Memoryless Multiple-Input Multiple-Output Fading Channels. IEEE Trans. Inf. Theory. 2007;53(7):2652–2666. doi: 10.1109/TIT.2007.899512.
  • 16. Blagouchine I.V. Three Notes on Ser’s and Hasse’s Representations for the Zeta-Functions. INTEGERS. 2018;18A:1–45.
  • 17. Weisstein E.W. Ratio Test. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/RatioTest.html (accessed on 17 September 2020).
