Entropy. 2018 Sep 19;20(9):719. doi: 10.3390/e20090719

Rate Distortion Function of Gaussian Asymptotically WSS Vector Processes

Jesús Gutiérrez-Gutiérrez 1,*, Marta Zárraga-Rodríguez 1, Pedro M Crespo 1, Xabier Insausti 1
PMCID: PMC7513241  PMID: 33265808

Abstract

In this paper, we obtain an integral formula for the rate distortion function (RDF) of any Gaussian asymptotically wide sense stationary (AWSS) vector process. Applying this result, we also obtain an integral formula for the RDF of Gaussian moving average (MA) vector processes and of Gaussian autoregressive MA (ARMA) AWSS vector processes.

Keywords: rate distortion function, Gaussian vector processes, MA vector processes, ARMA vector processes, AWSS vector processes

1. Introduction

The present paper focuses on the derivation of a closed-form expression for the rate distortion function (RDF) of a wide class of vector processes. As stated in [1,2], there exist very few journal papers in the literature that present closed-form expressions for the RDF of non-stationary processes, and just one of them deals with non-stationary vector processes [3]. In the present paper, we obtain an integral formula for the RDF of any real Gaussian asymptotically wide sense stationary (AWSS) vector process. This new formula generalizes the one given in 1956 by Kolmogorov [4] for real Gaussian stationary processes and the one given in 1971 by Toms and Berger [3] for real Gaussian autoregressive (AR) AWSS vector processes of finite order. Applying this new formula, we also obtain an integral formula for the RDF of real Gaussian moving average (MA) vector processes of infinite order and for the RDF of real Gaussian ARMA AWSS vector processes of infinite order. AR, MA and ARMA vector processes are frequently used to model multivariate time series (see, e.g., [5]).

The definition of the AWSS process was first given by Gray (see [6,7]), and it is based on his concept of asymptotically equivalent sequences of matrices [8]. The integral formulas given in the present paper are obtained by using some recent results on such sequences of matrices [9,10,11,12].

The paper is organized as follows. In Section 2, we set up notation, and we review the concepts of AWSS, MA and ARMA vector processes and the Kolmogorov formula for the RDF of a real Gaussian vector. In Section 3, we obtain an integral formula for the RDF of any Gaussian AWSS vector process. In Section 4, we obtain an integral formula for the RDF of Gaussian MA vector processes and of Gaussian ARMA AWSS vector processes. We finish the paper with a numerical example where the RDF of a Gaussian AWSS vector process is computed.

2. Preliminaries

2.1. Notation

In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$ and $\mathbb{C}$ denote the set of natural numbers (i.e., the set of positive integers), the set of integer numbers, the set of (finite) real numbers and the set of (finite) complex numbers, respectively. If $m,n\in\mathbb{N}$, then $\mathbb{C}^{m\times n}$, $0_{m\times n}$ and $I_n$ are the set of all $m\times n$ complex matrices, the $m\times n$ zero matrix and the $n\times n$ identity matrix, respectively. The symbols $\top$ and $*$ denote transpose and conjugate transpose, respectively. $E$ stands for expectation; $\mathrm{i}$ is the imaginary unit; $\operatorname{tr}$ denotes trace; $\delta$ stands for the Kronecker delta; and $\lambda_k(A)$, $k\in\{1,\ldots,n\}$, are the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order.

Let $A_n$ and $B_n$ be $nN\times nN$ matrices for all $n\in\mathbb{N}$. We write $\{A_n\}\sim\{B_n\}$ if the sequences $\{A_n\}$ and $\{B_n\}$ are asymptotically equivalent (see ([9], p. 5673)), that is:

$$\exists M\in[0,\infty):\ \left\|A_n\right\|_2,\left\|B_n\right\|_2\le M\qquad\forall n\in\mathbb{N}$$

and:

$$\lim_{n\to\infty}\frac{\left\|A_n-B_n\right\|_F}{\sqrt{n}}=0,$$

where $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the spectral norm and the Frobenius norm, respectively. The original definition of asymptotically equivalent sequences of matrices, where $N=1$, was given by Gray (see ([6], Section 2.3) or [8]).
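For intuition, both defining conditions can be monitored numerically. The following Python sketch is only an illustration (the function name and its list-based interface are ours, not part of the cited works): it reports, for each $n$, the largest spectral norm and the normalized distance $\frac{\|A_n-B_n\|_F}{\sqrt{n}}$, which should remain bounded and tend to zero, respectively, when $\{A_n\}\sim\{B_n\}$.

import numpy as np

def equivalence_diagnostics(A_seq, B_seq):
    # A_seq[n-1] and B_seq[n-1] hold the nN x nN matrices A_n and B_n.
    # Returns (n, max spectral norm, ||A_n - B_n||_F / sqrt(n)) for each n.
    out = []
    for n, (A, B) in enumerate(zip(A_seq, B_seq), start=1):
        spectral = max(np.linalg.norm(A, 2), np.linalg.norm(B, 2))
        frobenius = np.linalg.norm(A - B, 'fro') / np.sqrt(n)
        out.append((n, spectral, frobenius))
    return out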

Let $\{x_n:n\in\mathbb{N}\}$ be a random $N$-dimensional vector process, i.e., $x_n$ is a random (column) vector of dimension $N$ for all $n\in\mathbb{N}$. We denote by $x_{n:1}$ the random vector of dimension $nN$ given by:

$$x_{n:1}:=\begin{pmatrix}x_n\\x_{n-1}\\x_{n-2}\\\vdots\\x_1\end{pmatrix},\qquad n\in\mathbb{N}.$$

Consider a matrix-valued function of a real variable $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. For every $n\in\mathbb{N}$, we denote by $T_n(X)$ the $n\times n$ block Toeplitz matrix with $N\times N$ blocks given by:

$$T_n(X):=\left(X_{j-k}\right)_{j,k=1}^{n}=\begin{pmatrix}X_0&X_{-1}&X_{-2}&\cdots&X_{1-n}\\X_1&X_0&X_{-1}&\cdots&X_{2-n}\\X_2&X_1&X_0&\cdots&X_{3-n}\\\vdots&\vdots&\vdots&\ddots&\vdots\\X_{n-1}&X_{n-2}&X_{n-3}&\cdots&X_0\end{pmatrix},$$

where $\{X_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $X$:

$$X_k=\frac{1}{2\pi}\int_0^{2\pi}e^{-k\omega\mathrm{i}}X(\omega)\,d\omega\qquad\forall k\in\mathbb{Z}.$$
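For illustration, $T_n(X)$ can be assembled directly from a finitely supported coefficient sequence. The Python sketch below is our own (the helper name, the dictionary-based interface and the use of NumPy are implementation choices, not part of the paper):

import numpy as np

def block_toeplitz(coeffs, n):
    # coeffs: dict mapping an integer k to the N x N Fourier coefficient X_k;
    # indices absent from the dict are treated as zero blocks.
    # Returns T_n(X) = (X_{j-k})_{j,k=1}^{n} as an (nN) x (nN) complex array.
    N = next(iter(coeffs.values())).shape[0]
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            block = coeffs.get(j - k)
            if block is not None:
                T[j * N:(j + 1) * N, k * N:(k + 1) * N] = block
    return T

# Example: a block tridiagonal symbol with X_0 = I and X_1 = X_{-1} = 0.5 I
T3 = block_toeplitz({0: np.eye(2), 1: 0.5 * np.eye(2), -1: 0.5 * np.eye(2)}, 3)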

2.2. AWSS Vector Processes

We first review the well-known concept of the WSS vector process.

Definition 1.

Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_n:n\in\mathbb{N}\}$ is said to be WSS (or weakly stationary) with power spectral density (PSD) $X$ if it has constant mean (i.e., $E(x_{n_1})=E(x_{n_2})$ for all $n_1,n_2\in\mathbb{N}$) and $\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\}=\{T_n(X)\}$.

We now review the definition of the AWSS vector process given in ([11], Definition 7.1).

Definition 2.

Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_n:n\in\mathbb{N}\}$ is said to be AWSS with asymptotic PSD (APSD) $X$ if it has constant mean and $\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\}\sim\{T_n(X)\}$.

Definition 2 was first introduced by Gray for the case N=1 (see, e.g., ([6], p. 225)).

2.3. MA and ARMA Vector Processes

We first review the concept of a real zero-mean MA vector process (of infinite order).

Definition 3.

A real zero-mean random $N$-dimensional vector process $\{x_n:n\in\mathbb{N}\}$ is said to be MA if:

$$x_n=w_n+\sum_{j=1}^{n-1}G_{-j}w_{n-j}\qquad\forall n\in\mathbb{N},\tag{1}$$

where $G_{-j}$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_n:n\in\mathbb{N}\}$ is a real zero-mean random $N$-dimensional vector process and $E\left(w_{n_1}w_{n_2}^\top\right)=\delta_{n_1,n_2}\Lambda$ for all $n_1,n_2\in\mathbb{N}$, with $\Lambda$ being an $N\times N$ positive definite matrix.

The MA vector process $\{x_n:n\in\mathbb{N}\}$ in Equation (1) is of finite order if there exists $q\in\mathbb{N}$ such that $G_{-j}=0_{N\times N}$ for all $j>q$. In this case, $\{x_n:n\in\mathbb{N}\}$ is called an MA($q$) vector process (see, e.g., ([5], Section 2.1)).

Secondly, we review the concept of a real zero-mean ARMA vector process (of infinite order).

Definition 4.

A real zero-mean random $N$-dimensional vector process $\{x_n:n\in\mathbb{N}\}$ is said to be ARMA if:

$$x_n=w_n+\sum_{j=1}^{n-1}G_{-j}w_{n-j}-\sum_{j=1}^{n-1}F_{-j}x_{n-j}\qquad\forall n\in\mathbb{N},\tag{2}$$

where $G_{-j}$ and $F_{-j}$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_n:n\in\mathbb{N}\}$ is a real zero-mean random $N$-dimensional vector process and $E\left(w_{n_1}w_{n_2}^\top\right)=\delta_{n_1,n_2}\Lambda$ for all $n_1,n_2\in\mathbb{N}$, with $\Lambda$ being an $N\times N$ positive definite matrix.

The ARMA vector process $\{x_n:n\in\mathbb{N}\}$ in Equation (2) is of finite order if there exist $p,q\in\mathbb{N}$ such that $F_{-j}=0_{N\times N}$ for all $j>p$ and $G_{-j}=0_{N\times N}$ for all $j>q$. In this case, $\{x_n:n\in\mathbb{N}\}$ is called an ARMA($p$,$q$) vector process (see, e.g., ([5], Section 1.2.2)).

2.4. RDF of Gaussian Vectors

Let $\{x_n:n\in\mathbb{N}\}$ be a real zero-mean Gaussian $N$-dimensional vector process such that $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive definite for all $n\in\mathbb{N}$. If $n\in\mathbb{N}$, from [4] we know that the RDF of the real zero-mean Gaussian vector $x_{n:1}$ is given by:

$$R_n(D)=\frac{1}{nN}\sum_{k=1}^{nN}\max\left(0,\frac{1}{2}\ln\frac{\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\theta_n}\right)\tag{3}$$

with $D\in\left(0,\frac{\operatorname{tr}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{nN}\right]$ and where $\theta_n$ is the real number satisfying:

$$D=\frac{1}{nN}\sum_{k=1}^{nN}\min\left(\theta_n,\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right).$$

The RDF of the real zero-mean Gaussian vector process $\{x_n:n\in\mathbb{N}\}$ is given by:

$$R(D):=\lim_{n\to\infty}R_n(D)$$

whenever this limit exists.
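Equation (3) is the classical reverse water-filling solution: $\theta_n$ allocates the distortion budget across the eigenvalues, and only eigenvalues above $\theta_n$ contribute rate. As an illustration (our own sketch; the function name and bisection tolerance are assumptions, and rates are in nats), Equation (3) can be evaluated as follows:

import numpy as np

def gaussian_vector_rdf(Sigma, D, tol=1e-12):
    # RDF (in nats per scalar component) of a zero-mean Gaussian vector with
    # positive definite covariance Sigma, following Equation (3).
    lam = np.linalg.eigvalsh(Sigma)          # eigenvalues of E(x_{n:1} x_{n:1}^T)
    assert 0 < D <= lam.mean(), "D must lie in (0, tr(Sigma)/(nN)]"
    lo, hi = 0.0, float(lam.max())           # the distortion grows with theta_n
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.minimum(mid, lam).mean() < D:  # distortion attained at theta_n = mid
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()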

3. Integral Formula for the RDF of Gaussian AWSS Vector Processes

Theorem 1.

Let $\{x_n:n\in\mathbb{N}\}$ be a real zero-mean Gaussian AWSS $N$-dimensional vector process with APSD $X$. Suppose that $X(\omega)$ is positive definite for all $\omega\in\mathbb{R}$ and that $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive definite for all $n\in\mathbb{N}$. If $D\in\left(0,\frac{\operatorname{tr}(X_0)}{N}\right)$, then:

$$R(D)=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\max\left(0,\ln\frac{\lambda_k(X(\omega))}{\theta}\right)d\omega\tag{4}$$

is the operational RDF of $\{x_n:n\in\mathbb{N}\}$, where $\theta$ is the real number satisfying:

$$D=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(\theta,\lambda_k(X(\omega))\right)d\omega.\tag{5}$$

Proof. 

See Appendix A. ☐
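Numerically, Equations (4) and (5) can be evaluated by solving Equation (5) for $\theta$ with a bisection and approximating both integrals by Riemann sums. The sketch below is our own illustration (the grid size, tolerance and function name are assumptions); it takes the APSD as a callable returning the $N\times N$ Hermitian matrix $X(\omega)$:

import numpy as np

def awss_rdf(X, D, n_grid=4096, tol=1e-12):
    # Approximates Equations (4) and (5) by Riemann sums over [0, 2*pi).
    omegas = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    lam = np.array([np.linalg.eigvalsh(X(w)) for w in omegas])  # shape (n_grid, N)
    assert 0 < D < lam.mean(), "D must lie in (0, tr(X_0)/N)"   # lam.mean() ~ tr(X_0)/N
    lo, hi = 0.0, float(lam.max())
    while hi - lo > tol:                                        # bisection on Equation (5)
        mid = 0.5 * (lo + hi)
        if np.minimum(mid, lam).mean() < D:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()    # Equation (4)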

Corollary 1.

Let $\{x_n:n\in\mathbb{N}\}$ be a real zero-mean Gaussian WSS $N$-dimensional vector process with PSD $X$. Suppose that $X(\omega)$ is positive definite for all $\omega\in\mathbb{R}$. If $D\in\left(0,\frac{\operatorname{tr}(X_0)}{N}\right)$, then:

$$R(D)=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\max\left(0,\ln\frac{\lambda_k(X(\omega))}{\theta}\right)d\omega,\tag{6}$$

where $\theta$ is the real number satisfying:

$$D=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(\theta,\lambda_k(X(\omega))\right)d\omega.$$

Proof. 

See Appendix B. ☐

The integral formula given in Equation (6) was presented by Kafedziski in ([13], Equation (20)). However, the proof that he proposed was not complete: although Kafedziski pointed out that ([13], Equation (20)) can be directly proven by applying the Szegő theorem for block Toeplitz matrices ([14], Theorem 3), the Szegő theorem cannot be applied here, since the parameter $\theta$ that appears in the expression of $R_n(D)$ in ([13], Equation (7)) depends on $n$, as it does in Equation (3). It should also be mentioned that the set of WSS vector processes that he considered was smaller: he only considered WSS vector processes with PSD in the Wiener class. A function $X:\mathbb{R}\to\mathbb{C}^{N\times N}$ is said to be in the Wiener class if it is continuous and $2\pi$-periodic and satisfies $\sum_{k=-\infty}^{\infty}\left|[X_k]_{r,s}\right|<\infty$ for all $r,s\in\{1,\ldots,N\}$ (see, e.g., ([11], Appendix B)).

4. Applications

4.1. Integral Formula for the RDF of Gaussian MA Vector Processes

Theorem 2.

Let $\{x_n:n\in\mathbb{N}\}$ be as in Definition 3. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_k=0_{N\times N}$ for all $k>0$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Then:

1. $\{x_n:n\in\mathbb{N}\}$ is AWSS with APSD $X(\omega)=G(\omega)\Lambda(G(\omega))^*$ for all $\omega\in\mathbb{R}$.

2. If $\{x_n:n\in\mathbb{N}\}$ is Gaussian, $\det(G(\omega))\neq 0$ for all $\omega\in\mathbb{R}$, and $D\in\left(0,\frac{\operatorname{tr}(X_0)}{N}\right)$, then:

$$R(D)=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\max\left(0,\ln\frac{\lambda_k\left(G(\omega)\Lambda(G(\omega))^*\right)}{\theta}\right)d\omega,$$

where $\theta$ is the real number satisfying:

$$D=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(\theta,\lambda_k\left(G(\omega)\Lambda(G(\omega))^*\right)\right)d\omega.$$

Proof. 

See Appendix C. ☐
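With the Fourier-coefficient convention of Section 2.1, the symbol in Theorem 2 works out to $G(\omega)=I_N+\sum_{j=1}^{\infty}G_{-j}e^{-j\omega\mathrm{i}}$. For a finite-order MA process, the eigenvalues $\lambda_k\left(G(\omega)\Lambda(G(\omega))^*\right)$ entering the two integrals can be evaluated as in the following sketch (our own illustration; the helper name and interface are assumptions):

import numpy as np

def ma_apsd_eigs(G_list, Lam, omega):
    # Eigenvalues of X(omega) = G(omega) Lam G(omega)^* for an MA(q) process,
    # where G_list = [G_{-1}, ..., G_{-q}] and
    # G(omega) = I_N + sum_{j>=1} G_{-j} exp(-i j omega).
    N = Lam.shape[0]
    G = np.eye(N, dtype=complex)
    for j, Gmj in enumerate(G_list, start=1):
        G = G + Gmj * np.exp(-1j * j * omega)
    X = G @ Lam @ G.conj().T
    return np.linalg.eigvalsh(X)

Feeding these eigenvalues to the bisection sketched after the proof of Theorem 1 then yields the RDF of the MA process.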

4.2. Integral Formula for the RDF of Gaussian ARMA AWSS Vector Processes

Theorem 3.

Let $\{x_n:n\in\mathbb{N}\}$ be as in Definition 4. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_k=0_{N\times N}$ for all $k>0$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0=I_N$ and $F_k=0_{N\times N}$ for all $k>0$, is the sequence of Fourier coefficients of a function $F:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Assume that $\left\{\left\|(T_n(F))^{-1}\right\|_2\right\}$ is bounded and $\det(F(\omega))\neq 0$ for all $\omega\in\mathbb{R}$. Then:

1. $\{x_n:n\in\mathbb{N}\}$ is AWSS with APSD $X(\omega)=(F(\omega))^{-1}G(\omega)\Lambda\left((F(\omega))^{-1}G(\omega)\right)^*$ for all $\omega\in\mathbb{R}$.

2. If $\{x_n:n\in\mathbb{N}\}$ is Gaussian, $\det(G(\omega))\neq 0$ for all $\omega\in\mathbb{R}$, and $D\in\left(0,\frac{\operatorname{tr}(X_0)}{N}\right)$, then:

$$R(D)=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\max\left(0,\ln\frac{\lambda_k\left((F(\omega))^{-1}G(\omega)\Lambda\left((F(\omega))^{-1}G(\omega)\right)^*\right)}{\theta}\right)d\omega,$$

where $\theta$ is the real number satisfying:

$$D=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(\theta,\lambda_k\left((F(\omega))^{-1}G(\omega)\Lambda\left((F(\omega))^{-1}G(\omega)\right)^*\right)\right)d\omega.$$

Proof. 

See Appendix D. ☐

5. Numerical Example

We finish the paper with a numerical example where the RDF of a Gaussian AWSS vector process is computed. Specifically, we compute the RDF of the MA(1) vector process considered in ([5], Example 2.1), by assuming that it is Gaussian.

Let $\{x_n:n\in\mathbb{N}\}$ be as in Definition 3 with $N=2$,

$$G_{-1}=\begin{pmatrix}0.8&0.7\\-0.4&0.6\end{pmatrix},$$

$G_{-j}=0_{2\times 2}$ for all $j>1$, and:

$$\Lambda=\begin{pmatrix}4&1\\1&2\end{pmatrix}.$$

Assume that $\{x_n:n\in\mathbb{N}\}$ is Gaussian. Since $\frac{\operatorname{tr}(X_0)}{N}=\frac{\operatorname{tr}(\Lambda)+\operatorname{tr}\left(G_{-1}\Lambda G_{-1}^\top\right)}{2}=\frac{6+5.54}{2}=5.77$, Figure 1 shows $R(D)$ for $D\in(0,5.77)$, computed using Theorem 2.

Figure 1. Rate Distortion Function (RDF) of the Gaussian MA vector process considered.
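A short Python computation along the lines of the sketches above (our own illustration; the grid size, tolerance and the sign reconstruction of $G_{-1}$ follow the text) confirms $\operatorname{tr}(X_0)/N=5.77$ and produces points of the curve in Figure 1:

import numpy as np

G_minus1 = np.array([[0.8, 0.7], [-0.4, 0.6]])   # G_{-1} as given above
Lam = np.array([[4.0, 1.0], [1.0, 2.0]])
I2 = np.eye(2)

omegas = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
lam = np.array([np.linalg.eigvalsh((I2 + G_minus1 * np.exp(-1j * w)) @ Lam
                                   @ (I2 + G_minus1 * np.exp(-1j * w)).conj().T)
                for w in omegas])

print(lam.mean())        # approximates tr(X_0)/N = (6 + 5.54)/2 = 5.77

def R(D, tol=1e-12):     # Theorem 2 via bisection on theta (rates in nats)
    lo, hi = 0.0, float(lam.max())
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.minimum(mid, lam).mean() < D else (lo, mid)
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()

for D in (0.5, 2.0, 4.0):
    print(D, R(D))       # sample points on the curve of Figure 1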

Appendix A. Proof of Theorem 1

Proof. 

We divide the proof into six steps.

Step 1: We show that there exists $n_0\in\mathbb{N}$ such that $\theta_n$ in Equation (3) exists for all $n\ge n_0$, or equivalently, such that $D\in\left(0,\frac{\operatorname{tr}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{nN}\right]$ for all $n\ge n_0$.

Since $\left\{E\left(x_{n:1}x_{n:1}^\top\right)\right\}=\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\}\sim\{T_n(X)\}$, applying ([11], Theorem 6.6) yields:

$$\begin{aligned}\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)&=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\lambda_k(X(\omega))\,d\omega=\frac{1}{2\pi N}\int_0^{2\pi}\operatorname{tr}(X(\omega))\,d\omega\\&=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}[X(\omega)]_{k,k}\,d\omega=\frac{1}{2\pi N}\sum_{k=1}^{N}\int_0^{2\pi}[X(\omega)]_{k,k}\,d\omega\\&=\frac{1}{2\pi N}\sum_{k=1}^{N}\left[\int_0^{2\pi}X(\omega)\,d\omega\right]_{k,k}=\frac{1}{2\pi N}\operatorname{tr}\left(\int_0^{2\pi}X(\omega)\,d\omega\right)\\&=\frac{1}{N}\operatorname{tr}\left(\frac{1}{2\pi}\int_0^{2\pi}X(\omega)\,d\omega\right)=\frac{\operatorname{tr}(X_0)}{N}.\end{aligned}\tag{A1}$$

Consequently, as $D\in\left(0,\frac{\operatorname{tr}(X_0)}{N}\right)$, there exists $n_0\in\mathbb{N}$ such that:

$$\left|\frac{1}{nN}\sum_{k=1}^{nN}\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)-\frac{\operatorname{tr}(X_0)}{N}\right|<\frac{\operatorname{tr}(X_0)}{N}-D\qquad\forall n\ge n_0.$$

Therefore, since:

$$\frac{\operatorname{tr}(X_0)}{N}-\frac{1}{nN}\sum_{k=1}^{nN}\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\le\left|\frac{1}{nN}\sum_{k=1}^{nN}\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)-\frac{\operatorname{tr}(X_0)}{N}\right|\qquad\forall n\ge n_0,$$

we obtain:

$$D<\frac{1}{nN}\sum_{k=1}^{nN}\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=\frac{\operatorname{tr}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{nN}\qquad\forall n\ge n_0.\tag{A2}$$

Step 2: We prove that the sequence of real numbers $\{\theta_n\}_{n\ge n_0}$ is bounded.

From Equation (A2), we have $\theta_n<\lambda_1\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)$ for all $n\ge n_0$. As $\left\{E\left(x_{n:1}x_{n:1}^\top\right)\right\}\sim\{T_n(X)\}$, there exists $M\in[0,\infty)$ such that $\left\|E\left(x_{n:1}x_{n:1}^\top\right)\right\|_2,\left\|T_n(X)\right\|_2\le M$ for all $n\in\mathbb{N}$. Thus,

$$0<D=\frac{1}{nN}\sum_{k=1}^{nN}\min\left(\theta_n,\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right)\le\frac{1}{nN}\sum_{k=1}^{nN}\theta_n=\theta_n<\lambda_1\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=\left\|E\left(x_{n:1}x_{n:1}^\top\right)\right\|_2\le M\qquad\forall n\ge n_0.$$

Step 3: We show that if $\{\theta_{\sigma(n)}\}$ is a convergent subsequence of $\{\theta_n\}_{n\ge n_0}$, then $\lim_{n\to\infty}\theta_{\sigma(n)}=\theta$.

We denote by $\hat{\theta}$ the limit of $\{\theta_{\sigma(n)}\}$. We need to prove that $\hat{\theta}=\theta$.

Since $0<D\le\theta_n$ for all $n\ge n_0$, we have $0<D\le\hat{\theta}$. Let $\{\hat{\theta}_n\}$ be the sequence of real numbers such that $\hat{\theta}_{\sigma(n)}=\theta_{\sigma(n)}$ for all $n\in\mathbb{N}$ and $\hat{\theta}_n=\hat{\theta}$ for all $n\in\mathbb{N}\setminus\sigma(\mathbb{N})$. Obviously, $\lim_{n\to\infty}\hat{\theta}_n=\hat{\theta}$ and $0<\hat{\theta}_n$ for all $n\in\mathbb{N}$. As $\lim_{n\to\infty}\frac{1}{\hat{\theta}_n}=\frac{1}{\hat{\theta}}$ and $\operatorname{rank}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=nN$ for all $n\in\mathbb{N}$, applying ([12], Lemma 1) yields $\left\{\frac{1}{\hat{\theta}_n}E\left(x_{n:1}x_{n:1}^\top\right)\right\}\sim\left\{\frac{1}{\hat{\theta}}T_n(X)\right\}$. From ([11], Lemma 4.2), we obtain $\frac{1}{\hat{\theta}}T_n(X)=T_n\left(\frac{1}{\hat{\theta}}X\right)$. Hence, applying ([11], Theorem 6.6) yields:

$$\begin{aligned}D&=\lim_{n\to\infty}\frac{1}{\sigma(n)N}\sum_{k=1}^{\sigma(n)N}\min\left(\theta_{\sigma(n)},\lambda_k\left(E\left(x_{\sigma(n):1}x_{\sigma(n):1}^\top\right)\right)\right)\\&=\lim_{n\to\infty}\theta_{\sigma(n)}\,\frac{1}{\sigma(n)N}\sum_{k=1}^{\sigma(n)N}\min\left(1,\frac{\lambda_k\left(E\left(x_{\sigma(n):1}x_{\sigma(n):1}^\top\right)\right)}{\theta_{\sigma(n)}}\right)\\&=\hat{\theta}\lim_{n\to\infty}\frac{1}{\sigma(n)N}\sum_{k=1}^{\sigma(n)N}\min\left(1,\frac{\lambda_k\left(E\left(x_{\sigma(n):1}x_{\sigma(n):1}^\top\right)\right)}{\hat{\theta}_{\sigma(n)}}\right)\\&=\hat{\theta}\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\min\left(1,\frac{\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\hat{\theta}_n}\right)=\hat{\theta}\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\min\left(1,\lambda_k\left(\frac{1}{\hat{\theta}_n}E\left(x_{n:1}x_{n:1}^\top\right)\right)\right)\\&=\hat{\theta}\,\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(1,\lambda_k\left(\frac{1}{\hat{\theta}}X(\omega)\right)\right)d\omega=\hat{\theta}\,\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(1,\frac{\lambda_k(X(\omega))}{\hat{\theta}}\right)d\omega\\&=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\min\left(\hat{\theta},\lambda_k(X(\omega))\right)d\omega.\end{aligned}$$

Thus, $\hat{\theta}$ is a real number satisfying Equation (5). Since $D<\frac{\operatorname{tr}(X_0)}{N}=\frac{1}{2\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\lambda_k(X(\omega))\,d\omega$, there exists a unique real number $\theta$ satisfying Equation (5), and consequently, $\hat{\theta}=\theta$.

Step 4: We prove that $\lim_{n\to\infty}\theta_n=\theta$. From Steps 2 and 3, we have $\liminf_{n\to\infty}\theta_n=\limsup_{n\to\infty}\theta_n=\theta$. Consequently, the sequence of real numbers $\{\theta_n\}_{n\ge n_0}$ is convergent, and its limit is $\theta$ (see, e.g., ([15], p. 57)).

Step 5: We show that Equation (4) holds.

Let $\{\hat{\theta}_n\}$ be the sequence of positive numbers defined in Step 3 for the case in which $\{\sigma(n)\}=\{n+n_0-1\}$, that is, $\hat{\theta}_n=\theta_n$ if $n\ge n_0$ and $\hat{\theta}_n=\theta$ if $n<n_0$. From ([11], Theorem 6.6), we obtain:

$$\begin{aligned}R(D)&=\lim_{n\to\infty}R_n(D)=\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\max\left(0,\frac{1}{2}\ln\frac{\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\theta_n}\right)\\&=\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\frac{1}{2}\ln\left(\max\left(1,\frac{\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\theta_n}\right)\right)=\frac{1}{2}\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\ln\left(\max\left(1,\frac{\lambda_k\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\hat{\theta}_n}\right)\right)\\&=\frac{1}{2}\lim_{n\to\infty}\frac{1}{nN}\sum_{k=1}^{nN}\ln\left(\max\left(1,\lambda_k\left(\frac{1}{\hat{\theta}_n}E\left(x_{n:1}x_{n:1}^\top\right)\right)\right)\right)=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\ln\left(\max\left(1,\lambda_k\left(\frac{1}{\theta}X(\omega)\right)\right)\right)d\omega\\&=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\ln\left(\max\left(1,\frac{\lambda_k(X(\omega))}{\theta}\right)\right)d\omega=\frac{1}{4\pi N}\int_0^{2\pi}\sum_{k=1}^{N}\max\left(0,\ln\frac{\lambda_k(X(\omega))}{\theta}\right)d\omega.\end{aligned}$$

Step 6: We prove that Equation (4) is the operational RDF of $\{x_n:n\in\mathbb{N}\}$. Following the same arguments that Gray used in [16] for Gaussian AR AWSS one-dimensional processes, to prove the negative (converse) and the positive (achievability) coding theorems we only need to show that the sequence $d_{\max}(n)$ defined in ([17], p. 490) is bounded. Hence, Equation (A1) finishes the proof. ☐

Appendix B. Proof of Corollary 1

Proof. 

Since $X(\omega)$ is positive definite for all $\omega\in\mathbb{R}$, from ([11], Theorem 4.4) and ([18], Corollary VI.1.6), we have:

$$0<\min_{\omega\in[0,2\pi]}\lambda_N(X(\omega))=\inf_{\omega\in[0,2\pi]}\lambda_N(X(\omega))\le\lambda_{nN}\left(T_n(X)\right)=\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^*\right)\right)=\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)$$

for all $n\in\mathbb{N}$, and consequently, $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive definite for all $n\in\mathbb{N}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3) yields $\left\{E\left(x_{n:1}x_{n:1}^\top\right)\right\}=\{T_n(X)\}\sim\{T_n(X)\}$. The proof finishes by applying Theorem 1. ☐

Appendix C. Proof of Theorem 2

Proof. 

(1) From Equation (1), we have:

$$\begin{pmatrix}x_n\\x_{n-1}\\x_{n-2}\\\vdots\\x_1\end{pmatrix}=\begin{pmatrix}I_N&G_{-1}&G_{-2}&\cdots&G_{1-n}\\0_{N\times N}&I_N&G_{-1}&\cdots&G_{2-n}\\0_{N\times N}&0_{N\times N}&I_N&\cdots&G_{3-n}\\\vdots&\vdots&\vdots&\ddots&\vdots\\0_{N\times N}&0_{N\times N}&0_{N\times N}&\cdots&I_N\end{pmatrix}\begin{pmatrix}w_n\\w_{n-1}\\w_{n-2}\\\vdots\\w_1\end{pmatrix},$$

or more compactly,

$$x_{n:1}=T_n(G)w_{n:1}$$

for all $n\in\mathbb{N}$. Consequently,

$$x_{n:1}x_{n:1}^*=T_n(G)w_{n:1}w_{n:1}^*(T_n(G))^*\qquad\forall n\in\mathbb{N},$$

and applying ([11], Lemma 4.2) yields:

$$E\left(x_{n:1}x_{n:1}^*\right)=T_n(G)E\left(w_{n:1}w_{n:1}^*\right)(T_n(G))^*=T_n(G)T_n(\Lambda)(T_n(G))^*=T_n(G)T_n(\Lambda)T_n(G^*),\tag{A3}$$

where $G^*(\omega):=(G(\omega))^*$ for all $\omega\in\mathbb{R}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3), we obtain $\{T_n(G)\}\sim\{T_n(G)\}$. Moreover, applying ([10], Theorem 3) yields $\left\{T_n(\Lambda)T_n(G^*)\right\}\sim\left\{T_n(\Lambda G^*)\right\}$. Hence, from ([10], Lemma 2) and ([10], Theorem 3), we have:

$$\left\{E\left(x_{n:1}x_{n:1}^\top\right)\right\}=\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\}=\left\{T_n(G)T_n(\Lambda)T_n(G^*)\right\}\sim\left\{T_n(G)T_n(\Lambda G^*)\right\}\sim\left\{T_n(G\Lambda G^*)\right\}=\{T_n(X)\}.\tag{A4}$$

Thus, as the relation $\sim$ is transitive (see ([11], Lemma 3.1)), $\{x_n:n\in\mathbb{N}\}$ is AWSS with APSD $X$.

(2) First, we prove that $X(\omega)$ is positive definite for all $\omega\in\mathbb{R}$. Fix $\omega\in\mathbb{R}$, and consider $y\in\mathbb{C}^{N\times 1}$. Since $\Lambda$ is positive definite, we have:

$$y^*X(\omega)y=y^*G(\omega)\Lambda(G(\omega))^*y=\left((G(\omega))^*y\right)^*\Lambda\left((G(\omega))^*y\right)>0$$

whenever $(G(\omega))^*y\neq 0_{N\times 1}$. As $\det(G(\omega))\neq 0$, $(G(\omega))^*y=0_{N\times 1}$ if and only if $y=\left((G(\omega))^*\right)^{-1}0_{N\times 1}=0_{N\times 1}$, and consequently, $X(\omega)$ is positive definite.

Secondly, we prove that $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive definite for all $n\in\mathbb{N}$. To do that, we only need to show that $\det\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\neq 0$ for all $n\in\mathbb{N}$, because, being a correlation matrix, $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive semidefinite. We have:

$$\det\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=\det\left(T_n(G)T_n(\Lambda)T_n(G^*)\right)=\det\left(T_n(G)\right)\det\left(T_n(\Lambda)\right)\det\left(T_n(G^*)\right)=\left|\det\left(T_n(G)\right)\right|^2(\det(\Lambda))^n=\left|(\det(I_N))^n\right|^2(\det(\Lambda))^n=(\det(\Lambda))^n\neq 0\tag{A5}$$

for all $n\in\mathbb{N}$.

The result now follows from Theorem 1. ☐

Appendix D. Proof of Theorem 3

Proof. 

(1) From Equation (2), we have:

$$\sum_{j=0}^{n-1}F_{-j}x_{n-j}=\sum_{j=0}^{n-1}G_{-j}w_{n-j},$$

or more compactly,

$$T_n(F)x_{n:1}=T_n(G)w_{n:1}$$

for all $n\in\mathbb{N}$. Consequently,

$$T_n(F)x_{n:1}x_{n:1}^*(T_n(F))^*=\left(T_n(F)x_{n:1}\right)\left(T_n(F)x_{n:1}\right)^*=T_n(G)w_{n:1}w_{n:1}^*(T_n(G))^*\qquad\forall n\in\mathbb{N},$$

and applying Equation (A3) yields:

$$T_n(F)E\left(x_{n:1}x_{n:1}^\top\right)(T_n(F))^*=T_n(F)E\left(x_{n:1}x_{n:1}^*\right)(T_n(F))^*=T_n(G)E\left(w_{n:1}w_{n:1}^*\right)(T_n(G))^*=T_n(G)T_n(\Lambda)T_n(G^*).$$

Since $\det(T_n(F))=(\det(I_N))^n=1\neq 0$ for all $n\in\mathbb{N}$, we obtain:

$$E\left(x_{n:1}x_{n:1}^\top\right)=(T_n(F))^{-1}T_n(G)T_n(\Lambda)T_n(G^*)\left((T_n(F))^*\right)^{-1}=\left((T_n(F))^*T_n(F)\right)^{-1}(T_n(F))^*T_n(G)T_n(\Lambda)T_n(G^*)T_n(F)\left((T_n(F))^*T_n(F)\right)^{-1}.$$

From Equation (A4) and the fact that the relation $\sim$ is transitive (see ([11], Lemma 3.1)), we have $\left\{T_n(G)T_n(\Lambda)T_n(G^*)\right\}\sim\left\{T_n(G\Lambda G^*)\right\}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3) yields $\{T_n(F)\}\sim\{T_n(F)\}$. Therefore, applying ([10], Lemma 2) and ([10], Theorem 3), we obtain:

$$\left\{T_n(G)T_n(\Lambda)T_n(G^*)T_n(F)\right\}\sim\left\{T_n(G\Lambda G^*)T_n(F)\right\}\sim\left\{T_n(G\Lambda G^*F)\right\}.$$

Using ([11], Lemma 3.1) yields $\left\{(T_n(F))^*\right\}\sim\left\{(T_n(F))^*\right\}$, and applying ([10], Lemma 2), ([11], Lemma 4.2), and ([10], Theorem 3), we have:

$$\left\{(T_n(F))^*T_n(G)T_n(\Lambda)T_n(G^*)T_n(F)\right\}\sim\left\{(T_n(F))^*T_n(G\Lambda G^*F)\right\}=\left\{T_n(F^*)T_n(G\Lambda G^*F)\right\}\sim\left\{T_n(F^*G\Lambda G^*F)\right\},\tag{A6}$$

where $F^*(\omega):=(F(\omega))^*$ for all $\omega\in\mathbb{R}$. If $\omega\in\mathbb{R}$ and $y\in\mathbb{C}^{N\times 1}$, then:

$$y^*(F(\omega))^*F(\omega)y=\left(F(\omega)y\right)^*F(\omega)y=\left\|F(\omega)y\right\|_2^2>0$$

whenever $F(\omega)y\neq 0_{N\times 1}$. As $\det(F(\omega))\neq 0$, $F(\omega)y=0_{N\times 1}$ if and only if $y=(F(\omega))^{-1}0_{N\times 1}=0_{N\times 1}$; hence, $(F(\omega))^*F(\omega)$ is positive definite for all $\omega\in\mathbb{R}$, and applying ([11], Theorem 4.4) and ([18], Corollary VI.1.6) yields:

$$0<\min_{\omega\in[0,2\pi]}\lambda_N\left((F(\omega))^*F(\omega)\right)=\inf_{\omega\in[0,2\pi]}\lambda_N\left((F(\omega))^*F(\omega)\right)\le\lambda_{nN}\left(T_n(F^*F)\right)\qquad\forall n\in\mathbb{N}.$$

Thus,

$$\left\|\left(T_n(F^*F)\right)^{-1}\right\|_2=\max_{k\in\{1,\ldots,nN\}}\left|\lambda_k\left(\left(T_n(F^*F)\right)^{-1}\right)\right|=\max_{k\in\{1,\ldots,nN\}}\left|\frac{1}{\lambda_k\left(T_n(F^*F)\right)}\right|=\max_{k\in\{1,\ldots,nN\}}\frac{1}{\lambda_k\left(T_n(F^*F)\right)}=\frac{1}{\lambda_{nN}\left(T_n(F^*F)\right)}\le\frac{1}{\min_{\omega\in[0,2\pi]}\lambda_N\left((F(\omega))^*F(\omega)\right)}$$

for all $n\in\mathbb{N}$. Observe that $\left\{\left\|\left((T_n(F))^*T_n(F)\right)^{-1}\right\|_2\right\}$ is also bounded, because:

$$\left\|\left((T_n(F))^*T_n(F)\right)^{-1}\right\|_2=\left\|(T_n(F))^{-1}\left((T_n(F))^*\right)^{-1}\right\|_2\le\left\|(T_n(F))^{-1}\right\|_2\left\|\left((T_n(F))^*\right)^{-1}\right\|_2=\left\|(T_n(F))^{-1}\right\|_2^2\qquad\forall n\in\mathbb{N}.$$

Moreover, from ([10], Theorem 3), we obtain $\left\{(T_n(F))^*T_n(F)\right\}=\left\{T_n(F^*)T_n(F)\right\}\sim\left\{T_n(F^*F)\right\}$. Consequently, applying Lemma A1 (see Appendix E) and ([11], Theorem 6.4) yields:

$$\left\{\left((T_n(F))^*T_n(F)\right)^{-1}\right\}\sim\left\{\left(T_n(F^*F)\right)^{-1}\right\}\sim\left\{T_n\left((F^*F)^{-1}\right)\right\}=\left\{T_n\left(F^{-1}(F^*)^{-1}\right)\right\}.\tag{A7}$$

Therefore, from Equation (A6), ([10], Lemma 2), and ([10], Theorem 3), we have:

$$\left\{\left((T_n(F))^*T_n(F)\right)^{-1}(T_n(F))^*T_n(G)T_n(\Lambda)T_n(G^*)T_n(F)\right\}\sim\left\{T_n\left(F^{-1}(F^*)^{-1}\right)T_n\left(F^*G\Lambda G^*F\right)\right\}\sim\left\{T_n\left(F^{-1}G\Lambda G^*F\right)\right\}.$$

Hence, applying Equation (A7), ([10], Lemma 2), and ([10], Theorem 3), we deduce that:

$$\left\{E\left(x_{n:1}x_{n:1}^\top\right)\right\}\sim\left\{T_n\left(F^{-1}G\Lambda G^*F\right)T_n\left(F^{-1}(F^*)^{-1}\right)\right\}\sim\left\{T_n\left(F^{-1}G\Lambda G^*(F^*)^{-1}\right)\right\}=\left\{T_n\left(F^{-1}G\Lambda G^*\left(F^{-1}\right)^*\right)\right\}=\{T_n(X)\}.$$

(2) First, we prove that $X(\omega)$ is positive definite for all $\omega\in\mathbb{R}$. Fix $\omega\in\mathbb{R}$, and consider $y\in\mathbb{C}^{N\times 1}$. Since $G(\omega)\Lambda(G(\omega))^*$ is positive definite (see the proof of Theorem 2), we have:

$$y^*X(\omega)y=y^*(F(\omega))^{-1}G(\omega)\Lambda(G(\omega))^*\left((F(\omega))^{-1}\right)^*y=\left(\left((F(\omega))^{-1}\right)^*y\right)^*G(\omega)\Lambda(G(\omega))^*\left(\left((F(\omega))^{-1}\right)^*y\right)>0$$

whenever $\left((F(\omega))^*\right)^{-1}y=\left((F(\omega))^{-1}\right)^*y\neq 0_{N\times 1}$. As $\left((F(\omega))^*\right)^{-1}y=0_{N\times 1}$ if and only if $y=(F(\omega))^*0_{N\times 1}=0_{N\times 1}$, $X(\omega)$ is positive definite.

Secondly, we prove that $E\left(x_{n:1}x_{n:1}^\top\right)$ is positive definite for all $n\in\mathbb{N}$, or equivalently, that $\det\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\neq 0$ for all $n\in\mathbb{N}$. Applying Equation (A5) yields:

$$\det\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=\det\left((T_n(F))^{-1}T_n(G)T_n(\Lambda)T_n(G^*)\left((T_n(F))^*\right)^{-1}\right)=\frac{\det\left(T_n(G)T_n(\Lambda)T_n(G^*)\right)}{\det\left(T_n(F)\right)\det\left((T_n(F))^*\right)}=\frac{(\det(\Lambda))^n}{\left|\det\left(T_n(F)\right)\right|^2}=(\det(\Lambda))^n\neq 0\qquad\forall n\in\mathbb{N}.$$

The result now follows from Theorem 1. ☐

Appendix E. A Property of Asymptotically Equivalent Sequences of Invertible Matrices

Lemma A1.

Let $A_n$ and $B_n$ be $nN\times nN$ invertible matrices for all $n\in\mathbb{N}$. Suppose that $\{A_n\}\sim\{B_n\}$ and that $\left\{\left\|A_n^{-1}\right\|_2\right\}$ and $\left\{\left\|B_n^{-1}\right\|_2\right\}$ are bounded. Then, $\left\{A_n^{-1}\right\}\sim\left\{B_n^{-1}\right\}$.

Proof. 

If $M\in[0,\infty)$ is such that $\left\|A_n^{-1}\right\|_2,\left\|B_n^{-1}\right\|_2\le M$ for all $n\in\mathbb{N}$, then:

$$0\le\frac{\left\|A_n^{-1}-B_n^{-1}\right\|_F}{\sqrt{n}}=\frac{\left\|B_n^{-1}-A_n^{-1}\right\|_F}{\sqrt{n}}=\frac{\left\|B_n^{-1}A_nA_n^{-1}-B_n^{-1}B_nA_n^{-1}\right\|_F}{\sqrt{n}}=\frac{\left\|B_n^{-1}\left(A_n-B_n\right)A_n^{-1}\right\|_F}{\sqrt{n}}\le\left\|B_n^{-1}\right\|_2\frac{\left\|\left(A_n-B_n\right)A_n^{-1}\right\|_F}{\sqrt{n}}\le\left\|B_n^{-1}\right\|_2\frac{\left\|A_n-B_n\right\|_F}{\sqrt{n}}\left\|A_n^{-1}\right\|_2\le M^2\frac{\left\|A_n-B_n\right\|_F}{\sqrt{n}}\underset{n\to\infty}{\longrightarrow}0.\ ☐$$

This result was presented in ([6], Theorem 1) for the case $N=1$.

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. All authors proved the main results. J.G.-G. and X.I. performed the simulations. All authors wrote the paper. All authors have read and approved the final manuscript.

Funding

This work was supported in part by the Spanish Ministry of Economy and Competitiveness through the CARMEN project (TEC2016-75067-C4-3-R).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hammerich E. Waterfilling theorems for linear time-varying channels and related nonstationary sources. IEEE Trans. Inf. Theory. 2016;62:6904–6916. doi: 10.1109/TIT.2016.2616139.
2. Kipnis A., Goldsmith A.J., Eldar Y.C. The distortion rate function of cyclostationary Gaussian processes. IEEE Trans. Inf. Theory. 2018;64:3810–3824. doi: 10.1109/TIT.2017.2741978.
3. Toms W., Berger T. Information rates of stochastically driven dynamic systems. IEEE Trans. Inf. Theory. 1971;17:113–114. doi: 10.1109/TIT.1971.1054569.
4. Kolmogorov A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory. 1956;2:102–108. doi: 10.1109/TIT.1956.1056823.
5. Reinsel G.C. Elements of Multivariate Time Series Analysis. Springer; Berlin, Germany: 1993.
6. Gray R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory. 2006;2:155–239. doi: 10.1561/0100000006.
7. Ephraim Y., Lev-Ari H., Gray R.M. Asymptotic minimum discrimination information measure for asymptotically weakly stationary processes. IEEE Trans. Inf. Theory. 1988;34:1033–1040. doi: 10.1109/18.21226.
8. Gray R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans. Inf. Theory. 1972;18:725–730. doi: 10.1109/TIT.1972.1054924.
9. Gutiérrez-Gutiérrez J., Crespo P.M. Asymptotically equivalent sequences of matrices and Hermitian block Toeplitz matrices with continuous symbols: Applications to MIMO systems. IEEE Trans. Inf. Theory. 2008;54:5671–5680. doi: 10.1109/TIT.2008.2006401.
10. Gutiérrez-Gutiérrez J., Crespo P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory. 2011;57:5444–5454. doi: 10.1109/TIT.2011.2159042.
11. Gutiérrez-Gutiérrez J., Crespo P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory. 2011;8:179–257. doi: 10.1561/0100000066.
12. Gutiérrez-Gutiérrez J., Crespo P.M., Zárraga-Rodríguez M., Hogstad B.O. Asymptotically equivalent sequences of matrices and capacity of a discrete-time Gaussian MIMO channel with memory. IEEE Trans. Inf. Theory. 2017;63:6000–6003. doi: 10.1109/TIT.2017.2715044.
13. Kafedziski V. Rate distortion of stationary and nonstationary vector Gaussian sources. In: Proceedings of the IEEE/SP 13th Workshop on Statistical Signal Processing; Bordeaux, France, 17–20 July 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 1054–1059.
14. Gazzah H., Regalia P.A., Delmas J.P. Asymptotic eigenvalue distribution of block Toeplitz matrices and application to blind SIMO channel identification. IEEE Trans. Inf. Theory. 2001;47:1243–1251. doi: 10.1109/18.915697.
15. Rudin W. Principles of Mathematical Analysis. McGraw-Hill; New York, NY, USA: 1976.
16. Gray R.M. Information rates of autoregressive processes. IEEE Trans. Inf. Theory. 1970;16:412–421. doi: 10.1109/TIT.1970.1054470.
17. Gallager R.G. Information Theory and Reliable Communication. John Wiley & Sons; Hoboken, NJ, USA: 1968.
18. Bhatia R. Matrix Analysis. Springer; Berlin, Germany: 1997.
