Entropy. 2018 May 23;20(6):399. doi: 10.3390/e20060399

Rate-Distortion Function Upper Bounds for Gaussian Vectors and Their Applications in Coding AR Sources

Jesús Gutiérrez-Gutiérrez 1,*, Marta Zárraga-Rodríguez 1, Fernando M Villar-Rosety 1, Xabier Insausti 1

Abstract

In this paper, we give upper bounds for the rate-distortion function (RDF) of any Gaussian vector, and we propose coding strategies to achieve such bounds. We use these strategies to reduce the computational complexity of coding Gaussian asymptotically wide sense stationary (AWSS) autoregressive (AR) sources. Furthermore, we also give sufficient conditions for AR processes to be AWSS.

Keywords: source coding, rate-distortion function (RDF), Gaussian vector, autoregressive (AR) source, discrete Fourier transform (DFT)

1. Introduction

In 1956, Kolmogorov [1] gave a formula for the rate-distortion function (RDF) of Gaussian vectors and the RDF of Gaussian wide sense stationary (WSS) sources. Later, in 1970 Gray [2] obtained a formula for the RDF of Gaussian autoregressive (AR) sources.

In 1973, Pearl [3] gave an upper bound for the RDF of finite-length data blocks of Gaussian WSS sources, but he did not propose a coding strategy to achieve his bound for a given block length. In [4], we presented two tighter upper bounds for the RDF of finite-length data blocks of Gaussian WSS sources, and we proposed low-complexity coding strategies, based on the discrete Fourier transform (DFT), to achieve such bounds. Moreover, we proved that those two upper bounds tend to the RDF of the WSS source (computed by Kolmogorov in [1]) when the size of the data block grows.

In the present paper, we generalize the upper bounds and the two low-complexity coding strategies presented in [4] to any Gaussian vector. Therefore, in contrast to [4], here no assumption about the structure of the correlation matrix of the Gaussian vector is made (observe that since the sources in [4] were WSS, the correlation matrix of the Gaussian vectors considered there was Toeplitz). To obtain this generalization we start our analysis by first proving several new results on the DFT of random vectors. Although another new result on the DFT was presented in [4] (Theorem 1), it cannot be used here, because that result and its proof rely on the power spectral density (PSD) of a WSS process and its properties.

The two low-complexity strategies here presented are applied in coding finite-length data blocks of Gaussian AR sources. Specifically, we prove that the rates (upper bounds) corresponding to these two strategies tend to the RDF of the AR source (computed by Gray in [2]) when the size of the data block grows and the AR source is asymptotically WSS (AWSS).

The definition of AWSS process was introduced by Gray in [5] (Chapter 6), and it is based on his concept of asymptotically equivalent sequences of matrices [6]. Sufficient conditions for AR processes to be AWSS can be found in [5] (Theorem 6.2) and [7] (Theorem 7). In this paper we present other sufficient conditions which make it easier to check in practice whether an AR process is AWSS.

The paper is organized as follows. In Section 2 we obtain several new results on the DFT of random vectors which are used in Section 3. In Section 3 we give upper bounds for the RDF of Gaussian vectors, and we propose coding strategies to achieve such bounds. In Section 4 we apply the strategies proposed in Section 3 to reduce the computational complexity of coding Gaussian AWSS AR sources. In Section 5 we give sufficient conditions for AR processes to be AWSS. We finish the paper with a numerical example and conclusions.

2. Several New Results on the DFT of Random Vectors

We begin by introducing some notation. $\mathbb{C}$ denotes the set of (finite) complex numbers, $i$ is the imaginary unit, and $\mathrm{Re}$ and $\mathrm{Im}$ denote real and imaginary parts, respectively. $*$ stands for conjugate transpose, $\top$ denotes transpose, and $\lambda_k(A)$, $k\in\{1,\ldots,n\}$, are the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order. $E$ stands for expectation, and $V_n$ is the $n\times n$ Fourier unitary matrix, i.e.,

$[V_n]_{j,k}=\frac{1}{\sqrt{n}}\,e^{-\frac{2\pi(j-1)(k-1)}{n}i},\qquad j,k\in\{1,\ldots,n\}.$

If $z\in\mathbb{C}$ then $\hat{z}$ denotes the real (column) vector

$\hat{z}=\begin{pmatrix}\mathrm{Re}(z)\\ \mathrm{Im}(z)\end{pmatrix}.$

If $z_k\in\mathbb{C}$ for all $k\in\{1,\ldots,n\}$ then $z_{n:1}$ is the $n$-dimensional vector given by

$z_{n:1}=\begin{pmatrix}z_n\\ z_{n-1}\\ z_{n-2}\\ \vdots\\ z_1\end{pmatrix}.$

In this section, we give several new results on the DFT of random vectors in two theorems and one lemma.

Theorem 1.

Let $y_{n:1}$ be the DFT of an $n$-dimensional random vector $x_{n:1}$, that is, $y_{n:1}=V_n^*x_{n:1}$.

  1. If $k\in\{1,\ldots,n\}$ then
    $\lambda_n(E(x_{n:1}x_{n:1}^*))\le E(|x_k|^2)\le\lambda_1(E(x_{n:1}x_{n:1}^*))$ (1)
    and
    $\lambda_n(E(x_{n:1}x_{n:1}^*))\le E(|y_k|^2)\le\lambda_1(E(x_{n:1}x_{n:1}^*)).$ (2)
  2. If the random vector $x_{n:1}$ is real and $k\in\{1,\ldots,n-1\}\setminus\{\frac{n}{2}\}$ then
    $\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le E\!\left((\mathrm{Re}(y_k))^2\right)\le\frac{\lambda_1(E(x_{n:1}x_{n:1}^\top))}{2}$ (3)
    and
    $\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le E\!\left((\mathrm{Im}(y_k))^2\right)\le\frac{\lambda_1(E(x_{n:1}x_{n:1}^\top))}{2}.$ (4)

Proof. 

(1) We first prove that if Wn is an n×n unitary matrix then

$\lambda_n(E(x_{n:1}x_{n:1}^*))\le\left[W_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)W_n^*\right]_{n-k+1,n-k+1}\le\lambda_1(E(x_{n:1}x_{n:1}^*)).$ (5)

We have

$\begin{aligned}\left[W_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)W_n^*\right]_{k_1,k_2}&=\sum_{h=1}^n[W_n]_{k_1,h}\left[\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)W_n^*\right]_{h,k_2}\\&=\sum_{h=1}^n[W_n]_{k_1,h}\sum_{l=1}^n\left[\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)\right]_{h,l}[W_n^*]_{l,k_2}\\&=\sum_{h=1}^n[W_n]_{k_1,h}\,\lambda_h(E(x_{n:1}x_{n:1}^*))\,\overline{[W_n]_{k_2,h}}\end{aligned}$ (6)

for all $k_1,k_2\in\{1,\ldots,n\}$, and hence,

$\left[W_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)W_n^*\right]_{n-k+1,n-k+1}=\sum_{h=1}^n\lambda_h(E(x_{n:1}x_{n:1}^*))\left|[W_n]_{n-k+1,h}\right|^2.$

Consequently,

$\lambda_n(E(x_{n:1}x_{n:1}^*))\sum_{h=1}^n\left|[W_n]_{n-k+1,h}\right|^2\le\left[W_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)W_n^*\right]_{n-k+1,n-k+1}\le\lambda_1(E(x_{n:1}x_{n:1}^*))\sum_{h=1}^n\left|[W_n]_{n-k+1,h}\right|^2,$

and applying

$\sum_{h=1}^n\left|[W_n]_{n-k+1,h}\right|^2=\sum_{h=1}^n[W_n]_{n-k+1,h}[W_n^*]_{h,n-k+1}=[W_nW_n^*]_{n-k+1,n-k+1}=[I_n]_{n-k+1,n-k+1}=1,$

where In denotes the n×n identity matrix, we obtain Equation (5).

Let $E(x_{n:1}x_{n:1}^*)=U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)U_n^{-1}$ be a diagonalization of $E(x_{n:1}x_{n:1}^*)$ where the eigenvector matrix $U_n$ is unitary. As

$E(|x_k|^2)=\left[E(x_{n:1}x_{n:1}^*)\right]_{n-k+1,n-k+1}=\left[U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)U_n^*\right]_{n-k+1,n-k+1},$

Equation (1) follows directly by taking Wn=Un in Equation (5).

Since

$\begin{aligned}E(|y_k|^2)&=\left[E(y_{n:1}y_{n:1}^*)\right]_{n-k+1,n-k+1}=\left[E\!\left(V_n^*x_{n:1}(V_n^*x_{n:1})^*\right)\right]_{n-k+1,n-k+1}=\left[V_n^*E(x_{n:1}x_{n:1}^*)(V_n^*)^*\right]_{n-k+1,n-k+1}\\&=\left[V_n^*U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)U_n^*(V_n^*)^*\right]_{n-k+1,n-k+1}=\left[V_n^*U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^*))\right)(V_n^*U_n)^*\right]_{n-k+1,n-k+1},\end{aligned}$ (7)

taking Wn=Vn*Un in Equation (5) we obtain Equation (2).

(2) Applying [4] (Equation (10)) and taking Wn=Un in Equation (6) yields

$\begin{aligned}E\!\left((\mathrm{Re}(y_k))^2\right)&=\frac{1}{n}\sum_{k_1,k_2=1}^n\cos\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}\,E(x_{n-k_1+1}x_{n-k_2+1})\\&=\frac{1}{n}\sum_{k_1,k_2=1}^n\cos\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}\left[E(x_{n:1}x_{n:1}^\top)\right]_{k_1,k_2}\\&=\frac{1}{n}\sum_{k_1,k_2=1}^n\cos\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}\left[U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^\top))\right)U_n^*\right]_{k_1,k_2}\\&=\frac{1}{n}\sum_{k_1,k_2=1}^n\cos\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}\sum_{h=1}^n[U_n]_{k_1,h}\,\lambda_h(E(x_{n:1}x_{n:1}^\top))\,\overline{[U_n]_{k_2,h}}\\&=\frac{1}{n}\sum_{h=1}^n\lambda_h(E(x_{n:1}x_{n:1}^\top))\sum_{k_1=1}^n\cos\frac{2\pi(1-k_1)k}{n}[U_n]_{k_1,h}\,\overline{\sum_{k_2=1}^n\cos\frac{2\pi(1-k_2)k}{n}[U_n]_{k_2,h}}\\&=\frac{1}{n}\sum_{h=1}^n\lambda_h(E(x_{n:1}x_{n:1}^\top))\left|\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2,\end{aligned}$

and therefore,

$\lambda_n(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2\le E\!\left((\mathrm{Re}(y_k))^2\right)\le\lambda_1(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2.$

Analogously, it can be proved that

$\lambda_n(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\sin\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2\le E\!\left((\mathrm{Im}(y_k))^2\right)\le\lambda_1(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\sin\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2.$

To finish the proof we only need to show that

$\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2=\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\sin\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2=\frac{1}{2}.$ (8)

If $b_1,\ldots,b_n$ are $n$ real numbers, then

$\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^nb_l[U_n]_{l,h}\right|^2=\frac{1}{n}\sum_{h=1}^n\sum_{k_1=1}^nb_{k_1}[U_n]_{k_1,h}\,\overline{\sum_{k_2=1}^nb_{k_2}[U_n]_{k_2,h}}=\frac{1}{n}\sum_{k_1,k_2=1}^nb_{k_1}b_{k_2}\sum_{h=1}^n[U_n]_{k_1,h}[U_n^*]_{h,k_2}=\frac{1}{n}\sum_{k_1,k_2=1}^nb_{k_1}b_{k_2}\left[U_nU_n^*\right]_{k_1,k_2}=\frac{1}{n}\sum_{k_1,k_2=1}^nb_{k_1}b_{k_2}\left[I_n\right]_{k_1,k_2}=\frac{1}{n}\sum_{l=1}^nb_l^2,$ (9)

and thus,

$\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\sin\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2=\frac{1}{n}\sum_{l=1}^n\left(\sin\frac{2\pi(1-l)k}{n}\right)^2=\frac{1}{n}\sum_{l=1}^n\left(1-\left(\cos\frac{2\pi(1-l)k}{n}\right)^2\right)=1-\frac{1}{n}\sum_{l=1}^n\left(\cos\frac{2\pi(1-l)k}{n}\right)^2=1-\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}[U_n]_{l,h}\right|^2.$

Equation (8) now follows directly from [4] (Equation (15)). ☐

Lemma 1.

Let $y_{n:1}$ be the DFT of an $n$-dimensional random vector $x_{n:1}$. If $k\in\{1,\ldots,n\}$ then

  1. $E(|y_k|^2)=\left[V_n^*E(x_{n:1}x_{n:1}^*)V_n\right]_{n-k+1,n-k+1}$.

  2. $E(y_k^2)=\left[V_n^*E(x_{n:1}x_{n:1}^\top)\overline{V_n}\right]_{n-k+1,n-k+1}$.

  3. $E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))=\frac{1}{2}\,\mathrm{Im}\!\left(E(y_k^2)\right)$.

  4. $E\!\left((\mathrm{Re}(y_k))^2\right)=\frac{E(|y_k|^2)+\mathrm{Re}(E(y_k^2))}{2}$.

  5. $E\!\left((\mathrm{Im}(y_k))^2\right)=\frac{E(|y_k|^2)-\mathrm{Re}(E(y_k^2))}{2}$.

Proof. 

(1) It is a direct consequence of Equation (7).

(2) We have

$E(y_k^2)=\left[E(y_{n:1}y_{n:1}^\top)\right]_{n-k+1,n-k+1}=\left[E\!\left(V_n^*x_{n:1}(V_n^*x_{n:1})^\top\right)\right]_{n-k+1,n-k+1}=\left[E\!\left(V_n^*x_{n:1}x_{n:1}^\top\overline{V_n}\right)\right]_{n-k+1,n-k+1}=\left[V_n^*E(x_{n:1}x_{n:1}^\top)\overline{V_n}\right]_{n-k+1,n-k+1}.$

(3) Observe that

$E(y_k^2)=E\!\left((\mathrm{Re}(y_k))^2-(\mathrm{Im}(y_k))^2+2\,\mathrm{Re}(y_k)\,\mathrm{Im}(y_k)\,i\right)=E\!\left((\mathrm{Re}(y_k))^2\right)-E\!\left((\mathrm{Im}(y_k))^2\right)+2\,E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))\,i,$ (10)

and hence,

$\mathrm{Im}\!\left(E(y_k^2)\right)=2\,E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k)).$

(4) and (5) From Equation (10) we obtain

$\mathrm{Re}\!\left(E(y_k^2)\right)=E\!\left((\mathrm{Re}(y_k))^2\right)-E\!\left((\mathrm{Im}(y_k))^2\right).$ (11)

Furthermore,

$E(|y_k|^2)=E\!\left((\mathrm{Re}(y_k))^2+(\mathrm{Im}(y_k))^2\right)=E\!\left((\mathrm{Re}(y_k))^2\right)+E\!\left((\mathrm{Im}(y_k))^2\right).$ (12)

(4) and (5) follow directly from Equations (11) and (12). ☐

Theorem 2.

Let $y_{n:1}$ be the DFT of a real $n$-dimensional random vector $x_{n:1}$. If $k\in\{1,\ldots,n-1\}\setminus\{\frac{n}{2}\}$ then

$\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le\lambda_2\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)\le\lambda_1\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)\le\frac{\lambda_1(E(x_{n:1}x_{n:1}^\top))}{2}.$

Proof. 

Fix $r\in\{1,2\}$ and consider a real unit eigenvector $v=(v_1,v_2)^\top$ corresponding to $\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)$. We have

$\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)=\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)v^\top v=v^\top\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)v=v^\top E(\hat{y_k}\hat{y_k}^\top)\,v.$

From [4] (Equation (10)) we obtain

$E(\hat{y_k}\hat{y_k}^\top)=\frac{1}{n}\sum_{k_1,k_2=1}^n\begin{pmatrix}\cos\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}&\cos\frac{2\pi(1-k_1)k}{n}\sin\frac{2\pi(1-k_2)k}{n}\\ \sin\frac{2\pi(1-k_1)k}{n}\cos\frac{2\pi(1-k_2)k}{n}&\sin\frac{2\pi(1-k_1)k}{n}\sin\frac{2\pi(1-k_2)k}{n}\end{pmatrix}E(x_{n-k_1+1}x_{n-k_2+1})=\frac{1}{n}\sum_{k_1,k_2=1}^n\left[E(x_{n:1}x_{n:1}^\top)\right]_{k_1,k_2}w_{k_1}w_{k_2}^\top$

with

$w_l=\begin{pmatrix}\cos\frac{2\pi(1-l)k}{n}\\ \sin\frac{2\pi(1-l)k}{n}\end{pmatrix},\qquad l\in\{1,\ldots,n\},$

and consequently,

$\begin{aligned}\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)&=\frac{1}{n}\sum_{k_1,k_2=1}^n\left[E(x_{n:1}x_{n:1}^\top)\right]_{k_1,k_2}v^\top w_{k_1}w_{k_2}^\top v=\frac{1}{n}\sum_{k_1,k_2=1}^n\sum_{h=1}^n[U_n]_{k_1,h}\,\lambda_h(E(x_{n:1}x_{n:1}^\top))\,\overline{[U_n]_{k_2,h}}\;v^\top w_{k_1}w_{k_2}^\top v\\&=\frac{1}{n}\sum_{k_1,k_2=1}^nw_{k_1}^\top v\sum_{h=1}^n[U_n]_{k_1,h}\,\lambda_h(E(x_{n:1}x_{n:1}^\top))\,\overline{[U_n]_{k_2,h}}\;w_{k_2}^\top v\\&=\frac{1}{n}\sum_{h=1}^n\lambda_h(E(x_{n:1}x_{n:1}^\top))\sum_{k_1=1}^nw_{k_1}^\top v\,[U_n]_{k_1,h}\,\overline{\sum_{k_2=1}^nw_{k_2}^\top v\,[U_n]_{k_2,h}}=\frac{1}{n}\sum_{h=1}^n\lambda_h(E(x_{n:1}x_{n:1}^\top))\left|\sum_{l=1}^nw_l^\top v\,[U_n]_{l,h}\right|^2\end{aligned}$

with $E(x_{n:1}x_{n:1}^\top)=U_n\,\mathrm{diag}_{1\le j\le n}\!\left(\lambda_j(E(x_{n:1}x_{n:1}^\top))\right)U_n^{-1}$ being a diagonalization of $E(x_{n:1}x_{n:1}^\top)$ where the eigenvector matrix $U_n$ is unitary. Therefore,

$\lambda_n(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^nw_l^\top v\,[U_n]_{l,h}\right|^2\le\lambda_r\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)\le\lambda_1(E(x_{n:1}x_{n:1}^\top))\,\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^nw_l^\top v\,[U_n]_{l,h}\right|^2.$

To finish the proof we only need to show that

$\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^nw_l^\top v\,[U_n]_{l,h}\right|^2=\frac{1}{2}.$

Applying Equation (9) and [4] (Equations (14) and (15)) yields

$\begin{aligned}\frac{1}{n}\sum_{h=1}^n\left|\sum_{l=1}^nw_l^\top v\,[U_n]_{l,h}\right|^2&=\frac{1}{n}\sum_{l=1}^n\left(w_l^\top v\right)^2=\frac{1}{n}\sum_{l=1}^n\left(\cos\frac{2\pi(1-l)k}{n}\,v_1+\sin\frac{2\pi(1-l)k}{n}\,v_2\right)^2\\&=v_1^2\,\frac{1}{n}\sum_{l=1}^n\left(\cos\frac{2\pi(1-l)k}{n}\right)^2+v_2^2\,\frac{1}{n}\sum_{l=1}^n\left(\sin\frac{2\pi(1-l)k}{n}\right)^2+2v_1v_2\,\frac{1}{n}\sum_{l=1}^n\cos\frac{2\pi(1-l)k}{n}\sin\frac{2\pi(1-l)k}{n}\\&=v_1^2\,\frac{1}{n}\sum_{l=1}^n\left(\cos\frac{2\pi(1-l)k}{n}\right)^2+\frac{v_2^2}{2}+v_1v_2\,\frac{1}{n}\sum_{l=1}^n\sin\frac{4\pi(1-l)k}{n}\\&=v_1^2\,\frac{1}{n}\sum_{l=1}^n\left(1-\left(\sin\frac{2\pi(1-l)k}{n}\right)^2\right)+\frac{v_2^2}{2}-v_1v_2\,\frac{1}{n}\sum_{l=1}^n\sin\frac{4\pi(l-1)k}{n}\\&=v_1^2\left(1-\frac{1}{n}\sum_{l=1}^n\left(\sin\frac{2\pi(1-l)k}{n}\right)^2\right)+\frac{v_2^2}{2}-v_1v_2\,\frac{1}{n}\sum_{l=1}^n\mathrm{Im}\!\left(e^{\frac{4\pi(l-1)k}{n}i}\right)\\&=\frac{v_1^2}{2}+\frac{v_2^2}{2}-v_1v_2\,\frac{1}{n}\,\mathrm{Im}\!\left(\sum_{l=1}^ne^{\frac{4\pi(l-1)k}{n}i}\right)=\frac{1}{2}\,v^\top v=\frac{1}{2}.\end{aligned}$ ☐

3. RDF Upper Bounds for Real Gaussian Vectors

We first review the formula for the RDF of a real Gaussian vector given by Kolmogorov in [1].

Theorem 3.

If xn:1 is a real zero-mean Gaussian n-dimensional vector with positive definite correlation matrix, its RDF is given by

$R_{x_{n:1}}(D)=\frac{1}{n}\sum_{k=1}^n\max\!\left(0,\frac{1}{2}\ln\frac{\lambda_k(E(x_{n:1}x_{n:1}^\top))}{\theta}\right),\qquad D\in\left(0,\frac{\mathrm{tr}(E(x_{n:1}x_{n:1}^\top))}{n}\right],$

where tr denotes trace and θ is a real number satisfying

$D=\frac{1}{n}\sum_{k=1}^n\min\!\left(\theta,\lambda_k(E(x_{n:1}x_{n:1}^\top))\right).$

We recall that $R_{x_{n:1}}(D)$ can be thought of as the minimum rate (measured in nats) at which one must encode (compress) $x_{n:1}$ in order to be able to recover it with a mean square error (MSE) per dimension not larger than $D$, that is:

$\frac{E\!\left(\left\|x_{n:1}-\widetilde{x_{n:1}}\right\|_2^2\right)}{n}\le D,$

where $\widetilde{x_{n:1}}$ denotes the estimation of $x_{n:1}$ and $\|\cdot\|_2$ is the spectral norm.
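For illustration, the following minimal Python sketch (not from the paper; it assumes NumPy, and the helper name gaussian_vector_rdf is ours) evaluates $R_{x_{n:1}}(D)$ by bisecting on the water level $\theta$ of Theorem 3:

import numpy as np

def gaussian_vector_rdf(R, D):
    # RDF (nats per dimension) of a zero-mean Gaussian vector, Theorem 3.
    # R : positive definite correlation matrix E(x x^T);  0 < D <= trace(R)/n.
    lam = np.linalg.eigvalsh(R)                    # eigenvalues of E(x x^T)
    lo, hi = 0.0, lam.max()
    for _ in range(200):                           # bisection on the water level theta
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).mean() < D:      # distortion too small -> raise theta
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.mean(np.maximum(0.0, 0.5 * np.log(lam / theta)))

When $D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))$ the bisection converges to $\theta=D$ and the returned value coincides with the closed form of Equation (13) below.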

The following result provides an optimal coding strategy for $x_{n:1}$ in order to achieve $R_{x_{n:1}}(D)$ whenever $D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))$. Observe that if $D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))$ then

$R_{x_{n:1}}(D)=\frac{1}{2n}\sum_{k=1}^n\ln\frac{\lambda_k(E(x_{n:1}x_{n:1}^\top))}{D}=\frac{1}{2n}\ln\frac{\det\!\left(E(x_{n:1}x_{n:1}^\top)\right)}{D^n}.$ (13)

Corollary 1.

Suppose that $x_{n:1}$ is as in Theorem 3. Let $E(x_{n:1}x_{n:1}^\top)=U_n\,\mathrm{diag}_{1\le k\le n}\!\left(\lambda_k(E(x_{n:1}x_{n:1}^\top))\right)U_n^{-1}$ be a diagonalization of $E(x_{n:1}x_{n:1}^\top)$ where the eigenvector matrix $U_n$ is real and orthogonal. If $D\in\left(0,\lambda_n(E(x_{n:1}x_{n:1}^\top))\right]$ then

$R_{x_{n:1}}(D)=\frac{1}{n}\sum_{k=1}^nR_{z_k}(D)=\frac{1}{2n}\sum_{k=1}^n\ln\frac{E(z_k^2)}{D}$ (14)

with $z_{n:1}=U_n^\top x_{n:1}$.

Proof. 

We encode $z_1,\ldots,z_n$ separately with $E\!\left(|z_k-\widetilde{z_k}|^2\right)\le D$ for all $k\in\{1,\ldots,n\}$. Let $\widetilde{x_{n:1}}:=U_n\widetilde{z_{n:1}}$, where

$\widetilde{z_{n:1}}:=\begin{pmatrix}\widetilde{z_n}\\ \vdots\\ \widetilde{z_1}\end{pmatrix}.$

As $U_n$ is unitary (in fact, it is a real orthogonal matrix) and the spectral norm is unitarily invariant, we have

$\frac{E\!\left(\left\|x_{n:1}-\widetilde{x_{n:1}}\right\|_2^2\right)}{n}=\frac{E\!\left(\left\|U_n^\top x_{n:1}-U_n^\top\widetilde{x_{n:1}}\right\|_2^2\right)}{n}=\frac{E\!\left(\left\|z_{n:1}-\widetilde{z_{n:1}}\right\|_2^2\right)}{n}=\frac{E\!\left(\sum_{k=1}^n\left|z_k-\widetilde{z_k}\right|^2\right)}{n}=\frac{\sum_{k=1}^nE\!\left(\left|z_k-\widetilde{z_k}\right|^2\right)}{n}\le D,$

and thus,

$R_{x_{n:1}}(D)\le\frac{1}{n}\sum_{k=1}^nR_{z_k}(D).$

To finish the proof we show Equation (14). Since

$E(z_{n:1}z_{n:1}^\top)=E\!\left(U_n^\top x_{n:1}x_{n:1}^\top U_n\right)=U_n^\top E(x_{n:1}x_{n:1}^\top)\,U_n=\mathrm{diag}_{1\le k\le n}\!\left(\lambda_k(E(x_{n:1}x_{n:1}^\top))\right),$

we obtain

$E(z_k^2)=\left[E(z_{n:1}z_{n:1}^\top)\right]_{n-k+1,n-k+1}=\lambda_{n-k+1}(E(x_{n:1}x_{n:1}^\top))\ge\lambda_n(E(x_{n:1}x_{n:1}^\top))\ge D>0.$

Hence, applying Equation (13) yields

$\frac{1}{n}\sum_{k=1}^nR_{z_k}(D)=\frac{1}{n}\sum_{k=1}^n\frac{1}{2}\ln\frac{E(z_k^2)}{D}=\frac{1}{2n}\sum_{k=1}^n\ln\frac{\lambda_{n-k+1}(E(x_{n:1}x_{n:1}^\top))}{D}=\frac{1}{2n}\sum_{k=1}^n\ln\frac{\lambda_k(E(x_{n:1}x_{n:1}^\top))}{D}=R_{x_{n:1}}(D).$ ☐

Corollary 1 shows that an optimal coding strategy for $x_{n:1}$ is to encode $z_1,\ldots,z_n$ separately.
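As a hedged illustration of Corollary 1 (not from the paper; it assumes NumPy, and the helper name optimal_transform_rates is ours): diagonalize $E(x_{n:1}x_{n:1}^\top)$ with a real orthogonal $U_n$, form $z_{n:1}=U_n^\top x_{n:1}$, and encode each component at distortion $D$.

import numpy as np

def optimal_transform_rates(R, D):
    # Per-component rates of the strategy of Corollary 1, assuming 0 < D <= lambda_n(R).
    lam, U = np.linalg.eigh(R)          # R = U diag(lam) U^T with U real orthogonal
    assert 0.0 < D <= lam.min()
    return 0.5 * np.log(lam / D)        # R_{z_k}(D) = (1/2) ln(E(z_k^2) / D)

The average of the returned rates equals $R_{x_{n:1}}(D)$ of Equation (13), which is what Equation (14) states.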

We now give two coding strategies for xn:1 based on the DFT whose computational complexity is lower than the computational complexity of the optimal coding strategy provided in Corollary 1.

Theorem 4.

Let $x_{n:1}$ be as in Theorem 3. Suppose that $y_{n:1}$ is the DFT of $x_{n:1}$ and $D\in\left(0,\lambda_n(E(x_{n:1}x_{n:1}^\top))\right]$. Then

$R_{x_{n:1}}(D)\le\widetilde{R}_{x_{n:1}}(D)\le\breve{R}_{x_{n:1}}(D)\le\frac{1}{2n}\sum_{k=1}^n\ln\frac{E(|y_k|^2)}{D}$ (15)
$\le R_{x_{n:1}}(D)+\frac{1}{2}\ln\!\left(1+\frac{\left\|E(x_{n:1}x_{n:1}^\top)-V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*E(x_{n:1}x_{n:1}^\top)V_n\right]_{k,k}\right)V_n^*\right\|_F}{\sqrt{n}\,\lambda_n(E(x_{n:1}x_{n:1}^\top))}\right),$ (16)

where $\|\cdot\|_F$ is the Frobenius norm,

$\widetilde{R}_{x_{n:1}}(D):=\begin{cases}\dfrac{R_{y_{\frac{n}{2}}}(D)+2\sum_{k=\frac{n}{2}+1}^{n-1}R_{\hat{y_k}}\left(\frac{D}{2}\right)+R_{y_n}(D)}{n}&\text{if }n\text{ is even},\\[2mm]\dfrac{2\sum_{k=\frac{n+1}{2}}^{n-1}R_{\hat{y_k}}\left(\frac{D}{2}\right)+R_{y_n}(D)}{n}&\text{if }n\text{ is odd},\end{cases}$

and

$\breve{R}_{x_{n:1}}(D):=\begin{cases}\dfrac{R_{y_{\frac{n}{2}}}(D)+\sum_{k=\frac{n}{2}+1}^{n-1}\left(R_{\mathrm{Re}(y_k)}\left(\frac{D}{2}\right)+R_{\mathrm{Im}(y_k)}\left(\frac{D}{2}\right)\right)+R_{y_n}(D)}{n}&\text{if }n\text{ is even},\\[2mm]\dfrac{\sum_{k=\frac{n+1}{2}}^{n-1}\left(R_{\mathrm{Re}(y_k)}\left(\frac{D}{2}\right)+R_{\mathrm{Im}(y_k)}\left(\frac{D}{2}\right)\right)+R_{y_n}(D)}{n}&\text{if }n\text{ is odd}.\end{cases}$

Proof. 

Equations (15) and (16) were presented in [4] (Equations (16) and (20)) for the case where the correlation matrix $E(x_{n:1}x_{n:1}^\top)$ is Toeplitz. They were proved by using a result on the DFT of random vectors with Toeplitz correlation matrix, namely, [4] (Theorem 1). The proof of Theorem 4 is similar to the proof of [4] (Equations (16) and (20)) but using Theorem 1 instead of [4] (Theorem 1). Observe that in Theorems 1 and 4 no assumption about the structure of $E(x_{n:1}x_{n:1}^\top)$ has been made. ☐

Theorem 4 shows that a coding strategy for $x_{n:1}$ is to encode $y_{\lceil\frac{n}{2}\rceil},\ldots,y_n$ separately, where $\lceil\frac{n}{2}\rceil$ denotes the smallest integer greater than or equal to $\frac{n}{2}$. Theorem 4 also shows that another coding strategy for $x_{n:1}$ is to encode separately the real part and the imaginary part of $y_k$, instead of encoding $\hat{y_k}$, when $k\in\{\lceil\frac{n}{2}\rceil,\ldots,n-1\}\setminus\{\frac{n}{2}\}$. The computational complexity of these two coding strategies based on the DFT is lower than the computational complexity of the optimal coding strategy provided in Corollary 1. Specifically, the complexity of computing the DFT ($y_{n:1}=V_n^*x_{n:1}$) is $O(n\log n)$ whenever the fast Fourier transform (FFT) algorithm is used, while the complexity of computing $z_{n:1}=U_n^\top x_{n:1}$ is $O(n^2)$. Moreover, when the coding strategies based on the DFT are used, we do not need to compute a real orthogonal eigenvector matrix $U_n$ of $E(x_{n:1}x_{n:1}^\top)$. It should also be mentioned that for these coding strategies based on the DFT the knowledge of $E(x_{n:1}x_{n:1}^\top)$ is not even required; in fact, for them we only need to know $E(\hat{y_k}\hat{y_k}^\top)$ with $k\in\{\lceil\frac{n}{2}\rceil,\ldots,n\}$.
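The complexity gap can be seen directly in code. The following minimal sketch (ours, assuming NumPy) computes $y_{n:1}=V_n^*x_{n:1}$ with the FFT; with the unitary $V_n$ of Section 2, $V_n^*x_{n:1}$ equals $\sqrt{n}$ times the inverse FFT of $x_{n:1}$, so no $n\times n$ matrix is ever formed:

import numpy as np

n = 1024
rng = np.random.default_rng(0)
x = rng.standard_normal(n)                 # a data block (component ordering does not
                                           # affect the complexity comparison)

y = np.sqrt(n) * np.fft.ifft(x)            # y_{n:1} = V_n^* x_{n:1}, cost O(n log n)

# For comparison, the optimal strategy of Corollary 1 needs z = U_n^T x, an O(n^2)
# matrix-vector product (after an eigendecomposition of E(x x^T)):
# lam, U = np.linalg.eigh(R); z = U.T @ x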

The rates corresponding to the two coding strategies given in Theorem 4, $\widetilde{R}_{x_{n:1}}(D)$ and $\breve{R}_{x_{n:1}}(D)$, can be written in terms of $E(x_{n:1}x_{n:1}^\top)$ and $V_n$ by using Lemma 1 and the following lemma.

Lemma 2.

Let yn:1 and D be as in Theorem 4. Then

  1. $R_{y_k}(D)=\frac{1}{2}\ln\frac{E(y_k^2)}{D}$ for all $k\in\{1,\ldots,n\}\cap\{\frac{n}{2},n\}$.

  2. $R_{\hat{y_k}}\!\left(\frac{D}{2}\right)=\frac{1}{4}\ln\frac{E((\mathrm{Re}(y_k))^2)\,E((\mathrm{Im}(y_k))^2)-\left(E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))\right)^2}{\left(\frac{D}{2}\right)^2}$ for all $k\in\{1,\ldots,n-1\}\setminus\{\frac{n}{2}\}$.

  3. $R_{\mathrm{Re}(y_k)}\!\left(\frac{D}{2}\right)=\frac{1}{2}\ln\frac{E((\mathrm{Re}(y_k))^2)}{\frac{D}{2}}$ for all $k\in\{1,\ldots,n-1\}\setminus\{\frac{n}{2}\}$.

  4. $R_{\mathrm{Im}(y_k)}\!\left(\frac{D}{2}\right)=\frac{1}{2}\ln\frac{E((\mathrm{Im}(y_k))^2)}{\frac{D}{2}}$ for all $k\in\{1,\ldots,n-1\}\setminus\{\frac{n}{2}\}$.

Proof. 

(1) Applying Equation (2) and [4] (Lemma 1) yields

$0<D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))\le E(|y_k|^2)=E(y_k^2).$

Assertion (1) now follows directly from Equation (13).

(2) Applying Theorem 2 we have

$0<\frac{D}{2}\le\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le\lambda_2\!\left(E(\hat{y_k}\hat{y_k}^\top)\right).$

Consequently, from Equation (13) we obtain

$R_{\hat{y_k}}\!\left(\frac{D}{2}\right)=\frac{1}{4}\ln\frac{\det\!\left(E(\hat{y_k}\hat{y_k}^\top)\right)}{\left(\frac{D}{2}\right)^2}=\frac{1}{4}\ln\frac{\det\!\begin{pmatrix}E((\mathrm{Re}(y_k))^2)&E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))\\ E(\mathrm{Im}(y_k)\,\mathrm{Re}(y_k))&E((\mathrm{Im}(y_k))^2)\end{pmatrix}}{\left(\frac{D}{2}\right)^2}.$

(3) and (4) Applying Equations (3) and (4) yields

$0<\frac{D}{2}\le\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le E\!\left((\mathrm{Re}(y_k))^2\right)$

and

$0<\frac{D}{2}\le\frac{\lambda_n(E(x_{n:1}x_{n:1}^\top))}{2}\le E\!\left((\mathrm{Im}(y_k))^2\right).$

Assertions (3) and (4) now follow directly from Equation (13). ☐
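As a concrete, hedged illustration of how Lemmas 1 and 2 turn Theorem 4 into a computation, the following Python sketch (ours; it assumes NumPy, an even block length $n$, and $0<D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))$) evaluates $\widetilde{R}_{x_{n:1}}(D)$ and $\breve{R}_{x_{n:1}}(D)$ directly from the correlation matrix:

import numpy as np

def dft_strategy_rates(R, D):
    # R = E(x_{n:1} x_{n:1}^T) positive definite, n even, 0 < D <= lambda_n(R).
    n = R.shape[0]
    idx = np.arange(n)
    V = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)   # Fourier unitary matrix V_n
    A = V.conj().T @ R @ V            # Lemma 1(1): E(|y_k|^2) = A[n-k, n-k] (0-indexed)
    B = V.conj().T @ R @ V.conj()     # Lemma 1(2): E(y_k^2)   = B[n-k, n-k]

    # y_n and y_{n/2} are real; Lemma 2(1): R_{y_k}(D) = (1/2) ln(E(y_k^2)/D).
    rate_real = sum(0.5 * np.log(B[n - k, n - k].real / D) for k in (n, n // 2))

    r_tilde = r_breve = rate_real
    for k in range(n // 2 + 1, n):                       # k = n/2+1, ..., n-1
        E_abs2 = A[n - k, n - k].real
        E_sq = B[n - k, n - k]
        E_re2 = (E_abs2 + E_sq.real) / 2                 # Lemma 1(4)
        E_im2 = (E_abs2 - E_sq.real) / 2                 # Lemma 1(5)
        E_reim = E_sq.imag / 2                           # Lemma 1(3)
        # Lemma 2(2): R_{hat y_k}(D/2), counted twice in R_tilde.
        r_tilde += 2 * 0.25 * np.log((E_re2 * E_im2 - E_reim ** 2) / (D / 2) ** 2)
        # Lemma 2(3)-(4): real and imaginary parts encoded separately in R_breve.
        r_breve += 0.5 * np.log(E_re2 / (D / 2)) + 0.5 * np.log(E_im2 / (D / 2))
    return r_tilde / n, r_breve / n

By Theorem 4, both returned values are upper bounds on $R_{x_{n:1}}(D)$, with $\widetilde{R}_{x_{n:1}}(D)\le\breve{R}_{x_{n:1}}(D)$.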

We end this section with a result that is a direct consequence of Lemma 2. This result shows when the rates corresponding to the two coding strategies given in Theorem 4, R˜xn:1(D) and R˘xn:1(D), are equal.

Lemma 3.

Let $x_{n:1}$, $y_{n:1}$, and $D$ be as in Theorem 4. Then the two following assertions are equivalent:

  1. $\widetilde{R}_{x_{n:1}}(D)=\breve{R}_{x_{n:1}}(D)$.

  2. $E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))=0$ for all $k\in\{\lceil\frac{n}{2}\rceil,\ldots,n-1\}\setminus\{\frac{n}{2}\}$.

Proof. 

Fix $k\in\{\lceil\frac{n}{2}\rceil,\ldots,n-1\}\setminus\{\frac{n}{2}\}$. From Lemma 2 we have

$2R_{\hat{y_k}}\!\left(\frac{D}{2}\right)=\frac{1}{2}\ln\frac{E((\mathrm{Re}(y_k))^2)\,E((\mathrm{Im}(y_k))^2)-\left(E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))\right)^2}{\left(\frac{D}{2}\right)^2}\le\frac{1}{2}\ln\frac{E((\mathrm{Re}(y_k))^2)\,E((\mathrm{Im}(y_k))^2)}{\left(\frac{D}{2}\right)^2}=\frac{1}{2}\ln\frac{E((\mathrm{Re}(y_k))^2)}{\frac{D}{2}}+\frac{1}{2}\ln\frac{E((\mathrm{Im}(y_k))^2)}{\frac{D}{2}}=R_{\mathrm{Re}(y_k)}\!\left(\frac{D}{2}\right)+R_{\mathrm{Im}(y_k)}\!\left(\frac{D}{2}\right),$

with equality if and only if $E(\mathrm{Re}(y_k)\,\mathrm{Im}(y_k))=0$. Consequently, Assertions (1) and (2) are equivalent. ☐

4. Low-Complexity Coding Strategies for Gaussian AWSS AR Sources

We begin by introducing some notation. The symbols $\mathbb{N}$, $\mathbb{Z}$, and $\mathbb{R}$ denote the set of positive integers, the set of integers, and the set of (finite) real numbers, respectively. If $f:\mathbb{R}\to\mathbb{C}$ is continuous and $2\pi$-periodic, we denote by $T_n(f)$ the $n\times n$ Toeplitz matrix given by

$[T_n(f)]_{j,k}=t_{j-k},$

where $\{t_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $f$, i.e.,

$t_k=\frac{1}{2\pi}\int_0^{2\pi}f(\omega)\,e^{-k\omega i}\,d\omega,\qquad k\in\mathbb{Z}.$

If $A_n$ and $B_n$ are $n\times n$ matrices for all $n\in\mathbb{N}$, we write $\{A_n\}\sim\{B_n\}$ if the sequences $\{A_n\}$ and $\{B_n\}$ are asymptotically equivalent, that is, $\{\|A_n\|_2\}$ and $\{\|B_n\|_2\}$ are bounded and $\lim_{n\to\infty}\frac{\|A_n-B_n\|_F}{\sqrt{n}}=0$ (see [5] (Section 2.3) and [6]).
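To fix ideas, here is a minimal sketch (ours, assuming NumPy; the helper names are hypothetical) that builds $T_n(f)$ from the Fourier coefficients of $f$ and evaluates the quantities appearing in the definition of asymptotic equivalence:

import numpy as np

def toeplitz_from_coeffs(t, n):
    # [T_n(f)]_{j,k} = t_{j-k}, where t(k) returns the k-th Fourier coefficient of f.
    j, k = np.indices((n, n))
    return np.vectorize(t)(j - k).astype(complex)

def equivalence_quantities(A, B):
    # {A_n} ~ {B_n} requires the two spectral norms to stay bounded in n
    # and the last quantity to tend to 0 as n grows.
    n = A.shape[0]
    return np.linalg.norm(A, 2), np.linalg.norm(B, 2), np.linalg.norm(A - B, "fro") / np.sqrt(n)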

We now review the definitions of AWSS processes and AR processes.

Definition 1.

A random process $\{x_n\}$ is said to be AWSS if it has constant mean (i.e., $E(x_j)=E(x_k)$ for all $j,k\in\mathbb{N}$) and there exists a continuous $2\pi$-periodic function $f:\mathbb{R}\to\mathbb{C}$ such that $\{E(x_{n:1}x_{n:1}^*)\}\sim\{T_n(f)\}$. The function $f$ is called (asymptotic) PSD of $\{x_n\}$.

Definition 2.

A real zero-mean random process {xn} is said to be AR if

$x_n=w_n-\sum_{k=1}^{n-1}a_kx_{n-k},\qquad n\in\mathbb{N},$

or equivalently,

$\sum_{k=0}^{n-1}a_kx_{n-k}=w_n,\qquad n\in\mathbb{N},$ (17)

where $a_0=1$, $a_k\in\mathbb{R}$ for all $k\in\mathbb{N}$, and $\{w_n\}$ is a real zero-mean random process satisfying $E(w_jw_k)=\delta_{j,k}\sigma^2$ for all $j,k\in\mathbb{N}$, with $\sigma^2>0$ and $\delta_{j,k}$ being the Kronecker delta (i.e., $\delta_{j,k}=1$ if $j=k$, and it is zero otherwise).

The AR process $\{x_n\}$ in Equation (17) is of finite order if there exists $p\in\mathbb{N}$ such that $a_k=0$ for all $k>p$. In this case, $\{x_n\}$ is called an AR($p$) process.

The following theorem shows that if xn:1 is a large enough data block of a Gaussian AWSS AR source, the rate does not increase whenever we encode it using the two coding strategies based on the DFT presented in Section 3, instead of encoding xn:1 using an eigenvector matrix of its correlation matrix.

Theorem 5.

Let $\{x_n\}$ be as in Definition 2. Suppose that $\{a_{-k}\}_{k\in\mathbb{Z}}$, with $a_{-k}=0$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $a:\mathbb{R}\to\mathbb{C}$ which is continuous and $2\pi$-periodic. Then

  1. $\inf_{n\in\mathbb{N}}\lambda_n(E(x_{n:1}x_{n:1}^\top))\ge\frac{\sigma^2}{\max_{\omega\in[0,2\pi]}|a(\omega)|^2}>0.$

  2. Consider $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_n(E(x_{n:1}x_{n:1}^\top))\right]$.

    • (a)
      If $\{x_n\}$ is Gaussian,
      $\frac{1}{2}\ln\frac{\sigma^2}{D}=R_{x_{n:1}}(D)\le\widetilde{R}_{x_{n:1}}(D)\le\breve{R}_{x_{n:1}}(D)\le K_1(n,D)\le K_2(n,D)\le K_3(n,D)\qquad\forall n\in\mathbb{N},$ (18)
      where $K_1(n,D)$ is given by Equation (16), and $K_2(n,D)$ and $K_3(n,D)$ are obtained by replacing $\lambda_n(E(x_{n:1}x_{n:1}^\top))$ in Equation (16) by $\inf_{n\in\mathbb{N}}\lambda_n(E(x_{n:1}x_{n:1}^\top))$ and $\frac{\sigma^2}{\max_{\omega\in[0,2\pi]}|a(\omega)|^2}$, respectively.
    • (b)
      If $\{x_n\}$ is Gaussian and AWSS,
      $\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\lim_{n\to\infty}\breve{R}_{x_{n:1}}(D)=\lim_{n\to\infty}K_3(n,D).$ (19)

Proof. (1) Equation (17) can be rewritten as

$T_n(a)\,x_{n:1}=w_{n:1},\qquad n\in\mathbb{N}.$

Consequently,

$T_n(a)\,E(x_{n:1}x_{n:1}^\top)\,(T_n(a))^\top=E\!\left(T_n(a)x_{n:1}(T_n(a)x_{n:1})^\top\right)=E(w_{n:1}w_{n:1}^\top)=\sigma^2I_n,\qquad n\in\mathbb{N}.$

As $\det(T_n(a))=1$, $T_n(a)$ is invertible, and therefore,

$E(x_{n:1}x_{n:1}^\top)=\sigma^2(T_n(a))^{-1}\left((T_n(a))^\top\right)^{-1}=\sigma^2\left((T_n(a))^\top T_n(a)\right)^{-1}=\sigma^2\left((T_n(a))^*T_n(a)\right)^{-1}=\sigma^2\left(N_n\,\mathrm{diag}_{1\le k\le n}\!\left((\sigma_k(T_n(a)))^2\right)N_n^*\right)^{-1}=N_n\,\mathrm{diag}_{1\le k\le n}\!\left(\frac{\sigma^2}{(\sigma_k(T_n(a)))^2}\right)N_n^*$ (20)

for all $n\in\mathbb{N}$, where $T_n(a)=M_n\,\mathrm{diag}_{1\le k\le n}\!\left(\sigma_k(T_n(a))\right)N_n^*$ is a singular value decomposition of $T_n(a)$. Thus, applying [8] (Theorem 4.3) yields

$\lambda_n(E(x_{n:1}x_{n:1}^\top))=\frac{\sigma^2}{(\sigma_1(T_n(a)))^2}\ge\frac{\sigma^2}{\max_{\omega\in[0,2\pi]}|a(\omega)|^2}>0,\qquad n\in\mathbb{N}.$

(2a) From Equation (13) we have

$R_{x_{n:1}}(D)=\frac{1}{2n}\ln\frac{\det\!\left(E(x_{n:1}x_{n:1}^\top)\right)}{D^n}=\frac{1}{2n}\ln\frac{\det\!\left(\sigma^2(T_n(a))^{-1}\left((T_n(a))^\top\right)^{-1}\right)}{D^n}=\frac{1}{2n}\ln\frac{\sigma^{2n}}{D^n\det(T_n(a))\det\!\left((T_n(a))^\top\right)}=\frac{1}{2n}\ln\frac{\sigma^{2n}}{D^n}=\frac{1}{2}\ln\frac{\sigma^2}{D},\qquad n\in\mathbb{N}.$

Assertion (2a) now follows from Theorem 4 and Assertion (1).

(2b) From Assertion (2a) we only need to show that

$\lim_{n\to\infty}\frac{\left\|E(x_{n:1}x_{n:1}^\top)-V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*E(x_{n:1}x_{n:1}^\top)V_n\right]_{k,k}\right)V_n^*\right\|_F}{\sqrt{n}}=0.$ (21)

As the Frobenius norm is unitarily invariant we obtain

$\begin{aligned}0&\le\frac{\left\|E(x_{n:1}x_{n:1}^\top)-V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*E(x_{n:1}x_{n:1}^\top)V_n\right]_{k,k}\right)V_n^*\right\|_F}{\sqrt{n}}\\&\le\frac{\left\|E(x_{n:1}x_{n:1}^\top)-T_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|T_n(f)-\widehat{C}_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*E(x_{n:1}x_{n:1}^\top)V_n\right]_{k,k}\right)V_n^*-\widehat{C}_n(f)\right\|_F}{\sqrt{n}}\\&=\frac{\left\|E(x_{n:1}x_{n:1}^\top)-T_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|T_n(f)-\widehat{C}_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*\left(E(x_{n:1}x_{n:1}^\top)-T_n(f)\right)V_n\right]_{k,k}\right)V_n^*\right\|_F}{\sqrt{n}}\\&=\frac{\left\|E(x_{n:1}x_{n:1}^\top)-T_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|T_n(f)-\widehat{C}_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*\left(E(x_{n:1}x_{n:1}^\top)-T_n(f)\right)V_n\right]_{k,k}\right)\right\|_F}{\sqrt{n}}\\&\le\frac{\left\|E(x_{n:1}x_{n:1}^\top)-T_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|T_n(f)-\widehat{C}_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|V_n^*\left(E(x_{n:1}x_{n:1}^\top)-T_n(f)\right)V_n\right\|_F}{\sqrt{n}}\\&=\frac{2\left\|E(x_{n:1}x_{n:1}^\top)-T_n(f)\right\|_F}{\sqrt{n}}+\frac{\left\|T_n(f)-\widehat{C}_n(f)\right\|_F}{\sqrt{n}},\end{aligned}$

where $f$ is the (asymptotic) PSD of $\{x_n\}$ and $\widehat{C}_n(f)=V_n\,\mathrm{diag}_{1\le k\le n}\!\left(\left[V_n^*T_n(f)V_n\right]_{k,k}\right)V_n^*$. Assertion (2b) now follows from $\{E(x_{n:1}x_{n:1}^\top)\}\sim\{T_n(f)\}$ and [9] (Lemma 4.2). ☐

If $\sum_{k=0}^\infty|a_k|<\infty$, there always exists such a function $a$, and it is given by $a(\omega)=\sum_{k=0}^\infty a_ke^{-k\omega i}$ for all $\omega\in\mathbb{R}$ (see, e.g., [8] (Appendix B)). In particular, if $\{x_n\}$ is an AR($p$) process, $a(\omega)=\sum_{k=0}^pa_ke^{-k\omega i}$ for all $\omega\in\mathbb{R}$.
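For a finite-order AR($p$) process these quantities are easy to evaluate numerically. The sketch below (ours, assuming NumPy; the helper name is hypothetical) samples $a(\omega)$ on a grid, computes the lower bound $\frac{\sigma^2}{\max_{\omega}|a(\omega)|^2}$ of Theorem 5(1), and compares it with the exact $\lambda_n(E(x_{n:1}x_{n:1}^\top))=\frac{\sigma^2}{(\sigma_1(T_n(a)))^2}$ obtained from Equation (20):

import numpy as np

def ar_bound_vs_exact(a, sigma2, n, grid=4096):
    # a = (a_0, a_1, ..., a_p) with a_0 = 1; sigma2 = innovation variance.
    a = np.asarray(a, dtype=float)
    omega = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    a_omega = np.exp(-1j * np.outer(omega, np.arange(a.size))) @ a   # a(omega) on the grid
    bound = sigma2 / np.max(np.abs(a_omega)) ** 2

    # Banded Toeplitz matrix T_n(a): the AR coefficients sit on one triangle;
    # only its singular values matter here.
    T = sum(a[d] * np.eye(n, k=d) for d in range(min(a.size, n)))
    exact = sigma2 / np.linalg.norm(T, 2) ** 2       # lambda_n(E(x x^T)) via Equation (20)
    return bound, exact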

5. Sufficient Conditions for AR Processes to be AWSS

In the following two results we give sufficient conditions for AR processes to be AWSS.

Theorem 6.

Let $\{x_n\}$ be as in Definition 2. Suppose that $\{a_{-k}\}_{k\in\mathbb{Z}}$, with $a_{-k}=0$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $a:\mathbb{R}\to\mathbb{C}$ which is continuous and $2\pi$-periodic. Then the following assertions are equivalent:

  1. {xn} is AWSS.

  2. $\left\{\left\|E(x_{n:1}x_{n:1}^\top)\right\|_2\right\}$ is bounded.

  3. $\{T_n(a)\}$ is stable (that is, $\left\{\left\|(T_n(a))^{-1}\right\|_2\right\}$ is bounded).

  4. $a(\omega)\neq0$ for all $\omega\in\mathbb{R}$ and $\{x_n\}$ is AWSS with (asymptotic) PSD $\frac{\sigma^2}{|a|^2}$.

Proof. 

(1)⇒(2) This is a direct consequence of the definition of AWSS process, i.e., of Definition 1.

(2)⇔(3) From Equation (20) we have

$\left\|E(x_{n:1}x_{n:1}^\top)\right\|_2=\frac{\sigma^2}{(\sigma_n(T_n(a)))^2}=\sigma^2\left\|N_n\,\mathrm{diag}_{1\le k\le n}\!\left(\frac{1}{\sigma_k(T_n(a))}\right)M_n^*\right\|_2^2=\sigma^2\left\|(T_n(a))^{-1}\right\|_2^2$

for all $n\in\mathbb{N}$.

(3)⇒(4) It is well known that if $f:\mathbb{R}\to\mathbb{C}$ is continuous and $2\pi$-periodic, and $\{T_n(f)\}$ is stable, then $f(\omega)\neq0$ for all $\omega\in\mathbb{R}$. Hence, $a(\omega)\neq0$ for all $\omega\in\mathbb{R}$.

Applying [8] (Lemma 4.2.1) yields $(T_n(a))^\top=(T_n(a))^*=T_n(\overline{a})$. Consequently, from [7] (Theorem 3) we obtain

$\left\{(T_n(a))^\top T_n(a)\right\}=\left\{T_n(\overline{a})\,T_n(a)\right\}\sim\left\{T_n(\overline{a}a)\right\}=\left\{T_n(|a|^2)\right\}.$

Observe that the sequence

$\left\|\left((T_n(a))^\top T_n(a)\right)^{-1}\right\|_2=\left\|\frac{1}{\sigma^2}E(x_{n:1}x_{n:1}^\top)\right\|_2=\frac{1}{\sigma^2}\left\|E(x_{n:1}x_{n:1}^\top)\right\|_2$

is bounded. As the function $|a|^2$ is real, applying [8] (Theorem 4.4) we have that $T_n(|a|^2)$ is Hermitian and $0<\min_{\omega\in[0,2\pi]}|a(\omega)|^2\le\lambda_n\!\left(T_n(|a|^2)\right)$ for all $n\in\mathbb{N}$, and therefore,

$\left\|\left(T_n(|a|^2)\right)^{-1}\right\|_2=\frac{1}{\lambda_n\!\left(T_n(|a|^2)\right)}\le\frac{1}{\min_{\omega\in[0,2\pi]}|a(\omega)|^2},\qquad n\in\mathbb{N}.$

Thus, from [5] (Theorem 1.4) we obtain

$\left\{\frac{1}{\sigma^2}E(x_{n:1}x_{n:1}^\top)\right\}=\left\{\left((T_n(a))^\top T_n(a)\right)^{-1}\right\}\sim\left\{\left(T_n(|a|^2)\right)^{-1}\right\}.$

Hence, applying [10] (Theorem 4.2) and [5] (Theorem 1.2) yields

$\left\{\frac{1}{\sigma^2}E(x_{n:1}x_{n:1}^\top)\right\}\sim\left\{T_n\!\left(\frac{1}{|a|^2}\right)^{-1}\right\}^{-1}\sim\left\{T_n\!\left(\frac{1}{|a|^2}\right)\right\}.$

Consequently, from [8] (Lemma 3.1.3) and [8] (Lemma 4.2.3) we have

$\left\{E(x_{n:1}x_{n:1}^\top)\right\}\sim\left\{\sigma^2T_n\!\left(\frac{1}{|a|^2}\right)\right\}=\left\{T_n\!\left(\frac{\sigma^2}{|a|^2}\right)\right\}.$

(4)⇒(1) It is obvious. ☐

Corollary 2.

Let $\{x_n\}$ be as in Definition 2 with $\sum_{k=0}^\infty|a_k|<\infty$. If $\sum_{k=0}^\infty a_kz^k\neq0$ for all $|z|\le1$ then $\{x_n\}$ is AWSS.

Proof. 

It is well known that if a sequence of complex numbers $\{t_k\}_{k\in\mathbb{Z}}$ satisfies $\sum_{k=-\infty}^\infty|t_k|<\infty$ and $\sum_{k=-\infty}^\infty t_kz^k\neq0$ for all $|z|\le1$, then $\{T_n(f)\}$ is stable with $f(\omega)=\sum_{k=-\infty}^\infty t_ke^{k\omega i}$ for all $\omega\in\mathbb{R}$. Therefore, $\{T_n(b)\}$ is stable with $b(\omega)=\sum_{k=0}^\infty a_ke^{k\omega i}$ for all $\omega\in\mathbb{R}$. Thus,

$\left\|(T_n(a))^{-1}\right\|_2=\left\|\left((T_n(a))^{-1}\right)^\top\right\|_2=\left\|\left((T_n(a))^\top\right)^{-1}\right\|_2=\left\|(T_n(b))^{-1}\right\|_2$

is bounded, with $a(\omega)=\sum_{k=0}^\infty a_ke^{-k\omega i}$ for all $\omega\in\mathbb{R}$. As $\{T_n(a)\}$ is stable, from Theorem 6 we conclude that $\{x_n\}$ is AWSS. ☐
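For a finite-order AR($p$) process, the condition of Corollary 2 reduces to checking that the polynomial $\sum_{k=0}^{p}a_kz^k$ has no root in the closed unit disc, which the following sketch (ours, assuming NumPy; the helper name is hypothetical) does:

import numpy as np

def satisfies_corollary_2(a, tol=1e-12):
    # a = (a_0, a_1, ..., a_p) with a_0 = 1; np.roots expects the highest degree first.
    roots = np.roots(np.asarray(a, dtype=float)[::-1])
    return bool(np.all(np.abs(roots) > 1.0 + tol))

# AR(1) with a_1 = 1/2: the only root of 1 + z/2 is z = -2, so the process is AWSS.
# satisfies_corollary_2([1.0, 0.5]) -> True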

6. Numerical Example and Conclusions

6.1. Example

Let $\{x_n\}$ be as in Definition 2 with $a_k=0$ for all $k>1$. Observe that $\frac{\sigma^2}{\max_{\omega\in[0,2\pi]}|a(\omega)|^2}=\frac{\sigma^2}{(1+|a_1|)^2}$. If $|a_1|<1$, from Corollary 2 we obtain that the AR(1) process $\{x_n\}$ is AWSS. Figure 1 shows $R_{x_{n:1}}(D)$, $\widetilde{R}_{x_{n:1}}(D)$, and $\breve{R}_{x_{n:1}}(D)$ by assuming that $\{x_n\}$ is Gaussian, $a_1=\frac{1}{2}$, $\sigma^2=1$, $D=\frac{\sigma^2}{(1+|a_1|)^2}=\frac{4}{9}$, and $n\le100$. Figure 1 also shows the highest upper bound of $R_{x_{n:1}}(D)$ presented in Theorem 5, namely, $K_3(n,D)$. Observe that the figure bears evidence of the equalities and inequalities given in Equations (18) and (19).
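This setting can be reproduced for a single block length with the helpers sketched in the previous sections (gaussian_vector_rdf and dft_strategy_rates, both ours); the exact correlation matrix of the block follows from Equation (20):

import numpy as np

a1, sigma2, D, n = 0.5, 1.0, 4.0 / 9.0, 64

# E(x_{n:1} x_{n:1}^T) = sigma^2 ((T_n(a))^T T_n(a))^(-1) for this AR(1) source (Equation (20)).
T = np.eye(n) + a1 * np.eye(n, k=1)
R = sigma2 * np.linalg.inv(T.T @ T)

r_opt = gaussian_vector_rdf(R, D)          # equals (1/2) ln(sigma^2 / D) by Theorem 5(2a)
r_tilde, r_breve = dft_strategy_rates(R, D)
# Theorem 5: r_opt <= r_tilde <= r_breve, and both DFT-based rates approach r_opt as n grows.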

Figure 1. Considered rates for a Gaussian AWSS AR(1) source.

6.2. Conclusions

The computational complexity of coding finite-length data blocks of Gaussian sources can be reduced by using either of the two low-complexity coding strategies presented here instead of the optimal coding strategy. Moreover, the rate does not increase if we use those strategies instead of the optimal one whenever the Gaussian source is AWSS and AR, and the considered data block is large enough.

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. All authors have read and approved the final manuscript.

Funding

This work was supported in part by the Spanish Ministry of Economy and Competitiveness through the CARMEN project (TEC2016-75067-C4-3-R).

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1.Kolmogorov A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory. 1956;2:102–108. doi: 10.1109/TIT.1956.1056823. [DOI] [Google Scholar]
  • 2.Gray R.M. Information rates of autoregressive processes. IEEE Trans. Inf. Theory. 1970;16:412–421. doi: 10.1109/TIT.1970.1054470. [DOI] [Google Scholar]
  • 3.Pearl J. On coding and filtering stationary signals by discrete Fourier transforms. IEEE Trans. Inf. Theory. 1973;19:229–232. doi: 10.1109/TIT.1973.1054985. [DOI] [Google Scholar]
  • 4.Gutiérrez-Gutiérrez J., Zárraga-Rodríguez M., Insausti X. Upper bounds for the rate distortion function of finite-length data blocks of Gaussian WSS sources. Entropy. 2017;19:554. doi: 10.3390/e19100554. [DOI] [Google Scholar]
  • 5.Gray R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory. 2006;2:155–239. doi: 10.1561/0100000006. [DOI] [Google Scholar]
  • 6.Gray R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans. Inf. Theory. 1972;18:725–730. doi: 10.1109/TIT.1972.1054924. [DOI] [Google Scholar]
  • 7.Gutiérrez-Gutiérrez J., Crespo P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory. 2011;57:5444–5454. doi: 10.1109/TIT.2011.2159042. [DOI] [Google Scholar]
  • 8.Gutiérrez-Gutiérrez J., Crespo P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory. 2011;8:179–257. doi: 10.1561/0100000066. [DOI] [Google Scholar]
  • 9.Gutiérrez-Gutiérrez J., Zárraga-Rodríguez M., Insausti X., Hogstad B.O. On the complexity reduction of coding WSS vector processes by using a sequence of block circulant matrices. Entropy. 2017;19:95. doi: 10.3390/e19030095. [DOI] [Google Scholar]
  • 10.Gutiérrez-Gutiérrez J., Crespo P.M. Asymptotically equivalent sequences of matrices and Hermitian block Toeplitz matrices with continuous symbols: Applications to MIMO systems. IEEE Trans. Inf. Theory. 2008;54:5671–5680. doi: 10.1109/TIT.2008.2006401. [DOI] [Google Scholar]
