Entropy. 2020 Jun 25; 22(6): 705. doi: 10.3390/e22060705

Sharp Second-Order Pointwise Asymptotics for Lossless Compression with Side Information

Lampros Gavalakis 1,* and Ioannis Kontoyiannis 1

Abstract

The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic lower bound for the description lengths achieved by an arbitrary sequence of compressors. This implies that for ergodic source-side information pairs, the conditional entropy rate is the best achievable asymptotic lower bound to the rate, not just in expectation but with probability one. Under appropriate mixing conditions, a central limit theorem and a law of the iterated logarithm are proved, describing the inevitable fluctuations of the second-order asymptotically best possible rate. An idealised version of Lempel-Ziv coding with side information is shown to be universally first- and second-order asymptotically optimal, under the same conditions. These results are in part based on a new almost-sure invariance principle for the conditional information density, which may be of independent interest.

Keywords: entropy, lossless data compression, side information, conditional entropy, central limit theorem, law of the iterated logarithm, conditional varentropy

1. Introduction

It is well-known that the presence of correlated side information can potentially offer dramatic benefits for data compression [1,2]. Important applications where such side information is naturally present include the compression of genomic data [3,4], file and software management [5,6], and image and video compression [7,8].

In practice, the most common approach to the design of effective compression methods with side information is based on generalisations of the Lempel-Ziv family of algorithms [9,10,11,12,13]. A different approach based on grammar-based codes was developed in [14], turbo codes were applied in [15], and a generalised version of context-tree weighting was used in [16].

In this work, we examine the theoretical fundamental limits of the best possible performance that can be achieved in such problems. Let $(\boldsymbol{X},\boldsymbol{Y})=\{(X_n,Y_n);\,n\geq 1\}$ be a source-side information pair; $\boldsymbol{X}$ is the source to be compressed, and $\boldsymbol{Y}$ is the associated side information process, which is assumed to be available to both the encoder and the decoder. Under appropriate conditions, the best average rate that can be achieved asymptotically [2] is the conditional entropy rate,

$$H(\boldsymbol{X}|\boldsymbol{Y})=\lim_{n\to\infty}\frac{1}{n}H(X_1^n|Y_1^n)\quad\text{bits/symbol},$$

where $X_1^n=(X_1,X_2,\ldots,X_n)$, $Y_1^n=(Y_1,Y_2,\ldots,Y_n)$, and $H(X_1^n|Y_1^n)$ denotes the conditional entropy of $X_1^n$ given $Y_1^n$; precise definitions will be given in Section 2.
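For a quick feel for this quantity in the simplest setting, the following minimal Python sketch (not part of the paper) computes $H(X|Y)=H(X,Y)-H(Y)$ for a single pair of jointly distributed symbols; for a memoryless pair this single-letter quantity equals the conditional entropy rate. The joint probabilities are hypothetical.

```python
import math

# Hypothetical joint PMF P(x, y) of one source / side-information symbol pair.
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(pmf):
    """Shannon entropy, in bits, of a PMF given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# Marginal PMF of Y.
P_Y = {}
for (x, y), p in P.items():
    P_Y[y] = P_Y.get(y, 0.0) + p

H_XY = entropy(P)       # H(X, Y)
H_Y = entropy(P_Y)      # H(Y)
print(H_XY - H_Y)       # H(X|Y) = H(X,Y) - H(Y) ~= 0.722 bits for these numbers
```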

Our main goal is to derive sharp asymptotic expressions for the optimum compression rate (with side information available to both the encoder and decoder), not only in expectation but with probability 1. In addition to the best first-order performance, we also determine the best rate at which this performance can be achieved, as a function of the length of the data being compressed. Furthermore, we consider an idealised version of a Lempel-Ziv compression algorithm, and we show that it can achieve asymptotically optimal first- and second-order performance, universally over a broad class of stationary and ergodic source-side information pairs (X,Y).

Specifically, we establish the following. In Section 2.1 we describe the theoretically optimal one-to-one compressor $f_n^*(X_1^n|Y_1^n)$, for arbitrary source-side information pairs $(\boldsymbol{X},\boldsymbol{Y})$. In Section 2.2 we prove our first result, stating that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ can be well-approximated, with probability one, by the conditional information density, $-\log P(X_1^n|Y_1^n)$. Theorem 2 states that for any jointly stationary and ergodic source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$, the best asymptotically achievable compression rate is $H(\boldsymbol{X}|\boldsymbol{Y})$ bits/symbol, with probability 1. This generalises Kieffer’s corresponding result [17] to the case of compression with side information.

Furthermore, in Section 2.4 we show that there is a sequence of random variables $\{Z_n\}$ such that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of any sequence of compressors $\{f_n\}$ satisfy a “one-sided” central limit theorem (CLT): Eventually, with probability 1,

$$\ell(f_n(X_1^n|Y_1^n))\geq nH(\boldsymbol{X}|\boldsymbol{Y})+\sqrt{n}\,Z_n+o(\sqrt{n})\quad\text{bits},\tag{1}$$

where the $Z_n$ converge to a $N(0,\sigma^2(\boldsymbol{X}|\boldsymbol{Y}))$ distribution, and the term $o(\sqrt{n})$ is negligible compared to $\sqrt{n}$. The lower bound (1) is established in Theorem 3, where it is also shown that it is asymptotically achievable. This means that the rate obtained by any sequence of compressors has inevitable $O(1/\sqrt{n})$ fluctuations around the conditional entropy rate, and that the size of these fluctuations is quantified by the conditional varentropy rate,

$$\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big({-\log P(X_1^n|Y_1^n)}\big).$$

This generalises the minimal coding variance of [18]. The bound (1) holds for a broad class of source-side information pairs, including all Markov chains with positive transition probabilities. Under the same conditions, a corresponding “one-sided” law of the iterated logarithm (LIL) is established in Theorem 4, which gives a precise description of the inevitable almost-sure fluctuations above H(X|Y), for any sequence of compressors.

The proofs of all the results in Section 2.3 and Section 2.4 are based, in part, on analogous asymptotics for the conditional information density, $-\log P(X_1^n|Y_1^n)$. These are established in Section 2.5, where we state and prove a corresponding CLT and an LIL for $-\log P(X_1^n|Y_1^n)$. These results, in turn, follow from the almost sure invariance principle for $-\log P(X_1^n|Y_1^n)$, proved in Appendix A. Theorem A1, which is of independent interest, generalises the invariance principle established for the (unconditional) information density $-\log P(X_1^n)$ by Philipp and Stout [19]. In fact, Theorem A1 and the identification of the conditions under which it holds (Assumption 1 in Section 2.4) are the most novel contributions of this work.

In a different direction, Nomura and Han [20] establish finer coding theorems for the Slepian-Wolf problem, when the side information is only available to the decoder. There, they obtain general second-order asymptotics for the best achievable rate region, under an excess-rate probability constraint.

Section 3 is devoted to universal compression. We consider a simple, idealised version of Lempel-Ziv coding with side information. As in the case of Lempel-Ziv compression without side information [21,22], the performance of this scheme is determined by the asymptotics of a family of conditional recurrence times, $R_n=R_n(\boldsymbol{X}|\boldsymbol{Y})$. Under appropriate, general conditions on the source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$, in Theorem 8 we show that the ideal description lengths, $\log R_n$, can be well-approximated by the conditional information density $-\log P(X_1^n|Y_1^n)$. Combining this with our earlier results on the conditional information density, in Corollary 1 and Theorem 9 we show that the compression rate of this scheme converges to $H(\boldsymbol{X}|\boldsymbol{Y})$, with probability 1, and that it is universally second-order optimal. The results of this section generalise the corresponding asymptotics without side information established in [23,24].

The proofs of the more technical results needed in Section 2 and Section 3 are given in the appendix.

2. Pointwise Asymptotics

In this section, we derive general, fine asymptotic bounds for the description lengths of arbitrary compressors with side information, as well as corresponding achievability results.

2.1. Preliminaries

Let $\boldsymbol{X}=\{X_n;\,n\geq 1\}$ be an arbitrary source to be compressed, and $\boldsymbol{Y}=\{Y_n;\,n\geq 1\}$ be an associated side information process. We let $\mathcal{X},\mathcal{Y}$ denote their finite alphabets, respectively, and we refer to the joint process $(\boldsymbol{X},\boldsymbol{Y})=\{(X_n,Y_n);\,n\geq 1\}$ as a source-side information pair.

Let $x_1^n=(x_1,x_2,\ldots,x_n)$ be a source string, and let $y_1^n=(y_1,y_2,\ldots,y_n)$ be an associated side information string which is available to both the encoder and decoder. A fixed-to-variable one-to-one compressor with side information, of blocklength $n$, is a collection of functions $f_n(\cdot|y_1^n)$, where each $f_n(x_1^n|y_1^n)$ takes a value in the set of all finite-length binary strings,

$$\{0,1\}^*=\bigcup_{k=0}^{\infty}\{0,1\}^k=\{\emptyset,0,1,00,01,\ldots\},$$

with the convention that $\{0,1\}^0=\{\emptyset\}$ consists of just the empty string $\emptyset$ of length zero. For each $y_1^n\in\mathcal{Y}^n$, we assume that $f_n(\cdot|y_1^n)$ is a one-to-one function from $\mathcal{X}^n$ to $\{0,1\}^*$, so that the compressed binary string $f_n(x_1^n|y_1^n)$ is always correctly decodable.

The main figure of merit in lossless compression is of course the description length,

$$\ell(f_n(x_1^n|y_1^n))=\text{length of }f_n(x_1^n|y_1^n),\quad\text{bits},$$

where, throughout, $\ell(s)$ denotes the length, in bits, of a binary string $s$. It is easy to see that under quite general criteria, the optimal compressor $f_n^*$ is easy to describe; see [25] for an extensive discussion. For $1\leq i\leq j$, we use the shorthand notation $z_i^j$ for the string $(z_i,z_{i+1},\ldots,z_j)$, and similarly $Z_i^j$ for the corresponding collection of random variables $Z_i^j=(Z_i,Z_{i+1},\ldots,Z_j)$.

Definition 1

(The optimal compressor $f_n^*$). For each side information string $y_1^n$, $f_n^*(\cdot|y_1^n)$ is the optimal compressor for the distribution $P(X_1^n=\cdot\,|\,Y_1^n=y_1^n)$, namely the compressor that orders the strings $x_1^n$ in order of decreasing probability $P(X_1^n=x_1^n|Y_1^n=y_1^n)$, and assigns them codewords from $\{0,1\}^*$ in lexicographic order.
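To make Definition 1 concrete, here is a minimal Python sketch (not from the paper) of the optimal compressor $f_n^*(\cdot|y_1^n)$ for one fixed side information string: source strings are sorted by decreasing conditional probability and matched with the codewords $\emptyset,0,1,00,01,\ldots$ in order. The toy conditional probabilities are hypothetical.

```python
from itertools import count, product

def codewords():
    """Enumerate {0,1}* in the order: empty string, 0, 1, 00, 01, 10, 11, 000, ..."""
    yield ""
    for k in count(1):
        for bits in product("01", repeat=k):
            yield "".join(bits)

def optimal_compressor(cond_pmf):
    """Map each source string to a codeword: strings are sorted by decreasing
    conditional probability P(x | y) and matched with the codewords above."""
    ordered = sorted(cond_pmf, key=lambda x: -cond_pmf[x])
    return dict(zip(ordered, codewords()))

# Toy example (hypothetical numbers): n = 2, binary source, one fixed y_1^2.
P_x_given_y = {"00": 0.50, "01": 0.25, "10": 0.15, "11": 0.10}
for x, w in optimal_compressor(P_x_given_y).items():
    print(x, "->", repr(w), " length", len(w))   # lengths 0, 1, 1, 2 bits
```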

2.2. The Conditional Information Density

Definition 2

(Conditional information density). For an arbitrary source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$, the conditional information density of blocklength $n$ is the random variable: $-\log P(X_1^n|Y_1^n)=-\log P_{X_1^n|Y_1^n}(X_1^n|Y_1^n)$.

[Throughout the paper, ‘log’ denotes ‘log2’, the logarithm taken to base 2, and all familiar information theoretic quantities are expressed in bits.]

The starting point is the following almost sure (a.s.) approximation result between the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of an arbitrary sequence of compressors and the conditional information density $-\log P(X_1^n|Y_1^n)$ of an arbitrary source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$. When it causes no confusion, we drop the subscripts for PMFs and conditional PMFs, e.g., simply writing $P(x_1^n|y_1^n)$ for $P_{X_1^n|Y_1^n}(x_1^n|y_1^n)$ as in the definition above. Recall the definition of the optimal compressors $\{f_n^*\}$ from Section 2.1.

Theorem 1.

For any source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$, and any sequence $\{B_n\}$ that grows faster than logarithmically, i.e., such that $B_n/\log n\to\infty$ as $n\to\infty$, we have:

  • (a) For any sequence of compressors with side information $\{f_n\}$:

    $$\liminf_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n))-[-\log P(X_1^n|Y_1^n)]}{B_n}\geq 0,\quad\text{a.s.}$$

  • (b) The optimal compressors $\{f_n^*\}$ achieve the above bound with equality.

Proof. 

Fix $\epsilon>0$ arbitrary and let $\tau=\tau_n=\epsilon B_n$. Applying the general converse in ([25], Theorem 3.3) with $X_1^n,Y_1^n$ in place of $X,Y$ and $\mathcal{X}^n,\mathcal{Y}^n$ in place of $\mathcal{X},\mathcal{Y}$, gives,

$$P\big(\ell(f_n(X_1^n|Y_1^n))\leq-\log P(X_1^n|Y_1^n)-\epsilon B_n\big)\leq 2^{\log n-\epsilon B_n}\big(\log|\mathcal{X}|+1\big),$$

which is summable in $n$. Therefore, by the Borel-Cantelli lemma we have that, eventually, a.s.,

$$\ell(f_n(X_1^n|Y_1^n))+\log P(X_1^n|Y_1^n)>-\epsilon B_n.$$

Since $\epsilon>0$ was arbitrary, this implies (a). Part (b) follows from (a) together with the fact that $\ell(f_n^*(X_1^n|Y_1^n))+\log P(X_1^n|Y_1^n)\leq 0$, a.s., by the general achievability result in ([25], Theorem 3.1). □

2.3. First-Order Asymptotics

For any source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$, the conditional entropy rate $H(\boldsymbol{X}|\boldsymbol{Y})$ is defined as:

$$H(\boldsymbol{X}|\boldsymbol{Y})=\limsup_{n\to\infty}\frac{1}{n}H(X_1^n|Y_1^n).$$

Throughout, $H(Z)$ and $H(Z|W)$ denote the discrete entropy of $Z$ and the conditional entropy of $Z$ given $W$, in bits. If $(\boldsymbol{X},\boldsymbol{Y})$ are jointly stationary, then the above limsup is in fact a limit, and it is equal to $H(\boldsymbol{X},\boldsymbol{Y})-H(\boldsymbol{Y})$, where $H(\boldsymbol{X},\boldsymbol{Y})$ and $H(\boldsymbol{Y})$ are the entropy rates of $(\boldsymbol{X},\boldsymbol{Y})$ and of $\boldsymbol{Y}$, respectively [2]. Moreover, if $(\boldsymbol{X},\boldsymbol{Y})$ are also jointly ergodic, then by applying the Shannon-McMillan-Breiman theorem [2] to $\boldsymbol{Y}$ and to the pair $(\boldsymbol{X},\boldsymbol{Y})$, we obtain its conditional version:

$$-\frac{1}{n}\log P(X_1^n|Y_1^n)\to H(\boldsymbol{X}|\boldsymbol{Y}),\quad\text{a.s.}\tag{2}$$
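The convergence in (2) is easy to visualise numerically in the memoryless case, where $-\log P(X_1^n|Y_1^n)=-\sum_{i=1}^{n}\log P(X_i|Y_i)$. A minimal simulation sketch (not from the paper), with a hypothetical joint distribution:

```python
import math, random
random.seed(0)

# Hypothetical memoryless pair: Y ~ Uniform{0,1} and P(X = Y | Y) = 0.8,
# so H(X|Y) = -(0.8*log2(0.8) + 0.2*log2(0.2)) ~= 0.7219 bits/symbol.
def sample(n):
    ys = [random.randint(0, 1) for _ in range(n)]
    xs = [y if random.random() < 0.8 else 1 - y for y in ys]
    return xs, ys

for n in (100, 10_000, 1_000_000):
    xs, ys = sample(n)
    # For a memoryless pair, -log2 P(X_1^n | Y_1^n) is a sum over symbols.
    val = -sum(math.log2(0.8 if x == y else 0.2) for x, y in zip(xs, ys)) / n
    print(n, round(val, 4))   # approaches ~0.7219 as n grows, as in (2)
```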

The next result states that the conditional entropy rate is the best asymptotically achievable compression rate, not only in expectation but also with probability 1. It is a consequence of Theorem 1 with $B_n=n$, combined with (2).

Theorem 2.

Suppose (X,Y) is a jointly stationary and ergodic source-side information pair with conditional entropy rate H(X|Y).

  • (a) For any sequence of compressors with side information $\{f_n\}$:

    $$\liminf_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n))}{n}\geq H(\boldsymbol{X}|\boldsymbol{Y}),\quad\text{a.s.}$$

  • (b) The optimal compressors $\{f_n^*\}$ achieve the above bound with equality.

2.4. Finer Asymptotics

The refinements of Theorem 2 presented in this section will be derived as consequences of the general approximation results in Theorem 1, combined with corresponding refined asymptotics for the conditional information density $-\log P(X_1^n|Y_1^n)$. For clarity of exposition these are stated separately, in Section 2.5 below.

The results of this section will be established for a class of jointly stationary and ergodic source-side information pairs $(\boldsymbol{X},\boldsymbol{Y})$ that includes all Markov chains with positive transition probabilities. The relevant conditions, in their most general form, will be given in terms of the following mixing coefficients.

Definition 3.

Suppose $\boldsymbol{Z}=\{Z_n;\,n\in\mathbb{Z}\}$ is a stationary process on a finite alphabet $\mathcal{Z}$. For any pair of indices $i\leq j$, let $\mathcal{F}_i^j$ denote the $\sigma$-algebra generated by $Z_i^j$. For $d\geq 1$, define:

$$\alpha^{(\boldsymbol{Z})}(d)=\sup\big\{|P(A\cap B)-P(A)P(B)|\;;\;A\in\mathcal{F}_{-\infty}^{0},\,B\in\mathcal{F}_{d}^{\infty}\big\},$$
$$\gamma^{(\boldsymbol{Z})}(d)=\max_{z\in\mathcal{Z}}\,\mathbb{E}\big|\log P(Z_0=z|Z_{-\infty}^{-1})-\log P(Z_0=z|Z_{-d}^{-1})\big|.$$

Note that if $\boldsymbol{Z}$ is an ergodic Markov chain of order $k$, then $\alpha^{(\boldsymbol{Z})}(d)$ decays exponentially fast [26], and $\gamma^{(\boldsymbol{Z})}(d)=0$ for all $d\geq k$. Moreover, if $(\boldsymbol{X},\boldsymbol{Y})$ is a Markov chain with all positive transition probabilities, then $\gamma^{(\boldsymbol{Y})}(d)$ also decays exponentially fast; cf. ([27], Lemma 2.1).

Throughout this section we will assume that the following conditions hold:

Assumption 1.

The source-side information pair (X,Y) is stationary and satisfies one of the following three conditions:

  • (a)

    (X,Y) is a Markov chain with all positive transition probabilities; or

  • (b)

    (X,Y) as well as Y are kth order, irreducible and aperiodic Markov chains; or

  • (c)
    $(\boldsymbol{X},\boldsymbol{Y})$ is jointly ergodic and satisfies the following mixing conditions: [Our source-side information pairs $(\boldsymbol{X},\boldsymbol{Y})$ are only defined for $(X_n,Y_n)$ with $n\geq 1$, whereas the coefficients $\alpha^{(\boldsymbol{Z})}(d)$ and $\gamma^{(\boldsymbol{Z})}(d)$ are defined for two-sided sequences $\{Z_n;\,n\in\mathbb{Z}\}$. However, this does not impose an additional restriction, since any one-sided stationary process can be extended to a two-sided one by the Kolmogorov extension theorem [28].]

    $$\alpha^{(\boldsymbol{X},\boldsymbol{Y})}(d)=O(d^{-336}),\quad\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(d)=O(d^{-48}),\quad\text{and}\quad\gamma^{(\boldsymbol{Y})}(d)=O(d^{-48}).\tag{3}$$

In view of the discussion following Definition 3, (a) $\Rightarrow$ (c) and (b) $\Rightarrow$ (c). Therefore, all results stated under Assumption 1 will be proved under the weakest set of conditions, namely that the conditions in (3) hold.

Definition 4.

For a source-side information pair (X,Y), the conditional varentropy rate is:

$$\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=\limsup_{n\to\infty}\frac{1}{n}\mathrm{Var}\big({-\log P(X_1^n|Y_1^n)}\big).\tag{4}$$

Under the above assumptions, the limsup in (4) is in fact a limit. Lemma 1 is proved in the Appendix A.

Lemma 1.

Under Assumption 1, the conditional varentropy rate $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})$ is:

$$\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big({-\log P(X_1^n|Y_1^n)}\big)=\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\bigg(\log\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg).$$
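Lemma 1 identifies $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})$ as the limit of the normalised variances of the conditional information density. For a jointly Markov pair, these finite-$n$ variances can be estimated by simulation, since $-\log P(X_1^n|Y_1^n)=-\log P(X_1^n,Y_1^n)+\log P(Y_1^n)$ and $P(Y_1^n)$ is computable by a forward recursion over the hidden $X$ component. A rough Monte Carlo sketch, not from the paper; the transition matrix below is hypothetical.

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical jointly Markov pair: the joint state s = (x, y) takes 4 values and
# all transition probabilities are positive (so condition (a) of Assumption 1 holds).
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
Q = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.1, 0.4, 0.3, 0.2],
              [0.2, 0.1, 0.4, 0.3],
              [0.3, 0.2, 0.1, 0.4]])
pi = np.linalg.matrix_power(Q, 200)[0]      # ~ stationary distribution (row of a high power)
pi = pi / pi.sum()

def neg_log_P_x_given_y(path):
    """-log2 P(X_1^n | Y_1^n) = -log2 P(X_1^n, Y_1^n) + log2 P(Y_1^n), where the
    Y-marginal comes from a forward recursion over the hidden X component."""
    logPxy = np.log2(pi[path[0]]) + sum(np.log2(Q[s, t]) for s, t in zip(path, path[1:]))
    y = [states[s][1] for s in path]
    alpha = np.array([pi[k] if states[k][1] == y[0] else 0.0 for k in range(4)])
    for i in range(1, len(path)):
        alpha = np.array([alpha @ Q[:, k] if states[k][1] == y[i] else 0.0 for k in range(4)])
    return float(-logPxy + np.log2(alpha.sum()))

def sample_path(n):
    s = [rng.choice(4, p=pi)]
    for _ in range(n - 1):
        s.append(rng.choice(4, p=Q[s[-1]]))
    return s

n, reps = 100, 500
vals = [neg_log_P_x_given_y(sample_path(n)) for _ in range(reps)]
print(np.var(vals) / n)   # crude finite-n proxy for sigma^2(X|Y) of this toy chain (in bits^2)
```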

Our first main result in this section is a “one-sided” central limit theorem (CLT), which states that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of an arbitrary sequence of compressors with side information, $\{f_n\}$, are asymptotically at best Gaussian, with variance $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})$. Recall the optimal compressors $\{f_n^*\}$ described in Section 2.1.

Theorem 3

(CLT for codelengths). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate (4). Then there exists a sequence of random variables $\{Z_n;\,n\geq 1\}$ such that:

  • (a) For any sequence of compressors with side information, $\{f_n\}$, we have,

    $$\liminf_{n\to\infty}\left\{\frac{\ell(f_n(X_1^n|Y_1^n))-H(X_1^n|Y_1^n)}{\sqrt{n}}-Z_n\right\}\geq 0,\quad\text{a.s.},\tag{5}$$

    where $Z_n\to N(0,\sigma^2)$, in distribution, as $n\to\infty$.

  • (b) The optimal compressors $\{f_n^*\}$ achieve the lower bound in (5) with equality.

Proof. 

Letting $Z_n=\big[{-\log P(X_1^n|Y_1^n)}-H(X_1^n|Y_1^n)\big]/\sqrt{n}$, $n\geq 1$, and taking $B_n=\sqrt{n}$, both results follow by combining the approximation results of Theorem 1 with the corresponding CLT for the conditional information density in Theorem 5. □
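In the memoryless case, the limiting behaviour in Theorem 3 can be checked by simulating the conditional information density itself (which, by part (b), matches the optimal description lengths up to lower-order terms): there $H(X_1^n|Y_1^n)=nH(\boldsymbol{X}|\boldsymbol{Y})$, $-\log P(X_1^n|Y_1^n)$ is a sum of i.i.d. terms, and $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=\mathrm{Var}(-\log P(X_1|Y_1))$ (see Remark 1 below). A minimal Monte Carlo sketch, not from the paper, with a hypothetical joint distribution:

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical memoryless pair: Y ~ Uniform{0,1}, P(X = Y | Y) = 0.8.
p = 0.8
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))                            # H(X|Y), in bits
sigma2 = p * (-np.log2(p) - H) ** 2 + (1 - p) * (-np.log2(1 - p) - H) ** 2  # Var(-log2 P(X_1|Y_1)) = 0.64

n, blocks = 1000, 20000
k = rng.binomial(n, p, size=blocks)                        # number of positions with X_i == Y_i
info = k * (-np.log2(p)) + (n - k) * (-np.log2(1 - p))     # -log2 P(X_1^n | Y_1^n) per block
Z = (info - n * H) / np.sqrt(n)                            # standardised information density
print(Z.var(), sigma2)     # sample variance of Z_n vs sigma^2(X|Y): both ~ 0.64
```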

Our next result is in the form of a “one-sided” law of the iterated logarithm (LIL), which states that, with probability 1, the description lengths of any compressor with side information will have inevitable fluctuations of order $\sqrt{2\sigma^2 n\log_e\log_e n}$ bits around $nH(\boldsymbol{X}|\boldsymbol{Y})$; throughout, $\log_e$ denotes the natural logarithm to base $e$.

Theorem 4

(LIL for codelengths). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate (4). Then:

  • (a) For any sequence of compressors with side information, $\{f_n\}$, we have:

    $$\limsup_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n))-H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}}\geq\sigma,\quad\text{a.s.},\tag{6}$$

    $$\text{and}\quad\liminf_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n))-H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}}\geq-\sigma,\quad\text{a.s.}\tag{7}$$

  • (b) The optimal compressors $\{f_n^*\}$ achieve the lower bounds in (6) and (7) with equality.

Proof. 

Taking $B_n=\sqrt{2n\log_e\log_e n}$, the results of the theorem again follow by combining the approximation results of Theorem 1 with the corresponding LIL for the conditional information density in Theorem 6. □

Remark 1.

  1. Although the results in Theorems 3 and 4 are stated for one-to-one compressors $\{f_n\}$, they remain valid for the class of prefix-free compressors. Since prefix-free codes are certainly one-to-one, the converse bounds in Theorem 3 (a) and 4 (a) are valid as stated, while for the achievability results it suffices to consider compressors $f_n^{\mathrm{p}}$ with description lengths $\ell(f_n^{\mathrm{p}}(x_1^n|y_1^n))=\lceil-\log P(x_1^n|y_1^n)\rceil$, and then apply Theorem 5.

  2. Theorem 3 says that the compression rate of any sequence of compressors $\{f_n\}$ will have at best Gaussian fluctuations around $H(\boldsymbol{X}|\boldsymbol{Y})$,

    $$\frac{1}{n}\ell(f_n(X_1^n|Y_1^n))\approx N\Big(H(\boldsymbol{X}|\boldsymbol{Y}),\,\frac{\sigma^2(\boldsymbol{X}|\boldsymbol{Y})}{n}\Big)\quad\text{bits/symbol},$$

    and similarly Theorem 4 says that, with probability 1, the description lengths will have inevitable fluctuations of approximately $\pm\sqrt{2n\sigma^2\log_e\log_e n}$ bits around $nH(\boldsymbol{X}|\boldsymbol{Y})$.

    As both of these vanish when $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})$ is zero, we note that if the source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$ is memoryless, so that $\{(X_n,Y_n)\}$ are independent and identically distributed, then the conditional varentropy rate reduces to,

    $$\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=\mathrm{Var}\big({-\log P(X_1|Y_1)}\big),$$

    which is equal to zero if and only if, for each $y\in\mathcal{Y}$, the conditional distribution of $X_1$ given $Y_1=y$ is uniform on a subset $\mathcal{X}_y\subset\mathcal{X}$, where all the $\mathcal{X}_y$ have the same cardinality (a small numerical check of this condition is sketched after this remark).

    In the more general case when both the pair process (X,Y) and the side information Y are Markov chains, necessary and sufficient conditions for σ2(X|Y) to be zero were recently established in [25].

  3. In analogy with the source dispersion for the problem of lossless compression without side information [29,30], for an arbitrary source-side information pair $(\boldsymbol{X},\boldsymbol{Y})$ the conditional dispersion $D(\boldsymbol{X}|\boldsymbol{Y})$ was recently defined [25] as,

    $$D(\boldsymbol{X}|\boldsymbol{Y})=\limsup_{n\to\infty}\frac{1}{n}\mathrm{Var}\big[\ell(f_n^*(X_1^n|Y_1^n))\big].$$

    There, it was shown that when both the pair $(\boldsymbol{X},\boldsymbol{Y})$ and $\boldsymbol{Y}$ itself are irreducible and aperiodic Markov chains, the conditional dispersion coincides with the conditional varentropy rate:

    $$D(\boldsymbol{X}|\boldsymbol{Y})=\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big[\ell(f_n^*(X_1^n|Y_1^n))\big]=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})<\infty.$$
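The zero-variance condition in item 2 above is easy to check on small examples. A minimal sketch, not from the paper, with hypothetical joint distributions: in the first, every conditional distribution $P(\cdot|y)$ is uniform on a two-element subset, so $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})=0$; the second is a generic memoryless pair with $\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$.

```python
import math

def cond_varentropy(P_joint):
    """sigma^2(X|Y) = Var(-log2 P(X_1|Y_1)) for a memoryless pair with joint PMF P(x, y)."""
    P_y = {}
    for (x, y), p in P_joint.items():
        P_y[y] = P_y.get(y, 0.0) + p
    terms = [(p, -math.log2(p / P_y[y])) for (x, y), p in P_joint.items() if p > 0]
    mean = sum(p * v for p, v in terms)                  # this is H(X|Y)
    return sum(p * (v - mean) ** 2 for p, v in terms)

# Every conditional distribution uniform on a subset of the same cardinality (2): sigma^2 = 0.
P_uniform = {(0, 0): 0.25, (1, 0): 0.25, (2, 1): 0.25, (3, 1): 0.25}
# A generic memoryless pair: sigma^2 > 0.
P_generic = {(0, 0): 0.40, (1, 0): 0.10, (0, 1): 0.20, (1, 1): 0.30}

print(cond_varentropy(P_uniform))   # 0.0 (up to floating point)
print(cond_varentropy(P_generic))   # strictly positive
```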

2.5. Asymptotics of the Conditional Information Density

Here we show that the conditional information density itself, $-\log P(X_1^n|Y_1^n)$, satisfies a CLT and a LIL. The next two theorems are consequences of the almost sure invariance principle established in Theorem A1, in Appendix A.

Theorem 5

(CLT for the conditional information density). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate (4). Then, as $n\to\infty$:

$$\frac{-\log P(X_1^n|Y_1^n)-H(X_1^n|Y_1^n)}{\sqrt{n}}\to N(0,\sigma^2),\quad\text{in distribution}.\tag{8}$$

Proof. 

The conditions (3) imply that, as $n\to\infty$, $[nH(\boldsymbol{X},\boldsymbol{Y})-H(X_1^n,Y_1^n)]/\sqrt{n}\to 0$ and $[nH(\boldsymbol{Y})-H(Y_1^n)]/\sqrt{n}\to 0$, cf. [19]; therefore also $[nH(\boldsymbol{X}|\boldsymbol{Y})-H(X_1^n|Y_1^n)]/\sqrt{n}\to 0$, so it suffices to show that, as $n\to\infty$,

$$\frac{-\log P(X_1^n|Y_1^n)-nH(\boldsymbol{X}|\boldsymbol{Y})}{\sqrt{n}}\to N(0,\sigma^2),\quad\text{in distribution}.\tag{9}$$

Let $D=D([0,1],\mathbb{R})$ denote the space of càdlàg (right-continuous with left-hand limits) functions from $[0,1]$ to $\mathbb{R}$, and define, for each $t\geq 0$, $S(t)=\log P(X_1^{\lfloor t\rfloor}|Y_1^{\lfloor t\rfloor})+\lfloor t\rfloor H(\boldsymbol{X}|\boldsymbol{Y})$, as in Theorem A1 in Appendix A. For all $n\geq 1$, $t\in[0,1]$, define $S_n(t)=S(nt)$. Then Theorem A1 implies that, as $n\to\infty$,

$$\Big\{\frac{1}{\sigma\sqrt{n}}S_n(t);\,t\in[0,1]\Big\}\to\{B(t);\,t\in[0,1]\},\quad\text{weakly in }D,$$

where $\{B(t)\}$ is a standard Brownian motion; see, e.g., ([19], Theorem E, p. 4). In particular, this implies that

$$\frac{1}{\sigma\sqrt{n}}S_n(1)\to B(1)\sim N(0,1),\quad\text{in distribution},$$

which is exactly (9). □

Theorem 6

(LIL for the conditional information density). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate (4). Then:

$$\limsup_{n\to\infty}\frac{-\log P(X_1^n|Y_1^n)-H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}}=\sigma,\quad\text{a.s.},\tag{10}$$

$$\text{and}\quad\liminf_{n\to\infty}\frac{-\log P(X_1^n|Y_1^n)-H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}}=-\sigma,\quad\text{a.s.}\tag{11}$$

Proof. 

As in the proof of (8), it suffices to prove (10) with $nH(\boldsymbol{X}|\boldsymbol{Y})$ in place of $H(X_1^n|Y_1^n)$. However, this is immediate from Theorem A1, since, for a standard Brownian motion $\{B(t)\}$,

$$\limsup_{t\to\infty}\frac{B(t)}{\sqrt{2t\log_e\log_e t}}=1,\quad\text{a.s.};$$

see, e.g., ([31], Theorem 11.18). The proof of (11) is similar. □

3. Idealised LZ Compression with Side Information

Consider the following idealised version of Lempel-Ziv-like compression with side information. For a given source-side information pair $(\boldsymbol{X},\boldsymbol{Y})=\{(X_n,Y_n);\,n\in\mathbb{Z}\}$, the encoder and decoder both have access to the infinite past $(X_{-\infty}^0,Y_{-\infty}^0)$ and to the current side information $Y_1^n$. The encoder describes $X_1^n$ to the decoder as follows. First she searches for the first appearance of $(X_1^n,Y_1^n)$ in the past $(X_{-\infty}^0,Y_{-\infty}^0)$, that is, for the first $r\geq 1$ such that $(X_{-r+1}^{-r+n},Y_{-r+1}^{-r+n})=(X_1^n,Y_1^n)$. Then she counts how many times $Y_1^n$ appears in $Y_{-\infty}^0$ between locations $-r+1$ and 0, namely, how many indices $1\leq j<r$ there are such that $Y_{-j+1}^{-j+n}=Y_1^n$. Say there are $(R_n-1)$ such $j$'s. She describes $X_1^n$ to the decoder by telling him to look at the $R_n$th position where $Y_1^n$ appears in the past $Y_{-\infty}^0$, and read off the corresponding $X$ string.

This description takes $\log R_n$ bits, and, as it turns out, the resulting compression rate is asymptotically optimal: As $n\to\infty$, with probability 1,

$$\frac{1}{n}\log R_n\to H(\boldsymbol{X}|\boldsymbol{Y})\quad\text{bits/symbol}.\tag{12}$$

Moreover, it is second-order optimal, in that it achieves equality in the CLT and LIL bounds given in Theorems 3 and 4 of Section 2.

Our purpose in this section is to make these statements precise. We will prove (12) as well as its CLT and LIL refinements, generalising the corresponding results for recurrence times without side information in [24].

The use of recurrence times in understanding the Lempel-Ziv (LZ) family of algorithms was introduced by Willems [21] and Wyner and Ziv [22,32]. In terms of practical methods for compression with side information, Subrahmanya and Berger [9] proposed a side information analog of the sliding window LZ algorithm [33], and Uyematsu and Kuzuoka [10] proposed a side information version of the incremental parsing LZ algorithm [34]. The Subrahmanya-Berger algorithm was shown to be asymptotically optimal in [12,13]. Different types of LZ-like algorithms for compression with side information were also considered in [11].

Throughout this section, we assume $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair, with values in the finite alphabets $\mathcal{X},\mathcal{Y}$, respectively. We use bold lower-case letters $\boldsymbol{x},\boldsymbol{y}$ without subscripts to denote infinite realizations $\boldsymbol{x}=x_{-\infty}^{\infty}$, $\boldsymbol{y}=y_{-\infty}^{\infty}$ of $\boldsymbol{X},\boldsymbol{Y}$, and the corresponding bold capital letters $\boldsymbol{X},\boldsymbol{Y}$ without subscripts to denote the entire processes, $\boldsymbol{X}=X_{-\infty}^{\infty}$, $\boldsymbol{Y}=Y_{-\infty}^{\infty}$.

The main quantities of interest are the recurrence times defined next.

Definition 5

(Recurrence times). For a realization $\boldsymbol{x}$ of the process $\boldsymbol{X}$, and $n\geq 1$, define the repeated recurrence times $R_n^{(j)}(\boldsymbol{x})$ of $x_1^n$, recursively, as:

$$R_n^{(1)}(\boldsymbol{x})=\inf\{i\geq 1:\,x_{-i+1}^{-i+n}=x_1^n\},\qquad R_n^{(j)}(\boldsymbol{x})=\inf\{i>R_n^{(j-1)}(\boldsymbol{x}):\,x_{-i+1}^{-i+n}=x_1^n\},\quad j>1.$$

For a realization $(\boldsymbol{x},\boldsymbol{y})$ of the pair $(\boldsymbol{X},\boldsymbol{Y})$ and $n\geq 1$, the joint recurrence time $R_n(\boldsymbol{x},\boldsymbol{y})$ of $(x_1^n,y_1^n)$ is defined as,

$$R_n(\boldsymbol{x},\boldsymbol{y})=\inf\{i\geq 1:\,(x,y)_{-i+1}^{-i+n}=(x,y)_1^n\},$$

and the conditional recurrence time $R_n(\boldsymbol{x}|\boldsymbol{y})$ of $x_1^n$ among the appearances of $y_1^n$ is:

$$R_n(\boldsymbol{x}|\boldsymbol{y})=\inf\Big\{i\geq 1:\,x_{-R_n^{(i)}(\boldsymbol{y})+1}^{-R_n^{(i)}(\boldsymbol{y})+n}=x_1^n\Big\}.$$
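The conditional recurrence time $R_n(\boldsymbol{x}|\boldsymbol{y})$ is exactly the index that the idealised encoder at the start of this section describes with $\approx\log R_n$ bits. A minimal Python sketch (not from the paper) computes it on a finite sample path, together with a hypothetical memoryless example comparing $\log_2 R_n$ with $-\log_2 P(x_1^n|y_1^n)$ (cf. Theorem 8 below):

```python
import math, random
random.seed(0)

def conditional_recurrence_time(x, y, t1, n):
    """R_n(x|y) of Definition 5 on a finite sample path: x, y are lists, t1 is the
    array index playing the role of time 1, and the block of interest is
    (x[t1:t1+n], y[t1:t1+n]).  Lag j >= 1 corresponds to times -j+1, ..., -j+n,
    i.e. array indices t1-j, ..., t1-j+n-1.  Returns None if the available past
    is exhausted before a full match is found."""
    xblk, yblk = x[t1:t1+n], y[t1:t1+n]
    count = 0                           # number of past matches of y_1^n seen so far
    for j in range(1, t1 + 1):
        if y[t1-j:t1-j+n] == yblk:      # j is the next recurrence time R_n^(i)(y)
            count += 1
            if x[t1-j:t1-j+n] == xblk:  # the X string at that position also matches
                return count            # R_n(x|y): rank of this match among the y-matches
    return None

# Hypothetical memoryless pair: Y ~ Bern(1/2), X = Y flipped with probability 0.1,
# so H(X|Y) ~= 0.469 bits/symbol.
T, n = 200_000, 10
y = [random.randint(0, 1) for _ in range(T)]
x = [yi if random.random() < 0.9 else 1 - yi for yi in y]
t1 = T - n                              # treat the last n symbols as (X_1^n, Y_1^n)
Rn = conditional_recurrence_time(x, y, t1, n)
neg_logP = -sum(math.log2(0.9 if a == b else 0.1) for a, b in zip(x[t1:], y[t1:]))
if Rn is None:
    print("block did not recur in the available past; increase T")
else:
    # log2 R_n is typically within a few bits of -log2 P(x_1^n|y_1^n)
    print(Rn, round(math.log2(Rn), 2), round(neg_logP, 2))
```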

An important tool in the asymptotic analysis of recurrence times is Kac’s Theorem [35]. Its conditional version in Theorem 7 was first established in [12] using Kakutani’s induced transformation [36,37].

Theorem 7

(Conditional Kac’s theorem). [12] Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair. For any pair of strings $x_1^n\in\mathcal{X}^n$, $y_1^n\in\mathcal{Y}^n$:

$$\mathbb{E}\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big]=\frac{1}{P(x_1^n|y_1^n)}.$$
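Theorem 7 can be checked numerically. The following self-contained Monte Carlo sketch (not from the paper) estimates $\mathbb{E}[R_1(\boldsymbol{X}|\boldsymbol{Y})\,|\,X_1=1,Y_1=1]$ for a hypothetical memoryless pair, where the theorem predicts the value $1/P(X_1=1|Y_1=1)$; a long finite window stands in for the infinite past.

```python
import random
random.seed(1)

# Hypothetical memoryless pair on {0,1}: P(Y=1) = 0.5 and P(X=1 | Y=y) = 0.8 if y == 1 else 0.3.
def draw_pair():
    y = random.randint(0, 1)
    x = 1 if random.random() < (0.8 if y == 1 else 0.3) else 0
    return x, y

def R1(past, x1, y1):
    """Conditional recurrence time for n = 1: the rank, among past positions whose Y
    equals y1 (scanning backwards from time 0), of the first one whose X equals x1."""
    rank = 0
    for x, y in reversed(past):
        if y == y1:
            rank += 1
            if x == x1:
                return rank
    return None

total, hits = 0.0, 0
for _ in range(5000):
    past = [draw_pair() for _ in range(500)]    # finite stand-in for the infinite past
    x1, y1 = draw_pair()
    if (x1, y1) != (1, 1):
        continue                                # condition on X_1 = 1, Y_1 = 1
    r = R1(past, x1, y1)
    if r is not None:
        total, hits = total + r, hits + 1
print(total / hits)   # Kac's conditional theorem predicts 1 / P(X_1=1 | Y_1=1) = 1.25
```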

The following result states that we can asymptotically approximate $\log R_n(\boldsymbol{X}|\boldsymbol{Y})$ by the conditional information density, not just in expectation as in Kac’s theorem, but also with probability 1. Its proof is in Appendix B.

Theorem 8.

Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair. For any sequence $\{c_n\}$ of non-negative real numbers such that $\sum_n n\,2^{-c_n}<\infty$, we have:

$$\begin{aligned}
&\text{(i)}\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y})-\log\frac{1}{P(X_1^n|Y_1^n)}\leq c_n,\quad\text{eventually a.s.}\\
&\text{(ii)}\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y})-\log\frac{1}{P(X_1^n|Y_1^n,Y_{-\infty}^0,X_{-\infty}^0)}\geq -c_n,\quad\text{eventually a.s.}\\
&\text{(iii)}\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y})-\log\frac{P(Y_1^n|Y_{-\infty}^0)}{P(X_1^n,Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}\geq -2c_n,\quad\text{eventually a.s.}
\end{aligned}$$

Next we state the main consequences of Theorem 8 that we will need. Recall the definition of the coefficients $\gamma^{(\boldsymbol{Z})}(d)$ from Section 2.4. Corollary 1 is proved in Appendix B.

Corollary 1.

Suppose (X,Y) are jointly stationary and ergodic.

  • (a) If, in addition, $\sum_d\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(d)<\infty$ and $\sum_d\gamma^{(\boldsymbol{Y})}(d)<\infty$, then for any $\beta>0$:

    $$\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big]=o(n^{\beta}),\quad\text{a.s.}$$

  • (b) In the general jointly ergodic case, we have:

    $$\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big]=o(n),\quad\text{a.s.}$$

From part (b) combined with the Shannon-McMillan-Breiman theorem as in (2), we obtain the result (12) promised at the beginning of this section:

$$\lim_{n\to\infty}\frac{1}{n}\log R_n(\boldsymbol{X}|\boldsymbol{Y})=H(\boldsymbol{X}|\boldsymbol{Y}),\quad\text{a.s.}$$

This was first established in [12]. However, at this point we have already done the work required to obtain much finer asymptotic results for the conditional recurrence time.

For any pair of infinite realizations $(\boldsymbol{x},\boldsymbol{y})$ of $(\boldsymbol{X},\boldsymbol{Y})$, let $\{R_{(\boldsymbol{x}|\boldsymbol{y})}(t);\,t\geq 0\}$ be the continuous-time path, defined as:

$$R_{(\boldsymbol{x}|\boldsymbol{y})}(t)=0\ \text{ for }t<1,\qquad R_{(\boldsymbol{x}|\boldsymbol{y})}(t)=\log R_{\lfloor t\rfloor}(\boldsymbol{x}|\boldsymbol{y})-\lfloor t\rfloor H(\boldsymbol{X}|\boldsymbol{Y})\ \text{ for }t\geq 1.$$

The following theorem is a direct consequence of Corollary 1 (a) combined with Theorem A1 in the Appendix A. Recall Assumption 1 from Section 2.4.

Theorem 9.

Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate. Then $\{R_{(\boldsymbol{X}|\boldsymbol{Y})}(t)\}$ can be redefined on a richer probability space that contains a standard Brownian motion $\{B(t);\,t\geq 0\}$, such that for any $\lambda<1/294$:

$$R_{(\boldsymbol{X}|\boldsymbol{Y})}(t)-\sigma B(t)=O(t^{1/2-\lambda}),\quad\text{a.s.}$$

Two immediate consequences of Theorem 9 are the following:

Theorem 10

(CLT and LIL for the conditional recurrence times). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(\boldsymbol{X}|\boldsymbol{Y})>0$ denote the conditional varentropy rate. Then:

$$\text{(a)}\quad\frac{\log R_n(\boldsymbol{X}|\boldsymbol{Y})-H(X_1^n|Y_1^n)}{\sqrt{n}}\to N(0,\sigma^2),\quad\text{in distribution, as }n\to\infty.$$

$$\text{(b)}\quad\limsup_{n\to\infty}\frac{\log R_n(\boldsymbol{X}|\boldsymbol{Y})-H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}}=\sigma,\quad\text{a.s.}$$

Appendix A. Invariance Principle for the Conditional Information Density

This Appendix is devoted to the proof of Theorem A1, which generalises the corresponding almost sure invariance principle of Philipp and Stout ([19], Theorem 9.1) for the (unconditional) information density $-\log P(X_1^n)$.

Theorem A1.

Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic process, satisfying the mixing conditions (3). For $t\geq 0$, let,

$$S(t)=\log P\big(X_1^{\lfloor t\rfloor}\big|Y_1^{\lfloor t\rfloor}\big)+\lfloor t\rfloor H(\boldsymbol{X}|\boldsymbol{Y}).\tag{A1}$$

Then the following series converges:

$$\begin{aligned}
\sigma^2&=\mathbb{E}\bigg[\bigg(\log\frac{P(X_0,Y_0|X_{-\infty}^{-1},Y_{-\infty}^{-1})}{P(Y_0|Y_{-\infty}^{-1})}+H(\boldsymbol{X}|\boldsymbol{Y})\bigg)^2\bigg]\\
&\quad+2\sum_{k=1}^{\infty}\mathbb{E}\bigg[\bigg(\log\frac{P(X_0,Y_0|X_{-\infty}^{-1},Y_{-\infty}^{-1})}{P(Y_0|Y_{-\infty}^{-1})}+H(\boldsymbol{X}|\boldsymbol{Y})\bigg)\bigg(\log\frac{P(X_k,Y_k|X_{-\infty}^{k-1},Y_{-\infty}^{k-1})}{P(Y_k|Y_{-\infty}^{k-1})}+H(\boldsymbol{X}|\boldsymbol{Y})\bigg)\bigg].
\end{aligned}$$

If $\sigma^2>0$, then, without changing its distribution, we can redefine the process $\{S(t);\,t\geq 0\}$ on a richer probability space that contains a standard Brownian motion $\{B(t);\,t\geq 0\}$, such that

$$S(t)-\sigma B(t)=O(t^{1/2-\lambda}),\quad\text{a.s.},\tag{A2}$$

as $t\to\infty$, for each $\lambda<1/294$.

To simplify the notation, we write $h=H(\boldsymbol{X}|\boldsymbol{Y})$ and define,

$$f_j=\log\frac{P(X_j,Y_j|X_{-\infty}^{j-1},Y_{-\infty}^{j-1})}{P(Y_j|Y_{-\infty}^{j-1})},\quad j\geq 0,\tag{A3}$$

so that, for example, the variance $\sigma^2$ in the theorem becomes,

$$\sigma^2=\mathbb{E}\big[(f_0+h)^2\big]+2\sum_{k=1}^{\infty}\mathbb{E}\big[(f_0+h)(f_k+h)\big].\tag{A4}$$

Lemma A1.

If $\sum_{d}\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(d)<\infty$ and $\sum_{d}\gamma^{(\boldsymbol{Y})}(d)<\infty$, then, as $n\to\infty$:

$$\Big|\sum_{k=1}^{n}f_k-\log P(X_1^n|Y_1^n)\Big|=O(1),\quad\text{a.s.}$$

Proof. 

Let,

$$g_j=\log\frac{P(X_j,Y_j|X_1^{j-1},Y_1^{j-1})}{P(Y_j|Y_1^{j-1})},\quad j\geq 2,$$

and,

$$g_1=\log\frac{P(X_1,Y_1)}{P(Y_1)}=\log P(X_1|Y_1).$$

We have, for $k\geq 2$,

$$\begin{aligned}
\mathbb{E}|f_k-g_k|&\leq\mathbb{E}\big|\log P(X_k,Y_k|X_{-\infty}^{k-1},Y_{-\infty}^{k-1})-\log P(X_k,Y_k|X_1^{k-1},Y_1^{k-1})\big|+\mathbb{E}\big|\log P(Y_k|Y_{-\infty}^{k-1})-\log P(Y_k|Y_1^{k-1})\big|\\
&\leq\sum_{x,y}\mathbb{E}\big|\log P(X_k=x,Y_k=y|X_{-\infty}^{k-1},Y_{-\infty}^{k-1})-\log P(X_k=x,Y_k=y|X_1^{k-1},Y_1^{k-1})\big|\\
&\qquad+\sum_{y}\mathbb{E}\big|\log P(Y_k=y|Y_{-\infty}^{k-1})-\log P(Y_k=y|Y_1^{k-1})\big|\\
&\leq|\mathcal{X}||\mathcal{Y}|\,\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(k-1)+|\mathcal{Y}|\,\gamma^{(\boldsymbol{Y})}(k-1).
\end{aligned}$$

Therefore, $\sum_{k=1}^{\infty}\mathbb{E}|f_k-g_k|<\infty$, and by the monotone convergence theorem we have,

$$\sum_{k=1}^{\infty}|f_k-g_k|<\infty,\quad\text{a.s.}$$

Hence, as $n\to\infty$,

$$\Big|\sum_{k=1}^{n}f_k-\log P(X_1^n|Y_1^n)\Big|\leq\sum_{k=1}^{n}|f_k-g_k|=O(1),\quad\text{a.s.},$$

as claimed. □

The following bounds are established in the proof of ([19], Theorem 9.1):

Lemma A2.

Suppose $\boldsymbol{Z}=\{Z_n;\,n\in\mathbb{Z}\}$ is a stationary and ergodic process on a finite alphabet, with entropy rate $H(\boldsymbol{Z})$, and such that $\alpha^{(\boldsymbol{Z})}(d)=O(d^{-336})$ and $\gamma^{(\boldsymbol{Z})}(d)=O(d^{-48})$, as $d\to\infty$.

Let $f_k^{(\boldsymbol{Z})}=\log P(Z_k|Z_{-\infty}^{k-1})$, $k\geq 0$, and put $\eta_n^{(\boldsymbol{Z})}=f_n^{(\boldsymbol{Z})}+H(\boldsymbol{Z})$, $n\geq 0$. Then:

  • 1. For each $r>0$, $\mathbb{E}|f_0^{(\boldsymbol{Z})}|^r<\infty$.

  • 2. For each $r\geq 2$ and $\epsilon>0$,

    $$\mathbb{E}\big|f_0^{(\boldsymbol{Z})}-\log P(Z_0|Z_{-k}^{-1})\big|^r\leq C(r,\epsilon)\big(\gamma^{(\boldsymbol{Z})}(k)\big)^{\frac{1}{2}-\epsilon},$$

    where $C(r,\epsilon)$ is a constant depending only on $r$ and $\epsilon$.

  • 3. For a constant $C>0$ independent of $n$, $\|\eta_n^{(\boldsymbol{Z})}\|_4\leq C$.

  • 4. Let $\eta_{n\ell}^{(\boldsymbol{Z})}=\mathbb{E}\big[\eta_n^{(\boldsymbol{Z})}\,\big|\,\mathcal{F}_{n-\ell}^{n+\ell}\big]$. Then, as $\ell\to\infty$:

    $$\big\|\eta_n^{(\boldsymbol{Z})}-\eta_{n\ell}^{(\boldsymbol{Z})}\big\|_4=O(\ell^{-1/2}).$$

Please note that under the assumptions of Theorem A1, the conclusions of Lemma A2 apply to Y as well as to the pair process (X,Y).

Lemma A3.

For each $r>0$, we have $\mathbb{E}[|f_0|^r]<\infty$.

Proof. 

Simple algebra shows that

$$f_0=f_0^{(\boldsymbol{X},\boldsymbol{Y})}-f_0^{(\boldsymbol{Y})}.$$

Therefore, by two applications of Lemma A2, part 1,

$$\|f_0\|_r\leq\|f_0^{(\boldsymbol{X},\boldsymbol{Y})}\|_r+\|f_0^{(\boldsymbol{Y})}\|_r<\infty.$$

The next bound follows from Lemma A2, part 2, upon applying the Minkowski inequality.

Lemma A4.

For each $r\geq 2$ and each $\epsilon>0$,

$$\Big\|f_0-\log\frac{P(X_0,Y_0|X_{-k}^{-1},Y_{-k}^{-1})}{P(Y_0|Y_{-k}^{-1})}\Big\|_r\leq C_1(r,\epsilon)\big[\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(k)\big]^{\frac{1-2\epsilon}{2r}}+C_2(r,\epsilon)\big[\gamma^{(\boldsymbol{Y})}(k)\big]^{\frac{1-2\epsilon}{2r}}.$$

Lemma A5.

As $N\to\infty$:

$$\mathbb{E}\Big[\Big(\sum_{k\leq N}(f_k+h)\Big)^2\Big]=\sigma^2N+O(1).$$

Proof. 

First we examine the definition of the variance $\sigma^2$. The first term in (A4),

$$\|f_0+h\|_2^2\leq\big(\|f_0\|_2+h\big)^2<\infty,$$

is finite by Lemma A3. For the series in (A4), let, for $k\geq 0$,

$$\phi_k=\log\frac{P(X_k,Y_k|X_{k/2}^{k-1},Y_{k/2}^{k-1})}{P(Y_k|Y_{k/2}^{k-1})},$$

and write,

$$\mathbb{E}\big[(f_0+h)(f_k+h)\big]=\mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big]+\mathbb{E}\big[(f_0+h)(\phi_k+h)\big].\tag{A5}$$

For the first term on the right-hand side above, we can bound, for any $\epsilon>0$,

$$\big|\mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big]\big|\overset{(a)}{\leq}\|f_0+h\|_2\,\|f_k-\phi_k\|_2\leq\big[\|f_0\|_2+h\big]\|f_k-\phi_k\|_2\overset{(b)}{\leq}AC_1(2,\epsilon)\,\gamma^{(\boldsymbol{X},\boldsymbol{Y})}(k/2)^{\frac14-\frac{\epsilon}{2}}+AC_2(2,\epsilon)\,\gamma^{(\boldsymbol{Y})}(k/2)^{\frac14-\frac{\epsilon}{2}},$$

where (a) follows by the Cauchy-Schwarz inequality, and (b) follows by Lemmas A3 and A4, with $A=\|f_0\|_2+h<\infty$. Therefore, taking $\epsilon>0$ small enough and using the assumptions of Theorem A1,

$$\big|\mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big]\big|=O\big(k^{-12+24\epsilon}\big)=O(k^{-3}),\quad\text{as }k\to\infty.\tag{A6}$$

For the second term in (A5), we have that, for any $r>0$, $\|\phi_k\|_r<\infty$, uniformly over $k\geq 1$, by stationarity. Also, since $f_0,\phi_k$ are measurable with respect to the $\sigma$-algebras generated by $(X_{-\infty}^0,Y_{-\infty}^0)$ and $(X_{k/2}^{k},Y_{k/2}^{k})$, respectively, we can apply ([19], Lemma 7.2.1) with $p=r=s=3$, to obtain that

$$\big|\mathbb{E}\big[(f_0+h)(\phi_k+h)\big]\big|\leq 10\,\|f_0+h\|_3\,\|\phi_k+h\|_3\,\alpha(k/2)^{1/3},$$

where $\alpha(k)=\alpha^{(\boldsymbol{X},\boldsymbol{Y})}(k)=O(k^{-48})$, as $k\to\infty$, by assumption. Therefore, a fortiori,

$$\mathbb{E}\big[(f_0+h)(f_k+h)\big]=O(k^{-3}),$$

and combining this with (A6) and substituting into (A5) implies that $\sigma^2$ in (A4) is well defined and finite.

Finally, we have that, as $N\to\infty$,

$$\begin{aligned}
\mathbb{E}\Big[\Big(\sum_{k\leq N}(f_k+h)\Big)^2\Big]&=N\,\mathbb{E}\big[(f_0+h)^2\big]+2\sum_{k=1}^{N-1}(N-k)\,\mathbb{E}\big[(f_0+h)(f_k+h)\big]\\
&=N\sigma^2-2\sum_{k=1}^{N-1}k\,\mathbb{E}\big[(f_0+h)(f_k+h)\big]-2N\sum_{k=N}^{\infty}\mathbb{E}\big[(f_0+h)(f_k+h)\big]\\
&=\sigma^2N+O(1),
\end{aligned}$$

as required. □

Proof of Lemma 1. 

Lemma A5 states that the limit,

$$\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\bigg(\log\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg)\tag{A7}$$

exists and is finite. Moreover, by Lemma A4, after an application of the Cauchy-Schwarz inequality, we have that, as $n\to\infty$,

$$\mathbb{E}\bigg[\bigg(\sum_{k\leq n}\Big\{\log\frac{P(X_k,Y_k|X_1^{k-1},Y_1^{k-1})}{P(Y_k|Y_1^{k-1})}-\log\frac{P(X_k,Y_k|X_{-\infty}^{k-1},Y_{-\infty}^{k-1})}{P(Y_k|Y_{-\infty}^{k-1})}\Big\}\bigg)^2\bigg]=O(1),$$

therefore,

$$\frac{1}{n}\bigg|\mathrm{Var}\big(\log P(X_1^n|Y_1^n)\big)-\mathrm{Var}\bigg(\log\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg)\bigg|=o(1).$$

Combining this with (A7) and the definition of $\sigma^2$ completes the proof. □

Proof of Theorem A1. 

Note that we have already established the fact that the expression for the variance converges to some $\sigma^2<\infty$. Also, in view of Lemma A1, it is sufficient to prove the theorem for $\{\tilde{S}(t)\}$ instead of $\{S(t)\}$, where:

$$\tilde{S}(t)=\sum_{k\leq t}(f_k+h),\quad t\geq 0.$$

This will be established by an application of ([19], Theorem 7.1), once we verify that conditions (7.1.4), (7.1.5), (7.1.6), (7.1.7) and (7.1.9) there are all satisfied.

For each $n\geq 0$, let $\eta_n=f_n+h$, where $f_n$ is defined in (A3) and $h$ is the conditional entropy rate. First we observe that, by stationarity,

$$\mathbb{E}[\eta_n]=\mathbb{E}\Big[\log\frac{P(X_n,Y_n|X_{-\infty}^{n-1},Y_{-\infty}^{n-1})}{P(Y_n|Y_{-\infty}^{n-1})}\Big]+H(\boldsymbol{X}|\boldsymbol{Y})=\mathbb{E}\big[\log P(X_0,Y_0|X_{-\infty}^{-1},Y_{-\infty}^{-1})\big]+H(\boldsymbol{X},\boldsymbol{Y})-\mathbb{E}\big[\log P(Y_0|Y_{-\infty}^{-1})\big]-H(\boldsymbol{Y})=0,\tag{A8}$$

where $H(\boldsymbol{X},\boldsymbol{Y})$ and $H(\boldsymbol{Y})$ denote the entropy rates of $(\boldsymbol{X},\boldsymbol{Y})$ and $\boldsymbol{Y}$, respectively [2]. Observe that, in the notation of Lemma A2, $\eta_n=\eta_n^{(\boldsymbol{X},\boldsymbol{Y})}-\eta_n^{(\boldsymbol{Y})}$, and $\eta_{n\ell}=\eta_{n\ell}^{(\boldsymbol{X},\boldsymbol{Y})}-\eta_{n\ell}^{(\boldsymbol{Y})}$. By Lemma A2, parts 3 and 4, there exists a constant $C$, independent of $n$, such that

$$\|\eta_n\|_4\leq C<\infty,\tag{A9}$$

and,

$$\|\eta_n-\eta_{n\ell}\|_4=O(\ell^{-1/2}).\tag{A10}$$

In addition, from Lemma A5 we have,

$$\mathbb{E}\Big[\Big(\sum_{n\leq N}\frac{1}{\sigma}\eta_n\Big)^2\Big]=N+O(1).\tag{A11}$$

From (A8)–(A11) and the assumption that $\alpha^{(\boldsymbol{X},\boldsymbol{Y})}(d)=O(d^{-336})$, we have that all of the conditions (7.1.4), (7.1.5), (7.1.6), (7.1.7) and (7.1.9) of ([19], Theorem 7.1) are satisfied for the random variables $\{\eta_n/\sigma\}$, with $\delta=2$. Therefore, $\{\tilde{S}(t);\,t\geq 0\}$ can be redefined on a possibly richer probability space, where there exists a standard Brownian motion $\{B(t);\,t\geq 0\}$, such that as $t\to\infty$:

$$\frac{1}{\sigma}\tilde{S}(t)-B(t)=O(t^{1/2-\lambda}),\quad\text{a.s.}$$

By Lemma A1, this completes the proof. □

Appendix B. Recurrence Times Proofs

In this appendix, we provide the proofs of some of the more technical results in Section 3. First we establish the following generalisation of ([2], Lemma 16.8.3).

Lemma A6.

Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is an arbitrary source-side information pair. Then, for any sequence $\{t_n\}$ of non-negative real numbers such that $\sum_n 2^{-t_n}<\infty$, we have:

$$\log\frac{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\geq-t_n,\quad\text{eventually a.s.}$$

Proof. 

Let $B(X_{-\infty}^0,Y_{-\infty}^0)\subset\mathcal{Y}^n$ denote the support of $P(\,\cdot\,|X_{-\infty}^0,Y_{-\infty}^0)$. We can compute,

$$\mathbb{E}\bigg[\frac{P(Y_1^n|Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}\bigg]=\mathbb{E}\bigg[\mathbb{E}\bigg[\frac{P(Y_1^n|Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}\,\bigg|\,Y_{-\infty}^0,X_{-\infty}^0\bigg]\bigg]=\mathbb{E}\bigg[\sum_{y_1^n\in B(X_{-\infty}^0,Y_{-\infty}^0)}\frac{P(y_1^n|Y_{-\infty}^0)}{P(y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}\,P(y_1^n|Y_{-\infty}^0,X_{-\infty}^0)\bigg]\leq 1.$$

By Markov’s inequality,

$$P\bigg(\log\frac{P(Y_1^n|Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}>t_n\bigg)=P\bigg(\frac{P(Y_1^n|Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}>2^{t_n}\bigg)\leq 2^{-t_n},$$

and so, by the Borel-Cantelli lemma,

$$\log\frac{P(Y_1^n|Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}\leq t_n,\quad\text{eventually a.s.},$$

as claimed. □

Proof of Theorem 8. 

Let $K>0$ arbitrary. By Markov’s inequality and Kac’s theorem,

$$P\big(R_n(\boldsymbol{X}|\boldsymbol{Y})>K\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big)\leq\frac{\mathbb{E}\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big]}{K}=\frac{1}{K\,P(x_1^n|y_1^n)}.$$

Taking $K=2^{c_n}/P(x_1^n|y_1^n)$, we obtain,

$$P\big(\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big]>c_n\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big)=P\bigg(R_n(\boldsymbol{X}|\boldsymbol{Y})>\frac{2^{c_n}}{P(x_1^n|y_1^n)}\,\bigg|\,X_1^n=x_1^n,Y_1^n=y_1^n\bigg)\leq 2^{-c_n}.$$

Averaging over all $x_1^n\in\mathcal{X}^n$, $y_1^n\in\mathcal{Y}^n$,

$$P\big(\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big]>c_n\big)\leq 2^{-c_n},$$

and the Borel-Cantelli lemma gives (i).

For (ii), we first note that the probability,

$$P\big(\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)\big]<-c_n\,\big|\,Y_1^n=y_1^n,X_{-\infty}^0=x_{-\infty}^0,Y_{-\infty}^0=y_{-\infty}^0\big)\tag{A12}$$

is the probability, under $P(X_1^n=\cdot\,|\,Y_1^n=y_1^n,X_{-\infty}^0=x_{-\infty}^0,Y_{-\infty}^0=y_{-\infty}^0)$, of those $z_1^n$ such that

$$P\big(X_1^n=z_1^n\,\big|\,X_{-\infty}^0,Y_{-\infty}^n\big)<\frac{2^{-c_n}}{R_n(x_{-\infty}^0*z_1^n\,|\,y_{-\infty}^n)},$$

where ‘$*$’ denotes the concatenation of strings. Let $G_n=G_n(x_{-\infty}^0,y_{-\infty}^n)\subset\mathcal{X}^n$ denote the set of all such $z_1^n$. Then the probability in (A12) is,

$$\sum_{z_1^n\in G_n}P(z_1^n|x_{-\infty}^0,y_{-\infty}^n)\leq\sum_{z_1^n\in G_n}\frac{2^{-c_n}}{R_n(x_{-\infty}^0*z_1^n|y_{-\infty}^n)}\leq 2^{-c_n}\sum_{z_1^n\in\mathcal{X}^n}\frac{1}{R_n(x_{-\infty}^0*z_1^n|y_{-\infty}^n)}.$$

Since both $x_{-\infty}^0$ and $y_{-\infty}^n$ are fixed, for each $j\geq 1$ there is at most one $z_1^n\in\mathcal{X}^n$ such that $R_n(x_{-\infty}^0*z_1^n|y_{-\infty}^n)=j$. Thus, the last sum is bounded above by,

$$\sum_{j=1}^{|\mathcal{X}|^n}\frac{1}{j}\leq Dn,$$

for some positive constant $D$. Therefore, the probability in (A12) is bounded above by $Dn2^{-c_n}$, which is independent of $x_{-\infty}^0,y_{-\infty}^n$ and, by assumption, summable over $n$. Hence, after averaging over all infinite sequences $x_{-\infty}^0,y_{-\infty}^n$, the Borel-Cantelli lemma gives (ii).

For part (iii) we have, eventually, almost surely,

$$\begin{aligned}
\log\bigg[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\frac{P(X_1^n,Y_1^n|Y_{-\infty}^0,X_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg]
&=\log\bigg[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\frac{P(X_1^n|Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)\,P(Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg]\\
&=\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)\big]+\log\frac{P(Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\\
&\geq-2c_n,
\end{aligned}$$

where the last inequality follows from (ii) and Lemma A6, and we have shown (iii). □

Proof of Corollary 1. 

If we take $c_n=\epsilon n^{\beta}$ in Theorem 8, with $\epsilon>0$ arbitrary, we get from (i) and (iii),

$$\limsup_{n\to\infty}\frac{1}{n^{\beta}}\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big]\leq 0,\quad\text{a.s.}\tag{A13}$$

$$\text{and}\quad\liminf_{n\to\infty}\frac{1}{n^{\beta}}\log\bigg[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg]\geq 0,\quad\text{a.s.}\tag{A14}$$

Hence, to prove (a) it is sufficient to show that, as $n\to\infty$,

$$\log P(X_1^n|Y_1^n)-\log\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}=O(1),\quad\text{a.s.},$$

which is exactly Lemma A1 in Appendix A.

To prove (b), taking $\beta=1$ in (A13) and (A14), it suffices to show that

$$\lim_{n\to\infty}\bigg\{\frac{1}{n}\log P(X_1^n|Y_1^n)-\frac{1}{n}\log\frac{P(X_1^n,Y_1^n|X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n|Y_{-\infty}^0)}\bigg\}=0,\quad\text{a.s.}$$

However, the first term converges almost surely to $-H(\boldsymbol{X}|\boldsymbol{Y})$ by the Shannon-McMillan-Breiman theorem, as in (2), and the second term is,

$$\frac{1}{n}\sum_{i=1}^{n}\log P(X_i,Y_i|X_{-\infty}^{i-1},Y_{-\infty}^{i-1})-\frac{1}{n}\sum_{i=1}^{n}\log P(Y_i|Y_{-\infty}^{i-1}),$$

which, by the ergodic theorem, converges almost surely to,

$$\mathbb{E}\big[\log P(X_0,Y_0|X_{-\infty}^{-1},Y_{-\infty}^{-1})\big]-\mathbb{E}\big[\log P(Y_0|Y_{-\infty}^{-1})\big]=-H(\boldsymbol{X},\boldsymbol{Y})+H(\boldsymbol{Y})=-H(\boldsymbol{X}|\boldsymbol{Y}).$$

This completes the proof. □

Author Contributions

Conceptualization, L.G. and I.K.; methodology, L.G. and I.K.; formal analysis, L.G. and I.K.; investigation, L.G. and I.K.; writing—original draft preparation, L.G. and I.K.; writing—review and editing, L.G. and I.K.; funding acquisition, I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant,” project number 1034, and also in part by EPSRC grant number RG94782.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  • 1.Slepian D., Wolf J. Noiseless coding of correlated information sources. IEEE Trans. Inform. Theory. 1973;19:471–480. doi: 10.1109/TIT.1973.1055037. [DOI] [Google Scholar]
  • 2.Cover T., Thomas J. Elements of Information Theory. 2nd ed. J. Wiley & Sons; New York, NY, USA: 2012. [Google Scholar]
  • 3.Yang E.H., Kaltchenko A., Kieffer J. Universal lossless data compression with side information by using a conditional MPM grammar transform. IEEE Trans. Inform. Theory. 2001;47:2130–2150. doi: 10.1109/18.945239. [DOI] [Google Scholar]
  • 4.Fritz M., Leinonen R., Cochrane G., Birney E. Efficient storage of high throughput DNA sequencing data using reference-based compression. Genome Res. 2011;21:734–740. doi: 10.1101/gr.114819.110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Tridgell A., Mackerras P. The Rsync Algorithm. The Australian National University; Canberra, Australia: 1996. Technical Report TR-CS-96-05. [Google Scholar]
  • 6.Suel T., Memon N. Algorithms for delta compression and remote file synchronization. In: Sayood K., editor. Lossless Compression Handbook. Academic Press; New York, NY, USA: 2002. [Google Scholar]
  • 7.Pradhan S., Ramchandran K. Enhancing analog image transmission systems using digital side information: A new wavelet-based image coding paradigm; Proceedings of the 2001 Data Compression Conference; Snowbird, UT, USA. 27–29 March 2001; pp. 63–72. [Google Scholar]
  • 8.Aaron A., Zhang R., Girod B. Wyner-Ziv coding of motion video; Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers; Pacific Grove, CA, USA. 3–6 November 2002; pp. 240–244. [Google Scholar]
  • 9.Subrahmanya P., Berger T. A sliding window Lempel-Ziv algorithm for differential layer encoding in progressive transmission; Proceedings of the 1995 IEEE International Symposium on Information Theory (ISIT); Whistler, BC, Canada. 17–22 September 1995; p. 266. [Google Scholar]
  • 10.Uyematsu T., Kuzuoka S. Conditional Lempel-Ziv complexity and its application to source coding theorem with side information. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2003;E86-A:2615–2617. [Google Scholar]
  • 11.Tock T., Steinberg Y. On Conditional Entropy and Conditional Recurrence Time. Unpublished manuscript.
  • 12.Jacob T., Bansal R. On the optimality of Sliding Window Lempel-Ziv algorithm with side information; Proceedings of the 2008 International Symposium on Information Theory and its Applications (ISITA); Auckland, New Zealand. 7–10 December 2008; pp. 1–6. [Google Scholar]
  • 13.Jain A., Bansal R. On optimality and redundancy of side information version of SWLZ; Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT); Aachen, Germany. 25–30 June 2017; pp. 306–310. [Google Scholar]
  • 14.Stites R., Kieffer J. Resolution scalable lossless progressive image coding via conditional quadrisection; Proceedings of the 2000 International Conference on Image Processing; Vancouver, BC, Canada. 10–13 September 2000; pp. 976–979. [Google Scholar]
  • 15.Aaron A., Girod B. Compression with side information using turbo codes; Proceedings of the 2002 Data Compression Conference; Snowbird, UT, USA. 2–4 April 2002; pp. 252–261. [Google Scholar]
  • 16.Cai H., Kulkarni S., Verdú S. An algorithm for universal lossless compression with side information. IEEE Trans. Inform. Theory. 2006;52:4008–4016. doi: 10.1109/TIT.2006.880020. [DOI] [Google Scholar]
  • 17.Kieffer J. Sample converses in source coding theory. IEEE Trans. Inform. Theory. 1991;37:263–268. doi: 10.1109/18.75241. [DOI] [Google Scholar]
  • 18.Kontoyiannis I. Second-order noiseless source coding theorems. IEEE Trans. Inform. Theory. 1997;43:1339–1341. doi: 10.1109/18.605604. [DOI] [Google Scholar]
  • 19.Philipp W., Stout W. Almost Sure Invariance Principles for Partial Sums of Weakly Dependent Random Variables. Volume 2. Memoirs of the AMS; Providence, RI, USA: 1975. p. 161. [Google Scholar]
  • 20.Nomura R., Han T.S. Second-order Slepian-Wolf coding theorems for non-mixed and mixed sources. IEEE Trans. Inform. Theory. 2014;60:5553–5572. doi: 10.1109/TIT.2014.2339231. [DOI] [Google Scholar]
  • 21.Willems F. Universal data compression and repetition times. IEEE Trans. Inform. Theory. 1989;35:54–58. doi: 10.1109/18.42176. [DOI] [Google Scholar]
  • 22.Wyner A., Ziv J. Some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression. IEEE Trans. Inform. Theory. 1989;35:1250–1258. doi: 10.1109/18.45281. [DOI] [Google Scholar]
  • 23.Ornstein D., Weiss B. Entropy and data compression schemes. IEEE Trans. Inform. Theory. 1993;39:78–83. doi: 10.1109/18.179344. [DOI] [Google Scholar]
  • 24.Kontoyiannis I. Asymptotic recurrence and waiting times for stationary processes. J. Theoret. Probab. 1998;11:795–811. doi: 10.1023/A:1022610816550. [DOI] [Google Scholar]
  • 25.Gavalakis L., Kontoyiannis I. Fundamental Limits of Lossless Data Compression with Side Information. IEEE Trans. Inform. Theory. 2019 Under revision. [Google Scholar]
  • 26.Bradley B. Basic properties of strong mixing conditions. In: Wileln E., Taqqu M.S., editors. Dependence in Probability and Statistics. Birkhäuser; Boston, MA, USA: 1986. pp. 165–192. [Google Scholar]
  • 27.Han G. Limit theorems for the sample entropy of hidden Markov chains; Proceedings of the 2011 IEEE International Symposium on Information Theory (ISIT); St. Petersburg, Russia. 31 July–5 August 2011; pp. 3009–3013. [Google Scholar]
  • 28.Billingsley P. Probability and Measure. 3rd ed. John Wiley & Sons Inc.; New York, NY, USA: 1995. [Google Scholar]
  • 29.Kontoyiannis I., Verdú S. Optimal lossless data compression: Non-asymptotics and asymptotics. IEEE Trans. Inform. Theory. 2014;60:777–795. doi: 10.1109/TIT.2013.2291007. [DOI] [Google Scholar]
  • 30.Tan V., Kosut O. On the dispersions of three network information theory problems. IEEE Trans. Inform. Theory. 2014;60:881–903. doi: 10.1109/TIT.2013.2291231. [DOI] [Google Scholar]
  • 31.Kallenberg O. Foundations of Modern Probability. 2nd ed. Springer; New York, NY, USA: 2002. [Google Scholar]
  • 32.Wyner A., Ziv J. The sliding-window Lempel-Ziv algorithm is asymptotically optimal. Proc. IEEE. 1994;82:872–877. doi: 10.1109/5.286191. [DOI] [Google Scholar]
  • 33.Ziv J., Lempel A. A universal algorithm for sequential data compression. IEEE Trans. Inform. Theory. 1977;23:337–343. doi: 10.1109/TIT.1977.1055714. [DOI] [Google Scholar]
  • 34.Ziv J., Lempel A. Compression of individual sequences by variable rate coding. IEEE Trans. Inform. Theory. 1978;24:530–536. doi: 10.1109/TIT.1978.1055934. [DOI] [Google Scholar]
  • 35.Kac M. On the notion of recurrence in discrete stochastic processes. Bull. Amer. Math. Soc. 1947;53:1002–1010. doi: 10.1090/S0002-9904-1947-08927-8. [DOI] [Google Scholar]
  • 36.Kakutani S. Induced measure preserving transformations. Proc. Imp. Acad. 1943;19:635–641. doi: 10.3792/pia/1195573248. [DOI] [Google Scholar]
  • 37.Shields P. Graduate Studies in Mathematics. Volume 13 American Mathematical Society; Providence, RI, USA: 1996. The ergodic theory of discrete sample paths. [Google Scholar]
