Entropy. 2020 Jan 2;22(1):63. doi: 10.3390/e22010063

Ordinal Pattern Based Entropies and the Kolmogorov–Sinai Entropy: An Update

Tim Gutjahr 1,*, Karsten Keller 1
PMCID: PMC7516495  PMID: 33285838

Abstract

Different authors have shown strong relationships between ordinal pattern based entropies and the Kolmogorov–Sinai entropy, including the equality of the latter and the permutation entropy; the whole picture is, however, far from complete. This paper updates the picture by summarizing some results and discussing, on a theoretical level, some mainly combinatorial aspects behind the dependence of the Kolmogorov–Sinai entropy on ordinal pattern distributions. The paper is more than a review paper. A new statement concerning the conditional permutation entropy will be given, as well as a new proof for the fact that the permutation entropy is an upper bound for the Kolmogorov–Sinai entropy. As a main result, general conditions will be stated under which the permutation entropy is a lower bound for the Kolmogorov–Sinai entropy. Additionally, a previously introduced method to analyze the relationship between permutation and Kolmogorov–Sinai entropies, as well as its limitations, will be investigated.

Keywords: ordinal patterns, Kolmogorov–Sinai entropy, permutation entropy, conditional entropy

1. Introduction

The Kolmogorov–Sinai entropy is a central measure for quantifying the complexity of a measure-preserving dynamical system. Although it is simple from the conceptual viewpoint, its determination and its estimation from given data can be challenging. Since Bandt, Keller, and Pompe showed the coincidence of the Kolmogorov–Sinai entropy and the permutation entropy for interval maps (see [1]), there have been different attempts to approach the Kolmogorov–Sinai entropy by ordinal pattern based entropies (see e.g., [2,3,4,5,6] and references therein), leading to a nice subject of study. In this paper, we discuss the relationship of the Kolmogorov–Sinai entropy to the latter kind of entropies. We report on the state of the art and give some generalizations and new results, mainly emphasizing combinatorial aspects.

For this, let $(\Omega,\mathcal{A},\mu,T)$ be a measure-preserving dynamical system, which we consider as fixed throughout the whole paper. Here, $(\Omega,\mathcal{A},\mu)$ is a probability space equipped with an $\mathcal{A}$-$\mathcal{A}$-measurable map $T:\Omega\to\Omega$ satisfying $\mu(T^{-1}(A))=\mu(A)$ for all $A\in\mathcal{A}$. Certain properties of the system will be specified at the places where they are of interest. For the following, it is suggestive to interpret $\Omega$ as the set of states of a system, $\mu$ as their distribution, and $T$ as a description of the dynamics underlying the system: the system is in state $T(\omega)$ at time $t+1$ if it is in state $\omega\in\Omega$ at time $t$.

In the following, we give the definitions of the central entropies considered in this paper.

1.1. The Kolmogorov–Sinai Entropy

The basis of quantifying dynamical complexity is to consider the development of partitions and their entropies under the given dynamics. Recall that the coarsest partitions refining given partitions $\mathcal{P}_1,\mathcal{P}_2,\dots,\mathcal{P}_k$ and $\mathcal{P},\mathcal{Q}$ of $\Omega$ are defined by

$$\bigvee_{s=1}^{k}\mathcal{P}_s := \Big\{\bigcap_{s=1}^{k}P_s \;\Big|\; P_s\in\mathcal{P}_s \text{ for } s=1,2,\dots,k\Big\}$$

and

$$\mathcal{P}\vee\mathcal{Q} := \{P\cap Q \mid P\in\mathcal{P},\ Q\in\mathcal{Q}\},$$

respectively. The entropy of a finite or countably infinite partition $\mathcal{Q}\subset\mathcal{A}$ of $\Omega$ is given by

$$H(\mathcal{Q}) := -\sum_{Q\in\mathcal{Q}}\mu(Q)\log\mu(Q).$$

For a finite or countably infinite partition $\mathcal{P}=\{P_i\}_{i\in I}\subset\mathcal{A}$ of $\Omega$ and some $k\in\mathbb{N}$, consider the partition

$$\mathcal{P}^{(k)} := \bigvee_{t=0}^{k-1}T^{-t}(\mathcal{P}) = \{P(\mathbf{i}) \mid \mathbf{i}\in I^k\},$$

where

$$P(\mathbf{i}) := \bigcap_{t=0}^{k-1}T^{-t}(P_{i_t})$$

for each multiindex $\mathbf{i}=(i_0,i_1,\dots,i_{k-1})\in I^k$. The entropy rate of $T$ with regard to a finite or countably infinite partition $\mathcal{P}\subset\mathcal{A}$ with $H(\mathcal{P})<\infty$ is defined by

$$h(\mathcal{P}) := \lim_{n\to\infty}\frac{1}{n}H\big(\mathcal{P}^{(n)}\big).$$

The Kolmogorov–Sinai entropy is then defined as

$$\mathrm{KS} := \sup_{\mathcal{P}} h(\mathcal{P}),$$

where the supremum is taken over all finite or over all countably infinite partitions $\mathcal{P}\subset\mathcal{A}$ with $H(\mathcal{P})<\infty$.
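As a numerical illustration of the entropy rate $h(\mathcal{P})$ (not from the paper: the doubling map $T(x)=2x \bmod 1$ on $[0,1)$ with Lebesgue measure and the binary partition are our own illustrative choices), the following sketch estimates $\frac{1}{n}H(\mathcal{P}^{(n)})$ by sampling itineraries:

```python
import math
import random
from collections import Counter

def entropy_rate_estimate(T, partition_index, n, samples=20000, seed=0):
    """Monte Carlo sketch of (1/n) * H(P^(n)): sample initial states,
    record the itinerary of partition-cell indices along the first n
    iterates, and return the empirical Shannon entropy (nats) of the
    itinerary distribution divided by n."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(samples):
        omega = rng.random()
        itinerary = []
        for _ in range(n):
            itinerary.append(partition_index(omega))
            omega = T(omega)
        counts[tuple(itinerary)] += 1
    h = -sum((c / samples) * math.log(c / samples) for c in counts.values())
    return h / n

# Doubling map with partition {[0, 1/2), [1/2, 1)}; here P^(n) consists of
# 2^n dyadic intervals of equal measure, so the exact rate is log 2.
estimate = entropy_rate_estimate(lambda x: (2 * x) % 1.0,
                                 lambda x: 0 if x < 0.5 else 1, n=8)
```

For finite samples the plug-in estimate is slightly biased downward, but for this example it stays close to $\log 2 \approx 0.693$ nats.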

1.2. Ordinal Pattern Based Entropy Measures

As the determination and estimation of the Kolmogorov–Sinai entropy based on the given definition are often not easy, there are many different alternative approaches to it, among them the permutation entropy approach by Bandt and Pompe [7]. The latter is built up on the concept of ordinal patterns, which we describe in a general manner now.

For this, let $X=(X_1,X_2,\dots,X_d):\Omega\to\mathbb{R}^d$ be a random vector for some $d\in\mathbb{N}$. Here, each of the random variables $X_i$ can be interpreted as an observable measuring some quantity in the following sense: if the system is in state $\omega$ at time 0, then $X_i(T^t(\omega))$ provides the value of the quantity measured at time $t$. This general approach includes the one-dimensional case that states and measurements coincide, that is, that $\Omega\subseteq\mathbb{R}$ and $X=\mathrm{id}$ is the identity map on $\Omega$. This case, originally considered in [7] and subsequent papers, is discussed in Section 3. We refer to it as the simple one-dimensional case.

Let

$$\Pi_n := \big\{(r_0,r_1,\dots,r_{n-1})\in\{0,1,\dots,n-1\}^n \;\big|\; r_i\neq r_j \text{ for } i\neq j\big\}$$

be the set of all permutations of length $n$. We say that a vector $(x_0,x_1,\dots,x_{n-1})\in\mathbb{R}^n$ has ordinal pattern $\pi=(r_0,r_1,\dots,r_{n-1})\in\Pi_n$ if

$$x_{r_{i-1}} < x_{r_i} \quad\text{or}\quad \big(x_{r_{i-1}} = x_{r_i} \text{ and } r_i < r_{i-1}\big)$$

holds true for all $i\in\{1,2,\dots,n-1\}$. The $n!$ possible ordinal patterns (compare Figure 1) provide a classification of the vectors. We denote the set of points with ordinal patterns $\pi_1,\pi_2,\dots,\pi_d\in\Pi_n$ with regard to $X_1,X_2,\dots,X_d$, respectively, by

$$P^{X}_{\pi_1,\pi_2,\dots,\pi_d} = \big\{\omega\in\Omega \;\big|\; \big(X_i(\omega),X_i(T(\omega)),\dots,X_i(T^{n-1}(\omega))\big) \text{ has ordinal pattern } \pi_i \text{ for } i=1,2,\dots,d\big\}$$

and by

$$\mathcal{OP}^{X}(n) := \big\{P^{X}_{\pi_1,\pi_2,\dots,\pi_d} \;\big|\; \pi_1,\pi_2,\dots,\pi_d\in\Pi_n\big\}$$

the partition of $\Omega$ into those sets.

Figure 1. Abstract visualization of all 24 possible ordinal patterns of length 4.
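The classification just described is easy to compute. A minimal sketch (our own illustration, not code from the paper) returns the ordinal pattern of a vector under the tie-breaking rule above:

```python
def ordinal_pattern(x):
    """Ordinal pattern (r_0, ..., r_{n-1}) of the vector x: indices are
    sorted by ascending value; among equal values the later index comes
    first, matching the rule x[r_{i-1}] == x[r_i] => r_i < r_{i-1}."""
    return tuple(sorted(range(len(x)), key=lambda i: (x[i], -i)))
```

For example, `ordinal_pattern([0.2, 0.5, 0.1])` yields `(2, 0, 1)`: the smallest value sits at index 2, the next at index 0, the largest at index 1.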

We are especially interested in three ordinal pattern based entropy measures. These are the lower and upper permutation entropies defined as

$$\underline{\mathrm{PE}}^{X} = \liminf_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}^{X}(n)\big)$$

and

$$\overline{\mathrm{PE}}^{X} = \limsup_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}^{X}(n)\big),$$

respectively, and the conditional entropy of ordinal patterns defined by

$$\mathrm{CE}^{X} = \liminf_{n\to\infty}\Big(H\big((\mathcal{OP}^{X}(n))^{(2)}\big) - H\big(\mathcal{OP}^{X}(n)\big)\Big).$$

We speak of the permutation entropy if the upper and lower permutation entropies coincide.
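For finite data, the quantities above are approximated by replacing the measures $\mu(P_\pi)$ with observed pattern frequencies along one orbit. A hedged sketch of such an empirical plug-in estimator (function names are our own; no bias correction is applied):

```python
import math
from collections import Counter

def ordinal_pattern(x):
    """Indices sorted by ascending value; ties give the later index first."""
    return tuple(sorted(range(len(x)), key=lambda i: (x[i], -i)))

def empirical_permutation_entropy(series, n):
    """Empirical counterpart of (1/n) * H(OP(n)) for a finite orbit:
    the normalized Shannon entropy (in nats) of the ordinal pattern
    frequencies over all sliding windows of length n."""
    windows = len(series) - n + 1
    counts = Counter(ordinal_pattern(series[t:t + n]) for t in range(windows))
    h = -sum((c / windows) * math.log(c / windows) for c in counts.values())
    return h / n
```

A strictly monotone series produces a single pattern and hence estimate 0, while a maximally irregular series approaches the upper bound $\log(n!)/n$.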

1.3. Outline of This Paper

In Section 2, we will focus on the relationship between permutation and Kolmogorov–Sinai entropies in the general setting. With Theorems 1 and 3, we will restate two known statements. A new proof of Theorem 1 will be given in Appendix A.2. Theorem 3 is stated for completeness. Theorem 2 establishes a new relationship between the conditional permutation entropy and the Kolmogorov–Sinai entropy.

In Section 3, the relationship between permutation and Kolmogorov–Sinai entropies in the one-dimensional case is investigated. Conditions are introduced, under which the permutation entropy is equal to the Kolmogorov–Sinai entropy. The given conditions allow for a generalization of previous results. We will explain why (countably) piecewise monotone functions satisfy these conditions and consider two examples.

In Section 4, we will investigate a method to analyze the relationship between permutation and Kolmogorov–Sinai entropies that was first introduced in [5]. We will use this method to relate two different kinds of conditional permutation entropies in the general setting. Theorem 5 shows that this method cannot be used directly to prove equality between permutation and Kolmogorov–Sinai entropies.

The results of the paper are summarized in Section 5. The proofs for all new results can be found in the Appendix A.

2. Relating Entropies

2.1. Partitions via Ordinal Patterns

Given some $d,n\in\mathbb{N}$ and some random vector $X=(X_1,X_2,\dots,X_d)$, the partition described above can be defined in an alternative way, which is a bit more abstract but better suited to the approach used in the proof of Theorem 4:

We can determine the set $P_\pi\in\mathcal{OP}^{X_i}(n)$ to which a point $\omega\in\Omega$ belongs if we know whether $X_i(T^s(\omega)) < X_i(T^t(\omega))$ holds true for all $s,t\in\{0,1,\dots,n-1\}$ with $s<t$. Therefore, we can write

$$\mathcal{OP}^{X_i}(n) = \bigvee_{s=0}^{n-1}\bigvee_{t=s+1}^{n-1}\big\{\{\omega\in\Omega \mid X_i(T^s(\omega))<X_i(T^t(\omega))\},\ \{\omega\in\Omega \mid X_i(T^s(\omega))\geq X_i(T^t(\omega))\}\big\}.$$

Throughout this paper, we will use the set

$$R := \{(x,y)\in\mathbb{R}^2 \mid x<y\}$$

to describe the order relation between two points. This notation allows us to write

$$\mathcal{OP}^{X}(n) = \bigvee_{i=1}^{d}\bigvee_{s=0}^{n-1}\bigvee_{t=s+1}^{n-1}\big(T^s,T^t\big)^{-1}\big((X_i\times X_i)^{-1}\{R,\mathbb{R}^2\setminus R\}\big).$$

2.2. Ordinal Characterization of the Kolmogorov–Sinai Entropy

To be able to reconstruct all information on the given system from quantities based on the random vector $X=(X_1,X_2,\dots,X_d):\Omega\to\mathbb{R}^d$, we need to assume that the latter itself does not reduce information. From the mathematical viewpoint, this means that the $\sigma$-algebra generated by $X$ and the dynamics is equivalent to the originally given $\sigma$-algebra $\mathcal{A}$, i.e., that

$$\sigma\big(\{X_i\circ T^t \mid t\in\mathbb{N}_0,\ i\in\{1,2,\dots,d\}\}\big) \overset{\mu}{=} \mathcal{A} \tag{1}$$

holds true, which roughly speaking means that orbits are separated by the given random vector. For definitions and some more details concerning $\sigma$-algebras and partitions, see Appendix A.1.

The following statement, saying that under (1) ordinal patterns entail the complete information on the given system, has been shown in [3] in a slightly weaker form than given here.

Theorem 1.

Let $X:\Omega\to\mathbb{R}^d$ be a random vector satisfying (1). Then,

$$\underline{\mathrm{PE}}^{X} \geq \lim_{k\to\infty} h\big(\mathcal{OP}^{X}(k)\big) = \mathrm{KS} \tag{2}$$

holds true.

Note that the inequality in (2) is a relatively simple fact: since the partition $\mathcal{OP}^{X}(n)$ is finer than the partition $(\mathcal{OP}^{X}(k))^{(n-k)}$ for all $n\geq k$, we have

$$H\big(\mathcal{OP}^{X}(n)\big) \geq H\big((\mathcal{OP}^{X}(k))^{(n-k)}\big).$$

Dividing both sides by $n$ and letting first $n$ and subsequently $k$ tend to infinity proves this inequality.

Proofs of the inequality $\underline{\mathrm{PE}}^{X}\geq\mathrm{KS}$ are also implicitly given in [1,8]. One-dimensional systems with direct observation, as considered there, are discussed in detail in Section 3.

We will give a proof of the equality in (2) in Appendix A.2 that is alternative to the one in [3].

2.3. Conditional Entropies

For the case that (1) holds and that the Kolmogorov–Sinai entropy and the permutation entropies coincide, we will prove in Appendix A.3 different representations of the Kolmogorov–Sinai entropy by ordinal pattern based conditional entropies, as given in the following theorem.

Theorem 2.

Let $X:\Omega\to\mathbb{R}^d$ be a random vector satisfying (1). If $\mathrm{KS}\geq\overline{\mathrm{PE}}^{X}$ is true, then

$$\mathrm{KS} = \liminf_{n\to\infty} H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(k)}\big)\big) = \liminf_{n\to\infty} H\big((\mathcal{OP}^{X}(n+1))^{(k)}\,\big|\,(\mathcal{OP}^{X}(n))^{(k)}\big) = \underline{\mathrm{PE}}^{X} = \overline{\mathrm{PE}}^{X}$$

holds true for all $k\in\mathbb{N}$; in particular, in the case $k=1$, one has $\mathrm{KS}=\mathrm{CE}^{X}=\underline{\mathrm{PE}}^{X}=\overline{\mathrm{PE}}^{X}$.

2.4. Amigó’s Approach

Amigó et al. [2,8] describe an alternative ordinal way to the Kolmogorov–Sinai entropy, which is based on a refining sequence of finite partitions. We present it in a slightly more general manner than originally given and in the language of finite-valued random variables. Note that the basic result behind Amigó’s approach in [2,8] is that the Kolmogorov–Sinai entropy of a finite alphabet source and its permutation entropy given some order on the alphabet coincide (see also [9] for an alternative algebraic proof of the statement).

Theorem 3.

Given a sequence $(X_k)_{k\in\mathbb{N}}$ of $\mathbb{R}$-valued random variables satisfying

  • (i) 

    $\#(X_k(\Omega)) < \infty$ for all $k\in\mathbb{N}$,

  • (ii) 

    $\sigma(X_k)\subseteq\sigma(X_{k+1})$ for all $k\in\mathbb{N}$,

  • (iii) 

    $\sigma(\{X_k \mid k\in\mathbb{N}\}) \overset{\mu}{=} \mathcal{A}$,

then it holds that

$$\lim_{k\to\infty}\underline{\mathrm{PE}}^{X_k} = \mathrm{KS}.$$

3. The Simple One-Dimensional Case

In the following, we consider the case that $\Omega$ is a subset of $\mathbb{R}$, with $\mathcal{A}$ coinciding with the Borel $\sigma$-algebra $\mathcal{B}$ on $\Omega$ and with $X=\mathrm{id}$ being the identity map on $\Omega$. The superscript $X$ is superfluous here, which is why we leave it out everywhere. For example, we write $\mathcal{OP}(n)$ instead of $\mathcal{OP}^{\mathrm{id}}(n)$.

3.1. (Countably) Piecewise Monotone Maps

We discuss some generalization of the results of Bandt, Keller, and Pompe that Kolmogorov–Sinai entropy and permutation entropy coincide for interval maps (see [1]) on the basis of a statement given in the paper [10]. The discussion sheds some light on structural aspects of the proofs given in that paper with some potential for further generalizations.

Definition 1.

Let $\Omega$ be a subset of $\mathbb{R}$, let $\mathcal{B}$ be the Borel $\sigma$-algebra on $\Omega$, and let $\mu$ be a probability measure on $(\Omega,\mathcal{B})$. Then, we call a partition $\mathcal{M}=\{M_i\}_{i\in I}$ of $\Omega$ ordered (with regard to $\mu$) if $\mathcal{M}\subset\mathcal{B}$ and

$$\mu^2\big((M_{i_1}\times M_{i_2})\cap R\big) \in \big\{0,\ \mu^2(M_{i_1}\times M_{i_2})\big\} \tag{3}$$

holds true for all $i_1,i_2\in I$ with $i_1\neq i_2$. Here, $\mu^2$ denotes the product measure of $\mu$ with itself.

We call a map $T:\Omega\to\Omega$ (countably) piecewise monotone (with regard to $\mu$) if there exists a finite (or countably infinite) ordered partition $\mathcal{M}=\{M_i\}_{i\in I}$ of $\Omega$ with $H(\mathcal{M})<\infty$ such that

$$\mu^2\big((M_i\times M_i)\cap R\cap(T\times T)^{-1}(R)\big) \in \big\{0,\ \mu^2\big((M_i\times M_i)\cap R\big)\big\} \tag{4}$$

holds true for all $i\in I$.

Given a probability space $(\Omega,\mathcal{A},\mu)$, for two families of sets $\mathcal{P},\mathcal{Q}\subset\mathcal{A}$, we write

$$\mathcal{P} \preceq \mathcal{Q}$$

if, for all $Q\in\mathcal{Q}$, there exists a $P\in\mathcal{P}$ with $\mu(Q\setminus P)=0$. If $\mathcal{P}=\{P_i\}_{i\in I}$ and $\mathcal{Q}=\{Q_j\}_{j\in J}$ are partitions of $\Omega$ in $\mathcal{A}$, then $\mathcal{P}\preceq\mathcal{Q}$ is equivalent to the fact that for every $i\in I$ there exists a set $J_i\subseteq J$ such that $P_i$ and $\bigcup_{j\in J_i}Q_j$ are equal up to some set of measure 0.

Moreover, given a partition $\mathcal{M}=\{M_i\}_{i\in I}$ of a set $\Omega$, let

$$\mathcal{M}^{(m)}\otimes\mathcal{M}^{(m)} := \big\{M_{i_1}\times M_{i_2} \mid M_{i_1},M_{i_2}\in\mathcal{M}^{(m)}\big\}.$$

In Appendix A.4, we will show the following statement:

Theorem 4.

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel $\sigma$-algebra on $\Omega$, and assume that the following conditions are satisfied:

Condition 1: There exists a finite or countably infinite ordered partition $\mathcal{M}=\{M_i\}_{i\in I}\subset\mathcal{B}$ with $H(\mathcal{M})<\infty$ and some $m\in\mathbb{N}$ with

$$\mathcal{M}^{(m)}\otimes\mathcal{M}^{(m)}\vee\{R,\Omega^2\setminus R\} \;\preceq\; \mathcal{M}^{(m)}\otimes\mathcal{M}^{(m)}\vee\bigvee_{u=1}^{m}(T\times T)^{-u}\{R,\Omega^2\setminus R\}. \tag{5}$$

Condition 2: For all $\epsilon>0$, there exists a finite or countably infinite ordered partition $\mathcal{Q}$ with $H(\mathcal{Q})<\infty$ and

$$\sum_{Q\in\mathcal{Q}}\limsup_{n\to\infty}\frac{1}{n}\sum_{l=1}^{n}\mu\big(Q\cap T^{-l}(Q)\big) < \epsilon. \tag{6}$$

Then,

$$\overline{\mathrm{PE}} \leq \mathrm{KS}$$

holds true.

Theorem 4 extracts the two central arguments in proving the main statement of [10] in the form of Conditions 1 and 2. This statement is given in a slightly stronger form in Corollary 1. In the proof of [10], the m in Condition 1 is equal to 1. We will discuss in Section 3.2 a situation where Condition 1 with m=2 is of interest.

Corollary 1.

Let $\Omega$ be a compact subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel $\sigma$-algebra on $\Omega$. If $T$ is (countably) piecewise monotone, then

$$\overline{\mathrm{PE}} \leq \mathrm{KS}$$

holds true.

Since we directly refer below to the main statement in [10], which assumes compactness, the corollary is formulated under this assumption for simplicity; a relaxation of the assumption is discussed in Remark A1.

To prove the above corollary, one needs to verify that Conditions 1 and 2 are satisfied for one-dimensional systems if $T$ is piecewise monotone. It is easy to see that Condition 2 holds true if $T$ is aperiodic and ergodic: if $T$ is aperiodic, for any $\epsilon>0$, one can choose a finite ordered partition $\mathcal{Q}$ such that $\mu(Q)<\epsilon$ holds true for all $Q\in\mathcal{Q}$. The ergodicity then implies

$$\sum_{Q\in\mathcal{Q}}\limsup_{n\to\infty}\frac{1}{n}\sum_{l=1}^{n}\mu\big(Q\cap T^{-l}(Q)\big) = \sum_{Q\in\mathcal{Q}}\mu(Q)^2 < \sum_{Q\in\mathcal{Q}}\mu(Q)\cdot\epsilon = \epsilon.$$

One can also show that Condition 2 is true for non-ergodic aperiodic compact systems (see Remark A1 and [10]).

If $T$ is (countably) piecewise monotone, there exists a finite (or countably infinite) ordered partition $\mathcal{M}=\{M_i\}_{i\in I}$ with $H(\mathcal{M})<\infty$ satisfying (4), which is equivalent to

$$\mu^2\big((M_i\times M_i)\cap R\cap(T\times T)^{-1}(R)\big) \in \big\{0,\ \mu^2\big((M_i\times M_i)\cap(T\times T)^{-1}(R)\big)\big\}$$

for all $i\in I$. Therefore,

$$\{M_i\times M_i\}\vee\{R,\Omega^2\setminus R\}\vee(T\times T)^{-1}\{R,\Omega^2\setminus R\} \;\overset{(4)}{\preceq}\; \{M_i\times M_i\}\vee(T\times T)^{-1}\{R,\Omega^2\setminus R\} \tag{7}$$

is true for all $i\in I$. Because $\mathcal{M}$ is an ordered partition, we have

$$\{M_{i_1}\times M_{i_2}\}\vee\{R,\Omega^2\setminus R\} \;\overset{(3)}{\preceq}\; \{M_{i_1}\times M_{i_2}\} \tag{8}$$

for all $i_1,i_2\in I$ with $i_1\neq i_2$. This implies

$$\begin{aligned}
\mathcal{M}\otimes\mathcal{M}\vee\{R,\Omega^2\setminus R\} &= \bigcup_{\substack{(i_1,i_2)\in I^2:\\ i_1\neq i_2}}\{M_{i_1}\times M_{i_2}\}\vee\{R,\Omega^2\setminus R\} \;\cup\; \bigcup_{i\in I}\{M_i\times M_i\}\vee\{R,\Omega^2\setminus R\}\\
&\overset{(8)}{\preceq} \bigcup_{\substack{(i_1,i_2)\in I^2:\\ i_1\neq i_2}}\{M_{i_1}\times M_{i_2}\} \;\cup\; \bigcup_{i\in I}\{M_i\times M_i\}\vee\{R,\Omega^2\setminus R\}\\
&\preceq \bigcup_{\substack{(i_1,i_2)\in I^2:\\ i_1\neq i_2}}\{M_{i_1}\times M_{i_2}\} \;\cup\; \bigcup_{i\in I}\{M_i\times M_i\}\vee\{R,\Omega^2\setminus R\}\vee(T\times T)^{-1}(\{R,\Omega^2\setminus R\})\\
&\overset{(7)}{\preceq} \bigcup_{\substack{(i_1,i_2)\in I^2:\\ i_1\neq i_2}}\{M_{i_1}\times M_{i_2}\} \;\cup\; \bigcup_{i\in I}\{M_i\times M_i\}\vee(T\times T)^{-1}(\{R,\Omega^2\setminus R\})\\
&\preceq \bigcup_{(i_1,i_2)\in I^2}\{M_{i_1}\times M_{i_2}\}\vee(T\times T)^{-1}(\{R,\Omega^2\setminus R\})\\
&= \mathcal{M}\otimes\mathcal{M}\vee(T\times T)^{-1}(\{R,\Omega^2\setminus R\}).
\end{aligned}$$

Hence, Condition 1 holds true if T is (countably) piecewise monotone. To show that Corollary 1 holds true if the dynamical system is not aperiodic, one splits the system into a periodic part and an aperiodic part in the following way:

Let

$$\Theta := \bigcup_{t=1}^{\infty}\{\omega\in\Omega \mid T^t(\omega)=\omega\}$$

be the set of periodic points. Assume that $\mu(\Theta)\notin\{0,1\}$ is true. Then,

$$\begin{aligned}
\overline{\mathrm{PE}} &\leq \limsup_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}(n)\vee\{\Theta,\Omega\setminus\Theta\}\big) = \limsup_{n\to\infty}\frac{1}{n}\Big(H\big(\mathcal{OP}(n)\vee\{\Theta,\Omega\setminus\Theta\}\big)-H(\{\Theta,\Omega\setminus\Theta\})\Big)\\
&\leq \mu(\Theta)\cdot\limsup_{n\to\infty}\frac{1}{n}\sum_{P_\pi\in\mathcal{OP}(n)}-\frac{\mu(P_\pi\cap\Theta)}{\mu(\Theta)}\log\frac{\mu(P_\pi\cap\Theta)}{\mu(\Theta)} \tag{9}\\
&\quad+\mu(\Omega\setminus\Theta)\cdot\limsup_{n\to\infty}\frac{1}{n}\sum_{P_\pi\in\mathcal{OP}(n)}-\frac{\mu(P_\pi\setminus\Theta)}{\mu(\Omega\setminus\Theta)}\log\frac{\mu(P_\pi\setminus\Theta)}{\mu(\Omega\setminus\Theta)} \tag{10}
\end{aligned}$$

holds true, where (9) is the periodic part of the upper permutation entropy and (10) the aperiodic part. One can use the aperiodic version of Corollary 1 to show that the Kolmogorov–Sinai entropy is an upper bound for (10). The proof of Corollary 1 for non-aperiodic dynamical systems is complete with Lemma A5 in Appendix A.4, which shows that (9) is equal to 0.

3.2. Examples

In order to illustrate the discussion in Section 3.1, we consider two examples. The first one reflects the situation in Corollary 1, and the second one discusses the case m=2 in Condition 1 in Theorem 4.

Example 1

(Gaussian map). The map $T:[0,1]\to[0,1]$ with

$$T(\omega) = \begin{cases} 1/\omega \bmod 1, & \text{if } \omega>0,\\ 0, & \text{if } \omega=0 \end{cases}$$

is called a Gaussian map (see Figure 2a). This map is measure-preserving with regard to the measure $\mu$ defined by $\mu(A)=\frac{1}{\log 2}\int_A\frac{1}{1+x}\,dx$ for all $A\in\mathcal{B}$ [11]. The partition $\mathcal{M}=\big\{\big[\frac{1}{n+1},\frac{1}{n}\big[\ \big|\ n\in\mathbb{N}\big\}\cup\{\{0\}\}$ of $[0,1]$ is a countably infinite partition into monotonicity intervals of $T$ satisfying $H(\mathcal{M})<\infty$. This map is countably piecewise monotone and ergodic. Thus, its Kolmogorov–Sinai entropy is equal to its permutation entropy.
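Example 1 is easy to experiment with numerically. The sketch below (our own illustration, not code from the paper) implements the map and draws samples from the invariant measure by inverse-CDF sampling, using that the CDF of the density $\frac{1}{\log 2}\cdot\frac{1}{1+x}$ is $F(x)=\log_2(1+x)$:

```python
import random

def gauss_map(x):
    """The map T(x) = 1/x mod 1, with T(0) = 0, from Example 1."""
    return (1.0 / x) % 1.0 if x > 0 else 0.0

def sample_gauss_measure(rng=random):
    """Draw a point distributed according to the invariant measure
    mu(A) = (1/log 2) * integral over A of dx/(1+x): its CDF is
    F(x) = log2(1+x), so inverse-CDF sampling gives x = 2**u - 1
    for u uniform on [0, 1)."""
    return 2.0 ** rng.random() - 1.0
```

The Kolmogorov–Sinai entropy of the Gaussian map is known to be $\pi^2/(6\log 2)$ nats, so by the example its permutation entropy takes the same value; orbits started from `sample_gauss_measure()` can be used to check this empirically.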

Figure 2. (a) Graph of the Gaussian map. (b) Graph of the map given in Example 2.

Example 2.

Consider $\Omega=[0,1[$ and the Borel $\sigma$-algebra $\mathcal{B}$ on $\Omega$. Set

$$S := \sum_{i=1}^{\infty}\frac{1}{(i+1)(\log(i+1))^2},\qquad m_i := \frac{1}{S}\cdot\frac{1}{(i+1)(\log(i+1))^2}\ \text{ for } i\in\mathbb{N},$$
$$a_0 := 0,\qquad a_1 := m_1,\qquad a_i := 1-\sum_{j=i+1}^{\infty}m_j\ \text{ for } i\geq 2,\qquad M_i := [a_{i-1},a_i[\ \text{ for } i\in\mathbb{N}.$$

The map $T:\Omega\to\Omega$ is defined as piecewise linear on each set $M_i$ (see Figure 2b) by

$$T(\omega) = \begin{cases}\dfrac{\omega}{m_1}, & \text{if } \omega\in M_1,\\[4pt] a_{i-1}\cdot\dfrac{a_i-\omega}{a_i-a_{i-1}}+a_{i-2}\cdot\dfrac{\omega-a_{i-1}}{a_i-a_{i-1}}, & \text{if } \omega\in M_i \text{ with } i\in\mathbb{N}\setminus\{1\}.\end{cases}$$

Let $\lambda$ be the one-dimensional Lebesgue measure. Define a measure $\mu$ on $(\Omega,\mathcal{B})$ by

$$\mu(A) := \sum_{j=1}^{\infty} m_j\cdot\frac{\lambda(A\cap M_j)}{\lambda(M_j)}$$

for all $A\in\mathcal{B}$.

One can verify that $T$ is measure-preserving and ergodic with regard to $\mu$. The partition $\mathcal{M}:=\{M_i\}_{i\in\mathbb{N}}$ does satisfy (5) for $m=1$, but $H(\mathcal{M})=\infty$ holds true. Therefore, Condition 1 does not hold true for $m=1$. However, one can show that Condition 1 holds true for $m=2$ and the partition $\mathcal{M}':=\big\{M_1,\bigcup_{i=2}^{\infty}M_i\big\}$, which implies that the Kolmogorov–Sinai entropy is equal to the permutation entropy of this map due to Theorem 4.

4. A Supplementary Aspect

Determining under what conditions the Kolmogorov–Sinai entropy and the upper or lower permutation entropies coincide remains an open problem in the general case. In the simple one-dimensional case, for maps that are not (countably) piecewise monotone, the relation between the Kolmogorov–Sinai entropy and the upper and lower permutation entropies is also not completely understood; there is not even a known example where the entropies differ. Finally, we briefly discuss a further approach to the relationship of the Kolmogorov–Sinai entropy and the upper and lower permutation entropies.

In [12], it was shown that, under (1), the Kolmogorov–Sinai entropy is equal to the permutation entropy if, roughly speaking, the information content of ‘words’ of $k$ ‘successive’ ordinal patterns of large length $n$ is not too far from the information content of ordinal patterns of length $n+k-1$. We want to explain this for the simple one-dimensional case and $k=2$.

The ordinal pattern of some vector $(x_0,x_1,\dots,x_n)$ contains all information on the order relations between the points $x_0,x_1,\dots,x_n$. When considering the ‘overlapping’ ordinal patterns of $(x_0,x_1,\dots,x_{n-1})$ and $(x_1,x_2,\dots,x_n)$, one has the same information with one exception: the order relation between $x_0$ and $x_n$ is not known a priori. Looking at the related partitions, the missing information is quantified by the conditional entropy $H\big(\mathcal{OP}(n+1)\,\big|\,\mathcal{OP}(n)\vee T^{-1}(\mathcal{OP}(n))\big)$. There is one situation reducing this missing information, namely that one of the $x_i$, $i=1,2,\dots,n-1$, lies between $x_0$ and $x_n$: then the order relation between $x_0$ and $x_n$ is known from the ordinal patterns of $(x_0,x_1,\dots,x_{n-1})$ and $(x_1,x_2,\dots,x_n)$. Therefore, the following set is of some special interest:

$$V_n := \big\{\omega\in\Omega \;\big|\; \omega < T^n(\omega) \text{ and } T^s(\omega)\notin(\omega,T^n(\omega)) \text{ for all } s\in\{1,2,\dots,n-1\}\big\} \;\cup\; \big\{\omega\in\Omega \;\big|\; T^n(\omega) < \omega \text{ and } T^s(\omega)\notin(T^n(\omega),\omega) \text{ for all } s\in\{1,2,\dots,n-1\}\big\}. \tag{11}$$

The following is shown in Appendix A.5:

Lemma 1.

Let Ω be a subset of R and A=B be the Borel σ-algebra on Ω. Then,

$$H\big(\mathcal{OP}(n+1)\,\big|\,\mathcal{OP}(n)\vee T^{-1}(\mathcal{OP}(n))\big) \leq \log(2)\cdot\mu(V_n) \tag{12}$$

holds true for all nN.

This indicates that analyzing the measure of $V_n$ as defined in (11) can be a useful approach to gain insight into the relationship between different kinds of entropies based on ordinal patterns. In particular, the behavior of $\mu(V_n)$ for $n\to\infty$ is of interest.
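Membership in $V_n$ as defined in (11) can be checked along a finite orbit, so $\mu(V_n)$ can be estimated empirically as the fraction of sampled initial points lying in $V_n$. A minimal sketch (our own illustration; the function name and any choice of map are assumptions):

```python
def in_Vn(T, omega, n):
    """Check membership of omega in V_n from (11): omega and T^n(omega)
    differ, and no intermediate iterate T^s(omega), s = 1..n-1, lies
    strictly between them."""
    orbit = [omega]
    for _ in range(n):
        omega = T(omega)
        orbit.append(omega)
    lo, hi = min(orbit[0], orbit[-1]), max(orbit[0], orbit[-1])
    if lo == hi:
        return False  # omega is n-periodic, so it cannot lie in V_n
    return all(not (lo < x < hi) for x in orbit[1:-1])
```

For a mixing map such as $x\mapsto 2x \bmod 1$, the fraction of sampled points in $V_n$ should tend to 0 as $n$ grows, in line with Lemma 2 below.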

Lemma 2.

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel $\sigma$-algebra on $\Omega$. If $T$ is ergodic, then

$$\liminf_{n\to\infty}\mu(V_n) = 0$$

holds true, and, if (stronger) $T$ is mixing, then

$$\lim_{n\to\infty}\mu(V_n) = 0$$

holds true.

The statement under the assumption of mixing has been shown in [5], and the proof in the ergodic case is given in Appendix A.5.

One can show that, in the simple one-dimensional case, the Kolmogorov–Sinai entropy is equal to the permutation entropy if

$$\sum_{n=1}^{\infty}H\big(\mathcal{OP}(n+1)\,\big|\,\mathcal{OP}(n)\vee T^{-1}(\mathcal{OP}(n))\big) < \infty \tag{13}$$

holds true. Using (12), this is the case when $\sum_{n=1}^{\infty}\mu(V_n)$ is finite, which requires a fast decay of the $\mu(V_n)$. However, we have $\sum_{n=1}^{\infty}\mu(V_n)=\infty$, as stated in Theorem 5, which will be proved in Appendix A.6.

Theorem 5.

Let Ω be a subset of R, A=B be the Borel σ-algebra on Ω and T be aperiodic and ergodic. Then,

$$\sum_{n=1}^{\infty}\mu(V_n) = \infty$$

holds true.

Although $\sum_{n=1}^{\infty}\mu(V_n)<\infty$ is false, we cannot answer the question of whether or when (13) is valid. Possibly, an answer to this question, and a better understanding of the kind of decay of the $\mu(V_n)$, could be helpful in further investigating the relationship of the Kolmogorov–Sinai entropy and the upper and lower permutation entropies, at least in the simple one-dimensional ergodic case.

5. Conclusions

With Theorem 1, we have slightly generalized a statement given in [3] by removing a technical assumption and using more basic combinatorial arguments. The remaining assumption (1) on the random vector X cannot be weakened in general.

In Section 2.3, we have shown that the equality of the permutation entropy and the Kolmogorov–Sinai entropy implies the equality of the conditional permutation entropy and the Kolmogorov–Sinai entropy as well. We considered two different kinds of conditional permutation entropy, which have turned out to be equal in the cases considered in Section 2.3; it is, however, not clear whether these two kinds of conditional permutation entropy are equal in general.

In Section 4, we have established some condition under which these two kinds of conditional entropy are equal, independently of the equality between permutation and Kolmogorov–Sinai entropies. This condition is based on a concept introduced in [5], originally intended as a tool for better understanding the relationship between permutation and Kolmogorov–Sinai entropies in a general setting. However, with Theorem 5, we have shown that this tool cannot be used directly to show the equality between permutation and Kolmogorov–Sinai entropies. It is an interesting question whether and how a clever adaptation and improvement of it can allow for new insights into the relationship between permutation and Kolmogorov–Sinai entropies.

In Section 3, we considered the simple one-dimensional case. With Theorem 4, we have given two conditions under which the permutation entropy is a lower bound for the Kolmogorov–Sinai entropy. This theorem generalizes previous statements in [1] and slightly generalizes a statement in [10]. One of the conditions (Condition 2) holds true for a large class of dynamical systems, while, for the other one (Condition 1) to hold true, it is necessary that the system is in some sense ‘order-preserving’. It is still an unsolved and interesting question whether Condition 1 can be weakened, especially since, to the best of our knowledge, there does not exist a counterexample to the equality of the permutation and Kolmogorov–Sinai entropies. Finding a generalization of Theorem 4 to a multidimensional setting is a further interesting question one could ask.

Appendix A. Proofs

Appendix A.1. Preliminaries

Given a probability space $(\Omega,\mathcal{A},\mu)$ and two $\sigma$-algebras $\mathcal{A}_1,\mathcal{A}_2\subseteq\mathcal{A}$, we write

$$\mathcal{A}_1 \overset{\mu}{\subseteq} \mathcal{A}_2$$

if for all $A_1\in\mathcal{A}_1$ there exists an $A_2\in\mathcal{A}_2$ with $\mu(A_1\setminus A_2)+\mu(A_2\setminus A_1)=0$. We write

$$\mathcal{A}_1 \overset{\mu}{=} \mathcal{A}_2$$

if both $\mathcal{A}_1\overset{\mu}{\subseteq}\mathcal{A}_2$ and $\mathcal{A}_2\overset{\mu}{\subseteq}\mathcal{A}_1$ hold true.

For a collection of $\mathbb{R}$-valued random variables $\{X_i\}_{i\in I}$ defined on some measurable space $(\Omega,\mathcal{A})$, we denote by

$$\sigma(\{X_i \mid i\in I\})$$

the smallest $\sigma$-algebra containing all sets in $\{X_i^{-1}(B) \mid i\in I,\ B\in\mathcal{B}\}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}$.

Given a family of disjoint sets $\mathcal{P}\subset\mathcal{A}$ and some set $Q\in\mathcal{A}$, we define

$$\Delta(\mathcal{P}|Q) := \{P\in\mathcal{P} \mid \mu(P\cap Q)>0\}$$

as the family of those sets in $\mathcal{P}$ that intersect $Q$ in a set of positive measure.

For two partitions $\mathcal{P}$ and $\mathcal{Q}$ of $\Omega$ in $\mathcal{A}$, the number of elements in $\Delta(\mathcal{P}|Q)$ can be used for an upper bound of the conditional entropy $H(\mathcal{P}|\mathcal{Q}):=H(\mathcal{P}\vee\mathcal{Q})-H(\mathcal{Q})$:

Consider the function $f:[0,1]\to\mathbb{R}$ with $f(x)=-x\log(x)$. Since $f$ is concave, Jensen's inequality provides

$$\begin{aligned}
-\sum_{P\in\Delta(\mathcal{P}|Q)}\mu(P\cap Q)\log(\mu(P\cap Q)) &= \#\Delta(\mathcal{P}|Q)\sum_{P\in\Delta(\mathcal{P}|Q)}\frac{1}{\#\Delta(\mathcal{P}|Q)}\,f(\mu(P\cap Q))\\
&\leq \#\Delta(\mathcal{P}|Q)\cdot f\Big(\sum_{P\in\Delta(\mathcal{P}|Q)}\frac{\mu(P\cap Q)}{\#\Delta(\mathcal{P}|Q)}\Big) = \#\Delta(\mathcal{P}|Q)\cdot f\Big(\frac{\mu(Q)}{\#\Delta(\mathcal{P}|Q)}\Big)\\
&= -\mu(Q)\log\frac{\mu(Q)}{\#\Delta(\mathcal{P}|Q)} = -\mu(Q)\big(\log(\mu(Q))-\log(\#\Delta(\mathcal{P}|Q))\big)
\end{aligned}$$

for all $Q\in\mathcal{Q}$. Using the above inequality implies

$$H(\mathcal{P}\vee\mathcal{Q}) = -\sum_{Q\in\mathcal{Q}}\sum_{P\in\mathcal{P}}\mu(P\cap Q)\log(\mu(P\cap Q)) = -\sum_{Q\in\mathcal{Q}}\sum_{P\in\Delta(\mathcal{P}|Q)}\mu(P\cap Q)\log(\mu(P\cap Q)) \leq -\sum_{Q\in\mathcal{Q}}\mu(Q)\big(\log(\mu(Q))-\log(\#\Delta(\mathcal{P}|Q))\big) = H(\mathcal{Q})+\sum_{Q\in\mathcal{Q}}\mu(Q)\cdot\log(\#\Delta(\mathcal{P}|Q)).$$

This is equivalent to

$$H(\mathcal{P}|\mathcal{Q}) \leq \sum_{Q\in\mathcal{Q}}\mu(Q)\cdot\log(\#\Delta(\mathcal{P}|Q)). \tag{A1}$$

Appendix A.2. Proof of the Equality in Formula (2)

The proof is based on the following Lemma A1 and Corollary A1.

Lemma A1.

Let $X:\Omega\to\mathbb{R}$ be a random variable and $\mathcal{U}$ a finite ordered partition of $\mathbb{R}$ with regard to the image measure $\mu_X$. Then, for $\mathcal{P}:=X^{-1}(\mathcal{U})$, $n\in\mathbb{N}$, and all $P_\pi\in\mathcal{OP}^{X}(n)$,

$$\#\Delta\big(\mathcal{P}^{(n)}\,\big|\,P_\pi\big) \leq \binom{n+\#\mathcal{U}-1}{\#\mathcal{U}-1}$$

holds true.

Proof. 

Set $I=\{1,2,\dots,\#\mathcal{U}\}$ and label the sets $U_i\in\mathcal{U}$, $i\in I$, in such a way that

$$i_1<i_2 \;\Longrightarrow\; \mu^2\big(\{(\omega_1,\omega_2)\in X^{-1}(U_{i_1})\times X^{-1}(U_{i_2}) \mid X(\omega_1)>X(\omega_2)\}\big)=0 \tag{A2}$$

holds true for all $i_1,i_2\in I$. Since $\mathcal{U}$ is assumed to be an ordered partition, this is always possible. Set $P_i:=X^{-1}(U_i)$ for all $i\in I$, so that $\mathcal{P}=\{P_i\}_{i\in I}$.

Fix $n\in\mathbb{N}$ and $P_\pi\in\mathcal{OP}^{X}(n)$. Using

$$P(\mathbf{i}) = \bigcap_{t=0}^{n-1}T^{-t}(P_{i_t})$$

for all $\mathbf{i}=(i_0,i_1,\dots,i_{n-1})\in I^n$, we have

$$\#\Delta\big(\mathcal{P}^{(n)}\,\big|\,P_\pi\big) = \#\{\mathbf{i}\in I^n : \mu(P(\mathbf{i})\cap P_\pi)>0\}.$$

There exists a permutation $\pi=(r_0,r_1,\dots,r_{n-1})$ of $\{0,1,\dots,n-1\}$ such that

$$X(T^{r_0}(\omega)) \leq X(T^{r_1}(\omega)) \leq \dots \leq X(T^{r_{n-1}}(\omega))$$

holds true for all $\omega\in P_\pi$. Using (A2), this implies

$$i_{r_0} \leq i_{r_1} \leq \dots \leq i_{r_{n-1}}$$

for all $\mathbf{i}=(i_0,i_1,\dots,i_{n-1})\in I^n$ with $\mu(P(\mathbf{i})\cap P_\pi)>0$. Therefore,

$$\#\Delta\big(\mathcal{P}^{(n)}\,\big|\,P_\pi\big) = \#\{\mathbf{i}\in I^n : \mu(P(\mathbf{i})\cap P_\pi)>0\} \leq \#\{\mathbf{i}\in I^n : i_{r_0}\leq i_{r_1}\leq\dots\leq i_{r_{n-1}}\} = \binom{n+\#\mathcal{U}-1}{\#\mathcal{U}-1}$$

holds true for all $P_\pi\in\mathcal{OP}^{X}(n)$. □

The lemma can be used to directly prove the following result.

Corollary A1.

Let $X=(X_1,X_2,\dots,X_d):\Omega\to\mathbb{R}^d$ be a random vector and $\mathcal{U}$ a finite partition of $\mathbb{R}$ into intervals. Then,

$$h\Big(\bigvee_{i=1}^{d}X_i^{-1}(\mathcal{U})\Big) \leq \lim_{k\to\infty}h\big(\mathcal{OP}^{X}(k)\big)$$

holds true.

Proof. 

Take $k,m\in\mathbb{N}$ and set $\mathcal{P}_i:=X_i^{-1}(\mathcal{U})$. Then,

$$H\big(\mathcal{P}_i^{(mk)}\,\big|\,(\mathcal{OP}^{X_i}(k))^{(mk)}\big) \leq \sum_{t=0}^{m-1}H\big(T^{-kt}(\mathcal{P}_i^{(k)})\,\big|\,(\mathcal{OP}^{X_i}(k))^{(mk)}\big) \leq \sum_{t=0}^{m-1}H\big(T^{-kt}(\mathcal{P}_i^{(k)})\,\big|\,T^{-kt}(\mathcal{OP}^{X_i}(k))\big) = m\,H\big(\mathcal{P}_i^{(k)}\,\big|\,\mathcal{OP}^{X_i}(k)\big)$$

holds true for all $i\in\{1,2,\dots,d\}$. Together with (A1) and Lemma A1, this provides

$$\begin{aligned}
h\Big(\bigvee_{i=1}^{d}\mathcal{P}_i\Big) &= \lim_{n\to\infty}\frac{1}{n}H\Big(\bigvee_{i=1}^{d}\mathcal{P}_i^{(n)}\Big) = \lim_{m\to\infty}\frac{1}{mk}H\Big(\bigvee_{i=1}^{d}\mathcal{P}_i^{(mk)}\Big) \leq \lim_{m\to\infty}\frac{1}{mk}H\Big(\bigvee_{i=1}^{d}\mathcal{P}_i^{(mk)}\vee(\mathcal{OP}^{X}(k))^{(mk)}\Big)\\
&= \lim_{m\to\infty}\frac{1}{mk}\Big(H\big((\mathcal{OP}^{X}(k))^{(mk)}\big)+H\Big(\bigvee_{i=1}^{d}\mathcal{P}_i^{(mk)}\,\Big|\,(\mathcal{OP}^{X}(k))^{(mk)}\Big)\Big)\\
&\leq \lim_{m\to\infty}\frac{1}{mk}\Big(H\big((\mathcal{OP}^{X}(k))^{(mk)}\big)+\sum_{i=1}^{d}H\big(\mathcal{P}_i^{(mk)}\,\big|\,(\mathcal{OP}^{X}(k))^{(mk)}\big)\Big)\\
&\leq \lim_{m\to\infty}\frac{1}{mk}\Big(H\big((\mathcal{OP}^{X}(k))^{(mk)}\big)+\sum_{i=1}^{d}H\big(\mathcal{P}_i^{(mk)}\,\big|\,(\mathcal{OP}^{X_i}(k))^{(mk)}\big)\Big)\\
&\leq \lim_{m\to\infty}\frac{1}{mk}H\big((\mathcal{OP}^{X}(k))^{(mk)}\big)+\sum_{i=1}^{d}\frac{1}{k}H\big(\mathcal{P}_i^{(k)}\,\big|\,\mathcal{OP}^{X_i}(k)\big) = h\big(\mathcal{OP}^{X}(k)\big)+\sum_{i=1}^{d}\frac{1}{k}H\big(\mathcal{P}_i^{(k)}\,\big|\,\mathcal{OP}^{X_i}(k)\big)\\
&\leq h\big(\mathcal{OP}^{X}(k)\big)+\sum_{i=1}^{d}\frac{1}{k}\sum_{P_\pi\in\mathcal{OP}^{X_i}(k)}\mu(P_\pi)\log\big(\#\Delta(\mathcal{P}_i^{(k)}|P_\pi)\big)\\
&\leq h\big(\mathcal{OP}^{X}(k)\big)+\frac{d}{k}\log\binom{k+\#\mathcal{U}-1}{\#\mathcal{U}-1} \leq h\big(\mathcal{OP}^{X}(k)\big)+\frac{d(\#\mathcal{U}-1)\log(k+\#\mathcal{U}-1)}{k}
\end{aligned}$$

for all $k\in\mathbb{N}$, which implies

$$h\Big(\bigvee_{i=1}^{d}\mathcal{P}_i\Big) \leq \lim_{k\to\infty}\Big(h\big(\mathcal{OP}^{X}(k)\big)+\frac{d(\#\mathcal{U}-1)\log(k+\#\mathcal{U}-1)}{k}\Big) = \lim_{k\to\infty}h\big(\mathcal{OP}^{X}(k)\big).$$

 □

We are now able to prove the equality in (2). Let $p_i:\mathbb{R}^d\to\mathbb{R}$ with $p_i((x_1,x_2,\dots,x_d))=x_i$ be the projection onto the $i$-th coordinate, and let $\mathcal{B}(\mathbb{R}^d)$ denote the Borel $\sigma$-algebra on $\mathbb{R}^d$. Since this $\sigma$-algebra is generated by sets of the type

$$I_1\times I_2\times\dots\times I_d,$$

where the $I_i$ are intervals, there exists an increasing sequence of finite partitions $(\mathcal{U}_l)_{l\in\mathbb{N}}$ of $\mathbb{R}$ into intervals such that

$$\mathcal{B}(\mathbb{R}^d) = \sigma\big(\{p_i^{-1}(U) \mid U\in\mathcal{U}_l,\ l\in\mathbb{N},\ i\in\{1,2,\dots,d\}\}\big)$$

holds true. Using (1), this implies

$$\mathcal{A} \overset{\mu}{=} \sigma\big(\{T^{-t}X^{-1}(p_i^{-1}(U)) \mid U\in\mathcal{U}_l,\ t\in\mathbb{N}_0,\ l\in\mathbb{N},\ i\in\{1,2,\dots,d\}\}\big) = \sigma\big(\{T^{-t}X_i^{-1}(U) \mid U\in\mathcal{U}_l,\ t\in\mathbb{N}_0,\ l\in\mathbb{N},\ i\in\{1,2,\dots,d\}\}\big) = \sigma\Big(\Big\{T^{-t}\Big(\bigvee_{i=1}^{d}X_i^{-1}(\mathcal{U}_l)\Big) \;\Big|\; t\in\mathbb{N}_0,\ l\in\mathbb{N}\Big\}\Big).$$

Thus, $(\mathcal{P}_l)_{l\in\mathbb{N}}$ with

$$\mathcal{P}_l := \bigvee_{i=1}^{d}X_i^{-1}(\mathcal{U}_l)$$

is a generating sequence of finite partitions, which implies (see e.g., [13])

$$\mathrm{KS} = \lim_{l\to\infty}h(\mathcal{P}_l).$$

Corollary A1 provides

$$h(\mathcal{P}_l) \leq \lim_{k\to\infty}h\big(\mathcal{OP}^{X}(k)\big)$$

for all $l\in\mathbb{N}$. Combining the two previous statements yields

$$\mathrm{KS} = \lim_{l\to\infty}h(\mathcal{P}_l) \leq \lim_{k\to\infty}h\big(\mathcal{OP}^{X}(k)\big). \tag{A3}$$

On the other hand,

$$\mathrm{KS} = \sup_{\mathcal{P}}h(\mathcal{P}) \geq \lim_{k\to\infty}h\big(\mathcal{OP}^{X}(k)\big)$$

holds true, which, together with (A3), finishes the proof of the equality in (2).

Appendix A.3. Proof of Theorem 2

For preparing the proof of Theorem 2, let us first give two lemmata.

Lemma A2.

Let $(\mathcal{P}_n)_{n\in\mathbb{N}}$ be a sequence of finite partitions of $\Omega$ in $\mathcal{A}$ satisfying

$$\mathcal{P}_n\vee T^{-1}(\mathcal{P}_n) \preceq \mathcal{P}_{n+1}. \tag{A4}$$

Then,

$$\liminf_{n\to\infty}H\big(\mathcal{P}_n\,\big|\,T^{-1}(\mathcal{P}_n^{(k)})\big) \leq \liminf_{n\to\infty}H\big(\mathcal{P}_{n+1}^{(k)}\,\big|\,\mathcal{P}_n^{(k)}\big) \leq \liminf_{n\to\infty}\frac{1}{n}H(\mathcal{P}_n)$$

holds true for all $k\in\mathbb{N}$.

holds true for all kN.

Proof. 

Take $k\in\mathbb{N}$. We have

$$\liminf_{n\to\infty}H\big(\mathcal{P}_n\,\big|\,T^{-1}(\mathcal{P}_n^{(k)})\big) = \liminf_{n\to\infty}\Big(H\big(\mathcal{P}_n^{(k+1)}\big)-H\big(T^{-1}(\mathcal{P}_n^{(k)})\big)\Big) = \liminf_{n\to\infty}\Big(H\big(\mathcal{P}_n^{(k+1)}\big)-H\big(\mathcal{P}_n^{(k)}\big)\Big) \overset{(A4)}{\leq} \liminf_{n\to\infty}\Big(H\big(\mathcal{P}_{n+1}^{(k)}\big)-H\big(\mathcal{P}_n^{(k)}\big)\Big) = \liminf_{n\to\infty}H\big(\mathcal{P}_{n+1}^{(k)}\,\big|\,\mathcal{P}_n^{(k)}\big).$$

The Stolz–Cesàro theorem further provides

$$\liminf_{n\to\infty}H\big(\mathcal{P}_{n+1}^{(k)}\,\big|\,\mathcal{P}_n^{(k)}\big) \leq \liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}H\big(\mathcal{P}_{i+1}^{(k)}\,\big|\,\mathcal{P}_i^{(k)}\big) = \liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\Big(H\big(\mathcal{P}_{i+1}^{(k)}\big)-H\big(\mathcal{P}_i^{(k)}\big)\Big) = \liminf_{n\to\infty}\frac{1}{n}\Big(H\big(\mathcal{P}_{n+1}^{(k)}\big)-H\big(\mathcal{P}_1^{(k)}\big)\Big) = \liminf_{n\to\infty}\frac{1}{n}H\big(\mathcal{P}_{n+1}^{(k)}\big) \overset{(A4)}{\leq} \liminf_{n\to\infty}\frac{1}{n}H(\mathcal{P}_{n+k}) = \liminf_{n\to\infty}\frac{1}{n}H(\mathcal{P}_n).$$

 □

Notice that (A4) is fulfilled for $\mathcal{P}_n := \mathcal{OP}^{X}(n)$.

Lemma A3.

Let $X=(X_1,X_2,\dots,X_d):\Omega\to\mathbb{R}^d$ be a random vector satisfying (1). Then,

$$\mathrm{KS} \leq \liminf_{n\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(k)}\big)\big)$$

holds true for all kN.

Proof. 

According to Theorem 1, we have

$$\mathrm{KS} = \lim_{n\to\infty}h\big(\mathcal{OP}^{X}(n)\big).$$

Using the future formula for the entropy rate (see e.g., [13]), we can write

$$h\big(\mathcal{OP}^{X}(n)\big) = \lim_{l\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(l)}\big)\big)$$

for all $n\in\mathbb{N}$. This implies

$$\mathrm{KS} = \lim_{n\to\infty}\lim_{l\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(l)}\big)\big) \leq \liminf_{n\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(k)}\big)\big)$$

for all $k\in\mathbb{N}$. □

Now, Lemmas A2 and A3 provide

$$\mathrm{KS} \leq \liminf_{n\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(k)}\big)\big) \leq \liminf_{n\to\infty}H\big((\mathcal{OP}^{X}(n+1))^{(k)}\,\big|\,(\mathcal{OP}^{X}(n))^{(k)}\big) \leq \underline{\mathrm{PE}}^{X} \leq \overline{\mathrm{PE}}^{X}.$$

The assumption $\mathrm{KS}\geq\overline{\mathrm{PE}}^{X}$ then implies

$$\mathrm{KS} = \liminf_{n\to\infty}H\big(\mathcal{OP}^{X}(n)\,\big|\,T^{-1}\big((\mathcal{OP}^{X}(n))^{(k)}\big)\big) = \liminf_{n\to\infty}H\big((\mathcal{OP}^{X}(n+1))^{(k)}\,\big|\,(\mathcal{OP}^{X}(n))^{(k)}\big) = \underline{\mathrm{PE}}^{X} = \overline{\mathrm{PE}}^{X}$$

for all $k\in\mathbb{N}$.

Appendix A.4. Proofs for the Simple One-Dimensional Case

This subsection is mainly devoted to the proof of Theorem 4 and to the proof of Lemma A5 mentioned at the end of Section 3.1. Recall the assumption that $\Omega$ is a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ the Borel $\sigma$-algebra on $\Omega$. The following lemma is a step toward the proof of Theorem 4.

Lemma A4.

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel $\sigma$-algebra on $\Omega$. Suppose there exist an ordered partition $\mathcal{M}=\{M_i\}_{i\in I}$ and an $m\in\mathbb{N}$ satisfying (5). Then, for all $n\in\mathbb{N}$ with $n\geq m$ and all multiindices $\mathbf{i}=(i_0,i_1,\dots,i_{n-1})\in I^n$,

$$\#\Delta\big(\mathcal{OP}(n)\,\big|\,M(\mathbf{i})\big) \leq 2^{\sum_{u=1}^{m}\#\{s\in\{0,1,\dots,n-1\}\,\mid\,i_s=i_{n-u}\ \text{and}\ s\neq n-u\}}$$

holds true.

Proof. 

Fix $n\in\mathbb{N}$ with $n\geq m$ and $\mathbf{i}=(i_0,i_1,\dots,i_{n-1})\in I^n$. We will show that

$$\mathcal{M}^{(s)}\otimes\mathcal{M}^{(s)}\vee\bigvee_{t=0}^{s-1}(T\times T)^{-t}(\{R,\Omega^2\setminus R\}) \;\preceq\; \mathcal{M}^{(s)}\otimes\mathcal{M}^{(s)}\vee\bigvee_{t=s-m}^{s-1}(T\times T)^{-t}(\{R,\Omega^2\setminus R\}) \tag{A5}$$

holds true for all $s\in\mathbb{N}$ with $s\geq m$ using induction over $s$:

The above statement is trivial for $s=m$. Suppose that (A5) holds true for some $s\in\mathbb{N}$ with $s\geq m$. We will show that (A5) then holds true for $s+1$:

M(s+1)M(s+1)t=0s(T×T)t{R,Ω2\R}=(T×T)1M(s)M(s)t=0s1(T×T)t{R,Ω2\R}M(m)M(m){R,Ω2\R}(5)(T×T)1M(s)M(s)t=0s1(T×T)t{R,Ω2\R}M(m)M(m)u=1m(T×T)u{R,Ω2\R}=(T×T)1M(s)M(s)t=0s1(T×T)t{R,Ω2\R}MM(T×T)1M(s)M(s)t=sms1(T×T)t{R,Ω2\R}MM=M(s+1)M(s+1)t=s+1ms(T×T)t{R,Ω2\R}. (A6)

In (A6), the induction hypothesis was used.

Notice that

M(n)=s=1n(id,T)n+sM(s)M(s),OP(n)=s=1n(id,T)n+st=0s1(T×T)t({R,Ω2\R})

hold true. This implies

M(n)OP(n)=s=1n(id,T)n+sM(s)M(s)t=0s1(T×T)t({R,Ω2\R})=s=1m1(id,T)n+sM(s)M(s)t=0s1(T×T)t({R,Ω2\R})s=mn(id,T)n+sM(s)M(s)t=0s1(T×T)t({R,Ω2\R})(A5)s=1m1(id,T)n+sM(s)M(s)t=0s1(T×T)t({R,Ω2\R})s=mn(id,T)n+sM(s)M(s)t=sms1(T×T)t({R,Ω2\R})=M(n)s=0n1t=nmn1(Ts,Tt)1({R,Ω2\R}).

Therefore,

$$\#\Delta\big(\mathcal{OP}(n)\,\big|\,M(\mathbf{i})\big) = \#\Delta\big(\mathcal{M}^{(n)}\vee\mathcal{OP}(n)\,\big|\,M(\mathbf{i})\big) \leq \#\Delta\Big(\mathcal{M}^{(n)}\vee\bigvee_{s=0}^{n-1}\bigvee_{t=n-m}^{n-1}(T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\Big|\,M(\mathbf{i})\Big) = \#\Delta\Big(\bigvee_{s=0}^{n-1}\bigvee_{t=n-m}^{n-1}(T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\Big|\,M(\mathbf{i})\Big) \leq \prod_{s=0}^{n-1}\prod_{t=n-m}^{n-1}\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) \tag{A7}$$

holds true. Notice that

$$(T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\}) = \big\{\{\omega\in\Omega : T^s(\omega)<T^t(\omega)\},\ \{\omega\in\Omega : T^s(\omega)\geq T^t(\omega)\}\big\}.$$

For $s=t$, we have

$$\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) = \#\Delta\big(\{\emptyset,\Omega\}\,\big|\,M(\mathbf{i})\big) = 1.$$

If $i_s\neq i_t$ is true, using the fact that $\mathcal{M}$ is an ordered partition yields

$$\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) = \#\Delta\big((T^s,T^t)^{-1}\big(\{M_{i_s}\times M_{i_t}\}\vee\{R,\Omega^2\setminus R\}\big)\,\big|\,M(\mathbf{i})\big) \overset{(3)}{=} \#\Delta\big((T^s,T^t)^{-1}(M_{i_s}\times M_{i_t})\,\big|\,M(\mathbf{i})\big) = 1.$$

For all other cases, we have

$$\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) \leq \#\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\big) = 2.$$

The above observations can be summarized as

$$\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) = \begin{cases}1 & \text{if } s=t \text{ or } i_s\neq i_t,\\ 2 & \text{if } s\neq t \text{ and } i_s=i_t.\end{cases}$$

In combination with (A7), this provides

$$\#\Delta\big(\mathcal{OP}(n)\,\big|\,M(\mathbf{i})\big) \leq \prod_{s=0}^{n-1}\prod_{t=n-m}^{n-1}\#\Delta\big((T^s,T^t)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,M(\mathbf{i})\big) = 2^{\sum_{t=n-m}^{n-1}\#\{s\in\{0,1,\dots,n-1\}\,\mid\,i_s=i_t\ \text{and}\ s\neq t\}} = 2^{\sum_{u=1}^{m}\#\{s\in\{0,1,\dots,n-1\}\,\mid\,i_s=i_{n-u}\ \text{and}\ s\neq n-u\}}.$$

 □

Notice that the above lemma immediately implies

$$H(\mathcal{OP}(n)) \leq H\big(\mathcal{OP}(n)\vee\mathcal{M}^{(n)}\big) = H\big(\mathcal{M}^{(n)}\big)+H\big(\mathcal{OP}(n)\,\big|\,\mathcal{M}^{(n)}\big) \overset{(A1)}{\leq} H\big(\mathcal{M}^{(n)}\big)+\sum_{\mathbf{i}\in I^n}\mu(M(\mathbf{i}))\cdot\log\big(\#\Delta(\mathcal{OP}(n)|M(\mathbf{i}))\big) \leq H\big(\mathcal{M}^{(n)}\big)+\log(2)\cdot nm. \tag{A8}$$

We come now to the proof of Theorem 4, which slightly generalizes a proof given in [10], where the case m=1 was considered. For better readability, we restate this proof with the generalization to arbitrary mN at the appropriate places within the proof.

Take $\epsilon>0$. According to Conditions 1 and 2, there exist finite or countably infinite ordered partitions $\mathcal{M}=\{M_i\}_{i\in I}$, $\mathcal{Q}=\{Q_j\}_{j\in J}$ and an $m\in\mathbb{N}$ with $H(\mathcal{M})<\infty$ and $H(\mathcal{Q})<\infty$ satisfying (5) and (6). Consider the partition

$$\mathcal{P} := \mathcal{M}\vee\mathcal{Q} = \{M_i\cap Q_j\}_{(i,j)\in I\times J} =: \{P_{(i,j)}\}_{(i,j)\in I\times J}.$$

Notice that $\mathcal{P}$ is again a finite or countably infinite ordered partition with $H(\mathcal{P})\leq H(\mathcal{M})+H(\mathcal{Q})<\infty$. Using (A1), this implies

$$\overline{\mathrm{PE}} \leq \limsup_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}(n)\vee\mathcal{P}^{(n)}\big) \leq h(\mathcal{P})+\limsup_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}(n)\,\big|\,\mathcal{P}^{(n)}\big) \leq \mathrm{KS}+\limsup_{n\to\infty}\frac{1}{n}H\big(\mathcal{OP}(n)\,\big|\,\mathcal{P}^{(n)}\big) \leq \mathrm{KS}+\limsup_{n\to\infty}\frac{1}{n}\sum_{(\mathbf{i},\mathbf{j})\in(I\times J)^n}\mu\big(P((\mathbf{i},\mathbf{j}))\big)\log\big(\#\Delta(\mathcal{OP}(n)\,|\,P((\mathbf{i},\mathbf{j})))\big), \tag{A9}$$

where we consider $(\mathbf{i},\mathbf{j})$ itself as one multiindex and $I\times J$ as one index set. Thus,

$$P((\mathbf{i},\mathbf{j})) = \bigcap_{t=0}^{n-1}T^{-t}\big(M_{i_t}\cap Q_{j_t}\big)$$

for all $(\mathbf{i},\mathbf{j})=((i_0,j_0),(i_1,j_1),\dots,(i_{n-1},j_{n-1}))\in(I\times J)^n$. Lemma A4 provides

(i,j)(I×J)nμ(P((i,j)))·log(#Δ(OP(n)|P((i,j))))log2(i,j)(I×J)nμ(P((i,j)))u=1m#{s{0,1,,n1}(is,js)=(inu,jnu)andsnu}log2(i,j)(I×J)nμ(P((i,j)))u=1m#{s{0,1,,n1}js=jnuandsnu}=log2u=1mjJniInμ(P((i,j)))·#{s{0,1,,n1}js=jnuandsnu}=log2u=1mjJnμ(Q(j))·#{s{0,1,,n1}js=jnuandsnu}log2u=1mjJnμ(Q(j))·(#{s{0,1,,nu1}js=jnu}+u1)=log2·m(m1)/2+u=1mjJnμ(Q(j))·#{s{0,1,,nu1}js=jnu}. (A10)

Combining (A9) and (A10) yields

$$\overline{\mathrm{PE}}\le \mathrm{KS}+\log 2\cdot\sum_{u=1}^{m}\limsup_{n\to\infty}\frac1n\sum_{\mathbf{j}\in J^n}\mu\big(Q(\mathbf{j})\big)\cdot\#\{s\in\{0,1,\ldots,n-u-1\}\,:\,j_s=j_{n-u}\}.$$

For each $u\in\{1,\ldots,m\}$, we have

$$\begin{aligned}
&\limsup_{n\to\infty}\frac1n\sum_{\mathbf{j}\in J^n}\mu\big(Q(\mathbf{j})\big)\cdot\#\{s\in\{0,1,\ldots,n-u-1\}\,:\,j_s=j_{n-u}\}\\
&\quad=\limsup_{n\to\infty}\frac1n\sum_{j_{n-u}\in J}\sum_{\mathbf{j}\in J^{n-u}}\mu\big(Q((\mathbf{j},j_{n-u}))\big)\cdot\#\{s\in\{0,1,\ldots,n-u-1\}\,:\,j_s=j_{n-u}\}\\
&\quad=\limsup_{n\to\infty}\frac1{n-u}\sum_{j_{n-u}\in J}\sum_{l=0}^{n-u}\mu\big(\{\omega\in T^{-n+u}(Q_{j_{n-u}})\,:\,\#\{s\in\{0,1,\ldots,n-u-1\}\,:\,T^s(\omega)\in Q_{j_{n-u}}\}=l\}\big)\cdot l\\
&\quad\le\sum_{Q\in\mathcal{Q}}\limsup_{n\to\infty}\frac1{n-u}\sum_{l=0}^{n-u}\mu\big(T^{-n+u}(Q)\cap T^{-l}(Q)\big)\\
&\quad=\sum_{Q\in\mathcal{Q}}\limsup_{n\to\infty}\frac1{n-u}\sum_{l=0}^{n-u}\mu\big(Q\cap T^{-l}(Q)\big)\overset{(6)}{<}\epsilon.
\end{aligned}$$

Hence,

$$\overline{\mathrm{PE}}\le \mathrm{KS}+\log 2\cdot m\cdot\epsilon.$$

The statement of the theorem follows from the fact that $\epsilon$ can be chosen arbitrarily close to 0. □
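Theorem 4 bounds the permutation entropy by the Kolmogorov–Sinai entropy. For interval maps such as the logistic map $T(x)=4x(1-x)$, whose KS entropy equals $\log 2$, this relationship can be probed directly from data; the following is a rough empirical sketch (Python; the orbit length, the pattern order $n$, and the normalization $H/(n-1)$ are our illustrative choices, and finite-$n$ estimates are biased):

```python
import numpy as np
from collections import Counter

def logistic(x):
    return 4.0 * x * (1.0 - x)

# One long orbit of the logistic map (its KS entropy is log 2).
rng = np.random.default_rng(1)
N, n = 100_000, 5                 # orbit length, ordinal pattern order
orbit = np.empty(N)
orbit[0] = rng.random()
for t in range(1, N):
    orbit[t] = logistic(orbit[t - 1])

# Empirical distribution of ordinal patterns of length n along the orbit.
patterns = Counter(tuple(np.argsort(orbit[t:t + n])) for t in range(N - n + 1))
freq = np.array(list(patterns.values()), dtype=float)
freq /= freq.sum()

# Entropy per time step; as n grows this tends to the permutation entropy,
# which for such interval maps equals KS = log 2 (see [1]).
h_n = -(freq * np.log(freq)).sum() / (n - 1)
print(h_n)   # finite-n estimate, of the rough magnitude of log 2
```

Note that at most $n!$ patterns can occur, and for the logistic map many of the $120$ patterns of order $5$ are forbidden, which is precisely what keeps the estimate finite.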

Lemma A5.

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel σ-algebra on $\Omega$. If $T$ is (countably) piecewise monotone and completely periodic, i.e.,

$$\mu\Big(\bigcup_{t=1}^{\infty}\{\omega\in\Omega\,:\,T^t(\omega)=\omega\}\Big)=1,$$

then

$$\overline{\mathrm{PE}}=0$$

holds true.

Proof. 

Let

$$\Theta_k:=\bigcup_{t=1}^{k}\{\omega\in\Omega\,:\,T^t(\omega)=\omega\}$$

be the set of all points with period less than or equal to $k\in\mathbb{N}$ and

$$\Theta=\bigcup_{t=1}^{\infty}\{\omega\in\Omega\,:\,T^t(\omega)=\omega\}$$

the set of all periodic points. Since $\mu(\Theta)=1$, for all $\epsilon>0$ there exists a $k\in\mathbb{N}$ with

$$\mu(\Theta_k)>1-\epsilon.$$

This implies

$$\begin{aligned}
\overline{\mathrm{PE}}&\le\limsup_{n\to\infty}\frac1n H\big(\mathrm{OP}(n)\vee\{\Theta_k,\Omega\setminus\Theta_k\}\big)=\limsup_{n\to\infty}\frac1n\Big(H\big(\mathrm{OP}(n)\vee\{\Theta_k,\Omega\setminus\Theta_k\}\big)-H\big(\{\Theta_k,\Omega\setminus\Theta_k\}\big)\Big)\\
&\le\mu(\Theta_k)\cdot\limsup_{n\to\infty}\frac1n\sum_{P_\pi\in\mathrm{OP}(n)}-\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}\log\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}+\mu(\Omega\setminus\Theta_k)\cdot\limsup_{n\to\infty}\frac1n\sum_{P_\pi\in\mathrm{OP}(n)}-\frac{\mu(P_\pi\setminus\Theta_k)}{\mu(\Omega\setminus\Theta_k)}\log\frac{\mu(P_\pi\setminus\Theta_k)}{\mu(\Omega\setminus\Theta_k)}\\
&\overset{(A8)}{\le}\mu(\Theta_k)\cdot\limsup_{n\to\infty}\frac1n\sum_{P_\pi\in\mathrm{OP}(n)}-\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}\log\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}+\epsilon\cdot\limsup_{n\to\infty}\frac1n\big(H(\mathcal{M}(n))+\log(2)\cdot n\big)\\
&\le\mu(\Theta_k)\cdot\limsup_{n\to\infty}\frac1n\sum_{P_\pi\in\mathrm{OP}(n)}-\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}\log\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}+(H(\mathcal{M})+\log(2))\cdot\epsilon\\
&=\mu(\Theta_k)\cdot\limsup_{n\to\infty}\frac1n\sum_{P_\pi\in\mathrm{OP}(k)}-\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}\log\frac{\mu(P_\pi\cap\Theta_k)}{\mu(\Theta_k)}+(H(\mathcal{M})+\log(2))\cdot\epsilon\\
&=(H(\mathcal{M})+\log(2))\cdot\epsilon.
\end{aligned}$$

This provides $\overline{\mathrm{PE}}=0$ because $\epsilon$ can be chosen arbitrarily close to 0. □
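The mechanism behind Lemma A5 is that on a completely periodic system the number of realized ordinal patterns stops growing with $n$, so the entropy rate vanishes. A tiny illustration (Python; the map $T(x)=1-x$ on $[0,1]$, under which every point has period at most 2, and the sample sizes are our choices):

```python
import numpy as np

def T(x):
    return 1.0 - x   # every point is periodic with period <= 2

rng = np.random.default_rng(2)
xs = rng.random(10_000)

for n in (3, 6, 9):
    # Ordinal pattern of the orbit segment (x, T(x), ..., T^{n-1}(x));
    # ties between equal orbit values are broken by index (stable sort).
    pats = set()
    for x in xs:
        seg = [x]
        for _ in range(n - 1):
            seg.append(T(seg[-1]))
        pats.add(tuple(np.argsort(np.array(seg), kind="stable")))
    # Almost surely only two patterns occur (x < 1-x or x > 1-x), so the
    # pattern count is bounded in n and (1/n) log(#patterns) tends to 0.
    assert len(pats) <= 3
```

Here the bound 3 allows for the measure-zero fixed point $x=1/2$; the entropy rate estimate $\log(\#\text{patterns})/n$ visibly tends to 0 as $n$ grows.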

Remark A1.

To be able to show that Condition 2 holds true under the assumptions of Corollary 1 for non-ergodic systems via ergodic decomposition, one needs to require that $(\Omega,\mathcal{B},\mu)$ is a Lebesgue space. A probability space $(\Omega,\mathcal{B},\mu)$ is called a Lebesgue space if it is isomorphic to some probability space $(\tilde\Omega,\tilde{\mathcal{B}},\tilde\mu)$, where $\tilde\Omega$ is a complete separable metric space and $\tilde{\mathcal{B}}$ is the completion of the Borel σ-algebra on $\tilde\Omega$, i.e., $\tilde{\mathcal{B}}$ additionally contains all subsets of Borel sets with measure 0. If $\Omega\subseteq\mathbb{R}$ is a Borel subset, then $(\Omega,\mathcal{B},\mu)$ is a Lebesgue space if $\mathcal{B}$ is complete with respect to $\mu$ (see, e.g., [14]).

Alternatively, one can use Rokhlin–Halmos towers to show that Condition 2 holds true for non-ergodic systems (see [10]). For this approach, it is only necessary to require that $\Omega$ is a separable metric space, $\mathcal{B}$ the Borel σ-algebra on $\Omega$, and $T:\Omega\to\Omega$ an aperiodic map [15].

Moreover, notice that, in [10], it was required that $\Omega$ be a compact metric space so that $\mu$ is regular, which allowed any set in $\mathcal{B}$ to be approximated by a finite union of intervals. However, this is not necessary: the Borel σ-algebra is generated by the algebra of all sets of the form $I\cap\Omega$, where $I$ is an open or closed interval, and every set in a σ-algebra can be approximated by sets from an algebra generating that σ-algebra (see, e.g., [16]).

Appendix A.5. Proof of Lemma 1 and the ‘Ergodic Part’ of Lemma 2

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel σ-algebra on $\Omega$. We start with showing Lemma 1. For this, fix some $n\in\mathbb{N}$. By its definition, the set $V_n$ can be written as a union of sets in $\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))$. Notice that

$$\mathrm{OP}(n+1)=\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\vee(\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\}). \tag{A11}$$

For $Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))$, consider some $\omega\in Q$. If $\omega\notin V_n$, we can use the transitivity of the order relation to determine the order relation between $\omega$ and $T^n(\omega)$ from the ordering given by $Q$. This implies

$$\#_\Delta\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\big|_Q\big)=1 \tag{A12}$$

for all $Q\subseteq\Omega\setminus V_n$. Thus,

$$\begin{aligned}
&H\big(\mathrm{OP}(n+1)\,\big|\,\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\big)\\
&\quad\overset{(A11)}{\le}H\big(\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\,\big|\,\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\big)+H\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\big)\\
&\quad=H\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\,\big|\,\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\big)\\
&\quad\overset{(A1)}{\le}\sum_{Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))}\mu(Q)\cdot\log\big(\#_\Delta\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\big|_Q\big)\big)\\
&\quad=\sum_{\substack{Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\\ Q\subseteq V_n}}\mu(Q)\cdot\log\big(\#_\Delta\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\big|_Q\big)\big)+\sum_{\substack{Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\\ Q\subseteq\Omega\setminus V_n}}\mu(Q)\cdot\log\big(\#_\Delta\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\big|_Q\big)\big)\\
&\quad\overset{(A12)}{=}\sum_{\substack{Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\\ Q\subseteq V_n}}\mu(Q)\cdot\log\big(\#_\Delta\big((\mathrm{id},T^n)^{-1}(\{R,\Omega^2\setminus R\})\big|_Q\big)\big)\\
&\quad\le\sum_{\substack{Q\in\mathrm{OP}(n)\vee T^{-1}(\mathrm{OP}(n))\\ Q\subseteq V_n}}\mu(Q)\cdot\log(2)=\log(2)\cdot\mu(V_n).
\end{aligned}$$

This shows Lemma 1.
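Lemma 1 says that the new information gained in step $n+1$ is at most $\log(2)\cdot\mu(V_n)$. Reading $V_n$ as the set of points $\omega$ for which the comparison between $\omega$ and $T^n(\omega)$ is not forced by transitivity, i.e., both lie in the same gap of the already-seen values $T(\omega),\ldots,T^{n-1}(\omega)$, the quantity $\mu(V_n)$ can be estimated by sampling. A Monte Carlo sketch (Python; this reading of $V_n$, the logistic map, the uniform sampling, and all parameters are our assumptions for illustration):

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def undetermined(omega, n):
    """True if omega and T^n(omega) compare identically with every point
    T(omega), ..., T^{n-1}(omega), so transitivity fixes nothing new."""
    x, inner = omega, []
    for _ in range(n - 1):
        x = logistic(x)
        inner.append(x)
    new = logistic(x)              # T^n(omega)
    inner = np.array(inner)
    return bool(np.array_equal(inner < omega, inner < new))

rng = np.random.default_rng(3)
sample = rng.random(20_000)
est = {n: float(np.mean([undetermined(w, n) for w in sample])) for n in (2, 5, 10)}
for n, p in est.items():
    print(n, p, np.log(2) * p)     # estimate of mu(V_n) and of the entropy bound
```

As $n$ grows, more earlier orbit points separate $\omega$ from $T^n(\omega)$, so the estimated $\mu(V_n)$, and with it the conditional-entropy bound, shrinks.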

To prove the ‘ergodic part’ of Lemma 2, take $\epsilon>0$. Choose an ordered partition $\mathcal{U}=\{U_i\}_{i=1}^{N}$ of $\Omega$ such that $0<\mu(U_i)<\epsilon$ holds true for all $i\in\{1,2,\ldots,N\}$. This is always possible because $\mu$ was assumed to be aperiodic. Label the sets $U_i\in\mathcal{U}$ with $i\in\{1,2,\ldots,N\}$ in such a way that

$$i_1<i_2\ \Longrightarrow\ \mu^2\big(\{(\omega_1,\omega_2)\in U_{i_1}\times U_{i_2}\,:\,\omega_1>\omega_2\}\big)=0$$

holds true for all $i_1,i_2\in\{1,2,\ldots,N\}$. Since $T$ is ergodic, there exists an $n_0\in\mathbb{N}$ such that

$$\mu\Big(\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)>1-\epsilon$$

holds true.

The set $\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)$ consists of all $\omega\in\Omega$ whose orbit $(\omega,T(\omega),\ldots,T^{n_0-1}(\omega))$ visits each of the sets in $\mathcal{U}$. Thus, if such an $\omega$ lies in $U_i$ with $1<i<N$ and in $V_t$ for $t\ge n_0$, then, by definition of $V_t$, the point $T^t(\omega)$ must belong to $U_{i-1}\cup U_i\cup U_{i+1}$. With a similar argument for $\omega\in U_1$ or $\omega\in U_N$, one obtains that

$$\mu\Big(V_t\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)\le\mu\big(U_1\cap T^{-t}(U_1\cup U_2)\big)+\sum_{i=2}^{N-1}\mu\big(U_i\cap T^{-t}(U_{i-1}\cup U_i\cup U_{i+1})\big)+\mu\big(U_N\cap T^{-t}(U_{N-1}\cup U_N)\big)$$

holds true for all $t\in\mathbb{N}$ with $t>n_0$. Using the ergodicity of $T$ implies

$$\begin{aligned}
\lim_{n\to\infty}\frac1n\sum_{t=1}^{n}\mu\Big(V_t\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)&=\lim_{n\to\infty}\frac1n\sum_{t=1}^{n}\mu\Big(V_{t+n_0}\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)\\
&\le\mu(U_1)\,\mu(U_1\cup U_2)+\sum_{i=2}^{N-1}\mu(U_i)\,\mu(U_{i-1}\cup U_i\cup U_{i+1})+\mu(U_N)\,\mu(U_{N-1}\cup U_N)\\
&\le\mu(U_1)\cdot 2\epsilon+\sum_{i=2}^{N-1}\mu(U_i)\cdot 3\epsilon+\mu(U_N)\cdot 2\epsilon\le 3\epsilon.
\end{aligned}$$

The Stolz–Cesàro theorem then provides

$$\liminf_{n\to\infty}\mu\Big(V_n\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)\le\liminf_{n\to\infty}\frac1n\sum_{t=1}^{n}\mu\Big(V_t\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)\le 3\epsilon.$$

Hence,

$$\liminf_{n\to\infty}\mu(V_n)=\liminf_{n\to\infty}\Big(\mu\Big(V_n\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)+\mu\Big(V_n\setminus\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)\Big)\le\liminf_{n\to\infty}\mu\Big(V_n\cap\bigcap_{i=1}^{N}\bigcup_{s=1}^{n_0-1}T^{-s}(U_i)\Big)+\epsilon\le 4\epsilon.$$

Since $\epsilon$ can be chosen arbitrarily close to 0, this implies $\liminf_{n\to\infty}\mu(V_n)=0$. □

Appendix A.6. Proof of Theorem 5

Let $\Omega$ be a subset of $\mathbb{R}$ and $\mathcal{A}=\mathcal{B}$ be the Borel σ-algebra on $\Omega$, and consider the set

$$\Theta=\bigcup_{t=1}^{\infty}\{\omega\in\Omega\,:\,T^t(\omega)=\omega\}.$$

Since $T$ is $\mu$-almost surely aperiodic, we have $\mu(\Theta)=0$.

We will prove the statement of the theorem by contradiction. Suppose $\sum_{n=1}^{\infty}\mu(V_n)<\infty$ holds true. Using the Borel–Cantelli lemma, this implies

$$\mu\Big(\bigcap_{k=1}^{\infty}\bigcup_{n=k}^{\infty}V_n\Big)=0$$

or, equivalently,

$$\lim_{K\to\infty}\mu\Big(\bigcup_{k=1}^{K}\bigcap_{n=k}^{\infty}(\Omega\setminus V_n)\Big)=\mu\Big(\bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}(\Omega\setminus V_n)\Big)=1.$$

Therefore, there exists a $K\in\mathbb{N}$ with

$$\mu\Big(\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\setminus\Theta\Big)=\mu\Big(\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\Big)=\mu\Big(\bigcup_{k=1}^{K}\bigcap_{n=k}^{\infty}(\Omega\setminus V_n)\Big)>0.$$

Set

$$\delta(\omega):=\min_{1\le s\le K-1}|\omega-T^s(\omega)|$$

for all $\omega\in\Omega$. Notice that every aperiodic point $\omega\notin\Theta$ satisfies $\delta(\omega)>0$. Thus,

$$0<\mu\Big(\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\setminus\Theta\Big)=\mu\Big(\bigcup_{i=1}^{\infty}\Big\{\omega\in\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\setminus\Theta\,:\,\delta(\omega)>1/i\Big\}\Big)=\lim_{i\to\infty}\mu\Big(\Big\{\omega\in\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\setminus\Theta\,:\,\delta(\omega)>1/i\Big\}\Big).$$

Thus, there exists some δ>0 such that

$$A_\delta:=\Big\{\omega\in\bigcap_{n=K}^{\infty}(\Omega\setminus V_n)\setminus\Theta\,:\,\delta(\omega)>\delta\Big\}$$

has a strictly positive measure. Because there exists a countable set $\Omega_\delta\subseteq\Omega$ with

$$A_\delta=\bigcup_{\omega\in\Omega_\delta}A_\delta\cap(\omega-\delta/2,\omega+\delta/2),$$

we have

$$\mu\big(A_\delta\cap(\omega_0-\delta/2,\omega_0+\delta/2)\big)>0$$

for some $\omega_0\in\Omega_\delta$. Using the ergodicity of $T$, this implies

$$\mu\Big(\bigcup_{n=K}^{\infty}T^{-n}\big((\omega_0-\delta/2,\omega_0+\delta/2)\big)\Big)=\mu\Big(\bigcup_{n=0}^{\infty}T^{-n}\big(T^{-K}((\omega_0-\delta/2,\omega_0+\delta/2))\big)\Big)=1$$

and, consequently,

$$\mu\Big(A_\delta\cap(\omega_0-\delta/2,\omega_0+\delta/2)\cap\bigcup_{n=K}^{\infty}T^{-n}\big((\omega_0-\delta/2,\omega_0+\delta/2)\big)\Big)=\mu\big(A_\delta\cap(\omega_0-\delta/2,\omega_0+\delta/2)\big)>0.$$

Thus, in particular, $A_\delta\cap(\omega_0-\delta/2,\omega_0+\delta/2)\cap\bigcup_{n=K}^{\infty}T^{-n}\big((\omega_0-\delta/2,\omega_0+\delta/2)\big)$ is not empty. Now take some

$$\omega\in A_\delta\cap(\omega_0-\delta/2,\omega_0+\delta/2)\cap\bigcup_{n=K}^{\infty}T^{-n}\big((\omega_0-\delta/2,\omega_0+\delta/2)\big).$$

We have $|\omega-\omega_0|<\delta/2$. Additionally, there exists an $n_0\in\mathbb{N}$ with $n_0\ge K$ such that $T^{n_0}(\omega)\in(\omega_0-\delta/2,\omega_0+\delta/2)$ holds true, which is equivalent to $|\omega_0-T^{n_0}(\omega)|<\delta/2$. As a consequence,

$$|\omega-T^{n_0}(\omega)|\le|\omega-\omega_0|+|\omega_0-T^{n_0}(\omega)|<\delta$$

holds true. This implies that

$$m:=\min\{n\in\mathbb{N}\,:\,|\omega-T^n(\omega)|<\delta\}$$

is less than or equal to $n_0$. In particular, $m\in\mathbb{N}$ is well defined and finite. On the other hand,

$$\omega\in A_\delta\subseteq\Big\{\omega\in\Omega\,:\,\min_{1\le s\le K-1}|\omega-T^s(\omega)|>\delta\Big\}$$

implies $m\ge K$. By construction of $m$, we have

$$|\omega-T^s(\omega)|\ge\delta>|\omega-T^m(\omega)|$$

for all $s\in\{1,2,\ldots,m-1\}$. Hence, $\omega\in V_m$ holds true, which is a contradiction to

$$\omega\in A_\delta\subseteq\bigcap_{n=K}^{\infty}(\Omega\setminus V_n).$$

Therefore, $\sum_{n=1}^{\infty}\mu(V_n)<\infty$ cannot be true. □
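The contradiction above rests on recurrence: for an ergodic, aperiodic $T$, almost every $\omega$ stays $\delta$-separated from its first $K-1$ iterates, yet returns $\delta$-close to itself at some arbitrarily late time $n\ge K$, forcing $\omega\in V_m$ for some $m\ge K$. This is easy to watch numerically for an irrational rotation, which is aperiodic and ergodic (a sketch in Python; the golden-ratio angle, the starting point, and the constants $K$ and $\delta$ are our illustrative choices):

```python
import math

alpha = (math.sqrt(5.0) - 1.0) / 2.0   # golden rotation angle, irrational
omega, K, delta = 0.3, 10, 1e-3

def T(x):
    return (x + alpha) % 1.0

# delta(omega) in the proof: the first K-1 iterates stay bounded away ...
x, early = omega, []
for s in range(1, K):
    x = T(x)
    early.append(abs(omega - x))
assert min(early) > delta

# ... yet some much later iterate T^n(omega), n >= K, comes delta-close again
# (for the golden rotation this happens at Fibonacci times, e.g. n = 610).
x, best = omega, 1.0
for n in range(1, 2000):
    x = T(x)
    if n >= K:
        best = min(best, abs(omega - x))
assert best < delta
```

The same dichotomy, separation at small times versus recurrence at large times, is exactly what produces the contradiction with $\omega\in\bigcap_{n\ge K}(\Omega\setminus V_n)$ in the proof.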

Author Contributions

Conceptualization, K.K. and T.G.; Formal analysis, T.G.; Investigation, K.K. and T.G.; Methodology, T.G.; Supervision, K.K.; Visualization, T.G.; Writing—original draft, K.K. and T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bandt C., Keller G., Pompe B. Entropy of interval maps via permutations. Nonlinearity. 2002;15:1595. doi:10.1088/0951-7715/15/5/312.
2. Amigó J.M. The equality of Kolmogorov–Sinai entropy and metric permutation entropy generalized. Physica D. 2012;241:789–793. doi:10.1016/j.physd.2012.01.004.
3. Antoniouk A., Keller K., Maksymenko S. Kolmogorov–Sinai entropy via separation properties of order-generated σ-algebras. Discret. Cont. Dyn.-A. 2014;34:1793.
4. Fouda E., Koepf W., Jacquir S. The ordinal Kolmogorov–Sinai entropy: A generalized approximation. Commun. Nonlinear Sci. 2017;46:103–115. doi:10.1016/j.cnsns.2016.11.001.
5. Unakafova V., Unakafov A., Keller K. An approach to comparing Kolmogorov–Sinai and permutation entropy. Eur. Phys. J. Spec. Top. 2013;222:353–361. doi:10.1140/epjst/e2013-01846-7.
6. Watt S., Politi A. Permutation entropy revisited. Chaos Soliton Fract. 2019;120:95–99. doi:10.1016/j.chaos.2018.12.039.
7. Bandt C., Pompe B. Permutation Entropy: A Natural Complexity Measure for Time Series. Phys. Rev. Lett. 2002;88:174102. doi:10.1103/PhysRevLett.88.174102.
8. Amigó J.M., Kennel M.B., Kocarev L. The permutation entropy rate equals the metric entropy rate for ergodic information sources and ergodic dynamical systems. Physica D. 2005;210:77–95. doi:10.1016/j.physd.2005.07.006.
9. Haruna T., Nakajima K. Permutation complexity via duality between values and orderings. Physica D. 2011;240:1370–1377. doi:10.1016/j.physd.2011.05.019.
10. Gutjahr T., Keller K. Equality of Kolmogorov–Sinai and permutation entropy for one-dimensional maps consisting of countably many monotone parts. Discret. Cont. Dyn.-A. 2019;39:4207. doi:10.3934/dcds.2019170.
11. Einsiedler M., Ward T. Ergodic Theory: With a View Towards Number Theory. Graduate Texts in Mathematics. Springer; London, UK: 2010.
12. Keller K., Unakafov A.M., Unakafova V.A. On the relation of KS entropy and permutation entropy. Physica D. 2012;241:1477–1481. doi:10.1016/j.physd.2012.05.010.
13. Walters P. An Introduction to Ergodic Theory. Graduate Texts in Mathematics. Springer; New York, NY, USA: 2000.
14. Rokhlin V.A. On the fundamental ideas of measure theory. Am. Math. Soc. Transl. 1952;71:55.
15. Heinemann S., Schmitt O. Rokhlin's Lemma for Non-invertible Maps. Dyn. Syst. Appl. 2001;2:201–213.
16. Halmos P.R. Measure Theory. Extension of Measures; pp. 49–72. Springer; New York, NY, USA: 1950.

Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
