Highlights

- The paper introduces a new method for reconstructing, from temporal sequences of probability density functions, an unknown one-dimensional transformation that is subject to constantly applied stochastic perturbations.
- The main assumption is that the one-dimensional transformation that generated the densities is piecewise-linear and semi-Markov.
- A matrix approximation of the transfer operator associated with the stochastically perturbed transformation, which forms the basis for the reconstruction algorithm, is introduced.
- A practical algorithm is proposed to estimate the matrix representation of the Frobenius–Perron operator associated with the unperturbed transformation and to reconstruct the one-dimensional map.
- The algorithm is extended to nonlinear continuous maps.
- Numerical simulation examples are provided to demonstrate the performance of the approach and to compare it with that of an existing algorithm.
Keywords: Nonlinear systems, Chaotic maps, Probability density functions, Inverse Frobenius–Perron problem
Abstract
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
1. Introduction
A dynamical system, whose evolution is completely dictated by deterministic equations, can under certain conditions exhibit chaotic behavior and generate a density of states [1]. Chaotic behavior has been observed in many real-world systems including biological, physical and economic systems [2], [3], [4]. The simplest dynamical systems that exhibit chaos are one-dimensional maps. Such one-dimensional discrete dynamical systems are used to describe the evolution of many real-world systems including olfactory systems [5], electrical circuits [6], communication networks [7], rotary drills [8], chemical reactions [9] and the heart [10].
An important challenge is to develop such models from experimental observations [11], [12], [13]. Conventional approaches [14], [15], [16] rely on time series data. However, it is often not possible to measure point trajectories. For example, particle image velocimetry, a technique used to generate instantaneous velocity measurements in fluids, identifies individual tracer particles in consecutive images captured at high speeds but cannot resolve their individual orbits [17]. In such cases, in the absence of individual point trajectories, it is desirable to determine the underlying dynamical system that generated the observed density functions. Given a non-singular transformation, the evolution of an initial density function under the action of the transformation is described by the Frobenius–Perron operator associated with the transformation [18]. The fixed point of such an operator represents the invariant density under the transformation. The problem of inferring a dynamical system whose invariant density function is given is known as the inverse Frobenius–Perron problem [19].
In general, solving the inverse problem involves deriving a finite-dimensional representation of the operator, which is then used to construct the dynamical system. Ulam conjectured [20] that a general infinite-dimensional Frobenius–Perron operator can be approximated by a finite-rank Markov operator. For one-dimensional transformations, Li [21] showed that, given a sequence of piecewise constant approximations Pn of the Frobenius–Perron operator P, the corresponding sequence of fixed points fn of Pn converges to the invariant density (i.e. the fixed point) of the operator, thus proving Ulam's conjecture. In this context, the problem of determining the dynamical system that corresponds to the finite-dimensional approximation of the Frobenius–Perron operator is also known as the inverse Ulam problem [22], [23].
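Ulam's construction can be illustrated numerically. The sketch below is an assumption-laden illustration, not the paper's algorithm: it estimates the finite-rank Markov matrix of the dyadic map x ↦ 2x mod 1 by sampling each partition cell and recording where the images land; the left eigenvector for eigenvalue 1 then approximates the invariant density (uniform, for this map).

```python
import numpy as np

def ulam_matrix(S, N, rng, n_samples=10_000, domain=(0.0, 1.0)):
    """Monte-Carlo estimate of the N x N Ulam matrix of a map S: entry
    P[i, j] approximates mu(R_i intersect S^{-1}(R_j)) / mu(R_i)
    on a uniform partition of `domain`."""
    a, b = domain
    edges = np.linspace(a, b, N + 1)
    P = np.zeros((N, N))
    for i in range(N):
        x = rng.uniform(edges[i], edges[i + 1], n_samples)
        y = np.clip(S(x), a, b - 1e-12)              # keep images inside [a, b)
        j = np.searchsorted(edges, y, side='right') - 1
        P[i] = np.bincount(j, minlength=N) / n_samples
    return P

rng = np.random.default_rng(0)
P = ulam_matrix(lambda x: (2.0 * x) % 1.0, N=8, rng=rng)   # dyadic map
vals, vecs = np.linalg.eig(P.T)
f = np.real(vecs[:, np.argmax(np.real(vals))])
f = f / f.sum()    # cell probabilities of the approximate invariant density
```

For the dyadic map the estimated invariant density comes out (nearly) constant, as Ulam's conjecture predicts.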
A numerical algorithm to determine a one-dimensional transformation given the invariant density function was proposed in [19]. The algorithm, however, does not provide an explicit relationship between the invariant density of the one-dimensional map and the map itself. In [24] a graph-theoretic approach is introduced to construct piecewise-linear transformations that possess piecewise-constant invariant density functions taking the value 0 at all relative minimum points.
A generalization of these methods is presented in [25], which introduces a relationship between an arbitrary piecewise constant density function and a semi-Markov piecewise linear transformation defined over a partition of the interval of interest. This forms the basis for a matrix-based method to reconstruct a 3-band transformation, a special class of semi-Markov transformations, which has a given piecewise-constant density function as invariant density. The inverse problem was studied in [26] for a class of symmetric maps that have invariant symmetric beta density functions, for which the unique solution can be obtained under given symmetry constraints. This method was generalized in [27], which considers a broader class of continuous unimodal maps, for which each branch of the map covers the complete interval and the invariant densities are asymmetric beta functions. Given arbitrary invariant densities, similar approaches were proposed for identifying maps of specified forms: two types of one-dimensional symmetric maps [28], smooth chaotic maps with closed-form expressions [29], [30], and multi-branch complete chaotic maps [31]. Problems of synthesizing one-dimensional maps with a prescribed invariant density function or autocorrelation function were considered in [32], [33]. Using positive matrix theory, an approach to synthesizing chaotic maps with arbitrary piecewise constant invariant densities and arbitrary mixing properties was developed in [34]. This method was further extended to synthesizing dynamical systems with desired statistical properties [35], developing communication networks [36], and designing randomly switched chaotic maps and two-dimensional chaotic maps used for image generation [37]. The global, open-loop strategy to control chaos proposed in [22], [23] is formulated as an inverse Frobenius–Perron problem. The aim is to perturb the original dynamical system to achieve the desired invariant measure.
This reduces to the problem of finding a perturbation of the original Frobenius–Perron matrix to achieve the target invariant density function and then solving the inverse Ulam problem to determine the perturbed dynamical system.
In general, the solution to the inverse Frobenius–Perron problem is not unique. Different transformations exhibiting strikingly different dynamics may share the same invariant density functions. Additional limiting assumptions or constraints are required to ensure uniqueness of the solution [26], [27], [28], [29], [30], [31], [32], [33], [34]. In [38] a new method is proposed to solve the inverse Frobenius–Perron problem based not only on the invariant density function but also on sequences of probability density functions generated by the transformation, which ensures uniqueness of the solution. The method has been shown to be quite robust to noise. For small levels of noise this is indeed expected in light of the convergence results for noise perturbed systems established by Bollt et al. [39]. However, the accuracy of the reconstruction starts to deteriorate significantly above a certain level of noise. For large levels of noise, the approximation errors can be drastically reduced by taking into account the density function of the noise, which is often known a priori or can be estimated.
This paper proposes a new method to estimate the piecewise linear and expanding semi-Markov transformation that generated a temporal sequence of probability density functions whilst subjected to constantly applied stochastic perturbations. The method is extended to more general nonlinear transformations that can be approximated arbitrarily closely by piecewise linear functions.
To differentiate it from the usual deterministic inverse problem, we call this the inverse stochastic Frobenius–Perron problem. The emphasis here is on recovering the unknown transformation that generated the sequence of densities rather than one of the many possible transformations that share the same invariant density function.
To this end, we formulate the matrix representation of the transfer operator associated with the stochastically perturbed system in terms of the Frobenius–Perron matrix associated with the unperturbed system that we aim to estimate. This representation forms the basis for the proposed algorithm to estimate the 'unperturbed' Frobenius–Perron matrix from sequences of probability density functions generated by the unknown, stochastically perturbed dynamical system, under the assumption that the density function of the perturbation is known. For general nonlinear transformations, we present a practical method to solve the inverse Ulam problem, which allows determining the sign of the derivative on each interval of the partition.
Whilst the sign of the derivative is not important when the goal is to determine a transformation that has a given invariant density function [22], this step is crucial if the aim is to reconstruct/approximate the true dynamical system that generated the data. We demonstrate that the proposed approach can reconstruct the underlying dynamical system that is subject to stochastic perturbations.
This paper is organized as follows. Section 2 introduces the inverse stochastic Frobenius–Perron problem. A matrix approximation of the transfer operator associated with the stochastically perturbed transformation is derived in Section 3. Section 4 introduces a methodology for reconstructing piecewise-linear semi-Markov transformations subject to stochastic perturbations, from sequences of density functions. The approach is extended in Section 5 to general nonlinear maps. Section 6 presents two numerical simulation examples that demonstrate the significant improvement in reconstruction accuracy achieved by the proposed algorithm that incorporates a priori knowledge of the noise in the reconstruction of the unknown transformation. Conclusions are given in Section 7.
2. Description of the inverse problem
Let $\mathcal{B}$ be the Borel σ-algebra of subsets of R, and let μ denote the normalized Lebesgue measure on R. Let S: R → R be a measurable, non-singular transformation, that is, $\mu(S^{-1}(A)) = 0$ for any $A \in \mathcal{B}$ with $\mu(A) = 0$. If $x_n$ is a random variable on R having the probability density function $f_n$, n = 0, 1, …, such that
| $\Pr\{x_n \in A\} = \int_A f_n(s)\,ds, \quad \forall A \in \mathcal{B}$ | (1) |
It follows that $x_{n+1}$ given by
| $x_{n+1} = S(x_n)$ | (2) |
is distributed according to the probability density function $f_{n+1} = P_S f_n$, where $P_S$: L1(R) → L1(R), defined by
| $\int_A P_S f_n(s)\,ds = \int_{S^{-1}(A)} f_n(s)\,ds, \quad \forall A \in \mathcal{B}$ | (3) |
is the Frobenius–Perron operator [1] associated with the unperturbed transformation S.
If A = [a, x], $P_S$ can be written explicitly as
| $P_S f(x) = \dfrac{d}{dx} \displaystyle\int_{S^{-1}([a,x])} f(s)\,ds$ | (4) |
Let $\Re = \{R_i\}_{i=1}^{N}$ be a partition of R into intervals, with $R_i \cap R_j = \emptyset$ if i ≠ j. Assuming that S is piecewise monotonic and expanding [18],
| $P_S f(x) = \displaystyle\sum_{i=1}^{N} f\big(S_i^{-1}(x)\big)\,\Big|\dfrac{dS_i^{-1}(x)}{dx}\Big|\,\mathbf{1}_{S(R_i)}(x)$ | (5) |
where $S_i$ is the monotonic restriction of S to the interval $R_i$.
A more complicated situation arises when the dynamical system is subjected to an additive random perturbation [1] such that
| $x_{n+1} = S(x_n) + \xi_n$ | (6) |
where S: R → R is a given transformation and $\{\xi_n\}$ are independent random variables. The 'stochastic' Frobenius–Perron operator corresponding to the perturbed dynamical system is defined by [1], [39]
| $\tilde{P} f(x) = \displaystyle\int_{\mathbf{R}} \tau(x, y)\, f(y)\,dy$ | (7) |
where $\tau(x, y)$ is a stochastic kernel, satisfying $\tau(x, y) \ge 0$ and $\int_{\mathbf{R}} \tau(x, y)\,dx = 1$.
Here, we consider the deterministic system with constantly applied stochastic perturbations
| $x_{n+1} = S(x_n) + \xi_n$ | (8) |
where S: [0, b] → [0, b] is a piecewise monotonic and expanding transformation, and $\xi_n$ are independent random variables with a probability density function g that has compact support on $[-\varepsilon, \varepsilon]$, i.e. $\xi_n$ is bounded in $[-\varepsilon, \varepsilon]$, ɛ ≤ b.
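For concreteness, the perturbed system (8) can be simulated as follows. This is a hedged sketch: the truncated-Gaussian sampler, the boundary convention (reflect at 0, cap at b), and the example map are our own illustrative choices, not specifics from the paper.

```python
import numpy as np

def truncated_gaussian(n, eps, sigma, rng):
    """Draw n samples of zero-mean Gaussian noise conditioned on
    [-eps, eps] by rejection sampling (fine unless eps << sigma)."""
    out = np.empty(0)
    while out.size < n:
        z = rng.normal(0.0, sigma, 2 * n)
        out = np.concatenate([out, z[np.abs(z) <= eps]])
    return out[:n]

def step(S, x, eps, sigma, rng, b=1.0):
    """One iteration of x_{n+1} = S(x_n) + xi_n; negative states are
    reflected at 0 and states above b are capped (one possible convention;
    the paper's boundary handling may differ)."""
    y = S(x) + truncated_gaussian(x.size, eps, sigma, rng)
    return np.clip(np.abs(y), 0.0, b)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 5000)
for _ in range(3):
    x = step(lambda u: (2.0 * u) % 1.0, x, eps=0.05, sigma=0.02, rng=rng)
```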
For an arbitrary Borel set B⊂[0, b], the probability of falling into B is given by
| (9) |
for , where .
Let . It follows that
| (10) |
and
| (11) |
then, (9) is rewritten as
| (12) |
where $f_{n+1}$ is the probability density function of $x_{n+1}$. The stochastic Frobenius–Perron operator $\tilde{P}_S$, associated with the perturbed transformation (8), is then defined by
| $\tilde{P}_S f_n(x) = \displaystyle\int_0^{b} g\big(x - S(y)\big)\, f_n(y)\,dy$ | (13) |
It is easy to see that for any ξn there are N1 ≤ N disjoint intervals such that for , , S(xn) is monotonic and or (i.e. maps outside the interval [0, b]), and N2 ≤ N disjoint intervals such that for , , S(xn) is monotonic and .
We have , ∀i ≤ N1, j ≤ N2, and . For each integers i ≤ N1 and j ≤ N2, there exist unique integers α(i) ≠ β(j) ≤ N such that and . From (13) it follows that
| (14) |
Substituting gives
| (15) |
We have
| (16) |
where . Since , the following equality holds
| (17) |
It follows that
| (18) |
So that
| (19) |
Eq. (19) provides the link between the operator corresponding to the randomly perturbed dynamical system (8) and the Frobenius–Perron operator PS associated with the noise-free system. The inverse problem is formulated as follows.
Let and be K sets of initial and final state observations, respectively, such that
| (20) |
It is assumed that the measurement system does not allow associating an initial state with its image under the transformation. The inverse problem considered here is to determine the transformation S in (8) given the noise density function g and the probability density functions associated with the initial states and the final states, that is, densities related by the transfer operator associated with the perturbed transformation (8).
3. A matrix representation of the transfer operator
Let S be a piecewise linear and expanding semi-Markov transformation over the N-interval partition $\Re = \{R_i\}_{i=1}^{N}$.
Definition 1
A transformation S: R → R is said to be semi-Markov with respect to the partition ℜ (or ℜ-semi-Markov) if there exist disjoint intervals $J_{i,k}$ such that $R_i = \bigcup_{k=1}^{p(i)} J_{i,k}$, the restriction of S to $J_{i,k}$, denoted $\bar{S}_{i,k}$, is monotonic, and $S(J_{i,k}) \in \Re$ [25].
The restriction $\bar{S}_i$ is a homeomorphism from $R_i$ to a union of intervals of ℜ,
| $S(R_i) = \bigcup_{k=1}^{p(i)} R_{r(i,k)}$ | (21) |
where $R_{r(i,k)} \in \Re$, $1 \le r(i,k) \le N$, and p(i) denotes the number of disjoint subintervals corresponding to $R_i$.
Let $f_n$ be a piecewise constant function over the partition ℜ, $f_n(x) = \sum_{i=1}^{N} w_{n,i}\,\mathbf{1}_{R_i}(x)$. Its image under the transformation, $P_S f_n$, is also a piecewise constant function over ℜ [18], $P_S f_n(x) = \sum_{j=1}^{N} w_{n+1,j}\,\mathbf{1}_{R_j}(x)$, and the Frobenius–Perron operator can be represented by a finite-dimensional matrix
| $w_{n+1}^{\mathsf{T}} = w_n^{\mathsf{T}} M$ | (22) |
where $M = (m_{i,j})$ is the Frobenius–Perron matrix induced by S, with entries given by
| $m_{i,j} = \dfrac{\mu(J_{i,j})}{\mu(R_j)}$ if $R_j \subseteq S(R_i)$, and $m_{i,j} = 0$ otherwise | (23) |
where $J_{i,j} \subset R_i$ denotes the subinterval mapped onto $R_j$.
From (22) it follows that
| $w_{n+1,j} = \displaystyle\sum_{i=1}^{N} w_{n,i}\, m_{i,j}$ | (24) |
for j = 1, …, N.
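As a concrete instance of (22)-(24), consider the tent map on four equal cells; under the common convention for uniform partitions (an assumption on the equation bodies lost in this copy, namely m_ij = μ(R_i ∩ S⁻¹(R_j))/Δ), its Frobenius-Perron matrix is exact, and iterating a piecewise-constant density with w_{n+1}ᵀ = w_nᵀM drives it to the uniform invariant density:

```python
import numpy as np

# Exact FP matrix of the tent map S(x) = 1 - |1 - 2x| on a uniform
# 4-cell partition of [0, 1]: each cell of width 1/4 is mapped linearly
# onto two cells, splitting its measure equally between them.
M = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0]])

w = np.array([4.0, 0.0, 0.0, 0.0])   # density 4 on [0, 1/4], 0 elsewhere
for _ in range(10):
    w = w @ M                        # (24): w_{n+1}^T = w_n^T M
# w converges to the uniform density [1, 1, 1, 1]
```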
Let $\Im = \{I_k\}_{k=1}^{N}$ be a regular partition of R into N equal-sized intervals. Integrating (19) over an interval $I_k$ gives
| (25) |
Consider the following approximation
| (26) |
where is the orthogonal projection of in L1 on the finite-dimensional space spanned by, qN(x) is the orthogonal complement in L1 and
| (27) |
where μ denotes the Lebesgue measure. Clearly, $q_N(x) \to 0$ as $N \to \infty$.
It follows that
| (28) |
Let D be the N × N matrix with entries given by
| $d_{j,k} = \dfrac{1}{\mu(I_k)} \displaystyle\int_{I_k} \int_{R_j} g(x - y)\,dy\,dx$ | (29) |
Substituting (24), (29) in (28) leads to
| $v^{\mathsf{T}} = w^{\mathsf{T}} M D$ | (30) |
where w, v are the coefficient vectors associated with the piecewise constant density functions $f_n$ and $\tilde{P}_S f_n$, respectively, and Q = MD is the matrix approximation of the operator $\tilde{P}_S$. Eq. (30) maps a piecewise-constant density function over the N-dimensional partition ℜ, which in general is non-uniform, to a piecewise-constant density function over the uniform N-dimensional partition ℑ. Eq. (30) is the basis for the new algorithm to reconstruct the transformation S given pairs of successive density functions generated by the stochastically perturbed transformation.
In practice, we can choose a finer N1-interval partition, N1 ≫ N. For example, we can construct it as a refinement of the partition ℜ such that the cut points of ℜ are a subset of the cut points associated with the finer partition. This leads to an alternative formulation of (30) in which both the initial and final densities are defined over the same partition. Given an initial piecewise constant density function f over the partition, the matrix approximation can then be used to compute a sequence of successive iterations of the corresponding finite-dimensional approximation of the stochastic Frobenius–Perron operator.
4. Solving the inverse stochastic Frobenius–Perron problem for piecewise linear semi-Markov transformations
This section presents an approach to solving the inverse stochastic Frobenius–Perron problem, under the assumption that S: [0, b] → [0, b] is a piecewise linear semi-Markov transformation over a partition ℜ,
| (31) |
which is assumed to be known. In what follows, we assume that ℜ is the uniform partition of [0, b] of dimension N.
The main steps of the approach are summarized below:
-
Step 1:
Given the observations, t = 0, …, T, estimate the coordinate vectors $w_t$ and $v_t$, t = 0, …, T−1, corresponding to the piecewise constant density functions over ℜ and over ℑ, respectively. Compute the matrix D defined in (29).
-
Step 2:
Estimate M, the matrix representation of the Frobenius–Perron operator PS associated with the deterministic transformation S.
-
Step 3:
Construct the piecewise linear semi-Markov transformation over ℜ.
These steps are described below in more detail.
4.1. Step 1: estimate w and v and compute D
Let $f_0(x)$ be an initial density function that is piecewise constant on the partition ℜ,
| $f_0(x) = \displaystyle\sum_{i=1}^{N} w_{0,i}\,\mathbf{1}_{R_i}(x)$ | (32) |
where the coefficients satisfy $\sum_{i=1}^{N} w_{0,i}\,\mu(R_i) = 1$. Let $X_0$ be the set of initial states obtained by sampling $f_0(x)$. The states at a given sampling time t > 0 are assumed to be generated by applying t times the process defined in (8), where the perturbations are generated by sampling g(ξ).
The density function $f_t(x)$ on ℑ associated with the states $X_t$ is given by
| $f_t(x) = \displaystyle\sum_{k=1}^{N} v_{t,k}\,\mathbf{1}_{I_k}(x)$ | (33) |
where the coefficients satisfy $\sum_{k=1}^{N} v_{t,k}\,\mu(I_k) = 1$. In practice, the densities $f_t(x)$ are estimated directly from observations.
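Estimating the coefficient vectors from state observations amounts to a normalized histogram. A minimal sketch (the function name and the uniform test data are our own):

```python
import numpy as np

def pc_density(samples, edges):
    """Coefficients of the piecewise-constant density over the partition
    defined by `edges`: histogram counts normalized so that the resulting
    step function integrates to one."""
    counts, _ = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    return counts / (counts.sum() * widths)

rng = np.random.default_rng(0)
edges = np.linspace(0.0, 1.0, 5)
w = pc_density(rng.uniform(0.0, 1.0, 10_000), edges)
# for uniform samples, all four coefficients are close to 1
```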
We define the following matrices
| $W = [w_0, w_1, \ldots, w_{T-1}]^{\mathsf{T}}$ | (34) |
and
| $V = [v_1, v_2, \ldots, v_T]^{\mathsf{T}}$ | (35) |
The matrix D is obtained by numerical integration of (29).
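The numerical integration of (29) can be sketched with a midpoint rule. The exact entry formula is not printed in this copy, so the normalized double integral of the noise density used below is our reading, chosen to be consistent with the row-sum argument in the proof of Theorem 1 (interior rows sum to μ(R_j)/Δ = 1, boundary rows lose mass outside [0, b]):

```python
import numpy as np

def noise_matrix(g, edges_R, edges_I, n_quad=200):
    """Midpoint-rule approximation of the entries
    d_jk = (1 / mu(I_k)) * int_{I_k} int_{R_j} g(x - y) dy dx
    (our reading of (29)); g is the noise density."""
    NR, NI = len(edges_R) - 1, len(edges_I) - 1
    D = np.zeros((NR, NI))
    for j in range(NR):
        hy = (edges_R[j + 1] - edges_R[j]) / n_quad
        y = edges_R[j] + hy * (np.arange(n_quad) + 0.5)
        for k in range(NI):
            hx = (edges_I[k + 1] - edges_I[k]) / n_quad
            x = edges_I[k] + hx * (np.arange(n_quad) + 0.5)
            val = g(x[None, :] - y[:, None]).sum() * hx * hy
            D[j, k] = val / (edges_I[k + 1] - edges_I[k])
    return D

# Uniform noise on [-0.1, 0.1]; with matching uniform partitions of [0, 1]
# the interior rows of D sum to (approximately) one.
g = lambda u: np.where(np.abs(u) <= 0.1, 5.0, 0.0)
edges = np.linspace(0.0, 1.0, 5)
D = noise_matrix(g, edges, edges)
```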
4.2. Step 2: estimate the Frobenius–Perron matrix M
This is carried out in two stages. Firstly, the coordinate vectors corresponding to the piecewise constant densities over the partition ℜ are obtained by solving the following constrained optimization problem
| $\hat{Y} = \arg\min_{Y} \big\| V - Y D \big\|_F^2$ | (36) |
subject to
| $y_{t,j} \ge 0, \quad \displaystyle\sum_{j=1}^{N} y_{t,j}\,\mu(R_j) = 1$ | (37) |
where
| $Y = [y_1, y_2, \ldots, y_T]^{\mathsf{T}}$, with $y_t$ the coefficient vector of $P_S f_{t-1}$ over ℜ | (38) |
and || · ||F denotes the Frobenius norm.
In the second stage, the matrix representation of the Frobenius–Perron operator associated with the unperturbed transformation S is obtained as a solution to the following constrained optimization problem
| $\hat{M} = \arg\min_{M} \big\| \hat{Y} - W M \big\|_F^2$ | (39) |
subject to
| $m_{i,j} \ge 0, \quad \displaystyle\sum_{j=1}^{N} m_{i,j} = 1$ | (40) |
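Stage two is a least-squares problem over row-stochastic matrices. The paper solves its optimization problems with the Matlab Optimization Toolbox; as an illustration only (our own construction, not the paper's code), a self-contained projected-gradient solver recovers a known row-stochastic matrix from synthetic data:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of vector v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(v.size) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fit_fp_matrix(W, Y, n_iter=5000):
    """Projected-gradient solver for min_M ||Y - W M||_F^2 with each row
    of M constrained to the probability simplex (non-negative entries
    summing to one) -- a sketch of the second stage."""
    N = W.shape[1]
    M = np.full((N, N), 1.0 / N)
    step = 1.0 / np.linalg.norm(W.T @ W, 2)   # 1/L, L = largest eigenvalue
    for _ in range(n_iter):
        G = W.T @ (W @ M - Y)                 # gradient of the 0.5*LS objective
        M = np.apply_along_axis(project_simplex, 1, M - step * G)
    return M

# Sanity check on synthetic data: recover a known row-stochastic matrix.
rng = np.random.default_rng(0)
M_true = np.apply_along_axis(project_simplex, 1, rng.random((4, 4)))
W = rng.random((12, 4))
Y = W @ M_true
M_hat = fit_fp_matrix(W, Y)
```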
In the following it is shown that the matrices involved are non-singular, which ensures uniqueness of the solutions.
Proposition 1
For a piecewise linear ℜ-semi-Markov transformation S subjected to additive perturbation, where ℜ is an N-dimensional regular partition of [0, b], the matrix D is non-singular.
Proof
For , , Rj ∈ ℜ, , for ,
(41)
The matrix D satisfies that
| (42) |
and , for .
The matrix D is decomposed into two triangular matrices as follows
| (43) |
where
| (44) |
| (45) |
Then, $\det D = \det D_l \cdot \det D_u$. According to the Minkowski determinant theorem, $\det D_l$ and $\det D_u$ are both positive, so $\det D > 0$. Hence, D and Φ1 are non-singular, and this completes the proof.
Theorem 1
Let S: [0, b] → [0, b] be a piecewise linear ℜ-semi-Markov transformation subjected to additive noise. Then the matrix Q representing the transfer operator associated with the noisy dynamical system has 1 as its eigenvalue of maximum modulus, and this is its unique eigenvalue of modulus 1.
Proof
For , the matrix is square. Let where
(46)
The sum of the ith row of Q is given by
| (47) |
The sum of the kth column of D is given by
| (48) |
It follows that
| (49) |
For a regular partition ℜ, $\mu(R_j) = \Delta$ for all j, and the row sums of Q equal one. Thus, Q is row stochastic. Because Q is also a positive matrix, Q has 1 as the eigenvalue of maximum modulus, and the algebraic and geometric multiplicities of this eigenvalue are 1. This concludes the proof.
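The conclusion of Theorem 1 is easy to check numerically for any strictly positive row-stochastic matrix (a stand-in for Q = MD here, since Q itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.random((6, 6)) + 0.05           # strictly positive entries
Q /= Q.sum(axis=1, keepdims=True)       # normalize rows: row stochastic
lam = np.linalg.eigvals(Q)
# Perron-Frobenius: the spectral radius is 1 and is attained by a single
# simple eigenvalue; all other eigenvalues lie strictly inside the unit circle.
```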
Remark
From Theorem 1 it follows that the fixed-point equation $f^{\mathsf{T}} = f^{\mathsf{T}} Q$ has a unique, non-trivial solution, where Q is the N-dimensional approximation of the stochastic Frobenius–Perron operator associated with the noise-perturbed piecewise linear ℜ-semi-Markov transformation (8).
Proposition 2
A noise-perturbed piecewise linear ℜ-semi-Markov transformation S can be uniquely reconstructed given N linearly independent piecewise constant density functions defined over a regular partition ℜ, corresponding to the initial states, and the piecewise constant densities corresponding to the final states.
Proof
Let
(50)
be a set of initial piecewise constant densities over a partition ℜ and
| (51) |
be the piecewise constant densities corresponding to the final states.
From (30), we have
| $V = W_0 M D$ | (52) |
where $W_0$ and V collect the initial and final coefficient vectors as rows.
Because the initial densities are linearly independent, the matrix $W_0$ is non-singular. Moreover, from Proposition 1, D is non-singular and the Frobenius–Perron matrix is given by
| $M = W_0^{-1} V D^{-1}$ | (53) |
4.3. Step 3: construct the piecewise linear semi-Markov transformation over ℜ
It is assumed that each branch of the map, $\bar{S}_{i,j}$, is monotonically increasing. The derivative of $\bar{S}_{i,j}$ is $1/m_{i,j}$, and the length of the subinterval $J_{i,j}$ is given by
| $\mu(J_{i,j}) = m_{i,j}\,\mu(R_j)$ | (54) |
which allows computing the subintervals iteratively for each interval $R_i$, starting from its left endpoint. The map is then given by
| $\bar{S}_{i,j}(x) = \inf R_j + \dfrac{x - \inf J_{i,j}}{m_{i,j}}$ | (55) |
for $x \in J_{i,j}$, where j is the index of the image $R_j$ of $J_{i,j}$ and $m_{i,j} \ne 0$.
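Step 3 can be sketched as follows, under our reading of (54)-(55): within each cell of a uniform partition, consecutive subintervals of length m_ij·Δ are mapped linearly (increasing) onto the cells R_j with m_ij > 0. For the doubling-map matrix, the construction returns exactly S(x) = 2x mod 1:

```python
import numpy as np

def build_map(M, b=1.0):
    """Construct the increasing piecewise-linear semi-Markov map determined
    by a row-stochastic FP matrix M on the uniform N-cell partition of
    [0, b]: within cell R_i, a subinterval of length m_ij * Delta is mapped
    linearly onto cell R_j (our reading of (54)-(55))."""
    N = M.shape[0]
    delta = b / N
    pieces = []   # (left, right, slope, intercept) of each linear branch
    for i in range(N):
        left = i * delta
        for j in range(N):
            if M[i, j] <= 0:
                continue
            length = M[i, j] * delta          # (54): mu(J_ij) = m_ij * mu(R_j)
            slope = 1.0 / M[i, j]             # derivative is 1 / m_ij
            intercept = j * delta - slope * left
            pieces.append((left, left + length, slope, intercept))
            left += length
    return pieces

def S_hat(x, pieces):
    """Evaluate the constructed map at a scalar x."""
    for l, r, a, c in pieces:
        if l <= x <= r:
            return a * x + c
    return np.nan

# Doubling-map FP matrix on 2 cells: each cell maps onto both cells.
pieces = build_map(np.array([[0.5, 0.5], [0.5, 0.5]]))
```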
5. Solving the inverse stochastic Frobenius–Perron problem for general nonlinear transformations
This section considers more general nonlinear maps that are not piecewise linear semi-Markov. Starting with Lasota and Yorke [40], who established the existence of invariant measures for piecewise monotonic transformations, and Li [21], who proposed a numerical procedure to calculate the invariant density function corresponding to the invariant measure, the problem of approximating the invariant density of a transformation, which is closely linked with the problem of approximating the Frobenius–Perron operator and the transformation itself, has been studied by a number of authors [41], [42], [43], [44], [45]. In [43] Góra and Boyarsky approximated a nonsingular transformation S, which may have infinitely many pieces of monotonicity, by a sequence of piecewise linear functions Sn and showed that the invariant density of the map S can be approximated arbitrarily well by densities that are invariant under the finite approximations Sn of S. In general, for continuous nonsingular transformations we have the following result [22], [46].
Theorem 2
Let S: R → R be a continuous transformation and let {Sn}n ≥ 1 be a sequence of transformations converging to S in the C0 topology. Let μn be a probabilistic measure invariant under Sn, n = 1, 2, …. If μ is a weak-* limit point of the sequence {μn}n ≥ 1, then μ is S-invariant.
This shows that the invariant densities of successive piecewise linear approximations Sn of S converge in a weak sense to the invariant density of the original transformation as n→∞. This means that in practice we can estimate transition matrices to approximate arbitrarily well the Frobenius–Perron operator associated with S. A generalization of this result for dynamical systems subjected to additive noise is presented in [39].
Here, the goal is to estimate from sequences of density functions a piecewise linear semi-Markov approximation, defined over a uniform Markov partition of [0, b], of the unknown nonlinear map S: [0, b] → [0, b] subjected to stochastic perturbation. It is assumed that the nonlinear map S has an invariant density and that it can be approximated arbitrarily well by piecewise linear functions. Unlike the control problem studied in [33], here the challenge is to estimate the unknown nonlinear transformation that generated a sequence of density functions rather than one of the many possible perturbations of the original map that yield a desired invariant density.
The proposed identification scheme for general nonlinear maps is summarized as follows:
Step 1: Given the observations, estimate the coordinate vectors $w_t$ and $v_t$ of the piecewise constant densities $f_t(x)$ and $\tilde{P}_S f_t(x)$ defined over a regular partition of size N. Compute D using the given probability density function of the perturbation.
Step 2: Estimate the matrix Y corresponding to $P_S f_t(x)$ by solving the optimization problem (36). $P_S f_t(x)$ can be used to identify the Frobenius–Perron matrix associated with the unknown map;
Step 3: Identify a trial Frobenius–Perron matrix by solving the constrained optimization problem (39). Since the map is continuous, the trial matrix is then used to determine the indices of the consecutive positive entries of each row, and in this way the matrix is refined. The set of column indices corresponding to consecutive positive entries of the ith row satisfies
| (56) |
Therefore, the piecewise linear ℜ-semi-Markov map associated with the refined Frobenius–Perron matrix M should satisfy that , where is the image of the interval Ri,, and is the column index of a positive entry on the ith row of M satisfying
| (57) |
for .
Step 4: Solve the following optimization problem to determine the Frobenius–Perron matrix M
| (58) |
subject to
| (59) |
and
| (60) |
for .
Step 5. Determine the monotonicity of each branch. Let $S(R_i)$ be the image of the interval $R_i$ under the semi-Markov transformation associated with the identified Frobenius–Perron matrix M. Denote $a_{r(i,1)}$ as the starting point of $R_{r(i,1)}$, mapped from the first subinterval of $R_i$, and $a_{r(i,p(i))}$ as the end point of $R_{r(i,p(i))}$, the image of the last subinterval. Let $b_i$ be the midpoint of the image $S(R_i)$. The sign γ(i) of the branch derivative is given by
| (61) |
for and .
Step 6. Construct the semi-Markov map based on the Frobenius–Perron matrix M and the monotonicity of each branch. Given that the derivative of $\bar{S}_{i,j}$ is $1/m_{i,j}$ (up to sign), the end point of the subinterval $J_{i,j}$ within $R_i$ is given by
| (62) |
where and . The piecewise linear semi-Markov transformation on each subinterval is given by
| (63) |
for mi, j ≠ 0. A smooth nonlinear map is obtained by fitting a polynomial smoothing spline. Fig. 1 shows the construction of monotonically increasing and decreasing piecewise linear semi-Markov transformations and the resulting smooth continuous map.
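The final smoothing step can be done with any spline smoother. The sketch below uses SciPy's UnivariateSpline on stand-in data; note that its smoothing argument `s` is an absolute residual budget, not MATLAB's csaps-style parameter in [0, 1], so the value 0.999 quoted in Example B does not transfer directly.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Jagged stand-in for an identified piecewise-linear map (not paper data):
# a tent profile plus a small high-frequency wiggle.
x = np.linspace(0.0, 1.0, 99)
y = 1.0 - np.abs(1.0 - 2.0 * x) + 0.01 * np.sin(37.0 * x)

# Cubic smoothing spline; s controls how much residual is tolerated.
spline = UnivariateSpline(x, y, k=3, s=0.01)
y_smooth = spline(x)
```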
Fig. 1.
Schematic diagram of the construction of piecewise linear semi-Markov transformations for the monotonically increasing (a) and decreasing (b) cases. The indices of the positive entries of the Frobenius–Perron matrix are determined from a trial matrix. A grayed '0' in the map grid indicates that the corresponding entry of the refined Frobenius–Perron matrix is zero; the other entries are positive. The dashed lines show the constructed nonlinear map obtained by smoothing the identified piecewise linear semi-Markov map.
6. Numerical simulation studies
The proposed algorithms are demonstrated using simulated data generated by two chaotic maps.
6.1. Example A
Consider the noise-perturbed dynamical system
| $x_{n+1} = S(x_n) + \xi_n$ | (64) |
where {ξn} is white noise that follows a Gaussian distribution truncated to the range [−ɛ, ɛ]. The piecewise linear and expanding transformation S: [0, 1] → [0, 1] is defined by
| (65) |
over the partition {[0, 0.3], (0.3, 0.4], (0.4, 0.8], (0.8, 1]}.
The graph of S is shown in Fig. 2.
Fig. 2.
Example A: Original piecewise linear transformation S.
A set of initial densities over the partition ℜ is shown in Fig. 3. These are used to generate the set of initial states. The corresponding final states, obtained by applying the map (64), were used to estimate piecewise constant densities over the uniform partition {[0, 0.25], (0.25, 0.5], (0.5, 0.75], (0.75, 1]} of [0, 1].
Fig. 3.
Example A: The initial density functions and the corresponding final density functions.
For this partition, the matrix D calculated using (29)
| (66) |
is non-singular.
The two-stage approach detailed in Section 4 was used to estimate the matrix representation of the Frobenius–Perron operator associated with the deterministic transformation
| (67) |
Specifically, the constrained optimization problems (36), (37) and (39), (40) were solved using the Matlab Optimization Toolbox.
The reconstructed piecewise linear map, together with the estimated coefficients of the identified piecewise linear semi-Markov transformation, is shown in Fig. 4.
Fig. 4.
Example A: The identified transformation of the underlying noisy system.
The performance of the reconstruction algorithm is evaluated by computing the relative error
| $e_k = \dfrac{|S(x_k) - \hat{S}(x_k)|}{|S(x_k)|}$ | (68) |
for the evaluation points $x_k$, which is shown in Fig. 5. The performance of the new reconstruction algorithm for different levels of noise was compared with that of a previous algorithm [38] that does not incorporate knowledge of the noise density. Table 1 shows for comparison the mean absolute percentage error (MAPE)
| $\mathrm{MAPE} = \dfrac{100}{K}\displaystyle\sum_{k=1}^{K} \dfrac{|S(x_k) - \hat{S}(x_k)|}{|S(x_k)|}$ | (69) |
for the two reconstruction approaches, where K is the number of evaluation points. The results clearly demonstrate the advantages of the new algorithm and, in particular, its robustness even in the presence of very significant noise levels.
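For reference, the MAPE criterion (whose printed formula is lost in this copy; the percentage form below is our reading) is a one-liner:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (our reading of (69))."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

# Illustrative check: a reconstruction with a uniform 1% bias gives MAPE = 1.
xs = np.linspace(0.01, 0.99, 99)
y_true = xs + 0.1                 # arbitrary strictly positive "true map" values
err = mape(y_true, 1.01 * y_true)
```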
Fig. 5.
Example A: Relative error between the original map S and the identified map evaluated for 99 uniformly spaced points.
Table 1.
Example A: Performance comparison between the new algorithm (A1) and the algorithm (A2) in [38] for different levels of noise.
| ɛ | 0.02 | 0.04 | 0.10 | 0.15 | 0.20 | 0.40 | 0.50 |
|---|---|---|---|---|---|---|---|
|   | 0.0335 | 0.1621 | 0.9058 | 2.3638 | 4.0535 | 16.2140 | 22.6460 |
| MAPE(A1) | 1.2171 | 1.4106 | 1.1222 | 0.3581 | 2.1578 | 2.2541 | 3.2035 |
| MAPE(A2) | 2.5362 | 3.1203 | 10.6281 | 34.2314 | 42.5629 | 56.2310 | 51.6851 |
6.2. Example B
This example demonstrates the proposed algorithm for reconstructing a nonlinear continuous map. Specifically, we consider the noise-perturbed logistic map defined by
| $x_{n+1} = S(x_n) + \xi_n$ | (70) |
where ξn is white noise following a Gaussian distribution truncated to the interval [−ɛ, ɛ]. The aim is to infer a piecewise linear semi-Markov map, defined over a uniform partition ℜ, which approximates the logistic map S. The 100 sets of initial states were generated by sampling the initial densities shown in Fig. 6 (see the Appendix for more details). The corresponding final densities over the uniform partition ℜ were estimated from the images of the initial states under the noise-perturbed transformation.
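The data-generation step of this example can be sketched as follows. The logistic-map parameter (we take the fully chaotic S(x) = 4x(1 − x)), the noise standard deviation, and the beta-distributed initial density are assumptions for illustration; the paper's initial densities are listed in the Appendix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, sigma = 32, 0.02, 0.01           # sigma is our assumption
edges = np.linspace(0.0, 1.0, N + 1)

S = lambda x: 4.0 * x * (1.0 - x)        # assumed fully chaotic logistic map

x0 = rng.beta(2.0, 5.0, 20_000)          # one illustrative initial density
xi = np.clip(rng.normal(0.0, sigma, x0.size), -eps, eps)  # crude truncation
x1 = np.clip(S(x0) + xi, 0.0, 1.0)       # one noisy iteration, kept in [0, 1]

w0, _ = np.histogram(x0, bins=edges, density=True)   # initial coefficients
v1, _ = np.histogram(x1, bins=edges, density=True)   # final coefficients
```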
Fig. 6.
Example B: Examples of initial densities (gray lines) and the corresponding final densities after one iteration (black lines).
The approximate piecewise linear semi-Markov map, identified for ε = 0.02 using the algorithm in Section 5, is shown in Fig. 7.
Fig. 7.
Example B: Reconstructed piecewise linear semi-Markov map Ŝ over the uniform partition ℜ.
The smoothed map obtained with the smoothing parameter 0.999 is shown in Fig. 8, and the relative error calculated at the uniformly spaced points is shown in Fig. 9.
Fig. 8.
Example B: Identified smooth map resulting from the piecewise linear semi-Markov map in Fig. 7 with smoothing parameter 0.999.
Fig. 9.
Example B: Relative error between the original map S and the identified smooth map in Fig. 8 evaluated for 99 uniformly spaced points.
The root mean square error (RMSE) between the density functions predicted using the original and the identified maps, calculated by
| $\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\displaystyle\sum_{k=1}^{N}\big(\hat{w}_{n,k} - w_{n,k}\big)^2}$ | (71) |
where $\hat{w}_{n,k}$ denotes the kth coefficient of the predicted density function, is given in Table 2. As can be seen in Fig. 9 and Table 2, the approximation error is very low. As the number of intervals increases, the reconstructed map becomes closer to the original one, and the stabilized distribution converges to the invariant density of the system, as proven in [22].
Table 2.
Example B: RMSE between the predicted density functions fn using the identified map and those generated by the original noisy system.
| n | 1 | 2 | 3 | 5 | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|---|---|---|---|
| RMSE | 0.2887 | 0.2515 | 0.1915 | 0.2477 | 0.2133 | 0.2311 | 0.1608 | 0.2106 | 0.2120 |
As in the previous example, the performance of the new reconstruction method was compared with that of a previous method [38] for different levels of noise. The results are summarized in Table 3. As can be seen, the new algorithm performs significantly and consistently better. This is also illustrated in Fig. 10, in which the maps reconstructed using the two algorithms for ɛ = 0.15 and ɛ = 0.50 are shown side by side. It is worth noting, however, that one of the advantages of the original algorithm in [38] is that it includes an additional step to optimize the partition.
Table 3.
Example B: Performance comparison between the new algorithm (A1) and the algorithm (A2) in [38] for different levels of noise.
| ɛ | 0.02 | 0.04 | 0.10 | 0.15 | 0.20 | 0.40 | 0.50 |
|---|---|---|---|---|---|---|---|
|  | 0.0260 | 0.0978 | 0.5431 | 1.3692 | 3.5617 | 12.6023 | 17.6201 |
| MAPE(A1) | 0.9282 | 0.9809 | 4.5791 | 3.1054 | 2.7850 | 4.6319 | 9.7981 |
| MAPE(A2) | 2.6120 | 2.9862 | 7.9236 | 78.3210 | 76.2536 | 58.1245 | 64.2101 |
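The mean absolute percentage error (MAPE) reported in Table 3 can be computed from the two maps evaluated at uniformly spaced points. The sketch below uses the standard MAPE definition, which we assume matches the one used in the table; the two maps are hypothetical stand-ins:

```python
import numpy as np

def mape(s_true, s_est):
    """Mean absolute percentage error between two map evaluations (in %)."""
    s_true = np.asarray(s_true, dtype=float)
    s_est = np.asarray(s_est, dtype=float)
    return 100.0 * np.mean(np.abs((s_true - s_est) / s_true))

# Evaluate a hypothetical original and reconstructed map on 99 uniform points,
# avoiding the endpoints where the stand-in map vanishes
x = np.linspace(0.01, 0.99, 99)
s_original = 4.0 * x * (1.0 - x)                                  # stand-in for S
s_identified = s_original * (1.0 + 0.02 * np.cos(2 * np.pi * x))  # ~2% deviation
err_pct = mape(s_original, s_identified)
```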
Fig. 10.
Example B: Reconstructed maps for noise level ɛ = 0.15 using (a) the new algorithm and (b) the algorithm in [38], and the reconstructed maps for noise level ɛ = 0.50 using (c) the new algorithm and (d) the algorithm in [38].
7. Conclusions
This paper introduced a new method for reconstructing/approximating an unknown one-dimensional chaotic map that is perturbed by additive noise, from sequences of density functions. The emphasis here is on recovering the true transformation that generated the data rather than one of the many possible transformations that share the same invariant density functions.
By incorporating knowledge of the noise distribution, the new estimation method achieves dramatically better accuracy (e.g. an over tenfold error reduction in some cases) for high levels of noise compared with a previous method that does not account for the noise density.
As highlighted in [38], it would be of interest to develop similar reconstruction approaches for higher-dimensional systems. The main challenge is to construct the transformation given the matrix-representation of the Frobenius–Perron operator [22].
Acknowledgments
X. N. gratefully acknowledges the support from the Department of Automatic Control and Systems Engineering at the University of Sheffield and China Scholarship Council. D. C. gratefully acknowledges the support from MRC (G0802627), BBSRC (BB/M025527/1) and the Human Frontier Science Program grant no. RGP0001/2012.
Appendix: Initial states for Example B in Section 6
The 100 sets of initial states used in the example were obtained by sampling density functions from the beta family,

$$f(x;a,b)=\frac{x^{a-1}(1-x)^{b-1}}{B(a,b)}, \qquad 0\le x\le 1,$$

where B( ·, ·) is the beta function.
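Sampling initial states from beta densities is straightforward; a sketch assuming hypothetical shape parameters (a, b), since the paper's 100 specific parameter pairs are not reproduced in this text:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_beta_initial_states(a, b, n_states):
    """Draw n_states initial states on [0, 1] from the beta(a, b) density
    f(x) = x**(a-1) * (1-x)**(b-1) / B(a, b)."""
    return rng.beta(a, b, size=n_states)

# Hypothetical shape parameters standing in for the paper's 100 (a, b) pairs
params = [(2.0, 5.0), (5.0, 2.0), (3.0, 3.0)]
initial_sets = [sample_beta_initial_states(a, b, 1000) for a, b in params]
```

Each set of sampled states can then be iterated under the noisy map to produce one of the observed density sequences.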
References
- 1. Lasota A, Mackey MC. Chaos, fractals, and noise: stochastic aspects of dynamics. 2nd ed. New York: Springer-Verlag; 1994.
- 2. Strogatz SH. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. Westview Press; 2014.
- 3. Skinner JE. Low-dimensional chaos in biological systems. Nat Biotechnol. 1994;12:596–600. doi:10.1038/nbt0694-596.
- 4. Swishchuk A, Islam S. Random dynamical systems in finance. Taylor & Francis; 2013.
- 5. Lozowski AG, Lysetskiy M, Zurada JM. Signal processing with temporal sequences in olfactory systems. IEEE Trans Neural Netw. 2004;15:1268–1275. doi:10.1109/TNN.2004.832730.
- 6. van Wyk MA, Ding J. Stochastic analysis of electrical circuits. In: Chen G, Ueta T, editors. Chaos in circuits and systems. World Scientific; 2002. pp. 215–236.
- 7. Huijberts H, Nijmeijer H, Willems R. System identification in communication with chaotic systems. IEEE Trans Circuits Syst I Fundam Theory Appl. 2000;47:800–808.
- 8. Lasota A, Rusek P. An application of ergodic theory to the determination of the efficiency of cogged drilling bits. Arch Górnictwa. 1974;19:281–295.
- 9. Simoyi RH, Wolf A, Swinney HL. One-dimensional dynamics in a multicomponent chemical reaction. Phys Rev Lett. 1982;49:245–248.
- 10. Guevara MR, Glass L. Phase locking, period doubling bifurcations and chaos in a mathematical model of a periodically driven oscillator: a theory for the entrainment of biological oscillators and the generation of cardiac dysrhythmias. J Math Biol. 1982;14:1–23. doi:10.1007/BF02154750.
- 11. Perry J, Smith RH, Woiwod IP, Morse DR. Chaos in real data: the analysis of non-linear dynamics from short ecological time series. Springer Science & Business Media; 2012.
- 12. Lai Y-C, Grebogi C, Kurths J. Modeling of deterministic chaotic systems. Phys Rev E. 1999;59:2907.
- 13. Skiadas CH, Skiadas C. Chaotic modelling and simulation: analysis of chaotic models, attractors and forms. CRC Press; 2008.
- 14. Maguire LP, Roche B, McGinnity TM, McDaid L. Predicting a chaotic time series using a fuzzy neural network. Inf Sci. 1998;112:125–136.
- 15. Han M, Xi J, Xu S, Yin F-L. Prediction of chaotic time series based on the recurrent predictor neural network. IEEE Trans Signal Process. 2004;52:3409–3416.
- 16. Príncipe J, Kuo J-M. Dynamic modelling of chaotic time series with neural networks. Adv Neural Inf Process Syst. 1995:311–318.
- 17. Lueptow R, Akonur A, Shinbrot T. PIV for granular flows. Exp Fluids. 2000;28:183–186.
- 18. Boyarsky A, Góra P. Laws of chaos: invariant measures and dynamical systems in one dimension. 1997.
- 19. Ershov SV, Malinetskii GG. The solution of the inverse problem for the Perron–Frobenius equation. USSR Comput Math Math Phys. 1988;28:136–141.
- 20. Ulam SM. A collection of mathematical problems: Interscience tracts in pure and applied mathematics. New York: Interscience; 1960.
- 21. Li T-Y. Finite approximation for the Frobenius–Perron operator: a solution to Ulam's conjecture. J Approx Theory. 1976;17:177–186.
- 22. Bollt EM. Controlling chaos and the inverse Frobenius–Perron problem: global stabilization of arbitrary invariant measures. Int J Bifurc Chaos. 2000;10:1033–1050.
- 23. Bollt EM, Santitissadeekorn N. Applied and computational measurable dynamics. SIAM; 2013.
- 24. Friedman N, Boyarsky A. Construction of ergodic transformations. Adv Math. 1982;45:213–254.
- 25. Góra P, Boyarsky A. A matrix solution to the inverse Perron–Frobenius problem. Proc Am Math Soc. 1993;118:409–414.
- 26. Diakonos FK, Schmelcher P. On the construction of one-dimensional iterative maps from the invariant density: the dynamical route to the beta distribution. Phys Lett A. 1996;211:199–203.
- 27. Pingel D, Schmelcher P, Diakonos FK. Theory and examples of the inverse Frobenius–Perron problem for complete chaotic maps. Chaos. 1999;9:357–366. doi:10.1063/1.166413.
- 28. Koga S. The inverse problem of Flobenius–Perron equations in 1D difference systems—1D map idealization. Progr Theor Phys. 1991;86:991–1002.
- 29. Huang W. Constructing chaotic transformations with closed functional forms. Discret Dyn Nat Soc. 2006;2006.
- 30. Huang W. On the complete chaotic maps that preserve prescribed absolutely continuous invariant densities. In: Proceedings of the 2009 International Conference on Topics on Chaotic Systems: Selected Papers from CHAOS; 2009. pp. 166–173.
- 31. Huang W. Constructing multi-branches complete chaotic maps that preserve specified invariant density. Discret Dyn Nat Soc. 2009;2009:14.
- 32. Baranovsky A, Daems D. Design of one-dimensional chaotic maps with prescribed statistical properties. Int J Bifurc Chaos. 1995;5:1585–1598.
- 33. Diakonos FK, Pingel D, Schmelcher P. A stochastic approach to the construction of one-dimensional chaotic maps with prescribed statistical properties. Phys Lett A. 1999;264:162–170.
- 34. Rogers A, Shorten R, Heffernan DM. Synthesizing chaotic maps with prescribed invariant densities. Phys Lett A. 2004;330:435–441.
- 35. Rogers A, Shorten R, Heffernan DM. A novel matrix approach for controlling the invariant densities of chaotic maps. Chaos Solitons Fractals. 2008;35:161–175.
- 36. Berman A, Shorten R, Leith D. Positive matrices associated with synchronised communication networks. Linear Algebra Appl. 2004;393:47–54.
- 37. Rogers A, Shorten R, Heffernan DM, Naughton D. Synthesis of piecewise-linear chaotic maps: invariant densities, autocorrelations, and switching. Int J Bifurc Chaos. 2008;18:2169–2189.
- 38. Nie X, Coca D. Reconstruction of one-dimensional chaotic maps from sequences of probability density functions. Nonlinear Dyn. 2015;80:1373–1390.
- 39. Bollt E, Góra P, Ostruszka A, Zyczkowski K. Basis Markov partitions and transition matrices for stochastic systems. SIAM J Appl Dyn Syst. 2008;7:341–360.
- 40. Lasota A, Yorke JA. On the existence of invariant measures for piecewise monotonic transformations. Trans Am Math Soc. 1973;186:481–488.
- 41. Ding J, Li TY. Markov finite approximation of Frobenius–Perron operator. Nonlinear Anal Theory Methods Appl. 1991;17:759–772.
- 42. Ding J, Li TY. Projection solutions of Frobenius–Perron operator equations. Int J Math Math Sci. 1993;16:465–484.
- 43. Góra P, Boyarsky A. Approximating the invariant densities of transformations with infinitely many pieces on the interval. Proc Am Math Soc. 1989;105:922–928.
- 44. Hunt FY, Miller WM. On the approximation of invariant measures. J Stat Phys. 1992;66:535–548.
- 45. Kohda T, Murao K. Piecewise polynomial Galerkin approximation to invariant densities of one-dimensional difference equations. Electron Commun Japan (Part I: Commun). 1982;65:1–11.
- 46. Góra P, Boyarsky A. An algorithm to control chaotic behavior in one-dimensional maps. Comput Math Appl. 1996;31:13–22.