. 2018 Oct 9;20(10):774. doi: 10.3390/e20100774

Information Transfer Among the Components in Multi-Dimensional Complex Dynamical Systems

Yimin Yin 1, Xiaojun Duan 1,*
PMCID: PMC7512336  PMID: 33265862

Abstract

In this paper, a rigorous formalism of information transfer within a multi-dimensional deterministic dynamic system is established for both continuous flows and discrete mappings. The underlying mechanism is derived from entropy change and transfer during the evolutions of multiple components. While this work is mainly focused on three-dimensional systems, the analysis of information transfer among state variables can be generalized to high-dimensional systems. Explicit formulas are given and verified in the classical Lorenz and Chua’s systems. The uncertainty of information transfer is quantified for all variables, with which a dynamic sensitivity analysis could be performed statistically as an additional benefit. The generalized formalisms can be applied to study dynamical behaviors as well as asymptotic dynamics of the system. The simulation results can help to reveal some underlying information for understanding the system better, which can be used for prediction and control in many diverse fields.

Keywords: Information transfer, continuous flow, discrete mapping, Lorenz system, Chua’s system

1. Introduction

Uncertainty quantification in complex dynamical systems is an important topic in prediction models. By integrating information-theoretic methods to investigate the underlying physics and measure suitable indices, uncertainty can be better quantified in practical ensemble predictions of complex dynamical systems. For instance, one important motivation is that the couplings among variables of dynamical systems generate information at a nonzero rate [1], which produces information exchange [2]. Entropy can be used to quantitatively describe the production, gathering, exchange and transfer of information [3]. Information transfer analysis can be used to detect asymmetry in the interactions of subsystems [1,4]. Emergent phenomena cannot be simply derived or solely predicted from knowledge of the structure or of the interactions among individual elements in complex systems [5]. The dynamics of information transport plays a critical role in complex systems, with applications to system prediction [6,7], system control [8,9] and causal analysis [10,11]. This motivates further investigation of information transport in complex dynamical systems. Several efficient estimation strategies have been applied to quantify nonlinear interactions through information transfer in complex dynamical systems [12,13,14], and simple examples have been used to illustrate various complex phenomena. Existing formalisms of information transfer are mostly based on two time series [1,15,16,17].

Recently, a new approach to the information flow between the components of two-dimensional (2D) systems was developed by Liang and Kleeman [6], which can be used to deal with the change of the uncertainty of one component induced by the other component. This idea is based on the specific interactions between two components in complex dynamical systems. For a system with fully given dynamics, a measure of information transfer can be rigorously formulated (referred to as the LK2005 formalism henceforth [6]). For continuous flows and discrete mappings, the information flow has been analyzed using the Liouville equation [18] and the Frobenius–Perron operator [18], which govern the evolution of the joint probability distribution in the two settings, respectively. The formalism is consistent with the transfer entropy of Schreiber [1] in both transfer asymmetry and quantification. A variety of generalizations and applications of the work in Reference [4] are developed in [19,20,21,22,23,24,25]. Majda and Harlim [26] applied the strategy to study subspaces of complex dynamical systems. For 2D systems, Liang and Kleeman discovered a concise law on the entropy evolution of deterministic autonomous systems and obtained the time rate of information flow from one component to the other [6]. Until now, the 2D formalism has been extended to dynamical systems in different forms and at different scales, with successful applications between two variables [23,25]. In the light of these applications, by thoroughly describing the statistical behavior of a system, the rigorous LK2005 formalism has yielded remarkable results [3].

However, for many real-world systems the uncertainty needs to be quantified among the variables to reveal the nonlinear relationships, so as to better understand the intrinsic mechanisms and predict the forthcoming states of the systems [27]. Besides, many physical systems in diverse fields are shaped by the interactions between multiple components [28]. For example, sensitivity analysis of an aircraft system with respect to design variables, parameters and uncertainty factors can be used to estimate their effects on the objective or constraint functions. The uncertainty analysis and sensitivity analysis (UASA) process is one of the key steps in determining the optimal search direction and guiding design and decision-making, and it aims at predicting complex computer models by quantifying the sensitivity information of the coupled variables. It offers quick guidance in determining the design parameters that lead to high-performance aircraft designs. Some preceding tools [29,30] related to sensitivity analysis are applicable to low-dimensional static problems, and an urgent problem of high dimensionality arises when the output variables of numerical models vary spatially and temporally [31]. The rigorous formalism of information flow has the potential to revolutionize the ability to analyze and measure uncertainty and sensitivity information in dynamical systems.

Hence, considering realistic applications, in this paper we generalize the LK2005 formalism to several variables of multi-dimensional dynamical systems. More precisely, we extend the results in [6,25] to the information flow between groups of components, rather than individual components. We aim to demonstrate that the formalism is feasible among several variables in arbitrary multi-dimensional dynamical systems when the dynamics are fully known. In addition, the generalized formalisms reduce to the two-dimensional formalism as a special case. We also highlight the relationship between the LK2005 formalism and our generalized formalisms. Two applications, to the classical Lorenz system and Chua's system, are proposed as validations of our formalisms. Compared with the LK2005 formalism and the transfer mutual information method [32], the generalized formalisms reveal more information among the variables. They can better explore the complexity of the evolution and the intrinsic regularity of multi-dimensional dynamical systems. Meanwhile, they provide a simple and versatile method for analyzing sensitivity in dynamical models. These generalized formulas enable one to understand the relationship between information transfer and the behavior of a system, and they can be used to perform sensitivity analysis as a measure in multi-dimensional complex dynamical systems. Therefore, the generalized formalisms have much wider applications and are significant for investigating real-world problems.

The structure of this paper is as follows: Section 2 gives a systematic introduction of the theories and formalisms of information flow in 2D systems. In Section 3, the formalisms are generalized, based on the LK2005 formalism, to the components of multi-dimensional complex dynamical systems; details of the derivations and the related properties are presented. Section 4 describes the formalisms with multi-dimensional applications. The summary of this paper is given in Section 5.

2. Two-Dimensional Formalism of Information Transfer (the LK2005 Formalism [6])

2.1. Continuous Flows

For 2D continuous and deterministic autonomous systems with fully known dynamics,

dx/dt = F(x), (1)

where F = (F1, F2), with Fi = Fi(x1, x2) for i = 1, 2, is known as the flow vector and x = (x1, x2) ∈ Ω = Ω1 × Ω2. Let X = (X1, X2) ∈ Ω be the stochastic process corresponding to the sample values (x1, x2), with joint probability density ρ(x1, x2, t) at time t. For convenience, we will write ρ or ρ(x1, x2) instead of ρ(x1, x2, t) throughout Section 2, and likewise in the multi-dimensional cases in Section 3. In addition, the integral domain is the whole sample space Ω, except where noted. The probability density ρ associated with Equation (1) satisfies the Liouville equation [18]:

∂ρ/∂t + ∂(F1ρ)/∂x1 + ∂(F2ρ)/∂x2 = 0. (2)

The rate of change of the joint entropy of X1 and X2, H(t) := −∫_Ω ρ log ρ dx1 dx2, satisfies the relation [6]

dH/dt = E(∇·F), (3)

where E denotes the mathematical expectation with respect to ρ and E(∇·F) = ∫_Ω ρ(∇·F) dx1 dx2. That is to say, when a system evolves with time, the change of its joint entropy is totally controlled by the contraction or expansion of the phase space [6]. Later on, Liang and Kleeman showed that this property holds for deterministic systems of arbitrary dimensionality [20].
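As a quick numerical illustration of this law (our own sketch, not part of the original derivation; the matrix A and the initial covariance are arbitrary choices), consider a linear flow dx/dt = Ax started from a Gaussian density, for which the joint entropy is available in closed form and dH/dt should equal E(∇·F) = tr A:

```python
import numpy as np
from scipy.linalg import expm

# Linear flow dx/dt = A x: for a Gaussian initial density the joint entropy
# is H(t) = 0.5 * log((2*pi*e)**2 * det(Sigma(t))), with
# Sigma(t) = expm(A t) Sigma0 expm(A t)^T, so dH/dt = trace(A) = E(div F).
A = np.array([[-0.5,  1.0],
              [-1.0, -0.2]])
Sigma0 = np.diag([2.0, 0.5])

def entropy(t):
    Phi = expm(A * t)                       # flow map of the linear system
    Sigma_t = Phi @ Sigma0 @ Phi.T          # covariance transported by the flow
    return 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(Sigma_t))

# numerical dH/dt by central difference vs. the expected divergence
dt = 1e-5
dH_dt = (entropy(1.0 + dt) - entropy(1.0 - dt)) / (2 * dt)
print(dH_dt, np.trace(A))   # both ≈ -0.7
```

Here det Σ(t) = e^{2t·tr A} det Σ(0), so H(t) changes linearly at rate tr A, matching the expectation of the divergence of the flow.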

Liang and Kleeman [6] provided a very efficient heuristic argument to decompose the evolutionary mechanisms of information transfer in terms of the individual and joint time rates of entropy change of X1, X2 and (X1, X2). Firstly, they computed dH1/dt and dH2/dt, where Hi is the entropy of Xi defined from the marginal density ρi. Secondly, they employed the novel idea of frozen variables to analyze the individual time rates of entropy change. When Xi is frozen and Xj evolves on its own in a 2D system, they found that the temporal rate of change of the entropy of Xj depends only on E(∂Fj/∂xj); this rate is denoted by dH*j/dt. In the presence of interactions between Xi and Xj, they observed that dHj/dt ≠ E(∂Fj/∂xj) = dH*j/dt. Therefore, Liang and Kleeman [6] concluded that the difference between dHj/dt and E(∂Fj/∂xj) should equal the rate of entropy transfer from Xi to Xj. They denoted the rate of flow from Xi to Xj by T_{i→j} (T stands for "transfer") and defined the information flow/transfer as

T_{i→j} = dHj/dt − dH*j/dt = −∫_Ω ρi|j(xi|xj) ∂(Fjρj)/∂xj dxi dxj, (4)

where ρi|j(xi|xj) = ρ(xi, xj, t)/ρ(xj, t) and i, j = 1, 2 with i ≠ j.

2.2. Discrete Mappings

Similarly, Liang and Kleeman [6] also gave the formalism for a system in discrete mapping form. Consider a 2D transformation

Φ: Ω → Ω, (x1, x2) ↦ (Φ1(x), Φ2(x)),

where x = (x1, x2) ∈ Ω and Ω := Ω1 × Ω2. The evolution of the density under Φ is driven by the Frobenius–Perron operator (FP operator) P: L¹(Ω) → L¹(Ω) [18]. The entropy increases as

ΔH = −∫ Pρ log Pρ dx1 dx2 + ∫ ρ log ρ dx1 dx2 = −∫ ρ(x1, x2) log|J⁻¹| dx1 dx2,

where J⁻¹ is the Jacobian of the inverse transformation Φ⁻¹. When Φj is invertible in 2D transformations,

ΔH*j = E log|Jj|. (5)
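To make the invertible case concrete, the following sketch (our own illustration; the map M and the covariance Σ are arbitrary choices) checks that for a linear map Φ(x) = Mx, whose Jacobian determinant is det M everywhere, the entropy change of a Gaussian density equals E log|J| = log|det M|:

```python
import numpy as np

# Invertible linear map Phi(x) = M x: the Jacobian determinant is det(M)
# everywhere, so the entropy change should be E[log|J|] = log|det(M)|.
M = np.array([[2.0, 0.3],
              [0.1, 0.5]])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 1.5]])

def gaussian_entropy(S):
    # differential entropy of a 2D Gaussian with covariance S
    return 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(S))

# the image of a Gaussian under M is Gaussian with covariance M S M^T
delta_H = gaussian_entropy(M @ Sigma @ M.T) - gaussian_entropy(Sigma)
print(delta_H, np.log(abs(np.linalg.det(M))))   # both ≈ log(0.97)
```

Since det(MΣMᵀ) = (det M)² det Σ, the two printed values agree exactly up to rounding.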

The entropy of Xj increases as

ΔHj = −∫_{Ωj} (∫_{Ωi} Pρ dxi) log(∫_{Ωi} Pρ dxi) dxj + ∫_{Ωj} ρj log ρj dxj,

where ρj is the marginal density of Xj. When Φj is noninvertible in 2D transformations,

ΔH*j = ∫ ρj(xj) log ρj(xj) dxj − ∫ Pjρj(Φj(xi, xj)) log Pjρj(Φj(xi, xj)) ρ(xi|xj) |Jj| dxi dxj, (6)

where Pj is the FP operator of Φj with xi frozen as a parameter. The entropy transferred from Xi to Xj is

T_{i→j} = −∫_{Ωj} (∫_{Ωi} Pρ dxi) log(∫_{Ωi} Pρ dxi) dxj + ∫ Pjρj(Φj(xi, xj)) log Pjρj(Φj(xi, xj)) ρ(xi|xj) |Jj| dxi dxj, (7)

where i, j = 1, 2 with i ≠ j.

3. n-Dimensional Formalism of Information Transfer

3.1. Continuous Flows

Firstly, we consider a three-dimensional (3D) continuous autonomous system,

dx/dt = F(x), (8)

where F = (F1, F2, F3) is a known flow vector. Similarly, the probability density ρ associated with Equation (8) satisfies the Liouville equation [18]:

∂ρ/∂t + ∂(F1ρ)/∂x1 + ∂(F2ρ)/∂x2 + ∂(F3ρ)/∂x3 = 0. (9)

Analogous to the derivation in [6], we first multiply Equation (9) by (1 + log ρ); after some algebraic manipulation,

∂(ρ log ρ)/∂t + F·∇(ρ log ρ) + ρ(1 + log ρ)∇·F = 0. (10)

Then, integrating Equation (10) over Ω,

dH/dt − ∫_Ω ∇·(ρ log ρ F) dx1 dx2 dx3 − ∫_Ω ρ∇·F dx1 dx2 dx3 = 0.

Assuming that ρ vanishes at the boundaries (the compact support assumption for ρ, which is reasonable in real-world problems [6]), it is found that the time rate of change of the joint entropy of X1, X2 and X3,

H(t) := −∫_Ω ρ log ρ dx1 dx2 dx3,

satisfies

dH/dt − ∫_Ω ρ(x1, x2, x3) ∇·F dx1 dx2 dx3 = 0

or

dH/dt = E(∇·F),

where E(∇·F) = ∫_Ω ρ(∇·F) dx1 dx2 dx3.

As mentioned above, the time rate of change of H equals the mathematical expectation of the divergence of the flow vector F. When we are interested in the entropy evolution of one component xk in a 3D system, the marginal density is

ρk(xk, t) = ∫_{Ωi×Ωj} ρ(xi, xj, xk, t) dxi dxj.

The evolution equation of ρk is derived by integrating Equation (9) with respect to xi and xj over the subspace Ωi × Ωj:

∂ρk/∂t + ∂/∂xk ∫_{Ωi×Ωj} ρFk dxi dxj = 0.

The divergence terms in xi and xj in Equation (9) have been integrated out under the compact support assumption for ρ. So the entropy of the component,

Hk(t) = −∫_{Ωk} ρk log ρk dxk,

evolves as

dHk/dt = ∫_Ω log ρk ∂(ρFk)/∂xk dxi dxj dxk,

i.e.,

dHk/dt = −∫_Ω (ρFk/ρk) ∂ρk/∂xk dxi dxj dxk. (11)

Equation (11) states how Hk evolves with time. The evolution of Hk derives from two parts: one is from the evolution of Xk itself, dH*k/dt; the other is from the transfers from Xi and Xj through the coupling in the joint density distribution ρ. From Section 2, we know that when Xk evolves on its own,

E(∂Fk/∂xk) = dH*k/dt = ∫_Ω ρ ∂Fk/∂xk dxi dxj dxk.

Therefore, the rate of information flow/transfer from Xi,Xj to Xk is

T_{i,j→k} = dHk/dt − dH*k/dt = −∫_Ω ρ[(Fk/ρk) ∂ρk/∂xk + ∂Fk/∂xk] dxi dxj dxk = −∫_Ω (ρ/ρk) ∂(Fkρk)/∂xk dxi dxj dxk = −∫_Ω ρi,j|k(xi, xj|xk) ∂(Fkρk)/∂xk dxi dxj dxk, (12)

where ρi,j|k(xi, xj|xk) = ρ(xi, xj, xk, t)/ρ(xk, t) and i, j, k = 1, 2, 3 are pairwise distinct.
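For linear dynamics with a Gaussian density, every term in formula (12) has a closed form, which gives a convenient consistency check (our own sketch under assumptions not in the paper: dx/dt = Ax with zero mean, so Σ(t) = e^{At}Σ(0)e^{Aᵀt}; then dH1/dt = (1/2) d log Σ11/dt, dH*1/dt = A11, and T_{2,3→1} = (A12Σ12 + A13Σ13)/Σ11):

```python
import numpy as np
from scipy.linalg import expm

# Closed-form check of formula (12) for a linear flow dx/dt = A x with a
# zero-mean Gaussian density:
#   dH1/dt     = 0.5 * d/dt log Sigma_11(t)
#   dH1*/dt    = E(dF1/dx1) = A[0, 0]
#   T_{2,3->1} = dH1/dt - dH1*/dt = (A[0,1]*S12 + A[0,2]*S13) / S11
A = np.array([[-1.0,  0.8, -0.3],
              [ 0.5, -0.7,  0.2],
              [ 0.0,  0.4, -0.9]])
Sigma0 = np.eye(3)

def Sigma(t):
    Phi = expm(A * t)           # flow map; covariance is transported by it
    return Phi @ Sigma0 @ Phi.T

t, dt = 0.5, 1e-5
S = Sigma(t)
# marginal entropy rate of X1 by central difference of 0.5*log Sigma_11
dH1_dt = (np.log(Sigma(t + dt)[0, 0]) - np.log(Sigma(t - dt)[0, 0])) / (4 * dt)
T_23_to_1 = (A[0, 1] * S[0, 1] + A[0, 2] * S[0, 2]) / S[0, 0]
print(dH1_dt - A[0, 0], T_23_to_1)   # the two values agree
```

The agreement follows from Σ̇ = AΣ + ΣAᵀ, whose (1,1) entry is 2(A11Σ11 + A12Σ12 + A13Σ13).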

In particular, if F1 = F1(x1) has no dependence on x2, then T_{2→1} = 0: there is no information transfer from the random variable component X2 to X1. This agrees with the transfers defined in the LK2005 formalism. Obviously, in system (8), when F1 has no dependence on x2 and x3, there should be no information transfer from X2, X3 to X1, but the transfers in other directions may still be nonzero when F2 depends on x1, x3 or F3 depends on x1, x2. This is consistent with the information transfer defined in Equation (12). In fact, an important property of the transfer is given below.

Theorem 1.

If Fk is independent of xi and xj in system (8), with i, j, k pairwise distinct, then T_{i,j→k} = 0.

Proof of Theorem 1

According to the formalism of information transfer for system (8), with the notation Fk = Fk(xk),

T_{i,j→k} = −∫_Ω ρi,j|k(xi, xj|xk) ∂(Fkρk)/∂xk dxi dxj dxk = −∫_{Ωk} [∫_{Ωi×Ωj} ρi,j|k(xi, xj|xk) dxi dxj] ∂(Fkρk)/∂xk dxk = −∫_{Ωk} ∂(Fkρk)/∂xk dxk = 0.

 □

It is worth noting that, while Xk gains information from Xi or Xj, or from both, Xi or Xj might have no dependence on Xk in 3D systems; an important property of information transfer is its asymmetry among the components [1]. In addition, it is interesting to note that the formalism for 3D systems reduces to the 2D case under the condition that one variable does not depend on another. For example, if the evolution of Xk is independent of Xi, then

T_{i,j→k} = −∫_Ω ρi,j|k(xi, xj|xk) ∂(Fkρk)/∂xk dxi dxj dxk = −∫_{Ωj×Ωk} [∫_{Ωi} ρ(xi, xj, xk) dxi] (1/ρ(xk)) ∂(Fkρk)/∂xk dxj dxk = −∫_{Ωj×Ωk} ρ(xj|xk) ∂(Fkρk)/∂xk dxj dxk = T_{j→k}. (13)

In particular, when Xk is independent of Xi and Xj,

T_{i,j→k} = T_{i→k} = T_{j→k} = 0.

According to Theorem 1, these results are apparent. Furthermore, when Xk depends on both Xi and Xj,

T_{i,j→k} = −∫_Ω ρi,j|k(xi, xj|xk) ∂(Fkρk)/∂xk dxi dxj dxk = −∫_Ω ρ(xj|xk) ρi|j,k(xi|xj, xk) ∂(Fkρk)/∂xk dxi dxj dxk = ∫_{Ωi} [−∫_{Ωj×Ωk} ρj|k(xj|xk) ∂(Fkρk)/∂xk dxj dxk] ρi|j,k(xi|xj, xk) dxi = ∫_{Ωi} T_{j→k} · ρi|j,k(xi|xj, xk) dxi,

or

T_{i,j→k} = ∫_{Ωj} T_{i→k} · ρj|i,k(xj|xi, xk) dxj. (14)

From the above derivations, we can see that our formalisms are reinforced by their inherent relation to the formalisms for 2D systems. The information flow from two variables, together with the higher-order interactions between them, to another variable is quantified by formula (12). These are generalized forms of the LK2005 formalism. In Section 4, we will validate these conclusions by applying all the formulas to the Lorenz and Chua's systems. Moreover, when several variables are involved, the formalisms are capable of tackling the information transfers of a multi-dimensional system.

Combining the Liouville equation

∂ρ/∂t + ∂(F1ρ)/∂x1 + ∂(F2ρ)/∂x2 + ⋯ + ∂(Fnρ)/∂xn = 0, (15)

with Equation (3), dH/dt = E(∇·F), in the n-dimensional situation, we can generalize the formalism to n-dimensional continuous and deterministic autonomous systems in the same way. For example, the transfer of information from the components X2, X3, …, Xn to X1 is

T_{2,3,…,n→1} = −∫_Ω ρ2,3,…,n|1(x2, x3, …, xn|x1) ∂(F1ρ1)/∂x1 dx1 dx2 ⋯ dxn.

Hence, Theorem 1 can be generalized to multi-dimensional cases.

3.2. Discrete Mappings

For a 3D transformation Φ: Ω → Ω, (x1, x2, x3) ↦ (Φ1(x), Φ2(x), Φ3(x)), the evolution of its density is driven by the Frobenius–Perron operator (FP operator) P: L¹(Ω) → L¹(Ω) [18]. Similar to the 2D case, after some computations, the entropy transfer from Xi, Xj to Xk in three-dimensional mappings has the following form:

T_{i,j→k} = ΔHk − ΔH*k = −∫_{Ωk} (∫_{Ωi×Ωj} Pρ dxi dxj) log(∫_{Ωi×Ωj} Pρ dxi dxj) dxk + ∫ Pkρk(Φk(xi, xj, xk)) log Pkρk(Φk(xi, xj, xk)) ρ(xi, xj|xk) |Jk| dxi dxj dxk. (16)

We also give a theorem for the discrete mappings and highlight the relationship between two-dimensional formalisms and generalized formalisms. The formalisms can be extended to high-dimensional situations as well. The detailed processes are demonstrated in Appendix A.

4. The Application of Multi-Dimensional Formalism of Information Transfer

4.1. The Lorenz System

In this section, we propose an application studying the information flows in the Lorenz system [33]:

dx1/dt = σ(x2 − x1),
dx2/dt = x1(r − x3) − x2,
dx3/dt = x1x2 − bx3,

where σ, r and b are parameters, x1, x2 and x3 are the system state variables, and t is time. A chaotic attractor of the Lorenz system with σ = 10, r = 28, b = 8/3 is shown in Figure 1.

Figure 1. The Lorenz attractor with initial value (1, 1, 1).

Firstly, we need to obtain the joint probability density function ρ(x1, x2, x3) of X to calculate the information flows among the variables. For a deterministic system with known dynamics, the underlying evolution of the joint density ρ(x1, x2, x3) can be obtained by solving the Liouville equation. Taking into account the computational load, we instead estimate the joint density ρ(x1, x2, x3) via numerical simulations. The steps are summarized as follows:

  • Initialize the joint density ρ(x1,x2,x3) with a preset distribution ρ0, then generate an ensemble through drawing samples randomly according to the initial distribution ρ0.

  • Partition the sample space Ω into “bins”.

  • Obtain an ensemble prediction for the Lorenz system at every time step.

  • Estimate the three-variable joint probability density function ρ via bin counting at every time step.
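The steps above can be sketched as follows (a minimal Python illustration under the settings in the text: Gaussian initialization, fourth order Runge–Kutta with Δt = 0.01, 60 bins per axis over the stated domain; the random seed, ensemble size and number of steps shown are our own choices):

```python
import numpy as np

def lorenz(x, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # Lorenz right-hand side, vectorized over an ensemble of shape (3, n)
    return np.stack([sigma * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

def rk4_step(x, dt=0.01):
    k1 = lorenz(x)
    k2 = lorenz(x + 0.5 * dt * k1)
    k3 = lorenz(x + 0.5 * dt * k2)
    k4 = lorenz(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
n = 216_000                              # roughly one draw per bin on average
X = rng.normal(4.0, 2.0, size=(3, n))    # Gaussian ensemble: u_d = 4, sigma_d^2 = 4

edges = [np.linspace(-30, 30, 61),       # 60 bins per axis over the domain
         np.linspace(-30, 30, 61),
         np.linspace(0, 60, 61)]
for _ in range(100):                     # evolve the ensemble forward in time
    X = rk4_step(X)
# bin counting gives the joint density estimate at the current time step
rho, _ = np.histogramdd(X.T, bins=edges, density=True)
print(rho.shape)   # (60, 60, 60)
```

In a full experiment the `histogramdd` call would be repeated at every time step, yielding ρ as a function of time.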

The Lorenz system is solved by applying a fourth order Runge–Kutta method with a time step Δt = 0.01. According to Figure 1, the computation domain is restricted to Ω ⊂ [−30, 30] × [−30, 30] × [0, 60], which includes the attractor of the Lorenz system. We discretize the sample space into 60 × 60 × 60 = 216,000 bins to ensure coverage of the whole attractor, and make 216,000 random draws, one draw per bin on average. Initially, we assume X is distributed as a Gaussian process N(u(t), Σ(t)), with mean u and covariance matrix Σ:

u(0) = (u1, u2, u3)ᵀ, Σ(0) = diag(σ1², σ2², σ3²).

Although we have used different parameters u and σd² (d = 1, 2, 3) to compute the information flows for the Lorenz system, the final results are the same and the trends stay invariant. The parameters u and σd² can be adjusted for different experiments. Here we only show the results of one experiment with ud = 4 and σd² = 4. The ensemble is developed by drawing samples randomly according to the pre-established distribution ρ0(x). We obtain an ensemble of X and estimate the three-variable joint probability density function ρ(x1, x2, x3, t) by counting the bins at every time step. As the equations of the Lorenz system are integrated forward, ρ can be estimated as a function of time and describes the statistics of the system. A detailed discussion of probability estimation through bin counting can be found in [20,25]. The sample data with initial value (1, 1, 1) and the estimated marginal densities of x1, x2 and x3 are displayed in Figure 2.

Figure 2. Left panel: sample data (X1, X2 and X3) of the Lorenz system generated by a fourth order Runge–Kutta method with Δt = 0.01. Right panel: the estimated marginal densities of x1, x2 and x3, obtained by counting the bins, with a Gaussian distribution initialization.

Through formula (12), the information transfer among the three variables can be computed. There are nine transfer series in the Lorenz system, but here we mainly focus on the coupled effect from two components on the remaining component, that is, T_{i,j→k}, i, j, k = 1, 2, 3 with pairwise distinct i, j, k. A nonzero T_{i,j→k} means that Xi and Xj are causal to Xk, and its value measures how much uncertainty Xi and Xj bring to Xk. Among all the transfers, it is clearly shown that any two variables drive the remaining variable in the dynamics, except that the evolution of X1 depends only on X2. To better reveal the underlying information in the chaotic dynamical system, we also give the information transfer among the components over space, with a Gaussian distribution initialization and the density averaged over time, via the following formula: S_{i,j→k} = −∫ ρ̄i,j|k(xi, xj|xk) ∂(Fkρ̄k)/∂xk dxi dxj, which characterizes the strength of the information transfer on the planes xk = const; its relative values represent the magnitudes of the information transfer of Xi and Xj to Xk on each plane. The calculation results are plotted in the left and right panels of Figure 3, respectively. According to the magnitudes of the parameters in the Lorenz system and the definition of the rigorous 3D formalisms, the information transfer from X1, X2 to X3 is the smallest. The results are just as we expected, |T_{1,2→3}| < |T_{1,3→2}| < |T_{2,3→1}|, as shown in the left panel of Figure 3. Meanwhile, we can obtain much information through the numerical simulations. For example, the information transfer from X2 and X3 to X1 is larger than that from X1 and X3 to X2 in the Lorenz system, which helps us better analyze the system and the fields of interest. Only the absolute value of T measures the information transfer among the variables [23].

As the ensemble evolution is carried forth, any two variables tend to reduce the uncertainty of the remaining variable [24]; in other words, any two variables tend to stabilize the remaining variable. All information flows go to constants, which means that the system tends to stabilize simultaneously. Comparing the left panel with the right one in Figure 3, we find not only that the information flow from X2 and X3 to X1 is the largest at different times, but also that the total information transfer is the largest on the x1 planes, and the strength of the information transfer obeys a distribution in each direction of x. Repeated experiments are in line with these results regardless of the initialization.

Figure 3. Left panel: the multivariate information flows of the Lorenz system: blue dot-dash line: T_{2,3→1}; green star line: T_{1,3→2}; red solid line: T_{1,2→3} (in nats per unit time). Right panel: the strength of information transfer in the Lorenz system: blue dot-dash line: S_{2,3→1}; green star line: S_{1,3→2}; red solid line: S_{1,2→3} (arbitrary units).
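To illustrate how formula (12) can be evaluated from gridded densities, the following sketch (our own construction, using an artificial correlated density rather than the paper's Lorenz ensemble) approximates the integral with finite differences and checks Theorem 1: with F3 depending only on x3, the computed T_{1,2→3} should be numerically zero:

```python
import numpy as np

# Discretization of formula (12): T_{1,2->3} = -∫ (rho/rho3) d(F3 rho3)/dx3 dV.
# Here F3 depends only on x3, so Theorem 1 predicts T_{1,2->3} = 0.
x = np.linspace(-6.0, 6.0, 80)
dx = x[1] - x[0]
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")

# an arbitrary correlated joint density (unnormalized Gaussian, then normalized)
rho = np.exp(-0.5 * (X1**2 + X2**2 + X3**2) - 0.3 * X1 * X2 - 0.2 * X2 * X3)
rho /= rho.sum() * dx**3

rho3 = rho.sum(axis=(0, 1)) * dx**2      # marginal density of x3
F3 = -2.0 * x                            # F3 = F3(x3) only
dflux = np.gradient(F3 * rho3, dx)       # finite-difference d(F3 rho3)/dx3
T_12_to_3 = -np.sum(rho / rho3 * dflux) * dx**3
print(T_12_to_3)   # ≈ 0, consistent with Theorem 1
```

The same discretization applies with a binned ensemble density in place of the analytic one; the residual here comes only from the boundary flux, which is negligible for a compactly concentrated density.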

In particular, we compute the transfer T_{2→1} and compare it with the transfer T_{2,3→1} in Figure 4, and we plot the transfers T_{1→2}, T_{3→2} and T_{1,3→2} in Figure 5.

Figure 4. T_{2→1} and T_{2,3→1} in the Lorenz system (in nats per unit time).

Figure 5. T_{1→2}, T_{3→2} and T_{1,3→2} in the Lorenz system (in nats per unit time).

Since the evolution of X1 is independent of X3 while the evolution of X2 depends on X1 and X3 in the Lorenz system, the transfer T_{2→1} should equal T_{2,3→1}, and neither the transfer T_{1→2} nor T_{3→2} should equal T_{1,3→2}, according to the derivations in Section 3.1. As expected, there is almost no difference between the two flows in Figure 4. The interpretation of this result is that X3 is not causal to X1 in the Lorenz system. The result agrees well with the theoretical analysis, which also validates our formalisms. By contrast, the graphs of T_{1→2} and T_{3→2} in Figure 5 are quite different from that of T_{1,3→2}, because both X1 and X3 are causal to X2 in the Lorenz system. From Figure 4 and Figure 5, we can see that the information flow T_{2→1} differs from T_{1→2}, reflecting the asymmetry of information transfer. Hidden sensitivity information exists in the information transfer processes of high-dimensional dynamical systems: whether or not one variable brings more uncertainty to another variable. Comparing the magnitudes of the three flows in Figure 5, we can say that, from the sensitivity analysis point of view, X2 is more sensitive to X3 than to X1. All the above differences are exactly the embodiment of the differences between the information flows in multi-dimensional dynamical systems and the LK2005 formalism. The proposed formalisms can be used to measure the information transfers among the variables in dynamical systems, and the numerical results show how the measurements behave over time, compared with the quantification of information transfer between two variables [4] and the transfer mutual information method [32]. For example, the influence of x3 on the relationship between x1 and x2 in the Lorenz system can be quantified using the transfer mutual information method.

With our generalized formalisms, we can quantify the influence of x3 on the relationship between x1 and x2 as a dynamical process, as well as other relationships among the variables (such as the asymmetrical influence between two variables), to analyze the system better. To test the influence of error propagation on the measurement of information transfers, we compute the information transfers with a different natural interval extension, following the method of [34]. In other words, we compute the information transfers using formula (12) in the Lorenz system with the second equation rewritten, that is, with rx1 − x1x3 in place of x1(r − x3). For the Lorenz system, the results show that the algorithm performs well (relative error < 2%). All simulations are performed in a 64-bit Matlab R2016a environment. The physical consistency of the proposed approach can be explained as follows: one direction of the phase space is frozen in order to extract the information transfers from the other two directions [3]. In addition, nonlinearity may lead a deterministic system to chaos, which causes the "spikes" in the right panel of Figure 3 and corresponds to intermittent switching in the chaotic dynamics. As the theory in [35] states, this indicates when the dynamics are about to switch lobes of the Lorenz attractor.
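The error-propagation test can be mimicked as follows (a sketch; the step count and horizon are our own choices): integrate the Lorenz system with the two algebraically equivalent writings of the second equation and compare the end states over a short horizon, where the difference should remain at rounding level:

```python
import numpy as np

# Natural-interval-extension test: two algebraically equivalent writings of
# the second Lorenz equation, x1*(r - x3) versus r*x1 - x1*x3.
def lorenz(x, form, sigma=10.0, r=28.0, b=8.0 / 3.0):
    f2 = x[0] * (r - x[2]) if form == 0 else r * x[0] - x[0] * x[2]
    return np.array([sigma * (x[1] - x[0]), f2 - x[1], x[0] * x[1] - b * x[2]])

def rk4(x, form, dt=0.01, steps=200):
    for _ in range(steps):
        k1 = lorenz(x, form)
        k2 = lorenz(x + 0.5 * dt * k1, form)
        k3 = lorenz(x + 0.5 * dt * k2, form)
        k4 = lorenz(x + dt * k3, form)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 1.0, 1.0])
xa, xb = rk4(x0, 0), rk4(x0, 1)
rel_err = np.max(np.abs(xa - xb)) / np.max(np.abs(xa))
print(rel_err)   # rounding-level over t = 2; chaos amplifies it over long horizons
```

Because the dynamics are chaotic, the rounding-level discrepancy between the two extensions grows exponentially for longer horizons, which is why the text reports the relative error of the derived information transfers rather than of the trajectories themselves.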

Since the Liouville equation and the Frobenius–Perron analysis describe an ensemble of trajectories, we can use the generalized formalisms of information flow as a sensitivity index to perform dynamic sensitivity analysis, instead of the widely used preceding methods such as repeated calculation of principal component coefficients [36,37], construction of functional metamodels [31,38], calculation of the moving average of the sensitivity index [39] and direct perturbation analysis of a dynamical system [40]. Using information flow to identify sensitive variables is directly based on a statistical perspective, which can improve numerical accuracy and efficiency while reducing the computational load, compared with conventional dynamic sensitivity analysis methods. We can not only quantify how much uncertainty flows among the variables of a system, but also understand how they influence the system behavior, so the measure may be used for prediction and control in realistic applications.

Furthermore, we use Equation (15) to compute the information transfers T_{y,z,w→x}, T_{x,z,w→y}, T_{x,y,w→z} and T_{x,y,z→w}, with the same strategy, in the four-dimensional (4D) dynamical system:

dx/dt = 12(y − x),
dy/dt = 23x − xz − y + w,
dz/dt = xy − 2.1z,
dw/dt = −6y − 0.2w,

whose results are shown in Figure 6.

Figure 6. Left panel: the estimated marginal densities of x, y, z and w, obtained by counting the bins, with a Gaussian distribution initialization. Right panel: the multivariate information flows over time of the 4D dynamical system.

The generalized formalisms are thus useful for dealing with general problems and are not difficult to apply to higher-dimensional cases.

4.2. The Chua’s System

As the first analog circuit to realize chaos in experiments, the original Chua's system is a well-known dynamical model [41]. The Chua's system is described in Reference [42], and there has been much research on its dynamical behavior [43,44]. Here we present an investigation of the information flows within the smooth Chua's system [45]:

dx/dt = p(x + y − x ln(1 + x²)),
dy/dt = x − y + z,
dz/dt = −qy,

where p and q are parameters, x, y and z are state variables in ℝ, and t ∈ ℝ+. When p = 11 and q = 14.87, a chaotic attractor of the Chua's system is shown in Figure 7.

Figure 7. The attractor of Chua's system with x(0) = 3, y(0) = 2, z(0) = 1. The first three panels show the projections onto the x,z-plane, x,y-plane and y,z-plane, respectively; the last panel is a 3D plot of x, y and z.

As mentioned before, using the same estimation procedure, we can obtain the density ρ(x, y, z) of R by counting the bins at each step. From Figure 7, an appropriate computation domain Ω ⊂ [−10, 10] × [−10, 10] × [−10, 10], which includes the attractor of the Chua's system, can be selected to estimate the three-variable joint probability density function. The computation again applies a fourth order Runge–Kutta method. Similarly, we only show the results of one experiment, after computing the information flows multiple times with different parameters. Suppose that R is initially distributed as a Gaussian process N(u(t), Σ(t)), with mean u and covariance matrix Σ:

u(0) = (9, 9, 9)ᵀ, Σ(0) = diag(9, 9, 9).

Since the smooth Chua's circuit has highly non-coherent dynamics [46], we discretize the sample space into 200 × 200 × 200 = 8,000,000 bins to adequately capture the information transfer and the behavior of the system over time. Sample data and the estimated marginal densities are shown in Figure 8; we find that the results are consistent with the dynamical behaviors of the system, such as its symmetry. We use formula (12) to compute the information transfers among the three variables of Chua's system. Firstly, we discuss the coupling effect from two components on the remaining component; the calculation results are demonstrated in Figure 9.

Figure 8. Left panel: sample data (X, Y and Z) of the Chua's system generated by a fourth order Runge–Kutta method with Δt = 0.01. Right panel: the purple, black and blue lines represent the estimated marginal densities of x, y and z, obtained by counting the bins, respectively.

Figure 9. Left panel: the multivariate information flows of the Chua's system: green dot-dash line: T_{y,z→x}; red dot-dash line: T_{x,z→y}; blue dot-dash line: T_{x,y→z} (in nats per unit time). Right panel: the strength of information transfer in the Chua's system: green dot-dash line: S_{y,z→x}; red dot-dash line: S_{x,z→y}; blue dot-dash line: S_{x,y→z} (arbitrary units).

Secondly, we compute the transfers T_{y→z} and T_{z→y}, and then compare T_{y→z} with the transfer T_{x,y→z}, and T_{z→y} with T_{x,z→y}, in Figure 10 and Figure 11, respectively. We also show the corresponding strength of information transfer among the components, with a Gaussian distribution initialization and the density averaged over time, in Figure 9.

Figure 10. T_{y→z} and T_{x,y→z} in the Chua's system (in nats per unit time).

Figure 11. T_{z→y} and T_{x,z→y} in the Chua's system (in nats per unit time).

Since X causes Y but does not cause Z in the Chua's system, the numerical results in Figure 10 and Figure 11 conform with the derivations of Equations (13) and (14) in Section 3.1. More specifically, there is almost no difference between the two flows in Figure 10, whereas there is a large disparity between the two flows in Figure 11. These results also verify our formalisms. In addition, as shown in Figure 10 and Figure 11, the information flow T_{y→z} differs from T_{z→y}, owing to the asymmetry of information transfer. All simulations are performed in a 64-bit Matlab R2016a environment. Via the generalized formalisms, we are able to estimate whether one variable makes another variable more uncertain or more predictable. Besides, we can identify sensitive variables by computing the information transfers among the variables in dynamical systems.

Compared with the Lorenz system, the Chua's system is also embodied in physical engineering circuits, beyond the fact that both discoveries were extraordinary and changed scientific thinking [46]. It can further serve as a means to research, experiment with, and think about humanity, identity, art, etc. [47,48]. In studies visualizing the dynamics of Chua's circuit through computational models, quantitative transformations of behavior are taken into account [46]. The multi-dimensional formalisms of information flow enable us to improve our ability to estimate, predict, and control complex systems in many diverse fields. Furthermore, most existing approaches to the control and synchronization of chaotic systems require adjusting model parameters and estimating system parameters, which has become an active area of research [49], and an additional benefit provided by the multi-dimensional formalisms of information flow is parameter estimation: we can compute the information flows of the simulation model under different parameter sets, obtain a group of feedback values by the same procedure, and then, by comparing the rates of change, determine the parameters that best cater to the actual needs, thereby gaining insight into the complex behavior of the model.
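The parameter-estimation idea just described can be organized as a simple grid search: compute a flow curve for each candidate parameter set, score it against a reference curve, and keep the best match. The sketch below is purely illustrative; `flow_curve` is a hypothetical stand-in for whichever information-flow computation is actually used, and the parameter grid is made up:

```python
import numpy as np
from itertools import product

def flow_curve(params, n_times=100):
    """Hypothetical placeholder for an information-flow curve T(t) produced by
    a simulation run with the given parameters; a made-up smooth function
    stands in for the real computation."""
    a, b = params
    t = np.linspace(0.0, 1.0, n_times)
    return a * np.exp(-b * t)

def fit_parameters(reference, grid):
    """Pick the parameter set whose flow curve is closest (L2 norm) to the reference."""
    best, best_err = None, np.inf
    for params in grid:
        err = np.linalg.norm(flow_curve(params) - reference)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

# Reference curve from "observed" flows (here generated at known parameters).
true_params = (2.0, 5.0)
reference = flow_curve(true_params)
grid = list(product(np.linspace(1.0, 3.0, 5), np.linspace(3.0, 7.0, 5)))
est, err = fit_parameters(reference, grid)
```

Because the true parameter pair lies on the grid, the search recovers it exactly; in practice the reference would come from data and the match would only be approximate.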

5. Conclusions

Based on the LK2005 formalism, we propose a rigorous and general formalism of information transfer among the components of multi-dimensional complex dynamical systems, for continuous flows and discrete mappings, respectively. Information transfers are quantified through the entropy transferred from some components to another component, enabling us to better understand the physical mechanism underlying the superficial behavior and to explore deeply hidden information in the evolution of multi-dimensional dynamical systems. When the generalized formalisms are reduced to 2D cases, the results are consistent with the LK2005 formalism. We mainly focus on 3D systems and apply the formalisms to investigate information transfers in the Lorenz system and the Chua's system. In these two cases, we show the information flows over the whole evolution and the strength of information transfer at different planes, which reveals how uncertainty propagates and how the system's essential dynamical information is transported. The experimental results of the generalized formalisms conform with observations and empirical analyses in the literature, so their application may benefit many diverse fields. Compared with the quantification of information transfer between two variables [4] and the transfer mutual information method [32], the generalized formalisms are helpful for analyzing the relationships among the variables of dynamical systems and for the research of complex systems. Moreover, since the formalism is built on the statistical nature of information, it has the potential to support sensitivity analysis in multi-dimensional complex dynamical systems and to advance our ability to estimate, predict, and control these systems. In practice, for complex high-dimensional dynamical systems, the dynamics are not easy to specify analytically. Considering that many critical data-driven problems are primed to take advantage of progress in the data-driven discovery of dynamics [35], we are developing a dynamics-free formulation to analyze the information flows of multi-dimensional dynamical systems.

In the future, the formalism will be further generalized to high-dimensional stochastic dynamical systems and to time-delay systems. Meanwhile, future research should investigate how information flow, as a new indicator, can be deployed in the framework of dynamic sensitivity analysis.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant No. 11771450).

Appendix A. Discrete Mappings

Now consider a 3D transformation

\[
\Phi:\ \Omega\to\Omega,\qquad (x_1,x_2,x_3)\ \mapsto\ \big(\Phi_1(\mathbf{x}),\,\Phi_2(\mathbf{x}),\,\Phi_3(\mathbf{x})\big), \tag{A1}
\]

and the Frobenius–Perron operator (FP operator) \(P: L^1(\Omega)\to L^1(\Omega)\) [18], which steers the evolution of its density. Loosely, given a density \(\rho=\rho(x_1,x_2,x_3)\), \(P\) is defined such that

\[
\int_{w} P\rho(x_1,x_2,x_3)\,dx_1dx_2dx_3 \;=\; \int_{\Phi^{-1}(w)} \rho(x_1,x_2,x_3)\,dx_1dx_2dx_3,
\]

where \(w\) represents any subset of \(\Omega\). When \(\Phi\) is invertible, \(P\) can be expressed explicitly as \(P\rho(\mathbf{x}) = \rho\big(\Phi^{-1}(\mathbf{x})\big)\,|J^{-1}|\), where \(J^{-1} = J^{-1}(x_1,x_2,x_3) = \det\big(\partial\Phi^{-1}(x_1,x_2,x_3)/\partial(x_1,x_2,x_3)\big)\) is the determinant of the Jacobian matrix of \(\Phi^{-1}\). Similar to the two-dimensional case, the entropy increases by

\[
\begin{aligned}
\Delta H &= -\int P\rho\log P\rho\,dx_1dx_2dx_3 + \int \rho\log\rho\,dx_1dx_2dx_3\\
&= -\int \rho\big(\Phi^{-1}(x_1,x_2,x_3)\big)\,|J^{-1}|\,\log\Big[\rho\big(\Phi^{-1}(x_1,x_2,x_3)\big)\,|J^{-1}|\Big]\,dx_1dx_2dx_3 + \int \rho\log\rho\,dx_1dx_2dx_3\\
&= -\int \rho(v_1,v_2,v_3)\,|J^{-1}|\Big[\log\rho(v_1,v_2,v_3)+\log|J^{-1}|\Big]\,|J|\,dv_1dv_2dv_3 + \int \rho\log\rho\,dx_1dx_2dx_3\\
&= \int \rho(x_1,x_2,x_3)\,\log|J|\,dx_1dx_2dx_3,
\end{aligned}
\]

where the substitution \((v_1,v_2,v_3)=\Phi^{-1}(x_1,x_2,x_3)\) has been made, and the last step uses \(|J^{-1}|\,|J|=1\) and \(\log|J^{-1}|=-\log|J|\). This is concisely rewritten as

\[
\Delta H = E\log|J|. \tag{A2}
\]
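Equation (A2) can be sanity-checked numerically on an invertible linear map \(\Phi(\mathbf{x})=A\mathbf{x}\), for which \(\log|J|=\log|\det A|\) is constant, so the differential entropy of any pushed-forward density shifts by exactly that amount. A small sketch with an assumed matrix and a Gaussian density (both chosen only for illustration):

```python
import numpy as np

# Invertible linear map Phi(x) = A x on R^3; its Jacobian determinant is det(A).
A = np.array([[2.0, 0.3, 0.0],
              [0.0, 1.5, 0.2],
              [0.1, 0.0, 0.8]])
Sigma = np.eye(3)  # density rho: standard Gaussian N(0, I)

def gaussian_entropy(cov):
    """Differential entropy of N(0, cov) in nats."""
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

H_before = gaussian_entropy(Sigma)
H_after = gaussian_entropy(A @ Sigma @ A.T)  # pushforward density P rho is N(0, A Sigma A^T)
dH = H_after - H_before

# Equation (A2): Delta H = E log|J| = log|det A| for a linear map.
assert abs(dH - np.log(abs(np.linalg.det(A)))) < 1e-9
```

The check works because \(\det(A\Sigma A^{\mathsf T}) = (\det A)^2 \det\Sigma\), so the Gaussian entropy formula gains exactly \(\log|\det A|\).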

Meanwhile, when the component map \(\Phi_k\) of the 3D transformation is invertible,

\[
\Delta H_k^{*} = E\log|J_k|, \tag{A3}
\]

where \(J_k = \partial\Phi_k/\partial x_k\) and the asterisk denotes the entropy evolution of \(X_k\) with \(x_i,x_j\) frozen as parameters.

The entropy of \(X_k\) increases as

\[
\Delta H_k = -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int_{\Omega_k}\rho_k\log\rho_k\,dx_k, \tag{A4}
\]

where \(\rho_k\) is the marginal density of \(X_k\).

When \(\Phi_k\) is noninvertible,

\[
\Delta H_k^{*} = \int \rho_k(x_k)\log\rho_k(x_k)\,dx_k - \int (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k, \tag{A5}
\]

where \(P_k\) is the FP operator of \(\Phi_k\) with \(x_i,x_j\) frozen as parameters. It is easy to verify that Equation (A5) reduces to Equation (A3) when \(\Phi_k\) is invertible. Therefore, the entropy transfer from \(X_i,X_j\) to \(X_k\), namely \(T_{i,j\to k} = \Delta H_k - \Delta H_k^{*}\), can be unified into the form

\[
T_{i,j\to k} = -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k, \tag{A6}
\]

where \(i,j,k\in\{1,2,3\}\) are pairwise distinct.

Just as in the former case with continuous variables, the information flow given by Equation (A6) has the following property:

Theorem A1.

If \(\Phi_k\) is independent of \(x_i\) and \(x_j\) in system (A1), with \(i,j,k\) pairwise distinct, then \(T_{i,j\to k}=0\).

The detailed proof of Theorem A1 is presented in Appendix A.1. Moreover, the 3D formalism reduces to the 2D formalism when the previously mentioned conditions are satisfied. For example, when \(\Phi_k\) has no dependence on \(x_i\),

\[
\begin{aligned}
T_{i,j\to k} &= -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k\\
&= -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int (P_k\rho_k)\big(\Phi_k(x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_j,x_k)\big)\big]\,|J_k|\left(\int_{\Omega_i}\rho(x_i,x_j\,|\,x_k)\,dx_i\right)dx_j dx_k\\
&= -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int (P_k\rho_k)\big(\Phi_k(x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_j,x_k)\big)\big]\,\rho(x_j\,|\,x_k)\,|J_k|\,dx_j dx_k\\
&= T_{j\to k}.
\end{aligned}
\]

In particular, when Φk has no dependence on xi and xj,

\[
T_{i,j\to k} = T_{i\to k} = T_{j\to k} = 0.
\]

Furthermore, when \(\Phi_k\) depends on both \(x_i\) and \(x_j\), write \((P\rho)_k \equiv \int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\) for the marginal of \(P\rho\) and factor \(\rho(x_i,x_j\,|\,x_k) = \rho(x_j\,|\,x_k)\,\rho(x_i\,|\,x_j,x_k)\); then

\[
\begin{aligned}
T_{i,j\to k} &= -\int_{\Omega_k} (P\rho)_k\log (P\rho)_k\,dx_k + \int (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k\\
&= -\int_{\Omega_k} (P\rho)_k\log (P\rho)_k\,dx_k + \int (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_j\,|\,x_k)\,\rho(x_i\,|\,x_j,x_k)\,|J_k|\,dx_i dx_j dx_k\\
&= -\int_{\Omega_k} (P\rho)_k\log (P\rho)_k\,dx_k + \int_{\Omega_i}\left[\int_{\Omega_j\times\Omega_k} (P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_i,x_j,x_k)\big)\big]\,\rho(x_j\,|\,x_k)\,|J_k|\,dx_j dx_k\right]\rho(x_i\,|\,x_j,x_k)\,dx_i\\
&= \int_{\Omega_i} T_{j\to k}\,\rho(x_i\,|\,x_j,x_k)\,dx_i + \int_{\Omega_i}\left[\int_{\Omega_k} (P\rho)_k\log (P\rho)_k\,dx_k\right]\rho(x_i\,|\,x_j,x_k)\,dx_i - \int_{\Omega_k} (P\rho)_k\log (P\rho)_k\,dx_k.
\end{aligned}
\]

The above formalisms can also be generalized to n-dimensional systems by working out the relationship between the FP operator,

\[
\int_{w} P\rho(x_1,x_2,\ldots,x_n)\,dx_1dx_2\cdots dx_n = \int_{\Phi^{-1}(w)} \rho(x_1,x_2,\ldots,x_n)\,dx_1dx_2\cdots dx_n,
\]

and the entropy evolution at successive time steps. For example, the transfer of entropy from \(X_2,X_3,\ldots,X_n\) to \(X_1\) is

\[
T_{2,3,\ldots,n\to 1} = -\int_{\Omega_1}\left(\int_{\Omega_{2\cdots n}} P\rho\,dx_2dx_3\cdots dx_n\right)\log\left(\int_{\Omega_{2\cdots n}} P\rho\,dx_2dx_3\cdots dx_n\right)dx_1 + \int_{\Omega} (P_1\rho_1)\big(\Phi_1(x_1,x_2,\ldots,x_n)\big)\log\big[(P_1\rho_1)\big(\Phi_1(x_1,x_2,\ldots,x_n)\big)\big]\cdot\rho(x_2,x_3,\ldots,x_n\,|\,x_1)\,|J_1|\,dx_1dx_2\cdots dx_n.
\]

Here \(\Omega_{2\cdots n}\) is shorthand for \(\Omega_2\times\Omega_3\times\cdots\times\Omega_n\). As in the continuous cases, the generalized version of the property in Theorem A1 also holds for multi-dimensional discrete mappings.

Appendix A.1.

Proof of Theorem A1

We only need to show that, when \(\Phi_k\) is independent of \(x_i\) and \(x_j\) in the 3D system,

\[
\Delta H_k = \Delta H_k^{*}.
\]

According to Equations (A4) and (A5), it suffices to prove that

\[
\int_{\Omega} (P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k = \int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k.
\]

By the definition of the FP operator and the condition that \(\Phi_k\) is independent of both \(x_i\) and \(x_j\),

\[
\int_{\Omega} (P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k = \int_{\Omega_k} (P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\big]\,|J_k|\left(\int_{\Omega_i\times\Omega_j}\rho(x_i,x_j\,|\,x_k)\,dx_i dx_j\right)dx_k,
\]

and, because \(\int_{\Omega_i\times\Omega_j}\rho(x_i,x_j\,|\,x_k)\,dx_i dx_j = 1\),

\[
\begin{aligned}
\int_{\Omega_k} (P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\big]\,|J_k|\left(\int_{\Omega_i\times\Omega_j}\rho(x_i,x_j\,|\,x_k)\,dx_i dx_j\right)dx_k &= \int_{\Omega_k} (P_k\rho_k)(y_k)\log (P_k\rho_k)(y_k)\,dy_k\\
&= \int_{\Omega_k} P_k\rho_k(x_k)\log P_k\rho_k(x_k)\,dx_k\\
&= \int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k,
\end{aligned}
\]

where \(y_k = \Phi_k(x_k,x_i,x_j)\). So

\[
\begin{aligned}
\Delta H_k^{*} &= \int \rho_k(x_k)\log\rho_k(x_k)\,dx_k - \int (P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\log\big[(P_k\rho_k)\big(\Phi_k(x_k,x_i,x_j)\big)\big]\,\rho(x_i,x_j\,|\,x_k)\,|J_k|\,dx_i dx_j dx_k\\
&= -\int_{\Omega_k}\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)\log\left(\int_{\Omega_i\times\Omega_j} P\rho\,dx_i dx_j\right)dx_k + \int_{\Omega_k}\rho_k\log\rho_k\,dx_k\\
&= \Delta H_k.
\end{aligned}
\]

 □

Author Contributions

Y.Y. proposed the original idea, implemented the experiments, analyzed the data and wrote the paper. X.D. contributed to the theoretical analysis and simulation designs and revised the manuscript. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1. Schreiber T. Measuring Information Transfer. Phys. Rev. Lett. 2000;85:461. doi: 10.1103/PhysRevLett.85.461.
  • 2. Horowitz J.M., Esposito M. Thermodynamics with Continuous Information Flow. Phys. Rev. X. 2014;4:031015. doi: 10.1103/PhysRevX.4.031015.
  • 3. Cafaro C., Ali S.A., Giffin A. Thermodynamic aspects of information transfer in complex dynamical systems. Phys. Rev. E. 2016;93:022114. doi: 10.1103/PhysRevE.93.022114.
  • 4. Gencaga D., Knuth K.H., Rossow W.B. A Recipe for the Estimation of Information Flow in a Dynamical System. Entropy. 2015;17:438–470. doi: 10.3390/e17010438.
  • 5. Kwapien J., Drozdz S. Physical approach to complex systems. Phys. Rep. 2012;515:115–226. doi: 10.1016/j.physrep.2012.01.007.
  • 6. Liang X.S., Kleeman R. Information Transfer between Dynamical Systems Components. Phys. Rev. Lett. 2005;95:244101. doi: 10.1103/PhysRevLett.95.244101.
  • 7. Kleeman R. Information flow in ensemble weather predictions. J. Atmos. Sci. 2007;6:1005–1016. doi: 10.1175/JAS3857.1.
  • 8. Touchette H., Lloyd S. Information-Theoretic Limits of Control. Phys. Rev. Lett. 2000;84:1156. doi: 10.1103/PhysRevLett.84.1156.
  • 9. Touchette H., Lloyd S. Information-theoretic approach to the study of control systems. Phys. A. 2004;331:140. doi: 10.1016/j.physa.2003.09.007.
  • 10. Sun J., Cafaro C., Bollt E.M. Identifying coupling structure in complex systems through the optimal causation entropy principle. Entropy. 2014;16:3416–3433. doi: 10.3390/e16063416.
  • 11. Cafaro C., Lord W.M., Sun J., Bollt E.M. Causation entropy from symbolic representations of dynamical systems. Chaos. 2015;25:043106. doi: 10.1063/1.4916902.
  • 12. Majda A., Kleeman R., Cai D. A Framework for Predictability through Relative Entropy. Methods Appl. Anal. 2002;9:425–444.
  • 13. Haven K., Majda A., Abramov R. Quantifying predictability through information theory: Small-sample estimation in a non-Gaussian framework. J. Comp. Phys. 2005;206:334–362. doi: 10.1016/j.jcp.2004.12.008.
  • 14. Abramov R.V., Majda A.J. Quantifying Uncertainty for Non-Gaussian Ensembles in Complex Systems. SIAM J. Sci. Stat. Comp. 2004;26:411–447. doi: 10.1137/S1064827503426310.
  • 15. Kaiser A., Schreiber T. Information transfer in continuous processes. Phys. D. 2002;166:43–62. doi: 10.1016/S0167-2789(02)00432-3.
  • 16. Abarbanel H.D.I., Masuda N., Rabinovich M.I., Tumer E. Distribution of Mutual Information. Phys. Lett. A. 2001;281:368–373. doi: 10.1016/S0375-9601(01)00128-1.
  • 17. Wyner A.D., Mackey M.C. A definition of conditional mutual information for arbitrary ensembles. Inf. Control. 1978;38:51–59. doi: 10.1016/S0019-9958(78)90026-8.
  • 18. Lasota A., Mackey M.C. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Springer; New York, NY, USA: 1994.
  • 19. Liang X.S., Kleeman R. A rigorous formalism of information transfer between dynamical system components. I. Discrete mapping. Physica D. 2007;231:1–9. doi: 10.1016/j.physd.2007.04.002.
  • 20. Liang X.S., Kleeman R. A rigorous formalism of information transfer between dynamical system components. II. Continuous flow. Physica D. 2007;227:173–182. doi: 10.1016/j.physd.2006.12.012.
  • 21. Liang X.S. Information flow within stochastic dynamical systems. Phys. Rev. E. 2008;78:031113. doi: 10.1103/PhysRevE.78.031113.
  • 22. Liang X.S. Uncertainty generation in deterministic fluid flows: Theory and applications with an atmospheric stability model. Dyn. Atmos. Oceans. 2011;52:51–79. doi: 10.1016/j.dynatmoce.2011.03.003.
  • 23. Liang X.S. The Liang-Kleeman information flow: Theory and application. Entropy. 2013;15:327–360. doi: 10.3390/e15010327.
  • 24. Liang X.S. Unraveling the cause-effect relation between time series. Phys. Rev. E. 2014;90:052150. doi: 10.1103/PhysRevE.90.052150.
  • 25. Liang X.S. Information flow and causality as rigorous notions ab initio. Phys. Rev. E. 2016;94:052201. doi: 10.1103/PhysRevE.94.052201.
  • 26. Majda A.J., Harlim J. Information flow between subspaces of complex dynamical systems. Proc. Natl. Acad. Sci. USA. 2007;104:9558–9563. doi: 10.1073/pnas.0703499104.
  • 27. Zhao X.J., Shang P.J. Measuring the uncertainty of coupling. Europhys. Lett. 2015;110:60007. doi: 10.1209/0295-5075/110/60007.
  • 28. Zhao X.J., Shang P.J., Huang J.J. Permutation complexity and dependence measures of time series. Europhys. Lett. 2013;102:40005. doi: 10.1209/0295-5075/102/40005.
  • 29. Iooss B., Lemaitre P. A review on global sensitivity analysis methods. Oper. Res. Comput. Sci. Interfaces. 2014;59:101–122.
  • 30. Borgonovo E., Plischke E. Sensitivity analysis: A review of recent advances. Eur. J. Oper. Res. 2016;248:869–887. doi: 10.1016/j.ejor.2015.06.032.
  • 31. Auder B., Crecy A.D., Iooss B., Marques M. Screening and metamodeling of computer experiments with functional outputs. Application to thermal–hydraulic computations. Reliab. Eng. Syst. Safety. 2012;107:122–131. doi: 10.1016/j.ress.2011.10.017.
  • 32. Zhao X.J., Shang P.J., Lin A.J. Transfer mutual information: A new method for measuring information transfer to the interactions of time series. Physica A. 2017;467:517–526. doi: 10.1016/j.physa.2016.10.027.
  • 33. Lorenz E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963;20:130–141. doi: 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
  • 34. Nepomuceno E.G., Mendes E.M.A.M. On the analysis of pseudo-orbits of continuous chaotic nonlinear systems simulated using discretization schemes in a digital computer. Chaos Soliton Fract. 2017;95:21–32. doi: 10.1016/j.chaos.2016.12.002.
  • 35. Brunton S.L., Brunton B.W., Proctor J.L., Kaiser E., Kutz J.N. Chaos as an intermittently forced linear system. Nat. Commun. 2017. doi: 10.1038/s41467-017-00030-8.
  • 36. Campbell K., McKay M.D., Williams B.J. Sensitivity analysis when model outputs are functions. Reliab. Eng. Syst. Saf. 2006;91:1468–1472. doi: 10.1016/j.ress.2005.11.049.
  • 37. Lamboni M., Monod H., Makowski D. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models. Reliab. Eng. Syst. Saf. 2011;96:450–459. doi: 10.1016/j.ress.2010.12.002.
  • 38. Marrel A., Iooss B., Jullien M., Laurent B., Volkova E. Global sensitivity analysis for models with spatially dependent outputs. Environmetrics. 2011;22:383–397. doi: 10.1002/env.1071.
  • 39. Loonen R.C.G.M., Hensen J.L.M. Dynamic sensitivity analysis for performance-based building design and operation; Proceedings of the BS2013: 13th Conference of International Building Performance Simulation Association; Chambéry, France. 26–28 August 2013; pp. 299–305.
  • 40. Richard R., Casas J., McCauley E. Sensitivity analysis of continuous-time models for ecological and evolutionary theories. Theor. Ecol. 2015. doi: 10.1007/s12080-015-0265-9.
  • 41. Chua L.O., Komuro M., Matsumoto T. The double scroll family: Parts I and II. IEEE Trans. Circuits Syst. 1986;CAS-33(11):1072–1118. doi: 10.1109/TCS.1986.1085869.
  • 42. Chua L.O. Archiv fur Elektronik und Ubertragungstechnik. Volume 46. University of California; Berkeley, CA, USA: 1992. The genesis of Chua’s circuit; pp. 250–257.
  • 43. Chua L.O. A zoo of strange attractor from the canonical Chua’s circuits; Proceedings of the 35th Midwest Symposium on Circuits and Systems; Washington, DC, USA. 1992. pp. 916–926.
  • 44. Liao X.X., Yu P., Xie S.L., Fu Y.L. Study on the global property of the smooth Chua’s system. Int. J. Bifurcat. Chaos Appl. Sci. Eng. 2006;16:2815–2841. doi: 10.1142/S0218127406016483.
  • 45. Zhou G.P., Huang J.H., Liao X.X., Cheng S.J. Stability Analysis and Control of a New Smooth Chua’s System. Abstract Appl. Anal. 2013;2013. 10 pages. doi: 10.1155/2013/620286.
  • 46. Bertacchini F., Bilotta E., Gabriele L., Pantano P., Tavernise A. Toward the Use of Chua’s Circuit in Education, Art and Interdisciplinary Research: Some Implementation and Opportunities. LEONARDO. 2013;46:456–463. doi: 10.1162/LEON_a_00641.
  • 47. Bilotta E., Blasi G.D., Stranges F., Pantano P. A Gallery of Chua Attractors. Part V. Int. J. Bifurcat. Chaos. 2007;17:1383–1511. doi: 10.1142/S0218127407018099.
  • 48. Adamo A., Tavernise A. Generation of Ego dynamics; Proceedings of the VIII International Conference on Generative Art; Milan, Italy. 15–17 December 2007.
  • 49. Kingni S.T., Jafari S., Simo H., Woafo P. Three-dimensional chaotic autonomous system with only one stable equilibrium: Analysis, circuit design, parameter estimation, control, synchronization and its fractional-order form. Eur. Phys. J. Plus. 2014;129. doi: 10.1140/epjp/i2014-14076-4.

Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
