Heliyon. 2019 Jun 17;5(6):e01952. doi: 10.1016/j.heliyon.2019.e01952

Oscillating delayed feedback control schemes for stabilizing equilibrium points

Verónica E. Pastor a, Graciela A. González a,b
PMCID: PMC6584792  PMID: 31249899

Abstract

Limitations of delayed feedback control and of its extended versions have been fully treated in the literature. Oscillating delayed feedback control appears as a promising scheme to overcome them. Two methods based on it are dealt with in this work. It is rigorously proven that, for a nonlinear scalar system, stabilization at one of its (unstable) equilibrium points is achieved if either of these methods is applied. An ad-hoc map is associated to the (continuous) controlled system, and the results are derived using discrete-time system stabilization tools. Moreover, the stability parameters region is fully described, and issues such as control performance, rate of convergence and robustness are carefully analyzed.

Keywords: Applied mathematics, Mathematical methods, Oscillating feedback control, Delay, Stability parameters region, Control performance, Rate of convergence

1. Introduction

There is an extensive literature on delayed feedback control (DFC) as a chaos control method. It is well known that DFC was originally proposed by Pyragas in [1] for stabilizing an unstable periodic orbit (UPO) in a chaotic system. Its implementation does not require the exact location of the UPO: it is based on the difference between the current system state and the system state delayed by the period of the UPO. The DFC method has also been reformulated as a tool to stabilize equilibria embedded in chaotic attractors (see [2] and references therein). With this objective, it has been implemented on known chaotic systems such as the Chen system [3] or the Rössler system [4], and on technical applications like [5] or [6], among others. An extended version (EDFC), proposed in [7], proves more effective for stabilizing highly unstable equilibrium points and UPO's ([8]). Applications of EDFC to certain technical problems have recently been published ([9], [10]).

It is important to point out that not all UPO's can be stabilized by time-delayed feedback control methods. Namely, for non-autonomous systems, it is not possible to stabilize a hyperbolic periodic orbit with an odd number of real Floquet multipliers larger than unity. This is known as the odd number limitation (ONL), and it is stated in [11] for DFC and in [12] for EDFC. The proofs of [11] and [12] do not apply to UPO's in the autonomous case (the technical reason is clearly explained in [13]). There is nonetheless a limitation, which also involves the number of Floquet multipliers greater than unity but, in addition, depends on an analytical expression given by an integral of the control force along the UPO to be stabilized. This limitation is proven in [13] for DFC and in [14] for EDFC. For equilibrium point stabilization, the ONL holds true in both autonomous and non-autonomous systems. In particular, for the autonomous case, if the linearization matrix has an odd number of positive eigenvalues then stabilization is impossible by means of DFC methods ([8], [15]). An interesting review on the evolution of the ONL problem and its derivations may be found in [2].

Another drawback of time-delayed feedback is that the controlled system turns out to be a delay differential equation, whose state space is infinite-dimensional; hence it is quite difficult to state analytical results and to get effective stabilization criteria. Some approaches focused on overcoming these difficulties are based on a periodic control gain ([16]) or on the “act-and-wait” concept introduced by Insperger ([17], [18]). These methods are characterized by alternately applying and cutting off the controller on finite intervals, yielding a finite-sized monodromy matrix of the closed-loop system, so the linear stability of the UPO may be enhanced by an appropriate choice of the control parameters. The act-and-wait approach has been used together with DFC for stabilizing unstable equilibrium points ([19]), for stabilizing UPO's of non-autonomous systems ([20]) and of autonomous systems ([21], [22]).

For stabilizing equilibrium points, a delayed feedback controller that overcomes the drawbacks of DFC is derived in [15], providing a systematic design procedure. However, this procedure is valid only for sufficiently short delay times, which is inappropriate in certain experimental settings (e.g., in fast dynamical systems, due to the finite operating speed of the electronic devices). Later, Konishi et al. ([19]) proposed a DFC based on “act-and-wait” control, its advantage being that the controlled system with delay can be described by a discrete-time system without delay. This method works for long delay times, and a deadbeat controller may be designed by a simple systematic procedure, but the authors could not show that their method overcomes the ONL property. The periodic control gain approach for stabilizing UPO's of [16] has very recently been transferred to stabilizing equilibrium points and, under certain restrictions on the spectrum of the linearization matrix, the ONL is overcome by the algorithm stated in [23].

Interestingly, there is an early contribution towards improving the delayed feedback limitations in [24], where it is proposed to apply feedback control only periodically, thus avoiding a too rapid decay of the control magnitude. Differently from the discrete-time case ([24], [25]), if an oscillating perturbation term involving the difference between current state and delayed state is applied to differential equations, stabilization cannot be achieved. Hence an oscillatory velocity term is introduced in [24]: it is worked out for equilibrium point stabilization of a scalar linear differential equation but, as pointed out in [26], the related stabilizing result is not clear.

This work concentrates on the general scalar case

ẋ = f(x)  (1)

with f a nonlinear, continuously differentiable function, and x* an equilibrium point of (1) with f′(x*) = λ > 0. The objective is to apply a tiny perturbation that does not require exact knowledge of the equilibrium point location and preserves x* as an equilibrium point (i.e., it is non-invasive), while turning it stable.

Two oscillating delayed feedback control (ODFC) schemes for equilibrium point stabilization will be deeply studied. Preliminary ideas on them were introduced by us in [27]. The first method depends on the delayed velocity term taken from [24]. In the second one, the difference between two delayed states is introduced in the perturbation. In spite of being inspired by [24], both methods may be framed within the “act-and-wait” concept ([17]). In fact, as the resulting differential equations are affected by delayed feedback only periodically, a continuously differentiable map is associated to the controlled dynamics, and stability will be derived using classical tools on the linearization of discrete-time systems. Each algorithm will be clearly presented and conditions for stabilization will be deduced. Under the stated conditions, the achievement of the control objective will be rigorously proven for the general nonlinear case. An analytical description of the stability parameters region will be given. Rate of convergence, control performance and stability parameters region of each method will be studied and confronted.

Although there is no possibility of complex behavior in the one-dimensional case, these proposals may be the kick-start to design strategies for the n-dimensional case that overcome the drawbacks of the chaos control methods cited above. Let us note that in this work the objective is the local asymptotic stability of the equilibrium point. In the context of chaos control, just local stability is needed because the control is activated in the nearness of the equilibrium point, and, for this same reason, the behavior of the controlled system far from the equilibrium point does not matter.

2. ODFC method based on delayed velocity term

In this control strategy, a perturbation based on a delayed velocity term is added:

ẋ(t) = f(x(t)) + ϵ(t) ẋ(t − τ)  (2)

where,

ϵ(t) = 0 if 2kτ ≤ t < (2k+1)τ, and ϵ(t) = ϵ if (2k+1)τ ≤ t < (2k+2)τ, for k ∈ ℕ ∪ {0},

where ϵ and τ are control design parameters.

Let us note that x* is preserved as an equilibrium point of system (2). Then, values of ϵ and τ for which the system stabilizes at x* should be found. It will be proved that, for a certain range of ϵ depending on λ and τ, x* becomes an asymptotically stable equilibrium point. Therefore, if this strategy is applied with an initial condition in a neighborhood of the equilibrium point, the control objective is fulfilled.
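Since the text provides no code, the action of the switched gain ϵ(t) in (2) can be illustrated with a minimal numerical sketch. Everything here is an illustrative assumption rather than part of the original method description: Python, forward-Euler integration with a stored derivative history standing in for ẋ(t − τ), the linear test system f(x) = λx with λ = 2, τ = 0.2, x₀ = 0.5, and ϵ taken from relation (7) of Remark 4 below so that the targeted per-cycle contraction factor is α = −0.4:

```python
import math

def simulate_odfc1(lam=2.0, tau=0.2, alpha=-0.4, x0=0.5, cycles=3, n=5000):
    """Forward-Euler integration of xdot(t) = lam*x(t) + eps(t)*xdot(t - tau)."""
    # gain chosen via relation (7) so that the cycle map has multiplier alpha
    eps = math.exp(-lam * tau) * (alpha - math.exp(2 * lam * tau)) / (lam * tau)
    dt = tau / n                      # the delay tau spans exactly n grid steps
    x = [x0]                          # state on the time grid
    xdot = []                         # stored derivatives, used as the delayed term
    samples = [x0]                    # state sampled at t = 0, 2*tau, 4*tau, ...
    for i in range(2 * n * cycles):
        active = (i // n) % 2 == 1    # eps(t) = eps on the second half of each 2*tau cycle
        d = lam * x[i]
        if active:
            d += eps * xdot[i - n]    # delayed velocity term xdot(t - tau)
        xdot.append(d)
        x.append(x[i] + dt * d)
        if (i + 1) % (2 * n) == 0:
            samples.append(x[i + 1])
    return samples

samples = simulate_odfc1()
```

For linear f the cycle map is exactly P(x) = αx, so the state sampled every 2τ should contract by the factor α per cycle, up to the Euler discretization error.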

System (2) is a nonlinear dynamical system described by the following piecewise-smooth differential equation:

ẋ = F(t, x) = F_k(t, x),  x(0) = x₀

where for each k ≥ 0 and for t ∈ [2kτ, (2k+2)τ),

F_k(t, x) = f(x) if 2kτ ≤ t < (2k+1)τ, and F_k(t, x) = f(x) + ϵψ_k(t) if (2k+1)τ ≤ t < (2k+2)τ,

and ψ_k(t) = ẋ(t − τ).

As ψ_k(t) = f(φ(t − τ)), where φ is the solution on the sub-interval [2kτ, (2k+1)τ), it follows that ψ_k(t) is continuous on [(2k+1)τ, (2k+2)τ).

Some mathematical properties of the solution in each interval [2kτ,(2k+2)τ) are of interest:

Remark 1

The solution of system (2) on each interval [2kτ, (2k+2)τ) is determined solely by the value x_{2k} (it does not depend on k). This is a consequence of the fact that, when the control is not active, the system is autonomous, and that, when the control is active, it is non-autonomous but its dependence on t is carried by the solution on the first half of the interval.

Remark 2

x ≡ 0 is the solution on [2kτ, (2k+2)τ) of ẋ = F_k(t, x) with x(2kτ) = 0.

Remark 3

From Remark 1, Remark 2 and the continuous dependence on the initial condition ([28]), given Δ > 0 there exists δ₀ > 0 (δ₀ = δ₀(ϵ, τ)) such that if |x_{2k}| < δ₀, there is a unique solution x(t) of ẋ = F_k(t, x) with x(2kτ) = x_{2k} and |x(t)| < Δ on [2kτ, (2k+2)τ). Moreover, x(t) is continuous on [2kτ, (2k+2)τ).

Proposition 1

Let f ∈ C¹(ℝ) with f(x*) = 0 and f′(x*) = λ > 0. If the parameters ϵ and τ verify:

−2cosh(λτ)/(λτ) < ϵ < −2sinh(λτ)/(λτ)  (3)

then x* is an asymptotically stable equilibrium point of the controlled system (2).

Proof

Putting δx = x − x* and g(δx) = f(x* + δx), system (1) becomes δẋ = g(δx) with g(0) = 0 and g′(0) = λ, while (2) becomes δẋ = g(δx) + ϵ(t) δẋ(t − τ); so, without loss of generality, we can assume x* = 0 and f′(0) = λ.

Let us fix k ≥ 0 such that there exists a unique continuous solution x(t) of (2) on [2kτ, (2k+2)τ) with initial condition x(2kτ) = x_{2k} (the existence of such a k is guaranteed by Remark 3 for k = 0, taking an adequate x₀).

As a consequence of Remark 1, the map P determined by x_{2k+2} = P(x_{2k}), where

x_{2k+2} = lim_{t→(2k+2)τ⁻} x(t)  (4)

is well defined. From Remark 2, 0 is a fixed point of P. The map P results from the composition of p and p̃ given by:

p : x_{2k+1} = x((2k+1)τ) = p(x_{2k})

and

p̃ : x_{2k+2} = p̃(x_{2k+1}),

so that P′(0) = p̃′(0) · p′(0).

Let ϕ(t, x_{2k}) be the solution of ẋ = f(x) with initial condition x(2kτ) = x_{2k} on [2kτ, (2k+1)τ). This solution satisfies:

ϕ(t, x_{2k}) = x_{2k} + ∫_{2kτ}^{t} f(ϕ(s, x_{2k})) ds.

As f is C¹, by differentiation under the integral sign it is deduced that

∂ϕ/∂x_{2k}(t, x_{2k}) = 1 + ∫_{2kτ}^{t} f′(ϕ(s, x_{2k})) ∂ϕ/∂x_{2k}(s, x_{2k}) ds

and it results:

∂ϕ/∂x_{2k}((2k+1)τ, 0) = e^{λτ}.

Since p(x_{2k}) = ϕ((2k+1)τ, x_{2k}), then p′(0) = e^{λτ}.

Besides, with ϕ(t, x_{2k+1}) the solution of ẋ = f(x) + ϵẋ(t − τ) on [(2k+1)τ, (2k+2)τ) with initial condition x((2k+1)τ) = x_{2k+1}, it satisfies:

ϕ(t, x_{2k+1}) = x_{2k+1} + ∫_{(2k+1)τ}^{t} [f(ϕ(s, x_{2k+1})) + ϵ ϕ̇(s − τ, x_{2k})] ds.

Analogously to the first part, it is obtained that

∂ϕ/∂x_{2k+1}(t, x_{2k+1}) = 1 + ∫_{(2k+1)τ}^{t} f′(ϕ(s, x_{2k+1})) ∂ϕ/∂x_{2k+1}(s, x_{2k+1}) ds + ϵ [∂ϕ/∂x_{2k}(t − τ, x_{2k}) − 1] · (1/p′(x_{2k})),

yielding

∂ϕ/∂x_{2k+1}((2k+2)τ, 0) = e^{λτ} + ϵλτ.

As p̃(x_{2k+1}) = ϕ((2k+2)τ, x_{2k+1}), then p̃′(0) = e^{λτ} + ϵλτ.

As f is C¹, the continuous differentiability of the solution ϕ(t, ·) with respect to initial conditions follows ([29]). Then p and p̃ are C¹, and therefore P is C¹ too. Moreover,

P′(0) = e^{λτ}(e^{λτ} + ϵλτ)  (5)

which is of modulus less than 1 iff ϵ and τ verify (3), and then 0 is an asymptotically stable fixed point of P. In turn, this yields the asymptotic stability of the equilibrium point of (2), as follows.

Let us fix ϵ and τ verifying (3). Then by (5), P′(0) = α with |α| < 1. As P(0) = 0, from the Mean Value Theorem, P(x) = P′(ξ)x for some ξ between 0 and x. As P′ is continuous and |P′(0)| = |α| < 1, fixing α̃ with |α| < |α̃| < 1, it holds that |P(x)| < |α̃||x| < |x| for all x with |x| < δ̃, for δ̃ sufficiently small. Then, if |x_{2k}| < δ̃:

|x_{2k+2}| < |α̃| |x_{2k}|  (6)

Let us fix Δ > 0 and δ = min{δ̃, δ₀}, where δ₀ is as in Remark 3. Then, taking |x(0)| < δ, the construction of the map P as well as formulas (5) and (6) are deduced inductively on k. In particular, it results |x_{2k}| < δ for all k ≥ 0, which together with (4) yields the existence of a unique (continuous) solution of (2) for all t ≥ 0. Moreover, |x(t)| < Δ for all t ≥ 0, and the stability of the origin is shown. In turn, (6) yields lim_{k→∞} x_{2k} = 0, which implies lim_{k→∞} x_{2k+1} = 0, and it results lim_{t→∞} x(t) = 0, so asymptotic stability is obtained. □

Remark 4

For each (ϵ, τ) verifying (3), there exists α ∈ (−1, 1) such that

ϵ = e^{−λτ}(α − e^{2λτ})/(λτ)  (7)

and vice versa. Indeed, α = P′(0).
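The interplay between (3), (5) and (7) can be cross-checked numerically. The following minimal Python sketch (illustrative values λ = 2, τ = 0.2, chosen here; not part of the original text) verifies that the gain produced by (7) yields P′(0) = α, and that the endpoints α = −1 and α = 1 recover the bounds in (3):

```python
import math

def eps_from_alpha(alpha, lam, tau):
    # relation (7): eps = e^{-lam*tau} (alpha - e^{2*lam*tau}) / (lam*tau)
    return math.exp(-lam * tau) * (alpha - math.exp(2 * lam * tau)) / (lam * tau)

def multiplier(eps, lam, tau):
    # relation (5): P'(0) = e^{lam*tau} (e^{lam*tau} + eps*lam*tau)
    return math.exp(lam * tau) * (math.exp(lam * tau) + eps * lam * tau)

lam, tau = 2.0, 0.2
for alpha in (-0.4, 0.0, 0.8):
    assert abs(multiplier(eps_from_alpha(alpha, lam, tau), lam, tau) - alpha) < 1e-12

# alpha = -1 and alpha = 1 give the lower and upper bounds of (3)
assert abs(eps_from_alpha(-1.0, lam, tau) + 2 * math.cosh(lam * tau) / (lam * tau)) < 1e-12
assert abs(eps_from_alpha(1.0, lam, tau) + 2 * math.sinh(lam * tau) / (lam * tau)) < 1e-12
```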

It is worth pointing out some other features of this strategy. Let us note that in Figs. 1, 2, 3, 4, 5 and 6, state and control signals are shown in red for the free system and in blue for the controlled system; moreover, u(t) = ϵ(t)ẋ(t − τ). For the particular case in which the function is linear, that is, f(x) = λx, the map P is also linear, namely P(x) = αx, and global asymptotic stability results. The incidence of α on the convergence speed clearly arises: let us take f(x) = 2x as an example and apply (2) with τ = 0.2, x₀ = 0.5, and α = −0.4 or α = 0.8 (in Fig. 1 the resulting trajectories are confronted). The incidence of the control parameter τ on the convergence may also be appreciated: the smaller τ, the faster the convergence. Taking again f(x) = 2x and α = −0.4, 0.8, and changing τ to 0.4, the speed of convergence is slower (Fig. 2) than in the respective first examples (Fig. 1).

Figure 1. State behavior and control performance of system (2) with f(x) = 2x, x₀ = 0.5, τ = 0.2. (a) α = −0.4, (b) α = 0.8.

Figure 2. State behavior and control performance of system (2) with f(x) = 2x, x₀ = 0.5, τ = 0.4. (a) α = −0.4, (b) α = 0.8.

Figure 3. State behavior and control performance of system (2) for x₀ = 0.5, τ = 0.2, α = −0.4. (a) f(x) = 2x + x², (b) f(x) = 2x + x³, (c) f(x) = 2x − x³, (d) f(x) = 2x + sin²(x).

Figure 4. State behavior and control performance of system (2) for f(x) = 2x(x − 1) with x₀ = 0.5, τ = 0.2. (a) α = −0.4, (b) α = 0.8.

Figure 5. Exponential stability for system (2) with x₀ = 0.5, τ = 0.2 and α = −0.4: (a) f(x) = 2x; (b) f(x) = 2x − x³.

Figure 6. State behavior and control performance of system (2) with f(x) = 2x, x₀ = 0.5, α = 0. (a) τ = 0.2, (b) τ = 0.02.

For the general nonlinear case, Proposition 1 only guarantees local asymptotic stability. Then, signal convergence is achieved if the initial condition is taken near enough to the equilibrium point. This is appreciated in the examples of Figs. 3 and 4. For the four nonlinear systems considered in Fig. 3, the origin is an unstable equilibrium (not the unique one in all of them) with derivative equal to 2. The resulting signals when applying strategy (2) with α = −0.4, τ = 0.2 and x₀ = 0.5 are displayed (confront with Fig. 1(a)). Let us note that the transitory behaviors of the state and control signals look different from each other due to the influence of the respective nonlinear terms. In Fig. 4, the stabilization of x* = 1 as an unstable equilibrium point of f(x) = 2x(x − 1) is dealt with. State and control signals resulting from applying control strategy (2) with α = −0.4 and 0.8, τ = 0.2 and x₀ = 0.5 may be compared to the corresponding ones of Fig. 1.

Signal exponential convergence is also revealed. The exponential decay curves enveloping the signal, as displayed in Fig. 5, put this feature even more in evidence. In fact, for fixed ϵ and τ, with x(t) the solution of (2) and α determined by (7), it is not difficult to prove that, given a small μ > 0, there exists δ_μ such that if |x₀| ≤ δ_μ:

c_m e^{(ln(|α|−μ)/(2τ)) t} |x₀| ≤ |x(t)| ≤ c_M e^{(ln(|α|+μ)/(2τ)) t} |x₀|   if α ≠ 0  (8)

and

0 ≤ |x(t)| ≤ c_M e^{(ln μ/(2τ)) t} |x₀|   if α = 0

for certain positive constants c_m, c_M. For linear systems, (8) is also valid with μ = 0. Fig. 5(a) represents the upper inequality in one of these cases. This inequality may be verified even in the nonlinear case by taking |x₀| small enough (Fig. 5(b)).

Hence, a convergence rate β of algorithm (2) may be stated as:

β = −ln|α|/(2τ) if α ≠ 0, and β = ∞ if α = 0  (9)
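In the linear case the samples obey |x(2kτ)| = |α|^k |x₀|, which is exactly the envelope |x₀| e^{−2βkτ} with β as in (9); a minimal Python check under the illustrative values α = −0.4, τ = 0.2, x₀ = 0.5 (values chosen here for illustration):

```python
import math

alpha, tau, x0 = -0.4, 0.2, 0.5
beta = -math.log(abs(alpha)) / (2 * tau)      # rate (9) for alpha != 0
for k in range(1, 8):
    exact = abs(alpha) ** k * abs(x0)         # |x_{2k}| for the linear map P(x) = alpha*x
    envelope = abs(x0) * math.exp(-beta * 2 * k * tau)
    assert abs(exact - envelope) < 1e-12
```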

Although the rate of convergence is optimized by taking α equal to zero and τ as small as possible, if τ is chosen too small the control magnitude takes very large values during the transient. For example, the influence of the τ-value on the trajectory behavior and on the control cost, resulting from applying the method to f(x) = 2x with x₀ = 0.5 and α = 0 for τ = 0.2 and τ = 0.02, is illustrated in Fig. 6. Namely, the scale change is fully appreciated by confronting the control signals of Fig. 6(a) and Fig. 6(b).

This phenomenon is better understood by paying attention to the stability parameters region, i.e. the region of control parameter values for which the stability objective is achieved. This region, described analytically in (3), is illustrated in Fig. 7(a). The lower and upper bounds of |ϵ| are the curves defined by α = 1 and α = −1, respectively (Fig. 7(c)). Note that if τ is near zero, for any α there is a dramatic increase of |ϵ| (Fig. 7(b)), so affecting the control performance. Additionally, by a standard analytic study of real functions, it is deduced that for a fixed α there exists a unique τ* that minimizes the absolute value of the control gain; namely, τ* satisfies the implicit equation τ* = (1/λ)(1 − 2α/(α + e^{2λτ*})). Hence the choice of adequate α and τ depends on a compromise between rate of convergence and control magnitude.
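The implicit equation for τ* can be solved by fixed-point iteration on u = λτ*, and the result checked to be a stationary point of |ϵ(τ)| as given by (7). A Python sketch under the illustrative values λ = 2, α = −0.4 (chosen here, not prescribed by the text):

```python
import math

lam, alpha = 2.0, -0.4

def eps_abs(tau):
    # |eps| from (7); eps < 0 on the whole region since e^{2*lam*tau} > 1 > alpha
    return (math.exp(lam * tau) - alpha * math.exp(-lam * tau)) / (lam * tau)

# fixed-point iteration u -> 1 - 2*alpha/(alpha + e^{2u}) for u = lam*tau*
u = 1.0
for _ in range(100):
    u = 1.0 - 2.0 * alpha / (alpha + math.exp(2.0 * u))
tau_star = u / lam

# tau* should be a stationary point of |eps(tau)| (central finite difference)
h = 1e-6
deriv = (eps_abs(tau_star + h) - eps_abs(tau_star - h)) / (2 * h)
assert abs(deriv) < 1e-6
```

The iteration converges here because the slope of the iterated map at the fixed point is small (about −0.2 for these values); for other α a bracketing search on |ϵ| can be used instead.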

Figure 7. In red: (a) Stability parameters region of (2). (b) Zoom in for −25 < ϵ ≤ 0. (c) Zoom in for 0 < λτ ≤ 0.5.

Another important aspect to point out is robustness with respect to λ. Clearly, relationship (3) (or, equivalently, (7)) determines each pair of stabilizing control parameters (ϵ, τ) from exact knowledge of λ. This may be unrealistic; consider instead an estimated value λ̄ of λ such that |λ − λ̄| < Δλ for a known bound Δλ. It is easy to deduce that, for small Δλ, if ϵ = e^{−λ̄τ}(ᾱ − e^{2λ̄τ})/(λ̄τ) with |ᾱ| < 1, there exists α with |α| < 1 such that the control objective is achieved with rate of convergence (9) depending on α.

Comment: the stability proof for system (2) does not treat it as a neutral functional differential equation. Instead, it is conducive to the design of a second strategy (which does not yield a system of neutral type) to achieve the same control objective.

3. ODFC method based on delayed states difference

It is easy to verify in the scalar case that, if the oscillating perturbation involves the difference between the current state and a delayed state while the definition of the non-constant gain ϵ(t) is maintained as in (2), stabilization cannot be achieved for any control parameters (not even for the generalized version proposed in [19]). In this proposal, the difference between two delayed states is introduced into the perturbation:

ẋ(t) = f(x(t)) + ϵ(t)(x(t − 2τ) − x(t − τ))  (10)

where

ϵ(t) = 0 if 3kτ ≤ t < (3k+2)τ, and ϵ(t) = ϵ if (3k+2)τ ≤ t < (3k+3)τ, for k ∈ ℕ ∪ {0}.

Let us note that the ratio between active and non-active control periods differs from the ratio in the first method.

As in the first method, x* is preserved as an equilibrium point. System (10) also comes out as a nonlinear dynamical system given by a piecewise-smooth differential equation. It will also be possible to state a range of ϵ, depending on λ and τ, such that if this strategy is applied with an initial condition in a neighborhood of the equilibrium point, the control objective is fulfilled. The solution of (10) on each interval [3kτ, (3k+3)τ) has the same features outlined for the solution of (2) on [2kτ, (2k+2)τ) in Remark 1, Remark 2, Remark 3. The stabilization proof follows steps analogous to the respective proof for the first method.
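As for the first method, scheme (10) admits a short numerical sketch (again with illustrative assumptions chosen here: Python, forward Euler, linear test system f(x) = 2x, τ = 0.2, x₀ = 0.5, and the gain from relation (12) of Remark 5 below targeting α = −0.4); the only changes are the 3τ cycle, the activation on its last third, and the delayed-states difference in place of the delayed velocity:

```python
import math

def simulate_odfc2(lam=2.0, tau=0.2, alpha=-0.4, x0=0.5, cycles=3, n=8000):
    """Forward-Euler integration of xdot(t) = lam*x(t) + eps(t)*(x(t-2*tau) - x(t-tau))."""
    # gain chosen via relation (12) so that the cycle map has multiplier alpha
    eps = (math.exp(-lam * tau) * (math.exp(3 * lam * tau) - alpha)
           / (tau * (math.exp(lam * tau) - 1)))
    dt = tau / n
    x = [x0]                           # state on the time grid
    samples = [x0]                     # state sampled at t = 0, 3*tau, 6*tau, ...
    for i in range(3 * n * cycles):
        active = (i // n) % 3 == 2     # eps(t) = eps on the last third of each 3*tau cycle
        d = lam * x[i]
        if active:
            d += eps * (x[i - 2 * n] - x[i - n])   # delayed states difference
        x.append(x[i] + dt * d)
        if (i + 1) % (3 * n) == 0:
            samples.append(x[i + 1])
    return samples

samples = simulate_odfc2()
```

The state sampled every 3τ should again contract by the factor α per cycle, up to the discretization error.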

Proposition 2

Let f ∈ C¹(ℝ) with f(x*) = 0 and f′(x*) = λ > 0. If the parameters ϵ and τ verify:

(e^{3λτ} − 1)/(τ e^{λτ}(e^{λτ} − 1)) < ϵ < (e^{3λτ} + 1)/(τ e^{λτ}(e^{λτ} − 1))  (11)

then x* is an asymptotically stable equilibrium point of the controlled system (10).

Proof

Putting δx = x − x* and g(δx) = f(x* + δx), system (1) becomes δẋ = g(δx) with g(0) = 0 and g′(0) = λ, while (10) becomes δẋ = g(δx) + ϵ(t)(δx(t − 2τ) − δx(t − τ)); so, without loss of generality, we can assume x* = 0 and f′(0) = λ. As in Proposition 1, there exists k ≥ 0 for which existence, uniqueness and continuity of the solutions on [3kτ, (3k+3)τ) are guaranteed.

Here, the map P defined by x_{3k+3} = lim_{t→(3k+3)τ⁻} x(t) = P(x_{3k}) has the origin as a fixed point, and P′(0) = p̃′(0) · p′(0) with:

p : x_{3k+2} = x((3k+2)τ) = p(x_{3k})

and

p̃ : x_{3k+3} = p̃(x_{3k+2}).

As in Proposition 1, the solutions of (10) on the respective intervals [3kτ, (3k+2)τ) and [(3k+2)τ, (3k+3)τ) are worked out by integral formulation, and differentiation under the integral sign is valid because f is C¹.

Namely, let ϕ(t, x_{3k}) be the solution of (10) on [3kτ, (3k+2)τ) with initial condition x(3kτ) = x_{3k}. It results:

p′(0) = ∂ϕ/∂x_{3k}((3k+2)τ, 0) = e^{2λτ}.

Likewise, for ϕ(t, x_{3k+2}) the solution of (10) on [(3k+2)τ, (3k+3)τ) with initial condition x((3k+2)τ) = x_{3k+2}, it is obtained:

p̃′(0) = ∂ϕ/∂x_{3k+2}((3k+3)τ, 0) = e^{λτ} + ϵτ(1 − e^{λτ})e^{−λτ}.

Therefore,

P′(0) = e^{3λτ}[1 + ϵτ(1 − e^{λτ})e^{−2λτ}]

which is of modulus less than 1 iff ϵ and τ verify (11).

As in Proposition 1, it is shown that, if ϵ and τ verify (11), there exists δ̃ such that if |x_{3k}| < δ̃:

|x_{3k+3}| < |α̃| |x_{3k}|

for certain α̃ with |α̃| < 1. The arguments to state the existence of a unique continuous solution of (10) for all t ≥ 0 and to prove the asymptotic stability of the equilibrium point follow the same technical resources as in Proposition 1. □

Remark 5

As for the first method, introducing α = P′(0) ∈ (−1, 1), the relationship (11) may be formulated through:

ϵ = e^{−λτ}(e^{3λτ} − α)/(τ(e^{λτ} − 1))  (12)

for α with |α| < 1.
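A numerical cross-check analogous to the one for the first method (illustrative values λ = 2, τ = 0.2, chosen here): the gain given by (12) should reproduce the multiplier P′(0) = e^{3λτ}[1 + ϵτ(1 − e^{λτ})e^{−2λτ}] obtained in the proof of Proposition 2:

```python
import math

def eps_from_alpha(alpha, lam, tau):
    # relation (12): eps = e^{-lam*tau} (e^{3*lam*tau} - alpha) / (tau (e^{lam*tau} - 1))
    return (math.exp(-lam * tau) * (math.exp(3 * lam * tau) - alpha)
            / (tau * (math.exp(lam * tau) - 1)))

def multiplier(eps, lam, tau):
    # from the proof: P'(0) = e^{3*lam*tau} [1 + eps*tau*(1 - e^{lam*tau}) e^{-2*lam*tau}]
    return math.exp(3 * lam * tau) * (
        1 + eps * tau * (1 - math.exp(lam * tau)) * math.exp(-2 * lam * tau))

lam, tau = 2.0, 0.2
for alpha in (-0.4, 0.0, 0.8):
    assert abs(multiplier(eps_from_alpha(alpha, lam, tau), lam, tau) - alpha) < 1e-9
```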

Hence, for the general nonlinear case, Proposition 2 guarantees local asymptotic stability of x* as an equilibrium point of the controlled system (10). Comments on this method's control performance are quite similar to those for the first method. For illustration see Figs. 8, 9, 10 and 11, where u(t) = ϵ(t)(x(t − 2τ) − x(t − τ)), while red and blue indicate the free and the controlled system, respectively.

Figure 8. State behavior and control performance of system (10) with f(x) = 2x, x₀ = 0.5, τ = 0.2. (a) α = −0.4, (b) α = 0.8.

Figure 9. State behavior and control performance of system (10) with f(x) = 2x, x₀ = 0.5, τ = 0.4. (a) α = −0.4, (b) α = 0.8.

Figure 10. State behavior and control performance of system (10) for x₀ = 0.5, τ = 0.2, α = −0.4. (a) f(x) = 2x + x², (b) f(x) = 2x + x³, (c) f(x) = 2x − x³, (d) f(x) = 2x + sin²(x).

Figure 11. State behavior and control performance of system (10) for f(x) = 2x(x − 1) with x₀ = 0.5, τ = 0.2. (a) α = −0.4, (b) α = 0.8.

The exponential decay is also valid in this case (see Fig. 12):

c_m e^{(ln(|α|−μ)/(3τ)) t} |x₀| ≤ |x(t)| ≤ c_M e^{(ln(|α|+μ)/(3τ)) t} |x₀|   if α ≠ 0,  and  0 ≤ |x(t)| ≤ c_M e^{(ln μ/(3τ)) t} |x₀|   if α = 0,

for certain positive constants c_m, c_M.

Figure 12. Exponential stability for system (10) with x₀ = 0.5, τ = 0.2 and α = −0.4: (a) f(x) = 2x; (b) f(x) = 2x − x³.

And the convergence rate β of algorithm (10) comes out:

β = −ln|α|/(3τ) if α ≠ 0, and β = ∞ if α = 0  (13)

Equation (11) states the stability parameters region of this method and yields the graphics displayed in Fig. 13. Considerations on an appropriate choice of the design control parameters are as in the first method. In particular, it is convenient to choose τ near τ*, the value minimizing ϵ/λ, which for a fixed α is implicitly given by:

τ* = (1/λ) · (e^{2λτ*} − αe^{−λτ*})(e^{λτ*} − 1) / (e^{3λτ*} − 2e^{2λτ*} − αe^{−λτ*} + 2α),

obtained by applying standard tools of real calculus.
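The implicit characterization of τ* above can be checked against a direct minimization of ϵ(τ) from (12); a Python sketch with illustrative values λ = 2, α = −0.4 (chosen here; for this α the gain ϵ is positive over the whole region, so ϵ itself is minimized):

```python
import math

lam, alpha = 2.0, -0.4

def eps2(tau):
    # relation (12)
    return (math.exp(-lam * tau) * (math.exp(3 * lam * tau) - alpha)
            / (tau * (math.exp(lam * tau) - 1)))

# eps2 blows up both as tau -> 0 and as tau grows, so a grid search suffices
taus = [0.01 + 1e-4 * i for i in range(20000)]          # tau in (0.01, 2.01)
tau_star = min(taus, key=eps2)

# the minimizer should satisfy the implicit equation for u = lam*tau*
u = lam * tau_star
rhs = ((math.exp(2 * u) - alpha * math.exp(-u)) * (math.exp(u) - 1)
       / (math.exp(3 * u) - 2 * math.exp(2 * u) - alpha * math.exp(-u) + 2 * alpha))
assert abs(u - rhs) < 1e-2
```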

Figure 13. In red: (a) Stability parameters region of (10). (b) Zoom in for 0 < ϵ/λ ≤ 25. (c) Zoom in for 0 < λτ ≤ 0.5.

Robustness with respect to λ may be stated in the same terms as for the first method. Namely, let λ̄ be an estimate of λ with |λ − λ̄| < Δλ. For ϵ = e^{−λ̄τ}(e^{3λ̄τ} − ᾱ)/(τ(e^{λ̄τ} − 1)) with |ᾱ| < 1, there exists α with |α| < 1 such that the control objective is achieved with rate of convergence (13) depending on α.

4. Concluding remarks and future research

Two methods based on ODFC schemes for the continuous-time case have been dealt with. The first one coincides with the proposal of [24], based on a delayed velocity term, but extended to the general nonlinear case. Interestingly, the control strategy does not work if the velocity term is replaced by a difference between the current state and one delayed state while keeping the control gain periodicity ([19], [24]). Then, for the design of the second method, the perturbation has been replaced by one involving two delayed states; moreover, a 2:1 ratio between non-active and active control periods has been introduced. This last feature also appears in the proposal of [23], where the perturbation is based on the difference between the current and a delayed state. Compared to the second proposal of this work, the strategy of [23] involves more control parameters, and a full description of the stability parameters region is not provided. Indeed, although the second method may be considered the novelty of this work, it has been worthwhile to expose the methodology for proving the achievements of the first strategy, which has been straightforwardly transferred to prove analogous features of the second one. Hence, for both of them, local stabilization of an equilibrium point in the general nonlinear scalar case has been rigorously proven. The key ingredient of the proof is the building of a discrete-time map which reflects the dynamics of the controlled system. Let us emphasize that the controlled system is a discontinuous time-delayed system, but the associated discrete-time system is described by a C¹ map, so stability is obtained from its linearization, which can be computed for any nonlinear system. Then, from continuous dependence on initial conditions, the stabilization of the continuous-time system is deduced. Additionally, the stability parameters region is explicitly described and, in particular, the parameter values for deadbeat control (α = 0) are easily obtained. In turn, this yields another important feature, mainly for practical implementation: the robust dependence of the stability parameters with respect to the derivative value at the equilibrium point.

From extensive simulation work it may be claimed that the first method displays better control performance features than the second one. This is even appreciated by confronting the few examples of Section 2 with the respective examples of Section 3. Namely, from the exponential bound of the solution, a quantification of the rate of convergence has been stated. This index of convergence and a detailed analysis of the stability parameters region confirm the claimed conjectures.

These strategies may be developed to stabilize equilibrium points in the n-dimensional case under adequate observability and controllability conditions, without presenting the restrictions of the DFC methods studied in [15] and [30]. More interestingly, a suitable extension of our second method appears as a candidate for overcoming the ONL, coming out as an alternative to [19] and [23]. In [19], an “on-off switching” feedback gain is introduced; however, it does not work in the one-dimensional case. The quite recently published approach [23] is designed as a Pyragas feedback perturbation that depends on y(t) − y(t − τ) with a 3τ-periodic piecewise-constant control gain; it is shown that it works if the Jacobi matrix has m positive real eigenvalues of unit multiplicity (odd m) and the rest lie in the left half of the complex plane. The analytical proof for these strategies completely disregards the nonlinear nature of the controlled system. This may be well accomplished in our proposal by building an associated map as in the one-dimensional case, while a full description of the stability parameters region and an index of convergence could be given.

As the second method avoids the computation of the derivative, its numerical implementation may be more efficient, just because it is not desirable to produce the derivative signal ẋ(t) from noisy measurements of x(t). Moreover, its extension to the stabilization of UPO's is quite simple. Suppose that x̃(t) is a UPO and its period T is known. By introducing δx = x − x̃(t), the oscillating feedback control based on delayed states becomes:

u(t) = K(t)[δx(t − 2T) − δx(t − T)] = K(t)[x(t − 2T) − x(t − T)]

where K(t) is the oscillating control gain. As in the Pyragas method, it does not require the exact location of the UPO to be stabilized. So stated, it appears as an alternative to the proposals in [16], [21], [22]. The problem of UPO stabilization yields the problem of stabilizing the origin in the non-autonomous n-dimensional case. Note that the kind of periodicity that defines K(t) is quite similar to the switching on and off of the “act-and-wait” time-delayed feedback control used in these works, so the extension of our scheme to UPO stabilization could contribute to advancing these issues. Moreover, EDFC ideas could be introduced into ODFC methods for dealing with high-instability scenarios. These problems and, additionally, their application to controlling chaos, i.e., to equilibrium points and UPO's embedded in a strange attractor, are part of our future research.

Declarations

Author contribution statement

Veronica E. Pastor, Graciela A. González: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools or data; Wrote the paper.

Funding statement

This work was supported by UBACyT 2014-2017 (20020130200093 BA GEF).

Competing interest statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Acknowledgements

The authors would like to thank the anonymous reviewers and the editor for their valuable comments and suggestions.

References

  • 1.Pyragas K. Continuous control of chaos by self-controlling feedback. Phys. Lett. A. 1992;170:421–428. [Google Scholar]
  • 2.Kuznetsov N.V., Leonov G.A., Shumafov M.M. A short survey on Pyragas time-delay feedback stabilization and odd number limitation. IFAC-PapersOnLine. 2015;48(11):706–709. [Google Scholar]
  • 3.Song Y., Wei J. Bifurcation analysis for Chen's system with delayed feedback and its application to control of chaos. Chaos Solitons Fractals. 2004;22:75–91. [Google Scholar]
  • 4.Ding Y., Jiang W., Wang H. Delayed feedback control and bifurcation analysis of Rossler chaotic system. Nonlinear Dyn. 2010;61:707–715. [Google Scholar]
  • 5.Lei A., Ji L., Xu W. Delayed feedback control of a chemical chaotic model. Appl. Math. Model. 2009;33:677–682. [Google Scholar]
  • 6.Yang K., Zhang L., Zhang J. Stability analysis of three-dimensional energy demand-supply system under delayed feedback control. Kybernetika. 2015;51:1084–1100. [Google Scholar]
  • 7.Socolar J.E.S., Sukow D.W., Gauthier D.J. Stabilizing unstable periodic orbits in fast dynamical systems. Phys. Rev. E. 1994;50:3245–3248. doi: 10.1103/physreve.50.3245. [DOI] [PubMed] [Google Scholar]
  • 8.Pyragas K. Control of chaos via extended delay feedback. Phys. Lett. A. 1995;206:323–330. [Google Scholar]
  • 9.Rong Y., Wen H. An extended delayed feedback control method for the two-lane traffic flow. Nonlinear Dyn. 2018;94:2479. [Google Scholar]
  • 10.Costa D.D.A., Savi M.A. Chaos control of an SMA-pendulum system using thermal actuation with extended time-delayed feedback approach. Nonlinear Dyn. 2018;93(2):571–583. [Google Scholar]
  • 11.Nakajima H. On analytical properties of delayed feedback control of chaos. Phys. Lett. A. 1997;232:207–208. [Google Scholar]
  • 12.Nakajima H., Ueda Y. Limitation of generalized delayed feedback control. Physica D. 1998;111:143–150. [Google Scholar]
  • 13.Hooton E.W., Amman A. An analytical limitation for time-delayed feedback control in autonomous systems. Phys. Rev. Lett. 2012;109 doi: 10.1103/PhysRevLett.109.154101. [DOI] [PubMed] [Google Scholar]
  • 14.Amman A., Hooton E.W. An odd number limitation of extended time-delayed feedback control in autonomous systems. Philos. Trans. A Math. Phys. Eng. Sci. 2013;371 doi: 10.1098/rsta.2012.0463. [DOI] [PubMed] [Google Scholar]
  • 15.Kokame H., Hirata K., Konishi K., Mori T. Difference feedback can stabilize uncertain steady states. IEEE Trans. Autom. Control. 2001;46:1908–1913. [Google Scholar]
  • 16.Leonov G.A. Pyragas stabilizability via delayed feedback with periodic control gain. Syst. Control Lett. 2014;69:34–37. [Google Scholar]
  • 17.Insperger T. Act-and-wait concept for continuous-time control system with feedback delay. IEEE Trans. Control Syst. Technol. 2006;14(5):974–977. [Google Scholar]
  • 18.Insperger T., Stepan G. On the dimension reduction of systems with feedback delay by act-and-wait control. IMA J. Math. Control Inf. 2010;27(4):457–473. [Google Scholar]
  • 19.Konishi H., Kokame K., Hara N. Delayed feedback control based on the act-and-wait concept. Nonlinear Dyn. 2011;63(3):513–519. [Google Scholar]
  • 20.Pyragas V., Pyragas K. Act-and-wait time delayed feedback control of nonautonomous systems. Phys. Rev. E. 2016;94 doi: 10.1103/PhysRevE.94.012201. [DOI] [PubMed] [Google Scholar]
  • 21.Pyragas V., Pyragas K. Act-and-wait time delayed feedback control of autonomous systems. Phys. Lett. A. 2018;382:574–580. [Google Scholar]
  • 22.Cetinkaya A., Hayakawa T., Taib M.A.F.b.M. Stabilizing unstable periodic orbits with delayed feedback control in act-and-wait fashion. Syst. Control Lett. 2018;113:71–77. [Google Scholar]
  • 23.Leonov G.A., Shumafov M.M. Pyragas stabilizability of unstable equilibria by nonstationary time-delayed feedback. Autom. Remote Control. 2018;6:1029–1039. [Google Scholar]
  • 24.Schuster H.G., Stemmler M.B. Control of chaos by oscillating feedback. Phys. Rev. E. 1997;56(6):6410–6416. [Google Scholar]
  • 25.Morgul O. On the stabilization of periodic orbits for discrete time chaotic systems. Phys. Lett. A. 2005;335:127–138. [Google Scholar]
  • 26.Pyragas K. Control of chaos via an unstable delayed feedback controller. Phys. Rev. Lett. 2001;86:2265–2268. doi: 10.1103/PhysRevLett.86.2265. [DOI] [PubMed] [Google Scholar]
  • 27.Pastor V.E., González G.A. Analysis and comparison of two oscillatory feedback control schemes for stabilizing equilibrium points. Proc. Ser. Braz. Soc. Comput. Appl. Math. 2016;4(1) [Google Scholar]
  • 28.Khalil H. 2nd edition. Prentice Hall; Englewood Cliffs, NJ: 1996. Nonlinear Systems. [Google Scholar]
  • 29.Perko L. third edition. Springer-Verlag; New York: 2001. Differential Equations and Dynamical Systems. [Google Scholar]
  • 30.Leonov G.A., Shumafov M.M., Kuznetsov N.V. Delayed feedback stabilization of unstable equilibria. Proceeding of the 19th World Congress; IFAC; 2014. pp. 6818–6825. [Google Scholar]
