Scientific Reports
. 2022 Feb 7;12:2049. doi: 10.1038/s41598-022-05663-4

Neuro-adaptive augmented distributed nonlinear dynamic inversion for consensus of nonlinear agents with unknown external disturbance

Sabyasachi Mondal 1, Antonios Tsourdos 1
PMCID: PMC8821713  PMID: 35132111

Abstract

This paper presents a novel neuro-adaptive augmented distributed nonlinear dynamic inversion (N-DNDI) controller for consensus of nonlinear multi-agent systems in the presence of unknown external disturbance. N-DNDI blends a neural network with distributed nonlinear dynamic inversion (DNDI), a new consensus control technique that inherits the features of Nonlinear Dynamic Inversion (NDI) and can handle unknown external disturbances. The implementation of NDI-based consensus control together with neural networks is unique in the context of multi-agent consensus. The mathematical details provided in this paper establish a solid theoretical base, and simulation results prove the effectiveness of the proposed scheme.

Subject terms: Engineering, Aerospace engineering, Electrical and electronic engineering

Introduction

Cooperation among agents, i.e., consensus, is a fundamental requirement for executing a complex task cooperatively. In real-world scenarios, agents face a variety of issues while reaching consensus. These issues are associated with communication among the agents, plant uncertainty, and unknown external disturbances. The first does not affect the agent dynamics, but the latter two do, and can result in mission failure. Considering the importance of a mission, researchers have focused on designing adaptive controllers capable of handling unknown disturbances. These controllers implement adaptive control laws that combine a neural network (NN) based approximation scheme with conventional linear or nonlinear control theory, depending on the plant dynamics. The primary reason for selecting an NN is that it is an efficient technique for approximating unknown nonlinear functions1; the radial basis function (RBF) neural network in particular is widely used due to its simple structure. Such neuro-adaptive controllers have been proposed to solve a variety of consensus problems; a few examples are mentioned here. A leader-follower synchronization problem for uncertain dynamical nonlinear agents was solved using a neuro-adaptive scheme2. A cooperative tracking problem for agents with unknown dynamics3 was solved using a neural network-based controller. A bipartite consensus4 was achieved using a neural network to learn the uncertainties of the agents. Another leader-follower output consensus problem was solved5 using a neuro-adaptive controller for a class of uncertain heterogeneous non-affine pure-feedback multi-agent systems in the presence of time delay and input saturation. An adaptive leader-following consensus control problem for a class of strict-feedback agents6 was solved using neuro-adaptive control. An interesting example of a distributed finite-time formation tracking control problem for multiple unmanned helicopters was presented by Wang et al.7.
The authors used the radial basis function neural network (RBFNN) technique to design a novel finite-time multivariable neural network disturbance observer (FMNNDO) to approximate the unknown external disturbance and model uncertainty. In addition to nonlinear systems, a neural-network-based leaderless consensus control problem for fractional-order multi-agent systems (FOMASs) with unknown nonlinearities and unknown external disturbances was reported8. The effect of actuator faults on the asymptotic consensus convergence of nonlinear agents with unknown dynamics was discussed by Li et al.9. Other examples include an event-triggered consensus control problem for nonstrict-feedback nonlinear systems with a dynamic leader10, a fixed-time leader-follower consensus problem for multi-agent systems (MASs) with output constraints, unknown control direction, unknown system dynamics, and unknown external disturbance11, stochastic nonlinear multi-agent systems with input saturation12, etc.

These papers implemented a variety of nonlinear control techniques (e.g., feedback linearization, Lyapunov-based design, sliding mode, backstepping, etc.) together with a neural network approximation of the uncertainty and unknown disturbances. In this paper, we present a neuro-adaptive augmented distributed controller designed on the basis of Distributed Nonlinear Dynamic Inversion (DNDI)13; we name it N-Distributed NDI (N-DNDI). Note that the adaptive control expressions in the papers mentioned earlier contain a linear or nonlinear error feedback term to which an adaptive term is added. In contrast, N-DNDI is a new neuro-adaptive structure augmented within the DNDI framework. The primary reasons for selecting NDI are as follows.

  • The NDI is an effective way to design a controller for plants with nonlinear dynamics. The nonlinearities in the plant are eliminated by using feedback linearization theory. Moreover, the response of the closed-loop plant is similar to a stable linear system.

  • The NDI controller has many advantages, including (1) a simple, closed-form control expression; (2) easy implementation; (3) global exponential stability of the tracking error; (4) use of nonlinear kinematics in the plant inversion; and (5) a reduced need for individual gain tuning.

Many researchers have used NDI to solve their research problems. Enns et al.14 implemented NDI to design a flight controller. Singh et al.15 developed a controller for autonomous landing of a UAV. Padhi et al.16 described reactive obstacle avoidance schemes for UAVs in a Partial Integrated Guidance and Control (PIGC) framework using neuro-adaptive augmented dynamic inversion. Mondal et al.17 applied NDI to propose a formation flying scheme; they presented how NDI is implemented for tracking the leader's commands in terms of coordinates, velocity, and orientation. Caverly et al.18 used NDI to control the attitude of a flexible aircraft. Horn et al.19 designed a rotorcraft controller using dynamic inversion. Lombaerts et al.20 proposed NDI-based attitude control of a hovering quad tilt-rotor eVTOL vehicle.

The contributions are as follows.

  • In this paper, a novel neuro-adaptive Distributed NDI (N-DNDI) is proposed to achieve consensus among a class of nonlinear agents in the presence of unknown external disturbance. Note that DNDI is a new consensus protocol13, and the augmentation of a neural network with DNDI is a new formulation. Hence, the approach is new in the context of MASs and has not been reported in the literature.

  • The main advantage of N-DNDI is that it inherits the features of NDI. Moreover, the neural network augmentation provides a very good approximation of the unknown external disturbances. Therefore, N-DNDI is a well-suited combination for designing consensus controllers for nonlinear agents. The realistic simulation study justifies the effectiveness of blending DNDI and neural networks.

  • The formulation to accommodate the neuro-adaptive structure in the DNDI framework is a significant contribution. Moreover, the mathematical details for convergence are provided to show the solid theoretical base of this new controller.

The rest of the paper is organized as follows. In section “Preliminaries”, preliminaries are given. Section “Problem formulation” presents the problem definition. The mathematical details of the DNDI are provided in section “Nominal distributed nonlinear dynamic inversion (DNDI) controller”. The mathematical details of N-DNDI are given in section “Neuro-adaptive augmented DNDI for consensus”. The simulation study is presented in section “Simulation results”. The conclusion is given in section “Conclusion”.

Preliminaries

This section presents the topics relevant to the problem considered in this paper.

Consensus of multiple agents

The consensus of MASs over a communication network is discussed in this section. The definition of consensus is given as follows.

Definition 1

Consider a MAS with $N$ agents, where $X_i\ (i = 1, 2, \ldots, N)$ denotes the state of the $i$th agent. The MAS achieves consensus if $\|X_i - X_j\| \to 0$ for all $i \neq j$ as $t \to +\infty$.

The consensus protocol aims to minimize the error in similar states of the individual agent with their neighbour by sharing information over the communication network, which is generally described using graph theory.

Graph theory

The communication among the agents can be represented by a weighted graph $G = \{V, E\}$. The vertices $V = \{v_1, v_2, \ldots, v_N\}$ of the graph denote the agents, and the set of edges $E \subseteq V \times V$ represents the communication among them. The weighted adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ of $G$ is defined by $a_{ij} > 0$ if $(v_j, v_i) \in E$ and $a_{ij} = 0$ otherwise. There is no self-loop in the graph; this is expressed by setting the diagonal elements of $A$ to zero, i.e., $a_{ii} = 0$ for all $i \in V$. The degree matrix is $D = \text{diag}\{d_1, d_2, \ldots, d_N\} \in \mathbb{R}^{N \times N}$, where $d_i = \sum_{j \in N_i} a_{ij}$. The Laplacian matrix is $L = D - A$. A graph with the property $a_{ij} = a_{ji}$ is said to be undirected. If for any two nodes $v_i, v_j \in V$ there exists a path from $v_i$ to $v_j$, the graph is called connected. In this paper, we assume that the network topology $G$ is undirected and connected.
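The adjacency, degree, and Laplacian definitions above can be sketched numerically. The 5-node undirected ring below is an illustrative choice, not a topology from the paper; the eigenvalue check anticipates Lemma 1 (a connected undirected graph has one zero Laplacian eigenvalue, the rest positive).

```python
import numpy as np

def laplacian(A):
    """Return L = D - A, with D = diag(row sums of A), as defined in the text."""
    return np.diag(A.sum(axis=1)) - A

# Illustrative 5-node undirected ring (not from the paper).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

L = laplacian(A)
eigvals = np.sort(np.linalg.eigvalsh(L))  # L is symmetric for undirected graphs
# Connected undirected graph: one zero eigenvalue, all others positive.
print(np.round(eigvals, 6))
```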

Radial basis function neural networks (RBFNNs)

Due to their 'linear in the weights' property, neural networks are widely used to approximate unknown functions, and the radial basis function neural network (RBFNN) is a good candidate21. A continuous unknown nonlinear function $\zeta(X): \mathbb{R}^n \to \mathbb{R}^m$ can be approximated by

$\zeta(X) = W_{NN}^T \Phi(X) + \epsilon_X$  (1)

where $X \in \mathbb{R}^n$ is the input vector, $W_{NN} \in \mathbb{R}^{q \times m}$ is the RBF weight matrix, $\Phi(X) = [\phi_1(X) \cdots \phi_q(X)]^T$ denotes the basis function vector, $q$ denotes the number of neurons, and $\epsilon_X \in \mathbb{R}^m$ is the approximation error. The $i$th basis function $\phi_i$ is given by

$\phi_i(X) = \exp\left(-\frac{(X - \mu_i)^T (X - \mu_i)}{\psi_i^2}\right); \quad i = 1, 2, \ldots, q.$  (2)

where $\mu_i \in \mathbb{R}^n$ is the center of the $i$th receptor and $\psi_i$ is the width of the $i$th Gaussian function.
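The approximation of Eqs. (1)-(2) can be sketched as follows. The center grid, widths, and target function are illustrative choices (the weights are fitted here by least squares rather than by the adaptive law used later in the paper).

```python
import numpy as np

def rbf_features(X, centers, widths):
    """Phi(X) of Eq. (2): phi_i(X) = exp(-(X - mu_i)^T (X - mu_i) / psi_i^2)."""
    d2 = np.sum((centers - X) ** 2, axis=1)  # squared distance to each center
    return np.exp(-d2 / widths ** 2)

centers = np.linspace(-3, 3, 30).reshape(-1, 1)  # q = 30 receptor centers
widths = np.full(30, 0.5)

# Fit the weights W of Eq. (1) by least squares on a known test function.
Xs = np.linspace(-3, 3, 200).reshape(-1, 1)
Phi = np.stack([rbf_features(x, centers, widths) for x in Xs])
y = np.sin(2 * Xs[:, 0])
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)

err = np.max(np.abs(Phi @ W - y))  # plays the role of epsilon_X in Eq. (1)
print(err)
```

With enough overlapping Gaussians, the residual `err` is small, illustrating the universal-approximation property invoked in the text.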

Useful lemma

The lemmas used in this paper are given as follows.

Lemma 1

22 The Laplacian matrix $L$ of an undirected graph is positive semi-definite; it has a simple zero eigenvalue and all the other eigenvalues are positive if and only if the graph is connected. Therefore, $L$ is symmetric and has $N$ non-negative, real eigenvalues $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_N$.

Lemma 2

23 Let $\psi_1(t), \psi_2(t) \in \mathbb{R}^m$ be continuous positive vector functions. By the Cauchy inequality and Young's inequality, the following inequality holds:

$\psi_1^T(t)\psi_2(t) \le \|\psi_1(t)\|\,\|\psi_2(t)\| \le \frac{\|\psi_1(t)\|^{\lambda}}{\lambda} + \frac{\|\psi_2(t)\|^{\zeta}}{\zeta}$  (3)

where

$\frac{1}{\lambda} + \frac{1}{\zeta} = 1$

Lemma 3

24 Let $R(t) \in \mathbb{R}$ be a continuous positive function with bounded initial value $R(0)$. If the inequality $\dot{R}(t) \le -\beta R(t) + \eta$ holds, where $\beta > 0$ and $\eta > 0$, then the following inequality holds.

$R(t) \le R(0)e^{-\beta t} + \frac{\eta}{\beta}\left(1 - e^{-\beta t}\right)$  (4)
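Lemma 3 can be checked numerically: integrating the worst case $\dot{R} = -\beta R + \eta$ with forward Euler and comparing against the bound of Eq. (4). The values of $\beta$, $\eta$, and $R(0)$ are illustrative.

```python
import numpy as np

beta, eta, R0, dt = 2.0, 0.5, 3.0, 1e-4
R, t, ok = R0, 0.0, True
for _ in range(int(5.0 / dt)):
    R += dt * (-beta * R + eta)  # worst case Rdot = -beta*R + eta
    t += dt
    # Bound of Eq. (4); R should never exceed it (up to integration tolerance).
    bound = R0 * np.exp(-beta * t) + (eta / beta) * (1.0 - np.exp(-beta * t))
    ok = ok and (R <= bound + 1e-6)
print(ok, R, eta / beta)  # R settles near eta/beta, the ultimate bound
```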

Problem formulation

In this section, the problem definition is given. The objective is to design a neuro-adaptive consensus protocol that enables a class of nonlinear agents to achieve consensus in the presence of external disturbance. Consider a group of $N$ nonlinear agents connected by an undirected and connected network topology. All the agents are homogeneous, i.e., they have similar dynamics. The dynamics of the $i$th agent is given by Eqs. (5)–(6) as follows.

$\dot{X}_i = f(X_i) + g(X_i)U_i + D_i(X_i)$  (5)
$Y_i = X_i$  (6)

where $X_i \in \mathbb{R}^n$ and $U_i \in \mathbb{R}^n$ are the state and control vectors, respectively, and $f$ is a continuously differentiable vector-valued function representing the nonlinear dynamics. $D_i(X_i) \in \mathbb{R}^n$ is the unknown, bounded, smooth external disturbance term for $t \ge 0$.

Assumption 1

The matrix g(Xi) is invertible for all time.

Nominal distributed nonlinear dynamic inversion (DNDI) controller

It is helpful to review the DNDI controller13 and its convergence behaviour before the neuro-adaptive augmentation is explained.

Brief overview of DNDI

A brief overview of the DNDI controller is presented here. The block diagram of the consensus control scheme with nominal DNDI is shown in Fig. 1.

Figure 1. Block diagram of distributed NDI or DNDI.

The nominal dynamics of ith agent is given as follows.

$\dot{X}_i = f(X_i) + g(X_i)U_i^d$  (7)
$Y_i = X_i$  (8)

where $X_i \in \mathbb{R}^n$ and $U_i^d \in \mathbb{R}^n$. $e_i$ denotes the consensus error of the $i$th agent, given by

$e_i = \bar{d}_i X_i - \bar{a}_i X$  (9)

where $e_i \in \mathbb{R}^n$, $\bar{d}_i = (d_i \otimes I_n) \in \mathbb{R}^{n \times n}$, $\bar{a}_i = (a_i \otimes I_n) \in \mathbb{R}^{n \times nN}$ with $a_i$ the $i$th row of $A$, and $X = [X_1^T\ X_2^T \cdots X_N^T]^T \in \mathbb{R}^{nN}$. $I_n$ is the $n \times n$ identity matrix and '$\otimes$' denotes the Kronecker product. Enforcing first-order error dynamics, we get

$\dot{e}_i + K_i e_i = 0$  (10)

Differentiation of Eq. (9) yields

$\dot{e}_i = \bar{d}_i \dot{X}_i - \bar{a}_i \dot{X} = \bar{d}_i\left(f(X_i) + g(X_i)U_i^d\right) - \bar{a}_i \dot{X}$  (11)

Substitution of the expressions for ei and e˙i in Eq. (10) gives

$\bar{d}_i\left(f(X_i) + g(X_i)U_i^d\right) - \bar{a}_i\dot{X} + K_i(\bar{d}_i X_i - \bar{a}_i X) = 0$  (12)

Simplification of Eq. (12) gives the expression of control Uid for ith agent as follows.

$U_i^d = (g(X_i))^{-1}\left[-f(X_i) + \bar{d}_i^{-1}\left(\bar{a}_i\dot{X} - K_i(\bar{d}_i X_i - \bar{a}_i X)\right)\right]$  (13)
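The control law of Eq. (13) can be sketched for one agent as follows. The $n = 2$ dynamics $f$ below mirror the simulation section, $g = I$, and the neighbour values are illustrative; we also assume the stacked derivative $\dot{X}$ is available to the controller, as Eq. (13) requires. The final check verifies that the control enforces the error dynamics of Eq. (10).

```python
import numpy as np

def dndi_control(Xi, X, Xdot, f, g, a_bar, d_bar, Ki):
    """Eq. (13): U = g^{-1}[-f + d_bar^{-1}(a_bar*Xdot - Ki*(d_bar*Xi - a_bar*X))]."""
    ei = d_bar @ Xi - a_bar @ X  # consensus error, Eq. (9)
    return np.linalg.inv(g(Xi)) @ (
        -f(Xi) + np.linalg.inv(d_bar) @ (a_bar @ Xdot - Ki @ ei))

n, N = 2, 3
f = lambda x: np.array([x[1] * np.sin(2 * x[0]), x[0] * np.cos(3 * x[1])])
g = lambda x: np.eye(n)                    # invertible, per Assumption 1
ai = np.array([0.0, 1.0, 1.0])             # agent 1 communicates with agents 2, 3
a_bar = np.kron(ai, np.eye(n))             # a_bar = a_i (x) I_n
d_bar = ai.sum() * np.eye(n)               # d_bar = d_i (x) I_n
Ki = 5.0 * np.eye(n)

Xi = np.array([1.0, -2.0])
X = np.array([1.0, -2.0, 0.5, 0.3, -0.4, 0.9])  # stacked states of all agents
Xdot = np.zeros(n * N)                          # neighbours taken as stationary

U = dndi_control(Xi, X, Xdot, f, g, a_bar, d_bar, Ki)
ei = d_bar @ Xi - a_bar @ X
ei_dot = d_bar @ (f(Xi) + g(Xi) @ U) - a_bar @ Xdot
print(np.allclose(ei_dot, -Ki @ ei))  # Eq. (10) enforced by construction
```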

Convergence of DNDI

Convergence study of DNDI is presented here. Let us consider a smooth scalar function given by

$V = \frac{1}{2}X^T(L \otimes I_n)X$  (14)

$L \otimes I_n$ can be written as

$L \otimes I_n = S \Delta S^T$  (15)

where $S \in \mathbb{R}^{nN \times nN}$ is the orthonormal eigenvector matrix of $L \otimes I_n$, $\Delta = \text{diag}\{0, \lambda_2(L), \lambda_3(L), \ldots, \lambda_N(L)\} \otimes I_n \in \mathbb{R}^{nN \times nN}$ is the eigenvalue matrix, and $S^T S = S S^T = I_{nN}$.

$V = \frac{1}{2}X^T(L \otimes I_n)X = \frac{1}{2}X^T S \Delta S^T X = \frac{1}{2}X^T S \Delta \bar{\Delta}^{-1} \Delta S^T X = \frac{1}{2}X^T S \Delta S^T S \bar{\Delta}^{-1} S^T S \Delta S^T X = \frac{1}{2}X^T (L \otimes I_n)\Lambda(L \otimes I_n)X = \frac{1}{2}E^T \Lambda E$  (16)

(using $\Delta \bar{\Delta}^{-1} \Delta = \Delta$, since $\bar{\Delta}$ differs from $\Delta$ only in the zero-eigenvalue entry)

where $\bar{\Delta} = \text{diag}\{\lambda_2(L), \lambda_2(L), \lambda_3(L), \ldots, \lambda_N(L)\} \otimes I_n \in \mathbb{R}^{nN \times nN}$, $E = [e_1^T\ e_2^T \cdots e_N^T]^T = (L \otimes I_n)X \in \mathbb{R}^{nN}$, and $\Lambda = S \bar{\Delta}^{-1} S^T \in \mathbb{R}^{nN \times nN}$.

Remark 1

It can be observed from Eqs. (14) and (16) that

$\frac{\lambda_{min}(\Lambda)}{2}\|E\|^2 \le V \le \frac{\lambda_{max}(\Lambda)}{2}\|E\|^2$  (17)
$V = \frac{1}{2}X^T(L \otimes I_n)X = \frac{1}{2}X^T E$  (18)

Remark 2

According to Lemma 1, $\lambda_2 > 0$. Hence, $\bar{\Delta}$ is invertible.

Remark 3

$\Lambda = S \bar{\Delta}^{-1} S^T$ is a positive definite matrix. Hence, $V$ is positive definite with respect to the consensus error and qualifies as a Lyapunov function.

Differentiating Eq. (14), we get

$\dot{V} = X^T(L \otimes I_n)\dot{X} = E^T \dot{X} = \sum_{i=1}^{N} e_i^T\left(f(X_i) + g(X_i)U_i^d\right)$  (19)

Substituting the control expression of $U_i^d$ from Eq. (13) into Eq. (19) yields

$\dot{V} = \sum_{i=1}^{N} e_i^T \bar{d}_i^{-1}\left(\bar{a}_i\dot{X} - K_i e_i\right) = -\sum_{i=1}^{N} e_i^T \bar{d}_i^{-1} K_i e_i + \sum_{i=1}^{N} e_i^T \bar{d}_i^{-1} \bar{a}_i \dot{X}$  (20)

According to Lemma 2, we can write

$e_i^T \bar{d}_i^{-1} \bar{a}_i \dot{X} \le \|e_i\|\,\|\bar{d}_i^{-1} \bar{a}_i \dot{X}\| \le \frac{\|e_i\|^2}{2} + \frac{\|\bar{d}_i^{-1} \bar{a}_i \dot{X}\|^2}{2}$  (21)

Substituting the inequality relation in Eq. (20)

$\dot{V} \le \sum_{i=1}^{N}\left(-e_i^T \bar{d}_i^{-1} K_i e_i + \frac{\|e_i\|^2}{2} + \frac{\|\bar{d}_i^{-1} \bar{a}_i \dot{X}\|^2}{2}\right)$  (22)

Let us design the gain Ki as follows.

$K_i = \bar{d}_i\left(\frac{1}{2} + \frac{\alpha_i}{2}\lambda_{max}(\Lambda)\right)$  (23)

Eq. (22) is written as

$\dot{V} \le \sum_{i=1}^{N}\left(-\frac{\alpha_i \lambda_{max}(\Lambda)}{2}\|e_i\|^2 + \frac{\|\bar{d}_i^{-1} \bar{a}_i \dot{X}\|^2}{2}\right) \le -\alpha_i V + \eta$  (24)

where $\eta = \sum_{i=1}^{N} \frac{\|\bar{d}_i^{-1} \bar{a}_i \dot{X}\|^2}{2}$. Applying Lemma 3, we get

$V \le \frac{\eta}{\alpha_i} + \left(V(0) - \frac{\eta}{\alpha_i}\right)e^{-\alpha_i t}$  (25)

Hence, we conclude that $V$ is bounded as $t \to \infty$. In addition, we show the Uniform Ultimate Boundedness (UUB) here.

Using Eq. (17), Eq. (25), and Lemma 1.2 presented by Ge et al.24, we can write

$\frac{\lambda_{min}(\Lambda)}{2}\|E\|^2 \le V \le \frac{\eta}{\alpha_i} + \left(V(0) - \frac{\eta}{\alpha_i}\right)e^{-\alpha_i t}$  (26)

Eq. (26) can be written as follows.

$\frac{\lambda_{min}(\Lambda)}{2}\|E\|^2 \le \frac{\eta}{\alpha_i} + \left(V(0) - \frac{\eta}{\alpha_i}\right)e^{-\alpha_i t} \implies \|E\| \le \sqrt{\frac{\frac{2\eta}{\alpha_i} + 2\left(V(0) - \frac{\eta}{\alpha_i}\right)e^{-\alpha_i t}}{\lambda_{min}(\Lambda)}}$  (27)

It can be observed that if $V(0) = \frac{\eta}{\alpha_i}$, then

$\|E\| \le \kappa$  (28)

for all $t \ge 0$, where $\kappa = \sqrt{\frac{2\eta}{\alpha_i \lambda_{min}(\Lambda)}}$. If $V(0) \neq \frac{\eta}{\alpha_i}$, then for any given $\bar{\kappa} > \kappa$ there exists a time $T > 0$ such that for all $t > T$, $\|E\| \le \bar{\kappa}$, where

$\bar{\kappa} = \sqrt{\frac{\frac{2\eta}{\alpha_i} + 2\left(V(0) - \frac{\eta}{\alpha_i}\right)e^{-\alpha_i T}}{\lambda_{min}(\Lambda)}}$  (29)

Therefore, we can conclude

$\lim_{t \to \infty} \|E\| = \kappa$  (30)

Neuro-adaptive augmented DNDI for consensus

Before presenting the main derivation of neuro-adaptive DNDI, we outline the philosophy of neuro-adaptive control design25.

Philosophy of neuro-adaptive control

The sole objective of the design is to drive the actual state $X$ to the desired state $X_d$. The adopted scheme makes the actual state $X$ track the desired or nominal state $X_d$ through the virtual state $X_a$, as shown in Fig. 2.

Figure 2. Philosophy of neuro-adaptive control.

The tracking of X to Xa and Xa to Xd is achieved by enforcing error dynamics to obtain the control considering nonlinear plant dynamics. We use the same philosophy to design the Neuro-adaptive distributed NDI controller in the next section.

Mathematical details of neuro-adaptive augmented DNDI (N-DNDI)

Neuro-adaptive augmented DNDI blends neuro-adaptive control and DNDI. The block diagram of the control scheme is shown in Fig. 3. The portion of the diagram inside the blue border is the proposed neuro-adaptive controller design.

Figure 3. Block diagram of Neuro-adaptive DNDI or N-DNDI.

In the case of neuro-adaptive augmented DNDI, the consensus error of the $i$th agent is defined such that the virtual state of the $i$th agent, i.e., $X_{ai} \in \mathbb{R}^n$, reaches consensus with the neighbours. Therefore, the consensus error of the $i$th agent is given by

$E_{di} = \sum_{j \in N_i} a_{ij}(X_{ai} - X_j) = \bar{d}_i X_{ai} - \bar{a}_i X$  (31)

where $E_{di} \in \mathbb{R}^n$ and $X \in \mathbb{R}^{nN}$ denotes the stacked actual states of all the agents. The actual dynamics of the $i$th agent is given by

$\dot{X}_i = f(X_i) + g(X_i)U_i^N + D(X_i)$  (32)

where $D(X_i)$ is the external disturbance added to the $i$th agent. The virtual dynamics for the $i$th agent is given by

$\dot{X}_{ai} = f(X_i) + g(X_i)U_i^N + \hat{D}(X_i) + K_{ai}(X_i - X_{ai})$  (33)

where $\hat{D}(X_i)$ is the approximation of $D(X_i)$.

It is important to note that the consensus error $E_{di}$ in Eq. (31) is designed to measure the error between the virtual state of the $i$th agent and the actual states of its neighbours. To drive this error to zero (i.e., $E_{di} \to 0$), we define a Lyapunov function $V_i$ as follows.

$V_i = \frac{1}{2}E_{di}^T E_{di}$  (34)

Differentiating Eq. (34) yields

$\dot{V}_i = E_{di}^T \dot{E}_{di}$  (35)

According to Lyapunov stability theory, let the time derivative of the Lyapunov function be

$\dot{V}_i = -E_{di}^T K_{di} E_{di}$  (36)

where $K_{di} \in \mathbb{R}^{n \times n}$ is a positive definite diagonal matrix. The expressions of $\dot{V}_i$ in Eqs. (35) and (36) are equated to obtain

$E_{di}^T \dot{E}_{di} = -E_{di}^T K_{di} E_{di}$  (37)

Eq. (37) is simplified as follows

$\dot{E}_{di} + K_{di} E_{di} = 0$  (38)

Substituting the expression of Edi in Eq. (38) we obtain

$\bar{d}_i \dot{X}_{ai} - \bar{a}_i \dot{X} + K_{di}(\bar{d}_i X_{ai} - \bar{a}_i X) = 0$  (39)

Putting the expression of X˙ai in Eq. (39) yields

$\bar{d}_i\left(f(X_i) + g(X_i)U_i^N + \hat{D}(X_i) + K_{ai}(X_i - X_{ai})\right) - \bar{a}_i \dot{X} + K_{di}(\bar{d}_i X_{ai} - \bar{a}_i X) = 0$  (40)

The expression of control UiN can be obtained by simplifying Eq. (40) as follows.

$U_i^N = [g(X_i)]^{-1}\left[-f(X_i) - \hat{D}(X_i) - K_{ai}(X_i - X_{ai}) + \bar{d}_i^{-1}\left(\bar{a}_i \dot{X} - K_{di}(\bar{d}_i X_{ai} - \bar{a}_i X)\right)\right]$  (41)

It can be observed that the control expression in Eq. (41) is different from that in Eq. (13). Next, the error dynamics is enforced to drive the actual state of the $i$th agent to its virtual state, i.e., $X_i \to X_{ai}$.

$\dot{E}_{ai} + K_{ai} E_{ai} = D(X_i) - \hat{D}(X_i)$  (42)

where $E_{ai} = X_i - X_{ai}$. To approximate the unknown disturbance, a single-layer neural network is designed as shown in Eq. (43).

$\hat{D}(X_i) = \hat{W}_i^T \Phi(X_i)$  (43)

where $\Phi(X_i)$ is the basis function vector. Note that the ideal value of $\hat{W}_i$ is $W_i$, and thus the disturbance $D(X_i)$ can be approximated by

$D(X_i) = W_i^T \Phi(X_i) + \epsilon_{X_i}$  (44)

where $\epsilon_{X_i}$ is the bounded approximation error. Eq. (42) is rewritten as

$\dot{E}_{ai} + K_{ai} E_{ai} = \tilde{W}_i^T \Phi(X_i) + \epsilon_{X_i}$  (45)

where $\tilde{W}_i = W_i - \hat{W}_i$. The weight update rule is given by

$\dot{\hat{W}}_i = \gamma_i\left(\Phi(X_i)E_{ai}^T - \sigma_i \hat{W}_i\right)$  (46)

where $\gamma_i$ is the learning rate and $\sigma_i$ is the stabilizing factor of the $i$th agent. It is important to note that $\dot{\tilde{W}}_i = -\dot{\hat{W}}_i$ because $W_i$ is constant, i.e., $\dot{W}_i = 0$.
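The neuro-adaptive loop of Eqs. (33) and (42)-(46) can be sketched for a single scalar agent: the virtual state chases the actual state while the RBF network learns the unknown disturbance. The scalar dynamics $f(x) = -x$, the disturbance, the gains, and the RBF grid are all illustrative choices, with $U = 0$ so that only the estimation loop is exercised.

```python
import numpy as np

def phi(x, centers, width):
    """Gaussian RBF basis vector, as in Eq. (2)."""
    return np.exp(-((x - centers) ** 2) / width ** 2)

centers, width = np.linspace(-2.0, 2.0, 15), 0.5
gamma, sigma, Ka, dt = 50.0, 0.01, 10.0, 1e-3
D = lambda x: 0.5 * np.cos(x)      # disturbance, unknown to the controller

x, xa = 1.0, 0.0
W = np.zeros(15)
for _ in range(20000):             # 20 s of simulated time
    P = phi(x, centers, width)
    D_hat = W @ P                  # Eq. (43)
    fx = -x                        # illustrative stable dynamics
    dx = fx + D(x)                 # actual dynamics (U = 0)
    dxa = fx + D_hat + Ka * (x - xa)   # virtual dynamics, Eq. (33)
    Ea = x - xa                    # error of Eq. (42)
    x, xa = x + dt * dx, xa + dt * dxa
    W += dt * gamma * (P * Ea - sigma * W)  # weight update, Eq. (46)

print(abs(x - xa), abs(W @ phi(x, centers, width) - D(x)))
```

After the transient, the virtual error is small and the network output is close to the true disturbance, mirroring the UUB behaviour proved next.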

Convergence study of Eai

The convergence study of the error $E_{ai}$ is presented here. We select the following Lyapunov function.

$V_i = \frac{1}{2}E_{ai}^T E_{ai} + \frac{1}{2}\tilde{W}_i^T \gamma_i^{-1} \tilde{W}_i$  (47)
$= V_{E_{ai}} + V_{\tilde{W}_i}$  (48)

where $V_{E_{ai}} = \frac{1}{2}E_{ai}^T E_{ai}$ and $V_{\tilde{W}_i} = \frac{1}{2}\tilde{W}_i^T \gamma_i^{-1} \tilde{W}_i$.

Differentiation of Eq. (48) yields

$\dot{V}_i = E_{ai}^T \dot{E}_{ai} + \tilde{W}_i^T \gamma_i^{-1} \dot{\tilde{W}}_i = E_{ai}^T\left(\dot{X}_i - \dot{X}_{ai}\right) - \tilde{W}_i^T \gamma_i^{-1} \gamma_i\left(\Phi(X_i)E_{ai}^T - \sigma_i \hat{W}_i\right) = E_{ai}^T\left(\tilde{W}_i^T \Phi(X_i) + \epsilon_{X_i} - K_{ai} E_{ai}\right) - \tilde{W}_i^T\left(\Phi(X_i)E_{ai}^T - \sigma_i \hat{W}_i\right) = E_{ai}^T \epsilon_{X_i} - E_{ai}^T K_{ai} E_{ai} + \sigma_i \tilde{W}_i^T \hat{W}_i$  (49)

Using Lemma 2 and W^i=-W~i+Wi, Eq. (49) is written as

$\dot{V}_i \le \frac{\|E_{ai}\|^2}{2} + \frac{\|\epsilon_{X_i}\|^2}{2} - E_{ai}^T K_{ai} E_{ai} - \sigma_i\|\tilde{W}_i\|^2 + \sigma_i\|\tilde{W}_i\|\,\|W_i\| \le \frac{\|E_{ai}\|^2}{2} + \frac{\|\epsilon_{X_i}\|^2}{2} - E_{ai}^T K_{ai} E_{ai} - \sigma_i\|\tilde{W}_i\|^2 + \frac{1}{2}\sigma_i\|\tilde{W}_i\|^2 + \frac{1}{2}\sigma_i\|W_i\|^2 = \frac{\|E_{ai}\|^2}{2} + \frac{\|\epsilon_{X_i}\|^2}{2} - E_{ai}^T K_{ai} E_{ai} - \frac{1}{2}\sigma_i\|\tilde{W}_i\|^2 + \frac{1}{2}\sigma_i\|W_i\|^2 = \frac{\|E_{ai}\|^2}{2} - E_{ai}^T K_{ai} E_{ai} - \frac{1}{2}\sigma_i\|\tilde{W}_i\|^2 + \zeta_i$  (50)

where $\zeta_i = \frac{\|\epsilon_{X_i}\|^2}{2} + \frac{1}{2}\sigma_i\|W_i\|^2$. Let us define

$K_{ai} = \left(\frac{1}{2} + \frac{\delta_i}{2}\right)I_n \quad \text{and} \quad \sigma_i \ge \delta_i \lambda_{max}(\gamma_i^{-1})$

where, δi>0. Hence, we can write the Eq. (50) as follows.

$\dot{V}_i \le -\frac{\delta_i}{2}\|E_{ai}\|^2 - \frac{\delta_i \lambda_{max}(\gamma_i^{-1})}{2}\|\tilde{W}_i\|^2 + \zeta_i$  (51)

Using Eq. (17) we can write

$\frac{1}{2}\|E_{ai}\|^2 = V_{E_{ai}} \le V_i$  (52)
$\frac{\lambda_{min}(\gamma_i^{-1})}{2}\|\tilde{W}_i\|^2 \le V_{\tilde{W}_i} \le V_i$  (53)

Therefore, Eq. (51) is written as follows.

$\dot{V}_i \le -\delta_i V_{E_{ai}} - \delta_i V_{\tilde{W}_i} + \zeta_i$  (54)
$= -\delta_i V_i + \zeta_i$  (55)

Applying Lemma 3 we can write

$V_i(t) \le \frac{\zeta_i}{\delta_i} + \left(V_i(0) - \frac{\zeta_i}{\delta_i}\right)e^{-\delta_i t}$  (56)

Lemma 4

24 Consider the positive function given by

$V = \frac{1}{2}e(t)^T Q(t) e(t) + \frac{1}{2}\tilde{W}^T \Gamma^{-1} \tilde{W}$  (57)

where $e(t) = x(t) - x_d(t)$ and $\tilde{W} = \hat{W} - W$. If the following inequality holds:

$\dot{V}(t) \le -c_1 V(t) + c_2$  (58)

then, given any initial compact set defined by

$\Omega_0 = \left\{x(0), x_d(0), \hat{W}(0) \mid x(0), \hat{W}(0)\ \text{finite},\ x_d(0) \in \Omega_d\right\}$  (59)

we can conclude that

  1. the states and weights in the closed-loop system will remain in the compact set defined by
    $\Omega = \left\{x(t), \hat{W}(t) \mid \|x(t)\| \le C_{e,max} + \max_{\tau \in [0,t]}\{\|x_d(\tau)\|\},\ x_d(t) \in \Omega_d,\ \|\hat{W}\| \le C_{\tilde{W},max} + \|W\|\right\}$  (60)
  2. the states and weights will eventually converge to the compact set defined by
    $\Omega_s = \left\{x(t), \hat{W}(t) \mid \lim_{t \to \infty}\|e(t)\| = \mu_e,\ \lim_{t \to \infty}\|\tilde{W}(t)\| = \mu_{\tilde{W}}\right\}$  (61)
    where the constants are
    $C_{e,max} = \sqrt{\frac{2V(0) + 2c_2/c_1}{\lambda_{Q,min}}}$  (62)
    $C_{\tilde{W},max} = \sqrt{\frac{2V(0) + 2c_2/c_1}{\lambda_{\Gamma,min}}}$  (63)
    $\mu_e = \sqrt{\frac{2c_2}{c_1 \lambda_{Q,min}}}$  (64)
    $\mu_{\tilde{W}} = \sqrt{\frac{2c_2}{c_1 \lambda_{\Gamma,min}}}$  (65)

We will present the Uniformly Ultimate Boundedness (UUB) here using conclusion 2. Using Eqs. (52), (53), and (56) we can write

$\|E_{ai}\| \le \sqrt{\frac{2\zeta_i}{\delta_i} + 2\left(V_i(0) - \frac{\zeta_i}{\delta_i}\right)e^{-\delta_i t}}$  (66)
$\|\tilde{W}_i\| \le \sqrt{\frac{\frac{2\zeta_i}{\delta_i} + 2\left(V_i(0) - \frac{\zeta_i}{\delta_i}\right)e^{-\delta_i t}}{\lambda_{min}(\gamma_i^{-1})}}$  (67)

If $V_i(0) = \frac{\zeta_i}{\delta_i}$, then $\|E_{ai}\| \le \mu_{E_{ai}}$ for all $t \ge 0$, where

$\mu_{E_{ai}} = \sqrt{\frac{2\zeta_i}{\delta_i}}$

If $V_i(0) \neq \frac{\zeta_i}{\delta_i}$, then for any given $\bar{\mu}_{E_{ai}} > \mu_{E_{ai}}$ there exists a $T_E > 0$ such that for all $t > T_E$, $\|E_{ai}\| \le \bar{\mu}_{E_{ai}}$, where

$\bar{\mu}_{E_{ai}} = \sqrt{\frac{2\zeta_i}{\delta_i} + 2\left(V_i(0) - \frac{\zeta_i}{\delta_i}\right)e^{-\delta_i T_E}}$  (68)

Therefore, we conclude

$\lim_{t \to \infty}\|E_{ai}\| = \mu_{E_{ai}}$  (69)

In a similar fashion, we can conclude

$\lim_{t \to \infty}\|\tilde{W}_i\| = \mu_{\tilde{W}_i}$  (70)

Therefore, according to conclusion 2, the proposed controller makes the approximation error converge to the compact set defined by $\Omega_s$.

Simulation results

Simulation results are presented here. The simulation study was performed on a PC with an AMD Ryzen 5 processor and 8 GB RAM.

Agent dynamics

The agent dynamics are given as follows.

$\dot{X}_{i1} = X_{i2}\sin(2X_{i1}) + U_{i1}$  (71)
$\dot{X}_{i2} = X_{i1}\cos(3X_{i2}) + U_{i2}$  (72)

where $X_i = [X_{i1}\ X_{i2}]^T$. Equations (71) and (72) give

$f(X_i) = \begin{bmatrix} X_{i2}\sin(2X_{i1}) \\ X_{i1}\cos(3X_{i2}) \end{bmatrix}$  (73)

and

$g(X_i) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$  (74)

and

$U_i = \begin{bmatrix} U_{i1} \\ U_{i2} \end{bmatrix}$  (75)

The values of the parameters used in this simulation study are given as follows.

$K_{di} = \begin{bmatrix} 12 & 0 \\ 0 & 10 \end{bmatrix}, \quad K_{ai} = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix}$

The learning rate is $\gamma_i = 30$. We select RBF NN basis functions $\Phi(X_i) = [\phi_1(X_i)\ \phi_2(X_i) \cdots \phi_{30}(X_i)]^T$, where $\phi_j(X_i) = \exp\left(-\frac{(X_i - \mu_j)^T (X_i - \mu_j)}{\psi_j^2}\right)$. The centers of the basis functions are spaced evenly in the range $[-10, 10] \times [-10, 10]$. The width of each basis function is $\psi_j = 2$. The value of $\sigma_i$ is chosen as 0.12. The added disturbance is

$D_i = \left[20\cos\left(\frac{\pi X_{i1}}{20}\right)\right]^T$

which is unknown to the controller. The state trajectories of all the agents are shown as X1 and X2, where X1 $= [X_{11}\ X_{21} \cdots X_{10,1}]$ and X2 $= [X_{12}\ X_{22} \cdots X_{10,2}]$. Similarly, the controls of the agents are shown as U1 $= [U_{11}\ U_{21} \cdots U_{10,1}]$ and U2 $= [U_{12}\ U_{22} \cdots U_{10,2}]$, and the virtual states as Xa1 $= [X_{a11}\ X_{a21} \cdots X_{a10,1}]$ and Xa2 $= [X_{a12}\ X_{a22} \cdots X_{a10,2}]$. The initial values of the states of all the agents are given in Table 1.

Table 1.

Initial conditions of the states of the agents.

Agent   1    2    3    4    5    6    7    8    9    10
X1      2   −2   −2   −1    9    3   −1    6    5   −5
X2      0   −1    4    1    0   −5    7    8   −3    4

The adjacency matrix is given by

$A = \begin{bmatrix} 0&0&1&1&1&1&0&1&1&1 \\ 0&0&0&0&0&0&0&1&0&1 \\ 1&0&0&0&0&0&1&0&0&1 \\ 1&0&0&0&1&1&0&0&0&1 \\ 1&0&0&1&0&1&0&1&0&0 \\ 1&0&0&1&1&0&1&1&1&0 \\ 0&0&1&0&0&1&0&1&1&0 \\ 1&1&0&0&1&1&1&0&0&1 \\ 1&0&0&0&0&1&1&0&0&1 \\ 1&1&1&1&0&0&0&1&1&0 \end{bmatrix}$
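As a companion check of the nominal case, the sketch below simulates the agent dynamics of Eqs. (71)-(72) with the adjacency matrix and Table 1 initial conditions above, under the nominal DNDI law of Eq. (13) with $g = I$ and no disturbance. Feeding the controller the previous integration step's $\dot{X}$, the gain $K = 5$, and forward-Euler integration are our implementation choices, not specified in the paper.

```python
import numpy as np

A = np.array([[int(c) for c in row] for row in [
    "0011110111", "0000000101", "1000001001", "1000110001", "1001010100",
    "1001101110", "0010010110", "1100111001", "1000011001", "1111000110"]],
    dtype=float)
N, dt, K = 10, 1e-3, 5.0
d = A.sum(axis=1)
f = lambda x: np.array([x[1] * np.sin(2 * x[0]), x[0] * np.cos(3 * x[1])])

# Initial conditions from Table 1.
X = np.array([[2, 0], [-2, -1], [-2, 4], [-1, 1], [9, 0],
              [3, -5], [-1, 7], [6, 8], [5, -3], [-5, 4]], dtype=float)
Xdot = np.zeros((N, 2))
for _ in range(int(5.0 / dt)):                         # 5 s of simulated time
    Xdot_new = np.zeros((N, 2))
    for i in range(N):
        ei = d[i] * X[i] - A[i] @ X                    # consensus error, Eq. (9)
        U = -f(X[i]) + (A[i] @ Xdot - K * ei) / d[i]   # Eq. (13) with g = I
        Xdot_new[i] = f(X[i]) + U
    X += dt * Xdot_new
    Xdot = Xdot_new

spread = np.max(np.abs(X - X.mean(axis=0)))            # disagreement after 5 s
print(spread)
```

The maximum disagreement between agents shrinks to near zero, consistent with the consensus behaviour reported in Figs. 5-7.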

The unknown external disturbance is approximated by a neuro-adaptive controller. The approximated and real disturbance is shown in Fig. 4a and the approximation error is shown in Fig. 4b.

Figure 4. Performance of N-DNDI in approximating unknown external disturbance.

It can be observed that the approximation is very good, which is confirmed by the approximation error plot. Consequently, the states of the agents achieve consensus within a few seconds. The state trajectories of all the agents, i.e., X1 and X2, are shown in Fig. 5a and 5b, respectively. The states of the agents reach consensus in finite time.

Figure 5. Actual state trajectories.

The consensus is achieved by the neuro-adaptive consensus controls U1 and U2, which are shown in Fig. 6a and 6b, respectively.

Figure 6. Neuro-adaptive control.

The convergence of the states is shown by the consensus errors $E_{di}$ in states X1 and X2, plotted in Fig. 7a and 7b, respectively. The errors converge within a few seconds, which means the virtual states Xa1 and Xa2 successfully reach consensus.

Figure 7. Consensus error $E_{di}$.

The virtual states Xa1 and Xa2 are shown in Fig. 8a and 8b, respectively. It can be observed that the consensus values of the virtual states and the actual states are the same. Therefore, the actual states track the virtual states accurately. The proof of this tracking is given by the virtual errors.

Figure 8. Virtual state trajectory.

The virtual errors $E_{ai}$ in states X1 and X2 are shown in Fig. 9a and 9b, respectively. They converge in finite time.

Figure 9. Virtual error $E_{ai}$.

Conclusion

The augmentation of a neuro-adaptive structure to the distributed nonlinear dynamic inversion (DNDI) frame produces a unique adaptive controller (N-DNDI) that efficiently handles the external disturbance. N-DNDI inherits the features of the NDI technique while handling the unknown external disturbance. The convergence study provided in this paper establishes the correctness of the design. The simulation results show that the neural network embedded in the controller approximates the unknown external disturbance and the DNDI controller computes the consensus control signal accordingly. Consequently, consensus is achieved in finite time. Hence, the proposed N-DNDI is a deserving candidate for consensus control in the presence of unknown external disturbances. We consider heterogeneous agents along with communication issues as part of our future research plan. We will also present a comparison of the proposed controller with existing controllers.

Acknowledgements

This research was partially funded by an Engineering and Physical Sciences Research Council (EPSRC) project CASCADE (EP/R009953/1).

Author contributions

S.M.: Conceptualization, Derivation, writing manuscript, Simulation; A.T.: Supervision, Editing. All authors have reviewed the manuscript.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989;2:359–366. doi: 10.1016/0893-6080(89)90020-8.
  • 2. Peng Z, Wang D, Zhang H, Sun G. Distributed neural network control for adaptive synchronization of uncertain dynamical multiagent systems. IEEE Trans. Neural Netw. Learn. Syst. 2013;25:1508–1519. doi: 10.1109/TNNLS.2013.2293499.
  • 3. Peng Z, Wang D, Zhang H, Lin Y. Cooperative output feedback adaptive control of uncertain nonlinear multi-agent systems with a dynamic leader. Neurocomputing. 2015;149:132–141. doi: 10.1016/j.neucom.2013.12.064.
  • 4. Wang D, Ma H, Liu D. Distributed control algorithm for bipartite consensus of the nonlinear time-delayed multi-agent systems with neural networks. Neurocomputing. 2016;174:928–936. doi: 10.1016/j.neucom.2015.10.013.
  • 5. Yang Y, Yue D, Dou C. Distributed adaptive output consensus control of a class of heterogeneous multi-agent systems under switching directed topologies. Inf. Sci. 2016;345:294–312. doi: 10.1016/j.ins.2016.01.043.
  • 6. Wang Z, Yuan J, Pan Y, Wei J. Neural network-based adaptive fault tolerant consensus control for a class of high order multiagent systems with input quantization and time-varying parameters. Neurocomputing. 2017;266:315–324. doi: 10.1016/j.neucom.2017.05.043.
  • 7. Wang D, et al. Neural network disturbance observer-based distributed finite-time formation tracking control for multiple unmanned helicopters. ISA Trans. 2018;73:208–226. doi: 10.1016/j.isatra.2017.12.011.
  • 8. Mo L, Yuan X, Yu Y. Neuro-adaptive leaderless consensus of fractional-order multi-agent systems. Neurocomputing. 2019;339:17–25. doi: 10.1016/j.neucom.2019.01.101.
  • 9. Li Y, Wang C, Cai X, Li L, Wang G. Neural-network-based distributed adaptive asymptotically consensus tracking control for nonlinear multiagent systems with input quantization and actuator faults. Neurocomputing. 2019;349:64–76. doi: 10.1016/j.neucom.2019.04.018.
  • 10. Wang W, Li Y, Tong S. Neural-network-based adaptive event-triggered consensus control of nonstrict-feedback nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2021;32:1750–1764. doi: 10.1109/TNNLS.2020.2991015.
  • 11. Ni J, Shi P. Adaptive neural network fixed-time leader-follower consensus for multiagent systems with constraints and disturbances. IEEE Trans. Cybern. 2021;51:1835–1848. doi: 10.1109/TCYB.2020.2967995.
  • 12. Liang H, Guo X, Pan Y, Huang T. Event-triggered fuzzy bipartite tracking control for network systems based on distributed reduced-order observers. IEEE Trans. Fuzzy Syst. 2020.
  • 13. Mondal S, Tsourdos A. The consensus of nonlinear agents under switching topology using dynamic inversion in the presence of communication noise and delay. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2021.
  • 14. Enns D, Bugajski D, Hendrick R, Stein G. Dynamic inversion: An evolving methodology for flight control design. Int. J. Control. 1994;59:71–91. doi: 10.1080/00207179408923070.
  • 15. Singh S, Padhi R. Automatic path planning and control design for autonomous landing of UAVs using dynamic inversion. In 2009 American Control Conference, 2409–2414 (IEEE, 2009).
  • 16. Padhi R, Chawla C. Neuro-adaptive augmented dynamic inversion based PIGC design for reactive obstacle avoidance of UAVs. In AIAA Guidance, Navigation, and Control Conference, 6642 (2011).
  • 17. Mondal S, Padhi R. Formation flying using GENEX and differential geometric guidance law. IFAC-PapersOnLine. 2015;48:19–24. doi: 10.1016/j.ifacol.2015.08.053.
  • 18. Caverly RJ, Girard AR, Kolmanovsky IV, Forbes JR. Nonlinear dynamic inversion of a flexible aircraft. IFAC-PapersOnLine. 2016;49:338–342. doi: 10.1016/j.ifacol.2016.09.058.
  • 19. Horn JF. Non-linear dynamic inversion control design for rotorcraft. Aerospace. 2019;6:38. doi: 10.3390/aerospace6030038.
  • 20. Lombaerts T, et al. Nonlinear dynamic inversion based attitude control for a hovering quad tiltrotor eVTOL vehicle. In AIAA Scitech 2019 Forum, 0134 (2019).
  • 21. Hou Z-G, Cheng L, Tan M. Decentralized robust adaptive control for the multiagent system consensus problem using neural networks. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009;39:636–647. doi: 10.1109/TSMCB.2008.2007810.
  • 22. Ren W, Beard RW. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control. 2005;50:655–661. doi: 10.1109/TAC.2005.846556.
  • 23. Ma H, et al. Neural-network-based distributed adaptive robust control for a class of nonlinear multiagent systems with time delays and external noises. IEEE Trans. Syst. Man Cybern. Syst. 2015;46:750–758. doi: 10.1109/TSMC.2015.2470635.
  • 24. Ge SS, Wang C. Adaptive neural control of uncertain MIMO nonlinear systems. IEEE Trans. Neural Netw. 2004;15:674–692. doi: 10.1109/TNN.2004.826130.
  • 25. Ambati PR, Padhi R. A neuro-adaptive augmented dynamic inversion design for robust auto-landing. IFAC Proc. Vol. 2014;47:12202–12207. doi: 10.3182/20140824-6-ZA-1003.01315.
