Sensors (Basel, Switzerland)
. 2021 Aug 25;21(17):5729. doi: 10.3390/s21175729

𝕋-Proper Hypercomplex Centralized Fusion Estimation for Randomly Multiple Sensor Delays Systems with Correlated Noises

Rosa M Fernández-Alcalá 1,*, Jesús Navarro-Moreno 1, Juan C Ruiz-Molina 1
Editor: Iren E. Kuznetsova
PMCID: PMC8433698  PMID: 34502620

Abstract

The centralized fusion estimation problem for discrete-time vectorial tessarine signals in multiple sensor stochastic systems with random one-step delays and correlated noises is analyzed under different T-properness conditions. Based on Tk, k=1,2, linear processing, new centralized fusion filtering, prediction, and fixed-point smoothing algorithms are devised. These algorithms have the advantage of providing optimal estimators with a significant reduction in computational cost compared to that obtained through a real or a widely linear processing approach. Simulation examples illustrate the effectiveness and applicability of the algorithms proposed, in which the superiority of the Tk linear estimators over their counterparts in the quaternion domain is apparent.

Keywords: centralized fusion estimation, random delay systems, tessarine processing, 𝕋k properness

1. Introduction

Multi-sensor systems and related information fusion estimation theory have attracted much attention over the last few decades due to their wide range of applications in many fields, including target tracking, robotics, navigation, big data, and signal processing [1,2,3,4,5,6,7].

In practice, failures during data transmission are unavoidable and lead to uncertain systems. In this regard, a significant problem is the estimation of the state from systems with random sensor delays (see, for example, ref. [8,9,10,11,12,13]). Such delays may be mainly caused by computational load, heavy network traffic, and the limited bandwidth of the communication channel, as well as other limitations, which mean that the measurements are not always up to date [8]. It is commonly assumed that measurement delays can be described by Bernoulli distributed random variables with known conditional probabilities, where the values 1 and 0 of these variables indicate the presence or absence of measurement delays in the corresponding sensor [10].

Traditionally, there have been two basic approaches to processing the information from multiple sensors: centralized and distributed fusion. In the former approach, all the measurement data from each sensor are collected in a fusion center, where they are fused and processed, whereas in the distributed fusion method, the measurements of each sensor are transmitted to a local processor, where they are independently processed before being transmitted to the fusion center. It is well known that centralized fusion methods lead to the best (optimal) solution when all sensors work healthily [14,15,16]. The strength of this approach lies in the fact that it is easy to implement and makes the best use of the available information. Accordingly, with the purpose of optimal estimation, the centralized fusion methodology has received increased attention in recent literature related to multi-sensor fusion estimation (see, for example, ref. [9,17,18,19]). Notwithstanding the foregoing, the main disadvantage of this approach is the high computational load that may be required, especially when the number of sensors is large. Alternatively, distributed fusion methodologies are developed with the purpose of designing solutions with a reduced computational load. Although the distributed fusion approach presents better robustness, flexibility and reliability due to its parallel structure, the main handicap of these solutions is that they are suboptimal and, hence, it is desirable to explore other alternatives that can alleviate the computational demand. In this respect, the use of hypercomplex algebras may well offer an ideal framework in which to analyze the properness characteristics of the signals, which lead to lower computational costs without losing optimality.

In general, the implementation of hypercomplex algebras in signal processing problems has expanded rapidly because of their natural ability to model multi-dimensional data, giving rise to better geometrical interpretations. In this connection, quaternions and tessarines appear as 4D hypercomplex algebras composed of a real part and three imaginary parts, which provides them with the ideal structure for describing three- and four-dimensional signals. Nowadays, they play a fundamental role in a variety of applications such as robotics, avionics, 3D graphics, and virtual reality [20]. In principle, the use of quaternions or tessarines means renouncing some of the usual algebraic properties of the real or complex fields. Thus, while quaternion algebra is non-commutative, tessarines form a non-division algebra. These properties make each algebra more appropriate for specific problems. With this in mind, in [21,22,23,24] the application of these two isodimensional algebras is compared with the objective of showing how the choice of a particular algebra may determine the performance of the proposed method.

In the related literature, quaternion algebra has been widely used as a signal processing tool and is still a trending topic in different areas. In particular, in the area of multi-sensor fusion estimation, ref. [25,26] proposed sensor fusion estimation algorithms based on a quaternion extended Kalman filter, ref. [27,28] provided robust distributed quaternion Kalman filtering algorithms for data fusion over sensor networks dealing with three-dimensional data, and ref. [29] designed a linear quaternion fusion filter from multi-sensor observations. A common characteristic of all the estimation algorithms above is that their methodologies are based on strictly linear (SL) processing. However, in the quaternion domain, optimal linear processing is widely linear (WL), which requires the consideration of the quaternion signal and its three involutions. In this framework, ref. [30] devised WL filtering, prediction and smoothing algorithms for multi-sensor systems with mixed uncertainties of sensor delays, packet dropouts and missing observations. Interestingly, when the signal presents properness properties (cancellation of one or more of the three complementary covariance matrices), the optimal processing is SL (if the signal is Q-proper) or semi-widely linear (if the signal is C-proper), which amounts to operating on a vector of reduced dimension and, hence, a significant reduction in the computational load of the associated algorithms (see [31,32,33,34] for further details).

On the other hand, the use of tessarines is less common in the signal processing literature and, to the best of the authors’ knowledge, they have never been considered in multi-sensor fusion estimation problems. In general, the use of tessarines in estimation problems has been limited by the fact that they do not form a normed division algebra. This drawback was successfully overcome in [23] by introducing a metric that guarantees the existence and uniqueness of the optimal estimator. Moreover, although the optimal processing in the tessarine field is WL processing, under properness conditions it is possible to obtain the optimal solution from estimation algorithms with lower computational costs. In this sense, ref. [23,24] introduced the concepts of T1 and T2-properness and provided a statistical test to determine whether a signal presents one of these properness properties. According to the type of properness, the most suitable form of processing is T1 linear processing, which operates on the signal itself, or T2 linear processing, based on the augmented vector given by the signal and its conjugate. The application of both T1 and T2 linear processing to the estimation problem has provided optimal estimation algorithms of reduced dimension.

Motivated by the above discussions, in this paper we consider a tessarine multiple sensor system where each sensor may be delayed at any time independently of the others. The occurrence of each delay is modeled by a Bernoulli distribution. Moreover, unlike most sensor fusion estimation algorithms, the observation noises of different sensors are allowed to be correlated. In this context, new centralized fusion filtering, prediction and fixed-point smoothing algorithms are designed under both T1 and T2-properness conditions. The proposed algorithms provide the optimal estimates of the state while reducing the computational load with respect to the counterpart tessarine WL (TWL) estimation algorithms. It is important to note that such savings in computational demand cannot be achieved in the real field. The superiority of the algorithms obtained from a Tk linear approach over those derived in the quaternion domain is numerically demonstrated under different properness conditions.

The remainder of the paper is organized as follows. Section 2 introduces the notation used throughout the paper and briefly reviews the main concepts related to the processing of tessarine signals and their implications under Tk-properness. Then, in Section 3, the problem of estimating a tessarine signal in linear discrete stochastic systems with random state delays and multiple sensors is formulated. Concretely, under Tk-properness conditions, a compact state-space model of reduced dimension is proposed. From this model, and based on the Tk-properness properties, Tk centralized fusion filtering, step-ahead prediction, and fixed-point smoothing algorithms are devised in Section 4. Furthermore, the performance of these algorithms is numerically analyzed in Section 5 by means of a simulation example, where the superiority of the above Tk estimation algorithms over their counterparts in the quaternion domain is evidenced. The paper ends with a section of conclusions. In order to maintain continuity, all technical proofs have been deferred to Appendices A, B and C.

2. Preliminaries

Throughout this paper, and unless otherwise stated, all random variables are assumed to have zero mean. Moreover, the notation and terminology are fairly standard and are summarized in the following two subsections.

2.1. Notation

Matrices are indicated by boldfaced uppercase letters, column vectors as boldfaced lowercase letters, and scalar quantities by lightfaced lowercase letters. Moreover, the following notation is used.

  • I_n denotes the identity matrix of dimension n.

  • 0_{n×m} denotes the n×m zero matrix.

  • 1_n denotes the n-vector of all ones.

  • 0_n denotes the n-vector of all zeros.

  • Superscript * stands for the tessarine conjugate.

  • Superscript T stands for the transpose.

  • Superscript H stands for the Hermitian transpose.

  • Subscript r represents the real part of a tessarine.

  • Subscript ν, for ν = η1, η2, η3, represents the corresponding imaginary part of a tessarine.

  • Z stands for the set of integers.

  • R stands for the real field. Accordingly, A ∈ R^{n×m} means that A is a real n×m matrix and, similarly, r ∈ R^n means that r is an n-dimensional real vector.

  • T stands for the tessarine field. Accordingly, A ∈ T^{n×m} means that A is a tessarine n×m matrix and, similarly, r ∈ T^n means that r is an n-dimensional tessarine vector.

  • E[·] is the expectation operator.

  • Cov(·) is the covariance operator.

  • diag(·) is a diagonal (or block diagonal) matrix with elements specified on the main diagonal.

  • δn,l is the Kronecker delta function, which is equal to one if l=n, and zero otherwise.

  • ∘ denotes the Hadamard product.

  • ⊗ denotes the Kronecker product.

2.2. Basic Concepts and Properties

The following property of the Hadamard product will be useful.

Property 1.

If A ∈ R^{n×n} and b ∈ R^n, then

diag(b) A diag(b) = (b b^T) ∘ A. (1)
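Property 1 is the standard Hadamard-product identity and can be confirmed numerically; a minimal NumPy check:

```python
import numpy as np

# Numerical check of Property 1: diag(b) A diag(b) == (b b^T) ∘ A,
# where ∘ is the Hadamard (entrywise) product.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lhs = np.diag(b) @ A @ np.diag(b)
rhs = np.outer(b, b) * A          # (b b^T) ∘ A, entrywise

assert np.allclose(lhs, rhs)
```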

Definition 1.

A tessarine random signal x(t) ∈ T^n is a stochastic process of the form [23]

x(t) = x_r(t) + η1 x_{η1}(t) + η2 x_{η2}(t) + η3 x_{η3}(t),  t ∈ Z,

where x_ν(t) ∈ R^n, for ν = r, η1, η2, η3, are real random signals and {1, η1, η2, η3} obeys the following rules:

η1η2 = η3,  η2η3 = η1,  η1η3 = −η2,  η1² = η3² = −1,  η2² = 1.

The conjugate of a given tessarine random signal x(t) ∈ T^n is

x*(t) = x_r(t) − η1 x_{η1}(t) + η2 x_{η2}(t) − η3 x_{η3}(t).

Moreover, the following two auxiliary tessarine vectors are defined:

x^{η1}(t) = x_r(t) + η1 x_{η1}(t) − η2 x_{η2}(t) − η3 x_{η3}(t),
x^{η2}(t) = x_r(t) − η1 x_{η1}(t) − η2 x_{η2}(t) + η3 x_{η3}(t).
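These product rules can be encoded directly. The following minimal Python sketch (component order r, η1, η2, η3; the rules η1η2 = η3, η2η3 = η1, η1η3 = −η2, η1² = η3² = −1, η2² = 1 are assumed as above) illustrates both the commutativity of the tessarine product and the existence of zero divisors, which is the reason tessarines form a non-division algebra:

```python
import numpy as np

# Minimal tessarine arithmetic: t = (r, a, b, c) represents r + a*eta1 + b*eta2 + c*eta3.
def tmul(x, y):
    r1, a1, b1, c1 = x
    r2, a2, b2, c2 = y
    return (r1*r2 - a1*a2 + b1*b2 - c1*c2,       # real part  (eta1^2 = eta3^2 = -1, eta2^2 = 1)
            r1*a2 + a1*r2 + b1*c2 + c1*b2,       # eta1 part  (eta2*eta3 = eta1)
            r1*b2 + b1*r2 - a1*c2 - c1*a2,       # eta2 part  (eta1*eta3 = -eta2)
            r1*c2 + c1*r2 + a1*b2 + b1*a2)       # eta3 part  (eta1*eta2 = eta3)

def tconj(x):                                    # x* flips the eta1 and eta3 parts
    r, a, b, c = x
    return (r, -a, b, -c)

# Tessarine multiplication is commutative ...
x, y = (1.0, 2.0, -1.0, 0.5), (0.3, -0.7, 1.2, 2.0)
assert np.allclose(tmul(x, y), tmul(y, x))

# ... but tessarines are not a division algebra: (1 + eta2)(1 - eta2) = 1 - eta2^2 = 0,
# so 1 + eta2 is a nonzero zero divisor.
z = tmul((1, 0, 1, 0), (1, 0, -1, 0))
assert np.allclose(z, (0, 0, 0, 0))
```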

For a complete description of the second-order statistical properties of x(t), we need to consider the augmented tessarine signal vector x̄(t) = [x^T(t), x^H(t), x^{η1 T}(t), x^{η2 T}(t)]^T. The following relationship between the augmented vector and the real vector x^r(t) = [x_r^T(t), x_{η1}^T(t), x_{η2}^T(t), x_{η3}^T(t)]^T can be established:

x̄(t) = 2 T_n x^r(t),

where T_n = (1/2)(A ⊗ I_n), with

A = [ 1   η1   η2   η3
      1  −η1   η2  −η3
      1   η1  −η2  −η3
      1  −η1  −η2   η3 ],

and T_n^H T_n = I_{4n}.

Definition 2.

Given two tessarine random signals x(t), y(s) ∈ T^n, the product ★ between them is defined as

x(t) ★ y(s) = x_r(t) ∘ y_r(s) + η1 (x_{η1}(t) ∘ y_{η1}(s)) + η2 (x_{η2}(t) ∘ y_{η2}(s)) + η3 (x_{η3}(t) ∘ y_{η3}(s)). (2)
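Since the ★ product acts part by part through Hadamard products, it reduces to a plain elementwise multiplication once a tessarine vector is stored as a 4×n real array of its parts; a small illustrative sketch (the storage layout is our own convention, not the paper's):

```python
import numpy as np

# A tessarine n-vector stored as a 4 x n real array whose rows are the
# r, eta1, eta2 and eta3 parts. The star product of Definition 2 is then
# simply the elementwise (Hadamard) product of the two arrays.
def star(x, y):
    return x * y      # rows align: r∘r, eta1∘eta1, eta2∘eta2, eta3∘eta3

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))
gamma = rng.integers(0, 2, size=(4, 3)).astype(float)  # Bernoulli gates, as in model (3)

# gamma ★ x keeps exactly the parts/components where gamma is 1.
gated = star(gamma, x)
assert np.allclose(gated[gamma == 1], x[gamma == 1])
assert np.allclose(gated[gamma == 0], 0.0)
```

This gating behavior is what makes ★ the natural operation for the randomly delayed observation model introduced in Section 3.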

The following property of the product ★ is easy to check.

Property 2.

The augmented vector of x(t) ★ y(s) is D̄_x(t) ȳ(s), where D̄_x(t) = T_n diag(x^r(t)) T_n^H.

Definition 3.

The pseudo autocorrelation function of x(t) ∈ T^n is defined as R_x(t,s) = E[x(t) x^H(s)], t, s ∈ Z, and the pseudo cross-correlation function of x(t), y(t) ∈ T^n is defined as R_{xy}(t,s) = E[x(t) y^H(s)], t, s ∈ Z.

Note that, depending on the vanishing of the different pseudo cross-correlation functions R_{xx^ν}(t,s), ν = ∗, η1, η2, various kinds of tessarine properness can be defined. In particular, the following properness conditions in the tessarine domain have recently been introduced in [23,24].

Definition 4.

A random signal x(t) ∈ T^n is said to be T1-proper (respectively, T2-proper) if, and only if, the functions R_{xx^ν}(t,s), with ν = ∗, η1, η2 (respectively, ν = η1, η2), vanish for all t, s ∈ Z.

In a like manner, two random signals x(t) ∈ T^{n1} and y(t) ∈ T^{n2} are cross T1-proper (respectively, cross T2-proper) if, and only if, the functions R_{xy^ν}(t,s), with ν = ∗, η1, η2 (respectively, ν = η1, η2), vanish for all t, s ∈ Z.

Moreover, x(t) and y(t) are jointly T1-proper (respectively, jointly T2-proper) if, and only if, they are T1-proper (respectively, T2-proper) and cross T1-proper (respectively, cross T2-proper).

Note that T1-properness is more restrictive than T2-properness. Statistical tests to experimentally check whether a signal is Tk-proper, k = 1, 2, or improper have been proposed in [23,24].

It should be highlighted that the different properness properties have direct implications for the optimal linear processing. In general, the optimal linear processing is the widely linear processing, which requires operating on the augmented tessarine vector x̄(t). Nevertheless, in the case of joint Tk-properness, k = 1, 2, the optimal linear processing reduces to a Tk linear processing, with the corresponding decrease in the dimension of the problem. In particular, T1 linear processing is based on the tessarine random signal itself, whereas T2 linear processing considers the augmented vector formed by the signal and its conjugate [24].

3. Problem Formulation

Consider the class of linear discrete stochastic systems with state delays and multiple sensors

x(t+1) = F1(t) x(t) + F2(t) x*(t) + F3(t) x^{η1}(t) + F4(t) x^{η2}(t) + u(t),  t ≥ 0,
z^{(i)}(t) = x(t) + v^{(i)}(t),  t ≥ 0,  i = 1, …, R,
y^{(i)}(t) = γ^{(i)}(t) ★ z^{(i)}(t) + (1_n − γ^{(i)}(t)) ★ z^{(i)}(t−1),  t ≥ 1,  i = 1, …, R, (3)

where R is the number of sensors, ★ is the product defined in (2), F_j(t) ∈ T^{n×n}, j = 1, 2, 3, 4, are deterministic matrices, x(t) ∈ T^n is the system state to be estimated, u(t) ∈ T^n is a tessarine noise, z^{(i)}(t) ∈ T^n is the ith sensor output with tessarine sensor noise v^{(i)}(t) ∈ T^n, and y^{(i)}(t) ∈ T^n is the observation of the ith sensor. Moreover, γ^{(i)}(t) = [γ_1^{(i)}(t), …, γ_n^{(i)}(t)]^T ∈ T^n is a tessarine random vector with components γ_j^{(i)}(t) = γ_{j,r}^{(i)}(t) + η1 γ_{j,η1}^{(i)}(t) + η2 γ_{j,η2}^{(i)}(t) + η3 γ_{j,η3}^{(i)}(t), for j = 1, …, n, composed of independent Bernoulli random variables γ_{j,ν}^{(i)}(t), j = 1, …, n, ν = r, η1, η2, η3, with known probabilities p_{j,ν}^{(i)}(t) and possible outcomes {0, 1}, which indicate whether the ν part of the jth observation component of the ith sensor is up to date (case γ_{j,ν}^{(i)}(t) = 1) or one-step delayed (case γ_{j,ν}^{(i)}(t) = 0).

The following assumptions for the above system (3) are made.

Assumption 1.

For a given sensor i, the Bernoulli variable vector γ^{(i)}(t) is independent of γ^{(i)}(s), for t ≠ s; moreover, γ^{(i)}(t) is independent of γ^{(j)}(t), for any two sensors i ≠ j.

Assumption 2.

For a given sensor i, γ(i)(t) is independent of x(t), u(t) and v(j)(t), for any i,j=1,,R.

Assumption 3.

u(t) and v^{(i)}(t) are correlated white noises with respective pseudo variances Q(t) and R^{(i)}(t). Moreover, E[u(t) v^{(i)H}(s)] = S^{(i)}(t) δ_{t,s}.

Assumption 4.

v^{(i)}(t) is independent of v^{(j)}(t), for any two sensors i ≠ j.

Assumption 5.

The initial state x(0) is independent of the additive noises u(t) and v^{(i)}(t), for t ≥ 0 and i = 1, …, R.

Remark 1.

From the hypotheses established on the Bernoulli random variables it follows that, for any j1, j2 = 1, …, n, ν1, ν2 = r, η1, η2, η3 and i1, i2 = 1, …, R,

E[γ_{j1,ν1}^{(i1)}(t) γ_{j2,ν2}^{(i2)}(t)] = p_{j1,ν1}^{(i1)}(t), if i1 = i2, j1 = j2, ν1 = ν2;
                                           = p_{j1,ν1}^{(i1)}(t) p_{j2,ν2}^{(i2)}(t), otherwise;

E[(1 − γ_{j1,ν1}^{(i1)}(t))(1 − γ_{j2,ν2}^{(i2)}(t))] = 1 − p_{j1,ν1}^{(i1)}(t), if i1 = i2, j1 = j2, ν1 = ν2;
                                                      = (1 − p_{j1,ν1}^{(i1)}(t))(1 − p_{j2,ν2}^{(i2)}(t)), otherwise. (4)
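To make the delay mechanism of model (3) concrete, the following illustrative simulation (toy state trajectory, arbitrary probabilities and noise levels, n = 1, R = 2) draws the Bernoulli gates part by part and forms y^{(i)}(t) = γ^{(i)}(t) ★ z^{(i)}(t) + (1 − γ^{(i)}(t)) ★ z^{(i)}(t−1):

```python
import numpy as np

# Illustrative simulation of the randomly delayed measurements in model (3):
# each of the four real parts of each sensor's measurement is, independently,
# up to date (gamma = 1, probability p) or one-step delayed (gamma = 0).
# Tessarines are stored as length-4 real vectors (r, eta1, eta2, eta3 parts); n = 1.
rng = np.random.default_rng(42)
R, T = 2, 200                         # sensors, time steps (arbitrary)
p = np.array([[0.9, 0.9, 0.9, 0.9],   # sensor 1: delay probability 0.1 in every part
              [0.5, 0.5, 0.5, 0.5]])  # sensor 2: delay probability 0.5 in every part

x = np.cumsum(rng.standard_normal((T, 4)) * 0.1, axis=0)   # toy state trajectory
z = x[None, :, :] + 0.3 * rng.standard_normal((R, T, 4))   # z^(i)(t) = x(t) + v^(i)(t)

y = np.empty((R, T, 4))
y[:, 0] = z[:, 0]                     # observations start undelayed by convention
for t in range(1, T):
    gamma = (rng.random((R, 4)) < p).astype(float)          # Bernoulli gates
    y[:, t] = gamma * z[:, t] + (1 - gamma) * z[:, t - 1]   # gamma ★ z + (1-gamma) ★ z(t-1)

# Every received part is either the current or the previous sensor output.
for i in range(R):
    for t in range(1, T):
        ok = np.isclose(y[i, t], z[i, t]) | np.isclose(y[i, t], z[i, t - 1])
        assert ok.all()
```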

3.1. One-State Delay System under Tk-Properness

In this section, a TWL one-state delay system, which exploits all the available second-order statistical information, is introduced and analyzed in Tk-properness scenarios, k = 1, 2.

For this purpose, consider the augmented vectors x¯(t), z¯(i)(t), and y¯(i)(t) of x(t), z(i)(t), and y(i)(t), respectively. Then, by applying Property 2 on system (3), the following TWL one-state delay model can be defined:

x̄(t+1) = Φ̄(t) x̄(t) + ū(t),  t ≥ 0,
z̄^{(i)}(t) = x̄(t) + v̄^{(i)}(t),  t ≥ 0,  i = 1, …, R,
ȳ^{(i)}(t) = D̄_{γ^{(i)}}(t) z̄^{(i)}(t) + D̄_{(1−γ^{(i)})}(t) z̄^{(i)}(t−1),  t ≥ 1,  i = 1, …, R, (5)

where

Φ̄(t) = [ F1(t)        F2(t)        F3(t)        F4(t)
          F2*(t)       F1*(t)       F4*(t)       F3*(t)
          F3^{η1}(t)   F4^{η1}(t)   F1^{η1}(t)   F2^{η1}(t)
          F4^{η2}(t)   F3^{η2}(t)   F2^{η2}(t)   F1^{η2}(t) ].

Moreover, from Assumption 3, the pseudo correlation matrices associated to the augmented noise vectors u¯(t) and v¯(i)(t) are given by

  • E[u¯(t)u¯H(s)]=Q¯(t)δt,s;

  • E[v¯(i)(t)v¯(i)H(s)]=R¯(i)(t)δt,s;

  • E[u¯(t)v¯(i)H(s)]=S¯(i)(t)δt,s.

The following result establishes conditions on system (5), which lead to Tk-properness properties of the processes involved.

Proposition 1.

Consider the TWL one-state delay model (5).

  1. If x(0) and u(t) are T1-proper, and Φ̄(t) is a block diagonal matrix of the form

    Φ̄(t) = diag(F1(t), F1*(t), F1^{η1}(t), F1^{η2}(t)),

    then x(t) is T1-proper.

    If, additionally, p_{j,r}^{(i)}(t) = p_{j,η1}^{(i)}(t) = p_{j,η2}^{(i)}(t) = p_{j,η3}^{(i)}(t) ≜ p_j^{(i)}(t), for all t, j, i, v^{(i)}(t) is T1-proper, and u(t) and v^{(i)}(t) are cross T1-proper, then x(t) and y^{(i)}(t) are jointly T1-proper.

  2. If x(0) and u(t) are T2-proper, and Φ̄(t) is a block diagonal matrix of the form

    Φ̄(t) = diag(Φ2(t), Φ2^{η1}(t)),  with  Φ2(t) = [ F1(t)   F2(t)
                                                     F2*(t)  F1*(t) ], (6)

    then x(t) is T2-proper.

    If, additionally, p_{j,r}^{(i)}(t) = p_{j,η2}^{(i)}(t) and p_{j,η1}^{(i)}(t) = p_{j,η3}^{(i)}(t), for all t, j, i, v^{(i)}(t) is T2-proper, and u(t) and v^{(i)}(t) are cross T2-proper, then x(t) and y^{(i)}(t) are jointly T2-proper.

Proof. 

The proof follows immediately from the application of the corresponding conditions on system (5) and the computation of the augmented pseudo correlation matrices Rx¯(t,s) and Rx¯y¯(i)(t,s). □

Remark 2.

Note that under T1-properness conditions, Π̄_γ^{(i)}(t) = E[D̄_{γ^{(i)}}(t)], i = 1, …, R, is a diagonal matrix of the form Π̄_γ^{(i)}(t) = I_4 ⊗ Π1^{(i)}(t), with Π1^{(i)}(t) = diag(p_{1,r}^{(i)}(t), …, p_{n,r}^{(i)}(t)).

Likewise, under T2-properness conditions, Π̄_γ^{(i)}(t) = E[D̄_{γ^{(i)}}(t)], i = 1, …, R, takes the block diagonal form

Π̄_γ^{(i)}(t) = diag(Π2^{(i)}(t), Π2^{(i)}(t)),  with  Π2^{(i)}(t) = (1/2) [ Πa^{(i)}(t)  Πb^{(i)}(t)
                                                                           Πb^{(i)}(t)  Πa^{(i)}(t) ],

where Πa^{(i)}(t) = diag(p_{1,r}^{(i)}(t) + p_{1,η1}^{(i)}(t), …, p_{n,r}^{(i)}(t) + p_{n,η1}^{(i)}(t)) and Πb^{(i)}(t) = diag(p_{1,r}^{(i)}(t) − p_{1,η1}^{(i)}(t), …, p_{n,r}^{(i)}(t) − p_{n,η1}^{(i)}(t)).
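The structure of these matrices is easy to reproduce; a short NumPy sketch building the Remark 2 blocks for one sensor with n = 2 components under both properness scenarios (the probability values are arbitrary examples):

```python
import numpy as np

# Building the matrices of Remark 2 for n = 2 components of one sensor.
# Under T1-properness all four part probabilities coincide; under T2-properness
# p_r = p_eta2 and p_eta1 = p_eta3, so only p_r and p_eta1 enter Pi_2.
p_r    = np.array([0.9, 0.6])   # P(part r up to date), arbitrary example values
p_eta1 = np.array([0.7, 0.5])   # P(part eta1 up to date)

# T1-proper scenario: Pi_bar = np.kron(I4, Pi1), with Pi1 = diag(p_r).
Pi1 = np.diag(p_r)
Pi_bar_T1 = np.kron(np.eye(4), Pi1)

# T2-proper scenario: Pi2 = (1/2) [[Pi_a, Pi_b], [Pi_b, Pi_a]].
Pi_a = np.diag(p_r + p_eta1)
Pi_b = np.diag(p_r - p_eta1)
Pi2 = 0.5 * np.block([[Pi_a, Pi_b], [Pi_b, Pi_a]])
Pi_bar_T2 = np.kron(np.eye(2), Pi2)

assert Pi_bar_T1.shape == (8, 8) and Pi_bar_T2.shape == (8, 8)
# With equal part probabilities (the T1 case), Pi2 reduces to diag(p_r) blocks.
Pi2_equal = 0.5 * np.block([[np.diag(2 * p_r), np.zeros((2, 2))],
                            [np.zeros((2, 2)), np.diag(2 * p_r)]])
assert np.allclose(Pi2_equal, np.kron(np.eye(2), Pi1))
```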

3.2. Compact State-Space Model

By stacking the observations at each sensor in a global observation vector z(t) = [z̄^{(1)T}(t), …, z̄^{(R)T}(t)]^T, the TWL one-state delay system (5) can be rewritten in the compact form

x̄(t+1) = Φ̄(t) x̄(t) + ū(t),  t ≥ 0,
z(t) = C̄ x̄(t) + v(t),  t ≥ 0,
y(t) = D̄_γ(t) z(t) + D̄_{(1−γ)}(t) z(t−1),  t ≥ 1, (7)

where v(t) and y(t) denote the stacked vectors of v̄^{(i)}(t) and ȳ^{(i)}(t), for i = 1, …, R, respectively. Moreover, C̄ = 1_R ⊗ I_{4n}, D̄_γ(t) = L̄ diag(γ^r(t)) L̄^H and D̄_{(1−γ)}(t) = L̄ diag(1_{4Rn} − γ^r(t)) L̄^H, with L̄ = I_R ⊗ T_n.

In addition, E[v(t) v^H(s)] = R̄(t) δ_{t,s}, with R̄(t) = diag(R̄^{(1)}(t), …, R̄^{(R)}(t)), and E[ū(t) v^H(s)] = S̄(t) δ_{t,s}, with S̄(t) = [S̄^{(1)}(t), …, S̄^{(R)}(t)].

In this paper, our aim is to investigate the centralized fusion estimation problem under conditions of Tk-properness, with k=1,2. In this sense, the use of Tk-properness properties allows us to consider the observation equation with reduced dimension

y_k(t) = D̃_{kγ}(t) C̄ x̄(t) + D̃_{k(1−γ)}(t) C̄ x̄(t−1) + D̃_{kγ}(t) v(t) + D̃_{k(1−γ)}(t) v(t−1),  t ≥ 1, (8)

where x̄(t) satisfies the state equation in (7), D̃_{kγ}(t) = L_k diag(γ^r(t)) L̄^H and D̃_{k(1−γ)}(t) = L_k diag(1_{4Rn} − γ^r(t)) L̄^H, with L_k = I_R ⊗ T_k and T_k = (1/2)(B_k ⊗ I_n), where

  • T1-proper scenario:

    B_1 = [1, η1, η2, η3];

    y_1(t) ≜ [y^{(1)T}(t), …, y^{(R)T}(t)]^T.

  • T2-proper scenario:

    B_2 = [ 1   η1   η2   η3
            1  −η1   η2  −η3 ];

    y_2(t) ≜ [y^{(1)T}(t), y^{(1)H}(t), …, y^{(R)T}(t), y^{(R)H}(t)]^T.

Remark 3.

Note that under Tk-properness conditions, Π̃_{kγ}(t) = E[D̃_{kγ}(t)] is given by Π̃_{kγ}(t) = diag(Π̃_{kγ}^{(1)}(t), …, Π̃_{kγ}^{(R)}(t)), where Π̃_{kγ}^{(i)}(t) = [Π_k^{(i)}(t), 0_{kn×(4−k)n}], with Π_k^{(i)}(t), i = 1, …, R, given in Remark 2.

Similarly, Π̃_{k(1−γ)}(t) = E[D̃_{k(1−γ)}(t)] is given by the block diagonal matrix Π̃_{k(1−γ)}(t) = diag(Π̃_{k(1−γ)}^{(1)}(t), …, Π̃_{k(1−γ)}^{(R)}(t)), with Π̃_{k(1−γ)}^{(i)}(t) = [I_{kn} − Π_k^{(i)}(t), 0_{kn×(4−k)n}].

Accordingly, whereas the optimal linear processing for the estimation of a tessarine signal x(t) is the TWL processing based on the set of measurements {y(1), …, y(t)}, under Tk-properness conditions the optimal estimator of x(t) ∈ T^n, x̂_{Tk}(t|s), can be computed by projecting onto the set of measurements {y_k(1), …, y_k(s)}, for k = 1, 2. Thereby, Tk estimators are obtained that have the same performance as the TWL estimators but a lower computational complexity. More importantly, this computational saving cannot be achieved with the real approach.

Note that tessarine algebra does not form a Hilbert space and, as a consequence, neither the existence nor the uniqueness of the projection onto a set of tessarines is guaranteed. Nevertheless, this drawback was overcome in [23] by defining a suitable metric that assures the existence and uniqueness of these projections.

The following property establishes the correlations between the noises ū(t) and v(t) and both the augmented state x̄(t) and the observations y_k(t).

Property 3.

Under Assumptions 1–4, the following correlations hold.

  1. Correlations between the noises and the augmented state:

     (a) E[x̄(t+1) ū^H(t)] = Q̄(t);
     (b) E[x̄(t) ū^H(s)] = 0_{4n×4n}, for t ≤ s;
     (c) E[x̄(t+1) v^H(t)] = S̄(t);
     (d) E[x̄(t) v^H(s)] = 0_{4n×4Rn}, for t ≤ s.

  2. Correlations between the noises and the Tk observations:

     (a) E[y_k(t) ū^H(t)] = Π̃_{kγ}(t) S̄^H(t);
     (b) E[y_k(t+1) ū^H(t)] = Π̃_{kγ}(t+1) C̄ Q̄(t) + Π̃_{k(1−γ)}(t+1) S̄^H(t);
     (c) E[y_k(t) ū^H(s)] = 0_{kRn×4n}, for t < s;
     (d) E[y_k(t) v^H(t)] = Π̃_{kγ}(t) R̄(t);
     (e) E[y_k(t+1) v^H(t)] = Π̃_{kγ}(t+1) C̄ S̄(t) + Π̃_{k(1−γ)}(t+1) R̄(t);
     (f) E[y_k(t) v^H(s)] = 0_{kRn×4Rn}, for t < s.

Remark 4.

Observe that, under a Tk-properness setting, the state equation in (7) is equivalent to the Tk state equation

x_k(t+1) = Φ_k(t) x_k(t) + u_k(t),  t ≥ 0, (9)

where,

  • in a T1-proper scenario, x_1(t) ≜ x(t), u_1(t) ≜ u(t), and Φ_1(t) ≜ F1(t);

  • in a T2-proper scenario, x_2(t) ≜ [x^T(t), x^H(t)]^T, u_2(t) ≜ [u^T(t), u^H(t)]^T, and Φ_2(t) is as in (6).

In such cases, Q_k(t) = E[u_k(t) u_k^H(t)] and S_k(t) = E[u_k(t) v_k^H(t)], for k = 1, 2, where v_1(t) ≜ v(t) and v_2(t) ≜ [v^T(t), v^H(t)]^T, with v(t) = [v^{(1)T}(t), …, v^{(R)T}(t)]^T.

Nevertheless, Equation (9) cannot be used together with the observation Equation (8), since the latter involves the augmented state vector x¯(t).

4. Tk-Proper Centralized Fusion Estimation Algorithms

In this section, the Tk centralized fusion filter, prediction, and fixed-point smoothing algorithms are designed on the basis of the set of observations {yk(1),,yk(s)}, k=1,2, defined in (8).

With this purpose in mind, the observation Equation (8) is used to devise filtering, prediction, and smoothing algorithms for the augmented state vector x¯(t). Then, by applying Tk-properness properties, the recursive formulas for the filtering, prediction, and smoothing estimators of xk(t) are easily determined. Finally, the desired Tk centralized fusion filtering, prediction and fixed-point smoothing estimators are obtained as a subvector of them.

Theorems 1–3 summarize the recursive formulas for the computation of these Tk estimators as well as their associated error variances.

4.1. Tk Centralized Fusion Filter

Theorem 1.

The optimal Tk centralized fusion filter x̂_{Tk}(t|t) and one-step predictor x̂_{Tk}(t+1|t) for the state x(t) are obtained by extracting the first n components of the optimal estimators x̂_k(t|t) and x̂_k(t+1|t), respectively, which are recursively computed from the expressions

x̂_k(t|t) = x̂_k(t|t−1) + L_k(t) ε_k(t),  t ≥ 1, (10)
x̂_k(t+1|t) = Φ_k(t) x̂_k(t|t) + H_k(t) ε_k(t),  t ≥ 1, (11)

with x̂_k(0|0) = 0_{kn} and x̂_k(1|0) = 0_{kn}, and where H_k(t) = S_k(t) Π_k(t) Ω_k^{−1}(t), with Π_k(t) = diag(Π_k^{(1)}(t), …, Π_k^{(R)}(t)) and Π_k^{(i)}(t), i = 1, …, R, defined in Remark 2 for k = 1, 2. Moreover, ε_k(t) are the innovations, calculated as follows:

ε_k(t) = y_k(t) − Π_k(t) C_k x̂_k(t|t−1) − (I_m − Π_k(t)) C_k x̂_k(t−1|t−1) − (I_m − Π_k(t)) G_k(t−1) ε_k(t−1),  t ≥ 1, (12)

with m = kRn, ε_k(0) = 0_m, and where C_k = 1_R ⊗ I_{kn} and G_k(t) = R_k(t) Π_k(t) Ω_k^{−1}(t), with R_k(t) = E[v_k(t) v_k^H(t)].

In addition, L_k(t) = Θ_k(t) Ω_k^{−1}(t), where Θ_k(t) is computed through the equation

Θ_k(t) = P_k(t|t−1) C_k^T Π_k(t) + [Φ_k(t−1) P_k(t−1|t−1) C_k^T + S_k(t−1) − H_k(t−1) Θ_k^H(t−1) C_k^T − Φ_k(t−1) Θ_k(t−1) G_k^H(t−1) − H_k(t−1) Ω_k(t−1) G_k^H(t−1)] (I_m − Π_k(t)),  t > 1, (13)

with Θ_k(1) = P_k(1|0) C_k^T Π_k(1) + [Φ_k(0) P_k(0|0) C_k^T + S_k(0)] (I_m − Π_k(1)), and the innovations covariance matrix Ω_k(t) is obtained as

Ω_k(t) = M_{k1}(t) − M_{k2}(t) − M_{k3}(t) + M_{k4}(t) + Π_k(t) C_k P_k(t|t−1) C_k^T Π_k(t) + Π_k(t) J_k(t−1) (I_m − Π_k(t)) + (I_m − Π_k(t)) J_k^H(t−1) Π_k(t) + (I_m − Π_k(t)) [C_k P_k(t−1|t−1) C_k^T − C_k Θ_k(t−1) G_k^H(t−1) − G_k(t−1) Θ_k^H(t−1) C_k^T − G_k(t−1) Ω_k(t−1) G_k^H(t−1)] (I_m − Π_k(t)),  t > 1, (14)

with

Ω_k(1) = M_{k1}(1) − M_{k2}(1) − M_{k3}(1) + M_{k4}(1) + Π_k(1) C_k P_k(1|0) C_k^T Π_k(1) + Π_k(1) J_k(0) (I_m − Π_k(1)) + (I_m − Π_k(1)) J_k^H(0) Π_k(1) + (I_m − Π_k(1)) C_k P_k(0|0) C_k^T (I_m − Π_k(1)), (15)

where

J_k(t) = C_k [Φ_k(t) P_k(t|t) C_k^T − H_k(t) Θ_k^H(t) C_k^T + S_k(t) − Φ_k(t) Θ_k(t) G_k^H(t) − H_k(t) Ω_k(t) G_k^H(t)],

with J_k(0) = C_k [Φ_k(0) P_k(0|0) C_k^T + S_k(0)], and

  • M_{k1}(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H C̄ Σ̄(t−1) C̄^T L̄)] L_k^H,

  • M_{k2}(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H C̄ S̄(t) L̄)] L_k^H,

  • M_{k3}(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H S̄^H(t) C̄^T L̄)] L_k^H,

  • M_{k4}(t) = L_k [Δ_p^r(t) ∘ (L̄^H R̄(t) L̄) + Δ_{1−p}^r(t) ∘ (L̄^H R̄(t−1) L̄)] L_k^H,

where Δ_p^r(t) = E[γ^r(t) γ^{rT}(t)] and Δ_{1−p}^r(t) = E[(1_{4Rn} − γ^r(t))(1_{4Rn} − γ^r(t))^T], with entries given in (4), and

Σ̄(t) = Φ̄(t) D̄(t) Φ̄^H(t) + Q̄(t) − Φ̄(t) D̄(t) − D̄(t) Φ̄^H(t) + D̄(t),

where D̄(t) = R_{x̄}(t,t) is recursively computed from

D̄(t) = Φ̄(t−1) D̄(t−1) Φ̄^H(t−1) + Q̄(t−1). (16)

Finally, the Tk filtering and prediction error pseudo covariance matrices, P_{Tk}(t|t) and P_{Tk}(t+1|t), respectively, are obtained from the filtering and prediction error pseudo covariance matrices P_k(t|t) and P_k(t+1|t), calculated from the recursive expressions

P_k(t|t) = P_k(t|t−1) − Θ_k(t) Ω_k^{−1}(t) Θ_k^H(t), (17)

with P_k(0|0) = E[x_k(0) x_k^H(0)], and

P_k(t+1|t) = Φ_k(t) P_k(t|t) Φ_k^H(t) − H_k(t) Θ_k^H(t) Φ_k^H(t) − Φ_k(t) Θ_k(t) H_k^H(t) − H_k(t) Ω_k(t) H_k^H(t) + Q_k(t), (18)

with Pk(1|0)=Φk(0)Pk(0|0)ΦkH(0)+Qk(0).
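The structure of the recursions can be illustrated in the simplest degenerate setting: k = 1, R = 1, n = 1, real-valued data, and no delays (p = 1), so that Π_k(t) = 1, every (I_m − Π_k) term vanishes, and the M terms reduce to the measurement-noise contribution; (10)–(18) then collapse to a scalar Kalman filter with correlated state and measurement noises. The following sketch covers only this special case, not the general Tk algorithm of Theorem 1, and all parameter values are arbitrary:

```python
import numpy as np

# Degenerate special case of Theorem 1: k = 1, R = 1, n = 1, real data, no delays
# (p = 1, so Pi = 1 and every (I - Pi) term vanishes). The recursions reduce to a
# scalar Kalman filter with correlated noises E[u v] = s. Parameter values arbitrary.
phi, q, r, s = 0.9, 1.0, 1.0, 0.5   # state transition, Var u, Var v, Cov(u, v)

rng = np.random.default_rng(7)
T = 200
noise = rng.multivariate_normal([0.0, 0.0], [[q, s], [s, r]], size=T)
x = np.zeros(T); y = np.zeros(T)
for t in range(T - 1):
    y[t] = x[t] + noise[t, 1]                  # z(t) = x(t) + v(t), no delay
    x[t + 1] = phi * x[t] + noise[t, 0]
y[T - 1] = x[T - 1] + noise[T - 1, 1]

xf, xp = 0.0, 0.0         # filtered estimate and one-step predictor
P0 = 1.0
Pp = phi**2 * P0 + q      # P(1|0)
P = []
for t in range(T):
    eps = y[t] - xp                 # innovation (12) with Pi = 1
    Theta = Pp                      # (13): P(t|t-1) C^T Pi
    Omega = Pp + r                  # (14): C P C^T + measurement-noise term
    L = Theta / Omega               # gain L(t) = Theta Omega^{-1}
    H = s / Omega                   # H(t) = S Pi Omega^{-1}
    xf = xp + L * eps               # (10)
    Pf = Pp - Theta**2 / Omega      # (17)
    xp = phi * xf + H * eps         # (11)
    Pp = phi**2 * Pf - 2 * phi * H * Theta - H**2 * Omega + q   # (18)
    P.append(Pf)

assert all(v > 0 for v in P)        # error variance stays positive
assert abs(P[-1] - P[-2]) < 1e-9    # and reaches a steady state
```

The correlated-noise terms H(t) and S(t) are what distinguish this recursion from the textbook Kalman filter; setting s = 0 recovers the uncorrelated case.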

Remark 5.

In the implementation of the above algorithm, the particular structure of Σ̄(t) under Tk-properness conditions should be taken into consideration. In this regard, it is not difficult to check that Σ̄(t) is a block diagonal matrix of the form

  • T1-properness: Σ̄(t) = diag(Σ_1(t), Σ_1*(t), Σ_1^{η1}(t), Σ_1^{η2}(t));

  • T2-properness: Σ̄(t) = diag(Σ_2(t), Σ_2^{η1}(t)),

with Σ_k(t) = Φ_k(t) D_k(t) Φ_k^H(t) + Q_k(t) − Φ_k(t) D_k(t) − D_k(t) Φ_k^H(t) + D_k(t), k = 1, 2, where D_k(t) = R_{x_k}(t,t) is recursively computed from

D_k(t) = Φ_k(t−1) D_k(t−1) Φ_k^H(t−1) + Q_k(t−1).

4.2. Tk Centralized Fusion Predictor

Theorem 2.

The optimal Tk centralized fusion predictor x^Tk(t+τ|t) for the state x(t) is obtained by extracting the first n components of the optimal estimator x^k(t+τ|t), which is recursively computed from the expression

x̂_k(t+τ|t) = Φ_k(t+τ−1) x̂_k(t+τ−1|t),  τ ≥ 2, (19)

initialized with the one-step predictor x̂_k(t+1|t) given by (11).

Moreover, the Tk-proper prediction error pseudo covariance matrix P_{Tk}(t+τ|t) is obtained from the prediction error pseudo covariance matrix P_k(t+τ|t), computed from the recursive expression

P_k(t+τ|t) = Φ_k(t+τ−1) P_k(t+τ−1|t) Φ_k^H(t+τ−1) + Q_k(t+τ−1),  τ ≥ 2, (20)

initialized with the one-step prediction error pseudo covariance matrix given by (18).

4.3. Tk Centralized Fusion Smoother

Theorem 3.

The optimal Tk centralized fusion fixed-point smoother x̂_{Tk}(t|s), for a fixed instant t < s, for the state x(t) is obtained by extracting the first n components of the optimal estimator x̂_k(t|s), which is recursively computed from the expressions

x̂_k(t|s) = x̂_k(t|s−1) + L_k(t,s) ε_k(s),  s > t, (21)

with initial condition x̂_k(t|t) given by (10), and where the innovations ε_k(s) are recursively computed from (12), and L_k(t,s) = Θ_k(t,s) Ω_k^{−1}(s), with Ω_k(s) obtained from the recursive expression (14) and

Θ_k(t,s) = [E_k(t,s−1) Φ_k^H(s−1) − Θ_k(t,s−1) H_k^H(s−1)] C_k^T Π_k(s) + [E_k(t,s−1) C_k^T − Θ_k(t,s−1) G_k^H(s−1)] (I_m − Π_k(s)), (22)
E_k(t,s) = [E_k(t,s−1) Φ_k^H(s−1) − Θ_k(t,s−1) H_k^H(s−1)] [I_{kn} − C_k^T Π_k(s) L_k^H(s)] − [E_k(t,s−1) C_k^T − Θ_k(t,s−1) G_k^H(s−1)] (I_m − Π_k(s)) L_k^H(s), (23)

with initializations Θ_k(t,t) = Θ_k(t) given by (13) and E_k(t,t) = P_k(t|t).

Furthermore, the Tk fixed-point smoothing error pseudo covariance matrix is recursively computed through the expression

P_k(t|s) = P_k(t|s−1) − Θ_k(t,s) Ω_k^{−1}(s) Θ_k^H(t,s), (24)

with initial condition P_k(t|t), the filtering error pseudo covariance matrix (17).

As mentioned above, the main advantage of the proposed Tk centralized fusion algorithms is that the resulting Tk centralized fusion estimators coincide with the optimal TWL counterparts, while leading to computational savings with respect to those derived from a TWL approach.

Remark 6.

The computational demand of the proposed tessarine estimation algorithms under Tk-properness conditions, for k = 1, 2, is similar to that of their counterparts in the quaternion domain, i.e., the QSL and QSWL estimation algorithms, respectively (see [34] for a comparative analysis of the computational complexity of quaternion estimators). Therefore, the computational load of the TWL estimation algorithms is of order O(64R³n³), whereas the Tk algorithms, for k = 1, 2, are of order O(m³), with m = kRn.
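In terms of Remark 6, the saving factor of the Tk algorithms over the TWL one is 64/k³, i.e., 64 for T1 and 8 for T2, independently of R and n; a trivial check:

```python
# Per-iteration order-of-magnitude cost (Remark 6): TWL algorithms are O(64 R^3 n^3),
# while the Tk algorithms are O(m^3) with m = k R n, i.e. O(k^3 R^3 n^3).
def cost_TWL(R, n):
    return 64 * R**3 * n**3

def cost_Tk(k, R, n):
    return (k * R * n)**3

R, n = 3, 4                  # arbitrary example sizes
for k in (1, 2):
    saving = cost_TWL(R, n) / cost_Tk(k, R, n)
    print(f"T{k}: saving factor {saving:.0f}x")   # 64x for T1, 8x for T2

assert cost_TWL(R, n) // cost_Tk(1, R, n) == 64
assert cost_TWL(R, n) // cost_Tk(2, R, n) == 8
```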

5. Simulation Examples

In this section, the effectiveness of the above Tk-proper centralized fusion estimation algorithms is experimentally analyzed. With this aim, the following simulation examples have been chosen to reveal the superiority of the proposed Tk-proper estimators over their counterparts in the quaternion domain when Tk-properness conditions hold.

Let us consider the following tessarine system with three sensors:

x(t+1) = f_1 x(t) + u(t),
z^{(i)}(t) = x(t) + v^{(i)}(t),  i = 1, 2, 3,
y^{(i)}(t) = γ^{(i)}(t) ★ z^{(i)}(t) + (1 − γ^{(i)}(t)) ★ z^{(i)}(t−1),  i = 1, 2, 3,

with f_1 = 0.9 − 0.3η1 + 0.02η2 + 0.1η3. The following assumptions are made on the initial state and additive noises.

  • 1.
    The initial state x(0) is a tessarine Gaussian variable determined by the real covariance matrix

    E[x^r(0) x^{rT}(0)] = [ a    0    2.5  0
                            0    4    0    2.5
                            2.5  0    a    0
                            0    2.5  0    4 ]. (25)

  • 2.
    u(t) is a tessarine white Gaussian noise with real covariance matrix

    E[u^r(t) u^{rT}(s)] = [ 0.9  0    c    0
                            0    b    0    c
                            c    0    0.9  0
                            0    c    0    b ] δ_{t,s}. (26)

  • 3.
    The measurement noises v^{(i)}(t) of the three sensors are tessarine white Gaussian noises defined as

    v^{(i)}(t) = α_i u(t) + w^{(i)}(t),

    where the coefficients α_i are the constant scalars α_1 = 0.5, α_2 = 0.8, and α_3 = 0.4, and w^{(i)}(t), i = 1, 2, 3, are T1-proper tessarine white Gaussian noises with zero mean and real covariance matrices

    E[w^{(i)r}(t) w^{(i)rT}(s)] = β_i I_4 δ_{t,s},

    with β_1 = 4, β_2 = 8, and β_3 = 25, and independent of u(t). Note that, if α_i = 0, then the noises u(t) and v^{(i)}(t) are uncorrelated; as α_i moves away from 0, the correlation between u(t) and v^{(i)}(t) becomes stronger.

Moreover, at every sensor i, the Bernoulli random variables γ_ν^{(i)}(t), ν = r, η1, η2, η3, have constant probabilities P[γ_ν^{(i)}(t) = 1] = p_ν^{(i)}, for all t.

In this framework, a comparative study between the tessarine and quaternion approaches is carried out to evaluate the performance of the proposed filtering, prediction and smoothing algorithms under T1 and T2-properness conditions. Specifically, besides the filtering problem, the 3-step prediction and the fixed-point smoothing at t = 20 are considered in our simulations.

5.1. Study Case 1: T1-Proper Systems

Consider the values a=4 in (25) and b=0.9 and c=0.3 in (26), and the Bernoulli probabilities

  • pr(1) = pη(1) = pη′(1) = pη″(1) = p1;

  • pr(2) = pη(2) = pη′(2) = pη″(2) = p2;

  • pr(3) = pη(3) = pη′(3) = pη″(3) = p3.

Note that, under these conditions, both x(t) and y(i)(t), i=1,2,3, are jointly T1-proper.

For the purpose of comparison, the error variances of both the T1 and QSL estimators have been computed for different Bernoulli probabilities pi, i=1,2,3. We denote the QSL error variances by PQSL(t|s). Then, as a performance measure, we compute the difference between the QSL and T1 error variances associated with the filter, DE1(t|t) = PQSL(t|t) − P1(t|t), the 3-step predictor, DE1(t+3|t) = PQSL(t+3|t) − P1(t+3|t), and the fixed-point smoother at t=20, DE1(20|t) = PQSL(20|t) − P1(20|t), for t > 20.

Firstly, these differences are displayed in Figure 1 considering different degrees of correlation between the state and measurement noises: independent noises (α1=α2=α3=0), low correlations (α1=0.5, α2=0.8, α3=0.4), and high correlations (α1=5, α2=8, α3=4), and two levels of uncertainty: high delay probabilities (case p1=0.5, p2=0.2, p3=0.4) and low delay probabilities (case p1=0.9, p2=0.5, p3=0.8). As we can see, in all situations these differences are positive, which indicates that the proposed T1 estimators outperform the QSL estimators. Moreover, this superiority in performance increases when the correlation between the system noises is stronger. With respect to the levels of uncertainty, a better behavior of the T1 estimators over their QSL counterparts is generally observed in the scenario of high delay probabilities, i.e., when the Bernoulli probabilities are smaller.

Figure 1.

Figure 1

Difference between QSL and T1 error variances for the problem of (a) filtering, (b) 3-step prediction and (c) fixed-point smoothing.

Next, in order to evaluate the performance of the proposed estimators versus the probability of delay, we consider the same Bernoulli probability in the three sensors (p1=p2=p3=p), and the difference between the QSL and T1 error variances is computed for different values of p. Figure 2 illustrates these differences for p=0, 0.2, 0.4, 0.6, 0.8, 1. In these figures, the superiority in performance of the T1 estimators over the QSL estimators is confirmed, since DE1>0 in every case. Additionally, in the filtering and prediction problems it is observed that this superiority is greater for the smallest Bernoulli probabilities, i.e., when the delay probabilities are higher. On the other hand, in the fixed-point smoothing problem, a similar behavior is obtained for Bernoulli probabilities p and 1−p, the advantage of the T1 smoothing algorithm over the QSL one being greater at intermediate values of p (cases p=0.4 and p=0.6). These results are examined in detail below.

Figure 2.

Figure 2

Difference between QSL and T1 error variances for the problem of (a) filtering, (b) 3-step prediction and (c) fixed-point smoothing.

Our aim now is to analyze the benefits of our T1 estimation algorithms in terms of the common Bernoulli probability p of the three sensors. In this analysis, different values of c in (26) are also considered. Then, the means of the differences between the QSL and T1 filtering, prediction, and fixed-point smoothing error variances have been computed as

  • Filtering problem: MDE1p(t|t) = (1/100) Σ_{t=1}^{100} DE1p(t|t);

  • 3-step prediction problem: MDE1p(t+3|t) = (1/97) Σ_{t=1}^{97} DE1p(t+3|t);

  • Fixed-point smoothing problem: MDE1p(20|t) = (1/80) Σ_{t=21}^{100} DE1p(20|t);

for p varying from 0 to 1 and the values c=0, 0.3, 0.6, and 0.8, where DE1p(t|t), DE1p(t+3|t), and DE1p(20|t) denote the differences between the QSL and T1 filtering, 3-step prediction, and fixed-point smoothing error variances, respectively, for a value p of the Bernoulli probability. Note that, in the case c=0, the noise u(t) is not only T1-proper but also Q-proper, and a higher value of c means that the noise u(t) moves further away from the Q-properness condition. The results of this analysis are depicted in Figure 3 where, on the one hand, we can clearly observe that the best performance of the T1 filtering and prediction estimators over their QSL counterparts is obtained for the smallest Bernoulli probabilities. Specifically, except for the case c=0.8, the maximum difference between the T1 and QSL errors is achieved when the Bernoulli probability takes the value 0, i.e., when only one-step delays exist in the measurements. However, in the fixed-point smoothing problem, T1 processing is more advantageous when the Bernoulli probability p tends to 0.5. On the other hand, in every case, the superiority of our T1 estimation algorithms becomes more evident as the parameter c in (26) grows, i.e., as the noise u(t) moves further away from the Q-properness condition.
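As a minimal sketch of how these time-averaged measures are formed (the variance sequences below are placeholders for illustration, not the paper's results):

```python
import numpy as np

# Illustrative error-variance sequences; in the study these come from the
# T1 and QSL filtering algorithms over a horizon of 100 instants.
P_T1  = np.linspace(1.0, 0.6, 100)   # T1 filtering error variances P1(t|t)
P_QSL = np.linspace(1.2, 0.8, 100)   # QSL filtering error variances PQSL(t|t)

DE1 = P_QSL - P_T1                   # DE1(t|t) = PQSL(t|t) - P1(t|t)
MDE1 = DE1.mean()                    # MDE1 = (1/100) * sum over t of DE1(t|t)
```

A positive MDE1 summarizes, in a single number per probability p, the average advantage of the T1 estimator over its QSL counterpart.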

Figure 3.

Figure 3

Mean of the difference between QSL and T1 error variances for the problem of (a) filtering, (b) 3-step prediction, and (c) fixed-point smoothing.

5.2. Study Case 2: T2-Proper Systems

Consider the values a=6 in (25), b=c=0.3 in (26), and the Bernoulli probabilities for the three sensors as in Section 5.1. Note that, under these conditions, both x(t) and y(i)(t), i=1,2,3, are jointly T2-proper.

Thus, we are interested in comparing the behavior of T2 centralized fusion estimators with their counterparts in the quaternion domain, i.e., the quaternion semi-widely linear (QSWL) estimators. For this purpose, the T2 and QSWL error variances, P2(t|s) and PQSWL(t|s), respectively, have been computed by considering different Bernoulli probabilities for the three sensors.

Specifically, we consider the filtering, the 3-step prediction, and the fixed-point smoothing problems at t=20, and, as a measure of comparison, we use the differences between the QSWL and T2 error variances, defined as DE2(t|t) = PQSWL(t|t) − P2(t|t) (filtering), DE2(t+3|t) = PQSWL(t+3|t) − P2(t+3|t) (3-step prediction), and DE2(20|t) = PQSWL(20|t) − P2(20|t) (fixed-point smoothing).

Figure 4 and Figure 5 compare the difference between QSWL and T2 centralized estimation error variances for different Bernoulli probabilities p1, p2 and p3. Specifically, Figure 4 analyzes the filtering and 3-step prediction error variance differences DE2(t|t) and DE2(t+3|t) for the following cases:

  • 1.

    Case 1: for values of p1=0.1,0.5,0.9 in three situations: p2=0.9 and p3=0.1, p2=0.1 and p3=0.9, and p2=p3=0.5;

  • 2.

    Case 2: for values of p2=0.1,0.5,0.9 in three situations: p1=0.9 and p3=0.1, p1=0.1 and p3=0.9, and p1=p3=0.5;

  • 3.

    Case 3: for values of p3=0.1,0.5,0.9 in three situations: p1=0.9 and p2=0.1, p1=0.1 and p2=0.9, and p1=p2=0.5.

Figure 4.

Figure 4

Difference between QSWL and T2 error variances for the problem of filtering (left column) and 3-step prediction (right column) for Cases 1–3.

Figure 5.

Figure 5

Difference between QSWL and T2 error variances for the fixed-point smoothing problem for Cases 4–6.

It should be highlighted that similar results are obtained with any other combination of Bernoulli probabilities pi, i=1,2,3.

From these figures, we can reaffirm that T2 processing outperforms QSWL processing (DE2>0). Moreover, in the filtering and 3-step prediction problems (Figure 4), this fact is more evident when the probabilities of the Bernoulli variables decrease (that is, when the delay probabilities increase).

The differences between the QSWL and T2 error variances for the fixed-point smoothing problem are illustrated in Figure 5. Note that, since the behavior of these differences is similar for Bernoulli probability values pi and 1−pi, they are analyzed in the following cases:

  • 1.

    Case 4: for values of p1=0.1,0.3,0.5 in three situations: p2=0.1 and p3=0.3, p2=0.3 and p3=0.1, and p2=p3=0.3.

  • 2.

    Case 5: for values of p2=0.1,0.3,0.5 in three situations: p1=0.1 and p3=0.3, p1=0.3 and p3=0.1, and p1=p3=0.3.

  • 3.

    Case 6: for values of p3=0.1,0.3,0.5 in three situations: p1=0.1 and p2=0.3, p1=0.3 and p2=0.1, and p1=p2=0.3.

In every situation, the better behavior of T2 processing over QSWL processing is verified, and this superiority increases as the Bernoulli probabilities tend to 0.5, i.e., when there is a similar chance of receiving updated and delayed information.

6. Discussion

From among the different sensor fusion methods, it is the centralized fusion techniques that provide the optimal estimators from the measurements of all sensors. Nevertheless, to avoid the computational load involved in these estimates, especially in systems with a large number of sensors, suboptimal estimation algorithms have traditionally been designed by using a decentralized fusion approach. This paper has overcome the above computational difficulties without giving up the optimal solution, by considering hypercomplex algebras. Quaternions and, more recently, tessarines are the most usual 4D hypercomplex algebras employed in signal processing. Commonly, since both quaternions and tessarines are isomorphic to R4, they involve the same computational complexity. Interestingly, under properness conditions, this complexity in terms of dimension is halved for QSWL and T2-proper methods and reduced to a quarter for QSL and T1-proper methods, which leads to a significant reduction in the computational load of our algorithms. It is precisely in this context that hypercomplex algebras become an ideal tool, with computational advantages over the existing methods, to address the centralized fusion estimation problem.

In general, neither of these algebras always performs better than the other, and the choice of the most suitable one is conditioned by the characteristics of the signal. Its commutativity and reduced computational complexity make the tessarine algebra particularly interesting for our purposes. Thus, under Tk-properness conditions, filtering, prediction, and fixed-point smoothing algorithms of reduced dimension have been devised for the estimation of a vectorial tessarine signal based on one-step randomly delayed observations coming from multiple-sensor stochastic systems with different delay rates and correlated noises. The reduction of the dimension of the problem under Tk-properness scenarios allows these algorithms to compute the optimal estimates at a lower computational cost than the real processing approach. It should be highlighted that this computational saving cannot be attained in the real field.
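The commutativity contrast mentioned above can be checked numerically; in this sketch the tessarine units are written i, j, k (taken to correspond to the paper's η, η′, η″, with i² = k² = −1 and j² = +1) and the operands are illustrative:

```python
import numpy as np

def tessarine_mul(a, b):
    # Tessarine product: units i^2 = k^2 = -1, j^2 = +1, k = ij (commutative).
    a0, a1, a2, a3 = a; b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 + a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 + a3*b2,
        a0*b2 + a2*b0 - a1*b3 - a3*b1,
        a0*b3 + a3*b0 + a1*b2 + a2*b1,
    ])

def quaternion_mul(a, b):
    # Hamilton product (non-commutative).
    a0, a1, a2, a3 = a; b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 + a2*b0 + a3*b1 - a1*b3,
        a0*b3 + a3*b0 + a1*b2 - a2*b1,
    ])

p = np.array([0.9, -0.3, 0.02, 0.1])   # illustrative 4D numbers
q = np.array([1.0, 2.0, -1.0, 0.5])

# The tessarine product commutes; the quaternion product generally does not.
assert np.allclose(tessarine_mul(p, q), tessarine_mul(q, p))
assert not np.allclose(quaternion_mul(p, q), quaternion_mul(q, p))
```

This commutativity is what allows tessarine covariance relations to be manipulated much like real ones, one of the reasons the algebra is convenient here.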

The good performance of the proposed algorithms has been experimentally illustrated by means of two simulation examples, in which the superior behavior of the proposed Tk estimators over their counterparts in the quaternion domain under Tk-properness conditions has been evidenced.

In future research, we will set out to explore the design of decentralized fusion estimation algorithms for hypercomplex signals and investigate the use of new hypercomplex algebras in this field.

Appendix A. Proof of Theorem 1

The proof is based on the innovation technique. Consider the one-state delay model:

x¯(t+1) = Φ¯(t)x¯(t) + u¯(t),  t ≥ 0
yk(t) = D˜kγ(t)C¯x¯(t) + D˜k(1−γ)(t)C¯x¯(t−1) + D˜kγ(t)v(t) + D˜k(1−γ)(t)v(t−1),  t ≥ 1 (A1)

and define the innovations as εk(t) = yk(t) − y^k(t|t−1).

In order to simplify the proof of Theorem 1, the following results have been previously established.

Appendix A.1. Preliminary Results

The following property, stated without proof, about the correlations between the innovations εk(t) and the augmented state x¯(t) and the noises u¯(t) and vk(t), will be useful in the proof of Theorem 1.

Property A1.

Given the system (A1), and under Assumptions 1–4, the following correlations hold:

  • (1)

    E[u¯(t)εkH(t)]=S¯(t)Π˜kγH(t).

  • (2)

    E[u¯(t)εkH(s)]=04n×m, for t>s.

  • (3)

    E[v¯(t)εkH(t)]=R¯(t)Π˜kγH(t).

  • (4)

    E[v¯(t)εkH(s)]=04Rn×m, for t>s.

Moreover, the following results will be of interest in the derivation of the formulas given in Theorem 1.

Lemma A1.

Denote ΔD˜kγ(t) = D˜kγ(t) − Π˜kγ(t) and ΔD˜k(1−γ)(t) = D˜k(1−γ)(t) − Π˜k(1−γ)(t). For any tessarine random vectors α1(t), α2(t) ∈ T4Rn and β(t) ∈ Tq, for any dimension q, the following relations hold, where ∘ denotes the Hadamard (elementwise) product:

  •  1 

    E[ΔD˜kγ(t)α1(t)α2H(s)ΔD˜kγH(t)] = Lk(Cov(γr(t)) ∘ (L¯HE[α1(t)α2¯H(s)]L¯))LkH.

  •  2 

    E[ΔD˜kγ(t)α1(t)α2H(s)D˜kγH(t)] = Lk(Cov(γr(t)) ∘ (L¯HE[α1(t)α2H(s)]L¯))LkH.

  •  3 

    E[ΔD˜kγ(t)α1(t)α2H(s)D˜k(1−γ)H(t)] = −Lk(Cov(γr(t)) ∘ (L¯HE[α1(t)α2H(s)]L¯))LkH.

  •  4 

    E[ΔD˜kγ(t)α1(t)βH(s)] = 0m×q.

  •  5 

    E[D˜kγ(t)α1(t)α2H(s)D˜kγH(t)] = Lk(E[γr(t)γrT(t)] ∘ (L¯HE[α1(t)α2H(s)]L¯))LkH.

  •  6 

    E[D˜kγ(t)α1(t)α2H(s)D˜k(1−γ)H(t)] = Lk(E[γr(t)(14Rn − γr(t))T] ∘ (L¯HE[α1(t)α2H(s)]L¯))LkH.

  •  7 

    E[D˜k(1−γ)(t)α1(t)α2H(s)D˜k(1−γ)H(t)] = Lk(E[(14Rn − γr(t))(14Rn − γr(t))T] ∘ (L¯HE[α1(t)α2H(s)]L¯))LkH.

Proof. 

The proof is immediate from (1), taking into account that D˜kγ(t) = Lk diag(γr(t))L¯H and D˜k(1−γ)(t) = Lk diag(14Rn − γr(t))L¯H. □

Appendix A.2. Expressions in Theorem 1

Although tessarine algebra is not a Hilbert space, the existence and uniqueness of the projection of an element on the set of measurements {yk(1),,yk(s)}, for k=1,2, is guaranteed ([23]). Now, from Theorem 3 of [23], we obtain

x¯^(t|t) = x¯^(t|t−1) + L˜k(t)εk(t), (A2)

with L˜k(t) = Θ˜k(t)Ωk−1(t), where Θ˜k(t) = E[x¯(t)εkH(t)] and Ωk(t) = E[εk(t)εkH(t)]. Then, by applying the Tk-properness conditions, (10) is directly derived.

Taking projections on both sides of the state and observation equations in (A1) onto the linear space spanned by {εk(1),,εk(t1)}, and using Property A1, we have

x¯^(t+1|t) = Φ¯(t)x¯^(t|t) + H˜k(t)εk(t) (A3)
y^k(t|t−1) = Π˜kγ(t)C¯x¯^(t|t−1) + Π˜k(1−γ)(t)[C¯x¯^(t−1|t−1) + v¯^(t−1|t−1)] (A4)

where H˜k(t) = S¯(t)Π˜kγH(t)Ωk−1(t) and v¯^(t|t) = G˜k(t)εk(t), with G˜k(t) = R¯(t)Π˜kγH(t)Ωk−1(t).

Then, (11) follows from (A3) and the Tk-properness conditions on Φ¯(t) and Π˜kγ(t) established in Proposition 1 and Remark 3. Likewise, (12) is easily obtained from (A4).

Consider now the gain matrix L˜k(t) = Θ˜k(t)Ωk−1(t) in (A2). Denote the prediction error and its covariance matrix as ϵ¯(t|t−1) = x¯(t) − x¯^(t|t−1) and P¯(t|t−1) = E[ϵ¯(t|t−1)ϵ¯H(t|t−1)], respectively. Then, by applying (A1) and (A4), ϵ¯(t|t−1) ⊥ x¯^(t|t−1), Property 3 and Property A1, we have

Θ˜k(t) = P¯(t|t−1)C¯TΠ˜kγH(t) + Φ¯(t−1)P¯(t−1|t−1)C¯TΠ˜k(1−γ)H(t) + S¯(t−1)Π˜k(1−γ)H(t) − H˜k(t−1)Θ˜kH(t−1)C¯TΠ˜k(1−γ)H(t) − Φ¯(t−1)Θ˜k(t−1)G˜kH(t−1)Π˜k(1−γ)H(t) − H˜k(t−1)Ωk(t−1)G˜kH(t−1)Π˜k(1−γ)H(t),  t > 1 (A5)

and thus, the recursive expression (13) is directly obtained from (A5) by applying the Tk-properness conditions on Φ¯(t) and denoting by Pk(t|t−1) the first m×m submatrix of P¯(t|t−1).

Next, we devise the expression for the innovation covariance matrix (14). For this purpose, the innovations are rewritten in the following form

εk(t) = ΔD˜kγ(t)C¯x¯(t) + Π˜kγ(t)C¯ϵ¯(t|t−1) − ΔD˜kγ(t)C¯x¯(t−1) + Π˜k(1−γ)(t)C¯ϵ¯(t−1|t−1) + D˜kγ(t)v(t) + D˜k(1−γ)(t)v(t−1) − Π˜k(1−γ)(t)G˜k(t−1)εk(t−1). (A6)

From (A3), the prediction error ϵ¯(t+1|t) can be expressed as

ϵ¯(t+1|t) = Φ¯(t)ϵ¯(t|t) + u¯(t) − H˜k(t)εk(t). (A7)

As a consequence, from (A7), using Property 3 and Property A1, and taking into account that ϵ¯(t|t) ⊥ εk(t), we have

E[u¯(t)ϵ¯H(t|t)] = −H˜k(t)Θ˜kH(t), (A8)
E[ϵ¯(t+1|t)ϵ¯H(t|t)] = Φ¯(t)P¯(t|t) − H˜k(t)Θ˜kH(t), (A9)
E[ϵ¯(t|t)v¯H(t)] = −Θ˜k(t)G˜kH(t), (A10)
E[ϵ¯(t|t)v¯H(t+1)] = 04n×4Rn, (A11)
E[ϵ¯(t+1|t)v¯H(t)] = S¯(t) − Φ¯(t)Θ˜k(t)G˜kH(t) − H˜k(t)Ωk(t)G˜kH(t), (A12)
E[ϵ¯(t+1|t)v¯H(t+1)] = 04n×4Rn. (A13)

Then, the expression (14) for the innovation covariance matrix is obtained from (A6), by using Lemma A1, Property 3, Property A1, (A9)–(A13), ϵ¯(t+1|t) ⊥ εk(t), and by applying the Tk-properness conditions. Furthermore, the recursion of D¯(t) = E[x¯(t)x¯H(t)] given in (16) is a direct consequence of the augmented state equation in system (A1). In a similar way, Equation (15) follows.

In the following step, consider the filtering error covariance matrix P¯(t|t) = E[ϵ¯(t|t)ϵ¯H(t|t)] with ϵ¯(t|t) = x¯(t) − x¯^(t|t). From (A2), we directly obtain that P¯(t|t) = P¯(t|t−1) − Θ˜k(t)Ωk−1(t)Θ˜kH(t), and thus (17) holds by virtue of the Tk-properness conditions.

Finally, from (A7), and taking into consideration that ϵ¯(t|t) ⊥ εk(t), (A8), and Property A1, we have

P¯(t+1|t) = Φ¯(t)P¯(t|t)Φ¯H(t) − H˜k(t)Θ˜kH(t)Φ¯H(t) − Φ¯(t)Θ˜k(t)H˜kH(t) − H˜k(t)Ωk(t)H˜kH(t) + Q¯(t).

From the Tk-properness conditions, (18) follows.
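For intuition, the innovation recursions derived above reduce, in a real-valued, scalar, delay-free setting, to the familiar predict/innovate/correct cycle with gain L(t) = Θ(t)Ω−1(t); the sketch below is an illustrative analogue under that simplification, not the paper's Tk algorithm, and all numerical values are assumptions:

```python
import numpy as np

# Scalar innovation-form filter: predict, form the innovation eps(t),
# and correct with gain L(t) = Theta(t) / Omega(t).
rng = np.random.default_rng(1)
Phi, C, Q, R = 0.9, 1.0, 0.9, 4.0   # illustrative system constants

x_true, x_hat, P = 0.0, 0.0, 1.0
for t in range(50):
    # Simulate the state and the measurement
    x_true = Phi * x_true + rng.normal(scale=np.sqrt(Q))
    y = C * x_true + rng.normal(scale=np.sqrt(R))

    # Prediction step: x_hat(t|t-1) and P(t|t-1)
    x_pred = Phi * x_hat
    P_pred = Phi * P * Phi + Q

    # Innovation eps(t) = y(t) - y_hat(t|t-1) and its variance Omega(t)
    eps = y - C * x_pred
    Omega = C * P_pred * C + R

    # Gain L(t) = Theta(t) Omega^{-1}(t), with Theta(t) = P(t|t-1) C^T
    Theta = P_pred * C
    L = Theta / Omega

    # Correction: x_hat(t|t) and P(t|t) = P(t|t-1) - Theta Omega^{-1} Theta^T
    x_hat = x_pred + L * eps
    P = P_pred - Theta * L
```

The correction line for P mirrors the relation P¯(t|t) = P¯(t|t−1) − Θ˜k(t)Ωk−1(t)Θ˜kH(t) used in the proof.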

Appendix B. Proof of Theorem 2

From the projection of x(t+τ) onto the linear space spanned by {εk(1),…,εk(t)}, we have

x¯^(t+τ|t) = Φ¯(t+τ−1)x¯^(t+τ−1|t),  τ ≥ 2.

Then, from the Tk-properness conditions, (19) holds.

Finally, from (19), it is clear that the prediction error covariance matrix Pk(t+τ|t) satisfies the recursive expression (20).
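In scalar real-valued form, the recursion behind (19) amounts to repeated application of the transition constant, since no new measurements arrive during the prediction horizon; a minimal sketch with illustrative values:

```python
# tau-step prediction: x_hat(t+tau|t) = Phi * x_hat(t+tau-1|t), tau >= 2.
Phi = 0.9                   # illustrative transition constant
x_hat_filter = 1.5          # x_hat(t|t) from the filtering stage (assumed)

x_pred = x_hat_filter
for _ in range(3):          # the 3-step predictor used in the simulations
    x_pred = Phi * x_pred
```

After three steps the predictor equals Phi**3 times the filtered estimate, which is why the prediction error variance in (20) grows with the horizon.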

Appendix C. Proof of Theorem 3

By projecting the state x(t) onto the linear space spanned by {εk(1),…,εk(s)}, we have

x¯^(t|s) = x¯^(t|s−1) + L˜k(t,s)εk(s),  t < s (A14)

with L˜k(t,s) = θ˜k(t,s)Ωk−1(s), where θ˜k(t,s) = E[x¯(t)εkH(s)]. Then, (21) is directly derived from (A14), by applying the Tk-properness conditions.

Consider the matrix θ˜k(t,s). From (12) and (8), we have

θ˜k(t,s) = E[x¯(t)ϵ¯H(s|s−1)]C¯TΠ˜kγ(s) + E[x¯(t)ϵ¯H(s−1|s−1)]C¯T(Im − Π˜kγ(s)) − θ˜k(t,s−1)G˜kH(s−1)(Im − Π˜kγ(s)).

Let us define the matrix E¯(t,s) = E[x¯(t)ϵ¯H(s|s)]. Thus, from (A1) and (A3), it follows that

θ˜k(t,s) = [E¯(t,s−1)Φ¯H(s−1) − θ˜k(t,s−1)H˜kH(s−1)]C¯TΠ˜kγ(s) + [E¯(t,s−1)C¯T − θ˜k(t,s−1)G˜kH(s−1)](Im − Π˜kγ(s)) (A15)

Then, (22) follows from the Tk-properness conditions.

In a similar way, from (A1)–(A3) and (A9), E¯(t,s) is of the form

E¯(t,s) = [E¯(t,s−1)Φ¯H(s−1) − θ˜k(t,s−1)H˜kH(s−1)][Im − C¯TΠ˜kγ(s)L˜kH(s)] − [E¯(t,s−1)C¯T − θ˜k(t,s−1)G˜kH(s−1)](Im − Π˜kγ(s))L˜kH(s) (A16)

where E¯(t,t) = P¯(t|t). Then, (23) follows from the Tk-properness conditions.

Finally, (24) can be easily derived from (21).

Author Contributions

Conceptualization, R.M.F.-A.; Formal analysis, R.M.F.-A., J.N.-M. and J.C.R.-M.; Funding acquisition, R.M.F.-A. and J.N.-M.; Investigation, R.M.F.-A. and J.N.-M.; Methodology, R.M.F.-A.; Project administration, R.M.F.-A. and J.N.-M.; Software, R.M.F.-A.; Supervision, J.N.-M. and J.C.R.-M.; Validation, R.M.F.-A., J.N.-M. and J.C.R.-M.; Visualization, J.N.-M. and J.C.R.-M.; Writing—original draft, R.M.F.-A.; Writing—review & editing, J.N.-M. and J.C.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported in part by I+D+i Project with reference number 1256911, under ‘Programa Operativo FEDER Andalucía 2014–2020’, Junta de Andalucía, and Project EI_FQM2_2021 of ‘Plan de Apoyo a la Investigación 2021–2022’ of the University of Jaén.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Li W.L., Jia Y.M., Du J.P., Fu J.C. State estimation for nonlinearly coupled complex networks with application to multi-target tracking. Neurocomputing. 2018;275:1884–1892. doi: 10.1016/j.neucom.2017.10.012. [DOI] [Google Scholar]
  • 2.Lee J.S., McBride J. Extended object tracking via positive and negative information fusion. IEEE Trans. Signal Process. 2019;67:1812–1823. doi: 10.1109/TSP.2019.2897942. [DOI] [Google Scholar]
  • 3.Kurkin A.A., Tyugin D.Y., Kuzin V.D., Chernov A.G., Makarov V.S., Beresnev P.O., Filatov V.I., Zeziulin D.V. Autonomous mobile robotic system for environment monitoring in a coastal zone. Procedia Comput. Sci. 2017;103:459–465. doi: 10.1016/j.procs.2017.01.022. [DOI] [Google Scholar]
  • 4.Gingras D. Automotive Informatics and Communicative Systems. Information Science Reference; IGI Global; Hershey, PA, USA: 2009. An overview of positioning and data fusion techniques applied to land vehicle navigation systems; pp. 219–246. [Google Scholar]
  • 5.Gao B., Hu G., Gao S., Zhong Y., Gu C. Multi-sensor optimal data fusion for INS/GNSS/CNS integration based on unscented Kalman filter. Int. J. Control Autom. Syst. 2018;16:129–140. doi: 10.1007/s12555-016-0801-4. [DOI] [Google Scholar]
  • 6.Din S., Ahmad A., Paul A., Rathore M.M.U., Gwanggil J. A clusterbased data fusion technique to analyze big data in wireless multi-sensor system. IEEE Access. 2017;5:5069–5083. doi: 10.1109/ACCESS.2017.2679207. [DOI] [Google Scholar]
  • 7.Liggins M.E., Hall D.L., Llinas J. Handbook of Multisensor Data Fusion: Theory and Practice. CRC Press Inc.; Boca Raton, FL, USA: 2009. (The Electrical Engineering and Applied Signal Processing Series). [Google Scholar]
  • 8.Hounkpevi F.O., Yaz E.E. Minimum variance generalized state estimators for multiple sensors with different delay rates. Signal Process. 2007;87:602–613. doi: 10.1016/j.sigpro.2006.06.017. [DOI] [Google Scholar]
  • 9.Ma J., Sun S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations. IEEE Sens. J. 2013;13:1228–1235. doi: 10.1109/JSEN.2012.2227995. [DOI] [Google Scholar]
  • 10.Chen D., Xu L. Optimal filtering with finite-step autocorrelated process noises, random one-step sensor delay and missing measurements. Commun. Nonlinear Sci. Numer. Simul. 2016;32:211–224. doi: 10.1016/j.cnsns.2015.08.015. [DOI] [Google Scholar]
  • 11.Sun S., Ma J. Linear estimation for networked control systems with random transmission delays and packet dropouts. Inf. Sci. 2014;269:349–365. doi: 10.1016/j.ins.2013.12.055. [DOI] [Google Scholar]
  • 12.Li N., Sun S., Ma J. Multi-sensor distributed fusion filtering for networked systems with different delay and loss rates. Digital Signal Process. 2014;34:29–38. doi: 10.1016/j.dsp.2014.07.016. [DOI] [Google Scholar]
  • 13.Caballero-Águila R., Hermoso-Carazo A., Linares-Pérez J. Optimal fusion estimation with multi-step random delays and losses in transmission. Sensors. 2017;17:1151. doi: 10.3390/s17051151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Tian T., Sun S., Li N. Multi-sensor information fusion estimators for stochastic uncertain systems with correlated noises. Inf. Fusion. 2016;27:126–137. doi: 10.1016/j.inffus.2015.06.001. [DOI] [Google Scholar]
  • 15.Abu Bakr M., Lee S. Distributed multisensor data fusion under unknown correlation and data inconsistency. Sensors. 2017;17:2472. doi: 10.3390/s17112472. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Sun S., Lin N., Ma J., Li X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion. 2017;38:122–134. doi: 10.1016/j.inffus.2017.03.006. [DOI] [Google Scholar]
  • 17.Liu W.Q., Wang X.M., Deng Z.L. Robust centralized and weighted measurement fusion kalman estimators for uncertain multisensor systems with linearly correlated white noises. Inf. Fusion. 2017;35:11–25. doi: 10.1016/j.inffus.2016.08.002. [DOI] [Google Scholar]
  • 18.Caballero-Águila R., Hermoso-Carazo A., Linares-Pérez J. Centralized fusion approach to the Estimation problem with multi-packet processing under uncertainty in outputs and transmissions. Sensors. 2018;18:2697. doi: 10.3390/s18082697. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Shen Q., Liu J., Zhou X., Qin W., Wang L., Wang Q. Centralized fusion methods for multi-sensor system with bounded disturbances. IEEE Access. 2019;7:141612–141626. doi: 10.1109/ACCESS.2019.2943163. [DOI] [Google Scholar]
  • 20.Alfsmann D., Göckler H.G., Sangwine S.J., Ell T.A. Hypercomplex Algebras in Digital Signal Processing: Benefits and Drawbacks; Proceedings of the 15th European Signal Processing Conference (EUSIPCO 2007); Poznan, Poland. 3–7 September 2007; pp. 1322–1326. [Google Scholar]
  • 21.Ortolani F., Scarpiniti M., Comminiello D., Uncini A. On the influence of microphone array geometry on the behavior of hypercomplex adaptive filters; Proceedings of the 5th IEEE Microwaves, Radar and Remote Sensing Symposium; 29–31 August 2017; Kyiv, Ukraine. pp. 37–42. [Google Scholar]
  • 22.Ortolani F., Comminiello D., Scarpiniti M., Uncini A. Neural Advances in Processing Nonlinear Dynamic Signals. Springer International Publishing; Berlin/Heidelberg, Germany: 2017. On 4-dimensional hypercomplex algebras in adaptive signal processing; pp. 131–140. [Google Scholar]
  • 23.Navarro-Moreno J., Fernández Alcalá R.M., Jiménez López J.D., Ruiz-Molina J.C. Tessarine signal processing under the T-properness condition. J. Frankl. Inst. 2020;357:10099–10125. doi: 10.1016/j.jfranklin.2020.08.002. [DOI] [Google Scholar]
  • 24.Navarro-Moreno J., Ruiz-Molina J.C. Wide-sense Markov signals on the tessarine domain. A study under properness conditions. Signal Process. 2021;183:108022. doi: 10.1016/j.sigpro.2021.108022. [DOI] [Google Scholar]
  • 25.Sabatelli S., Sechi F., Fanucci L., Rocchi A. A sensor fusion algorithm for an integrated angular position estimation with inertial measurement units; Proceedings of the Design, Automation and Test in Europe (DATE 2011); Grenoble, France. 14–18 March 2011; pp. 273–276. [Google Scholar]
  • 26.Tannous H., Istrate D., Benlarbi-Delai A., Sarrazin J., Gamet D., Ho Ba Tho M.C., Dao T.T. A new multi-sensor fusion scheme to improve the accuracy of knee flexion kinematics for functional rehabilitation movements. J. Sens. 2016;16:1914. doi: 10.3390/s16111914. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Talebi S., Kanna S., Mandic D. A distributed quaternion Kalman filter with applications to smart grid and target tracking. IEEE Trans. Signal Inf. Process. Netw. 2016;2:477–488. doi: 10.1109/TSIPN.2016.2618321. [DOI] [Google Scholar]
  • 28.Talebi S.P., Werner S., Mandic D.P. Quaternion-valued distributed filtering and control. IEEE Trans. Autom. Control. 2020;65:4246–4256. doi: 10.1109/TAC.2020.3007332. [DOI] [Google Scholar]
  • 29.Wu J., Zhou Z., Fourati H., Li R., Liu M. Generalized linear quaternion complementary filter for attitude estimation from multi-sensor observations: An optimization approach. IEEE Trans. Autom. Sci. Eng. 2019;16:1330–1343. doi: 10.1109/TASE.2018.2888908. [DOI] [Google Scholar]
  • 30.Navarro-Moreno J., Fernández-Alcalá R.M., Jiménez López J.D., Ruiz-Molina J.C. Widely linear estimation for multisensor quaternion systems with mixed uncertainties in the observations. J. Frankl. Inst. 2019;356:3115–3138. doi: 10.1016/j.jfranklin.2018.08.031. [DOI] [Google Scholar]
  • 31.Vía J., Ramírez D., Santamaría I. Properness and widely linear processing of quaternion random vectors. IEEE Trans. Inform. Theory. 2010;56:3502–3515. doi: 10.1109/TIT.2010.2048440. [DOI] [Google Scholar]
  • 32.Jiménez-López J.D., Fernández-Alcalá R.M., Navarro-Moreno J., Ruiz-Molina J.C. Widely linear estimation of quaternion signals with intermittent observations. Signal Process. 2017;136:92–101. doi: 10.1016/j.sigpro.2016.09.016. [DOI] [Google Scholar]
  • 33.Fernández Alcalá R.M., Navarro-Moreno J., Jiménez López J.D., Ruiz-Molina J.C. Semi-widely linear estimation algorithms of quaternion signals with missing observations and correlated noises. J. Frankl. Inst. 2020;357:3075–3096. doi: 10.1016/j.jfranklin.2020.02.012. [DOI] [Google Scholar]
  • 34.Nitta T., Kobayashi M., Mandic D.P. Hypercomplex widely linear estimation through the lens of underpinning geometry. IEEE Trans. Signal Process. 2019;67:3985–3994. doi: 10.1109/TSP.2019.2922151. [DOI] [Google Scholar]
