Sensors (Basel, Switzerland). 2017 May 18;17(5):1151. doi: 10.3390/s17051151

Optimal Fusion Estimation with Multi-Step Random Delays and Losses in Transmission

Raquel Caballero-Águila 1,*, Aurora Hermoso-Carazo 2, Josefa Linares-Pérez 2
Editors: Xue-Bo Jin, Shuli Sun, Hong Wei, Feng-Bao Yang
PMCID: PMC5470897  PMID: 28524112

Abstract

This paper is concerned with the optimal fusion estimation problem in networked stochastic systems with bounded random delays and packet dropouts, which unavoidably occur during the data transmission in the network. The measured outputs from each sensor are perturbed by random parameter matrices and white additive noises, which are cross-correlated between the different sensors. Least-squares fusion linear estimators including filter, predictor and fixed-point smoother, as well as the corresponding estimation error covariance matrices, are designed via the innovation analysis approach. The proposed recursive algorithms depend on the delay probabilities at each sampling time, but do not need to know whether a particular measurement is delayed or not. Moreover, knowledge of the signal evolution model is not required, as the algorithms need only the first and second order moments of the processes involved. Some of the practical situations covered by the proposed system model with random parameter matrices are analyzed, and the influence of the delays on the estimation accuracy is examined in a numerical example.

Keywords: recursive fusion estimation, sensor networks, random parameter matrices, random delays, packet dropouts

1. Introduction

Over the last few decades, research on the estimation problem for networked stochastic systems has gained considerable attention, due to the undeniable advantages of networked systems, whose applicability is encouraged, among other causes, by the development and advances in communication technology and the growing use of wireless networks. As it is well known, the Kalman filter provides a recursive algorithm for the optimal least-squares estimator in stochastic linear systems, assuming that the system model is exactly known and all of the measurements are instantly updated. The development of sensor networks motivates the necessity of designing new estimation algorithms that integrate the information of all the sensors to achieve a satisfactory performance; thus, using different fusion techniques, the measurements from multiple sensors are combined to obtain more accurate estimators than those obtained when a single sensor is used. In this framework, important extensions of the Kalman filter have been proposed for conventional sensor networks in which the measured outputs of the sensors always contain the actual signal contaminated by additive noises, and the transmissions are carried through perfect connections (see, e.g., [1,2,3] and references therein).

However, in a network environment, the standard observation models are usually not suitable due to the existence of network-induced uncertainties, which can occur both in the sensor measured outputs and during the data transmission through the network. Accordingly, the consideration of appropriate observation models is vitally important to address the estimation problem in networked systems. Random failures in the transmission of measured data, together with the inaccuracy of the measurement devices, often cause degradation of the estimator performance in networked systems. In light of these concerns, the estimation problem with one or even several network-induced uncertainties has recently attracted considerable attention, and the design of new fusion estimation algorithms has become an active research topic (see, e.g., [4,5,6,7,8,9,10,11,12,13] and references therein). In addition, some recent advances on the estimation, filtering and fusion for networked systems with network-induced phenomena can be reviewed in [14,15], where a detailed overview of this field is presented.

One of the most common network-induced uncertainties in the measured outputs of the different sensors is the presence of multiplicative noise, due to different reasons, such as interferences or intermittent sensor failures. Specifically, in situations involving random observation losses (see [5]), sensor gain degradation (see [6]), or missing or fading measurements (see [13,16], respectively), the sensor observation equations include multiplicative noises. A unified framework to model these random phenomena is provided by the use of random measurement matrices in the sensor observation model. For this reason, the estimation problem in networked systems with random measurement matrices has become a fertile research subject, since this class of systems allows for covering different network-induced random uncertainties such as those mentioned above (see, e.g., [17,18,19,20,21,22,23], and references therein).

In relation to the network-induced uncertainties during the data transmission, it must be indicated that sudden changes in the environment and the unreliability of the communication network, together with the limited bandwidths of communication channels, cause unavoidable random failures during the transmission process. Generally, random communication delays and/or transmission packet dropouts are two essential issues that must be taken into account to model the measurements, which, being available after transmission, will be used for the estimation. Several estimation algorithms have been proposed in multisensor systems considering either transmission delays or packet losses (see, e.g., [24,25]) and also taking into account random delays and packet dropouts simultaneously (see, e.g., [4,23,26]). By using the state augmentation method, systems with random delays and packet dropouts can be transformed into systems with random parameter matrices (see, e.g., [8,9,10,20]). Hence, systems with random parameter measurement matrices also provide an appropriate unified context for modelling these random phenomena in the transmission.

Nevertheless, it must be indicated that the state augmentation method leads to a rise in the computational burden, due to the increase of the state dimension. Actually, in models with more than one- or two-step random delays, the computational cost can be excessive, and alternative ways to model and address the estimation problem in this class of systems need to be investigated. Recently, a great variety of models have been used to describe the phenomena of multi-step random delays and packet losses during the data transmission in networked systems, and fusion estimation algorithms have been proposed based on different approaches—for example, the recursive matrix equation method in [6], the measurement reorganisation approach in [27], the innovation analysis approach in [28] and the state augmentation approach in [29,30,31]. It should be noted that, in the presence of multi-step random delays and packet losses during the data transmission, many difficulties can arise in the design of the optimal estimators when the state augmentation approach is not used.

In view of the above considerations, this paper is concerned with the optimal fusion estimation problem, in the least-squares linear sense, for sensor networks featuring stochastic uncertainties in the sensor measurements, together with multi-step random delays and packet dropouts during the data transmission. The derivation of the estimation algorithms will be carried out without using the evolution model generating the signal process. The uncertainties in the measured outputs of the different sensors are described by random measurement matrices. The multi-step random delays in the transmissions are modeled by using a collection of Bernoulli sequences with known distributions and different characteristics at each sensor; the exact value of these Bernoulli variables is not required, and only the information about the probability distribution is needed. To the best of the authors’ knowledge, the optimal estimation problem (including prediction, filtering and fixed-point smoothing) has not been investigated for systems involving random measurement matrices and transmission multi-step random delays simultaneously, and, therefore, it constitutes an interesting research challenge. 
The main contributions of this research can be highlighted as follows: (a) even though our approach, based on covariance information, does not require the signal evolution model, the proposed algorithms are also applicable in situations based on the state-space model (see Remark 1); (b) random measurement matrices are considered in the measured outputs, thus providing a unified framework to address different network-induced phenomena (see Remark 2); (c) besides the stochastic uncertainties in the sensor measurements, simultaneous multi-step random delays and losses with different rates are considered in the data transmission; (d) unlike most papers about multi-step random delays, in which only the filtering problem is considered, we propose recursive algorithms for the prediction, filtering and fixed-point smoothing estimators under the innovation approach, which are computationally very simple and suitable for online applications; and (e) optimal estimators are obtained without using the state augmentation approach, thus reducing the computational cost in comparison with the augmentation method.

The rest of the paper is organized as follows. In Section 2, we present the sensor network and the assumptions under which the optimal linear estimation problem will be addressed. In Section 3, the observation model is rewritten in a compact form and the innovation approach to the least-squares linear estimation problem is formulated. In Section 4, recursive algorithms for the prediction, filtering and fixed-point smoothing estimators are derived. A simulation example is given in Section 5 to show the performance of the proposed estimators. Finally, some conclusions are drawn in Section 6.

Notations. 

The notation used throughout the paper is standard. $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the $n$-dimensional Euclidean space and the set of all $m\times n$ real matrices, respectively. For a matrix $A$, $A^T$ and $A^{-1}$ denote its transpose and inverse, respectively. The shorthand $\mathrm{Diag}(A_1,\dots,A_m)$ stands for a block-diagonal matrix whose diagonal blocks are $A_1,\dots,A_m$. $\mathbf{1}_n=(1,\dots,1)^T$ denotes the all-ones $n\times1$ vector and $I_n$ represents the $n\times n$ identity matrix. If the dimensions of matrices are not explicitly stated, they are assumed to be compatible with the algebraic operations. The Kronecker and Hadamard products of matrices are denoted by $\otimes$ and $\circ$, respectively. $\delta_{k,s}$ denotes the Kronecker delta function. For any $a,b\in\mathbb{R}$, $a\wedge b$ denotes the minimum of $a$ and $b$.

2. Observation Model and Preliminaries

In this paper, the optimal fusion estimation problem of multidimensional discrete-time random signals from measurements obtained by a sensor network is addressed. At each sensor, the measured outputs are perturbed by random parameter matrices and white additive noises that are cross-correlated at the same sampling time between the different sensors. The estimation is performed in a processing centre connected to all sensors, where the complete set of sensor data is combined; however, due to eventual communication failures, congestion or other causes, random delays and packet dropouts are unavoidable during the transmission process. To reduce the effect of such delays and packet dropouts without overloading the network traffic, each sensor measurement is transmitted during a fixed number of consecutive sampling times and, when several packets arrive at the same time, the receiver discards the oldest ones, so that only one measured output is processed for the estimation at each sampling time.

2.1. Signal Process

The optimal estimators will be obtained using the least-squares (LS) criterion and without requiring the evolution model generating the signal process. Actually, the proposed estimation algorithms, based on covariance information, only need the mean vectors and covariance functions of the processes involved, and the only requirement will be that the signal covariance function must be factorizable according to the following assumption:

  • (A1)

    The $n_x$-dimensional signal process $\{x_k;\,k\ge1\}$ has zero mean and its autocovariance function is expressed in a separable form, $E[x_kx_s^T]=A_kB_s^T$, $s\le k$, where $A_k,B_s\in\mathbb{R}^{n_x\times n}$ are known matrices.

Remark 1 (on assumption (A1)).

The estimation problems based on the state-space model require new estimation algorithms when the signal evolution model is modified; therefore, the algorithms designed for stationary signals driven by $x_{k+1}=\Phi x_k+\xi_k$ cannot be applied to non-stationary signals generated by $x_{k+1}=\Phi_kx_k+\xi_k$, and these, in turn, cannot be used in uncertain systems where $x_{k+1}=(\Phi_k+\epsilon_k\hat\Phi_k)x_k+\xi_k$. A great advantage of assumption (A1) is that it covers situations in which the signal evolution model is known, for both stationary and non-stationary signals (see, e.g., [23]). In addition, in uncertain systems with state-dependent multiplicative noise, such as those considered in [6,32], the signal covariance function is factorizable, as shown in Section 5. Hence, assumption (A1) on the signal autocovariance function provides a unified context to deal with different situations based on the state-space model, avoiding the derivation of specific algorithms for each of them.
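As a quick numerical illustration of assumption (A1), the following sketch (our own example, not part of the paper: a hypothetical stationary scalar AR(1) signal with parameter $\varphi=0.9$ and unit-variance driving noise, so $\sigma_x^2=1/(1-\varphi^2)$) builds the covariance matrix from the separable factors $A_k=\varphi^k$, $B_s=\varphi^{-s}\sigma_x^2$ and compares it with the direct expression $E[x_kx_s]=\varphi^{|k-s|}\sigma_x^2$:

```python
import numpy as np

# Hypothetical stationary AR(1) signal: x_{k+1} = phi*x_k + xi_k, unit noise variance
phi = 0.9
sx2 = 1.0 / (1.0 - phi**2)            # stationary signal variance
N = 8

A = phi ** np.arange(1, N + 1)        # A_k = phi^k,        k = 1..N
B = sx2 / A                           # B_s = phi^{-s}*sx2, s = 1..N

# Separable form E[x_k x_s] = A_k B_s (s <= k), symmetrized over the grid
K_sep = np.array([[A[max(k, s)] * B[min(k, s)] for s in range(N)]
                  for k in range(N)])
# Direct covariance of the stationary AR(1) process
K_dir = sx2 * phi ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

assert np.allclose(K_sep, K_dir)      # (A1) holds with this choice of A_k, B_s
```

The same factorization idea is exploited in Section 5 for a model with state-dependent multiplicative noise.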

2.2. Multisensor Observation Model

Assuming that there are $m$ different sensors, the measured outputs before transmission, $z_k^{(i)}\in\mathbb{R}^{n_z}$, are described by the following observation model:

$z_k^{(i)}=H_k^{(i)}x_k+v_k^{(i)},\quad k\ge1;\ i=1,\dots,m,$ (1)

where the measurement matrices, $H_k^{(i)}$, and the noise vectors, $v_k^{(i)}$, satisfy the following assumptions:

  • (A2)

    $\{H_k^{(i)};\,k\ge1\}$, $i=1,\dots,m$, are independent sequences of independent random parameter matrices, whose entries have known means and second-order moments; we will denote $\bar H_k^{(i)}\equiv E[H_k^{(i)}]$, $k\ge1$.

  • (A3)

    $\{v_k^{(i)};\,k\ge1\}$, $i=1,\dots,m$, are white noise sequences with zero mean and known second-order moments, satisfying $E[v_k^{(i)}v_s^{(j)T}]=R_k^{(ij)}\delta_{k,s}$, $i,j=1,\dots,m$.

Remark 2 (on assumption (A2)).

Usually, in network environments, the measurements are subject to different network-induced random phenomena, and new estimation algorithms must be designed to incorporate the effects of these random uncertainties. For example, in systems with stochastic sensor gain degradation or missing measurements, such as those considered in [6,7], respectively, or in networked systems involving stochastic multiplicative noises in the state and measurement equations (see, e.g., [31,32]), new estimation algorithms are proposed since the conditions necessary to implement the conventional ones are not met. The aforementioned systems are particular cases of systems with random measurement matrices and, hence, assumption (A2) allows for designing a unique estimation algorithm, which is suitable to address all of these situations involving random uncertainties. In addition, based on an augmentation approach, random measurement matrices can be used to model the measured outputs of sensor networks with random delays and packet dropouts (see, e.g., [8,9,10,20]). Therefore, assumption (A2) provides a unified framework to deal with a great variety of network-induced random phenomena such as those mentioned above.

2.3. Measurement Model with Transmission Random Delays and Packet Losses

Assuming that the maximum time delay is $D$, the measured output of the $i$-th sensor at time $r$, $z_r^{(i)}$, is transmitted during the sampling times $r,r+1,\dots,r+D$, but, at each sampling time $k>D$, only one of the measurements $z_{k-D}^{(i)},\dots,z_k^{(i)}$ is processed. Consequently, at any time $k>D$, the measurement processed can either arrive on time or be delayed by $d=1,\dots,D$ sampling periods, while at any time $k\le D$, the measurement processed can be delayed only by $d=1,\dots,k-1$ sampling periods, since only $z_1^{(i)},\dots,z_k^{(i)}$ are available. Assuming, moreover, that the transmissions are perturbed by additive noises, the measurements received at the processing centre, impaired by random delays and packet losses, can be described by the following model:

$y_k^{(i)}=\sum_{d=0}^{(k-1)\wedge D}\gamma_{d,k}^{(i)}z_{k-d}^{(i)}+w_k^{(i)},\quad k\ge1,$ (2)

where the following assumptions on the random variables modelling the delays, $\gamma_{d,k}^{(i)}$, and the transmission noise, $w_k^{(i)}$, are required:

  • (A4)

    For each $d=0,1,\dots,D$, $\{\gamma_{d,k}^{(i)};\,k>d\}$, $i=1,\dots,m$, are independent sequences of independent Bernoulli random variables with $P[\gamma_{d,k}^{(i)}=1]=\bar\gamma_{d,k}^{(i)}$ and $\sum_{d=0}^{(k-1)\wedge D}\gamma_{d,k}^{(i)}\le1$, $k\ge1$.

  • (A5)

    $\{w_k^{(i)};\,k\ge1\}$, $i=1,\dots,m$, are white noise sequences with zero mean and known second-order moments, satisfying $E[w_k^{(i)}w_s^{(j)T}]=Q_k^{(ij)}\delta_{k,s}$, $i,j=1,\dots,m$.

Remark 3 (on assumption (A4)).

For $i=1,\dots,m$, when $\gamma_{0,k}^{(i)}=1$, the transmission of the $i$-th sensor is perfect and neither delay nor loss occurs at time $k$; that is, with probability $\bar\gamma_{0,k}^{(i)}$, the $k$-th measurement of the $i$-th sensor is received and processed on time. Since $\sum_{d=0}^{(k-1)\wedge D}\gamma_{d,k}^{(i)}\le1$, if $\gamma_{0,k}^{(i)}=0$, there then exists at most one $d=1,\dots,D$ such that $\gamma_{d,k}^{(i)}=1$. If there exists $d$ such that $\gamma_{d,k}^{(i)}=1$ (which occurs with probability $\bar\gamma_{d,k}^{(i)}$), then the measurement is delayed by $d$ sampling periods. Otherwise, $\gamma_{d,k}^{(i)}=0$ for all $d$, meaning that the measurement gets lost during the transmission at time $k$, with probability $1-\sum_{d=0}^{(k-1)\wedge D}\bar\gamma_{d,k}^{(i)}$.
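One common way to realize variables satisfying (A4), sketched below for a single sensor at times $k>D$ and under delay probabilities of our own choosing (the values of `gbar` are illustrative, not the paper's), is to draw one outcome among "received on time", "delayed by $d$" and "lost"; this automatically enforces $\sum_d\gamma_{d,k}\le1$:

```python
import numpy as np

D = 3
gbar = np.array([0.5, 0.2, 0.1, 0.1])      # assumed gbar[d] = P(delay of d steps)
p_loss = 1.0 - gbar.sum()                  # remaining probability mass: packet lost
rng = np.random.default_rng(0)

def draw_gammas():
    """One draw of (gamma_0,...,gamma_D) for one sensor at a time k > D."""
    outcome = rng.choice(D + 2, p=np.append(gbar, p_loss))  # index D+1 means "lost"
    g = np.zeros(D + 1, dtype=int)
    if outcome <= D:
        g[outcome] = 1                     # delayed by `outcome` sampling periods
    return g

g = draw_gammas()
assert g.sum() <= 1                        # at most one measurement is processed
assert all(draw_gammas().sum() <= 1 for _ in range(1000))
```

With this construction $P[\gamma_d=1]=\bar\gamma_d$ and the loss probability is $1-\sum_d\bar\gamma_d$, matching the interpretation given in Remark 3.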

Finally, the following independence hypothesis is assumed:

  • (A6)

    For $i=1,\dots,m$ and $d=0,1,\dots,D$, the processes $\{x_k;\,k\ge1\}$, $\{H_k^{(i)};\,k\ge1\}$, $\{v_k^{(i)};\,k\ge1\}$, $\{w_k^{(i)};\,k\ge1\}$ and $\{\gamma_{d,k}^{(i)};\,k>d\}$ are mutually independent.

3. Problem Statement

Given the observation Equations (1) and (2) with random measurement matrices and transmission random delays and packet dropouts, our purpose is to find the LS linear estimator, $\hat x_{k/L}$, of the signal $x_k$ based on the observations from the different sensors $\{y_1^{(i)},\dots,y_L^{(i)},\ i=1,\dots,m\}$. Specifically, our aim is to obtain recursive algorithms for the predictor ($L<k$), filter ($L=k$) and fixed-point smoother ($k$ fixed and $L>k$).

3.1. Stacked Observation Model

Since the measurements coming from the different sensors are all gathered and jointly processed at each sampling time $k$, we will consider the vector constituted by the measurements from all sensors, $y_k=(y_k^{(1)T},\dots,y_k^{(m)T})^T$. More specifically, the observation Equations (1) and (2) of all sensors are combined, yielding the following stacked observation model:

$z_k=H_kx_k+v_k,\quad k\ge1,$
$y_k=\sum_{d=0}^{(k-1)\wedge D}\Gamma_{d,k}z_{k-d}+w_k,\quad k\ge1,$ (3)

where $z_k=(z_k^{(1)T},\dots,z_k^{(m)T})^T$, $H_k=(H_k^{(1)T},\dots,H_k^{(m)T})^T$, $v_k=(v_k^{(1)T},\dots,v_k^{(m)T})^T$, $w_k=(w_k^{(1)T},\dots,w_k^{(m)T})^T$ and $\Gamma_{d,k}=\mathrm{Diag}(\gamma_{d,k}^{(1)},\dots,\gamma_{d,k}^{(m)})\otimes I_{n_z}$.

Hence, the problem is to obtain the LS linear estimator of the signal, $x_k$, based on the measurements $y_1,\dots,y_L$, given in the observation Equation (3). Next, we present the statistical properties of the processes involved in Equation (3), from which the LS estimation algorithms of the signal will be derived; these properties are easily inferred from the assumptions (A1)–(A6).

  • (P1)
    $\{H_k;\,k\ge1\}$ is a sequence of independent random parameter matrices with known means, $\bar H_k\equiv E[H_k]=(\bar H_k^{(1)T},\dots,\bar H_k^{(m)T})^T$, and
    $E[H_kx_kx_s^TH_s^T]=E[H_kA_kB_s^TH_s^T]=\big(E[H_k^{(i)}A_kB_s^TH_s^{(j)T}]\big)_{i,j=1,\dots,m},\quad s\le k,$
    where $E[H_k^{(i)}A_kB_s^TH_s^{(j)T}]=\bar H_k^{(i)}A_kB_s^T\bar H_s^{(j)T}$, for $j\ne i$ or $s\ne k$, and the entries of $E[H_k^{(i)}A_kB_k^TH_k^{(i)T}]$ are computed as follows:
    $\big(E[H_k^{(i)}A_kB_k^TH_k^{(i)T}]\big)_{pq}=\sum_{a=1}^{n_x}\sum_{b=1}^{n_x}E[h_{pa}^{(i)}(k)h_{qb}^{(i)}(k)](A_kB_k^T)_{ab},\quad p,q=1,\dots,n_z,$
    where $h_{pq}^{(i)}(k)$ denotes the $(p,q)$ entry of the matrix $H_k^{(i)}$.
  • (P2)

    The noises $\{v_k;\,k\ge1\}$ and $\{w_k;\,k\ge1\}$ are zero-mean sequences with known second-order moments given by the matrices $R_k\equiv(R_k^{(ij)})_{i,j=1,\dots,m}$ and $Q_k\equiv(Q_k^{(ij)})_{i,j=1,\dots,m}$.

  • (P3)
    $\{\Gamma_{d,k};\,k>d\}$, $d=0,1,\dots,D$, are sequences of independent random matrices with known means, $\bar\Gamma_{d,k}\equiv E[\Gamma_{d,k}]=\mathrm{Diag}(\bar\gamma_{d,k}^{(1)},\dots,\bar\gamma_{d,k}^{(m)})\otimes I_{n_z}$, and if we denote $\gamma_{d,k}=(\gamma_{d,k}^{(1)},\dots,\gamma_{d,k}^{(m)})^T\otimes\mathbf{1}_{n_z}$ and $\bar\gamma_{d,k}=E[\gamma_{d,k}]$, the covariance matrices $\Sigma_{d,d'}^{\gamma_k}\equiv E[(\gamma_{d,k}-\bar\gamma_{d,k})(\gamma_{d',k}-\bar\gamma_{d',k})^T]$, for $d,d'=0,1,\dots,D$, are also known matrices. Specifically,
    $\Sigma_{d,d'}^{\gamma_k}=\mathrm{Diag}\big(\mathrm{Cov}(\gamma_{d,k}^{(1)},\gamma_{d',k}^{(1)}),\dots,\mathrm{Cov}(\gamma_{d,k}^{(m)},\gamma_{d',k}^{(m)})\big)\otimes\mathbf{1}_{n_z}\mathbf{1}_{n_z}^T,$ (4)
    with $\mathrm{Cov}(\gamma_{d,k}^{(i)},\gamma_{d',k}^{(i)})=\bar\gamma_{d,k}^{(i)}(1-\bar\gamma_{d,k}^{(i)})$ if $d=d'$, and $-\bar\gamma_{d,k}^{(i)}\bar\gamma_{d',k}^{(i)}$ if $d\ne d'$. Moreover, for any deterministic matrix $S$, the Hadamard product properties guarantee that
    $E[(\Gamma_{d,k}-\bar\Gamma_{d,k})S(\Gamma_{d',k}-\bar\Gamma_{d',k})]=\Sigma_{d,d'}^{\gamma_k}\circ S.$
  • (P4)

    For $d=0,1,\dots,D$, the signal, $\{x_k;\,k\ge1\}$, and the processes $\{H_k;\,k\ge1\}$, $\{v_k;\,k\ge1\}$, $\{w_k;\,k\ge1\}$ and $\{\Gamma_{d,k};\,k>d\}$ are mutually independent.
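The Hadamard-product identity stated in (P3) can be verified exactly by enumeration in a small case. The sketch below (our own check, with $m=2$ sensors, $n_z=1$, independent Bernoulli variables and an arbitrary matrix $S$) computes $E[(\Gamma-\bar\Gamma)S(\Gamma-\bar\Gamma)]$ by summing over all joint outcomes and compares it with $\Sigma^\gamma\circ S$:

```python
import numpy as np

# m = 2 sensors, n_z = 1: Gamma = Diag(g1, g2), g_i ~ Bernoulli(p_i), independent
p = np.array([0.6, 0.3])
S = np.array([[2.0, -1.0], [0.5, 4.0]])          # arbitrary deterministic matrix

# Enumerate the four joint outcomes exactly
E = np.zeros((2, 2))
for g1 in (0, 1):
    for g2 in (0, 1):
        prob = (p[0] if g1 else 1 - p[0]) * (p[1] if g2 else 1 - p[1])
        G = np.diag([g1, g2]) - np.diag(p)       # Gamma - E[Gamma]
        E += prob * (G @ S @ G)

Sigma = np.diag(p * (1 - p))                     # Cov(gamma): diagonal, sensors independent
assert np.allclose(E, Sigma * S)                 # Sigma ∘ S (entrywise product)
```

The identity is what allows the innovation covariance $\Pi_L$ in Equation (9) to be written in terms of $\Sigma^{\gamma_L}_{d,d'}\circ\Sigma^z$.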

Remark 4 (on the observation covariance matrices).

From the previous properties, it is clear that the observation process $\{z_k;\,k\ge1\}$ is a zero-mean sequence whose covariance function, $\Sigma_{k,s}^z\equiv E[z_kz_s^T]$, is obtained by the following expression:

$\Sigma_{k,s}^z=E[H_kA_kB_s^TH_s^T]+R_k\delta_{k,s},\quad s\le k,$ (5)

where $E[H_kA_kB_s^TH_s^T]$ and $R_k$ are calculated as indicated in properties (P1) and (P2), respectively.

3.2. Innovation Approach to the LS Linear Estimation Problem

The proposed covariance-based recursive algorithms for the LS linear prediction, filtering and fixed-point smoothing estimators will be derived by an innovation approach. This approach consists of transforming the observation process $\{y_k;\,k\ge1\}$ into an equivalent one of orthogonal vectors, called the innovation process, which will be denoted $\{\mu_k;\,k\ge1\}$ and defined by $\mu_k=y_k-\hat y_{k/k-1}$, where $\hat y_{k/k-1}$ is the orthogonal projection of $y_k$ onto the linear space spanned by $\mu_1,\dots,\mu_{k-1}$. Since both processes span the same linear subspace, the LS linear estimator of any random vector $\alpha_k$ based on the observations $y_1,\dots,y_N$, denoted by $\hat\alpha_{k/N}$, is equal to that based on the innovations $\mu_1,\dots,\mu_N$, and, denoting $\Pi_h=E[\mu_h\mu_h^T]$, the following general expression for the LS linear estimators of $\alpha_k$ is obtained:

$\hat\alpha_{k/N}=\sum_{h=1}^{N}E[\alpha_k\mu_h^T]\Pi_h^{-1}\mu_h.$ (6)

Hence, to obtain the signal estimators, it is necessary to find an explicit formula beforehand for the innovations and their covariance matrices.
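Expression (6) can be illustrated numerically. In the sketch below (a generic Gaussian example of ours, not the paper's model), the normalized innovations of an observation vector are obtained from the Cholesky factor of its covariance matrix (a standard Gram–Schmidt orthogonalization), and the estimator built as in Equation (6) is checked against the direct projection formula $E[\alpha\,y^T]\Sigma_y^{-1}y$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
# Random joint second-order moments for a scalar alpha and observations y_1..y_N
M = rng.normal(size=(N + 1, N + 1))
S = M @ M.T                           # [[var_a, c^T], [c, Sigma_y]], positive definite
var_a, c, Sy = S[0, 0], S[0, 1:], S[1:, 1:]

y = rng.normal(size=N)                # any realization of the observations

# Innovations: nu = L^{-1} y, where Sigma_y = L L^T (each nu_j has unit variance,
# is uncorrelated with the previous ones, and depends only on y_1..y_j)
Lc = np.linalg.cholesky(Sy)
nu = np.linalg.solve(Lc, y)

# Analogue of Eq (6): alpha_hat = sum_j E[alpha*nu_j] * nu_j   (Pi_j = 1 here)
E_alpha_nu = np.linalg.solve(Lc, c)
alpha_inno = E_alpha_nu @ nu

alpha_direct = c @ np.linalg.solve(Sy, y)   # E[alpha y^T] Sigma_y^{-1} y
assert np.isclose(alpha_inno, alpha_direct)
```

Working with innovations instead of the raw observations is what makes the coefficients in (6) computable one at a time, which is the key to the recursive algorithms of Section 4.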

Innovation $\mu_L$ and Covariance Matrix $\Pi_L$. Applying orthogonal projections in Equation (3), it is clear that the innovation $\mu_L$ is given by

$\mu_L=y_L-\sum_{d=0}^{(L-1)\wedge D}\bar\Gamma_{d,L}\hat z_{L-d/L-1},\quad L\ge2;\qquad \mu_1=y_1,$ (7)

so it is necessary to obtain the one-stage predictor $\hat z_{L/L-1}$ and the estimators $\hat z_{L-d/L-1}$, for $d=1,\dots,(L-1)\wedge D$, of the observation process.

In order to obtain the covariance matrix $\Pi_L$, we use Equation (3) to express the innovations as:

$\mu_L=\sum_{d=0}^{(L-1)\wedge D}\big[(\Gamma_{d,L}-\bar\Gamma_{d,L})z_{L-d}+\bar\Gamma_{d,L}(z_{L-d}-\hat z_{L-d/L-1})\big]+w_L,\quad L\ge2,$ (8)

and, taking into account that

$E\big[(\Gamma_{d,L}-\bar\Gamma_{d,L})z_{L-d}(z_{L-d'}-\hat z_{L-d'/L-1})^T\bar\Gamma_{d',L}\big]=0,\quad\forall d,d',$

we have

$\Pi_L=\sum_{d,d'=0}^{(L-1)\wedge D}\big[\Sigma_{d,d'}^{\gamma_L}\circ\Sigma_{L-d,L-d'}^z+\bar\Gamma_{d,L}P_{L-d,L-d'/L-1}^z\bar\Gamma_{d',L}\big]+Q_L,\quad L\ge2;\qquad \Pi_1=(\Sigma_{0,0}^{\gamma_1}+\bar\gamma_{0,1}\bar\gamma_{0,1}^T)\circ\Sigma_{1,1}^z+Q_1,$ (9)

where the matrices $\Sigma_{d,d'}^{\gamma_L}$ and $\Sigma_{L-d,L-d'}^z$ are given in Equations (4) and (5), respectively, and $P_{L-d,L-d'/L-1}^z\equiv E[(z_{L-d}-\hat z_{L-d/L-1})(z_{L-d'}-\hat z_{L-d'/L-1})^T]$.

4. Least-Squares Linear Signal Estimators

In this section, we derive recursive algorithms for the LS linear estimators, $\hat x_{k/L}$, $k\ge1$, of the signal $x_k$ based on the observations $y_1,\dots,y_L$ given in Equation (3); namely, a prediction and filtering algorithm ($L\le k$) and a fixed-point smoothing algorithm ($k$ fixed and $L>k$) are designed.

4.1. Signal Predictor and Filter $\hat x_{k/L}$, $L\le k$

From the general expression given in Equation (6), to obtain the LS linear estimators $\hat x_{k/L}=\sum_{h=1}^{L}E[x_k\mu_h^T]\Pi_h^{-1}\mu_h$, $L\le k$, it is necessary to calculate the coefficients

$X_{k,h}\equiv E[x_k\mu_h^T]=E[x_ky_h^T]-E[x_k\hat y_{h/h-1}^T],\quad h\le k.$
  • On the one hand, using Equation (3) together with the independence hypotheses and assumption (A1) on the signal covariance factorization, it is clear that
    $E[x_ky_h^T]=\sum_{d=0}^{(h-1)\wedge D}E[x_k(H_{h-d}x_{h-d}+v_{h-d})^T]\bar\Gamma_{d,h}=A_k\sum_{d=0}^{(h-1)\wedge D}B_{h-d}^T\bar H_{h-d}^T\bar\Gamma_{d,h},\quad h\le k.$
  • On the other hand, since $\hat y_{h/h-1}=\sum_{d=0}^{(h-1)\wedge D}\bar\Gamma_{d,h}\hat z_{h-d/h-1}$, $h\ge2$, and taking into account that, from Equation (6), $\hat z_{h-d/h-1}=\sum_{j=1}^{h-1}Z_{h-d,j}\Pi_j^{-1}\mu_j$ with $Z_{h-d,j}=E[z_{h-d}\mu_j^T]$, the following identity holds:
    $E[x_k\hat y_{h/h-1}^T]=\sum_{d=0}^{(h-1)\wedge D}\sum_{j=1}^{h-1}X_{k,j}\Pi_j^{-1}Z_{h-d,j}^T\bar\Gamma_{d,h}.$

Therefore, it is easy to check that $X_{k,h}=A_kE_h$, $1\le h\le k$, where $E_h$ is a function satisfying

$E_h=\sum_{d=0}^{(h-1)\wedge D}B_{h-d}^T\bar H_{h-d}^T\bar\Gamma_{d,h}-\sum_{d=0}^{(h-1)\wedge D}\sum_{j=1}^{h-1}E_j\Pi_j^{-1}Z_{h-d,j}^T\bar\Gamma_{d,h},\quad h\ge2;\qquad E_1=B_1^T\bar H_1^T\bar\Gamma_{0,1}.$ (10)

Hence, it is clear that the signal prediction and filtering estimators can be expressed as

$\hat x_{k/L}=A_ke_L,\quad L\le k,\ k\ge1,$ (11)

where the vectors $e_L$ are defined by $e_L=\sum_{h=1}^{L}E_h\Pi_h^{-1}\mu_h$, for $L\ge1$, with $e_0=0$, thus obeying the recursive relation

$e_L=e_{L-1}+E_L\Pi_L^{-1}\mu_L,\quad L\ge1;\qquad e_0=0.$ (12)

Matrices $E_L$. Taking into account the above relation, an expression for $E_L$, $L\ge1$, must be derived. For this purpose, Equation (10) is rewritten for $h=L$ as

$E_L=\sum_{d=0}^{(L-1)\wedge D}B_{L-d}^T\bar H_{L-d}^T\bar\Gamma_{d,L}-\sum_{d=0}^{(L-1)\wedge D}\sum_{j=1}^{L-1}E_j\Pi_j^{-1}Z_{L-d,j}^T\bar\Gamma_{d,L},\quad L\ge2,$

and we examine the cases $d=0$ and $d\ge1$ separately:

  • For $d=0$, using Equation (3), it holds that $Z_{L,j}=\bar H_LX_{L,j}=\bar H_LA_LE_j$, for $j<L$, and, by denoting $\Sigma_L^e\equiv\sum_{h=1}^{L}E_h\Pi_h^{-1}E_h^T$, $L\ge1$, we obtain that
    $\sum_{j=1}^{L-1}E_j\Pi_j^{-1}Z_{L,j}^T\bar\Gamma_{0,L}=\sum_{j=1}^{L-1}E_j\Pi_j^{-1}E_j^TA_L^T\bar H_L^T\bar\Gamma_{0,L}=\Sigma_{L-1}^eA_L^T\bar H_L^T\bar\Gamma_{0,L}.$
  • For $d\ge1$, since $Z_{L-d,j}=\bar H_{L-d}A_{L-d}E_j$, for $j<L-d$, we can see that
    $\sum_{j=1}^{L-1}E_j\Pi_j^{-1}Z_{L-d,j}^T\bar\Gamma_{d,L}=\Sigma_{L-d-1}^eA_{L-d}^T\bar H_{L-d}^T\bar\Gamma_{d,L}+\sum_{j=L-d}^{L-1}E_j\Pi_j^{-1}Z_{L-d,j}^T\bar\Gamma_{d,L}.$

By substituting the above sums in $E_L$, it is deduced that

$E_L=\sum_{d=0}^{(L-1)\wedge D}(B_{L-d}-A_{L-d}\Sigma_{L-d-1}^e)^T\bar H_{L-d}^T\bar\Gamma_{d,L}-\sum_{d=1}^{(L-1)\wedge D}\sum_{j=L-d}^{L-1}E_j\Pi_j^{-1}Z_{L-d,j}^T\bar\Gamma_{d,L},\quad L\ge2;\qquad E_1=B_1^T\bar H_1^T\bar\Gamma_{0,1},$ (13)

where the matrices $Z_{L-d,j}$, $j\ge L-d$, will be obtained in the next subsection, as they correspond to the observation smoothing estimators, and the matrices $\Sigma_L^e$ are recursively obtained by

$\Sigma_L^e=\Sigma_{L-1}^e+E_L\Pi_L^{-1}E_L^T,\quad L\ge1;\qquad \Sigma_0^e=0.$ (14)

Finally, from assumption (A1) and since the estimation errors are orthogonal to the estimators, we have that the error covariance matrices, $P_{k/L}^x\equiv E[(x_k-\hat x_{k/L})(x_k-\hat x_{k/L})^T]$, are given by

$P_{k/L}^x=A_k(B_k-A_k\Sigma_L^e)^T,\quad L\le k,\ k\ge1.$ (15)

4.2. Estimators of the Observations $\hat z_{k/L}$, $k\ge1$

As already indicated, Equation (7) requires obtaining the observation estimators (predictor, filter and smoothers). From the general expression for the estimators given in Equation (6), we have that $\hat z_{k/L}=\sum_{j=1}^{L}Z_{k,j}\Pi_j^{-1}\mu_j$, with $Z_{k,j}=E[z_k\mu_j^T]$. Next, recursive expressions will be derived separately for $L<k$ (predictors) and $L\ge k$ (filter and smoothers).

Observation Prediction Estimators. Since $Z_{k,j}=\bar H_kA_kE_j$, for $j<k$, we have that the prediction estimators of the observations are given by

$\hat z_{k/L}=\bar H_kA_ke_L,\quad L<k,\ k\ge1.$ (16)

Observation Filtering and Fixed-Point Smoothing Estimators. Clearly, the filter and fixed-point smoothers of the observations are obtained by the following recursive expression:

$\hat z_{k/L}=\hat z_{k/L-1}+Z_{k,L}\Pi_L^{-1}\mu_L,\quad L\ge k,\ k\ge1,$ (17)

with initial condition given by the one-stage predictor $\hat z_{k/k-1}=\bar H_kA_ke_{k-1}$.

Hence, the matrices $Z_{k,L}$ must be calculated for $L\ge k$. Since the innovation is a white process, $E[\hat z_{k/L-1}\mu_L^T]=0$ and hence $Z_{k,L}=E[z_k\mu_L^T]=E[(z_k-\hat z_{k/L-1})\mu_L^T]$. Now, using Equation (8) for $\mu_L$ and taking into account that $E[(z_k-\hat z_{k/L-1})z_{L-d}^T(\Gamma_{d,L}-\bar\Gamma_{d,L})]=0$, $\forall d$, we have

$Z_{k,L}=\sum_{d=0}^{(L-1)\wedge D}P_{k,L-d/L-1}^z\bar\Gamma_{d,L},\quad L\ge k,$ (18)

where $P_{k,L-d/L-1}^z\equiv E[(z_k-\hat z_{k/L-1})(z_{L-d}-\hat z_{L-d/L-1})^T]$.

Consequently, the error covariance matrices $P_{k,h/m}^z=E[(z_k-\hat z_{k/m})(z_h-\hat z_{h/m})^T]$ must be derived, for which the following two cases are analyzed separately:

  • *
    For $m\ge k\wedge h$, using Equation (17) and taking into account that $Z_{k,m}=E[(z_k-\hat z_{k/m-1})\mu_m^T]$, it is easy to see that
    $P_{k,h/m}^z=P_{k,h/m-1}^z-Z_{k,m}\Pi_m^{-1}Z_{h,m}^T,\quad m\ge k\wedge h.$ (19)
  • *
    For $m<h\le k$, using Equation (16), assumption (A1) and the orthogonality between the estimation errors and the estimators, we obtain
    $P_{k,h/m}^z=\Sigma_{k,h}^z-\bar H_kA_k\Sigma_m^eA_h^T\bar H_h^T,\quad m<h\le k.$ (20)

4.3. Signal Fixed-Point Smoother $\hat x_{k/L}$, $L>k$

Starting with the filter, $\hat x_{k/k}$, and the filtering error covariance matrix, $P_{k/k}^x$, it is clear that the signal fixed-point smoother $\hat x_{k/L}$, $L>k$, and the corresponding error covariance matrix, $P_{k/L}^x\equiv E[(x_k-\hat x_{k/L})(x_k-\hat x_{k/L})^T]$, are obtained by

$\hat x_{k/L}=\hat x_{k/L-1}+X_{k,L}\Pi_L^{-1}\mu_L,\quad L>k,\ k\ge1,$
$P_{k/L}^x=P_{k/L-1}^x-X_{k,L}\Pi_L^{-1}X_{k,L}^T,\quad L>k,\ k\ge1.$ (21)

An analogous reasoning to that of Equation (18) leads to the following expression for the matrices $X_{k,L}$:

$X_{k,L}=\sum_{d=0}^{(L-1)\wedge D}P_{k,L-d/L-1}^{xz}\bar\Gamma_{d,L},\quad L>k,$ (22)

where $P_{k,L-d/L-1}^{xz}\equiv E[(x_k-\hat x_{k/L-1})(z_{L-d}-\hat z_{L-d/L-1})^T]$.

The derivation of the error cross-covariance matrices $P_{k,h/m}^{xz}=E[(x_k-\hat x_{k/m})(z_h-\hat z_{h/m})^T]$ is similar to that of the matrices $P_{k,h/m}^z$, and they are given by

$P_{k,h/m}^{xz}=P_{k,h/m-1}^{xz}-X_{k,m}\Pi_m^{-1}Z_{h,m}^T,\quad m\ge k\wedge h,$
$P_{k,h/m}^{xz}=A_k(B_h-A_h\Sigma_m^e)^T\bar H_h^T,\quad m<h\le k,$
$P_{k,h/m}^{xz}=(B_k-A_k\Sigma_m^e)A_h^T\bar H_h^T,\quad m<k\le h,$ (23)

where $X_{k,m}=A_kE_m$, for $m\le k$, and $Z_{h,m}=\bar H_hA_hE_m$, for $m<h$; otherwise, these matrices are given by Equations (22) and (18), respectively.

4.4. Recursive Algorithms: Computational Procedure

The computational procedure of the proposed prediction, filtering and fixed-point smoothing algorithms can be summarized as follows:

  • (1)

    Covariance Matrices. The covariance matrices $\Sigma_{d,d'}^{\gamma_k}$ and $\Sigma_{k,s}^z$ are obtained by Equations (4) and (5), respectively; these matrices only depend on the system model information, so they can be calculated offline, before the observed data packets are available.

  • (2)
    LS Linear Prediction and Filtering Recursive Algorithm. At the sampling time $k$, once the $(k-1)$-th iteration is finished and $E_{k-1}$, $\Pi_{k-1}$, $\Sigma_{k-1}^e$, $\mu_{k-1}$ and $e_{k-1}$ are known, the proposed prediction and filtering algorithm operates as follows:
    • (2a)
      Compute $Z_{k,k-1}=\bar H_kA_kE_{k-1}$ and $Z_{k-d,k-1}$, for $d=1,\dots,(k-1)\wedge D$, by Equation (18). From these matrices, we obtain the observation estimators $\hat z_{k-d/k-1}$, for $d=0,1,\dots,(k-1)\wedge D$, by Equations (16) and (17), and the observation error covariance matrices $P_{k-d,k-d'/k-1}^z$, for $d,d'=0,1,\dots,(k-1)\wedge D$, by Equations (19) and (20).
    • (2b)
      Compute $E_k$ by Equation (13) and use $P_{k-d,k-d'/k-1}^z$ to obtain the innovation covariance matrix $\Pi_k$ by Equation (9). Then, $\Sigma_k^e$ is obtained by Equation (14) and, from it, the prediction and filtering error covariance matrices, $P_{k/k-s}^x$ and $P_{k/k}^x$, respectively, are obtained by Equation (15).
    • (2c)
      When the new measurement $y_k$ is available, the innovation $\mu_k$ is computed by Equation (7) using $\hat z_{k-d/k-1}$, for $d=0,1,\dots,(k-1)\wedge D$, and, from the innovation, $e_k$ is obtained by Equation (12). Then, the predictors, $\hat x_{k/k-s}$, and the filter, $\hat x_{k/k}$, are computed by Equation (11).
  • (3)

    LS linear fixed-point smoothing recursive algorithm. Once the filter, $\hat x_{k/k}$, and the filtering error covariance matrix, $P_{k/k}^x$, are available, the proposed smoothing estimators and the corresponding error covariance matrices are obtained as follows:

    For $L=k+1,k+2,\dots$, compute the error cross-covariance matrices $P_{k,L-d/L-1}^{xz}$, for $d=0,1,\dots,(L-1)\wedge D$, using Equation (23) and, from these matrices, $X_{k,L}$ is derived by Equation (22); then, the smoothers $\hat x_{k/L}$ and their error covariance matrices $P_{k/L}^x$ are obtained from Equation (21).
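As a sanity check of the computational procedure, the following sketch (our own, under strong simplifying assumptions: a scalar stationary AR(1) signal, a single sensor with a deterministic measurement matrix $H$, and no delays, $D=0$, so all $\bar\Gamma$ factors reduce to 1 and the sums over $d$ disappear) runs the covariance-based recursions (10)-(15) and verifies that they reproduce the estimates of a standard Kalman filter, as anticipated in Remark 1. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Simplified setting: x_{k+1} = phi*x_k + xi_k (stationary), y_k = H*x_k + v_k + w_k
rng = np.random.default_rng(0)
phi, H, R, Q, N = 0.9, 1.0, 0.5, 0.3, 30
sx2 = 1.0 / (1.0 - phi**2)                 # stationary signal variance

x = np.empty(N + 1)
x[0] = rng.normal(0.0, np.sqrt(sx2))
for k in range(1, N + 1):
    x[k] = phi * x[k - 1] + rng.normal()
y = H * x[1:] + rng.normal(0, np.sqrt(R), N) + rng.normal(0, np.sqrt(Q), N)

# (A1) factorization: E[x_k x_s] = A_k B_s, with A_k = phi^k, B_s = phi^{-s}*sx2
A = phi ** np.arange(1, N + 1)
B = sx2 / A

# Covariance-based recursions specialized to D = 0 and one sensor
e = Sig_e = 0.0
x_cov = np.empty(N); P_cov = np.empty(N)
for k in range(N):
    E_k = H * (B[k] - A[k] * Sig_e)            # Eq (13), no delayed terms
    Pi_k = H * A[k] * E_k + R + Q              # Eq (9): innovation variance
    mu = y[k] - H * A[k] * e                   # Eq (7): innovation
    e += E_k / Pi_k * mu                       # Eq (12)
    Sig_e += E_k**2 / Pi_k                     # Eq (14)
    x_cov[k] = A[k] * e                        # Eq (11): filter
    P_cov[k] = A[k] * (B[k] - A[k] * Sig_e)    # Eq (15): error variance

# Standard Kalman filter on the same data
xk, P = 0.0, sx2
x_kal = np.empty(N); P_kal = np.empty(N)
for k in range(N):
    xp, Pp = phi * xk, phi**2 * P + 1.0
    K = Pp * H / (H**2 * Pp + R + Q)
    xk, P = xp + K * (y[k] - H * xp), (1.0 - K * H) * Pp
    x_kal[k], P_kal[k] = xk, P

assert np.allclose(x_cov, x_kal) and np.allclose(P_cov, P_kal)
```

In this degenerate case both recursions compute the same orthogonal projection, so the estimates and error variances coincide; the interest of the covariance-based form is that it survives unchanged when the state-space model is replaced by the weaker assumption (A1).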

5. Computer Simulation Results

In this section, a numerical example is presented with the following purposes: (a) to show that, although the current covariance-based estimation algorithms do not require the evolution model generating the signal process, they are also applicable to the conventional formulation using the state-space model, even in the presence of state-dependent multiplicative noise; (b) to illustrate some kinds of uncertainties which can be covered by the current model with random measurement matrices; and (c) to analyze how the estimation accuracy of the proposed algorithms is influenced by the sensor uncertainties and the random delays in the transmissions.

Signal Evolution Model with State-Dependent Multiplicative Noise. Consider a scalar signal $\{x_k;\,k\ge0\}$ whose evolution is given by the following model with multiplicative and additive noises:

$x_k=(0.9+0.01\epsilon_{k-1})x_{k-1}+\xi_{k-1},\quad k\ge1,$

where $x_0$ is a standard Gaussian variable and $\{\epsilon_k;\,k\ge0\}$, $\{\xi_k;\,k\ge0\}$ are zero-mean Gaussian white processes with unit variance. Assuming that $x_0$, $\{\epsilon_k;\,k\ge0\}$ and $\{\xi_k;\,k\ge0\}$ are mutually independent, the signal covariance function is given by $E[x_kx_s]=0.9^{k-s}D_s$, $s\le k$, where $D_s=E[x_s^2]$ is recursively obtained by $D_s=0.8101D_{s-1}+1$, for $s\ge1$, with $D_0=1$; hence, the signal process satisfies assumption (A1) taking, for example, $A_k=0.9^k$ and $B_s=0.9^{-s}D_s$.
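The variance recursion $D_s=0.8101D_{s-1}+1$ is easy to verify against its closed-form solution (a small check of our own): since the recursion is affine, $D_s=0.8101^s(D_0-D_\infty)+D_\infty$ with fixed point $D_\infty=1/(1-0.8101)$:

```python
# Check the example's variance recursion D_s = 0.8101*D_{s-1} + 1, D_0 = 1,
# against the closed-form solution of the affine recursion.
D_inf = 1.0 / (1.0 - 0.8101)          # fixed point: D_inf = 0.8101*D_inf + 1
D, N = 1.0, 50
for s in range(N):
    D = 0.8101 * D + 1.0              # recursion from the signal model
D_closed = 0.8101**N * (1.0 - D_inf) + D_inf
assert abs(D - D_closed) < 1e-9
```

Note that the multiplier $0.8101=0.9^2+0.01^2$ collects the second moments of the deterministic and multiplicative parts of the transition coefficient.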

Sensor Measured Outputs. As in [22], let us consider scalar measurements provided by four sensors with different types of uncertainty: continuous and discrete gain degradation in sensors 1 and 2, respectively, missing measurements in sensor 3, and both missing measurements and multiplicative noise in sensor 4. These uncertainties can be described in a unified way by the current model with random measurement matrices; specifically, the measured outputs are described according to Equation (1):

$z_k^{(i)}=H_k^{(i)}x_k+v_k^{(i)},\quad k\ge1,\ i=1,2,3,4,$

with the following characteristics:

  • For i=1,2,3, Hk(i)=C(i)θk(i) and Hk(4)=C(4)+C(4)ρkθk(4), where C(1)=C(3)=0.8, C(2)=C(4)=0.75, C(4)=0.95, and {ρk;k1} is a zero-mean Gaussian white process with unit variance. The sequences {ρk;k1} and {θk(i);k1},i=1,2,3,4, are mutually independent, and {θk(i);k1},i=1,2,3,4, are white processes with the following time-invariant probability distributions:
    • $\theta_k^{(1)}$ is uniformly distributed over the interval $[0.1, 0.9]$;
    • $P[\theta_k^{(2)} = 0] = 0.3$, $P[\theta_k^{(2)} = 0.5] = 0.3$, $P[\theta_k^{(2)} = 1] = 0.4$;
    • For $i=3,4$, $\theta_k^{(i)}$ are Bernoulli random variables with the same time-invariant probability in both sensors, $P[\theta_k^{(i)} = 1] = p$.
  • The additive noises are defined by $v_k^{(i)} = c_i \eta_k^{v}$, $i=1,2,3,4$, where $c_1 = 0.5$, $c_2 = c_3 = 0.75$, $c_4 = 1$, and $\{\eta_k^{v};\, k \ge 1\}$ is a zero-mean Gaussian white process with unit variance. Clearly, the additive noises $\{v_k^{(i)};\, k \ge 1\}$, $i=1,2,3,4$, are correlated at any time, with $R_k^{(ij)} = c_i c_j$, $i,j = 1,2,3,4$.
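As an illustration, these four uncertainty types can be simulated by drawing the random measurement matrices directly. The following Python sketch assumes the description above (in particular, the gain $\bar{C}^{(4)} = 0.95$ multiplying the sensor-4 multiplicative noise); the helper name `sensor_outputs` is ours. The common noise factor $\eta_k^v$ produces the cross-correlation $R_k^{(ij)} = c_i c_j$:

```python
import numpy as np

def sensor_outputs(x, p=0.5, rng=None):
    """Simulate z_k^(i) = H_k^(i) x_k + v_k^(i) for the four sensors.

    x : 1-D array of signal values x_1..x_N.
    p : P[theta_k^(i) = 1] for the Bernoulli sensors i = 3, 4.
    """
    rng = rng or np.random.default_rng(0)
    N = x.size
    theta1 = rng.uniform(0.1, 0.9, N)                           # continuous gain degradation
    theta2 = rng.choice([0.0, 0.5, 1.0], N, p=[0.3, 0.3, 0.4])  # discrete gain degradation
    theta3 = rng.binomial(1, p, N).astype(float)                # missing measurements
    theta4 = rng.binomial(1, p, N).astype(float)                # missing + multiplicative noise
    rho = rng.standard_normal(N)
    H = np.vstack([0.8 * theta1,                    # H^(1) = C^(1) theta^(1)
                   0.75 * theta2,                   # H^(2) = C^(2) theta^(2)
                   0.8 * theta3,                    # H^(3) = C^(3) theta^(3)
                   (0.75 + 0.95 * rho) * theta4])   # H^(4) with multiplicative noise
    c = np.array([0.5, 0.75, 0.75, 1.0])
    v = np.outer(c, rng.standard_normal(N))         # v_k^(i) = c_i eta_k^v
    return H * x + v
```

With the signal set to zero, the empirical cross-moment of the outputs of sensors 1 and 4 approaches $c_1 c_4 = 0.5$, matching $R_k^{(14)}$.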

Observations with Bounded Random Delays and Packet Dropouts. Next, according to the theoretical observation model, let us suppose that bounded random measurement delays and packet dropouts, with different delay probabilities, exist in the data transmission. Specifically, assuming that the largest delay is $D = 3$, let us consider the observation Equation (2):

$$y_k^{(i)} = \sum_{d=0}^{\min(k-1,\,3)} \gamma_{d,k}^{(i)}\, z_{k-d}^{(i)} + w_k^{(i)}, \quad k \ge 1,$$

where, for $i=1,2,3,4$ and $d=0,1,2,3$, $\{\gamma_{d,k}^{(i)};\, k > d\}$ are sequences of independent Bernoulli variables with the same time-invariant delay probabilities for the four sensors, $\bar{\gamma}_{d,k}^{(i)} = \bar{\gamma}_d$, where $\sum_{d=0}^{3} \bar{\gamma}_d \le 1$. Hence, the packet dropout probability is $1 - \sum_{d=0}^{3} \bar{\gamma}_d$. The transmission noise is defined by $w_k^{(i)} = c_i \eta_k^{w}$, $i=1,2,3,4$, where $\{\eta_k^{w};\, k \ge 1\}$ is a zero-mean Gaussian white process with unit variance.

Finally, in order to apply the proposed algorithms, and according to (A5), we will assume that all of the processes involved in the observation equations are mutually independent.
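A minimal sketch of this transmission mechanism follows. Since $\sum_{d=0}^{3} \bar{\gamma}_d \le 1$ and the dropout probability is $1 - \sum_{d=0}^{3} \bar{\gamma}_d$, the indicators $\gamma_{d,k}^{(i)}$ at a fixed $k$ are interpreted here as mutually exclusive (at most one equals 1), so each packet is drawn from a categorical distribution over {delay 0, 1, 2, 3, loss}; this interpretation and the helper name `transmit` are illustrative assumptions:

```python
import numpy as np

def transmit(z, gamma=(0.6, 0.1, 0.1, 0.1), c=1.0, rng=None):
    """Simulate y_k = sum_d gamma_{d,k} z_{k-d} + w_k for one sensor.

    gamma[d] : probability of a d-step delay; 1 - sum(gamma) is the
               packet-dropout probability.  w_k = c * eta_k^w.
    """
    rng = rng or np.random.default_rng(1)
    probs = list(gamma) + [1.0 - sum(gamma)]
    y = np.zeros(z.size)
    for k in range(z.size):
        d = rng.choice(5, p=probs)      # d = 0..3: delay, d = 4: dropout
        if d < 4 and k - d >= 0:        # a delay larger than k is unavailable
            y[k] = z[k - d]
        y[k] += c * rng.standard_normal()   # transmission noise w_k
    return y
```

With the default probabilities (the values used in Figures 1–3) and the noise switched off, roughly 10% of the received observations carry no measurement, matching the dropout probability $1 - \sum_d \bar{\gamma}_d = 0.1$.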

To illustrate the feasibility and effectiveness of the proposed algorithms, they were implemented in MATLAB (R2011b 7.13.0.564, The MathWorks, Natick, MA, USA) and one hundred iterations of the prediction, filtering and fixed-point smoothing algorithms were performed. To analyze the effect of the network-induced uncertainties on the estimation accuracy, different values of the probability $p$ of the Bernoulli random variables that model the uncertainties of the third and fourth sensors, and several values of the delay probabilities $\bar{\gamma}_d$, $d = 0,1,2,3$, were considered.

Performance of the Prediction, Filtering and Fixed-Point Smoothing Estimators. Considering the values $p = 0.5$, $\bar{\gamma}_0 = 0.6$ (packet arrival probability) and $\bar{\gamma}_d = (1 - \bar{\gamma}_0)/4$, $d = 1,2,3$ (delay probabilities), Figure 1 displays a simulated trajectory along with the prediction, filtering and smoothing estimates, showing a satisfactory tracking performance of the proposed estimators. Figure 2 shows the error variances of the predictors $\hat{z}_{k/k-2}$ and $\hat{z}_{k/k-1}$, the filter $\hat{z}_{k/k}$, and the smoothers $\hat{z}_{k/k+1}$ and $\hat{z}_{k/k+2}$. Analogously to the case of non-delayed observations, the performance of the estimators improves as more observations are used; that is, the error variances of the smoothers are less than those of the filter which, in turn, are less than those of the predictors. Hence, the estimation accuracy of the smoothers is superior to that of the filter and predictors, and it improves as the number of iterations in the fixed-point smoothing algorithm increases. The performance of the proposed filter has also been compared with that of the standard Kalman filter; for this purpose, the filtering mean square error (MSE) at each sampling time was calculated over one thousand independent simulations with one hundred iterations of each filter. The results of this comparison are displayed in Figure 3, which shows that the proposed filter outperforms the Kalman filter, as expected, since the latter takes into account neither the uncertainties in the measured outputs nor the delays and losses during transmission.

Figure 1. Simulated signal and proposed prediction, filtering and smoothing estimates when $p = 0.5$ and $\bar{\gamma}_0 = 0.6$, $\bar{\gamma}_d = 0.1$, $d = 1,2,3$.

Figure 2. Prediction, filtering and smoothing error variances when $p = 0.5$ and $\bar{\gamma}_0 = 0.6$, $\bar{\gamma}_d = 0.1$, $d = 1,2,3$.

Figure 3. MSE of the Kalman filter and the proposed filter when $p = 0.5$ and $\bar{\gamma}_0 = 0.6$, $\bar{\gamma}_d = 0.1$, $d = 1,2,3$.

Influence of the Missing Measurements. To analyze the sensitivity of the estimation performance to the missing measurements phenomenon in the third and fourth sensors, the error variances are calculated for different values of the probability $p$. Specifically, considering again $\bar{\gamma}_0 = 0.6$ and $\bar{\gamma}_d = 0.1$, $d = 1,2,3$, Figure 4 displays the prediction, filtering and fixed-point smoothing error variances for values of $p$ from 0.5 to 0.9. This figure shows that, as $p$ increases, the estimation error variances become smaller; hence, as expected, the performance of the estimators improves as the probability of missing measurements, $1 - p$, decreases.

Figure 4. Prediction, filtering and fixed-point smoothing error variances for different values of the probability $p$, when $\bar{\gamma}_0 = 0.6$ and $\bar{\gamma}_d = 0.1$, $d = 1,2,3$.

Influence of the Transmission Random Delays and Packet Dropouts. To show how the estimation accuracy is influenced by the transmission random delays and packet dropouts, the prediction, filtering and smoothing error variances are displayed in Figure 5 for a fixed value of the probability $p$, namely $p = 0.5$, when $\bar{\gamma}_0$ is varied from 0.1 to 0.9, considering again $\bar{\gamma}_d = (1 - \bar{\gamma}_0)/4$, $d = 1,2,3$. Since the behaviour of the error variances is analogous from a certain iteration on, only the results at iteration $k = 50$ are shown in Figure 5. This figure shows that the performance of the estimators (predictor, filter and smoother) is indeed influenced by the transmission delay and packet dropout probabilities and, as expected, the error variances become smaller, and hence the performance of the estimators improves, as the packet arrival probability $\bar{\gamma}_0$ increases. Moreover, this improvement is more significant for the filter and the smoothers than for the predictors. In addition, as already deduced from Figure 2, for all values of $\bar{\gamma}_0$ the performance of the estimators improves as more observations are used, and this improvement becomes more significant as $\bar{\gamma}_0$ increases.

Figure 5. Estimation error variances for different values of the probability $\bar{\gamma}_0$, when $p = 0.5$.

Finally, it must be remarked that analogous conclusions are obtained for other values of the probabilities $p$ and $\bar{\gamma}_d$, $d = 0,1,2,3$, and also when these probabilities differ between sensors.

6. Conclusions

This paper addresses the optimal fusion estimation problem in networked stochastic systems with random parameter matrices, when multi-step delays or even packet dropouts occur randomly during the data transmission. Using an innovation approach, recursive prediction, filtering and fixed-point smoothing algorithms have been designed, which are easily implementable and do not require the signal evolution model, but only the mean and covariance functions of the system processes.

Unlike other estimation algorithms proposed in the literature, where the estimators are restricted to a particular structure, the recursive optimal estimation algorithms designed in this paper impose no structure on the estimators, relying only on the LS optimality criterion. Another advantage is that the current approach does not resort to the augmentation technique; consequently, the dimension of the designed estimators is the same as that of the original signal, thus reducing the computational burden in the processing centre.

Acknowledgments

This research is supported by the “Ministerio de Economía y Competitividad” and “Fondo Europeo de Desarrollo Regional” FEDER (Grant No. MTM2014-52291-P).

Author Contributions

All of the authors contributed equally to this work. Raquel Caballero-Águila, Aurora Hermoso-Carazo and Josefa Linares-Pérez provided original ideas for the proposed model and collaborated in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation results; and the paper was also written and reviewed cooperatively.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1. Ran C., Deng Z. Self-tuning weighted measurement fusion Kalman filtering algorithm. Comput. Stat. Data Anal. 2012;56:2112–2128. doi: 10.1016/j.csda.2012.01.001.
  • 2. Feng J., Zeng M. Optimal distributed Kalman filtering fusion for a linear dynamic system with cross-correlated noises. Int. J. Syst. Sci. 2012;43:385–398. doi: 10.1080/00207721.2010.502601.
  • 3. Yan L., Li X., Xia Y., Fu M. Optimal sequential and distributed fusion for state estimation in cross-correlated noise. Automatica. 2013;49:3607–3612. doi: 10.1016/j.automatica.2013.09.013.
  • 4. Ma J., Sun S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations. IEEE Sens. J. 2013;13:1228–1235. doi: 10.1109/JSEN.2012.2227995.
  • 5. Gao S., Chen P. Suboptimal filtering of networked discrete-time systems with random observation losses. Math. Probl. Eng. 2014;2014:151836. doi: 10.1155/2014/151836.
  • 6. Liu Y., He X., Wang Z., Zhou D. Optimal filtering for networked systems with stochastic sensor gain degradation. Automatica. 2014;50:1521–1525. doi: 10.1016/j.automatica.2014.03.002.
  • 7. Chen B., Zhang W., Yu L. Distributed fusion estimation with missing measurements, random transmission delays and packet dropouts. IEEE Trans. Autom. Control. 2014;59:1961–1967. doi: 10.1109/TAC.2013.2297192.
  • 8. Li N., Sun S., Ma J. Multi-sensor distributed fusion filtering for networked systems with different delay and loss rates. Digit. Signal Process. 2014;34:29–38. doi: 10.1016/j.dsp.2014.07.016.
  • 9. Wang S., Fang H., Tian X. Recursive estimation for nonlinear stochastic systems with multi-step transmission delays, multiple packet dropouts and correlated noises. Signal Process. 2015;115:164–175. doi: 10.1016/j.sigpro.2015.03.022.
  • 10. Chen D., Yu Y., Xu L., Liu X. Kalman filtering for discrete stochastic systems with multiplicative noises and random two-step sensor delays. Discret. Dyn. Nat. Soc. 2015;2015:809734. doi: 10.1155/2015/809734.
  • 11. García-Ligero M.J., Hermoso-Carazo A., Linares-Pérez J. Distributed fusion estimation in networked systems with uncertain observations and Markovian random delays. Signal Process. 2015;106:114–122. doi: 10.1016/j.sigpro.2014.07.003.
  • 12. Gao S., Chen P., Huang D., Niu Q. Stability analysis of multi-sensor Kalman filtering over lossy networks. Sensors. 2016;16:566. doi: 10.3390/s16040566.
  • 13. Lin H., Sun S. State estimation for a class of non-uniform sampling systems with missing measurements. Sensors. 2016;16:1155. doi: 10.3390/s16081155.
  • 14. Hu J., Wang Z., Chen D., Alsaadi F.E. Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects. Inf. Fusion. 2016;31:65–75. doi: 10.1016/j.inffus.2016.01.001.
  • 15. Sun S., Lin H., Ma J., Li X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion. 2017;38:122–134. doi: 10.1016/j.inffus.2017.03.006.
  • 16. Li W., Jia Y., Du J. Distributed filtering for discrete-time linear systems with fading measurements and time-correlated noise. Digit. Signal Process. 2017;60:211–219. doi: 10.1016/j.dsp.2016.10.003.
  • 17. Luo Y., Zhu Y., Luo D., Zhou J., Song E., Wang D. Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications. Sensors. 2008;8:8086–8103. doi: 10.3390/s8128086.
  • 18. Shen X.J., Luo Y.T., Zhu Y.M., Song E.B. Globally optimal distributed Kalman filtering fusion. Sci. China Inf. Sci. 2012;55:512–529. doi: 10.1007/s11432-011-4538-7.
  • 19. Hu J., Wang Z., Gao H. Recursive filtering with random parameter matrices, multiple fading measurements and correlated noises. Automatica. 2013;49:3440–3448. doi: 10.1016/j.automatica.2013.08.021.
  • 20. Linares-Pérez J., Caballero-Águila R., García-Garrido I. Optimal linear filter design for systems with correlation in the measurement matrices and noises: Recursive algorithm and applications. Int. J. Syst. Sci. 2014;45:1548–1562. doi: 10.1080/00207721.2014.909093.
  • 21. Yang Y., Liang Y., Pan Q., Qin Y., Yang F. Distributed fusion estimation with square-root array implementation for Markovian jump linear systems with random parameter matrices and cross-correlated noises. Inf. Sci. 2016;370–371:446–462. doi: 10.1016/j.ins.2016.08.020.
  • 22. Caballero-Águila R., Hermoso-Carazo A., Linares-Pérez J. Networked fusion filtering from outputs with stochastic uncertainties and correlated random transmission delays. Sensors. 2016;16:847. doi: 10.3390/s16060847.
  • 23. Caballero-Águila R., Hermoso-Carazo A., Linares-Pérez J. Fusion estimation using measured outputs with random parameter matrices subject to random delays and packet dropouts. Signal Process. 2016;127:12–23. doi: 10.1016/j.sigpro.2016.02.014.
  • 24. Feng J., Zeng M. Descriptor recursive estimation for multiple sensors with different delay rates. Int. J. Control. 2011;84:584–596. doi: 10.1080/00207179.2011.563321.
  • 25. Caballero-Águila R., Hermoso-Carazo A., Linares-Pérez J. Least-squares linear estimators using measurements transmitted by different sensors with packet dropouts. Digit. Signal Process. 2012;22:1118–1125. doi: 10.1016/j.dsp.2012.06.002.
  • 26. Ma J., Sun S. Distributed fusion filter for networked stochastic uncertain systems with transmission delays and packet dropouts. Signal Process. 2017;130:268–278. doi: 10.1016/j.sigpro.2016.07.004.
  • 27. Wang S., Fang H., Liu X. Distributed state estimation for stochastic non-linear systems with random delays and packet dropouts. IET Control Theory Appl. 2015;9:2657–2665. doi: 10.1049/iet-cta.2015.0257.
  • 28. Sun S. Linear minimum variance estimators for systems with bounded random measurement delays and packet dropouts. Signal Process. 2009;89:1457–1466. doi: 10.1016/j.sigpro.2009.02.002.
  • 29. Sun S. Optimal linear filters for discrete-time systems with randomly delayed and lost measurements with/without time stamps. IEEE Trans. Autom. Control. 2013;58:1551–1556. doi: 10.1109/TAC.2012.2229812.
  • 30. Sun S., Ma J. Linear estimation for networked control systems with random transmission delays and packet dropouts. Inf. Sci. 2014;269:349–365. doi: 10.1016/j.ins.2013.12.055.
  • 31. Wang S., Fang H., Tian X. Minimum variance estimation for linear uncertain systems with one-step correlated noises and incomplete measurements. Digit. Signal Process. 2016;49:126–136. doi: 10.1016/j.dsp.2015.10.007.
  • 32. Tian T., Sun S., Li N. Multi-sensor information fusion estimators for stochastic uncertain systems with correlated noises. Inf. Fusion. 2016;27:126–137. doi: 10.1016/j.inffus.2015.06.001.
