Sensors (Basel, Switzerland). 2016 Jun 8;16(6):847. doi: 10.3390/s16060847

Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays

Raquel Caballero-Águila 1,*, Aurora Hermoso-Carazo 2, Josefa Linares-Pérez 2
Editor: Xue-Bo Jin
PMCID: PMC4934273  PMID: 27338387

Abstract

This paper is concerned with the distributed and centralized fusion filtering problems in networked sensor systems with random one-step transmission delays. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring knowledge of the signal evolution model, but using only the first and second-order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators' performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effect of the delays on the filters' accuracy is analyzed in a numerical example, which also illustrates how some common network-induced uncertainties can be handled using the proposed observation model with random matrices.

Keywords: least-squares estimation, distributed and centralized fusion methods, random parameter matrices, correlated noises, random delays

1. Introduction

Estimation problems in networked stochastic systems have been widely studied, especially in the past decade, due to the wide range of potential applications in areas such as target tracking, air traffic control, fault diagnosis and computer vision, and important advances in the design of multisensor fusion techniques have been achieved [1]. The basic question in fusion estimation problems is how to merge the measurement data from different sensors; usually, either the centralized or the distributed fusion method is used. Centralized fusion estimators are obtained by processing the measurements received from all sensors in the fusion center; in the distributed fusion method, local estimators based on the measurements received from each sensor are obtained first and then combined according to a certain information fusion criterion. Several centralized and/or distributed fusion estimation algorithms have been proposed for conventional systems (see, e.g., [2,3,4,5], and the references therein), where the sensor measured outputs are affected only by additive noises and the data are assumed to be sent directly to the fusion center without any kind of transmission error; that is, assuming perfect connections.

However, although the use of sensor networks offers several advantages, restrictions of the physical equipment and uncertainties in the external environment inevitably give rise to new problems associated with network-induced phenomena, in both the output and the transmission of the sensor measurements [6]. Multiplicative noise uncertainties, sensor gain degradation and missing measurements are some of the random phenomena that typically arise in the sensor measured outputs and cannot be described by the usual additive disturbances alone. These uncertainties can severely degrade the performance of conventional estimators and have motivated the design of new fusion estimation algorithms for such systems (see, e.g., [7,8,9,10,11], and references therein). Clearly, these situations with uncertainties in the measurements of the network sensors can be modeled by systems with random parameter measurement matrices, which have great practical significance given the large number of realistic situations and application areas in which systems with stochastic parameters arise, such as digital control of chemical processes, radar control, navigation systems or economic systems (see, e.g., [12,13], among others). Therefore, it is not surprising that, in the past few years, much attention has been focused on the design of new fusion estimation algorithms for systems with random parameter matrices (see, e.g., [14,15,16,17], and references therein).

In addition to the aforementioned uncertainties, when the data packets are sent from the sensors to the processing center via a communication network, some further network-induced phenomena, such as random delays or measurement losses, inevitably arise during this transmission process, due to unreliable network characteristics, imperfect communication channels and transmission failures. These additional transmission uncertainties can spoil the performance of the fusion estimators and motivate the design of fusion estimation algorithms that take their effects into consideration. In recent years, in response to the popularity of networked stochastic systems, the fusion estimation problem from observations with random delays and packet dropouts, which may happen during the data transmission, has been one of the mainstream research topics (see, e.g., [18,19,20,21,22,23,24,25,26,27,28,29,30], and references therein). All the above papers on signal estimation with random transmission delays assume that the random delays at each sensor are independent and that the delays of different network sensors are mutually independent; in [31] this restriction was weakened and random delays featuring correlation at consecutive sampling times were considered, thus making it possible to deal with common practical situations (e.g., those in which two consecutive observations cannot be delayed).

It should also be noted that, in many real-world problems, the measurement noises are usually correlated; for example, when all the sensors operate in the same noisy environment or when the sensor noises are state dependent. For this reason, the fairly conservative assumption of uncorrelated measurement noises is commonly weakened in most of the aforementioned research on signal estimation, including systems with both deterministic and random parameter matrices. Namely, the optimal Kalman filtering fusion problem in systems with noises cross-correlated at consecutive sampling times is addressed, for example, in [25]; also, under different types of noise correlation, centralized and distributed fusion algorithms are obtained in [11,26] for systems with multiplicative noise, and in [7] for systems where the measurements might provide only partial information about the signal. Autocorrelated and cross-correlated noises have also been considered in systems with random parameter matrices and transmission uncertainties; some results on the fusion estimation problems for these systems can be found in [22,24,27].

In this paper, covariance information is used to address the distributed and centralized fusion estimation problems for a class of linear networked stochastic systems with measured outputs perturbed by random parameter matrices, and subject to one-step random transmission delays. It is assumed that the sensor measurement noises are one-step autocorrelated and cross-correlated, and that the Bernoulli variables describing the measurement delays in the different sensors are correlated at the same and consecutive sampling times. The proposed observation model can describe the cases of one-step delay, packet loss, or re-received measurements. To the best of the authors’ knowledge, fusion estimation problems in this framework, with random measurement matrices and cross-correlated sensor noises in the measured outputs together with correlated random delays in transmission, have not been investigated; so, encouraged by the above considerations, we reckon that they constitute an interesting research challenge.

The main contributions of the current research are highlighted as follows: (i) The treatment used to address the estimation problem, based on covariance information, does not require the evolution model generating the signal process; nonetheless, the proposed fusion algorithms are also applicable to the conventional state-space model formulation; (ii) Random parameter matrices are considered in the measured outputs, providing a fairly comprehensive and unified framework to describe network-induced phenomena such as multiplicative noise uncertainties or missing measurements, and correlation between the different sensor measurement noises is considered simultaneously; (iii) As in [31], random correlated delays in transmission, with different delay characteristics at each sensor, are considered; however, the observation model in this paper is more general than that in [31], since the latter does not take random measurement matrices and correlated noises into account; (iv) Unlike [22,24,31], where only centralized fusion estimators are obtained, in this paper both the centralized and the distributed estimation problems are addressed; the estimators are obtained by an innovation approach, and recursive algorithms, computationally simple and suitable for online applications, are proposed; (v) Finally, it must be noted that we do not use the augmentation approach to deal with the delayed measurements, thus reducing the computational cost compared with the augmentation method.

The rest of the paper is organized as follows. Both the measured outputs with random parameter matrices and the one-step random delay observation model are presented in Section 2, including the model hypotheses under which the distributed and centralized estimation problems are addressed. The distributed fusion method is applied in Section 3; specifically, the local least-squares linear filtering algorithms are derived in Section 3.1 using an innovation approach and, in Section 3.2, the proposed distributed fusion filter is obtained as a matrix-weighted linear combination of the local filtering estimators, using the mean squared error as optimality criterion. In Section 4, a recursive algorithm for the centralized least-squares linear filtering estimator is proposed. Section 5 is devoted to analyzing the effectiveness of the proposed estimation algorithms through a simulation example, in which the effects of the sensor random delays on the estimators are compared. Some conclusions are drawn in Section 6.

Notation: 

The notation used throughout the paper is standard. The symbol a∧b indicates the minimum of two real numbers a, b. R^n and R^{m×n} denote the n-dimensional Euclidean space and the set of all m×n real matrices, respectively. For a matrix A, the symbols A^T and A^{-1} denote its transpose and inverse, respectively; the notation A⊗B represents the Kronecker product of the matrices A, B. If the dimensions of vectors or matrices are not explicitly stated, they are assumed to be compatible with the algebraic operations. In particular, I and 0 denote the identity matrix and the zero matrix of appropriate dimensions, respectively. For any function Gk,s depending on the time instants k and s, we will write Gk = Gk,k for simplicity; analogously, F(i) = F(ii) will be written for any function F(ij) depending on the sensors i and j. Finally, δk,s denotes the Kronecker delta function.

2. Problem Formulation and Model Description

This paper deals with the distributed and centralized fusion filtering problems from randomly delayed observations coming from networked sensors; the signal measurements at the different sensors are noisy linear functions of the signal, with random parameter matrices, and the sensor noises are assumed to be autocorrelated and cross-correlated at the same and consecutive sampling times. The estimation is performed in a processing center, connected to all sensors, to which the measurements are transmitted through unreliable communication channels that may lead to one-step random delays, due to network congestion or other causes. In the centralized filtering problem, the estimators are obtained by fusing all the network observations at each sampling time, whereas in the distributed filtering problem, local estimators based only on the observations of each individual sensor are obtained first and, then, a fusion estimator based on the local ones is calculated.

Our aim is to design recursive algorithms for the optimal linear distributed and centralized filters under the least-squares (LS) criterion, requiring only the first and second-order moments of the processes involved in the model describing the observations from the different sensors. Next, we present the model and formulate the hypotheses necessary to address the estimation problem.

Description of the observation model. Let us consider a discrete-time second-order nx-dimensional signal process, {xk; k≥1}, which is measured by m sensor nodes of the network, i=1,…,m, generating the outputs zk(i)∈R^{nz}, k≥1, described by

zk(i) = Hk(i)xk + vk(i),  k≥1;  i=1,…,m    (1)

where {Hk(i); k≥1} are independent random parameter matrices of appropriate dimensions and {vk(i); k≥1} is the process describing the measurement noise in sensor i, which is assumed to be one-step correlated. We also assume that all the sensors operate in the same noisy environment and that there exists correlation between the different sensor noises at the same and consecutive sampling times.

In order to estimate the signal process, {xk; k≥1}, the measured outputs are transmitted to a processing center via unreliable channels, causing one-step random delays in such transmissions. When a measurement, zk(i), suffers delay and, hence, is unavailable at time k, the processor is assumed to use the previous one, zk-1(i); this way of dealing with delays requires that the first measurement is always available and so, considering zero-one random variables γk(i), k≥2, the observations used in the estimation are described by

yk(i) = (1-γk(i))zk(i) + γk(i)zk-1(i),  k≥2;  y1(i) = z1(i);  i=1,…,m    (2)

In this paper, the variables modelling the delays are assumed to be one-step correlated, thus covering many practical situations; for example, those in which two consecutive observations through the same channel cannot be delayed, and situations where there is some sort of link between the different communication channels.
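As a small illustration of this kind of correlation, the following Python sketch (a toy construction of our own, not the model used later in the paper) generates a delay sequence in which two consecutive observations can never both be delayed, by forcing γk = 0 whenever γk-1 = 1; the resulting Bernoulli variables are one-step correlated, with E[γkγk-1] = 0 by construction.

```python
import numpy as np

# One-step correlated Bernoulli delay indicators: if the previous observation
# was delayed (gamma = 1), the current one cannot be.  The probability p is an
# illustrative assumption, not a value taken from the paper.
rng = np.random.default_rng(1)
p = 0.3                              # P(delay | previous observation on time)
K = 2000
gamma = np.zeros(K, dtype=int)       # gamma[0] unused: y_1 = z_1 by definition
for k in range(2, K):
    if gamma[k - 1] == 1:
        gamma[k] = 0                 # consecutive delays are forbidden
    else:
        gamma[k] = rng.random() < p

# E[gamma_k gamma_{k-1}] = 0, yet gamma_k and gamma_{k-1} are not independent.
assert not np.any(gamma[1:] & gamma[:-1])
```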

Model hypotheses. The LS estimation problem will be addressed under the following hypotheses about the processes involved in Equations (1) and (2), which formally specify the above assumptions:

  • (H1) 

    The signal process, {xk; k≥1}, has zero mean and its covariance function can be expressed in a separable form; namely, E[xkxs^T] = AkBs^T, s≤k, where Ak, Bk, k≥1, are known matrices of dimensions nx×N.

  • (H2) 

    {Hk(i); k≥1}, i=1,…,m, are independent sequences of independent random parameter matrices whose entries have known means and known second-order moments; we will denote H̄k(i) ≜ E[Hk(i)], k≥1.

  • (H3) 
    The sensor measurement noises {vk(i); k≥1}, i=1,…,m, are zero-mean sequences with known second-order moments, defined by:
    E[vk(i)vs(j)^T] = Rk(ij)δk,s + Rk,k-1(ij)δk-1,s,  s≤k;  i,j=1,…,m
  • (H4) 

    {γk(i); k≥2}, i=1,…,m, are sequences of Bernoulli random variables with known means, γ̄k(i) ≜ E[γk(i)], k≥2. It is assumed that γk(i) and γs(j) are independent for |k-s|≥2, and that the second-order moments, γ̄k,s(ij) ≜ E[γk(i)γs(j)], s=k-1,k, and i,j=1,…,m, are also known.

  • (H5) 

    For i=1,…,m, the processes {xk; k≥1}, {Hk(i); k≥1}, {vk(i); k≥1} and {γk(i); k≥2} are mutually independent.

Hypotheses (H1)–(H5) guarantee that the observation processes in the different sensors have zero mean, and that the matrices Σk,sy(ij) ≜ E[yk(i)ys(j)^T], s=k-1,k, can be obtained from Σk,sz(ij) ≜ E[zk(i)zs(j)^T], s=k-2,k-1,k, and their transposes, by the following expressions:

Σk,sy(ij) = [1-γ̄k(i)-γ̄s(j)+γ̄k,s(ij)]Σk,sz(ij) + [γ̄s(j)-γ̄k,s(ij)]Σk,s-1z(ij) + [γ̄k(i)-γ̄k,s(ij)]Σk-1,sz(ij) + γ̄k,s(ij)Σk-1,s-1z(ij),  k,s≥2
Σ2,1y(ij) = [1-γ̄2(i)]Σ2,1z(ij) + γ̄2(i)Σ1z(ij);  Σ1y(ij) = Σ1z(ij)
Σk,sz(ij) = E[Hk(i)AkBs^T Hs(j)^T] + Rk(ij)δk,s + Rk,k-1(ij)δk-1,s,  s≤k    (3)

Remark 1. 

The term E[Hk(i)AkBs^T Hs(j)^T] in Σk,sz(ij) comes from E[Hk(i)xkxs^T Hs(j)^T] which, from the independence between (xk,xs) and (Hk(i),Hs(j)), is equal to E[Hk(i)E[xkxs^T]Hs(j)^T]. From (H2), E[Hk(i)AkBs^T Hs(j)^T] = H̄k(i)AkBs^T H̄s(j)^T for j≠i or s≠k, while E[Hk(i)AkBk^T Hk(i)^T] is computed entrywise, according to the following formula:

(E[H AkBk^T H^T])pq = Σ_{a=1}^{nx} Σ_{b=1}^{nx} E[hpa hqb](AkBk^T)ab,  p,q=1,…,nz

where H∈R^{nz×nx} is any random matrix and hpq denotes its entries.
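The entrywise formula above can be checked numerically. In the sketch below (a toy model of our own, not one of the paper's sensors), H = λH0 with H0 a fixed matrix and λ a Bernoulli scalar, so that E[hpa hqb] = E[λ²](H0)pa(H0)qb and the expectation E[H M H^T] can also be computed directly for comparison.

```python
import numpy as np

# Entrywise computation of E[H M H^T] from the second-order moments of the
# entries of H (Remark 1).  Toy model: H = lam * H0, lam Bernoulli(plam),
# so E[lam^2] = plam and E[h_pa h_qb] = plam * H0_pa * H0_qb.
H0 = np.array([[0.8, 0.1], [0.2, 0.5]])
plam = 0.6
M = np.array([[2.0, 0.3], [0.3, 1.0]])          # plays the role of A_k B_k^T

Ehh = plam * np.einsum('pa,qb->paqb', H0, H0)   # E[h_pa h_qb], all p,a,q,b

# entrywise formula: (E[H M H^T])_pq = sum_{a,b} E[h_pa h_qb] M_ab
EHMH = np.einsum('paqb,ab->pq', Ehh, M)

# direct expectation: H = H0 with probability plam, H = 0 otherwise
direct = plam * H0 @ M @ H0.T
assert np.allclose(EHMH, direct)
```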

3. Distributed Fusion Linear Filter

The aim of this section is to address the distributed fusion linear filtering problem for the signal xk, from the randomly delayed observations defined by Equations (1) and (2), under the LS criterion. The proposed distributed filter is designed as the LS matrix-weighted linear combination of the local LS linear filters; therefore, in a first step, such local filters need to be derived.

3.1. Derivation of the Local LS Linear Filters

With the purpose of obtaining the signal LS linear filters based on the available observations from each sensor, we will use an innovation approach, which provides recursive algorithms for the local estimators, denoted by x̂k/k(i), i=1,…,m.

For each sensor i=1,…,m, the innovation at time k, which represents the new information provided by the k-th observation, is defined by μk(i) = yk(i) - ŷk/k-1(i), k≥1, where ŷk/k-1(i) is the LS linear estimator of yk(i) based on ys(i), s≤k-1, with ŷ1/0(i) = E[y1(i)] = 0. As is known [32], the innovations, {μk(i); k≥1}, constitute a zero-mean white process, and the LS linear estimator of any random vector ξk based on the observations y1(i),…,yL(i), denoted by ξ̂k/L(i), can be calculated as a linear combination of the corresponding innovations, μ1(i),…,μL(i); namely,

ξ̂k/L(i) = Σ_{h=1}^{L} E[ξk μh(i)^T] Πh(i)^{-1} μh(i)    (4)

where Πk(i) ≜ E[μk(i)μk(i)^T] denotes the covariance matrix of μk(i). This general expression is derived from the Orthogonal Projection Lemma (OPL), which establishes that the estimation error is uncorrelated with all the observations or, equivalently, uncorrelated with all the innovations.
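The following Python sketch illustrates Expression (4) and the OPL on arbitrary (assumed) covariance data: the innovations are built by sequentially subtracting the one-stage predictions, and the resulting innovation-based estimator is checked against the direct LS projection, with which it must coincide.

```python
import numpy as np

# LS estimation via innovations (Expression (4)) on synthetic covariance
# data of our own; c_h = E[xi y_h] and Sy = E[y y^T] are assumed known.
rng = np.random.default_rng(0)
L = 5
M = rng.standard_normal((L + 1, L + 1))
C = M @ M.T                        # joint covariance of (xi, y_1, ..., y_L)
c, Sy = C[0, 1:], C[1:, 1:]
y = rng.standard_normal(L)         # an arbitrary realization of y_1..y_L

T = np.zeros((L, L))               # T[h]: coefficients of mu_{h+1} in terms of y
Pi = np.zeros(L)                   # innovation variances
est_innov = 0.0
for h in range(L):
    t = np.eye(L)[h].copy()
    for j in range(h):             # subtract projection onto earlier innovations
        t -= (Sy[h] @ T[j]) / Pi[j] * T[j]    # E[y_h mu_j] = Sy[h] @ T[j]
    T[h], Pi[h] = t, t @ Sy @ t
    est_innov += (T[h] @ c) / Pi[h] * (T[h] @ y)   # Expression (4)

est_direct = c @ np.linalg.solve(Sy, y)            # direct LS projection
assert abs(est_innov - est_direct) < 1e-8
```

Because the innovations form an orthogonal basis of the linear space spanned by the observations, the two estimates agree to floating-point precision.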

Using the following alternative expression for the observations yk(i) given by Equation (2),

yk(i) = (1-γ̄k(i))Hk(i)xk + γ̄k(i)H̄k-1(i)xk-1 + wk(i),  k≥2
wk(i) = γ̄k(i)[Hk-1(i)-H̄k-1(i)]xk-1 + (1-γ̄k(i))vk(i) + γ̄k(i)vk-1(i) - [γk(i)-γ̄k(i)][zk(i)-zk-1(i)],  k≥2    (5)

and taking into account the independence hypotheses stated on the model, it is easy to see, from Equation (4), that

ŷk/k-1(i) = (1-γ̄k(i))H̄k(i)x̂k/k-1(i) + γ̄k(i)H̄k-1(i)x̂k-1/k-1(i) + Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}μk-h(i),  k≥2    (6)

where Wk,k-h(i) ≜ E[wk(i)μk-h(i)^T], h=1,2.

The general Expression (4) for the LS linear estimators as linear combinations of the innovations, together with Expression (6) for the one-stage observation predictor, are the starting point to derive the local recursive filtering algorithms presented below in Theorem 1; these algorithms also provide the filtering error covariance matrices, Pk/k(i) ≜ E[(xk-x̂k/k(i))(xk-x̂k/k(i))^T], which measure the accuracy of the estimators x̂k/k(i) under the LS optimality criterion.

Hereafter, for the matrices Ak and Bk involved in the signal covariance factorization (H1), the following operator will be used:

H̄Ck(i) = (1-γ̄k(i))H̄k(i)Ck + γ̄k(i)H̄k-1(i)Ck-1,  k≥2;  H̄C1(i) = H̄1(i)C1,  C=A,B    (7)

Theorem 1. 

For each i=1,…,m, the local LS linear filters, x̂k/k(i), and the corresponding error covariance matrices, Pk/k(i), are given by

x̂k/k(i) = AkOk(i),  k≥1    (8)

and

Pk/k(i) = Ak(Bk - Akrk(i))^T,  k≥1    (9)

where the vectors Ok(i) and the matrices rk(i) = E[Ok(i)Ok(i)^T] are recursively obtained from

Ok(i) = Ok-1(i) + Jk(i)Πk(i)^{-1}μk(i),  k≥1;  O0(i) = 0    (10)
rk(i) = rk-1(i) + Jk(i)Πk(i)^{-1}Jk(i)^T,  k≥1;  r0(i) = 0    (11)

and the matrices Jk(i) = E[Ok(i)μk(i)^T] satisfy

Jk(i) = H̄Bk(i)^T - rk-1(i)H̄Ak(i)^T - Σ_{h=1}^{(k-1)∧2} Jk-h(i)Πk-h(i)^{-1}Wk,k-h(i)^T,  k≥2;  J1(i) = H̄B1(i)^T    (12)

The innovations, μk(i), and their covariance matrices, Πk(i), are given by

μk(i) = yk(i) - H̄Ak(i)Ok-1(i) - Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}μk-h(i),  k≥2;  μ1(i) = y1(i)    (13)

and

Πk(i) = Σky(i) - H̄Ak(i)[H̄Bk(i)^T - Jk(i)] - Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}[H̄Ak(i)Jk-h(i) + Wk,k-h(i)]^T,  k≥2;  Π1(i) = Σ1y(i)    (14)

The coefficients Wk,k-h(i) = E[wk(i)μk-h(i)^T], h=1,2, are calculated as

Wk,k-1(i) = Σk,k-1y(i) - H̄Ak(i)H̄Bk-1(i)^T - Wk,k-2(i)Πk-2(i)^{-1}[H̄Ak-1(i)Jk-2(i) + Wk-1,k-2(i)]^T,  k≥3;  W2,1(i) = Σ2,1y(i) - H̄A2(i)H̄B1(i)^T    (15)
Wk,k-2(i) = γ̄k(i)(1-γ̄k-2(i))Rk-1,k-2(i),  k≥4;  W3,1(i) = γ̄3(i)R2,1(i)    (16)

Finally, the matrices Σk,sy(i) and H̄As(i), H̄Bs(i), s=k-1,k, are given in Equations (3) and (7), respectively.

Proof of Theorem 1. 

The local filter x̂k/k(i) will be obtained from the general Expression (4), starting from the computation of the coefficients Xk,h(i) = E[xkμh(i)^T] = E[xkyh(i)^T] - E[xkŷh/h-1(i)^T], 1≤h≤k.

The independence hypotheses and the separable structure of the signal covariance (H1) lead to E[xkyh(i)^T] = AkH̄Bh(i)^T, with H̄Bh(i) given by Equation (7). From Expression (6) for ŷh/h-1(i), h≥2, we have:

E[xkŷh/h-1(i)^T] = (1-γ̄h(i))E[xkx̂h/h-1(i)^T]H̄h(i)^T + γ̄h(i)E[xkx̂h-1/h-1(i)^T]H̄h-1(i)^T + Σ_{j=1}^{(h-1)∧2} Xk,h-j(i)Πh-j(i)^{-1}Wh,h-j(i)^T

Hence, using now Equation (4) for x̂h/h-1(i) and x̂h-1/h-1(i), the filter coefficients are expressed as

Xk,h(i) = AkH̄Bh(i)^T - Σ_{j=1}^{h-1} Xk,j(i)Πj(i)^{-1}[(1-γ̄h(i))Xh,j(i)^T H̄h(i)^T + γ̄h(i)Xh-1,j(i)^T H̄h-1(i)^T] - Σ_{j=1}^{(h-1)∧2} Xk,h-j(i)Πh-j(i)^{-1}Wh,h-j(i)^T,  2≤h≤k;  Xk,1(i) = AkH̄B1(i)^T

which guarantees that Xk,h(i) = AkJh(i), 1≤h≤k, with Jh(i) given by

Jh(i) = H̄Bh(i)^T - (Σ_{j=1}^{h-1} Jj(i)Πj(i)^{-1}Jj(i)^T)H̄Ah(i)^T - Σ_{j=1}^{(h-1)∧2} Jh-j(i)Πh-j(i)^{-1}Wh,h-j(i)^T,  h≥2;  J1(i) = H̄B1(i)^T    (17)

Therefore, by defining Ok(i) = Σ_{h=1}^{k} Jh(i)Πh(i)^{-1}μh(i) and rk(i) = E[Ok(i)Ok(i)^T], Expression (8) for the filter follows immediately from Equation (4), and Equation (9) is obtained by using the OPL to express Pk/k(i) = E[xkxk^T] - E[x̂k/k(i)x̂k/k(i)^T], and applying (H1) and Equation (8).

The recursive Expressions (10) and (11) are directly obtained from the corresponding definitions, taking into account that rk(i) = Σ_{h=1}^{k} Jh(i)Πh(i)^{-1}Jh(i)^T which, in turn, from Equation (17), leads to Equation (12) for Jk(i).

From now on, using that x̂k/k-1(i) = AkOk-1(i), x̂k-1/k-1(i) = Ak-1Ok-1(i) and Equation (7), Expression (6) for the observation predictor will be equivalently written as follows:

ŷk/k-1(i) = H̄Ak(i)Ok-1(i) + Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}μk-h(i),  k≥2    (18)

From Equation (18), Expression (13) for the innovation is directly obtained and, applying the OPL to express its covariance matrix as Πk(i) = E[yk(i)yk(i)^T] - E[ŷk/k-1(i)ŷk/k-1(i)^T], the following identity holds:

Πk(i) = Σky(i) - H̄Ak(i)E[Ok-1(i)ŷk/k-1(i)^T] - Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}E[μk-h(i)ŷk/k-1(i)^T],  k≥2;  Π1(i) = Σ1y(i)

Now, using again Equation (18), it is deduced from Expression (12) that E[Ok-1(i)ŷk/k-1(i)^T] = H̄Bk(i)^T - Jk(i) and, since E[ŷk/k-1(i)μk-h(i)^T] = H̄Ak(i)E[Ok-1(i)μk-h(i)^T] + Wk,k-h(i) and E[Ok-1(i)μk-h(i)^T] = Jk-h(i), h=1,2, Expression (14) for Πk(i) is obtained.

To complete the proof, the expressions for Wk,k-h(i) = E[wk(i)μk-h(i)^T], h=1,2, with wk(i) given in Equation (5), are derived taking into account that wk(i) is uncorrelated with yh(i), h≤k-3. Consequently, Wk,k-2(i) = E[wk(i)yk-2(i)^T], and Expression (16) is directly obtained from Equations (1), (2) and (5), using the hypotheses stated on the model. Next, using Equation (4) for ŷk-1/k-2(i) in Wk,k-1(i) = E[wk(i)yk-1(i)^T] - E[wk(i)ŷk-1/k-2(i)^T], we have:

Wk,k-1(i) = E[wk(i)yk-1(i)^T] - Wk,k-2(i)Πk-2(i)^{-1}(E[yk-1(i)μk-2(i)^T])^T    (19)

To compute the first expectation involved in this formula, we express wk(i) = yk(i) - (1-γ̄k(i))Hk(i)xk - γ̄k(i)H̄k-1(i)xk-1 and we apply the OPL to rewrite E[xsyk-1(i)^T] = E[x̂s/k-1(i)yk-1(i)^T], s=k,k-1, thus obtaining that E[wk(i)yk-1(i)^T] = Σk,k-1y(i) - H̄Ak(i)E[Ok-1(i)yk-1(i)^T]; then, by expressing E[Ok-1(i)yk-1(i)^T] = E[Ok-1(i)μk-1(i)^T] + E[Ok-1(i)ŷk-1/k-2(i)^T] and using Equations (12) and (18), it follows that E[wk(i)yk-1(i)^T] = Σk,k-1y(i) - H̄Ak(i)H̄Bk-1(i)^T. The second expectation in Equation (19) is easily computed taking into account that, from the OPL, it is equal to E[ŷk-1/k-2(i)μk-2(i)^T], and using Equation (18). This completes the proof. ☐
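To make the algorithm of Theorem 1 concrete, the following Python sketch implements the recursion (8)–(16) for a scalar signal and a single sensor, under simplifying assumptions of our own: a deterministic measurement matrix Hk = h, fully independent Bernoulli delays, and stationary one-step correlated noise; the signal covariance factors are those of the example in Section 5. Since the innovation approach is an exact Gram–Schmidt orthogonalization, the recursive filter must reproduce the direct batch LS projection computed from the same second-order moments, which the final loop verifies.

```python
import numpy as np

# Scalar, single-sensor sketch of Theorem 1 (sensor index dropped).
# Illustrative assumptions: H_k = h deterministic, independent Bernoulli
# delays with mean gb[k], stationary noise moments R0 = R_k, R1 = R_{k,k-1}.
A = lambda k: 1.025641 * 0.95 ** k          # covariance factors (H1)
B = lambda k: 0.95 ** (-k)
h, R0, R1, K = 0.9, 1.0, 0.5, 6
gb = [0.0, 0.0] + [0.3] * (K - 1)           # gb[k] = E[gamma_k]; gamma_1 absent -> 0
g2 = lambda k, s: gb[k] if k == s else gb[k] * gb[s]   # E[gamma_k gamma_s]

Rn = lambda k, s: R0 if k == s else (R1 if abs(k - s) == 1 else 0.0)
Sz = lambda k, s: h * A(max(k, s)) * B(min(k, s)) * h + Rn(k, s)   # Eq. (3)

def Sy(k, s):                                # E[y_k y_s], Eq. (3)
    if k < s:
        return Sy(s, k)
    return ((1 - gb[k] - gb[s] + g2(k, s)) * Sz(k, s)
            + (gb[s] - g2(k, s)) * Sz(k, s - 1)
            + (gb[k] - g2(k, s)) * Sz(k - 1, s)
            + g2(k, s) * Sz(k - 1, s - 1))

HA = lambda k: (1 - gb[k]) * h * A(k) + gb[k] * h * A(k - 1)       # Eq. (7)
HB = lambda k: (1 - gb[k]) * h * B(k) + gb[k] * h * B(k - 1)

y = [None, 0.7, -1.1, 0.4, 1.6, -0.3, 0.9]   # arbitrary observed values
J, Pi, mu, W = {}, {}, {}, {}
O, r = {0: 0.0}, {0: 0.0}
for k in range(1, K + 1):
    if k == 2:
        W[(2, 1)] = Sy(2, 1) - HA(2) * HB(1)                        # Eq. (15)
    if k >= 3:
        W[(k, k - 2)] = gb[k] * (1 - gb[k - 2]) * R1                # Eq. (16)
        W[(k, k - 1)] = (Sy(k, k - 1) - HA(k) * HB(k - 1)
                         - W[(k, k - 2)] / Pi[k - 2]
                         * (HA(k - 1) * J[k - 2] + W[(k - 1, k - 2)]))
    hs = range(1, min(k - 1, 2) + 1)          # i = 1, ..., (k-1) ^ 2
    J[k] = HB(k) - r[k - 1] * HA(k) - sum(
        J[k - i] / Pi[k - i] * W[(k, k - i)] for i in hs)           # Eq. (12)
    mu[k] = y[k] - HA(k) * O[k - 1] - sum(
        W[(k, k - i)] / Pi[k - i] * mu[k - i] for i in hs)          # Eq. (13)
    Pi[k] = Sy(k, k) - HA(k) * (HB(k) - J[k]) - sum(
        W[(k, k - i)] / Pi[k - i] * (HA(k) * J[k - i] + W[(k, k - i)])
        for i in hs)                                                # Eq. (14)
    O[k] = O[k - 1] + J[k] / Pi[k] * mu[k]                          # Eq. (10)
    r[k] = r[k - 1] + J[k] ** 2 / Pi[k]                             # Eq. (11)

xf = {k: A(k) * O[k] for k in range(1, K + 1)}                      # Eq. (8)
P = {k: A(k) * (B(k) - A(k) * r[k]) for k in range(1, K + 1)}       # Eq. (9)

# Batch check: the recursion must equal the direct LS projection.
for k in range(1, K + 1):
    S = np.array([[Sy(a, b) for b in range(1, k + 1)] for a in range(1, k + 1)])
    c = np.array([A(k) * HB(s) for s in range(1, k + 1)])   # E[x_k y_s]
    w = np.linalg.solve(S, c)
    assert abs(w @ y[1:k + 1] - xf[k]) < 1e-8
    assert abs(A(k) * B(k) - w @ c - P[k]) < 1e-8
```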

3.2. Derivation of the Distributed LS Fusion Linear Filter

As mentioned previously, a matrix-weighted linear fusion filter is now generated from the local filters by applying the LS optimality criterion. The distributed fusion filter at any time k is hence designed as a product FkX̂k/k, where X̂k/k = (x̂k/k(1)^T,…,x̂k/k(m)^T)^T is the vector constituted by the local filters, and Fk∈R^{nx×mnx} is a matrix such that the mean squared error, E[(xk-FkX̂k/k)^T(xk-FkX̂k/k)], is minimized.

As is known, the solution of this problem is given by Fk^{opt} = E[xkX̂k/k^T]E[X̂k/kX̂k/k^T]^{-1} and, consequently, the proposed distributed filter is expressed as:

x̂k/k(D) = E[xkX̂k/k^T]Σ̂k/k^{-1}X̂k/k    (20)

with Σ̂k/k ≜ E[X̂k/kX̂k/k^T] = (E[x̂k/k(i)x̂k/k(j)^T])i,j=1,…,m.

In view of Equation (20), and since the OPL guarantees that E[xkX̂k/k^T] = (E[x̂k/k(1)x̂k/k(1)^T],…,E[x̂k/k(m)x̂k/k(m)^T]), the derivation of x̂k/k(D) only requires the knowledge of the matrices Σ̂k/k(ij) ≜ E[x̂k/k(i)x̂k/k(j)^T], i,j=1,…,m.

The following theorem provides a recursive algorithm to compute the matrices Σ̂k/k(ij), which determine not only the proposed distributed fusion filter, but also the filtering error covariance matrix, Pk/k(D) = E[(xk-x̂k/k(D))(xk-x̂k/k(D))^T].
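The fusion rule (20) can be illustrated with a two-sensor scalar toy example of our own: for yi = x + ni with independent noises, the local filters span the same linear subspace as the observations themselves, so the matrix-weighted combination of the local filters coincides with the centralized LS estimate based on both sensors.

```python
import numpy as np

# Sketch of the fusion rule (20): xhat(D) = [Sigma(1),...,Sigma(m)] Sigma^{-1} Xhat.
# Toy setting (our assumption): scalar x with unit variance, y_i = x + n_i,
# independent noise variances r1, r2; local LS filters xhat(i) = y_i/(1+r_i).
r1, r2 = 0.5, 1.0
y1, y2 = 0.8, -0.3                        # an arbitrary pair of observations
x1, x2 = y1 / (1 + r1), y2 / (1 + r2)     # local filters

# exact second-order moments of the local filters
S11 = 1 / (1 + r1)                        # E[xhat(1)^2] = E[x xhat(1)]  (OPL)
S22 = 1 / (1 + r2)
S12 = 1 / ((1 + r1) * (1 + r2))           # E[xhat(1) xhat(2)], since E[y1 y2] = 1
Sigma = np.array([[S11, S12], [S12, S22]])
cross = np.array([S11, S22])              # E[x Xhat^T] = [Sigma(1), Sigma(2)]

fused = cross @ np.linalg.solve(Sigma, np.array([x1, x2]))

# The local filters span the same subspace as (y1, y2), so the fused estimate
# equals the centralized LS estimate based on both sensors.
centralized = (y1 / r1 + y2 / r2) / (1 + 1 / r1 + 1 / r2)
assert abs(fused - centralized) < 1e-10
```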

Theorem 2. 

Let X̂k/k = (x̂k/k(1)^T,…,x̂k/k(m)^T)^T denote the vector constituted by the local LS filters given in Theorem 1, and Σ̂k/k = (Σ̂k/k(ij))i,j=1,…,m, with Σ̂k/k(ij) = E[x̂k/k(i)x̂k/k(j)^T]. Then, the distributed filtering estimator, x̂k/k(D), and the error covariance matrix, Pk/k(D), are given by

x̂k/k(D) = [Σ̂k/k(1),…,Σ̂k/k(m)]Σ̂k/k^{-1}X̂k/k,  k≥1    (21)

and

Pk/k(D) = AkBk^T - [Σ̂k/k(1),…,Σ̂k/k(m)]Σ̂k/k^{-1}[Σ̂k/k(1),…,Σ̂k/k(m)]^T,  k≥1    (22)

The matrices Σ̂k/k(ij), i,j=1,…,m, are computed by

Σ̂k/k(ij) = Akrk(ij)Ak^T,  k≥1    (23)

with rk(ij) = E[Ok(i)Ok(j)^T] satisfying

rk(ij) = rk-1(ij) + Jk-1,k(ij)Πk(j)^{-1}Jk(j)^T + Jk(i)Πk(i)^{-1}Jk(ji)^T,  k≥1;  r0(ij) = 0    (24)

where the matrices Jk-1,k(ij) = E[Ok-1(i)μk(j)^T] are given by

Jk-1,k(ij) = [rk-1(i) - rk-1(ij)]H̄Ak(j)^T + Σ_{h=1}^{(k-1)∧2} Jk-h(i)Πk-h(i)^{-1}Wk,k-h(ji)^T - Σ_{h=1}^{(k-1)∧2} Jk-1,k-h(ij)Πk-h(j)^{-1}Wk,k-h(j)^T,  k≥2;  J0,1(ij) = 0    (25)

and Jk,s(ij) = E[Ok(i)μs(j)^T], for s=k-1,k, satisfy

Jk,s(ij) = Jk-1,s(ij) + Jk(i)Πk(i)^{-1}Πk,s(ij),  k≥2;  J1(ij) = J1(i)Π1(i)^{-1}Π1(ij)    (26)

The innovation cross-covariance matrices Πk(ij) = E[μk(i)μk(j)^T] are obtained from

Πk(ij) = Σky(ij) - H̄Ak(i)[H̄Bk(j)^T - Jk(j) + Jk-1,k(ij)] - Σ_{h=1}^{(k-1)∧2} Wk,k-h(ij)Πk-h(j)^{-1}[H̄Ak(j)Jk-h(j) + Wk,k-h(j)]^T - Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}Πk-h,k(ij),  k≥2;  Π1(ij) = Σ1y(ij)    (27)

where Πk,s(ij) = E[μk(i)μs(j)^T], s=k-2,k-1, are given by

Πk,s(ij) = H̄Ak(i)[Js(j) - Jk-1,s(ij)] + Wk,s(ij) - Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}Πk-h,s(ij),  k≥2    (28)

The coefficients Wk,k-h(ij) = E[wk(i)μk-h(j)^T], h=1,2, involved in the above expressions, are computed by

Wk,k-1(ij) = Σk,k-1y(ij) - H̄Ak(i)H̄Bk-1(j)^T - Wk,k-2(ij)Πk-2(j)^{-1}[H̄Ak-1(j)Jk-2(j) + Wk-1,k-2(j)]^T,  k≥3;  W2,1(ij) = Σ2,1y(ij) - H̄A2(i)H̄B1(j)^T    (29)
Wk,k-2(ij) = γ̄k(i)(1-γ̄k-2(j))Rk-1,k-2(ij),  k≥4;  W3,1(ij) = γ̄3(i)R2,1(ij)    (30)

Finally, the matrices Σk,sy(ij) and H̄As(l), H̄Bs(l), s=k-1,k, l=i,j, are given in Equations (3) and (7), respectively.

Proof. 

As discussed previously, Expression (21) is immediately derived from Equation (20), while Equation (22) is obtained from Pk/k(D) = E[xkxk^T] - E[x̂k/k(D)x̂k/k(D)^T], using (H1) and Equation (21). Moreover, Equation (23) for Σ̂k/k(ij) is directly obtained using Equation (8) for the local filters and defining rk(ij) = E[Ok(i)Ok(j)^T].

Next, we derive the recursive formulas for the matrices rk(ij), which clearly satisfy Equation (24) by simply using Equation (10) and defining Js,k(ij) = E[Os(i)μk(j)^T], s=k-1,k.

For subsequent derivations, the following expression of the one-stage predictor of yk(j) based on the observations of sensor i will be used; this expression is obtained from Equation (5), taking into account, as proven in Theorem 1, that x̂k/s(i) = AkOs(i), s=k-1,k, and defining Wk,k-h(ji) ≜ E[wk(j)μk-h(i)^T], h=1,2:

ŷk/k-1(j/i) = H̄Ak(j)Ok-1(i) + Σ_{h=1}^{(k-1)∧2} Wk,k-h(ji)Πk-h(i)^{-1}μk-h(i),  k≥2    (31)

As Expression (31) is a generalization of Equation (18), hereafter we will also refer to it for the local predictors y^k/k-1(i),k2.

By applying the OPL, it is clear that E[Ok-1(i)yk(j)^T] = E[Ok-1(i)ŷk/k-1(j/i)^T] and, consequently, we can rewrite Jk-1,k(ij) = E[Ok-1(i)(ŷk/k-1(j/i) - ŷk/k-1(j))^T]; then, using Equation (31) for both predictors, Expression (25) is easily obtained. Also, Equation (26) for Jk,s(ij), s=k-1,k, is immediate from Equation (10), by simply defining Πk,s(ij) = E[μk(i)μs(j)^T].

To obtain Equation (27), we first apply the OPL to express Πk(ij) = Σky(ij) - E[ŷk/k-1(i/j)ŷk/k-1(j)^T] - E[ŷk/k-1(i)μk(j)^T]. Then, using Equation (31) for ŷk/k-1(i/j) and ŷk/k-1(i), and the definitions of Jk-1,k(ij) and Πk-h,k(ij), we have

E[ŷk/k-1(i/j)ŷk/k-1(j)^T] = H̄Ak(i)E[Ok-1(j)ŷk/k-1(j)^T] + Σ_{h=1}^{(k-1)∧2} Wk,k-h(ij)Πk-h(j)^{-1}E[μk-h(j)ŷk/k-1(j)^T],
E[ŷk/k-1(i)μk(j)^T] = H̄Ak(i)Jk-1,k(ij) + Σ_{h=1}^{(k-1)∧2} Wk,k-h(i)Πk-h(i)^{-1}Πk-h,k(ij)

and Equation (27) is obtained taking into account that E[Ok-1(j)ŷk/k-1(j)^T] = H̄Bk(j)^T - Jk(j) and E[ŷk/k-1(j)μk-h(j)^T] = H̄Ak(j)Jk-h(j) + Wk,k-h(j), as derived in the proof of Theorem 1.

Next, Expression (28) for Πk,s(ij) = E[yk(i)μs(j)^T] - E[ŷk/k-1(i)μs(j)^T], with s=k-2,k-1, is obtained from E[yk(i)μs(j)^T] = H̄Ak(i)Js(j) + Wk,s(ij), and using Equation (31) in E[ŷk/k-1(i)μs(j)^T].

Finally, the reasoning for obtaining the coefficients Wk,k-h(ij)=Ewk(i)μk-h(j)T,h=1,2, is also similar to that of Wk,k-h(i) in Theorem 1, so it is omitted. Then the proof of Theorem 2 is completed. ☐

4. Centralized LS Fusion Linear Filter

In the centralized fusion filtering, the observations of the different sensors are jointly processed at each sampling time to yield the optimal filter of the signal xk, which will be denoted by x̂k/k(C). To carry out this process, at each time k≥1 we will work with the vector constituted by the observations of all sensors, yk = (yk(1)^T,…,yk(m)^T)^T, which, from Equation (2), can be expressed as

yk = (I-Γk)zk + Γkzk-1,  k≥2;  y1 = z1    (32)

where zk = (zk(1)^T,…,zk(m)^T)^T is the vector constituted by the sensor measured outputs given in Equation (1), and Γk = Diag(γk(1),…,γk(m))⊗Inz. Let us note that, analogously to the sensor measured outputs zk(i), the stacked vector zk is a noisy linear function of the signal, with random parameter matrices Hk = (Hk(1)^T,…,Hk(m)^T)^T and noise vk = (vk(1)^T,…,vk(m)^T)^T:

zk = Hkxk + vk,  k≥1    (33)

The processes involved in Equations (32) and (33) satisfy the following properties, which are immediately derived from the model hypotheses (H1)–(H5):

  • (P1) 

    {Hk; k≥1} is a sequence of independent random parameter matrices whose entries have known means and second-order moments.

  • (P2) 

    The noise {vk; k≥1} is a zero-mean sequence with known second-order moments, defined by the matrices Rk,s ≜ (Rk,s(ij))i,j=1,…,m.

  • (P3) 

    The matrices {Γk; k≥2} have known means, Γ̄k ≜ E[Γk] = Diag(γ̄k(1),…,γ̄k(m))⊗Inz, k≥2, and Γk and Γs are independent for |k-s|≥2.

  • (P4) 

    The processes {xk; k≥1}, {Hk; k≥1}, {vk; k≥1} and {Γk; k≥2} are mutually independent.

In view of Equations (32) and (33) and the above properties, the study of the LS linear filtering problem based on the stacked observations, {yk;k1}, is completely similar to that of the local filtering problem carried out in Section 3. Therefore, the centralized filtering algorithm described in the following theorem is derived by an analogous reasoning to that used in Theorem 1 and, hence, its proof is omitted.

Theorem 3. 

The centralized LS linear filter, x^k/k(C), and the corresponding error covariance matrix, Pk/k(C), are given by

x^k/k(C)=AkOk,k1

and

Pk/k(C)=AkBk-AkrkT,k1

where the vectors Ok and the matrices rk=EOkOkT are recursively obtained from

Ok=Ok-1+JkΠk-1μk,k1;O0=0
rk=rk-1+JkΠk-1JkT,k1;r0=0

The matrices Jk=EOkμkT satisfy

Jk=H¯BkT-rk-1H¯AkT-h=1(k-1)2Jk-hΠk-h-1Wk,k-hT,k2;J1=H¯B1T

The innovations, μk, and their covariance matrices, Πk, are given by

μk=yk-H¯AkOk-1-h=1(k-1)2Wk,k-hΠk-h-1μk-h,k2;μ1=y1

and

Π_k = Σ_k^y − H̄A_k (H̄B_k^T − J_k) − Σ_{h=1}^{min(k−1,2)} W_{k,k−h} Π_{k−h}^{−1} (H̄A_k J_{k−h} + W_{k,k−h})^T, k ≥ 2;  Π_1 = Σ_1^y

and the coefficients W_{k,k−h} = E[w_k μ_{k−h}^T], h = 1, 2, verify

W_{k,k−1} = Σ_{k,k−1}^y − H̄A_k H̄B_{k−1}^T − W_{k,k−2} Π_{k−2}^{−1} (H̄A_{k−1} J_{k−2} + W_{k−1,k−2})^T, k ≥ 3;  W_{2,1} = Σ_{2,1}^y − H̄A_2 H̄B_1^T
W_{k,k−2} = Γ̄_k R_{k−1,k−2} (I − Γ̄_{k−2}), k ≥ 4;  W_{3,1} = Γ̄_3 R_{2,1}

In the above formulas, the matrices Σ_k^y and Σ_{k,k−1}^y are computed by Σ_{k,s}^y = (Σ_{k,s}^{y(ij)})_{i,j=1,…,m}, s = k, k−1, with Σ_{k,s}^{y(ij)} given in Equation (3), and H̄C_k = (H̄C_k^{(1)T}, …, H̄C_k^{(m)T})^T, C = A, B, with H̄C_k^{(i)} defined in Equation (7).

5. Numerical Simulation Example

This section is devoted to analyzing the effectiveness of the proposed distributed and centralized filtering algorithms by a simulation example. Let us consider a zero-mean scalar signal process, {x_k; k ≥ 1}, with autocovariance function E[x_k x_s] = 1.025641 × 0.95^{k−s}, s ≤ k, which is factorizable according to (H1) just by taking, for example, A_k = 1.025641 × 0.95^k and B_k = 0.95^{−k}.
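The factorization required by (H1) is immediate to check: A_k B_s = 1.025641 × 0.95^{k−s} for s ≤ k. (Incidentally, this autocovariance is, for instance, that of a stationary AR(1) signal x_{k+1} = 0.95 x_k + w_k with Var(w_k) = 0.1, although the algorithms never use such a state-space model.) A quick numerical check:

```python
# Check the covariance factorization E[x_k x_s] = A_k B_s (s <= k) of hypothesis (H1)
c = 1.025641
A = lambda k: c * 0.95 ** k
B = lambda s: 0.95 ** (-s)
cov = lambda k, s: c * 0.95 ** (k - s)   # autocovariance E[x_k x_s], s <= k

assert all(abs(A(k) * B(s) - cov(k, s)) < 1e-9
           for k in range(1, 60) for s in range(1, k + 1))
```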

The measured outputs of this signal, which are provided by four different sensors, are described by Equation (1):

z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, k ≥ 1, i = 1, 2, 3, 4

where the processes {H_k^{(i)}; k ≥ 1} and {v_k^{(i)}; k ≥ 1}, i = 1, 2, 3, 4, are defined as follows:

  • H_k^{(1)} = 0.8λ_k^{(1)}, H_k^{(2)} = λ_k^{(2)}(0.75 + 0.95ε_k), H_k^{(3)} = 0.8λ_k^{(3)} and H_k^{(4)} = 0.75λ_k^{(4)}, where {ε_k; k ≥ 1} is a zero-mean Gaussian white process with unit variance, and {λ_k^{(i)}; k ≥ 1}, i = 1, 2, 3, 4, are white processes with the following time-invariant probability distributions:
    • For i = 1, 2, λ_k^{(i)} are Bernoulli random variables with P[λ_k^{(i)} = 1] = 0.6.
    • λ_k^{(3)} is uniformly distributed over [0.1, 0.9].
    • P[λ_k^{(4)} = 0] = 0.3, P[λ_k^{(4)} = 0.5] = 0.3, P[λ_k^{(4)} = 1] = 0.4.

    It is also assumed that the sequences {ε_k; k ≥ 1} and {λ_k^{(i)}; k ≥ 1}, i = 1, 2, 3, 4, are mutually independent.

  • The additive noises are defined as v_k^{(i)} = c_i(η_k + η_{k+1}), i = 1, 2, 3, 4, where c_1 = c_4 = 0.75, c_2 = 1, c_3 = 0.5, and {η_k; k ≥ 1} is a zero-mean Gaussian white process with unit variance.

Note that the sequences of random variables {H_k^{(i)}; k ≥ 1}, i = 1, 2, 3, 4, model different types of uncertainty in the measured outputs: missing measurements in sensor 1; both missing measurements and multiplicative noise in sensor 2; and continuous and discrete gain degradation in sensors 3 and 4, respectively. Moreover, it is clear that the additive noises {v_k^{(i)}; k ≥ 1}, i = 1, 2, 3, 4, are only correlated at the same and consecutive time instants, with R_k^{(ij)} = 2c_i c_j and R_{k,k−1}^{(ij)} = c_i c_j, i, j = 1, 2, 3, 4.
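These moments are easy to confirm by simulation. The sketch below (plain Monte Carlo; the sample size and seed are arbitrary) draws the four λ sequences, the multiplicative noise ε and the common noise η, and empirically checks two of the means E[H_k^{(i)}] together with the stated noise covariances:

```python
import numpy as np

# Monte Carlo check of the sensor models (sample size and seed are arbitrary)
rng = np.random.default_rng(0)
N = 200_000
ci = np.array([0.75, 1.0, 0.5, 0.75])

lam1 = rng.binomial(1, 0.6, N)                                  # Bernoulli, sensor 1
lam2 = rng.binomial(1, 0.6, N)                                  # Bernoulli, sensor 2
lam3 = rng.uniform(0.1, 0.9, N)                                 # uniform, sensor 3
lam4 = rng.choice([0.0, 0.5, 1.0], size=N, p=[0.3, 0.3, 0.4])   # discrete, sensor 4
eps = rng.standard_normal(N)
H1, H2, H3, H4 = 0.8 * lam1, lam2 * (0.75 + 0.95 * eps), 0.8 * lam3, 0.75 * lam4
assert abs(H1.mean() - 0.8 * 0.6) < 0.01        # missing measurements, E[H1] = 0.48
assert abs(H4.mean() - 0.75 * 0.55) < 0.01      # discrete gain degradation, E[H4] = 0.4125

# v_k^{(i)} = c_i (eta_k + eta_{k+1}): one-step auto- and cross-correlated noises
eta = rng.standard_normal(N + 1)
v = ci[:, None] * (eta[:-1] + eta[1:])          # row i holds the sequence {v_k^{(i)}}
R0 = v @ v.T / N                                # E[v_k v_k^T]     -> 2 c_i c_j
R1 = v[:, 1:] @ v[:, :-1].T / (N - 1)           # E[v_k v_{k-1}^T] ->   c_i c_j
assert np.allclose(R0, 2 * np.outer(ci, ci), atol=0.05)
assert np.allclose(R1, np.outer(ci, ci), atol=0.05)
```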

Next, according to our theoretical observation model, it is supposed that, at any sampling time k ≥ 2, the data transmissions are subject to random one-step delays with different rates, and that such delays are correlated at consecutive sampling times. More precisely, let us assume that the available measurements y_k^{(i)} are given by Equation (2):

y_k^{(i)} = (1 − γ_k^{(i)}) z_k^{(i)} + γ_k^{(i)} z_{k−1}^{(i)}, k ≥ 2, i = 1, 2, 3, 4

where the variables γ_k^{(i)} modeling this type of correlated random delays are defined using three independent sequences of independent Bernoulli random variables, {θ_k^{(i)}; k ≥ 1}, i = 1, 2, 3, with constant probabilities P[θ_k^{(i)} = 1] = θ̄^{(i)} for all k ≥ 1; specifically, γ_k^{(i)} = θ_{k+1}^{(i)}(1 − θ_k^{(i)}) for i = 1, 2, 3, and γ_k^{(4)} = θ_k^{(1)}(1 − θ_{k+1}^{(1)}).

It is clear that the sensor delay probabilities are time-invariant: γ̄^{(i)} = θ̄^{(i)}(1 − θ̄^{(i)}) for i = 1, 2, 3, and γ̄^{(4)} = γ̄^{(1)}. Moreover, the independence of the sequences {θ_k^{(i)}; k ≥ 1}, i = 1, 2, 3, together with the independence of the variables in each sequence, guarantees that the random variables γ_k^{(i)} and γ_s^{(j)} are independent if |k − s| ≥ 2, for any i, j = 1, 2, 3, 4. Also, it is clear that, at each sensor, the variables {γ_k^{(i)}; k ≥ 2} are correlated at consecutive sampling times, with γ̄_{k,s}^{(ii)} = 0 for i = 1, 2, 3, 4 and |k − s| = 1. Finally, {γ_k^{(4)}; k ≥ 2} is independent of {γ_k^{(i)}; k ≥ 2}, i = 2, 3, but correlated with {γ_k^{(1)}; k ≥ 2} at consecutive sampling times, with γ̄_{k,k−1}^{(14)} = γ̄^{(1)}θ̄^{(1)} and γ̄_{k,k−1}^{(41)} = γ̄^{(1)}(1 − θ̄^{(1)}).

Let us observe that, for each sensor i = 1, 2, 3, 4, if γ_k^{(i)} = 1, then γ_{k+1}^{(i)} = 0; this fact guarantees that, when the measurement at time k is delayed, the measurement available at time k + 1 is well-timed. Therefore, this correlation model avoids the possibility of two consecutive delayed observations at the same sensor. Table 1 shows an example of data transmission in sensor 1 when θ̄^{(1)} = 0.5.

Table 1.

Sensor 1 data transmission.

Time k      1    2    3    4    5    6    7    8    9    10    11    12    13    14    15
θ_k^{(1)}   1    1    1    0    0    1    0    1    1    1     0     0     1     1     –
γ_k^{(1)}   –    0    0    1    0    0    1    0    0    0     1     0     0     0     –
y_k^{(1)}   z_1  z_2  z_3  z_3  z_5  z_6  z_6  z_8  z_9  z_10  z_10  z_12  z_13  z_14  –

Note that if γ_k^{(1)} = 0, there is no delay at time k; i.e., the output z_k^{(1)} is received on time by the fusion center and y_k^{(1)} = z_k^{(1)}. If γ_k^{(1)} = 1, the packet received at time k is the output at time k − 1; i.e., there is a one-step delay and the processed observation is y_k^{(1)} = z_{k−1}^{(1)}. From Table 1, we see that z_4^{(1)}, z_7^{(1)} and z_{11}^{(1)} are lost, while z_3^{(1)}, z_6^{(1)} and z_{10}^{(1)} are received twice.
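The behavior described above can be reproduced with a short simulation. The sketch below (θ̄^{(1)} = 0.5, as in Table 1, with an arbitrary seed and sample size) verifies that two consecutive delays never occur and that the empirical delay rate matches γ̄^{(1)} = θ̄^{(1)}(1 − θ̄^{(1)}) = 0.25:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_bar = 0.5
N = 100_000
theta = rng.binomial(1, theta_bar, N + 1)     # theta_k^{(1)}, k = 1, ..., N+1

# gamma_k^{(1)} = theta_{k+1}^{(1)} (1 - theta_k^{(1)}), k = 2, ..., N
gamma = theta[2:] * (1 - theta[1:-1])

# a delay at time k forces a well-timed measurement at time k+1
assert np.all(gamma[:-1] * gamma[1:] == 0)

# empirical delay rate vs. gamma_bar = theta_bar (1 - theta_bar) = 0.25
assert abs(gamma.mean() - theta_bar * (1 - theta_bar)) < 0.01
```

The first assertion holds deterministically: γ_k^{(1)} = 1 forces θ_{k+1}^{(1)} = 1, which makes γ_{k+1}^{(1)} = 0.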

To illustrate the feasibility and analyze the effectiveness of the proposed estimators, the algorithms were implemented in MATLAB and one hundred iterations were run. To measure the estimation accuracy, the error variances of both the distributed and centralized fusion estimators were calculated for several values of the delay probabilities at the different sensors, obtained from different values of θ̄^{(i)}. Let us observe that the delay probabilities, γ̄^{(i)} = θ̄^{(i)}(1 − θ̄^{(i)}), i = 1, 2, 3, are the same if 1 − θ̄^{(i)} is used instead of θ̄^{(i)}; for this reason, only the case θ̄^{(i)} ≤ 0.5 will be considered here.

First, the error variances of the local, distributed and centralized filters will be compared considering the same delay probabilities for the four sensors. Figure 1 displays these error variances when θ̄^{(i)} = 0.3, i = 1, 2, 3, which means that the delay probability at every sensor is γ̄^{(i)} = 0.21. This figure shows that the error variances of the distributed fusion filter are lower than those of every local estimator, but slightly greater than those of the centralized filter. However, this slight difference is compensated by the fact that the distributed fusion structure reduces the computational cost and provides better robustness and fault tolerance. Analogous results are obtained for other values of θ̄^{(i)}. Indeed, Figure 2 displays the distributed and centralized filtering error variances for θ̄^{(i)} = 0.1, 0.2, 0.3, 0.4, 0.5, which lead to the delay probabilities γ̄^{(i)} = 0.09, 0.16, 0.21, 0.24, 0.25, respectively. In this figure, both graphs (corresponding to the distributed and centralized fusion filters, respectively) show that the performance of the filters worsens as θ̄^{(i)} increases; this was expected, since an increase of θ̄^{(i)} yields a rise in the delay probabilities. This figure also confirms that both methods, distributed and centralized, have approximately the same accuracy, thus corroborating the previous results.

Figure 1.

Local, distributed and centralized filtering error variances for θ¯(i)=0.3, i=1,2,3.

Figure 2.

(a) Distributed filtering error variances and (b) centralized filtering error variances for different values of θ¯(i).

In order to carry out a further discussion on the effects of the sensor random delays, Figure 3 shows a comparison of the filtering error variances in the following cases:

  • Case I: error variances versus θ¯(1), when θ¯(2)=θ¯(3)=0.5. In this case, as mentioned above, the values θ¯(1)=0.1,0.2,0.3,0.4,0.5, lead to the values γ¯(1)=γ¯(4)=0.09,0.16,0.21,0.24,0.25, respectively, for the delay probabilities of sensors 1 and 4, whereas the delay probabilities of sensors 2 and 3 are constant and equal to 0.25.

  • Case II: error variances versus θ¯(1), when θ¯(2)=θ¯(1) and θ¯(3)=0.5. Varying θ¯(1) as in Case I, the delay probabilities of sensors 1, 2 and 4 are equal and take the aforementioned values, whereas the delay probability of sensor 3 is constant and equal to 0.25.

  • Case III: error variances versus θ¯(1), when θ¯(2)=θ¯(3)=θ¯(1). Now, as in Figure 2, the delay probabilities of the four sensors are equal, and they all take the aforementioned values.

Figure 3.

(a) Distributed filtering error variances and (b) centralized filtering error variances at k=100, versus θ¯(1).

Since the behavior of the error variances is analogous at all the iterations, only the results at a specific time (k = 100) are displayed in Figure 3. This figure shows that the performance of the distributed and centralized estimators is indeed influenced by the probabilities θ̄^{(i)} and, as expected, better estimations are obtained as θ̄^{(i)} becomes smaller, since, for θ̄^{(i)} ≤ 0.5, the delay probabilities γ̄^{(i)} decrease with θ̄^{(i)}. Moreover, the error variances in Case III are lower than those in Case II which, in turn, are lower than those in Case I. This is due to the fact that, while the delay probabilities of the four sensors vary in Case III, only two and three sensors vary their delay probabilities in Cases I and II, respectively, while the constant delay probabilities of the remaining sensors take their greatest possible value (0.25). Hence, this figure confirms that the estimation accuracy improves as the delay probabilities decrease.

6. Conclusions

In this paper, distributed and centralized fusion filtering algorithms have been designed in multi-sensor systems from measured outputs with random parameter matrices and correlated noises, assuming correlated random delays in transmissions. The main outcomes and results can be summarized as follows:

  • Information on the signal process: our approach, based on covariance information, does not require the evolution model generating the signal process to design the proposed distributed and centralized filtering algorithms; nonetheless, they are also applicable to the conventional formulation using the state-space model.

  • Signal uncertain measured outputs: random measurement matrices and cross-correlation between the different sensor noises are considered in the measured outputs, thus providing a unified framework to address different network-induced phenomena, such as missing measurements or sensor gain degradation, along with correlated measurement noises.

  • Random one-step transmission delays: the fusion estimation problems are addressed assuming random one-step delays in the transmission of the outputs to the fusion center through the network communication channels; the delays have different characteristics at each sensor and are assumed to be correlated and cross-correlated at consecutive sampling times. This correlation assumption covers many situations where the common assumption of independent delays is not realistic; for example, networked systems with stand-by sensors for the immediate replacement of a failed unit, which avoids the possibility of two successive delayed observations.

  • Fusion filtering algorithms: firstly, recursive algorithms for the local LS linear signal filters based on the measured output data coming from each sensor have been designed by an innovation approach; the computational procedure of the local algorithms is very simple and suitable for online applications. After that, the matrix-weighted sum that minimizes the mean-squared estimation error is proposed as the distributed fusion estimator. Also, using covariance information, a recursive centralized LS linear filtering algorithm, with a structure analogous to that of the local algorithms, is proposed. The accuracy of the proposed fusion estimators, obtained under the LS optimality criterion, is measured by the error covariance matrices, which can be computed offline, as they do not depend on the current observed data.

  • Simulations: a numerical simulation example has illustrated the usefulness of the proposed algorithms for the estimation of a scalar signal. Error variance comparisons have shown that both the distributed and centralized fusion filters outperform the local ones, and have revealed a slight superiority of the centralized fusion estimator over the distributed one. The effects of the delays on the estimators' performance have also been analyzed through the error variances. This example has also highlighted the applicability of the proposed algorithms to multi-sensor systems with different stochastic uncertainties, which can be dealt with using the observation model with random measurement matrices considered in this paper.

Acknowledgments

This research is supported by Ministerio de Economía y Competitividad and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2014-52291-P).

Abbreviations

The following abbreviations are used in this manuscript:

LS Least-Squares
OPL Orthogonal Projection Lemma

Author Contributions

All the authors contributed equally to this work. Raquel Caballero-Águila, Aurora Hermoso-Carazo and Josefa Linares-Pérez provided original ideas for the proposed model and collaborated in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation results; and the paper was also written and reviewed cooperatively.

Conflicts of Interest

The authors declare no conflict of interest.


Articles from Sensors (Basel, Switzerland) are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
