Sensors (Basel, Switzerland). 2018 Apr 5;18(4):1103. doi: 10.3390/s18041103

The Cramér–Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations

Zhiguo Wang 1, Xiaojing Shen 1,*, Ping Wang 1, Yunmin Zhu 1
PMCID: PMC5948939  PMID: 29621158

Abstract

This paper considers the posterior Cramér–Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. To overcome the difficulties caused by the uncertainty, we investigate two methods to derive the posterior Cramér–Rao bound. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it requires the computation of a complex integral with respect to the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks. Inspired by the expectation maximization algorithm, the second method introduces 0–1 latent variables to handle the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. A new Cramér–Rao bound is then obtained as the limit of the Cramér–Rao bound of the continuous system. It avoids the complex integral and thus reduces the computational burden. Based on the new posterior Cramér–Rao bound, the optimal solution of the sensor selection problem can be derived analytically, so it can be used for sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

Keywords: Cramér–Rao bound, sensor selection, uncertain measurement, target tracking

1. Introduction

In practice, we often encounter sensors whose measurements are uncertain, owing to random interference, natural interruptions or sensor failures. Using the model parameters without accounting for this uncertainty is not viable, and many researchers have studied state estimation with uncertain measurements, such as [1,2,3,4]. In this paper, we consider uncertainty caused by occlusions, i.e., the sensors may not be able to observe the target when it is blocked by obstacles [5]. For linear dynamic systems involving uncertainty, the authors of [6,7] use a Kalman filter to track the target. For a nonlinear uncertain dynamic system, however, the optimal estimate is difficult to obtain, and we are particularly interested in measuring the efficiency of suboptimal estimators. For this purpose, it is natural to compare against a lower bound on the estimation error, which gives an indication of the performance limitations. Moreover, it can be used to determine whether imposed performance requirements are realistic.

The most popular lower bound is the well-known Cramér–Rao bound (CRB). In time-invariant statistical models, the estimated parameter vector is usually assumed to be real-valued (non-random), and the lower bound is given by the inverse of the Fisher information matrix. When we deal with time-varying systems, the estimated parameter vector is modeled as random. A lower bound analogous to the CRB for random parameters is derived in [8]; this bound is known as the Van Trees version of the CRB, or the posterior CRB (PCRB). The underlying static random system needs to satisfy the regularity condition, namely the absolute integrability of the first two derivatives of all related probability density functions. The first derivation of a sequential PCRB applicable to discrete-time dynamic system filtering is given in [9] and then extended in [10,11,12]. The most general form of the sequential PCRB for discrete-time nonlinear systems is presented in [13]. Together with the original static form of the CRB, these results serve as a basis for a large number of applications [14,15,16].

Most papers on the PCRB do not consider uncertainty in the dynamic systems. When the sensors have uncertain measurements, we need to consider the influence of the uncertainty [17,18]. A CRB for target tracking with detection probability smaller than one is presented in [19,20]. If the uncertain measurement is prone to discretely-distributed faults, a Cramér–Rao-type bound is shown in [21]. The authors in [22,23] model the uncertainty with a Gaussian mixture probabilistic model, where the sensor observation is assumed to contain only noise if the sensor cannot sense the target. Therefore, we aim to derive a recursive PCRB based on the uncertain model of the Gaussian mixture distribution.

The PCRB requires the Fisher information, which is obtained from the derivatives of the log-likelihood function of the Gaussian mixture model, and this is much more difficult than in the case of a single Gaussian distribution. The reason is that a summation occurs inside the logarithm, so the PCRB of the Gaussian mixture model requires a complex integral with respect to the joint probability density function of the sensor measurements and the target state. These difficulties motivate us to investigate another approach to derive the PCRB.

In large wireless sensor networks (WSNs), sensors are battery-powered devices with limited signal processing capabilities [24,25]. In such situations, it is inefficient to utilize all the sensors, including uninformative ones that hardly help the tracking task but still consume resources. This issue has been addressed via sensor selection schemes, whose goal is to select the best non-redundant set of sensors for the tracking task while satisfying the resource constraints [26,27]. Previous research [28,29] on sensor selection assumes that the target tracking process has no interruptions. Since the sensor observations here are uncertain, we need to consider sensor selection based on the proposed PCRB.

In this paper, we use two methods to derive the PCRB and thereby overcome the difficulties caused by the uncertainty. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it requires the computation of a complex integral with respect to the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks, and this PCRB is ill-suited as a selection criterion. In order to reduce the computational burden and handle sensor selection in large-scale sensor networks, our contributions are as follows:

  • Inspired by the expectation maximization algorithm, we introduce 0–1 latent variables to handle the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables; a new Cramér–Rao bound is then obtained as the limit of the Cramér–Rao bound of the continuous system. This bound avoids the complex integral and has a lower computational burden.

  • Based on the proposed posterior Cramér–Rao bound, the sensor selection problem for the nonlinear uncertain dynamic system can be solved efficiently, and its optimal solution can be derived analytically. Thus, it can be used for sensor selection in large-scale sensor networks.

The remainder of this paper is organized as follows. The uncertain system model is defined and the problem is formulated in Section 2. The PCRB for the dynamic system with uncertain observations is derived and justified in Section 3. The optimal sensor selection with uncertain observations is presented in Section 4. Two numerical examples are given in Section 5. Finally, conclusions are offered in the final section.

2. Problem Formulation

Consider an L-sensor nonlinear dynamic system with uncertain observations [5,30]:

x_k = f_k(x_{k-1}) + w_k, (1)

y_k^i = { h_k^i(x_k) + v_k^i,  with probability p_k^i,
        { v_k^i,               with probability 1 - p_k^i,    i = 1, 2, ..., L, (2)

where p_k^i is the sensing probability of sensor i, x_k is the state of the system at time k, y_k^i is the measurement at the ith sensor, i = 1, ..., L, f_k(x_{k-1}) is the nonlinear state function, and h_k^i(x_k) is the nonlinear measurement function of x_k at the ith sensor. w_k and v_k^i are the state noise and the measurement noise, respectively, and they are mutually independent. v_k^i is assumed to be independent across time steps and across sensors. The measurement information of the ith sensor is denoted by Y_k^i = {y_1^i, y_2^i, ..., y_k^i}.

Assume that w_k and v_k^i are white Gaussian noises distributed as N(0, Q_k) and N(0, R_k^i), i = 1, ..., L, respectively, where Q_k and R_k^i are the corresponding covariance matrices. We also assume that the initial state x_0 ~ N(x̂_0, Σ_0) and that, given x_k, the measurement y_k^i follows the Gaussian distribution N(h_k^i(x_k), R_k^i) with probability p_k^i and the Gaussian distribution N(0, R_k^i) with probability 1 - p_k^i, i.e.,

p(y_k^i | x_k) = p_k^i N(h_k^i(x_k), R_k^i) + (1 - p_k^i) N(0, R_k^i). (3)

Obviously, this conditional probability density function is a Gaussian mixture distribution, for which the PCRB is hard to calculate. This difficulty motivates us to introduce a hidden state variable, drawing on the idea of the expectation maximization (EM) algorithm [31].
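As a concrete illustration, the mixture density (3) can be evaluated numerically as a convex combination of two Gaussian densities. The following is a minimal sketch; the function names and the scalar test values are illustrative, not from the paper.

```python
import numpy as np

def gaussian_pdf(y, mean, cov):
    """Multivariate Gaussian density N(mean, cov) evaluated at y."""
    y, mean = np.atleast_1d(y).astype(float), np.atleast_1d(mean).astype(float)
    cov = np.atleast_2d(cov).astype(float)
    diff = y - mean
    norm = np.sqrt((2 * np.pi) ** y.size * np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm)

def mixture_likelihood(y, h_x, R, p):
    """p(y | x) = p * N(h(x), R) + (1 - p) * N(0, R), as in Equation (3)."""
    zero = np.zeros_like(np.atleast_1d(h_x), dtype=float)
    return p * gaussian_pdf(y, h_x, R) + (1 - p) * gaussian_pdf(y, zero, R)
```

With p = 1 the mixture collapses to the ordinary Gaussian measurement model; the difficulty discussed below arises only for 0 < p < 1, when the logarithm of this sum has no closed-form expectation.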

We introduce the 0–1 hidden state variables I_k^i, i = 1, 2, ..., L, which indicate whether the dynamic system has uncertainty: if I_k^i = 1, then y_k^i = h_k^i(x_k) + v_k^i; if I_k^i = 0, then y_k^i = v_k^i. We can now rewrite the nonlinear systems (1) and (2) as follows:

x_k = f_k(x_{k-1}) + w_k,   I_k^i = 0 · I_{k-1}^i + w̃_k^i, (4)
y_k^i = I_k^i · h_k^i(x_k) + v_k^i,   i = 1, 2, ..., L. (5)

Then, the compact form for Equations (4) and (5) can be written as follows:

x̆_k = F_k(x̆_{k-1}) + W_k, (6)
y_k^i = I_k^i · h_k^i(x_k) + v_k^i,   i = 1, 2, ..., L, (7)

where x̆_k = [x_k^T, I_k^T]^T, W_k = [w_k^T, w̃_k^1, ..., w̃_k^L]^T, F_k(x̆_k) = [f_k^T(x_k), 0^T]^T, I_k = [I_k^1, ..., I_k^L]^T, and w_k ~ N(0, Q_k). w̃_k^i ~ B(1, p_k^i), a Bernoulli distribution with parameter p_k^i, i.e., P(w̃_k^i = 1) = p_k^i and P(w̃_k^i = 0) = 1 - p_k^i. The process noise is independent of the uncertainty, so we assume that w_k and w̃_k^i, i = 1, 2, ..., L, are mutually independent.

Since the PCRB is an important criterion for sensor selection, we derive two PCRBs of the uncertain dynamic systems (1)-(2) and (6)-(7) in Section 3. The former is exact, but it is difficult to compute. The latter is derived by introducing hidden state variables, which avoids the complex integral and reduces the computational burden. Based on the second PCRB, we then obtain the analytical optimal solution of the sensor selection problem in Section 4, so that it can be applied to large-scale sensor selection problems for uncertain dynamic systems.

3. The Posterior Cramér–Rao Bound with Uncertain Observations

In this section, we discuss two methods to calculate the PCRB for multiple sensors. The first method is based on the nonlinear dynamic system with uncertain observations (1) and (2) and the Gaussian mixture model [5,13,15,32]. The other approach is based on the nonlinear dynamic system (6) and (7), motivated by the EM algorithm [33].

Let θ be an r-dimensional random parameter to be estimated, let z represent a vector of measured data, let p(z, θ) be the joint probability density of the pair (z, θ), and let g(z) be a function of z that is an estimate of θ. Let ∇ and Δ be the operators of the first and second-order partial derivatives, respectively:

∇_η = [∂/∂η_1, ..., ∂/∂η_r]^T,   Δ_η^ξ = ∇_η ∇_ξ^T.

The PCRB on the estimation error has the form

P = E{[g(z) - θ][g(z) - θ]^T} ≥ J^{-1}, (8)

where J = E{-Δ_θ^θ log p(z, θ)} is the (Fisher) information matrix introduced by Van Trees [8]. For example, if the posterior distribution of θ conditioned on z is Gaussian with mean θ̄_z and covariance matrix Σ_z, then the information matrix in (8) reads J = E{Σ_z^{-1}}.

Assume now that the parameter θ is decomposed into two parts as θ = [θ_α^T, θ_β^T]^T, and that the information matrix J is correspondingly divided into blocks

J = [ J_αα  J_αβ
      J_βα  J_ββ ].

Then, it can easily be shown that the covariance of the estimate of θ_β is lower bounded by the lower-right block of J^{-1}, i.e.,

P_β = E{[g_β(z) - θ_β][g_β(z) - θ_β]^T} ≥ [J_ββ - J_βα J_αα^{-1} J_αβ]^{-1},

where we assume that J_αα^{-1} exists. Denote J^β = J_ββ - J_βα J_αα^{-1} J_αβ, which is called the information submatrix for θ_β.
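The information submatrix above is the Schur complement of the block J_αα. A minimal numerical sketch (the function name is ours, not from the paper):

```python
import numpy as np

def information_submatrix(J, na):
    """Information submatrix J^beta = J_bb - J_ba J_aa^{-1} J_ab, where the
    first na rows/columns of J form the nuisance block alpha."""
    Jaa, Jab = J[:na, :na], J[:na, na:]
    Jba, Jbb = J[na:, :na], J[na:, na:]
    return Jbb - Jba @ np.linalg.solve(Jaa, Jab)
```

By the block matrix inversion formula, the inverse of this Schur complement equals exactly the lower-right block of J^{-1}, which is the bound stated above.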

Now, for the nonlinear dynamic system with uncertain observations (1) and (2), the following proposition gives a method to compute the information submatrix J_k recursively.

Proposition 1.

The Fisher information submatrix J_k for estimating the state vectors {x_k} obeys the recursion:

J_{k+1} = D_k^{22} - D_k^{21} (J_k + D_k^{11})^{-1} D_k^{12}, (9)
J_0 = E{-Δ_{x_0}^{x_0} log p(x_0)}, (10)

with

D_k^{11} = E{[∇_{x_k} f_k^T(x_k)] Q_k^{-1} [∇_{x_k} f_k^T(x_k)]^T}, (11)
D_k^{12} = -E{∇_{x_k} f_k^T(x_k)} Q_k^{-1}, (12)
D_k^{21} = (D_k^{12})^T, (13)
D_k^{22} = Q_k^{-1} + Σ_{i=1}^L E{-Δ_{x_{k+1}}^{x_{k+1}} log p(y_{k+1}^i | x_{k+1})}, (14)

where

p(y_{k+1}^i | x_{k+1}) = p_{k+1}^i N(h_{k+1}^i(x_{k+1}), R_{k+1}^i) + (1 - p_{k+1}^i) N(0, R_{k+1}^i). (15)

Proof. 

Equations (1) and (2) together with p(x_0) determine the joint probability distribution of X_k = [x_0, x_1, ..., x_k] and Y_k = [y_0, y_1, ..., y_k], where y_k = (y_k^1, y_k^2, ..., y_k^L)^T:

p(X_k, Y_k) = p(X_{k-1}, Y_{k-1}) · p(x_k | X_{k-1}, Y_{k-1}) · p(y_k | X_k, Y_{k-1})
            = p(X_{k-1}, Y_{k-1}) · p(x_k | x_{k-1}) · p(y_k | x_k)
            = p(X_{k-1}, Y_{k-1}) · p(x_k | x_{k-1}) · ∏_{i=1}^L p(y_k^i | x_k). (16)

The conditional probability densities p(x_k | x_{k-1}) and p(y_k^i | x_k) can be calculated from Equations (1) and (2), respectively. Denoting p_k = p(X_k, Y_k), Equation (16) yields the following formula for p_{k+1}:

p_{k+1} = p_k · p(x_{k+1} | x_k) · ∏_{i=1}^L p(y_{k+1}^i | x_{k+1}). (17)

Therefore,

log p_{k+1} = log p_k + log p(x_{k+1} | x_k) + Σ_{i=1}^L log p(y_{k+1}^i | x_{k+1}). (18)

If we divide X_k into X_k = [X_{k-1}^T, x_k^T]^T, then

J(X_k) = E{-Δ_{X_k}^{X_k} log p_k}
       = [ E{-Δ_{X_{k-1}}^{X_{k-1}} log p_k}   E{-Δ_{X_{k-1}}^{x_k} log p_k}
           E{-Δ_{x_k}^{X_{k-1}} log p_k}       E{-Δ_{x_k}^{x_k} log p_k}     ]
       ≜ [ A_k    B_k
           B_k^T  C_k ]. (19)

The information submatrix Jk for xk can be obtained as follows:

J_k = C_k - B_k^T A_k^{-1} B_k. (20)

Moreover, let X_{k+1} = [X_{k-1}^T, x_k^T, x_{k+1}^T]^T; then, by Equation (18), the posterior information matrix for X_{k+1} can be written in the following block form:

J(X_{k+1}) = [ A_k    B_k              0
               B_k^T  C_k + D_k^{11}   D_k^{12}
               0      D_k^{21}         D_k^{22} ], (21)

where 0 stands for zero blocks of appropriate dimensions, and D_k^{11}, D_k^{12}, D_k^{22} are calculated as follows:

D_k^{11} = E{-Δ_{x_k}^{x_k} log p(x_{k+1} | x_k)},
D_k^{12} = E{-Δ_{x_k}^{x_{k+1}} log p(x_{k+1} | x_k)} = (D_k^{21})^T,
D_k^{22} = E{-Δ_{x_{k+1}}^{x_{k+1}} log p(x_{k+1} | x_k)} + Σ_{i=1}^L E{-Δ_{x_{k+1}}^{x_{k+1}} log p(y_{k+1}^i | x_{k+1})}.

Then, the information submatrix J_{k+1} for x_{k+1} can be computed as

J_{k+1} = D_k^{22} - [0  D_k^{21}] [ A_k    B_k
                                     B_k^T  C_k + D_k^{11} ]^{-1} [ 0
                                                                    D_k^{12} ]
        = D_k^{22} - D_k^{21} [C_k + D_k^{11} - B_k^T A_k^{-1} B_k]^{-1} D_k^{12}. (22)

Based on the definition of J_k in (20), we obtain the desired recursion (9). The state noise and the measurement noise are Gaussian with zero mean and invertible covariance matrices Q_k and R_k^i, i = 1, ..., L, respectively, and the dynamic system has uncertain observations. From these assumptions and Equation (3), it follows that

log p(x_{k+1} | x_k) = c_1 - (1/2) [x_{k+1} - f_k(x_k)]^T Q_k^{-1} [x_{k+1} - f_k(x_k)],
log p(y_{k+1}^i | x_{k+1}) = log{ p_{k+1}^i N(h_{k+1}^i(x_{k+1}), R_{k+1}^i) + (1 - p_{k+1}^i) N(0, R_{k+1}^i) },

where c_1 is a constant. Therefore, D_k^{11}, D_k^{12}, D_k^{22} can be simplified to (11)-(14). ☐

From Equations (14) and (15), we see that the summation appears inside the logarithm, and the computation of D_k^{22} involves the joint probability density function of the sensor measurements y_{k+1} and the target state x_{k+1}; hence, D_k^{22} is not easy to calculate. These difficulties motivate us to study another approach to derive the PCRB.
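To make this burden concrete, the mixture term in (14) can be estimated in a scalar toy setting by Monte Carlo integration with a finite-difference second derivative. This is only an illustrative sketch under our own simplifying assumption h(x) = x; the paper itself does not prescribe this procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mix(y, x, p, R):
    """Scalar version of (15): log[p*N(y; h(x), R) + (1-p)*N(y; 0, R)],
    with the illustrative choice h(x) = x."""
    g1 = np.exp(-0.5 * (y - x) ** 2 / R)
    g0 = np.exp(-0.5 * y ** 2 / R)
    return np.log((p * g1 + (1 - p) * g0) / np.sqrt(2.0 * np.pi * R))

def fisher_term_mc(x_samples, p, R, n_y=5000, eps=1e-4):
    """Monte Carlo estimate of E{-d^2/dx^2 log p(y|x)} from (14), averaging
    over samples of the joint density of (x, y); the second derivative is
    approximated by a central finite difference in x."""
    total, count = 0.0, 0
    for x in x_samples:
        hit = rng.random(n_y) < p                          # sensor saw target?
        y = np.where(hit, x, 0.0) + np.sqrt(R) * rng.standard_normal(n_y)
        d2 = (log_mix(y, x + eps, p, R) - 2.0 * log_mix(y, x, p, R)
              + log_mix(y, x - eps, p, R)) / eps ** 2
        total += -d2.sum()
        count += n_y
    return total / count
```

With p = 1 the mixture collapses and the estimate approaches the exact Gaussian value 1/R; for p < 1 such a fresh sampling pass is needed for every time step and every sensor, which is precisely the cost the text describes.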

Based on the equivalence between the systems (1)-(2) and (6)-(7), we can derive a PCRB for the dynamic systems (6) and (7) by introducing the hidden variable I_k, and the new PCRB may be easier to compute. Since the second derivative with respect to the discrete augmented variable I_k does not exist, we bring in a continuous random variable Ĩ_k to approximate the 0–1 variable I_k. The augmented state vector x̆_k = [x_k^T, I_k^T]^T is replaced by x̃_k = [x_k^T, Ĩ_k^T]^T. Therefore, the new system can be expressed as follows:

x_k = f_k(x_{k-1}) + w_k, (23)
Ĩ_k^i = 0 · Ĩ_{k-1}^i + w̃_k^i, (24)
y_k^i = Ĩ_k^i · h_k^i(x_k) + v_k^i,   i = 1, 2, ..., L. (25)

Lemma 1

([21]). If I_k^i ~ B(1, p_k^i) and Ĩ_k^i ~ p_k^i N(1, σ^2) + (1 - p_k^i) N(0, σ^2), i = 1, ..., L, then the limit of Ĩ_k as σ → 0 is the state variable I_k, i.e., lim_{σ→0} Ĩ_k = I_k.

Let J̃_k represent the Fisher information matrix of the approximating augmented vector x̃_k for the systems (23)-(25). Then, we can easily obtain the following conclusion:

Lemma 2

([21]). Assume that Ĩ_k^i ~ p_k^i N(1, σ^2) + (1 - p_k^i) N(0, σ^2), i = 1, ..., L. Then

P_k(x̆_k) ≥ lim_{σ→0} J̃_k^{-1} = [ J̄_k^{-1}  0
                                   0         0 ],

where P_k(x̆_k) denotes the estimation error covariance matrix of the vector x̆_k and J̄_k denotes the Fisher information submatrix of the vector x_k.

Based on Lemmas 1 and 2, for the nonlinear dynamic system with uncertain observations (6) and (7), it is easy to see that J̄_k^{-1} also represents a CRB for the estimation error covariance matrix of the vector x_k.

Proposition 2.

At time k+1, the Fisher information submatrix J̄_{k+1} of x_{k+1} for the multi-sensor uncertain systems (6) and (7) is computed according to the following recursion:

J̄_{k+1} = D̄_k^{22} - D̄_k^{21} (J̄_k + D̄_k^{11})^{-1} D̄_k^{12}, (26)
J̄_0 = Σ_0^{-1}, (27)

with

D̄_k^{11} = E{[∇_{x_k} f_k^T(x_k)] Q_k^{-1} [∇_{x_k} f_k^T(x_k)]^T}, (28)
D̄_k^{12} = -E{∇_{x_k} f_k^T(x_k)} Q_k^{-1}, (29)
D̄_k^{21} = (D̄_k^{12})^T, (30)
D̄_k^{22} = Q_k^{-1} + Σ_{i=1}^L p_{k+1}^i E{[∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]}. (31)

Proof. 

According to Lemma 1 and the derivation of Proposition 1, the new augmented state vector x̃_k of the systems (23)-(25) satisfies the following information recursion:

J̃_{k+1} = D̃_k^{22} - D̃_k^{21} (J̃_k + D̃_k^{11})^{-1} D̃_k^{12}, (32)

where D̃_k^{11}, D̃_k^{12}, D̃_k^{22} are given as follows:

D̃_k^{11} = E{-Δ_{x̃_k}^{x̃_k} log p(x̃_{k+1} | x̃_k)},
D̃_k^{12} = E{-Δ_{x̃_k}^{x̃_{k+1}} log p(x̃_{k+1} | x̃_k)},
D̃_k^{21} = (D̃_k^{12})^T,
D̃_k^{22} = E{-Δ_{x̃_{k+1}}^{x̃_{k+1}} log p(x̃_{k+1} | x̃_k)} + Σ_{i=1}^L E{-Δ_{x̃_{k+1}}^{x̃_{k+1}} log p(y_{k+1}^i | x̃_{k+1})}.

In order to obtain the lower bound for x_k, we calculate the following probability densities according to Equations (23)-(25):

log p(x̃_{k+1} | x̃_k) = log[ p(x_{k+1} | x_k) · p(Ĩ_{k+1}) ]
                      = c_3 - (1/2) [x_{k+1} - f_k(x_k)]^T Q_k^{-1} [x_{k+1} - f_k(x_k)] + log p(Ĩ_{k+1}), (33)

where c_3 is a constant; the first equality follows from the independence of x_{k+1} and Ĩ_{k+1}, and the second follows from (23). The other probability density is as follows:

log p(y_{k+1}^i | x̃_{k+1}) = c_4 - (1/2) [y_{k+1}^i - Ĩ_{k+1}^i h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [y_{k+1}^i - Ĩ_{k+1}^i h_{k+1}^i(x_{k+1})], (34)

where c_4 is a constant. Since x̃_k = [x_k^T, Ĩ_k^T]^T, using Equations (33)-(34) and Lemma 1, the suitably partitioned expressions for D̃_k^{11}, D̃_k^{12}, D̃_k^{22} are obtained:

D̃_k^{11} = [ D̄_k^{11}  0
             0         0 ], (35)
D̃_k^{12} = [ D̄_k^{12}  0
             0         0 ], (36)
D̃_k^{22} = [ D̄_k^{22}  C^{12}
             C^{21}    C^{22} ], (37)

where D̄_k^{11}, D̄_k^{12} are given in (28) and (29), while C^{12}, C^{21} and C^{22} are calculated as follows:

C^{12} = E_{x̃_{k+1}}{-Δ_{x_{k+1}}^{Ĩ_{k+1}} log p(x̃_{k+1} | x̃_k)} + Σ_{i=1}^L E_{x̃_{k+1}}{-Δ_{x_{k+1}}^{Ĩ_{k+1}^i} log p(y_{k+1}^i | x̃_{k+1})}
       = -Σ_{i=1}^L E_{x̃_{k+1}}{[∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} y_{k+1}^i} + 2 Σ_{i=1}^L E_{x̃_{k+1}}{Ĩ_{k+1}^i [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} h_{k+1}^i(x_{k+1})} = (C^{21})^T,
C^{22} = Σ_{i=1}^L E_{x̃_{k+1}}{-Δ_{Ĩ_{k+1}^i}^{Ĩ_{k+1}^i} log p(y_{k+1}^i | x̃_{k+1})} + Σ_{i=1}^L E_{x̃_{k+1}}{-Δ_{Ĩ_{k+1}^i}^{Ĩ_{k+1}^i} log p(Ĩ_{k+1}^i)}
       = Σ_{i=1}^L E_{x̃_{k+1}}{[h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} h_{k+1}^i(x_{k+1})} + Σ_{i=1}^L E_{x̃_{k+1}}{-Δ_{Ĩ_{k+1}^i}^{Ĩ_{k+1}^i} log p(Ĩ_{k+1}^i)}.

If we divide J̃_k into the following block matrix

J̃_k = [ J̃_k^{11}  J̃_k^{12}
        J̃_k^{21}  J̃_k^{22} ], (38)

then according to (32) and (35)–(37), the value of J˜k+1 is

J̃_{k+1} = [ D̄_k^{22} - D̄_k^{21} [D̄_k^{11} + J̃_k^{11} - J̃_k^{12} (J̃_k^{22})^{-1} J̃_k^{21}]^{-1} D̄_k^{12}   C^{12}
            C^{21}                                                                                    C^{22} ]. (39)

Since the matrix C^{22} is a function of σ, it is shown in [21] that

lim_{σ→0} J̃_{k+1}^{-1} = [ (D̄_k^{22} - D̄_k^{21} [D̄_k^{11} + J̄_k]^{-1} D̄_k^{12})^{-1}   0
                            0                                                          0 ]. (40)

Using Lemma 1, we obtain (26), and the matrix D̄_k^{22} can be computed as

D̄_k^{22} = Q_k^{-1} + Σ_{i=1}^L E_{x̃_{k+1}}{ I_{k+1}^i [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})] I_{k+1}^i }
          = Q_k^{-1} + Σ_{i=1}^L p_{k+1}^i E_{x_{k+1}}{ [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})] },

since E{(I_{k+1}^i)^2} = p_{k+1}^i.

 ☐
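The recursion (26)-(31) is easy to implement once the expectations are available. The following sketch considers, as a check, the special case of linear models x_{k+1} = F x_k + w_k and y_k^i = H_i x_k + v_k^i (our simplifying assumption, not the paper's nonlinear setting), where the expectations in (28)-(31) reduce to constant matrices.

```python
import numpy as np

def pcrb_recursion(J0, F, Q, Hs, Rs, ps, steps):
    """Sketch of the recursion (26)-(31) for linear models, where the
    expectations in (28)-(31) are constant. ps[i] is the sensing
    probability of sensor i; J0 is the initial information Sigma_0^{-1}."""
    Qinv = np.linalg.inv(Q)
    D11 = F.T @ Qinv @ F                                   # (28)
    D12 = -F.T @ Qinv                                      # (29)
    D22 = Qinv + sum(p * H.T @ np.linalg.solve(R, H)       # (31)
                     for H, R, p in zip(Hs, Rs, ps))
    J = J0
    for _ in range(steps):
        # (26), with D21 = D12^T by (30)
        J = D22 - D12.T @ np.linalg.solve(J + D11, D12)
    return J
```

For a fully observed linear Gaussian system (p = 1) this coincides with the information form of the Kalman filter covariance recursion, which is a useful sanity check; lowering p shrinks the measurement term in (31) and hence the information.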

Remark 1.

Note that the PCRBs derived in Propositions 1 and 2 have different forms. The first one is exact; the second is only approximately optimal, but has a lower computational burden. Since the approximation may approach the exact bound from above or from below, we cannot judge which one is lower. The simulation in Section 5 shows that they are almost equal, and the computational complexity of the approximate bound is lower than that of the exact bound.

Remark 2.

In the case of p_k^i = 1 and L = 1, the multi-sensor dynamic system (1) and (2) has certain (interruption-free) observations. In this case, the PCRB derived by the method in [13] is consistent with that derived in Proposition 2.

4. Sensor Selection with Uncertain Observations

In large sensor networks, managing the communication resources efficiently is an important problem. Calculating the PCRB via Proposition 1 requires the joint probability density function of the sensor measurements and the target state, which makes the computational burden heavy and the bound ill-suited as a selection criterion. In this section, we therefore consider the sensor selection problem based on Proposition 2.

For the nonlinear dynamic system at time k, assume that s sensors are to be selected from the L sensors by maximizing the Fisher information matrix; the selected sensors then send their measurements or local estimates to the fusion center, which estimates the state. In order to select the optimal s sensors, we introduce a selection vector s_k = [s_k^1, ..., s_k^L]^T ∈ {0,1}^L. If the ith sensor is selected, then s_k^i = 1; otherwise, s_k^i = 0, i = 1, ..., L. Following the derivation of the Fisher information matrix in Section 3, the selection vector modifies the log conditional probability density log p(y_k | x_k) as [34]

log ∏_{i=1}^L [p(y_k^i | x_k)]^{s_k^i} = Σ_{i=1}^L s_k^i log p(y_k^i | x_k). (41)

In fact, the selection variable s_k only affects D̄_k^{22} in Proposition 2. Then, D̄_k^{22} can be written as

D̄_k^{22} = Q_k^{-1} + Σ_{i=1}^L s_{k+1}^i p_{k+1}^i E{[∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]}. (42)

Therefore, the information matrix of x_{k+1} is a function of the selection variable s_{k+1}. Now, the sensor selection problem can be expressed as the following optimization problem:

max_{s_{k+1}}  tr( J̄_{k+1}(s_{k+1}) ) (43)
s.t.  Σ_{i=1}^L s_{k+1}^i = s, (44)
      s_{k+1}^i ∈ {0,1},  i = 1, ..., L, (45)

where "tr" denotes the matrix trace, i.e., the sum of the diagonal entries (equivalently, the sum of the eigenvalues) of the Fisher information matrix, and "s.t." means "subject to".

Remark 3.

In fact, the objective in (43) should be the matrix J̄_{k+1}(s_{k+1}) itself. Then, the problem (43)-(45) is a matrix optimization problem, in the sense that if s*_{k+1} is an optimal solution, then, for an arbitrary feasible solution s_{k+1}, we have J̄_{k+1}(s*_{k+1}) ⪰ J̄_{k+1}(s_{k+1}), i.e., J̄_{k+1}(s*_{k+1}) - J̄_{k+1}(s_{k+1}) is a positive semidefinite matrix. There are two reasons for choosing the trace as the objective function. First, it is a linear function, which helps us derive the optimal solution easily. Second, researchers [26,27,28] have shown that it has many advantages for sensor selection; for instance, if the primal matrix optimization problem has an optimal solution and D̄_k^{12} in (29) is invertible, then the matrix optimization problem for sensor selection can be equivalently transformed into the optimization problem (43)-(45).

Let the information measure of the ith sensor at time k+1 be denoted by

b_{k+1}^i = p_{k+1}^i tr( E{[∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [∇_{x_{k+1}} h_{k+1}^i(x_{k+1})]} ),  i = 1, ..., L. (46)

Let {b_{k+1}^{r_1}, ..., b_{k+1}^{r_L}} denote the rearrangement of {b_{k+1}^1, ..., b_{k+1}^L} in descending order, i.e., b_{k+1}^{r_1} ≥ ... ≥ b_{k+1}^{r_L}. The optimal solution of the optimization problem (43)-(45) is given by the following proposition.

Proposition 3.

For the multi-sensor nonlinear dynamic system with uncertain observations (1) and (2), the optimal sensor selection scheme for the problem (43)-(45) is s_{k+1}^{r_1} = ... = s_{k+1}^{r_s} = 1 and s_{k+1}^{r_{s+1}} = ... = s_{k+1}^{r_L} = 0.

Proof. 

Since D̄_k^{11} and D̄_k^{12} do not depend on s_{k+1}, based on Proposition 2, the optimization problem (43)-(45) is equivalent to

max_{s_{k+1}}  Σ_{i=1}^L b_{k+1}^i s_{k+1}^i (47)
s.t.  Σ_{i=1}^L s_{k+1}^i = s, (48)
      s_{k+1}^i ∈ {0,1},  i = 1, ..., L, (49)

where b_{k+1}^i is given by (46). Since b_{k+1}^{r_1} ≥ ... ≥ b_{k+1}^{r_L} and s_{k+1}^i, i = 1, ..., L, must satisfy (48) and (49), we have

Σ_{i=1}^L b_{k+1}^i s_{k+1}^i ≤ Σ_{i=1}^s b_{k+1}^{r_i}.

Equality holds with s_{k+1}^{r_1} = ... = s_{k+1}^{r_s} = 1 and s_{k+1}^{r_{s+1}} = ... = s_{k+1}^{r_L} = 0. Thus, the optimal solution is obtained. ☐
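The analytical solution above amounts to sorting the information measures b_{k+1}^i and keeping the s largest. A minimal sketch (the function name is ours):

```python
import numpy as np

def select_sensors(b, s):
    """Proposition 3: choose the s sensors with the largest information
    measures b_i from Equation (46); returns the 0-1 selection vector."""
    b = np.asarray(b, dtype=float)
    sel = np.zeros(b.size, dtype=int)
    sel[np.argsort(b)[::-1][:s]] = 1  # indices of the s largest b_i
    return sel
```

This costs one O(L log L) sort per time step, which is what makes the method practical for large networks, in contrast to solving a relaxed optimization problem at every step.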

5. Simulation

In this section, we provide two examples: one compares the PCRBs given by Propositions 1 and 2, and the other selects the optimal sensors by Proposition 3.

Example 1: Consider an uncertain nonlinear dynamic system for a mobile robot. At time k, the mobile robot pose is described by the state vector x_k = [x_k, y_k, θ_k]^T, where x_k and y_k are the coordinates on a 2D plane relative to an external coordinate frame, and θ_k is the heading angle. We use the control commands u_k = [Δd_k, Δθ_k] to determine the motion of the mobile robot, where Δd_k is the incremental distance traveled by the robot (in meters) and Δθ_k is the incremental change in heading angle (in degrees). The robot motion can be described as follows [35]:

f_{1,k} = x_{k-1} + Δd_k cos(θ_{k-1} + (1/2) Δθ_k),
f_{2,k} = y_{k-1} + Δd_k sin(θ_{k-1} + (1/2) Δθ_k),
f_{3,k} = θ_{k-1} + Δθ_k,

where Δd_k = 5 and Δθ_k = 5. The state function is defined as f_k = [f_{1,k}, f_{2,k}, f_{3,k}]^T, and the state model is

x_k = f_k(x_{k-1}, u_k) + w_k. (50)
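The motion model can be sketched as follows. We assume here that the heading is kept in degrees, as in the text, and converted to radians only inside the trigonometric functions; the function name and defaults are ours.

```python
import numpy as np

def robot_motion(x_prev, dd=5.0, dtheta=5.0):
    """Motion model f_k of Example 1. x_prev = [x, y, theta]; dd is the
    incremental distance (m) and dtheta the incremental heading change
    (degrees, assumed convention)."""
    x, y, th = x_prev
    mid = np.deg2rad(th + 0.5 * dtheta)  # heading at the midpoint of the step
    return np.array([x + dd * np.cos(mid),
                     y + dd * np.sin(mid),
                     th + dtheta])
```

Using the midpoint heading θ_{k-1} + Δθ_k/2 rather than θ_{k-1} is the standard second-order correction for a turn-while-moving step.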

The measurement equation is

y_k^i = { h^i(x_k) + v_k^i,  with probability p_k^i,
        { v_k^i,             with probability 1 - p_k^i,    for i = 1, ..., L,  p_k^i = 0.8, (51)

where

h^i(x_k) = [ sqrt( (x_k(1) - z_k^i(1))^2 + (x_k(2) - z_k^i(2))^2 )
             arctan( (x_k(2) - z_k^i(2)) / (x_k(1) - z_k^i(1)) ) ].

z_k^i = [z_k^i(1), z_k^i(2)]^T is the position of the ith sensor. In the simulation, we consider the WSN shown in Figure 1, which has L = 6 × 6 = 36 sensors deployed in a 100 × 100 m^2 area [5]. The noise covariances are set as Q_k = diag([0.1^2, 0.1^2, 3^2]) and R_k^i = diag([1^2, 1^2]).
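The range-bearing measurement h^i can be sketched as below. We use arctan2 rather than a plain arctan so the bearing keeps the correct quadrant, a common implementation choice that the formula above leaves implicit; the function name is ours.

```python
import numpy as np

def range_bearing(x, z):
    """Measurement model h^i of Example 1: range and bearing from sensor
    position z = [z1, z2] to the target position (x[0], x[1])."""
    dx, dy = x[0] - z[0], x[1] - z[1]
    return np.array([np.hypot(dx, dy),      # range to the target
                     np.arctan2(dy, dx)])   # bearing, quadrant-aware
```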

Figure 1. The trajectory of the mobile robot and the location of the L sensors.

In this example, the initial state of the robot is [8, 8, 1]^T and the initial covariance matrix is P_0 = diag([10, 10, 2]) [35]. The number of sampling steps is set to 50, and 200 Monte Carlo (MC) runs are performed.

The following simulation results have three parts: the first concerns the trajectory of the mobile robot and the PCRB of the state estimation, the second the average computation time, and the third the PCRB for different sensing probabilities p.

  • Figure 1 shows the trajectory of the mobile robot and the location of the L sensors. Figure 2 and Figure 3 show the PCRB of position along the x- and y-directions based on Propositions 1 and 2, respectively. From Figure 2 and Figure 3, we can see that the different PCRBs converge to the same values. However, the PCRB changes considerably in the first few seconds, for two possible reasons. First, the dynamic system is nonlinear, so the recursion may need some time to converge. Second, the initial covariance may be poorly chosen, so that it is far from the convergence point.

  • The average computation time of calculating the PCRB based on Propositions 1 and 2 is presented in Figure 4. Clearly, as the number of sensors increases, the computational complexity of Proposition 1 grows much faster than that of Proposition 2, and the average computation time of the PCRB by Proposition 2 increases slowly. The reason is that the expression of the PCRB based on Proposition 2 has a more concise form, in which D̄_k^{22} is easier to compute. Thus, Proposition 2 is more suitable for sensor selection in large-scale sensor networks.

  • In Figure 5, the average PCRB over 20 time steps is plotted as a function of the number of sensors. It shows that the PCRB obtained by Proposition 1 is essentially the same as that based on Proposition 2. The larger p is, the smaller the number of required sensors. The reason may be that the sensors gather more observation information when the sensing probability p is larger.

Figure 2. The PCRB of position in the x-direction is plotted as a function of time steps.

Figure 3. The PCRB of position in the y-direction is plotted as a function of time steps.

Figure 4. The average computation time of the PCRB is plotted as a function of the number of sensors.

Figure 5. The average PCRB over 20 time steps is plotted as a function of the number of sensors for different probabilities p.

Example 2: In order to manage communication resources efficiently in large wireless sensor networks, we need to select appropriate sensors. Thus, consider the above dynamic system (50) and (51) and the WSNs [35]. In general, if the sensors close to the target also have higher sensing probabilities than the other sensors in the WSN, they are highly likely to be selected, being both closer to the target and more reliable. Here, we consider the more difficult case in which the sensors around the target have relatively low sensing probabilities. We then compare our algorithm in Proposition 3 with the two recent methods given in [5,28].

In this example, we present two cases with different numbers of sensors in the WSN. First, we consider L = 36 and s_k = 15, with p_k^i = 0.1 for i = 7, 8, 9, 10, 13, 14, 15, 16, 20, 21, 22, 23, 24, and the other sensing probabilities between 0.8 and 1. Second, we consider L = 49 and s_k = 15, with p_k^i = 0.1 for i = 14, ..., 18, 22, ..., 26, 30, ..., 34, and the other sensing probabilities between 0.8 and 1. The following simulation results contain three parts: the first concerns sensor selection in the wireless sensor network application, the second the mean squared error based on the selected sensors, and the third the computation time.

  • Figure 6 and Figure 7 present the locations of the L = 36 and L = 49 sensors, respectively. The target is shown at time 10 s, and we use the algorithm in Proposition 3 to select the optimal s_k = 15 sensors. When the uncertainty in the dynamic system is ignored, the recent method in [28] can be used to select the required sensors; the results are shown in Figure 8 and Figure 9. Comparing Figure 6 with Figure 8, some sensors close to the target, such as sensors 8 and 15, are not selected in Figure 6, whereas they are selected in Figure 8, which favors proximity alone. The reason is that the sensing probabilities of sensors 8 and 15 are very low, so they may not give us much useful information even though they are close to the target. Comparing Figure 7 with Figure 9 reveals a similar phenomenon: sensors 16 and 31 are not selected in Figure 7 but are selected in Figure 9.

  • In Figure 10 and Figure 11, the mean squared errors of position in the x- and y-directions are plotted for the algorithm given in Proposition 3 and the algorithms in [5,28]. They show that our algorithm achieves the best performance. The reason is that our algorithm accounts for the influence of the uncertain observations and obtains the optimal selected sensors. Although the algorithm in [5] considers the uncertain observations, it has difficulty obtaining the optimal selected sensors, since it relaxes the variables from {0,1} to the interval [0,1]. From Figure 12 and Figure 13, we can also see that the proposed method performs best in the case of L = 49 as well; thus, the performance of the new method remains stable as the number of sensors increases.

  • The computation times of obtaining the PCRB are plotted in Figure 14 and Figure 15 for the three algorithms. They show that the computation time of the method in Proposition 3 is much smaller than that of the other two methods, because Proposition 3 provides an analytical solution. Therefore, the proposed algorithm in Proposition 3 is more suitable for large sensor networks.

Figure 6. L = 36 sensor placement and the s_k = 15 sensors selected by the algorithm in Proposition 3.

Figure 7. L = 49 sensor placement and the s_k = 15 sensors selected by the algorithm in Proposition 3.

Figure 8. L = 36 sensor placement and the s_k = 15 sensors selected by the algorithm in [28].

Figure 9. L = 49 sensor placement and the s_k = 15 sensors selected by the algorithm in [28].

Figure 10. The mean squared error of position in the x-direction is plotted as a function of time steps with L = 36 sensors.

Figure 11. The mean squared error of position in the y-direction is plotted as a function of time steps with L = 36 sensors.

Figure 12. The mean squared error of position in the x-direction is plotted as a function of time steps with L = 49 sensors.

Figure 13. The mean squared error of position in the y-direction is plotted as a function of time steps with L = 49 sensors.

Figure 14. The computation time of obtaining the PCRB is plotted as a function of time steps with L = 36 sensors.

Figure 15.

Figure 15

The computation time of obtaining PCRB is plotted as function of time steps with L=49 sensors.

6. Conclusions

This paper has proposed two methods to derive the PCRB that effectively overcome the difficulties caused by uncertainty. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it requires computing a complex integral based on the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method introduces 0–1 latent variables to handle the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. A new Cramér–Rao bound can then be obtained by a limiting process applied to the Cramér–Rao bound of the continuous system. This avoids the complex integral and thus reduces the computational burden. As a result, the sensor selection problems for the nonlinear uncertain dynamic system with linear equality or inequality constraints can be solved efficiently, and the optimal solution of the sensor selection problem can be derived analytically. Hence, the method can handle sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

Acknowledgments

This work was supported in part by the NSFC No. 61673282, the open research funds of BACC-STAFDL of China under Grant No. 2015afdl010 and the PCSIRT16R53.

Author Contributions

Zhiguo Wang and Xiaojing Shen proposed the algorithm; Ping Wang performed the experiments; Yunmin Zhu contributed analysis tools.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1. Nahi N.E. Optimal recursive estimation with uncertain observation. IEEE Trans. Inf. Theory. 1969;15:457–462. doi: 10.1109/TIT.1969.1054329.
  • 2. Hadidi M.T., Schwartz S.C. Linear recursive state estimators under uncertain observations. IEEE Trans. Autom. Control. 1979;24:944–948. doi: 10.1109/TAC.1979.1102171.
  • 3. Lin H., Sun S. State estimation for a class of non-uniform sampling systems with missing measurements. Sensors. 2016;16:1155. doi: 10.3390/s16081155.
  • 4. Luo Y., Zhu Y., Luo D., Zhou J., Song E., Wang D. Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications. Sensors. 2008;8:8086–8103. doi: 10.3390/s8128086.
  • 5. Cao N., Choi S., Masazade E., Varshney P.K. Sensor selection for target tracking in wireless sensor networks with uncertainty. IEEE Trans. Signal Process. 2016;64:5191–5204. doi: 10.1109/TSP.2016.2595500.
  • 6. Costa O.L.V., Guerra S. Stationary filter for linear minimum mean square error estimator of discrete-time Markovian jump systems. IEEE Trans. Autom. Control. 2002;47:1351–1356. doi: 10.1109/TAC.2002.800745.
  • 7. Sinopoli B., Schenato L., Franceschetti M., Poolla K., Jordan M.I., Sastry S.S. Kalman filtering with intermittent observations. IEEE Trans. Autom. Control. 2004;49:1453–1464. doi: 10.1109/TAC.2004.834121.
  • 8. Van Trees H.L. Detection, Estimation, and Modulation Theory, Part I. Wiley; New York, NY, USA: 1968.
  • 9. Bobrovsky B.Z., Zakai M. A lower bound on the estimation error for Markov processes. IEEE Trans. Autom. Control. 1975;20:785–788. doi: 10.1109/TAC.1975.1101088.
  • 10. Taylor J. The Cramér–Rao estimation error lower bound computation for deterministic nonlinear systems. IEEE Trans. Autom. Control. 1979;24:343–344. doi: 10.1109/TAC.1979.1101979.
  • 11. Galdos J.I. A Cramér–Rao bound for multidimensional discrete-time dynamical systems. IEEE Trans. Autom. Control. 1980;25:117–119. doi: 10.1109/TAC.1980.1102211.
  • 12. Chang C.B. Two lower bounds on the covariance for nonlinear estimation problems. IEEE Trans. Autom. Control. 1981;26:1294–1297. doi: 10.1109/TAC.1981.1102807.
  • 13. Tichavský P., Muravchik C.H., Nehorai A. Posterior Cramér–Rao bounds for discrete-time nonlinear filtering. IEEE Trans. Signal Process. 1998;46:1386–1396.
  • 14. Kirubarajan T., Bar-Shalom Y. Low observable target motion analysis using amplitude information. IEEE Trans. Aerosp. Electron. Syst. 1996;32:1367–1384. doi: 10.1109/7.543858.
  • 15. Zheng Y., Ozdemir O., Niu R., Varshney P.K. New conditional posterior Cramér–Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2012;60:5549–5556. doi: 10.1109/TSP.2012.2205686.
  • 16. Šimandl M., Královec J., Tichavský P. Filtering, predictive, and smoothing Cramér–Rao bounds for discrete-time nonlinear dynamic systems. Automatica. 2001;37:1703–1716. doi: 10.1016/S0005-1098(01)00136-4.
  • 17. Hernandez M.L., Marrs A.D., Gordon N.J., Maskell S.R., Reed C.M. Cramér–Rao bounds for non-linear filtering with measurement origin uncertainty. In Proceedings of the 5th International Conference on Information Fusion; Annapolis, MD, USA. 8–11 July 2002.
  • 18. Zhang X., Willett P., Bar-Shalom Y. The Cramér–Rao bound for dynamic target tracking with measurement origin uncertainty. In Proceedings of the 41st IEEE Conference on Decision and Control; Las Vegas, NV, USA. 10–13 December 2002.
  • 19. Farina A., Ristic B., Timmoneri L. Cramér–Rao bound for nonlinear filtering with Pd < 1 and its application to target tracking. IEEE Trans. Signal Process. 2002;50:1916–1924.
  • 20. Hernandez M., Ristic B., Farina A. A comparison of two Cramér–Rao bounds for nonlinear filtering with Pd < 1. IEEE Trans. Signal Process. 2004;52:2361–2370.
  • 21. Rapoport I., Oshman Y. A Cramér–Rao-type estimation lower bound for systems with measurement faults. IEEE Trans. Autom. Control. 2005;50:1234–1245. doi: 10.1109/TAC.2005.854579.
  • 22. Hounkpevi F.O., Yaz E.E. Robust minimum variance linear state estimators for multiple sensors with different failure rates. Automatica. 2007;43:1274–1280. doi: 10.1016/j.automatica.2006.12.025.
  • 23. Zhang H., Shi Y., Mehr A.S. Robust weighted H∞ filtering for networked systems with intermittent measurements of multiple sensors. Int. J. Adapt. Control Signal Process. 2011;25:313–330. doi: 10.1002/acs.1200.
  • 24. Yick J., Mukherjee B., Ghosal D. Wireless sensor network survey. Comput. Netw. 2008;52:2292–2330. doi: 10.1016/j.comnet.2008.04.002.
  • 25. Gungor V.C., Hancke G.P. Industrial wireless sensor networks: Challenges, design principles, and technical approaches. IEEE Trans. Ind. Electron. 2009;56:4258–4265. doi: 10.1109/TIE.2009.2015754.
  • 26. Shen X., Varshney P.K. Sensor selection based on generalized information gain for target tracking in large sensor networks. IEEE Trans. Signal Process. 2014;62:363–375. doi: 10.1109/TSP.2013.2289881.
  • 27. Joshi S., Boyd S. Sensor selection via convex optimization. IEEE Trans. Signal Process. 2009;57:451–462. doi: 10.1109/TSP.2008.2007095.
  • 28. Shen X., Liu S., Varshney P.K. Sensor selection for nonlinear systems in large sensor networks. IEEE Trans. Aerosp. Electron. Syst. 2014;50:2664–2678. doi: 10.1109/TAES.2014.130455.
  • 29. Liu S., Fardad M., Masazade E., Varshney P.K. Optimal periodic sensor scheduling in networks of dynamical systems. IEEE Trans. Signal Process. 2014;62:3055–3068. doi: 10.1109/TSP.2014.2320455.
  • 30. Wang P., Wang Z.G., Shen X.J., Zhu Y.M. The estimation fusion and Cramér–Rao bounds for nonlinear systems with uncertain observations. In Proceedings of the 20th International Conference on Information Fusion; Xi’an, China. 10–13 July 2017.
  • 31. Bishop C.M. Pattern Recognition and Machine Learning. Springer; New York, NY, USA: 2006.
  • 32. Yang X.J., Niu R.X. Sparsity-promoting sensor selection for nonlinear target tracking with quantized data. In Proceedings of the 20th International Conference on Information Fusion; Xi’an, China. 10–13 July 2017.
  • 33. Bilmes J.A. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Int. Comput. Sci. Inst. 1998;4:126.
  • 34. Chepuri S.P., Leus G. Sparsity-promoting sensor selection for non-linear measurement models. IEEE Trans. Signal Process. 2015;63:684–698. doi: 10.1109/TSP.2014.2379662.
  • 35. Pak J.M., Ahn C.K., Shmaliy Y.S., Lim M.T. Improving reliability of particle filter-based localization in wireless sensor networks via hybrid particle/FIR filtering. IEEE Trans. Ind. Inform. 2015;11:1089–1098. doi: 10.1109/TII.2015.2462771.

Articles from Sensors (Basel, Switzerland) are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
