Sensors (Basel, Switzerland). 2023 Jul 14; 23(14):6408. doi: 10.3390/s23146408

Optimal Geometry and Motion Coordination for Multisensor Target Tracking with Bearings-Only Measurements

Shen Wang 1, Yinya Li 1,*, Guoqing Qi 1, Andong Sheng 1
Editors: Won-Sang Ra, Ivan Masmitja, Shaoming He
PMCID: PMC10385148  PMID: 37514702

Abstract

This paper focuses on the optimal geometry and motion coordination problem of mobile bearings-only sensors for improving target tracking performance. A general optimal sensor–target geometry is derived with uniform sensor–target distance using D-optimality for an arbitrary number n (n ≥ 2) of bearings-only sensors. The optimal geometry is characterized by the partition cases dividing n into a sum of integers no less than two. Then, a motion coordination method is developed to steer the sensors to the circular radius orbit (CRO) around the target at the minimum sensor–target distance and move them in a circular formation. The sensors are first driven to approach the target directly when outside the CRO. When the sensors reach the CRO, they are allocated to different subsets according to the partition cases by matching the optimal geometry. The sensor motion is then optimized under constraints to achieve the matched optimal geometry by minimizing the sum of the distances traveled by the sensors. Finally, two illustrative examples demonstrate the effectiveness of the proposed approach.

Keywords: optimal geometry, bearings-only measurement, Fisher information matrix, motion coordination

1. Introduction

Bearings-only target tracking is widely applied in wireless sensor networks in both civilian and military areas [1,2]. Unlike other sensors, such as range-only sensors and time difference of arrival (TDOA) sensors, bearings-only sensors work in passive mode and are therefore difficult to detect and attack. However, their accuracy is highly sensitive to range: even a small angle measurement error may lead to a large tracking error at long distances. Therefore, bearings-only target tracking has been a research area of considerable interest for decades. Meanwhile, with the development of unmanned vehicles, traditional stationary sensor platforms have evolved into mobile ones characterized by high speed and long endurance. Accordingly, flexible sensor motion coordination can be achieved, and both the tracking accuracy and the survivability are significantly improved through sensor coordination.

Much previous work has been dedicated to developing different estimators for target tracking based on bearings-only measurements in two- and three-dimensional space [3,4,5,6]. The extended Kalman filter (EKF) is a classical method for the nonlinear tracking problem [7] but often diverges when the model nonlinearity is strong. The pseudolinear Kalman filter (PLKF) was introduced in [8,9], with better convergence than the EKF. However, the estimate is biased, which is highly dependent on sensor geometry [10]. Furthermore, other estimation algorithms such as the unscented Kalman filter (UKF) [11], cubature Kalman filter (CKF) [12,13,14,15], and particle filter (PF) [16,17] have been applied in bearings-only target tracking with different estimation performance advantages.

Compared with the improvement in tracking accuracy produced by estimation algorithms, sensor–target geometry plays a fundamental role in determining the accuracy of target tracking systems [18,19,20,21]. The Fisher information matrix (FIM) is a commonly used criterion for assessing target tracking accuracy. The inverse of the FIM, called the Cramér–Rao lower bound (CRLB), indicates the optimal performance of a tracking system. Three popular optimality criteria are adopted to achieve the optimal sensor configuration based on the FIM [19]. D-optimality minimizes the area of the uncertainty ellipse by maximizing the determinant of the FIM [18,20,22]; A-optimality suppresses the average variance by minimizing the trace of the CRLB [23,24]; and E-optimality minimizes the length of the largest axis of the uncertainty ellipsoid by minimizing the maximum eigenvalue of the CRLB [19]. In [25], D-optimality was adopted to optimize sensor placement for range-based target tracking. In [26], the conditions for the optimal placement of heterogeneous sensors were derived by maximizing the information matrix, and the optimal placement for paired sensors was developed leveraging a “divide-and-conquer” strategy. In [27], A-optimality was used to solve sensor placement for 3D angle-of-arrival target localization. Geometric dilution of precision (GDOP) [28] is another criterion used to evaluate tracking accuracy. GDOP is defined as the root mean square position error and illustrates how an estimate is influenced by the sensor–target geometry [29]. The optimal deployment for multitarget localization was developed in [30] by minimizing the GDOP.

In addition to the above theoretical analysis on the sensor–target geometries, some sensor path optimization methods have been proposed for target tracking to avoid the difficulty in finding the closed-form solution. A gradient-descent-based motion planning algorithm was presented for decentralized target tracking [31]. In [32], a gradient descent optimization algorithm was proposed for single- and multisensor path planning by minimizing the mean square error in 2D space. In [33], the path optimization for passive emitter localization in 2D space was transformed into a nonlinear programming problem with the FIM as the cost function. In [34], the path optimization strategy for 3D AOA target tracking was developed by minimizing the trace of covariance matrices with gradient descent optimization and a grid search method. In [35], the optimal sensor placement for AOA sensors was derived with a Gaussian prior using D- and A-optimality. In addition, the result was extended to path optimization based on a projection algorithm.

Most of the existing work has focused on the optimal deployment of multiple bearings-only sensors for target localization, and some closed-form solutions have been derived with equal angular distribution. Inspired by the “divide-and-conquer” strategy in [26], the continuum of optimal solutions for bearings-only measurements has the potential to be extended to general circumstances. Moreover, for bearings-only target tracking using mobile sensors, some studies in the literature have adopted optimization methods such as gradient descent, Gauss–Seidel relaxation, and so on. Nevertheless, the solution space is complex due to the high nonlinearity of the cost functions related to the FIM. As a result, these numerical methods may become trapped in local optima and fail to reach the globally optimal tracking performance. Motivated by the aforementioned aspects, this paper focuses on the optimal sensor–target geometry and motion coordination problem of mobile bearings-only sensors for target tracking. The sensors are driven to approach the target from a distance and eventually move in a circular formation to track the target.

The contributions of this paper are summarized as follows. (1) The suboptimality of directly approaching the target for bearings-only sensors to improve tracking performance is analyzed. (2) A continuum solution to the optimal sensor–target geometry is derived with uniform sensor–target distance using D-optimality for arbitrary n (n ≥ 2) bearings-only sensors. The optimal geometry is characterized by the partition cases dividing n into a sum of integers no less than two. (3) A motion coordination algorithm is developed based on matching the optimal geometry and on constrained motion optimization to achieve the globally optimal target tracking performance.

The remainder of this paper is organized as follows: Section 2 presents the problem formulation. The CKF and FIM are introduced in Section 3. Section 4 reformulates the problem and investigates the optimality analysis. In Section 5, we design a motion coordination strategy based on the results in Section 4. The proposed method is verified by simulations in Section 6. Section 7 concludes this paper.

Notations: Define θ_i ∈ (−π, π] and θ_ij = θ_j − θ_i ∈ (−π, π]. The two-norm of a vector x ∈ ℝ^n is defined as ‖x‖ = (x^T x)^{1/2}. Chol(M) denotes the Cholesky decomposition of M. tr(·) and det(·) denote the trace and the determinant of the matrix contained in the bracket, respectively. |S| represents the cardinality of the set S. S∖T = {e | e ∈ S and e ∉ T}.

2. Problem Formulation

This paper focuses on the problem of sensor motion and coordination for single-moving-target tracking with n ≥ 2 bearings-only sensors in 2D space. The target tracking geometry is depicted in Figure 1. θ_{i,k} is the angle of the line of sight (LOS) from sensor i to the target at discrete time k. Define z_{i,k} as the measurement of θ_{i,k}; then, the measurement function is

z_{i,k} = \theta_{i,k} + \eta_{i,k} = \tan^{-1}\frac{y_k^p - y_i(k)}{x_k^p - x_i(k)} + \eta_{i,k} \quad (1)

where p_k = [x_k^p, y_k^p]^T is the position of the target at time k; tan^{-1}(·) is the four-quadrant inverse tangent function and θ_{i,k} ∈ (−π, π]; s_i(k) = [x_i(k), y_i(k)]^T is the location of sensor i; and η_{i,k} is the measurement noise, assumed to be i.i.d. Gaussian with zero mean and variance σ_i^2, i ∈ {1, 2, …, n}. The sensors are homogeneous, i.e., σ_i^2 = σ_θ^2. Write the measurements in compact form as z_k = [z_{1,k}, z_{2,k}, …, z_{n,k}]^T ∈ ℝ^n, where η_k = [η_{1,k}, η_{2,k}, …, η_{n,k}]^T ∈ ℝ^n is Gaussian measurement noise with zero mean and covariance R_k = σ_θ^2 I, and I is an identity matrix.
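As a minimal sketch of the measurement model (1), assuming NumPy; the helper name `bearing_measurement` is illustrative, and the four-quadrant inverse tangent is `arctan2`:

```python
import numpy as np

def bearing_measurement(target, sensor, sigma_theta=0.0, rng=None):
    """Bearings-only measurement z_{i,k} of Equation (1): the four-quadrant
    LOS angle from the sensor to the target, plus optional Gaussian noise."""
    dx = target[0] - sensor[0]
    dy = target[1] - sensor[1]
    theta = np.arctan2(dy, dx)  # four-quadrant inverse tangent, in (-pi, pi]
    if sigma_theta > 0.0:
        rng = rng or np.random.default_rng()
        theta += rng.normal(0.0, sigma_theta)
    return theta
```

For example, the noiseless bearing of a target at (1, 1) seen from a sensor at the origin is π/4.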

Figure 1. Target tracking geometry for n (n ≥ 2) bearings-only sensors.

Consider the target whose motion is described by a nonlinear dynamic discrete system

x_{k+1} = f(x_k) + w_k \quad (2)

where x_k ∈ ℝ^{n_x} is the state vector of the dynamic system at discrete time k; w_k ∈ ℝ^{n_x} is Gaussian process noise with zero mean and covariance Q_k; and n_x is the dimension of the state vector. Moreover, w_k and η_k are mutually independent.

The dynamic model of the mobile sensors is given by

s_i(k+1) = s_i(k) + u_i(k), \quad u_i(k) = v_i(k)\,T \quad (3)

where si(k) is the position of sensor i at discrete time k; ui(k) is the control input for sensor i at time k; vi(k) is the designed velocity of sensor i at time k; and T is the sampling time.

The state parameters of the target are unknown. We assume that the states of the mobile sensors and the measurements taken by them are known. Because the scenario is noncooperative, a minimum distance between the target and the sensors must be maintained. We aim to estimate the target state using the bearings-only measurements and to improve the tracking accuracy by optimizing the sensor–target geometry of the cooperative mobile sensors under practical constraints.

Assumption A1. 

At the beginning of the tracking process, at least two sensors are deployed at positions that are not collinear with the target to ensure the observability of the target [19,36].

Assumption A2. 

The mobile sensors are homogeneous, with a maximum speed v_max and a maximum turn rate φ_max due to mechanical limitations. The maximum speed of the sensors is greater than that of the target to ensure that they can catch up with it. The minimum allowed distance between the sensors and the target is denoted as d_min.

3. Parameter Estimation

In this paper, we use the cubature Kalman filter (CKF) [12] to estimate the state of the target. The CKF is a nonlinear filter that has gained prominence over the past decade, with improved performance over conventional nonlinear filters, particularly in addressing the strong nonlinearity of bearings-only target tracking.

In addition, it is known that static sensors provide limited tracking performance, especially at long range. Obviously, one feasible way to improve the tracking accuracy is to move the sensors to better locations. Therefore, the FIM based on bearings-only measurements is introduced in this section for the optimality analysis in the following section.

3.1. Cubature Kalman Filter

Denote x^k|k as the estimate of xk and Pk|k as the estimate error covariance by using the bearings-only measurements zk. The cubature Kalman filter, in its time- and measurement-update forms, can be computed by starting from x^0|0 and P0|0. The iteration functions are as follows:

Step 1. Evaluate cubature points (i=1,2,,2nx)

S_{k-1|k-1} = \mathrm{Chol}(P_{k-1|k-1}), \quad X_{i,k-1|k-1} = S_{k-1|k-1}\,\xi_i + \hat{x}_{k-1|k-1} \quad (4)

where S_{k−1|k−1} is the Cholesky factor of P_{k−1|k−1}; ξ_i = √(n_x)\,[1]_i, and [1]_i ∈ ℝ^{n_x} denotes the i-th column of the generator set

\left\{ \begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}, \begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix}, \ldots, \begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix}, \begin{bmatrix}-1\\0\\\vdots\\0\end{bmatrix}, \begin{bmatrix}0\\-1\\\vdots\\0\end{bmatrix}, \ldots, \begin{bmatrix}0\\0\\\vdots\\-1\end{bmatrix} \right\}

which contains 2n_x columns.
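Step 1 can be sketched as follows; the function name `cubature_points` is illustrative, but the construction follows (4): scale the columns of [I, −I] by √n_x, map them through the Cholesky factor, and shift by the mean. By construction, the 2n_x points reproduce the mean and covariance exactly:

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2*n_x cubature points of Equation (4):
    X_i = S*xi_i + x_hat, with S the lower Cholesky factor of P and
    xi_i = sqrt(n_x) times the columns of [I, -I]."""
    n = mean.size
    S = np.linalg.cholesky(cov)                           # Chol(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # generator set [1]_i
    return S @ xi + mean[:, None]                         # shape (n, 2n)
```

Since the columns of the generator set sum to zero and their outer products sum to 2n_x I, the sample mean of the points equals the mean and (1/2n_x)ΣX_iX_iᵀ − x̂x̂ᵀ recovers P.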

Step 2. Time update

X_{i,k|k-1} = f(X_{i,k-1|k-1})
\hat{x}_{k|k-1} = \frac{1}{2n_x}\sum_{i=1}^{2n_x} X_{i,k|k-1}
P_{k|k-1} = \frac{1}{2n_x}\sum_{i=1}^{2n_x} X_{i,k|k-1} X_{i,k|k-1}^T - \hat{x}_{k|k-1}\hat{x}_{k|k-1}^T + Q_{k-1} \quad (5)

where x̂_{k|k−1} is the state prediction, and P_{k|k−1} is the predicted error covariance.

Step 3. Measurement update

S_{k|k-1} = \mathrm{Chol}(P_{k|k-1})
\chi_{i,k|k-1} = S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1}
Z_{i,k|k-1} = h(\chi_{i,k|k-1})
\hat{z}_{k|k-1} = \frac{1}{2n_x}\sum_{i=1}^{2n_x} Z_{i,k|k-1}
P_{zz,k|k-1} = \frac{1}{2n_x}\sum_{i=1}^{2n_x} Z_{i,k|k-1} Z_{i,k|k-1}^T - \hat{z}_{k|k-1}\hat{z}_{k|k-1}^T + R_k
P_{xz,k|k-1} = \frac{1}{2n_x}\sum_{i=1}^{2n_x} \chi_{i,k|k-1} Z_{i,k|k-1}^T - \hat{x}_{k|k-1}\hat{z}_{k|k-1}^T
W_k = P_{xz,k|k-1}\, P_{zz,k|k-1}^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + W_k\left(z_k - \hat{z}_{k|k-1}\right)
P_{k|k} = P_{k|k-1} - W_k P_{zz,k|k-1} W_k^T \quad (6)

where ẑ_{k|k−1} is the predicted measurement; P_{zz,k|k−1} is the innovation covariance matrix; P_{xz,k|k−1} is the cross-covariance matrix; and W_k is the Kalman gain.

3.2. Fisher Information Matrix

The error covariance matrix is defined as

P_{k|k} \triangleq E\left[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T\right] \succeq J_k^{-1} \quad (7)

where J_k is called the FIM, which quantifies the amount of information obtained from the measurements, with the expression

J_k = -E\left[\frac{\partial^2 \ln p(z_k|x_k)}{\partial x_k\, \partial x_k^T}\right] \quad (8)

where p(z_k|x_k) is the probability density function, expressed as

p(z_k|x_k) = \frac{1}{\sqrt{(2\pi)^n \det(R_k)}} \exp\left\{-\frac{1}{2}\left[z_k - h(x_k)\right]^T R_k^{-1}\left[z_k - h(x_k)\right]\right\} \quad (9)

Given the measurement vector z_k, the FIM is determined as

J_k = \frac{1}{\sigma_\theta^2}\sum_{i=1}^{n}\frac{1}{r_{i,k}^2}\begin{bmatrix} \cos^2\theta_{i,k} & -\frac{1}{2}\sin 2\theta_{i,k} \\ -\frac{1}{2}\sin 2\theta_{i,k} & \sin^2\theta_{i,k} \end{bmatrix} \quad (10)

where r_{i,k} = ‖p_k − s_i(k)‖ represents the distance between the target position p_k and the sensor position s_i(k) at time k.

Lemma 1 

([18]). Let the FIM be expressed as in (10); then, the following expressions for the determinant of the FIM are equivalent:

(1) \quad \det(J_k) = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^{n}\frac{1}{r_{i,k}^2}\right)^2 - \left(\sum_{i=1}^{n}\frac{\cos 2\theta_{i,k}}{r_{i,k}^2}\right)^2 - \left(\sum_{i=1}^{n}\frac{\sin 2\theta_{i,k}}{r_{i,k}^2}\right)^2\right]
(2) \quad \det(J_k) = \frac{1}{\sigma_\theta^4}\sum_{\Psi}\frac{\sin^2(\theta_{ij})}{r_{i,k}^2\, r_{j,k}^2} \quad (11)

where Ψ = {{i, j}} is the set of all combinations of i and j with 1 ≤ i < j ≤ n, and θ_ij = θ_{j,k} − θ_{i,k}.
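The equivalence in Lemma 1 is easy to check numerically. The sketch below (hypothetical helper names, NumPy assumed) builds J_k from the rank-one form underlying (10) and compares det(J_k) with the pairwise formula (2) of (11); the sign convention of the off-diagonal term does not affect the determinant:

```python
import numpy as np
from itertools import combinations

def bearings_fim(thetas, ranges, sigma=1.0):
    """FIM of Equation (10): a sum of rank-one terms, one per sensor."""
    J = np.zeros((2, 2))
    for th, r in zip(thetas, ranges):
        u = np.array([np.cos(th), -np.sin(th)])  # det(J) is invariant to this sign choice
        J += np.outer(u, u) / r**2
    return J / sigma**2

def det_pairwise(thetas, ranges, sigma=1.0):
    """Formula (2) of Lemma 1: det(J) = sum over pairs of sin^2(theta_ij)/(r_i^2 r_j^2)."""
    return sum(
        np.sin(thetas[j] - thetas[i])**2 / (ranges[i]**2 * ranges[j]**2)
        for i, j in combinations(range(len(thetas)), 2)
    ) / sigma**4
```

For two sensors at equal unit range separated by 90°, both expressions give det(J) = 1/σ_θ⁴.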

4. Optimality Analysis

The problem of path planning and motion coordination for improving tracking performance is equivalent to finding the next waypoints at each time step by maximizing the determinant of the FIM. Two kinds of parameters influence the determinant of the FIM: the sensor–target distances and the angles among the sensors. So, we can maximize det(J_k) by simultaneously reducing the distances between the sensors and the target and configuring the angles among the sensors.

In order to respect the minimum distance constraint, the sensors eventually move on a circular trajectory of fixed radius around the target. Before that, the path for reaching this circular radius orbit (CRO) while improving the tracking accuracy is studied. Thus, the design of the motion coordination for multiple sensors is divided into two stages: outside the CRO and on the CRO of radius d_min.

4.1. Outside the CRO Distance

Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than dmin, the problem of the optimal sensor movement is equivalent to the following optimization problem:

\max\ \det(J_{k+1}) \quad \text{s.t.} \quad \|v_i(k)\| \le v_{\max},\ \ |\angle v_i(k) - \angle v_i(k-1)| \le \varphi_{\max} \quad (12)

where ∠v_i(k) is the angle of the velocity vector at time k, and the difference between ∠v_i(k) and ∠v_i(k−1) is bounded by φ_max due to the limited turn rate.

Obviously, the difficulty of solving problem (12) increases with the number of mobile sensors, though it can be solved via numerical methods. As such, we turn to a suboptimal motion to reduce the computational complexity.

When the sensors are far away from the target, the sensors are expected to move with maximum speed vmax to approach the target. As shown in Figure 2, the location si(k+1) that sensor i is able to reach can be expressed by

x_i(k+1) = x_i(k) + v_{\max} T \cos\phi_{i,k}, \quad y_i(k+1) = y_i(k) + v_{\max} T \sin\phi_{i,k} \quad (13)

where ϕ_{i,k} ∈ [0, 2π) is the heading direction of sensor i at time k. For convenience, denote Δx_i ≜ x_{k+1}^p − x_i(k), Δy_i ≜ y_{k+1}^p − y_i(k), and d_i ≜ v_max T.

Figure 2. Optimal sensor motion for target tracking.

Theorem 1. 

Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than dmin, and the position of the target is pk+1 at time k+1, the suboptimal heading direction of sensor i at time k is

\phi_{i,k}^{*} = \tan^{-1}\frac{\Delta y_i}{\Delta x_i} \quad (14)

Proof. 

According to the Cauchy inequality,

\det(J_{k+1}) \le \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^{n}\frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^{n}\frac{\cos^2 2\theta_{i,k+1}}{r_{i,k+1}^4} - \sum_{i=1}^{n}\frac{\sin^2 2\theta_{i,k+1}}{r_{i,k+1}^4}\right] = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^{n}\frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^{n}\frac{1}{r_{i,k+1}^4}\right] \triangleq F(\gamma) \quad (15)

Consider the function

F(\gamma) = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^{n}\frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^{n}\frac{1}{r_{i,k+1}^4}\right] \quad (16)

where γ = [r_{1,k+1}, r_{2,k+1}, …, r_{n,k+1}]^T.

To achieve the maximum of F(γ), take the partial derivatives of F(γ) with respect to ϕi,k. Then, we have

\frac{\partial F(\gamma)}{\partial \phi_{i,k}} = \frac{1}{4\sigma_\theta^4}\left[\frac{4 d_i}{r_{i,k+1}^4}\sum_{j=1}^{n}\frac{1}{r_{j,k+1}^2} - \frac{4 d_i}{r_{i,k+1}^6}\right]\left(\Delta y_i \cos\phi_{i,k} - \Delta x_i \sin\phi_{i,k}\right) \quad (17)

Setting ∂F(γ)/∂ϕ_{i,k} = 0, we obtain

\phi_0 = \left[\tan^{-1}\frac{\Delta y_1}{\Delta x_1},\ \tan^{-1}\frac{\Delta y_2}{\Delta x_2},\ \ldots,\ \tan^{-1}\frac{\Delta y_n}{\Delta x_n}\right]^T \quad (18)

Additionally, let H ∈ ℝ^{n×n} denote the Hessian matrix of F(γ) at ϕ_0, with elements

H_{ij} = \frac{\partial^2 F(\gamma)}{\partial \phi_{i,k}\, \partial \phi_{j,k}} \quad (19)

We obtain

H_{ij}\big|_{\phi_0} = \begin{cases} 0, & i \ne j \\ -\dfrac{d_i\, \Delta r_i}{\sigma_\theta^4\, r_{i,k+1}^4} \displaystyle\sum_{l=1}^{n}\frac{1}{r_{l,k+1}^2}, & i = j \end{cases} \quad (20)

where Δr_i = √(Δx_i² + Δy_i²). Obviously, H is a negative definite matrix; as a consequence, ϕ_0 is the maximum point.    □

Furthermore, taking the limitation of the turn rate into consideration, the heading direction of sensor i at time k is

\phi_{i,k} = \begin{cases} \underline{\phi}_{i,k}, & \phi_{i,k}^{*} < \underline{\phi}_{i,k} \\ \phi_{i,k}^{*}, & \underline{\phi}_{i,k} \le \phi_{i,k}^{*} \le \overline{\phi}_{i,k} \\ \overline{\phi}_{i,k}, & \phi_{i,k}^{*} > \overline{\phi}_{i,k} \end{cases} \quad (21)

where \overline{\phi}_{i,k} = ∠v_i(k−1) + φ_max and \underline{\phi}_{i,k} = ∠v_i(k−1) − φ_max.

Note that the determinant of the FIM increases as the range between the sensors and the target decreases when the angles among the sensors remain unchanged. In other words, the optimal heading direction is always toward the target, so we can steer the sensors to directly approach the CRO around the target. The tracking accuracy is thereby improved, although it does not yet reach the optimum.
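A minimal sketch of the resulting heading rule, combining (14) with the turn-rate clamp (21); the wrap-then-clip step is an implementation detail not spelled out in the text, added so the clamp acts on the smallest signed turn:

```python
import numpy as np

def suboptimal_heading(sensor, target_pred, prev_heading, phi_max):
    """Point at the predicted target position (Eq. (14)), then clamp the
    turn to [prev_heading - phi_max, prev_heading + phi_max] as in Eq. (21)."""
    phi_star = np.arctan2(target_pred[1] - sensor[1], target_pred[0] - sensor[0])
    turn = (phi_star - prev_heading + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return prev_heading + np.clip(turn, -phi_max, phi_max)
```

With an unconstrained turn rate the sensor heads straight at the target; with a small φ_max the heading change per step is limited to ±φ_max.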

4.2. On the CRO Distance dmin

When all sensors reach the CRO around the target, which is a circle centered on the target with radius d_min, we have r_i = d_min. Define Δθ_i = θ_{i+1} − θ_i, i ∈ {1, 2, …, n−1}. The sensor–target geometry is depicted in Figure 3. In this section, the time step k is omitted for convenience of description.

Figure 3. Sensor–target geometry.

In order to simplify the analysis of the optimal sensor–target geometry, the related propositions are first recalled.

Proposition 1. 

The determinant of the FIM in (11) remains unchanged under the following three operations:

1. Switching the positions of any two sensors;
2. Rotating all the sensors around the target;
3. Flipping arbitrary sensors about the target.

Remark 1. 

Proposition 1 originated in [18] and was recognized in [20]. It implies that det(J) is invariant under these geometric operations.

Without loss of generality, the sensors are assumed to be renumbered counterclockwise with θ_i ∈ (0, π] through the geometric operations of Proposition 1, which is equivalent to flipping about the target those sensors whose actual LOS angles lie in (−π, 0].

The target tracking system achieves its optimal estimation performance when all sensors move at the same speed as the target on the CRO in formation, as established by the following results.

Lemma 2 

([30]). Consider n bearings-only sensors tracking a single target. When all sensors are on the CRO around the target (r_i = d_min) with Δθ_1 = Δθ_2 = ⋯ = Δθ_{n−1} = Δθ, the Fisher information determinant given in (11) has the upper bound n²/(4σ_θ⁴ d_min⁴). The upper bound is achieved when Δθ_i = π/n.

Remark 2. 

When n ≥ 3, there are two solutions for the optimal geometry with equal angular distribution in [30], i.e., Δθ_i = π/n or Δθ_i = 2π/n. However, the optimal geometry with Δθ_i = 2π/n can be obtained by flipping part of the sensors about the target in the optimal geometry with Δθ_i = π/n. Therefore, we consider them identical optimal geometries for n sensors and retain the solution Δθ_i = π/n, which avoids the complexity arising from two optional solutions.
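Lemma 2 can be verified numerically. The sketch below evaluates det(J) via formula (2) of (11) with all ranges equal to d_min and checks that the equal spacing Δθ_i = π/n attains the bound n²/(4σ_θ⁴ d_min⁴) (here with σ_θ = 1); helper names are illustrative:

```python
import numpy as np
from itertools import combinations

def det_on_cro(thetas, d_min, sigma=1.0):
    """det(J) from formula (2) of (11) with all ranges equal to d_min."""
    pair_sum = sum(np.sin(b - a)**2 for a, b in combinations(thetas, 2))
    return pair_sum / (sigma**4 * d_min**4)

n, d_min = 5, 2.0
thetas = [l * np.pi / n for l in range(n)]  # equal spacing: Delta theta_i = pi/n
bound = n**2 / (4 * d_min**4)               # n^2 / (4 sigma^4 d_min^4), sigma = 1
```

Any other angular configuration on the CRO stays at or below the bound.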

For more general circumstances, Δθ_i is less restricted. Denote S = {1, 2, …, n} as the set of all sensors and S_i = {n_1^i, n_2^i, …, n_{q_i}^i} ⊆ S as a subset of S, i ∈ {1, 2, …, m}. Denote Ξ = {q_1, q_2, …, q_m}, where q_i = |S_i|. Then, we have the following result:

Theorem 2. 

Consider the bearings-only tracking problem. When all sensors are on the CRO around the target (r_i = d_min), the Fisher information determinant given in (11) has the upper bound n²/(4σ_θ⁴ d_min⁴). The upper bound is achieved if the following conditions hold true:

\bigcup_{i=1}^{m} S_i = S, \quad S_i \cap S_j = \emptyset\ (i \ne j), \quad 2 \le q_i \le n
\theta_{n_{l+1}^i} - \theta_{n_1^i} = \frac{l}{q_i}\pi, \quad l \in \{1, 2, \ldots, q_i - 1\} \quad (22)

Proof. 

When ri=dmin, then

\det(J) = \frac{1}{\sigma_\theta^4 d_{\min}^4}\sum_{\Psi}\sin^2(\theta_{ij}) = \frac{1}{2\sigma_\theta^4 d_{\min}^4}\left[\frac{n(n-1)}{2} - \sum_{\Psi}\cos(2\theta_{ij})\right] \quad (23)

The sensors in S_i are placed as in Lemma 2. Then,

\sum_{\Psi_i}\cos(2\theta_{ab}) = -\frac{q_i}{2} \quad (24)

where Ψ_i = {{a, b}} is the set of all combinations of a and b with a < b and a, b ∈ S_i. Since \sum_{i=1}^{n}\cos\left(\alpha + \frac{2(i-1)}{n}\pi\right) = 0 (α arbitrary, n ≥ 2), for j ∈ S_i and l ∈ S_g (i ≠ g),

\sum_{l=1}^{q_g}\cos(2\theta_{jl}) = \sum_{l=1}^{q_g}\cos\left(2\theta_{j n_1^g} + \frac{2(l-1)\pi}{q_g}\right) = 0 \quad (25)

Finally, consider the following sum:

\sum_{\Psi}\cos(2\theta_{ij}) = \sum_{\Psi'}\cos(2\theta_{ij}) + \sum_{\Psi\setminus\Psi'}\cos(2\theta_{ij}) = \sum_{i=1}^{m}\left(-\frac{q_i}{2}\right) + 0 = -\frac{n}{2} \quad (26)

where Ψ' = ⋃_{i=1}^{m} Ψ_i.

Hence,

\det(J) = \frac{n^2}{4\sigma_\theta^4 d_{\min}^4} \quad (27)

   □

In view of (25) in the proof of Theorem 2, the angles between sensors not in the same subset do not affect the optimal sensor–target geometry. In addition, the geometry remains optimal when the sensors are transformed by the geometric operations in Proposition 1. Therefore, we can classify the optimal sensor–target geometries by the set Ξ = {q_1, q_2, …, q_m}, which is recognized as the partition case dividing n into a sum of integers no less than 2. In other words, optimal sensor–target geometries are regarded as identical for equivalent Ξ. Figure 4 and Figure 5 illustrate some examples of the optimal sensor–target geometry for n = 4, 5. In Figure 4a,b, the two sensor–target geometries are considered the same because the sensors are both divided into two subsets with Ξ = {2,2}. Additionally, the configurations with the same Ξ = {2,3} in Figure 5a,b are also regarded as identical, because the optimal sensor–target geometry in Figure 5b can be obtained by flipping sensor 4 about the target in Figure 5a. The optimal sensor–target geometry with the other partition case for n = 5 is shown in Figure 5c,d, which is regarded as identical optimal geometry with Ξ = {5}; it differs from the optimal geometry in Figure 5a,b due to the different partition case.
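Theorem 2 is likewise easy to check numerically: with the subsets internally spaced by π/q_i and arbitrary independent offsets between subsets, det(J) attains n²/(4σ_θ⁴ d_min⁴). A sketch for n = 5 and Ξ = {2, 3} (σ_θ = d_min = 1 for simplicity; helper name illustrative):

```python
import numpy as np
from itertools import combinations

def det_cro(thetas, d_min=1.0, sigma=1.0):
    """det(J) from formula (2) of (11) with all sensors on the CRO."""
    pair_sum = sum(np.sin(b - a)**2 for a, b in combinations(thetas, 2))
    return pair_sum / (sigma**4 * d_min**4)

# n = 5 with partition Xi = {2, 3}: each subset equally spaced by pi/q_i,
# with arbitrary independent offsets alpha and beta between the subsets.
alpha, beta = 0.7, 0.1
thetas = [alpha + l * np.pi / 2 for l in range(2)] + [beta + l * np.pi / 3 for l in range(3)]
```

The value is n²/4 = 25/4 here, regardless of the choice of offsets, confirming that inter-subset angles do not matter.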

Figure 4. Optimal sensor–target geometries for n = 4. (a) S_1 = {1,2}, S_2 = {3,4}. (b) S_1 = {1,3}, S_2 = {2,4}.

Figure 5. Optimal sensor–target geometry for n = 5. (a) S_1 = {1,3}, S_2 = {2,4,5}. (b) S_1 = {1,3}, S_2 = {2,4,5}. (c) S_1 = {1,2,3,4,5}. (d) S_1 = {1,2,3,4,5}.

Remark 3. 

Although the number of optimal sensor–target geometries described in Theorem 2 is infinite due to rotation invariance, we are only concerned with the partition cases of the set S, according to the classification method in this paper. The number of partition cases dividing n into a sum of positive integers no less than 2, denoted as A(n), asymptotically equals

\frac{1}{4\sqrt{3}\,n}\exp\left(\pi\sqrt{\frac{2n}{3}}\right) - \frac{1}{4\sqrt{3}\,(n-1)}\exp\left(\pi\sqrt{\frac{2(n-1)}{3}}\right) [37].
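The partition cases themselves are easy to enumerate for small n; the recursive sketch below (hypothetical helper name) lists every way to write n as a sum of parts no less than 2, giving A(4) = 2, A(5) = 2, and A(6) = 4:

```python
def partitions_min2(n, smallest=2):
    """Enumerate the partition cases of Remark 3: ways to write n as a sum
    of integers no less than 2 (order ignored; parts emitted non-decreasing)."""
    if n == 0:
        return [[]]
    out = []
    for part in range(smallest, n + 1):
        for rest in partitions_min2(n - part, part):
            out.append([part] + rest)
    return out
```

For instance, `partitions_min2(5)` yields the two cases {2,3} and {5} used in Scenario 1 of Section 6.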

5. Motion Coordination

In this section, we propose a motion coordination strategy for mobile sensors to improve target tracking performance. According to our analysis above, the mobile sensors are required to reach the CRO around the target as soon as possible and coordinate with each other. Figure 6 illustrates the main steps of sensor motion coordination to achieve optimal geometry.

Figure 6. Sensor motion coordination to achieve optimal geometry.

5.1. Single Sensor Motion

In practice, the real state of the target is unknown. We utilize the one-step predicted position of the target, p̂_{k+1|k} = [x̂_{k+1|k}^p, ŷ_{k+1|k}^p]^T, instead of p_{k+1} at time k. The velocity of sensor i is designed as

u_i(k) = \begin{cases} v_{\max} T\, [\cos\phi_{i,k},\ \sin\phi_{i,k}]^T, & r_{i,k} > d_{\min} \\ \hat{p}_{k+1|k} - \hat{p}_{k|k}, & r_{i,k} = d_{\min} \end{cases} \quad (28)

As we want the sensors to approach the target as soon as possible, the velocities of the sensors are set to their maximum before they reach the boundary of the CRO around the target. After the sensors reach the CRO around the target, they are expected to follow the target on the CRO around the target.

5.2. Coordination Strategy

As all sensors reach the CRO around the target, they enter the coordination stage. The coordination strategy consists of matching the optimal sensor–target geometry and sensor motion optimization. The matching task involves allocating the sensors into subsets by comparing the current sensor–target geometry with the optimal geometry for the desired partition case Ξ. The sensor motion is then optimized to achieve the matched optimal geometry with minimum energy consumption.

Let ŝ_i(k+1) = [x̂_i(k+1), ŷ_i(k+1)]^T denote the expected location of sensor i at time k+1 calculated from u_i(k) in (28) as

\hat{s}_i(k+1) = s_i(k) + u_i(k) \quad (29)

Define θ̂_i as the predicted angle

\hat{\theta}_i = \tan^{-1}\frac{\hat{y}_{k+1|k}^p - \hat{y}_i(k+1)}{\hat{x}_{k+1|k}^p - \hat{x}_i(k+1)} \quad (30)

where θ̂_i is constrained within the range of 0 to π to simplify the step of matching the optimal geometry.

Matching optimal geometry for a given Ξ={q1,q2,,qm} can be described as follows:

\min\ \kappa = \sum_{i=1}^{m}\sum_{l=1}^{q_i-1}\left(\hat{\theta}_{n_{l+1}^i} - \hat{\theta}_{n_1^i} - \frac{l}{q_i}\pi\right)^2
\text{s.t.}\ \ S_i \cap S_j = \emptyset\ (i \ne j);\ \ S_i = \{n_1^i, n_2^i, \ldots, n_{q_i}^i\} \subseteq S;\ \ |S_i| = q_i,\ i \in \{1, \ldots, m\};\ \ l \in \{1, \ldots, q_i - 1\} \quad (31)

where κ is defined as the degree of difference from the optimal sensor–target geometry. The problem is naturally a combinatorial optimization problem, which is NP-hard. An algorithm that searches for an approximate solution for a given Ξ, based on greedy search, is shown in Algorithm 1.

Algorithm 1 Matching optimal geometry.
Input: S = {1, 2, …, n}, Ξ = {q_1, q_2, …, q_m}
Output: the sensor grouping S_1, S_2, …, S_m
1: for i = 1, …, m do
2:   for j ∈ S do
3:     for l = 2, …, q_i do
4:       L_j^l = argmin_{k ∈ S∖{j}} (θ̂_k − θ̂_j − ((l−1)/q_i)π)²;
5:     end for
6:     κ_j = Σ_{l=2}^{q_i} (θ̂_{L_j^l} − θ̂_j − ((l−1)/q_i)π)²;
7:   end for
8:   Find the minimum κ_j; set S_i ← {j, L_j^2, …, L_j^{q_i}} and S ← S∖S_i;
9: end for
10: return {S_1, S_2, …, S_m}

Remark 4. 

The step of matching the optimal geometry only needs to be performed once, when the sensors all reach the CRO. The sensor coordination follows the optimal geometry matched via Algorithm 1 during the subsequent sensor movement on the CRO. Moreover, the computational complexity of Algorithm 1 is O(mn²).
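Algorithm 1 can be sketched in Python as below. One practical detail is added that the pseudocode leaves implicit: each chosen partner is removed from the candidate set so a sensor is not matched twice within the same subset. All function names are illustrative:

```python
import numpy as np

def match_optimal_geometry(theta_hat, Xi):
    """Greedy matching of Algorithm 1: for each subset size q_i, try every
    remaining sensor j as the anchor, pick the sensor nearest to each ideal
    offset (l-1)*pi/q_i, and keep the anchor with the smallest residual kappa_j."""
    S = set(range(len(theta_hat)))
    groups = []
    for q in Xi:
        best = None
        for j in S:
            rest, members, kappa = set(S) - {j}, [j], 0.0
            for l in range(2, q + 1):
                target = theta_hat[j] + (l - 1) * np.pi / q
                k = min(rest, key=lambda s: (theta_hat[s] - target) ** 2)
                kappa += (theta_hat[k] - target) ** 2
                rest.discard(k)       # avoid reusing a sensor within the subset
                members.append(k)
            if best is None or kappa < best[0]:
                best = (kappa, members)
        groups.append(sorted(best[1]))
        S -= set(best[1])
    return groups
```

For predicted angles already in an optimal Ξ = {2, 3} configuration, the residual κ is zero and the grouping is recovered exactly.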

After matching the optimal sensor–target geometry, the sensors engage in motion coordination to achieve the optimal geometry, thereby improving tracking performance. For the purpose of energy conservation, sensor motion optimization can be described as a nonlinear optimization problem

\min\ \vartheta = \sum_{i=1}^{m}\sum_{j=1}^{q_i}\left\|\bar{u}_{n_j^i}(k)\right\|
\text{s.t.}\ \ \hat{\theta}_{n_{j+1}^i}^{*} - \hat{\theta}_{n_1^i}^{*} = \frac{j}{q_i}\pi;\ \ \left\|s_{n_j^i}^{*}(k+1) - \hat{p}_{k+1|k}\right\| = d_{\min};\ \ j \in \{1, \ldots, q_i - 1\},\ i \in \{1, \ldots, m\} \quad (32)

where ϑ is the sum of the distances traveled by the sensors; ū_{n_j^i}(k) = s_{n_j^i}^*(k+1) − ŝ_{n_j^i}(k+1), and θ̂_i^* is the predicted angle for s_i^*(k+1). The nonlinear optimization problem in (32) can be solved by “fmincon” (Optimization Toolbox) in Matlab®. Therefore, the control input for sensor i is finally updated as

u_i(k) \leftarrow u_i(k) + \bar{u}_i(k) = s_i^{*}(k+1) - s_i(k) \quad (33)

The restriction of the turn rate can be implemented by limiting the heading change to min{φ_max, |∠u_i(k) − ∠u_i(k−1)|}.
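In place of Matlab's fmincon, a reduced sketch of the motion optimization (32) in Python with SciPy: because the constraints fix the angular offsets within each subset and pin all waypoints to the CRO, the waypoints of a subset are determined by a single anchor angle, so the problem shrinks to a one-dimensional search per subset. The pairing of expected positions to waypoints is assumed to be given by the ordering from Algorithm 1; function names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def subset_waypoints(beta, q, center, d_min):
    """Waypoints on the CRO for one subset: anchor angle beta plus the
    offsets l*pi/q required by the constraints of (32)."""
    ang = beta + np.arange(q) * np.pi / q
    return center + d_min * np.column_stack([np.cos(ang), np.sin(ang)])

def optimize_subset(expected, center, d_min):
    """Choose the anchor angle minimizing the total distance between the
    expected positions s_hat(k+1) and the optimal-geometry waypoints."""
    q = len(expected)
    cost = lambda b: np.linalg.norm(
        subset_waypoints(b, q, center, d_min) - expected, axis=1).sum()
    res = minimize_scalar(cost, bounds=(-np.pi, np.pi), method="bounded")
    return subset_waypoints(res.x, q, center, d_min)
```

If the expected positions already satisfy the optimal geometry, the optimizer returns them essentially unchanged (ϑ ≈ 0), so no correction ū is applied.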

Remark 5. 

In terms of bearings-only target tracking accuracy, both the enveloping and semienveloping optimal sensor–target geometry configurations are considered equivalent. The selection of the configurations depends on the objectives of target tracking. When the sensors are expected to perform other operations, such as surveillance, recording, and so on, circumnavigation tracking is a more preferable approach, driving the sensors to achieve complete surrounding of a target on the CRO.

5.3. Collision Avoidance

A distance constraint is necessary to avoid collisions among the mobile sensors. Let ρ_min denote the minimum allowed distance between two sensors. When ‖s_i(k) − s_j(k)‖ < ρ_min, the collision avoidance algorithm is enabled, and we have

s_i(k+1) = \begin{bmatrix} x_i(k) + \|u_i(k)\|\cos(\angle u_i(k) \pm \delta) \\ y_i(k) + \|u_i(k)\|\sin(\angle u_i(k) \pm \delta) \end{bmatrix}, \quad s_j(k+1) = \begin{bmatrix} x_j(k) + \|u_j(k)\|\cos(\angle u_j(k) \pm \delta) \\ y_j(k) + \|u_j(k)\|\sin(\angle u_j(k) \pm \delta) \end{bmatrix} \quad (34)

where δ is a small heading change for the sensor, and the sign of ±δ is selected to increase the distance between the two sensors.

To summarize, the sensor motion coordination algorithm is presented in Algorithm 2.

Algorithm 2 Sensor motion coordination for target tracking.
Input: the estimate of the target at time k, x̂_{k|k}; the location of sensor i at time k, s_i(k)
Output: the estimate of the target at time k+1, x̂_{k+1|k+1}; the location of sensor i at time k+1, s_i(k+1)
1: Receive x̂_{k+1|k} from the estimation center;
2: Compute u_i(k) with (28), (31), (33), and (34);
3: Move to the new position s_i(k+1);
4: Take new measurements z_{k+1} of the target, and estimate the state of the target via the CKF;
5: return x̂_{k+1|k+1}, s_i(k+1)

6. Simulation Experiments

In this section, we illustrate the proposed sensor motion coordination algorithm with some simulation examples. By default, all variables used in the simulation were in SI units. As introduced in Section 3.1, we used a CKF method to estimate the state of the target. For comparison, the gradient descent method in [34] and the projection method in [35] were adopted to optimize the sensor motion under the same conditions.

To compare the tracking performance, we used the root mean square error (RMSE) of the position of the target. The RMSE of position at time k is defined as

\mathrm{RMSE}_p(k) = \sqrt{\frac{1}{N_c}\sum_{i=1}^{N_c}\left[\left(x_i(k) - \hat{x}_i(k)\right)^2 + \left(y_i(k) - \hat{y}_i(k)\right)^2\right]}

where N_c is the total number of Monte Carlo runs, and [x_i(k), y_i(k)]^T and [x̂_i(k), ŷ_i(k)]^T are the true and estimated positions at the i-th Monte Carlo run, respectively.
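The RMSE above can be computed directly; a small sketch assuming NumPy arrays of shape (N_c, 2) holding the true and estimated positions at one time step:

```python
import numpy as np

def rmse_position(true_xy, est_xy):
    """Position RMSE at one time step over N_c Monte Carlo runs.
    true_xy, est_xy: arrays of shape (N_c, 2)."""
    err2 = ((true_xy - est_xy) ** 2).sum(axis=1)  # squared position error per run
    return np.sqrt(err2.mean())
```

For instance, if every run has a 3-4-5 position error, the RMSE is 5.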

Scenario 1: We consider a problem of tracking a moving target using 5 mobile sensors in 2D space. The dynamic function of the target is described by the constant velocity model

x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + w_k

where x_k = [x_k, ẋ_k, y_k, ẏ_k]^T and T = 0.2 s is the sampling time. The process noise w_k is zero-mean Gaussian with covariance matrix Q_k = diag[qM, qM], where

M = \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}

The scalar parameter q = 0.1 m²/s³ denotes the process noise intensity. The measurements taken by sensor i at time k are given in (1), with σ_θ = 0.1 rad.

The true initial state of the target is x_0 = [50 m, 3 m/s, 50 m, 1 m/s]^T, and its associated covariance is P_{0|0} = diag[1000 m², 100 m²/s², 1000 m², 100 m²/s²]. The initial state estimate x̂_{0|0} is randomly chosen from N(x_0, P_{0|0}) in each run. The initial positions of the 5 sensors are s_1(0) = [100 m, 120 m]^T, s_2(0) = [150 m, 50 m]^T, s_3(0) = [100 m, 60 m]^T, s_4(0) = [100 m, 120 m]^T, and s_5(0) = [100 m, 200 m]^T. The maximum velocity and turn rate are v_max = 10 m/s and φ_max = π/3 rad, respectively. The minimum sensor–target distance is d_min = 50 m, and the minimum distance among the sensors is ρ_min = 10 m. Set N_c = 2000.

There are two partition cases for n = 5: Ξ = {2,3} and Ξ = {5}. We first compared the tracking performance and the distance traveled by the mobile sensors when they are steered to achieve these two optimal sensor–target geometries. For comparison, we also included static sensors and mobile sensors whose waypoints were computed by the methods in [34,35]. Figure 7a,b show the trajectories of the 5 bearings-only sensors achieving the optimal geometry with partition cases Ξ = {5} and Ξ = {2,3}, respectively. As shown in Figure 7b, after matching the optimal geometry, sensors 1, 3, and 5 are assigned to the subset with three sensors and sensors 2 and 4 to the subset with two sensors. The sensors eventually move with the target in the optimal geometry, as expected. The optimal geometry is referenced to the estimated target position and therefore deviates from the true optimal sensor–target geometry; this discrepancy is unavoidable in practice since the true target position is unknown. However, the proposed motion coordination method enhances the estimation performance, and the circular formation approaches the true optimal geometry more closely, thus approaching the theoretically optimal estimation accuracy, as shown by the position RMSEs compared in Figure 8. Clearly, the tracking performance of the mobile sensors is better than that of the static sensors. The proposed method significantly improves the tracking performance and exhibits a lower estimation error than the method in [34], while the tracking performance of the method in [35] is close to that of the proposed method in this scenario. The difference in tracking performance between the two optimal geometries with Ξ = {2,3} and Ξ = {5} is negligible. Additionally, the total distances traveled by all mobile sensors to achieve the optimal geometry with Ξ = {2,3} and Ξ = {5} are 1488.9 m and 1548.6 m, respectively. The shorter distance for Ξ = {2,3} is attributed to the fact that, when the sensors reach the CRO, the sensor–target geometry is closer to the optimal geometry with Ξ = {2,3}, whose κ is smaller.
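The partition cases Ξ used above come from enumerating the ways of writing n as a sum of integers no less than two. A minimal sketch that reproduces the cases for n = 4 and n = 5 (the function name is our own):

```python
def partitions_min2(n, smallest=2):
    """Yield the partitions of n into integer parts no less than 2,
    with parts listed in non-decreasing order."""
    if n == 0:
        yield []
        return
    for part in range(smallest, n + 1):
        for rest in partitions_min2(n - part, part):
            yield [part] + rest

print(list(partitions_min2(5)))  # [[2, 3], [5]]
print(list(partitions_min2(4)))  # [[2, 2], [4]]
```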

Figure 7. Sensor trajectory for target tracking in Scenario 1. (a) The optimal geometry with partition case Ξ={5}. (b) The optimal geometry with partition case Ξ={2,3}.

Figure 8. Comparison of RMSEp for target tracking in Scenario 1 [34,35].

Scenario 2: We consider the problem of tracking a moving target using 4 mobile sensors in 2D space. The dynamics of the target are described by

x_{k+1} = \begin{bmatrix}
1 & \frac{\sin\Omega_k T}{\Omega_k} & 0 & -\frac{1-\cos\Omega_k T}{\Omega_k} & 0 \\
0 & \cos\Omega_k T & 0 & -\sin\Omega_k T & 0 \\
0 & \frac{1-\cos\Omega_k T}{\Omega_k} & 1 & \frac{\sin\Omega_k T}{\Omega_k} & 0 \\
0 & \sin\Omega_k T & 0 & \cos\Omega_k T & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix} x_k + w_k

where x_k = [x_k, \dot{x}_k, y_k, \dot{y}_k, \Omega_k]^T and T = 1 s. The process noise w_k is zero-mean Gaussian with covariance matrix Q_k = diag[q_1\Gamma, q_1\Gamma, q_2 T], where

\Gamma = \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}

and q1 = 0.1 m/s³ and q2 = 1.75×10⁻⁴ rad/s² denote the process noise intensities. The true initial state of the target is x0 = [0 m, 20 m/s, 0 m, 0 m/s, 0.05 rad/s]^T, and its associated covariance is P0|0 = diag[1000 m², 100 m²/s², 1000 m², 100 m²/s², 10⁻⁴ rad²/s²]. The initial positions of the 4 sensors are randomly deployed. The remaining parameters are: σθ = 0.05 rad, dmin = 100 m, ρmin = 20 m, vmax = 50 m/s, φmax = π/3 rad, and Nc = 2000.
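The transition above is the standard constant-turn-rate model with state [x, ẋ, y, ẏ, Ω]. A minimal sketch of the transition matrix and process noise covariance, under our own function names (a nonzero turn rate Ω is assumed; the Ω → 0 limit would need a separate nearly-constant-velocity branch):

```python
import numpy as np

def ct_transition(omega, T=1.0):
    """Coordinated-turn transition matrix for state [x, vx, y, vy, omega].
    Assumes omega != 0 (divide-by-omega terms)."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1, s / omega,       0, -(1 - c) / omega, 0],
        [0, c,               0, -s,               0],
        [0, (1 - c) / omega, 1, s / omega,        0],
        [0, s,               0, c,                0],
        [0, 0,               0, 0,                1],
    ])

def ct_process_noise(q1=0.1, q2=1.75e-4, T=1.0):
    """Block-diagonal process noise Q_k = diag(q1*Gamma, q1*Gamma, q2*T)."""
    Gamma = np.array([[T**3 / 3, T**2 / 2],
                      [T**2 / 2, T]])
    Q = np.zeros((5, 5))
    Q[0:2, 0:2] = q1 * Gamma   # x-channel
    Q[2:4, 2:4] = q1 * Gamma   # y-channel
    Q[4, 4] = q2 * T           # turn-rate channel
    return Q
```

A coordinated turn rotates the velocity vector without changing its magnitude, which is a quick sanity check on the matrix.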

There are two partition cases for n = 4: Ξ = {2,2} and Ξ = {4}. However, the optimal geometry with Ξ = {4} can be obtained from the optimal geometry with Ξ = {2,2} by rotating the sensors in one subset as a whole by a proper angle. Thus, the partition case for n = 4 is selected as Ξ = {2,2} in Scenario 2. Figure 9 shows the trajectories of the 4 bearings-only sensors tracking the target. In this run, sensors 1 and 3 are assigned to one subset and sensors 2 and 4 to the other after matching the optimal geometry. Figure 10 shows the compared position RMSEs. Clearly, the tracking performance of the static sensors is the poorest, and it continues to degrade as their distance from the target increases. The proposed method improves the tracking performance and exhibits a lower estimation error than the methods in [34,35] when tracking a maneuvering turning target.

Figure 9. Sensor trajectory for target tracking in Scenario 2.

Figure 10. Comparison of RMSEp for target tracking in Scenario 2 [34,35].

7. Conclusions

In this study, optimal sensor–target geometry and a motion coordination strategy were proposed for a target tracking system using mobile bearings-only sensors in 2D space. We discussed the suboptimality of approaching the target for bearings-only sensors to improve tracking performance. A general optimal sensor–target geometry was derived with uniform sensor–target distance using D-optimality for arbitrary n (n2) bearings-only sensors. A motion coordination algorithm was developed based on the previous optimality analysis to achieve the optimal target tracking performance efficiently. In future work, we will investigate a distributed optimization method for mobile sensors and its extension to multitarget tracking.
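As a self-contained illustration of the D-optimality criterion underlying the geometry analysis, the sketch below uses the standard bearings-only Fisher information matrix for a single stationary 2D target (our own function names; equal bearing-noise σθ across sensors is assumed) and checks that, at equal ranges, uniformly spread bearings yield a larger FIM determinant than clustered ones:

```python
import numpy as np

def bearing_fim(angles, dists, sigma=0.05):
    """Fisher information matrix for bearings-only localization of a
    stationary 2D target; sensor i sits at bearing angles[i] (rad) and
    range dists[i] (m) from the target, with bearing noise std sigma."""
    J = np.zeros((2, 2))
    for th, d in zip(angles, dists):
        s, c = np.sin(th), np.cos(th)
        # Rank-one contribution of one bearing measurement.
        J += np.array([[s * s, -s * c],
                       [-s * c, c * c]]) / (sigma ** 2 * d ** 2)
    return J

d = [200.0] * 3
det_uniform = np.linalg.det(bearing_fim([0.0, 2 * np.pi / 3, 4 * np.pi / 3], d))
det_clustered = np.linalg.det(bearing_fim([0.0, 0.1, 0.2], d))
```

For n sensors at a common range d, the trace of J is fixed at n/(σ²d²), so det J is maximized when J is a scaled identity, which uniformly spaced bearings achieve.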

Author Contributions

Conceptualization, S.W.; methodology, S.W.; software, S.W.; validation, S.W. and Y.L.; formal analysis, S.W. and Y.L.; investigation, S.W. and Y.L.; resources, Y.L.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, S.W., Y.L. and G.Q.; supervision, Y.L.; funding acquisition, Y.L. and A.S. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This work was supported in part by the National Natural Science Foundation of China (62171223 and 61871221).

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1. Farina A. Target tracking with bearings-only measurements. Signal Process. 1999;78:61–78. doi: 10.1016/S0165-1684(99)00047-X. [DOI] [Google Scholar]
  • 2.Bar-Shalom Y., Li X.R., Kirubarajan T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software. John Wiley & Sons; Hoboken, NJ, USA: 2004. [Google Scholar]
  • 3.Shi Y., Farina A., Song T.L., Peng D., Guo Y. Distributed fusion in harsh environments using multiple bearings-only sensors with out-of-sequence-refined measurements. Aerosp. Sci. Technol. 2021;117:106950. doi: 10.1016/j.ast.2021.106950. [DOI] [Google Scholar]
  • 4.Wei Z., Duan Z., Han Y., Mallick M. A New Coarse Gating Strategy Driven Multidimensional Assignment for Two-Stage MHT of Bearings-Only Multisensor-Multitarget Tracking. Sensors. 2022;22:1802. doi: 10.3390/s22051802. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Ding X., Wang J., Wang C., Xia K., Xin M. Cooperative Estimation and Guidance Strategy Using Bearings-Only Measurements. J. Guid. Control Dyn. 2023;46:761–769. doi: 10.2514/1.G007119. [DOI] [Google Scholar]
  • 6.Jiang H., Wang X., Deng Y., Zhang Y. Event-Triggered Distributed Bias-Compensated Pseudolinear Information Filter for Bearings-Only Tracking Under Measurement Uncertainty. IEEE Sens. J. 2023;23:8504–8513. doi: 10.1109/JSEN.2023.3243039. [DOI] [Google Scholar]
  • 7. Bar-Shalom Y. Multitarget-Multisensor Tracking: Applications and Advances. Artech House; Norwood, MA, USA: 1993. [Google Scholar]
  • 8. Aidala V., Nardone S. Biased Estimation Properties of the Pseudolinear Tracking Filter. IEEE Trans. Aerosp. Electron. Syst. 1982;18:432–441. doi: 10.1109/TAES.1982.309250. [DOI] [Google Scholar]
  • 9.Bu S., Meng A., Zhou G. A New Pseudolinear Filter for Bearings-Only Tracking without Requirement of Bias Compensation. Sensors. 2021;21:5444. doi: 10.3390/s21165444. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Doğançay K. 3D Pseudolinear Target Motion Analysis From Angle Measurements. IEEE Trans. Signal Process. 2015;63:1570–1580. doi: 10.1109/TSP.2015.2399869. [DOI] [Google Scholar]
  • 11.Julier S.J., Uhlmann J.K. Unscented filtering and nonlinear estimation. Proc. IEEE. 2004;92:401–422. doi: 10.1109/JPROC.2003.823141. [DOI] [Google Scholar]
  • 12.Arasaratnam I., Haykin S. Cubature Kalman filters. IEEE Trans. Autom. Control. 2009;54:1254–1269. doi: 10.1109/TAC.2009.2019800. [DOI] [Google Scholar]
  • 13.Ali W., Li Y., Chen Z., Raja M.A.Z., Ahmed N., Chen X. Application of Spherical-Radial Cubature Bayesian Filtering and Smoothing in Bearings Only Passive Target Tracking. Entropy. 2019;21:1088. doi: 10.3390/e21111088. [DOI] [Google Scholar]
  • 14.Liu Z., Ji L., Yang F., Qu X., Yang Z., Qin D. Cubature Information Gaussian Mixture Probability Hypothesis Density Approach for Multi Extended Target Tracking. IEEE Access. 2019;7:103678–103692. doi: 10.1109/ACCESS.2019.2931470. [DOI] [Google Scholar]
  • 15.Lv Y.W., Yang G.H. Centralized and distributed adaptive cubature information filters for multi-sensor systems with unknown probability of measurement loss. Inf. Sci. 2023;630:173–189. doi: 10.1016/j.ins.2023.02.035. [DOI] [Google Scholar]
  • 16.Arulampalam M.S., Maskell S., Gordon N., Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002;50:174–188. doi: 10.1109/78.978374. [DOI] [Google Scholar]
  • 17.Dunik J., Straka O., Simandl M., Blasch E. Random-point-based filters: Analysis and comparison in target tracking. IEEE Trans. Aerosp. Electron. Syst. 2015;51:1403–1421. doi: 10.1109/TAES.2014.130136. [DOI] [Google Scholar]
  • 18.Bishop A.N., Fidan B., Anderson B.D., Doğançay K., Pathirana P.N. Optimality analysis of sensor–target localization geometries. Automatica. 2010;46:479–492. doi: 10.1016/j.automatica.2009.12.003. [DOI] [Google Scholar]
  • 19.Yang C., Kaplan L., Blasch E. Performance Measures of Covariance and Information Matrices in Resource Management for Target State Estimation. IEEE Trans. Aerosp. Electron. Syst. 2012;48:2594–2613. doi: 10.1109/TAES.2012.6237611. [DOI] [Google Scholar]
  • 20.Zhao S., Chen B.M., Lee T.H. Optimal sensor placement for target localisation and tracking in 2D and 3D. Int. J. Control. 2013;86:1687–1704. doi: 10.1080/00207179.2013.792606. [DOI] [Google Scholar]
  • 21.Moreno-Salinas D., Pascoal A., Aranda J. Sensor Networks for Optimal Target Localization with Bearings-Only Measurements in Constrained Three-Dimensional Scenarios. Sensors. 2013;13:10386–10417. doi: 10.3390/s130810386. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Ronghua Z., Hemin S., Hao L., Weilin L. TDOA and track optimization of UAV swarm based on D-optimality. J. Syst. Eng. Electron. 2020;31:1140–1151. doi: 10.23919/JSEE.2020.000086. [DOI] [Google Scholar]
  • 23.Ucinski D. Optimal Measurement Methods for Distributed Parameter System Identification. CRC Press; Boca Raton, FL, USA: 2004. [Google Scholar]
  • 24.Xu S., Wu L., Doğançay K., Alaee-Kerahroodi M. A Hybrid Approach to Optimal TOA-Sensor Placement With Fixed Shared Sensors for Simultaneous Multi-Target Localization. IEEE Trans. Signal Process. 2022;70:1197–1212. doi: 10.1109/TSP.2022.3152232. [DOI] [Google Scholar]
  • 25.Martínez S., Bullo F. Optimal sensor placement and motion coordination for target tracking. Automatica. 2006;42:661–668. doi: 10.1016/j.automatica.2005.12.018. [DOI] [Google Scholar]
  • 26.Yang C., Kaplan L., Blasch E., Bakich M. Optimal Placement of Heterogeneous Sensors for Targets with Gaussian Priors. IEEE Trans. Aerosp. Electron. Syst. 2013;49:1637–1653. doi: 10.1109/TAES.2013.6558009. [DOI] [Google Scholar]
  • 27.Xu S., Doğançay K. Optimal sensor placement for 3-D angle-of-arrival target localization. IEEE Trans. Aerosp. Electron. Syst. 2017;53:1196–1211. doi: 10.1109/TAES.2017.2667999. [DOI] [Google Scholar]
  • 28.Sharp I., Yu K., Guo Y.J. GDOP analysis for positioning system design. IEEE Trans. Veh. Technol. 2009;58:3371–3382. doi: 10.1109/TVT.2009.2017270. [DOI] [Google Scholar]
  • 29.Zhong Y., Wu X.Y., Huang S.C. Geometric dilution of precision for bearing-only passive location in three-dimensional space. Electron. Lett. 2015;51:518–519. doi: 10.1049/el.2014.3700. [DOI] [Google Scholar]
  • 30.Li Y., Qi G., Sheng A. Optimal deployment of vehicles with circular formation for bearings-only multi-target localization. Automatica. 2019;105:347–355. doi: 10.1016/j.automatica.2019.04.008. [DOI] [Google Scholar]
  • 31.Chung T.H., Burdick J.W., Murray R.M. A decentralized motion coordination strategy for dynamic target tracking; Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006); Orlando, FL, USA. 15–19 May 2006; pp. 2416–2422. [Google Scholar]
  • 32.Doğançay K. Single-and multi-platform constrained sensor path optimization for angle-of-arrival target tracking; Proceedings of the 2010 18th European Signal Processing Conference; Aalborg, Denmark. 23–27 August 2010; pp. 835–839. [Google Scholar]
  • 33.Doğançay K. UAV Path Planning for Passive Emitter Localization. IEEE Trans. Aerosp. Electron. Syst. 2012;48:1150–1166. doi: 10.1109/TAES.2012.6178054. [DOI] [Google Scholar]
  • 34.Xu S., Doğançay K., Hmam H. Distributed pseudolinear estimation and UAV path optimization for 3D AOA target tracking. Signal Process. 2017;133:64–78. doi: 10.1016/j.sigpro.2016.10.012. [DOI] [Google Scholar]
  • 35. Doğançay K. Optimal Geometries for AOA Localization in the Bayesian Sense. Sensors. 2022;22:9802. doi: 10.3390/s22249802. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Zhong Y., Wu X., Huang S., Li C., Wu J. Optimality Analysis of sensor–target Geometries for Bearing-Only Passive Localization in Three Dimensional Space. Chin. J. Electron. 2016;25:391–396. doi: 10.1049/cje.2016.03.029. [DOI] [Google Scholar]
  • 37.Andrews G.E. Number Theory. Courier Corporation; North Chelmsford, MA, USA: 1994. [Google Scholar]



Articles from Sensors (Basel, Switzerland) are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
