Sensors (Basel, Switzerland). 2026 Feb 10;26(4):1148. doi: 10.3390/s26041148

Enhanced GNSS Navigation Using a Centered Error Entropy Extended Kalman Filter in Non-Gaussian Noise Environments

Yi Chang 1, Dah-Jing Jwo 1,2,*, Bo-Yang Lee 3
Editors: Yongbo Zhang, Yuhang Li
PMCID: PMC12944011  PMID: 41755094

Abstract

Global Navigation Satellite Systems (GNSSs) observables, such as those of the Global Positioning System (GPS), are frequently affected by multipath effects that cause unpredictable signal interference at the receiver, posing significant challenges for accurate state estimation in complex environments with non-Gaussian noise or outliers. The traditional extended Kalman filter (EKF), based on the minimum mean square error (MMSE) criterion, assumes Gaussian noise distributions and exhibits degraded performance under non-Gaussian conditions. To overcome this limitation, the minimum error entropy (MEE) criterion was proposed to reduce random uncertainty in estimation error distributions; however, due to its translation invariance property, MEE may inadvertently increase bias when errors contain systematic offsets, leading to poor convergence. In contrast, the maximum correntropy criterion (MCC) concentrates the error probability density function (PDF) around zero, enabling effective entropy adjustment even in the presence of bias and achieving superior error convergence. This paper presents the centered error entropy (CEE) extended Kalman filter (CEE-EKF) that integrates the complementary merits of both MEE and MCC approaches to overcome their individual limitations. Experimental validation in complex nonlinear GPS environments with non-Gaussian noise demonstrates that the CEE-EKF significantly outperforms individual algorithms in noise suppression, particularly exhibiting enhanced robustness and accuracy when handling outliers. These results offer an effective approach to enhancing the reliability of GPS navigation in challenging real-world environments, and the algorithm can be readily extended to other GNSS applications.

Keywords: GNSS, extended Kalman filter, centered error entropy, minimum error entropy, maximum correntropy criterion, non-Gaussian noise, multipath effects

1. Introduction

Accurate state estimation in Global Navigation Satellite Systems (GNSSs), such as the Global Positioning System (GPS) [1,2], is challenging under conditions involving complex non-Gaussian noise, where measurement outliers and heavy-tailed error distributions can severely degrade positioning accuracy. The Kalman filter (KF) is an optimal linear minimum mean square error (MMSE) estimator for linear dynamic systems under the assumptions of linear state-space models and additive Gaussian noise [3,4]. However, the MMSE criterion implicitly assumes that both process and measurement noise are Gaussian, an assumption that is often invalid in realistic GNSS environments where multipath effects, interference, and signal blockages give rise to non-Gaussian noise characteristics. While the extended Kalman filter (EKF) enables nonlinear state estimation, it remains an approximate MMSE estimator and retains the Gaussian noise assumption. Consequently, its estimation performance may deteriorate in the presence of strongly non-Gaussian noise, motivating the development of more robust nonlinear filtering frameworks.

The use of entropy as a performance criterion in signal processing stems from Rényi’s foundational work on generalized entropy measures [5], which established the theoretical basis for information-theoretic learning (ITL). Extending this framework, Principe [6] introduced kernel-based representations of Rényi entropy, enabling practical estimation of information-theoretic cost functions and their application to nonlinear filtering and machine learning problems. To address the limitations of the Kalman family of filters under non-Gaussian noise conditions, the minimum error entropy extended Kalman filter (MEE-EKF) was proposed [7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Unlike MMSE-based methods that minimize second-order moments, the MEE-EKF minimizes the entropy of the estimation error, thereby capturing higher-order statistical information and improving robustness in non-Gaussian noise environments. Despite these advantages, existing studies have reported that MEE-EKF may suffer from numerical instability or even divergence under certain system dynamics and parameter settings.

Wang [7] employed the minimum entropy criterion to non-Gaussian stochastic system control and demonstrated that entropy-based objectives yield robust performance in the presence of uncertainty. Indiveri [8] introduced a least entropy-like filtering method for range measurements with outliers, demonstrating the effectiveness of entropy measures in suppressing impulsive noise. Chen et al. [9] proposed the kernel minimum error entropy (KMEE) algorithm, extending the MEE criterion to nonlinear estimation using kernel density estimation. Subsequent studies incorporated the MEE principle into Kalman filtering frameworks, including the standard Kalman filter and the unscented Kalman filter (UKF), demonstrating enhanced robustness against non-Gaussian disturbances [10]. KMEE-based approaches were subsequently employed for multipath estimation in navigation systems, demonstrating their practical effectiveness [11]. Luo et al. [12] provided a theoretical link between the Kalman filtering framework and Rényi entropy, reinforcing the information-theoretic interpretation of MEE-based estimators. More recent efforts have focused on improving the numerical stability and robustness of MEE-based Kalman filters [13,14,15,16] and extending them to packet-loss scenarios [17], distributed estimation architectures [18], and power system state estimation [19]. Feng et al. [20] proposed Wasserstein distribution-based MEE filtering methods, further enhancing robustness under model uncertainty and non-Gaussian noise.

As an alternative to entropy minimization-based approaches, the maximum correntropy criterion extended Kalman filter (MCC-EKF) was introduced to enhance robustness against non-Gaussian noise by maximizing the correntropy of the estimation error [21,22,23,24,25,26,27,28,29,30,31]. As a localized similarity measure defined in a reproducing kernel Hilbert space, correntropy emphasizes higher-order statistical moments and effectively suppresses impulsive disturbances. Although MCC-EKF exhibits strong noise rejection capabilities, its performance may degrade in highly dynamic systems due to sensitivity to kernel parameters and reduced adaptability to rapid state variations. In conjunction with MEE-based approaches, the MCC framework has received significant research attention, with notable contributions including Izanloo et al. [21], Liu et al. [22], and Chen et al. [23], followed by subsequent extensions to extended, cubature, and multi-kernel correntropy-based Kalman filters [24,25,26,27,28,29,30,31]. These developments demonstrated improved robustness and estimation accuracy in nonlinear systems subject to heavy-tailed and impulsive noise.

A further advancement in entropy-based filtering is the introduction of centered error entropy (CEE) filtering. The CEE-based filter normalizes the estimation error distribution by removing bias. The centered error entropy extended Kalman filter (CEE-EKF) was developed to combine the advantages of minimum error entropy (MEE) and maximum correntropy criterion (MCC), mitigating the individual limitations of each approach [32,33,34,35,36,37,38,39,40,41,42]. By jointly reducing error uncertainty and enhancing noise suppression, the CEE-EKF achieves superior performance in nonlinear systems affected by non-Gaussian noise.

Cheng et al. [32] and Yang et al. [33,34,35,36] demonstrated that CEE-based filtering approaches are effective for satellite attitude determination, spacecraft navigation, and integrated INS/GNSSs, exhibiting strong robustness under non-Gaussian noise conditions. Extending the CEE framework, fiducial point-based extensions of minimum error entropy (MEE) filtering were introduced to further enhance robustness against outliers. For example, Xie et al. [38] and Mitra et al. [39] incorporated strategically selected fiducial points into MEE filters to improve estimation accuracy in localization applications. Bahrami and Tuncel [40] provided an analysis of fiducial point selection and deployment strategies, identifying practical limitations and offering implementation guidelines. He et al. [41] and Zhao and Tian [42] extended these approaches to generalized and distributed MEE frameworks suitable for large-scale, networked, and high-precision state estimation.

In this paper, the CEE-EKF is applied to the GPS navigation state estimation, and its robustness and noise attenuation capabilities are evaluated under challenging non-Gaussian noise environments. Some highlights of the main contributions of this paper are as follows:

  • The centered error entropy (CEE) criterion is employed to design an extended Kalman filter (EKF), referred to as the CEE-EKF, for GPS navigation, providing enhanced robustness against observation outliers.

  • The CEE-EKF is proposed as an alternative to the traditional MMSE-based EKF, offering improved robustness and superior estimation performance when handling impulse noise in GPS navigation. The results provide valuable insights into improving the accuracy and reliability of GPS navigation in realistic and challenging noise environments.

  • The application of CEE in robust GPS EKF design remains limited in the open literature. The proposed CEE-EKF demonstrates significant improvements in estimation accuracy, and its integration into GPS navigation filter design is detailed in this paper.

  • While this study focuses on GPS navigation, the CEE-EKF algorithm can be readily extended to other GNSSs, such as Galileo, BeiDou, and GLONASS, without loss of generality.

The remainder of this paper is organized as follows. A review of the MEE-EKF and the MCC-EKF is given in Section 2 and Section 3, respectively. In Section 4, the CEE-EKF algorithm is presented. In Section 5, the performance of the proposed CEE-EKF is assessed against the EKF, MEE-EKF, and MCC-EKF using illustrative examples based on simulation experiments. Finally, conclusions are given in Section 6.

2. Minimum Error Entropy Extended Kalman Filter

The minimum error entropy (MEE) criterion is realized by minimizing the second-order Rényi entropy of the estimation error. This criterion is particularly valuable as it facilitates more accurate state estimation, especially in the presence of non-Gaussian measurement errors that frequently occur in practical, real-world scenarios.

In information theory, Rényi entropy is a generalized form encompassing a range of measures. Alfred Rényi introduced this concept in 1961 [5], significantly extending Shannon entropy and Kullback–Leibler (KL) divergence. While Shannon entropy is a specific case of first-order Rényi entropy, commonly used for measuring uncertainty, KL divergence is an asymmetric measure that quantifies the difference between two probability distributions. It is also known as information divergence or relative entropy.

For the error e between two random variables X and Y, the second-order Rényi entropy (the case α = 2) is expressed as

H_2(e) = -\log \left( \sum_i p^2(e_i) \right) (1)

where p(ei) represents the probability distribution of the error. This step is fundamental in the context of satellite navigation as it helps to measure the uncertainty in the estimation errors more comprehensively than traditional Gaussian-based methods.
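As an illustrative sketch (in NumPy, whereas the paper's experiments used MATLAB), the second-order Rényi entropy of Eq. (1) can be estimated through the Parzen-window information potential, i.e., the double-sum kernel average whose negative logarithm gives the entropy. All names below are illustrative.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel G_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def information_potential(errors, sigma):
    """Parzen estimate of the information potential:
    V(e) = (1/L^2) * sum_i sum_j G_sigma(e_j - e_i)."""
    diffs = errors[:, None] - errors[None, :]
    L = len(errors)
    return gaussian_kernel(diffs, sigma).sum() / L**2

def renyi_entropy_2(errors, sigma):
    """Second-order Renyi entropy estimate: H2 = -log V(e)."""
    return -np.log(information_potential(errors, sigma))

errors = np.array([0.1, -0.2, 0.05, 0.3])   # toy estimation errors
H2 = renyi_entropy_2(errors, sigma=1.0)
```

Minimizing H2 is equivalent to maximizing the information potential, which is the form the MEE cost function below takes.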

The cost function JMEE is defined as

J_{MEE}(x_k) = \frac{1}{L^2} \sum_{i=1}^{L} \sum_{j=1}^{L} G_{\sigma}(e_{j,k} - e_{i,k}) (2)

This equation evaluates the information potential of the error distribution for the system state xk. Minimizing the error entropy is equivalent to maximizing this information potential, so the MEE criterion adjusts the state estimate x̂k|k to maximize JMEE. The state estimate x̂k|k is therefore obtained as

\hat{x}_{k|k} = \arg\max_{x_k} J_{MEE}(x_k) = \arg\max_{x_k} \sum_{i=1}^{L} \sum_{j=1}^{L} G_{\sigma}(e_{j,k} - e_{i,k}) (3)

This optimization process is essential in satellite navigation, where the goal is to refine the state estimates continuously to achieve the most accurate position, velocity, and time (PVT) solution. The gradient of the cost function with respect to the state xk is given by

\frac{\partial J_{MEE}(x_k)}{\partial x_k} = \frac{1}{L^2 \sigma^2} \sum_{i=1}^{L} \sum_{j=1}^{L} \left[ (e_{j,k} - e_{i,k}) G_{\sigma}(e_{j,k} - e_{i,k}) (w_{j,k} - w_{i,k}) \right] = \Gamma_1 - \Gamma_2 - \Gamma_3 + \Gamma_4 = 2\Gamma_1 - 2\Gamma_3 = 0 (4)

Here, the gradient is used to guide the optimization, ensuring that the state estimate moves towards the optimum. The terms Γ1 through Γ4 are defined as

\Gamma_1 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{j,k} G_{\sigma}(e_{j,k} - e_{i,k}) w_{j,k}^T (5)
\Gamma_2 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{i,k} G_{\sigma}(e_{j,k} - e_{i,k}) w_{j,k}^T (6)
\Gamma_3 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{j,k} G_{\sigma}(e_{j,k} - e_{i,k}) w_{i,k}^T (7)
\Gamma_4 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{i,k} G_{\sigma}(e_{j,k} - e_{i,k}) w_{i,k}^T (8)

These equations represent the weighted sums of the errors, which are key in the derivation of the state update equation. Equations (5)–(8) can be rewritten as

\Gamma_1 - \Gamma_3 = \frac{1}{L^2 \sigma^2} W_k^T \Xi_k e_k - \frac{1}{L^2 \sigma^2} W_k^T \Psi_k e_k = 0 (9)

where

[\Xi_k]_{ij} = G_{\sigma}(e_{j,k} - e_{i,k}) (10)

These matrices play a crucial role in the computation of the cost function, representing the influence of each error term on the overall state estimate.

\Psi_k = \mathrm{diag} \left\{ \sum_{i=1}^{L} G_{\sigma}(e_{1,k} - e_{i,k}), \ldots, \sum_{i=1}^{L} G_{\sigma}(e_{L,k} - e_{i,k}) \right\} (11)
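The kernel matrices above can be assembled directly from a sample error vector. The following NumPy sketch (illustrative only; the paper used MATLAB) builds Ξ_k of Eq. (10) and the diagonal Ψ_k of Eq. (11):

```python
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def kernel_matrices(e, sigma):
    """Build Xi_k with [Xi]_ij = G_sigma(e_j - e_i), and the diagonal
    Psi_k whose j-th entry is sum_i G_sigma(e_j - e_i)."""
    diffs = e[None, :] - e[:, None]      # diffs[i, j] = e_j - e_i
    Xi = gaussian_kernel(diffs, sigma)
    Psi = np.diag(Xi.sum(axis=0))        # column sums on the diagonal
    return Xi, Psi

e = np.array([0.5, -0.3, 1.2, 0.0])      # toy error vector
Xi, Psi = kernel_matrices(e, sigma=1.0)
C = Xi - Psi                             # combined matrix used below
```

Since the kernel is symmetric, Ξ_k is symmetric with unit diagonal, and each column of Ξ_k − Ψ_k sums to zero by construction.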

The matrix Ck is given by

C_k = \Xi_k - \Psi_k = \begin{bmatrix} C_{x,k} & C_{yx,k} \\ C_{xy,k} & C_{y,k} \end{bmatrix} (12)

where the submatrices Cx,k, Cxy,k, Cyx,k and Cy,k are defined as

C_{x,k} = (C_{ij,k})_{n \times n} = (\Xi_{ij,k})_{n \times n} - (\Psi_{ij,k})_{n \times n}, \quad i = 1, \ldots, n; \; j = 1, \ldots, n (13)
C_{xy,k} = (C_{ij,k})_{m \times n} = (\Xi_{ij,k})_{m \times n} - (\Psi_{ij,k})_{m \times n}, \quad i = n+1, \ldots, n+m; \; j = 1, \ldots, n (14)
C_{yx,k} = (C_{ij,k})_{n \times m} = (\Xi_{ij,k})_{n \times m} - (\Psi_{ij,k})_{n \times m}, \quad i = 1, \ldots, n; \; j = n+1, \ldots, n+m (15)
C_{y,k} = (C_{ij,k})_{m \times m} = (\Xi_{ij,k})_{m \times m} - (\Psi_{ij,k})_{m \times m}, \quad i = n+1, \ldots, n+m; \; j = n+1, \ldots, n+m (16)

These submatrices define the relationships between different parts of the state vector, providing a more detailed structure for updating the state estimate. The state update can be rewritten as

\hat{x}_k^t = \hat{x}_{k|k-1} + K_k^{t-1} \left[ z_k - h(\hat{x}_{k|k-1}^{t-1}) \right] (17)

where K_k^{t-1} is the gain matrix that determines how much the measurement z_k should influence the updated state estimate, and the superscript t denotes the fixed-point iteration index. This step is analogous to the Kalman gain in traditional filters but adapted for the MEE criterion. The gain matrix K_k^{t-1} is calculated as

K_k^{t-1} = \left( P_{x,k|k-1}^{t-1} + H_k^T P_{xy,k|k-1}^{t-1} + \left( P_{yx,k|k-1}^{t-1} + H_k^T R_k^{t-1} \right) H_k \right)^{-1} \left( P_{yx,k|k-1}^{t-1} + H_k^T R_k^{t-1} \right) (18)

This equation integrates the predicted state covariance P_{x,k|k-1}^{t-1}, the cross-covariance P_{xy,k|k-1}^{t-1}, and the measurement noise covariance R_k^{t-1}, which together determine the optimal weighting of the new measurement in the state update. The individual covariance terms are defined as follows:

P_{x,k|k-1}^{t-1} = (B_{p,k|k-1}^{-1})^T C_{x,k}^{t-1} B_{p,k|k-1}^{-1} (19)
P_{xy,k|k-1}^{t-1} = (B_{r,k}^{-1})^T C_{xy,k}^{t-1} B_{p,k|k-1}^{-1} (20)
P_{yx,k|k-1}^{t-1} = (B_{p,k|k-1}^{-1})^T C_{yx,k}^{t-1} B_{r,k}^{-1} (21)
R_k^{t-1} = (B_{r,k}^{-1})^T C_{y,k}^{t-1} B_{r,k}^{-1} (22)

Furthermore, the posterior covariance matrix Pk can be updated by

P_k = (I - K_k^{t-1} H_k) P_{k|k-1}^{t-1} (I - K_k^{t-1} H_k)^T + K_k^{t-1} R_k (K_k^{t-1})^T (23)

In this expression, the matrix Pk represents the updated covariance matrix, reflecting the reduced uncertainty in the state estimate after incorporating the measurement zk. This update process is critical in satellite navigation applications, where precise and timely updates to the state estimate are essential for accurate positioning and tracking. The flowchart of the MEE-EKF algorithm is shown in Figure 1.
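The Joseph-form covariance update of Eq. (23) can be sketched numerically as follows (NumPy, with purely illustrative values; the Kalman gain here is the standard one, used only to exercise the formula):

```python
import numpy as np

def joseph_update(P_pred, K, H, R):
    """Joseph-stabilized covariance update:
    P = (I - K H) P_pred (I - K H)^T + K R K^T,  cf. Eq. (23)."""
    I = np.eye(P_pred.shape[0])
    A = I - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T

# toy 2-state / 1-measurement example
P_pred = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
P = joseph_update(P_pred, K, H, R)
```

The quadratic form keeps P symmetric and positive semidefinite even with an imperfect gain, which is why this form is preferred over the shorter (I − KH)P expression.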

Figure 1. Flowchart of the MEE-EKF algorithm.

3. Maximum Correntropy Criterion Extended Kalman Filter

Correntropy is a measure of similarity between two random variables and has been broadly applied in signal processing, machine learning, and statistics. Unlike the traditional mean square error (MSE) approach, which is sensitive to noise and outliers, correntropy is well-suited for handling non-Gaussian distributions and robustly managing noise and outliers. It achieves this by quantifying similarity through kernel density estimation, offering a more flexible and resilient framework for addressing complex and non-Gaussian noise environments.

The correntropy function can be expressed as

V_{\sigma}(X, Y) = E[\kappa_{\sigma}(X, Y)] = \iint \kappa_{\sigma}(x, y) f_{XY}(x, y) \, dx \, dy (24)

where κσ() is the kernel function that depends on the difference between random variables X and Y. The parameter σ represents the kernel width, while E[·] denotes the expectation value of the kernel function over the joint distribution. The kernel function determines the sensitivity of correntropy to variations between X and Y. The most commonly used kernel function, the Gaussian kernel, is defined as

\kappa_{\sigma}(X, Y) = G_{\sigma}(e) = \exp \left( -\frac{e^2}{2\sigma^2} \right) (25)

Gσ represents the Gaussian kernel, and e=xy denotes the difference between the variables X and Y. In practical scenarios, the joint probability distribution fXY(x,y) is usually unknown, so correntropy is estimated from sample averages. The kernel width σ acts as a weighting parameter; however, as it approaches infinity, the MCC converges to the MMSE method. Thus, the estimated correntropy is expressed as

\hat{V}_{\sigma}(X, Y) = \frac{1}{N} \sum_{i=1}^{N} G_{\sigma}(e_i) = \frac{1}{N} \sum_{i=1}^{N} G_{\sigma}(x_i - y_i) (26)
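The sample correntropy of Eq. (26) is simple to compute, and a small numerical experiment shows why it is robust: a single large outlier barely moves the estimate, whereas it would dominate a mean-square-error measure. A NumPy sketch (illustrative names and data):

```python
import numpy as np

def correntropy(x, y, sigma):
    """Sample estimate of correntropy, Eq. (26):
    V = (1/N) * sum_i G_sigma(x_i - y_i)."""
    e = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

x = np.array([1.0, 2.0, 3.0, 4.0])
v_clean   = correntropy(x, x + 0.01, sigma=1.0)                      # near-identical signals
v_outlier = correntropy(x, x + np.array([0.01, 0.01, 0.01, 50.0]),   # one gross outlier
                        sigma=1.0)
```

The outlier contributes a kernel value near zero, so the estimate drops from ≈1 to ≈0.75 (it is simply excluded) rather than being dragged arbitrarily far, as the squared error would be.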

The prediction and the measurement in the MCC-EKF can be combined into the augmented model

\begin{bmatrix} \hat{x}_{k|k-1} \\ z_k \end{bmatrix} = \begin{bmatrix} x_k \\ h(x_k) \end{bmatrix} + \delta_k (27)

where \hat{x}_{k|k-1} represents the predicted state, h(x_k) is the nonlinear function that connects the measurement to the state, and the augmented noise term \delta_k is given by

\delta_k = \begin{bmatrix} \hat{x}_{k|k-1} - x_k \\ v_k \end{bmatrix} (28)

The objective function for the maximum correntropy criterion is defined as

J_{MCC}(x_k) = \frac{1}{L} \sum_{i=1}^{L} G_{\sigma}(y_{i,k} - d_i(x_k)) (29)

This function sums the Gaussian kernel values over L samples, where each kernel value is evaluated at the residual between the observation y_{i,k} and the state-dependent prediction d_i(x_k). The goal is to find the state x_k that maximizes this objective function, thereby minimizing the divergence between the predicted and actual states.

To find the optimal state estimate, the MCC-EKF solves the following optimization problem:

\hat{x}_k = \arg\max_{x_k} J_{MCC}(x_k) = \arg\max_{x_k} \sum_{i=1}^{L} G_{\sigma}(e_{i,k}) (30)

The gradient of the objective function with respect to x^k is set to zero to find the maximum:

\frac{\partial J_{MCC}(x_k)}{\partial x_k} = \sum_{i=1}^{L} \left[ G_{\sigma}(e_{i,k}) w_{i,k}^T (y_{i,k} - d_i(x_k)) \right] = 0 (31)

This condition is essential for deriving the update equations in the MCC-EKF. Next, the kernel weighting matrix C_k used in the state estimation is structured as follows:

C_k = \begin{bmatrix} C_{x,k} & 0 \\ 0 & C_{y,k} \end{bmatrix} (32)

The state covariance Cx,k and measurement covariance Cy,k are defined as

C_{x,k} = \mathrm{diag}(G_{\sigma}(e_{1,k}), \ldots, G_{\sigma}(e_{n,k})) (33)
C_{y,k} = \mathrm{diag}(G_{\sigma}(e_{n+1,k}), \ldots, G_{\sigma}(e_{n+m,k})) (34)

With the covariance matrices defined, the state update equation for the Kalman filter is expressed as

\hat{x}_k^t = \hat{x}_{k|k-1} + K_k^{t-1} \left[ z_k - h(\hat{x}_{k|k-1}^{t-1}) \right] (35)

The Kalman gain Kkt1 is determined by

K_k^{t-1} = P_{k|k-1}^{t-1} H_k^T \left( H_k P_{k|k-1}^{t-1} H_k^T + R_k^{t-1} \right)^{-1} (36)

where P_{k|k-1}^{t-1} is the predicted covariance matrix, H_k is the measurement matrix, and R_k^{t-1} is the measurement noise covariance. These matrices are updated as follows:

P_{k|k-1}^{t-1} = B_{p,k} (C_{x,k}^{t-1})^{-1} B_{p,k}^T (37)
R_k^{t-1} = B_{r,k} (C_{y,k}^{t-1})^{-1} B_{r,k}^T (38)

In this context, B_{p,k} and B_{r,k} are transformation matrices, and (C_{x,k}^{t-1})^{-1} and (C_{y,k}^{t-1})^{-1} are the inverses of the state and measurement kernel weighting matrices, respectively. These transformations ensure that the covariance matrices P_{k|k-1}^{t-1} and R_k^{t-1} are accurately updated, reflecting the latest state and measurement information.

Finally, the updated state covariance Pk is given by

P_k = (I - K_k^{t-1} H_k) P_{k|k-1}^{t-1} (I - K_k^{t-1} H_k)^T + K_k^{t-1} R_k (K_k^{t-1})^T (39)

This equation provides the final update for the posterior covariance matrix, accounting for the new measurements and their associated uncertainties. This condition is vital for deriving the update equations in the maximum correntropy criterion extended Kalman filter. It ensures that all samples are appropriately balanced in the estimation process, which is crucial for the filter’s robustness and accuracy. By accounting for the nonlinear relationship between the state and measurements, the MCC-EKF can effectively manage non-Gaussian noise and outliers, resulting in improved state estimation. Figure 2 illustrates the flowchart of the MCC-EKF algorithm.
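The correntropy-induced reweighting of Eqs. (36)–(38) can be sketched numerically. The NumPy code below (illustrative only; the paper's experiments used MATLAB) performs one kernel-weighted measurement update. It assumes B_{r,k} is the Cholesky factor of R (the paper does not state the factorization) and uses a linear measurement matrix H for the gain while keeping the nonlinear h(·) in the innovation:

```python
import numpy as np

def gk(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def mcc_update(x_pred, P_pred, z, h, H, R, sigma):
    """One kernel-weighted measurement update (illustrative sketch).
    Residuals are whitened with B_r, weighted by the Gaussian kernel
    (C_y), and the reweighted R~ = B_r C_y^{-1} B_r^T replaces R in
    the usual EKF gain, cf. Eqs. (36)-(38)."""
    Br = np.linalg.cholesky(R)
    resid = np.linalg.solve(Br, z - h(x_pred))   # whitened innovation
    Cy = np.diag(gk(resid, sigma))
    R_w = Br @ np.linalg.inv(Cy) @ Br.T          # noise inflated where residual is large
    S = H @ P_pred @ H.T + R_w
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - h(x_pred))
    I = np.eye(len(x_pred))
    P = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R_w @ K.T
    return x, P

# toy example: the second measurement channel carries a gross outlier
x_pred = np.zeros(2)
P_pred = np.eye(2)
H = np.eye(2)
R = np.eye(2)
z = np.array([0.1, 50.0])
x, P = mcc_update(x_pred, P_pred, z, lambda s: s, H, R, sigma=2.0)
```

The kernel value for the outlier channel is nearly zero, so its effective noise variance explodes and the filter essentially ignores that channel, while the clean channel is fused almost as in a standard EKF.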

Figure 2. Flowchart of the MCC-EKF algorithm.

4. Centered Error Entropy Extended Kalman Filter

In filtering techniques, particularly in non-Gaussian noise environments, centered error entropy (CEE) plays a crucial role in addressing the limitations of traditional methods like MEE and MCC. While MEE minimizes error entropy, it often suffers from stability and convergence issues. Conversely, MCC improves error correlation but may not sufficiently reduce uncertainty. CEE merges the strengths of both approaches, offering a more robust and reliable filtering performance by managing higher-order error statistics. Based on their research, Yang et al. [33,34,35,36,37] introduced the CEE-EKF to tackle complex nonlinear problems, thereby enhancing convergence and stability. This approach was later extended to a weighted variant specifically designed to improve performance in non-Gaussian noise scenarios.

The cost function based on the CEE criterion is defined as follows:

J_{CEE}(x_k) = \lambda \frac{1}{L} \sum_{i=1}^{L} G_{\sigma_1}(e_i) + (1 - \lambda) \frac{1}{L^2} \sum_{i=1}^{L} \sum_{j=1}^{L} G_{\sigma_2}(e_i - e_j) (40)

Here, λ is a weighting factor used to adjust the ratio between the MCC and MEE terms, and σ1 and σ2 are the kernel bandwidths associated with the MCC and MEE terms, respectively. The coefficients λ1 and λ2 are defined as

\lambda_1 = \frac{\lambda}{L} (41)
\lambda_2 = \frac{1 - \lambda}{L^2} (42)

These two parameters balance the contributions of the MCC and MEE criteria in the overall cost function. The optimal solution xk can be obtained through maximizing the following objective function, which is the summation of the weighted components of the CEE cost function:

\hat{x}_{k|k} = \arg\max_{x_k} J_{CEE}(x_k) = \arg\max_{x_k} \left( \lambda_1 \sum_{i=1}^{L} G_{\sigma_1}(e_{i,k}) + \lambda_2 \sum_{i=1}^{L} \sum_{j=1}^{L} G_{\sigma_2}(e_{j,k} - e_{i,k}) \right) (43)

In this expression, e_k = y_k - d(x_k) denotes the difference between the measured value and the predicted value based on the current state x_k. The optimal solution is obtained by setting the gradient of the cost function to zero:

\frac{\partial J_{CEE}(x_k)}{\partial x_k} = \lambda_1 \sum_{i=1}^{L} \left[ G_{\sigma_1}(e_{i,k}) w_{i,k}^T (y_{i,k} - d_i(x_k)) \right] + \lambda_2 \frac{1}{\sigma_2^2} (\Gamma_1 - \Gamma_2 - \Gamma_3 + \Gamma_4) = 0 (44)

where w_{j,k} and w_{i,k} are weight matrices associated with the state x_k, and G_σ denotes the Gaussian kernel applied to each term in the cost function. Next, by simplifying the terms in the derivative, we introduce the matrices Γ1, Γ2, Γ3, and Γ4 to represent the intermediate steps:

\Gamma_1 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{j,k} G_{\sigma_2}(e_{j,k} - e_{i,k}) w_{j,k}^T (45)
\Gamma_2 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{i,k} G_{\sigma_2}(e_{j,k} - e_{i,k}) w_{j,k}^T (46)
\Gamma_3 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{j,k} G_{\sigma_2}(e_{j,k} - e_{i,k}) w_{i,k}^T (47)
\Gamma_4 = \sum_{i=1}^{L} \sum_{j=1}^{L} e_{i,k} G_{\sigma_2}(e_{j,k} - e_{i,k}) w_{i,k}^T (48)

These matrices simplify the expression for the JCEE(xk) derivative, leading to the final update rule for the state estimate xk:

\frac{\partial J_{CEE}(x_k)}{\partial x_k} = \lambda_1 W_k^T C_k e_k + \frac{2\lambda_2}{\sigma_2^2} W_k^T (\Xi_k - \Psi_k) e_k = W_k^T \left( \lambda_1 C_k + \frac{2\lambda_2}{\sigma_2^2} (\Xi_k - \Psi_k) \right) e_k = W_k^T M_k e_k = 0 (49)

where Ck, Ξk and Ψk are defined as

C_k = \mathrm{diag}(G_{\sigma_1}(e_{1,k}), \ldots, G_{\sigma_1}(e_{L,k})) (50)
[\Xi_k]_{ij} = G_{\sigma_2}(e_{j,k} - e_{i,k}) (51)
\Psi_k = \mathrm{diag} \left\{ \sum_{i=1}^{L} G_{\sigma_2}(e_{1,k} - e_{i,k}), \ldots, \sum_{i=1}^{L} G_{\sigma_2}(e_{L,k} - e_{i,k}) \right\} (52)

Equation (53) defines the matrix M_k, which is composed of two main components: the weighting matrix C_k, whose diagonal elements are the Gaussian kernel values of the error terms, and the term \Xi_k - \Psi_k. The matrix M_k integrates these components to form the basis for updating the state estimate through the gain matrix K_k^{t-1}:

M_k = \lambda_1 C_k + \frac{2\lambda_2}{\sigma_2^2} (\Xi_k - \Psi_k) = \begin{bmatrix} M_{x,k} & M_{yx,k} \\ M_{xy,k} & M_{y,k} \end{bmatrix} (53)
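The combined weighting matrix of Eq. (53) can be sketched directly from an error vector. The NumPy code below (illustrative only) assembles the MCC part C_k of Eq. (50) and the MEE part Ξ_k − Ψ_k of Eqs. (51)–(52), blended by λ1 and λ2 from Eqs. (41)–(42):

```python
import numpy as np

def gk(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def cee_weight_matrix(e, lam, sigma1, sigma2):
    """Combined CEE weighting matrix, cf. Eq. (53):
    M_k = lam1 * C_k + (2*lam2/sigma2^2) * (Xi_k - Psi_k),
    with lam1 = lam/L and lam2 = (1-lam)/L^2."""
    L = len(e)
    lam1, lam2 = lam / L, (1.0 - lam) / L**2
    C = np.diag(gk(e, sigma1))              # MCC part, Eq. (50)
    diffs = e[None, :] - e[:, None]
    Xi = gk(diffs, sigma2)                  # Eq. (51)
    Psi = np.diag(Xi.sum(axis=0))           # Eq. (52)
    return lam1 * C + (2.0 * lam2 / sigma2**2) * (Xi - Psi)

e = np.array([0.2, -0.1, 0.4])              # toy error vector
M = cee_weight_matrix(e, lam=0.9, sigma1=1.0, sigma2=1.0)
```

Setting λ = 1 recovers the pure MCC weighting (a scaled diagonal of kernel values), while λ = 0 leaves only the MEE term, making the role of the blending factor explicit.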

Finally, the state update can be written as

\hat{x}_k^t = \hat{x}_{k|k-1} + K_k^{t-1} \left[ z_k - h(\hat{x}_{k|k-1}^{t-1}) \right] (54)

where the gain matrix K_k^{t-1} and the covariance update matrices are defined by

K_k^{t-1} = \left( P_{x,k|k-1}^{t-1} + H_k^T P_{xy,k|k-1}^{t-1} + \left( P_{yx,k|k-1}^{t-1} + H_k^T R_k^{t-1} \right) H_k \right)^{-1} \left( P_{yx,k|k-1}^{t-1} + H_k^T R_k^{t-1} \right) (55)

Equations (56)–(59) detail the specific components of the covariance matrices used in K_k^{t-1}. These include the state covariance matrix P_{x,k|k-1}^{t-1}, calculated using the matrix B_{p,k|k-1}^{-1} and the weighting block M_{x,k}^{t-1}. The cross-covariance matrices P_{xy,k|k-1}^{t-1} and P_{yx,k|k-1}^{t-1} and the measurement noise covariance R_k^{t-1} are similarly defined using transformations of these matrices, ensuring that the gain matrix reflects both state and measurement uncertainties.

P_{x,k|k-1}^{t-1} = (B_{p,k|k-1}^{-1})^T M_{x,k}^{t-1} B_{p,k|k-1}^{-1} (56)
P_{xy,k|k-1}^{t-1} = (B_{r,k}^{-1})^T M_{xy,k}^{t-1} B_{p,k|k-1}^{-1} (57)
P_{yx,k|k-1}^{t-1} = (B_{p,k|k-1}^{-1})^T M_{yx,k}^{t-1} B_{r,k}^{-1} (58)
R_k^{t-1} = (B_{r,k}^{-1})^T M_{y,k}^{t-1} B_{r,k}^{-1} (59)

The posterior covariance matrix is updated using the gain matrix K_k^{t-1} and the observation model, ensuring that the state estimate remains accurate and properly calibrated even after the new observations are taken into account. This concludes the state update process; the posterior covariance update is

P_k = (I - K_k^{t-1} H_k) P_{k|k-1}^{t-1} (I - K_k^{t-1} H_k)^T + K_k^{t-1} R_k (K_k^{t-1})^T (60)

Finally, Equation (60) provides the expression for the posterior covariance matrix Pk, which reflects the most recent and accurate estimate of the system’s state. The flowchart of the CEE-EKF algorithm is shown in Figure 3.

Figure 3. Flowchart of the CEE-EKF algorithm.

5. Results and Discussion

The experiment focuses on the state estimation problem for GPS navigation satellites, where non-Gaussian noise is introduced to assess the filtering performance of several algorithms. The filters tested and compared include EKF, MEE-EKF, MCC-EKF, and CEE-EKF. Each filter’s performance is evaluated based on the error probability density function, allowing for a comprehensive comparison of their effectiveness in various non-Gaussian noise scenarios.

The simulations were conducted on a desktop computer equipped with an Intel Core i5-10400 processor (6 cores, 12 threads, up to 4.3 GHz) and 16 GB of DDR4 RAM. All simulations were run in MATLAB R2021b on Microsoft Windows 10.

The Satellite Navigation (SatNav) Toolbox 3.0 [43] was employed to generate the data required for GPS positioning, including satellite positions and pseudoranges. The test trajectory and the satellite skyplot for the seven visible GPS satellites (red dots labeled with their GPS ID numbers) of the simulation are shown in Figure 4 and Figure 5, respectively. Positions expressed in geographic coordinates (latitude, longitude, and height) can be converted to WGS-84 Earth-Centered, Earth-Fixed (ECEF) Cartesian coordinates and vice versa. The vehicle taken for simulation is assumed to start from 121.7775 degrees east longitude and 25.1492 degrees north latitude with an initial altitude of 100 m, which corresponds to [-3042329.20, 4911080.20, 2694074.30]^T m in the WGS-84 ECEF coordinate frame.

Figure 4. The trajectory plot.

Figure 5. The skyplot for satellites. The red dots (*) indicate the locations of GPS satellites, labeled by their respective GPS ID numbers.

The initial vehicle position, specified by known geographic coordinates, is defined as the origin (0, 0, 0) of the local tangent East–North–Up (ENU) frame. Once the vehicle trajectory is represented in ENU coordinates, the positions are transformed into the WGS-84 Earth-Centered, Earth-Fixed (ECEF) Cartesian frame. State estimation is then performed based on this trajectory, and errors are evaluated in the east, north, and altitude directions for the different filter algorithms.

Rather than employing a nonlinear model, \dot{x} = f(x, t) + u, the dynamics of a GPS receiver in a low-dynamic environment can be described using an eight-state PV (position–velocity) model. The state equations represent the fundamental differential equations of motion, and the process model can be expressed as \dot{x} = Fx + Gu. In this case, the GPS navigation filter includes three position components (east, north, and vertical: x_1, x_3, and x_5), three velocity components (east, north, and vertical: x_2, x_4, and x_6), and two clock states (x_7 and x_8), resulting in an 8 × 1 state vector: x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]^T. The state variables in the PV model are driven by white-noise processes: u = [0, u_2, 0, u_4, 0, u_6, u_7, u_8]^T.

We define the following submatrices:

F_{PV} = F_{clk} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

where the subscripts indicate the submatrices used in the PV and clock models, respectively, and the full F matrix can then be constructed as follows:

F=diag(FPV,FPV,FPV,Fclk)

By converting the model to discrete-time form,

xk+1=Φkxk+wk (61)

the state transition matrix can be expressed as

Φk=diag(Φk,PV,Φk,PV,Φk,PV,Φk,clk)

where the submatrices are identical and given by

\Phi_{k,PV} = \Phi_{k,clk} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}

where the sampling interval is Δt=1 second. Similarly, for the process noise covariance matrix, the submatrices corresponding to the PV and clock models are defined as

Q_{k,PV} = S_p \begin{bmatrix} \Delta t^3/3 & \Delta t^2/2 \\ \Delta t^2/2 & \Delta t \end{bmatrix}; \quad Q_{k,clk} = \begin{bmatrix} S_f \Delta t + S_g \Delta t^3/3 & S_g \Delta t^2/2 \\ S_g \Delta t^2/2 & S_g \Delta t \end{bmatrix}

and the full process noise covariance matrix can then be constructed as

Qk=diag(Qk,PV,Qk,PV,Qk,PV,Qk,clk)

where S_p, S_f, and S_g are the respective spectral amplitudes: S_p = 50 (m/sec)^2/(rad/sec), S_f = 0.1 sec, and S_g = 0.1 sec^{-1}. These spectral amplitudes are used to generate the Q_k matrix.
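The construction of Φ_k and Q_k from the 2 × 2 sub-blocks above can be sketched as follows (NumPy, avoiding SciPy by assembling the block diagonal by hand; Δt = 1 s and the spectral amplitudes are taken from the text):

```python
import numpy as np

def blkdiag(*blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    r = 0
    for b in blocks:
        M[r:r + b.shape[0], r:r + b.shape[1]] = b
        r += b.shape[0]
    return M

dt = 1.0
Sp, Sf, Sg = 50.0, 0.1, 0.1

# state transition: identical 2x2 blocks for the three PV pairs and the clock
Phi_pv = np.array([[1.0, dt], [0.0, 1.0]])
Phi_k = blkdiag(Phi_pv, Phi_pv, Phi_pv, Phi_pv)

# process noise: PV and clock sub-blocks
Q_pv = Sp * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
Q_clk = np.array([[Sf * dt + Sg * dt**3 / 3, Sg * dt**2 / 2],
                  [Sg * dt**2 / 2, Sg * dt]])
Q_k = blkdiag(Q_pv, Q_pv, Q_pv, Q_clk)
```

The resulting 8 × 8 matrices pair each position state with its velocity (and the clock bias with the clock drift), matching the state ordering x = [x1, …, x8]^T described above.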

By considering the GPS pseudorange observables as the nonlinear measurement of EKF in form of zk=h(xk)+vk, it can be represented as

\rho_i = \sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2} + c t_b + v_{\rho_i}, \quad i = 1, \ldots, n (62)

where (x_i, y_i, z_i) is the three-dimensional position of the i-th satellite, (x, y, z) is the user’s three-dimensional position, c is the speed of light, t_b denotes the receiver clock offset from system time, and v_{\rho_i} is the pseudorange noise.

Assuming the pseudorange observables are available, the linearized measurement equation based on n observables for the PV model is given by the following:

\underbrace{\begin{bmatrix} \rho_1 - \hat{\rho}_1 \\ \rho_2 - \hat{\rho}_2 \\ \vdots \\ \rho_n - \hat{\rho}_n \end{bmatrix}}_{z_k} = \underbrace{\begin{bmatrix} h_x^{(1)} & 0 & h_y^{(1)} & 0 & h_z^{(1)} & 0 & 1 & 0 \\ h_x^{(2)} & 0 & h_y^{(2)} & 0 & h_z^{(2)} & 0 & 1 & 0 \\ \vdots & & & & & & & \vdots \\ h_x^{(n)} & 0 & h_y^{(n)} & 0 & h_z^{(n)} & 0 & 1 & 0 \end{bmatrix}}_{H_k} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \end{bmatrix} + \begin{bmatrix} v_{\rho_1} \\ v_{\rho_2} \\ \vdots \\ v_{\rho_n} \end{bmatrix} (63)

where v_{\rho_i} is additive white measurement noise. The elements of the measurement matrix H_k = \partial h(x_k)/\partial x_k, where h(x_k) = \sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2} + c t_b, are the partial derivatives of the predicted measurements with respect to each state and involve the direction cosine elements; H_k is an (n × 8) matrix.
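A minimal sketch of the pseudorange model of Eq. (62) and one row of H_k, in NumPy with illustrative names. It assumes that state x7 stores the clock bias in meters (i.e., c·t_b), which makes the corresponding partial derivative equal to 1, consistent with the "1" entry in each row of H_k above:

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light, m/s

def pseudorange(sat_pos, user_pos, tb):
    """Predicted pseudorange, Eq. (62): geometric range plus clock term c*tb."""
    return np.linalg.norm(sat_pos - user_pos) + C_LIGHT * tb

def h_row(sat_pos, user_pos):
    """One row of H_k for the 8-state PV model:
    [hx, 0, hy, 0, hz, 0, 1, 0], where (hx, hy, hz) are the direction
    cosines d(rho)/d(position). Assumes x7 holds the clock bias in meters."""
    d = user_pos - sat_pos
    u = d / np.linalg.norm(d)
    row = np.zeros(8)
    row[0], row[2], row[4] = u          # partials w.r.t. x1, x3, x5
    row[6] = 1.0                        # partial w.r.t. clock bias state
    return row

# hypothetical geometry: satellite 20,000 km away along the x-axis
sat = np.array([20.0e6, 0.0, 0.0])
user = np.zeros(3)
row = h_row(sat, user)
```

Stacking one such row per visible satellite yields the (n × 8) matrix H_k used in the linearized measurement equation.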

Outliers in pseudorange observables arise when measurement errors deviate significantly from the predictions of the navigation model. Such outliers typically appear as impulse noise due to transient RF interference or momentary signal obstruction. To evaluate the effectiveness of the CEE-based EKF for GPS navigation, simulations were conducted with injected pseudorange outliers. Two noise patterns were considered: uniformly distributed impulse noise and Gaussian mixture noise. The impulse noise scenario tested the filters’ ability to handle sporadic, high-magnitude disturbances typical of real-world GPS applications, while the Gaussian mixture noise scenario introduced both low- and high-intensity noise to assess robustness across a wider range of conditions. Together, these scenarios enabled a comprehensive comparison of the filters’ noise mitigation capabilities and state estimation accuracy.

To ensure reproducibility of the noise patterns, the random number generator seeds in MATLAB are fixed using randn('seed', 13579) and rand('seed', 13579) for generating the Gaussian and uniform random sequences, respectively. This guarantees that the noise injection locations and other random noise patterns are fully reproducible. The initial state and initial covariance are given by x_0 = [x_{LS}, 0, y_{LS}, 0, z_{LS}, 0, 0, 0]^T and P_0 = diag(10^4, 10^3, 10^4, 10^3, 10^4, 10^3, 10^5, 10^3), respectively, where (x_{LS}, y_{LS}, z_{LS}) represents the position estimated using the least-squares method.

Monte Carlo simulations were performed based on 100 runs, with performance measured through positioning error and root mean square error (RMSE) to evaluate estimation accuracy. The RMSE formula used in the evaluation is as follows:

\mathrm{RMSE}(k) = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (x_k - \hat{x}_k)^2} (64)

where k represents each time step within a single Monte Carlo run and N is the total number of Monte Carlo trials. The squared differences between the true value x_k and the estimated value \hat{x}_k are averaged over the trials and square-rooted to obtain the RMSE, enabling a comparison of estimation accuracy among the different algorithms.
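The per-step Monte Carlo RMSE of Eq. (64) reduces to one vectorized line. A NumPy sketch with a toy two-run, three-step example (illustrative data only):

```python
import numpy as np

def rmse_per_step(truth, estimates):
    """RMSE at each time step over N Monte Carlo runs, Eq. (64).
    truth, estimates: arrays of shape (N_runs, K_steps)."""
    err = truth - estimates
    return np.sqrt(np.mean(err**2, axis=0))   # average over runs, per step

# two toy runs, three steps
truth = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0]])
est   = np.array([[1.1, 2.0, 3.3],
                  [0.9, 2.0, 2.7]])
rmse = rmse_per_step(truth, est)
```

Averaging over the run axis (rather than the time axis) is what distinguishes this per-step Monte Carlo RMSE from a single-run time-averaged RMSE.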

5.1. Scenario 1: GPS Observables with Uniformly Distributed Impulse Noise

Testing of GPS observables with uniformly distributed impulse noise allows us to evaluate the robustness of a filtering algorithm when subjected to extreme and infrequent errors, which are common in real-world GPS signal processing scenarios due to interference or multipath effects. Filters that can effectively suppress these outliers without significant degradation in performance demonstrate higher robustness and reliability, qualities crucial for navigation systems operating in unpredictable scenarios. By introducing impulse noise in this study, we can better assess how well the proposed CEE-EKFs remove these anomalies while maintaining accurate state estimates.

The model settings for impulse noise are as follows:

$$r_k \sim 0.98\,\mathcal{N}(0, 10) + 0.02\,\mathcal{U}(80, 200) \tag{65}$$

where N(0,10) and U(80,200) represent a zero-mean Gaussian sequence with a variance of 10 m2 and a uniformly distributed sequence ranging from 80 to 200 m, occurring with probabilities of 0.98 and 0.02, respectively. The uniformly distributed sequence is regarded as the impulse noise. This model incorporates a 2% proportion of impulse noise to evaluate the outlier resistance capabilities of each filtering algorithm. Figure 6 compares the positioning errors of the EKF, MEE-EKF, and MCC-EKF, while Figure 7 further compares these three filtering algorithms with the CEE-EKF in an impulse noise environment. In the simulation, the kernel bandwidths were set to σMEE=4000 for the MEE-EKF and σMCC=400 for the MCC-EKF, while the CEE-EKF employed a weighting factor λ=0.9 along with two kernel bandwidths, σMEE=4 and σMCC=40. The GPS test results clearly indicate that the CEE-EKF outperforms the other filters in the east, north, and altitude directions. It consistently exhibits smaller positioning errors and effectively suppresses outliers at critical points, demonstrating superior performance in this noisy environment.
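As an illustrative sketch in Python/NumPy (rather than the authors' MATLAB code), the impulse-noise mixture of Equation (65) can be sampled as follows:

```python
import numpy as np

def impulse_noise(rng, size):
    """Sample r_k ~ 0.98*N(0, 10) + 0.02*U(80, 200), per Eq. (65).

    N(0, 10) is zero-mean Gaussian with variance 10 m^2 (std = sqrt(10));
    U(80, 200) models sporadic impulse outliers of 80-200 m.
    """
    gaussian = rng.normal(0.0, np.sqrt(10.0), size)  # nominal noise component
    impulse = rng.uniform(80.0, 200.0, size)         # outlier component
    is_outlier = rng.random(size) < 0.02             # 2% impulse probability
    return np.where(is_outlier, impulse, gaussian)

rng = np.random.default_rng(13579)
r = impulse_noise(rng, 100_000)
print(np.mean(r > 50))  # outlier fraction, close to the nominal 0.02
```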

Figure 6. Comparison of positioning errors for EKF, MEE-EKF, and MCC-EKF: (a) east; (b) north; (c) altitude—Scenario 1.

Figure 7. Comparison of positioning errors for the four approaches, EKF, MEE-EKF, MCC-EKF, and CEE-EKF: (a) east; (b) north; (c) altitude—Scenario 1.

The RMSE results obtained from the Monte Carlo simulations are presented in Figure 8 and Figure 9. Figure 8 compares the RMSE performance of the EKF, MEE-EKF, and MCC-EKF, whereas Figure 9 presents the RMSE comparison among all four algorithms. The CEE-EKF achieves significantly lower RMSE values in all directions (east, north, and altitude). The filter rapidly converges to a lower RMSE and maintains superior performance throughout the test. These results confirm that the CEE-EKF is the most effective filter among the four in this environment.

Figure 8. RMSE comparison of EKF, MEE-EKF, and MCC-EKF: (a) east; (b) north; (c) altitude—Scenario 1.

Figure 9. RMSE comparison for the four approaches, EKF, MEE-EKF, MCC-EKF, and CEE-EKF: (a) east; (b) north; (c) altitude—Scenario 1.

As shown in Table 1, CEE-EKF outperforms the other filters across all directions, demonstrating superior robustness and noise suppression in impulse noise environments. In contrast, EKF struggles the most, particularly with altitude errors. MEE-EKF shows some improvements but still underperforms in certain areas, while MCC-EKF performs better in specific directions but lacks consistency. CEE-EKF is the most effective filter, providing more robust performance under these conditions.

Table 1. RMSE comparison of different algorithms under a uniformly distributed impulse noise environment.

| Filter | East RMSE (m) | North RMSE (m) | Altitude RMSE (m) |
|---|---|---|---|
| EKF | 44.7255 | 41.7277 | 81.4417 |
| MEE-EKF (σMEE=4000) | 34.6927 | 32.2780 | 77.7478 |
| MCC-EKF (σMCC=400) | 32.3500 | 27.5274 | 48.6833 |
| CEE-EKF (λ=0.9, σMEE=4, σMCC=40) | 14.4797 | 13.1921 | 18.7295 |

5.2. Scenario 2: GPS Observables with Gaussian Mixture Noise Sequence

Gaussian mixture noise, i.e., noise modeled as a mixture of multiple Gaussian distributions, was specifically designed in this experiment because it better represents real-world noise scenarios, where signal disturbances often combine minor, frequent noise and more significant, rarer disturbances. By introducing this Gaussian mixture noise model, we can more accurately simulate these real-world scenarios and evaluate how well the filtering algorithms maintain accuracy under both typical and extreme noise conditions. This setup ensures that the proposed filters are robust enough to handle varying noise intensities and distributions. The Gaussian mixture sequence model is defined as follows:

$$r_k \sim 0.9\,\mathcal{N}(0, 100) + 0.1\,\mathcal{N}(0, 10^7) \tag{66}$$

where N(0,100) and N(0,10^7) represent zero-mean Gaussian sequences with variances of 100 m2 and 10^7 m2, occurring with probabilities of 0.9 and 0.1, respectively. The Gaussian-distributed sequence with the larger variance is regarded as impulse noise. This model incorporates a 10% proportion of random impulse noise to evaluate the outlier resistance capabilities of each filtering algorithm.
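In the same hedged Python/NumPy style as the previous sketch, the Gaussian mixture of Equation (66) can be sampled as follows:

```python
import numpy as np

def gaussian_mixture_noise(rng, size):
    """Sample r_k ~ 0.9*N(0, 100) + 0.1*N(0, 1e7), per Eq. (66).

    Variances are 100 m^2 (std = 10 m) and 1e7 m^2 (std ~ 3162 m);
    the wide component plays the role of impulse noise.
    """
    narrow = rng.normal(0.0, 10.0, size)           # nominal: variance 100 m^2
    wide = rng.normal(0.0, np.sqrt(1.0e7), size)   # outliers: variance 1e7 m^2
    pick_wide = rng.random(size) < 0.10            # 10% outlier probability
    return np.where(pick_wide, wide, narrow)

rng = np.random.default_rng(13579)
r = gaussian_mixture_noise(rng, 200_000)
# Sample variance should land near 0.9*100 + 0.1*1e7 ≈ 1.0e6 m^2.
print(np.var(r))
```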

Figure 10 compares the positioning errors of the EKF, MEE-EKF, and MCC-EKF, while Figure 11 presents the comparison of these three filtering algorithms with the CEE-EKF in a Gaussian mixture noise environment. As shown in Figure 11, the Gaussian mixture noise highlights clear improvement in positioning errors. The CEE-EKF demonstrates superior robustness and accuracy across all directions, consistently maintaining lower errors even under mixed noise conditions. In contrast, the other filters, particularly the EKF, exhibit larger fluctuations and higher positioning errors, with the differences being more evident in challenging scenarios such as altitude estimation.

Figure 10. Comparison of positioning errors for EKF, MEE-EKF, and MCC-EKF: (a) east; (b) north; (c) altitude—Scenario 2.

Figure 11. Comparison of positioning errors for the four approaches, EKF, MEE-EKF, MCC-EKF, and CEE-EKF: (a) east; (b) north; (c) altitude—Scenario 2.

Similarly, the RMSE results obtained from Monte Carlo simulations are presented in Figure 12 and Figure 13. Figure 12 compares the RMSE performance of the EKF, MEE-EKF, and MCC-EKF, while Figure 13 presents the RMSE comparison among all four algorithms. As shown in Figure 13, the CEE-EKF consistently demonstrates improved performance, exhibiting lower error accumulation over time. Although the other filters are effective in certain aspects, they show less consistency and higher overall RMSE values across different directions. These results indicate that the CEE-EKF provides more reliable noise suppression and better adaptability to varying noise conditions, thereby achieving superior filtering performance. Table 2 summarizes the RMSE values, further confirming that the CEE-EKF remarkably enhances performance in Gaussian mixture noise environments. It consistently maintains both accuracy and robustness throughout the test, highlighting its effectiveness in managing complex noise patterns.

Figure 12. RMSE comparison of EKF, MEE-EKF, and MCC-EKF: (a) east; (b) north; (c) altitude—Scenario 2.

Figure 13. RMSE comparison for the four approaches, EKF, MEE-EKF, MCC-EKF, and CEE-EKF: (a) east; (b) north; (c) altitude—Scenario 2.

Table 2. RMSE comparison of different algorithms under Gaussian mixture noise.

| Filter | East RMSE (m) | North RMSE (m) | Altitude RMSE (m) |
|---|---|---|---|
| EKF | 57.6872 | 54.1626 | 102.9308 |
| MEE-EKF (σMEE=4000) | 46.7501 | 44.1522 | 97.6033 |
| MCC-EKF (σMCC=400) | 43.3192 | 37.5069 | 67.0332 |
| CEE-EKF (λ=0.9, σMEE=4, σMCC=40) | 28.1809 | 25.1601 | 27.7064 |

6. Conclusions

This study addressed the state estimation problem of GPS navigation under non-Gaussian noise conditions using the centered error entropy-based extended Kalman filter. Two example scenarios of GPS navigation state estimation with observable outliers were presented, one with uniformly distributed impulse noise and the other with Gaussian mixture noise, to evaluate whether the proposed filter can more effectively mitigate the effects of non-Gaussian noise. The filtering performance of the EKF, MEE-EKF, MCC-EKF, and CEE-EKF was compared in complex, nonlinear noise environments.

The results demonstrate that CEE-EKF consistently outperformed the other algorithms in terms of both noise suppression and estimation accuracy, particularly in GPS environments. While the performance of MEE-EKF was closer to that of EKF, CEE-EKF showed significantly better error reduction and robustness in the presence of non-Gaussian noise. Moreover, CEE-EKF addressed the limitations of MCC and MEE-EKF by enhancing noise suppression and maintaining stable performance, even in challenging GPS conditions. The evaluation using error probability density functions and root mean square error metrics confirmed that CEE-EKF had superior filtering performance. Its ability to effectively handle outliers and quickly adapt to high-intensity noise highlights its practical utility in GPS navigation systems. The findings suggest that CEE-EKF provides enhanced noise suppression and state estimation accuracy in non-Gaussian noise environments. By combining the strengths of both MCC and MEE-EKF, CEE-EKF achieved better error convergence and robustness, making it a reliable and efficient solution for state estimation in complex and noisy environments. This algorithm shows strong potential for future applications in scenarios requiring high-precision positioning under adverse conditions.

Although this paper focuses on GPS navigation, the proposed algorithm can be readily extended to other GNSSs, such as Galileo, BeiDou, and GLONASS, without loss of generality.

Several issues remain important directions for future research:

  1. Adaptive determination of free parameters, including the kernel bandwidths (σMEE and σMCC) and the weighting factor (λ), for which data-driven or self-tuning strategies would be preferable;

  2. Performance comparison with kernels other than the Gaussian kernel, such as Student’s t, Laplace, and Cauchy kernels, as well as combinations thereof;

  3. Evaluation under other non-Gaussian pseudorange error models to better characterize NLOS signal reception and multipath interference in real-world environments;

  4. Theoretical analysis of the stability and robustness of the proposed method, together with a sensitivity analysis.

Author Contributions

Conceptualization, D.-J.J.; methodology, D.-J.J. and Y.C.; software, Y.C. and B.-Y.L.; validation, Y.C. and D.-J.J.; formal analysis, Y.C. and D.-J.J.; investigation, Y.C., B.-Y.L. and D.-J.J.; data curation, Y.C., B.-Y.L. and D.-J.J.; writing—original draft preparation, Y.C. and B.-Y.L.; writing—review and editing, D.-J.J.; supervision, D.-J.J. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request from the authors.

Conflicts of Interest

Author Bo-Yang Lee was employed by Chunghwa Telecom Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding Statement

The authors express their gratitude for the assistance from the grants NSTC 112-2221-E-019-030 and NSTC 113-2221-E-019-059 of the National Science and Technology Council, Taiwan.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.




Articles from Sensors (Basel, Switzerland) are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
