Sensors (Basel, Switzerland). 2018 Oct 10;18(10):3381. doi: 10.3390/s18103381

Diffusion Logarithm-Correntropy Algorithm for Parameter Estimation in Non-Stationary Environments over Sensor Networks

Limei Hu 1, Feng Chen 1,2,*, Shukai Duan 2, Lidan Wang 2
PMCID: PMC6209990  PMID: 30309002

Abstract

This paper considers the parameter estimation problem in non-stationary environments over sensor networks, where the unknown parameter vector is modeled as a time-varying sequence. To further improve estimation performance, this paper proposes a novel diffusion logarithm-correntropy algorithm for each node in the network. The algorithm applies both the logarithm operation and the correntropy criterion to the estimation error. Moreover, if the error grows because of the non-stationary environment, the algorithm responds immediately by taking relatively steeper steps, thereby achieving smaller error in time. The tracking performance of the proposed logarithm-correntropy algorithm is analyzed. Finally, experiments verify the validity of the proposed algorithmic schemes, which are compared with other recent parameter estimation algorithms.

Keywords: non-stationary, sensor networks, parameter estimation, diffusion logarithm-correntropy algorithm, tracking performance

1. Introduction

Sensor networks are useful tools for disaster relief management, target localization and tracking, and environment monitoring [1,2,3,4]. Distributed parameter estimation plays an essential role in sensor networks [5,6,7]. The objective of the parameter estimation is to estimate some essential parameters from noisy observation measurements through cooperation between nodes. Moreover, distributed strategies are of great significance to solve the problem of parameter estimation in sensor networks, due to their robustness against imperfections, low complexity, and low power demands.

Among these distributed schemes, in the incremental strategy [8], a cyclic path is defined over the nodes and data are processed in a cyclic manner through the network until optimization is achieved. However, determining a cyclic path that runs across all nodes is generally a challenging (NP-hard) task to perform. In the consensus strategy [9], vanishing step sizes are used to ensure that nodes can reach consensus and converge to the same optimizer in steady-state. In the diffusion strategy, information is processed locally and simultaneously at all nodes. The processed data are diffused through a real-time sharing mechanism that ripples through the network continuously [10,11]. The diffusion strategies are particularly attractive because they are robust [12,13,14,15], flexible, and fully distributed compared with incremental and consensus strategies, so we adopt diffusion strategies in this paper.

Most prior literature is mainly concerned with the case where nodes estimate the parameter vector collaboratively in the stationary case over sensor networks [10,16]. However, in the real world, the non-stationary case is normal. In this work, we mainly consider the parameter estimation in the non-stationary case, where the parameter is always time-varying. The observation data are nonlinear and non-Gaussian, since the data may be disturbed by changing communication links or outliers under the non-stationary environments.

Inspired by the differentiability and mathematical tractability of logarithm functions, we introduce the logarithm function as the error cost function [17]. Moreover, the correntropy criterion is a nonlinear measure of similarity between two random variables [18], a robust optimality criterion that has been successfully used in the field of non-Gaussian signal processing. To make the error cost function more suitable for non-stationary environments, we propose a diffusion signal processing framework with a logarithm-correntropy cost function to solve the parameter estimation problem, which can elegantly and gradually adjust the cost function in its optimization based on the error amount.

A. Related Works

The tracking behavior of a wide range of adaptive networks under non-stationary conditions was thoroughly investigated in [19,20,21,22]. For stationary conditions, a diffusion least mean p-power (dLMP) algorithm based on the p-norm error criterion was proposed to estimate parameters in wireless sensor networks [23]. To estimate the mean-square weight deviations under zero-mean stationary measurement noise, proportionate-type normalized least-mean-square algorithms were proposed in [24]. The diffusion normalized least-mean-square (dNLMS) algorithm was proposed for parameter estimation in distributed networks [25], where a variable step size was obtained by minimizing the mean-square deviation to achieve a fast convergence rate. The gradient-descent total least-squares (dTLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for errors in both the input and output data [26]; its steady-state analysis was inspired by the energy-conservation approach to the performance analysis of adaptive filters. For measurement noise with impulsive interference, Ni, Chen, and Chen [27] designed a diffusion sign-error LMS (dSE-LMS) algorithm for parameter estimation. The tracking performance of a variable step-size diffusion LMS algorithm in non-stationary environments was considered in [28], but that work did not derive closed-form expressions for the steady-state mean-square deviation (MSD) or excess mean-square error (EMSE) of the network; consequently, the theory and simulation do not match well. To date, the performance of distributed estimation algorithms has been studied predominantly under stationary conditions, and these algorithms may degrade in non-stationary environments.

To find the optimal adaptation step sizes over the networks, Abdolee, Vakilian, and Champagne [29] formulated a constrained nonlinear optimization problem and solved it through a log-barrier Newton algorithm in an iterative manner. By using the optimal step size at each node, the performance of diffusion least-mean squares (DLMS) could be improved in non-stationary signal environments. Compared with this research, the proposed algorithm can respond immediately by taking relatively steeper steps when the error gets larger, and as a result, the new algorithm can perform well in non-stationary environments without finding the optimal step size at each node.

B. Our Contributions and Organization

To further promote estimation performance in non-stationary environments over sensor networks, a novel algorithm needs to be designed. In this paper, the random-walk model is introduced for non-stationary environments. We propose the logarithm-correntropy algorithm for parameter estimation in sensor networks under non-stationary environments. This algorithm applies both the logarithm operation and the correntropy criterion to the estimation error. Moreover, if the error grows because of the non-stationary environment, the algorithm responds immediately by taking relatively steeper steps, thereby achieving smaller error in time. The tracking performance of the proposed algorithm is analyzed, and simulation results are presented to evaluate the proposed algorithm.

The rest of this paper is organized as follows. In Section 2, we describe the estimation problem in a non-stationary environment. Section 3 introduces the adapt-then-combine (ATC) diffusion logarithm-correntropy algorithm. In Section 4, the tracking performance analysis of the proposed algorithm is presented. Simulation results are presented in Section 5. Finally, conclusions are drawn in Section 6.

Notation: In what follows, bold letters denote random variables and non-bold letters their realizations. The operators (·)^T and E[·] denote transposition and expectation, respectively. I_M denotes an M×M identity matrix, 1 is an N×1 all-ones vector, and |·| is the absolute value of a scalar.

2. Estimation Problem in a Non-Stationary Environment

Consider a network with N nodes (sensors) deployed to observe some physical phenomena and specific events in a special environment. It is fundamentally necessary to consider and analyze parameter estimation under non-stationary conditions with the intent of employing them for practical applications. One challenge confronted in real-world applications is the non-stationary nature of the underlying parameters. For this purpose, a data model with a varying parameter is required. In this paper, we use the random walk model in [19] to depict the non-stationary condition.

Assumption 1.

(Random Walk Model): The parameter vector varies based on the following model:

$w_i = w_{i-1} + \eta_i$, (1)

where w_{i-1} is a random variable with a constant mean, i is the time index, and η_i is a zero-mean random sequence with covariance matrix R_η.
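As a concrete illustration, the random walk model of Equation (1) can be simulated directly. The dimension, horizon, and perturbation variance below are illustrative choices rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 5              # parameter dimension (illustrative)
steps = 1000       # number of time instants
sigma_eta = 0.01   # perturbation std, i.e., R_eta = sigma_eta^2 * I_M

w = np.zeros((steps, M))
w[0] = rng.standard_normal(M)                   # initial parameter
for i in range(1, steps):
    eta_i = sigma_eta * rng.standard_normal(M)  # zero-mean sequence eta_i
    w[i] = w[i - 1] + eta_i                     # Equation (1): w_i = w_{i-1} + eta_i

# eta_i is zero-mean, so E[w_i] stays constant over i, while the variance
# of the accumulated drift grows like i * sigma_eta^2 per component.
drift = w[-1] - w[0]
```

This is the sense in which the parameter is "time-varying with a constant mean": individual realizations wander, but only slowly when R_η is small.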

Assumption 2.

The sequence ηi is independent of uk,i and nk,i for all k and i.

Assumption 3.

The initial condition w_{-1} is independent of all d_{k,i}, u_{k,i}, n_{k,i}, and η_i.

At every time i, every node k can only exchange information with the nodes in its neighborhood N_k (including node k itself), and takes a scalar measurement d_{k,i} according to:

$d_{k,i} = w_i^T u_{k,i} + n_{k,i}$, (2)

where u_{k,i} denotes the M×1 random regression input signal vector (we assume I > M), and n_{k,i} is Gaussian noise with zero mean and variance σ_{n,k}^2. The problem is to estimate the M×1 unknown time-varying vector w_i at each node k from the collected measurements. The objective of the network is to estimate the unknown vector w by minimizing the MSE cost function in a distributed manner as follows:

$J_k(w) = E|d_{k,i} - w^T u_{k,i}|^2$. (3)

The cost function of the global network can be described as:

$\min_w J^{global}(w) = \sum_{k=1}^{N} E|d_{k,i} - w^T u_{k,i}|^2$. (4)

The optimization problem in Equation (4) can be solved by the diffusion strategies proposed in [30,31]. In these strategies, the estimate for each node is generated through a fixed combination rule, which assigns different weights to the estimates of node k's neighbors to minimize the local cost function as follows:

$J_k^{local}(w) = \sum_{l\in N_k} c_{lk} E|d_{l,i} - w^T u_{l,i}|^2$, (5)

where clk is the combination coefficient. For simplicity and good performance, we use the Metropolis rule in our work. The description of the Metropolis rule is:

$c_{lk} = \begin{cases} \dfrac{1}{\max(n_k, n_l)}, & \text{if } l \in N_k \setminus \{k\}, \\ 1 - \sum_{l \in N_k \setminus \{k\}} c_{lk}, & \text{if } l = k, \\ 0, & \text{if } l \notin N_k, \end{cases}$ (6)

where n_k is the degree of node k (the number of nodes connected to node k). The combination coefficients c_{lk} also satisfy the following conditions: $\sum_{l\in N_k} c_{lk} = 1$, $c_{lk} = 0$ if $l \notin N_k$, $C\mathbf{1} = \mathbf{1}$, and $\mathbf{1}^T C = \mathbf{1}^T$, where C is an N×N matrix with non-negative real entries c_{lk}.
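The Metropolis rule of Equation (6) is straightforward to build from an adjacency structure. The sketch below uses a hypothetical 4-node line network and also exhibits the stochasticity conditions stated above:

```python
import numpy as np

def metropolis_weights(adj):
    """Combination matrix C = [c_lk] from a symmetric adjacency matrix
    (no self-loops), following the Metropolis rule of Equation (6)."""
    adj = np.asarray(adj, dtype=bool)
    N = adj.shape[0]
    deg = adj.sum(axis=1)               # n_k: degree of node k
    C = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if l != k and adj[l, k]:
                C[l, k] = 1.0 / max(deg[k], deg[l])
        C[k, k] = 1.0 - C[:, k].sum()   # diagonal entry makes column k sum to 1
    return C

# Hypothetical 4-node line network: 0 - 1 - 2 - 3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
C = metropolis_weights(adj)
```

Because the Metropolis weights are symmetric, the resulting C satisfies both C1 = 1 and 1^T C = 1^T, i.e., it is doubly stochastic.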

3. Diffusion Logarithmic-Correntropy Algorithm

In the non-stationary case, the parameter is always time-varying, and we propose a new logarithm-correntropy method to solve the parameter estimation problem. To solve Equation (4), since nodes in sensor networks have access to the observed data, we can take advantage of node cooperation by introducing a distributed diffusion learning scheme.

In this paper, we are inspired by recent developments in information theoretic learning (ITL) related to the "logarithmic cost function" and "correntropy"-based approaches [17,32]. The logarithmic function is differentiable, which makes it mathematically tractable, so we introduce it as an efficient cost function in the adaptive algorithm. In this framework, we define an error cost function using the logarithmic function, given by:

$J(e_{k,i}) = -F(e_{k,i}) + \frac{1}{\alpha}\ln\left(1+\alpha F(e_{k,i})\right)$, (7)

where α > 0 is a small systemic parameter and F(e_{k,i}) is a conventional cost function of the estimation error e_{k,i} = d_{k,i} − w^T u_{k,i} at each node k. In this paper, we introduce the correntropy criterion to formulate the conventional cost function F(·); since F(e_{k,i}) will be a similarity measure that is largest at zero error, the negative sign in Equation (7) makes J(e_{k,i}) a cost to be minimized. Correntropy is a similarity measure based on the ITL criterion. Given two random variables X and Y, the corresponding correntropy between them can be defined by [33]:

$V(X,Y) = E[k_\sigma(X-Y)] = \int k_\sigma(x-y)\, dF_{XY}(x,y)$, (8)

where k_σ(·) is a continuous, symmetric, positive-definite kernel function with bandwidth σ, also called a Mercer kernel, E[·] is the expectation operator, and F_{XY}(x,y) is the joint distribution function of X and Y. This paper mainly considers the Gaussian kernel:

$k_\sigma(x-y) = \exp\left(-\frac{(x-y)^2}{2\sigma^2}\right)$. (9)

For each node k, based on the correntropy criterion, the instantaneous conventional cost function F(e_{k,i}) is:

$F(e_{k,i}) = k_\sigma(e_{k,i}) = \exp\left(-\frac{(d_{k,i}-w^T u_{k,i})^2}{2\sigma^2}\right) = \exp\left(-\frac{e_{k,i}^2}{2\sigma^2}\right)$. (10)

In non-stationary conditions over networks, the communication among nodes is subject to link noise, and it is natural that the observation vectors are affected by noise. The total least squares (TLS) method for estimation can have desirable performance by reducing the noise effect from both the observation vector and the data matrix [34]. We briefly explain the TLS method as follows:

Consider the linear parameter estimation problem Axb, where A is the data matrix, b is the observation vector, and x is the unknown parameter vector. The least squares (LS) approach considers that the observation vector is noisy while the data matrix is noiseless. However, the total least squares (TLS) approach considers that both the observation vector and the data matrix are noisy [35]. The LS approach seeks the estimate of the unknown parameter vector x by minimizing a sum of squared residuals expressed by:

$\min_x \|Ax - b\|^2$, (11)

while the TLS approach minimizes a sum of weighted squared residuals expressed by:

$\min_x \frac{\|Ax - b\|^2}{\|x\|^2 + 1}$. (12)
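To make the LS/TLS contrast concrete, the sketch below solves Equations (11) and (12) on a small synthetic problem. The TLS solution is obtained from the right singular vector of the augmented matrix [A b] associated with the smallest singular value, a standard construction [35]; the sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic problem Ax ~ b with noise in BOTH A and b.
x_true = np.array([1.0, -2.0])
A_clean = rng.standard_normal((200, 2))
A = A_clean + 0.1 * rng.standard_normal(A_clean.shape)
b = A_clean @ x_true + 0.1 * rng.standard_normal(200)

# LS: minimize ||Ax - b||^2                     (Equation (11))
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# TLS: minimize ||Ax - b||^2 / (||x||^2 + 1)    (Equation (12)),
# via the right singular vector of [A b] for the smallest singular value.
_, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]

def tls_cost(x):
    """The TLS objective of Equation (12)."""
    return np.sum((A @ x - b) ** 2) / (x @ x + 1.0)
```

By construction, x_tls attains the global minimum of the normalized residual in Equation (12), so its TLS cost can never exceed that of the LS estimate.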

From the matrix algebra viewpoint, the TLS approach is a refinement of the LS method when there are errors in both the observation vector and the data matrix. Inspired by these desirable features of the TLS method, and to make the logarithm-correntropy method more suitable for non-stationary environments, we rewrite the conventional cost function as:

$\tilde{F}(e_{k,i}) = \exp\left(-\frac{(d_{k,i}-w^T u_{k,i})^2}{2\sigma^2(\|w\|^2+1)}\right) = \exp\left(-\frac{e_{k,i}^2}{2\sigma^2(\|w\|^2+1)}\right)$. (13)

To demonstrate the superiority of the proposed logarithm-correntropy method, we compare it with other stochastic cost functions, namely the least-mean-square cost e^2 and the absolute-difference cost |e|. Figure 1 compares these cost functions with the proposed logarithm-correntropy cost. It can be observed that the logarithm-correntropy cost is less sensitive to tiny perturbations of the error, while remaining comparably steep for quite large errors. Furthermore, this new cost benefits from mapping the original input space into a potentially higher-dimensional "feature space", where linear methods can be employed. In particular, if the error grows because of the non-stationary environment, the algorithm responds immediately by taking relatively steeper steps. Thus, the proposed algorithm achieves smaller error in time while taking more gradual steps in space.

Figure 1.


(a) The value of cost function; (b) The gradient of cost error function.

Given the data model in Equation (2), all nodes can observe the generated data. It is natural to expect collaboration between nodes to be beneficial in a distributed sensor network: neighboring nodes can share information with each other as permitted by the network topology. Therefore, according to Equations (7) and (13), we define a global cost function so that all nodes in the sensor network can adapt in a distributed manner. The new global function is built as follows:

$J^{global}(w) = \sum_{k=1}^{N}\left[-\tilde{F}(e_{k,i}) + \frac{1}{\alpha}\ln\left(1+\alpha \tilde{F}(e_{k,i})\right)\right] = \sum_{k=1}^{N}\left[-\exp\left(-\frac{e_{k,i}^2}{2\sigma^2(\|w\|^2+1)}\right) + \frac{1}{\alpha}\ln\left(1+\alpha\exp\left(-\frac{e_{k,i}^2}{2\sigma^2(\|w\|^2+1)}\right)\right)\right]$. (14)

To develop the distributed diffusion logarithm-correntropy algorithm in non-stationary environments over sensor networks, we can build the following new diffusion Logarithm-Correntropy Algorithm (dLCA) local cost function at every node k as:

$J_k^{local}(w_{k,i}) = \sum_{l\in N_k} c_{lk}\left[-\tilde{F}(e_{l,i}) + \frac{1}{\alpha}\ln\left(1+\alpha\tilde{F}(e_{l,i})\right)\right] = \sum_{l\in N_k} c_{lk}H(e_{l,i}) = \sum_{l\in N_k} c_{lk}\left[-\exp\left(-\frac{e_{l,i}^2}{2\sigma^2(\|w_{l,i}\|^2+1)}\right) + \frac{1}{\alpha}\ln\left(1+\alpha\exp\left(-\frac{e_{l,i}^2}{2\sigma^2(\|w_{l,i}\|^2+1)}\right)\right)\right]$, (15)

where w_{k,i} is the local estimate obtained by node k at time i, e_{l,i} = d_{l,i} − w_{l,i}^T u_{l,i} is the estimation error at node l, l denotes any neighbor of node k, $H(e_{l,i}) = -\tilde{F}(e_{l,i}) + \frac{1}{\alpha}\ln(1+\alpha\tilde{F}(e_{l,i}))$, and the c_{lk} denote combining coefficients, which also satisfy the Metropolis rule.

To reach the minimizer w_i, a natural choice is the steepest-descent method. Taking the derivative of Equation (15), we have

$\frac{\partial J_k^{local}(w_{k,i})}{\partial w_{k,i}} = \sum_{l\in N_k} c_{lk}\frac{\partial H(e_{l,i})}{\partial w_{l,i}} = \sum_{l\in N_k} c_{lk}\left[-\tau_{l,i} + \frac{\tau_{l,i}}{1+\alpha\xi_{l,i}}\right]$, (16)

where $\xi_{l,i} = \exp\left(-\frac{e_{l,i}^2}{2\sigma^2(\|w_{l,i-1}\|^2+1)}\right)$ and $\tau_{l,i} = \exp\left(-\frac{e_{l,i}^2}{2\sigma^2(\|w_{l,i-1}\|^2+1)}\right)\cdot\frac{e_{l,i}u_{l,i}(\|w_{l,i-1}\|^2+1)+e_{l,i}^2 w_{l,i-1}}{\sigma^2(\|w_{l,i-1}\|^2+1)^2}$.

Since nodes in the sensor networks have access to all observed data, we can take advantage of node cooperation by introducing a diffusion strategy to estimate the parameter w_{k,i} in a fully distributed manner. This paper adopts the Adapt-then-Combine (ATC) scheme of the diffusion strategy. As Figure 2 shows, in the ATC scheme, each node first adapts its estimate using its local data, and then combines the intermediate estimates from its immediate neighbors, via the following steps:

Figure 2.


Adapt-then-Combine (ATC) diffusion strategy. Step 1 depicts a sensor network working in a non-stationary environment. In adaptation stage 2, each node uses the observed data {u_{k,i}, d_{k,i}} to update its intermediate estimate φ_{k,i}. Step 3 shows the information exchange between nodes. In combination stage 4, each node collects the intermediate estimates from its neighbors.

(1) Adaptation: In order to obtain an intermediate estimate, we introduce a step-size parameter μ. Each node updates its current estimate of the true parameter by taking a steepest-descent step, which yields the intermediate estimate φ_{k,i} as follows:

$\varphi_{k,i} = w_{k,i-1} - \mu\frac{\partial H(e_{k,i})}{\partial w_{k,i-1}} = w_{k,i-1} - \mu\left[-\tau_{k,i} + \frac{\tau_{k,i}}{1+\alpha\xi_{k,i}}\right]$. (17)

(2) Combination: This step is also called the diffusion step. To obtain a new estimate, each node aggregates the intermediate estimates from all of its neighbor nodes as follows:

$w_{k,i} = \sum_{l\in N_k} c_{lk}\varphi_{l,i}$. (18)

For the purpose of clarity, we summarize the procedures of the diffusion Logarithm-Correntropy Algorithm (dLCA) (Algorithm 1) as follows:

Algorithm 1: diffusion Logarithm-Correntropy Algorithm (dLCA)
   Initialize: w_{k,-1} = 0 for each node k; set the step size μ, the combination
   coefficients c_{lk}, α > 0, and σ > 0.
   for i = 1 : T
      for each node k:
         Adaptation.
            e_{k,i} = d_{k,i} - w_{k,i-1}^T u_{k,i}
            ξ_{k,i} = exp(-e_{k,i}^2 / (2σ^2(‖w_{k,i-1}‖^2+1)))
            τ_{k,i} = ξ_{k,i} [e_{k,i}u_{k,i}(‖w_{k,i-1}‖^2+1) + e_{k,i}^2 w_{k,i-1}] / (σ^2(‖w_{k,i-1}‖^2+1)^2)
            φ_{k,i} = w_{k,i-1} - μ[-τ_{k,i} + τ_{k,i}/(1+αξ_{k,i})]
         Communication.
            Transmit the intermediate estimate φ_{k,i} to all neighbors in N_k.
         Combination.
            w_{k,i} = Σ_{l∈N_k} c_{lk} φ_{l,i}
            (N_k denotes the neighbor nodes of node k in the communication subnetwork.)
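For concreteness, the full ATC loop of Algorithm 1 can be sketched in a small simulation. Everything below (the ring topology, the signal statistics, and the values of σ, α, and μ) is an illustrative assumption rather than the paper's setup, and a stationary parameter vector is used so that convergence is easy to verify; the adaptation implements Equation (17) and the combination implements Equation (18):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ring network of N nodes; M-dimensional parameter.
N, M = 5, 4
adj = np.zeros((N, N), dtype=bool)
for k in range(N):
    adj[k, (k + 1) % N] = adj[(k + 1) % N, k] = True

# Metropolis combination weights, Equation (6).
deg = adj.sum(axis=1)
C = np.zeros((N, N))
for k in range(N):
    for l in range(N):
        if l != k and adj[l, k]:
            C[l, k] = 1.0 / max(deg[k], deg[l])
    C[k, k] = 1.0 - C[:, k].sum()

sigma, alpha, mu = 1.0, 0.5, 0.5       # kernel width, log parameter, step size
w_true = rng.standard_normal(M) / np.sqrt(M)
W = np.zeros((N, M))                   # current estimates, one row per node

def adapt(w, u, d):
    """One adaptation step, Equation (17): phi = w - mu * dH/dw, where
    xi is F~(e) and tau is the gradient of F~(e) evaluated at w."""
    e = d - w @ u
    q = w @ w + 1.0                                # ||w||^2 + 1
    xi = np.exp(-e * e / (2.0 * sigma ** 2 * q))   # xi_{k,i}
    tau = xi * (e * u * q + e * e * w) / (sigma ** 2 * q * q)
    return w - mu * (-tau + tau / (1.0 + alpha * xi))

for i in range(2000):
    Phi = np.empty_like(W)
    for k in range(N):                 # (1) Adaptation at each node
        u = rng.standard_normal(M)
        d = w_true @ u + 0.01 * rng.standard_normal()
        Phi[k] = adapt(W[k], u, d)
    W = C.T @ Phi                      # (2) Combination: w_k = sum_l c_lk phi_l

msd = float(np.mean(np.sum((W - w_true) ** 2, axis=1)))
```

The factor ξ_{k,i} shrinks the step for very large errors (outlier robustness from the correntropy kernel), while the 1/(1+αξ) term implements the logarithmic tempering of Equation (7).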

4. Tracking Performance Analysis

The tracking performance of the proposed diffusion logarithm-correntropy algorithm is analyzed in this section. The convergence condition is first studied in terms of the error signal w̃_i for the time-varying parameter under the random walk model, defined as:

$\tilde{w}_i \triangleq w_i - w_{k,i}$. (19)

It has been proven that subtracting wi from both sides of the update procedure on a node and then taking the expectation value leads to the following relation under stationary conditions in [10]:

$E[\tilde{w}_i] = \left(I_M - \mu\sum_{k=1}^{N} R_{u,k}\right)E[\tilde{w}_{i-1}]$. (20)

Then, by Assumptions 2 and 3 on the random sequence η_i, the vector w_i has a constant mean, and hence E[w_i] = E[w_{i-1}] under the relation in Equation (1). Taking expectations then leads to the following relation under the non-stationary conditions of Equation (1):

$E[\tilde{w}_i] = \left(I_M - \mu\sum_{k=1}^{N} R_{u,k}\right)E[\tilde{w}_{i-1}] + \eta_i$. (21)

In Equation (21), η_i is a zero-mean random sequence with covariance matrix R_η. Our purpose is to obtain the mean-square deviation (MSD) and excess mean-square error (EMSE) of each node, which are defined as:

$\mathrm{MSD}_k = E\|\tilde{w}_{k,\infty}\|^2$, (22)
$\mathrm{EMSE}_k = E\|\tilde{w}_{k,\infty}\|^2_{R_{u,k}}$. (23)
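In practice, the two quantities in Equations (22) and (23) are estimated by sample averaging over realizations of the weight-error vector. A minimal sketch with synthetic steady-state samples and a hypothetical regressor covariance R_{u,k}:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4
Ru = np.diag([1.0, 0.5, 2.0, 1.5])    # hypothetical regressor covariance R_{u,k}

# Synthetic steady-state weight-error samples w~_{k,i} (i.i.d. here for brevity),
# each component with variance 0.05^2.
w_err = 0.05 * rng.standard_normal((10000, M))

msd = float(np.mean(np.sum(w_err ** 2, axis=1)))                   # E ||w~||^2
emse = float(np.mean(np.einsum('ij,jk,ik->i', w_err, Ru, w_err)))  # E ||w~||^2_{R_u}
```

For these synthetic samples, the MSD should be near 0.05^2 * M = 0.01 and the EMSE near 0.05^2 * trace(R_u) = 0.0125, which is how the simulated curves in Section 5 are produced in spirit.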

For the proposed algorithm in Equation (17), the error signals can be defined as follows:

$\tilde{\varphi}_{k,i} \triangleq w_i - \varphi_{k,i}$, (24)
$e_{k,i} \triangleq d_{k,i} - u_{k,i}^T w_{k,i-1}$. (25)

In the non-stationary case with w_i = w_{i-1} + η_i, based on the definition in Equation (15), J_k^{local}(w) is twice continuously differentiable when w ≠ 0. We then consider the Hessian matrix of J_k^{local}(w), denoted ∇²_w J_k^{local}(w).

From Lemma 1 and Theorem 1 in [12], the Hessian is bounded: $\lambda_{k,\min} I_M \le \nabla_w^2 J_k^{local} \le \lambda_{k,\max} I_M$, with $0 < \lambda_{k,\min} \le \lambda_{k,\max}$.

The stochastic updates in Equations (16)–(18) introduce gradient noise. The error recursion is then given by

$\tilde{\varphi}_{k,i} = (I_M - \mu H_{k,i-1})\tilde{w}_{k,i-1} - \mu n_{k,i}$, (26)
$\tilde{w}_{k,i} = \sum_{l\in N_k} c_{lk}\tilde{\varphi}_{l,i}$. (27)

In Equation (26), H_{k,i-1} is a positive-definite random matrix defined as

$H_{k,i-1} \triangleq \int_0^1 \nabla_w^2 J_k^{local}\left(w_i - t\tilde{w}_{k,i-1}\right) dt$. (28)

Applying Jensen's inequality to Equation (27), the variance of w̃_{k,i} is bounded by

$E\|\tilde{w}_{k,i}\|^2 \le \sum_{l\in N_k} c_{lk} E\|\tilde{\varphi}_{l,i}\|^2$, (29)

where ‖·‖² denotes the squared Euclidean norm, which is a convex function.

Taking the weighted squared Euclidean norm of both sides of Equation (26) and then expectations, we achieve

$E\|\tilde{\varphi}_{k,i}\|^2 = E\|\tilde{w}_{k,i-1}\|^2_{\Sigma_{k,i-1}} + \mu^2 E\|n_{k,i}\|^2$, (30)
$\Sigma_{k,i-1} \triangleq (I_M - \mu H_{k,i-1})^T(I_M - \mu H_{k,i-1}) = I_M - \mu H_{k,i-1}^T - \mu H_{k,i-1} + \mu^2 H_{k,i-1}^T H_{k,i-1}$. (31)

It follows from the bound Hessian and Equation (28) that

$0 \preceq \Sigma_{k,i-1} \preceq \tau_k^2 I_M$, (32)

where

$\tau_k^2 \triangleq \max\left\{\left[1-\mu_k\lambda_{k,\max}(R_{u,k})\right]^2, \left[1-\mu_k\lambda_{k,\min}(R_{u,k})\right]^2\right\} + \mu^2\lambda_{k,\max}^2(R_{u,k})$. (33)

According to [12], substituting Equation (32) into Equation (30), we get

$E\|\tilde{\varphi}_{k,i}\|^2 \le (\tau_k^2 + \alpha\mu_k^2)E\|\tilde{w}_{k,i-1}\|^2 + \mu^2\sigma_{n,k}^2$, (34)

where α ≥ 0 is a constant. We introduce the global MSD vector, which leads to

$\tilde{w}_i \triangleq \mathrm{col}\left\{E\|\tilde{w}_{1,i}\|^2, E\|\tilde{w}_{2,i}\|^2, \ldots, E\|\tilde{w}_{N,i}\|^2\right\}$. (35)

We collect the c_{lk} into N×N matrices C_i, such that C = E[C_i]. Each C_i is left-stochastic, that is, $C_i^T 1_N = 1_N$, where 1_N denotes the N×1 all-ones vector. From Equations (29) and (35), it holds that

$\tilde{w}_i \preceq C^T\Gamma\tilde{w}_{i-1} + C^T\Xi 1_N$, (36)

where ⪯ denotes element-wise ordering and

$\Gamma \triangleq \mathrm{diag}\{\tau_1^2+\alpha\mu^2, \ldots, \tau_N^2+\alpha\mu^2\}$, (37)
$\Xi \triangleq \mathrm{diag}\{\mu^2\sigma_{n,1}^2, \ldots, \mu^2\sigma_{n,N}^2\}$. (38)

In order to ensure the stability of the proposed algorithm in the mean sense, according to Theorem 1 (mean-square stability) in [12], it should hold that

$\mu_k < \min\left\{\frac{2\lambda_{k,\max}(R_{u,k})}{\lambda_{k,\max}^2(R_{u,k})+\alpha}, \frac{2\lambda_{k,\min}(R_{u,k})}{\lambda_{k,\min}^2(R_{u,k})+\alpha}\right\}$. (39)

Since λ_{k,min}(R_{u,k}) ≤ λ_{k,max}(R_{u,k}), the right-hand side of Equation (39) is dominated by the λ_{k,max} term. Letting i → ∞ in Equation (36) indicates

$\lim_{i\to\infty}\|\tilde{w}_i\|_\infty \le \frac{\max_k \mu^2\sigma_{n,k}^2}{1-\max_k(\tau_k^2+\alpha\mu^2)}$, (40)

where ‖·‖_∞ is the ℓ_∞ norm. When the step sizes μ are sufficiently small, we can further draw the conclusion that

$\lim_{i\to\infty}\|\tilde{w}_i\|_\infty \le \frac{\sigma_{n,k}^2\,\mu_{\max}^2}{2\min_{1\le k\le N}\lambda_{k,\min}\,\mu_{\min}}$. (41)

According to the bound in Equation (41), if the step sizes μ are sufficiently small, the MSD of each node, E‖w̃_{k,i}‖², can become sufficiently small.
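As a worked example of the stability condition, Equation (39) can be evaluated directly once the curvature bounds are known. The numbers below are hypothetical, not values from the paper:

```python
# Hypothetical curvature bounds lambda_{k,min}(R_u,k) and lambda_{k,max}(R_u,k)
lam_min, lam_max = 0.5, 2.0
alpha = 0.1   # the constant appearing in Equation (34)

# Step-size stability bound from Equation (39): mu_k must stay below the
# smaller of the two candidate bounds.
mu_bound = min(2 * lam_max / (lam_max ** 2 + alpha),
               2 * lam_min / (lam_min ** 2 + alpha))
```

For these numbers the λ_{k,max} branch is the binding one, giving μ_k < 4/4.1 ≈ 0.976.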

5. Simulation Results

In this section, to verify the performance of the proposed diffusion logarithm-correntropy algorithm, we considered a network consisting of 20 nodes and 50 communication links. The topology is shown in Figure 3. The sensor nodes were randomly deployed in an area of 100×100 and the communication distance between nodes was set as 35. All results below were averaged over 150 independent Monte Carlo simulations with randomly generated samples.

Figure 3.


The topology of sensor network with 20 nodes.

In this first part of the simulations, the performance of the proposed algorithm was verified in a non-stationary environment over sensor networks with ideal communication links. In Figure 4, the regression inputs u_{k,i} are independent and identically distributed (i.i.d.) zero-mean Gaussian vectors with covariance matrices R_{u,k} = σ_{u,k}² I_M, where σ_{u,k}² is the input variance. The background noises n_{k,i} are i.i.d. and drawn independently of the regressors. The unknown parameter vector w_i is time-varying, as Figure 5 shows. A fixed step size μ = 0.002 is used in the simulations.

Figure 4.


(a) Regressor statistics; (b) Noise variances.

Figure 5.


The desired time-varying vector, wi.

The MSD learning curves are plotted in Figure 6. The proposed dLCA algorithm obtained the fastest convergence rate compared with the dSE-LMS, dLMP, dTLS, and dNLMS algorithms, and achieved a relatively smaller network MSD than these algorithms. These simulation results show that the diffusion logarithm-correntropy algorithm exhibits better tracking ability in non-stationary environments than the existing classical algorithms.

Figure 6.


A comparison of simulated MSD learning curves in a non-stationary environment over sensor networks for the diffusion sign-error least-mean-square (dSE-LMS), diffusion minimum average p-power (dLMP), gradient-descent total least-squares (dTLS), diffusion normalized least-mean-square algorithm (dNLMS), and diffusion logarithm-correntropy algorithm (dLCA) algorithms.

Figure 7 compares the steady-state EMSE of the related algorithms at each node in the sensor network. The steady-state EMSE values were obtained by averaging over 150 experiments and over 50 time samples after convergence. A large difference can be observed at some nodes, which achieved low EMSE, and the proposed algorithm captured a better trend of the steady-state performance than the other algorithms.

Figure 7.


Estimated accuracy comparison in terms of excess mean-square error (EMSE) on each node for the dSE-LMS, dLMP, dTLS, dNLMS, and dLCA algorithms.

Secondly, to further simulate the non-stationary scenarios in sensor networks, the link was assumed to change at time 4000. The unknown parameter vector wi was time-varying with link changing, as Figure 8 shows.

Figure 8.


The desired time-varying vector with link changing, wi.

From the simulation results shown in Figure 9, in non-stationary environments over sensor networks with links changing, the diffusion logarithm-correntropy algorithm had smaller MSD than other related algorithms, such as dSE-LMS, dLMP, dTLS, and dNLMS algorithms. It further shows that the proposed dLCA algorithm had better tracking ability in non-stationary environments.

Figure 9.


A comparison of simulated MSD learning curves of the global network for the dSE-LMS, dLMP, dTLS, dNLMS, and dLCA algorithms in non-stationary environments over sensor networks with links changing.

Finally, we compare the simulated network MSD curves with the theoretical results from Equation (41) in Figure 10. The theoretical network MSD curves of the proposed algorithm match well with the simulated MSD curves.

Figure 10.


Theoretical and simulated MSD curves of the proposed dLCA algorithm under Equation (41).

6. Conclusions

To solve the parameter estimation problem in non-stationary environments over sensor networks, each node in the network was equipped with the logarithm-correntropy cost function. The proposed algorithm can gradually adjust the cost function in its optimization based on the amount of estimation error. We investigated the tracking behavior of the proposed algorithm under non-stationary conditions. Furthermore, simulations were carried out in non-stationary environments where the parameters were time-varying with changing links. The simulation experiments verified the analytical results and illustrated that the proposed algorithm outperforms existing algorithms such as the dSE-LMS, dLMP, dTLS, and dNLMS algorithms.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their constructive comments, which improved the quality of the paper.

Author Contributions

Data curation, F.C.; Funding acquisition, S.D.; Project administration, L.H.; Software, L.H.; Supervision, S.D. and L.W.; Writing–original draft, L.H.; Writing–review and editing, F.C. and L.W.

Funding

This work was supported in part by the Doctoral Fund of Southwest University (No. SWU113067), the Fundamental Research Funds for the Central Universities (Grant Nos. XDJK2017B053, XDJK2017D176, XDJK2017D180, XDJK2017D181), Chongqing Research Program of Basic Research and Frontier Technology (No. cstc2017jcyjAX0265) and Program for New Century Excellent Talents in University (Grant No. [2013] 47).

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1.Safdarian A., Fotuhi-Firuzabad M., Lehtonen M. A distributed algorithm for managing residential demand response in smart grids. IEEE Trans. Ind. Inf. 2014;10:2385–2393. doi: 10.1109/TII.2014.2316639. [DOI] [Google Scholar]
  • 2.Talebi S.P., Kanna S., Mandic D.P. A Distributed Quaternion Kalman Filter with Applications to Smart Grid and Target Tracking. IEEE Trans. Signal Inf. Process. Netw. 2016;2:477–488. doi: 10.1109/TSIPN.2016.2618321. [DOI] [Google Scholar]
  • 3.Harris N., Cranny A., Rivers M. Application of distributed wireless chloride sensors to environmental monitoring: Initial results. IEEE Trans. Instrum. Meas. 2016;4:736–743. doi: 10.1109/TIM.2015.2490838. [DOI] [Google Scholar]
  • 4.Sayed A.H. Adaptive Filters. John Wiley and Sons; Hoboken, NJ, USA: 2008. [Google Scholar]
  • 5.Tan T.H., Gochoo M., Chen Y.F., Hu J.J., Chiang J.Y., Chang C.S., Hsu J.C. Ubiquitous emergency medical service system based on wireless biosensors, traffic information, and wireless communication technologies: Development and evaluation. Sensors. 2017;17:202. doi: 10.3390/s17010202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Jiang P., Xu Y., Liu J. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks. Sensors. 2017;17:186. doi: 10.3390/s17010186. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Kang X., Huang B., Qi G. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones. Sensors. 2018;18:297. doi: 10.3390/s18010297. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Lopes C.G., Sayed A.H. Incremental adaptive strategies over distributed networks. IEEE Trans. Signal Process. 2007;55:4064–4077. doi: 10.1109/TSP.2007.896034. [DOI] [Google Scholar]
  • 9.Soatti G., Nicoli M., Savazzi S. Consensus-Based Algorithms for Distributed Network-State Estimation and Localization. IEEE Trans. Signal Inf. Process. Netw. 2017;3:430–444. doi: 10.1109/TSIPN.2016.2626141. [DOI] [Google Scholar]
  • 10.Liu Y., Li C., Zhang Z. Diffusion sparse least-mean squares over networks. IEEE Trans. Signal Process. 2012;60:4480–4485. [Google Scholar]
  • 11.Abdolee R., Vakilian V. An Iterative Scheme for Computing Combination Weights in Diffusion Wireless Networks. IEEE Wirel. Commun. Lett. 2017;6:510–513. doi: 10.1109/LWC.2017.2710044. [DOI] [Google Scholar]
  • 12.Chen J., Sayed A.H. Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans. Signal Process. 2012;60:4289–4305. doi: 10.1109/TSP.2012.2198470. [DOI] [Google Scholar]
  • 13.Chen F., Shao X. Broken-motifs Diffusion LMS Algorithm for Reducing Communication Load. Signal Process. 2017;133:213–218. doi: 10.1016/j.sigpro.2016.11.005. [DOI] [Google Scholar]
  • 14.Chouvardas S., Slavakis K., Theodoridis S. Adaptive robust distributed learning in diffusion sensor networks. IEEE Trans. Signal Process. 2011;10:4692–4707. doi: 10.1109/TSP.2011.2161474. [DOI] [Google Scholar]
  • 15.Chen F., Shi T., Duan S., Wang L. Diffusion least logarithmic absolute difference algorithm for distributed estimation. Signal Process. 2018;142:423–430. doi: 10.1016/j.sigpro.2017.07.014. [DOI] [Google Scholar]
  • 16.Sayed A.H. Fundamentals of Adaptive Filtering. Wiley; Hoboken, NJ, USA: 2003. [Google Scholar]
  • 17.Sayin M.O., Vanli N.D., Kozat S.S. A Novel Family of Adaptive Filtering Algorithms Based on the Logarithmic Cost. IEEE Trans. Signal Process. 2014;62:4411–4424. doi: 10.1109/TSP.2014.2333559. [DOI] [Google Scholar]
  • 18.Chen B., Xing L., Liang J., Zheng N., Principe J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014;21:880–884. [Google Scholar]
  • 19.Nosrati H., Shamsi M., Taheri S.M., Sedaagh M.H. Adaptive networks under non-stationary conditions: Formulation, performance analysis, and application. IEEE Trans. Signal Process. 2015;63:4300–4314. doi: 10.1109/TSP.2015.2436363. [DOI] [Google Scholar]
  • 20.Predd J.B., Kulkarni S.B., Poor H.V. Distributed learning in wireless sensor networks. IEEE Signal Process. Mag. 2006;23:56–69. doi: 10.1109/MSP.2006.1657817. [DOI] [Google Scholar]
  • 21.Bertsekas D.P., Tsitsiklis J.N. Gradient convergence in gradient methods with errors. SIAM J. Optim. 2000;10:627–642. doi: 10.1137/S1052623497331063. [DOI] [Google Scholar]
  • 22.Arablouei R., Dogancay K. Adaptive Distributed Estimation Based on Recursive Least-Squares and Partial Diffusion. IEEE Trans. Signal Process. 2014;14:3510–3522. doi: 10.1109/TSP.2014.2327005. [DOI] [Google Scholar]
  • 23.Wen F. Diffusion least-mean P-power algorithms for distributed estimation in alpha-stable noise environments. Electron. Lett. 2013;49:1355–1356. doi: 10.1049/el.2013.2331. [DOI] [Google Scholar]
  • 24.Wagner K., Doroslovacki M. Proportionate-type normalized least mean square algorithms with gain allocation motivated by mean-square-error minimization for white input. IEEE Trans. Signal Process. 2011;59:2410–2415. doi: 10.1109/TSP.2011.2106123. [DOI] [Google Scholar]
  • 25.Jung S.M., Seo J.H., Park P. A variable step-size diffusion normalized least-mean-square algorithm with a combination method based on mean-square deviation. Circuits Syst. Signal Process. 2015;34:3291–3304. doi: 10.1007/s00034-015-0005-9. [DOI] [Google Scholar]
  • 26.Arablouei R., Werner S. Analysis of the gradient-descent total least-squares adaptive filtering algorithm. IEEE Trans. Signal Process. 2014;62:1256–1264. doi: 10.1109/TSP.2014.2301135. [DOI] [Google Scholar]
  • 27.Ni J., Chen J., Chen X. Diffusion sign-error LMS algorithm: Formulation and stochastic behavior analysis. Signal Process. 2016;128:142–149. doi: 10.1016/j.sigpro.2016.03.022. [DOI] [Google Scholar]
  • 28.Zhao X., Tu S., Sayed A.H. Diffusion adaptation over networks under imperfect information exchange and non-stationary data. IEEE Trans. Signal Process. 2012;60:3460–3475. doi: 10.1109/TSP.2012.2192928. [DOI] [Google Scholar]
  • 29.Abdolee R., Vakilian V., Champagne B. Tracking Performance and Optimal Step-Sizes of Diffusion LMS Algorithms in Nonstationary Signal Environment. IEEE Trans. Control Netw. Syst. 2016;5:67–78. doi: 10.1109/TCNS.2016.2578044. [DOI] [Google Scholar]
  • 30.Cattivelli F.S., Sayed A.H. Diffusion LMS Strategies for Distributed Estimation. IEEE Trans. Signal Process. 2010;58:1035–1048. doi: 10.1109/TSP.2009.2033729. [DOI] [Google Scholar]
  • 31.Lopes C.G., Sayed A.H. Diffusion least-mean squares over adaptive networks: Formulation and performance analysis. IEEE Trans. Signal Process. 2008;56:3122–3136. doi: 10.1109/TSP.2008.917383. [DOI] [Google Scholar]
  • 32.Chen B., Xing L., Zhao H., Zheng N., Principe J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016;64:3376–3387. doi: 10.1109/TSP.2016.2539127. [DOI] [Google Scholar]
  • 33.Chen B., Xing L., Xu B., Zhao H., Principe J.C. Insights into the Robustness of Minimum Error Entropy Estimation. IEEE Trans. Neural Netw. Learn. Syst. 2016;3:1–7. doi: 10.1109/TNNLS.2016.2636160. [DOI] [PubMed] [Google Scholar]
  • 34.Rahman M.D., Yu K.B. Total least squares approach for frequency estimation using linear prediction. IEEE Trans. Acoust. Speech Signal Process. 1987;35:1440–1454. doi: 10.1109/TASSP.1987.1165059. [DOI] [Google Scholar]
  • 35.Markovsky I., Van Huffel S. Overview of total least-squares methods. Signal Process. 2007;87:2283–2302. doi: 10.1016/j.sigpro.2007.04.004. [DOI] [Google Scholar]
