SpringerPlus. 2016 Mar 8;5:295. doi: 10.1186/s40064-016-1931-0

Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks

Qinwei Fan 1,2,3, Wei Wu 2, Jacek M Zurada 3,4
PMCID: PMC4783325  PMID: 27066332

Abstract

This paper presents new theoretical results on the backpropagation algorithm with smoothing L1/2 regularization and adaptive momentum for feedforward neural networks with a single hidden layer. Specifically, we show that the gradient of the error function goes to zero and that the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are also more general than existing ones, since we require the error function to be neither quadratic nor uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the new algorithm produces a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed after training, which simplifies the network structure and lowers the operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.

Keywords: Feedforward neural networks, Adaptive momentum, Smoothing L1/2 regularization, Convergence

Background

A multilayer perceptron network trained with the highly popular error back-propagation (BP) algorithm has dominated the neural network literature for over two decades (Haykin 2008). BP uses two practical ways to implement the gradient method: the batch updating approach accumulates the weight corrections over the training epoch before performing the update, while the online learning approach updates the network weights immediately after each training sample is processed (Wilson and Martinez 2003).

Note that training is usually done by iteratively updating the weights so as to reduce the error value, with the update proportional to the negative gradient of a sum-square error (SSE) function. However, during the training of feedforward neural networks (FNN) with SSE, the weights might become very large or even unbounded. This drawback can be addressed by adding a regularization term to the error function. The extra term, also called a penalty term, acts as a brute force that drives unnecessary weights to zero and prevents the weights from taking too large values; it can then be used to remove weights that are not needed (Haykin 2008; Wu et al. 2006; Karnin 1990; Reed 1993; Saito and Nakano 2000).

There are four main penalty approaches for BP training: the weight decay procedure (Hinton 1989), weight elimination (Weigend et al. 1991), the approximate smoother procedure (Moody and Rognvaldsson 1997) and the inner-product penalty (Kong and Wu 2001).

In the weight decay procedure, the complexity penalty term is defined as the squared norm of the weight vector, and all weights in the multilayer perceptron are treated equally. In the weight elimination procedure, the complexity penalty represents the complexity of the network as a function of the weight magnitudes relative to a pre-assigned parameter (Reed 1993).

In the approximate smoother procedure, the penalty term is used for a multilayer perceptron with a single hidden layer and a single neuron in the output layer. Compared with the earlier methods, it does two things. First, it distinguishes between the roles of the weights in the hidden layer and those in the output layer. Second, it captures the interactions between these two sets of weights; however, it is much more demanding in computational complexity than the weight decay or weight elimination methods. In Kong and Wu (2001) the inner-product form is proposed and its efficiency in controlling the weights is demonstrated. Convergence of the gradient method for FNNs has been considered by Zhang et al. (2015, 2009), Wang et al. (2012) and Shao and Zheng (2011).

The convergence of the gradient method with momentum is considered in Bhaya and Kaszkurewicz (2004), Torii and Hagan (2002) and Zhang et al. (2006); in Bhaya and Kaszkurewicz (2004) and Torii and Hagan (2002) this is done under the restriction that the error function is quadratic. Inspired by Chan and Fallside (1987), Zhang et al. (2006) consider the convergence of a gradient algorithm with adaptive momentum without assuming the error function to be quadratic, as the earlier results do. However, the strong convergence result in Zhang et al. (2006) is based on the assumption that the error function is uniformly convex, which is still a rather strong requirement.

The size of the hidden layer is one of the most important considerations when FNNs are applied to real-life tasks. However, the existing pruning methods may not prune unnecessary weights efficiently, so efficiently simplifying the network structure becomes our main task.

Recently, considerable attention has been paid to sparsity problems, and a class of regularization methods has been proposed which takes the following form:

$$\min\ \frac{1}{n}\sum_{i=1}^{n} l\left(y_i, h(x_i)\right) + \lambda\|h\|_k \qquad (1)$$

where $l(\cdot,\cdot)$ is a loss function, $\{(x_i, y_i)\}_{i=1}^{n}$ is a data set, and $\lambda$ is the regularization parameter. When $h$ is linear and the loss function is the square loss, $\|h\|_k$ is normally taken as a norm of the coefficient vector of the linear model.

For $k=0$, (1) becomes L0 regularization and can be understood as penalized least squares with penalty $\|h\|_0$, which yields the sparsest solutions, but for large-scale data analysis it faces a combinatorial optimization problem (Davis 1994; Natarajan 1995). To deal with this difficulty, Tibshirani (1996) proposed L1 regularization, in which $k=1$ and $\|h\|_1$ is the L1 norm on the n-dimensional Euclidean space $\mathbb{R}^n$; it only requires solving a quadratic programming problem, but it is less sparse than L0 regularization. At the same time, Donoho (1995, 2005) proved that under some conditions the solutions of the L0 regularizer are equivalent to those of the L1 regularizer, so the NP-hard optimization problem can be avoided in the L1 case. In order to find a regularizer that is sparser than the L1 regularizer while still being easier to solve than the L0 regularizer, a modified L1/2 regularizer of the following form was proposed in Xu et al. (2010):

$$\hat{\beta}_{L_{1/2}} = \arg\min\left\{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - X_i^{T}\beta\right)^2 + \lambda\sum_{i=1}^{p}|\beta_i|^{1/2}\right\} \qquad (2)$$

where λ is the tuning parameter. As shown in Xu et al. (2010), the L1/2 regularizer has a nonconvex penalty and possesses many promising properties such as unbiasedness, sparsity and oracle properties, and it can be taken as a representative of the Lr (0&lt;r&lt;1) regularizers. Recently, we developed a novel method to prune FNNs by modifying the usual L1/2 regularization term with a smoothing technique. The new algorithm not only removes the oscillation of the gradient value, but also achieves better pruning, namely the final weights to be removed are smaller than those produced by the usual L1/2 regularization (Wu et al. 2014; Fan et al. 2014).
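For reference, the penalized objective in (2) is straightforward to evaluate. The following minimal Python sketch is ours and purely illustrative; the names X, Y, beta and lam are not taken from the cited work.

import numpy as np

def l_half_objective(beta, X, Y, lam):
    # Eq. (2): (1/n) * sum_i (Y_i - X_i^T beta)^2 + lam * sum_i |beta_i|^(1/2)
    n = X.shape[0]
    residual = Y - X @ beta
    return np.sum(residual ** 2) / n + lam * np.sum(np.sqrt(np.abs(beta)))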

The focus of this paper is on the extension of L1/2 regularization beyond its basic concept through its augmentation with a momentum term. There are also other applications of FNNs to optimization problems, such as the generalized gradient and recurrent neural network methods of Liu et al. (2012) and Liu and Cao (2010).

It is well known that a general drawback of the gradient-based BP learning process is its slow convergence. To accelerate learning, a momentum term is often added (Haykin 2001; Chan and Fallside 1987; Qiu et al. 1992; Istook and Martinez 2002). With momentum added to the update formula, the current weight increment is a linear combination of the gradient of the error function and the previous weight increment. As a result, the updates respond not only to the local gradient but also to recent gradients of the error function. Selected reports in the literature discuss NN training with a momentum term (Torii and Hagan 2002; Perantonis and Karras 1995; Qian 1999).

As demonstrated in Torii and Hagan (2002), there always exists a momentum coefficient that will stabilize the steepest descent algorithm, regardless of the value of the learning rate (defined below), and it is shown there how the value of the momentum coefficient changes the convergence properties. In Perantonis and Karras (1995), momentum acceleration is evaluated in terms of learning speed and scalability and found superior to reputedly fast variants of the BP algorithm in several benchmark training tasks. Qian (1999) shows that in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles that move through a viscous medium in a conservative force field.

In this paper, a modified batch gradient method with a smoothing L1/2 regularization penalty and adaptive momentum (BGSAM) is proposed. It damps the oscillations present in the batch gradient method with L1/2 regularization and adaptive momentum (BGAM). In addition, without requiring the error function to be quadratic or uniformly convex, we present a comprehensive study of the weak and strong convergence of BGSAM, which offers an effective improvement for real-life applications.

The rest of this paper is arranged as follows. The BGSAM algorithm is described in “Batch gradient method with smoothing L1/2 regularization and adaptive momentum (BGSAM)” section. In “Convergence results” section, the convergence results for BGSAM are presented, and the detailed proofs of the main results are given in the “Appendix”. The performance of BGSAM is compared to that of BGAM and the experimental results are shown in “Numerical experiments” section. Concluding remarks are given in “Conclusions” section.

Batch gradient method with smoothing L1/2 regularization and adaptive momentum (BGSAM)

Batch gradient method with L1/2 regularization and adaptive momentum (BGAM)

Here and below, some definitions and notations from, e.g., Wu et al. (2006) and Shao and Zheng (2011) are reused without repeatedly citing the references. We consider an FNN with three layers and denote the numbers of neurons of the input, hidden and output layers by p, q and 1, respectively. Suppose that $\{\xi^j, O^j\}_{j=1}^{J} \subset \mathbb{R}^p \times \mathbb{R}$ is the given set of $J$ training samples. Let $w_0 = (w_{10}, w_{20}, \ldots, w_{q0})^T \in \mathbb{R}^q$ be the weight vector between the hidden units and the output unit, and $w_i = (w_{i1}, w_{i2}, \ldots, w_{ip})^T \in \mathbb{R}^p$ be the weight vector between the input units and hidden unit $i$ $(i=1,2,\ldots,q)$. To simplify the presentation, we combine the weight vectors and write $W = (w_0^T, w_1^T, \ldots, w_q^T)^T \in \mathbb{R}^{q+pq}$, and we define the matrix $V = (w_1, w_2, \ldots, w_q)^T \in \mathbb{R}^{q\times p}$. We also define a vector-valued function $G: \mathbb{R}^q \to \mathbb{R}^q$ by, for $x = (x_1, x_2, \ldots, x_q)^T \in \mathbb{R}^q$,

$$G(x) = \left(g(x_1), g(x_2), \ldots, g(x_q)\right)^T. \qquad (3)$$

Let $g:\mathbb{R}\to\mathbb{R}$ be a given transfer function for the hidden and output nodes, which is typically, but not necessarily, a sigmoid function. Then for each input $\xi\in\mathbb{R}^p$, the actual output vector of the hidden layer is $G(V\xi)$ and the final output of the network is

$$g\left(w_0\cdot G(V\xi)\right). \qquad (4)$$
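To make the notation concrete, the following Python sketch (ours, not part of the original paper) evaluates the network output (4) for a single input; the logistic sigmoid is used for g purely as an example, since the analysis below only requires the boundedness conditions in assumption (A1).

import numpy as np

def sigmoid(t):
    # Example transfer function g; any smooth bounded choice satisfying (A1) would do.
    return 1.0 / (1.0 + np.exp(-t))

def network_output(w0, V, xi):
    # Forward pass of the three-layer network: g(w0 . G(V xi)), cf. (3)-(4).
    # w0: (q,) hidden-to-output weights, V: (q, p) input-to-hidden weights, xi: (p,) input.
    hidden = sigmoid(V @ xi)      # G(V xi), applied componentwise
    return sigmoid(w0 @ hidden)   # g(w0 . G(V xi))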

For a fixed W, the output error function with the L1/2 regularization penalty term is

$$E(W) = \frac{1}{2}\sum_{j=1}^{J}\left(O^j - g\left(w_0\cdot G(V\xi^j)\right)\right)^2 + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}|w_{ik}|^{1/2} = \sum_{j=1}^{J} g_j\left(w_0\cdot G(V\xi^j)\right) + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}|w_{ik}|^{1/2} \qquad (5)$$

where $g_j(t) := \frac{1}{2}\left(O^j - g(t)\right)^2$, $j=1,2,\ldots,J$, $t\in\mathbb{R}$, $\lambda>0$ is the penalty coefficient, and $|\cdot|$ denotes the absolute value. The gradient of the error function is

$$E_W(W) = \left(E_{w_0}^T(W), E_{w_1}^T(W), \ldots, E_{w_q}^T(W)\right)^T \qquad (6)$$

where

$$\begin{aligned}
E_{w_0}^T(W) &= \left(E_{w_{10}}(W), E_{w_{20}}(W), \ldots, E_{w_{q0}}(W)\right)\\
E_{w_1}^T(W) &= \left(E_{w_{11}}(W), E_{w_{12}}(W), \ldots, E_{w_{1p}}(W)\right)\\
E_{w_2}^T(W) &= \left(E_{w_{21}}(W), E_{w_{22}}(W), \ldots, E_{w_{2p}}(W)\right)\\
&\ \ \vdots\\
E_{w_q}^T(W) &= \left(E_{w_{q1}}(W), E_{w_{q2}}(W), \ldots, E_{w_{qp}}(W)\right)
\end{aligned}$$

The gradients of the error function with respect to $w_{i0}$ and $w_{ik}$ are, respectively, given by

$$E_{w_{i0}}(W) = \sum_{j=1}^{J} g_j'\left(w_0\cdot G(V\xi^j)\right) g\left(w_i\cdot\xi^j\right) + \frac{\lambda\,\mathrm{sgn}(w_{i0})}{2\,|w_{i0}|^{1/2}} \qquad (7)$$
$$E_{w_{ik}}(W) = \sum_{j=1}^{J} g_j'\left(w_0\cdot G(V\xi^j)\right) w_{i0}\, g'\left(w_i\cdot\xi^j\right)\xi_k^j + \frac{\lambda\,\mathrm{sgn}(w_{ik})}{2\,|w_{ik}|^{1/2}} \qquad (8)$$

where $i=1,2,\ldots,q$ and $k=1,2,\ldots,p$.

The detailed BGAM algorithm is presented as follows. We denote $W^{n+1} = W^n + \Delta W^n$, $n=0,1,2,\ldots$. Starting from arbitrary initial values $W^0$ and $W^1$, the weights $\{W^n\}$ are updated iteratively by

$$\Delta W^n = -\eta\, E_W(W^n) + \alpha_{W^n}\,\Delta W^{n-1}, \quad n=0,1,2,\ldots \qquad (9)$$

The learning rate $\eta>0$ is assumed constant, and $\alpha_{W^n} = (\alpha_{w_0}^n, \alpha_{w_1}^n, \ldots, \alpha_{w_q}^n)$ is the momentum coefficient vector of the $n$-th training epoch. It consists of a coefficient $\alpha_{w_i}^n$ for each $\Delta w_{ik}^n$ $(i=1,2,\ldots,q,\ k=1,2,\ldots,p)$ and a coefficient $\alpha_{w_0}^n$ for each $\Delta w_{i0}^n$ $(i=1,2,\ldots,q)$. Similarly to Shao and Zheng (2011), after each training epoch every $\alpha_{w_i}^n$ is chosen as

$$\alpha_{w_i}^n = \begin{cases} \alpha\cdot\dfrac{-\eta\, E_{w_i}(W^n)\cdot\Delta w_i^{n-1}}{\left\|\Delta w_i^{n-1}\right\|^2}, & \text{if } E_{w_i}(W^n)\cdot\Delta w_i^{n-1} < 0,\\[2mm] 0, & \text{otherwise,} \end{cases} \qquad (10)$$

where α∈(0,1) is the momentum factor. Compared with the traditional algorithm, BGAM has better pruning performance. However, the usual L1/2 regularization term used here involves absolute values and is not differentiable at the origin, which causes difficulty in the convergence analysis. More importantly, it causes oscillations of the error function and of the norm of the gradient. In order to overcome these drawbacks we improve the BGAM algorithm as follows.

Smoothing L1/2 regularization and adaptive momentum (BGSAM)

This section introduces a modified algorithm with smoothing L1/2 regularization and an adaptive momentum term. The network structure is the same as described in the previous subsection (BGAM). We modify the usual L1/2 regularization term at the origin, i.e., we replace the absolute values of the weights by a smooth function in a neighborhood of the origin. To this end, we use a smooth function $f(x)$ to approximate $|x|$ and obtain the following error function with a smoothing L1/2 regularization penalty term:

$$E(W) = \frac{1}{2}\sum_{j=1}^{J}\left(O^j - g\left(w_0\cdot G(V\xi^j)\right)\right)^2 + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}f(w_{ik})^{1/2} = \sum_{j=1}^{J} g_j\left(w_0\cdot G(V\xi^j)\right) + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}f(w_{ik})^{1/2} \qquad (11)$$

where $g_j(t) := \frac{1}{2}\left(O^j - g(t)\right)^2$, $j=1,2,\ldots,J$, $t\in\mathbb{R}$, and $\lambda>0$ is the penalty coefficient. Here, by smoothing we mean that, in a neighborhood of the origin, we replace the absolute values of the weights by a smooth function of the weights. For definiteness and simplicity, we choose $f(x)$ as a piecewise polynomial function:

$$f(x) = \begin{cases} |x|, & \text{if } |x| \ge a,\\[1mm] -\dfrac{1}{8a^3}x^4 + \dfrac{3}{4a}x^2 + \dfrac{3}{8}a, & \text{if } |x| < a, \end{cases} \qquad (12)$$

where $a$ is a small positive constant and $|\cdot|$ denotes the absolute value. The definition of $f(x)$ then immediately yields

$$f(x)\in\left[\tfrac{3}{8}a, +\infty\right), \qquad f'(x)\in[-1, 1], \qquad f''(x)\in\left[0, \tfrac{3}{2a}\right].$$
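As an illustration, the smoothing function (12) and its derivative can be transcribed directly; the Python sketch below is ours, with a standing for the small positive constant above.

import numpy as np

def f_smooth(x, a):
    # Piecewise-polynomial smoothing of |x| from Eq. (12); f(x) >= 3a/8 > 0 everywhere.
    x = np.asarray(x, dtype=float)
    inner = -x**4 / (8.0 * a**3) + 3.0 * x**2 / (4.0 * a) + 3.0 * a / 8.0
    return np.where(np.abs(x) >= a, np.abs(x), inner)

def f_smooth_prime(x, a):
    # Derivative of f: equals sign(x) for |x| >= a and vanishes smoothly at the origin.
    x = np.asarray(x, dtype=float)
    inner = -x**3 / (2.0 * a**3) + 3.0 * x / (2.0 * a)
    return np.where(np.abs(x) >= a, np.sign(x), inner)

At x = 0 these give f(0) = 3a/8 and f'(0) = 0, in agreement with the ranges listed above.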

The gradient of the error function with respect to $W$ is as in (6), and the gradients of the error function with respect to $w_{i0}$ and $w_{ik}$ are then as follows:

$$E_{w_{i0}}(W) = \sum_{j=1}^{J} g_j'\left(w_0\cdot G(V\xi^j)\right) g\left(w_i\cdot\xi^j\right) + \frac{\lambda f'(w_{i0})}{2\, f(w_{i0})^{1/2}} \qquad (13)$$
$$E_{w_{ik}}(W) = \sum_{j=1}^{J} g_j'\left(w_0\cdot G(V\xi^j)\right) w_{i0}\, g'\left(w_i\cdot\xi^j\right)\xi_k^j + \frac{\lambda f'(w_{ik})}{2\, f(w_{ik})^{1/2}} \qquad (14)$$

where $i=1,2,\ldots,q$ and $k=1,2,\ldots,p$.

For the BGSAM algorithm, we denote $W^{n+1} = W^n + \Delta W^n$, $n=0,1,2,\ldots$. Starting from initial values $W^0$ and $W^1$, the weights $\{W^n\}$ are updated iteratively by

$$\Delta W^n = -\eta\, E_W(W^n) + \alpha_{W^n}\,\Delta W^{n-1}, \quad n=0,1,2,\ldots \qquad (15)$$

Here the learning rate η, the momentum coefficient vector $\alpha_{W^n}$ of the $n$-th training epoch and the other coefficients are the same as in the description of the BGAM algorithm. Each $\alpha_{w_i}^n$ is chosen after each training epoch as in (10).
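The BGSAM iteration can then be assembled from the gradients (13)-(14), the momentum coefficients (10) and the update rule (15). The Python sketch below is ours and only illustrates the flow of one batch iteration under simplifying assumptions: it reuses the sigmoid, f_smooth and f_smooth_prime helpers from the earlier sketches, takes the sigmoid for both layers, and stores the training inputs as rows of xi_all with targets O_all. It is not the authors' implementation.

import numpy as np

def error_gradient(W0, V, xi_all, O_all, lam, a):
    # Gradient (6) of the smoothed error function (11): batch SSE term plus the
    # smoothing-penalty term lam * f'(w) / (2 * f(w)^(1/2)), cf. (13)-(14).
    gW0 = np.zeros_like(W0)
    gV = np.zeros_like(V)
    for xi, O in zip(xi_all, O_all):
        hidden = sigmoid(V @ xi)                       # G(V xi)
        out = sigmoid(W0 @ hidden)                     # network output (4)
        gj_prime = -(O - out) * out * (1.0 - out)      # g_j'(t) for the sigmoid choice of g
        gW0 += gj_prime * hidden
        gV += np.outer(gj_prime * W0 * hidden * (1.0 - hidden), xi)
    gW0 += lam * f_smooth_prime(W0, a) / (2.0 * np.sqrt(f_smooth(W0, a)))
    gV += lam * f_smooth_prime(V, a) / (2.0 * np.sqrt(f_smooth(V, a)))
    return gW0, gV

def adaptive_momentum(grad, prev_delta, alpha, eta):
    # Momentum coefficient (10): nonzero only when the gradient and the previous
    # increment have a negative inner product.
    dot = np.dot(grad, prev_delta)
    norm2 = np.dot(prev_delta, prev_delta)
    if dot < 0.0 and norm2 > 0.0:
        return alpha * (-eta * dot) / norm2
    return 0.0

def bgsam_step(W0, V, dW0_prev, dV_prev, xi_all, O_all, eta, alpha, lam, a):
    # One BGSAM iteration, Eq. (15): Delta W^n = -eta E_W(W^n) + alpha_{W^n} Delta W^{n-1},
    # with one momentum coefficient for w_0 and one per hidden-unit weight vector w_i.
    gW0, gV = error_gradient(W0, V, xi_all, O_all, lam, a)
    dW0 = -eta * gW0 + adaptive_momentum(gW0, dW0_prev, alpha, eta) * dW0_prev
    dV = np.zeros_like(V)
    for i in range(V.shape[0]):
        dV[i] = -eta * gV[i] + adaptive_momentum(gV[i], dV_prev[i], alpha, eta) * dV_prev[i]
    return W0 + dW0, V + dV, dW0, dV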

Convergence results

The following assumptions are needed to state the convergence theorems for BGSAM.

(A1)

$|g(t)|$, $|g'(t)|$ and $|g''(t)|$ are uniformly bounded for $t\in\mathbb{R}$.

(A2)

There exists a bounded region $\Omega\subset\mathbb{R}^q$ such that $\{w_0^n\}_{n=0}^{\infty}\subset\Omega$.

(A3)

$0 < \eta < \dfrac{1}{(M\lambda + C_1)(1+\alpha)^2}$, where $M = \dfrac{\sqrt{6}}{4\sqrt{a^3}}$ and $C_1$ is a constant defined in (16) below.

Assume that conditions (A1)–(A2) are valid. Then there are positive constants $C_1$–$C_5$ such that

$$\begin{aligned}
C_1 &= J(1+C_2)C_3\max\{C_2^2, C_5^2\} + \tfrac{1}{2}J(1+C_2)C_3 + \tfrac{1}{2}JC_3^2C_4^2C_5,\\
C_2 &= \max\left\{\sqrt{q}\,C_3,\ (C_3C_4)^2\right\},\\
C_3 &= \max\left\{\sup_{t\in\mathbb{R}}|g(t)|,\ \sup_{t\in\mathbb{R}}|g'(t)|,\ \sup_{t\in\mathbb{R}}|g''(t)|,\ \sup_{t\in\mathbb{R},\,1\le j\le J}|g_j'(t)|,\ \sup_{t\in\mathbb{R},\,1\le j\le J}|g_j''(t)|\right\},\\
C_4 &= \max_{1\le j\le J}\left\|\xi^j\right\|, \qquad C_5 = \sup_{n\in\mathbb{N}}\left\|w_0^n\right\|.
\end{aligned} \qquad (16)$$
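For concreteness, condition (A3) is easy to check numerically once a, λ, α and an estimate of C1 are at hand. The short Python sketch below is ours, with hypothetical placeholder values rather than values from the paper.

import math

def eta_upper_bound(lam, alpha, a, C1):
    # (A3): 0 < eta < 1 / ((M*lam + C1) * (1 + alpha)^2), with M = sqrt(6) / (4 * a^(3/2)).
    M = math.sqrt(6.0) / (4.0 * a ** 1.5)
    return 1.0 / ((M * lam + C1) * (1.0 + alpha) ** 2)

# Example with hypothetical values (not taken from the paper):
# eta_upper_bound(lam=0.0006, alpha=0.03, a=0.05, C1=10.0)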

Theorem 1

Let assumptions (A1)–(A3) be valid, let the error function be defined by (11), and let the learning sequence $\{W^n\}$ be generated by the iteration algorithm (15) from arbitrary initial values $W^0$ and $W^1$. Then we have the following convergence results:

  • (i)

    $\lim_{n\to\infty}\left\|E_W(W^n)\right\| = 0$. Moreover, if assumption (A4) also holds, i.e., there exists a compact set $\Phi$ such that $W^n\in\Phi$ and the set $\Phi_0 = \{W\in\Phi : E_W(W) = 0\}$ contains only finitely many points, then we have the following convergence

  • (ii)

    $\lim_{n\to\infty} W^n = W^*$, where $W^*\in\Phi_0$.

Numerical experiments

This section presents simulations that verify the performance of BGAM and BGSAM. Our theoretical results are verified experimentally on the 3-bit parity problem, which is a typical benchmark problem in the area of neural networks.

The two algorithms (BGAM and BGSAM) are implemented with networks of structure 5-7-1 (p = 5 input nodes, q = 7 hidden nodes and 1 output node, see Fig. 1). Each of the two algorithms is run for fifty trials of the 3-bit parity problem and the mean values are taken; the termination criterion is that the error is below 1e-6 or that 3000 iterations are reached. For the network with linear output, the transfer function for the hidden neurons is tansig(·) and that for the output layer is g(t) = t. For the network with nonlinear output, the transfer functions for both hidden and output neurons are tansig(·). The inputs and the ideal outputs are shown in Table 1.
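A minimal driver for one trial of this experiment might look as follows. This is our sketch only: it reuses the bgsam_step and sigmoid helpers sketched earlier (so it uses the sigmoid rather than tansig), the smoothing width a is a placeholder, and xi_all, O_all stand for the training patterns of Table 1 below.

import numpy as np

# Illustrative configuration (values not stated in the text are our placeholders):
p, q = 5, 7                                     # 5-7-1 network structure
eta, lam, alpha, a = 0.01, 0.0006, 0.03, 0.05   # a is a placeholder smoothing width

def run_trial(xi_all, O_all, max_iter=3000, tol=1e-6):
    # One training trial: stop when the error drops below tol or after max_iter iterations.
    rng = np.random.default_rng()
    W0, V = rng.normal(size=q), rng.normal(size=(q, p))
    dW0, dV = np.zeros_like(W0), np.zeros_like(V)
    for _ in range(max_iter):
        W0, V, dW0, dV = bgsam_step(W0, V, dW0, dV, xi_all, O_all, eta, alpha, lam, a)
        err = sum(0.5 * (O - sigmoid(W0 @ sigmoid(V @ xi))) ** 2
                  for xi, O in zip(xi_all, O_all))
        if err < tol:
            break
    return err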

Fig. 1 Feedforward neural network with one hidden layer and one output

Table 1. 3-bit parity problem

Input | Output | Input | Output
1 1 1 −1 | 1 | 1 −1 −1 −1 | 1
1 1 −1 −1 | 0 | −1 1 1 −1 | 0
1 −1 1 −1 | 0 | −1 −1 1 −1 | 1
−1 −1 −1 −1 | 0 | −1 1 −1 −1 | 1

The performance results of BGAM and BGSAM are shown in the following figures. Figures 2, 3 and 4 present the comparison results with the learning rate η = 0.01, penalty parameter λ = 0.0006 and momentum factor α = 0.03.

Fig. 2 The curve of the error function

Fig. 3 The curve of the norm of the gradient

Fig. 4 The curve of the norm of the weights

From Figs. 2 and 3 it can be seen that the error function decreases monotonically and that the norm of the gradient of the error function approaches zero, as stated by the convergence theorem. Figure 3 also shows that the modified algorithm overcomes the drawback of numerical oscillations, i.e., for BGSAM the gradient-norm curve is much smoother than for BGAM. The reason is the following: since the derivative of |x| jumps from −1 to +1 near x = 0, the learning process of BGAM oscillates when a weight w_{ik} is close to zero, which prevents it from getting any closer to zero. On the contrary, the derivative of f(x), which is a smooth approximation of |x|, is smooth and equal to zero at the origin, and it does not cause any oscillation in the learning process when w_{ik} is close to zero. In the meantime, it can be seen that BGSAM converges faster than BGAM. Figure 4 demonstrates that BGSAM is more effective than BGAM in controlling the magnitude of the weights.
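The oscillation argument can also be checked numerically: near the origin the penalty derivative of the plain L1/2 term, sgn(w)/(2|w|^{1/2}), flips sign and grows without bound, whereas the smoothed counterpart f'(w)/(2 f(w)^{1/2}) passes through zero. The short Python sketch below is ours and reuses f_smooth and f_smooth_prime from the earlier sketch, with an illustrative value of a.

import numpy as np

a = 0.05                                          # illustrative smoothing width
for w in [-0.01, -0.001, 0.001, 0.01]:
    plain = np.sign(w) / (2.0 * np.sqrt(abs(w)))                            # derivative of |w|^(1/2)
    smooth = float(f_smooth_prime(w, a) / (2.0 * np.sqrt(f_smooth(w, a))))  # smoothed version
    print(f"w={w:+.3f}  plain={plain:+8.2f}  smoothed={smooth:+8.4f}")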

Conclusions

In this paper, the smoothing L1/2 regularization term with adaptive momentum is introduced into the batch gradient learning algorithm to prune FNNs. First, it removes the oscillation of the gradient value. Second, the convergence results for three-layer FNNs are proved under relaxed conditions. Third, the algorithm is applied to a 3-bit parity problem and the results support the theoretical findings above. Finally, this new algorithm is expected to be effective also for other types of neural networks and for big data processing.

Authors’ contributions

This work was carried out by the three authors, in collaboration. All authors read and approved the final manuscript.

Acknowledgements

This work was supported by the National Science Foundation of China (Nos. 11201051, 11501431, 11302158), the National Science Foundation for Tian Yuan of China (No. 11426167), and the Science Plan Foundation of the Education Bureau of Shaanxi Province. The authors are grateful to the anonymous reviewers and editors for their profitable comments and suggestions, which greatly improved this paper.

Competing interests

The authors declare that they have no competing interests.

Appendix

In the following, a useful lemma is given for the convergence analysis of the BGSAM algorithm, followed by the proof of Theorem 1. For the sake of description, we introduce the following notations:

$$G^{n,j} = G\left(V^n\xi^j\right), \quad \psi^{n,j} = G^{n+1,j} - G^{n,j}, \quad \Delta w_i^n = w_i^{n+1} - w_i^n \ \ (i=1,2,\ldots,q), \quad \Delta w_0^n = w_0^{n+1} - w_0^n \qquad (17)$$

Using the error function (11) we have

$$\begin{aligned}
E\left(W^{n+1}\right) &= \sum_{j=1}^{J} g_j\left(w_0^{n+1}\cdot G\left(V^{n+1}\xi^j\right)\right) + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p} f\left(w_{ik}^{n+1}\right)^{1/2},\\
E\left(W^{n}\right) &= \sum_{j=1}^{J} g_j\left(w_0^{n}\cdot G\left(V^{n}\xi^j\right)\right) + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p} f\left(w_{ik}^{n}\right)^{1/2}.
\end{aligned}$$

Lemma 1

(Wu et al. 2010, Lemma 1) Suppose that $F:\mathbb{R}^Q\to\mathbb{R}$ is continuous and differentiable on a compact set $D\subset\mathbb{R}^Q$, and that $\bar{\Omega} = \{x\in D \mid F_x(x) = 0\}$ contains only a finite number of points. If a sequence $\{x^k\}\subset D$ satisfies

$$\lim_{k\to\infty}\left\|x^{k+1} - x^k\right\| = 0, \qquad \lim_{k\to\infty}\left\|F_x(x^k)\right\| = 0,$$

then there exists $x^*\in\bar{\Omega}$ such that $\lim_{k\to\infty} x^k = x^*$.

The above lemma is crucial for the strong convergence analysis, and it basically follows the same proof as Lemma 1 in Wu et al. (2010). So its proof is omitted.

We now prove Theorem 1 through the following four steps.

Step 1. We show that the following inequalities hold:

$$\|G(x)\| \le \sqrt{q}\,\sup_{t\in\mathbb{R}}|g(t)| \le C_2, \quad \forall\, x\in\mathbb{R}^q \qquad (18)$$
$$\left\|\psi^{n,j}\right\|^2 \le C_2\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2 \le C_2\,\eta^2(1+\alpha)^2\sum_{i=1}^{q}\left\|E_{w_i}(W^n)\right\|^2, \quad j=1,2,\ldots,J. \qquad (19)$$

Proof

With the assumption (A1), (16) and the definition of G(x), it is easy to show that there exists a positive constant C2 such that (18) holds.

By the Lagrange mean value theorem, (A1) and (16), we obtain that

$$\left\|\psi^{n,j}\right\|^2 = \left\|\left(g'(\tilde t_{1,j,n})\,(w_1^{n+1}-w_1^n)\cdot\xi^j,\ g'(\tilde t_{2,j,n})\,(w_2^{n+1}-w_2^n)\cdot\xi^j,\ \ldots,\ g'(\tilde t_{q,j,n})\,(w_q^{n+1}-w_q^n)\cdot\xi^j\right)^T\right\|^2 \le C_2\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2 \qquad (20)$$

where $\tilde t_{i,j,n}\in\mathbb{R}$ $(1\le i\le q,\ 1\le j\le J)$ lies between $w_i^n\cdot\xi^j$ and $w_i^{n+1}\cdot\xi^j$.

Furthermore, in terms of (9) and (10) we can show that

$$\left\|\Delta w_i^n\right\| = \left\|-\eta\, E_{w_i}(W^n) + \alpha_{w_i}^n\,\Delta w_i^{n-1}\right\| \le \eta(1+\alpha)\left\|E_{w_i}(W^n)\right\| \qquad (21)$$
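For completeness, (21) follows from (10) and the Cauchy–Schwarz inequality in one line: when $E_{w_i}(W^n)\cdot\Delta w_i^{n-1} < 0$,

$$\left\|\alpha_{w_i}^n\,\Delta w_i^{n-1}\right\| = \alpha\,\frac{\eta\left|E_{w_i}(W^n)\cdot\Delta w_i^{n-1}\right|}{\left\|\Delta w_i^{n-1}\right\|^2}\,\left\|\Delta w_i^{n-1}\right\| \le \alpha\eta\left\|E_{w_i}(W^n)\right\|,$$

and $\alpha_{w_i}^n\,\Delta w_i^{n-1} = 0$ otherwise; the triangle inequality then gives $\left\|\Delta w_i^n\right\| \le \eta\left\|E_{w_i}(W^n)\right\| + \alpha\eta\left\|E_{w_i}(W^n)\right\| = \eta(1+\alpha)\left\|E_{w_i}(W^n)\right\|$.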

On the basis of the above inequalities (20) and (21) we immediately have (19).

Step 2. We show the following monotonicity of the sequence $\{E(W^n)\}$:

$$E(W^{n+1}) \le E(W^n), \quad n=0,1,2,\ldots \qquad (22)$$

Proof

According to the definitions of $w_0^n$ and $\psi^{n,j}$, we get

$$w_0^n\cdot\psi^{n,j} = \sum_{i=1}^{q} w_{i0}^n\left[g\left(w_i^{n+1}\cdot\xi^j\right) - g\left(w_i^n\cdot\xi^j\right)\right] \qquad (23)$$

Applying the Taylor formula with Lagrange remainder to $g(t)$ at the point $w_i^n\cdot\xi^j$, we obtain

$$g\left(w_i^{n+1}\cdot\xi^j\right) - g\left(w_i^n\cdot\xi^j\right) = g'\left(w_i^n\cdot\xi^j\right)\left(\Delta w_i^n\cdot\xi^j\right) + \frac{1}{2}\, g''\left(t_{i,j,n}\right)\left(\Delta w_i^n\cdot\xi^j\right)^2 \qquad (24)$$

where each $t_{i,j,n}$ lies on the segment between $w_i^{n+1}\cdot\xi^j$ and $w_i^n\cdot\xi^j$, $i=1,2,\ldots,q$; $j=1,2,\ldots,J$.

Together with (10) and (14), we get

$$\begin{aligned}
\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) w_0^n\cdot\psi^{n,j}
&= \sum_{i=1}^{q}\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) w_{i0}^n\, g'\left(w_i^n\cdot\xi^j\right)\left(\Delta w_i^n\cdot\xi^j\right) + \delta_1\\
&= \sum_{k=1}^{p}\sum_{i=1}^{q}\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) w_{i0}^n\, g'\left(w_i^n\cdot\xi^j\right)\Delta w_{ik}^n\,\xi_k^j + \delta_1\\
&= \sum_{k=1}^{p}\sum_{i=1}^{q} E_{w_{ik}}(W^n)\,\Delta w_{ik}^n - \lambda\sum_{k=1}^{p}\sum_{i=1}^{q}\frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}} + \delta_1\\
&= \sum_{i=1}^{q} E_{w_i}(W^n)\cdot\Delta w_i^n - \lambda\sum_{k=1}^{p}\sum_{i=1}^{q}\frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}} + \delta_1\\
&= \sum_{i=1}^{q} E_{w_i}(W^n)\cdot\left(-\eta\, E_{w_i}(W^n) + \alpha_{w_i}^n\,\Delta w_i^{n-1}\right) - \lambda\sum_{k=1}^{p}\sum_{i=1}^{q}\frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}} + \delta_1\\
&= -\eta\sum_{i=1}^{q}\left\|E_{w_i}(W^n)\right\|^2 + \sum_{i=1}^{q}\alpha_{w_i}^n\, E_{w_i}(W^n)\cdot\Delta w_i^{n-1} - \lambda\sum_{k=1}^{p}\sum_{i=1}^{q}\frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}} + \delta_1
\end{aligned} \qquad (25)$$

where $\delta_1 = \frac{1}{2}\sum_{i=1}^{q}\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) w_{i0}^n\, g''\left(t_{i,j,n}\right)\left(\Delta w_i^n\cdot\xi^j\right)^2$, and $t_{i,j,n}\in\mathbb{R}$ lies between $w_i^n\cdot\xi^j$ and $w_i^{n+1}\cdot\xi^j$.

Using (10) and (13), we obtain

$$\begin{aligned}
\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) G^{n,j}\cdot\Delta w_0^n
&= \sum_{i=1}^{q}\sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) g\left(w_i^n\cdot\xi^j\right)\Delta w_{i0}^n\\
&= \sum_{i=1}^{q}\left(E_{w_{i0}}(W^n) - \frac{\lambda f'(w_{i0}^n)}{2 f(w_{i0}^n)^{1/2}}\right)\Delta w_{i0}^n\\
&= E_{w_0}(W^n)\cdot\Delta w_0^n - \lambda\sum_{i=1}^{q}\frac{f'(w_{i0}^n)\,\Delta w_{i0}^n}{2 f(w_{i0}^n)^{1/2}}\\
&= E_{w_0}(W^n)\cdot\left(-\eta\, E_{w_0}(W^n) + \alpha_{w_0}^n\,\Delta w_0^{n-1}\right) - \lambda\sum_{i=1}^{q}\frac{f'(w_{i0}^n)\,\Delta w_{i0}^n}{2 f(w_{i0}^n)^{1/2}}\\
&= -\eta\left\|E_{w_0}(W^n)\right\|^2 + \alpha_{w_0}^n\, E_{w_0}(W^n)\cdot\Delta w_0^{n-1} - \lambda\sum_{i=1}^{q}\frac{f'(w_{i0}^n)\,\Delta w_{i0}^n}{2 f(w_{i0}^n)^{1/2}}
\end{aligned} \qquad (26)$$

Applying the Taylor formula with Lagrange remainder, we have

$$\begin{aligned}
E(W^{n+1}) - E(W^n)
&= \sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right)\left(w_0^{n+1}\cdot G^{n+1,j} - w_0^n\cdot G^{n,j}\right) + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}\left[f\left(w_{ik}^{n+1}\right)^{1/2} - f\left(w_{ik}^{n}\right)^{1/2}\right] + \delta_2\\
&= \sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) G^{n,j}\cdot\Delta w_0^n + \sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right) w_0^n\cdot\psi^{n,j} + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}\left[f\left(w_{ik}^{n+1}\right)^{1/2} - f\left(w_{ik}^{n}\right)^{1/2}\right] + \delta_2 + \delta_3
\end{aligned} \qquad (27)$$

where $\delta_2 = \frac{1}{2}\sum_{j=1}^{J} g_j''\left(s_{n,j}\right)\left(w_0^{n+1}\cdot G^{n+1,j} - w_0^n\cdot G^{n,j}\right)^2$, $\delta_3 = \sum_{j=1}^{J} g_j'\left(w_0^n\cdot G^{n,j}\right)\Delta w_0^n\cdot\psi^{n,j}$, and $s_{n,j}\in\mathbb{R}$ is a constant between $w_0^n\cdot G^{n,j}$ and $w_0^{n+1}\cdot G^{n+1,j}$.

Substituting (25) and (26) into (27) and applying the Taylor formula with Lagrange remainder to $F(x) := (f(x))^{1/2}$ (defined below), we have

$$\begin{aligned}
E(W^{n+1}) - E(W^n)
&= -\eta\left\|E_W(W^n)\right\|^2 - \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}\frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}} + \sum_{i=1}^{q}\alpha_{w_i}^n\, E_{w_i}(W^n)\cdot\Delta w_i^{n-1} + \alpha_{w_0}^n\, E_{w_0}(W^n)\cdot\Delta w_0^{n-1}\\
&\quad + \delta_1 + \delta_2 + \delta_3 + \lambda\sum_{i=1}^{q}\sum_{k=0}^{p}\left[f\left(w_{ik}^{n+1}\right)^{1/2} - f\left(w_{ik}^{n}\right)^{1/2}\right]\\
&\le -\eta\left\|E_W(W^n)\right\|^2 + \frac{\lambda}{2}\sum_{i=1}^{q}\sum_{k=0}^{p} F''\left(t_{i,k,n}\right)\left(\Delta w_{ik}^n\right)^2 + \delta_1 + \delta_2 + \delta_3
\end{aligned} \qquad (28)$$

where $t_{i,k,n}\in\mathbb{R}$ lies between $w_{ik}^n$ and $w_{ik}^{n+1}$. Here the inequality uses two facts: by (10), $\alpha_{w_i}^n\, E_{w_i}(W^n)\cdot\Delta w_i^{n-1}\le 0$ and $\alpha_{w_0}^n\, E_{w_0}(W^n)\cdot\Delta w_0^{n-1}\le 0$; and the first-order terms of the Taylor expansion of $F$, namely $F'(w_{ik}^n)\Delta w_{ik}^n = \frac{f'(w_{ik}^n)\,\Delta w_{ik}^n}{2 f(w_{ik}^n)^{1/2}}$, cancel the penalty terms.

According to (21) and (28) we can conclude

$$\begin{aligned}
E(W^{n+1}) - E(W^n)
&\le -\eta\left\|E_W(W^n)\right\|^2 + M\lambda\sum_{i=1}^{q}\sum_{k=0}^{p}\left(\Delta w_{ik}^n\right)^2 + \delta_1 + \delta_2 + \delta_3\\
&= -\eta\left\|E_W(W^n)\right\|^2 + M\lambda\left(\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2 + \left\|\Delta w_0^n\right\|^2\right) + \delta_1 + \delta_2 + \delta_3\\
&\le -\eta\left\|E_W(W^n)\right\|^2 + M\lambda\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2 + \delta_1 + \delta_2 + \delta_3\\
&= -\eta\left(1 - M\lambda\eta(1+\alpha)^2\right)\left\|E_W(W^n)\right\|^2 + \delta_1 + \delta_2 + \delta_3
\end{aligned} \qquad (29)$$

Set $M = \dfrac{\sqrt{6}}{4\sqrt{a^3}}$ and $F(x) \equiv (f(x))^{1/2}$. Note that

$$F'(x) = \frac{f'(x)}{2\sqrt{f(x)}}, \qquad F''(x) = \frac{2f''(x)f(x) - [f'(x)]^2}{4[f(x)]^{3/2}} \le \frac{f''(x)}{2\sqrt{f(x)}} \le \frac{\sqrt{6}}{2\sqrt{a^3}} = 2M,$$

where the last inequality uses $f''(x)\le \frac{3}{2a}$ and $f(x)\ge \frac{3}{8}a$, so that $\frac{\lambda}{2}F''(t_{i,k,n})\le M\lambda$.

It follows from (16), (18), (19) and the Cauchy–Schwarz inequality that

$$\begin{aligned}
|\delta_2| &\le \frac{C_3}{2}\sum_{j=1}^{J}\left(\Delta w_0^n\cdot G^{n+1,j} + w_0^n\cdot\psi^{n,j}\right)^2 \le \frac{C_3}{2}\sum_{j=1}^{J}\left(C_2\left\|\Delta w_0^n\right\| + C_5\left\|\psi^{n,j}\right\|\right)^2\\
&\le C_6\sum_{j=1}^{J}\left(\left\|\Delta w_0^n\right\|^2 + \left\|\psi^{n,j}\right\|^2\right) \le C_6\sum_{j=1}^{J}\left(\left\|\Delta w_0^n\right\|^2 + C_2\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2\right)\\
&\le JC_6(1+C_2)\left(\left\|\Delta w_0^n\right\|^2 + \sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2\right) \le JC_6(1+C_2)\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2\\
&= C_7\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2
\end{aligned} \qquad (30)$$

where $C_6 = C_3\max\{C_2^2, C_5^2\}$ and $C_7 = JC_6(1+C_2)$.

Similarly, we can evaluate |δ3| as follows:

$$\begin{aligned}
|\delta_3| &\le \frac{C_3}{2}\sum_{j=1}^{J}\left(\left\|\Delta w_0^n\right\|^2 + \left\|\psi^{n,j}\right\|^2\right) \le \frac{C_3}{2}\sum_{j=1}^{J}\left(\left\|\Delta w_0^n\right\|^2 + C_2\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2\right)\\
&\le \frac{1}{2}JC_3(1+C_2)\left(\left\|\Delta w_0^n\right\|^2 + \sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2\right) \le \frac{1}{2}JC_3(1+C_2)\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2\\
&= C_8\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2
\end{aligned} \qquad (31)$$

where $C_8 = \frac{1}{2}JC_3(1+C_2)$.

Similarly, we obtain

$$|\delta_1| \le \frac{1}{2}JC_3^2C_4^2C_5\sum_{i=1}^{q}\left\|\Delta w_i^n\right\|^2 \le \frac{1}{2}JC_3^2C_4^2C_5\,\eta^2(1+\alpha)^2\sum_{i=1}^{q}\left\|E_{w_i}(W^n)\right\|^2 \le C_9\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2 \qquad (32)$$

where $C_9 = \frac{1}{2}JC_3^2C_4^2C_5$.

Using $C_1 = C_7 + C_8 + C_9$, from (29)–(32) and (A3) we obtain

$$\begin{aligned}
E(W^{n+1}) - E(W^n)
&\le -\eta\left(1 - M\lambda\eta(1+\alpha)^2\right)\sum_{i=1}^{q}\sum_{k=0}^{p}\left(E_{w_{ik}}(W^n)\right)^2 + \delta_1 + \delta_2 + \delta_3\\
&\le -\eta\left(1 - M\lambda\eta(1+\alpha)^2\right)\left\|E_W(W^n)\right\|^2 + C_1\,\eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2\\
&= -\eta\left(1 - M\lambda\eta(1+\alpha)^2 - C_1\eta(1+\alpha)^2\right)\left\|E_W(W^n)\right\|^2 \le 0
\end{aligned} \qquad (33)$$

Hence $E(W^{n+1}) \le E(W^n)$ holds $(n = 1, 2, \ldots)$, which proves (22).

Step 3. We show that $\lim_{n\to\infty}\left\|E_W(W^n)\right\| = 0$.

Proof

From Step 2, we know that the nonnegative sequence $\{E(W^n)\}$ is monotone, and it is also bounded. Hence, there exists $E^*\ge 0$ such that $\lim_{n\to\infty}E(W^n) = E^*$.

Take $\beta = \eta - (M\lambda + C_1)\,\eta^2(1+\alpha)^2$; it follows from assumption (A3) that $\beta > 0$. Writing $\sigma_n = \sum_{i=1}^{q}\sum_{k=0}^{p}\left(E_{w_{ik}}(W^n)\right)^2 = \left\|E_W(W^n)\right\|^2$ and using (33), we get

$$E(W^{n+1}) \le E(W^n) - \beta\left\|E_W(W^n)\right\|^2 \le \cdots \le E(W^0) - \beta\sum_{k=0}^{n}\left\|E_W(W^k)\right\|^2$$

Since $E(W^{n+1}) \ge 0$, it follows that

$$\beta\sum_{k=0}^{n}\left\|E_W(W^k)\right\|^2 \le E(W^0)$$

Letting $n\to\infty$, it holds that

$$\sum_{k=0}^{\infty}\left\|E_W(W^k)\right\|^2 \le \frac{1}{\beta}\,E(W^0) < \infty$$

This results in

$$\lim_{n\to\infty}\left\|E_W(W^n)\right\|^2 = 0$$

Thus

$$\lim_{n\to\infty}\left\|E_W(W^n)\right\| = 0. \qquad (34)$$

Step 4. Adding assumption (A4), we show that $\lim_{n\to\infty}W^n = W^*$, where $W^*\in\Phi_0$.

Proof

Note that the error function E(W) defined in (11) is continuous and differentiable. According to (21) and (34), we get

$$\sum_{i=1}^{q}\sum_{k=0}^{p}\left(\Delta w_{ik}^n\right)^2 \le \eta^2(1+\alpha)^2\left\|E_W(W^n)\right\|^2$$

i.e.

$$\lim_{n\to\infty}\left\|W^{n+1} - W^n\right\| = 0 \qquad (35)$$

According to assumption (A4), (34), (35) and Lemma 1, there exists a point $W^*\in\Phi_0$ such that

$$\lim_{n\to\infty} W^n = W^*. \qquad (36)$$

This completes the proof of Theorem 1 through Steps 1–4.

Contributor Information

Qinwei Fan, Email: qinweifan@126.com.

Wei Wu, Email: wuweiw@dlut.edu.cn.

Jacek M. Zurada, Email: jacek.zurada@louisville.edu

References

1. Bhaya A, Kaszkurewicz E. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Neural Netw. 2004;17:65–71. doi: 10.1016/S0893-6080(03)00170-9.
2. Chan LW, Fallside F. An adaptive training algorithm for backpropagation networks. Comput Speech Lang. 1987;2:205–218. doi: 10.1016/0885-2308(87)90009-X.
3. Davis G (1994) Adaptive nonlinear approximations. Ph.D. thesis, New York University
4. Donoho DL. De-noising by soft-thresholding. IEEE Trans Inf Theory. 1995;41:613–627. doi: 10.1109/18.382009.
5. Donoho DL (2005) Neighborly polytopes and the sparse solution of underdetermined systems of linear equations. Technical report, Statistics Department, Stanford University
6. Fan QW, Wei W, Zurada JM. Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty. Neurocomputing. 2014;131:208–216. doi: 10.1016/j.neucom.2013.10.023.
7. Haykin S. Neural networks: a comprehensive foundation. 2nd ed. Beijing: Tsinghua University Press, Prentice Hall; 2001.
8. Haykin S. Neural networks and learning machines. Upper Saddle River: Prentice-Hall; 2008.
9. Hinton GE. Connectionist learning procedures. Artif Intell. 1989;40(1–3):185–234. doi: 10.1016/0004-3702(89)90049-0.
10. Istook E, Martinez T. Improved backpropagation learning in neural networks with windowed momentum. Int J Neural Syst. 2002;12(3–4):303–318. doi: 10.1142/S0129065702001114.
11. Karnin ED. A simple procedure for pruning back-propagation trained neural networks. IEEE Trans Neural Netw. 1990;1:239–242. doi: 10.1109/72.80236.
12. Kong J, Wu W. Online gradient methods with a punishing term for neural networks. Northeast Math J. 2001;173:371–378.
13. Liu QS, Cao JD. A recurrent neural network based on projection operator for extended general variational inequalities. IEEE Trans Syst Man Cybern B Cybern. 2010;40(3):928–938. doi: 10.1109/TSMCB.2009.2033565.
14. Liu QS, Guo ZS, Wang J. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw. 2012;26:99–109. doi: 10.1016/j.neunet.2011.09.001.
15. Moody JE, Rognvaldsson TS (1997) Smoothing regularizers for projective basis function networks. In: Advances in neural information processing systems 9 (NIPS 1996). http://papers.nips.cc/book/advances-in-neuralinformation-processing-systems-9-1996
16. Natarajan BK. Sparse approximate solutions to linear systems. SIAM J Comput. 1995;24:227–234. doi: 10.1137/S0097539792240406.
17. Perantonis SJ, Karras DA. An efficient constrained learning algorithm with momentum acceleration. Neural Netw. 1995;8(2):237–249. doi: 10.1016/0893-6080(94)00067-V.
18. Qian N. On the momentum term in gradient descent learning algorithms. Neural Netw. 1999;12(1):145–151. doi: 10.1016/S0893-6080(98)00116-6.
19. Qiu G, Varley MR, Terrell TJ. Accelerated training of backpropagation networks by using adaptive momentum step. IEEE Electron Lett. 1992;28(4):377–379. doi: 10.1049/el:19920236.
20. Reed R. Pruning algorithms-a survey. IEEE Trans Neural Netw. 1993;4(5):740–747. doi: 10.1109/72.248452.
21. Saito K, Nakano R. Second-order learning algorithm with squared penalty term. Neural Comput. 2000;12(3):709–729. doi: 10.1162/089976600300015763.
22. Shao H, Zheng G. Convergence analysis of a back-propagation algorithm with adaptive momentum. Neurocomputing. 2011;74:749–752. doi: 10.1016/j.neucom.2010.10.008.
23. Tibshirani R. Regression shrinkage and selection via the Lasso. J R Stat Soc B. 1996;58:267–288.
24. Torii M, Hagan MT. Stability of steepest descent with momentum for quadratic functions. IEEE Trans Neural Netw. 2002;13(3):752–756. doi: 10.1109/TNN.2002.1000143.
25. Wang J, Wu W, Zurada JM. Computational properties and convergence analysis of BPNN for cyclic and almost cyclic learning with penalty. Neural Netw. 2012;33:127–135. doi: 10.1016/j.neunet.2012.04.013.
26. Weigend AS, Rumelhart DE, Huberman BA (1991) Generalization by weight elimination applied to currency exchange rate prediction. In: Proceedings of the international joint conference on neural networks, vol 1, pp 837–841
27. Wilson DR, Martinez TR. The general inefficiency of batch training for gradient descent learning. Neural Netw. 2003;16:1429–1451. doi: 10.1016/S0893-6080(03)00138-2.
28. Wu W, Shao H, Li Z. Convergence of batch BP algorithm with penalty for FNN training. Lect Notes Comput Sci. 2006;4232:562–569. doi: 10.1007/11893028_63.
29. Wu W, Li L, Yang J, Liu Y. A modified gradient-based neuro-fuzzy learning algorithm and its convergence. Inf Sci. 2010;180:1630–1642. doi: 10.1016/j.ins.2009.12.030.
30. Wu W, Fan QW, Zurada JM, et al. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks. Neural Netw. 2014;50:72–78. doi: 10.1016/j.neunet.2013.11.006.
31. Xu Z, Zhang H, Wang Y, Chang X, Liang Y. L1/2 regularizer. Sci China Inf Sci. 2010;53:1159–1169. doi: 10.1007/s11432-010-0090-0.
32. Zhang NM, Wu W, Zheng GF. Convergence of gradient method with momentum for two-layer feedforward neural networks. IEEE Trans Neural Netw. 2006;17(2):522–525. doi: 10.1109/TNN.2005.863460.
33. Zhang H, Wu W, Liu F, Yao M. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Trans Neural Netw. 2009;20(6):1050–1054. doi: 10.1109/TNN.2009.2020848.
34. Zhang HS, Zhang Y, Xu DP, Liu XD. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks. Cogn Neurodyn. 2015;9(3):331–340. doi: 10.1007/s11571-014-9323-z.
