J Inequal Appl. 2017 Sep 18;2017(1):227. doi: 10.1186/s13660-017-1506-9

Modified forward-backward splitting midpoint method with superposition perturbations for the sum of two kinds of infinite accretive mappings and its applications

Li Wei, Liling Duan, Ravi P Agarwal, Rui Chen, Yaqin Zheng
PMCID: PMC5603715  PMID: 28989255

Abstract

In a real uniformly convex and p-uniformly smooth Banach space, a modified forward-backward splitting iterative algorithm is presented in which computational errors and the superposition of perturbed operators are taken into account. The iterative sequence is proved to converge strongly to a zero point of the sum of infinitely many m-accretive mappings and infinitely many θ_i-inversely strongly accretive mappings, which is also the unique solution of a certain variational inequality. Some new proof techniques are used; in particular, a new inequality is employed compared with some recent work. Moreover, applications of the newly obtained iterative algorithm to integro-differential systems and convex minimization problems are exemplified.

Keywords: p-uniformly smooth Banach space, θi-inversely strongly accretive mapping, γi-strongly accretive mapping, μi-strictly pseudo-contractive mapping, perturbed operator

Introduction and preliminaries

Let X be a real Banach space with norm ‖·‖ and let X* be its dual space. '→' denotes strong convergence and ⟨x, f⟩ is the value of f ∈ X* at x ∈ X.

The function ρ_X : [0, +∞) → [0, +∞) defined as follows is called the modulus of smoothness of X:

ρ_X(t) = sup{(‖x + y‖ + ‖x − y‖)/2 − 1 : x, y ∈ X, ‖x‖ = 1, ‖y‖ ≤ t}.

A Banach space X is said to be uniformly smooth if ρ_X(t)/t → 0 as t → 0. Let p > 1 be a real number; a Banach space X is said to be p-uniformly smooth with constant K_p if there exists K_p > 0 such that ρ_X(t) ≤ K_p t^p for t > 0. It is well known that every p-uniformly smooth Banach space is uniformly smooth. For p > 1, the generalized duality mapping J_p : X → 2^{X*} is defined by

J_p x := {f ∈ X* : ⟨x, f⟩ = ‖x‖^p, ‖f‖ = ‖x‖^{p−1}}, x ∈ X.

In particular, J:=J2 is called the normalized duality mapping.
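As a quick orientation (this worked check is ours, not part of the original), in a real Hilbert space H one may take J_p x = ‖x‖^{p−2}x, since

⟨x, ‖x‖^{p−2}x⟩ = ‖x‖^{p−2}⟨x, x⟩ = ‖x‖^p and ‖‖x‖^{p−2}x‖ = ‖x‖^{p−1},

so ‖x‖^{p−2}x ∈ J_p x; in particular, J = J_2 is the identity mapping on H.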

For a mapping T : D(T) ⊂ X → X, we use F(T) and N(T) to denote its fixed point set and zero point set, respectively; that is, F(T) := {x ∈ D(T) : Tx = x} and N(T) := {x ∈ D(T) : Tx = 0}. The mapping T : D(T) ⊂ X → X is said to be

  1. non-expansive if
    ‖Tx − Ty‖ ≤ ‖x − y‖ for x, y ∈ D(T);
  2. a contraction with coefficient k ∈ (0,1) if
    ‖Tx − Ty‖ ≤ k‖x − y‖ for x, y ∈ D(T);
  3. accretive [1, 2] if for all x, y ∈ D(T) there exists j(x − y) ∈ J(x − y) such that ⟨Tx − Ty, j(x − y)⟩ ≥ 0;

    m-accretive if T is accretive and R(I + λT) = X for all λ > 0;

  4. θ-inversely strongly accretive [3] if, for some θ > 0 and all x, y ∈ D(T), there exists j_p(x − y) ∈ J_p(x − y) such that
    ⟨Tx − Ty, j_p(x − y)⟩ ≥ θ‖Tx − Ty‖^p;
  5. γ-strongly accretive [2, 3] if for each x, y ∈ D(T) there exists j(x − y) ∈ J(x − y) such that
    ⟨Tx − Ty, j(x − y)⟩ ≥ γ‖x − y‖²
    for some γ ∈ (0,1);
  6. μ-strictly pseudo-contractive [4] if for each x, y ∈ X there exists j(x − y) ∈ J(x − y) such that
    ⟨Tx − Ty, j(x − y)⟩ ≤ ‖x − y‖² − μ‖x − y − (Tx − Ty)‖²
    for some μ ∈ (0,1).
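For orientation, here is a scalar instance of definitions 4–6 (this worked check is ours; it matches the choices used later in Remark 2.3). Take X = ℝ, p = 2 and Cx = x/2^i for a fixed i ∈ ℕ. Then, for all x, y ∈ ℝ,

⟨Cx − Cy, j(x − y)⟩ = (x − y)²/2^i = 2^i|Cx − Cy|²,

so C is θ-inversely strongly accretive with θ = 2^i. Likewise, Wx = x/2^{i+1} satisfies ⟨Wx − Wy, j(x − y)⟩ = (x − y)²/2^{i+1}, so W is γ-strongly accretive with γ = 1/2^{i+1}; a direct computation shows that the inequality in definition 6 holds for W with every μ ∈ (0,1), so choosing μ ∈ (1 − 1/2^{i+1}, 1) gives μ + γ > 1.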

If T is accretive, then for each r > 0 the non-expansive single-valued mapping J_r^T : R(I + rT) → D(T) defined by J_r^T := (I + rT)^{−1} is called the resolvent of T [1]. Moreover, N(T) = F(J_r^T).
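As a small illustration (ours, not from the paper): on X = ℝ the subdifferential T = ∂|·| is m-accretive, and its resolvent (I + rT)^{−1} is the soft-thresholding map. The sketch below computes it and checks non-expansiveness numerically.

```python
import numpy as np

def resolvent_abs(x, r):
    """Resolvent (I + r*T)^{-1} for T = subdifferential of |.| on the real line,
    i.e. the soft-thresholding map (an illustrative choice of accretive T)."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

rng = np.random.default_rng(0)
r = 0.7
x, y = rng.normal(size=100), rng.normal(size=100)
# Non-expansiveness: |J_r^T x - J_r^T y| <= |x - y| componentwise.
assert np.all(np.abs(resolvent_abs(x, r) - resolvent_abs(y, r)) <= np.abs(x - y) + 1e-12)
# N(T) = {0} coincides with the fixed point set of the resolvent: F(J_r^T) = {0}.
print(resolvent_abs(0.0, r))  # 0.0
```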

Let D be a nonempty closed convex subset of X and let Q be a mapping of X onto D. Then Q is said to be sunny [5] if Q(Q(x) + t(x − Q(x))) = Q(x) for all x ∈ X and t ≥ 0. A mapping Q of X into X is said to be a retraction [5] if Q² = Q. If a mapping Q is a retraction, then Q(z) = z for every z ∈ R(Q), where R(Q) is the range of Q. A subset D of X is said to be a sunny non-expansive retract of X [5] if there exists a sunny non-expansive retraction of X onto D, and it is called a non-expansive retract of X if there exists a non-expansive retraction of X onto D.

It is a hot topic in applied mathematics to find zero points of the sum of two accretive mappings, namely, a solution of the following inclusion problem:

0 ∈ (A + B)x. (1.1)

For example, a stationary solution to the initial value problem of the evolution equation

du/dt + (A + B)u ∋ 0, u(0) = u_0 (1.2)

can be recast as (1.1). A forward-backward splitting iterative method for (1.1) means each iteration involves only A as the forward step and B as the backward step, not the sum A+B. The classical forward-backward splitting algorithm is given in the following way:

x_{n+1} = (I + r_nB)^{−1}(I − r_nA)x_n, n ∈ ℕ. (1.3)

Some related work can be found in [6–8] and the references therein.
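To make (1.3) concrete (our illustrative sketch, not taken from the paper): take A = ∇g for a smooth convex quadratic g and B = N_C, the normal cone of a box C, so that (I + r_nB)^{−1} is the projection onto C and (1.3) becomes a projected gradient method.

```python
import numpy as np

# Hypothetical data: minimize g(x) = 0.5*||M x - b||^2 over the box C = [0, 1]^d,
# i.e. find a zero of A + B with A = grad g (single-valued) and B = normal cone of C (m-accretive).
rng = np.random.default_rng(1)
M = rng.normal(size=(30, 5))
b = rng.normal(size=30)

def forward(x, r):
    return x - r * M.T @ (M @ x - b)     # forward (explicit) step with A = grad g

def backward(y):
    return np.clip(y, 0.0, 1.0)          # backward (implicit) step: resolvent of B = projection onto C

x = np.zeros(5)
r = 1.0 / np.linalg.norm(M.T @ M, 2)     # a step size small enough for the Lipschitz gradient of g
for n in range(500):
    x = backward(forward(x, r))          # x_{n+1} = (I + r B)^{-1}(I - r A) x_n, cf. (1.3)
print(x)                                 # approximate zero of A + B
```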

In 2015, Wei et al. [9] extended the related work on (1.1) from a Hilbert space to a real smooth and uniformly convex Banach space and from two accretive mappings to two finite families of accretive mappings:

{ x_0 ∈ D,
  y_n = Q_D[(1 − α_n)(x_n + e_n)],
  z_n = (1 − β_n)x_n + β_n[a_0y_n + ∑_{i=1}^N a_iJ_{r_{n,i}}^{A_i}(y_n − r_{n,i}B_iy_n)],
  x_{n+1} = γ_nηf(x_n) + (I − γ_nT)z_n, n ∈ ℕ ∪ {0}, (1.4)

where D is a nonempty, closed and convex sunny non-expansive retract of X, Q_D is the sunny non-expansive retraction of X onto D, {e_n} is an error sequence, A_i and B_i are m-accretive mappings and θ-inversely strongly accretive mappings, respectively, for i = 1, 2, …, N; T : X → X is a strongly positive linear bounded operator with coefficient γ̅ and f : X → X is a contraction; ∑_{m=0}^N a_m = 1 with 0 < a_m < 1. The iterative sequence {x_n} is proved to converge strongly to p_0 ∈ ∩_{i=1}^N N(A_i + B_i), which solves the variational inequality

⟨(T − ηf)p_0, J(p_0 − z)⟩ ≤ 0 (1.5)

for z ∈ ∩_{i=1}^N N(A_i + B_i) under some conditions.

The implicit midpoint rule is one of the powerful numerical methods for solving ordinary differential equations, and it has been extensively studied by Alghamdi et al. They presented the following implicit midpoint rule for approximating the fixed point of a non-expansive mapping in a Hilbert space H in [10]:

x_1 ∈ H, x_{n+1} = (1 − α_n)x_n + α_nT((x_n + x_{n+1})/2), n ∈ ℕ, (1.6)

where T is a non-expansive mapping from H to H. If F(T) ≠ ∅, they proved that {x_n} converges weakly to p_0 ∈ F(T) under some conditions.
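The rule (1.6) is implicit, since x_{n+1} appears on both sides. One simple way to realize it numerically (our sketch, not from [10]) is an inner fixed-point loop: because T is non-expansive, the map x ↦ (1 − α_n)x_n + α_nT((x_n + x)/2) is an (α_n/2)-contraction, so the inner iteration converges.

```python
import numpy as np

def T(x):
    # A hypothetical non-expansive map on R^2: rotation by 90 degrees; its only fixed point is 0.
    return np.array([-x[1], x[0]])

def implicit_midpoint_step(x_n, alpha, tol=1e-12, max_inner=200):
    """Solve x = (1 - alpha)*x_n + alpha*T((x_n + x)/2) by fixed-point iteration;
    the map on the right is an (alpha/2)-contraction since T is non-expansive."""
    x = x_n.copy()
    for _ in range(max_inner):
        x_new = (1 - alpha) * x_n + alpha * T((x_n + x) / 2)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = np.array([1.0, 2.0])
for n in range(200):
    x = implicit_midpoint_step(x, alpha=0.5)   # x_{n+1} from (1.6)
print(x)   # approaches the fixed point (0, 0) of T
```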

Combining the ideas of the forward-backward method and the midpoint method, Wei et al. [3] extended the study from two finite families of accretive mappings to two infinite families of accretive mappings in a real q-uniformly smooth and uniformly convex Banach space:

{ x_0 ∈ D,
  y_n = Q_D[(1 − α_n)(x_n + e_n)],
  z_n = δ_ny_n + β_n∑_{i=1}^∞ a_iJ_{r_{n,i}}^{A_i}[(y_n + z_n)/2 − r_{n,i}B_i((y_n + z_n)/2)] + ζ_ne_n′,
  x_{n+1} = γ_nηf(x_n) + (I − γ_nT)z_n + e_n″, n ∈ ℕ ∪ {0}, (1.7)

where {e_n}, {e_n′} and {e_n″} are three error sequences, A_i : D → X and B_i : D → X are m-accretive mappings and θ_i-inversely strongly accretive mappings, respectively, for i ∈ ℕ; T : X → X is a strongly positive linear bounded operator with coefficient γ̅, f : X → X is a contraction, ∑_{i=1}^∞ a_i = 1 with 0 < a_i < 1, and δ_n + β_n + ζ_n ≤ 1 for n ∈ ℕ ∪ {0}. The iterative sequence {x_n} is proved to converge strongly to p_0 ∈ ∩_{i=1}^∞ N(A_i + B_i), which solves the following variational inequality:

⟨(T − ηf)p_0, J(p_0 − z)⟩ ≤ 0, z ∈ ∩_{i=1}^∞ N(A_i + B_i). (1.8)

In 2012, Ceng et al. [11] presented the following iterative algorithm to approximate a zero point of an m-accretive mapping:

{ x_0 ∈ X,
  y_n = α_nx_n + (1 − α_n)J_{r_n}^Ax_n,
  x_{n+1} = β_nf(x_n) + (1 − β_n)[J_{r_n}^Ay_n − λ_nμ_nF(J_{r_n}^Ay_n)], n ∈ ℕ ∪ {0}, (1.9)

where F : X → X is a γ-strongly accretive and μ-strictly pseudo-contractive mapping with γ + μ > 1, f : X → X is a contraction and A : X → X is m-accretive. Under some assumptions, {x_n} is proved to converge strongly to the unique element p_0 ∈ N(A), which solves the following variational inequality:

⟨p_0 − f(p_0), J(p_0 − u)⟩ ≤ 0, u ∈ N(A). (1.10)

The mapping F in (1.9) is called a perturbed operator; it plays a role only in the construction of the iterative algorithm, for selecting a particular zero of A, and it is not involved in the variational inequality (1.10).

Inspired by the work mentioned above, in Section 2 we construct a new modified forward-backward splitting midpoint iterative algorithm to approximate zero points of the sum of infinitely many m-accretive mappings and infinitely many θ_i-inversely strongly accretive mappings. New proof techniques are introduced, the superposition of perturbed operators is considered, and some restrictions on the parameters are milder than those in existing similar work. In Section 3, we discuss applications of the newly obtained iterative algorithm to integro-differential systems and convex minimization problems.

We need the following preliminaries in our paper.

Lemma 1.1

[12]

Let X be a real uniformly convex and p-uniformly smooth Banach space with constant K_p for some p ∈ (1,2]. Let D be a nonempty closed convex subset of X. Let A : D → X be an m-accretive mapping and B : D → X be a θ-inversely strongly accretive mapping. Then, given s > 0, there exists a continuous, strictly increasing and convex function φ_p : ℝ⁺ → ℝ⁺ with φ_p(0) = 0 such that, for all x, y ∈ D with ‖x‖ ≤ s and ‖y‖ ≤ s,

‖J_r^A(I − rB)x − J_r^A(I − rB)y‖^p ≤ ‖x − y‖^p − r(pθ − K_pr^{p−1})‖Bx − By‖^p − φ_p(‖(I − J_r^A)(I − rB)x − (I − J_r^A)(I − rB)y‖).

In particular, if 0 < r ≤ (pθ/K_p)^{1/(p−1)}, then J_r^A(I − rB) is non-expansive.

Lemma 1.2

[13]

Let X be a real smooth Banach space and let B : X → X be a μ-strictly pseudo-contractive mapping which is also γ-strongly accretive with μ + γ > 1. Then, for any fixed number δ ∈ (0,1), I − δB is a contraction with coefficient 1 − δ(1 − √((1 − γ)/μ)).

Lemma 1.3

[2]

Let X be a real Banach space and D be a nonempty closed and convex subset of X. Let f : D → D be a contraction. Then f has a unique fixed point.

Lemma 1.4

[14]

Let X be a real strictly convex Banach space, and let D be a nonempty closed and convex subset of X. Let T_m : D → D be a non-expansive mapping for each m ∈ ℕ. Let {a_m} be a real number sequence in (0,1) such that ∑_{m=1}^∞ a_m = 1. Suppose that ∩_{m=1}^∞ F(T_m) ≠ ∅. Then the mapping ∑_{m=1}^∞ a_mT_m is non-expansive and F(∑_{m=1}^∞ a_mT_m) = ∩_{m=1}^∞ F(T_m).

Lemma 1.5

[12]

In a real Banach space X, for p>1, the following inequality holds:

‖x + y‖^p ≤ ‖x‖^p + p⟨y, j_p(x + y)⟩, ∀x, y ∈ X, j_p(x + y) ∈ J_p(x + y).

Lemma 1.6

[15]

Let X be a real Banach space, and let D be a nonempty closed and convex subset of X. Suppose A : D → X is a single-valued mapping and B : X → 2^X is m-accretive. Then

F((I + rB)^{−1}(I − rA)) = N(A + B) for all r > 0.

Lemma 1.7

[16]

Let {a_n} be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence {a_{n_k}} such that a_{n_k} ≤ a_{n_k+1} for all k ∈ ℕ ∪ {0}. For every n > n_0, define an integer sequence {τ(n)} by

τ(n) = max{n_0 ≤ k ≤ n : a_k < a_{k+1}}.

Then τ(n) → ∞ as n → ∞ and, for all n > n_0, max{a_{τ(n)}, a_n} ≤ a_{τ(n)+1}.
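To see what τ(n) does, here is a small numerical demonstration (ours, not part of [16]) on a finite non-monotone sequence: τ(n) picks the latest index k ≤ n at which the sequence increased, and the conclusion max{a_{τ(n)}, a_n} ≤ a_{τ(n)+1} can be checked directly.

```python
# Small illustration (ours) of the integer sequence tau(n) from Lemma 1.7.
a = [5.0, 4.0, 4.5, 3.0, 3.2, 3.1, 2.0, 2.6]    # a hypothetical non-monotone sequence (finite, for illustration)
n0 = 0                                          # a[1] < a[2] here, so tau(n) is well defined for n >= 1

def tau(n):
    return max(k for k in range(n0, n + 1) if a[k] < a[k + 1])

for n in range(1, len(a) - 1):
    t = tau(n)
    assert max(a[t], a[n]) <= a[t + 1]          # the conclusion of Lemma 1.7
    print(n, t)
```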

Lemma 1.8

[17]

For p>1, the following inequality holds:

ab ≤ (1/p)a^p + ((p − 1)/p)b^{p/(p−1)},

for any positive real numbers a and b.
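A quick numerical spot check of this Young-type inequality (ours, for illustration only):

```python
import numpy as np

# Spot check of Lemma 1.8: a*b <= a**p/p + ((p-1)/p)*b**(p/(p-1)) for a, b > 0 and p > 1.
rng = np.random.default_rng(2)
for _ in range(10000):
    a, b = rng.uniform(0.01, 10.0, size=2)
    p = rng.uniform(1.01, 4.0)
    assert a * b <= a**p / p + (p - 1) / p * b**(p / (p - 1)) + 1e-9
print("Lemma 1.8 held on all sampled triples (a, b, p).")
```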

Lemma 1.9

[18]

The Banach space X is uniformly smooth if and only if the duality mapping Jp is single-valued and norm-to-norm uniformly continuous on bounded subsets of X.

Strong convergence theorems

Theorem 2.1

Let X be a real uniformly convex and p-uniformly smooth Banach space with constant K_p, where p ∈ (1,2], and let D be a nonempty closed and convex sunny non-expansive retract of X. Let Q_D be the sunny non-expansive retraction of X onto D. Let f : X → X be a contraction with coefficient k ∈ (0,1), A_i : D → X be m-accretive mappings, C_i : D → X be θ_i-inversely strongly accretive mappings, and W_i : X → X be μ_i-strictly pseudo-contractive mappings and γ_i-strongly accretive mappings with μ_i + γ_i > 1 for i ∈ ℕ. Suppose {ω_i^{(1)}} and {ω_i^{(2)}} are real number sequences in (0,1) for i ∈ ℕ. Suppose 0 < r_{n,i} ≤ (pθ_i/K_p)^{1/(p−1)} for i ∈ ℕ and n ∈ ℕ, κ_t ∈ (0,1) for t ∈ (0,1), ∑_{i=1}^∞ ω_i^{(1)}‖W_i‖ < +∞, ∑_{i=1}^∞ ω_i^{(1)} = ∑_{i=1}^∞ ω_i^{(2)} = 1 and ∩_{i=1}^∞ N(A_i + C_i) ≠ ∅. If, for each t ∈ (0,1), we define Z_t^n : X → X by

Z_t^nu = tf(u) + (1 − t)(I − κ_t∑_{i=1}^∞ ω_i^{(1)}W_i)(∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i)Q_Du),

then Z_t^n has a fixed point u_t^n. Moreover, if κ_t/t → 0, then u_t^n converges strongly, as t → 0, to the unique solution q_0 of the following variational inequality:

⟨q_0 − f(q_0), J(q_0 − u)⟩ ≤ 0, u ∈ ∩_{i=1}^∞ N(A_i + C_i). (2.1)

Proof

We split the proof into five steps.

Step 1. Z_t^n : X → X is a contraction for t ∈ (0,1), κ_t ∈ (0,1) and n ∈ ℕ.

For brevity, throughout the proof write S := ∑_{i=1}^∞ ω_i^{(1)}W_i and G_n := ∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i), so that Z_t^nu = tf(u) + (1 − t)(I − κ_tS)(G_nQ_Du). For x, y ∈ X, using Lemmas 1.1 and 1.2, we have

‖Z_t^nx − Z_t^ny‖ ≤ t‖f(x) − f(y)‖ + (1 − t)‖∑_{i=1}^∞ ω_i^{(1)}(I − κ_tW_i)(G_nQ_Dx) − ∑_{i=1}^∞ ω_i^{(1)}(I − κ_tW_i)(G_nQ_Dy)‖ ≤ tk‖x − y‖ + (1 − t)∑_{i=1}^∞ ω_i^{(1)}[1 − κ_t(1 − √((1 − γ_i)/μ_i))]‖x − y‖ ≤ [1 − (1 − k)t]‖x − y‖,

which implies that Z_t^n is a contraction. By Lemma 1.3, there exists u_t^n such that Z_t^nu_t^n = u_t^n; that is, u_t^n = tf(u_t^n) + (1 − t)(I − κ_tS)(G_nQ_Du_t^n).

Step 2. If lim_{t→0} κ_t/t = 0, then {u_t^n} is bounded for n ∈ ℕ and 0 < t ≤ a, where a is a sufficiently small positive number and u_t^n is the same as that in Step 1.

For u ∈ ∩_{i=1}^∞ N(A_i + C_i), using Lemmas 1.1, 1.2 and 1.6, we know that

‖u_t^n − u‖ ≤ tk‖u_t^n − u‖ + t‖f(u) − u‖ + (1 − t)κ_t‖Su‖ + (1 − t)‖∑_{i=1}^∞ ω_i^{(1)}(I − κ_tW_i)(G_nQ_Du_t^n) − ∑_{i=1}^∞ ω_i^{(1)}(I − κ_tW_i)(G_nQ_Du)‖ ≤ tk‖u_t^n − u‖ + t‖f(u) − u‖ + (1 − t)κ_t∑_{i=1}^∞ ω_i^{(1)}‖W_iu‖ + (1 − t)∑_{i=1}^∞ ω_i^{(1)}[1 − κ_t(1 − √((1 − γ_i)/μ_i))]‖u_t^n − u‖ ≤ t‖f(u) − u‖ + (1 − t + tk)‖u_t^n − u‖ + (1 − t)κ_t∑_{i=1}^∞ ω_i^{(1)}‖W_iu‖.

Then

‖u_t^n − u‖ ≤ (‖f(u) − u‖ + (κ_t/t)∑_{i=1}^∞ ω_i^{(1)}‖W_iu‖)/(1 − k).

Since lim_{t→0} κ_t/t = 0, there exists a sufficiently small positive number a such that 0 < κ_t/t < 1 for 0 < t ≤ a. Thus {u_t^n} is bounded for n ∈ ℕ and 0 < t ≤ a.

Step 3. If lim_{t→0} κ_t/t = 0, then ‖u_t^n − G_nQ_Du_t^n‖ → 0, as t → 0, for n ∈ ℕ.

Noticing Step 2, we have

‖u_t^n − G_nQ_Du_t^n‖ ≤ t‖f(u_t^n)‖ + t‖G_nQ_Du_t^n‖ + (1 − t)κ_t‖S(G_nQ_Du_t^n)‖ → 0,

as t → 0.

Step 4. If the variational inequality (2.1) has solutions, the solution must be unique.

Suppose u_0 ∈ ∩_{i=1}^∞ N(A_i + C_i) and v_0 ∈ ∩_{i=1}^∞ N(A_i + C_i) are two solutions of (2.1); then

⟨u_0 − f(u_0), J(u_0 − v_0)⟩ ≤ 0, (2.2)

and

⟨v_0 − f(v_0), J(v_0 − u_0)⟩ ≤ 0. (2.3)

Adding up (2.2) and (2.3), we get

⟨u_0 − f(u_0) − v_0 + f(v_0), J(u_0 − v_0)⟩ ≤ 0. (2.4)

Since

⟨u_0 − f(u_0) − v_0 + f(v_0), J(u_0 − v_0)⟩ = ‖u_0 − v_0‖² − ⟨f(u_0) − f(v_0), J(u_0 − v_0)⟩ ≥ ‖u_0 − v_0‖² − k‖u_0 − v_0‖² = (1 − k)‖u_0 − v_0‖²,

then (2.4) implies that u0=v0.

Step 5. If lim_{t→0} κ_t/t = 0, then u_t^n → q_0 ∈ ∩_{i=1}^∞ N(A_i + C_i), as t → 0, and q_0 solves the variational inequality (2.1).

Assume t_m → 0. Set u_m^n := u_{t_m}^n and define μ : X → ℝ by

μ(u) = LIM‖u_m^n − u‖², u ∈ X,

where LIM is the Banach limit on l^∞. Let

K = {x ∈ X : μ(x) = min_{y∈X} LIM‖u_m^n − y‖²}.

It is easily seen that K is a nonempty closed convex bounded subset of X. Since ‖u_m^n − G_nQ_Du_m^n‖ → 0 from Step 3, for u ∈ K we have

μ(G_nQ_Du) = LIM‖u_m^n − G_nQ_Du‖² ≤ LIM‖u_m^n − u‖² = μ(u);

it follows that G_nQ_D(K) ⊂ K; that is, K is invariant under G_nQ_D. Since a uniformly smooth Banach space has the fixed point property for non-expansive mappings, G_nQ_D has a fixed point, say q_0, in K. That is, G_nQ_Dq_0 = q_0 ∈ D, which ensures from Lemmas 1.4 and 1.6 that q_0 ∈ ∩_{i=1}^∞ N(A_i + C_i). Since q_0 is also a minimizer of μ over X, it follows that, for t ∈ (0,1),

0 ≤ (μ(q_0 + tf(q_0) − tq_0) − μ(q_0))/t = LIM(‖u_m^n − q_0 − tf(q_0) + tq_0‖² − ‖u_m^n − q_0‖²)/t = LIM(⟨u_m^n − q_0 − tf(q_0) + tq_0, J(u_m^n − q_0 − tf(q_0) + tq_0)⟩ − ‖u_m^n − q_0‖²)/t = LIM(⟨u_m^n − q_0, J(u_m^n − q_0 − tf(q_0) + tq_0)⟩ + t⟨q_0 − f(q_0), J(u_m^n − q_0 − tf(q_0) + tq_0)⟩ − ‖u_m^n − q_0‖²)/t.

Since X is uniformly smooth, letting t → 0 we find that the two limits above can be interchanged, and we obtain

LIM⟨f(q_0) − q_0, J(u_m^n − q_0)⟩ ≤ 0. (2.5)

Since u_m^n − q_0 = t_m(f(u_m^n) − q_0) + (1 − t_m)[(I − κ_{t_m}S)(G_nQ_Du_m^n) − q_0], we have

‖u_m^n − q_0‖² = ⟨u_m^n − q_0, J(u_m^n − q_0)⟩ ≤ t_m⟨f(u_m^n) − f(q_0), J(u_m^n − q_0)⟩ + t_m⟨f(q_0) − q_0, J(u_m^n − q_0)⟩ + (1 − t_m)‖G_nQ_Du_m^n − q_0‖‖u_m^n − q_0‖ + (1 − t_m)κ_{t_m}‖S(G_nQ_Du_m^n)‖‖u_m^n − q_0‖ ≤ (1 − t_m + t_mk)‖u_m^n − q_0‖² + t_m⟨f(q_0) − q_0, J(u_m^n − q_0)⟩ + (1 − t_m)κ_{t_m}‖S(G_nQ_Du_m^n)‖‖u_m^n − q_0‖.

Therefore,

‖u_m^n − q_0‖² ≤ (1/(1 − k))[⟨f(q_0) − q_0, J(u_m^n − q_0)⟩ + (κ_{t_m}/t_m)‖S(G_nQ_Du_m^n)‖‖u_m^n − q_0‖]. (2.6)

Since κ_{t_m}/t_m → 0, then from (2.5), (2.6) and the result of Step 2 we have LIM‖u_m^n − q_0‖² ≤ 0, which implies that LIM‖u_m^n − q_0‖² = 0, and then there exists a subsequence, still denoted by {u_m^n}, such that u_m^n → q_0.

Next, we shall show that q0 solves the variational inequality (2.1).

Note that u_m^n = t_mf(u_m^n) + (1 − t_m)(I − κ_{t_m}S)(G_nQ_Du_m^n); then, for v ∈ ∩_{i=1}^∞ N(A_i + C_i),

⟨G_nQ_Du_m^n − f(u_m^n), J(u_m^n − v)⟩ = (1/t_m)⟨(I − κ_{t_m}S)(G_nQ_Du_m^n), J(u_m^n − v)⟩ − (1/t_m)⟨u_m^n − t_mκ_{t_m}S(G_nQ_Du_m^n), J(u_m^n − v)⟩ = (1/t_m)⟨∑_{i=1}^∞ ω_i^{(1)}(I − κ_{t_m}W_i)(G_nQ_Du_m^n) − ∑_{i=1}^∞ ω_i^{(1)}(I − κ_{t_m}W_i)(G_nQ_Dv), J(u_m^n − v)⟩ − (1/t_m)‖u_m^n − v‖² − (κ_{t_m}/t_m)⟨Sv, J(u_m^n − v)⟩ + κ_{t_m}⟨S(G_nQ_Du_m^n), J(u_m^n − v)⟩ ≤ −(1/t_m){1 − ∑_{i=1}^∞ ω_i^{(1)}[1 − κ_{t_m}(1 − √((1 − γ_i)/μ_i))]}‖u_m^n − v‖² + (κ_{t_m}/t_m)∑_{i=1}^∞ ω_i^{(1)}‖W_iv‖‖u_m^n − v‖ + κ_{t_m}‖S(G_nQ_Du_m^n)‖‖u_m^n − v‖ ≤ (κ_{t_m}/t_m)∑_{i=1}^∞ ω_i^{(1)}‖W_iv‖‖u_m^n − v‖ + κ_{t_m}‖S(G_nQ_Du_m^n)‖‖u_m^n − v‖ → 0,

as t_m → 0. Since u_m^n → q_0 and J is uniformly continuous on each bounded subset of X, taking the limit on both sides of the above inequality gives ⟨q_0 − f(q_0), J(q_0 − v)⟩ ≤ 0, which implies that q_0 satisfies the variational inequality (2.1).

Next, to prove that the net {u_t^n} converges strongly to q_0 as t → 0, suppose that there is another subsequence {u_{t_k}^n} of {u_t^n} satisfying u_{t_k}^n → v_0 as t_k → 0. Denote u_{t_k}^n by u_k^n. Then the result of Step 3 implies that 0 = lim_{t_k→0}(u_k^n − G_nQ_Du_k^n) = v_0 − G_nQ_Dv_0, which ensures that v_0 ∈ ∩_{i=1}^∞ N(A_i + C_i) in view of Lemmas 1.4 and 1.6. Repeating the above proof, we also know that v_0 solves the variational inequality (2.1). Thus q_0 = v_0 by the result of Step 4.

Hence u_t^n → q_0, as t → 0, and q_0 is the unique solution of the variational inequality (2.1).

This completes the proof. □

Theorem 2.2

Let X be a real uniformly convex and p-uniformly smooth Banach space with constant K_p, where p ∈ (1,2], and let D be a nonempty closed and convex sunny non-expansive retract of X. Let Q_D be the sunny non-expansive retraction of X onto D. Let f : X → X be a contraction with coefficient k ∈ (0,1), A_i : D → X be m-accretive mappings, C_i : D → X be θ_i-inversely strongly accretive mappings, and W_i : X → X be μ_i-strictly pseudo-contractive mappings and γ_i-strongly accretive mappings with μ_i + γ_i > 1 for i ∈ ℕ. Suppose {ω_i^{(1)}}, {ω_i^{(2)}}, {α_n}, {β_n}, {ϑ_n}, {ν_n}, {ξ_n}, {δ_n} and {ζ_n} are real number sequences in (0,1), {r_{n,i}} ⊂ (0,+∞), and {a_n} ⊂ X and {b_n} ⊂ D are error sequences, where n ∈ ℕ and i ∈ ℕ. Suppose ∩_{i=1}^∞ N(A_i + C_i) ≠ ∅. Let {x_n} be generated by the following iterative algorithm:

{ x_1 ∈ D,
  u_n = Q_D(α_nx_n + β_na_n),
  v_n = ϑ_nu_n + ν_n∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i)((u_n + v_n)/2) + ξ_nb_n,
  x_{n+1} = δ_nf(x_n) + (1 − δ_n)(I − ζ_n∑_{i=1}^∞ ω_i^{(1)}W_i)∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i)((u_n + v_n)/2), n ∈ ℕ. (2.7)

Under the following assumptions:

  • (i)

α_n + β_n ≤ 1 and ϑ_n + ν_n + ξ_n ≤ 1 for n ∈ ℕ;

  • (ii)

∑_{i=1}^∞ ω_i^{(1)} = ∑_{i=1}^∞ ω_i^{(2)} = 1;

  • (iii)

∑_{n=1}^∞ ‖a_n‖ < +∞, ∑_{n=1}^∞ ‖b_n‖ < +∞, ∑_{n=1}^∞ (1 − α_n) < +∞, ∑_{n=1}^∞ ξ_n < +∞, lim_{n→∞}∑_{i=1}^∞ r_{n,i} = 0;

  • (iv)

lim_{n→∞} δ_n = 0, ∑_{n=1}^∞ δ_n = +∞;

  • (v)

(1 − α_n) + ‖a_n‖ = o(δ_n), ξ_n = o(δ_n), ζ_n = o(ξ_n), ν_n → 0, as n → ∞;

  • (vi)

∑_{i=1}^∞ ω_i^{(1)}‖W_i‖ < +∞, 0 < r_{n,i} ≤ (pθ_i/K_p)^{1/(p−1)} for i ∈ ℕ and n ∈ ℕ,

the iterative sequence {x_n} converges strongly to q_0 ∈ ∩_{i=1}^∞ N(A_i + C_i), which is the unique solution of the variational inequality (2.1).

Proof

We split the proof into four steps.

Step 1. {vn} is well defined and so is {xn}.

For s, t ∈ (0,1), define H_{s,t} : D → D by H_{s,t}x := su + tH((u + x)/2) + (1 − s − t)v, where H : D → D is non-expansive and u, v ∈ D. Then, for x, y ∈ D,

‖H_{s,t}x − H_{s,t}y‖ ≤ t‖(u + x)/2 − (u + y)/2‖ ≤ (t/2)‖x − y‖.

Thus H_{s,t} is a contraction, which ensures from Lemma 1.3 that there exists x_{s,t} ∈ D such that H_{s,t}x_{s,t} = x_{s,t}; that is, x_{s,t} = su + tH((u + x_{s,t})/2) + (1 − s − t)v.

Since ∑_{i=1}^∞ ω_i^{(2)} = 1 and J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i) is non-expansive for n ∈ ℕ and i ∈ ℕ, {v_n} is well defined, and hence so is {x_n}.

Step 2. {xn} is bounded.

For p ∈ ∩_{i=1}^∞ N(A_i + C_i), keeping the notation S := ∑_{i=1}^∞ ω_i^{(1)}W_i and G_n := ∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i) from the proof of Theorem 2.1 and writing w_n := (u_n + v_n)/2, we can easily see that

‖u_n − p‖ ≤ α_n‖x_n − p‖ + β_n‖a_n‖ + (1 − α_n)‖p‖.

And

‖v_n − p‖ ≤ ϑ_n‖u_n − p‖ + ν_n‖w_n − p‖ + ξ_n‖b_n − p‖ ≤ (ϑ_n + ν_n/2)‖u_n − p‖ + (ν_n/2)‖v_n − p‖ + ξ_n‖b_n − p‖.

Thus

‖v_n − p‖ ≤ ((2ϑ_n + ν_n)/(2 − ν_n))‖u_n − p‖ + (2ξ_n/(2 − ν_n))‖b_n − p‖ ≤ α_n‖x_n − p‖ + β_n‖a_n‖ + (1 − α_n)‖p‖ + 2‖b_n‖ + (2ξ_n/(2 − ν_n))‖p‖. (2.8)

Using Lemma 1.2 and (2.8), we have, for n ∈ ℕ,

‖x_{n+1} − p‖ ≤ δ_n‖f(x_n) − f(p)‖ + δ_n‖f(p) − p‖ + (1 − δ_n)‖(I − ζ_nS)(G_nw_n) − p‖ ≤ δ_nk‖x_n − p‖ + δ_n‖f(p) − p‖ + (1 − δ_n)ζ_n‖Sp‖ + (1 − δ_n)‖∑_{i=1}^∞ ω_i^{(1)}(I − ζ_nW_i)(G_nw_n) − ∑_{i=1}^∞ ω_i^{(1)}(I − ζ_nW_i)(G_np)‖ ≤ δ_nk‖x_n − p‖ + δ_n‖f(p) − p‖ + (1 − δ_n)ζ_n‖Sp‖ + (1 − δ_n)[1 − ζ_n(1 − ∑_{i=1}^∞ ω_i^{(1)}√((1 − γ_i)/μ_i))][α_n‖x_n − p‖ + β_n‖a_n‖ + (1 − α_n)‖p‖ + ‖b_n‖ + (ξ_n/(2 − ν_n))‖p‖] ≤ {(1 − δ_n)[1 − ζ_n(1 − ∑_{i=1}^∞ ω_i^{(1)}√((1 − γ_i)/μ_i))] + δ_nk}‖x_n − p‖ + δ_n‖f(p) − p‖ + (1 − δ_n)[1 − ζ_n(1 − ∑_{i=1}^∞ ω_i^{(1)}√((1 − γ_i)/μ_i))][β_n‖a_n‖ + (1 − α_n)‖p‖ + ‖b_n‖ + (ξ_n/(2 − ν_n))‖p‖] + (1 − δ_n)ζ_n‖Sp‖. (2.9)

By induction, we easily obtain from (2.9) that

‖x_{n+1} − p‖ ≤ max{‖x_1 − p‖, ‖Sp‖/(1 − ∑_{i=1}^∞ ω_i^{(1)}√((1 − γ_i)/μ_i)), ‖f(p) − p‖/(1 − k)} + ∑_{k=1}^n(1 − δ_k)[1 − ζ_k(1 − ∑_{i=1}^∞ ω_i^{(1)}√((1 − γ_i)/μ_i))][β_k‖a_k‖ + (1 − α_k)‖p‖ + ‖b_k‖ + (ξ_k/(2 − ν_k))‖p‖].

Therefore, from assumptions (iii) and (vi), we know that {xn} is bounded.

Step 3. There exists q0i=1N(Ai+Ci), which solves the variational inequality (2.1).

Using Theorem 2.1, we know that there exists u_t^n such that u_t^n = tf(u_t^n) + (1 − t)(I − κ_tS)(G_nQ_Du_t^n) for t ∈ (0,1). Moreover, under the assumption κ_t/t → 0, u_t^n → q_0 ∈ ∩_{i=1}^∞ N(A_i + C_i), as t → 0, which is the unique solution of the variational inequality (2.1).

Step 4. x_n → q_0, as n → ∞, where q_0 is the same as that in Step 3.

Set C_1 := sup{2‖α_nx_n + β_na_n − q_0‖^{p−1}, 2‖q_0‖‖α_nx_n + β_na_n − q_0‖^{p−1} : n ∈ ℕ}; then, from Step 2 and assumption (iii), C_1 is a positive constant. Using Lemma 1.5, we have

‖u_n − q_0‖^p ≤ α_n‖x_n − q_0‖^p − p(1 − α_n)⟨q_0, J_p(α_nx_n + β_na_n − q_0)⟩ + pβ_n⟨a_n, J_p(α_nx_n + β_na_n − q_0)⟩ ≤ α_n‖x_n − q_0‖^p + C_1(1 − α_n) + C_1‖a_n‖. (2.10)

Using Lemma 1.1, we know that

‖v_n − q_0‖^p ≤ ϑ_n‖u_n − q_0‖^p + ν_n‖G_nw_n − q_0‖^p + ξ_n‖b_n − q_0‖^p ≤ (ϑ_n + ν_n/2)‖u_n − q_0‖^p + (ν_n/2)‖v_n − q_0‖^p − ν_n∑_{i=1}^∞ ω_i^{(2)}r_{n,i}(pθ_i − K_pr_{n,i}^{p−1})‖C_iw_n − C_iq_0‖^p − ν_n∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖) + ξ_n‖b_n − q_0‖^p.

Therefore,

‖v_n − q_0‖^p ≤ ((2ϑ_n + ν_n)/(2 − ν_n))‖u_n − q_0‖^p + (2ξ_n/(2 − ν_n))‖b_n − q_0‖^p − (2ν_n/(2 − ν_n))∑_{i=1}^∞ ω_i^{(2)}r_{n,i}(pθ_i − K_pr_{n,i}^{p−1})‖C_iw_n − C_iq_0‖^p − (2ν_n/(2 − ν_n))∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖). (2.11)

Now, from (2.10)–(2.11) and Lemmas 1.4 and 1.5, we know that, for n ∈ ℕ,

‖x_{n+1} − q_0‖^p = ‖δ_n(f(x_n) − q_0) + (1 − δ_n)(G_nw_n − q_0) − (1 − δ_n)ζ_nS(G_nw_n)‖^p ≤ (1 − δ_n)‖w_n − q_0‖^p + pδ_n⟨f(x_n) − f(q_0), J_p(x_{n+1} − q_0)⟩ + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ − p(1 − δ_n)ζ_n⟨S(G_nw_n), J_p(x_{n+1} − q_0)⟩ ≤ (1 − δ_n)(‖u_n − q_0‖^p/2 + ‖v_n − q_0‖^p/2) + pδ_nk‖x_n − q_0‖‖x_{n+1} − q_0‖^{p−1} + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ − p(1 − δ_n)ζ_n⟨S(G_nw_n), J_p(x_{n+1} − q_0)⟩ ≤ (1 − δ_n)‖u_n − q_0‖^p + (1 − δ_n)[(ξ_n/(2 − ν_n))‖b_n − q_0‖^p − (ν_n/(2 − ν_n))∑_{i=1}^∞ ω_i^{(2)}r_{n,i}(pθ_i − K_pr_{n,i}^{p−1})‖C_iw_n − C_iq_0‖^p − (ν_n/(2 − ν_n))∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖)] + pkδ_n‖x_n − q_0‖‖x_{n+1} − q_0‖^{p−1} + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ − p(1 − δ_n)ζ_n⟨S(G_nw_n), J_p(x_{n+1} − q_0)⟩ ≤ (1 − δ_n)‖x_n − q_0‖^p + C_1(1 − α_n + ‖a_n‖) + kδ_n‖x_n − q_0‖^p + kδ_n‖x_{n+1} − q_0‖^p + (ξ_n/(2 − ν_n))‖b_n − q_0‖^p − (1 − δ_n)(ν_n/(2 − ν_n))∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖) + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ + 2ζ_n‖S(G_nw_n)‖‖x_{n+1} − q_0‖^{p−1},

which implies that

‖x_{n+1} − q_0‖^p ≤ [(1 − δ_n(1 − k))/(1 − δ_nk)]‖x_n − q_0‖^p + C_1(1 − α_n + ‖a_n‖)/(1 − δ_nk) + [1/(1 − δ_nk)][(ξ_n/(2 − ν_n))‖b_n − q_0‖^p + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ + 2ζ_n‖S(G_nw_n)‖‖x_{n+1} − q_0‖^{p−1}] − [(1 − δ_n)ν_n/((2 − ν_n)(1 − δ_nk))]∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖).

From Step 2, if we set C_2 := sup{‖S(G_nw_n)‖, ‖x_n − q_0‖^{p−1} : n ∈ ℕ}, then C_2 is a positive constant.

Let ε_n^{(1)} = δ_n(1 − 2k)/(1 − δ_nk), ε_n^{(2)} = [1/(δ_n(1 − 2k))][C_1(1 − α_n + ‖a_n‖) + (ξ_n/(2 − ν_n))‖b_n − q_0‖^p + pδ_n⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ + 2ζ_nC_2²] and ε_n^{(3)} = [(1 − δ_n)ν_n/((2 − ν_n)(1 − δ_nk))]∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖).

Then

‖x_{n+1} − q_0‖^p ≤ (1 − ε_n^{(1)})‖x_n − q_0‖^p + ε_n^{(1)}ε_n^{(2)} − ε_n^{(3)}. (2.12)

Our next discussion will be divided into two cases.

Case 1. {‖x_n − q_0‖} is decreasing.

If {‖x_n − q_0‖} is decreasing, we know from (2.12) and assumptions (iv) and (v) that

0 ≤ ε_n^{(3)} ≤ ε_n^{(1)}(ε_n^{(2)} − ‖x_n − q_0‖^p) + (‖x_n − q_0‖^p − ‖x_{n+1} − q_0‖^p) → 0,

which ensures that ∑_{i=1}^∞ ω_i^{(2)}φ_p(‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖) → 0, as n → +∞. Then, from the property of φ_p, we know that ∑_{i=1}^∞ ω_i^{(2)}‖(I − J_{r_{n,i}}^{A_i})(w_n − r_{n,i}C_iw_n) − (I − J_{r_{n,i}}^{A_i})(q_0 − r_{n,i}C_iq_0)‖ → 0, as n → +∞.

Note that lim_{n→∞}∑_{i=1}^∞ r_{n,i} = 0; then

‖w_n − G_nw_n‖ ≤ ∑_{i=1}^∞ ω_i^{(2)}‖(I − J_{r_{n,i}}^{A_i})(I − r_{n,i}C_i)w_n − (I − J_{r_{n,i}}^{A_i})(I − r_{n,i}C_i)q_0‖ + ∑_{i=1}^∞ ω_i^{(2)}r_{n,i}‖C_iw_n‖ + ∑_{i=1}^∞ ω_i^{(2)}r_{n,i}‖C_iq_0‖ → 0,

as n → ∞.

Now our purpose is to show that limsup_{n→∞} ε_n^{(2)} ≤ 0, which reduces to showing that limsup_{n→∞}⟨f(q_0) − q_0, J_p(x_{n+1} − q_0)⟩ ≤ 0.

Let u_t^n be the same as that in Step 3. Since ‖u_t^n‖ ≤ ‖u_t^n − q_0‖ + ‖q_0‖, {u_t^n} is bounded as t → 0. Using Lemma 1.5 again, we have

‖u_t^n − w_n‖^p = ‖u_t^n − G_nw_n + G_nw_n − w_n‖^p ≤ ‖u_t^n − G_nw_n‖^p + p⟨G_nw_n − w_n, J_p(u_t^n − w_n)⟩ = ‖tf(u_t^n) + (1 − t)(I − κ_tS)(G_nQ_Du_t^n) − G_nw_n‖^p + p⟨G_nw_n − w_n, J_p(u_t^n − w_n)⟩ ≤ ‖u_t^n − w_n‖^p + pt⟨f(u_t^n) − G_nQ_Du_t^n − (κ_t/t)(1 − t)S(G_nQ_Du_t^n), J_p(u_t^n − G_nw_n)⟩ + p⟨G_nw_n − w_n, J_p(u_t^n − w_n)⟩,

which implies that

t⟨G_nQ_Du_t^n − f(u_t^n) + (κ_t/t)(1 − t)S(G_nQ_Du_t^n), J_p(u_t^n − G_nw_n)⟩ ≤ ‖G_nw_n − w_n‖‖u_t^n − w_n‖^{p−1}.

So, lim_{t→0} limsup_{n→+∞}⟨G_nQ_Du_t^n − f(u_t^n) + (κ_t/t)(1 − t)S(G_nQ_Du_t^n), J_p(u_t^n − G_nw_n)⟩ ≤ 0.

Since u_t^n → q_0, we have G_nQ_Du_t^n → G_nQ_Dq_0 = q_0, as t → 0.

Noticing that

⟨q_0 − f(q_0), J_p(q_0 − G_nw_n)⟩ = ⟨q_0 − f(q_0), J_p(q_0 − G_nw_n) − J_p(u_t^n − G_nw_n)⟩ + ⟨q_0 − f(q_0), J_p(u_t^n − G_nw_n)⟩ = ⟨q_0 − f(q_0), J_p(q_0 − G_nw_n) − J_p(u_t^n − G_nw_n)⟩ + ⟨q_0 − f(q_0) − G_nQ_Du_t^n + f(u_t^n) − (κ_t/t)(1 − t)S(G_nQ_Du_t^n), J_p(u_t^n − G_nw_n)⟩ + ⟨G_nQ_Du_t^n − f(u_t^n) + (κ_t/t)(1 − t)S(G_nQ_Du_t^n), J_p(u_t^n − G_nw_n)⟩,

we have limsup_{n→+∞}⟨q_0 − f(q_0), J_p(q_0 − G_nw_n)⟩ ≤ 0.

From assumptions (iv) and (v) and Step 2, we know that x_{n+1} − G_nw_n → 0; since J_p is norm-to-norm uniformly continuous on bounded subsets of X (Lemma 1.9), it follows that limsup_{n→+∞}⟨q_0 − f(q_0), J_p(q_0 − x_{n+1})⟩ ≤ 0. Thus limsup_{n→∞} ε_n^{(2)} ≤ 0.

Employing (2.12) again, we have

‖x_n − q_0‖^p ≤ (‖x_n − q_0‖^p − ‖x_{n+1} − q_0‖^p)/ε_n^{(1)} + ε_n^{(2)}.

Assumption (iv) implies that liminf_{n→∞}(‖x_n − q_0‖^p − ‖x_{n+1} − q_0‖^p)/ε_n^{(1)} = 0. Then

lim_{n→∞}‖x_n − q_0‖^p ≤ liminf_{n→∞}(‖x_n − q_0‖^p − ‖x_{n+1} − q_0‖^p)/ε_n^{(1)} + limsup_{n→∞} ε_n^{(2)} ≤ 0.

Then the result that xnq0 follows.

Case 2. If {‖x_n − q_0‖} is not eventually decreasing, then we can find a subsequence {‖x_{n_k} − q_0‖} such that ‖x_{n_k} − q_0‖ ≤ ‖x_{n_k+1} − q_0‖ for all k ≥ 1. By Lemma 1.7, we can define an integer sequence {τ(n)} such that max{‖x_{τ(n)} − q_0‖, ‖x_n − q_0‖} ≤ ‖x_{τ(n)+1} − q_0‖ for all n > n_1. This enables us to deduce (similarly to Case 1) that

0 ≤ ε_{τ(n)}^{(3)} ≤ ε_{τ(n)}^{(1)}(ε_{τ(n)}^{(2)} − ‖x_{τ(n)} − q_0‖^p) + (‖x_{τ(n)} − q_0‖^p − ‖x_{τ(n)+1} − q_0‖^p) → 0,

and then, copying Case 1, we have lim_{n→∞}‖x_{τ(n)} − q_0‖ = 0. Thus 0 ≤ ‖x_n − q_0‖ ≤ ‖x_{τ(n)+1} − q_0‖ → 0, as n → ∞.

This completes the proof. □

Remark 2.3

Theorem 2.2 is reasonable if, for instance, we suppose X = D = (−∞, +∞) and take f(x) = x/4, A_ix = C_ix = x/2^i, W_ix = x/2^{i+1}, θ_i = 2^i, ω_i^{(1)} = ω_i^{(2)} = 1/2^i, α_n = 1 − 1/n², β_n = 1/n³, ϑ_n = δ_n = 1/n, ξ_n = ζ_n = a_n = b_n = 1/n², γ_i = 1/2^{i+2}, μ_i = (2^{i+1} − 3/2 + 1/2^{i+1})/(2^{i+1} − 1), r_{n,i} = 1/2^{n+i} for n ∈ ℕ and i ∈ ℕ.
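As a numerical illustration of scheme (2.7) with these concrete data (this experiment is ours, not the authors'): the common zero set is ∩_{i=1}^∞ N(A_i + C_i) = {0}, so the iterates should approach q_0 = 0. In the sketch below the infinite sums are truncated at I_MAX, ν_n = 1/n² is our own choice (the remark does not list ν_n), and we start at n = 3 so that ϑ_n + ν_n + ξ_n ≤ 1.

```python
import numpy as np

I_MAX = 30                                        # truncation level for the infinite sums (our choice)

def G(x, n):
    """G_n x = sum_i omega_i^(2) * J_{r_{n,i}}^{A_i}((I - r_{n,i} C_i) x), truncated at I_MAX."""
    total = 0.0
    for i in range(1, I_MAX + 1):
        r = 2.0 ** -(n + i)                       # r_{n,i} = 1 / 2**(n+i)
        forward = (1.0 - r / 2**i) * x            # (I - r_{n,i} C_i) x with C_i x = x / 2**i
        total += 2.0 ** -i * forward / (1.0 + r / 2**i)   # resolvent of A_i x = x / 2**i, weight 1/2**i
    return total

def S(x):
    """S x = sum_i omega_i^(1) * W_i x with W_i x = x / 2**(i+1), truncated at I_MAX."""
    return sum(2.0 ** -i * x / 2 ** (i + 1) for i in range(1, I_MAX + 1))

f = lambda x: x / 4.0
x = 10.0                                          # starting point x_3 (arbitrary)
for n in range(3, 200):
    alpha, beta = 1 - 1 / n**2, 1 / n**3
    theta_n, nu, xi = 1 / n, 1 / n**2, 1 / n**2   # nu_n is our own choice
    delta, zeta = 1 / n, 1 / n**2
    a_n = b_n = 1 / n**2                          # error terms from Remark 2.3
    u = alpha * x + beta * a_n                    # u_n; Q_D is the identity since D = X = R
    v = u                                         # solve the implicit v_n equation by an inner loop:
    for _ in range(100):                          # the inner map is a (nu/2)-contraction
        v_new = theta_n * u + nu * G((u + v) / 2, n) + xi * b_n
        done = abs(v_new - v) < 1e-14
        v = v_new
        if done:
            break
    g = G((u + v) / 2, n)
    x = delta * f(x) + (1 - delta) * (g - zeta * S(g))   # x_{n+1} of (2.7)
print(x)   # tends to q_0 = 0
```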

Remark 2.4

Our differences from the main references are:

  • (i)

the normalized duality mapping J : X → X* is no longer required to be weakly sequentially continuous at zero, as it is in [9];

  • (ii)

the parameter {r_{n,i}} in the resolvent J_{r_{n,i}}^{A_i} does not need to satisfy the condition '∑_{n=1}^∞ |r_{n+1,i} − r_{n,i}| < +∞ and r_{n,i} ≥ ε > 0 for i ∈ ℕ and some ε > 0' as in [3] or [9];

  • (iii)

    Lemma 1.7 plays an important role in the proof of strong convergence of the iterative sequence, which leads to different restrictions on the parameters and different proof techniques compared to the already existing similar works.

Applications

Integro-differential systems

In Section 3.1, we shall investigate the following nonlinear integro-differential systems involving the generalized pi-Laplacian, which have been studied in [3]:

{ ∂u^{(i)}(x,t)/∂t − div[(C(x,t) + |∇u^{(i)}|²)^{(p_i−2)/2}∇u^{(i)}] + ε|u^{(i)}|^{r_i−2}u^{(i)} + g(x, u^{(i)}, ∇u^{(i)}) + a(∂/∂t)∫_Ω u^{(i)} dx = f(x,t), (x,t) ∈ Ω × (0,T),
  −⟨ϑ, (C(x,t) + |∇u^{(i)}|²)^{(p_i−2)/2}∇u^{(i)}⟩ ∈ β_x(u^{(i)}), (x,t) ∈ Γ × (0,T),
  u^{(i)}(x,0) = u^{(i)}(x,T), x ∈ Ω, i ∈ ℕ, (3.1)

where Ω is a bounded conical domain of a Euclidean space ℝ^N (N ≥ 1), Γ is the boundary of Ω with Γ ∈ C¹, and ϑ denotes the exterior normal derivative to Γ. ⟨·,·⟩ and |·| denote the Euclidean inner product and the Euclidean norm in ℝ^N, respectively. T is a positive constant, ∇u^{(i)} = (∂u^{(i)}/∂x_1, ∂u^{(i)}/∂x_2, …, ∂u^{(i)}/∂x_N) and x = (x_1, x_2, …, x_N) ∈ Ω. β_x is the subdifferential of φ_x, where φ_x = φ(x, ·) : ℝ → ℝ for x ∈ Γ. a and ε are non-negative constants, 0 ≤ C(x,t) ∈ ∩_{i=1}^∞ V_i := ∩_{i=1}^∞ L^{p_i}(0,T; W^{1,p_i}(Ω)), f(x,t) ∈ ∩_{i=1}^∞ W_i := ∩_{i=1}^∞ L^{max{p_i,p_i′}}(0,T; L^{max{p_i,p_i′}}(Ω)) and g : Ω × ℝ^{N+1} → ℝ are given functions.

Just like [3], we need the following assumptions to discuss (3.1).

Assumption 1

{p_i}_{i=1}^∞ is a real number sequence with 2N/(N+1) < p_i < +∞, {θ_i}_{i=1}^∞ is any real number sequence in (0,1] and {r_i}_{i=1}^∞ is a real number sequence satisfying 2N/(N+1) < r_i ≤ min{p_i, p_i′} < +∞, where 1/p_i + 1/p_i′ = 1 and 1/r_i + 1/r_i′ = 1 for i ∈ ℕ.

Assumption 2

Green’s formula is available.

Assumption 3

For each x ∈ Γ, φ_x = φ(x, ·) : ℝ → ℝ is a proper, convex and lower-semicontinuous function with φ_x(0) = 0.

Assumption 4

0 ∈ β_x(0) and, for each t ∈ ℝ, the function x ∈ Γ ↦ (I + λβ_x)^{−1}(t) ∈ ℝ is measurable for λ > 0.

Assumption 5

Suppose that g : Ω × ℝ^{N+1} → ℝ satisfies the following conditions:

  1. Carathéodory's conditions;

  2. Growth condition:
    |g(x, r_1, …, r_{N+1})|^{max{p_i,p_i′}} ≤ |h_i(x,t)|^{p_i} + b_i|r_1|^{p_i},
    where (r_1, r_2, …, r_{N+1}) ∈ ℝ^{N+1}, h_i(x,t) ∈ W_i and b_i is a positive constant, for i ∈ ℕ;
  3. Monotone condition: g is monotone in the following sense:
    (g(x, r_1, …, r_{N+1}) − g(x, t_1, …, t_{N+1}))(r_1 − t_1) ≥ 0
    for all x ∈ Ω and (r_1, …, r_{N+1}), (t_1, …, t_{N+1}) ∈ ℝ^{N+1}.

Assumption 6

For i ∈ ℕ, let V_i′ denote the dual space of V_i. The norm of V_i, ‖·‖_{V_i}, is defined by

‖u(x,t)‖_{V_i} = (∫_0^T ‖u(x,t)‖^{p_i}_{W^{1,p_i}(Ω)} dt)^{1/p_i}, u(x,t) ∈ V_i.

Definition 3.1

[3]

For i ∈ ℕ, define the operator B_i : V_i → V_i′ by

⟨w, B_iu⟩ = ∫_0^T∫_Ω ⟨(C(x,t) + |∇u|²)^{(p_i−2)/2}∇u, ∇w⟩ dx dt + ε∫_0^T∫_Ω |u|^{r_i−2}uw dx dt

for u, w ∈ V_i.

Definition 3.2

[3]

For i ∈ ℕ, define the function Φ_i : V_i → ℝ by

Φ_i(u) = ∫_0^T∫_Γ φ_x(u|_Γ(x,t)) dΓ(x) dt

for u(x,t) ∈ V_i.

Definition 3.3

[3]

For i ∈ ℕ, define S_i : D(S_i) = {u(x,t) ∈ V_i : ∂u/∂t ∈ V_i′, u(x,0) = u(x,T)} → V_i′ by

S_iu = ∂u/∂t + a(∂/∂t)∫_Ω u dx.

Lemma 3.4

[3]

For i ∈ ℕ, define a mapping A_i : W_i → 2^{W_i} as follows:

D(A_i) = {u ∈ W_i | there exists an f ∈ W_i such that f ∈ B_iu + ∂Φ_i(u) + S_iu},

where ∂Φ_i : V_i → 2^{V_i′} is the subdifferential of Φ_i. For u ∈ D(A_i), we set A_iu = {f ∈ W_i | f ∈ B_iu + ∂Φ_i(u) + S_iu}. Then A_i : W_i → 2^{W_i} is m-accretive, where i ∈ ℕ.

Lemma 3.5

[3]

Define C_i : D(C_i) = L^{max{p_i,p_i′}}(0,T; W^{1,max{p_i,p_i′}}(Ω)) ⊂ W_i → W_i by

(C_iu)(x,t) = g(x, u, ∇u) − f(x,t)

for u(x,t) ∈ D(C_i), where f(x,t) is the same as that in (3.1) and i ∈ ℕ. Then C_i : D(C_i) ⊂ W_i → W_i is continuous and strongly accretive. If we further assume that g(x, r_1, …, r_{N+1}) ≡ r_1, then C_i is θ_i-inversely strongly accretive, where i ∈ ℕ.

Lemma 3.6

[3]

For f(x,t) ∈ ∩_{i=1}^∞ W_i, the integro-differential systems (3.1) have a unique solution u^{(i)}(x,t) ∈ W_i for i ∈ ℕ.

Lemma 3.7

[3]

If ε ≡ 0, g(x, r_1, …, r_{N+1}) ≡ r_1 and f(x,t) ≡ k, where k is a constant, then u(x,t) ≡ k is the unique solution of the integro-differential systems (3.1). Moreover, {u(x,t) ∈ ∩_{i=1}^∞ W_i | u(x,t) ≡ k satisfies (3.1)} = ∩_{i=1}^∞ N(A_i + C_i).

Remark 3.8

[3]

Set p := inf_{i∈ℕ}(min{p_i, p_i′}) and q := sup_{i∈ℕ}(max{p_i, p_i′}).

Let X := L^{min{p,p′}}(0,T; L^{min{p,p′}}(Ω)), where 1/p + 1/p′ = 1.

Let D := L^{max{q,q′}}(0,T; W^{1,max{q,q′}}(Ω)), where 1/q + 1/q′ = 1.

Then X = L^p(0,T; L^p(Ω)), D = L^q(0,T; W^{1,q}(Ω)) and D ⊂ W_i ⊂ X for i ∈ ℕ.

Theorem 3.9

Let D and X be the same as those in Remark 3.8. Suppose A_i and C_i are the same as those in Lemmas 3.4 and 3.5, respectively. Let f : X → X be a fixed contraction with coefficient k ∈ (0,1) and W_i : X → X be μ_i-strictly pseudo-contractive mappings and γ_i-strongly accretive mappings with μ_i + γ_i > 1 for i ∈ ℕ. Suppose that {ω_i^{(1)}}, {ω_i^{(2)}}, {α_n}, {β_n}, {ϑ_n}, {ν_n}, {ξ_n}, {δ_n}, {ζ_n}, {r_{n,i}}, {a_n} ⊂ X and {b_n} ⊂ D satisfy the same conditions as those in Theorem 2.2, where n ∈ ℕ and i ∈ ℕ. Let {x_n} be generated by the following iterative algorithm:

{ x_1 ∈ D,
  u_n = Q_D(α_nx_n + β_na_n),
  v_n = ϑ_nu_n + ν_n∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i)((u_n + v_n)/2) + ξ_nb_n,
  x_{n+1} = δ_nf(x_n) + (1 − δ_n)(I − ζ_n∑_{i=1}^∞ ω_i^{(1)}W_i)∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{A_i}(I − r_{n,i}C_i)((u_n + v_n)/2), n ∈ ℕ. (3.2)

If, in the integro-differential systems (3.1), ε ≡ 0, g(x, r_1, …, r_{N+1}) ≡ r_1 and f(x,t) ≡ k, then, under the assumptions of Theorem 2.2, the iterative sequence x_n → q_0 ∈ ∩_{i=1}^∞ N(A_i + C_i), which is the unique solution of the integro-differential systems (3.1) and which satisfies the following variational inequality: for y ∈ ∩_{i=1}^∞ N(A_i + C_i),

⟨(I − f)q_0(x,t), J(q_0(x,t) − y)⟩ ≤ 0.

Convex minimization problems

Let H be a real Hilbert space. Suppose h_i : H → (−∞, +∞] are proper, convex, lower-semicontinuous and non-smooth functions [2], and suppose g_i : H → (−∞, +∞) are convex and smooth functions for i ∈ ℕ. We use ∇g_i to denote the gradient of g_i and ∂h_i the subdifferential of h_i for i ∈ ℕ.

The convex minimization problems are to find x* ∈ H such that

h_i(x*) + g_i(x*) ≤ h_i(x) + g_i(x), i ∈ ℕ, (3.3)

for all x ∈ H.

By Fermat's rule, (3.3) is equivalent to finding x* ∈ H such that

0 ∈ ∂h_i(x*) + ∇g_i(x*), i ∈ ℕ. (3.4)
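For a single pair (h, g), the forward-backward iteration underlying the scheme of Theorem 3.10 below is the classical proximal gradient method x_{k+1} = (I + r∂h)^{−1}(x_k − r∇g(x_k)). The sketch below (ours, with hypothetical data) runs it for h = λ‖·‖₁ and g(x) = ½‖Mx − b‖², where the resolvent of ∂h is soft-thresholding.

```python
import numpy as np

# Proximal gradient (forward-backward) for one pair h, g from (3.4), with hypothetical data:
#   g(x) = 0.5 * ||M x - b||^2  (smooth; its gradient is Lipschitz with constant ||M^T M||),
#   h(x) = lam * ||x||_1        (proper, convex, lower-semicontinuous, non-smooth).
rng = np.random.default_rng(3)
M = rng.normal(size=(40, 10))
b = rng.normal(size=40)
lam = 0.5

grad_g = lambda x: M.T @ (M @ x - b)
prox_h = lambda y, r: np.sign(y) * np.maximum(np.abs(y) - r * lam, 0.0)   # (I + r*dh)^{-1}

r = 1.0 / np.linalg.norm(M.T @ M, 2)      # step size within the admissible range
x = np.zeros(10)
for k in range(2000):
    x = prox_h(x - r * grad_g(x), r)      # x_{k+1} = (I + r dh)^{-1}(x_k - r grad g(x_k))

# x now approximately satisfies 0 in dh(x) + grad g(x), i.e. it minimizes h + g.
print(np.round(x, 4))
```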

Theorem 3.10

Let H be a real Hilbert space and D be a nonempty closed convex sunny non-expansive retract of H. Let Q_D be the sunny non-expansive retraction of H onto D. Let f : H → H be a contraction with coefficient k ∈ (0,1). Let h_i : H → (−∞, +∞] be proper, convex, lower-semicontinuous and non-smooth functions and g_i : H → (−∞, +∞) be convex and smooth functions for i ∈ ℕ. Let W_i : H → H be μ_i-strictly pseudo-contractive mappings and γ_i-strongly accretive mappings with μ_i + γ_i > 1 for i ∈ ℕ. Suppose {ω_i^{(1)}}, {ω_i^{(2)}}, {α_n}, {β_n}, {ϑ_n}, {ν_n}, {ξ_n}, {δ_n}, {ζ_n}, {r_{n,i}} ⊂ (0,+∞), {a_n} ⊂ H and {b_n} ⊂ D satisfy the same conditions as those in Theorem 2.2, where n ∈ ℕ and i ∈ ℕ. Let {x_n} be generated by the following iterative algorithm:

{ x_1 ∈ D,
  u_n = Q_D(α_nx_n + β_na_n),
  v_n = ϑ_nu_n + ν_n∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{∂h_i}(I − r_{n,i}∇g_i)((u_n + v_n)/2) + ξ_nb_n,
  x_{n+1} = δ_nf(x_n) + (1 − δ_n)(I − ζ_n∑_{i=1}^∞ ω_i^{(1)}W_i)∑_{i=1}^∞ ω_i^{(2)}J_{r_{n,i}}^{∂h_i}(I − r_{n,i}∇g_i)((u_n + v_n)/2), n ∈ ℕ. (3.5)

If, further, each ∇g_i is (1/θ_i)-Lipschitz continuous and h_i + g_i attains a minimizer, then {x_n} converges strongly to a common minimizer of the functions h_i + g_i, i ∈ ℕ.

Proof

It follows from [2] that ∂h_i is m-accretive. By [19], since ∇g_i is (1/θ_i)-Lipschitz continuous, ∇g_i is θ_i-inversely strongly accretive. Thus Theorem 2.2 ensures the result.

This completes the proof. □

Acknowledgements

Supported by the National Natural Science Foundation of China (11071053), Natural Science Foundation of Hebei Province (A2014207010), Key Project of Science and Research of Hebei Educational Department (ZD2016024), Key Project of Science and Research of Hebei University of Economics and Business (2016KYZ07), Youth Project of Science and Research of Hebei Educational Department (QN2017328) and Science and Technology Foundation of Agricultural University of Hebei (LG201612).

Authors’ contributions

All authors contributed equally to the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Li Wei, Liling Duan, Ravi P Agarwal, Rui Chen and Yaqin Zheng contributed equally to this work.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Barbu V. Nonlinear Semigroups and Differential Equations in Banach Spaces. Leyden: Noordhoff; 1976. [Google Scholar]
  • 2.Agarwal RP, O’Regan D, Sahu DR. Fixed Point Theory for Lipschitz-Type Mappings with Applications. Berlin: Springer; 2008. [Google Scholar]
  • 3.Wei L, Agarwal RP. A new iterative algorithm for the sum of infinite m-accretive mappings and infinite μi-inversely strongly accretive mappings and its applications to integro-differential systems. Fixed Point Theory Appl. 2016;2016:7. doi: 10.1186/s13663-015-0495-y. [DOI] [Google Scholar]
  • 4.Browder FE, Petryshyn WV. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967;20:197–228. doi: 10.1016/0022-247X(67)90085-6. [DOI] [Google Scholar]
  • 5.Takahashi W. Proximal point algorithms and four resolvents of nonlinear operators of monotone type in Banach spaces. Taiwan. J. Math. 2008;12(8):1883–1910. doi: 10.11650/twjm/1500405125. [DOI] [Google Scholar]
  • 6.Lions PL, Mercier B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979;16:964–979. doi: 10.1137/0716071. [DOI] [Google Scholar]
  • 7.Combettes PL, Wajs VR. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005;4:1168–1200. doi: 10.1137/050626090. [DOI] [Google Scholar]
  • 8.Tseng P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 1998;38(2):431–446. doi: 10.1137/S0363012998338806. [DOI] [Google Scholar]
  • 9.Wei L, Duan LL. A new iterative algorithm for the sum of two different types of finitely many accretive operators in Banach space and its connection with capillarity equation. Fixed Point Theory Appl. 2015;2015:25. doi: 10.1186/s13663-015-0269-6. [DOI] [Google Scholar]
  • 10.Alghamdi MA, Alghamdi MA, Shahzad N, Xu H. The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014;2014:96. doi: 10.1186/1687-1812-2014-96. [DOI] [Google Scholar]
  • 11.Ceng LC, Ansari QH, Schaible S, Yao JC. Hybrid viscosity approximation method for zeros of m-accretive operators in Banach spaces. Numer. Funct. Anal. Optim. 2012;33(2):142–165. doi: 10.1080/01630563.2011.594197. [DOI] [Google Scholar]
  • 12.Lopez G, Martin-Marquez V, Wang FH, Xu HK. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012;2012:109236. doi: 10.1155/2012/109236. [DOI] [Google Scholar]
  • 13.Ceng LC, Ansari QH, Yao YC. Mann-type steepest descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces. Numer. Funct. Anal. Optim. 2008;29(9-10):987–1033. doi: 10.1080/01630560802418391. [DOI] [Google Scholar]
  • 14.Bruck RE. Properties of fixed-point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 1973;179:251–262. doi: 10.1090/S0002-9947-1973-0324491-8. [DOI] [Google Scholar]
  • 15.Aoyama K, Kimura Y, Takahashi W, Toyoda M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007;8:471–489. [Google Scholar]
  • 16.Mainge PE. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008;66:899–912. doi: 10.1007/s11228-008-0102-z. [DOI] [Google Scholar]
  • 17.Mitrinovic DS. Analytic Inequalities. New York: Springer; 1970. [Google Scholar]
  • 18.Cioranescu I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Dordrecht: Kluwer Academic; 1990. [Google Scholar]
  • 19.Baillon JB, Haddad G. Quelques proprietes des operateurs angle-bornes et cycliquement monotones. Isr. J. Math. 1977;26:137–150. doi: 10.1007/BF03007664. [DOI] [Google Scholar]
