J Inequal Appl. 2018 Mar 27;2018(1):64. doi: 10.1186/s13660-018-1657-3

New construction and proof techniques of projection algorithm for countable maximal monotone mappings and weakly relatively non-expansive mappings in a Banach space

Li Wei 1, Ravi P. Agarwal 2
PMCID: PMC5869991  PMID: 29606841

Abstract

In a real uniformly convex and uniformly smooth Banach space, some new monotone projection iterative algorithms for countable families of maximal monotone mappings and of weakly relatively non-expansive mappings are presented. Under mild assumptions, some strong convergence theorems are obtained. Compared to the corresponding previous work, the new projection sets involve the metric projection instead of the generalized projection, which requires evaluating a Lyapunov functional; this may reduce the computational labor. Meanwhile, a new technique for identifying the limit of the iterative sequence is employed, based on the relationship between the monotone projection sets and their projections. To check the effectiveness of the new iterative algorithms, a specific iterative formula for a concrete example is derived and a computational experiment is carried out with a Visual Basic 6 program. Finally, an application of the new algorithms to a minimization problem is given.

Keywords: Maximal monotone mapping, Weakly relatively non-expansive mapping, Projection, Limit of a sequence of sets, Uniformly convex and uniformly smooth Banach space

Introduction and preliminaries

Let $E$ be a real Banach space with dual space $E^{*}$. Suppose that $C$ is a nonempty closed and convex subset of $E$. The symbol $\langle \cdot , \cdot \rangle$ denotes the generalized duality pairing between $E$ and $E^{*}$. The symbols “→” and “⇀” denote strong and weak convergence, respectively, either in $E$ or in $E^{*}$.

A Banach space $E$ is said to be strictly convex [1] if, for $x, y \in E$ which are linearly independent,

$$\|x+y\| < \|x\| + \|y\|.$$

The above inequality is equivalent to the following: if $\|x\| = \|y\| = 1$ and $x \neq y$, then

$$\Big\| \frac{x+y}{2} \Big\| < 1.$$

A Banach space $E$ is said to be uniformly convex [1] if, for any two sequences $\{x_n\}$ and $\{y_n\}$ in $E$ such that $\|x_n\| = \|y_n\| = 1$ and $\lim_{n \to \infty} \|x_n + y_n\| = 2$, it follows that $\lim_{n \to \infty} \|x_n - y_n\| = 0$.

If E is uniformly convex, then it is strictly convex.

The function $\rho_E : [0, +\infty) \to [0, +\infty)$ defined by

$$\rho_E(t) = \sup \Big\{ \frac{\|x+y\| + \|x-y\|}{2} - 1 : x, y \in E, \|x\| = 1, \|y\| \le t \Big\}$$

is called the modulus of smoothness of $E$ [2].

A Banach space $E$ is said to be uniformly smooth [2] if $\frac{\rho_E(t)}{t} \to 0$ as $t \to 0$.

The Banach space $E$ is uniformly smooth if and only if $E^{*}$ is uniformly convex [2].

We say that $E$ has Property (H) if every sequence $\{x_n\} \subset E$ which converges weakly to $x \in E$ and satisfies $\|x_n\| \to \|x\|$ as $n \to \infty$ necessarily converges to $x$ in norm.

If E is uniformly convex and uniformly smooth, then E has Property (H).

With each $x \in E$, we associate the set

$$J(x) = \{ f \in E^{*} : \langle x, f \rangle = \|x\|^{2} = \|f\|^{2} \}, \quad x \in E.$$

The multi-valued mapping $J : E \to 2^{E^{*}}$ is called the normalized duality mapping [1]. We now list some elementary properties of $J$.

Lemma 1.1

([1, 2])

  1. If $E$ is a real reflexive and smooth Banach space, then $J$ is single-valued;

  2. if $E$ is reflexive, then $J$ is surjective;

  3. if $E$ is uniformly smooth and uniformly convex, then $J^{-1}$ is the normalized duality mapping from $E^{*}$ into $E$; moreover, both $J$ and $J^{-1}$ are uniformly continuous on each bounded subset of $E$ or $E^{*}$, respectively;

  4. for $x \in E$ and $k \in (-\infty, +\infty)$, $J(kx) = kJ(x)$.

For a nonlinear mapping $U$, we use $F(U)$ and $N(U)$ to denote its fixed point set and null point set, respectively; that is, $F(U) = \{x \in D(U) : Ux = x\}$ and $N(U) = \{x \in D(U) : Ux = 0\}$.

Definition 1.2

([3])

A mapping $T \subset E \times E^{*}$ is said to be monotone if, for $y_i \in Tx_i$, $i = 1, 2$, we have $\langle x_1 - x_2, y_1 - y_2 \rangle \ge 0$. A monotone mapping $T$ is called maximal monotone if $R(J + \theta T) = E^{*}$ for every $\theta > 0$.

Definition 1.3

([4])

The Lyapunov functional $\varphi : E \times E \to [0, +\infty)$ is defined as follows:

$$\varphi(x, y) = \|x\|^{2} - 2\langle x, j(y) \rangle + \|y\|^{2}, \quad x, y \in E, \ j(y) \in J(y).$$
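In a Hilbert space the normalized duality mapping $J$ is the identity, so the Lyapunov functional collapses to $\varphi(x, y) = \|x - y\|^{2}$. A quick numerical sanity check of this identity (our sketch, not part of the original material):

```python
def phi(x, y):
    """Lyapunov functional in a Hilbert space, where J is the identity:
    phi(x, y) = ||x||^2 - 2<x, y> + ||y||^2."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    return dot(x, x) - 2 * dot(x, y) + dot(y, y)

x = [1.0, -2.0, 0.5]
y = [0.0, 1.0, 3.0]
dist_sq = sum((a - b) ** 2 for a, b in zip(x, y))
# In a Hilbert space phi(x, y) equals the squared distance ||x - y||^2.
assert abs(phi(x, y) - dist_sq) < 1e-12
```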

Definition 1.4

([5])

Let $B : C \to C$ be a mapping. Then:

  1. an element $p \in C$ is said to be an asymptotic fixed point of $B$ if there exists a sequence $\{x_n\}$ in $C$ which converges weakly to $p$ such that $\|x_n - Bx_n\| \to 0$ as $n \to \infty$. The set of asymptotic fixed points of $B$ is denoted by $\hat{F}(B)$;

  2. $B : C \to C$ is said to be strongly relatively non-expansive if $\hat{F}(B) = F(B)$ and $\varphi(p, Bx) \le \varphi(p, x)$ for $x \in C$ and $p \in F(B)$;

  3. an element $p \in C$ is said to be a strong asymptotic fixed point of $B$ if there exists a sequence $\{x_n\}$ in $C$ which converges strongly to $p$ such that $\|x_n - Bx_n\| \to 0$ as $n \to \infty$. The set of strong asymptotic fixed points of $B$ is denoted by $\tilde{F}(B)$;

  4. $B : C \to C$ is said to be weakly relatively non-expansive if $\tilde{F}(B) = F(B)$ and $\varphi(p, Bx) \le \varphi(p, x)$ for $x \in C$ and $p \in F(B)$.

Remark 1.5

It is easy to see that every strongly relatively non-expansive mapping is weakly relatively non-expansive. However, an example in [6] shows that a weakly relatively non-expansive mapping need not be strongly relatively non-expansive.

Lemma 1.6

([5])

Let $E$ be a uniformly convex and uniformly smooth Banach space and $C$ be a nonempty closed and convex subset of $E$. If $B : C \to C$ is weakly relatively non-expansive, then $F(B)$ is a closed and convex subset of $E$.

Lemma 1.7

([3])

Let $T \subset E \times E^{*}$ be maximal monotone. Then:

  1. $N(T)$ is a closed and convex subset of $E$;

  2. if $x_n \rightharpoonup x$ and $y_n \in Tx_n$ with $y_n \to y$, or $x_n \to x$ and $y_n \in Tx_n$ with $y_n \rightharpoonup y$, then $x \in D(T)$ and $y \in Tx$.

Definition 1.8

([4])

  1. If $E$ is a reflexive and strictly convex Banach space and $C$ is a nonempty closed and convex subset of $E$, then for each $x \in E$ there exists a unique element $v \in C$ such that $\|x - v\| = \inf \{\|x - y\| : y \in C\}$. This element $v$ is denoted by $P_C x$, and $P_C$ is called the metric projection of $E$ onto $C$.

  2. Let $E$ be a real reflexive, strictly convex, and smooth Banach space and $C$ be a nonempty closed and convex subset of $E$. Then for each $x \in E$ there exists a unique element $x_0 \in C$ satisfying $\varphi(x_0, x) = \inf \{\varphi(y, x) : y \in C\}$. Define $\Pi_C : E \to C$ by $\Pi_C x = x_0$; then $\Pi_C$ is called the generalized projection from $E$ onto $C$.

It is easy to see that ΠC is coincident with PC in a Hilbert space.
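In a Hilbert space, where $\Pi_C = P_C$, the metric projection has a closed form for simple sets $C$. The following sketch (ours; the set choices are illustrative) computes $P_C$ in $\mathbb{R}^n$ for a closed ball and a half-space:

```python
import math

def project_onto_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = [xi - ci for xi, ci in zip(x, center)]
    norm = math.sqrt(sum(di * di for di in d))
    if norm <= radius:
        return list(x)          # x already lies in the ball
    scale = radius / norm
    return [ci + scale * di for ci, di in zip(center, d)]

def project_onto_halfspace(x, a, b):
    """Metric projection of x onto {z : <a, z> <= b}, assuming a != 0."""
    ax = sum(ai * xi for ai, xi in zip(a, x))
    if ax <= b:
        return list(x)          # x already satisfies the constraint
    t = (ax - b) / sum(ai * ai for ai in a)
    return [xi - t * ai for xi, ai in zip(x, a)]

print(project_onto_ball([3.0, 4.0], [0.0, 0.0], 1.0))       # radial projection
print(project_onto_halfspace([1.0, 1.0], [1.0, 0.0], 0.0))  # shift along a
```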

Maximal monotone mappings and weakly or strongly relatively non-expansive mappings are different types of important nonlinear mappings with practical background. Much work has been done on designing iterative algorithms to approximate either a null point of maximal monotone mappings or a fixed point of weakly or strongly relatively non-expansive mappings; see [5–10] and the references therein. It is a natural idea to construct iterative algorithms to approximate common solutions, i.e., points that are simultaneously null points of maximal monotone mappings and fixed points of weakly or strongly relatively non-expansive mappings, as in [11–15] and the references therein. We now list some closely related work.

In [12], Wei et al. presented the following iterative algorithms to approximate a common element of the set of null points of a maximal monotone mapping $T \subset E \times E^{*}$ and the set of fixed points of a strongly relatively non-expansive mapping $S : E \to E$, where $E$ is a real uniformly convex and uniformly smooth Banach space:

$$\begin{cases} x_1 \in E, \quad r_n > 0,\\ y_n = (J + r_n T)^{-1} J(x_n + e_n),\\ z_n = J^{-1}[\alpha_n J x_n + (1 - \alpha_n) J y_n],\\ u_n = J^{-1}[\beta_n J x_n + (1 - \beta_n) J S z_n],\\ H_n = \{z \in E : \varphi(z, z_n) \le \alpha_n \varphi(z, x_n) + (1 - \alpha_n) \varphi(z, x_n + e_n)\},\\ V_n = \{z \in E : \varphi(z, u_n) \le \beta_n \varphi(z, x_n) + (1 - \beta_n) \varphi(z, z_n)\},\\ W_n = \{z \in E : \langle z - x_n, J x_1 - J x_n \rangle \le 0\},\\ x_{n+1} = \Pi_{H_n \cap V_n \cap W_n}(x_1), \quad n \in \mathbb{N}, \end{cases} \tag{1.1}$$

$$\begin{cases} x_1 \in E, \quad r_n > 0,\\ y_n = (J + r_n T)^{-1} J(x_n + e_n),\\ z_n = J^{-1}[\alpha_n J x_1 + (1 - \alpha_n) J y_n],\\ u_n = J^{-1}[\beta_n J x_1 + (1 - \beta_n) J S z_n],\\ H_n = \{z \in E : \varphi(z, z_n) \le \alpha_n \varphi(z, x_1) + (1 - \alpha_n) \varphi(z, x_n + e_n)\},\\ V_n = \{z \in E : \varphi(z, u_n) \le \beta_n \varphi(z, x_1) + (1 - \beta_n) \varphi(z, z_n)\},\\ W_n = \{z \in E : \langle z - x_n, J x_1 - J x_n \rangle \le 0\},\\ x_{n+1} = \Pi_{H_n \cap V_n \cap W_n}(x_1), \quad n \in \mathbb{N}, \end{cases} \tag{1.2}$$

and

$$\begin{cases} x_1 \in E, \quad r_n > 0,\\ y_n = (J + r_n T)^{-1} J(x_n + e_n),\\ z_n = J^{-1}[\alpha_n J x_n + (1 - \alpha_n) J y_n],\\ u_n = J^{-1}[\beta_n J x_n + (1 - \beta_n) J S z_n],\\ H_1 = \{z \in E : \varphi(z, z_1) \le \alpha_1 \varphi(z, x_1) + (1 - \alpha_1) \varphi(z, x_1 + e_1)\},\\ V_1 = \{z \in E : \varphi(z, u_1) \le \beta_1 \varphi(z, x_1) + (1 - \beta_1) \varphi(z, z_1)\},\\ W_1 = E,\\ H_n = \{z \in H_{n-1} \cap V_{n-1} \cap W_{n-1} : \varphi(z, z_n) \le \alpha_n \varphi(z, x_n) + (1 - \alpha_n) \varphi(z, x_n + e_n)\},\\ V_n = \{z \in H_{n-1} \cap V_{n-1} \cap W_{n-1} : \varphi(z, u_n) \le \beta_n \varphi(z, x_n) + (1 - \beta_n) \varphi(z, z_n)\},\\ W_n = \{z \in H_{n-1} \cap V_{n-1} \cap W_{n-1} : \langle z - x_n, J x_1 - J x_n \rangle \le 0\},\\ x_{n+1} = \Pi_{H_n \cap V_n \cap W_n}(x_1), \quad n \in \mathbb{N}. \end{cases} \tag{1.3}$$

Under some mild assumptions, the sequence $\{x_n\}$ generated by (1.1), (1.2), or (1.3) is proved to be strongly convergent to $\Pi_{N(T) \cap F(S)}(x_1)$. Compared to the projective iterative algorithms (1.1) and (1.2), iterative algorithm (1.3) is called a monotone projection method since the projection sets $H_n$, $V_n$, and $W_n$ are all monotone in the sense that $H_{n+1} \subset H_n$, $V_{n+1} \subset V_n$, and $W_{n+1} \subset W_n$ for $n \in \mathbb{N}$. Theoretically, the monotone projection method reduces the computational task.

In [13], Klin-eam et al. presented the following iterative algorithm to approximate a common element of the set of null points of a maximal monotone mapping $A \subset E \times E^{*}$ and the sets of fixed points of two strongly relatively non-expansive mappings $S, T : C \to C$, where $C$ is a nonempty closed and convex subset of a real uniformly convex and uniformly smooth Banach space $E$:

$$\begin{cases} u_n = J^{-1}[\alpha_n J x_n + (1 - \alpha_n) J T z_n],\\ z_n = J^{-1}[\beta_n J x_n + (1 - \beta_n) J S (J + r_n A)^{-1} J x_n],\\ H_n = \{z \in C : \varphi(z, u_n) \le \varphi(z, x_n)\},\\ V_n = \{z \in C : \langle z - x_n, J x_1 - J x_n \rangle \le 0\},\\ x_{n+1} = \Pi_{H_n \cap V_n}(x_1), \quad n \in \mathbb{N}. \end{cases} \tag{1.4}$$

Under some assumptions, $\{x_n\}$ generated by (1.4) is proved to be strongly convergent to $\Pi_{N(A) \cap F(S) \cap F(T)}(x_1)$.

In [14], Wei et al. extended the topic to the case of finitely many maximal monotone mappings $\{T_i\}_{i=1}^{m_1}$ and finitely many strongly relatively non-expansive mappings $\{S_j\}_{j=1}^{m_2}$. They constructed the following two iterative algorithms in a real uniformly convex and uniformly smooth Banach space $E$:

$$\begin{cases} x_1 \in E, \quad r > 0,\\ y_n = J^{-1}\Big[\beta_n J x_n + \sum_{i=1}^{m_1} \beta_{n,i} J (J + r T_i)^{-1} J x_n\Big],\\ x_{n+1} = J^{-1}\Big[\alpha_n J x_n + \sum_{j=1}^{m_2} \alpha_{n,j} J S_j y_n\Big], \quad n \in \mathbb{N}, \end{cases} \tag{1.5}$$

and

$$\begin{cases} x_1 \in E, \quad r > 0,\\ y_n = J^{-1}\big[\beta_n J x_n + (1 - \beta_n) J (J + r T_1)^{-1} J (J + r T_2)^{-1} \cdots J (J + r T_{m_1})^{-1} J x_n\big],\\ x_{n+1} = J^{-1}\big[\alpha_n J x_n + (1 - \alpha_n) J S_1 S_2 \cdots S_{m_2} y_n\big], \quad n \in \mathbb{N}. \end{cases} \tag{1.6}$$

Under some assumptions, $\{x_n\}$ generated by (1.5) or (1.6) is proved to be weakly convergent to $v = \lim_{n \to \infty} \Pi_{(\bigcap_{i=1}^{m_1} N(T_i)) \cap (\bigcap_{j=1}^{m_2} F(S_j))}(x_n)$.

Inspired by the previous work, in Sect. 2.1 we construct some new iterative algorithms to approximate a common element of the sets of null points of countably many maximal monotone mappings and the sets of fixed points of countably many weakly relatively non-expansive mappings. New proof techniques are introduced, the restrictions imposed are mild, and computational errors are taken into account. In Sect. 2.2, an example is presented and a specific iterative formula is derived; computational experiments showing the effectiveness of the new abstract iterative algorithms are conducted. In Sect. 2.3, an application to a minimization problem is demonstrated.

The following preliminaries are also needed in our paper.

Definition 1.9

([16])

Let {Cn} be a sequence of nonempty closed and convex subsets of E, then

  1. $s\text{-}\liminf C_n$, called the strong lower limit, is defined as the set of all $x \in E$ for which there exist $x_n \in C_n$, for almost all $n$, such that $x_n \to x$ in norm as $n \to \infty$;

  2. $w\text{-}\limsup C_n$, called the weak upper limit, is defined as the set of all $x \in E$ for which there exist a subsequence $\{C_{n_k}\}$ of $\{C_n\}$ and $x_{n_k} \in C_{n_k}$ for every $n_k$ such that $x_{n_k} \rightharpoonup x$ as $n_k \to \infty$;

  3. if $s\text{-}\liminf C_n = w\text{-}\limsup C_n$, then the common value is denoted by $\lim C_n$.

Lemma 1.10

([16])

Let $\{C_n\}$ be a decreasing sequence of closed and convex subsets of $E$, i.e., $C_n \supset C_m$ if $n \le m$. Then $\{C_n\}$ converges in $E$ and $\lim C_n = \bigcap_{n=1}^{\infty} C_n$.

Lemma 1.11

([17])

Suppose that $E$ is a real reflexive and strictly convex Banach space. If $\lim C_n$ exists and is not empty, then $\{P_{C_n} x\}$ converges weakly to $P_{\lim C_n} x$ for every $x \in E$. Moreover, if $E$ has Property (H), the convergence is in norm.

Lemma 1.12

([18])

Let $E$ be a real smooth and uniformly convex Banach space, and let $\{u_n\}$ and $\{v_n\}$ be two sequences in $E$. If either $\{u_n\}$ or $\{v_n\}$ is bounded and $\varphi(u_n, v_n) \to 0$ as $n \to \infty$, then $\|u_n - v_n\| \to 0$ as $n \to \infty$.

Lemma 1.13

([19])

Let $E$ be a real uniformly convex Banach space and $r \in (0, +\infty)$. Then there exists a continuous, strictly increasing, and convex function $\omega : [0, 2r] \to [0, +\infty)$ with $\omega(0) = 0$ such that

$$\|kx + (1-k)y\|^{2} \le k\|x\|^{2} + (1-k)\|y\|^{2} - k(1-k)\,\omega(\|x - y\|)$$

for all $k \in [0, 1]$ and $x, y \in E$ with $\|x\| \le r$ and $\|y\| \le r$.
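In a Hilbert space this inequality holds with equality for $\omega(t) = t^{2}$, via the identity $\|kx + (1-k)y\|^{2} = k\|x\|^{2} + (1-k)\|y\|^{2} - k(1-k)\|x-y\|^{2}$. A quick numerical check of that identity (our sketch):

```python
def norm_sq(v):
    return sum(vi * vi for vi in v)

def convex_combo(k, x, y):
    return [k * xi + (1 - k) * yi for xi, yi in zip(x, y)]

x, y, k = [2.0, -1.0], [0.5, 3.0], 0.3
lhs = norm_sq(convex_combo(k, x, y))
rhs = (k * norm_sq(x) + (1 - k) * norm_sq(y)
       - k * (1 - k) * norm_sq([a - b for a, b in zip(x, y)]))
# Exact identity in Hilbert space, i.e. omega(t) = t^2 in Lemma 1.13.
assert abs(lhs - rhs) < 1e-12
```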

Strong convergence theorems and experiments

Strong convergence for infinite maximal monotone mappings and infinite weakly relatively non-expansive mappings

In this section, we suppose that the following conditions are satisfied:

  1. $E$ is a real uniformly convex and uniformly smooth Banach space and $J : E \to E^{*}$ is the normalized duality mapping;

  2. $T_i \subset E \times E^{*}$ is maximal monotone and $S_i : E \to E$ is weakly relatively non-expansive for each $i \in \mathbb{N}$;

  3. $\{s_{n,i}\}$ and $\{\tau_n\}$ are real number sequences in $(0, +\infty)$ for $i, n \in \mathbb{N}$, and $\{\alpha_n\}$ is a real number sequence in $(0, 1)$ for $n \in \mathbb{N}$;

  4. $\{\varepsilon_n\}$ is the error sequence in $E$.

Algorithm 2.1

Step 1. Choose $u_1, \varepsilon_1 \in E$. Let $s_{1,i} \in (0, +\infty)$ for $i \in \mathbb{N}$, $\alpha_1 \in (0, 1)$, and $\tau_1 \in (0, +\infty)$. Set $n = 1$ and go to Step 2.

Step 2. Compute $v_{n,i} = (J + s_{n,i} T_i)^{-1} J(u_n + \varepsilon_n)$ and $w_{n,i} = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J S_i v_{n,i}]$ for $i \in \mathbb{N}$. If $v_{n,i} = u_n + \varepsilon_n$ and $w_{n,i} = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $i \in \mathbb{N}$, then stop; otherwise, go to Step 3.

Step 3. Construct the sets Vn, Wn, and Un as follows:

$$\begin{cases} V_1 = E,\\ V_{n+1,i} = \{z \in E : \langle v_{n,i} - z, J(u_n + \varepsilon_n) - J v_{n,i} \rangle \ge 0\},\\ V_{n+1} = \big(\bigcap_{i=1}^{\infty} V_{n+1,i}\big) \cap V_n, \end{cases}$$

$$\begin{cases} W_1 = E,\\ W_{n+1,i} = \{z \in V_{n+1,i} : \varphi(z, w_{n,i}) \le \alpha_n \varphi(z, u_n) + (1 - \alpha_n) \varphi(z, v_{n,i})\},\\ W_{n+1} = \big(\bigcap_{i=1}^{\infty} W_{n+1,i}\big) \cap W_n, \end{cases}$$

and

$$U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},$$

go to Step 4.

Step 4. Choose any element $u_{n+1} \in U_{n+1}$, $n \in \mathbb{N}$.

Step 5. Set n=n+1, and return to Step 2.

Theorem 2.1

If, in Algorithm 2.1, $v_{n,i} = u_n + \varepsilon_n$ and $w_{n,i} = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $i \in \mathbb{N}$, then $u_n + \varepsilon_n \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

Proof

Since $v_{n,i} = u_n + \varepsilon_n$, Step 2 of Algorithm 2.1 gives $J v_{n,i} + s_{n,i} T_i v_{n,i} \ni J(u_n + \varepsilon_n) = J v_{n,i}$ for all $i \in \mathbb{N}$, which implies that $0 \in T_i v_{n,i}$ for $i \in \mathbb{N}$. Therefore, $u_n + \varepsilon_n \in \bigcap_{i=1}^{\infty} N(T_i)$.

Since $w_{n,i} = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J(u_n + \varepsilon_n)] = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J S_i v_{n,i}]$, then in view of Lemma 1.1 we get $v_{n,i} = S_i v_{n,i}$ for $i, n \in \mathbb{N}$. Thus $v_{n,i} = u_n + \varepsilon_n \in \bigcap_{i=1}^{\infty} F(S_i)$, $n \in \mathbb{N}$.

This completes the proof. □

Theorem 2.2

Suppose that $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \neq \emptyset$, $\inf_n s_{n,i} > 0$ for $i \in \mathbb{N}$, $0 < \sup_n \alpha_n < 1$, $\tau_n \to 0$, and $\varepsilon_n \to 0$ as $n \to \infty$. Then the iterative sequence $u_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$ as $n \to \infty$.

Proof

We split the proof into eight steps.

Step 1. $V_n$ is a nonempty subset of $E$.

In fact, we shall prove that $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \subset V_n$, which ensures that $V_n \neq \emptyset$.

We proceed by induction. Let $p \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

If $n = 1$, it is obvious that $p \in V_1 = E$. Since $T_i$ is monotone and $0 \in T_i p$, then

$$\langle v_{1,i} - p, J(u_1 + \varepsilon_1) - J v_{1,i} \rangle = \langle v_{1,i} - p, s_{1,i} T_i v_{1,i} - s_{1,i} T_i p \rangle \ge 0.$$

Thus $p \in V_{2,i}$, which ensures that $p \in V_2$.

Suppose the result is true for $n = k + 1$. Then, for $n = k + 2$, we have

$$\langle v_{k+1,i} - p, J(u_{k+1} + \varepsilon_{k+1}) - J v_{k+1,i} \rangle = \langle v_{k+1,i} - p, s_{k+1,i} T_i v_{k+1,i} - s_{k+1,i} T_i p \rangle \ge 0.$$

Then $p \in V_{k+2,i}$, which ensures that $p \in V_{k+2}$.

Therefore, by induction, $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \subset V_n$ for $n \in \mathbb{N}$.

Step 2. $W_n$ is a nonempty closed and convex subset of $E$ for $n \in \mathbb{N}$.

Since $\varphi(z, w_{n,i}) \le \alpha_n \varphi(z, u_n) + (1 - \alpha_n) \varphi(z, v_{n,i})$ is equivalent to

$$\langle z, 2\alpha_n J u_n + 2(1 - \alpha_n) J v_{n,i} - 2 J w_{n,i} \rangle \le \alpha_n \|u_n\|^{2} + (1 - \alpha_n) \|v_{n,i}\|^{2} - \|w_{n,i}\|^{2},$$

it is easy to see that $W_{n,i}$ is closed and convex for $i, n \in \mathbb{N}$. Thus $W_n$ is closed and convex for $n \in \mathbb{N}$.

Next, we use induction to show that $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \subset W_n$ for $n \in \mathbb{N}$, which ensures that $W_n \neq \emptyset$ for $n \in \mathbb{N}$.

In fact, let $p \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

If $n = 1$, it is obvious that $p \in W_1 = E$. From the definition of weakly relatively non-expansive mappings, we have

$$\varphi(p, w_{1,i}) \le \alpha_1 \varphi(p, u_1) + (1 - \alpha_1) \varphi(p, S_i v_{1,i}) \le \alpha_1 \varphi(p, u_1) + (1 - \alpha_1) \varphi(p, v_{1,i}).$$

Combining this with Step 1, we know that $p \in W_{2,i}$ for $i \in \mathbb{N}$. Therefore, $p \in W_2$.

Suppose the result is true for $n = k + 1$. Then, for $n = k + 2$, we know from Step 1 that $p \in V_{k+2,i}$ for $i, k \in \mathbb{N}$. Moreover,

$$\varphi(p, w_{k+1,i}) \le \alpha_{k+1} \varphi(p, u_{k+1}) + (1 - \alpha_{k+1}) \varphi(p, S_i v_{k+1,i}) \le \alpha_{k+1} \varphi(p, u_{k+1}) + (1 - \alpha_{k+1}) \varphi(p, v_{k+1,i}),$$

which implies that $p \in W_{k+2,i}$, and then $p \in (\bigcap_{i=1}^{\infty} W_{k+2,i}) \cap W_{k+1} = W_{k+2}$. Therefore, by induction,

$$\Big(\bigcap_{i=1}^{\infty} N(T_i)\Big) \cap \Big(\bigcap_{i=1}^{\infty} F(S_i)\Big) \subset W_n \quad \text{for } n \in \mathbb{N}.$$

Step 3. Set $y_n = P_{W_{n+1}}(u_1)$. Then $y_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1)$ as $n \to \infty$.

From the construction of $W_n$ in Step 3 of Algorithm 2.1, $W_{n+1} \subset W_n$ for $n \in \mathbb{N}$. Lemma 1.10 implies that $\lim W_n$ exists and $\lim W_n = \bigcap_{n=1}^{\infty} W_n \neq \emptyset$. Since $E$ has Property (H), Lemma 1.11 implies that $y_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1)$ as $n \to \infty$.

Step 4. $\{u_n\}$ is well defined.

It suffices to show that $U_{n+1} \neq \emptyset$. From the definition of $P_{W_{n+1}}(u_1)$ and that of the infimum, for $\tau_{n+1} > 0$ there exists $b_n \in W_{n+1}$ such that

$$\|u_1 - b_n\|^{2} \le \Big(\inf_{z \in W_{n+1}} \|u_1 - z\|\Big)^{2} + \tau_{n+1} = \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}.$$

This ensures that $U_{n+1} \neq \emptyset$ for each $n$.

Step 5. $\|u_{n+1} - y_n\| \to 0$ as $n \to \infty$.

Since $u_{n+1} \in U_{n+1} \subset W_{n+1}$ and $y_n = P_{W_{n+1}}(u_1)$, in view of Lemma 1.13 and the convexity of $W_{n+1}$ we have, for $k \in (0, 1)$,

$$\|y_n - u_1\|^{2} \le \|k y_n + (1 - k) u_{n+1} - u_1\|^{2} \le k \|y_n - u_1\|^{2} + (1 - k) \|u_{n+1} - u_1\|^{2} - k(1 - k)\, \omega(\|y_n - u_{n+1}\|).$$

Therefore,

$$k\, \omega(\|y_n - u_{n+1}\|) \le \|u_{n+1} - u_1\|^{2} - \|y_n - u_1\|^{2} \le \tau_{n+1}.$$

Letting $k \to 1$, we obtain $\|y_n - u_{n+1}\| \to 0$ as $n \to \infty$. Since $y_n \to y_0$, it follows that $u_n \to y_0$ as $n \to \infty$.

Step 6. $\|u_n - v_{n,i}\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Since $y_{n+1} \in W_{n+2} \subset W_{n+1} \subset V_{n+1}$, then

$$0 \le 2 \langle v_{n,i} - y_{n+1}, J(u_n + \varepsilon_n) - J v_{n,i} \rangle = \varphi(y_{n+1}, u_n + \varepsilon_n) - \varphi(y_{n+1}, v_{n,i}) - \varphi(v_{n,i}, u_n + \varepsilon_n) \le \varphi(y_{n+1}, u_n + \varepsilon_n) - \varphi(v_{n,i}, u_n + \varepsilon_n).$$

Thus, using Step 5 and the fact that $\varepsilon_n \to 0$, we have

$$\varphi(v_{n,i}, u_n + \varepsilon_n) \le \varphi(y_{n+1}, u_n + \varepsilon_n) = \varphi(y_{n+1}, y_n) + \varphi(y_n, u_n + \varepsilon_n) + 2 \langle y_{n+1} - y_n, J y_n - J(u_n + \varepsilon_n) \rangle \le \big(\|y_{n+1}\| \|J y_{n+1} - J y_n\| + \|y_{n+1} - y_n\| \|y_n\|\big) + \big(\|y_n\| \|J y_n - J(u_n + \varepsilon_n)\| + \|y_n - u_n - \varepsilon_n\| \|u_n + \varepsilon_n\|\big) + 2 \|y_{n+1} - y_n\| \|J y_n - J(u_n + \varepsilon_n)\| \to 0$$

as $n \to \infty$, where the uniform continuity of $J$ on bounded sets (Lemma 1.1) is used. By Lemma 1.12, $\|v_{n,i} - u_n - \varepsilon_n\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$. Since $\varepsilon_n \to 0$, then $\|v_{n,i} - u_n\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$. Since $u_n \to y_0$, then $v_{n,i} \to y_0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Step 7. $\|w_{n,i} - u_n\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Since $u_{n+1} \in U_{n+1} \subset W_{n+1}$, then in view of Steps 5 and 6,

$$\varphi(u_{n+1}, w_{n,i}) \le \alpha_n \varphi(u_{n+1}, u_n) + (1 - \alpha_n) \varphi(u_{n+1}, v_{n,i}) \to 0$$

as $n \to \infty$. Lemma 1.12 implies that $\|u_{n+1} - w_{n,i}\| \to 0$ as $n \to \infty$. Since $u_n \to y_0$, then $w_{n,i} \to y_0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Step 8. $y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

Since $v_{n,i} = (J + s_{n,i} T_i)^{-1} J(u_n + \varepsilon_n)$, then $\frac{J(u_n + \varepsilon_n) - J v_{n,i}}{s_{n,i}} \in T_i v_{n,i}$. Since $v_{n,i} \to y_0$, $u_n \to y_0$, $\varepsilon_n \to 0$, and $\inf_n s_{n,i} > 0$, then $\frac{J(u_n + \varepsilon_n) - J v_{n,i}}{s_{n,i}} \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$. Using Lemma 1.7, $y_0 \in \bigcap_{i=1}^{\infty} N(T_i)$.

Since $w_{n,i} = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J S_i v_{n,i}]$ and $0 < \sup_n \alpha_n < 1$, then in view of Lemma 1.1, $S_i v_{n,i} \to y_0$ as $n \to \infty$. Thus $\|v_{n,i} - S_i v_{n,i}\| \to 0$ with $v_{n,i} \to y_0$, so $y_0 \in \tilde{F}(S_i) = F(S_i)$ for each $i \in \mathbb{N}$; combining this with Lemma 1.6, $y_0 \in \bigcap_{i=1}^{\infty} F(S_i)$.

This completes the proof. □

Corollary 2.3

When $i \equiv 1$, i.e., there is a single maximal monotone mapping $T$ and a single weakly relatively non-expansive mapping $S$, Algorithm 2.1 reduces to the following:

$$\begin{cases} u_1 \in E, \quad \varepsilon_1 \in E,\\ v_n = (J + s_n T)^{-1} J(u_n + \varepsilon_n),\\ w_n = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J S v_n],\\ V_1 = W_1 = E,\\ V_{n+1} = \{z \in E : \langle v_n - z, J(u_n + \varepsilon_n) - J v_n \rangle \ge 0\} \cap V_n,\\ W_{n+1} = \{z \in V_{n+1} : \varphi(z, w_n) \le \alpha_n \varphi(z, u_n) + (1 - \alpha_n) \varphi(z, v_n)\} \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}, \end{cases}$$

where $\{\varepsilon_n\} \subset E$, $\{s_n\} \subset (0, \infty)$, $\{\tau_n\} \subset (0, \infty)$, and $\{\alpha_n\} \subset (0, 1)$. Then:

  1. similar to Theorem 2.1, if $v_n = u_n + \varepsilon_n$ and $w_n = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $n \in \mathbb{N}$, then $u_n + \varepsilon_n \in N(T) \cap F(S)$;

  2. suppose that $E$, $\{\varepsilon_n\}$, $\{\tau_n\}$, and $\{\alpha_n\}$ satisfy the same conditions as those in Theorem 2.2. If $N(T) \cap F(S) \neq \emptyset$ and $\inf_n s_n > 0$, then the iterative sequence $u_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in N(T) \cap F(S)$ as $n \to \infty$.

Algorithm 2.2

Making only the following changes in Algorithm 2.1, we get Algorithm 2.2:

$$w_{n,i} = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J S_i v_{n,i}] \quad \text{for all } i \in \mathbb{N},$$

and

$$\begin{cases} W_1 = E,\\ W_{n+1,i} = \{z \in V_{n+1,i} : \varphi(z, w_{n,i}) \le \alpha_n \varphi(z, u_1) + (1 - \alpha_n) \varphi(z, v_{n,i})\},\\ W_{n+1} = \big(\bigcap_{i=1}^{\infty} W_{n+1,i}\big) \cap W_n. \end{cases}$$

Theorem 2.4

If, in Algorithm 2.2, $v_{n,i} = u_n + \varepsilon_n$ and $w_{n,i} = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $i \in \mathbb{N}$, then $u_n + \varepsilon_n \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

Proof

Similar to Theorem 2.1, the result follows. □

Theorem 2.5

Replace the condition $0 < \sup_n \alpha_n < 1$ of Theorem 2.2 by $\alpha_n \to 0$ as $n \to \infty$, keeping all other assumptions. Then the iterative sequence generated by Algorithm 2.2 satisfies $u_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$ as $n \to \infty$.

Proof

Steps 1, 3, 4, 5, and 6 are the same as in the proof of Theorem 2.2; only the following steps require slight changes.

Step 2. $W_n$ is a nonempty closed and convex subset of $E$ for $n \in \mathbb{N}$.

Since $\varphi(z, w_{n,i}) \le \alpha_n \varphi(z, u_1) + (1 - \alpha_n) \varphi(z, v_{n,i})$ is equivalent to

$$\langle z, 2\alpha_n J u_1 + 2(1 - \alpha_n) J v_{n,i} - 2 J w_{n,i} \rangle \le \alpha_n \|u_1\|^{2} + (1 - \alpha_n) \|v_{n,i}\|^{2} - \|w_{n,i}\|^{2},$$

it is easy to see that $W_{n,i}$ is closed and convex for $i, n \in \mathbb{N}$. Thus $W_n$ is closed and convex for $n \in \mathbb{N}$.

Next, we use induction to show that $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \subset W_n$ for $n \in \mathbb{N}$, which ensures that $W_n \neq \emptyset$ for $n \in \mathbb{N}$.

In fact, let $p \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

If $n = 1$, it is obvious that $p \in W_1 = E$. From the definition of weakly relatively non-expansive mappings, we have

$$\varphi(p, w_{1,i}) \le \alpha_1 \varphi(p, u_1) + (1 - \alpha_1) \varphi(p, S_i v_{1,i}) \le \alpha_1 \varphi(p, u_1) + (1 - \alpha_1) \varphi(p, v_{1,i}).$$

Combining this with Step 1, we know that $p \in W_{2,i}$ for $i \in \mathbb{N}$. Therefore, $p \in W_2$.

Suppose the result is true for $n = k + 1$. Then, for $n = k + 2$, we know from Step 1 that $p \in V_{k+2,i}$ for $i, k \in \mathbb{N}$. Moreover,

$$\varphi(p, w_{k+1,i}) \le \alpha_{k+1} \varphi(p, u_1) + (1 - \alpha_{k+1}) \varphi(p, S_i v_{k+1,i}) \le \alpha_{k+1} \varphi(p, u_1) + (1 - \alpha_{k+1}) \varphi(p, v_{k+1,i}),$$

which implies that $p \in W_{k+2,i}$, and then $p \in (\bigcap_{i=1}^{\infty} W_{k+2,i}) \cap W_{k+1} = W_{k+2}$. Therefore, by induction, $(\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i)) \subset W_n$ for $n \in \mathbb{N}$.

Step 7. $\|w_{n,i} - u_n\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Since $u_{n+1} \in U_{n+1} \subset W_{n+1}$, then in view of $\alpha_n \to 0$ and Step 6,

$$\varphi(u_{n+1}, w_{n,i}) \le \alpha_n \varphi(u_{n+1}, u_1) + (1 - \alpha_n) \varphi(u_{n+1}, v_{n,i}) \to 0$$

as $n \to \infty$, for $i \in \mathbb{N}$. Lemma 1.12 implies that $\|w_{n,i} - u_n\| \to 0$ for $i \in \mathbb{N}$, as $n \to \infty$.

Step 8. $y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in (\bigcap_{i=1}^{\infty} N(T_i)) \cap (\bigcap_{i=1}^{\infty} F(S_i))$.

In the same way as Step 8 in Theorem 2.2, we have $y_0 \in \bigcap_{i=1}^{\infty} N(T_i)$. Since $w_{n,i} = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J S_i v_{n,i}]$ and $\alpha_n \to 0$, then $S_i v_{n,i} \to y_0$ as $n \to \infty$. Thus, in view of Lemma 1.6, $y_0 \in \bigcap_{i=1}^{\infty} F(S_i)$.

This completes the proof. □

Corollary 2.6

When $i \equiv 1$, i.e., there is a single maximal monotone mapping $T$ and a single weakly relatively non-expansive mapping $S$, Algorithm 2.2 reduces to the following:

$$\begin{cases} u_1 \in E, \quad \varepsilon_1 \in E,\\ v_n = (J + s_n T)^{-1} J(u_n + \varepsilon_n),\\ w_n = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J S v_n],\\ V_1 = W_1 = E,\\ V_{n+1} = \{z \in E : \langle v_n - z, J(u_n + \varepsilon_n) - J v_n \rangle \ge 0\} \cap V_n,\\ W_{n+1} = \{z \in V_{n+1} : \varphi(z, w_n) \le \alpha_n \varphi(z, u_1) + (1 - \alpha_n) \varphi(z, v_n)\} \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}, \end{cases}$$

where $\{\varepsilon_n\} \subset E$, $\{s_n\} \subset (0, \infty)$, $\{\tau_n\} \subset (0, \infty)$, and $\{\alpha_n\} \subset (0, 1)$. Then:

  1. similar to Theorem 2.4, if $v_n = u_n + \varepsilon_n$ and $w_n = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $n \in \mathbb{N}$, then $u_n + \varepsilon_n \in N(T) \cap F(S)$;

  2. suppose that $E$, $\{\varepsilon_n\}$, $\{\tau_n\}$, and $\{\alpha_n\}$ satisfy the same conditions as those in Theorem 2.5. If $N(T) \cap F(S) \neq \emptyset$ and $\inf_n s_n > 0$, then the iterative sequence $u_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in N(T) \cap F(S)$ as $n \to \infty$.

Remark 2.7

Compared to the existing related work, e.g., [12–14], strongly relatively non-expansive mappings are extended to weakly relatively non-expansive mappings. Moreover, in our paper, the discussion is extended to the case of countably many maximal monotone mappings and countably many weakly relatively non-expansive mappings.

Remark 2.8

Calculating the generalized projection $\Pi_{H_n \cap V_n \cap W_n}(x_1)$ in [12] or $\Pi_{H_n \cap V_n}(x_1)$ in [13] is replaced by calculating the metric projection $P_{W_{n+1}}(u_1)$ in Step 3 of our Algorithms 2.1 and 2.2, which makes the computation easier.

Remark 2.9

A new proof technique for finding the limit $y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1)$ is employed in our paper by fully examining the properties of the projection sets $W_n$, which is quite different from the technique used for finding the limit $\Pi_{N(T) \cap F(S)}(x_1)$ in [12] or $\Pi_{N(A) \cap F(S) \cap F(T)}(x_1)$ in [13].

Remark 2.10

Theoretically, the metric projection is easier to calculate than the generalized projection in a general Banach space, since the generalized projection involves a Lyapunov functional. In this sense, the iterative algorithms constructed in our paper are new and more efficient.

Special cases in Hilbert spaces and computational experiments

Corollary 2.11

If $E$ reduces to a Hilbert space $H$, then iterative Algorithm 2.1 becomes the following one:

$$\begin{cases} u_1 \in H, \quad \varepsilon_1 \in H,\\ v_{n,i} = (I + s_{n,i} T_i)^{-1}(u_n + \varepsilon_n),\\ w_{n,i} = \alpha_n u_n + (1 - \alpha_n) S_i v_{n,i},\\ V_1 = W_1 = H,\\ V_{n+1,i} = \{z \in H : \langle v_{n,i} - z, u_n + \varepsilon_n - v_{n,i} \rangle \ge 0\}, \quad V_{n+1} = \big(\bigcap_{i=1}^{\infty} V_{n+1,i}\big) \cap V_n,\\ W_{n+1,i} = \{z \in V_{n+1,i} : \|z - w_{n,i}\|^{2} \le \alpha_n \|z - u_n\|^{2} + (1 - \alpha_n) \|z - v_{n,i}\|^{2}\}, \quad W_{n+1} = \big(\bigcap_{i=1}^{\infty} W_{n+1,i}\big) \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{2.1}$$

The results of Theorems 2.1 and 2.2 are true for this special case.

Corollary 2.12

If $E$ reduces to a Hilbert space $H$, then iterative Algorithm 2.2 becomes the following one:

$$\begin{cases} u_1 \in H, \quad \varepsilon_1 \in H,\\ v_{n,i} = (I + s_{n,i} T_i)^{-1}(u_n + \varepsilon_n),\\ w_{n,i} = \alpha_n u_1 + (1 - \alpha_n) S_i v_{n,i},\\ V_1 = W_1 = H,\\ V_{n+1,i} = \{z \in H : \langle v_{n,i} - z, u_n + \varepsilon_n - v_{n,i} \rangle \ge 0\}, \quad V_{n+1} = \big(\bigcap_{i=1}^{\infty} V_{n+1,i}\big) \cap V_n,\\ W_{n+1,i} = \{z \in V_{n+1,i} : \|z - w_{n,i}\|^{2} \le \alpha_n \|z - u_1\|^{2} + (1 - \alpha_n) \|z - v_{n,i}\|^{2}\}, \quad W_{n+1} = \big(\bigcap_{i=1}^{\infty} W_{n+1,i}\big) \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{2.2}$$

The results of Theorems 2.4 and 2.5 are true for this special case.

Corollary 2.13

If, moreover, $i \equiv 1$, then (2.1) and (2.2) reduce to the following two schemes:

$$\begin{cases} u_1 \in H, \quad \varepsilon_1 \in H,\\ v_n = (I + s_n T)^{-1}(u_n + \varepsilon_n),\\ w_n = \alpha_n u_n + (1 - \alpha_n) S v_n,\\ V_1 = W_1 = H,\\ V_{n+1} = \{z \in H : \langle v_n - z, u_n + \varepsilon_n - v_n \rangle \ge 0\} \cap V_n,\\ W_{n+1} = \{z \in V_{n+1} : \|z - w_n\|^{2} \le \alpha_n \|z - u_n\|^{2} + (1 - \alpha_n) \|z - v_n\|^{2}\} \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}, \end{cases} \tag{2.3}$$

and

$$\begin{cases} u_1 \in H, \quad \varepsilon_1 \in H,\\ v_n = (I + s_n T)^{-1}(u_n + \varepsilon_n),\\ w_n = \alpha_n u_1 + (1 - \alpha_n) S v_n,\\ V_1 = W_1 = H,\\ V_{n+1} = \{z \in H : \langle v_n - z, u_n + \varepsilon_n - v_n \rangle \ge 0\} \cap V_n,\\ W_{n+1} = \{z \in V_{n+1} : \|z - w_n\|^{2} \le \alpha_n \|z - u_1\|^{2} + (1 - \alpha_n) \|z - v_n\|^{2}\} \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{2.4}$$

The results of Corollaries 2.3 and 2.6 are true for the special cases, respectively.
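For a concrete instance of the resolvent step $v_n = (I + s_n T)^{-1}(u_n + \varepsilon_n)$ appearing in (2.3) and (2.4), take a linear monotone operator on the real line, $Tx = \lambda x$ with $\lambda \ge 0$ (an illustrative choice of ours, foreshadowing Remark 2.14 below); the resolvent is then an explicit formula:

```python
def resolvent_linear(u, s, lam):
    """Resolvent (I + s*T)^{-1}(u) for the monotone operator T x = lam * x,
    lam >= 0, on the real line: solve v + s*lam*v = u for v."""
    return u / (1.0 + s * lam)

# With T x = 2x (the operator of Remark 2.14), s = 1, and input u = 2:
v = resolvent_linear(2.0, 1.0, 2.0)
print(v)  # 2/3, which is v_1 of the computational experiment below
```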

Remark 2.14

Take $H = (-\infty, +\infty)$, $Tx = 2x$, and $Sx = x$ for $x \in (-\infty, +\infty)$. Let $\varepsilon_n = \alpha_n = \tau_n = \frac{1}{n}$ and $s_n = 2^{n-1}$ for $n \in \mathbb{N}$. Then $T$ is maximal monotone and $S$ is weakly relatively non-expansive. Moreover, $N(T) \cap F(S) = \{0\}$.

Remark 2.15

Taking the example in Remark 2.14 and choosing the initial value $u_1 = 1 \in (-\infty, +\infty)$, we can get an iterative sequence $\{u_n\}$ by algorithm (2.3) in the following way:

$$\begin{cases} u_1 = 1 \in (-\infty, +\infty),\\ u_{n+1} = \dfrac{u_1 + v_n - \sqrt{(u_1 - v_n)^{2} + \tau_{n+1}}}{2}, \quad n \in \mathbb{N}, \end{cases} \tag{2.5}$$

where $v_n = \frac{u_n + \varepsilon_n}{1 + 2 s_n}$, $n \in \mathbb{N}$. Moreover, $u_n \to 0 \in N(T) \cap F(S)$ as $n \to \infty$.

Proof

We can easily see from iterative algorithm (2.3) that

$$v_n = \frac{u_n + \varepsilon_n}{1 + 2 s_n} \quad \text{for } n \in \mathbb{N} \tag{2.6}$$

and

$$w_n = \alpha_n u_n + (1 - \alpha_n) v_n \quad \text{for } n \in \mathbb{N}. \tag{2.7}$$

To analyze the construction of the set $W_n$, notice that $|z - w_n|^{2} \le \alpha_n |z - u_n|^{2} + (1 - \alpha_n) |z - v_n|^{2}$ is equivalent to

$$[2\alpha_n u_n + 2(1 - \alpha_n) v_n - 2 w_n]\, z \le \alpha_n u_n^{2} + (1 - \alpha_n) v_n^{2} - w_n^{2}. \tag{2.8}$$

In view of (2.7), the left-hand side of (2.8) is

$$[2\alpha_n u_n + 2(1 - \alpha_n) v_n - 2 w_n]\, z = [2\alpha_n u_n + 2(1 - \alpha_n) v_n - 2\alpha_n u_n - 2(1 - \alpha_n) v_n]\, z = 0 \quad \text{for } n \in \mathbb{N}. \tag{2.9}$$

Meanwhile, the right-hand side of (2.8) is

$$\alpha_n u_n^{2} + (1 - \alpha_n) v_n^{2} - w_n^{2} = \alpha_n u_n^{2} + (1 - \alpha_n) v_n^{2} - \alpha_n^{2} u_n^{2} - 2\alpha_n (1 - \alpha_n) u_n v_n - (1 - \alpha_n)^{2} v_n^{2} = \alpha_n (1 - \alpha_n) u_n^{2} + \alpha_n (1 - \alpha_n) v_n^{2} - 2\alpha_n (1 - \alpha_n) u_n v_n = \alpha_n (1 - \alpha_n)(u_n - v_n)^{2} \ge 0 \quad \text{for } n \in \mathbb{N}. \tag{2.10}$$

Using (2.8)–(2.10), the defining inequality of $W_{n+1}$ holds for every $z$, so we get

$$W_{n+1} = V_{n+1} \cap W_n \quad \text{for } n \in \mathbb{N}. \tag{2.11}$$

Next, we use induction to show that, for $n \in \mathbb{N}$,

$$\begin{cases} 0 < v_{n+1} < v_n < 1, \quad v_n > \dfrac{1}{2^{n+1}(n+1)},\\ V_{n+1} = (-\infty, v_n], \quad W_{n+1} = V_{n+1},\\ U_{n+1} = \big[u_1 - \sqrt{(u_1 - v_n)^{2} + \tau_{n+1}},\, v_n\big],\\ \text{and we may choose } u_{n+1} = \dfrac{u_1 + v_n - \sqrt{(u_1 - v_n)^{2} + \tau_{n+1}}}{2}. \end{cases} \tag{2.12}$$

In fact, if $n = 1$, then $v_1 = \frac{u_1 + \varepsilon_1}{1 + 2 s_1} = \frac{2}{3}$, thus $V_2 = (-\infty, v_1] \cap V_1 = (-\infty, v_1]$. From (2.11), $W_2 = V_2 \cap W_1 = V_2$, and then $P_{W_2}(u_1) = v_1 = \frac{2}{3}$. So we have

$$U_2 = \Big\{z \in W_2 : |u_1 - z| \le \sqrt{|P_{W_2}(u_1) - u_1|^{2} + \tau_2}\Big\} = \Big[1 - \sqrt{\tfrac{1}{9} + \tfrac{1}{2}},\, 1 + \sqrt{\tfrac{1}{9} + \tfrac{1}{2}}\Big] \cap \Big(-\infty, \tfrac{2}{3}\Big] = \Big[1 - \sqrt{\tfrac{1}{9} + \tfrac{1}{2}},\, \tfrac{2}{3}\Big] = \big[u_1 - \sqrt{(u_1 - v_1)^{2} + \tau_2},\, v_1\big].$$

Therefore, we may choose $u_2 \in U_2$ as follows:

$$u_2 = \frac{u_1 + v_1 - \sqrt{(u_1 - v_1)^{2} + \tau_2}}{2}.$$

From (2.6), $v_2 = \frac{u_2 + \varepsilon_2}{1 + 2 s_2} = \frac{16 - \sqrt{22}}{60} \approx 0.1885$. Then $0 < v_2 < v_1 < 1$, and it is easy to see that $v_1 > \frac{1}{2^{1+1}(1+1)}$. Thus (2.12) is true for $n = 1$.

Suppose (2.12) is true for $n = k$, that is,

$$\begin{cases} 0 < v_{k+1} < v_k < 1, \quad v_k > \dfrac{1}{2^{k+1}(k+1)},\\ V_{k+1} = (-\infty, v_k], \quad W_{k+1} = V_{k+1},\\ U_{k+1} = \big[u_1 - \sqrt{(u_1 - v_k)^{2} + \tau_{k+1}},\, v_k\big],\\ \text{and we may choose } u_{k+1} = \dfrac{u_1 + v_k - \sqrt{(u_1 - v_k)^{2} + \tau_{k+1}}}{2}. \end{cases}$$

Then, for $n = k + 1$, we first analyze the set $V_{k+2}$.

Note that $u_{k+1} + \varepsilon_{k+1} - v_{k+1} = (1 + 2 s_{k+1}) v_{k+1} - v_{k+1} = 2 s_{k+1} v_{k+1} > 0$; hence $\langle v_{k+1} - z, u_{k+1} + \varepsilon_{k+1} - v_{k+1} \rangle \ge 0$ is equivalent to $z \le v_{k+1}$. Then

$$V_{k+2} = (-\infty, v_{k+1}] \cap V_{k+1} = (-\infty, v_{k+1}] \cap (-\infty, v_k] = (-\infty, v_{k+1}].$$

From (2.11),

$$W_{k+2} = V_{k+2} \cap W_{k+1} = (-\infty, v_{k+1}] \cap V_{k+1} = V_{k+2}.$$

Now, we analyze the set $U_{k+2}$.

Since $0 < v_{k+1} < 1 = u_1$, then $P_{W_{k+2}}(u_1) = v_{k+1}$. Thus $|u_1 - z| \le \sqrt{|P_{W_{k+2}}(u_1) - u_1|^{2} + \tau_{k+2}}$ is equivalent to

$$u_1 - \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}} \le z \le u_1 + \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}}.$$

It is easy to check that $u_1 + \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}} > 1 > v_{k+1}$ and $u_1 - \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}} < u_1 - (u_1 - v_{k+1}) = v_{k+1}$.

Thus $U_{k+2} = \big[u_1 - \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}},\, v_{k+1}\big]$. Then we may choose $u_{k+2} \in U_{k+2}$ such that

$$u_{k+2} = \frac{u_1 + v_{k+1} - \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}}}{2}.$$

Now, we show that $v_{k+2} > 0$.

Since

$$v_{k+2} = \frac{u_{k+2} + \varepsilon_{k+2}}{1 + 2 s_{k+2}} = \frac{\frac{u_1 + v_{k+1} - \sqrt{(u_1 - v_{k+1})^{2} + \tau_{k+2}}}{2} + \frac{1}{k+2}}{1 + 2^{k+2}} = \frac{\frac{2}{k+2} + 1 + v_{k+1} - \sqrt{(1 - v_{k+1})^{2} + \frac{1}{k+2}}}{2(1 + 2^{k+2})},$$

then

$$v_{k+2} > 0 \iff \frac{2}{k+2} + 1 + v_{k+1} > \sqrt{(1 - v_{k+1})^{2} + \frac{1}{k+2}} \iff \frac{1}{(k+2)^{2}} + \frac{1}{k+2} + \frac{v_{k+1}}{k+2} + v_{k+1} > \frac{1}{4(k+2)},$$

which is obviously true. Thus $v_{k+2} > 0$.

Next, we show that $v_{k+1} > \frac{1}{2^{k+2}(k+2)}$.

Since $v_{k+1} = \frac{u_{k+1} + \varepsilon_{k+1}}{1 + 2 s_{k+1}} = \frac{(k+1) u_{k+1} + 1}{(k+1)(1 + 2^{k+1})}$, the claim is equivalent to

$$(k+1) u_{k+1} + 1 > \frac{(k+1)(1 + 2^{k+1})}{2^{k+2}(k+2)}. \tag{2.13}$$

Substituting $u_{k+1} = \frac{1 + v_k - \sqrt{(1 - v_k)^{2} + \frac{1}{k+1}}}{2}$ into (2.13), isolating the square root, and squaring both sides reduces (2.13) to a polynomial inequality in $k$ and $v_k$ which, with the help of the inductive bound $v_k > \frac{1}{2^{k+1}(k+1)}$, can be verified directly for every $k \in \mathbb{N}$. Hence $v_{k+1} > \frac{1}{2^{k+2}(k+2)}$.

Finally, we show that $v_{k+2} < v_{k+1}$.

From the definition of $u_{k+2}$, we have $u_{k+2} < \frac{1 + v_{k+1} - (1 - v_{k+1})}{2} = v_{k+1}$. Then $v_{k+2} < \frac{v_{k+1} + \frac{1}{k+2}}{1 + 2^{k+2}}$. Since $v_{k+1} > \frac{1}{2^{k+2}(k+2)}$, then

$$\frac{v_{k+1} + \frac{1}{k+2}}{1 + 2^{k+2}} - v_{k+1} = \frac{\frac{1}{k+2} - 2^{k+2} v_{k+1}}{1 + 2^{k+2}} < 0,$$

which implies that $v_{k+2} < v_{k+1}$.

Therefore, by induction, (2.12) is true for all $n \in \mathbb{N}$. Since $0 < v_{n+1} < v_n < 1$, $\lim_{n \to \infty} v_n$ exists; set $a = \lim_{n \to \infty} v_n$. From (2.12), $\lim_{n \to \infty} u_n = a$, and from (2.6), $a = 0$. Then, in view of (2.7), $\lim_{n \to \infty} w_n = 0$. That is, $\lim_{n \to \infty} w_n = \lim_{n \to \infty} v_n = \lim_{n \to \infty} u_n = 0$.

This completes the proof. □

Remark 2.16

We next carry out a computational experiment on (2.5) in Remark 2.15 to check the effectiveness of iterative algorithm (2.3). Using a Visual Basic 6 program, we obtain Table 1 and Fig. 1, from which the convergence of $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$ can be seen.

Table 1.

Numerical results of {un}, {vn}, and {wn} with initial u1=1.0

n vn wn un
1 0.666666666666667 1.00000000000000 1.00000000000000
2 0.188493070669609 0.315479212008828 0.442465353348047
3 0.047734978022387 0.063917141637640 0.096281468868147
4 0.013887781581545 0.006938907907725 −0.01390771311373
5 0.005016751133393 −0.00287604161289 −0.03444721259803
6 0.002022073632571 −0.00418691873111 −0.03523188054954
7 0.000854971429905 −0.00391942854572 −0.03256582839944
8 0.000371596957448 −0.00362300404227 −0.02949958193595
9 0.000164574841194 −0.00281862431655 −0.02668421757849
10 0.000073908605586 −0.002357850182411 −0.02424367927438
Figure 1. Convergence of $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$
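The experiment of Remark 2.16 is straightforward to reproduce in any language. The following Python transcription (ours) of the recursion (2.5), with $\varepsilon_n = \alpha_n = \tau_n = 1/n$ and $s_n = 2^{n-1}$ as in Remark 2.14, regenerates the columns of Table 1:

```python
import math

def run_scheme_2_5(steps=10):
    """Iterate u_{n+1} = (u_1 + v_n - sqrt((u_1 - v_n)^2 + tau_{n+1})) / 2
    with v_n = (u_n + eps_n)/(1 + 2 s_n) and w_n = alpha_n u_n + (1 - alpha_n) v_n,
    where eps_n = alpha_n = tau_n = 1/n, s_n = 2**(n-1), u_1 = 1."""
    u1, u = 1.0, 1.0
    rows = []
    for n in range(1, steps + 1):
        eps, alpha, s = 1.0 / n, 1.0 / n, 2.0 ** (n - 1)
        v = (u + eps) / (1.0 + 2.0 * s)
        w = alpha * u + (1.0 - alpha) * v      # scheme (2.3): w_n anchored at u_n
        rows.append((n, v, w, u))
        tau_next = 1.0 / (n + 1)
        u = (u1 + v - math.sqrt((u1 - v) ** 2 + tau_next)) / 2.0
    return rows

for n, v, w, u in run_scheme_2_5():
    print(f"{n:2d}  {v: .15f}  {w: .15f}  {u: .15f}")
```

The printed rows agree with Table 1 (e.g. $v_1 = 2/3$, $u_2 \approx 0.442465$, $w_2 \approx 0.315479$).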

Remark 2.17

Similar to Remark 2.15, considering the same example as in Remark 2.14 and choosing the initial value $u_1 = 1 \in (-\infty, +\infty)$, we can get an iterative sequence $\{u_n\}$ by algorithm (2.4) in the following way:

$$\begin{cases} u_1 = 1 \in (-\infty, +\infty),\\ u_{n+1} = \dfrac{u_1 + v_n - \sqrt{(u_1 - v_n)^{2} + \tau_{n+1}}}{2}, \quad n \in \mathbb{N}, \end{cases} \tag{2.14}$$

where $v_n = \frac{u_n + \varepsilon_n}{1 + 2 s_n}$ and $w_n = \alpha_n u_1 + (1 - \alpha_n) v_n$ for $n \in \mathbb{N}$. Then $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$ converge strongly to $0 \in N(T) \cap F(S)$ as $n \to \infty$.

Remark 2.18

We carry out a computational experiment on (2.14) in Remark 2.17 to check the effectiveness of iterative algorithm (2.4). Using a Visual Basic 6 program, we obtain Table 2 and Fig. 2, from which the convergence of $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$ can be seen.

Table 2.

Numerical results of {un}, {vn}, and {wn} with initial u1=1.0

n vn wn un
1 0.666666666666667 1.00000000000000 1.00000000000000
2 0.188493070669609 0.594246535334805 0.442465353348047
3 0.047734978022387 0.365156652014924 0.096281468868147
4 0.013887781581545 0.260415836186159 −0.01390771311373
5 0.005016751133393 0.204013400906715 −0.03444721259803
6 0.002022073632571 0.168351728027143 −0.03523188054954
7 0.000854971429905 0.143589975511347 −0.03256582839944
8 0.000371596957448 0.125325147337767 −0.02949958193595
9 0.000164574841194 0.111257399858839 −0.02668421757849
10 0.000073908605586 0.100066517745027 −0.02424367927438
11 0.000033552200238 0.090939592909307 −0.02216063262202
12 0.000015364834636 0.083347417765083 −0.02038360583157
13 0.000007086981657 0.076929618752290 −0.01885943628695
14 0.000003288762206 0.071431625279192 −0.01754220267938
15 0.000001534136645 0.066668098527535 −0.01639454294823
16 0.000000718881060 0.062500673950994 −0.01538669196834
17 0.000000338196904 0.058823847714733 −0.01449504667360
18 0.000000159662486 0.055555706347903 −0.01370083322728
19 0.000000075612039 0.052631650579827 −0.01298901840146
20 0.000000035908223 0.050000034112812 −0.01234746359706
Figure 2. Convergence of $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$
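Since the $u$- and $v$-recursions of (2.14) coincide with those of (2.5), only the $w$-step changes; the following Python transcription (ours) regenerates Table 2. Note that here $w_n \ge \alpha_n u_1 = 1/n$, which explains the visibly slower decay of $\{w_n\}$ compared with Table 1:

```python
import math

def run_scheme_2_14(steps=20):
    """Same u- and v-recursions as (2.5); scheme (2.4) anchors w at u_1,
    i.e. w_n = alpha_n * u_1 + (1 - alpha_n) * v_n."""
    u1, u = 1.0, 1.0
    rows = []
    for n in range(1, steps + 1):
        eps, alpha, s = 1.0 / n, 1.0 / n, 2.0 ** (n - 1)
        v = (u + eps) / (1.0 + 2.0 * s)
        w = alpha * u1 + (1.0 - alpha) * v
        rows.append((n, v, w, u))
        u = (u1 + v - math.sqrt((u1 - v) ** 2 + 1.0 / (n + 1))) / 2.0
    return rows

for n, v, w, u in run_scheme_2_14():
    print(f"{n:2d}  {v: .15f}  {w: .15f}  {u: .15f}")
```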

Applications to minimization problems

Let $h : E \to (-\infty, +\infty]$ be a proper convex, lower-semicontinuous function. The subdifferential $\partial h$ of $h$ is defined, for each $x \in E$, by

$$\partial h(x) = \{ z \in E^{*} : h(x) + \langle y - x, z \rangle \le h(y), \ \forall y \in E \}.$$

Theorem 2.19

Let $E$, $S$, $\{\varepsilon_n\}$, $\{s_n\}$, $\{\tau_n\}$, and $\{\alpha_n\}$ be the same as those in Corollary 2.3. Let $h : E \to (-\infty, +\infty]$ be a proper convex, lower-semicontinuous function. Let $\{u_n\}$ be generated by

$$\begin{cases} u_1 \in E, \quad \varepsilon_1 \in E,\\ v_n = \operatorname{argmin}_{z \in E} \big\{ h(z) + \frac{1}{2 s_n} \|z\|^{2} - \frac{1}{s_n} \langle z, J(u_n + \varepsilon_n) \rangle \big\},\\ w_n = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J S v_n],\\ V_1 = W_1 = E,\\ V_{n+1} = \{z \in E : \langle v_n - z, J(u_n + \varepsilon_n) - J v_n \rangle \ge 0\} \cap V_n,\\ W_{n+1} = \{z \in V_{n+1} : \varphi(z, w_n) \le \alpha_n \varphi(z, u_n) + (1 - \alpha_n) \varphi(z, v_n)\} \cap W_n,\\ U_{n+1} = \{z \in W_{n+1} : \|u_1 - z\|^{2} \le \|P_{W_{n+1}}(u_1) - u_1\|^{2} + \tau_{n+1}\},\\ u_{n+1} \in U_{n+1}, \quad n \in \mathbb{N}. \end{cases}$$

Then:

  1. if $v_n = u_n + \varepsilon_n$ and $w_n = J^{-1}[\alpha_n J u_n + (1 - \alpha_n) J(u_n + \varepsilon_n)]$ for all $n \in \mathbb{N}$, then $u_n + \varepsilon_n \in N(\partial h) \cap F(S)$;

  2. if $N(\partial h) \cap F(S) \neq \emptyset$ and $\inf_n s_n > 0$, then the iterative sequence $u_n \to y_0 = P_{\bigcap_{n=1}^{\infty} W_n}(u_1) \in N(\partial h) \cap F(S)$ as $n \to \infty$.

Proof

Similar to [11], $v_n = \operatorname{argmin}_{z \in E} \{ h(z) + \frac{1}{2 s_n} \|z\|^{2} - \frac{1}{s_n} \langle z, J(u_n + \varepsilon_n) \rangle \}$ is equivalent to $0 \in \partial h(v_n) + \frac{1}{s_n} J v_n - \frac{1}{s_n} J(u_n + \varepsilon_n)$. Then $v_n = (J + s_n \partial h)^{-1} J(u_n + \varepsilon_n)$. Since $\partial h$ is maximal monotone, Corollary 2.3 ensures the desired results.

This completes the proof. □
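In the Hilbert-space setting ($J = I$), the $v_n$-step of Theorem 2.19 is the classical proximal map $v_n = \operatorname{prox}_{s_n h}(u_n + \varepsilon_n) = (I + s_n \partial h)^{-1}(u_n + \varepsilon_n)$. For the illustrative choice $h(z) = |z|$ on the real line (our example, not from the paper), $\partial h$ is maximal monotone and the resolvent is soft-thresholding:

```python
def prox_abs(u, s):
    """prox_{s|.|}(u) = argmin_z { |z| + (1/(2s)) (z - u)^2 }
    = (I + s * d|.|)^{-1}(u): soft-thresholding with threshold s."""
    if u > s:
        return u - s
    if u < -s:
        return u + s
    return 0.0   # inside [-s, s] the subgradient 0 in d|.|(0) absorbs u

print(prox_abs(2.0, 0.5))   # 1.5
print(prox_abs(-0.3, 0.5))  # 0.0
```

With this `prox_abs` as the resolvent, the scalar scheme (2.3) of Corollary 2.13 applies verbatim to minimizing $h(z) = |z|$, whose unique null point of $\partial h$ is $0$.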

Theorem 2.20

We only make the following changes in Theorem 2.19: $w_n = J^{-1}[\alpha_n J u_1 + (1 - \alpha_n) J S v_n]$ and $W_{n+1} = \{z \in V_{n+1} : \varphi(z, w_n) \le \alpha_n \varphi(z, u_1) + (1 - \alpha_n) \varphi(z, v_n)\} \cap W_n$. Then, under the assumptions of Corollary 2.6, the conclusions of Theorem 2.19 still hold.

Acknowledgements

Supported by the National Natural Science Foundation of China (11071053), Natural Science Foundation of Hebei Province (A2014207010), Key Project of Science and Research of Hebei Educational Department (ZD2016024), Key Project of Science and Research of Hebei University of Economics and Business (2016KYZ07), Youth Project of Science and Research of Hebei University of Economics and Business (2017KYQ09) and Youth Project of Science and Research of Hebei Educational Department (QN2017328).

Authors’ contributions

All authors contributed equally to the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Li Wei, Email: diandianba@yahoo.com.

Ravi P. Agarwal, Email: ravi.agarwal@tamuk.edu

References

  • 1. Takahashi W. Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama: Yokohama Publishers; 2000.
  • 2. Agarwal R.P., O'Regan D., Sahu D.R. Fixed Point Theory for Lipschitz-Type Mappings with Applications. Berlin: Springer; 2008.
  • 3. Pascali D., Sburlan S. Nonlinear Mappings of Monotone Type. The Netherlands: Sijthoff and Noordhoff; 1978.
  • 4. Alber Y.I. Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos A.G., editor. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. New York: Dekker; 1996. pp. 15–50.
  • 5. Zhang J.L., Su Y.F., Cheng Q.Q. Simple projection algorithm for a countable family of weak relatively nonexpansive mappings and applications. Fixed Point Theory Appl. 2012;2012:205. doi: 10.1186/1687-1812-2012-205.
  • 6. Zhang J.L., Su Y.F., Cheng Q.Q. Hybrid algorithm of fixed point for weak relatively nonexpansive multivalued mappings and applications. Abstr. Appl. Anal. 2012;2012:479438.
  • 7. Matsushita S., Takahashi W. A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory. 2005;134:257–266. doi: 10.1016/j.jat.2005.02.007.
  • 8. Liu Y. Weak convergence of a hybrid type method with errors for a maximal monotone mapping in Banach spaces. J. Inequal. Appl. 2015;2015:260. doi: 10.1186/s13660-015-0772-7.
  • 9. Su Y.F., Li M.Q., Zhang H. New monotone hybrid algorithm for hemi-relatively nonexpansive mappings and maximal monotone operators. Appl. Math. Comput. 2011;217:5458–5465.
  • 10. Wei L., Tan R. Iterative schemes for finite families of maximal monotone operators based on resolvents. Abstr. Appl. Anal. 2014;2014:451279.
  • 11. Wei L., Cho Y.J. Iterative schemes for zero points of maximal monotone operators and fixed points of nonexpansive mappings and their applications. Fixed Point Theory Appl. 2008;2008:168468. doi: 10.1155/2008/168468.
  • 12. Wei L., Su Y.F., Zhou H.Y. Iterative convergence theorems for maximal monotone operators and relatively nonexpansive mappings. Appl. Math. J. Chin. Univ. Ser. B. 2008;23(3):319–325. doi: 10.1007/s11766-008-1951-9.
  • 13. Klin-eam C., Suantai S., Takahashi W. Strong convergence of generalized projection algorithms for nonlinear operators. Abstr. Appl. Anal. 2009;2009:649831. doi: 10.1155/2009/649831.
  • 14. Wei L., Su Y.F., Zhou H.Y. Iterative schemes for strongly relatively nonexpansive mappings and maximal monotone operators. Appl. Math. J. Chin. Univ. Ser. B. 2010;25(2):199–208. doi: 10.1007/s11766-010-2195-z.
  • 15. Inoue G., Takahashi W., Zembayashi K. Strong convergence theorems by hybrid methods for maximal monotone operators and relatively nonexpansive mappings in Banach spaces. J. Convex Anal. 2009;16:791–806.
  • 16. Mosco U. Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 1969;3(4):510–585. doi: 10.1016/0001-8708(69)90009-7.
  • 17. Tsukada M. Convergence of best approximations in a smooth Banach space. J. Approx. Theory. 1984;40:301–309. doi: 10.1016/0021-9045(84)90003-0.
  • 18. Kamimura S., Takahashi W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2002;13(3):938–945. doi: 10.1137/S105262340139611X.
  • 19. Xu H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991;16(12):1127–1138.

Articles from Journal of Inequalities and Applications are provided here courtesy of Springer
