Journal of Inequalities and Applications 2018(1):254 (published 21 September 2018). doi: 10.1186/s13660-018-1845-1

Viscosity iterative algorithm for the zero point of monotone mappings in Banach spaces

Yan Tang

Abstract

Inspired by the work of Zegeye (J. Math. Anal. Appl. 343:663–671, 2008) and the recent papers of Chidume et al. (Fixed Point Theory Appl. 2016:97, 2016; Br. J. Math. Comput. Sci. 18:1–14, 2016), we devise a viscosity iterative algorithm, without involving the resolvent operator, for approximating a zero of a monotone mapping in the setting of uniformly convex Banach spaces. Under concise parameter conditions we establish strong convergence of the proposed algorithm. Moreover, applications to constrained convex minimization problems and to the solution of Hammerstein integral equations are included. Finally, computational examples and a comparison with related algorithms are presented to illustrate the efficiency and applicability of our new algorithm.

Keywords: Monotone mapping, Zero point, Viscosity approximation, Strong convergence

Introduction

Let $H$ be a real inner product space. A map $A: D(A)\subseteq H \to 2^{H}$ is called monotone if, for each $x, y \in D(A)$, the following inequality holds:

$$\langle \xi - \eta, x - y\rangle \ge 0 \quad \text{for all } \xi \in Ax,\ \eta \in Ay. \tag{1.1}$$

Interest in monotone mappings derives mainly from their numerous significant applications. Consider, for example, the classical convex optimization problem: let $h: H \to \mathbb{R}\cup\{\infty\}$ be a proper convex, lower semicontinuous (l.s.c.) function. The sub-differential of $h$, $\partial h: H \to 2^{H}$, is defined at $x \in H$ by

$$\partial h(x) = \big\{x^{*} \in H : h(y) - h(x) \ge \langle y - x, x^{*}\rangle,\ \forall y \in H\big\}.$$

Clearly, $\partial h$ is a monotone operator on $H$, and $0 \in \partial h(x_0)$ if and only if $x_0$ is a minimizer of $h$. Thus, setting $A := \partial h$, solving the inclusion $0 \in Au$ yields a minimizer of $h$.

In addition, the inclusion $0 \in Au$, where $A$ is a monotone map from a real Hilbert space to itself, also appears in several systems, in particular evolution systems:

$$\frac{du}{dt} + Au = 0,$$

where $A$ is a monotone map. At an equilibrium state $\frac{du}{dt} = 0$, so that $Au = 0$; thus a solution of $Au = 0$ corresponds to an equilibrium state of the dynamical system (see, e.g., Zarantonello [4], Minty [5], Kac̆urovskii [6], Chidume [7], Berinde [8], and others).

For solving the problem of finding a solution of the inclusion $0 \in Au$, Martinet [9] introduced the following well-known iteration method: for $n \in \mathbb{N}$, $\lambda_n > 0$, and $x_1 \in H$,

$$x_{n+1} = J_{\lambda_n}x_n,$$

where $J_{\lambda_n} = (I + \lambda_n A)^{-1}$ is the well-known (Yosida) resolvent operator and $A$ is a monotone operator on a Hilbert space $H$.

This is a successful and powerful algorithm for finding a solution of the inclusion $0 \in Au$, and it has since been extended by many authors (see, e.g., Rockafellar [10], Chidume [11], Xu [12], Tang [13], Qin et al. [14]).
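To make the resolvent step concrete, here is a minimal sketch (an illustration added here, not code from the original paper) of Martinet's proximal point iteration $x_{n+1} = (I + \lambda_n A)^{-1}x_n$ in $\mathbb{R}^{2}$ for the monotone map $A = \nabla h$ of a convex quadratic $h(x) = \frac{1}{2}x^{\top}Qx$; the matrix $Q$, the starting point, and the choice $\lambda_n \equiv 1$ are illustrative assumptions.

```python
import numpy as np

# Illustrative data: A = grad h with h(x) = 0.5 * x^T Q x, Q symmetric positive definite,
# so A(x) = Q x is monotone and its unique zero is x = 0.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def resolvent(x, lam):
    """Proximal (resolvent) step: solve (I + lam*A) y = x, i.e. (I + lam*Q) y = x."""
    return np.linalg.solve(np.eye(2) + lam * Q, x)

x = np.array([5.0, -3.0])          # x_1, an arbitrary starting point
for n in range(1, 51):
    lam_n = 1.0                    # any positive sequence lambda_n would do here
    x = resolvent(x, lam_n)        # x_{n+1} = (I + lambda_n A)^{-1} x_n

print("approximate zero of A:", x)  # close to the zero 0 of A
```

The point of the resolvent-free algorithms studied below is precisely to avoid solving such a linear (or, in general, nonlinear) system at every step.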

On the other hand, Browder [15] introduced the operator $T: H \to H$ defined by $T = I - A$, where $I$ is the identity mapping on a Hilbert space $H$. The operator $T$ is pseudo-contractive, and the zeros of the monotone operator $A$, if they exist, correspond to the fixed points of $T$. Therefore the approximation of solutions of $Au = 0$ reduces to the approximation of fixed points of a pseudo-contractive mapping.

Gradually, the notion of monotone mapping has been extended to real normed spaces. Let $E$ be a real normed space with dual $E^{*}$. The map $J: E \to 2^{E^{*}}$ defined by

$$Jx := \big\{x^{*} \in E^{*} : \langle x, x^{*}\rangle = \|x\|\,\|x^{*}\|,\ \|x^{*}\| = \|x\|\big\}, \tag{1.2}$$

is called the normalized duality map on $E$. Some properties of the normalized duality map can be found in Alber [16] and the references therein.

Since the normalized duality map $J$ is the identity map $I$ in Hilbert spaces, the approximation of solutions of $0 \in Au$ has, following the idea of Browder [15], been extended to normed spaces by numerous authors (see, for instance, Chidume [17, 18], Agarwal et al. [19], Reich [20], Diop [21], and the references therein), where $A$ is a monotone mapping from $E$ to itself.

Although the above results have attractive theoretical properties, such as weak and strong convergence to a solution of the equation $0 \in Au$, there are still some difficulties to overcome. For instance, the technique of converting a zero of $A$ into a fixed point of $T$ in Browder [15] is not applicable, since in the general case when $A$ is monotone, $A$ maps $E$ into $E^{*}$. In addition, the resolvent technique in Martinet [9] is not convenient to use, because one has to compute the inverse of $(I + \lambda A)$ at each step of the iteration process.

Hence, it is only natural to ask the following question.

Question 1.1

Can we construct an algorithm without involving the resolvent operator to approximate a zero point of A in Banach spaces?

Motivated and inspired by the work of Martinet [9], Rockafellar [10], Zegeye [1], and Chidume et al. [2, 3], as well as Ibaraki and Takahashi [22], we wish to provide an affirmative answer to the question. Our contribution in the present work is a new viscosity iterative method for the solutions of the equation $0 \in AJu$, that is, $Ju \in A^{-1}(0)$, where $A: E^{*} \to E$ is a monotone operator defined on the dual $E^{*}$ of a Banach space $E$ and $J: E \to E^{*}$ is the normalized duality map.

The outline of the paper is as follows. In Sect. 2, we collect definitions and results which are needed for our further analysis. In Sect. 3, our implicit and explicit algorithms, which do not involve a resolvent operator, are introduced and analyzed, and strong convergence to a zero of the composed mapping $AJ$ is obtained under concise parameter conditions. In addition, the main result is applied to constrained convex minimization problems and to the solution of Hammerstein equations. Finally, some numerical experiments and a comparison with related algorithms are given to illustrate the performance of the new algorithms.

Preliminaries

In the sequel, we shall need the following definitions and results.

Let $E$ be a uniformly convex Banach space and $E^{*}$ be its dual space, and let the normalized duality map $J$ on $E$ be defined as in (1.2). Then the following properties of the normalized duality map hold (see, e.g., Alber [16], Cioranescu [23], Xu and Roach [24], Xu [25], Zălinescu [26]):

  • (i)

    J is a monotone operator;

  • (ii)

    if E is smooth, then J is single-valued;

  • (iii)

    if E is reflexive, then J is onto;

  • (iv)

    if E is uniformly smooth, then J is uniformly continuous on bounded subsets of E.

The space $E$ is said to be smooth if $\rho_E(\tau) > 0$ for all $\tau > 0$, and uniformly smooth if $\lim_{\tau\to 0^{+}}\frac{\rho_E(\tau)}{\tau} = 0$, where the modulus of smoothness $\rho_E(\tau)$ is defined by

$$\rho_E(\tau) = \sup\Big\{\frac{\|x + y\| + \|x - y\|}{2} - 1 :\ \|x\| = 1,\ \|y\| = \tau\Big\}.$$

Let $p > 1$. The space $E$ is said to be $p$-uniformly smooth if there exists a constant $c > 0$ such that $\rho_E(\tau) \le c\tau^{p}$ for all $\tau > 0$. It is well known that every $p$-uniformly smooth Banach space is uniformly smooth. Furthermore, from Alber [16], if $E$ is 2-uniformly smooth, then its normalized duality map is Lipschitz continuous; that is, there exists a constant $L_{*} > 0$ such that

$$\|Jx - Jy\| \le L_{*}\|x - y\|, \quad \forall x, y \in E.$$

A mapping $A: D(A)\subseteq E \to E^{*}$ is said to be monotone on a Banach space $E$ if, for each $x, y \in D(A)$, the following inequality holds:

$$\langle x - y, Ax - Ay\rangle \ge 0.$$

A mapping $A: D(A)\subseteq E \to E^{*}$ is said to be Lipschitz continuous if there exists $L > 0$ such that, for each $x, y \in D(A)$,

$$\|Ax - Ay\|_{E^{*}} \le L\|x - y\|_{E}.$$

A mapping $f: E \to E$ is called contractive (a contraction) if there exists a constant $\rho \in (0,1)$ such that

$$\|f(x) - f(y)\| \le \rho\|x - y\|, \quad \forall x, y \in E.$$

Let $C$ be a nonempty closed convex subset of a uniformly convex Banach space $E$. A Banach limit $\mu$ is a bounded linear functional on $l^{\infty}$ such that

$$\inf\{x_n : n \in \mathbb{N}\} \le \mu(x) \le \sup\{x_n : n \in \mathbb{N}\}, \quad \forall x = \{x_n\} \in l^{\infty},$$

and $\mu(x_n) = \mu(x_{n+1})$ for all $\{x_n\} \in l^{\infty}$. Suppose that $\{x_n\}$ is a bounded sequence in $E$. Then the real-valued function $\varphi$ on $E$ defined by

$$\varphi(y) = \mu\|x_n - y\|^{2}, \quad y \in E, \tag{2.1}$$

is convex and continuous, and $\varphi(y) \to \infty$ as $\|y\| \to \infty$. If $E$ is reflexive, there exists $z \in C$ such that $\varphi(z) = \min_{y \in C}\varphi(y)$ (see, e.g., Kamimura and Takahashi [27], Tan and Xu [28]), so we can define the set $C_{\min}$ by

$$C_{\min} = \Big\{z \in C : \varphi(z) = \min_{y \in C}\varphi(y)\Big\}. \tag{2.2}$$

It is easy to verify that $C_{\min}$ is a nonempty, bounded, closed, and convex subset of $E$. The following lemma was proved in Takahashi [29].

Lemma 2.1

Let $\alpha$ be a real number and $(x_0, x_1, \ldots) \in l^{\infty}$ be such that $\mu(x_n) \le \alpha$ for all Banach limits $\mu$. If $\limsup_{n\to\infty}(x_{n+1} - x_n) \le 0$, then $\limsup_{n\to\infty}x_n \le \alpha$.

Lemma 2.2

(see, e.g., Tan and Xu [28], Osilike and Aniagbosor [30])

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the relation

$$a_{n+1} \le (1 - \theta_n)a_n + \sigma_n, \quad n \ge 0,$$

where $\{\theta_n\}$ and $\{\sigma_n\}$ are real sequences such that

  • (i)

    $\lim_{n\to\infty}\theta_n = 0$, $\sum_{n=1}^{\infty}\theta_n = \infty$;

  • (ii)

    $\limsup_{n\to\infty}\frac{\sigma_n}{\theta_n} \le 0$ or $\sum_{n=0}^{\infty}\sigma_n < \infty$.

Then the sequence $\{a_n\}$ converges to 0.

Lemma 2.3

(see, e.g., Xu [12])

Let $E$ be a real Banach space with dual $E^{*}$, and let $J: E \to 2^{E^{*}}$ be the normalized duality map. Then, for all $x, y \in E$,

$$\|x + y\|^{2} \le \|x\|^{2} + 2\langle y, j(x + y)\rangle, \quad \forall j(x + y) \in J(x + y).$$

Lemma 2.4

(Zegeye [1])

Let $E$ be a uniformly convex and uniformly smooth Banach space. Assume that $A: E^{*} \to E$ is a maximal monotone mapping such that $(AJ)^{-1}(0) \neq \emptyset$. Then, for any $u \in E$ and $t \in (0,1)$, the path $t \mapsto x_t \in E$ defined by

$$x_t = tu + (1 - t)(I - AJ)x_t, \tag{2.3}$$

converges strongly to an element $z \in (AJ)^{-1}(0)$ as $t \to 0$.

Main results

We now show the strong convergence of our implicit and explicit algorithms.

Theorem 3.1

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E^{*} \to E$ is an $L$-Lipschitz continuous monotone mapping such that $(AJ)^{-1}(0) \neq \emptyset$ and $f: E \to E$ is a contraction with coefficient $\rho \in (0,1)$. Then the path $t \mapsto x_t \in E$, defined by

$$x_t = tf(x_t) + (1 - t)\big(I - \omega(t)AJ\big)x_t, \tag{3.1}$$

converges strongly to an element $z \in (AJ)^{-1}(0)$, provided that $\lim_{t\to 0}\frac{\omega(t)}{t} = 0$.

Proof

Since $E$ is 2-uniformly smooth, from Alber [16, 31] we have that $J$ is $L_{*}$-Lipschitz continuous. Noticing that $A$ is $L$-Lipschitz continuous, we conclude that $I - AJ$ is Lipschitz continuous with constant $1 + LL_{*}$.

First, we show that $x_t$ is well defined. Since $\lim_{t\to 0}\frac{\omega(t)}{t} = 0$, for every $\varepsilon > 0$ there exists $\delta > 0$ such that, for all $t \in (0, \delta)$, the inequality $\big|\frac{\omega(t)}{t}\big| < \varepsilon$ holds.

Without loss of generality, we take $\varepsilon > 0$ such that $\rho + \varepsilon LL_{*} =: b < 1$, where $b$ is a positive constant. Define an operator $T_t$ by $T_t x = f(x) - (1 - t)\frac{\omega(t)}{t}AJx$. For $x, y \in E$, we get

$$\begin{aligned} \|T_t x - T_t y\| &= \Big\|f(x) - (1 - t)\tfrac{\omega(t)}{t}AJx - f(y) + (1 - t)\tfrac{\omega(t)}{t}AJy\Big\| \\ &= \Big\|f(x) - f(y) - (1 - t)\tfrac{\omega(t)}{t}(AJx - AJy)\Big\| \\ &\le \|f(x) - f(y)\| + \Big|(1 - t)\tfrac{\omega(t)}{t}\Big|\,\|AJx - AJy\| \\ &\le (\rho + \varepsilon LL_{*})\|x - y\| = b\|x - y\|, \end{aligned}$$

which means that $T_t$ is a contraction. Therefore, by the Banach contraction principle, $T_t$ has a unique fixed point, denoted by $x_t$; that is, $x_t = tf(x_t) + (1 - t)(I - \omega(t)AJ)x_t$, so $x_t$ is well defined.

Next we show that $x_t$ remains bounded as $t \to 0$. For $x^{*} \in (AJ)^{-1}(0)$, we have the following estimate:

$$\begin{aligned} \|x_t - x^{*}\| &= \Big\|f(x_t) - x^{*} - (1 - t)\tfrac{\omega(t)}{t}(AJx_t - AJx^{*})\Big\| \\ &\le \|f(x_t) - f(x^{*})\| + \|f(x^{*}) - x^{*}\| + (1 - t)\tfrac{\omega(t)}{t}LL_{*}\|x_t - x^{*}\| \\ &\le (\rho + \varepsilon LL_{*})\|x_t - x^{*}\| + \|f(x^{*}) - x^{*}\|, \end{aligned}$$

hence,

$$\|x_t - x^{*}\| \le \frac{1}{1 - b}\|f(x^{*}) - x^{*}\|, \quad \forall t \in (0, \delta),$$

which means that $\{x_t\}$ is bounded as $t \to 0$, and therefore so is $\{f(x_t)\}$.

On the other hand, for arbitrary $u \in E$, (3.1) can be rewritten as

$$x_t = tu + (1 - t)(I - AJ)x_t + t\big(f(x_t) - u\big) + (1 - t)\big(1 - \omega(t)\big)AJx_t,$$

hence,

$$(1 - t)\big(1 - \omega(t)\big)AJx_t = x_t - tu - (1 - t)(I - AJ)x_t - t\big(f(x_t) - u\big),$$

which, in view of Lemma 2.4 and $\lim_{t\to 0}\frac{\omega(t)}{t} = 0$, means that $x_t$ converges strongly to an element $z \in (AJ)^{-1}(0)$ as $t \to 0$. The proof is complete. □
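As a small illustration of the well-definedness argument (added here, not part of the original paper), the sketch below computes $x_t$ of (3.1) by Picard iteration of the contraction $T_t x = f(x) - (1-t)\frac{\omega(t)}{t}AJx$ for the scalar example $Ax = ax$, $Jx = x$ used later in the numerical section; the choices $f(x) = x/2$, $a = 1/2$, and $\omega(t) = t^{2}$ (so that $\omega(t)/t \to 0$) are illustrative assumptions.

```python
# Scalar illustration: E = R, A x = a*x, J x = x, f(x) = x/2 (a contraction with rho = 1/2).
a = 0.5
f = lambda x: 0.5 * x
AJ = lambda x: a * x
omega = lambda t: t ** 2          # omega(t)/t = t -> 0 as t -> 0

def x_t(t, x0=1.0, iters=200):
    """Fixed point of T_t x = f(x) - (1 - t)*(omega(t)/t)*AJ(x), found by Picard iteration."""
    x = x0
    for _ in range(iters):
        x = f(x) - (1.0 - t) * (omega(t) / t) * AJ(x)
    return x

for t in [0.5, 0.1, 0.01, 0.001]:
    print(t, x_t(t))   # for this toy choice the fixed point is 0, the unique zero of AJ
```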

For the rest of the paper, $\{\alpha_n\}$ and $\{\omega_n\}$ are real sequences in $(0,1)$ satisfying the following conditions:

(C1)

$\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$; $\lim_{n\to\infty}\frac{\omega_n}{\alpha_n} = 0$ and $\sum_{n=0}^{\infty}\omega_n < \infty$;

(C2)

$f$ is a piecewise mapping: $f(x) = x$ if $x \in (AJ)^{-1}(0)$; otherwise $f$ is a contraction with coefficient $\rho \in (0,1)$.

Theorem 3.2

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E^{*} \to E$ is an $L$-Lipschitz continuous monotone mapping such that $C_{\min}\cap(AJ)^{-1}(0) \neq \emptyset$ and $f: E \to E$ is a piecewise mapping defined as in (C2). Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)(I - \omega_n AJ)x_n, \tag{3.2}$$

converges strongly to an element $z \in (AJ)^{-1}(0)$.

Proof

According to the definition of $f$, it is obvious that if $x_n \in (AJ)^{-1}(0)$, then we stop the iteration. Otherwise, we set $n := n + 1$ and return to iterative step (3.2).

The proof includes three steps.

Step 1: First we prove that $\{x_n\}$ is bounded. Since $\alpha_n \to 0$ and $\lim_{n\to\infty}\frac{\omega_n}{\alpha_n} = 0$, there exists $N_0 > 0$ such that $\alpha_n \le \frac{1}{6}$ and $\frac{\omega_n}{\alpha_n} \le \frac{1}{6LL_{*}}$ for all $n > N_0$. We take $x^{*} \in (AJ)^{-1}(0)$, that is, $Jx^{*} \in A^{-1}(0)$. Let $r > 0$ be sufficiently large such that $x_{N_0} \in B_r(x^{*})$ and $f(x_{N_0}) \in B_{r/6}(x^{*})$.

We show that $\{x_n\}$ belongs to $B := B_r(x^{*})$ for all integers $n \ge N_0$. First, it is clear by construction that $x_{N_0} \in B$. Assuming now that, for an arbitrary $n > N_0$, $x_n \in B$, we prove that $x_{n+1} \in B$.

If $x_{n+1}$ does not belong to $B$, then $\|x_{n+1} - x^{*}\| > r$. From the recurrence (3.2) we obtain

$$x_{n+1} - x_n = \alpha_n f(x_n) + (1 - \alpha_n)(I - \omega_n AJ)x_n - x_n.$$

Thus,

$$x_{n+1} - x_n = \alpha_n\big(f(x_n) - x_n\big) - (1 - \alpha_n)\omega_n AJx_n. \tag{3.3}$$

Therefore, from (3.3), Lemma 2.3, and the fact that $x_{n+1} - x^{*} = x_{n+1} - x_n + x_n - x^{*}$,

$$\begin{aligned} \|x_{n+1} - x^{*}\|^{2} &= \|x_{n+1} - x_n + x_n - x^{*}\|^{2} \le \|x_n - x^{*}\|^{2} + 2\big\langle x_{n+1} - x_n, j(x_{n+1} - x^{*})\big\rangle \\ &= \|x_n - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x_n) - (1 - \alpha_n)\omega_n AJx_n, j(x_{n+1} - x^{*})\big\rangle \\ &= \|x_n - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x_n) - (1 - \alpha_n)\omega_n AJx_n + \alpha_n(x_{n+1} - x^{*}) - \alpha_n(x_{n+1} - x^{*}), j(x_{n+1} - x^{*})\big\rangle \\ &= \|x_n - x^{*}\|^{2} - 2\alpha_n\|x_{n+1} - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x_n) - (1 - \alpha_n)\omega_n AJx_n + \alpha_n(x_{n+1} - x^{*}), j(x_{n+1} - x^{*})\big\rangle \\ &= \|x_n - x^{*}\|^{2} - 2\alpha_n\|x_{n+1} - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x^{*}) + \alpha_n(x_{n+1} - x_n) - (1 - \alpha_n)\omega_n AJx_n, j(x_{n+1} - x^{*})\big\rangle, \end{aligned}$$

that is,

$$\begin{aligned} \|x_{n+1} - x^{*}\|^{2} &\le \|x_n - x^{*}\|^{2} - 2\alpha_n\|x_{n+1} - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x^{*}) + \alpha_n^{2}(f(x_n) - x_n) - \alpha_n(1 - \alpha_n)\omega_n AJx_n - (1 - \alpha_n)\omega_n AJx_n, j(x_{n+1} - x^{*})\big\rangle \\ &\le \|x_n - x^{*}\|^{2} - 2\alpha_n\|x_{n+1} - x^{*}\|^{2} + 2\big\langle \alpha_n(f(x_n) - x^{*}) + \alpha_n^{2}(f(x_n) - x^{*}) - \alpha_n^{2}(x_n - x^{*}) - (1 - \alpha_n^{2})\omega_n AJx_n, j(x_{n+1} - x^{*})\big\rangle \\ &\le \|x_n - x^{*}\|^{2} - 2\alpha_n\|x_{n+1} - x^{*}\|^{2} + 2\big[2\alpha_n\|f(x_n) - x^{*}\| + \alpha_n^{2}\|x_n - x^{*}\| + (1 - \alpha_n^{2})\omega_n\|AJx_n - AJx^{*}\|\big]\|x_{n+1} - x^{*}\|. \end{aligned}$$

Since $\|x_{n+1} - x^{*}\| > \|x_n - x^{*}\|$, and $A$ and $J$ are $L$- and $L_{*}$-Lipschitz continuous respectively, we thus get

$$\alpha_n\|x_{n+1} - x^{*}\| \le 2\alpha_n\|f(x_n) - x^{*}\| + \alpha_n^{2}\|x_n - x^{*}\| + 2(1 - \alpha_n)\omega_n LL_{*}\|x_n - x^{*}\|.$$

Furthermore,

$$\|x_{n+1} - x^{*}\| \le 2\|f(x_n) - x^{*}\| + \alpha_n\|x_n - x^{*}\| + 2(1 - \alpha_n)\frac{\omega_n}{\alpha_n}LL_{*}\|x_n - x^{*}\| \le 2\cdot\frac{r}{6} + \frac{r}{3} + 2\cdot\frac{1}{6LL_{*}}\cdot LL_{*}\cdot r \le r.$$

This is a contradiction. Consequently, $\{x_n\}$ belongs to $B$ for all integers $n \ge N_0$, which implies that the sequence $\{x_n\}$ is bounded, and so are the sequences $\{f(x_n)\}$ and $\{AJx_n\}$.

Moreover, it is easy to see that $x_{n+1} - x_n \to 0$ because $\alpha_n \to 0$ and $\omega_n = o(\alpha_n)$:

$$\|x_{n+1} - x_n\| \le \alpha_n\|f(x_n) - x_n\| + (1 - \alpha_n)\omega_n\|AJx_n\| \to 0.$$

Step 2: We show that $\limsup_{n\to\infty}\langle f(x_n) - z, j(x_{n+1} - z)\rangle \le 0$, where $z \in C_{\min}\cap(AJ)^{-1}(0)$.

Since the sequences $\{x_n\}$ and $\{f(x_n)\}$ are bounded, there exists $R > 0$ sufficiently large such that $f(x_n), x_n \in B_1 := B_R(z)$ for all $n \in \mathbb{N}$. Furthermore, the set $B_1$ is a nonempty bounded closed convex subset of $E$. By the convexity of $B_1$, we have $(1 - t)z + tf(x_n) \in B_1$. Then it follows from the definition of $\varphi$ that $\varphi(z) \le \varphi\big((1 - t)z + tf(x_n)\big)$. Using Lemma 2.3, we have

$$\|x_n - z - t(f(x_n) - z)\|^{2} \le \|x_n - z\|^{2} - 2t\big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle,$$

thus, taking the Banach limit over $n \ge 1$,

$$\mu\|x_n - z - t(f(x_n) - z)\|^{2} \le \mu\|x_n - z\|^{2} - 2t\mu\big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle,$$

which means that

$$2t\mu\big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle \le \mu\|x_n - z\|^{2} - \mu\|x_n - z - t(f(x_n) - z)\|^{2} = \varphi(z) - \varphi\big(z + t(f(x_n) - z)\big) \le 0,$$

that is,

$$\mu\big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle \le 0.$$

By using the weak lower semi-continuity of the norm on $E$, we get the following as $t \to 0$:

$$\big\langle f(x_n) - z, j(x_n - z)\big\rangle - \big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle \to 0.$$

Thus, for every $\varepsilon > 0$, there exists $\delta > 0$ such that, for all $t \in (0, \delta)$ and $n \ge 1$,

$$\big\langle f(x_n) - z, j(x_n - z)\big\rangle < \big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle + \varepsilon,$$

therefore, together with the previous inequality,

$$\mu\big\langle f(x_n) - z, j(x_n - z)\big\rangle < \mu\big\langle f(x_n) - z, j\big(x_n - z - t(f(x_n) - z)\big)\big\rangle + \varepsilon \le \varepsilon.$$

In view of the arbitrariness of $\varepsilon$, we have that

$$\mu\big\langle f(x_n) - z, j(x_n - z)\big\rangle \le 0.$$

From the norm-to-weak$^{*}$ uniform continuity of $J$ on each bounded subset of $E$ and the fact that $x_{n+1} - x_n \to 0$, we have that

$$\lim_{n\to\infty}\Big(\big\langle f(x_n) - z, j(x_{n+1} - z)\big\rangle - \big\langle f(x_n) - z, j(x_n - z)\big\rangle\Big) = 0.$$

Thus, the sequence $\{\langle f(x_n) - z, j(x_n - z)\rangle\}$ satisfies the conditions of Lemma 2.1, so we have

$$\limsup_{n\to\infty}\big\langle f(x_n) - z, j(x_{n+1} - z)\big\rangle \le 0. \tag{3.4}$$

Step 3: Next we show that $\|x_{n+1} - z\| \to 0$.

From (3.2), (3.3), and Lemma 2.3 we have that

$$\begin{aligned} \|x_{n+1} - z\|^{2} &= \|x_{n+1} - x_n + x_n - z\|^{2} = \big\|x_n - z + \alpha_n(f(x_n) - x_n) - (1 - \alpha_n)\omega_n AJx_n\big\|^{2} \\ &= \big\|(1 - \alpha_n)(x_n - z) + \alpha_n(f(x_n) - z) - (1 - \alpha_n)\omega_n AJx_n\big\|^{2} \\ &\le (1 - \alpha_n)^{2}\|x_n - z\|^{2} + 2\big\langle \alpha_n(f(x_n) - z) - (1 - \alpha_n)\omega_n AJx_n, j(x_{n+1} - z)\big\rangle. \end{aligned}$$

In view of the fact that the sequence $\{x_n\}$ is bounded, we may set $M := \sup_n\|x_n - z\|$; therefore,

$$\begin{aligned} \|x_{n+1} - z\|^{2} &\le (1 - \alpha_n)^{2}\|x_n - z\|^{2} + 2\big\langle \alpha_n(f(x_n) - z) - (1 - \alpha_n)\omega_n AJx_n, j(x_{n+1} - z)\big\rangle \\ &\le (1 - \alpha_n)\|x_n - z\|^{2} + 2\alpha_n\big\langle f(x_n) - z, j(x_{n+1} - z)\big\rangle + 2(1 - \alpha_n)\omega_n\|AJz - AJx_n\|\,\|x_{n+1} - z\| \\ &\le (1 - \alpha_n)\|x_n - z\|^{2} + \sigma_n, \end{aligned}$$

where $\sigma_n = 2\alpha_n\langle f(x_n) - z, j(x_{n+1} - z)\rangle + 2\omega_n LL_{*}M^{2}$.

From Lemma 2.2 and (3.4) we obtain that

$$\lim_{n\to\infty}\|x_n - z\| = 0,$$

which means that the sequence $\{x_n\}$ converges strongly to $z$. The proof is complete. □
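To make the recursion (3.2) concrete, here is a minimal sketch (added for illustration; the map $A$, the contraction used off the solution set, and the parameters, which are those of Example 1 below with $p = 1/2$, are all assumptions) of the explicit scheme in the Hilbert-space case $E = \mathbb{R}^{2}$, where $J$ is the identity.

```python
import numpy as np

# Illustrative monotone Lipschitz map on R^2 (J = I in a Hilbert space): A(x) = M x.
M = np.array([[1.0, 0.2],
              [-0.2, 0.5]])       # monotone because its symmetric part is positive definite
A = lambda x: M @ x

def f(x, tol=1e-12):
    """Piecewise map of condition (C2): identity on the solution set, a contraction elsewhere."""
    return x if np.linalg.norm(A(x)) <= tol else 0.5 * x

x = np.array([4.0, -2.0])                       # x_0
for n in range(1, 5001):
    alpha = 1.0 / (n + 1) ** 0.5                # Example 1 parameters with p = 1/2
    omega = 1.0 / (n * (n + 1) ** 0.5)
    x = alpha * f(x) + (1 - alpha) * (x - omega * A(x))   # recursion (3.2) with J = I

print("x_n =", x, "  ||A(x_n)|| =", np.linalg.norm(A(x)))
```

The only zero of this particular $A$ is the origin, which is also the fixed point of the contraction part of $f$, so the iterates shrink toward it; in a Banach space, $A(x)$ would be replaced by $AJx$.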

Theorem 3.3

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E^{*} \to E$ is an $L$-Lipschitz continuous monotone mapping such that $C_{\min}\cap(AJ)^{-1}(0) \neq \emptyset$. Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n)(I - \omega_n AJ)x_n \tag{3.5}$$

converges strongly to an element $z \in (AJ)^{-1}(0)$.

Proof

Similarly to the proof of Theorem 3.2, we obtain that the sequences $\{x_n\}$ and $\{AJx_n\}$ are bounded. Furthermore, we have that $\limsup_{n\to\infty}\langle x_n - z, j(x_{n+1} - z)\rangle \le 0$, where $z \in C_{\min}\cap(AJ)^{-1}(0)$.

In addition, the recurrence (3.5) can be rewritten as

$$x_{n+1} = x_n - (1 - \alpha_n)\omega_n AJx_n.$$

It is easy to see that $x_{n+1} - x_n = -(1 - \alpha_n)\omega_n AJx_n \to 0$ as $n \to \infty$, since $\omega_n \to 0$.

From the recursion (3.5) and Lemma 2.3 we have that

$$\begin{aligned} \|x_{n+1} - z\|^{2} &= \big\|(1 - \alpha_n)(x_n - z) + \alpha_n(x_n - z) - (1 - \alpha_n)\omega_n AJx_n\big\|^{2} \\ &\le (1 - \alpha_n)^{2}\|x_n - z\|^{2} + 2\alpha_n\big\langle x_n - z, j(x_{n+1} - z)\big\rangle - 2(1 - \alpha_n)\omega_n\big\langle AJx_n, j(x_{n+1} - z)\big\rangle \\ &\le (1 - \alpha_n)\|x_n - z\|^{2} + 2\alpha_n\big\langle x_n - z, j(x_{n+1} - z)\big\rangle + 2(1 - \alpha_n)\omega_n LL_{*}M^{2} \\ &\le (1 - \alpha_n)\|x_n - z\|^{2} + 2\alpha_n\big\langle x_n - z, j(x_{n+1} - z)\big\rangle + 2\omega_n LL_{*}M^{2}, \end{aligned}$$

where $M := \sup_n\|x_n - z\|$. It follows from Lemma 2.2 that $\lim_{n\to\infty}\|x_n - z\| = 0$, which means that the sequence $\{x_n\}$ converges strongly to an element $z \in (AJ)^{-1}(0)$. The proof is complete. □

According to Zegeye [1] and Liu [32], for a mapping $T: E \to E^{*}$, a point $x \in E$ is called a J-fixed point of $T$ if and only if $Tx = Jx$, and $T$ is called semi-pseudo if and only if $A := J - T$ is monotone. We observe that a zero point of $A$ is a J-fixed point of the semi-pseudo mapping $T$. If $E$ is a Hilbert space, a semi-pseudo mapping and a J-fixed point coincide with a pseudo-contractive mapping and a fixed point of the pseudo-contraction, respectively. In the case that the semi-pseudo mapping $T$ is from $E^{*}$ to $E$, we have that $AJ := (J^{-1} - T)J$ is monotone and the J-fixed point set is denoted by $F_J(T) = \{x \in E : x = TJx\}$. We have the following corollaries for semi-pseudo mappings from $E^{*}$ to $E$.

Corollary 3.4

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $T: E^{*} \to E$ is an $L$-Lipschitz continuous semi-pseudo mapping such that $C_{\min}\cap F_J(T) \neq \emptyset$ and $f: E \to E$ is a piecewise mapping defined as in (C2). Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)\big((1 - \omega_n)I + \omega_n TJ\big)x_n$$

converges strongly to an element $z \in F_J(T)$.

Corollary 3.5

(Zegeye [1])

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E^{*} \to E$ is an $L$-Lipschitz continuous monotone mapping such that $C_{\min}\cap(AJ)^{-1}(0) \neq \emptyset$. Then, for any $u \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)(I - \omega_n AJ)x_n$$

converges strongly to an element $z \in (AJ)^{-1}(0)$.

Proof

Take $f(x) \equiv u$ in Theorem 3.2; the result follows. □

If we exchange the roles of $E$ and $E^{*}$, then we obtain the following results.

Theorem 3.6

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E \to E^{*}$ is an $L$-Lipschitz continuous monotone mapping such that $C_{\min}\cap(AJ^{-1})^{-1}(0) \neq \emptyset$. Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = J^{-1}\big(\alpha_n Jx_n + (1 - \alpha_n)(J - \omega_n A)x_n\big), \quad n \ge 1,$$

converges strongly to an element $z \in (AJ^{-1})^{-1}(0)$.

Theorem 3.7

(Zegeye [1])

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $A: E \to E^{*}$ is an $L$-Lipschitz continuous monotone mapping such that $C_{\min}\cap(AJ^{-1})^{-1}(0) \neq \emptyset$. Then, for any $u \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = J^{-1}\big(\alpha_n Ju + (1 - \alpha_n)(J - \omega_n A)x_n\big), \quad n \ge 1,$$

converges strongly to an element $z \in (AJ^{-1})^{-1}(0)$.

We give two examples below to show that the parameter conditions of the explicit iterative algorithm (3.2) are easily satisfied; a small numerical illustration of both parameter families follows them.

Example 1

We take the parameters as follows:

$$\alpha_n = \frac{1}{(n+1)^{p}}, \qquad \omega_n = \frac{1}{n(n+1)^{p}} \qquad (0 < p \le 1).$$

It is easy to verify that

  1. $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;

  2. $\lim_{n\to\infty}\frac{\omega_n}{\alpha_n} = 0$ and $\sum_{n=1}^{\infty}\omega_n < \infty$.

Example 2

We take the parameters as follows:

$$\alpha_n = \frac{1}{\ln^{p}(n+1)}, \qquad \omega_n = \frac{1}{n\ln^{p}(n+1)} \qquad (0 < p \le 1).$$

It is easy to verify that

  1. $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;

  2. $\lim_{n\to\infty}\frac{\omega_n}{\alpha_n} = 0$ and $\sum_{n=1}^{\infty}\omega_n < \infty$.
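A quick way to see how the two parameter families behave (a purely illustrative check, not taken from the paper) is to print the first terms of $\alpha_n$, $\omega_n$ and the ratio $\omega_n/\alpha_n$ for a sample exponent $p$; in both families the ratio equals $1/n$, so the decisive requirement $\omega_n = o(\alpha_n)$ of condition (C1) is visible directly.

```python
import math

def example1(n, p):
    return 1.0 / (n + 1) ** p, 1.0 / (n * (n + 1) ** p)

def example2(n, p):
    return 1.0 / math.log(n + 1) ** p, 1.0 / (n * math.log(n + 1) ** p)

p = 0.5
for n in (1, 10, 100, 1000, 10000):
    a1, w1 = example1(n, p)
    a2, w2 = example2(n, p)
    # In both families omega_n / alpha_n = 1/n, which tends to 0.
    print(f"n={n:6d}  Ex.1: alpha={a1:.4e} omega={w1:.4e} ratio={w1/a1:.1e}"
          f"   Ex.2: alpha={a2:.4e} omega={w2:.4e} ratio={w2/a2:.1e}")
```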

Applications

In this section, we consider constrained convex minimization problems and the solution of Hammerstein integral equations as applications of the main results proposed in Sect. 3.

Application to constrained convex minimization problems

In this subsection, we consider the following minimization problem:

$$\min_{x \in C} h(x), \tag{4.1}$$

where $C$ is a nonempty closed convex subset of $E$ and $h: C \to \mathbb{R}$ is a real-valued convex function. Assume that problem (4.1) is consistent (i.e., its solution set is nonempty). According to Diop et al. [21], $x \in E$ is a minimizer of $h$ if and only if $0 \in \partial h(x)$.

Lemma 4.1

Let $E$ be a smooth real normed space and $h: E \to \mathbb{R}$ be a differentiable convex function. Assume that the function $h$ is bounded; then the sub-differential map $\partial h: E \to E^{*}$ is bounded and the following inequality holds:

$$\big\langle \partial h(x) - \partial h(y), x - y\big\rangle \ge \langle Jx - Jy, x - y\rangle, \quad \forall x, y \in E.$$

Proof

Define $g := h - \frac{1}{2}\|\cdot\|^{2}$; then $h = g + \frac{1}{2}\|\cdot\|^{2}$. Since $h$ and $\|\cdot\|^{2}$ are differentiable, $g$ is differentiable and the sub-differential of $g$ is given by $\partial g = \partial h - J$. Let $x \in E$. From the definition of $\partial g$ we get

$$g(y) - g(x) \ge \big\langle y - x, \partial g(x)\big\rangle, \quad \forall y \in E,$$

which means that

$$h(y) - \tfrac{1}{2}\|y\|^{2} - h(x) + \tfrac{1}{2}\|x\|^{2} \ge \big\langle y - x, \partial h(x) - Jx\big\rangle, \quad \forall y \in E. \tag{4.2}$$

Exchanging $x$ and $y$ in inequality (4.2), we have that

$$h(x) - \tfrac{1}{2}\|x\|^{2} - h(y) + \tfrac{1}{2}\|y\|^{2} \ge \big\langle x - y, \partial h(y) - Jy\big\rangle, \quad \forall x \in E. \tag{4.3}$$

Adding inequalities (4.2) and (4.3), we get that

$$\big\langle \partial h(x) - \partial h(y), x - y\big\rangle \ge \langle x - y, Jx - Jy\rangle.$$

This completes the proof. □

Remark 4.2

By Lemma 4.1, the sub-differential $\partial h$ is monotone; we can also deduce that $T = J - \partial h$ is a semi-pseudo mapping from $E$ to $E^{*}$.

Consequently, the following theorems are obtained.

Theorem 4.3

Let $E$ be a uniformly convex and 2-uniformly smooth real Banach space. Assume that $h: E \to \mathbb{R}$ is a proper, convex, bounded, and coercive function such that $C_{\min}\cap(\partial h J)^{-1}(0) \neq \emptyset$ and $f: E \to E$ is a piecewise mapping defined as in (C2). Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)(I - \omega_n\,\partial h J)x_n, \quad n \ge 1,$$

converges strongly to an element $x^{*} \in (\partial h J)^{-1}(0)$; that is, $Jx^{*} \in (\partial h)^{-1}(0)$, so that $Jx^{*}$ is a minimizer of $h$.

Theorem 4.4

Let $E$ be a uniformly convex and 2-uniformly smooth real Banach space. Assume that $h: E \to \mathbb{R}$ is a proper, convex, bounded, and coercive function such that $C_{\min}\cap(\partial h J)^{-1}(0) \neq \emptyset$. Then, for any $x_0 \in E$, the sequence $\{x_n\}$ defined by

$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n)(I - \omega_n\,\partial h J)x_n, \quad n \ge 1,$$

converges strongly to an element $x^{*} \in (\partial h J)^{-1}(0)$, that is, $Jx^{*} \in (\partial h)^{-1}(0)$.
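As an illustration of how the viscosity recursion of Theorem 4.3 looks in the Hilbert-space case (a sketch with assumed data, not code from the paper): take $E = \mathbb{R}^{2}$ with $J = I$ and $h(x) = \frac{1}{2}\|x\|^{2}$, so that $\partial h(x) = x$ and the unique minimizer is the origin; the parameters are those of Example 1 with $p = 1$.

```python
import numpy as np

# Hilbert-space illustration (J = I): h(x) = 0.5*||x||^2, so grad h(x) = x,
# and the unique minimizer of h is x = 0.
grad_h = lambda x: x

def f(x, tol=1e-12):
    """Piecewise map of (C2): identity on the solution set of grad h, a contraction elsewhere."""
    return x if np.linalg.norm(grad_h(x)) <= tol else 0.5 * x

x = np.array([3.0, -1.5])                       # x_0
for n in range(1, 2001):
    alpha = 1.0 / (n + 1)                       # Example 1 parameters with p = 1
    omega = 1.0 / (n * (n + 1))
    x = alpha * f(x) + (1 - alpha) * (x - omega * grad_h(x))   # recursion of Theorem 4.3, J = I

print("x_n =", x, "  h(x_n) =", 0.5 * float(x @ x))
```

With these slowly vanishing parameters the iterates approach the minimizer only gradually, so the printed value of $h(x_n)$ is small but not yet at machine precision after 2000 steps.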

Application to solution of Hammerstein integral equations

An integral equation (generally nonlinear) of Hammerstein type has the form

$$u(x) + \int_{\Omega}k(x, y)f\big(y, u(y)\big)\,dy = w(x), \tag{4.4}$$

where the unknown function $u$ and the inhomogeneous function $w$ lie in a Banach space $E$ of measurable real-valued functions.

Writing $Fu(y) := f(y, u(y))$ and $Kv(x) := \int_{\Omega}k(x, y)v(y)\,dy$, equation (4.4) can be written as

$$u + KFu = w,$$

which, without loss of generality, can be written as

$$u + KFu = 0. \tag{4.5}$$

For the case of a real Hilbert space $H$ with $F, K: H \to H$, Chidume and Zegeye [33] defined an auxiliary map $T: E \to E$ on the Cartesian product $E := H\times H$ by

$$T[u, v] = [Fu - v, Kv + u].$$

It is known that

$$T[u, v] = 0 \iff u \text{ solves (4.5) and } v = Fu.$$

They obtained strong convergence of an iterative algorithm defined in the Cartesian product space E to a solution of Hammerstein Eq. (4.5).

In Banach spaces more general than Hilbert spaces, Zegeye [34] and Chidume and Idu [35] introduced the operator $T: E\times E^{*} \to E^{*}\times E$ given by

$$T[u, v] = [Ju - Fu + v, Jv - Kv - u],$$

where $F: E \to E^{*}$ and $K: E^{*} \to E$ are monotone mappings and $J$ denotes the normalized duality map on the corresponding space. They proved that the mapping $A := J - T$, given by $A[u, v] := [Fu - v, Kv + u]$, is monotone, and that $u$ is a solution (when solutions exist) of the Hammerstein equation $u + KFu = 0$ if and only if $(u, v)$ is a zero point of $A$, where $v = Fu$. Applying our Theorem 3.2, the following theorems are obtained.

Theorem 4.5

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $F: E \to E^{*}$ and $K: E^{*} \to E$ are Lipschitz continuous monotone mappings such that Hammerstein equation (4.5) is solvable, and that $f_1: E \to E$ and $f_2: E^{*} \to E^{*}$ are two piecewise mappings defined as in (C2). Then, for $(u_0, v_0) \in E\times E^{*}$, the sequences $\{u_n\}$ and $\{v_n\}$ defined by

$$u_{n+1} = \alpha_n f_1(u_n) + (1 - \alpha_n)\big(u_n - \omega_n J(Fu_n - v_n)\big), \qquad v_{n+1} = \alpha_n f_2(v_n) + (1 - \alpha_n)\big(v_n - \omega_n J(Kv_n + u_n)\big),$$

converge strongly to $u^{*}$ and $v^{*}$, respectively, where $u^{*}$ is a solution of $u + KFu = 0$ and $v^{*} = Fu^{*}$.

Theorem 4.6

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space. Assume that $F: E \to E^{*}$ and $K: E^{*} \to E$ are Lipschitz continuous monotone mappings such that Hammerstein equation (4.5) is solvable. Then, for $(u_0, v_0) \in E\times E^{*}$, the sequences $\{u_n\}$ and $\{v_n\}$ defined by

$$u_{n+1} = \alpha_n u_n + (1 - \alpha_n)\big(u_n - \omega_n J(Fu_n - v_n)\big), \qquad v_{n+1} = \alpha_n v_n + (1 - \alpha_n)\big(v_n - \omega_n J(Kv_n + u_n)\big),$$

converge strongly to $u^{*}$ and $v^{*}$, respectively, where $u^{*}$ is a solution of $u + KFu = 0$ and $v^{*} = Fu^{*}$.
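The coupled recursion of Theorem 4.5 is easy to prototype in a real Hilbert space, where $E = E^{*}$ and $J = I$. The sketch below is an added illustration (not from the paper) with toy linear choices $F(u) = 2u$ and $K(v) = v/2$, for which the unique solution of $u + KFu = 0$ is $u^{*} = 0$ with $v^{*} = Fu^{*} = 0$.

```python
# Hilbert-space illustration of the coupled scheme of Theorem 4.5 (E = E*, J = I).
F = lambda u: 2.0 * u          # monotone on R
K = lambda v: 0.5 * v          # monotone on R; u + K(F(u)) = 2u = 0 only at u = 0

def f(x, tol=1e-12):
    """Piecewise map in the spirit of (C2): identity at the solution, a contraction elsewhere."""
    return x if abs(x) <= tol else 0.5 * x

u, v = 1.0, -0.5               # (u_0, v_0)
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)
    omega = 1.0 / (n * (n + 1))
    u_new = alpha * f(u) + (1 - alpha) * (u - omega * (F(u) - v))
    v_new = alpha * f(v) + (1 - alpha) * (v - omega * (K(v) + u))
    u, v = u_new, v_new

print("u_n =", u, " v_n =", v)  # both slowly approach the solution u* = 0, v* = 0
```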

Numerical example

In the sequel, we give a numerical example to illustrate the applicability, effectiveness, efficiency, and stability of our viscosity iterative algorithm (VIA). All codes were written in Matlab R2016b and performed on an LG dual-core personal computer.

Numerical behavior of VIA

Example

Let $E = \mathbb{R}$ and $C = E$. Let $A, J: \mathbb{R} \to \mathbb{R}$ be the mappings defined by

$$Ax = ax, \qquad Jx = x,$$

and let $f: C \to C$ be defined by

$$f(x) = \begin{cases} \dfrac{x}{2}, & \text{if } Ax \neq 0, \\ x, & \text{if } Ax = 0. \end{cases}$$

Thus, for $x, y \in \mathbb{R}$, we have

$$|Ax - Ay| = |ax - ay| \le |a|\,|x - y|, \qquad |Jx - Jy| = |x - y|.$$

Hence, $A$ is $|a|$-Lipschitz continuous and monotone, and $J$ is $1$-Lipschitz continuous.

Two groups of parameter sequences are tested here, as follows:

Case I: $\alpha_n = \frac{1}{(n+1)^{p}}$, $\omega_n = \frac{1}{n(n+1)^{p}}$, $p \in \{1/8, 1/4, 1/3, 1/2, 1\}$;

Case II: $\alpha_n = \frac{1}{\ln^{p}(n+1)}$, $\omega_n = \frac{1}{n\ln^{p}(n+1)}$, $p \in \{1/8, 1/4, 1/3, 1/2, 1\}$.

We can see that all these parameters satisfy the conditions:

  • (i)

    $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;

  • (ii)

    $\omega_n = o(\alpha_n)$, $\sum_{n=1}^{\infty}\omega_n < \infty$.

We use the sequence $D_n = 10^{8}\times\|x_{n+1} - x_n\|^{2}$ to study the convergence of our explicit viscosity iterative algorithm (VIA). The convergence of $D_n$ to 0 implies that the sequence $\{x_n\}$ converges to $x^{*} \in (AJ)^{-1}(0)$. To illustrate the behavior of the algorithm, we recorded both the number of iterations (iter.) and the elapsed execution time (CPU time, in seconds). Figures 1–18 and Table 1 describe the behavior of $D_n$ generated by VIA for the aforementioned groups of parameters. Obviously, if $x_n \in (AJ)^{-1}(0)$, then the process stops and $x_n$ is the solution of the problem $0 \in AJu$; otherwise, we compute the following viscosity recursion:

$$x_{n+1} = \alpha_n\frac{x_n}{2} + (1 - \alpha_n)(x_n - a\omega_n x_n),$$

where $a$ takes different values from $\{1/100, 1/2, 2\}$.
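For readers who want to reproduce the experiment, a minimal re-implementation of this scalar recursion is sketched below (the paper's own tests were written in Matlab; the recursion, the quantity $D_n$, and the Case I/II parameter formulas follow the description above, while the tolerance, iteration cap, and sample values of $p$ are illustrative assumptions).

```python
import math

def via_scalar(a, p, case, x0=1.0, tol=1e-8, max_iter=100000):
    """Run x_{n+1} = alpha_n*x_n/2 + (1 - alpha_n)*(x_n - a*omega_n*x_n) and report iterations."""
    x = x0
    for n in range(1, max_iter + 1):
        if case == "I":
            alpha = 1.0 / (n + 1) ** p
            omega = 1.0 / (n * (n + 1) ** p)
        else:                                   # Case II
            alpha = 1.0 / math.log(n + 1) ** p
            omega = 1.0 / (n * math.log(n + 1) ** p)
        x_new = alpha * x / 2 + (1 - alpha) * (x - a * omega * x)
        D_n = 1e8 * (x_new - x) ** 2            # the quantity plotted in Figures 1-18
        if (x_new - x) ** 2 <= tol:             # simple stopping rule on ||x_{n+1} - x_n||^2
            return n, x_new, D_n
        x = x_new
    return max_iter, x, D_n

for a in (1/100, 1/2, 2):
    iters, x_final, D_final = via_scalar(a, p=0.5, case="I")
    print(f"a = {a:>5}: stopped after {iters} iterations, x_n = {x_final:.3e}")
```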

Figures 1–18 plot $D_n$ against the iteration number for the following parameter choices (only the figure captions are reproduced here):

Figure 1: Case I, $a = 1/100$, $p = 1/2$.
Figure 2: Case I, $a = 1/100$, $p = 1/3$.
Figure 3: Case I, $a = 1/2$, $p = 1/2$.
Figure 4: Case I, $a = 1/2$, $p = 1/4$.
Figure 5: Case I, $a = 1/2$, $p = 1/8$.
Figure 6: Case I, $a = 2$, $p = 1/3$.
Figure 7: Case I, $a = 2$, $p = 1/4$.
Figure 8: Case I, $a = 2$, $p = 2$.
Figure 9: Case I, $a = 1/100$, $p = 2$.
Figure 10: Case I, $a = 1/100$, $p = 1/4$.
Figure 11: Case II, $a = 1/100$, $p = 1/3$.
Figure 12: Case II, $a = 2$, $p = 1$.
Figure 13: Case II, $a = 1/2$, $p = 1/4$.
Figure 14: Case II, $a = 2$, $p = 1/8$.
Figure 15: Case II, $a = 1/2$, $p = 1$.
Figure 16: Case II, $a = 1/100$, $p = 1/2$.
Figure 17: Case II, $a = 2$, $p = 1/3$.
Figure 18: Case II, $a = 1/100$, $p = 1/8$.

Table 1.

Algorithm (VIA) with different groups of parameters

                          a = 0.01   a = 0.5   a = 0.99
p = 1
  Case I   No. iterations    10         10        12
           CPU time (s)      0.047      0.047     0.055
  Case II  No. iterations    10         10        12
           CPU time (s)      0.044      0.048     0.045
p = 2
  Case I   No. iterations    10         10        12
           CPU time (s)      0.056      0.058     0.062
  Case II  No. iterations    10         10        12
           CPU time (s)      0.068      0.055     0.052

In these figures, the x-axes represent the number of iterations while the y-axes represent the value of $D_n$. We can summarize the following observations:

  1. The rate of decrease of $D_n = 10^{10}\times\|x_{n+1} - x_n\|^{2}$ generated by our algorithm (VIA) depends strongly on the convergence rate of the parameter sequence $\{\alpha_n\}$ and on the Lipschitz coefficient of the continuous monotone operator.

  2. Our viscosity iterative algorithm (VIA) works well for parameter sequences $\{\alpha_n\}$ that converge quickly to 0 as $n \to \infty$. In general, if $D_n = \|x_{n+1} - x_n\|^{2}$, then the error $D_n$ reaches approximately $10^{-16}$; once $D_n$ reaches this level, it becomes unstable. The best error of $D_n$, approximately $10^{-30}$, is obtained when $a = 2$.

  3. For the second group of parameters, where $\{\alpha_n\}$ converges slowly to 0 as $n \to \infty$, $D_n$ increases slightly in the early iterations and afterwards is seen to be almost stable.

Comparison of VIA with other algorithms

In this part, we present several experiments in comparison with other algorithms. The two methods used in the comparison are the generalized Mann iteration method (GMIM) (Chidume et al. [35], Algorithm 1) and the regularization method (RM) (Zegeye [1], Algorithm 2). The RM requires a constant $u$ to be known in advance. For the experiments, we choose the same sequences $\alpha_n = \frac{1}{n+1}$ and $\omega_n = \frac{1}{n(n+1)}$ in all algorithms. The condition $\|x_{n+1} - x_n\|^{2} \le \mathrm{TOL}$ is chosen as the stopping criterion. Table 2 compares VIA, RM, and GMIM for different choices of $a$.

Table 2.

Comparison between VIA and other algorithms with $x_0 = 1$

                   VIA               RM (u = 1)        GMIM
 a      TOL        Iter   CPU (s)    Iter   CPU (s)    Iter   CPU (s)
 1/4    10^-8      278    0.094      85     0.055      559    0.097
        10^-10     1290   0.14       325    0.070      3529   0.28
 1/3    10^-8      266    0.076      100    0.044      473    0.078
        10^-10     1233   0.14       380    0.088      2663   0.21
 1/2    10^-8      243    0.070      126    0.052      317    0.065
        10^-10     1123   0.13       472    0.077      1471   0.13
 3/2    10^-8      128    0.061      221    0.059      37     0.043
        10^-10     587    0.10       823    0.098      94     0.049

From Table 2 we can see that the RM requires the fewest iterations for most choices of $a$, while the GMIM is the most time-consuming; a reasonable explanation is that at each step the GMIM has no contractive parameters (coefficients) for producing the next iterate, which can lead to a lower convergence rate, whereas the convergence rate of the RM depends strongly on the previously chosen constant $u$ and on the initial value $x_0$. Compared with the other two methods, VIA appears to be competitive. The main advantage of VIA, however, is that the viscosity iterative algorithm is more stable than the other methods and that it works in Banach spaces much more general than Hilbert spaces.

Conclusion

Let $E$ be a uniformly convex and 2-uniformly smooth Banach space with dual $E^{*}$. We have constructed implicit and explicit algorithms for solving the equation $0 \in AJu$ in the Banach space $E$, where $A: E^{*} \to E$ is a monotone mapping and $J: E \to E^{*}$ is the normalized duality map, which plays an indispensable role in this paper. The advantages of the algorithms are that the resolvent operator is not involved, which makes the iteration simple to compute, and that the zero point problem for monotone mappings is extended from Hilbert spaces to Banach spaces. The proposed algorithms converge strongly to a zero of the composed mapping $AJ$ under concise parameter conditions. In addition, the main result is applied to approximate the minimizer of a proper convex function and the solution of Hammerstein integral equations. To some extent, our results extend and unify some results considered in Xu [12], Zegeye [1], Chidume and Idu [2], Chidume [3, 35], and Ibaraki and Takahashi [22].

Acknowledgements

The author expresses deep gratitude to the referee and the editor for their valuable comments and suggestions, which helped tremendously in improving the quality of this paper and made it suitable for publication.

Authors’ contributions

The author drafted, read, and approved the final manuscript.

Funding

This article is funded by the National Science Foundation of China (11471059), the Science and Technology Research Project of Chongqing Municipal Education Commission (KJ1706154), and the Research Project of Chongqing Technology and Business University (KFJJ2017069).

Competing interests

The author declares that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Zegeye H. Strong convergence theorems for maximal monotone mappings in Banach spaces. J. Math. Anal. Appl. 2008;343:663–671. doi: 10.1016/j.jmaa.2008.01.076.
  • 2. Chidume C.E., Kennedy O.I. Approximation of zeros of bounded maximal monotone mappings, solutions of Hammerstein integral equations and convex minimization problem. Fixed Point Theory Appl. 2016;2016:97. doi: 10.1186/s13663-016-0582-8.
  • 3. Chidume C.E., Romanus O.M., Nnyaba U.V. A new iterative algorithm for zeros of generalized Phi-strongly monotone and bounded maps with application. Br. J. Math. Comput. Sci. 2016;18(1):1–14. doi: 10.9734/BJMCS/2016/25884.
  • 4. Zarantonello E.H. Solving functional equations by contractive averaging. Tech. Rep. 160, U.S. Army Math. Research Center, Madison, Wisconsin (1960).
  • 5. Minty G.J. Monotone (nonlinear) operators in Hilbert spaces. Duke Math. J. 1962;29(4):341–346. doi: 10.1215/S0012-7094-62-02933-2.
  • 6. Kac̆urovskii R.I. On monotone operators and convex functionals. Usp. Mat. Nauk. 1960;15(4):213–215.
  • 7. Chidume C.E. An approximation method for monotone Lipschitz operators in Hilbert spaces. J. Aust. Math. Soc. Ser. A. 1986;41:59–63. doi: 10.1017/S144678870002807X.
  • 8. Berinde V. Iterative Approximation of Fixed Points. London: Springer; 2007.
  • 9. Martinet B. Regularisation d'inequations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970;4:154–158.
  • 10. Rockafellar R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976;14:877–898. doi: 10.1137/0314056.
  • 11. Chidume C.E. An approximation method for monotone Lipschitzian operators in Hilbert spaces. J. Aust. Math. Soc. Ser. A. 1986;41:59–63. doi: 10.1017/S144678870002807X.
  • 12. Xu H.K. A regularization method for the proximal point algorithm. J. Glob. Optim. 2006;36:115–125. doi: 10.1007/s10898-006-9002-7.
  • 13. Tang Y. Strong convergence of viscosity approximation methods for the fixed-point of pseudo-contractive and monotone mappings. Fixed Point Theory Appl. 2013;2013:273. doi: 10.1186/1687-1812-2013-273.
  • 14. Qin X.L., Kang S.M., Cho Y.J. Approximating zeros of monotone operators by proximal point algorithms. J. Glob. Optim. 2010;46:75. doi: 10.1007/s10898-009-9410-6.
  • 15. Browder F.E. Nonlinear mappings of nonexpansive and accretive-type in Banach spaces. Bull. Am. Math. Soc. 1967;73:875–882. doi: 10.1090/S0002-9904-1967-11823-8.
  • 16. Alber Y. Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos A.G., editor. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. New York: Dekker; 1996. pp. 15–50.
  • 17. Chidume C.E. Geometric Properties of Banach Spaces and Nonlinear Iterations. London: Springer; 2009.
  • 18. Chidume C.E. Iterative approximation of fixed points of Lipschitzian strictly pseudo-contractive mappings. Proc. Am. Math. Soc. 1987;99(2):283–288.
  • 19. Agarwal R.P., Meehan M., O'Regan D. Fixed Point Theory and Applications. Cambridge: Cambridge University Press; 2001.
  • 20. Reich S. A weak convergence theorem for alternating methods with Bregman distance. In: Kartsatos A.G., editor. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. New York: Dekker; 1996. pp. 313–318.
  • 21. Diop C., Sow T.M.M., Djitte N., Chidume C.E. Constructive techniques for zeros of monotone mappings in certain Banach space. SpringerPlus. 2015;4:383. doi: 10.1186/s40064-015-1169-2.
  • 22. Ibaraki T., Takahashi W. A new projection and convergence theorems for the projections in Banach spaces. J. Approx. Theory. 2007;149(1):1–14. doi: 10.1016/j.jat.2007.04.003.
  • 23. Cioranescu I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Dordrecht: Kluwer Academic; 1990.
  • 24. Xu Z.B., Roach G.F. Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 1991;157:189–210. doi: 10.1016/0022-247X(91)90144-O.
  • 25. Xu H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991;16(12):1127–1138. doi: 10.1016/0362-546X(91)90200-K.
  • 26. Zălinescu C. On uniformly convex functions. J. Math. Anal. Appl. 1983;95:344–374. doi: 10.1016/0022-247X(83)90112-9.
  • 27. Kamimura S., Takahashi W. Strong convergence of a proximal-type algorithm in Banach spaces. SIAM J. Optim. 2002;13(3):938–945. doi: 10.1137/S105262340139611X.
  • 28. Tan K.K., Xu H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993;178(2):301–308. doi: 10.1006/jmaa.1993.1309.
  • 29. Takahashi W. Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama: Yokohama Publishers; 2000.
  • 30. Osilike M.O., Aniagbosor S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comput. Model. 2000;32(10):1181–1191. doi: 10.1016/S0895-7177(00)00199-0.
  • 31. Alber Y., Ryazantseva I. Nonlinear Ill-Posed Problems of Monotone Type. London: Springer; 2006.
  • 32. Liu B. Fixed point of strong duality pseudocontractive mappings and applications. Abstr. Appl. Anal. 2012;2012:623625. doi: 10.1155/2012/623625.
  • 33. Chidume C.E., Zegeye H. Approximation of solutions of nonlinear equations of monotone and Hammerstein type. Appl. Anal. 2003;82(8):747–758. doi: 10.1080/0003681031000151452.
  • 34. Zegeye H. Iterative solution of nonlinear equations of Hammerstein type. J. Inequal. Pure Appl. Math. 2003;4(5):92.
  • 35. Chidume C.E., Idu K.O. Approximation of zeros of bounded maximal monotone mappings, solutions of Hammerstein integral equations and convex minimization problems. Fixed Point Theory Appl. 2016;2016:97. doi: 10.1186/s13663-016-0582-8.

