2018 Nov 16;2018(1):315. doi: 10.1186/s13660-018-1899-0

General iterative methods for systems of variational inequalities with the constraints of generalized mixed equilibria and fixed point problem of pseudocontractions

Qian-Wen Wang 1, Jin-Lin Guan 1, Lu-Chuan Ceng 1, Bing Hu 2
PMCID: PMC6244759  PMID: 30839864

Abstract

In this paper, we introduce two general iterative methods (one implicit method and one explicit method) for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Then we establish strong convergence of the proposed implicit and explicit iterative methods to a solution of the GSVI with the above constraints, which is the unique solution of a certain variational inequality. The results presented in this paper improve, extend, and develop the corresponding results in the earlier and recent literature.

Keywords: General iterative method, General system of variational inequalities, Continuous monotone mapping, Continuous pseudocontractive mapping, Variational inequality, Generalized mixed equilibrium problem

Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. We denote by $P_C$ the metric projection of $H$ onto $C$ and by $\operatorname{Fix}(S)$ the set of fixed points of a mapping $S$. Recall that a mapping $T:C\to H$ is nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. A mapping $T:C\to H$ is called pseudocontractive if

$$\langle Tx-Ty,\;x-y\rangle\le\|x-y\|^2,\quad\forall x,y\in C.$$

This inequality can be equivalently rewritten as

$$\|Tx-Ty\|^2\le\|x-y\|^2+\|(I-T)x-(I-T)y\|^2,\quad\forall x,y\in C,$$

where $I$ is the identity mapping.

$T:C\to H$ is said to be $k$-strictly pseudocontractive if there exists a constant $k\in[0,1)$ such that

$$\|Tx-Ty\|^2\le\|x-y\|^2+k\|(I-T)x-(I-T)y\|^2,\quad\forall x,y\in C.$$

A mapping $V:C\to H$ is said to be $l$-Lipschitzian if there exists a constant $l\ge0$ such that

$$\|Vx-Vy\|\le l\|x-y\|,\quad\forall x,y\in C.$$

A mapping $F:C\to H$ is called monotone if

$$\langle x-y,\;Fx-Fy\rangle\ge0,\quad\forall x,y\in C,$$

and $F$ is called $\alpha$-inverse-strongly monotone if there exists a constant $\alpha>0$ such that

$$\langle x-y,\;Fx-Fy\rangle\ge\alpha\|Fx-Fy\|^2,\quad\forall x,y\in C.$$

If $F$ is an $\alpha$-inverse-strongly monotone mapping, then $F$ is $\frac{1}{\alpha}$-Lipschitz continuous; indeed, by the Cauchy–Schwarz inequality, $\alpha\|Fx-Fy\|^2\le\langle x-y,Fx-Fy\rangle\le\|x-y\|\,\|Fx-Fy\|$, so $\|Fx-Fy\|\le\frac{1}{\alpha}\|x-y\|$ for all $x,y\in C$.
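Both inequalities can be checked numerically in a simple concrete case (an illustration added here, not part of the original paper): the linear mapping $F(x)=x/2$ on $\mathbf{R}^2$ is $2$-inverse-strongly monotone, hence $\frac12$-Lipschitz.

```python
import random

# Illustrative check: F(x) = x/2 on R^2 is alpha-inverse-strongly monotone
# with alpha = 2, hence (1/alpha) = 0.5-Lipschitz.
def F(x):
    return [xi / 2.0 for xi in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

alpha = 2.0
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    d = [a - b for a, b in zip(x, y)]
    Fd = [a - b for a, b in zip(F(x), F(y))]
    # <x - y, Fx - Fy> >= alpha * ||Fx - Fy||^2
    assert dot(d, Fd) >= alpha * dot(Fd, Fd) - 1e-9
    # ||Fx - Fy|| <= (1/alpha) * ||x - y||
    assert dot(Fd, Fd) ** 0.5 <= (1.0 / alpha) * dot(d, d) ** 0.5 + 1e-9
```

For this particular $F$ the inverse-strong-monotonicity inequality holds with equality, which makes $\alpha=2$ the best possible constant.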

A mapping $F:C\to H$ is called $\beta$-strongly monotone if there exists a constant $\beta>0$ such that

$$\langle x-y,\;Fx-Fy\rangle\ge\beta\|x-y\|^2,\quad\forall x,y\in C.$$

A linear operator $A:H\to H$ is said to be strongly positive on $H$ if there exists a constant $\bar\gamma>0$ such that

$$\langle Ax,x\rangle\ge\bar\gamma\|x\|^2,\quad\forall x\in H.$$

Let $F:C\to H$ be a mapping. The classical variational inequality problem (VIP) is to find $x^*\in C$ such that

$$\langle Fx^*,\;x-x^*\rangle\ge0,\quad\forall x\in C.\qquad(1.1)$$

We denote the set of solutions of VIP (1.1) by $\operatorname{VI}(C,F)$.

In 2008, Ceng et al. [1] considered the following general system of variational inequalities (GSVI) of finding $(x^*,y^*)\in C\times C$ such that

$$\begin{cases}\langle\lambda F_1y^*+x^*-y^*,\;x-x^*\rangle\ge0,&\forall x\in C,\\ \langle\nu F_2x^*+y^*-x^*,\;x-y^*\rangle\ge0,&\forall x\in C,\end{cases}\qquad(1.2)$$

where $F_1$, $F_2$ are $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively, and $\lambda\in(0,2\alpha)$ and $\nu\in(0,2\beta)$ are two constants. Many iterative methods have been developed for solving GSVI (1.2); see [2–7] and the references therein.

Subsequently, Alofi et al. [8] introduced two composite iterative algorithms, based on the composite iterative methods of Ceng et al. [9] and Jung [10], for solving GSVI (1.2) together with a related fixed point problem. Moreover, they showed strong convergence of the proposed algorithms to a common solution of these two problems.

Very recently, Kong et al. [11] established the strong convergence of two hybrid steepest-descent schemes to the same solution of GSVI (1.2), which is also a common solution of finitely many variational inclusions and a minimization problem.

Lemma 1.1

(see [12, Proposition 3.1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For given $x^*,y^*\in C$, $(x^*,y^*)$ is a solution of GSVI (1.3) for continuous monotone mappings $F_1$ and $F_2$ if and only if $x^*$ is a fixed point of the composition $R=F_{1,\lambda}F_{2,\nu}:H\to C$ of the nonexpansive mappings $F_{1,\lambda}:H\to C$ and $F_{2,\nu}:H\to C$, where $y^*=F_{2,\nu}x^*$,

$$F_{1,\lambda}x=\bigl\{z\in C:\langle y-z,F_1z\rangle+\tfrac{1}{\lambda}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\},$$

and

$$F_{2,\nu}x=\bigl\{z\in C:\langle y-z,F_2z\rangle+\tfrac{1}{\nu}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\}.$$

For simplicity, we denote by $\operatorname{GSVI}(C,F_1,F_2)$ the fixed point set of the mapping $R$.

In the meantime, inspired by Ceng et al. [1], Jung [12] introduced a general system of variational inequalities (GSVI) for two continuous monotone mappings $F_1$ and $F_2$ of finding $(x^*,y^*)\in C\times C$ such that

$$\begin{cases}\langle\lambda F_1x^*+x^*-y^*,\;x-x^*\rangle\ge0,&\forall x\in C,\\ \langle\nu F_2y^*+y^*-x^*,\;x-y^*\rangle\ge0,&\forall x\in C,\end{cases}\qquad(1.3)$$

where $\lambda,\nu>0$ are two constants. In order to find an element of $\operatorname{Fix}(R)\cap\operatorname{Fix}(T)$, he proposed one implicit algorithm generating a net $\{x_t\}$:

$$x_t=(I-\theta_tA)T_{r_t}Rx_t+\theta_t\bigl[t\gamma Vx_t+(I-t\mu G)T_{r_t}Rx_t\bigr],\qquad(1.4)$$

with $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ and $\theta_t\in(0,\min\{\frac12,\|A\|^{-1}\})$, and an explicit algorithm generating a sequence $\{x_n\}$:

$$\begin{cases}y_n=\alpha_n\gamma Vx_n+(I-\alpha_n\mu G)T_{r_n}Rx_n,\\ x_{n+1}=(I-\beta_nA)T_{r_n}Rx_n+\beta_ny_n,\end{cases}\quad n\ge0,\qquad(1.5)$$

with $\{\alpha_n\}\subset[0,1]$, $\{\beta_n\}\subset(0,1]$, $\{r_n\}\subset(0,\infty)$, and any initial guess $x_0\in C$, where $T_{r_t}x=\{z\in C:\langle y-z,Tz\rangle-\frac{1}{r_t}\langle y-z,(1+r_t)z-x\rangle\le0,\ \forall y\in C\}$ for $r_t\in(0,\infty)$, and $T_{r_n}x=\{z\in C:\langle y-z,Tz\rangle-\frac{1}{r_n}\langle y-z,(1+r_n)z-x\rangle\le0,\ \forall y\in C\}$ for $r_n\in(0,\infty)$. Moreover, he established strong convergence of the proposed iterative algorithms to an element $\tilde x\in\operatorname{Fix}(R)\cap\operatorname{Fix}(T)$, which uniquely solves the variational inequality

$$\langle(A-I)\tilde x,\;\tilde x-p\rangle\le0,\quad\forall p\in\operatorname{Fix}(R)\cap\operatorname{Fix}(T).$$

On the other hand, the generalized mixed equilibrium problem (GMEP) is to find $x\in C$ such that

$$\Theta(x,y)+\varphi(y)-\varphi(x)+\langle Bx,\;y-x\rangle\ge0,\quad\forall y\in C.\qquad(1.6)$$

We denote the set of solutions of GMEP (1.6) by $\operatorname{GMEP}(\Theta,\varphi,B)$. GMEP (1.6) is very general in the sense that it includes many problems as special cases, namely optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. For different aspects and solution methods, we refer to [13–18] and the references therein.

In this paper, we introduce implicit and explicit iterative methods for finding a solution of GSVI (1.3) that also belongs to the common solution set $\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)$ of finitely many generalized mixed equilibrium problems and to the fixed point set of a continuous pseudocontractive mapping $T$. First, GSVI (1.3) and each generalized mixed equilibrium problem are transformed into fixed point problems of nonexpansive mappings. Then we establish strong convergence of the proposed iterative methods to an element of $\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)\cap\operatorname{GSVI}(C,F_1,F_2)\cap\operatorname{Fix}(T)$, which is the unique solution of a certain variational inequality.

Preliminaries and lemmas

Let $H$ be a real Hilbert space, and let $C$ be a nonempty closed convex subset of $H$. We write $x_n\to x$ and $x_n\rightharpoonup x$ to indicate the strong convergence and the weak convergence of the sequence $\{x_n\}$ to $x$, respectively.

For every point $x\in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that

$$\|x-P_C(x)\|\le\|x-y\|,\quad\forall y\in C.$$

$P_C$ is called the metric projection of $H$ onto $C$. It is well known that $P_C$ is nonexpansive and is characterized by the property

$$u=P_C(x)\iff\langle x-u,\;u-y\rangle\ge0,\quad\forall x\in H,\ y\in C.\qquad(2.1)$$
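Characterization (2.1) is easy to verify numerically for a concrete set (an illustration added here; the choice $C$ = closed unit ball of $\mathbf{R}^2$ is hypothetical, not taken from the paper):

```python
import math
import random

# Illustrative check of (2.1) with C = closed unit ball in R^2:
# u = P_C(x) satisfies <x - u, u - y> >= 0 for every y in C.
def proj_ball(x):
    n = math.hypot(x[0], x[1])
    return list(x) if n <= 1.0 else [x[0] / n, x[1] / n]

x = [3.0, 4.0]            # a point outside C
u = proj_ball(x)          # here u = (0.6, 0.8)
random.seed(1)
for _ in range(1000):
    r = math.sqrt(random.random())
    th = random.uniform(0.0, 2.0 * math.pi)
    y = [r * math.cos(th), r * math.sin(th)]   # random point y in C
    inner = (x[0] - u[0]) * (u[0] - y[0]) + (x[1] - u[1]) * (u[1] - y[1])
    assert inner >= -1e-9
```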

In a Hilbert space $H$, the following equality holds:

$$\|x-y\|^2=\|x\|^2+\|y\|^2-2\langle x,y\rangle,\quad\forall x,y\in H.\qquad(2.2)$$

The following lemma is an immediate consequence of the properties of the inner product.

Lemma 2.1

In a real Hilbert space $H$, the following inequality holds:

$$\|x+y\|^2\le\|x\|^2+2\langle y,\;x+y\rangle,\quad\forall x,y\in H.$$

Next we list some elementary conclusions for the MEP.

It is first assumed, as in [19], that $\Theta:C\times C\to\mathbf{R}$ is a bifunction satisfying conditions (A1)–(A4) and $\varphi:C\to\mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where

  (A1) $\Theta(x,x)=0$ for all $x\in C$;

  (A2) $\Theta$ is monotone, i.e., $\Theta(x,y)+\Theta(y,x)\le0$ for any $x,y\in C$;

  (A3) $\Theta$ is upper-hemicontinuous, i.e., for each $x,y,z\in C$,
    $\limsup_{t\to0^+}\Theta(tz+(1-t)x,y)\le\Theta(x,y)$;

  (A4) $\Theta(x,\cdot)$ is convex and lower semicontinuous for each $x\in C$;

  (B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D_x\subset C$ and $y_x\in C$ such that, for any $z\in C\setminus D_x$,
    $\Theta(z,y_x)+\varphi(y_x)-\varphi(z)+\frac{1}{r}\langle y_x-z,\;z-x\rangle<0$;

  (B2) $C$ is a bounded set.

Proposition 2.1

([19])

Assume that $\Theta:C\times C\to\mathbf{R}$ satisfies (A1)–(A4), and let $\varphi:C\to\mathbf{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r>0$ and $x\in H$, define a mapping $T_r^{(\Theta,\varphi)}:H\to C$ as follows:

$$T_r^{(\Theta,\varphi)}(x):=\Bigl\{z\in C:\Theta(z,y)+\varphi(y)-\varphi(z)+\frac{1}{r}\langle y-z,\;z-x\rangle\ge0,\ \forall y\in C\Bigr\}$$

for all $x\in H$. Then the following hold:

  • (i)

    for each $x\in H$, $T_r^{(\Theta,\varphi)}(x)$ is nonempty and single-valued;

  • (ii)
    $T_r^{(\Theta,\varphi)}$ is firmly nonexpansive, that is, for any $x,y\in H$,
    $\|T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y\|^2\le\langle T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y,\;x-y\rangle$;

  • (iii)

    $\operatorname{Fix}(T_r^{(\Theta,\varphi)})=\operatorname{MEP}(\Theta,\varphi)$;

  • (iv)

    $\operatorname{MEP}(\Theta,\varphi)$ is closed and convex;

  • (v)

    $\|T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x\|^2\le\frac{s-t}{s}\langle T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x,\;T_s^{(\Theta,\varphi)}x-x\rangle$ for all $s,t>0$ and $x\in H$.
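In one very simple special case the resolvent $T_r^{(\Theta,\varphi)}$ can be written in closed form, which makes its defining inequality easy to test (an illustration added here; the choices $\Theta\equiv0$, $\varphi(y)=|y|$, $C=H=\mathbf{R}$ are hypothetical and not from the paper): with $\Theta\equiv0$, $T_r^{(\Theta,\varphi)}$ reduces to the proximal mapping of $r\varphi$, i.e. soft-thresholding.

```python
import random

# Illustrative special case: Theta = 0, phi(y) = |y| on C = H = R.
# Then T_r(x) = prox of r*|.| at x, i.e. soft-thresholding.
def T_r(x, r):
    if abs(x) <= r:
        return 0.0
    return (abs(x) - r) * (1 if x > 0 else -1)

random.seed(2)
for _ in range(200):
    x = random.uniform(-3, 3)
    r = random.uniform(0.1, 1.0)
    z = T_r(x, r)
    for _ in range(50):
        y = random.uniform(-5, 5)
        # defining inequality: phi(y) - phi(z) + (1/r)*(y - z)*(z - x) >= 0
        assert abs(y) - abs(z) + (1.0 / r) * (y - z) * (z - x) >= -1e-9
```

The inequality holds because $(x-z)/r$ is a subgradient of $|\cdot|$ at $z$ for the soft-thresholded point $z$.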

Proposition 2.2

Let $F:C\to H$ be an $\alpha$-inverse-strongly monotone mapping. Then, for all $x,y\in C$ and $\lambda>0$, one has

$$\|(I-\lambda F)x-(I-\lambda F)y\|^2\le\|x-y\|^2+\lambda(\lambda-2\alpha)\|Fx-Fy\|^2.$$

In particular, if $\lambda\in(0,2\alpha]$, then $I-\lambda F:C\to H$ is a nonexpansive mapping.
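The nonexpansivity conclusion can be sanity-checked on a toy example (an illustration added here, not from the paper): $F(x)=x/2$ on $\mathbf{R}$ is $2$-inverse-strongly monotone, so $I-\lambda F$ should be nonexpansive for every $\lambda\in(0,4]$.

```python
# Illustrative check of Proposition 2.2 with F(x) = x/2 on R (alpha = 2):
# for lam in (0, 2*alpha] = (0, 4], the mapping I - lam*F is nonexpansive.
alpha = 2.0

def F(x):
    return x / 2.0

for lam in [0.5, 1.0, 2.0, 4.0]:      # values in (0, 2*alpha]
    for x, y in [(3.0, -1.0), (0.2, 5.0), (-2.0, 2.0)]:
        lhs = abs((x - lam * F(x)) - (y - lam * F(y)))
        assert lhs <= abs(x - y) + 1e-12
```

Here $(I-\lambda F)x=(1-\lambda/2)x$, and $|1-\lambda/2|\le1$ exactly for $\lambda\in[0,4]$, matching the proposition.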

We will use the following lemmas for the proof of our main results in the sequel.

Lemma 2.2

([20])

Let $\{s_n\}$ be a sequence of nonnegative real numbers satisfying

$$s_{n+1}\le(1-\omega_n)s_n+\omega_n\delta_n+\gamma_n,\quad n\ge0,$$

where $\{\omega_n\}$, $\{\delta_n\}$, and $\{\gamma_n\}$ satisfy the following conditions:

  • (i)

    $\{\omega_n\}\subset[0,1]$ and $\sum_{n=0}^\infty\omega_n=\infty$ or, equivalently, $\prod_{n=0}^\infty(1-\omega_n)=0$;

  • (ii)

    $\limsup_{n\to\infty}\delta_n\le0$ or $\sum_{n=0}^\infty\omega_n|\delta_n|<\infty$;

  • (iii)

    $\gamma_n\ge0$ $(n\ge0)$, $\sum_{n=0}^\infty\gamma_n<\infty$.

Then $\lim_{n\to\infty}s_n=0$.
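The conclusion of Lemma 2.2 can be watched numerically for one admissible parameter choice (an illustration added here with hypothetical sequences, not from the paper): $\omega_n=\delta_n=1/(n+1)$ and $\gamma_n=1/(n+1)^2$ satisfy (i)–(iii).

```python
# Illustrative simulation of Lemma 2.2 with admissible (hypothetical) choices:
# omega_n = 1/(n+1) (non-summable), delta_n = 1/(n+1) -> 0,
# gamma_n = 1/(n+1)^2 (summable). The recursion is run with equality.
s = 1.0
for n in range(10 ** 5):
    omega = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    gamma = 1.0 / (n + 1) ** 2
    s = (1 - omega) * s + omega * delta + gamma
assert s < 1e-3   # s_n -> 0, as the lemma guarantees
```

For this choice one can even solve the recursion exactly: $n\,s_n=2H_n$ (a harmonic sum), so $s_n\approx2\ln n/n\to0$.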

Lemma 2.3

(Demiclosedness principle [21])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S:C\to C$ be a nonexpansive mapping with $\operatorname{Fix}(S)\neq\emptyset$. Then the mapping $I-S$ is demiclosed. That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n\rightharpoonup x$ and $(I-S)x_n\to y$, then $(I-S)x=y$. Here $I$ is the identity mapping of $H$.

Lemma 2.4

([22])

Let $H$ be a real Hilbert space. Let $A:H\to H$ be a strongly positive bounded linear operator with a constant $\bar\gamma>1$. Then

$$\langle(A-I)x-(A-I)y,\;x-y\rangle\ge(\bar\gamma-1)\|x-y\|^2,\quad\forall x,y\in H.$$

That is, $A-I$ is strongly monotone with constant $\bar\gamma-1$.

Lemma 2.5

([22])

Assume that $A:H\to H$ is a strongly positive bounded linear operator with a coefficient $\bar\gamma>0$ and $0<\zeta\le\|A\|^{-1}$. Then $\|I-\zeta A\|\le1-\zeta\bar\gamma$.

Lemma 2.6

([23])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $G:C\to H$ be a $\rho$-Lipschitzian and $\eta$-strongly monotone mapping with constants $\rho,\eta>0$. Let $0<\mu<\frac{2\eta}{\rho^2}$ and $0<t<\sigma\le1$. Then $S:=\sigma I-t\mu G:C\to H$ is a contractive mapping with constant $\sigma-t\tau$, where $\tau=1-\sqrt{1-\mu(2\eta-\mu\rho^2)}$.
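Lemma 2.6 can be illustrated with a linear example (added here as a sanity check with hypothetical constants, not from the paper): $G(x)=2x$ on $\mathbf{R}$ is $2$-Lipschitzian and $2$-strongly monotone, and $\mu=0.25$ satisfies $0<\mu<2\eta/\rho^2=1$.

```python
import math

# Illustrative check of Lemma 2.6 with G(x) = 2x (rho = eta = 2), mu = 0.25:
# tau = 1 - sqrt(1 - mu*(2*eta - mu*rho^2)) = 0.5, and S = sigma*I - t*mu*G
# is a contraction with constant sigma - t*tau.
rho = eta = 2.0
mu = 0.25                      # satisfies 0 < mu < 2*eta/rho^2 = 1
tau = 1.0 - math.sqrt(1.0 - mu * (2.0 * eta - mu * rho ** 2))

def G(x):
    return 2.0 * x

sigma, t = 1.0, 0.4            # 0 < t < sigma <= 1
for x, y in [(3.0, -1.0), (0.0, 7.0), (-2.5, 2.5)]:
    Sx = sigma * x - t * mu * G(x)
    Sy = sigma * y - t * mu * G(y)
    assert abs(Sx - Sy) <= (sigma - t * tau) * abs(x - y) + 1e-12
```

For this linear $G$ the contraction constant $\sigma-t\tau$ is attained exactly, since $Sx-Sy=(\sigma-0.5t)(x-y)$.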

Lemma 2.7

([24])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a continuous monotone mapping. Then, for $r>0$ and $x\in H$, there exists $z\in C$ such that

$$\langle y-z,\;Fz\rangle+\frac{1}{r}\langle y-z,\;z-x\rangle\ge0,\quad\forall y\in C.$$

For $r>0$ and $x\in H$, define $F_r:H\to C$ by

$$F_rx=\Bigl\{z\in C:\langle y-z,\;Fz\rangle+\frac{1}{r}\langle y-z,\;z-x\rangle\ge0,\ \forall y\in C\Bigr\}.$$

Then the following hold:

  • (i)

    $F_r$ is single-valued;

  • (ii)
    $F_r$ is firmly nonexpansive, that is,
    $\|F_rx-F_ry\|^2\le\langle x-y,\;F_rx-F_ry\rangle$, $\forall x,y\in H$;

  • (iii)

    $\operatorname{Fix}(F_r)=\operatorname{VI}(C,F)$;

  • (iv)

    $\operatorname{VI}(C,F)$ is a closed convex subset of $C$.
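On the whole real line the resolvent $F_r$ of Lemma 2.7 has a closed form that makes property (ii) easy to verify (an illustration added here with a hypothetical linear $F$, not from the paper): for $C=H=\mathbf{R}$ the defining inequality forces $Fz+(z-x)/r=0$, i.e. $F_r=(I+rF)^{-1}$.

```python
# Illustrative resolvent of Lemma 2.7 on C = H = R with the continuous
# monotone mapping F(z) = 2z: the defining inequality forces
# F(z) + (z - x)/r = 0, i.e. F_r x = x/(1 + 2r) = (I + rF)^{-1} x.
def F(z):
    return 2.0 * z

def F_r(x, r):
    return x / (1.0 + 2.0 * r)

r = 0.7
for x, y in [(3.0, -1.0), (0.5, 4.0), (-2.0, 6.0)]:
    fx, fy = F_r(x, r), F_r(y, r)
    # firm nonexpansivity, Lemma 2.7(ii):
    # ||F_r x - F_r y||^2 <= <x - y, F_r x - F_r y>
    assert (fx - fy) ** 2 <= (x - y) * (fx - fy) + 1e-12
assert F_r(0.0, r) == 0.0   # Fix(F_r) = VI(C, F) = {0} in this example
```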

Lemma 2.8

([24])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T:C\to H$ be a continuous pseudocontractive mapping. Then, for $r>0$ and $x\in H$, there exists $z\in C$ such that

$$\langle y-z,\;Tz\rangle-\frac{1}{r}\langle y-z,\;(1+r)z-x\rangle\le0,\quad\forall y\in C.$$

For $r>0$ and $x\in H$, define $T_r:H\to C$ by

$$T_rx=\Bigl\{z\in C:\langle y-z,\;Tz\rangle-\frac{1}{r}\langle y-z,\;(1+r)z-x\rangle\le0,\ \forall y\in C\Bigr\}.$$

Then the following hold:

  • (i)

    $T_r$ is single-valued;

  • (ii)
    $T_r$ is firmly nonexpansive, that is,
    $\|T_rx-T_ry\|^2\le\langle x-y,\;T_rx-T_ry\rangle$, $\forall x,y\in H$;

  • (iii)

    $\operatorname{Fix}(T_r)=\operatorname{Fix}(T)$;

  • (iv)

    $\operatorname{Fix}(T)$ is a closed convex subset of $C$.

Main results

Throughout this section, we always assume the following:

  • $B_i:C\to H$ is a $\mu_i$-inverse-strongly monotone mapping for each $i=1,2,\dots,N$;

  • $\Theta_i:C\times C\to\mathbf{R}$ is a bifunction satisfying conditions (A1)–(A4) for each $i=1,2,\dots,N$;

  • $\varphi_i:C\to\mathbf{R}$ is a proper lower semicontinuous and convex function with restriction (B1) or (B2) for each $i=1,2,\dots,N$;

  • $A:H\to H$ is a strongly positive bounded linear self-adjoint operator with a constant $\bar\gamma\in(1,2)$;

  • $V:C\to C$ is $l$-Lipschitzian with constant $l\in[0,\infty)$;

  • $G:C\to C$ is a $\rho$-Lipschitzian and $\eta$-strongly monotone mapping with constants $\rho>0$ and $\eta>0$;

  • the constants $\mu$, $l$, $\tau$, and $\gamma$ satisfy $0<\mu<\frac{2\eta}{\rho^2}$ and $0\le\gamma l<\tau$, where $\tau=1-\sqrt{1-\mu(2\eta-\mu\rho^2)}$;

  • $F_1,F_2:C\to H$ are continuous monotone mappings and $T:C\to C$ is a continuous pseudocontractive mapping such that $\Omega:=\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)\cap\operatorname{GSVI}(C,F_1,F_2)\cap\operatorname{Fix}(T)\neq\emptyset$;

  • $R_t=F_{1,\lambda_t}F_{2,\nu_t}:H\to C$, where $F_{1,\lambda_t},F_{2,\nu_t}:H\to C$ are defined as follows:
    $$F_{1,\lambda_t}x=\bigl\{z\in C:\langle y-z,F_1z\rangle+\tfrac{1}{\lambda_t}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\},$$
    $$F_{2,\nu_t}x=\bigl\{z\in C:\langle y-z,F_2z\rangle+\tfrac{1}{\nu_t}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\},$$
    for $\lambda_t,\nu_t\in(0,\infty)$, $t\in(0,1)$, $\lim_{t\to0}\lambda_t=\lambda>0$, and $\lim_{t\to0}\nu_t=\nu>0$;

  • $R_n=F_{1,\lambda_n}F_{2,\nu_n}:H\to C$, where $F_{1,\lambda_n},F_{2,\nu_n}:H\to C$ are defined as follows:
    $$F_{1,\lambda_n}x=\bigl\{z\in C:\langle y-z,F_1z\rangle+\tfrac{1}{\lambda_n}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\},$$
    $$F_{2,\nu_n}x=\bigl\{z\in C:\langle y-z,F_2z\rangle+\tfrac{1}{\nu_n}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\},$$
    for $\lambda_n,\nu_n\in(0,\infty)$, $\lim_{n\to\infty}\lambda_n=\lambda>0$, and $\lim_{n\to\infty}\nu_n=\nu>0$;

  • $T_{r_t}:H\to C$ is a mapping defined by
    $$T_{r_t}x=\bigl\{z\in C:\langle y-z,Tz\rangle-\tfrac{1}{r_t}\langle y-z,(1+r_t)z-x\rangle\le0,\ \forall y\in C\bigr\}$$
    for $r_t\in(0,\infty)$, $t\in(0,1)$, and $\liminf_{t\to0}r_t>0$;

  • $T_{r_n}:H\to C$ is a mapping defined by
    $$T_{r_n}x=\bigl\{z\in C:\langle y-z,Tz\rangle-\tfrac{1}{r_n}\langle y-z,(1+r_n)z-x\rangle\le0,\ \forall y\in C\bigr\}$$
    for $r_n\in(0,\infty)$ and $\liminf_{n\to\infty}r_n>0$;

  • $T_{r_{i,t}}^{(\Theta_i,\varphi_i)}:H\to C$ is a mapping defined by
    $$T_{r_{i,t}}^{(\Theta_i,\varphi_i)}x=\bigl\{z\in C:\Theta_i(z,y)+\varphi_i(y)-\varphi_i(z)+\tfrac{1}{r_{i,t}}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\}$$
    for $\{r_{i,t}\}_{t\in(0,1)}\subset[c_i,d_i]\subset(0,2\mu_i)$ and $i\in\{1,2,\dots,N\}$;

  • $T_{r_{i,n}}^{(\Theta_i,\varphi_i)}:H\to C$ is a mapping defined by
    $$T_{r_{i,n}}^{(\Theta_i,\varphi_i)}x=\bigl\{z\in C:\Theta_i(z,y)+\varphi_i(y)-\varphi_i(z)+\tfrac{1}{r_{i,n}}\langle y-z,z-x\rangle\ge0,\ \forall y\in C\bigr\}$$
    for $\{r_{i,n}\}_{n=1}^\infty\subset[c_i,d_i]\subset(0,2\mu_i)$ and $i\in\{1,2,\dots,N\}$.

By Proposition 2.1 and Lemmas 2.7 and 2.8, we note that $T_{r_{i,t}}^{(\Theta_i,\varphi_i)}$, $T_{r_{i,n}}^{(\Theta_i,\varphi_i)}$, $F_{1,\lambda_t}$, $F_{1,\lambda_n}$, $F_{2,\nu_t}$, $F_{2,\nu_n}$, $T_{r_t}$, and $T_{r_n}$ are nonexpansive, that $\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)=\operatorname{Fix}(T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i))=\operatorname{Fix}(T_{r_{i,n}}^{(\Theta_i,\varphi_i)}(I-r_{i,n}B_i))$, and that $\operatorname{Fix}(T)=\operatorname{Fix}(T_{r_t})=\operatorname{Fix}(T_{r_n})$. So the composite mappings $R_t=F_{1,\lambda_t}F_{2,\nu_t}$ and $R_n=F_{1,\lambda_n}F_{2,\nu_n}$ are nonexpansive. Also, $\operatorname{GSVI}(C,F_1,F_2)=\operatorname{Fix}(R_t)=\operatorname{Fix}(R_n)$ by Lemma 1.1.

In this section, for $t\in(0,1)$, $n\ge1$, and $i\in\{1,2,\dots,N\}$, we put

$$\Delta_t^i=T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)T_{r_{i-1,t}}^{(\Theta_{i-1},\varphi_{i-1})}(I-r_{i-1,t}B_{i-1})\cdots T_{r_{1,t}}^{(\Theta_1,\varphi_1)}(I-r_{1,t}B_1),$$
$$\Delta_n^i=T_{r_{i,n}}^{(\Theta_i,\varphi_i)}(I-r_{i,n}B_i)T_{r_{i-1,n}}^{(\Theta_{i-1},\varphi_{i-1})}(I-r_{i-1,n}B_{i-1})\cdots T_{r_{1,n}}^{(\Theta_1,\varphi_1)}(I-r_{1,n}B_1),$$

and $\Delta_t^0=\Delta_n^0=I$.

We now introduce the first general iterative scheme, which generates a net $\{x_t\}$ in an implicit way:

$$x_t=P_C\bigl[(I-\theta_tA)T_{r_t}\Delta_t^NR_tx_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)T_{r_t}\Delta_t^NR_tx_t\bigr)\bigr],\qquad(3.1)$$

where $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ and $\theta_t\in(0,\min\{\frac12,\|A\|^{-1}\})$.

We prove the strong convergence of $\{x_t\}$ as $t\to0$ to a point $\tilde x\in\Omega$, which is the unique solution of the VI

$$\langle(A-I)\tilde x,\;p-\tilde x\rangle\ge0,\quad\forall p\in\Omega.\qquad(3.2)$$

In the meantime, we also propose the second general iterative scheme, which generates a sequence $\{x_n\}$ in an explicit way:

$$\begin{cases}w_n=\alpha_n\gamma Vx_n+(I-\alpha_n\mu G)T_{r_n}\Delta_n^NR_nx_n,\\ x_{n+1}=P_C\bigl[(I-\beta_nA)T_{r_n}\Delta_n^NR_nx_n+\beta_nw_n\bigr],\end{cases}\quad n\ge0,\qquad(3.3)$$

where $\{\alpha_n\},\{\beta_n\}\subset[0,1]$ and $x_0\in C$ is an arbitrary initial guess, and establish the strong convergence of $\{x_n\}$ as $n\to\infty$ to the same point $\tilde x\in\Omega$, which is the unique solution of VI (3.2).
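To give a feeling for how scheme (3.3) behaves, here is a drastically simplified scalar instance (added as an illustration; every concrete choice below is hypothetical and not from the paper): take $C=H=\mathbf{R}$ so $P_C=I$, let all resolvent-type mappings $T_{r_n}$, $\Delta_n^N$, $R_n$ be the identity, $A=1.5$ (strongly positive with $\bar\gamma\in(1,2)$), $G=I$ with $\mu=1$ (so $\tau=1$), and $V\equiv0$. Then VI (3.2) forces $(A-I)\tilde x=0$, i.e. $\tilde x=0$.

```python
# Drastically simplified scalar instance of scheme (3.3); all resolvent-type
# mappings are taken to be the identity, so the scheme reduces to a damped
# iteration whose limit is the VI (3.2) solution x~ = 0.
A, mu, gamma = 1.5, 1.0, 0.0

def V(x):
    return 0.0

def G(x):
    return x

x = 1.0
for n in range(10 ** 4):
    alpha = 1.0 / (n + 2)          # alpha_n -> 0
    beta = 1.0 / (n + 2)           # beta_n -> 0 with sum beta_n = infinity
    w = alpha * gamma * V(x) + x - alpha * mu * G(x)
    x = (1 - beta * A) * x + beta * w
assert abs(x) < 0.05               # x_n approaches x~ = 0
```

The iterate contracts by the factor $1-0.5\beta_n-\alpha_n\beta_n$ at step $n$, and the divergence of $\sum\beta_n$ drives it to $0$, mirroring the role of condition $\sum\beta_n=\infty$ in Theorem 3.3 below.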

Next, for $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ and $\theta_t\in(0,\min\{\frac12,\|A\|^{-1}\})$, consider a mapping $Q_t:C\to C$ defined by

$$Q_tx=P_C\bigl[(I-\theta_tA)T_{r_t}\Delta_t^NR_tx+\theta_t\bigl(t\gamma Vx+(I-t\mu G)T_{r_t}\Delta_t^NR_tx\bigr)\bigr],\quad x\in C.$$

It is easy to see that $Q_t$ is a contractive mapping with constant $1-\theta_t(\bar\gamma-1+t(\tau-\gamma l))$. Indeed, by Propositions 2.1 and 2.2 and Lemmas 2.5 and 2.6, together with the nonexpansivity of $P_C$, we have

$$\begin{aligned}
\|Q_tx-Q_ty\|&\le\bigl\|(I-\theta_tA)T_{r_t}\Delta_t^NR_tx+\theta_t\bigl(t\gamma Vx+(I-t\mu G)T_{r_t}\Delta_t^NR_tx\bigr)\\
&\qquad-(I-\theta_tA)T_{r_t}\Delta_t^NR_ty-\theta_t\bigl(t\gamma Vy+(I-t\mu G)T_{r_t}\Delta_t^NR_ty\bigr)\bigr\|\\
&\le\|(I-\theta_tA)T_{r_t}\Delta_t^NR_tx-(I-\theta_tA)T_{r_t}\Delta_t^NR_ty\|\\
&\qquad+\theta_t\bigl\|\bigl(t\gamma Vx+(I-t\mu G)T_{r_t}\Delta_t^NR_tx\bigr)-\bigl(t\gamma Vy+(I-t\mu G)T_{r_t}\Delta_t^NR_ty\bigr)\bigr\|\\
&\le(1-\theta_t\bar\gamma)\|T_{r_t}\Delta_t^NR_tx-T_{r_t}\Delta_t^NR_ty\|\\
&\qquad+\theta_t\bigl[t\gamma\|Vx-Vy\|+\|(I-t\mu G)T_{r_t}\Delta_t^NR_tx-(I-t\mu G)T_{r_t}\Delta_t^NR_ty\|\bigr]\\
&\le(1-\theta_t\bar\gamma)\|x-y\|+\theta_t\bigl[t\gamma l\|x-y\|+(1-t\tau)\|x-y\|\bigr]\\
&=\bigl[1-\theta_t\bigl(\bar\gamma-1+t(\tau-\gamma l)\bigr)\bigr]\|x-y\|.
\end{aligned}$$

Since $\bar\gamma\in(1,2)$, $\tau-\gamma l>0$, and $0<t<\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\}\le\frac{2-\bar\gamma}{\tau-\gamma l}$, it follows that $0<\bar\gamma-1+t(\tau-\gamma l)<1$, which together with $0<\theta_t<\min\{\frac12,\|A\|^{-1}\}<1$ yields $0<1-\theta_t(\bar\gamma-1+t(\tau-\gamma l))<1$. Hence $Q_t$ is a contractive mapping. By the Banach contraction principle, $Q_t$ has a unique fixed point, denoted by $x_t$, which uniquely solves the fixed point equation (3.1).
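Numerically, the fixed point guaranteed by the Banach contraction principle can be approximated by Picard iteration; a toy stand-in for $Q_t$ (an illustration added here, with a hypothetical contraction $q$ of constant at most $0.5$) shows the idea:

```python
import math

# Picard iteration for a contraction q (|q'| <= 0.5), standing in for Q_t:
# iterates converge geometrically to the unique fixed point.
def q(x):
    return 0.5 * math.cos(x)

x = 0.0
for _ in range(60):
    x = q(x)
# after 60 steps the residual is below 0.5**60 times the initial error
assert abs(x - q(x)) < 1e-12
```

In the same way, each implicit iterate $x_t$ of (3.1) could in principle be computed by iterating $Q_t$, since its contraction constant $1-\theta_t(\bar\gamma-1+t(\tau-\gamma l))$ is strictly below $1$.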

We summarize the basic properties of $\{x_t\}$.

Theorem 3.1

Let $\{x_t\}$ be defined via (3.1). Then

  • (i)

    $\{x_t\}$ is bounded for $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$;

  • (ii)

    $\lim_{t\to0}\|x_t-R_tx_t\|=0$, $\lim_{t\to0}\|x_t-\Delta_t^Nx_t\|=0$, and $\lim_{t\to0}\|x_t-T_{r_t}x_t\|=0$, provided $\lim_{t\to0}\theta_t=0$;

  • (iii)

    $x_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to H$ is locally Lipschitzian provided $\theta_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to(0,\min\{\frac12,\|A\|^{-1}\})$ is locally Lipschitzian, $r_t,\lambda_t,\nu_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to(0,\infty)$ are locally Lipschitzian, and $r_{i,t}:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to[c_i,d_i]$ is locally Lipschitzian for each $i=1,2,\dots,N$;

  • (iv)

    $x_t$ defines a continuous path from $(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ into $H$ provided $\theta_t$, $r_t$, $\lambda_t$, $\nu_t$, and $r_{i,t}$ ($i=1,2,\dots,N$) are continuous on $(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ (with ranges as in (iii)).

Proof

Let $z_t=R_tx_t$, $u_t=\Delta_t^Nz_t$, and $v_t=T_{r_t}u_t$. Take $p\in\Omega$. Then $p=T_{r_t}p$ by Lemma 2.8(iii), $p=\Delta_t^ip$ $(=T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)p)$ by Proposition 2.1(iii), and $p=R_tp$ by Lemma 1.1.

(i) Utilizing Proposition 2.1(ii) and Proposition 2.2, we have

$$\begin{aligned}
\|u_t-p\|&=\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}p\bigr\|\\
&\le\bigl\|(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-(I-r_{N,t}B_N)\Delta_t^{N-1}p\bigr\|\\
&\le\bigl\|\Delta_t^{N-1}z_t-\Delta_t^{N-1}p\bigr\|\le\cdots\le\bigl\|\Delta_t^0z_t-\Delta_t^0p\bigr\|=\|z_t-p\|.
\end{aligned}\qquad(3.4)$$

Moreover, from the nonexpansivity of $R_t$ it is easy to see that

$$\|z_t-p\|=\|R_tx_t-R_tp\|\le\|x_t-p\|,$$

which together with the nonexpansivity of $T_{r_t}$ and (3.4) implies that

$$\|v_t-p\|=\|T_{r_t}u_t-T_{r_t}p\|\le\|u_t-p\|\le\|z_t-p\|\le\|x_t-p\|.\qquad(3.5)$$

By (3.5) and the nonexpansivity of $P_C$, we have

$$\begin{aligned}
\|x_t-p\|&\le\bigl\|(I-\theta_tA)v_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t\bigr)-p\bigr\|\\
&=\bigl\|(I-\theta_tA)v_t-(I-\theta_tA)p+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t-p\bigr)+\theta_t(I-A)p\bigr\|\\
&\le\|(I-\theta_tA)v_t-(I-\theta_tA)p\|+\theta_t\bigl\|t\gamma Vx_t+(I-t\mu G)v_t-p\bigr\|+\theta_t\|(I-A)p\|\\
&=\|(I-\theta_tA)v_t-(I-\theta_tA)p\|+\theta_t\bigl\|(I-t\mu G)v_t-(I-t\mu G)p+t(\gamma Vx_t-\mu Gp)\bigr\|+\theta_t\|(I-A)p\|\\
&\le(1-\theta_t\bar\gamma)\|v_t-p\|+\theta_t\bigl[\|(I-t\mu G)v_t-(I-t\mu G)p\|+t\bigl(\gamma\|Vx_t-Vp\|+\|\gamma Vp-\mu Gp\|\bigr)\bigr]+\theta_t\|(I-A)p\|\\
&\le(1-\theta_t\bar\gamma)\|x_t-p\|+\theta_t\bigl[(1-t\tau)\|x_t-p\|+t\bigl(\gamma l\|x_t-p\|+\|(\gamma V-\mu G)p\|\bigr)\bigr]+\theta_t\|(I-A)p\|\\
&=\bigl[1-\theta_t\bigl(\bar\gamma-1+t(\tau-\gamma l)\bigr)\bigr]\|x_t-p\|+\theta_t\bigl[\|(I-A)p\|+t\|(\gamma V-\mu G)p\|\bigr].
\end{aligned}$$

So it follows that

$$\|x_t-p\|\le\frac{\|(I-A)p\|+t\|(\gamma V-\mu G)p\|}{\bar\gamma-1+t(\tau-\gamma l)}\le\frac{\|(I-A)p\|+\|(\gamma V-\mu G)p\|}{\bar\gamma-1}.$$

Hence $\{x_t\}$ is bounded, and so are $\{Vx_t\}$, $\{u_t\}$, $\{v_t\}$, $\{z_t\}$, and $\{Gv_t\}$.

(ii) By the definition of $\{x_t\}$ and the fact that $v_t\in C$, we have

$$\begin{aligned}
\|x_t-v_t\|&=\bigl\|P_C\bigl[(I-\theta_tA)v_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t\bigr)\bigr]-v_t\bigr\|\\
&\le\bigl\|(I-\theta_tA)v_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t\bigr)-v_t\bigr\|\\
&=\bigl\|\theta_t\bigl[(I-A)v_t+t(\gamma Vx_t-\mu Gv_t)\bigr]\bigr\|\\
&\le\theta_t\bigl[\|(I-A)v_t\|+t\|\gamma Vx_t-\mu Gv_t\|\bigr]\to0\quad\text{as }t\to0,
\end{aligned}$$

using the boundedness of $\{Vx_t\}$, $\{v_t\}$, and $\{Gv_t\}$ established in the proof of assertion (i). That is,

$$\lim_{t\to0}\|x_t-v_t\|=0.\qquad(3.6)$$

In view of (3.5) and Lemma 2.7(ii), we get

$$\begin{aligned}
\|v_t-p\|^2&\le\|z_t-p\|^2=\|R_tx_t-R_tp\|^2=\|F_{1,\lambda_t}F_{2,\nu_t}x_t-F_{1,\lambda_t}F_{2,\nu_t}p\|^2\\
&\le\langle F_{2,\nu_t}x_t-F_{2,\nu_t}p,\;F_{1,\lambda_t}F_{2,\nu_t}x_t-F_{1,\lambda_t}F_{2,\nu_t}p\rangle=\langle F_{2,\nu_t}x_t-F_{2,\nu_t}p,\;z_t-p\rangle\\
&=\tfrac12\bigl[\|F_{2,\nu_t}x_t-F_{2,\nu_t}p\|^2+\|z_t-p\|^2-\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|^2\bigr]\\
&\le\tfrac12\bigl[\|x_t-p\|^2+\|x_t-p\|^2-\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|^2\bigr]\\
&=\|x_t-p\|^2-\tfrac12\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|^2,
\end{aligned}$$

which immediately yields

$$\tfrac12\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|^2\le\|x_t-p\|^2-\|v_t-p\|^2\le\bigl(\|x_t-p\|+\|v_t-p\|\bigr)\|x_t-v_t\|.$$

From (3.6) and the boundedness of $\{x_t\}$ and $\{v_t\}$, we have

$$\lim_{t\to0}\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|=0.\qquad(3.7)$$

Again from (3.5) and Lemma 2.7(ii), we obtain

$$\begin{aligned}
\|v_t-p\|^2&\le\|z_t-p\|^2=\|R_tx_t-R_tp\|^2\le\|F_{2,\nu_t}x_t-F_{2,\nu_t}p\|^2\le\langle x_t-p,\;F_{2,\nu_t}x_t-F_{2,\nu_t}p\rangle\\
&=\tfrac12\bigl[\|x_t-p\|^2+\|F_{2,\nu_t}x_t-F_{2,\nu_t}p\|^2-\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|^2\bigr]\\
&\le\tfrac12\bigl[\|x_t-p\|^2+\|x_t-p\|^2-\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|^2\bigr]\\
&=\|x_t-p\|^2-\tfrac12\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|^2,
\end{aligned}$$

which hence leads to

$$\tfrac12\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|^2\le\|x_t-p\|^2-\|v_t-p\|^2\le\bigl(\|x_t-p\|+\|v_t-p\|\bigr)\|x_t-v_t\|.$$

Again from (3.6) and the boundedness of $\{x_t\}$ and $\{v_t\}$, we have

$$\lim_{t\to0}\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|=0.\qquad(3.8)$$

So it follows from (3.7) and (3.8) that

$$\|x_t-z_t\|\le\|(x_t-p)-(F_{2,\nu_t}x_t-F_{2,\nu_t}p)\|+\|(F_{2,\nu_t}x_t-F_{2,\nu_t}p)-(z_t-p)\|\to0\quad\text{as }t\to0.$$

That is,

$$\lim_{t\to0}\|x_t-z_t\|=0.\qquad(3.9)$$

Furthermore, from (3.5), Proposition 2.1(ii), and Proposition 2.2, it follows that

$$\begin{aligned}
\|v_t-p\|^2&\le\|u_t-p\|^2=\|\Delta_t^Nz_t-p\|^2\le\|\Delta_t^iz_t-p\|^2\\
&=\bigl\|T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)p\bigr\|^2\\
&\le\bigl\|(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)p\bigr\|^2\\
&\le\|\Delta_t^{i-1}z_t-p\|^2+r_{i,t}(r_{i,t}-2\mu_i)\|B_i\Delta_t^{i-1}z_t-B_ip\|^2\\
&\le\|z_t-p\|^2+r_{i,t}(r_{i,t}-2\mu_i)\|B_i\Delta_t^{i-1}z_t-B_ip\|^2\\
&\le\|x_t-p\|^2+r_{i,t}(r_{i,t}-2\mu_i)\|B_i\Delta_t^{i-1}z_t-B_ip\|^2,
\end{aligned}$$

which together with $\{r_{i,t}\}_{t\in(0,1)}\subset[c_i,d_i]\subset(0,2\mu_i)$ for $i\in\{1,2,\dots,N\}$ implies that

$$c_i(2\mu_i-d_i)\|B_i\Delta_t^{i-1}z_t-B_ip\|^2\le r_{i,t}(2\mu_i-r_{i,t})\|B_i\Delta_t^{i-1}z_t-B_ip\|^2\le\|x_t-p\|^2-\|v_t-p\|^2\le\bigl(\|x_t-p\|+\|v_t-p\|\bigr)\|x_t-v_t\|.$$

From (3.6) and the boundedness of $\{x_t\}$ and $\{v_t\}$, we have

$$\lim_{t\to0}\|B_i\Delta_t^{i-1}z_t-B_ip\|=0.\qquad(3.10)$$

Also, by Proposition 2.1(ii), we obtain that, for each $i=1,2,\dots,N$,

$$\begin{aligned}
\|\Delta_t^iz_t-p\|^2&=\bigl\|T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)p\bigr\|^2\\
&\le\bigl\langle(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)p,\;\Delta_t^iz_t-p\bigr\rangle\\
&=\tfrac12\bigl[\bigl\|(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)p\bigr\|^2+\|\Delta_t^iz_t-p\|^2\\
&\qquad-\bigl\|(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)p-(\Delta_t^iz_t-p)\bigr\|^2\bigr]\\
&\le\tfrac12\bigl[\|\Delta_t^{i-1}z_t-p\|^2+\|\Delta_t^iz_t-p\|^2-\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|^2\bigr]\\
&\le\tfrac12\bigl[\|x_t-p\|^2+\|\Delta_t^iz_t-p\|^2-\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|^2\bigr],
\end{aligned}$$

which immediately implies that

$$\|\Delta_t^iz_t-p\|^2\le\|x_t-p\|^2-\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|^2.$$

This together with (3.5) leads to

$$\|v_t-p\|^2\le\|u_t-p\|^2=\|\Delta_t^Nz_t-p\|^2\le\|\Delta_t^iz_t-p\|^2\le\|x_t-p\|^2-\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|^2,$$

which hence implies

$$\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|^2\le\|x_t-p\|^2-\|v_t-p\|^2\le\bigl(\|x_t-p\|+\|v_t-p\|\bigr)\|x_t-v_t\|.$$

From (3.6) and the boundedness of $\{x_t\}$ and $\{v_t\}$, we have

$$\lim_{t\to0}\bigl\|\Delta_t^{i-1}z_t-\Delta_t^iz_t-r_{i,t}(B_i\Delta_t^{i-1}z_t-B_ip)\bigr\|=0,$$

which together with (3.10) implies that, for each $i=1,2,\dots,N$,

$$\lim_{t\to0}\|\Delta_t^{i-1}z_t-\Delta_t^iz_t\|=0.\qquad(3.11)$$

Note that

$$\|z_t-u_t\|\le\sum_{i=1}^N\|\Delta_t^{i-1}z_t-\Delta_t^iz_t\|.$$

From (3.11), it is easy to see that

$$\lim_{t\to0}\|z_t-u_t\|=0.\qquad(3.12)$$

Also, observe that

$$\|x_t-\Delta_t^Nx_t\|\le\|x_t-z_t\|+\|z_t-\Delta_t^Nz_t\|+\|\Delta_t^Nz_t-\Delta_t^Nx_t\|\le\|x_t-z_t\|+\|z_t-\Delta_t^Nz_t\|+\|z_t-x_t\|=2\|x_t-z_t\|+\|z_t-u_t\|.$$

From (3.9) and (3.12), it is easy to see that

$$\lim_{t\to0}\|x_t-\Delta_t^Nx_t\|=0.\qquad(3.13)$$

In the meantime, again from (3.5) and Lemma 2.8(ii), we obtain

$$\begin{aligned}
\|v_t-p\|^2&=\|T_{r_t}u_t-T_{r_t}p\|^2\le\langle u_t-p,\;T_{r_t}u_t-T_{r_t}p\rangle=\langle u_t-p,\;v_t-p\rangle\\
&=\tfrac12\bigl[\|u_t-p\|^2+\|v_t-p\|^2-\|(u_t-p)-(v_t-p)\|^2\bigr]\\
&\le\tfrac12\bigl[\|x_t-p\|^2+\|x_t-p\|^2-\|u_t-v_t\|^2\bigr]=\|x_t-p\|^2-\tfrac12\|u_t-v_t\|^2,
\end{aligned}$$

which immediately yields

$$\tfrac12\|u_t-v_t\|^2\le\|x_t-p\|^2-\|v_t-p\|^2\le\bigl(\|x_t-p\|+\|v_t-p\|\bigr)\|x_t-v_t\|.$$

From (3.6) and the boundedness of $\{x_t\}$ and $\{v_t\}$, we have

$$\lim_{t\to0}\|u_t-v_t\|=0.\qquad(3.14)$$

Taking into account that

$$\|x_t-T_{r_t}x_t\|\le\|x_t-u_t\|+\|u_t-T_{r_t}u_t\|+\|T_{r_t}u_t-T_{r_t}x_t\|\le\|x_t-u_t\|+\|u_t-v_t\|+\|u_t-x_t\|=2\|x_t-u_t\|+\|u_t-v_t\|\le2\bigl(\|x_t-z_t\|+\|z_t-u_t\|\bigr)+\|u_t-v_t\|,$$

we deduce from (3.9), (3.12), and (3.14) that

$$\lim_{t\to0}\|x_t-T_{r_t}x_t\|=0.\qquad(3.15)$$

(iii) Let $t,t_0\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$. Since $v_t=T_{r_t}u_t$ and $v_{t_0}=T_{r_{t_0}}u_{t_0}$, we get

$$\langle y-v_t,\;(I-T)v_t\rangle+\frac{1}{r_t}\langle y-v_t,\;v_t-u_t\rangle\ge0,\quad\forall y\in C,\qquad(3.16)$$

and

$$\langle y-v_{t_0},\;(I-T)v_{t_0}\rangle+\frac{1}{r_{t_0}}\langle y-v_{t_0},\;v_{t_0}-u_{t_0}\rangle\ge0,\quad\forall y\in C.\qquad(3.17)$$

Putting $y=v_{t_0}$ in (3.16) and $y=v_t$ in (3.17), we obtain

$$\langle v_{t_0}-v_t,\;(I-T)v_t\rangle+\frac{1}{r_t}\langle v_{t_0}-v_t,\;v_t-u_t\rangle\ge0\qquad(3.18)$$

and

$$\langle v_t-v_{t_0},\;(I-T)v_{t_0}\rangle+\frac{1}{r_{t_0}}\langle v_t-v_{t_0},\;v_{t_0}-u_{t_0}\rangle\ge0.\qquad(3.19)$$

Adding up (3.18) and (3.19), we have

$$-\bigl\langle v_t-v_{t_0},\;(I-T)v_t-(I-T)v_{t_0}\bigr\rangle+\Bigl\langle v_{t_0}-v_t,\;\frac{v_t-u_t}{r_t}-\frac{v_{t_0}-u_{t_0}}{r_{t_0}}\Bigr\rangle\ge0.$$

Since $T$ is pseudocontractive, we know that $I-T$ is a monotone mapping, so that

$$\Bigl\langle v_{t_0}-v_t,\;\frac{v_t-u_t}{r_t}-\frac{v_{t_0}-u_{t_0}}{r_{t_0}}\Bigr\rangle\ge0,$$

and hence

$$\Bigl\langle v_t-v_{t_0},\;v_{t_0}-v_t+v_t-u_{t_0}-\frac{r_{t_0}}{r_t}(v_t-u_t)\Bigr\rangle\ge0.\qquad(3.20)$$

Taking into account that $\liminf_{t\to0}r_t>0$, without loss of generality we may assume that $r_t>b>0$ for all $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ and some $b>0$. Then from (3.20) we have

$$\begin{aligned}
\|v_t-v_{t_0}\|^2&\le\Bigl\langle v_t-v_{t_0},\;v_t-u_{t_0}-\frac{r_{t_0}}{r_t}(v_t-u_t)\Bigr\rangle=\Bigl\langle v_t-v_{t_0},\;u_t-u_{t_0}+\Bigl(1-\frac{r_{t_0}}{r_t}\Bigr)(v_t-u_t)\Bigr\rangle\\
&\le\|v_t-v_{t_0}\|\Bigl\{\|u_t-u_{t_0}\|+\Bigl|1-\frac{r_{t_0}}{r_t}\Bigr|\,\|v_t-u_t\|\Bigr\},
\end{aligned}$$

which immediately yields

$$\|v_t-v_{t_0}\|\le\|u_t-u_{t_0}\|+\frac{1}{r_t}|r_t-r_{t_0}|\,\|v_t-u_t\|\le\|u_t-u_{t_0}\|+\frac{\tilde L_1}{b}|r_t-r_{t_0}|,\qquad(3.21)$$

where $\tilde L_1=\sup\{\|v_t-u_t\|:t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\}$.

Also, taking into account that $\lim_{t\to0}\lambda_t=\lambda>0$ and $\lim_{t\to0}\nu_t=\nu>0$, without loss of generality we may assume that $\min\{\lambda_t,\nu_t\}>a>0$ for all $t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ and some $a>0$. Since $z_t=F_{1,\lambda_t}y_t$ and $z_{t_0}=F_{1,\lambda_{t_0}}y_{t_0}$, where $y_t=F_{2,\nu_t}x_t$ and $y_{t_0}=F_{2,\nu_{t_0}}x_{t_0}$ for $t,t_0\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$, by using arguments similar to those of (3.21) we get

$$\|z_t-z_{t_0}\|\le\|y_t-y_{t_0}\|+\frac{1}{a}|\lambda_t-\lambda_{t_0}|\tilde L_2\qquad(3.22)$$

and

$$\|y_t-y_{t_0}\|\le\|x_t-x_{t_0}\|+\frac{1}{a}|\nu_t-\nu_{t_0}|\tilde L_2,\qquad(3.23)$$

where $\tilde L_2=\sup\{\|z_t-y_t\|+\|y_t-x_t\|:t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\}$. Substituting (3.23) into (3.22), we obtain

$$\|z_t-z_{t_0}\|\le\|x_t-x_{t_0}\|+\frac{\tilde L_2}{a}\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|\bigr).\qquad(3.24)$$

In the meantime, by Proposition 2.1(ii), (v) and Proposition 2.2, we deduce that

$$\begin{aligned}
\|u_t-u_{t_0}\|&=\|\Delta_t^Nz_t-\Delta_{t_0}^Nz_{t_0}\|\\
&=\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t_0}B_N)\Delta_{t_0}^{N-1}z_{t_0}\bigr\|\\
&\le\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t_0}B_N)\Delta_t^{N-1}z_t\bigr\|\\
&\qquad+\bigl\|T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t_0}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t_0}B_N)\Delta_{t_0}^{N-1}z_{t_0}\bigr\|\\
&\le\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t\bigr\|\\
&\qquad+\bigl\|T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-T_{r_{N,t_0}}^{(\Theta_N,\varphi_N)}(I-r_{N,t_0}B_N)\Delta_t^{N-1}z_t\bigr\|\\
&\qquad+\bigl\|(I-r_{N,t_0}B_N)\Delta_t^{N-1}z_t-(I-r_{N,t_0}B_N)\Delta_{t_0}^{N-1}z_{t_0}\bigr\|\\
&\le\frac{|r_{N,t}-r_{N,t_0}|}{r_{N,t}}\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-(I-r_{N,t}B_N)\Delta_t^{N-1}z_t\bigr\|\\
&\qquad+|r_{N,t}-r_{N,t_0}|\,\|B_N\Delta_t^{N-1}z_t\|+\|\Delta_t^{N-1}z_t-\Delta_{t_0}^{N-1}z_{t_0}\|\\
&=|r_{N,t}-r_{N,t_0}|\Bigl[\|B_N\Delta_t^{N-1}z_t\|+\frac{1}{r_{N,t}}\bigl\|T_{r_{N,t}}^{(\Theta_N,\varphi_N)}(I-r_{N,t}B_N)\Delta_t^{N-1}z_t-(I-r_{N,t}B_N)\Delta_t^{N-1}z_t\bigr\|\Bigr]\\
&\qquad+\|\Delta_t^{N-1}z_t-\Delta_{t_0}^{N-1}z_{t_0}\|\\
&\le\cdots\le\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|\Bigl[\|B_i\Delta_t^{i-1}z_t\|+\frac{1}{r_{i,t}}\bigl\|T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)\Delta_t^{i-1}z_t\bigr\|\Bigr]+\|\Delta_t^0z_t-\Delta_{t_0}^0z_{t_0}\|\\
&\le\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|+\|z_t-z_{t_0}\|,
\end{aligned}\qquad(3.25)$$

where

$$\sup_{t\in(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})}\Biggl\{\sum_{i=1}^N\Bigl[\|B_i\Delta_t^{i-1}z_t\|+\frac{1}{r_{i,t}}\bigl\|T_{r_{i,t}}^{(\Theta_i,\varphi_i)}(I-r_{i,t}B_i)\Delta_t^{i-1}z_t-(I-r_{i,t}B_i)\Delta_t^{i-1}z_t\bigr\|\Bigr]\Biggr\}\le\tilde L_3$$

for some $\tilde L_3>0$. This together with (3.21) and (3.24) implies that

$$\begin{aligned}
\|v_t-v_{t_0}\|&\le\|u_t-u_{t_0}\|+\frac{\tilde L_1}{b}|r_t-r_{t_0}|\le\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|+\|z_t-z_{t_0}\|+\frac{\tilde L_1}{b}|r_t-r_{t_0}|\\
&\le\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|+\|x_t-x_{t_0}\|+\frac{\tilde L_2}{a}\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|\bigr)+\frac{\tilde L_1}{b}|r_t-r_{t_0}|\\
&\le\|x_t-x_{t_0}\|+\Bigl(\frac{\tilde L_1}{b}+\frac{\tilde L_2}{a}\Bigr)\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|+|r_t-r_{t_0}|\bigr)+\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|.
\end{aligned}$$

Taking into account that $\theta_{t_0}\in(0,\min\{\frac12,\|A\|^{-1}\})$ and $0\le\gamma l<\tau=1-\sqrt{1-\mu(2\eta-\mu\rho^2)}$ imply

$$0<1-\theta_{t_0}(\bar\gamma-1+t_0\tau)<1,$$

we calculate from (3.1)

$$\begin{aligned}
\|x_t-x_{t_0}\|&\le\bigl\|(I-\theta_tA)T_{r_t}\Delta_t^NR_tx_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)T_{r_t}\Delta_t^NR_tx_t\bigr)\\
&\qquad-(I-\theta_{t_0}A)T_{r_{t_0}}\Delta_{t_0}^NR_{t_0}x_{t_0}-\theta_{t_0}\bigl(t_0\gamma Vx_{t_0}+(I-t_0\mu G)T_{r_{t_0}}\Delta_{t_0}^NR_{t_0}x_{t_0}\bigr)\bigr\|\\
&=\bigl\|(I-\theta_tA)v_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t\bigr)-(I-\theta_{t_0}A)v_{t_0}-\theta_{t_0}\bigl(t_0\gamma Vx_{t_0}+(I-t_0\mu G)v_{t_0}\bigr)\bigr\|\\
&\le\|(I-\theta_tA)v_t-(I-\theta_{t_0}A)v_t\|+\|(I-\theta_{t_0}A)v_t-(I-\theta_{t_0}A)v_{t_0}\|+|\theta_t-\theta_{t_0}|\bigl\|t\gamma Vx_t+(I-t\mu G)v_t\bigr\|\\
&\qquad+\theta_{t_0}\bigl\|\bigl[t\gamma Vx_t+(I-t\mu G)v_t\bigr]-\bigl[t_0\gamma Vx_{t_0}+(I-t_0\mu G)v_{t_0}\bigr]\bigr\|\\
&\le|\theta_t-\theta_{t_0}|\,\|Av_t\|+(1-\theta_{t_0}\bar\gamma)\|v_t-v_{t_0}\|+|\theta_t-\theta_{t_0}|\bigl\|t\gamma Vx_t+(I-t\mu G)v_t\bigr\|\\
&\qquad+\theta_{t_0}\bigl\|(t-t_0)\gamma Vx_t+t_0\gamma(Vx_t-Vx_{t_0})-(t-t_0)\mu Gv_t+(I-t_0\mu G)v_t-(I-t_0\mu G)v_{t_0}\bigr\|\\
&\le|\theta_t-\theta_{t_0}|\,\|Av_t\|+(1-\theta_{t_0}\bar\gamma)\|v_t-v_{t_0}\|+|\theta_t-\theta_{t_0}|\bigl[\|v_t\|+t\bigl(\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr)\bigr]\\
&\qquad+\theta_{t_0}\bigl[\bigl(\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr)|t-t_0|+t_0\gamma l\|x_t-x_{t_0}\|+(1-t_0\tau)\|v_t-v_{t_0}\|\bigr]\\
&\le|\theta_t-\theta_{t_0}|\bigl[\|v_t\|+\|Av_t\|+\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr]+\theta_{t_0}t_0\gamma l\|x_t-x_{t_0}\|\\
&\qquad+\bigl[1-\theta_{t_0}(\bar\gamma-1+t_0\tau)\bigr]\Bigl\{\|x_t-x_{t_0}\|+\Bigl(\frac{\tilde L_1}{b}+\frac{\tilde L_2}{a}\Bigr)\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|+|r_t-r_{t_0}|\bigr)+\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|\Bigr\}\\
&\qquad+\theta_{t_0}\bigl(\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr)|t-t_0|\\
&=|\theta_t-\theta_{t_0}|\bigl[\|v_t\|+\|Av_t\|+\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr]+\bigl[1-\theta_{t_0}\bigl(\bar\gamma-1+t_0(\tau-\gamma l)\bigr)\bigr]\|x_t-x_{t_0}\|\\
&\qquad+\bigl[1-\theta_{t_0}(\bar\gamma-1+t_0\tau)\bigr]\Bigl\{\Bigl(\frac{\tilde L_1}{b}+\frac{\tilde L_2}{a}\Bigr)\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|+|r_t-r_{t_0}|\bigr)+\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|\Bigr\}\\
&\qquad+\theta_{t_0}\bigl(\gamma\|Vx_t\|+\mu\|Gv_t\|\bigr)|t-t_0|.
\end{aligned}$$

This immediately implies that

$$\begin{aligned}
\|x_t-x_{t_0}\|&\le\frac{\|v_t\|+\|Av_t\|+\gamma\|Vx_t\|+\mu\|Gv_t\|}{\theta_{t_0}\bigl(\bar\gamma-1+t_0(\tau-\gamma l)\bigr)}|\theta_t-\theta_{t_0}|+\frac{\gamma\|Vx_t\|+\mu\|Gv_t\|}{\bar\gamma-1+t_0(\tau-\gamma l)}|t-t_0|\\
&\qquad+\frac{1-\theta_{t_0}(\bar\gamma-1+t_0\tau)}{\theta_{t_0}\bigl(\bar\gamma-1+t_0(\tau-\gamma l)\bigr)}\Bigl\{\Bigl(\frac{\tilde L_1}{b}+\frac{\tilde L_2}{a}\Bigr)\bigl(|\lambda_t-\lambda_{t_0}|+|\nu_t-\nu_{t_0}|+|r_t-r_{t_0}|\bigr)+\tilde L_3\sum_{i=1}^N|r_{i,t}-r_{i,t_0}|\Bigr\}.
\end{aligned}$$

Since $\theta_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to(0,\min\{\frac12,\|A\|^{-1}\})$ is locally Lipschitzian, $r_t,\lambda_t,\nu_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to(0,\infty)$ are locally Lipschitzian, and $r_{i,t}:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to[c_i,d_i]$ is locally Lipschitzian for each $i=1,2,\dots,N$, we deduce that $x_t:(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})\to H$ is locally Lipschitzian.

(iv) From the last inequality in (iii), the desired result follows immediately. □

We prove the following strong convergence theorem for the net $\{x_t\}$ as $t\to0$, which guarantees the existence of solutions of the variational inequality (3.2).

Theorem 3.2

Let the net $\{x_t\}$ be defined via (3.1). If $\lim_{t\to0}\theta_t=0$, then $x_t$ converges strongly to $\tilde x\in\Omega$ as $t\to0$, which solves VI (3.2). Equivalently, we have $P_\Omega(2I-A)\tilde x=\tilde x$.

Proof

We first note that the uniqueness of a solution of VI (3.2) is a consequence of the strong monotonicity of $A-I$ (due to Lemma 2.4). See [2, 4, 5] for this fact.

Next, we prove that $x_t\to\tilde x$ as $t\to0$. For simplicity, let $v_t=T_{r_t}u_t$, $u_t=\Delta_t^Nz_t$, $y_t=F_{2,\nu_t}x_t$, and $z_t=R_tx_t=F_{1,\lambda_t}y_t$. For any given $p\in\Omega$, we observe that $T_{r_t}p=p$, $\Delta_t^Np=p$, and $R_tp=p$. From (3.1), we write

$$\begin{aligned}
x_t-p&=x_t-w_t+w_t-p=x_t-w_t+(I-\theta_tA)v_t+\theta_t\bigl(t\gamma Vx_t+(I-t\mu G)v_t\bigr)-p\\
&=x_t-w_t+(I-\theta_tA)(v_t-p)+\theta_t\bigl[t(\gamma Vx_t-\mu Gp)+(I-t\mu G)v_t-(I-t\mu G)p\bigr]+\theta_t(I-A)p,
\end{aligned}$$

where $w_t=(I-\theta_tA)v_t+\theta_t(t\gamma Vx_t+(I-t\mu G)v_t)$. In terms of (2.1) and (3.5), and noting that $\langle x_t-w_t,\;x_t-p\rangle\le0$ since $x_t=P_C w_t$, we have

$$\begin{aligned}
\|x_t-p\|^2&=\langle x_t-w_t,\;x_t-p\rangle+\langle(I-\theta_tA)(v_t-p),\;x_t-p\rangle\\
&\qquad+\theta_t\bigl[t\langle\gamma Vx_t-\mu Gp,\;x_t-p\rangle+\langle(I-t\mu G)v_t-(I-t\mu G)p,\;x_t-p\rangle\bigr]+\theta_t\langle(I-A)p,\;x_t-p\rangle\\
&\le(1-\theta_t\bar\gamma)\|x_t-p\|^2+\theta_t\bigl[(1-t\tau)\|x_t-p\|^2+t\gamma l\|x_t-p\|^2+t\langle(\gamma V-\mu G)p,\;x_t-p\rangle\bigr]+\theta_t\langle(I-A)p,\;x_t-p\rangle\\
&=\bigl[1-\theta_t\bigl(\bar\gamma-1+t(\tau-\gamma l)\bigr)\bigr]\|x_t-p\|^2+\theta_t\bigl(t\langle(\gamma V-\mu G)p,\;x_t-p\rangle+\langle(I-A)p,\;x_t-p\rangle\bigr).
\end{aligned}$$

Therefore,

$$\|x_t-p\|^2\le\frac{1}{\bar\gamma-1+t(\tau-\gamma l)}\bigl(t\langle(\gamma V-\mu G)p,\;x_t-p\rangle+\langle(I-A)p,\;x_t-p\rangle\bigr).\qquad(3.26)$$

Since $\{x_t\}$ is bounded as $t\to0$ (due to Theorem 3.1(i)), there exists a subsequence $\{t_n\}$ in $(0,\min\{1,\frac{2-\bar\gamma}{\tau-\gamma l}\})$ such that $t_n\to0$ and $x_{t_n}\rightharpoonup x^*$. We first show that $x^*\in\Omega$. To this end, we divide the proof into four steps.

Step 1. We claim that $\lim_{n\to\infty}\|x_{t_n}-z_{t_n}\|=0$, $\lim_{n\to\infty}\|z_{t_n}-u_{t_n}\|=0$, and $\lim_{n\to\infty}\|u_{t_n}-v_{t_n}\|=0$, where $z_{t_n}=R_{t_n}x_{t_n}$, $u_{t_n}=\Delta_{t_n}^Nz_{t_n}$, and $v_{t_n}=T_{r_{t_n}}u_{t_n}$. Indeed, according to (3.9), (3.12), and (3.14) in the proof of Theorem 3.1, we obtain the assertion.

Step 2. We claim that $x^*\in\operatorname{Fix}(T)$. In fact, from the definition of $v_{t_n}=T_{r_{t_n}}u_{t_n}$, we have

$$\langle y-v_{t_n},\;(I-T)v_{t_n}\rangle+\Bigl\langle y-v_{t_n},\;\frac{v_{t_n}-u_{t_n}}{r_{t_n}}\Bigr\rangle\ge0,\quad\forall y\in C.\qquad(3.27)$$

Set $w_t=tv+(1-t)x^*$ for all $t\in(0,1]$ and $v\in C$. Then $w_t\in C$. From (3.27) it follows that

$$\begin{aligned}
\langle w_t-v_{t_n},\;(I-T)w_t\rangle&\ge\langle w_t-v_{t_n},\;(I-T)w_t\rangle-\langle w_t-v_{t_n},\;(I-T)v_{t_n}\rangle-\Bigl\langle w_t-v_{t_n},\;\frac{v_{t_n}-u_{t_n}}{r_{t_n}}\Bigr\rangle\\
&=\langle w_t-v_{t_n},\;(I-T)w_t-(I-T)v_{t_n}\rangle-\Bigl\langle w_t-v_{t_n},\;\frac{v_{t_n}-u_{t_n}}{r_{t_n}}\Bigr\rangle.
\end{aligned}\qquad(3.28)$$

By Step 1, we have $\frac{v_{t_n}-u_{t_n}}{r_{t_n}}\to0$ as $n\to\infty$. Moreover, since $x_{t_n}\rightharpoonup x^*$, by Step 1 we have $v_{t_n}\rightharpoonup x^*$. Since $I-T$ is monotone, we also have that $\langle w_t-v_{t_n},\;(I-T)w_t-(I-T)v_{t_n}\rangle\ge0$. Thus, from (3.28) it follows that

$$0\le\lim_{n\to\infty}\langle w_t-v_{t_n},\;(I-T)w_t\rangle=\langle w_t-x^*,\;(I-T)w_t\rangle,$$

and hence

$$\langle v-x^*,\;(I-T)w_t\rangle\ge0,\quad\forall v\in C.$$

Letting $t\to0$, we know from the continuity of $I-T$ that

$$\langle v-x^*,\;(I-T)x^*\rangle\ge0,\quad\forall v\in C.$$

Putting $v=Tx^*$, we get $-\|(I-T)x^*\|^2\ge0$, so $\|(I-T)x^*\|^2=0$, which leads to $x^*\in\operatorname{Fix}(T)$.

Step 3. We claim that $x^*\in\operatorname{GSVI}(C,F_1,F_2)$. Indeed, note that $\lim_{t\to0}\lambda_t=\lambda>0$ and $\lim_{t\to0}\nu_t=\nu>0$. For each $x\in C$, we put $x(t):=F_{1,\lambda_t}x$, $x(0):=F_{1,\lambda}x$, $y(t):=F_{2,\nu_t}x$, and $y(0):=F_{2,\nu}x$. Then, by Lemma 1.1, we have $\operatorname{GSVI}(C,F_1,F_2)=\operatorname{Fix}(R)$, where $R=F_{1,\lambda}F_{2,\nu}$ and $R$ is nonexpansive. Moreover, it is easy to see that

$$\langle y-x(t),\;F_1x(t)\rangle+\frac{1}{\lambda_t}\langle y-x(t),\;x(t)-x\rangle\ge0,\quad\forall y\in C,\qquad(3.29)$$

and

$$\langle y-x(0),\;F_1x(0)\rangle+\frac{1}{\lambda}\langle y-x(0),\;x(0)-x\rangle\ge0,\quad\forall y\in C.\qquad(3.30)$$

Putting $y=x(0)$ in (3.29) and $y=x(t)$ in (3.30), we obtain

$$\langle x(0)-x(t),\;F_1x(t)\rangle+\frac{1}{\lambda_t}\langle x(0)-x(t),\;x(t)-x\rangle\ge0\qquad(3.31)$$

and

$$\langle x(t)-x(0),\;F_1x(0)\rangle+\frac{1}{\lambda}\langle x(t)-x(0),\;x(0)-x\rangle\ge0.\qquad(3.32)$$

Adding up (3.31) and (3.32), we have

$$-\langle x(t)-x(0),\;F_1x(t)-F_1x(0)\rangle+\Bigl\langle x(0)-x(t),\;\frac{x(t)-x}{\lambda_t}-\frac{x(0)-x}{\lambda}\Bigr\rangle\ge0.$$

Since $F_1$ is a monotone mapping, we know that

$$\Bigl\langle x(0)-x(t),\;\frac{x(t)-x}{\lambda_t}-\frac{x(0)-x}{\lambda}\Bigr\rangle\ge0,$$

and hence

$$\Bigl\langle x(t)-x(0),\;x(0)-x(t)+x(t)-x-\frac{\lambda}{\lambda_t}\bigl(x(t)-x\bigr)\Bigr\rangle\ge0.$$

So it follows that

$$\|x(t)-x(0)\|^2\le\Bigl\langle x(t)-x(0),\;x(t)-x-\frac{\lambda}{\lambda_t}\bigl(x(t)-x\bigr)\Bigr\rangle=\Bigl\langle x(t)-x(0),\;\Bigl(1-\frac{\lambda}{\lambda_t}\Bigr)\bigl(x(t)-x\bigr)\Bigr\rangle\le\|x(t)-x(0)\|\frac{|\lambda_t-\lambda|}{\lambda_t}\|x(t)-x\|,$$

which immediately yields

$$\|F_{1,\lambda_t}x-F_{1,\lambda}x\|\le\frac{|\lambda_t-\lambda|}{\lambda_t}\|F_{1,\lambda_t}x-x\|.\qquad(3.33)$$

By using arguments similar to those of (3.33), we have

$$\|F_{2,\nu_t}x-F_{2,\nu}x\|\le\frac{|\nu_t-\nu|}{\nu_t}\|F_{2,\nu_t}x-x\|.\qquad(3.34)$$

Now, putting $t=t_n$, $x=F_{2,\nu}x_{t_n}$ in (3.33), and $t=t_n$, $x=x_{t_n}$ in (3.34), respectively, we deduce that

$$\|F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}-F_{1,\lambda}F_{2,\nu}x_{t_n}\|\le\frac{|\lambda_{t_n}-\lambda|}{\lambda_{t_n}}\|F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}-F_{2,\nu}x_{t_n}\|$$

and

$$\|F_{2,\nu_{t_n}}x_{t_n}-F_{2,\nu}x_{t_n}\|\le\frac{|\nu_{t_n}-\nu|}{\nu_{t_n}}\|F_{2,\nu_{t_n}}x_{t_n}-x_{t_n}\|.$$

Since $\lim_{n\to\infty}\lambda_{t_n}=\lambda>0$ and $\lim_{n\to\infty}\nu_{t_n}=\nu>0$, it follows from the last two inequalities that

$$\lim_{n\to\infty}\|F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}-F_{1,\lambda}F_{2,\nu}x_{t_n}\|=\lim_{n\to\infty}\|F_{2,\nu_{t_n}}x_{t_n}-F_{2,\nu}x_{t_n}\|=0.\qquad(3.35)$$

Also, we observe that

$$\begin{aligned}
\|Rx_{t_n}-x_{t_n}\|&\le\|F_{1,\lambda}F_{2,\nu}x_{t_n}-F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}\|+\|F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}-F_{1,\lambda_{t_n}}F_{2,\nu_{t_n}}x_{t_n}\|+\|F_{1,\lambda_{t_n}}F_{2,\nu_{t_n}}x_{t_n}-x_{t_n}\|\\
&\le\|F_{1,\lambda}F_{2,\nu}x_{t_n}-F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}\|+\|F_{2,\nu}x_{t_n}-F_{2,\nu_{t_n}}x_{t_n}\|+\|F_{1,\lambda_{t_n}}F_{2,\nu_{t_n}}x_{t_n}-x_{t_n}\|\\
&=\|F_{1,\lambda}F_{2,\nu}x_{t_n}-F_{1,\lambda_{t_n}}F_{2,\nu}x_{t_n}\|+\|F_{2,\nu}x_{t_n}-F_{2,\nu_{t_n}}x_{t_n}\|+\|R_{t_n}x_{t_n}-x_{t_n}\|.
\end{aligned}\qquad(3.36)$$

Since $\|R_{t_n}x_{t_n}-x_{t_n}\|\to0$ (due to Step 1), from (3.35) and (3.36) we get

$$\lim_{n\to\infty}\|Rx_{t_n}-x_{t_n}\|=0.\qquad(3.37)$$

Taking into account that $x_{t_n}\rightharpoonup x^*$ and $x_{t_n}-Rx_{t_n}\to0$ (due to (3.37)), from Lemma 2.3 we get $x^*=Rx^*$, that is, $x^*\in\operatorname{Fix}(R)=\operatorname{GSVI}(C,F_1,F_2)$.

Step 4. We claim that $x^*\in\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)$. In fact, since $\Delta_{t_n}^iz_{t_n}=T_{r_{i,t_n}}^{(\Theta_i,\varphi_i)}(I-r_{i,t_n}B_i)\Delta_{t_n}^{i-1}z_{t_n}$, for each $i=1,2,\dots,N$, we have

$$0\le\Theta_i\bigl(\Delta_{t_n}^iz_{t_n},y\bigr)+\varphi_i(y)-\varphi_i\bigl(\Delta_{t_n}^iz_{t_n}\bigr)+\bigl\langle B_i\Delta_{t_n}^{i-1}z_{t_n},\;y-\Delta_{t_n}^iz_{t_n}\bigr\rangle+\frac{1}{r_{i,t_n}}\bigl\langle y-\Delta_{t_n}^iz_{t_n},\;\Delta_{t_n}^iz_{t_n}-\Delta_{t_n}^{i-1}z_{t_n}\bigr\rangle.$$

By (A2), we have

$$\Theta_i\bigl(y,\Delta_{t_n}^iz_{t_n}\bigr)\le\varphi_i(y)-\varphi_i\bigl(\Delta_{t_n}^iz_{t_n}\bigr)+\bigl\langle B_i\Delta_{t_n}^{i-1}z_{t_n},\;y-\Delta_{t_n}^iz_{t_n}\bigr\rangle+\frac{1}{r_{i,t_n}}\bigl\langle y-\Delta_{t_n}^iz_{t_n},\;\Delta_{t_n}^iz_{t_n}-\Delta_{t_n}^{i-1}z_{t_n}\bigr\rangle.$$

Let $w_t=tv+(1-t)x^*$ for all $t\in(0,1]$ and $v\in C$. This implies that $w_t\in C$. Then we have

$$\begin{aligned}
\bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_iw_t\bigr\rangle&\ge\varphi_i\bigl(\Delta_{t_n}^iz_{t_n}\bigr)-\varphi_i(w_t)+\bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_iw_t\bigr\rangle-\bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_i\Delta_{t_n}^{i-1}z_{t_n}\bigr\rangle\\
&\qquad-\Bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;\frac{\Delta_{t_n}^iz_{t_n}-\Delta_{t_n}^{i-1}z_{t_n}}{r_{i,t_n}}\Bigr\rangle+\Theta_i\bigl(w_t,\Delta_{t_n}^iz_{t_n}\bigr)\\
&=\varphi_i\bigl(\Delta_{t_n}^iz_{t_n}\bigr)-\varphi_i(w_t)+\bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_iw_t-B_i\Delta_{t_n}^iz_{t_n}\bigr\rangle+\bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_i\Delta_{t_n}^iz_{t_n}-B_i\Delta_{t_n}^{i-1}z_{t_n}\bigr\rangle\\
&\qquad-\Bigl\langle w_t-\Delta_{t_n}^iz_{t_n},\;\frac{\Delta_{t_n}^iz_{t_n}-\Delta_{t_n}^{i-1}z_{t_n}}{r_{i,t_n}}\Bigr\rangle+\Theta_i\bigl(w_t,\Delta_{t_n}^iz_{t_n}\bigr).
\end{aligned}$$

By the same arguments as in the proof of Theorem 3.1, we have $\|B_i\Delta_{t_n}^iz_{t_n}-B_i\Delta_{t_n}^{i-1}z_{t_n}\|\to0$ as $n\to\infty$. In the meantime, by the monotonicity of $B_i$, we obtain $\langle w_t-\Delta_{t_n}^iz_{t_n},\;B_iw_t-B_i\Delta_{t_n}^iz_{t_n}\rangle\ge0$. Then by (A4) we get

$$\langle w_t-x^*,\;B_iw_t\rangle\ge\varphi_i(x^*)-\varphi_i(w_t)+\Theta_i(w_t,x^*).$$

Utilizing (A1), (A4), and the last inequality, we obtain

$$\begin{aligned}
0&=\Theta_i(w_t,w_t)+\varphi_i(w_t)-\varphi_i(w_t)\\
&\le t\Theta_i(w_t,v)+(1-t)\Theta_i(w_t,x^*)+t\varphi_i(v)+(1-t)\varphi_i(x^*)-\varphi_i(w_t)\\
&\le t\bigl[\Theta_i(w_t,v)+\varphi_i(v)-\varphi_i(w_t)\bigr]+(1-t)\langle w_t-x^*,\;B_iw_t\rangle\\
&=t\bigl[\Theta_i(w_t,v)+\varphi_i(v)-\varphi_i(w_t)\bigr]+(1-t)t\langle v-x^*,\;B_iw_t\rangle,
\end{aligned}$$

and hence

$$0\le\Theta_i(w_t,v)+\varphi_i(v)-\varphi_i(w_t)+(1-t)\langle v-x^*,\;B_iw_t\rangle.$$

Letting $t\to0$, we have, for each $v\in C$,

$$0\le\Theta_i(x^*,v)+\varphi_i(v)-\varphi_i(x^*)+\langle v-x^*,\;B_ix^*\rangle.$$

This implies that $x^*\in\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)$ and hence $x^*\in\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)$. This together with Steps 2 and 3 yields $x^*\in\Omega$.

Finally, we show that $x^*$ is a solution of VI (3.2). In fact, putting $x_{t_n}$ in place of $x_t$ in (3.26) and taking the limit as $t_n\to0$, we obtain

$$\|x^*-p\|^2\le\frac{1}{\bar\gamma-1}\langle(I-A)p,\;x^*-p\rangle,\quad\forall p\in\Omega.$$

In particular, $x^*$ solves the following VI:

$$x^*\in\Omega,\qquad\langle(A-I)p,\;x^*-p\rangle\le0,\quad\forall p\in\Omega,$$

or the equivalent dual variational inequality

$$x^*\in\Omega,\qquad\langle(A-I)x^*,\;x^*-p\rangle\le0,\quad\forall p\in\Omega.$$

That is, $x^*\in\Omega$ is a solution of VI (3.2). Hence $x^*=\tilde x$ by uniqueness. In summary, we have proven that each cluster point of $\{x_t\}$ (as $t\to0$) equals $\tilde x$. Therefore $x_t\to\tilde x$ as $t\to0$. VI (3.2) can be rewritten as

$$\langle(2I-A)\tilde x-\tilde x,\;\tilde x-p\rangle\ge0,\quad\forall p\in\Omega.$$

So, in terms of (2.1), this is equivalent to the fixed point equation

$$P_\Omega(2I-A)\tilde x=\tilde x.$$

This completes the proof. □

Taking $T\equiv I$, $G\equiv I$, $\mu=1$, and $\gamma=1$ in Theorem 3.2, we have the following corollary.

Corollary 3.1

Let $\{x_t\}$ be defined by

$$x_t=P_C\bigl[(I-\theta_tA)\Delta_t^NR_tx_t+\theta_t\bigl(tVx_t+(1-t)\Delta_t^NR_tx_t\bigr)\bigr].$$

If $\lim_{t\to0}\theta_t=0$, then $x_t$ converges strongly as $t\to0$ to $\tilde x\in\Omega:=\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)\cap\operatorname{GSVI}(C,F_1,F_2)$, which is the unique solution of the VI

$$\langle(A-I)\tilde x,\;\tilde x-p\rangle\le0,\quad\forall p\in\Omega.\qquad(3.38)$$

Proof

If $T\equiv I$, then $T_r$ in Lemma 2.8 is the identity mapping. Thus the result follows from Theorem 3.2.

We are now in a position to prove the strong convergence of the sequence $\{x_n\}$ generated by the general explicit iterative scheme (3.3) to $\tilde x\in\Omega$, which is the unique solution of VI (3.2).

Theorem 3.3

Let $\{x_n\}$ be the sequence generated by the explicit algorithm (3.3). Let $\{\alpha_n\}$, $\{\beta_n\}$, $\{r_n\}$, $\{\lambda_n\}$, $\{\nu_n\}$, and $\{r_{i,n}\}_{i=1}^N$ satisfy the following conditions:

  (C1) $\{\alpha_n\}\subset[0,1]$ and $\{\beta_n\}\subset(0,1]$, $\alpha_n\to0$ and $\beta_n\to0$ as $n\to\infty$;

  (C2) $\sum_{n=0}^\infty\beta_n=\infty$;

  (C3) $\sum_{n=0}^\infty|\alpha_{n+1}-\alpha_n|<\infty$, and $|\beta_{n+1}-\beta_n|\le o(\beta_{n+1})+\sigma_n$, $\sum_{n=0}^\infty\sigma_n<\infty$ (the perturbed control condition);

  (C4) $\{r_n\}\subset(0,\infty)$, $\liminf_{n\to\infty}r_n>0$, and $\sum_{n=0}^\infty|r_{n+1}-r_n|<\infty$;

  (C5) $\{\lambda_n\}\subset(0,\infty)$, $\lim_{n\to\infty}\lambda_n=\lambda>0$, and $\sum_{n=0}^\infty|\lambda_{n+1}-\lambda_n|<\infty$;

  (C6) $\{\nu_n\}\subset(0,\infty)$, $\lim_{n\to\infty}\nu_n=\nu>0$, and $\sum_{n=0}^\infty|\nu_{n+1}-\nu_n|<\infty$;

  (C7) $\{r_{i,n}\}\subset[c_i,d_i]\subset(0,2\mu_i)$ for all $i\in\{1,2,\dots,N\}$, and $\sum_{n=0}^\infty\bigl(\sum_{i=1}^N|r_{i,n+1}-r_{i,n}|\bigr)<\infty$.

Then $\{x_n\}$ converges strongly to $\tilde x\in\Omega:=\bigcap_{i=1}^N\operatorname{GMEP}(\Theta_i,\varphi_i,B_i)\cap\operatorname{GSVI}(C,F_1,F_2)\cap\operatorname{Fix}(T)$, which is the unique solution of VI (3.2).

Proof

First, note that from condition (C1), without loss of generality, we may assume that $\alpha_n\tau<1$, $\beta_n\bar\gamma<1$, and $\frac{2\beta_n(\bar\gamma-1)}{1-\beta_n}<1$ for all $n\ge0$. Let $\tilde x\in\Omega$ be the unique solution of VI (3.2). (The existence of $\tilde x$ follows from Theorem 3.2.)

From now on, we put $z_n=R_nx_n$, $u_n=\Delta_n^Nz_n$, and $v_n=T_{r_n}u_n$. Take $p\in\Omega$. Then $p=T_{r_n}p$ by Lemma 2.8(iii), $p=\Delta_n^ip$ $(=T_{r_{i,n}}^{(\Theta_i,\varphi_i)}(I-r_{i,n}B_i)p)$ by Proposition 2.1(iii), and $p=R_np$ by Lemma 1.1.

We divide the proof into several steps as follows.

Step 1. We show that $\{x_n\}$ is bounded. Indeed, utilizing Proposition 2.1(ii) and Proposition 2.2, we have

$$\begin{aligned}
\|u_n-p\|&=\bigl\|T_{r_{N,n}}^{(\Theta_N,\varphi_N)}(I-r_{N,n}B_N)\Delta_n^{N-1}z_n-T_{r_{N,n}}^{(\Theta_N,\varphi_N)}(I-r_{N,n}B_N)\Delta_n^{N-1}p\bigr\|\\
&\le\bigl\|(I-r_{N,n}B_N)\Delta_n^{N-1}z_n-(I-r_{N,n}B_N)\Delta_n^{N-1}p\bigr\|\\
&\le\bigl\|\Delta_n^{N-1}z_n-\Delta_n^{N-1}p\bigr\|\le\cdots\le\bigl\|\Delta_n^0z_n-\Delta_n^0p\bigr\|=\|z_n-p\|.
\end{aligned}\qquad(3.39)$$

It is easy to see from the nonexpansivity of $R_n$ that

$$\|z_n-p\|=\|R_nx_n-R_np\|\le\|x_n-p\|,$$

which together with the nonexpansivity of $T_{r_n}$ and (3.39) implies that

$$\|v_n-p\|=\|T_{r_n}u_n-T_{r_n}p\|\le\|u_n-p\|\le\|z_n-p\|\le\|x_n-p\|.\qquad(3.40)$$

From (3.3), (3.40), and the nonexpansivity of $P_C$, we get

$$\begin{aligned}
\|x_{n+1}-p\|&\le\bigl\|(I-\beta_nA)v_n+\beta_n\bigl(\alpha_n\gamma Vx_n+(I-\alpha_n\mu G)v_n\bigr)-p\bigr\|\\
&=\bigl\|(I-\beta_nA)v_n-(I-\beta_nA)p+\beta_n\bigl(\alpha_n\gamma Vx_n+(I-\alpha_n\mu G)v_n-p\bigr)+\beta_n(I-A)p\bigr\|\\
&\le\|(I-\beta_nA)v_n-(I-\beta_nA)p\|+\beta_n\bigl\|\alpha_n\gamma Vx_n+(I-\alpha_n\mu G)v_n-p\bigr\|+\beta_n\|(I-A)p\|\\
&=\|(I-\beta_nA)v_n-(I-\beta_nA)p\|+\beta_n\bigl\|(I-\alpha_n\mu G)v_n-(I-\alpha_n\mu G)p+\alpha_n(\gamma Vx_n-\mu Gp)\bigr\|+\beta_n\|(I-A)p\|\\
&\le(1-\beta_n\bar\gamma)\|v_n-p\|+\beta_n\bigl[\|(I-\alpha_n\mu G)v_n-(I-\alpha_n\mu G)p\|+\alpha_n\bigl(\gamma\|Vx_n-Vp\|+\|\gamma Vp-\mu Gp\|\bigr)\bigr]+\beta_n\|(I-A)p\|\\
&\le(1-\beta_n\bar\gamma)\|x_n-p\|+\beta_n\bigl[(1-\alpha_n\tau)\|x_n-p\|+\alpha_n\bigl(\gamma l\|x_n-p\|+\|(\gamma V-\mu G)p\|\bigr)\bigr]+\beta_n\|(I-A)p\|\\
&=\bigl[1-\beta_n\bigl(\bar\gamma-1+\alpha_n(\tau-\gamma l)\bigr)\bigr]\|x_n-p\|+\beta_n\bigl[\|(I-A)p\|+\alpha_n\|(\gamma V-\mu G)p\|\bigr]\\
&\le\bigl[1-\beta_n(\bar\gamma-1)\bigr]\|x_n-p\|+\beta_n\bigl[\|(I-A)p\|+\|(\gamma V-\mu G)p\|\bigr]\\
&=\bigl[1-\beta_n(\bar\gamma-1)\bigr]\|x_n-p\|+\beta_n(\bar\gamma-1)\frac{\|(I-A)p\|+\|(\gamma V-\mu G)p\|}{\bar\gamma-1}\\
&\le\max\Bigl\{\|x_n-p\|,\;\frac{\|(I-A)p\|+\|(\gamma V-\mu G)p\|}{\bar\gamma-1}\Bigr\}.
\end{aligned}$$

By induction, we derive

$$\|x_n-p\|\le\max\Bigl\{\|x_0-p\|,\;\frac{\|(I-A)p\|+\|(\gamma V-\mu G)p\|}{\bar\gamma-1}\Bigr\},\quad\forall n\ge0.$$

This implies that $\{x_n\}$ is bounded, and so are $\{Vx_n\}$, $\{u_n\}$, $\{v_n\}$, $\{w_n\}$, $\{z_n\}$, and $\{Gv_n\}$. As a consequence, with the control condition (C1), we get

$$\|x_{n+1}-v_n\|\le\beta_n\|w_n-Av_n\|\to0\quad(n\to\infty).\qquad(3.41)$$

Step 2. We show that limnxn+1xn=0. To this end, let yn=F2,νnxn, yn1=F2,νn1xn1, zn=F1,λnyn, and zn1=F1,λn1yn1. Then we derive

$$\langle y-y_{n-1},F_{2}y_{n-1}\rangle+\frac{1}{\nu_{n-1}}\langle y-y_{n-1},y_{n-1}-x_{n-1}\rangle\ge0,\quad\forall y\in C, \tag{3.42}$$

and

$$\langle y-y_{n},F_{2}y_{n}\rangle+\frac{1}{\nu_{n}}\langle y-y_{n},y_{n}-x_{n}\rangle\ge0,\quad\forall y\in C. \tag{3.43}$$

Putting y=yn in (3.42) and y=yn1 in (3.43), we obtain

$$\langle y_{n}-y_{n-1},F_{2}y_{n-1}\rangle+\frac{1}{\nu_{n-1}}\langle y_{n}-y_{n-1},y_{n-1}-x_{n-1}\rangle\ge0 \tag{3.44}$$

and

$$\langle y_{n-1}-y_{n},F_{2}y_{n}\rangle+\frac{1}{\nu_{n}}\langle y_{n-1}-y_{n},y_{n}-x_{n}\rangle\ge0. \tag{3.45}$$

Adding up (3.44) and (3.45), we have

$$\langle y_{n}-y_{n-1},F_{2}y_{n-1}-F_{2}y_{n}\rangle+\Bigl\langle y_{n}-y_{n-1},\frac{y_{n-1}-x_{n-1}}{\nu_{n-1}}-\frac{y_{n}-x_{n}}{\nu_{n}}\Bigr\rangle\ge0,$$

which together with the monotonicity of F2 implies that

$$\Bigl\langle y_{n}-y_{n-1},\frac{y_{n-1}-x_{n-1}}{\nu_{n-1}}-\frac{y_{n}-x_{n}}{\nu_{n}}\Bigr\rangle\ge0,$$

and hence

$$\Bigl\langle y_{n}-y_{n-1},\,y_{n-1}-y_{n}+y_{n}-x_{n-1}-\frac{\nu_{n-1}}{\nu_{n}}(y_{n}-x_{n})\Bigr\rangle\ge0.$$

It follows that

$$\begin{aligned} \|y_{n}-y_{n-1}\|^{2} &\le \Bigl\langle y_{n}-y_{n-1},\,x_{n}-x_{n-1}+\Bigl(1-\frac{\nu_{n-1}}{\nu_{n}}\Bigr)(y_{n}-x_{n})\Bigr\rangle \\ &\le \|y_{n}-y_{n-1}\|\Bigl(\|x_{n}-x_{n-1}\|+\frac{1}{\nu_{n}}|\nu_{n}-\nu_{n-1}|\,\|y_{n}-x_{n}\|\Bigr), \end{aligned}$$

which immediately yields

$$\|y_{n}-y_{n-1}\|\le\|x_{n}-x_{n-1}\|+\frac{1}{\nu_{n}}|\nu_{n}-\nu_{n-1}|\,\|y_{n}-x_{n}\|. \tag{3.46}$$

By using arguments similar to those of (3.46), we get

$$\|z_{n}-z_{n-1}\|\le\|y_{n}-y_{n-1}\|+\frac{1}{\lambda_{n}}|\lambda_{n}-\lambda_{n-1}|\,\|z_{n}-y_{n}\|. \tag{3.47}$$

Substituting (3.46) into (3.47), we have

$$\begin{aligned} \|z_{n}-z_{n-1}\| &\le \|y_{n}-y_{n-1}\|+\frac{1}{\lambda_{n}}|\lambda_{n}-\lambda_{n-1}|\,\|z_{n}-y_{n}\| \\ &\le \|x_{n}-x_{n-1}\|+\frac{1}{\nu_{n}}|\nu_{n}-\nu_{n-1}|\,\|y_{n}-x_{n}\|+\frac{1}{\lambda_{n}}|\lambda_{n}-\lambda_{n-1}|\,\|z_{n}-y_{n}\|. \end{aligned} \tag{3.48}$$

Note that $v_{n}=T_{r_{n}}u_{n}$ and $v_{n-1}=T_{r_{n-1}}u_{n-1}$. By using arguments similar to those of (3.46), we obtain

$$\|v_{n}-v_{n-1}\|\le\|u_{n}-u_{n-1}\|+\frac{1}{r_{n}}|r_{n}-r_{n-1}|\,\|v_{n}-u_{n}\|. \tag{3.49}$$

Also, utilizing arguments similar to those of (3.25) in the proof of Theorem 3.1, we have

$$\begin{aligned} \|u_{n}-u_{n-1}\| &= \bigl\|\Delta_{n}^{N}z_{n}-\Delta_{n-1}^{N}z_{n-1}\bigr\| \\ &\le |r_{N,n}-r_{N,n-1}|\Bigl[\|B_{N}\Delta_{n}^{N-1}z_{n}\|+\frac{1}{r_{N,n}}\bigl\|T_{r_{N,n}}^{(\Theta_{N},\varphi_{N})}(I-r_{N,n}B_{N})\Delta_{n}^{N-1}z_{n}-(I-r_{N,n}B_{N})\Delta_{n}^{N-1}z_{n}\bigr\|\Bigr] \\ &\quad+\cdots+|r_{1,n}-r_{1,n-1}|\Bigl[\|B_{1}\Delta_{n}^{0}z_{n}\|+\frac{1}{r_{1,n}}\bigl\|T_{r_{1,n}}^{(\Theta_{1},\varphi_{1})}(I-r_{1,n}B_{1})\Delta_{n}^{0}z_{n}-(I-r_{1,n}B_{1})\Delta_{n}^{0}z_{n}\bigr\|\Bigr] \\ &\quad+\bigl\|\Delta_{n}^{0}z_{n}-\Delta_{n-1}^{0}z_{n-1}\bigr\| \\ &\le \widetilde{M}_{1}\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+\|z_{n}-z_{n-1}\|, \end{aligned} \tag{3.50}$$

where $\widetilde{M}_{1}>0$ is a constant such that, for each $n\ge0$,

$$\sum_{i=1}^{N}\Bigl[\|B_{i}\Delta_{n}^{i-1}z_{n}\|+\frac{1}{r_{i,n}}\bigl\|T_{r_{i,n}}^{(\Theta_{i},\varphi_{i})}(I-r_{i,n}B_{i})\Delta_{n}^{i-1}z_{n}-(I-r_{i,n}B_{i})\Delta_{n}^{i-1}z_{n}\bigr\|\Bigr]\le\widetilde{M}_{1}.$$

So it follows from (3.48), (3.49), and (3.50) that

$$\begin{aligned} \|v_{n}-v_{n-1}\| &\le \|u_{n}-u_{n-1}\|+\frac{1}{r_{n}}|r_{n}-r_{n-1}|\,\|v_{n}-u_{n}\| \\ &\le \widetilde{M}_{1}\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+\|z_{n}-z_{n-1}\|+\frac{1}{r_{n}}|r_{n}-r_{n-1}|\,\|v_{n}-u_{n}\| \\ &\le \widetilde{M}_{1}\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+\|x_{n}-x_{n-1}\|+\frac{1}{\nu_{n}}|\nu_{n}-\nu_{n-1}|\,\|y_{n}-x_{n}\| \\ &\quad+\frac{1}{\lambda_{n}}|\lambda_{n}-\lambda_{n-1}|\,\|z_{n}-y_{n}\|+\frac{1}{r_{n}}|r_{n}-r_{n-1}|\,\|v_{n}-u_{n}\|. \end{aligned} \tag{3.51}$$

Since $\liminf_{n\to\infty}r_{n}>0$, $\lim_{n\to\infty}\lambda_{n}=\lambda>0$, and $\lim_{n\to\infty}\nu_{n}=\nu>0$, it is easy to see from (3.51) that, for each $n\ge0$,

$$\|v_{n}-v_{n-1}\|\le\|x_{n}-x_{n-1}\|+\widetilde{M}\Bigl[\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr], \tag{3.52}$$

where $\widetilde{M}>0$ is a constant such that

$$\sup_{n\ge0}\Bigl\{\widetilde{M}_{1}+\frac{1}{\nu_{n}}\|y_{n}-x_{n}\|+\frac{1}{\lambda_{n}}\|z_{n}-y_{n}\|+\frac{1}{r_{n}}\|v_{n}-u_{n}\|\Bigr\}\le\widetilde{M}.$$

Now, simple calculations yield that

$$\begin{aligned} w_{n}-w_{n-1} &= \alpha_{n}\gamma Vx_{n}+(I-\alpha_{n}\mu G)v_{n}-\alpha_{n-1}\gamma Vx_{n-1}-(I-\alpha_{n-1}\mu G)v_{n-1} \\ &= (\alpha_{n}-\alpha_{n-1})(\gamma Vx_{n-1}-\mu Gv_{n-1})+\alpha_{n}\gamma(Vx_{n}-Vx_{n-1})+(I-\alpha_{n}\mu G)v_{n}-(I-\alpha_{n}\mu G)v_{n-1}. \end{aligned}$$

In terms of (3.52) and Lemma 2.6, we obtain

$$\begin{aligned} \|w_{n}-w_{n-1}\| &\le |\alpha_{n}-\alpha_{n-1}|\bigl(\gamma\|Vx_{n-1}\|+\mu\|Gv_{n-1}\|\bigr)+\alpha_{n}\gamma l\|x_{n}-x_{n-1}\|+(1-\tau\alpha_{n})\|v_{n}-v_{n-1}\| \\ &\le |\alpha_{n}-\alpha_{n-1}|\bigl(\gamma\|Vx_{n-1}\|+\mu\|Gv_{n-1}\|\bigr)+\alpha_{n}\gamma l\|x_{n}-x_{n-1}\| \\ &\quad+(1-\tau\alpha_{n})\Bigl(\|x_{n}-x_{n-1}\|+\widetilde{M}\Bigl[\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr]\Bigr) \\ &\le |\alpha_{n}-\alpha_{n-1}|\bigl(\gamma\|Vx_{n-1}\|+\mu\|Gv_{n-1}\|\bigr)+\bigl(1-\alpha_{n}(\tau-\gamma l)\bigr)\|x_{n}-x_{n-1}\| \\ &\quad+\widetilde{M}\Bigl[\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr] \\ &\le \|x_{n}-x_{n-1}\|+\widetilde{M}_{2}\Bigl[\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\alpha_{n}-\alpha_{n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr], \end{aligned} \tag{3.53}$$

where $\widetilde{M}_{2}=\sup_{n\ge0}\{\gamma\|Vx_{n}\|+\mu\|Gv_{n}\|+\widetilde{M}\}$. By (3.53) and Lemma 2.5, we derive

$$\begin{aligned} \|x_{n+1}-x_{n}\| &\le \bigl\|(I-\beta_{n}A)v_{n}+\beta_{n}w_{n}-(I-\beta_{n-1}A)v_{n-1}-\beta_{n-1}w_{n-1}\bigr\| \\ &\le \|(I-\beta_{n}A)(v_{n}-v_{n-1})\|+|\beta_{n}-\beta_{n-1}|\,\|Av_{n-1}\|+\beta_{n}\|w_{n}-w_{n-1}\|+|\beta_{n}-\beta_{n-1}|\,\|w_{n-1}\| \\ &\le (1-\beta_{n}\bar{\gamma})\Bigl[\|x_{n}-x_{n-1}\|+\widetilde{M}\Bigl(\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr)\Bigr] \\ &\quad+\beta_{n}\Bigl[\|x_{n}-x_{n-1}\|+\widetilde{M}_{2}\Bigl(\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\alpha_{n}-\alpha_{n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr)\Bigr]+|\beta_{n}-\beta_{n-1}|\widetilde{M}_{3} \\ &\le \bigl(1-\beta_{n}(\bar{\gamma}-1)\bigr)\|x_{n}-x_{n-1}\|+\widetilde{M}_{2}\Bigl(\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\alpha_{n}-\alpha_{n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr) \\ &\quad+\bigl(o(\beta_{n})+\sigma_{n-1}\bigr)\widetilde{M}_{3}, \end{aligned} \tag{3.54}$$

where $\widetilde{M}_{3}=\sup_{n\ge0}\{\|Av_{n}\|+\|w_{n}\|\}$. By taking $s_{n+1}=\|x_{n+1}-x_{n}\|$, $\omega_{n}=\beta_{n}(\bar{\gamma}-1)$, $\omega_{n}\delta_{n}=\widetilde{M}_{3}\,o(\beta_{n})$, and

$$\gamma_{n}=\sigma_{n-1}\widetilde{M}_{3}+\widetilde{M}_{2}\Bigl(\sum_{i=1}^{N}|r_{i,n}-r_{i,n-1}|+|\alpha_{n}-\alpha_{n-1}|+|\nu_{n}-\nu_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|r_{n}-r_{n-1}|\Bigr),$$

we deduce from (3.54) that

$$s_{n+1}\le(1-\omega_{n})s_{n}+\omega_{n}\delta_{n}+\gamma_{n}.$$

Hence, by conditions (C2)–(C7) and Lemma 2.2, we obtain

$$\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=0.$$

Step 3. We show that $\lim_{n\to\infty}\|x_{n+1}-w_{n}\|=0$. Indeed, from (3.41) and condition (C1), we derive

$$\|x_{n+1}-w_{n}\|\le\|x_{n+1}-v_{n}\|+\|v_{n}-w_{n}\|\le\beta_{n}\|w_{n}-Av_{n}\|+\alpha_{n}\|\gamma Vx_{n}-\mu Gv_{n}\|\to0\quad(n\to\infty).$$

Step 4. We show that $\lim_{n\to\infty}\|x_{n}-w_{n}\|=0$. In fact, by Step 2 and Step 3, we get

$$\|x_{n}-w_{n}\|\le\|x_{n}-x_{n+1}\|+\|x_{n+1}-w_{n}\|\to0\quad(n\to\infty).$$

Step 5. We show that $\lim_{n\to\infty}\|x_{n}-z_{n}\|=0$ and $\lim_{n\to\infty}\|x_{n}-Rx_{n}\|=0$. In fact, we first derive $\lim_{n\to\infty}\|x_{n}-z_{n}\|=0$ by using arguments similar to those of (3.9) in the proof of Theorem 3.1, and then we obtain $\lim_{n\to\infty}\|x_{n}-Rx_{n}\|=0$ by using arguments similar to those of (3.37) in the proof of Theorem 3.2.

Step 6. We show that $\lim_{n\to\infty}\|z_{n}-u_{n}\|=0$ and $\lim_{n\to\infty}\|x_{n}-\Delta_{n}^{N}x_{n}\|=0$. In fact, by using arguments similar to those of (3.12) and (3.13) in the proof of Theorem 3.1, we obtain the desired conclusions.

Step 7. We show that $\lim_{n\to\infty}\|u_{n}-v_{n}\|=0$ and $\lim_{n\to\infty}\|x_{n}-T_{r_{n}}x_{n}\|=0$. In fact, by using arguments similar to those of (3.14) and (3.15) in the proof of Theorem 3.1, we obtain the desired conclusions.

Step 8. We show that $\limsup_{n\to\infty}\langle(I-A)\tilde{x},x_{n}-\tilde{x}\rangle\le0$. To this end, take a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that

$$\limsup_{n\to\infty}\langle(I-A)\tilde{x},x_{n}-\tilde{x}\rangle=\lim_{k\to\infty}\langle(I-A)\tilde{x},x_{n_{k}}-\tilde{x}\rangle.$$

Without loss of generality, we may assume that $x_{n_{k}}\rightharpoonup\hat{x}$. Utilizing Steps 5, 6, and 7 and arguments similar to those of Steps 2, 3, and 4 in the proof of Theorem 3.2, we derive $\hat{x}\in\Omega$. Thus, from VI (3.2), we conclude

$$\limsup_{n\to\infty}\langle(I-A)\tilde{x},x_{n}-\tilde{x}\rangle=\lim_{k\to\infty}\langle(I-A)\tilde{x},x_{n_{k}}-\tilde{x}\rangle=\langle(I-A)\tilde{x},\hat{x}-\tilde{x}\rangle\le0.$$

Step 9. We show that $\lim_{n\to\infty}\|x_{n}-\tilde{x}\|=0$. Note that $\tilde{x}\in\Omega$. From (3.3), $\tilde{x}=R_{n}\tilde{x}$, $\tilde{x}=\Delta_{n}^{N}\tilde{x}$, and $\tilde{x}=T_{r_{n}}\tilde{x}$, we obtain

$$w_{n}-\tilde{x}=(I-\alpha_{n}\mu G)v_{n}-(I-\alpha_{n}\mu G)\tilde{x}+\alpha_{n}(\gamma Vx_{n}-\mu G\tilde{x})$$

and

$$\begin{aligned} x_{n+1}-\tilde{x} &= x_{n+1}-(I-\beta_{n}A)v_{n}-\beta_{n}w_{n}+(I-\beta_{n}A)v_{n}+\beta_{n}w_{n}-\tilde{x} \\ &= x_{n+1}-(I-\beta_{n}A)v_{n}-\beta_{n}w_{n}+(I-\beta_{n}A)(v_{n}-\tilde{x})+\beta_{n}(w_{n}-\tilde{x})+\beta_{n}(I-A)\tilde{x}. \end{aligned}$$

Applying (2.1), (3.40) and Lemmas 2.1, 2.5, and 2.6, we deduce that

$$\begin{aligned} \|w_{n}-\tilde{x}\|^{2} &= \bigl\|(I-\alpha_{n}\mu G)v_{n}-(I-\alpha_{n}\mu G)\tilde{x}+\alpha_{n}(\gamma Vx_{n}-\mu G\tilde{x})\bigr\|^{2} \\ &\le \|(I-\alpha_{n}\mu G)v_{n}-(I-\alpha_{n}\mu G)\tilde{x}\|^{2}+2\alpha_{n}\langle\gamma Vx_{n}-\mu G\tilde{x},w_{n}-\tilde{x}\rangle \\ &\le (1-\alpha_{n}\tau)^{2}\|v_{n}-\tilde{x}\|^{2}+2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\| \\ &\le \|x_{n}-\tilde{x}\|^{2}+2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|, \end{aligned}$$

and hence

$$\begin{aligned} \|x_{n+1}-\tilde{x}\|^{2} &= \bigl\|(I-\beta_{n}A)(v_{n}-\tilde{x})+\beta_{n}(w_{n}-\tilde{x})+\beta_{n}(I-A)\tilde{x}+x_{n+1}-(I-\beta_{n}A)v_{n}-\beta_{n}w_{n}\bigr\|^{2} \\ &\le \|(I-\beta_{n}A)(v_{n}-\tilde{x})\|^{2}+2\beta_{n}\langle w_{n}-\tilde{x},x_{n+1}-\tilde{x}\rangle+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle \\ &\quad+2\bigl\langle x_{n+1}-(I-\beta_{n}A)v_{n}-\beta_{n}w_{n},\,x_{n+1}-\tilde{x}\bigr\rangle \\ &\le \|(I-\beta_{n}A)(v_{n}-\tilde{x})\|^{2}+2\beta_{n}\langle w_{n}-\tilde{x},x_{n+1}-\tilde{x}\rangle+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle \\ &\le (1-\beta_{n}\bar{\gamma})^{2}\|v_{n}-\tilde{x}\|^{2}+2\beta_{n}\|w_{n}-\tilde{x}\|\,\|x_{n+1}-\tilde{x}\|+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle \\ &\le (1-\beta_{n}\bar{\gamma})^{2}\|x_{n}-\tilde{x}\|^{2}+\beta_{n}\bigl(\|w_{n}-\tilde{x}\|^{2}+\|x_{n+1}-\tilde{x}\|^{2}\bigr)+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle \\ &\le (1-\beta_{n}\bar{\gamma})^{2}\|x_{n}-\tilde{x}\|^{2}+\beta_{n}\bigl[\|x_{n}-\tilde{x}\|^{2}+2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|\bigr]+\beta_{n}\|x_{n+1}-\tilde{x}\|^{2}+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle \\ &= \bigl[(1-\beta_{n}\bar{\gamma})^{2}+\beta_{n}\bigr]\|x_{n}-\tilde{x}\|^{2}+2\alpha_{n}\beta_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|+\beta_{n}\|x_{n+1}-\tilde{x}\|^{2}+2\beta_{n}\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle. \end{aligned} \tag{3.55}$$

It then follows from (3.55) that

$$\begin{aligned} \|x_{n+1}-\tilde{x}\|^{2} &\le \frac{(1-\beta_{n}\bar{\gamma})^{2}+\beta_{n}}{1-\beta_{n}}\|x_{n}-\tilde{x}\|^{2}+\frac{\beta_{n}}{1-\beta_{n}}\bigl[2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|+2\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle\bigr] \\ &= \Bigl(1-\frac{2\beta_{n}(\bar{\gamma}-1)}{1-\beta_{n}}\Bigr)\|x_{n}-\tilde{x}\|^{2} \\ &\quad+\frac{2\beta_{n}(\bar{\gamma}-1)}{1-\beta_{n}}\cdot\frac{1}{2(\bar{\gamma}-1)}\bigl[2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|+\beta_{n}\bar{\gamma}^{2}\|x_{n}-\tilde{x}\|^{2}+2\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle\bigr] \\ &= (1-\xi_{n})\|x_{n}-\tilde{x}\|^{2}+\xi_{n}\delta_{n}, \end{aligned}$$

where $\xi_{n}=\frac{2\beta_{n}(\bar{\gamma}-1)}{1-\beta_{n}}$ and $\delta_{n}=\frac{1}{2(\bar{\gamma}-1)}\bigl[2\alpha_{n}\|\gamma Vx_{n}-\mu G\tilde{x}\|\,\|w_{n}-\tilde{x}\|+\beta_{n}\bar{\gamma}^{2}\|x_{n}-\tilde{x}\|^{2}+2\langle(I-A)\tilde{x},x_{n+1}-\tilde{x}\rangle\bigr]$. It can readily be seen from Step 2 and conditions (C1) and (C2) that $\xi_{n}\to0$, $\sum_{n=0}^{\infty}\xi_{n}=\infty$, and $\limsup_{n\to\infty}\delta_{n}\le0$. By Lemma 2.2, we conclude that $\lim_{n\to\infty}\|x_{n}-\tilde{x}\|=0$. This completes the proof. □

Taking $T\equiv I$, $G\equiv I$, $\mu=1$, and $\gamma=1$ in Theorem 3.3, we have the following corollary.

Corollary 3.2

Let $\{x_{n}\}$ be generated by the following iterative algorithm:

$$\begin{cases} w_{n}=\alpha_{n}Vx_{n}+(1-\alpha_{n})\Delta_{n}^{N}R_{n}x_{n},\\ x_{n+1}=P_{C}\bigl[(I-\beta_{n}A)\Delta_{n}^{N}R_{n}x_{n}+\beta_{n}w_{n}\bigr],\quad n\ge1. \end{cases}$$

Assume that the sequences $\{\alpha_{n}\}$, $\{\beta_{n}\}$, $\{\lambda_{n}\}$, $\{\nu_{n}\}$, and $\{r_{i,n}\}_{i=1}^{N}$ satisfy conditions (C1)–(C3) and (C5)–(C7) in Theorem 3.3. Then $\{x_{n}\}$ converges strongly to $\tilde{x}\in\Omega:=\bigcap_{i=1}^{N}\operatorname{GMEP}(\Theta_{i},\varphi_{i},B_{i})\cap\operatorname{GSVI}(C,F_{1},F_{2})$, which is the unique solution of VI (3.38).

Remark 3.1

Compared with Proposition 3.3, Theorem 3.4, and Theorem 3.7 in [12], respectively, our Theorems 3.1, 3.2, and 3.3 improve and develop them in the following aspects:

  • (i)

GSVI (1.3) with solutions being also fixed points of a continuous pseudocontractive mapping in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7] is extended to GSVI (1.3) with solutions being also common solutions of a finite family of generalized mixed equilibrium problems (GMEPs) and fixed points of a continuous pseudocontractive mapping in our Theorems 3.1, 3.2, and 3.3;

  • (ii)

in the argument process of our Theorems 3.1, 3.2, and 3.3, we use the variable parameters $\lambda_{t}$ and $\nu_{t}$ (resp., $\lambda_{n}$ and $\nu_{n}$) in place of the fixed parameters $\lambda$ and $\nu$ in the proof of [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], and additionally deal with a pool of variable parameters $\{r_{i,t}\}_{i=1}^{N}$ (resp., $\{r_{i,n}\}_{i=1}^{N}$) involving a finite family of GMEPs;

  • (iii)

the iterative schemes in our Theorems 3.1, 3.2, and 3.3 are more advantageous and more flexible than the iterative schemes in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], because they can be applied to solving three problems (i.e., GSVI (1.3), a finite family of GMEPs, and the fixed point problem of a continuous pseudocontractive mapping) and involve many more parameter sequences;

  • (iv)

it is worth emphasizing that our general implicit iterative scheme (3.1) is very different from Jung's composite implicit iterative scheme in [12], because the term "$T_{r_{t}}Rx_{t}$" in Jung's implicit scheme is replaced by the term "$T_{r_{t}}\Delta_{t}^{N}R_{t}x_{t}$" in our implicit scheme (3.1). Moreover, the term "$T_{r_{n}}Rx_{n}$" in Jung's explicit scheme is replaced by the term "$T_{r_{n}}\Delta_{n}^{N}R_{n}x_{n}$" in our explicit scheme (3.3).

Numerical examples

The purpose of this section is to give two examples and numerical results to illustrate the applicability, effectiveness, and stability of our algorithm.

Example 4.1

(Example of Theorem 3.3)

Let $H=\mathbb{R}$ and $C=[0,100]$. Let the inner product $\langle\cdot,\cdot\rangle:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be defined by $\langle x,y\rangle=xy$. Let $N=2$, $Vx=2x$, $Gx=\frac{1}{2}x$, $Tx=x$, $B_{1}x=\frac{1}{2}x$, $B_{2}x=\frac{1}{3}x$, $F_{1}x=\frac{1}{2}x$, $F_{2}x=x$, $\Theta_{1}(x,y)=y^{2}-x^{2}$, $\Theta_{2}(x,y)=-3x^{2}+xy+2y^{2}$, $\varphi_{1}(x)=x^{2}$, $\varphi_{2}(x)=0$, and $Ax=\frac{3}{2}x$. Let $\alpha_{n}=\frac{1}{n}$, $\beta_{n}=\frac{1}{3(n+1)}$, $r_{n}=1$, $r_{1,n}=\frac{1}{2}$, $r_{2,n}=1$, $\lambda_{n}=1$, $\nu_{n}=\frac{1}{2}$, $\gamma=\frac{1}{8}$, $\mu=\frac{2}{3}$. It is easy to calculate that $T_{r_{1,n}}^{(\Theta_{1},\varphi_{1})}x=\frac{1}{3}x$, $T_{r_{2,n}}^{(\Theta_{2},\varphi_{2})}x=\frac{1}{6}x$, $T_{r_{n}}x=x$, $F_{1,\lambda_{n}}x=\frac{1}{2}x$, and $F_{2,\nu_{n}}x=\frac{1}{2}x$. Choose an arbitrary initial guess $x_{1}=4$. We get the numerical results of Algorithm (3.3).
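As a consistency check on the resolvent values above (assuming the standard characterization from Proposition 2.1, $z=T_{r}^{(\Theta,\varphi)}x$ iff $\Theta(z,y)+\varphi(y)-\varphi(z)+\frac{1}{r}\langle y-z,z-x\rangle\ge0$ for all $y\in C$, and ignoring the boundary of $C$, which is inactive here), the value $T_{r_{1,n}}^{(\Theta_{1},\varphi_{1})}x=\frac{1}{3}x$ can be recovered as follows:

```latex
% Defining inequality of z = T^{(\Theta_1,\varphi_1)}_{r}x with r = r_{1,n} = 1/2:
\Theta_1(z,y) + \varphi_1(y) - \varphi_1(z) + \tfrac{1}{r}\langle y-z,\, z-x\rangle \ge 0
\quad \forall y \in C .
% With \Theta_1(z,y) = y^2 - z^2, \varphi_1(y) = y^2, and 1/r = 2, this becomes
q(y) := 2y^2 + 2(z-x)\,y - 2z^2 - 2z(z-x) \ge 0 \quad \forall y \in C .
% Since q is convex and q(z) = 0, z must minimize q, so
q'(z) = 4z + 2(z-x) = 6z - 2x = 0 \;\Longrightarrow\; z = \tfrac{x}{3} .
```

The value $T_{r_{2,n}}^{(\Theta_{2},\varphi_{2})}x=\frac{1}{6}x$ follows in the same way from $\Theta_{2}(z,y)=-3z^{2}+zy+2y^{2}$, $\varphi_{2}=0$, and $r=1$.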

Table 1 shows the values of the sequence $\{x_{n}\}$.

Table 1.

The values of xn

n xn
1 4.0000
2 1.8261 × 10−1
3 3.3191 × 10−3
4 3.7633 × 10−5
5 3.2426 × 10−7
6 2.3546 × 10−9
7 1.5285 × 10−11
8 9.1892 × 10−14
9 5.2325 × 10−16
10 2.8636 × 10−18
11 1.5212 × 10−20
12 7.8994 × 10−23

Figure 1 shows the convergence of the iterative sequence of Algorithm (3.3).

Figure 1. The convergence of $\{x_{n}\}$ with initial $x_{1}=4$

Solution: We can see from both Table 1 and Fig. 1 that the sequence $\{x_{n}\}$ converges to 0; that is, 0 is the solution in Example 4.1. In addition, it is easy to check from Example 4.1 that $\bigcap_{i=1}^{2}\operatorname{GMEP}(\Theta_{i},\varphi_{i},B_{i})\cap\operatorname{GSVI}(C,F_{1},F_{2})\cap\operatorname{Fix}(T)=\{0\}$. Therefore, the iterative algorithm of Theorem 3.3 is efficient.
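Since every operator in Example 4.1 acts on $H=\mathbb{R}$ by scalar multiplication, scheme (3.3) reduces to a one-line recursion. The following sketch (the function names and step count are our own; rounding and implementation details may make individual iterates differ slightly from Table 1) reproduces the qualitative behavior, namely rapid convergence of $\{x_{n}\}$ to 0:

```python
# Sketch of the explicit scheme (3.3) specialized to Example 4.1 (H = R, C = [0,100]).
# Every map is a scalar multiplication: R_n x = F_{1,lam} F_{2,nu} x = x/4,
# Delta_n^2 x = (1/6)(2/3) * (1/3)(3/4) x = x/36, and T_{r_n} = I.

def project_C(x, lo=0.0, hi=100.0):
    """Metric projection P_C of R onto C = [lo, hi]."""
    return min(max(x, lo), hi)

def iterate(x1=4.0, steps=12):
    gamma, mu = 1.0 / 8.0, 2.0 / 3.0
    xs = [x1]
    x = x1
    for n in range(1, steps):
        alpha = 1.0 / n
        beta = 1.0 / (3.0 * (n + 1))
        z = 0.25 * x                 # z_n = R_n x_n
        v = z / 36.0                 # v_n = T_{r_n} Delta_n^2 z_n = z_n / 36
        Vx = 2.0 * x                 # V x = 2x
        Gv = 0.5 * v                 # G x = x/2
        w = alpha * gamma * Vx + (v - alpha * mu * Gv)   # w_n in (3.3)
        Av = 1.5 * v                 # A x = (3/2) x
        x = project_C((v - beta * Av) + beta * w)        # x_{n+1} in (3.3)
        xs.append(x)
    return xs

seq = iterate()
```

The iterates decrease monotonically to 0, in line with Table 1 and Figure 1.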

Example 4.2

(Example of Theorem 3.7 in [12])

Let $H=\mathbb{R}$ and $C=[0,100]$. Let the inner product $\langle\cdot,\cdot\rangle:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be defined by $\langle x,y\rangle=xy$. Let $Vx=2x$, $Gx=\frac{1}{2}x$, $Tx=x$, $F_{1}x=\frac{1}{2}x$, $F_{2}x=x$, and $Ax=\frac{3}{2}x$. Let $\alpha_{n}=\frac{1}{n}$, $\beta_{n}=\frac{1}{3(n+1)}$, $r_{n}=1$, $\lambda=1$, $\nu=\frac{1}{2}$, $\gamma=\frac{1}{8}$, $\mu=\frac{2}{3}$. Choose an arbitrary initial guess $x_{1}=4$. We get the numerical results of Algorithm (1.5) (Algorithm (3.10) of [12]).

Table 2 shows the values of the sequence $\{x_{n}\}$.

Table 2.

The values of xn

n xn
1 4.0000
2 1.0278
3 2.5219 × 10−1
4 6.1587 × 10−2
5 1.5055 × 10−2
6 3.6870 × 10−3
7 9.0468 × 10−4
8 2.2236 × 10−4
9 5.4731 × 10−5
10 1.3488 × 10−5
11 3.3278 × 10−6
12 8.2181 × 10−7

Figure 2 shows the convergence of the iterative sequence of Algorithm (1.5).

Figure 2. The convergence of $\{x_{n}\}$ with initial $x_{1}=4$

Solution: We can see from both Table 2 and Fig. 2 that the sequence $\{x_{n}\}$ converges to 0; that is, 0 is the solution in Example 4.2. In addition, it is easy to check from Example 4.2 that $\operatorname{GSVI}(C,F_{1},F_{2})\cap\operatorname{Fix}(T)=\{0\}$.

Remark 4.1

From Tables 1–2 and Figs. 1–2, it is readily seen that the convergence of $\{x_{n}\}$ to 0 in Example 4.1 is faster than that of $\{x_{n}\}$ to 0 in Example 4.2. Therefore, our algorithm is more applicable, efficient, and stable than the algorithm in [12].

Application

In this section, applying our main result, Theorem 3.3, we prove a strong convergence theorem for approximating a solution of the standard constrained convex optimization problem.

Let $C$ be a closed convex subset of $H$. The standard constrained convex optimization problem is to find $x^{*}\in C$ such that

$$f(x^{*})=\min_{x\in C}f(x), \tag{5.1}$$

where $f:C\to\mathbb{R}$ is a convex, Fréchet differentiable function. The set of solutions of (5.1) is denoted by $\Phi_{f}$.

Lemma 5.1

(Optimality condition, [25])

A necessary condition of optimality for a point $x^{*}\in C$ to be a solution of the minimization problem (5.1) is that $x^{*}$ solves the variational inequality

$$\langle\nabla f(x^{*}),x-x^{*}\rangle\ge0 \tag{5.2}$$

for all $x\in C$. Equivalently, $x^{*}\in C$ solves the fixed point equation

$$x^{*}=P_{C}(I-\lambda\nabla f)x^{*}$$

for every $\lambda>0$. If, in addition, $f$ is convex, then the optimality condition (5.2) is also sufficient.
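As a minimal illustration of this fixed point characterization (the problem data are our own toy choice, not from the paper: $f(x)=x^{2}$ on $C=[0,100]$ with step size $\lambda=0.25$), iterating $x\mapsto P_{C}(I-\lambda\nabla f)x$ drives $x$ to the minimizer $x^{*}=0$, which is indeed a fixed point:

```python
# Projected-gradient illustration of Lemma 5.1 on H = R, C = [0, 100].
# Hypothetical toy problem: f(x) = x^2, grad f(x) = 2x, minimizer x* = 0.

def project_C(x, lo=0.0, hi=100.0):
    """Metric projection P_C of R onto C = [lo, hi]."""
    return min(max(x, lo), hi)

def fixed_point_iteration(x0=4.0, lam=0.25, steps=50):
    """Iterate x <- P_C((I - lam * grad f) x)."""
    x = x0
    for _ in range(steps):
        x = project_C(x - lam * 2.0 * x)   # (I - lam grad f)x = (1 - 2*lam) x here
    return x

x_star = fixed_point_iteration()
# x* = 0 satisfies VI (5.2): <grad f(0), x - 0> = 0 >= 0 for all x in C,
# and the fixed point equation: P_C(0 - lam * grad f(0)) = 0.
```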

Theorem 5.1

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For $i=1,2,\dots,N$, let $f_{i}:C\to\mathbb{R}$ be a real-valued convex function whose gradient $\nabla f_{i}$ is $\frac{1}{L_{f_{i}}}$-inverse strongly monotone and continuous with $L_{f_{i}}>0$. Let $\Theta_{i}$, $\varphi_{i}$, $A$, $V$, $G$, $F_{1}$, $F_{2}$, $R_{n}$, $F_{1,\lambda_{n}}$, $F_{2,\nu_{n}}$, $T_{r_{n}}$, and $T_{r_{i,n}}^{(\Theta_{i},\varphi_{i})}$ be defined as in Theorem 3.3. Given $x_{1}\in C$, let $\{x_{n}\}$ be the sequence generated by the following explicit algorithm:

$$\begin{cases} w_{n}=\alpha_{n}\gamma Vx_{n}+(I-\alpha_{n}\mu G)T_{r_{n}}\Lambda_{n}^{N}R_{n}x_{n},\\ x_{n+1}=P_{C}\bigl[(I-\beta_{n}A)T_{r_{n}}\Lambda_{n}^{N}R_{n}x_{n}+\beta_{n}w_{n}\bigr],\quad n\ge1, \end{cases} \tag{5.3}$$

where $\Lambda_{n}^{i}=T_{r_{i,n}}^{(\Theta_{i},\varphi_{i})}(I-r_{i,n}\nabla f_{i})\,T_{r_{i-1,n}}^{(\Theta_{i-1},\varphi_{i-1})}(I-r_{i-1,n}\nabla f_{i-1})\cdots T_{r_{1,n}}^{(\Theta_{1},\varphi_{1})}(I-r_{1,n}\nabla f_{1})$ and $\Lambda_{n}^{0}=I$. Assume that $\{\alpha_{n}\}$, $\{\beta_{n}\}$, $\{r_{n}\}$, $\{\lambda_{n}\}$, $\{\nu_{n}\}$, and $\{r_{i,n}\}_{i=1}^{N}$ satisfy conditions (C1)–(C7) in Theorem 3.3. Then $\{x_{n}\}$ converges strongly to $\tilde{x}\in\Omega:=\bigcap_{i=1}^{N}\operatorname{MEP}(\Theta_{i},\varphi_{i})\cap\bigcap_{i=1}^{N}\Phi_{f_{i}}\cap\operatorname{GSVI}(C,F_{1},F_{2})\cap\operatorname{Fix}(T)$, which is the unique solution of VI (3.2).

Proof

By using Lemma 5.1 and Theorem 3.3, we obtain the desired conclusion directly. □

Conclusions

We introduced and analyzed one general implicit iterative scheme and one general explicit iterative scheme for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Moreover, we established strong convergence of the proposed implicit and explicit iterative schemes to a solution of the GSVI, which is the unique solution of a certain variational inequality. Our Theorems 3.1–3.3 not only improve and develop the main results of [1] and [12] but also improve and develop Theorems 3.1 and 3.2 of [9], Theorems 3.1 and 3.2 of [10], and Proposition 3.1 and Theorems 3.2 and 3.5 of [11].

Authors’ contributions

All authors read and approved the final manuscript.

Funding

L.-C. Ceng was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of the Ministry of Education of China (20123127110002), and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Qian-Wen Wang, Email: wangqw2017@gmail.com.

Jin-Lin Guan, Email: guanjinlinaabb@163.com.

Lu-Chuan Ceng, Email: zenglc@hotmail.com.

Bing Hu, Email: hubing@yorku.ca.

References

  • 1. Ceng L.C., Wang C.Y., Yao J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008;67:375–390. doi: 10.1007/s00186-007-0207-4.
  • 2. Siriyan K., Kangtunyakarn A. A new general system of variational inequalities for convergence theorem and application. Numer. Algorithms. 2018;12:1–25.
  • 3. Bnouhachem A. A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem. Fixed Point Theory Appl. 2014;2014:22. doi: 10.1186/1687-1812-2014-22.
  • 4. Ceng L.C., Liou Y.C., Wen C.F., Wu Y.J. Hybrid extragradient viscosity method for general system of variational inequalities. J. Inequal. Appl. 2015;2015:150. doi: 10.1186/s13660-015-0646-z.
  • 5. Alofi A., Latif A., Mazrooei A.A., Yao J.C. Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 2016;17(4):669–682.
  • 6. Rouhani B.D., Kazmi K.R., Farid M. Common solutions to some systems of variational inequalities and fixed point problems. Fixed Point Theory. 2017;18(1):167–190. doi: 10.24193/fpt-ro.2017.1.14.
  • 7. Eslamian M., Saejung S., Vahidi J. Common solution of a system of variational inequality problems. UPB Sci. Bull., Ser. A. 2015;77(1):55–62.
  • 8. Alofi A.S.M., Latif A., Al-Marzooei A.E., Yao J.C. Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 2016;17:669–682.
  • 9. Ceng L.C., Guu S.M., Yao J.C. A general composite iterative algorithm for nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 2011;61:2447–2455. doi: 10.1016/j.camwa.2011.02.025.
  • 10. Jung J.S. A general composite iterative method for strictly pseudocontractive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014;2014:173. doi: 10.1186/1687-1812-2014-173.
  • 11. Kong Z.R., Ceng L.C., Liou Y.C., Wen C.F. Hybrid steepest-descent methods for systems of variational inequalities with constraints of variational inclusions and convex minimization problems. J. Nonlinear Sci. Appl. 2017;10:874–901. doi: 10.22436/jnsa.010.03.03.
  • 12. Jung J.S. Strong convergence of some iterative algorithms for a general system of variational inequalities. J. Nonlinear Sci. Appl. 2017;10:3887–3902. doi: 10.22436/jnsa.010.07.42.
  • 13. Peng J.W., Yao J.C. A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008;12:1401–1432. doi: 10.11650/twjm/1500405033.
  • 14. Kong Z.R., Ceng L.C., Ansari Q.H., Pang C.T. Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013;2013:718624.
  • 15. Ceng L.C., Ansari Q.H., Schaible S. Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012;53:69–96. doi: 10.1007/s10898-011-9703-4.
  • 16. Ceng L.C., Yao J.C. A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010;72:1922–1937. doi: 10.1016/j.na.2009.09.033.
  • 17. Ceng L.C., Lin Y.C., Wen C.F. Iterative methods for triple hierarchical variational inequalities with mixed equilibrium problems, variational inclusions, and variational inequalities constraints. J. Inequal. Appl. 2015;2015:16. doi: 10.1186/s13660-014-0535-x.
  • 18. Ceng L.C., Hu H.Y., Wong M.M. Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 2011;15:1341–1367. doi: 10.11650/twjm/1500406303.
  • 19. Peng J.W., Yao J.C. A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008;12:1401–1432. doi: 10.11650/twjm/1500405033.
  • 20. Jung J.S. A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010;2010:251761.
  • 21. Goebel K., Kirk W.A. Topics in Metric Fixed Point Theory. Cambridge: Cambridge University Press; 1990.
  • 22. Marino G., Xu H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006;318:43–52. doi: 10.1016/j.jmaa.2005.05.028.
  • 23. Yamada I. The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In: Butnariu D., Censor Y., Reich S., editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Amsterdam: North-Holland; 2001. pp. 473–504.
  • 24. Zegeye H. An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011;2011:621901.
  • 25. Suwannaut S., Kangtunyakran A. The combination of the set of solutions of equilibrium problem for convergence theorem of the set of fixed points of strictly pseudo-contractive mappings and variational inequalities problem. Fixed Point Theory Appl. 2013;2013:291. doi: 10.1186/1687-1812-2013-291.

Articles from Journal of Inequalities and Applications are provided here courtesy of Springer
