. 2018 Aug 8;2018(1):205. doi: 10.1186/s13660-018-1799-3

Convergence theorems for split feasibility problems on a finite sum of monotone operators and a family of nonexpansive mappings

Narin Petrot 1, Montira Suwannaprapa 1, Vahid Dadashi 2
PMCID: PMC6097171  PMID: 30839581

Abstract

In this paper, we present two iterative algorithms for approximating a solution of the split feasibility problem on zeros of a sum of monotone operators and fixed points of a finite family of nonexpansive mappings. Weak and strong convergence theorems are proved in the framework of Hilbert spaces under mild conditions. We apply the main results to the problems of finding a common zero of a sum of inverse strongly monotone operators and maximal monotone operators, finding a common zero of a finite family of maximal monotone operators, solving the multiple-set split common null point problem, and solving the multiple-set split convex feasibility problem. Further applications of the main results are also provided.

Keywords: Maximal monotone operator, Inverse strongly monotone operator, Resolvent operator, Convex feasibility problems

Introduction

A very common problem in different areas of mathematics and the physical sciences consists of finding a point in the intersection of convex sets; it is formulated as finding a point $z \in H$ satisfying

$$z \in \bigcap_{i=1}^{M} C_i,$$

where $C_i$, $i=1,\dots,M$, are nonempty, closed, and convex subsets of a Hilbert space $H$. This problem is called the convex feasibility problem (CFP). The CFP has various applications in many applied disciplines as diverse as applied mathematics, approximation theory, image recovery and signal processing, control theory, biomedical engineering, communications, and geophysics (see [1–7] and the references therein).
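As a minimal illustration (ours, not from the paper), the classical cyclic-projection (POCS) method for the CFP with $M = 2$ sets can be sketched as follows; the disk and half-plane below are arbitrary stand-ins chosen only for the demo.

```python
import numpy as np

# POCS sketch for the CFP with two sets in H = R^2:
# C1 = closed disk, C2 = half-plane; iterate z_{n+1} = P_{C2} P_{C1} z_n.

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):
    """Metric projection onto {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

c, r = np.array([0.0, 0.0]), 1.0      # C1 = B(0, 1)
a, b = np.array([1.0, 1.0]), 0.5      # C2 = {z : z1 + z2 <= 0.5}

x = np.array([3.0, 2.0])
for _ in range(200):
    x = proj_halfspace(proj_ball(x, c, r), a, b)

# x now lies (numerically) in C1 ∩ C2
assert np.linalg.norm(x - c) <= r + 1e-9 and a @ x <= b + 1e-9
```

Because the two sets intersect, the composition of the two projections drives the iterate into $C_1 \cap C_2$; the SFP methods discussed next replace one projection by an operator acting through $L$.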

The problem of finding $z \in H_1$ such that $z \in C$ and $Lz \in D$ is called the split feasibility problem (SFP), where $C$ and $D$ are nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $L \colon H_1 \to H_2$ is a bounded linear operator. Setting $L^{-1}(D) = \{x : Lx \in D\}$, the SFP can be viewed as a special case of the CFP, since it can be rewritten as $z \in C \cap L^{-1}(D)$. However, the methodologies for studying the SFP are actually different from those for the CFP; see [8–14].

The theory of monotone operators has emerged as a powerful and effective tool for studying a wide class of problems arising in different branches of the social, engineering, and pure sciences in a unified and general framework. One important notion in this theory is the generalized sum of two monotone operators; see [15, 16] and the references therein. In recent years, much attention has been devoted to finding zero points of monotone operators and fixed points of Lipschitz continuous mappings; see [17–22] and the references therein. The first algorithm for approximating zero points of a maximal monotone operator was introduced by Martinet [23], who considered the proximal point algorithm. Later, Passty [24] introduced a forward-backward splitting method for finding zero points of the sum of two operators. The problem of finding zero points of the sum of two operators has various applications; see, for example, [25–29] and the references therein.

The CFP admits several generalizations, which can be formulated in various ways: finding a common fixed point of nonexpansive operators, finding a common minimum of convex functionals, finding a common zero of maximal monotone operators, solving a system of variational inequalities, and solving a system of convex inequalities. Surveys of methods for solving such problems can be found in [2, 4].

Recently, some authors introduced and studied algorithms for finding a common solution to inclusion problems and fixed point problems in the framework of Hilbert spaces; see [30–32]. Cho et al. [30] considered the problem of finding a common solution to zero point problems involving two monotone operators and fixed point problems involving asymptotically strictly pseudocontractive mappings, based on a one-step iterative method, and proved weak convergence theorems in the framework of Hilbert spaces.

In this paper, motivated and inspired by the above literature, we consider iterative algorithms for solving the split feasibility problem over zeros of a finite sum of $\alpha$-inverse strongly monotone operators and maximal monotone operators and fixed points of nonexpansive mappings. That is, we consider the following problem: let $H_1$ and $H_2$ be real Hilbert spaces, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, be maximal monotone operators, $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, and $L \colon H_1 \to H_2$ be a bounded linear operator. We are interested in the problem of finding a point $p \in H_1$ such that

$$p \in \Big(\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)\Big) \cap L^{-1}\Big(\bigcap_{j=1}^{N}F(T_j)\Big) =: \mathcal{F}, \qquad (1.1)$$

where $\mathcal{F} \neq \emptyset$. Weak and strong convergence theorems will be provided under some mild conditions.

The paper is organized as follows. Section 2 gathers some definitions and lemmas of geometry of Hilbert spaces and monotone operators, which will be needed in the remaining sections. In Sect. 3, we prepare an iterative algorithm and prove the weak and strong convergence theorems. Finally, in Sect. 4, the results of Sect. 3 are applied to solve CFP, multiple-set null point problems, variational inequality problems, fixed point problems, and equilibrium problems.

Preliminaries

Throughout this paper, $H$ will be a Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$, respectively. We now provide some basic concepts, definitions, and lemmas which will be used in the sequel. We write $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$ and $x_n \rightharpoonup x$ to indicate that $\{x_n\}$ converges weakly to $x$.

Let $T \colon H \to H$ be a mapping. We say that $T$ is a Lipschitz mapping if there exists $L \ge 0$ such that

$$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in H.$$

The number $L$, associated with $T$, is called a Lipschitz constant. If $L = 1$, we say that $T$ is a nonexpansive mapping, that is,

$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H.$$

We will say that $T$ is firmly nonexpansive if

$$\langle Tx - Ty, x - y \rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H.$$

The set of fixed points of $T$ will be denoted by $F(T)$, that is, $F(T) = \{x \in H : Tx = x\}$. It is well known that if $T$ is nonexpansive, then $F(T)$ is closed and convex. Moreover, every nonexpansive operator $T \colon H \to H$ satisfies the inequality

$$\langle (x - Tx) - (y - Ty), Ty - Tx \rangle \le \tfrac{1}{2}\|(Tx - x) - (Ty - y)\|^2, \quad \forall x, y \in H.$$

Therefore, for all $x \in H$ and $y \in F(T)$,

$$\langle x - Tx, y - Tx \rangle \le \tfrac{1}{2}\|Tx - x\|^2; \qquad (2.1)$$

see [33, 34].

Lemma 2.1

([35])

Let $H$ be a real Hilbert space and $T \colon H \to H$ be a nonexpansive mapping with $F(T) \neq \emptyset$. Then the mapping $I - T$ is demiclosed at zero, that is, if $\{x_n\}$ is a sequence in $H$ such that $x_n \rightharpoonup x$ and $x_n - Tx_n \to 0$, then $x \in F(T)$.

A mapping $T \colon H \to H$ is called $\alpha$-averaged if there exists $\alpha \in (0,1)$ such that $T = (1-\alpha)I + \alpha S$, where $S$ is a nonexpansive mapping of $H$ into $H$. It should be observed that firmly nonexpansive mappings are $\frac{1}{2}$-averaged mappings.

We now recall the concepts and facts on the class of monotone operators, for both single and multi-valued operators.

An operator $A \colon H \to H$ is called $\alpha$-inverse strongly monotone ($\alpha$-ism) for a positive number $\alpha$ if

$$\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in H.$$

Lemma 2.2

([21])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let the mapping $A \colon C \to H$ be $\alpha$-inverse strongly monotone and $r > 0$ be a constant. Then we have

$$\|(I - rA)x - (I - rA)y\|^2 \le \|x - y\|^2 + r(r - 2\alpha)\|Ax - Ay\|^2$$

for all $x, y \in C$. In particular, if $0 < r \le 2\alpha$, then $I - rA$ is nonexpansive.

We have the following properties from [36, 37].

Lemma 2.3

We have

  1. The composite of finitely many averaged mappings is averaged. In particular, if $T_i$ is $\alpha_i$-averaged, where $\alpha_i \in (0,1)$ for $i = 1, 2$, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.

  2. If $A$ is $\beta$-ism and $r \in (0, \beta]$, then $T := I - rA$ is firmly nonexpansive.

A multifunction $B \colon H \to 2^H$ is called a monotone operator if, for every $x, y \in H$,

$$\langle x - y, x^* - y^* \rangle \ge 0, \quad \forall x^* \in B(x), y^* \in B(y).$$

A monotone operator $B \colon H \to 2^H$ is said to be maximal monotone when its graph is not properly included in the graph of any other monotone operator on the same space. For a maximal monotone operator $B$ on $H$ and $\lambda > 0$, we define the single-valued resolvent $J_\lambda^B \colon H \to D(B)$ by $J_\lambda^B = (I + \lambda B)^{-1}$. It is well known that $J_\lambda^B$ is firmly nonexpansive and $F(J_\lambda^B) = B^{-1}(0)$.
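For intuition, here is a small numerical sketch of ours (not from the paper): when $B$ is the single-valued linear maximal monotone operator $Bx = Mx$ with $M$ symmetric positive semidefinite, the resolvent $J_\lambda^B = (I + \lambda B)^{-1}$ reduces to a linear solve, and its firm nonexpansiveness can be checked directly.

```python
import numpy as np

# Resolvent of the linear maximal monotone operator B(x) = Mx,
# M symmetric positive semidefinite: J_lam^B x = (I + lam*M)^{-1} x.

M = np.array([[2.0, 0.0],
              [0.0, 0.5]])          # symmetric PSD, so B is maximal monotone
lam = 0.7

def resolvent(x, M, lam):
    """J_lam^B x via a linear solve."""
    return np.linalg.solve(np.eye(len(x)) + lam * M, x)

x = np.array([3.0, -4.0])
y = np.array([-1.0, 2.0])
z, w = resolvent(x, M, lam), resolvent(y, M, lam)

# firm nonexpansiveness: <Jx - Jy, x - y> >= ||Jx - Jy||^2
assert np.dot(z - w, x - y) >= np.linalg.norm(z - w) ** 2 - 1e-12

# fixed points of the resolvent are exactly the zeros of B; here B^{-1}(0) = {0}
assert np.allclose(resolvent(np.zeros(2), M, lam), 0.0)
```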

Next, we collect some useful facts on monotone operators that will be used in our proof.

Lemma 2.4

([38])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$ and $A \colon C \to H$ be an operator. If $B \colon H \to 2^H$ is a maximal monotone operator, then $F(J_\lambda^B(I - \lambda A)) = (A + B)^{-1}(0)$.

Lemma 2.5

([39])

Let $B \colon H \to 2^H$ be a maximal monotone operator. For $\lambda > 0$, $\mu > 0$, and $x \in H$,

$$J_\lambda^B x = J_\mu^B\Big(\frac{\mu}{\lambda}x + \Big(1 - \frac{\mu}{\lambda}\Big)J_\lambda^B x\Big).$$

For each sequence $\{x_n\} \subset H$, we put

$$\omega_w(x_n) := \{x \in H : \text{there is a subsequence } \{x_{n_j}\} \subset \{x_n\} \text{ such that } x_{n_j} \rightharpoonup x\}.$$

The following lemma plays an important role in concluding our results.

Lemma 2.6

([37])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{x_n\}$ be a sequence in $H$ satisfying the properties:

  • (i)

    $\lim_{n\to\infty}\|x_n - u\|$ exists for each $u \in C$;

  • (ii)

    $\omega_w(x_n) \subset C$.

Then $\{x_n\}$ converges weakly to a point in $C$.

Parallel algorithm

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, be maximal monotone operators, $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, and $L \colon H_1 \to H_2$ be a bounded linear operator. We will denote by $L^*$ the adjoint operator of $L$. Let $\{\beta_n\}$ and $\{\lambda_n\}$ be sequences of positive real numbers. For $x_1 \in H_1$, we introduce the following parallel algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(T_j - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}. \end{cases} \qquad (3.1)$$
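A schematic NumPy rendering (ours, not from the paper) of one sweep of (3.1) may help fix the structure. The concrete operators below are illustrative stand-ins only: ball projections play the roles of the nonexpansive $T_j$ and of the resolvent steps $J_{\beta}^{B_i}(I - \beta A_i)$, and the matrix and step size are our choices.

```python
import numpy as np

# Schematic iteration of the parallel algorithm (3.1).
# `T` holds nonexpansive maps on H2; `res_step` holds stand-ins for the
# composed resolvent steps y |-> J_b^{B_i}(I - b*A_i) y on H1.

def proj_ball(radius):
    def P(x):
        n = np.linalg.norm(x)
        return x if n <= radius else radius * x / n
    return P

L = np.array([[1.0, 0.0], [2.0, 2.0], [0.0, 2.0]])   # bounded linear op R^2 -> R^3
T = [proj_ball(1.0), proj_ball(2.0)]                  # nonexpansive T_j on H2 = R^3
res_step = [proj_ball(0.5), proj_ball(3.0)]           # firmly nonexpansive stand-ins on H1

lam = 0.02   # must satisfy 0 < a <= lam <= b < 1/(2*||L||^2) ~ 0.0464 here

def iterate(x):
    # y_{j,n} = x_n + lam * L^T (T_j - I) L x_n; pick the j with the largest move
    ys = [x + lam * L.T @ (Tj(L @ x) - L @ x) for Tj in T]
    y = max(ys, key=lambda yj: np.linalg.norm(yj - x))
    # z_{i,n} = resolvent step applied to y_n; pick the i with the largest move
    zs = [R(y) for R in res_step]
    return max(zs, key=lambda zi: np.linalg.norm(zi - x))

x = np.array([5.0, -3.0])
for _ in range(500):
    x = iterate(x)

# the limit lies in the intersection of the "B_i-type" sets and pulls Lx into F(T_j)
assert np.linalg.norm(x) <= 0.5 + 1e-6
assert np.linalg.norm(L @ x) <= 1.0 + 1e-6
```

For this toy data the iterate settles after a couple of steps; with genuine resolvents in `res_step` the same loop realizes (3.1) under the step-size conditions of Theorem 3.5.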

We start by some lemmas.

Lemma 3.1

Let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$. If

  • (i)

    $\{\beta_n\} \subset (0, 2\alpha)$ and

  • (ii)

    $\{\lambda_n\} \subset (a, \frac{1}{\|L\|^2})$ for some $a > 0$,

then the sequences $\{x_n\}$ and $\{y_n\}$ generated by (3.1) are bounded.

Proof

Let $u \in \mathcal{F}$. We have

$$\|y_n - u\|^2 = \|x_n + \lambda_n L^*(T_{j_n} - I)Lx_n - u\|^2 = \|x_n - u\|^2 + 2\lambda_n\langle x_n - u, L^*(T_{j_n} - I)Lx_n\rangle + \lambda_n^2\|L^*(T_{j_n} - I)Lx_n\|^2. \qquad (3.2)$$

By (2.1), we get

$$\begin{aligned} \langle x_n - u, L^*(T_{j_n} - I)Lx_n\rangle &= \langle Lx_n - Lu, T_{j_n}Lx_n - Lx_n\rangle \\ &= \langle Lx_n - T_{j_n}Lx_n + T_{j_n}Lx_n - Lu, T_{j_n}Lx_n - Lx_n\rangle \\ &= -\|T_{j_n}Lx_n - Lx_n\|^2 + \langle T_{j_n}Lx_n - Lu, T_{j_n}Lx_n - Lx_n\rangle \\ &\le -\|T_{j_n}Lx_n - Lx_n\|^2 + \tfrac{1}{2}\|T_{j_n}Lx_n - Lx_n\|^2 \\ &= -\tfrac{1}{2}\|T_{j_n}Lx_n - Lx_n\|^2. \end{aligned} \qquad (3.3)$$

It follows from (3.2) and (3.3) that

$$\|y_n - u\|^2 \le \|x_n - u\|^2 - \lambda_n\|T_{j_n}Lx_n - Lx_n\|^2 + \lambda_n^2\|L\|^2\|T_{j_n}Lx_n - Lx_n\|^2 = \|x_n - u\|^2 - \lambda_n(1 - \lambda_n\|L\|^2)\|T_{j_n}Lx_n - Lx_n\|^2 \le \|x_n - u\|^2. \qquad (3.4)$$

Hence, from Lemma 2.2, Lemma 2.4, and the control conditions on $\{\beta_n\}$ and $\{\lambda_n\}$, we have

$$\begin{aligned} \|x_{n+1} - u\|^2 &= \|z_{i_n,n} - u\|^2 = \|J_{\beta_n}^{B_{i_n}}(I - \beta_n A_{i_n})y_n - J_{\beta_n}^{B_{i_n}}(I - \beta_n A_{i_n})u\|^2 \\ &\le \|(I - \beta_n A_{i_n})y_n - (I - \beta_n A_{i_n})u\|^2 \\ &= \|y_n - u\|^2 + \beta_n^2\|A_{i_n}y_n - A_{i_n}u\|^2 - 2\beta_n\langle y_n - u, A_{i_n}y_n - A_{i_n}u\rangle \\ &\le \|y_n - u\|^2 + \beta_n^2\|A_{i_n}y_n - A_{i_n}u\|^2 - 2\beta_n\alpha_{i_n}\|A_{i_n}y_n - A_{i_n}u\|^2 \\ &= \|y_n - u\|^2 + \beta_n(\beta_n - 2\alpha_{i_n})\|A_{i_n}y_n - A_{i_n}u\|^2 \\ &\le \|y_n - u\|^2 \le \|x_n - u\|^2. \end{aligned}$$

This means that $\{\|x_n - u\|\}$ is a nonincreasing sequence of nonnegative real numbers, so it is convergent. Also, from the above inequality, $\{\|x_n - u\|\}$ and $\{\|y_n - u\|\}$ converge to the same limit. These imply that the sequences $\{x_n\}$ and $\{y_n\}$ are bounded, and the proof is completed. □

Lemma 3.2

If $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$, then $\omega_w(Lx_n) \subset \bigcap_{j=1}^{N}F(T_j)$.

Proof

By (3.4) we have

$$\lambda_n(1 - \lambda_n\|L\|^2)\|T_{j_n}Lx_n - Lx_n\|^2 \le \|x_n - u\|^2 - \|y_n - u\|^2 \to 0, \quad n \to \infty,$$

and hence,

$$\|T_{j_n}Lx_n - Lx_n\| \to 0, \quad n \to \infty.$$

Therefore, from (3.1), we get

$$\|L^*(T_jLx_n - Lx_n)\| = \frac{1}{\lambda_n}\|y_{j,n} - x_n\| \le \frac{1}{\lambda_n}\|y_n - x_n\| = \|L^*(T_{j_n}Lx_n - Lx_n)\| \le \|L\|\|T_{j_n}Lx_n - Lx_n\| \to 0, \quad n \to \infty, \qquad (3.5)$$

for each $j=1,\dots,N$, which implies that

$$L^*(T_jLx_n - Lx_n) \to 0, \quad n \to \infty. \qquad (3.6)$$

From (2.1), we have

$$\begin{aligned} \langle \lambda_n L^*(T_jLx_n - Lx_n) + x_n - u, \lambda_n L^*(T_jLx_n - Lx_n)\rangle &= \lambda_n^2\|L^*(T_jLx_n - Lx_n)\|^2 + \lambda_n\langle Lx_n - Lu, T_jLx_n - Lx_n\rangle \\ &= \lambda_n^2\|L^*(T_jLx_n - Lx_n)\|^2 + \lambda_n\langle Lx_n - T_jLx_n + T_jLx_n - Lu, T_jLx_n - Lx_n\rangle \\ &= \lambda_n^2\|L^*(T_jLx_n - Lx_n)\|^2 - \lambda_n\|T_jLx_n - Lx_n\|^2 + \lambda_n\langle T_jLx_n - Lu, T_jLx_n - Lx_n\rangle \\ &\le \lambda_n^2\|L\|^2\|T_jLx_n - Lx_n\|^2 - \lambda_n\|T_jLx_n - Lx_n\|^2 + \tfrac{1}{2}\lambda_n\|T_jLx_n - Lx_n\|^2 \\ &= -\lambda_n\Big(\tfrac{1}{2} - \lambda_n\|L\|^2\Big)\|T_jLx_n - Lx_n\|^2 \le 0 \end{aligned} \qquad (3.7)$$

for each $j=1,\dots,N$. Thus, since the left-hand side of (3.7) tends to zero by (3.6) and the boundedness of $\{x_n\}$, the assumption on $\{\lambda_n\}$ gives

$$\|T_jLx_n - Lx_n\| \to 0, \quad n \to \infty, \qquad (3.8)$$

for each $j=1,\dots,N$. From Lemma 2.1, we obtain $\omega_w(Lx_n) \subset F(T_j)$ for each $j=1,\dots,N$. This completes the proof. □

Lemma 3.3

Let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$. If $\{\beta_n\} \subset (0, 2\alpha)$, then, for each $i=1,\dots,M$, we have $\|x_n - z_{i,n}\| \to 0$.

Proof

Since $J_{\beta_n}^{B_i}$ and $I - \beta_n A_i$ are firmly nonexpansive, they are both $\frac{1}{2}$-averaged, and hence $T_{i,n} := J_{\beta_n}^{B_i}(I - \beta_n A_i)$ is $\frac{3}{4}$-averaged by Lemma 2.3. Thus, for each $n \in \mathbb{N}$ and $1 \le i \le M$, we can write

$$T_{i,n} = \tfrac{1}{4}I + \tfrac{3}{4}S_{i,n},$$

where $S_{i,n}$ is a nonexpansive mapping and $F(S_{i,n}) = F(T_{i,n}) = F(J_{\beta_n}^{B_i}(I - \beta_n A_i)) = (A_i + B_i)^{-1}(0)$ for each $n \in \mathbb{N}$ and $1 \le i \le M$. Then we can rewrite $x_{n+1}$ as

$$x_{n+1} = T_{i_n,n}(y_n) = \tfrac{1}{4}y_n + \tfrac{3}{4}S_{i_n,n}(y_n). \qquad (3.9)$$

Let $u \in \bigcap_{i=1}^{M}(A_i + B_i)^{-1}(0)$; we have

$$\|x_{n+1} - u\|^2 = \big\|\tfrac{1}{4}(y_n - u) + \tfrac{3}{4}(S_{i_n,n}(y_n) - u)\big\|^2 = \tfrac{1}{4}\|y_n - u\|^2 + \tfrac{3}{4}\|S_{i_n,n}(y_n) - u\|^2 - \tfrac{3}{16}\|y_n - S_{i_n,n}(y_n)\|^2 \le \|y_n - u\|^2 - \tfrac{3}{16}\|y_n - S_{i_n,n}(y_n)\|^2,$$

and hence,

$$\tfrac{3}{16}\|y_n - S_{i_n,n}(y_n)\|^2 \le \|y_n - u\|^2 - \|x_{n+1} - u\|^2.$$

Then

$$\|y_n - S_{i_n,n}(y_n)\| \to 0, \quad n \to \infty.$$

From (3.9),

$$\|y_n - x_{n+1}\| = \tfrac{3}{4}\|y_n - S_{i_n,n}(y_n)\| \to 0, \quad n \to \infty. \qquad (3.10)$$

By (3.5), we get

$$\|x_n - y_n\| = \|x_n - y_{j_n,n}\| \to 0, \quad n \to \infty. \qquad (3.11)$$

Now, from (3.1), (3.10), and (3.11), we obtain

$$\|x_n - z_{i,n}\| \le \|x_n - z_{i_n,n}\| \le \|x_n - y_n\| + \|y_n - x_{n+1}\| \to 0, \quad n \to \infty. \qquad (3.12)$$

 □

Lemma 3.4

Assume that $\beta_n \to \beta$ for some positive real number $\beta$. Then, for each $i=1,\dots,M$, we have $\|x_n - J_\beta^{B_i}(I - \beta A_i)x_n\| \to 0$ as $n \to \infty$.

Proof

Set $w_{i,n} = (I - \beta_n A_i)y_n$, so that $z_{i,n} = J_{\beta_n}^{B_i}w_{i,n}$. By Lemma 2.5, we have

$$\begin{aligned} \|J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n - J_\beta^{B_i}(I - \beta_n A_i)y_n\| &= \|J_{\beta_n}^{B_i}w_{i,n} - J_\beta^{B_i}w_{i,n}\| \\ &= \Big\|J_\beta^{B_i}\Big(\frac{\beta}{\beta_n}w_{i,n} + \Big(1 - \frac{\beta}{\beta_n}\Big)J_{\beta_n}^{B_i}w_{i,n}\Big) - J_\beta^{B_i}w_{i,n}\Big\| \\ &\le \Big\|\frac{\beta}{\beta_n}w_{i,n} + \Big(1 - \frac{\beta}{\beta_n}\Big)J_{\beta_n}^{B_i}w_{i,n} - w_{i,n}\Big\| \\ &= \Big|1 - \frac{\beta}{\beta_n}\Big|\,\|J_{\beta_n}^{B_i}w_{i,n} - w_{i,n}\|. \end{aligned} \qquad (3.13)$$

On the other hand, we have

$$\|J_{\beta_n}^{B_i}w_{i,n} - w_{i,n}\| = \|z_{i,n} - w_{i,n}\| = \|z_{i,n} - y_n + \beta_n A_iy_n\| \le \|z_{i,n} - x_n\| + \|x_n - y_n\| + \beta_n\|A_iy_n\|.$$

Since $A_i$ is inverse strongly monotone and $\{y_n\}$ is bounded, by (3.11) and (3.12) the sequence $\{\|J_{\beta_n}^{B_i}w_{i,n} - w_{i,n}\|\}$ is bounded. It follows from $\beta_n \to \beta$ and (3.13) that

$$\|J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n - J_\beta^{B_i}(I - \beta_n A_i)y_n\| \to 0, \quad n \to \infty. \qquad (3.14)$$

We also have

$$\begin{aligned} \|J_\beta^{B_i}(I - \beta_n A_i)y_n - J_\beta^{B_i}(I - \beta A_i)x_n\| &\le \|(I - \beta_n A_i)y_n - (I - \beta A_i)x_n\| \\ &\le \|y_n - x_n\| + \beta_n\|A_iy_n - A_ix_n\| + |\beta_n - \beta|\|A_ix_n\| \\ &\le \|y_n - x_n\| + \frac{\beta_n}{\alpha}\|y_n - x_n\| + |\beta_n - \beta|\|A_ix_n\| \\ &= \Big(1 + \frac{\beta_n}{\alpha}\Big)\|y_n - x_n\| + |\beta_n - \beta|\|A_ix_n\| \to 0, \quad n \to \infty. \end{aligned} \qquad (3.15)$$

It follows from (3.12), (3.14), and (3.15) that

$$\|x_n - J_\beta^{B_i}(I - \beta A_i)x_n\| \le \|x_n - z_{i,n}\| + \|J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n - J_\beta^{B_i}(I - \beta_n A_i)y_n\| + \|J_\beta^{B_i}(I - \beta_n A_i)y_n - J_\beta^{B_i}(I - \beta A_i)x_n\| \to 0, \quad n \to \infty,$$

for each $i=1,\dots,M$. This completes the proof of the lemma. □

Now, the weak convergence of algorithm (3.1) is given by the following theorem.

Theorem 3.5

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, $L \colon H_1 \to H_2$ be a bounded linear operator, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, and $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, be maximal monotone operators such that $\mathcal{F} = (\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}F(T_j)) \neq \emptyset$. Let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$, $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$, and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$. Then the sequence $\{x_n\}$ generated by (3.1) converges weakly to a point $p \in \mathcal{F}$.

Proof

In Lemma 3.1, we showed that $\lim_{n\to\infty}\|x_n - u\|$ exists for each $u \in \mathcal{F}$. From Lemmas 3.2 and 3.4 it follows that $\omega_w(x_n) \subset \mathcal{F}$. Then Lemma 2.6 implies that $\{x_n\}$ converges weakly to a point $p \in \mathcal{F}$. □

Recall that, for a subset $C$ of $H$, a mapping $T \colon C \to C$ is said to be semi-compact if for any bounded sequence $\{x_n\} \subset C$ such that $x_n - Tx_n \to 0$ ($n \to \infty$), there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $\{x_{n_j}\}$ converges strongly to some $x \in C$.

Strong convergence of algorithm (3.1), under the concept of semi-compact assumption, is given by the following theorem.

Theorem 3.6

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, $L \colon H_1 \to H_2$ be a bounded linear operator, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, and $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, be maximal monotone operators such that $\mathcal{F} = (\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}F(T_j)) \neq \emptyset$. Let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$, $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$, and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$. If at least one of the mappings $T_j$ is semi-compact, then the sequence $\{x_n\}$ generated by (3.1) converges strongly to a point $p \in \mathcal{F}$.

Proof

Let $T_j$ be semi-compact for some fixed $j \in \{1,\dots,N\}$. Since $\lim_{n\to\infty}\|T_jLx_n - Lx_n\| = 0$ by (3.8), there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ that converges strongly to some $q$. Since $\{x_n\}$ converges weakly to $p$, we get $p = q$. On the other hand, $\lim_{n\to\infty}\|x_n - p\|$ exists and $\lim_{k\to\infty}\|x_{n_k} - p\| = 0$, which shows that $\{x_n\}$ converges strongly to $p \in \mathcal{F}$. This completes the proof of the theorem. □

Deduced results of parallel algorithm

One can obtain some results from Theorem 3.5. We give some of them in the following.

If we take M=N=1, we have the following corollary.

Corollary 3.7

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T \colon H_2 \to H_2$ be a nonexpansive mapping, $L \colon H_1 \to H_2$ be a bounded linear operator, $A \colon H_1 \to H_1$ be an $\alpha$-inverse strongly monotone operator, and $B \colon H_1 \to 2^{H_1}$ be a maximal monotone operator such that $(A+B)^{-1}(0) \cap L^{-1}(F(T)) \neq \emptyset$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} y_n = x_n + \lambda_n L^*(T - I)Lx_n,\\ x_{n+1} = J_{\beta_n}^{B}(I - \beta_n A)y_n, \end{cases}$$

where $x_1 \in H_1$, $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$, and $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges weakly to a point $p \in (A+B)^{-1}(0) \cap L^{-1}(F(T))$. If $T$ is semi-compact, then the convergence is strong.

From Theorem 3.5, we have the following corollary for the problem of finding a common zero of the sum of α-inverse strongly monotone operators and maximal monotone operators.

Corollary 3.8

Let $H$ be a real Hilbert space, $A_i \colon H \to H$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, and $B_i \colon H \to 2^H$, $i=1,\dots,M$, be maximal monotone operators such that $\mathcal{F} = \bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0) \neq \emptyset$, and let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)x_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}, \end{cases}$$

where $x_1 \in H$ and $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges weakly to a point $p \in \bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)$.

In the following corollary, we have a result for finding a common zero of a finite family of maximal monotone operators.

Corollary 3.9

Let $H$ be a real Hilbert space and $B_i \colon H \to 2^H$, $i=1,\dots,M$, be maximal monotone operators such that $\bigcap_{i=1}^{M}B_i^{-1}(0) \neq \emptyset$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} z_{i,n} = J_{\beta_n}^{B_i}x_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}, \end{cases}$$

where $x_1 \in H$ and $\beta_n > 0$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges weakly to a point $p \in \bigcap_{i=1}^{M}B_i^{-1}(0)$.

Corollary 3.10

Let $H$ be a real Hilbert space and $A_i \colon H \to H$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators such that $\bigcap_{i=1}^{M}A_i^{-1}(0) \neq \emptyset$, and let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} z_{i,n} = x_n - \beta_n A_ix_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}, \end{cases}$$

where $x_1 \in H$ and $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges weakly to a point $p \in \bigcap_{i=1}^{M}A_i^{-1}(0)$.

Corollary 3.11

Let $H_1$ and $H_2$ be real Hilbert spaces, $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $\mathcal{F} = L^{-1}(\bigcap_{j=1}^{N}F(T_j)) \neq \emptyset$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(T_j - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ x_{n+1} = y_{j_n,n}, \end{cases}$$

where $x_1 \in H_1$ and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$. Then the sequence $\{x_n\}$ converges weakly to a point $p \in L^{-1}(\bigcap_{j=1}^{N}F(T_j))$. If $T_j$ is semi-compact for some $1 \le j \le N$, then the convergence is strong.

Parallel hybrid algorithm

Notice that, in order to guarantee strong convergence of algorithm (3.1), we imposed an additional semi-compactness assumption on one of the operators $T_j$ (see Theorem 3.6). Next, we propose the following hybrid algorithm, which yields a strong convergence theorem for finding a point in the zeros of a finite family of sums of $\alpha$-inverse strongly monotone operators and maximal monotone operators and in the fixed point sets of nonexpansive mappings, without any additional assumptions on the considered operators. To this end, we recall some necessary concepts and facts. Let $C$ be a closed and convex subset of a Hilbert space $H$. The operator $P_C$ is called the metric projection operator if it assigns to each $x \in H$ its nearest point $y \in C$, that is,

$$\|x - y\| = \min\{\|x - z\| : z \in C\}.$$

The element $y$ is called the metric projection of $x$ onto $C$ and is denoted by $P_Cx$; it exists and is unique for every point of the Hilbert space. It is known that the metric projection operator $P_C$ is firmly nonexpansive. Also, the following characterization is very useful in our proof.

Lemma 4.1

Let $H$ be a Hilbert space and $C$ be a nonempty, closed, and convex subset of $H$. Then, for $x \in H$, we have $z = P_Cx$ if and only if

$$\langle x - z, z - y \rangle \ge 0, \quad \forall y \in C.$$
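Lemma 4.1 can be checked numerically for a concrete set; the closed unit ball below is our choice of $C$ for the check (not from the paper).

```python
import numpy as np

# Check the variational characterization of P_C for C = closed unit ball in R^2:
# z = P_C x must satisfy <x - z, z - y> >= 0 for every y in C.

def proj_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

rng = np.random.default_rng(0)
x = np.array([2.0, 1.0])
z = proj_unit_ball(x)                 # here z = x / ||x||

for _ in range(1000):
    y = rng.uniform(-1, 1, size=2)
    if np.linalg.norm(y) <= 1:        # sample points of C
        assert np.dot(x - z, z - y) >= -1e-12
```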

Now we are in a position to introduce the aforementioned algorithm. Let $x_1 \in C_1 = H_1$ and $\{x_n\}$ be a sequence generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(T_j - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ z_n = z_{i_n,n},\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|y_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1. \end{cases} \qquad (4.1)$$
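It is worth noting that each inequality defining $C_{n+1}$ is a halfspace: $\|a - z\| \le \|b - z\|$ is equivalent to $2\langle b - a, z\rangle \le \|b\|^2 - \|a\|^2$, so $C_{n+1}$ is an intersection of halfspaces with $C_n$ and the projection step is in principle computable. A small sketch of ours of this halfspace form:

```python
import numpy as np

# ||closer - z|| <= ||farther - z||  <=>  <a, z> <= b with
# a = 2*(farther - closer), b = ||farther||^2 - ||closer||^2.

def halfspace_of(closer, farther):
    a = 2.0 * (farther - closer)
    b = farther @ farther - closer @ closer
    return a, b

def proj_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

z_n = np.array([0.0, 0.0])
x_n = np.array([2.0, 0.0])
a, b = halfspace_of(z_n, x_n)                 # here: {z : 4*z1 <= 4}, i.e. z1 <= 1

p = proj_halfspace(np.array([3.0, 5.0]), a, b)
# the projected point satisfies the original distance inequality
assert np.linalg.norm(p - z_n) <= np.linalg.norm(p - x_n) + 1e-12
```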

Theorem 4.2

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T_j \colon H_2 \to H_2$, $j=1,\dots,N$, be nonexpansive mappings, $L \colon H_1 \to H_2$ be a bounded linear operator, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, and $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, be maximal monotone operators such that $\mathcal{F} = (\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}F(T_j)) \neq \emptyset$. Let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$, $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$, and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$. Then the sequence $\{x_n\}$ generated by (4.1) converges strongly to $q = P_{\mathcal{F}}(x_1)$.

Proof

We prove that the sequence $\{x_n\}$ generated by (4.1) is well defined. We first show that $C_n$ is closed and convex for each $n \in \mathbb{N}$. Clearly, $C_1 = H_1$ is closed and convex; suppose that $C_n$ is closed and convex for some $n \ge 1$. Set

$$C_n^1 = \{z \in H_1 : \|z_n - z\| \le \|y_n - z\|\}, \qquad C_n^2 = \{z \in H_1 : \|y_n - z\| \le \|x_n - z\|\};$$

then $C_{n+1} = C_n \cap C_n^1 \cap C_n^2$. For each $p \in C_n^1$, we obtain

$$\|z_n - p\| \le \|y_n - p\| \iff \|z_n - y_n + y_n - p\|^2 \le \|y_n - p\|^2 \iff \|z_n - y_n\|^2 + \|y_n - p\|^2 + 2\langle z_n - y_n, y_n - p\rangle \le \|y_n - p\|^2 \iff \|z_n - y_n\|^2 + 2\langle z_n - y_n, y_n - p\rangle \le 0.$$

This implies that $C_n^1$ is closed and convex. In a similar manner, $C_n^2$ is closed and convex, and so is $C_{n+1} = C_n \cap C_n^1 \cap C_n^2$. By induction, $C_n$ is closed and convex for each $n \ge 1$.

We show that $\mathcal{F} \subset C_n$ for each $n \ge 1$. Let $p \in \mathcal{F}$. From Lemmas 2.2 and 2.4 and (4.1), we have

$$\|z_n - p\| = \|J_{\beta_n}^{B_{i_n}}(I - \beta_n A_{i_n})y_n - J_{\beta_n}^{B_{i_n}}(I - \beta_n A_{i_n})p\| \le \|(I - \beta_n A_{i_n})y_n - (I - \beta_n A_{i_n})p\| \le \|y_n - p\|.$$

This together with (3.4) implies that $p \in C_{n+1}$. Then $\{x_n\}$ is well defined.

Since $\mathcal{F}$ is nonempty, closed, and convex, there exists a unique element $q \in \mathcal{F} \subset C_n$ such that $q = P_{\mathcal{F}}x_1$. From $x_{n+1} = P_{C_{n+1}}(x_1)$, we get

$$\|x_{n+1} - x_1\| \le \|x_1 - q\|. \qquad (4.2)$$

Since again $x_n = P_{C_n}(x_1)$ and $x_{n+1} = P_{C_{n+1}}(x_1) \in C_{n+1} \subset C_n$, we get

$$\|x_n - x_1\| \le \|x_{n+1} - x_1\|. \qquad (4.3)$$

Thus, the sequence $\{\|x_n - x_1\|\}$ is bounded above and nondecreasing, so $\lim_{n\to\infty}\|x_n - x_1\|$ exists, and the sequence $\{x_n\}$ is bounded. By (3.4) the sequence $\{y_n\}$ is bounded too.

We show that $\|x_{n+1} - x_n\| \to 0$, $\|x_n - y_n\| \to 0$, and $\|y_n - z_n\| \to 0$. From $x_n = P_{C_n}(x_1)$, $x_{n+1} = P_{C_{n+1}}(x_1) \in C_{n+1} \subset C_n$, and Lemma 4.1, we obtain

$$\langle x_1 - x_n, x_n - x_{n+1} \rangle \ge 0.$$

Then we get

$$\begin{aligned} \|x_n - x_{n+1}\|^2 &= \|x_n - x_1 + x_1 - x_{n+1}\|^2 \\ &= \|x_n - x_1\|^2 + 2\langle x_n - x_1, x_1 - x_{n+1}\rangle + \|x_1 - x_{n+1}\|^2 \\ &= \|x_n - x_1\|^2 + 2\langle x_n - x_1, x_1 - x_n\rangle + 2\langle x_n - x_1, x_n - x_{n+1}\rangle + \|x_1 - x_{n+1}\|^2 \\ &\le \|x_n - x_1\|^2 - 2\|x_n - x_1\|^2 + \|x_1 - x_{n+1}\|^2 \\ &= -\|x_n - x_1\|^2 + \|x_1 - x_{n+1}\|^2 \to 0, \quad n \to \infty, \end{aligned}$$

and hence,

$$\|x_n - x_{n+1}\| \to 0, \quad n \to \infty.$$

By $x_{n+1} = P_{C_{n+1}}(x_1) \in C_{n+1} \subset C_n$ and the definition of $C_n$, we obtain

$$\|x_{n+1} - z_n\| \le \|x_{n+1} - y_n\| \le \|x_{n+1} - x_n\|,$$

and then

$$\|x_n - y_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - y_n\| \le 2\|x_n - x_{n+1}\|,$$

which implies that

$$\|x_n - y_n\| \to 0, \quad n \to \infty. \qquad (4.4)$$

Also, we have

$$\|y_n - z_n\| \le \|y_n - x_{n+1}\| + \|x_{n+1} - z_n\| \le 2\|x_n - x_{n+1}\|,$$

therefore,

$$\|y_n - z_n\| \to 0, \quad n \to \infty. \qquad (4.5)$$

By (4.4) and (4.5), we obtain

$$\|x_n - z_n\| \to 0, \quad n \to \infty. \qquad (4.6)$$

Now, we show that $\omega_w(x_n) \subset \mathcal{F}$. From (3.5), (3.7), and (4.4), we get

$$\|T_jLx_n - Lx_n\| \to 0, \quad n \to \infty, \qquad (4.7)$$

for each $j=1,\dots,N$. It follows from Lemma 2.1 that $\omega_w(Lx_n) \subset \bigcap_{j=1}^{N}F(T_j)$. By arguing similarly to the proof of Lemma 3.4, using (4.4) and (4.6), we conclude that $\omega_w(x_n) \subset \bigcap_{i=1}^{M}F(J_\beta^{B_i}(I - \beta A_i)) = \bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)$. Therefore,

$$\omega_w(x_n) \subset \mathcal{F}. \qquad (4.8)$$

Finally, we show that the sequence $\{x_n\}$ generated by (4.1) converges strongly to $q = P_{\mathcal{F}}(x_1)$. Since $x_n = P_{C_n}(x_1)$ and $q \in \mathcal{F} \subset C_n$, we get

$$\|x_n - x_1\| \le \|q - x_1\|. \qquad (4.9)$$

Let $\{x_{n_k}\}$ be an arbitrary subsequence of $\{x_n\}$ converging weakly to some $p \in H_1$. Then $p \in \mathcal{F}$ by (4.8), and hence it follows from the weak lower semi-continuity of the norm that

$$\|q - x_1\| \le \|p - x_1\| \le \liminf_{k\to\infty}\|x_{n_k} - x_1\| \le \limsup_{k\to\infty}\|x_{n_k} - x_1\| \le \|q - x_1\|.$$

Thus, we obtain $\lim_{k\to\infty}\|x_{n_k} - x_1\| = \|p - x_1\| = \|q - x_1\|$. Using the Kadec–Klee property of $H_1$, we get $\lim_{k\to\infty}x_{n_k} = p = q$. Since $\{x_{n_k}\}$ is an arbitrary weakly convergent subsequence of $\{x_n\}$ and $\lim_{n\to\infty}\|x_n - x_1\|$ exists, we conclude that $\{x_n\}$ converges strongly to $q$. This completes the proof. □

Deduced results of the parallel hybrid algorithm

One can obtain some results from Theorem 4.2. We give some of them in the following.

If we take M=N=1, we have the following corollary.

Corollary 4.3

Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T \colon H_2 \to H_2$ be a nonexpansive mapping, $L \colon H_1 \to H_2$ be a bounded linear operator, $A \colon H_1 \to H_1$ be an $\alpha$-inverse strongly monotone operator, and $B \colon H_1 \to 2^{H_1}$ be a maximal monotone operator such that $\mathcal{F} = (A+B)^{-1}(0) \cap L^{-1}(F(T)) \neq \emptyset$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} y_n = x_n + \lambda_n L^*(T - I)Lx_n,\\ z_n = J_{\beta_n}^{B}(I - \beta_n A)y_n,\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|y_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1, \end{cases}$$

where $x_1 \in C_1 = H_1$, $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$, and $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges strongly to $q = P_{\mathcal{F}}(x_1)$.

From Theorem 4.2, we have the following corollary for the problem of finding a common zero of the sum of α-inverse strongly monotone operators and maximal monotone operators.

Corollary 4.4

Let $H$ be a real Hilbert space, $A_i \colon H \to H$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, and $B_i \colon H \to 2^H$, $i=1,\dots,M$, be maximal monotone operators such that $\mathcal{F} = \bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0) \neq \emptyset$, and let $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$. Suppose that the sequence $\{x_n\}$ is defined by the following algorithm:

$$\begin{cases} z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)x_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ z_n = z_{i_n,n},\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1, \end{cases}$$

where $x_1 \in C_1 = H$ and $\beta_n \in (0, 2\alpha)$ for each $n \in \mathbb{N}$. Then the sequence $\{x_n\}$ converges strongly to $q = P_{\mathcal{F}}(x_1)$.

Applications

Zeros of maximal monotone operators

In this section, we discuss some applications of the main theorems. Let $M_j \colon H_2 \to 2^{H_2}$, $j=1,\dots,N$, be maximal monotone operators. Set $T_j = J_r^{M_j}$, where $r > 0$ and $j=1,\dots,N$. We know that $T_j$ is nonexpansive and $F(T_j) = M_j^{-1}(0)$ for each $j=1,\dots,N$. By applying Theorem 3.5, we obtain the following results.

Theorem 5.1

Let $H_1$ and $H_2$ be real Hilbert spaces, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, and $M_j \colon H_2 \to 2^{H_2}$, $j=1,\dots,N$, be maximal monotone operators, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $\mathcal{F} = (\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}M_j^{-1}(0)) \neq \emptyset$. Let $x_1 \in H_1$ and the sequence $\{x_n\}$ be generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(J_r^{M_j} - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}. \end{cases}$$

If $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$, $\beta_n \in (0, 2\alpha)$, and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, then $\{x_n\}$ converges weakly to a point $p \in \mathcal{F}$.

By Theorem 5.1, we have the following corollary for multiple sets split null point problems.

Corollary 5.2

Let $H_1$ and $H_2$ be real Hilbert spaces, $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, and $M_j \colon H_2 \to 2^{H_2}$, $j=1,\dots,N$, be maximal monotone operators, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $(\bigcap_{i=1}^{M}B_i^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}M_j^{-1}(0)) \neq \emptyset$. Let $x_1 \in H_1$ and the sequence $\{x_n\}$ be generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(J_r^{M_j} - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = J_{\beta_n}^{B_i}y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}. \end{cases}$$

If $\beta_n > 0$ and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, then $\{x_n\}$ converges weakly to a point $p \in (\bigcap_{i=1}^{M}B_i^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}M_j^{-1}(0))$.

By applying Theorem 4.2, we have the following theorem.

Theorem 5.3

Let $H_1$ and $H_2$ be real Hilbert spaces, $A_i \colon H_1 \to H_1$, $i=1,\dots,M$, be $\alpha_i$-inverse strongly monotone operators, $B_i \colon H_1 \to 2^{H_1}$, $i=1,\dots,M$, and $M_j \colon H_2 \to 2^{H_2}$, $j=1,\dots,N$, be maximal monotone operators, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $\mathcal{F} = (\bigcap_{i=1}^{M}(A_i+B_i)^{-1}(0)) \cap L^{-1}(\bigcap_{j=1}^{N}M_j^{-1}(0)) \neq \emptyset$. Let $x_1 \in H_1$ and the sequence $\{x_n\}$ be generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(J_r^{M_j} - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = J_{\beta_n}^{B_i}(I - \beta_n A_i)y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ z_n = z_{i_n,n},\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|y_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1. \end{cases} \qquad (5.1)$$

If $\alpha = \min\{\alpha_1,\dots,\alpha_M\}$, $\beta_n \in (0, 2\alpha)$, and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, then $\{x_n\}$ converges strongly to $q = P_{\mathcal{F}}(x_1)$.

Multiple set split convex feasibility problems

Let $f \colon H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semi-continuous function. It is well known that the subdifferential $\partial f \colon H \to 2^H$, defined by

$$\partial f(x) = \{z \in H : \langle y - x, z \rangle \le f(y) - f(x), \forall y \in H\},$$

is a maximal monotone operator. In particular, let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, and consider the indicator function of $C$, denoted by $\iota_C$ and defined by

$$\iota_C(x) = \begin{cases} 0, & x \in C,\\ +\infty, & x \notin C. \end{cases}$$

We know that $\iota_C$ is a proper, convex, and lower semi-continuous function on $H$, so the subdifferential $\partial\iota_C$ of $\iota_C$ is a maximal monotone operator. Furthermore, $z = J_r^{\partial\iota_C}x$ if and only if $z = P_C(x)$, where $x \in H$ and $J_r^{\partial\iota_C} = (I + r\partial\iota_C)^{-1}$ for each $r > 0$. Using these facts, by Theorems 3.5 and 4.2, we have the following corollaries for the multiple-set split convex feasibility problem in Hilbert spaces.
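The identity $J_r^{\partial\iota_C} = P_C$ can be sanity-checked in the simplest setting; the interval below is our choice of $C$ (not from the paper), where the resolvent is a clamp independent of $r > 0$.

```python
# Resolvent of the subdifferential of the indicator of C = [lo, hi] in H = R:
# solving x in z + r*N_C(z) gives z = P_C(x) = clamp(x), for every r > 0.

lo, hi = 0.0, 1.0

def proj_interval(x):
    """P_C for C = [lo, hi]; equals J_r^{d iota_C} for any r > 0."""
    return min(max(x, lo), hi)

for x in [-2.0, 0.3, 5.0]:
    z = proj_interval(x)
    assert lo <= z <= hi              # z is feasible
    if lo < z < hi:
        assert abs(x - z) < 1e-12     # interior points: normal cone is {0}, so z = x
```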

Corollary 5.4

Let $H_1$ and $H_2$ be real Hilbert spaces, $C_i \subset H_1$, $i=1,\dots,M$, and $D_j \subset H_2$, $j=1,\dots,N$, be nonempty, closed, and convex sets, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $(\bigcap_{i=1}^{M}C_i) \cap L^{-1}(\bigcap_{j=1}^{N}D_j) \neq \emptyset$. Let $x_1 \in H_1$ and the sequence $\{x_n\}$ be generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(P_{D_j} - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = P_{C_i}y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}. \end{cases}$$

If $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, then $\{x_n\}$ converges weakly to a point $p \in (\bigcap_{i=1}^{M}C_i) \cap L^{-1}(\bigcap_{j=1}^{N}D_j)$.

Corollary 5.5

Let $H_1$ and $H_2$ be real Hilbert spaces, $C_i \subset H_1$, $i=1,\dots,M$, and $D_j \subset H_2$, $j=1,\dots,N$, be nonempty, closed, and convex sets, and $L \colon H_1 \to H_2$ be a bounded linear operator such that $\mathcal{F} = (\bigcap_{i=1}^{M}C_i) \cap L^{-1}(\bigcap_{j=1}^{N}D_j) \neq \emptyset$. Let $x_1 \in H_1$ and the sequence $\{x_n\}$ be generated by the following algorithm:

$$\begin{cases} y_{j,n} = x_n + \lambda_n L^*(P_{D_j} - I)Lx_n, & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ z_{i,n} = P_{C_i}y_n, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ z_n = z_{i_n,n},\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|y_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1. \end{cases}$$

If $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, then $\{x_n\}$ converges strongly to $q = P_{\mathcal{F}}(x_1)$.

Multiple sets split equilibrium problems

Now, we apply Theorem 3.5 to obtain a common solution of multiple-set split equilibrium problems. In this respect, let $C$ be a nonempty closed convex subset of a Hilbert space $H_1$ and $F \colon C \times C \to \mathbb{R}$ be a bifunction. The equilibrium problem for the bifunction $F$ is the problem of finding a point $z \in C$ such that

$$F(z, y) \ge 0, \quad \forall y \in C. \qquad (5.2)$$

The set of solutions of equilibrium problem (5.2) is denoted by $EP(F)$. The bifunction $F \colon C \times C \to \mathbb{R}$ is called monotone if $F(x,y) + F(y,x) \le 0$ for all $x, y \in C$. For finding a solution of equilibrium problem (5.2), we assume that $F$ satisfies the following properties:

  (A1) $F(x,x) = 0$ for all $x \in C$;

  (A2) $F$ is monotone;

  (A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1-t)x, y) \le F(x,y)$;

  (A4) for each $x \in C$, $y \mapsto F(x,y)$ is convex and lower semi-continuous.

Then we have the following lemma which can be found in [40, 41].

Lemma 5.6

Let $C$ be a nonempty closed convex subset of a Hilbert space $H_1$ and $F \colon C \times C \to \mathbb{R}$ be a bifunction satisfying properties (A1)–(A4). Let $r$ be a positive real number and $x \in H_1$. Then there exists $z \in C$ such that

$$F(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \quad \forall y \in C.$$

Further, define

$$T_r x = \Big\{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \forall y \in C\Big\}$$

for all $r > 0$ and $x \in H_1$. Then the following hold:

  1. $T_r$ is single-valued;

  2. $T_r$ is firmly nonexpansive; that is,
    $$\|T_rx - T_ry\|^2 \le \langle T_rx - T_ry, x - y \rangle, \quad \forall x, y \in H_1;$$

  3. $F(T_r) = EP(F)$;

  4. $EP(F)$ is closed and convex.

Let $C_i$, $i=1,\dots,M$, and $D_j$, $j=1,\dots,N$, be nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, let $f_i \colon C_i \times C_i \to \mathbb{R}$, $i=1,\dots,M$, and $g_j \colon D_j \times D_j \to \mathbb{R}$, $j=1,\dots,N$, be bifunctions satisfying properties (A1)–(A4), and let $L \colon H_1 \to H_2$ be a bounded linear operator. By Lemma 5.6 there exist sequences $\{z_{i,n}\}$ in $H_1$ and $\{u_{j,n}\}$ in $H_2$ satisfying

$$\begin{cases} r g_j(u_{j,n}, y) + \langle y - u_{j,n}, u_{j,n} - Lx_n \rangle \ge 0, \ \forall y \in D_j, & j=1,\dots,N,\\ y_{j,n} = x_n + \lambda_n L^*(u_{j,n} - Lx_n), & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ \beta_n f_i(z_{i,n}, u) + \langle u - z_{i,n}, z_{i,n} - y_n \rangle \ge 0, \ \forall u \in C_i, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ x_{n+1} = z_{i_n,n}. \end{cases} \qquad (5.3)$$

Therefore, by applying Theorem 3.5, we have the following theorem for multiple sets split equilibrium problem.

Theorem 5.7

Let $C_i$, $i=1,\dots,M$, and $D_j$, $j=1,\dots,N$, be nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $f_i \colon C_i \times C_i \to \mathbb{R}$, $i=1,\dots,M$, and $g_j \colon D_j \times D_j \to \mathbb{R}$, $j=1,\dots,N$, be bifunctions satisfying properties (A1)–(A4). Suppose that $L \colon H_1 \to H_2$ is a bounded linear operator such that $\mathcal{F} = (\bigcap_{i=1}^{M}EP(f_i)) \cap L^{-1}(\bigcap_{j=1}^{N}EP(g_j)) \neq \emptyset$. If $\beta_n > 0$ and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, and $r$ is a positive real number, then the sequence $\{x_n\}$ generated by (5.3) converges weakly to a solution of the multiple-set split equilibrium problem.

We also have the following strong convergence theorem for finding a solution of multiple sets split equilibrium problem.

Theorem 5.8

Let $C_i$, $i=1,\dots,M$, and $D_j$, $j=1,\dots,N$, be nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $f_i \colon C_i \times C_i \to \mathbb{R}$, $i=1,\dots,M$, and $g_j \colon D_j \times D_j \to \mathbb{R}$, $j=1,\dots,N$, be bifunctions satisfying properties (A1)–(A4). Suppose that $L \colon H_1 \to H_2$ is a bounded linear operator such that $\mathcal{F} = (\bigcap_{i=1}^{M}EP(f_i)) \cap L^{-1}(\bigcap_{j=1}^{N}EP(g_j)) \neq \emptyset$. Suppose that $x_1 \in C_1 = H_1$ and the sequence $\{x_n\}$ is generated by the following algorithm:

$$\begin{cases} r g_j(u_{j,n}, y) + \langle y - u_{j,n}, u_{j,n} - Lx_n \rangle \ge 0, \ \forall y \in D_j, & j=1,\dots,N,\\ y_{j,n} = x_n + \lambda_n L^*(u_{j,n} - Lx_n), & j=1,\dots,N,\\ \text{choose } j_n: \|y_{j_n,n} - x_n\| = \max_{j=1,\dots,N}\|y_{j,n} - x_n\|,\\ y_n = y_{j_n,n},\\ \beta_n f_i(z_{i,n}, u) + \langle u - z_{i,n}, z_{i,n} - y_n \rangle \ge 0, \ \forall u \in C_i, & i=1,\dots,M,\\ \text{choose } i_n: \|z_{i_n,n} - x_n\| = \max_{i=1,\dots,M}\|z_{i,n} - x_n\|,\\ z_n = z_{i_n,n},\\ C_{n+1} = \{z \in C_n : \|z_n - z\| \le \|y_n - z\| \le \|x_n - z\|\},\\ x_{n+1} = P_{C_{n+1}}x_1. \end{cases} \qquad (5.4)$$

If $\beta_n > 0$ and $0 < a \le \lambda_n \le b < \frac{1}{2\|L\|^2}$ for each $n \in \mathbb{N}$, and $r$ is a positive real number, then the sequence $\{x_n\}$ converges strongly to $q = P_{\mathcal{F}}(x_1)$.

Numerical experiments

In this section, we show some numerical examples and discuss the possible good choices of step size parameters βn and λn, which satisfy the control conditions in Theorem 3.5.

Let $H_1 = \mathbb{R}^2$ and $H_2 = \mathbb{R}^3$ be equipped with the Euclidean norm. Let $a_1 := (\frac{2}{5}, \frac{1}{5})$, $a_2 := (\frac{1}{2}, \frac{1}{2})$, and $u := (1, 1)$ be fixed in $H_1$, and let $\gamma_1 := \cos\frac{7\pi}{18}$ and $\gamma_2 := \cos\frac{\pi}{3}$ be scalars. Set $\tilde{C}_1 := C_1 + u$ and $\tilde{C}_2 := C_2 + u$, where $C_1$ and $C_2$ are the following closed convex ice-cream cones in $H_1$:

$$C_1 := \{x \in H_1 : \langle a_1, x \rangle \ge \gamma_1\|x\|\}, \qquad C_2 := \{x \in H_1 : \langle a_2, x \rangle \ge \gamma_2\|x\|\}.$$

We will consider the 1-ism operators $P_{\tilde{C}_1}$ and $P_{\tilde{C}_2}$, where $\tilde{C}_1$ and $\tilde{C}_2$ are defined by the above settings.

Next, for each $x := (x_1, x_2) \in H_1$, we are also concerned with the following two norms:

$$\|x\|_1 = |x_1| + |x_2| \quad \text{and} \quad \|x\|_\infty = \max\{|x_1|, |x_2|\}.$$

Consider a function f:H1R, which is defined by

f(x)=x1for all xH1.

We know that f is a convex function and subdifferential of f is

f(x)={zH1:x,z=x1,z1}for all xH1.

Moreover, since $f$ is a convex function, $\partial f(\cdot)$ must be a maximal monotone operator, and for each $\lambda > 0$ we have

$$J_\lambda^{\partial f}(x) = \bigl\{(u_1, u_2) \in H_1 : u_i = x_i - \min\{|x_i|, \lambda\}\operatorname{sgn}(x_i),\ i = 1, 2\bigr\},$$

where $\operatorname{sgn}(\cdot)$ denotes the signum function.
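The resolvent above is exactly componentwise soft thresholding, i.e., the proximal operator of $\lambda\|\cdot\|_1$. A minimal sketch (the helper name `resolvent_l1` is ours):

```python
import numpy as np

def resolvent_l1(x, lam):
    """Resolvent J_lam of the subdifferential of f = ||.||_1
    (the proximal operator of lam * ||.||_1), computed componentwise as
    u_i = x_i - min(|x_i|, lam) * sgn(x_i)  -- soft thresholding."""
    return x - np.minimum(np.abs(x), lam) * np.sign(x)
```

For instance, with $\lambda = 1$ the point $(3, -0.5)$ is mapped to $(2, 0)$: the large component is shrunk by $\lambda$ and the small one is set to zero.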

On the other hand, let $\tilde{x}_1 := (1, 2, 1)$, $\tilde{x}_2 := (1, 1, 1)$, and $\tilde{x}_3 := (0, 1, 0)$ be three fixed vectors in $H_2$. We consider the nonempty closed convex subset $Q_1 \cap Q_2 \cap Q_3$ of $H_2$, where $Q_1 := \{x \in H_2 : \|\tilde{x}_1 - x\| \le 5\}$, $Q_2 := \{x \in H_2 : \langle \tilde{x}_2, x\rangle \le 1\}$, and $Q_3 := \{x \in H_2 : \langle \tilde{x}_3, x\rangle \le \frac{1}{2}\}$. We notice that $F(P_{Q_1}) \cap F(P_{Q_2}) \cap F(P_{Q_3}) = Q_1 \cap Q_2 \cap Q_3$.
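The metric projections onto a closed ball such as $Q_1$ and onto half-spaces such as $Q_2$ and $Q_3$ also have closed forms. A sketch with illustrative names; note that the inequality directions in $Q_2$ and $Q_3$ are an assumption here (we take "$\le$" half-spaces), and the formulas below are the standard ones, independent of that choice of sign:

```python
import numpy as np

def project_ball(x, center, radius):
    """Projection onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Projection onto the half-space {y : <a, y> <= b} (a != 0)."""
    excess = np.dot(a, x) - b
    return x if excess <= 0 else x - (excess / np.dot(a, a)) * a
```

A point outside the ball is pulled radially back to the sphere; a point violating the half-space constraint is moved along the normal direction $a$ just far enough to satisfy it with equality.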

Now, let us consider the $3 \times 2$ matrix $L := \begin{pmatrix} 1 & 0 \\ 2 & 2 \\ 0 & 2 \end{pmatrix}$. We see that $L$ is a bounded linear operator from $H_1$ into $H_2$ with $\|L\| = 3.282073$.
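The operator norm of $L$ is its largest singular value, so the reported value and the step size bound $\frac{1}{2\|L\|^2}$ can be checked numerically. The row layout of $L$ below is our reading of the matrix in the text; it reproduces the reported norm:

```python
import numpy as np

# The 3x2 matrix L from the experiment (rows as reconstructed from the text).
L = np.array([[1.0, 0.0],
              [2.0, 2.0],
              [0.0, 2.0]])

# The operator norm ||L|| equals the largest singular value of L.
norm_L = np.linalg.norm(L, 2)

# Upper bound 1/(2||L||^2) on the step sizes lambda_n used in the theorems.
bound = 1.0 / (2.0 * norm_L ** 2)
```

This gives $\|L\| \approx 3.282073$ and hence $\frac{1}{2\|L\|^2} \approx 0.0464$, consistent with the step size choices below.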

Based on the above settings, we present some numerical experiments to show the efficiency of the constructed algorithm (3.1). That is, we are going to show that algorithm (3.1) converges to a point $p \in H_1$ such that

$$p \in \bigl((P_{\tilde{C}_1} + \partial f)^{-1}(0) \cap (P_{\tilde{C}_2} + \partial f)^{-1}(0)\bigr) \cap L^{-1}(Q_1 \cap Q_2 \cap Q_3), \tag{6.1}$$

and in this experiment we use the stopping criterion

$$\frac{\|x_{n+1} - x_n\|}{\max\{1, \|x_n\|\}} \le 1.0\text{e-}06.$$
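The relative-change stopping criterion can be coded directly; `stop` is an illustrative helper name:

```python
import numpy as np

def stop(x_next, x, tol=1.0e-06):
    """Relative-change stopping test
    ||x_{n+1} - x_n|| / max(1, ||x_n||) <= tol."""
    return np.linalg.norm(x_next - x) / max(1.0, np.linalg.norm(x)) <= tol
```

Dividing by $\max\{1, \|x_n\|\}$ makes the test behave like an absolute criterion near the origin and like a relative one for large iterates.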

We will consider the following cases of the step size parameters $\beta_n$ and $\lambda_n$ with the initial vectors $(0, 0)$, $(1, 1)$, $(-1, 1)$, $(1, -1)$, and $(-1, -1)$ in $H_1$:

Case 1. $\beta_n = 1.0\text{e-}03 + \frac{1}{100n}$, $\lambda_n = 1.0\text{e-}03 + \frac{1}{100n}$.

Case 2. $\beta_n = 1.0\text{e-}03 + \frac{1}{100n}$, $\lambda_n = \frac{1}{4\|L\|^2}$.

Case 3. $\beta_n = 1.0\text{e-}03 + \frac{1}{100n}$, $\lambda_n = 0.046 - \frac{1}{100n}$.

Case 4. $\beta_n = 1$, $\lambda_n = 1.0\text{e-}03 + \frac{1}{100n}$.

Case 5. $\beta_n = 1$, $\lambda_n = \frac{1}{4\|L\|^2}$.

Case 6. $\beta_n = 1$, $\lambda_n = 0.046 - \frac{1}{100n}$.

Case 7. $\beta_n = 1.999 - \frac{1}{100n}$, $\lambda_n = 1.0\text{e-}03 + \frac{1}{100n}$.

Case 8. $\beta_n = 1.999 - \frac{1}{100n}$, $\lambda_n = \frac{1}{4\|L\|^2}$.

Case 9. $\beta_n = 1.999 - \frac{1}{100n}$, $\lambda_n = 0.046 - \frac{1}{100n}$.

From Tables 1, 2, and 3, we may suggest that, for each initial point, the step size $\lambda_n = 0.046 - \frac{1}{100n}$ provides a faster convergence rate than the other choices, while the step size parameter $\beta_n$ seems to have less impact on the speed of convergence of algorithm (3.1) to the solution set (6.1).

Table 1.

Influence of the step size parameters $\beta_n$ and $\lambda_n$ (cases 1–3) of algorithm (3.1) for different initial points

Initial point | Case 1: Iters / Time (s) / Sol | Case 2: Iters / Time (s) / Sol | Case 3: Iters / Time (s) / Sol
(0, 0)   | 1647 / 0.644764 / (0.249753, 0) | 145 / 0.210611 / (0.249990, 0) | 110 / 0.172755 / (0.249996, 0)
(1, 1)   | 790 / 0.393530 / (1.124877, 0.875123) | 51 / 0.117471 / (1.124996, 0.875004) | 27 / 0.098625 / (1.124997, 0.875001)
(-1, 1)  | 195 / 0.231496 / (0.875676, 0) | 49 / 0.123486 / (0.795371, 0) | 36 / 0.127907 / (0.787096, 0)
(1, -1)  | 1069 / 0.486436 / (0.267956, 0.018131) | 150 / 0.207209 / (0.249990, 0) | 113 / 0.181702 / (0.249996, 0)
(-1, -1) | 2121 / 0.847208 / (0.249752, 0) | 449 / 0.313106 / (0.249991, 0) | 361 / 0.284821 / (0.249996, 0)

Table 2.

Influence of the step size parameters $\beta_n$ and $\lambda_n$ (cases 4–6) of algorithm (3.1) for different initial points

Initial point | Case 4: Iters / Time (s) / Sol | Case 5: Iters / Time (s) / Sol | Case 6: Iters / Time (s) / Sol
(0, 0)   | 1647 / 0.650587 / (0.249753, 0) | 106 / 0.176374 / (0.249991, 0) | 56 / 0.124235 / (0.249996, 0)
(1, 1)   | 790 / 0.398679 / (1.124877, 0.875123) | 51 / 0.122999 / (1.124996, 0.875004) | 27 / 0.098005 / (1.124999, 0.875001)
(-1, 1)  | 3 / 0.078350 / (0.985333, 0) | 3 / 0.079696 / (0.969096, 0) | 3 / 0.083422 / (0.952000, 0)
(1, -1)  | 1032 / 0.500529 / (0.575413, 0.325587) | 61 / 0.133658 / (0.520560, 0.270565) | 31 / 0.108214 / (0.462999, 0.213001)
(-1, -1) | 1658 / 0.689241 / (0.249753, 0) | 107 / 0.180100 / (0.249991, 0) | 57 / 0.129912 / (0.249996, 0)

Table 3.

Influence of the step size parameters $\beta_n$ and $\lambda_n$ (cases 7–9) of algorithm (3.1) for different initial points

Initial point | Case 7: Iters / Time (s) / Sol | Case 8: Iters / Time (s) / Sol | Case 9: Iters / Time (s) / Sol
(0, 0)   | 1647 / 0.644395 / (0.249753, 0) | 106 / 0.167910 / (0.249991, 0) | 56 / 0.122966 / (0.249996, 0)
(1, 1)   | 790 / 0.403824 / (1.124877, 0.875123) | 51 / 0.118171 / (1.124996, 0.875004) | 27 / 0.095997 / (1.124999, 0.875001)
(-1, 1)  | 3 / 0.080739 / (0.985333, 0) | 3 / 0.080157 / (0.969096, 0) | 3 / 0.080880 / (0.952000, 0)
(1, -1)  | 1032 / 0.463895 / (0.575413, 0.325587) | 61 / 0.133494 / (0.520560, 0.270565) | 31 / 0.104363 / (0.462999, 0.213001)
(-1, -1) | 1658 / 0.646397 / (0.249753, 0) | 107 / 0.173753 / (0.249991, 0) | 57 / 0.127317 / (0.249996, 0)

Conclusions

In this paper, we present two iterative algorithms, (3.1) and (4.1), for approximating a solution of the split feasibility problem on zeros of a finite sum of monotone operators and fixed points of a finite family of nonexpansive mappings. Under some mild conditions, we prove convergence theorems for these algorithms, and we provide some corollaries and applications of the main results. We point out that the construction of algorithm (3.1) is less complicated than that of (4.1); however, algorithm (3.1) requires some additional assumptions in order to guarantee strong convergence, while algorithm (4.1) does not (see Theorem 3.6 and Theorem 4.2). This observation may motivate future work analyzing and comparing the rates of convergence of the suggested algorithms.

Acknowledgements

The authors thank the anonymous referees for their valuable comments and suggestions to improve this paper.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Funding

This work is partially supported by the Thailand Research Fund under the project RSA5880028.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Narin Petrot, Email: narinp@nu.ac.th.

Montira Suwannaprapa, Email: montira.sw@gmail.com.

Vahid Dadashi, Email: vahid.dadashi@iausari.ac.ir.

References

1. Bauschke H.H., Borwein J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996;38(3):367–426. doi:10.1137/S0036144593251710.
2. Censor Y. Iterative methods for the convex feasibility problem. North-Holl. Math. Stud. 1984;87:83–91. doi:10.1016/S0304-0208(08)72812-3.
3. Combettes P.L. The convex feasibility problem in image recovery. In: Hawkes P. (ed.) Advances in Imaging and Electron Physics. New York: Academic Press; 1996. pp. 155–270.
4. Combettes P.L. The foundations of set theoretic estimation. Proc. IEEE. 1993;81(2):182–208. doi:10.1109/5.214546.
5. Deutsch F. The method of alternating orthogonal projections. In: Singh S.P. (ed.) Approximation Theory, Spline Functions and Applications. The Netherlands: Kluwer Academic; 1992. pp. 105–121.
6. Rockafellar R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976;14(5):877–898. doi:10.1137/0314056.
7. Stark H. (ed.) Image Recovery: Theory and Applications. Orlando: Academic Press; 1987.
8. Yao Y., Liou Y.C., Postolache M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization. doi:10.1080/02331934.2017.1390747.
9. Yao Y., Postolache M., Qin X., Yao J.C. Iterative algorithms for the proximal split feasibility problem. U. Politeh. Buch. Ser. A. (in press).
10. Yao Y., Leng L., Postolache M., Zheng X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017;18(5):875–882.
11. Yao Y., Agarwal R.P., Postolache M., Liu Y.C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014;2014:183. doi:10.1186/1687-1812-2014-183.
12. Yao Y., Postolache M., Liou Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013;2013:201. doi:10.1186/1687-1812-2013-201.
13. Ansari Q.H., Nimana N., Petrot N. Split hierarchical variational inequality problems and related problems. Fixed Point Theory Appl. 2014;2014:208. doi:10.1186/1687-1812-2014-208.
14. Suwannaprapa M., Petrot N., Suantai S. Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017;2017:6. doi:10.1186/s13663-017-0599-7.
15. Moudafi A. On the regularization of the sum of two maximal monotone operators. Nonlinear Anal., Theory Methods Appl. 2000;42(7):1203–1208. doi:10.1016/S0362-546X(99)00136-4.
16. Moudafi A., Oliny M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003;155(2):447–454. doi:10.1016/S0377-0427(02)00906-8.
17. Chang S.S., Lee H.J., Chan C.K. A new method for solving equilibrium problem, fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal., Theory Methods Appl. 2009;70(9):3307–3319. doi:10.1016/j.na.2008.04.035.
18. Dadashi V. Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 2017;96:299–306. doi:10.1017/S000497271700017X.
19. Kang S., Cho S., Liu Z. Convergence of iterative sequences for generalized equilibrium problems involving inverse-strongly monotone mappings. J. Inequal. Appl. 2010;2010(1):827082.
20. Lv S. Generalized systems of variational inclusions involving (A, η)-monotone mappings. Adv. Fixed Point Theory. 2011;1(1):15.
21. Nadezhkina N., Takahashi W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006;128(1):191–201. doi:10.1007/s10957-005-7564-z.
22. Qin X., Cho Y.J., Kang S.M. Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009;225(1):20–30. doi:10.1016/j.cam.2008.06.011.
23. Martinet B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970;3:154–158.
24. Passty G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979;72(2):383–390. doi:10.1016/0022-247X(79)90234-8.
25. Dadashi V., Khatibzadeh H. On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces. Optimization. 2017;66(9):1487–1494. doi:10.1080/02331934.2017.1337764.
26. Dadashi V., Postolache M. Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 2017;174:518–529. doi:10.1007/s10957-017-1117-0.
27. Moudafi A., Thera M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997;94(2):425–448. doi:10.1023/A:1022643914538.
28. Qin X., Cho S.Y., Wang L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014;2014:75. doi:10.1186/1687-1812-2014-75.
29. Tseng P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000;38:431–446. doi:10.1137/S0363012998338806.
30. Cho S.Y., Li W., Kang S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013;2013(1):199. doi:10.1186/1029-242X-2013-199.
31. Wu C., Liu A. Strong convergence of a hybrid projection iterative algorithm for common solutions of operator equations and of inclusion problems. Fixed Point Theory Appl. 2012;2012(1):90. doi:10.1186/1687-1812-2012-90.
32. Zhang M. Iterative algorithms for common elements in fixed point sets and zero point sets with applications. Fixed Point Theory Appl. 2012;2012(1):21. doi:10.1186/1687-1812-2012-21.
33. Shimoji K., Takahashi W. Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 2001;5(2):387–404. doi:10.11650/twjm/1500407345.
34. Suzuki T. Strong convergence theorems for an infinite family of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005;1:103–123.
35. Goebel K., Kirk W.A. Topics in Metric Fixed Point Theory. Cambridge: Cambridge University Press; 1990.
36. Boikanyo O.A. The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016;2016:2371857. doi:10.1155/2016/2371857.
37. Xu H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011;150:360–378. doi:10.1007/s10957-011-9837-z.
38. Aoyama K., Kimura Y., Takahashi W., Toyoda M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007;8:471–489.
39. Bruck R.E., Passty G.B. Almost convergence of the infinite product of resolvents in Banach spaces. Nonlinear Anal. 1979;3:279–282. doi:10.1016/0362-546X(79)90083-X.
40. Blum E., Oettli W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994;63:123–145.
41. Combettes P.L., Hirstoaga S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005;6:117–136.

Articles from Journal of Inequalities and Applications are provided here courtesy of Springer
