Journal of Inequalities and Applications. 2017 Apr 11;2017(1):72. doi: 10.1186/s13660-017-1338-7

Strong convergence theorems by hybrid and shrinking projection methods for sums of two monotone operators

Tadchai Yuying 1, Somyot Plubtieng 1,2

Abstract

In this paper, we introduce two iterative algorithms for finding a zero of the sum of two monotone operators by using hybrid projection methods and shrinking projection methods. Under suitable conditions, we prove that the generated sequences converge strongly to a zero of the sum of an inverse-strongly monotone operator and a maximal monotone operator. Finally, we present a numerical result for the algorithm defined by the hybrid method.

Keywords: hybrid projection methods, shrinking projection methods, monotone operators and resolvent

Introduction

The monotone inclusion problem is important in many areas, such as convex optimization and monotone variational inequalities. Splitting methods are likewise important because many nonlinear problems arising in applied areas such as signal processing, machine learning and image recovery are mathematically modeled as a nonlinear operator equation, in which the operator can be regarded as the sum of two nonlinear operators. The problem is to find a zero point of the sum of two monotone operators; that is,

$$\text{find } z \in H \text{ such that } 0 \in (A + B)z, \qquad (1)$$

where A is a monotone operator and B is a multi-valued maximal monotone operator. The set of solutions of (1) is denoted by $(A + B)^{-1}(0)$. Problem (1) includes many problems as special cases; see [1-8] and the references therein for more details. In fact, we can formulate the initial value problem of the evolution equation $0 \in Tu + \frac{\partial u}{\partial t}$, $u_0 = u(0)$, as problem (1), where the governing maximal monotone operator T is of the form $T = A + B$ (see [6] and the references therein). Methods for solving problem (1) have been studied extensively by many authors (see [4, 6] and [9]).
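As a standard illustration of (1) (added here for context; the specific formulation below is not taken from the cited works): if f is a convex differentiable function on H with a Lipschitz continuous gradient and g is a proper lower semicontinuous convex function, then x minimizes $f + g$ if and only if

$$0 \in \nabla f(x) + \partial g(x),$$

which is problem (1) with $A = \nabla f$ (inverse-strongly monotone by the Baillon-Haddad theorem) and $B = \partial g$ (maximal monotone).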

In 1997, Moudafi and Thera [10] introduced an iterative algorithm for problem (1) in the case where the operator B is maximal monotone and A is (single-valued) Lipschitz continuous and strongly monotone, namely

$$\begin{cases} x_n = J_{\lambda}^{B} w_n, \\ w_{n+1} = s w_n + (1 - s) x_n - \lambda (1 - s) A x_n, \end{cases} \qquad (2)$$

with fixed $s \in (0, 1)$ and under certain conditions. They found that the sequence $\{x_n\}$ defined by (2) converges weakly to an element of $(A + B)^{-1}(0)$.

On the other hand, Nakajo and Takahashi [11] introduced an iterative hybrid projection method and proved a strong convergence theorem for finding a zero point of a maximal monotone operator as follows:

$$\begin{cases} x_0 = x \in H, \\ y_n = J_{r_n}(x_n + f_n), \\ C_n = \{z \in H : \|y_n - z\| \le \|x_n + f_n - z\|\}, \\ Q_n = \{z \in H : \langle x_n - z, x_0 - x_n \rangle \ge 0\}, \\ x_{n+1} = P_{C_n \cap Q_n}(x_0), \end{cases} \qquad (3)$$

for every $n \in \mathbb{N} \cup \{0\}$, where $r_n \in (0, \infty)$. They proved that if $\liminf_{n \to \infty} r_n > 0$ and $\lim_{n \to \infty} \|f_n\| = 0$, then $x_n \to z_0 = P_{A^{-1}(0)}(x_0)$. Furthermore, many authors have introduced hybrid projection algorithms for finding zero points of maximal monotone operators; see, for example, [12] and the references therein. Recently, Dong and Lu [13] introduced a new hybrid projection algorithm for finding a fixed point of a nonexpansive mapping. Under suitable assumptions, they proved that the generated sequence converges strongly to a fixed point of the mapping. Moreover, by using a shrinking projection method, Takahashi et al. [14] introduced a new algorithm and proved strong convergence theorems for finding a common fixed point of families of nonexpansive mappings.

Motivated by the iterative schemes above, in this paper we introduce two iterative algorithms for finding zero points of the sum of an inverse-strongly monotone operator and a maximal monotone operator by using hybrid projection methods and shrinking projection methods. Under suitable conditions, we obtain strong convergence theorems for the sequences generated by our algorithms. The organization of this paper is as follows. In Section 2, we recall some definitions and lemmas. In Section 3, we prove a strong convergence theorem by using hybrid projection methods. In Section 4, we prove a strong convergence theorem by using shrinking projection methods. In Section 5, we report a numerical example which indicates that the hybrid projection method is effective.

Preliminaries

In this paper, we let C be a nonempty closed convex subset of a real Hilbert space H. Denote by $P_C(\cdot)$ the metric projection onto C. It is well known that $z = P_C(x)$ if and only if

$$\langle x - z, z - y \rangle \ge 0 \quad \text{for all } y \in C.$$

Moreover, we also note that

$$\|P_C(x) - P_C(y)\| \le \|x - y\| \quad \text{for all } x, y \in H$$

and

$$\|P_C(x) - x\| \le \|x - y\| \quad \text{for all } y \in C$$

(see also [15]). We say that $A : C \to H$ is a monotone operator if

$$\langle Ax - Ay, x - y \rangle \ge 0 \quad \text{for all } x, y \in C,$$

and the operator $A : C \to H$ is inverse-strongly monotone if there is $\alpha > 0$ such that

$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2 \quad \text{for all } x, y \in C.$$

In this case, the operator A is called $\alpha$-inverse-strongly monotone. It is easy to see that every inverse-strongly monotone operator is monotone and continuous. Recall that $B : H \to 2^H$ denotes a set-valued operator. The operator B is monotone if $\langle x_1 - x_2, z_1 - z_2 \rangle \ge 0$ whenever $z_1 \in Bx_1$ and $z_2 \in Bx_2$. A monotone operator B is maximal if, for any $(x, z) \in H \times H$, the condition $\langle x - y, z - w \rangle \ge 0$ for all $(y, w) \in \operatorname{Graph} B$ implies $z \in Bx$. Let B be a maximal monotone operator and $r > 0$. Then we can define the resolvent $J_r : R(I + rB) \to D(B)$ by $J_r = (I + rB)^{-1}$, where $D(B)$ is the domain of B. We know that $J_r$ is nonexpansive, and further properties can be found in [15-17].
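To make the resolvent concrete, the following short Python sketch (ours, not part of the original text) evaluates $J_r = (I + rB)^{-1}$ for the maximal monotone operator $B = \partial|\cdot|$ on $\mathbb{R}$, for which the resolvent reduces to the well-known soft-thresholding map, and checks numerically that it is nonexpansive; the function name is our own.

    import numpy as np

    def resolvent_abs(x, r):
        # Resolvent J_r = (I + r*B)^(-1) for B = subdifferential of |.| on R:
        # solving x in y + r*sign(y) for y gives the soft-thresholding map.
        return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

    # Numerical check of nonexpansiveness: |J_r(x) - J_r(y)| <= |x - y|.
    rng = np.random.default_rng(0)
    r = 0.7
    x, y = rng.normal(size=1000), rng.normal(size=1000)
    assert np.all(np.abs(resolvent_abs(x, r) - resolvent_abs(y, r)) <= np.abs(x - y) + 1e-12)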

Lemma 2.1

[18]

Let C be a closed convex subset of a real Hilbert space H, $x \in H$, and $z = P_C x$. If $\{x_n\}$ is a sequence in C such that $\omega_w(x_n) \subset C$ and

$$\|x_n - x\| \le \|x - z\|$$

for all $n \ge 1$, then the sequence $\{x_n\}$ converges strongly to z.

Lemma 2.2

[13]

Let $\{\alpha_n\}$ and $\{\beta_n\}$ be nonnegative real sequences, $a \in [0, 1)$ and $b \in \mathbb{R}^{+}$. Assume that, for any $n \in \mathbb{N}$,

$$\alpha_{n+1} \le a \alpha_n + b \beta_n.$$

If $\sum_{n=1}^{\infty} \beta_n < +\infty$, then $\lim_{n \to \infty} \alpha_n = 0$.

Lemma 2.3

[18]

Let C be a closed convex subset of a real Hilbert space H, and $x, y, z \in H$. Then, for given $a \in \mathbb{R}$, the set

$$U = \{v \in C : \|y - v\|^2 \le \|x - v\|^2 + \langle z, v \rangle + a\}$$

is convex and closed.

Lemma 2.4

[19]

Let C be a nonempty closed convex subset of a real Hilbert space H, and let $A : C \to H$ be an operator. If $B : H \to 2^H$ is a maximal monotone operator, then

$$F(J_r(I - rA)) = (A + B)^{-1}(0).$$
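For the reader's convenience, the identity can be verified directly (a standard computation, recorded here): for any $r > 0$ and $x \in C$,

$$x = J_r(x - rAx) \;\Longleftrightarrow\; x - rAx \in x + rBx \;\Longleftrightarrow\; -Ax \in Bx \;\Longleftrightarrow\; 0 \in (A + B)x.$$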

Hybrid projection methods

In this section, we introduce a new iterative hybrid projection method and prove a strong convergence theorem for finding a zero of the sum of an $\alpha$-inverse-strongly monotone (single-valued) operator and a maximal monotone (multi-valued) operator.

Theorem 3.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Suppose that $A : C \to H$ is an $\alpha$-inverse-strongly monotone operator and let $B : H \to 2^H$ be a maximal monotone operator with $D(B) \subset C$ and $(A + B)^{-1}(0) \neq \emptyset$. Define a sequence $\{x_n\}$ by the algorithm

$$\begin{cases} x_0, z_0 \in C, \\ y_n = \alpha_n z_n + (1 - \alpha_n) x_n, \\ z_{n+1} = J_{r_n}(y_n - r_n A y_n), \\ C_n = \{z \in C : \|z_{n+1} - z\|^2 \le \alpha_n \|z_n - z\|^2 + (1 - \alpha_n)\|x_n - z\|^2\}, \\ Q_n = \{z \in C : \langle x_n - z, x_0 - x_n \rangle \ge 0\}, \\ x_{n+1} = P_{C_n \cap Q_n}(x_0), \end{cases} \qquad (4)$$

for all $n \in \mathbb{N} \cup \{0\}$, where $J_{r_n} = (I + r_n B)^{-1}$, and $\{\alpha_n\}$ and $\{r_n\}$ are sequences of positive real numbers with $0 \le \alpha_n \le \beta$ for some $\beta \in [0, \frac{1}{2})$ and $0 < r_n \le 2\alpha$. Then the sequence $\{x_n\}$ converges strongly to $p = P_{(A+B)^{-1}(0)}(x_0)$.

Proof

From Lemma 2.3, we see that $C_n$ is closed and convex for every $n \in \mathbb{N} \cup \{0\}$. First, we show that $(A + B)^{-1}(0) \subset C_n$ for all $n \in \mathbb{N} \cup \{0\}$. Since $A : C \to H$ is an $\alpha$-inverse-strongly monotone operator, $I - r_n A$ is nonexpansive. Indeed,

$$\begin{aligned} \|(I - r_n A)x - (I - r_n A)y\|^2 &= \|(x - y) - r_n(Ax - Ay)\|^2 \\ &= \|x - y\|^2 - 2 r_n \langle x - y, Ax - Ay \rangle + r_n^2 \|Ax - Ay\|^2 \\ &\le \|x - y\|^2 - r_n(2\alpha - r_n)\|Ax - Ay\|^2 \\ &\le \|x - y\|^2. \end{aligned}$$

Let $n \in \mathbb{N} \cup \{0\}$ and $w \in (A + B)^{-1}(0)$. Thus, we have

$$\begin{aligned} \|z_{n+1} - w\|^2 &= \|J_{r_n}(y_n - r_n A y_n) - J_{r_n}(w - r_n A w)\|^2 \\ &\le \|(y_n - r_n A y_n) - (w - r_n A w)\|^2 \\ &\le \|y_n - w\|^2 = \|\alpha_n z_n + (1 - \alpha_n)x_n - w\|^2 \\ &\le \alpha_n \|z_n - w\|^2 + (1 - \alpha_n)\|x_n - w\|^2. \end{aligned}$$

This implies that $w \in C_n$ for all $n \in \mathbb{N} \cup \{0\}$ and hence

$$(A + B)^{-1}(0) \subset C_n, \qquad (5)$$

for all $n \in \mathbb{N} \cup \{0\}$. Next, we prove that $(A + B)^{-1}(0) \subset Q_n$ for all $n \in \mathbb{N} \cup \{0\}$ by mathematical induction. For $n = 0$, we note that

$$(A + B)^{-1}(0) \subset C = Q_0.$$

Suppose that $(A + B)^{-1}(0) \subset Q_k$ for some $k \in \mathbb{N}$. Since $C_k \cap Q_k$ is closed and convex, we can define

$$x_{k+1} = P_{C_k \cap Q_k}(x_0).$$

It follows that

$$\langle x_{k+1} - z, x_0 - x_{k+1} \rangle \ge 0 \quad \text{for all } z \in C_k \cap Q_k.$$

From $(A + B)^{-1}(0) \subset C_k \cap Q_k$, we see that

$$(A + B)^{-1}(0) \subset Q_{k+1}.$$

Therefore

$$(A + B)^{-1}(0) \subset Q_n, \qquad (6)$$

for all $n \in \mathbb{N} \cup \{0\}$. Combining the inclusions (5) and (6), it follows that $\{x_n\}$ is well defined.

Since $(A + B)^{-1}(0)$ is a nonempty closed convex set, there is a unique element $p \in (A + B)^{-1}(0)$ such that

$$p = P_{(A+B)^{-1}(0)}(x_0).$$

From $x_n = P_{Q_n}(x_0)$, we have

$$\|x_n - x_0\| \le \|q - x_0\| \quad \text{for all } q \in Q_n.$$

Since $p \in (A + B)^{-1}(0) \subset Q_n$, we have

$$\|x_n - x_0\| \le \|p - x_0\|, \qquad (7)$$

for any $n \in \mathbb{N} \cup \{0\}$. It follows that $\{x_n\}$ is bounded. As $x_{n+1} \in C_n \cap Q_n \subset Q_n$, we have

$$\langle x_n - x_{n+1}, x_0 - x_n \rangle \ge 0,$$

and hence

$$\begin{aligned} \|x_{n+1} - x_n\|^2 &= \|(x_{n+1} - x_0) - (x_n - x_0)\|^2 \\ &= \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2 - 2\langle x_{n+1} - x_n, x_n - x_0 \rangle \\ &\le \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2. \end{aligned} \qquad (8)$$

By (7) and (8), we have

$$\sum_{n=1}^{N} \|x_{n+1} - x_n\|^2 \le \sum_{n=1}^{N} \bigl(\|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2\bigr) = \|x_{N+1} - x_0\|^2 - \|x_1 - x_0\|^2 \le \|p - x_0\|^2 - \|x_1 - x_0\|^2.$$

Since N is arbitrary, $\sum_{n=1}^{\infty}\|x_{n+1} - x_n\|^2$ is convergent and hence

$$\|x_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (9)$$

Since $x_{n+1} \in C_n \cap Q_n \subset C_n$, we have

$$\begin{aligned} \|z_{n+1} - x_{n+1}\|^2 &\le \alpha_n \|z_n - x_{n+1}\|^2 + (1 - \alpha_n)\|x_n - x_{n+1}\|^2 \\ &= \alpha_n\bigl(\|z_n - x_n\|^2 + 2\langle z_n - x_n, x_n - x_{n+1}\rangle + \|x_n - x_{n+1}\|^2\bigr) + (1 - \alpha_n)\|x_n - x_{n+1}\|^2 \\ &\le 2\alpha_n\bigl(\|z_n - x_n\|^2 + \|x_n - x_{n+1}\|^2\bigr) + (1 - \alpha_n)\|x_n - x_{n+1}\|^2 \\ &= 2\alpha_n\|z_n - x_n\|^2 + (1 + \alpha_n)\|x_n - x_{n+1}\|^2 \\ &\le 2\beta\|z_n - x_n\|^2 + 2\|x_n - x_{n+1}\|^2, \end{aligned}$$

for all $n \in \mathbb{N}$. By Lemma 2.2 and $\beta \in [0, \frac{1}{2})$, we get

$$\|z_n - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (10)$$

In fact, since $\|z_{n+1} - x_n\| \le \|z_{n+1} - x_{n+1}\| + \|x_{n+1} - x_n\|$ for all $n \in \mathbb{N}$, it follows from (9) and (10) that

$$\|z_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (11)$$

Note that

$$\|x_n - y_n\| = \|x_n - \alpha_n z_n - (1 - \alpha_n)x_n\| = \alpha_n\|x_n - z_n\| \le \beta\|x_n - z_n\|,$$

for all $n \in \mathbb{N}$. Thus, we see that

$$\|x_n - y_n\| \to 0 \quad \text{as } n \to \infty. \qquad (12)$$

Moreover, we note that

$$\|J_{r_n}(I - r_n A)x_n - x_n\| \le \|J_{r_n}(I - r_n A)x_n - J_{r_n}(I - r_n A)y_n\| + \|J_{r_n}(I - r_n A)y_n - z_{n+1}\| + \|z_{n+1} - x_n\| \le \|x_n - y_n\| + \|z_{n+1} - x_n\|,$$

for all $n \in \mathbb{N}$. By (11) and (12), we see that

$$\|J_{r_n}(I - r_n A)x_n - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (13)$$

From (13), it follows by the demiclosedness principle (see [20]) that

$$\omega_w(x_n) \subset F(J_{r_n}(I - r_n A)) = (A + B)^{-1}(0).$$

Hence, by Lemma 2.1 and (7), we conclude that the sequence $\{x_n\}$ converges strongly to $p = P_{(A+B)^{-1}(0)}(x_0)$. This completes the proof. □

If we take $A = 0$ and $\alpha_n = 0$ for all $n \in \mathbb{N} \cup \{0\}$ in Theorem 3.1, then we obtain the following result.

Corollary 3.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $B : H \to 2^H$ be a maximal monotone operator with $D(B) \subset C$. Assume that $B^{-1}(0) \neq \emptyset$. Let $\{x_n\}$ be a sequence generated by the following algorithm:

$$\begin{cases} x_0 \in C, \\ z_{n+1} = J_{r_n}(x_n), \\ C_n = \{z \in C : \|z_{n+1} - z\| \le \|x_n - z\|\}, \\ Q_n = \{z \in C : \langle x_n - z, x_0 - x_n \rangle \ge 0\}, \\ x_{n+1} = P_{C_n \cap Q_n}(x_0), \end{cases}$$

for all $n \in \mathbb{N} \cup \{0\}$, where $J_{r_n} = (I + r_n B)^{-1}$ and $\{r_n\}$ is a sequence of positive real numbers with $0 < r_n \le 2\alpha$ for some $\alpha > 0$. Then $x_n \to p = P_{B^{-1}(0)}(x_0)$.

Shrinking projection methods

In this section, we introduce a new iterative shrinking projection method and prove a strong convergence theorem for finding a zero of the sum of an $\alpha$-inverse-strongly monotone (single-valued) operator and a maximal monotone (multi-valued) operator.

Theorem 4.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Suppose that $A : C \to H$ is an $\alpha$-inverse-strongly monotone operator and let $B : H \to 2^H$ be a maximal monotone operator with $D(B) \subset C$ and $(A + B)^{-1}(0) \neq \emptyset$. Define a sequence $\{x_n\}$ by the algorithm

$$\begin{cases} x_0, z_0 \in C_0, \\ y_n = \alpha_n z_n + (1 - \alpha_n) x_n, \\ z_{n+1} = J_{r_n}(y_n - r_n A y_n), \\ C_{n+1} = \{z \in C_n : \|z_{n+1} - z\|^2 \le \alpha_n\|z_n - z\|^2 + (1 - \alpha_n)\|x_n - z\|^2\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \end{cases} \qquad (14)$$

for all $n \in \mathbb{N} \cup \{0\}$, where $C_0 = C$, $J_{r_n} = (I + r_n B)^{-1}$, and $\{\alpha_n\}$ and $\{r_n\}$ are sequences of positive real numbers with $0 \le \alpha_n \le \beta$ for some $\beta \in [0, \frac{1}{2})$ and $0 < r_n \le 2\alpha$. Then the sequence $\{x_n\}$ converges strongly to $p = P_{(A+B)^{-1}(0)}(x_0)$.

Proof

From Lemma 2.3, we see that $C_n$ is closed and convex for every $n \in \mathbb{N} \cup \{0\}$. First, we show that $(A + B)^{-1}(0) \subset C_n$ for all $n \in \mathbb{N} \cup \{0\}$. For $n = 0$, we have

$$(A + B)^{-1}(0) \subset C = C_0.$$

Suppose that $(A + B)^{-1}(0) \subset C_k$ for some $k \in \mathbb{N}$. Since $A : C \to H$ is an $\alpha$-inverse-strongly monotone operator, we see that $I - r_n A$ is nonexpansive. Let $w \in (A + B)^{-1}(0)$. Then $w \in C_k$ and

$$\|z_{k+1} - w\|^2 \le \alpha_k\|z_k - w\|^2 + (1 - \alpha_k)\|x_k - w\|^2.$$

That is, $w \in C_{k+1}$. So, we have

$$(A + B)^{-1}(0) \subset C_n, \qquad (15)$$

for all $n \in \mathbb{N} \cup \{0\}$. It follows that $\{x_n\}$ is well defined.

Since $(A + B)^{-1}(0)$ is a nonempty closed convex set, there is a unique element $p \in (A + B)^{-1}(0)$ such that

$$p = P_{(A+B)^{-1}(0)}(x_0).$$

From $x_n = P_{C_n}(x_0)$, we have

$$\|x_n - x_0\| \le \|q - x_0\| \quad \text{for all } q \in C_n.$$

Since $p \in (A + B)^{-1}(0) \subset C_n$, we have

$$\|x_n - x_0\| \le \|p - x_0\|, \qquad (16)$$

for any $n \in \mathbb{N} \cup \{0\}$. It follows that $\{x_n\}$ is bounded. As $x_{n+1} \in C_{n+1} \subset C_n$ and $x_n = P_{C_n}(x_0)$, we have

$$\langle x_n - x_{n+1}, x_0 - x_n \rangle \ge 0,$$

for all $n \in \mathbb{N}$. This implies that

$$\begin{aligned} \|x_{n+1} - x_n\|^2 &= \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2 - 2\langle x_{n+1} - x_n, x_n - x_0\rangle \\ &\le \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2, \end{aligned} \qquad (17)$$

for all $n \in \mathbb{N}$. From (16) and (17), we have

$$\sum_{n=1}^{N} \|x_{n+1} - x_n\|^2 \le \sum_{n=1}^{N} \bigl(\|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2\bigr) = \|x_{N+1} - x_0\|^2 - \|x_1 - x_0\|^2 \le \|p - x_0\|^2 - \|x_1 - x_0\|^2.$$

Since N is arbitrary, we see that $\sum_{n=1}^{\infty}\|x_{n+1} - x_n\|^2$ is convergent. Thus, we have

$$\|x_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (18)$$

From $x_{n+1} \in C_{n+1}$ and $\alpha_n \in [0, \beta]$, it follows that

$$\begin{aligned} \|z_{n+1} - x_{n+1}\|^2 &\le \alpha_n\|z_n - x_{n+1}\|^2 + (1 - \alpha_n)\|x_n - x_{n+1}\|^2 \\ &\le 2\alpha_n\|z_n - x_n\|^2 + (1 + \alpha_n)\|x_n - x_{n+1}\|^2 \\ &\le 2\beta\|z_n - x_n\|^2 + 2\|x_n - x_{n+1}\|^2, \quad n \in \mathbb{N}. \end{aligned}$$

By Lemma 2.2 and $\beta \in [0, \frac{1}{2})$, we obtain

$$\|z_n - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (19)$$

In fact, since $\|z_{n+1} - x_n\| \le \|z_{n+1} - x_{n+1}\| + \|x_{n+1} - x_n\|$ for all $n \in \mathbb{N}$, it follows from (18) and (19) that

$$\|z_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (20)$$

Note that

$$\|x_n - y_n\| = \|x_n - \alpha_n z_n - (1 - \alpha_n)x_n\| = \alpha_n\|x_n - z_n\| \le \beta\|x_n - z_n\|,$$

for all $n \in \mathbb{N}$. This implies that

$$\|x_n - y_n\| \to 0 \quad \text{as } n \to \infty. \qquad (21)$$

Moreover, we note that

$$\|J_{r_n}(I - r_n A)x_n - x_n\| \le \|J_{r_n}(I - r_n A)x_n - J_{r_n}(I - r_n A)y_n\| + \|J_{r_n}(I - r_n A)y_n - z_{n+1}\| + \|z_{n+1} - x_n\| \le \|x_n - y_n\| + \|z_{n+1} - x_n\|,$$

for all $n \in \mathbb{N}$. By (20) and (21), we see that

$$\|J_{r_n}(I - r_n A)x_n - x_n\| \to 0 \quad \text{as } n \to \infty. \qquad (22)$$

From (22), it follows by the demiclosedness principle (see [20]) that

$$\omega_w(x_n) \subset F(J_{r_n}(I - r_n A)) = (A + B)^{-1}(0).$$

By Lemma 2.1 and (16), we conclude that the sequence $\{x_n\}$ converges strongly to $p = P_{(A+B)^{-1}(0)}(x_0)$. This completes the proof. □

If we take $A = 0$ and $\alpha_n = 0$ for all $n \in \mathbb{N} \cup \{0\}$ in Theorem 4.1, then we obtain the following result.

Corollary 4.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $B : H \to 2^H$ be a maximal monotone operator with $D(B) \subset C$. Assume that $B^{-1}(0) \neq \emptyset$. Let $\{x_n\}$ be a sequence generated by the following algorithm:

$$\begin{cases} x_0 \in C_0, \\ z_{n+1} = J_{r_n}(x_n), \\ C_{n+1} = \{z \in C_n : \|z_{n+1} - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \end{cases}$$

for all $n \in \mathbb{N} \cup \{0\}$, where $C_0 = C$, $J_{r_n} = (I + r_n B)^{-1}$ and $\{r_n\}$ is a sequence of positive real numbers with $0 < r_n \le 2\alpha$ for some $\alpha > 0$. Then $x_n \to p = P_{B^{-1}(0)}(x_0)$.
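A practical remark on implementing the shrinking projection step (our observation, not part of the paper): unlike the hybrid method, the set $C_{n+1}$ in (14) is the intersection of $C_0$ with all constraints generated up to step n, so computing $x_{n+1} = P_{C_{n+1}}(x_0)$ amounts to projecting onto an intersection of sets. When $C = H$, each constraint is a half-space (see Section 5), and one way to realize the projection is Dykstra's algorithm; the sketch below uses function names of our own choosing and assumes the intersection is nonempty, which holds here since $(A + B)^{-1}(0) \subset C_{n+1}$.

    import numpy as np

    def proj_halfspace(z, a, b):
        # Projection of z onto the half-space {x : <a, x> <= b}.
        a = np.asarray(a, dtype=float)
        viol = a @ z - b
        return z if viol <= 0 else z - (viol / (a @ a)) * a

    def proj_intersection(x0, halfspaces, sweeps=500):
        # Dykstra's algorithm for the projection of x0 onto the intersection of
        # half-spaces given as (a, b) pairs; the intersection must be nonempty.
        x = np.asarray(x0, dtype=float).copy()
        increments = [np.zeros_like(x) for _ in halfspaces]
        for _ in range(sweeps):
            for i, (a, b) in enumerate(halfspaces):
                y = proj_halfspace(x + increments[i], a, b)
                increments[i] = x + increments[i] - y
                x = y
        return x

At step n one would collect the half-space data $(u_k, v_k)$, $k \le n$ (in the notation of Section 5), and call proj_intersection(x0, pairs); for the hybrid method of Section 3, by contrast, only the two most recent half-spaces matter, and the closed-form formulas recalled in Section 5 apply.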

Numerical results

In this section, we first follow the ideas of He et al. [21] and Dong and Lu [13]. For $C = H$, we can rewrite (4) in Theorem 3.1 as follows:

$$\begin{cases} x_0, z_0 \in H, \\ y_n = \alpha_n z_n + (1 - \alpha_n) x_n, \\ z_{n+1} = J_{r_n}(y_n - r_n A y_n), \\ u_n = \alpha_n z_n + (1 - \alpha_n) x_n - z_{n+1}, \\ v_n = \bigl(\alpha_n\|z_n\|^2 + (1 - \alpha_n)\|x_n\|^2 - \|z_{n+1}\|^2\bigr)/2, \\ C_n = \{z \in C : \langle u_n, z \rangle \le v_n\}, \\ Q_n = \{z \in C : \langle x_n - z, x_n - x_0 \rangle \le 0\}, \\ x_{n+1} = p_n \quad \text{if } p_n \in Q_n, \\ x_{n+1} = q_n \quad \text{if } p_n \notin Q_n, \end{cases} \qquad (23)$$

where

$$p_n = x_0 - \frac{\langle u_n, x_0 \rangle - v_n}{\|u_n\|^2}\, u_n, \qquad q_n = \left(1 - \frac{\langle x_0 - x_n, x_n - p_n \rangle}{\langle x_0 - x_n, w_n - p_n \rangle}\right) p_n + \frac{\langle x_0 - x_n, x_n - p_n \rangle}{\langle x_0 - x_n, w_n - p_n \rangle}\, w_n, \qquad w_n = x_n - \frac{\langle u_n, x_n \rangle - v_n}{\|u_n\|^2}\, u_n.$$
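The expressions for $u_n$ and $v_n$ arise from expanding the squared norms in the definition of $C_n$ in (4) (with $C = H$); the computation, recorded here for completeness, shows that $C_n$ is a half-space:

$$\|z_{n+1} - z\|^2 \le \alpha_n\|z_n - z\|^2 + (1 - \alpha_n)\|x_n - z\|^2 \;\Longleftrightarrow\; 2\bigl\langle \alpha_n z_n + (1 - \alpha_n)x_n - z_{n+1},\, z \bigr\rangle \le \alpha_n\|z_n\|^2 + (1 - \alpha_n)\|x_n\|^2 - \|z_{n+1}\|^2 \;\Longleftrightarrow\; \langle u_n, z \rangle \le v_n.$$

Accordingly, $p_n$ and $w_n$ are the projections of $x_0$ and $x_n$, respectively, onto the hyperplane $\{z : \langle u_n, z \rangle = v_n\}$, following [13, 21].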

Let $\mathbb{R}^2$ be the two-dimensional Euclidean space with the usual inner product $\langle x, y \rangle = x_1 y_1 + x_2 y_2$ for all $x = (x_1, x_2)^T$, $y = (y_1, y_2)^T \in \mathbb{R}^2$, and norm $\|x\| = \sqrt{x_1^2 + x_2^2}$.

Define the operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ by

$$T(x) = \bigl(0, \tfrac{1}{2}x_1\bigr)^T \quad \text{for all } x = (x_1, x_2) \in \mathbb{R}^2.$$

It is obvious that T is nonexpansive and hence $I - T$ is $\frac{1}{2}$-inverse-strongly monotone (see [17, 22]). Thus the mapping $A = I - T : \mathbb{R}^2 \to \mathbb{R}^2$, given by

$$A(x) = \bigl(x_1, x_2 - \tfrac{1}{2}x_1\bigr)^T \quad \text{for all } x = (x_1, x_2) \in \mathbb{R}^2,$$

is $\frac{1}{2}$-inverse-strongly monotone. Let $W = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 = x_2\}$. Then W is a linear subspace of $\mathbb{R}^2$. Define

$$N_W = \{(x, y) : x \in W \text{ and } y \perp W\}.$$

This implies that $N_W$ is maximal monotone (see [23]). It is easily seen that $(A + N_W)^{-1}(0) \neq \emptyset$. We take $\{r_n\} = \{\frac{1}{n+2}\} \subset (0, 1)$ (note $\alpha = \frac{1}{2}$), so that $\{r_n\}$ is a sequence of positive real numbers in $(0, 2\alpha)$, and $\alpha_n = 0.1$ (note $\beta = 0.4$). Let $x_0 = (4, 3), (-2, 8), (3, -4)$ and $(-1, -3)$ be the initial points, and fix $z_0 = (1, 1)$. Denote

$$E(x) = \frac{\|x_n - J_{r_n}(x_n - r_n A x_n)\|}{\|x_n\|}.$$

Since we do not know the exact value of the projection of $x_0$ onto the set of fixed points of $J_{r_n}(I - r_n A)$, we take $E(x)$ as a relative measure of the convergence of our algorithm. In the numerical experiment, $E(x) < \varepsilon$ is the stopping condition with $\varepsilon = 10^{-7}$. Moreover, Table 1 illustrates the competitive efficiency of the algorithm in this example.

Table 1.

This table illustrates that, in our example, algorithm (23) derived from (4) has competitive efficiency

$x_0$       Iter.   $x = (x_1, x_2)^T$                            $E(x)$
(4, 3)      4520    (1.573198640818142, 1.573198530023523)        3.521317011074167e-08
(-2, 8)     5420    (0.944819548758385, 0.944819526356611)        1.185505467234501e-08
(3, -4)     3307    (99.631392375764780, 99.631402116509490)      4.888391102078766e-08
(-1, -3)    4110    (-0.781555402714756, -0.781555394005797)      5.571556279844247e-09
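For reproducibility, the following Python sketch (our own transcription, not the authors' code) implements iteration (23) for this example. The resolvent $J_{r_n}$ of $N_W$ is computed as the orthogonal projection onto $W = \{x_1 = x_2\}$, a standard fact for the normal cone of a closed subspace that is independent of $r_n$; variable names are ours, and the computed iterates need not reproduce Table 1 to the last digit.

    import numpy as np

    def A(x):
        # A = I - T with T(x) = (0, x1/2); A is 1/2-inverse-strongly monotone.
        return np.array([x[0], x[1] - 0.5 * x[0]])

    def J(y):
        # Resolvent of N_W for W = {x1 = x2}: the orthogonal projection onto W
        # (independent of the parameter r).
        m = 0.5 * (y[0] + y[1])
        return np.array([m, m])

    def hybrid_method(x0, z0, alpha=0.1, eps=1e-7, max_iter=100000):
        x, z = np.asarray(x0, dtype=float), np.asarray(z0, dtype=float)
        x_init = x.copy()
        for n in range(max_iter):
            r = 1.0 / (n + 2)
            E = np.linalg.norm(x - J(x - r * A(x))) / np.linalg.norm(x)  # assumes x != 0
            if E < eps:
                return x, n
            y = alpha * z + (1 - alpha) * x
            z_new = J(y - r * A(y))
            u = alpha * z + (1 - alpha) * x - z_new
            v = 0.5 * (alpha * (z @ z) + (1 - alpha) * (x @ x) - z_new @ z_new)
            # p: projection of x_init onto the hyperplane <u, .> = v (assumes u != 0).
            p = x_init - ((u @ x_init - v) / (u @ u)) * u
            if (x - p) @ (x - x_init) <= 0:   # p lies in Q_n, so x_{n+1} = p_n
                x_next = p
            else:                             # otherwise x_{n+1} = q_n
                w = x - ((u @ x - v) / (u @ u)) * u
                t = ((x_init - x) @ (x - p)) / ((x_init - x) @ (w - p))
                x_next = (1 - t) * p + t * w
            x, z = x_next, z_new
        return x, max_iter

    x_star, iters = hybrid_method(x0=(4, 3), z0=(1, 1))
    print(iters, x_star)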

Conclusions

We have proposed two new iterative algorithms for finding a zero of the sum of two monotone operators by using hybrid projection methods and shrinking projection methods. Strong convergence of the proposed algorithms has been established, and the numerical results show that the hybrid iterative algorithm is effective.

Acknowledgements

The first author would like to thank the Thailand Research Fund for its support through the Royal Golden Jubilee Ph.D. Program under Grant No. PHD/0032/2555, and Naresuan University.

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors contributed equally to this work. Both authors read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Tadchai Yuying, Email: tadchai_99@hotmail.com.

Somyot Plubtieng, Email: Somyotp@nu.ac.th.

References

1. Attouch H, Thera M. A general duality principle for the sum of two operators. J. Convex Anal. 1996;3:1-24.
2. Bauschke HH. A note on the paper by Eckstein and Svaiter on general projective splitting methods for sums of maximal monotone operators. SIAM J. Control Optim. 2009;48:2513-2515. doi: 10.1137/090759690.
3. Chen YQ, Cho YJ, Kumam P. On the maximality of sums of two maximal monotone operators. J. Math. Anal. 2016;7:24-30.
4. Chen GHG, Rockafellar RT. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997;7:421-444. doi: 10.1137/S1052623495290179.
5. Cho SY, Qin X, Wang L. A strong convergence theorem for solutions of zero point problems and fixed point problems. Bull. Iran. Math. Soc. 2014;40:891-910.
6. Lions PL, Mercier B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979;16:964-979. doi: 10.1137/0716071.
7. Mahey P, Pham DT. Partial regularization of the sum of two maximal monotone operators. RAIRO Model. Math. Anal. Num. 1993;27:375-392. doi: 10.1051/m2an/1993270303751.
8. Passty GB. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979;72:383-390. doi: 10.1016/0022-247X(79)90234-8.
9. Svaiter BF. On weak convergence of the Douglas-Rachford method. SIAM J. Control Optim. 2011;49:280-287. doi: 10.1137/100788100.
10. Moudafi A, Thera M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997;97:425-448. doi: 10.1023/A:1022643914538.
11. Nakajo K, Takahashi W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003;279:372-379. doi: 10.1016/S0022-247X(02)00458-4.
12. Saewan S, Kumam P. Computational of generalized projection method for maximal monotone operator and a countable family of relatively quasi-nonexpansive mappings. Optimization. 2015;64:2531-2552. doi: 10.1080/02331934.2013.824444.
13. Dong QL, Lu YY. A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015;2015. doi: 10.1186/s13663-015-0285-6.
14. Takahashi W, Takeuchi Y, Kubota R. Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008;341:276-286. doi: 10.1016/j.jmaa.2007.09.062.
15. Takahashi W. Introduction to Nonlinear and Convex Analysis. Yokohama: Yokohama Publishers; 2009.
16. Kamimura S, Takahashi W. Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory. 2000;106:226-240. doi: 10.1006/jath.2000.3493.
17. Takahashi W. Nonlinear Functional Analysis. Yokohama: Yokohama Publishers; 2000.
18. Martinez-Yanes C, Xu HK. Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 2006;64:2400-2411. doi: 10.1016/j.na.2005.08.018.
19. Aoyama K, Kimura Y, Takahashi W, Toyoda M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007;8:471-489.
20. Goebel K, Kirk WA. Topics in Metric Fixed Point Theory. Cambridge: Cambridge University Press; 1990.
21. He S, Yang C, Duan P. Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 2010;217:4239-4247.
22. Browder FE, Petryshyn WV. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967;20:197-228. doi: 10.1016/0022-247X(67)90085-6.
23. Eckstein J, Bertsekas DP. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992;55:293-318. doi: 10.1007/BF01581204.
