Published in final edited form as: J Optim Theory Appl. 2011 Feb;148(2):318–335. doi: 10.1007/s10957-010-9757-3

The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space

Y. Censor, A. Gibali, S. Reich
PMCID: PMC3073511  NIHMSID: NIHMS232812  PMID: 21490879

Abstract

We present a subgradient extragradient method for solving variational inequalities in Hilbert space. In addition, we propose a modified version of our algorithm that finds a solution of a variational inequality which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for both algorithms.

Keywords: Extragradient method, Nonexpansive mapping, Subgradient, Variational inequalities

1 Introduction

In this paper, we are concerned with the Variational Inequality (VI), which consists in finding a point x*, such that

x* ∈ C and ⟨f(x*), x − x*⟩ ≥ 0 for all x ∈ C, (1)

where H is a real Hilbert space, f : H → H is a given mapping, C ⊆ H is nonempty, closed and convex, and ⟨·, ·⟩ denotes the inner product in H. This problem, denoted by VI(C, f), is a fundamental problem in Variational Analysis and, in particular, in Optimization Theory. Many algorithms for solving the VI are projection algorithms that employ projections onto the feasible set C of the VI, or onto some related set, in order to iteratively reach a solution. In particular, Korpelevich [1] proposed an algorithm for solving the VI in Euclidean space, known as the Extragradient Method (see also Facchinei and Pang [2, Chapter 12]). In each iteration of her algorithm, in order to get the next iterate x^{k+1}, two orthogonal projections onto C are calculated, according to the following iterative step. Given the current iterate x^k, calculate

y^k = P_C(x^k − τ f(x^k)), (2)

and then

x^{k+1} = P_C(x^k − τ f(y^k)), (3)

where τ is some positive number and P_C denotes the Euclidean least distance projection onto C. Figure 1 illustrates the iterative step (2)–(3). The literature on the VI is vast and Korpelevich's extragradient method has received great attention from many authors, who improved it in various ways; see, e.g., [3, 4, 5] and references therein, to name but a few.
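
To make the iterative step (2)–(3) concrete, here is a minimal NumPy sketch of one Korpelevich iteration, assuming a user-supplied projection oracle onto C. The particular mapping f, the set C and all identifiers below are illustrative choices of ours, not part of the original method's specification.

```python
import numpy as np

def extragradient_step(x, f, proj_C, tau):
    """One Korpelevich iteration: two projections onto C, eqs. (2)-(3)."""
    y = proj_C(x - tau * f(x))         # eq. (2)
    x_next = proj_C(x - tau * f(y))    # eq. (3)
    return x_next, y

# Illustrative usage: f(x) = Ax + b with A symmetric positive semidefinite
# (hence f is monotone and Lipschitz with constant ||A||); C is the Euclidean unit ball.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 0.5])
f = lambda x: A @ x + b
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
L = np.linalg.norm(A, 2)               # Lipschitz constant of f
x = np.zeros(2)
for _ in range(200):
    x, y = extragradient_step(x, f, proj_ball, tau=0.9 / L)
```

The step size in this sketch satisfies τ < 1/L, in line with the convergence results discussed below.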

Figure 1. Korpelevich's iterative step.

Though convergence was proved in [1] under the assumptions of Lipschitz continuity and pseudo-monotonicity, there is still the need to calculate two projections onto C. If the set C is simple enough, so that projections onto it are easily executed, then this method is particularly useful; but, if C is a general closed and convex set, then a minimal distance problem has to be solved (twice) in order to obtain the next iterate. This might seriously affect the efficiency of the extragradient method. Therefore, we developed in [6] the subgradient extragradient algorithm in Euclidean space, in which we replace the (second) projection (3) onto C by a projection onto a specific constructible half-space, which is actually one of the subgradient half-spaces as will be explained later. In this paper, we study the subgradient extragradient method for solving the VI in Hilbert space. In addition, we present a modified version of the algorithm, which finds a solution of the VI that is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for both algorithms.

Our paper is organized as follows. In Section 2, we present some preliminary definitions and results. In Section 3, we sketch a proof of the weak convergence of the extragradient method. In Section 4, the subgradient extragradient algorithm is presented, and it is analyzed in Section 5. In Section 6, we modify the subgradient extragradient algorithm, and we analyze the modified algorithm in Section 7.

2 Preliminaries

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ∥ · ∥, and let D be a nonempty, closed and convex subset of H. We write x^k ⇀ x to indicate that the sequence {x^k}_{k=0}^∞ converges weakly to x, and x^k → x to indicate that the sequence {x^k}_{k=0}^∞ converges strongly to x. For each point x ∈ H, there exists a unique nearest point in D, denoted by P_D(x). That is,

∥x − P_D(x)∥ ≤ ∥x − y∥ for all y ∈ D. (4)

The mapping P_D : H → D is called the metric projection of H onto D. It is well known that P_D is a nonexpansive mapping of H onto D, i.e.,

∥P_D(x) − P_D(y)∥ ≤ ∥x − y∥ for all x, y ∈ H. (5)

The metric projection P_D is characterized [7, Section 3] by the following two properties:

P_D(x) ∈ D (6)

and

⟨x − P_D(x), P_D(x) − y⟩ ≥ 0 for all x ∈ H, y ∈ D, (7)

and if D is a hyperplane, then (7) becomes an equality. It follows that

∥x − y∥² ≥ ∥x − P_D(x)∥² + ∥y − P_D(x)∥² for all x ∈ H, y ∈ D. (8)
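
For a quick numerical illustration of (6)–(8) (not part of the original text), take D = [0, 1]^n in ℝ^n, whose metric projection is a componentwise clip; the sketch below checks (7) and (8) for randomly drawn points.

```python
import numpy as np

rng = np.random.default_rng(0)
proj_box = lambda x: np.clip(x, 0.0, 1.0)    # metric projection onto D = [0, 1]^n

x = rng.normal(size=5)                        # an arbitrary point of H = R^5
y = rng.uniform(size=5)                       # an arbitrary point of D
p = proj_box(x)                               # P_D(x); property (6): p lies in D

# Property (7): <x - P_D(x), P_D(x) - y> >= 0 for every y in D.
assert np.dot(x - p, p - y) >= -1e-12
# Property (8): ||x - y||^2 >= ||x - P_D(x)||^2 + ||y - P_D(x)||^2.
assert np.dot(x - y, x - y) >= np.dot(x - p, x - p) + np.dot(y - p, y - p) - 1e-12
```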

We denote by N_D(v) the normal cone of D at v ∈ D, i.e.,

N_D(v) := {d ∈ H | ⟨d, y − v⟩ ≤ 0 for all y ∈ D}. (9)

We also recall that in a real Hilbert space H,

∥λx + (1 − λ)y∥² = λ∥x∥² + (1 − λ)∥y∥² − λ(1 − λ)∥x − y∥², (10)

for all x, y ∈ H and λ ∈ [0, 1].

Definition 2.1 Let B : H → 2^H be a point-to-set operator defined on a real Hilbert space H. B is called a maximal monotone operator iff B is monotone, i.e.,

⟨u − v, x − y⟩ ≥ 0 for all u ∈ B(x) and v ∈ B(y), (11)

and the graph G(B) of B,

G(B) := {(x, u) ∈ H × H | u ∈ B(x)}, (12)

is not properly contained in the graph of any other monotone operator.

It is clear ([8, Theorem 3]) that a monotone mapping B is maximal if and only if, for any (x, u) ∈ H × H, if ⟨u − v, x − y⟩ ≥ 0 for all (y, v) ∈ G(B), then it follows that u ∈ B(x).

The next property is known as the Opial condition [9]. Any Hilbert space has this property.

Condition 2.1 (Opial) For any sequence {x^k}_{k=0}^∞ in H that converges weakly to x (x^k ⇀ x),

lim inf_{k→∞} ∥x^k − x∥ < lim inf_{k→∞} ∥x^k − y∥ for all y ≠ x. (13)

The next lemma was proved in [10, Lemma 3.2].

Lemma 2.1 Let H be a real Hilbert space and let D be a nonempty, closed and convex subset of H. Let the sequence {x^k}_{k=0}^∞ ⊆ H be Fejér-monotone with respect to D, i.e., for every u ∈ D,

∥x^{k+1} − u∥ ≤ ∥x^k − u∥ for all k ≥ 0. (14)

Then {P_D(x^k)}_{k=0}^∞ converges strongly to some z ∈ D.

Notation 2.1 Any closed and convex set D ⊆ H can be represented as

D = {x ∈ H | c(x) ≤ 0}, (15)

where c : H → ℝ is an appropriate convex function.

We denote the subdifferential set of c at a point x by

∂c(x) := {ξ ∈ H | c(y) ≥ c(x) + ⟨ξ, y − x⟩ for all y ∈ H}. (16)

For z ∈ H, take any ξ ∈ ∂c(z) and define

T(z) := {w ∈ H | c(z) + ⟨ξ, w − z⟩ ≤ 0}. (17)

This is a half-space the bounding hyperplane of which separates the set D from the point z if z ∉ int D. Otherwise T(z) = H.
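
The following hedged sketch illustrates (15)–(17): given one subgradient ξ of c at z, the half-space T(z) is described by a single linear inequality, so projecting onto it is a one-line formula. The particular function c (describing the Euclidean unit ball) and all names are our illustrative choices.

```python
import numpy as np

def proj_halfspace(w, a, beta):
    """Project w onto {u : <a, u> <= beta}; assumes a != 0 when the constraint is violated."""
    viol = np.dot(a, w) - beta
    return w if viol <= 0 else w - (viol / np.dot(a, a)) * a

def subgradient_halfspace(c, subgrad_c, z):
    """Return (a, beta) with T(z) = {w : <a, w> <= beta}, i.e., c(z) + <xi, w - z> <= 0 as in (17)."""
    xi = subgrad_c(z)
    return xi, np.dot(xi, z) - c(z)

# Illustrative choice: D = {x : ||x||^2 - 1 <= 0}, so c is differentiable with gradient 2x.
c = lambda x: np.dot(x, x) - 1.0
subgrad_c = lambda x: 2.0 * x

z = np.array([2.0, 0.0])                           # a point outside D
a, beta = subgradient_halfspace(c, subgrad_c, z)   # here T(z) = {w : 4*w[0] <= 5}, separating D from z
w = proj_halfspace(np.array([3.0, 1.0]), a, beta)  # cheap projection onto T(z) instead of onto D
```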

The next lemma is known (see, e.g., [11, Lemma 3.1]).

Lemma 2.2 Let H be a real Hilbert space, {α_k}_{k=0}^∞ a real sequence satisfying 0 < a ≤ α_k ≤ b < 1 for all k ≥ 0, and let {v^k}_{k=0}^∞ and {w^k}_{k=0}^∞ be two sequences in H such that, for some σ ≥ 0,

lim sup_{k→∞} ∥v^k∥ ≤ σ, (18)
lim sup_{k→∞} ∥w^k∥ ≤ σ (19)

and

lim_{k→∞} ∥α_k v^k + (1 − α_k) w^k∥ = σ. (20)

Then

lim_{k→∞} ∥v^k − w^k∥ = 0. (21)

The next fact is known as the Demiclosedness Principle [12].

Demiclosedness Principle. Let H be a real Hilbert space, D a closed and convex subset of H, and let S : D → H be a nonexpansive mapping, i.e.,

∥S(x) − S(y)∥ ≤ ∥x − y∥ for all x, y ∈ D. (22)

Then I − S (where I is the identity operator on H) is demiclosed at y ∈ H, i.e., for any sequence {x^k}_{k=0}^∞ in D such that x^k ⇀ x ∈ D and (I − S)x^k → y, we have (I − S)x = y.

3 The Extragradient Algorithm

In this section we sketch the proof of the weak convergence of Korpelevich's extragradient method, (2)−(3).

We assume the following conditions.

Condition 3.1 The solution set of (1), denoted by SOL(C, f), is nonempty.

Condition 3.2 The mapping f is monotone on C, i.e.,

⟨f(x) − f(y), x − y⟩ ≥ 0 for all x, y ∈ C. (23)

Condition 3.3 The mapping f is Lipschitz continuous on C with constant L > 0, that is,

∥f(x) − f(y)∥ ≤ L∥x − y∥ for all x, y ∈ C. (24)

We will use the same outline in Section 5. The next lemma is a known result which is crucial for the proof of our convergence theorem.

Lemma 3.1 Let {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ be the two sequences generated by the extragradient method (2)–(3), and let u ∈ SOL(C, f). Then, under Conditions 3.1–3.3, we have

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − (1 − τ²L²)∥y^k − x^k∥² for all k ≥ 0. (25)

Proof. See, e.g., [1, Theorem 1, eq. (14)] and [2, Lemma 12.1.10, p. 1117].

Theorem 3.1 Assume that Conditions 3.1–3.3 hold and let τ < 1/L. Then any sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ generated by the extragradient method weakly converge to the same solution u* ∈ SOL(C, f) and, furthermore,

u* = lim_{k→∞} P_{SOL(C,f)}(x^k). (26)

Proof. Fix u ∈ SOL(C, f) and define ρ := 1 − τ²L². Since τ < 1/L, ρ ∈ (0, 1). By (25), we have

ρ∥y^k − x^k∥² ≤ ∥x^k − u∥². (27)

Using (25) with k ← (k − 1), we get

ρ∥y^k − x^k∥² + ρ∥y^{k−1} − x^{k−1}∥² ≤ ∥x^{k−1} − u∥². (28)

Continuing, we get for all integers K ≥ 0,

ρ Σ_{k=0}^{K} ∥y^k − x^k∥² ≤ ∥x^0 − u∥² (29)

and therefore

ρ Σ_{k=0}^{∞} ∥y^k − x^k∥² ≤ ∥x^0 − u∥². (30)

Hence

lim_{k→∞} ∥y^k − x^k∥ = 0. (31)

By Lemma 3.1, the sequence {x^k}_{k=0}^∞ is bounded. Therefore it has at least one weak accumulation point. If x̄ is the weak limit of some subsequence {x^{k_j}}_{j=0}^∞ of {x^k}_{k=0}^∞, then

w-lim_{j→∞} x^{k_j} = x̄ (32)

and

w-lim_{j→∞} y^{k_j} = x̄. (33)

Let

A(v) := f(v) + N_C(v) if v ∈ C, and A(v) := ∅ if v ∉ C, (34)

where N_C(v) is the normal cone of C at v ∈ C (see (9)). It is known that A is a maximal monotone operator and A^{−1}(0) = SOL(C, f). If (v, w) ∈ G(A), then

⟨w, v − x̄⟩ ≥ 0, (35)

and therefore x̄ ∈ A^{−1}(0) = SOL(C, f). The Opial condition now implies that the entire sequence weakly converges to x̄. Finally, if we take

u^k = P_{SOL(C,f)}(x^k), (36)

then by (7) and Lemma 2.1, we see that {u^k}_{k=0}^∞ converges strongly to some u* ∈ SOL(C, f). We also have

⟨x̄ − u*, u* − x̄⟩ ≥ 0, (37)

and hence u* = x̄, which completes the proof.

4 The Subgradient Extragradient Algorithm

Next we present the subgradient extragradient algorithm [6].

Algorithm 4.1 The subgradient extragradient algorithm

Step 0: Select a starting point x^0 ∈ H and τ > 0, and set k = 0.

Step 1: Given the current iterate x^k, compute

y^k = P_C(x^k − τ f(x^k)), (38)

construct the half-space T_k, the bounding hyperplane of which supports C at y^k,

T_k := {w ∈ H | ⟨(x^k − τ f(x^k)) − y^k, w − y^k⟩ ≤ 0}, (39)

and calculate the next iterate

x^{k+1} = P_{T_k}(x^k − τ f(y^k)). (40)

Step 2: If x^k = y^k, then stop. Otherwise, set k ← (k + 1) and return to Step 1.

Remark 4.1 Every convex set C can be represented as a sublevel set of a convex function c : H → ℝ as in (15); so if c is, in addition, differentiable at y^k, then {(x^k − τ f(x^k)) − y^k} = ∂c(y^k) = {∇c(y^k)}. Otherwise, (x^k − τ f(x^k)) − y^k ∈ ∂c(y^k).
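
As an informal illustration (our sketch, not the authors' code), one iteration of Algorithm 4.1 can be written as follows. Only a single projection onto C is required, because the projection onto the half-space T_k in (40) admits the closed form used below.

```python
import numpy as np

def subgrad_extragradient_step(x, f, proj_C, tau):
    """One iteration of Algorithm 4.1, eqs. (38)-(40)."""
    y = proj_C(x - tau * f(x))                 # eq. (38): the only projection onto C
    g = (x - tau * f(x)) - y                   # normal vector of the half-space T_k, eq. (39)
    z = x - tau * f(y)                         # point to be projected onto T_k
    viol = np.dot(g, z - y)
    if viol > 0.0 and np.dot(g, g) > 0.0:      # z lies outside T_k: project onto its bounding hyperplane
        z = z - (viol / np.dot(g, g)) * g      # eq. (40): x^{k+1} = P_{T_k}(x^k - tau f(y^k))
    return z, y                                # Step 2 stops the iteration once x^k equals y^k
```

Replacing the second projection onto C by this inexpensive half-space step is exactly what distinguishes Algorithm 4.1 from the extragradient method (2)–(3).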

Figure 2 illustrates the iterative step of this algorithm.

Figure 2. x^{k+1} is a subgradient projection of the point x^k − τ f(y^k) onto the half-space T_k.

We assume the following condition.

Condition 4.1 The function f is Lipschitz continuous on H with constant L > 0, that is,

∥f(x) − f(y)∥ ≤ L∥x − y∥ for all x, y ∈ H. (41)

5 Convergence of the Subgradient Extragradient Algorithm

In this section we give a complete proof of the weak convergence theorem for Algorithm 4.1, using similar techniques to those sketched in Section 3. First we show that the stopping criterion in Step 2 of Algorithm 4.1 is valid.

Lemma 5.1 If x^k = y^k in Algorithm 4.1, then x^k ∈ SOL(C, f).

Proof. If x^k = y^k, then x^k = P_C(x^k − τ f(x^k)), so x^k ∈ C. By the variational characterization (7) of the metric projection onto C, we have

⟨w − x^k, (x^k − τ f(x^k)) − x^k⟩ ≤ 0 for all w ∈ C, (42)

which implies that

⟨τ f(x^k), w − x^k⟩ ≥ 0 for all w ∈ C. (43)

Since τ > 0, inequality (43) implies that x^k ∈ SOL(C, f).

The next lemma is crucial for the proof of our convergence theorem.

Lemma 5.2 Let {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ be the two sequences generated by Algorithm 4.1 and let u ∈ SOL(C, f). Then, under Conditions 3.1, 3.2 and 4.1, we have

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − (1 − τ²L²)∥y^k − x^k∥² for all k ≥ 0. (44)

Proof. Since u ∈ SOL(C, f), y^k ∈ C and f is monotone, we have

⟨f(y^k) − f(u), y^k − u⟩ ≥ 0 for all k ≥ 0. (45)

Since u ∈ SOL(C, f) and y^k ∈ C, we also have ⟨f(u), y^k − u⟩ ≥ 0, and therefore

⟨f(y^k), y^k − u⟩ ≥ 0 for all k ≥ 0. (46)

So,

⟨f(y^k), x^{k+1} − u⟩ ≥ ⟨f(y^k), x^{k+1} − y^k⟩. (47)

Since x^{k+1} ∈ T_k, it follows from the definition (39) of T_k that

⟨x^{k+1} − y^k, (x^k − τ f(x^k)) − y^k⟩ ≤ 0 (48)

for all k ≥ 0. Thus,

⟨x^{k+1} − y^k, (x^k − τ f(y^k)) − y^k⟩ = ⟨x^{k+1} − y^k, (x^k − τ f(x^k)) − y^k⟩ + τ⟨x^{k+1} − y^k, f(x^k) − f(y^k)⟩ ≤ τ⟨x^{k+1} − y^k, f(x^k) − f(y^k)⟩. (49)

Denoting z^k := x^k − τ f(y^k), we obtain

∥x^{k+1} − u∥² = ∥P_{T_k}(z^k) − u∥² = ⟨P_{T_k}(z^k) − z^k + z^k − u, P_{T_k}(z^k) − z^k + z^k − u⟩ = ∥z^k − u∥² + ∥z^k − P_{T_k}(z^k)∥² + 2⟨P_{T_k}(z^k) − z^k, z^k − u⟩. (50)

Since

2∥z^k − P_{T_k}(z^k)∥² + 2⟨P_{T_k}(z^k) − z^k, z^k − u⟩ = 2⟨z^k − P_{T_k}(z^k), u − P_{T_k}(z^k)⟩ ≤ 0 (51)

for all k ≥ 0, we get

∥z^k − P_{T_k}(z^k)∥² + 2⟨P_{T_k}(z^k) − z^k, z^k − u⟩ ≤ −∥z^k − P_{T_k}(z^k)∥² (52)

for all k ≥ 0. Hence,

∥x^{k+1} − u∥² ≤ ∥z^k − u∥² − ∥z^k − P_{T_k}(z^k)∥² = ∥(x^k − τ f(y^k)) − u∥² − ∥(x^k − τ f(y^k)) − x^{k+1}∥² = ∥x^k − u∥² − ∥x^k − x^{k+1}∥² + 2τ⟨u − x^{k+1}, f(y^k)⟩ ≤ ∥x^k − u∥² − ∥x^k − x^{k+1}∥² + 2τ⟨y^k − x^{k+1}, f(y^k)⟩, (53)

where the last inequality follows from (47). So,

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − ∥x^k − x^{k+1}∥² + 2τ⟨y^k − x^{k+1}, f(y^k)⟩ = ∥x^k − u∥² − ⟨(x^k − y^k) + (y^k − x^{k+1}), (x^k − y^k) + (y^k − x^{k+1})⟩ + 2τ⟨y^k − x^{k+1}, f(y^k)⟩ = ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − x^{k+1}∥² + 2⟨x^{k+1} − y^k, (x^k − τ f(y^k)) − y^k⟩, (54)

and by (49),

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − x^{k+1}∥² + 2τ⟨x^{k+1} − y^k, f(x^k) − f(y^k)⟩. (55)

Using the Cauchy–Schwarz inequality and Condition 4.1, we obtain

2τ⟨x^{k+1} − y^k, f(x^k) − f(y^k)⟩ ≤ 2τL∥x^{k+1} − y^k∥∥x^k − y^k∥. (56)

In addition,

0 ≤ (τL∥x^k − y^k∥ − ∥y^k − x^{k+1}∥)² = τ²L²∥x^k − y^k∥² − 2τL∥x^{k+1} − y^k∥∥x^k − y^k∥ + ∥y^k − x^{k+1}∥². (57)

So,

2τL∥x^{k+1} − y^k∥∥x^k − y^k∥ ≤ τ²L²∥x^k − y^k∥² + ∥y^k − x^{k+1}∥². (58)

Combining the above inequalities and using Condition 4.1, we see that

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − x^{k+1}∥² + 2τL∥x^{k+1} − y^k∥∥x^k − y^k∥ ≤ ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − x^{k+1}∥² + τ²L²∥x^k − y^k∥² + ∥y^k − x^{k+1}∥² = ∥x^k − u∥² − ∥x^k − y^k∥² + τ²L²∥x^k − y^k∥². (59)

Finally, we get

∥x^{k+1} − u∥² ≤ ∥x^k − u∥² − (1 − τ²L²)∥y^k − x^k∥², (60)

which completes the proof.

Theorem 5.1 Assume that Conditions 3.1, 3.2 and 4.1 hold and let τ < 1/L. Then any sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ generated by Algorithm 4.1 weakly converge to the same solution u* ∈ SOL(C, f) and, furthermore,

u* = lim_{k→∞} P_{SOL(C,f)}(x^k). (61)

Proof. Fix u ∈ SOL(C, f) and define ρ := 1 − τ²L². Since τ < 1/L, ρ ∈ (0, 1). By (60), we have

0 ≤ ∥x^k − u∥² − ρ∥y^k − x^k∥², (62)

or

ρ∥y^k − x^k∥² ≤ ∥x^k − u∥². (63)

Using (60) with k ← (k − 1), we get

∥x^k − u∥² ≤ ∥x^{k−1} − u∥² − ρ∥y^{k−1} − x^{k−1}∥², (64)

or

ρ∥y^k − x^k∥² + ρ∥y^{k−1} − x^{k−1}∥² ≤ ∥x^{k−1} − u∥². (65)

Continuing, we get for all integers K ≥ 0,

ρ Σ_{k=0}^{K} ∥y^k − x^k∥² ≤ ∥x^0 − u∥². (66)

Since the sequence {Σ_{k=0}^{K} ∥y^k − x^k∥²}_{K≥0} is monotonically increasing and bounded,

ρ Σ_{k=0}^{∞} ∥y^k − x^k∥² ≤ ∥x^0 − u∥². (67)

Hence

lim_{k→∞} ∥y^k − x^k∥ = 0. (68)

By Lemma 5.2, the sequence {x^k}_{k=0}^∞ is bounded. Therefore, it has at least one weak accumulation point. If x̄ is the weak limit of some subsequence {x^{k_j}}_{j=0}^∞ of {x^k}_{k=0}^∞, then

w-lim_{j→∞} x^{k_j} = x̄ (69)

and

w-lim_{j→∞} y^{k_j} = x̄. (70)

Define the operator A by (34). It is known that A is a maximal monotone operator and A^{−1}(0) = SOL(C, f). If (v, w) ∈ G(A), then, since w ∈ A(v) = f(v) + N_C(v), we get w − f(v) ∈ N_C(v). Hence

⟨w − f(v), v − y⟩ ≥ 0 for all y ∈ C. (71)

On the other hand, by the definition of y^k and (7),

⟨(x^k − τ f(x^k)) − y^k, y^k − v⟩ ≥ 0, (72)

or

⟨(y^k − x^k)/τ + f(x^k), v − y^k⟩ ≥ 0 (73)

for all k ≥ 0. Applying (71) with y = y^{k_j}, we get

⟨w − f(v), v − y^{k_j}⟩ ≥ 0. (74)

Hence,

⟨w, v − y^{k_j}⟩ ≥ ⟨f(v), v − y^{k_j}⟩ ≥ ⟨f(v), v − y^{k_j}⟩ − ⟨(y^{k_j} − x^{k_j})/τ + f(x^{k_j}), v − y^{k_j}⟩ = ⟨f(v) − f(y^{k_j}), v − y^{k_j}⟩ + ⟨f(y^{k_j}) − f(x^{k_j}), v − y^{k_j}⟩ − ⟨(y^{k_j} − x^{k_j})/τ, v − y^{k_j}⟩ ≥ ⟨f(y^{k_j}) − f(x^{k_j}), v − y^{k_j}⟩ − ⟨(y^{k_j} − x^{k_j})/τ, v − y^{k_j}⟩ (75)

and

⟨w, v − y^{k_j}⟩ ≥ ⟨f(y^{k_j}) − f(x^{k_j}), v − y^{k_j}⟩ − ⟨(y^{k_j} − x^{k_j})/τ, v − y^{k_j}⟩. (76)

Taking the limit as j → ∞ in (76), and using (68) and (70) together with Condition 4.1, we obtain

⟨w, v − x̄⟩ ≥ 0, (77)

and since A is a maximal monotone operator, it follows that x̄ ∈ A^{−1}(0) = SOL(C, f). In order to show that the entire sequence converges weakly to x̄, assume that there is another subsequence {x^{m_j}}_{j=0}^∞ of {x^k}_{k=0}^∞ that converges weakly to some x̃ ≠ x̄; by the same argument, x̃ ∈ SOL(C, f). Note that from Lemma 5.2 it follows that the sequence {∥x^k − u∥}_{k=0}^∞ is decreasing for each u ∈ SOL(C, f), so the limits lim_{k→∞} ∥x^k − x̄∥ and lim_{k→∞} ∥x^k − x̃∥ exist. By the Opial condition, we have

lim_{k→∞} ∥x^k − x̄∥ = lim inf_{j→∞} ∥x^{k_j} − x̄∥ < lim inf_{j→∞} ∥x^{k_j} − x̃∥ = lim_{k→∞} ∥x^k − x̃∥ = lim inf_{j→∞} ∥x^{m_j} − x̃∥ < lim inf_{j→∞} ∥x^{m_j} − x̄∥ = lim_{k→∞} ∥x^k − x̄∥, (78)

which is a contradiction; thus x̄ = x̃. This implies that the sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ converge weakly to the same point x̄ ∈ SOL(C, f). Finally, put

u^k = P_{SOL(C,f)}(x^k), (79)

so, by (7) and since x̄ ∈ SOL(C, f),

⟨x̄ − u^k, u^k − x^k⟩ ≥ 0. (80)

By Lemma 2.1, {u^k}_{k=0}^∞ converges strongly to some u* ∈ SOL(C, f). Therefore

⟨x̄ − u*, u* − x̄⟩ ≥ 0 (81)

and hence u* = x̄.

6 The Modified Subgradient Extragradient Algorithm

Next we present the modified subgradient extragradient algorithm, which finds a solution of the VI that is also a fixed point of a given nonexpansive mapping. Let S : H → H be a nonexpansive mapping and denote by Fix(S) its fixed point set, i.e.,

Fix(S) = {x ∈ H | S(x) = x}. (82)

Let {α_k}_{k=0}^∞ ⊂ [c, d] for some c, d ∈ (0, 1).

Algorithm 6.1 The modified subgradient extragradient algorithm

Step 0: Select a starting point x^0 ∈ H and τ > 0, and set k = 0.

Step 1: Given the current iterate x^k, compute

y^k = P_C(x^k − τ f(x^k)), (83)

construct the half-space T_k as in (39) and calculate the next iterate

x^{k+1} = α_k x^k + (1 − α_k) S P_{T_k}(x^k − τ f(y^k)). (84)

Step 2: Set k ← (k + 1) and return to Step 1.
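
A hedged sketch of the iterative step (83)–(84) follows; it reuses the half-space projection of Algorithm 4.1 and then averages with the nonexpansive mapping S. The function names and the way S and α_k are supplied are our illustrative choices, not the authors'.

```python
import numpy as np

def modified_subgrad_extragradient_step(x, f, proj_C, S, alpha, tau):
    """One iteration of Algorithm 6.1, eqs. (83)-(84)."""
    y = proj_C(x - tau * f(x))                 # eq. (83)
    g = (x - tau * f(x)) - y                   # normal vector of T_k, as in (39)
    t = x - tau * f(y)
    viol = np.dot(g, t - y)
    if viol > 0.0 and np.dot(g, g) > 0.0:
        t = t - (viol / np.dot(g, g)) * g      # t^k = P_{T_k}(x^k - tau f(y^k))
    return alpha * x + (1.0 - alpha) * S(t)    # eq. (84): averaging with S
```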

Figure 3 illustrates the iterative step of this algorithm. We assume the following condition.

Figure 3. The iterative step of Algorithm 6.1.

Condition 6.1 Fix(S) ∩ SOL(C, f) ≠ ∅.

7 Convergence of the Modified Subgradient Extragradient Algorithm

In this section we establish a weak convergence theorem for Algorithm 6.1. The outline of its proof is similar to that of [11, Theorem 3.1].

Theorem 7.1 Assume that Conditions 3.2, 4.1 and 6.1 hold and τ < 1/L. Then any sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ generated by Algorithm 6.1 weakly converge to the same point u* ∈ Fix(S) ∩ SOL(C, f) and, furthermore,

u* = lim_{k→∞} P_{Fix(S)∩SOL(C,f)}(x^k). (85)

Proof. Denote t^k := P_{T_k}(x^k − τ f(y^k)) for all k ≥ 0. Let u ∈ Fix(S) ∩ SOL(C, f). Applying (8) with D = T_k, x = x^k − τ f(y^k) and y = u, we obtain

∥t^k − u∥² ≤ ∥x^k − τ f(y^k) − u∥² − ∥x^k − τ f(y^k) − t^k∥² = ∥x^k − u∥² − ∥x^k − t^k∥² + 2τ⟨f(y^k), u − t^k⟩ = ∥x^k − u∥² − ∥x^k − t^k∥² + 2τ(⟨f(y^k) − f(u), u − y^k⟩ + ⟨f(u), u − y^k⟩ + ⟨f(y^k), y^k − t^k⟩). (86)

By Condition 3.2,

⟨f(y^k) − f(u), u − y^k⟩ ≤ 0, (87)

and, since u ∈ SOL(C, f),

⟨f(u), u − y^k⟩ ≤ 0. (88)

So,

∥t^k − u∥² ≤ ∥x^k − u∥² − ∥x^k − t^k∥² + 2τ⟨f(y^k), y^k − t^k⟩ = ∥x^k − u∥² − ∥x^k − y^k∥² − 2⟨x^k − y^k, y^k − t^k⟩ − ∥y^k − t^k∥² + 2τ⟨f(y^k), y^k − t^k⟩ = ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − t^k∥² + 2⟨x^k − τ f(y^k) − y^k, t^k − y^k⟩. (89)

Since t^k ∈ T_k, it follows from the definition (39) of T_k that

⟨(x^k − τ f(x^k)) − y^k, t^k − y^k⟩ ≤ 0, (90)

so

⟨x^k − τ f(y^k) − y^k, t^k − y^k⟩ = ⟨x^k − τ f(x^k) − y^k, t^k − y^k⟩ + τ⟨f(x^k) − f(y^k), t^k − y^k⟩ ≤ τ⟨f(x^k) − f(y^k), t^k − y^k⟩ ≤ τ∥f(x^k) − f(y^k)∥∥t^k − y^k∥ ≤ τL∥x^k − y^k∥∥t^k − y^k∥, (91)

where the last two inequalities follow from the Cauchy–Schwarz inequality and Condition 4.1. Therefore

∥t^k − u∥² ≤ ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − t^k∥² + 2τL∥x^k − y^k∥∥t^k − y^k∥. (92)

Observe that

0 ≤ (∥t^k − y^k∥ − τL∥x^k − y^k∥)² = ∥t^k − y^k∥² − 2τL∥x^k − y^k∥∥t^k − y^k∥ + τ²L²∥x^k − y^k∥², (93)

so,

2τL∥x^k − y^k∥∥t^k − y^k∥ ≤ ∥t^k − y^k∥² + τ²L²∥x^k − y^k∥². (94)

Thus

∥t^k − u∥² ≤ ∥x^k − u∥² − ∥x^k − y^k∥² − ∥y^k − t^k∥² + ∥t^k − y^k∥² + τ²L²∥x^k − y^k∥² = ∥x^k − u∥² − ∥x^k − y^k∥² + τ²L²∥x^k − y^k∥² = ∥x^k − u∥² + (τ²L² − 1)∥x^k − y^k∥² ≤ ∥x^k − u∥², (95)

where the last inequality follows from the fact that τ < 1/L. Using (10), we get

∥x^{k+1} − u∥² = ∥α_k x^k + (1 − α_k)S(t^k) − u∥² = ∥α_k(x^k − u) + (1 − α_k)(S(t^k) − u)∥² = α_k∥x^k − u∥² + (1 − α_k)∥S(t^k) − u∥² − α_k(1 − α_k)∥(x^k − u) − (S(t^k) − u)∥² ≤ α_k∥x^k − u∥² + (1 − α_k)∥S(t^k) − u∥² = α_k∥x^k − u∥² + (1 − α_k)∥S(t^k) − S(u)∥² ≤ α_k∥x^k − u∥² + (1 − α_k)∥t^k − u∥² ≤ α_k∥x^k − u∥² + (1 − α_k)(∥x^k − u∥² + (τ²L² − 1)∥x^k − y^k∥²) = ∥x^k − u∥² + (1 − α_k)(τ²L² − 1)∥x^k − y^k∥² ≤ ∥x^k − u∥², (96)

so

∥x^{k+1} − u∥² ≤ ∥x^k − u∥². (97)

Therefore there exists

lim_{k→∞} ∥x^k − u∥ = σ, (98)

and {x^k}_{k=0}^∞ and {t^k}_{k=0}^∞ are bounded. From the last relations it follows that

(1 − α_k)(1 − τ²L²)∥x^k − y^k∥² ≤ ∥x^k − u∥² − ∥x^{k+1} − u∥², (99)

or

∥x^k − y^k∥² ≤ (∥x^k − u∥² − ∥x^{k+1} − u∥²) / ((1 − α_k)(1 − τ²L²)). (100)

Hence,

lim_{k→∞} ∥x^k − y^k∥ = 0. (101)

In addition, by the definition of y^k and T_k,

∥y^k − t^k∥² = ∥P_C(x^k − τ f(x^k)) − P_{T_k}(x^k − τ f(y^k))∥² = ∥P_{T_k}(x^k − τ f(x^k)) − P_{T_k}(x^k − τ f(y^k))∥² ≤ ∥(x^k − τ f(x^k)) − (x^k − τ f(y^k))∥² = ∥τ f(y^k) − τ f(x^k)∥² ≤ τ²L²∥y^k − x^k∥², (102)

where the last inequality follows from Condition 4.1. So,

∥y^k − t^k∥² ≤ τ²L²∥y^k − x^k∥², (103)

and by (101) we get

lim_{k→∞} ∥y^k − t^k∥ = 0. (104)

By the triangle inequality,

∥x^k − t^k∥ ≤ ∥x^k − y^k∥ + ∥y^k − t^k∥, (105)

so by (101) and (104), we have

lim_{k→∞} ∥x^k − t^k∥ = 0. (106)

Since {x^k}_{k=0}^∞ is bounded, it has a subsequence {x^{k_j}}_{j=0}^∞ which converges weakly to some x̄ ∈ H. We now show that x̄ ∈ Fix(S) ∩ SOL(C, f). Define the operator A as in (34). By using arguments similar to those used in the proof of Theorem 5.1, we get that x̄ ∈ A^{−1}(0) = SOL(C, f). It remains to show that x̄ ∈ Fix(S). To this end, let u ∈ Fix(S) ∩ SOL(C, f) as before. Since S is nonexpansive, we get from (95) that

∥S(t^k) − u∥ = ∥S(t^k) − S(u)∥ ≤ ∥t^k − u∥ ≤ ∥x^k − u∥. (107)

By (98),

lim sup_{k→∞} ∥S(t^k) − u∥ ≤ σ. (108)

Furthermore,

lim_{k→∞} ∥α_k x^k + (1 − α_k)S(t^k) − u∥ = lim_{k→∞} ∥α_k(x^k − u) + (1 − α_k)(S(t^k) − u)∥ = lim_{k→∞} ∥x^{k+1} − u∥ = σ. (109)

So, applying Lemma 2.2 (with v^k := x^k − u and w^k := S(t^k) − u), we obtain

lim_{k→∞} ∥S(t^k) − x^k∥ = 0. (110)

Since

∥S(x^k) − x^k∥ = ∥S(x^k) − S(t^k) + S(t^k) − x^k∥ ≤ ∥S(x^k) − S(t^k)∥ + ∥S(t^k) − x^k∥ ≤ ∥x^k − t^k∥ + ∥S(t^k) − x^k∥, (111)

it follows from (106) and (110) that

lim_{k→∞} ∥S(x^k) − x^k∥ = 0. (112)

Since S is nonexpansive on H, x^{k_j} ⇀ x̄ and

lim_{j→∞} ∥(I − S)(x^{k_j})∥ = lim_{j→∞} ∥x^{k_j} − S(x^{k_j})∥ = 0, (113)

we obtain by the Demiclosedness Principle that (I − S)(x̄) = 0, which means that x̄ ∈ Fix(S). Now, again by using arguments similar to those used in the proof of Theorem 5.1, we get that the entire sequence converges weakly to x̄. Therefore the sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ converge weakly to x̄ ∈ Fix(S) ∩ SOL(C, f). Finally, put

u^k = P_{Fix(S)∩SOL(C,f)}(x^k). (114)

Since x̄ ∈ Fix(S) ∩ SOL(C, f), it follows from (7) that

⟨x̄ − u^k, u^k − x^k⟩ ≥ 0. (115)

By Lemma 2.1, {u^k}_{k=0}^∞ converges strongly to some u* ∈ Fix(S) ∩ SOL(C, f). Therefore

⟨x̄ − u*, u* − x̄⟩ ≥ 0, (116)

and hence x̄ = u*.

Remark 7.1 In Algorithm 6.1 we assumed that S is a nonexpansive mapping on H. If it is defined only on C, we can replace it by S̃ := S ∘ P_C, which is a nonexpansive mapping on all of H. In this case the iterative step is as follows:

y^k = P_C(x^k − τ f(x^k)),

construct the half-space T_k as in (39) and calculate the next iterate

x^{k+1} = α_k x^k + (1 − α_k) S̃ P_{T_k}(x^k − τ f(y^k)). (117)
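
In code, the replacement S̃ = S ∘ P_C of Remark 7.1 is simply a composition; the following one-liner (an illustration of ours, names hypothetical) wraps a mapping S defined only on C into a mapping defined on all of H.

```python
def compose_with_projection(S, proj_C):
    """Return S_tilde = S o P_C; it is nonexpansive on all of H whenever S is nonexpansive on C."""
    return lambda x: S(proj_C(x))
```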

8 Conclusions

In this paper we proposed two subgradient extragradient algorithms for solving variational inequalities in Hilbert space and established weak convergence theorems for both of them. The second algorithm finds a solution of a variational inequality which is also a fixed point of a given nonexpansive mapping.

Acknowledgments

This work was partially supported by Award Number R01HL070472 from the National Heart, Lung and Blood Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung and Blood Institute or the National Institutes of Health. The third author was partially supported by the Israel Science Foundation (Grant 647/07), by the Fund for the Promotion of Research at the Technion and by the Technion President's Research Fund.

Footnotes

Communicated by B.T. Polyak

AMS Classification 65K15 · 58E35

References

  • [1] Korpelevich GM. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody. 1976;12:747–756.
  • [2] Facchinei F, Pang JS. Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume I and Volume II. Springer-Verlag; New York: 2003.
  • [3] Iusem AN, Svaiter BF. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization. 1997;42:309–321.
  • [4] Khobotov EN. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1989;27:120–127.
  • [5] Solodov MV, Svaiter BF. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999;37:765–776.
  • [6] Censor Y, Gibali A, Reich S. Two extensions of Korpelevich's extra-gradient method for solving the variational inequality problem in Euclidean space. Technical Report, 2010.
  • [7] Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Non-expansive Mappings. Marcel Dekker; New York and Basel: 1984.
  • [8] Rockafellar RT. On the maximality of sums of nonlinear monotone operators. Trans. Amer. Math. Soc. 1970;149:75–88.
  • [9] Opial Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 1967;73:591–597.
  • [10] Takahashi W, Toyoda M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003;118:417–428.
  • [11] Nadezhkina N, Takahashi W. Weak convergence theorem by an extra-gradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006;128:191–201.
  • [12] Browder FE. Fixed point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA. 1965;53:1272–1276. doi: 10.1073/pnas.53.6.1272.
