Abstract
In this paper, we introduce two general iterative methods (one implicit method and one explicit method) for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Then we establish strong convergence of the proposed implicit and explicit iterative methods to a solution of the GSVI with the above constraints, which is the unique solution of a certain variational inequality. The results presented in this paper improve, extend, and develop the corresponding results in the earlier and recent literature.
Keywords: General iterative method, General system of variational inequalities, Continuous monotone mapping, Continuous pseudocontractive mapping, Variational inequality, Generalized mixed equilibrium problem
Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. We denote by $P_C$ the metric projection of H onto C and by $\operatorname{Fix}(S)$ the set of fixed points of a mapping S. Recall that a mapping $S : C \to C$ is nonexpansive if $\|Sx - Sy\| \le \|x - y\|$, $\forall x, y \in C$. A mapping $T : C \to C$ is called pseudocontractive if
$$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2, \quad \forall x, y \in C.$$
This inequality can be equivalently rewritten as
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C,$$
where I is the identity mapping.
$T : C \to C$ is said to be k-strictly pseudocontractive if there exists a constant $k \in [0, 1)$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
A mapping $T : C \to C$ is said to be l-Lipschitzian if there exists a constant $l > 0$ such that
$$\|Tx - Ty\| \le l\|x - y\|, \quad \forall x, y \in C.$$
A mapping $F : C \to H$ is called monotone if
$$\langle Fx - Fy, x - y \rangle \ge 0, \quad \forall x, y \in C,$$
and F is called α-inverse-strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Fx - Fy, x - y \rangle \ge \alpha\|Fx - Fy\|^2, \quad \forall x, y \in C.$$
If F is an α-inverse-strongly monotone mapping, then it is obvious that F is $\frac{1}{\alpha}$-Lipschitz continuous, that is, $\|Fx - Fy\| \le \frac{1}{\alpha}\|x - y\|$ for all $x, y \in C$.
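This Lipschitz estimate is immediate from the Cauchy–Schwarz inequality: for all $x, y \in C$,
$$\alpha\|Fx - Fy\|^2 \le \langle Fx - Fy, x - y \rangle \le \|Fx - Fy\|\,\|x - y\|,$$
so dividing by $\|Fx - Fy\|$ (when nonzero) gives $\|Fx - Fy\| \le \frac{1}{\alpha}\|x - y\|$.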
A mapping $F : C \to H$ is called β-strongly monotone if there exists a constant $\beta > 0$ such that
$$\langle Fx - Fy, x - y \rangle \ge \beta\|x - y\|^2, \quad \forall x, y \in C.$$
A linear operator $A : H \to H$ is said to be strongly positive on H if there exists a constant $\bar{\gamma} > 0$ such that
$$\langle Ax, x \rangle \ge \bar{\gamma}\|x\|^2, \quad \forall x \in H.$$
Let $F : C \to H$ be a mapping. The classical variational inequality problem (VIP) is to find $x^* \in C$ such that

$$\langle Fx^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.1}$$

We denote the set of solutions of VIP (1.1) by $\operatorname{VI}(C, F)$.
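By the projection characterization, $x^*$ solves VIP (1.1) if and only if $x^* = P_C(x^* - \lambda Fx^*)$ for any $\lambda > 0$, which suggests a projected fixed-point iteration. The following Python sketch is only a toy illustration of this equivalence (the box constraint, the map $F(x) = x - 2$, and the helper names are our own choices, not from the paper):

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection P_C onto the box C = [lo, hi] (componentwise clamp)."""
    return np.clip(x, lo, hi)

def solve_vip(F, lo, hi, lam=0.5, tol=1e-10, max_iter=10_000):
    """Projected fixed-point iteration x <- P_C(x - lam*F(x)) for VIP (1.1)."""
    x = np.zeros_like(lo, dtype=float)
    for _ in range(max_iter):
        x_new = project_box(x - lam * F(x), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# F(x) = x - 2 is 1-inverse-strongly monotone; on C = [0, 1] the unique
# VIP solution is the boundary point x* = 1.
x_star = solve_vip(lambda x: x - 2.0, np.array([0.0]), np.array([1.0]))
```

Here $\lambda \le 2\alpha$ guarantees that $x \mapsto x - \lambda Fx$ is nonexpansive (cf. Proposition 2.2 below), so the composed iteration is well behaved.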
In 2008, Ceng et al. [1] considered the following general system of variational inequalities (GSVI) of finding $(x^*, y^*) \in C \times C$ such that

$$\begin{cases} \langle \lambda F_1 y^* + x^* - y^*, x - x^* \rangle \ge 0, & \forall x \in C, \\ \langle \nu F_2 x^* + y^* - x^*, x - y^* \rangle \ge 0, & \forall x \in C, \end{cases} \tag{1.2}$$

where $F_1, F_2 : C \to H$ are α-inverse-strongly monotone and β-inverse-strongly monotone, respectively, and $\lambda > 0$ and $\nu > 0$ are two constants. Many iterative methods have been developed for solving GSVI (1.2); see [2–7] and the references therein.
Subsequently, Alofi et al. [8], building on the composite iterative methods of Ceng et al. [9] and Jung [10], introduced two composite iterative algorithms for solving GSVI (1.2) together with a fixed point problem, and showed strong convergence of the proposed algorithms to a common solution of these two problems.
Very recently, Kong et al. [11] established the strong convergence of two hybrid steepest-descent schemes to the same solution of GSVI (1.2), which is also a common solution of finitely many variational inclusions and a minimization problem.
Lemma 1.1
(see [12, Proposition 3.1])
Let C be a nonempty closed convex subset of a real Hilbert space H. For given $\bar{x}, \bar{y} \in C$, $(\bar{x}, \bar{y})$ is a solution of GSVI (1.3) for continuous monotone mappings $F_1, F_2 : C \to H$ if and only if $\bar{x}$ is a fixed point of the composite $R$ of the nonexpansive resolvent mappings associated with $F_1$ (with parameter λ) and $F_2$ (with parameter ν) in the sense of Lemma 2.7 below, where $\bar{y}$ is the image of $\bar{x}$ under the resolvent associated with $F_2$.
For simplicity, we denote by $\operatorname{Fix}(R)$ the fixed point set of the mapping R.
In the meantime, inspired by Ceng et al. [1], Jung [12] introduced a general system of variational inequalities (GSVI) for two continuous monotone mappings $F_1, F_2 : C \to H$ of finding $(x^*, y^*) \in C \times C$ such that

$$\begin{cases} \langle \lambda F_1 y^* + x^* - y^*, x - x^* \rangle \ge 0, & \forall x \in C, \\ \langle \nu F_2 x^* + y^* - x^*, x - y^* \rangle \ge 0, & \forall x \in C, \end{cases} \tag{1.3}$$

where $\lambda, \nu > 0$ are two constants. In order to find an element of the corresponding solution set, he proposed one implicit algorithm generating a net :
| 1.4 |
with and , and an explicit algorithm generating a sequence :
| 1.5 |
with , , , and any initial guess, where for , and for . Moreover, he established strong convergence of the proposed iterative algorithms to an element , which uniquely solves the variational inequality
On the other hand, the generalized mixed equilibrium problem (GMEP) is to find $x \in C$ such that

$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Bx, y - x \rangle \ge 0, \quad \forall y \in C, \tag{1.6}$$

where $\Theta : C \times C \to \mathbb{R}$ is a bifunction, $\varphi : C \to \mathbb{R}$ is a function, and $B : C \to H$ is a nonlinear mapping. We denote the set of solutions of GMEP (1.6) by $\operatorname{GMEP}(\Theta, \varphi, B)$. GMEP (1.6) is very general in the sense that it includes many problems as special cases, namely optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. For different aspects and solution methods, we refer to [13–18] and the references therein.
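For instance, writing the GMEP data as a bifunction Θ, a function φ, and a mapping B (the notation assumed here), two standard special cases of (1.6) are
$$\Theta \equiv 0,\ \varphi \equiv 0:\quad \langle Bx, y - x \rangle \ge 0, \quad \forall y \in C,$$
which is VIP (1.1) with $F = B$, and
$$B \equiv 0,\ \varphi \equiv 0:\quad \Theta(x, y) \ge 0, \quad \forall y \in C,$$
which is the classical equilibrium problem.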
In this paper, we introduce implicit and explicit iterative methods for finding a solution of GSVI (1.3) with solutions belonging also to the common solution set of finitely many generalized mixed equilibrium problems and the fixed point set of a continuous pseudocontractive mapping T. First, GSVI (1.3) and each generalized mixed equilibrium problem are both transformed into fixed point problems of nonexpansive mappings. Then we establish strong convergence of the proposed iterative methods to an element of , which is the unique solution of a certain variational inequality.
Preliminaries and lemmas
Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. We write $x_n \to x$ and $x_n \rightharpoonup x$ to indicate the strong convergence and the weak convergence of the sequence $\{x_n\}$ to x, respectively.
For every point $x \in H$, there exists a unique nearest point in C, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
$P_C$ is called the metric projection of H onto C. It is well known that $P_C$ is nonexpansive and is characterized by the property

$$u = P_C x \iff \langle x - u, y - u \rangle \le 0, \quad \forall y \in C. \tag{2.1}$$
In a Hilbert space H, the following equality holds:

$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2, \quad \forall x, y \in H,\ \lambda \in [0, 1]. \tag{2.2}$$
The following lemma is an immediate consequence of the inner product.
Lemma 2.1
In a real Hilbert space H, there holds the following inequality:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle, \quad \forall x, y \in H.$$
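Both the convex-combination identity (in the form of (2.2) assumed here) and the inequality of Lemma 2.1 are easy to sanity-check numerically; the following Python snippet is a toy verification with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
lam = 0.3

# Convex-combination identity, the form of (2.2) assumed here:
# ||lam*x + (1-lam)*y||^2 = lam||x||^2 + (1-lam)||y||^2 - lam(1-lam)||x-y||^2
lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
       - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
assert abs(lhs - rhs) < 1e-10

# Lemma 2.1: ||x + y||^2 <= ||x||^2 + 2<y, x + y>.
assert np.linalg.norm(x + y) ** 2 <= np.linalg.norm(x) ** 2 + 2 * np.dot(y, x + y) + 1e-10
```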
Next we list some elementary conclusions for the MEP.
It is first assumed, as in [19], that $\Theta : C \times C \to \mathbb{R}$ is a bifunction satisfying conditions (A1)–(A4) and $\varphi : C \to \mathbb{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where
- (A1) $\Theta(x, x) = 0$ for all $x \in C$;
- (A2) Θ is monotone, i.e., $\Theta(x, y) + \Theta(y, x) \le 0$ for any $x, y \in C$;
- (A3) Θ is upper-hemicontinuous, i.e., for each $x, y, z \in C$,
$$\limsup_{t \to 0^+} \Theta(tz + (1 - t)x, y) \le \Theta(x, y);$$
- (A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;
- (B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x \rangle < 0;$$
- (B2) C is a bounded set.
Proposition 2.1
([19])
Assume that $\Theta : C \times C \to \mathbb{R}$ satisfies (A1)–(A4), and let $\varphi : C \to \mathbb{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r : H \to C$ as follows:
$$T_r(x) = \Big\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \Big\}$$
for all $x \in H$. Then the following hold:
- (i) for each $x \in H$, $T_r(x)$ is nonempty and single-valued;
- (ii) $T_r$ is firmly nonexpansive, that is, for any $x, y \in H$,
$$\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y \rangle;$$
- (iii) $\operatorname{Fix}(T_r) = \operatorname{MEP}(\Theta, \varphi)$;
- (iv) $\operatorname{MEP}(\Theta, \varphi)$ is closed and convex;
- (v) $\|T_s x - T_t x\|^2 \le \frac{s - t}{s}\langle T_s x - T_t x, T_s x - x \rangle$ for all $s, t > 0$ and $x \in H$.
Proposition 2.2
Let $F : C \to H$ be an α-inverse-strongly monotone mapping. Then, for all $x, y \in C$ and $\lambda \ge 0$, one has
$$\|(I - \lambda F)x - (I - \lambda F)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Fx - Fy\|^2.$$
In particular, if $0 \le \lambda \le 2\alpha$, then $I - \lambda F$ is a nonexpansive mapping.
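Proposition 2.2 follows by expanding the square and using inverse-strong monotonicity:
$$\begin{aligned}
\|(I - \lambda F)x - (I - \lambda F)y\|^2
&= \|x - y\|^2 - 2\lambda\langle Fx - Fy, x - y \rangle + \lambda^2\|Fx - Fy\|^2 \\
&\le \|x - y\|^2 - 2\lambda\alpha\|Fx - Fy\|^2 + \lambda^2\|Fx - Fy\|^2 \\
&= \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Fx - Fy\|^2,
\end{aligned}$$
and the last term is nonpositive precisely when $0 \le \lambda \le 2\alpha$.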
We will use the following lemmas for the proof of our main results in the sequel.
Lemma 2.2
([20])
Let $\{s_n\}$ be a sequence of nonnegative real numbers satisfying
$$s_{n+1} \le (1 - \lambda_n)s_n + \lambda_n\delta_n + r_n, \quad n \ge 0,$$
where $\{\lambda_n\}$, $\{\delta_n\}$, and $\{r_n\}$ satisfy the following conditions:
- (i) $\{\lambda_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty}\lambda_n = \infty$ or, equivalently, $\prod_{n=0}^{\infty}(1 - \lambda_n) = 0$;
- (ii) $\limsup_{n \to \infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}\lambda_n|\delta_n| < \infty$;
- (iii) $r_n \ge 0$ ($n \ge 0$), $\sum_{n=0}^{\infty} r_n < \infty$.
Then $\lim_{n \to \infty} s_n = 0$.
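As a quick numerical illustration of Lemma 2.2 (with the recursion written as $s_{n+1} = (1 - \lambda_n)s_n + \lambda_n\delta_n + r_n$ and the sequences $\lambda_n = 1/(n+1)$, $\delta_n = 1/n$, $r_n = 10^{-3}n^{-2}$ chosen by us to satisfy (i)–(iii)), the following Python snippet runs the worst case (equality) and observes $s_n \to 0$:

```python
# Worst-case simulation of s_{n+1} <= (1 - lam_n) s_n + lam_n*delta_n + r_n.
s = 1.0
for n in range(1, 200_001):
    lam = 1.0 / (n + 1)          # lam_n in [0, 1], sum lam_n = infinity
    delta = 1.0 / n              # limsup delta_n <= 0 (indeed delta_n -> 0)
    r = 1e-3 / n ** 2            # r_n >= 0 with sum r_n < infinity
    s = (1 - lam) * s + lam * delta + r
# After 2*10^5 steps, s has decayed from 1.0 to well below 10^-3.
```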
Lemma 2.3
(Demiclosedness principle [21])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $S : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(S) \neq \emptyset$. Then the mapping $I - S$ is demiclosed. That is, if $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup x$ and $(I - S)x_n \to y$, then $(I - S)x = y$. Here I is the identity mapping of H.
Lemma 2.4
([22])
Let H be a real Hilbert space. Let $A : H \to H$ be a strongly positive bounded linear operator with a constant $\bar{\gamma} > 0$. Then
$$\langle Ax - Ay, x - y \rangle \ge \bar{\gamma}\|x - y\|^2, \quad \forall x, y \in H.$$
That is, A is strongly monotone with a constant $\bar{\gamma}$.
Lemma 2.5
([22])
Assume that $A : H \to H$ is a strongly positive bounded linear operator with a coefficient $\bar{\gamma} > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\bar{\gamma}$.
Lemma 2.6
([23])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F : C \to H$ be a ρ-Lipschitzian and η-strongly monotone mapping with constants $\rho, \eta > 0$. Let $0 < \mu < \frac{2\eta}{\rho^2}$ and $0 < t < 1$. Then the mapping $I - t\mu F$ is a contractive mapping with constant $1 - t\tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\rho^2)}$.
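The contraction estimate of Lemma 2.6 can be checked numerically on a toy map. Below, $F(x) = 2x + \sin x$ (componentwise) is our own example, which is η-strongly monotone with η = 1 and ρ-Lipschitzian with ρ = 3, and τ is the Yamada-type constant $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\rho^2)}$ assumed in the lemma:

```python
import numpy as np

def F(x):
    # Toy eta-strongly monotone (eta = 1), rho-Lipschitzian (rho = 3) map.
    return 2.0 * x + np.sin(x)

rho, eta = 3.0, 1.0
mu, t = 0.2, 0.7                 # 0 < mu < 2*eta/rho**2 = 2/9 and 0 < t < 1
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * rho ** 2))

def G(x):
    # The mapping I - t*mu*F from Lemma 2.6.
    return x - t * mu * F(x)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    # Contraction bound ||G(x) - G(y)|| <= (1 - t*tau) ||x - y||.
    assert np.linalg.norm(G(x) - G(y)) <= (1 - t * tau) * np.linalg.norm(x - y) + 1e-12
```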
Lemma 2.7
([24])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F : C \to H$ be a continuous monotone mapping. Then, for $r > 0$ and $x \in H$, there exists $z \in C$ such that
$$\langle Fz, y - z \rangle + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \quad \forall y \in C.$$
For $r > 0$ and $x \in H$, define $F_r : H \to C$ by
$$F_r x = \Big\{ z \in C : \langle Fz, y - z \rangle + \frac{1}{r}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \Big\}.$$
Then the following hold:
- (i) $F_r$ is single-valued;
- (ii) $F_r$ is firmly nonexpansive, that is,
$$\|F_r x - F_r y\|^2 \le \langle F_r x - F_r y, x - y \rangle, \quad \forall x, y \in H;$$
- (iii) $\operatorname{Fix}(F_r) = \operatorname{VI}(C, F)$;
- (iv) $\operatorname{VI}(C, F)$ is a closed convex subset of C.
Lemma 2.8
([24])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $T : C \to C$ be a continuous pseudocontractive mapping. Then, for $r > 0$ and $x \in H$, there exists $z \in C$ such that
$$\langle Tz, y - z \rangle - \frac{1}{r}\langle y - z, (1 + r)z - x \rangle \le 0, \quad \forall y \in C.$$
For $r > 0$ and $x \in H$, define $T_r : H \to C$ by
$$T_r x = \Big\{ z \in C : \langle Tz, y - z \rangle - \frac{1}{r}\langle y - z, (1 + r)z - x \rangle \le 0,\ \forall y \in C \Big\}.$$
Then the following hold:
- (i) $T_r$ is single-valued;
- (ii) $T_r$ is firmly nonexpansive, that is,
$$\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y \rangle, \quad \forall x, y \in H;$$
- (iii) $\operatorname{Fix}(T_r) = \operatorname{Fix}(T)$;
- (iv) $\operatorname{Fix}(T)$ is a closed convex subset of C.
Main results
Throughout this section, we always assume the following:
is a -inverse-strongly monotone mapping for each ;
is a bifunction satisfying conditions (A1)–(A4) for each ;
is a proper lower semicontinuous and convex function with restriction (B1) or (B2) for each ;
is a strongly positive linear bounded self-adjoint operator with a constant ;
is l-Lipschitzian with constant ;
is a ρ-Lipschitzian and η-strongly monotone mapping with constants and ;
constants μ, l, τ, and γ satisfy and , where ;
are continuous monotone mappings and is a continuous pseudocontractive mapping such that ;
- , where are defined as follows:
for , , , and ; - , where are defined as follows:
for , , and ; - is a mapping defined by
for , , and ; - is a mapping defined by
for , and ; - is a mapping defined by
for and ; - is a mapping defined by
for and .
By Proposition 2.1 and Lemmas 2.7 and 2.8, we note that , , , , , , , and are nonexpansive, , and . So it is known that the composite mappings and are nonexpansive. Also, we note that by Lemma 1.1.
In this section, for , and , we put
and .
We now introduce the first general iterative scheme that generates a net in an implicit way:
| 3.1 |
where and .
We prove the strong convergence of as to a point , which is a unique solution to the VI
| 3.2 |
In the meantime, we also propose the second general iterative scheme that generates a sequence in an explicit way:
| 3.3 |
where and is an arbitrary initial guess, and establish the strong convergence of as to the same point , which is the unique solution to VI (3.2).
Next, for and , consider a mapping defined by
It is easy to see that is a contractive mapping with constant . Indeed, by Propositions 2.1 and 2.2 and Lemmas 2.5 and 2.6, we have
Since , and , it follows that , which together with yields . Hence is a contractive mapping. By the Banach contraction principle, has a unique fixed point, denoted by , which uniquely solves the fixed point equation (3.1).
We summarize the basic properties of .
Theorem 3.1
Let be defined via (3.1). Then
-
(i)
is bounded for ;
-
(ii)
, , and provided ;
-
(iii)
is locally Lipschitzian provided is locally Lipschitzian, are locally Lipschitzian, and is locally Lipschitzian for each ;
-
(iv)
defines a continuous path from into H provided is continuous, are continuous, and is continuous for each .
Proof
Let , , and . Take . Then by Lemma 2.8(iii), () by Proposition 2.1(iii), and by Lemma 1.1.
(i) Utilizing Proposition 2.1(ii) and Proposition 2.2, we have
| 3.4 |
Moreover, it is easy from the nonexpansivity of to see that
which together with the nonexpansivity of and (3.4) implies that
| 3.5 |
By (3.5), we have
So, it follows that
Hence is bounded and so are , , , , and .
(ii) By the definition of , we have
using the boundedness of , , and in the proof of assertion (i). That is,
| 3.6 |
In view of (3.5) and Lemma 2.7(ii), we get
which immediately yields
From (3.6) and the boundedness of and , we have
| 3.7 |
Again from (3.5) and Lemma 2.7(ii), we obtain
which hence leads to
Again from (3.6) and the boundedness of and , we have
| 3.8 |
So it follows from (3.7) and (3.8) that
That is,
| 3.9 |
Furthermore, from (3.5) and Proposition 2.1(ii) and Proposition 2.2, it follows that
which together with for implies that
From (3.6) and the boundedness of and , we have
| 3.10 |
Also, by Proposition 2.1(ii), we obtain that, for each ,
which immediately implies that
This together with (3.5) leads to
which hence implies
From (3.6) and the boundedness of and , we have
which together with (3.10) implies that, for each ,
| 3.11 |
Note that
From (3.11), it is easy to see that
| 3.12 |
Also, observe that
From (3.9) and (3.12), it is easy to see that
| 3.13 |
In the meantime, again from (3.5) and Lemma 2.7(ii), we obtain
which immediately yields
From (3.6) and the boundedness of and , we have
| 3.14 |
Taking into account that
we deduce from (3.9), (3.12), and (3.14) that
| 3.15 |
(iii) Let . Since and , we get
| 3.16 |
and
| 3.17 |
Putting in (3.16) and in (3.17), we obtain
| 3.18 |
and
| 3.19 |
Adding up (3.18) and (3.19), we have
Since T is pseudocontractive, we know that is a monotone mapping such that
and hence
| 3.20 |
Taking into account that , without loss of generality, we may assume that for some . Then from (3.20) we have
which immediately yields
| 3.21 |
where .
Also, taking into account that and , without loss of generality, we may assume that for some . Since and , where and for , by using arguments similar to those of (3.21), we get
| 3.22 |
and
| 3.23 |
where . Substituting (3.23) into (3.22), we obtain
| 3.24 |
In the meantime, by Proposition 2.1(ii), (v) and Proposition 2.2, we deduce that
| 3.25 |
where
for some . This together with (3.21) and (3.24) implies that
Taking into account that both and imply
we calculate from (3.1)
This immediately implies that
Since is locally Lipschitzian, are locally Lipschitzian, and is locally Lipschitzian for each , we deduce that is locally Lipschitzian.
(iv) From the last inequality in (iii), the desired result follows immediately. □
We prove the following strong convergence theorem for the net as , which guarantees the existence of solutions of the variational inequality (3.2).
Theorem 3.2
Let the net be defined via (3.1). If , then converges strongly to as , which solves VI (3.2). Equivalently, we have .
Proof
We first note that the uniqueness of a solution of VI (3.2) is a consequence of the strong monotonicity of (due to Lemma 2.4). See [2, 4, 5] for this fact.
Next, we prove that as . For simplicity, let , , , and . For any given , we observe that , , and . From (3.1), we write
where . In terms of (2.1) and (3.5), we have
Therefore,
| 3.26 |
Since is bounded as (due to Theorem 3.1(i)), there exists a subsequence in such that and . We first show that . To this end, we divide its proof into four steps.
Step 1. We claim that , , and , where , , and . Indeed, according to (3.9), (3.12), and (3.14) in the proof of Theorem 3.1, we obtain the assertion.
Step 2. We claim that . In fact, from the definition of , we have
| 3.27 |
Set for all and . Then . From (3.27) it follows that
| 3.28 |
By Step 1, we have as . Moreover, since , by Step 1 we have . Since is monotone, we also have that . Thus, from (3.28) it follows that
and hence
Letting , we know from the continuity of that
Putting , we get , which leads to .
Step 3. We claim that . Indeed, note that and . For each , we put , , , and . Then, by Lemma 1.1, we have , where and R is nonexpansive. Moreover, it is easy to see that
| 3.29 |
and
| 3.30 |
Putting in (3.29) and in (3.30), we obtain
| 3.31 |
and
| 3.32 |
Adding up (3.31) and (3.32), we have
Since is a monotone mapping, we know that
and hence
So it follows that
which immediately yields
| 3.33 |
By using arguments similar to those of (3.33), we have
| 3.34 |
Now, putting , in (3.33), and , in (3.34), respectively, we deduce that
and
Since and , it follows from the last two inequalities that
| 3.35 |
Also, we observe that
| 3.36 |
Since (due to Step 1), from (3.35) and (3.36) we get
| 3.37 |
Taking into account that and (due to (3.37)), from Lemma 2.3 we get , that is, .
Step 4. We claim that . In fact, since , for each , we have
By (A2), we have
Let for all and . This implies that . Then we have
By the same arguments as in the proof of Theorem 3.1, we have as . In the meantime, by the monotonicity of , we obtain . Then by (A4) we get
Utilizing (A1), (A4), and the last inequality, we obtain
and hence
Letting , we have, for each ,
This implies that and hence . This together with Steps 2 and 3 attains .
Finally, we show that is a solution of VI (3.2). In fact, putting in place of in (3.26) and taking the limit as , we obtain
In particular, solves the following VI:
or the equivalent dual variational inequality
That is, is a solution of VI (3.2). Hence by uniqueness. In summary, we have proven that each cluster point of (as ) equals x̃. Therefore as . VI (3.2) can be rewritten as
So, in terms of (2.1), this is equivalent to the fixed point equation
This completes the proof. □
Taking , , , and in Theorem 3.2, we have the following corollary.
Corollary 3.1
Let be defined by
If , then converges strongly as to , which is the unique solution of the VI
| 3.38 |
Proof
If , then in Lemma 2.8 is the identity mapping. Thus the result follows from Theorem 3.2. □
We are now in a position to prove the strong convergence of the sequence generated by the general explicit iterative scheme (3.3) to , which is the unique solution to VI (3.2).
Theorem 3.3
Let be the sequence generated by the explicit algorithm (3.3). Let , , , , , and satisfy the following conditions:
and , and as ;
;
, and , (the perturbed control condition);
, , and ;
, , and ;
, , and ;
, and .
Then converges strongly to , which is the unique solution of VI (3.2).
Proof
First, note that from condition (C1), without loss of generality, we assume that , and for all . Let be the unique solution of VI (3.2). (The existence of x̃ follows from Theorem 3.2.)
From now on, we put , , and . Take . Then by Lemma 2.8(iii), () by Proposition 2.1(iii), and by Lemma 1.1.
We divide the proof into several steps as follows.
Step 1. We show that is bounded. Indeed, utilizing Proposition 2.1(ii) and Proposition 2.2, we have
| 3.39 |
It is easy from the nonexpansivity of to see that
which together with the nonexpansivity of and (3.39) implies that
| 3.40 |
By induction, we derive
This implies that is bounded and so are , , , , , and . As a consequence, with the control condition (C1), we get
| 3.41 |
Step 2. We show that . To this end, let , , , and . Then we derive
| 3.42 |
and
| 3.43 |
Putting in (3.42) and in (3.43), we obtain
| 3.44 |
and
| 3.45 |
Adding up (3.44) and (3.45), we have
which together with the monotonicity of implies that
and hence
It follows that
which immediately yields
| 3.46 |
By using arguments similar to those of (3.46), we get
| 3.47 |
Substituting (3.46) into (3.47), we have
| 3.48 |
Note that and . By using arguments similar to those of (3.46), we obtain
| 3.49 |
Also, utilizing arguments similar to those of (3.25) in the proof of Theorem 3.1, we have
| 3.50 |
where is a constant such that, for each ,
So it follows from (3.48), (3.49), and (3.50) that
| 3.51 |
Since , , and , it is easy to see from (3.51) that, for each ,
| 3.52 |
where is a constant such that
Now, simple calculations yield that
In terms of (3.52) and Lemma 2.6, we obtain
| 3.53 |
where . By (3.53) and Lemma 2.5, we derive
| 3.54 |
where . By taking , , , and
we deduce from (3.54) that
Hence, by conditions (C2)–(C7) and Lemma 2.2, we obtain
Step 3. We show that . Indeed, from (3.41) and condition (C1), we derive
Step 4. We show that . In fact, by Step 2 and Step 3, we get
Step 5. We show that and . In fact, we first derive by using arguments similar to those of (3.9) in the proof of Theorem 3.1, and then we obtain by using arguments similar to those of (3.37) in the proof of Theorem 3.2.
Step 6. We show that and . In fact, by using arguments similar to those of (3.12) and (3.13) in the proof of Theorem 3.1, we obtain the desired conclusions.
Step 7. We show that and . In fact, by using arguments similar to those of (3.14) and (3.15) in the proof of Theorem 3.1, we obtain the desired conclusions.
Step 8. We show that . To this end, take a subsequence of such that
Without loss of generality, we may assume that . Utilizing Steps 5, 6, and 7 and arguments similar to those of Steps 2, 3, and 4 in the proof of Theorem 3.2, we derive . Thus, from VI (3.2), we conclude
Step 9. We show that . Note that . From (3.3), , , and , we obtain
and
Applying (2.1), (3.40) and Lemmas 2.1, 2.5, and 2.6, we deduce that
and hence
| 3.55 |
It then follows from (3.55) that
where , . It can be readily seen from Step 2 and conditions (C1) and (C2) that , , and . By Lemma 2.2, we conclude that . This completes the proof. □
Taking , , , and in Theorem 3.3, we have the following corollary.
Corollary 3.2
Let be generated by the following iterative algorithm:
Assume that the sequences , , , , and satisfy conditions (C1)–(C3) and (C5)–(C7) in Theorem 3.3. Then converges strongly to , which is the unique solution of VI (3.38).
Remark 3.1
Compared with Proposition 3.3, Theorem 3.4, and Theorem 3.7 in [12], respectively, our Theorems 3.1, 3.2, and 3.3 improve and develop them in the following aspects:
-
(i)
GSVI (1.3) with solutions being also fixed points of a continuous pseudocontractive mapping in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7] is extended to GSVI (1.3) with solutions being also common solutions of a finite family of generalized mixed equilibrium problems (GMEPs) and fixed points of a continuous pseudocontractive mapping in our Theorems 3.1, 3.2, and 3.3;
-
(ii)
in the argument process of our Theorems 3.1, 3.2, and 3.3, we use the variable parameters and (resp., and ) in place of the fixed parameters λ and ν in the proof of [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], and additionally deal with a pool of variable parameters (resp., ) involving a finite family of GMEPs;
-
(iii)
the iterative schemes in our Theorems 3.1, 3.2, and 3.3 are more advantageous and more flexible than those in [12, Proposition 3.3, Theorem 3.4, and Theorem 3.7], because they can be applied to solving three problems (i.e., GSVI (1.3), a finite family of GMEPs, and the fixed point problem of a continuous pseudocontractive mapping) and involve many more parameter sequences;
-
(iv)
it is worth emphasizing that our general implicit iterative scheme (3.1) is very different from Jung’s composite implicit iterative scheme in [12], because the term “” in Jung’s implicit scheme is replaced by the term “” in our implicit scheme (3.1). Moreover, the term “” in Jung’s explicit scheme is replaced by the term “” in our explicit scheme (3.3).
Numerical examples
The purpose of this section is to give two examples and numerical results to illustrate the applicability, effectiveness, and stability of our algorithm.
Example 4.1
(Example of Theorem 3.3)
Let and . Let the inner product be defined by . Let , , , , , , , , , , , , and . Let , , , , , , , , . It is easy to calculate that , , , , and . Choose an arbitrary initial guess . We get the numerical results of Algorithm (3.3).
Table 1 shows the value of the sequence .
Table 1.
The values of
| n | |
|---|---|
| 1 | 4.0000 |
| 2 | 1.8261 × 10−1 |
| 3 | 3.3191 × 10−3 |
| 4 | 3.7633 × 10−5 |
| 5 | 3.2426 × 10−7 |
| 6 | 2.3546 × 10−9 |
| 7 | 1.5285 × 10−11 |
| 8 | 9.1892 × 10−14 |
| 9 | 5.2325 × 10−16 |
| 10 | 2.8636 × 10−18 |
| 11 | 1.5212 × 10−20 |
| 12 | 7.8994 × 10−23 |
Figure 1 shows the convergence of the iterative sequence of Algorithm (3.3).
Figure 1.
The convergence of with initial
Solution: We can see from both Table 1 and Fig. 1 that the sequence converges to 0, that is, 0 is the solution in Example 4.1. In addition, it is also easy to check from Example 4.1 that . Therefore, the iterative algorithm of Theorem 3.3 is efficient.
Example 4.2
(Example of Theorem 3.7 in [12])
Let and . Let the inner product be defined by . Let , , , , , and . Let , , , , , , . Choose an arbitrary initial guess . We get the numerical results of Algorithm (1.5) (Algorithm (3.10) of [12]).
Table 2 shows the value of the sequence .
Table 2.
The values of
| n | |
|---|---|
| 1 | 4.0000 |
| 2 | 1.0278 |
| 3 | 2.5219 × 10−1 |
| 4 | 6.1587 × 10−2 |
| 5 | 1.5055 × 10−2 |
| 6 | 3.6870 × 10−3 |
| 7 | 9.0468 × 10−4 |
| 8 | 2.2236 × 10−4 |
| 9 | 5.4731 × 10−5 |
| 10 | 1.3488 × 10−5 |
| 11 | 3.3278 × 10−6 |
| 12 | 8.2181 × 10−7 |
Figure 2 shows the convergence of the iterative sequence of Algorithm (1.5).
Figure 2.
The convergence of with initial
Solution: We can see from both Table 2 and Fig. 2 that the sequence converges to 0, that is, 0 is the solution in Example 4.2. In addition, it is also easy to check from Example 4.2 that .
Remark 4.1
From Tables 1 and 2 and Figs. 1 and 2, it is readily seen that the convergence of the iterative sequence to 0 in Example 4.1 is faster than that of the sequence in Example 4.2. Therefore, our algorithm is more applicable, efficient, and stable than the algorithm in [12].
Application
In this section, applying our main result, Theorem 3.3, we prove a strong convergence theorem for approximating a solution of the standard constrained convex optimization problem.
Let C be a closed convex subset of H. The standard constrained convex optimization problem is to find $x^* \in C$ such that

$$f(x^*) = \min_{x \in C} f(x), \tag{5.1}$$

where $f : C \to \mathbb{R}$ is a convex, Fréchet differentiable function. We denote the set of solutions of (5.1) by $\Gamma$.
Lemma 5.1
(Optimality condition, [25])
A necessary condition of optimality for a point $x^* \in C$ to be a solution of the minimization problem (5.1) is that $x^*$ solves the variational inequality

$$\langle \nabla f(x^*), x - x^* \rangle \ge 0 \tag{5.2}$$

for all $x \in C$. Equivalently, $x^*$ solves the fixed point equation
$$x^* = P_C\big(x^* - \lambda \nabla f(x^*)\big)$$
for every constant $\lambda > 0$. If, in addition, f is convex, then the optimality condition (5.2) is also sufficient.
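The fixed-point characterization in Lemma 5.1 underlies the classical projected-gradient iteration $x_{k+1} = P_C(x_k - \lambda\nabla f(x_k))$. The following Python sketch solves a toy instance of (5.1); the quadratic objective, box constraint, and function names are our own illustrative choices, not from the paper:

```python
import numpy as np

def projected_gradient(grad_f, project, x0, lam=0.5, iters=500):
    """Iterate x <- P_C(x - lam * grad_f(x)), cf. the fixed-point
    characterization in Lemma 5.1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - lam * grad_f(x))
    return x

# Toy instance: f(x) = 0.5*||x - p||^2 over the box C = [0, 1]^2,
# with p outside the box.
p = np.array([2.0, -1.0])
grad_f = lambda x: x - p                  # gradient of f (1-inverse-strongly monotone)
project = lambda x: np.clip(x, 0.0, 1.0)  # metric projection onto the box

x_star = projected_gradient(grad_f, project, np.zeros(2))
# The minimizer over C is the projection of p onto the box, namely (1, 0).
```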
Theorem 5.1
Let C be a nonempty closed convex subset of a real Hilbert space H. Let (): be a real-valued convex function with the gradient being -inverse-strongly monotone and continuous with . Let , , A, V, G, , , , , , , and be defined as in Theorem 3.3. Given an arbitrary initial guess, let be the sequence generated by the following explicit algorithm:
| 5.3 |
where and . Assume that , , , , , and satisfy conditions (C1)–(C7) in Theorem 3.3. Then converges strongly to , which is the unique solution of VI (3.2).
Proof
By using Lemma 5.1 and Theorem 3.3, we obtain the desired conclusion directly. □
Conclusions
We introduced and analyzed one general implicit iterative scheme and another general explicit iterative scheme for finding a solution of a general system of variational inequalities (GSVI) with the constraints of finitely many generalized mixed equilibrium problems and a fixed point problem of a continuous pseudocontractive mapping in a Hilbert space. Moreover, we established strong convergence of the proposed implicit and explicit iterative schemes to a solution of the GSVI, which is the unique solution of a certain variational inequality. Our Theorems 3.1–3.3 not only improve and develop the main results of [1] and [12] but also improve and develop Theorems 3.1 and 3.2 of [9], Theorems 3.1 and 3.2 of [10], and Proposition 3.1, Theorems 3.2 and 3.5 of [11].
Authors’ contributions
All authors read and approved the final manuscript.
Funding
L.-C. Ceng was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of the Ministry of Education of China (20123127110002), and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).
Competing interests
The authors declare that they have no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Qian-Wen Wang, Email: wangqw2017@gmail.com.
Jin-Lin Guan, Email: guanjinlinaabb@163.com.
Lu-Chuan Ceng, Email: zenglc@hotmail.com.
Bing Hu, Email: hubing@yorku.ca.
References
- 1.Ceng L.C., Wang C.Y., Yao J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008;67:375–390. doi: 10.1007/s00186-007-0207-4.
- 2.Siriyan K., Kangtunyakarn A. A new general system of variational inequalities for convergence theorem and application. Numer. Algorithms. 2018;12:1–25.
- 3.Bnouhachem A. A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem. Fixed Point Theory Appl. 2014;2014:22. doi: 10.1186/1687-1812-2014-22.
- 4.Ceng L.C., Liou Y.C., Wen C.F., Wu Y.J. Hybrid extragradient viscosity method for general system of variational inequalities. J. Inequal. Appl. 2015;2015:150. doi: 10.1186/s13660-015-0646-z.
- 5.Alofi A., Latif A., Mazrooei A.A., Yao J.C. Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 2016;17(4):669–682.
- 6.Rouhani B.D., Kazmi K.R., Farid M. Common solutions to some systems of variational inequalities and fixed point problems. Fixed Point Theory. 2017;18(1):167–190. doi: 10.24193/fpt-ro.2017.1.14.
- 7.Eslamian M., Saejung S., Vahidi J. Common solution of a system of variational inequality problems. UPB Sci. Bull., Ser. A. 2015;77(1):55–62.
- 8.Alofi A.S.M., Latif A., Al-Marzooei A.E., Yao J.C. Composite viscosity iterative methods for general systems of variational inequalities and fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 2016;17:669–682.
- 9.Ceng L.C., Guu S.M., Yao J.C. A general composite iterative algorithm for nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 2011;61:2447–2455. doi: 10.1016/j.camwa.2011.02.025.
- 10.Jung J.S. A general composite iterative method for strictly pseudocontractive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014;2014:173. doi: 10.1186/1687-1812-2014-173.
- 11.Kong Z.R., Ceng L.C., Liou Y.C., Wen C.F. Hybrid steepest-descent methods for systems of variational inequalities with constraints of variational inclusions and convex minimization problems. J. Nonlinear Sci. Appl. 2017;10:874–901. doi: 10.22436/jnsa.010.03.03.
- 12.Jung J.S. Strong convergence of some iterative algorithms for a general system of variational inequalities. J. Nonlinear Sci. Appl. 2017;10:3887–3902. doi: 10.22436/jnsa.010.07.42.
- 13.Peng J.W., Yao J.C. A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008;12:1401–1432. doi: 10.11650/twjm/1500405033.
- 14.Kong Z.R., Ceng L.C., Ansari Q.H., Pang C.T. Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013;2013:718624.
- 15.Ceng L.C., Ansari Q.H., Schaible S. Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012;53:69–96. doi: 10.1007/s10898-011-9703-4.
- 16.Ceng L.C., Yao J.C. A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010;72:1922–1937. doi: 10.1016/j.na.2009.09.033.
- 17.Ceng L.C., Lin Y.C., Wen C.F. Iterative methods for triple hierarchical variational inequalities with mixed equilibrium problems, variational inclusions, and variational inequalities constraints. J. Inequal. Appl. 2015;2015:16. doi: 10.1186/s13660-014-0535-x.
- 18.Ceng L.C., Hu H.Y., Wong M.M. Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 2011;15:1341–1367. doi: 10.11650/twjm/1500406303.
- 19.Peng J.W., Yao J.C. A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008;12:1401–1432. doi: 10.11650/twjm/1500405033.
- 20.Jung J.S. A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010;2010:251761.
- 21.Goebel K., Kirk W.A. Topics in Metric Fixed Point Theory. Cambridge: Cambridge University Press; 1990.
- 22.Marino G., Xu H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006;318:43–52. doi: 10.1016/j.jmaa.2005.05.028.
- 23.Yamada I. The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In: Butnariu D., Censor Y., Reich S., editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Amsterdam: North-Holland; 2001. pp. 473–504.
- 24.Zegeye H. An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011;2011:621901.
- 25.Suwannaut S., Kangtunyakran A. The combination of the set of solutions of equilibrium problem for convergence theorem of the set of fixed points of strictly pseudo-contractive mappings and variational inequalities problem. Fixed Point Theory Appl. 2013;2013:291. doi: 10.1186/1687-1812-2013-291.