Abstract
Many iterative optimization algorithms involve compositions of special cases of Lipschitz continuous operators, namely firmly nonexpansive, averaged, and nonexpansive operators. The structure and properties of the compositions are of particular importance in the proofs of convergence of such algorithms. In this paper, we systematically study the compositions of further special cases of Lipschitz continuous operators. Applications of our results include compositions of scaled conically nonexpansive mappings, as well as the Douglas–Rachford and forward–backward operators, when applied to solve certain structured monotone inclusion and optimization problems. Several examples illustrate and tighten our conclusions.
Keywords: Compositions of operators, Conically nonexpansive operators, Douglas–Rachford algorithm, Forward–backward algorithm, Hypoconvex function, Maximally monotone operator, Proximal operator, Resolvent
Introduction
In this paper, we assume that
with the inner product and the induced norm . Let and let . Then T is L-Lipschitz continuous if , and T is nonexpansive if T is 1-Lipschitz continuous, i.e., . In this paper, we study compositions of what we call (see Definition 3.1) identity-nonexpansive decompositions (I-N decompositions for short) of Lipschitz continuous operators. Let and let be the identity operator on X. A Lipschitz continuous operator R admits an -I-N decomposition if for some nonexpansive operator . For instance, averaged,1 conically nonexpansive,2 and cocoercive3 operators are all Lipschitz continuous operators that admit special I-N decompositions.
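For the reader's convenience, we recall the standard parametrizations of these classes (we follow the usual conventions of [2]; the paper's footnotes 1–3 presumably define them in the same way):

```latex
\begin{align*}
&T \text{ is } \alpha\text{-averaged } (\alpha \in (0,1))
   &&\iff\quad T = (1-\alpha)\operatorname{Id} + \alpha N \text{ for some nonexpansive } N,\\
&T \text{ is } \theta\text{-conically nonexpansive } (\theta > 0)
   &&\iff\quad T = (1-\theta)\operatorname{Id} + \theta N \text{ for some nonexpansive } N,\\
&T \text{ is } \beta\text{-cocoercive } (\beta > 0)
   &&\iff\quad \langle x-y,\, Tx-Ty\rangle \ \ge\ \beta\,\|Tx-Ty\|^{2} \quad \text{for all } x,y \in X.
\end{align*}
```

In particular, a β-cocoercive operator T satisfies βT = ½(Id + N) for some nonexpansive N, so each of these classes is indeed of the announced identity-plus-nonexpansive form.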
We consider compositions of the form
| 1 |
where , , and is a family of Lipschitz continuous operators such that, for each , admits an -I-N decomposition. That is, for all , where and are real numbers, and are nonexpansive for all . A straightforward (and naive) conclusion is that the composition is Lipschitz continuous with a constant . However, such a conclusion can be further refined when, for instance, each is an averaged operator. Indeed, in this case it is known that the composition is an averaged (and not just Lipschitz continuous) operator (see, e.g., [2, Proposition 4.46], [6, Lemma 2.2], and [21, Theorem 3]). In this paper, we provide a systematic study of the structure of R under additional assumptions on the decomposition parameters.
Our main result is stated in Theorem 3.4. We show that, for , under a mild assumption on , composition (1) is a scalar multiple of a conically nonexpansive operator. As a consequence of Theorem 3.4, we show in Theorem 4.2 that, under additional assumptions on the decomposition parameters, compositions of scaled conically nonexpansive mappings are scaled conically nonexpansive mappings; see also [1] for a relevant result.4 Special cases of Theorem 4.2 include, e.g., compositions of averaged operators [2, Proposition 4.46] and compositions of averaged and negatively averaged operators [12].
Of particular interest are compositions R that are averaged, conically nonexpansive, or contractive. Let . For an averaged (respectively contractive) operator R, the sequence converges weakly (respectively strongly) towards a fixed point of R (if one exists) [2, Theorem 5.14]. For conically nonexpansive operators, a simple averaging trick gives an averaged operator with the same fixed point set as the conically nonexpansive operator. Iterating the new averaged operator yields a sequence that converges weakly to a fixed point of the conically nonexpansive operator. These properties have been instrumental in proving convergence for the Douglas–Rachford algorithm and the forward–backward algorithm. In this paper, we apply our composition result Theorem 4.2 to prove convergence of these splitting methods in new settings.
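The averaging trick can be seen on a toy example on the real line (our own illustration, not taken from the paper): T = −3·Id equals (1 − θ)Id + θN with θ = 2 and N = −Id, so T is 2-conically nonexpansive with Fix T = {0}. Iterating T directly diverges, while iterating a sufficiently averaged version converges to the common fixed point.

```python
# Toy illustration (not from the paper) of the averaging trick for a
# conically nonexpansive operator on the real line.
# T = -3*Id equals (1 - theta)*Id + theta*N with theta = 2 and N = -Id,
# so T is 2-conically nonexpansive and Fix T = {0}.

def T(x):
    return -3.0 * x

def averaged(T, lam):
    """Return T_lam = (1 - lam)*Id + lam*T, which has the same fixed points as T."""
    return lambda x: (1.0 - lam) * x + lam * T(x)

x = 1.0
for _ in range(5):          # raw iteration of T diverges: 1, -3, 9, -27, ...
    x = T(x)
assert abs(x) > 100

T_avg = averaged(T, 0.2)    # lam * theta = 0.4 < 1, so T_avg is averaged
y = 1.0
for _ in range(50):
    y = T_avg(y)            # T_avg(y) = 0.2*y converges to the fixed point 0
assert abs(y) < 1e-10
```

Here any λ with λθ < 1 works; the iterates of the averaged map contract toward Fix T even though the raw iterates of T blow up.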
The Douglas–Rachford and forward–backward methods traditionally solve monotone inclusion problems of the form
| 2 |
where and are maximally monotone, and, in the case of the forward–backward method, A is additionally assumed to be cocoercive. The Douglas–Rachford method iterates the Douglas–Rachford map , where5 is a positive step-size. The Douglas–Rachford map is an averaged map of the composition of reflected resolvents. The forward–backward method iterates the forward–backward map , where is a positive step-size. The forward–backward map is a composition of a resolvent and a forward-step.
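The two fixed-point maps can be sketched on a one-dimensional toy inclusion (operators and step-size chosen by us for illustration): with A = Id and B = Id − 1, the inclusion 0 ∈ Ax + Bx has the unique solution x* = 1/2, and both resolvents are available in closed form.

```python
# Scalar sketch (illustrative choices, not from the paper): solve
# 0 in A x + B x with A = Id and B = Id - 1, whose solution is x* = 1/2.
gamma = 0.5

def JA(y):                       # resolvent of gamma*A: solve x + gamma*x = y
    return y / (1.0 + gamma)

def JB(y):                       # resolvent of gamma*B: solve x + gamma*(x - 1) = y
    return (y + gamma) / (1.0 + gamma)

def R(J):                        # reflected resolvent 2*J - Id
    return lambda y: 2.0 * J(y) - y

# Douglas-Rachford: z+ = (1/2)(z + R_B(R_A(z))); the shadow J_A(z) -> x*.
z = 0.0
for _ in range(200):
    z = 0.5 * (z + R(JB)(R(JA)(z)))
assert abs(JA(z) - 0.5) < 1e-9

# Forward-backward: x+ = J_B(x - gamma * A x), a backward step after a forward step.
x = 0.0
for _ in range(200):
    x = JB(x - gamma * x)
assert abs(x - 0.5) < 1e-9
```

Note that the Douglas–Rachford fixed point z itself need not solve the inclusion; it is the shadow J_A(z) that converges to the solution.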
In this paper, we show that for Douglas–Rachford splitting we need not impose monotonicity on the individual operators, but only on the sum, provided the sum is strongly monotone. The reflected resolvents and are negatively conically nonexpansive, the composition is conically nonexpansive, and sufficient averaging gives an averaged map whose iterates converge to a fixed point. Relevant work appears in [9, 16], and [17].
More strikingly, for the forward–backward method, we show that it is sufficient that the sum is monotone (not strongly monotone as for DR). More specifically, we show that the identity can be shifted between the two operators while still guaranteeing averagedness of the forward–backward map . Indeed, the resolvent is cocoercive and the forward-step is scaled averaged. This implies that the composition is averaged (given restrictions on the cocoercivity and averagedness parameters). Moreover, when the sum is strongly monotone, again with no assumptions on monotonicity of the individual operators, we show that the forward–backward map is contractive. We also prove tightness of our contraction factor.
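The identity shift behind this observation is elementary: monotonicity moduli add under pointwise sums, so a negative modulus of one operator can be compensated by the other (notation ours; the paper's parameters may be normalized differently):

```latex
\begin{gather*}
A + B \;=\; (A - \mu \operatorname{Id}) + (B + \mu \operatorname{Id})
\qquad \text{for every } \mu \in \mathbb{R},\\
\text{and if } A \text{ is } \rho_A\text{-monotone and } B \text{ is } \rho_B\text{-monotone, then}\\
\langle x - y,\; (u + p) - (v + q)\rangle \;\ge\; (\rho_A + \rho_B)\,\|x - y\|^{2}
\quad \text{for } (x,u),(y,v) \in \operatorname{gra} A,\ (x,p),(y,q) \in \operatorname{gra} B.
\end{gather*}
```

Hence A + B is monotone whenever the sum of the moduli is nonnegative, even if one of them is negative.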
We also provide, in Theorem 4.7, a generalization of Theorem 4.2 to the setting in (1) of compositions of more than two operators. We assume that all are scaled conically nonexpansive operators and provide conditions on the parameters that give a specific scaled conically nonexpansive representation of R. Our condition is symmetric in the individual operators and allows for one of them to be scaled conic, while the rest must be scaled averaged. This is consistent with the case in Theorem 4.2.
Finally, in Sect. 8, we provide graphical 2D-representations of different operator classes that admit I-N decompositions such as Lipschitz continuous operators, averaged operators, and cocoercive operators. We also provide 2D-representations of compositions of two such operator classes. Illustrations of the firmly nonexpansive (-averaged) and nonexpansive operator classes have previously appeared in [10, 11], and illustrations of more operator classes that admit particular I-N decompositions and their compositions have appeared in [14, 24] and in early preprints of [15].
Organization and notation
The remainder of this paper is organized as follows: Sect. 2 presents useful facts and auxiliary results that are used throughout the paper. In Sect. 3, we present the main abstract results of the paper. Section 4 presents the main composition results of Lipschitz continuous operators that admit I-N decompositions, under mild assumptions on the decomposition parameters, as well as illustrative and limiting examples. In Sect. 5 and Sect. 6, we present applications of our composition results to the Douglas–Rachford and forward–backward algorithms, respectively. In Sect. 7 we present applications of our results to optimization problems. Finally, in Sect. 8, we provide graphical representations of many different I-N decompositions and their compositions.
The notation we use is standard and follows, e.g., [2] or [23].
Facts and auxiliary results
Let . Let . Recall that A is ρ-monotone if
| 3 |
and is maximally ρ-monotone if every proper extension of gra A violates (3). In passing, we point out that A is (maximally) monotone (respectively ρ-hypomonotone, ρ-strongly monotone) if (respectively , ); see, e.g., [2, Chap. 20], [4, Definition 6.9.1], [7, Definition 2.2], and [23, Example 12.28].
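In its standard form (which (3) presumably displays), ρ-monotonicity reads:

```latex
\begin{equation*}
A \text{ is } \rho\text{-monotone}
\quad\iff\quad
\langle x - y,\; u - v\rangle \;\ge\; \rho\,\|x - y\|^{2}
\quad \text{for all } (x,u),\, (y,v) \in \operatorname{gra} A,
\end{equation*}
```

so ρ = 0 recovers plain monotonicity, ρ > 0 gives strong monotonicity, and ρ < 0 gives a hypomonotonicity-type relaxation.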
Fact 2.1
Let , let , let , and suppose that . Suppose that and are single-valued and that . Set
| 4 |
Then T is single-valued, , and
| 5 |
Proof
See [9, Lemma 4.1]. □
Proposition 2.2
Let , let , and suppose that . Suppose that is single-valued and that . Set
| 6 |
Then T is single-valued, , and
| 7 |
Proof
The proof is similar to the proof of [2, Proposition 26.1(iv)].6 Indeed, let . Then ⇔ ⇔ ⇔ . □
Lemma 2.3
Let , let , let , and set
| 8 |
Let . Then
| 9 |
Proof
See Appendix A. □
Proposition 2.4
Let , let , let , and set . Let . Then the following hold:
| 10a |
| 10b |
| 10c |
Proof
Indeed, we have
| 11a |
| 11b |
| 11c |
| 11d |
| 11e |
This proves (10a) and (10c) in view of (11c) and (11e). Finally, note that . This proves (10b). □
Proposition 2.5
Let , let , let , and set . Let . Then the following are equivalent:
-
(i)
N is nonexpansive.
-
(ii)
.
-
(iii)
.
-
(iv)
.
-
(v)
.
Proof
(i)⇔(ii)⇔(iii)⇔(v): This is a direct consequence of Proposition 2.4. (i)⇔(iv): Applying (10b) with replaced by yields . The proof is complete. □
Proposition 2.6
Let , let , and set . Let . Then the following are equivalent:
-
(i)
N is nonexpansive.
-
(ii)
.
-
(iii)
.
-
(iv)
.
-
(v)
.
Proof
Apply Proposition 2.5 with replaced by . □
Lemma 2.7
Let . Then
| 12 |
Proof
Let . By Young’s inequality, . Equivalently, . Now, replace by . □
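The weighted form of Young's inequality invoked above is, for every ε > 0,

```latex
\begin{equation*}
2\,\langle x,\, y\rangle \;\le\; \varepsilon\,\|x\|^{2} + \varepsilon^{-1}\,\|y\|^{2},
\qquad x, y \in X,
\end{equation*}
```

which follows by expanding 0 ≤ ‖√ε x − y/√ε‖².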
Proposition 2.8
Let , let , and let . Then T is α-averaged if and only if and M is -conically nonexpansive.
Proof
Indeed, T is α-averaged if and only if there exists a nonexpansive mapping such that . Equivalently,
and the conclusion follows by setting . □
The following three lemmas can be verified directly, hence we omit the proofs.
Lemma 2.9
Let , and let . Then T is α-conically nonexpansive ⇔ is -cocoercive ⇒ is maximally monotone.
Lemma 2.10
Let , let , and let . Suppose that A is maximally μ-monotone and -cocoercive. Then .
Lemma 2.11
Let , let , and let . Suppose that T is -cocoercive. Then T is -cocoercive.
Lemma 2.12
Let , and let . Suppose that A is β-Lipschitz continuous. Then the following hold:
-
(i)
A is maximally -monotone.
-
(ii)
is -cocoercive.
Proof
See Appendix B. □
Lemma 2.13
Let , let , and let . Suppose that (respectively ) is -cocoercive (respectively -cocoercive). Then is β-Lipschitz continuous.
Proof
See Appendix C. □
As a corollary, we obtain the following result, which was stated in [27, page 4].
Corollary 2.14
Let , be Fréchet differentiable convex functions, and let . Suppose that (respectively ) is β-Lipschitz continuous (respectively δ-Lipschitz continuous). Then the following hold:
-
(i)
is β-Lipschitz continuous.
-
(ii)
Suppose that is convex. Then is -cocoercive.
Proof
See Appendix D. □
Lemma 2.15
Let , let , and let . Suppose that T is α-averaged. Then the following hold:
-
(i)
δT is -averaged.
-
(ii)
Suppose that . Then δT is a Banach contraction with constant δ.
Proof
See Appendix E. □
Let A be maximally ρ-monotone, where . Then (see [9, Proposition 3.4] and [3, Corollary 2.11 and Proposition 2.12]) we have
| 13 |
The following result involves resolvents and reflected resolvents of ρ-monotone operators.
Proposition 2.16
Let A be ρ-monotone, where . Then the following hold:
-
(i)
is -cocoercive, in which case is Lipschitz continuous with constant .
-
(ii)
is -conically nonexpansive.
-
(iii)
Suppose that . Then is Lipschitz continuous with constant .
Proof
(i): See [9, Lemma 3.3(ii)]. Alternatively, it follows from [3, Corollary 3.8(ii)] that is -averaged. Now apply Lemma 2.9 with T replaced by . (ii): It follows from (i) that there exists a nonexpansive operator such that . Now, . (iii): Indeed, let and let N be as defined above. We have
| 14a |
| 14b |
The proof is complete. □
Compositions
Definition 3.1
(-I-N decomposition)
Let be Lipschitz continuous, and let7. We say that R admits an -identity-nonexpansive (I-N) decomposition8 if there exists a nonexpansive operator such that .
Throughout the rest of this paper, we assume that
Proposition 3.2
Let , let , let , let , and suppose that . Set
| 15a |
| 15b |
| 15c |
Suppose that admits an -I-N decomposition and that admits an -I-N decomposition. Then we have
| 16 |
Proof
Set , and observe that by Proposition 2.5 applied with replaced by , , we have
| 17 |
Equivalently,
| 18 |
Observe also that, because , we have
| 19 |
It follows from (18), applied with and replaced by in (20c) and by in (20f), in view of (19) that
| 20a |
| 20b |
| 20c |
| 20d |
| 20e |
| 20f |
| 20g |
Rearranging yields the desired result. □
Theorem 3.3
Let , let , let , let , and suppose that . Let , , and be defined as in (15a)–(15c). Set
| 21 |
and suppose that , that , and that . Suppose that admits an -I-N decomposition, and that admits an -I-N decomposition. Then admits an -I-N decomposition, where
| 22 |
Proof
Let , let , and let (i.e., if , and if ). Then Proposition 3.2 and Lemma 2.7 imply that
| 23a |
| 23b |
| 23c |
| 23d |
| 23e |
| 23f |
Comparing (23a)–(23f) to Proposition 2.5 applied with T replaced by , we learn that there exist a nonexpansive operator and such that , where and . Equivalently, , hence , as claimed. □
Theorem 3.4
Let , let , let , let , suppose that , that , and that either or . Set
| 24a |
| 24b |
Suppose that admits an -I-N decomposition, and that admits an -I-N decomposition. Then and admits a -I-N decomposition, i.e., is κ-scaled θ-conically nonexpansive. That is, there exists a nonexpansive operator such that
| 25 |
Proof
Let , and observe that
| 26 |
Next, let , and note that is nonexpansive. Now, set
| 27 |
| 28a |
| 28b |
| 28c |
We proceed by cases. Case I: . Observe that ⇔ . The conclusion follows by observing that is nonexpansive, .
Case II: . By assumption we must have . We claim that , , satisfy the conditions of Theorem 3.3 with replaced by . Indeed, observe that , which is always true. Moreover, replacing by yields , , , and, consequently, . We claim that
| 29 |
Indeed, recall that . This implies that . Moreover,
| 30 |
Therefore, by Theorem 3.3, we conclude that there exists a nonexpansive operator such that , , and . Now combine with (28a)–(28c). □
Applications to special cases
We start this section by recording the following simple lemma, which can be verified directly; we omit the proof.
Lemma 4.1
Set . Then the following hold:
-
(i)
.
-
(ii)
Let , let , and suppose that is -conically nonexpansive. Then is -conically nonexpansive.
Theorem 4.2
Let , let , let , let be such that is -conically nonexpansive. Suppose that either or . Set
| 31 |
Then there exists a nonexpansive operator such that
| 32 |
Furthermore, .
Proof
Set and set . The proof proceeds by cases.
Case I: , . By assumption, there exist nonexpansive operators such that . Moreover, one can easily check that satisfy the assumptions of Theorem 3.4 with replaced by . Applying Theorem 3.4, with replaced by , we learn that there exists a nonexpansive operator such that , where
| 33 |
Finally, observe that ⇔ [ and ] ⇔ [ and ] ⇔ [ and ] ⇔ [ and ].
Case II: , . Observe that is -conically nonexpansive. Therefore, Lemma 4.1(ii), applied with replaced by , implies that are -conically nonexpansive. Now combine Lemma 4.1(i) and Case I applied with replaced by .
Case III: and : Observe that is -conically nonexpansive. Now, using Lemma 4.1(i)&(ii), we have , and is -conically nonexpansive. Now combine with Case II, applied with replaced by , to learn that there exists a nonexpansive mapping such that , and the conclusion follows.
Case IV: and : Indeed, . Now combine with Case I applied with replaced by , in view of Lemma 4.1(ii). □
Corollary 4.3
Let , let , let , let , and suppose that is α-averaged, and that is -cocoercive. Set . Then , and there exists a nonexpansive operator such that
| 34 |
Proof
Suppose first that , and observe that there exists a nonexpansive operator N̅ such that . Applying Theorem 4.7 with , replaced by yields that there exists a nonexpansive operator N such that , where
| 35 |
The case follows similarly. □
The assumption is critical in the conclusion of Theorem 4.2, as we illustrate below.
Example 4.4
()
Let , and set . Then
| 36 |
Hence, . That is, is not monotone; hence, is not conically nonexpansive by Lemma 2.9 applied with T replaced by .
The following proposition provides an abstract framework to construct a family of operators and such that is -conically nonexpansive, is -conically nonexpansive, , and the composition fails to be conically nonexpansive.
Proposition 4.5
Let , let , let , let
| 37 |
set
| 38 |
and set
| 39 |
Then is -conically nonexpansive, and is -conically nonexpansive. Moreover, we have the implication ⇒ is not conically nonexpansive.
Proof
Set , and observe that and that . Now,
| 40a |
| 40b |
| 40c |
| 40d |
| 40e |
Consequently,
| 41 |
Hence,
| 42 |
Now, is conically nonexpansive ⇒ is monotone by Lemma 2.9, and the conclusion follows in view of (42). □
The following example provides two concrete instances where: (i) , , hence , (ii) , , . In both cases, is not conically nonexpansive.
Example 4.6
Suppose that one of the following holds:
-
(i)
, , , , and .
-
(ii)
, , , and .
Let be defined as in (37), let , and let . Then , and is not conically nonexpansive.
Proof
Let κ be defined as in (39). In view of Proposition 4.5, it is sufficient to show that . (i): Note that ⇔ . Now,
| 43a |
| 43b |
(ii): We have
| 44a |
| 44b |
| 44c |
| 44d |
| 44e |
| 44f |
Now, observe that . Consequently, . Now use the assumption to learn that , hence , and the conclusion follows. □
Theorem 4.7
(composition of m scaled conically nonexpansive operators)
Let be an integer, set , let be a family of operators from X to X, let , let be real numbers such that and , let be real numbers in , and suppose that, for every , is -conically nonexpansive. Set
| 45 |
Suppose that , and set
| 46 |
Then there exists a nonexpansive operator such that
| 47 |
Proof
First, observe that , is nonexpansive. If , then is -Lipschitz continuous and the conclusion readily follows. Now, suppose that . We proceed by induction on . At , the claim holds by Theorem 4.2. Now, suppose that the claim holds for some . Let be a family of operators from X to X, let , let be real numbers such that and , let be real numbers in , and suppose that, for every , is -conically nonexpansive. Set , and suppose that . We examine two cases.
Case I: . In this case the conclusion follows by applying Theorem 4.2 in view of the inductive hypothesis with replaced by and replaced by .
Case II: . We claim that
| 48 |
To this end, set , and observe that . By assumption we have . Altogether, we conclude that . It follows from the inductive hypothesis that
| 49 |
Next note that
| 50a |
| 50b |
| 50c |
Because , we learn that . Moreover, because , we have . Therefore, (50a)–(50c) implies
| 51a |
| 51b |
| 51c |
| 51d |
| 51e |
Now, observe that
| 52 |
and
| 53a |
| 53b |
| 53c |
In view of (52) and (53a)–(53c), (51a)–(51e) becomes
| 54 |
This proves (48). Now proceed similarly to Case I in view of (48) and (49). □
The assumption is critical in the conclusion of the above theorem, as we illustrate in the following example.
Example 4.8
Let , let , let , let , and let
| 55 |
Set , , , and
| 56 |
Then . Moreover, the following hold:
-
(i)
, , and .
-
(ii)
is -conically nonexpansive where .
-
(iii)
.
-
(iv)
.
-
(v)
. Hence, is not monotone.
-
(vi)
R is not conically nonexpansive.
Proof
It is straightforward to verify that . (i): It is clear that and that . Note that ⇔ lies between the roots of the quadratic , and the conclusion follows from the quadratic formula. (ii): This follows from [2, Proposition 4.38]. (iii): Indeed, in view of (i) we have
| 57a |
| 57b |
| 57c |
| 57d |
| 57e |
| 57f |
| 57g |
Now, because , , we learn that , and the conclusion follows. (iv): It is straightforward, by noting that , to verify that . Consequently, . (v): This is a direct consequence of (iv). (vi): Combine (v) and Lemma 2.9. □
Theorem 4.9
(Composition of cocoercive operators)
Let be an integer, set , let be a family of operators from X to X, let be real numbers in , and suppose that, for every , is -cocoercive. Then there exists a nonexpansive operator such that
| 58 |
Proof
Apply Theorem 4.7 with replaced by , . □
Application to the Douglas–Rachford algorithm
Theorem 5.1
(Averagedness of the Douglas–Rachford operator)
Let , and let . Suppose that one of the following holds:
-
(i)
A is maximally -monotone and B is maximally μ-monotone.
-
(ii)
A is maximally μ-monotone and B is maximally -monotone.
Set
| 59 |
Then and T is α-averaged.
Proof
Suppose that (i) holds. Note that γA is -monotone, and
| 60 |
Using (13) and Fact 2.1 we learn that and, in turn, T are single-valued and . It follows from [3, Proposition 4.3 and Table 1] that is -conically nonexpansive and is -conically nonexpansive. It follows from Theorem 4.2, applied with replaced by , that is -conically nonexpansive. Therefore, there exists a nonexpansive mapping such that
| 61 |
The conclusion now follows by applying Proposition 2.8 with replaced by . Finally, notice that , which implies that . Therefore,
| 62 |
The proof of (ii) follows similarly. □
Corollary 5.2
([9, Theorem 4.5(ii)])
Let , and let . Suppose that one of the following holds:
-
(i)
A is maximally -monotone and B is maximally μ-monotone.
-
(ii)
A is maximally μ-monotone and B is maximally -monotone.
Set and let . Then such that .
Proof
Remark 5.3
In view of (13), one might think that the scaling factor γ is required only to guarantee the single-valuedness and the full domain of T. However, it is actually critical to guarantee convergence as well, as we illustrate in Example 5.4.
Example 5.4
Let , let U be a closed linear subspace of X, suppose that9
| 63 |
Then A is μ-monotone, B is −ω-monotone, and is single-valued. Furthermore, we have
| 64 |
and does not converge.
Proof
Indeed, one can verify that
| 65 |
Consequently,
| 66 |
and (64) follows. Therefore,
| 67 |
Hence, does not converge. □
Before we proceed to the convergence analysis, we recall that if T is averaged and then we have (see, e.g., [22, Theorem 3.7])
| 68 |
We conclude this section by proving the strong convergence of the shadow sequence of the Douglas–Rachford algorithm.
Theorem 5.5
(Convergence analysis of the Douglas–Rachford algorithm)
Let , and let . Suppose that one of the following holds:
-
(i)
A is maximally μ-monotone and B is maximally -monotone.
-
(ii)
A is maximally -monotone and B is maximally μ-monotone.
Set
| 69 |
and let . Then . Moreover, there exist , , , , and .
Proof
Suppose that (i) holds. Since is -monotone and , we conclude from [2, Proposition 23.35] that is a singleton. Combining with Fact 2.1 with replaced by yields . The claim that follows from Corollary 5.2. It remains to show that and . To this end, note that is bounded; consequently, since and are Lipschitz continuous (see Proposition 2.16(i)&(ii)), we learn that
| 70 |
On the one hand, in view of (68) we have
| 71 |
Combining (70) and (71) yields
| 72a |
| 72b |
| 72c |
On the other hand, combining Lemma 2.3, applied with replaced by and replaced by , in view of (68) yields
| 73a |
| 73b |
| 73c |
Therefore,
| 74 |
Combining (72a)–(72c) and (74) and noting that yields and , which proves (i). The proof of (ii) proceeds similarly. □
Remark 5.6
(Relaxed Douglas–Rachford algorithm)
A careful look at the proofs of Theorem 5.1 and Theorem 5.5 reveals that analogous conclusions can be drawn for the relaxed Douglas–Rachford operator defined by , . In this case, we choose . One can verify that the corresponding averagedness constant is .
Application to the forward–backward algorithm
Throughout this section we assume that
In the rest of this section, we prove that the forward–backward operator is averaged, hence its iterates form a weakly convergent sequence in each of the following situations:
A is maximally μ-monotone, is -cocoercive, B is maximally -monotone, and .
A is maximally -monotone, is -cocoercive, B is maximally μ-monotone, and .
A is β-Lipschitz continuous, B is maximally μ-monotone, and .
That is, we do not require A and B to be monotone. Instead, it is enough that the sum is monotone to have an averaged forward–backward map. In addition, we show that the forward–backward map is contractive if the sum is strongly monotone, and we prove the tightness of our contraction factor.
Theorem 6.1
(Case I: A is μ-monotone)
Let , and let . Suppose that A is maximally μ-monotone, is -cocoercive, and B is maximally -monotone. Let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is -averaged.
-
(iii)
T is δ-Lipschitz continuous.
-
(iv)
There exists such that .
Suppose that . Then we additionally have:
-
(v)
T is a Banach contraction with a constant .
-
(vi)
and with a linear rate .
Proof
Clearly, and . Moreover, we have ⇔ ⇔ . Hence, as claimed. Next note that , hence . It follows from Proposition 2.2 that and, in turn, T are single-valued and . The assumption on A implies that there exists , N̅ is nonexpansive, such that . Therefore,
| 75a |
| 75b |
Moreover, Proposition 2.16(i) implies that
| 76 |
(i): It follows from Corollary 4.3 applied with replaced by and replaced by , in view of (75a)–(75b) and (76), that there exists a nonexpansive operator N such that . (ii): Combine (i) and Lemma 2.15(i). (iii): Combine (i) and (ii). (iv): Applying Proposition 2.2 with replaced by yields . The claim that follows from combining (ii) and [2, Theorem 5.15]. (v): Observe that . Now, combine with (iii). (vi): Since is maximally -monotone and , we conclude from [2, Proposition 23.35] that is a singleton. Alternatively, use (iii) to learn that T is a Banach contraction with a constant , hence is a singleton, and the conclusion follows. □
Theorem 6.2
Let , and let . Suppose that A is maximally μ-monotone, is -cocoercive, and B is maximally -monotone. Let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is a Banach contraction with a constant .
-
(iii)
There exists such that and with a linear rate .
Proof
We proceed similarly to the proof of Theorem 6.1 to verify that T is single-valued, , , and . The assumption on A implies that there exists , N̅ is nonexpansive, such that . Therefore,
| 77a |
| 77b |
Now proceed similarly to the proof of Theorem 6.1(i), (v), and (vi) in view of (76). □
Corollary 6.3
Let , and let . Suppose that A is maximally μ-monotone, is -cocoercive, and B is maximally -monotone. Let . Set , set , and let . Then , T is a Banach contraction with a constant δ, and there exists such that and .
Proof
Remark 6.4
(Tightness of the Lipschitz constant)
-
(i)
Suppose that the setting of Theorem 6.1 holds. Set . Then . Hence, the claimed Lipschitz constant is tight.
-
(ii)
Suppose that the setting of Theorem 6.2 holds. Set . Then . Hence, the claimed contraction factor is tight.
Note in particular that the worst cases are subgradients of convex functions. Hence, the worst cases are attained by the proximal gradient method.
Theorem 6.5
(Case II: is cocoercive)
Let , let , and let . Suppose that A is maximally -monotone, is β-cocoercive, and B is maximally μ-monotone. Let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is -averaged.
-
(iii)
T is δ-Lipschitz continuous.
-
(iv)
There exists , and .
Suppose that . Then we additionally have:
-
(v)
T is a Banach contraction with a constant .
-
(vi)
and with a linear rate .
Proof
Observe that the assumption on A and Lemma 2.11 applied with T replaced by imply that there exists , N̅ is nonexpansive, such that .
| 78a |
| 78b |
Moreover, Proposition 2.16(i) implies that
| 79 |
Now proceed similarly to the proof of Theorem 6.1 but use (78a)–(78b) and (79). □
Theorem 6.6
Let , let , and let . Suppose that A is maximally -monotone, is β-cocoercive, and B is maximally μ-monotone. Let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is a Banach contraction with a constant .
-
(iii)
There exists such that and with a linear rate .
Proof
Observe that the assumption on A and Lemma 2.11 applied with T replaced by imply that there exists , N̅ is nonexpansive, such that .
| 80a |
| 80b |
Now proceed similarly to the proof of Theorem 6.5 in view of (79). □
Corollary 6.7
Let , let , and let . Suppose that A is maximally -monotone, is β-cocoercive, and B is maximally μ-monotone. Let . Set , set , and let . Then and . Moreover, T is a Banach contraction with a constant δ, and there exists such that and .
Proof
Theorem 6.8
(Case III: A is β-Lipschitz continuous)
Let . Suppose that A is β-Lipschitz continuous and that B is maximally μ-monotone. Let , and let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is -averaged.
-
(iii)
T is δ-Lipschitz continuous.
-
(iv)
There exists , and .
Suppose that . Then we additionally have:
-
(v)
T is a Banach contraction with a constant .
-
(vi)
and with a linear rate .
Proof
Combine Lemma 2.12 and Theorem 6.5 applied with replaced by . □
Theorem 6.9
Let . Suppose that A is β-Lipschitz continuous and that B is maximally μ-monotone. Let , and let . Set , set , set , and let . Then and . Moreover, the following hold:
-
(i)
, N is nonexpansive.
-
(ii)
T is a Banach contraction with a constant .
-
(iii)
There exists such that and with a linear rate .
Proof
Combine Lemma 2.12 and Theorem 6.6 applied with replaced by . □
Applications to optimization problems
Let , and let . Throughout this section, we shall assume that
We shall use ∂f to denote the subdifferential mapping from convex analysis.
Definition 7.1
(see [3, Definition 6.1])
An abstract subdifferential associates a subset of X with f at , and it satisfies the following properties:
-
(i)
if f is a proper lower semicontinuous convex function;
-
(ii)
if f is continuously differentiable;
-
(iii)
if f attains a local minimum at ;
-
(iv)
for every ,
The Clarke–Rockafellar subdifferential, Mordukhovich subdifferential, and Fréchet subdifferential all satisfy Definition 7.1(i)–(iv); see, e.g., [5, 19, 20], so they are .
Let . Recall that f is λ-hypoconvex (see [23, 26]) if
| 81 |
for all and or, equivalently,
| 82 |
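In one common normalization (conventions in [23, 26] can differ by a sign or a factor, so (81)–(82) should be read with the paper's own choice), λ-hypoconvexity can be expressed as:

```latex
\begin{align*}
f \text{ is } \lambda\text{-hypoconvex}
\quad&\iff\quad
f - \tfrac{\lambda}{2}\,\|\cdot\|^{2} \text{ is convex}\\
\quad&\iff\quad
f\bigl(t x + (1-t) y\bigr) \;\le\; t f(x) + (1-t) f(y)
  - \tfrac{\lambda}{2}\, t (1-t)\, \|x - y\|^{2}
\end{align*}
```

for all x, y ∈ X and t ∈ (0,1); for λ < 0 this is weaker than convexity, while for λ > 0 it is λ-strong convexity.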
For , the proximal mapping is defined at by
| 83 |
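The standard definition, which is presumably what (83) displays, is

```latex
\begin{equation*}
\operatorname{prox}_{\gamma f}(x)
\;=\;
\operatorname*{argmin}_{y \in X}
\Bigl( f(y) + \tfrac{1}{2\gamma}\,\|x - y\|^{2} \Bigr),
\end{equation*}
```

and for proper lower semicontinuous convex f it coincides with the resolvent of the subdifferential: prox_{γf} = J_{γ∂f} = (Id + γ∂f)^{−1}.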
Fact 7.2
Suppose that is a proper lower semicontinuous λ-hypoconvex function. Then
| 84 |
Moreover, we have:
-
(i)
The Clarke–Rockafellar, Mordukhovich, and Fréchet subdifferential operators of f all coincide.
-
(ii)
is maximally −λ-monotone.
-
(iii)
is single-valued and .
Proof
See [3, Proposition 6.2 and Proposition 6.3]. □
Proposition 7.3
Let . Suppose that and that one of the following conditions is satisfied:
-
(i)
f is μ-strongly convex, and g is ω-hypoconvex.
-
(ii)
f is ω-hypoconvex, and g is μ-strongly convex.
Then is convex and .
If, in addition, one of the following conditions is satisfied:
(a) .
(b) X is finite dimensional and .
(c) X is finite dimensional, f and g are polyhedral, and .
Then
| 85 |
and
| 86 |
Proof
It is clear that either (i) or (ii) implies that is convex, and the identity follows in view of Definition 7.1(i). Now, suppose that (i) holds along with one of the assumptions (a)–(c). Rewrite f and g as and observe that both f̅ and g̅ are convex, as is . Moreover, we have and . Now,
| 87a |
| 87b |
| 87c |
| 87d |
Here, (87b) follows from applying Definition 7.1(iv) to , (87c) follows from [2, Theorem 16.47] applied to f̅ and g̅, and (87d) follows from applying Fact 7.2 to f and g and using Definition 7.1(i). This verifies (85). Finally, (86) follows from combining (85) and [2, Theorem 16.3].
The following theorem provides an alternative proof to [17, Theorem 4.4] and [9, Theorem 5.4(ii)].
Theorem 7.4
Let , and let . Suppose that one of the following holds:
-
(i)
f is μ-strongly convex, and g is ω-hypoconvex.
-
(ii)
f is ω-hypoconvex, and g is μ-strongly convex,
and that (see Proposition 7.3 for sufficient conditions). Set
| 88 |
and let . Then , and T is α-averaged. Moreover, such that , , and .
Proof
Suppose that (i) holds. Then [2, Example 22.4] (respectively Fact 7.2(ii)) implies that (respectively ) is maximally μ-monotone (respectively maximally -monotone). The conclusion follows from applying Theorem 5.5(i) with replaced by . The proof for (ii) follows similarly by using Theorem 5.5(ii). □
Before we proceed further, we recall the following useful fact.
Fact 7.5
(Baillon–Haddad)
Let be a Fréchet differentiable convex function, and let . Then is β-Lipschitz continuous if and only if is -cocoercive.
Proof
See, e.g., [2, Corollary 18.17]. □
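As an illustrative numerical check of the Baillon–Haddad theorem, the following sketch assumes a convex quadratic f(x) = ½xᵀQx with Q symmetric positive semidefinite, so that ∇f(x) = Qx is β-Lipschitz with β = λmax(Q); the matrix and sample points are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Convex quadratic f(x) = 0.5 * x^T Q x with Q symmetric positive semidefinite;
# its gradient x -> Q x is beta-Lipschitz with beta = lambda_max(Q).
M = rng.standard_normal((5, 5))
Q = M.T @ M
beta = np.linalg.eigvalsh(Q).max()

def grad_f(x):
    return Q @ x

# Baillon-Haddad: grad f is (1/beta)-cocoercive, i.e. for all x, y,
#   <grad f(x) - grad f(y), x - y> >= (1/beta) * ||grad f(x) - grad f(y)||^2.
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    d = grad_f(x) - grad_f(y)
    assert d @ (x - y) >= d @ d / beta - 1e-9
print("cocoercivity inequality verified on random samples")
```

The converse direction (cocoercive implies Lipschitz) follows from Cauchy–Schwarz and needs no convexity of f.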
Lemma 7.6
Let , let , and let be a Fréchet differentiable function. Suppose that f is μ-strongly convex with a β-Lipschitz continuous gradient. Then the following hold:
-
(i)
is convex.
-
(ii)
is maximally μ-monotone.
-
(iii)
is -cocoercive.
Proof
(i): See, e.g., [2, Proposition 10.8]. (ii): See, e.g., [2, Example 22.4(iv)]. (iii): Combine (i), Lemma 2.10, and Corollary 2.14(ii) applied with replaced by . □
Theorem 7.7
(The forward–backward algorithm when f is μ-strongly convex)
Let , and let . Let f be μ-strongly convex and Fréchet differentiable with a β-Lipschitz continuous gradient, and let g be ω-hypoconvex. Suppose that . Let , and set . Set , and let . Then the following hold:
-
(i)
There exists such that .
Suppose that . Then we additionally have:
-
(ii)
and with a linear rate .
Proof
Note that Definition 7.1(ii) implies that . Set and observe that Proposition 7.3 and Proposition 2.2 imply that . It follows from [2, Example 22.4] (respectively Fact 7.2(ii)) that A (respectively B) is maximally μ-monotone (respectively maximally -monotone). Moreover, Lemma 7.6(iii) implies that is -cocoercive. (i)–(ii): Apply Theorem 6.1(iv)&(vi). □
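A minimal numerical sketch of the forward–backward iteration in this spirit follows. It is not the theorem's exact setting: we assume f(x) = ½‖x − b‖², which is 1-strongly convex with 1-Lipschitz gradient, and g = ‖·‖₁, which is convex and hence 0-hypoconvex; b, the weight, and the step length are illustrative choices:

```python
import numpy as np

# Illustrative forward-backward sketch (assumed data, not the paper's setting):
# f(x) = 0.5 * ||x - b||^2 is 1-strongly convex with 1-Lipschitz gradient,
# g(x) = lam * ||x||_1 is convex (hence 0-hypoconvex).
b = np.array([3.0, -1.5, 0.2, 0.0])
lam, gamma = 1.0, 0.5

def grad_f(x):
    return x - b

def prox_g(x, t):
    # Proximal operator of t * lam * ||.||_1: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros_like(b)
dists = []
for _ in range(50):
    x_new = prox_g(x - gamma * grad_f(x), gamma)  # forward-backward step
    dists.append(np.linalg.norm(x_new - x))
    x = x_new

# The step lengths decay geometrically, consistent with linear convergence.
assert dists[-1] < 1e-10
print("minimizer estimate:", x)
```

For this quadratic-plus-ℓ1 objective the iteration contracts with factor 1 − γμ = 0.5 per step in each coordinate, which matches the geometric decay observed in `dists`.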
To proceed to the next result, we need the following lemma.
Lemma 7.8
Let , let , and let be a Fréchet differentiable function. Suppose that g is ω-hypoconvex with a -Lipschitz continuous gradient. Then is -cocoercive.
Theorem 7.9
(The forward–backward algorithm when f is ω-hypoconvex)
Let , let , and let . Let f be ω-hypoconvex, and let g be μ-strongly convex and Fréchet differentiable with a β-Lipschitz continuous gradient. Suppose that . Let , and set . Set , and let . Then the following hold:
-
(i)
There exists such that .
Suppose that . Then we additionally have:
-
(ii)
and with a linear rate .
Proof
Proceed similarly to the proof of Theorem 7.7, but use Theorem 6.5(iv)&(vi). □
Theorem 7.10
(The forward–backward algorithm when f is -hypoconvex)
Let , and let . Let f be μ-strongly convex, and let g be Fréchet differentiable with a β-Lipschitz continuous gradient. Suppose that . Let , and set . Set , and let . Then the following hold:
-
(i)
There exists such that .
Suppose that . Then we additionally have:
-
(ii)
and with a linear rate .
Proof
Combine Lemma 2.12 applied with A replaced by and Theorem 7.9 applied with replaced by . □
Remark 7.11
The results of Theorem 6.2, Theorem 6.6, and Theorem 6.9 can be applied directly to optimization settings in the same fashion as Theorem 7.7, Theorem 7.9, and Theorem 7.10.
Graphical characterizations
This section contains 2D graphical representations of different Lipschitz continuous operator classes that admit I-N decompositions and of their composition classes. We illustrate the exact shapes of the composition classes in 2D alongside the conservative estimates from Theorem 3.4 and Theorem 4.2. Similar graphical representations have appeared before in the literature. In [10, 11], nonexpansiveness and firm nonexpansiveness (-averagedness) are characterized. Early preprints of [15] contain more 2D graphical representations, and the lecture notes [14] contain many such characterizations, with the purpose of illustrating how different properties relate to each other and of providing intuition on why different algorithms converge. This has been further extended and formalized in [24]. These illustrations provide more than intuition: it is a straightforward consequence of, e.g., [24, 25] that for compositions of two operator classes that admit I-N decompositions, there always exists a 2D worst case. Hence, if the 2D illustration implies that the composition class admits a specific -I-N decomposition, so does the full operator class.
In Sect. 8.1, we characterize many well-known special cases of operator classes that admit I-N decompositions. In Sect. 8.2, we characterize classes obtained by compositions of such operator classes and highlight differences between the true composition classes and their characterizations using Theorem 3.4.
Single operators
We consider classes of -I-N decompositions of Lipschitz continuous operators and graphically illustrate properties of some special cases. The illustrations should be read as follows. Assume that is represented by the marker in the figure. The diagram then shows where can end up in relation to . If the point is rotated in the picture, the rest of the picture rotates with it. The characterization is, by construction of -I-N decompositions, always a circle of radius shifted along the line defined by the origin and the point .
Lipschitz continuous operators
Let and let . Then R is β-Lipschitz continuous if and only if R admits an -I-N decomposition, with α chosen as 0. Figure 1 shows the case . The radius of the Lipschitz circle is .
Figure 1.

Illustration of β-Lipschitz continuous operator with
Cocoercive operators
Let , and let . Then R is -cocoercive if and only if R admits an -I-N decomposition, with chosen as . Figure 2 shows the cases and . The diameter is . The figure clearly illustrates that -cocoercive operators are also β-Lipschitz (but not necessarily the other way around).
Figure 2.

Illustration of -cocoercive operators with and
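The claim that cocoercive operators are β-Lipschitz, but not conversely, can be checked numerically. The following sketch is illustrative only: it assumes β = 2, takes a rotation as the nonexpansive part N in the decomposition R = (β/2)(Id + N), and uses a scaled 90° rotation as the Lipschitz-but-not-cocoercive counterexample:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 2.0

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# R = (beta/2)(Id + N) with N nonexpansive (here a rotation) is
# (1/beta)-cocoercive, hence beta-Lipschitz.
N = rot(1.0)          # rotations are isometries, hence nonexpansive
R = (beta / 2) * (np.eye(2) + N)

# S = beta * (rotation by 90 degrees) is beta-Lipschitz but NOT cocoercive:
# <Sx - Sy, x - y> = 0 for all x, y.
S = beta * rot(np.pi / 2)

for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    d, z = R @ x - R @ y, x - y
    assert d @ z >= d @ d / beta - 1e-9          # cocoercivity of R
    assert np.linalg.norm(d) <= beta * np.linalg.norm(z) + 1e-9
    e = S @ x - S @ y
    assert abs(e @ z) <= 1e-9                    # cocoercivity fails for S
    assert np.linalg.norm(e) <= beta * np.linalg.norm(z) + 1e-9
print("cocoercive => Lipschitz verified; converse fails for a rotation")
```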
Averaged operators
Let , and let . Then R is α-averaged if and only if R admits an -I-N decomposition, with chosen as . Figure 3 shows the cases and , and . All averaged operators are nonexpansive.
Figure 3.

Illustration of α-averaged operators with , , and
Conic operators
Let , and let . Then R is α-conically nonexpansive if and only if R admits an -I-N decomposition, with chosen as . Figure 4 shows the cases and . Conically nonexpansive operators fail to be nonexpansive for .
Figure 4.
Illustration of α-conically nonexpansive operators with and
μ-Monotone operators
Let , and suppose that is μ-monotone. The shortest distance between the vertical line and the origin in the illustration is . Figure 5 shows the case .
Figure 5.

Illustration of μ-monotone operator with
Compositions of two operators
In this section, we provide illustrations of compositions of different classes of Lipschitz continuous operators. We consider compositions of the form
. Let . We illustrate the regions within which can end up. For most of the considered composition classes, we provide two illustrations. The left illustration explicitly shows how the composition is constructed. It shows the region within which must end up. The second operator is applied at a subset, marked by crosses, of boundary points of that region. Given these as starting points, the dashed circles show where can end up for this subset. The right illustration shows, in gray, the resulting exact shape of the composition. It also contains the estimate from Theorem 3.4 that provides an I-N decomposition of the composition. These illustrations make it obvious that many different I-N decompositions are valid. They also reveal that the specific I-N decompositions provided in Theorem 3.4 are indeed suitable for our purpose of characterizing the composition as averaged, conic, or contractive.
Averaged-averaged composition
We first consider -averaged with . A special case is the forward–backward splitting operator with -cocoercive A and maximally monotone B. This implies that is -averaged for and that is -averaged. The example in Fig. 6 has individual averagedness parameters and , i.e., with and . Theorem 3.4 shows that the composition is of the form , where N is nonexpansive, i.e., it is 0.67-averaged. The fact that the composition is averaged is already known, see [8, 12].
Figure 6.
Illustration of composition of -averaged and -averaged operators with
The example in Fig. 7 shows and . Theorem 3.4 shows that the composition is of the form , where N is nonexpansive, i.e., it is 0.79-averaged.
Figure 7.
Illustration of composition of -averaged and -averaged operators with and
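The averagedness of such compositions can also be checked numerically. The formula α = (α₁ + α₂ − 2α₁α₂)/(1 − α₁α₂) is the composition rule from [8]; the parameters α₁ = α₂ = 1/2 below are an assumption that reproduces the 0.67 reported for Fig. 6. In 2D, with rotations playing the role of the nonexpansive parts:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def averaged(alpha, N):
    # T = (1 - alpha) * Id + alpha * N with N nonexpansive is alpha-averaged.
    return (1 - alpha) * np.eye(2) + alpha * N

# Composition rule from [8] (Combettes-Yamada):
# T2 o T1 is alpha-averaged with alpha = (a1 + a2 - 2*a1*a2) / (1 - a1*a2).
a1, a2 = 0.5, 0.5                       # assumed parameters; gives alpha = 2/3
alpha = (a1 + a2 - 2 * a1 * a2) / (1 - a1 * a2)
assert abs(alpha - 2 / 3) < 1e-12

rng = np.random.default_rng(2)
for _ in range(200):
    T1 = averaged(a1, rot(rng.uniform(0, 2 * np.pi)))
    T2 = averaged(a2, rot(rng.uniform(0, 2 * np.pi)))
    T = T2 @ T1
    # Extract N from T = (1 - alpha) * Id + alpha * N and check ||N|| <= 1
    # (for linear maps, nonexpansiveness is operator 2-norm at most 1).
    Nc = (T - (1 - alpha) * np.eye(2)) / alpha
    assert np.linalg.norm(Nc, 2) <= 1 + 1e-9
print("composition is", round(alpha, 2), "averaged on all sampled rotations")
```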
Conic-conic composition
We consider -averaged with . Several examples with this setting are considered for Douglas–Rachford splitting and forward–backward splitting in Sect. 5 and Sect. 6. We know from Theorem 4.2 that the composition is conic if . The example in Fig. 8 has and , which satisfies . Theorem 4.2 shows that the composition is of the form , where N is nonexpansive, i.e., it is 2.64-conic.
Figure 8.
Illustration of composition of -conic operator and -averaged operator with and
In Example 4.6, we have shown that the assumption is critical for the composition to be conic. Figure 9 illustrates the case and , which satisfies , hence Theorem 4.2 cannot be used to deduce that the composition is conic. Indeed, we see from the figure that the composition is not conic. It is impossible to draw a circle that touches the marker at and extends only to the left.
Figure 9.
Illustration of composition of -conic operator and -averaged operator with and
We conclude the conic–conic examples with a forward–backward example. The forward–backward splitting operator with A -cocoercive and B (maximally) monotone is composed of the -averaged resolvent and the -conic forward step . By Theorem 4.2, the composition with -conic is conic if . In the forward–backward setting, this corresponds to , which doubles the allowed range compared to the range that guarantees an averaged composition. This extended range has been shown before, e.g., in [13, 18].
In Fig. 10, we illustrate the forward–backward setting with . This corresponds to conic parameters and , i.e., with and . By Theorem 4.2, the composition is of the form , where N is nonexpansive, i.e., it is 19.99-conic. The left figure shows the resulting composition and (parts of) the conic approximation. The conic approximation is very large compared to the actual region. This is due to the local behavior around the point , where the exact shape is almost vertical. As , the exact shape approaches being vertical around and the radius of the conic circle approaches infinity. For , the exact shape extends to the right of (as in the figure above), and the composition is not conic.
Figure 10.
To the left is an illustration of the forward–backward composition with , where is the cocoercivity constant of A. It is a composition between an -conic operator and an -averaged operator with and . To the right is an illustration of a θ-relaxation of the same forward-backward map with
In the right figure, we consider the relaxed forward–backward map with . If the composition is α-conic, it is straightforward to verify that the relaxed map is θα-conic. Therefore, any gives a θα-averaged relaxed forward–backward map. An averaged map is needed to guarantee convergence to a fixed point when the map is iterated. In the figure, we let , which satisfies . The approximation is indeed averaged, but the region within which the composition can end up is very small compared to the conic approximation.
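The relaxation identity behind this claim is a one-line computation: if T = (1 − α)Id + αN with N nonexpansive, then T_θ = (1 − θ)Id + θT = (1 − θα)Id + θαN, i.e., T_θ is θα-conic with the same N. A minimal numerical check, where the values α = 2.5 and θ = 0.3 are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, theta = 2.5, 0.3                 # assumed values; theta * alpha = 0.75 < 1

# If T = (1 - alpha) * Id + alpha * N is alpha-conic, then the relaxation
# T_theta = (1 - theta) * Id + theta * T equals
# (1 - theta * alpha) * Id + theta * alpha * N, with the SAME nonexpansive N.
N = rng.standard_normal((2, 2))
N /= max(np.linalg.norm(N, 2), 1.0)     # normalize so N is nonexpansive
I = np.eye(2)
T = (1 - alpha) * I + alpha * N
T_theta = (1 - theta) * I + theta * T
assert np.allclose(T_theta, (1 - theta * alpha) * I + theta * alpha * N)
print("theta-relaxation of an alpha-conic map is theta*alpha-conic")
```

Since θα < 1 here, the relaxed map is in fact θα-averaged, matching the discussion above.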
Scaled averaged and cocoercive compositions
Compositions of scaled averaged and cocoercive operators are also special cases of the scaled conic composed with scaled conic operators treated in Theorem 4.2. This covers the forward–backward examples in Sect. 6, where the identity is shifted between the operators and the sum is (strongly) monotone. The operators in the composition are of the form and , where , , and .
In Fig. 11, we consider the forward–backward setting in Theorem 6.5. The forward–backward map is , and we let be 1-cocoercive and B be maximally 0.3-monotone. That is, we have shifted 0.3Id from A to B, and the sum is monotone. We use step-length . The proof of Theorem 6.5 shows that, in our setting, is 1.6-scaled 0.62-averaged and that is 1.6-cocoercive. Theorem 3.4 implies that the composition is of the form , where N is nonexpansive, i.e., it is 0.73-averaged.
Figure 11.
Illustration of composition of 1.6-scaled 0.62-averaged operator with 1.6-cocoercive operator. The composition comes from the forward–backward map with 1-cocoercive, B 0.3-monotone, and
Figure 12 considers a similar forward–backward setting, but with a strongly monotone sum. We let be 1-cocoercive and B be maximally 0.3-monotone, which implies that the sum is 0.1-strongly monotone. We keep step-length . The proof of Theorem 6.5 shows that, in our setting, is 1.4-scaled 0.62-averaged and that is 1.6-cocoercive. Theorem 3.4 implies that the composition is of the form , where N is nonexpansive, i.e., it is 0.87-contractive.
Figure 12.
Illustration of composition of 1.4-scaled 0.62-averaged operator with 1.6-cocoercive operator. The composition comes from the forward–backward map with 1-cocoercive, B 0.3-monotone, and
The final example in Fig. 13 considers a similar forward–backward setting in which the sum is not monotone. We let be 1-cocoercive and B be maximally 0.3-monotone, which implies that the sum is −0.1-monotone, i.e., not monotone. We use step-length . The proof of Theorem 6.5 shows that, in our setting, is 1.8-scaled 0.62-averaged and that is 1.6-cocoercive. Theorem 3.4 implies that the composition is of the form , where N is nonexpansive, i.e., it is 1.12-Lipschitz and not conic, averaged, or contractive.
Figure 13.
Illustration of composition of 1.8-scaled 0.62-averaged operator with 1.6-cocoercive operator. The composition comes from the forward–backward map with 1-cocoercive, B 0.3-monotone, and
Acknowledgements
Not applicable.
Appendix A
Proof of Lemma 2.3
Indeed, observe that
| 89 |
and
| 90 |
In view of (89) and (90) we have
and the conclusion follows. □
Appendix B
Proof of Lemma 2.12
(i): Because is nonexpansive, we learn from [2, Example 20.7] that , as is , is maximally monotone. The conclusion now follows in view of, e.g., [3, Lemma 2.5]. (ii): This is clear by observing that . □
Appendix C
Proof of Lemma 2.13
Indeed, by assumption, there exist nonexpansive mappings and such that
| 91 |
Now,
| 92a |
| 92b |
Using the triangle inequality, one can directly verify that is Lipschitz continuous with a constant . The proof is complete. □
Appendix D
Proof of Corollary 2.14
(i): It follows from Fact 7.5 that (respectively ) is -cocoercive (respectively -cocoercive). Now apply Lemma 2.13 with replaced by . (ii): Combine (i) with Fact 7.5 applied with f replaced by . □
Appendix E
Proof of Lemma 2.15
(i): Indeed, we have , where . Note that , hence Ñ is nonexpansive and the conclusion follows. (ii): Clear. □
Authors’ contributions
All authors contributed equally in writing this article. All authors read and approved the final manuscript.
Funding
PG was partially supported by the Swedish Research Council and the Wallenberg AI, Autonomous Systems and Software Program (WASP). WMM was partially supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant (NSERC-DG).
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Declarations
Competing interests
The authors declare that they have no competing interests.
Footnotes
Let . Then T is α-averaged if and a nonexpansive operator exists such that .
Let . Then T is α-conically nonexpansive if and a nonexpansive operator exists such that .
Let , and let . Then T is -cocoercive if a nonexpansive operator exists such that .
The paper [1] appeared online while putting the finishing touches on this paper. Partial results of this work were presented by the second author at the Numerical Algorithms in Nonsmooth Optimization workshop at Erwin Schrödinger International Institute for Mathematics and Physics (ESI) in Vienna in February 2019 and at the Operator Splitting Methods in Data Analysis workshop at the Flatiron Institute, in New York in March 2019. Both workshops predate [1].
Let be an operator. The resolvent of A, denoted by , is defined by , and the reflected resolvent of A, denoted by , is defined by
In passing, we mention that [2, Proposition 26.1(iv)] assumes that A and B are maximally monotone, which is not required here. However, the proof is the same.
Here and elsewhere, we use to denote the interval .
The assumption that is not restrictive. Indeed, since N is nonexpansive, an operator admits an -I-N decomposition if and only if it admits an -I-N decomposition. This is the reason why we define it only for nonnegative β.
Let C be a nonempty, closed convex subset of X. Here and elsewhere, we shall use to denote the normal cone operator associated with C, defined by if ; and , otherwise.
Contributor Information
Pontus Giselsson, Email: pontus.giselsson@control.lth.se.
Walaa M. Moursi, Email: walaa.moursi@uwaterloo.ca
References
- 1. Bartz, S., Dao, M.N., Phan, H.M.: Conical averagedness and convergence analysis of fixed point algorithms. arXiv preprint (2019). arXiv:1910.14185
- 2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York (2017)
- 3. Bauschke, H.H., Moursi, W.M., Wang, X.: Generalized monotone operators and their averaged resolvents. Math. Program., Ser. B (2020). doi:10.1007/s10107-020-01500-6
- 4. Burachik, R.S., Iusem, A.N.: Set-Valued Mappings and Enlargements of Monotone Operators. Springer, Berlin (2007)
- 5. Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
- 6. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5–6), 475–504 (2004). doi:10.1080/02331930412331327157
- 7. Combettes, P.L., Pennanen, T.: Proximal methods for cohypomonotone operators. SIAM J. Control Optim. 43(2), 731–742 (2004). doi:10.1137/S0363012903427336
- 8. Combettes, P.L., Yamada, I.: Compositions and convex combinations of averaged nonexpansive operators. J. Math. Anal. Appl. 425(1), 55–70 (2015). doi:10.1016/j.jmaa.2014.11.044
- 9. Dao, M.N., Phan, H.M.: Adaptive Douglas–Rachford splitting algorithm for the sum of two operators. SIAM J. Optim. 29(4), 2697–2724 (2019). doi:10.1137/18M121160X
- 10. Eckstein, J.: Splitting methods for monotone operators with applications to parallel optimization. Ph.D. thesis, MIT (1989)
- 11. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992). doi:10.1007/BF01581204
- 12. Giselsson, P.: Tight global linear convergence rate bounds for Douglas–Rachford splitting. J. Fixed Point Theory Appl. 19, 2241–2270 (2017). doi:10.1007/s11784-017-0417-1
- 13. Giselsson, P.: Nonlinear forward–backward splitting with projection correction (2019). arXiv:1908.07449
- 14. Giselsson, P.: Lecture notes on large-scale convex optimization (2015). http://control.lth.se/education/doctorate-program/large-scale-convex-optimization/
- 15. Giselsson, P., Boyd, S.: Linear convergence and metric selection for Douglas–Rachford splitting and ADMM. IEEE Trans. Autom. Control 62(2), 532–544 (2017). doi:10.1109/TAC.2016.2564160
- 16. Guo, K., Han, D.: A note on the Douglas–Rachford splitting method for optimization problems involving hypoconvex functions. J. Glob. Optim. 72(3), 431–441 (2018). doi:10.1007/s10898-018-0660-z
- 17. Guo, K., Han, D., Yuan, X.: Convergence analysis of Douglas–Rachford splitting method for strongly + weakly convex programming. SIAM J. Numer. Anal. 55(4), 1549–1577 (2017). doi:10.1137/16M1078604
- 18. Latafat, P., Patrinos, P.: Asymmetric forward–backward-adjoint splitting for solving monotone inclusions involving three operators. Comput. Optim. Appl. 68(1), 57–93 (2017). doi:10.1007/s10589-017-9909-6
- 19. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)
- 20. Mordukhovich, B.S.: Variational Analysis and Applications. Springer, Cham (2018)
- 21. Ogura, N., Yamada, I.: Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping. Numer. Funct. Anal. Optim. 22(1–2), 113–137 (2002). doi:10.1081/NFA-120003674
- 22. Reich, S.: On the asymptotic behavior of nonlinear semigroups and the range of accretive operators. J. Math. Anal. Appl. 79(1), 113–126 (1981). doi:10.1016/0022-247X(81)90013-5
- 23. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
- 24. Ryu, E.K., Hannah, R., Yin, W.: Scaled relative graph: nonexpansive operators via 2D Euclidean geometry (2019). arXiv:1902.09788
- 25. Ryu, E.K., Taylor, A.B., Bergeling, C., Giselsson, P.: Operator splitting performance estimation: tight contraction factors and optimal parameter selection. SIAM J. Optim. 30(3), 2251–2271 (2020). doi:10.1137/19M1304854
- 26. Wang, X.: On Chebyshev functions and Klee functions. J. Math. Anal. Appl. 368(1), 293–310 (2010). doi:10.1016/j.jmaa.2010.03.041
- 27. Wen, B., Chen, X., Pong, T.K.: Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems. SIAM J. Optim. 27(1), 124–145 (2017). doi:10.1137/16M1055323