Abstract
We study the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We propose a definition of sparseness of a family of operators and investigate a string-averaging algorithmic scheme that favorably handles the common fixed point problem when the family of operators is sparse. The convex feasibility problem is treated as a special case and a new subgradient projections algorithmic scheme is obtained.
1 Introduction
Given a finite family of operators T_i : R^n → R^n, i = 1, 2, …, m, acting on the Euclidean space R^n with Fix T_i ≠ ∅, i = 1, 2, …, m, the common fixed point problem is to find a point

(1)  x* ∈ Ω := ⋂_{i=1}^{m} Fix T_i,
where Fix T_i := {x ∈ R^n | T_i(x) = x} is the fixed point set of T_i. In this paper we study the common fixed point problem for sparse directed operators. We use the term directed operators for operators in the ℑ-class of operators as defined and investigated by Bauschke and Combettes in [3] and by Combettes in [18]. Additionally, we focus on sparse operators and, for that purpose, we give a definition of sparseness of a family of operators.
The significance of working with this class stems from the fact that many commonly used types of nonlinear operators arising in convex optimization are directed operators (see, e.g., [3]) and, when developing algorithms for the problem (1) for such operators, we take advantage of their sparsity, whenever it exists.
The algorithms that are in use to find a common fixed point can be, from their structural viewpoint, sequential, when only one operator at a time is used in each iteration, or simultaneous (parallel), when all operators in the given family are used in each iteration. There are algorithmic schemes which encompass both sequential and simultaneous features: the so-called string-averaging [9] and block-iterative projections (BIP) [1] schemes, see also [15]. It turns out that the sequential and the simultaneous algorithms are special cases of the string-averaging and of the BIP algorithmic schemes.
Our objective here is to propose and study a string-averaging algorithmic scheme that enables component-wise weighting. Our work is a theoretical development aimed at gauging how far the notions of sparsity, component-weighting and algorithmic string-averaging can be expanded to cover the common fixed point problem for directed operators. The origins lie in [11], where a simultaneous projection algorithm for systems of linear equations, called component averaging (CAV), which uses component-wise weighting, was proposed. Such weighting enables, as shown and demonstrated experimentally on problems of image reconstruction from projections in [11], significant and valuable acceleration of the early algorithmic iterations due to the high sparsity of the system matrix appearing there. A block-iterative version of CAV, named BICAV, was introduced later in [12]. Full mathematical analyses of these methods, as well as their companion algorithms for linear inequalities, were presented by Censor and Elfving [10] and by Jiang and Wang [25]. In Section 2 we present preliminary material on directed operators and discuss some of their particular cases. In Section 3 we develop and study our string-averaging algorithmic scheme. In Section 4 we consider, as a special case, the convex feasibility problem and apply our algorithm from Section 3 using subgradient projectors.
1.1 Earlier work
The string-averaging algorithmic scheme has attracted attention recently and further work on it has been reported since its presentation in [9]. In [14] we investigated the behavior of string-averaging algorithms for inconsistent convex feasibility problems. In Bauschke, Matoušková and Reich [4] string-averaging was studied in Hilbert space. In Crombez [19, 20] the string-averaging algorithmic paradigm is used to find common fixed points of certain paracontractive operators in Hilbert space. In Bilbao-Castro, Carazo, García and Fernández [6], an implementation of the string-averaging method to electron microscopy is reported. Butnariu, Davidi, Herman and Kazantsev [7] call a certain class of string-averaging methods the Amalgamated Projection Method and show its stable behavior under summable perturbations. The iterative procedure studied in Butnariu, Reich and Zaslavski [8, Sections 6 and 7] is also a particular case of the string-averaging method. In Rhee [27] the string-averaging scheme is applied to a problem in approximation theory.
The notion of sparseness is very well understood and used for matrices and, from there, the road to sparseness of the Jacobian (or generalized Jacobian) matrix as an indicator of sparseness of nonlinear operators is short, see, e.g., Betts and Frank [5]. Our definition of sparseness of operators does not require differentiability (or subdifferentiability) and generalizes those previous notions.
2 Directed operators
We recall the definitions and results on directed operators and their properties as they appear in Bauschke and Combettes [3, Proposition 2.4] and Combettes [18], which are also sources for further references on the subject. Let ⟨x, y⟩ and ∥x∥ be the Euclidean inner product and norm, respectively, in R^n.
Given x, y ∈ R^n, we denote the half-space

(2)  H(x, y) := {u ∈ R^n | ⟨u − y, x − y⟩ ≤ 0}.
Definition 1 An operator T : R^n → R^n is called directed if

(3)  Fix T ⊆ H(x, T(x)), for all x ∈ R^n,

or, equivalently,

(4)  ⟨z − T(x), x − T(x)⟩ ≤ 0, for all x ∈ R^n and all z ∈ Fix T.
The class of directed operators is denoted by ℑ. Bauschke and Combettes [3] defined the directed operators (although without using this name) and showed (see [3, Proposition 2.4]) (i) that the set of all fixed points of a directed operator T with nonempty Fix T is closed and convex because

(5)  Fix T = ⋂_{x ∈ R^n} H(x, T(x)),

and (ii) that the following holds

(6)  T ∈ ℑ implies I + λ(T − I) ∈ ℑ, for all λ ∈ [0, 1],
where I is the identity operator. The localization of fixed points is discussed in [23, pages 43–44]. In particular, it is shown there that a firmly nonexpansive operator, namely, an operator N : R^n → R^n that fulfills

(7)  ⟨N(x) − N(y), x − y⟩ ≥ ∥N(x) − N(y)∥², for all x, y ∈ R^n,
satisfies (4) and is, therefore, a directed operator. The class of directed operators includes additionally, according to [3, Proposition 2.3], among others, the resolvents of maximal monotone operators, the orthogonal projections and the subgradient projectors (see Definition 7 below). Note that every directed operator belongs to the class of operators ℱ_0, defined by Crombez [21, p. 161], whose elements are called elsewhere quasi-nonexpansive or paracontracting operators.
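To make Definition 1 concrete, here is a small numerical check (ours, in Python with NumPy; the choice of operator and test points is arbitrary) that the orthogonal projection onto the closed unit ball satisfies inequality (4):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_unit_ball(x):
    """Orthogonal projection onto the closed Euclidean unit ball."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

# Empirical check of (4): for every z in Fix T (here, the unit ball itself)
# and every x, <z - T(x), x - T(x)> <= 0.
for _ in range(10_000):
    x = 3.0 * rng.standard_normal(5)
    z = proj_unit_ball(rng.standard_normal(5))  # an arbitrary fixed point of T
    Tx = proj_unit_ball(x)
    assert np.dot(z - Tx, x - Tx) <= 1e-12
```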
The following definition of a closed operator will be required.
Definition 2 An operator T : R^n → R^n is said to be closed at y ∈ R^n if for every x̄ ∈ R^n and every sequence {x^k}_{k=0}^∞ in R^n such that lim_{k→∞} x^k = x̄ and lim_{k→∞} T(x^k) = y, we have T(x̄) = y.
For instance, the orthogonal projection onto a closed convex set is everywhere a closed operator, due to its continuity.
Remark 3 [18] If T : R^n → R^n is nonexpansive, then T − I is closed on R^n.
Consider a finite family T_i : R^n → R^n, i = 1, 2, …, m, of operators. In sequential algorithms for solving the common fixed point problem the order by which the operators are chosen for the iterations is determined by a control sequence of indices {i(k)}_{k=0}^∞, see, e.g., [15, Definition 5.1.1].
Definition 4 (i) Cyclic control. A control sequence {i(k)}_{k=0}^∞ is cyclic if i(k) = k mod m + 1, where m is the number of operators in the common fixed point problem.
(ii) Almost cyclic control. {i(k)}_{k=0}^∞ is almost cyclic on {1, 2, …, m} if 1 ≤ i(k) ≤ m for all k ≥ 0, and there exists an integer c ≥ m (called the almost cyclicality constant) such that, for all k ≥ 0, {1, 2, …, m} ⊆ {i(k + 1), i(k + 2), …, i(k + c)}.
The notions “cyclic” and “almost cyclic” are sometimes also called “periodic” and “quasi-periodic”, respectively, see, e.g., [22].
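Such controls are easily realized as index generators; the following minimal Python sketch (our naming) produces cyclic and almost cyclic sequences in the sense of Definition 4:

```python
from itertools import count

def cyclic_control(m):
    """Cyclic control of Definition 4(i): i(k) = k mod m + 1."""
    for k in count():
        yield k % m + 1

def almost_cyclic_control(m, pattern):
    """Almost cyclic control: endlessly repeat a finite index pattern that
    visits every element of {1, ..., m}; c = len(pattern) then serves as an
    almost cyclicality constant."""
    assert set(pattern) >= set(range(1, m + 1))
    while True:
        yield from pattern
```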
Let T_i : R^n → R^n, i = 1, 2, …, m, be a finite family of directed operators with a nonempty intersection of their fixed point sets, such that T_i − I is closed at 0 for every i ∈ {1, 2, …, m}. The following algorithm for finding a common fixed point of such a family is a special case of [18, Algorithm 6.1]. We will use it in the sequel.
Algorithm 5 Almost Cyclic Sequential Algorithm (ACSA) for solving the common fixed point problem
Initialization: x^0 ∈ R^n is an arbitrary starting point.
Iterative Step: Given x^k, compute x^{k+1} by

(8)  x^{k+1} = x^k + λ_k (T_{i(k)}(x^k) − x^k).
Control: {i(k)}_{k=0}^∞ is almost cyclic on {1, 2, …, m}.
Relaxation parameters: {λ_k}_{k=0}^∞ are confined to the interval [0, 2].
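A minimal Python sketch of the ACSA iteration (ours; the operators are callables, the control is an iterable of 1-based indices, and a constant relaxation parameter stands in for the sequence {λ_k}):

```python
import numpy as np

def acsa(operators, x0, control, relaxation=1.0, n_iters=1000):
    """Almost Cyclic Sequential Algorithm: iterative step (8),
    x^{k+1} = x^k + lambda_k * (T_{i(k)}(x^k) - x^k)."""
    x = np.asarray(x0, dtype=float)
    for _, i in zip(range(n_iters), control):
        Tx = operators[i - 1](x)        # control indices are 1-based
        x = x + relaxation * (Tx - x)
    return x
```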
The convergence theorem for Algorithm 5 for a finite family of directed operators is as follows.
Theorem 6 Let {T_i}_{i=1}^m be a finite family of directed operators T_i : R^n → R^n, which satisfies:
(i) Ω := ⋂_{i=1}^m Fix T_i is nonempty, and
(ii) T_i − I are closed at 0, for every i ∈ {1, 2, …, m}.
Then any sequence {x^k}_{k=0}^∞, generated by Algorithm 5, converges to a point in Ω.
Proof. This follows as a special case of [18, Theorem 6.6 (i)]. ■
In the next definition and lemma we recall the notion of the subgradient projector and show that this operator satisfies condition (ii) of Theorem 6.
Definition 7 (See, e.g., [3, Proposition 2.3(iv)].) Let f : R^n → R be a convex function such that the level set F := {x ∈ R^n | f(x) ≤ 0} is nonempty. The operator

(9)  T_f(y) := y − (f(y)/∥q∥²) q, if f(y) > 0, and T_f(y) := y, if f(y) ≤ 0,

where q is a selection from the subdifferential set ∂f(y) of f at y, is called a subgradient projector relative to f.
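A direct Python transcription of (9) (ours; the user supplies f and a subgradient selection, as in the hypothetical ℓ₁ example below):

```python
import numpy as np

def subgradient_projector(f, subgrad):
    """Return the subgradient projector T_f of (9) relative to a convex f."""
    def T(y):
        y = np.asarray(y, dtype=float)
        fy = f(y)
        if fy <= 0.0:
            return y            # y already lies in the level set F
        q = subgrad(y)          # a selection q from the subdifferential of f at y
        # If f(y) > 0 then 0 is not in the subdifferential (otherwise y would
        # minimize f and F would be empty), so the denominator is positive.
        return y - (fy / np.dot(q, q)) * q
    return T

# Example: f(x) = ||x||_1 - 1; sign(x) is a valid subgradient selection.
T = subgradient_projector(lambda x: np.abs(x).sum() - 1.0, np.sign)
```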
Lemma 8 Let f : R^n → R be a convex function, let y ∈ R^n and assume that the level set F ≠ ∅. For any q ∈ ∂f(y), define the closed convex set

(10)  L := {x ∈ R^n | f(y) + ⟨q, x − y⟩ ≤ 0}.
Then the following hold:
(i) F ⊆ L. If q ≠ 0 then L is a half-space, otherwise L = R^n.
(ii) Denoting by P_L(y) the orthogonal projection of y onto L,

(11)  P_L(y) = T_f(y).

(iii) P_L − I is closed at 0.
Proof. For (i) and (ii) see, e.g., [2, Lemma 7.3]. (iii) Denote Ψ := P_L − I. Take any x̄ ∈ R^n and any sequence {x^k}_{k=0}^∞ in R^n such that lim_{k→∞} x^k = x̄ and lim_{k→∞} Ψ(x^k) = 0. Since f is convex, its subdifferential is uniformly bounded on bounded sets, see, e.g., [2, Corollary 7.9]. Using this and the continuity of f we obtain, from (9), that f(x̄) ≤ 0 and, therefore, Ψ(x̄) = 0. ■
3 The new string averaging algorithmic scheme
We study here a particular modification of the string averaging paradigm, adapted to handle the common fixed point problem for sparse directed operators.
3.1 The string averaging prototypical scheme
The string averaging prototypical scheme is defined as follows. Let the string S_p, for p = 1, 2, …, t, be a finite, nonempty ordered subset of elements taken from {1, 2, …, m} of the form

(12)  S_p := (i_1^p, i_2^p, …, i_{γ(p)}^p).
The length γ(p) of the string S_p is the number of its elements. We do not require that the strings be disjoint. Suppose that there is a set Q ⊆ R^n such that there are operators V_1, V_2, …, V_m mapping Q into Q and an operator V which maps Q^t = Q × Q × ⋯ × Q (t times) into Q. Then the string averaging prototypical scheme is as follows.
Algorithm 9 The string averaging prototypical algorithmic scheme [9]
Initialization: x^0 ∈ Q is an arbitrary starting point.
Iterative Step: Given the current iterate x^k,
(i) calculate, for all p = 1, 2, …, t,

(13)  T_p(x^k) := V_{i_{γ(p)}^p} ⋯ V_{i_2^p} V_{i_1^p}(x^k),

(ii) and then calculate

(14)  x^{k+1} = V(T_1(x^k), T_2(x^k), …, T_t(x^k)).
For every p = 1, 2, …, t, this algorithmic scheme applies to x^k successively the operators whose indices belong to the p-th string. This can be done in parallel for all strings and then the operator V maps all end-points onto the next iterate x^{k+1}. This is indeed an algorithm provided that the operators {V_i}_{i=1}^m and V all have algorithmic implementations. In this framework we get a sequential algorithm by the choice t = 1 and S_1 = (1, 2, …, m), and a simultaneous algorithm by the choice t = m and S_p = (p), p = 1, 2, …, t.
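For concreteness, a Python sketch of the prototype (ours) with V chosen as a fixed convex combination of the strings' end-points, which is one admissible choice of V; the weights are assumed nonnegative and summing to one:

```python
import numpy as np

def string_averaging(operators, strings, weights, x0, n_iters=100):
    """String-averaging prototype: (13) within strings, then (14)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        ends = []
        for S in strings:               # (13): traverse each string in order
            y = x
            for i in S:                 # 1-based operator indices
                y = operators[i - 1](y)
            ends.append(y)
        # (14): here V is a convex combination with the given weights
        x = sum(w * e for w, e in zip(weights, ends))
    return x
```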
In our new algorithmic scheme we assume that a finite family of directed operators {T_i}_{i=1}^m (see Definition 1) is given with ⋂_{i=1}^m Fix T_i ≠ ∅. After applying the operators along strings, the end-points will be averaged not by taking a plain convex combination but by doing a so-called component-averaging step. The component averaging principle, introduced for linear systems in [11], [12], is a useful tool for handling sparseness in the linear case.
3.2 Sparseness of operators and the new algorithm
To define sparseness of the set of operators we need to speak about zeros of the vectors x − Ti(x).
Definition 10 Let T : R^n → R^n be a directed operator. If (x − T(x))_j = 0 for all x ∉ Fix T, then j is called a void of T and we write j = void T.
For every i ∈ {1, 2, …, m} define the following sets
(15)  Z_i := {(i, j) | (x − T_i(x))_j = 0, for all x ∉ Fix T_i},

i.e., Z_i contains all the pairs (i, j) such that (x − T_i(x))_j = 0, for all x ∉ Fix T_i.
Definition 11 The family of directed operators {T_i}_{i=1}^m will be called sparse if the set Z := ⋃_{i=1}^m Z_i is nonempty and contains many elements.
Remark 12 The word “many” in Definition 11 is not well-defined. The more pairs (i, j) are contained in Z, the higher is the sparseness of the family. It is of some interest to note that sparseness of matrices was considered as early as 1971. Wilkinson [28, p. 191] refers to it by saying: “We shall refer to a matrix as dense if the percentage of zero elements or its distribution is such as to make it uneconomic to take advantage of their presence”. Obviously, denseness is meant here as the opposite of sparseness.
Denote by I_j, 1 ≤ j ≤ n, the set of indices of strings that contain an index of an operator T_i for which (i, j) ∉ Z_i, i.e.,

(16)  I_j := {p | there exists an i ∈ S_p such that (i, j) ∉ Z_i},

and let s_j := |I_j| (the cardinality of I_j). Equivalently,

(17)  I_j = {p | j ≠ void T_i, for some i ∈ S_p}.
Definition 13 [24, Definition 1] The component-wise string averaging operator relative to the family of strings S := {S_1, S_2, …, S_t} is the mapping CA_S : R^{n×t} → R^n defined as follows. For x^1, x^2, …, x^t ∈ R^n,

(18)  (CA_S(x^1, x^2, …, x^t))_j := (1/s_j) Σ_{p ∈ I_j} x_j^p, for every 1 ≤ j ≤ n,

where x_j^p is the j-th component of x^p, for 1 ≤ p ≤ t.
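A Python sketch of (16)–(18) (ours; sparsity information enters through a user-supplied predicate nonvoid(i, j) saying that variable j is not a void of T_i, and we assume every variable is moved by some string, so that s_j ≥ 1):

```python
import numpy as np

def component_average(endpoints, strings, nonvoid):
    """CA_S of (18): average the j-th components only over the strings in
    I_j of (16), i.e., over strings containing an operator that moves x_j."""
    n = len(endpoints[0])
    out = np.empty(n)
    for j in range(n):
        I_j = [p for p, S in enumerate(strings)
               if any(nonvoid(i, j) for i in S)]          # (16), 0-based p
        out[j] = np.mean([endpoints[p][j] for p in I_j])  # (18); s_j = len(I_j)
    return out
```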
Our new scheme performs sequential steps within each of the strings of the family S and merges the resulting end-points by the component-wise string averaging operator (18) as follows.
Algorithm 14
Initialization: x^0 ∈ R^n is an arbitrary starting point; define an integer constant N such that N ≥ m.
Iterative step: Given x^k, compute x^{k+1} as follows:
(i) For every 1 ≤ p ≤ t (possibly in parallel): execute a finite number, not exceeding N, of iterative steps of the form (8) on the operators {T_i}_{i∈S_p} of the p-th string, and denote the resulting end-point by y^{k,p}.
(ii) Apply

(19)  x^{k+1} = CA_S(y^{k,1}, y^{k,2}, …, y^{k,t}).
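Combining the preceding sketches yields the following illustrative, self-contained rendition of Algorithm 14 (ours; unit relaxation in (8), and a fixed number of sweeps through each string stands in for the rule of at most N steps):

```python
import numpy as np

def algorithm14(operators, strings, nonvoid, x0, sweeps=3, n_iters=200):
    """Sketch of Algorithm 14: (i) finitely many steps of (8) within each
    string (possibly in parallel), (ii) merge end-points by CA_S as in (19)."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    I = [[p for p, S in enumerate(strings) if any(nonvoid(i, j) for i in S)]
         for j in range(n)]                    # the index sets I_j of (16)
    for _ in range(n_iters):
        ends = []
        for S in strings:                      # (i): run each string from x
            y = x
            for _ in range(sweeps):            # finitely many steps of (8)
                for i in S:
                    y = operators[i - 1](y)
            ends.append(y)
        x = np.array([np.mean([ends[p][j] for p in I[j]]) for j in range(n)])
    return x                                   # (ii) realized as (19)
```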
3.3 Convergence
For the proof of convergence of Algorithm 14 we need the following construction. From the family of directed operators {T_i}_{i=1}^m in R^n we construct another family of directed operators in a higher-dimensional space R^s and a family of strings for those operators. For the new operators and new strings, the operators belonging to different strings do not share any common variables. Therefore, the parallel processing of the strings in R^n in (i) of Algorithm 14 is equivalent to performing sequential ACSA iterations on the new directed operators in R^s. Moreover, using ideas of Pierra's formalization [26], we show that the component-wise string averaging step in (ii) of Algorithm 14 is equivalent to an orthogonal projection onto a certain subspace of R^s. Inspired by the construction in [24], this is done as follows.
We represent each I_j explicitly as

(20)  I_j = {p_{j,1}, p_{j,2}, …, p_{j,s_j}},
which defines each double-indexed p in an obvious way. Let R^s be the s-dimensional Euclidean space, where s := Σ_{j=1}^n s_j, and denote the components of each y ∈ R^s by

(21)  y = (y_{1,1}, …, y_{1,s_1}, y_{2,1}, …, y_{2,s_2}, …, y_{n,1}, …, y_{n,s_n}),
where y_{j,ℓ} denotes the copy of the j-th variable associated with the string p_{j,ℓ} ∈ I_j. Define a linear mapping

(22)  δ : R^n → R^s, (δ(x))_{j,ℓ} := x_j, for j = 1, 2, …, n and ℓ = 1, 2, …, s_j,

where y = δ(x) satisfies y_{j,1} = y_{j,2} = ⋯ = y_{j,s_j} = x_j for j = 1, 2, …, n. Let D be the range of δ, i.e.,

(23)  D := δ(R^n) = {y ∈ R^s | y_{j,1} = y_{j,2} = ⋯ = y_{j,s_j}, for all j = 1, 2, …, n},
which is a subspace of R^s. Define γ := Σ_{p=1}^t γ(p) new operators 𝒯_w^p : R^s → R^s, where 1 ≤ w ≤ γ(p) and 1 ≤ p ≤ t, in the following manner. For each p, i_1^p, i_2^p, …, i_{γ(p)}^p are the indices of the operators T_i that are included in the string S_p, see (12). To each pair (p, w) we attach a new operator 𝒯_w^p, defined by

(24)  𝒯_w^p := I + U_p ∘ (T_{i_w^p} − I) ∘ Π_p,

where the operators in the right-hand side of (24) are defined as follows. Π_p : R^s → R^n, 1 ≤ p ≤ t, is defined component-wise for each 1 ≤ j ≤ n as

(25)  (Π_p(y))_j := y_{j,ℓ}, if p = p_{j,ℓ} for some 1 ≤ ℓ ≤ s_j, and (Π_p(y))_j := y_{j,1}, otherwise,

T_{i_w^p} is the w-th directed operator in the string S_p, and U_p : R^n → R^s, 1 ≤ p ≤ t, is defined component-wise for each 1 ≤ j ≤ n and 1 ≤ ℓ ≤ s_j as

(26)  (U_p(x))_{j,ℓ} := x_j, if p = p_{j,ℓ}, and (U_p(x))_{j,ℓ} := 0, otherwise.
The new operators have the fixed point sets Fix 𝒯_w^p = {y ∈ R^s | Π_p(y) ∈ Fix T_{i_w^p}} ⊇ δ(Fix T_{i_w^p}). Each string S_p in R^n gives rise to a string

(27)  𝒮_p := (𝒯_1^p, 𝒯_2^p, …, 𝒯_{γ(p)}^p)

of the same length in R^s. Note that operators that belong to different strings in the family of strings {𝒮_p}_{p=1}^t do not have a common variable which is not a void.
Lemma 15 Every operator 𝒯_w^p, 1 ≤ w ≤ γ(p), 1 ≤ p ≤ t, is a directed operator and 𝒯_w^p − I is closed at 0, where I is the identity operator in R^s.
Proof. Let z ∈ Fix 𝒯_w^p. By (24) and (26), U_p((T_{i_w^p} − I)(Π_p(z))) = 0, hence ((T_{i_w^p} − I)(Π_p(z)))_j = 0 for every j with p ∈ I_j, while every j with p ∉ I_j is a void of all operators of the string S_p. Therefore Π_p(z) ∈ Fix T_{i_w^p}. For every x ∈ R^s we then have

(28)  ⟨z − 𝒯_w^p(x), x − 𝒯_w^p(x)⟩ = ⟨Π_p(z) − T_{i_w^p}(Π_p(x)), Π_p(x) − T_{i_w^p}(Π_p(x))⟩ ≤ 0,

where the equality holds because the components (j, ℓ) with p_{j,ℓ} ≠ p are left unchanged by 𝒯_w^p, and the coordinates j with p ∉ I_j, being voids of T_{i_w^p}, contribute nothing to the right-hand side. Since the operator T_{i_w^p} is directed and Π_p(z) ∈ Fix T_{i_w^p}, (28) implies that 𝒯_w^p is also directed. Next, we show that 𝒯_w^p − I is closed at 0. Let {x^k}_{k=0}^∞ be a sequence in R^s such that lim_{k→∞} x^k = x̄ and lim_{k→∞} (𝒯_w^p − I)(x^k) = 0. Since the operator Π_p is linear and continuous, we obtain

(29)  lim_{k→∞} Π_p(x^k) = Π_p(x̄),

and, since ∥(𝒯_w^p − I)(x)∥ = ∥(T_{i_w^p} − I)(Π_p(x))∥ for all x ∈ R^s,

(30)  lim_{k→∞} (T_{i_w^p} − I)(Π_p(x^k)) = 0.

The operator T_{i_w^p} − I is closed at zero and, therefore,

(31)  T_{i_w^p}(Π_p(x̄)) = Π_p(x̄).

Applying U_p to both sides of (31), written as (T_{i_w^p} − I)(Π_p(x̄)) = 0, we obtain

(32)  U_p((T_{i_w^p} − I)(Π_p(x̄))) = 0,

i.e., by (24), 𝒯_w^p(x̄) = x̄, from which the closedness of 𝒯_w^p − I at 0 follows. ■
Define the set

(34)  F := ⋂_{p=1}^t ⋂_{w=1}^{γ(p)} Fix 𝒯_w^p.
The mapping δ : R^n → D is a one-to-one mapping. Therefore, in the space R^s, we can reformulate the problem (1) as: find a point

(35)  y* ∈ D ∩ F.

Since every index i ∈ {1, 2, …, m} appears in at least one string, this means that

(36)  δ(Ω) = D ∩ F,

and, hence, the m-sets problem (1) is reduced to the 2-sets problem (35), which involves only a vector subspace and a convex set.
Next we present an alternative formulation of Algorithm 14 in which the operations are performed in R^s.
Algorithm 16
Initialization:
(i) x^0 ∈ R^n is arbitrary; define an integer constant N such that N ≥ m.
(ii) y^0 = δ(x^0) is the initial vector in R^s.
Iterative step: Given y^k, compute y^{k+1} via:
(i) In R^s, for every 1 ≤ p ≤ t (possibly in parallel): execute a finite number, not exceeding N, of iterative steps of the form (8) on the operators 𝒯_1^p, 𝒯_2^p, …, 𝒯_{γ(p)}^p of the p-th string 𝒮_p, and denote the resulting end-points by ỹ^{k,p}.
(ii) Apply CA_S in R^s as follows: for 1 ≤ j ≤ n, set

(37)  y^{k+1}_{j,ℓ} := (1/s_j) Σ_{q=1}^{s_j} ỹ^{k,p_{j,q}}_{j,q}, for all 1 ≤ ℓ ≤ s_j.

(iii) Denote x^{k+1} := δ^{-1}(y^{k+1}), i.e., x^{k+1}_j = y^{k+1}_{j,1} for 1 ≤ j ≤ n.
The following lemma shows that the averaging operation in the iterative step (ii) of Algorithm 16 is the orthogonal projection onto the subspace D.
Lemma 17 Let y = (y_{1,1}, y_{1,2}, …, y_{1,s_1}, …, y_{n,1}, y_{n,2}, …, y_{n,s_n}) ∈ R^s. Then

(38)  (P_D(y))_{j,ℓ} = (1/s_j) Σ_{q=1}^{s_j} y_{j,q}, for all 1 ≤ j ≤ n and 1 ≤ ℓ ≤ s_j.
Proof. Using the definition of the orthogonal projection we obtain

(39)  P_D(y) = argmin_{d ∈ D} ∥d − y∥² = δ(argmin_{x ∈ R^n} Σ_{j=1}^n Σ_{q=1}^{s_j} (x_j − y_{j,q})²).

The minimum is obtained when the gradient is equal to zero,

(40)  Σ_{q=1}^{s_j} 2(x_j − y_{j,q}) = 0, for all 1 ≤ j ≤ n.

Then,

(41)  x_j = (1/s_j) Σ_{q=1}^{s_j} y_{j,q},

and

(42)  (P_D(y))_{j,ℓ} = (δ(x))_{j,ℓ} = (1/s_j) Σ_{q=1}^{s_j} y_{j,q},

and the proof is complete. ■
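Lemma 17 is easy to confirm numerically; the following check (ours, with an arbitrarily chosen clone structure s_1 = 2, s_2 = 3) compares component averaging with a directly computed least-squares projection onto D:

```python
import numpy as np

rng = np.random.default_rng(1)
groups = [[0, 1], [2, 3, 4]]        # coordinates of R^5 carrying x_1 and x_2
y = rng.standard_normal(5)

# Projection onto D by component averaging, as in (38).
p = y.copy()
for g in groups:
    p[g] = y[g].mean()

# Direct computation: D is the range of delta, represented by the matrix A.
A = np.zeros((5, 2))
for j, g in enumerate(groups):
    A[g, j] = 1.0
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes ||A x - y||
assert np.allclose(p, A @ x_star)                # P_D(y) = delta(x*)
```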
Now we are ready to prove our main convergence result.
Theorem 18 Let {T_i}_{i=1}^m be a finite family of directed operators T_i : R^n → R^n such that T_i − I are closed at 0, for every i ∈ {1, 2, …, m}, and assume that Ω := ⋂_{i=1}^m Fix T_i ≠ ∅. Then any sequence {x^k}_{k=0}^∞, generated by Algorithm 14, converges to a solution of (1).
Proof. The consistency assumption on the problem (1) implies that (35) is also consistent. Moreover, Lemma 15 guarantees that all the operators 𝒯_w^p, 1 ≤ w ≤ γ(p), 1 ≤ p ≤ t, are directed and that 𝒯_w^p − I and P_D − I are closed at 0. Algorithm 16 can be executed in R^s in parallel or sequentially, since the strings do not contain any common non-void variables, and its averaging step is, by Lemma 17, an application of P_D. Therefore, Theorem 6 yields convergence to a common fixed point of the operators 𝒯_w^p, 1 ≤ w ≤ γ(p), 1 ≤ p ≤ t, and P_D, that is, by (36), to a point of δ(Ω), and the proof is complete. ■
4 Special case: The convex feasibility problem
The convex feasibility problem (CFP) is to find a point x* in the intersection C := ⋂_{i=1}^m C_i of m closed convex subsets C_1, C_2, …, C_m ⊆ R^n. Each C_i is expressed as

(43)  C_i = {x ∈ R^n | f_i(x) ≤ 0},
where f_i : R^n → R is a convex function, so the CFP requires a solution of the system of convex inequalities

(44)  f_i(x) ≤ 0, i = 1, 2, …, m.
The convex feasibility problem is a special case of the common fixed point problem, where the directed operators are the subgradient projectors relative to the f_i (see Definition 7 and Lemma 8 above).
In a recent paper by Gordon and Gordon [24] a new parallel “Component-Averaged Row Projections (CARP)” method for the solution of large sparse linear systems was introduced. It proceeds by dividing the equations into nonempty, not necessarily disjoint, sets (strings), performing Kaczmarz row projections within the strings, and merging the results by component-averaging operations to form the next iterate. As shown in [24], using orthogonal projections onto convex sets, this method and its convergence proof also apply to the consistent nonlinear CFP.
In contrast, when applied to a CFP, our Algorithm 14 gives rise to a method which is structurally similar to CARP but uses subgradient projections instead of orthogonal projections. This is, of course, a development that might be very useful for CFPs with nonlinear convex sets for which each orthogonal projection mandates an inner-loop of distance optimization. We use now our results from Section 3 to present a string-averaging algorithm with component-wise averaging for a sparse CFP.
Sparseness of the nonlinear system (44) can be defined in compliance with Definitions 10 and 11 by speaking about zeros of the subgradients of the functions f_i; to do so we use the next definition.
Definition 19 Let f_i : R^n → R, i = 1, 2, …, m, be convex functions. For any x ∈ R^n, the m × n matrix Q(x) := (q_{ij}(x)) is called a generalized Jacobian of the family of functions {f_i}_{i=1}^m at the point x if q_{ij}(x) = (q^i)_j, for all i and all j, for some q^i such that q^i ∈ ∂f_i(x).
This definition coincides in our case with Clarke's generalized Jacobian, see [16] and [17]. A generalized Jacobian Q(x) of the functions in (44) is not unique because of the possibility to fill it up with different subgradients from each subdifferential set. In case all the f_i are differentiable, the generalized Jacobian reduces to the usual Jacobian.
We define for every i ∈ {1, 2, …, m} the following sets

(45)  Z_i := {(i, j) | q_{ij}(x) = 0, for all x ∈ R^n and for every generalized Jacobian Q(x)}.
A mapping F : R^n → R^m given by F(x) := (f_1(x), f_2(x), …, f_m(x)) will be called sparse if some of its component functions f_i do not depend on some of their variables x_j, which means that Z := ⋃_{i=1}^m Z_i ≠ ∅. The more pairs (i, j) are contained in Z, the higher is the sparseness of the mapping F.
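In the spirit of Definition 19, Z can be read off any sparsity pattern that dominates the generalized Jacobians; a toy illustration (the pattern below is invented):

```python
import numpy as np

# True where f_i may depend on x_j; False entries are identically vanishing
# entries of every generalized Jacobian Q(x).
pattern = np.array([[True,  True,  False, False],
                    [False, True,  True,  False],
                    [False, False, True,  True]])

# Z of (45): the pairs (i, j) with an identically zero Jacobian entry.
Z = {(i, j) for i, row in enumerate(pattern)
            for j, dep in enumerate(row) if not dep}
print(sorted(Z))   # the more pairs in Z, the sparser the mapping F
```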
Next we recall the cyclic subgradient projections (CSP) method for the CFP (studied in [13]), which is a special version of the ACSA algorithm (Algorithm 5).
Algorithm 20 Cyclic Subgradient Projections (CSP)
Initialization: x^0 ∈ R^n is arbitrary.
Iterative step:

(46)  x^{k+1} = x^k − λ_k (f_{i(k)}(x^k)/∥q^k∥²) q^k, if f_{i(k)}(x^k) > 0, and x^{k+1} = x^k otherwise,

where q^k ∈ ∂f_{i(k)}(x^k) is a subgradient of f_{i(k)} at the point x^k.
Relaxation parameters: {λ_k}_{k=0}^∞ are confined to the interval [ε, 2 − ε], where ε > 0.
Control: Almost cyclic on {1, 2, …, m}.
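A compact Python rendition of the CSP iteration (46) (ours; fs and subgrads are lists of callables, and a constant relaxation in (0, 2) replaces {λ_k}):

```python
import numpy as np

def csp(fs, subgrads, x0, relax=1.0, n_iters=5000):
    """Cyclic Subgradient Projections, iterative step (46)."""
    x = np.asarray(x0, dtype=float)
    m = len(fs)
    for k in range(n_iters):
        i = k % m                       # cyclic control
        fx = fs[i](x)
        if fx > 0.0:                    # otherwise x^{k+1} = x^k
            q = subgrads[i](x)
            x = x - relax * (fx / np.dot(q, q)) * q
    return x
```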
According to our scheme, the algorithm for solving the CFP performs CSP steps within the strings and merges the results by the CA_S(x^1, x^2, …, x^t) component-averaging operation.
Algorithm 21
Initialization: x^0 ∈ R^n is arbitrary; define an integer constant N such that N ≥ m.
Iterative step: Given x^k, compute x^{k+1} via:
(i) For every 1 ≤ p ≤ t (possibly in parallel): execute a finite number, not exceeding N, of CSP steps (46) on the inequalities of the p-th string S_p, and denote the resulting end-point by y^{k,p}.
(ii) Apply

(47)  x^{k+1} = CA_S(y^{k,1}, y^{k,2}, …, y^{k,t}).
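Finally, a toy run of Algorithm 21 on a small, artificially sparse system of affine inequalities (all data invented for illustration; each string holds one inequality and the sets I_j are listed by hand):

```python
import numpy as np

fs = [lambda x: x[0] + x[1] - 1.0,      # f_1 involves x_1, x_2
      lambda x: x[1] - x[2]]            # f_2 involves x_2, x_3
qs = [lambda x: np.array([1.0, 1.0, 0.0]),
      lambda x: np.array([0.0, 1.0, -1.0])]
strings = [[0], [1]]                    # one inequality per string
I = [[0], [0, 1], [1]]                  # I_j: strings whose functions touch x_j

x = np.array([2.0, 2.0, 0.0])
for _ in range(200):
    ends = []
    for S in strings:                   # (i): CSP steps within each string
        y = x.copy()
        for i in S:
            fy = fs[i](y)
            if fy > 0.0:
                q = qs[i](y)
                y = y - (fy / np.dot(q, q)) * q
        ends.append(y)
    # (ii) = (47): component-wise string averaging of the end-points
    x = np.array([np.mean([ends[p][j] for p in I[j]]) for j in range(3)])
print(x, [f(x) for f in fs])            # both inequalities are ~satisfied
```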
Acknowledgments
This work was supported by grant No. 2003275 of the United States-Israel Binational Science Foundation (BSF) and by a National Institutes of Health (NIH) grant No. HL70472.
References
1. Aharoni R, Censor Y. Block-iterative projection methods for parallel computation of solutions to convex feasibility problems. Linear Algebra and its Applications. 1989;120:165–175.
2. Bauschke HH, Borwein JM. On projection algorithms for solving convex feasibility problems. SIAM Review. 1996;38:367–426.
3. Bauschke HH, Combettes PL. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Mathematics of Operations Research. 2001;26:248–264.
4. Bauschke HH, Matoušková E, Reich S. Projection and proximal point methods: convergence results and counterexamples. Nonlinear Analysis: Theory, Methods and Applications. 2004;56:715–738.
5. Betts JT, Frank PD. A sparse nonlinear optimization algorithm. Journal of Optimization Theory and Applications. 1994;82:519–541.
6. Bilbao-Castro JR, Carazo JM, García I, Fernández JJ. Parallel iterative reconstruction methods for structure determination of biological specimens by electron microscopy. Proceedings of The International Conference on Image Processing (ICIP). 2003;1:I565–I568.
7. Butnariu D, Davidi R, Herman GT, Kazantsev IG. Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems. IEEE Journal of Selected Topics in Signal Processing. 2007;1:540–547.
8. Butnariu D, Reich S, Zaslavski AJ. Stable convergence theorems for infinite products and powers of nonexpansive mappings. Numerical Functional Analysis and Optimization. 2008;29:304–323.
9. Censor Y, Elfving T, Herman GT. Averaging strings of sequential iterations for convex feasibility problems. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and their Applications. Amsterdam: Elsevier; 2001. pp. 101–113.
10. Censor Y, Elfving T. Block-iterative algorithms with diagonally scaled oblique projections for the linear feasibility problem. SIAM Journal on Matrix Analysis and Applications. 2002;24:40–58.
11. Censor Y, Gordon D, Gordon R. Component averaging: An efficient iterative parallel algorithm for large and sparse unstructured problems. Parallel Computing. 2001;27:777–808.
12. Censor Y, Gordon D, Gordon R. BICAV: A block-iterative, parallel algorithm for sparse systems with pixel-related weighting. IEEE Transactions on Medical Imaging. 2001;20:1050–1060.
13. Censor Y, Lent A. Cyclic subgradient projections. Mathematical Programming. 1982;24:233–235.
14. Censor Y, Tom E. Convergence of string-averaging projection schemes for inconsistent convex feasibility problems. Optimization Methods and Software. 2003;18:543–554.
15. Censor Y, Zenios SA. Parallel Optimization: Theory, Algorithms, and Applications. New York, NY, USA: Oxford University Press; 1997.
16. Clarke FH. Generalized gradients and applications. Transactions of the American Mathematical Society. 1975;205:247–262.
17. Clarke FH. On the inverse function theorem. Pacific Journal of Mathematics. 1976;64:97–102.
18. Combettes PL. Quasi-Fejérian analysis of some optimization algorithms. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and their Applications. Amsterdam: Elsevier; 2001. pp. 115–152.
19. Crombez G. Finding common fixed points of strict paracontractions by averaging strings of sequential iterations. Journal of Nonlinear and Convex Analysis. 2002;3:345–351.
20. Crombez G. Finding common fixed points of a class of paracontractions. Acta Mathematica Hungarica. 2004;103:233–241.
21. Crombez G. A geometrical look at iterative methods for operators with fixed points. Numerical Functional Analysis and Optimization. 2005;26:157–175.
22. Dye JM, Reich S. Unrestricted iterations of nonexpansive mappings in Hilbert space. Nonlinear Analysis. 1992;18:199–207.
23. Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. New York and Basel: Marcel Dekker; 1984.
24. Gordon D, Gordon R. Component-averaged row projections: A robust, block-parallel scheme for sparse linear systems. SIAM Journal on Scientific Computing. 2005;27:1092–1117.
25. Jiang M, Wang G. Convergence studies on iterative algorithms for image reconstruction. IEEE Transactions on Medical Imaging. 2003;22:569–579.
26. Pierra G. Decomposition through formalization in a product space. Mathematical Programming. 1984;28:96–115.
27. Rhee H. An application of the string averaging method to one-sided best simultaneous approximation. Journal of the Korea Society of Mathematical Education, Series B: Pure and Applied Mathematics. 2003;10:49–56.
28. Wilkinson JH. Introduction to Part II. In: Wilkinson JH, Reinsch C, editors. Handbook for Automatic Computation, Volume II: Linear Algebra. Springer-Verlag; 1971.