Abstract
We propose the split common fixed point problem, which requires finding a common fixed point of a family of operators in one space whose image under a linear transformation is a common fixed point of another family of operators in the image space. We formulate and analyze a parallel algorithm for solving this split common fixed point problem for the class of directed operators and note how it unifies and generalizes previously discussed problems and algorithms.
1 Introduction
In this paper we propose a new problem, called the split common fixed point problem (SCFPP), and study it for the class of directed operators T such that T – I is closed at the origin. These operators were introduced and investigated by Bauschke and Combettes in [3, Definition 2.2] and by Combettes in [16], although not called by this name. We present a unified framework for the study of this problem and class of operators and propose iterative algorithms and study their convergence. The SCFPP is a generalization of the split feasibility problem (SFP) and of the convex feasibility problem (CFP). The class of directed operators is an important class since it includes the orthogonal projections and the subgradient projectors, and we also supply an additional operator from this class.
The split common fixed point problem (SCFPP) requires finding a common fixed point of a family of operators in one space such that its image under a linear transformation is a common fixed point of another family of operators in the image space. This generalizes the convex feasibility problem (CFP), the two-sets split feasibility problem (SFP) and the multiple-sets split feasibility problem (MSSFP).
Problem 1 The split common fixed point problem.
Given operators U_i : R^N → R^N, i = 1, 2, …, p, and T_j : R^M → R^M, j = 1, 2, …, r, with nonempty fixed point sets C_i, i = 1, 2, …, p, and Q_j, j = 1, 2, …, r, respectively, and given an M × N real matrix A, the split common fixed point problem (SCFPP) is
(1)   find a vector x* ∈ C := ⋂_{i=1}^{p} C_i such that Ax* ∈ Q := ⋂_{j=1}^{r} Q_j.
Such problems arise in the field of intensity-modulated radiation therapy (IMRT) when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model, see [10]. The problem with only a single pair of sets C in RN and Q in RM was first introduced by Censor and Elfving [11] and was called the split feasibility problem (SFP). They used their simultaneous multiprojections algorithm (see also [15, Subsection 5.9.2]) to obtain iterative algorithms to solve the SFP. Their algorithms, as well as others, see, e.g., Byrne [6], involve matrix inversion at each iterative step. Calculating inverses of matrices is very time-consuming, particularly if the dimensions are large. Therefore, a new algorithm for solving the SFP was devised by Byrne [7], called the CQ-algorithm, with the following iterative step
(2)   x^{k+1} = P_C ( x^k + γ A^t ( P_Q − I )( A x^k ) ),
where x^k and x^{k+1} are the current and the next iteration vectors, respectively, γ ∈ (0, 2/L) where L is the largest eigenvalue of the matrix A^tA (t stands for matrix transposition), I is the unit matrix or operator and P_C and P_Q denote the orthogonal projections onto C and Q, respectively.
The CQ-algorithm converges to a solution of the SFP, for any starting vector x^0 ∈ R^N, whenever the SFP has a solution. When the SFP has no solutions, the CQ-algorithm converges to a minimizer of ∥P_Q(Ac) − Ac∥ over all c ∈ C, whenever such a minimizer exists. A block-iterative CQ-algorithm, called the BICQ-method, is also available in [7], see also Byrne [8] and his recent book [9]. The MSSFP, posed and studied in [12], was handled, for both the feasible and the infeasible cases, with a proximity function minimization approach, namely, if the MSSFP is consistent then unconstrained minimization of the proximity function yields the value 0, otherwise, in the inconsistent case, it finds a point which is least violating the feasibility by being “closest” to all sets, as “measured” by the proximity function. Masad and Reich [19] is a recent sequel to [12] where they prove weak and strong convergence theorems for an algorithm that solves the multiple-set split convex feasibility problem in Hilbert space.
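To make the iterative step (2) concrete, here is a minimal Python/NumPy sketch of the CQ-algorithm, assuming that the projections onto C and Q are available in closed form; the box and the ball used below are purely illustrative stand-ins, not sets taken from the text.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=1000):
    """CQ-algorithm sketch: x^{k+1} = P_C(x^k + gamma * A^T (P_Q - I)(A x^k))."""
    L = np.linalg.norm(A, 2) ** 2            # largest eigenvalue of A^T A
    if gamma is None:
        gamma = 1.0 / L                       # any value in (0, 2/L) is admissible
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
    return x

# Illustrative constraint sets (assumed for this sketch, not taken from the text):
proj_C = lambda x: np.clip(x, -1.0, 1.0)                                           # C = box [-1, 1]^N
proj_Q = lambda y: y if np.linalg.norm(y) <= 2.0 else 2.0 * y / np.linalg.norm(y)  # Q = ball of radius 2

A = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
x_sfp = cq_algorithm(A, proj_C, proj_Q, np.zeros(2))
```

The constant L is computed once up front; each iteration then needs only matrix-vector products and the two projections.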
In the case of nonlinear constraint sets, computing an orthogonal projection may require solving a nonlinear optimization problem that minimizes the distance between the point and the constraint set. The projection can, however, easily be approximated by a linearization that uses the current constraint violation and a subgradient at the current point. This was done by Yang, in his recent paper [21], where he proposed a relaxed version of the CQ-algorithm in which orthogonal projections are replaced by subgradient projections, which are easily executed when the sets C and Q are given as lower level sets of convex functions, see also [23]. In [13] Censor, Motova and Segal formulated a simultaneous subgradient projections algorithm for the MSSFP.
Many common types of operators arising in convex optimization belong to the class of directed operators. These operators were introduced and investigated by Bauschke and Combettes in [3] (denoted there as 𝔗-class) and by Combettes in [16]. Using the notion of directed operators we develop algorithms for the SCFPP. In Section 2 we present preliminary material on the directed operators and discuss some particular cases. In Section 3 we formulate the two operators split fixed point problem and study our algorithm for it. In Section 4 we present our parallel algorithm for the SCFPP and establish its convergence and, in Section 5, we note how it unifies and generalizes previously discussed problems and algorithms.
2 Directed operators
The class 𝔗 of operators was introduced and investigated by Bauschke and Combettes in [3, Definition 2.2] and by Combettes in [16]. Operators in this class were named directed operators in Zaknoon [22] and further employed under this name in [14]. We recall definitions and results on directed operators and their properties as they appear in [3, Proposition 2.4] and [16], which are also sources for references on the subject. Let RN be the N-dimensional Euclidean space with 〈x, y〉 and ∥x∥ as the Euclidean inner product and norm, respectively.
Given x, y ∈ RN we denote
(3)   H(x, y) := { u ∈ R^N | ⟨u − y, x − y⟩ ≤ 0 }.
Definition 2 An operator T : RN → RN is called a directed operator, if
(4)   Fix T ⊆ H(x, T(x)) for all x ∈ R^N,
where Fix T is the fixed point set of T; equivalently,
(5)   ⟨q − T(x), x − T(x)⟩ ≤ 0 for all x ∈ R^N and all q ∈ Fix T.
The class of directed operators is denoted by 𝔗, i.e.
(6)   𝔗 := { T : R^N → R^N | Fix T ⊆ H(x, T(x)) for all x ∈ R^N }.
Bauschke and Combettes [3] showed the following:
- (i) That the set of all fixed points of a directed operator T with nonempty Fix T is closed and convex because
(7)   Fix T = ⋂_{x ∈ R^N} H(x, T(x)).
- (ii) That, denoting by I the unit operator,
(8)
This class of operators is fundamental because many common types of operators arising in convex optimization belong to it and because it allows a complete characterization of Fejér-monotonicity [3, Proposition 2.7]. The localization of fixed points is discussed in [18, pages 43-44]. In particular, it is shown there that a firmly nonexpansive operator, namely, an operator Ω : R^N → R^N that fulfills
(9)   ⟨Ω(x) − Ω(y), x − y⟩ ≥ ∥Ω(x) − Ω(y)∥² for all x, y ∈ R^N,
satisfies (7) and is, therefore, a directed operator. According to [3, Proposition 2.3], the class of directed operators additionally includes, among others, the resolvents of maximal monotone operators, the orthogonal projections and the subgradient projectors (see Example 5 below). Note that every directed operator belongs to the class of operators ℱ0, defined by Crombez [17, p. 161],
(10)   ℱ0 := { T : R^N → R^N | ∥T(x) − q∥ ≤ ∥x − q∥ for all q ∈ Fix T and all x ∈ R^N },
whose elements are called elsewhere quasi-nonexpansive or paracontracting operators.
The following definition of a closed operator originated in Browder [5] (see, e.g., [16]) and will be required in the sequel.
Definition 3 An operator T : R^N → R^N is said to be closed at a point y ∈ R^N if for every x̄ ∈ R^N and every sequence {x^k}_{k=0}^∞ in R^N such that lim_{k→∞} x^k = x̄ and lim_{k→∞} T(x^k) = y, we have T(x̄) = y.
For instance, the orthogonal projection onto a closed convex set is a closed operator everywhere, due to its continuity.
Remark 4 [16] If T : RN → RN is nonexpansive then T – I is closed on RN.
In the next example and lemma we recall the notion of the subgradient projector ΠF(y) and show that ΠF(y) – I is closed at 0.
Example 5 Let f : RN → R be a convex function such that the level set F := {x ∈ RN | f(x) ≤ 0} is nonempty. The operator
(11)   Π_F(y) := y − (f(y)/∥q∥²) q if f(y) > 0, and Π_F(y) := y if f(y) ≤ 0,
where q is a selection from the subdifferential set ∂f(y) of f at y, is called a subgradient projector relative to f.
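For illustration, the following Python sketch evaluates the subgradient projector (11), given an oracle that returns f(y) together with one subgradient q ∈ ∂f(y); the quadratic example at the end is a hypothetical choice made only for this sketch.

```python
import numpy as np

def subgradient_projector(f_and_subgrad, y):
    """Subgradient projector Pi_F(y) of (11): when f(y) > 0, move from y along -q by
    f(y)/||q||^2 (q a subgradient at y); when f(y) <= 0, y already lies in F and is kept."""
    fy, q = f_and_subgrad(y)
    if fy <= 0:
        return y
    return y - (fy / np.dot(q, q)) * q        # assumes q != 0 when f(y) > 0

# Hypothetical example: f(y) = ||y||^2 - 1, so F is the closed unit ball.
f_and_subgrad = lambda y: (np.dot(y, y) - 1.0, 2.0 * y)
print(subgradient_projector(f_and_subgrad, np.array([2.0, 0.0])))   # moves toward F: [1.25, 0.]
```

By Lemma 6 below, this single step coincides with the orthogonal projection of y onto the half-space L of (12).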
Lemma 6 Let f : R^N → R be a convex function, let y ∈ R^N and assume that the level set F ≠ ∅. For any q ∈ ∂f(y), define the closed convex set
(12)   L := { x ∈ R^N | f(y) + ⟨q, x − y⟩ ≤ 0 }.
Then the following hold:
(i) F ⊆ L. If q ≠ 0 then L is a half-space, otherwise L = R^N.
- (ii) Denoting by P_L(y) the orthogonal projection of y onto L,
(13)   Π_F(y) = P_L(y).
(iii) Π_F − I is closed at 0.
Proof. For (i) and (ii) see, e.g., [2, Lemma 7.3]. (iii) Denote Ψ := Π_F − I. Take any x̄ ∈ R^N and any sequence {x^k}_{k=0}^∞ in R^N such that lim_{k→∞} x^k = x̄ and lim_{k→∞} Ψ(x^k) = 0. Define f_+(y) := max{f(y), 0}. Then Ψ(y) = −(f_+(y)/∥q∥²) q with q ∈ ∂f(y). Since f_+ is convex, its subdifferential is uniformly bounded on bounded sets, see, e.g., [2, Corollary 7.9]. Using this and the continuity of f_+ we obtain that f_+(x̄) = 0 and, therefore, Ψ(x̄) = 0. ■
Influenced by the framework established in Bregman et al. [4], and by Aharoni, Berman and Censor's (δ, η)-Algorithm [1] for solving convex feasibility problems, we define next another type of operators which we call “E-δ operators”. We need first the following setup. Let E ⊂ RN be a nonempty closed convex set. We assume, without loss of generality, that E is expressed as
(14)   E = { x ∈ R^N | e(x) ≤ 0 },
where e : R^N → R is a convex function. Given a point z ∈ R^N and a real number δ, 0 < δ ≤ 1, we define, for z ∉ E, the ball
(15)   B(z, δe(z)) := { x ∈ R^N | ∥x − z∥ ≤ δe(z) }.
For all pairs (y, t) ∈ RN × RN we look at the half-spaces of the form
(16)   S(y, t) := { x ∈ R^N | ⟨t, x − y⟩ ≤ 0 },
and define
(17)   A_δ(e(z)) := { (y, t) ∈ R^N × R^N | E ⊆ S(y, t) and S(y, t) ∩ int B(z, δe(z)) = ∅ }.
We also need to impose the following condition.
Condition 7 Given a set E ⊂ RN, described as in (14), it is true that for every z ∉ E
(18)   A_δ(e(z)) ≠ ∅.
Every convex set E can be described by (14) with e(z) = d(z, E), the distance function between the point z and the set E, and in this case Condition 7 always holds.
Definition 8 Given a set E = {x ∈ RN | e(x) ≤ 0} where e : RN → R is a convex function and a real number δ, 0 < δ ≤ 1, such that Condition 7 holds, we define the operator TE,δ for any z ∈ RN, by
(19)   T_{E,δ}(z) := P_{S(y,t)}(z) if z ∉ E, and T_{E,δ}(z) := z if z ∈ E,
where (y, t) is any selection from Aδ(e(z)), and call it an E-δ operator.
The fact that any E-δ operator is a directed operator follows from its definition.
Lemma 9 If TE,δ is an E-δ operator then TE,δ − I is closed at 0.
Proof. Let {z^k}_{k=0}^∞ be a sequence with z^k ∉ E for all k ≥ 0, such that lim_{k→∞} z^k = q ∈ R^N and lim_{k→∞} ∥T_{E,δ}(z^k) − z^k∥ = 0. For every k = 0, 1, 2, … we have
(20)   ∥T_{E,δ}(z^k) − z^k∥ ≥ δ e(z^k) ≥ 0.
Taking limits on both sides of the last inequality we obtain
(21)   0 = lim_{k→∞} ∥T_{E,δ}(z^k) − z^k∥ ≥ δ lim_{k→∞} e(z^k) ≥ 0,
and from the continuity of e it follows that e(q) = 0 and, therefore, q ∈ Fix T_{E,δ}. ■
Next we show that the subgradient projector of Example 5 is a TE,δ operator. To show that an operator is a TE,δ operator one needs to guarantee (among other things) that, given a set E, the intersection B(z, δe(z)) ∩ E is empty for all z ∉ E, for some choice of a real number δ, 0 < δ ≤ 1. This is done in the next lemma.
Lemma 10 Let e : R^N → R be a convex function such that the level set E := {x ∈ R^N | e(x) ≤ 0} is nonempty. Then there exists a real number δ, 0 < δ ≤ 1, such that the subgradient projector Π_E(z) of Example 5 is a T_{E,δ} operator.
Proof. If e(z) ≤ 0 then z ∈ E and, by definition, Π_E(z) = z. If e(z) > 0 then, using the setup of (14)-(17), let
(22)   L_e(z, t) := { x ∈ R^N | e(z) + ⟨t, x − z⟩ ≤ 0 },
where t ∈ ∂e(z). By Lemma 6 we have that E ⊆ L_e(z, t). Now we need to show that int B(z, δe(z)) ∩ L_e(z, t) = ∅. Denoting w = Π_E(z) ∈ L_e(z, t), it follows from (22) that
(23)   e(z) ≤ ⟨t, z − w⟩ ≤ ∥t∥ ∥w − z∥.
By [2, Corollary 7.9] the subdifferential ∂e(z) is uniformly bounded on bounded sets, i.e., there exists a K > 0 such that ∥t∥ ≤ K, hence e(z) ≤ K ∥w − z∥. Taking any δ < 1/K we obtain
(24)   δ e(z) < e(z)/K ≤ ∥w − z∥,
which, since w = P_{L_e(z,t)}(z) by Lemma 6, implies that
(25)   int B(z, δe(z)) ∩ L_e(z, t) = ∅.
■
Aside from theoretical interest, the extension (of subgradient projectors) to T_{E,δ} operators can lead to algorithms useful in practice, provided that the computational effort of finding the hyperplanes S(y, t) is reasonable. Another special case (besides the subgradient hyperplane) is obtained by constructing S(y, t) via an interior point of the convex set (using the assumption that in each set we know a point in its interior), see [15, Algorithm 5.5.2].
3 The two-operators split common fixed point problem
The split common fixed point problem for a single pair of directed operators is obtained from Problem 1 with p = r = 1.
Definition 11 Let A be a real M × N matrix and let U : RN → RN and T : RM → RM be operators with nonempty Fix U = C and Fix T = Q. The two-operators split common fixed point problem is to find x* ∈ C such that Ax* ∈ Q.
Denoting the solution set of the two-operators SCFPP by
(26)   Γ := { y ∈ C | Ay ∈ Q },
the following algorithm is designed to solve it.
Algorithm 12
Initialization: Let x0 ∈ RN be arbitrary.
Iterative step: For k ≥ 0 let
(27)   x^{k+1} = U ( x^k + γ A^t ( T − I )( A x^k ) ),
where γ ∈ (0, 2/L), L is the largest eigenvalue of the matrix A^tA and I is the unit operator.
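A minimal NumPy sketch of the iterative step (27) follows, assuming that U and T are supplied as callables (for instance, directed operators such as the projectors sketched earlier); the stopping rule is an illustrative addition and not part of Algorithm 12.

```python
import numpy as np

def two_operator_scfpp(A, U, T, x0, gamma=None, iters=1000, tol=1e-8):
    """Algorithm 12 sketch: x^{k+1} = U(x^k + gamma * A^T (T - I)(A x^k))."""
    L = np.linalg.norm(A, 2) ** 2             # largest eigenvalue of A^T A
    if gamma is None:
        gamma = 1.0 / L                        # any value in (0, 2/L)
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x_next = U(x + gamma * (A.T @ (T(Ax) - Ax)))
        if np.linalg.norm(x_next - x) < tol:   # heuristic stop, not part of Algorithm 12
            return x_next
        x = x_next
    return x
```

Taking U = P_C and T = P_Q recovers the CQ-algorithm (2).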
We recall the definition of Fejér-monotone sequences, which will be useful for our further analysis.
Definition 13 A sequence {x^k}_{k=0}^∞ is called Fejér-monotone with respect to a given nonempty set S ⊆ R^N if for every x ∈ S,
(28)   ∥x^{k+1} − x∥ ≤ ∥x^k − x∥ for all k ≥ 0.
To prove convergence of Algorithm 12 we need the following lemma.
Lemma 14 Given a real M × N matrix A, let U : R^N → R^N and T : R^M → R^M be directed operators with nonempty Fix U = C and Fix T = Q. Any sequence {x^k}_{k=0}^∞, generated by Algorithm 12, is Fejér-monotone with respect to the solution set Γ.
Proof. Taking y ∈ Γ we use (10) to obtain
(29)   ∥x^{k+1} − y∥² = ∥U(x^k + γ A^t (T − I)(Ax^k)) − y∥² ≤ ∥x^k + γ A^t (T − I)(Ax^k) − y∥² = ∥x^k − y∥² + γ² ∥A^t (T − I)(Ax^k)∥² + 2γ ⟨x^k − y, A^t (T − I)(Ax^k)⟩.
From the definition of L it follows that
(30)   ∥A^t (T − I)(Ax^k)∥² ≤ L ∥(T − I)(Ax^k)∥².
Denoting Θ := 2γ ⟨x^k − y, A^t (T − I)(Ax^k)⟩ and using (5) we obtain
(31)   Θ = 2γ ⟨Ax^k − Ay, (T − I)(Ax^k)⟩ = 2γ ( ⟨T(Ax^k) − Ay, (T − I)(Ax^k)⟩ − ∥(T − I)(Ax^k)∥² ) ≤ −2γ ∥(T − I)(Ax^k)∥².
From (29), by using (30) and (31), it follows that
(32)   ∥x^{k+1} − y∥² ≤ ∥x^k − y∥² + γ (γL − 2) ∥(T − I)(Ax^k)∥².
Then, from the definition of γ, we obtain
(33)   ∥x^{k+1} − y∥ ≤ ∥x^k − y∥,
from which the Fejér-monotonicity with respect to Γ follows. ■
The next lemma describes a property of directed operators that will be used in our convergence analysis.
Lemma 15 Let T : RN → RN be a directed operator with Fix T ≠ Ø. For any q ∈ Fix T and any x ∈ RN,
(34)   ⟨x − T(x), x − q⟩ ≥ ∥x − T(x)∥².
Proof. Since T is directed, we use (5) to obtain
(35)   ⟨q − T(x), x − T(x)⟩ = ⟨q − x, x − T(x)⟩ + ∥x − T(x)∥² ≤ 0,
from which the proof follows. ■
Now we present the convergence result for Algorithm 12.
Theorem 16 Given a real M × N matrix A, let U : R^N → R^N and T : R^M → R^M be directed operators with nonempty Fix U = C and Fix T = Q. Assume that (U − I) and (T − I) are closed at 0. If Γ ≠ ∅, i.e., the problem is consistent, then any sequence {x^k}_{k=0}^∞, generated by Algorithm 12, converges to a split common fixed point x* ∈ Γ.
Proof. Take any y ∈ Γ. From (32) we obtain that the sequence {∥x^k − y∥²}_{k=0}^∞ is monotonically decreasing and, being nonnegative, convergent. Therefore,
(36)   lim_{k→∞} ∥(T − I)(Ax^k)∥ = 0.
From the Fejér-monotonicity of {x^k}_{k=0}^∞ it follows that the sequence is bounded. Denoting by x* a cluster point of {x^k}_{k=0}^∞, let {k_ℓ}_{ℓ=0}^∞ be a sequence of indices such that
(37)   lim_{ℓ→∞} x^{k_ℓ} = x*.
Then, from (36) and the closedness of (T − I) at 0, we obtain
(38)   T(Ax*) = Ax*,
from which Ax* ∈ Q follows. Denote
(39)   u^k := x^k + γ A^t (T − I)(Ax^k).
Then
(40)   ∥u^k − x^k∥ = γ ∥A^t (T − I)(Ax^k)∥ ≤ γ √L ∥(T − I)(Ax^k)∥,
and, from (36) and (37), it follows that
(41)   lim_{ℓ→∞} u^{k_ℓ} = x*.
Next we show that x* ∈ C. Assume, by negation, that x* ∉ C, i.e., that U(x*) ≠ x*. Then, from the closedness of the operator (U − I) at 0 and from (41), it follows that
(42)   ∥U(u^{k_ℓ}) − u^{k_ℓ}∥ does not converge to 0 as ℓ → ∞.
Therefore, there exist an ε > 0 and a subsequence {u^{k_{ℓ_s}}}_{s=0}^∞ of the sequence {u^{k_ℓ}}_{ℓ=0}^∞ such that
(43)   ∥U(u^{k_{ℓ_s}}) − u^{k_{ℓ_s}}∥ ≥ ε for all s.
Since U is directed, for any z ∈ Γ we have, by virtue of Lemma 15, that for s = 1, 2, …,
(44)   ⟨u^{k_{ℓ_s}} − U(u^{k_{ℓ_s}}), u^{k_{ℓ_s}} − z⟩ ≥ ∥u^{k_{ℓ_s}} − U(u^{k_{ℓ_s}})∥² ≥ ε².
It can be shown, following the same lines as in the proof of Lemma 14, that for any z ∈ Γ we have
(45)   ∥u^k − z∥ ≤ ∥x^k − z∥ for all k ≥ 0.
Since x^{k+1} = U(u^k), k = 0, 1, …, (10) implies that
(46)   ∥x^{k+1} − z∥ ≤ ∥u^k − z∥ for all k ≥ 0.
Then (45) and (46) indicate that the sequence {x^1, u^1, x^2, u^2, …} is Fejér-monotone with respect to Γ. Since U(u^{k_{ℓ_s}}) = x^{k_{ℓ_s}+1}, we obtain, using (44), that the sequence {u^{k_{ℓ_s}}}_{s=0}^∞ is also Fejér-monotone with respect to Γ. Moreover,
(47)   ∥U(u^{k_{ℓ_s}}) − z∥² ≤ ∥u^{k_{ℓ_s}} − z∥² − ε² for all s,
and this cannot hold for infinitely many vectors u^{k_{ℓ_s}}. Hence x* ∈ C and, therefore, x* ∈ Γ.
Replacing y by x* in (32), we obtain that the sequence {∥x^k − x*∥²}_{k=0}^∞ is monotonically decreasing and that its subsequence {∥x^{k_ℓ} − x*∥²}_{ℓ=0}^∞ converges to 0. Hence lim_{k→∞} x^k = x*. ■
4 A parallel algorithm for the SCFPP
We employ a product space formulation, originally due to Pierra [20], to derive and analyze a simultaneous algorithm for the SCFPP of Problem 1. Let Γ be the solution set of the SCFPP. We introduce the spaces V = R^N and W = R^{pN+rM}, where r, p, N and M are as in Problem 1, and adopt the notational convention that the product spaces and all objects in them are represented in boldface type.
Define the following sets in the product spaces
(48)   C := V = R^N
and
(49) |
and the matrix
(50)   A := ( √α_1 I, √α_2 I, …, √α_p I, √β_1 A^t, √β_2 A^t, …, √β_r A^t )^t,
where αi > 0, for i = 1, 2, …, p, and βj > 0, for j = 1, 2, …, r, and t stands for matrix transposition.
Let us define also the operator T : W → W by
(51) |
We have obtained a two-operators split common fixed point problem in the product space, with sets C = R^N and Q ⊆ W, the matrix A, the identity operator I : C → C and the operator T : W → W. This problem can be solved using Algorithm 12. It is also easy to verify that the following equivalence holds
(52)   x ∈ Γ if and only if Ax ∈ Q.
Therefore, we may apply Algorithm 12, whose iterative step in the product space reads
(53)   x^{k+1} = x^k + γ A^t (T − I)(A x^k),
to the problem (48)-(51) in order to obtain a solution of the original SCFPP. We translate the iterative step (53) to the original spaces R^N and R^M using the relation
(54)   A^t (T − I)(A x) = ∑_{i=1}^{p} α_i (U_i − I)(x) + ∑_{j=1}^{r} β_j A^t (T_j − I)(A x),
and obtain the following algorithm,
Algorithm 17
Initialization: Let x0 be arbitrary.
Iterative step: For k ≥ 0 let
(55)   x^{k+1} = x^k + γ ( ∑_{i=1}^{p} α_i (U_i − I)(x^k) + ∑_{j=1}^{r} β_j A^t (T_j − I)(A x^k) ).
Here γ ∈ (0, 2/L), with L = ∑_{i=1}^{p} α_i + λ ∑_{j=1}^{r} β_j, where λ is the largest eigenvalue of the matrix A^tA.
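The iterative step (55) lends itself to parallel evaluation of the p + r residuals (U_i − I)(x^k) and (T_j − I)(Ax^k). The sketch below, which assumes the operators are given as Python callables and the weights as lists, shows one such sweep together with one admissible choice of γ.

```python
import numpy as np

def parallel_scfpp_step(x, A, U_list, T_list, alphas, betas, gamma):
    """One sweep of the iterative step (55):
    x^{k+1} = x^k + gamma * ( sum_i alpha_i (U_i - I)(x^k)
                              + sum_j beta_j A^T (T_j - I)(A x^k) ).
    The p + r residuals are independent of each other and can be computed in parallel."""
    Ax = A @ x
    step = sum(a * (U(x) - x) for a, U in zip(alphas, U_list))
    step = step + A.T @ sum(b * (T(Ax) - Ax) for b, T in zip(betas, T_list))
    return x + gamma * step

def admissible_gamma(A, alphas, betas):
    """Any gamma in (0, 2/L) with L = sum_i alpha_i + lambda * sum_j beta_j,
    lambda being the largest eigenvalue of A^T A; here we simply return 1/L."""
    lam = np.linalg.norm(A, 2) ** 2
    return 1.0 / (sum(alphas) + lam * sum(betas))
```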
The following convergence result follows from Theorem 16.
Theorem 18 Let U_i : R^N → R^N, i = 1, 2, …, p, and T_j : R^M → R^M, j = 1, 2, …, r, be directed operators with fixed point sets C_i, i = 1, 2, …, p, and Q_j, j = 1, 2, …, r, respectively, and let A be an M × N real matrix. Assume that (U_i − I), i = 1, 2, …, p, and (T_j − I), j = 1, 2, …, r, are closed at 0. If Γ ≠ ∅ then every sequence {x^k}_{k=0}^∞, generated by Algorithm 17, converges to x* ∈ Γ.
Proof. The result follows by applying Theorem 16 to the two-operators split common fixed point problem in the product space setting, with U = I : R^N → R^N, Fix U = C, and T = T : W → W, Fix T = Q. ■
5 Applications and special cases
In this section we review special cases of the split common fixed point problem (SCFPP) described in Problem 1, and a real-world application of algorithms for its solution. The SCFPP generalizes the multiple-sets split feasibility problem (MSSFP), which requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. It serves as a model for real-world inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. The MSSFP itself generalizes the convex feasibility problem (CFP) and the two-sets split feasibility problem. Formally, given nonempty closed convex sets C_i ⊆ R^N, i = 1, 2, …, p, in the N-dimensional Euclidean space R^N, nonempty closed convex sets Q_j ⊆ R^M, j = 1, 2, …, r, and an M × N real matrix A, the multiple-sets split feasibility problem (MSSFP) is
(56)   find a vector x* ∈ ⋂_{i=1}^{p} C_i such that Ax* ∈ ⋂_{j=1}^{r} Q_j.
The algorithm for solving the MSSFP, presented in [12], generalizes Byrne's CQ-algorithm [7], involves orthogonal projections onto the sets C_i ⊆ R^N, i = 1, 2, …, p, and the sets Q_j ⊆ R^M, j = 1, 2, …, r, and has the following iterative step
(57)   x^{k+1} = x^k + γ ( ∑_{i=1}^{p} α_i (P_{C_i} − I)(x^k) + ∑_{j=1}^{r} β_j A^t (P_{Q_j} − I)(A x^k) ),
where x^k and x^{k+1} are the current and the next iteration vectors, respectively, α_i > 0, i = 1, 2, …, p, and β_j > 0, j = 1, 2, …, r, are user-chosen parameters, and γ ∈ (0, 2/L), where L = ∑_{i=1}^{p} α_i + λ ∑_{j=1}^{r} β_j and λ is the spectral radius of the matrix A^tA. The algorithm converges to a solution of the MSSFP, for any starting vector x^0 ∈ R^N, whenever the MSSFP has a solution. In the inconsistent case, it finds a point which is least violating the feasibility by being “closest” to all sets, as “measured” by a proximity function. Since the orthogonal projection P is a directed operator and P − I is closed at 0, the algorithm (57) is a special case of our Algorithm 17.
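As a toy usage example of the iterative step (57), the snippet below takes p = r = 1 with a half-space C_1 and a unit ball Q_1 (illustrative assumptions only, not data from the text) and iterates the step directly.

```python
import numpy as np

# Projections onto two illustrative constraint sets (assumed, not from the paper):
proj_C1 = lambda x: x if x[0] <= 1.0 else x - np.array([x[0] - 1.0, 0.0])      # C_1 = {x : x_1 <= 1}
proj_Q1 = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)   # Q_1 = unit ball

A = np.array([[1.0, 1.0], [0.0, 2.0]])
alpha_1, beta_1 = 1.0, 1.0
lam = np.linalg.norm(A, 2) ** 2                  # spectral radius of A^T A
gamma = 1.0 / (alpha_1 + lam * beta_1)           # gamma in (0, 2/L), L = alpha_1 + lam * beta_1

x = np.array([3.0, -2.0])
for _ in range(200):                             # iterative step (57) with p = r = 1
    Ax = A @ x
    x = x + gamma * (alpha_1 * (proj_C1(x) - x) + beta_1 * (A.T @ (proj_Q1(Ax) - Ax)))
```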
Computing the orthogonal projections at each iterative step can be costly and may affect the algorithm's efficiency. In the relaxed CQ-algorithm for solving the two-sets split feasibility problem, Yang [21] assumes, without loss of generality, that the sets C and Q are nonempty and given by
(58)   C = { x ∈ R^N | c(x) ≤ 0 } and Q = { y ∈ R^M | q(y) ≤ 0 },
where c : R^N → R and q : R^M → R are convex functions. Instead of orthogonal projections he uses subgradient projectors. In [13] we generalized Yang's result by formulating the following simultaneous subgradient projections algorithm for the MSSFP, which is also a special case of our Algorithm 17 (see Example 5 and Lemma 6). Assume, without loss of generality, that the sets C_i and Q_j are expressed as
(59)   C_i = { x ∈ R^N | c_i(x) ≤ 0 }, i = 1, 2, …, p, and Q_j = { y ∈ R^M | q_j(y) ≤ 0 }, j = 1, 2, …, r,
where c_i : R^N → R and q_j : R^M → R are convex functions for all i = 1, 2, …, p, and all j = 1, 2, …, r, respectively.
Algorithm 19 [13]
Initialization: Let x0 be arbitrary.
Iterative step: For k ≥ 0 let
(60)   x^{k+1} = x^k + γ ( ∑_{i=1}^{p} α_i (P_{C_i^k} − I)(x^k) + ∑_{j=1}^{r} β_j A^t (P_{Q_j^k} − I)(A x^k) ).
Here γ ∈ (0, 2/L), with L = ∑_{i=1}^{p} α_i + λ ∑_{j=1}^{r} β_j, where λ is the spectral radius of A^tA, and
(61)   C_i^k := { x ∈ R^N | c_i(x^k) + ⟨ξ_{i,k}, x − x^k⟩ ≤ 0 },
where ξ_{i,k} ∈ ∂c_i(x^k) is a subgradient of c_i at the point x^k, and
(62)   Q_j^k := { y ∈ R^M | q_j(Ax^k) + ⟨η_{j,k}, y − Ax^k⟩ ≤ 0 },
where η_{j,k} ∈ ∂q_j(Ax^k).
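For concreteness, a sketch of the step (60)-(62): projecting x^k onto the half-space C_i^k (respectively Ax^k onto Q_j^k) amounts to a single subgradient-projector move, so the iteration can be coded with function-and-subgradient oracles; the oracle interface below is an assumed convention, not taken from [13].

```python
import numpy as np

def subgrad_step(val, subgrad, point):
    """Projection of `point` onto the half-space {x : val + <subgrad, x - point> <= 0}
    (cf. (61)-(62)); if val <= 0 the point already satisfies the constraint.
    Assumes a nonzero subgradient whenever val > 0."""
    if val <= 0:
        return point
    return point - (val / np.dot(subgrad, subgrad)) * subgrad

def mssfp_subgradient_step(x, A, c_oracles, q_oracles, alphas, betas, gamma):
    """One iteration of Algorithm 19 (sketch); every oracle returns (value, subgradient)
    at the point where it is queried."""
    Ax = A @ x
    step = sum(a * (subgrad_step(*c(x), x) - x) for a, c in zip(alphas, c_oracles))
    step = step + A.T @ sum(b * (subgrad_step(*q(Ax), Ax) - Ax) for b, q in zip(betas, q_oracles))
    return x + gamma * step
```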
A new possibility that follows from our present work is to solve the MSSFP with Algorithm 17 using E-δ operators. We present this in the framework of (14)-(17). Choosing parameters {δ_i}_{i=1}^{p+r} such that 0 < δ_i ≤ 1 for all i = 1, 2, …, p + r, define the directed operators T_{C_i,δ_i} and T_{Q_j,δ_{p+j}} as in Definition 8.
Algorithm 20
Initialization: Let x0 be arbitrary.
Iterative step: For k ≥ 0 let
(63)   x^{k+1} = x^k + γ ( ∑_{i=1}^{p} α_i (T_{C_i,δ_i} − I)(x^k) + ∑_{j=1}^{r} β_j A^t (T_{Q_j,δ_{p+j}} − I)(A x^k) ).
Here γ ∈ (0, 2/L), with L = ∑_{i=1}^{p} α_i + λ ∑_{j=1}^{r} β_j, where λ is the largest eigenvalue of the matrix A^tA.
Algorithm 19 is a special case of Algorithm 20 since, by Lemma 10, subgradient projectors are TE,δ operators.
Finally, we mention that our work is related to significant real-world applications. In a recent paper [10], the multiple-sets split feasibility problem was applied to the inverse problem of intensity-modulated radiation therapy (IMRT). In this field beams of penetrating radiation are directed at the lesion (tumor) from external sources in order to eradicate the tumor without causing irreparable damage to surrounding healthy tissues, see, e.g., [12].
In addition to the physical and biological parameters of the irradiated object that are assumed known for the dose calculation, information about the capabilities and specifications of the available treatment machine (i.e., radiation source) is given. Based on medical diagnosis, knowledge, and experience, the physician prescribes desired upper and lower dose bounds to the treatment planning case. The output of a solution method for the inverse problem is a radiation intensity function (also called intensity map). Its values are the radiation intensities at the sources, as a function of source location, that would result in a dose function which agrees with the prescribed dose bounds.
Recently the concept of equivalent uniform dose (EUD) was introduced to describe dose distributions with a higher clinical relevance. These EUD constraints are defined for tumors as the biological equivalent dose that, if given uniformly, will lead to the same cell-kill in the tumor volume as the actual non-uniform dose distribution. They could also be defined for normal tissues. We developed in [10] a unified theory that enables treatment of both EUD constraints and physical dose constraints. This model relies on the multiple-sets split feasibility problem formulation and accommodates the specific IMRT situation.
Acknowledgments
We gratefully acknowledge discussions on the topic of this paper with our colleague Avi Motova. We thank an anonymous referee for helpful comments. This work was supported by grant No. 2003275 of the United States-Israel Binational Science Foundation (BSF) and by a National Institutes of Health (NIH) grant No. HL70472.
References
- 1. Aharoni R, Berman A, Censor Y. An interior points algorithm for the convex feasibility problem. Linear Algebra and its Applications. 1983;120:479–489.
- 2. Bauschke HH, Borwein JM. On projection algorithms for solving convex feasibility problems. SIAM Review. 1996;38:367–426.
- 3. Bauschke HH, Combettes PL. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Mathematics of Operations Research. 2001;26:248–264.
- 4. Bregman LM, Censor Y, Reich S, Zepkowitz-Malachi Y. Finding the projection of a point onto the intersection of convex sets via projections onto half-spaces. Journal of Approximation Theory. 2003;124:194–218.
- 5. Browder FE. Convergence theorems for sequences of nonlinear operators in Banach spaces. Mathematische Zeitschrift. 1967;100:201–225.
- 6. Byrne C. Bregman-Legendre multidistance projection algorithms for convex feasibility and optimization. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Elsevier; Amsterdam, The Netherlands: 2001. pp. 87–100.
- 7. Byrne C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems. 2002;18:441–453.
- 8. Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems. 2004;20:103–120.
- 9. Byrne CL. Applied Iterative Methods. A.K. Peters, Ltd.; Wellesley, MA, USA: 2008.
- 10. Censor Y, Bortfeld T, Martin B, Trofimov A. A unified approach for inversion problems in intensity-modulated radiation therapy. Physics in Medicine and Biology. 2006;51:2353–2365. doi: 10.1088/0031-9155/51/10/001.
- 11. Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms. 1994;8:221–239.
- 12. Censor Y, Elfving T, Kopf N, Bortfeld T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems. 2005;21:2071–2084.
- 13. Censor Y, Motova A, Segal A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. Journal of Mathematical Analysis and Applications. 2007;327:1244–1256.
- 14. Censor Y, Segal A. On the string averaging method for sparse common fixed point problems. International Transactions in Operational Research, accepted for publication. doi: 10.1111/j.1475-3995.2008.00684.x.
- 15. Censor Y, Zenios SA. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press; New York, NY, USA: 1997.
- 16. Combettes PL. Quasi-Fejérian analysis of some optimization algorithms. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and their Applications. Elsevier; Amsterdam, The Netherlands: 2001. pp. 115–152.
- 17. Crombez G. A geometrical look at iterative methods for operators with fixed points. Numerical Functional Analysis and Optimization. 2005;26:157–175.
- 18. Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker; New York and Basel: 1984.
- 19. Masad E, Reich S. A note on the multiple-set split convex feasibility problem in Hilbert space. Journal of Nonlinear and Convex Analysis. 2007;8:367–371.
- 20. Pierra G. Decomposition through formalization in a product space. Mathematical Programming. 1984;28:96–115.
- 21. Yang Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems. 2004;20:1261–1266.
- 22. Zaknoon M. Algorithmic Developments for the Convex Feasibility Problem. Ph.D. Thesis, University of Haifa, Haifa, Israel, April 2003.
- 23. Zhao J, Yang Q. Several solution methods for the split feasibility problem. Inverse Problems. 2005;21:1791–1799.