Author manuscript; available in PMC: 2011 Feb 11.
Published in final edited form as: Inverse Probl. 2010 May 1;26(5):055007. doi: 10.1088/0266-5611/26/5/055007

The Split Common Fixed Point Problem for Directed Operators

Yair Censor 1, Alexander Segal 1
PMCID: PMC3037827  NIHMSID: NIHMS152452  PMID: 21318099

Abstract

We propose the split common fixed point problem, which requires finding a common fixed point of a family of operators in one space whose image under a linear transformation is a common fixed point of another family of operators in the image space. We formulate and analyze a parallel algorithm for solving this split common fixed point problem for the class of directed operators and note how it unifies and generalizes previously discussed problems and algorithms.

1 Introduction

In this paper we propose a new problem, called the split common fixed point problem (SCFPP), and study it for the class of directed operators T such that T − I is closed at the origin. These operators were introduced and investigated by Bauschke and Combettes in [3, Definition 2.2] and by Combettes in [16], although not under this name. We present a unified framework for the study of this problem and class of operators, propose iterative algorithms, and study their convergence. The SCFPP generalizes the split feasibility problem (SFP) and the convex feasibility problem (CFP). The class of directed operators is important since it includes the orthogonal projections and the subgradient projectors; we also supply an additional operator from this class.

The split common fixed point problem (SCFPP) requires finding a common fixed point of a family of operators in one space such that its image under a linear transformation is a common fixed point of another family of operators in the image space. This generalizes the convex feasibility problem (CFP), the two-sets split feasibility problem (SFP) and the multiple-sets split feasibility problem (MSSFP).

Problem 1 The split common fixed point problem.

Given operators U_i : R^N → R^N, i = 1, 2, …, p, and T_j : R^M → R^M, j = 1, 2, …, r, with nonempty fixed point sets C_i, i = 1, 2, …, p, and Q_j, j = 1, 2, …, r, respectively, the split common fixed point problem (SCFPP) is to

find a vector x* ∈ C := ∩_{i=1}^p C_i such that Ax* ∈ Q := ∩_{j=1}^r Q_j. (1)

Such problems arise in the field of intensity-modulated radiation therapy (IMRT) when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model, see [10]. The problem with only a single pair of sets, C in R^N and Q in R^M, was first introduced by Censor and Elfving [11] and was called the split feasibility problem (SFP). They used their simultaneous multiprojections algorithm (see also [15, Subsection 5.9.2]) to obtain iterative algorithms that solve the SFP. Their algorithms, as well as others, see, e.g., Byrne [6], involve matrix inversion at each iterative step. Calculating inverses of matrices is very time-consuming, particularly when the dimensions are large. Therefore, a new algorithm for solving the SFP, called the CQ-algorithm, was devised by Byrne [7], with the following iterative step:

x^{k+1} = P_C(x^k + γA^t(P_Q − I)Ax^k), (2)

where x^k and x^{k+1} are the current and next iteration vectors, respectively, γ ∈ (0, 2/L), where L is the largest eigenvalue of the matrix A^tA (t stands for matrix transposition), I is the unit matrix or operator, and P_C and P_Q denote the orthogonal projections onto C and Q, respectively.
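To make the iterative step (2) concrete, here is a minimal numerical sketch in Python with NumPy. The box sets, the matrix A, the starting point, and the iteration count are illustrative choices, not taken from the paper; they merely give closed-form projections P_C and P_Q.

```python
import numpy as np

def project_box(v, lo, hi):
    """Orthogonal projection onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(v, lo, hi)

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Byrne's CQ iteration (2): x^{k+1} = P_C(x^k + gamma * A^t(P_Q - I)Ax^k)."""
    L = np.linalg.eigvalsh(A.T @ A).max()  # largest eigenvalue of A^t A
    gamma = 1.0 / L                        # any value in (0, 2/L)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
    return x

# Toy consistent SFP: C = [0,1]^2, Q = [2,3]^2, A = 3*I, so any x in [2/3, 1]^2 solves it.
A = 3.0 * np.eye(2)
x = cq_algorithm(A,
                 lambda v: project_box(v, 0.0, 1.0),
                 lambda v: project_box(v, 2.0, 3.0),
                 x0=np.array([5.0, -4.0]))
# x now (approximately) lies in C with A x in Q.
```

Because both projections are simple clippings, each iteration is a few vector operations; this is exactly the inversion-free structure that motivated the CQ-algorithm.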

The CQ-algorithm converges to a solution of the SFP, for any starting vector x^0 ∈ R^N, whenever the SFP has a solution. When the SFP has no solution, the CQ-algorithm converges to a minimizer of ∥P_Q(Ac) − Ac∥ over all c ∈ C, whenever such a minimizer exists. A block-iterative CQ-algorithm, called the BICQ-method, is also available in [7]; see also Byrne [8] and his recent book [9]. The MSSFP, posed and studied in [12], was handled, for both the feasible and the infeasible cases, with a proximity function minimization approach: if the MSSFP is consistent then unconstrained minimization of the proximity function yields the value 0; otherwise, in the inconsistent case, it finds a point that is least violating of the feasibility, being “closest” to all sets as “measured” by the proximity function. Masad and Reich [19] is a recent sequel to [12], in which they prove weak and strong convergence theorems for an algorithm that solves the multiple-set split convex feasibility problem in Hilbert space.

In the case of nonlinear constraint sets, computing orthogonal projections may require solving a nonlinear optimization problem to minimize the distance between a point and the constraint set. The projection can, however, easily be approximated linearly, using the current constraint violation and a subgradient at the current point. This was done by Yang in his recent paper [21], where he proposed a relaxed version of the CQ-algorithm in which orthogonal projections are replaced by subgradient projections, which are easily executed when the sets C and Q are given as lower level sets of convex functions; see also [23]. In [13] Censor, Motova and Segal formulated a simultaneous subgradient projections algorithm for the MSSFP.

Many common types of operators arising in convex optimization belong to the class of directed operators. These operators were introduced and investigated by Bauschke and Combettes in [3] (denoted there as the 𝔗-class) and by Combettes in [16]. Using the notion of directed operators we develop algorithms for the SCFPP. In Section 2 we present preliminary material on directed operators and discuss some particular cases. In Section 3 we formulate the two-operators split common fixed point problem and study our algorithm for it. In Section 4 we present our parallel algorithm for the SCFPP and establish its convergence and, in Section 5, we note how it unifies and generalizes previously discussed problems and algorithms.

2 Directed operators

The class 𝔗 of operators was introduced and investigated by Bauschke and Combettes in [3, Definition 2.2] and by Combettes in [16]. Operators in this class were named directed operators in Zaknoon [22] and further employed under this name in [14]. We recall definitions and results on directed operators and their properties as they appear in [3, Proposition 2.4] and [16], which are also sources for references on the subject. Let R^N be the N-dimensional Euclidean space, with ⟨x, y⟩ and ∥x∥ the Euclidean inner product and norm, respectively.

Given x, y ∈ R^N, we denote

H(x, y) := {u ∈ R^N | ⟨u − y, x − y⟩ ≤ 0}. (3)

Definition 2 An operator T : R^N → R^N is called a directed operator if

Fix T ⊆ H(x, T(x)) for all x ∈ R^N, (4)

where Fix T is the fixed point set of T; equivalently,

if q ∈ Fix T then ⟨T(x) − x, T(x) − q⟩ ≤ 0 for all x ∈ R^N. (5)

The class of directed operators is denoted by 𝔗, i.e.

𝔗 := {T : R^N → R^N | Fix T ⊆ H(x, T(x)) for all x ∈ R^N}. (6)

Bauschke and Combettes [3] showed the following:

  • (i) The fixed point set of a directed operator T with nonempty Fix T is closed and convex, because
    Fix T = ∩_{x∈R^N} H(x, T(x)). (7)
  • (ii) Denoting by I the unit operator, if T ∈ 𝔗 then
    I + λ(T − I) ∈ 𝔗 for all λ ∈ [0, 1]. (8)

This class of operators is fundamental because many common types of operators arising in convex optimization belong to it, and because it allows a complete characterization of Fejér-monotonicity [3, Proposition 2.7]. The localization of fixed points is discussed in [18, pages 43–44]. In particular, it is shown there that a firmly nonexpansive operator, namely, an operator Ω : R^N → R^N that fulfills

∥Ω(x) − Ω(y)∥² ≤ ⟨Ω(x) − Ω(y), x − y⟩ for all x, y ∈ R^N, (9)

satisfies (7) and is, therefore, a directed operator. The class of directed operators additionally includes, according to [3, Proposition 2.3], among others, the resolvents of maximal monotone operators, the orthogonal projections, and the subgradient projectors (see Example 5 below). Note that every directed operator belongs to the class of operators ℱ₀, defined by Crombez [17, p. 161],

ℱ₀ := {T : R^N → R^N | ∥T(x) − q∥ ≤ ∥x − q∥ for all q ∈ Fix T and all x ∈ R^N}, (10)

whose elements are called elsewhere quasi-nonexpansive or paracontracting operators.
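As a quick numerical illustration (Python with NumPy; the half-space and the sampled points are arbitrary choices, not from the paper), the orthogonal projection onto a half-space satisfies the firm nonexpansiveness inequality (9) and the directedness inequality (5), with the half-space itself as its fixed point set:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = np.array([1.0, 2.0, -1.0]), 0.5

def proj(x):
    """Orthogonal projection onto the half-space {u in R^3 : <a, u> <= b}."""
    return x - (max(a @ x - b, 0.0) / (a @ a)) * a

for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj(x), proj(y)
    # (9): ||P(x) - P(y)||^2 <= <P(x) - P(y), x - y>
    assert (Px - Py) @ (Px - Py) <= (Px - Py) @ (x - y) + 1e-12
    # (5): <P(x) - x, P(x) - q> <= 0 for every fixed point q of P
    q = proj(rng.normal(size=3))  # q lies in the half-space, so P(q) = q
    assert (Px - x) @ (Px - q) <= 1e-12
```

The second assertion is exactly the classical variational characterization of the metric projection, rewritten in the form (5).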

The following definition of a closed operator originated in Browder [5] (see, e.g., [16]) and will be required in the sequel.

Definition 3 An operator T : R^N → R^N is said to be closed at a point y ∈ R^N if for every x̄ ∈ R^N and every sequence {x^k}_{k=0}^∞ in R^N such that lim_{k→∞} x^k = x̄ and lim_{k→∞} T(x^k) = y, we have T(x̄) = y.

For instance, the orthogonal projection onto a closed convex set is a closed operator everywhere, due to its continuity.

Remark 4 [16] If T : R^N → R^N is nonexpansive then T − I is closed on R^N.

In the next example and lemma we recall the notion of the subgradient projector Π_F and show that Π_F − I is closed at 0.

Example 5 Let f : R^N → R be a convex function such that the level set F := {x ∈ R^N | f(x) ≤ 0} is nonempty. The operator

Π_F(y) := { y − (f(y)/∥q∥²)q, if f(y) > 0; y, if f(y) ≤ 0 }, (11)

where q is a selection from the subdifferential set ∂f(y) of f at y, is called a subgradient projector relative to f.
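A small Python sketch (with NumPy) of the subgradient projector (11). The particular function f and its gradient are illustrative choices; since this f is differentiable, its gradient is its unique subgradient, and repeated application of Π_F drives the iterate toward the level set F.

```python
import numpy as np

def subgradient_projector(y, f, subgrad):
    """One application of Pi_F per (11): if f(y) > 0, step from y along a
    subgradient q of f at y; if f(y) <= 0, leave y unchanged."""
    fy = f(y)
    if fy <= 0:
        return y
    q = subgrad(y)
    return y - (fy / (q @ q)) * q

f = lambda x: x @ x - 1.0   # level set F = closed Euclidean unit ball
g = lambda x: 2.0 * x       # gradient of f (its unique subgradient)

y = np.array([3.0, 4.0])    # f(y) = 24 > 0, so y is outside F
for _ in range(60):         # iterating Pi_F pulls y toward the set F
    y = subgradient_projector(y, f, g)
```

Note that a single application of (11) is only a relaxed projection onto a half-space containing F, not the exact projection onto F; it is the iteration that approaches F.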

Lemma 6 Let f : R^N → R be a convex function, let y ∈ R^N, and assume that the level set F ≠ ∅. For any q ∈ ∂f(y), define the closed convex set

L = L_f(y, q) := {x ∈ R^N | f(y) + ⟨q, x − y⟩ ≤ 0}. (12)

Then the following hold:

  • (i) F ⊆ L. If q ≠ 0 then L is a half-space; otherwise L = R^N.

  • (ii) Denoting by P_L(y) the orthogonal projection of y onto L,
    P_L(y) = Π_F(y). (13)
  • (iii) P_L − I is closed at 0.

Proof. For (i) and (ii) see, e.g., [2, Lemma 7.3]. (iii) Denote Ψ := P_L − I. Take any x̄ ∈ R^N and any sequence {x^k}_{k=0}^∞ in R^N such that lim_{k→∞} x^k = x̄ and lim_{k→∞} Ψ(x^k) = 0. Define f₊(y) = max{f(y), 0}. Then Ψ(y) = −(f₊(y)/∥q∥²)q with q ∈ ∂f(y). Since f₊ is convex, its subdifferential is uniformly bounded on bounded sets, see, e.g., [2, Corollary 7.9]. Using this and the continuity of f₊ we obtain that f₊(x̄) = 0 and, therefore, Ψ(x̄) = 0. ■

Influenced by the framework established in Bregman et al. [4], and by the (δ, η)-algorithm of Aharoni, Berman and Censor [1] for solving convex feasibility problems, we define next another type of operator, which we call an “E-δ operator”. We first need the following setup. Let E ⊆ R^N be a nonempty closed convex set. We assume, without loss of generality, that E is expressed as

E = {x ∈ R^N | e(x) ≤ 0}, (14)

where e : R^N → R is a convex function. Given a point z ∈ R^N and a real number δ, 0 < δ ≤ 1, we define, for z ∉ E, the ball

B(z, δe(z)) := {x ∈ R^N | ∥x − z∥ ≤ δe(z)}. (15)

For all pairs (y, t) ∈ R^N × R^N we consider half-spaces of the form

S(y, t) := {u ∈ R^N | ⟨u, t⟩ ≤ ⟨y, t⟩}, (16)

and define

A_δ(e(z)) := {(y, t) ∈ R^N × R^N | E ⊆ S(y, t) and int B(z, δe(z)) ∩ S(y, t) = ∅}. (17)

We also need to impose the following condition.

Condition 7 Given a set E ⊆ R^N, described as in (14), for every z ∉ E,

B(z, δe(z)) ∩ E = ∅. (18)

Every convex set E can be described by (14) with e(z) = d(z, E), the distance function between the point z and the set E; in this case Condition 7 holds for every δ ∈ (0, 1).

Definition 8 Given a set E = {x ∈ R^N | e(x) ≤ 0}, where e : R^N → R is a convex function, and a real number δ, 0 < δ ≤ 1, such that Condition 7 holds, we define the operator T_{E,δ}, for any z ∈ R^N, by

T_{E,δ}(z) := { P_{S(y,t)}(z), if z ∉ E; z, if z ∈ E }, (19)

where (y, t) is any selection from Aδ(e(z)), and call it an E-δ operator.

The fact that any E-δ operator is a directed operator follows from its definition.

Lemma 9 If T_{E,δ} is an E-δ operator then T_{E,δ} − I is closed at 0.

Proof. Let {z^k}_{k=0}^∞ be a sequence with z^k ∉ E for all k ≥ 0, such that lim_{k→∞} z^k = q ∈ R^N and lim_{k→∞} ∥T_{E,δ}(z^k) − z^k∥ = 0. For every k = 0, 1, 2, … we have

∥T_{E,δ}(z^k) − z^k∥ ≥ δe(z^k). (20)

Taking limits on both sides of the last inequality we obtain

lim_{k→∞} e(z^k) = 0, (21)

and from the continuity of e it follows that e(q) = 0 and, therefore, q ∈ Fix T_{E,δ}. ■

Next we show that the subgradient projector of Example 5 is a T_{E,δ} operator. To show that an operator is a T_{E,δ} operator one needs to guarantee (among other things) that, given a set E, the intersection B(z, δe(z)) ∩ E is empty for all z ∉ E, for some choice of a real number δ, 0 < δ ≤ 1. This is done in the next lemma.

Lemma 10 Let e : R^N → R be a convex function such that the level set E := {x ∈ R^N | e(x) ≤ 0} is nonempty. Then there exists a real number δ, 0 < δ ≤ 1, such that the subgradient projector Π_E of Example 5 is a T_{E,δ} operator.

Proof. If e(z) ≤ 0 then z ∈ E and, by definition, Π_E(z) = z. If e(z) > 0 then, using the setup of (14)–(17), let

L_e(z, t) = {x ∈ R^N | e(z) + ⟨t, x − z⟩ ≤ 0}, (22)

where t ∈ ∂e(z). By Lemma 6 we have E ⊆ L_e(z, t). Now we need to show that int B(z, δe(z)) ∩ L_e(z, t) = ∅. Denoting w = Π_E(z) ∈ L_e(z, t), it follows from (22) that

e(z) ≤ ⟨t, z − w⟩ ≤ ∥t∥ ∥w − z∥. (23)

By [2, Corollary 7.9] the subdifferential ∂e(z) is uniformly bounded on bounded sets, i.e., there exists a K > 0 such that ∥t∥ ≤ K; hence e(z) ≤ K∥w − z∥. Taking any δ with 0 < δ ≤ 1 and δ ≤ 1/K, we obtain

δe(z) ≤ ∥w − z∥, (24)

which implies that

int B(z, δe(z)) ∩ L_e(z, t) = ∅. (25) ■

Aside from its theoretical interest, the extension (of subgradient projectors) to T_{E,δ} operators can lead to algorithms useful in practice, provided that the computational effort of finding the hyperplanes S(y, t) is reasonable. Another special case (besides the subgradient hyperplane) is obtained by constructing S(y, t) via an interior point of the convex set (using the assumption that we know a point in the interior of each set), see [15, Algorithm 5.5.2].

3 The two-operators split common fixed point problem

The split common fixed point problem for a single pair of directed operators is obtained from Problem 1 with p = r = 1.

Definition 11 Let A be a real M × N matrix and let U : R^N → R^N and T : R^M → R^M be operators with nonempty Fix U = C and Fix T = Q. The two-operators split common fixed point problem is to find x* ∈ C such that Ax* ∈ Q.

Denoting the solution set of the two-operators SCFPP by

Γ := Γ(U, T) = {y ∈ C | Ay ∈ Q}, (26)

the following algorithm is designed to solve it.

Algorithm 12

Initialization: Let x^0 ∈ R^N be arbitrary.

Iterative step: For k ≥ 0 let

x^{k+1} = U(x^k + γA^t(T − I)(Ax^k)), (27)

where γ ∈ (0, 2/L), L is the largest eigenvalue of the matrix A^tA, and I is the unit operator.
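As an illustration, a Python sketch (with NumPy) of Algorithm 12 with two concrete directed operators: U the orthogonal projection onto the unit ball of R^3 and T the orthogonal projection onto a half-space of R^2. The matrix, weights, starting point, and iteration count are arbitrary choices for the sketch, not data from the paper.

```python
import numpy as np

def split_fixed_point(U, T, A, x0, iters=2000):
    """Algorithm 12: x^{k+1} = U(x^k + gamma * A^t (T - I)(A x^k))."""
    L = np.linalg.eigvalsh(A.T @ A).max()  # largest eigenvalue of A^t A
    gamma = 1.0 / L                        # any value in (0, 2/L)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = U(x + gamma * (A.T @ (T(Ax) - Ax)))
    return x

# U: projection onto the unit ball in R^3 (Fix U = C, the ball).
U = lambda v: v / max(1.0, np.linalg.norm(v))
# T: projection onto a half-space in R^2 (Fix T = Q, the half-space).
a, b = np.array([1.0, 1.0]), 1.0
T = lambda w: w - (max(a @ w - b, 0.0) / (a @ a)) * a

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
x = split_fixed_point(U, T, A, x0=np.array([4.0, -2.0, 1.0]))
# x now lies in C with A x in Q (the problem is consistent: 0 is a solution).
```

Both operators here are orthogonal projections, hence directed with U − I and T − I closed at 0, so the hypotheses of Theorem 16 below hold.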

We recall the definition of Fejér-monotone sequences, which will be useful for our further analysis.

Definition 13 A sequence {x^k}_{k=0}^∞ is called Fejér-monotone with respect to a given nonempty set S ⊆ R^N if for every x ∈ S,

∥x^{k+1} − x∥ ≤ ∥x^k − x∥ for all k ≥ 0. (28)

To prove convergence of Algorithm 12 we need the following lemma.

Lemma 14 Given a real M × N matrix A, let U : R^N → R^N and T : R^M → R^M be directed operators with nonempty Fix U = C and Fix T = Q. Any sequence {x^k}_{k=0}^∞, generated by Algorithm 12, is Fejér-monotone with respect to the solution set Γ.

Proof. Taking y ∈ Γ we use (10) to obtain

∥x^{k+1} − y∥² = ∥U(x^k + γA^t(T − I)(Ax^k)) − y∥²
≤ ∥x^k + γA^t(T − I)(Ax^k) − y∥²
= ∥x^k − y∥² + γ²∥A^t(T − I)(Ax^k)∥² + 2γ⟨x^k − y, A^t(T − I)(Ax^k)⟩
= ∥x^k − y∥² + γ²⟨(T − I)(Ax^k), AA^t(T − I)(Ax^k)⟩ + 2γ⟨x^k − y, A^t(T − I)(Ax^k)⟩. (29)

From the definition of L it follows that

γ²⟨(T − I)(Ax^k), AA^t(T − I)(Ax^k)⟩ ≤ Lγ²⟨(T − I)(Ax^k), (T − I)(Ax^k)⟩ = Lγ²∥(T − I)(Ax^k)∥². (30)

Denoting Θ := 2γ⟨x^k − y, A^t(T − I)(Ax^k)⟩ and using (5), we obtain

Θ = 2γ⟨A(x^k − y), (T − I)(Ax^k)⟩
= 2γ⟨A(x^k − y) + (T − I)(Ax^k) − (T − I)(Ax^k), (T − I)(Ax^k)⟩
= 2γ(⟨T(Ax^k) − Ay, (T − I)(Ax^k)⟩ − ∥(T − I)(Ax^k)∥²)
≤ −2γ∥(T − I)(Ax^k)∥². (31)

From (29), using (30) and (31), it follows that

∥x^{k+1} − y∥² ≤ ∥x^k − y∥² + γ(Lγ − 2)∥(T − I)(Ax^k)∥². (32)

Then, from the definition of γ, we obtain

∥x^{k+1} − y∥² ≤ ∥x^k − y∥², (33)

from which the Fejér-monotonicity with respect to Γ follows. ■

The next lemma describes a property of directed operators that will be used in our convergence analysis.

Lemma 15 Let T : R^N → R^N be a directed operator with Fix T ≠ ∅. For any q ∈ Fix T and any x ∈ R^N,

∥T(x) − q∥² ≤ ∥x − q∥² − ∥T(x) − x∥². (34)

Proof. Since T is directed, we use (5) to obtain

∥x − q∥² = ∥T(x) − x − (T(x) − q)∥²
= ∥T(x) − x∥² + ∥T(x) − q∥² − 2⟨T(x) − x, T(x) − q⟩
≥ ∥T(x) − x∥² + ∥T(x) − q∥², (35)

from which the proof follows. ■
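The inequality (34) can be checked numerically for a concrete directed operator; here a Python snippet (with NumPy) samples the orthogonal projection onto a box, whose fixed point set is the box itself. The box and the sampling are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Orthogonal projection onto [-1, 1]^4: a directed operator with Fix P = [-1, 1]^4.
P = lambda v: np.clip(v, -1.0, 1.0)

for _ in range(200):
    x = rng.normal(scale=3.0, size=4)      # an arbitrary point
    q = rng.uniform(-1.0, 1.0, size=4)     # a fixed point of P
    Px = P(x)
    lhs = np.sum((Px - q) ** 2)
    rhs = np.sum((x - q) ** 2) - np.sum((Px - x) ** 2)
    # (34): ||P(x) - q||^2 <= ||x - q||^2 - ||P(x) - x||^2
    assert lhs <= rhs + 1e-10
```

This is the quantitative strengthening of quasi-nonexpansiveness (10) that drives the ε²-decrease argument in the proof of Theorem 16 below.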

Now we present the convergence result for Algorithm 12.

Theorem 16 Given a real M × N matrix A, let U : R^N → R^N and T : R^M → R^M be directed operators with nonempty Fix U = C and Fix T = Q. Assume that U − I and T − I are closed at 0. If Γ ≠ ∅, i.e., the problem is consistent, then any sequence {x^k}_{k=0}^∞, generated by Algorithm 12, converges to a split common fixed point x* ∈ Γ.

Proof. From (32) we obtain that the sequence {∥x^k − y∥}_{k=0}^∞ is monotonically decreasing. Therefore,

lim_{k→∞} ∥(T − I)(Ax^k)∥ = 0. (36)

From the Fejér-monotonicity of {x^k}_{k=0}^∞ it follows that the sequence is bounded. Denoting by x* a cluster point of {x^k}_{k=0}^∞, let {x^{k_ℓ}}_{ℓ=0}^∞ be a subsequence such that

lim_{ℓ→∞} x^{k_ℓ} = x*. (37)

Then, from (36) and the closedness of T − I at 0, we obtain

T(Ax*) = Ax*, (38)

from which Ax* ∈ Q follows. Denote

u^k := x^k + γA^t(T − I)(Ax^k). (39)

Then

u^{k_ℓ} = x^{k_ℓ} + γA^t(T − I)(Ax^{k_ℓ}) (40)

and, from (36) and (37), it follows that

lim_{ℓ→∞} u^{k_ℓ} = x*. (41)

Next we show that x* ∈ C. Assume, by negation, that x* ∉ C, i.e., that Ux* ≠ x*. Then, from the closedness of the operator U − I at 0, it follows that

lim_{ℓ→∞} ∥U(u^{k_ℓ}) − u^{k_ℓ}∥ ≠ 0. (42)

Therefore, there exist an ε > 0 and a subsequence {u^{k_s}}_{s=0}^∞ of the sequence {u^{k_ℓ}}_{ℓ=0}^∞ such that

∥U(u^{k_s}) − u^{k_s}∥ > ε, s = 0, 1, 2, …. (43)

Since U is directed, for any z ∈ Γ we have, by virtue of Lemma 15, that for s = 1, 2, …,

∥U(u^{k_s}) − z∥² ≤ ∥u^{k_s} − z∥² − ∥U(u^{k_s}) − u^{k_s}∥² < ∥u^{k_s} − z∥² − ε². (44)

It can be shown, following the same lines as in the proof of Lemma 14, that for any z ∈ Γ, we have

∥(x^k + γA^t(T − I)(Ax^k)) − z∥ ≤ ∥x^k − z∥. (45)

Since x^{k+1} = U(u^k), k = 0, 1, …, (10) implies that

∥x^{k+1} − z∥ ≤ ∥u^k − z∥. (46)

Then (45) and (46) indicate that the sequence {x^1, u^1, x^2, u^2, …} is Fejér-monotone with respect to Γ. Since U(u^{k_s}) = x^{k_s+1}, we obtain, using (44), that the sequence {u^{k_s}}_{s=0}^∞ is also Fejér-monotone with respect to Γ. Moreover,

∥u^{k_{s+1}} − z∥² < ∥u^{k_s} − z∥² − ε², for s = 1, 2, …, (47)

and this cannot hold for infinitely many vectors u^{k_s}. Hence x* ∈ C and, therefore, x* ∈ Γ.

Replacing y by x* in (32), we obtain that {∥x^k − x*∥}_{k=0}^∞ is monotonically decreasing and that its subsequence {∥x^{k_ℓ} − x*∥}_{ℓ=0}^∞ converges to 0. Hence lim_{k→∞} x^k = x*. ■

4 A parallel algorithm for the SCFPP

We employ a product space formulation, originally due to Pierra [20], to derive and analyze a simultaneous algorithm for the SCFPP of Problem 1. Let Γ be the solution set of the SCFPP. We introduce the spaces V = R^N and W = R^{pN+rM}, where p, r, N and M are as in Problem 1, and adopt the notational convention that the product spaces and all objects in them are represented in boldface type.

Define the following sets in the product spaces

C := R^N (48)

and

Q := (×_{i=1}^p √α_i C_i) × (×_{j=1}^r √β_j Q_j), (49)

and the matrix

A := (√α_1 I, …, √α_p I, √β_1 A^t, …, √β_r A^t)^t, (50)

where αi > 0, for i = 1, 2, …, p, and βj > 0, for j = 1, 2, …, r, and t stands for matrix transposition.

Let us also define the operator T : W → W by

T(y) := ((U_1(y_1, …, y_N))^t, (U_2(y_{N+1}, …, y_{2N}))^t, …, (U_p(y_{(p−1)N+1}, …, y_{pN}))^t, (T_1(y_{pN+1}, …, y_{pN+M}))^t, (T_2(y_{pN+M+1}, …, y_{pN+2M}))^t, …, (T_r(y_{pN+(r−1)M+1}, …, y_{pN+rM}))^t)^t. (51)

We have obtained a two-operators split common fixed point problem in the product space, with the sets C = R^N and Q ⊆ W, the matrix A, the unit operator I : R^N → R^N and the operator T : W → W. This problem can be solved using Algorithm 12. It is also easy to verify that the following equivalence holds:

x ∈ Γ if and only if Ax ∈ Q. (52)

Therefore, we may apply Algorithm 12

x^{k+1} = x^k + γA^t(T − I)(Ax^k), k ≥ 0, (53)

to the problem (48)–(51) in order to obtain a solution of the original SCFPP. We translate the iterative step (53) to the original spaces R^N and R^M using the relation

T(Ax) = (√α_1 U_1(x), …, √α_p U_p(x), √β_1 T_1(Ax), …, √β_r T_r(Ax))^t (54)

and obtain the following algorithm,

Algorithm 17

Initialization: Let x^0 ∈ R^N be arbitrary.

Iterative step: For k ≥ 0 let

x^{k+1} = x^k + γ(Σ_{i=1}^p α_i(U_i(x^k) − x^k) + Σ_{j=1}^r β_j A^t(T_j(Ax^k) − Ax^k)). (55)

Here γ ∈ (0, 2/L), with L = Σ_{i=1}^p α_i + λΣ_{j=1}^r β_j, where λ is the largest eigenvalue of the matrix A^tA.
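The simultaneous step (55) can be sketched in Python with NumPy as follows. All operators, weights, the matrix, and the iteration count below are illustrative choices (projections onto boxes and a half-space), not data from the paper.

```python
import numpy as np

def scfpp_parallel(Us, Ts, A, alphas, betas, x0, iters=3000):
    """A sketch of Algorithm 17: the simultaneous iterative step (55)."""
    lam = np.linalg.eigvalsh(A.T @ A).max()            # largest eigenvalue of A^t A
    gamma = 1.0 / (sum(alphas) + lam * sum(betas))     # a value in (0, 2/L)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        step = sum(al * (U(x) - x) for al, U in zip(alphas, Us))
        step = step + A.T @ sum(be * (T(Ax) - Ax) for be, T in zip(betas, Ts))
        x = x + gamma * step
    return x

# p = 2 operators U_i: projections onto boxes whose intersection is [0,1]^2;
# r = 1 operator T_1: projection onto a half-space in the image space.
Us = [lambda v: np.clip(v, -1.0, 1.0),
      lambda v: np.clip(v, 0.0, 2.0)]
c = np.array([1.0, 1.0])
Ts = [lambda w: w - (max(c @ w - 1.0, 0.0) / (c @ c)) * c]
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = scfpp_parallel(Us, Ts, A, alphas=[1.0, 1.0], betas=[1.0],
                   x0=np.array([3.0, -2.0]))
```

Note that each iteration evaluates all the operators at the same point, so the p + r evaluations can be carried out in parallel before the combined update.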

The following convergence result follows from Theorem 16.

Theorem 18 Let U_i : R^N → R^N, i = 1, 2, …, p, and T_j : R^M → R^M, j = 1, 2, …, r, be directed operators with fixed point sets C_i, i = 1, 2, …, p, and Q_j, j = 1, 2, …, r, respectively, and let A be an M × N real matrix. Assume that U_i − I, i = 1, 2, …, p, and T_j − I, j = 1, 2, …, r, are closed at 0. If Γ ≠ ∅ then every sequence generated by Algorithm 17 converges to some x* ∈ Γ.

Proof. The proof follows by applying Theorem 16 to the two-operators split common fixed point problem in the product space setting, with U = I : R^N → R^N, Fix U = C, and T : W → W, Fix T = Q. ■

5 Applications and special cases

In this section we review special cases of the split common fixed point problem (SCFPP) described in Problem 1, and a real-world application of algorithms for its solution. The SCFPP generalizes the multiple-sets split feasibility problem (MSSFP), which requires finding a point in the intersection of a family of closed convex sets in one space such that its image under a linear transformation lies in the intersection of another family of closed convex sets in the image space. It serves as a model for real-world inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. The MSSFP itself generalizes the convex feasibility problem (CFP) and the two-sets split feasibility problem. Formally, given nonempty closed convex sets C_i ⊆ R^N, i = 1, 2, …, p, in the N-dimensional Euclidean space R^N, nonempty closed convex sets Q_j ⊆ R^M, j = 1, 2, …, r, and an M × N real matrix A, the multiple-sets split feasibility problem (MSSFP) is to

find a vector x* ∈ C := ∩_{i=1}^p C_i such that Ax* ∈ Q := ∩_{j=1}^r Q_j. (56)

The algorithm for solving the MSSFP, presented in [12], generalizes Byrne's CQ-algorithm [7], involves orthogonal projections onto the sets C_i ⊆ R^N, i = 1, 2, …, p, and the sets Q_j ⊆ R^M, j = 1, 2, …, r, and has the following iterative step:

x^{k+1} = x^k + γ(Σ_{i=1}^p α_i(P_{C_i}(x^k) − x^k) + Σ_{j=1}^r β_j A^t(P_{Q_j}(Ax^k) − Ax^k)), (57)

where x^k and x^{k+1} are the current and next iteration vectors, respectively, α_i > 0, i = 1, 2, …, p, and β_j > 0, j = 1, 2, …, r, are user-chosen parameters, and γ ∈ (0, 2/L), where L = Σ_{i=1}^p α_i + λΣ_{j=1}^r β_j and λ is the spectral radius of the matrix A^tA. The algorithm converges to a solution of the MSSFP, for any starting vector x^0 ∈ R^N, whenever the MSSFP has a solution. In the inconsistent case, it finds a point that is least violating of the feasibility, being “closest” to all sets as “measured” by a proximity function. Since the orthogonal projection P is a directed operator and P − I is closed at 0, the algorithm (57) is a special case of our Algorithm 17.

Finding the orthogonal projections at each iterative step can be computationally intensive and may affect the algorithm's efficiency. In his relaxed CQ-algorithm for solving the two-sets split feasibility problem, Yang [21] assumes, without loss of generality, that the sets C and Q are nonempty and given by

C = {x ∈ R^N | c(x) ≤ 0} and Q = {y ∈ R^M | q(y) ≤ 0}, (58)

where c : R^N → R and q : R^M → R are convex functions, and instead of orthogonal projections he uses subgradient projectors. In [13] we generalized Yang's result by formulating the following simultaneous subgradient projections algorithm for the MSSFP, which is also a special case of our Algorithm 17 (see Example 5 and Lemma 6). Assume, without loss of generality, that the sets C_i and Q_j are expressed as

C_i = {x ∈ R^N | c_i(x) ≤ 0} and Q_j = {y ∈ R^M | q_j(y) ≤ 0}, (59)

where c_i : R^N → R and q_j : R^M → R are convex functions, for all i = 1, 2, …, p, and all j = 1, 2, …, r, respectively.

Algorithm 19 [13]

Initialization: Let x^0 ∈ R^N be arbitrary.

Iterative step: For k ≥ 0 let

x^{k+1} = x^k + γ(Σ_{i=1}^p α_i(P_{C_{i,k}}(x^k) − x^k) + Σ_{j=1}^r β_j A^t(P_{Q_{j,k}}(Ax^k) − Ax^k)). (60)

Here γ ∈ (0, 2/L), with L = Σ_{i=1}^p α_i + λΣ_{j=1}^r β_j, where λ is the spectral radius of A^tA, and

C_{i,k} = {x ∈ R^N | c_i(x^k) + ⟨ξ^{i,k}, x − x^k⟩ ≤ 0}, (61)

where ξ^{i,k} ∈ ∂c_i(x^k) is a subgradient of c_i at the point x^k, and

Q_{j,k} = {y ∈ R^M | q_j(Ax^k) + ⟨η^{j,k}, y − Ax^k⟩ ≤ 0}, (62)

where η^{j,k} ∈ ∂q_j(Ax^k).
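A Python sketch (with NumPy) of the step (60) with p = r = 1. The convex functions, their gradients, the matrix, and the iteration count are illustrative choices; since the functions chosen here are differentiable, their gradients serve as the subgradients in (61) and (62).

```python
import numpy as np

def halfspace_projection(v, fval, grad):
    """Project v onto {u : fval + <grad, u - v> <= 0}, the half-space of
    (61)/(62) built at the current point v; if fval <= 0, v already belongs."""
    if fval <= 0:
        return v
    return v - (fval / (grad @ grad)) * grad

# Illustrative data: C_1 = unit ball via c_1(x) = ||x||^2 - 1,
# Q_1 = half-plane via q_1(y) = y_1 + y_2 - 1.
c1, c1_grad = (lambda x: x @ x - 1.0), (lambda x: 2.0 * x)
q1, q1_grad = (lambda y: y[0] + y[1] - 1.0), (lambda y: np.array([1.0, 1.0]))

A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
alpha, beta = 1.0, 1.0
lam = np.linalg.eigvalsh(A.T @ A).max()
gamma = 1.0 / (alpha + lam * beta)           # a value in (0, 2/L)

x = np.array([4.0, 3.0])
for _ in range(3000):                        # iterative step (60)
    Ax = A @ x
    pc = halfspace_projection(x, c1(x), c1_grad(x))
    pq = halfspace_projection(Ax, q1(Ax), q1_grad(Ax))
    x = x + gamma * (alpha * (pc - x) + beta * (A.T @ (pq - Ax)))
```

Each step only evaluates the functions and their subgradients, avoiding the nonlinear minimization that an exact projection onto the ball's image constraints would otherwise require.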

A new possibility that follows from our present work is to solve the MSSFP with Algorithm 17 using E-δ operators. We present this in the framework of (14)–(17). Choosing parameters {δ_i}_{i=1}^{p+r}, such that 0 < δ_i ≤ 1 for all i = 1, 2, …, p + r, define the directed operators T_{C_i,δ_i} and T_{Q_j,δ_{p+j}} as in Definition 8.

Algorithm 20

Initialization: Let x^0 ∈ R^N be arbitrary.

Iterative step: For k ≥ 0 let

x^{k+1} = x^k + γ(Σ_{i=1}^p α_i(T_{C_i,δ_i}(x^k) − x^k) + Σ_{j=1}^r β_j A^t(T_{Q_j,δ_{p+j}}(Ax^k) − Ax^k)). (63)

Here γ ∈ (0, 2/L), with L = Σ_{i=1}^p α_i + λΣ_{j=1}^r β_j, where λ is the largest eigenvalue of the matrix A^tA.

Algorithm 19 is a special case of Algorithm 20 since, by Lemma 10, subgradient projectors are T_{E,δ} operators.

Finally, we mention that our work is related to significant real-world applications. In a recent paper [10], the multiple-sets split feasibility problem was applied to the inverse problem of intensity-modulated radiation therapy (IMRT). In this field beams of penetrating radiation are directed at the lesion (tumor) from external sources in order to eradicate the tumor without causing irreparable damage to surrounding healthy tissues, see, e.g., [12].

In addition to the physical and biological parameters of the irradiated object that are assumed known for the dose calculation, information about the capabilities and specifications of the available treatment machine (i.e., radiation source) is given. Based on medical diagnosis, knowledge, and experience, the physician prescribes desired upper and lower dose bounds to the treatment planning case. The output of a solution method for the inverse problem is a radiation intensity function (also called intensity map). Its values are the radiation intensities at the sources, as a function of source location, that would result in a dose function which agrees with the prescribed dose bounds.

Recently the concept of equivalent uniform dose (EUD) was introduced to describe dose distributions with a higher clinical relevance. These EUD constraints are defined for tumors as the biological equivalent dose that, if given uniformly, will lead to the same cell-kill in the tumor volume as the actual non-uniform dose distribution. They could also be defined for normal tissues. We developed in [10] a unified theory that enables treatment of both EUD constraints and physical dose constraints. This model relies on the multiple-sets split feasibility problem formulation and accommodates the specific IMRT situation.

Acknowledgments

We gratefully acknowledge discussions on the topic of this paper with our colleague Avi Motova. We thank an anonymous referee for helpful comments. This work was supported by grant No. 2003275 of the United States-Israel Binational Science Foundation (BSF) and by a National Institutes of Health (NIH) grant No. HL70472.

References

  • 1.Aharoni R, Berman A, Censor Y. An interior points algorithm for the convex feasibility problem. Linear Algebra and its Applications. 1983;120:479–489. [Google Scholar]
  • 2.Bauschke HH, Borwein JM. On projection algorithms for solving convex feasibility problems. SIAM Review. 1996;38:367–426. [Google Scholar]
  • 3.Bauschke HH, Combettes PL. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Mathematics of Operations Research. 2001;26:248–264. [Google Scholar]
  • 4.Bregman LM, Censor Y, Reich S, Zepkowitz-Malachi Y. Finding the projection of a point onto the intersection of convex sets via projections onto half-spaces. Journal of Approximation Theory. 2003;124:194–218. [Google Scholar]
  • 5.Browder FE. Convergence theorems for sequences of nonlinear operators in Banach spaces. Mathematische Zeitschrift. 1967;100:201–225. [Google Scholar]
  • 6.Byrne C. Bregman-Legendre multidistance projection algorithms for convex feasibility and optimization. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Elsevier; Amsterdam, The Netherlands: 2001. pp. 87–100. [Google Scholar]
  • 7.Byrne C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems. 2002;18:441–453. [Google Scholar]
  • 8.Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems. 2004;20:103–120. [Google Scholar]
  • 9.Byrne CL. Applied Iterative Methods. A.K. Peters, Ltd.; Wellsley, MA, USA: 2008. [Google Scholar]
  • 10.Censor Y, Bortfeld T, Martin B, Trofimov A. A unified approach for inversion problems in intensity-modulated radiation therapy. Physics in Medicine and Biology. 2006;51:2353–2365. doi: 10.1088/0031-9155/51/10/001. [DOI] [PubMed] [Google Scholar]
  • 11.Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms. 1994;8:221–239. [Google Scholar]
  • 12.Censor Y, Elfving T, Kopf N, Bortfeld T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems. 2005;21:2071–2084. [Google Scholar]
  • 13.Censor Y, Motova A, Segal A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. Journal of Mathematical Analysis and Applications. 2007;327:1244–1256. [Google Scholar]
  • 14.Censor Y, Segal A. On the string averaging method for sparse common fixed point problems. International Transactions in Operational Research. doi: 10.1111/j.1475-3995.2008.00684.x. accepted for publication. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Censor Y, Zenios SA. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press; New York, NY, USA: 1997. [Google Scholar]
  • 16.Combettes PL. Quasi-Fejérian analysis of some optimization algorithms. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and their Applications. Elsevier; Amsterdam: 2001. pp. 115–152. [Google Scholar]
  • 17.Crombez G. A geometrical look at iterative methods for operators with fixed points. Numerical Functional Analysis and Optimization. 2005;26:157–175. [Google Scholar]
  • 18.Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker; New York and Basel: 1984. [Google Scholar]
  • 19.Masad E, Reich S. A note on the multiple-set split convex feasibility problem in Hilbert space. Journal of Nonlinear and Convex Analysis. 2007;8:367–371. [Google Scholar]
  • 20.Pierra G. Decomposition through formalization in a product space. Mathematical Programming. 1984;28:96–115. [Google Scholar]
  • 21.Yang Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems. 2004;20:1261–1266. [Google Scholar]
  • 22.Zaknoon M. Algorithmic Developments for the Convex Feasibility Problem. University of Haifa; Haifa, Israel: Apr, 2003. Ph.D. Thesis. [Google Scholar]
  • 23.Zhao J, Yang Q. Several solution methods for the split feasibility problem. Inverse Problems. 2005;21:1791–1799. [Google Scholar]
