Optimization Letters. 2021 Feb 4;15(8):2719–2732. doi: 10.1007/s11590-021-01706-3

On the asymptotic behavior of the Douglas–Rachford and proximal-point algorithms for convex optimization

Goran Banjac, John Lygeros
PMCID: PMC8550334  PMID: 34721701

Abstract

Banjac et al. (J Optim Theory Appl 183(2):490–519, 2019) recently showed that the Douglas–Rachford algorithm provides certificates of infeasibility for a class of convex optimization problems. In particular, they showed that the difference between consecutive iterates generated by the algorithm converges to certificates of primal and dual strong infeasibility. Their result was shown in a finite-dimensional Euclidean setting and for a particular structure of the constraint set. In this paper, we extend the result to real Hilbert spaces and a general nonempty closed convex set. Moreover, we show that the proximal-point algorithm applied to the set of optimality conditions of the problem generates similar infeasibility certificates.

Keywords: Douglas–Rachford algorithm, Proximal-point algorithm, Convex optimization, Infeasibility detection

Introduction

Due to its very good practical performance and its ability to handle nonsmooth functions, the Douglas–Rachford algorithm has attracted a lot of interest for solving convex optimization problems. Provided that a problem is solvable and satisfies a certain constraint qualification, the algorithm converges to an optimal solution [1, Cor. 27.3]. If the problem is infeasible, then some of its iterates diverge [2].

Results on the asymptotic behavior of the Douglas–Rachford algorithm for infeasible problems are scarce, and most of them study specific cases such as feasibility problems involving two convex sets that do not intersect [3–5]. Although there have been some recent results studying a more general setting [6, 7], they impose additional assumptions on the feasibility of either the primal or the dual problem. The authors in [8] consider the problem of minimizing a convex quadratic function over a particular constraint set, and show that the iterates of the Douglas–Rachford algorithm generate an infeasibility certificate when the problem is primal and/or dual strongly infeasible. A similar analysis was applied in [9] to show that the proximal-point algorithm used for solving a convex quadratic program can also detect infeasibility.

The constraint set of the problem studied in [8] is represented in the form Ax ∈ C, where A is a real matrix and C is the Cartesian product of a convex compact set and a translated closed convex cone. This paper extends the result of [8] to real Hilbert spaces and a general nonempty closed convex set C. Moreover, we show that a similar analysis can be used to prove that the proximal-point algorithm for solving the same class of problems generates similar infeasibility certificates.

The paper is organized as follows. We introduce definitions and notation in the remainder of Sect. 1, and the problem under consideration in Sect. 2. Section 3 presents supporting results that are essential for generalizing the results in [8]. Finally, Sects. 4 and 5 analyze the asymptotic behavior of the Douglas–Rachford and proximal-point algorithms, respectively, and show that they provide infeasibility certificates for the considered problem.

Notation

Let H, H1, H2 be real Hilbert spaces with inner products ⟨·,·⟩, induced norms ‖·‖, and identity operators Id. The power set of H is denoted by 2^H. Let N denote the set of positive integers. For a sequence (s_n)_{n∈N}, we write s_n → s (s_n ⇀ s) if it converges strongly (weakly) to s, and define δs_{n+1} := s_{n+1} − s_n.

Let D be a nonempty subset of H with cl D its closure. Then T: D → H is nonexpansive if

(∀x ∈ D)(∀y ∈ D)  ‖Tx − Ty‖ ≤ ‖x − y‖,

and it is α-averaged with α ∈ ]0,1[ if there exists a nonexpansive operator R: D → H such that T = (1 − α)Id + αR. We denote the range of T by ran T. A set-valued operator B: H → 2^H, characterized by its graph

gra B = {(x, u) ∈ H × H : u ∈ Bx},

is monotone if

(∀(x, u) ∈ gra B)(∀(y, v) ∈ gra B)  ⟨x − y, u − v⟩ ≥ 0.

The inverse of B, denoted by B⁻¹, is defined through its graph

gra B⁻¹ = {(u, x) ∈ H × H : (x, u) ∈ gra B}.

For a proper lower semicontinuous convex function f: H → ]−∞, +∞], we define its:

  • conjugate: f*(u) := sup_{x∈H} (⟨x, u⟩ − f(x));
  • subdifferential: ∂f(x) := {u ∈ H : (∀y ∈ H) f(y) ≥ f(x) + ⟨y − x, u⟩};
  • proximal operator: Prox_f(x) := argmin_{u∈H} (f(u) + ½‖u − x‖²).

For a nonempty closed convex set C ⊂ H, we define its:

  • indicator function: ι_C(x) := 0 if x ∈ C, and +∞ otherwise;
  • support function: σ_C(u) := sup_{x∈C} ⟨x, u⟩;
  • projection operator: P_C(x) := argmin_{u∈C} ‖u − x‖;
  • normal cone at x ∈ C: N_C(x) := {u ∈ H : (∀y ∈ C) ⟨y − x, u⟩ ≤ 0};
  • recession cone: rec C := {u ∈ H : u + C ⊆ C};
  • polar cone of a cone K ⊂ H: K∘ := {u ∈ H : (∀x ∈ K) ⟨x, u⟩ ≤ 0}.

Problem of interest

Consider the following convex optimization problem:

minimize_{x∈H1}  ½⟨Qx, x⟩ + ⟨q, x⟩
subject to  Ax ∈ C,  (1)

with Q: H1 → H1 a monotone self-adjoint bounded linear operator, q ∈ H1, A: H1 → H2 a bounded linear operator, and C a nonempty closed convex subset of H2; we assume that ran Q and ran A are closed. The objective function of the problem is convex, continuous, and Fréchet differentiable [1, Prop. 17.36].

When H1 and H2 are finite-dimensional Euclidean spaces, problem (1) reduces to the one considered in [8], where the Douglas–Rachford algorithm (which is equivalent to the alternating direction method of multipliers) was shown to generate certificates of primal and dual strong infeasibility. Moreover, the authors proposed termination criteria for infeasibility detection, which are easy to implement and are used in several numerical solvers; see, e.g., [1012]. To prove the main results, they used the assumption that C can be represented as the Cartesian product of a convex compact set and a translated closed convex cone, which was exploited heavily in their proofs. In this paper we extend these results to the case where H1 and H2 are real Hilbert spaces, and C is a general nonempty closed convex set.

Optimality conditions

We can rewrite problem (1) in the form

minimize_{x∈H1}  ½⟨Qx, x⟩ + ⟨q, x⟩ + ι_C(Ax).

Provided that a certain constraint qualification holds, we can characterize its solutions by [1, Thm. 27.2]

0 ∈ Qx + q + A*∂ι_C(Ax),

and introducing a dual variable y ∈ ∂ι_C(Ax), we can rewrite the inclusion as

0 ∈ (Qx + q + A*y) × (−y + ∂ι_C(Ax)).  (2)

Introducing an auxiliary variable z ∈ C and using ∂ι_C = N_C, we can write the optimality conditions for problem (1) as

Ax − z = 0  (3a)
Qx + q + A*y = 0  (3b)
z ∈ C,  y ∈ N_C(z).  (3c)

Infeasibility certificates

The authors in [8] derived the following conditions for characterizing strong infeasibility of problem (1) and its dual:

Proposition 2.1

([8, Prop. 3.1])

  • (i)
    If there exists ȳ ∈ H2 such that
    A*ȳ = 0  and  σ_C(ȳ) < 0,
    then problem (1) is strongly infeasible.
  • (ii)
    If there exists x̄ ∈ H1 such that
    Qx̄ = 0,  Ax̄ ∈ rec C,  and  ⟨q, x̄⟩ < 0,
    then the dual of problem (1) is strongly infeasible.
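As a concrete illustration of these certificates (with hypothetical data, not taken from the paper), consider H1 = R, H2 = R², Ax = (x, −x), and C = [1, 2] × [1, 2]; the problem is strongly infeasible, and ȳ = (−1, −1) satisfies both conditions of Prop. 2.1(i):

```python
import numpy as np

# Hypothetical instance of problem (1) in R^2: constraint (x, -x) in C with
# C = [1, 2] x [1, 2]. Ax in C would require both x and -x in [1, 2] -- impossible.
A = np.array([[1.0], [-1.0]])
lo, hi = np.array([1.0, 1.0]), np.array([2.0, 2.0])   # C = [lo, hi] (box)

def sigma_C(y):
    """Support function of the box C: sum_i max(lo_i * y_i, hi_i * y_i)."""
    return float(np.sum(np.maximum(lo * y, hi * y)))

# Candidate certificate for Prop. 2.1(i):
y_bar = np.array([-1.0, -1.0])
print((A.T @ y_bar)[0])   # A* y_bar = 0
print(sigma_C(y_bar))     # sigma_C(y_bar) = -2 < 0  => problem (1) strongly infeasible
```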

Auxiliary results

Fact 3.1

Suppose that T: H → H is an averaged operator, and let s_0 ∈ H, s_n := T^n s_0, and δs := P_{cl ran(T − Id)}(0). Then

  • (i) (1/n) s_n → δs.
  • (ii) δs_n → δs.

Proof

The first result is [13, Cor. 3] and the second is [14, Cor. 2.3].

The following proposition provides essential ingredients for generalizing the results in [8, §5].

Proposition 3.2

Let (s_n)_{n∈N} be a sequence in H satisfying (1/n) s_n → δs. Let D ⊂ H be a nonempty closed convex set and define the sequences (p_n)_{n∈N} and (r_n)_{n∈N} by

p_n := P_D s_n,  r_n := (Id − P_D) s_n.

Then

  • (i) r_n ∈ (rec D)∘.
  • (ii) (1/n) p_n → δp := P_{rec D}(δs).
  • (iii) (1/n) r_n → δr := P_{(rec D)∘}(δs).
  • (iv) lim_{n→∞} (1/n) ⟨p_n, r_n⟩ = σ_D(δr).

Proof

(i): Follows from [15, Thm. 3.1].

(ii) and (iii): A related result was shown in [16, Lem. 6.3.13] and [17, Prop. 2.2] in a finite-dimensional setting. Using similar arguments here, together with those in [18, Lem. 4.3], we can only establish weak convergence, i.e., (1/n) p_n ⇀ δp. Using Moreau's decomposition [1, Thm. 6.30], it follows that (1/n) r_n ⇀ δr and ‖δs‖² = ‖δp‖² + ‖δr‖². For an arbitrary vector z ∈ D, [1, Thm. 3.16] yields

‖s_n − z‖² ≥ ‖p_n − z‖² + ‖r_n‖²,  ∀n ∈ N.

Dividing the inequality by n² and taking the limit superior, we get

lim ‖(1/n) s_n‖² ≥ lim sup (‖(1/n) p_n‖² + ‖(1/n) r_n‖²) ≥ lim sup ‖(1/n) p_n‖² + lim inf ‖(1/n) r_n‖²,

and thus

lim sup ‖(1/n) p_n‖² ≤ lim ‖(1/n) s_n‖² − lim inf ‖(1/n) r_n‖² ≤ ‖δs‖² − ‖δr‖² = ‖δp‖²,

where the second inequality follows from [1, Lem. 2.42]. The inequality above yields lim sup ‖(1/n) p_n‖ ≤ ‖δp‖, which due to [1, Lem. 2.51] implies (1/n) p_n → δp. Using Moreau's decomposition, it follows that (1/n) r_n → δr.

(iv): Taking the limit of the inequality

(∀n ∈ N)(∀p̂ ∈ D)  ⟨p̂, (1/n) r_n⟩ ≤ sup_{p∈D} ⟨p, (1/n) r_n⟩,

we obtain

(∀p̂ ∈ D)  lim_{n→∞} ⟨p̂, (1/n) r_n⟩ ≤ lim_{n→∞} sup_{p∈D} ⟨p, (1/n) r_n⟩,

and taking the supremum of the left-hand side over D, we get

sup_{p∈D} lim_{n→∞} ⟨p, (1/n) r_n⟩ ≤ lim_{n→∞} sup_{p∈D} ⟨p, (1/n) r_n⟩.  (4)

From [1, Prop. 6.47], we have

r_n = s_n − p_n ∈ N_D(p_n),

which, due to [1, Thm. 16.29] and the facts that ι_D* = σ_D and ∂ι_D = N_D, is equivalent to

⟨p_n, (1/n) r_n⟩ = σ_D((1/n) r_n).  (5)

Taking the limit of (5) and using (4), we obtain

lim_{n→∞} ⟨p_n, (1/n) r_n⟩ = lim_{n→∞} sup_{p∈D} ⟨p, (1/n) r_n⟩ ≥ sup_{p∈D} lim_{n→∞} ⟨p, (1/n) r_n⟩ = σ_D(δr).

Since p_n ∈ D, we also have

lim_{n→∞} ⟨p_n, (1/n) r_n⟩ ≤ sup_{p∈D} lim_{n→∞} ⟨p, (1/n) r_n⟩ = σ_D(δr).

The result follows by combining the two inequalities above.

The results of Prop. 3.2 are straightforward under the additional assumption that D is compact, since then rec D = {0} and (rec D)∘ = H, and thus

lim_{n→∞} (1/n) p_n = lim_{n→∞} (1/n) P_D s_n = 0 = P_{rec D}(δs),
lim_{n→∞} (1/n) r_n = lim_{n→∞} (1/n)(s_n − p_n) = δs = P_{(rec D)∘}(δs).

Moreover, the compactness of D implies the continuity of σ_D [1, Example 11.2], and thus taking the limit of (5) yields

lim_{n→∞} ⟨p_n, (1/n) r_n⟩ = lim_{n→∞} σ_D((1/n) r_n) = σ_D(lim_{n→∞} (1/n) r_n) = σ_D(δr).

When D is a (translated) closed convex cone, its recession cone is the cone itself, and the results of Prop. 3.2 can be shown using Moreau’s decomposition and some basic properties of the projection operator; see [8, Lem. A.3 and Lem. A.4] for details.

A result that motivated our generalization of these limits to an arbitrary nonempty closed convex set D is given in [18, Lem. 4.3], where Prop. 3.2(ii) is established in a finite-dimensional setting.

Douglas–Rachford algorithm

The Douglas–Rachford algorithm is an operator splitting method, which can be used to solve composite minimization problems of the form

minimize_{w∈H}  f(w) + g(w),  (6)

where f and g are proper lower semicontinuous convex functions. An iteration of the algorithm in application to problem (6) can be written as

w_n = Prox_g(s_n)
w̃_n = Prox_f(2w_n − s_n)
s_{n+1} = s_n + α(w̃_n − w_n),

where α ∈ ]0,2[ is the relaxation parameter.
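In finite dimensions, the iteration can be sketched in a few lines of Python; the scalar objective below and its proximal operators are hypothetical, chosen only because both proximal maps have closed forms:

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, s0, alpha=1.0, n_iter=200):
    """Generic Douglas-Rachford iteration for (6):
    w_n = Prox_g(s_n), w~_n = Prox_f(2 w_n - s_n), s_{n+1} = s_n + alpha (w~_n - w_n)."""
    s = s0
    for _ in range(n_iter):
        w = prox_g(s)
        w_tilde = prox_f(2 * w - s)
        s = s + alpha * (w_tilde - w)
    return prox_g(s)

# Hypothetical 1-D example: minimize |w - 3| + 0.5 * w**2, whose minimizer is w = 1.
prox_g = lambda s: s / 2.0                          # prox of g(w) = 0.5 w^2 (unit step)
prox_f = lambda s: s - np.clip(s - 3.0, -1.0, 1.0)  # prox of f(w) = |w - 3| (unit step)
w = douglas_rachford(prox_f, prox_g, np.array(0.0))
print(w)  # ~ 1.0
```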

If we rewrite problem (1) as (6) with

f(x, z) = ½⟨Qx, x⟩ + ⟨q, x⟩ + ι_{{(x,z) : Ax = z}}(x, z),  g(x, z) = ι_C(z),

then an iteration of the Douglas–Rachford algorithm takes the following form [8, 10]:

x̃_n = argmin_{x∈H1} ( ½⟨Qx, x⟩ + ⟨q, x⟩ + ½‖x − x_n‖² + ½‖Ax − (2P_C − Id)v_n‖² )  (7a)
x_{n+1} = x_n + α(x̃_n − x_n)  (7b)
v_{n+1} = v_n + α(Ax̃_n − P_C v_n).  (7c)

We will exploit the following well-known result to analyze the asymptotic behavior of the algorithm [19]:

Fact 4.1

Iteration (7) amounts to

(x_{n+1}, v_{n+1}) = T_DR(x_n, v_n),

where T_DR: H1 × H2 → H1 × H2 is an (α/2)-averaged operator.

The solution to the subproblem in (7a) satisfies the optimality condition

Qx̃_n + q + (x̃_n − x_n) + A*(Ax̃_n − (2P_C − Id)v_n) = 0.  (8)

If we rearrange (7b) to isolate x̃_n,

x̃_n = x_n + α⁻¹δx_{n+1},

and substitute it into (7c) and (8), we obtain the following relations between the iterates:

Ax_n − P_C v_n = −α⁻¹(Aδx_{n+1} − δv_{n+1})  (9a)
Qx_n + q + A*(Id − P_C)v_n = −α⁻¹((Q + Id)δx_{n+1} + A*δv_{n+1}).  (9b)

Let us define the following auxiliary iterates of iteration (7):

z_n := P_C v_n  (10a)
y_n := (Id − P_C)v_n.  (10b)

Observe that the pair (z_n, y_n) satisfies optimality condition (3c) for all n ∈ N [1, Prop. 6.47], and that the right-hand terms in (9) indicate how far the iterates (x_n, z_n, y_n) are from satisfying (3a) and (3b).
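To make the roles of (7) and (10) concrete, here is a minimal finite-dimensional sketch (hypothetical data) in which the problem is feasible, so the auxiliary iterates converge and the residuals in (3a)–(3b) vanish:

```python
import numpy as np

# Hypothetical 1-D instance of (1): minimize 0.5*x^2 - 4x subject to x in C = [0, 1];
# solution x = z = 1 with dual variable y = 3 in N_C(1).
Q = np.array([[1.0]]); q = np.array([-4.0]); A = np.array([[1.0]])
lo, hi = np.array([0.0]), np.array([1.0])
P_C = lambda v: np.clip(v, lo, hi)   # projection onto C
alpha = 1.0
K = Q + np.eye(1) + A.T @ A          # (7a) reduces to solving K x~ = x_n - q + A*(2 P_C - Id)v_n

x = np.zeros(1); v = np.zeros(1)
for _ in range(500):
    x_t = np.linalg.solve(K, x - q + A.T @ (2 * P_C(v) - v))   # (7a)
    x = x + alpha * (x_t - x)                                  # (7b)
    v = v + alpha * (A @ x_t - P_C(v))                         # (7c)

z, y = P_C(v), v - P_C(v)               # auxiliary iterates (10)
print(x, z, y)                          # ~ [1.] [1.] [3.]
print(A @ x - z, Q @ x + q + A.T @ y)   # residuals (3a), (3b) ~ 0
```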

The following corollary follows directly from Fact 3.1, Prop. 3.2, Fact 4.1, and Moreau’s decomposition [1, Thm. 6.30]:

Corollary 4.2

Let the sequences (x_n)_{n∈N}, (v_n)_{n∈N}, (z_n)_{n∈N}, and (y_n)_{n∈N} be given by (7) and (10), and let (δx, δv) := P_{cl ran(T_DR − Id)}(0). Then

  • (i) (1/n)(x_n, v_n) → (δx, δv).
  • (ii) (δx_n, δv_n) → (δx, δv).
  • (iii) y_n ∈ (rec C)∘.
  • (iv) (1/n) z_n → δz := P_{rec C}(δv).
  • (v) (1/n) y_n → δy := P_{(rec C)∘}(δv).
  • (vi) lim_{n→∞} (1/n) ⟨z_n, y_n⟩ = σ_C(δy).
  • (vii) δz + δy = δv.
  • (viii) ⟨δz, δy⟩ = 0.
  • (ix) ‖δz‖² + ‖δy‖² = ‖δv‖².

The following two propositions generalize [8, Prop. 5.1 and Prop. 5.2], though the proofs follow very similar arguments.

Proposition 4.3

The following relations hold between δx, δz, and δy, which are defined in Cor. 4.2:

  • (i) Aδx = δz.
  • (ii) Qδx = 0.
  • (iii) A*δy = 0.
  • (iv) δz_n → δz.
  • (v) δy_n → δy.

Proof

  • (i)
    Divide (9a) by n, take the limit, and use Cor. 4.2(iv) to get
    Aδx = lim_{n→∞} (1/n) P_C v_n = δz.  (11)
  • (ii)
    Divide (9b) by n, take the inner product of both sides with δx, and take the limit to obtain
    ⟨Qδx, δx⟩ = −lim_{n→∞} ⟨Aδx, (1/n)(Id − P_C)v_n⟩ = −⟨δz, δy⟩ = 0,
    where we used (11) and Cor. 4.2(v) in the second equality, and Cor. 4.2(viii) in the third. Due to [1, Cor. 18.18], the equality above implies
    Qδx = 0.  (12)
  • (iii)
    Divide (9b) by n, take the limit, and use (12) to obtain
    0 = lim_{n→∞} (1/n) A*(Id − P_C)v_n = A*δy,
    where we used Cor. 4.2(v) in the second equality.
  • (iv)
    Subtracting (9a) at iterations n+1 and n, and taking the limit yield
    lim_{n→∞} δz_n = Aδx = δz,
    where the second equality follows from (11).
  • (v)
    From (10) we have
    lim_{n→∞} δy_n = lim_{n→∞} (δv_n − δz_n) = δv − δz = δy,
    where the last equality follows from Cor. 4.2(vii).

Proposition 4.4

The following identities hold for δx and δy, which are defined in Cor. 4.2:

  • (i) ⟨q, δx⟩ = −α⁻¹‖δx‖² − α⁻¹‖Aδx‖².
  • (ii) σ_C(δy) = −α⁻¹‖δy‖².

Proof

Take the inner product of both sides of (9b) with δx and use (12) to obtain

⟨q, δx⟩ + ⟨Aδx, y_n⟩ = −α⁻¹⟨δx, δx_{n+1}⟩ − α⁻¹⟨Aδx, δv_{n+1}⟩.

Taking the limit and using Prop. 4.3(i) and Cor. 4.2(vii) and (viii) give

⟨q, δx⟩ + α⁻¹‖δx‖² + α⁻¹‖δz‖² = −lim_{n→∞} ⟨δz, y_n⟩ ≥ 0,  (13)

where the inequality follows from Cor. 4.2(iii) and (iv), as the inner product of terms in rec C and (rec C)∘ is nonpositive. Now take the inner product of both sides of (9a) with δy to obtain

⟨A*δy, x_n + α⁻¹δx_{n+1}⟩ − ⟨δy, P_C v_n⟩ = α⁻¹⟨δy, δv_{n+1}⟩.

Due to Prop. 4.3(iii), the first inner product on the left-hand side is zero. Taking the limit and using Cor. 4.2(vii) and (viii), we obtain

−α⁻¹‖δy‖² = lim_{n→∞} ⟨δy, P_C v_n⟩ ≤ sup_{z∈C} ⟨δy, z⟩ = σ_C(δy),

or equivalently,

σ_C(δy) + α⁻¹‖δy‖² ≥ 0.  (14)

Summing (13) and (14) and using Cor. 4.2(ix), we obtain

⟨q, δx⟩ + σ_C(δy) + α⁻¹‖δx‖² + α⁻¹‖δv‖² ≥ 0.  (15)

Now take the inner product of both sides of (9b) with x_n to obtain

⟨Qx_n, x_n⟩ + ⟨q, x_n⟩ + ⟨Ax_n, y_n⟩ = −α⁻¹⟨(Q + Id)δx_{n+1}, x_n⟩ − α⁻¹⟨Ax_n, δv_{n+1}⟩.

Dividing by n, taking the limit, and using Prop. 4.3(i) and (ii) and Cor. 4.2(vii) and (viii) yield

lim_{n→∞} (1/n)⟨Qx_n, x_n⟩ + ⟨q, δx⟩ + lim_{n→∞} (1/n)⟨Ax_n, y_n⟩ = −α⁻¹‖δx‖² − α⁻¹‖δz‖².

We can write the last term on the left-hand side as

lim_{n→∞} (1/n)⟨Ax_n, y_n⟩ = lim_{n→∞} (1/n)⟨z_n + α⁻¹(δv_{n+1} − Aδx_{n+1}), y_n⟩ = lim_{n→∞} (1/n)⟨z_n, y_n⟩ + α⁻¹‖δy‖² = σ_C(δy) + α⁻¹‖δy‖²,

where the first equality follows from (9a), the second from Prop. 4.3(i) and Cor. 4.2(v) and (vii), and the third from Cor. 4.2(vi). Plugging this equality into the preceding one, we obtain

⟨q, δx⟩ + σ_C(δy) + α⁻¹‖δx‖² + α⁻¹‖δv‖² = −lim_{n→∞} (1/n)⟨Qx_n, x_n⟩ ≤ 0,  (16)

where the inequality follows from the monotonicity of Q. Comparing inequalities (15) and (16), it follows that they must be satisfied with equality. Consequently, the left-hand sides of (13) and (14) must be zero. This concludes the proof.

Given the infeasibility conditions in Prop. 2.1, it follows from Prop. 4.3 and Prop. 4.4 that, if the limit δy is nonzero, then problem (1) is strongly infeasible, and similarly, if δx is nonzero, then its dual is strongly infeasible. Thanks to the fact that (δy_n, δx_n) → (δy, δx), we can now extend the termination criteria proposed in [8, §5.2] to the more general case where C is a general nonempty closed convex set. The criteria in [8, §5.2] evaluate the conditions of Prop. 2.1 at δy_n and δx_n, and have already formed the basis for stable numerical implementations [10, 11]. Our results pave the way for similar developments in the more general setting considered here.
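As a numerical sanity check (hypothetical data, in the spirit of the termination criteria of [8, §5.2]), the sketch below runs iteration (7) on a strongly infeasible instance and evaluates the conditions of Prop. 2.1(i) at the one-step difference δy_n:

```python
import numpy as np

# Hypothetical strongly infeasible instance of (1): minimize 0 subject to
# (x, -x) in C = [1, 2] x [1, 2]; ran A and C are strictly separated.
Q = np.zeros((1, 1)); q = np.zeros(1)
A = np.array([[1.0], [-1.0]])
lo, hi = np.array([1.0, 1.0]), np.array([2.0, 2.0])
P_C = lambda v: np.clip(v, lo, hi)                             # projection onto C
sigma_C = lambda y: float(np.sum(np.maximum(lo * y, hi * y)))  # support function of C

alpha = 1.0
K = Q + np.eye(1) + A.T @ A        # matrix of subproblem (7a)
x = np.zeros(1); v = np.zeros(2)
for _ in range(300):
    x_t = np.linalg.solve(K, x - q + A.T @ (2 * P_C(v) - v))   # (7a)
    x_prev, v_prev = x.copy(), v.copy()
    x = x + alpha * (x_t - x)                                  # (7b)
    v = v + alpha * (A @ x_t - P_C(v))                         # (7c)

# One-step differences of the auxiliary iterates (10): dy_n = y_n - y_{n-1}
dy = (v - P_C(v)) - (v_prev - P_C(v_prev))
dx = x - x_prev
eps = 1e-8
primal_infeasible = np.linalg.norm(A.T @ dy) <= eps * np.linalg.norm(dy) and sigma_C(dy) < 0
print(dy, primal_infeasible)   # dy = [-1. -1.]: A* dy = 0 and sigma_C(dy) = -2 < 0 -> True
```

Here dy settles at (−1, −1) with σ_C(dy) = −2 = −α⁻¹‖dy‖², matching Prop. 4.4(ii), while dx stays zero, so no dual infeasibility certificate is produced.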

Proximal-point algorithm

The proximal-point algorithm is a method for finding a vector wH that solves the following inclusion problem:

0 ∈ B(w),  (17)

where B:H2H is a maximally monotone operator. An iteration of the algorithm in application to problem (17) can be written as

w_{n+1} = (Id + γB)⁻¹ w_n,

where γ>0 is the regularization parameter.
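In finite dimensions, and for an affine operator B, the resolvent reduces to a single linear solve; the following sketch (hypothetical data) applies the iteration to a skew-symmetric, hence monotone, B:

```python
import numpy as np

# Proximal-point iteration for 0 in B(w) with an affine maximally monotone
# operator B(w) = M w + c (M skew-symmetric, hence monotone). The resolvent
# (Id + gamma*B)^{-1} is a single linear solve. All data are hypothetical.
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])
c = np.array([1.0, -2.0])
gamma = 1.0

w = np.zeros(2)
for _ in range(200):
    # w_{n+1} = (Id + gamma*B)^{-1} w_n  <=>  (I + gamma*M) w_{n+1} = w_n - gamma*c
    w = np.linalg.solve(np.eye(2) + gamma * M, w - gamma * c)
print(w)  # ~ [2. 1.], the solution of M w + c = 0
```

Note that the explicit update w ← w − γ(Mw + c) would diverge for this skew-symmetric M (its iteration map has spectral radius greater than one for every γ > 0); the implicit resolvent step is what makes the proximal-point iteration work here.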

Due to [1, Cor. 16.30], we can rewrite (2) as

0 ∈ M(x, y) := (Qx + q + A*y) × (−Ax + ∂ι_C*(y)),

where M: H1 × H2 → 2^(H1 × H2) is a maximally monotone operator [20]. An iteration of the proximal-point algorithm in application to the inclusion above is then

(x_{n+1}, y_{n+1}) = (Id + γM)⁻¹(x_n, y_n),  (18)

which was also analyzed in [12]. We will exploit the following result [1, Prop. 23.8] to analyze the algorithm:

Fact 5.1

Operator T_PP := (Id + γM)⁻¹ is the resolvent of a maximally monotone operator and is thus (1/2)-averaged.

Iteration (18) reads

0 = x_{n+1} − x_n + γ(Qx_{n+1} + q + A*y_{n+1})  (19a)
0 ∈ y_{n+1} − y_n + γ(−Ax_{n+1} + ∂ι_C*(y_{n+1})).  (19b)

Inclusion (19b) can be written as

γAx_{n+1} + y_n ∈ (Id + γ∂ι_C*)(y_{n+1}),

which is equivalent to [1, Prop. 16.44]

y_{n+1} = Prox_{γι_C*}(γAx_{n+1} + y_n) = γAx_{n+1} + y_n − γP_C(Ax_{n+1} + γ⁻¹y_n),  (20)

where the second equality follows from [1, Thm. 14.3]. Let us define the following auxiliary iterates of iteration (18):

v_{n+1} := Ax_{n+1} + γ⁻¹y_n  (21a)
z_{n+1} := P_C v_{n+1},  (21b)

and observe from (20) that

y_{n+1} = γ(Id − P_C)v_{n+1}.

Using (19a) and (20), we now obtain the following relations between the iterates:

Ax_{n+1} − P_C v_{n+1} = γ⁻¹δy_{n+1}  (22a)
Qx_{n+1} + q + γA*(Id − P_C)v_{n+1} = −γ⁻¹δx_{n+1}.  (22b)

Similarly as for the Douglas–Rachford algorithm, the pair (z_{n+1}, y_{n+1}) satisfies optimality condition (3c) for all n ∈ N. Observe that the optimality residuals, given by the norms of the left-hand terms in (22), can be computed simply by evaluating the norms of δy_{n+1} and δx_{n+1}.
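The second equality in (20) is Moreau's decomposition in disguise: with ι_C* = σ_C, one has Prox_{γσ_C}(u) = u − γP_C(u/γ). A quick one-dimensional numerical check (hypothetical data) against brute-force minimization:

```python
import numpy as np

# One-dimensional check of the projection form used in (20), assuming C = [lo, hi]:
# Prox_{gamma * sigma_C}(u) = u - gamma * P_C(u / gamma), since iota_C^* = sigma_C.
lo, hi, gamma, u = -1.0, 2.0, 0.5, 3.0
prox_closed_form = u - gamma * np.clip(u / gamma, lo, hi)

# Brute force: minimize sigma_C(y) + (1/(2*gamma)) * (y - u)^2 over a fine grid.
ys = np.linspace(-10.0, 10.0, 200_001)
sigma = np.where(ys >= 0, hi * ys, lo * ys)   # sigma_C(y) for the interval [lo, hi]
prox_grid = ys[np.argmin(sigma + (ys - u) ** 2 / (2 * gamma))]
print(prox_closed_form, prox_grid)  # both ~ 2.0
```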

The following corollary follows directly from Fact 3.1, Prop. 3.2, and Fact 5.1:

Corollary 5.2

Let the sequences (x_n)_{n∈N}, (y_n)_{n∈N}, (v_n)_{n∈N}, and (z_n)_{n∈N} be given by (18) and (21), and let (δx, δy) := P_{cl ran(T_PP − Id)}(0). Then

  • (i) (1/n)(x_n, y_n, v_n) → (δx, δy, δv), where δv := Aδx + γ⁻¹δy.
  • (ii) (δx_n, δy_n, δv_n) → (δx, δy, δv).
  • (iii) y_{n+1} ∈ (rec C)∘.
  • (iv) (1/n) z_n → δz := P_{rec C}(δv).
  • (v) δy = γP_{(rec C)∘}(δv).
  • (vi) lim_{n→∞} (1/n) ⟨z_n, y_n⟩ = σ_C(δy).

The proofs of the following two propositions follow arguments similar to those in Sect. 4, and are thus omitted.

Proposition 5.3

The following relations hold between δx, δz, and δy, which are defined in Cor. 5.2:

  • (i) Aδx = δz.
  • (ii) Qδx = 0.
  • (iii) A*δy = 0.

Proposition 5.4

The following identities hold for δx and δy, which are defined in Cor. 5.2:

  • (i) ⟨q, δx⟩ = −γ⁻¹‖δx‖².
  • (ii) σ_C(δy) = −γ⁻¹‖δy‖².

The authors in [12] use termination criteria similar to those given in [8, §5.2] to detect infeasibility of convex quadratic programs using the algorithm given by iteration (18), though they do not prove that δy and δx are indeed infeasibility certificates whenever the problem is strongly infeasible. The identities in (22) show that, when (δy, δx) = (0, 0), the optimality conditions (3) are satisfied in the limit. Otherwise, Prop. 2.1, Prop. 5.3, and Prop. 5.4 imply that problem (1) and/or its dual is strongly infeasible.

Remark 5.5

Weak infeasibility of problem (1) means that the sets ran A and C do not intersect, but that the distance between them is zero. In such cases, there exists no ȳ ∈ H2 satisfying the conditions in Prop. 2.1, and the algorithms studied in Sects. 4–5 would yield δy_n → δy = 0. A similar reasoning holds for the weak infeasibility of the dual problem, for which the algorithms would yield δx_n → δx = 0.

Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement OCAL, No. 787845.

Funding

Open Access funding provided by ETH Zurich.

Compliance with ethical standards

Data Availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Goran Banjac, Email: gbanjac@ethz.ch.

John Lygeros, Email: jlygeros@ethz.ch.

References

  • 1. Bauschke HH, Combettes PL. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. 2nd edn. New York: Springer; 2017.
  • 2. Eckstein J, Bertsekas DP. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992;55(1):293–318. doi: 10.1007/BF01581204.
  • 3. Bauschke HH, Dao MN, Moursi WM. The Douglas–Rachford algorithm in the affine-convex case. Oper. Res. Lett. 2016;44(3):379–382. doi: 10.1016/j.orl.2016.03.010.
  • 4. Bauschke HH, Moursi WM. The Douglas–Rachford algorithm for two (not necessarily intersecting) affine subspaces. SIAM J. Optim. 2016;26(2):968–985. doi: 10.1137/15M1016989.
  • 5. Bauschke HH, Moursi WM. On the Douglas–Rachford algorithm. Math. Program. 2017;164(1):263–284. doi: 10.1007/s10107-016-1086-3.
  • 6. Ryu E, Liu Y, Yin W. Douglas–Rachford splitting and ADMM for pathological convex optimization. Comput. Optim. Appl. 2019;74:747–778. doi: 10.1007/s10589-019-00130-9.
  • 7. Bauschke HH, Moursi WM. On the behavior of the Douglas–Rachford algorithm for minimizing a convex function subject to a linear constraint. SIAM J. Optim. 2020;30(3):2559–2576. doi: 10.1137/19M1281538.
  • 8. Banjac G, Goulart P, Stellato B, Boyd S. Infeasibility detection in the alternating direction method of multipliers for convex optimization. J. Optim. Theory Appl. 2019;183(2):490–519. doi: 10.1007/s10957-019-01575-y.
  • 9. Liao-McPherson D, Kolmanovsky I. FBstab: a proximally stabilized semismooth algorithm for convex quadratic programming. Automatica. 2020. doi: 10.1016/j.automatica.2019.108801.
  • 10. Stellato B, Banjac G, Goulart P, Bemporad A, Boyd S. OSQP: an operator splitting solver for quadratic programs. Math. Program. Comput. 2020;12(4):637–672. doi: 10.1007/s12532-020-00179-2.
  • 11. Garstka M, Cannon M, Goulart P. COSMO: a conic operator splitting method for large convex problems. In: European Control Conference (ECC); 2019. doi: 10.23919/ECC.2019.8796161.
  • 12. Hermans B, Themelis A, Patrinos P. QPALM: a Newton-type proximal augmented Lagrangian method for quadratic programs. In: IEEE Conference on Decision and Control (CDC); 2019. doi: 10.1109/CDC40024.2019.9030211.
  • 13. Pazy A. Asymptotic behavior of contractions in Hilbert space. Israel J. Math. 1971;9(2):235–240. doi: 10.1007/BF02771588.
  • 14. Baillon JB, Bruck RE, Reich S. On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houston J. Math. 1978;4(1):1–9.
  • 15. Zarantonello EH. Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello EH, editor. Contributions to Nonlinear Functional Analysis. Cambridge: Academic Press; 1971. pp. 237–424.
  • 16. Facchinei F, Pang JS. Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research and Financial Engineering. New York: Springer; 2003. doi: 10.1007/b97543.
  • 17. Gowda MS, Sossa D. Weakly homogeneous variational inequalities and solvability of nonlinear equations over cones. Math. Program. 2019;177:149–171. doi: 10.1007/s10107-018-1263-7.
  • 18. Shen J, Lebair TM. Shape restricted smoothing splines via constrained optimal control and nonsmooth Newton's methods. Automatica. 2015;53:216–224. doi: 10.1016/j.automatica.2014.12.040.
  • 19. Lions P, Mercier B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979;16(6):964–979. doi: 10.1137/0716071.
  • 20. Rockafellar RT. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976;1(2):97–116. doi: 10.1287/moor.1.2.97.



Articles from Optimization Letters are provided here courtesy of Springer
