Cognitive Neurodynamics. 2023 Sep 4;18(4):2095–2110. doi: 10.1007/s11571-023-09998-0

Solving general convex quadratic multi-objective optimization problems via a projection neurodynamic model

Mohammadreza Jahangiri, Alireza Nazemi

Abstract

A neural network model is constructed to solve the convex quadratic multi-objective programming problem (CQMPP). The CQMPP is first converted into an equivalent single-objective convex quadratic programming problem by means of the weighted sum method, where Pareto optimal solutions (POSs) are obtained by varying the values of the weights. Then, for the chosen weight values, multiple projection neural networks are employed to search for Pareto optimal solutions. Based on Lyapunov theory, the proposed neural network approach is shown to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the single-objective problem. The simulation results also show that the presented model is feasible and efficient.

Keywords: Multi-objective optimization problem, Neural networks, Convex quadratic programming problem, Pareto optimal solution, Stability, Convergence

Introduction

The CQMPP is considered as

\min \ f(x) = (f_1(x), f_2(x), \ldots, f_K(x))    (1)

         subject to

x \in S = \{x \in \Omega \mid Ex - g \leq 0,\ Ax - b = 0\},    (2)

where the objective functions are of the form

f_k(x) = \frac{1}{2} x^T Q_k x + c_k^T x,

\Omega = \{x \in \mathbb{R}^n \mid l_i \leq x_i \leq h_i \ \text{for} \ i = 1, 2, \ldots, n\}, and some h_i (or -l_i) may be +\infty, Q_k \in \mathbb{R}^{n \times n} is a real symmetric positive semidefinite matrix, E \in \mathbb{R}^{m \times n}, g \in \mathbb{R}^m, A \in \mathbb{R}^{l \times n} with \mathrm{rank}(A) = l (0 \leq l < n), and b \in \mathbb{R}^l. It is clear that the functions f_k (k = 1, \ldots, K) are convex and twice continuously differentiable on an open convex set \mathcal{E} \subseteq \mathbb{R}^n including \Omega. Throughout this paper, we assume that the K objectives conflict with each other. Therefore, the target of the CQMPP is to find the set of efficient solutions, called the Pareto set.
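For concreteness, the following minimal sketch collects the data that define one CQMPP instance (1)–(2). It is written in Python rather than the Matlab used later for the simulations, and all names are illustrative assumptions, not part of the paper.

```python
# A minimal container for one CQMPP instance (1)-(2): K quadratic objectives,
# linear inequality and equality constraints, and a box Omega.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class CQMPP:
    Q: List[np.ndarray]   # K symmetric positive semidefinite matrices Q_k (n x n)
    c: List[np.ndarray]   # K linear terms c_k (n,)
    E: np.ndarray         # inequality matrix, Ex - g <= 0
    g: np.ndarray
    A: np.ndarray         # equality matrix, Ax = b, rank(A) = l < n
    b: np.ndarray
    lo: np.ndarray        # box Omega: lo <= x <= hi (entries may be +/- infinity)
    hi: np.ndarray

    def f(self, k: int, x: np.ndarray) -> float:
        """Value of the k-th objective f_k(x) = 0.5 x^T Q_k x + c_k^T x."""
        return 0.5 * x @ self.Q[k] @ x + self.c[k] @ x
```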

Multi-objective optimization has become an important research area for both scientists and practitioners. In a multi-objective optimization problem, several objective functions need to be optimized simultaneously. Because the objectives conflict, there may not exist a single solution that is best with respect to all of them. Therefore, there usually exists a set of solutions that cannot simply be compared with each other. For such solutions, called non-dominated solutions or POSs, no improvement is possible in any objective function without sacrificing at least one of the other objective functions (Miettinen 2002; Copado-Mendez et al. 2014; Oberdiecka and Pistikopoulos 2016; Mian et al. 2015; Hwang and Masud 1979). For an introduction to multi-objective optimization we refer the reader to Chinchuluun and Pardalos (2007), Ehrgott (2005) and Ehrgott and Wiecek (2005). For recent works and surveys of developments on multi-objective optimization problems, one can see (Jiang et al. 2022; Pan et al. 2022; Zhang et al. 2022; Lai et al. 2022; Hu et al. 2022; Kabgani and Soleimani-damaneh 2022; Ghalavand et al. 2021; Shavazipour et al. 2021; Dominguez-Rios et al. 2021; Groetzner and Werner 2022).

The multi-objective problem is convex if all its objective functions are convex and its feasible set is convex. There are several methods for solving these kinds of problems. For instance, for multi-objective linear programming (MOLP) problems, researchers have developed a variety of methods for generating the efficient set or the non-dominated set such as multi-objective simplex methods, interior point methods, and objective space methods. Multi-objective simplex methods and interior point methods work in variable space to find efficient solutions, see the references in Ehrgott and Wiecek (2005). Benson in Benson (1998) has studied an outer approximation method to find the extended feasible set in objective space (and thereby the non-dominated set) of an MOLP. In Ehrgott et al. (2007) and Shao and Ehrgott (2008) some improvements for this algorithm have been suggested. Krichen et al. in Krichen et al. (2012) have considered a new method based on the concept of adjacencies between efficient extreme points for generating the entire efficient set for MOLPs in the outcome space. Matejas and Peric in Matejas and Peric (2014) have suggested a new iterative method based on the principles of game theory for solving MOLPs with an arbitrary number of decision makers. Antunes et al. in Antunes et al. (2016) have studied five multiobjective interactive methods for MOLP problems, which are representative of different strategies of search. Lu et al. in Lu et al. (2020) have solved an equivalent mixed integer programming problem to compute an optimal solution to the MOLP problems. Shao and Ehrgott in Shao and Ehrgott (2008) introduced an approximation version of Benson’s algorithm to sandwich the extended feasible set of an MOLP with an outer approximation and an inner approximation.

A more general case of the MOLP is the CQMPP as stated in (1)–(2). Problems of type (1)–(2) arise in many different applications, such as engineering, economics and biological systems (see Miettinen 2002; Copado-Mendez et al. 2014; Oberdiecka and Pistikopoulos 2016; Mian et al. 2015). One of the most well-known strategies to obtain a Pareto point of (1)–(2) is the ε-constraint method (Miettinen 2002). However, in this strategy the solution of the CQMPP depends on the values of certain parameters, namely the ε_j. Hence, while many researchers consider the iterative solution of the resulting optimization problems for different parameter values (Capitanescu et al. 2015; Antipova et al. 2015), some attention has been given to the explicit calculation of the entire Pareto front via parametric programming, which solves the optimization problems as a function of a range of parameters. In Gass and Saaty (1955) and Yuf and Zeleny (1976), the authors considered the case of linear cost functions, and in Papalexandri and Dimkou (1998) the case of a mixed-integer nonlinear multi-objective optimization problem was addressed. The case of quadratic cost functions was treated in Goh and Yang (1996) and Ghaffari-Hadigheh et al. (2010), although either only conceptually or for the case where the quadratic part remains constant. In Oberdiecka and Pistikopoulos (2016), the authors introduced an algorithm for the approximate explicit solution of multi-objective optimization problems with general convex quadratic cost functions and linear constraints by multi-parametric programming. In Ehrgott et al. (2011), Ehrgott et al. extended the algorithm in Shao and Ehrgott (2008) to approximately solve the CQMPP. De Santis and Eichfelder in De Santis and Eichfelder (2021) presented a branch-and-bound algorithm for minimizing multiple convex quadratic objective functions over integer variables. Shang and Yu in Shang and Yu (2011) presented a constraint shifting combined homotopy method for solving the CQMPP with both equality and inequality constraints. Luquea et al. in Luquea et al. (2010) described an interactive procedural algorithm for the CQMPP based upon the Tchebycheff method, Wierzbicki's reference point approach, and the procedure of Michalowski and Szapiro. Zerfa and Chergui in Zerfa and Chergui (2022) presented a branch-and-bound based algorithm to generate all non-dominated points for a multiobjective integer programming problem with convex quadratic objective functions and linear constraints.

In the past decades, neural networks have been successfully applied to various fields such as signal processing, pattern recognition, fixed-point computations, control and other scientific areas. Mathematically, the neurodynamic approach converts an optimization problem into a dynamical system, or a system of ordinary differential equations, whose equilibrium point corresponds to the optimal solution of the optimization problem. The optimal solution can then be obtained by tracking the solution trajectory of the designed dynamical system in a hardware or software implementation. The literature contains many studies on neural networks for solving real-time optimization problems (see Kennedy and Chua 1988; Rodriguez-Vazquez et al. 1990; Friesz et al. 1994; Gao 2004; Gao et al. 2005; Nazemi and Effati 2013; Nazemi 2012, 2013, 2011; Nazemi and Omidi 2012; Yang and Cao 2008; Xia 2004; Xia and Wang 2000; Nikseresht and Nazemi 2019, 2018; Karbasi et al. 2021; Feizi and Nazemi 2021; Karbasi et al. 2020; Nazemi and Sabeghi 2019; Nazemi and Mortezaee 2019; Nazemi 2019; Arjmandzadeh et al. 2019). Although each of these studies works successfully for a specific kind of optimization problem, to the best of our knowledge there exists only one neural network approach that solves the CQMPP (see Rizk-Allah and Abo-Sinna 2017 and Abo-Sinna and Rizk-Allah 2018). However, the structure of that model is rather complicated and further simplification can be achieved. Thus, proposing an efficient neural network for solving CQMPPs with good stability properties and convergence results is necessary and meaningful. The main contributions of this study are as follows:

  • (i)

    In the presented model, the primal and dual problems can be solved simultaneously.

  • (ii)

    By the means of the weighted sum method, POSs can be obtained through using different weights.

  • (iii)

    The new network converges to the optimal solution even when \sum_{k=1}^K w_k Q_k is only positive semidefinite, a case that the models in Nazemi and Effati (2013) and Nazemi (2012) cannot handle.

Motivated by the former discussions, in this paper we consider a neural network approach for solving CQMPPs. First, the CQMPP is transformed into a single-objective convex quadratic program by means of the weighted sum method. Based on the Karush-Kuhn-Tucker (KKT) optimality conditions, and in order to obtain a set of Pareto optimal solutions, a collection of neural networks associated with various values of the weights is constructed. The equilibrium point of the neural network is proved to be equivalent to the KKT point of the convex program. The existence and uniqueness of an equilibrium point of the neural network are also analyzed. By constructing a suitable Lyapunov function, the studied neural network is proved, for an arbitrary initial point, to be globally stable in the sense of Lyapunov and capable of obtaining the optimal solution of the resulting nonlinear program for different values of the weights. Furthermore, the POSs of the CQMPP are generated by utilizing different weights.

This paper is organized as follows. In Sect. 2, some preliminaries of multi-objective optimization are described. In Sect. 3, the neural network model and the collaborative neurodynamic strategy are addressed. In Sect. 4, the stability of the equilibrium point and the convergence to the optimal solution are investigated. A comparison of the suggested model with some previous neural networks is given in Sect. 5. In Sect. 6, several illustrative examples and simulation results are given to show the feasibility of the proposed model. Conclusions are given in Sect. 7.

Preliminaries of multi-objective optimization

This section provides the necessary mathematical background for multi-objective optimization problems (Miettinen 2002; Chankong and Haimes 1983).

Definition 1

The natural ordering cone is defined as follows:

\mathbb{R}^K_{\geq} = \{x \in \mathbb{R}^K : x_k \geq 0,\ k = 1, 2, \ldots, K\}.

For any y, \bar{y} \in \mathbb{R}^K:

y < \bar{y} \iff y_k < \bar{y}_k \ \text{for all} \ k = 1, 2, \ldots, K,

y \leqq \bar{y} \iff y_k \leq \bar{y}_k \ \text{for all} \ k = 1, 2, \ldots, K,

y \leq \bar{y} \iff y_k \leq \bar{y}_k \ \text{for all} \ k = 1, 2, \ldots, K \ \text{and} \ y \neq \bar{y}.

Due to the conflict among the objective functions, it is often not possible to find a single solution that would be optimal for all the objectives simultaneously. As a result, the solution of problem (1)–(2) is defined through the concept of Pareto optimality, which refers to a solution at which no objective function can be improved without detriment to at least one of the other objective functions. A formal definition of Pareto optimality is given in the following.

Definition 2

A feasible solution x^* \in S is called

  • (i)

    an efficient (a Pareto optimal) solution for problem (1)–(2), if there is no other x \in S such that f(x) \leq f(x^*).

  • (ii)

    a weakly efficient (a weakly Pareto optimal) solution for problem (1)–(2), if there is no other x \in S such that f(x) < f(x^*).

In order to acquire Pareto optimal solutions, one widely used approach is the weighted sum method (Abd El-Waheda et al. 2010; Miettinen 2002), which transforms the multiple objectives into a single one by multiplying each objective with a weight. Given a weighting vector w = (w_1, w_2, \ldots, w_K)^T, problem (1)–(2) is converted into

\min \ \sum_{k=1}^K w_k f_k(x)    (3)

         subject to

x \in S, \quad w \in W = \Big\{w \in \mathbb{R}^K \mid w_k \geq 0,\ \sum_{k=1}^K w_k = 1\Big\}.    (4)
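Since each f_k is quadratic, the scalarized objective (3) is again a single convex quadratic with Hessian \sum_k w_k Q_k and linear term \sum_k w_k c_k. The short sketch below is my own numeric illustration of this identity with made-up positive semidefinite data; it is not part of the paper.

```python
# Check that the weighted sum of quadratics 0.5 x^T Q_k x + c_k^T x is the single
# quadratic 0.5 x^T Q_w x + c_w^T x with Q_w = sum_k w_k Q_k, c_w = sum_k w_k c_k.
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 4
Q = [np.diag(rng.uniform(0.0, 2.0, n)) for _ in range(K)]   # PSD (diagonal) Hessians
c = [rng.standard_normal(n) for _ in range(K)]
w = np.array([0.2, 0.5, 0.3])                               # w in W

Qw = sum(wk * Qk for wk, Qk in zip(w, Q))
cw = sum(wk * ck for wk, ck in zip(w, c))

x = rng.standard_normal(n)
lhs = sum(wk * (0.5 * x @ Qk @ x + ck @ x) for wk, Qk, ck in zip(w, Q, c))
rhs = 0.5 * x @ Qw @ x + cw @ x
print(np.isclose(lhs, rhs))   # True: the weighted sum is one convex quadratic
```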

Theorem 1

Miettinen (2002) If x^* \in S is an optimal solution of the weighting problem (3)–(4), where either w_k > 0 for all k = 1, \ldots, K, or x^* is the unique optimal solution, then x^* is a Pareto optimal solution of the multi-objective program (1)–(2).

Theorem 2

Miettinen (2002) Let the multi-objective optimization problem (1)–(2) be convex. If x^* \in S is Pareto optimal, then there exists a weighting vector w \in W such that x^* is a solution to the weighting problem (3)–(4).

The following assumptions hold throughout this paper.

Assumption 1

The minimizer of each objective function fk exists.

Assumption 2

A finite solution of (3)–(4) satisfies the Slater condition (Avriel 1976), i.e., for a given w \in W, there exists an x^0 \in \mathbb{R}^n such that

Ex^0 - g < 0, \quad Ax^0 - b = 0.

Assumption 3

The vectors \{E_j^T \mid j = 1, \ldots, m\} \cup \{A_q^T \mid q = 1, \ldots, l\} are linearly independent, where E = (E_1, E_2, \ldots, E_m)^T and A = (A_1, A_2, \ldots, A_l)^T.

Assumption 4

In all parts, we assume w \in W.

The neural network model

In this section, we will construct a neural network for solving (3)–(4). First, we give a sufficient and necessary condition for the optimal solution of (3)–(4), which is the theoretical foundation for us to design the network for it.

Theorem 3

Bazaraa et al. (1993) x^* \in S is an optimal solution of (3)–(4) if and only if there exist \lambda^* \in \mathbb{R}^m and \mu^* \in \mathbb{R}^l such that (x^{*T}, \lambda^{*T}, \mu^{*T})^T satisfies the following KKT system:

\lambda^* \geq 0, \quad Ex^* - g \leq 0, \quad \lambda^{*T}(Ex^* - g) = 0, \quad \sum_{k=1}^K w_k (Q_k x^* + c_k) + E^T\lambda^* + A^T\mu^* = 0, \quad Ax^* = b, \quad w \in W.    (5)

In order to gain a projection neural network, we state the KKT conditions (5) in variational inequality form.

Theorem 4

Gao (2004) x^* \in S is an optimal solution of (3)–(4) if and only if there exist u^* = (u_1^*, u_2^*, \ldots, u_m^*)^T \in \mathbb{R}^m and v^* = (v_1^*, v_2^*, \ldots, v_l^*)^T \in \mathbb{R}^l such that (x^{*T}, u^{*T}, v^{*T})^T satisfies the following KKT conditions:

(x - x^*)^T \Big(\sum_{k=1}^K w_k (Q_k x^* + c_k) + E^T u^* - A^T v^*\Big) \geq 0, \quad \forall x \in \Omega,
u_j^* (Ex^* - g)_j = 0, \quad u_j^* \geq 0, \quad (Ex^* - g)_j \leq 0, \quad j = 1, \ldots, m,
Ax^* - b = 0.    (6)

Lemma 1

Youshen and Wang (2004) x^* is an optimal solution of (3)–(4) if and only if there exist u^* = (u_1^*, u_2^*, \ldots, u_m^*)^T \in \mathbb{R}^m and v^* = (v_1^*, v_2^*, \ldots, v_l^*)^T \in \mathbb{R}^l such that

x^* - P_\Omega\Big[x^* - \sum_{k=1}^K w_k (Q_k x^* + c_k) - E^T u^* + A^T v^*\Big] = 0, \quad u^* - [u^* + (Ex^* - g)]^+ = 0, \quad Ax^* - b = 0,    (7)

where P_\Omega(x) = \big(P_{[l_1,h_1]}(x_1), P_{[l_2,h_2]}(x_2), \ldots, P_{[l_n,h_n]}(x_n)\big)^T \in \mathbb{R}^n with P_{[l_i,h_i]}(x_i) = \min\{h_i, \max\{x_i, l_i\}\}, and (x_i)^+ = \max\{0, x_i\}.

Lemma (1) indicates that the optimal solution of (3)–(4) can be obtained by solving (7).
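As a small illustration of (7), the following sketch evaluates the three residuals of the fixed-point system, which should all vanish at an optimal primal-dual point. The function name is my own, and the test data correspond to the bi-objective instance used later in Example 2 (41), evaluated at its reported optimum for w = (0.5, 0.5); this is an illustrative assumption, not code from the paper.

```python
# Residuals of the fixed-point system (7): a point (x, u, v) is optimal iff all vanish.
import numpy as np

def kkt_residual(x, u, v, w, Q, c, E, g, A, b, lo, hi):
    grad = sum(wk * (Qk @ x + ck) for wk, Qk, ck in zip(w, Q, c))
    r1 = x - np.clip(x - grad - E.T @ u + A.T @ v, lo, hi)   # projection equation
    r2 = u - np.maximum(u + (E @ x - g), 0.0)                # complementarity
    r3 = A @ x - b                                           # equality feasibility
    return np.concatenate([r1, r2, r3])

# Example 2 data written as f_k = 0.5 x^T Q_k x + c_k^T x (constants dropped)
w = np.array([0.5, 0.5])
Q = [2 * np.eye(2), 2 * np.eye(2)]
c = [np.array([-2.0, -2.0]), np.array([-4.0, -6.0])]
E, g = np.array([[1.0, 2.0]]), np.array([10.0])              # x1 + 2 x2 <= 10
A, b = np.zeros((0, 2)), np.zeros(0)                         # no equality constraints
lo, hi = np.array([0.0, 0.0]), np.array([np.inf, 4.0])       # box Omega

print(kkt_residual(np.array([1.5, 2.0]), np.zeros(1), np.zeros(0),
                   w, Q, c, E, g, A, b, lo, hi))             # approximately all zeros
```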

Lemma 2

Kinderlehrer and Stampacchia (2000) Assume that the set \Omega \subseteq \mathbb{R}^n is a closed convex set. Then

(V - P_\Omega(V))^T (P_\Omega(V) - U) \geq 0, \quad \forall V \in \mathbb{R}^n,\ U \in \Omega,    (8)
\|P_\Omega(U) - P_\Omega(V)\| \leq \|U - V\|, \quad \forall U, V \in \mathbb{R}^n.    (9)

A basic property of the projection mapping on a closed convex set is stated in Lemma (2).

For simplicity, we denote

z(t) = (x^T, u^T, v^T)^T \in \mathbb{R}^{n+m+l}, \quad D = \Omega \times \mathbb{R}^{m+l}, \quad \bar{u} = [u + (Ex - g)]^+,
\bar{x} = P_\Omega\Big[x - \sum_{k=1}^K w_k (Q_k x + c_k) - E^T u + A^T (v - Ax + b)\Big], \quad D^* = \{z \in \mathbb{R}^{n+m+l} \mid z \ \text{solves} \ (7)\}.

Based on the above results and by an extension of the neural network in Gao (2004), we introduce the following architecture to solve (3)–(4):

\frac{dz}{dt} = \frac{d}{dt}\begin{pmatrix} x \\ u \\ v \end{pmatrix} = -\tau \begin{pmatrix} 2(x - \bar{x}) \\ u - \bar{u} \\ Ax - b \end{pmatrix},    (10)

where \tau > 0 is a scale parameter that controls the convergence rate of the neural network (10). For simplicity of the analysis, we let \tau = 1.

Let D_e be the equilibrium point set of the neural network (10). From Lemma 1 we have that z^* \in D_e is an equilibrium point of the neural network (10) if and only if z^* \in D^*, i.e., x^* is an optimal solution of (3)–(4). That is, D^* = D_e.

In the following, we give a more technical description of the proposed neural network algorithm, whose framework is shown in Algorithm 1. The procedure starts with choosing an arbitrary initial vector. The iterations are then controlled by the principal while-loop of Algorithm 1: in each iteration, the right-hand side of system (10) is evaluated, the current vector is updated, and the error and stopping criterion are computed. (Algorithm 1: pseudo code of the presented algorithm; figure omitted.)
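Since the pseudo code itself is given only as a figure, the following minimal sketch shows one possible discretization of Algorithm 1: a forward-Euler integration of system (10). The step size, iteration budget, and the test data (the bi-objective instance of Example 2 with w = (0.5, 0.5)) are illustrative assumptions, not the authors' Matlab implementation.

```python
# Forward-Euler integration of the projection network (10) with tau = 1.
import numpy as np

def rhs(x, u, v, w, Q, c, E, g, A, b, lo, hi):
    """Right-hand side of system (10)."""
    grad = sum(wk * (Qk @ x + ck) for wk, Qk, ck in zip(w, Q, c))
    x_bar = np.clip(x - grad - E.T @ u + A.T @ (v - A @ x + b), lo, hi)  # P_Omega[.]
    u_bar = np.maximum(u + (E @ x - g), 0.0)                             # [u + (Ex - g)]^+
    return -2.0 * (x - x_bar), -(u - u_bar), -(A @ x - b)

# Example 2 written as f_k = 0.5 x^T Q_k x + c_k^T x (constant terms dropped).
Q = [2 * np.eye(2), 2 * np.eye(2)]
c = [np.array([-2.0, -2.0]), np.array([-4.0, -6.0])]
E, g = np.array([[1.0, 2.0]]), np.array([10.0])       # x1 + 2 x2 <= 10
A, b = np.zeros((0, 2)), np.zeros(0)                  # no equality constraints
lo, hi = np.array([0.0, 0.0]), np.array([np.inf, 4.0])
w = np.array([0.5, 0.5])

x, u, v = np.zeros(2), np.zeros(1), np.zeros(0)
h = 1e-2                                              # Euler step size (ad hoc choice)
for _ in range(20000):
    dx, du, dv = rhs(x, u, v, w, Q, c, E, g, A, b, lo, hi)
    x, u, v = x + h * dx, u + h * du, v + h * dv
    if np.linalg.norm(np.concatenate([dx, du, dv])) < 1e-8:
        break
print(x)  # should approach roughly (1.5, 2.0), cf. the (0.5, 0.5) row of Table 1
```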

Remark 1

By using the weighted sum method, a single-objective optimization problem of (1)–(2) can be formulated as (3)–(4), in which the scalar objective (3) is a weighted sum of the original objective functions. A system consisting of multiple neural networks is developed to cooperatively seek Pareto optimal solutions of the multi-objective optimization problem (1)–(2). Each neural network is run for a fixed given weight w \in W: it optimizes \sum_{k=1}^K w_k f_k(x) subject to x \in S, and thereby seeks a point of the Pareto efficient frontier (a Pareto optimal solution). Thus, for a given value w^i \in W, the i-th sub-problem is solved by the i-th neural network

\frac{dz^i}{dt} = \frac{d}{dt}\begin{pmatrix} x^i \\ u^i \\ v^i \end{pmatrix} = -\begin{pmatrix} 2(x^i - \bar{x}^i) \\ u^i - \bar{u}^i \\ Ax^i - b \end{pmatrix},

where

\bar{u}^i = [u^i + (Ex^i - g)]^+, \quad \bar{x}^i = P_\Omega\Big[x^i - \sum_{k=1}^K w_k^i (Q_k x^i + c_k) - E^T u^i + A^T (v^i - Ax^i + b)\Big].

Each neural network is supplied with a weight w^i \in W and generates an equilibrium point after performing a precise local search. That is, given a set of weights \{w^1, w^2, \ldots, w^N\}, where w^i \in W and N is the total number of employed neural networks, the networks output a set of equilibrium points \{z^1, z^2, \ldots, z^N\}. Necessary and sufficient conditions are satisfied to guarantee the convergence of the collaborative neurodynamic system to a Pareto optimal solution. Finally, the Pareto optimal set is constructed by using different values of the weights.
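The outer loop of this collaborative scheme can be pictured as follows. In this sketch a generic NLP solver stands in for each projection sub-network (an assumption made purely for brevity), and the sweep over weights assembles an approximate Pareto set for the bi-objective instance of Example 2; none of the names here come from the paper.

```python
# Weight sweep: one scalarized sub-problem per weight vector, collected into a Pareto set.
import numpy as np
from scipy.optimize import minimize

f = [lambda x: (x[0] - 1)**2 + (x[1] - 1)**2,      # f1 of Example 2 (assumed data)
     lambda x: (x[0] - 2)**2 + (x[1] - 3)**2]      # f2 of Example 2

bounds = [(0.0, None), (0.0, 4.0)]                  # box Omega
cons = [{"type": "ineq", "fun": lambda x: 10.0 - x[0] - 2.0 * x[1]}]  # x1 + 2 x2 <= 10

pareto = []
for w1 in np.linspace(0.0, 1.0, 11):                # w = (w1, 1 - w1) in W
    w = np.array([w1, 1.0 - w1])
    res = minimize(lambda x: w[0] * f[0](x) + w[1] * f[1](x),
                   x0=np.zeros(2), bounds=bounds, constraints=cons, method="SLSQP")
    pareto.append((w1, res.x, f[0](res.x), f[1](res.x)))

for w1, x, f1, f2 in pareto:
    print(f"w1={w1:.1f}  x={np.round(x, 3)}  f1={f1:.3f}  f2={f2:.3f}")
```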

Stability analysis

In this section, we study some stability and convergence properties of network (10). We now establish our main results as follows.

Theorem 5

For any initial point z(t_0) = (x(t_0)^T, u(t_0)^T, v(t_0)^T)^T \in \mathbb{R}^{n+m+l}, there exists a unique continuous solution z(t) = (x(t)^T, u(t)^T, v(t)^T)^T of system (10). Moreover, x(t) \in \Omega and u(t) \geq 0, provided that x(t_0) \in \Omega and u(t_0) \geq 0.

Proof

The projection mappings P_\Omega(\cdot) and (\cdot)^+ are non-expansive. Since (Q_k x + c_k), k = 1, 2, \ldots, K, and (Ex - g)_j, j = 1, 2, \ldots, m, are continuously differentiable on an open convex set \mathcal{E} \subseteq \mathbb{R}^n including \Omega, the mappings x - P_\Omega[x - \sum_{k=1}^K w_k (Q_k x + c_k) - E^T u + A^T (v - Ax + b)], u - [u + (Ex - g)]^+ and Ax - b are locally Lipschitz continuous. According to the local existence theory of ordinary differential equations, the initial value problem of system (10) has a unique solution on [t_0, T), where [t_0, T) is its maximal interval of existence. Moreover, it can be seen in the proof of Theorem 6 that the solution z(t) is bounded and thus T = +\infty.

Let the initial point x^0 = x(t_0) \in \Omega and u^0 = u(t_0) \geq 0. From (10),

\frac{dx}{dt} + 2x = 2\bar{x} = 2 P_\Omega\Big[x - \sum_{k=1}^K w_k (Q_k x + c_k) - E^T u + A^T (v - Ax + b)\Big], \qquad \frac{du}{dt} + u = [u + (Ex - g)]^+.    (11)

Hence

\int_{t_0}^{t} \Big(\frac{dx}{ds} + 2x\Big) e^{2s}\, ds = \int_{t_0}^{t} 2\, P_\Omega[\cdots]\, e^{2s}\, ds, \qquad \int_{t_0}^{t} \Big(\frac{du}{ds} + u\Big) e^{s}\, ds = \int_{t_0}^{t} [u + (Ex - g)]^+ e^{s}\, ds.    (12)

Then

x(t) = e^{-2(t - t_0)} x^0 + 2 e^{-2t} \int_{t_0}^{t} e^{2s}\, P_\Omega[\cdots]\, ds, \qquad u(t) = e^{-(t - t_0)} u^0 + e^{-t} \int_{t_0}^{t} e^{s} [u + (Ex - g)]^+\, ds.    (13)

By the integral mean value theorem, we have

x(t) = e^{-2(t - t_0)} x^0 + \big(1 - e^{-2(t - t_0)}\big)\, P_\Omega\Big[\hat{x} - \sum_{k=1}^K w_k (Q_k \hat{x} + c_k) - E^T \hat{u} + A^T (\hat{v} - A\hat{x} + b)\Big], \qquad u(t) = e^{-(t - t_0)} u^0 + \big(1 - e^{-(t - t_0)}\big) [\hat{u} + E\hat{x} - g]^+.    (14)

Since x(t) in (14) is a convex combination of x^0 \in \Omega and a point of \Omega, and u(t) is a convex combination of u^0 \geq 0 and a nonnegative vector, it follows that x(t) \in \Omega and u(t) \geq 0 from x(t_0) \in \Omega and u(t_0) \geq 0. This completes the proof.

Remark 2

The results of Theorem 5 indicate that neural network (10) is well defined. In the proof of Theorem 5 we assume (x_0^T, u_0^T, v_0^T)^T \in \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l, that is, the initial point is feasible. If instead (x_0^T, u_0^T, v_0^T)^T \notin \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l, then by (14), as t \to +\infty, (x(t,x_0)^T, u(t,u_0)^T, v(t,v_0)^T)^T approaches \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l. So there exists a moment t' \geq t_0 such that for t \geq t' the state (x(t,x_0)^T, u(t,u_0)^T, v(t,v_0)^T)^T can be regarded as a feasible point (at least approximately) and stays in \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l from then on.

Lemma 3

Let z^* = (x^{*T}, u^{*T}, v^{*T})^T \in D_e be an equilibrium point of (10) and

V(z, z^*) = \psi(z) - \psi(z^*) - (z - z^*)^T \nabla\psi(z^*) + \frac{1}{2}\|z - z^*\|^2,    (15)

where

\psi(z) = \sum_{k=1}^K w_k f_k(x) + \frac{1}{2}\big(\|\bar{u}\|^2 + \|v - Ax + b\|^2\big).    (16)

Then

  • i)

    V(z, z^*) is convex on D and continuously differentiable on \mathcal{E} \times \mathbb{R}^{m+l}.

  • ii)
    V(z, z^*) \geq \frac{1}{2}\|z - z^*\|^2, \quad \forall z \in D.    (17)

Proof

  • i)

    It is easy to verify that \|v - Ax + b\|^2 is convex and differentiable on \mathbb{R}^{n+l}. From Bazaraa et al. (1993), \|\bar{u}\|^2 is convex on \Omega \times \mathbb{R}^m and continuously differentiable on \mathcal{E} \times \mathbb{R}^m. By the convexity and differentiability of f_k(x), k = 1, 2, \ldots, K, we see that \psi(z) in (16) is convex on D and continuously differentiable on \mathcal{E} \times \mathbb{R}^m \times \mathbb{R}^l. Thus i) is proved.

  • ii)
    Using the analysis in i), it follows that
    \psi(z) \geq \psi(z^*) + (z - z^*)^T \nabla\psi(z^*), \quad \forall z \in D.
    By (15) we then have
    V(z, z^*) \geq \frac{1}{2}\|z - z^*\|^2, \quad \forall z \in D.    (18)

The results in Lemma (3) provide a means for us to study the dynamics of neural network (10). In particular, network (10) has the following basic property.

Theorem 6

The neural network (10) is stable in the sense of Lyapunov. Moreover, the solution trajectory of neural network (10) exists globally.

Proof

By Theorem 5, for any (x_0^T, u_0^T, v_0^T)^T \in \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l there exists a unique continuous solution (x(t)^T, u(t)^T, v(t)^T)^T \in \Omega \times \mathbb{R}_+^m \times \mathbb{R}^l of system (10).

Define a Lyapunov function as in (15). Noting that \|\bar{u}\|^2 = \sum_{j=1}^m \big[(u_j + (Ex - g)_j)^+\big]^2 and

\big[(u_j + (Ex - g)_j)^+\big]^2 = \begin{cases} (u_j + (Ex - g)_j)^2, & u_j + (Ex - g)_j \geq 0,\\ 0, & \text{otherwise},\end{cases}    (19)

we have

\nabla \|\bar{u}\|^2 = \nabla\Big(\sum_{j=1}^m \big[(u_j + (Ex - g)_j)^+\big]^2\Big) = \begin{pmatrix} 2E^T\bar{u} \\ 2\bar{u} \end{pmatrix}.    (20)

Using i) in Lemma (3) and (15), we get

\nabla\psi(z) = \begin{pmatrix} \sum_{k=1}^K w_k (Q_k x + c_k) + E^T\bar{u} - A^T(v - Ax + b) \\ \bar{u} \\ v - Ax + b \end{pmatrix},

and

\nabla V(z, z^*) = z - z^* + \nabla\psi(z) - \nabla\psi(z^*).

Thus, using (7) and (10), together with \bar{u}^* = [u^* + (Ex^* - g)]^+ = u^*, Ax^* = b and \bar{u} - u = (Ex - g) + (-u - (Ex - g))^+, for z \in D one has

\frac{dV(z, z^*)}{dt} = \Big(\frac{\partial V(z, z^*)}{\partial x}\Big)^T\frac{dx}{dt} + \Big(\frac{\partial V(z, z^*)}{\partial u}\Big)^T\frac{du}{dt} + \Big(\frac{\partial V(z, z^*)}{\partial v}\Big)^T\frac{dv}{dt}
= -2(x - \bar{x})^T\Big[(x - x^*) + \sum_{k=1}^K w_k (Q_k x + c_k) + E^T\bar{u} - A^T(v - Ax + b) - \sum_{k=1}^K w_k (Q_k x^* + c_k) - E^T u^* + A^T v^*\Big]
\quad - (u - \bar{u})^T\big[(u - u^*) + (\bar{u} - u^*)\big] - (Ax - b)^T\big[2(v - v^*) - (Ax - b)\big].    (21)

In expanding and bounding (21) we also use the identities

\big[(-u - (Ex - g))^+\big]^T \bar{u} = 0, \quad (Ex^* - g)^T u^* = 0, \quad \big[(-u - (Ex - g))^+\big]^T u^* \geq 0, \quad (Ex^* - g)^T \bar{u} \leq 0.

Moreover, in inequality (8) of Lemma (2), letting V = x - \sum_{k=1}^K w_k (Q_k x + c_k) - E^T\bar{u} + A^T(v - Ax + b) and U = x^*, we obtain

\Big(x - \sum_{k=1}^K w_k (Q_k x + c_k) - E^T\bar{u} + A^T(v - Ax + b) - \bar{x}\Big)^T (\bar{x} - x^*) \geq 0.    (22)

From the convexity and differentiability of f_k(x), k = 1, 2, \ldots, K, on \Omega, we have

\Big[\sum_{k=1}^K w_k (Q_k x + c_k) - \sum_{k=1}^K w_k (Q_k x^* + c_k)\Big]^T (x - x^*) \geq 0.    (23)

Substituting (6), (22) and (23) into (21), for all t \geq t_0 one has

\frac{dV(z, z^*)}{dt} \leq -2\|x(t) - \bar{x}(t)\|^2 - \|Ax(t) - b\|^2 - \|u(t) - \bar{u}(t)\|^2 \leq 0.    (24)

This means neural network (10) is globally stable in the sense of Lyapunov.

From (24) we know that V is a non-increasing function with respect to t, so

\frac{1}{2}\|z(t) - z^*\|^2 \leq V(z, z^*) \leq V(z(t_0), z^*), \quad t_0 \leq t < T.    (25)

This shows that the state trajectory of neural network (10) is bounded. Thus T = +\infty.

Theorem 7

The solution trajectory of the neural network (10) is globally convergent to an optimal point of problem (3)–(4). In particular, neural network (10) is asymptotically stable when D^* contains only one point.

Proof

From (17), the level set L(z_0) = \{z \in D \mid V(z, z^*) \leq V(z(t_0), z^*)\} is bounded. Thus, there exists a convergent subsequence

\{z(t_k) \mid t_0 < t_1 < \cdots < t_k < t_{k+1} < \cdots\}, \quad t_k \to \infty \ \text{as} \ k \to \infty,

such that

\lim_{k \to \infty} z(t_k) = \bar{z},    (26)

where \bar{z} satisfies

\frac{dV(z, z^*)}{dt}\Big|_{z = \bar{z}} = 0,

which indicates that \bar{z} is an \omega-limit point of \{z(t) \mid t \geq t_0\}. Using the LaSalle invariant set theorem (Miller and Michel 1982), one has that z(t) \to M as t \to \infty, where M is the largest invariant set in \Lambda = \{z(t) \mid \frac{dV(z, z^*)}{dt} = 0\}. From (10) and (24), it follows that \frac{dV(z, z^*)}{dt} = 0 if and only if \frac{dx}{dt} = \frac{du}{dt} = \frac{dv}{dt} = 0. Thus \bar{z} \in D^* by M \subseteq \Lambda \subseteq D^*.

Substituting z^* = \bar{z} in (15), and arguing as above, we can prove that the function V[z(t), \bar{z}] is monotonically non-increasing on [t_0, +\infty), and

V(z, \bar{z}) \geq \|z - \bar{z}\|^2 / 2, \quad \forall z \in D.

From the continuity of V(z, \bar{z}) and (26), for every \varepsilon > 0 there exists t_q such that

V[z(t), \bar{z}] \leq \frac{1}{2}\varepsilon^2, \quad \text{when} \ t \geq t_q.    (27)

Thus

\|z(t) - \bar{z}\|^2 \leq 2 V[z(t), \bar{z}] \leq 2 V[z(t_q), \bar{z}] \leq \varepsilon^2, \quad \text{when} \ t \geq t_q.

That is, \lim_{t \to \infty} z(t) = \bar{z}. So the solution trajectory of the neural network (10) is globally convergent to the equilibrium point \bar{z}, which is also an optimal solution of (3)–(4). In particular, if D^* = \{z^*\}, then for each x_0 \in \Omega, v_0 free in sign, and u_0 \geq 0, the solution z(t) with initial point z(t_0) = z_0 approaches z^* by the analysis above. That is, the neural network (10) is globally asymptotically stable.

Comparing with some existing models

In order to see how well the presented neural network (10) can be applied to solve (3)–(4), we compare it with some existing neural network models.

First, let us consider the following problem

\min \ \sum_{k=1}^K w_k f_k(x)    (28)
\text{s.t.} \ Ex - g \leq 0,    (29)
\quad Ax = b, \quad x \in \Omega,    (30)

where f_k(x) = \frac{1}{2} x^T Q_k x + c_k^T x, w \in W, x \in \mathbb{R}^n, \Omega = \{x \in \mathbb{R}^n : l_i \leq x_i \leq h_i,\ i = 1, 2, \ldots, n\}, and some h_i (or -l_i) may be +\infty, Q_k \in \mathbb{R}^{n \times n} is a real symmetric positive semidefinite matrix, E \in \mathbb{R}^{m \times n}, g \in \mathbb{R}^m, A \in \mathbb{R}^{l \times n} with \mathrm{rank}(A) = l (0 \leq l < n), and b \in \mathbb{R}^l. By extending the globally projected dynamical system of Friesz et al. (1994) to constrained optimization problems, as done by Xia in Xia (2004), a neural network for solving (28)–(30) can be written as

\frac{dx}{dt} = P_\Omega\Big(x - \sum_{k=1}^K w_k (Q_k x + c_k) + E^T\lambda - A^T\mu\Big) - x,    (31)
\frac{d\lambda}{dt} = -\lambda + (\lambda + Ex - g)^+,    (32)
\frac{d\mu}{dt} = -Ax + b,    (33)

where \Omega \subseteq \mathbb{R}^n is a closed convex set and P_\Omega : \mathbb{R}^n \to \Omega is the projection operator (Gao 2004) defined componentwise by

P_\Omega(x_i) = \begin{cases} l_i, & x_i < l_i,\\ x_i, & l_i \leq x_i \leq h_i,\\ h_i, & x_i > h_i.\end{cases}

It is well known that the asymptotic stability of the dynamic model (31)–(33) cannot be guaranteed for a monotone and asymmetric mapping (Gao et al. 2005; Xia and Wang 2000). Thus, the system described by (31)–(33) cannot solve problem (28)–(30) in some cases; for instance, see Example 1 in this manuscript.

The model proposed in Nazemi and Effati (2013) can be used to construct the following neural network for solving (3)–(4):

\frac{dy(t)}{dt} = -\nabla E_2(y(t)),    (34)
y(0) = y_0, \quad y(t) = (x(t)^T, \lambda(t)^T, \mu(t)^T)^T,    (35)

where

E_2(y) = \frac{1}{2}\|\rho(y)\|^2,    (36)
\rho(y) = \begin{pmatrix} \sum_{k=1}^K w_k (Q_k x + c_k) + A^T\mu + E^T\lambda \\ \phi^\varepsilon_{FB}(\lambda, -(Ex - g)) \\ b - Ax \end{pmatrix},    (37)

and

\phi^\varepsilon_{FB}(a, b) = (a + b) - \sqrt{a^2 + b^2 + \varepsilon}, \quad \varepsilon \to 0^+.

Based on Nazemi (2012), under the condition that each f_k(x) (k = 1, \ldots, K) is strictly convex, a Lagrangian network model can be constructed for solving (3)–(4) as

\frac{dy}{dt} = \Phi(y),    (38)
y(t_0) = y_0, \quad y(t) = (x(t)^T, \lambda(t)^T, \mu(t)^T)^T, \quad \lambda(t_0) > 0,    (39)

where

\Phi(y) = \begin{pmatrix} -\big(\sum_{k=1}^K w_k (Q_k x + c_k) + \frac{1}{2}E^T\lambda^2 + A^T\mu\big) \\ \mathrm{diag}(\lambda_1, \ldots, \lambda_m)(Ex - g) \\ Ax - b \end{pmatrix}, \quad \lambda^2 = (\lambda_1^2, \ldots, \lambda_m^2)^T.

It should be noted that the convergence of the models (34)–(35) and (38)–(39) is only guaranteed under the condition that \sum_{k=1}^K w_k Q_k is a positive definite matrix. Thus, these models cannot solve linear programming problems, which limits their application. In contrast, the neural network (10) converges to the optimal solution under the weaker condition that \sum_{k=1}^K w_k Q_k is positive semidefinite. For instance, see Example 1 in this manuscript.

Numerical simulations

In order to clarify the effectiveness of the proposed neural network (10), in this section we test several examples. The simulations are conducted in Matlab R2022.

Example 1

Matejas and Peric (2014) Let f_1(x) := 50x_1 + 100x_2 + 17.5x_3, f_2(x) := 50x_1 + 50x_2 + 100x_3, f_3(x) := 20x_1 + 50x_2 + 100x_3 and f_4(x) := 25x_1 + 75x_2 + 12x_3. Consider

minimize f(x) := (f_1(x), f_2(x), f_3(x), f_4(x))
subject to x \in S = \{x \in \mathbb{R}^3 \mid 12x_1 + 17x_2 \leq 1400,\ 3x_1 + 9x_2 + 8x_3 \leq 1000,\ 10x_1 + 13x_2 + 15x_3 \leq 1750,\ 6x_1 + 16x_3 \leq 1325,\ 12x_2 + 7x_3 \leq 900,\ 9.5x_1 + 9.5x_2 + 4x_3 \leq 1075,\ x_1, x_2, x_3 \geq 0\}.    (40)

All simulation results show that the state trajectories of the model (10) with (w_1, w_2, w_3, w_4)^T = (0.3, 0.3, 0.3, 0.1)^T converge to the unique optimal solution x^* = (45.22, 49.61, 43.52)^T. Fig. 1 depicts the trajectories of network (10) converging to the optimal solution of the related single-objective problem.

Fig. 1 Transient behavior of (x_1, x_2, x_3)^T of the neural network (10) in Example 1

To make a comparison, we solve the MOLP problem by using the neural network (31)–(33); the result is shown in Fig. 2. It is clear that this model is not stable. The MOLP problem is also solved by using the model (38)–(39). From Fig. 3, we see that this model is not suitable for solving the MOLP problem either.

Fig. 2 Divergent behavior of the system (31)–(33) in Example 1

Fig. 3 Divergent behavior of the system (38)–(39) in Example 1

Example 2

Abo-Sinna and Rizk-Allah (2018)

minimize f(x) := \big((x_1 - 1)^2 + (x_2 - 1)^2,\ (x_1 - 2)^2 + (x_2 - 3)^2\big)
subject to x \in S = \{x \in \mathbb{R}^2 \mid x_1 + 2x_2 - 10 \leq 0,\ x_1 \geq 0,\ 0 \leq x_2 \leq 4\}.    (41)

Table 1 shows results for different values of (w_1, w_2)^T. All simulation results show that the state trajectories of the model (10) converge to the optimal solution x^* for the chosen weight values (w_1, w_2)^T. For example, Fig. 4 shows that the trajectories of network (10) with 5 different initial points and weight (w_1, w_2)^T = (0.65, 0.35)^T converge to the optimal solution of the related single-objective problem. The Pareto front generated by the established neural network approach is presented in Fig. 5.

Table 1 Results for some values of (w_1, w_2)^T in Example 2

Weight Optimal solution Objective function value
(w1,w2)T x1 x2 f1 f2 w1f1+w2f2
(0,1)T 1.999999825 3.000000641 5.000002214 6.69E-13 6.69E-13
(0.1,0.9)T 1.899998993 2.800000072 4.049998447 0.050000173 0.45
(0.2,0.8)T 1.800000014 2.600000773 3.200002496 0.199999376 0.8
(0.3,0.7)T 1.699998518 2.400000889 2.450000415 0.449999822 1.05
(0.4,0.6)T 1.599998815 2.19999806 1.799993921 0.800004052 1.2
(0.5,0.5)T 1.499999145 1.999998341 1.249995827 1.250004173 1.25
(0.6,0.4)T 1.399999185 1.799998667 0.799997216 1.800004177 1.2
(0.7,0.3)T 1.29999882 1.600000145 0.449999466 2.450001247 1.05
(0.8,0.2)T 1.200001354 1.399999501 0.200000143 3.19999943 0.8
(0.9,0.1)T 1.100001094 1.2000013 0.050000739 4.04999335 0.45
(1,0)T 0.999999646 1.000001578 2.62E-12 4.999994397 2.62E-12
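The rows of Table 1 admit a simple analytic cross-check (an observation of mine, not stated in the paper): the weighted objective of Example 2 is w_1\|x - a_1\|^2 + w_2\|x - a_2\|^2 with a_1 = (1, 1) and a_2 = (2, 3), whose unconstrained minimizer is the weighted centroid w_1 a_1 + w_2 a_2; since that point satisfies the constraints of (41), it is the reported optimum for each weight row.

```python
# Closed-form verification of Table 1: weighted centroid of (1, 1) and (2, 3).
import numpy as np

a1, a2 = np.array([1.0, 1.0]), np.array([2.0, 3.0])
for w1 in np.linspace(0.0, 1.0, 11):
    x = w1 * a1 + (1.0 - w1) * a2                                   # closed-form minimizer
    assert x[0] >= 0 and 0 <= x[1] <= 4 and x[0] + 2 * x[1] <= 10   # feasibility check
    print(f"w1={w1:.1f}  x={x}  f1={np.sum((x - a1)**2):.3f}  f2={np.sum((x - a2)**2):.3f}")
```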

Fig. 4 Transient behaviors of (x_1, x_2)^T with w_1 = 0.65, w_2 = 0.35 for 5 different initial points of the neural network (10) in Example 2

Fig. 5 Obtained POSs of problem (41) using the neural network (10) in Example 2

Example 3

Ehrgott et al. (2011)

minimize f(x) := \big((x_1 - 1)^2 + (x_2 - 1)^2,\ (x_1 - 2)^2 + (x_2 - 3)^2,\ (x_1 - 4)^2 + (x_2 - 2)^2\big)
subject to x \in S = \{x \in \mathbb{R}^2 \mid x_1 + 2x_2 \leq 10,\ 0 \leq x_1 \leq 10,\ 0 \leq x_2 \leq 4\}.    (42)

Table 2 depicts results for different values of (w_1, w_2, w_3)^T. Fig. 6 shows the convergence behavior of x(t) under network (10) with 25 random initial points. Figure 7 illustrates that, by adopting different values of (w_1, w_2, w_3)^T, the proposed approach converges to points on the Pareto front of problem (42).

Table 2 Results for some values of (w_1, w_2, w_3)^T in Example 3

Weight Optimal solution Objective function value
(w1,w2,w3)T x1 x2 f1 f2 f3 w1f1+w2f2+w3f3
(0.2,0.4,0.4)T 2.6000022 2.199995 3.9999949 1.0000106 2.0000122 2
(0.4,0.2,0.4)T 2.3999969 1.8000035 2.5999969 1.5999892 2.5999793 2.4
(0.4,0.4,0.2)T 1.9999954 1.9999954 1.9999815 1.0000091 3.9999909 2
(0.4,0.3,0.3)T 2.199998 1.8999956 2.2499873 1.2500087 3.2500134 2.25
(0.3,0.4,0.3)T 2.2999970 2.0999956 2.8999826 0.9000061 2.8999882 2.1
(0.3,0.3,0.4)T 2.5000026 2.0000012 3.2500102 1.2500003 2.2499903 2.25
(0.6,0.3,0.1)T 1.5999985 1.7000049 0.8500051 1.8499883 5.8500030 1.65
(0.1,0.6,0.3)T 2.5000058 2.499999 4.5000142 0.5000067 2.5000206 1.5
(0.1,0.3,0.6)T 3.0999935 2.1999959 5.8499631 1.8499922 0.8500102 1.65
(0.8,0.1,0.1)T 1.3999980 1.2999987 0.2499976 3.2500068 7.2499837 1.25
(0.1,0.8,0.1)T 2.1000027 2.6999939 4.0999852 0.1000041 4.1000223 0.9
(0.1,0.1,0.8)T 3.4999962 2.0000035 7.2499877 3.2499814 0.2499981 1.25
(0.6,0.2,0.2)T 1.7999984 1.5999966 0.9999932 2.0000103 5.0000183 2

Fig. 6 Transient behaviors of (x_1, x_2)^T with w_1 = 0.2, w_2 = 0.45, w_3 = 0.35 for 25 different initial points of the neural network (10) in Example 3

Fig. 7 Pareto front of problem (42) using (10) in Example 3

Example 4

Oberdiecka and Pistikopoulos (2016)

\min \ f(x) := (f_1(x), f_2(x), f_3(x))
\text{s.t.} \ Ax \leq b, \quad x \in \mathbb{R}^2,    (43)

where

f_1(x) = x^T \begin{pmatrix} 2.5 & 0 \\ 0 & 7.5 \end{pmatrix} x + \begin{pmatrix} 3 \\ 0 \end{pmatrix}^T x, \quad f_2(x) = x^T \begin{pmatrix} 3.3 & 0 \\ 0 & 8.5 \end{pmatrix} x + \begin{pmatrix} 1 \\ -1 \end{pmatrix}^T x - 1, \quad f_3(x) = x^T \begin{pmatrix} 3.5 & 0 \\ 0 & 0.25 \end{pmatrix} x + 2,

A = \begin{pmatrix} 4 & -3 \\ 0 & -3 \\ -4 & 2 \\ 6 & 0 \\ -6 & -2 \\ -9 & -1 \end{pmatrix}, \quad b = \begin{pmatrix} 20 \\ 14 \\ 8 \\ 20 \\ 39 \\ 17 \end{pmatrix}.

Table 3 depicts results for different values of (w_1, w_2, w_3)^T. Fig. 8 shows the convergence behavior of x(t) under the proposed neural network with 11 random initial points. Figure 9 demonstrates that, by adopting different values of (w_1, w_2, w_3)^T, the proposed approach converges to points on the Pareto front of problem (43).

Table 3 Results for some values of (w_1, w_2, w_3)^T in Example 4

weight optimal solution objective function value
(w1,w2,w3)T x1 x2 f1 f2 f3 w1f1+w2f2+w3f3
(0.2,0.4,0.4)T -0.1552789 0.0400000 -0.39355 -1.10211 2.08479 0.31436024
(0.4,0.2,0.4)T -0.2287582 0.0208333 -0.55219 -1.07321 2.18326 0.43778594
(0.4,0.4,0.2)T -0.2649000 0.0310083 -0.61205 -1.05616 2.24584 -0.2181220
(0.4,0.3,0.3)T -0.2467110 0.0266666 -0.58263 -1.06647 2.21320 0.11096710
(0.3,0.4,0.3)T -0.2083334 0.0349344 -0.50734 -1.08966 2.15221 0.05759643
(0.3,0.3,0.4)T -0.1910823 0.0306122 -0.47493 -1.09323 2.12802 0.38075848
(0.6,0.3,0.1)T -0.3697149 0.0211998 -0.76405 -0.93602 2.47852 -0.49138443
(0.1,0.6,0.3)T -0.1371947 0.0506329 -0.34530 -1.10392 2.06651 -0.07692767
(0.1,0.3,0.6)T -0.0898201 0.0434781 -0.23511 -1.09060 2.02870 0.866532153
(0.8,0.1,0.1)T -0.4664179 0.0072732 -0.85499 -0.75534 2.76142 -0.48338602
(0.1,0.8,0.1)T -0.1697531 0.0528075 -0.41630 -1.10376 2.10155 -0.71448631
(0.1,0.1,0.8)T -0.0591715 0.0277364 -0.16299 -1.06881 2.01244 1.48677679
(0.6,0.2,0.2)T -0.3496514 0.0160000 -0.74139 -0.96003 2.42796 -0.15125035
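Similarly, the rows of Table 3 can be cross-checked in closed form (again my own sanity check, under the reading of the problem data given above): the stationary point of the weighted quadratic turns out to be feasible for Ax ≤ b, so it coincides with the reported optimum for each weight row.

```python
# Closed-form check of one Table 3 row: minimize x^T Q_w x + c_w^T x (constants dropped).
import numpy as np

Q = [np.diag([2.5, 7.5]), np.diag([3.3, 8.5]), np.diag([3.5, 0.25])]
c = [np.array([3.0, 0.0]), np.array([1.0, -1.0]), np.array([0.0, 0.0])]
A = np.array([[4, -3], [0, -3], [-4, 2], [6, 0], [-6, -2], [-9, -1]], dtype=float)
b = np.array([20, 14, 8, 20, 39, 17], dtype=float)

w = np.array([0.8, 0.1, 0.1])                     # one row of Table 3
Qw = sum(wk * Qk for wk, Qk in zip(w, Q))
cw = sum(wk * ck for wk, ck in zip(w, c))
x = -np.linalg.solve(2 * Qw, cw)                  # stationary point of the weighted sum
assert np.all(A @ x <= b)                         # the unconstrained minimizer is feasible
print(x)  # approximately (-0.4664, 0.0073), matching the (0.8, 0.1, 0.1) row
```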

Fig. 8 Transient behaviors of (x_1, x_2)^T with w_1 = 0.1, w_2 = 0.4, w_3 = 0.5 for 11 different initial points of the neural network (10) in Example 4

Fig. 9 Pareto front of problem (43) in Example 4

To end this section, we answer two natural questions: what are the practical and computational advantages of the network (10) compared to existing, generally available algorithms for CQMPPs, and are there advantages of our network compared to the existing neural network models? To answer these, we summarize what we have observed from the numerical experiments and theoretical results below.

  • We compare framework (10) with some existing models in Sect. 5. At first glance, the difference in numerical performance appears very marginal when testing an optimization example.

  • From Theorem 7, we observe that the solution converges from any initial point. Thus, changing the initial point does not have much effect on our neural network model; in fact, the model is globally convergent to the optimal solution.

  • The suggested neural network (10) can be implemented without a penalty parameter and converges to an exact solution of the CQMPP.

Conclusion

In this paper we established a neural network model for CQMPPs. The provided neural network has been shown to be Lyapunov stable and to converge globally to the optimal solution of the related single-objective problem of the CQMPP. The convergence of the neural network has been validated by the simulation results of the CQMPP examples. The studied neural network also requires no adjustable parameter and has a simple structure; thus, the model is well suited to hardware implementation. Moreover, since the CQMPP has wide applications, the new model and the obtained results in this paper are significant in both theory and application. The CQMPP simulation results describe the effectiveness of the suggested framework.

Funding

The authors have not disclosed any funding.

Data Availability

Enquiries about data availability should be directed to the authors.

Declarations

Conflict of interest

Both of the authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with animals performed by any of the authors.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Abd El-Waheda WF, Zaki EM, El-Refaey AM (2010) Artificial immune system based neural networks for solving multi-objective programming problems. Egyptian Inform J 11:59–65
  2. Abo-Sinna MA, Rizk-Allah RM (2018) Decomposition of parametric space for bi-objective optimization problem using neural network approach. Opsearch 55:502–531
  3. Antipova E, Pozo C, Guillen-Gosalbez G, Boer D, Cabeza LF, Jimenez L (2015) On the use of filters to facilitate the post-optimal analysis of the Pareto solutions in multiobjective optimization. Comput Chem Eng 74:48–58
  4. Antunes CH, Alves MJ, Climaco J (2016) Interactive methods in multiobjective linear programming. EURO advanced tutorials on operational research. Springer, Cham, pp 57–136
  5. Arjmandzadeh Z, Nazemi A, Safi M (2019) Solving multiobjective random interval programming problems by a capable neural network framework. Appl Intell 49:1566–1579
  6. Avriel M (1976) Nonlinear programming: analysis and methods. Prentice-Hall, Englewood Cliffs, NJ
  7. Bazaraa MS, Sherali HD, Shetty CM (1993) Nonlinear programming: theory and algorithms, 2nd edn. Wiley, New York
  8. Benson HP (1998) An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J Global Optim 13:1–24
  9. Capitanescu F, Ahmadi A, Benetto E, Marvuglia A, Tiruta-Barna L (2015) Some efficient approaches for multi-objective constrained optimization of computationally expensive black-box model problems. Comput Chem Eng 82:228–39
  10. Chankong V, Haimes YY (1983) Multiobjective decision making: theory and methodology. Elsevier Science Publishing, Amsterdam
  11. Chinchuluun A, Pardalos PM (2007) A survey of recent developments in multiobjective optimization. Ann Oper Res 154:29–50
  12. Coello CA (2009) Evolutionary multi-objective optimization: some current research trends and topics that remain to be explored. Front Comput Sci 3:18–30
  13. Copado-Mendez PJ, Guillen-Gosalbez G, Jimenez L (2014) MILP-based decomposition algorithm for dimensionality reduction in multi-objective optimization: application to environmental and systems biology problems. Comput Chem Eng 67:137–147
  14. Dominguez-Rios MA, Chicano F, Alba E (2021) Effective anytime algorithm for multiobjective combinatorial optimization problems. Inf Sci 565:210–228
  15. Ehrgott M (2005) Multicriteria optimization. Springer, Berlin
  16. Ehrgott M, Shao L, Schobel A (2011) An approximation algorithm for convex multi-objective programming problems. J Global Optim 50:397–416
  17. Ehrgott M, Lohne A, Shao L (2007) A dual variant of Benson's outer approximation algorithm. Report 654, Department of Engineering Science, The University of Auckland
  18. Ehrgott M, Wiecek M (2005) Multiobjective programming. In: Figueira J, Greco S, Ehrgott M (eds) Multicriteria decision analysis: state of the art surveys. Springer Science + Business Media, New York, pp 667–722
  19. Feizi A, Nazemi A (2021) Solving the stochastic support vector regression with probabilistic constraints by a high-performance neural network model. Eng Comput 1–16
  20. Friesz TL, Bernstein DH, Mehta NJ, Tobin RL, Ganjlizadeh S (1994) Day-to-day dynamic network disequilibria and idealized traveler information systems. Oper Res 42:1120–1136
  21. Gao X (2004) A novel neural network for nonlinear convex programming. IEEE Trans Neural Netw 15:613–621
  22. Gao XB, Liao L-Z, Qi LQ (2005) A novel neural network for variational inequalities with linear and nonlinear constraints. IEEE Trans Neural Netw 16:1305–1317
  23. Gass S, Saaty T (1955) The computational algorithm for the parametric objective function. Naval Res Logist Quart 2(1–2):39–45
  24. Ghaffari-Hadigheh A, Romanko O, Terlaky T (2010) Bi-parametric convex quadratic optimization. Optim Methods Softw 25(2):229–45
  25. Ghalavand N, Khorram E, Morovati V (2021) An adaptive nonmonotone line search for multiobjective optimization problems. Comput Oper Res 136:105506
  26. Goh CJ, Yang XQ (1996) Analytic efficient solution set for multi-criteria quadratic programs. Eur J Oper Res 92(1):166–81
  27. Groetzner P, Werner R (2022) Multiobjective optimization under uncertainty: a multiobjective robust (relative) regret approach. Eur J Oper Res 296:101–115
  28. Hu Z, Zhou T, Su Q, Liu M (2022) A niching backtracking search algorithm with adaptive local search for multimodal multiobjective optimization. Swarm Evol Comput 69:101031
  29. Hwang CL, Masud ASM (1979) Multiple objectives decision making: methods and applications. Springer, Berlin
  30. Jiang J, Han F, Wang J, Ling Q, Han H, Wang Y (2022) A two-stage evolutionary algorithm for large-scale sparse multiobjective optimization problems. Swarm Evol Comput 17:101093
  31. Kabgani A, Soleimani-damaneh M (2022) Semi-quasidifferentiability in nonsmooth nonconvex multiobjective optimization. Eur J Oper Res 299:35–45
  32. Karbasi D, Nazemi A, Rabiei M (2020) A parametric recurrent neural network scheme for solving a class of fuzzy regression models with some real-world applications. Soft Comput 24:11159–11187
  33. Karbasi D, Nazemi A, Rabiei M (2021) An optimization technique for solving a class of ridge fuzzy regression problems. Neural Process Lett 53:3307–3338
  34. Kennedy MP, Chua LO (1988) Neural networks for nonlinear programming. IEEE Trans Circuits Syst 35:554–562
  35. Kinderlehrer D, Stampacchia G (2000) An introduction to variational inequalities and their applications. SIAM, Philadelphia
  36. Krichen S, Masri H, Guitouni A (2012) Adjacency based method for generating maximal efficient faces in multiobjective linear programming. Appl Math Model 36:6301–6311
  37. Lai KK, Maury JK, Mishra SK (2022) Multiobjective approximate gradient projection method for constrained vector optimization: sequential optimality conditions without constraint qualifications. J Comput Appl Math 410:114122
  38. Lu K, Mizuno S, Shi J (2020) A new mixed integer programming approach for optimization over the efficient set of a multiobjective linear programming problem. Optim Lett 14:2323–2333
  39. Luquea M, Ruiz F, Steuer RE (2010) Modified interactive Chebyshev algorithm (MICA) for convex multiobjective programming. Eur J Oper Res 204:557–564
  40. Matejas J, Peric T (2014) A new iterative method for solving multiobjective linear programming problem. Appl Math Comput 243:746–754
  41. Mian A, Ensinas AV, Marechal F (2015) Multi-objective optimization of SNG production from microalgae through hydrothermal gasification. Comput Chem Eng 76:170–183
  42. Miettinen K (2002) Nonlinear multiobjective optimization. Kluwer Academic Publishers, Dordrecht
  43. Miller RK, Michel AN (1982) Ordinary differential equations. Academic Press, New York
  44. Nazemi A (2011) A dynamical model for solving degenerate quadratic minimax problems with constraints. J Comput Appl Math 236:1282–1295
  45. Nazemi A (2012) A dynamic system model for solving convex nonlinear optimization problems. Commun Nonlinear Sci Numer Simul 17:1696–1705
  46. Nazemi A (2013) Solving general convex nonlinear optimization problems by an efficient neurodynamic model. Eng Appl Artif Intell 26:685–696
  47. Nazemi A (2019) A new collaborate neuro-dynamic framework for solving convex second order cone programming problems with an application in multi-fingered robotic hands. Appl Intell 49:3512–3523
  48. Nazemi A, Effati S (2013) An application of a merit function for solving convex programming problems. Comput Ind Eng 66:212–221
  49. Nazemi A, Mortezaee M (2019) A new gradient-based neural dynamic framework for solving constrained min-max optimization problems with an application in portfolio selection models. Appl Intell 49:396–419
  50. Nazemi A, Omidi F (2012) A capable neural network model for solving the maximum flow problem. J Comput Appl Math 236:3498–3513
  51. Nazemi A, Sabeghi A (2019) A novel gradient-based neural network for solving convex second-order cone constrained variational inequality problems. J Comput Appl Math 347:343–356
  52. Nikseresht A, Nazemi A (2018) A novel neural network model for solving a class of nonlinear semidefinite programming problems. J Comput Appl Math 338:69–79
  53. Nikseresht A, Nazemi A (2019) A novel neural network for solving semidefinite programming problems with some applications. J Comput Appl Math 350:309–323
  54. Oberdiecka R, Pistikopoulos EN (2016) Multi-objective optimization with convex quadratic cost functions: a multi-parametric programming approach. Comput Chem Eng 85:36–39
  55. Pan A, Shen B, Wang L (2022) Ensemble of resource allocation strategies in decision and objective spaces for multiobjective optimization. Inf Sci, available online 13 May 2022
  56. Papalexandri KP, Dimkou TI (1998) A parametric mixed-integer optimization algorithm for multiobjective engineering problems involving discrete decisions. Ind Eng Chem Res 37(5):1866–82
  57. Rizk-Allah RM, Abo-Sinna MA (2017) Integrating reference point, Kuhn-Tucker conditions and neural network approach for multi-objective and multi-level programming problems. Opsearch 54:663–683
  58. Rodriguez-Vazquez A, Dominguez-Castro R, Rueda A, Huertas JL, Sanchez-Sinencio E (1990) Nonlinear switched-capacitor neural networks for optimization problems. IEEE Trans Circuits Syst 37:384–397
  59. Ruzika S, Wiecek MM (2005) Approximation methods in multiobjective programming. J Optim Theory Appl 126:473–501
  60. De Santis M, Eichfelder G (2021) A decision space algorithm for multiobjective convex quadratic integer optimization. Comput Oper Res 134:105396
  61. Shang Y, Yu B (2011) A constraint shifting homotopy method for convex multi-objective programming. J Comput Appl Math 236:640–646
  62. Shao L, Ehrgott M (2008) Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Math Methods Oper Res 68:257–276
  63. Shavazipour B, Lopez-Ibanezb M, Miettinen K (2021) Visualizations for decision support in scenario-based multiobjective optimization. Inf Sci 578:1–21
  64. Xia Y (2004) An extended neural network for constrained optimization. Neural Comput 16:863–883
  65. Xia Y, Wang J (2000) A recurrent neural network for solving linear projection equations. Neural Netw 13:337–350
  66. Yang Y, Cao J (2008) A feedback neural network for solving convex constraint optimization problems. Appl Math Comput 201:340–350
  67. Youshen X, Wang J (2004) A recurrent neural network for nonlinear convex optimization subject to nonlinear inequality constraint. IEEE Trans Circuits Syst I Regul Pap 51(7):1385–1394
  68. Yuf PL, Zeleny M (1976) Linear multiparametric programming by multicriteria simplex method. Manage Sci 23(2):159–70
  69. Zerfa L, Chergui MEA (2022) Finding non-dominated points for multiobjective integer convex programs with linear constraints. J Glob Optim
  70. Zhang J, Ishibuchi H, He L (2022) A classification-assisted environmental selection strategy for multiobjective optimization. Swarm Evol Comput 71:101074
