Abstract
A neural network model is constructed to solve the convex quadratic multi-objective programming problem (CQMPP). The CQMPP is first converted into an equivalent single-objective convex quadratic programming problem by means of the weighted sum method, where Pareto optimal solutions (POSs) are obtained by varying the weights. Then, for various given weight values, multiple projection neural networks are employed to search for Pareto optimal solutions. Using Lyapunov theory, the proposed neural network approach is shown to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the single-objective problem. Simulation results show that the presented model is feasible and efficient.
Keywords: Multi-objective optimization problem, Neural networks, Convex quadratic programming problem, Pareto optimal solution, Stability, Convergence
Introduction
The CQMPP is considered as
| 1 |
subject to
| 2 |
where the objective functions are of the form
and some (or ) could be is a real symmetric positive semidefinite matrix, , and It is clear that the functions are convex and twice continuously differentiable on an open convex set including Throughout this paper, we assume The K objectives conflict with one another. Therefore, the goal of the CQMPP is to obtain a set of efficient solutions, called the Pareto set.
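For concreteness, a problem of this class is conventionally written in the following standard form (a sketch with assumed notation; the symbols $Q_k$, $c_k$, $A$, $b$ are illustrative choices, not necessarily the exact ones used in (1)–(2)):

```latex
\min_{x \in \mathbb{R}^n} \; \bigl( f_1(x), \ldots, f_K(x) \bigr)
\quad \text{s.t.} \quad A x \le b, \; x \ge 0,
\qquad \text{where} \quad f_k(x) = \tfrac{1}{2}\, x^{\top} Q_k x + c_k^{\top} x,
```

with each $Q_k$ real, symmetric and positive semidefinite, so that every $f_k$ is convex.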
Multi-objective optimization has become an important research area for both scientists and practitioners. In a multi-objective optimization problem, several objective functions need to be optimized simultaneously. In this case, a solution that is best with respect to all objectives may not exist, because the objectives conflict with one another. Therefore, there usually exists a set of solutions that cannot simply be compared with each other. For such solutions, called non-dominated solutions or POSs, no improvement is possible in any objective function without sacrificing at least one of the other objective functions (Miettinen 2002; Copado-Mendez et al. 2014; Oberdiecka and Pistikopoulos 2016; Mian et al. 2015; Hwang and Masud 1979). For an introduction to multi-objective optimization we refer the reader to Chinchuluun and Pardalos (2007), Ehrgott (2005) and Ehrgott and Wiecek (2005). For some recent works and surveys of recent developments on multi-objective optimization problems, see (Jiang et al. 2022; Pan et al. 2022; Zhang et al. 2022; Lai et al. 2022; Hu et al. 2022; Kabgani and Soleimani-damaneh 2022; Ghalavand et al. 2021; Shavazipour et al. 2021; Dominguez-Rios et al. 2021; Groetzner and Werner 2022).
The multi-objective problem is convex if all its objective functions are convex and its feasible set is convex. There are several methods for solving these kinds of problems. For instance, for multi-objective linear programming (MOLP) problems, researchers have developed a variety of methods for generating the efficient set or the non-dominated set such as multi-objective simplex methods, interior point methods, and objective space methods. Multi-objective simplex methods and interior point methods work in variable space to find efficient solutions, see the references in Ehrgott and Wiecek (2005). Benson in Benson (1998) has studied an outer approximation method to find the extended feasible set in objective space (and thereby the non-dominated set) of an MOLP. In Ehrgott et al. (2007) and Shao and Ehrgott (2008) some improvements for this algorithm have been suggested. Krichen et al. in Krichen et al. (2012) have considered a new method based on the concept of adjacencies between efficient extreme points for generating the entire efficient set for MOLPs in the outcome space. Matejas and Peric in Matejas and Peric (2014) have suggested a new iterative method based on the principles of game theory for solving MOLPs with an arbitrary number of decision makers. Antunes et al. in Antunes et al. (2016) have studied five multiobjective interactive methods for MOLP problems, which are representative of different strategies of search. Lu et al. in Lu et al. (2020) have solved an equivalent mixed integer programming problem to compute an optimal solution to the MOLP problems. Shao and Ehrgott in Shao and Ehrgott (2008) introduced an approximation version of Benson’s algorithm to sandwich the extended feasible set of an MOLP with an outer approximation and an inner approximation.
A more general case of MOLP is the CQMPP stated in (1)–(2). Problems of type (1)–(2) arise in many different applications, such as engineering, economics and biological systems (see Miettinen 2002; Copado-Mendez et al. 2014; Oberdiecka and Pistikopoulos 2016; Mian et al. 2015). One of the best-known strategies to obtain a Pareto point of (1)–(2) is the -constraint method (Miettinen 2002). However, in this strategy the solution of the CQMPP depends on the values of a certain parameter, namely . Hence, while many researchers consider the iterative solution of the resulting optimization problems for different parameter values (Capitanescu et al. 2015; Antipova et al. 2015), some attention has been given to the explicit calculation of the entire Pareto front via parametric programming, which solves optimization problems as a function of, and for a range of, certain parameters. In Gass and Saaty (1955) and Yuf and Zeleny (1976), the authors considered the case of linear cost functions, and in Papalexandri and Dimkou (1998) the case of a mixed-integer nonlinear multi-objective optimization problem was addressed. The case of quadratic cost functions was treated in Goh and Yang (1996) and Ghaffari-Hadigheh et al. (2010), although either only conceptually or for the case where the quadratic part remains constant. In Oberdiecka and Pistikopoulos (2016), the authors introduced an algorithm for the approximate explicit solution of multi-objective optimization problems with general convex quadratic cost functions and linear constraints by multi-parametric programming. In Ehrgott et al. (2011), Ehrgott et al. extended the algorithm in Shao and Ehrgott (2008) to approximately solve the CQMPP. De Santis and Eichfelder in De Santis and Eichfelder (2021) presented a branch-and-bound algorithm for minimizing multiple convex quadratic objective functions over integer variables.
Shang and Yu in Shang and Yu (2011) presented a constraint-shifting combined homotopy method for solving the CQMPP with both equality and inequality constraints. Luquea et al. in Luquea et al. (2010) described an interactive procedural algorithm for the CQMPP based upon the Tchebycheff method, Wierzbicki's reference point approach, and the procedure of Michalowski and Szapiro. Zerfa and Chergui in Zerfa and Chergui (2022) presented a branch-and-bound based algorithm to generate all non-dominated points for a multiobjective integer programming problem with convex quadratic objective functions and linear constraints.
In the past decades, neural networks have been successfully applied in various fields such as signal processing, pattern recognition, fixed-point computation, control and other scientific areas. Mathematically, the neural dynamical approach converts an optimization problem into a dynamical system, or a system of ordinary differential equations, whose equilibrium point corresponds to the optimal solution of the optimization problem. The optimal solution can then be obtained by tracking the solution trajectory of the designed dynamical system in a hardware or software implementation. The literature contains many studies on neural networks for solving real-time optimization problems (see Kennedy and Chua 1988; Rodriguez-Vazquez et al. 1990; Friesz et al. 1994; Gao 2004; Gao et al. 2005; Nazemi and Effati 2013; Nazemi 2012, 2013, 2011; Nazemi and Omidi 2012; Yang and Cao 2008; Xia 2004; Xia and Wang 2000; Nikseresht and Nazemi 2019, 2018; Karbasi et al. 2021; Feizi and Nazemi 2021; Karbasi et al. 2020; Nazemi and Sabeghi 2019; Nazemi and Mortezaee 2019; Nazemi 2019; Arjmandzadeh et al. 2019). Although each of these studies works successfully for a specific kind of optimization problem, to the best of our knowledge, there exists only one neural network approach that solves the CQMPP (see Rizk-Allah and Abo-Sinna 2017 and Abo-Sinna and Rizk-Allah 2018). However, the structure of that model is rather complicated and further simplification can be achieved. Thus, proposing an efficient neural network for solving CQMPPs with good stability properties and convergence results is necessary and meaningful. The main contributions of this study are as follows:
-
(i)
In the presented model, the primal and dual problems can be solved simultaneously.
-
(ii)
By the means of the weighted sum method, POSs can be obtained through using different weights.
-
(iii)
The new network can converge to the optimal solution when the matrix is positive semidefinite, which overcomes a limitation of the models in Nazemi and Effati (2013) and Nazemi (2012).
Motivated by the above discussion, in this paper we consider a neural network approach for solving CQMPPs. First, the CQMPP is transformed into a single-objective convex quadratic program by means of the weighted sum method. Based on the Karush-Kuhn-Tucker (KKT) optimality conditions, and in order to obtain a set of Pareto optimal solutions, a collection of neural networks associated with various weight values is constructed. The equilibrium point of the neural network is proved to be equivalent to the KKT point of the convex program. The existence and uniqueness of an equilibrium point of the neural network are also analyzed. By constructing a suitable Lyapunov function, the studied neural network is proved, for an arbitrary initial point, to be globally stable in the sense of Lyapunov and capable of obtaining the optimal solutions of the resulting nonlinear programs for different weight values. Further, the POSs of the CQMPP are generated by utilizing different weights.
This paper is organized as follows. In Sect. 2, some preliminaries of multi-objective optimization are described. In Sect. 3, a collaborative neurodynamic strategy is addressed. In Sect. 4, the stability of the equilibrium point and the convergence to the optimal solution are investigated. A comparison of the suggested model with some previous neural networks is stated in Sect. 5. In Sect. 6, several illustrative examples and simulation results are given to show the feasibility of the proposed model. Conclusions are given in Sect. 7.
Preliminaries of multi-objective optimization
This section provides the necessary mathematical background for multi-objective optimization problems (Miettinen 2002; Chankong and Haimes 1983).
Definition 1
The natural ordering cone is defined as follows:
For any
for all
for all
for all and
Due to the conflict among the objective functions, it is often not possible to find a single solution that would be optimal for all the objectives simultaneously. As a result, the solution of problem (1)–(2) is defined through the concept of Pareto optimality, which refers to a solution at which none of the objective functions can be improved without detriment to at least one of the other objective functions. A formal definition of Pareto optimality is given in the following.
Definition 2
A feasible solution is called
-
(i)
an efficient (a Pareto optimal) solution for problem (1)–(2), if there is no other such that
-
(ii)
a weakly efficient (a weakly Pareto optimal) solution for problem (1)–(2), if there is no other such that
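Definitions of this kind translate directly into a componentwise test. The sketch below (with hypothetical helper names, not part of the paper's method) checks the dominance relations underlying Definition 2 for a minimization problem:

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb: fa is no worse in
    every objective and strictly better in at least one (minimization)."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def strictly_dominates(fa, fb):
    """True if fa is strictly better than fb in every objective;
    this is the relation used in the weak Pareto optimality definition."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa < fb))

# (1, 2) dominates (1, 3); (1, 3) and (2, 2) are mutually incomparable
assert dominates([1.0, 2.0], [1.0, 3.0])
assert not dominates([1.0, 3.0], [2.0, 2.0]) and not dominates([2.0, 2.0], [1.0, 3.0])
```

A feasible point is then Pareto optimal exactly when no other feasible point dominates it, and weakly Pareto optimal when no other point strictly dominates it.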
In order to obtain Pareto optimal solutions, one widely used approach is the weighted sum method (Abd El-Waheda et al. 2010; Miettinen 2002), which transforms the multiple objectives into a single one by multiplying each objective by a weight. Given a weighting vector problem (1)–(2) is converted into
| 3 |
subject to
| 4 |
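In standard notation (a sketch; $w$ denotes the weighting vector and the constraint set is written as in the assumed form of (1)–(2)), the weighting problem (3)–(4) reads:

```latex
\min_{x} \; \sum_{k=1}^{K} w_k \, f_k(x)
\quad \text{s.t.} \quad A x \le b, \; x \ge 0,
\qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1.
```

Since each $f_k$ is convex and the weights are nonnegative, the scalarized objective is again a convex quadratic function.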
Theorem 1
Miettinen (2002) If is an optimal solution of the weighting problem (3)–(4) where either , or is a unique optimal solution, then is a Pareto optimal solution of the multi-objective program (1)–(2).
Theorem 2
Miettinen (2002) Let the multi-objective optimization problem (1)–(2) be convex. If is Pareto optimal, then there exists a weighting vector which is a solution to the weighting problem (3)–(4).
The following assumptions hold throughout this paper.
Assumption 1
The minimizer of each objective function exists.
Assumption 2
A finite solution of (3)–(4) satisfies the Slater condition (Avriel 1976), i.e., for a given there exists a such that
Assumption 3
The are linearly independent, where and .
Assumption 4
In all parts, we assume
The neural network model
In this section, we construct a neural network for solving (3)–(4). First, we give a necessary and sufficient condition for an optimal solution of (3)–(4), which is the theoretical foundation for designing the network.
Theorem 3
Bazaraa et al. (1993) is an optimal solution of (3)–(4) if and only if there exist and such that satisfies the following KKT system
| 5 |
In order to obtain a projection neural network, we state the KKT conditions (5) in variational inequality form.
Theorem 4
Gao (2004) is an optimal solution of (3)–(4), if and only if there exists and such that satisfies the following KKT conditions:
| 6 |
Lemma 1
Youshen and Wang (2004) is an optimal solution of (3)–(4) if and only if there exists and such that:
| 7 |
where and
Lemma 1 indicates that the optimal solution of (3)–(4) can be obtained by solving (7).
Lemma 2
Kinderlehrer and Stampacchia (2000) Assume that the set is a closed convex set. Then
| 8 |
| 9 |
A basic property of the projection mapping onto a closed convex set is stated in Lemma 2.
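The projection properties of Lemma 2 are easy to verify numerically for a simple closed convex set such as a box; the check below is an illustrative sketch (the box set and the sample sizes are assumptions), not part of the paper's algorithm:

```python
import numpy as np

# projection onto the box [-1, 1]^n, a simple closed convex set
P = lambda z: np.clip(z, -1.0, 1.0)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # non-expansiveness: ||P(x) - P(y)|| <= ||x - y||
    assert np.linalg.norm(P(x) - P(y)) <= np.linalg.norm(x - y) + 1e-12
    # variational characterization: (x - P(x))^T (v - P(x)) <= 0 for v in the set
    v = rng.uniform(-1.0, 1.0, size=5)
    assert np.dot(x - P(x), v - P(x)) <= 1e-12
```

The second assertion is the defining inequality of the projection: P(x) is the unique point of the set closest to x.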
For simplicity, we denote
Based on the above results and by an extension of the neural network in Gao (2004), we introduce the following architecture to solve (3)–(4):
| 10 |
where is a scale parameter that determines the convergence rate of the neural network (10). For simplicity of analysis, we let
Let be the equilibrium point set of the neural network (10). From Lemma 1 we have that is an equilibrium point of the neural network (10) if and only if is an optimal solution of (3)–(4). That is, .
In the following, we give a more technical description of the proposed neural network algorithm, whose framework is shown in Algorithm 1. The procedure starts by choosing an arbitrary initial vector. The iterations are then controlled by the main while-loop of Algorithm 1. In each iteration, the right-hand side of system (10) is evaluated, the current vector is updated, and the error and stopping criterion are computed. The pseudo-code of the presented algorithm is stated as follows:
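The loop just described can be illustrated on a generic projection network of the form dz/dt = P(z − F(z)) − z (a hedged sketch: the function names, the box feasible set and the toy objective are assumptions, since the exact display of system (10) is not reproduced here):

```python
import numpy as np

def run_projection_network(F, z0, project, step=1e-2, tol=1e-8, max_iter=100000):
    """Forward-Euler integration of dz/dt = project(z - F(z)) - z.
    The loop stops when the right-hand side, which vanishes exactly at an
    equilibrium point, becomes small: the error/stopping test of the
    while-loop described above."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        rhs = project(z - F(z)) - z          # right-hand side of the system
        if np.linalg.norm(rhs) < tol:        # stopping criterion
            break
        z = z + step * rhs                   # update the current vector
    return z

# toy instance: minimize 0.5*||x - c||^2 over the box [0, 1]^2
c = np.array([1.5, -0.3])
x_star = run_projection_network(lambda x: x - c, np.zeros(2),
                                lambda z: np.clip(z, 0.0, 1.0))
# the equilibrium is the projection of c onto the box: (1.0, 0.0)
```

Tracking the discrete trajectory in this way mimics, in software, how the continuous-time network converges to its equilibrium point.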
Remark 1
By using the weighted sum method, a single-objective optimization problem of (1)–(2) can be formulated as (3)–(4), in which the scalar objective (3) is a weighted sum of the original objective functions. A system consisting of multiple neural networks is developed to cooperatively seek Pareto optimal solutions of the multi-objective optimization problem (1)–(2). Each neural network is run for a fixed given weight In fact, each neural network optimizes such that while seeking a point of the Pareto efficient frontier (a Pareto optimal solution). Thus, for a given arbitrary value and solving the single-objective problem as the ith sub-problem, the ith neural network with weight is as
where
is adopted and updated to optimize the ith sub-problem. Each neural network is supplied with a and generates an equilibrium point after performing a precise local search. That is, given a set of weights where and is the total number of employed neural networks, the neural networks output a set of equilibrium points Necessary and sufficient conditions are satisfied to guarantee the convergence of the collaborative neurodynamic system to a Pareto optimal solution. Finally, the Pareto optimal set is constructed by using different weight values.
Stability analysis
In this section, we study some stability and convergence properties of network (10). We now establish our main results.
Theorem 5
For any initial point , there exists a unique continuous solution for system (10). Moreover, and , provided that and .
Proof
The projection mappings and are non-expansive. Since and are continuously differentiable on an open convex set including , and are locally Lipschitz continuous. By the local existence theory for ordinary differential equations, the initial value problem for system (10) has a unique solution on , where is its maximal interval of existence. Moreover, as can be seen in the proof of Theorem 6, the solution z(t) is bounded and thus
Let the initial point and Since
| 11 |
We have
| 12 |
Then
| 13 |
By the integral mean value theorem, we have:
| 14 |
It follows that and from and . This completes the proof.
Remark 2
The results of Theorem 5 indicate that neural network (10) is well defined. In the proof of Theorem 5, we assumed that that is to say, the initial point should be feasible. But if , as , will eventually approach by (14). So there exists a time such that can be regarded as a feasible (or approximately feasible) point and will stay in from then on.
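The behavior described in Remark 2 can be observed numerically: starting the generic projection dynamics from an infeasible point, the distance to the feasible set decays to zero. The box feasible set and the quadratic objective below are assumptions for illustration:

```python
import numpy as np

P = lambda z: np.clip(z, 0.0, 1.0)            # projection onto the box [0,1]^2
F = lambda z: z - np.array([0.8, 0.2])        # gradient of 0.5*||z - (0.8, 0.2)||^2

z = np.array([10.0, -7.0])                    # infeasible initial point
dist0 = np.linalg.norm(z - P(z))              # initial distance to the box
for _ in range(5000):                         # Euler steps of dz/dt = P(z - F(z)) - z
    z = z + 0.01 * (P(z - F(z)) - z)
dist = np.linalg.norm(z - P(z))               # final distance: essentially zero
```

Here the trajectory not only enters the box but settles at the interior minimizer (0.8, 0.2), so after a finite time the state can be treated as feasible, as the remark asserts.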
Lemma 3
Let be an equilibrium point of (10) and
| 15 |
where
| 16 |
Then
-
i)
is convex on D and continuously differentiable on
-
ii)
17
Proof
-
i)
It is easy to verify that is convex and differentiable on From Bazaraa et al. (1993), is convex and differentiable on and continuously differentiable on By the convexity and differentiability of we see that in (16) is also convex on D and continuously differentiable on Thus i) is achieved.
- ii)
The results in Lemma 3 provide a means to study the dynamics of neural network (10). In particular, network (10) has the following basic property.
Theorem 6
The neural network (10) is stable in the sense of Lyapunov. Moreover, the solution trajectory of neural network (10) can be extended to exist globally.
Proof
By Theorem 5, , there exists a unique continuous solution of system (10).
Define a Lyapunov function as (15). Noting that and
| 19 |
We have
| 20 |
Using part i) of Lemma 3 and (15), we get
and
| 21 |
In the above calculations we used
Moreover, letting and in the inequality of Lemma 2, we obtain:
| 22 |
From the differentiability and convexity of , we have
| 23 |
Substituting (6), (22) and (23) into (21), , one has
| 24 |
This means neural network (10) is globally stable in the sense of Lyapunov.
From (24) we know that V is a non-increasing function with respect to t, so
| 25 |
This shows the state trajectory of neural network (10) is bounded. Thus .
Theorem 7
The solution trajectory of the neural network (10) is globally convergent to an optimal point of problem (3)–(4). In particular, neural network (10) is asymptotically stable when contains only one point.
Proof
From (17), the level set is bounded. Thus, there exists a convergent subsequence
such that
| 26 |
where satisfies
which indicates that is an -limit point of Using the LaSalle Invariant Set Theorem (Miller and Michel 1982), one has that as where M is the largest invariant set in From (10) and (24), it follows that Thus by
Substituting into (15), and arguing as above, we can prove that the function is monotonically nonincreasing in , and
From the continuity of and (26), , there exists a such that
| 27 |
Thus
That is, . So, the solution trajectory of the neural network (10) is globally convergent to an equilibrium point, which is also an optimal solution of (3)–(4). In particular, if , then for each , v free in sign, and , the solution z with initial point will approach by the analysis above. That is, the neural network (10) is globally asymptotically stable.
Comparing with some existing models
In order to see how well the presented neural network (10) solves (3)–(4), we compare it with some existing neural network models.
First, let us consider the following problem
| 28 |
| 29 |
| 30 |
where and some (or ) could be is a real symmetric positive semidefinite matrix, , and By extending the globally projected dynamical systems (Friesz et al. 1994) to constrained optimization problems, as done by Xia in Xia (2004), a neural network can be utilized for solving (28)–(30) as
| 31 |
| 32 |
| 33 |
where is a closed convex set and is a projection operator (Gao 2004) defined by
It is well known that the asymptotic stability of the dynamic model (31)–(33) cannot be guaranteed for a monotone but asymmetric mapping (Gao et al. 2005; Xia and Wang 2000). Thus, the system described by (31)–(33) cannot solve problem (28)–(30) in some cases. For instance, see Example 1 in this manuscript.
The proposed model in Nazemi and Effati (2013) can be used for constructing the following neural network for solving (3)–(4) as
| 34 |
| 35 |
where
| 36 |
| 37 |
and
Following Nazemi (2012), under the condition that is strictly convex, a Lagrangian network model can be constructed for solving (3)–(4) as
| 38 |
| 39 |
where
It should be noted that the convergence of the models (34)–(35) and (38)–(39) is guaranteed only under the condition that the matrix is positive definite. Thus, these models cannot solve linear programming problems, which limits their applicability. In contrast, the neural network (10) can converge to the optimal solution when the matrix is only positive semidefinite. For instance, see Example 1 in this manuscript.
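The distinction can be illustrated on the extreme positive semidefinite case Q = 0, i.e. a purely linear objective, which is outside the scope of models requiring positive definiteness: the generic projection dynamics still reach the box-constrained minimizer. The instance below is an assumed toy example, not one from the paper:

```python
import numpy as np

c = np.array([1.0, -1.0])                 # linear objective c @ x, so Q = 0 (only PSD)
P = lambda z: np.clip(z, 0.0, 1.0)        # projection onto the box [0,1]^2

x = np.array([0.5, 0.5])
for _ in range(5000):                     # Euler steps of dx/dt = P(x - c) - x
    x = x + 0.01 * (P(x - c) - x)
# box-constrained minimizer: x1 -> 0 (since c1 > 0), x2 -> 1 (since c2 < 0)
```

Because the dynamics only require the projection of the gradient step, no strict convexity (and hence no positive definiteness of the quadratic term) is needed for the iteration to be well defined.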
Numerical simulations
In order to illustrate the effectiveness of the proposed neural network (10), in this section we test several examples. The simulations are conducted in Matlab R2022.
Example 1
Matejas and Peric (2014) Let and Considering
| 40 |
All simulation results show that the state trajectories of the model (10) with
converge to the unique optimal solution. Figure 1 depicts the trajectories of network (10) converging to the optimal solution of the related single-objective problem.
Fig. 1.

Transient behavior of of the neural network (10) in Example 1
To make a comparison, we solve the MOLP problem using neural network (31)–(33), as shown in Fig. 2. It is clear that this model is not stable. The MOLP problem is also solved using the model (38)–(39). From Fig. 3, we see that this model is not suitable for solving the MOLP problem either.
Fig. 2.

Fig. 3.

Example 2
Abo-Sinna and Rizk-Allah (2018)
| 41 |
Table 1 shows results for different values of All simulation results show that the state trajectories of model (10) converge to the optimal solution for different choices of the weight values For example, Fig. 4 shows that the trajectories of network (10) with 5 different initial points and weight converge to the optimal solution of the related single-objective problem. The Pareto fronts generated using the established neural network approach are presented in Fig. 5.
Table 1.
Results for some values of in Example 2
| Weight | Optimal solution | Objective function value | |||
|---|---|---|---|---|---|
| 1.999999825 | 3.000000641 | 5.000002214 | 6.69E-13 | 6.69E-13 | |
| 1.899998993 | 2.800000072 | 4.049998447 | 0.050000173 | 0.45 | |
| 1.800000014 | 2.600000773 | 3.200002496 | 0.199999376 | 0.8 | |
| 1.699998518 | 2.400000889 | 2.450000415 | 0.449999822 | 1.05 | |
| 1.599998815 | 2.19999806 | 1.799993921 | 0.800004052 | 1.2 | |
| 1.499999145 | 1.999998341 | 1.249995827 | 1.250004173 | 1.25 | |
| 1.399999185 | 1.799998667 | 0.799997216 | 1.800004177 | 1.2 | |
| 1.29999882 | 1.600000145 | 0.449999466 | 2.450001247 | 1.05 | |
| 1.200001354 | 1.399999501 | 0.200000143 | 3.19999943 | 0.8 | |
| 1.100001094 | 1.2000013 | 0.050000739 | 4.04999335 | 0.45 | |
| 0.999999646 | 1.000001578 | 2.62E-12 | 4.999994397 | 2.62E-12 | |
Fig. 4.

Transient behaviors of with for 5 different initial points of the neural network (10) in Example 2
Fig. 5.

Obtained POSs of the problem (41) using the neural network (10) in Example 2
Example 3
Ehrgott et al. (2011)
| 42 |
Table 2 reports results for different values of Fig. 6 shows the convergence behavior of x(t) based on network (10) with 25 random initial points. Figure 7 illustrates that the proposed approach converges to the Pareto front of problem (42) for different values of
Table 2.
Results for some values of in Example 3
| Weight | Optimal solution | Objective function value | ||||
|---|---|---|---|---|---|---|
| 2.6000022 | 2.199995 | 3.9999949 | 1.0000106 | 2.0000122 | 2 | |
| 2.3999969 | 1.8000035 | 2.5999969 | 1.5999892 | 2.5999793 | 2.4 | |
| 1.9999954 | 1.9999954 | 1.9999815 | 1.0000091 | 3.9999909 | 2 | |
| 2.199998 | 1.8999956 | 2.2499873 | 1.2500087 | 3.2500134 | 2.25 | |
| 2.2999970 | 2.0999956 | 2.8999826 | 0.9000061 | 2.8999882 | 2.1 | |
| 2.5000026 | 2.0000012 | 3.2500102 | 1.2500003 | 2.2499903 | 2.25 | |
| 1.5999985 | 1.7000049 | 0.8500051 | 1.8499883 | 5.8500030 | 1.65 | |
| 2.5000058 | 2.499999 | 4.5000142 | 0.5000067 | 2.5000206 | 1.5 | |
| 3.0999935 | 2.1999959 | 5.8499631 | 1.8499922 | 0.8500102 | 1.65 | |
| 1.3999980 | 1.2999987 | 0.2499976 | 3.2500068 | 7.2499837 | 1.25 | |
| 2.1000027 | 2.6999939 | 4.0999852 | 0.1000041 | 4.1000223 | 0.9 | |
| 3.4999962 | 2.0000035 | 7.2499877 | 3.2499814 | 0.2499981 | 1.25 | |
| 1.7999984 | 1.5999966 | 0.9999932 | 2.0000103 | 5.0000183 | 2 | |
Fig. 6.

Transient behaviors of with for 25 different initial points of the neural network (10) in Example 3
Fig. 7.
Example 4
Oberdiecka and Pistikopoulos (2016)
| 43 |
where
Table 3 reports results for different values of Fig. 8 shows the convergence behavior of x(t) based on the proposed neural network with 11 random initial points. Figure 9 demonstrates that the proposed approach converges to the Pareto front of problem (43) for different values of
Table 3.
Results for some values of in Example 4
| Weight | Optimal solution | Objective function value | | | | |
|---|---|---|---|---|---|---|
| 0.1552789 | 0.0400000 | 0.39355 | 1.10211 | 2.08479 | 0.31436024 | |
| 0.2287582 | 0.0208333 | 0.55219 | 1.07321 | 2.18326 | 0.43778594 | |
| 0.2649000 | 0.0310083 | 0.61205 | 1.05616 | 2.24584 | 0.2181220 | |
| 0.2467110 | 0.0266666 | 0.58263 | 1.06647 | 2.21320 | 0.11096710 | |
| 0.2083334 | 0.0349344 | 0.50734 | 1.08966 | 2.15221 | 0.05759643 | |
| 0.1910823 | 0.0306122 | 0.47493 | 1.09323 | 2.12802 | 0.38075848 | |
| 0.3697149 | 0.0211998 | 0.76405 | 0.93602 | 2.47852 | 0.49138443 | |
| 0.1371947 | 0.0506329 | 0.34530 | 1.10392 | 2.06651 | 0.07692767 | |
| 0.0898201 | 0.0434781 | 0.23511 | 1.09060 | 2.02870 | 0.866532153 | |
| 0.4664179 | 0.0072732 | 0.85499 | 0.75534 | 2.76142 | 0.48338602 | |
| 0.1697531 | 0.0528075 | 0.41630 | 1.10376 | 2.10155 | 0.71448631 | |
| 0.0591715 | 0.0277364 | 0.16299 | 1.06881 | 2.01244 | 1.48677679 | |
| 0.3496514 | 0.0160000 | 0.74139 | 0.96003 | 2.42796 | 0.15125035 | |
Fig. 8.

Transient behaviors of with for 11 different initial points of the neural network (10) in Example 4
Fig. 9.

To end this section, we answer two natural questions: what are the practical and computational advantages of network (10) compared to existing, generally available algorithms for CQMPPs, and does our network offer advantages over existing neural network models? To answer these, we summarize our observations from the numerical experiments and theoretical results below.
We compared framework (10) with some existing models in Sect. 5. At first glance, the difference in numerical performance appears marginal when testing a single optimization example.
From Theorem 7, the solution converges from any initial point. Thus, the choice of initial point has little effect on our neural network model. In fact, our model is globally convergent to the optimal solution.
The suggested neural network (10) can be implemented without a penalty parameter and converges to an exact solution of the CQMPP.
Conclusion
In this paper we established a neural network model for CQMPPs. The proposed neural network has been shown to be Lyapunov stable and globally convergent to the optimal solution of the related single-objective problem of the CQMPP. This convergence has been validated by simulation results on CQMPP examples. The studied neural network requires no adjustable parameters and has a simple structure; thus, the model is well suited to hardware implementation. Moreover, since the CQMPP has wide applications, the new model and the results obtained in this paper are significant in both theory and application.
Funding
The authors have not disclosed any funding.
Data Availability
Enquiries about data availability should be directed to the authors.
Declarations
Conflict of interest
Both of the authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with animals performed by any of the authors.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Abd El-Waheda WF, Zaki EM, El-Refaey AM (2010) Artificial immune system based neural networks for solving multi-objective programming problems. Egyptian Inform J 11:59–65 [Google Scholar]
- Abo-Sinna MA, Rizk-Allah RM (2018) Decomposition of parametric space for bi-objective optimization problem using neural network approach. Opsearch 55:502–531 [Google Scholar]
- Antipova E, Pozo C, Guillen-Gosalbez G, Boer D, Cabeza LF, Jimenez L (2015) On the use of filters to facilitate the post-optimal analysis of the Pareto solutions in multiobjective optimization. Comput Chem Eng 74:48–58 [Google Scholar]
- Antunes CH, Alves MJ, Climaco J (2016) Interactive methods in multiobjective linear programming, EURO advanced tutorials on operational research. Springer, Cham, 57–136
- Arjmandzadeh Z, Nazemi A, Safi M (2019) Solving multiobjective random interval programming problems by a capable neural network framework. Appl Intell 49:1566–1579 [Google Scholar]
- Avriel M (1976) Nonlinear programming: analysis and methods, Englewood Cliffs. Prentice-Hall, NJ [Google Scholar]
- Bazaraa MS, Sherali HD, Shetty CM (1993) Nonlinear Programming- Theory and Algorithms, 2nd edn. Wiley, New York [Google Scholar]
- Benson HP (1998) An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J Global Optim 13:1–24 [Google Scholar]
- Capitanescu F, Ahmadi A, Benetto E, Marvuglia A, Tiruta-Barna L (2015) Some efficient approaches for multi-objective constrained optimization of computationally expensive black-box model problems. Comput Chem Eng 82:228–39 [Google Scholar]
- Chankong V, Haimes YY (1983) Multiobjective Decision making: theory and methodology. Elsevier Science Publishing, Amsterdam [Google Scholar]
- Chinchuluun A, Pardalos PM (2007) A survey of recent developments in multiobjective optimization. Ann Oper Res 154:29–50 [Google Scholar]
- Coello CA (2009) Evolutionary multi-objective optimization: some current research trends and topics that remain to be explored. Front Comput Sci 3:18–30 [Google Scholar]
- Copado-Mendez PJ, Guillen-Gosalbez G, Jimenez L (2014) MILP-based decomposition algorithm for dimensionality reduction in multi-objective optimization: application to environmental and systems biology problems. Comput Chem Eng 67:137–147 [Google Scholar]
- Dominguez-Rios MA, Chicano F, Alba E (2021) Effective anytime algorithm for multiobjective combinatorial optimization problems. Inf Sci 565:210–228 [Google Scholar]
- Ehrgott M (2005) Multicriteria optimization. Springer, Berlin [Google Scholar]
- Ehrgott M, Shao L, Schobel A (2011) An approximation algorithm for convex multi-objective programming problems. J Global Optim 50:397–416 [Google Scholar]
- Ehrgott M, Lohne A, Shao L (2007) A dual variant of Benson’s outer approximation algorithm, Report 654, Department of Engineering Science, The University of Auckland
- Ehrgott M, Wiecek M (2005) Multiobjective programming. In: Figueira J, Greco S, Ehrgott M (eds) Multicriteria decision analysis: state of the art surveys. Springer Science + Business Media, New York, pp 667–722
- Feizi A, Nazemi A (2021) Solving the stochastic support vector regression with probabilistic constraints by a high-performance neural network model. Eng Comput, 1–16
- Friesz TL, Bernstein DH, Mehta NJ, Tobin RL, Ganjlizadeh S (1994) Day-to-day dynamic network disequilibria and idealized traveler information systems. Oper Res 42:1120–1136
- Gao X (2004) A novel neural network for nonlinear convex programming. IEEE Trans Neural Netw 15:613–621
- Gao XB, Liao L-Z, Qi LQ (2005) A novel neural network for variational inequalities with linear and nonlinear constraints. IEEE Trans Neural Netw 16:1305–1317
- Gass S, Saaty T (1955) The computational algorithm for the parametric objective function. Naval Res Logist Quart 2(1–2):39–45
- Ghaffari-Hadigheh A, Romanko O, Terlaky T (2010) Bi-parametric convex quadratic optimization. Optim Methods Softw 25(2):229–245
- Ghalavand N, Khorram E, Morovati V (2021) An adaptive nonmonotone line search for multiobjective optimization problems. Comput Oper Res 136:105506
- Goh CJ, Yang XQ (1996) Analytic efficient solution set for multi-criteria quadratic programs. Eur J Oper Res 92(1):166–181
- Groetzner P, Werner R (2022) Multiobjective optimization under uncertainty: a multiobjective robust (relative) regret approach. Eur J Oper Res 296:101–115
- Hu Z, Zhou T, Su Q, Liu M (2022) A niching backtracking search algorithm with adaptive local search for multimodal multiobjective optimization. Swarm Evol Comput 69:101031
- Hwang CL, Masud ASM (1979) Multiple objectives decision making: methods and applications. Springer, Berlin
- Jiang J, Han F, Wang J, Ling Q, Han H, Wang Y (2022) A two-stage evolutionary algorithm for large-scale sparse multiobjective optimization problems. Swarm Evol Comput 17:101093
- Kabgani A, Soleimani-damaneh M (2022) Semi-quasidifferentiability in nonsmooth nonconvex multiobjective optimization. Eur J Oper Res 299:35–45
- Karbasi D, Nazemi A, Rabiei M (2020) A parametric recurrent neural network scheme for solving a class of fuzzy regression models with some real-world applications. Soft Comput 24:11159–11187
- Karbasi D, Nazemi A, Rabiei M (2021) An optimization technique for solving a class of ridge fuzzy regression problems. Neural Process Lett 53:3307–3338
- Kennedy MP, Chua LO (1988) Neural networks for nonlinear programming. IEEE Trans Circuits Syst 35:554–562
- Kinderlehrer D, Stampacchia G (2000) An introduction to variational inequalities and their applications. SIAM, Philadelphia
- Krichen S, Masri H, Guitouni A (2012) Adjacency based method for generating maximal efficient faces in multiobjective linear programming. Appl Math Model 36:6301–6311
- Lai KK, Maury JK, Mishra SK (2022) Multiobjective approximate gradient projection method for constrained vector optimization: sequential optimality conditions without constraint qualifications. J Comput Appl Math 410:114122
- Lu K, Mizuno S, Shi J (2020) A new mixed integer programming approach for optimization over the efficient set of a multiobjective linear programming problem. Optimiz Lett 14:2323–2333
- Luque M, Ruiz F, Steuer RE (2010) Modified interactive Chebyshev algorithm (MICA) for convex multiobjective programming. Eur J Oper Res 204:557–564
- Matejas J, Peric T (2014) A new iterative method for solving multiobjective linear programming problem. Appl Math Comput 243:746–754
- Mian A, Ensinas AV, Marechal F (2015) Multi-objective optimization of SNG production from microalgae through hydrothermal gasification. Comput Chem Eng 76:170–183
- Miettinen K (2002) Nonlinear multiobjective optimization. Kluwer Academic Publishers, Dordrecht
- Miller RK, Michel AN (1982) Ordinary differential equations. Academic Press, New York
- Nazemi A (2011) A dynamical model for solving degenerate quadratic minimax problems with constraints. J Comput Appl Math 236:1282–1295
- Nazemi A (2012) A dynamic system model for solving convex nonlinear optimization problems. Commun Nonlinear Sci Numer Simul 17:1696–1705
- Nazemi A (2013) Solving general convex nonlinear optimization problems by an efficient neurodynamic model. Eng Appl Artif Intell 26:685–696
- Nazemi A (2019) A new collaborate neuro-dynamic framework for solving convex second order cone programming problems with an application in multi-fingered robotic hands. Appl Intell 49:3512–3523
- Nazemi A, Effati S (2013) An application of a merit function for solving convex programming problems. Comput Ind Eng 66:212–221
- Nazemi A, Mortezaee M (2019) A new gradient-based neural dynamic framework for solving constrained min-max optimization problems with an application in portfolio selection models. Appl Intell 49:396–419
- Nazemi A, Omidi F (2012) A capable neural network model for solving the maximum flow problem. J Comput Appl Math 236:3498–3513
- Nazemi A, Sabeghi A (2019) A novel gradient-based neural network for solving convex second-order cone constrained variational inequality problems. J Comput Appl Math 347:343–356
- Nikseresht A, Nazemi A (2018) A novel neural network model for solving a class of nonlinear semidefinite programming problems. J Comput Appl Math 338:69–79
- Nikseresht A, Nazemi A (2019) A novel neural network for solving semidefinite programming problems with some applications. J Comput Appl Math 350:309–323
- Oberdiecka R, Pistikopoulos EN (2016) Multi-objective optimization with convex quadratic cost functions: a multi-parametric programming approach. Comput Chem Eng 85:36–39
- Pan A, Shen B, Wang L (2022) Ensemble of resource allocation strategies in decision and objective spaces for multiobjective optimization. Inf Sci, available online 13 May
- Papalexandri KP, Dimkou TI (1998) A parametric mixed-integer optimization algorithm for multiobjective engineering problems involving discrete decisions. Ind Eng Chem Res 37(5):1866–1882
- Rizk-Allah RM, Abo-Sinna MA (2017) Integrating reference point, Kuhn-Tucker conditions and neural network approach for multi-objective and multi-level programming problems. Opsearch 54:663–683
- Rodriguez-Vazquez A, Dominguez-Castro R, Rueda A, Huertas JL, Sanchez-Sinencio E (1990) Nonlinear switched-capacitor neural networks for optimization problems. IEEE Trans Circuits Syst 37:384–397
- Ruzika S, Wiecek MM (2005) Approximation methods in multiobjective programming. J Optim Theory Appl 126:473–501
- De Santis M, Eichfelder G (2021) A decision space algorithm for multiobjective convex quadratic integer optimization. Comput Oper Res 134:105396
- Shang Y, Yu B (2011) A constraint shifting homotopy method for convex multi-objective programming. J Comput Appl Math 236:640–646
- Shao L, Ehrgott M (2008) Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Math Methods Oper Res 68:257–276
- Shavazipour B, Lopez-Ibanez M, Miettinen K (2021) Visualizations for decision support in scenario-based multiobjective optimization. Inf Sci 578:1–21
- Xia Y (2004) An extended neural network for constrained optimization. Neural Comput 16:863–883
- Xia Y, Wang J (2000) A recurrent neural network for solving linear projection equations. Neural Netw 13:337–350
- Yang Y, Cao J (2008) A feedback neural network for solving convex constraint optimization problems. Appl Math Comput 201:340–350
- Xia Y, Wang J (2004) A recurrent neural network for nonlinear convex optimization subject to nonlinear inequality constraint. IEEE Trans Circuits Syst I Regul Pap 51(7):1385–1394
- Yu PL, Zeleny M (1976) Linear multiparametric programming by multicriteria simplex method. Manage Sci 23(2):159–170
- Zerfa L, Chergui MEA (2022) Finding non-dominated points for multiobjective integer convex programs with linear constraints. J Glob Optim
- Zhang J, Ishibuchi H, He L (2022) A classification-assisted environmental selection strategy for multiobjective optimization. Swarm Evol Comput 71:101074
Data Availability Statement
Enquiries about data availability should be directed to the authors.

