PLOS One. 2020 Dec 17;15(12):e0243926. doi: 10.1371/journal.pone.0243926

Evolutionary algorithm using surrogate models for solving bilevel multiobjective programming problems

Yuhui Liu 1,#, Hecheng Li 2,*,#, Hong Li 3
Editor: Weinan Zhang4
PMCID: PMC7746194  PMID: 33332433

Abstract

A bilevel programming problem with multiple objectives at the leader’s and/or follower’s levels, known as a bilevel multiobjective programming problem (BMPP), is extraordinarily hard because it accumulates the computational complexity of both hierarchical structures and multiobjective optimisation. As a strongly NP-hard problem, the BMPP incurs a significant computational cost in obtaining non-dominated solutions at both levels, and few studies have addressed this issue. In this study, an evolutionary algorithm is developed using surrogate optimisation models to solve such problems. First, a dynamic weighted sum method is adopted to address the follower’s multiple objectives, decomposing the follower’s problem into several single-objective ones. Next, for each of the leader’s variable values, the optimal solutions to the transformed follower’s programs are approximated by adaptively improved surrogate models instead of solving the follower’s problems directly. Finally, these techniques are embedded in MOEA/D, by which the leader’s non-dominated solutions can be obtained. In addition, a heuristic crossover operator is designed using gradient information in the evolutionary procedure. The proposed algorithm is executed on computational examples including linear and nonlinear cases, and the simulation results demonstrate the efficiency of the approach.

Introduction

Problem models

The bilevel programming problem (BLPP), a hierarchical optimisation problem, has a nested optimisation structure because of a leader and a follower. In a BLPP, optimisation procedures are executed successively at both the leader’s and follower’s levels. The upper-level optimisation is provided by the leader, whereas the lower-level optimisation is controlled by the follower. The related problems are called the upper-level/leader’s problem and lower-level/follower’s problem. Both the leader’s and follower’s problems have their own decision variables, constraints and objective functions [1]. The general BLPP can be formulated as follows:

\[
\begin{cases}
\min\limits_{x} \tilde{F}(x,y) \\
\text{s.t.}\ G_k(x,y) \le 0,\ k = 1,2,\dots,K \\
\qquad \min\limits_{y} \tilde{f}(x,y) \\
\qquad \text{s.t.}\ g_j(x,y) \le 0,\ j = 1,2,\dots,J
\end{cases} \tag{1}
\]

where $x = (x_1, \dots, x_n)$ are the leader’s variables, $y = (y_1, \dots, y_m)$ the follower’s variables, $\tilde{F}: R^{n+m} \to R$ the leader’s objective function, $\tilde{f}: R^{n+m} \to R$ the follower’s objective function, $G_k: R^{n+m} \to R$, $k = 1, 2, \dots, K$, the leader’s constraints, and $g_j: R^{n+m} \to R$, $j = 1, 2, \dots, J$, the follower’s constraints.

When the number of objectives in the leader’s and/or follower’s problem exceeds one, the problem is called a bilevel multiobjective programming problem (BMPP). The BMPP is one of the most difficult optimisation problems because it accumulates all the computational costs of hierarchical and multiobjective optimisation. Consequently, it is strongly NP-hard. The BMPP can be formulated as follows:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (F_1(x,y), F_2(x,y), \dots, F_q(x,y)) \\
\text{s.t.}\ G_k(x,y) \le 0,\ k = 1,2,\dots,K \\
\qquad \min\limits_{y} f(x,y) = (f_1(x,y), f_2(x,y), \dots, f_p(x,y)) \\
\qquad \text{s.t.}\ g_j(x,y) \le 0,\ j = 1,2,\dots,J
\end{cases} \tag{2}
\]

Here, $a \le x \le b$, where $a$ and $b$ are the lower and upper bounds of $x$, respectively. In general, the optimisation procedure of problem (2) can be described as follows. The leader first selects a strategy $x$ to optimise his objective, and the follower then reacts to the leader’s selection by providing a group of non-dominated solutions $y_i$ to the follower’s problem. The pairs $(x, y_i)$ are then used as bilevel feasible solutions to (2). When the leader selects another strategy, the related feasible point pairs can be obtained. Solving (2) amounts to determining a well-distributed set of non-dominated solutions for the leader’s objective among all bilevel feasible solutions.

Unlike BLPPs with a single objective at each level, the BMPP incurs an additional computational cost for the selection of the follower’s non-dominated solutions. In addition, the selection of Pareto solutions in the leader’s problem renders the BMPP much harder than the single objective case.

Related work

For problem (1), BLPPs have been extensively applied in economic management and engineering, urban traffic and transportation, supply chain planning, resource assignment, engineering design, structural optimisation, and game-playing strategies. For example, Zhu and Guo [2] proposed a BLPP with a maxmin or minmax operator at the follower’s level for a manufacturer, where the manufacturer plans to produce multiple short-life-cycle products using one-shot decision theory. The model was solved using commonly used optimisation methods. Based on a bilevel complementarity model, Nasrolahpour et al. [3] developed an energy storage system decision tool for a merchant price-maker that can determine the most advantageous trading behaviour in a pool-based market. More real-world applications are summarised in [4–8].

As the applications of BLPPs become increasingly extensive and diverse, researchers have focused on developing efficient solution strategies for such problems. However, owing to the intrinsic non-convexity and non-differentiability of BLPPs, a general BLPP is always complex and extremely challenging to solve using canonical optimisation methods that involve gradient information. Thus far, only a limited number of BLPPs can be solved effectively, such as linear [9–13] and convex quadratic BLPPs [14–16]. Liu et al. [10] used a new variant of the penalty method and Karush-Kuhn-Tucker (K-K-T) conditions to solve weak linear BLPPs. Franke et al. [16] used K-K-T conditions and the optimal value approach to transform a bilevel convex programming problem into a single-level optimisation problem, and introduced M-stationarity for mathematical programs with complementarity constraints in Banach spaces. For most general BLPPs, only local optima can be obtained using these gradient approaches. To solve BLPPs effectively, another algorithmic framework, the swarm optimisation method, has been widely adopted. This method is based on population search technology and affords good global convergence. Evolutionary algorithms [17–19], as representative swarm optimisation techniques, have been widely adopted to develop various bilevel optimisation algorithms over the past decades. For example, Sinha et al. [19] presented a single-level reduction of BLPPs using approximate K-K-T conditions and embedded the neighbourhood measure idea into an evolutionary algorithm. Based on a new constraint-handling scheme, Wang et al. [20] proposed an evolutionary algorithm using K-K-T conditions. In Wang’s method, the new constraint-handling scheme enables individuals to satisfy linear constraints.

For problem (2), even though it is extremely difficult to design efficient solution approaches for the BMPP, a large number of practical applications stimulate research on theoretical results and algorithmic design. Guo and Xu [21] developed a BMPP model to study the seismic risk of transportation system reconstruction in large construction projects, and fuzzy random variable transformation and fuzzy variable decomposition methods were proposed to solve the model. Brian et al. [22] proposed a BMPP model for coordinating multiple design problems according to conflicting criteria. The design of a hybrid vehicle layout was expressed as a two-stage decomposition problem including a vehicle class and a battery class, and a multiobjective decomposition algorithm was developed. Alves et al. [23] developed a particle swarm optimisation algorithm to solve linear BMPPs with multiple objectives at the leader’s level. Chakraborti et al. [24] investigated environmental-economic power generation and dispatch problems using the BMPP.

The existing BMPPs can be classified into three categories: 1) only the leader’s problem involves multiobjective optimisation [25, 26]; 2) only the follower’s problem includes multiobjective optimisation [27–29]; and 3) both levels involve multiobjective optimisation [30–39]. In fact, once the non-dominated sorting procedure is executed hierarchically, the computational burden of bilevel optimisation becomes significant. Consequently, only a few efficient approaches exist for addressing linear or nonlinear BMPPs.

For linear BMPPs, Calice et al. [34] first converted the linear BMPP into two multiobjective problems, and then used some specific optimisation conditions to prove that the common solutions to the two multiobjective problems are optimal for the original problem. Kirlik et al. [35] proposed a global optimisation method for discrete linear BMPPs. However, this method is computationally intensive. Liu et al. [36] investigated a class of pessimistic semivectorial BLPPs, and used scalarisation to transform the original problem into a scalar objective optimisation problem. Subsequently, the authors used the generalised differentiation calculus of Mordukhovich to establish optimality conditions for linear multiobjective problems. Lv and Wan [37] used the duality gap as a penalty on the leader’s objective, and the follower’s problem was transformed into a single-objective case by adopting a weighted sum scheme. Alves [25] used a multiobjective particle swarm optimisation algorithm to solve linear BMPPs with multiple objective functions at the leader’s level.

Only a few studies have been reported on nonlinear BMPPs. Deb and Sinha [29] presented an evolutionary-local-search algorithm for solving nonlinear BMPPs. In that study, the mapping method was used for the first time to solve the follower’s programming problems. Jia et al. [30] used genetic algorithms to solve BMPPs, in which a uniform design was used to generate the weights and the crossover and mutation operators. In addition, the follower’s objectives were converted into a single objective function by the weighted sum method, and MOEA/D was used to solve the leader’s multiobjective convex programming problem. Moreover, based on the sensitivity theorem, an iterative method was used to solve a class of nonlinear BMPPs [33]. Zhang et al. [38] presented an improved particle swarm optimisation for solving BMPPs.

Research motivation

As mentioned above, because BMPPs have high computational complexity, only a few efficient solution methods exist for general BMPPs, and most of the existing approaches have been developed to solve cases where only the leader’s problem is multiobjective. When the follower’s problem involves multiple objectives, the non-dominated procedure can cause a large amount of computation. In addition, recall that in BLPPs, the optimisation procedure of the follower’s problem is frequently executed to obtain bilevel feasible points, which can increase the computational cost. Consequently, if one intends to develop an efficient approach, two key issues must be addressed. One is to reduce the computational cost caused by the follower’s optimisation procedure, whereas the other is to avoid excessive dominance comparisons among individuals. Hence, the weighted sum method is often adopted to remove non-dominated sorting procedures [40], and K-K-T conditions are applied to transform BLPPs into single-level problems. Furthermore, either MOEA/D [41, 42] or NSGA-II [43, 44] can be used to solve multiobjective optimisation problems. However, weighted sum and K-K-T transformation techniques cannot perform the computation effectively when addressing the follower’s problems. In [26], Li et al. used evolutionary algorithms to solve linear BMPPs in which the leader’s problem is multiobjective. In Li’s method, the programming problem was first transformed into a single-level multiobjective problem by K-K-T conditions; subsequently, the multiobjective problem was solved via the weighted sum method, the Tchebycheff approach and the NSGA-II method, separately. Finally, the computational results obtained using the three approaches were compared. Sinha et al. [31] presented an approximated set-valued mapping method to solve the follower’s problem. This method can effectively reduce the amount of computation caused by the follower’s optimisation. However, it is complicated to establish multiple quadratic approximation functions between the variables of the leader’s and follower’s problems, and the methods used in [26] and [31] require a high computational cost.

Motivated by both weighted aggregation and approximate solution methods, in this study, an evolutionary algorithm using surrogate models was developed to solve BMPPs. Unlike the existing algorithms, the proposed algorithm is characterised as follows. First, a weighted aggregation method using a uniform design is adopted to convert the follower’s problem into several single objective ones. Second, surrogate optimisation models and interpolation functions are used to replace the solution functions of the follower’s problem and updated periodically using new sample points. Finally, as heuristic information, the gradient direction is embedded into genetic operators to produce potentially high-quality offspring.

Basic notions

Some basic definitions for problem (2) are summarised as follows:

Let $B_1 = \{(x,y) \mid G_k(x,y) \le 0,\ x \in R^n,\ y \in R^m,\ k = 1, 2, \dots, K\}$ and $B_2 = \{(x,y) \mid g_j(x,y) \le 0,\ x \in R^n,\ y \in R^m,\ j = 1, 2, \dots, J\}$.

Definition 0.1 (Dominance relation): Vector $F^1 = (F^1_1, \dots, F^1_q)$ is dominated by vector $F^2 = (F^2_1, \dots, F^2_q)$, denoted by $F^2 \preceq F^1$, if $F^2_i \le F^1_i$ for all $i = 1, 2, \dots, q$ and there exists at least one $j \in \{1, 2, \dots, q\}$ such that $F^2_j < F^1_j$.

If the objective value $\breve{F}(x^1)$ of $x^1$ is dominated by the objective value $\breve{F}(x^2)$ of $x^2$, we also denote the relation by $x^2 \preceq x^1$. After a decision $x \in R^n$ is made by the leader, the optimal solution set of the follower’s problem is defined as follows:

\[
O_x = \{\, y \in R^m : \nexists\, y' \in R^m,\ f(x,y') \preceq f(x,y) \,\}
\]

Let $Q = \{(x,y) \in B_1 \cap B_2,\ y \in O_x\}$. This set is known as the inducible region of the BMPP.

Definition 0.2 (Non-dominated solution set): For a BMPP, the non-dominated solution set $P^*$ for the leader’s problem is defined as:

\[
P^* = \{\, (x,y) \in Q : \nexists\, (x',y') \in Q,\ F(x',y') \preceq F(x,y) \,\}
\]

Definition 0.3 (Pareto front): For a BMPP, the Pareto front $PF_u^*$ for the leader’s problem is defined as:

\[
PF_u^* = \{\, F(x,y) : (x,y) \in P^* \,\}
\]
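As an illustration of Definitions 0.1 and 0.2, the dominance test and the non-dominated filtering it induces can be sketched in a few lines (a toy minimisation example of our own; the helper names `dominates` and `non_dominated` are not from the paper):

```python
import numpy as np

def dominates(f2, f1):
    """True if objective vector f2 dominates f1 (minimisation):
    f2 <= f1 componentwise, strictly smaller in at least one component."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f2 <= f1) and np.any(f2 < f1))

def non_dominated(points):
    """Keep only the objective vectors not dominated by any other one."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = non_dominated(pts)   # (3.0, 3.0) is dominated by (2.0, 2.0)
```

The filtering is quadratic in the number of points; MOEA/D and NSGA-II use more refined bookkeeping, but the dominance relation itself is exactly this componentwise test.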

According to the above definitions, BMPPs can be equivalently written as:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (F_1(x,y), F_2(x,y), \dots, F_q(x,y)) \\
\text{s.t.}\ (x,y) \in Q
\end{cases} \tag{3}
\]

Assumptions:

  • (A1)

    For each x taken in B1B2 by the leader, the follower has room to react, that is to say, Ox ≠ Ø.

  • (A2)

$F(x,y)$ is differentiable in $x$, and $f(x,y)$ and $g(x,y)$ are differentiable and convex in $y$ for fixed $x$.

Here, A1 is a common assumption in BLPPs, which makes the BLPP well-posed. A2 is presented to conveniently utilise gradient information.

Algorithm design

Transformation of the follower’s problem

BMPPs will incur a high computational cost if the follower’s problem is addressed as a general multiobjective optimisation. In the proposed approach, the solution of the follower’s problem is carried out in two steps. First, the multiobjective problem is converted into several single-objective problems by a weighted sum of the objectives. Subsequently, these converted problems are solved using simplified surrogate models that efficiently decrease the computational cost; this is presented in the next section.

1). The uniform design of weights:

Uniform design and spherical coordinates are used to generate weights uniformly distributed in $[0, \frac{\pi}{2}]^{p-1}$.

As a classical experimental design method, uniform design has become popular since the 1980s. It was originally developed to obtain design points that are uniformly distributed over the experimental domain [45, 46]. A uniform design array for $k$ factors with $q$ levels and $H$ combinations is often denoted by $L_H(q^k)$. For example, $L_{16}(4^5)$ involves five factors, and each factor has four levels. Therefore, $1024\ (= 4^5)$ combinations exist. However, in a uniform design, one must select only 16 combinations from those 1024 possible combinations. The $H$ selected combinations can be represented by a uniform matrix $U(n, H) = [U_{i\ell}]_{H \times n}$, where $U_{i\ell}$ is the level of the $\ell$th factor in the $i$th combination. In this subsection, we use the concept of uniform design to generate uniformly distributed points in $[0, \frac{\pi}{2}]^{p-1}$. For a closed and bounded set $\varphi \subset R^{p-1}$, the main purpose of the uniform design is to sample a small group of points from the set $\varphi$ such that the sampled points are uniformly scattered in $\varphi$. Herein, we consider only sample points in the set $[0, \frac{\pi}{2}]^{p-1}$. A brief description of the uniform design on this set is presented as follows.

For any point θ = (θ1, θ2, …, θp−1) in φ, a hyper-rectangle between 0 and θ, denoted by φ(θ), can be defined as [40]:

\[
\varphi(\theta) = \{ (r_1, r_2, \dots, r_{p-1}) \mid 0 \le r_i \le \theta_i,\ i = 1, 2, \dots, p-1 \} \tag{4}
\]

For a set of $v$ points on $\varphi$, we assume that $v(\theta)$ points lie in the hyper-rectangle $\varphi(\theta)$. Therefore, the ratio of points in the hyper-rectangle $\varphi(\theta)$ is $\frac{v(\theta)}{v}$, the volume of the whole domain is $(\frac{\pi}{2})^{p-1}$, and the fraction of the volume occupied by the hyper-rectangle is $\frac{\theta_1 \theta_2 \cdots \theta_{p-1}}{(\pi/2)^{p-1}}$. The uniform design on $\varphi$ then amounts to determining $v$ points on $\varphi$ such that the following discrepancy is minimised:

\[
\sup_{\theta \in \varphi} \left| \frac{v(\theta)}{v} - \frac{\theta_1 \theta_2 \cdots \theta_{p-1}}{(\pi/2)^{p-1}} \right| \tag{5}
\]

To determine uniformly scattered points on φ, we used the good-lattice-point method [47] to generate a set of v uniformly scattered points on φ, denoted by c(p − 1, v). The procedure is as follows: a v × (p − 1) uniform array is first generated:

\[
U(p-1, v) = [U_{\ell i}]_{v \times (p-1)} \tag{6}
\]

where $U_{\ell i} = (\ell \sigma^{i-1} \bmod v) + 1$, $i = 1, 2, \dots, p-1$, $\ell = 1, 2, \dots, v$. Then, each row of matrix $U(p-1, v)$ defines a row $\theta_\ell = (\theta_{\ell 1}, \theta_{\ell 2}, \dots, \theta_{\ell, p-1})$ of $c(p-1, v)$ by

\[
\theta_{\ell i} = \frac{2U_{\ell i} - 1}{2v} \cdot \frac{\pi}{2}, \quad i = 1, 2, \dots, p-1, \quad \ell = 1, 2, \dots, v \tag{7}
\]

Hence, we have

\[
c(p-1, v) = \{ \theta_\ell \mid \ell = 1, 2, \dots, v \} \tag{8}
\]

For example, when $v = 7$ and $p = 5$, Table 1 gives $\sigma = 3$. Thus:

\[
U(p-1, v) = U(4, 7) =
\begin{pmatrix}
2 & 4 & 3 & 7 \\
3 & 7 & 5 & 6 \\
4 & 3 & 7 & 5 \\
5 & 6 & 2 & 4 \\
6 & 2 & 4 & 3 \\
7 & 5 & 6 & 2 \\
1 & 1 & 1 & 1
\end{pmatrix}_{7 \times 4} \tag{9}
\]
\[
c(p-1, v) = \left\{ \tfrac{\pi}{28}(3,7,5,13),\ \tfrac{\pi}{28}(5,13,9,11),\ \tfrac{\pi}{28}(7,5,13,9),\ \tfrac{\pi}{28}(9,11,3,7),\ \tfrac{\pi}{28}(11,3,7,5),\ \tfrac{\pi}{28}(13,9,11,3),\ \tfrac{\pi}{28}(1,1,1,1) \right\} \tag{10}
\]

Table 1. Values of parameter σ for different values of q and M.

  q     M      σ
  5     2-4    2
  7     2-6    3
  11    2-10   7
  13    2      5
  13    3      4
  13    4-12   6
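The good-lattice-point construction of Eqs (6)-(8) is mechanical and can be sketched as follows (a minimal numpy version written for illustration, not the authors' code; it reproduces the $U(4,7)$ and $c(4,7)$ example above):

```python
import numpy as np

def good_lattice_point(p_minus_1, v, sigma):
    """Build the v x (p-1) uniform array U with
    U[l, i] = (l * sigma**(i-1) mod v) + 1, per Eq. (6),
    then map each entry to an angle in (0, pi/2) via
    theta = (2U - 1) / (2v) * pi/2, per Eq. (7)."""
    l = np.arange(1, v + 1).reshape(-1, 1)                 # row index l = 1..v
    powers = np.array([pow(sigma, i, v) for i in range(p_minus_1)])
    U = (l * powers) % v + 1                               # uniform design array
    theta = (2 * U - 1) / (2 * v) * (np.pi / 2)            # scattered angles
    return U, theta

# Reproduce the paper's example: v = 7, p = 5, sigma = 3 (Table 1)
U, theta = good_lattice_point(4, 7, 3)
print(U[0])   # -> [2 4 3 7], the first row of U(4, 7) in Eq. (9)
```

The first row of `theta` equals $\frac{\pi}{28}(3, 7, 5, 13)$, matching the first element of $c(4,7)$ in Eq. (10).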

2). Weight vector

We can obtain v weight vectors as follows:

\[
w^\ell = (w^\ell_1, w^\ell_2, \dots, w^\ell_p), \quad \ell = 1, 2, \dots, v \tag{11}
\]

where if p = 2, we have

\[
\begin{cases}
w^\ell_1 = \cos^2 \theta_{\ell, p-1} \\
w^\ell_2 = \sin^2 \theta_{\ell, p-1}
\end{cases}, \quad \ell = 1, 2, \dots, v \tag{12}
\]

otherwise, if p > 2, we also have

\[
\begin{cases}
w^\ell_1 = \cos^2 \theta_{\ell 1} \\
w^\ell_2 = \sin^2 \theta_{\ell 1} \cos^2 \theta_{\ell 2} \\
w^\ell_3 = \sin^2 \theta_{\ell 1} \sin^2 \theta_{\ell 2} \cos^2 \theta_{\ell 3} \\
w^\ell_4 = \sin^2 \theta_{\ell 1} \sin^2 \theta_{\ell 2} \sin^2 \theta_{\ell 3} \cos^2 \theta_{\ell 4} \\
\qquad \vdots \\
w^\ell_{p-1} = \sin^2 \theta_{\ell 1} \sin^2 \theta_{\ell 2} \cdots \sin^2 \theta_{\ell, p-2} \cos^2 \theta_{\ell, p-1} \\
w^\ell_p = \sin^2 \theta_{\ell 1} \sin^2 \theta_{\ell 2} \cdots \sin^2 \theta_{\ell, p-2} \sin^2 \theta_{\ell, p-1}
\end{cases}, \quad \ell = 1, 2, \dots, v \tag{13}
\]
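The mapping (12)-(13) from angles to weights can be sketched as follows (an illustrative numpy version with a hypothetical angle row; since $\cos^2\theta + \sin^2\theta = 1$ telescopes through the products, the resulting weights are non-negative and sum to one):

```python
import numpy as np

def angles_to_weights(theta_row):
    """Map p-1 angles to p weights on the simplex using the squared
    spherical coordinates of Eq. (13)."""
    c2 = np.cos(theta_row) ** 2
    s2 = np.sin(theta_row) ** 2
    w, prod = [], 1.0
    for i in range(len(theta_row)):
        w.append(prod * c2[i])   # sin^2 th_1 ... sin^2 th_{i-1} cos^2 th_i
        prod *= s2[i]
    w.append(prod)               # last weight: product of all sin^2 terms
    return np.array(w)

theta = np.array([0.3, 0.7, 1.1, 0.2])   # a hypothetical row (p = 5)
w = angles_to_weights(theta)
print(w.sum())                           # sums to 1 up to rounding
```

Applying this to each row of $c(p-1, v)$ yields the $v$ weight vectors $w^\ell$ of Eq. (11).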

3). Transformed problems

We use the weight vectors designed above to address the follower’s multiple objectives, transforming the follower’s problem of the original problem into several single-objective ones. Then, (2) can be transformed into $v$ BMPPs, each with a single objective in its follower’s problem:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (F_1(x,y), F_2(x,y), \dots, F_q(x,y)) \\
\text{s.t.}\ G_k(x,y) \le 0,\ k = 1,2,\dots,K \\
\qquad \min\limits_{y} f^\ell(x,y) = w^\ell_1 f_1(x,y) + w^\ell_2 f_2(x,y) + \dots + w^\ell_p f_p(x,y), \quad \ell = 1, 2, \dots, v \\
\qquad \text{s.t.}\ g_j(x,y) \le 0,\ j = 1,2,\dots,J
\end{cases} \tag{14}
\]

The follower’s problems of (14) are:

\[
\begin{cases}
\min\limits_{y} f^\ell(x,y) = w^\ell_1 f_1(x,y) + w^\ell_2 f_2(x,y) + \dots + w^\ell_p f_p(x,y) \\
\text{s.t.}\ G_k(x,y) \le 0,\ k = 1,2,\dots,K, \quad \ell = 1, 2, \dots, v \\
\quad\ \ g_j(x,y) \le 0,\ j = 1,2,\dots,J
\end{cases} \tag{15}
\]

Surrogate models

In BLPPs/BMPPs, the follower’s problem must be optimised for each of the leader’s variable values. This procedure can result in a significant amount of computation when solving BLPPs/BMPPs, particularly when the problem is large. According to the optimisation procedure, the optimal solutions to the follower’s problem are always determined by the leader’s variables. This means that the optimal solution of the follower’s problem is a function of the leader’s variables. However, this function is often implicit and cannot be obtained analytically. In the proposed approach, we use interpolation functions as surrogate models to estimate the optimal solutions to the follower’s problems. Therefore, some values $x_i$ of the leader’s variables are first taken, and the follower’s problems are then optimised individually. The optimal solutions are denoted by $y_i$; the point pairs $(x_i, y_i)$ are regarded as interpolation sample points and used to generate interpolation functions.

The interpolation function performs well in fitting unknown functions [48] and can efficiently decrease the number of times the follower’s problems must be solved. In the proposed algorithm, the cubic spline interpolation method is adopted when the interpolation function is one-dimensional, whereas the linear interpolation method is used in the other cases.

As an example, on the one-dimensional coordinate plane, for w sample points xi, i = 1, 2, …, w, we constructed an interpolation function y = ψ(x) to approximate the optimal solution function of the follower’s problem. Therefore, y = ψ(x*) can be regarded as the approximate solution to the follower’s problem when x is fixed at x*. The relationship between the approximate and real solutions is shown in Fig 1.

Fig 1. Interpolation approximation and the real optimal solutions.

Fig 1

In Fig 1, the solid red line represents the real solutions to the follower’s problem, whereas the dotted yellow line represents the approximate points provided by interpolation.

For interpolation functions, denser sample interpolation points result in a better approximation. It is noteworthy that for these interpolation points, each follower’s variable value must be optimal when the leader’s components are fixed. In the proposed algorithm, the interpolation functions can be obtained as follows. First, an initial population of $N$ points is randomly generated in the leader’s variable space, and these points are denoted by $x_i$, $i = 1, 2, \dots, N$. Subsequently, for each leader’s value among $x_i$, $i = 1, 2, \dots, N$, the optimal solutions to the follower’s problems (15) are denoted by $y_{\ell i}$, $i = 1, 2, \dots, N$, $\ell = 1, 2, \dots, v$. Consequently, $v \times N$ point pairs $(x_i, y_{\ell i})$, $i = 1, \dots, N$, $\ell = 1, 2, \dots, v$ can be obtained. These point pairs are used as interpolation nodes to generate $v$ interpolation functions.

\[
\psi^\ell(x) = (\psi^\ell_1(x), \psi^\ell_2(x), \dots, \psi^\ell_m(x)), \quad \ell = 1, 2, \dots, v \tag{16}
\]

where

\[
\begin{cases}
y^\ell_1 \approx \psi^\ell_1(x_1, x_2, \dots, x_n) \\
y^\ell_2 \approx \psi^\ell_2(x_1, x_2, \dots, x_n) \\
\qquad \vdots \\
y^\ell_m \approx \psi^\ell_m(x_1, x_2, \dots, x_n)
\end{cases}, \quad \ell = 1, 2, \dots, v \tag{17}
\]

i.e. each $y^\ell_j$, $j = 1, \dots, m$, $\ell = 1, \dots, v$, is a function of $x$, and $y^\ell(x) = (y^\ell_1, y^\ell_2, \dots, y^\ell_m)$, $\ell = 1, \dots, v$. According to the above interpolation method, we can obtain the approximate optimal solutions to the follower’s problems (15). In the proposed algorithm, we periodically update the interpolation sample points to improve the interpolation functions.
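The idea can be sketched on a toy one-dimensional case (our own illustrative example, not one of the paper's test problems): if the follower's problem were $\min_y (y - x^2)^2$, its exact solution function would be $y^*(x) = x^2$, and a surrogate fitted to a few samples replaces further lower-level optimisations. For simplicity the sketch uses piecewise-linear interpolation via `numpy.interp`; the paper adopts a cubic spline in the one-dimensional case.

```python
import numpy as np

def follower_optimum(x):
    """Exact solution of the toy follower problem min_y (y - x**2)**2,
    used here only to generate interpolation samples."""
    return x ** 2

# Sample the leader's variable and 'solve' the follower's problem at each sample
xs = np.linspace(0.0, 2.0, 21)
ys = follower_optimum(xs)

def surrogate(x_new):
    """Surrogate psi(x) of the follower's solution function, built from
    the sample pairs (x_i, y_i) as in Eqs (16)-(17)."""
    return np.interp(x_new, xs, ys)

# At a new leader value, the surrogate replaces a full lower-level optimisation
approx = surrogate(1.05)
exact = follower_optimum(1.05)
```

In SMEA the samples themselves come from actually optimising (15), and the surrogate is periodically refitted as new exact solutions become available (Step 6 below uses exactly this refresh).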

Proposed algorithm

When solving BMPPs, the transformation of the follower’s problem reduces the computational complexity of the follower’s problems, and the surrogate models save the cost of repeatedly solving them. Combining the above methods, in this paper an evolutionary algorithm [49–51] based on surrogate models, denoted by SMEA, is developed to solve BMPPs. Fig 2 gives the flowchart of SMEA.

Fig 2. Basic flowchart of the proposed algorithm.

Fig 2

The specific procedure is described as follows:

Step 0 (Transformation of the follower’s problems)

The uniformly designed weight vectors described above are used to deal with the follower’s multiple objectives, turning the follower’s problem into $v$ single-objective ones. As a result, we obtain $v$ BMPPs with different follower’s objectives.

Step 1 (Initial population)

Randomly generate $N$ initial points $x_i = a_i + (b_i - a_i) \cdot rand$, $i = 1, \dots, N$, where $b_i$ and $a_i$ are the upper and lower bounds of $x_i$, respectively, and $rand$ is a random number in $[0, 1]$. The initial population $pop(0)$ of size $N$ is thus obtained. Set $gen = 0$.

Step 2 (Fitness assessment)

For each $x_i$, solve the follower’s problems (15) to obtain the optimal solutions $y_{\ell i}$, $i = 1, \dots, N$, $\ell = 1, 2, \dots, v$; the values of the leader’s objectives are taken as $F_k(x_i, y_{\ell i})$, $i = 1, \dots, N$, $\ell = 1, 2, \dots, v$, $k = 1, 2, \dots, q$. Construct the interpolation functions (surrogate models) as described above. Use MOEA/D to deal with the leader’s problem, a multiobjective optimisation, and put the non-dominated solutions in the population $pop(gen)$ into the set $D_1$.

Step 3 (Crossover)

Note that $F_k(x,y)$, $k = 1, 2, \dots, q$ are differentiable in $x$. To obtain a potential descent direction, we take the negative gradient direction of each leader’s function $F_k(x_r, y_r)$.

  • a. Negative gradient vector:
    \[
    -\nabla F_k(x_r, y_r) = -\left( \frac{\partial F_k(x_r, y_r)}{\partial x_1}, \frac{\partial F_k(x_r, y_r)}{\partial x_2}, \dots, \frac{\partial F_k(x_r, y_r)}{\partial x_n} \right), \quad k = 1, 2, \dots, q
    \]
  • b. Normalise the direction:
    \[
    p_k = \frac{-\nabla F_k(x_r, y_r)}{\|\nabla F_k(x_r, y_r)\|} = -\left( \frac{\partial F_k}{\partial x_1}, \frac{\partial F_k}{\partial x_2}, \dots, \frac{\partial F_k}{\partial x_n} \right) \Bigg/ \left( \sum_{j=1}^{n} \left( \frac{\partial F_k}{\partial x_j} \right)^2 \right)^{\frac{1}{2}}, \quad k = 1, 2, \dots, q
    \]
    Set
    \[
    p = \sum_{k=1}^{q} \tau_k p_k
    \]
    Here, $\tau_k \in (0,1)$, $k = 1, \dots, q$, are randomly generated and satisfy $\sum_{k=1}^{q} \tau_k = 1$.
  • c. The crossover operator is designed as follows:
    \[
    x_r' = x_r + s \cdot rand \cdot p
    \]
    where $s$ is positive. In fact, the operator can be extended to the non-differentiable case of the leader’s functions. One only needs to replace the gradient by an approximate gradient:
    \[
    d_{ki} = \frac{F_k(x_1, \dots, x_i + \Delta d_i, \dots, x_n) - F_k(x_1, x_2, \dots, x_i, \dots, x_n)}{\Delta d_i}, \quad k = 1, 2, \dots, q
    \]
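Step 3 can be sketched as follows (an illustrative Python version with two hypothetical leader objectives of our own; the gradients are approximated by forward differences, as the step itself suggests for the non-differentiable case):

```python
import numpy as np

def neg_grad_direction(F, x, h=1e-6):
    """Normalised negative forward-difference gradient of one leader
    objective F at x (Steps 3a-3b)."""
    g = np.array([(F(x + h * e) - F(x)) / h for e in np.eye(len(x))])
    return -g / np.linalg.norm(g)

def gradient_crossover(x, objectives, s=8.0, rng=np.random.default_rng(0)):
    """Combine the per-objective descent directions with random convex
    weights tau and step along the mixture (Step 3c): x' = x + s*rand*p."""
    dirs = [neg_grad_direction(F, x) for F in objectives]
    tau = rng.random(len(dirs))
    tau /= tau.sum()                       # tau_k in (0,1), sum to 1
    p = sum(t * d for t, d in zip(tau, dirs))
    return x + s * rng.random() * p

# Two toy leader objectives (hypothetical, for illustration only)
F1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
F2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2
child = gradient_crossover(np.array([3.0, 3.0]), [F1, F2], s=1.0)
```

Because `p` is a convex combination of normalised descent directions, a modest step from a point far from both minima tends to improve both objectives at once, which is the heuristic the operator relies on.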

Step 4 (Mutation)

Gaussian mutation is adopted. Suppose that p¯ is an individual chosen for mutation, then the offspring Op¯ of p¯ is generated as follows:

\[
O_{\bar{p}} = \bar{p} + \Delta, \quad \Delta \sim N(0, \sigma^2)
\]

Step 5 (Offspring population pop′(gen))

For the offspring set $(x_{o1}, x_{o2}, \dots, x_{o\lambda})$ generated by the crossover and mutation operations, the interpolation functions are used to obtain the approximate solutions $(y^\ell_1, y^\ell_2, \dots, y^\ell_\lambda)$, $\ell = 1, 2, \dots, v$, to (15). An offspring set $pop'(gen)$ of size $v \times \lambda$ is thus obtained, and the values of the leader’s objective functions are $F_k(x_{oi}, y_{\ell i})$, $i = 1, 2, \dots, \lambda$, $\ell = 1, 2, \dots, v$, $k = 1, 2, \dots, q$.

Step 6 (Update interpolation function)

Use MOEA/D to select non-dominated solutions from the set $\{(x_{oi}, y_{\ell i}),\ i = 1, 2, \dots, \lambda,\ \ell = 1, 2, \dots, v\}$, and put these non-dominated solutions into the set $D_2$. For each point in $D_2$, update the optimal solutions $y_{\ell i}$ by solving the follower’s problems exactly, and then update the interpolation functions with the exact solutions obtained.

Step 7 (Update Non-dominated solution set D1)

Set $D_1 = D_1 \cup D_2$ and $D_2 = \emptyset$. Remove the dominated solutions in $D_1$. Once the size of $D_1$ exceeds the pre-determined threshold, the crowding distance method can be applied to delete redundant points, as done in NSGA-II.

Step 8 (Selection)

Select the best $N$ individuals from the set $pop'(gen) \cup D_1$ to form the next generation population $pop(gen + 1)$.

Step 9 (Termination condition)

If the stopping criterion is satisfied, then stop and output the non-dominated solutions set D1; otherwise, set gen = gen + 1, go to Step 3.

Simulation results

Test examples

To demonstrate the feasibility and efficiency of the proposed algorithm, we tested SMEA on 20 examples taken from the literature [26, 31, 32, 52, 53]. In [26] and [53], two EAs were developed. Although these two algorithms were proposed only for BMPPs in which the follower’s problem is single-objective, as a special case they can be taken as compared algorithms to demonstrate the performance of the proposed SMEA.

According to the number of objectives, we divided the test examples into two categories. The first (Examples 1-13) is the bilevel multiobjective case, that is, at least one of the leader’s and follower’s problems involves multiobjective optimisation. Problems of this type are mainly used to test the performance of the weighted sum method and the interpolation approximation. The other (Examples 14-20) involves only bilevel single-objective optimisation and is used to test the performance of the surrogate models as well as the crossover operator. All 20 examples are presented as follows:

Example 1(F01) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \big( (x + 2y_2 + 3)(3y_1 + 2),\ (2x + y_1 + 2)(y_2 + 1) \big) \\
\text{s.t.}\ 3x + y_1 + 2y_2 \le 5,\ y_1 + y_2 \le 3 \\
\qquad \min\limits_{y} f(x,y) = (y_1 + 1)(x + y_1 + y_2 + 3) \\
\qquad \text{s.t.}\ x + 2y_1 + y_2 \le 2,\ 3y_1 + 2y_2 \le 6,\ x \ge 0,\ y_1 \ge 0,\ y_2 \ge 0
\end{cases}
\]

Example 2(F02) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (-2x,\ -x + 5y) \\
\qquad \min\limits_{y} f(x,y) = -y \\
\qquad \text{s.t.}\ x - 2y \le 4,\ 2x - y \le 24,\ 3x + 4y \le 96,\ x + 7y \le 126, \\
\qquad\qquad -4x + 5y \le 65,\ x + 4y \ge 8,\ x \ge 0,\ y \ge 0
\end{cases}
\]

Example 3(F03) [26]:

\[
\begin{cases}
\max\limits_{x} F(x,y) = (2x_1 - 4x_2 + y_1 - y_2,\ -x_1 + 2x_2 - y_1 + 5y_2) \\
\qquad \max\limits_{y} f(x,y) = 3y_1 + y_2 \\
\qquad \text{s.t.}\ 4x_1 + 3x_2 + 2y_1 + y_2 \le 60,\ 2x_1 + x_2 + 3y_1 + 4y_2 \le 60, \\
\qquad\qquad x_1, x_2, y_1, y_2 \ge 0
\end{cases}
\]

Example 4(F04) [26]:

\[
\begin{cases}
\max\limits_{x} F(x,y) = \big( (y_1 + y_3)(200 - y_1 - y_3),\ (y_2 + y_4)(160 - y_2 - y_4) \big) \\
\text{s.t.}\ x_1 + x_2 + x_3 + x_4 \le 40,\ 0 \le x_1 \le 10,\ 0 \le x_2 \le 5,\ 0 \le x_3 \le 15,\ 0 \le x_4 \le 20 \\
\qquad \min\limits_{y_1, y_2, y_3, y_4} f(x,y) = \big( (y_1 - 4)^2 + (y_2 - 13)^2,\ (y_3 - 35)^2 + (y_4 - 2)^2 \big) \\
\qquad \text{s.t.}\ 0.4y_1 + 0.7y_2 - x_1 \le 0,\ 0.6y_1 + 0.3y_2 - x_2 \le 0, \\
\qquad\qquad 0.4y_3 + 0.7y_4 - x_3 \le 0,\ 0.6y_3 + 0.3y_4 - x_4 \le 0, \\
\qquad\qquad 0 \le y_1 \le 20,\ 0 \le y_2 \le 20,\ 0 \le y_3 \le 40,\ 0 \le y_4 \le 40
\end{cases}
\]

Example 5(F05) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (-x - y,\ x^2 + (y - 10)^2) \\
\text{s.t.}\ 0 \le x \le 15 \\
\qquad \min\limits_{y} f(x,y) = y(x - 30) \\
\qquad \text{s.t.}\ y - x \le 0,\ 0 \le y \le 15
\end{cases}
\]

Example 6(F06) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (8x + 4y_1^2 - 2,\ 4x - 8y_2 + 1) \\
\text{s.t.}\ 1 \le x \le 2 \\
\qquad \min\limits_{y_1, y_2} f(x,y) = (2y_1^3 - x + 7,\ x^2 + 2x + y_2^2 - 5) \\
\qquad \text{s.t.}\ x - y_1 - 1 \le 0,\ x - y_2 \le 0
\end{cases}
\]

Example 7(F07) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \Big( y_1 + y_2^2 + x + \sin^2(y_1 + x),\ \cos(y_2)\,(0.1 + x)\exp\big(-\tfrac{y_1}{0.1 + y_2}\big) \Big) \\
\text{s.t.}\ 0 \le x \le 10 \\
\qquad \min\limits_{y_1, y_2} f(x,y) = \Big( \tfrac{(y_1 - 2)^2 + (y_2 - 1)^2}{4} + \tfrac{y_2 x + (5 - x)^2}{16} + \tfrac{\sin y_2}{10},\ \tfrac{y_1^2 + (y_2 - 6)^2 - 2y_1 x - (5 - x)^2}{80} \Big) \\
\qquad \text{s.t.}\ y_1^2 - y_2 \le 0,\ 5y_1^2 + y_2 \le 10,\ y_2 + \tfrac{x}{6} \le 5,\ y_1 \ge 0
\end{cases}
\]

Example 8(F08) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \Big( (y_1 - 1)^2 + \sum_{i=1}^{13} y_{i+1}^2 + x^2,\ (y_1 - 1)^2 + \sum_{i=1}^{13} y_{i+1}^2 + (x - 1)^2 \Big) \\
\qquad \min\limits_{y} f(x,y) = (y_1 - x)^2 + \sum_{i=1}^{13} y_{i+1}^2 \\
\qquad \text{s.t.}\ -1 \le x, y_i \le 2,\ i = 1, 2, \dots, 14
\end{cases}
\]

Example 9(F09) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \big( (1 - x_1)(1 + x_2^2 + x_3^2)\,y,\ x_1(1 + x_2^2 + x_3^2)\,y \big) \\
\qquad \min\limits_{y} f(x,y) = (1 - x_1)(1 + x_4^2 + x_5^2)\,y \\
\qquad \text{s.t.}\ (1 - x_1)y + \tfrac{1}{2}x_1 y - 1 \ge 0, \\
\qquad\qquad -1 \le x_1 \le 1,\ 1 \le y \le 2,\ -5 \le x_i \le 5,\ i = 2, 3, 4, 5
\end{cases}
\]

Example 10(F10) [26]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \Big( \sum_{i=1}^{13} \exp\big(-\tfrac{x_i}{1 + y_i}\big) + \sum_{i=1}^{13} \sin\big(\tfrac{x_i}{1 + y_i}\big),\ \sum_{i=1}^{13} \exp\big(-\tfrac{y_i}{1 + x_i}\big) + \sum_{i=1}^{13} \sin\big(\tfrac{y_i}{1 + x_i}\big) \Big) \\
\text{s.t.}\ -1 \le x_i \le 1,\ i = 1, 2, \dots, 13 \\
\qquad \min\limits_{y} f(x,y) = \sum_{i=1}^{13} \cos(x_i y_i) + \sum_{i=1}^{13} \sin(x_i - y_i) \\
\qquad \text{s.t.}\ x_i + y_i \le 1,\ -1 \le y_i \le 1,\ i = 1, 2, \dots, 13
\end{cases}
\]

Example 11(F11) [32]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (y_1 - x,\ y_2) \\
\qquad \min\limits_{y} f(x,y) = (y_1,\ y_2) \\
\qquad \text{s.t.}\ x^2 - y_1^2 - y_2^2 \ge 0,\ 1 + y_1 + y_2 \ge 0, \\
\qquad\qquad -1 \le y_1, y_2 \le 1,\ 0 \le x \le 1
\end{cases}
\]

Example 12(F12) [32]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \Big( (y_1 - 1)^2 + \sum_{i=1}^{13} y_{i+1}^2 + x^2,\ (y_1 - 1)^2 + \sum_{i=1}^{13} y_{i+1}^2 + (x - 1)^2 \Big) \\
\qquad \min\limits_{y} f(x,y) = \Big( y_1^2 + \sum_{i=1}^{13} y_{i+1}^2,\ (y_1 - x)^2 + \sum_{i=1}^{13} y_{i+1}^2 \Big) \\
\qquad \text{s.t.}\ -1 \le y_1, y_2, \dots, y_{14}, x \le 2
\end{cases}
\]

Example 13(F13) [31]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \Big( (1 - y_1)\big(1 + \sum_{j=2}^{5} y_j^2\big)x,\ y_1\big(1 + \sum_{j=2}^{5} y_j^2\big)x \Big) \\
\qquad \min\limits_{y} f(x,y) = \Big( (1 - y_1)\big(1 + \sum_{j=6}^{9} y_j^2\big)x,\ y_1\big(1 + \sum_{j=6}^{9} y_j^2\big)x \Big) \\
\qquad \text{s.t.}\ (1 - y_1)x + \tfrac{1}{2}y_1 x - 1 \ge 0, \\
\qquad\qquad -1 \le y_1 \le 1,\ 1 \le x \le 2,\ -9 \le y_j \le 9,\ j = 2, 3, \dots, 9
\end{cases}
\]

Example 14(F14) [52]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = -15x_1 + 5x_2 - 5x_3 + 21x_4 - 25x_5 + 31x_6 - 19x_7 + 32x_8 + 13x_9 - 13x_{10} \\
\qquad\qquad - 32y_1 - 30y_2 - 15y_3 + 12y_4 - 31y_5 - 3y_6 + 15y_7 - 44y_8 - 38y_9 - 5y_{10} \\
\text{s.t.}\ 8x_1 + 6x_2 + 10x_3 + 10x_4 + 4x_5 + 7x_6 + 7x_7 + 7x_8 + 3x_9 + 7x_{10} \\
\qquad + 4y_1 + 2y_2 + 7y_3 + 7y_4 + 10y_5 + 8y_6 + 9y_7 + y_8 + 8y_9 + 2y_{10} \le 65 \\
\quad\ 9x_1 + x_2 + 10x_3 + 5x_4 + 10x_5 + 8x_7 + x_8 + 3x_{10} \\
\qquad + 4y_1 + 5y_2 + 8y_3 + y_4 + 3y_5 + 2y_6 + 10y_7 + 2y_8 + 2y_9 + 2y_{10} \le 11 \\
\quad\ 3x_1 + 3x_2 + x_3 + 8x_4 + 8x_5 + 9x_6 + 8x_7 + 7x_8 + x_9 + 10x_{10} \\
\qquad + 8y_1 + 4y_2 + 3y_3 + y_4 + 6y_5 + 5y_6 + 6y_7 + 9y_8 + 10y_9 + 6y_{10} \le 89 \\
\quad\ 10x_1 + 6x_2 + 10x_3 + x_4 + 10x_5 + 10x_6 + 4x_7 + 9x_9 \\
\qquad + 8y_1 + 7y_2 + 7y_3 + 5y_4 + 2y_5 + 7y_6 + y_7 + 2y_8 + 3y_9 + 5y_{10} \le 85 \\
\quad\ x_i \in Z,\ i = 1, 2, \dots, 10 \\
\qquad \min\limits_{y} f(x,y) = -32y_1 - 30y_2 - 15y_3 + 12y_4 - 31y_5 - 3y_6 + 15y_7 - 44y_8 - 38y_9 - 5y_{10} \\
\qquad \text{s.t.}\ 10x_1 + 4x_2 + 5x_3 + 6x_4 + x_5 + x_6 + 7x_7 + 2x_8 + 5x_9 + x_{10} \\
\qquad\qquad + 8y_1 + 2y_2 + 2y_3 + 9y_4 + 9y_5 + 4y_6 + 2y_7 + 9y_8 + 3y_9 + 8y_{10} \le 19 \\
\qquad\quad 3x_1 + 6x_2 + 8x_3 + 5x_4 + 8x_5 + 6x_6 + 8x_7 + 10x_8 + 10x_9 + 10x_{10} \\
\qquad\qquad + 9y_1 + 8y_2 + 2y_3 + 6y_4 + 6y_5 + 2y_7 + 10y_8 + 9y_9 + 4y_{10} \le 23 \\
\qquad\quad 8x_1 + 10x_3 + 3x_5 + 2x_6 + 4x_7 + x_8 \\
\qquad\qquad + 4y_2 + y_3 + 6y_4 + 3y_5 + 2y_6 + 4y_7 + 5y_8 + 4y_9 + 2y_{10} \le 105 \\
\qquad\quad 8x_1 + x_3 + 3x_4 + 5x_5 + 7x_6 + 9x_8 + 4x_9 + 8x_{10} \\
\qquad\qquad + 4y_1 + 10y_2 + y_3 + y_4 + 5y_5 + y_6 + 5y_8 + y_9 + 4y_{10} \le 106 \\
\qquad\quad y_i \ge 0,\ i = 1, 2, \dots, 10;\quad y_j \in Z,\ j \in J = \{1, 5, 9, 10\}
\end{cases}
\]

Example 15(F15) [32]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = -(200 - x_1 - x_3)(x_1 + x_3) - (160 - x_2 - x_4)(x_2 + x_4) \\
\text{s.t.}\ x_1 + x_2 + x_3 + x_4 \le 40,\ 0 \le x_1 \le 10,\ 0 \le x_2 \le 5,\ 0 \le x_3 \le 15,\ 0 \le x_4 \le 20 \\
\qquad \min\limits_{y} f(x,y) = (y_1 - 4)^2 + (y_2 - 13)^2 + (y_3 - 35)^2 + (y_4 - 2)^2 \\
\qquad \text{s.t.}\ 0.4y_1 + 0.7y_2 \le x_1,\ 0.6y_1 + 0.3y_2 \le x_2,\ 0.4y_3 + 0.7y_4 \le x_3,\ 0.6y_3 + 0.3y_4 \le x_4, \\
\qquad\qquad 0 \le y_1 \le 20,\ 0 \le y_2 \le 20,\ 0 \le y_3 \le 40,\ 0 \le y_4 \le 40
\end{cases}
\]

Example 16(F16) [53]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = -x_1^2 - 3x_2^2 - 4y_1 + y_2^2 \\
\qquad \min\limits_{y} f(x,y) = 2x_1^2 + y_1^2 - 5y_2 \\
\qquad \text{s.t.}\ x_1^2 - 2x_1 + x_2^2 - 2y_1 + y_2 \ge -3,\ x_2 + 3y_1 - 4y_2 \ge 4, \\
\qquad\qquad x_1^2 + 2x_2 \le 4,\ x_i \ge 0,\ y_i \ge 0,\ i = 1, 2
\end{cases}
\]

Example 17(F17) [53]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = -8x_1 - 4x_2 + 4y_1 - 40y_2 - 4y_3 \\
\qquad \min\limits_{y} f(x,y) = x_1 + 2x_2 + y_1 + y_2 + 2y_3 \\
\qquad \text{s.t.}\ y_2 + y_3 - y_1 \le 1,\ 2x_1 - y_1 + 2y_2 - 0.5y_3 \le 1,\ 2x_2 + 2y_1 - y_2 - 0.5y_3 \le 1, \\
\qquad\qquad y_i \ge 0,\ i = 1, 2, 3,\ x_i \ge 0,\ i = 1, 2
\end{cases}
\]

Example 18(F18) [53]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = (x_1 - 1)^2 + 2y_1 - 2x_1 \\
\qquad \min\limits_{y} f(x,y) = (2y_1 - 4)^2 + (2y_2 - 1)^2 + x_1 y_1 \\
\qquad \text{s.t.}\ 4x_1 + 5y_1 + 4y_2 \le 12,\ 4y_2 - 4x_1 - 5y_1 \le -4, \\
\qquad\qquad 4x_1 - 4y_1 + 5y_2 \le 4,\ 4y_1 - 4x_1 + 5y_2 \le 4,\ y_i \ge 0,\ i = 1, 2,\ x_1 \ge 0
\end{cases}
\]

Example 19(F19) [53]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = \sum_{i=1}^{10} (|x_i - 1| + |y_i|) \\
\qquad \min\limits_{y} f(x,y) = \exp\Big( 1 + \tfrac{1}{4000}\sum_{i=1}^{10} (y_i x_i)^2 - \prod_{i=1}^{10} \cos\big(\tfrac{y_i x_i}{\sqrt{i}}\big) \Big) \\
\qquad \text{s.t.}\ -\pi \le y_i \le \pi,\ i = 1, 2, \dots, 10
\end{cases}
\]

Example 20(F20) [53]:

\[
\begin{cases}
\min\limits_{x} F(x,y) = |2x_1 + 2x_2 - 3y_1 - 3y_2 - 60| \\
\qquad \min\limits_{y} f(x,y) = (y_1 - x_1 + 20)^2 + (y_2 - x_2 + 20)^2 \\
\qquad \text{s.t.}\ 2y_1 - x_1 + 10 \le 0,\ 2y_2 - x_2 + 10 \le 0,\ x_1 + x_2 + y_1 - 2y_2 \le 40, \\
\qquad\qquad -10 \le y_i \le 20,\ 0 \le x_i \le 50,\ i = 1, 2
\end{cases}
\]

Parameter setting

To allow comparison of the experimental results, the parameter settings in this study are consistent with those in [26].

  • Population size: 15;

  • Weight vector: v = 10;

  • Crossover rate: pc = 0.6;

  • Gaussian mutation probability: pm = 0.05;

  • Maximum number of generations: maxgen = 300;

  • Step size in the crossover operator: s = 8.

Performance measure

To test the effectiveness and practicability of the SMEA algorithm, we evaluated the non-dominated solutions, HV, IGD, C-metric, CPU time and optimal solutions according to the characteristics of the problems. Several metrics are defined as follows:

1). HV [54]

Select a reference point $e = (e_1, e_2, \dots, e_s)^T$ in the objective space, and let $A$ denote the non-dominated set produced by SMEA, NSGA-II, the weighted sum approach or the Tchebycheff approach. Compute

\[
HV(A,e)=\operatorname{volume}\Bigl(\bigcup_{f\in A}[f_1,e_1]\times\cdots\times[f_s,e_s]\Bigr)
\]

For a fixed reference point in the objective space, a larger HV indicates a better-quality solution set.
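For two objectives, the HV above is simply the area of the union of the boxes spanned by each point and the reference point. A minimal sketch, assuming minimisation and a 2-D objective space (the function name and data are illustrative, not from the paper):

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D non-dominated set for a minimisation
    problem, with respect to the reference point ref."""
    # Keep only points that strictly dominate the reference point.
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    # Sort by f1 ascending; in a non-dominated set f2 is then descending.
    pts.sort(key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated duplicates
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

print(hypervolume_2d([(1, 2), (2, 1)], ref=(3, 3)))  # 3.0
```

Each term adds the slice of area contributed by one point beyond the previous ones, so the union is computed without inclusion-exclusion.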

2). IGD [55]

Let G be a uniformly distributed set of points sampled from the true Pareto front, and let G′ be the approximation set obtained by a multiobjective optimisation algorithm. The IGD value of G to G′, i.e., IGD(G, G′), is defined as

\[
IGD(G,G')=\frac{\sum_{i=1}^{|G|} d(G_i,G')}{|G|}
\]

where |G| is the number of solutions in the set G and d(Gi, G′) is the minimum Euclidean distance from Gi to the solutions of G′ in the objective space. Generally, a lower value of IGD(G, G′) is preferred, as it indicates that G′ is distributed more uniformly and lies closer to the true Pareto front.
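The definition above translates directly into code. A minimal sketch using Euclidean distance (names are illustrative, not the authors' implementation):

```python
import math

def igd(G, G_prime):
    """IGD(G, G'): mean over the reference points G of the minimum
    Euclidean distance to the approximation set G'."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(g, q) for q in G_prime) for g in G) / len(G)

# Perfect coverage of the reference set gives IGD = 0:
print(igd([(0, 1), (1, 0)], [(0, 1), (1, 0)]))  # 0.0
```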

3). Cmetric [54]

We give the following notations:

Set SM: non-dominated solution set of SMEA;

Set WS: non-dominated solution set of Weighted sum approach;

Set TE: non-dominated solution set of Tchebycheff approach;

Set NS: non-dominated solution set of NSGA-II. Let:

\[
C(A',B)=\frac{\bigl|\{\,u\in B \;:\; \exists\, v\in A' \text{ such that } v \text{ dominates } u\,\}\bigr|}{|B|}
\]

C(A′, B) is defined as the fraction of the solutions in B that are dominated by at least one solution in A′. Note that C(A′, B) is not necessarily equal to 1 − C(B, A′). C(A′, B) = 1 implies that every solution in B is dominated by at least one solution in A′, whereas C(A′, B) = 0 implies that no solution in B is dominated by a solution in A′.
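The C-metric can be sketched as follows, assuming minimisation (the Pareto-dominance test and names are illustrative, not the authors' code):

```python
def dominates(v, u):
    """v Pareto-dominates u (minimisation): v is no worse in every
    objective and strictly better in at least one."""
    return (all(vi <= ui for vi, ui in zip(v, u))
            and any(vi < ui for vi, ui in zip(v, u)))

def c_metric(A, B):
    """Fraction of solutions in B dominated by at least one solution in A."""
    return sum(any(dominates(v, u) for v in A) for u in B) / len(B)

A = [(0, 0)]
B = [(1, 1), (0, 2), (-1, 5)]
print(c_metric(A, B))  # 2/3: (-1, 5) is not dominated by (0, 0)
```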

Simulation results and comparisons

All algorithms were executed on a computer with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (1.80 GHz) using Matlab. Each algorithm was independently run ten times on every test instance. For the first 10 Examples, SMEA is compared with the weighted sum method, the Tchebycheff approach and NSGA-II. For each algorithm, we randomly select one of the 10 computational results and show it in Figs 3–12. The analytical (theoretical) solution sets of Examples 11 to 13 are available, so the computational results of SMEA can be compared with these known analytical solutions, as shown in Figs 13–15.

Fig 3. Non-dominated solutions on example 1.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 1.

Fig 12. Non-dominated solutions on example 10.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 10.

Fig 13. Non-dominated solutions on example 11.


Analytical solutions and non-dominated solutions obtained by SMEA on example 11.

Fig 15. Non-dominated solutions on example 13.


Analytical solutions and non-dominated solutions obtained by SMEA on example 13.

Fig 5. Non-dominated solutions on example 3.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 3.

Fig 6. Non-dominated solutions on example 4.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 4.

Fig 7. Non-dominated solutions on example 5.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 5.

Fig 8. Non-dominated solutions on example 6.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 6.

Fig 9. Non-dominated solutions on example 7.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 7.

Fig 10. Non-dominated solutions on example 8.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 8.

Fig 11. Non-dominated solutions on example 9.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 9.

Fig 14. Non-dominated solutions on example 12.


Analytical solutions and non-dominated solutions obtained by SMEA on example 12.

Figs 3–12 show the results on Examples 1–10. For Example 1 (shown in Fig 3), the non-dominated solution set obtained by SMEA is superior to those obtained by the three compared methods. Example 2 is a maximisation problem; from Fig 4 we can see that the non-dominated solution set obtained by SMEA is better than those obtained by the three compared methods. In addition, for Examples 3, 4, 5, 7 and 9, the non-dominated solution set obtained by SMEA is similar to that of the Tchebycheff approach, but better than those of the other two methods. For Example 10, the results presented by SMEA are almost the same as those obtained by both the Tchebycheff approach and the weighted sum approach, but better than those of NSGA-II. For Examples 6 and 8, SMEA can also approximate the non-dominated solutions reported in the literature. Figs 13–15 show that the non-dominated solution sets obtained by SMEA almost coincide with the analytical solution sets.

Fig 4. Non-dominated solutions on example 2.


Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 2.

Table 2 reports the average HV obtained by SMEA, the Weighted sum approach, the Tchebycheff approach and NSGA-II over 10 independent runs on Examples 1–10, while Table 3 reports the average HV and IGD obtained by SMEA and the analytical points over 10 independent runs on Examples 11–13. The symbols “+”, “−” and “≈” mean that the computational result is better than, worse than and almost equal to that obtained by our algorithm, respectively. The standard deviation (std) of the HV values over the 10 runs is also given. The best results are highlighted in bold in Table 2. The HV obtained by SMEA is much larger than those obtained by the other three algorithms on test problems 2–5, 7 and 9. From Table 2, SMEA obtains 10 HV values better than the Weighted sum approach, 10 better than the Tchebycheff approach and 6 better than NSGA-II, which illustrates that the diversity of the solutions obtained by SMEA is better than that of the other algorithms. Meanwhile, the IGD obtained by SMEA is clearly small on Examples 11–13, which indicates that, for these test problems, the solutions obtained by SMEA cover the analytical solutions quite well.

Table 2. HV obtained by SMEA and the approaches in the literature.

Test problem SMEA(±std) Weighted sum approach Tchebycheff approach NSGA-II
F01 3.8841E + 00(±3.1298E − 03) 3.7641E+00 3.5249E+00 2.4364E+01
F02 1.6698E + 03(±5.7438E − 02) 0.0000E+00 8.2293E+01 1.2895E+03
F03 6.0071E + 03(±2.2216E − 03) 1.5274E+03 4.8412E+03 1.4086E+02
F04 7.0414E + 07(±6.0845E − 04) 5.5236E+05 3.2912E+05 4.6756E+02
F05 2.8771E + 03(±3.5607E − 03) 2.5212E+03 2.6888E+03 2.7711E+03
F06 2.0669E + 02(±1.2233E − 04) 2.5281E+01 2.6506E+01 3.0948E+02
F07 1.8901E + 03(±4.7892E − 03) 2.5252E+01 2.8871E+01 1.5958E+03
F08 3.2278E + 00(±8.6756E − 03) 2.0780E-01 2.0460E-01 5.6428E+00
F09 3.2245E + 03(±4.5582E − 03) 3.0200E+02 2.7828E+03 2.5540E+01
F10 3.8894E + 00(±2.5514E − 02) 3.7641E+00 3.5249E+00 2.4363E+01
+/ − /≈ 10/0/0 10/0/0 6/4/0

Table 3. HV and IGD obtained by SMEA, and HV obtained by the analytical points.

Test problem SMEA HV(±std) Analytical points HV IGD
F11 3.1070E − 01(±1.3322E − 03) 3.1160E-01 3.4000E-03
F12 1.9964E − 01(±4.7843E − 03) 2.0840E-01 8.2817E-04
F13 9.8030E − 01(±2.53366E − 03) 1.0000E+00 4.6000E-03
+/−/≈ 2/0/1

In terms of the results of the multiple-problem Wilcoxon's test in Table 4, all the R+ values are larger than the R− values, which implies that the performance of our algorithm is superior to that of the other competitors. Moreover, a significant difference at α = 0.05 can be found in all three cases, i.e., SMEA versus the Weighted sum approach, SMEA versus the Tchebycheff approach and SMEA versus NSGA-II. Besides, our algorithm ranks first in the Friedman test (see Table 5 and Fig 16).

Table 4. Results of the multiple-problem Wilcoxon's test on problems F01–F10.

SMEA vs R+ R− p-value Significant at α = 0.05
Weighted sum approach 55 0 0.005 Yes
Tchebycheff approach 55 0 0.005 Yes
NSGA-II 45 10 0.044 Yes
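The R+ and R− values in Table 4 are the positive and negative rank sums of the Wilcoxon signed-rank test over the paired per-problem results. A minimal sketch of how these rank sums are computed (illustrative, not the authors' code):

```python
def wilcoxon_rank_sums(a, b):
    """R+ and R- rank sums of the Wilcoxon signed-rank test for paired
    samples a, b. Zero differences are discarded; tied absolute
    differences receive average ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    diffs.sort(key=abs)
    # Assign average ranks to groups of tied absolute differences.
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        j = i
        while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
            j += 1
        avg = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    r_plus = sum((r for d, r in zip(diffs, ranks) if d > 0), 0.0)
    r_minus = sum((r for d, r in zip(diffs, ranks) if d < 0), 0.0)
    return r_plus, r_minus

# If one method wins on all 10 problems, R+ = 1+2+...+10 = 55 and R- = 0,
# matching the first two rows of Table 4:
print(wilcoxon_rank_sums(list(range(1, 11)), [0] * 10))  # (55.0, 0.0)
```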

Table 5. Ranking of SMEA by the Friedman test on problems F01–F10.

Algorithm Ranking
SMEA 3.60
Weighted sum approach 1.70
Tchebycheff approach 1.90
NSGA-II 2.80

Fig 16. Ranking of SMEA by Friedman test on the problems of F01-F10.


Table 6 reports the average C-metric values over 10 independent runs on Examples 1–10, in which SMEA is compared pairwise with the three algorithms in the literature. We can see that C(SM, WS) ≥ C(WS, SM), C(SM, TE) ≥ C(TE, SM) and C(SM, NS) ≥ C(NS, SM) on problems 1, 2, 5, 6, 8 and 10, which means that SMEA finds a better non-dominated set than the algorithms in [26].

Table 6. Comparison of C-metric between SMEA and algorithms in [26].

Test problems F01 F02 F03 F04 F05 F06 F07 F08 F09 F10 +/−/≈
C(SM,WS) 1.0000 0.6700 0.4700 0.5700 0.5400 0.0000 0.0200 0.0930 0.0067 0.8800 6/3/1
C(WS,SM) 0.0000 0.5700 1.0000 0.5100 0.4100 0.0000 0.3200 0.0067 0.8400 0.5400
C(SM,TE) 1.0000 0.6700 0.5500 0.2600 0.0000 0.0000 0.0930 0.5200 0.6700 0.4100 7/1/2
C(TE,SM) 0.0000 0.5000 0.3900 0.5300 0.0000 0.0000 0.0800 0.4000 0.4800 0.0000
C(SM,NS) 1.0000 0.6900 0.6200 0.3700 0.0067 0.0000 0.0013 0.0400 0.0470 0.9600 8/1/1
C(NS,SM) 0.0000 0.0130 0.0270 0.2900 0.0000 0.0000 0.0067 0.0200 0.0067 0.0000

In addition, CPU time is used to compare the efficiency of the algorithms. For the first 10 Examples, Table 7 shows the CPU times over 10 independent runs used by SMEA and the two compared algorithms in [26]. From Table 7, one can see that SMEA requires less computation time than the compared methods. For the other Examples, the CPU times are also provided in Table 7.

Table 7. CPU time used by the SMEA and the CPU time of the approaches in the literature [26].

Test problem Weighted sum approach(s) Tchebycheff approach(s) SMEA approach(s)
F01 288.8705 269.1355 184.0437
F02 290.4008 305.5410 279.4890
F03 1153.4208 1169.1095 329.9828
F04 1514.5671 1652.7553 579.3628
F05 316.4801 381.0618 290.3375
F06 344.4778 256.3112 238.1882
F07 487.5152 579.9305 351.7422
F08 608.9422 550.5114 453.6247
F09 674.3858 691.0868 328.4653
F10 1060.9020 1102.1321 837.5767
F11 424.1344
F12 385.6785
F13 506.1157
F14 12.5431
F15 10.8668
F16 11.7438
F17 12.2328
F18 6.8531
F19 12.1533
F20 8.5941

BMPPs are difficult to solve, especially the follower's problem, and only a small number of computational Examples exist in the literature. In the proposed algorithm, the weighted sum approach transforms the original multiobjective problem into single-objective ones. Hence, SMEA can also be used to solve BLPPs with a single objective at each level, on which the effectiveness of the surrogate models can be verified. When SMEA is applied to Examples 14–20, the best solution (x*, y*) over all 10 runs is recorded, together with the corresponding objective values F(x*, y*) and f(x*, y*). All results are presented in Table 8, in which the reference objective values from the literature are denoted by Ref-F(x*, y*) and Ref-f(x*, y*), respectively.
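The weighted sum transformation mentioned here replaces the follower's objective vector (f1, …, fk) with a single scalarised objective fw = Σi wi fi. A minimal sketch with hypothetical objectives (the functions and weights are illustrative, not the paper's test problems):

```python
def weighted_sum(objectives, weights):
    """Scalarise a list of objective functions f_i(x, y) into a single
    objective f_w(x, y) = sum_i w_i * f_i(x, y)."""
    def f_w(x, y):
        return sum(w * f_i(x, y) for w, f_i in zip(weights, objectives))
    return f_w

# Two hypothetical follower objectives combined with weights (0.5, 0.5):
f1 = lambda x, y: (y - x) ** 2
f2 = lambda x, y: y ** 2
fw = weighted_sum([f1, f2], [0.5, 0.5])
print(fw(2.0, 1.0))  # 0.5 * 1 + 0.5 * 1 = 1.0
```

Varying the weight vector over several choices, as in the dynamic weighted sum method, yields a family of single-objective follower's problems whose optima approximate the follower's Pareto front.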

Table 8. Comparison of the best results found by SMEA and the compared approaches.

Test problems Ref-f(x*, y*) f(x*, y*) Ref-F(x*, y*) F(x*, y*) ± std (x*, y*)
F14 −152.5005 −152.2950 −351.8333 -352.0990 ± 0 x* = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
y* = (0, 0.6600, 8.8330, 0, 0, 0, 0, 0, 0, 0)
F15 57.4800 57.4800 −6600.0000 -6648.1400 ± 0 x* = (7.3600, 3.5500, 12.3500, 17.4600)
y* = (0.9100, 10.0000, 29.0900, 0)
F16 −1.0156 −1.0156 −18.6787 -18.6835 ± 0 x* = (0, 2)
y* = (1.8768, 0.9076)
F17 3.2000 3.2000 −29.2000 −29.2000 ± 0 x* = (0.0001, 0.8999)
y* = (0, 0.5999, 0.4001)
F18 7.6145 7.6148 −1.2091 -1.2092 ± 0 x* = 1.8885
y* = (0.8892, 0)
F19 1.0000 1.0000 0.0000 0.0000 ± 0 x* = (−0.8885, −0.1115, 0.7173, 0.2510, 0.1672, −0.6012, −0.5287, −0.7114, 0.3353, 0.9776)
y* = (0.0797, 0.9856, 0.4052, 0 −0.9677, −0.5822, 0.6768, 0, 0.8700, −0.2090)
F20 100.0000 0.0000 0.0000 0.0000 ± 0 x* = (0.9857, 28.6942)
y* = (−19.0141, 8.6941)

Table 8 shows the optimal results over 10 independent runs on Examples 14–20, with the best results highlighted in bold. In particular, the optimal solutions of Examples 14–16 and 18 are clearly better than those reported in the existing literature. It follows that the surrogate model technique is effective for solving problems of this type.

Conclusion

The BMPP is among the hardest known optimisation models, as it accumulates the computational complexity of both the hierarchical structure and multiobjective optimisation. To reduce the computational cost of the problem, two efficient techniques are embedded in the proposed algorithm. One is the weighted sum method, used to deal with the multiple objectives in the follower's problem. The other is the surrogate model, which efficiently saves computational cost in obtaining bilevel feasible solutions. In addition, a heuristic crossover operator is designed by making use of gradient information. The simulation results on 20 computational examples show the efficiency of the proposed algorithm.

Supporting information

S1 File

(XLS)

S2 File

(XLS)

Acknowledgments

We thank Hong Li for useful discussions and for supplying some reference data. We also thank the editors and the anonymous reviewers for their professional and valuable suggestions.

Data Availability

All relevant data are within the manuscript and its Supporting information files; the data are also available at https://doi.org/10.5061/dryad.dfn2z3504.

Funding Statement

The research work was supported by the National Natural Science Foundation of China under Grant Nos. 61966030 and 11661067, the Natural Science Foundation of Qinghai Province under Grant No. 2018-ZJ-901 and the Key Laboratory of IoT of Qinghai under grant 2020-ZJ-Y16.

References

  • 1. Bard JF (1998) Practical Bilevel Optimisation: Algorithms and Applications. Kluwer Academic Publishers, Dordrecht, the Netherlands. [Google Scholar]
  • 2. Zhu XD, Guo PJ (2019) Bilevel Programming Approaches to Production Planning for Multiple Products with Short Life Cycles. 4OR Quarterly Journal of the Belgian: French and Italian Operations Research Societies 2: 1–25. [Google Scholar]
  • 3. Nasrolahpour E, Kazempour J, Zareipour H, et al. (2018) A Bilevel Model for Participation of a Storage System in Energy and Reserve Markets. IEEE Transactions on Sustainable Energy 9(2): 582–598. 10.1109/TSTE.2017.2749434 [DOI] [Google Scholar]
  • 4. Ulbrich M, Leibold M, Albrecht S (2017) A Bilevel Optimisation Approach to Obtain Optimal Cost Functions for Human Arm Movements. Numerical Algebra: Control and Optimisation (NACO) 2(1): 105–127. [Google Scholar]
  • 5. Yang K, Lan YF, Zhao RQ (2017) Monitoring Mechanisms in New Product Development with Risk-Averse Project Manager. Journal of Intelligent Manufacturing 28: 1–15. 10.1007/s10845-014-0993-5 [DOI] [Google Scholar]
  • 6. Sinha A, Malo P, Deb K (2017) A Review on Bilevel Optimisation: From Classical to Evolutionary Approaches and Applications. IEEE Transactions on Evolutionary Computation 22(2): 276–295. 10.1109/TEVC.2017.2712906 [DOI] [Google Scholar]
  • 7. Jensen TV, Kazempour J, Pinson P (2018) Cost-Optimal ATCs in Zonal Electricity Markets. IEEE Transactions on Power Systems 33(4): 3624–3633. 10.1109/TPWRS.2017.2786940 [DOI] [Google Scholar]
  • 8. Dupin R, Michiorri A, Kariniotakis G (2019) Optimal Dynamic Line Rating Forecasts Selection Based on Ampacity Probabilistic Forecasting and Network Operators’ Risk Aversion. IEEE Transactions on Power Systems 34(4): 2836–2845. 10.1109/TPWRS.2018.2889973 [DOI] [Google Scholar]
  • 9. Liu J, Hong YF, Zheng Y (2018) A Branch and Bound-Based Algorithm for the Weak Linear Bilevel Programming Problems. Wuhan University Journal of Natural Sciences 23(6): 480–486. 10.1007/s11859-018-1352-8 [DOI] [Google Scholar]
  • 10. Liu J, Hong YF, Zheng Y (2018) A New Variant of Penalty Method for Weak Linear Bilevel Programming Problems. Wuhan University Journal of Natural Sciences 23(4): 328–332. 10.1007/s11859-018-1330-1 [DOI] [Google Scholar]
  • 11. Sinha A, Soun T, Deb K (2019) Using Karush-Kuhn-Tucker Proximity Measure for Solving Bilevel Optimisation Problems. Swarm and Evolutionary Computation 44: 496–510. 10.1016/j.swevo.2018.06.004 [DOI] [Google Scholar]
  • 12. Ye JJ, Zhu DL (1995) Optimality Conditions for Bilevel Programming Problems. Optimisation 33(1): 9–27. 10.1080/02331939508844060 [DOI] [Google Scholar]
  • 13. Zheng Y, Zhang G, Zhang Z, et al. (2018) A Reducibility Method for the Weak Linear Bilevel Programming Problems and a Case Study in Principal-Agent. Informatics and Computer Science Intelligent Systems Applications 3: 1–33. [Google Scholar]
  • 14. Jin ZJ, Xu YH, Zhang LP (2017) An Efficient Algorithm for Convex Quadratic Semi-Definite Optimisation. Numerical Algebra Control and Optimisation 2(1): 129–144. [Google Scholar]
  • 15. Sun J, Liao LZ, Brian R (2017) Quadratic Two-Stage Stochastic Optimisation with Coherent Measures of Risk. Mathematical Programming 168(3): 1–15. [Google Scholar]
  • 16. Susanne F, Patrick M, Maria P (2017) Optimality Conditions for the Simple Convex Bilevel Programming Problem in Banach Spaces. Optimization 67(4): 1–32. [Google Scholar]
  • 17. Joseph A, Ozaltin OY (2018) Feature Selection for Classification Models via Bilevel Optimisation. Computers and Operations Research 5(5): 1–32. [Google Scholar]
  • 18. Fahim A, Hedar A R (2017) Filter-Based Genetic Algorithm for Mixed Variable Programming. Numerical Algebra: Control and Optimisation (NACO) 1(1): 99–116. [Google Scholar]
  • 19. Sinha A, Soun T, Deb K (2017) Evolutionary Bilevel Optimisation Using KKT Proximity Measure. 2017 IEEE Congress on Evolutionary Computation (CEC) 1: 1–9. [Google Scholar]
  • 20. Wang YP, Jiao YC, Li H (2005) An Evolutionary Algorithm for Solving Nonlinear Bilevel Programming Based on a New Constraint-Handling scheme. IEEE Transactions on Systems Man and Cybernetics: Applications and Reviews 35(2): 221–232. [Google Scholar]
  • 21. Guo L, Xu JP (2014) Retrofitting Transportation Network Using a Fuzzy Random Multiobjective Bilevel Model to Hedge against Seismic Risk. Abstract and Applied Analysis 2014(1): 1–24. 10.1155/2014/621359 [DOI] [Google Scholar]
  • 22. Brian D, Paolo G, et al. (2014) Bilevel Multiobjective Packaging Optimisation for Automotive Design. Structural and Multidisciplinary Optimisation 50(4): 663–682. 10.1007/s00158-014-1120-0 [DOI] [Google Scholar]
  • 23. Alves MJ, Antunes CH (2018) A Semivectorial Bilevel Programming Approach to Optimize Electricity Dynamic Time-of-Use Retail Pricing. Computers and Operations Research 2: 1–29. [Google Scholar]
  • 24.Chakraborti D, Biswas P, Pal BB (2017) Modelling Multiobjective Bilevel Programming for Environmental-Economic Power Generation and Dispatch Using Genetic Algorithm. International Conference on Computational Intelligence 1: 423-439.
  • 25.Alves MJ (2012) Using MOPSO to Solve Multiobjective Bilevel Linear Problems. International Conference on Swarm Intelligence 1: 332-339.
  • 26. Li H, Zhang QF, et al. (2016) Multiobjective Differential Evolution Algorithm Based on Decomposition for a Type of Multiobjective Bilevel Programming Problems. Knowledge-Based Systems 107: 271–288. 10.1016/j.knosys.2016.06.018 [DOI] [Google Scholar]
  • 27. Ankhili Z, Mansouri A (2009) An Exact Penalty on Bilevel Programs with Linear Vector Optimisation Lower Level. European Journal of Operational Research 197(1): 36–41. 10.1016/j.ejor.2008.06.026 [DOI] [Google Scholar]
  • 28. Calvete HI, Galé C (2011) On Linear Bilevel Problems with Multiple Objectives at the Lower Level. Omega 39(1): 33–40. 10.1016/j.omega.2010.02.002 [DOI] [Google Scholar]
  • 29. Deb K, Sinha A (2010) An Efficient and Accurate Solution Methodology for Bilevel Multi-Objective Programming Problems Using a Hybrid Evolutionary-Local-Search Algorithm. Evolutionary Computation 18(3): 403–449. 10.1162/EVCO_a_00015 [DOI] [PubMed] [Google Scholar]
  • 30. Jia LP, Wang YP, Fan L (2016) An Improved Uniform Design-Based Genetic Algorithm for Multi-Objective Bilevel Convex Programming. Int. J. Computational Science and Engineering 12(1): 38–46. [Google Scholar]
  • 31. Sinha A, Malo P, Deb K (2017) Approximated Set-Valued Mapping Approach for Handling Multiobjective Bilevel Problems. Computers and Operations Research 77: 194–209. 10.1016/j.cor.2016.08.001 [DOI] [Google Scholar]
  • 32. Deb K, Sinha A (2009) Solving Bilevel Multi-Objective Optimisation Problems Using Evolutionary Algorithms. Springer; Berlin Heidelberg: 1: 110–124. [Google Scholar]
  • 33. Eichfelder G (2010) Multiobjective Bilevel Optimisation. Mathematical Programming 123(2): 419–449. 10.1007/s10107-008-0259-0 [DOI] [Google Scholar]
  • 34. Pieume CO, Marcotte P, et al. (2013) Generating Efficient Solutions in bilevel Multi-Objective Programming Problems. American Journal of Operations Research 3: 289–298. 10.4236/ajor.2013.32026 [DOI] [Google Scholar]
  • 35. Kirlik G, Sayin S (2017) Bilevel Programming for Generating Discrete Representations in Multi-Objective Optimisation. Mathematical Programming 1: 1–20. [Google Scholar]
  • 36. Liu BB, Wan ZP, et al. (2014) Optimality Conditions for Pessimistic Semivectorial Bilevel Programming Problems. Journal of Inequalities and Applications 2014(1): 1–26. 10.1186/1029-242X-2014-41 [DOI] [Google Scholar]
  • 37. Lv YB, Wan ZP (2015) Solving Linear Bilevel Multi-Objective Programming Problem via Exact Penalty Function Approach. Journal of Inequalities and Applications 2015(1): 258–269. 10.1186/s13660-015-0780-7 [DOI] [Google Scholar]
  • 38. Zhang T, Hu TS, et al. (2012) An Improved Particle Swarm Optimisation for Solving Bilevel Multi-Objective Programming Problem. Journal of Applied Mathematics 1: 359–373. [Google Scholar]
  • 39. Zhang T, Hu TS, et al. (2013) Solving High Dimensional Bilevel Multiobjective Programming Problem Using a Hybrid Particle Swarm Optimisation Algorithm with Crossover Operator. Knowledge-Based Systems 53: 13–19. 10.1016/j.knosys.2013.07.015 [DOI] [Google Scholar]
  • 40. Liu HL, Wang YP, et al. (2009) A Multi-Objective Evolutionary Algorithm Using Min-Max Strategy and Sphere Coordinate Transformation. Intelligent Automation and Soft Computing 15(3): 361–384. 10.1080/10798587.2009.10643036 [DOI] [Google Scholar]
  • 41. Wang ZK, Zhang QF, Zhou AM, et al. (2015) Adaptive Replacement Strategies for MOEA/D. IEEE Transactions on Cybernetics 46(2): 474–486. 10.1109/TCYB.2015.2403849 [DOI] [PubMed] [Google Scholar]
  • 42. Zhang QF, Li H (2007) MOEA/D: A Multi-Objective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation 11(6): 712–731. 10.1109/TEVC.2007.892759 [DOI] [Google Scholar]
  • 43. Deb K, Pratap A, Agarwal S, et al. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2): 182–197. 10.1109/4235.996017 [DOI] [Google Scholar]
  • 44. Deb K, Jain H (2014) An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems with Box Constraints. IEEE Transactions on Evolutionary Computation 18(4): 577–601. 10.1109/TEVC.2013.2281535 [DOI] [Google Scholar]
  • 45. Fang KT, Bentler WPM (1994) Some Applications of Number-Theoretic Methods in Statistics. Statistical Science 9(3): 416–428. 10.1214/ss/1177010392 [DOI] [Google Scholar]
  • 46. Wang Y, Fang KT (1990) Number Theoretic Methods in Applied Statistics. Chinese Annals of Mathematics Series B 11(1): 51–65. [Google Scholar]
  • 47. Stromberg AJ (1996) Number-theoretic Methods in Statistics. Technometrics 38(1): 189–190. 10.1080/00401706.1996.10484478 [DOI] [Google Scholar]
  • 48. Li HC, Wang YP (2008) An Interpolation Based Genetic Algorithm for Solving Nonlinear Bilevel Programming Problems. Chinese Journal of Computers 31(6): 910–918. 10.3724/SP.J.1016.2008.00910 [DOI] [Google Scholar]
  • 49. Mengistu H, Huizinga J, Mouret JB, Clune J (2016) The Evolutionary Origins of Hierarchy. Plos Computational Biology 12(6):1–23. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Azmat U, Abdullah M S, Saleem A K, et al. (2018) Evolutionary Algorithm Based Heuristic Scheme For Nonlinear Heat Transfer Equations. Plos One 13(1):1–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Zhang WN, Ming ZY, Zhang Y, et al. (2016) Capturing the Semantics of Key Phrases Using Multiple Languages for Question Retrieval. IEEE Transactions on Knowledge and Data Engineering 28(4):888–900. 10.1109/TKDE.2015.2502944 [DOI] [Google Scholar]
  • 52. Xu P, Wang LZ (2014) An Exact Algorithm for the Bilevel Mixed Integer Linear Programming Problem under Three Simplifying Assumptions. Computers and Operations Research 41(41): 309–318. 10.1016/j.cor.2013.07.016 [DOI] [Google Scholar]
  • 53. Sinha A, Malo P, Deb K (2016) Evolutionary Algorithm for Bilevel Optimisation Using Approximations of the Lower Level Optimal Solution Mapping. European Journal of Operational Research 257(2): 395–411. 10.1016/j.ejor.2016.08.027 [DOI] [Google Scholar]
  • 54. Zitzler E, Thiele L (1999) Multi-Objective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Transaction on Evolutionary Computation 3(4):257–271. 10.1109/4235.797969 [DOI] [Google Scholar]
  • 55. Li H, Zhang QF (2009) Multiobjective Optimisation Problems with Complicated Pareto Sets, MOEA/D, and NSGA-II. IEEE Transaction on Evolutionary Computation 18:450–455. [Google Scholar]

Decision Letter 0

Weinan Zhang

17 Sep 2020

PONE-D-20-21853

An Evolutionary Algorithm Using Surrogate Models for Solving Bilevel Multi-objective Programming Problems

PLOS ONE

Dear Dr. Liu,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Nov 01 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Weinan Zhang

Academic Editor

PLOS ONE

Additional Editor Comments:

Based on the reviews and my own reading of the manuscript, I have decided to request a major revision of the current version of the manuscript.

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

'The research work was supported by the National Natural Science Foundation of China under Grant Nos. 61966030 and 11661067, the Natural Science Foundation of Qinghai Province under Grant No. 2018-ZJ-901 and the Key Laboratory of IoT of Qinghai under grant 2020-ZJ-Y16.'

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

a. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

'NO - Include this sentence at the end of your statement: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript '

b. Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

5. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper proposes an evolutionary algorithm using surrogate models for solving bilevel multiobjective programming problems. It is an interesting topic, and the paper can be accepted after revision. My detailed comments are as follows:

1) Figure 2 shows the framework of the entire algorithm, but the element labelled "Archive D2" in Figure 2 is unclear. In addition, the colour of the partial graph in Figure 15 is inconsistent with the colours in the other figures (13-14); it is recommended to modify this.

2) Please carefully check the representation of the numbers in Table 2 and Table 3. For example, did you omit "E" in the representation of the numbers in Examples F08-F10?

3) Table 6 compares the C-Measure of the different algorithms, but the description of Table 6 on page 24 seems inappropriate, such as "… C(SM,WS) > C(WS,SM), C(SM,TE) > C(TE,SM) and C(SM,NS) > C(NS,SM) on problems 1,2,5,6,8,10… ". However, I found that for F06 it should be " …C(SM,WS) = C(WS,SM), C(SM,TE) = C(TE,SM) and C(SM,NS) = C(NS,SM) …".

4) Table 8 shows the comparison of objective function values on Examples 14-20. Please clearly explain the relationship between F(x∗,y∗) and f(x∗,y∗). Is the comparison of their values mainly reflected in F(x∗,y∗)?

Reviewer #2: The paper presents a novel evolutionary algorithm for the bilevel multiobjective programming problem, a very difficult problem that has been studied extensively in previous work. The experimental results demonstrate that the proposed method is very efficient for this problem. Overall, however, the paper needs major improvements in writing, especially in its organisation. The related work and the introduction appear to be mixed together, which makes it difficult for the reader to understand the main motivation or how the idea is developed. In addition, in Section 5 the authors should take more care to present the algorithm understandably; stating it directly without explanation makes it hard for readers to follow.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Weinan Zhang

1 Dec 2020

Evolutionary Algorithm Using Surrogate Models for Solving Bilevel Multiobjective Programming Problems

PONE-D-20-21853R1

Dear Dr. Li,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Weinan Zhang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The main concerns of reviewers are addressed in the revision.

However, please also note the comment raised by the reviewer, "The flowchart Fig.2 has no start and end marks.", and revise accordingly.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this study, the authors proposed an evolutionary algorithm using surrogate optimisation models to address the computational complexity of both the hierarchical structure and the multiobjective optimisation in the BMPP.

The authors have revised the paper according to the reviewer's suggestions.

The flowchart Fig.2 has no start and end marks.

Reviewer #2: The paper addresses an important problem and present a good solution. All previous issues have been handled. I think the paper is well done and suitable to be published directly.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Acceptance letter

Weinan Zhang

7 Dec 2020

PONE-D-20-21853R1

Evolutionary Algorithm Using Surrogate Models for Solving Bilevel Multiobjective Programming Problems

Dear Dr. Li:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Weinan Zhang

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (XLS)

    S2 File

    (XLS)

    Attachment

    Submitted filename: Response to reviewers.docx

    Data Availability Statement

All relevant data are within the manuscript and its Supporting information files, and the data have also been uploaded at https://doi.org/10.5061/dryad.dfn2z3504.


    Articles from PLoS ONE are provided here courtesy of PLOS