Abstract
The differential evolution algorithm is one of the promising nature-inspired population-based metaheuristic algorithms that has attracted the attention of researchers in recent years. This paper presents a new mutation strategy, called DE/current-to-best/2, that produces a mutated vector based on the distance between the best vector and the current vector, along with another random vector. In addition, the crossover procedure is self-adapted to cover low locality and high locality based on the iteration number. To obtain the best results from the proposed modified differential evolution algorithm, a design of experiments is conducted to optimize its parameters. Comparative results on 11 optimization problems contrast the classical version of the differential evolution algorithm with the new modified version, and the results show the high efficiency of the proposed DE algorithm in terms of CPU time, evaluation, and accuracy.
The outline of the work done in this paper can be shown as follows:
• The paper produces a new modification of one of the most promising metaheuristic algorithms, the differential evolution algorithm.
• The mutation strategy of the algorithm is modified to work with the current solution, the global best solution, and a random solution. The resulting mutated vector from this procedure is used to produce a new modified crossover solution.
• The crossover procedure is self-adapted to cover low locality and high locality based on the iteration number, where in odd iterations high locality is applied to obtain more diversity, and in even iterations low locality is applied to obtain local neighbor solutions. The comparison is done with the classical version of the algorithm, and the results show efficiency in terms of CPU time, evaluation, and accuracy.
Keywords: Optimization, Heuristics, Metaheuristics, Differential evolution algorithm, Design of experiments
Method name: Modified Differential Evolution Algorithm for Solving Optimization Problems
Specifications table
| Subject area: | Computer Science |
| More specific subject area: | Metaheuristics and Evolutionary Algorithms |
| Name of your method: | Modified Differential Evolution Algorithm for Solving Optimization Problems |
| Name and reference of original method: | Storn, R., & Price, K. (1997). Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization, 11(4), 341-359 |
| Resource availability: | https://github.com/sadeer1966/A-Modified-Differential-Evolution-Algorithm-Based-on-Improving-A-New-Mutation-Strategy-and-Self-Adap |
Method details
Introduction
There are many approaches for solving optimization problems. Some of them are called classical approaches that rely on using mathematics, such as the simplex method. These classical approaches can easily deal with low-scale problems and produce exact optimal solutions. However, when the problem is NP-hard, classical approaches encounter a large number of variables and constraints, making them very difficult to apply. Therefore, heuristics and metaheuristic approaches could be ideal in such cases. The word ``metaheuristic'' consists of two ancient Greek words. The first one is ``meta,'' which means upper-level methodology, and the second one is ``heuristic,'' which means the art of discovering new strategies. Therefore, metaheuristics are upper-level methodologies that lead to obtaining approximate optimized solutions by using guiding strategies [1]. Metaheuristics in research are classified into single-based and population-based metaheuristics. In single-based metaheuristics, each iteration focuses on having a random solution in the solution space, and it searches around it for new neighbours using local search, hoping to find better local solutions. The global best solution is the best solution found throughout all iterations. In the case of population-based metaheuristics, the approach begins by generating a population of random solutions, followed by mutation procedures that change the positions of the included solutions in the population. The main concept is to cover more solution areas in the solution space by having diversified solutions in the population. In each solution area, local search is performed to find the best local solution. This paper presents a new modification to one of the population-based metaheuristics named the differential evolution algorithm (DE). 
The paper is organized as follows: the second section presents a literature review on DE, the third section describes the modified DE, the fourth section discusses numerical experiments that optimize the parameters of the proposed modified DE, and finally, the fifth section presents the conclusion and future research points.
Literature Review
Qin et al. [2] proposed a self-adaptive DE (SaDE) algorithm in which both the trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from previous experience in generating promising solutions. Zhang and Sanderson [3] proposed a new mutation strategy, "DE/current-to-pbest", with an optional external archive and adaptively updated control parameters. DE/current-to-pbest is a generalization of the classic "DE/current-to-best", while the optional archive operation utilizes historical data to provide information on the progress direction. Both operations diversify the population and improve convergence performance. Islam et al. [4] proposed a new mutation strategy with binomial crossover and two adaptive control parameters. In their work, a biased selection method is introduced in which the mutant vector undergoes crossover with 'p' top-ranked population vectors rather than the target vector. The authors validated the results on 25 CEC 2005 benchmark problems. Reynoso-Meza et al. [5] used a local search routine to improve convergence, along with an adaptive crossover operator. Tanabe and Fukunaga [6] proposed a new parameter adaptation technique for DE that uses a historical memory of successful control parameter settings to guide the selection of future control parameter values. This technique was evaluated through comparisons on 28 problems from the CEC2013 benchmark set.
Xia and Wang [7] proposed a self-adaptive differential evolution algorithm in which the two control parameters F and CR, as well as the choice of learning strategy, do not need to be pre-specified. Tanabe and Fukunaga [8] proposed L-SHADE, which extends SHADE with linear population size reduction (LPSR): the population size is continually decreased according to a linear function. Fan and Zhang [9] proposed a self-adaptive differential evolution with adaptive crossover strategies. Brest et al. [10] presented iL-SHADE, an improved version of the well-known L-SHADE algorithm, used for solving single-objective real-parameter optimization problems. Awad et al. [11] proposed an algorithm called LSHADE-EpSin, which uses a new ensemble sinusoidal approach to automatically adapt the values of the scaling factor of the differential evolution algorithm. Brest et al. [12] presented a new algorithm, jSO, an improved variant of iL-SHADE combined with a new weighted version of the mutation strategy. Stanovov et al. [13] proposed a new variant of the LSHADE algorithm whose basic idea is to adapt the mutation strategy using selective pressure; the experiments were conducted on the CEC 2018 benchmark functions. Stanovov et al. [14] proposed a new parameter control scheme for the differential evolution algorithm. Song et al. [15] presented an enhanced success-history adaptive DE with a greedy mutation strategy (EBLSHADE), employed to optimize the parameters of PV models as part of a parameter optimization method. Shen et al. [16] proposed a modified jSO algorithm (MJSO), based on cosine similarity with parameter adaptation and a novel opposition-based learning restart mechanism, which had a significant impact on the performance of the algorithm.
Most of the previously developed DE algorithms are efficient in terms of convergence speed and simplicity. The previous work on DE algorithms has primarily focused on modifying mutation strategies and implementing self-adaptation. Mutation strategies contribute to achieving a greater diversity of solutions, while self-adaptation processes help reduce the number of trials required in the experimental design by minimizing the number of parameters. This paper presents a new mutation strategy that utilizes the position of the best solution found, the position of the current solution, and another randomly selected solution from the population. This strategy is named DE/current-to-best/2. In the proposed mutation strategy, the new position is calculated proportionally to the best solution found by multiplying a random weight by the distance between the current solution and the best, and another random weight by the distance between the randomly selected solution and the best. This can lead to more diverse solutions and expedite the convergence towards the best solution found. The other modification of the DE algorithm in this paper introduces a new self-adaptation strategy based on the crossover procedure of the algorithm. This crossover procedure involves generating either a new diverse solution using a higher crossover probability, which allows for more characteristics from the mutated solution found by DE/current-to-best/2 or generating a new solution that incorporates more characteristics from the current solution. Additionally, the paper presents a design of experiments in order to optimize the parameters of the newly modified algorithm, which is rarely found in other works.
Differential Evolution Algorithm
As aforementioned, the differential evolution algorithm is one of the population-based metaheuristics. The classical version of the DE algorithm begins with an initial population that consists of random solutions, each with a corresponding vector, to cover diversified areas of the solution space. Thereafter, the algorithm has three phases: mutation, crossover, and selection. The next subsections provide information about these phases and their modifications.
Mutation Phase
In this phase, the vectors of the population solutions are to be mutated using mutation strategies at each iteration. To illustrate some of the strategies, the following notations are used.
Notations:
| Symbol | Meaning |
|---|---|
| v_i^G | The mutated vector of vector i in iteration G |
| x_r1^G | A random vector from the population |
| x_i^G | Vector i in iteration G |
| x_best^G | The global best vector in iteration G |
| u_i^G | The crossover vector of vector i in iteration G |
| F | Scaling factor |
The most common mutation strategies are found in Deng et al. [17] as follows:
(1) DE/rand/1: v_i^G = x_r1^G + F (x_r2^G - x_r3^G)

(2) DE/best/1: v_i^G = x_best^G + F (x_r1^G - x_r2^G)

(3) DE/current-to-best/1: v_i^G = x_i^G + F (x_best^G - x_i^G) + F (x_r1^G - x_r2^G)

(4) DE/rand/2: v_i^G = x_r1^G + F (x_r2^G - x_r3^G) + F (x_r4^G - x_r5^G)

(5) DE/best/2: v_i^G = x_best^G + F (x_r1^G - x_r2^G) + F (x_r3^G - x_r4^G)

where x_r1^G, ..., x_r5^G denote mutually distinct random vectors from the population.
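The five classical strategies above can be sketched in Python as follows, using the standard formulations from the DE literature (vectors are NumPy arrays; r1, ..., r5 are indices of mutually distinct randomly chosen population members):

```python
import numpy as np

def de_rand_1(pop, r1, r2, r3, F):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, best, r1, r2, F):
    # DE/best/1: v = x_best + F * (x_r1 - x_r2)
    return best + F * (pop[r1] - pop[r2])

def de_current_to_best_1(pop, i, best, r1, r2, F):
    # DE/current-to-best/1: v = x_i + F*(x_best - x_i) + F*(x_r1 - x_r2)
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])

def de_rand_2(pop, r1, r2, r3, r4, r5, F):
    # DE/rand/2: v = x_r1 + F*(x_r2 - x_r3) + F*(x_r4 - x_r5)
    return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])

def de_best_2(pop, best, r1, r2, r3, r4, F):
    # DE/best/2: v = x_best + F*(x_r1 - x_r2) + F*(x_r3 - x_r4)
    return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
```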
The new strategy proposed in this paper, named DE/current-to-best/2, is shown in Eq. (6). The strategy utilizes the distance between the position of the best vector and the current vector. In addition, another random vector is used with the best vector to extend the modification of the mutated vector. The scaling weights F1 and F2 are chosen as normally distributed random numbers with a mean of 0 and a standard deviation of 0.5. As mentioned earlier, this strategy enhances diversity by determining a new position based on the distances between the current solution and the best solution found, as well as between a randomly selected solution from the population and the best solution found. The weights are selected as normally distributed random numbers to incorporate both negative and positive values, which contributes to increased diversity in the mutated vector's components.

v_i^G = x_i^G + F1 (x_best^G - x_i^G) + F2 (x_best^G - x_r1^G)  (6)
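Following the textual description of the proposed mutation (a move from the current vector toward the best, plus a second correction from a random member toward the best, with normally distributed weights), a minimal sketch looks like this; the exact form of the authors' Eq. (6) should be checked against their repository:

```python
import numpy as np

def de_current_to_best_2(pop, i, best, r1, rng):
    """Sketch of the proposed DE/current-to-best/2 mutation.

    Moves from the current vector pop[i] toward the best vector, with a
    second term using the distance between a random population member
    pop[r1] and the best. Both weights are drawn from N(0, 0.5), so
    they can be negative, which diversifies the mutant's components.
    """
    F1 = rng.normal(0.0, 0.5)
    F2 = rng.normal(0.0, 0.5)
    return pop[i] + F1 * (best - pop[i]) + F2 * (best - pop[r1])
```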
Crossover Phase
In this phase, the algorithm generates a new solution u_i^G that shares characteristics of the positions x_i^G and v_i^G using a crossover probability CR. Each element j of the crossover vector can be generated as follows:

u_{i,j}^G = v_{i,j}^G if rand_j <= CR, otherwise x_{i,j}^G  (7)

The random number rand_j in Eq. (7) is a uniformly distributed random number in the interval [0, 1]. The crossover probability plays the role of providing either diversity or locality: the higher the crossover probability, the higher the diversity, and the lower the crossover probability, the higher the chance of obtaining a local neighbor solution. In this paper, the proposed crossover probability is generated using the iteration number, where for odd iteration numbers CR is generated randomly from the interval [0.8, 1], and for even iteration numbers CR is generated randomly from a low interval close to zero. So the odd iterations allow for higher-diversity solutions and the even iterations allow for local neighbor solutions. In the case of odd iterations, the generated crossover rate (CR) is very high and difficult to exceed, since the interval contains large numbers between 0.8 and 1. Consequently, during odd iterations, most of the components of v_i^G are selected to create the crossover vector. This leads to a crossover vector that inherits more characteristics from v_i^G, resulting in greater diversity. Conversely, during even iterations, the CR value is very low, so most of the components of the crossover vector are selected from x_i^G, resulting in a new vector that can be described as a local neighbor of x_i^G. In summary, odd iterations promote diverse solutions, while even iterations produce local neighbors of the current solution.
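The parity-based crossover can be sketched as a standard binomial crossover whose CR depends on the iteration number. The high interval [0.8, 1] for odd iterations follows the text; the low interval [0, 0.2] for even iterations and the forced index j_rand (guaranteeing at least one mutant component, as in classic DE) are illustrative assumptions:

```python
import numpy as np

def self_adaptive_crossover(x, v, iteration, rng):
    """Binomial crossover with an iteration-parity-adapted CR.

    Odd iterations draw a high CR from [0.8, 1] so the trial vector
    inherits mostly from the mutant v (more diversity); even iterations
    draw a low CR (assumed here to be [0, 0.2]) so the trial stays a
    local neighbor of x.
    """
    d = len(x)
    cr = rng.uniform(0.8, 1.0) if iteration % 2 == 1 else rng.uniform(0.0, 0.2)
    j_rand = rng.integers(d)       # force at least one mutant component
    mask = rng.random(d) <= cr
    mask[j_rand] = True
    return np.where(mask, v, x)
```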
Selection Phase
In the selection phase, the best solution found among x_i^G, v_i^G, and u_i^G will be selected to remain in the population for the next iteration. In case the evaluations of all of them are the same, the vector that will remain in the population in the next iteration is randomly selected from the three vectors.
Now all the phases of the proposed modification of the DE algorithm have been discussed. Fig. 1 shows the flowchart of the new proposed DE algorithm. The source code of the algorithm can be found at https://github.com/sadeer1966/A-Modified-Differential-Evolution-Algorithm-Based-on-Improving-A-New-Mutation-Strategy-and-Self-Adap.
Fig. 1.
The flowchart of the proposed DE algorithm
To further illustrate the steps of the modified algorithm, the following pseudo code is provided. Additionally, both the modified DE and classical DE algorithms are coded in Python and can be accessed at https://github.com/sadeer1966/A-Modified-Differential-Evolution-Algorithm-Based-on-Improving-A-New-Mutation-Strategy-and-Self-Adap.
The pseudo code of the Proposed algorithm is as follows:
| 1 | Generate an initial population of NP random vectors and evaluate them |
| 2 | G = 1 |
| 3 | While G ≤ MaxItr do: |
| 4 | Identify the global best vector in the population |
| 5 | For each vector in the population: |
| 6 | Generate the mutated vector using Eq. (6) |
| 7 | If G is odd: |
| 8 | Generate CR randomly from the interval [0.8, 1] |
| 9 | Else: |
| 10 | Generate CR randomly from the low interval |
| 11 | Generate the crossover vector using Eq. (7) |
| 12 | Keep the best of the current, mutated, and crossover vectors |
| 13 | End for |
| 14 | G = G + 1 |
| 15 | End while |
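The whole loop can also be sketched end-to-end in Python. This is a compact reading of the method as described in the text, not the authors' implementation: the low CR interval [0, 0.2], the forced crossover index, and the purely greedy selection (no random tie-breaking) are simplifying assumptions; the authors' exact code is in their repository.

```python
import numpy as np

def modified_de(f, bounds, pop_size=10, max_itr=30, seed=0):
    """Sketch of the modified DE: DE/current-to-best/2 mutation with
    N(0, 0.5) weights, parity-adapted binomial crossover, and greedy
    selection among the current, mutant, and trial vectors."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(x) for x in pop])
    for g in range(1, max_itr + 1):
        best = pop[fit.argmin()].copy()
        # parity-adapted crossover probability (low interval assumed)
        cr = rng.uniform(0.8, 1.0) if g % 2 == 1 else rng.uniform(0.0, 0.2)
        for i in range(pop_size):
            r1 = rng.integers(pop_size)
            F1, F2 = rng.normal(0.0, 0.5, size=2)
            # DE/current-to-best/2 mutation, clipped to the search limits
            v = pop[i] + F1 * (best - pop[i]) + F2 * (best - pop[r1])
            v = np.clip(v, lo, hi)
            # binomial crossover with at least one mutant component
            j_rand = rng.integers(d)
            mask = rng.random(d) <= cr
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # greedy selection among x_i, v, u
            cands = [pop[i], v, u]
            vals = [fit[i], f(v), f(u)]
            k = int(np.argmin(vals))
            pop[i], fit[i] = cands[k].copy(), vals[k]
    return pop[fit.argmin()], fit.min()
```

For example, `modified_de(lambda x: float(np.sum(x**2)), [(-5, 5), (-5, 5)])` minimizes a 2-D sphere function over [-5, 5]^2.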
Numerical Experiments
In this section, 11 optimization problems have been selected to test the modified DE algorithm. The evaluation is based on the output of the functions and the CPU time. The problems are listed in Table 5. This paper proposes two parameters that are computed during the run of the algorithm: the randomly generated scaling factor F and the self-adapted crossover probability CR. The remaining parameters for the proposed DE algorithm are the population size (NP) and the number of iterations (MaxItr). The population size and number of iterations in any metaheuristic play a crucial role in convergence; therefore, these two parameters are optimized using design of experiments. The selected levels for both the NP and MaxItr parameters are 10, 30, 50, 70, and 90 solutions and iterations, respectively. A full factorial design consists of multiple factors, each with a set of discrete levels, and its experiments enumerate all combinations of these levels across the factors [18]. Therefore, the full factorial design herein can be represented as in Table 1.
Table 5.
Test Optimization Functions
| Function name | Function | Global minimum | Limits |
|---|---|---|---|
| Sphere | | | |
| Rastrigin | | | |
| Ackley | | | |
| Rosenbrock | | | |
| Beale | | | |
| Goldstein-Price | | | |
| Bohachevsky | | | |
| Booth | | | |
| Matyas | | | |
| Zakharov | | | |
| Six-hump camel | | | |
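For reference, the standard textbook forms of a few of the benchmarks in Table 5 can be implemented as below. These are the commonly used formulations; the paper's exact formulations and search limits may differ (for instance, the reported Zakharov minimum of about -6.16 suggests a shifted variant):

```python
import numpy as np

def sphere(x):
    # f(x) = sum(x_j^2), global minimum 0 at the origin
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    # f(x) = 10n + sum(x_j^2 - 10 cos(2*pi*x_j)), global minimum 0 at the origin
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def ackley(x):
    # global minimum 0 at the origin
    x = np.asarray(x)
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def booth(x):
    # global minimum 0 at (1, 3)
    x1, x2 = x
    return float((x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2)
```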
Table 1.
Full Factorial Design
| Trials | Population | Iterations |
|---|---|---|
| 1 | 10 | 10 |
| 2 | 10 | 30 |
| 3 | 10 | 50 |
| 4 | 10 | 70 |
| 5 | 10 | 90 |
| 6 | 30 | 10 |
| 7 | 30 | 30 |
| 8 | 30 | 50 |
| 9 | 30 | 70 |
| 10 | 30 | 90 |
| 11 | 50 | 10 |
| 12 | 50 | 30 |
| 13 | 50 | 50 |
| 14 | 50 | 70 |
| 15 | 50 | 90 |
| 16 | 70 | 10 |
| 17 | 70 | 30 |
| 18 | 70 | 50 |
| 19 | 70 | 70 |
| 20 | 70 | 90 |
| 21 | 90 | 10 |
| 22 | 90 | 30 |
| 23 | 90 | 50 |
| 24 | 90 | 70 |
| 25 | 90 | 90 |
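The 25-trial design of Table 1 is simply the Cartesian product of the two level sets, which can be generated directly:

```python
from itertools import product

# Enumerate all 5 x 5 combinations of population-size and iteration
# levels, reproducing the 25 trials of the full factorial design.
levels = [10, 30, 50, 70, 90]
design = list(product(levels, levels))   # (population, iterations) pairs
```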
The experimental design in this paper aims to enhance the efficiency of the algorithm by optimizing both the evaluation of the test optimization functions and the CPU time required by the proposed DE algorithm. Therefore, the response value R_t for any trial t, which is required to be minimized, is calculated according to Eq. (8), where f_t is the evaluation of the test optimization function and T_t is the CPU time:

R_t = f_t + T_t  (8)
After running the 25 trials contained in Table 1, the response value of each trial is normalized using Eq. (9) for each test optimization function, and these normalized values (n_t) are listed in Table 6 [19]. Analysis of variance (ANOVA) is used to determine whether there are any differences between the levels of the population and iteration factors. After implementing ANOVA using Minitab, it was found that the P-value for the population levels equals 0.964, which means there is no significant difference between the population levels. The P-value for the iteration levels equals 0.146, which also means there is no significant difference between the iteration levels. For the population levels, the interval plot of responses versus levels in Fig. 2 shows that all levels behave the same, with no need for further investigation. For the iteration levels, however, the interval plot in Fig. 3 suggests that the range between levels 10 and 30 may be worth investigating. So the suggestion herein is to keep the population size equal to 10 and run another experiment that includes only the iteration factor with levels 10, 15, 20, 25, and 30. After implementing this experiment and applying ANOVA, it was found that the P-value of the iteration factor equals 0.029, which means that not all of its levels come from the same population and there is a significant difference between some or all of these levels. Therefore, Tukey's pairwise comparison test is performed to find which levels differ.

n_t = R_t / Σ_{k=1}^{25} R_k  (9)
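One plausible reading of the normalization in Eq. (9), consistent with the Table 6 values averaging about 1/25 per column, is to divide each trial's response by the sum of responses over all trials for that test function; the paper's exact normalization may differ:

```python
def normalize_responses(responses):
    """Divide each trial's response by the sum over all trials, so the
    normalized values for one test function sum to 1."""
    total = sum(responses)
    return [r / total for r in responses]
```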
Table 6.
Normalized results
| Sphere | Rastrigin | Ackley | Rosenbrock | Beale | Goldstein_Price | Bohachevsky | Booth | Matyas | Zakharov | Six_Hump |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.072 | 0.502 | 0.047 | 0.065 | 0.029 | 0.273 | 0.044 | -0.865 | 0.045 | 0.037 | -0.022 |
| 0.055 | 0.051 | 0.047 | 0.026 | 0.043 | 0.382 | 0.028 | 0.075 | 0.045 | 0.041 | 0.047 |
| -0.008 | 0.017 | 0.046 | 0.001 | 0.049 | 0.198 | 0.051 | 0.091 | 0.045 | 0.041 | 0.047 |
| -0.022 | -0.007 | 0.045 | 0.000 | 0.049 | 0.003 | 0.050 | 0.101 | 0.044 | 0.041 | 0.046 |
| 0.025 | 0.022 | 0.045 | 0.011 | 0.047 | 0.005 | 0.049 | 0.099 | 0.044 | 0.041 | 0.046 |
| 0.175 | 0.077 | 0.047 | 0.463 | 0.038 | 0.030 | 0.032 | -0.029 | 0.045 | 0.041 | 0.036 |
| 0.043 | 0.004 | 0.044 | 0.006 | 0.039 | 0.002 | 0.047 | 0.097 | 0.044 | 0.041 | 0.045 |
| -0.007 | 0.056 | 0.042 | 0.001 | 0.046 | 0.001 | 0.046 | 0.095 | 0.042 | 0.041 | 0.045 |
| -0.004 | 0.001 | 0.041 | 0.031 | 0.045 | 0.001 | 0.044 | 0.092 | 0.041 | 0.040 | 0.044 |
| -0.014 | -0.007 | 0.039 | 0.004 | 0.042 | 0.001 | 0.042 | 0.089 | 0.039 | 0.040 | 0.043 |
| 0.166 | 0.088 | 0.046 | 0.162 | 0.029 | 0.024 | 0.050 | -0.003 | 0.045 | 0.041 | 0.036 |
| 0.071 | 0.033 | 0.042 | 0.027 | 0.046 | 0.003 | 0.046 | 0.088 | 0.042 | 0.041 | 0.045 |
| -0.017 | -0.003 | 0.039 | 0.006 | 0.043 | 0.001 | 0.042 | 0.090 | 0.039 | 0.040 | 0.043 |
| -0.009 | -0.001 | 0.036 | 0.000 | 0.041 | 0.003 | 0.040 | 0.084 | 0.037 | 0.040 | 0.043 |
| -0.017 | -0.006 | 0.034 | 0.002 | 0.038 | 0.001 | 0.038 | 0.080 | 0.035 | 0.040 | 0.042 |
| 0.135 | 0.061 | 0.045 | 0.168 | 0.037 | 0.048 | 0.010 | 0.098 | 0.044 | 0.040 | 0.045 |
| 0.010 | 0.006 | 0.040 | 0.002 | 0.042 | 0.006 | 0.044 | 0.091 | 0.040 | 0.040 | 0.044 |
| -0.014 | 0.002 | 0.036 | 0.001 | 0.040 | 0.002 | 0.040 | 0.084 | 0.037 | 0.040 | 0.042 |
| -0.009 | 0.005 | 0.033 | 0.001 | 0.037 | 0.002 | 0.036 | 0.077 | 0.034 | 0.040 | 0.041 |
| -0.015 | -0.004 | 0.031 | 0.001 | 0.034 | 0.001 | 0.033 | 0.067 | 0.032 | 0.039 | 0.040 |
| 0.341 | 0.106 | 0.044 | 0.015 | 0.044 | 0.004 | 0.048 | 0.099 | 0.043 | 0.040 | 0.041 |
| 0.048 | 0.008 | 0.039 | 0.005 | 0.040 | 0.005 | 0.042 | 0.087 | 0.039 | 0.040 | 0.043 |
| 0.011 | 0.000 | 0.034 | 0.002 | 0.037 | 0.001 | 0.037 | 0.079 | 0.035 | 0.040 | 0.041 |
| -0.010 | -0.003 | 0.031 | 0.001 | 0.035 | 0.002 | 0.032 | 0.071 | 0.032 | 0.039 | 0.040 |
| -0.005 | -0.005 | 0.028 | 0.002 | 0.032 | 0.001 | 0.030 | 0.065 | 0.029 | 0.039 | 0.038 |
Fig. 2.
Interval plot for population levels
Fig. 3.
Interval plot for iteration levels
So, after applying Tukey's pairwise comparison test, the grouping information shows two groups, A and B, that differ from one another. Table 2 shows the grouping information, and the interval plot is shown in Fig. 4. Since the grouping information shows that level 10 differs from level 30, and level 30 shows a better response than level 10, the selected level for the iteration factor is 30 iterations.
Table 2.
Grouping information
| Levels | N | Mean | Grouping |
|---|---|---|---|
| 10 | 11 | 0.3581 | A |
| 20 | 11 | 0.2362 | A B |
| 15 | 11 | 0.1674 | A B |
| 25 | 11 | 0.1418 | A B |
| 30 | 11 | | B |
Fig. 4.
Interval plot of iterations after applying Tukey test
In summary, according to the experimental design, it can be concluded that the optimized parameters for population can be selected from any level, and in case of the number of iterations factor, the selected parameter level is 30 iterations. The comparative results section shows the results of implementing the proposed DE algorithm with its optimized parameters on 11 test optimization functions. The comparison is done with three different versions of the classical DE algorithm.
Comparative Results
In this section, comparative results are presented for three different versions of the classical DE and the modified DE, all coded in Python. The three classical versions used in this comparison are based on the mutation strategies in Eqs. (3), (4), and (5). Because the classical version of DE does not adapt CR, the CR value considered in this comparison equals 0.25. To obtain comparative results, 50 outputs were obtained from each algorithm using the 11 optimization functions listed in Table 5. Fig. 5 depicts the box plots of the output from each algorithm. It is evident that the modified DE yields similar results to the classical DE that employs the mutation strategy of Eq. (5). Both of these methods outperform the remaining algorithms. The box plots also indicate that both the modified DE and this classical DE exhibit lower variability compared to the other algorithms. Table 3 displays the mean comparative results of the output from each algorithm, revealing that the modified DE algorithm outperforms the others in terms of mean results, where the bold text highlights the best outputs. In order to assess the robustness of the results, the standard deviations of the outputs were calculated and are presented in Table 4. The standard deviation values of the modified DE algorithm demonstrate its superior robustness compared to the other algorithms.
Fig. 5.
Comparative results box plots.
Table 3.
The mean comparative results
| Functions | Modified DE | Classical 1 | Classical 2 | Classical 3 |
|---|---|---|---|---|
| Sphere | 0 | 1.349530703 | 33.90031884 | 0.20725885 |
| Rastrigin | 6.11304E-05 | 2.039027697 | 35.26733858 | 0.19439251 |
| Ackley | 2.850293399 | 3.23459056 | 8.028763444 | 2.636296377 |
| Rosenbrock | 23.8655269 | 1061.1744 | 61128.32002 | 234.258577 |
| Beale | 0.10212286 | 0.002883156 | 0.011259602 | 0.11556722 |
| Goldstein_Price | 4.62 | 3.111058471 | 4.721087403 | 8.40000001 |
| bohachevsky | 0 | 0 | 6.50017E-06 | 0 |
| booth | 0 | 1.62335E-05 | 0.000150294 | 0 |
| matyas | 0 | 0 | 1.54294E-05 | 0 |
| zakharov | -6.15665237 | -6.15664587 | -6.156279425 | -6.15665237 |
| six_hump | -1.03162845 | -1.03156355 | -1.031129072 | -0.99898186 |
Table 4.
The standard deviation comparative results
| Functions | Modified DE | Classical 1 | Classical 2 | Classical 3 |
|---|---|---|---|---|
| Sphere | 0.000238 | 1.496218 | 17.319566 | 0.787995 |
| Rastrigin | 0.000156 | 1.971238 | 17.990258 | 0.839166 |
| Ackley | 1.543311 | 0.979291 | 1.362058 | 1.560347 |
| Rosenbrock | 34.063151 | 2312.820948 | 59452.559248 | 310.251986 |
| Beale | 0.192298 | 0.008944 | 0.032390 | 0.214863 |
| Goldstein_Price | 11.340000 | 0.432722 | 4.946090 | 17.076299 |
| bohachevsky | 0 | 3.68222E-07 | 0.000023 | 2.78769E-14 |
| booth | 0 | 0.000044 | 0.000301 | 7.24439E-11 |
| matyas | 1.15774E-31 | 0.000001 | 0.000058 | 1.27292E-11 |
| zakharov | 8.88178E-16 | 0.000014 | 0.000688 | 8.33903E-11 |
| six_hump | 0 | 0.000152 | 0.000878 | 0.159935 |
Conclusion
This paper presents a new modification of the DE algorithm in both the mutation and crossover phases. The mutation procedure generates new positions using the distance between the position of the current solution and the position of the best solution found. Additionally, it utilizes the distance between the position of the best solution and a randomly selected position from the population. This type of mutation procedure allows for the generation of neighboring solutions between three positions and converges towards the best solution found. The crossover procedure controls the locality around the position of the current solution based on the iteration number. During odd iterations, the locality becomes higher, resulting in greater diversity. Conversely, during even iterations, the locality decreases, generating local neighbors around the position of the current solution. The scaling factor and the crossover probability are self-adapted in the proposed modified DE. To optimize the parameters of the proposed DE, the design of experiments is conducted with only two factors: the number of iterations and the population size. After optimizing these parameters, comparative results are obtained for 11 optimization problems, comparing the results of the classical version of DE with the modified version. The comparative results demonstrate the high efficiency of the modified DE. It is expected that the modified version of DE will be utilized in future research to address the following problems:
- Job shop scheduling problem
- Warehouse location problem
- Aircraft landing problem
- Timetable scheduling problem
CRediT author statement
Sadeer Fadhil: Conceptualization, Methodology, Software, Validity tests, Data curation, Writing- Original draft preparation, Visualization, Investigation. Hegazy Zaher, Naglaa Ragaa, and Eman Oun: Supervision
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability
The data are already included in the manuscript. In addition, the developed code has been uploaded to GitHub, and its link is provided in the manuscript.
References
- 1. Talbi E.-G. Metaheuristics: From Design to Implementation. 1st ed. John Wiley & Sons, Inc.; Hoboken, New Jersey: 2009.
- 2. Qin A.K., Huang V.L., Suganthan P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009;13(2):398–417.
- 3. Zhang J., Sanderson A.C. JADE: adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009;13(5):945–958. doi:10.1109/TEVC.2009.2014613.
- 4. Islam S.M., Das S., Ghosh S., Roy S., Suganthan P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012;42(2):482–500. doi:10.1109/TSMCB.2011.2167966.
- 5. Reynoso-Meza G., Sanchis J., Blasco X., Herrero J.M. Hybrid DE algorithm with adaptive crossover operator for solving real-world numerical optimization problems. In: 2011 IEEE Congress on Evolutionary Computation (CEC 2011); 2011. pp. 1551–1556.
- 6. Tanabe R., Fukunaga A. Success-history based parameter adaptation for differential evolution. In: 2013 IEEE Congress on Evolutionary Computation (CEC 2013); 2013. pp. 71–78.
- 7. Xia H.G., Wang Q.Z. Modified differential evolution algorithm for numerical optimization problems. Adv. Mater. Res. 2014;989–994:2536–2539. doi:10.4028/www.scientific.net/AMR.989-994.2536.
- 8. Tanabe R., Fukunaga A.S. Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE Congress on Evolutionary Computation (CEC 2014); 2014. doi:10.1109/CEC.2014.6900380.
- 9. Fan Q., Zhang Y. Self-adaptive differential evolution algorithm with crossover strategies adaptation and its application in parameter estimation. Chemom. Intell. Lab. Syst. 2016;151:164–171. doi:10.1016/j.chemolab.2015.12.020.
- 10. Brest J., Maučec M.S., Bošković B. iL-SHADE: improved L-SHADE algorithm for single objective real-parameter optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC 2016); 2016. pp. 1188–1195.
- 11. Awad N.H., Ali M.Z., Suganthan P.N., Reynolds R.G. An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. In: 2016 IEEE Congress on Evolutionary Computation (CEC 2016); 2016. pp. 2958–2965.
- 12. Brest J., Maučec M.S., Bošković B. Single objective real-parameter optimization: algorithm jSO. In: 2017 IEEE Congress on Evolutionary Computation (CEC 2017); 2017. pp. 1311–1318.
- 13. Stanovov V., Akhmedova S., Semenkin E. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In: 2018 IEEE Congress on Evolutionary Computation (CEC 2018); 2018.
- 14. Stanovov V., Akhmedova S., Semenkin E. Differential evolution with linear bias reduction in parameter adaptation. Algorithms. 2020;13(11):1–17. doi:10.3390/a13110283.
- 15. Song Y., Wu D., Wagdy Mohamed A., Zhou X., Zhang B., Deng W. Enhanced success history adaptive DE for parameter optimization of photovoltaic models. Complexity. 2021;2021. doi:10.1155/2021/6660115.
- 16. Shen Y., Liang Z., Kang H., Sun X., Chen Q. A modified jSO algorithm for solving constrained engineering problems. Symmetry. 2021;13(1):1–32. doi:10.3390/sym13010063.
- 17. Deng W., Xu J., Song Y., Zhao H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl. Soft Comput. 2021;100. doi:10.1016/j.asoc.2020.106724.
- 18. Montgomery D.C. Design and Analysis of Experiments. 9th ed. John Wiley & Sons; 2017.
- 19. Olson D.L. Geometric mean technique. In: Decision Aids for Selection Problems. Springer; 1996. pp. 69–80. https://link.springer.com/chapter/10.1007/978-1-4612-3982-6_6