Abstract
This paper proposes an improved version of the Dwarf Mongoose Optimization Algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from that of the DMO, where evaluating a probability value for each fitness is merely computational overhead and contributes nothing to the quality of the alpha or other group members; in IDMO, the fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced to control the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that once the criterion is met, the outgoing babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which can result in better-fitted mongooses than initializing them afresh as done in DMO; the counter is then reset to zero. The proposed IDMO was used to solve the classical and CEC 2020 benchmark functions and 12 continuous/discrete engineering optimization problems. The performance of the IDMO, assessed using different performance metrics and statistical analyses, is compared with the DMO and eight other existing algorithms. In most cases, the results show that solutions achieved by the IDMO are better than those obtained by the existing algorithms.
Keywords: Improved dwarf mongoose, Nature-inspired algorithms, Constrained optimization, Unconstrained optimization, Engineering design problems
Introduction
The drive to improve every circumstance or situation in human life is strong. Optimization tries to find the best solution among a pool of other solutions. As such, optimization occurs naturally in many human endeavors and is deeply rooted in science, business, ecology, and manufacturing [1]. Basically, optimization is accomplished using either a mathematical or a metaheuristic approach. The mathematical methods suffer from being gradient-dependent, computationally expensive, and problem-dependent [2]. On the other hand, the metaheuristic approach does not guarantee that the optimal solution will be found; however, a near-optimal solution will be. The trade-off of speed against solution quality and the drawbacks of the mathematical approach could be attributed to the surge in the rate at which researchers propose nature-inspired algorithms. Some also attribute this surge to the ease of mimicking nature’s way of solving problems [3].
Another reason for this surge is the emergence of new application areas in various optimization domains and their attendant technology. These emerging real-life applications require optimization of components like speed, profit, risk, efficiency, or cost, which are often nonlinear and complex, requiring a robust and efficient metaheuristic optimization technique. Though metaheuristic methods do not guarantee finding the optimal solution, their stochastic nature ensures at least a near-optimal solution in the shortest possible time. Also, the No Free Lunch Theorem (NFLT) stipulates that no single algorithm can optimally solve all optimization problems: no matter how robust an algorithm is, it can only solve those problems for which its robustness is established. Therefore, there is a need to develop new algorithms, hybridize two or more existing algorithms, or improve existing algorithms to solve emerging optimization problems.
Many aspects of natural problem-solving techniques have been used to develop metaheuristic algorithms. One of the first approaches that uses nature as a source of inspiration is the Genetic Algorithm (GA), inspired by natural selection in the theory of evolution [4]. Another popular approach is Particle Swarm Optimization (PSO), inspired by the intelligent way birds flock together [5]. These algorithms have been used to solve problems in domains such as the traveling salesman problem [6], optimal control [7], and many more, and have recorded significant success in their respective areas of application. However, some authors have criticized the over-reliance on the metaphor-based paradigm by nature-inspired metaheuristic algorithms [8–10].
Nature-inspired metaheuristic algorithms imitate some aspects of problem-solving in nature. They are stochastic and gradient-independent, and their parameters can be tuned depending on the problem to be solved [2]. Each nature-inspired metaheuristic algorithm has its unique way of searching the problem space for the optimal solution. Its robustness and efficiency are measured by how effective the search mechanism is under different problem landscapes and parameter uncertainty. Other desired features of these algorithms include implementation simplicity, flexibility, and robustness. The research effort on nature-inspired metaheuristic algorithms is enormous, and it is impossible to review all the contributions except in a dedicated review study [11]. However, some selected nature-inspired metaheuristic algorithms proposed between 2019 and 2022 are summarized in Table 1.
Table 1.
Some nature-inspired metaheuristic algorithms with their source of inspiration (2019–2021)
| Acronym | Name | References |
|---|---|---|
| BO | Butterfly Optimization Algorithm | [47] |
| SSA | Squirrel Search Algorithm | [48] |
| DHOA | Deer Hunting Optimization Algorithm | [49] |
| EPC | Emperor Penguins Colony | [50] |
| HHO | Harris Hawks Optimization | [51] |
| SOA | Seagull Optimization Algorithm | [52] |
| ASO | Atom Search Optimization | [53] |
| NRO | Nuclear Reaction Optimization | [54] |
| HGSO | Henry Gas Solubility Optimization | [55] |
| BWO | Black Widow Optimization | [56] |
| ChOA | Chimp Optimization Algorithm | [57] |
| MPA | Marine Predators Algorithm | [58] |
| MRFO | Manta Ray Foraging Optimization | [59] |
| AVOA | African Vultures Optimization Algorithm | [60] |
| GTO | Artificial Gorilla Troops Optimizer | [61] |
| RFO | Red Fox Optimization Algorithm | [62] |
| CHIO | Coronavirus Herd Immunity Optimization | [63] |
| RCM | Red Colobuses Monkey | [64] |
| GOA | Gazelle Optimization Algorithm | [65] |
| RHSO | Red Hyraxes Swarm Optimizer | [66] |
| PDO | Prairie Dog Optimization | [67] |
| EOSA | Ebola Optimization Search Algorithm | [68] |
| PPO | Predator–Prey Optimization | [69] |
Aside from developing new nature-inspired algorithms, developers have also hybridized existing metaheuristic algorithms. The bibliographic diversity of hybridized approaches is enormous; therefore, only a few examples of this approach involving the DMO are mentioned in this study. The DMO was hybridized with a Multi-Hop Routing Scheme (DMOSC-MHRS) and applied to solve the clustering problem [12]. The binary variant of DMO, the BDMO, was hybridized with a local search algorithm, simulated annealing, to tackle feature selection challenges of varying dimensionality (BDMSAO) [13]. Moreover, the study in [14] proposed a Binary Dwarf Mongoose Optimizer (BDMO) that was applied to solve the multiclass high-dimensional feature selection problem. The success history-based adaptive differential evolution (SHADE) was hybridized with a modified Whale Optimization Algorithm (WOA) such that the two algorithms work stand-alone and only share information about the best-found solution [15].
These inspirational sources can be categorized into swarm-based, evolutionary-based, human-based, and physics-based, as in [16], and some are further categorized as system-based and bio-based, as can be found in [13]. The swarm-based category, also known as Swarm Intelligence (SI), derives its inspiration from the behavior or social interaction of animals, fish, birds, and so on. This group has attracted much attention in the past decades as more methods were developed from this inspirational point. The evolutionary-based methods are nature-based and commence their process with the random generation of a population of solutions. The human-based category is based on the activities that humans perform; for example, the teaching–learning process involving students and teachers was used to form a popular algorithm in this group. Similarly, the physics-based category is inspired by physical laws in nature, for example, nuclear reaction and atom search.
The authors in [17] proposed a novel self-adaptive beneficial factor-based improved SOS (SaISOS), which improved the Symbiotic Organisms Search (SOS) algorithm. Also, a modified Whale Optimization Algorithm (WOA) was developed for prediction using COVID-19 chest X-ray pictures [18]. The Quasi-Oppositional Based Learning (QOBL) strategy and Symbiotic Organisms Search (SOS) were combined to form the Quasi-Oppositional Symbiotic Organisms Search (QOSOS) algorithm for solving unconstrained global optimization problems [19]. The mLBOA, a new BOA variant, was proposed to solve the IEEE CEC 2017 benchmark suite [20]. The ensemble of the Butterfly Optimization Algorithm (BOA) and Symbiotic Organisms Search (SOS), called h-BOASOS, was used to optimize the weight and cost of a cantilever retaining wall [21]. The authors of [22] proposed a modified WOA, called m-SDWOA, by combining the WOA with the modified Symbiotic Organisms Search (SOS).
The DMO, a recently developed swarm-based metaheuristic algorithm, has received some attention as researchers have improved, modified, or hybridized it to solve various optimization problems. Shortly after its development, the DMO was employed to modify a long short-term memory (LSTM) network for predicting the effect of nanoparticle content on the thermal expansion coefficient of nanocomposites prepared using the in situ chemical method [23]. Aluminum nitrate, added to a copper nitrate solution, was used to prepare the nanocomposite. The resulting compounds were then obtained in powdered form, and the leftover liquid was removed by thermal treatment for 1 h at 850 °C. The study investigated the impact of the nanocomposite's content. The proposed machine learning model could predict the thermal expansion coefficient (TEC), which was evaluated using various temperature rates, achieving an accuracy of up to 99%, as reported by the researchers.
The DMO was also applied to the clustering problem in [12], where a novel DMO-secure-based clustering combined with a multi-hop scheme of routing (DMOSC-MHRS) was developed in the internet of drones arena. Moreover, in [14] the study proposed a binary dwarf mongoose optimizer (BDMO) and was applied to solve the multiclass high-dimensional feature selection problem. The proposed method used 18 high-dimensional feature selection datasets and was compared against 10 state-of-the-art feature selection techniques. The method produced higher validation accuracy on 15 of the 18 datasets utilized. Afterward, the BDMO was hybridized with a local search Algorithm Simulated Annealing (SA) known as BDMSAO [13] to tackle feature selection challenges of varying dimensional problems. The proposed hybrid method yielded the highest classification accuracy obtainable on 50% of the 18 datasets employed for validation. It was compared with other popular methods in the literature with promising results.
This algorithm has been gaining ground in other application areas. For example, in [47], a DMO-based parameter estimation method for an autoregressive exogenous (ARX) model was developed. In [24], a new method was developed that incorporated the Dwarf Mongoose Optimization Algorithm (DMOA), the generalized normal distribution (GNF), and an opposition-based learning (OBL) strategy, abbreviated as GNDDMOA. The proposed method was evaluated using 8 data clustering problems and 23 test functions and was compared with other popular techniques. The proposed GNDDMOA yielded the best results when employed to solve data clustering applications. Finally, the DMO was combined with machine learning-driven ransomware detection in a method called DWOML-RWD [25], proposed mainly to recognize and classify ransomware. This study first carried out the feature selection pre-processing stage with an enhanced Krill Herd Optimization (EKHO) using quasi-oppositional-based learning (QOBL).
The proposed optimization technique (IDMO) modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from that of the DMO, where evaluating a probability value for each fitness is merely computational overhead and contributes nothing to the quality of the alpha or other group members; in IDMO, the fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced to control the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that once the criterion is met, the outgoing babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which can result in better-fitted mongooses than initializing them afresh as done in DMO; the counter is then reset to zero. The proposed improved algorithm is used to solve benchmark test functions and twelve (12) different optimization problems in the engineering domain. The major contributions of this study can be summarized as follows:
The alpha selection in IDMO differs from that of the DMO, where evaluating a probability value for each fitness is merely computational overhead and contributes nothing to the quality of the alpha or other group members; in IDMO, the fittest dwarf mongoose is selected as the alpha.
A new operator ω is introduced, which controls the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO.
The scout group movements are modified by randomization to introduce diversity in the search process and explore unvisited areas.
The babysitter exchange criterion is modified such that once the criterion is met, the outgoing babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which can result in better-fitted mongooses than initializing them afresh as done in DMO; the counter is then reset to zero.
The rest of the paper is organized as follows: Sect. 2 presents the dwarf mongoose optimization algorithm (DMO). Section 3 presents the improved dwarf mongoose optimization algorithm (IDMO). The experimental setup, results, and detailed discussion are presented in Sect. 4. Finally, the conclusion and future work are presented in Sect. 5.
The Dwarf Mongoose Optimization Algorithm (DMO)
This section presents the base algorithm, namely the dwarf mongoose optimization (DMO) algorithm. First, the nature-derived inspiration that motivated the design of the DMO is described to allow for an understanding of the conceptualization and flow of the algorithm. The study links real-life activities and phenomena in the domain of consideration to the optimization process inherent among dwarf mongooses. The second subsection presents a summary of the mathematical and procedural models of the DMO algorithm. The section concludes by identifying the gap between the existing implementation of DMO and the natural phenomena in the domain, providing grounds for a new variant.
Inspiration
The behavioral pattern of an organism is an embodiment of its natural disposition to the demands of survival and its evolvement strategy. This strategy allows the organism to adapt to its environment, evolve into a larger species, develop a fitness mechanism to ward off predators, source food, and build a social system with related and non-related organisms. When a population of such organisms is considered, their combined behavioral pattern demonstrates a great system of natural phenomena typical of providing computational solution patterns. A careful study of the dwarf mongoose population revealed that an interesting and useful natural phenomenon is inherent in their existential strategy. One of the fundamental attributes of the dwarf mongoose, peculiar to the Helogale species, is the ability to cohabit in groups—a natural disposition that reflects a complete optimization process.
A group of dwarf mongooses lives in a territory demarcated using anal secretion, a behavioral approach for warding off competing groups or predators and for keeping group members within the territory. The marks are made on horizontal objects within the territory using secretions from the anal or cheek glands. The smell of the secretion, which may remain detectable for 20 to 25 days, serves two purposes: it reassures group members of safety within the territory and puts predators to flight [26, 27]. Unlike other organisms, which expand their population size for the benefit of resource enrichment, the dwarf mongoose prefers to shrink its population size to allow for the sustainability of available resources for the group. This resource control strategy is one of the group survival strategies, in addition to the anti-predation strategy. Whereas the former supports foraging and nutrition needs, the latter ensures that the group members are kept out of an intruder's reach.
In fact, compared to the secretion measure for warding off an enemy, the dwarf mongoose has a more aggressive skull-crushing attack, a bite aimed at the prey's eyes—an approach that allows for adaptation and prevents intruding dwarf mongooses from depleting the group's resources. The prey hunted down serves as a food source, which individuals search for extensively and intensively within the group to have a full meal. This food search has characterized the dwarf mongoose with a seminomadic lifestyle, allowing a group to forage for food intensively at the current location and extensively over long distances. This foraging behavior promotes relocation of the group's territory, referred to as a mound, allowing the search for a new mound where the group pitches its territory and sleeps. This sleeping mound is reported to change almost every night [28]. Meanwhile, while the group moves about during foraging activity, cohesion of the entire group is maintained using a peep vocalization to alert members of the presence of a predator or intruder [29].
The group of dwarf mongooses maintains a caste structure that delineates members into subgroups. These subgroups include the alpha (male and female), juveniles, scouts, and babysitters. The task of vocalization associated with foraging activity is reserved for the alpha female. Even the call for the group members to move out of their mound for foraging, the direction of the journey, and the distance covered are initiated through the vocalization of this same alpha female. Furthermore, the alpha female is the only member who can and should birth, rear, and raise young dwarf mongooses. An attempt by other female subgroup members is considered an act of insubordination [30], which group members will aggressively fight, most likely leading to the offender's departure from the group or such young, birthed ones being killed. In fact, to prevent such an extreme reaction, the alpha female prevents the subordinate males from mating with subordinate females.
Meanwhile, babysitters are elected from among the subordinates to keep watch over the babies of the alpha female, since such young ones are prevented from foraging with the group. However, their mothers may carry them during mound relocation. The carrying of young dwarf mongooses often limits the distance covered when relocating to another mound. This presents an advantage to the young ones, since the dwarf mongooses attack all enemies as a closed group [31]. This attack on an enemy—which could also be another group of dwarf mongooses—is led by the alpha male, with the alpha female coming in the rear, considering she might be carrying the young ones. The numerical strength of a group determines whether it will win the fight. Unfortunately, while the group may win against a ground predator, they are known to be weak in dealing with predators attacking from the air. Group territoriality determines the optimization of group size, which optimizes an individual’s fitness and impacts the cost/benefit relationship in the group. Considering the rich foraging, anti-predation, and group territory sustenance strategies of the dwarf mongoose, the DMO algorithm was designed and implemented to solve real-life optimization problems. The next subsection presents the algorithmic and mathematical models of the DMO.
The DMO Model
Motivated by the natural phenomena of the dwarf mongoose, the design of the DMO [32] algorithm is discussed in the following paragraphs. The authors model all subgroups identified in the dwarf mongoose population by incorporating the alpha (male and female), scout, and babysitter subgroups. The adaptive nature of the animals in their environment concerning predation and foraging was also simulated and applied in the design phase. Figure 1 represents the complete optimization process of the dwarf mongoose. First, the scouting group moves out to source food while those designated for babysitting remain in the mound. The procedure for looking for food, known as foraging, demonstrates the exploration phase of the optimization process. Moreover, since the location of a new food source allows a group of dwarf mongooses to settle down in a mound, the DMO models that as the exploitation or intensification phase. The algorithm also simulates the discovery of new mounds resulting from the exploration process. The figure also shows that the exchange of babysitters is provided for in the design.
Fig. 1.
The optimization procedures of the DMO
The representation of a group of dwarf mongooses, which also represents the entire population, is modeled using Eq. (1). Since the population of the dwarf mongoose is often composed of the alpha group, juvenile group, and scouts (including babysitters), the alpha group is derived from the population to allow for allocating the remaining individuals to the other two subgroups. Meanwhile, considering the role of the alpha female in the population, the authors applied Eq. (2) to compute this special subgroup of alphas. This is made possible by first evaluating the fitness of the entire group, as well as each individual's fitness, to determine which individuals are suitable to be the alpha female (α).
X = [x_{i,j}]_{n×d},  i = 1, 2, …, n;  j = 1, 2, …, d  (1)
α = fit_i / Σ_{i=1}^{n} fit_i  (2)
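For illustration, the DMO-style fitness-probability computation of Eq. (2) can be sketched in Python as follows (a minimal sketch; the function and variable names are ours, not the authors'):

```python
def alpha_probabilities(fitness):
    """Each mongoose's share of the total fitness, as in Eq. (2)."""
    total = sum(fitness)
    return [f / total for f in fitness]

# four candidates with raw fitness values; the shares sum to 1
probs = alpha_probabilities([4.0, 1.0, 3.0, 2.0])
```

The IDMO argues that this per-individual probability adds overhead without improving the chosen alpha, which motivates the simpler fittest-individual selection introduced later.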
Similarly, the scout group, which represents the workforce of the dwarf mongoose population, is computed and extracted from the entire population. To achieve this, the authors developed Eq. (3) to represent this derivation process, allowing for foraging activity and searching for new sleeping mounds simultaneously [31].
X_{i+1} = X_i − CF × phi × rand × [X_i − M],  if φ_{i+1} > φ_i
X_{i+1} = X_i + CF × phi × rand × [X_i − M],  otherwise  (3)
where rand is a random number in [0, 1], CF is the collective-volitive movement control parameter that decays over the course of the iterations, and M is the vector that determines the movement of the mongoose to the new sleeping mound.
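The decaying behavior of the control parameter CF can be illustrated as below; the exact exponent form is our assumption based on the description of CF decaying over the iterations:

```python
def cf(iteration, max_iter):
    """Collective-volitive movement control parameter (assumed form):
    starts at 1 and decays toward 0, shifting the search from
    exploration to exploitation as iterations progress."""
    return (1.0 - iteration / max_iter) ** (2.0 * iteration / max_iter)

start, end = cf(0, 1000), cf(1000, 1000)  # 1.0 at the start, 0.0 at the end
```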
The predatory and foraging activities within a mound of dwarf mongooses require that individuals move from one location to another. This will often require an update mechanism for computing each group member's position. The authors leverage the vocalization of the alpha female (peep), in addition to a randomly generated variable phi in the range [−1, 1], to compute the current position of an individual. The position update model is described in Eq. (4).
X_{i+1} = X_i + phi × peep  (4)
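A minimal sketch of the Eq. (4)-style position update, assuming phi is drawn uniformly from [−1, 1] per dimension (names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def candidate_position(x, peep):
    """Eq. (4)-style move: perturb the current position, scaled by the
    alpha's vocalization `peep` and a uniform random phi in [-1, 1]."""
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    return x + phi * peep

x = np.zeros(3)
x_new = candidate_position(x, peep=2.0)  # each coordinate stays within [-2, 2]
```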
Another important milestone characterized by predatory and foraging activities is the discovery of a new sleeping mound. This new mound is assumed to change during the iterative optimization process. Eq. (5) models the value of the discovered sleeping mound (sm), and Eq. (6) averages the sleeping-mound values.
sm_i = (fit_{i+1} − fit_i) / max{|fit_{i+1}|, |fit_i|}  (5)

φ = (Σ_{i=1}^{n} sm_i) / n  (6)
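The sleeping-mound computation and its averaging can be sketched as follows (a minimal illustration; names are ours):

```python
def sleeping_mound(fit_new, fit_old):
    """Normalized fitness change used to judge a sleeping mound (Eq. (5)-style)."""
    return (fit_new - fit_old) / max(abs(fit_new), abs(fit_old))

def average_mound(sm_values):
    """Average sleeping-mound value across the group (Eq. (6)-style)."""
    return sum(sm_values) / len(sm_values)

# for a minimization problem an improvement yields a negative value
sm = sleeping_mound(2.0, 4.0)                       # (2 - 4) / 4 = -0.5
phi_bar = average_mound([sm, sleeping_mound(3.0, 3.0)])
```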
The procedure for the application of mathematical models described in this section is encoded in the following listing:
Initialize all control parameters
Generate the initial population of dwarf mongooses
Subgroup the entire population into alpha (male and female), scouts, and babysitters
Determine the available number of search agents by subtracting the babysitters from the entire population
Set the rate of exchange for babysitting tasks as L
While the termination condition is not satisfied, do the following:
i. Compute the fitness of the mongoose population/group
ii. Activate and set the time counter
iii. Apply Eq. (2) to deduce the alpha female
iv. Update the position of a potential food source using Eq. (4)
v. Iterate over the individuals and compute the fitness of each
vi. Derive the sleeping mound for the population using Eq. (5)
vii. Determine the movement vector
viii. Exchange the babysitters
ix. Compute the scout group using Eq. (3)
x. Update the best solution found so far
Return the best solution
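The listing above can be sketched as an illustrative Python skeleton rather than the authors' MATLAB implementation; the sphere objective, the peep schedule, the greedy acceptance rule, and all names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Simple test objective (our choice, not from the paper)."""
    return float(np.sum(x ** 2))

def dmo_sketch(obj, dim=5, n=20, n_babysitters=3, max_iter=200, lb=-5.0, ub=5.0):
    n_agents = n - n_babysitters                        # search agents after removing babysitters
    X = rng.uniform(lb, ub, size=(n_agents, dim))       # initial population
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()
    best_fit = float(fit.min())
    for t in range(max_iter):
        cf = (1.0 - t / max_iter) ** (2.0 * t / max_iter)  # assumed decaying control parameter
        peep = 2.0 * cf                                    # assumed vocalization schedule
        for i in range(n_agents):
            phi = rng.uniform(-1.0, 1.0, size=dim)
            cand = np.clip(X[i] + phi * peep, lb, ub)      # Eq. (4)-style candidate move
            f = obj(cand)
            if f < fit[i]:                                 # greedy acceptance (our simplification)
                X[i], fit[i] = cand, f
        if fit.min() < best_fit:                           # track the best solution so far
            best_fit = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_fit

best, best_fit = dmo_sketch(sphere)
```

The babysitter exchange and scout steps are omitted here for brevity; the skeleton only conveys the iterate-evaluate-update structure of the listing.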
The procedure described above was translated into pseudocode and then implemented as the DMO algorithm. Although the DMO demonstrates some measure of novelty, a further study of the domain revealed that some important components of the optimization process, derivable from the natural existence and co-existence of dwarf mongooses, present an opportunity for advancing the algorithm. The inclusion of these phenomena is motivated by the need to enhance performance and balance the exploration and exploitation stages. The next section presents a detailed design and discussion of the improved dwarf mongoose optimization algorithm.
The pseudocode for the algorithm is given in Algorithm 1.
The Improved Dwarf Mongoose Optimization Algorithm (IDMO) Model
This section presents the improved dwarf mongoose optimization algorithm (IDMO).
The IDMO Model
The IDMO is proposed to enhance the exploration and exploitation of the DMO. This limitation is evident in the solutions returned by DMO for F9, F15, and F17, which show that DMO could not find the optimal solution. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from that of the DMO, where evaluating a probability value for each fitness is merely computational overhead and contributes nothing to the quality of the alpha or other group members; in IDMO, the fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced to control the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that once the criterion is met, the outgoing babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which can result in better-fitted mongooses than initializing them afresh as done in DMO; the counter is then reset to zero.
The proposed IDMO achieves optimization in three phases, as shown in Fig. 2. This model shows how the scouting activity is separated from foraging, which is not the case in DMO, where scouting and foraging are the same activity. The individual dwarf mongooses are the search agents, modeled as a matrix in Eq. (6). In the exploration phase, the modified alpha (Eq. (8)) leads the group to uncharted territories using steps modeled in Eq. (9). Equation (10) models a new operator ω, which controls the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO. As shown in Eq. (11), the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Exploitation is achieved after the babysitter exchange criterion is met and the babysitters are exchanged, as shown in Eq. (12). This phase refines the obtained solution toward the optimal solution.
Fig. 2.
The model of the proposed IDMO
Population Initialization
The IDMO population is initialized stochastically as a matrix of candidate dwarf mongooses (X), as shown in Eq. (6). The population vector ranges between the upper bound (U) and lower bound (L) of the optimization problem.
X = [x_{i,j}]_{n×d},  i = 1, 2, …, n;  j = 1, 2, …, d  (6)
where n is the number of dwarf mongooses in a mound, x_{i,j} denotes the position of the jth dimension of the ith individual, and each x_{i,j} is defined in Eq. (7).
x_{i,j} = L + rand × (U − L),  rand ∈ [0, 1]  (7)
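A minimal sketch of this bounded stochastic initialization, assuming the standard x = L + rand × (U − L) form (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def init_population(n, dim, lower, upper):
    """Eq. (7)-style initialization: every coordinate falls in [lower, upper]."""
    return lower + rng.random((n, dim)) * (upper - lower)

X = init_population(n=6, dim=4, lower=-10.0, upper=10.0)
```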
Alpha Group
The population size of this group is modeled by subtracting the number of babysitters from the total number of dwarf mongooses. The alpha female leads this group, and the fittest dwarf mongoose is selected as the alpha female, as given in Eq. (8). The alpha selection in IDMO differs from that of the DMO, where evaluating a probability value for each fitness is merely computational overhead and contributes nothing to the quality of the alpha or other members.
X_α = X_k,  where fit(X_k) = min_{1 ≤ i ≤ n} fit(X_i)  (8)
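The simplified alpha selection can be sketched as follows for a minimization problem (a minimal illustration; names are ours):

```python
import numpy as np

def select_alpha(X, fitness):
    """IDMO-style alpha selection: take the fittest individual directly,
    avoiding the per-individual probability computation used in DMO."""
    idx = int(np.argmin(fitness))   # minimization: smallest fitness is best
    return X[idx], fitness[idx]

X = np.array([[1.0, 2.0], [0.1, 0.2], [3.0, 3.0]])
fitness = np.array([5.0, 0.05, 18.0])
alpha, alpha_fit = select_alpha(X, fitness)   # picks the second individual
```

A single `argmin` over the fitness array replaces the normalization over all individuals, which is the computational saving the text refers to.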
The alpha female keeps the group together using vocalization (peep).
The IDMO searches the problem space by moving around as defined in Eq. (9). The alpha, defined as the fittest dwarf mongoose, drags the other family members toward a potential food source. This scenario is a clear departure from the DMO, where only the alpha’s vocalization is used to influence the positions of the other dwarf mongooses. In IDMO, the position of the alpha is used to set the positions of the other mongooses, and a new operator ω, defined in Eq. (10), controls the alpha movement, thereby enhancing the exploration and exploitation ability of the IDMO.
X_i^{t+1} = X_i^t + phi × ω × (X_α − X_r)  (9)
| 10 |
where ω is defined in Eq. (10), X_i^t is the previous position of the dwarf mongoose, phi is a uniformly distributed random number in [−1, 1], and X_r is a randomly selected dwarf mongoose.
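The alpha-led movement can be illustrated with the sketch below. Every concrete form in this snippet, including the shape of ω, is our assumption for illustration and not the authors' Eq. (9) or Eq. (10); the sketch only conveys the described behavior of a decaying, alpha-guided, randomized step:

```python
import numpy as np

rng = np.random.default_rng(7)

def omega(t, max_iter):
    """Assumed movement-control operator: decays linearly with iterations."""
    return 1.0 - t / max_iter

def alpha_led_move(x, alpha, peer, t, max_iter):
    """Illustrative update: previous position plus an omega-damped,
    randomized pull based on the alpha and a random peer."""
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    return x + phi * omega(t, max_iter) * (alpha - peer)

x = np.zeros(2)
step_early = alpha_led_move(x, np.ones(2), -np.ones(2), t=0, max_iter=100)
step_late = alpha_led_move(x, np.ones(2), -np.ones(2), t=100, max_iter=100)
# late in the run omega is 0, so the move collapses to the current position
```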
Scout Group
The responsibility of the scouts is to look for a suitable sleeping mound, since the dwarf mongooses are known to be seminomadic, never returning to a previous sleeping mound. The IDMO models the scout group scouting for the next sleeping mound after foraging activities. The scouts’ fitness values are considered potential sleeping mounds, since the dwarf mongooses are known to stay around abundant food sources, and the fittest scout is considered the selected sleeping mound. The scouts are modeled as given in Eq. (11).
X_i^{t+1} = X_i^t + rand × (X_{r1} − X_{r2})  (11)
where rand is a random number in [0, 1], and X_{r1} and X_{r2} are randomly selected dwarf mongooses.
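The randomized scout movement described above can be illustrated as follows; the difference-of-two-random-peers form is our assumption for the sketch, not necessarily the authors' Eq. (11):

```python
import numpy as np

rng = np.random.default_rng(3)

def scout_move(x, X):
    """Illustrative randomized scout step: perturb the current position
    by the difference of two distinct randomly selected peers, which
    injects diversity and can reach unvisited regions."""
    r1, r2 = rng.choice(len(X), size=2, replace=False)
    return x + rng.random() * (X[r1] - X[r2])

X = rng.uniform(-5, 5, size=(8, 3))   # a small population of 8 scouts in 3-D
new_pos = scout_move(X[0], X)
```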
The Babysitters
The babysitter exchange criterion is given in Eq. (12). Once the criterion is met, the exchanged babysitters interact with the dwarf mongooses replacing them to gain information about food sources and the next sleeping mound, which can result in better-fitted mongooses than initializing them afresh as done in DMO; the counter is then reset to zero. This improvement is modeled in Eq. (13), where the dwarf mongooses that replace the babysitters are randomly selected and their information is passed to the babysitters as shown. If L reaches zero, it is reset by multiplying it by the current iteration and CF.
| 12 |
| 13 |
where CF controls the collective-volitive movement of the dwarf mongooses, the dwarf mongooses that replace the babysitters are randomly selected, and the final term denotes the birthrate.
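The exchange mechanism described in this subsection can be sketched as follows; the counter logic and the information-transfer step are our assumptions based on the prose, not the authors' Eqs. (12) and (13):

```python
import numpy as np

rng = np.random.default_rng(9)

def exchange_babysitters(X, babysitter_idx, counter, L, cf):
    """Illustrative IDMO-style babysitter exchange (assumed mechanics):
    when the counter reaches the exchange criterion L, each babysitter
    inherits information from a randomly selected forager instead of
    being re-initialized, and the counter is reset to zero."""
    if counter >= L:
        for b in babysitter_idx:
            donor = rng.integers(len(X))
            # pass on the donor's knowledge of food sources and mounds,
            # with a small cf-scaled perturbation (our assumption)
            X[b] = X[donor] + rng.uniform(-1, 1, X.shape[1]) * cf
        counter = 0
    return X, counter

X = rng.uniform(-5, 5, size=(10, 2))
X, counter = exchange_babysitters(X, babysitter_idx=[8, 9], counter=6, L=6, cf=0.5)
```

The key contrast with DMO is that the babysitters start near an existing solution rather than from a fresh random point, which is the exploitation behavior the text describes.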
The computational complexity of the IDMO is significantly reduced because it simplifies the alpha selection overhead. The detailed process of IDMO is shown in Fig. 3. The optimization process starts when the dwarf mongooses forage, led by the alpha female. A select few, called the babysitters, are left behind to tend the nest. The search for abundant food sources simulates the exploration phase of the IDMO. At midday, the babysitters are exchanged so they can feed, since the dwarf mongooses are not known to bring food to the young or others. When this change occurs, the group returns to already-known food sources to enable the exchanged babysitters to feed quickly before new food sources or sleeping mounds are found. This scenario simulates the exploitation phase of the IDMO. The scouting for a sleeping mound at the end of the day further explores and exploits the search space. Like the DMO, the IDMO algorithm has only one specific parameter to be fine-tuned (the number of babysitters).
Fig. 3.
The flowchart of the proposed IDMO
Conceptual Advantage of the IDMO
The superiority of the proposed IDMO can be theoretically attributed to the fact that the IDMO processes are stochastic throughout, from the population generation to the updating steps of the alpha and scout groups. The selection of the babysitters, and of the dwarf mongooses that exchange with them, is entirely stochastic, which improves these solutions through enhanced exploration and exploitation. The IDMO has only one parameter to be tuned, and its implementation is simple and flexible.
Algorithm 2 presents the procedural listing that reflects the mathematical model of the IDMO.
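As a rough illustration of this loop, the sketch below follows the phases described above: the fittest mongoose is taken directly as the alpha (the IDMO change), foragers move under its lead, and babysitters are exchanged once a counter reaches the criterion. All operators here (the movement step, the blending exchange, and the criterion `L`) are simplified placeholders, not the exact forms of Eqs. 11-13.

```python
import random

def idmo_sketch(objective, dim, pop_size=20, iters=100,
                lower=-10.0, upper=10.0, n_babysitters=3):
    """High-level skeleton of the IDMO loop (cf. Fig. 3 / Algorithm 2).

    The movement step, blending exchange, and exchange criterion L are
    simplified placeholders for Eqs. 11-13."""
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    counter, L = 0, pop_size * dim   # assumed exchange criterion
    best, best_fit = None, float("inf")
    for _ in range(iters):
        fits = [objective(x) for x in pop]
        alpha = pop[fits.index(min(fits))]   # fittest member leads
        if min(fits) < best_fit:
            best_fit, best = min(fits), alpha[:]
        for i in range(n_babysitters, pop_size):  # foragers follow alpha
            pop[i] = [min(max(x + random.random() * (a - x), lower), upper)
                      for x, a in zip(pop[i], alpha)]
        counter += 1
        if counter >= L:                     # babysitter exchange
            for i in range(n_babysitters):
                donor = pop[random.randrange(n_babysitters, pop_size)]
                pop[i] = [(s + d) / 2 for s, d in zip(pop[i], donor)]
            counter = 0
    return best, best_fit
```

Because the alpha is simply the fittest member, no per-member probability needs to be evaluated, which is the source of the reduced overhead noted above.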
Results and Discussion
The proposed improvements of the IDMO were tested using 31 benchmark functions (classical and CEC 2020 benchmark functions [33, 34]), twelve (12) engineering benchmark problems, and real-world feature selection problems. The results of the IDMO on the benchmark functions were compared with those of the DMO and eight existing population-based metaheuristic algorithms, namely differential evolution (DE), the arithmetic optimization algorithm (AOA), particle swarm optimization (PSO), constriction-coefficient-based PSO and GSA (CPSOGSA), the salp swarm algorithm (SSA), the grey wolf optimizer (GWO), biogeography-based optimization (BBO), and the sine cosine algorithm (SCA). All the algorithms and optimization problems considered were implemented in MATLAB R2020b, and Table 2 presents the algorithm control parameters used in the experiments. The population size and the maximum number of iterations used for all algorithms are 50 and 1000, respectively. The experiments were conducted on a Windows 10 OS environment with an Intel Core i7-7700 @ 3.60 GHz CPU and 16 GB RAM. The results of 30 independent runs of each algorithm are collated using the best, worst, average, and standard deviation (SD) performance indicators. Further statistical analysis was carried out using the mean, standard deviation, and Friedman and Wilcoxon tests.
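The collation of the 30 independent runs into best/worst/average/SD indicators can be sketched as follows. This is a pure-Python stand-in for illustration only: the actual experiments were run in MATLAB, and `run_fn` is a hypothetical single-run driver.

```python
import math
import random

def collate_runs(run_fn, n_runs=30, seed=1):
    """Collect the best, worst, average, and sample SD over
    independent runs of a stochastic optimizer."""
    random.seed(seed)
    results = [run_fn() for _ in range(n_runs)]
    n = len(results)
    avg = sum(results) / n
    # sample standard deviation (n - 1 denominator)
    sd = math.sqrt(sum((r - avg) ** 2 for r in results) / (n - 1))
    return {"best": min(results), "worst": max(results),
            "average": avg, "SD": sd}
```

For a minimization problem, "best" is the minimum objective value observed over the 30 runs and "worst" the maximum, matching the indicator columns used in the result tables below.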
Table 2.
Algorithm control parameters
| Algorithm | References | Name of the parameter | Value of the parameter |
|---|---|---|---|
| AOA | [70] | 5 | |
| 0.05 | |||
| PSO | [5] | C1, C2 | 2 |
| Wmax | 0.9 | ||
| Wmin | 0.2 | ||
| CPSOGSA | [71] | 2.05 | |
| GWO | [72] | A | [0,2] |
| r1, r2 | [0,1] | ||
| SCA | [73] | a | 2 |
| SSA | [74] | c2, c3 | [0,1] |
| BBO | [75] | nKeep | 0.2 |
| Pmutation | 0.9 | ||
| DE | [76] | Lower bound of scaling factor | 0.2 |
| Upper bound of scaling factor | 0.8 | ||
| PCR | 0.8 |
Benchmark Test Function
The results of all the algorithms used in this study are presented in Table 3. The exploitation ability of the IDMO was tested using the unimodal, separable, and non-separable benchmark functions (F1-F9). The IDMO, AOA, and GWO found the global minimum for F1 and F2, whereas the DMO, CPSOGSA, PSO, DE, SSA, and SCA found near-optimal solutions and the BBO could not find the optimal solution. All the algorithms failed to find the global minimum for F5 and showed promising results for F6-F9. In most cases, the IDMO outperformed the DMO and performed competitively with the GWO. These results confirm the exploitative capability of the IDMO.
Table 3.
Result of classical benchmark functions
| Function | Dim | Global | Value | IDMO | DMO | AOA | CPSOGSA | PSO | BBO | DE | SSA | SCA | GWO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 50 | 0 | Best | 0.00E + 00 | 2.00E−07 | 0.00E + 00 | 0.00E + 00 | 6.49E−05 | 1.07E + 01 | 2.36E−05 | 2.88E−08 | 3.83E−02 | 0.00E + 00 |
| Worst | 0.00E + 00 | 6.81E−07 | 0.00E + 00 | 1.00E + 04 | 4.52E−03 | 1.78E + 01 | 6.72E−05 | 5.86E−08 | 2.12E + 02 | 0.00E + 00 | |||
| Average | 0.00E + 00 | 3.87E−07 | 0.00E + 00 | 5.00E + 02 | 8.00E−04 | 1.38E + 01 | 3.89E−05 | 4.30E−08 | 4.52E + 01 | 0.00E + 00 | |||
| SD | 0.00E + 00 | 1.13E−07 | 0.00E + 00 | 2.24E + 03 | 1.15E−03 | 1.68E + 00 | 1.00E−05 | 7.40E−09 | 7.45E + 01 | 0.00E + 00 | |||
| F2 | 50 | 0 | Best | 0.00E + 00 | 1.64E−04 | 0.00E + 00 | 1.41E-08 | 1.59E−03 | 1.36E + 00 | 4.43E−04 | 7.24E−02 | 9.04E−06 | 0.00E + 00 |
| Worst | 0.00E + 00 | 4.28E−04 | 0.00E + 00 | 7.31E + 01 | 3.59E + 01 | 1.81E + 00 | 8.23E−04 | 5.28E + 00 | 3.36E−02 | 0.00E + 00 | |||
| Average | 0.00E + 00 | 2.80E−04 | 0.00E + 00 | 5.87E + 00 | 2.00E + 00 | 1.57E + 00 | 5.88E−04 | 2.04E + 00 | 5.38E−03 | 0.00E + 00 | |||
| SD | 0.00E + 00 | 8.60E−05 | 0.00E + 00 | 1.60E + 01 | 7.99E + 00 | 1.15E−01 | 1.04E−04 | 1.33E + 00 | 7.76E−03 | 0.00E + 00 | |||
| F3 | 50 | 0 | Best | 0.00E + 00 | 1.23E−01 | 0.00E + 00 | 5.46E + 03 | 2.17E + 03 | 8.61E + 02 | 7.18E + 04 | 4.84E + 02 | 9.10E + 03 | 0.00E + 00 |
| Worst | 6.95E + 00 | 8.64E−01 | 3.10E−01 | 2.33E + 04 | 5.45E + 03 | 1.63E + 03 | 1.07E + 05 | 7.73E + 03 | 5.56E + 04 | 3.45E−08 | |||
| Average | 3.55E−01 | 4.14E−01 | 5.15E−02 | 1.33E + 04 | 4.04E + 03 | 1.20E + 03 | 9.15E + 04 | 2.05E + 03 | 2.46E + 04 | 3.31E−09 | |||
| SD | 1.55E + 00 | 2.02E-01 | 7.07E−02 | 5.91E + 03 | 9.77E + 02 | 2.11E + 02 | 9.99E + 03 | 1.67E + 03 | 1.07E + 04 | 8.82E−09 | |||
| F4 | 50 | 0 | Best | 0.00E + 00 | 8.53E−01 | 3.85E−02 | 3.83E + 01 | 7.20E + 00 | 1.77E + 00 | 1.94E + 01 | 7.62E + 00 | 3.50E + 01 | 0.00E + 00 |
| Worst | 1.06E−03 | 9.38E−01 | 7.60E−02 | 9.35E + 01 | 1.42E + 01 | 2.33E + 00 | 2.60E + 01 | 1.77E + 01 | 7.23E + 01 | 0.00E + 00 | |||
| Average | 5.32E−05 | 9.03E−01 | 4.92E−02 | 7.52E + 01 | 9.84E + 00 | 2.10E + 00 | 2.23E + 01 | 1.29E + 01 | 5.61E + 01 | 0.00E + 00 | |||
| SD | 2.38E−04 | 1.92E−02 | 9.13E−03 | 1.98E + 01 | 1.70E + 00 | 1.36E−01 | 2.19E + 00 | 2.33E + 00 | 8.88E + 00 | 0.00E + 00 | |||
| F5 | 50 | 0 | Best | 4.36E + 01 | 1.08E + 00 | 4.77E + 01 | 4.46E + 01 | 4.64E + 01 | 2.44E + 02 | 7.59E + 01 | 4.55E + 01 | 3.99E + 02 | 4.59E + 01 |
| Worst | 4.59E + 01 | 6.12E + 00 | 4.89E + 01 | 3.05E + 02 | 2.07E + 02 | 1.37E + 03 | 2.76E + 02 | 8.13E + 02 | 1.26E + 06 | 4.84E + 01 | |||
| Average | 4.45E + 01 | 3.40E + 00 | 4.85E + 01 | 8.00E + 01 | 1.31E + 02 | 3.98E + 02 | 1.82E + 02 | 1.55E + 02 | 3.05E + 05 | 4.68E + 01 | |||
| SD | 5.60E−01 | 1.56E + 00 | 3.01E−01 | 7.06E + 01 | 3.95E + 01 | 2.44E + 02 | 7.13E + 01 | 2.23E + 02 | 4.11E + 05 | 6.95E−01 | |||
| F6 | 50 | 0 | Best | 1.70E−02 | 2.17E−07 | 5.39E + 00 | 0.00E + 00 | 1.90E−05 | 1.03E + 01 | 1.50E−05 | 3.48E−08 | 8.35E + 00 | 7.11E−01 |
| Worst | 7.79E−01 | 6.96E−07 | 6.88E + 00 | 7.89E−01 | 4.33E−03 | 1.87E + 01 | 7.00E−05 | 6.35E−08 | 6.41E + 02 | 2.50E + 00 | |||
| Average | 4.76E−01 | 4.05E−07 | 6.23E + 00 | 4.81E−02 | 7.00E−04 | 1.26E + 01 | 3.89E−05 | 4.37E−08 | 7.24E + 01 | 1.51E + 00 | |||
| SD | 2.30E−01 | 1.47E−07 | 4.01E−01 | 1.76E−01 | 1.01E−03 | 1.93E + 00 | 1.36E−05 | 7.17E−09 | 1.43E + 02 | 4.46E−01 | |||
| F7 | 50 | 0 | Best | 4.14E−04 | 3.43E−02 | 1.04E−06 | 8.22E−02 | 3.01E−02 | 5.71E−03 | 5.92E−02 | 5.10E−02 | 2.19E−02 | 1.32E−04 |
| Worst | 5.58E−03 | 8.82E−02 | 5.14E−05 | 2.10E−01 | 7.77E−02 | 1.42E−02 | 9.32E−02 | 2.70E−01 | 2.50E + 00 | 2.20E−03 | |||
| Average | 1.83E−03 | 5.76E−02 | 1.76E−05 | 1.31E−01 | 4.80E−02 | 9.38E−03 | 7.47E−02 | 1.76E−01 | 3.65E−01 | 7.48E−04 | |||
| SD | 1.48E−03 | 1.25E−02 | 1.42E−05 | 3.79E−02 | 1.36E−02 | 2.96E−03 | 1.02E−02 | 5.41E−02 | 5.58E−01 | 4.68E−04 | |||
| F8 | 50 | −20,949 | Best | −1.40E + 04 | −6.60E + 03 | −8.83E + 03 | −1.40E + 04 | −1.44E + 04 | −1.42E + 04 | −1.40E + 04 | −1.46E + 04 | −5.94E + 03 | −1.12E + 04 |
| Worst | −1.02E + 04 | −5.38E + 03 | −7.19E + 03 | −1.07E + 04 | −1.12E + 04 | −1.15E + 04 | −1.27E + 04 | −1.05E + 04 | −4.80E + 03 | −5.71E + 03 | |||
| Average | −1.16E + 04 | −6.00E + 03 | −7.93E + 03 | −1.22E + 04 | −1.27E + 04 | −1.27E + 04 | −1.33E + 04 | −1.23E + 04 | −5.27E + 03 | −9.07E + 03 | |||
| SD | 1.06E + 03 | 3.30E + 02 | 5.25E + 02 | 7.85E + 02 | 7.06E + 02 | 6.93E + 02 | 3.75E + 02 | 1.07E + 03 | 3.40E + 02 | 1.19E + 03 | |||
| F9 | 50 | 0 | Best | 0.00E + 00 | 8.28E−01 | 0.00E + 00 | 1.23E + 02 | 1.62E + 02 | 5.63E + 01 | 1.81E + 02 | 4.28E + 01 | 8.46E−01 | 0.00E + 00 |
| Worst | 1.87E + 01 | 1.13E + 00 | 0.00E + 00 | 3.15E + 02 | 2.96E + 02 | 1.16E + 02 | 2.24E + 02 | 1.41E + 02 | 2.07E + 02 | 1.02E + 01 | |||
| Average | 1.61E + 00 | 1.01E + 00 | 0.00E + 00 | 2.05E + 02 | 2.14E + 02 | 7.33E + 01 | 2.00E + 02 | 8.02E + 01 | 7.51E + 01 | 6.85E−01 | |||
| SD | 5.03E + 00 | 7.39E−02 | 0.00E + 00 | 4.96E + 01 | 3.64E + 01 | 1.69E + 01 | 1.01E + 01 | 2.43E + 01 | 6.17E + 01 | 2.38E + 00 | |||
| F10 | 50 | 0 | Best | 0.00E + 00 | 4.32E−04 | 0.00E + 00 | 0.00E + 00 | 1.68E−03 | 9.22E−01 | 8.21E−04 | 1.27E + 00 | 8.37E−01 | 0.00E + 00 |
| Worst | 0.00E + 00 | 1.54E−03 | 0.00E + 00 | 1.91E + 01 | 1.47E + 00 | 1.49E + 00 | 1.74E−03 | 3.67E + 00 | 2.05E + 01 | 0.00E + 00 | |||
| Average | 0.00E + 00 | 7.31E−04 | 0.00E + 00 | 7.64E + 00 | 4.29E−01 | 1.18E + 00 | 1.34E−03 | 2.46E + 00 | 1.84E + 01 | 0.00E + 00 | |||
| SD | 0.00E + 00 | 2.56E−04 | 0.00E + 00 | 8.72E + 00 | 5.63E−01 | 1.18E−01 | 2.29E−04 | 6.08E−01 | 5.08E + 00 | 0.00E + 00 | |||
| F11 | 50 | 0 | Best | 0.00E + 00 | 3.59E−06 | 9.76E−02 | 1.27E + 00 | 9.93E−05 | 1.08E + 00 | 2.35E−05 | 9.60E−07 | 1.50E−01 | 0.00E + 00 |
| Worst | 0.00E + 00 | 6.24E−03 | 7.64E−01 | 9.22E + 01 | 5.88E−02 | 1.17E + 00 | 2.93E−04 | 3.94E−02 | 3.47E + 00 | 1.88E−02 | |||
| Average | 0.00E + 00 | 8.27E−04 | 3.89E−01 | 2.17E + 01 | 1.06E−02 | 1.12E + 00 | 9.31E−05 | 7.31E−03 | 1.29E + 00 | 1.63E−03 | |||
| SD | 0.00E + 00 | 1.46E−03 | 1.79E−01 | 3.58E + 01 | 1.65E−02 | 2.21E−02 | 6.69E−05 | 9.82E−03 | 7.62E−01 | 5.08E−03 | |||
| F12 | 50 | 0 | Best | 3.25E−05 | 1.03E + 00 | 5.01E−01 | 4.77E + 00 | 2.61E−06 | 1.36E−02 | 7.08E−06 | 3.07E + 00 | 2.21E + 00 | 2.29E−02 |
| Worst | 1.83E−02 | 4.44E + 00 | 6.59E−01 | 1.10E + 01 | 7.58E−01 | 2.87E−02 | 4.50E−05 | 1.35E + 01 | 6.61E + 06 | 8.12E−02 | |||
| Average | 7.47E−03 | 2.48E + 00 | 5.96E−01 | 7.54E + 00 | 1.44E−01 | 2.11E−02 | 2.12E−05 | 7.48E + 00 | 6.31E + 05 | 4.71E−02 | |||
| SD | 5.59E−03 | 1.04E + 00 | 3.35E−02 | 2.07E + 00 | 2.24E−01 | 4.27E−03 | 9.94E−06 | 2.25E + 00 | 1.54E + 06 | 1.70E−02 | |||
| F13 | 50 | 0 | Best | 3.89E−03 | 6.80E−01 | 4.60E + 00 | 1.10E−02 | 4.25E−04 | 4.55E−01 | 3.83E−05 | 1.10E−02 | 7.71E + 00 | 1.03E + 00 |
| Worst | 5.78E−01 | 3.83E + 00 | 4.95E + 00 | 4.95E + 01 | 2.34E−01 | 7.56E−01 | 1.79E−04 | 7.93E + 01 | 5.49E + 07 | 2.19E + 00 | |||
| Average | 3.27E−01 | 1.50E + 00 | 4.82E + 00 | 2.31E + 01 | 5.43E-02 | 5.85E−01 | 1.04E−04 | 3.58E + 01 | 4.42E + 06 | 1.55E + 00 | |||
| SD | 1.51E−01 | 7.69E−01 | 1.04E−01 | 1.38E + 01 | 7.41E−02 | 7.84E−02 | 3.44E−05 | 2.67E + 01 | 1.23E + 07 | 2.98E−01 | |||
| F14 | 2 | 1 | Best | 9.98E−01 | 4.00E−03 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 |
| Worst | 9.98E−01 | 1.94E−02 | 1.27E + 01 | 1.27E + 01 | 9.98E−01 | 1.55E + 01 | 9.98E−01 | 9.98E−01 | 1.00E + 00 | 1.27E + 01 | |||
| Average | 9.98E−01 | 7.61E−03 | 8.22E + 00 | 2.97E + 00 | 9.98E−01 | 4.58E + 00 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 2.76E + 00 | |||
| SD | 5.09E−17 | 4.11E−03 | 4.68E + 00 | 2.90E + 00 | 0.00E + 00 | 3.68E + 00 | 0.00E + 00 | 1.53E−16 | 1.17E−03 | 3.51E + 00 | |||
| F15 | 4 | 0.003 | Best | 3.07E−04 | 1.94E−06 | 3.86E−04 | 3.07E−04 | 3.07E−04 | 5.27E−04 | 4.37E−04 | 3.74E−04 | 3.96E−04 | 3.07E−04 |
| Worst | 3.07E−04 | 6.35E−04 | 8.00E−02 | 2.04E−02 | 2.04E−02 | 1.38E−03 | 7.63E−04 | 1.22E−03 | 1.35E−03 | 3.08E−04 | |||
| Average | 3.07E−04 | 3.61E−04 | 1.08E−02 | 4.62E−03 | 1.82E−03 | 6.64E−04 | 6.11E−04 | 7.26E−04 | 8.55E−04 | 3.07E−04 | |||
| SD | 2.76E−13 | 1.75E−04 | 1.84E−02 | 8.08E−03 | 4.39E−03 | 1.78E−04 | 1.01E−04 | 2.54E−04 | 3.46E−04 | 7.80E−09 | |||
| F16 | 2 | 3 | Best | 3.00E + 00 | 4.86E−04 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 |
| Worst | 3.00E + 00 | 5.55E−02 | 3.00E + 01 | 3.00E + 00 | 3.00E + 00 | 3.00E + 01 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | |||
| Average | 3.00E + 00 | 1.24E−02 | 4.35E + 00 | 3.00E + 00 | 3.00E + 00 | 4.35E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | 3.00E + 00 | |||
| SD | 9.34E−16 | 1.41E−02 | 6.04E + 00 | 1.25E−15 | 2.70E−16 | 6.04E + 00 | 6.52E−16 | 6.18E−14 | 1.42E−05 | 3.45E−06 | |||
| F17 | 3 | −3.86 | Best | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 |
| Worst | −3.86E + 00 | −3.86E + 00 | −3.85E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.85E + 00 | −3.85E + 00 | |||
| Average | −3.86E + 00 | −3.86E + 00 | −3.85E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | -3.86E + 00 | −3.86E + 00 | −3.86E + 00 | −3.86E + 00 | |||
| SD | 2.28E−15 | 2.28E−15 | 3.96E−03 | 2.05E−15 | 2.28E−15 | 1.86E−15 | 2.28E−15 | 1.11E−14 | 2.50E−03 | 2.83E−03 | |||
| F18 | 6 | −3.32 | Best | −3.32E + 00 | −3.32E + 00 | −3.23E + 00 | −3.32E + 00 | −3.32E + 00 | −3.32E + 00 | −3.32E + 00 | −3.32E + 00 | −3.20E + 00 | −3.32E + 00 |
| Worst | −3.32E + 00 | −3.32E + 00 | −2.93E + 00 | −3.20E + 00 | −3.20E + 00 | −3.20E + 00 | −3.32E + 00 | −3.20E + 00 | −2.59E + 00 | −3.08E + 00 | |||
| Average | −3.32E + 00 | −3.32E + 00 | −3.12E + 00 | −3.22E + 00 | −3.27E + 00 | −3.28E + 00 | −3.32E + 00 | −3.23E + 00 | −3.02E + 00 | −3.26E + 00 | |||
| SD | 5.61E−12 | 4.89E−16 | 5.86E−02 | 4.36E−02 | 6.07E−02 | 5.82E−02 | 4.05E−05 | 4.90E−02 | 1.55E−01 | 7.27E−02 | |||
| F19 | 4 | −10.1532 | Best | −1.02E + 01 | −1.02E + 01 | −6.59E + 00 | −1.02E + 01 | −1.02E + 01 | −1.02E + 01 | −1.02E + 01 | −1.02E + 01 | −6.47E + 00 | −1.02E + 01 |
| Worst | −1.02E + 01 | -1.02E + 01 | −2.77E + 00 | −2.63E + 00 | −2.63E + 00 | −2.63E + 00 | −1.02E + 01 | −2.63E + 00 | −4.97E−01 | −5.06E + 00 | |||
| Average | −1.02E + 01 | −1.02E + 01 | −4.45E + 00 | −5.89E + 00 | −7.89E + 00 | −7.14E + 00 | -1.02E + 01 | −8.90E + 00 | −4.06E + 00 | −9.90E + 00 | |||
| SD | 6.17E−15 | 1.03E−11 | 9.60E−01 | 3.36E + 00 | 3.25E + 00 | 3.50E + 00 | 2.21E−05 | 2.64E + 00 | 1.85E + 00 | 1.14E + 00 | |||
| F20 | 4 | −10.4028 | Best | −1.04E + 01 | −1.04E + 01 | −8.22E + 00 | −1.04E + 01 | −1.04E + 01 | −1.04E + 01 | −1.04E + 01 | −1.04E + 01 | −8.09E + 00 | −1.04E + 01 |
| Worst | −1.04E + 01 | −1.04E + 01 | −2.79E + 00 | −2.75E + 00 | −2.77E + 00 | −1.84E + 00 | −1.04E + 01 | −5.09E + 00 | −9.07E−01 | −1.04E + 01 | |||
| Average | −1.04E + 01 | −1.04E + 01 | −4.74E + 00 | −8.44E + 00 | −8.19E + 00 | −5.87E + 00 | −1.04E + 01 | −9.87E + 00 | −4.15E + 00 | −1.04E + 01 | |||
| SD | 3.67E−15 | 3.42E−03 | 1.38E + 00 | 3.12E + 00 | 3.17E + 00 | 3.50E + 00 | 5.81E−05 | 1.63E + 00 | 2.28E + 00 | 1.61E−04 | |||
| F21 | 4 | −10.5363 | Best | −1.05E + 01 | −1.05E + 01 | −6.02E + 00 | −1.05E + 01 | −1.05E + 01 | −1.05E + 01 | −1.05E + 01 | −1.05E + 01 | −8.34E + 00 | −1.05E + 01 |
| Worst | −1.05E + 01 | −1.05E + 01 | −2.17E + 00 | −2.42E + 00 | −2.42E + 00 | −2.42E + 00 | −1.05E + 01 | −2.43E + 00 | −9.47E−01 | −1.05E + 01 | |||
| Average | −1.05E + 01 | −1.05E + 01 | −4.30E + 00 | −8.06E + 00 | −9.72E + 00 | −7.16E + 00 | −1.05E + 01 | −8.56E + 00 | −4.96E + 00 | −1.05E + 01 | |||
| SD | 5.04E−15 | 1.06E−04 | 1.01E + 00 | 3.53E + 00 | 2.50E + 00 | 3.85E + 00 | 4.99E−13 | 3.18E + 00 | 1.66E + 00 | 1.65E−04 | |||
| Mean rank | 2.88 | 3.64 | 6.17 | 7.67 | 5.64 | 6.21 | 4.29 | 5.98 | 8.76 | 3.76 | |||
| Rank | 1 | 2 | 7 | 9 | 5 | 8 | 4 | 6 | 10 | 3 | |||
The study used the multimodal, separable, and non-separable benchmark functions (F10-F21) to test the exploration ability of the IDMO. All the algorithms performed relatively well, finding optimal or near-optimal solutions; the IDMO, however, returned the best solution for most of the functions and performed competitively for the others. The null hypothesis of the related-samples Friedman test is that the distributions of the IDMO, DMO, CPSOGSA, AOA, GWO, DE, BBO, SCA, SSA, and PSO are the same. The test results are summarized in Table 4; the test returned a p value of 0.000, far below the significance level of 0.05, so the null hypothesis is rejected. The IDMO returned the lowest mean rank, as shown in Fig. 4, meaning it ranked first across all 21 benchmark functions.
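The Friedman statistic reported in Table 4 follows the standard chi-square formulation over per-function ranks, with k − 1 = 9 degrees of freedom for the ten algorithms. A minimal illustration (ignoring the tie correction that statistical packages apply) is:

```python
def friedman_statistic(results):
    """Friedman chi-square statistic for a results matrix
    (rows = benchmark functions, columns = algorithms); smaller
    values rank better. Ties are not handled in this sketch."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        # rank algorithms 1..k within each function (1 = best)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    mean_ranks = [s / n for s in rank_sums]
    chi2 = (12.0 * n / (k * (k + 1))) * sum(
        (r - (k + 1) / 2.0) ** 2 for r in mean_ranks)
    return chi2, mean_ranks
```

The mean ranks returned here are the quantities plotted in Fig. 4; the chi-square value is compared against the critical value at 9 degrees of freedom to obtain the p value.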
Table 4.
Summary of Friedman’s test
| Total N | 21 |
|---|---|
| Test statistic | 75.228 |
| Degree of freedom | 9 |
| Asymptotic Sig. (two-sided test) | 0.000 |
Fig. 4.
Mean ranking of the 10 algorithms used
The Wilcoxon signed-rank test on the classical benchmark test functions, shown in Table 5, confirms the superior optimization capability of the IDMO. The IDMO obtained higher R + sums than R − sums in all nine pairwise comparisons. The p values returned by the Wilcoxon test at the tolerance level α = 0.05 show a significant difference in 7 of the 9 cases (the exceptions being the DMO and DE comparisons), which means that the IDMO significantly outperforms 7 of the 9 algorithms on the classical test functions.
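The R +, R −, and Z quantities in Table 5 come from the paired Wilcoxon signed-rank procedure. A minimal sketch using the normal approximation (no tie or continuity correction, so values will differ slightly from SPSS output) is:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Paired Wilcoxon signed-rank test: returns (R_plus, R_minus, z).
    Zero differences are dropped, as SPSS reports them as 'Ties'."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    # assign average ranks to the absolute differences
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 2) / 2.0          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(r_plus, r_minus)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return r_plus, r_minus, (w - mean) / sd
```

Here `a` would hold one algorithm's per-function results and `b` the IDMO's, so a large R + sum means the paired algorithm produced worse (larger) values than the IDMO on most functions.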
Table 5.
Wilcoxon signed ranks test for classical benchmark functions
| Algorithms | N | Mean rank | Sum of ranks | Z | Asymp. Sig. (two-tailed) | |
|---|---|---|---|---|---|---|
| DMO–IDMO | Negative ranks | 5 | 12.40 | 62.00 | −0.686b | 0.492 |
| Positive ranks | 12 | 7.58 | 91.00 | |||
| Ties | 4 | |||||
| Total | 21 | |||||
| AOA–IDMO | Negative ranks | 3 | 5.67 | 17.00 | −2.983b | 0.003 |
| Positive ranks | 15 | 10.27 | 154.00 | |||
| Ties | 3 | |||||
| Total | 21 | |||||
| CPSOGSA–IDMO | Negative ranks | 2 | 11.00 | 22.00 | −2.938b | 0.003 |
| Positive ranks | 17 | 9.88 | 168.00 | |||
| Ties | 2 | |||||
| Total | 21 | |||||
| PSO–IDMO | Negative ranks | 3 | 11.00 | 33.00 | −2.286b | 0.022 |
| Positive ranks | 15 | 9.20 | 138.00 | |||
| Ties | 3 | |||||
| Total | 21 | |||||
| BBO–IDMO | Negative ranks | 1 | 19.00 | 19.00 | −3.211b | 0.001 |
| Positive ranks | 19 | 10.05 | 191.00 | |||
| Ties | 1 | |||||
| Total | 21 | |||||
| DE–IDMO | Negative ranks | 4 | 9.00 | 36.00 | −1.036b | 0.300 |
| Positive ranks | 10 | 6.90 | 69.00 | |||
| Ties | 7 | |||||
| Total | 21 | |||||
| SSA–IDMO | Negative ranks | 2 | 11.50 | 23.00 | −2.722b | 0.006 |
| Positive ranks | 16 | 9.25 | 148.00 | |||
| Ties | 3 | |||||
| Total | 21 | |||||
| SCA–IDMO | Negative ranks | 0 | 0.00 | 0.00 | −3.920b | < 0.001 |
| Positive ranks | 20 | 10.50 | 210.00 | |||
| Ties | 1 | |||||
| Total | 21 | |||||
| GWO–IDMO | Negative ranks | 4 | 5.00 | 20.00 | −2.040b | 0.041 |
| Positive ranks | 10 | 8.50 | 85.00 | |||
| Ties | 7 | |||||
| Total | 21 | |||||
bBased on negative ranks
Also, the IDMO returned smaller standard deviation values than the other existing algorithms, which translates to the IDMO being a more stable algorithm. The convergence rate comparison in Fig. 5 confirms this assertion and reveals that the IDMO converges toward the best solution early in the iteration process. This convergence can be attributed to the alpha effectively pulling the other population members toward the optimal solution early in the iterations. In some cases (F2-F4), this effect is seen in the middle of the iteration process, while for F8, the IDMO converged toward the optimal solution late in the iteration process.
Fig. 5.
Convergence rate comparison for classical and CEC2020 benchmark functions
The CEC2020 Test Functions
The robustness and stability of the proposed IDMO were further tested using all the functions in the CEC 2020 benchmark test suite; the results are presented in Table 6. The IDMO returned better solutions than the other algorithms, finding the global optimum, or values very close to it, for F22, F23, F25, F27, and F29, and performing competitively on the remaining functions. The solutions returned by the AOA, GWO, and CPSOGSA are also competitive. The Friedman test was used to carry out a non-parametric analysis of the results, as presented in Table 7. The IDMO again returned the lowest mean rank, ranking first among all the algorithms compared, as shown in Fig. 6. The convergence rate comparison for the CEC 2020 functions is also shown in Fig. 5; the IDMO, closely followed by the AOA, GWO, and CPSOGSA, provided the best convergence.
Table 6.
Results of CEC2020 test functions
| Function | Global Opt | Value | IDMO | DMO | AOA | CPSOGSA | PSO | BBO | DE | SSA | SCA | GWO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F22 | 100 | Best | 100.07 | 409.33 | 1.31E + 09 | 103.32 | 107.58 | 101.45 | 326.91 | 18,406,000 | 2.06E + 08 | 4132.3 |
| Worst | 101.43 | 22,626 | 1.34E + 10 | 12,616 | 5618 | 9479.3 | 5794.6 | 12,320 | 1.38E + 09 | 4.81E + 08 | ||
| Average | 100.33 | 5538.9 | 5.06E + 09 | 4335.2 | 1676.6 | 2057.6 | 1992.1 | 2825.4 | 6.75E + 08 | 1.96E + 07 | ||
| SD | 0.27695 | 6041.8 | 2.40E + 09 | 3972 | 1566.3 | 2482.6 | 1501.2 | 2797.2 | 2.97E + 08 | 8.75E + 07 | ||
| F23 | 1100 | Best | 1108.1 | 1499.5 | 1506.9 | 1347.8 | 1115.1 | 1250.5 | 1308.2 | 1482.1 | 1738.8 | 1116.8 |
| Worst | 1507 | 2439.7 | 2621.8 | 2859.8 | 1905.4 | 2390.9 | 1754.7 | 2526.7 | 2785.6 | 1876.5 | ||
| Average | 1242.2 | 2064.3 | 2018.7 | 2188.3 | 1466.3 | 1783.9 | 1552.7 | 1916.5 | 2301.3 | 1524.2 | ||
| SD | 97.941 | 206.86 | 310.98 | 358.44 | 187.56 | 263.22 | 111.6 | 300.71 | 250.68 | 210.7 | ||
| F24 | 700 | Best | 713.47 | 724.85 | 766.97 | 729.55 | 712.76 | 714.08 | 716.08 | 733.85 | 762.11 | 717.17 |
| Worst | 727.11 | 744.13 | 828.09 | 821.18 | 732.39 | 729.92 | 727.08 | 751.65 | 796.07 | 746.98 | ||
| Average | 718.36 | 733.3 | 801 | 759.03 | 721.39 | 720.9 | 722.16 | 731.28 | 775.54 | 730.58 | ||
| SD | 3.2551 | 4.6801 | 15.2 | 23.211 | 4.9189 | 4.36 | 2.245 | 10.17 | 9.1564 | 10.56 | ||
| F25 | 1900 | Best | 1900.6 | 1900.5 | 3854.5 | 1900.7 | 1900.5 | 1900.1 | 1900.8 | 1903 | 1909.1 | 1900.5 |
| Worst | 1901.8 | 1902.7 | 186,740 | 1936.5 | 1901.9 | 1904.6 | 1901.9 | 1903.6 | 2082.2 | 1903.5 | ||
| Average | 1901.3 | 1901.3 | 62,921 | 1902.8 | 1901.1 | 1901.7 | 1901.6 | 1901.7 | 1923.9 | 1901.9 | ||
| SD | 0.30019 | 0.56783 | 52,820 | 6.4186 | 0.37742 | 0.94168 | 0.28284 | 0.7055 | 30.854 | 0.79346 | ||
| F26 | 1700 | Best | 1719.8 | 2689.5 | 34,413 | 2180.1 | 2261.1 | 2208 | 4677.1 | 2951.6 | 5195.9 | 3109.4 |
| Worst | 1814.9 | 9413.5 | 369,980 | 61,662 | 8108.7 | 121,810 | 98,655 | 17,343 | 69,371 | 352,510 | ||
| Average | 1759.8 | 4343.8 | 170,260 | 12,593 | 4208 | 25,290 | 27,794 | 5886.3 | 19,164 | 65,047 | ||
| SD | 21.325 | 1491.5 | 92,783 | 14,552 | 1651.5 | 33,507 | 19,248 | 3862.2 | 14,917 | 128,330 | ||
| F27 | 1600 | Best | 1600.5 | 1600.6 | 1600.5 | 1600.5 | 1600.5 | 1600.5 | 1600.5 | 1600.7 | 1601 | 1600.5 |
| Worst | 1600.5 | 1601.1 | 1630.8 | 1660.7 | 1659 | 1619.3 | 1600.9 | 1603.1 | 1602.4 | 1622.2 | ||
| Average | 1600.5 | 1600.9 | 1615.2 | 1616.6 | 1612.9 | 1603.3 | 1600.7 | 1601.1 | 1601.5 | 1603.2 | ||
| SD | 0.00086963 | 0.12188 | 8.8082 | 22.669 | 11.984 | 5.9223 | 0.083246 | 0.49274 | 0.29381 | 5.9343 | ||
| F28 | 2100 | Best | 2103.6 | 2360 | 2707.9 | 2527.5 | 2134.6 | 2125.9 | 2239.1 | 3520 | 3206.3 | 2533 |
| Worst | 2125.1 | 2968.9 | 57,694 | 23,244 | 4599.7 | 22,806 | 5027.4 | 19,475 | 26,612 | 17,583 | ||
| Average | 2109.2 | 2579.1 | 8043.3 | 8128.1 | 2701.9 | 8606.8 | 3169.1 | 6999.2 | 10,094 | 8628.2 | ||
| SD | 4.4189 | 134.97 | 9733.1 | 6196.7 | 537.8 | 7009.3 | 790.72 | 4990.2 | 5331.4 | 4655.6 | ||
| F29 | 2200 | Best | 2200.1 | 2239.3 | 2417.6 | 2246.9 | 2200 | 2300.6 | 2274.2 | 2311.5 | 2273.1 | 2301.5 |
| Worst | 2303.2 | 2303.5 | 3346.4 | 3376.6 | 2304.8 | 2309.7 | 2301.6 | 2309.3 | 2424.6 | 2321.4 | ||
| Average | 2279.3 | 2287.8 | 2791.4 | 2336.5 | 2298.7 | 2302.6 | 2299.9 | 2296 | 2368.4 | 2307.9 | ||
| SD | 42.131 | 19.046 | 254.97 | 196.71 | 18.67 | 1.6989 | 4.9212 | 22.805 | 31.759 | 4.9224 | ||
| F30 | 2400 | Best | 2416.3 | 2500 | 2688.9 | 2500 | 2500 | 2500 | 2560.5 | 2503.2 | 2534.2 | 2729.5 |
| Worst | 2736.9 | 2614.1 | 2975.6 | 2824.4 | 2764.2 | 2767.4 | 2753.8 | 2768.8 | 2801.5 | 2771 | ||
| Average | 2539.7 | 2522.1 | 2827.1 | 2758.7 | 2723.9 | 2715.7 | 2725.6 | 2747.5 | 2768.2 | 2748.3 | ||
| SD | 65.097 | 29.925 | 70.872 | 72.626 | 55.138 | 86.578 | 44.927 | 8.3183 | 55.568 | 12.484 | ||
| F31 | 2500 | Best | 2601.8 | 2898 | 2950.8 | 2898.1 | 2600.1 | 2897.8 | 2899.9 | 2904.7 | 2923.3 | 2900.2 |
| Worst | 2898.1 | 2940.7 | 3445.1 | 3024.2 | 2948 | 2949.4 | 2947.5 | 2950.6 | 2971.2 | 2949.9 | ||
| Average | 2887.9 | 2903.9 | 3149.9 | 2940.4 | 2907.5 | 2932.1 | 2913.9 | 2925.7 | 2956.8 | 2934.3 | ||
| SD | 54.03 | 9.8305 | 106.65 | 37.844 | 62.419 | 22.393 | 13.589 | 23.926 | 14.119 | 17.203 | ||
| Mean rank | 1.25 | 3.75 | 9.2 | 7.70 | 3.20 | 5.35 | 4.30 | 4.95 | 8.50 | 5.80 | ||
| Rank | 1 | 3 | 10 | 8 | 2 | 6 | 4 | 5 | 9 | 7 | ||
Table 7.
Summary of Friedman’s test
| Total N | 10 |
|---|---|
| Test statistic | 62.694 |
| Degree of freedom | 9 |
| Asymptotic sig. (2−sided test) | 0.000 |
Fig. 6.
Mean ranking for CEC functions
A further Wilcoxon signed-rank test for the CEC 2020 test functions is presented in Table 8. P values below the 0.05 significance level indicate that the results obtained by the IDMO significantly outperform those of the other algorithms considered in this study. This assertion is further confirmed by the IDMO obtaining higher R + sums than R − sums or ties in all the cases. The implication is that the IDMO significantly outperforms all the compared algorithms on the CEC 2020 test functions. Thus, it can be concluded that the efficiency, searchability, and robustness of the IDMO are better than those of the other state-of-the-art algorithms used in this study.
Table 8.
Wilcoxon signed ranks test for CEC 2020 test functions
| Algorithm | N | Mean rank | Sum of ranks | Z | Asymp. sig. (two-tailed) | |
|---|---|---|---|---|---|---|
| DMO—IDMO | Negative ranks | 1 | 5.00 | 5.00 | −2.073b | 0.038 |
| Positive ranks | 8 | 5.00 | 40.00 | |||
| Ties | 1 | |||||
| Total | 10 | |||||
| AOA—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| CPSOGSA—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| PSO—IDMO | Negative ranks | 1 | 1.00 | 1.00 | −2.701b | 0.007 |
| Positive ranks | 9 | 6.00 | 54.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| BBO—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| DE—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| SSA—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| SCA—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
| GWO—IDMO | Negative ranks | 0 | 0.00 | 0.00 | −2.803b | 0.005 |
| Positive ranks | 10 | 5.50 | 55.00 | |||
| Ties | 0 | |||||
| Total | 10 | |||||
bBased on negative ranks
Engineering Problems
This section presents the results of experiments conducted using the IDMO to optimize twelve (12) benchmark problems from different engineering domains. The population size and the maximum number of iterations used for the IDMO, DMO, and AOA are 50 and 1000, respectively; the results for the remaining comparative algorithms are taken from their respective literature. The number of function evaluations (FEs) is set at . The experiments were conducted on a Windows 10 OS environment with an Intel Core i7-7700 @ 3.60 GHz CPU and 16 GB RAM. The results of 30 independent runs of each algorithm are collated using the best, worst, average, and SD performance indicators, and further statistical analysis was carried out using the mean, standard deviation, and Friedman and Wilcoxon tests. Details about the 12 engineering problems can be found in the following references:
The compression spring design problem (CSD) [37]
The pressure vessel design problem (PVD) [38]
The speed reducer design problem (SRD) [39]
The three-bar truss design problem (3-BTD) [40]
The gear train design problem (GTD) [41]
The cantilever beam design problem (CBD) [42]
The optimal design of I-shaped beam (IBD) [43]
The tubular column design (TCD) [44]
The piston lever design problem (PLD) [43]
The corrugated bulkhead design problem (Cbhd) [45]
The reinforced concrete beam design problem (RCB) [46]
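These design problems are constrained, and population-based optimizers typically handle constraints through a penalty on the objective. The paper's exact constraint-handling scheme is not specified in this section, so the static-penalty formulation below, including the penalty weight, is an illustrative assumption only.

```python
def penalized(objective, constraints, x, penalty=1e6):
    """Static-penalty fitness for a constrained minimization problem.

    `constraints` are functions g(x) expected to satisfy g(x) <= 0;
    each violation adds penalty * violation to the objective. The
    penalty weight is an illustrative choice, not taken from the paper.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation
```

With a large enough penalty weight, infeasible designs always score worse than feasible ones, so the population is steered toward the feasible region while the unconstrained update rules stay unchanged.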
Results of the Welded Beam Design Problem (WBD)
The results and statistical analysis of optimizing the WBD using the IDMO, alongside existing results from the literature, are presented in Table 9. The IDMO found the least minimum cost, and Fig. 7 shows that this value was reached at the 400th iteration, meaning the result was achieved within 20,000 function evaluations (FEs). The least minimum cost for the IDMO was achieved with the combination of the four (4) problem design variables shown in Table 9. The IDMO also returned small average and standard deviation values, supporting the stability and robustness of the IDMO in solving the WBD problem.
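The FE counts quoted in this section follow directly from the population size and the iteration at which convergence occurred:

```python
def function_evaluations(pop_size, iterations):
    """FEs consumed = population size x iterations completed,
    assuming one objective evaluation per member per iteration."""
    return pop_size * iterations
```

For example, convergence at the 400th iteration with a population of 50 gives the 20,000 FEs cited above.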
Table 9.
The comparative results of WBD
| Algorithm | h | l | t | b | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 0.200952683 | 3.353311664 | 9.036343673 | 0.20574711 | 1.6952 | 1.781 | 1.7192 | 0.021464 | 20,000 |
| DMO [32] | 0.205234326 | 3.26225324 | 9.037040824 | 0.205730354 | 1.6953 | 1.6991 | 1.6961 | 0.00097687 | 35,000 |
| AOA [70] | 0.247498553 | 2.536812819 | 10 | 0.250284392 | 1.831 | 2.928 | 2.3579 | 0.27284 | 40,000 |
| BBO [77] | 0.185486 | 4.3129 | 8.439903 | 0.235902 | 1.918055 | 3.606933 | 2.630412 | 4.11E—01 | 50,000 |
| PSO [77] | 0.219292 | 3.430416 | 8.433559 | 0.236204 | 1.85272 | 3.841845 | 2.613785 | 4.71E—01 | 50,000 |
| ICA [77] | 0.205799 | 3.469634 | 9.03495 | 0.205806 | 1.725135 | 2.237755 | 1.79433 | 1.10E—01 | 50,000 |
| WCA [78] | 0.205728 | 3.470522 | 9.03662 | 0.205729 | 1.724856 | 1.744697 | 1.726427 | 4.29E—03 | 46,500 |
| ABC [79] | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 | NA | 1.741913 | 3.10E—02 | 30,000 |
| EO [80] | 0.2057 | 3.4705 | 9.0366 | 0.2057 | 1.724853 | 1.736725 | 1.726482 | 3.26E—03 | 15,000 |
| TEO [81] | 0.205681 | 3.472305 | 9.035133 | 0.205796 | 1.725284 | 1.931161 | 1.76804 | 5.82E—02 | NA |
| SSA [74] | 0.2057 | 3.4714 | 9.0366 | 0.2057 | 2.246638 | 1.725886 | 1.823426 | 1.28E—01 | 50,000 |
| WSA [77] | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 | 1.725068 | 1.724908 | 4.15E—05 | 50,000 |
| GWO [77] | 0.205677 | 3.470894 | 9.038558 | 0.205739 | 1.728487 | 1.725232 | 1.72631 | 7.71E—04 | 50,000 |
| SHO [82] | 0.205563 | 3.474846 | 9.035799 | 0.205811 | 1.725661 | 1.726064 | 1.725828 | 2.87E—04 | NA |
| WOA [83] | 0.205396 | 3.484293 | 9.037426 | 0.206276 | 1.730499 | NA | 1.732 | 0.0226 | 9900 |
| SNS [84] | 0.2057296 | 3.4704887 | 9.0366239 | 0.2057296 | 1.724852 | 1.725051 | 1.72488 | 5.18E—05 | 9000 |
Fig. 7.
Convergence rate graph for WBD
Results of the Compression Spring Design Problem (CSD)
The results of optimizing the CSD with the IDMO, alongside other existing results, are presented in Table 10. The IDMO did not return the least cost of the objective function overall; however, its best result matches that of the DMO and is better than that of the AOA. The IDMO needed only 20 iterations, corresponding to 1,000 FEs, as shown in Fig. 8.
Table 10.
Summary of comparative results for CSD
| Algorithm | d | D | N | Best | Worst | Mean | SD | NFEs |
|---|---|---|---|---|---|---|---|---|
| IDMO | 0.13914 | 1.3 | 11.88923 | 3.6619 | 3.6923 | 3.6632 | 0.005662 | 1,000 |
| DMO [32] | 0.13915 | 1.3 | 11.89243 | 3.6619 | 3.6619 | 3.6619 | 1.57E−15 | 20,000 |
| AOA [70] | 0.148317 | 1.3 | 15 | 4.3133 | 6.585 | 6.0337 | 0.52677 | 50,000 |
| CPSO [85] | 0.051728 | 0.357644 | 11.244543 | 0.0126747 | 0.012924 | 0.01273 | 5.20E−05 | 200,000 |
| HPSO [86] | NA | NA | NA | 0.0126652 | 0.0127191 | 0.0127072 | 1.58E−05 | 81,000 |
| CDE [87] | 0.051689 | 0.356718 | 11.288968 | 0.0126702 | 0.01279 | 0.012703 | 2.70E−05 | 204,800 |
| PSO [77] | 0.05169 | 0.356737 | 11.28885 | 0.012857 | 0.071802 | 0.019555 | 1.17E−02 | 20,000 |
| QPSO [77] | 0.0518 | 0.359 | 11.279 | 0.012669 | 0.018127 | 0.013854 | 1.34E−03 | 20,000 |
| G−QPSO [77] | NA | NA | NA | 0.012666 | 0.015869 | 0.012996 | 6.28E−04 | 20,000 |
| WCA [88] | 0.05168 | 0.3565 | 11.3004 | 0.012665 | 0.012952 | 0.012746 | 8.06E−05 | 11,750 |
| ABC [79] | 0.051749 | 0.358179 | 11.203763 | 0.012665 | NA | 0.012709 | 1.28E−02 | 30,000 |
| APSO [89] | 0.052588 | 0.378343 | 10.138862 | 0.0127 | 0.014937 | 0.013297 | 6.85E−04 | 120,000 |
| IAPSO [89] | 0.051685 | 0.356629 | 11.294175 | 0.01266523 | 0.01782864 | 0.01367653 | 1.57E−03 | 20,000 |
| WOA [83] | 0.0512 | 0.3452 | 12.004 | 0.0126763 | NA | 0.0127 | 3.00E−04 | 4410 |
| MCEO [90] | 0.051994 | 0.364109 | 10.868421 | 0.01266051 | 0.01350901 | 0.0127196 | 3.79E−05 | 2000 |
| EO [80] | 0.051207 | 0.345215 | 12.004032 | 0.012666 | 0.013997 | 0.013017 | 3.91E−04 | 15,000 |
| SNS [84] | 0.051587 | 0.354268 | 11.434058 | 0.01266525 | 0.01276587 | 0.01268472 | 2.39E−05 | 9000 |
Fig. 8.
Convergence rate graph for CSD
Results of the Pressure Vessel Design Problem (PVD)
Table 11 presents the comparative results and statistical analysis of the IDMO and several other existing optimization algorithms. The IDMO returned the least cost of the design problem, achieved at the 20th iteration, equivalent to 1,000 FEs, as shown in Fig. 9. The IDMO also returned the second-lowest average cost after the DMO, although with a larger standard deviation.
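The classical PVD cost function sums the material, forming, and welding costs. A minimal sketch under the commonly used coefficients (an assumption); the four constraints are omitted, as is the discrete-thickness requirement (Ts and Th as multiples of 0.0625 in some variants), which may explain the lower costs in the DMO-family rows of Table 11.

```python
def pvd_cost(Ts, Th, R, L):
    """Classical pressure-vessel cost: material + forming + welding.
    Coefficients are the commonly cited ones (an assumption); the
    four design constraints are not evaluated here."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

# The 0.8125/0.4375 solutions in the table reproduce the 6059.7 best
print(pvd_cost(0.8125, 0.4375, 42.0984, 176.6366))
```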
Table 11.
Summary of the comparative results for PVD
| Algorithm | Ts | Th | R | L | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 0.5479 | 0.2469 | 43.4967 | 160.0482 | 4527.2 | 5106.1 | 4533.2 | 279.92 | 1000 |
| DMO [32] | 0.4611 | 0.2401 | 40.3196 | 200 | 4527.3 | 4527.3 | 4527.3 | 2.96E−11 | 5000 |
| AOA [70] | 0.3764 | 0.3945 | 40.6441 | 200 | 4739.3 | 6763.9 | 5592.3 | 549.35 | 45,000 |
| SAP [91] | 0.8125 | 0.4375 | 40.3239 | 200 | 6288.8 | 6308.2 | 6293.8 | 7.41E+00 | 3000 |
| HPSO [85] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7 | 6288.7 | 6099.9 | 8.62E+01 | 81,000 |
| CDE [87] | 0.8125 | 0.4375 | 42.0984 | 176.6376 | 6371.1 | 6059.7 | 6085.2 | 4.30E+01 | 204,800 |
| CPSO [85] | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.1 | 6363.8 | 6147.1 | 8.65E+01 | 200,000 |
| PSO [92] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6693.7 | 14,076.3 | 8756.7 | 1.49E+03 | 8000 |
| QPSO [92] | 0.8125 | 0.4375 | 42.0984 | 176.6374 | 6059.7 | 8017.3 | 6839.9 | 4.79E+02 | 8000 |
| G−QPSO [92] | 0.8125 | 0.4375 | 42.0984 | 176.6372 | 6059.7 | 7544.5 | 6440.4 | 4.48E+02 | 8000 |
| ABC [79] | 0.8125 | 0.4375 | 42.0985 | 176.6366 | 6059.7 | NA | 6245.3 | 2.05E+02 | 30,000 |
| CS [93] | 0.8125 | 0.4375 | 42.0985 | 176.6366 | 6059.7 | 6495.4 | 6447.7 | 5.03E+02 | 15,000 |
| WOA [83] | 0.8125 | 0.4375 | 42.0983 | 176.639 | 6059.7 | NA | 6068.05 | 6.57E+01 | 6300 |
| APSO [89] | 0.8125 | 0.4375 | 42.0984 | 176.6374 | 6059.7 | 7544.5 | 6470.7 | 3.27E+02 | 200,000 |
| EO [80] | 0.8125 | 0.4375 | 42.0985 | 176.6366 | 6059.7 | 7544.5 | 6668.1 | 5.66E+02 | 15,000 |
| CGO [94] | 0.8125 | 0.4345 | 42.0892 | 176.7587 | 6247.7 | 6331 | 6251 | 1.07E+01 | 100,000 |
| SNS [84] | 0.8125 | 0.4375 | 42.0985 | 176.6366 | 6059.7 | 6410.1 | 6097.1 | 9.28E+01 | 6000 |
Fig. 9.
Convergence rate graph for PVD
Results of the Speed Reducer Design Problem (SRD)
The best and statistical results of optimizing the SRD with IDMO compared with other existing methods are presented in Table 12. The IDMO returned one of the lowest costs of the objective function, and its average and standard deviation values confirm its stability and robustness. The IDMO required only 500 function evaluations (FEs), corresponding to the 10th iteration (Fig. 10), far fewer than the other algorithms.
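The SRD objective is the gearbox weight. A sketch of the commonly used formulation (an assumption): the eleven constraints are omitted, and the evaluation point below is the optimum widely reported in the literature, not a value taken from Table 12.

```python
def srd_weight(x1, x2, x3, x4, x5, x6, x7):
    """Speed-reducer weight from the classical formulation (an
    assumption); the eleven design constraints are not evaluated."""
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# Widely reported optimum; yields a weight close to the ~2994.5 entries
print(srd_weight(3.5, 0.7, 17, 7.3, 7.715319, 3.350215, 5.286654))
```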
Table 12.
Summary of comparative results for SRD
| Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| IDMO | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.4 | 5.3 | 2993.7 | 3007.3 | 2998.2 | 3.2165 | 500 |
| DMO [32] | 3.5 | 0.7 | 17 | 7.3 | 7.7 | 3.4 | 5.3 | 2993.6 | 2993.6 | 2993.6 | 4.63E−13 | 10,000 |
| AOA [32] | 3.6 | 0.7 | 17 | 7.5 | 8.3 | 3.6 | 5.3 | 3059.2 | 3222.5 | 3119.1 | 32.434 | 40,000 |
| CS [93] | 3.5 | 0.7 | 17 | 7.6 | 7.8 | 3.3 | 5.3 | 3001 | 3009 | 3007.2 | 4.96E+00 | 250,000 |
| ABC [79] | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.4 | 5.3 | 2997.1 | NA | 2997.1 | 0.00E+00 | 30,000 |
| WCA [78] | 3.5 | 0.7 | 17 | 7.3 | 7.7 | 3.4 | 5.3 | 2994.5 | 2994.5 | 2994.5 | 7.40E−03 | 15,150 |
| APSO [89] | 3.5 | 0.7 | 18 | 8.1 | 8.0 | 3.4 | 5.3 | 3187.6 | 4443.0 | 3822.6 | 3.66E+02 | 30,000 |
| SHO [82] | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.4 | 5.3 | 2998.6 | 3003.9 | 2999.6 | 1.93E+00 | NA |
| SSA [74] | 3.5 | 0.7 | 17 | 7.3 | 7.7 | 3.4 | 5.3 | 2996.0 | 3015.7 | 3005.6 | 4.63E+00 | NA |
| WOA [83] | NA | NA | NA | NA | NA | NA | NA | 2996.6 | 3233.6 | 3042.9 | 4.08E+01 | NA |
| CSS [93] | NA | NA | NA | NA | NA | NA | NA | 2996.5 | 3106.2 | 3005.7 | 4.86E+00 | NA |
| CGO [94] | NA | NA | NA | NA | NA | NA | NA | 2994.4 | 2995.5 | 2994.5 | 0.110282 | 100,000 |
| FACSS [95] | NA | NA | NA | NA | NA | NA | NA | 2996.4 | 3006.4 | 2999.4 | 4.82E+00 | NA |
| SNS [84] | 3.5 | 0.7 | 17 | 7.3 | 7.7 | 3.4 | 5.3 | 2994.5 | 2994.5 | 2994.5 | 7.00E−06 | 3750 |
Fig. 10.
Convergence rate graph for SRD
Results of the Three-Bar Truss Design Problem (3-BTD)
The best results and statistical analysis of optimizing the 3-BTD with IDMO and several other existing algorithms are presented in Table 13. The best objective value returned by the IDMO is among the lowest reported, achieved within the 5th iteration (250 FEs), as shown in Fig. 11.
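The 3-BTD objective is the truss volume. A minimal sketch, assuming the common parameterization l = 100 cm (the DMO-family optima of 106.93 in Table 13 suggest those algorithms were run on a differently parameterized instance); the three stress constraints are omitted.

```python
import math

def tbt_volume(x1, x2, l=100.0):
    """Three-bar truss volume: (2*sqrt(2)*x1 + x2) * l.
    l = 100 is an assumed parameter value; the three stress
    constraints are not evaluated here."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

# WSA's tabulated solution reproduces its best of ~263.896
print(tbt_volume(0.788683, 0.408227))
```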
Table 13.
Summary of comparative results for 3−BTD
| Algorithm | x1 | x2 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|
| IDMO | 0.219498 | 0.188424 | 106.93 | 106.93 | 106.93 | 4.85E−05 | 250 |
| DMO [32] | 0.219515 | 0.188373 | 106.93 | 106.93 | 106.93 | 3.01E−14 | 500 |
| AOA [32] | 0.224256 | 0.192452 | 106.93 | 107.16 | 106.97 | 0.045939 | 45,000 |
| GA [77] | 0.788915 | 0.407569 | 263.89589 | 264.82081 | 263.96804 | 1.66862E−01 | 50,000 |
| PSO [77] | 0.788669 | 0.408265 | 263.89584 | 264.5849 | 263.95741 | 1.36897E−01 | 50,000 |
| ICA [77] | 0.788625 | 0.408389 | 263.89585 | 263.91413 | 263.89933 | 4.11693E−03 | 50,000 |
| CS [93] | 0.78867 | 0.40902 | 263.97156 | NA | 264.0669 | 9.00000E−05 | 15,000 |
| WCA [78] | 0.788651 | 0.408316 | 263.89584 | 263.8962 | 263.8959 | 8.71000E−05 | 5250 |
| GWO [77] | 0.788648 | 0.408325 | 263.89601 | 263.90422 | 263.89796 | 1.61422E−03 | 50,000 |
| WSA [77] | 0.788683 | 0.408227 | 263.89584 | 263.89743 | 263.89607 | 3.11960E−04 | 50,000 |
| CGO [94] | NA | NA | 263.89584 | 263.89601 | 263.89585 | 2.51E−05 | 100,000 |
| AOS [96] | NA | NA | 263.89584 | 263.89585 | 263.89584 | 8.26E−09 | 100,000 |
| SNS [84] | 0.7886847 | 0.4082211 | 263.89584 | 263.89586 | 263.89585 | 3.31056E−06 | 4800 |
Fig. 11.
Convergence rate graph for 3-BTD
Results of the Gear Train Design Problem (GTD)
The best results and statistical analysis of optimizing the GTD with IDMO and several other existing algorithms are presented in Table 14. Most of the algorithms found the global optimum (2.70E−12) as their 'best' value, and the IDMO reached it within the fewest function evaluations (5,000), as shown in Fig. 12.
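The GTD objective is fully standard: the squared deviation of a gear ratio from a target, over integer tooth counts. The tabulated optimum in Table 14 is therefore easy to reproduce with a short sketch:

```python
def gtd_error(x1, x2, x3, x4):
    """Squared deviation of the gear ratio (x2*x3)/(x1*x4) from the
    target 1/6.931; tooth counts are integers in [12, 60]."""
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# DMO's tabulated teeth (49, 19, 16, 43) give the known optimum ~2.70e-12
print(gtd_error(49, 19, 16, 43))
```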
Table 14.
Comparative result for GTD
| Algorithm | x1 | x2 | x3 | x4 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 48 | 17 | 22 | 54 | 2.70E−12 | 2.36E−09 | 1.22E−09 | 6.16E−10 | 5,000 |
| DMO [32] | 49 | 19 | 16 | 43 | 2.70E−12 | 2.31E−11 | 8.81E−12 | 9.50E−12 | 20,000 |
| AOA [32] | 55 | 14 | 34 | 60 | 2.31E−11 | 2.73E−08 | 1.28E−08 | 1.14E−08 | 45,000 |
| GA [77] | 49 | 19 | 16 | 43 | 2.70E−12 | 1.5247E−08 | 1.6212E−09 | 3.2174E−09 | 50,000 |
| PSO [77] | 34 | 13 | 20 | 53 | 2.31E−11 | 1.0222E−06 | 7.9383E−08 | 1.8147E−07 | 50,000 |
| ICA [77] | 43 | 16 | 19 | 49 | 2.70E−12 | 2.3576E−09 | 8.0417E−10 | 7.7862E−10 | 50,000 |
| CS [93] | 43 | 16 | 19 | 49 | 2.70E−12 | 6.51E−09 | 9.6633E−10 | 6.4529E−10 | 50,000 |
| ABC [79] | 49 | 16 | 19 | 43 | 2.70E−12 | 1.3616E−09 | 1.6800E−10 | 4.5748E−09 | 50,000 |
| MSFWA [97] | 49 | 19 | 16 | 43 | 2.70E−12 | 1.36165E−09 | 1.68012E−10 | 7.2953E−08 | 50,000 |
| SNS [84] | 43 | 19 | 16 | 49 | 2.70E−12 | 1.36165E−09 | 1.68012E−10 | 3.74894E−10 | 25,000 |
Fig. 12.
Convergence rate for GTD
Results of the Cantilever Beam Design Problem (CBD)
The best results and statistical analysis of optimizing the CBD with IDMO and several other existing algorithms are presented in Table 15. It can be observed that the IDMO obtained the best solution to the CBD problem, returning the optimal values of the five design variables within the 80th iteration, or 4000 FEs, as shown in Fig. 13. Furthermore, the robustness of the IDMO is shown by the low average and standard deviation values it returned.
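A sketch of the commonly used CBD formulation: the coefficient 0.0624 and the single deflection constraint below are assumptions taken from the literature. They reproduce the ~1.33996 entries in Table 15; the DMO-family rows may stem from a slightly different parameterization.

```python
def cbd_weight(x1, x2, x3, x4, x5):
    """Stepped cantilever beam weight: 0.0624 * (sum of section sizes).
    The coefficient is from the commonly used formulation (an assumption)."""
    return 0.0624 * (x1 + x2 + x3 + x4 + x5)

def cbd_constraint(x1, x2, x3, x4, x5):
    """Deflection constraint g(x) <= 0 of the common formulation."""
    return 61/x1**3 + 37/x2**3 + 19/x3**3 + 7/x4**3 + 1/x5**3 - 1

# SNS's tabulated solution: weight ~1.33996, constraint approximately active
print(cbd_weight(6.01545, 5.31066, 4.488, 3.50528, 2.15428))
print(cbd_constraint(6.01545, 5.31066, 4.488, 3.50528, 2.15428))
```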
Table 15.
Comparative result for CBD
| Algorithm | x1 | x2 | x3 | x4 | x5 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|---|
| IDMO | 5.689588 | 5.020755 | 4.261692 | 3.312994 | 2.040863 | 1.3004 | 1.3005 | 1.3004 | 1.31E−05 | 4,000 |
| DMO [32] | 5.694297 | 5.025255 | 4.253986 | 3.314226 | 2.037547 | 1.3004 | 1.3004 | 1.3004 | 2.26E−16 | 15,000 |
| AOA [32] | 6.322849 | 4.41889 | 4.119345 | 7.754706 | 1.46564 | 1.3847 | 3.2824 | 1.9122 | 0.52199 | 45,000 |
| SOS [98] | 6.01878 | 5.30344 | 4.49587 | 3.49896 | 2.15564 | 1.33996 | NA | 1.33997 | 1.1E−05 | 15,000 |
| CGO [94] | 6.01513 | 5.3093 | 4.495 | 3.50142 | 2.15278 | 1.33997 | 1.340602 | 1.340052 | 1.23E−04 | 100,000 |
| AOS [96] | NA | NA | NA | NA | NA | 1.339957 | 1.491711 | 1.351954 | 0.0249974 | 100,000 |
| MGA [99] | NA | NA | NA | NA | NA | 1.3399756 | 1.3402011 | 1.3400526 | 6.99E−05 | 100,000 |
| SNS [84] | 6.01545 | 5.31066 | 4.488 | 3.50528 | 2.15428 | 1.3399576 | 1.3399576 | 1.3399576 | 1.1102E−15 | 12,000 |
Fig. 13.
Convergence rate for CBD
Results of the Optimal Design of I-Shaped Beam (IBD)
The best results and statistical analysis of optimizing the IBD with IDMO and several other existing algorithms are presented in Table 16. All the algorithms found optimal solutions, and the IDMO returned a very low standard deviation, confirming its stability. Moreover, its results were achieved within the smallest number of function evaluations (150), as can be seen in Fig. 14.
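The IBD objective is the vertical deflection of the I-beam under a point load, which collapses to 5000/I for the commonly used parameter values (an assumption), where I is the second moment of area of the section. Evaluating it at the tabulated optimum reproduces the 0.013074 entries in Table 16.

```python
def ibd_deflection(h, b, tw, tf):
    """Vertical deflection P*L^3/(48*E*I) of the I-beam, collapsed to
    5000/I under the commonly used parameter values (an assumption).
    I is the second moment of area of the I-section; the cross-section
    area and stress constraints are not evaluated here."""
    inertia = ((tw * (h - 2 * tf)**3) / 12.0
               + (b * tf**3) / 6.0
               + 2.0 * b * tf * ((h - tf) / 2.0)**2)
    return 5000.0 / inertia

# The tabulated optimum (80, 50, 0.9, ~2.3217) gives ~0.013074
print(ibd_deflection(80, 50, 0.9, 2.3217))
```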
Table 16.
Summary of comparative results for IBD
| Algorithm | x1 | x2 | x3 | x4 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 80 | 50 | 0.9 | 2.321795 | 0.013074 | 0.013078 | 0.013075 | 8.8592E−07 | 150 |
| DMO [32] | 80 | 50 | 0.9 | 2.321793 | 0.013074 | 0.013074 | 0.013074 | 1.523E−13 | 2,500 |
| AOA [32] | 80 | 50 | 0.9 | 2.321547 | 0.013074 | 0.013161 | 0.013089 | 1.6536E−05 | 4,000 |
| GWO [100] | 80 | 50 | 0.9 | 2.3217 | 0.0131 | NA | NA | NA | NA |
| EMGO-FCR [100] | 80 | 50 | 0.9 | 2.32 | 0.0131 | NA | NA | NA | NA |
| CS [93] | 80 | 50 | 0.9 | 2.3216 | 0.0130747 | 0.01353646 | 0.0132165 | 0.0001345 | 5000 |
| SOS [98] | 80 | 50 | 0.9 | 2.3217 | 0.0130741 | NA | 0.0130884 | 4.0E−5 | 5000 |
| AOS [96] | NA | NA | NA | NA | 0.0130741 | 0.013814 | 0.0131788 | 1.555E−04 | 100,000 |
| SNS [84] | 80 | 50 | 0.9 | 2.3217 | 0.0130741 | 0.0130764 | 0.0130743 | 4.313E−07 | 3600 |
Fig. 14.
Convergence rate for IBD
Results of the Tubular Column Design (TCD)
The TCD problem has been previously solved using various optimizers; Table 17 presents some of the best results obtained by these optimizers and the IDMO, together with the number of function evaluations required and statistical analyses where available. As Fig. 15 shows, the IDMO obtained its result at the 10th iteration, or 500 FEs.
Table 17.
Summary of comparative results for TCD
| Algorithm | x1 | x2 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|
| IDMO | 6.182678144 | 0.2 | 24.615 | 2.46E+01 | 24.615 | 1.95E−06 | 500 |
| DMO [32] | 6.182683216 | 0.2 | 24.615 | 2.46E+01 | 24.615 | 1.81E−14 | 4,000 |
| AOA [32] | 6.179782123 | 0.2 | 24.615 | 2.47E+01 | 24.631 | 0.021115 | 25,000 |
| ISA [101] | 5.45115623 | 0.29196547 | 2.65E+01 | 26.531 | NA | NA | 3000 |
| CS [93] | 5.45139 | 0.29196 | 26.53217 | 26.53972 | 26.53504 | 1.93E−03 | 15,000 |
| FA [102] | NA | NA | 26.52 | NA | 28.74 | 2.08 | 3000 |
| AOS [96] | NA | NA | 26.531378 | 26.608314 | 26.531614 | 1.0300E−03 | 100,000 |
| SNS [84] | 5.45115623 | 0.29196547 | 26.486361 | 26.486371 | 26.486362 | 2.2160E−06 | 1250 |
Fig. 15.
Convergence rate for TCD
Results of the Piston Lever Design Problem (PLD)
The best results and statistical analysis of optimizing the PLD with IDMO and several other existing algorithms are presented in Table 18. The best costs returned by the IDMO, DMO, and AOA are relatively low compared to those of CS, ISA, CGO, AOS, MGA, and SNS. Figure 16 shows that the result for IDMO was obtained at the 60th iteration, or 3000 FEs.
Table 18.
Comparative result for PLD
| Algorithm | x1 | x2 | x3 | x4 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 0.05 | 0.144897318 | 4.11572157 | 120 | 4.695 | 167.87 | 10.136 | 29.791 | 3000 |
| DMO [32] | 0.05 | 0.125073578 | 4.116042166 | 120 | 4.6949 | 4.7006 | 4.6987 | 0.0027386 | 5000 |
| AOA [32] | 500 | 500 | 2.578147082 | 120 | 7.532 | 488.42 | 320.17 | 115.54 | 35,000 |
| PSO [103] | 133.3 | 2.44 | 117.14 | 4.75 | 122 | 294 | 166 | 51.7 | 50,000 |
| DE [103] | 129.4 | 2.43 | 119.8 | 4.75 | 159 | 199 | 187 | 14.2 | 50,000 |
| GA [103] | 250 | 3.96 | 60.03 | 5.91 | 161 | 216 | 185 | 18.2 | 50,000 |
| HPSO [103] | 135.5 | 2.48 | 116.62 | 4.75 | 162 | 197 | 187 | 13.4 | 50,000 |
| HPSO with Q-learning [103] | NA | NA | NA | NA | 129 | 168 | 151 | 13.4 | 50,000 |
| CS [93] | 0.05 | 2.043 | 120 | 4.085 | 8.4271 | 168.592 | 40.2319 | 59.0552 | 50,000 |
| ISA [101] | NA | NA | NA | NA | 8.4 | 610.6 | 226.5 | 111.2 | 12,500 |
| CGO [94] | NA | NA | NA | NA | 8.41281381 | 167.472809 | 45.04866 | 67.24763 | 100,000 |
| AOS [96] | NA | NA | NA | NA | 8.41914274 | 167.664986 | 33.741276 | 93.46674724 | 100,000 |
| MGA [99] | NA | NA | NA | NA | 8.41340665 | 167.473213 | 32.468893 | 29.96370439 | 100,000 |
| SNS [84] | 0.05 | 2.042 | 120 | 4.083 | 8.41269835 | 167.472775 | 24.318974 | 47.71792646 | 5000 |
Fig. 16.
Convergence rate for PLD
Results of the Corrugated Bulkhead Design Problem (CBhD)
The best results and statistical analysis of optimizing the CBhD with IDMO and several other existing algorithms are presented in Table 19. The results show that the IDMO returned the least value of the objective function within the fewest function evaluations, as shown in Fig. 17.
Table 19.
Summary of comparative results for CBhD
| Algorithm | x1 | x2 | x3 | x4 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|---|
| IDMO | 48.31 | 54.78 | 61.93 | 0.43 | 4.6972 | 1.548 | 4.6693 | 4.3569 | 2500 |
| DMO [32] | 48.14 | 54.5 | 62.04 | 0.43 | 4.8201 | 4.6699 | 4.5407 | 0.70271 | 20,000 |
| AOA [32] | 43.46 | 74.92 | 100 | 0.39 | 5.002 | 5.9617 | 5.031 | 0.99549 | 40,000 |
| FA [102] | 37.12 | 33.04 | 37.19 | 0.73 | 7.21 | NA | 10.23 | 1.95 | 12,000 |
| LF−FA [102] | 57.69 | 34.15 | 57.69 | 1.05 | 6.95 | NA | 8.83 | 1.26 | 12,000 |
| LS−LF−FA [102] | 57.69 | 34.13 | 57.55 | 1.05 | 6.86 | NA | 7.44 | 0.67 | 12,000 |
| AD−IFA [102] | NA | NA | NA | NA | 6.84 | NA | 7.21 | 0.58 | 12,000 |
| AOS [96] | NA | NA | NA | NA | 6.843 | 7.0669 | 7.0608 | 6.49E−04 | 100,000 |
| SNS [84] | 57.69 | 34.15 | 57.69 | 1.05 | 6.843 | 6.8431 | 6.843 | 2.09E−05 | 3125 |
Fig. 17.
Convergence rate for CBhD
Results of the Reinforced Concrete Beam Design Problem (RCB)
The best results and statistical analysis of optimizing the RCB with IDMO and several other existing algorithms are presented in Table 20. The values returned by the IDMO, DMO, and AOA are noticeably lower than those of FA, CS, AOS, and SNS, a significant improvement over the results available in the literature. Notably, this improvement was achieved within the fewest function evaluations (100), or the 2nd iteration, as shown in Fig. 18.
Table 20.
Comparative result for RCB
| Algorithm | x1 | x2 | x3 | Best | Worst | Average | SD | FEs |
|---|---|---|---|---|---|---|---|---|
| IDMO | 6.16 | 28 | 5 | 357.5 | 357.6 | 357.55 | 1.12E−06 | 100 |
| DMO [32] | 6 | 33 | 5 | 357.6 | 357.6 | 357.6 | 1.17E−13 | 150 |
| AOA [32] | 8.4 | 28 | 5 | 357.6 | 357.6 | 357.6 | 0 | 200 |
| FA [104] | 6.32 | 34 | 8.5 | 359.208 | 669.15 | 460.706 | 80.7387 | 25,000 |
| CS [93] | 6.32 | 34 | 8.5 | 359.208 | NA | NA | NA | 5000 |
| AOS [96] | 6.32 | 34 | 8.5 | 359.208 | 362.2535 | 359.3306872 | 0.596149 | 100,000 |
| SNS [84] | 6.32 | 34 | 8.5 | 359.208 | 362.634 | 359.3222001 | 0.6149858 | 1000 |
Fig. 18.
Convergence rate for RCB
Summary of Results
This study proposes a modified version of the dwarf mongoose optimization algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. The proposed method solved 21 classical, 10 CEC2020, and 12 constrained benchmark engineering optimization problems at a relatively low computational cost. This section summarizes the obtained results.
The unimodal test functions provide a good tool for testing the exploitation capability of IDMO. The results of these experiments show that IDMO effectively exploited the search regions and returned optimal or near-optimal solutions compared to the DMO and eight other algorithms. This performance can be attributed to the newly introduced operator ω, which controls the alpha movement, ensuring continuous neighborhood searching during exploitation that stops when the next phase is activated. Similarly, the results of IDMO on the multimodal test functions, which are excellent tools for measuring the exploration ability of algorithms, were superior to those of the other algorithms. Hence, it can be concluded that IDMO has very good exploration capabilities.
The IDMO demonstrated high optimization capability while solving the 12 engineering benchmark problems, returning superior results for almost all of them. The convergence analysis for the benchmark and engineering problems further confirms the superiority of the IDMO: as seen from the convergence figures, the IDMO found quality solutions early in the iteration process and converged steadily toward the optimal solution. Further statistical analysis of the obtained results using Friedman's test showed that IDMO ranked first overall among all algorithms. It can be concluded that the proposed algorithm can perform real-world optimization tasks efficiently.
Conclusion and Future Work
This study proposes a modified version of the dwarf mongoose optimization algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from the DMO, where evaluating the probability value of each fitness is just a computational overhead and contributes nothing to the quality of the alpha or other group members. The fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced, which controls the alpha movement, thereby enhancing the exploration ability and exploitability of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity in the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that once the criterion is met, the exchanged babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which could result in better-fitted mongooses instead of initializing them afresh as done in DMO; the counter is then reset to zero.
The proposed method solved 21 classical, 10 CEC2020, and 12 constrained benchmark engineering optimization problems at a relatively low computational cost. The results and further statistical analysis showed that the IDMO is an effective tool for optimizing the selected optimization problems. A comparison with DMO and other state-of-the-art algorithms further solidifies the superiority of the IDMO. The IDMO achieved more balanced exploitation and exploration due to introducing a new operator ω, which controls the alpha movement. Also, the randomization of the scout group and babysitters' updating steps greatly influenced the performance of the IDMO.
While this study greatly improved the exploration and exploitation capabilities of the DMO, the overall effect or influence of the alpha female persists. Though the randomization of the updating steps prevents the IDMO from being trapped in local minima, the alpha size could still prevent effective exploitation. In the future, efforts could be made to address the issues mentioned above. Also, the capability of the IDMO could be tested on CEC 2011, CEC 2014, and CEC 2017 benchmark functions and other complex real-world optimization problems such as feature selection and classification using deep learning models.
Data availability statement
All data generated or analyzed during this study are included in this article.
Declarations
Conflict of interest
The authors declare that there is no conflict of interest with regard to the publication of this paper.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Jeffrey O. Agushaka, Email: 208088307@stu.ukzn.ac.za
Absalom E. Ezugwu, Email: Ezugwua@ukzn.ac.za
Oyelade N. Olaide, Email: olaide_oyelade@yahoo.com
Olatunji Akinola, Email: 2080883025@stu.ukzn.ac.za.
Raed Abu Zitar, Email: raed.zitar@sorbonne.ae.
Laith Abualigah, Email: aligah@ammanu.edu.jo.
References
- 1.Agushaka JO, Ezugwu AE. Evaluation of several initialization methods on arithmetic optimization algorithm performance. Journal of Intelligent Systems. 2021;31(1):70–94. [Google Scholar]
- 2.Ezugwu AE, Adeleke OJ, Akinyelu AA, Viriri S. A conceptual comparison of several metaheuristic algorithms on continuous optimization problems. Neural Computing and Applications. 2020;32(10):6207–6251. [Google Scholar]
- 3.Ezugwu AE, Shukla AK, Nath R, Akinyelu AA, Agushaka JO, Chiroma H, Muhuri PK. Metaheuristics: A comprehensive overview and classification along with bibliometric analysis. Artificial Intelligence Review. 2021;54(6):1–80. [Google Scholar]
- 4.Holland, J. H. (1975). Adaptation in natural and artificial systems. Michigan: University of Michigan Press. (Second edition: MIT Press, 1992)
- 5.Kennedy, J. & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN'95-international conference on neural networks, 4,1942–1948. IEEE
- 6.Johnson, T. & Husbands, P. (1990). System identification using genetic algorithms. In Proceedings of International Conference Parallel Problem Solving Nature, Berlin, Germany
- 7.Michalewicz, Z., Krawczyk, J., Kazemi, M. & Janikow, C. Z. (1990). Genetic algorithms and optimal control problems. In 29th IEEE conference on decision and control, 1664–1666. IEEE
- 8.Zapata H, Perozo N, Angulo W, Contreras J. A hybrid swarm algorithm for collective construction of 3D structures. International Journal of Artificial Intelligence. 2020;18:1–18. [Google Scholar]
- 9.Liang, J. J., Qu, B. Y. & Suganthan, P. N. (2013). Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singapore, China and Singapore
- 10.Qin A, Huang V, Suganthan PN. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE transactions on Evolutionary Computation. 2009;13(2):398–417. [Google Scholar]
- 11.Jerebic J, Mernik M, Liu SH, Ravber M, Baketarić M, Mernik L, Črepinšek M. A novel direct measure of exploration and exploitation based on attraction basins.". Expert Systems with Applications. 2021;167:114353. [Google Scholar]
- 12.Alrayes FS, Alzahrani JS, Alissa KA, Alharbi A, Alshahrani H, Elfaki MA, Yafoz A, Mohamed A, Hilal AM. Dwarf mongoose optimization-based secure clustering with routing technique in internet of drones. Drones. 2022;6(9):247. [Google Scholar]
- 13.Akinola OA, Ezugwu AE, Oyelade ON, Agushaka JO. A hybrid binary dwarf mongoose optimization algorithm with simulated annealing for feature selection on high dimensional multi-class datasets. Scientific Reports. 2022;12(1):14945. doi: 10.1038/s41598-022-18993-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Akinola OA, Agushaka JO, Ezugwu AE. Binary dwarf mongoose optimizer for solving high-dimensional feature selection problems. PLoS ONE. 2022;17(10):e0274850. doi: 10.1371/journal.pone.0274850. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Chakraborty S, Sharma S, Saha AK, Chakraborty S. SHADE–WOA: A metaheuristic algorithm for global optimization. Applied Soft Computing. 2021;113:107866. [Google Scholar]
- 16.Agushaka JO, Ezugwu AE. Advanced Arithmetic Optimization Algorithm for solving mechanical engineering design problems. PLoS ONE. 2021;16(8):e0255703. doi: 10.1371/journal.pone.0255703. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Nama S, Saha AK, Sharma S. A novel improved symbiotic organisms search algorithm. Computational Intelligence. 2022;38(3):947–977. [Google Scholar]
- 18.Chakraborty S, Saha AK, Nama S, Debnath S. COVID-19 X-ray image segmentation by modified whale optimization algorithm with population reduction. Computers in Biology and Medicine. 2021;139:104984. doi: 10.1016/j.compbiomed.2021.104984. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Nama S, Saha A. An ensemble symbiosis organisms search algorithm and its application to real world problems. Decision Science Letters. 2018;7(2):103–118. [Google Scholar]
- 20.Sharma S, Chakraborty S, Saha AK, Nama S, Sahoo SK. mLBOA: A modified butterfly optimization algorithm with Lagrange interpolation for global optimization. Journal of Bionic Engineering. 2022;19:1–16. [Google Scholar]
- 21.Sharma S, Saha AK, Lohar G. Optimization of weight and cost of cantilever retaining wall by a hybrid metaheuristic algorithm. Engineering with Computers. 2022;38(4):2897–2923. [Google Scholar]
- 22.Chakraborty S, Saha AK, Sharma S, Chakraborty R, Debnath S. A hybrid whale optimization algorithm for global optimization. Journal of Ambient Intelligence and Humanized Computing. 2021;9:1–37. [Google Scholar]
- 23.Arora S, Singh S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Computing. 2019;23(3):715–734. [Google Scholar]
- 24.Jain M, Singh V, Rani A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm and evolutionary computation. 2019;44:148–175. [Google Scholar]
- 25.Brammya, G., Praveena, S., Preetha, N. S., Ninu, Ramya, R., Rajakumar, B. R., & Binu, D. (2019). Deer hunting optimization algorithm: A new nature-inspired meta-heuristic paradigm. The Computer Journal, bxy133
- 26.Harifi S, Khalilian M, Mohammadzadeh J, Ebrahimnejad S. Emperor penguins colony: A new metaheuristic algorithm for optimization. Evolutionary Intelligence. 2019;12(2):211–226. [Google Scholar]
- 27.Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future generation computer systems. 2019;97:849–872. [Google Scholar]
- 28.Dhiman G, Kumar V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowledge-Based Systems. 2019;165:169–196. [Google Scholar]
- 29.Zhao W, Wang L, Zhang Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Systems. 2019;163:283–304. [Google Scholar]
- 30.Wei Z, Huang C, Wang X, Han T, Li Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access. 2019;7:1–9. [Google Scholar]
- 31.Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S. Henry gas solubility optimization: A novel physics-based algorithm. Future Generation Computer Systems. 2019;101:646–667. [Google Scholar]
- 32.Premkumar K, Vishnupriya M, Babu TS, Manikandan BV, Thamizhselvan T, Ali AN, Islam MR, Kouzani AZ, Mahmud MA. Black widow optimization-based optimal PI-controlled wind turbine emulator. Sustainability (Switzerland) 2020;12(24):1–19. [Google Scholar]
- 33.Khishe M, Mosavi MR. Chimp optimization algorithm. Expert Systems with Applications. 2020;149:113338. doi: 10.1016/j.eswa.2022.119206. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH. Marine predators algorithm: A nature-inspired metaheuristic. Expert Systems with Applications. 2020;152:113377.
- 35.Zhao W, Zhang Z, Wang L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Engineering Applications of Artificial Intelligence. 2020;87:103300.
- 36.Abdollahzadeh B, Gharehchopogh FS, Mirjalili S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Computers and Industrial Engineering. 2021;158:107408.
- 37.Abdollahzadeh B, Soleimanian Gharehchopogh F, Mirjalili S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems. 2021;36(10):5887–5958.
- 38.Połap D, Woźniak M. Red fox optimization algorithm. Expert Systems with Applications. 2021;166:114107.
- 39.Abdollahzadeh B, Gharehchopogh FS. A multi-objective optimization algorithm for feature selection problems. Engineering with Computers. 2022;38(3):1845–1863.
- 40.Gharehchopogh FS. Advances in tree seed algorithm: A comprehensive survey. Archives of Computational Methods in Engineering. 2022;29:1–24. doi: 10.1007/s11831-022-09804-w.
- 41.Agushaka JO, Ezugwu AE, Abualigah L. Gazelle optimization algorithm: A novel nature-inspired metaheuristic optimizer for mechanical engineering applications. Neural Computing and Applications. 2022;6:1–33.
- 42.Al-Khateeb B, Ahmed K, Mahmood M, Le DN. Rock hyraxes swarm optimization: A new nature-inspired metaheuristic optimization algorithm. Computers, Materials and Continua. 2021;68(1):643–654.
- 43.Ezugwu AE, Agushaka JO, Abualigah L, Mirjalili S, Gandomi AH. Prairie dog optimization algorithm. Neural Computing and Applications. 2022;34(22):20017–20065. doi: 10.1007/s00521-022-07705-4.
- 44.Oyelade ON, Ezugwu AE, Mohamed TIA, Abualigah L. Ebola optimization search algorithm: A new nature-inspired metaheuristic optimization algorithm. IEEE Access. 2022;10:16150–16177.
- 45.Mohammad HZB, Mansouri N. PPO: A new nature-inspired metaheuristic algorithm based on predation for optimization. Soft Computing. 2022;26(3):1331–1402.
- 46.Sadoun AM, Najjar IR, Alsoruji GS, Wagih A, Elaziz MA. Utilizing a long short-term memory algorithm modified by dwarf mongoose optimization to predict thermal expansion of Cu-Al2O3 nanocomposites. Mathematics. 2022;10(7):1050.
- 47.Aldosari F, Abualigah L, Almotairi KH. A normal distributed dwarf mongoose optimization algorithm for global optimization and data clustering applications. Symmetry. 2022;14(5):1021.
- 48.Alissa KA, Elkamchouchi DH, Tarmissi K, Yafoz A, Alsini R, Alghushairy O, Mohamed A, Al-Duhayyim M. Dwarf mongoose optimization with machine-learning-driven ransomware detection in internet of things environment. Applied Sciences. 2022;12(19):9513.
- 49.Rasa OAE. Aspects of social organization in captive dwarf mongooses. Journal of Mammalogy. 1972;53:181–185.
- 50.Rasa OAE. The ethology and sociology of the dwarf mongoose (Helogale undulata rufula). Zeitschrift für Tierpsychologie. 1977;43:337–406.
- 51.Rasa OAE. Differences in group member response to intruding conspecifics and potentially dangerous stimuli in dwarf mongooses (Helogale undulata rufula). Zeitschrift für Säugetierkunde. 1977;42:108–112.
- 52.Rasa OAE. The effects of crowding on the social relationships and behaviour of the dwarf mongoose (Helogale undulata rufula). Zeitschrift für Tierpsychologie. 1979;49:317–329.
- 53.Rasa OAE. Coordinated vigilance in dwarf mongoose family groups: The ‘watchman’s song’ hypothesis and the costs of guarding. Zeitschrift für Tierpsychologie. 1986;71:340–344.
- 54.Rasa OAE. The dwarf mongoose: A study of behavior and social structure in relation to ecology in a small, social carnivore. Advances in the Study of Behavior. 1987;17:121–163.
- 55.Agushaka JO, Ezugwu AE, Abualigah L. Dwarf mongoose optimization algorithm. Computer Methods in Applied Mechanics and Engineering. 2022;391:114570.
- 56.Agushaka J, Ezugwu A. Influence of initializing Krill Herd algorithm with low-discrepancy sequences. IEEE Access. 2020;8:210886–210909.
- 57.Yue CT, Price KV, Suganthan PN, Liang JJ, Ali MZ, Qu BY, Awad NH, Biswas PP. Problem definitions and evaluation criteria for the CEC 2020 special session and competition on single objective bound constrained numerical optimization. Technical report, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore; 2019.
- 58.Abualigah L, Diabat A, Mirjalili S, Abd-Elaziz M, Gandomi AH. The arithmetic optimization algorithm. Computer Methods in Applied Mechanics and Engineering. 2021;376:113609.
- 59.Rather S, Bala P. Hybridization of constriction coefficient based particle swarm optimization and gravitational search algorithm for function optimization. In: International Conference on Advances in Electronics, Electrical, and Computational Intelligence (ICAEEC-2019); 2019.
- 60.Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Advances in Engineering Software. 2014;69:46–61.
- 61.Mirjalili S. SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Systems. 2016;96:120–133.
- 62.Mirjalili S, Gandomi A, Mirjalili S, Saremi S, Faris H, Mirjalili S. Salp swarm algorithm: A bioinspired optimizer for engineering design problems. Advances in Engineering Software. 2017;114:1–29.
- 63.Goldanloo MJ, Gharehchopogh FS. A hybrid OBL-based firefly algorithm with symbiotic organisms search algorithm for solving continuous optimization problems. The Journal of Supercomputing. 2022;78(3):3998–4031.
- 64.Storn R, Price K. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997;11(4):341–359.
- 65.Hu G, Chen L, Wang X, Wei G. Differential evolution-boosted sine cosine golden eagle optimizer with Lévy flight. Journal of Bionic Engineering. 2022;19:1–36.
- 66.Ai Y, Yu L, Huang Y, Liu X. The investigation of molten pool dynamic behaviors during the “∞” shaped oscillating laser welding of aluminum alloy. International Journal of Thermal Sciences. 2022;173:107350.
- 67.Kaveh A, Dadras-Eslamlou A. Water strider algorithm: A new metaheuristic and applications. Structures. 2020;25:520–541.
- 68.Eskandar H, Sadollah A, Bahreininejad A, Hamdi M. Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems. Computers and Structures. 2012;110–111:151–166.
- 69.Akay B, Karaboga D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. Journal of Intelligent Manufacturing. 2012;23(4):1001–1014.
- 70.Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S. Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems. 2020;191:105190.
- 71.Kaveh A, Dadras A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Advances in Engineering Software. 2017;110:69–84.
- 72.Dhiman G, Kumar V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Advances in Engineering Software. 2017;114:48–70.
- 73.Sheikholeslami R, Talatahari S. Developed swarm optimizer: A new method for sizing optimization of water distribution systems. Journal of Computing in Civil Engineering. 2016;30(5):04016005.
- 74.Bayzidi H, Talatahari S, Saraee M, Lamarche CP. Social network search for solving engineering optimization problems. Computational Intelligence and Neuroscience. 2021;2021:1–32. doi: 10.1155/2021/8548639.
- 75.Kazemzadeh-Parsi MJ. A modified firefly algorithm for engineering design optimization problems. Iranian Journal of Science and Technology. 2014;38:403.
- 76.He Q, Wang L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Engineering Applications of Artificial Intelligence. 2007;20(1):89–99.
- 77.He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Applied Mathematics and Computation. 2007;186(2):1407–1422.
- 78.Zhuo HF, Wang L, He Q. An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation. 2007;186:340–356.
- 79.Hayyolalam V, Kazem AAP. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence. 2020;87:103249.
- 80.Mohamed AW, Hadi AA, Mohamed AK. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. International Journal of Machine Learning and Cybernetics. 2020;11:1501–1529.
- 81.Brest J, Maučec MS, Bošković B. Single objective real-parameter optimization: Algorithm jSO. In: IEEE Congress on Evolutionary Computation (CEC), San Sebastián; 2017. p. 1311–1318.
- 82.Sandgren E. Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical Design. 1990;112(2):223–229.
- 83.Mohamed AW, Hadi AA, Fattouh AM, Jambi KM. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC); 2017.
- 84.Coelho L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Systems with Applications. 2010;37(2):1676–1683.
- 85.Mohmmadzadeh H, Gharehchopogh FS. An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems. The Journal of Supercomputing. 2021;77(8):9102–9144.
- 86.Mohamed AW, Hadi AA, Mohamed AK, Awad NH. Evaluating the performance of adaptive gaining sharing knowledge based algorithm on CEC 2020 benchmark problems. In: 2020 IEEE Congress on Evolutionary Computation (CEC); 2020. p. 1–8.
- 87.Mezura-Montes E, Coello CAC. Useful infeasible solutions in engineering optimization with evolutionary algorithms. In: Mexican International Conference on Artificial Intelligence, Berlin, Heidelberg; 2005.
- 88.Talatahari S, Azizi M. Chaos game optimization: A novel metaheuristic algorithm. Artificial Intelligence Review. 2021;54(2):917–1004.
- 89.Ray T, Saini P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Engineering Optimization. 2001;33(6):735–748.
- 90.Talatahari S, Azizi M, Gandomi AH. Material generation algorithm: A novel metaheuristic algorithm for optimization of engineering problems. Processes. 2021;9(5):859.
- 91.Sandgren E. Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical Design. 1990;112(2):223–229.
- 92.Han X, Yue L, Dong Y, Xu Q, Xie G, Xu X. Efficient hybrid algorithm based on moth search and fireworks algorithm for solving numerical and constrained engineering optimization problems. Journal of Supercomputing. 2020;76:9404–9429.
- 93.Chickermane H, Gea HC. Structural optimization using a new local approximation method. International Journal for Numerical Methods in Engineering. 1996;39(5):829–846.
- 94.Cheng M, Prayogo D. Symbiotic organisms search: A new metaheuristic optimization algorithm. Computers and Structures. 2014;139:98–112.
- 95.Talatahari S, Azizi M, Tolouei M, Talatahari B, Sareh P. Crystal structure algorithm (CryStAl): A metaheuristic optimization method. IEEE Access. 2021;9:71244–71261.
- 96.Rao SS. Engineering optimization. Hoboken: Wiley; 2009.
- 97.Anita Y, Yadav A, Kumar N. Artificial electric field algorithm for engineering optimization problems. Expert Systems with Applications. 2020;149:113308.
- 98.Parkinson A, Balling R, Hedengren JD. Optimization methods for engineering design. 2nd ed. Provo: Brigham Young University; 2018.
- 99.Gandomi A, Roke D. Engineering optimization using interior search algorithm. In: Proceedings of the 2014 IEEE Symposium Series on Computational Intelligence, 2014 IEEE Symposium on Swarm Intelligence (SIS), Orlando, FL, USA; 2015.
- 100.Wu J, Wang Y, Burrage K, Tian Y, Lawson B, Ding Z. An improved firefly algorithm for global continuous optimization problems. Expert Systems with Applications. 2020;149:113340.
- 101.Kim P, Lee J. An integrated method of particle swarm optimization and differential evolution. Journal of Mechanical Science and Technology. 2009;23(2):426–434.
- 102.Ravindran A, Ragsdell KM, Reklaitis GV. Engineering optimization. Hoboken: Wiley; 2006.
- 103.Amir HM, Hasegawa T. Nonlinear mixed-discrete structural optimization. Journal of Structural Engineering. 1989;115(3):626–646.
- 104.Gandomi A, Yang X, Alavi AH. Mixed variable structural optimization using Firefly Algorithm. Computers and Structures. 2011;89(23–24):2325–2336.
Data Availability Statement
All data generated or analyzed during this study are included in this article.