Scientific Reports. 2026 Jan 19;16:2401. doi: 10.1038/s41598-025-31729-0

Adaptive memory-based opposition and midpoint mutation in black winged kite algorithm for global optimization and engineering applications

Rajasekar P 1, Jayalakshmi M 2
PMCID: PMC12820313  PMID: 41554763

Abstract

Metaheuristic algorithms play a vital role in addressing complex and nonlinear optimization problems. This study proposes an enhanced variant of the Black-winged Kite Algorithm (BKA), termed Adaptive Memory-based Opposition and Midpoint Mutation in BKA (AMOMM-BKA), developed to improve population diversity and convergence accuracy, particularly for complex optimization problems. The proposed framework integrates four complementary strategies to balance exploration and exploitation effectively. Blended Opposition-Based Learning (BOBL) combines classical opposition with population-mean guidance to adaptively expand the search space, while historical reflective opposition exploits individual memory to guide the search toward promising regions. Random opposition introduces controlled randomness to preserve population diversity and prevent premature convergence, and midpoint-based mutation directs individuals toward the midpoint between elite and peer solutions, enhancing focused exploration and convergence precision. AMOMM-BKA was evaluated using three CEC benchmark suites (CEC2005, CEC2019, and CEC2022), and its performance was compared with four categories of existing optimization algorithms: (i) widely cited classical optimizers, such as PSO and GWO; (ii) recently developed algorithms, including GJO, SO, SCSO, and AVOA; (iii) high-performance optimizers, such as CMAES and SHADE; and (iv) improved variants of BKA, including CBKA, IBKA, and QOBLBKA. Moreover, its successful application to four mechanical and structural engineering design problems further validates the algorithm’s effectiveness and practical relevance. Statistical analyses, including the Friedman rank test and the Wilcoxon test, were conducted on the experimental results to verify the robustness and significance of the findings. AMOMM-BKA consistently demonstrated superior performance, achieving the top rank with an average score of 1.78, approximately 56.18% better than the second-best algorithm, SHADE (average rank: 4.56), highlighting its remarkable convergence rate, solution accuracy, and robustness across diverse optimization problems.

Keywords: Meta-heuristic algorithms, Black-winged kite algorithm, Blended opposition based learning, Midpoint mutation strategy, Engineering optimization problems

Subject terms: Engineering, Mathematics and computing

Introduction

Optimization problems are fundamental to advancing human well-being, as many real-world challenges can be modeled as optimization tasks aimed at identifying the most effective solutions1. As science, technology, and industry continue to evolve, the scale and intricacy of optimization problems have grown substantially. These problems often encompass multiple variables, nonlinear relationships, and a mix of constrained and unconstrained objectives, resulting in vast and highly complex solution landscapes. Addressing such challenges requires sophisticated algorithms capable of navigating these spaces efficiently2. To address such complexity, optimization algorithms have been developed to identify optimal or near-optimal solutions. These algorithms are broadly categorized into deterministic and stochastic approaches. Deterministic methods such as the simplex algorithm, Newton’s method, the conjugate gradient method, and Lagrangian techniques typically require prior knowledge of the problem domain and rely heavily on gradient information. While they perform well on smooth, convex functions, their gradient-dependent iterative mechanisms often struggle with non-convex, non-differentiable, or constraint-sensitive problems, leading to local convergence or computational failure. Consequently, deterministic algorithms may fall short when applied to nonlinear or large-scale optimization tasks3. In response, stochastic algorithms have emerged as powerful alternatives. Free from the constraints of gradient dependency, these algorithms offer greater flexibility and resilience when tackling complex, nonlinear, and large-scale problems. Among them, metaheuristic algorithms (MH) stand out for their global search capabilities, adaptability, and domain-independent design4.
Metaheuristics have demonstrated exceptional performance across a wide spectrum of applications, including engineering optimization5, image analysis6, mechanical design7, feature selection8, wireless sensor9, and UAV path planning10. Their growing success underscores the importance of continued research into advanced metaheuristic strategies, particularly as problem complexity and dimensionality continue to rise in modern scientific and industrial contexts.

MH have garnered substantial attention in recent years, especially within the realms of engineering optimization, due to the emergence of high-dimensional data generated from diverse sources such as digital platforms and large-scale information systems. These algorithms are effective because they can avoid getting stuck in local optima by using simple strategies inspired by natural processes. As a result, they are widely used in many fields and offer flexible solutions to different types of problems. Currently, MH have demonstrated successful applications across a wide range of domains, including biomedical engineering, machine vision, intelligent manufacturing, and power system optimization. Their appeal lies in their structural simplicity and the foundational inspiration drawn from intuitive, nature-inspired principles, which enable them to adapt effectively to diverse and complex problem environments.

MH can be broadly classified into four principal categories: Evolutionary-based algorithms (EAs), Swarm-based algorithms (SAs), Physics-based algorithms (PAs), and Human behavior-based algorithms (HAs), as shown in Fig. 1. EAs represent the earliest class of metaheuristic techniques, drawing inspiration from biological principles such as natural selection, genetic inheritance, and evolutionary adaptation mechanisms. For instance, the Genetic Algorithm (GA)11 emulates natural selection through operations like mutation and crossover to solve complex, nonlinear problems. Differential Evolution (DE)12 enhances solutions via population-based strategies and is effective for continuous and multimodal optimization. Success-History based Adaptive Differential Evolution (SHADE)13 improves DE by adaptively tuning parameters based on historical success, while LSHADE14 further refines this approach by reducing population size over time, balancing exploration and exploitation for superior optimization performance. The Covariance Matrix Adaptation Evolution Strategy (CMAES)15 is a powerful evolutionary algorithm that adapts the covariance matrix of the search distribution, enabling efficient convergence with fewer generations. Its design reduces time complexity while enhancing robustness against noise, improving global search, and supporting parallel implementations. Conversely, SAs are among the most rapidly evolving branches of metaheuristics, rooted in the collective behaviors observed in biological populations including animals, plants, and microorganisms. Several foundational SAs have made significant contributions to the advancement of optimization methodologies. Over the past few decades, this class of algorithms has witnessed substantial diversification and advancement. Particle Swarm Optimization (PSO)16 exemplifies this paradigm by emulating the synchronized foraging dynamics of bird flocks and fish schools.
The Artificial Bee Colony (ABC)17 algorithm replicates the decentralized decision-making and resource allocation strategies of honeybee swarms. Grey Wolf Optimization (GWO)18 draws inspiration from the hierarchical hunting tactics and social leadership exhibited by grey wolves. More recently, Golden Jackal Optimization (GJO)19 has emerged, modeling the cooperative predation and adaptive behavior of golden jackals in their natural habitat. Additional noteworthy swarm-based algorithms encompass the Snake Optimizer (SO)20, Sand Cat Swarm Optimization (SCSO)21, African Vultures Optimization Algorithm (AVOA)22, Harris Hawks Optimizer (HHO)23, Honey Badger Algorithm (HBA)24, Artificial Rabbits Optimization (ARO)25, Dung Beetle Optimizer (DBO)26, Electric Eel Foraging Optimization (EEFO)27, and Greylag Goose Optimization (GGO)28. Thirdly, PAs constitute a major category within metaheuristics, inspired by the dynamics and principles of physical systems. These algorithms are modeled on diverse physical laws and phenomena encompassing mechanics, thermodynamics, electromagnetism, optics, and atomic interactions. The Gravitational Search Algorithm (GSA)29 emulates Newtonian mechanics, specifically the laws of motion and universal gravitation, to guide the optimization process through mass interactions. The Multi-verse Optimizer (MVO)30 derives its conceptual framework from cosmological phenomena, notably the dynamics of white holes, black holes, and wormholes, to facilitate solution exchange across multiple universes. The Gradient-Based Optimizer (GBO)31, influenced by Newton’s gradient descent methodology, integrates two principal mechanisms, namely the gradient search rule and the local escaping operator, alongside a vector-based strategy to effectively navigate the solution space. Additional prominent PAs include the Energy Valley Optimizer (EVO)32, Arithmetic Optimizer Algorithm (AOA)33, Kepler Optimization Algorithm (KOA)34, and Special Relativity Search (SRS)35.
Finally, HAs, as a burgeoning class of metaheuristic techniques, are garnering significant scholarly interest due to their foundation in sociocognitive dynamics and behavioral patterns. These algorithms emulate various aspects of human interaction and decision-making. For instance, the Teaching Learning based Optimization (TLBO)36 algorithm models pedagogical exchanges between instructors and learners to enhance solution refinement. Similarly, the Poor and Rich Optimization (PRO)37 algorithm reflects socioeconomic motivation, simulating the aspirational drive of individuals across financial strata to elevate their status. Additional noteworthy HAs include the Political Optimizer (PO)38, Student Psychology-Based Optimization (SPBO)39, Sewing Training-Based Optimization (STBO)40, and the Human Evolutionary Optimization Algorithm (HEOA)41.

Fig. 1. Classification of Meta-heuristic algorithms (MH).

The Black-winged Kite Algorithm (BKA)42 is a swarm-based optimization method inspired by the flight behavior of black-winged kites. It offers a simple structure, fast convergence, and strong performance across benchmark functions. BKA has been successfully applied to multiple applications such as feature selection and parameter optimization, demonstrating efficient and reliable results compared with related algorithms. Nevertheless, there remains scope for further improvement to enhance its overall optimization capability. The BKA operates through two main phases: attacking and migration. According to the No-Free-Lunch (NFL)43 theorem, no single algorithm performs optimally across all optimization problems. Therefore, there is a continuous need to develop improved or modified algorithms to address complex optimization challenges. In certain cases, BKA exhibits slow convergence and may become trapped in local optima, particularly in high-dimensional or complex search spaces. To overcome these limitations and enhance its convergence performance, the AMOMM-BKA variant is proposed. Recent studies have focused on enhancing the BKA to overcome its inherent limitations. Some researchers have developed hybrid variants that combine BKA with other optimization algorithms to strengthen its exploitation ability and resistance to local optima, while others have proposed adaptive versions that dynamically adjust parameters to improve overall performance. In this study, we introduce four strategies that build on these ideas and offer new improvements. First, Blended Opposition-Based Learning (BOBL) combines classical opposition with the population mean to balance exploration and exploitation, helping the algorithm escape local optima and converge faster, addressing a limitation of earlier opposition-based methods.
Second, Historical Reflective Opposition uses each individual’s best past solution to guide the search toward promising areas, strengthening learning in successful directions and complementing adaptive parameter strategies. Third, Random Opposition introduces controlled randomness to maintain diversity and prevent premature convergence, addressing the narrow search focus in some hybrid methods. Finally, Midpoint-Based Mutation directs individuals toward promising regions based on elite and peer knowledge, improving solution refinement and stability. Together, these strategies extend previous enhancements by providing a more balanced, diverse, and guided search, leading to better convergence and higher-quality solutions.

Related work

The Black-winged Kite Algorithm (BKA) is a recently proposed nature-inspired optimization method with promising results in solving diverse engineering problems. This section reviews the various BKA variants reported in the literature. Hanaa Mansouri et al44. proposed the Modified Black-Winged Kite Optimizer (M-BWKO) to enhance convergence speed, robustness, and solution diversity of the original BWKO. The algorithm integrates six key strategies, including a top-k leader selection, adaptive chaos weighting, diversity-aware reactivation, chaotic index-based selection, adaptive Cauchy mutation, and a hybrid migration rule combining chaotic and directional updates. These mechanisms collectively balance exploration and exploitation while preventing stagnation. Sarada Mohapatra et al45. introduced the Revamped Black-Winged Kite Algorithm (RBKA), which integrates logistic chaos-based initialization to enhance diversity and convergence speed. It employs chaotic perturbation and Brownian motion-based migration to balance exploration and exploitation, along with an opposition learning strategy to escape local optima and improve global search efficiency. Taybe Alabed et al46. proposed the Chaotic Black-Winged Kite Algorithm (CBKA), which integrates logistic chaos mapping to enhance population diversity and prevent premature convergence. Junwen Liao et al47. proposed the Hybrid Multi-Strategy Black-Winged Kite Algorithm (HBKA), which integrates Tent mapping in the predation stage to expand the search space, enhance global exploration, accelerate convergence, and help the algorithm escape local optima. Yancang Li et al48. developed the Black-Winged Kite Algorithm by combining the Osprey Optimization Algorithm with Crossbar enhancement (DKCBKA).
It incorporates an adaptive index factor and probability distribution update to speed up convergence, a stochastic difference variant to prevent local trapping, and longitudinal–transversal crossover to enhance accuracy and maintain population diversity. Fu et al49. introduced the Improved BKA (IBKA) by replacing the attack phase parameter with the Gompertz growth model, thereby achieving a more balanced exploration-exploitation trade-off and reducing step decay. Zhao et al50. proposed a variant incorporating chaotic mapping and adversarial learning, which significantly boosted convergence speed and optimization accuracy. BKA has been successfully applied across diverse application domains52,53 and has undergone multiple refinements by researchers54,55. Nonetheless, existing improvements do not fully resolve its inherent limitations, motivating the pursuit of a more comprehensive and efficient enhancement framework. This paper encapsulates its principal contributions and methodological novelties through the following highlights:

  • Performance-based strategy selection: Each individual’s fitness change between iterations is used to compute a confidence score. Based on this score, the algorithm categorizes individuals as stagnated, improving, or uncertain.

  • Blended opposition-based learning (BOBL): Applied to stagnated individuals showing negligible improvement. BOBL balances classical opposition with the population mean, enabling guided exploration.

  • Historical reflective opposition (HRO): Used for significantly improving individuals. It reflects the current solution across the individual’s historical best, encouraging intensified exploitation in promising regions.

  • Random opposition (RO): Applied to individuals with unclear progress. This maintains population diversity by generating stochastic opposite candidates.

  • Memory integration: Each agent maintains its personal best (historical memory), updated upon improvement. This memory guides the reflective strategy and prevents over-reliance on the global best.

  • Midpoint-based mutation (MM) strategy: After the attacking behavior, each solution is mutated toward the midpoint between the global best and a random peer, enhancing diversity while maintaining guided exploration.

  • Extensive benchmarking: The AMOMM-BKA algorithm was rigorously evaluated across diverse benchmark suites, including CEC2005, CEC2019, and CEC2022. Comparative analysis against eleven cutting-edge algorithms, supported by statistical evidence, underscores its superior performance.

  • Practical applicability: AMOMM-BKA was applied to four engineering optimization tasks, where comparative results and statistical analyses against other algorithms confirm its strong competitive advantage in real-world problem solving.

The structure of this paper is as follows: Section Black-winged Kite Algorithm (BKA) comprises a background on the Black-winged Kite Algorithm (BKA). Section Proposed framework introduces the proposed model and its components. Section Experimental results and comprehensive analysis covers the implementation of numerical experiments and the detailed evaluation of the obtained results. Section Results and discussion of engineering applications presents the application of the proposed AMOMM-BKA to practical engineering problems. Section Conclusion and future work includes the concluding remarks and possible directions for future research.

Black-winged Kite Algorithm (BKA)

The Black-winged Kite Algorithm (BKA), introduced by Wang et al.42 in 2024, is a population-based optimization technique inspired by the hunting and migratory behavior of the black-winged kite, a small bird of prey known for its agility and precision. This species, identified by its blue-gray upper body and white underparts, typically preys on small animals such as birds, reptiles, mice, and beetles. Its remarkable hovering ability and sharp hunting skills serve as the core inspiration for BKA. The algorithm operates through two main phases: attacking behavior, which models the bird’s hunting strategy, and migration behavior, which reflects its movement across regions. These two stages together form the basis of BKA’s search and optimization mechanism.

Mathematical model

This section presents the formulation of the BKA, a conceptually simple yet highly effective metaheuristic optimization approach inspired by the predatory and migratory behaviors of the black-winged kite.

Initialization phase

The BKA is a population-based metaheuristic framework in which each black-winged kite represents an individual agent. The position of each agent within the search space corresponds to a candidate solution for the optimization problem. Initially, these positions are generated randomly, as defined by Eq. 1

$$X_{i,j} = lb_j + rand \times (ub_j - lb_j) \tag{1}$$

where \(X_{i,j}\) represents the position of the \(i\)-th individual, \(lb_j\) and \(ub_j\) represent the lower and upper limits for the \(i\)-th black-winged kite in the \(j\)-th dimension, and rand is a randomly selected value between 0 and 1.
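As an illustration, the initialization of Eq. (1) can be sketched in Python; the function and parameter names below are ours, not from the paper:

```python
import numpy as np

def initialize_population(n_agents, dim, lb, ub, rng=None):
    """Randomly place each kite inside [lb, ub], per Eq. (1):
    X[i, j] = lb_j + rand * (ub_j - lb_j)."""
    if rng is None:
        rng = np.random.default_rng()
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # rand is drawn independently for every agent and every dimension
    return lb + rng.random((n_agents, dim)) * (ub - lb)
```

Each row of the returned matrix is one candidate solution; bounds may be scalars or per-dimension arrays.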

Attacking phase

Black-winged kites exhibit exceptional hunting proficiency, particularly in capturing small grassland animals and insects. During flight, they dynamically adjust their wing and tail angles in response to wind velocity, enabling them to hover silently while observing potential prey. Upon detection, they execute swift dives to secure their target. This adaptive hunting strategy reflects diverse attack patterns, which serve as a metaphor for global search and exploration within the optimization process. Figure 2 illustrates the kite’s calculated hunting sequence, reflecting the exploitation phase in optimization where precision is crucial. Figure 3 highlights its tactical prey assessment, mirroring the exploration phase of algorithms seeking promising solution areas. The following expression models the kite’s tactical attack behavior:

$$y_{i,j}^{t+1} = \begin{cases} y_{i,j}^{t} + n\,(1 + \sin r)\times y_{i,j}^{t}, & p < r \\ y_{i,j}^{t} + n\,(2r - 1)\times y_{i,j}^{t}, & \text{otherwise} \end{cases} \tag{2}$$

$$n = 0.05 \times e^{-2\,(t/T)^{2}} \tag{3}$$

Here, \(y_{i,j}^{t}\) and \(y_{i,j}^{t+1}\) denote the position of the \(i\)-th black-winged kite in the \(j\)-th dimension during the \(t\)-th and \((t+1)\)-th iterations, respectively. r is a randomly generated value between 0 and 1, while p is a fixed constant set at 0.9. T represents the total number of iterations, while t indicates the number of iterations completed up to the current point.
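The attack-phase step of Eqs. (2)-(3) might be sketched as follows, assuming the piecewise update and the shrinking step factor n reported for the original BKA; the function name is ours:

```python
import numpy as np

def attack_update(Y, t, T, p=0.9, rng=None):
    """One attack-phase step (Eqs. 2-3, as we read them): each element
    is scaled by a small factor n that decays as iterations advance."""
    if rng is None:
        rng = np.random.default_rng()
    n = 0.05 * np.exp(-2.0 * (t / T) ** 2)   # Eq. (3): adaptive step factor
    r = rng.random(Y.shape)                  # per-element random r in [0, 1)
    hover = Y + n * (1.0 + np.sin(r)) * Y    # branch taken when p < r
    glide = Y + n * (2.0 * r - 1.0) * Y      # branch taken otherwise
    return np.where(p < r, hover, glide)
```

Because n is at most 0.05, each step perturbs positions only mildly, which matches the hovering-then-diving metaphor.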

Fig. 2. Sequential hunting behavior of the black-winged kite.

Fig. 3. The selective predation strategy of the black-winged kite.

Migration phase

Bird migration is influenced by food availability and climate changes. To survive seasonal shifts, many birds move from north to south seeking better habitats. Leaders with strong navigation skills guide the group, helping maintain unity and success. Figure 4 shows how black-winged kites shift leadership during migration. The following formulation mathematically models the migration behavior exhibited by these birds:

$$y_{i,j}^{t+1} = \begin{cases} y_{i,j}^{t} + C(0,1)\times\bigl(y_{i,j}^{t} - L_{j}^{t}\bigr), & F_i < F_{ri} \\ y_{i,j}^{t} + C(0,1)\times\bigl(L_{j}^{t} - m \times y_{i,j}^{t}\bigr), & \text{otherwise} \end{cases} \tag{4}$$

$$m = 2 \times \sin\!\left(r + \frac{\pi}{2}\right) \tag{5}$$

Here, \(L_{j}^{t}\) signifies the highest scorer (leader) among the black-winged kites in the \(j\)-th dimension at the \(t\)-th iteration, while \(y_{i,j}^{t}\) and \(y_{i,j}^{t+1}\) denote the position of the \(i\)-th black-winged kite in the \(j\)-th dimension during the \(t\)-th and \((t+1)\)-th iterations, respectively. Then \(F_i\) refers to the fitness of the \(i\)-th black-winged kite in the \(j\)-th dimension at the \(t\)-th iteration, while \(F_{ri}\) represents the fitness value of a random position in the same dimension from any black-winged kite at the \(t\)-th iteration. Finally, C(0, 1) denotes the Cauchy mutation as outlined by Jiang et al.56. Its definition is given below:

Fig. 4. Dynamic reallocation of leadership roles in black-winged kite migration.

A one-dimensional Cauchy distribution is a continuous probability distribution characterized by two parameters. The equation below describes its probability density function:

$$f(x;\,\delta,\mu) = \frac{1}{\pi}\,\frac{\delta}{\delta^{2} + (x - \mu)^{2}}, \qquad -\infty < x < \infty \tag{6}$$

When \(\delta = 1\) and \(\mu = 0\), the probability density function assumes its standard form. The specific formula is as follows:

$$f(x;\,1,0) = \frac{1}{\pi}\,\frac{1}{x^{2} + 1} \tag{7}$$
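The migration behavior of Eqs. (4)-(5), with C(0, 1) drawn from the standard Cauchy distribution of Eq. (7), might be sketched as follows; this is our reading of the piecewise rule, and all names are ours:

```python
import numpy as np

def migration_update(Y, fitness, leader, rng=None):
    """One migration step (Eqs. 4-5, as we read them). Agents fitter
    than a random peer explore away from the leader; others follow it,
    both perturbed by heavy-tailed Cauchy draws C(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    n_agents, dim = Y.shape
    c = rng.standard_cauchy((n_agents, dim))           # C(0, 1) draws
    m = 2.0 * np.sin(rng.random((n_agents, 1)) + np.pi / 2.0)  # Eq. (5)
    f_rand = fitness[rng.integers(n_agents, size=n_agents)]    # F_ri
    better = (fitness < f_rand)[:, None]               # F_i < F_ri, per agent
    explore = Y + c * (Y - leader)                     # first branch of Eq. (4)
    follow = Y + c * (leader - m * Y)                  # second branch of Eq. (4)
    return np.where(better, explore, follow)
```

The heavy tails of the Cauchy distribution occasionally produce long jumps, which is what lets migration escape local basins.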

Proposed framework

As discussed in the earlier section, the BKA is a promising metaheuristic technique for solving a wide range of optimization problems due to its simplicity and flexibility. However, BKA still encounters certain limitations, particularly a tendency to converge prematurely to local optima. This drawback arises mainly from an imbalance between exploration and exploitation phases, limited diversity in the initial population, and insufficient use of historical or population-level knowledge during the search process. To overcome these challenges, this study introduces an enhanced variant named Adaptive Memory-based Opposition and Midpoint Mutation in Black-winged Kite Algorithm (AMOMM-BKA), which integrates four key strategies. First, a Blended Opposition-Based Learning (BOBL) approach balances exploration and exploitation by blending classical opposition with the population mean, helping individuals escape local traps. Second, a historical reflective opposition strategy leverages each individual’s past best solution to intensify the search in successful directions. Third, random opposition introduces controlled stochasticity to handle uncertain cases and maintain diversity. Finally, a midpoint-based mutation is applied to guide individuals toward promising regions defined by both elite and peer knowledge. The combination of these strategies enables AMOMM-BKA to dynamically adapt its behavior based on individual performance, resulting in more robust and effective optimization. The details of each proposed strategy are explained in the following subsections.

Adaptive memory-based opposition (AMO)

The proposed Adaptive Memory-based Opposition (AMO) introduces an intelligent mechanism for generating and selecting opposition-based solutions in population-based optimization. Unlike static opposition strategies, AMO dynamically adjusts its behavior based on the performance of each solution throughout the search process. This section details the key components of AMO, the motivation behind their selection, and how they collectively address the limitations of existing OBL techniques.

Several variants of opposition-based learning (OBL), such as classical OBL57, quasi OBL58, and random OBL59, along with their improved extensions, have been explored in the literature. While these methods contribute to enhancing exploration in the early stages of optimization, they suffer from three major limitations:

  • Lack of adaptiveness: Most techniques apply the same opposition logic to all individuals, regardless of their search behavior.

  • Absence of learning memory: They do not consider historical success, thus losing valuable knowledge from previous high-quality solutions.

  • Inflexible exploitation: Static opposition formulas can lead to over-exploration or premature convergence due to lack of feedback control.

To overcome these challenges, AMO incorporates a confidence-based switching mechanism that evaluates each individual’s improvement trend and applies a suitable opposition strategy accordingly.

Blended opposition-based learning (BOBL) for stagnated individuals

The Blended Opposition-Based Learning (BOBL) strategy is developed to enhance population diversity and prevent premature convergence. During the optimization process, stagnated individuals, those exhibiting negligible improvement in fitness across successive iterations, may become trapped in local optima. To effectively guide these individuals without compromising the overall stability of the population, BOBL integrates classical opposition with the population mean to generate new candidate solutions, thereby achieving a balanced trade-off between exploration and exploitation.

$$\tilde{x}_{z,j} = w \times (lb_j + ub_j - x_{z,j}) + (1 - w) \times \bar{x}_j \tag{8}$$

where \(\bar{x}_j\) is the mean position of the population in the \(j\)-th dimension, \(w \in [0, 1]\) is a randomly chosen weight, \(lb_j\) and \(ub_j\) stand for the limit boundaries of the \(j\)-th variable, \(x_{z,j}\) is the current position of the \(j\)-th variable of the individual z, and \(\tilde{x}_{z,j}\) is the new BOBL-generated solution for the \(j\)-th variable of individual z. This creates a balance between exploration and exploitation, providing a controlled push for individuals likely stuck in local traps.
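A minimal sketch of BOBL candidate generation, assuming Eq. (8) mixes the classical opposite point lb + ub − x with the population mean through a random weight w (function name ours):

```python
import numpy as np

def bobl_opposition(x, pop_mean, lb, ub, rng=None):
    """Blended opposition (our reading of Eq. 8): a random convex
    combination of the classical opposite point and the population mean."""
    if rng is None:
        rng = np.random.default_rng()
    w = rng.random(np.shape(x))           # per-dimension weight in [0, 1)
    return w * (lb + ub - x) + (1.0 - w) * pop_mean
```

Since both the opposite point and the mean lie inside [lb, ub], the blend is automatically feasible, which is one practical appeal of this form.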

Historical reflective opposition for improving individuals

The Historical Reflective Opposition (HRO) strategy is designed to enhance the progress of individuals exhibiting notable improvement during the search process. Such individuals are considered to follow promising search trajectories. In this approach, each individual utilizes its previously stored best solution as a reference to generate a new candidate, reinforcing successful exploration patterns and accelerating convergence. The new candidate is formulated as:

$$\hat{x}_{z,j} = p_{z,j} + k \times (p_{z,j} - x_{z,j}) \tag{9}$$

where \(p_{z,j}\) represents the best solution ever found by a specific individual, \(\hat{x}_{z,j}\) is the new solution generated by historical reflective opposition for the \(j\)-th variable of individual z, and \(k\) adjusts the reflection depth. This strategy intensifies the search in a successful direction and helps accelerate convergence in regions likely to contain the global optimum.
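The reflection across the personal best can be written in one line; this sketch assumes Eq. (9) takes the form p + k·(p − x), which is our reading of the description (name ours):

```python
def hro_opposition(x, personal_best, k):
    """Historical reflective opposition (our reading of Eq. 9): reflect
    the current position x across the individual's stored best, with the
    reflection depth scaled by k. Works on scalars or NumPy arrays."""
    return personal_best + k * (personal_best - x)
```

With k = 1 this is an exact mirror image through the personal best; smaller k stays closer to the memorized solution.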

Random opposition for uncertain individuals

The Random Opposition (RO) strategy is introduced to handle individuals whose performance trends are uncertain, making it difficult to determine whether exploration or exploitation should be emphasized. To preserve population diversity and prevent premature convergence, AMO employs the RO mechanism, which generates new candidate solutions through random opposition as follows:

$$\check{x}_{z,j} = lb_j + ub_j - r \times x_{z,j} \tag{10}$$

where \(\check{x}_{z,j}\) denotes the newly generated random opposition solution of the \(j\)-th variable of individual z, and r stands for a uniform random number between 0 and 1. This introduces stochastic variation into the search space, allowing uncertain individuals to escape shallow basins and potentially discover better regions.
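Random opposition is similarly compact; this sketch assumes Eq. (10) follows the usual random-OBL form lb + ub − r·x (name ours):

```python
def random_opposition(x, lb, ub, r):
    """Random opposition (our reading of Eq. 10): the classical opposite
    point with the current position damped by a uniform factor r."""
    return lb + ub - r * x
```

At r = 1 this reduces to the classical opposite point lb + ub − x; smaller r shifts the candidate toward lb + ub, adding the stochastic variation described above.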

Mathematically, the AMO framework works as follows. For each solution \(X_z\), the confidence score \(S_z\) is computed as:

$$S_z = \frac{f\bigl(x_z^{t-1}\bigr) - f\bigl(x_z^{t}\bigr)}{\bigl|f\bigl(x_z^{t-1}\bigr)\bigr| + \epsilon} \tag{11}$$

where \(x_z^{t-1}\) is the position of the same individual in the previous iteration, \(f(\cdot)\) is the fitness function, and \(\epsilon\) is a small constant to avoid division by zero. This score reflects the quality of improvement, allowing the algorithm to adapt its opposition behavior. Based on the value of \(S_z\), the algorithm selects one of the following strategies:

  • If \(S_z < \alpha\), the BOBL strategy is applied

  • If \(S_z > \beta\), the algorithm generates a historical reflective opposition

  • Otherwise, a random opposition is used

In this context, \(\alpha\) and \(\beta\) are assigned the values 0.01 and 0.1, respectively.
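The confidence-based switching can be sketched as below. The score formula is our reading of the description (relative fitness improvement between iterations), with the stated thresholds α = 0.01 and β = 0.1; the function name is ours:

```python
def select_strategy(f_prev, f_curr, alpha=0.01, beta=0.1, eps=1e-12):
    """Confidence score (our reading of Eq. 11) and three-way switch:
    stagnated -> BOBL, clearly improving -> HRO, uncertain -> RO."""
    score = (f_prev - f_curr) / (abs(f_prev) + eps)
    if score < alpha:
        return "BOBL"   # negligible improvement: individual is stagnated
    if score > beta:
        return "HRO"    # significant improvement: exploit the direction
    return "RO"         # ambiguous trend: preserve diversity
```

For a minimization problem, a large positive score means the fitness dropped sharply since the last iteration, which is why the HRO branch intensifies search along that direction.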

Midpoint-based mutation strategy

In this section, we introduce the midpoint-based mutation mechanism. This strategy applies a randomized guidance mechanism that steers the current solution toward the midpoint between the global best and a randomly selected peer from the population. It helps maintain a balance between exploration and a soft focus on promising regions of the search space. For each individual \(X_z\), a random peer \(X_k\) is selected such that \(k \neq z\), and let \(X_{best}\) be the global best (leader) solution. The midpoint between the best and the random peer is computed as:

$$M = \frac{X_{best} + X_k}{2} \tag{12}$$

A mutated solution is then generated by moving the current solution \(X_z\) toward this midpoint using a scaling factor r:

$$X_z^{new} = X_z + r \times (M - X_z) \tag{13}$$

Here, r is a uniformly generated random number in [0, 1], controlling how far the mutated solution shifts toward the midpoint. This operator encourages the current solution to be influenced by both elite knowledge (via \(X_{best}\)) and population diversity (via \(X_k\)), while retaining stochastic behavior.
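The two steps of Eqs. (12)-(13) translate directly into code; this sketch assumes the peer has already been sampled with k ≠ z (function name ours):

```python
import numpy as np

def midpoint_mutation(x, x_best, x_peer, rng=None):
    """Midpoint-based mutation (Eqs. 12-13): move x a random fraction r
    of the way toward the midpoint of the leader and a random peer."""
    if rng is None:
        rng = np.random.default_rng()
    midpoint = (x_best + x_peer) / 2.0   # Eq. (12)
    r = rng.random()                     # scaling factor in [0, 1)
    return x + r * (midpoint - x)        # Eq. (13)
```

Because r < 1, the mutated point always lies strictly between the current solution and the midpoint, giving the "soft focus" described above rather than a hard jump onto elite positions.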

AMOMM-BKA algorithm framework

This study proposes an enhanced BKA by integrating four key mechanisms: BOBL, historical reflective opposition, random opposition, and the midpoint-based mutation strategy. The aim is to strengthen the algorithm’s ability to maintain diversity, escape local optima, and improve convergence behavior. The optimization process begins with a population initialization phase, where AMOMM-BKA generates refined opposite solutions using a blended approach that combines classical opposition and the population mean. Each individual also maintains a personal memory to store its historically best position, which supports learning during later stages. In each iteration, the AMOMM-BKA performs a sequence of operations: an attacking phase, followed by midpoint-based mutation, and then a migration step. The midpoint-based mutation, applied after attacking, adjusts individuals by moving them toward the midpoint between the global best and a randomly selected peer, encouraging exploratory moves in promising regions. To further enhance adaptability, AMOMM-BKA monitors the recent fitness trend of each individual and dynamically chooses the appropriate opposition strategy. If an individual’s performance is stagnating, a BOBL is applied to gently push it out of local optima. If the individual is showing significant improvement, a historical reflective opposition is used to intensify the search around successful directions. In cases where the performance trend is ambiguous, a random opposition is introduced to preserve diversity and prevent convergence stagnation. Together, these strategies allow the AMOMM-BKA to maintain a flexible balance between exploration and exploitation, leading to more reliable performance on complex and multimodal optimization tasks. To facilitate a clearer understanding of the proposed technique, the flowchart of AMOMM-BKA is illustrated in Fig. 5.
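The iteration sequence described above (attack, midpoint mutation, migration, then adaptive opposition with personal memory) can be condensed into a single runnable sketch. All constants, the score formula, and the greedy acceptance of opposition candidates are our assumptions, not the authors' exact implementation:

```python
import numpy as np

def amomm_bka_sketch(obj, dim, lb, ub, n_agents=20, T=100, seed=0):
    """Condensed AMOMM-BKA loop as we read it; minimizes obj over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_agents, dim)) * (ub - lb)
    fit = np.apply_along_axis(obj, 1, X)
    mem_X, mem_f = X.copy(), fit.copy()          # personal-best memory
    prev_f = fit.copy()
    for t in range(1, T + 1):
        best = X[np.argmin(fit)].copy()
        # Attack phase (piecewise scaling with decaying factor n)
        n = 0.05 * np.exp(-2.0 * (t / T) ** 2)
        r = rng.random(X.shape)
        X = np.where(0.9 < r, X + n * (1 + np.sin(r)) * X, X + n * (2 * r - 1) * X)
        # Midpoint-based mutation toward (best + random peer) / 2
        peers = X[rng.integers(n_agents, size=n_agents)]
        X += rng.random((n_agents, 1)) * ((best + peers) / 2.0 - X)
        # Migration with Cauchy mutation
        c = rng.standard_cauchy(X.shape)
        f_rand = fit[rng.integers(n_agents, size=n_agents)]
        m = 2.0 * np.sin(rng.random((n_agents, 1)) + np.pi / 2)
        X = np.where((fit < f_rand)[:, None], X + c * (X - best), X + c * (best - m * X))
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        # Adaptive memory-based opposition, greedily accepted per agent
        score = (prev_f - fit) / (np.abs(prev_f) + 1e-12)
        mean = X.mean(axis=0)
        for z in range(n_agents):
            if score[z] < 0.01:                  # stagnated -> blended opposition
                w = rng.random(dim)
                cand = w * (lb + ub - X[z]) + (1 - w) * mean
            elif score[z] > 0.1:                 # improving -> historical reflection
                cand = mem_X[z] + rng.random() * (mem_X[z] - X[z])
            else:                                # uncertain -> random opposition
                cand = lb + ub - rng.random(dim) * X[z]
            cand = np.clip(cand, lb, ub)
            fc = obj(cand)
            if fc < fit[z]:
                X[z], fit[z] = cand, fc
        improved = fit < mem_f                   # update personal-best memory
        mem_X[improved], mem_f[improved] = X[improved], fit[improved]
        prev_f = fit.copy()
    return mem_X[np.argmin(mem_f)], mem_f.min()
```

Running it on the sphere function, for example `amomm_bka_sketch(lambda v: float(np.sum(v * v)), 5, -10.0, 10.0, T=50)`, returns the best remembered position and its fitness; the memory guarantees the reported value never worsens across iterations.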

Fig. 5.

Fig. 5

The flowchart of AMOMM-BKA.

Computational complexity analysis

This section presents the complexity analysis of the proposed AMOMM-BKA in terms of time and space complexity to assess the computational cost of the introduced enhancements. The theoretical complexities of the AMOMM-BKA are derived with reference to the procedural steps outlined in Algorithm 1.

Time complexity of AMOMM-BKA

Time complexity is a key factor in assessing an algorithm’s overall performance. In optimization algorithms, it is primarily influenced by three operations: initialization, fitness evaluation, and position updates of the agents. In this subsection, we compare the time complexity of the original BKA with that of the proposed AMOMM-BKA. To systematically analyze the time complexity of AMOMM-BKA, we consider a population of N search agents, each with D dimensions, and denote the maximum number of function evaluations as Max FEs. The time complexity of the BKA algorithm can then be evaluated as follows:

  • Initialization: The algorithm generates N solutions, each with D dimensions, once at the beginning of each run, leading to a time complexity of O(N × D).

  • Fitness Evaluation: The evaluation of all N solutions across D dimensions at each iteration requires a time complexity of O(N × D).

  • Position Updates: Updating the positions of all agents during each iteration requires a time complexity of O(N × D).

Hence, the total time complexity of the original BKA is O(N × D) for initialization plus O(N × D) per iteration, i.e., O(MaxFEs × D) over the whole run.

For the proposed AMOMM-BKA, the computational complexity can be evaluated in a similar manner. Furthermore, AMOMM-BKA integrates additional procedures such as the Adaptive Memory-based Opposition and the Midpoint-based Mutation strategies, which contribute to an increased computational load. The step-by-step complexity analysis is outlined as follows:

  • The initialization process, similar to that of the original BKA, involves generating N candidate solutions across D dimensions, resulting in a computational complexity of O(N × D).

  • The fitness assessment of the search agents at each iteration has a computational complexity of O(N × D).

  • The computational complexity of updating the agents’ positions during both the exploration and exploitation phases remains O(N × D) per iteration.

  • The mutation strategy has a complexity of O(N × D) per iteration, as all individuals are dimension-wise mutated and evaluated.

  • The AMO mechanism has a complexity of O(N × D) per iteration, as opposition solutions are generated and evaluated for all individuals.

Combining these factors, the overall time complexity of AMOMM-BKA is O(MaxFEs × D), the same asymptotic order as the original BKA, although the additional mutation and opposition evaluations increase the constant factor per iteration.
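As a rough illustration of that constant-factor overhead, the per-iteration function-evaluation budget can be counted directly. The assumption that the position update, the mutation, and the opposition step each evaluate all N individuals once per iteration is a simplification for this sketch.

```python
def evaluations_per_iteration(N, with_amomm=False):
    """Count fitness evaluations consumed in one iteration.

    Baseline BKA evaluates the N updated positions once; AMOMM-BKA
    additionally evaluates N mutated and N opposition solutions, so
    the asymptotic order O(MaxFEs * D) is unchanged while the
    per-iteration cost grows by a constant factor.
    """
    base = N                            # position-update evaluations
    extra = 2 * N if with_amomm else 0  # mutation + opposition evaluations
    return base + extra
```

Under this accounting, a fixed MaxFEs budget is simply exhausted in fewer iterations by AMOMM-BKA rather than at a higher total cost.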

The space complexity of AMOMM-BKA

In computer science, space complexity refers to the memory required for an algorithm’s execution. In AMOMM-BKA, memory usage is primarily determined by the number of dimensions and the population size of black-winged kites, both set during initialization. Thus, the overall space complexity of AMOMM-BKA is O(N × D).

Algorithm 1.

Algorithm 1

Pseudo-code outlining the proposed AMOMM-BKA framework

Experimental results and comprehensive analysis

To ensure consistent and reliable experimentation, all simulations were carried out on a 64-bit Windows 11 platform using MATLAB R2023b. The computational environment comprised an Intel® Core™ i3-1005G1 processor (1.20 GHz) and 8 GB of RAM.

Benchmark functions

This section systematically investigates the efficacy of the proposed AMOMM-BKA through a series of comprehensive experimental studies. To ensure a robust evaluation, three well-established CEC benchmark suites are employed. Initially, the CEC2005 [60] test suite, comprising 23 widely recognized benchmark functions as outlined in Table 1, is utilized to analyze the balance between exploration and exploitation capabilities. Subsequently, the performance of AMOMM-BKA is further scrutinized on more intricate and compositionally challenging landscapes using the CEC2019 [61] and CEC2022 [62] benchmark suites, presented in Tables 2 and 3, respectively. Additionally, the scalability and robustness of the proposed algorithm are examined on large-scale optimization problems using the high-dimensional functions of CEC2005. To further validate its real-world applicability, AMOMM-BKA is applied to four diverse and complex engineering design problems, thereby demonstrating its competency in addressing practical global optimization scenarios.

Table 1.

Overview of the standard CEC2005 test functions.

Name Function D Range f_min
Fun01 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun02 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun03 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun04 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun05 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun06 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun07 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun08 Inline graphic 10, 30, 50, 100 Inline graphic Inline graphic
Fun09 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun10 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun11 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun12 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun13 Inline graphic 10, 30, 50, 100 Inline graphic 0
Fun14 Inline graphic 2 Inline graphic 1
Fun15 Inline graphic 4 Inline graphic 0.0003
Fun16 Inline graphic 2 Inline graphic -1.0316
Fun17 Inline graphic 2 Inline graphic 0.398
Fun18 Inline graphic 2 Inline graphic 3.000
Fun19 Inline graphic 3 Inline graphic -3.86
Fun20 Inline graphic 6 Inline graphic -3.32
Fun21 Inline graphic 4 Inline graphic -10.1532
Fun22 Inline graphic 4 Inline graphic -10.4028
Fun23 Inline graphic 4 Inline graphic -10.5363

Table 2.

Overview of the CEC2019 test functions.

Name Function D Range f_min
Fun24 Storn’s Chebyshev Polynomial Fitting Problem 9 [-8192, 8192] 1
Fun25 Inverse Hilbert Matrix Problem 16 [-16,384, 16,384] 1
Fun26 Lennard-Jones Minimum Energy Cluster 18 [-4, 4] 1
Fun27 Shifted Rotated Rastrigin’s Function 10 [-100, 100] 1
Fun28 Shifted Rotated Griewank’s Function 10 [-100, 100] 1
Fun29 Shifted Rotated Weierstrass Function 10 [-100, 100] 1
Fun30 Modified Schwefel’s Function 10 [-100, 100] 1
Fun31 Expanded Schaffer’s Function 10 [-100, 100] 1
Fun32 Shifted Rotated Happy Cat Function 10 [-100, 100] 1
Fun33 Shifted Rotated Ackley Function 10 [-100, 100] 1

Table 3.

Overview of the CEC2022 test functions.

Name Function D Range f_min
Fun34 Shifted and full Rotated Zakharov function 10 [-100, 100] 300
Fun35 Shifted and full Rotated Rosenbrock’s function 10 [-100, 100] 400
Fun36 Shifted and full Rotated Expanded Schaffer’s function 10 [-100, 100] 600
Fun37 Shifted and full Rotated Non-Continuous Rastrigin’s function 10 [-100, 100] 800
Fun38 Shifted and full Rotated Levy function 10 [-100, 100] 900
Fun39 Hybrid function 1 (N=3) 10 [-100, 100] 1800
Fun40 Hybrid function 2 (N=6) 10 [-100, 100] 2000
Fun41 Hybrid function 3 (N=5) 10 [-100, 100] 2200
Fun42 Composition function 1 (N=5) 10 [-100, 100] 2300
Fun43 Composition function 2 (N=4) 10 [-100, 100] 2400
Fun44 Composition function 3 (N=5) 10 [-100, 100] 2600
Fun45 Composition function 4 (N=6) 10 [-100, 100] 2700

Parameter setting

This section outlines the parameter settings for both the proposed algorithm and its comparative counterparts. To evaluate the efficiency of the proposed algorithm on the CEC2005, CEC2019, and CEC2022 benchmark functions, four distinct categories of optimization algorithms were considered: (1) widely cited classical optimizers, such as PSO [16] and GWO [18]; (2) recently developed algorithms, including GJO [19], SO [20], SCSO [21], and AVOA [22]; (3) high-performance optimizers, namely CMAES [15] and SHADE [13]; and (4) the classical BKA [42] and its recent improvements, such as CBKA [46], IBKA [49], and QOBLBKA [51]. The parameter values for all competing algorithms were adopted from their respective original studies and are detailed in Table 4. To ensure a fair evaluation, each function was executed with a fixed population size of 30 agents and 15,000 function evaluations (FEs), across different dimensions. Furthermore, the sensitivity of the proposed algorithm to its control parameter and the two threshold values (0.01 and 0.1), as well as their impact on overall performance, is analyzed. To eliminate bias caused by randomness, each algorithm was independently tested against each benchmark function over 30 runs.
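The reporting protocol used throughout the result tables (AVG and STD over 30 independent runs) can be sketched as follows; run_fn is a hypothetical callable standing in for one seeded run of an optimizer, introduced only for this illustration.

```python
import statistics

def summarize_runs(run_fn, n_runs=30):
    """Execute n_runs independent, differently seeded runs and report
    the mean (AVG) and sample standard deviation (STD) of the best
    fitness values, as tabulated in the experiments.
    """
    best = [run_fn(seed) for seed in range(n_runs)]
    return statistics.mean(best), statistics.stdev(best)
```

Seeding each run differently is what removes the bias caused by randomness that the protocol above is designed to control.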

Table 4.

Parameter configurations for the meta-heuristic algorithms.

Algorithms Parameter settings
PSO Inline graphic, Inline graphic, Inline graphic, Damping ratio = 0.99
GWO Inline graphic
GJO Inline graphic
SO Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic
SCSO Inline graphic, Roulette wheel = [0, 360]
AVOA Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic
CMAES Inline graphic, Inline graphic
BKA Inline graphic
CBKA Inline graphic
IBKA Inline graphic
QOBLBKA Inline graphic, Jumping probability=0.1
AMOMM-BKA Inline graphic, Inline graphic, threshold values 0.01 and 0.1

CEC2005 result analysis and discussion

The experimental simulations employ twenty-three benchmark functions to comprehensively evaluate the proposed AMOMM-BKA against the original BKA and other competitive algorithms. Performance assessment is based on the statistical average (AVG) and standard deviation (STD) of the obtained fitness values. For the first thirteen non-fixed benchmark functions, a default dimension of 30 was used, while the remaining ten fixed-dimension functions were evaluated using their respective standard settings. The comparison algorithms selected are well established and known for their strong performance on complex optimization problems. Table 5 presents the results of thirty independent runs for each algorithm. As shown, the proposed AMOMM-BKA consistently outperforms the original BKA and other competitors. Specifically, AMOMM-BKA achieved the best results in fifteen functions considering both AVG and STD values and reached optimal fitness values for functions Fun01, Fun03, Fun11, and Fun16–Fun23. In several cases, such as Fun09, Fun11, Fun16–Fun19, and Fun21–Fun23, some algorithms demonstrated comparable performance with minimal differences in average fitness. For Fun05, Fun06, Fun12, and Fun13, the AVOA algorithm exhibited superior performance in both average and standard deviation metrics, while for Fun08 SO was the clear best performer. For Fun14, AMOMM-BKA and CBKA achieved nearly identical outcomes. In Fun20, the SO algorithm demonstrated slightly better standard deviation values, although AMOMM-BKA and SO produced similar average fitness results. For Fun21–Fun23, AMOMM-BKA attained the smallest standard deviation values, indicating stable convergence. Overall, out of the twenty-three benchmark functions, AMOMM-BKA achieved the best performance in eight functions and delivered comparable results in nine others. These findings confirm that AMOMM-BKA exhibits robust and competitive optimization capability across diverse test scenarios.
To ensure a more insightful comparative evaluation, the Friedman rank test [63] is employed to statistically assess the performance of the examined algorithms. Table 6 clearly highlights the Friedman mean ranks of all compared algorithms. Based on the Friedman rank test, AMOMM-BKA secures the first position, demonstrating superior performance over all competitors, while AVOA and IBKA achieve the second and third ranks, respectively. The subsequent analysis employed the Wilcoxon rank-sum test [64] to examine whether the performance differences between AMOMM-BKA and the comparative algorithms were statistically significant. Table 7 presents the Wilcoxon test outcomes (at a significance level of 0.05) for AMOMM-BKA against the competing algorithms on the CEC2005 benchmark functions, reported in terms of p-values. The results reveal that, for the majority of the tested functions, AMOMM-BKA exhibits statistically significant superiority, with p-values less than 0.05, indicating meaningful performance differences from the baseline algorithms. A few p-values are listed as NA, signifying that both algorithms achieved similar optimal results on simpler functions. In the results column, a “+” symbol denotes a statistically significant improvement in favor of AMOMM-BKA, whereas a “−” indicates no significant difference. The last row of Table 7, labeled (w/t/l), summarizes the counts of wins, ties, and losses, respectively, for AMOMM-BKA compared to its counterparts. These findings indicate that the proposed AMOMM-BKA is superior to, and significantly different from, the other algorithms on most CEC2005 functions.
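The Friedman mean ranks in Table 6 can be reproduced from a matrix of average fitness values (one row per function, one column per algorithm) by ranking each row and averaging down the columns. The sketch below uses dense ranking, in which tied algorithms share a rank and the next distinct value takes the next integer; this matches the repeated rank-1 entries in the table but is an inferred detail rather than a stated one.

```python
import numpy as np

def friedman_mean_ranks(results):
    """Average dense ranks of each algorithm across benchmark functions.

    `results` is an (n_functions, n_algorithms) array of mean fitness
    values, lower being better. Ties share a rank, and the next distinct
    value receives the next integer rank.
    """
    results = np.asarray(results, dtype=float)
    ranks = np.zeros_like(results)
    for i, row in enumerate(results):
        uniq = np.unique(row)                    # sorted distinct values
        ranks[i] = np.searchsorted(uniq, row) + 1  # dense rank, 1-based
    return ranks.mean(axis=0)
```

Feeding the AVG columns of Table 5 into such a routine yields the per-algorithm average ranks from which the overall ordering in Table 6 follows.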

Table 5.

Experimental outcomes for various algorithms on CEC2005 test suite.

Functions Fun01 Fun02 Fun03 Fun04 Fun05
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 1.851E+02 2.987E+01 4.062E+00 1.106E+00 1.890E+02 6.882E+01 1.311E+01 1.381E+00 1.251E+03 7.537E+02
GWO 1.605E-12 9.735E-13 9.938E-17 9.470E-17 1.267E-05 3.805E-05 7.714E-01 6.381E-01 2.690E+01 7.417E-01
GJO 1.914E-27 3.523E-27 2.008E-32 2.899E-32 2.623E-17 8.792E-17 9.296E+00 1.222E+01 2.779E+01 7.210E-01
SO 7.266E-82 1.991E-81 4.383E-43 1.645E-42 7.566E-57 2.313E-56 1.818E-36 2.998E-36 2.034E+01 1.437E+01
SCSO 1.824E-120 7.157E-120 1.592E-75 6.730E-75 5.275E-101 2.883E-100 3.086E-49 9.025E-49 2.799E+01 6.575E-01
AVOA 1.979E-286 0.000E+00 1.148E-142 6.286E-142 7.838E-222 0.000E+00 1.727E-144 6.681E-144 6.049E-05 5.016E-05
CMAES 1.547E+01 2.761E+00 4.174E-03 1.018E-03 2.171E+00 2.942E+00 2.877E+01 3.950E+00 3.202E+01 2.195E+01
SHADE 1.039E+02 3.586E+01 6.024E-04 1.359E-04 2.242E+04 4.028E+03 6.983E+01 2.611E+00 1.125E+02 4.310E+01
BKA 9.991E-74 5.472E-73 2.643E-38 1.448E-37 5.021E-88 2.268E-87 4.565E-44 2.284E-43 2.788E+01 8.927E-01
CBKA 7.589E-92 3.914E-91 1.007E-48 4.587E-48 3.088E-93 9.441E-93 1.482E-48 3.609E-48 2.798E+01 8.564E-01
IBKA 3.677E-157 2.014E-156 1.856E-82 1.014E-81 1.456E-154 7.952E-154 7.207E-82 3.162E-81 2.821E+01 7.308E-01
QOBLBKA 3.532E-156 1.935E-155 4.435E-83 2.380E-82 1.147E-150 6.285E-150 4.568E-75 2.502E-74 2.822E+01 7.890E-01
AMOMM-BKA 0.000E+00 0.000E+00 2.899E-276 0.000E+00 0.000E+00 0.000E+00 9.399E-274 0.000E+00 2.464E+01 1.077E-01
Fun06 Fun07 Fun08 Fun09 Fun10
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 2.558E+00 1.217E+00 1.526E+03 2.196E+02 -6.629E+03 1.200E+03 1.624E+02 2.868E+01 2.539E+00 4.597E-01
GWO 7.572E-01 3.147E-01 6.528E-03 2.107E-03 -6.044E+03 9.671E+02 2.108E+00 4.237E+00 1.025E-13 1.932E-14
GJO 2.641E+00 4.910E-01 1.420E-03 9.305E-04 -4.366E+03 1.015E+03 0.000E+00 0.000E+00 6.484E-15 1.656E-15
SO 7.023E-01 6.342E-01 2.283E-04 2.364E-04 -1.251E+04 9.966E+01 6.008E+00 1.138E+01 1.059E-01 5.800E-01
SCSO 1.776E+00 5.534E-01 1.250E-04 1.364E-04 -6.603E+03 8.131E+02 0.000E+00 0.000E+00 4.441E-16 0.000E+00
AVOA 4.283E-07 3.569E-07 1.377E-04 1.559E-04 -1.227E+04 5.537E+02 0.000E+00 0.000E+00 4.441E-16 0.000E+00
CMAES 9.214E-06 2.962E-06 1.599E-01 2.267E-02 -5.429E+03 4.607E+01 1.271E+02 6.809E+01 9.096E-04 1.401E-04
SHADE 2.988E-05 1.588E-05 9.341E-01 2.361E-01 -1.231E+04 1.643E+02 7.243E-01 8.663E-01 2.178E-02 1.111E-01
BKA 1.969E+00 1.083E+00 2.704E-04 1.772E-04 -8.903E+03 1.189E+03 0.000E+00 0.000E+00 4.441E-16 0.000E+00
CBKA 1.766E+00 4.694E-01 2.638E-04 1.686E-04 -8.903E+03 9.643E+02 0.000E+00 0.000E+00 4.441E-16 0.000E+00
IBKA 2.076E+00 1.341E+00 2.569E-05 2.645E-05 -9.977E+03 1.355E+03 0.000E+00 0.000E+00 4.441E-16 0.000E+00
QOBLBKA 2.263E+00 1.578E+00 2.585E-04 2.278E-04 -7.887E+03 1.673E+03 0.000E+00 0.000E+00 4.441E-16 0.000E+00
AMOMM-BKA 1.223E-05 3.089E-05 2.428E-05 2.388E-05 -1.034E+04 5.998E+02 0.000E+00 0.000E+00 4.441E-16 0.000E+00
Fun11 Fun12 Fun13 Fun14 Fun15
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 1.142E-01 5.645E-02 5.286E-02 5.185E-02 5.157E-01 1.931E-01 3.496E+00 2.548E+00 9.280E-04 1.368E-04
GWO 6.797E-03 8.944E-03 5.676E-02 3.246E-02 5.943E-01 2.878E-01 4.690E+00 4.027E+00 4.433E-03 8.104E-03
GJO 0.000E+00 0.000E+00 2.138E-01 1.200E-01 1.668E+00 2.513E-01 3.803E+00 4.144E+00 2.477E-03 6.066E-03
SO 6.653E-02 1.693E-01 1.669E-01 3.289E-01 4.641E-01 8.096E-01 1.064E+00 2.522E-01 5.547E-04 2.668E-04
SCSO 0.000E+00 0.000E+00 9.805E-02 4.812E-02 2.404E+00 3.510E-01 5.237E+00 4.489E+00 3.747E-04 2.315E-04
AVOA 0.000E+00 0.000E+00 2.556E-08 1.725E-08 5.442E-08 6.246E-08 1.687E+00 1.855E+00 4.242E-04 1.841E-04
CMAES 3.665E-04 1.374E-03 9.734E-07 4.217E-07 1.376E-05 6.276E-06 5.166E+00 3.608E+00 6.399E-03 5.751E-03
SHADE 7.136E-04 1.944E-03 7.923E-03 2.655E-02 7.361E-04 2.779E-03 1.031E+00 1.815E-01 1.198E-03 1.335E-03
BKA 0.000E+00 0.000E+00 7.668E-02 1.136E-01 1.866E+00 5.379E-01 1.295E+00 9.054E-01 1.835E-03 5.064E-03
CBKA 0.000E+00 0.000E+00 5.886E-02 2.459E-02 1.746E+00 3.167E-01 9.980E-01 2.182E-16 2.982E-03 6.934E-03
IBKA 0.000E+00 0.000E+00 1.078E-01 1.548E-01 1.679E+00 5.131E-01 1.262E+00 8.591E-01 1.158E-03 3.647E-03
QOBLBKA 0.000E+00 0.000E+00 1.025E-01 1.741E-01 1.731E+00 3.625E-01 1.624E+00 1.384E+00 1.163E-03 3.645E-03
AMOMM-BKA 0.000E+00 0.000E+00 4.157E-07 7.149E-07 5.854E-02 6.678E-02 9.980E-01 1.623E-16 3.321E-04 5.994E-05
Fun16 Fun17 Fun18 Fun19 Fun20
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO -1.032E+00 4.701E-16 3.979E-01 0.000E+00 3.000E+00 6.311E-15 -3.863E+00 2.067E-15 -3.251E+00 5.924E-02
GWO -1.032E+00 2.618E-08 3.979E-01 2.874E-06 3.000E+00 4.231E-05 -3.862E+00 2.152E-03 -3.275E+00 7.160E-02
GJO -1.032E+00 2.266E-07 3.979E-01 4.992E-05 3.000E+00 3.551E-06 -3.859E+00 3.846E-03 -3.125E+00 1.496E-01
SO -1.032E+00 5.532E-16 3.979E-01 2.932E-08 3.900E+00 4.930E+00 -3.837E+00 1.411E-01 -3.322E+00 2.603E-10
SCSO -1.032E+00 5.812E-10 3.979E-01 2.865E-08 3.000E+00 7.252E-08 -3.861E+00 3.212E-03 -3.177E+00 2.100E-01
AVOA -1.032E+00 4.103E-16 3.979E-01 0.000E+00 3.000E+00 4.736E-06 -3.863E+00 9.864E-11 -3.266E+00 6.037E-02
CMAES -1.032E+00 6.775E-16 4.040E-01 2.137E-02 3.000E+00 6.226E-16 -3.863E+00 2.390E-15 -3.168E+00 2.033E-01
SHADE -1.032E+00 6.584E-16 3.979E-01 0.000E+00 3.000E+00 1.296E-15 -3.863E+00 2.710E-15 -3.308E+00 3.653E-02
BKA -1.032E+00 5.684E-16 3.979E-01 0.000E+00 3.000E+00 2.267E-15 -3.863E+00 2.445E-15 -3.293E+00 5.328E-02
CBKA -1.032E+00 5.532E-16 3.979E-01 3.243E-16 3.000E+00 2.389E-15 -3.863E+00 2.316E-15 -3.294E+00 5.150E-02
IBKA -1.032E+00 4.215E-11 3.979E-01 3.729E-12 3.000E+00 9.590E-15 -3.863E+00 5.088E-08 -3.294E+00 5.102E-02
QOBLBKA -1.032E+00 4.503E-07 3.979E-01 1.310E-11 3.000E+00 2.903E-15 -3.863E+00 2.613E-12 -3.282E+00 5.858E-02
AMOMM-BKA -1.032E+00 6.046E-16 3.979E-01 0.000E+00 3.000E+00 1.365E-15 -3.863E+00 2.354E-15 -3.322E+00 5.827E-03
Functions Fun21 Fun22 Fun23
AVG STD AVG STD AVG STD
PSO -7.385E+00 3.112E+00 -8.956E+00 2.447E+00 -8.845E+00 2.904E+00
GWO -9.138E+00 2.061E+00 -1.023E+01 9.628E-01 -1.053E+01 1.023E-03
GJO -8.708E+00 2.447E+00 -9.606E+00 2.065E+00 -9.895E+00 1.959E+00
SO -1.003E+01 4.339E-01 -1.028E+01 4.340E-01 -1.034E+01 5.660E-01
SCSO -6.076E+00 2.073E+00 -6.961E+00 2.697E+00 -6.406E+00 2.942E+00
AVOA -1.015E+01 2.178E-13 -1.040E+01 2.160E-13 -1.054E+01 1.948E-13
CMAES -7.429E+00 3.553E+00 -9.894E+00 1.938E+00 -1.004E+01 1.888E+00
SHADE -9.632E+00 1.544E+00 -1.015E+01 1.394E+00 -1.052E+01 7.242E-02
BKA -9.902E+00 1.373E+00 -1.040E+01 2.697E-05 -1.002E+01 1.961E+00
CBKA -1.015E+01 2.448E-06 -1.040E+01 2.348E-05 -1.054E+01 7.920E-06
IBKA -1.015E+01 2.265E-07 -1.040E+01 2.069E-04 -1.054E+01 4.931E-06
QOBLBKA -1.015E+01 3.230E-03 -1.015E+01 1.394E+00 -1.054E+01 2.430E-06
AMOMM-BKA -1.015E+01 5.278E-15 -1.040E+01 1.682E-15 -1.054E+01 1.714E-15

Table 6.

Friedman test analysis on CEC2005 test suite.

Functions PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA AMOMM-BKA
Fun01 13 10 9 7 5 2 11 12 8 6 3 4 1
Fun02 13 10 9 7 5 2 12 11 8 6 4 3 1
Fun03 12 10 8 9 5 2 11 13 7 6 3 4 1
Fun04 11 9 10 8 5 2 12 13 7 6 3 4 1
Fun05 13 4 5 2 8 1 11 12 6 7 9 10 3
Fun06 13 6 12 5 8 1 2 4 9 7 10 11 3
Fun07 13 10 9 5 3 4 11 12 8 7 2 6 1
Fun08 8 10 12 1 9 3 11 2 6 6 5 7 4
Fun09 6 3 1 4 1 1 5 2 1 1 1 1 1
Fun10 7 3 2 6 1 1 4 5 1 1 1 1 1
Fun11 6 4 1 5 1 1 2 3 1 1 1 1 1
Fun12 5 6 13 12 9 1 3 4 8 7 11 10 2
Fun13 6 7 8 5 13 1 2 3 12 11 9 10 4
Fun14 8 10 9 3 12 7 11 2 5 1 4 6 1
Fun15 5 12 10 4 2 3 13 9 8 11 6 7 1
Fun16 1 1 1 1 1 1 1 1 1 1 1 1 1
Fun17 1 1 1 1 1 1 2 1 1 1 1 1 1
Fun18 1 1 1 2 1 1 1 1 1 1 1 1 1
Fun19 1 2 4 5 3 1 1 1 1 1 1 1 1
Fun20 8 6 11 1 9 7 10 2 4 3 3 5 1
Fun21 8 5 6 2 9 1 7 4 3 1 1 1 1
Fun22 8 3 7 2 9 1 6 4 1 1 1 5 1
Fun23 8 2 7 4 9 1 5 3 6 1 1 1 1
Average rank 7.61 5.87 6.78 4.39 5.61 2.00 6.70 5.39 4.91 4.09 3.57 4.39 1.48
Overall rank 12 9 11 5 8 2 10 7 6 4 3 5 1

Table 7.

Results of the Wilcoxon rank-sum test at the 5% significance level on standard benchmark functions.

Functions AMOMM-BKA Vs
PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA
Fun01 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 2.213E-06 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+)
Fun02 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+)
Fun03 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+)
Fun04 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+)
Fun05 3.339E-03 (+) 3.020E-11 (+) 3.020E-11 (+) 3.988E-04 (+) 3.020E-11 (+) 5.573E-10 (+) 7.617E-03 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 1.618E-11 (+) 3.020E-11 (+)
Fun06 3.571E-06 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 4.035E-01 (-) 3.671E-03 (+) 1.091E-05 (+) 3.020E-11 (+) 3.020E-11 (+) 5.231E-04 (+) 3.020E-11 (+)
Fun07 3.020E-11 (+) 3.020E-11 (+) 7.389E-11 (+) 2.669E-09 (+) 1.850E-08 (+) 4.353E-05 (+) 3.020E-11 (+) 3.020E-11 (+) 2.154E-10 (+) 1.613E-10 (+) 7.389E-11 (+) 8.236E-02 (-)
Fun08 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 9.919E-11 (+) 2.366E-12 (+) 3.020E-11 (+) 5.186E-07 (+) 1.695E-09 (+) 2.629E-11 (+) 4.825E-01 (+)
Fun09 1.212E-12 (+) 1.158E-12 (+) NA (=) 4.788E-08 (+) NA (=) NA (=) 1.212E-12 (+) 1.212E-12 (+) NA (=) NA (=) NA (=) NA (=)
Fun10 1.212E-12 (+) 1.134E-12 (+) 1.548E-13 (+) 2.074E-13 (+) NA (=) NA (=) 1.212E-12 (+) 1.212E-12 (+) NA (=) NA (=) NA (=) NA (=)
Fun11 1.212E-12 (+) 4.193E-02 (+) NA (=) 2.158E-02 (+) NA (=) NA (=) 1.212E-12 (+) 1.212E-12 (+) NA (=) NA (=) NA (=) NA (=)
Fun12 8.650E-01 (-) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.478E-01 (-) 3.965E-08 (+) 6.283E-06 (+) 3.020E-11 (+) 3.020E-11 (+) 4.225E-03 (+) 3.020E-11 (+)
Fun13 9.705E-01 (-) 1.206E-10 (+) 3.020E-11 (+) 2.433E-05 (+) 3.020E-11 (+) 3.020E-11 (+) 1.028E-06 (+) 3.157E-05 (+) 3.020E-11 (+) 3.020E-11 (+) 2.839E-10 (+) 3.020E-11 (+)
Fun14 2.203E-04 (+) 1.925E-07 (+) 1.806E-08 (+) 1.650E-02 (+) 9.729E-09 (+) 4.760E-04 (+) 8.440E-10 (+) 1.366E-03 (+) 4.165E-01 (-) 4.556E-01 (-) 8.156E-01 (-) 2.610E-02 (+)
Fun15 7.172E-01 (-) 1.585E-04 (+) 1.430E-05 (+) 1.606E-06 (+) 6.010E-08 (+) 5.874E-04 (+) 3.338E-11 (+) 5.072E-10 (+) 2.002E-06 (+) 7.736E-06 (+) 6.096E-03 (+) 4.856E-03 (+)
Fun16 2.368E-01 (-) 5.144E-12 (+) 5.144E-12 (+) 3.246E-03 (+) 5.144E-12 (+) 4.234E-09 (+) 2.142E-02 (+) 4.589E-01 (+) 2.604E-02 (+) 2.299E-01 (+) 7.460E-06 (+) 2.325E-05 (+)
Fun17 NA (=) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.212E-12 (+) 1.607E-01 (-) 1.212E-12 (+) NA (=) NA (=) NA (=) NA (=) NA (=)
Fun18 7.496E-01 (-) 2.474E-11 (+) 2.474E-11 (+) 1.003E-06 (+) 2.474E-11 (+) 2.474E-11 (+) 9.634E-04 (+) 4.112E-05 (+) 8.964E-04 (+) 4.168E-01 (-) 2.893E-02 (+) 3.355E-07 (+)
Fun19 2.080E-07 (+) 1.520E-11 (+) 1.520E-11 (+) 4.370E-01 (-) 1.520E-11 (+) 1.663E-10 (+) 9.159E-09 (+) 9.159E-09 (+) NA (=) 6.329E-01 (-) 5.620E-01 (-) 1.392E-01 (-)
Fun20 5.839E-01 (-) 6.921E-09 (+) 2.480E-10 (+) 8.019E-01 (-) 2.306E-11 (+) 4.241E-08 (+) 8.203E-01 (-) 8.310E-01 (-) 7.050E-07 (+) 7.599E-08 (+) 3.897E-08 (+) 5.181E-07 (+)
Fun21 9.605E-06 (+) 4.099E-12 (+) 4.099E-12 (+) 2.907E-07 (+) 4.099E-12 (+) 4.316E-12 (+) 1.047E-02 (+) 7.952E-09 (+) 4.099E-12 (+) 4.099E-12 (+) 2.865E-12 (+) 1.441E-11 (+)
Fun22 2.495E-02 (+) 5.200E-12 (+) 5.200E-12 (+) 7.983E-09 (+) 5.200E-12 (+) 6.043E-12 (+) 5.991E-04 (+) 4.727E-09 (+) 6.087E-12 (+) 5.200E-12 (+) 4.978E-12 (+) 6.087E-12 (+)
Fun23 3.321E-03 (+) 1.233E-11 (+) 1.233E-11 (+) 1.569E-05 (+) 1.233E-11 (+) 1.499E-11 (+) 2.576E-04 (+) 3.337E-04 (+) 1.233E-11 (+) 1.233E-11 (+) 9.010E-12 (+) 1.233E-11 (+)
w/t/l 16/1/6 23/0/0 21/2/0 21/0/2 20/0/3 17/3/3 22/0/1 20/1/2 18/4/1 16/4/3 17/4/2 16/4/3

Convergence behavior analysis

The convergence curve provides a clear visual representation of how the optimal fitness values evolve throughout the function evaluations of the proposed and comparative algorithms. The convergence speed toward the global optimum is a critical indicator of an algorithm’s efficiency and overall performance. Figure 6 illustrates the convergence behavior for nine benchmark functions from the CEC2005 suite. For functions Fun01, Fun02, Fun03, and Fun04, AMOMM-BKA demonstrates the fastest convergence rate, reaching optimal values significantly earlier than the other algorithms. Particularly, for Fun01 and Fun03, the proposed algorithm attains optimal fitness in the early stages of evaluation. In the case of the multimodal function Fun10, AMOMM-BKA exhibits stable and consistent convergence toward the global optimum. Although AVOA and SHADE show slightly faster initial convergence for Fun12, AMOMM-BKA gradually achieves comparable fitness values as the evaluations progress. For Fun09 and Fun11, AMOMM-BKA again attains optimal values early, underscoring its rapid convergence capability. Regarding the fixed-dimension functions Fun14 and Fun15, AMOMM-BKA maintains excellent performance, outperforming others in both convergence speed and stability. While AMOMM-BKA, AVOA, and SHADE achieve similarly low fitness values, AMOMM-BKA converges more swiftly, reflecting its superior search efficiency.

Fig. 6.

Fig. 6

Convergence of the AMOMM-BKA algorithm relative to other algorithms on the CEC2005 test functions.

Scalability analysis

A comprehensive performance assessment is indispensable for any newly introduced optimization algorithm, with scalability serving as a pivotal criterion for preserving efficiency and solution quality as the dimensionality of the problem escalates. In this study, the scalability of both the baseline BKA and the enhanced AMOMM-BKA is rigorously examined using 13 benchmark functions evaluated at four dimensional settings (10, 30, 50, and 100). The experimental outcomes, presented in Table 8, report the AVG and STD values derived from 30 independent trials, each conducted with 500 iterations per function. The results indicate that AMOMM-BKA consistently attains the global optimum for several functions, including Fun01, Fun03, Fun09, and Fun11, across all tested dimensions. Although certain functions fall short of the global optimum, AMOMM-BKA exhibits markedly superior stability and search efficacy compared to the classical BKA. Additionally, the convergence profiles in Figs. 7, 8, 9 and 10 clearly demonstrate that AMOMM-BKA achieves faster convergence rates and heightened solution robustness across all dimensional settings. Collectively, these findings substantiate the improved scalability of AMOMM-BKA for tackling high-dimensional optimization challenges.

Table 8.

Performance analysis of BKA and AMOMM-BKA across various dimensional settings (D=10, 30, 50, 100).

Functions Dim 10 30 50 100
BKA AMOMM-BKA BKA AMOMM-BKA BKA AMOMM-BKA BKA AMOMM-BKA
Fun01 AVG 9.030E-81 0.000E+00 1.149E-87 0.000E+00 3.638E-82 0.000E+00 4.042E-74 0.000E+00
STD 4.945E-80 0.000E+00 4.396E-87 0.000E+00 1.993E-81 0.000E+00 2.214E-73 0.000E+00
Fun02 AVG 3.154E-38 2.310E-266 2.183E-38 1.750E-278 2.635E-47 6.487E-270 8.914E-46 6.428E-271
STD 1.728E-37 0.000E+00 1.195E-37 0.000E+00 9.486E-47 0.000E+00 4.228E-45 0.000E+00
Fun03 AVG 3.677E-78 0.000E+00 3.033E-88 0.000E+00 2.165E-92 0.000E+00 1.838E-78 0.000E+00
STD 2.014E-77 0.000E+00 1.660E-87 0.000E+00 8.475E-92 0.000E+00 1.007E-77 0.000E+00
Fun04 AVG 9.547E-45 2.791E-270 1.859E-37 1.637E-275 9.761E-43 4.624E-270 7.555E-40 2.478E-269
STD 3.725E-44 0.000E+00 1.018E-36 0.000E+00 5.327E-42 0.000E+00 3.205E-39 0.000E+00
Fun05 AVG 6.451E+00 4.434E+00 2.799E+01 2.462E+01 4.840E+01 4.483E+01 9.855E+01 9.513E+01
STD 1.406E+00 8.218E-02 7.774E-01 1.200E-01 5.313E-01 2.530E-01 2.204E-01 6.058E-01
Fun06 AVG 3.056E-02 1.915E-19 2.235E+00 6.482E-06 5.524E+00 1.396E-04 1.554E+01 7.570E-01
STD 1.221E-01 1.049E-18 1.609E+00 1.178E-05 2.021E+00 9.533E-05 2.861E+00 3.846E-01
Fun07 AVG 2.421E-04 3.255E-05 2.317E-04 2.614E-05 2.302E-04 2.172E-05 2.639E-04 3.659E-05
STD 2.564E-04 2.419E-05 1.566E-04 2.064E-05 1.621E-04 1.757E-05 1.879E-04 2.688E-05
Fun08 AVG -3.234E+03 -3.702E+03 -9.100E+03 -1.254E+04 -1.317E+04 -1.645E+04 -2.049E+04 -2.844E+04
STD 4.010E+02 2.298E+02 1.317E+03 4.880E+02 2.492E+03 8.169E+02 5.721E+03 1.557E+03
Fun09 AVG 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00
STD 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00
Fun10 AVG 4.441E-16 4.441E-16 4.441E-16 4.441E-16 4.441E-16 4.441E-16 4.441E-16 4.441E-16
STD 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00
Fun11 AVG 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00
STD 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00
Fun12 AVG 1.410E-02 8.306E-19 7.757E-02 2.363E-07 2.642E-01 3.481E-06 4.204E-01 4.785E-03
STD 5.471E-02 3.577E-18 1.285E-01 5.530E-07 2.437E-01 2.219E-06 1.312E-01 2.998E-03
Fun13 AVG 1.063E-01 1.313E-02 1.926E+00 5.084E-02 3.582E+00 1.293E+00 8.097E+00 6.779E+00
STD 1.513E-01 2.772E-02 5.428E-01 7.665E-02 7.064E-01 4.367E-01 5.083E-01 3.953E-01
Fig. 7.

Fig. 7

Performance of function Fun03 across different dimensions.

Fig. 8.

Fig. 8

Performance of function Fun07 across different dimensions.

Fig. 9.

Fig. 9

Performance of function Fun10 across different dimensions.

Fig. 10.

Fig. 10

Performance of function Fun12 across different dimensions.

CEC2019 result analysis and discussion

This section presents the enhancement of the classical BKA through the integration of the proposed AMOMM strategy, leading to the development of the AMOMM-BKA variant. The effectiveness of this improved algorithm was assessed using ten complex benchmark functions from the CEC2019 suite, with detailed results summarized in Table 9. The analysis reveals that AMOMM-BKA outperforms competing algorithms in seven functions, demonstrating superior optimization ability. Although CBKA achieved the best fitness value for Fun24, the difference between CBKA and AMOMM-BKA is marginal, indicating that AMOMM-BKA performs comparably well on this function. For Fun25 and Fun26, while some algorithms achieved the best average fitness values, AMOMM-BKA exhibited greater stability, reflected in its consistent performance across runs. In contrast, for Fun27 and Fun28, SHADE showed slightly better results than AMOMM-BKA. For Fun29–Fun31 and Fun33, AMOMM-BKA achieved the best average fitness values and maintained the smallest standard deviations, highlighting its stability and reliability in convergence. Overall, across the ten benchmark functions, AMOMM-BKA achieved the best results in four cases and demonstrated comparable performance in three others, confirming its robustness and strong competitive capability in handling diverse and challenging optimization problems. To provide a comprehensive comparative analysis, the Friedman test is employed to evaluate the performance of the competing algorithms. As shown in Table 10, according to the Friedman rank test, AMOMM-BKA attains the best overall rank, showcasing its superior performance across all benchmark functions, while SHADE and CBKA follow in the second and third positions, respectively. Table 11 presents the Wilcoxon statistical test results for the CEC2019 benchmark suite. The test was performed between the proposed AMOMM-BKA and each comparative algorithm, with each experiment executed over 30 independent runs.
The p-value was used to determine the statistical significance of the performance differences between algorithms. According to the results in Table 11, AMOMM-BKA consistently outperforms the competing algorithms in most cases, demonstrating statistically significant improvements. However, for function Fuc28, the significance level is lower due to the superior performance of certain competing algorithms on this particular function. Overall, the Wilcoxon test results confirm that AMOMM-BKA exhibits statistically significant superiority over the majority of compared algorithms across the CEC2019 benchmark functions, validating its strong optimization capability and robustness.
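The pairwise comparison protocol described above can be sketched with SciPy's rank-sum test. The arrays below are synthetic stand-ins for two algorithms' 30-run fitness samples (lower is better), not the paper's data; the win/tie/loss marking mirrors the (+)/(-) convention used in Table 11.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Synthetic stand-ins for 30 independent-run fitness values (lower is better).
amomm_bka = rng.normal(loc=18.34, scale=1e-3, size=30)
competitor = rng.normal(loc=18.40, scale=5e-2, size=30)

stat, p_value = ranksums(amomm_bka, competitor)
alpha = 0.05
if p_value < alpha:
    # Significant difference: mark win (+) or loss (-) by comparing means.
    mark = "+" if amomm_bka.mean() < competitor.mean() else "-"
else:
    mark = "="  # no statistically significant difference
print(f"p = {p_value:.3e} ({mark})")
```

Summing the marks over all functions yields the w/t/l row reported in the last line of the Wilcoxon tables.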

Table 9.

Experimental outcomes for various algorithms on CEC2019 test suite.

Functions Fun24 Fun25 Fun26 Fun27 Fun28
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 1.409E+12 1.268E+12 1.430E+04 4.103E+03 1.370E+01 1.410E-09 2.797E+01 1.177E+01 2.160E+00 1.324E-01
GWO 1.866E+08 2.862E+08 1.836E+01 6.128E-02 1.370E+01 2.272E-06 5.400E+01 2.103E+01 2.450E+00 2.407E-01
GJO 1.520E+07 5.064E+07 1.839E+01 1.088E-01 1.370E+01 3.680E-04 1.387E+03 1.366E+03 2.571E+00 2.814E-01
SO 8.694E+07 1.134E+08 1.834E+01 1.391E-10 1.370E+01 1.099E-13 2.297E+01 1.538E+01 2.101E+00 5.129E-02
SCSO 4.554E+04 3.776E+03 1.836E+01 7.519E-02 1.370E+01 3.066E-06 2.866E+02 5.287E+02 2.329E+00 1.709E-01
AVOA 4.640E+04 3.882E+03 1.834E+01 3.018E-07 1.370E+01 1.662E-09 1.321E+02 6.253E+01 2.348E+00 2.626E-01
CMAES 4.283E+09 1.320E+10 1.526E+02 7.430E+01 1.370E+01 4.873E-04 1.944E+03 2.379E+03 2.174E+00 5.318E-01
SHADE 2.106E+10 1.167E+10 1.834E+01 1.090E-09 1.370E+01 3.625E-06 1.902E+01 1.041E+01 2.042E+00 1.865E-02
BKA 4.487E+04 2.780E+04 1.835E+01 4.762E-02 1.370E+01 6.487E-05 8.049E+02 1.707E+03 2.542E+00 2.677E-01
CBKA 3.811E+04 9.317E+02 1.835E+01 5.884E-02 1.370E+01 5.121E-09 1.267E+02 1.299E+02 2.495E+00 2.433E-01
IBKA 3.932E+04 4.517E+03 1.834E+01 6.547E-03 1.370E+01 2.126E-05 6.658E+02 1.311E+03 2.768E+00 5.907E-01
QOBLBKA 8.160E+04 1.495E+05 1.849E+01 3.359E-01 1.370E+01 2.262E-07 7.550E+02 1.857E+03 2.660E+00 5.876E-01
AMOMM-BKA 4.214E+04 3.061E+03 1.834E+01 9.918E-15 1.370E+01 5.666E-15 9.177E+01 4.087E+01 2.325E+00 1.912E-01
Fun29 Fun30 Fun31 Fun32 Fun33
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 1.047E+01 9.048E-01 2.142E+02 1.254E+02 5.437E+00 6.063E-01 3.435E+00 4.598E-02 2.133E+01 1.035E-01
GWO 1.199E+01 6.720E-01 4.626E+02 3.170E+02 4.996E+00 1.058E+00 5.474E+00 8.158E-01 2.100E+01 2.635E+00
GJO 1.194E+01 6.066E-01 6.114E+02 3.372E+02 5.478E+00 8.279E-01 7.687E+01 1.439E+02 2.109E+01 1.611E+00
SO 1.196E+01 6.023E-01 7.166E+01 1.325E+02 5.183E+00 8.591E-01 3.422E+00 3.385E-02 2.150E+01 9.445E-02
SCSO 8.843E+00 1.571E+00 3.096E+02 1.499E+02 5.417E+00 6.550E-01 1.490E+01 4.916E+01 2.113E+01 1.006E-01
AVOA 7.159E+00 1.610E+00 3.930E+02 1.974E+02 5.571E+00 4.445E-01 4.469E+00 7.859E-01 2.104E+01 8.204E-02
CMAES 1.295E+01 5.975E-01 8.955E+02 8.600E+01 5.505E+00 1.893E+00 3.432E+00 2.912E-02 2.162E+01 8.728E-02
SHADE 8.413E+00 6.343E-01 5.960E+01 9.531E+01 4.892E+00 4.318E-01 3.557E+00 6.995E-02 2.110E+01 1.176E-01
BKA 1.046E+01 1.290E+00 2.326E+02 3.579E+02 5.082E+00 5.475E-01 7.645E+01 3.490E+02 2.103E+01 1.258E+00
CBKA 1.042E+01 1.062E+00 2.035E+02 1.927E+02 4.779E+00 5.541E-01 5.464E+00 8.075E-01 2.068E+01 3.329E+00
IBKA 8.659E+00 2.212E+00 1.899E+02 2.142E+02 4.682E+00 6.276E-01 5.262E+01 1.770E+02 2.109E+01 1.811E-01
QOBLBKA 1.058E+01 1.512E+00 1.791E+02 2.119E+02 5.052E+00 7.671E-01 4.053E+01 1.211E+02 2.133E+01 1.459E-01
AMOMM-BKA 6.588E+00 1.553E+00 5.764E+01 8.605E+01 4.427E+00 5.602E-01 4.049E+00 4.658E-01 1.981E+01 4.451E+00

Table 10.

Friedman test analysis on CEC2019 test suite.

Functions PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA AMOMM-BKA
Fun24 13 10 8 9 5 6 11 12 4 1 2 7 3
Fun25 7 3 4 1 3 1 6 1 2 2 1 5 1
Fun26 1 1 1 1 1 1 1 1 1 1 1 1 1
Fun27 3 4 12 2 8 7 13 1 11 6 9 10 5
Fun28 3 8 11 2 6 7 4 1 10 9 13 12 5
Fun29 8 12 10 11 3 2 13 4 7 6 5 9 1
Fun30 7 11 12 3 9 10 13 2 8 6 5 4 1
Fun31 10 5 11 8 9 13 12 4 7 2 3 6 1
Fun32 3 9 13 1 10 7 2 4 12 8 11 6 5
Fun33 9 3 6 10 8 5 11 7 4 2 6 9 1
Average rank 6.40 6.60 8.80 4.80 6.20 5.90 8.60 3.70 6.60 4.30 5.60 6.90 2.40
Overall rank 8 9 12 4 7 6 11 2 9 3 5 10 1
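The average ranks in Table 10 follow the standard Friedman procedure: rank the algorithms within each function (rank 1 for the smallest mean fitness), then average the ranks over all functions. The sketch below uses a small illustrative matrix, not the full experimental data.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Illustrative mean-fitness matrix: rows = benchmark functions, cols = algorithms.
# Lower fitness is better, so rank 1 goes to the smallest value in each row.
fitness = np.array([
    [4.487e4, 3.811e4, 4.214e4],
    [1.046e1, 1.042e1, 6.588e0],
    [2.326e2, 2.035e2, 5.764e1],
    [5.082e0, 4.779e0, 4.427e0],
])
ranks = np.vstack([rankdata(row) for row in fitness])  # ties get average ranks
avg_rank = ranks.mean(axis=0)
print("average ranks:", avg_rank)  # smaller = better overall rank

# Global significance of the rank differences across algorithms.
stat, p = friedmanchisquare(*fitness.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```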

Table 11.

Results of the Wilcoxon rank-sum test at the 5% significance level on CEC2019 benchmark functions.

Functions AMOMM-BKA Vs
PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA
Fun24 3.020E-11 (+) 3.020E-11 (+) 2.254E-04 (+) 3.020E-11 (+) 3.020E-11 (+) 6.669E-03 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 1.206E-10 (+) 3.020E-11 (+)
Fun25 5.253E-09 (+) 2.394E-11 (+) 2.394E-11 (+) 3.585E-11 (+) 2.394E-11 (+) 2.394E-11 (+) 2.394E-11 (+) 2.394E-11 (+) 2.394E-11 (+) 2.394E-11 (+) 2.387E-11 (+) 2.394E-11 (+)
Fun26 1.418E-09 (+) 2.184E-11 (+) 2.184E-11 (+) 4.749E-01 (-) 2.184E-11 (+) 5.038E-02 (-) 2.184E-11 (+) 2.184E-11 (+) 3.273E-11 (+) 2.184E-11 (+) 2.178E-11 (+) 2.813E-11 (+)
Fun27 6.062E-11 (+) 7.959E-03 (+) 3.157E-05 (+) 8.993E-11 (+) 3.020E-11 (+) 1.031E-02 (+) 2.510E-02 (+) 1.287E-09 (+) 4.515E-02 (+) 3.917E-02 (+) 2.982E-11 (+) 1.154E-01 (-)
Fun28 1.010E-08 (+) 8.073E-01 (-) 1.784E-04 (+) 1.868E-05 (+) 3.020E-11 (+) 2.458E-01 (-) 2.956E-04 (+) 3.338E-11 (+) 1.669E-01 (-) 1.958E-01 (-) 6.517E-08 (+) 1.442E-03 (+)
Fun29 1.748E-05 (+) 3.690E-11 (+) 3.020E-11 (+) 3.690E-11 (+) 4.504E-11 (+) 8.883E-01 (-) 3.018E-11 (+) 7.295E-04 (+) 8.101E-10 (+) 1.464E-10 (+) 6.696E-11 (+) 1.370E-03 (+)
Fun30 2.052E-03 (+) 3.825E-09 (+) 1.094E-10 (+) 2.170E-01 (-) 3.020E-11 (+) 3.197E-09 (+) 2.398E-11 (+) 3.042E-01 (-) 1.537E-01 (-) 5.322E-03 (+) 2.371E-10 (+) 4.084E-05 (+)
Fun31 9.031E-04 (+) 3.848E-03 (+) 4.744E-06 (+) 1.260E-01 (-) 3.338E-11 (+) 6.528E-08 (+) 7.659E-05 (+) 1.597E-03 (+) 1.055E-01 (-) 9.468E-03 (+) 2.959E-05 (+) 1.748E-05 (+)
Fun32 4.975E-11 (+) 3.197E-09 (+) 7.389E-11 (+) 2.371E-10 (+) 3.020E-11 (+) 2.839E-04 (+) 4.573E-09 (+) 1.441E-02 (+) 4.311E-08 (+) 9.833E-08 (+) 4.065E-11 (+) 3.081E-08 (+)
Fun33 2.195E-08 (+) 5.967E-09 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 6.283E-06 (+) 3.020E-11 (+) 4.504E-11 (+) 7.695E-08 (+) 1.464E-10 (+) 6.044E-07 (+) 1.518E-03 (+)
w/t/l 10/0/0 9/0/1 10/0/0 7/0/3 10/0/0 7/0/3 10/0/0 9/0/1 7/0/3 9/0/1 10/0/0 9/0/1

Convergence behavior analysis

To further demonstrate the efficacy of the proposed AMOMM-BKA algorithm, the convergence behaviors of AMOMM-BKA and 12 competing metaheuristics on the fixed-dimensional CEC2019 benchmark functions are illustrated in Fig. 11. The curves reveal that AMOMM-BKA exhibits a consistently faster and more stable convergence profile than its counterparts, indicating superior robustness and computational efficiency in approaching optimal solutions. Notably, AMOMM-BKA maintains a smooth and accelerated convergence trajectory across most test functions, suggesting enhanced search capability as iterations progress. For Fun24, CBKA achieves a better final value, but AMOMM-BKA exhibits superior convergence behavior. For Fun29, AMOMM-BKA initially converges more slowly than PSO; however, as PSO becomes trapped in local optima, AMOMM-BKA maintains steady and promising convergence. On Fun33, AVOA shows faster initial convergence but eventually stagnates in local optima, whereas AMOMM-BKA demonstrates superior exploration and ultimately identifies better solutions. These findings show that AMOMM-BKA attains enhanced convergence speed and superior optimization accuracy compared with the other algorithms, substantiating the efficacy and competitiveness of the proposed approach.
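Convergence curves such as those in Fig. 11 are obtained by logging the best-so-far fitness after every iteration. The sketch below uses plain random search as a stand-in optimizer, since only the logging pattern is being illustrated, not the AMOMM-BKA update rules.

```python
import math
import random

def sphere(x):
    """Simple separable test objective (global minimum 0 at the origin)."""
    return sum(v * v for v in x)

def best_so_far_curve(obj, dim=10, iters=200, pop=30, seed=1):
    """Record the best-so-far fitness after each iteration.

    Random search stands in for an actual metaheuristic here; a real
    optimizer would replace the inner sampling loop with its update step.
    """
    rng = random.Random(seed)
    best = math.inf
    curve = []
    for _ in range(iters):
        for _ in range(pop):
            x = [rng.uniform(-100, 100) for _ in range(dim)]
            best = min(best, obj(x))
        curve.append(best)  # monotone non-increasing by construction
    return curve

curve = best_so_far_curve(sphere)
print(curve[0], "->", curve[-1])
```

Plotting `curve` against the iteration index for each algorithm reproduces the comparative convergence figures.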

Fig. 11.


Convergence of the AMOMM-BKA algorithm relative to other algorithms on the CEC2019 test functions.

CEC2022 result analysis and discussion

In this section, the recent CEC2022 benchmark suite is employed to evaluate the capability of AMOMM-BKA to avoid entrapment in local optima and to analyze its exploration-exploitation balance. The same parameter settings and experimental configurations used for the CEC2019 benchmarks are retained here, with the problem dimensionality set to D = 10. Table 12 presents the AVG and STD values used to assess the performance and stability of AMOMM-BKA in comparison with other algorithms across twelve complex test functions. To offer a comprehensive performance comparison, the Friedman test is employed across the evaluated algorithms. As presented in Table 13, AMOMM-BKA secures the first rank, demonstrating performance levels unmatched by any competitor. Its superiority on eight functions in terms of average fitness underscores its strong exploratory and exploitative capabilities and the effective balance between them. For Fun36 and Fun40, SHADE achieved slightly better results, showing marginal superiority over AMOMM-BKA. Similarly, for Fun43, SHADE attained the minimum fitness value, while IBKA exhibited a smaller standard deviation, indicating higher stability. Overall, AMOMM-BKA outperforms all competing algorithms across most CEC2022 functions, whereas SHADE ranks second, showing superiority on only three functions. These results confirm the robustness, adaptability, and balanced search behavior of AMOMM-BKA in addressing complex optimization problems. Table 14 presents the Wilcoxon statistical test results for the CEC2022 benchmark, comparing AMOMM-BKA with twelve competing algorithms at a significance level of 0.05. The results indicate that AMOMM-BKA shows statistically significant differences from most competitors, confirming its superior optimization performance. The summary metric (w/t/l) in the last row indicates the number of functions on which AMOMM-BKA outperforms, ties, or underperforms each competitor.
Overall, the statistical analysis validates the robustness and high efficiency of AMOMM-BKA, establishing it as a promising algorithm for tackling complex optimization problems in the CEC2022 suite.

Table 12.

Experimental outcomes for various algorithms on CEC2022 test suite.

Functions Fun34 Fun35 Fun36 Fun37 Fun38
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 3.001E+02 9.910E-02 4.094E+02 2.082E+01 6.132E+02 1.253E+01 8.214E+02 9.409E+00 9.696E+02 1.273E+02
GWO 3.202E+03 2.430E+03 4.259E+02 2.496E+01 6.020E+02 2.961E+00 8.260E+02 8.231E+00 9.113E+02 2.024E+01
GJO 3.288E+03 2.195E+03 4.540E+02 2.264E+01 6.120E+02 9.014E+00 8.304E+02 1.137E+01 1.023E+03 1.330E+02
SO 9.502E+02 7.487E+02 4.167E+02 1.255E+01 6.017E+02 2.412E+00 8.191E+02 7.152E+00 9.299E+02 5.194E+01
SCSO 2.906E+03 2.207E+03 4.344E+02 3.533E+01 6.178E+02 1.110E+01 8.267E+02 9.268E+00 1.154E+03 1.854E+02
AVOA 6.353E+02 6.187E+02 4.281E+02 3.306E+01 6.191E+02 1.346E+01 8.325E+02 9.007E+00 1.323E+03 2.063E+02
CMAES 1.885E+04 9.244E+03 6.168E+02 7.101E+01 6.097E+02 1.656E+01 8.238E+02 1.102E+01 9.000E+02 0.000E+00
SHADE 1.013E+04 3.239E+03 4.102E+02 9.835E+00 6.000E+02 9.109E-08 8.192E+02 4.181E+00 9.029E+02 2.403E+00
BKA 1.644E+03 2.779E+03 4.304E+02 4.742E+01 6.281E+02 9.481E+00 8.195E+02 7.285E+00 1.146E+03 1.254E+02
CBKA 3.188E+02 2.219E+01 4.093E+02 1.857E+01 6.268E+02 9.722E+00 8.189E+02 7.521E+00 1.109E+03 1.097E+02
IBKA 5.007E+03 1.181E+03 4.143E+02 5.360E+00 6.111E+02 4.143E+00 8.431E+02 4.640E+00 1.017E+03 4.924E+01
QOBLBKA 8.571E+02 1.151E+03 4.212E+02 3.208E+01 6.287E+02 1.157E+01 8.242E+02 7.224E+00 1.164E+03 1.108E+02
AMOMM-BKA 3.001E+02 7.312E-01 4.042E+02 1.299E+01 6.035E+02 3.299E+00 8.174E+02 5.864E+00 9.483E+02 4.388E+01
Fun39 Fun40 Fun41 Fun42 Fun43
AVG STD AVG STD AVG STD AVG STD AVG STD
PSO 4.171E+03 2.310E+03 2.040E+03 2.328E+01 2.237E+03 4.189E+01 2.529E+03 4.839E+01 2.657E+03 1.993E+02
GWO 6.308E+03 2.395E+03 2.030E+03 1.050E+01 2.233E+03 3.117E+01 2.587E+03 3.782E+01 2.561E+03 5.795E+01
GJO 1.226E+04 5.947E+03 2.048E+03 2.340E+01 2.229E+03 3.620E+00 2.611E+03 3.189E+01 2.585E+03 6.122E+01
SO 3.659E+03 2.016E+03 2.031E+03 1.665E+01 2.223E+03 1.632E+00 2.530E+03 9.181E-01 2.579E+03 1.388E+02
SCSO 5.613E+03 1.890E+03 2.057E+03 2.609E+01 2.228E+03 5.151E+00 2.601E+03 4.662E+01 2.574E+03 6.596E+01
AVOA 3.876E+03 2.009E+03 2.050E+03 2.144E+01 2.229E+03 9.270E+00 2.541E+03 2.782E+01 2.561E+03 6.594E+01
CMAES 3.319E+07 7.038E+07 2.089E+03 5.543E+01 2.252E+03 1.313E+01 2.556E+03 4.526E+01 2.625E+03 1.713E+02
SHADE 5.487E+03 2.031E+03 2.011E+03 6.920E+00 2.219E+03 3.692E+00 2.531E+03 3.818E+00 2.493E+03 3.730E+01
BKA 2.833E+03 1.547E+03 2.046E+03 1.652E+01 2.241E+03 3.480E+01 2.544E+03 4.476E+01 2.549E+03 7.828E+01
CBKA 2.754E+03 1.572E+03 2.048E+03 1.898E+01 2.231E+03 2.324E+01 2.535E+03 2.680E+01 2.594E+03 1.274E+02
IBKA 8.199E+03 3.583E+03 2.025E+03 2.484E+00 2.228E+03 2.602E+00 2.530E+03 1.234E+00 2.502E+03 4.737E-01
QOBLBKA 2.910E+03 1.400E+03 2.049E+03 2.144E+01 2.228E+03 2.339E+01 2.541E+03 3.941E+01 2.566E+03 6.660E+01
AMOMM-BKA 2.681E+03 8.724E+02 2.024E+03 9.047E+00 2.218E+03 3.945E+00 2.529E+03 1.726E-11 2.518E+03 4.437E+01
Fun44 Fun45
AVG STD AVG STD
PSO 2.729E+03 1.317E+02 2.875E+03 4.425E+01
GWO 2.829E+03 1.821E+02 2.870E+03 9.242E+00
GJO 2.925E+03 2.377E+02 2.878E+03 1.899E+01
SO 2.706E+03 1.210E+02 2.871E+03 5.253E+00
SCSO 2.790E+03 1.593E+02 2.871E+03 1.058E+01
AVOA 2.775E+03 1.653E+02 2.869E+03 6.654E+00
CMAES 2.939E+03 2.117E+02 2.876E+03 4.835E+00
SHADE 2.734E+03 2.790E+01 2.864E+03 1.099E+00
BKA 2.811E+03 3.054E+02 2.867E+03 7.189E+00
CBKA 2.715E+03 1.739E+02 2.865E+03 1.423E+00
IBKA 2.763E+03 6.399E+00 2.864E+03 1.031E+00
QOBLBKA 2.773E+03 2.398E+02 2.872E+03 2.337E+01
AMOMM-BKA 2.674E+03 4.927E+01 2.864E+03 1.111E+00

Table 13.

Friedman test analysis on CEC2022 test suite.

Functions PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA AMOMM-BKA
Fun34 1 8 9 5 7 3 12 11 6 2 10 4 1
Fun35 3 8 12 6 11 9 13 4 10 2 5 7 1
Fun36 8 3 7 2 9 10 5 1 12 11 6 13 4
Fun37 6 9 11 3 10 12 7 4 5 2 13 8 1
Fun38 6 3 8 4 11 13 1 2 10 9 7 12 5
Fun39 8 11 13 6 10 7 5 9 3 2 12 4 1
Fun40 6 4 8 5 11 10 12 1 7 8 3 9 2
Fun41 8 7 5 3 4 5 10 2 9 6 4 4 1
Fun42 1 8 10 2 9 5 7 3 6 4 3 5 1
Fun43 11 5 8 7 6 5 10 1 4 9 2 6 3
Fun44 4 11 12 2 9 8 13 5 10 3 6 7 1
Fun45 8 5 10 6 6 4 9 1 3 2 1 7 1
Average rank 5.83 6.83 9.42 4.25 8.58 7.58 8.67 3.67 7.08 5.00 6.00 7.17 1.83
Overall rank 5 7 13 3 11 10 12 2 8 4 6 9 1

Table 14.

Results of the Wilcoxon rank-sum test at the 5% significance level on CEC2022 benchmark functions.

Functions AMOMM-BKA Vs
PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA
Fun34 1.238E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 3.020E-11 (+) 1.464E-10 (+) 3.020E-11 (+) 3.020E-11 (+) 5.072E-10 (+) 6.722E-10 (+) 3.020E-11 (+) 7.773E-09 (+)
Fun35 2.905E-01 (+) 9.521E-04 (+) 1.106E-04 (+) 4.733E-01 (-) 3.020E-11 (+) 5.827E-03 (+) 3.020E-11 (+) 2.921E-02 (+) 1.958E-01 (-) 9.117E-01 (-) 2.596E-05 (+) 2.340E-01 (-)
Fun36 2.254E-04 (+) 5.188E-02 (-) 1.635E-05 (+) 1.019E-05 (+) 3.020E-11 (+) 2.669E-09 (+) 3.790E-01 (-) 3.020E-11 (+) 9.919E-11 (+) 7.389E-11 (+) 1.776E-10 (+) 7.389E-11 (+)
Fun37 9.624E-02 (-) 2.772E-01 (-) 3.831E-05 (+) 3.632E-01 (-) 4.504E-11 (+) 3.646E-08 (+) 4.214E-04 (+) 6.309E-01 (-) 8.650E-01 (-) 1.695E-02 (+) 2.426E-09 (+) 1.597E-03 (+)
Fun38 8.787E-10 (+) 1.028E-06 (+) 2.170E-01 (-) 1.031E-02 (+) 3.020E-11 (+) 1.174E-09 (+) 1.212E-12 (+) 9.919E-11 (+) 2.371E-10 (+) 3.835E-06 (+) 9.068E-03 (+) 4.573E-09 (+)
Fun39 1.852E-09 (+) 7.088E-08 (+) 1.777E-10 (+) 9.883E-03 (+) 3.020E-11 (+) 9.926E-02 (-) 3.020E-11 (+) 2.028E-07 (+) 6.627E-01 (-) 4.204E-01 (-) 5.997E-01 (-) 7.845E-01 (-)
Fun40 9.626E-02 (-) 5.369E-02 (-) 1.729E-06 (+) 4.035E-01 (-) 3.020E-11 (+) 3.010E-07 (+) 3.825E-09 (+) 3.010E-07 (+) 1.337E-05 (+) 4.218E-04 (+) 1.408E-09 (+) 1.337E-05 (+)
Fun41 2.126E-04 (+) 2.380E-03 (+) 4.616E-10 (+) 1.765E-02 (+) 3.020E-11 (+) 4.943E-05 (+) 2.439E-09 (+) 2.154E-06 (+) 3.094E-06 (+) 2.783E-07 (+) 6.555E-11 (+) 1.359E-07 (+)
Fun42 6.309E-01 (-) 1.102E-08 (+) 6.121E-10 (+) 3.368E-04 (+) 3.020E-11 (+) 1.464E-10 (+) 3.020E-11 (+) 8.187E-01 (-) 8.891E-10 (+) 9.063E-08 (+) 4.743E-06 (+) 1.067E-07 (+)
Fun43 1.063E-11 (+) 5.518E-10 (+) 5.022E-10 (+) 9.057E-01 (-) 1.937E-10 (+) 4.570E-10 (+) 4.158E-10 (+) 5.518E-10 (+) 5.518E-10 (+) 5.022E-10 (+) 4.570E-10 (+) 5.022E-10 (+)
Fun44 2.199E-04 (+) 4.060E-02 (+) 3.564E-04 (+) 7.283E-01 (-) 1.596E-07 (+) 1.715E-01 (-) 6.765E-05 (+) 9.117E-01 (-) 5.369E-02 (-) 2.772E-01 (-) 7.280E-03 (+) 5.746E-02 (-)
Fun45 1.224E-03 (+) 2.643E-01 (-) 2.236E-02 (+) 4.218E-04 (+) 2.872E-10 (+) 2.772E-01 (-) 2.602E-08 (+) 7.043E-07 (+) 5.012E-02 (-) 3.644E-02 (+) 2.170E-01 (-) 2.398E-01 (-)
w/t/l 8/0/4 9/0/3 11/0/1 7/0/5 12/0/0 10/0/2 11/0/1 9/0/3 7/0/5 9/0/3 10/0/2 8/0/4

Convergence behavior analysis

To investigate the convergence properties of the algorithms in solving the test functions, Fig. 12 illustrates the convergence performance of the proposed AMOMM-BKA algorithm in comparison with its counterparts, evaluated across twelve benchmark functions selected from CEC2022. As observed, AMOMM-BKA consistently exhibits superior convergence behavior, achieving the lowest global fitness values in most cases. Notably, it demonstrates the fastest convergence across most test functions, except for Fun38, Fun40, and Fun43. In function Fun37, AMOMM-BKA initially converges slower than SO and CBKA. However, as the iterations progress, it accelerates and achieves a more promising convergence rate. For function Fun38, CMAES and SHADE attain faster convergence and superior fitness values, whereas AMOMM-BKA yields a slightly higher objective value. In the case of Fun40 and Fun43, SO shows accelerated convergence during the later stages of optimization. However, for function Fun44, AMOMM-BKA not only maintains an efficient convergence rate but also secures the best fitness value among all algorithms. Overall, AMOMM-BKA exhibits robust convergence efficiency and competitive global search capability, affirming its effectiveness across diverse optimization landscapes.

Fig. 12.


Convergence of the AMOMM-BKA algorithm relative to other algorithms on the CEC2022 test functions.

Ablation experiments

The AMOMM-BKA represents an enhanced version of the BKA, developed through a novel integration of Blended Opposition-Based Learning (BOBL), Historical Reflective Opposition (HRO), Random Opposition (RO), and a Midpoint-Based Mutation (MM) strategy. This combination substantially improves the algorithm's convergence rate, robustness, and overall search efficiency. The study demonstrates that the synergy between the original BKA framework and these enhancement mechanisms improves the adaptability of the proposed AMOMM-BKA when addressing diverse optimization problems. To comprehensively assess the contribution of each component, an ablation study was conducted using twelve benchmark functions from the CEC2022 test suite. Each algorithm was independently executed 30 times with 30 search agents and a budget of 15,000 FEs per run. As summarized in Table 15, AMOMM-BKA achieved superior mean fitness values on most benchmark functions. Furthermore, its comparatively lower standard deviations indicate that AMOMM-BKA is more stable than its single-strategy variants. The convergence behavior presented in Figure 13 further highlights the efficiency of AMOMM-BKA compared to its simplified versions. The convergence curves reveal that AMOMM-BKA consistently achieves faster and more stable convergence, confirming that the incorporated strategies effectively balance exploration and exploitation across various problem landscapes. Overall, the ablation analysis validates the effectiveness of the proposed modifications and underscores the distinct contribution of each strategy, offering valuable insights for the design of future metaheuristic algorithms.
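One convenient way to organize an ablation study like this is to build each variant from a single configuration object whose flags enable or disable the four strategies. The flag names below simply mirror the paper's component acronyms; the configuration mechanism itself is an illustrative sketch, not the authors' code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VariantConfig:
    """Toggle the four enhancement strategies to form each ablation variant."""
    bobl: bool = False  # Blended Opposition-Based Learning
    hro: bool = False   # Historical Reflective Opposition
    ro: bool = False    # Random Opposition
    mm: bool = False    # Midpoint-based Mutation

# The six variants compared in the ablation table.
VARIANTS = {
    "BKA": VariantConfig(),
    "BOBLBKA": VariantConfig(bobl=True),
    "HROBKA": VariantConfig(hro=True),
    "ROBKA": VariantConfig(ro=True),
    "MMBKA": VariantConfig(mm=True),
    "AMOMM-BKA": VariantConfig(bobl=True, hro=True, ro=True, mm=True),
}

for name, cfg in VARIANTS.items():
    enabled = [f for f in ("bobl", "hro", "ro", "mm") if getattr(cfg, f)]
    print(name, "->", enabled or ["baseline"])
```

Running the same benchmark harness over every entry of `VARIANTS` then yields one row per variant, as in Table 15.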

Table 15.

Ablation experiments outcomes for AMOMM-BKA on CEC2022 test suite.

Functions Metric BKA BOBLBKA HROBKA ROBKA MMBKA AMOMM-BKA
Fun34 AVG 1.644E+03 1.528E+03 9.394E+03 5.907E+03 3.004E+02 3.000E+02
STD 2.779E+03 6.376E+02 3.278E+03 2.807E+03 6.417E-01 4.424E-02
Fun35 AVG 4.304E+02 4.082E+02 4.070E+02 4.071E+02 4.076E+02 4.042E+02
STD 4.742E+01 1.231E+01 1.371E+00 2.250E+00 1.515E+01 1.299E+01
Fun36 AVG 6.281E+02 6.062E+02 6.137E+02 6.112E+02 6.059E+02 6.035E+02
STD 9.481E+00 4.380E+00 5.268E+00 4.491E+00 5.223E+00 3.299E+00
Fun37 AVG 8.195E+02 8.184E+02 8.338E+02 8.230E+02 8.196E+02 8.174E+02
STD 7.285E+00 7.310E+00 7.698E+00 4.748E+00 4.190E+00 5.864E+00
Fun38 AVG 1.146E+03 9.611E+02 1.386E+03 1.052E+03 1.001E+03 9.483E+02
STD 1.254E+02 4.125E+01 2.639E+02 7.946E+01 6.845E+01 4.388E+01
Fun39 AVG 2.833E+03 2.994E+03 5.163E+03 4.781E+03 3.074E+03 2.681E+03
STD 1.547E+03 8.114E+02 1.451E+03 1.552E+03 1.227E+03 8.724E+02
Fun40 AVG 2.046E+03 2.025E+03 2.024E+03 2.025E+03 2.018E+03 2.024E+03
STD 1.652E+01 3.610E+00 1.501E+00 3.250E+00 1.165E+01 9.047E+00
Fun41 AVG 2.241E+03 2.225E+03 2.226E+03 2.226E+03 2.219E+03 2.218E+03
STD 3.480E+01 4.100E+00 1.504E+00 2.991E+00 6.152E+00 3.945E+00
Fun42 AVG 2.544E+03 2.529E+03 2.529E+03 2.529E+03 2.529E+03 2.529E+03
STD 4.476E+01 6.611E-04 9.090E-04 9.319E-04 9.916E-09 1.726E-11
Fun43 AVG 2.549E+03 2.515E+03 2.517E+03 2.515E+03 2.517E+03 2.518E+03
STD 7.828E+01 1.283E-01 1.273E-01 1.211E-01 1.734E-01 4.437E+01
Fun44 AVG 2.811E+03 2.607E+03 2.692E+03 2.636E+03 2.790E+03 2.674E+03
STD 3.054E+02 8.443E+00 5.233E+01 3.525E+01 1.874E+02 4.927E+01
Fun45 AVG 2.867E+03 2.864E+03 2.871E+03 2.864E+03 2.869E+03 2.864E+03
STD 7.189E+00 1.136E+00 7.552E+00 9.954E-01 6.641E+00 1.111E+00

Fig. 13.


Convergence of the AMOMM-BKA compared to its ablation variants.

Balance and diversity analysis

Population diversity is vital for understanding the search behavior of evolutionary algorithms. These algorithms employ multiple agents to explore the search space and locate optimal solutions [65]. As agents converge toward promising regions, their spacing decreases, enhancing exploitation, while greater spacing promotes exploration. To quantify these spatial dynamics, a diversity metric is utilized, as defined in Eqs. 14 and 15.

\[
\mathrm{Div}_j = \frac{1}{N} \sum_{i=1}^{N} \left| \operatorname{median}\left(x^{j}\right) - x_{i}^{j} \right| \qquad (14)
\]
\[
\mathrm{Div} = \frac{1}{\dim} \sum_{j=1}^{\dim} \mathrm{Div}_j \qquad (15)
\]

Where Div denotes the overall population diversity, and Div_j signifies the diversity of the jth dimension across all individuals. Here, dim represents the dimensionality of each search agent, and N indicates the total number of search agents. The term median(x^j) refers to the median value of the jth dimension, while x_i^j denotes the jth component of the ith search agent. Moreover, the metrics defined in Eqs. 16 and 17 quantify the proportions of exploration and exploitation at each iteration.

\[
XPL\% = \frac{\mathrm{Div}}{\mathrm{Div}_{\max}} \times 100 \qquad (16)
\]
\[
XPT\% = \frac{\left| \mathrm{Div} - \mathrm{Div}_{\max} \right|}{\mathrm{Div}_{\max}} \times 100 \qquad (17)
\]

The maximum diversity is denoted Div_max, while XPL% and XPT% denote the exploration and exploitation rates at each iteration, respectively. The exploitation rate is obtained from the absolute difference between the maximum and current diversity. Achieving a proper balance between exploration and exploitation is crucial for enhancing algorithmic performance.
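The diversity and balance measures of Eqs. 14-17 translate directly into a few lines of NumPy; the decaying random population below is a synthetic stand-in for an optimizer's iteration history.

```python
import numpy as np

def diversity(pop):
    """Population diversity per Eqs. 14-15: mean absolute deviation of each
    dimension from its median, averaged over all dimensions."""
    med = np.median(pop, axis=0)                # median of each dimension
    div_j = np.mean(np.abs(med - pop), axis=0)  # Eq. 14: one value per dimension
    return div_j.mean()                         # Eq. 15: overall diversity

def xpl_xpt(div, div_max):
    """Exploration/exploitation percentages per Eqs. 16-17."""
    xpl = 100.0 * div / div_max
    xpt = 100.0 * abs(div - div_max) / div_max
    return xpl, xpt

# Synthetic iteration history: 30 agents in 10 dimensions, contracting over time.
rng = np.random.default_rng(0)
history = [rng.uniform(-100, 100, size=(30, 10)) * (0.9 ** t) for t in range(50)]
divs = [diversity(p) for p in history]
div_max = max(divs)
xpl, xpt = xpl_xpt(divs[-1], div_max)
print(f"XPL% = {xpl:.1f}, XPT% = {xpt:.1f}")  # the two rates sum to 100
```

Plotting XPL% and XPT% over the iterations produces balance curves like those in Fig. 15.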

Figure 14 illustrates the diversity behavior of the standard BKA and the proposed AMOMM-BKA across several benchmark functions. For analysis, three functions (Fun14, Fun15, and Fun20) were selected from CEC2005, and three (Fun25, Fun29, and Fun30) from CEC2019. In each case, BKA begins with high diversity that rapidly declines, indicating strong initial exploration followed by premature convergence. In contrast, AMOMM-BKA sustains moderate and more stable diversity, reflecting a controlled balance between exploration and exploitation. Although its diversity decreases initially, it stabilizes over time, preventing rapid convergence and maintaining effective search dynamics.

Fig. 14.


The diversity analysis performed by AMOMM-BKA and BKA.

Figure 15 further depicts the adaptive balance between exploration and exploitation in AMOMM-BKA. The fluctuations of both curves across iterations demonstrate dynamic adjustment between global search and local refinement. Their close alignment signifies a well-maintained trade-off that enhances search efficiency and mitigates premature convergence across different test functions.

Fig. 15.


The balance analysis performed by AMOMM-BKA.

Sensitivity analysis

This subsection examines how different parameters influence the performance of the proposed AMOMM-BKA. As a population-based algorithm, AMOMM-BKA gradually converges to the best solution for a given optimization problem. Its performance depends mainly on the population size, the control parameter defined in Eq. 9, and the threshold values used in the selection of the confidence score. Therefore, a sensitivity analysis is conducted for these three parameters: the population size (N), the control parameter, and the threshold values. Prior experience and experimental insight play an important role in selecting suitable values.

Sensitivity of population size

To examine how the performance of the proposed AMOMM-BKA is influenced by population size, eight benchmark functions were selected from the CEC2005, CEC2019, and CEC2022 test suites, ensuring representation across various function categories to minimize bias. Each experiment was executed over 15,000 FEs and repeated 30 times for population sizes of 10, 30, 50, 80, and 100 agents. The corresponding results, presented in Table 16, are analyzed using the AVG and STD of fitness values. The findings indicate that increasing the population size yields improved mean fitness values and reduced variance. This improvement is attributed to the enhanced exploration capability provided by a larger number of agents, which increases the likelihood of identifying optimal or near-optimal solutions.

Table 16.

Sensitivity analysis of AMOMM-BKA for varying population sizes.

Functions Metric Popsize 10 Popsize 30 Popsize 50 Popsize 80 Popsize 100
Fun05 AVG 2.572E+01 2.464E+01 2.422E+01 2.380E+01 2.359E+01
STD 1.831E-01 1.077E-01 1.121E-01 1.063E-01 9.559E-02
Fun06 AVG 5.214E-02 1.223E-05 5.428E-08 8.702E-07 1.164E-10
STD 1.019E-01 3.089E-05 2.807E-07 4.766E-06 5.491E-10
Fun12 AVG 1.514E-03 4.157E-07 2.569E-08 6.614E-11 1.064E-13
STD 3.367E-03 7.149E-07 7.747E-08 3.037E-10 2.294E-13
Fun13 AVG 1.535E+00 5.854E-02 2.671E-02 2.549E-02 5.127E-03
STD 4.581E-01 6.678E-02 4.133E-02 4.010E-02 5.575E-03
Fun27 AVG 2.337E+02 9.177E+01 8.491E+01 7.121E+01 5.838E+01
STD 4.295E+02 4.087E+01 3.123E+01 3.261E+01 2.258E+01
Fun28 AVG 2.622E+00 2.325E+00 2.372E+00 2.307E+00 2.258E+00
STD 3.995E-01 1.912E-01 2.113E-01 1.415E-01 1.340E-01
Fun35 AVG 4.245E+02 4.111E+02 4.111E+02 4.101E+02 4.047E+02
STD 3.327E+01 2.841E+01 2.100E+01 2.066E+01 1.322E+01
Fun44 AVG 2.792E+03 2.704E+03 2.676E+03 2.640E+03 2.618E+03
STD 1.442E+02 1.218E+02 1.240E+02 8.760E+01 7.717E+01
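The experimental protocol behind Table 16 — a fixed evaluation budget shared across population sizes, repeated runs, and AVG/STD statistics — can be sketched as below. Random search stands in for the optimizer, since only the sweep structure is being shown.

```python
import random
import statistics

def sphere(x):
    return sum(v * v for v in x)

def run_random_search(pop_size, evals=3000, dim=10, seed=0):
    """Stand-in optimizer: the total evaluation budget is fixed, so larger
    populations trade iterations for breadth, mirroring the FE-fixed setup."""
    rng = random.Random(seed)
    iters = evals // pop_size
    best = float("inf")
    for _ in range(iters):
        for _ in range(pop_size):
            best = min(best, sphere([rng.uniform(-5, 5) for _ in range(dim)]))
    return best

# Sweep population sizes; each setting is repeated with independent seeds.
for n in (10, 30, 50):
    runs = [run_random_search(n, seed=s) for s in range(10)]
    print(f"N={n}: AVG={statistics.mean(runs):.3f} STD={statistics.stdev(runs):.3f}")
```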

Sensitivity of the control parameter

The control parameter in the Historical Reflective Opposition (HRO) phase governs the reflection depth toward an individual's historical best position, influencing the balance between exploration and exploitation. To determine an appropriate value, the parameter was varied across ten settings (0.01 to 2.0) and tested on eight benchmark functions. For each value, the average and standard deviation of the best fitness were recorded. As shown in Table 17, a value of 0.5 achieved the lowest mean fitness with smaller deviations, indicating stable and efficient performance. Very small values resulted in overly cautious updates, while larger ones caused unstable reflections. Hence, 0.5 is chosen as the setting for the HRO phase.

Table 17.

Sensitivity of AMOMM-BKA to the control parameter at different values.

Functions Metric Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
Fun05 AVG 3.346E+05 2.538E+05 2.858E+05 2.257E+05 2.366E+05 2.416E+05 2.617E+05 2.764E+05 3.086E+05 4.116E+05
STD 3.166E+05 2.510E+05 2.239E+05 2.088E+05 1.279E+05 2.282E+05 2.276E+05 2.399E+05 3.046E+05 3.300E+05
Fun06 AVG 1.491E+03 1.134E+03 1.100E+03 1.019E+03 1.039E+03 1.037E+03 1.045E+03 1.079E+03 1.307E+03 1.509E+03
STD 7.403E+02 5.435E+02 6.125E+02 5.996E+02 5.449E+02 4.255E+02 5.429E+02 5.288E+02 7.137E+02 8.265E+02
Fun12 AVG 2.577E+03 4.019E+03 2.672E+03 1.895E+02 2.978E+02 1.506E+03 3.291E+03 3.991E+02 1.517E+03 8.375E+03
STD 8.224E+03 1.342E+04 8.316E+03 4.545E+02 9.178E+02 6.285E+03 1.748E+04 9.322E+02 6.251E+03 2.999E+04
Fun13 AVG 8.913E+04 1.034E+05 5.735E+04 6.012E+04 1.261E+05 1.390E+05 1.208E+05 9.940E+04 1.356E+05 1.887E+05
STD 1.308E+05 1.646E+05 6.632E+04 1.311E+05 2.481E+05 3.599E+05 1.875E+05 2.752E+05 1.850E+05 3.441E+05
Fun27 AVG 7.781E+01 7.961E+01 7.503E+01 7.379E+01 7.592E+01 7.604E+01 8.236E+01 7.803E+01 7.555E+01 7.972E+01
STD 1.566E+01 1.856E+01 1.398E+01 1.771E+01 1.942E+01 2.250E+01 2.225E+01 2.148E+01 1.764E+01 2.066E+01
Fun28 AVG 2.291E+00 2.245E+00 2.280E+00 2.211E+00 2.280E+00 2.276E+00 2.278E+00 2.274E+00 2.296E+00 2.299E+00
STD 9.286E-02 7.715E-02 8.100E-02 8.841E-02 7.935E-02 6.585E-02 9.736E-02 8.239E-02 7.271E-02 6.694E-02
Fun35 AVG 4.068E+02 4.069E+02 4.068E+02 4.042E+02 1.299E+01 4.068E+02 4.079E+02 4.068E+02 4.079E+02 4.066E+02
STD 1.976E+00 1.329E+00 1.896E+00 2.277E+00 1.008E+01 2.338E+00 3.695E+00 1.847E+00 4.401E+00 1.399E+00
Fun44 AVG 2.680E+03 2.687E+03 2.676E+03 2.674E+03 2.689E+03 2.685E+03 2.685E+03 2.686E+03 2.690E+03 2.686E+03
STD 4.259E+01 4.730E+01 5.165E+01 4.927E+01 3.886E+01 4.123E+01 4.875E+01 3.887E+01 4.488E+01 4.381E+01

Sensitivity of threshold values

The selection of suitable threshold values plays a crucial role in the AMOMM-BKA mechanism, as these thresholds determine when the algorithm switches between exploration and exploitation strategies. In particular, the two threshold parameters control the activation of the different opposition strategies based on the confidence score. Choosing inappropriate threshold values may lead to excessive exploration, premature convergence, or stagnation in local optima. Therefore, a sensitivity analysis was conducted to identify threshold settings that yield stable, high-quality performance. In this experiment, four threshold combinations were tested to evaluate their solution quality, using a population size of 30 agents and a fixed number of function evaluations across eight representative benchmark functions. The four threshold combinations are summarized in Table 18, and the corresponding average and standard deviation of fitness values are reported in Table 19. The results show that Scenario 3 achieved the best performance across most functions; hence, these values were selected as the default thresholds for AMOMM-BKA, as they provide a stable balance between exploration and exploitation.
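Threshold-gated strategy switching of this kind can be sketched as a simple dispatch on the confidence score. The routing below is purely illustrative — the paper defines the confidence score and the exact mapping from threshold bands to strategies elsewhere — and the numeric thresholds are taken from Scenario 3.

```python
# Scenario 3 thresholds; which strategy fires in which band is an assumption
# made for illustration, not the paper's specification.
T1, T2 = 0.01, 0.1

def select_strategy(confidence):
    """Route an individual to an opposition strategy by its confidence score."""
    if confidence < T1:
        return "random_opposition"      # very low confidence: diversify broadly
    if confidence < T2:
        return "historical_reflective"  # moderate confidence: exploit memory
    return "blended_opposition"         # high confidence: guided search

for c in (0.005, 0.05, 0.5):
    print(f"confidence={c} -> {select_strategy(c)}")
```

Tightening or loosening T1 and T2 shifts how often each strategy is applied, which is exactly what the four scenarios in Table 18 probe.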

Table 18.

Summary of the four threshold combinations used for sensitivity analysis.

Scenario Threshold 1 Threshold 2
Scenario 1 0.001 0.05
Scenario 2 0.005 0.1
Scenario 3 0.01 0.1
Scenario 4 0.02 0.2
Table 19.

Sensitivity analysis of AMOMM-BKA for the threshold parameter.

Functions Metric Scenario 1 Scenario 2 Scenario 3 Scenario 4
Fun05 AVG 2.466E+01 2.463E+01 2.462E+01 2.468E+01
STD 1.522E-01 1.451E-01 1.273E-01 1.155E-01
Fun06 AVG 1.153E-05 7.279E-06 5.278E-06 1.568E-05
STD 2.550E-05 2.001E-05 1.667E-05 3.198E-05
Fun12 AVG 4.084E-07 2.952E-07 2.352E-07 3.710E-07
STD 9.180E-07 5.243E-07 5.521E-07 7.365E-07
Fun13 AVG 5.993E-02 6.500E-02 5.035E-02 1.106E-01
STD 7.055E-02 7.880E-02 5.927E-02 1.311E-01
Fun27 AVG 1.029E+02 9.456E+01 7.778E+01 8.564E+01
STD 6.838E+01 4.498E+01 3.500E+01 3.136E+01
Fun28 AVG 2.336E+00 2.345E+00 2.269E+00 2.272E+00
STD 1.778E-01 1.737E-01 1.245E-01 1.589E-01
Fun35 AVG 4.226E+02 4.106E+02 4.112E+02 4.126E+02
STD 3.223E+01 2.158E+01 2.030E+01 2.310E+01
Fun44 AVG 2.660E+03 2.714E+03 2.648E+03 2.669E+03
STD 9.344E+01 1.372E+02 1.005E+02 1.196E+02

Results and discussion of engineering applications

To further validate the effectiveness of AMOMM-BKA in tackling real-world engineering problems, this section presents a comprehensive analysis of its optimization performance on four engineering design problems. Table 20 summarizes these problems, including their dimension (D), number of equality constraints (Inline graphic), number of inequality constraints (Inline graphic), and the optimal cost. The evaluation used a population size of 30 and a maximum of 15,000 function evaluations (FEs), with each problem independently solved 30 times to ensure statistical reliability.
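The evaluation protocol above (a 15,000-FE budget and 30 independent runs per problem, reporting AVG and STD) can be reproduced with a small harness like the one below. The `random_search` solver and the `sphere` objective are placeholders used purely for illustration, not part of the paper.

```python
import random
import statistics

def random_search(objective, lb, ub, max_fes=15000, seed=None):
    """Placeholder optimizer: pure random search under a fixed FE budget."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_fes):
        x = [rng.uniform(l, u) for l, u in zip(lb, ub)]
        best = min(best, objective(x))
    return best

def sphere(x):
    """Simple unconstrained test objective: sum of squares, minimum 0."""
    return sum(v * v for v in x)

# 30 independent runs, as in the paper's protocol; report AVG and STD
results = [random_search(sphere, [-5] * 5, [5] * 5, seed=s) for s in range(30)]
print(f"AVG={statistics.mean(results):.3e}  STD={statistics.stdev(results):.3e}")
```

Swapping `random_search` for any of the compared metaheuristics, while keeping the FE budget and run count fixed, is what makes the AVG/STD columns in the result tables directly comparable across algorithms.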

Table 20.

Description of the four real-world engineering optimization problems.

Problems Dimension (D) Equality constraints (Inline graphic) Inequality constraints (Inline graphic) Optimal cost
Multiple disk clutch brake design problem 5 0 7 0.23524246
Step-cone pulley problem 5 3 8 16.0698687
Welded beam design 4 0 5 1.67021773
Speed Reducer design 7 0 11 2994.42447

The proposed AMOMM-BKA is evaluated against various metaheuristic algorithms, including PSO16, GWO18, GJO19, SO20, SCSO21, AVOA22, CMAES15, SHADE13, the classical BKA42, CBKA46, IBKA49, and QOBLBKA51. For fairness, AMOMM-BKA and all compared algorithms use the same parameter settings as in the mathematical test-function experiments.

Multiple disk clutch brake design problem

The primary objective of this design is to minimize the mass of a multi-disc clutch brake system. The optimization problem is formulated with integer-valued decision variables, namely the inner radius Inline graphic, outer radius Inline graphic, disc thickness Inline graphic, actuator force Inline graphic, and number of frictional surfaces Inline graphic. The formulation is subject to nine nonlinear constraints that govern the feasible design space66. The geometric and functional configuration of these decision variables for the multi-plate disc clutch brake design problem is depicted in Fig. 16. Formally, the problem can be expressed as follows:

Fig. 16.

A schematic representation of the Multiple disk clutch brake design66.

Minimize:

graphic file with name d33e12096.gif

subject to:

graphic file with name d33e12100.gif

where,

graphic file with name d33e12104.gif
graphic file with name d33e12107.gif

with bounds:

graphic file with name d33e12111.gif

The outcomes of applying AMOMM-BKA to the multiple disk clutch brake design problem are presented in Table 21. The findings indicate that both AMOMM-BKA and SHADE achieved the minimum cost while satisfying all constraints, and AVOA and PSO also produced competitive results. The superior performance of AMOMM-BKA can be attributed to the integration of AMOMM, which effectively balances exploration and exploitation, enhances solution diversity, and mitigates the risk of premature convergence. Collectively, these mechanisms enable AMOMM-BKA to efficiently navigate the solution space and improve solution quality, underscoring its efficacy and robustness for the multiple disk clutch brake design optimization task.
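Constrained design problems of this kind are commonly handled with a static-penalty fitness, which lets an unconstrained optimizer such as AMOMM-BKA compare feasible and infeasible candidates. The sketch below shows the pattern with a deliberately simple placeholder objective and constraint; the actual clutch-brake functions are those in the equations above and are not reproduced here.

```python
def penalized(objective, constraints, x, rho=1e6):
    """Static-penalty fitness: objective plus rho * total inequality violation.

    `constraints` holds g_i(x) <= 0 style functions; positive values are violations.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + rho * violation

# Illustrative stand-ins (NOT the clutch-brake formulation): minimize x0 + x1
# subject to x0 * x1 >= 1, written as g(x) = 1 - x0 * x1 <= 0.
obj = lambda x: x[0] + x[1]
cons = [lambda x: 1.0 - x[0] * x[1]]
print(penalized(obj, cons, [1.0, 1.0]))   # feasible point: 2.0
print(penalized(obj, cons, [0.5, 0.5]))   # infeasible: 1.0 + 1e6 * 0.75 = 750001.0
```

Because infeasible candidates receive a large additive penalty, "satisfying all constraints" in the result tables corresponds to solutions whose violation term is zero.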

Table 21.

A comparison of the Multiple disk clutch brake design using different algorithms.

Algorithm Optimal set of decision variables Optimal cost
Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
PSO 70.0115 90.0115 1.0000 827.6266 2.0000 0.23528
GWO 69.9948 90.0004 1.0000 752.2048 2.0000 0.23531
GJO 69.9871 90.0000 1.0000 814.5706 2.0001 0.23538
SO 69.9887 90.0000 1.0000 817.7628 2.0000 0.23536
SCSO 69.6014 90.0000 1.0000 635.6428 2.0000 0.23932
AVOA 70.0005 90.0006 1.0000 718.5205 2.0000 0.23525
CMAES 69.9293 90.6051 1.0000 728.3340 2.0000 0.24391
SHADE 70.0000 90.0000 1.0000 792.8992 2.0000 0.23524
BKA 69.3333 90.0000 1.0000 688.5030 2.0000 0.24161
CBKA 69.8253 90.0000 1.0000 631.5354 2.0000 0.23704
IBKA 69.6655 90.0000 1.0000 684.2314 2.0000 0.23844
QOBLBKA 68.7627 90.0020 1.0014 701.0887 2.0006 0.24763
AMOMM-BKA 70.0005 90.0005 1.0000 791.3692 2.0000 0.23524

Step-cone pulley problem

The primary aim of this problem is to minimize the weight of a four-step cone pulley by optimizing five design variables. Four of these, Inline graphic, correspond to the diameters of the pulley steps, while the fifth variable (w) represents the pulley width. The problem is subject to eleven nonlinear constraints, which ensure that the transmitted power exceeds 0.75 hp67. A schematic representation of the design is provided in Fig. 17. The mathematical formulation of the optimization problem is expressed as follows:

Fig. 17.

A schematic representation of the Step-cone pulley problem.

Suppose   Inline graphic

Minimize:

graphic file with name d33e12404.gif

subject to:

graphic file with name d33e12408.gif

where,

graphic file with name d33e12412.gif

with:

graphic file with name d33e12416.gif

Table 22 presents a comparative analysis of AMOMM-BKA against several established algorithms on the step-cone pulley optimization problem. The findings clearly indicate that AMOMM-BKA achieved the minimum design weight, effectively optimizing decision variables Inline graphic through Inline graphic. Notably, high-performance algorithms such as CMAES and SHADE underperformed in this context. While other methods, including PSO, yielded competitive outcomes, AMOMM-BKA consistently outperformed them, affirming its superiority on this engineering design challenge.

Table 22.

A comparison of the Step-cone pulley problem using different algorithms.

Algorithm Optimal set of decision variables Optimal cost
Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
PSO 40.2940 55.4476 73.9241 88.6302 85.8253 1.6876E+01
GWO 40.5996 55.8662 74.4767 89.3059 88.0996 2.5573E+07
GJO 40.2205 55.3589 73.7890 88.4440 88.8242 1.8626E+08
SO 40.2171 55.3418 73.7831 88.4613 87.5145 1.7221E+01
SCSO 37.3671 48.2674 68.3776 86.8969 87.3216 3.0601E+17
AVOA 40.4117 55.6097 74.1402 88.8891 87.6380 1.7343E+01
CMAES 42.5862 58.8634 81.1081 89.6467 89.8018 6.0657E+11
SHADE 40.6101 55.8837 74.5031 89.3251 85.5047 3.5744E+05
BKA 40.3577 55.5298 74.0533 88.7738 88.0971 4.6451E+07
CBKA 40.3890 55.5785 74.0987 88.8393 88.1747 4.5136E+01
IBKA 40.9775 56.5201 75.1721 89.9323 90.0000 5.9063E+09
QOBLBKA 39.9940 55.0881 73.3575 87.2039 87.2832 5.6549E+18
AMOMM-BKA 40.0338 55.0893 73.4465 88.0581 86.3824 1.6767E+01

Welded beam design

The welded beam design is a fundamental engineering optimization challenge aimed at minimizing manufacturing cost. The design is constrained by four critical factors: shear stress Inline graphic, bending stress Inline graphic, buckling load Inline graphic, and beam deflection Inline graphic, all of which must be considered to ensure structural integrity68. A visual representation of the problem from both two- and three-dimensional perspectives is provided in Fig. 18. The problem involves four design variables, namely the weld thickness h, the length of the clamped bar l, the height of the bar t, and the thickness of the bar b. It is governed by seven constraints that ensure feasibility and performance. The complete mathematical formulation is presented as follows:

graphic file with name d33e12725.gif
graphic file with name d33e12729.gif

Fig. 18.

A schematic representation of the Welded beam design.

Table 23 presents a detailed comparison of the performance of the AMOMM-BKA on the welded beam design problem alongside other optimization algorithms. The results clearly show that AMOMM-BKA achieved the minimum cost while fully satisfying all design constraints, demonstrating its strong capability in addressing complex constrained engineering problems. This superior performance is likely due to the algorithm’s effective balance between exploration and exploitation, which allows it to search the solution space efficiently and refine potential solutions. However, both SCSO and SHADE yielded relatively higher cost values, suggesting that they may struggle to maintain optimal performance in such constrained design tasks.

Table 23.

A comparison of the Welded beam design using different algorithms.

Algorithm Optimal set of decision variables Optimal cost
h l t b
PSO 0.2165 3.3597 8.8373 0.2166 1.7634E+00
GWO 0.2037 3.5240 9.0380 0.2059 1.7302E+00
GJO 0.2004 3.6157 9.0406 0.2062 1.7400E+00
SO 0.2137 3.4099 8.8771 0.2158 1.7659E+00
SCSO 0.4742 2.6225 5.9301 0.6797 3.3251E+00
AVOA 0.1898 3.9073 9.0604 0.2059 1.7598E+00
CMAES 0.1969 4.0765 9.2212 0.2134 1.8865E+00
SHADE 0.2770 3.0627 7.9052 0.3028 2.1316E+00
BKA 0.2008 3.7373 8.9684 0.2103 1.7685E+00
CBKA 0.2049 3.4940 9.0375 0.2059 1.7279E+00
IBKA 0.2020 3.3762 9.5686 0.2036 1.7792E+00
QOBLBKA 0.2567 3.4255 8.5125 0.2745 2.0041E+00
AMOMM-BKA 0.2057 3.4705 9.0366 0.2057 1.7269E+00

Speed reducer design

The speed reducer is a crucial component in gearbox systems, playing a vital role in mechanical performance. This optimization problem focuses on minimizing the total weight of the speed reducer while satisfying 11 constraints, four of which are linear inequalities and the remaining seven nonlinear69. The constraints cover the bending stress of the gear teeth, the transverse deflections of the shafts, the surface stress, and the stresses in the shafts. The problem involves seven design variables, namely the face width Inline graphic, the module of teeth Inline graphic, the number of teeth in the pinion Inline graphic, the length of the first shaft between bearings Inline graphic, the length of the second shaft between bearings Inline graphic, the diameter of the first shaft Inline graphic, and the diameter of the second shaft Inline graphic. The 2D and 3D representations of the speed reducer design are illustrated in Fig. 19, and the complete mathematical formulation is provided below.

graphic file with name d33e12994.gif

Fig. 19.

A schematic representation of the Speed reducer design.

Table 24 summarizes the performance of the AMOMM-BKA algorithm on the speed reducer weight minimization problem. The algorithm attained the minimal reducer weight while rigorously satisfying all constraints, demonstrating its robust optimization capability. Algorithms such as SO and SHADE also performed commendably, yielding competitive solutions. Collectively, these findings underscore the efficiency of AMOMM-BKA, attributed to its well-calibrated balance between exploration and exploitation, which enables effective navigation of constrained design landscapes and the identification of optimal solutions.

Table 24.

A comparison of the Speed reducer design using different algorithms.

Algorithm Optimal set of decision variables Optimal cost
Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
PSO 3.5012 0.7000 17.0000 7.6971 7.9711 3.3517 5.2869 3004.6992
GWO 3.5054 0.7000 17.0007 7.5526 7.9847 3.3673 5.2897 3011.2583
GJO 3.5166 0.7002 17.0011 7.6609 8.0543 3.3907 5.2919 3026.8061
SO 3.5000 0.7000 17.0000 7.3333 7.7158 3.3503 5.2868 2994.8932
SCSO 3.4798 0.7000 19.9893 7.5397 7.7828 3.4693 5.2865 1.3839E+12
AVOA 3.5000 0.7000 17.0000 7.7032 7.9194 3.3510 5.2867 3002.7558
CMAES 3.5000 0.7000 17.0000 7.7577 7.9190 3.3526 5.2872 3004.9287
SHADE 3.5014 0.7000 17.0000 7.3000 7.7153 3.3502 5.2867 2995.0231
BKA 3.5037 0.7000 17.0000 7.7868 7.9315 3.3555 5.2891 3007.9271
CBKA 3.5038 0.7000 17.0005 7.8103 7.9591 3.3540 5.2884 3008.0085
IBKA 3.5733 0.7000 17.0000 7.3000 8.0701 3.3502 5.3223 3054.7243
QOBLBKA 3.5011 0.7000 17.0001 7.6810 8.0332 3.3574 5.2884 3008.3736
AMOMM-BKA 3.5000 0.7000 17.0000 7.3010 7.7153 3.3502 5.2867 2994.5467

Table 25 presents the Friedman ranking of all algorithms applied to the engineering problem set. Remarkably, the AMOMM-BKA algorithm consistently secured the first position across all four evaluated engineering scenarios, underscoring its exceptional efficacy and robustness in addressing complex global optimization challenges. Overall, the empirical results demonstrate that AMOMM-BKA significantly surpasses competing methodologies, affirming its potential as a highly promising and effective solution for global optimization in engineering domains.

Table 25.

Friedman-based performance rankings for the algorithms tested on four engineering challenges.

Engineering Problems PSO GWO GJO SO SCSO AVOA CMAES SHADE BKA CBKA IBKA QOBLBKA AMOMM-BKA
Multiple disk clutch brake design problem 3 4 7 6 5 2 11 1 10 8 9 12 1
Step-cone pulley problem 2 6 5 3 7 4 13 8 10 9 12 11 1
Welded beam design 6 3 4 7 13 5 10 12 8 2 9 11 1
Speed Reducer design 5 10 11 2 13 4 6 3 7 8 12 9 1
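Rankings like those in Table 25 are obtained by ranking the algorithms' results on each problem separately and then averaging across problems. A minimal version (plain ordinal ranking, no tie handling) looks like this, using made-up cost values for three hypothetical algorithms on two problems:

```python
def rank_per_problem(costs):
    """Return 1-based ranks (lower cost -> better rank); ties broken by order."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    ranks = [0] * len(costs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

# Hypothetical final costs of three algorithms on two problems
problems = [[1.75, 1.73, 1.72],          # problem 1
            [3005.0, 2995.0, 2994.5]]    # problem 2
per_problem = [rank_per_problem(p) for p in problems]
avg_rank = [sum(col) / len(problems) for col in zip(*per_problem)]
print(per_problem)   # [[3, 2, 1], [3, 2, 1]]
print(avg_rank)      # [3.0, 2.0, 1.0]
```

The Friedman test statistic reported in the paper is then computed from these per-problem ranks; the algorithm with the lowest average rank (here the third one) is the overall winner.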

Conclusion and future work

To prevent BKA from becoming trapped in local optima and to enhance its global optimization capability, this study introduces four improvement strategies, which collectively form the proposed AMOMM-BKA framework. Firstly, BOBL is employed to guide stagnated individuals showing minimal fitness improvement toward more promising regions without destabilizing the population. Secondly, historical reflective opposition leverages archived elite solutions to reinforce the trajectory of improving individuals. Thirdly, random opposition maintains diversity for uncertain individuals whose search behavior lacks a clear direction. Finally, the midpoint-based mutation strategy employs a randomized convergence mechanism, guiding solutions toward the midpoint between the global best and a randomly selected peer, thereby balancing exploration and exploitation. To comprehensively assess the effectiveness of AMOMM-BKA, an extensive comparative evaluation was conducted against twelve state-of-the-art metaheuristic algorithms on the CEC2005, CEC2019, and CEC2022 benchmark suites. Across a total of 45 numerical optimization problems, AMOMM-BKA consistently demonstrated superior performance, securing the first rank with an average score of 1.78, 56.18% better than the second-best algorithm, SHADE (average rank 4.56). The experimental and statistical analyses consistently highlight the algorithm's superior convergence speed, solution accuracy, and robustness. Moreover, AMOMM-BKA was applied to four real-world engineering optimization problems, and the results and comparisons confirm its effectiveness on practical tasks. In conclusion, the proposed AMOMM-BKA algorithm exhibits superior convergence accuracy, speed, and optimization performance.
Its effectiveness across both fixed- and variable-dimensional functions confirms strong adaptability and robustness in solving diverse optimization problems. The current version of AMOMM-BKA is restricted to single-objective optimization, limiting its applicability to broader real-world scenarios, and its structure requires further adaptation to handle discrete optimization problems effectively. Future work will focus on developing binary and multi-objective extensions of AMOMM-BKA to enhance its versatility. These advancements will enable AMOMM-BKA to address diverse optimization tasks, such as hyperparameter tuning in natural language processing, resource management in wireless sensor networks, and optimization modeling in digital twin systems.
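As a rough illustration of the midpoint-based mutation summarized above, the update below moves a solution a random fraction of the way toward the midpoint of the global best and a randomly selected peer. This is a sketch of the mechanism as described, not the paper's exact update rule.

```python
import random

def midpoint_mutation(x, best, peer, rnd=random.random):
    """Move x a random step toward the midpoint of the global best and a peer."""
    mid = [(b + p) / 2.0 for b, p in zip(best, peer)]   # elite/peer midpoint
    r = rnd()                                           # random step size in [0, 1)
    return [xi + r * (mi - xi) for xi, mi in zip(x, mid)]

x    = [0.0, 4.0]
best = [1.0, 1.0]
peer = [3.0, 3.0]
print(midpoint_mutation(x, best, peer, rnd=lambda: 0.5))  # halfway to (2, 2): [1.0, 3.0]
```

Because the target blends the elite solution with a random peer rather than the elite alone, the pull toward promising regions is softened, which is how the strategy keeps exploitation from collapsing diversity.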

Acknowledgements

We would like to express our gratitude to VIT University for supporting this research work.

Author contributions

Rajasekar P: Conceptualization, Methodology, Software, Data curation, Writing– original draft. Jayalakshmi M: Resources, Validation, Formal analysis, Writing– review & editing, Supervision.

Funding

Open access funding provided by Vellore Institute of Technology.

Data availability

All data generated or analyzed during this study are included in this article.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Zhou, Y. et al. A neighborhood regression optimization algorithm for computationally expensive optimization problems. IEEE Trans. Cybern. 52(5), 3018–3031 (2020).
  • 2. Yu, M. et al. A multi-strategy enhanced Dung Beetle Optimization for real-world engineering problems and UAV path planning. Alex. Eng. J. 118, 406–434 (2025).
  • 3. Liu, H. et al. An improved arithmetic optimization algorithm based on reinforcement learning for global optimization and engineering design problems. Swarm Evol. Comput. 96, 101985 (2025).
  • 4. Wang, W. et al. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 195, 103694 (2024).
  • 5. Shen, Y. et al. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 215, 119269 (2023).
  • 6. Anitha, J., Immanuel Alex Pandian, S. & Akila Agnes, S. An efficient multilevel color image thresholding based on modified whale optimization algorithm. Expert Syst. Appl. 178, 115003 (2021).
  • 7. Sallam, K. M. et al. An enhanced LSHADE-based algorithm for global and constrained optimization in applied mechanics and power flow problems. Swarm Evol. Comput. 97, 102032 (2025).
  • 8. Elhoseny, M., Abdel-Salam, M. & El-Hasnony, I. M. An improved multi-strategy Golden Jackal algorithm for real world engineering problems. Knowl.-Based Syst. 295, 111725 (2024).
  • 9. Houssein, E. H. et al. An efficient multi-objective gorilla troops optimizer for minimizing energy consumption of large-scale wireless sensor networks. Expert Syst. Appl. 212, 118827 (2023).
  • 10. Yu, X. et al. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 215, 119327 (2023).
  • 11. Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, 1989).
  • 12. Storn, R. & Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997).
  • 13. Tanabe, R. & Fukunaga, A. Success-history based parameter adaptation for differential evolution. In 2013 IEEE Congress on Evolutionary Computation (IEEE, 2013).
  • 14. Tanabe, R. & Fukunaga, A. S. Improving the search performance of SHADE using linear population size reduction. In 2014 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2014).
  • 15. Hansen, N., Müller, S. D. & Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 11(1), 1–18 (2003).
  • 16. Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95 – International Conference on Neural Networks Vol. 4 (IEEE, 1995).
  • 17. Karaboga, D. & Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471 (2007).
  • 18. Mirjalili, S., Mohammad Mirjalili, S. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014).
  • 19. Chopra, N. & Ansari, M. M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 198, 116924 (2022).
  • 20. Hashim, F. A. & Hussien, A. G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 242, 108320 (2022).
  • 21. Seyyedabbasi, A. & Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 39(4), 2627–2651 (2023).
  • 22. Abdollahzadeh, B., Soleimanian Gharehchopogh, F. & Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 158, 107408 (2021).
  • 23. Heidari, A. A. et al. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019).
  • 24. Hashim, F. A. et al. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 192, 84–110 (2022).
  • 25. Wang, L. et al. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 114, 105082 (2022).
  • 26. Xue, J. & Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 79(7), 7305–7336 (2023).
  • 27. Zhao, W. et al. Electric eel foraging optimization: A new bio-inspired optimizer for engineering applications. Expert Syst. Appl. 238, 122200 (2024).
  • 28. El-Kenawy, E.-S. M. et al. Greylag goose optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 238, 122147 (2024).
  • 29. Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009).
  • 30. Mirjalili, S., Mohammad Mirjalili, S. & Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27(2), 495–513 (2016).
  • 31. Ahmadianfar, I., Bozorg-Haddad, O. & Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 540, 131–159 (2020).
  • 32. Azizi, M. et al. Energy valley optimizer: A novel metaheuristic algorithm for global and engineering optimization. Sci. Rep. 13(1), 226 (2023).
  • 33. Abualigah, L. et al. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021).
  • 34. Abdel-Basset, M. et al. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler's laws of planetary motion. Knowl.-Based Syst. 268, 110454 (2023).
  • 35. Goodarzimehr, V. et al. Special relativity search: A novel metaheuristic method based on special relativity physics. Knowl.-Based Syst. 257, 109484 (2022).
  • 36. Rao, R. V., Savsani, V. J. & Vakharia, D. P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011).
  • 37. Moosavi, S. H. S. & Bardsiri, V. K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 86, 165–181 (2019).
  • 38. Askari, Q., Younas, I. & Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 195, 105709 (2020).
  • 39. Das, B., Mukherjee, V. & Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 146, 102804 (2020).
  • 40. Dehghani, M., Trojovska, E. & Zuscak, T. A new human-inspired metaheuristic algorithm for solving optimization problems based on mimicking sewing training. Sci. Rep. 12(1), 17387 (2022).
  • 41. Lian, J. & Hui, G. Human evolutionary optimization algorithm. Expert Syst. Appl. 241, 122638 (2024).
  • 42. Wang, J. et al. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 57(4), 98 (2024).
  • 43. Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997).
  • 44. Mansouri, H. et al. A modified black-winged kite optimizer based on chaotic maps for global optimization of real-world applications. Knowl.-Based Syst. 318, 113558 (2025).
  • 45. Mohapatra, S., Kaliyaperumal, D. & Gharehchopogh, F. S. A revamped black winged kite algorithm with advanced strategies for engineering optimization. Sci. Rep. 15(1), 17681 (2025).
  • 46. Alabed, T. & Servi, S. A Levy flight based chaotic black winged kite algorithm for solving optimization problems. Sci. Rep. 15(1), 34608 (2025).
  • 47. Liao, J. et al. An improved black-winged kite optimization algorithm incorporating multiple strategies. Eng. Res. Express 7(3), 035225 (2025).
  • 48. Li, Y. et al. A black-winged kite optimization algorithm enhanced by osprey optimization and vertical and horizontal crossover improvement. Sci. Rep. 15(1), 6737 (2025).
  • 49. Fu, J. et al. Prediction of lithium-ion battery state of health using a deep hybrid kernel extreme learning machine optimized by the improved black-winged kite algorithm. Batteries 10(11), 398 (2024).
  • 50. Zhao, M. et al. Improved black-winged kite algorithm based on chaotic mapping and adversarial learning. In J. Phys.: Conf. Ser. Vol. 2898, No. 1 (IOP Publishing, 2024).
  • 51. Rajasekar, P. & Jayalakshmi, M. Adaptive quasi-opposition and dynamic switching in black-winged kite algorithm for global optimization and constrained engineering designs. Alex. Eng. J. 130, 969–994 (2025).
  • 52. Haohao, M. et al. Improved black-winged kite algorithm and finite element analysis for robot parallel gripper design. Adv. Mech. Eng. 16(10) (2024).
  • 53. Wang, J. et al. Indoor visible light 3D localization system based on black-wing kite algorithm. IEEE Access (2025).
  • 54. Du, C., Zhang, J. & Fang, J. An innovative complex-valued encoding black-winged kite algorithm for global optimization. Sci. Rep. 15(1), 932 (2025).
  • 55. Xue, R. et al. Multi-strategy integration model based on black-winged kite algorithm and artificial rabbit optimization. In International Conference on Swarm Intelligence (Springer Nature Singapore, 2024).
  • 56. Jiang, M. et al. Robust color image watermarking algorithm based on synchronization correction with multi-layer perceptron and Cauchy distribution model. Appl. Soft Comput. 140, 110271 (2023).
  • 57. Tizhoosh, H. R. Opposition-based learning: A new scheme for machine intelligence. In International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06) Vol. 1 (IEEE, 2005).
  • 58. Rahnamayan, S., Tizhoosh, H. R. & Salama, M. A. Quasi-oppositional differential evolution. In 2007 IEEE Congress on Evolutionary Computation (IEEE, 2007).
  • 59. Long, W. et al. A random opposition-based learning grey wolf optimizer. IEEE Access 7, 113810–113825 (2019).
  • 60. Suganthan, P. N. et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005005 (2005).
  • 61. Liang, J. J., Qu, B., Gong, D. & Yue, C. Problem Definitions and Evaluation Criteria for the CEC 2019 Special Session on Multimodal Multiobjective Optimization (Zhengzhou University, Computational Intelligence Laboratory, 2019).
  • 62. Biedrzycki, R., Arabas, J. & Warchulski, E. A version of NL-SHADE-RSP algorithm with midpoint for CEC 2022 single objective bound constrained problems. In 2022 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2022).
  • 63. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 32(200), 675–701 (1937).
  • 64. Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1(6), 80–83 (1945).
  • 65. Chen, Z. et al. An artificial bee bare-bone hunger games search for global optimization and high-dimensional feature selection. iScience 26(5) (2023).
  • 66. Zhong, C. et al. Starfish optimization algorithm (SFOA): A bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl. 37(5), 3641–3683 (2025).
  • 67. Naruei, I. & Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 183, 115352 (2021).
  • 68. Agushaka, J. O., Ezugwu, A. E. & Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 391, 114570 (2022).
  • 69. Zhu, Y. et al. ISO: An improved snake optimizer with multi-strategy enhancement for engineering optimization. Expert Syst. Appl. 281, 127660 (2025).


Articles from Scientific Reports are provided here courtesy of Nature Publishing Group
