Abstract
The black-winged kite algorithm (BKA), constructed on the black-winged kites’ migratory and predatory instincts, is a recent swarm intelligence method that integrates the Leader tactic with the Cauchy variation procedure to retrieve the global optimal solution. The essential BKA exhibits marginalized resolution efficiency, inferior assessment precision, and a tendency toward premature search stagnation. To foster aggregate discovery intensity and advance widespread computational efficacy, an innovative complex-valued encoding BKA (CBKA) is presented to resolve global optimization problems. The complex-valued encoding manipulates the dual-diploid configuration to encode the black-winged kite, and the actual and fictitious portions are inserted into the BKA, which transforms the dual-dimensional encoding into a single-dimensional manifestation. With the inherent parallelism and consistency, the actual and fictitious portions are renewed separately for each search agent, which reinforces population pluralism, restricts discovery stagnation, extends the identification area, promotes estimation excellence, advances information resources, and fosters collaboration efficiency. The CBKA not only showcases abundant flexibility and compatibility to accomplish supplementary advantages and sharpen resolution precision but also incorporates localized exploitation and universal exploration to forestall exaggerated convergence and cultivate desirable solutions. The function evaluations, engineering layouts, and adaptive infinite impulse response system identification are executed to certify the suitability and affordability of the CBKA. The experimental results manifest that the computational accomplishment and convergence productivity of the CBKA are superior to those of the other comparison algorithms, and that the CBKA delivers noteworthy stabilization and resilience to explore superior assessment precision and swifter convergence efficiency.
Keywords: Black-winged kite algorithm, Complex-valued encoding, Function evaluations, Engineering layouts, Infinite impulse response system identification
Subject terms: Engineering, Mathematics and computing
Introduction
The purpose of optimization is to carry out the layout regulations and accomplish the appropriate largest or smallest objective value within an extensive selection of pertinent restrictions by employing the most acceptable customized variables of the recognition framework. Numerous multi-modal, multi-objective, large-scale, uncertain, and exacerbated analytical features are observed in real-world optimization scenarios. The conventional optimization procedures incorporate mixed-integer programming, dynamic programming, Newton’s approach, gradient descent, quadratic programming, and conjugate gradient, which exhibit some weaknesses: (1) Excessive dependence on the mathematical structure. The goal-oriented solution could be divergent if the prerequisites of first- and second-order differentiability are not fulfilled. (2) Sensitivity to the initial solution. A substantial inaccuracy in the initial location selection will culminate in ineffective precision of mathematical and numerical outcomes. (3) Ineffectual handling of multifaceted, combinational, and huge-scale frameworks. The conventional optimization procedures suffer from inadequate elimination efficiency, wasted instructive resources, sensitive anticipation stagnation, frequent uncharacteristic convergence, multidimensional amplification, and inferior assessment precision.
Motivated by unforeseen occurrences or sophisticated predatory conduct, metaheuristic algorithms (MAs) are indifferent to the selection of starting solutions and do not depend on convexity or differentiability. MAs inherit the more sophisticated progressive information of the privileged individual to investigate the calculation region, which reinforces population pluralism, restricts discovery stagnation, extends the identification area, promotes estimation excellence, advances information resources, and fosters collaboration efficiency. MAs exhibit an easy-to-implement structure and display strong self-organization and intelligence. The MAs are arranged into four categories according to their sources of motivation.
Swarm intelligence (SI)
Intelligent organizations that manifest self-sustaining action are collectively referred to as SI, which is characterized as an intelligent decision-making behavior that emerges from collaboration between individuals and their surroundings. Individuals within an affiliation adhere to a fundamental code of action with no predominant centralized management between organizations. The collaborative intelligence of the entire population eventually originates from individual interaction, such as the puma optimizer (PO)1, elk herd optimizer (EHO)2, spider wasp optimization (SWO)3, walrus optimizer (WO)4, coati optimization algorithm (COA)5, sand cat swarm optimization (SCSO)6, horned lizard optimization algorithm (HLOA)7, GOOSE algorithm (GOOSE)8, artificial gorilla troops optimizer (GTO)9, mountain gazelle optimizer (MGO)10, African vultures optimization algorithm (AVOA)11, greater cane rat algorithm (GCRA)12, secretary bird optimization algorithm (SBOA)13, arctic puffin optimization (APO)14, blood-sucking leech optimizer (BSLO)15, eel and grouper optimizer (EGO)16, frilled lizard optimization (FLO)17, and giant armadillo optimization (GAO)18. SI refines an inefficient solution or creates an alternative one to approximate the most appropriate solution, contingent on the searchable resolution information and the repeatable evolutionary approach. SI not only exhibits spectacular flexibility and adaptability to exploit the pursuit productivity and advance computational accurateness but also amalgamates investigation and extraction to enrich population pluralism and reinforce the global alternative solutions. SI exhibits attractive consistency and resilience to tackle multifaceted, combinational, and huge-scale difficulties.
Evolutionary algorithms (EAs)
EAs emphasize self-sustaining, self-adapting, and self-studying attributes and are motivated by natural biological advancement. Reproduction, mutation, competitiveness, and selection all contribute to biological progression. The EAs cope with the optimization challenges through genetic alteration, recombination, and selection, such as the liver cancer algorithm (LCA)19, coronavirus mask protection algorithm (CMPA)20, gooseneck barnacle optimization (GBO)21, nutcracker optimization algorithm (NOA)22, altruistic population algorithm (APA)23, anti-coronavirus optimization (ACVO)24, water optimization algorithm (WOA)25, poplar optimization algorithm (POA)26, starling murmuration optimizer (SMO)27, plant competition optimization (PCO)28, remora optimization algorithm (ROA)29, water flow optimizer (WFO)30, differential evolution (DE)31, snow ablation optimizer (SAO)32, and gradient-based optimizer (GBO)33. EAs exhibit attractive stability and robustness to promote the effectiveness of accomplishing multifaceted issues. EAs utilize a variety of sophisticated heuristic procedures to broaden the resolution productivity and alleviate the computing charges, which harmonizes the localized extraction and universal exploration to restrict anticipation stagnation and bolster evaluation precision. EAs have strong reliability and feasibility to administer massive amounts of data. EAs attempt to expedite the preparation and evaluation of huge-scale datasets, which can capture valuable analytical information and bolster practicability and parallelism.
Physics/mathematics-based algorithms
Physics/mathematics-based algorithms are precipitated by the inevitable physical/mathematical theorems or occurrences, which typically symbolize the paramount principles of physical/mathematical procedures in the partnerships between search individuals during the procedure actualization, such as the Kepler optimization algorithm (KOA)34, numeric crunch algorithm (NCA)35, exponential distribution optimizer (EDO)36, elastic deformation optimization algorithm (EDOA)37, geometric octal zones distance estimation (GOZDE)38, Young’s double-slit experiment (YDSE)39, arithmetic optimization algorithm (AOA)40, integrated optimization algorithm (IOA)41, atomic orbital search (AOS)42, triangulation topology aggregation optimizer (TTAO)43, Newton–Raphson-based optimizer (NRBO)44, simulated annealing (SA)45, gravitational search algorithm (GSA)46, sinh cosh optimizer (SCHO)47, and Chernobyl disaster optimizer (CDO)48. Physics/mathematics-based algorithms exhibit strong adaptability and stability to regulate the issue’s ambiguity. These algorithms integrate fuzziness-based logic, resilient optimization, uncontrollable probabilistic instruction, and reinforced training to promote universal discovery precision and upgrade design adaptability. They utilize theoretical mathematical concepts to analyze algorithmic convergence, operational complexity, and problematic solvability.
Human-based algorithms
Human-based algorithms, which incorporate algorithms motivated by both physical and non-physical human interactions like contemplating and social actions, are the ultimate type of MAs that have been explored, such as the football team training algorithm (FTTA)49, love evolution algorithm (LEA)50, partial reinforcement optimizer (PRO)51, human memory optimization52, musical chairs algorithm (MCA)53, tactical unit algorithm (TUA)54, alpine skiing optimization (ASO)55, search in forest optimizer (SIFO)56, criminal search optimization algorithm (CSOA)57, competitive search algorithm (CSA)58, hunter–prey optimizer (HPO)59, archerfish hunting optimizer (AHO)60, hunger games search (HGS)61, human felicity algorithm (HFA)62, and group learning algorithm (GLA)63. Human-based algorithms furnish innovative solutions and competitive advantages in scientific and industrial contexts, incentivizing academics to allocate additional resources to synthesizing innovative methodologies. These algorithms receive formidable consistency and endurance by recognizing supplemental advantages, forestalling precocious convergence, and prioritizing investigation and utilization to advance convergence frequency and bolster estimation precision.
Zhang et al. juxtaposed a black-winged kite algorithm based on logistic chaotic mapping with an osprey optimization algorithm to address the function evaluations and engineering layouts, and this algorithm exhibited large-scale discovery and small-scale extraction to foster aggregate discovery intensity and advance widespread computational efficacy64. Ma et al. explored the black-winged kite algorithm with a good point set, nonlinear convergence factor, and adaptive t-distribution to address the robot parallel gripper design, and this algorithm exhibited abundant adaptability and versatility to determine a more stable evaluation accuracy65. Xue et al. integrated the black-winged kite algorithm with artificial rabbit optimization to address the function optimization, and this algorithm exhibited abundant sustainability and versatility to forestall exaggerated convergence and locate the appropriate solution66. Zhou et al. integrated the black-winged kite algorithm with sine cosine guidelines to address function optimization, and this algorithm exhibited delightful reliability and adaptability, enriching the detection information capacity and promoting global convergence performance67. Rasooli et al. established the black-winged kite algorithm to address clustering, and this algorithm exhibited suitability and affordability to foster aggregate discovery intensity, advance widespread computational efficacy, and retrieve the universal adequate solution68.
Although the altered versions of the black-winged kite algorithm (BKA) demonstrate outstanding reliability and flexibility to promote assessment precision and collaboration efficiency, they remain insufficient in reconciling localized exploitation and universal exploration. The no-free-lunch (NFL) theorem speculates that no single search methodology could successfully address the entirety of optimization difficulties. The BKA is constructed on the black-winged kites’ migratory and predatory instincts that integrate the Leader tactic with the Cauchy variation procedure to retrieve the universal adequate solution69. The basic BKA exhibits limitations, such as marginalized resolution efficiency, sensitive anticipation stagnation, sluggish collaboration speed, inferior assessment precision, and inadequate investigation and extraction. The innovative complex-valued encoding technique is integrated into the BKA to foster aggregate discovery intensity and advance widespread computational effectiveness, which incorporates a dual-diploid configuration to encode every black-winged kite and translates dual-dimensional encoding to single-dimension manifestation. The main contributions are summarized below: (1) The complex-valued encoding black-winged kite algorithm (CBKA) is established to address global optimization. (2) The complex-valued encoding reinforces population pluralism, restricts discovery stagnation, extends the identification zone, promotes estimation excellence, advances information resources, fosters collaboration efficiency, and exhibits remarkable parallelism and consistency. (3) The CBKA is compared with various comparison approaches that contain the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, BSLO, EGO, FLO, GOOSE, YDSE, SCHO, SWO, GAO and BKA. The CBKA is tested against the function evaluations, engineering layouts, and infinite impulse response (IIR) system identification. (4) The CBKA not only emphasizes formidable flexibility and sustainability to renew the actual and fictitious portions of each black-winged kite but also reconciles localized exploitation and universal exploration to forestall exaggerated convergence and locate the appropriate solution. In addition, the CBKA showcases abundant adaptability and versatility to reap supplementary advantages and explore superior assessment precision and swifter convergence efficiency.
The article is partitioned into the following components. Section “Black-winged kite algorithm (BKA)” reveals the BKA. Section “Complex-valued encoding black-winged kite algorithm (CBKA)” articulates the CBKA. Section “Simulation evaluation and result interpretation for benchmark functions” portrays the comparative experiments and result analysis for the benchmark functions. Section “CBKA for adaptive infinite impulse response system identification” portrays the comparative experiments and result analysis for the IIR system identification. Section “CBKA for classical engineering design” portrays the comparative experiments and result analysis for the classical engineering designs. Section “Impact analysis” showcases the impact analysis of the CBKA. Section “Conclusion and future exploration” summarizes the conclusions and outlines future exploration.
Black-winged kite algorithm (BKA)
The BKA integrates the Leader tactic with the Cauchy variation procedure to foster aggregate discovery intensity and retrieve the appropriate computational solution, which not only captures the black-winged kites’ migratory and predatory instincts but also meticulously mimics the fabulous adaptability to alter the surrounding circumstances and target locations.
Initialization population
The population is haphazardly initialized, and a matrix $X$ is stipulated as:

$$X=\begin{bmatrix}X_{1,1} & X_{1,2} & \cdots & X_{1,D}\\X_{2,1} & X_{2,2} & \cdots & X_{2,D}\\\vdots & \vdots & \ddots & \vdots\\X_{N,1} & X_{N,2} & \cdots & X_{N,D}\end{bmatrix}\tag{1}$$

where $N$ showcases the population magnitude, $D$ showcases the problematic dimension, and $X_{i,j}$ showcases the jth dimension of the ith individual. The $X_{i,j}$ is stipulated as:

$$X_{i,j}=lb_{j}+\operatorname{rand}\times(ub_{j}-lb_{j})\tag{2}$$

where $i$ showcases an integer in $[1,N]$, $lb_{j}$ and $ub_{j}$ showcase the lower and upper limitations of the jth dimension, and $\operatorname{rand}$ showcases a random number in $[0,1]$.
The most appropriate leader $L$ is stipulated as:

$$f_{best}=\min\big(f(X_{i})\big)\tag{3}$$

$$L=X\big(\operatorname{find}\big(f_{best}==f(X_{i})\big)\big)\tag{4}$$
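To make the initialization and leader selection concrete, a minimal NumPy sketch of Eqs. (1)–(4) as reconstructed above is given below; the helper names and the sphere objective used for illustration are assumptions for demonstration, not part of the original BKA description.

```python
import numpy as np

def initialize_population(N, D, lb, ub, rng):
    """Eqs. (1)-(2): scatter N black-winged kites uniformly inside [lb, ub]^D."""
    return lb + rng.random((N, D)) * (ub - lb)

def select_leader(population, fitness):
    """Eqs. (3)-(4): the leader is the kite with the smallest fitness (minimization)."""
    best = np.argmin(fitness)
    return population[best].copy(), fitness[best]

# Illustrative usage with a simple sphere objective (an assumption, not from the paper).
rng = np.random.default_rng(0)
pop = initialize_population(N=30, D=10, lb=-100.0, ub=100.0, rng=rng)
fit = np.sum(pop ** 2, axis=1)
leader, leader_fit = select_leader(pop, fit)
```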
Assaulting behavior
Black-winged kites are predators of tiny pastureland creatures and parasites; they swiftly descend and strike after silently monitoring their prey and regulating their wings and tail angles relative to motion velocity. Figure 1(a) articulates a black-winged kite safeguarding equilibrium and hovering. Figure 1(b) articulates a black-winged kite wafting towards the prey at breakneck speed. Figure 2(a) articulates a black-winged kite remaining hovering and foreseeing an assault. Figure 2(b) articulates a black-winged kite remaining hovering and foraging for prey.
Fig. 1.
(a) Safeguarding equilibrium and hovering, (b) Wafting towards the prey at breakneck speed.
Fig. 2.
(a) Remaining hovering and foreseeing an assault, (b) Remaining hovering and foraging for prey.
The assaulting behavior is stipulated as:

$$X_{i,j}^{t+1}=\begin{cases}X_{i,j}^{t}+n\,(1+\sin(r))\times X_{i,j}^{t}, & p<r\\X_{i,j}^{t}+n\,(2r-1)\times X_{i,j}^{t}, & \text{otherwise}\end{cases}\tag{5}$$

$$n=0.05\times e^{-2\left(t/T\right)^{2}}\tag{6}$$

where $X_{i,j}^{t}$ and $X_{i,j}^{t+1}$ showcase the positions of the ith black-winged kite in the jth dimension at iterations $t$ and $t+1$, $r$ showcases a random number in $[0,1]$, $p=0.9$, and $T$ showcases the iteration termination.
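A minimal sketch of the assaulting update of Eqs. (5)–(6), assuming $p=0.9$ and one random draw $r$ per kite, which is one common reading of the standard BKA; the update is vectorised over the whole population for brevity.

```python
import numpy as np

def attack_step(population, t, T, p=0.9, rng=None):
    """Eqs. (5)-(6): perturb each kite around its own position.

    The scale n shrinks as the iteration counter t approaches the termination T,
    so early iterations explore widely while late iterations refine locally.
    """
    rng = rng or np.random.default_rng()
    N, D = population.shape
    n = 0.05 * np.exp(-2.0 * (t / T) ** 2)
    r = rng.random((N, 1))                                   # one draw per kite
    hover = population + n * (1 + np.sin(r)) * population    # branch p < r
    dive = population + n * (2 * r - 1) * population         # otherwise
    return np.where(p < r, hover, dive)
```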
Migration behavior
The sophisticated behavior of bird migration is restricted by multiple environmental conditions incorporating food accessibility and humidity. Migration is to acclimatize to seasonal fluctuations, and numerous black-winged kites migrate from the northwest to the southeast during winter to pursue superior living circumstances and materialistic resources. If the anticipated fitness of the particular species is lower than that of the haphazard species, the leader will step down and revert to the migrating population. Conversely, the leader will instruct the search population until it accomplishes the detection destination. This procedure continuously designates exemplary leaders to guarantee a productive migration. Figure 3 articulates the strategic alterations of the leading black-winged kite.
Fig. 3.
The strategic alterations of the leading black-winged kite.
The migration behavior is stipulated as:

$$X_{i,j}^{t+1}=\begin{cases}X_{i,j}^{t}+C(0,1)\times\big(X_{i,j}^{t}-L_{j}^{t}\big), & F_{i}<F_{ri}\\X_{i,j}^{t}+C(0,1)\times\big(L_{j}^{t}-m\times X_{i,j}^{t}\big), & \text{otherwise}\end{cases}\tag{7}$$

$$m=2\times\sin\left(r+\frac{\pi}{2}\right)\tag{8}$$

where $L_{j}^{t}$ showcases the leading scorer in the jth dimension at iteration $t$, $X_{i,j}^{t}$ showcases the current position, $F_{i}$ showcases the fitness of the current position, $F_{ri}$ showcases the fitness of an accidental position, and $C(0,1)$ showcases the Cauchy mutation.

The single-dimensional Cauchy is a continuous stochastic distribution with two metrics, which is stipulated as:

$$f(x,\delta,\mu)=\frac{1}{\pi}\times\frac{\delta}{\delta^{2}+(x-\mu)^{2}},\qquad -\infty<x<\infty\tag{9}$$

where $\delta=1$ and $\mu=0$, the probability density function is stipulated as:

$$f(x,\delta,\mu)=\frac{1}{\pi}\times\frac{1}{x^{2}+1},\qquad -\infty<x<\infty\tag{10}$$
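A minimal sketch of the migration update of Eqs. (7)–(8) with the Cauchy mutation of Eqs. (9)–(10); drawing $C(0,1)$ per dimension, comparing each kite against a randomly chosen kite's fitness ($F_{i}$ versus $F_{ri}$), and a minimization convention are assumptions consistent with the description above.

```python
import numpy as np

def migrate_step(population, fitness, leader, rng):
    """Eqs. (7)-(8): migrate towards or away from the leading scorer."""
    N, D = population.shape
    cauchy = rng.standard_cauchy((N, D))            # C(0,1) samples, Eqs. (9)-(10)
    r = rng.random((N, 1))
    m = 2.0 * np.sin(r + np.pi / 2.0)
    rand_fit = fitness[rng.integers(0, N, size=N)]            # F_ri of a random kite
    follow = population + cauchy * (population - leader)      # branch F_i < F_ri
    lead = population + cauchy * (leader - m * population)    # otherwise
    return np.where((fitness < rand_fit)[:, None], follow, lead)
```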
Algorithm 1.
The pseudocode of BKA.
Algorithm 1 yields the pseudocode of BKA.
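Because the pseudocode figure itself is not reproduced here, the sketch below strings the initialization, assaulting, and migration phases into one plain minimization loop, reusing the helpers from the previous sketches; the greedy acceptance of improved positions and the boundary clipping are implementation assumptions rather than part of the original specification.

```python
import numpy as np

def bka(objective, N, D, lb, ub, T, seed=0):
    """A compact BKA loop assembled from initialize_population, select_leader,
    attack_step, and migrate_step sketched above."""
    rng = np.random.default_rng(seed)
    pop = initialize_population(N, D, lb, ub, rng)
    fit = np.apply_along_axis(objective, 1, pop)
    leader, leader_fit = select_leader(pop, fit)
    for t in range(T):
        # Assaulting phase, Eqs. (5)-(6).
        cand = np.clip(attack_step(pop, t, T, rng=rng), lb, ub)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]
        # Migration phase, Eqs. (7)-(10).
        cand = np.clip(migrate_step(pop, fit, leader, rng), lb, ub)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]
        leader, leader_fit = select_leader(pop, fit)
    return leader, leader_fit

# Example: best, best_fit = bka(lambda x: np.sum(x ** 2), N=30, D=10, lb=-100, ub=100, T=200)
```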
Complex-valued encoding black-winged kite algorithm (CBKA)
The exclusive chromosomes of natural biological tissues are constructed of double-stranded or multi-stranded structures, and the CBKA characterizes a pair of genotypes to enrich the complexity and diversity of the biological data. The actual (real) and fictitious (imaginary) portions are correlated with the actual and fictitious genes70. The complex gene $z_{k}$ is stipulated as:

$$z_{k}=X_{R,k}+\mathrm{i}\,X_{I,k},\qquad k=1,2,\ldots,D\tag{11}$$

where $X_{R,k}$ and $X_{I,k}$ showcase the actual (real) and fictitious (imaginary) portions of the kth dimension.
Table 1 portrays the chromosome structure of CBKA.
Table 1.
Chromosome structure of CBKA.

| Individual | Gene 1 | Gene 2 | Gene 3 | … | Gene $D$ |
|---|---|---|---|---|---|
| actual portion | $X_{R,1}$ | $X_{R,2}$ | $X_{R,3}$ | … | $X_{R,D}$ |
| fictitious portion | $X_{I,1}$ | $X_{I,2}$ | $X_{I,3}$ | … | $X_{I,D}$ |
| chromosome structure | $X_{R,1}+\mathrm{i}X_{I,1}$ | $X_{R,2}+\mathrm{i}X_{I,2}$ | $X_{R,3}+\mathrm{i}X_{I,3}$ | … | $X_{R,D}+\mathrm{i}X_{I,D}$ |
Initialization CBKA population
The definition interval of the kth dimension is $[A_{k},B_{k}]$, and the $D$ modules $\rho_{k}$ and $D$ arguments $\theta_{k}$ are randomly generated:

$$\rho_{k}=\operatorname{rand}\times\frac{B_{k}-A_{k}}{2},\qquad k=1,2,\ldots,D\tag{12}$$

$$\theta_{k}=\operatorname{rand}\times 4\pi-2\pi,\qquad k=1,2,\ldots,D\tag{13}$$

The $D$ complex values are stipulated as:

$$X_{R,k}+\mathrm{i}\,X_{I,k}=\rho_{k}\cos\theta_{k}+\frac{A_{k}+B_{k}}{2}+\mathrm{i}\,\rho_{k}\sin\theta_{k},\qquad k=1,2,\ldots,D\tag{14}$$
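A short sketch of the complex-valued initialization of Eqs. (12)–(14) as reconstructed above, assuming one common definition interval $[A,B]$ for every dimension; the function name and the concrete bounds are illustrative.

```python
import numpy as np

def init_complex_population(N, D, A, B, rng):
    """Eqs. (12)-(14): draw modules and arguments, then form both portions."""
    rho = rng.random((N, D)) * (B - A) / 2.0            # modules, Eq. (12)
    theta = rng.uniform(-2 * np.pi, 2 * np.pi, (N, D))  # arguments, Eq. (13)
    XR = rho * np.cos(theta) + (B + A) / 2.0            # actual (real) portion
    XI = rho * np.sin(theta)                            # fictitious (imaginary) portion
    return XR, XI

rng = np.random.default_rng(0)
XR, XI = init_complex_population(N=30, D=10, A=-100.0, B=100.0, rng=rng)
```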
Updating positions of CBKA
Attacking behavior
(1) Refresh the actual portion:

$$X_{R,i,j}^{t+1}=\begin{cases}X_{R,i,j}^{t}+n\,(1+\sin(r))\times X_{R,i,j}^{t}, & p<r\\X_{R,i,j}^{t}+n\,(2r-1)\times X_{R,i,j}^{t}, & \text{otherwise}\end{cases}\tag{15}$$

(2) Refresh the fictitious portion:

$$X_{I,i,j}^{t+1}=\begin{cases}X_{I,i,j}^{t}+n\,(1+\sin(r))\times X_{I,i,j}^{t}, & p<r\\X_{I,i,j}^{t}+n\,(2r-1)\times X_{I,i,j}^{t}, & \text{otherwise}\end{cases}\tag{16}$$
Migration behavior
(1) Refresh the actual portion:

$$X_{R,i,j}^{t+1}=\begin{cases}X_{R,i,j}^{t}+C(0,1)\times\big(X_{R,i,j}^{t}-L_{R,j}^{t}\big), & F_{i}<F_{ri}\\X_{R,i,j}^{t}+C(0,1)\times\big(L_{R,j}^{t}-m\times X_{R,i,j}^{t}\big), & \text{otherwise}\end{cases}\tag{17}$$

(2) Refresh the fictitious portion:

$$X_{I,i,j}^{t+1}=\begin{cases}X_{I,i,j}^{t}+C(0,1)\times\big(X_{I,i,j}^{t}-L_{I,j}^{t}\big), & F_{i}<F_{ri}\\X_{I,i,j}^{t}+C(0,1)\times\big(L_{I,j}^{t}-m\times X_{I,i,j}^{t}\big), & \text{otherwise}\end{cases}\tag{18}$$

where $L_{R,j}^{t}$ and $L_{I,j}^{t}$ showcase the actual and fictitious portions of the leading scorer.
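The practical content of Eqs. (15)–(18) is that the assaulting and migration rules of the BKA are applied twice per iteration, once to the actual-portion matrix and once to the fictitious-portion matrix; a sketch reusing the `attack_step` and `migrate_step` helpers from the BKA section, with the leader likewise split into its two portions:

```python
def update_portions(XR, XI, fitness, leader_R, leader_I, t, T, rng):
    """Eqs. (15)-(18): renew the actual and fictitious portions independently."""
    XR = attack_step(XR, t, T, rng=rng)             # Eq. (15)
    XI = attack_step(XI, t, T, rng=rng)             # Eq. (16)
    XR = migrate_step(XR, fitness, leader_R, rng)   # Eq. (17)
    XI = migrate_step(XI, fitness, leader_I, rng)   # Eq. (18)
    return XR, XI
```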
Calculating the fitness value
The CBKA manipulates actual and fictitious portions to encode the black-winged kite, the encoding conversion region is transformed into actual and fictitious solutions, and the fitness value is stipulated as:

$$\rho_{k}=\sqrt{X_{R,k}^{2}+X_{I,k}^{2}},\qquad k=1,2,\ldots,D\tag{19}$$

$$x_{k}=\rho_{k}\,\operatorname{sgn}\!\left(\sin\!\left(\frac{X_{I,k}}{\rho_{k}}\right)\right)+\frac{A_{k}+B_{k}}{2},\qquad k=1,2,\ldots,D\tag{20}$$

where $x_{k}$ showcases the altered actual argument, and the fitness value is evaluated as $f(x_{1},x_{2},\ldots,x_{D})$.
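A sketch of the conversion of Eqs. (19)–(20) as reconstructed above, which maps each complex gene back to a real decision variable before the objective is evaluated; the small epsilon guarding the division when a modulus vanishes is an implementation assumption.

```python
import numpy as np

def to_real_solution(XR, XI, A, B, eps=1e-12):
    """Eqs. (19)-(20): modulus plus sign information recovers the real argument."""
    rho = np.sqrt(XR ** 2 + XI ** 2)                                  # Eq. (19)
    return rho * np.sign(np.sin(XI / (rho + eps))) + (B + A) / 2.0    # Eq. (20)

def evaluate(objective, XR, XI, A, B):
    """Fitness of each kite is the objective at its converted real vector."""
    real = to_real_solution(XR, XI, A, B)
    return np.apply_along_axis(objective, 1, real)
```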
The solution procedure of CBKA
The CBKA emphasizes formidable flexibility and sustainability to forestall exaggerated convergence and locate the appropriate solution, which reinforces population pluralism, restricts discovery stagnation, extends identification area, promotes estimation excellence, advances information resources, fosters collaboration efficiency, and exhibits remarkable parallelism and consistency. Algorithm 2 yields the pseudocode of CBKA. Figure 4 articulates the flowchart of CBKA.
Fig. 4.

Convergence curves of the CBKA and compared algorithms for resolving the benchmark functions.
Algorithm 2.
The pseudocode of CBKA.
Computational complexity
Computational complexity is adopted to measure the time and space resources consumed by a procedure when remedying huge-scale troublesome challenges. This subsection will investigate the computational complexity of the CBKA to emphasize sustainability and productivity.
Time complexity: the CBKA embodies three essential actions: initialization, estimating fitness, and refreshing the black-winged kite’s location. In the CBKA, $N$ showcases the population magnitude, $T$ showcases the iteration termination, and $D$ showcases the problematic dimension. (1) The complexity is related to the initialization approach and issue size. Initialization symbolizes establishing prospective solutions, embarking on parameters, and initializing other necessary procedures to advance the discovery potential and optimization productivity, and the time complexity of the initialization equals $O(N)$. (2) Estimating fitness is attempted to validate the practicality and quality of potential solutions, which necessitates sophisticated computation and experimental validation; the time complexity of estimating fitness equals $O(N\times T)$. (3) Refreshing the black-winged kite’s location executes a neighborhood discovery predation and complex-valued encoding strategy to revise the black-winged kites’ locations and furnish alternative solutions; the time complexity of refreshing the black-winged kite’s location equals $O(N\times T\times D)$. Therefore, the total time complexity of the CBKA equals $O(N\times(1+T+T\times D))$.
Space complexity: the CBKA not only showcases abundant flexibility and compatibility to accomplish supplementary advantages and sharpen resolution precision but also incorporates localized exploitation and universal exploration to forestall exaggerated convergence and cultivate desirable solutions. Space complexity covers the supplemental data storage area; the stored alternative solutions, associated intermediate outcomes, auxiliary ephemeral variables, and inevitable discovery- and extraction-related data layouts contribute to the CBKA’s space complexity utilization. In the CBKA, $N$ showcases the population magnitude, $T$ showcases the iteration termination, and $D$ showcases the problematic dimension. Therefore, the space complexity of the CBKA equals $O(N\times D)$.
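Assuming the three contributions estimated above, the totals combine as

$$O(N)+O(N\times T)+O(N\times T\times D)=O\big(N\times(1+T+T\times D)\big)\approx O(N\times T\times D),$$

so the complex-valued encoding leaves the asymptotic cost of the basic BKA unchanged and only roughly doubles the per-dimension bookkeeping, since two portion matrices are stored and updated instead of one.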
Simulation evaluation and result interpretation for benchmark functions
Experimental setup
Numerical experiments are conducted on a Windows 10 machine with an Intel Core i7-8750H 2.2 GHz CPU, a GTX1060, and 8 GB memory.
Benchmark functions
Benchmark functions involve three variations: unimodal functions ($F_{1}$–$F_{7}$), multimodal functions ($F_{8}$–$F_{12}$), and fixed-dimension multimodal functions ($F_{13}$–$F_{23}$). The unimodal functions do not exhibit local optima, and the purpose is to furnish an associated benchmark for monitoring the exploitation and search localization of metaheuristic algorithms. These functions have a particular guiding significance in quantifying the advantages and drawbacks of the algorithms and maintaining the attention concentrated on establishing the expansive optimal solution without being disturbed by false peaks. The multimodal functions preserve many locally optimal solutions, and the purpose is to furnish a fantastic foundation for investigating the entirety of the exploration and worldwide discovery of metaheuristic algorithms. The fixed-dimension multimodal functions endure fewer local optima as compared to the multimodal functions, and the purpose is to furnish an appropriate metric for quantifying the optimization efficiency of metaheuristic algorithms in harmonizing large-scale exploration and small-scale exploitation. Table 2 portrays the benchmark functions.
Table 2.
Benchmark functions.
| Benchmark functions | Dim | Range | fmin |
|---|---|---|---|
| $F_{1}$ | 30 | [−100, 100] | 0 |
| $F_{2}$ | 30 | [−10, 10] | 0 |
| $F_{3}$ | 30 | [−100, 100] | 0 |
| $F_{4}$ | 30 | [−100, 100] | 0 |
| $F_{5}$ | 30 | [−30, 30] | 0 |
| $F_{6}$ | 30 | [−100, 100] | 0 |
| $F_{7}$ | 30 | [−1.28, 1.28] | 0 |
| $F_{8}$ | 30 | [−5.12, 5.12] | 0 |
| $F_{9}$ | 30 | [−32, 32] | 0 |
| $F_{10}$ | 30 | [−600, 600] | 0 |
| $F_{11}$ | 30 | [−50, 50] | 0 |
| $F_{12}$ | 30 | [−50, 50] | 0 |
| $F_{13}$ | 2 | [−65, 65] | 0.998 |
| $F_{14}$ | 4 | [−5, 5] | 0.000307 |
| $F_{15}$ | 2 | [−5.12, 5.12] | −1 |
| $F_{16}$ | 2 | [−2, 2] | 3 |
| $F_{17}$ | 6 | [0, 1] | −3.32 |
| $F_{18}$ | 4 | [0, 10] | −10.1532 |
| $F_{19}$ | 4 | [0, 10] | −10.4029 |
| $F_{20}$ | 4 | [0, 10] | −10.5364 |
| $F_{21}$ | 2 |  | −1 |
| $F_{22}$ | 2 | [−100, 100] | −1 |
| $F_{23}$ | 10 | [−10, 10] | 0 |
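The closed-form expressions in Table 2 are not reproduced in this text, so the sketch below codes two standard members of this suite whose dimensions and ranges match the table, under the assumption that the first dim-30 row corresponds to the sphere function and the $[-5.12, 5.12]$ row to the Rastrigin function.

```python
import numpy as np

def sphere(x):
    """Unimodal: f(x) = sum(x_i^2); global minimum 0 at the origin."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multimodal: many regularly spaced local optima; global minimum 0 at the origin."""
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

x = np.zeros(30)                      # dimension 30, as in Table 2
assert sphere(x) == 0.0 and rastrigin(x) == 0.0
```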
Parameter settings
The CBKA is contrasted with GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA to emphasize practicality and accessibility. Certain representative empirical variables that are extracted from the source manuscripts serve as the control parameters. The portrayed regulation variables of each approach are stipulated as:
GTO: precarious value
, precarious values
, precarious value
, fixed value
, fixed value
.
MGO: precarious values
, precarious values
.
PO: fixed value
, fixed value
, fixed value
, fixed value
, fixed value
, fixed value
.
AVOA: fixed value
, fixed value
, fixed value
, fixed value
, fixed value
, fixed value
.
GCRA: precarious value
, fixed value
, precarious value
.
HLOA: hue angle
, fixed value
, precarious value
, precarious value
.
WO: precarious value
, precarious values
,
, precarious values
, precarious value
, precarious value
, standard deviation
, fixed value
, precarious value
.
SBOA: precarious value
, precarious value
, fixed value
, fixed value
, precarious value
, precarious value
, fixed value
.
NRBO: fixed factor
, precarious value
, precarious value
, precarious value
, precarious value
.
APO: fixed value
, fixed value
.
EHO: precarious value
, precarious value
, precarious value
.
BKA: precarious value
, fixed value
, precarious value
, Cauchy mutation
, fixed value
, fixed value
.
CBKA: precarious value
, fixed value
, precarious value
, Cauchy mutation
, fixed value
, fixed value
.
Simulation evaluation and result interpretation
For all approaches, the population magnitude, the iteration termination, and the number of independent runs are held identical. Best, Worst, Mean and Std showcase the optimal score, worst score, mean score, and standard deviation.
Table 3 portrays the comparative solutions of the benchmark functions. Twelve metaheuristic algorithms are used as comparison methods to resolve the function evaluations and ensure the dependability and superiority of the CBKA. The optimal score (Best), worst score (Worst), mean score (Mean), and standard deviation (Std) are regarded as the most comprehensive and standardized assessment indications to track the stability and robustness of each method. The optimal score is inextricably linked to the fitness score of multi-model, multi-objective, large-scale, and uncertain issues, which is to recognize the lowest or highest score in the entire fixed search area. The global optimal score represents the most effective search agent of all candidate solutions, which infinitely overlaps the position close to the prey during the foraging and capturing operations of the entire detection population. The optimal score highlights the detection efficiency and exploitation accuracy. The worst score means that the worst candidate solution is obtained in the independent operation results of a certain algorithm. The gap between the worst and the mean scores highlights that the comparison algorithm produces a slower discovery efficiency and poorer execution accuracy to fall into the local optimality and premature convergence, which can indirectly highlight the stability and feasibility. The mean score is the arithmetic average value obtained by calculating and evaluating the population size, maximum iteration, and independent operation. The mean score clearly highlights stability, robustness, overall search efficiency, and global detection accuracy. The standard deviation is a statistic that measures the degree of deviation of various possible results form the expectation in the probability distribution, which reflects the fluctuation of the value in the data set relative to the mean score. The smaller the standard deviation, the smaller the data dispersion and more stable the data changes. The larger the standard deviation, the greater the data dispersion and the more unstable the data changes. For unimodal functions
, the optimal scores, worst scores, mean scores and standard deviations of CBKA, GTO, AVOA, and GCRA remain at the same magnitude and consistent for functions
,
,
and
. The evaluation accuracy of the CBKA is superior to those of the MGO, PO, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA can reasonably deploy large-scale discovery and small-scale extraction to foster aggregate discovery intensity and advance widespread computational efficacy. The CBKA demonstrates strong stability and reliability. The evaluation accuracy of the CBKA has been augmented astronomically in juxtaposition to BKA. The CBKA manipulates a dual-diploid organization to encode the black-winged kite, reinforce population pluralism, restrict discovery stagnation, extend identification area, promote estimation excellence, advance information resources, and foster collaboration efficiency. For
, the evaluation accuracy of the CBKA has been slightly enhanced, the CBKA has the weakest difference between the worst score and the mean score. The CBKA has the strength and reliability to achieve the exact optimal scores. The CBKA has the smallest standard deviation, showcasing abundant adaptability and versatility to determine a more stable evaluation accuracy. For
and
, the calculation magnitude and evaluation accuracy of the CBKA are superior to those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA utilizes the dual-diploid organization of the complex-valued encoding to transform the black-winged kite into an individual with actual and fictitious portions, expand the detection area, and avoid local optimality. For multimodal functions
, the optimal scores, worst scores, mean scores, and standard deviations of the CBKA, GTO, MGO, PO, AVOA, GCRA, HLOA, WO, NRBO, and BKA remain at the same magnitude and consistent for $F_{8}$
. The evaluation accuracy of the CBKA is superior to those of the SBOA, APO, and EHO. The CBKA showcases abundant sustainability and versatility to forestall exaggerated convergence and locate the appropriate solution. For
, the calculation magnitude and evaluation accuracy of the CBKA, GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, and BKA are the same and consistent, the comparative solutions of the CBKA outperform those of the APO and EHO. The CBKA exhibits delightful reliability and adaptability to enrich the detection information capacity and promote global convergence performance. For
, the optimal scores, worst scores, mean scores, and standard deviations of the CBKA, GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, and BKA are the global exact solutions, the evaluation accuracy of the CBKA is superior to those of the APO and EHO. The CBKA exhibits suitability and affordability to foster aggregate discovery intensity, advance widespread computational efficacy, and retrieve an adequate universal solution. For
and
, the evaluation accuracy of the CBKA has been slightly enhanced compared to the BKA, and the CBKA has the weakest difference between the optimal, worst, and mean scores. The CBKA showcases the smallest standard deviation, and the CBKA receives exceptional consistency and endurance to promote exploration efficiency and disrupt anticipation stagnation. For fixed-dimension multimodal
, the optimal, worst, and mean scores of the CBKA, MGO, and APO remain at the same magnitude, and the global exact solutions for
and
. The evaluation accuracy of the CBKA is superior to those of the GTO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, EHO, and BKA. The CBKA exhibits a relatively smaller standard deviation and maintains instructive supremacy and stabilization to recognize universal solutions. For
and
, the calculation magnitude and evaluation accuracy of the CBKA, GTO, MGO, HLOA, WO, SBOA, NRBO, APO, and BKA are the same and consistent, the comparative solutions of the CBKA outperform those of the PO, AVOA, GCRA and EHO. The CBKA exhibits suitability and affordability to explore a superior assessment precision and a swifter convergence rate. The variation in computational magnitude and evaluation accuracy between these algorithms is subtle. The CBKA has smaller standard deviations, indicating that the CBKA deploys large-scale exploration and small-scale exploitation to enhance overall search efficiency and improve optimization stability. For
, the evaluation accuracy of the CBKA is superior to those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA, the CBKA has strong reliability and robustness to locate the exact assessment precision. For
,
and
, the optimal scores of the GTO, MGO, PO, AVOA, HLOA, WO, SBOA, NRBO, APO, EHO, BKA, and CBKA remain at the same magnitude and consistent, but the worst scores, mean scores, and standard deviations of the CBKA outperform those of the comparison procedures. The CBKA reconciles localized exploitation and universal exploration to forestall exaggerated convergence and locate the appropriate solution. For
and
, all procedures utilize their own global search properties and location update strategies to obtain the exact solutions, and the CBKA showcases abundant adaptability and versatility to reap supplementary advantages and explore superior assessment precision. For
, the computational magnitude and evaluation accuracy of the CBKA are superior to those of the MGO, PO, GCRA, HLOA, WO, SBOA, APO, EHO, and BKA, and the CBKA maintains excellent equilibrium and endurance to enhance the exploration efficiency and advance widespread computational efficacy.
Table 3.
Comparative solutions of benchmark functions.
| Function | Result | GTO | MGO | PO | AVOA | GCRA | HLOA | WO | SBOA | NRBO | APO | EHO | BKA | CBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
![]() |
Best | 0 | 1.5E-180 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3.45E-10 | 2.66E-22 | 3.3E-219 | 0 |
| Worst | 0 | 4.1E-165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.55E-08 | 1.90E-18 | 1.5E-166 | 0 | |
| Mean | 0 | 2.3E-166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2.30E-09 | 1.57E-19 | 4.9E-168 | 0 | |
| Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2.96E-09 | 4.56E-19 | 0 | 0 | |
![]() |
Best | 0 | 3.3E-102 | 5.9E-277 | 0 | 0 | 4.6E-295 | 4.0E-197 | 8.1E-189 | 0 | 7.31E-06 | 2.49E-14 | 2.7E-111 | 0 |
| Worst | 0 | 2.60E-95 | 3.4E-267 | 0 | 0 | 3.3E-250 | 1.9E-161 | 2.7E-164 | 2.4E-305 | 5.89E-05 | 5.16E-11 | 8.90E-99 | 0 | |
| Mean | 0 | 1.70E-96 | 2.4E-268 | 0 | 0 | 1.6E-251 | 6.4E-163 | 9.2E-166 | 1.2E-306 | 2.30E-05 | 3.39E-12 | 5.1E-100 | 0 | |
| Std | 0 | 5.51E-96 | 0 | 0 | 0 | 0 | 3.1E-162 | 0 | 0 | 1.25E-05 | 9.60E-12 | 2.00E-99 | 0 | |
![]() |
Best | 0 | 7.46E-33 | 0 | 0 | 0 | 0 | 0 | 1.2E-241 | 0 | 7.70E-07 | 2.626421 | 1.8E-218 | 0 |
| Worst | 0 | 3.64E-20 | 0 | 0 | 0 | 0 | 0 | 6.0E-206 | 0 | 3.09E-05 | 40.65992 | 1.2E-183 | 0 | |
| Mean | 0 | 1.40E-21 | 0 | 0 | 0 | 0 | 0 | 2.0E-207 | 0 | 7.94E-06 | 12.04016 | 3.9E-185 | 0 | |
| Std | 0 | 6.64E-21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8.22E-06 | 8.628604 | 0 | 0 | |
![]() |
Best | 0 | 3.72E-64 | 5.9E-276 | 0 | 0 | 1.3E-271 | 7.3E-191 | 3.0E-148 | 4.7E-308 | 0.010068 | 3.896559 | 3.0E-108 | 0 |
| Worst | 0 | 1.03E-52 | 1.2E-268 | 0 | 0 | 1.0E-246 | 1.0E-156 | 1.1E-130 | 3.1E-299 | 0.046604 | 22.69124 | 5.92E-82 | 0 | |
| Mean | 0 | 3.44E-54 | 4.5E-270 | 0 | 0 | 4.0E-248 | 3.3E-158 | 3.6E-132 | 1.2E-300 | 0.024490 | 9.496622 | 1.97E-83 | 0 | |
| Std | 0 | 1.87E-53 | 0 | 0 | 0 | 0 | 1.8E-157 | 2.0E-131 | 0 | 0.009466 | 3.663021 | 1.08E-82 | 0 | |
![]() |
Best | 6.47E-09 | 0 | 3.91E-05 | 2.27E-07 | 1.22E-09 | 0.005377 | 2.97E-05 | 22.81122 | 26.35627 | 0.000520 | 2.678482 | 24.79555 | 0 |
| Worst | 1.92E-05 | 2.39E-29 | 25.40717 | 1.42E-05 | 2.02E-05 | 28.70675 | 0.237591 | 23.57569 | 28.81243 | 26.05845 | 141.7184 | 28.93170 | 3.10E-29 | |
| Mean | 4.47E-06 | 1.46E-30 | 23.05609 | 2.85E-06 | 4.16E-06 | 25.80643 | 0.022344 | 23.26998 | 27.70786 | 11.88652 | 39.99899 | 26.35369 | 1.32E-30 | |
| Std | 5.45E-06 | 5.28E-30 | 6.282092 | 2.77E-06 | 5.67E-06 | 8.744028 | 0.050907 | 0.213851 | 0.799264 | 12.92475 | 33.96517 | 1.152647 | 5.71E-30 | |
![]() |
Best | 7.72E-18 | 2.25E-29 | 6.56E-11 | 1.73E-09 | 3.10E-11 | 8.00E-06 | 8.19E-08 | 1.04E-15 | 1.619463 | 9.97E-09 | 6.22E-22 | 3.38E-05 | 1.34E-26 |
| Worst | 2.31E-14 | 2.54E-18 | 1.44E-08 | 2.02E-08 | 5.79E-07 | 0.000246 | 0.000417 | 5.08E-13 | 2.754444 | 2.68E-07 | 3.24E-19 | 6.036753 | 1.33E-21 | |
| Mean | 3.44E-15 | 8.47E-20 | 2.07E-09 | 5.27E-09 | 7.40E-08 | 7.14E-05 | 6.12E-05 | 6.31E-14 | 2.208682 | 6.08E-08 | 4.86E-20 | 0.799383 | 1.04E-22 | |
| Std | 6.07E-15 | 4.64E-19 | 2.89E-09 | 3.73E-09 | 1.37E-07 | 5.46E-05 | 8.94E-05 | 1.13E-13 | 0.295708 | 5.47E-08 | 8.78E-20 | 1.740813 | 2.98E-22 | |
![]() |
Best | 9.03E-07 | 1.47E-05 | 8.64E-06 | 2.70E-06 | 8.52E-07 | 1.14E-05 | 5.72E-06 | 5.97E-06 | 1.37E-06 | 0.007746 | 0.018150 | 1.78E-06 | 2.75E-07 |
| Worst | 0.000193 | 0.000662 | 0.000229 | 0.000343 | 0.000134 | 0.000314 | 0.000469 | 0.000358 | 0.000248 | 0.024298 | 0.115852 | 0.000334 | 4.47E-05 | |
| Mean | 3.99E-05 | 0.000141 | 6.41E-05 | 5.45E-05 | 4.85E-05 | 9.66E-05 | 0.000136 | 0.000153 | 6.94E-05 | 0.013823 | 0.058306 | 7.34E-05 | 1.61E-05 | |
| Std | 4.16E-05 | 0.000133 | 5.10E-05 | 6.92E-05 | 3.52E-05 | 7.46E-05 | 0.000122 | 8.56E-05 | 6.60E-05 | 0.004052 | 0.026607 | 6.94E-05 | 1.15E-05 | |
![]() |
Best | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20.08312 | 15.91934 | 0 | 0 |
| Worst | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6.005041 | 0 | 155.9933 | 58.70249 | 0 | 0 | |
| Mean | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.200168 | 0 | 73.71032 | 30.47889 | 0 | 0 | |
| Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.096365 | 0 | 35.97173 | 10.13477 | 0 | 0 | |
![]() |
Best | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 6.18E-06 | 1.86E-11 | 8.88E-16 | 8.88E-16 |
| Worst | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 2.45E-05 | 4.736201 | 8.88E-16 | 8.88E-16 | |
| Mean | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.13E-05 | 1.876198 | 8.88E-16 | 8.88E-16 | |
| Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4.61E-06 | 1.013469 | 0 | 0 | |
![]() |
Best | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7.57E-10 | 0 | 0 | 0 |
| Worst | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012321 | 0.127386 | 0 | 0 | |
| Mean | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000411 | 0.020849 | 0 | 0 | |
| Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002250 | 0.028999 | 0 | 0 | |
![]() |
Best | 6.03E-18 | 1.57E-32 | 5.76E-12 | 8.47E-11 | 1.75E-12 | 2.44E-07 | 5.87E-10 | 8.66E-18 | 0.094215 | 1.90E-10 | 2.01E-23 | 1.96E-06 | 1.57E-32 |
| Worst | 6.38E-14 | 1.57E-32 | 1.41E-09 | 8.50E-10 | 3.68E-08 | 0.103671 | 5.96E-06 | 8.99E-09 | 0.344426 | 3.82E-09 | 3.031747 | 0.460621 | 5.02E-12 | |
| Mean | 6.07E-15 | 1.57E-32 | 1.43E-10 | 3.01E-10 | 3.06E-09 | 0.003458 | 3.56E-07 | 4.26E-10 | 0.197062 | 1.46E-09 | 0.474842 | 0.033007 | 1.67E-13 | |
| Std | 1.50E-14 | 5.57E-48 | 2.61E-10 | 1.93E-10 | 7.11E-09 | 0.018927 | 1.08E-06 | 1.76E-09 | 0.066425 | 1.04E-09 | 0.850770 | 0.091725 | 9.17E-13 | |
![]() |
Best | 5.58E-18 | 1.35E-32 | 3.17E-11 | 2.78E-10 | 4.82E-12 | 5.49E-06 | 2.44E-08 | 4.28E-15 | 1.308839 | 3.87E-09 | 1.05E-20 | 0.742207 | 1.35E-32 |
| Worst | 2.49E-09 | 1.35E-32 | 6.18E-09 | 1.41E-08 | 3.57E-07 | 1.874620 | 1.79E-05 | 0.197740 | 2.882751 | 0.010987 | 3.597465 | 2.451350 | 6.66E-22 | |
| Mean | 8.31E-11 | 1.35E-32 | 1.23E-09 | 2.11E-09 | 2.60E-08 | 0.073584 | 3.76E-06 | 0.019991 | 1.954286 | 0.000733 | 0.495947 | 1.346790 | 3.45E-23 | |
| Std | 4.55E-10 | 5.57E-48 | 1.42E-09 | 2.53E-09 | 6.57E-08 | 0.341755 | 4.34E-06 | 0.047508 | 0.431302 | 0.002788 | 1.003017 | 0.434012 | 1.37E-22 | |
![]() |
Best | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 |
| Worst | 0.998004 | 0.998004 | 0.998004 | 1.992031 | 0.998004 | 12.67051 | 0.998004 | 0.998004 | 12.67051 | 0.998004 | 3.968250 | 0.998004 | 0.998004 | |
| Mean | 0.998004 | 0.998004 | 0.998004 | 1.064272 | 0.998004 | 6.186815 | 0.998004 | 0.998004 | 1.651634 | 0.998004 | 1.361954 | 0.998004 | 0.998004 | |
| Std | 0 | 1.93E-16 | 0 | 0.252193 | 2.55E-12 | 4.212248 | 2.90E-16 | 0 | 2.190709 | 0 | 0.712300 | 5.83E-17 | 0 | |
![]() |
Best | 0.000307 | 0.000307 | 0.000307 | 0.000307 | 0.000419 | 0.000307 | 0.000307 | 0.000307 | 0.000307 | 0.000307 | 0.000307 | 0.000307 | 0.000307 |
| Worst | 0.001223 | 0.000307 | 0.001223 | 0.000342 | 0.001674 | 0.020363 | 0.000737 | 0.020363 | 0.020363 | 0.000307 | 0.001212 | 0.020363 | 0.000307 | |
| Mean | 0.000399 | 0.000307 | 0.000399 | 0.000309 | 0.001611 | 0.003919 | 0.000331 | 0.001736 | 0.003711 | 0.000307 | 0.000841 | 0.001962 | 0.000307 | |
| Std | 0.000279 | 5.37E-18 | 0.000279 | 6.21E-06 | 0.000254 | 0.007497 | 7.96E-05 | 0.005071 | 0.007578 | 1.95E-19 | 0.000307 | 0.005023 | 2.02E-19 | |
![]() |
Best | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 |
| Worst | -1 | -1 | -0.93625 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -0.93625 | -1 | -1 | |
| Mean | -1 | -1 | -0.99787 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -0.99362 | -1 | -1 | |
| Std | 0 | 0 | 0.011640 | 0 | 1.26E-10 | 0 | 0 | 0 | 0 | 0 | 0.019453 | 0 | 0 | |
![]() |
Best | 3 | 3 | 3 | 3 | 3.033880 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Worst | 3 | 3 | 3 | 3.000004 | 32.68463 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | |
| Mean | 3 | 3 | 3 | 3.000001 | 17.26016 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | |
| Std | 1.30E-15 | 1.35E-15 | 1.38E-15 | 8.57E-07 | 11.21137 | 1.07E-14 | 6.83E-15 | 1.25E-15 | 2.10E-15 | 1.81E-15 | 1.56E-15 | 1.26E-15 | 1.66E-15 | |
![]() |
Best | -3.32200 | -3.32200 | -3.32200 | -3.32200 | -2.76541 | -3.32200 | -3.32200 | -3.32200 | -3.32200 | -3.32200 | -3.32200 | -3.32200 | -3.32200 |
| Worst | -3.20310 | -3.20310 | -3.20310 | -3.20310 | -1.16984 | -3.08668 | -3.03515 | -3.20310 | -3.09417 | -3.20310 | -3.20310 | -3.12916 | -3.32200 | |
| Mean | -3.27840 | -3.25066 | -3.26255 | -3.25462 | -2.03807 | -3.26759 | -3.24902 | -3.27444 | -3.24090 | -3.31407 | -3.26651 | -3.29575 | -3.32200 | |
| Std | 0.058273 | 0.059241 | 0.060463 | 0.059923 | 0.463088 | 0.067481 | 0.071609 | 0.059241 | 0.077155 | 0.030164 | 0.060328 | 0.054838 | 4.73E-15 | |
![]() |
Best | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 |
| Worst | -10.1532 | -10.1532 | -2.63047 | -10.1532 | -10.1487 | -2.63047 | -10.1532 | -10.1532 | -9.71179 | -10.1532 | -2.63047 | -10.1532 | -10.1532 | |
| Mean | -10.1532 | -10.1532 | -9.90244 | -10.1532 | -10.1526 | -9.14953 | -10.1532 | -10.1532 | -10.1383 | -10.1532 | -7.57178 | -10.1532 | -10.1532 | |
| Std | 6.85E-15 | 6.08E-15 | 1.373456 | 4.06E-14 | 0.001099 | 2.600697 | 7.44E-13 | 6.62E-15 | 0.080549 | 7.23E-15 | 3.498603 | 5.54E-15 | 6.68E-15 | |
![]() |
Best | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4028 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029 |
| Worst | -10.4029 | -10.4029 | -3.72430 | -10.4029 | -10.4008 | -1.83759 | -10.4029 | -10.4029 | -8.51057 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | |
| Mean | -10.4029 | -10.4029 | -10.1803 | -10.4029 | -10.4025 | -8.52768 | -10.4029 | -10.4029 | -10.3301 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | |
| Std | 8.73E-16 | 8.08E-16 | 1.219347 | 2.82E-14 | 0.000503 | 3.463578 | 3.57E-12 | 1.23E-15 | 0.346642 | 1.55E-15 | 1.28E-15 | 1.48E-15 | 8.08E-16 | |
![]() |
Best | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5363 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364 |
| Worst | -10.5364 | -10.5364 | -2.80663 | -10.5364 | -10.5355 | -1.67655 | -10.5364 | -10.5364 | -3.35462 | -10.5364 | -2.42173 | -3.83543 | -10.5364 | |
| Mean | -10.5364 | -10.5364 | -9.83202 | -10.5364 | -10.5361 | -7.52129 | -10.5364 | -10.5364 | -10.0108 | -10.5364 | -9.77207 | -10.3130 | -10.5364 | |
| Std | 2.36E-15 | 2.42E-15 | 2.154953 | 3.91E-14 | 0.000283 | 4.045844 | 1.86E-11 | 1.98E-15 | 1.604531 | 1.78E-15 | 2.342062 | 1.223427 | 1.23E-15 | |
![]() |
Best | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 |
| Worst | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | |
| Mean | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | |
| Std | 0 | 0 | 0 | 0 | 2.46E-07 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
![]() |
Best | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 |
| Worst | -1 | -1 | -0.99028 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -0.99028 | -1 | -1 | |
| Mean | -1 | -1 | -0.99968 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -0.99223 | -1 | -1 | |
| Std | 0 | 0 | 0.001774 | 0 | 1.32E-10 | 0 | 0 | 0 | 0 | 0 | 0.003953 | 0 | 0 | |
![]() |
Best | 0 | 0 | 1.2E-273 | 0 | 8.24E-14 | 1.1E-274 | 2.0E-211 | 1.7E-239 | 0 | 1.03E-16 | 9.8E-248 | 8.2E-114 | 0 |
| Worst | 0 | 3.47E-18 | 1.9E-260 | 0 | 1.83E-05 | 3.5E-256 | 2.0E-172 | 2.29E-25 | 0 | 7.02E-12 | 6.38E-15 | 6.62E-87 | 0 | |
| Mean | 0 | 1.16E-19 | 6.2E-262 | 0 | 1.09E-06 | 2.3E-257 | 6.8E-174 | 7.63E-27 | 0 | 6.58E-13 | 3.90E-16 | 2.21E-88 | 0 | |
| Std | 0 | 6.33E-19 | 0 | 0 | 3.63E-06 | 0 | 0 | 4.18E-26 | 0 | 1.47E-12 | 1.16E-15 | 1.21E-87 | 0 |
To enhance clarity and relevance, we have reduced the number of iterations and re-plotted the graphs accordingly for the benchmarks. Figure 4 portrays the convergence curves of the CBKA and the compared algorithms for resolving the benchmark functions. The convergence curves show the compared algorithms’ assessment precision and convergence efficiency. A higher convergence efficiency means that the compared algorithm exhibits high optimization efficiency and high convergence performance to avoid search stagnation. A higher calculation accuracy means that the compared algorithm has excellent detection breadth and exploitation efficiency for determining the global optimal feasible solution. For unimodal functions $F_{1}$–$F_{7}$
, the convergence productivity and assessment precision of the CBKA outperform those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA exhibits delightful reliability and adaptability to enrich the detection information capacity and promote global convergence performance. The CBKA features strong stability and parallelism to avoid search stagnation and obtain higher convergence accuracy. For multimodal functions $F_{8}$–$F_{12}$
, the optimal scores, worst scores, mean scores, and standard deviations of the CBKA are superior to those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA exhibits remarkable parallelism and consistency to restrict discovery stagnation, extend identification area, and foster collaboration efficiency in terms of convergence productivity and assessment precision. For fixed-dimension multimodal functions $F_{13}$–$F_{23}$
, the CBKA exhibits suitability and affordability to explore a superior assessment precision and a swifter convergence rate, and the variation in computational magnitude and evaluation accuracy among the algorithms is subtle. Compared with the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA, the CBKA features instructive superiority and reliability to cultivate better convergence productivity and superior assessment precision. The CBKA showcases abundant flexibility and versatility to sharpen resolution precision and incorporates localized exploitation and universal exploration to cultivate desirable solutions.
Figure 5 portrays the boxplots of the CBKA and the compared algorithms for resolving the benchmark functions. The standard deviation is a statistic that measures the degree of deviation of various possible results from the expectation in the probability distribution, which reflects the fluctuation of the values in the data set relative to the mean score. The standard deviation is inextricably linked to the dispersion of sample data, which exhibits stability and robustness. A lower standard deviation means that the compared algorithm has strong effectiveness and feasibility in advancing information resources and balances exploration and exploitation to locate the appropriate solution. For unimodal functions $F_{1}$–$F_{7}$
, the CBKA utilizes the dual-diploid organization of the complex-valued encoding to transform the black-winged kite into an individual with actual and fictitious portions, expand the detection area, avoid local optimality, and enhance calculation magnitude and evaluation accuracy. The standard deviations and stability of the CBKA are superior to those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA features fantastic durability and adaptability to advance inherent parallelism and locate the appropriate solution. For multimodal functions
, the CBKA showcases abundant sustainability and versatility to forestall exaggerated convergence and discover the proper solution. Compared with GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA, the CBKA features more minor standard deviations and more substantial stability. The CBKA attributes admirable consistency and endurance to promote inherent parallelism and enhance local exploitation capability. For fixed-dimension multimodal
, the standard deviations and stability of the CBKA are superior to those of the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, and BKA. The CBKA emphasizes formidable flexibility and sustainability to renew each black-winged kite’s actual and fictitious portions to advance widespread computational efficacy and forestall exaggerated convergence. The CBKA recognizes the supplementary advantages of BKA and the actual and fictitious portions to mitigate the marginalized resolution efficiency, inferior assessment precision, and sensitive anticipation stagnation, which reconciles localized exploitation and universal exploration to forestall exaggerated convergence and locate the appropriate solution.
Fig. 5.
Boxplots of the CBKA and compared algorithms for resolving the benchmark functions.
The Wilcoxon rank-sum test is executed at the 5% significance level to ascertain if there is an instructive distinction between the CBKA and the other procedures71. $p<0.05$ indicates an instructive distinction, $p\geq 0.05$ indicates no instructive distinction, and N/A is “not applicable”. Table 4 portrays the comparative solutions of the Wilcoxon rank-sum test, and a minimal computation sketch follows the table.
Table 4.
Comparative solutions of Wilcoxon rank-sum test.
| Function | GTO | MGO | PO | AVOA | GCRA | HLOA | WO | SBOA | NRBO | APO | EHO | BKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 |
![]() |
4.11E-12 | 9.80E-03 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 | 4.11E-12 |
![]() |
3.02E-11 | 2.46E-02 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 5.49E-11 | 3.02E-11 |
![]() |
1.50E-02 | 7.38E-10 | 5.09E-06 | 3.37E-04 | 1.39E-06 | 6.01E-08 | 5.46E-09 | 9.76E-10 | 2.77E-05 | 3.02E-11 | 3.02E-11 | 8.84E-07 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 3.02E-11 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 3.02E-11 |
![]() |
1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 5.76E-11 | 1.10E-02 |
![]() |
1.45E-10 | 1.10E-02 | 6.48E-12 | 6.48E-12 | 8.92E-12 | 6.48E-12 | 6.48E-12 | 1.19E-10 | 6.48E-12 | 6.48E-12 | 7.14E-11 | 6.48E-12 |
![]() |
3.16E-12 | 8.15E-03 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 | 3.16E-12 |
![]() |
N/A | 5.54E-03 | N/A | 1.44E-09 | 1.21E-12 | 5.76E-11 | 9.71E-12 | 3.34E-11 | 3.14E-07 | 3.69E-11 | 2.78E-03 | 1.61E-02 |
![]() |
2.55E-02 | 2.69E-11 | 9.47E-07 | 2.69E-11 | 2.69E-11 | 2.96E-11 | 2.69E-11 | 2.69E-11 | 1.91E-07 | 3.02E-11 | 4.91E-11 | 2.81E-08 |
![]() |
3.07E-07 | 1.45E-11 | 1.71E-04 | 1.45E-11 | 4.25E-02 | 1.45E-11 | 1.13E-03 | N/A | N/A | 2.14E-02 | 8.14E-04 | N/A |
![]() |
2.44E-04 | 3.65E-05 | 1.39E-05 | 2.25E-11 | 2.25E-11 | 7.29E-11 | 2.21E-11 | 7.19E-03 | 3.80E-08 | 2.69E-11 | 7.15E-09 | 6.94E-05 |
![]() |
2.41E-03 | 8.58E-03 | 2.78E-03 | 1.96E-11 | 1.59E-11 | 1.59E-11 | 1.59E-11 | 2.99E-03 | 1.59E-11 | 6.07E-03 | 8.81E-05 | 1.59E-11 |
![]() |
3.99E-03 | 1.07E-02 | 7.96E-03 | 1.09E-11 | 1.14E-11 | 1.13E-11 | 1.84E-11 | 7.95E-05 | 3.79E-10 | 6.18E-04 | 2.26E-02 | 4.75E-05 |
![]() |
7.63E-08 | N/A | 2.79E-03 | 6.03E-12 | 6.32E-12 | 6.28E-12 | 6.31E-12 | 3.05E-02 | 5.60E-06 | 4.18E-05 | 1.61E-02 | 8.93E-04 |
![]() |
6.15E-03 | 1.12E-04 | 3.63E-11 | 1.40E-11 | 1.45E-11 | 1.44E-11 | 1.45E-11 | 5.32E-04 | 2.87E-06 | 1.90E-05 | 5.27E-03 | 4.87E-03 |
![]() |
2.95E-11 | 1.59E-11 | 4.39E-03 | 1.59E-11 | 1.21E-12 | 5.80E-07 | 1.14E-11 | 1.85E-10 | 1.14E-11 | 6.00E-05 | 1.14E-11 | 4.10E-03 |
![]() |
6.39E-07 | 6.32E-12 | 1.13E-08 | 6.32E-12 | 8.12E-08 | 6.32E-12 | 1.58E-04 | 5.95E-05 | 1.21E-12 | N/A | 4.67E-10 | 1.59E-11 |
![]() |
2.48E-08 | 5.77E-11 | 1.21E-12 | 2.25E-11 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.66E-11 | 1.21E-12 | 1.13E-12 | 1.21E-12 |
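A minimal sketch of how one pairwise entry of Table 4 can be computed with SciPy’s rank-sum test, assuming the per-run best scores of two algorithms are available as arrays; the samples below are synthetic placeholders that only illustrate the call.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cbka_runs = rng.normal(loc=1e-6, scale=1e-7, size=30)   # placeholder run results
bka_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)    # placeholder run results

stat, p_value = ranksums(cbka_runs, bka_runs)
significant = p_value < 0.05        # "instructive distinction" at the 5% level
print(f"p = {p_value:.3e}, instructive distinction: {significant}")
```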
CBKA for adaptive infinite impulse response system identification
Adaptive infinite impulse response system identification
The adaptive infinite impulse response (IIR) system potentially incorporates a multifaceted error structure, and accurately acquiring the trustworthy filter coefficients for system simulation remains sophisticated. The CBKA is incorporated to resolve the IIR system identification, and the fundamental objective is to establish the most advantageous modulating coefficients, mitigate the mean square error (MSE) between the unanticipated system’s output and the IIR system’s output for the same input, and recognize an appropriate transfer function that corresponds to the unanticipated system. Figure 6 articulates the adaptive IIR system identification via CBKA.
Fig. 6.
The adaptive IIR system identification via CBKA.
The input $x(k)$ and output $y(k)$ of the IIR system are stipulated as:

$$y(k)+\sum_{n=1}^{N_{f}}a_{n}\,y(k-n)=\sum_{m=0}^{M_{f}}b_{m}\,x(k-m)\tag{21}$$

where $M_{f}$ showcases the feedforward order, $N_{f}$ showcases the feedback order, $a_{n}$ showcases the pole factor, and $b_{m}$ showcases the zero factor. The transfer function $H(z)$ is stipulated as:

$$H(z)=\frac{\sum_{m=0}^{M_{f}}b_{m}z^{-m}}{1+\sum_{n=1}^{N_{f}}a_{n}z^{-n}}\tag{22}$$

The discrepancy between the unanticipated system and the IIR system is $e(k)=d(k)-y(k)$, where $d(k)$ showcases the output of the unanticipated system. The MSE is stipulated as:

$$J(\theta)=\frac{1}{W}\sum_{k=1}^{W}e^{2}(k)=\frac{1}{W}\sum_{k=1}^{W}\big(d(k)-y(k)\big)^{2}\tag{23}$$

where $W$ showcases the input sample number and $\theta=[b_{0},b_{1},\ldots,b_{M_{f}},a_{1},a_{2},\ldots,a_{N_{f}}]$ showcases the coefficient factor.
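A sketch of the MSE objective of Eqs. (21)–(23) under the convention reconstructed above (numerator coefficients $b_{m}$, denominator $1+\sum_{n}a_{n}z^{-n}$), using `scipy.signal.lfilter` to run the candidate filter; the plant coefficients in the usage example are illustrative and are not claimed to be the paper’s benchmark cases.

```python
import numpy as np
from scipy.signal import lfilter

def iir_mse(theta, m_order, n_order, x, d):
    """Eq. (23): MSE between the unanticipated system's output d(k) and the
    candidate IIR filter's output, with theta = [b_0..b_M, a_1..a_N] (Eq. (22))."""
    b = theta[:m_order + 1]
    a = np.concatenate(([1.0], theta[m_order + 1:m_order + 1 + n_order]))
    y = lfilter(b, a, x)              # candidate filter output, Eq. (21)
    return np.mean((d - y) ** 2)      # mean squared discrepancy e(k)

# Illustrative usage: a second-order example plant excited by white noise (assumption).
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d = lfilter([0.05, -0.4], [1.0, -1.1314, 0.25], x)
mse_at_truth = iir_mse(np.array([0.05, -0.4, -1.1314, 0.25]), 1, 2, x, d)  # ~0
```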
CBKA-based adaptive IIR system identification
Algorithm 3 emphasizes the CBKA-based adaptive IIR system identification.
Algorithm 3.
CBKA-based adaptive IIR system identification
Parameter settings
The CBKA is contrasted with BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO and BKA to emphasize the practicality and accessibility. Certain representative empirical variables that are extracted from the source manuscripts serve as the control parameters. The portrayed regulation variables of each approach are stipulated as:
BSLO: fixed value
, fixed value
, fixed value
, fixed value
, fixed value
, fixed value
.
EGO: precarious value
, precarious value
, precarious value
.
FLO: precarious value
, fixed value
.
GOOSE: precarious value
, precarious value
, precarious value
, precarious value
.
HLOA: hue angle
, fixed value
, precarious value
, precarious value
.
TTAO: precarious value
, precarious value
.
WO: precarious value
, precarious values
,
, precarious values
, precarious value
, precarious value
, standard deviation
, fixed value
, precarious value
.
YDSE: wavelength
, distance between two slits
, distance between the barrier and the projection screen
, distance between light source and barrier
, constant value
.
SCHO: precarious value
, fixed value
, precarious value
, fixed value
, fixed value
, fixed value
, fixed value
, fixed value
, fixed value
.
SWO: fixed value
, fixed value
, precarious value
, precarious value
.
GAO: precarious value
, fixed value
.
BKA: precarious value
, fixed value
, precarious value
, Cauchy mutation
, fixed value
, fixed value
.
CBKA: precarious value
, fixed value
, precarious value
, Cauchy mutation
, fixed value
, fixed value
.
Simulation evaluation and result interpretation
The CBKA emphasizes formidable flexibility and sustainability to renew the actual and fictitious portions of each black-winged kite and reconciles localized exploitation and universal exploration to forestall exaggerated convergence and locate the appropriate solution.
For case 1, each methodology promotes a first-order IIR filter to discern a second-order structure; the unanticipated system
and IIR filter
are stipulated as:
![]() |
24 |
![]() |
25 |
For case 2, each methodology promotes a second-order IIR filter to discern a second-order structure; the unanticipated system
and IIR filter
are stipulated as:
![]() |
26 |
![]() |
27 |
For case 3, each methodology promotes a higher-order IIR filter to discern a higher-order structure; the unanticipated system
and IIR filter
are stipulated as:
![]() |
28 |
![]() |
29 |
Table 5 portrays each approach’s experimental results (MSE) for cases 1, 2, and 3. Table 6 portrays each approach’s experimental results (average estimation parameters) for cases 1, 2, and 3. For each approach,
,
and
. Best, Worst, Mean, and Std are regarded as the most comprehensive and conventional assessment metrics for recognizing the stability and robustness of each method.

For case 1, the optimal score, worst score, mean score, standard deviation, and average estimation parameters of CBKA are superior to those of the BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO, and BKA. The CBKA utilizes large-scale discovery and small-scale extraction to foster aggregate discovery intensity, advance widespread computational efficacy, and retrieve an adequate universal solution. The CBKA demonstrates strong stability and reliability to restrict discovery stagnation and achieve the exact optimal score.

For case 2, the optimal scores of CBKA and BKA remain consistent and at the same magnitude. The worst score, mean score, and standard deviation of CBKA are significantly enhanced compared to those of the BKA. The computational magnitude, evaluation accuracy, and average estimation parameters of the CBKA are superior to those of the BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO, and BKA. The CBKA manipulates a dual-diploid organization to encode the black-winged kite, reinforce population pluralism, restrict discovery stagnation, extend the identification area, promote estimation excellence, advance information resources, and foster collaboration efficiency.

For case 3, compared with the BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO, and BKA, the CBKA showcases abundant adaptability and versatility to achieve a better optimal score, worst score, mean score, standard deviation, and average estimation parameters. The CBKA utilizes the dual-diploid organization of the complex-valued encoding to transform the black-winged kite into an individual with actual and fictitious portions, which enriches the detection information capacity and promotes global convergence performance. The CBKA not only showcases abundant adaptability and versatility to reap additional advantages and sharpen resolution precision but also incorporates localized exploitation and universal exploration to forestall exaggerated convergence and cultivate desirable solutions.
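The Best, Worst, Mean, and Std entries in Tables 5 and 6 summarize the distribution of the final MSE values over the independent runs. A few lines of NumPy (with illustrative data, not the reported results) show the bookkeeping behind these four metrics:

```python
import numpy as np

rng = np.random.default_rng(3)
final_mse = rng.normal(0.0105, 0.0005, 30)   # illustrative final MSE of 30 independent runs

print("Best :", final_mse.min())
print("Worst:", final_mse.max())
print("Mean :", final_mse.mean())
print("Std  :", final_mse.std(ddof=1))       # sample standard deviation
```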
Table 5.
Experimental results (MSE) of each approach for different cases.
| Cases | Result | BSLO | EGO | FLO | GOOSE | HLOA | TTAO | WO | YDSE | SCHO | SWO | GAO | BKA | CBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | Best | 0.009619 | 0.011792 | 0.010877 | 0.010071 | 0.009702 | 0.011797 | 0.009907 | 0.010108 | 0.010165 | 0.012263 | 0.011892 | 0.009214 | 0.009145 |
| Worst | 0.020216 | 0.019464 | 0.017904 | 0.019437 | 0.018869 | 0.017062 | 0.019894 | 0.013176 | 0.020697 | 0.028699 | 0.018868 | 0.011791 | 0.011436 | |
| Mean | 0.014151 | 0.015811 | 0.014361 | 0.012890 | 0.012814 | 0.014766 | 0.013090 | 0.012003 | 0.014128 | 0.018372 | 0.015393 | 0.010813 | 0.010508 | |
| Std | 0.003457 | 0.001659 | 0.002071 | 0.002813 | 0.002371 | 0.001277 | 0.002726 | 0.000783 | 0.003374 | 0.004110 | 0.002051 | 0.000632 | 0.000522 | |
| Case 2 | Best | 3.90E-18 | 0.019511 | 0.008645 | 6.43E-09 | 4.22E-05 | 1.54E-26 | 2.68E-09 | 3.34E-09 | 6.96E-06 | 0.012970 | 0.019014 | 0 | 0 |
| Worst | 0.254200 | 0.218795 | 0.240145 | 0.296689 | 0.248417 | 7.71E-05 | 0.193884 | 1.89E-06 | 0.248278 | 0.429215 | 0.238565 | 4.79E-07 | 7.74E-30 | |
| Mean | 0.093332 | 0.099097 | 0.166015 | 0.087159 | 0.056197 | 7.23E-06 | 0.010959 | 2.88E-07 | 0.147983 | 0.149503 | 0.152798 | 1.65E-08 | 2.67E-31 | |
| Std | 0.095738 | 0.057353 | 0.070880 | 0.113079 | 0.066993 | 1.88E-05 | 0.037428 | 3.78E-07 | 0.092816 | 0.103427 | 0.068690 | 8.74E-08 | 1.41E-30 | |
| Case 3 | Best | 0.009810 | 0.012654 | 0.019347 | 0.000396 | 0.008047 | 0.002295 | 0.009524 | 0.003971 | 0.001311 | 0.022960 | 0.025003 | 0.001376 | 4.35E-05 |
| Worst | 0.020286 | 0.036350 | 0.043832 | 0.028945 | 0.079879 | 0.016452 | 0.036851 | 0.026148 | 0.073240 | 0.113430 | 0.043899 | 0.042917 | 0.010862 | |
| Mean | 0.014171 | 0.020770 | 0.031369 | 0.003752 | 0.048457 | 0.007182 | 0.024878 | 0.012225 | 0.033006 | 0.067567 | 0.033446 | 0.014957 | 0.001628 | |
| Std | 0.003353 | 0.005317 | 0.005181 | 0.005631 | 0.020911 | 0.003521 | 0.005693 | 0.005951 | 0.022947 | 0.018361 | 0.004580 | 0.009655 | 0.002589 |
Table 6.
Experimental results (average estimation parameters) of each approach for different cases.
| Cases | Result | BSLO | EGO | FLO | GOOSE | HLOA | TTAO | WO | YDSE | SCHO | SWO | GAO | BKA | CBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | ![]() |
0.542458 | 0.318714 | 0.339855 | 0.582849 | 0.667811 | 0.904038 | 0.597943 | 0.916925 | 0.533434 | 0.737912 | 0.218789 | 0.867937 | 0.911457 |
![]() |
-0.15241 | -0.01958 | -0.07276 | -0.18917 | -0.22987 | -0.31236 | -0.13789 | -0.27828 | -0.15183 | -0.22145 | -0.06522 | -0.27495 | -0.28481 | |
| Case 2 | ![]() |
-0.77123 | -0.95025 | -0.41910 | -0.75968 | -1.02974 | -1.40170 | -1.30803 | -1.40008 | -0.43169 | -1.38518 | -0.49819 | -1.40004 | -1.40000 |
![]() |
0.184355 | 0.175280 | 0.087197 | 0.372299 | 0.314144 | 0.491543 | 0.424788 | 0.490056 | 0.012500 | 0.488388 | 0.099516 | 0.490040 | 0.490000 | |
![]() |
0.629255 | 0.938298 | 0.228870 | 0.678497 | 0.767260 | 0.997413 | 0.960312 | 0.999801 | 0.428387 | 0.929266 | 0.271766 | 0.999915 | 0.998991 | |
| Case 3 | ![]() |
0.407971 | 0.044818 | 0.571741 | 0.071850 | -0.04235 | -0.00297 | 0.691073 | 0.006687 | -0.05245 | -0.00645 | 0.443501 | -0.01587 | 0.107773 |
![]() |
-0.09455 | 0.028528 | 0.558930 | -0.21465 | -0.05206 | -0.37043 | 0.660711 | -0.29537 | -0.04282 | -0.07825 | 0.437121 | -0.15919 | -0.22174 | |
![]() |
-0.20427 | 0.064564 | 0.564440 | -0.03334 | -0.04315 | -0.02191 | 0.672803 | -0.03962 | -0.03690 | 0.070357 | 0.414343 | 0.055081 | -0.0084 | |
![]() |
0.053098 | -0.01152 | 0.544305 | -0.48601 | -0.05995 | -0.43280 | 0.592081 | -0.25636 | -0.05829 | 0.065729 | 0.402791 | -0.12678 | -0.51181 | |
![]() |
-0.13546 | 0.713472 | 0.602154 | 0.982854 | 0.350901 | 0.859356 | 0.798099 | 0.977551 | 0.654671 | 0.650681 | 0.498056 | 0.743585 | 0.977775 | |
![]() |
-0.25513 | 0.048506 | 0.552431 | 0.077448 | -0.01815 | -0.00359 | 0.624065 | -0.00876 | 0.027253 | -0.01436 | 0.408190 | 0.001201 | 0.088317 | |
![]() |
0.128562 | 0.070804 | 0.571683 | 0.135751 | 0.101738 | 0.081952 | 0.664480 | 0.068004 | 0.135691 | 0.120896 | 0.465989 | 0.089398 | 0.130274 | |
![]() |
0.316745 | 0.035232 | 0.545284 | 0.010769 | -0.06340 | -0.03118 | 0.605366 | -0.00025 | -0.05008 | 0.045962 | 0.429138 | 0.019532 | 0.019495 | |
![]() |
0.016512 | 0.119900 | 0.573087 | -0.16379 | 0.044414 | 0.030244 | 0.653779 | 0.061685 | 0.049006 | 0.090746 | 0.472957 | 0.162096 | -0.15249 |
Wilcoxon rank-sum test is executed to ascertain whether there is an instructive distinction between CBKA and other procedures71. p < 0.05 signifies an instructive distinction, p ≥ 0.05 signifies no instructive distinction, and N/A is “not applicable”. Table 7 portrays the comparative solutions of the Wilcoxon rank-sum test.
Table 7.
Results of the p-value Wilcoxon rank-sum test on the different cases.
| Result | BSLO | EGO | FLO | GOOSE | HLOA | TTAO | WO | YDSE | SCHO | SWO | GAO | BKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | 3.09E-06 | 3.02E-11 | 6.70E-11 | 4.12E-06 | 3.08E-08 | 3.02E-11 | 1.70E-08 | 7.77E-09 | 1.43E-08 | 3.02E-11 | 3.02E-11 | 3.27E-02 |
| Case 2 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 6.48E-12 | 4.43E-03 |
| Case 3 | 4.50E-11 | 3.02E-11 | 3.02E-11 | 3.99E-04 | 4.08E-11 | 2.19E-08 | 3.34E-11 | 4.62E-10 | 1.61E-10 | 3.02E-11 | 3.02E-11 | 1.07E-09 |
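The p-values in Table 7 come from pairwise rank-sum comparisons of the per-run MSE samples of CBKA against each competitor. A minimal SciPy sketch of one such comparison (with illustrative samples, not the reported data; `scipy.stats.ranksums` is one standard implementation of the two-sided Wilcoxon rank-sum test) is:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# Illustrative per-run MSE samples for two algorithms over 30 independent runs
mse_cbka = rng.normal(loc=0.0105, scale=0.0005, size=30)
mse_bka = rng.normal(loc=0.0108, scale=0.0006, size=30)

stat, p_value = ranksums(mse_cbka, mse_bka)   # two-sided Wilcoxon rank-sum test
print(f"p-value = {p_value:.3e}")
if p_value < 0.05:
    print("instructive (statistically significant) distinction between the algorithms")
else:
    print("no instructive distinction")
```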
Figure 7 portrays the convergence curves of the CBKA and compared algorithms for resolving the IIR system identification. For cases 1, 2, and 3, the convergence productivity and assessment precision of the CBKA are superior to those of the BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO, and BKA. The CBKA showcases abundant adaptability and versatility to forestall exaggerated convergence and determine a more stable evaluation accuracy. Figure 8 portrays the boxplots of the CBKA and compared algorithms for resolving the IIR system identification. For cases 1, 2, and 3, the standard deviations and stability of the CBKA are superior to those of the BSLO, EGO, FLO, GOOSE, HLOA, TTAO, WO, YDSE, SCHO, SWO, GAO, and BKA. The CBKA receives exceptional consistency and endurance to promote exploration efficiency and disrupt anticipation stagnation. The actual and fictitious portions are inserted into the BKA, which converts the dual-dimensional encoding region into the single-dimensional manifestation region; each portion is refreshed separately for every search agent, which exhibits inherent parallelism and reliability, reinforces population pluralism, restricts discovery stagnation, extends the identification area, promotes estimation excellence, advances information resources, and fosters collaboration efficiency. The CBKA has strong stability and reliability to foster aggregate discovery intensity and advance widespread computational efficacy.
Fig. 7.

Convergence curves of the CBKA and compared algorithms for resolving the IIR system identification.
Fig. 8.

Boxplots of the CBKA and compared algorithms for resolving the IIR system identification.
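The boxplots in Fig. 8 visualize the spread of the final MSE across the independent runs of each algorithm. The short matplotlib sketch below (illustrative samples only, not the reported data) shows how such a comparison can be produced:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Illustrative final-MSE samples (30 independent runs) for three algorithms
results = {
    "BKA": rng.normal(0.0108, 0.0006, 30),
    "CBKA": rng.normal(0.0105, 0.0005, 30),
    "SWO": rng.normal(0.0184, 0.0041, 30),
}

plt.boxplot(list(results.values()))
plt.xticks(range(1, len(results) + 1), list(results.keys()))
plt.ylabel("Final MSE over 30 runs")
plt.title("Dispersion of final MSE per algorithm (illustrative data)")
plt.show()
```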
CBKA for classical engineering design
To emphasize scalability and adaptability, the CBKA is administered to remedy the engineering layouts: speed reducer4, gear train72, multiple disc clutch brake59, and rolling element bearing73.
Speed reducer layout
The foremost objective is to alleviate the combined weight as articulated in Fig. 9, which showcases seven evaluation elements: facial breadth
, teeth magnitude
, teeth size
, first shaft distance
, second shaft distance
, first shaft diameter
, and second shaft diameter
. The mathematical scheme is stipulated as:
Fig. 9.

Speed reducer layout.
Consider
![]() |
30 |
Minimize
![]() |
31 |
Subject to
![]() |
32 |
![]() |
33 |
![]() |
34 |
![]() |
35 |
![]() |
36 |
![]() |
37 |
![]() |
38 |
![]() |
39 |
![]() |
40 |
![]() |
41 |
![]() |
42 |
Variable range
![]() |
43 |
Table 8 portrays the comparative solutions of the speed reducer layout. The CBKA ascertains the most appropriate convergence fitness 2995.5868 with the evaluation elements (3.50281, 0.7, 17, 7.3, 7.71535, 3.35071, 5.28668). The extraction productivity and assessment precision of the CBKA are superior to those of other procedures, and the CBKA reconciles localized exploitation and universal exploration to forestall exaggerated convergence and locate the appropriate solution.
Table 8.
Comparative solutions of speed reducer layout.
| Algorithm | Optimal values for variables | Optimal cost | ||||||
|---|---|---|---|---|---|---|---|---|
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
||
| SA74 | 3.4979 | 0.7 | 17 | 7.9205 | 7.9513 | 3.3518 | 5.2853 | 3004.85 |
| EHO74 | 3.4889 | 0.7782 | 23.2193 | 7.849 | 8.1021 | 3.5603 | 5.2459 | 73,504.7 |
| GOA74 | 3.5126 | 0.7033 | 17.2246 | 7.9131 | 7.9627 | 3.6567 | 5.2784 | 3169.32 |
| TEO74 | 3.4261 | 0.7 | 17.6222 | 7.7408 | 7.9775 | 3.4145 | 5.2758 | 3595.59 |
| MPA75 | 3.50669 | 0.7 | 17 | 7.380933 | 7.815726 | 3.357847 | 5.286768 | 3001.288 |
| TLBO75 | 3.508755 | 0.7 | 17 | 7.3 | 7.8 | 3.46102 | 5.2892113 | 3030.563 |
| BWO6 | 3.58 | 0.72 | 18.28 | 7.73 | 7.73 | 3.43 | 5.28 | 3417.1535 |
| RUN5 | 3.507125 | 0.7 | 17 | 7.307812 | 7.8078 | 3.356534 | 5.29705 | 3004.852 |
| SMA5 | 3.512233 | 0.7 | 17 | 7.388958 | 7.8236 | 3.363121 | 5.29507 | 3007.596 |
| MSA5 | 3.505551 | 0.7 | 17 | 8.308882 | 7.8079 | 3.357677 | 5.29502 | 3012.079 |
| MBO5 | 3.514048 | 0.7 | 17 | 7.418166 | 7.8239 | 3.363347 | 5.29508 | 3009.238 |
| INFO5 | 3.514301 | 0.7 | 17 | 7.307301 | 7.8078 | 3.466456 | 5.29752 | 3036.931 |
| CPA5 | 3.525688 | 0.7 | 17 | 8.378957 | 7.8078 | 3.372258 | 5.29702 | 3035.367 |
| BOA76 | 3.5239 | 0.7003 | 17.0088 | 8.0962 | 8.004 | 3.4048 | 5.3286 | 3061.6 |
| AO77 | 3.49688 | 0.7 | 17 | 8.10828 | 7.8 | 3.37081 | 5.28578 | 3008.168 |
| HOA77 | 3.56008 | 0.7 | 17 | 7.34912 | 7.8 | 3.49325 | 5.28415 | 3058.577 |
| ES73 | 3.506163 | 0.700831 | 17 | 7.460181 | 7.962143 | 3.3629 | 5.309 | 3025.005 |
| SBS73 | 3.506122 | 0.700006 | 17 | 7.549126 | 7.85933 | 3.365576 | 5.289773 | 3008.981 |
| SCSO69 | 3.5 | 0.7 | 17 | 7.32 | 8.029658 | 3.350294 | 5.286794 | 3001.69686 |
| MDO4 | 3.5 | 0.7 | 17 | 7.3 | 7.67 | 3.542 | 5.246 | 3019.583365 |
| CBKA | 3.50281 | 0.7 | 17 | 7.3 | 7.71535 | 3.35071 | 5.28668 | 2995.5868 |
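The speed reducer layout, like the other engineering layouts below, is a constrained minimization (a cost function subject to inequality constraints and variable bounds). The paper does not restate its constraint-handling rule in this section, so the sketch below illustrates one common choice, a static penalty added to the objective; the placeholder objective and constraints stand in for Eqs. (30)–(43) and are assumptions for illustration, not the authors’ formulation:

```python
import numpy as np

def penalized_cost(x, objective, constraints, penalty=1e6):
    """Static-penalty fitness: add a large quadratic penalty for every violated
    inequality constraint g_i(x) <= 0, so infeasible designs rank behind
    feasible ones for any minimizing search algorithm."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + penalty * violation

# Placeholder problem (not the speed reducer equations)
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    lambda x: 1.0 - x[0] - x[1],   # g1(x) <= 0
    lambda x: x[0] - 3.0,          # g2(x) <= 0
]

print(penalized_cost(np.array([1.2, 2.4]), objective, constraints))  # feasible design
print(penalized_cost(np.array([0.2, 0.3]), objective, constraints))  # penalized design
```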
Gear train layout
The foremost objective is to alleviate the combined cost as articulated in Fig. 10, which showcases four evaluation elements: gears teeth magnitude
,
,
and
. The mathematical scheme is stipulated as:
Fig. 10.

Gear train layout.
Consider
![]() |
44 |
Minimize
![]() |
45 |
Variable range
![]() |
46 |
Table 9 portrays the comparative solutions of the gear train layout. The CBKA ascertains the most appropriate convergence fitness 2.4277E-18 with the evaluation elements (50, 32, 16, 47). The extraction productivity and assessment precision of the CBKA are superior to those of other procedures, and the CBKA can reasonably deploy large-scale discovery and small-scale extraction to foster aggregate discovery intensity and advance widespread computational efficacy.
Table 9.
Comparative solutions of gear train layout.
| Algorithm | Optimal value for variables | Optimal cost | |||
|---|---|---|---|---|---|
![]() |
![]() |
![]() |
![]() |
||
| ICA78 | 43 | 16 | 19 | 49 | 2.70E-12 |
| BBO78 | 53 | 26 | 15 | 51 | 2.31E-11 |
| NNA78 | 49 | 16 | 19 | 43 | 2.70E-12 |
| WSA78 | 43 | 16 | 19 | 49 | 2.70E-12 |
| KOA34 | 44 | 20 | 16 | 50 | 2.700857E-12 |
| FLA34 | 44 | 16 | 20 | 49 | 2.700857E-12 |
| COA34 | 23 | 14 | 12 | 48 | 9.92158E-10 |
| RUN34 | 44 | 17 | 19 | 49 | 2.700857E-12 |
| SMA34 | 52 | 30 | 13 | 53 | 2.307816E-11 |
| DO34 | 49 | 16 | 19 | 44 | 2.700857E-12 |
| POA34 | 44 | 17 | 19 | 49 | 2.700857E-12 |
| PDO79 | 48 | 17 | 22 | 54 | 2.70E-12 |
| DMOA79 | 49 | 19 | 16 | 43 | 2.70E-12 |
| AOA79 | 49 | 19 | 19 | 54 | 2.70E-12 |
| SSA79 | 49 | 19 | 19 | 49 | 2.70E-12 |
| SCA79 | 49 | 19 | 34 | 49 | 2.700857E-12 |
| GMO72 | 43 | 19 | 16 | 49 | 2.700857E-12 |
| GBO43 | 53 | 13 | 20 | 34 | 2.3078E-11 |
| TTAO43 | 43 | 16 | 19 | 49 | 2.70E-12 |
| WO4 | 43 | 16 | 19 | 43 | 2.700857E-12 |
| CBKA | 50 | 32 | 16 | 47 | 2.4277E-18 |
Multiple disc clutch brake layout
The foremost objective is to alleviate the combined weight as articulated in Fig. 11, which showcases five evaluation elements: disc depth
, inner radius
, outer radius
, activation force
, and friction surface magnitude
. The mathematical scheme is stipulated as:
Fig. 11.

Multiple-disc clutch brake layout.
Consider
![]() |
47 |
Minimize
![]() |
48 |
Subject to
![]() |
49 |
![]() |
50 |
![]() |
51 |
![]() |
52 |
![]() |
53 |
![]() |
54 |
![]() |
55 |
![]() |
56 |
![]() |
57 |
![]() |
58 |
![]() |
59 |
![]() |
60 |
![]() |
61 |
![]() |
62 |
![]() |
63 |
![]() |
64 |
![]() |
65 |
![]() |
66 |
![]() |
67 |
Table 10 portrays the comparative solutions of the multiple-disc clutch brake layout. The CBKA ascertains the most appropriate convergence fitness 0.2352 with the evaluation elements (70, 90, 1, 440, 2). The extraction productivity and assessment precision of the CBKA are superior to those of other procedures, and the CBKA incorporates localized exploitation and universal exploration to forestall exaggerated convergence and cultivate desirable solutions.
Table 10.
Comparative solutions of multiple-disc clutch brake layout.
| Algorithm | Optimal value for variables | Optimal cost | ||||
|---|---|---|---|---|---|---|
![]() |
![]() |
![]() |
![]() |
![]() |
||
| TLBO80 | 70 | 90 | 1 | 810 | 3 | 0.313657 |
| MFO81 | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| MVO82 | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| CMVO82 | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| WCA83 | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| PVS84 | 70 | 90 | 1 | 980 | 3 | 0.31366 |
| MBFPA85 | 70 | 90 | 1 | 600 | 2 | 0.2352424579 |
| GOA86 | 71 | 92 | 1 | 835 | 3 | 0.3355146 |
| EOBL-GOA86 | 70 | 90 | 1 | 984 | 3 | 0.31365661053 |
| ABC87 | 70 | 90 | 1 | 790 | 3 | 0.313657 |
| CS87 | 70 | 90 | 1 | 810 | 3 | 0.3136566 |
| GSA87 | 72 | 92 | 2 | 815 | 3 | 0.3175771 |
| AEO87 | 70 | 90 | 1 | 810 | 3 | 0.3136566 |
| AHA88 | 70 | 90 | 1 | 840 | 3 | 0.3136566 |
| HBO89 | 70 | 90 | 1 | 1000 | 3 | 0.3136566 |
| HGS61 | 70 | 90 | 1 | 1000 | 3 | 0.313657 |
| MRFO90 | 70 | 90 | 1 | 835 | 3 | 0.3136566 |
| GA90 | 72 | 92 | 1 | 918 | 3 | 0.321498 |
| DE90 | 71 | 92 | 1 | 835 | 3 | 0.3355146 |
| RSO91 | 70 | 90 | 1 | 810 | 3 | 0.313657 |
| CBKA | 70 | 90 | 1 | 440 | 2 | 0.2352 |
Rolling element bearing layout
The foremost objective is to maximize the dynamic load-carrying capacity as articulated in Fig. 12, which showcases ten evaluation elements: pitch diameter (
), ball diameter (
), number of balls (
), inner (
), and outer (
), raceway curvature coefficients,
,
,
,
,
. The mathematical scheme is stipulated as:
Fig. 12.

Rolling element bearing layout.
Consider
![]() |
68 |
Minimize
![]() |
69 |
Subject to
![]() |
70 |
![]() |
71 |
![]() |
72 |
![]() |
73 |
![]() |
74 |
![]() |
75 |
![]() |
76 |
![]() |
77 |
![]() |
78 |
![]() |
79 |
![]() |
80 |
![]() |
81 |
![]() |
82 |
![]() |
83 |
![]() |
84 |
Variable range
![]() |
85 |
![]() |
86 |
![]() |
87 |
![]() |
88 |
![]() |
89 |
Table 11 presents the comparative solutions of the rolling element bearing layout. The CBKA ascertains the most appropriate convergence fitness 85,539.302 with the evaluation elements (125.3527, 21.5041, 11, 0.515, 0.515, 0.4013, 0.7014, 0.3, 0.07481, 0.6103). The extraction productivity and assessment precision of the CBKA are superior to those of other procedures, and the CBKA exhibits suitability and affordability to foster aggregate discovery intensity, advance widespread computational efficacy, and retrieve the universal adequate solution.
Table 11.
Comparative solutions of rolling element bearing layout.
| Algorithm | Optimal value for variables | Optimal cost | ||||
|---|---|---|---|---|---|---|
![]() |
![]() |
![]() |
![]() |
![]() |
||
![]() |
![]() |
![]() |
![]() |
![]() |
||
| TLBO80 | 125.7191 | 21.42559 | 11 | 0.515 | 0.515 | |
| 0.424266 | 0.633948 | 0.3 | 0.068858 | 0.799498 | 81,859.74 | |
| CSA92 | 125 | 21.418 | 11.356 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.612 | 85,201.641 | |
| GSA92 | 125 | 20.854 | 11.149 | 0.515 | 0.517 | |
| 0.5 | 0.618 | 0.3 | 0.02 | 0.624 | 82,276.941 | |
| HHO93 | 125 | 21 | 11.09207 | 0.515 | 0.515 | |
| 0.4 | 0.6 | 0.3 | 0.050474 | 0.6 | 83,011.88 | |
| RSA93 | 125.1722 | 21.29734 | 10.88521 | 0.515253 | 0.517764 | |
| 0.41245 | 0.632338 | 0.301911 | 0.024395 | 0.6024 | 83,486.64 | |
| RSO94 | 125 | 21.41769 | 10.94027 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.6 | 85,069.021 | |
| STOA95 | 125 | 21.4189 | 10.94113 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.6 | 85,067.983 | |
| TSA96 | 125 | 21.4175 | 10.941 | 0.51 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.6 | 85,070.08 | |
| EPO97 | 125 | 21.4189 | 10.94113 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.6 | 85,067.983 | |
| ESA97 | 125 | 21.4175 | 10.94109 | 0.51 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.02 | 0.6 | 85,070.085 | |
| SSA97 | 125 | 20.77562 | 11.01247 | 0.515 | 0.515 | |
| 0.5 | 0.61397 | 0.3 | 0.05004 | 0.61001 | 82,773.982 | |
| WCA83 | 125.721167 | 21.4233 | 11.00103 | 0.515 | 0.515 | |
| 0.401514 | 0.659047 | 0.300032 | 0.040045 | 0.6 | 85,538.480 | |
| WOA24 | 125.100734 | 21.4233 | 10.95119 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.314216 | 0.02 | 0.6 | 85,265.167 | |
| ACVO24 | 125.70959 | 21.4232997 | 11.000104 | 0.515 | 0.515 | |
| 0.48352698 | 0.61821897 | 0.3002753 | 0.02 | 0.6478817 | 85,533.4103 | |
| MBA87 | 125.7153 | 21.4233 | 11 | 0.515 | 0.515 | |
| 0.488805 | 0.627829 | 0.300149 | 0.097305 | 0.646095 | 85,535.9611 | |
| HBO89 | 125.7227184 | 21.4233 | 11 | 0.515 | 0.515 | |
| 0.438476 | 0.699998 | 0.3 | 0.047532 | 0.601081 | 85,533.18 | |
| HPO59 | 125 | 21.875 | 10.777 | 0.515 | 0.515 | |
| 0.4 | 0.7 | 0.3 | 0.029 | 0.6 | 83,918.4925 | |
| MGA73 | 125.718 | 21.8745119 | 10.7770658 | 0.51500082 | 0.51500299 | |
| 0.405908353 | 0.65558802 | 0.30000415 | 0.07754492 | 0.6 | 83,912.87983 | |
| CGO73 | 125 | 21.875 | 10.777009 | 0.515 | 0.515 | |
| 0.4 | 0.64620052 | 0.3 | 0.050152445 | 0.6 | 83,918.49253 | |
| EVO73 | 125.7190556 | 21.4255902 | 10.6955328 | 0.515 | 0.515 | |
| 0.463182936 | 0.6999265 | 0.3 | 0.063431519 | 0.604213108 | 81,859.7415974 | |
| CBKA | 125.3527 | 21.5041 | 11 | 0.515 | 0.515 | |
| 0.4013 | 0.7014 | 0.3 | 0.07481 | 0.6103 | 85,539.302 | |
Impact analysis
The CBKA is constructed on the black-winged kites’ migratory and predatory instincts and integrates the Leader tactic with the Cauchy variation procedure to retrieve the expansive appropriate convergence solution. The CBKA efficiently resolves the function evaluations, engineering layouts, and IIR system identification for the following aspects of impact analysis. First, the CBKA exhibits some advantages of simple algorithm framework construction, few control variables, high computational efficiency, easy approach fusion, good information exchange, strong parallelism and feasibility, superior exploration and exploitation, strong stability, and robustness. Second, the Cauchy variation enhances global exploration efficiency, avoids premature convergence, and improves the calculation accuracy. The Leader tactic accelerates exploitation efficiency, promotes directional search, and improves convergence speed. Third, the complex-valued encoding strategy manipulates a dual-diploid organization to encode the black-winged kite; the actual and fictitious portions are inserted into the BKA, which converts the dual-dimensional encoding region into the single-dimensional manifestation region. This strategy reinforces population pluralism, restricts discovery stagnation, extends the identification area, promotes estimation excellence, advances information resources, and fosters collaboration efficiency. To summarize, the CBKA not only showcases abundant adaptability and versatility to reap supplementary advantages and sharpen resolution precision but also incorporates localized exploitation and universal exploration to achieve superior assessment precision and a swifter convergence rate.
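For readers unfamiliar with complex-valued encoding, the sketch below shows one common way a dual (real, imaginary) representation can be collapsed into a single real decision variable before fitness evaluation: the modulus of each gene is mapped into that variable’s bounds. The exact conversion used by the CBKA is defined in its methodology section; the mapping and scale factor used here are generic assumptions for illustration, not the authors’ equations:

```python
import numpy as np

def decode_complex(real_part, imag_part, lower, upper):
    """Collapse a dual (real, imaginary) encoding into one real variable per
    dimension by mapping the modulus of each gene into [lower, upper].
    Illustrative mapping only; the scale factor below is an assumption."""
    rho = np.sqrt(real_part ** 2 + imag_part ** 2)      # modulus of each gene
    rho_max = np.sqrt(2.0) * np.maximum(np.abs(lower), np.abs(upper))
    frac = np.clip(rho / rho_max, 0.0, 1.0)             # normalized modulus in [0, 1]
    return lower + frac * (upper - lower)               # real-valued decision vector

# Usage sketch: one search agent with two genes, each held as (real, imaginary)
real_part = np.array([0.8, -1.2])
imag_part = np.array([0.3, 0.5])
lower = np.array([-2.0, -2.0])
upper = np.array([2.0, 2.0])
print(decode_complex(real_part, imag_part, lower, upper))  # candidate passed to the objective
```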
Conclusion and future exploration
This paper establishes the CBKA to resolve the function evaluations, engineering layouts, and adaptive IIR system identification. The purpose is to attain the minimal available solutions of function evaluations, the satisfactory computational expenditures of engineering layouts, and the most advantageous modulating coefficients that mitigate the mean square error (MSE) between the unanticipated system’s output and the IIR system’s output. The BKA is constructed on the black-winged kites’ migratory and predatory instincts and integrates the Leader tactic with the Cauchy variation procedure to foster aggregate discovery intensity and retrieve the appropriate computational solution. The basic BKA exhibits marginalized resolution efficiency, inferior assessment precision, and stagnant sensitive anticipation. The innovative complex-valued encoding strategy is inserted into the BKA to foster aggregate discovery intensity and advance widespread computational efficacy. The CBKA manipulates a dual-diploid organization to encode the actual and fictitious portions of each black-winged kite and translates the dual-dimensional encoding region into the single-dimensional manifestation region, which reinforces population pluralism, restricts discovery stagnation, extends the identification area, promotes estimation excellence, advances information resources, fosters collaboration efficiency, and exhibits remarkable parallelism and consistency. The CBKA showcases abundant sustainability and versatility to forestall exaggerated convergence, reconciling localized exploitation and universal exploration to locate the appropriate solution. The CBKA is compared with the GTO, MGO, PO, AVOA, GCRA, HLOA, WO, SBOA, NRBO, APO, EHO, BSLO, EGO, FLO, GOOSE, TTAO, YDSE, SCHO, SWO, GAO, and BKA. The experimental results show that the CBKA exhibits suitability and affordability to explore a superior assessment precision and a swifter convergence rate.
Future research on CBKA will focus on the following three aspects: (1) We will introduce more streamlined extraction strategies (e.g., golden-sine, multi-population, quasi-opposition-based learning, Brownian motion, intelligent perception, S-shaped escape energy, Tent chaotic map, elite pool, differential evolution), distinguishable encoding formats (e.g., quantum, discrete, binary, integer, hybridized encodings), and hybrid approaches (e.g., genetic algorithms or differential evolution) to reap supplementary advantages and explore a superior assessment precision and swifter convergence efficiency. (2) A more extensive application of real-world datasets will improve the research relevance, such as neural network layouts (e.g., convolutional neural networks, recurrent neural networks, generative adversarial networks, graph neural networks, long short-term memory, Hopfield networks, and deconvolutional networks), image processing (e.g., sophisticated patterns, feature tracking, stereo matching, real-world images), and multilevel thresholding segmentation methods (e.g., Tsallis entropy, Renyi entropy, cross-entropy, fuzzy entropy, and Otsu’s method). (3) Due to the wide distribution of agriculture and forestry crops, the geographical dispersion, and the inconvenience of planting and maintenance in the Dabie Mountains in Anhui Province, the CBKA will utilize data collection, infrared detection, image recognition, big data intelligent analysis and decision-making, and intelligent precision plant protection equipment to achieve agricultural intelligent perception and detection, agricultural data intelligent processing, and agricultural equipment intelligent control.
Acknowledgements
This research was funded by Natural Science Key Research Project of Anhui Educational Committee under Grant No. 2024AH051989, Start-up Fee for Scientific Research of High-level Talents of West Anhui University under Grant No. WGKQ2022052, School-level Quality Engineering (School-enterprise Cooperation Development Curriculum Resource Construction) under Grant No. wxxy2022101, School-level Quality Engineering (Teaching and Research Project) under Grant No. wxxy2023079, PWMDIC Design and Application under Grant No. WXCHX0045023110, and Natural Science Key Research Project of Anhui Educational Committee under Grant No. 2022AH051675. The authors would like to thank the editor and anonymous reviewers for their helpful comments and suggestions.
Author contributions
Chengtao Du: Conceptualization, Methodology, Formal analysis, Resources, Data curation, Project administration, Writing – original draft and Funding acquisition. Jinzhong Zhang: Conceptualization, Methodology, Software, Data curation, Formal analysis, Writing – original draft and Funding acquisition. Jie Fang: Software, Validation, Investigation, Data curation, Visualization, Supervision, Writing – review & editing and Funding acquisition.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. All data generated or analyzed during this study are included directly in the text of this submitted manuscript. There are no additional external files with datasets.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Abdollahzadeh, B. et al. Puma optimizer (PO): A novel metaheuristic optimization algorithm and its application in machine learning. Clust. Comput.27, 5235–5283 (2024). [Google Scholar]
- 2.Al-Betar, M. A., Awadallah, M. A., Braik, M. S., Makhadmeh, S. & Doush, I. A. Elk herd optimizer: a novel nature-inspired metaheuristic algorithm. Artif. Intell. Rev.57, 48. 10.1007/s10462-023-10680-4 (2024). [Google Scholar]
- 3.Abdel-Basset, M., Mohamed, R., Jameel, M. & Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev.56, 11675–11738 (2023).
- 4.Han, M. et al. Walrus optimizer: A novel nature-inspired metaheuristic algorithm. Expert Syst. Appl.239, 122413 (2024). [Google Scholar]
- 5.Dehghani, M., Montazeri, Z., Trojovská, E. & Trojovskỳ, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst.259, 110011 (2023).
- 6.Seyyedabbasi, A. & Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput.39, 2627–2651 (2023). [Google Scholar]
- 7.Peraza-Vázquez, H., Peña-Delgado, A., Merino-Treviño, M., Morales-Cepeda, A. B. & Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev.57, 59. 10.1007/s10462-023-10653-7 (2024). [Google Scholar]
- 8.Hamad, R. K. & Rashid, T. A. GOOSE algorithm: a powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst.15, 1249–1274 (2024). [Google Scholar]
- 9.Abdollahzadeh, B., Soleimanian Gharehchopogh, F. & Mirjalili, S. Artificial gorilla troops optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst.36, 5887–5958 (2021).
- 10.Abdollahzadeh, B., Gharehchopogh, F. S., Khodadadi, N. & Mirjalili, S. Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv. Eng. Softw.174, 103282 (2022). [Google Scholar]
- 11.Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng.158, 107408 (2021). [Google Scholar]
- 12.Agushaka, J. O. et al. Greater cane rat algorithm (GCRA): A nature-inspired metaheuristic for optimization problems. Heliyon.10.1016/j.heliyon.2024.e31629 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Fu, Y., Liu, D., Chen, J. & He, L. Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artif. Intell. Rev.57, 123. 10.1007/s10462-024-10729-y (2024). [Google Scholar]
- 14.Wang, W., Tian, W., Xu, D. & Zang, H. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw.195, 103694 (2024). [Google Scholar]
- 15.Bai, J. et al. Blood-sucking leech optimizer. Adv. Eng. Softw.195, 103696 (2024). [Google Scholar]
- 16.Mohammadzadeh, A. & Mirjalili, S. Eel and grouper optimizer: a nature-inspired optimization algorithm. Clust. Comput.27, 12745–12786 (2024). [Google Scholar]
- 17.Falahah, I. A. et al. Frilled Lizard Optimization: A Novel Bio-Inspired Optimizer for Solving Engineering Applications. Comput. Mater. Contin.10.32604/cmc.2024.053189 (2024).
- 18.Alsayyed, O. et al. Giant Armadillo optimization: A new bio-inspired metaheuristic algorithm for solving optimization problems. Biomimetics8, 619. 10.3390/biomimetics8080619 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Houssein, E. H., Oliva, D., Samee, N. A., Mahmoud, N. F. & Emam, M. M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med.165, 107389 (2023). [DOI] [PubMed] [Google Scholar]
- 20.Yuan, Y. et al. Coronavirus mask protection algorithm: A new bio-inspired optimization algorithm and its applications. J. Bionic Eng.20, 1747–1765 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Ahmed, M., Sulaiman, M. H., Mohamad, A. J. & Rahman, M. Gooseneck barnacle optimization algorithm: A novel nature inspired optimization theory and application. Math. Comput. Simul.218, 248–265 (2024). [Google Scholar]
- 22.Abdel-Basset, M., Mohamed, R., Jameel, M. & Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst.262, 110248 (2023).
- 23.Ouyang, H., Chen, J., Li, S., Xiang, J. & Zhan, Z.-H. Altruistic population algorithm: A metaheuristic search algorithm for solving multimodal multi-objective optimization problems. Math. Comput. Simul.210, 296–319 (2023). [Google Scholar]
- 24.Emami, H. Anti-coronavirus optimization algorithm. Soft Comput.26, 4991–5023 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Daliri, A., Asghari, A., Azgomi, H. & Alimoradi, M. The water optimization algorithm: a novel metaheuristic for solving optimization problems. Appl. Intell.52, 17990–18029 (2022). [Google Scholar]
- 26.Chen, D. et al. Poplar optimization algorithm: A new meta-heuristic optimization technique for numerical optimization and image segmentation. Expert Syst. Appl.200, 117118 (2022). [Google Scholar]
- 27.Zamani, H., Nadimi-Shahraki, M. H. & Gandomi, A. H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng.392, 114616 (2022). [Google Scholar]
- 28.Rahmani, A. M. & AliAbdi, I. Plant competition optimization: A novel metaheuristic algorithm. Expert Syst.39, e12956 (2022). [Google Scholar]
- 29.Jia, H., Peng, X. & Lang, C. Remora optimization algorithm. Expert Syst. Appl.185, 115665 (2021). [Google Scholar]
- 30.Luo, K. Water flow optimizer: a nature-inspired evolutionary algorithm for global optimization. IEEE Trans. Cybern.52, 7753–7764 (2021). [DOI] [PubMed] [Google Scholar]
- 31.Pant, M., Zaheer, H., Garcia-Hernandez, L., Abraham, A., & others. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell.90, 103479 (2020).
- 32.Deng, L. & Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl.225, 120069 (2023). [Google Scholar]
- 33.Daoud, M. S. et al. Gradient-based optimizer (GBO): a review, theory, variants, and applications. Arch. Comput. Methods Eng.30, 2431–2449 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Abdel-Basset, M., Mohamed, R., Azeem, S. A. A., Jameel, M. & Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst.268, 110454 (2023).
- 35.Thapliyal, S. & Kumar, N. Numeric Crunch Algorithm: a new metaheuristic algorithm for solving global and engineering optimization problems. Soft Comput.27, 16611–16657 (2023). [Google Scholar]
- 36.Abdel-Basset, M., El-Shahat, D., Jameel, M. & Abouhawwash, M. Exponential distribution optimizer (EDO): A novel math-inspired algorithm for global optimization and engineering problems. Artif. Intell. Rev.56, 9329–9400 (2023). [Google Scholar]
- 37.Pan, Q., Tang, J. & Lao, S. Edoa: An elastic deformation optimization algorithm. Appl. Intell.52, 17580–17599 (2022). [Google Scholar]
- 38.Kuyu, Y. Ç. & Vatansever, F. GOZDE: A novel metaheuristic algorithm for global optimization. Future Gener. Comput. Syst.136, 128–152 (2022). [Google Scholar]
- 39.Abdel-Basset, M., El-Shahat, D., Jameel, M. & Abouhawwash, M. Young’s double-slit experiment optimizer: A novel metaheuristic optimization algorithm for global and constraint optimization problems. Comput. Methods Appl. Mech. Eng.403, 115652 (2023). [Google Scholar]
- 40.Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. & Gandomi, A. H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng.376, 113609 (2021). [Google Scholar]
- 41.Li, C. et al. Integrated optimization algorithm: A metaheuristic approach for complicated optimization. Inf. Sci.586, 424–449 (2022). [Google Scholar]
- 42.Azizi, M. Atomic orbital search: A novel metaheuristic algorithm. Appl. Math. Model.93, 657–683 (2021). [Google Scholar]
- 43.Zhao, S., Zhang, T., Cai, L. & Yang, R. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl.238, 121744 (2024). [Google Scholar]
- 44.Sowmya, R., Premkumar, M. & Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell.128, 107532 (2024). [Google Scholar]
- 45.Shi, K., Wu, Z., Jiang, B. & Karimi, H. R. Dynamic path planning of mobile robot based on improved simulated annealing algorithm. J. Frankl. Inst.360, 4378–4398 (2023). [Google Scholar]
- 46.Yu, X., Zhao, Q., Lin, Q. & Wang, T. A grey wolf optimizer-based chaotic gravitational search algorithm for global optimization. J. Supercomput.79, 2691–2739 (2023). [Google Scholar]
- 47.Bai, J. et al. A sinh cosh optimizer. Knowl.-Based Syst.282, 111081 (2023).
- 48.Shehadeh, H. A. Chernobyl disaster optimizer (CDO): a novel meta-heuristic method for global optimization. Neural Comput. Appl.35, 10733–10749 (2023). [Google Scholar]
- 49.Tian, Z. & Gai, M. Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization. Expert Syst. Appl.245, 123088 (2024). [Google Scholar]
- 50.Gao, Y., Zhang, J., Wang, Y., Wang, J. & Qin, L. Love Evolution Algorithm: a stimulus–value–role theory-inspired evolutionary algorithm for global optimization. J. Supercomput.80, 12346–12407 (2024). [Google Scholar]
- 51.Taheri, A. et al. Partial reinforcement optimizer: An evolutionary optimization algorithm. Expert Syst. Appl.238, 122070 (2024). [Google Scholar]
- 52.Zhu, D., Wang, S., Zhou, C., Yan, S. & Xue, J. Human memory optimization algorithm: A memory-inspired optimizer for global optimization problems. Expert Syst. Appl.237, 121597 (2024). [Google Scholar]
- 53.Eltamaly, A. M. & Rabie, A. H. A novel musical chairs optimization algorithm. Arab. J. Sci. Eng.48, 10371–10403 (2023). [Google Scholar]
- 54.Li, Z. et al. Tactical unit algorithm: A novel metaheuristic algorithm for optimal loading distribution of chillers in energy optimization. Appl. Therm. Eng.238, 122037 (2024). [Google Scholar]
- 55.Yuan, Y. et al. Alpine skiing optimization: A new bio-inspired optimization algorithm. Adv. Eng. Softw.170, 103158 (2022). [Google Scholar]
- 56.Ahwazian, A., Amindoust, A., Tavakkoli-Moghaddam, R. & Nikbakht, M. Search in forest optimizer: a bioinspired metaheuristic algorithm for global optimization problems. Soft Comput.26, 2325–2356 (2022). [Google Scholar]
- 57.Srivastava, A. & Das, D. K. Criminal search optimization algorithm: a population-based meta-heuristic optimization technique to solve real-world optimization problems. Arab. J. Sci. Eng.47, 3551–3571 (2022). [Google Scholar]
- 58.Xu, Y., Liu, H., Xie, S., Xi, L. & Lu, M. Competitive search algorithm: a new method for stochastic optimization. Appl. Intell.52, 12131–12154 (2022). [Google Scholar]
- 59.Naruei, I., Keynia, F. & Sabbagh Molahosseini, A. Hunter–prey optimization: Algorithm and applications. Soft Comput.26, 1279–1314 (2022). [Google Scholar]
- 60.Zitouni, F., Harous, S., Belkeram, A. & Hammou, L. E. B. The archerfish hunting optimizer: A novel metaheuristic algorithm for global optimization. Arab. J. Sci. Eng.47, 2513–2553 (2022). [Google Scholar]
- 61.Yang, Y., Chen, H., Heidari, A. A. & Gandomi, A. H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl.177, 114864 (2021). [Google Scholar]
- 62.Veysari, E. F. & others. A new optimization algorithm inspired by the quest for the evolution of human society: human felicity algorithm. Expert Syst. Appl.193, 116468 (2022).
- 63.Rahman, C. M. Group learning algorithm: a new metaheuristic algorithm. Neural Comput. Appl.35, 14013–14028 (2023). [Google Scholar]
- 64.Zhang, Z., Wang, X. & Yue, Y. Heuristic Optimization Algorithm of Black-Winged Kite Fused with Osprey and Its Engineering Application. Biomimetics9, 595. 10.3390/biomimetics9100595 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Ma, H. et al. Improved black-winged kite algorithm and finite element analysis for robot parallel gripper design. Adv. Mech. Eng.10.1177/16878132241288402 (2024). [Google Scholar]
- 66.Xue, R. et al. Multi-strategy Integration Model Based on Black-Winged Kite Algorithm and Artificial Rabbit Optimization. International Conference on Swarm Intelligence14788, 197–207 (2024). [Google Scholar]
- 67.Zhou, Y., Wu, X., Liu, Y. & Jiang, X. BKA optimization algorithm based on sine-cosine guidelines. International Symposium on Computer Technology and Information Science (ISCTIS) 480–484. 10.1109/ISCTIS63324.2024.10699037 (2024).
- 68.Rasooli, A. Q. & Inan, O. Clustering with the Blackwinged Kite Algorithm. Int. J. Comput. Sci. Commun.9, 22–33 (2024). [Google Scholar]
- 69.Wang, J., Wang, W., Hu, X., Qiu, L. & Zang, H. Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev.57, 98. 10.1007/s10462-024-10723-4 (2024). [Google Scholar]
- 70.Zhang, J. et al. CWOA: A novel complex-valued encoding whale optimization algorithm. Math. Comput. Simul.207, 151–188 (2023). [Google Scholar]
- 71.Zhang, J. et al. A complex-valued encoding golden jackal optimization for multilevel thresholding image segmentation. Appl. Soft Comput.165, 112108 (2024). [Google Scholar]
- 72.Wu, H. et al. Wild geese migration optimization algorithm: a new meta-heuristic algorithm for solving inverse kinematics of robot. Comput. Intell. Neurosci.10.1155/2022/5191758 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Azizi, M., Aickelin, U., Khorshidi, H. A. & Baghalzadeh Shishehgarkhaneh, M. Energy valley optimizer: a novel metaheuristic algorithm for global and engineering optimization. Sci. Rep.13, 226. 10.1038/s41598-022-27344-y (2023). [DOI] [PMC free article] [PubMed]
- 74.Hashim, F. A., Houssein, E. H., Hussain, K., Mabrouk, M. S. & Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul.192, 84–110 (2022). [Google Scholar]
- 75.Dehghani, M., Hubálovskỳ, Š & Trojovskỳ, P. Northern goshawk optimization: a new swarm-based algorithm for solving optimization problems. Ieee Access9, 162059–162080 (2021). [Google Scholar]
- 76.Chakraborty, S., Saha, A. K., Sharma, S., Chakraborty, R. & Debnath, S. A hybrid whale optimization algorithm for global optimization. J. Ambient Intell. Humaniz. Comput.14, 431–467 (2023). [Google Scholar]
- 77.Wang, S., Jia, H., Liu, Q. & Zheng, R. An improved hybrid Aquila Optimizer and Harris Hawks Optimization for global optimization. Math Biosci Eng18, 7076–7109 (2021). [DOI] [PubMed] [Google Scholar]
- 78.Kaveh, A. & Eslamlou, A. D. Water strider algorithm: A new metaheuristic and applications. Structures25, 520–541 (2020). [Google Scholar]
- 79.Ezugwu, A. E., Agushaka, J. O., Abualigah, L., Mirjalili, S. & Gandomi, A. H. Prairie dog optimization algorithm. Neural Comput. Appl.34, 20017–20065 (2022). [Google Scholar]
- 80.Rao, R. V., Savsani, V. J. & Vakharia, D. P. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des.43, 303–315 (2011). [Google Scholar]
- 81.Bhesdadiya, R., Trivedi, I. N., Jangir, P. & Jangir, N. Moth-flame optimizer method for solving constrained engineering optimization problems. Advances in Computer and Computational Sciences554, 61–68 (2018). [Google Scholar]
- 82.Sayed, G. I., Darwish, A. & Hassanien, A. E. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems. J. Exp. Theor. Artif. Intell.30, 293–317 (2018). [Google Scholar]
- 83.Eskandar, H., Sadollah, A., Bahreininejad, A. & Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct.110, 151–166 (2012). [Google Scholar]
- 84.Savsani, P. & Savsani, V. Passing vehicle search (PVS): A novel metaheuristic algorithm. Appl. Math. Model.40, 3951–3978 (2016). [Google Scholar]
- 85.Wang, Z., Luo, Q. & Zhou, Y. Hybrid metaheuristic algorithm using butterfly and flower pollination base on mutualism mechanism for global optimization problems. Eng. Comput.37, 3665–3698 (2021). [Google Scholar]
- 86.Yildiz, B. S., Pholdee, N., Bureerat, S., Yildiz, A. R. & Sait, S. M. Enhanced grasshopper optimization algorithm using elite opposition-based learning for solving real-world engineering problems. Eng. Comput.38, 4207–4219 (2022). [Google Scholar]
- 87.Zhao, W., Wang, L. & Zhang, Z. Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl.32, 9383–9425 (2020). [Google Scholar]
- 88.Zhao, W., Wang, L. & Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng.388, 114194 (2022). [Google Scholar]
- 89.Askari, Q., Saeed, M. & Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl.161, 113702 (2020). [Google Scholar]
- 90.Zhao, W., Zhang, Z. & Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell.87, 103300 (2020). [Google Scholar]
- 91.Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl. Based Syst.222, 106926 (2021). [Google Scholar]
- 92.Braik, M. S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl.174, 114685 (2021). [Google Scholar]
- 93.Abualigah, L., Abd Elaziz, M., Sumari, P., Geem, Z. W. & Gandomi, A. H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl.191, 116158 (2022). [Google Scholar]
- 94.Dhiman, G., Garg, M., Nagar, A., Kumar, V. & Dehghani, M. A novel algorithm for global optimization: rat swarm optimizer. J. Ambient Intell. Humaniz. Comput.12, 8457–8482 (2021). [Google Scholar]
- 95.Dhiman, G. & Kaur, A. STOA: a bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell.82, 148–174 (2019). [Google Scholar]
- 96.Kaur, S., Awasthi, L. K., Sangal, A. L. & Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell.90, 103541 (2020). [Google Scholar]
- 97.Singh, N. & Kaur, J. Hybridizing sine–cosine algorithm with harmony search strategy for optimization design problems. Soft Comput.25, 11053–11075 (2021). [Google Scholar]