Biomimetics. 2025 Nov 19;10(11):787. doi: 10.3390/biomimetics10110787

FBCA: Flexible Besiege and Conquer Algorithm for Multi-Layer Perceptron Optimization Problems

Shuxin Guo 1,2, Chenxu Guo 1,2, Jianhua Jiang 1,2,*
Editor: Heming Jia
PMCID: PMC12650673  PMID: 41294459

Abstract

A Multi-Layer Perceptron (MLP), as the basic structure of neural networks, is an important component of various deep learning models such as CNNs, RNNs, and Transformers. Nevertheless, MLP training faces significant challenges, with a large number of saddle points and local minima in its non-convex optimization space, which can easily lead to gradient vanishing and premature convergence. Compared with traditional heuristic algorithms relying on a population-based parallel search, such as GA, GWO, DE, etc., the Besiege and Conquer Algorithm (BCA) employs a one-spot update strategy that provides a certain level of global optimization capability but exhibits clear limitations in search flexibility. Specifically, it lacks fast detection, fast adaptation, and fast convergence. First, the fixed sinusoidal amplitude limits the accuracy of fast detection in complex regions. Second, the combination of a random location and fixed perturbation range limits the fast adaptation of global convergence. Finally, the lack of a hierarchical adjustment under a single parameter (BCB) hinders the dynamic transition from exploration to exploitation, resulting in slow convergence. To address these limitations, this paper proposes a Flexible Besiege and Conquer Algorithm (FBCA), which improves search flexibility and convergence capability through three new mechanisms: (1) the sine-guided soft asymmetric Gaussian perturbation mechanism enhances local micro-exploration, thereby achieving a fast detection response near the global optimum; (2) the exponentially modulated spiral perturbation mechanism adopts an exponential spiral factor for fast adaptation of global convergence; and (3) the nonlinear cognitive coefficient-driven velocity update mechanism improves the convergence performance, realizing a more balanced exploration–exploitation process. 
In the IEEE CEC 2017 benchmark function test, FBCA ranked first in the comprehensive comparison with 12 state-of-the-art algorithms, with a win rate of 62% over BCA in 100-dimensional problems. It also achieved the best performance in six MLP optimization problems, showing excellent convergence accuracy and robustness, proving its excellent global optimization ability in complex nonlinear MLP optimization training. It demonstrates its application value and potential in optimizing neural networks and deep learning models.

Keywords: metaheuristic algorithm, besiege and conquer algorithm (BCA), perturbation mechanism, swarm intelligence, multi-layer perceptron (MLP)

1. Introduction

The Multi-Layer Perceptron (MLP) [1], as an early representative of deep neural networks (DNNs) [2], holds a crucial position in the development of neural networks. The success of MLP has inspired the development of numerous subsequent deep learning models, such as Convolutional Neural Networks (CNNs) [3], Transformers [4], YOLO [5], and DeepSeek [6], significantly advancing the application of artificial intelligence across various fields. Despite its strong performance in tasks such as pattern recognition, classification, and regression, MLP faces several challenges during training. These include the non-convexity and high-dimensional nonlinearity of its weight-bias space, which often causes the model to get stuck in saddle points or local minima [7,8], limiting its generalization ability and performance improvement.

Against this backdrop, Metaheuristic Algorithms (MAs), as global optimization strategies, have gradually become effective tools for addressing these issues [9]. In recent years, many metaheuristic algorithms have been incorporated into MLP training, achieving positive results. For instance, the Grey Wolf Optimizer (GWO) in swarm intelligence has excelled in MLP classification tasks [10], while Particle Swarm Optimization (PSO) has outperformed Stochastic Gradient Descent (SGD) in problems such as estimating the vertical dispersion coefficient of a natural flow, demonstrating the effectiveness of metaheuristic methods in MLP weight-bias optimization [11]. Among evolutionary algorithms, Genetic Algorithms (GAs) have been used to optimize MLP hyperparameters, achieving a 100% key recovery rate in AES side-channel attacks, significantly better than random and Bayesian baselines [12]. In addition, algorithms such as the Slime Mould Algorithm (SMA) [13], Black-winged Kite Algorithm (BKA) [14], and Harris Hawks Optimization (HHO) [15] have shown good optimization performance in different MLP training tasks. Some studies also construct hybrid or multi-objective optimization models, such as MLP-PSODE, which fuses PSO and Differential Evolution (DE) for suspended sediment load estimation [16], and the MLP-MOSSA model based on the multi-objective Salp Swarm Algorithm for water evaporation prediction [17]. These studies fully show that combining MLP with metaheuristic algorithms has become an important research direction in the field of optimization [18,19].

However, although many optimization algorithms have provided solutions for MLP training, most rely on parallel search structures. While these offer some search efficiency, they may not fully leverage strategies that combine exploitation around the current best solution with random-position exploration when seeking the global optimum. This consideration was fully incorporated into the design of the original Besiege and Conquer Algorithm (BCA) [20,21] and turned into an advantage in the optimization process. BCA balances global optimization ability with local exploitation, and although it has achieved some success in MLP optimization problems, it still has certain limitations: its search dynamics can become rigid, it can easily get stuck in local minima, and the balance between global exploration and local exploitation is insufficient. These shortcomings limit the flexibility of its global optimization capability and constrain its performance in complex nonlinear problems such as MLP weight-bias optimization. To address them, this paper proposes an improved Flexible Besiege and Conquer Algorithm (FBCA), whose key research motivations and contributions are summarized below.

1.1. Motivation

Although the original BCA demonstrates certain optimization capability, its inherent limitations restrict global search efficiency and convergence performance. To enhance its adaptability and robustness in complex optimization scenarios, this study proposes the Flexible Besiege and Conquer Algorithm (FBCA), inspired by the following four research motivations.

Motivation 1: BCA possesses structural advantages and special mechanisms over traditional MAs such as GA [22] and DE [23]. First, regarding particle generation, BCA introduces a more detailed hierarchical structure, population–army (sub-population)–soldier (particle), whereas traditional algorithms typically employ a simple population–particle structure. Second, BCA controls exploration and exploitation through a binary gate, using sine and cosine factors to guide single-point besieges and random perturbations, unlike GA and DE, which use uniform parameters such as the crossover rate or DE's F and CR. This design makes BCA easier to understand, implement, and improve, and allows flexible application to optimization scenarios such as MLP training.

Motivation 2: The exploitation phase of BCA primarily relies on a sine-based factor to guide the search direction. Although this mechanism can converge to a certain extent to a better solution, due to the limited search direction, the population is prone to fall into the local optimum and lacks the ability to further explore the solution space in detail. Therefore, it is necessary to consider introducing mechanisms with perturbative and flexible exploration at the exploitation stage to enhance the accuracy of local search and the ability of solution refinement.

Motivation 3: The global search process in BCA depends on sine and cosine factors within a fixed interval [−1, 1]. Although this periodic driving mechanism promotes early-stage diversity, premature convergence may occur if exploitation begins before reaching promising regions. To strengthen global exploration, an adaptive and dynamically regulated position-update mechanism is needed to achieve a more flexible and effective global search.

Motivation 4: In BCA, the transition between exploration and exploitation is governed by a single binary control gate BCB. This fixed control structure restricts the algorithm’s flexibility during phase transitions. Once a branch is selected, soldiers can only follow fixed update rules, preventing dynamic balance at the mechanism level. Hence, a hierarchical and self-adaptive control structure is proposed, refining the new mechanisms within both BCB branches and introducing additional binary gates to achieve a more flexible exploration–exploitation balance in FBCA.

1.2. Contributions

The main contributions of this paper are summarized as follows:

  • Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism: an optimization mechanism that integrates Gaussian flexible micro-perturbations under sine factor guidance, enhancing the ability to quickly detect high-precision solutions, reducing the risk of local stagnation.

  • Exponentially Modulated Spiral Perturbation Mechanism: a position update mechanism that applies exponential modulation through an adaptive spiral factor to improve population diversity and ensure fast-adaptive global convergence.

  • Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism: drawing on the PSO’s velocity-based update mechanism, the nonlinear cognitive coefficient dynamically regulates the soldier position update, thereby improving fast-convergent performance and achieving a balanced exploration–exploitation trade-off.

  • Validation on IEEE CEC 2017 and Six MLP Problems: extensive experiments on the IEEE CEC 2017 benchmark set (30D, 50D, and 100D) demonstrate FBCA’s excellence in numerical accuracy, convergence behavior, stability, and the Wilcoxon rank sum test. Notably, FBCA shows outstanding performance in high-dimensional composite function optimization. Moreover, in six MLP optimization problems, FBCA achieves an order-of-magnitude lead in the mean results of MSE on XOR and Heart datasets, surpassing the original BCA and other state-of-the-art algorithms such as SMA in function approximation problems.

2. Related Work

2.1. BCA: Besiege and Conquer Algorithm

This section reviews the BCA [21], which explicitly divides the optimization process into exploration and exploitation, controlled by the parameter BCB. The exploitation phase focuses on generating new soldiers around the best army using a sine-based factor, while the exploration phase updates soldier positions around random armies through a cosine-based factor. The BCA population is randomly initialized within the defined upper and lower bounds. It then alternates between exploration and exploitation based on the BCB parameter to estimate the optimal solution for continuous optimization problems. The detailed pseudocode is presented in Algorithm 1.

Algorithm 1 The pseudocode of the BCA

 1: Input: Population Size: N, Problem Dimension: D, Max Iteration: Max_Gen, The Number Of Soldiers: nSoldiers
 2: Output: Obtained best solution
 3: Initialize the BCB parameters
 4: Initialize the solutions' positions randomly
 5: while t ← 1 to Max_Gen do
 6:     for i ← 1 to nArmies do
 7:         for j ← 1 to nSoldiers do
 8:             for d ← 1 to D do
 9:                 if rand < BCB then
10:                     Update the position by Equation (1)
11:                     Update the position that exceeds the search boundaries by Equation (3)
12:                 else
13:                     Update the position by Equation (2)
14:                     Update the position that exceeds the search boundaries by Equation (4)
15:                 end if
16:             end for
17:         end for
18:     end for
19:     Update gBest and gBestPos
20: end while

When rand < BCB, the update is defined as BCA exploitation; when rand ≥ BCB, it is defined as BCA exploration. The BCA divides the entire population into multiple armies, each consisting of a fixed number of soldiers, which can be regarded as the descendants of their respective armies. When a soldier achieves a better fitness value than its current army, it becomes the leader of that army. During the exploitation phase, the update rule for soldiers within each army is defined by Equation (1), while in the exploration phase it follows Equation (2). The parameters α and β are random values within the range [0, 2π].

S_{j,d}^{t+1} = B_d^t + |A_{r,d}^t - A_{i,d}^t| \times \sin(\alpha), \quad \text{rand} < BCB \quad (1)
S_{j,d}^{t+1} = A_{r,d}^t + |A_{r,d}^t - A_{i,d}^t| \times \cos(\beta), \quad \text{rand} \geq BCB \quad (2)

Meanwhile, BCA also handles soldier positions that exceed the search boundaries during exploitation and exploration. If a soldier crosses the boundary during exploitation, Equation (3) is used for correction; if a soldier crosses the boundary during exploration, Equation (4) is used. Here, S_{j,d}^{t+1} is the jth soldier in the dth dimension at iteration t+1, B_d^t is the best army at the current iteration t, A_{i,d}^t is the ith army in the dth dimension at iteration t, A_{r,d}^t is a random army in the dth dimension at iteration t, and BCB is set to a fixed value of 0.8.

S_{j,d}^{t+1} = BCB \times B_d^t + (1 - BCB) \times A_{i,d}^t \quad (3)
S_{j,d}^{t+1} = lb + (ub - lb) \times \text{rand} \quad (4)
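To make the update rules concrete, the per-dimension BCA step of Equations (1)-(4) can be sketched in Python. This is an illustrative sketch, not the authors' reference implementation; the function name, the scalar (single-dimension) framing, and the NumPy random generator are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bca_update_dim(B_d, A_i_d, A_r_d, lb, ub, BCB=0.8):
    """One-dimensional BCA soldier update (Eqs. 1-4), illustrative sketch."""
    if rng.random() < BCB:  # exploitation: besiege around the best army
        alpha = rng.uniform(0, 2 * np.pi)
        s = B_d + abs(A_r_d - A_i_d) * np.sin(alpha)      # Eq. (1)
        if s < lb or s > ub:                              # boundary fix, Eq. (3)
            s = BCB * B_d + (1 - BCB) * A_i_d
    else:                   # exploration: move around a random army
        beta = rng.uniform(0, 2 * np.pi)
        s = A_r_d + abs(A_r_d - A_i_d) * np.cos(beta)     # Eq. (2)
        if s < lb or s > ub:                              # boundary fix, Eq. (4)
            s = lb + (ub - lb) * rng.random()
    return s
```

In a full run, this update would be applied for every army, soldier, and dimension, as in Algorithm 1.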

2.2. PSO: Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm [24], proposed by Kennedy and Eberhart in 1995, is a swarm intelligence optimization algorithm that simulates the collaborative behavior of bird or fish populations in a search space. The optimization process is as follows: firstly, the initial particle swarm is randomly generated in the search range according to the upper and lower bounds of the variables, and each particle is assigned a position and velocity; subsequently, the particle state is updated through iteration, and the fitness function is used to determine the quality of the solution and guide the search direction.

x_i(t+1) = x_i(t) + v_i(t+1) \quad (5)
v_i(t+1) = w \times v_i(t) + c_1 \times r_1 \times (p_i(t) - x_i(t)) + c_2 \times r_2 \times (g(t) - x_i(t)) \quad (6)

For the ith particle, the position and velocity update equations are defined in Equations (5) and (6), where x_i(t) and v_i(t) denote the current position and velocity of the particle at the tth iteration, respectively; w is the inertia weight regulating motion continuity; c_1 and c_2 represent the cognitive and social coefficients, respectively, balancing individual and group learning; r_1, r_2 ∈ [0, 1] are random numbers; p_i(t) denotes the particle's own historical optimal position; and g(t) is the global optimal position of the swarm.

v_i(t+1) = \underbrace{w \times v_i(t)}_{\text{inertia part}} + \underbrace{c_1 \times r_1 \times (p_i(t) - x_i(t))}_{\text{cognitive part}} + \underbrace{c_2 \times r_2 \times (g(t) - x_i(t))}_{\text{social part}} \quad (7)

As shown in Equation (7), the particle’s velocity update process can be decomposed into three parts. The inertial part maintains the particle motion trend to expand the search range; the cognitive part guides the particle back to its individual optimal region; and the social part drives the particle to approach the global optimal of the group. The interaction of the three components enables the PSO to achieve a dynamic balance between global exploration and local exploitation, which results in a better global optimization capability and convergence nature.
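The three-part decomposition above maps directly onto a vectorized PSO step. The following sketch shows one iteration for a whole swarm; the parameter values w = 0.7 and c1 = c2 = 1.5 are common illustrative choices, not values prescribed by this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration for the whole swarm (Eqs. 5-7), illustrative sketch.

    x, v, p_best: (N, D) arrays; g_best: (D,) array.
    """
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    inertia = w * v                        # keeps the current motion trend
    cognitive = c1 * r1 * (p_best - x)     # pull toward each personal best
    social = c2 * r2 * (g_best - x)        # pull toward the global best
    v_new = inertia + cognitive + social   # Eq. (6)
    x_new = x + v_new                      # Eq. (5)
    return x_new, v_new
```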

2.3. MLP: Multi-Layer Perceptron

The MLP is a typical feed-forward artificial neural network model [25] consisting of at least three layers of nodes: an input layer, a hidden layer, and an output layer. The example model in Figure 1 contains one hidden layer. The orange nodes denote the input layer neurons, whose number depends on the dimension of the input features, denoted X1 to Xi; the purple nodes denote the hidden layer neurons, denoted H1 to Hj. The number of hidden layers and the number of nodes in each are not fixed and can be adjusted according to the specific task requirements and experimental settings. The blue nodes denote the output layer neurons, denoted O1 to Ok, whose number is usually determined by the type of learning task.

Figure 1. A simple MLP neural network model.

In the training process of MLP, the Mean Squared Error (MSE) is often used as a performance evaluation metric, calculated as shown in Equation (8), where m denotes the number of output units, d_i^k is the target output value of the kth training sample at the ith output unit, o_i^k is the corresponding actual output value, and s denotes the number of training samples in the dataset.

\overline{MSE} = \frac{\sum_{k=1}^{s} \sum_{i=1}^{m} \left(o_i^k - d_i^k\right)^2}{s} \quad (8)
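Equation (8) is straightforward to implement. The following hedged sketch assumes the network outputs and targets have been collected into (s, m) arrays; the function name is ours.

```python
import numpy as np

def mlp_mse(outputs, targets):
    """Mean squared error over a dataset (Eq. 8), illustrative sketch.

    outputs, targets: (s, m) arrays -- s samples, m output units.
    The squared errors over all output units are summed, then averaged
    over the s training samples.
    """
    s = outputs.shape[0]
    return np.sum((outputs - targets) ** 2) / s
```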

3. Methods

3.1. Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism

The original BCA exploitation is driven only by a sine factor, and this single mechanism can easily lead the population into a local optimum, limiting solution accuracy. Therefore, this section proposes the Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism. On top of the original sine factor, a small Gaussian perturbation [26] is added as a correction term that breaks the original symmetric structure without changing the main search direction, helping the algorithm avoid stagnating at a fixed value and improving the probability of the population escaping local optima.

S_{j,d}^{t+1} = B_d^t + |A_{r,d}^t - A_{i,d}^t| \times \sin(\alpha) + N(0, 0.1^2) \quad (9)

As shown in Equation (9), compared with the original BCA exploitation mechanism, this mechanism adds a Gaussian perturbation term N(0, 0.1^2). The position change in the search space is shown in Figure 2, where the soldiers with and without Gaussian perturbation correspond to the sine-guided soft asymmetric Gaussian perturbation mechanism and the original BCA exploitation mechanism, respectively. The original BCA relies only on positions generated by sine guidance, which may cause the algorithm to miss more refined solutions. By introducing a small Gaussian perturbation, the mechanism adds slight oscillations while maintaining stability in the original direction, allowing the population to explore potential optimal solutions in adjacent areas, thereby effectively preventing the population from falling into local optima and further improving overall convergence accuracy.
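A minimal sketch of Equation (9), with illustrative function and variable names, shows how the Gaussian correction rides on top of the sine-guided besiege step without changing its main direction.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_perturbed_update(B_d, A_i_d, A_r_d, sigma=0.1):
    """Sine-guided update with a soft Gaussian correction (Eq. 9), sketch."""
    alpha = rng.uniform(0, 2 * np.pi)
    base = B_d + abs(A_r_d - A_i_d) * np.sin(alpha)  # original sine-guided besiege
    return base + rng.normal(0.0, sigma)             # small N(0, 0.1^2) micro-perturbation
```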

Figure 2. Schematic of the soldier position update mechanism with soft Gaussian perturbation.

3.2. Exponentially Modulated Spiral Perturbation Mechanism

The original BCA has a rigid exploitation boundary controlled by a single BCB binary gate. However, if the population enters the exploitation phase without sufficiently completing preliminary exploration, convergence often stalls, limiting further improvement in algorithm performance. Therefore, this section details the Exponentially Modulated Spiral Perturbation Mechanism. It adopts an adaptive spiral perturbation factor [27] to moderately enhance the population's exploration capability, increasing its distribution diversity in the search space so that the population can escape local optima and avoid premature convergence.

Spiral\_Factor = e^{bl} \times \cos(2\pi l) \quad (10)

Compared with the original BCA mechanism, the main improvement of this paper is that the trigonometric coefficient in the position update is replaced by a spiral perturbation factor Spiral_Factor, whose mathematical definition is shown in Equation (10) and whose role in the position update is shown in Equation (11). This factor is not a linear spiral but an improvement based on the logarithmic spiral. Unlike traditional algorithms that modulate only the exponential parameter l [28], the spiral factor proposed in this paper dynamically updates both parameters b and l during the exponential modulation stage to enhance the flexibility and global exploration capability of the search process. The specific update method for b and l is given in Equation (12).

S_{j,d}^{t+1} = A_{i,d}^t + Spiral\_Factor \times |A_{r,d}^t - A_{i,d}^t| \quad (11)

As shown in Figure 3, we compared the changing trends of two spiral factors: a fixed spiral factor with b = 1 [28], and a spiral factor with b dynamically updated according to Equation (12). The green dashed box in the figure shows that the spiral factor with dynamic b exhibits a higher peak amplitude, meaning that it provides a wider exploration space for the population during the search. Furthermore, the variation curve over iterations 200 to 250 in the red dashed box shows that the spiral factor with dynamic b can jump from a value close to 0 to a peak of approximately 3 within a very short iteration interval. This characteristic demonstrates that the proposed exponentially modulated spiral factor has a stronger ability to change instantaneously and escape local extrema, thereby significantly improving the global search performance of the algorithm.

Figure 3. Comparison of spiral factors with constant b and dynamic b.

Figure 4 is a schematic diagram of the mechanism, showing the value changes of Spiral_Factor over 500 iterations and the position distribution of soldiers under spiral perturbation. The value of Spiral_Factor is mainly concentrated in the range [-2, 3], while the values of sin(α) and cos(β) used in the original BCA lie in [-1, 1]. By comparison, the value range of Spiral_Factor is more than twice as wide as that of the original BCA sine and cosine factors.

l = 1 - \left(\frac{t}{Max\_Gen} + 2\right) \times rand, \qquad b = 1 + 0.5 \times rand \times \left(1 - \left(\frac{t}{Max\_Gen}\right)^2\right) \quad (12)

Figure 4. Variation in Spiral_Factor and the soldier position updating with spiral perturbation.

In fact, the core of the improved Spiral Perturbation Mechanism lies in its introduction of an exponentially modulated adaptive spiral perturbation factor. This factor exhibits exponential variation, allowing the Spiral_Factor to rapidly increase from a small value near 0 to 2 or even 3, thereby prompting the population to conduct large-step global exploration and preventing premature convergence. The iterative update method for the parameters b and l in Spiral_Factor is shown in Equation (12), where b is in the range [1, 1.5] and l is a linearly decreasing variable with a range of [−2, 1].
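Under one plausible reading of the b and l schedules in Equation (12), the factor of Equation (10) and the position update of Equation (11) can be sketched as follows. The function names and the exact schedule forms are our assumptions, not the authors' reference code.

```python
import numpy as np

rng = np.random.default_rng(3)

def spiral_factor(t, max_gen):
    """Exponentially modulated spiral factor (Eqs. 10 and 12), sketch.

    Both b and l are redrawn on every call, so the factor can jump
    abruptly between small and large magnitudes, which is what lets
    the population escape local extrema.
    """
    # l: random value whose lower end drifts from about -1 toward -2
    l = 1 - (t / max_gen + 2) * rng.random()
    # b: random value in [1, 1.5] shrinking toward 1 as iterations progress
    b = 1 + 0.5 * rng.random() * (1 - (t / max_gen) ** 2)
    return np.exp(b * l) * np.cos(2 * np.pi * l)        # Eq. (10)

def spiral_update(A_i_d, A_r_d, t, max_gen):
    """Soldier position update under spiral perturbation (Eq. 11)."""
    return A_i_d + spiral_factor(t, max_gen) * abs(A_r_d - A_i_d)
```

Because b*l never exceeds 1.5 under these schedules, the factor's magnitude stays below e^1.5 ≈ 4.48, consistent with the peaks near 3 described above.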

3.3. Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism

The exploration mechanism of the original BCA is driven by the cosine factor, but long-term use of fixed cosine exploration may cause the population to miss the optimal solution, making it difficult for the original BCA to balance exploration and exploitation well. To this end, this section adopts the Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism. By introducing and improving the velocity concept of PSO, this mechanism constructs a soldier position update driven by a random army reference position [29,30] combined with a dynamically changing cognitive coefficient, so that the algorithm can better balance the exploration and exploitation process.

S_{j,d}^{t+1} = A_{i,d}^t + v_{i,d}^{t+1} \quad (13)

As shown in Equation (13), compared to the original BCA, this mechanism removes the position update term driven by the cosine factor and instead drives the position update with the velocity term v_{i,d}^{t+1}. The specific calculation of the velocity is shown in Equation (14), which follows the velocity update concept of PSO but is not identical: it mainly improves the cognitive part of PSO while retaining the inertia and social components.

v_{i,d}^{t+1} = w \times v_{i,d}^t + c_1 \times r_1 \times (A_{r,d}^t - A_{i,d}^t) + c_2 \times r_2 \times (B_d^t - A_{i,d}^t) \quad (14)

The inertia and social components are retained because they provide a directional basis for the search and maintain global convergence while accounting for the global optimal solution. The improvement of the cognitive part of PSO is mainly reflected in the design of the cognitive coefficient c_1 and its reference position. Specifically, the mechanism replaces the individual optimal position p_i(t) of traditional PSO with a random army position, thereby enhancing the exploration ability of the population. The setting of c_1 is shown in Equation (15); its value shows a nonlinear decreasing trend, ranging from 0.2 to 0.3. The change pattern of c_1 can be clearly seen in Figure 5.

c_1 = 0.2 + 0.1 \times rand \times \left(1 - \left(\frac{t}{Max\_Gen}\right)^2\right), \quad t \leq Max\_Gen \quad (15)

Figure 5. Schematic of nonlinear cognitive coefficient-driven velocity update mechanism.

The entire Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism is triggered by a binary gate probability, whose expression is shown in Equation (16) and whose value lies in [0.1, 0.15]. This binary gate adopts a small-probability PSO-style velocity update so that, when the algorithm enters the exploration branch, it can maintain overall exploration capability while generating "fine search seeds" at specific moments. This ensures that when the population shows an early convergence trend, some individuals can still quickly approach the global optimum through the velocity accumulation effect of PSO. The introduction of this mechanism significantly enhances the exploration-exploitation balance of the algorithm, making the convergence process more stable and efficient.

p_2 = 0.1 + 0.05 \times rand \times \left(1 - \frac{t}{Max\_Gen}\right), \quad t \geq \frac{Max\_Gen}{10} \quad (16)
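The coefficient and gate schedules, together with the velocity-driven update of Equations (13) and (14), can be sketched as follows. The inertia weight w and social coefficient c2 are illustrative assumptions, as this text does not fix their values, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)

def cognitive_c1(t, max_gen):
    """Nonlinearly decreasing cognitive coefficient (Eq. 15), range [0.2, 0.3]."""
    return 0.2 + 0.1 * rng.random() * (1 - (t / max_gen) ** 2)

def gate_p2(t, max_gen):
    """Small trigger probability for the velocity branch (Eq. 16), range [0.1, 0.15]."""
    return 0.1 + 0.05 * rng.random() * (1 - t / max_gen)

def velocity_update(v, A_i_d, A_r_d, B_d, t, max_gen, w=0.7, c2=0.3):
    """Velocity-driven soldier update (Eqs. 13 and 14), sketch.

    The PSO personal best is replaced by a random army A_r to boost
    exploration; w and c2 are illustrative values.
    """
    c1 = cognitive_c1(t, max_gen)
    r1, r2 = rng.random(), rng.random()
    v_new = w * v + c1 * r1 * (A_r_d - A_i_d) + c2 * r2 * (B_d - A_i_d)  # Eq. (14)
    return A_i_d + v_new, v_new                                          # Eq. (13)
```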

3.4. FBCA: Flexible Besiege and Conquer Algorithm

To achieve superior algorithmic performance, the Flexible Besiege and Conquer Algorithm (FBCA) was proposed by combining the three aforementioned innovative mechanisms. This section, using flowcharts and algorithm pseudocode, will detail how these three new mechanisms are integrated to form FBCA and how FBCA’s flexibility is demonstrated. Furthermore, the comprehensive improvement effect of FBCA in escaping local optimum, improving convergence capacity, and balancing exploration and exploitation will be illustrated.

\text{When } r_1 < BCB: \quad S_{j,d}^{t+1} = \begin{cases} B_d^t + |A_{r,d}^t - A_{i,d}^t| \times \sin(\alpha) + N(0, 0.1^2), & \text{if } r_2 < p_1 \\ A_{i,d}^t + Spiral\_Factor \times |A_{r,d}^t - A_{i,d}^t|, & \text{otherwise} \end{cases} \quad (17)
\text{When } r_1 \geq BCB: \quad S_{j,d}^{t+1} = \begin{cases} A_{i,d}^t + v_{i,d}^{t+1}, & \text{if } r_3 < p_2 \\ A_{r,d}^t + |A_{r,d}^t - A_{i,d}^t| \times \cos(\beta), & \text{otherwise} \end{cases} \quad (18)

First, the overall mathematical expression of FBCA is given by Equations (17) and (18), where r_1, r_2, and r_3 are independent random values in [0, 1]. The main improvement lies in further refining the two branches controlled by BCB in the original BCA, introducing probability p_1 in the original exploitation branch and probability p_2 in the original exploration branch. When r_1 < BCB, the r_2 < p_1 sub-branch may be triggered; when r_1 ≥ BCB, the r_3 < p_2 sub-branch may be triggered. For soldier positions that exceed the search boundaries, FBCA follows the original BCA method: if an out-of-bounds position occurs when r_1 < BCB, it is corrected according to Equation (3); if it occurs when r_1 ≥ BCB, it is corrected according to Equation (4).

The control relationship between the two probabilities and the specific mechanisms is shown in Figure 6, whose flowchart clearly presents the operating logic of FBCA. Here, p_1 is the alternating control probability of the Gaussian perturbation mechanism and the spiral perturbation mechanism, and its value is fixed at 0.5. When r_2 < p_1, the algorithm enters the Gaussian perturbation stage: the Gaussian perturbation term N(0, 0.1^2) is first initialized, and then, under the guidance of the sine factor, the perturbation term N is introduced as a correction term into the soldier position update. When r_2 ≥ p_1, the spiral perturbation branch is entered: the exponential parameters b and l of the spiral factor are first updated according to Equation (12), the spiral factor is then calculated using Equation (10), and finally it is introduced into Equation (11) to complete the soldier position update.

Figure 6. Flow chart of the FBCA.

With the participation of probability p_2, the original exploration branch is divided into two stages: one retains the cosine-driven exploration mechanism of the original BCA, and the other introduces the cognitive coefficient-driven velocity update mechanism. Before judging whether r_3 is less than p_2, the value of p_2 is updated according to Equation (16). If r_3 < p_2, the cognitive coefficient-driven velocity update mechanism is triggered: the velocity term with nonlinear coefficient c_1 is updated through Equation (14) and then introduced into Equation (13) to update the soldier position. If r_3 ≥ p_2, the cosine-driven exploration mechanism is used to update the soldier position.

The pseudocode of FBCA is shown in Algorithm 2. As can be seen from the algorithm flow, FBCA probabilistically triggers the three new mechanisms during each iteration. From the perspective of Creation 1, the adopted Gaussian perturbation term N is a very small asymmetric perturbation, which gives FBCA slight flexibility and local exploration ability while maintaining sine-guided exploitation. From the perspective of Creation 2, the exponentially modulated spiral perturbation factor is an adaptive mechanism whose exponential characteristics not only drive fine local exploitation but also enable large-step global exploration, effectively avoiding local optima. From the perspective of Creation 3, the nonlinear cognitive coefficient-driven velocity update mechanism dynamically generates soldier positions so that the algorithm can converge quickly once excellent individuals appear, thereby improving global convergence performance. In summary, compared with the original BCA, FBCA has adaptive regulation characteristics and can flexibly switch between exploration and exploitation mechanisms at different iteration stages to achieve a more balanced and efficient global search.
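As a compact complement to the pseudocode, one per-dimension FBCA update following the branching of Equations (17) and (18) can be sketched in Python (boundary handling via Equations (3) and (4) is omitted for brevity). The values chosen for w, c2, and the b and l schedules are illustrative assumptions, not the authors' fixed settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def fbca_update_dim(B_d, A_i_d, A_r_d, v, t, max_gen, BCB=0.8, p1=0.5):
    """One per-dimension FBCA soldier update (Eqs. 17-18), illustrative sketch.

    Returns (new_position, velocity); velocity changes only in Creation 3.
    """
    if rng.random() < BCB:                   # refined exploitation branch
        if rng.random() < p1:                # Creation 1: soft Gaussian micro-perturbation
            alpha = rng.uniform(0, 2 * np.pi)
            s = B_d + abs(A_r_d - A_i_d) * np.sin(alpha) + rng.normal(0.0, 0.1)
        else:                                # Creation 2: exponential spiral perturbation
            l = 1 - (t / max_gen + 2) * rng.random()
            b = 1 + 0.5 * rng.random() * (1 - (t / max_gen) ** 2)
            s = A_i_d + np.exp(b * l) * np.cos(2 * np.pi * l) * abs(A_r_d - A_i_d)
    else:                                    # refined exploration branch
        p2 = 0.1 + 0.05 * rng.random() * (1 - t / max_gen)
        if rng.random() < p2:                # Creation 3: velocity-driven update
            c1 = 0.2 + 0.1 * rng.random() * (1 - (t / max_gen) ** 2)
            v = (0.7 * v + c1 * rng.random() * (A_r_d - A_i_d)
                 + 0.3 * rng.random() * (B_d - A_i_d))
            s = A_i_d + v
        else:                                # original cosine-driven exploration
            beta = rng.uniform(0, 2 * np.pi)
            s = A_r_d + abs(A_r_d - A_i_d) * np.cos(beta)
    return s, v
```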

Algorithm 2 The pseudocode of the FBCA

 1: Input: Population Size: N, Problem Dimension: D, Max Iteration: Max_Gen, Number of Soldiers: nSoldiers
 2: Output: Obtained best solution
 3: Initialize the solutions’ positions randomly
 4: Initialize the velocities and BCB parameters
 5: for t ← 1 to Max_Gen do
 6:     for i ← 1 to nArmies do
 7:         for j ← 1 to nSoldiers do
 8:             for d ← 1 to D do
 9:                 if rand < BCB then
10:                     if rand < p1 then
11:                         Creation1: Sine-Guided Soft Asymmetric Gaussian Perturbation
12:                         Initialize the Gaussian perturbation term N
13:                         Update the soldier position by Equation (9)
14:                         End Creation1
15:                     else
16:                         Creation2: Exponentially Modulated Spiral Perturbation
17:                         Update Spiral_Factor by Equation (10)
18:                         Update the soldier position by Equation (11)
19:                         End Creation2
20:                     end if
21:                     Update the soldier position that exceeds the search boundaries by Equation (3)
22:                 else
23:                     Update p2 by Equation (16)
24:                     if rand < p2 then
25:                         Creation3: Nonlinear Cognitive Velocity Update
26:                         Update v_{i,d} by Equation (14)
27:                         Update the position by Equation (13)
28:                         End Creation3
29:                     else
30:                         Update the position by Equation (2)
31:                     end if
32:                     Update the soldier position that exceeds the search boundaries by Equation (4)
33:                 end if
34:             end for
35:         end for
36:     end for
37:     Update gBest and gBestPos
38: end for
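The control flow of Algorithm 2 can be sketched in code. Below is a minimal, runnable Python skeleton of the FBCA loop structure; the bodies of the three "Creation" rules are simplified stand-ins for Equations (2) and (9)–(16), which are not reproduced in this section, so only the probabilistic branching (BCB, p1, p2) and the loop nesting mirror the pseudocode.

```python
import math
import random

def fbca_sketch(obj, dim, n_armies=5, n_soldiers=6, max_gen=200,
                lb=-10.0, ub=10.0, bcb=0.5, p1=0.5, seed=0):
    """Structural sketch of Algorithm 2 (FBCA control flow).

    The three 'Creation' bodies are simplified stand-ins for the paper's
    Equations (2) and (9)-(16); only the branching and loop nesting
    follow the pseudocode.
    """
    rng = random.Random(seed)
    pos = [[[rng.uniform(lb, ub) for _ in range(dim)]
            for _ in range(n_soldiers)] for _ in range(n_armies)]
    vel = [[[0.0] * dim for _ in range(n_soldiers)] for _ in range(n_armies)]
    g_best, g_best_pos = float("inf"), None

    for t in range(1, max_gen + 1):
        for i in range(n_armies):
            for j in range(n_soldiers):
                x, v = pos[i][j], vel[i][j]
                for d in range(dim):
                    ref = g_best_pos[d] if g_best_pos else x[d]
                    if rng.random() < bcb:
                        if rng.random() < p1:
                            # Creation 1: sine-guided step plus a small
                            # Gaussian perturbation (stand-in).
                            x[d] = (ref
                                    + math.sin(rng.random() * 2 * math.pi)
                                    * abs(ref - x[d])
                                    + rng.gauss(0.0, 0.1))
                        else:
                            # Creation 2: exponentially modulated spiral
                            # step (stand-in).
                            l = rng.uniform(-1.0, 1.0)
                            b = 1.0 - t / max_gen
                            x[d] = (ref + abs(ref - x[d])
                                    * math.exp(b * l)
                                    * math.cos(2 * math.pi * l))
                    else:
                        p2 = t / max_gen  # stand-in for Equation (16)
                        if rng.random() < p2:
                            # Creation 3: nonlinear cognitive velocity
                            # update (stand-in).
                            c1 = 2.0 * (1.0 - t / max_gen) ** 2
                            v[d] = 0.7 * v[d] + c1 * rng.random() * (ref - x[d])
                            x[d] += v[d]
                        else:
                            x[d] = ref + rng.uniform(-1.0, 1.0) * abs(ref - x[d])
                    x[d] = min(max(x[d], lb), ub)  # boundary handling
                f = obj(x)
                if f < g_best:
                    g_best, g_best_pos = f, list(x)
    return g_best, g_best_pos

def sphere(x):
    return sum(c * c for c in x)

best, best_pos = fbca_sketch(sphere, dim=5)
```

Even with these placeholder update rules, the skeleton makes the one-spot structure visible: every soldier in every army consults the single global best position before moving, which is exactly the behavior the three mechanisms are designed to flexibilize.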

3.5. Analyzing the Computational Complexity of FBCA

To evaluate the computational complexity of FBCA, we define the following parameters: number of iterations T, problem dimension D, population size N, number of soldiers in each army nSoldiers, and function evaluation cost c. The overall computational complexity of FBCA mainly consists of the following parts: problem definition, initialization, soldier position update, and population evaluation.

First, the computational complexity of the problem definition phase is O(1). In the initialization phase, the algorithm initializes the positions of N × nSoldiers soldiers in each dimension, so the complexity is O(N × nSoldiers × D). In subsequent iterations, the main computational overhead is concentrated in the soldier update and population evaluation phases. The impact of the three new mechanisms on the complexity of a single iteration is as follows:

  •   Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism: each dimension performs one sine operation and one draw of the Gaussian perturbation correction term N(0, 0.1^2). Each operation costs O(1), so the per-dimension cost for a single soldier is O(1).

  •   Exponentially Modulated Spiral Perturbation Mechanism: this involves evaluating an exponential function (exp), a cosine function (cos), and the linear parameters b and l, so the per-dimension cost is also O(1).

  •   Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism: this involves computing the nonlinear cognitive coefficient c1 and evaluating the velocity update formula; the per-dimension cost is likewise O(1).

Because each soldier executes only one of the three mechanisms per generation, and each mechanism costs O(1) per dimension, generating a single soldier costs O(D). Considering all soldiers and all iterations, the overall complexity of the soldier update stage is O(nSoldiers × T × D). In the evaluation stage, the algorithm evaluates the fitness of all armies and selects the best army in each iteration. A total of N × nSoldiers soldiers are generated at initialization, and each evaluation costs c, so the overall evaluation complexity is O(T × N × nSoldiers × c).

O(FBCA) = O(problem definition) + O(initialization) + O(soldier update) + O(population evaluation)
= O(1) + O(N × nSoldiers × D) + O(nSoldiers × T × D) + O(T × N × nSoldiers × c)
= O(N × nSoldiers × D + nSoldiers × T × D + T × N × nSoldiers × c) (19)

Therefore, the overall complexity metric of FBCA is consistent with the baseline algorithm BCA. The three new mechanisms only introduce constant-level additional computations during the soldier update stage, without increasing the overall complexity cost. This indicates that FBCA successfully improves algorithm performance while maintaining computational efficiency. In summary, the overall computational complexity of FBCA is shown in Equation (19).
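As a toy sanity check on the evaluation term of Equation (19), the nested loop below simply counts objective-function calls in a BCA/FBCA-style iteration structure; it is not the actual implementation, only a demonstration that the number of evaluations grows as T × nArmies × nSoldiers, each call costing c.

```python
def count_evaluations(T, n_armies, n_soldiers):
    """Count fitness calls in a BCA/FBCA-style loop (one per soldier)."""
    evals = 0
    for _ in range(T):                   # iterations
        for _ in range(n_armies):        # armies
            for _ in range(n_soldiers):  # soldiers: one evaluation each
                evals += 1               # stands in for one call of cost c
    return evals

# The three new mechanisms add only O(1) work per dimension and do not
# change this count, which is why FBCA matches BCA's complexity class.
assert count_evaluations(500, 5, 6) == 500 * 5 * 6
```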

4. Experiments and Analysis

In this section, we evaluate the performance of FBCA on the IEEE CEC 2017 benchmark functions. First, Section 4.1 describes the experimental environment, dataset, and parameter settings. Subsequently, Section 4.2, Section 4.3, Section 4.4 and Section 4.5 systematically compare FBCA with 13 other state-of-the-art algorithms and provide a comprehensive performance evaluation of the test results: Section 4.2 conducts qualitative analysis, Section 4.3 quantitative evaluation, Section 4.4 statistical testing, and Section 4.5 stability analysis. The compared algorithms include the original BCA; the classical algorithms PSO, GA, and SCA; the emerging algorithms HOA, COA, PO, and HHO; and the high-performance algorithms SMA, CPO, BKA, etc. Finally, Section 4.6 performs parameter sensitivity analysis, ablation experiments, and a comparison with state-of-the-art (SOTA) algorithms.

4.1. Experiment Settings

Environment: The hardware used in this study is a computer with an Intel(R) Core(TM) i7-11800H CPU @ 2.30 GHz. All code was implemented in MATLAB R2024a under the Windows 10 operating system.

Datasets: We used the IEEE CEC 2017 benchmark function set to verify the effectiveness of our algorithm. This function set covers unimodal functions (F1–F2), multimodal functions (F3–F9), hybrid functions (F10–F19), and composite functions (F20–F29). Its global optimal values are known, making it a standard tool for evaluating algorithm performance.

Experiment parameters: To ensure fair comparison between algorithms, we uniformly set the population size to 30 and the maximum number of iterations to 500 in the benchmark function experiments with different dimensions (30D, 50D, and 100D). This parameter setting is intended to ensure the reliability and validity of the experimental results. The detailed parameter configurations of the compared algorithms are shown in Table 1.

Table 1.

Parameter settings for each algorithm.

Algorithms Parameters Values Reference
SCA a 2 [31]
r Linearly decreased from a to 0
GA CrossPercent 70% [22]
MutatPercent 20%
ElitPercent 10%
RSA Evolutionary sense randomly decreasing values between 2 and −2 [32]
Sensitive parameter β = 0.005
Sensitive parameter α = 0.1
PSO Cognitive component 2 [24]
Social component 2
DBO k and λ 0.1 [33]
b 0.3
S 0.5
BKA p 0.9 [34]
r range from [0, 1]
HHO E0 range from [−1, 1] [35]
β 1.5π
HOA Angle of inclination of the trail range from [0, 50°] [36]
Sweep Factor (SF) of the hiker range from [1, 3]
COA I 1 or 2 [37]
CPO α 0.1 [38]
Tf 0.5
T 2
PO p [0, 1] [39]
SMA vc Linearly decreased from 1 to 0 [40]

4.2. Qualitative Analysis

In order to validate and evaluate the performance of FBCA, this section selects the unimodal function F1 to examine the exploitation ability of the algorithm, and the multimodal functions F10, F13, and F17 to evaluate the performance of the algorithm in terms of the exploration–exploitation balance [41]. The search space shape is shown in the first column of Figure 7.

Figure 7.


Qualitative analysis experiment of FBCA.

In the search history diagram, the blue points represent the locations searched by the particles in the population, while the red points mark the current location of the global optimum. The contour lines are projections of the three-dimensional search space onto a two-dimensional plane, reflecting different objective function values; their color changes from yellow to green, then to blue and purple, corresponding to improving fitness, with darker colors indicating higher-quality solutions. Observing the population distribution reveals that, in most cases, the blue points are concentrated near the red points, indicating that the population gradually approaches the optimum and fully demonstrating the fine-grained search capability of FBCA during the exploitation phase, as shown in Figure 7a. At the same time, apart from the dense area near the red points, a certain degree of clustering can still be observed elsewhere, indicating that FBCA maintains good population diversity, as shown in Figure 7c.

Both the average fitness graph and the convergence curve reflect the fitness changes during the search process: the average fitness is the mean fitness value of all particles in the population, while the convergence curve tracks the fitness of the best particle. As shown in Figure 7, the algorithm follows a search pattern of exploration followed by exploitation on all four test functions, but the two curves converge differently. The average fitness, as a macroscopic measure of the whole population, drops quickly in the initial stage because of the large differences between particles at initialization; as iterations progress, FBCA mainly performs local searches around the current best soldier, so the overall fitness stabilizes after a sharp decrease. The convergence curve, in contrast, gives a more detailed view of the best particle's fitness and exhibits a monotonically decreasing trend, indicating that FBCA continuously improves solution quality. In summary, combined with the search history plot, FBCA not only accurately locates the optimal solution but also exhibits good global convergence characteristics.

The trajectory history graph shows the trajectory of the best particle during the iteration process: the horizontal axis is the iteration number and the vertical axis is the value of the best particle's first dimension, reflecting the dynamic changes in the optimal solution over 500 iterations. The large fluctuations in the initial stage indicate FBCA's ability to quickly explore the global region; as iterations progress, the algorithm exhibits differentiated search and convergence behavior on different functions. For example, in Figure 7b the algorithm completes extensive exploration and locates the center of the multimodal region within about 150 iterations, then approaches the optimal solution through fine exploitation; in Figure 7d, exploration dominates iterations 0 to 200, continuous exploitation occupies iterations 200 to 400, and the algorithm finally converges to the global optimum during iterations 400 to 500. This demonstrates that FBCA can adaptively adjust its exploration and exploitation mechanisms to achieve an effective exploration–exploitation balance.

4.3. Quantitative Analysis

This section presents the convergence curves in different dimensions, as shown in Figure 8, Figure 9 and Figure 10. Table 2 summarizes the comprehensive rankings of each algorithm in different dimensions. As shown in Table 3, we rank the algorithms on each test function one by one; their detailed mean and standard deviation data are listed in Table A1, Table A2 and Table A3, and the best results for each function are marked with a gray background. At the same time, Figure 11 visually compares algorithm performance through radar charts. Finally, the experimental results show that FBCA performs excellently in terms of convergence accuracy, avoidance of premature convergence, convergence speed, and exploration–exploitation balance.

Figure 8.


Convergence curve of the FBCA and its comparative algorithms with 30D.

Figure 9.


Convergence curve of the FBCA and its comparative algorithms with 50D.

Figure 10.


Convergence curve of the FBCA and its comparative algorithms with 100D.

Table 2.

The overall ranking results of FBCA and other algorithms.

Index FBCA BCA SCA GA RSA PSO DBO BKA HHO HOA COA CPO PO SMA
D = 30 Average ranking 2.72 3.10 9.72 12.93 12.28 8.90 5.79 6.24 7.34 10.10 13.10 2.79 7.14 2.83
Total ranking 1 4 10 13 12 9 5 6 8 11 14 2 8 3
D = 50 Average ranking 2.45 3.86 9.97 13.31 11.86 8.93 6.17 6 6.52 10.10 12.79 3.41 7.07 2.55
Total ranking 1 4 10 14 12 9 6 5 7 11 13 3 8 2
D = 100 Average ranking 2.52 4.55 10.10 13.90 11.17 9.24 6.24 6.03 5.93 9.79 12.17 4 6.69 2.66
Total ranking 1 4 11 14 12 9 7 6 5 10 13 3 8 2

The bold part indicates the optimal result.

Table 3.

The detailed ranking results of all algorithms on CEC2017 test functions.

F FBCA BCA SCA GA RSA PSO DBO BKA HHO HOA COA CPO PO SMA
30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100 30 50 100
F1 2 1 1 1 3 3 10 10 10 13 14 14 12 12 12 8 8 8 5 6 6 9 9 9 6 5 5 11 11 11 14 13 13 4 4 4 7 7 7 3 2 2
F2 7 10 10 12 12 12 9 9 9 14 14 13 8 4 3 13 13 14 11 11 8 2 1 1 3 3 6 6 2 2 10 8 4 4 5 7 5 7 5 1 6 11
F3 1 1 1 2 2 3 10 10 10 12 14 14 13 12 12 8 8 8 5 5 7 9 9 9 6 6 5 11 11 11 14 13 13 4 4 4 7 7 6 3 3 2
F4 1 1 1 3 3 4 11 11 11 14 14 14 13 12 12 10 10 10 6 7 7 5 4 3 7 6 6 9 9 9 12 13 13 4 5 5 8 8 8 2 2 2
F5 4 3 2 1 1 1 8 10 11 14 14 14 13 12 13 6 7 10 5 5 6 7 6 5 10 8 7 9 11 9 12 13 12 2 2 3 11 9 8 3 4 4
F6 5 4 4 3 3 3 9 11 13 14 14 14 12 12 11 6 6 8 4 5 5 8 7 6 11 10 10 10 9 9 13 13 12 2 2 1 7 8 7 1 1 2
F7 1 1 1 3 3 3 11 11 11 14 14 14 12 13 12 10 9 10 7 7 7 4 6 5 6 5 6 9 10 9 13 12 13 5 4 4 8 8 8 2 2 2
F8 4 5 8 2 1 3 11 11 13 12 14 14 14 13 11 9 8 12 6 9 10 5 4 2 10 10 7 7 7 6 13 12 9 1 2 4 8 6 5 3 3 1
F9 1 2 8 12 14 13 13 12 12 11 11 14 10 10 10 9 9 9 5 5 6 3 3 2 4 4 3 7 7 5 14 13 11 8 8 7 6 6 4 2 1 1
F10 1 3 2 2 4 9 9 9 7 14 14 14 12 12 10 10 11 13 7 6 11 6 7 3 5 5 5 11 10 8 13 13 12 3 2 4 8 8 6 4 1 1
F11 1 2 1 2 1 3 10 10 10 12 12 14 14 13 12 9 8 8 5 5 5 8 9 9 6 6 6 11 11 11 13 14 13 3 3 4 7 7 7 4 4 2
F12 1 2 2 2 1 1 10 10 10 12 12 14 14 13 12 9 9 8 6 6 6 8 8 9 5 5 5 11 11 11 13 14 13 3 3 3 7 7 7 4 4 4
F13 3 3 1 5 4 5 8 9 11 13 13 14 14 12 12 10 10 9 6 6 7 2 2 2 11 8 6 9 11 10 12 14 13 1 1 3 7 7 8 4 5 4
F14 3 2 3 2 1 1 9 10 10 12 13 14 13 12 12 10 8 9 5 7 6 6 9 8 7 5 5 11 11 11 14 14 13 1 3 2 8 6 7 4 4 4
F15 1 1 1 5 5 7 10 10 10 11 12 14 13 13 12 9 9 8 6 6 3 4 3 6 8 7 5 12 11 11 14 14 13 3 4 4 7 8 9 2 2 2
F16 3 1 1 2 6 4 10 11 9 12 14 12 14 13 13 6 9 8 8 8 6 5 4 10 7 5 5 11 10 11 13 12 14 1 3 3 9 7 7 4 2 2
F17 3 2 3 5 5 6 9 10 11 13 12 14 12 13 12 10 8 9 6 7 8 2 3 2 7 6 4 11 11 10 14 14 13 1 1 1 8 9 7 4 4 5
F18 3 4 2 2 1 1 11 10 10 12 13 14 13 14 12 9 9 8 6 6 6 7 7 9 5 5 5 10 11 11 14 12 13 1 2 3 8 8 7 4 3 4
F19 2 1 4 4 11 13 11 13 12 14 14 14 13 12 9 10 9 10 8 8 7 5 2 2 9 5 3 6 7 6 12 10 11 1 4 8 7 6 5 3 3 1
F20 1 1 1 3 3 4 11 9 7 14 14 14 12 12 12 10 8 9 6 5 5 7 6 6 9 11 10 8 10 11 13 13 13 4 4 3 5 7 8 2 2 2
F21 4 3 4 5 10 13 12 12 11 14 14 14 11 13 10 9 8 8 3 6 5 7 2 2 8 5 3 10 9 7 13 11 12 1 4 9 2 7 6 6 1 1
F22 4 3 1 2 2 2 7 7 8 13 14 14 11 10 9 10 11 11 5 5 5 8 8 7 9 9 10 12 12 13 14 13 12 3 4 4 6 6 6 1 1 3
F23 3 2 3 2 4 2 7 7 8 14 14 14 9 9 10 11 11 11 5 5 5 8 8 7 10 10 9 13 13 13 12 12 12 4 3 4 6 6 6 1 1 1
F24 2 1 1 1 3 3 10 10 10 14 14 14 12 12 12 9 8 9 5 6 7 8 9 8 6 5 5 11 11 11 13 13 13 4 4 4 7 7 6 3 2 2
F25 4 4 1 1 2 3 8 10 10 13 14 14 12 12 12 6 9 9 5 5 5 10 8 8 9 7 6 11 11 11 14 13 13 3 3 4 7 6 7 2 1 2
F26 4 3 1 1 1 2 8 8 10 13 12 14 11 11 11 9 9 9 5 5 5 6 7 7 10 10 8 12 13 12 14 14 13 3 4 4 7 6 6 2 2 3
F27 4 2 2 3 3 4 10 10 10 14 14 14 12 12 11 9 8 8 7 7 7 8 9 9 5 5 5 11 11 13 13 13 12 1 4 3 6 6 6 2 1 1
F28 4 1 1 1 2 3 10 9 9 12 12 14 13 13 12 6 10 7 5 5 5 7 8 10 8 6 6 11 11 11 14 14 13 2 4 4 9 7 8 3 3 2
F29 2 2 2 1 1 1 10 10 10 11 12 14 14 13 13 8 9 8 5 5 5 7 6 9 6 7 6 12 11 11 13 14 12 4 3 3 9 8 7 3 4 4

Figure 11.


Radar chart of FBCA and other algorithms’ rankings.

In terms of convergence speed and convergence accuracy, Figure 8, Figure 9 and Figure 10 show that FBCA improves its best solution in almost every iteration from 0 to 500, demonstrating excellent convergence speed. Comparing the final results at the 500th iteration, Figure 9d shows that FBCA's solution is optimal and that its convergence accuracy is significantly improved over BCA. Figure 9a,b also show that the optimal solution found by FBCA is clearly superior to those of the other algorithms, demonstrating its excellent convergence accuracy.

In terms of avoiding premature convergence, as shown in Figure 8e and Figure 9e, the slope of the curve is generally small from 0 to 100 iterations, i.e., the downward trend is not obvious; however, just as the curve appears to level off, it suddenly drops rapidly and a better solution is found. This shows that FBCA's strong exploration ability allows it to escape extreme-value traps in time when the algorithm falls into a local optimum, thereby effectively avoiding premature convergence.

The exploration–exploitation balance is a crucial metric for evaluating an algorithm's overall performance. From this perspective, the detailed and comprehensive rankings in Table 2 and Table 3 can be combined to assess FBCA's exploration–exploitation capability. Composite functions are among the most complex function types in the IEEE CEC 2017 test suite, and their multimodal extreme-value traps are well suited for evaluating this balance. Table 3 shows that the number of first-place finishes achieved by FBCA increases across the three dimensions; on the 100-dimensional problems, FBCA ranks first on 6 of the 10 composite functions, which fully indicates that its exploration–exploitation capability has been significantly improved.

Further combining Table 2, it can be seen that FBCA ranks first in the comprehensive ranking for all three dimensions. The radar chart in Figure 11 also visually shows FBCA in the lead, with the smallest enclosed area. This demonstrates FBCA's excellent ability to balance exploration and exploitation and its outstanding performance on the IEEE CEC 2017 test suite.

4.4. Statistical Testing

In this section, the Wilcoxon rank sum test was employed to evaluate the statistical significance of the algorithmic performance [42]. Specifically, for each benchmark function and dimensional setting, FBCA and the comparative algorithms were each executed 30 independent times, and the differences in their errors were ranked by sign. When the sum of positive ranks was significantly greater than that of the negative ranks (p < 0.05), the result was marked as “+”, indicating that FBCA performed significantly better than the compared algorithm. Conversely, a “–” sign indicated that FBCA performed significantly worse, while “=” denoted no statistically significant difference. The “(+/=/−)” column in Table 4 reports the win–tie–loss outcomes of this test under 30D, 50D, and 100D dimensions.
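The win–tie–loss marking described above can be sketched as follows. This is a minimal pure-Python version of the rank-sum test using the normal approximation without tie correction (in practice a library routine such as SciPy's ranksums would be used); it assumes minimization, i.e., lower error values are better for FBCA.

```python
import math

def rank_sum_mark(fbca_err, other_err, alpha=0.05):
    """Mark one benchmark result '+', '=' or '-' from FBCA's viewpoint.

    Minimal Wilcoxon rank-sum (Mann-Whitney) sketch using the normal
    approximation without tie correction; lower error is assumed better.
    """
    n1, n2 = len(fbca_err), len(other_err)
    combined = [(v, 0) for v in fbca_err] + [(v, 1) for v in other_err]
    combined.sort(key=lambda p: p[0])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # average rank over a tied block
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, src) in zip(ranks, combined) if src == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    if p >= alpha:
        return "="
    return "+" if w < mean else "-"  # lower ranks mean smaller FBCA errors
```

For example, 30 FBCA errors that are uniformly smaller than a competitor's 30 errors yield "+", matching the counting convention used in Table 4.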

Table 4.

Results of the Wilcoxon rank sum test of FBCA and other algorithms.

Algorithms D = 30 D = 50 D = 100 Total
(+/=/−) (+/=/−) (+/=/−) (+/=/−)
FBCA vs. BCA 7/16/6 13/8/8 18/6/5 38/30/19
FBCA vs. SCA 28/1/0 28/1/0 28/0/1 84/2/1
FBCA vs. GA 29/0/0 29/0/0 29/0/0 87/0/0
FBCA vs. RSA 28/1/0 28/0/1 27/1/1 83/2/2
FBCA vs. PSO 29/0/0 29/0/0 27/2/0 85/2/0
FBCA vs. DBO 25/3/1 27/2/0 26/2/1 78/7/2
FBCA vs. BKA 26/0/3 23/3/3 22/1/6 71/4/12
FBCA vs. HHO 28/0/1 28/0/1 24/2/3 80/2/5
FBCA vs. HOA 28/0/1 28/0/1 24/3/2 80/3/4
FBCA vs. COA 28/1/0 28/0/1 28/0/1 84/1/2
FBCA vs. CPO 12/6/11 17/6/6 20/5/4 49/17/21
FBCA vs. PO 27/1/1 28/0/1 24/3/2 79/4/4
FBCA vs. SMA 11/12/6 10/13/6 15/7/7 36/32/19

Experimental results demonstrate that FBCA significantly outperforms most compared algorithms. Across the three dimensions, FBCA records roughly 80 or more wins out of 87 comparisons against most algorithms, demonstrating robust performance over a wide range of problem sizes. Further comparison with the original BCA and the high-performance algorithms SMA and CPO reveals that FBCA's win rate steadily increases with dimensionality. In particular, at 100 dimensions, FBCA achieves significant win rates of 62% and 68% against BCA and CPO, respectively, demonstrating its markedly enhanced optimization capability on complex, high-dimensional problems.

4.5. Stability Analysis

Figure 12, Figure 13 and Figure 14 show the boxplot results for all algorithms across various dimensions. The horizontal axis represents the compared algorithms, and the vertical axis represents the evaluation results of each algorithm on the test function. Each algorithm’s corresponding boxplot includes statistical features such as the median, quartiles, maximum, minimum, and outliers, comprehensively reflecting the distribution characteristics of the algorithm’s performance [43]. As can be seen from the figures, FBCA demonstrates excellent performance in both result stability and robustness.

Figure 12.


Boxplots of the FBCA and its comparative algorithms with 30D.

Figure 13.


Boxplots of the FBCA and its comparative algorithms with 50D.

Figure 14.


Boxplots of the FBCA and its comparative algorithms with 100D.

From the perspective of robustness, as shown in Figure 12, Figure 13 and Figure 14, the overall results of FBCA are generally lower than those of the other algorithms on the three test functions F1, F15 and F24, and the upper and lower boundaries of its boxplots are close together, indicating small fluctuations and high stability. In particular, on the F1 function the results remain almost stable around 0, as shown in Figure 12a and Figure 13a, which further verifies that FBCA is robust across different dimensions. In addition, FBCA achieves excellent results in all three dimensions and on all four function types within each dimension, showing that the algorithm possesses both cross-dimensional and cross-function robustness.

From the perspective of stability, as the dimension increases, the optimization results of FBCA become more pronounced on the more diverse and complex hybrid and composite functions, as shown in Figure 13e and Figure 14f. This shows that FBCA can jump out of local optima and find better global solutions while still maintaining high stability on these complex and diverse functions, indicating not only good stability but also that the exploration and exploitation strategies proposed in this paper take effect, improving FBCA's exploration–exploitation balance.

4.6. Parameter Sensitivity and Mechanism Validation

In order to verify the rationality of FBCA's design, this section presents a parameter sensitivity analysis, ablation experiments, and performance comparisons with SOTA algorithms. As is well known, the IEEE CEC 2017 test suite becomes more difficult to optimize as the dimension increases; therefore, this paper selects the 29 functions at dimension 100 as the evaluation benchmark. In terms of experimental design, Section 4.6.1 conducts a sensitivity analysis on the probability parameter p1; Section 4.6.2 examines the impact of different Gaussian perturbation terms N on algorithm performance; Section 4.6.3 conducts ablation experiments by replacing the spiral perturbation mechanism; and Section 4.6.4 compares the original PSO velocity update with the cognitively driven velocity update strategy in FBCA. Finally, Section 4.6.5 provides a comprehensive performance comparison with the SaDE [44], L-SHADE [45], and L-SHADE_EpSin [46] algorithms.

4.6.1. Influence of the Probability Parameter p1

To verify the rationality of the parameter p1 setting, we conducted a parameter sensitivity analysis for different values of p1. The compared algorithms were BCA, FBCA (p1 = 0.2), FBCA (p1 = 0.5), and FBCA (p1 = 0.8). As shown in Table 5, the algorithm performs best over the 29 test functions when p1 = 0.5, ranking first overall. Further analysis reveals that FBCA with p1 = 0.2 performs exceptionally well on composite functions, achieving first place on 6 of the 10 such functions, while FBCA with p1 = 0.8 exhibits better convergence on unimodal functions. Comparatively, FBCA with p1 = 0.5 demonstrates stronger versatility and adaptability, achieving optimal results on three multimodal functions, two hybrid functions, and four composite functions. Figure 15 shows the convergence curves on the multimodal function F3, the hybrid function F11, and the composite functions F24 and F27. Overall, setting p1 = 0.5 gives FBCA excellent comprehensive adaptability across the various types of optimization functions.

Table 5.

Results of FBCA with various p1 values.

F BCA FBCA (p1 = 0.2) FBCA (p1 = 0.5) FBCA (p1 = 0.8)
F1 8.63 × 10^9 1.21 × 10^9 2.39 × 10^8 *1.92 × 10^8
F2 8.22 × 10^5 6.76 × 10^5 7.20 × 10^5 *6.63 × 10^5
F3 1.76 × 10^3 1.16 × 10^3 *1.04 × 10^3 1.07 × 10^3
F4 1.63 × 10^3 1.30 × 10^3 *1.29 × 10^3 1.32 × 10^3
F5 *6.33 × 10^2 6.37 × 10^2 6.40 × 10^2 6.48 × 10^2
F6 *2.55 × 10^3 2.67 × 10^3 2.75 × 10^3 3.31 × 10^3
F7 1.83 × 10^3 1.56 × 10^3 *1.53 × 10^3 1.62 × 10^3
F8 *4.69 × 10^4 6.88 × 10^4 7.71 × 10^4 8.82 × 10^4
F9 3.36 × 10^4 3.03 × 10^4 3.05 × 10^4 *2.50 × 10^4
F10 1.94 × 10^5 6.24 × 10^4 5.70 × 10^4 *4.83 × 10^4
F11 5.23 × 10^8 1.09 × 10^8 *9.65 × 10^7 1.38 × 10^8
F12 *1.24 × 10^4 3.18 × 10^4 5.12 × 10^4 1.89 × 10^5
F13 5.34 × 10^6 3.88 × 10^6 *3.03 × 10^6 3.35 × 10^6
F14 1.80 × 10^4 *6.65 × 10^3 2.25 × 10^4 2.66 × 10^4
F15 1.13 × 10^4 7.83 × 10^3 6.44 × 10^3 *6.23 × 10^3
F16 8.20 × 10^3 6.15 × 10^3 *5.05 × 10^3 5.12 × 10^3
F17 1.59 × 10^7 9.53 × 10^6 8.04 × 10^6 *4.62 × 10^6
F18 *7.62 × 10^3 1.16 × 10^4 1.21 × 10^4 4.32 × 10^4
F19 8.28 × 10^3 7.16 × 10^3 7.04 × 10^3 *6.93 × 10^3
F20 3.43 × 10^3 3.12 × 10^3 *3.11 × 10^3 3.17 × 10^3
F21 3.60 × 10^4 3.31 × 10^4 *3.01 × 10^4 3.06 × 10^4
F22 3.64 × 10^3 *3.45 × 10^3 3.55 × 10^3 3.78 × 10^3
F23 4.43 × 10^3 *4.14 × 10^3 4.28 × 10^3 4.80 × 10^3
F24 4.74 × 10^3 3.85 × 10^3 *3.71 × 10^3 3.79 × 10^3
F25 1.80 × 10^4 *1.54 × 10^4 1.77 × 10^4 2.17 × 10^4
F26 3.78 × 10^3 *3.76 × 10^3 3.84 × 10^3 4.06 × 10^3
F27 5.77 × 10^3 4.46 × 10^3 *4.17 × 10^3 4.77 × 10^3
F28 1.02 × 10^4 *6.81 × 10^3 6.87 × 10^3 7.46 × 10^3
F29 8.33 × 10^5 *4.23 × 10^5 8.30 × 10^5 2.52 × 10^6
Average ranking 3.21 2.10 *1.97 2.72
Total ranking 4 2 *1 3

Values marked with * indicate the optimal result.

Figure 15.


Convergence curve of the BCA, FBCA (p1 = 0.2), FBCA (p1 = 0.5) and FBCA (p1 = 0.8).

4.6.2. Effect of Gaussian Perturbation Variance

To further verify the rationality of the Gaussian perturbation term parameter N, we compared FBCA versions using N(0, 0.05^2), N(0, 0.1^2), and N(0, 0.2^2). As shown in Table 6, the N(0, 0.1^2) configuration exhibits the best overall performance, especially on hybrid and composite functions containing numerous local extremum traps. As shown in Figure 16b, FBCA with the N(0, 0.1^2) configuration converges significantly faster and with higher accuracy. Although FBCA with N(0, 0.05^2) and FBCA with N(0, 0.2^2) also achieved good results on individual functions, neither showed clearly optimal characteristics on any function type. In summary, when the Gaussian perturbation term is set to N(0, 0.1^2), FBCA not only achieves the best overall optimization performance but also demonstrates good adaptability to complex functions.
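The role of the variance setting can be illustrated numerically: the expected magnitude of a draw from N(0, σ²) is σ·sqrt(2/π) ≈ 0.8σ, so σ directly sets the micro-exploration radius around a candidate solution. A small sketch (the sample size and seed are arbitrary illustration choices):

```python
import math
import random

def mean_abs_step(sigma, n=20000, seed=1):
    """Empirical mean magnitude of the perturbation term N(0, sigma^2)."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, sigma)) for _ in range(n)) / n

# E|N(0, sigma^2)| = sigma * sqrt(2/pi) ~ 0.8 * sigma: the variance setting
# directly scales the perturbation radius, so sigma = 0.05, 0.1 and 0.2
# correspond to progressively coarser local search around a solution.
steps = {s: mean_abs_step(s) for s in (0.05, 0.1, 0.2)}
```

This is why the three compared variances trade off fine local refinement against the ability to cross nearby extremum traps, with σ = 0.1 striking the balance observed in Table 6.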

Table 6.

Results of FBCA with variations in the Gaussian term N.

F BCA FBCA (N(0, 0.05^2)) FBCA (N(0, 0.1^2)) FBCA (N(0, 0.2^2))
F1 9.34 × 10^9 4.95 × 10^8 3.25 × 10^8 *2.63 × 10^8
F2 1.22 × 10^6 6.82 × 10^5 *6.36 × 10^5 6.87 × 10^5
F3 1.72 × 10^3 1.06 × 10^3 1.03 × 10^3 *9.62 × 10^2
F4 1.62 × 10^3 1.25 × 10^3 1.25 × 10^3 *1.24 × 10^3
F5 *6.31 × 10^2 6.39 × 10^2 6.40 × 10^2 6.42 × 10^2
F6 *2.54 × 10^3 2.77 × 10^3 2.63 × 10^3 2.78 × 10^3
F7 1.84 × 10^3 1.61 × 10^3 *1.56 × 10^3 1.57 × 10^3
F8 *4.00 × 10^4 6.84 × 10^4 7.08 × 10^4 6.45 × 10^4
F9 3.37 × 10^4 *2.80 × 10^4 2.93 × 10^4 2.84 × 10^4
F10 1.97 × 10^5 *5.31 × 10^4 5.89 × 10^4 5.35 × 10^4
F11 5.26 × 10^8 *9.98 × 10^7 1.18 × 10^8 1.27 × 10^8
F12 *1.60 × 10^4 4.10 × 10^4 4.69 × 10^4 6.32 × 10^4
F13 5.30 × 10^6 3.71 × 10^6 2.85 × 10^6 *2.26 × 10^6
F14 *5.63 × 10^3 2.07 × 10^4 1.65 × 10^4 1.30 × 10^4
F15 1.16 × 10^4 7.53 × 10^3 *5.70 × 10^3 6.57 × 10^3
F16 8.38 × 10^3 5.42 × 10^3 *5.41 × 10^3 5.85 × 10^3
F17 2.33 × 10^7 8.70 × 10^6 *7.47 × 10^6 7.92 × 10^6
F18 *6.30 × 10^3 8.28 × 10^3 1.23 × 10^4 1.24 × 10^4
F19 8.27 × 10^3 6.81 × 10^3 *6.79 × 10^3 7.15 × 10^3
F20 3.41 × 10^3 3.11 × 10^3 *3.10 × 10^3 3.14 × 10^3
F21 3.61 × 10^4 3.16 × 10^4 *3.02 × 10^4 3.11 × 10^4
F22 3.60 × 10^3 *3.53 × 10^3 3.60 × 10^3 3.54 × 10^3
F23 4.45 × 10^3 4.38 × 10^3 4.34 × 10^3 *4.31 × 10^3
F24 4.64 × 10^3 3.86 × 10^3 3.72 × 10^3 *3.67 × 10^3
F25 1.74 × 10^4 1.72 × 10^4 1.74 × 10^4 *1.63 × 10^4
F26 *3.77 × 10^3 3.83 × 10^3 3.79 × 10^3 3.80 × 10^3
F27 5.98 × 10^3 4.35 × 10^3 *4.01 × 10^3 4.03 × 10^3
F28 9.16 × 10^3 *6.98 × 10^3 7.00 × 10^3 7.04 × 10^3
F29 8.00 × 10^5 *5.21 × 10^5 9.02 × 10^5 1.26 × 10^6
Average ranking 3.14 2.38 *2.14 2.34
Total ranking 4 3 *1 2

Values marked with * indicate the optimal result.

Figure 16.


Convergence curve of the FBCA with variations in the Gaussian term N.

4.6.3. Comparison of Spiral Perturbation Mechanisms

To verify the effectiveness of the proposed exponentially modulated spiral perturbation mechanism, we designed ablation experiments for four spiral mechanisms, named Spiral1 to Spiral4. Spiral1 and Spiral2 are two variants of the logarithmic spiral: Spiral1 corresponds to the proposed mechanism with a dynamic exponential parameter b, while Spiral2 has a fixed b = 1; Spiral3 is the Archimedean Spiral, and Spiral4 is the Rose Spiral.

ψ = rand · 4π
linear_term = c + d_arch · ψ
Arch_factor = linear_term · cos(ψ) (20)

The mathematical formula for the Archimedean Spiral is defined in Equation (20), where c = 0.5 and d_arch = 0.3. The mathematical formula for the Rose Spiral is defined in Equation (21), where e_rose = 1.2 and n = 3. In Equations (20) and (21), rand is a random value in the range [0, 1].

k(t) = 1 + 0.3 · t / Max_Gen
ξ = rand · 2π · k(t)
Rose_factor = e_rose · cos(n · ξ) · cos(ξ) (21)
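Equations (20) and (21) translate directly into code. The sketch below computes the two spiral factors, with the random draw passed in explicitly as r for reproducibility; Max_Gen = 500 is an assumption matching the experimental settings.

```python
import math

MAX_GEN = 500  # assumed iteration budget, matching the experiment settings

def arch_factor(r, c=0.5, d_arch=0.3):
    """Archimedean spiral factor of Equation (20); r stands for rand."""
    psi = r * 4.0 * math.pi
    linear_term = c + d_arch * psi
    return linear_term * math.cos(psi)

def rose_factor(r, t, e_rose=1.2, n=3, max_gen=MAX_GEN):
    """Rose spiral factor of Equation (21) at iteration t."""
    k = 1.0 + 0.3 * t / max_gen
    xi = r * 2.0 * math.pi * k
    return e_rose * math.cos(n * xi) * math.cos(xi)
```

Note the structural difference these formulas encode: the Archimedean amplitude grows linearly with the angle, while the Rose factor's frequency k(t) increases with the iteration count, so its oscillation pattern tightens as the search progresses.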

As shown in Table 7, the proposed Spiral1 achieves the best optimization performance, with an average ranking of 2.28, leading the second-place mechanism by 0.31. As shown in Figure 17b,c, FBCA with the exponentially modulated spiral perturbation (red curve) significantly outperforms the other mechanisms in global convergence. In detail, the Archimedean Spiral has a certain advantage on composite functions, while the Rose Spiral exhibits relatively stable but unremarkable performance. Comparing the results of the two logarithmic spirals, although Spiral1 did not achieve the best solution on unimodal functions, it outperformed Spiral2 on the other three function types, showing particularly strong optimization ability on hybrid functions.

Table 7.

Results of FBCA with various spiral mechanisms.

F BCA FBCA (Spiral1) FBCA (Spiral2) FBCA (Spiral3) FBCA (Spiral4)
F1 9.41 × 10^9 3.34 × 10^8 *2.82 × 10^8 2.53 × 10^9 7.29 × 10^8
F2 1.72 × 10^6 6.79 × 10^5 6.91 × 10^5 6.62 × 10^5 *6.61 × 10^5
F3 1.74 × 10^3 *9.98 × 10^2 1.02 × 10^3 1.43 × 10^3 1.11 × 10^3
F4 1.59 × 10^3 *1.23 × 10^3 1.27 × 10^3 1.39 × 10^3 1.28 × 10^3
F5 *6.32 × 10^2 6.39 × 10^2 6.41 × 10^2 6.38 × 10^2 6.41 × 10^2
F6 *2.50 × 10^3 2.63 × 10^3 2.68 × 10^3 2.61 × 10^3 3.15 × 10^3
F7 1.80 × 10^3 1.59 × 10^3 *1.55 × 10^3 1.69 × 10^3 1.57 × 10^3
F8 *3.86 × 10^4 7.30 × 10^4 7.51 × 10^4 6.76 × 10^4 7.88 × 10^4
F9 3.37 × 10^4 3.00 × 10^4 2.92 × 10^4 3.08 × 10^4 *2.79 × 10^4
F10 2.14 × 10^5 *5.01 × 10^4 5.09 × 10^4 6.90 × 10^4 5.34 × 10^4
F11 4.22 × 10^8 *1.09 × 10^8 1.11 × 10^8 2.11 × 10^8 2.00 × 10^8
F12 *1.47 × 10^4 7.00 × 10^4 1.31 × 10^5 1.22 × 10^5 7.91 × 10^6
F13 4.32 × 10^6 2.93 × 10^6 3.90 × 10^6 *2.75 × 10^6 3.74 × 10^6
F14 *8.42 × 10^3 2.09 × 10^4 1.13 × 10^4 2.12 × 10^4 1.75 × 10^4
F15 1.13 × 10^4 6.01 × 10^3 *5.80 × 10^3 7.56 × 10^3 6.87 × 10^3
F16 8.33 × 10^3 *4.94 × 10^3 5.48 × 10^3 5.60 × 10^3 5.63 × 10^3
F17 2.38 × 10^7 6.50 × 10^6 *5.60 × 10^6 7.75 × 10^6 6.48 × 10^6
F18 *7.42 × 10^3 1.15 × 10^4 1.92 × 10^4 1.05 × 10^4 2.49 × 10^4
F19 8.27 × 10^3 6.80 × 10^3 6.43 × 10^3 6.99 × 10^3 *6.32 × 10^3
F20 3.39 × 10^3 *3.11 × 10^3 3.13 × 10^3 3.20 × 10^3 3.12 × 10^3
F21 3.62 × 10^4 2.99 × 10^4 3.14 × 10^4 3.32 × 10^4 *2.72 × 10^4
F22 3.64 × 10^3 3.58 × 10^3 3.62 × 10^3 *3.51 × 10^3 3.67 × 10^3
F23 4.40 × 10^3 4.32 × 10^3 4.49 × 10^3 *4.20 × 10^3 4.63 × 10^3
F24 4.68 × 10^3 3.75 × 10^3 *3.70 × 10^3 4.10 × 10^3 3.79 × 10^3
F25 1.67 × 10^4 1.69 × 10^4 1.77 × 10^4 *1.54 × 10^4 2.08 × 10^4
F26 3.82 × 10^3 3.82 × 10^3 3.86 × 10^3 *3.79 × 10^3 4.04 × 10^3
F27 5.86 × 10^3 *4.23 × 10^3 4.31 × 10^3 4.68 × 10^3 4.68 × 10^3
F28 9.04 × 10^3 7.23 × 10^3 *7.04 × 10^3 7.19 × 10^3 7.57 × 10^3
F29 *6.45 × 10^5 9.12 × 10^5 7.95 × 10^5 8.41 × 10^5 2.11 × 10^6
Average ranking 3.76 *2.28 2.59 2.97 3.41
Total ranking 5 *1 2 3 4

Values marked with * indicate the optimal result.

Figure 17.


Convergence curve of the FBCA with various spiral mechanisms.

4.6.4. Validation of the Velocity Update Mechanism

To verify the difference between the cognitive-driven velocity update mechanism and the original PSO velocity update mechanism, we tested FBCA under both mechanisms, named FBCA (pso-velocity) and FBCA (new-velocity), respectively. In Table 8, velocity1 denotes FBCA (new-velocity) and velocity2 denotes FBCA (pso-velocity). Experimental results show that the improved velocity update strategy significantly enhances solution quality. As shown in Figure 18, FBCA (new-velocity) achieves better results on functions F1, F10, F15, and F24, demonstrating stronger robustness. Particularly on the F1 function in Figure 18a, the convergence curve of FBCA (new-velocity) is significantly better than that of FBCA (pso-velocity). Overall, FBCA (new-velocity) achieved first place on 15 of the 29 test functions, while FBCA (pso-velocity) performed best on only seven, validating the effectiveness of the improved mechanism.
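The contrast between the two mechanisms can be sketched as follows for a single scalar dimension; the quadratic decay schedule for the cognitive coefficient is an assumption for illustration, not the exact FBCA formula:

```python
import random

def pso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Classic PSO velocity update with fixed cognitive/social coefficients."""
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def cognitive_velocity(v, x, pbest, gbest, t, max_gen, w=0.7, c2=1.5):
    """Velocity update with a nonlinearly decreasing cognitive coefficient.

    c1 decays from 2.0 toward 0 over the run (quadratic schedule assumed),
    shifting the search from exploration around personal bests toward
    exploitation of the global best.
    """
    c1 = 2.0 * (1.0 - t / max_gen) ** 2
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

At the final generation the cognitive term vanishes entirely, so the update is driven by inertia and the global best alone.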

Table 8.

Results of FBCA with various velocity update mechanisms.

F BCA FBCA (velocity1) FBCA (velocity2)
F1 9.11×10^9 2.11×10^8 4.05×10^8
F2 9.05×10^5 6.76×10^5 6.42×10^5
F3 1.92×10^3 9.74×10^2 1.01×10^3
F4 1.62×10^3 1.22×10^3 1.23×10^3
F5 6.32×10^2 6.41×10^2 6.43×10^2
F6 2.58×10^3 2.78×10^3 2.71×10^3
F7 1.89×10^3 1.54×10^3 1.60×10^3
F8 4.24×10^4 7.03×10^4 7.28×10^4
F9 3.34×10^4 2.79×10^4 2.39×10^4
F10 1.95×10^5 5.39×10^4 5.86×10^4
F11 6.07×10^8 9.92×10^7 1.09×10^8
F12 2.82×10^5 4.22×10^4 5.38×10^4
F13 6.83×10^6 3.12×10^6 2.36×10^6
F14 7.07×10^3 1.50×10^4 1.17×10^4
F15 1.15×10^4 6.53×10^3 7.03×10^3
F16 8.24×10^3 5.67×10^3 5.04×10^3
F17 1.90×10^7 6.91×10^6 5.39×10^6
F18 6.01×10^3 1.35×10^4 1.17×10^4
F19 8.30×10^3 6.40×10^3 7.07×10^3
F20 3.42×10^3 3.09×10^3 3.17×10^3
F21 3.57×10^4 3.03×10^4 2.61×10^4
F22 3.65×10^3 3.57×10^3 3.59×10^3
F23 4.35×10^3 4.29×10^3 4.34×10^3
F24 4.76×10^3 3.73×10^3 3.74×10^3
F25 1.61×10^4 1.77×10^4 1.75×10^4
F26 3.80×10^3 3.81×10^3 3.79×10^3
F27 5.85×10^3 4.13×10^3 4.14×10^3
F28 9.33×10^3 6.92×10^3 7.28×10^3
F29 9.00×10^5 9.07×10^5 9.53×10^5
Average ranking 2.48 1.66 1.86
Total ranking 3 1 2

The bold part indicates the optimal result.

Figure 18.


Convergence curve of the BCA, FBCA (new-velocity) and FBCA (pso-velocity).

4.6.5. Comparison with SOTA Optimizers

To verify the leading performance of the proposed FBCA, we compared it with the cutting-edge algorithms SaDE, L-SHADE, and L-SHADE_EpSin. Looking at the rankings across the four function types, SaDE showed no significant advantage, while L-SHADE_EpSin excels at optimizing hybrid functions. L-SHADE achieved a strong 12 first-place results; however, it also has some limitations. While FBCA only achieved seven first-place results, further analysis reveals its relative superiority. For example, in Figure 19a, L-SHADE exhibits significantly weaker optimization capability on F4, while FBCA easily outperforms L-SHADE, SaDE, and L-SHADE_EpSin, achieving first place. Furthermore, although FBCA did not achieve first place on F13, F16, and F22, its second-place rankings still clearly surpass the fourth-place L-SHADE. Therefore, as shown in Table 9, FBCA ultimately achieved the overall first-place ranking.

Figure 19.


Convergence curve of the FBCA and other SOTA optimizers.

Table 9.

Results of FBCA versus L-SHADE and other SOTA optimizers.

F FBCA BCA SaDE L-SHADE L-SHADE_EpSin
F1 3.29×10^8 8.02×10^9 9.24×10^9 2.82×10^8 5.07×10^10
F2 6.90×10^5 1.21×10^6 3.54×10^5 5.85×10^5 3.85×10^5
F3 9.97×10^2 1.88×10^3 2.11×10^3 9.61×10^2 6.22×10^3
F4 1.21×10^3 1.55×10^3 1.37×10^3 1.51×10^3 1.23×10^3
F5 6.40×10^2 6.31×10^2 6.49×10^2 6.14×10^2 6.46×10^2
F6 2.66×10^3 2.47×10^3 2.53×10^3 2.05×10^3 2.56×10^3
F7 1.56×10^3 1.85×10^3 1.69×10^3 1.81×10^3 1.57×10^3
F8 7.07×10^4 4.35×10^4 5.61×10^4 1.66×10^4 2.69×10^4
F9 2.73×10^4 3.34×10^4 3.26×10^4 3.20×10^4 2.22×10^4
F10 6.00×10^4 2.15×10^5 6.63×10^4 1.20×10^5 7.87×10^4
F11 1.09×10^8 3.92×10^8 5.11×10^8 4.75×10^7 4.72×10^9
F12 4.09×10^4 1.37×10^4 9.59×10^4 1.46×10^4 8.09×10^7
F13 3.04×10^6 5.61×10^6 5.40×10^5 4.53×10^6 3.30×10^6
F14 1.24×10^4 7.58×10^3 2.14×10^5 5.07×10^3 3.60×10^5
F15 6.89×10^3 1.16×10^4 6.91×10^3 9.73×10^3 6.31×10^3
F16 5.21×10^3 8.13×10^3 6.70×10^3 7.18×10^3 5.09×10^3
F17 6.75×10^6 1.97×10^7 1.52×10^6 7.74×10^6 4.08×10^6
F18 1.25×10^4 1.10×10^4 4.10×10^5 4.66×10^3 7.06×10^6
F19 7.01×10^3 8.27×10^3 7.63×10^3 7.49×10^3 5.61×10^3
F20 3.07×10^3 3.47×10^3 3.30×10^3 3.36×10^3 3.15×10^3
F21 3.09×10^4 3.61×10^4 3.44×10^4 3.42×10^4 2.44×10^4
F22 3.61×10^3 3.60×10^3 3.69×10^3 3.83×10^3 3.99×10^3
F23 4.30×10^3 4.39×10^3 4.48×10^3 4.40×10^3 5.18×10^3
F24 3.68×10^3 4.54×10^3 4.82×10^3 3.66×10^3 7.38×10^3
F25 1.70×10^4 1.73×10^4 1.73×10^4 1.72×10^4 2.41×10^4
F26 3.83×10^3 3.81×10^3 3.87×10^3 3.60×10^3 4.82×10^3
F27 4.27×10^3 5.95×10^3 5.95×10^3 4.03×10^3 1.10×10^4
F28 6.93×10^3 9.42×10^3 8.30×10^3 9.42×10^3 9.72×10^3
F29 7.57×10^5 7.75×10^5 5.53×10^6 1.14×10^5 1.72×10^8
Average ranking 2.24 3.51 3.34 2.41 3.48
Total ranking 1 5 3 2 4

The bold part indicates the optimal result.

5. MLP Optimization Problems

In this section, we evaluate and validate the feasibility of the FBCA algorithm on six MLP optimization problems, comprising three classification problems and three function approximation problems. First, Section 5.1 introduces how to train an MLP using FBCA. Subsequently, Sections 5.2, 5.3 and 5.4 present experimental results and analysis for the three classification problems: MLP_XOR, MLP_Iris and MLP_Heart. Sections 5.5, 5.6 and 5.7 demonstrate the optimization performance on the three function approximation problems: MLP_Sigmoid, MLP_Cosine and MLP_Sine. Section 5.8 compares the experimental results of FBCA and gradient-based optimizers. Section 5.9 discusses the performance of FBCA on MLP optimization problems, its limitations, and future work.

5.1. Training MLPs Using FBCA

Figure 20 shows an optimization model that combines FBCA with MLP, referred to as FBCA-MLP. In this model, FBCA is used to optimize the weight and bias parameters of the MLP to minimize the mean squared error (MSE) on the training dataset.

Figure 20.


The FBCA-MLP training model.

Specifically, the training process begins by preparing training samples and test samples. The training samples are then fed into the FBCA-MLP model, where iterative training searches for the optimal weight and bias parameters. Once the optimal parameters are obtained, they are applied to the MLP and evaluated on the test samples. Depending on the problem type, the model's accuracy or test error is ultimately output.
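The fitness evaluation at the core of this loop can be sketched as follows: a candidate solution is a flat vector that is decoded into the weights and biases of a one-hidden-layer MLP, and its fitness is the training MSE. The parameter layout and the sigmoid activations are illustrative assumptions:

```python
import numpy as np

def mlp_mse(params, X, y, n_in, n_hidden, n_out):
    """MSE fitness of a 1-hidden-layer MLP decoded from a flat vector."""
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = params[i:i + n_hidden]
    i += n_hidden
    W2 = params[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
    i += n_hidden * n_out
    b2 = params[i:i + n_out]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output layer
    return float(np.mean((out - y) ** 2))
```

For the Iris 4-9-3 structure this layout yields the 75-dimensional search space of Table 10 (4·9 + 9 + 9·3 + 3 = 75); FBCA would call this function as the objective to minimize.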

Before applying FBCA to the MLP optimization problems, we give detailed definitions of the relevant parameters required for training, as shown in Table 10 and Table 11. The tables list the number of training samples and test samples, the MLP structure, and the problem dimension for the two problem types.

Table 10.

Classification problems.

Datasets Feature Numbers Training Samples Test Samples Number of Classes MLP Structure Dimension
XOR 3 8 8 2 3-7-1 36
Iris 4 150 150 3 4-9-3 75
Heart 22 80 80 2 22-45-1 1081
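The Dimension column above is simply the count of weights plus biases of the given structure; a quick check of the three rows:

```python
def mlp_dimension(n_in, n_hidden, n_out):
    """Search-space size of an a-b-c MLP: (a+1)*b weights and biases into
    the hidden layer plus (b+1)*c into the output layer."""
    return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

assert mlp_dimension(3, 7, 1) == 36      # XOR, 3-7-1
assert mlp_dimension(4, 9, 3) == 75      # Iris, 4-9-3
assert mlp_dimension(22, 45, 1) == 1081  # Heart, 22-45-1
```

The same formula gives 46 for the 1-15-1 structures listed in Table 11.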

Table 11.

Function-approximation problems.

Datasets Training Samples Test Samples MLP Structure Dimension
Sigmoid: y = 1/(1 + e^(−x)) 61: x in [−3:0.1:3] 121: x in [−3:0.05:3] 1-15-1 46
Cosine: y = cos(xπ/2)^7 31: x in [1.25:0.05:2.75] 38: x in [1.25:0.04:2.75] 1-15-1 46
Sine: y = sin(2x) 126: x in [−2π:0.1:2π] 252: x in [−2π:0.05:2π] 1-15-1 46
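The sample grids in this table are MATLAB-style a:step:b ranges; a sketch for the Sigmoid dataset (the small epsilon on the endpoint is an implementation detail so that the upper bound is included):

```python
import numpy as np

# Sigmoid dataset of Table 11: 61 training and 121 test inputs on [-3, 3].
x_train = np.arange(-3.0, 3.0 + 1e-9, 0.1)   # MATLAB-style -3:0.1:3
x_test = np.arange(-3.0, 3.0 + 1e-9, 0.05)   # MATLAB-style -3:0.05:3
y_train = 1.0 / (1.0 + np.exp(-x_train))     # target: y = 1/(1 + e^(-x))

assert len(x_train) == 61 and len(x_test) == 121  # sample counts in Table 11
```

The Cosine and Sine grids follow the same pattern and reproduce the 31/38 and 126/252 sample counts, respectively.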

According to the No Free Lunch Theorem (NFL) [47], no universal algorithm can perform best on all problems, which has become an academic consensus. Accordingly, this paper compares the experimental results of FBCA against BCA and eight cutting-edge algorithms, including SMA, the Guided Learning Strategy (GLS) [48], and the Osprey Optimization Algorithm (OOA) [49], on three MLP classification optimization problems and three function approximation problems, to verify their respective areas of strength [50].

5.2. MLP_XOR Problem

This study uses a three-bit XOR dataset [51], with three-bit binary numbers as input features and the parity values of these three bits as outputs. The comparative results in Table 12 show that the classification accuracy of FBCA not only reached 100% but also, judging from the mean MSE, the optimization results of FBCA are on the order of 10^-6, while the other compared algorithms only reached 10^-1 to 10^-3, an improvement of about three to five orders of magnitude. This indicates that FBCA has strong exploration capabilities in MLP optimization.

Table 12.

Comparison optimization results of MLP_XOR problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 9.589×10^-6 2.93×10^-3 6.863×10^-2 2.003×10^-1 8.166×10^-5 1.779×10^-1 1.663×10^-1 3.567×10^-2 1.149×10^-1 1.555×10^-1
Std 1.573×10^-5 8.72×10^-3 6.513×10^-2 4.472×10^-2 1.244×10^-4 4.815×10^-2 6.867×10^-2 4.549×10^-2 5.087×10^-2 3.516×10^-2
Accuracy 100% 100% 62.5% 12.5% 100% 37.5% 37.5% 100% 25% 12.5%

The bold part indicates the optimal result.

5.3. MLP_Iris Problem

The Iris dataset is a three-class classification problem [52,53], with its four input features being sepal length, sepal width, petal length, and petal width, and the outputs corresponding to the three iris flower categories. As shown in Table 13, the mean MSE of FBCA is the best. Meanwhile, the MLP model optimized by FBCA also achieves the highest prediction accuracy, indicating that, compared to the other algorithms, FBCA has superior global convergence ability.

Table 13.

Comparison optimization results of MLP_Iris problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 2.672×10^-2 2.89×10^-2 2.946×10^-1 6.596×10^-2 6.572×10^-2 4.342×10^-1 4.221×10^-1 1.224×10^-1 2.507×10^-1 3.044×10^-1
Std 6.321×10^-3 1.316×10^-2 1.878×10^-1 4.507×10^-2 1.044×10^-1 1.009×10^-1 7.079×10^-2 4.89×10^-2 5.174×10^-2 4.194×10^-2
Accuracy 88.67% 86% 25.33% 43.33% 74% 6.67% 14% 54% 5.33% 7.33%

The bold part indicates the optimal result.

5.4. MLP_Heart Problem

Compared with the previous two classification problems, the dimension of the Heart dataset [54,55] increases from tens of parameters (36 and 75) to 1081, roughly two orders of magnitude, and the number of input features increases from 3 or 4 to 22, so FBCA faces a higher-dimensional and more complex optimization challenge. From the results in Table 14, it can be seen that the mean MSE of the other algorithms is on the order of 10^-1, while that of FBCA reaches 10^-2, a breakthrough in order of magnitude. At the same time, its accuracy is also the highest, indicating that FBCA can balance exploration and exploitation to find a better solution.

Table 14.

Comparison optimization results of MLP_Heart problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 9.779×10^-2 1.101×10^-1 2.826×10^-1 1.681×10^-1 1.257×10^-1 1.76×10^-1 1.718×10^-1 1.474×10^-1 1.242×10^-1 1.626×10^-1
Std 1.179×10^-2 3.974×10^-2 3.916×10^-2 6.858×10^-3 8.339×10^-3 6.495×10^-3 7.723×10^-3 2.258×10^-2 1.114×10^-2 1.167×10^-2
Accuracy 83.75% 82.5% 52.5% 73.75% 73.75% 32.5% 36.25% 78.75% 67.5% 48.75%

The bold part indicates the optimal result.

5.5. MLP_Sigmoid Problem

The Sigmoid dataset is the simplest function approximation problem in this article. Its specific expression is listed in Table 11, along with the mathematical expressions of the other function approximation problems [50]. Unlike classification problems, function approximation problems are evaluated not by accuracy but by test error. Table 15 shows that FBCA achieves the best performance in both mean MSE and test error. In particular, the test error of FBCA is about 0.8 lower than that of BCA, demonstrating a significant improvement in FBCA's optimization performance on the Sigmoid problem.

Table 15.

Comparison optimization results of MLP_Sigmoid problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 2.466×10^-1 2.482×10^-1 2.486×10^-1 2.468×10^-1 2.467×10^-1 2.496×10^-1 2.486×10^-1 2.469×10^-1 2.477×10^-1 2.471×10^-1
Std 1.711×10^-4 1.759×10^-3 1.909×10^-3 2.321×10^-4 1.503×10^-4 1.807×10^-3 1.835×10^-3 4.225×10^-4 7.978×10^-4 3.776×10^-4
Error 17.5564 18.3290 19.4690 17.8225 17.5827 20.5183 17.7487 18.1118 17.8106 18.1837

The bold part indicates the optimal result.

5.6. MLP_Cosine Problem

The optimization difficulty for the Cosine dataset is higher than that for the Sigmoid problem. As shown in Table 16, FBCA maintains the best performance among the compared algorithms, demonstrating its ability to achieve higher-precision solutions during exploitation. Furthermore, FBCA achieves the lowest test error, surpassing not only BCA but also GLS, SMA, and HHO, all of which have lower test errors than BCA. This result further highlights the significant improvements and advantages of FBCA in function approximation.

Table 16.

Comparison optimization results of MLP_Cosine problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 1.772×10^-1 1.826×10^-1 1.98×10^-1 1.816×10^-1 1.774×10^-1 2.756×10^-1 2.244×10^-1 1.79×10^-1 1.85×10^-1 2.001×10^-1
Std 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6 4.262×10^-6
Error 4.6792 5.2449 8.9720 4.7839 4.7741 6.0326 7.4608 5.0299 5.3183 6.0737

The bold part indicates the optimal result.

5.7. MLP_Sine Problem

The Sine dataset is the most complex function approximation problem in this paper. Nevertheless, the FBCA-based MLP optimization model maintains stable performance and achieves a clear improvement. As shown in Table 17, FBCA achieves the best mean MSE and the lowest test error. This demonstrates that FBCA effectively avoids falling into local optima and finds more suitable weight and bias parameters for MLP training.

Table 17.

Comparison optimization results of MLP_Sine problem.

FBCA BCA GA SMA HHO OOA COA GLS HOA RSA
Mean 4.453×10^-1 4.514×10^-1 4.655×10^-1 4.523×10^-1 4.462×10^-1 4.649×10^-1 4.523×10^-1 4.495×10^-1 4.611×10^-1 4.677×10^-1
Std 8.941×10^-3 4.996×10^-3 9.217×10^-3 9.393×10^-3 2.225×10^-3 4.332×10^-3 1.138×10^-2 9.507×10^-3 3.623×10^-3 7.564×10^-3
Error 146.5873 147.1405 157.9612 148.8101 147.4074 14.9740 146.9557 148.6581 151.0190 153.4001

The bold part indicates the optimal result.

5.8. Comparison with Gradient-Based Optimizers

In the experiments of Sections 5.2, 5.3, 5.4, 5.5, 5.6 and 5.7, we employed various metaheuristic algorithms as optimizers to adjust the weights and biases of the MLP. To further verify the effectiveness of this optimization approach, this section selects one representative dataset each from the classification problems and the function approximation problems and compares FBCA with four traditional gradient-based optimization algorithms [56]: SGD [57], Adam [58], RMSprop [59], and Adagrad [60].

In the classification problem, we chose the MLP_Heart dataset for our experiments. This dataset has a high dimension, and the MLP model structure is relatively complex, making it a challenging task to optimize. In the function approximation problem, we selected MLP_Sine, which is one of the most difficult test functions to converge among similar problems.
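For context, a minimal full-batch gradient-descent baseline on the same 1-15-1 Sine task might look like the sketch below; the learning rate, tanh activation, and initialization are illustrative assumptions rather than the paper's SGD configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(-2 * np.pi, 2 * np.pi + 1e-9, 0.1).reshape(-1, 1)  # 126 samples
y = np.sin(2 * X)

W1, b1 = rng.normal(0.0, 0.5, (1, 15)), np.zeros(15)
W2, b2 = rng.normal(0.0, 0.5, (15, 1)), np.zeros(1)
lr = 0.01

mse0 = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    err = h @ W2 + b2 - y               # residuals of the linear output
    gW2 = h.T @ err / len(X)            # MSE gradient w.r.t. W2
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Such a gradient follower reduces the training MSE locally, which illustrates why the high-frequency Sine target is a hard convergence test for gradient-based optimizers.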

As shown in Table 18, the proposed FBCA can more effectively adjust weights and biases during MLP training, enabling the model to achieve better generalization performance. Compared with the four traditional gradient optimizers, FBCA achieves the highest classification accuracy and the smallest test error on both problems, verifying its significant role in improving MLP performance within a non-gradient optimization framework.

Table 18.

Comparison results of FBCA and the gradient-based optimizers.

Datasets Item FBCA BCA SGD Adam RMSprop Adagrad
MLP_Heart Mean 9.779×10^-2 1.101×10^-1 9.212×10^-2 5.580×10^-2 4.331×10^-2 5.821×10^-2
Accuracy 83.75% 82.5% 31.25% 71.25% 72.5% 76.25%
MLP_Sine Mean 4.453×10^-1 4.514×10^-1 4.983×10^-1 4.453×10^-1 4.453×10^-1 4.896×10^-1
Error 146.5873 147.1405 159.7947 148.4509 146.7834 157.5333

The bold part indicates the optimal result.

5.9. Discussion

To address the issue that nonlinear MLPs easily get trapped in saddle points and local optima in nonconvex optimization, we not only systematically evaluated the optimization capability of FBCA on the IEEE CEC 2017 benchmark functions but also validated it on six MLP optimization problems. FBCA's hierarchical structure, perturbation updates, and velocity-driven search fully leverage its global optimization capabilities, enabling it to achieve more stable convergence and lower errors in small-to-medium-scale MLP tasks and demonstrating its effectiveness in nonconvex optimization scenarios for nonlinear MLPs. This study systematically verifies, through parameter sensitivity analysis, ablation experiments, and complexity comparison, the role of the improved FBCA mechanisms in enhancing search balance and convergence performance. These results collectively support FBCA's cutting-edge advantage on the six MLP optimization problems. Moreover, FBCA not only achieved first place among the metaheuristic algorithms but also demonstrated a representative lead over mainstream gradient optimizers, showing outstanding performance on the most complex MLP_Heart classification problem and the MLP_Sine problem. Not only did it outperform Adam and RMSprop by more than 10 percentage points in MLP_Heart accuracy, but it also reduced the MLP_Sine test error by more than 10 compared to SGD and Adagrad.

While this paper mentions the potential applicability of FBCA to deep networks such as CNNs and Transformers, it does not provide corresponding empirical analysis or experimental verification. This limitation stems primarily from the fact that this study focuses on the weight and bias optimization of MLPs without directly comparing optimization performance on other deep learning architectures. However, as a global optimization framework, FBCA possesses the scalability to adapt to different network depths thanks to its hierarchical population–army–soldier particle generation and global optimization mechanism. For CNNs, FBCA can model the convolutional kernel structure through block-level parameter mapping and weight-sharing constraints. Specifically, each convolutional kernel can be regarded as an army, with multiple soldiers within it searching local parameter subspaces; during the search, the updated sub-blocks of the soldiers act synchronously on the shared weight structure, thereby achieving overall optimization of the convolutional kernel. For Transformer architectures, FBCA can be combined with the multi-head self-attention mechanism to design a parallel evolutionary process. Each attention head can be regarded as a relatively independent optimization subspace, corresponding to an independent army in FBCA, while the overall fitness is determined by the joint performance of all attention heads, ensuring global coordination and consistency in the distribution of multi-head attention. In summary, FBCA provides a new theoretical perspective and research direction for generalizing metaheuristic algorithms to complex deep networks such as CNNs and Transformers, laying the foundation for future optimization methods in structured parameter spaces.

Meanwhile, due to computational limitations, this study did not test large-scale datasets and deep MLP architectures; therefore, its performance and scalability in these scenarios remain to be verified. Future work will focus on the performance of FBCA in high-dimensional neural network parameter spaces, and evaluate its hybrid optimization strategy combined with gradient methods, further exploring its application potential in complex deep learning models.

6. Conclusions and Future Work

The proposed FBCA integrates soft Gaussian perturbation asymmetry, adaptively modulated spiral perturbation factors, and dynamically decreasing nonlinear cognitive coefficients, enabling the algorithm to achieve rapid detection, fast adaptation, and fast convergence. This demonstrates the algorithm’s flexibility and dynamic control during the search process. Its outstanding performance on the IEEE CEC 2017 benchmark suite, particularly in high-dimensional and complex composite optimization problems, demonstrates that FBCA not only maintains stable convergence speed and high optimization accuracy but also achieves notable improvements in six MLP optimization problems. These results validate the algorithm’s robustness and generalization ability in high-dimensional nonconvex optimization scenarios. More importantly, FBCA flexibly controls the proposed mechanism through refined binary gates, achieving an adaptive exploration–exploitation balance search, demonstrating a novel design approach for swarm intelligence algorithms that emphasizes structural flexibility and search balance.

Although FBCA demonstrates strong performance across multiple problems, several promising directions remain for future research. Theoretically, developing a multi-objective extension of FBCA could enhance its adaptability to problems involving multiple or conflicting constraints. Additionally, integrating strategic military decision-making models may inspire interdisciplinary advances in swarm-based intelligent optimization. The population initialization strategy also warrants further exploration; approaches such as Latin hypercube sampling, good-point sets, or customized initialization schemes could improve search-space coverage and convergence behavior. At the algorithmic level, future work could focus on novel exploration–exploitation balance mechanisms to address the global convergence limitations of the original BCA. Finally, from an application perspective, FBCA's performance in MLP optimization lays the foundation for its broader application in the neural network field, and its potential in complex intelligent tasks such as medical image analysis [61], financial time-series prediction [62], and natural language emotion recognition [63] can be explored further in the future.
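The Latin hypercube initialization mentioned above can be sketched as follows; the interface is illustrative:

```python
import numpy as np

def latin_hypercube_init(pop_size, dim, lb, ub, rng=None):
    """Latin hypercube population initialization: each dimension is split
    into pop_size equal strata and sampled exactly once per stratum,
    giving better coverage than plain uniform sampling."""
    rng = np.random.default_rng(rng)
    # one random point inside each stratum, per dimension
    u = (rng.random((pop_size, dim)) + np.arange(pop_size)[:, None]) / pop_size
    for d in range(dim):            # independently shuffle the strata per dim
        rng.shuffle(u[:, d])
    return lb + u * (ub - lb)
```

Every column of the returned population hits each of the pop_size strata exactly once, which is the defining property of a Latin hypercube sample.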

Finally, we have made BCA-related research materials publicly available at www.jianhuajiang.com, accessed on 1 January 2025, and we welcome researchers interested in exploring and studying the theoretical innovations and migration applications of BCA and FBCA.

Appendix A

IEEE CEC 2017 test suite detailed experimental data.

Table A1.

Comparison of the FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 30.

F Item FBCA BCA SCA GA RSA PSO DBO BKA HHO HOA COA CPO PO SMA
F1 Std 6.44 ×103 5.91 ×103 3.47 ×109 2.06 ×1010 7.83 ×109 2.37 ×109 2.32 ×108 6.8 ×109 2.39 ×108 8.68 ×109 5.96 ×109 8.43 ×105 1.31 ×109 6.95 ×104
Mean 1.13 ×104 6.64 ×103 2.07 ×1010 5.30 ×1010 4.80 ×1010 4.22 ×109 2.75 ×108 9.35 ×109 4.72 ×108 3.71 ×1010 5.70 ×1010 7.47 ×105 3.23 ×109 1.33 ×105
F2 Std 2.69 ×104 3.64 ×104 1.70 ×104 6.02 ×104 5.99 ×103 7.69 ×104 2.64 ×104 1.88 ×104 6.81 ×103 6.74 ×103 5.87 ×103 1.10 ×104 8.57 ×103 1.35 ×104
Mean 7.88 ×104 1.40 ×105 8.19 ×104 2.59 ×105 8.10 ×104 1.76 ×105 8.97 ×104 3.74 ×104 5.76 ×104 7.33 ×104 8.49 ×104 6.18 ×104 6.81 ×104 3.42 ×104
F3 Std 3.89 ×101 2.64 ×101 8.15 ×102 4.64 ×103 3.61 ×103 4.42 ×102 1.18 ×102 7.73 ×102 1.02 ×102 1.74 ×103 2.24 ×103 1.67 ×101 1.08 ×102 2.38 ×101
Mean 4.93 ×102 5.06 ×102 2.91 ×103 9.24 ×103 1.09 ×104 1.24 ×103 6.44 ×102 1.27 ×103 7.23 ×102 7.95 ×103 1.57 ×104 5.24 ×102 7.51 ×102 5.12 ×102
F4 Std 3.23 ×101 7.64 ×101 2.73 ×101 7.15 ×101 2.85 ×101 4.11 ×101 5.37 ×101 4.88 ×101 3.78 ×101 3.15 ×101 4.00 ×101 1.58 ×101 3.79 ×101 3.94 ×101
Mean 6.31 ×102 6.56 ×102 8.29 ×102 9.94 ×102 9.22 ×102 8.07 ×102 7.73 ×102 7.52 ×102 7.75 ×102 7.99 ×102 9.10 ×102 6.91 ×102 7.88 ×102 6.43 ×102
F5 Std 9.77 ×100 8.97 ×100 5.03 ×100 1.40 ×101 4.68 ×100 1.55 ×101 1.23 ×101 7.04 ×100 6.25 ×100 7.89 ×100 6.08 ×100 7.25 ×100 7.62 ×100 9.20 ×100
Mean 6.18 ×102 6.01 ×102 6.65 ×102 7.20 ×102 6.92 ×102 6.58 ×102 6.49 ×102 6.60 ×102 6.67 ×102 6.66 ×102 6.88 ×102 6.02 ×102 6.69 ×102 6.18 ×102
F6 Std 1.71 ×102 7.99 ×101 6.91 ×101 3.17 ×102 3.99 ×101 6.58 ×101 7.36 ×101 8.16 ×101 5.76 ×101 5.32 ×101 5.29 ×101 2.02 ×101 8.33 ×101 4.88 ×101
Mean 1.02 ×103 9.49 ×102 1.24 ×103 2.07 ×103 1.38 ×103 1.14 ×103 1.01 ×103 1.23 ×103 1.30 ×103 1.26 ×103 1.42 ×103 9.39 ×102 1.23 ×103 8.92 ×102
F7 Std 5.31 ×101 6.03 ×101 2.15 ×101 8.58 ×101 1.85 ×101 4.10 ×101 5.77 ×101 4.94 ×101 2.63 ×101 2.63 ×101 2.36 ×101 1.44 ×101 3.80 ×101 3.86 ×101
Mean 9.32 ×102 9.82 ×102 1.09 ×103 1.24 ×103 1.14 ×103 1.07 ×103 1.03 ×103 9.87 ×102 9.89 ×102 1.06 ×103 1.15 ×103 9.87 ×102 1.03 ×103 9.39 ×102
F8 Std 3.50 ×103 9.05 ×102 1.75 ×103 2.73 ×103 1.01 ×103 3.02 ×103 1.72 ×103 1.41 ×103 1.32 ×103 1.00 ×103 1.44 ×103 4.27 ×102 1.47 ×103 1.31 ×103
Mean 4.59 ×103 1.53 ×103 9.14 ×103 1.05 ×104 1.14 ×104 7.75 ×103 6.19 ×103 5.73 ×103 8.69 ×103 6.74 ×103 1.11 ×104 1.35 ×103 7.64 ×103 4.57 ×103
F9 Std 1.31 ×103 1.13 ×103 3.30 ×102 6.14 ×102 4.49 ×102 6.63 ×102 1.03 ×103 1.28 ×103 8.04 ×102 6.67 ×102 4.53 ×102 2.33 ×102 7.52 ×102 6.18 ×102
Mean 4.52 ×103 8.83 ×103 8.86 ×103 8.52 ×103 8.49 ×103 7.67 ×103 6.42 ×103 5.86 ×103 6.05 ×103 7.56 ×103 8.89 ×103 7.63 ×103 7.10 ×103 4.65 ×103
F10 Std 4.05 ×101 1.04 ×102 1.42 ×103 1.29 ×104 2.53 ×103 3.08 ×103 1.28 ×103 1.21 ×103 2.83 ×102 1.47 ×103 2.21 ×103 2.78 ×101 6.46 ×102 5.58 ×101
Mean 1.21 ×103 1.26 ×103 4.11 ×103 2.44 ×104 9.06 ×103 4.41 ×103 1.97 ×103 1.87 ×103 1.63 ×103 5.99 ×103 9.17 ×103 1.28 ×103 2.57 ×103 1.29 ×103
F11 Std 8.96 ×105 1.25 ×106 7.78 ×108 4.71 ×109 3.25 ×109 6.09 ×108 1.16 ×108 1.42 ×109 9.04 ×107 1.94 ×109 3.95 ×109 8.74 ×105 2.76 ×108 3.98 ×106
Mean 1.05 ×106 1.29 ×106 2.64 ×109 7.15 ×109 1.49 ×1010 5.58 ×108 8.64 ×107 3.64 ×108 8.74 ×107 7.03 ×109 1.32 ×1010 1.32 ×106 3.22 ×108 5.21 ×106
F12 Std 2.05 ×104 3.14 ×104 4.04 ×108 4.79 ×109 7.28 ×109 1.94 ×109 1.19 ×107 4.20 ×108 1.18 ×106 1.27 ×109 4.48 ×109 1.23 ×104 2.66 ×107 5.73 ×104
Mean 1.57 ×104 1.92 ×104 1.11 ×109 4.69 ×109 1.26 ×1010 8.49 ×108 4.70 ×106 1.41 ×108 1.20 ×106 2.27 ×109 1.03 ×1010 2.27 ×104 1.14 ×107 7.87 ×104
F13 Std 2.04 ×105 3.89 ×105 1.03 ×106 1.10 ×107 6.65 ×106 3.04 ×106 7.61 ×105 1.33 ×105 1.82 ×106 8.72 ×105 2.85 ×106 8.48 ×102 6.30 ×105 1.64 ×105
Mean 1.70 ×105 2.95 ×105 1.17 ×106 7.17 ×106 7.99 ×106 1.55 ×106 4.78 ×105 3.72 ×104 1.82 ×106 1.42 ×106 3.68 ×106 2.20 ×103 6.89 ×105 2.01 ×105
F14 Std 1.06 ×104 8.57 ×103 4.13 ×107 4.57 ×108 1.07 ×108 4.60 ×108 6.10 ×104 1.73 ×105 7.60 ×104 1.30 ×108 5.49 ×108 2.75 ×103 1.42 ×106 1.58 ×104
Mean 1.28 ×104 8.97 ×103 5.91 ×107 3.77 ×108 5.48 ×108 1.23 ×108 8.34 ×104 9.08 ×104 1.33 ×105 1.35 ×108 6.68 ×108 4.65 ×103 9.81 ×105 2.93 ×104
F15 Std 3.68 ×102 6.71 ×102 2.19 ×102 7.08 ×102 6.62 ×102 5.32 ×102 4.92 ×102 4.53 ×102 4.25 ×102 6.54 ×102 1.06 ×103 2.17 ×102 2.96 ×102 2.66 ×102
Mean 2.65 ×103 3.21 ×103 4.13 ×103 4.58 ×103 5.39 ×103 3.93 ×103 3.47 ×103 3.13 ×103 3.82 ×103 4.76 ×103 6.28 ×103 3.10 ×103 3.76 ×103 2.68 ×103
F16 Std 2.85 ×102 2.53 ×102 1.88 ×102 3.27 ×102 5.30 ×103 2.94 ×102 2.48 ×102 2.98 ×102 3.10 ×102 5.07 ×102 3.22 ×103 1.32 ×102 2.69 ×102 2.43 ×102
Mean 2.28 ×103 2.04 ×103 2.77 ×103 3.15 ×103 7.29 ×103 2.62 ×103 2.66 ×103 2.44 ×103 2.63 ×103 3.06 ×103 5.50 ×103 2.04 ×103 2.68 ×103 2.40 ×103
F17 Std 1.63 ×106 3.87 ×106 5.88 ×106 6.92 ×107 2.53 ×107 3.54 ×107 6.54 ×106 1.94 ×105 4.35 ×106 1.49 ×107 4.72 ×107 7.34 ×104 9.27 ×106 3.52 ×106
Mean 1.50 ×106 3.46 ×106 1.17 ×107 4.56 ×107 3.74 ×107 1.53 ×107 3.90 ×106 1.98 ×105 3.90 ×106 1.53 ×107 5.31 ×107 1.44 ×105 7.66 ×106 2.82 ×106
F18 Std 1.65 ×104 1.41 ×104 5.44 ×107 2.63 ×108 3.44 ×108 5.00 ×107 2.46 ×106 2.17 ×107 1.21 ×106 4.64 ×107 4.96 ×108 5.46 ×103 4.94 ×106 2.23 ×104
Mean 1.64 ×104 1.16 ×104 9.55 ×107 2.62 ×108 6.87 ×108 2.89 ×107 1.58 ×106 4.62 ×106 1.47 ×106 4.10 ×107 7.67 ×108 6.09 ×103 6.42 ×106 2.72 ×104
F19 Std 2.33 ×102 3.72 ×102 1.52 ×102 2.47 ×102 1.34 ×102 1.66 ×102 2.31 ×102 1.91 ×102 2.21 ×102 1.65 ×102 2.20 ×102 1.45 ×102 1.94 ×102 1.84 ×102
Mean 2.54 ×103 2.62 ×103 2.98 ×103 3.25 ×103 3.10 ×103 2.85 ×103 2.80 ×103 2.65 ×103 2.83 ×103 2.70 ×103 3.02 ×103 2.50 ×103 2.72 ×103 2.60 ×103
F20 Std 4.36 ×101 7.50 ×101 1.80 ×101 7.05 ×101 4.56 ×101 5.14 ×101 5.85 ×101 5.60 ×101 5.76 ×101 4.03 ×101 4.34 ×101 1.73 ×101 7.01 ×101 4.03 ×101
Mean 2.42 ×103 2.47 ×103 2.61 ×103 2.84 ×103 2.73 ×103 2.60 ×103 2.56 ×103 2.56 ×103 2.60 ×103 2.60 ×103 2.76 ×103 2.48 ×103 2.55 ×103 2.42 ×103
F21 Std 2.46 ×103 3.97 ×103 2.02 ×103 1.31 ×103 9.86 ×102 2.84 ×103 2.51 ×103 1.55 ×103 1.47 ×103 1.30 ×103 6.22 ×102 3.79 ×100 2.35 ×103 1.53 ×103
Mean 5.56 ×103 5.75 ×103 9.32 ×103 9.80 ×103 8.90 ×103 7.62 ×103 4.93 ×103 6.70 ×103 7.13 ×103 7.68 ×103 9.66 ×103 2.31 ×103 4.55 ×103 5.81 ×103
F22 Std 8.14 ×101 8.33 ×101 3.82 ×101 1.78 ×102 6.94 ×101 1.82 ×102 8.68 ×101 1.11 ×102 1.47 ×102 1.58 ×102 1.59 ×102 1.67 ×101 7.36 ×101 3.76 ×101
Mean 2.85 ×103 2.80 ×103 3.08 ×103 3.65 ×103 3.33 ×103 3.30 ×103 3.03 ×103 3.12 ×103 3.24 ×103 3.48 ×103 3.66 ×103 2.85 ×103 3.06 ×103 2.77 ×103
F23 Std 8.13 ×101 8.82 ×101 3.51 ×101 2.62 ×102 7.87 ×101 1.93 ×102 6.78 ×101 1.77 ×102 1.60 ×102 1.76 ×102 1.77 ×102 2.15 ×101 8.33 ×101 4.06 ×101
Mean 3.02 ×103 3.01 ×103 3.25 ×103 3.93 ×103 3.47 ×103 3.65 ×103 3.19 ×103 3.35 ×103 3.55 ×103 3.83 ×103 3.82 ×103 3.02 ×103 3.20 ×103 2.95 ×103
F24 Std 2.18 ×101 1.53 ×101 2.26 ×102 1.20 ×103 7.07 ×102 9.83 ×101 5.34 ×101 3.13 ×102 3.62 ×101 2.04 ×102 4.70 ×102 1.76 ×101 7.78 ×101 1.38 ×101
Mean 2.90 ×103 2.90 ×103 3.60 ×103 6.18 ×103 4.84 ×103 3.27 ×103 2.99 ×103 3.14 ×103 3.02 ×103 3.94 ×103 5.18 ×103 2.92 ×103 3.11 ×103 2.90 ×103
F25 Std 1.25 ×103 8.60 ×102 5.22 ×102 1.08 ×103 9.74 ×102 1.35 ×103 6.58 ×102 1.63 ×103 1.37 ×103 9.09 ×102 7.97 ×102 9.97 ×102 1.38 ×103 4.61 ×102
Mean 6.15 ×103 4.99 ×103 7.79 ×103 1.08 ×104 1.05 ×104 7.30 ×103 6.86 ×103 7.89 ×103 7.84 ×103 9.51 ×103 1.18 ×104 5.36 ×103 7.57 ×103 5.06 ×103
F26 Std 2.79 ×101 1.79 ×101 9.42 ×101 5.80 ×102 3.79 ×102 2.72 ×102 4.52 ×101 8.93 ×101 3.29 ×102 2.55 ×102 4.67 ×102 1.42 ×101 9.24 ×101 2.06 ×101
Mean 3.28 ×103 3.23 ×103 3.58 ×103 4.49 ×103 3.95 ×103 3.65 ×103 3.32 ×103 3.39 ×103 3.66 ×103 4.25 ×103 4.51 ×103 3.28 ×103 3.41 ×103 3.23 ×103
F27 Std 4.65 ×101 2.13 ×101 4.52 ×102 1.39 ×103 7.00 ×102 1.18 ×103 7.50 ×102 7.72 ×102 9.13 ×101 6.74 ×102 6.58 ×102 2.85 ×101 9.08 ×101 4.43 ×101
Mean 2.75 ×102 2.47 ×102 4.47 ×103 7.75 ×103 6.37 ×103 4.06 ×103 3.71 ×103 3.82 ×103 3.49 ×103 5.68 ×103 7.57 ×103 1.69 ×102 3.57 ×103 2.11 ×102
F28 Std 3.90 ×103 3.77 ×103 3.71 ×102 1.40 ×103 2.35 ×103 5.70 ×102 4.03 ×102 6.91 ×102 3.76 ×102 7.75 ×102 2.00 ×103 4.05 ×103 4.79 ×102 4.03 ×103
Mean 4.07 ×103 3.79 ×103 5.29 ×103 6.68 ×103 7.28 ×103 4.81 ×103 4.46 ×103 4.90 ×103 4.97 ×103 6.13 ×103 8.59 ×103 4.00 ×103 5.01 ×103 4.03 ×103
F29 Std 9.59 ×103 5.73 ×103 8.34 ×107 3.68 ×108 1.14 ×109 5.13 ×107 5.02 ×106 4.98 ×107 9.39 ×106 4.74 ×108 9.10 ×108 7.81 ×104 4.22 ×107 8.46 ×104
Mean 1.70 ×104 1.33 ×104 2.03 ×108 2.98 ×108 3.00 ×109 2.99 ×107 2.90 ×106 1.90 ×107 1.13 ×107 5.87 ×108 1.52 ×109 1.35 ×105 6.12 ×107 1.17 ×105

Table A2.

Comparison of the FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 50.

F Item FBCA BCA SCA GA RSA PSO DBO BKA HHO HOA COA CPO PO SMA
F1 Std 2.73 ×105 8.68 ×106 8.35 ×109 2.54 ×1010 7.82 ×109 5.19 ×109 1.95 ×1010 2.11 ×1010 1.81 ×109 8.39 ×109 6.1 ×109 1.43 ×108 4.08 ×109 2.26 ×106
Mean 4.95 ×105 1.06 ×107 6.58 ×1010 1.69 ×1011 1.00 ×1011 1.90 ×1010 1.02 ×1010 4.44 ×1010 5.25 ×109 8.54 ×1010 1.15 ×1011 1.88 ×108 1.58 ×1010 6.55 ×106
F2 Std 6.26 ×104 7.47 ×104 3.19 ×104 9.81 ×104 1.57 ×104 1.27 ×105 6.75 ×104 2.84 ×104 1.96 ×104 1.39 ×104 2.07 ×104 1.99 ×104 2.94 ×104 6.25 ×104
Mean 2.34 ×105 3.26 ×105 2.20 ×105 4.68 ×105 1.77 ×105 3.96 ×105 2.69 ×105 9.58 ×104 1.72 ×105 1.59 ×105 2.01 ×105 1.78 ×105 1.94 ×105 1.82 ×105
F3 Std 6.55 ×101 5.72 ×101 3.50 ×103 1.57 ×104 5.72 ×103 2.43 ×103 7.37 ×102 6.84 ×103 6.01 ×102 5.01 ×103 5.17 ×103 4.94 ×101 9.82 ×102 6.74 ×101
Mean 6.08 ×102 6.29 ×102 1.45 ×104 4.46 ×104 2.92 ×104 3.60 ×103 1.43 ×103 7.39 ×103 2.00 ×103 2.38 ×104 3.81 ×104 7.01 ×102 2.67 ×103 6.31 ×102
F4 Std 8.03 ×101 1.57 ×102 3.34 ×101 9.83 ×101 3.08 ×101 5.03 ×101 1.05 ×102 7.48 ×101 3.30 ×101 4.20 ×101 3.11 ×101 2.17 ×101 4.94 ×101 5.01 ×101
Mean 7.88 ×102 8.48 ×102 1.14 ×103 1.51 ×103 1.17 ×103 1.11 ×103 9.88 ×102 9.07 ×102 9.43 ×102 1.08 ×103 1.19 ×103 9.29 ×102 1.02 ×103 8.16 ×102
F5 Std 6.69 ×100 2.75 ×100 6.21 ×100 1.35 ×101 4.90 ×100 1.55 ×101 1.22 ×101 9.83 ×100 5.00 ×100 6.97 ×100 2.35 ×100 3.19 ×100 7.41 ×100 1.18 ×101
Mean 6.26 ×102 6.07 ×102 6.84 ×102 7.39 ×102 7.04 ×102 6.79 ×102 6.66 ×102 6.71 ×102 6.80 ×102 6.85 ×102 7.04 ×102 6.11 ×102 6.83 ×102 6.47 ×102
F6 Std 1.78 ×102 9.88 ×101 1.07 ×102 6.29 ×102 5.82 ×101 1.04 ×102 1.47 ×102 9.43 ×101 9.91 ×101 7.00 ×101 4.47 ×101 4.36 ×101 1.03 ×102 1.01 ×102
Mean 1.35 ×103 1.28 ×103 1.91 ×103 4.50 ×103 1.96 ×103 1.73 ×103 1.41 ×103 1.76 ×103 1.88 ×103 1.85 ×103 2.06 ×103 1.26 ×103 1.79 ×103 1.17 ×103
F7 Std 6.34 ×101 1.34 ×102 3.22 ×101 1.01 ×102 2.15 ×101 6.37 ×101 9.63 ×101 1.09 ×102 3.82 ×101 4.27 ×101 2.97 ×101 2.25 ×101 5.95 ×101 5.27 ×101
Mean 1.08 ×103 1.20 ×103 1.44 ×103 1.79 ×103 1.51 ×103 1.41 ×103 1.31 ×103 1.26 ×103 1.24 ×103 1.43 ×103 1.50 ×103 1.22 ×103 1.34 ×103 1.10 ×103
F8 Std 1.01 ×104 3.92 ×103 4.25 ×103 6.13 ×103 2.18 ×103 8.53 ×103 7.80 ×103 4.59 ×103 2.69 ×103 3.55 ×103 2.75 ×103 2.57 ×103 4.89 ×103 3.46 ×103
Mean 1.99 ×104 6.73 ×103 3.38 ×104 4.84 ×104 3.91 ×104 2.93 ×104 2.96 ×104 1.74 ×104 3.23 ×104 2.82 ×104 3.77 ×104 7.83 ×103 2.74 ×104 1.70 ×104
F9 Std 3.40 ×103 9.01 ×102 3.83 ×102 9.06 ×102 4.83 ×102 1.00 ×103 2.18 ×103 1.61 ×103 1.16 ×103 7.75 ×102 3.76 ×102 5.37 ×102 1.02 ×103 1.01 ×103
Mean 8.88 ×103 1.57 ×104 1.54 ×104 1.52 ×104 1.52 ×104 1.42 ×104 1.17 ×104 9.47 ×103 1.06 ×104 1.31 ×104 1.55 ×104 1.36 ×104 1.24 ×104 8.30 ×103
F10 Std 1.59 ×103 1.12 ×103 3.11 ×103 2.10 ×104 2.25 ×103 1.26 ×104 3.09 ×103 4.58 ×103 7.43 ×102 2.27 ×103 2.44 ×103 2.77 ×102 2.60 ×103 7.39 ×101
Mean 2.06 ×103 2.31 ×103 1.26 ×104 6.26 ×104 2.18 ×104 1.93 ×104 4.19 ×103 5.63 ×103 3.10 ×103 1.89 ×104 2.67 ×104 1.84 ×103 7.94 ×103 1.44 ×103
F11 Std 7.59 ×106 5.38 ×106 6.77 ×109 2.59 ×1010 1.95 ×1010 7.50 ×109 5.78 ×108 1.56 ×1010 6.82 ×108 1.09 ×1010 1.36 ×1010 1.05 ×107 1.19 ×109 1.92 ×107
Mean 1.29 ×107 8.83 ×106 2.24 ×1010 6.73 ×1010 7.46 ×1010 9.63 ×109 8.85 ×108 1.25 ×1010 1.03 ×109 5.24 ×1010 9.30 ×1010 2.01 ×107 2.54 ×109 3.84 ×107
F12 Std 8.06 ×103 7.78 ×103 3.15 ×109 1.56 ×1010 1.50 ×1010 6.04 ×109 1.37 ×108 5.49 ×109 6.32 ×107 8.06 ×109 1.29 ×1010 3.46 ×104 2.13 ×108 8.08 ×104
Mean 1.44 ×104 9.44 ×103 7.19 ×109 2.91 ×1010 4.80 ×1010 5.83 ×109 9.34 ×107 1.71 ×109 4.08 ×107 2.24 ×1010 5.38 ×1010 2.58 ×104 3.47 ×108 1.32 ×105
F13 Std 8.22 ×105 6.68 ×105 5.86 ×106 1.30 ×108 5.40 ×107 4.76 ×107 3.85 ×106 1.04 ×105 7.33 ×106 2.99 ×107 9.57 ×107 1.66 ×105 2.87 ×106 7.45 ×105
Mean 6.52 ×105 7.46 ×105 8.64 ×106 9.82 ×107 7.07 ×107 2.13 ×107 3.62 ×106 1.35 ×105 6.22 ×106 4.69 ×107 1.18 ×108 1.31 ×105 4.94 ×106 9.28 ×105
F14 Std 8.19 ×103 6.77 ×103 4.72 ×108 4.41 ×109 2.46 ×109 8.04 ×107 7.61 ×107 8.93 ×108 5.69 ×106 2.35 ×109 4.17 ×109 1.22 ×104 2.39 ×107 1.94 ×104
Mean 1.08 ×104 8.76 ×103 1.15 ×109 7.82 ×109 5.96 ×109 7.06 ×107 2.03 ×107 2.17 ×108 3.32 ×106 4.43 ×109 1.03 ×1010 1.44 ×104 1.83 ×107 4.98 ×104
F15 Std 7.32 ×102 1.13 ×103 3.58 ×102 1.29 ×103 1.58 ×103 9.60 ×102 6.64 ×102 1.10 ×103 8.06 ×102 9.78 ×102 1.39 ×103 2.69 ×102 7.42 ×102 4.05 ×102
Mean 3.68 ×103 4.86 ×103 6.26 ×103 7.85 ×103 8.33 ×103 5.84 ×103 4.93 ×103 4.46 ×103 4.95 ×103 7.08 ×103 1.06 ×104 4.53 ×103 5.56 ×103 3.70 ×103
F16 Std 3.43 ×102 4.92 ×102 3.13 ×102 4.79 ×104 4.73 ×103 4.46 ×102 4.82 ×102 7.39 ×102 2.98 ×102 5.67 ×102 6.02 ×103 2.36 ×102 4.22 ×102 2.92 ×102
Mean 3.17 ×103 4.17 ×103 5.05 ×103 2.32 ×104 1.34 ×104 4.58 ×103 4.40 ×103 3.82 ×103 3.90 ×103 4.84 ×103 1.16 ×104 3.47 ×103 4.22 ×103 3.35 ×103
F17 Std 2.30 ×106 1.32 ×107 3.07 ×107 1.48 ×108 8.23 ×107 3.99 ×107 1.61 ×107 1.14 ×107 1.40 ×107 4.56 ×107 1.00 ×108 9.88 ×105 2.63 ×107 5.40 ×106
Mean 3.61 ×106 1.09 ×107 6.09 ×107 1.59 ×108 1.91 ×108 2.66 ×107 1.37 ×107 5.01 ×106 1.13 ×107 1.00 ×108 2.02 ×108 1.7 ×106 3.66 ×107 7.09 ×106
F18 Std 1.45 ×104 1.30 ×104 3.91 ×108 2.21 ×109 1.44 ×109 3.37 ×108 1.28 ×107 8.63 ×107 2.50 ×106 7.18 ×108 1.51 ×109 7.71 ×103 1.60 ×107 1.56 ×104
Mean 2.60 ×104 1.62 ×104 7.85 ×108 3.44 ×109 4.68 ×109 1.33 ×108 9.62 ×106 1.69 ×107 2.27 ×106 1.00 ×109 3.19 ×109 2.21 ×104 2.21 ×107 2.30 ×104
F19 Std 5.27 ×102 5.22 ×102 2.21 ×102 3.47 ×102 1.69 ×102 3.12 ×102 3.76 ×102 3.40 ×102 2.74 ×102 2.85 ×102 1.85 ×102 2.34 ×102 3.76 ×102 2.24 ×102
Mean 3.28 ×103 4.21 ×103 4.36 ×103 4.79 ×103 4.25 ×103 4.14 ×103 3.82 ×103 3.29 ×103 3.65 ×103 3.75 ×103 4.15 ×103 3.58 ×103 3.70 ×103 3.32 ×103
F20 Std 7.19 ×101 1.49 ×102 3.84 ×101 1.31 ×102 7.90 ×101 7.72 ×101 9.81 ×101 1.18 ×102 9.74 ×101 6.23 ×101 8.97 ×101 2.67 ×101 7.90 ×101 6.36 ×101
Mean 2.56 ×103 2.65 ×103 2.96 ×103 3.38 ×103 3.12 ×103 2.93 ×103 2.85 ×103 2.88 ×103 2.98 ×103 2.98 ×103 3.31 ×103 2.70 ×103 2.90 ×103 2.58 ×103
F21 Std 3.32 ×103 2.88 ×103 4.53 ×102 7.90 ×102 5.24 ×102 2.17 ×103 2.27 ×103 1.22 ×103 1.09 ×103 8.03 ×102 6.09 ×102 5.62 ×103 9.99 ×102 8.72 ×102
Mean 1.17 ×104 1.67 ×104 1.73 ×104 1.75 ×104 1.74 ×104 1.51 ×104 1.29 ×104 1.11 ×104 1.25 ×104 1.52 ×104 1.69 ×104 1.19 ×104 1.42 ×104 9.77 ×103
F22 Std 1.23 ×102 1.68 ×102 9.17 ×101 3.20 ×102 1.90 ×102 3.54 ×102 1.58 ×102 1.67 ×102 2.32 ×102 2.67 ×102 2.23 ×102 3.02 ×101 2.03 ×102 6.20 ×101
Mean 3.13 ×103 3.09 ×103 3.73 ×103 4.65 ×103 4.08 ×103 4.14 ×103 3.61 ×103 3.78 ×103 4.04 ×103 4.52 ×103 4.60 ×103 3.18 ×103 3.67 ×103 3.03 ×103
F23 Std 1.22 ×102 1.21 ×102 9.37 ×101 3.67 ×102 6.44 ×102 3.01 ×102 1.56 ×102 2.26 ×102 2.47 ×102 2.61 ×102 2.27 ×102 3.12 ×101 1.17 ×102 1.03 ×102
Mean 3.31 ×103 3.36 ×103 3.88 ×103 5.16 ×103 4.42 ×103 4.63 ×103 3.71 ×103 3.90 ×103 4.47 ×103 5.00 ×103 4.88 ×103 3.35 ×103 3.80 ×103 3.19 ×103
F24 Std 3.38 ×101 3.63 ×101 1.20 ×103 7.21 ×103 1.42 ×103 1.00 ×103 1.70 ×103 2.40 ×103 2.45 ×102 9.93 ×102 1.07 ×103 4.87 ×101 4.58 ×102 4.22 ×101
Mean 3.08 ×103 3.11 ×103 9.68 ×103 2.99 ×104 1.34 ×104 5.85 ×103 3.91 ×103 6.13 ×103 3.83 ×103 1.19 ×104 1.56 ×104 3.23 ×103 4.51 ×103 3.10 ×103
F25 Std 1.65 ×103 1.61 ×103 7.94 ×102 2.33 ×103 7.26 ×102 2.19 ×103 1.60 ×103 2.20 ×103 1.40 ×103 7.39 ×102 6.25 ×102 1.74 ×103 2.34 ×103 2.28 ×103
Mean 8.06 ×103 7.33 ×103 1.41 ×104 2.13 ×104 1.65 ×104 1.27 ×104 1.13 ×104 1.25 ×104 1.19 ×104 1.55 ×104 1.76 ×104 7.99 ×103 1.14 ×104 4.87 ×103
F26 Std 1.39 ×102 8.58 ×101 2.52 ×102 7.19 ×102 1.13 ×103 8.53 ×102 3.16 ×102 5.00 ×102 5.65 ×102 5.97 ×102 8.51 ×102 7.25 ×101 2.90 ×102 8.15 ×101
Mean 3.65 ×103 3.47 ×103 4.97 ×103 6.98 ×103 6.14 ×103 5.12 ×103 4.04 ×103 4.35 ×103 5.21 ×103 6.98 ×103 7.14 ×103 3.74 ×103 4.25 ×103 3.55 ×103
F27 Std 4.18 ×101 8.34 ×101 1.10 ×103 3.23 ×103 1.28 ×103 1.60 ×103 2.38 ×103 2.26 ×103 3.99 ×102 8.84 ×102 1.63 ×103 1.24 ×102 4.19 ×102 4.88 ×101
Mean 3.39 ×103 3.45 ×103 8.69 ×103 1.60 ×104 1.17 ×104 6.50 ×103 6.32 ×103 6.82 ×103 4.81 ×103 1.07 ×104 1.37 ×104 3.70 ×103 5.32 ×103 3.38 ×103
F28 Std 3.67 ×102 8.21 ×102 9.39 ×102 6.29 ×104 8.47 ×104 9.26 ×103 8.47 ×102 5.28 ×103 8.38 ×102 2.20 ×104 1.24 ×105 2.50 ×102 1.69 ×103 4.67 ×102
Mean 4.52 ×103 4.65 ×103 8.86 ×103 3.31 ×104 6.00 ×104 9.51 ×103 6.45 ×103 8.27 ×103 6.99 ×103 2.26 ×104 1.33 ×105 5.16 ×103 8.16 ×103 4.88 ×103
F29 Std 4.56 ×105 4.39 ×105 5.00 ×108 2.99 ×109 2.40 ×109 6.39 ×108 4.60 ×107 6.15 ×107 3.83 ×107 1.21 ×109 2.92 ×109 3.10 ×106 1.27 ×108 4.43 ×106
Mean 1.75 ×106 1.2 ×106 1.42 ×109 5.64 ×109 7.92 ×109 5.30 ×108 5.18 ×107 7.25 ×107 1.34 ×108 3.02 ×109 8.81 ×109 9.53 ×106 3.12 ×108 1.28 ×107

Table A3.

Comparison of the FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 100.

F Item FBCA BCA SCA GA RSA PSO DBO BKA HHO HOA COA CPO PO SMA
F1 Std 1.87 ×108 7.9 ×109 1.31 ×1010 7.1 ×1010 6.9 ×109 1.88 ×1010 7.29 ×1010 4.57 ×1010 7.91 ×109 1.27 ×1010 1.21 ×1010 3.26 ×109 1.09 ×1010 9.06 ×107
Mean 2.00 ×108 1.08 ×1010 2.19 ×1011 5.55 ×1011 2.46 ×1011 1.16 ×1011 8.25 ×1010 1.53 ×1011 5.15 ×1010 2.25 ×1011 2.72 ×1011 1.40 ×1010 8.96 ×1010 4.70 ×108
F2 Std 9.09 ×104 2.65 ×105 9.83 ×104 1.47 ×105 1.82 ×104 2.72 ×105 2.23 ×105 4.75 ×104 2.03 ×105 1.07 ×104 1.37 ×104 6.25 ×104 6.67 ×104 3.19 ×105
Mean 6.97 ×105 8.59 ×105 6.20 ×105 9.21 ×105 3.48 ×105 1.02 ×106 6.16 ×105 2.72 ×105 4.12 ×105 3.28 ×105 3.55 ×105 4.60 ×105 3.91 ×105 7.78 ×105
F3 Std 1.01 ×102 4.73 ×102 8.51 ×103 4.82 ×104 1.31 ×104 5.16 ×103 1.46 ×104 1.48 ×104 1.98 ×103 1.07 ×104 1.15 ×104 4.95 ×102 2.15 ×103 1.03 ×102
Mean 1.01 ×103 1.85 ×103 5.16 ×104 2.01 ×105 8.57 ×104 1.86 ×104 1.63 ×104 2.43 ×104 9.53 ×103 6.93 ×104 1.10 ×105 2.58 ×103 1.25 ×104 1.02 ×103
F4 Std 1.86 ×102 2.46 ×102 5.83 ×101 1.54 ×102 4.03 ×101 9.36 ×101 2.18 ×102 1.56 ×102 6.31 ×101 6.33 ×101 4.40 ×101 6.34 ×101 6.78 ×101 1.06 ×102
Mean 1.26 ×103 1.57 ×103 2.04 ×103 2.92 ×103 2.06 ×103 2.02 ×103 1.71 ×103 1.55 ×103 1.67 ×103 1.92 ×103 2.13 ×103 1.64 ×103 1.81 ×103 1.38 ×103
F5 Std 6.18 ×100 1.18 ×101 4.61 ×100 6.83 ×100 4.13 ×100 9.24 ×100 1.32 ×101 6.33 ×100 4.23 ×100 5.47 ×100 4.20 ×100 5.87 ×100 4.68 ×100 5.81 ×100
Mean 6.40 ×102 6.33 ×102 7.05 ×102 7.61 ×102 7.13 ×102 7.03 ×102 6.79 ×102 6.78 ×102 6.90 ×102 6.97 ×102 7.13 ×102 6.41 ×102 6.96 ×102 6.65 ×102
F6 Std 3.34 ×102 2.11 ×102 3.06 ×102 9.98 ×102 8.14 ×101 2.06 ×102 2.64 ×102 1.90 ×102 1.06 ×102 1.25 ×102 7.80 ×101 1.16 ×102 1.26 ×102 2.69 ×102
Mean 2.63 ×103 2.54 ×103 4.09 ×103 1.17 ×104 3.84 ×103 3.68 ×103 2.98 ×103 3.42 ×103 3.76 ×103 3.69 ×103 4.04 ×103 2.35 ×103 3.66 ×103 2.37 ×103
F7 Std 1.86 ×102 1.82 ×102 7.82 ×101 1.79 ×102 4.08 ×101 1.14 ×102 2.21 ×102 1.19 ×102 5.78 ×101 8.18 ×101 5.16 ×101 5.98 ×101 7.18 ×101 1.08 ×102
Mean 1.53 ×103 1.94 ×103 2.41 ×103 3.33 ×103 2.54 ×103 2.40 ×103 2.15 ×103 2.00 ×103 2.12 ×103 2.36 ×103 2.59 ×103 1.98 ×103 2.27 ×103 1.68 ×103
F8 Std 1.79 ×104 1.49 ×104 1.09 ×104 2.55 ×104 3.31 ×103 1.28 ×104 1.19 ×104 1.07 ×104 4.99 ×103 6.22 ×103 3.98 ×103 6.60 ×103 5.54 ×103 3.51 ×103
Mean 7.05 ×104 4.42 ×104 9.48 ×104 1.54 ×105 8.27 ×104 9.02 ×104 7.99 ×104 3.85 ×104 6.93 ×104 6.80 ×104 7.87 ×104 4.87 ×104 6.45 ×104 3.78 ×104
F9 Std 2.64 ×103 6.73 ×102 6.39 ×102 8.89 ×102 9.94 ×102 1.30 ×103 4.65 ×103 1.44 ×103 1.78 ×103 1.35 ×103 5.20 ×102 6.21 ×102 1.68 ×103 1.13 ×103
Mean 3.12 ×104 3.38 ×104 3.30 ×104 3.40 ×104 3.21 ×104 3.15 ×104 2.93 ×104 1.98 ×104 2.45 ×104 2.93 ×104 3.29 ×104 3.07 ×104 2.73 ×104 1.90 ×104
F10 Std 1.68 ×104 3.85 ×104 2.66 ×104 1.58 ×105 3.97 ×104 4.92 ×105 5.37 ×104 3.46 ×104 3.88 ×104 3.60 ×104 5.00 ×104 1.56 ×104 2.16 ×104 8.64 ×103
Mean 5.05 ×104 1.93 ×105 1.75 ×105 5.36 ×105 2.13 ×105 5.29 ×105 2.18 ×105 7.35 ×104 1.52 ×105 1.86 ×105 2.66 ×105 9.05 ×104 1.62 ×105 2.39 ×104
F11 Std 4.98 ×107 4.91 ×108 1.27 ×1010 6.92 ×1010 1.62 ×1010 1.20 ×1010 2.75 ×109 3.56 ×1010 3.72 ×109 1.74 ×1010 1.49 ×1010 3.02 ×108 3.94 ×109 2.31 ×108
Mean 1.02 ×108 5.75 ×108 9.45 ×1010 2.91 ×1011 1.85 ×1011 3.00 ×1010 7.94 ×109 6.44 ×1010 1.15 ×1010 1.52 ×1011 2.06 ×1011 8.36 ×108 1.82 ×1010 4.46 ×108
F12 Std 1.13 ×105 7.19 ×103 2.87 ×109 1.39 ×1010 5.43 ×109 2.13 ×109 2.48 ×108 6.26 ×109 1.42 ×108 3.91 ×109 5.92 ×109 1.01 ×105 1.15 ×109 5.31 ×106
Mean 5.69 ×104 1.38 ×104 1.77 ×1010 6.86 ×1010 4.60 ×1010 3.37 ×109 3.42 ×108 6.98 ×109 2.73 ×108 3.12 ×1010 4.76 ×1010 1.85 ×105 1.76 ×109 1.71 ×106
F13 Std 1.41 ×106 6.59 ×106 2.20 ×107 1.66 ×108 4.40 ×107 2.28 ×107 9.15 ×106 6.41 ×106 3.83 ×106 2.21 ×107 5.02 ×107 1.77 ×106 6.73 ×106 3.54 ×106
Mean 3.02 ×106 7.05 ×106 5.71 ×107 2.29 ×108 8.72 ×107 2.98 ×107 1.54 ×107 3.18 ×106 1.16 ×107 4.31 ×107 1.14 ×108 4.39 ×106 1.72 ×107 5.40 ×106
F14 Std 1.95 ×104 6.59 ×103 1.50 ×109 1.09 ×1010 4.40 ×109 1.79 ×109 9.64 ×107 3.29 ×109 1.35 ×107 3.60 ×109 4.48 ×109 3.88 ×103 2.20 ×108 2.41 ×106
Mean 1.65 ×104 7.37 ×103 5.62 ×109 3.01 ×1010 2.22 ×1010 1.65 ×109 6.39 ×107 1.28 ×109 1.86 ×107 1.76 ×1010 2.63 ×1010 1.01 ×104 2.50 ×108 9.25 ×105
F15 Std 1.83 ×103 1.46 ×103 1.12 ×103 5.51 ×103 2.64 ×103 1.25 ×103 1.55 ×103 3.67 ×103 1.29 ×103 1.97 ×103 2.31 ×103 5.54 ×102 1.64 ×103 5.85 ×102
Mean 6.16 ×103 1.13 ×104 1.51 ×104 3.00 ×104 2.20 ×104 1.23 ×104 9.41 ×103 1.13 ×104 1.07 ×104 1.76 ×104 2.51 ×104 1.04 ×104 1.33 ×104 6.50 ×103
F16 Std 6.63 ×102 5.52 ×102 4.27 ×104 1.25 ×107 8.43 ×106 6.52 ×104 1.29 ×103 1.45 ×106 2.40 ×103 2.20 ×106 1.54 ×107 4.29 ×102 1.15 ×104 7.25 ×102
Mean 4.77 ×103 8.46 ×103 5.76 ×104 7.37 ×106 8.79 ×106 4.32 ×104 9.63 ×103 3.90 ×105 8.85 ×103 2.26 ×106 1.63 ×107 7.02 ×103 1.50 ×104 5.65 ×103
F17 Std 3.79 ×106 1.27 ×107 6.34 ×107 3.49 ×108 9.00 ×107 3.11 ×107 1.33 ×107 1.61 ×107 4.27 ×106 3.14 ×107 1.54 ×108 2.45 ×106 9.85 ×106 4.15 ×106
Mean 6.74 ×106 1.74 ×107 1.34 ×108 4.72 ×108 1.74 ×108 5.06 ×107 2.56 ×107 6.58 ×106 8.87 ×106 7.15 ×107 3.24 ×108 5.45 ×106 2.22 ×107 1.00 ×107
F18 Std 7.09 ×103 6.12 ×103 1.49 ×109 9.60 ×109 4.56 ×109 4.99 ×108 1.65 ×108 4.50 ×109 5.13 ×107 3.29 ×109 5.20 ×109 9.26 ×103 2.23 ×108 4.39 ×105
Mean 1.12 ×104 7.31 ×103 5.33 ×109 3.02 ×1010 2.50 ×1010 1.18 ×109 1.37 ×108 2.27 ×109 4.91 ×107 1.53 ×1010 2.64 ×1010 1.19 ×104 2.77 ×108 6.90 ×105
F19 Std 1.37 ×103 3.20 ×102 3.30 ×102 3.29 ×102 2.07 ×102 3.56 ×102 7.95 ×102 9.30 ×102 4.99 ×102 5.12 ×102 2.40 ×102 2.80 ×102 4.98 ×102 6.70 ×102
Mean 6.33 ×103 8.31 ×103 8.11 ×103 8.83 ×103 7.90 ×103 7.91 ×103 7.21 ×103 6.06 ×103 6.25 ×103 6.95 ×103 8.00 ×103 7.36 ×103 6.70 ×103 5.80 ×103
F20 Std 2.15 ×102 2.36 ×102 1.01 ×102 2.43 ×102 2.28 ×102 2.31 ×102 2.12 ×102 2.96 ×102 2.44 ×102 1.82 ×102 2.44 ×102 4.70 ×101 1.27 ×102 1.46 ×102
Mean 3.10 ×103 3.42 ×103 4.18 ×103 5.34 ×103 4.61 ×103 4.29 ×103 4.07 ×103 4.15 ×103 4.44 ×103 4.51 ×103 5.05 ×103 3.40 ×103 4.20 ×103 3.21 ×103
F21 Std 5.64 ×103 8.54 ×102 7.43 ×102 1.47 ×103 8.58 ×102 1.81 ×103 4.62 ×103 2.73 ×103 1.93 ×103 8.77 ×102 7.06 ×102 7.02 ×102 1.49 ×103 1.21 ×103
Mean 2.99 ×104 3.61 ×104 3.52 ×104 3.65 ×104 3.48 ×104 3.27 ×104 3.04 ×104 2.39 ×104 2.73 ×104 3.22 ×104 3.54 ×104 3.31 ×104 3.04 ×104 2.08 ×104
F22 Std 1.25 ×102 2.00 ×102 1.43 ×102 6.14 ×102 1.80 ×102 4.56 ×102 2.41 ×102 3.10 ×102 5.48 ×102 4.85 ×102 3.16 ×102 5.22 ×101 2.18 ×102 1.25 ×102
Mean 3.58 ×103 3.59 ×103 5.26 ×103 8.03 ×103 5.72 ×103 6.32 ×103 4.82 ×103 5.24 ×103 5.89 ×103 7.33 ×103 6.74 ×103 4.00 ×103 5.09 ×103 3.59 ×103
F23 Std 2.35 ×102 2.62 ×102 2.63 ×102 1.54 ×103 2.65 ×103 9.98 ×102 5.28 ×102 8.76 ×102 7.31 ×102 7.44 ×102 8.58 ×102 6.81 ×101 3.85 ×102 1.73 ×102
Mean 4.36 ×103 4.34 ×103 7.43 ×103 1.31 ×104 9.65 ×103 1.02 ×104 6.19 ×103 7.00 ×103 8.47 ×103 1.20 ×104 1.04 ×104 4.56 ×103 6.60 ×103 4.26 ×103
F24 Std 1.44 ×102 2.89 ×102 2.94 ×103 1.46 ×104 2.35 ×103 2.27 ×103 5.56 ×103 4.47 ×103 5.11 ×102 2.08 ×103 1.22 ×103 3.87 ×102 9.58 ×102 7.28 ×101
Mean 3.70 ×103 4.46 ×103 2.26 ×104 9.62 ×104 2.59 ×104 1.44 ×104 9.79 ×103 1.34 ×104 6.81 ×103 2.41 ×104 3.03 ×104 5.01 ×103 9.01 ×103 3.70 ×103
F25 Std 2.19 ×103 2.47 ×103 3.38 ×103 9.66 ×103 2.20 ×103 1.52 ×104 3.97 ×103 7.23 ×103 2.65 ×103 3.14 ×103 1.85 ×103 1.49 ×103 3.52 ×103 2.64 ×103
Mean 1.59 ×104 1.63 ×104 4.22 ×104 7.74 ×104 5.02 ×104 3.85 ×104 2.69 ×104 3.67 ×104 3.23 ×104 4.80 ×104 5.34 ×104 2.11 ×104 3.35 ×104 1.60 ×104
F26 Std 8.15 ×101 1.41 ×102 5.68 ×102 1.79 ×103 2.31 ×103 1.92 ×103 4.64 ×102 1.48 ×103 1.13 ×103 1.13 ×103 1.57 ×103 9.73 ×101 4.87 ×102 1.05 ×102
Mean 3.78 ×103 3.79 ×103 8.70 ×103 1.53 ×104 1.16 ×104 7.87 ×103 4.62 ×103 5.95 ×103 7.02 ×103 1.32 ×104 1.50 ×104 4.23 ×103 5.31 ×103 3.85 ×103
F27 Std 3.48 ×102 1.27 ×103 3.65 ×103 8.84 ×103 2.32 ×103 2.40 ×103 7.08 ×103 6.83 ×103 8.50 ×102 2.19 ×103 1.62 ×103 4.81 ×102 1.63 ×103 7.13 ×101
Mean 4.07 ×103 6.29 ×103 2.76 ×104 6.31 ×104 2.89 ×104 1.82 ×104 1.70 ×104 2.00 ×104 9.49 ×103 3.13 ×104 3.02 ×104 5.98 ×103 1.19 ×104 3.73 ×103
F28 Std 6.90 ×102 1.91 ×103 1.10 ×104 1.46 ×106 4.84 ×105 2.16 ×103 2.30 ×103 9.79 ×104 1.83 ×103 1.48 ×105 5.99 ×105 4.70 ×102 2.95 ×103 5.35 ×102
Mean 6.98 ×103 9.31 ×103 3.20 ×104 1.26 ×106 6.66 ×105 1.61 ×104 1.23 ×104 5.46 ×104 1.30 ×104 1.67 ×105 8.15 ×105 1.03 ×104 1.80 ×104 8.28 ×103
F29 Std 5.40 ×105 5.33 ×105 2.37 ×109 1.32 ×1010 4.79 ×109 3.63 ×109 1.23 ×108 6.08 ×109 3.69 ×108 6.38 ×109 6.77 ×109 3.42 ×106 1.12 ×109 8.65 ×106
Mean 8.27 ×105 7.05 ×105 1.21 ×1010 5.49 ×1010 4.26 ×1010 4.03 ×109 2.68 ×108 5.75 ×109 8.27 ×108 2.76 ×1010 4.21 ×1010 7.67 ×106 2.13 ×109 1.64 ×107
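The Mean and Std rows in the tables above summarize the best fitness values obtained over repeated independent runs of each algorithm on each benchmark function. As an illustrative sketch of how such summary statistics are produced (the run values below are hypothetical, not taken from the reported experiments), the two quantities can be computed and formatted in scientific notation as follows:

```python
import statistics

def summarize_runs(best_fitness_per_run):
    """Return (mean, sample std) of best-fitness values from repeated runs."""
    mean = statistics.fmean(best_fitness_per_run)
    std = statistics.stdev(best_fitness_per_run)  # sample standard deviation
    return mean, std

def sci(x):
    """Format a value in the tables' scientific-notation style, e.g. 2.90e+03."""
    return f"{x:.2e}"

# Hypothetical best-fitness values from 5 independent runs on one function
runs = [2900.0, 2910.0, 2895.0, 2905.0, 2890.0]
mean, std = summarize_runs(runs)
print(sci(mean), sci(std))  # → 2.90e+03 7.91e+00
```

Note that the sample (n − 1) standard deviation is used here, which is the usual choice when the runs are treated as a sample of the algorithm's stochastic behavior.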

Author Contributions

Conceptualization, S.G. and C.G.; methodology, C.G. and J.J.; software, C.G.; validation, C.G.; resources, S.G. and J.J.; writing—original draft preparation, C.G.; writing—review and editing, S.G. and J.J.; visualization, C.G.; supervision, S.G. and J.J. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets analyzed during the current research can be obtained from the University of California, Irvine (UCI) Machine Learning Repository (https://archive.ics.uci.edu/, accessed on 1 January 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

This research was funded by the Scientific Research Project of the Jilin Provincial Department of Education (No. JJKH20240171SK).

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Zhang W., Shen X., Zhang H., Yin Z., Sun J., Zhang X., Zou L. Feature importance measure of a multilayer perceptron based on the presingle-connection layer. Knowl. Inf. Syst. 2024;66:511–533. doi: 10.1007/s10115-023-01959-7. [DOI] [Google Scholar]
  • 2.Chan K.Y., Abu-Salih B., Qaddoura R., Al-Zoubi A., Palade V., Pham D.S., Del Ser J., Muhammad K. Deep neural networks in the cloud: Review, applications, challenges and research directions. Neurocomputing. 2023;545:126327. doi: 10.1016/j.neucom.2023.126327. [DOI] [Google Scholar]
  • 3.Zhao X., Wang L., Zhang Y., Han X., Deveci M., Parmar M. A review of convolutional neural networks in computer vision. Artif. Intell. Rev. 2024;57:99. doi: 10.1007/s10462-024-10721-6. [DOI] [Google Scholar]
  • 4.Sajun A.R., Zualkernan I., Sankalpa D. A historical survey of advances in transformer architectures. Appl. Sci. 2024;14:4316. doi: 10.3390/app14104316. [DOI] [Google Scholar]
  • 5.Terven J., Córdova-Esparza D.M., Romero-González J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023;5:1680–1716. doi: 10.3390/make5040083. [DOI] [Google Scholar]
  • 6.Deng Z., Ma W., Han Q.L., Zhou W., Zhu X., Wen S., Xiang Y. Exploring DeepSeek: A Survey on Advances, Applications, Challenges and Future Directions. IEEE/CAA J. Autom. Sin. 2025;12:872–893. doi: 10.1109/JAS.2025.125498. [DOI] [Google Scholar]
  • 7.Jin C., Netrapalli P., Ge R., Kakade S.M., Jordan M.I. On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points. J. ACM (JACM) 2021;68:1–29. doi: 10.1145/3418526. [DOI] [Google Scholar]
  • 8.Swenson B., Murray R., Poor H.V., Kar S. Distributed stochastic gradient descent: Nonconvexity, nonsmoothness, and convergence to local minima. J. Mach. Learn. Res. 2022;23:1–62. [Google Scholar]
  • 9.Van Thieu N., Mirjalili S., Garg H., Hoang N.T. MetaPerceptron: A standardized framework for metaheuristic-driven multi-layer perceptron optimization. Comput. Stand. Interfaces. 2025;93:103977. doi: 10.1016/j.csi.2025.103977. [DOI] [Google Scholar]
  • 10.Mirjalili S. How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Appl. Intell. 2015;43:150–161. doi: 10.1007/s10489-014-0645-7. [DOI] [Google Scholar]
  • 11.Hai T., Li H., Band S.S., Shadkani S., Samadianfard S., Hashemi S., Chau K.W., Mousavi A. Comparison of the efficacy of particle swarm optimization and stochastic gradient descent algorithms on multi-layer perceptron model to estimate longitudinal dispersion coefficients in natural streams. Eng. Appl. Comput. Fluid Mech. 2022;16:2207–2221. doi: 10.1080/19942060.2022.2141896. [DOI] [Google Scholar]
  • 12.Hameed F., Alkhzaimi H. Hybrid genetic algorithm and deep learning techniques for advanced side-channel attacks. Sci. Rep. 2025;15:25728. doi: 10.1038/s41598-025-06375-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Gurgenc E., Altay O., Altay E.V. AOSMA-MLP: A novel method for hybrid metaheuristics artificial neural networks and a new approach for prediction of geothermal reservoir temperature. Appl. Sci. 2024;14:3534. doi: 10.3390/app14083534. [DOI] [Google Scholar]
  • 14.Lu Y., Zhao H. Research on Slope Stability Prediction Based on MC-BKA-MLP Mixed Model. Appl. Sci. 2025;15:3158. doi: 10.3390/app15063158. [DOI] [Google Scholar]
  • 15.Abu-Doush I., Ahmed B., Awadallah M.A., Al-Betar M.A., Rababaah A.R. Enhancing multilayer perceptron neural network using archive-based harris hawks optimizer to predict gold prices. J. King Saud Univ.-Comput. Inf. Sci. 2023;35:101557. doi: 10.1016/j.jksuci.2023.101557. [DOI] [Google Scholar]
  • 16.Mohammadi B., Guan Y., Moazenzadeh R., Safari M.J.S. Implementation of hybrid particle swarm optimization-differential evolution algorithms coupled with multi-layer perceptron for suspended sediment load estimation. Catena. 2021;198:105024. doi: 10.1016/j.catena.2020.105024. [DOI] [Google Scholar]
  • 17.Ehteram M., Panahi F., Ahmed A.N., Huang Y.F., Kumar P., Elshafie A. Predicting evaporation with optimized artificial neural network using multi-objective salp swarm algorithm. Environ. Sci. Pollut. Res. 2022;29:10675–10701. doi: 10.1007/s11356-021-16301-3. [DOI] [PubMed] [Google Scholar]
  • 18.Yang Z., Jiang Y., Yeh W.C. Self-learning salp swarm algorithm for global optimization and its application in multi-layer perceptron model training. Sci. Rep. 2024;14:27401. doi: 10.1038/s41598-024-77440-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Liu Z., Cui Z., Wang M., Liu B., Tian W. A machine learning proxy based multi-objective optimization method for low-carbon hydrogen production. J. Clean. Prod. 2024;445:141377. doi: 10.1016/j.jclepro.2024.141377. [DOI] [Google Scholar]
  • 20.Jiang J., Wu J., Luo J., Yang X., Huang Z. MOBCA: Multi-Objective Besiege and Conquer Algorithm. Biomimetics. 2024;9:316. doi: 10.3390/biomimetics9060316. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Jiang J., Meng X., Wu J., Tian J., Xu G., Li W. BCA: Besiege and Conquer Algorithm. Symmetry. 2025;17:217. doi: 10.3390/sym17020217. [DOI] [Google Scholar]
  • 22.Holland J.H. Genetic algorithms. Sci. Am. 1992;267:66–73. doi: 10.1038/scientificamerican0792-66. [DOI] [Google Scholar]
  • 23.Storn R., Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997;11:341–359. doi: 10.1023/A:1008202821328. [DOI] [Google Scholar]
  • 24.Kennedy J., Eberhart R. Particle swarm optimization; Proceedings of the ICNN’95-International Conference on Neural Networks; Perth, Australia. 27 November–1 December 1995; New York, NY, USA: IEEE; 1995. pp. 1942–1948. [DOI] [Google Scholar]
  • 25.Qamar R., Zardari B.A. Artificial Neural Networks: An Overview. Mesopotamian J. Comput. Sci. 2023;2023:124–133. doi: 10.58496/MJCSC/2023/015. [DOI] [Google Scholar]
  • 26.Xi F. Stability for a random evolution equation with Gaussian perturbation. J. Math. Anal. Appl. 2002;272:458–472. doi: 10.1016/S0022-247X(02)00163-4. [DOI] [Google Scholar]
  • 27.Omar M.B., Bingi K., Prusty B.R., Ibrahim R. Recent advances and applications of spiral dynamics optimization algorithm: A review. Fractal Fract. 2022;6:27. doi: 10.3390/fractalfract6010027. [DOI] [Google Scholar]
  • 28.Mirjalili S., Lewis A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016;95:51–67. doi: 10.1016/j.advengsoft.2016.01.008. [DOI] [Google Scholar]
  • 29.Yang Q., Hua L., Gao X., Xu D., Lu Z., Jeon S.W., Zhang J. Stochastic cognitive dominance leading particle swarm optimization for multimodal problems. Mathematics. 2022;10:761. doi: 10.3390/math10050761. [DOI] [Google Scholar]
  • 30.Yang Q., Jing Y., Gao X., Xu D., Lu Z., Jeon S.W., Zhang J. Predominant cognitive learning particle swarm optimization for global numerical optimization. Mathematics. 2022;10:1620. doi: 10.3390/math10101620. [DOI] [Google Scholar]
  • 31.Mirjalili S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016;96:120–133. doi: 10.1016/j.knosys.2015.12.022. [DOI] [Google Scholar]
  • 32.Abualigah L., Abd Elaziz M., Sumari P., Geem Z.W., Gandomi A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022;191:116158. doi: 10.1016/j.eswa.2021.116158. [DOI] [Google Scholar]
  • 33.Xue J., Shen B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023;79:7305–7336. doi: 10.1007/s11227-022-04959-6. [DOI] [Google Scholar]
  • 34.Wang J., Wang W.C., Hu X.X., Qiu L., Zang H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024;57:98. doi: 10.1007/s10462-024-10723-4. [DOI] [Google Scholar]
  • 35.Heidari A.A., Mirjalili S., Faris H., Aljarah I., Mafarja M., Chen H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019;97:849–872. doi: 10.1016/j.future.2019.02.028. [DOI] [Google Scholar]
  • 36.Oladejo S.O., Ekwe S.O., Mirjalili S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024;296:111880. doi: 10.1016/j.knosys.2024.111880. [DOI] [Google Scholar]
  • 37.Dehghani M., Montazeri Z., Trojovská E., Trojovskỳ P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023;259:110011. doi: 10.1016/j.knosys.2022.110011. [DOI] [Google Scholar]
  • 38.Abdel-Basset M., Mohamed R., Abouhawwash M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024;284:111257. doi: 10.1016/j.knosys.2023.111257. [DOI] [Google Scholar]
  • 39.Lian J., Hui G., Ma L., Zhu T., Wu X., Heidari A.A., Chen Y., Chen H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024;172:108064. doi: 10.1016/j.compbiomed.2024.108064. [DOI] [PubMed] [Google Scholar]
  • 40.Li S., Chen H., Wang M., Heidari A.A., Mirjalili S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020;111:300–323. doi: 10.1016/j.future.2020.03.055. [DOI] [Google Scholar]
  • 41.Zheng B., Chen Y., Wang C., Heidari A.A., Liu L., Chen H. The moss growth optimization (MGO): Concepts and performance. J. Comput. Des. Eng. 2024;11:184–221. doi: 10.1093/jcde/qwae080. [DOI] [Google Scholar]
  • 42.Cai X., Zhang C. An Innovative Differentiated Creative Search Based on Collaborative Development and Population Evaluation. Biomimetics. 2025;10:260. doi: 10.3390/biomimetics10050260. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Potter K., Hagen H., Kerren A., Dannenmann P. Methods for presenting statistical information: The box plot; Proceedings of the VLUDS; Seoul, Republic of Korea. 11 September 2006; [(accessed on 28 October 2025)]. pp. 97–106. Available online: https://api.semanticscholar.org/CorpusID:1344717. [Google Scholar]
  • 44.Brest J., Zumer V., Maucec M.S. Self-adaptive differential evolution algorithm in constrained real-parameter optimization; Proceedings of the 2006 IEEE International Conference on Evolutionary Computation; Vancouver, BC, Canada. 16–21 July 2006; New York, NY, USA: IEEE; 2006. pp. 215–222. [DOI] [Google Scholar]
  • 45.Tanabe R., Fukunaga A.S. Improving the search performance of SHADE using linear population size reduction; Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC); Beijing, China. 6–11 July 2014; New York, NY, USA: IEEE; 2014. pp. 1658–1665. [DOI] [Google Scholar]
  • 46.Awad N.H., Ali M.Z., Suganthan P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems; Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC); San Sebastian, Spain. 5–8 June 2017; New York, NY, USA: IEEE; 2017. pp. 372–379. [DOI] [Google Scholar]
  • 47.Wolpert D.H., Macready W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997;1:67–82. doi: 10.1109/4235.585893. [DOI] [Google Scholar]
  • 48.Jia H., Lu C. Guided learning strategy: A novel update mechanism for metaheuristic algorithms design and improvement. Knowl.-Based Syst. 2024;286:111402. doi: 10.1016/j.knosys.2024.111402. [DOI] [Google Scholar]
  • 49.Dehghani M., Trojovskỳ P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023;8:1126450. doi: 10.3389/fmech.2022.1126450. [DOI] [Google Scholar]
  • 50.Meng X., Jiang J., Wang H. AGWO: Advanced GWO in multi-layer perception optimization. Expert Syst. Appl. 2021;173:114676. doi: 10.1016/j.eswa.2021.114676. [DOI] [Google Scholar]
  • 51.McGarry K.J., Wermter S., MacIntyre J. Knowledge extraction from radial basis function networks and multilayer perceptrons; Proceedings of the IJCNN’99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339); Washington, DC, USA. 10–16 July 1999; New York, NY, USA: IEEE; 1999. pp. 2494–2497. [DOI] [Google Scholar]
  • 52.Daniel W.B., Yeung E. A constructive approach for one-shot training of neural networks using hypercube-based topological coverings. arXiv. 2019;arXiv:1901.02878. doi: 10.48550/arXiv.1901.02878. [DOI] [Google Scholar]
  • 53.Fisher R.A. Iris. UCI Machine Learning Repository. 1936. [(accessed on 28 October 2025)]. Available online: https://archive.ics.uci.edu/dataset/53/iris.
  • 54.Ay Ş., Ekinci E., Garip Z. A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases. J. Supercomput. 2023;79:11797. doi: 10.1007/s11227-023-05132-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Janosi A., Steinbrunn W., Pfisterer M., Detrano R. Heart Disease. UCI Machine Learning Repository. 1989. [(accessed on 28 October 2025)]. Available online: https://archive.ics.uci.edu/dataset/45/heart+disease.
  • 56.Ruder S. An overview of gradient descent optimization algorithms. arXiv. 2016;arXiv:1609.04747. doi: 10.48550/arXiv.1609.04747. [DOI] [Google Scholar]
  • 57.Robbins H., Monro S. A stochastic approximation method. Ann. Math. Stat. 1951;22:400–407. doi: 10.1214/aoms/1177729586. [DOI] [Google Scholar]
  • 58.Kingma D.P., Ba J. Adam: A Method for Stochastic Optimization. CoRR. 2014;abs/1412.6980. [(accessed on 28 October 2025)]. Available online: https://api.semanticscholar.org/CorpusID:6628106. [Google Scholar]
  • 59.Cao J., Qian C., Huang Y., Chen D., Gao Y., Dong J., Guo D., Qu X. A Dynamics Theory of RMSProp-Based Implicit Regularization in Deep Low-Rank Matrix Factorization. IEEE Trans. Neural Netw. Learn. Syst. 2025;36:18750–18764. doi: 10.1109/TNNLS.2025.3574683. [DOI] [PubMed] [Google Scholar]
  • 60.Ward R., Wu X., Bottou L. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. J. Mach. Learn. Res. 2020;21:1–30. [Google Scholar]
  • 61.Oyelade O.N., Aminu E.F., Wang H., Rafferty K. An adaptation of hybrid binary optimization algorithms for medical image feature selection in neural network for classification of breast cancer. Neurocomputing. 2025;617:129018. doi: 10.1016/j.neucom.2024.129018. [DOI] [Google Scholar]
  • 62.Fu Y. Research on Financial Time Series Prediction Model Based on Multifractal Trend Cross Correlation Removal and Deep Learning. Procedia Comput. Sci. 2025;261:217–226. doi: 10.1016/j.procs.2025.04.192. [DOI] [Google Scholar]
  • 63.Wafa A.A., Eldefrawi M.M., Farhan M.S. Advancing multimodal emotion recognition in big data through prompt engineering and deep adaptive learning. J. Big Data. 2025;12:210. doi: 10.1186/s40537-025-01264-w. [DOI] [Google Scholar]


Articles from Biomimetics are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
