Biomimetics. 2024 Nov 1;9(11):670. doi: 10.3390/biomimetics9110670

Improved Osprey Optimization Algorithm with Multi-Strategy Fusion

Wenli Lei 1,*, Jinping Han 1, Xinghao Wu 1
Editors: Heming Jia, Laith Abualigah, Xuewen Xia
PMCID: PMC11592147  PMID: 39590242

Abstract

The osprey optimization algorithm (OOA) is an effective metaheuristic algorithm. Although the OOA has the characteristics of strong optimality-seeking ability and fast convergence speed, it also has the disadvantages of imbalance between global exploration and local exploitation ability, easily falling into local optima in the later stage, and reduced population diversity and convergence speed. Therefore, this paper proposes an improved osprey optimization algorithm (IOOA) with multi-strategy fusion. First, Fuch chaotic mapping is used to initialize the osprey population and increase the population diversity. Then, an adaptive weighting factor is introduced in the exploration phase of the algorithm to help improve the convergence accuracy. The Cauchy variation strategy is integrated in the algorithm’s exploitation stage to enhance the diversity of the osprey population and avoid falling into local optima. Finally, the Warner mechanism of the sparrow search algorithm is introduced to coordinate the algorithm’s local optimization and global search capabilities. The IOOA is tested in simulation against various optimization algorithms on 10 benchmark test functions and 15 CEC2017 test functions, and non-parametric tests are performed on the results. Experimental results show that the IOOA achieves improved accuracy and stability. The application of the IOOA to the three-bar truss engineering design problem further verifies its superiority in dealing with practical optimization problems.

Keywords: osprey optimization algorithm, Fuch chaotic mapping, adaptive weighting factor, Cauchy’s variation

1. Introduction

Due to their simplicity, flexibility, and efficiency, swarm intelligence algorithms have been widely used in recent years in engineering fields such as cluster task planning [1,2], workshop scheduling [3,4], power system optimization [5,6], and path planning [7,8]. The osprey optimization algorithm (OOA) [9], first proposed in 2023 by Mohammad Dehghani and Pavel Trojovský, is a novel bio-heuristic optimization algorithm that simulates the hunting and predation behavior of ospreys in nature. Based on the simulation of the ospreys’ hunting process, the OOA establishes a mathematical model of the two phases of osprey behavior, exploration and exploitation. The algorithm has the advantages of a simple structure, a robust global search capability, and fast convergence. However, it inevitably encounters problems common to metaheuristic algorithms, such as a reduced convergence speed in later iterations and a tendency to fall into local optima.

In order to improve the convergence speed of algorithms and enhance the ability of algorithms to leap out of local extremes, many scholars have conducted extensive research in this field. The study in Yu et al. [10] improved the gray wolf optimization algorithm by using the good point-set method to initialize the population to increase the population diversity and introducing a beetle tentacle search mechanism to prevent the algorithm from falling into local optima. The study in [11] proposed an ADFPSO algorithm by using a fitness-based driver to improve the development capability of the PSO algorithm and a novelty-based driver to enhance the exploration capability of the PSO algorithm and by introducing an adaptive weighting factor to coordinate the weights of the two drivers at different stages of the optimization search. The study in [12] proposed an improved gorilla troops optimizer based on lens opposition-based learning and adaptive hill climbing for global optimization, using reverse convex lens imaging learning to expand the search range and avoid falling into local optima and introducing an adaptive hill climbing algorithm in combination with the GTO to improve the solution accuracy. The study in [13] proposed a whale optimization algorithm based on the siege mechanism, which combines the siege mechanism of the Harris hawk optimization algorithm to improve the global exploration and local optimization-seeking ability of the whale optimization algorithm. The study in [14] solved the problem of low solution accuracy and the poor stability of the Marine Predator Algorithm (MPA) by introducing an elite reverse learning strategy and a golden sine strategy. The study in [15] proposed an improved grasshopper optimization algorithm (IWGOA) that introduced the weed algorithm and a random wandering strategy into the grasshopper algorithm to improve the convergence accuracy. The study in [16] introduced time delay and sorting parameters based on the artificial bee colony algorithm and used chaotic systems to solve the multidimensionalization problem of parameter estimation to prevent the execution process from falling into local optima. The study in [17] proposed a chaotic chimp optimization algorithm based on adaptive tuning, which employs a Sin operator for population initialization to enhance the population richness and also improves the convergence factor (f) and dynamically adjusts the number of chimp precedence echelons, which enhances the algorithm’s global search and local exploitation abilities. The study in [18] improved the osprey optimization algorithm by using Sobol sequences for population initialization, introducing a step factor based on the Weibull distribution to balance the algorithm’s local and global optimality-seeking ability and incorporating firefly perturbations to prevent the algorithm from falling into a local optimum. The study in [19] proposed an attack–defense strategy-assisted osprey optimization algorithm (ADSOOA), which integrates an attack–defense strategy to improve convergence performance and prevent the algorithm from falling into local optima, and applied it to PEMFC parameter identification.

The improvements proposed above for swarm intelligence algorithms reduce the possibility of falling into local extremes to a certain extent, but problems remain, such as low convergence accuracy and limited improvement in algorithm performance. To better improve the optimization performance and application capability of the osprey optimization algorithm, this paper proposes an improved osprey optimization algorithm (IOOA) with multi-strategy fusion. The algorithm uses Fuch chaotic mapping to make the initialized population more evenly distributed and increase population diversity. The introduction of an adaptive weighting factor in the exploration phase of the algorithm improves the convergence speed. Incorporating the Cauchy variation operator during the algorithm’s exploitation phase enhances the diversity of the osprey population while improving the algorithm’s ability to leap out of local extremes. The Warner mechanism of the sparrow search algorithm is introduced to balance the algorithm’s global exploration and local exploitation capabilities.

2. OOA

The OOA is a heuristic algorithm that models the foraging process of osprey populations and consists of the following two parts: an exploration phase and an exploitation phase. First, the osprey population is initialized in the search space with the expression shown in Equation (1).

X_{i,j} = lb_j + r_{i,j} \cdot (ub_j - lb_j), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D (1)

where X_{i,j} is the initial position of the ith osprey in the jth dimension; lb_j and ub_j are the lower and upper bounds of the jth problem variable, respectively; r_{i,j} is a random number in the range [0, 1]; N is the osprey population size; and D is the dimension of the problem.

Equation (2) shows the function to calculate the fitness value.

F_i = F(X_i), \quad i = 1, 2, \ldots, N (2)

where F_i is the fitness value of the ith osprey and X_i is the position of the ith osprey. In this paper, the objective is to minimize F_i: the smaller the fitness value, the better the osprey’s location.

After population initialization, the ospreys enter the exploration phase, which is the global exploration phase. The positions of other ospreys with better fitness values in the search space are regarded as fish positions. Equation (3) expresses the set of fish locations for each osprey.

FP_i = \{ X_k \mid k \in \{1, 2, \ldots, N\} \wedge F_k < F_i \} \cup \{ X_{best} \}, \quad i = 1, 2, \ldots, N (3)

where FP_i is the set of fish locations of the ith osprey and X_{best} is the location of the osprey with the best fitness value.

In the search space, the osprey randomly selects a fish and attacks it. During the simulation of the osprey’s movement towards the fish, this paper uses Equations (4) and (5) to calculate the osprey’s new position.

X_{i,j}^{P1} = X_{i,j} + r_{i,j} \cdot (SF_{i,j} - I_{i,j} \cdot X_{i,j}), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D (4)
X_{i,j}^{P1} =
\begin{cases}
X_{i,j}^{P1}, & lb_j \le X_{i,j}^{P1} \le ub_j \\
lb_j, & X_{i,j}^{P1} < lb_j \\
ub_j, & X_{i,j}^{P1} > ub_j
\end{cases} (5)

where X_{i,j}^{P1} is the new position of the ith osprey in the jth dimension in the first stage; SF_{i,j} is the fish selected by the ith osprey in the jth dimension; r_{i,j} is a random number in the range [0, 1]; and I_{i,j} is a random integer, either 1 or 2.

If the new position’s fitness value is better, then the new position replaces the original position; otherwise, it does not. Equation (6) shows the process.

X_i =
\begin{cases}
X_i^{P1}, & F_i^{P1} < F_i \\
X_i, & F_i^{P1} \ge F_i
\end{cases} (6)

where X_i^{P1} is the new position of the ith osprey after the first-stage update and F_i^{P1} is its fitness value.

After hunting a fish in nature, the osprey will take it to a safe place and feed on it. In this process, this paper uses Equations (7) and (8) to calculate a new random position as the feeding position.

X_{i,j}^{P2} = X_{i,j} + \dfrac{lb_j + r_{i,j} \cdot (ub_j - lb_j)}{t}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D; \; t = 1, 2, \ldots, T (7)
X_{i,j}^{P2} =
\begin{cases}
X_{i,j}^{P2}, & lb_j \le X_{i,j}^{P2} \le ub_j \\
lb_j, & X_{i,j}^{P2} < lb_j \\
ub_j, & X_{i,j}^{P2} > ub_j
\end{cases} (8)

where X_{i,j}^{P2} is the new position of the ith osprey in the jth dimension in the second stage; r_{i,j} is a random number in the range [0, 1]; t is the current iteration number; and T is the maximum number of iterations.

If the new position’s fitness value is better, then the new position replaces the original position; otherwise, it does not. Equation (9) shows the process.

X_i =
\begin{cases}
X_i^{P2}, & F_i^{P2} < F_i \\
X_i, & F_i^{P2} \ge F_i
\end{cases} (9)

where X_i^{P2} is the new position of the ith osprey after the second-stage update and F_i^{P2} is its fitness value.
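For readers who want to experiment with the baseline algorithm, the following is a minimal NumPy sketch of one OOA update for a single osprey, covering the exploration step (Equations (3)–(6)) and the exploitation step (Equations (7)–(9)). The paper’s experiments use MATLAB; this Python version and its variable names (positions, fitness, lb, ub) are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def ooa_update_one(i, positions, fitness, f, lb, ub, t, rng):
    """One OOA iteration for osprey i on a minimization objective f."""
    N, D = positions.shape
    x = positions[i].copy()
    fit_i = fitness[i]

    # Phase 1 (exploration, Eqs. (3)-(5)): move towards a randomly selected
    # "fish", i.e. an osprey with a better fitness value (or the global best).
    better = [k for k in range(N) if fitness[k] < fit_i]
    fish = positions[rng.choice(better)] if better else positions[np.argmin(fitness)]
    r = rng.random(D)
    I = rng.integers(1, 3, size=D)                 # random integer, 1 or 2
    x_p1 = np.clip(x + r * (fish - I * x), lb, ub)
    f_p1 = f(x_p1)
    if f_p1 < fit_i:                               # greedy selection, Eq. (6)
        x, fit_i = x_p1, f_p1

    # Phase 2 (exploitation, Eqs. (7)-(8)): carry the fish to a nearby random
    # position whose step shrinks as the iteration counter t grows.
    r = rng.random(D)
    x_p2 = np.clip(x + (lb + r * (ub - lb)) / t, lb, ub)
    f_p2 = f(x_p2)
    if f_p2 < fit_i:                               # greedy selection, Eq. (9)
        x, fit_i = x_p2, f_p2

    return x, fit_i
```

Applying this update to every osprey in each of T iterations reproduces the two-phase structure that the strategies in Section 3 then modify.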

3. IOOA

3.1. Fuch Chaotic Mapping

The OOA initializes the population randomly, which can lead to an uneven distribution and reduce the diversity of the initial population. Chaotic mapping has the advantages of randomness, ergodicity and regularity, which can enrich the diversity of the initial population, enhance the global search ability, and improve the effectiveness of the algorithm’s solution. Therefore, this paper introduces Fuch chaotic mapping [20] to initialize the osprey population. The Fuch map is an infinite-collapse chaotic map with stronger chaotic properties and more balanced traversal than traditional chaotic maps; its chaotic sequence is shown in Figure 1. Equation (10) shows the mathematical expression of Fuch chaotic mapping.

y_{i+1} = \cos\!\left(\dfrac{1}{y_i^2}\right), \quad y_i \in (-1, 1), \; y_i \neq 0, \; i \in \mathbb{Z}^{+} (10)

This paper uses Equation (10) to generate the chaotic variables; the initialization formula after introducing the chaotic variables is shown in Equation (11).

X_{i,j} = lb_j + y_i \cdot (ub_j - lb_j), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D (11)

Figure 1a shows the distribution of the Fuch chaotic sequence and Figure 1b shows the histogram of the distribution of the chaotic sequence.
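Below is a minimal sketch of Fuch-chaos-based population initialization (Equations (10) and (11)). Since the Fuch map produces values in [−1, 1], rescaling the chaotic value to [0, 1] before applying Equation (11) is an assumption made here so that positions stay inside the bounds; the function names are illustrative.

```python
import numpy as np

def fuch_sequence(n, y0=0.3):
    """Iterate y_{k+1} = cos(1 / y_k^2), avoiding the forbidden point y = 0."""
    y = np.empty(n)
    yk = y0
    for k in range(n):
        yk = np.cos(1.0 / (yk * yk))
        if yk == 0.0:              # extremely unlikely; nudge away from zero
            yk = 1e-6
        y[k] = yk
    return y

def fuch_init(N, D, lb, ub, y0=0.3):
    y = fuch_sequence(N * D, y0).reshape(N, D)
    r = (y + 1.0) / 2.0            # rescale [-1, 1] -> [0, 1] (assumption)
    return lb + r * (ub - lb)      # Eq. (11)

# Example: 30 ospreys in 10 dimensions on [-100, 100]^10
pop = fuch_init(30, 10, lb=-100.0, ub=100.0)
```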

Figure 1. Fuch chaotic distribution.

3.2. Adaptive Weighting Factor

Coordinating a metaheuristic algorithm’s global search ability and local exploitation ability is an important factor affecting its optimization accuracy and speed. In the first stage of the osprey position update, this paper introduces an adaptive weight factor to dynamically regulate the algorithm’s local exploitation and global exploration capabilities and to improve the algorithm’s optimization accuracy and convergence speed. Equation (12) shows the adaptive weighting factor.

w = \dfrac{e^{t/T} - 1}{e - 1} (12)

At the beginning of the algorithm iterations, the w-value is small, and the osprey individual focuses more on exploring other locations, which facilitates better global exploration. The w-value increases adaptively in later iterations, and the osprey individual gradually switches from exploring other locations to exploiting its own neighborhood, improving the algorithm’s local exploitation capability. Equation (13) shows the improved osprey position update formula.

X_{i,j}^{P1} = w \cdot X_{i,j} + r_{i,j} \cdot (SF_{i,j} - I_{i,j} \cdot X_{i,j}), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D (13)
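A short sketch of the adaptive weight (Equation (12)) and the weighted first-stage update (Equation (13)) is given below; the helper names and the boundary clipping via Equation (5) are illustrative assumptions.

```python
import numpy as np

def adaptive_weight(t, T):
    # w grows from near 0 at t = 0 to 1 at t = T, shifting the search from
    # global exploration towards local exploitation (Eq. (12)).
    return (np.exp(t / T) - 1.0) / (np.e - 1.0)

def weighted_explore_update(x, fish, t, T, lb, ub, rng):
    D = x.size
    w = adaptive_weight(t, T)
    r = rng.random(D)
    I = rng.integers(1, 3, size=D)        # random integer, 1 or 2
    x_new = w * x + r * (fish - I * x)    # Eq. (13)
    return np.clip(x_new, lb, ub)         # boundary handling as in Eq. (5)
```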

3.3. Cauchy Variation Strategy

In order to improve the quality of individuals, increase population diversity, and prevent the algorithm from falling into local optima, this paper incorporates the Cauchy variation strategy into the second stage of the osprey population position update. The probability density function of the one-dimensional standard Cauchy distribution is shown in Equation (14).

f(x) = \dfrac{1}{\pi} \cdot \dfrac{1}{x^2 + 1}, \quad -\infty < x < +\infty (14)

Figure 2 shows the probability density curves of the Gaussian and Cauchy distributions. From Figure 2, the peak of the Cauchy distribution at the origin is lower than that of the one-dimensional Gaussian distribution, while the tails of the Cauchy distribution are flatter and approach zero more slowly than those of the Gaussian distribution. Thus, the Cauchy distribution can generate larger perturbations. In this paper, the algorithm’s position update formula introduces the Cauchy variation, which enhances the diversity of the osprey population and improves the algorithm’s global optimization capability.

Figure 2. One-dimensional standard Cauchy and Gaussian distribution probability density curves.

In the second phase of the IOOA’s population position update, each iteration compares the current osprey’s fitness value with the population’s average fitness value. When the osprey’s fitness value is lower than the population average, the current osprey is in an aggregated state; at this point, the perturbation power of the Cauchy operator is exploited to increase the diversity of the osprey population. When the osprey’s fitness value is higher than the population average fitness value, the original OOA position update method is used. Equation (15) shows the improved position update formula.

X_{i,j}^{P2} =
\begin{cases}
X_{best} + X_{best} \cdot \mathrm{cauchy}(0,1), & F_i < F_{avg} \\
X_{i,j} + \dfrac{lb_j + r_{i,j} \cdot (ub_j - lb_j)}{t}, & F_i \ge F_{avg}
\end{cases} (15)

where cauchy(0,1) denotes a random number drawn from the standard Cauchy distribution and F_{avg} denotes the population’s average fitness value.
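A minimal sketch of the Cauchy-variation branch of the second-stage update (Equation (15)) follows; the standard Cauchy draw uses NumPy’s standard_cauchy, and the function name is illustrative.

```python
import numpy as np

def exploit_update_cauchy(x, f_i, f_avg, x_best, t, lb, ub, rng):
    D = x.size
    if f_i < f_avg:
        # Better-than-average (aggregated) osprey: perturb around the best
        # individual with a heavy-tailed Cauchy step to preserve diversity.
        x_new = x_best + x_best * rng.standard_cauchy(D)
    else:
        # Otherwise keep the original OOA second-stage move (Eq. (7)).
        r = rng.random(D)
        x_new = x + (lb + r * (ub - lb)) / t
    return np.clip(x_new, lb, ub)
```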

3.4. Integration of the Sparrow Search Algorithm Warner Mechanism

In order to better balance the OOA’s global exploration and local exploitation capabilities, the Warner mechanism of the sparrow search algorithm [21] is incorporated into the second-stage position update of the osprey population. Equation (16) shows the improved osprey position update formula.

X_{i,j}^{P2} =
\begin{cases}
X_{best} + \beta \cdot \left| X_{i,j} - X_{best} \right|, & F_i > F_g \\
X_{i,j} + K \cdot \dfrac{\left| X_{i,j} - X_{worst} \right|}{F_i - F_w + \varepsilon}, & F_i = F_g
\end{cases} (16)

where F_i is the current osprey’s fitness value; F_g and F_w are the current global best and worst fitness values; X_{worst} is the current global worst position; β is a random number drawn from a normal distribution; K is a uniform random number in the range [−1, 1]; and ε is a very small constant that avoids division by zero. When F_i > F_g, the osprey is located at the edge of the population; this position is not suitable for feeding and the osprey is easily attacked by its natural enemies. When F_i = F_g, the osprey at the population center senses danger and needs to move closer to other ospreys to reduce the risk of being attacked.
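The sketch below illustrates the Warner update of Equation (16); the absolute-value form and the normal draw for β follow the sparrow search algorithm’s original definition, and eps is a small constant assumed here to guard the division.

```python
import numpy as np

def warner_update(x, f_i, f_best, f_worst, x_best, x_worst, lb, ub, rng, eps=1e-50):
    D = x.size
    if f_i > f_best:
        # Edge of the population: jump towards the current best position.
        beta = rng.standard_normal(D)
        x_new = x_best + beta * np.abs(x - x_best)
    else:
        # At (or tied with) the best: move relative to the worst individual.
        K = rng.uniform(-1.0, 1.0)
        x_new = x + K * np.abs(x - x_worst) / (f_i - f_worst + eps)
    return np.clip(x_new, lb, ub)
```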

3.5. Overall Flow of the IOOA

Step 1: Set the population size N, the maximum number of iterations T, the problem dimension D, and the boundary conditions lb_j and ub_j.

Step 2: Use Fuch mapping to initialize the osprey population and calculate the osprey population fitness values.

Step 3: Calculate the first stage position according to Equation (13).

Step 4: Update Xi according to Equation (6).

Step 5: Calculate the second stage position according to Equation (15).

Step 6: Update Xi according to Equation (9) and calculate the worst fitness value and its corresponding position.

Step 7: Calculate the second stage position according to Equation (16).

Step 8: Update Xi according to Equation (9).

Step 9: Judge whether the maximum number of iterations has been reached; if so, proceed to the next step, and otherwise return to Step 3.

Step 10: The procedure ends with the output of the optimal solution.
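Putting the pieces together, the following compact Python sketch follows Steps 1–10 for a minimization problem on a box [lb, ub]^D. It is a simplified reading of the flow above, not the authors’ MATLAB implementation; the rescaling of the Fuch value to [0, 1] and the absolute-value form of the Warner step are assumptions carried over from the earlier sketches.

```python
import numpy as np

def iooa(f, D, lb, ub, N=30, T=1000, seed=0):
    rng = np.random.default_rng(seed)

    # Steps 1-2: Fuch chaotic initialization (Eqs. (10)-(11), rescaled to [0, 1]).
    y, seq = 0.3, np.empty(N * D)
    for k in range(N * D):
        y = np.cos(1.0 / (y * y))
        if y == 0.0:                                   # avoid the map's forbidden point
            y = 1e-6
        seq[k] = y
    X = lb + (seq.reshape(N, D) + 1.0) / 2.0 * (ub - lb)
    F = np.array([f(x) for x in X])

    for t in range(1, T + 1):
        w = (np.exp(t / T) - 1.0) / (np.e - 1.0)       # Eq. (12)
        best, worst = np.argmin(F), np.argmax(F)       # per-iteration snapshot
        for i in range(N):
            # Steps 3-4: weighted exploration towards a better osprey (Eq. (13)).
            better = np.flatnonzero(F < F[i])
            fish = X[rng.choice(better)] if better.size else X[best]
            r, I = rng.random(D), rng.integers(1, 3, size=D)
            x1 = np.clip(w * X[i] + r * (fish - I * X[i]), lb, ub)
            f1 = f(x1)
            if f1 < F[i]:
                X[i], F[i] = x1, f1

            # Steps 5-6: Cauchy-variation exploitation (Eq. (15)).
            if F[i] < F.mean():
                x2 = X[best] + X[best] * rng.standard_cauchy(D)
            else:
                x2 = X[i] + (lb + rng.random(D) * (ub - lb)) / t
            x2 = np.clip(x2, lb, ub)
            f2 = f(x2)
            if f2 < F[i]:
                X[i], F[i] = x2, f2

            # Steps 7-8: Warner mechanism of the sparrow search algorithm (Eq. (16)).
            if F[i] > F[best]:
                x3 = X[best] + rng.standard_normal(D) * np.abs(X[i] - X[best])
            else:
                K = rng.uniform(-1.0, 1.0)
                x3 = X[i] + K * np.abs(X[i] - X[worst]) / (F[i] - F[worst] + 1e-50)
            x3 = np.clip(x3, lb, ub)
            f3 = f(x3)
            if f3 < F[i]:
                X[i], F[i] = x3, f3

    k = np.argmin(F)
    return X[k], F[k]

# Example: 30-dimensional Sphere function (F1), with a shorter run than in Section 4.
x_best, f_best = iooa(lambda x: float(np.sum(x ** 2)), D=30, lb=-100.0, ub=100.0, T=200)
```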

3.6. Time Complexity Analysis

Time complexity is an important metric for evaluating an algorithm’s solution speed, so this paper analyzes the IOOA’s time complexity. Let the osprey population size be N, the maximum number of iterations T, and the problem dimension D. In the OOA, population initialization takes O(N × D), and each of the two phases of the position update process takes O(N × D × T), giving a total time complexity of O(N × D × (1 + 2T)). For the IOOA, population initialization also takes O(N × D), and the two phases of the position update process with the introduced strategies still take O(N × D × T) each, giving the same total time complexity of O(N × D × (1 + 2T)). Thus, the IOOA equals the OOA in time complexity; the improvements neither increase the overall complexity nor add computational burden.

4. Simulation Experiments and Result Analysis

4.1. Experimental Environment and Test Functions

The simulation environment of this paper is the Windows 11 (64-bit) operating system with an Intel(R) Core(TM) i5-12500H CPU (Intel, Santa Clara, CA, USA) at a 3.10 GHz main frequency and 16 GB RAM, and the programs are implemented in MATLAB R2022a. This paper selects 10 benchmark test functions and 15 functions of the CEC2017 test set for optimization-seeking tests. Table 1 shows the benchmark test functions, where F1–F7 are single-peak functions and F8–F10 are multi-peak functions, and Table 2 shows the CEC2017 test functions. To verify the superior performance of the IOOA, Golden Jackal Optimization (GJO) [22], the Subtraction-Average-Based Optimizer (SABO) [23], Sand Cat Swarm Optimization (SCSO) [24], the Pelican Optimization Algorithm (POA) [25], the Sine Cosine Algorithm (SCA) [26] and the Osprey Optimization Algorithm (OOA) [9] are compared with the IOOA. Table 3 gives each algorithm’s parameter settings.

Table 1.

Benchmark functions.

Function Range Min
F1(Sphere) [−100, 100] 0
F2(Schwefel 2.22) [−10, 10] 0
F3(Schwefel 1.2) [−100, 100] 0
F4(Schwefel 2.21) [−100, 100] 0
F5(Rosenbrock) [−30, 30] 0
F6(Step) [−100, 100] 0
F7(Quartic) [−1.28, 1.28] 0
F8(Ackley) [−32, 32] 0
F9(Penalized 1.1) [−50, 50] 0
F10(Penalized 1.2) [−50, 50] 0

Table 2.

CEC2017 test functions.

Function Range Min
F11(CEC-1) [−100, 100] 100
F12(CEC-3) [−100, 100] 300
F13(CEC-4) [−100, 100] 400
F14(CEC-8) [−100, 100] 800
F15(CEC-11) [−100, 100] 1100
F16(CEC-12) [−100, 100] 1200
F17(CEC-13) [−100, 100] 1300
F18(CEC-15) [−100, 100] 1500
F19(CEC-19) [−100, 100] 1900
F20(CEC-22) [−100, 100] 2200
F21(CEC-25) [−100, 100] 2500
F22(CEC-26) [−100, 100] 2600
F23(CEC-28) [−100, 100] 2800
F24(CEC-29) [−100, 100] 2900
F25(CEC-30) [−100, 100] 3000

Table 3.

Parameter settings.

Algorithm Parameters
IOOA r ∈ [0, 1], I = 1 or 2, w ∈ [0, 1]
GJO c1 = 1.5, r ∈ [0, 1], μ ∈ [0, 1], v ∈ [0, 1], β = 1.5
SABO v = 1 or 2, r ∈ [0, 1]
SCSO sM = 2
POA I = 1 or 2, R = 0.2
SCA a = 2, r2 ∈ [0, 2π], r3 ∈ [0, 2], r4 ∈ [0, 1]
OOA r ∈ [0, 1], I = 1 or 2

4.2. Convergence Curve Comparison Analysis

The convergence curves clearly show the convergence accuracy and speed of each algorithm and the algorithms’ ability to jump out of local extremes. To ensure experimental fairness, each algorithm’s population size is set to 30, the maximum number of iterations to 1000, and the dimension to 30, and each algorithm is run independently 30 times to reduce the influence of randomness. Figure 3 gives the convergence curves of the IOOA and the other six optimization algorithms on the test functions F1−F10. In each plot, the horizontal coordinate represents the iteration number, the vertical coordinate represents the function’s average fitness value, and the convergence curve represents the average fitness value found by the algorithm at the current iteration.

Figure 3. Benchmark test function convergence curves.

From Figure 3a–g, it can be seen that the IOOA has better optimization accuracy and convergence speed compared to the other six algorithms for the single-peaked functions. When solving F1−F4, the IOOA improves the optimization accuracy by about 260 orders of magnitude compared to the SCA, and by about 200 orders of magnitude compared to the GJO. When solving F1 and F3, the IOOA’s optimization accuracy is improved by about 100 orders of magnitude compared to the SCSO and POA, and the optimal value is found in about 120 iterations. When solving F2 and F4, the IOOA’s optimization accuracy is improved by about 200 orders of magnitude compared to the SCSO and POA, and the optimal value can be found in about 250 iterations. For F1−F4, although the optimization accuracy of the IOOA is the same as that of the OOA, the convergence speed of the IOOA is faster, which is a significant advantage. When solving F5 and F6, the IOOA shows a clear ability to leap out of local optima, and its optimization accuracy is better than that of the other six algorithms.

From Figure 3h–j, for F8, the IOOA converges faster compared to the SABO and SCSO, and has higher optimization accuracy compared to the GJO, POA, SCA and OOA. For F9, the IOOA has significantly better optimization accuracy than the other six algorithms and is less likely to fall into local optima. For F10, the IOOA has a significant advantage in terms of higher search accuracy and faster convergence.

In summary, the IOOA has better convergence speed and optimization accuracy when solving the benchmark test functions. It can effectively prevent falling into local optima, and the optimization performance improves significantly, which illustrates the IOOA’s effectiveness and superiority.

4.3. Optimization Accuracy Comparison

In this section, the IOOA is tested against the other six comparison algorithms on the 10 benchmark functions in Table 1 with dimensions of 30, 50 and 100. Each algorithm’s population size is set to 30, the maximum number of iterations is 1000, and each algorithm is run independently 30 times. This paper selects the mean and standard deviation of the optimization results as the evaluation metrics: the mean reflects the optimization accuracy and the standard deviation reflects the optimization stability. At the same time, this paper performs the Friedman test, in which each algorithm’s results are ranked and algorithms with the same result are given the average rank, as sketched after Table 6. In Table 4, Table 5 and Table 6, Rank-Count is the sum of the rankings, Ave-Rank is the average ranking, and Overall-Rank is the final ranking on the benchmark functions.
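For reference, the Friedman-style ranking reported as Rank-Count, Ave-Rank and Overall-Rank can be reproduced as sketched below; the two rows of mean results are illustrative placeholders, not values copied from the tables.

```python
import numpy as np
from scipy.stats import rankdata

# Rows = test functions, columns = algorithms (GJO, SABO, SCSO, POA, SCA, OOA, IOOA);
# the numbers here are illustrative placeholders, not values from Tables 4-6.
mean_results = np.array([
    [2.5e-111, 0.0, 4.5e-229, 3.0e-207, 1.8e2, 0.0, 0.0],
    [2.8e1, 2.8e1, 2.8e1, 2.8e1, 9.4e2, 7.2e-3, 2.5e-10],
])

ranks = np.vstack([rankdata(row, method="average") for row in mean_results])
rank_count = ranks.sum(axis=0)                       # "Rank-Count"
ave_rank = ranks.mean(axis=0)                        # "Ave-Rank"
overall_rank = rankdata(ave_rank, method="ordinal")  # "Overall-Rank"
print(rank_count, ave_rank, overall_rank)
```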

Table 4.

Benchmark function optimization results (D = 30).

Function Index GJO SABO SCSO POA SCA OOA IOOA
F1 Mean 2.51×10111 0 4.52×10229 2.98×10207 1.81×102 0 0
Std 1.21×10110 0 0 0 6.91×102 0 0
Rank 6 2 4 5 7 2 2
F2 Mean 1.52×1065 6.44×10224 7.29×10119 2.52×10108 9.54×106 0 0
Std 3.87×1065 0 3.97×10118 1.11×10107 1.42×105 0 0
Rank 6 3 4 5 7 1.5 1.5
F3 Mean 1.45×1037 3.62×1087 2.21×10197 1.21×10211 3.77×103 0 0
Std 5.98×1037 1.80×1086 0 0 2.63×103 0 0
Rank 6 5 4 3 7 1.5 1.5
F4 Mean 4.21×1033 1.61×10155 2.28×10101 5.38×10105 2.38×101 0 0
Std 1.11×1032 2.93×10155 6.06×10101 2.91×10104 1.12×101 0 0
Rank 6 3 5 4 7 1.5 1.5
F5 Mean 2.78×101 2.82×101 2.80×101 2.76×101 9.38×102 7.18×103 2.53×1010
Std 7.46×101 5.54×101 6.99×101 9.49×101 3.52×103 2.37×102 7.74×1010
Rank 4 6 5 3 7 2 1
F6 Mean 2.62 1.97 1.64 2.59 4.55 7.00×108 2.89×1014
Std 4.81×101 5.22×101 5.38×101 5.41×101 4.92×101 2.24×107 7.38×1014
Rank 5 4 3 6 7 2 1
F7 Mean 2.37×104 6.33×105 4.80×105 1.01×104 3.11×102 2.80×105 2.65×105
Std 1.37×104 4.88×105 5.28×105 6.72×105 3.15×102 2.31×105 2.12×105
Rank 6 4 3 5 7 2 1
F8 Mean 4.23×1015 4.44×1016 4.44×1016 3.05×103 1.08×101 4.00×1015 4.44×1016
Std 9.01×1016 0 0 1.60×1015 9.57 0 0
Rank 6 2 2 4 7 5 2
F9 Mean 2.18×101 1.31×101 7.53×102 1.70×101 2.31 1.26×1013 1.49×1014
Std 1.19×101 5.20×102 4.13×102 5.51×102 2.65 3.36×1013 7.61×1014
Rank 6 4 3 5 7 1 2
F10 Mean 1.66 2.76 2.31 2.74 1.50×103 7.33×104 2.00×1011
Std 2.41×101 4.83×101 3.87×101 3.57×101 7.70×103 2.79×103 8.89×1011
Rank 3 6 4 5 7 2 1
Rank-Count 54 39 37 45 70 20.5 14.5
Ave-Rank 5.4 3.9 3.7 4.5 7.0 2.05 1.45
Overall-Rank 6 4 3 5 7 2 1

Table 5.

Benchmark function optimization results (D = 50).

Function Index GJO SABO SCSO POA SCA OOA IOOA
F1 Mean 7.35×1084 0 4.64×10221 3.80×10208 1.27×102 0 0
Std 2.31×1083 0 0 0 3.27×102 0 0
Rank 6 2 4 5 7 2 2
F2 Mean 8.76×1051 5.04×10227 8.24×10118 7.32×10104 1.83×102 0 0
Std 2.96×1050 0 2.24×10117 4.01×10103 6.62×102 0 0
Rank 6 3 4 5 7 1.5 1.5
F3 Mean 2.27×1022 3.69×1042 8.35×10188 1.23×10207 3.75×104 0 0
Std 1.15×1021 2.02×1041 0 0 1.26×104 0 0
Rank 6 5 4 3 7 1.5 1.5
F4 Mean 9.01×1016 2.47×10152 3.88×1099 5.41×10104 6.14×101 0 0
Std 4.93×1015 5.36×10152 2.02×1098 2.83×10103 7.05 0 0
Rank 6 3 5 4 7 1.5 1.5
F5 Mean 4.77×101 4.83×101 4.83×101 4.81×101 1.39×106 7.49×103 3.65×1010
Std 8.02×101 4.54×101 5.94×101 6.91×101 1.99×106 1.94×102 1.95×109
Rank 3 5 6 4 7 2 1
F6 Mean 6.02 5.32 4.80 5.32 1.22×102 3.80×107 9.21×109
Std 6.71×101 6.25×101 9.31×101 8.00×101 3.48×102 6.76×107 2.39×108
Rank 6 4 3 5 7 2 1
F7 Mean 4.15×104 7.75×105 1.07×104 1.01×104 5.40×101 3.45×105 2.30×105
Std 5.57×104 5.95×105 1.04×104 6.65×105 7.46×101 2.91×105 1.98×105
Rank 6 3 4 5 7 2 1
F8 Mean 6.84×1015 4.44×1016 4.44×1016 3.52×1015 1.66×101 4.00×1015 4.44×1016
Std 1.45×1015 0 0 1.23×1015 7.26 0 0
Rank 6 2 2 4 7 5 2
F9 Mean 3.97×101 2.74×101 1.74×101 2.46×101 2.90×106 2.21×108 6.38×1012
Std 8.69×102 6.44×102 7.45×102 5.77×102 5.50×106 5.08×108 1.74×1011
Rank 6 5 3 4 7 2 1
F10 Mean 3.49 4.97 4.62 4.91 3.76×106 4.01×103 1.54×1011
Std 2.27×101 2.94×102 1.64×101 2.28×101 6.55×106 1.61×102 6.77×1011
Rank 3 6 4 5 7 2 1
Rank-Count 54 38 39 44 70 21.5 13.5
Ave-Rank 5.4 3.8 3.9 4.4 7.0 2.15 1.35
Overall-Rank 6 3 4 5 7 2 1

Table 6.

Benchmark function optimization results (D = 100).

Function Index GJO SABO SCSO POA SCA OOA IOOA
F1 Mean 3.27×1060 0 1.66×10213 1.33×10208 5.42×103 0 0
Std 5.18×1060 0 0 0 3.72×103 0 0
Rank 6 2 4 5 7 2 2
F2 Mean 4.55×1037 1.83×10231 4.06×10106 2.53×10111 2.43 0 0
Std 4.25×1037 0 1.91×10105 1.16×10110 3.11 0 0
Rank 6 3 5 4 7 1.5 1.5
F3 Mean 2.63×104 1.17×1017 2.39×10183 1.00×10204 2.12×105 0 0
Std 1.42×103 6.33×1017 0 0 4.35×104 0 0
Rank 6 5 4 3 7 1.5 1.5
F4 Mean 1.22 7.42×10148 1.76×1097 1.94×10105 8.55×101 0 0
Std 4.26 3.67×10147 6.42×1097 6.44×10105 2.97 0 0
Rank 6 3 5 4 7 1.5 1.5
F5 Mean 9.80×101 9.85×101 9.85×101 9.83×101 6.34×107 2.42×101 8.46×109
Std 7.40×101 2.34×101 3.48×101 5.12×101 4.07×107 9.00×101 2.39×108
Rank 4 5 6 3 7 2 1
F6 Mean 1.71×101 1.47×101 1.31×101 1.37×101 5.20×103 1.10×104 6.64×108
Std 8.06×101 9.81×101 1.47 1.15 3.6×103 2.67×104 1.87×107
Rank 6 5 3 4 7 2 1
F7 Mean 4.63×104 7.21×105 9.12×105 9.19×105 6.50×101 3.68×105 2.51×105
Std 2.99×104 5.51×105 9.41×105 5.67×105 4.38×101 2.81×105 2.66×105
Rank 6 3 4 5 7 2 1
F8 Mean 9.21×1015 4.44×1016 4.44×1016 3.17×1015 1.97×101 4.00×1015 4.44×1016
Std 2.91×1015 0 0 1.53×1015 3.28 0 0
Rank 6 2 2 4 7 5 2
F9 Mean 5.96×101 4.43×101 2.96×101 3.86×101 1.49×108 1.15×106 2.43×1012
Std 5.14×102 8.76×102 5.84×102 5.69×102 1.02×108 1.42×106 1.03×1011
Rank 6 5 3 4 7 2 1
F10 Mean 8.54 9.95 9.66 9.95 2.82×108 1.15×103 1.31×1011
Std 2.78×101 7.07×103 9.90×102 6.18×103 1.98×108 3.38×103 4.05×1011
Rank 3 6 4 5 7 2 1
Rank-Count 55 39 40 41 70 21.5 13.5
Ave-Rank 5.5 3.9 4.0 4.1 7.0 2.15 1.35
Overall-Rank 6 3 4 5 7 2 1

Table 4 presents the test results for dimension 30, where the IOOA achieves the theoretical optimum when solving F1−F4, with both the mean and standard deviation equal to 0. Although the theoretical optimum is not reached for F5−F10, the results are better than those of the other algorithms, with the smallest mean and standard deviation, which indicates that the IOOA has good optimization accuracy and stability. Meanwhile, the Friedman test results in Table 4 show that the IOOA has the best Ave-Rank and ranks first in Overall-Rank.

From Table 5 and Table 6, at dimension 50, the optimization accuracy of the IOOA for F6 and F9 is reduced compared to 30 dimensions, but it is still optimal compared with the other algorithms; for the other functions, there is almost no degradation in optimization accuracy. At dimension 100, the IOOA’s optimization accuracy decreases for F5, F6 and F9, but its results are still the best, and for the remaining functions there is almost no change in optimization accuracy compared to 30 dimensions. The Friedman test results in Table 5 and Table 6 show that the IOOA is still ranked first. As the dimension increases, the IOOA’s optimization performance does not degrade noticeably, so the IOOA can also handle higher-dimensional problems.

In summary, compared with the other six algorithms, the IOOA has better optimization ability and stability when solving the benchmark test functions, and has obvious advantages.

4.4. Wilcoxon Rank-Sum Test

In order to comprehensively assess the IOOA’s reliability and superiority, this paper uses the Wilcoxon rank-sum test to further verify whether there are significant differences between the algorithms’ experimental results. The IOOA’s results on the benchmark functions in the three dimensions are compared with those of the six other algorithms using the Wilcoxon rank-sum test. The significance level is 5%; when the p-value is less than 0.05, the difference between the two algorithms is significant, and otherwise it is not. The experimental results are shown in Table 7, where “NAN” indicates that the algorithm obtained the same results as the IOOA, and “+”, “−” and “=” indicate that the IOOA’s performance is superior, inferior and equal to that of the comparison algorithm, respectively.
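A minimal sketch of one such pairwise comparison is shown below, assuming two arrays of 30 independent best-fitness values (one per run) for the IOOA and a comparison algorithm; SciPy’s ranksums implements the two-sided Wilcoxon rank-sum test, and the sample data are placeholders.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
iooa_runs = rng.normal(1e-10, 1e-11, size=30)    # placeholder results
other_runs = rng.normal(1e-3, 1e-4, size=30)     # placeholder results

stat, p_value = ranksums(iooa_runs, other_runs)
significant = p_value < 0.05                     # 5% significance level
print(f"p = {p_value:.3e}, significant: {significant}")
```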

Table 7.

Wilcoxon rank-sum test results.

dim Function GJO SABO SCSO POA SCA OOA
D = 30 F1 1.21×10^-12 NAN 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F2 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F3 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F4 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F5 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 1.96×10^-11
F6 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.25×10^-6
F7 2.61×10^-10 3.37×10^-4 1.91×10^-3 1.86×10^-6 3.02×10^-11 7.62×10^-3
F8 4.16×10^-14 NAN NAN 5.36×10^-9 1.21×10^-12 1.69×10^-14
F9 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.21×10^-3
F10 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 7.41×10^-4
+/=/− 10/0/0 8/2/0 9/1/0 10/0/0 10/0/0 6/4/0
D = 50 F1 1.21×10^-12 NAN 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F2 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F3 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F4 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F5 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 2.80×10^-11
F6 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 2.56×10^-8
F7 4.50×10^-11 4.74×10^-6 7.22×10^-6 5.09×10^-8 3.02×10^-11 8.24×10^-2
F8 1.55×10^-13 NAN NAN 1.97×10^-11 1.21×10^-12 1.69×10^-14
F9 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 7.38×10^-11
F10 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 5.38×10^-11
+/=/− 10/0/0 8/2/0 9/1/0 10/0/0 10/0/0 6/4/0
D = 100 F1 1.21×10^-12 NAN 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F2 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F3 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F4 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 1.21×10^-12 NAN
F5 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 2.80×10^-11
F6 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.22×10^-11
F7 4.50×10^-11 4.86×10^-3 3.51×10^-2 2.28×10^-5 3.02×10^-11 3.92×10^-2
F8 2.54×10^-13 NAN NAN 1.47×10^-9 1.21×10^-12 1.69×10^-14
F9 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 2.92×10^-11
F10 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 3.02×10^-11 2.40×10^-11
+/=/− 10/0/0 8/2/0 9/1/0 10/0/0 10/0/0 6/4/0

From Table 7, there is a significant difference between the IOOA and GJO, POA, SCA for F1–F10 at dimensions 30, 50 and 100. For F1–F4, the difference between the IOOA and OOA was not significant. For F1 and F8, the difference between the IOOA and SABO was not significant. For F8, the variability between the IOOA and SCSO is not significant. In general, the IOOA has significant advantages over other algorithms, further confirming statistically the IOOA’s validity and reliability.

4.5. The IOOA Solves CEC2017 Test Functions

In order to further verify the IOOA’s superiority and robustness and improve the reliability of the test results, the IOOA is compared with the GJO, SABO, SCSO, POA, SCA and OOA on the 15 CEC2017 test functions selected in Table 2. The experimental parameters are a population size of 30, a maximum of 1000 iterations, and a dimension of 30, and each algorithm is run independently 30 times. The specific optimization results are shown in Table 8.

Table 8.

CEC2017 test function optimization results.

Function Index GJO SABO SCSO POA SCA OOA IOOA
F11 Mean 1.23×1010 9.84×109 8.33×109 1.48×1010 1.88×1010 5.77×1010 4.51×103
Std 3.71×109 3.41×109 3.65×109 5.65×109 2.81×109 8.54×109 4.97×103
Rank 4 3 2 5 6 7 1
F12 Mean 5.59×104 5.45×104 5.05×104 3.28×104 7.30×104 9.01×104 3.58×104
Std 1.05×104 1.02×104 1.10×104 8.77×103 1.40×104 7.84×103 7.08×103
Rank 5 4 3 1 6 7 2
F13 Mean 1.26×103 1.75×103 9.93×102 1.93×103 2.45×103 1.52×104 4.93×102
Std 5.85×102 1.27×103 4.91×102 1.20×103 7.26×102 3.70×103 2.39×101
Rank 3 4 2 5 6 7 1
F14 Mean 9.75×102 1.08×103 9.97×102 9.87×102 1.09×103 1.14×103 9.66×102
Std 4.39×101 3.33×101 2.87×101 2.24×101 2.08×101 2.55×101 2.09×101
Rank 2 5 4 3 6 7 1
F15 Mean 3.05×103 4.88×103 2.72×103 2.07×103 3.12×103 9.08×103 1.27×103
Std 1.46×103 1.47×103 1.57×103 6.98×103 7.10×102 2.20×103 7.51×101
Rank 4 6 3 2 5 7 1
F16 Mean 1.09×109 5.87×108 1.92×108 1.34×109 2.13×109 1.26×1010 2.81×106
Std 9.31×108 4.48×108 3.29×108 1.28×109 6.06×108 3.60×109 4.53×106
Rank 4 3 2 5 6 7 1
F17 Mean 3.31×108 6.14×107 7.62×107 1.32×107 8.33×108 9.69×109 2.68×104
Std 7.49×108 1.31×108 1.50×108 3.39×107 4.26×108 4.45×109 1.85×104
Rank 5 4 3 2 6 7 1
F18 Mean 6.79×106 6.44×105 9.91×105 4.11×104 4.56×107 6.40×108 5.36×103
Std 1.36×107 9.01×105 3.63×106 2.42×104 2.62×107 6.56×108 4.69×103
Rank 5 4 3 2 6 7 1
F19 Mean 2.43×107 5.57×106 7.41×106 1.35×106 7.57×107 7.63×108 8.49×103
Std 4.30×107 6.98×106 2.66×107 1.79×106 3.28×107 5.72×108 6.78×103
Rank 5 4 3 2 6 7 1
F20 Mean 5.62×103 3.82×103 4.46×103 4.89×103 9.45×103 9.43×103 2.81×103
Std 2.39×103 1.55×103 1.79×103 1.65×103 1.70×103 9.31×102 6.74×102
Rank 5 2 3 4 7 6 1
F21 Mean 3.23×103 3.27×103 3.13×103 3.30×103 3.46×103 5.02×103 2.90×103
Std 1.52×102 1.37×102 1.04×102 1.86×102 1.78×102 4.53×102 1.64×101
Rank 3 5 2 4 6 7 1
F22 Mean 6.01×103 8.05×103 6.66×103 7.03×103 7.49×103 1.17×104 4.18×103
Std 6.70×102 8.29×102 1.00×103 1.52×103 1.92×103 1.05×103 3.33×102
Rank 2 6 3 4 5 7 1
F23 Mean 3.93×103 4.03×103 3.62×103 4.05×103 4.17×103 7.48×103 3.22×103
Std 3.30×102 3.72×102 1.75×102 4.00×102 2.41×102 8.04×102 1.93×101
Rank 3 5 2 4 6 7 1
F24 Mean 4.26×103 5.69×103 4.51×103 4.51×103 5.13×103 9.05×103 4.03×103
Std 2.61×102 5.74×102 3.60×102 3.17×102 2.53×102 3.59×103 2.45×102
Rank 2 6 4 3 5 7 1
F25 Mean 3.59×107 2.71×107 1.60×107 1.21×107 1.55×108 1.70×109 3.52×105
Std 2.84×107 2.36×107 1.45×107 7.33×106 7.10×107 1.26×109 8.52×105
Rank 5 4 3 2 6 7 1
Rank-Count 57 65 42 48 88 104 16
Ave-Rank 3.80 4.33 2.80 3.20 5.87 6.93 1.07
Overall-Rank 4 5 2 3 6 7 1

The high complexity of the IEEE CEC2017 test functions makes it difficult to find the objective functions’ global optima, so the algorithms can only be compared against each other for relative optimality. From Table 8, the IOOA achieves the best average value for F11 and F13–F25; for F12, although it is inferior to the POA, it achieves the best result among the remaining algorithms. The IOOA therefore has the best overall optimization-seeking effect and is more advantageous than the other algorithms, which shows that it has a better optimization-seeking ability and further verifies its superiority. The IOOA achieves the minimum standard deviation for F11–F13 and F15–F25; for F14 its standard deviation is slightly larger than that of the SCA, with only a small numerical difference between the two algorithms. Overall, the IOOA achieves the best standard deviation on most of the functions and shows good stability, which further indicates that the IOOA is robust. Meanwhile, the Friedman test results in Table 8 show that the IOOA has the best Ave-Rank and ranks first in Overall-Rank.

5. Engineering Design Problem

In order to further validate the IOOA’s feasibility and effectiveness, this paper selects the three-bar truss design engineering optimization problem and compares the IOOA with the six optimization algorithms used in Section 4 for validation.

The purpose of the three-bar truss design problem is to minimize the volume of the three-bar truss by adjusting the cross-sectional areas (x1, x2) while keeping the stress (σ) in each bar within the constraint conditions. Its structural design is shown in Figure 4. Equation (17) shows the mathematical model of the problem.

\begin{aligned}
\min f(x) &= (2\sqrt{2}\,x_1 + x_2) \cdot L \\
\text{s.t.} \quad g_1(x) &= \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2} P - \sigma \le 0 \\
g_2(x) &= \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2} P - \sigma \le 0 \\
g_3(x) &= \frac{1}{\sqrt{2}\,x_2 + x_1} P - \sigma \le 0 \\
& 0 \le x_1, x_2 \le 1, \quad L = 100\ \mathrm{cm}, \; P = 2\ \mathrm{kN/cm^2}, \; \sigma = 2\ \mathrm{kN/cm^2}
\end{aligned} (17)
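A small sketch of this model is given below: the volume objective, the three stress constraints, and a simple penalty wrapper that a metaheuristic such as the IOOA could minimize directly. The constants follow Equation (17); the penalty weight rho is an assumption, and the printed check uses the IOOA solution reported later in Table 9.

```python
import numpy as np

L, P, SIGMA = 100.0, 2.0, 2.0

def truss_volume(x):
    x1, x2 = x
    return (2.0 * np.sqrt(2.0) * x1 + x2) * L

def constraint_violations(x):
    x1, x2 = x
    denom = np.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (np.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (np.sqrt(2.0) * x2 + x1) * P - SIGMA
    return np.array([g1, g2, g3])

def penalized_objective(x, rho=1e6):
    # Volume plus a quadratic penalty on any violated constraint (assumed weight).
    g = constraint_violations(x)
    return truss_volume(x) + rho * np.sum(np.maximum(g, 0.0) ** 2)

# Checking the IOOA solution reported in Table 9 (values are rounded there):
x_iooa = np.array([0.788764, 0.407998])
print(truss_volume(x_iooa), constraint_violations(x_iooa))
```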

Figure 4. A schematic model of the three-bar truss.

Table 9 shows the results of solving the three-bar truss design problem. From Table 9, the IOOA obtains the best value of 263.895849; the volume of the three-bar truss is minimized when the cross-sectional areas are 0.788764 and 0.407998, respectively. This indicates that the IOOA has better optimization-seeking performance in solving the three-bar truss design problem and has certain advantages.

Table 9.

Experimental results of different algorithms for solving the three-bar truss design problem.

Algorithm Parameters Best
x1 x2
GJO 0.792652 0.397171 263.913033
SABO 0.782390 0.427893 264.082722
SCSO 0.784455 0.420320 263.909287
POA 0.411249 0.411249 263.896682
SCA 0.796310 0.391495 264.379949
OOA 0.747933 0.537742 265.321646
IOOA 0.788764 0.407998 263.895849

6. Conclusions

Based on the OOA, this paper proposes an improved osprey optimization algorithm (IOOA) with multi-strategy fusion. The IOOA introduces Fuch chaotic mapping to increase the population diversity at the early stage of algorithm iteration; introduces an adaptive weighting factor to improve the convergence speed and accuracy in the exploration stage of the algorithm; incorporates the Cauchy variation operator to prevent osprey individuals from falling into local optima in the algorithm’s exploitation stage; and finally incorporates the Warner mechanism of the sparrow search algorithm to coordinate the algorithm’s local exploitation and global exploration capabilities. The IOOA is verified to have better optimization performance and stronger robustness on 10 benchmark test functions, together with the Friedman ranking test and the Wilcoxon rank-sum test. Then, the IOOA’s superiority is further verified by solving 15 CEC2017 test functions. Finally, the IOOA is applied to the three-bar truss design problem to verify its applicability and reliability in solving practical engineering problems. In the future, the IOOA can be further applied to other fields.

Author Contributions

Conceptualization, W.L. and J.H.; methodology, W.L., J.H. and X.W.; software, W.L. and J.H.; validation, J.H. and X.W.; formal analysis, W.L. and J.H.; investigation, J.H. and X.W.; resources, J.H. and X.W.; data curation, W.L. and J.H.; writing—original draft preparation, W.L. and J.H.; writing—review and editing, W.L., J.H. and X.W.; visualization, J.H. and X.W.; supervision, W.L.; project administration, W.L.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

This article does not contain any research involving humans or animals.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

This work was funded by the Shaanxi Province Natural Science Basic Research Program Project (2024JC-YBMS-572), partially funded by Yan’an University Graduate Education Innovation Program Project (YCX2023032, YCX2023033, YCX2024094, YCX2024097) and the “14th Five Year Plan Medium and Long Term Major Scientific Research Project” (2021ZCQ015) of Yan’an University.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

1. Xiao P.F., Ju H.H., Li Q.D., Chen F.F. Task planning of space-robot clusters based on modified differential evolution algorithm. Appl. Sci. 2020;10:5000. doi: 10.3390/app10145000.
2. Wang H.Y., Zhang Z.Q., Yang C.J., Wang Z.Q. Improved ABC Algorithm for Multi-Objective Fire Fighting and Rescue Mission Planning for UAV Swarms. Fire Sci. Technol. 2023;42:838–843.
3. Lu S.Q. Multi-objective workshop scheduling of marine production based on improved ant colony algorithm. J. Coast. Res. 2020;107:222–225. doi: 10.2112/JCR-SI107-056.1.
4. Ding J.Y., Chang D.F., Qian Z.H., Mu H. Ship Curved Segment Workshop Scheduling Based on Improved Differential Evolution Algorithm. Mar. Eng. 2023;45:116–124.
5. Venkateswarlu B., Vaisakh K. Adaptive unified differential evolution algorithm for optimal operation of power systems with static security, transient stability and SSSC device. Int. J. Innov. Technol. Explor. Eng. 2019;9:2238–2253. doi: 10.35940/ijitee.A4809.119119.
6. Xia A.M., Wu X.D. Optimized scheduling of power system based on improved multi-objective Haitaru swarm algorithm. Electr. Meas. Instrum. 2023;60:77–82.
7. Zou A.W., Wang L., Li W.-M., Cai J.C., Wang H., Tan T.L. Mobile robot path planning using improved mayfly optimization algorithm and dynamic window approach. J. Supercomput. 2023;79:8340–8367. doi: 10.1007/s11227-022-04998-z.
8. Cai J., Zhong Z.Y. Improved Ant Colony Algorithm for Food Delivery Robot Path Planning. J. Intell. Syst. 2023;19:370–380.
9. Dehghani M., Trojovskỳ P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023;8:1126450. doi: 10.3389/fmech.2022.1126450.
10. Yu X.W., Huang L.P., Liu Y., Zhang K., Li P., Li Y. WSN node location based on beetle antennae search to improve the gray wolf algorithm. Wirel. Netw. 2022;28:539–549. doi: 10.1007/s11276-021-02875-w.
11. Yu F., Tong L., Xia X.W. Adjustable driving force based particle swarm optimization algorithm. Inf. Sci. 2022;609:60–78. doi: 10.1016/j.ins.2022.07.067.
12. Xiao Y.N., Sun X., Guo Y.L., Li S.P., Zhang Y.P., Wang Y.W. An Improved Gorilla Troops Optimizer Based on Lens Opposition-Based Learning and Adaptive β-Hill Climbing for Global Optimization. CMES-Comput. Model. Eng. Sci. 2022;131:815–850. doi: 10.32604/cmes.2022.019198.
13. Wang Y.F., Liao R.H., Liang E.H., Sun J.W. Improved whale optimization algorithm based on siege mechanism. Control. Decis. Mak. 2023;38:2773–2782.
14. Zhang L., Liu S., Gao W.X., Guo Y.X. Elite Inverse Golden Sine Ocean Predator Algorithm. Comput. Eng. Sci. 2023;45:355–362.
15. Yue X.F., Zhang H.B., Yu H.Y. A hybrid grasshopper optimization algorithm with invasive weed for global optimization. IEEE Access. 2020;8:5928–5960. doi: 10.1109/ACCESS.2019.2963679.
16. Gu W.J., Yu Y.G., Hu W. Artificial bee colony algorithm-based parameter estimation of fractional-order chaotic system with time delay. IEEE/CAA J. Autom. Sin. 2017;4:107–113. doi: 10.1109/JAS.2017.7510340.
17. Lei W.L., Jia K., Zhang X., Lei Y. Research on Chaotic Chimp Optimization Algorithm Based on Adaptive Tuning and Its Optimization for Engineering Application. J. Sens. 2023;2023:5567629. doi: 10.1155/2023/5567629.
18. Zhang Y., Liu P.T. Research on reactive power optimization based on hybrid osprey optimization algorithm. Energies. 2023;16:7101. doi: 10.3390/en16207101.
19. Yuan Y.L., Yang Q.K., Ren J.J., Mu X.K., Wang Z.X., Shen Q.L., Zhao W. Attack-defense strategy assisted osprey optimization algorithm for PEMFC parameters identification. Renew. Energy. 2024;225:120211. doi: 10.1016/j.renene.2024.120211.
20. Chen X.Y., Zhang M.J., Wang D.G. Improved Moby Dick Optimization Algorithm Based on Fuch Mapping and Applications. Comput. Eng. Sci. 2024;46:1482–1492.
21. Xue J.K., Shen B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020;8:22–34. doi: 10.1080/21642583.2019.1708830.
22. Chopra N., Ansari M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022;198:116924. doi: 10.1016/j.eswa.2022.116924.
23. Trojovskỳ P., Dehghani M. Subtraction-average-based optimizer: A new swarm-inspired metaheuristic algorithm for solving optimization problems. Biomimetics. 2023;8:149. doi: 10.3390/biomimetics8020149.
24. Seyyedabbasi A., Kiani F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023;39:2627–2651. doi: 10.1007/s00366-022-01604-x.
25. Trojovskỳ P., Dehghani M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors. 2022;22:855. doi: 10.3390/s22030855.
26. Mirjalili S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016;96:120–133. doi: 10.1016/j.knosys.2015.12.022.
