Biomimetics. 2025 Oct 19;10(10):708. doi: 10.3390/biomimetics10100708

Exemplar Learning and Memory Retrieval-Based Particle Swarm Optimization Algorithm with Engineering Applications

Shuying Zhang 1, Xiaohong Hu 1, Yue Gao 1,*, Minghan Gao 2, Yufei Zhang 3
Editor: Yongquan Zhou
PMCID: PMC12563707  PMID: 41149238

Abstract

Particle swarm optimization (PSO) is a bio-inspired stochastic optimization algorithm that simulates the foraging behavior of birds. Despite its simplicity and efficiency, PSO often suffers from premature convergence and a poor balance between exploration and exploitation. These drawbacks mainly arise from its limited learning sources and rigid position update scheme. To address these issues, this paper proposes an enhanced PSO framework, termed Exemplar Learning and Memory Retrieval-Based Particle Swarm Optimization (EMPSO). The design of EMPSO is inspired by the learning, memory, and adaptation mechanisms observed in biological collectives. It integrates three complementary strategies to improve swarm intelligence. First, an elite exemplar learning mechanism aggregates the positional information of top-performing particles to construct a more reliable guidance vector. Second, a memory recall strategy retains exemplars that have recently contributed to global improvements and reuses them probabilistically with a recency bias, thus enabling effective knowledge inheritance. Third, an adaptive position update scheme assigns exploration- or exploitation-oriented behaviors to particles based on fitness ranking, promoting dynamic role differentiation within the swarm. Comprehensive experiments on the CEC2017 and CEC2022 benchmark suites demonstrate that EMPSO consistently outperforms six representative algorithms. Furthermore, applications to three engineering design problems and the optimal PMU placement task verify its robustness and practical effectiveness.

Keywords: swarm intelligence, particle swarm optimization, engineering optimization

1. Introduction

Swarm intelligence (SI) algorithms originate from bio-inspired studies of collective behaviors observed in nature [1,2,3]. By mimicking the cooperation, competition, and information-sharing mechanisms exhibited by biological groups in decentralized and self-organized environments, SI algorithms provide adaptive and robust frameworks for solving complex optimization problems. Over the past few decades, numerous SI algorithms have been developed by emulating various biological phenomena, including Particle Swarm Optimization (PSO) [4], Ant Colony Optimization (ACO) [5], the Whale Optimization Algorithm (WOA) [6], and the White Shark Optimizer (WSO) [7].

In contrast to gradient-based approaches that typically explore the neighborhood of local optima [8], heuristic algorithms are capable of conducting a more global search across the decision space [8,9]. Owing to their conceptual simplicity, scalability, and inherent robustness [10], SI algorithms have been widely applied in engineering design [11,12,13], machine learning [14,15,16], robotics [17,18,19], and industrial optimization [20,21,22], becoming an indispensable component of modern computational intelligence.

Among various SI paradigms, PSO stands as one of the most representative and influential bio-inspired algorithms [23]. Originating from the simulation of bird flocking and fish schooling behaviors, PSO models each candidate solution as a particle that iteratively adjusts its velocity and position through self-experience learning and social collaboration learning. These mechanisms enable particles to collectively approach promising regions in the search space. Due to its ease of implementation and strong capability of rapidly converging to high-quality solutions, PSO has been successfully applied to a wide spectrum of scientific and engineering optimization problems [24,25,26].

Nevertheless, the canonical PSO exhibits inherent shortcomings. It often suffers from premature convergence, losing population diversity too early and becoming trapped in local optima [27]. Furthermore, PSO struggles to maintain an appropriate balance between exploration and exploitation [28], particularly in deceptive or high-dimensional landscapes where both sustained diversity and fine-grained exploitation are essential.

To address these shortcomings, a large body of research has focused on improving PSO. Proposed strategies include adaptive parameter control [29,30] (e.g., inertia weight adjustment), hybridization with other metaheuristics [31,32], and alternative swarm topologies for information dissemination [33,34]. While such variants have achieved notable improvements, they remain limited by rigid update mechanisms, inefficient knowledge reuse, and inadequate responsiveness to the evolving fitness landscape. Specifically, traditional PSO relies heavily on current personal and global best positions as its sole learning sources, without mechanisms to preserve and exploit historical improvement trajectories or to adaptively differentiate learning strategies according to particle performance. As a result, the algorithm often exhibits unstable search dynamics and struggles to sustain an effective exploration–exploitation balance across different stages of optimization.

To address these limitations, this paper introduces a new PSO variant inspired by biological cognition and behavioral adaptation, termed Exemplar Learning and Memory Retrieval-Based Particle Swarm Optimization (EMPSO). The algorithm draws inspiration from the way individuals in biological collectives continuously optimize their behavior through experience accumulation, memory retrieval, and behavioral adjustment. EMPSO incorporates these cognitive elements into a unified framework consisting of three synergistic mechanisms: Elite Exemplar Learning (EEL), Superior Memory Recall (SMR), and Adaptive Position Update (APU).

Unlike conventional methods that rely solely on current best experiences, EMPSO explicitly models and reuses historical knowledge gained from successful improvements, providing a more stable and representative guidance direction for swarm evolution. The integration of EEL, SMR, and APU generates complementary search dynamics: EEL enhances convergence reliability by aggregating elite exemplars; SMR preserves valuable convergence trajectories and probabilistically reintroduces them to stimulate novel solution discovery; and APU adaptively allocates exploration or exploitation behaviors based on particle fitness ranking, enabling dynamic role differentiation within the swarm. Together, these mechanisms improve swarm adaptability across different optimization phases, enhance population diversity, and effectively alleviate premature stagnation.

The main contributions of this work can be summarized as follows:

  • EEL: A mechanism that constructs a representative guidance vector by aggregating the positions of top-performing particles in a fitness-proportional manner, thereby improving swarm stability and convergence robustness.

  • SMR: A dynamic memory bank that archives recent elite exemplars responsible for global improvements and reintroduces them with recency bias, enabling the algorithm to retain valuable knowledge and inspire exploration in promising directions.

  • APU: A fitness-aware learning strategy that dynamically allocates exploitation-oriented or exploration-oriented updates according to relative particle fitness, yielding self-organized search dynamics that adapt to the current stage of optimization.

The remainder of this paper is organized as follows: Section 2 reviews related work on PSO and its advanced variants. Section 3 details the proposed EMPSO framework. Section 4 presents the experimental design and compares EMPSO with six representative SI algorithms on the CEC2017 and CEC2022 benchmark suites. Section 5 evaluates EMPSO on three real-world engineering design problems and the optimal PMU placement problem. Finally, Section 6 concludes the paper and outlines future research directions.

2. Preliminaries and Related Work

2.1. Particle Swarm Optimization

PSO is a population-based stochastic optimization algorithm originally proposed by Kennedy and Eberhart in 1995 [4], inspired by the social foraging and information-sharing behaviors observed in bird flocks and fish schools. Owing to its simplicity, low computational cost, and ability to efficiently exploit collective intelligence, PSO has become one of the most widely studied and applied metaheuristics in computational intelligence. Over the past decades, it has been successfully extended and tailored to solve a broad spectrum of optimization problems, ranging from continuous and combinatorial optimization to constrained, dynamic, and multi-objective scenarios.

In PSO, a population of candidate solutions, termed particles, jointly explores the search space while exchanging information to guide the search process. Each particle i at iteration t is represented by a position vector x_i^t and a velocity vector v_i^t, which, respectively, denote a candidate solution and its search direction. Formally, for a D-dimensional problem with N particles,

x_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,D}^t), \qquad v_i^t = (v_{i,1}^t, v_{i,2}^t, \ldots, v_{i,D}^t).

The search dynamics of PSO rely on two knowledge sources: the best position discovered by particle i itself (personal best, pbest), and the best position discovered by the entire swarm (global best, gbest). The velocity and position updates are as follows:

v_i^{t+1} = \omega v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (gbest - x_i^t),
x_i^{t+1} = x_i^t + v_i^{t+1},

where ω is the inertia weight that balances global exploration and local exploitation, c_1 and c_2 are cognitive and social acceleration coefficients controlling the relative influence of self-experience and social learning, and r_1, r_2 ∼ U(0,1) are stochastic factors that introduce randomness into the search process. To prevent excessive exploration in later iterations, ω is often linearly decreased with the iteration index t,

\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{t_{\max}} \cdot t,

where ωmax and ωmin denote the upper and lower bounds of the inertia weight, respectively. Variants of PSO further refine this mechanism by adopting nonlinear decay, time-varying acceleration coefficients, or adaptive control rules to enhance search efficiency.
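To make these update rules concrete, the following minimal NumPy sketch performs one iteration of the canonical update with the linearly decreasing inertia weight; the row-per-particle matrix layout is an implementation choice rather than part of the formulation (the paper's own experiments were run in MATLAB).

```python
import numpy as np

def pso_step(x, v, pbest, gbest, t, t_max,
             w_max=0.9, w_min=0.4, c1=1.5, c2=1.5):
    """One canonical PSO iteration: velocity and position update with a
    linearly decreasing inertia weight."""
    n, d = x.shape
    w = w_max - (w_max - w_min) * t / t_max      # linear decay of omega
    r1 = np.random.rand(n, d)                    # per-dimension randomness
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```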

Although PSO has achieved remarkable success in diverse fields such as engineering design, feature selection, scheduling, and multi-objective optimization, it still suffers from several intrinsic weaknesses. The canonical PSO utilizes only two learning sources—the personal best and global best—lacking mechanisms for higher-order memory, structured knowledge sharing, or collaborative learning among subgroups. As a result, swarm diversity often decreases rapidly, causing premature convergence and stagnation in local optima, particularly in high-dimensional or multimodal problems. Moreover, PSO struggles to maintain a stable exploration–exploitation balance, frequently oscillating between excessive exploration of unpromising regions and over-exploitation near local attractors. These limitations have motivated extensive research into enhanced PSO variants, which aim to achieve more reliable and scalable performance across complex optimization scenarios.

2.2. Existing Improvements of PSO

Over the past two decades, a wide spectrum of PSO variants has been proposed to mitigate the aforementioned limitations. These efforts can be broadly categorized into three directions: (i) methods with improved topology structures, (ii) methods with dynamic parameter adjustment, and (iii) methods combining different optimization techniques. In the following subsections, we provide a systematic review of representative approaches in each category.

2.2.1. Methods with Improved Topology Structures

Topology-improved PSO methods aim to strengthen population diversity and information propagation by redesigning the communication structure among particles. Li et al. [33] introduced the pyramid PSO, where particles are organized into a hierarchical pyramid according to their fitness levels. Particles within the same layer determine superiority through pairwise comparison: inferior particles collaborate with local winners, while superior particles interact with upper-layer elites. This hierarchical design significantly enhances population diversity and improves convergence behavior. Building upon this work, Jin et al. [35] further proposed an adaptive constraint-handling strategy based on the pyramid structure, which enhances the exploration capability of the swarm. Hu et al. [36] developed a centroid-based PSO, where a population centroid is generated at each iteration and used to replace the global best solution as the guidance source, thereby providing richer knowledge for position updates. Radwan et al. [37] proposed a three-stage framework, where the problem is decomposed into subproblems, solved via a cooperative multi-swarm approach, and complemented with a reaction mechanism to mitigate the diversity loss introduced by decomposition. Zhou et al. [38] developed a sub-swarm region-based solution selection mechanism to maintain diversity. By defining two neighborhood radii around global and local optima discovered during evolution, the swarm is encouraged to distribute more uniformly across the search space. Hong et al. [39] proposed an ensemble PSO framework that integrates adaptive covariance matrix learning, inertia-weighted PSO, and a sample-pool replacement mechanism, which collectively enhance convergence efficiency and robustness.

2.2.2. Methods with Dynamic Parameter Adjustment

Dynamic parameter adaptation methods seek to balance exploration and exploitation by adjusting algorithmic parameters according to evolutionary progress or particle behavior. Liu et al. [29] proposed a weighting strategy based on the Sigmoid function, which takes into account both the distance from a particle to the global best position and the distance from the particle to its personal best position. This strategy enables adaptive adjustment of acceleration coefficients, thereby enhancing the convergence speed. Minh et al. [40] proposed the variable velocity strategy PSO, which introduces a new velocity term controlled by a linearly decreasing function, enabling more flexible position updates. Song et al. [41] introduced a fractional-order adaptive velocity parameter into PSO, which perturbs the swarm based on evolutionary states to improve the ability to escape local optima and explore the search space more thoroughly. Moazen et al. [42] proposed PSO-ELPM, where a cube-root inverse operation is employed to ensure smooth weight distribution, combined with an exponential mutation operator that adaptively adjusts mutation probability based on current and historical swarm states, thereby achieving a better balance between exploration and exploitation. Similarly, Meng et al. [43] proposed a novel PSO variant with an adaptive regulation of paradigm proportions and contraction coefficients during iterations. Moreover, a full-information search mechanism based on generational best solutions is introduced to help particles escape local optima and achieve improved global performance. Li et al. [44] proposed a novel variable weight coefficient based on evolutionary states to balance exploration and exploitation. Furthermore, multiple trial positions were used for each particle, and promising positions were selected by simultaneously leveraging the superiority and uncertainty of the ensemble. This approach ensures that the particle swarm maintains a large exploration space while controlling the convergence time.

2.2.3. Methods Combining Different Optimization Techniques

Hybridization approaches aim to integrate complementary mechanisms from other optimization algorithms into PSO, thereby compensating for the inherent limitations of a single strategy. Li et al. [45] proposed the multi-component PSO algorithm, where four distinct PSO variants are incorporated into a strategy pool. A leader-learning mechanism is employed to facilitate knowledge sharing and guide global convergence, enabling the swarm to exploit the complementary advantages of different PSO paradigms in a cooperative manner. Şenel et al. [46] proposed a hybrid algorithm combining PSO and the Grey Wolf Optimizer (GWO), in which a fraction of PSO particles are probabilistically replaced by GWO-enhanced solutions. This hybridization effectively leverages the exploitation strength of PSO and the exploration ability of GWO. Other studies have integrated PSO with evolutionary operators. Liu et al. [47] proposed the integration of evolutionary game theory into the research of PSO algorithms, combining four classical variants of PSO algorithms with different exploration and exploitation capabilities. The population is divided into two subpopulations, with more advantageous strategies achieving a higher execution probability in the larger subpopulation. A hybrid ML–TSO approach [48] combined transient search optimization with learning-based modeling to minimize power generation costs in both classical and probabilistic optimal power flow problems. An ANN–PSO model [49] was embedded within a probabilistic machine learning framework to improve the prediction accuracy of soil desiccation cracking under environmental uncertainty. A PSO–ant lion optimization hybrid [50] was applied to optimize a probabilistic neural network for wind speed forecasting, achieving faster convergence and higher prediction accuracy than conventional models.

2.2.4. Discussion

In summary, existing improvements of PSO have made remarkable progress in addressing premature convergence and enhancing the exploration–exploitation balance. Topology-based strategies mainly enrich the communication structure to preserve diversity, parameter adaptation methods enable responsive adjustments to evolutionary states, and hybridization approaches introduce external mechanisms to mitigate PSO’s inherent weaknesses. Probabilistic machine learning methods explicitly model uncertainty and inter-sample covariance to guide search decisions. However, most of these approaches tend to emphasize one aspect (e.g., diversity preservation or convergence acceleration) while lacking a unified design that systematically integrates multiple knowledge sources and adaptive mechanisms.

To this end, we argue that further progress requires a more holistic framework that simultaneously leverages elite information, historical knowledge, and adaptive search dynamics. Motivated by this perspective, our proposed algorithm introduces three synergistic mechanisms: (1) EEL, which aggregates knowledge from multiple high-quality particles to prevent over-reliance on a single leader; (2) SMR, which reuses superior historical exemplars to reintroduce valuable search trajectories when stagnation occurs; and (3) APU, which allocates distinct update rules to different subgroups of particles according to their fitness levels. Together, these mechanisms provide complementary strengths, offering a more stable balance between exploration and exploitation across diverse problem scenarios.

3. Proposed Algorithm

In this section, we present the proposed variant of PSO, named EMPSO, which integrates three synergistic mechanisms: EEL, SMR, and APU. These mechanisms are designed to enrich the knowledge sources available to the swarm, enhance the utilization of valuable information, and dynamically balance exploration and exploitation during the search process. The overall flowchart of EMPSO is shown in Figure 1.

Figure 1. Overall flowchart of EMPSO.

3.1. Elite Exemplar Learning (EEL)

In the canonical PSO, each particle updates its position primarily based on its personal best solution pbest_i and the global best solution gbest. Such a limited knowledge source often causes particles to be overly attracted to the global optimum, leading to rapid aggregation around a single region of the search space. Consequently, diversity is reduced, which may result in premature convergence and suboptimal performance.

To address this issue, we propose an EEL mechanism that exploits multiple elite particles to generate a knowledge exemplar suitable for minimization tasks. Specifically, at iteration t, all particles are ranked in ascending order according to their objective values, since a smaller value indicates better fitness. Let \mathcal{E}^t denote the set of the top-M elite particles selected from the population P^t, where M = 0.3N. Each elite particle x_j \in \mathcal{E}^t is assigned a weight inversely proportional to its objective value, and the elite exemplar E^t is constructed as follows:

E^t = \sum_{x_j \in \mathcal{E}^t} w_j \cdot x_j, \qquad w_j = \frac{1/f(x_j)}{\sum_{x_k \in \mathcal{E}^t} 1/f(x_k)}, (1)

where f(x_j) denotes the objective (fitness) value of particle x_j, and a smaller f(x_j) corresponds to higher quality. By aggregating knowledge from multiple elites through inverse-value weighting, E^t provides a more representative and balanced exemplar to guide the swarm toward regions with lower objective values, thereby enhancing search directionality and maintaining population diversity.
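As an illustration of Equation (1), the sketch below selects the top 30% of particles and forms the inverse-fitness-weighted exemplar; the small eps guard and the assumption of positive objective values are additions not specified in the paper.

```python
import numpy as np

def elite_exemplar(x, fitness, elite_frac=0.3, eps=1e-12):
    """Aggregate the top-M particles (smallest objective values) into one
    exemplar using inverse-fitness weights, as in Equation (1)."""
    fitness = np.asarray(fitness, dtype=float)
    m = max(1, int(elite_frac * len(fitness)))
    elite = np.argsort(fitness)[:m]          # ascending order: best first
    inv = 1.0 / (fitness[elite] + eps)       # eps guards against f(x) = 0
    w = inv / inv.sum()                      # normalized weights of Eq. (1)
    return w @ x[elite]                      # weighted average position
```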

3.2. Superior Memory Recall (SMR)

Most existing PSO variants rely exclusively on the current pbest and gbest, while neglecting historical knowledge. This lack of memory may cause valuable optimization trajectories to be forgotten, limiting the algorithm’s ability to escape from stagnation.

To reinforce knowledge reusability, we introduce the SMR mechanism. At each iteration t, if the best solution x_{best}^t obtained in the current population improves upon the historical global best gbest, the elite exemplar E^t is considered superior knowledge and stored in a memory archive \mathcal{M}. Formally,

\text{if } f(x_{best}^t) < f(gbest), \quad \mathcal{M} \leftarrow \mathcal{M} \cup \{E^t\}. (2)

When a particle’s search ability is limited (i.e., its fitness level does not reach the elite subset of the population), the exemplars E_m \in \mathcal{M} are probabilistically selected to guide its position update. The probability of selecting E_m is defined as Equation (3):

P(E_m) = \frac{\exp(-\lambda (t - t_m))}{\sum_{E_k \in \mathcal{M}} \exp(-\lambda (t - t_k))}, (3)

where t_m denotes the iteration when E_m was stored, and λ is a decay parameter controlling the preference for recent memory. In this way, more recent exemplars are more likely to be reused, enabling the swarm to reintroduce successful historical trajectories and escape local optima.
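A minimal sketch of the archive update in Equation (2), assuming the archive \mathcal{M} is kept as a plain list of (exemplar, storage iteration) pairs:

```python
def update_memory(memory, exemplar, t, f_best_now, f_gbest):
    """Append the current elite exemplar (with its timestamp) when the
    population best improves on the historical gbest (Equation (2))."""
    if f_best_now < f_gbest:
        memory.append((exemplar, t))   # stores the pair (E^t, t_m)
    return memory
```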

3.3. Adaptive Position Update (APU)

In standard PSO, all particles follow the same update strategy, which may lead to an imbalance between exploration and exploitation. To address this limitation, we propose an APU mechanism that dynamically assigns heterogeneous search behaviors to particles based on their relative fitness ranking.

At iteration t, the population Pt is divided into the following three subgroups:

  • High-fitness particles: focused on exploitation, emphasizing fine-tuning around the best-known regions.

  • Medium-fitness particles: assigned a hybrid strategy, balancing exploration and exploitation.

  • Low-fitness particles: dedicated to exploration, encouraging escape from inferior regions.

Formally, the velocity update rule for particle i is defined as Equation (4),

v_i^{t+1} =
\begin{cases}
\omega v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (E^t - x_i^t), & i \in \text{high-fitness group}, \\
\omega v_i^t + c_1 r_1 (E_{m_1} - x_i^t) + c_2 r_2 (gbest - x_i^t), & i \in \text{medium-fitness group}, \\
\omega v_i^t + c_1 r_1 (E_{m_2} - x_i^t) + c_2 r_2 (E_{m_3} - x_i^t), & i \in \text{low-fitness group},
\end{cases} (4)

where ω is the inertia weight, c_1 and c_2 are acceleration coefficients, and r_1, r_2 ∼ U(0,1). Here, E^t denotes the current elite exemplar, while E_{m_1}, E_{m_2}, and E_{m_3} represent three exemplars retrieved from memory. These exemplars are independently sampled with replacement according to the probabilities computed by Equation (3), so they may coincide or differ across draws. The pseudocode for exemplar sampling is shown in Algorithm 1.

Algorithm 1: SampleExemplar_WithReplacement (M,t,λ)
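As a textual counterpart to the pseudocode, the sketch below performs one recency-biased draw according to Equation (3), under the same list-of-pairs archive layout assumed earlier; subtracting the maximum exponent before normalization is a numerical-stability detail added here.

```python
import numpy as np

def sample_exemplar(memory, t, lam=0.2):
    """One recency-biased draw (with replacement) from the archive:
    P(E_m) is proportional to exp(-lam * (t - t_m)), Equation (3)."""
    ages = np.array([t - t_m for _, t_m in memory], dtype=float)
    logits = -lam * ages
    p = np.exp(logits - logits.max())   # shift by max for numerical stability
    p /= p.sum()
    idx = np.random.choice(len(memory), p=p)
    return memory[idx][0]
```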

The adaptive mechanism assigns complementary search roles to different particle subgroups, maintaining a dynamic balance between exploration and exploitation. High-quality particles use the elite exemplar to reduce premature convergence, medium-quality particles exploit memorized exemplars to escape local optima, and low-quality particles follow multiple stored exemplars to quickly reach promising regions. This design improves search efficiency and accelerates convergence.
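A sketch of the group-wise update in Equation (4) follows, reusing sample_exemplar from above; splitting the swarm into equal thirds is an assumption, since the paper does not state the exact group proportions, and a non-empty archive is assumed (before the first improvement, a fallback such as gbest would be needed).

```python
import numpy as np

def apu_velocity(x, v, pbest, gbest, fitness, E_t, memory, t,
                 w=0.7, c1=1.5, c2=1.5, lam=0.2):
    """Group-wise velocity update of Equation (4): the best third exploits
    E^t, the middle third mixes a recalled exemplar with gbest, and the
    worst third follows two recalled exemplars."""
    n, d = x.shape
    order = np.argsort(fitness)                  # ascending: best first
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v_new = np.empty_like(v)
    for rank, i in enumerate(order):
        if rank < n // 3:                        # high-fitness group
            a, b = pbest[i], E_t
        elif rank < 2 * n // 3:                  # medium-fitness group
            a, b = sample_exemplar(memory, t, lam), gbest
        else:                                    # low-fitness group
            a = sample_exemplar(memory, t, lam)
            b = sample_exemplar(memory, t, lam)
        v_new[i] = w * v[i] + c1 * r1[i] * (a - x[i]) + c2 * r2[i] * (b - x[i])
    return v_new
```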

3.4. Pseudocode of the Proposed Algorithm

The overall procedure of the proposed PSO variant is summarized in Algorithm 2.

Algorithm 2: Proposed PSO with EEL, SMR, and APU
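Combining the helper sketches above, a compact skeleton of the overall loop might look as follows; seeding the archive with the initial gbest, the boundary clipping, and the exact update ordering are implementation assumptions rather than details taken from Algorithm 2.

```python
import numpy as np

def empso(f, lb, ub, n=100, t_max=1000, lam=0.2):
    """Skeleton of the overall EMPSO loop, combining EEL (Eq. (1)),
    SMR (Eqs. (2)-(3)), and APU (Eq. (4))."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    x = lb + np.random.rand(n, d) * (ub - lb)
    v = np.zeros((n, d))
    fit = np.apply_along_axis(f, 1, x)
    pbest, pfit = x.copy(), fit.copy()
    g = pfit.argmin()
    gbest, gfit = pbest[g].copy(), pfit[g]
    memory = [(gbest.copy(), 0)]                 # seed so recall never fails
    for t in range(1, t_max + 1):
        w = 0.9 - 0.5 * t / t_max                # omega decays 0.9 -> 0.4
        E_t = elite_exemplar(x, fit)             # EEL exemplar
        memory = update_memory(memory, E_t, t, fit.min(), gfit)  # SMR
        v = apu_velocity(x, v, pbest, gbest, fit, E_t, memory, t, w=w, lam=lam)
        x = np.clip(x + v, lb, ub)               # APU step + bound handling
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pfit                      # refresh personal bests
        pbest[better], pfit[better] = x[better], fit[better]
        g = pfit.argmin()
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), pfit[g]
    return gbest, gfit
```

For instance, empso(lambda z: float(np.sum(z**2)), [-100.0]*30, [100.0]*30) minimizes a 30-dimensional sphere function under these assumptions.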

3.5. Complexity Analysis

The proposed algorithm introduces three additional mechanisms on top of the standard PSO framework. The computational cost can be analyzed as follows:

  • EEL requires sorting the population by fitness in O(N log N) time and aggregating the top-M elites. Since M < N, the overall cost is dominated by sorting.

  • SMR involves updating the memory archive and sampling exemplars, which incurs at most O(M) additional cost per iteration.

  • APU modifies the velocity update rules without changing the complexity, i.e., O(ND), where D is the problem dimension.

Therefore, the overall computational complexity per iteration is

O(ND + N \log N). (5)

Since D ≫ log N in most real-world optimization tasks, the additional cost introduced by EEL and SMR is negligible compared to the standard PSO. Meanwhile, the proposed mechanisms significantly enhance the diversity and knowledge exploitation of the swarm, which improves the algorithm’s robustness and convergence behavior.

4. Experiments on the Benchmark Suite

This section presents a comprehensive empirical study to evaluate the performance of the proposed EMPSO algorithm. The experiments are organized into three groups. First, EMPSO is compared with six representative algorithms on the widely used CEC2017 benchmark suite [51] and the more recent CEC2022 benchmark suite [52]. Second, a sensitivity analysis is conducted to investigate the impact of the parameter λ on algorithmic performance. Finally, multiple ablation studies are performed to assess the effectiveness of the three position update strategies.

4.1. Experimental Setup

4.1.1. Computing Platform

All experiments were executed on a workstation equipped with an Intel(R) Xeon(R) CPU E5-2696 v3 and 64 GB of RAM. The operating system was Windows 11 (version 24H2), and the implementation was carried out using MATLAB 2024a.

4.1.2. Parameter Settings

To ensure a fair comparison, all competing algorithms were configured with standard parameter settings commonly adopted in the literature. Specifically, the maximum number of iterations was set to T = 1000 and the population size to N = 100. The remaining algorithm-specific parameters are summarized in Table 1. Since population-based metaheuristics are inherently stochastic, each benchmark function was independently tested 30 times, and the mean and standard deviation of the obtained results were reported to mitigate the influence of randomness.

Table 1.

Parameter settings of EMPSO and the peer algorithms.

Algorithm Other Parameters
EMPSO ω = 0.9 → 0.4, c1 = c2 = 1.5, λ = 0.2
KLDE [53] LR = 0.2, EP = 10, F = 0.5, CR = 0.9
MPSO [54] ω = 0.9 → 0.4, c1 = c2 = 2
AWPSO [29] ω = 0.9 → 0.4, c1 = c2 = 2, a = 0.000035m, b = 0.5, c = 0, d = 1.5
PECSO [17] η = 0.5, α = 1
WOA [6] a decreases linearly: a ∈ [0, 2], b = 1, l = (a₂ − 1)·rand + 1
PSO [4] ω = 0.9 → 0.4, c1 = c2 = 1.5

4.1.3. Benchmark Suites

CEC2017 Benchmark Suite: The CEC2017 benchmark suite consists of 29 continuous optimization test functions designed to comprehensively evaluate algorithmic performance under varying levels of problem complexity. The suite includes unimodal functions (F1–F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30), covering a broad spectrum of search landscape characteristics. Test dimensions are typically set to 30 or 50 to balance computational cost and discriminative power. All functions are shifted and rotated to introduce translation and rotation invariance, thereby making CEC2017 a widely adopted benchmark for assessing the robustness and generalization ability of optimization algorithms.

CEC2022 Benchmark Suite: The CEC2022 benchmark suite comprises 12 single-objective bound-constrained optimization problems that more closely emulate the complexity of real-world applications. Compared with CEC2017, this suite enhances scalability, landscape diversity, and inter-variable dependency, providing a more rigorous test of an algorithm’s performance under nonlinear and highly correlated conditions. It includes unimodal (F1), multimodal (F2–F5), hybrid (F6–F8), and composition functions (F9–F12), all subject to shifting, rotation, and dynamic scaling. These design features ensure that the CEC2022 suite serves as a challenging and realistic platform for benchmarking the stability, adaptability, and convergence efficiency of advanced optimization algorithms.

4.2. Comparative Results on the CEC2017 Benchmark Suite

To thoroughly assess the adaptability and competitiveness of EMPSO in solving complex optimization problems, a comprehensive set of experiments was conducted on the CEC2017 test suite. The numerical results, including mean errors, standard deviations, overall rankings, and total running time, are summarized in Table 2; the convergence trajectories on several representative functions are plotted in Figure 2; and the statistical significance tests together with the final ranking outcomes are reported in Table 3.

Table 2.

Experimental results on the CEC2017 benchmark functions.

Function EMPSO KLDE MPSO AWPSO PECSO WOA PSO
F1 Mean 6.87 ×102 4.74 ×103 3.66 ×108 4.50 ×106 3.60 ×103 3.00 ×108 7.69 ×109
Std 5.36 ×102 4.72 ×103 3.25 ×108 4.94 ×106 1.37 ×103 7.15 ×107 2.96 ×109
Rank 1 3 6 4 2 5 7
F3 Mean 1.04 ×104 8.77 ×104 6.76 ×104 4.99 ×103 5.17 ×103 2.09 ×105 4.53 ×104
Std 2.89 ×103 1.95 ×104 1.78 ×104 2.52 ×103 1.28 ×103 3.52 ×104 1.53 ×104
Rank 3 6 5 1 2 7 4
F4 Mean 4.87 ×102 4.86 ×102 5.54 ×102 6.25 ×102 4.92 ×102 6.42 ×102 1.29 ×103
Std 1.48 3.03 8.55 ×101 5.64 ×101 1.13 ×101 4.63 ×101 4.42 ×102
Rank 2 1 4 5 3 6 7
F5 Mean 5.12 ×102 6.35 ×102 6.46 ×102 6.20 ×102 6.28 ×102 7.75 ×102 6.33 ×102
Std 2.77 4.46 ×101 5.19 ×101 3.60 ×101 3.91 4.30 ×101 2.21 ×101
Rank 1 5 6 2 3 7 4
F6 Mean 6.00 ×102 6.00 ×102 6.06 ×102 6.18 ×102 6.11 ×102 6.69 ×102 6.17 ×102
Std 1.20 ×102 1.08 ×104 1.85 6.11 5.57 ×101 5.83 5.35
Rank 2 1 3 6 4 7 5
F7 Mean 7.45 ×102 8.91 ×102 9.11 ×102 9.12 ×102 9.28 ×102 1.21 ×103 9.21 ×102
Std 2.86 1.29 ×101 3.66 ×101 5.66 ×101 6.07 8.11 ×101 5.48 ×101
Rank 1 2 3 4 6 7 5
F8 Mean 8.12 ×102 9.29 ×102 9.41 ×102 9.21 ×102 9.02 ×102 9.87 ×102 9.24 ×102
Std 1.85 4.25 ×101 4.52 ×101 3.14 ×101 4.00 2.85 ×101 2.68 ×101
Rank 1 5 6 3 2 7 4
F9 Mean 9.01 ×102 9.00 ×102 1.04 ×103 2.11 ×103 2.82 ×103 7.63 ×103 3.66 ×103
Std 3.97 ×101 8.29 ×102 1.41 ×102 8.22 ×102 1.22 ×101 1.76 ×103 1.13 ×103
Rank 2 1 3 4 5 7 6
F10 Mean 3.52 ×103 7.31 ×103 7.61 ×103 4.94 ×103 4.39 ×103 6.35 ×103 4.72 ×103
Std 4.53 ×102 8.62 ×102 8.15 ×102 7.49 ×102 2.04 ×103 5.54 ×102 3.77 ×102
Rank 1 6 7 4 2 5 3
F11 Mean 1.14 ×103 1.18 ×103 1.34 ×103 1.37 ×103 1.23 ×103 3.09 ×103 1.44 ×103
Std 1.83 ×101 3.17 ×101 2.30 ×102 8.24 ×101 5.75 ×101 8.02 ×102 7.67 ×101
Rank 1 2 4 5 3 7 6
F12 Mean 3.16 ×104 5.26 ×104 5.23 ×106 2.58 ×107 2.12 ×106 9.13 ×107 2.36 ×108
Std 1.21 ×104 2.15 ×104 5.70 ×106 3.61 ×107 2.50 ×104 4.94 ×107 2.15 ×108
Rank 1 2 4 5 3 6 7
F13 Mean 8.06 ×103 1.44 ×103 3.26 ×104 3.62 ×104 1.27 ×104 3.65 ×105 1.81 ×106
Std 5.92 ×103 1.85 ×101 6.69 ×104 3.61 ×104 1.20 ×104 1.42 ×105 2.14 ×106
Rank 2 1 4 5 3 6 7
F14 Mean 6.00 ×103 1.48 ×103 3.59 ×105 4.37 ×103 3.42 ×104 6.36 ×105 3.35 ×104
Std 3.40 ×103 6.47 3.79 ×105 7.27 ×103 2.58 ×103 4.56 ×105 2.63 ×104
Rank 3 1 6 2 5 7 4
F15 Mean 1.97 ×103 1.56 ×103 3.27 ×103 1.28 ×104 5.52 ×103 1.34 ×105 3.57 ×104
Std 3.42 ×102 6.64 1.85 ×103 1.37 ×104 1.25 ×103 4.36 ×104 1.81 ×104
Rank 2 1 3 5 4 6 7
F16 Mean 1.95 ×103 2.16 ×103 2.82 ×103 2.52 ×103 2.66 ×103 3.72 ×103 2.74 ×103
Std 1.81 ×102 3.04 ×102 3.85 ×102 2.71 ×102 2.45 ×102 2.42 ×102 2.34 ×102
Rank 1 2 6 3 4 7 5
F17 Mean 1.83 ×103 1.84 ×103 2.21 ×103 2.19 ×103 2.25 ×103 2.49 ×103 2.31 ×103
Std 6.35 ×101 1.15 ×102 2.35 ×102 2.05 ×102 5.12 ×101 1.61 ×102 1.52 ×102
Rank 1 2 4 3 5 7 6
F18 Mean 7.12 ×104 1.88 ×103 8.99 ×105 1.88 ×105 5.19 ×105 3.06 ×106 2.88 ×105
Std 2.46 ×104 5.52 9.12 ×105 3.05 ×105 4.02 ×104 2.22 ×106 1.56 ×105
Rank 2 1 6 3 5 7 4
F19 Mean 3.88 ×103 1.94 ×103 6.14 ×103 1.05 ×104 7.62 ×103 3.99 ×106 5.63 ×105
Std 1.44 ×103 4.45 4.03 ×103 1.27 ×104 1.46 ×103 1.79 ×106 7.35 ×105
Rank 2 1 3 5 4 7 6
F20 Mean 2.13 ×103 2.22 ×103 2.42 ×103 2.41 ×103 2.53 ×103 2.68 ×103 2.38 ×103
Std 6.34 ×101 1.80 ×102 1.92 ×102 1.62 ×102 4.70 ×101 1.21 ×102 1.42 ×102
Rank 1 2 5 4 6 7 3
F21 Mean 2.32 ×103 2.44 ×103 2.44 ×103 2.40 ×103 2.41 ×103 2.56 ×103 2.44 ×103
Std 2.93 3.88 ×101 5.48 ×101 2.96 ×101 4.86 3.33 ×101 2.41 ×101
Rank 1 5 6 2 3 7 4
F22 Mean 2.30 ×103 6.30 ×103 2.76 ×103 2.48 ×103 4.32 ×103 7.13 ×103 5.10 ×103
Std 4.48 ×101 3.35 ×103 1.55 ×103 7.78 ×102 1.68 ×103 1.22 ×103 1.24 ×103
Rank 1 6 3 2 4 7 5
F23 Mean 2.75 ×103 2.75 ×103 2.75 ×103 2.76 ×103 2.83 ×103 3.03 ×103 2.93 ×103
Std 2.16 ×101 5.58 ×101 4.94 ×101 2.91 ×101 1.88 ×101 5.74 ×101 5.39 ×101
Rank 1 3 2 4 5 7 6
F24 Mean 2.90 ×103 2.96 ×103 2.95 ×103 2.91 ×103 3.01 ×103 3.14 ×103 3.12 ×103
Std 1.46 ×101 4.98 ×101 6.20 ×101 3.03 ×101 1.33 ×101 6.90 ×101 4.81 ×101
Rank 1 4 3 2 5 7 6
F25 Mean 2.89 ×103 2.89 ×103 3.01 ×103 2.95 ×103 2.90 ×103 3.03 ×103 3.05 ×103
Std 1.58 ×101 9.78 ×102 3.69 ×101 2.88 ×101 1.63 2.03 ×101 9.33 ×101
Rank 2 1 5 4 3 6 7
F26 Mean 4.33 ×103 4.37 ×103 4.51 ×103 4.37 ×103 5.64 ×103 7.23 ×103 6.19 ×103
Std 1.57 ×102 5.58 ×102 1.11 ×103 1.09 ×103 4.07 ×102 1.06 ×103 6.54 ×102
Rank 1 2 4 3 5 7 6
F27 Mean 3.25 ×103 3.20 ×103 3.26 ×103 3.28 ×103 3.20 ×103 3.35 ×103 3.31 ×103
Std 1.60 ×101 7.64 2.53 ×101 2.39 ×101 1.47 ×101 4.56 ×101 3.13 ×101
Rank 3 1 4 5 2 7 6
F28 Mean 3.20 ×103 3.20 ×103 3.42 ×103 3.34 ×103 3.25 ×103 3.41 ×103 4.03 ×103
Std 3.50 ×101 4.02 ×101 7.14 ×101 5.92 ×101 3.99 ×101 2.45 ×101 4.22 ×102
Rank 1 2 6 4 3 5 7
F29 Mean 3.48 ×103 3.51 ×103 3.76 ×103 3.99 ×103 3.72 ×103 4.77 ×103 3.87 ×103
Std 8.93 ×101 1.02 ×102 2.12 ×102 2.01 ×102 1.31 ×102 2.40 ×102 1.82 ×102
Rank 1 2 4 6 3 7 5
F30 Mean 6.86 ×103 6.36 ×103 9.88 ×104 9.89 ×105 7.10 ×103 1.13 ×107 1.57 ×106
Std 8.74 ×102 5.86 ×102 1.91 ×105 1.15 ×106 1.83 ×103 6.26 ×106 1.08 ×106
Rank 2 1 4 5 3 7 6
Mean Rank 1.52 2.48 4.45 3.79 3.69 6.62 5.45
Final Rank 1 2 5 4 3 7 6
Time Taken (s) 2231.80 48859.42 1935.85 1789.36 1801.12 1207.04 1315.51

Figure 2. Average convergence trends of EMPSO and 6 comparison algorithms on selected CEC2017 functions.

Table 3.

Wilcoxon rank-sum test results of EMPSO against 6 representative algorithms on the CEC2017 benchmark functions (significance level α=0.05).

Function KLDE MPSO AWPSO PECSO WOA PSO
F1 + + + + + +
F3 + + + +
F4 + + + + +
F5 + + + + + +
F6 + + + + +
F7 + + + + + +
F8 + + + + + +
F9 + + + + +
F10 + + + + + +
F11 + + + + + +
F12 + + + + + +
F13 + + + + +
F14 + + + +
F15 + + + + +
F16 + + + + + +
F17 + + + + + +
F18 + + + + +
F19 + + + + +
F20 + + + + + +
F21 + + + + + +
F22 + + + + + +
F23 + + + + + +
F24 + + + + +
F25 - + + + + +
F26 + + + + + +
F27 + + + +
F28 + + + + +
F29 + + + + + +
F30 + + + + +
Better 17 29 26 27 29 29
Similar 2 0 1 0 0 0
Worse 10 0 2 2 0 0

4.2.1. Accuracy Comparison

Table 2 clearly demonstrates the superior accuracy of EMPSO across the CEC2017 benchmarks. EMPSO achieved the best average rank of 1.52, obtaining the best fitness values on 17 of the 29 functions, which is significantly better than all competing algorithms. Among the baselines, KLDE followed with an average rank of 2.48, while PSO and WOA exhibited overall weaker performance. Moreover, EMPSO not only yielded lower mean errors but also exhibited smaller variances, reflecting more stable convergence behavior.

Further analyses across different function categories reveal the consistent superiority of EMPSO. For unimodal functions (F1–F3), which mainly test exploitation capability, EMPSO achieves the best overall performance. This advantage is attributed to the effectiveness of the EEL strategy in conducting fine-grained local searches around the global optimum. For multimodal functions (F4–F10), where escaping from local optima is crucial, EMPSO again outperforms the other algorithms. The results indicate that the SMR mechanism enhances the swarm’s ability to traverse complex landscapes. For hybrid functions (F11–F20), which combine multimodality with separable structures, EMPSO attains the best performance on five test functions. This outcome validates that the APU strategy promotes a well-balanced search behavior between exploration and exploitation. Finally, for the most challenging composite functions (F21–F30) characterized by intricate multilayer structures, EMPSO maintains a clear advantage. It achieves the best results on seven functions. Even when not ranking first, EMPSO consistently remains among the top performers, demonstrating its strong adaptability and robustness across diverse problem landscapes.

4.2.2. Convergence Trend Analysis

To further evaluate the search dynamics of EMPSO, we report the mean convergence curves over 30 independent runs on 14 representative CEC2017 benchmark functions, as shown in Figure 2. Overall, EMPSO demonstrates a clear advantage in convergence accuracy compared to the competing algorithms, achieving superior results on 9 out of the 14 selected functions. These results confirm the effectiveness and reliability of the proposed hierarchical strategy in guiding the search process.

A closer examination reveals that, on several functions such as F1, F5, F7, and F21, EMPSO exhibits an “S”-shaped convergence pattern. This behavior indicates that EMPSO is capable of maintaining steady progress in the early and middle phases while effectively escaping local optima in later stages. Such adaptability can be attributed to the exemplar-based learning mechanism, which leverages historical high-quality solutions to introduce promising knowledge into the swarm when the search stagnates. Consequently, EMPSO achieves more stable accuracy improvements, highlighting its ability to balance exploration and exploitation across diverse problem landscapes.

4.2.3. Statistical Analysis via the Wilcoxon Rank-Sum Test

To further examine the statistical significance of the performance differences between EMPSO and the six representative algorithms, the well-known non-parametric Wilcoxon rank-sum test [55] was employed at a significance level of 0.05. The results are summarized in Table 3, where the symbols “−”, “≈”, and “+” denote that EMPSO performs significantly worse, statistically equivalent, or significantly better than the corresponding algorithm, respectively.
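For reference, each mark can be generated from the 30 per-run final errors of two algorithms with a test of the following form; deciding the direction of a significant difference by the median is an assumption here, as the paper does not state its tie-breaking rule.

```python
import numpy as np
from scipy.stats import ranksums

def significance_mark(errors_a, errors_b, alpha=0.05):
    """Return the Table 3 mark comparing algorithm A against algorithm B,
    using a two-sided Wilcoxon rank-sum test on per-run final errors."""
    _, p = ranksums(errors_a, errors_b)
    if p >= alpha:
        return "≈"                 # no statistically significant difference
    return "+" if np.median(errors_a) < np.median(errors_b) else "−"
```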

From Table 3, it can be observed that EMPSO exhibited statistically significant superiority on the majority of benchmark functions. In particular, EMPSO outperformed MPSO, WOA, and PSO on 29 out of 30 functions, with no instances of inferior performance, demonstrating strong robustness. Against AWPSO and PECSO, EMPSO also achieved 26 or 27 statistically significant wins, with only 2 losses in each case. KLDE’s performance is relatively close, but EMPSO still achieved 17 significant advantages. Overall, the Wilcoxon test further confirms the effectiveness of EMPSO and its clear statistical advantage over the compared algorithms.

4.3. Comparative Results on the CEC2022 Benchmark Suite

To further evaluate the adaptability and competitiveness of EMPSO, we conducted additional tests on 12 benchmark functions from the more recent CEC2022 test suite. The same six representative algorithms mentioned earlier were selected as baseline methods: KLDE, MPSO, AWPSO, PECSO, WOA, and classical PSO. The numerical experimental results, including average error, standard deviation, overall ranking, and runtime, are summarized in Table 4.

Table 4.

Experimental results on the CEC2022 benchmark functions.

Function EMPSO KLDE MPSO AWPSO PECSO WOA PSO
F1 Mean 3.00 ×102 3.64 ×102 4.59 ×102 3.20 ×102 3.02 ×102 4.96 ×103 3.01 ×102
Std 5.71 ×103 9.95 ×101 6.54 ×101 1.21 ×101 1.26 1.80 ×103 1.22
Rank 1 5 6 4 3 7 2
F2 Mean 4.49 ×102 4.49 ×102 4.45 ×102 4.57 ×102 4.29 ×102 4.76 ×102 4.90 ×102
Std 7.65 ×101 1.32 1.15 ×101 4.95 1.96 ×101 1.42 ×101 3.41 ×101
Rank 3 4 2 5 1 6 7
F3 Mean 6.00 ×102 6.00 ×102 6.00 ×102 6.00 ×102 6.01 ×102 6.53 ×102 6.07 ×102
Std 9.57 ×102 9.25 ×106 1.51 ×101 8.14 ×104 1.09 7.42 3.76
Rank 3 1 4 2 5 7 6
F4 Mean 8.06 ×102 8.52 ×102 8.26 ×102 8.33 ×102 8.47 ×102 8.86 ×102 8.51 ×102
Std 1.58 2.85 ×101 6.37 5.59 9.95 1.64 ×101 8.63
Rank 1 6 2 3 4 7 5
F5 Mean 9.00 ×102 9.00 ×102 9.01 ×102 9.12 ×102 1.27 ×103 2.72 ×103 1.06 ×103
Std 1.01 ×1013 2.83 ×102 8.93 ×101 8.39 1.79 ×102 4.98 ×102 1.39 ×102
Rank 1 2 3 4 6 7 5
F6 Mean 2.40 ×103 1.83 ×103 2.81 ×103 4.24 ×103 2.87 ×103 7.67 ×103 8.85 ×103
Std 4.14 ×102 1.23 ×101 7.32 ×102 2.23 ×103 8.52 ×102 3.51 ×103 7.24 ×103
Rank 2 1 3 5 4 6 7
F7 Mean 2.02 ×103 2.04 ×103 2.04 ×103 2.02 ×103 2.05 ×103 2.15 ×103 2.04 ×103
Std 1.86 5.46 6.25 1.94 1.32 ×101 2.86 ×101 9.97
Rank 1 3 4 2 6 7 5
F8 Mean 2.22 ×103 2.23 ×103 2.23 ×103 2.22 ×103 2.23 ×103 2.25 ×103 2.23 ×103
Std 7.30 ×101 1.75 1.91 4.73 ×101 7.61 9.23 4.50
Rank 2 3 4 1 6 7 5
F9 Mean 2.48 ×103 2.48 ×103 2.48 ×103 2.48 ×103 2.47 ×103 2.49 ×103 2.52 ×103
Std 3.27 ×1013 9.27 ×1012 5.54 ×101 4.79 ×101 3.09 8.02 2.44 ×101
Rank 1 2 4 3 5 6 7
F10 Mean 2.53 ×103 2.82 ×103 2.52 ×103 2.53 ×103 2.59 ×103 3.72 ×103 2.76 ×103
Std 5.40 ×101 6.89 ×102 4.15 ×101 6.03 ×101 8.86 ×101 1.02 ×103 4.26 ×102
Rank 2 6 1 3 4 7 5
F11 Mean 2.90 ×103 2.93 ×103 2.94 ×103 2.89 ×103 2.89 ×103 2.97 ×103 3.90 ×103
Std 1.44 ×1012 4.83 ×101 7.60 ×101 5.48 ×101 5.48 ×101 1.26 ×101 4.77 ×102
Rank 3 4 5 1 2 6 7
F12 Mean 2.96 ×103 2.94 ×103 2.96 ×103 2.96 ×103 2.90 ×103 3.00 ×103 2.97 ×103
Std 1.22 ×101 7.77 7.27 6.71 1.50 ×104 2.26 ×101 1.65 ×101
Rank 5 2 3 4 1 7 6
Mean Rank 2.08 3.25 3.42 3.08 3.92 6.67 5.58
Final Rank 1 3 4 2 5 7 6
Time Taken (s) 564.85 12464.14 488.85 448.48 461.57 305.58 328.66

Table 4 reports the comparative results of EMPSO and six representative algorithms on twelve CEC2022 benchmark functions. Overall, EMPSO achieves the best mean rank of 2.08, outperforming all competitors, followed by AWPSO (3.08) and KLDE (3.25).

Specifically, EMPSO secures the top performance on five functions (F1, F4, F5, F7, and F9) and maintains competitive stability across the rest. Its advantages are particularly pronounced on unimodal and hybrid composition functions (e.g., F1–F5), where accurate exploitation and exemplar-guided learning contribute to consistent convergence. Although KLDE slightly surpasses EMPSO on certain multimodal cases (e.g., F6 and F3), EMPSO still demonstrates robust overall adaptability. Moreover, EMPSO exhibits favorable computational efficiency, with an average runtime of 564.85 s, significantly lower than KLDE (12,464.14 s) while remaining comparable to lightweight variants such as AWPSO and MPSO. These results substantiate the effectiveness and generalizability of the proposed learning and memory retrieval mechanisms under diverse optimization landscapes.

4.4. Parameter Sensitivity Analysis

To further investigate the sensitivity of EMPSO to its control parameter λ, we conducted additional experiments on the CEC2017 benchmark set, where λ was configured as {0, 0.1, 0.2, 0.3, 0.5}. The averaged performance over all 29 functions is summarized in Table 5. The parameter λ regulates the preference of EMPSO when recalling historical exemplars. Specifically, λ = 0 implies that all recorded exemplars are selected with uniform probability, while λ = 0.5 indicates a strong bias towards the most recent exemplars. Thus, λ can be interpreted as a memory decay factor, controlling the balance between short-term and long-term experience in guiding the swarm dynamics.
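The effect of λ (lam below) on the recall distribution of Equation (3) can be seen directly on a toy archive; the storage iterations used here are illustrative values only.

```python
import numpy as np

def recall_probs(store_times, t, lam):
    """Selection probabilities of archived exemplars under Equation (3)."""
    ages = t - np.asarray(store_times, dtype=float)
    w = np.exp(-lam * ages)
    return w / w.sum()

# Toy archive stored at iterations 100, 400, 700, 950; current t = 1000.
for lam in (0.0, 0.1, 0.5):
    print(lam, recall_probs([100, 400, 700, 950], 1000, lam).round(3))
# lam = 0 gives a uniform distribution; larger lam concentrates the mass
# on the most recently stored exemplar.
```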

Table 5.

Sensitivity analysis of parameter λ on the CEC2017 benchmark functions.

Function λ=0 λ=0.1 λ=0.2 λ=0.3 λ=0.5
F1 2.15 ×103 2.58 ×103 1.61 ×103 1.90 ×103 2.28 ×103
F3 2.18 ×103 5.22 ×102 4.52 ×102 4.88 ×102 5.13 ×102
F4 4.88 ×102 4.90 ×102 4.88 ×102 4.93 ×102 4.93 ×102
F5 5.27 ×102 5.32 ×102 5.32 ×102 5.33 ×102 5.33 ×102
F6 6.01 ×102 6.01 ×102 6.01 ×102 6.01 ×102 6.01 ×102
F7 7.67 ×102 7.61 ×102 7.63 ×102 7.62 ×102 7.64 ×102
F8 8.24 ×102 8.30 ×102 8.32 ×102 8.31 ×102 8.31 ×102
F9 9.38 ×102 9.09 ×102 9.08 ×102 9.09 ×102 9.08 ×102
F10 5.95 ×103 3.53 ×103 3.38 ×103 3.40 ×103 3.54 ×103
F11 1.26 ×103 1.24 ×103 1.23 ×103 1.23 ×103 1.24 ×103
F12 7.57 ×104 6.92 ×104 4.89 ×104 7.50 ×104 7.24 ×104
F13 1.45 ×104 1.39 ×104 1.11 ×104 1.68 ×104 2.09 ×104
F14 4.66 ×103 3.65 ×103 3.57 ×103 3.79 ×103 3.71 ×103
F15 3.19 ×103 3.09 ×103 2.59 ×103 3.07 ×103 3.16 ×103
F16 2.09 ×103 2.06 ×103 2.03 ×103 2.09 ×103 2.11 ×103
F17 1.86 ×103 1.88 ×103 1.86 ×103 1.89 ×103 1.89 ×103
F18 8.31 ×104 7.47 ×104 5.67 ×104 7.09 ×104 7.32 ×104
F19 4.21 ×103 3.57 ×103 3.90 ×103 3.98 ×103 4.14 ×103
F20 2.18 ×103 2.15 ×103 2.16 ×103 2.17 ×103 2.18 ×103
F21 2.33 ×103 2.34 ×103 2.34 ×103 2.34 ×103 2.34 ×103
F22 3.18 ×103 3.24 ×103 3.15 ×103 3.09 ×103 3.11 ×103
F23 2.74 ×103 2.76 ×103 2.74 ×103 2.79 ×103 2.80 ×103
F24 2.90 ×103 2.92 ×103 2.93 ×103 2.95 ×103 2.97 ×103
F25 2.89 ×103 2.89 ×103 2.89 ×103 2.89 ×103 2.89 ×103
F26 4.54 ×103 4.65 ×103 4.65 ×103 4.75 ×103 4.70 ×103
F27 3.25 ×103 3.27 ×103 3.25 ×103 3.28 ×103 3.27 ×103
F28 3.24 ×103 3.23 ×103 3.23 ×103 3.24 ×103 3.23 ×103
F29 3.64 ×103 3.59 ×103 3.55 ×103 3.60 ×103 3.61 ×103
F30 7.45 ×103 7.75 ×103 7.74 ×103 7.32 ×103 7.12 ×103

As shown in Table 5, the choice of λ demonstrates considerable robustness. Across all tested configurations, EMPSO consistently outperforms its competitors, as reported in Table 2, confirming that λ plays a non-trivial role in shaping the search behavior of the algorithm. Different settings of λ result in distinct search dynamics. For instance, on certain functions such as F5, F8, F24, and F26, the performance deteriorates as λ increases, which may be attributed to excessive reliance on short-term memory that restricts global exploration. Conversely, on other functions such as F9, F28, and F30, larger values of λ lead to improved performance, suggesting that emphasizing recent exemplars can accelerate convergence when the fitness landscape exhibits relatively stable local structures. These observations indicate that extreme values of λ may enhance EMPSO’s adaptability for particular problem classes, but they also introduce higher variance in performance across tasks.

Nevertheless, a moderate choice of λ achieves the best trade-off: λ = 0.2 yields the best results on 16 of the 29 benchmark functions. This also highlights the effectiveness of the SMR strategy, which provides a balanced utilization of both recent and distant memory, thereby reducing the risk of premature convergence while maintaining sufficient exploitation ability. Based on these results and to ensure fairness in comparisons, λ = 0.2 is adopted as the default configuration in all experiments.

4.5. Ablation Study

To further investigate the contribution of different components in EMPSO, we conducted an ablation study on several representative CEC2017 functions. Specifically, three degraded variants were implemented by removing the high-level, middle-level, and low-level strategies, respectively. The detailed results are summarized in Table 6.

Table 6.

Ablation study results of EMPSO and its variants on the CEC2017 functions.

Function EMPSO w/o_High w/o_Middle w/o_Low PSO
F1 Mean 1.61 ×103 1.52 ×109 3.85 ×103 2.12 ×103 7.69 ×109
Std 1.56 ×103 1.29 ×109 3.66 ×103 1.85 ×103 2.96 ×109
Rank 1 4 3 2 5
F3 Mean 4.52 ×102 1.87 ×103 3.31 ×103 2.38 ×103 4.53 ×104
Std 2.03 ×102 2.02 ×103 1.59 ×103 1.55 ×103 1.53 ×104
Rank 1 2 4 3 5
F5 Mean 5.32 ×102 5.68 ×102 5.33 ×102 5.25 ×102 6.33 ×102
Std 5.12 1.57 ×101 6.21 4.95 2.21 ×101
Rank 2 4 3 1 5
F7 Mean 7.63 ×102 8.02 ×102 7.65 ×102 7.68 ×102 9.21 ×102
Std 4.52 2.03 ×101 5.22 7.02 5.48 ×101
Rank 1 4 2 3 5
F9 Mean 9.08 ×102 1.52 ×103 9.15 ×102 9.34 ×102 3.66 ×103
Std 5.01 3.14 ×102 8.33 2.09 ×101 1.13 ×103
Rank 1 4 2 3 5
F11 Mean 1.23 ×103 1.41 ×103 1.25 ×103 1.26 ×103 1.44 ×103
Std 5.27 ×101 7.62 ×101 3.46 ×101 4.22 ×101 7.67 ×101
Rank 1 4 2 3 5
F13 Mean 1.11 ×104 1.77 ×106 1.77 ×104 1.68 ×104 1.81 ×106
Std 6.59 ×103 2.32 ×106 1.25 ×104 1.21 ×104 2.14 ×106
Rank 1 4 3 2 5
F15 Mean 2.59 ×103 3.31 ×104 3.05 ×103 3.60 ×103 3.57 ×104
Std 5.48 ×102 2.10 ×104 1.08 ×103 1.91 ×103 1.81 ×104
Rank 1 4 2 3 5
F17 Mean 1.86 ×103 2.14 ×103 1.89 ×103 1.89 ×103 2.31 ×103
Std 5.94 ×101 1.54 ×102 6.42 ×101 6.77 ×101 1.52 ×102
Rank 1 4 2 3 5
F19 Mean 3.90 ×103 2.04 ×105 4.43 ×103 4.46 ×103 5.63 ×105
Std 1.48 ×103 2.09 ×105 1.91 ×103 1.92 ×103 7.35 ×105
Rank 1 2 3 4 5
F21 Mean 2.34 ×103 2.37 ×103 2.34 ×103 2.34 ×103 2.44 ×103
Std 7.16 1.10 ×101 7.72 7.31 2.41 ×101
Rank 1 4 3 2 5
F23 Mean 2.74 ×103 2.82 ×103 2.77 ×103 2.74 ×103 2.93 ×103
Std 2.03 ×101 3.10 ×101 2.59 ×101 2.23 ×101 5.39 ×101
Rank 1 4 3 2 5
F25 Mean 2.89 ×103 2.95 ×103 2.89 ×103 2.89 ×103 3.05 ×103
Std 1.07 4.45 ×101 1.30 1.79 9.33 ×101
Rank 1 4 2 3 5
F27 Mean 3.25 ×103 3.29 ×103 3.24 ×103 3.26 ×103 3.31 ×103
Std 2.19 ×101 2.24 ×101 1.52 ×101 2.36 ×101 3.13 ×101
Rank 2 4 1 3 5
F29 Mean 3.55 ×103 3.84 ×103 3.60 ×103 3.62 ×103 3.87 ×103
Std 7.83 ×101 2.04 ×102 1.05 ×102 1.10 ×102 1.82 ×102
Rank 1 4 2 3 5
Mean Rank 1.13 3.73 2.47 2.67 5.00
Final Rank 1 4 2 3 5

Overall, EMPSO achieves the best mean rank (1.13) across all test functions, significantly outperforming its variants and the canonical PSO. This demonstrates that the hierarchical strategy design of EMPSO is essential for balancing exploration and exploitation.

When the high-level strategy is removed (w/o_High), the performance deteriorates markedly on most functions, such as F1, F3, F13, F15, and F19, where the mean errors increase by several orders of magnitude compared to EMPSO. This degradation highlights the critical role of the high-level mechanism in promoting global guidance and preventing premature convergence.

The removal of the middle-level strategy (w/o_Middle) leads to moderately reduced performance; although this variant performs better than w/o_High, it remains consistently inferior to EMPSO, particularly on F3, F7, and F11, indicating that the middle-level strategy is vital for sustaining population diversity and enhancing robustness.

In contrast, the variant without the low-level strategy (w/o_Low) exhibits competitive or even superior results on certain problems, such as F5, suggesting that the low-level component primarily contributes to fine-grained exploitation and local refinement, whose benefits may be problem-dependent. Nonetheless, when considering the overall performance across all test functions, EMPSO still surpasses all its variants, while the canonical PSO shows the poorest results (mean rank = 5.00).

These findings confirm that each hierarchical level provides complementary benefits, and their synergistic integration enables EMPSO to achieve superior accuracy, stability, and adaptability across diverse optimization landscapes.

5. Simulation on Engineering Optimization Problems

To further evaluate the practical effectiveness of the proposed algorithm, we conducted simulations on three widely used constrained engineering design problems: the three-bar truss design [56], the pressure vessel design [57], and the tension/compression spring design [58]. These problems are representative of real-world engineering optimization tasks, which are typically characterized by nonlinearity, discrete variables, and complex constraints. The detailed formulations are presented below.

5.1. Problem Formulations

5.1.1. Three-Bar Truss Design

The three-bar truss design is a classical benchmark in structural optimization. The objective is to minimize the overall weight of a planar truss while ensuring that the stress on each bar and the displacement at the loaded joint remain within acceptable limits. This problem is widely adopted to test the capability of optimization algorithms in handling nonlinear stress–displacement interactions. The structure of the three-bar truss is illustrated in Figure 3.

Figure 3. Schematic of the three-bar truss structure.

Expression:

f(x) = (2\sqrt{2}\,x_1 + x_2) \cdot l \cdot \rho,

where x1 and x2 denote the cross-sectional areas of truss members, l is the member length, and ρ is the material density.

Constraints:

g_1(x) \le 0 \ (\text{stress in bar 1}), \quad g_2(x) \le 0 \ (\text{stress in bar 2}), \quad g_3(x) \le 0 \ (\text{stress in bar 3}), \quad g_4(x) \le 0 \ (\text{displacement limit}).

Variable Scope:

0 \le x_1, x_2 \le 1.0 \ (\text{in}^2).

5.1.2. Pressure Vessel Design

The pressure vessel design problem is a well-known engineering benchmark involving both continuous and discrete decision variables. The goal is to minimize the total cost of material, forming, and welding, subject to safety and design requirements. Due to the mixed-variable nature and nonlinear constraints, this problem is particularly challenging for evolutionary algorithms. The schematic representation of the vessel structure is shown in Figure 4.

Figure 4. Schematic of the pressure vessel design problem.

Expression:

f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3,

where x1 and x2 are the thicknesses of the shell and the head, x3 is the inner radius, and x4 is the length of the cylindrical section.

Constraints:

g_1(x) = -x_1 + 0.0193 x_3 \le 0,
g_2(x) = -x_2 + 0.00954 x_3 \le 0,
g_3(x) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0,
g_4(x) = x_4 - 240 \le 0.

Variable Scope:

x_1 \in \{1, 2, \ldots, 99\} \times 0.0625, \quad x_2 \in \{1, 2, \ldots, 99\} \times 0.0625, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200.
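As an illustration of how such a mixed-variable formulation is typically handed to a swarm optimizer, the sketch below combines the vessel objective with a static penalty; the penalty coefficient and this constraint-handling scheme are assumptions, since the paper does not state its mechanism, and the discreteness of x1 and x2 (multiples of 0.0625) would additionally require rounding candidate values.

```python
import numpy as np

def vessel_cost(x):
    """Pressure vessel objective with x = (x1, x2, x3, x4) as above."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_penalized(x, rho=1e6):
    """Static-penalty fitness: cost plus rho times the summed violations."""
    x1, x2, x3, x4 = x
    g = (-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,
         x4 - 240.0)
    return vessel_cost(x) + rho * sum(max(0.0, gi) for gi in g)
```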

5.1.3. Tension/Compression Spring Design

The tension/compression spring design problem focuses on minimizing the spring’s weight while ensuring that it meets the requirements on shear stress, deflection, and frequency. This benchmark reflects practical challenges in mechanical design, as it involves highly nonlinear constraints and conflicting objectives. The schematic diagram of the spring structure is depicted in Figure 5.

Figure 5. Schematic of the tension/compression spring design problem.

Expression:

f(x) = \frac{\pi^2 x_2 x_3 x_1^2}{4},

where x1 is the wire diameter, x2 is the mean coil diameter, and x3 is the number of active coils.

Constraints:

g_1(x) = \frac{8 F_{\max} x_2}{\pi x_1^3} - S \le 0 \ (\text{shear stress}), \quad
g_2(x) = -l_{\max} + \frac{F_{\max}}{K} + 1.05(x_3 + 2)x_1 \le 0 \ (\text{deflection}), \quad
g_3(x) = \frac{x_2}{x_1} - 3 \ge 0, \quad
g_4(x) = 15 - \frac{x_2}{x_1} \ge 0.

Variable Scope:

0.05 \le x_1 \le 2.0, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15.

5.2. Experimental Setup

For each engineering optimization problem, the proposed EMPSO was compared against five representative SI algorithms, namely AWPSO, PECSO, WOA, WSO, and the canonical PSO. All algorithms were executed under identical termination conditions, with the maximum number of function evaluations (FEs) set to 1.0×105. Each algorithm was independently run 30 times to ensure statistical reliability.

5.3. Results and Discussion

For the three-bar truss design problem (Table 7), all algorithms were able to converge to the vicinity of the known optimum (263.89584). Nevertheless, EMPSO stands out in terms of stability, achieving an almost negligible standard deviation (9 × 10⁻⁶). Compared with advanced variants such as AWPSO and PECSO, EMPSO yields lower variability, indicating that the exemplar-driven search mechanism effectively preserves convergence reliability in low-dimensional structural design tasks.

Table 7.

Comparison results on the three-bar truss design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2
EMPSO 263.89584 263.89585 0.000009 0.78869 0.40821
AWPSO 263.89625 265.59212 1.459546 0.78942 0.40614
PECSO 263.89592 264.19864 0.460413 0.78848 0.40879
WSO 263.89584 263.89589 0.000039 0.78872 0.40813
WOA 263.89636 263.95316 0.056557 0.787839929 0.41062
PSO 263.89584 263.89585 0.000003 0.78868 0.40824

In the pressure vessel design problem (Table 8), the performance differences among algorithms become more pronounced. EMPSO consistently identifies solutions close to 6.06×103, with the best solution of 6059.71, which is highly competitive with the best-known designs reported in the literature. Its average performance (6073.07) is significantly superior to AWPSO (7826.72) and WOA (6728.76), both of which exhibit larger variances. The narrow spread of EMPSO’s results (Std. =15.53) highlights its efficiency and robustness when handling mixed-integer constraints. These findings emphasize the advantage of exemplar-based learning in guiding the population toward feasible and high-quality regions within complex design landscapes.

Table 8.

Comparison results on the pressure vessel design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2 x3 x4
EMPSO 6059.71434 6073.06615 15.529393 12.85781 7.14422 42.09845 176.63660
AWPSO 6319.46259 7826.71843 663.786354 13.06618 7.50000 40.71069 198.30049
PECSO 6059.71431 6355.17383 376.004535 12.81252 6.43752 42.09485 176.63661
WSO 6059.71473 6208.49219 106.840886 12.52765 6.97388 42.09845 176.63661
WOA 6129.15647 6728.76216 388.105378 14.44860 6.74448 45.00141 143.70370
PSO 6059.71434 6102.56744 75.657230 12.69574 7.04475 42.09845 176.63660

For the tension/compression spring design problem (Table 9), EMPSO again demonstrates competitive performance, achieving the best solution of 0.01267, an average value of 0.01271, and a very small standard deviation. Although the canonical PSO also attains near-optimal solutions with a slightly smaller standard deviation, both methods reliably converge to the global optimum in this relatively smooth search space. By contrast, AWPSO and PECSO exhibit inferior mean performance, confirming that EMPSO maintains robustness even on less challenging problems.

Table 9.

Comparison results on the tension/compression spring design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2 x3
EMPSO 0.01267 0.01271 0.000020 0.05170 0.35702 11.27100
AWPSO 0.01389 0.01875 0.003225 0.05127 0.34308 13.40521
PECSO 0.01272 0.01321 0.000806 0.05179 0.35902 11.15565
WSO 0.01267 0.01290 0.000243 0.05173 0.35779 11.22652
WOA 0.01267 0.01296 0.000212 0.05172 0.35739 11.24984
PSO 0.01267 0.01271 0.000016 0.05143 0.35062 11.65546

Overall, across the three constrained engineering benchmarks, EMPSO consistently achieves highly competitive or superior results in terms of best and mean objective values while maintaining low variance. Its advantage is particularly evident in the pressure vessel problem, where the coexistence of discrete and continuous variables poses a considerable challenge for standard metaheuristics. These results demonstrate that EMPSO provides a robust and flexible approach for addressing practical engineering design tasks characterized by complex constraints and heterogeneous decision variables.

6. Simulation on Optimal PMU Placement Problem

6.1. Problem Formulation

The Optimal PMU Placement (OPP) [8,9,59] problem aims to determine the minimum number of Phasor Measurement Units (PMUs) and their optimal locations in a power system to ensure full network observability.

Consider a power system with N buses and L transmission lines. Let

x_i =
\begin{cases}
1, & \text{if a PMU is installed at bus } i, \\
0, & \text{otherwise,}
\end{cases}
\qquad i = 1, 2, \ldots, N,

denote the binary decision variable representing PMU placement.

The OPP problem can be formulated as the following optimization problem:

\min \sum_{i=1}^{N} x_i \quad \text{s.t.} \quad
\begin{cases}
\text{each bus is observable, either directly or via a connected PMU,} \\
\text{observability constraints include zero-injection buses where applicable,} \\
x_i \in \{0, 1\}, \quad i = 1, 2, \ldots, N,
\end{cases} \tag{6}

where the objective function (Equation (6)) minimizes the total number of PMUs. A bus is considered observable if either:

  1. A PMU is installed at the bus itself, or

  2. The bus is connected to another bus equipped with a PMU.

For zero-injection buses (buses with neither load nor generation), additional observability rules are applied: if all neighboring buses except one are observable, the zero-injection bus can help infer the voltage of the remaining unobserved bus. This reduces the required number of PMUs compared to a naive placement.
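Putting these rules together, a compact way to express the observability check and a penalized version of Equation (6) is sketched below. The propagation loop, the 0-based bus indexing, and the `penalty` weight are our assumptions for illustration rather than details taken from the paper.

```python
def observable_buses(n_buses, lines, pmu, zero_injection=()):
    """Return the set of observable buses for a placement vector `pmu`.

    Rules from Section 6.1:
      1. a bus with a PMU is observable;
      2. any neighbor of a PMU bus is observable;
      3. for a zero-injection bus, if the bus and all but one member of
         its neighborhood are observable, the remaining bus is inferred.
    `lines` is a list of (i, j) bus pairs; buses are indexed from 0 here.
    """
    nbrs = {i: set() for i in range(n_buses)}
    for i, j in lines:
        nbrs[i].add(j)
        nbrs[j].add(i)

    obs = set()
    for i in range(n_buses):
        if pmu[i]:
            obs.add(i)          # rule 1: PMU bus itself
            obs |= nbrs[i]      # rule 2: directly connected buses

    changed = True
    while changed:              # rule 3: zero-injection inference
        changed = False
        for z in zero_injection:
            unknown = (nbrs[z] | {z}) - obs
            if len(unknown) == 1:
                obs |= unknown
                changed = True
    return obs

def opp_objective(n_buses, lines, pmu, zero_injection=(), penalty=100):
    """PMU count plus an assumed penalty per unobservable bus,
    a common constraint-handling variant of Equation (6)."""
    unobserved = n_buses - len(observable_buses(n_buses, lines, pmu, zero_injection))
    return sum(pmu) + penalty * unobserved
```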

Although representative optimization methods (such as [8,9]) have already addressed the OPP problem satisfactorily, it remains a valuable benchmark that effectively characterizes the practical applicability and robustness of a new method.

6.2. Experimental Setup

To validate the effectiveness of the proposed EMPSO algorithm for the OPP problem, extensive experiments were conducted on the IEEE 30-bus, IEEE 39-bus, IEEE 57-bus, and IEEE 118-bus test systems.

EMPSO is compared with the binary particle swarm optimization (BPSO) [60] algorithm and the binary bat algorithm (BBA) [61]. For each algorithm, performance is evaluated through four indicators reported over 30 independent runs: the minimum number of PMUs found in the best run (Optimal), the maximum number found in the worst run (Sub-Optimal), and the average and standard deviation of the number of PMUs across runs. Additionally, the PMU placement configuration from the best run is documented for each test system to provide insight into the solutions achieved. During testing, we set the population size to 100 and the maximum number of iterations to 1000.

It should be noted that EMPSO was originally designed for continuous optimization problems. When applying it to binary optimization tasks, several minor adjustments were therefore introduced without altering the main algorithmic framework. Specifically, each particle's position was interpreted as a continuous value in [0, 1] and thresholded to a binary selection vector before fitness evaluation, following common practice in binary PSO adaptations. In addition, the position update boundaries were constrained to [0, 1], and the final evaluation employed a binarization step to ensure valid discrete representations.
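A minimal sketch of this continuous-to-binary mapping is given below; the specific threshold value of 0.5 is an assumption, as the text only states that positions are thresholded.

```python
import numpy as np

def to_binary(position, threshold=0.5):
    """Clip a continuous EMPSO position to [0, 1] and threshold it
    into a binary PMU-selection vector before fitness evaluation."""
    clipped = np.clip(np.asarray(position, dtype=float), 0.0, 1.0)
    return (clipped > threshold).astype(int)

# Example: a 5-bus position vector
print(to_binary([0.9, 0.2, 0.55, -0.1, 1.3]))  # -> [1 0 1 0 1]
```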

6.3. Results

6.3.1. Statistical Results

Table 10 summarizes the statistical results of 30 independent runs for EMPSO, BPSO, and BBA on the IEEE 30-bus, 39-bus, 57-bus, and 118-bus systems. Table 11 presents the optimal PMU locations identified by EMPSO on different IEEE test systems.

Table 10.

Statistical comparison of EMPSO, BPSO, and BBA on the OPP problem.

Problems Algorithms Optimal Sub-Optimal Avg. Std.
IEEE 30-bus EMPSO 10 12 10.50 0.629724
BPSO 10 11 10.23 0.430183
BBA 11 13 11.77 0.568321
IEEE 39-bus EMPSO 13 18 15.77 1.16511
BPSO 15 17 16.36 0.614948
BBA 17 19 18.13 0.571346
IEEE 57-bus EMPSO 20 25 22.67 1.39786
BPSO 21 25 23.77 1.10433
BBA 23 28 26.73 1.08066
IEEE 118-bus EMPSO 44 60 52.53 3.32942
BPSO 51 60 56.40 2.17509
BBA 61 67 65.37 1.37674
Table 11.

Optimal PMU placement obtained by EMPSO on different IEEE test systems.

Test System Total Buses PMUs Required Optimal PMU Locations
IEEE 30-Bus 30 10 {2, 4, 6, 10, 11, 12, 15, 20, 25, 27}
IEEE 39-Bus 39 13 {2, 6, 9, 10, 11, 14, 17, 19, 20, 22, 23, 25, 29}
IEEE 57-Bus 57 20 {1, 4, 7, 10, 13, 20, 22, 24, 28, 30, 32, 35, 39, 41, 44, 47, 50, 53, 55, 56}
IEEE 118-Bus 118 44 {3, 5, 7, 8, 9, 12, 15, 17, 20, 23, 24, 25, 29, 35, 38, 40, 43, 47, 49, 50, 51, 52, 57, 59, 60, 64, 66, 68, 72, 73, 75, 76, 77, 78, 85, 86, 89, 92, 96, 100, 105, 107, 110, 115}

EMPSO matches or outperforms both BPSO and BBA across all test systems in terms of best-case solution quality. For the IEEE 30-bus system, EMPSO matches BPSO's best result (10 PMUs) and clearly outperforms BBA (11 PMUs). While BPSO shows a marginally better average (10.23 vs. 10.50) and standard deviation on this smaller system, EMPSO remains close in worst-case performance (12 vs. 11 PMUs).

As system complexity increases, EMPSO’s advantages become more pronounced. For the IEEE 39-bus system, EMPSO achieves a superior best-case solution (13 PMUs) compared to both BPSO (15 PMUs) and BBA (17 PMUs). This performance advantage extends to the larger systems, where EMPSO demonstrates remarkable scalability. Particularly notable is its performance on the IEEE 118-bus system, where EMPSO achieves a best-case solution of 44 PMUs—significantly better than BPSO’s 51 PMUs and vastly superior to BBA’s 61 PMUs.

The statistical results clearly demonstrate EMPSO's robustness in maintaining solution quality across multiple runs. EMPSO strikes an effective balance between exploration and exploitation, enabling it to escape local optima and find better solutions while maintaining reasonable consistency, especially given the complexity of the search space in larger systems.

6.3.2. Convergence Analysis

Figure 6 illustrates the convergence behavior of EMPSO, BPSO, and BBA across the IEEE 30-, 39-, 57-, and 118-bus systems, revealing distinct performance differences. Overall, EMPSO achieves both faster convergence and higher solution quality than the other algorithms.

Figure 6. Convergence curves of EMPSO, BPSO, and BBA on the OPP problem.

In the early iterations, EMPSO rapidly decreases the objective value, quickly approaching high-quality regions. BPSO also converges fast initially but tends to stagnate later due to premature convergence. In contrast, BBA exhibits the slowest convergence, with limited improvement over time, reflecting weak global exploration capability.

As the system size increases, EMPSO’s advantage becomes more evident. In the 118-bus system, EMPSO continues improving and reaches the best final solution, whereas BPSO and BBA stop making significant progress early. This demonstrates EMPSO’s scalability and robustness for large-scale, complex optimization problems, attributed to its enhanced exploration and exemplar-guided learning mechanisms that help avoid local entrapment.

These convergence trends are consistent with the statistical results, confirming EMPSO’s superior performance in the optimal PMU placement task. In particular, the algorithm effectively maintains population diversity and achieves a stable balance between exploration and exploitation throughout the optimization process.

6.4. Summary

In summary, the experimental study on the OPP problem demonstrates the superior performance, robustness, and scalability of the proposed EMPSO algorithm across multiple IEEE benchmark systems. EMPSO consistently requires fewer PMUs than both BPSO and BBA while maintaining a lower standard deviation, indicating its stability across independent runs. The integration of exemplar learning, memory retrieval, and adaptive position updating effectively enhances search efficiency and solution quality under discrete encoding.

Convergence analyses further reveal that EMPSO not only achieves faster descent in early iterations but also sustains improvement in the later stages, thereby avoiding the premature stagnation typically observed in competing algorithms. This advantage becomes increasingly pronounced as the network scale grows from 30 to 118 buses, validating EMPSO’s capability to handle large and complex search spaces efficiently.

Overall, EMPSO maintains a stable trade-off between exploration and exploitation, preserves population diversity, and exhibits strong generalization to binary optimization tasks. These findings substantiate EMPSO as a promising and extensible framework for large-scale engineering optimization problems such as PMU placement in power systems.

7. Conclusions and Future Work

This paper introduced EMPSO, a biologically inspired PSO variant that integrates Elite Exemplar Learning (EEL), Superior Memory Recall (SMR), and Adaptive Position Update (APU) mechanisms to overcome the limitations of premature convergence and insufficient adaptability in conventional PSO. By combining exemplar-driven learning, memory-based knowledge reuse, and fitness-dependent behavioral differentiation, EMPSO establishes a unified framework that enhances swarm intelligence through self-adaptive knowledge evolution.

Extensive experiments on the CEC2017 and CEC2022 benchmark suites confirm that EMPSO achieves superior convergence accuracy and stability across diverse problem landscapes. Its advantage becomes particularly pronounced in large-scale and multimodal scenarios, demonstrating that hierarchical exemplar learning and recency-weighted memory retrieval effectively sustain diversity and prevent stagnation. Applications to engineering design and PMU placement further validate EMPSO’s practicality, achieving consistent improvements in solution quality, robustness, and computational efficiency.

For future research, several directions are worth pursuing. First, the current memory retrieval process relies on a fixed recency decay; designing an adaptive forgetting mechanism or integrating reinforcement learning could yield more responsive memory management. Second, the exemplar aggregation in EEL may be extended through data-driven weighting, where the influence of elites is adaptively estimated using landscape metrics or clustering information. Third, probabilistic priors, Bayesian processing, and surrogate-based uncertainty modeling can provide more robust uncertainty quantification, and combining them with EMPSO is a promising direction. Finally, future studies could extend EMPSO to dynamic, multiobjective, and high-dimensional optimization, or embed it in hybrid systems such as deep learning training, energy management, or industrial scheduling to explore its scalability and domain adaptability.

Overall, this study contributes a novel perspective on strengthening swarm intelligence via exemplar-driven knowledge reuse and adaptive evolution, and we expect it to inspire the development of more resilient and knowledge-intensive swarm optimizers.

Acknowledgments

This paper has no additional acknowledgments beyond those listed in the author contributions or funding sections.

Abbreviations

The following abbreviations are used in this manuscript:

APU Adaptive Position Update
AWPSO Adaptive Weighted Particle Swarm Optimizer
BBA Binary Bat Algorithm
BPSO Binary Particle Swarm Optimization
EEL Elite Exemplar Learning
EMPSO Exemplar Learning and Memory Retrieval-Based Particle Swarm Optimization
KLDE Knowledge Learning Differential Evolution
MPSO Modified Particle Swarm Optimization
OPP Optimal Phasor Measurement Units Placement
PECSO Performance-Enhanced Chicken Swarm Optimization
PMU Phasor Measurement Units
PSO Particle Swarm Optimization
SI Swarm Intelligence
SMR Superior Memory Recall
WOA Whale Optimization Algorithm
WSO White Shark Optimizer

Author Contributions

Conceptualization, S.Z. and Y.G.; methodology, S.Z. and X.H.; software, S.Z.; validation, S.Z., X.H. and Y.G.; formal analysis, S.Z. and Y.G.; investigation, S.Z. and X.H.; resources, Y.G.; data curation, S.Z. and X.H.; writing—original draft preparation, S.Z.; writing—review and editing, X.H., M.G. and Y.G.; visualization, X.H. and Y.Z.; supervision, Y.G.; project administration, Y.G.; funding acquisition, Y.G.; auxiliary support, M.G. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are contained in this paper. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

This research was funded in part by the Key Project of Educational Science Planning of Jilin Province under Grant ZD24093, in part by the Key Project of Teaching Research of Beihua University under Grant XJZD-20240008 and XJZD-20230007, and in part by the General Project of Teaching Research of Beihua University under Grant XJYB-2021020.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

1. Liu B., Xu M., Gao L. Enhanced swarm intelligence optimization: Inspired by cellular coordination in immune systems. Knowl.-Based Syst. 2024;290:111557. doi: 10.1016/j.knosys.2024.111557.
2. Zhang Y., Wang L., Zhao J., Han X., Wu H., Li M., Deveci M. A convolutional neural network based on an evolutionary algorithm and its application. Inf. Sci. 2024;670:120644. doi: 10.1016/j.ins.2024.120644.
3. Qi A., Zhao D., Heidari A.A., Liu L., Chen Y., Chen H. FATA: An efficient optimization method based on geophysics. Neurocomputing. 2024;607:128289. doi: 10.1016/j.neucom.2024.128289.
4. Kennedy J., Eberhart R. Particle swarm optimization. Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; pp. 1942–1948.
5. Dorigo M., Birattari M., Stutzle T. Ant colony optimization. IEEE Comput. Intell. Mag. 2007;1:28–39. doi: 10.1109/MCI.2006.329691.
6. Mirjalili S., Lewis A. The whale optimization algorithm. Adv. Eng. Softw. 2016;95:51–67. doi: 10.1016/j.advengsoft.2016.01.008.
7. Braik M., Hammouri A., Atwan J., Al-Betar M.A., Awadallah M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022;243:108457. doi: 10.1016/j.knosys.2022.108457.
8. Theodorakatos N.P., Babu R., Theodoridis C.A., Moschoudis A.P. Mathematical models for the single-channel and multi-channel PMU allocation problem and their solution algorithms. Algorithms. 2024;17:191. doi: 10.3390/a17050191.
9. Koutsoukis N.C., Manousakis N.M., Georgilakis P.S., Korres G.N. Numerical observability method for optimal phasor measurement units placement using recursive Tabu search method. IET Gener. Transm. Distrib. 2013;7:347–356. doi: 10.1049/iet-gtd.2012.0377.
10. Li Y., Zhao L., Wang Y., Wen Q. Improved sand cat swarm optimization algorithm for enhancing coverage of wireless sensor networks. Measurement. 2024;233:114649. doi: 10.1016/j.measurement.2024.114649.
11. Yang W., Xia K., Fan S., Wang L., Li T., Zhang J., Feng Y. A multi-strategy whale optimization algorithm and its application. Eng. Appl. Artif. Intell. 2022;108:104558. doi: 10.1016/j.engappai.2021.104558.
12. Fu Y., Liu D., Chen J., He L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024;57:123. doi: 10.1007/s10462-024-10729-y.
13. Huang J., Hu H. Hybrid beluga whale optimization algorithm with multi-strategy for functions and engineering optimization problems. J. Big Data. 2024;11:3. doi: 10.1186/s40537-023-00864-8.
14. Yuan G., Wang B., Xue B., Zhang M. Particle swarm optimization for efficiently evolving deep convolutional neural networks using an autoencoder-based encoding strategy. IEEE Trans. Evol. Comput. 2023;28:1190–1204. doi: 10.1109/TEVC.2023.3245322.
15. Zhuang X., Wang W., Su Y., Yan B., Li Y., Li L., Hao Y. Multi-objective optimization of reservoir development strategy with hybrid artificial intelligence method. Expert Syst. Appl. 2024;241:122707. doi: 10.1016/j.eswa.2023.122707.
16. Hong T.Y., Chen C.C. Hyperparameter optimization for convolutional neural network by opposite-based particle swarm optimization and an empirical study of photomask defect classification. Appl. Soft Comput. 2023;148:110904. doi: 10.1016/j.asoc.2023.110904.
17. Zhang Y., Wang L., Zhao J. PECSO: An improved chicken swarm optimization algorithm with performance-enhanced strategy and its application. Biomimetics. 2023;8:355. doi: 10.3390/biomimetics8040355.
18. Dai Y., Yu J., Zhang C., Zhan B., Zheng X. A novel whale optimization algorithm of path planning strategy for mobile robots. Appl. Intell. 2023;53:10843–10857. doi: 10.1007/s10489-022-04030-0.
19. Akay R., Yildirim M.Y. Multi-strategy and self-adaptive differential sine–cosine algorithm for multi-robot path planning. Expert Syst. Appl. 2023;232:120849. doi: 10.1016/j.eswa.2023.120849.
20. Huang W., Ding H., Qiao J. Large-scale and knowledge-based dynamic multiobjective optimization for MSWI process using adaptive competitive swarm optimization. IEEE Trans. Syst. Man Cybern. Syst. 2023;54:379–390. doi: 10.1109/TSMC.2023.3308922.
21. Abualigah L., Diabat A., Thanh C.L., Khatir S. Opposition-based Laplacian distribution with Prairie Dog Optimization method for industrial engineering design problems. Comput. Methods Appl. Mech. Eng. 2023;414:116097. doi: 10.1016/j.cma.2023.116097.
22. Yadav N.K., Das S. Multi-objective optimization for distributed generator and shunt capacitor placement considering voltage-dependent nonlinear load models. Swarm Evol. Comput. 2025;92:101782. doi: 10.1016/j.swevo.2024.101782.
23. Jain M., Saihjpal V., Singh N., Singh S.B. An overview of variants and advancements of PSO algorithm. Appl. Sci. 2022;12:8392. doi: 10.3390/app12178392.
24. Aslan M.F., Durdu A., Sabanci K. Goal distance-based UAV path planning approach, path optimization and learning-based path estimation: GDRRT*, PSO-GDRRT* and BiLSTM-PSO-GDRRT. Appl. Soft Comput. 2023;137:110156. doi: 10.1016/j.asoc.2023.110156.
25. Daviran M., Maghsoudi A., Ghezelbash R. Optimized AI-MPM: Application of PSO for tuning the hyperparameters of SVM and RF algorithms. Comput. Geosci. 2025;195:105785. doi: 10.1016/j.cageo.2024.105785.
26. Kocak O., Erkan U., Toktas A., Gao S. PSO-based image encryption scheme using modular integrated logistic exponential map. Expert Syst. Appl. 2024;237:121452. doi: 10.1016/j.eswa.2023.121452.
27. Wang D., Zhai L., Fang J., Li Y., Xu Z. psoResNet: An improved PSO-based residual network search algorithm. Neural Netw. 2024;172:106104. doi: 10.1016/j.neunet.2024.106104.
28. Sangrody R., Taheri S., Cretu A.M., Pouresmaeil E. An improved PSO-based MPPT technique using stability and steady state analyses under partial shading conditions. IEEE Trans. Sustain. Energy. 2023;15:136–145. doi: 10.1109/TSTE.2023.3274939.
29. Liu W., Wang Z., Yuan Y., Zeng N., Hone K., Liu X. A novel sigmoid-function-based adaptive weighted particle swarm optimizer. IEEE Trans. Cybern. 2021;51:1085–1093. doi: 10.1109/TCYB.2019.2925015.
30. Nabi S., Ahmad M., Ibrahim M., Hamam H. AdPSO: Adaptive PSO-based task scheduling approach for cloud computing. Sensors. 2022;22:920. doi: 10.3390/s22030920.
31. Khatir A., Capozucca R., Khatir S., Magagnini E., Benaissa B., Le Thanh C., Wahab M.A. A new hybrid PSO-YUKI for double cracks identification in CFRP cantilever beam. Compos. Struct. 2023;311:116803. doi: 10.1016/j.compstruct.2023.116803.
32. Amirteimoori A., Mahdavi I., Solimanpur M., Ali S.S., Tirkolaee E.B. A parallel hybrid PSO-GA algorithm for the flexible flow-shop scheduling with transportation. Comput. Ind. Eng. 2022;173:108672. doi: 10.1016/j.cie.2022.108672.
33. Li T., Shi J., Deng W., Hu Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022;121:108731. doi: 10.1016/j.asoc.2022.108731.
34. Zhou T., Wang L., Han X., Liu Z., Gao M. A binary linear predictive evolutionary algorithm with feature analysis for multiobjective feature selection in classification. Eng. Appl. Artif. Intell. 2025;152:110733. doi: 10.1016/j.engappai.2025.110733.
35. Jin X., Wei B., Deng L., Yang S., Zheng J., Wang F. An adaptive pyramid PSO for high-dimensional feature selection. Expert Syst. Appl. 2024;257:125084. doi: 10.1016/j.eswa.2024.125084.
36. Hu Q., Zhou N., Chen H., Weng S. Bayesian damage identification of an unsymmetrical frame structure with an improved PSO algorithm. Structures. 2023;57:105119. doi: 10.1016/j.istruc.2023.105119.
37. Radwan M., Elsayed S., Sarker R., Essam D., Coello C.C. Neuro-PSO algorithm for large-scale dynamic optimization. Swarm Evol. Comput. 2025;94:101865. doi: 10.1016/j.swevo.2025.101865.
38. Zhou T., Han X., Wang L., Gan W., Chu Y., Gao M. A multiobjective differential evolution algorithm with subpopulation region solution selection for global and local Pareto optimal sets. Swarm Evol. Comput. 2023;83:101423. doi: 10.1016/j.swevo.2023.101423.
39. Hong L., Yu X., Wang B., Woodward J., Özcan E. An improved ensemble particle swarm optimizer using niching behavior and covariance matrix adapted retreat phase. Swarm Evol. Comput. 2023;78:101278. doi: 10.1016/j.swevo.2023.101278.
40. Minh H.L., Khatir S., Rao R.V., Abdel Wahab M., Cuong-Le T. A variable velocity strategy particle swarm optimization algorithm (VVS-PSO) for damage assessment in structures. Eng. Comput. 2023;39:1055–1084. doi: 10.1007/s00366-021-01451-2.
41. Song B., Wang Z., Zou L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021;100:106960. doi: 10.1016/j.asoc.2020.106960.
42. Moazen H., Molaei S., Farzinvash L., Sabaei M. PSO-ELPM: PSO with elite learning, enhanced parameter updating, and exponential mutation operator. Inf. Sci. 2023;628:70–91. doi: 10.1016/j.ins.2023.01.103.
43. Meng Z., Zhong Y., Mao G., Liang Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022;586:176–191. doi: 10.1016/j.ins.2021.11.076.
44. Li F., Cai X., Gao L. Ensemble of surrogates assisted particle swarm optimization of medium scale expensive problems. Appl. Soft Comput. 2019;74:291–305. doi: 10.1016/j.asoc.2018.10.037.
45. Li X.L., Serra R., Olivier J. A multi-component PSO algorithm with leader learning mechanism for structural damage detection. Appl. Soft Comput. 2022;116:108315. doi: 10.1016/j.asoc.2021.108315.
46. Şenel F.A., Gökçe F., Yüksel A.S., Yiğit T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019;35:1359–1373. doi: 10.1007/s00366-018-0668-5.
47. Liu Z., Nishi T. Strategy dynamics particle swarm optimizer. Inf. Sci. 2022;582:665–703. doi: 10.1016/j.ins.2021.10.028.
48. Shaheen M.A., Hasanien H.M., Mekhamer S.F., Qais M.H., Alghuwainem S., Ullah Z., Tostado-Véliz M., Turky R.A., Jurado F., Elkadeem M.R. Probabilistic optimal power flow solution using a novel hybrid metaheuristic and machine learning algorithm. Mathematics. 2022;10:3036. doi: 10.3390/math10173036.
49. Jamhiri B., Xu Y., Shadabfar M., Costa S. Probabilistic machine learning for predicting desiccation cracks in clayey soils. Bull. Eng. Geol. Environ. 2023;82:355. doi: 10.1007/s10064-023-03366-2.
50. Vinothkumar T., Deepa S., Raj F.V.A. Adaptive probabilistic neural network based on hybrid PSO–ALO for predicting wind speed in different regions. Neural Comput. Appl. 2023;35:19997–20011. doi: 10.1007/s00521-023-08807-3.
51. Wu G., Mallipeddi R., Suganthan P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization. Technical Report, Volume 9; 2017. Available online: https://www.researchgate.net/profile/Guohua-Wu-5/publication/317228117_Problem_Definitions_and_Evaluation_Criteria_for_the_CEC_2017_Competition_and_Special_Session_on_Constrained_Single_Objective_Real-Parameter_Optimization/links/5982cdbaa6fdcc8b56f59104/Problem-Definitions-and-Evaluation-Criteria-for-the-CEC-2017-Competition-and-Special-Session-on-Constrained-Single-Objective-Real-Parameter-Optimization.pdf (accessed on 3 September 2025).
52. Xu J., Xu S., Zhang L., Zhou C., Han Z. A particle swarm optimization algorithm based on diversity-driven fusion of opposing phase selection strategies. Complex Intell. Syst. 2023;9:6611–6643. doi: 10.1007/s40747-023-01069-5.
53. Jiang Y., Zhan Z.H., Tan K.C., Zhang J. Knowledge learning for evolutionary computation. IEEE Trans. Evol. Comput. 2025;29:16–30. doi: 10.1109/TEVC.2023.3278132.
54. Lin S., Liu A., Wang J., Kong X. An intelligence-based hybrid PSO-SA for mobile robot path planning in warehouse. J. Comput. Sci. 2023;67:101938. doi: 10.1016/j.jocs.2022.101938.
55. Wilcoxon F. Individual comparisons by ranking methods. Biom. Bull. 1945;1:80–83. doi: 10.2307/3001968.
56. Xu X., Hu Z., Su Q., Li Y., Dai J. Multivariable grey prediction evolution algorithm: A new metaheuristic. Appl. Soft Comput. 2020;89:106086. doi: 10.1016/j.asoc.2020.106086.
57. Fu W.Y. Adaptive-acceleration-empowered collaborative particle swarm optimization. Inf. Sci. 2025;721:122621. doi: 10.1016/j.ins.2025.122621.
58. Tzanetos A., Blondin M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 2023;118:105521. doi: 10.1016/j.engappai.2022.105521.
59. Maji T.K., Acharjee P. Multiple solutions of optimal PMU placement using exponential binary PSO algorithm for smart grid applications. IEEE Trans. Ind. Appl. 2017;53:2550–2559. doi: 10.1109/TIA.2017.2666091.
60. Abd Rahman N.H., Zobaa A.F. Integrated mutation strategy with modified binary PSO algorithm for optimal PMUs placement. IEEE Trans. Ind. Inform. 2017;13:3124–3133. doi: 10.1109/TII.2017.2708724.
61. Nakamura R.Y., Pereira L.A., Costa K.A., Rodrigues D., Papa J.P., Yang X.S. BBA: A binary bat algorithm for feature selection. Proceedings of the 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, Ouro Preto, Brazil, 22–25 August 2012; IEEE: New York, NY, USA, 2012; pp. 291–297.
