Biomimetics. 2026 Feb 13;11(2):144. doi: 10.3390/biomimetics11020144

Dual-Subpopulation Competitive Particle Swarm Optimization with Engineering Applications

Shuying Zhang 1, Yufei Zhang 2, Minghan Gao 3, Qiaohong Zhang 4, Yue Gao 1,*
Editor: Heming Jia
PMCID: PMC12937953  PMID: 41744589

Abstract

Particle swarm optimization (PSO) is a widely used bio-inspired optimization algorithm, yet maintaining an effective balance between exploration and exploitation remains challenging. Most existing PSO variants rely on static or predefined regulation strategies, which restrict their adaptability to evolving search states and may lead to premature convergence or search stagnation. Inspired by division of labor and competitive selection mechanisms in biological populations, this paper proposes a dual-subpopulation competitive particle swarm optimization (DCPSO). In DCPSO, the population is explicitly partitioned into exploration and exploitation subpopulations with distinct search roles. A dynamic competition mechanism is designed to evaluate recent search progress, based on which stagnated particles are adaptively migrated between subpopulations, enabling flexible reallocation of computational resources during the optimization process. Experimental results on the CEC2017 benchmark suite demonstrate that DCPSO consistently outperforms standard PSO and several representative state-of-the-art algorithms, achieving statistically significant improvements on the majority of benchmark functions, particularly on hybrid and composition problems. Additional experiments on engineering design problems further verify the robustness, convergence stability, and practical effectiveness of DCPSO.

Keywords: swarm intelligence, particle swarm optimization, engineering optimization

1. Introduction

Swarm intelligence (SI) optimization algorithms constitute a class of computational methods inspired by collective behaviors observed in natural systems, such as bird flocking [1], fish schooling [2], and insect foraging [3]. In these systems, complex global intelligence emerges from local interactions among simple individuals without centralized control. This decentralized and self-organizing nature makes SI algorithms particularly suitable for solving complex and nonlinear optimization problems [4], and has led to their widespread application in areas such as engineering design [5,6], control systems [7,8], and machine learning [9,10].

Particle swarm optimization (PSO) is one of the most well-known and widely used SI algorithms [11]. It models social information sharing within a population, where particles adjust their trajectories according to both personal experience and social learning. Owing to its fast convergence speed and ease of implementation, PSO has achieved considerable success in a variety of optimization tasks [12,13,14]. Nevertheless, similar to many bio-inspired algorithms, PSO faces a fundamental challenge: achieving an effective balance between exploration and exploitation [15].

Exploration refers to probing new regions of the search space to avoid being trapped in local optima, whereas exploitation focuses on intensively refining promising regions discovered during exploration to obtain high-quality solutions [16]. Due to the dynamic nature of the optimization process, the relative importance of these two conflicting objectives varies over time and is often jointly influenced by problem characteristics and the current search state. Excessive exploration may result in slow convergence, while excessive exploitation frequently leads to premature convergence to local optima. Therefore, maintaining an appropriate balance between exploration and exploitation is crucial to the overall performance of particle swarm optimization.

Numerous PSO variants have been proposed to mitigate the exploration–exploitation trade-off, including parameter control strategies [17,18,19], topology modification methods [20,21,22], and multi-strategy hybridization techniques [23,24,25]. These approaches primarily regulate particle behaviors by adjusting coefficients, modifying information-sharing patterns, or integrating additional search operators. Although such designs can influence the intensity of exploration and exploitation, they operate mainly at the strategy or interaction level. The overall population organization typically remains unchanged throughout the search, which may limit the algorithm’s ability to adapt when the global search demand shifts significantly.

To further enhance behavioral diversity, multi-population or role-based PSO variants  [26,27,28] explicitly divide the swarm into subgroups with different search responsibilities. Compared with single-strategy regulation, these methods introduce functional differentiation within the population. However, in most existing designs, the partitioning scheme and role assignments are fixed once initialized, and the relative influence of each subpopulation is not dynamically adjusted according to their actual search contributions. As a result, even though heterogeneous behaviors coexist, the structural configuration of the swarm lacks an internal mechanism for self-reorganization when one search mode becomes temporarily dominant or ineffective.

To overcome these limitations, we propose a dual-subpopulation competitive particle swarm optimization (DCPSO) framework that introduces structural adaptation at the population level. Inspired by division of labor and competitive selection in natural systems, DCPSO differentiates exploration and exploitation roles while enabling their relative dominance to evolve during the search process. Division of labor is implemented through role-specific update strategies, whereas competitive selection is realized via a performance-driven migration mechanism. By periodically evaluating recent search contributions using a sliding-window metric, stagnated particles are adaptively transferred between subpopulations. Through this competition-driven reorganization process, the swarm structure continuously self-adjusts in response to changing optimization demands, achieving a dynamic and state-aware coordination between exploration and exploitation.

The main contributions of this paper are summarized as follows:

  • A dynamically adaptive dual-subpopulation framework is proposed, in which exploration and exploitation roles are explicitly differentiated while allowing population composition to evolve during the search process. Unlike static PSO variants, the proposed framework enables structural self-adjustment based on search state.

  • A dynamic competition and migration mechanism is introduced, where a sliding-window-based winning-strength metric is used to evaluate subpopulation performance and adaptively reallocate particles. This mechanism enables resource redistribution in response to the search state.

  • Dedicated update strategies are developed for each subpopulation, employing elite-restricted neighborhood learning for exploitation and fitness-ranking-based dimensional learning for exploration. These tailored strategies reinforce the complementary functions of the two subpopulations and improve the overall search efficiency.

The remainder of this paper is organized as follows: Section 2 introduces the standard PSO algorithm and its advanced variants. Section 3 describes the proposed DCPSO framework in detail. Section 4 presents numerical experimental results and comparative analyses. Section 5 reports simulation results on three engineering optimization problems. Finally, Section 6 concludes the paper and discusses directions for future research.

2. Preliminaries and Related Work

2.1. Particle Swarm Optimization

PSO is a population-based stochastic optimization algorithm originally proposed by Kennedy and Eberhart. It is inspired by the collective movement and information-sharing mechanisms observed in social organisms such as bird flocks and fish schools. In PSO, a population of particles collaboratively explores a continuous search space in order to locate optimal or near-optimal solutions.

Let $N$ denote the population size and $D$ the dimensionality of the search space. Each particle $i \in \{1, 2, \dots, N\}$ is associated with a position vector $x_i^t \in \mathbb{R}^D$ and a velocity vector $v_i^t \in \mathbb{R}^D$ at iteration $t$. The position represents a candidate solution, while the velocity determines the direction and magnitude of the particle's movement in the search space.

During the optimization process, each particle retains a memory of its best historical position, referred to as the personal best and denoted by pbesti. In addition, information about the best position found by the entire population, known as the global best and denoted by gbest, is shared among all particles. Based on this information, the velocity and position of particle i are updated according to

$v_i^{t+1} = \omega v_i^t + c_1 r_1 (\mathrm{pbest}_i - x_i^t) + c_2 r_2 (\mathrm{gbest} - x_i^t)$, (1)
$x_i^{t+1} = x_i^t + v_i^{t+1}$, (2)

where $\omega$ is the inertia weight controlling the influence of the previous velocity, $c_1$ and $c_2$ are acceleration coefficients corresponding to the cognitive and social components, respectively, and $r_1$ and $r_2$ are vectors of independent random variables uniformly distributed in $[0,1]^D$.
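The update rules of Eqs. (1) and (2) can be sketched in NumPy as follows. This is an illustrative implementation with hypothetical parameter defaults, not the paper's reference code:

```python
import numpy as np

def pso_step(x, v, pbest, pbest_f, gbest, f, omega=0.7298, c1=1.5, c2=1.5):
    """One iteration of canonical PSO (Eqs. (1)-(2)): move all particles,
    then refresh personal and global bests. Minimization is assumed."""
    N, D = x.shape
    r1 = np.random.rand(N, D)  # per-dimension uniform randoms in [0, 1]^D
    r2 = np.random.rand(N, D)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x = x + v                                                      # Eq. (2)
    fx = np.apply_along_axis(f, 1, x)
    improved = fx < pbest_f                       # update personal bests
    pbest[improved] = x[improved]
    pbest_f[improved] = fx[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()      # update shared global best
    return x, v, pbest, pbest_f, gbest
```

In practice a velocity clamp or boundary-handling rule is usually added on top of this bare update; it is omitted here to keep the correspondence with Eqs. (1)–(2) direct.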

The canonical PSO is attractive due to its simple structure and low computational cost. However, it is well known to suffer from premature convergence and rapid loss of population diversity, particularly in multimodal or high-dimensional optimization problems, where particles may quickly cluster around suboptimal regions. Although adjusting control parameters such as inertia weight and acceleration coefficients can partially influence the exploration–exploitation balance, parameter tuning alone cannot fundamentally reflect the collective search state of the population or reorganize its structural behavior once diversity has significantly decreased. These limitations have motivated the development of more adaptive and structurally enhanced PSO variants.

2.2. Related Work

PSO has inspired extensive research aimed at improving its search behavior across diverse optimization problems. Existing studies are commonly grouped into three categories: parameter control–based methods, topology-oriented designs, and hybrid multi-strategy approaches.

2.2.1. Parameter Control–Based PSO

Parameter control methods regulate particle dynamics by adjusting key coefficients, such as inertia weight, acceleration parameters, or velocity bounds, in order to influence the search trajectory during evolution. For example, Yang et al. [29] proposed a nonlinear time-varying inertia weight scheme combined with low-discrepancy initialization to enhance global coverage. Song et al. [30] introduced a fractional-order adaptive velocity parameter to perturb particle updates based on evolutionary states. Minh et al. [31] incorporated an additional velocity component governed by a decreasing function to gradually shift from exploration to exploitation. Meng et al. [32] dynamically adjusted paradigm proportions and contraction coefficients, while Nayeem et al. [33] integrated adaptive inertia regulation with guidance from the Grey Wolf Optimizer.

These approaches refine particle movement through adaptive parameter scheduling and have demonstrated effectiveness across various problem settings. Since their regulation primarily operates on update coefficients, the overall population organization and role distribution typically remain unchanged during the search process. As optimization progresses, the collective search behavior is therefore mainly influenced through parameter variation rather than structural reconfiguration.

2.2.2. Topology-Oriented PSO

Topology-oriented PSO variants modify interaction structures among particles to regulate information exchange and collective learning. Li et al. [21] proposed pyramid PSO (PPSO), in which particles are hierarchically organized into multiple fitness-based layers. Information exchange occurs both within layers and across adjacent layers, enabling multi-level learning and diversity preservation. Jin et al. [28] further extended PPSO by incorporating an adaptive constraint-handling mechanism that propagates feasibility information along the hierarchical structure. Flori et al. [22] developed QUAPSO, which partitions the swarm into exploration and exploitation sub-populations with differentiated update rules and multi-position representations. Hu et al. [34] introduced an adaptive elite–common partitioning strategy with cooperative coevolutionary interactions, while Zhou et al. [35] designed a region-based solution selection mechanism that defines neighborhoods around discovered optima to encourage distributed exploration.

These approaches primarily regulate how information is propagated within the swarm through structured communication patterns. In most cases, group assignments are determined by predefined or fitness-based rules, and adjustments mainly occur at the interaction level. In contrast, DCPSO adaptively reallocates particles between functional roles over time through performance-driven migration, enabling dynamic adjustment of population composition rather than solely modifying communication structure.

2.2.3. Hybrid and Multi-Strategy PSO

Hybrid PSO approaches integrate multiple optimization mechanisms within a unified framework to exploit complementary search behaviors. Li et al. [36] proposed a multi-component PSO algorithm that maintains a strategy pool of distinct PSO variants coordinated by a leader-learning mechanism. Adamu et al. [37] hybridized PSO with an enhanced crow search algorithm by introducing time-varying transfer functions and opposition-based learning. Şenel et al. [38] combined PSO with the Grey Wolf Optimizer by probabilistically refining a subset of particles, while Shi et al. [39] embedded genetic crossover and mutation operators into the PSO updating process. Lin et al. [40] incorporated simulated annealing to probabilistically accept inferior solutions and enhance local escape capability.

These frameworks enrich search behavior by introducing external optimization components. DCPSO differs in that adaptability is achieved through internal competition between role-differentiated subpopulations rather than through integration of additional metaheuristics. The additional computational cost in DCPSO mainly arises from periodic performance evaluation and migration operations, remaining moderate compared with hybrid strategies that coordinate multiple optimization modules.

3. Dual-Subpopulation Competitive Particle Swarm Optimization

3.1. Dual-Subpopulation Competitive Framework

The core idea of DCPSO is to maintain two subpopulations with complementary search roles and to dynamically adjust their relative influence according to recent search progress. Intuitively, when one search mode demonstrates stronger improvement, more computational resources are gradually allocated to that mode through particle migration, allowing the swarm structure to evolve along with the optimization process.

At initialization, the population is evenly divided into an exploitation subpopulation (E) and an exploration subpopulation (X), each consisting of N/2 particles, where N denotes the total population size. Although equal partitioning is adopted for simplicity, the framework itself does not rely on strict balance. Due to the adaptive migration mechanism, moderate unequal initial splits were observed to have negligible influence on the final performance, as the population composition is continuously adjusted during evolution.

The overall flowchart of DCPSO is illustrated in Figure 1. As illustrated in Figure 1, the competition–migration cycle is embedded within each iteration of the algorithm, forming a closed-loop structural adaptation process. At each iteration, the two groups evolve independently according to their respective update strategies, thereby reinforcing different search behaviors. After the evolution step, a population-level competition mechanism is triggered.

Figure 1. Overall flowchart of DCPSO.

Specifically, the recent search efficiency of each group is evaluated through subpopulation progress assessment (Section 3.1.1), based on which the dominant and inferior subpopulations at the current stage are identified. Subsequently, a winning intensity indicator (Section 3.1.2) is computed to quantify the dominance degree of the superior group, which further determines the number of particles to be migrated. Finally, through a stagnated-particle migration mechanism (Section 3.1.3), the dominant subpopulation takes over a portion of the most severely stagnated particles from the inferior one. Through this competitive interaction, the population structure is dynamically adjusted to accommodate the evolving requirements of the search process.

3.1.1. Subpopulation Progress Evaluation

This subsection quantifies the recent evolutionary effectiveness of each subpopulation using an improvement-rate metric computed over a sliding window. At iteration $t$, the improvement rate of subpopulation $k \in \{E, X\}$ is defined as

$\rho_k(t) = \dfrac{\left|\left\{\, i \in P_k : f(x_i^t) < f(x_i^{t-1}) \,\right\}\right|}{|P_k|}$, (3)

where $P_k$ denotes the particle set of subpopulation $k$, and $f(x_i^t)$ represents the fitness value of particle $i$ at iteration $t$.

To mitigate the influence of stochastic fluctuations in a single iteration, a sliding window of length W is adopted. The cumulative improvement rate within this window is computed as

$S_k(t) = \sum_{\tau = t-W+1}^{t} \rho_k(\tau), \quad k \in \{E, X\}$. (4)

The window length W controls the trade-off between responsiveness and stability. A moderate value was selected based on empirical evaluation, and its influence is further analyzed in the parameter sensitivity study presented in Section 4.3. The group exhibiting a higher cumulative improvement rate is regarded as the dominant one at the current stage.
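The improvement rate of Eq. (3) and the windowed sum of Eq. (4) are cheap to maintain incrementally. A minimal sketch follows; the `ProgressWindow` helper name is ours, not the paper's:

```python
from collections import deque

def improvement_rate(improved_flags):
    # Eq. (3): fraction of particles in a subpopulation whose fitness improved
    return sum(improved_flags) / len(improved_flags)

class ProgressWindow:
    """Sliding-window cumulative improvement rate S_k(t) (Eq. (4)).
    A running sum gives O(1) amortized cost per iteration."""
    def __init__(self, W=10):
        self.window = deque(maxlen=W)
        self.total = 0.0

    def push(self, rho):
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]   # oldest value leaves the window
        self.window.append(rho)            # deque evicts the oldest entry itself
        self.total += rho
```

One `ProgressWindow` instance per subpopulation suffices; comparing the two `total` values identifies the currently dominant group.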

3.1.2. Winning Intensity Indicator

To characterize the dominance degree of the winning group over the losing one, a winning intensity indicator is introduced. Based on the cumulative improvement capability of each group, the winning intensity at iteration t is defined as

$\eta(t) = \dfrac{|S_E(t) - S_X(t)|}{S_E(t) + S_X(t) + \varepsilon}$, (5)

where $\varepsilon$ is a small positive constant introduced to avoid division by zero. The value $\eta(t) \in [0, 1)$ reflects the relative performance gap between the two groups: larger values indicate stronger dominance, whereas values close to zero imply nearly balanced competition.

3.1.3. Stagnated Particle Migration Mechanism

The proposed competition achieves adaptivity by adjusting the population structure through particle migration. The competition and migration procedure is performed at every iteration, ensuring timely structural adjustment in response to ongoing search dynamics.

When a subpopulation is identified as dominant within the sliding window, particles in the inferior group are ranked according to their stagnation degree, defined as the number of consecutive iterations without fitness improvement. The top m most severely stagnated particles are selected and reassigned to the dominant group. The migration size is adaptively determined as

$m = \eta(t) \cdot m_{\max}$, (6)

where $m_{\max}$ is a predefined upper bound on the number of migratable particles, typically set as a small fraction of the subpopulation size to avoid excessive structural perturbations.

When the two groups exhibit comparable performance (i.e., η(t) falls below a predefined threshold), a small number of particles randomly exchange their strategy labels between subpopulations. This lightweight perturbation helps prevent structural stagnation and maintains population diversity.

It should be emphasized that migration only changes the strategy affiliation of particles; their positions, velocities, and historical best information remain unchanged, thereby preserving search continuity and memory.

3.2. Exploitation Subpopulation Strategy

This subsection describes the exploitation strategy, which aims to conduct stable and refined local search. Neighborhood learning and a constriction factor are jointly employed. To prevent excessive amplification of global-best information that may lead to rapid population homogenization, neighborhood interactions are restricted within the exploitation subpopulation, and the influence of the global best is intentionally weakened.

3.2.1. Neighborhood-Based Learning

An extended ring topology is adopted to construct the local cooperation mechanism. For each exploitation particle $i$, its neighborhood $N_i$ consists of the $K$ adjacent particles on each side of the ring. Only neighbors belonging to the exploitation group are retained, forming the effective neighborhood $\tilde{N}_i$.

The neighborhood best position $\mathrm{lbest}_i$ is defined as

$\mathrm{lbest}_i = \arg\min_{j \in \{i\} \cup \tilde{N}_i} f(\mathrm{pbest}_j)$. (7)

By confining information flow to locally high-quality substructures, this elite-restricted learning mechanism enhances exploitation efficiency while suppressing premature convergence caused by global-best dominance.
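The elite-restricted neighborhood best of Eq. (7) can be computed directly from the ring layout. This sketch assumes particles are arranged on the ring by index, which the paper does not specify:

```python
def local_best(i, pbest_f, labels, K=3):
    """Effective ring-neighborhood best (Eq. (7)): scan the K particles on each
    side of i on the ring, keep only exploitation ('E') members, include i
    itself, and return the index with the best (lowest) personal-best fitness."""
    N = len(pbest_f)
    candidates = [i]
    for off in range(1, K + 1):
        for j in ((i - off) % N, (i + off) % N):
            if labels[j] == 'E':          # restrict to the exploitation group
                candidates.append(j)
    return min(candidates, key=lambda j: pbest_f[j])
```

Since exploration-labeled neighbors are simply skipped, the effective neighborhood shrinks or grows as migration reassigns labels, with no extra bookkeeping.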

3.2.2. Velocity and Position Update

Let $x_i^t \in \mathbb{R}^D$ and $v_i^t$ denote the position and velocity of particle $i$ at iteration $t$, respectively. The personal best position is denoted by $\mathrm{pbest}_i$, and the global best position by $\mathrm{gbest}$. A constriction-factor-based velocity update is adopted to enhance stability:

$v_i^{t+1} = \chi v_i^t + c_1 r_1 (\mathrm{pbest}_i - x_i^t) + c_2 r_2 (\mathrm{lbest}_i - x_i^t) + c_3 r_3 (\mathrm{gbest} - x_i^t)$, (8)

where $\chi$ is the constriction factor, and $c_1$, $c_2$, and $c_3$ are acceleration coefficients corresponding to individual, neighborhood, and global guidance, respectively. Although local neighborhood learning dominates the exploitation process, a weakened global-best component is retained to provide global directional consistency and prevent excessive isolation of local neighborhoods. This mild global guidance helps maintain convergence coherence without inducing rapid population homogenization.

The position is updated as

$x_i^{t+1} = x_i^t + v_i^{t+1}$. (9)

3.3. Exploration Subpopulation Strategy

The exploration strategy aims to maintain global diversity and continuously supply promising candidate solutions to the exploitation process. Unlike the exploitation strategy, it does not explicitly rely on global or neighborhood best positions. Instead, a dimension-wise random learning mechanism is employed to promote information recombination across particles and dimensions, thereby alleviating dimension coupling in high-dimensional spaces.

3.3.1. Dimension Learning Probability

To induce differentiated exploration behaviors, a fitness-ranking-based dimension learning probability is introduced. Let $N_X$ denote the size of the exploration subpopulation, and let $r_i \in \{1, 2, \dots, N_X\}$ represent the fitness rank of particle $i$, where a smaller rank indicates better fitness. The dimension learning probability is defined as

$P_i = P_{\min} + (P_{\max} - P_{\min}) \cdot \dfrac{\exp\!\left(\alpha \frac{r_i - 1}{N_X - 1}\right) - 1}{\exp(\alpha) - 1}$. (10)

Here, $P_{\min}$ and $P_{\max}$ are the lower and upper bounds of the learning probability, respectively, and $\alpha$ is the learning factor. This monotonic mapping assigns lower learning probabilities to better-performing particles, preserving their search structures, while encouraging poorer-performing ones to explore unknown regions through more aggressive recombination.
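Eq. (10) is straightforward to implement. The parameter defaults below are illustrative, not the paper's recommended values:

```python
import math

def learning_probability(rank, N_X, p_min=0.05, p_max=0.9, alpha=3.0):
    """Eq. (10): rank-based dimension learning probability. The best particle
    (rank 1) receives p_min and the worst (rank N_X) receives p_max, with an
    exponential ramp in between controlled by the learning factor alpha."""
    frac = (rank - 1) / (N_X - 1)                 # 0 for the best, 1 for the worst
    return p_min + (p_max - p_min) * (math.expm1(alpha * frac) / math.expm1(alpha))
```

The `math.expm1` form is numerically equivalent to the explicit `(exp(x) - 1)` ratio of Eq. (10) and slightly more stable for small `alpha`.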

3.3.2. Dimension Learning and Update Mechanism

Based on $P_i$, a learning vector $e_i$ is constructed. For each dimension $d$, with probability $P_i$ particle $i$ selects two particles $a$ and $b$ from the exploration subpopulation and inherits the $d$-th component of the personal best position of the better one; otherwise, it retains its own component:

$e_{i,d} = \begin{cases} \mathrm{pbest}_{\kappa,d}, & \text{if } \mathrm{rand} < P_i, \ \kappa = \arg\min_{j \in \{a, b\}} f(\mathrm{pbest}_j), \\ \mathrm{pbest}_{i,d}, & \text{otherwise}. \end{cases}$ (11)

The velocity update is given by

$v_i^{t+1} = \omega(t)\, v_i^t + c\, r\, (e_i - x_i^t)$, (12)

where $\omega(t)$ is a time-decreasing inertia weight. The position update follows

$x_i^{t+1} = x_i^t + v_i^{t+1}$. (13)

Through dimension-wise exemplar recombination, the exploration subpopulation continuously generates structurally diverse candidate solutions, thereby effectively complementing the exploitation process. The dimension-wise recombination is performed independently for each coordinate, resulting in an additional computational cost proportional to O(D) per particle, which is comparable to standard velocity update operations. Therefore, the mechanism remains scalable for high-dimensional problems.
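The exemplar construction of Eq. (11) and the update of Eqs. (12) and (13) can be sketched per particle as follows. The linearly decreasing omega(t) schedule and all parameter defaults are assumptions of this sketch, not values specified by the paper:

```python
import numpy as np

def exploration_step(i, x, v, pbest, pbest_f, explorers, P_i, t, T,
                     omega_max=0.9, omega_min=0.4, c=1.5, rng=None):
    """Dimension-wise exemplar learning (Eqs. (11)-(13)) for explorer i.
    `explorers` lists the indices of the exploration subpopulation."""
    rng = rng or np.random.default_rng()
    D = x.shape[1]
    e = pbest[i].copy()                 # default: keep own pbest components
    for d in range(D):
        if rng.random() < P_i:          # learn this dimension with probability P_i
            a, b = rng.choice(explorers, size=2, replace=False)
            kappa = a if pbest_f[a] <= pbest_f[b] else b   # better of the two
            e[d] = pbest[kappa, d]      # inherit its d-th pbest component
    omega = omega_max - (omega_max - omega_min) * t / T    # time-decreasing inertia
    v[i] = omega * v[i] + c * rng.random(D) * (e - x[i])   # Eq. (12)
    x[i] = x[i] + v[i]                                     # Eq. (13)
    return x, v
```

Only the coordinates that fire the probability test are recombined, which is exactly what keeps the per-particle overhead at O(D) as stated above.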

Conceptually, the dimension-wise exemplar selection shares similarity with differential evolution–style coordinate recombination. However, instead of constructing trial vectors through differential mutation, the proposed mechanism directly leverages personal best information within the exploration subpopulation, maintaining compatibility with the PSO updating framework.

3.4. Computational Complexity

This subsection analyzes the computational complexity of the proposed DCPSO. Let $N$ denote the population size, $D$ the problem dimensionality, $T$ the maximum number of iterations, $K$ the neighborhood radius in the exploitation subpopulation, $W$ the sliding-window length, and $C_f$ the cost of a single fitness evaluation.

At each iteration, both subpopulations perform particle updates. For the exploitation subpopulation, neighborhood-best identification incurs a cost of $O(NK)$, while velocity and position updates require $O(ND)$. For the exploration subpopulation, dimension-wise learning and state updates also require $O(ND)$. The competition mechanism introduces additional overheads: computing improvement rates is $O(N)$, and fitness-based ranking in the exploration subpopulation, together with stagnated-particle selection for migration, incurs $O(N \log N)$. The sliding-window accumulation can be implemented with a running sum, resulting in $O(1)$ amortized cost per iteration.

In addition, fitness evaluation dominates the algorithmic cost, requiring $O(N C_f)$ per iteration. Therefore, the overall time complexity per iteration is

$O\!\left(N C_f + ND + NK + N \log N\right)$, (14)

and the total computational complexity over T iterations is

$O\!\left(T\,(N C_f + ND + NK + N \log N)\right)$. (15)

When the fitness evaluation is computationally expensive, the overall cost is dominated by the $O(T N C_f)$ term; otherwise, the additional overhead introduced by neighborhood learning and competitive adaptation remains moderate and scales linearly or near-linearly with the population size.

For comparison, the per-iteration time complexity of standard PSO is $O(N C_f + ND)$, dominated by velocity–position updates and fitness evaluations. AFMPSO introduces additional forgetting and centroid-guided operations that scale linearly with population size and dimension, thus remaining $O(N C_f + ND)$ with a slightly larger constant factor. SDPSO requires extra strategy payoff evaluation and probability updating, incurring an additional $O(N)$ overhead while maintaining the same asymptotic order, i.e., $O(N C_f + ND)$.

In contrast, DCPSO further incorporates neighborhood learning and competition-based migration, leading to $O(N C_f + ND + NK + N \log N)$ per iteration. Since $K$ is a small constant and fitness evaluation typically dominates the computational cost in practical applications, the overall complexity remains effectively governed by $O(N C_f)$, indicating that the additional structural adaptation overhead is moderate and asymptotically comparable to standard and multi-strategy PSO variants.

3.5. Algorithmic Procedure

Based on the proposed dual-subpopulation competitive framework and the corresponding exploitation and exploration strategies, the overall procedure of DCPSO is summarized in Algorithm 1.

Algorithm 1: Dual-Subpopulation Competitive Particle Swarm Optimization
[Algorithm 1 is provided as an image in the original publication.]
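As a complement to the image-based listing, the overall control flow of Algorithm 1 can be sketched as follows. The subpopulation update rules here are simplified stand-ins (plain inertia-weight PSO moves with different omega values), so this sketch reproduces the competition–migration loop, not DCPSO's exact operators; all parameter defaults and the search bounds are illustrative:

```python
import numpy as np

def dcpso(f, D, N=100, T=1000, W=10, m_max=5, eta_min=0.05, seed=None):
    """High-level skeleton of the DCPSO loop: role-specific evolution,
    sliding-window progress assessment (Eqs. (3)-(4)), winning intensity
    (Eq. (5)), and stagnated-particle migration (Eq. (6))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100, 100, (N, D)); v = np.zeros((N, D))
    pbest = x.copy(); pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    labels = np.array(['E'] * (N // 2) + ['X'] * (N - N // 2))  # even initial split
    stagnation = np.zeros(N, dtype=int)
    hist = {'E': [], 'X': []}
    for t in range(T):
        # 1) role-specific evolution (simplified: shared rule, different inertia)
        for k, omega in (('E', 0.4), ('X', 0.9)):
            idx = np.where(labels == k)[0]
            r1, r2 = rng.random((len(idx), D)), rng.random((len(idx), D))
            v[idx] = (omega * v[idx] + 1.5 * r1 * (pbest[idx] - x[idx])
                      + 1.5 * r2 * (gbest - x[idx]))
            x[idx] += v[idx]
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        stagnation = np.where(improved, 0, stagnation + 1)
        pbest[improved] = x[improved]; pbest_f[improved] = fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
        # 2) subpopulation progress within a sliding window (Eqs. (3)-(4))
        S = {}
        for k in ('E', 'X'):
            mask = labels == k
            hist[k].append(improved[mask].mean() if mask.any() else 0.0)
            S[k] = sum(hist[k][-W:])
        # 3) winning intensity and stagnated-particle migration (Eqs. (5)-(6))
        eta = abs(S['E'] - S['X']) / (S['E'] + S['X'] + 1e-12)
        winner, loser = ('E', 'X') if S['E'] >= S['X'] else ('X', 'E')
        if eta >= eta_min:
            losers = np.where(labels == loser)[0]
            order = losers[np.argsort(-stagnation[losers])]  # most stagnated first
            labels[order[:int(eta * m_max)]] = winner
    return gbest, pbest_f.min()
```

The skeleton makes the closed-loop structure visible: evolution, progress assessment, competition, and migration occur once per iteration, exactly as in the flowchart of Figure 1.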

4. Experiments on Benchmark Functions

In this section, extensive experiments on benchmark functions are conducted to systematically investigate the performance of the proposed DCPSO algorithm. The experimental study is divided into three independent stages. First, a comprehensive performance comparison is carried out using the widely adopted CEC2017 benchmark suite [41] against several state-of-the-art PSO variants, including numerical results, convergence behaviors, and statistical significance tests. Second, a sensitivity analysis is performed to examine the influence of key algorithm parameters. Finally, an ablation study is conducted to isolate and verify the contributions of the specific algorithmic components introduced in the framework.

4.1. Benchmark Settings

4.1.1. Test Environment

All simulation experiments are carried out in the MATLAB 2024a environment. The platform runs on a workstation equipped with the Windows 11 (25H2) operating system. The hardware configuration includes an Intel(R) Core(TM) i7-14700 (2.10 GHz) processor and 32 GB of system memory.

4.1.2. Benchmark Problems

The CEC2017 benchmark test suite is selected as the test platform due to its wide acceptance and diverse problem characteristics. A notable feature of this benchmark is the extensive use of rotation and shift operations, which significantly increase variable coupling and non-separability. As a result, algorithms cannot easily locate the global optimum by optimizing individual dimensions independently.

The CEC2017 test suite consists of 29 benchmark functions, which can be categorized into four groups according to different search characteristics. Unimodal functions F1 and F3 contain only one global optimum without local optima. These functions are mainly used to evaluate the convergence speed and exploitation capability of algorithms. Multimodal functions F4 to F10 include multiple local optima. They are designed to assess the global exploration ability of algorithms and their capability to escape from local optima and avoid premature convergence. Hybrid functions F11 to F20 divide the variables into several groups, where each group is governed by a different basic function. This setting simulates complex real-world problems in which variables exhibit heterogeneous characteristics and requires algorithms to balance exploration and exploitation. Composition functions F21 to F30 are constructed by combining multiple basic functions using weighted schemes. They exhibit highly complex landscapes with numerous local traps and strong asymmetry, providing a rigorous test for an algorithm’s ability to balance global search and local refinement in complex search spaces.

4.1.3. Parameter Settings

The parameters involved in DCPSO and the other state-of-the-art comparison algorithms are divided into common parameters and algorithm-specific parameters.

For the common parameters, widely used settings in the literature are adopted. The population size is fixed at N=100, and the maximum number of iterations is set to T=1000, resulting in a maximum of 100,000 fitness evaluations. The problem dimension of the CEC2017 benchmark functions is set to D=30. Following common practice in evolutionary computation, each algorithm is independently executed 30 times on every benchmark function. This number of independent runs is widely adopted in the literature to ensure statistical reliability and to mitigate the influence of stochastic variability. The mean error and standard deviation are recorded for performance evaluation to reduce the influence of randomness.

For the algorithm-specific parameters, all comparison algorithms employ the parameter values recommended by their original authors to ensure fairness. The detailed parameter configurations of DCPSO and the baseline algorithms are listed in Table 1. The control parameters of DCPSO are fixed according to the empirical settings obtained from the parameter sensitivity analysis presented in Section 4.3. No additional tuning was performed for individual benchmark functions, ensuring fair and consistent comparisons across all experiments.

Table 1.

Parameter settings of DCPSO and the peer algorithms.

Algorithm | Parameter Settings
DCPSO | c1 = c2 = 1.5, c3 = 0.1, W = 10, K = 3
ACEPSO [34] | Dp = 0.7, Dis = 0.65, Nne = 2.4
AFMPSO [42] | ε = 0.01·DN, r = 0.5
VPPSO [43] | α = 0.3, N1 = 50, N2 = 50
SDPSO [27] | α = 0.1, r = 4, LP = 200
MPSO [18] | r(t+1) = 4r(t)(1 − r(t)), r(0) = rand
PSO [11] | c1 = c2 = 1.5

4.2. Performance Comparison on CEC2017

To comprehensively evaluate the effectiveness of DCPSO, comparative experiments are conducted on all 29 benchmark functions. Five state-of-the-art PSO variants, together with the standard PSO, are selected as competitors. The detailed numerical results, including mean performance, standard deviation, and ranking statistics, are summarized in Section 4.2.1. The convergence behaviors on selected functions are illustrated in Section 4.2.2. In addition, statistical significance analysis is employed in Section 4.2.3 to examine the significance of the performance differences.

4.2.1. Numerical Results

Table 2 reports the numerical performance of all compared algorithms on the 29 benchmark functions from the CEC2017 test suite in terms of mean error, standard deviation, and overall ranking. The results indicate that DCPSO demonstrates a consistent performance advantage over the baseline methods. Specifically, DCPSO achieves the best fitness values on 20 of the 29 test functions and attains the lowest average rank of 1.448, which is superior to all competing algorithms.

Table 2.

Experimental results on the CEC2017 benchmark functions.

Func. DCPSO ACEPSO AFMPSO VPPSO SDPSO MPSO PSO
F1 Mean 1.197×10^3 3.762×10^3 9.568×10^3 2.558×10^3 6.429×10^3 1.798×10^8 1.263×10^10
Std 9.667×10^2 1.368×10^3 4.426×10^3 4.619×10^3 6.363×10^3 2.687×10^8 6.178×10^9
Rank 1 3 5 2 4 6 7
F3 Mean 7.418×10^3 8.946×10^2 9.952×10^1 1.646×10^3 2.729×10^4 8.249×10^3 2.444×10^4
Std 2.261×10^3 1.285×10^3 4.419×10^1 6.942×10^2 7.283×10^3 3.305×10^3 1.678×10^4
Rank 4 2 1 3 7 5 6
F4 Mean 4.988×10^0 9.536×10^1 8.425×10^1 9.407×10^1 8.788×10^1 1.205×10^2 1.322×10^3
Std 9.325×10^0 1.134×10^1 2.243×10^1 2.072×10^1 1.031×10^1 5.031×10^1 1.222×10^3
Rank 1 5 2 4 3 6 7
F5 Mean 6.419×10^1 8.776×10^1 1.216×10^2 1.601×10^2 8.612×10^1 7.107×10^1 1.458×10^2
Std 1.160×10^1 3.906×10^0 1.662×10^1 2.200×10^1 6.566×10^1 2.516×10^1 3.895×10^1
Rank 1 4 5 7 3 2 6
F6 Mean 2.317×10^-4 6.461×10^0 3.815×10^1 4.180×10^1 2.098×10^-1 3.662×10^0 1.955×10^1
Std 3.377×10^-4 5.575×10^-1 6.309×10^0 7.208×10^0 1.940×10^-1 1.760×10^0 7.027×10^0
Rank 1 4 6 7 2 3 5
F7 Mean 8.527×10^1 6.009×10^1 1.786×10^2 2.455×10^2 1.426×10^2 1.313×10^2 2.371×10^2
Std 1.008×10^1 6.067×10^0 3.200×10^1 4.535×10^1 7.277×10^1 3.650×10^1 6.369×10^1
Rank 2 1 5 7 4 3 6
F8 Mean 6.023×10^1 8.484×10^1 9.237×10^1 1.197×10^2 5.298×10^1 6.851×10^1 1.421×10^2
Std 1.110×10^1 3.998×10^0 1.615×10^1 2.164×10^1 4.207×10^1 2.591×10^1 3.414×10^1
Rank 2 4 5 6 1 3 7
F9 Mean 2.218×10^0 3.613×10^1 1.524×10^3 2.310×10^3 1.213×10^1 1.217×10^2 2.566×10^3
Std 1.758×10^0 1.221×10^1 4.900×10^2 7.052×10^2 1.343×10^1 1.259×10^2 1.221×10^3
Rank 1 3 5 6 2 4 7
F10 Mean 3.443×10^3 3.135×10^3 3.929×10^3 3.821×10^3 7.155×10^3 4.223×10^3 4.058×10^3
Std 6.198×10^2 2.040×10^3 5.552×10^2 4.113×10^2 2.378×10^2 6.733×10^2 6.566×10^2
Rank 2 1 4 3 7 6 5
F11 Mean 7.935×10^1 1.066×10^2 1.496×10^2 1.303×10^2 1.108×10^2 1.212×10^2 5.463×10^2
Std 1.707×10^1 5.754×10^1 3.997×10^1 5.111×10^1 5.543×10^1 5.544×10^1 3.068×10^2
Rank 1 2 6 5 3 4 7
F12 Mean 1.487×10^4 5.091×10^5 1.659×10^6 2.715×10^6 2.184×10^5 1.895×10^6 5.703×10^8
Std 6.154×10^3 2.499×10^4 1.162×10^6 1.837×10^6 4.046×10^5 1.866×10^6 5.340×10^8
Rank 1 2 4 6 3 5 7
F13 Mean 5.633×10^3 1.397×10^4 3.686×10^4 7.255×10^4 3.033×10^4 1.399×10^4 7.601×10^7
Std 4.118×10^3 1.197×10^4 1.542×10^4 3.865×10^4 2.477×10^4 8.522×10^3 3.258×10^8
Rank 1 2 5 6 4 3 7
F14 Mean 2.195×10^3 5.868×10^3 3.830×10^2 1.499×10^3 2.998×10^4 2.735×10^3 8.627×10^4
Std 1.749×10^3 2.578×10^3 1.782×10^2 1.636×10^3 2.568×10^4 2.692×10^3 8.161×10^4
Rank 3 5 1 2 6 4 7
F15 Mean 1.373×10^3 1.871×10^3 8.951×10^3 3.932×10^4 7.638×10^3 1.558×10^3 8.265×10^4
Std 1.099×10^3 1.249×10^3 5.169×10^3 2.108×10^4 8.859×10^3 1.369×10^3 7.095×10^4
Rank 1 3 5 6 4 2 7
F16 Mean 5.423×10^2 6.697×10^2 1.142×10^3 1.135×10^3 9.480×10^2 7.791×10^2 1.261×10^3
Std 1.484×10^2 2.449×10^2 2.770×10^2 3.162×10^2 4.574×10^2 2.949×10^2 3.170×10^2
Rank 1 2 6 5 4 3 7
F17 Mean 1.329×10^2 2.540×10^2 5.753×10^2 5.080×10^2 2.175×10^2 3.102×10^2 6.496×10^2
Std 5.634×10^1 5.125×10^1 1.736×10^2 1.862×10^2 1.300×10^2 1.531×10^2 2.267×10^2
Rank 1 3 6 5 2 4 7
F18 Mean 6.136×10^4 1.435×10^5 3.914×10^4 7.143×10^4 5.970×10^5 8.640×10^4 9.791×10^5
Std 2.614×10^4 4.023×10^4 1.656×10^4 2.817×10^4 5.960×10^5 8.542×10^4 2.213×10^6
Rank 2 5 1 3 6 4 7
F19 Mean 2.660×10^3 5.309×10^3 5.970×10^3 2.612×10^4 1.054×10^4 3.315×10^3 2.709×10^6
Std 1.988×10^3 1.465×10^3 5.370×10^3 2.117×10^4 1.132×10^4 2.800×10^3 4.869×10^6
Rank 1 3 4 6 5 2 7
F20 Mean 2.623×10^2 3.424×10^2 5.209×10^2 5.121×10^2 2.348×10^2 2.245×10^2 4.501×10^2
Std 4.629×10^1 4.700×10^1 1.230×10^2 1.256×10^2 1.396×10^2 1.080×10^2 1.902×10^2
Rank 3 4 7 6 2 1 5
F21 Mean 2.541×10^2 2.875×10^2 3.120×10^2 3.539×10^2 2.692×10^2 2.622×10^2 3.526×10^2
Std 1.021×10^1 4.859×10^0 1.801×10^1 5.407×10^1 4.292×10^1 1.878×10^1 4.136×10^1
Rank 1 4 5 7 3 2 6
F22 Mean 1.000×10^2 2.279×10^2 8.914×10^2 1.959×10^3 2.607×10^3 3.618×10^2 3.152×10^3
Std 2.215×10^-13 1.684×10^3 1.515×10^3 1.930×10^3 3.354×10^3 8.871×10^2 1.661×10^3
Rank 1 2 4 5 6 3 7
F23 Mean 4.141×10^2 4.801×10^2 6.553×10^2 6.042×10^2 4.273×10^2 4.341×10^2 6.520×10^2
Std 2.829×10^1 1.875×10^1 6.660×10^1 6.131×10^1 1.787×10^1 2.852×10^1 7.561×10^1
Rank 1 4 7 5 2 3 6
F24 Mean 4.768×10^2 5.930×10^2 6.965×10^2 6.588×10^2 5.364×10^2 4.901×10^2 7.336×10^2
Std 1.170×10^1 1.330×10^1 6.009×10^1 5.936×10^1 5.028×10^1 2.579×10^1 7.378×10^1
Rank 1 4 6 5 3 2 7
F25 Mean 3.873×10^2 4.008×10^2 4.146×10^2 4.071×10^2 3.880×10^2 4.669×10^2 6.265×10^2
Std 1.252×10^0 1.634×10^0 2.237×10^1 2.092×10^1 1.238×10^0 3.344×10^1 2.116×10^2
Rank 1 3 5 4 2 6 7
F26 Mean 9.212×10^2 4.472×10^2 3.264×10^3 2.669×10^3 1.683×10^3 1.415×10^3 3.512×10^3
Std 7.164×10^2 4.066×10^2 1.256×10^3 1.285×10^3 1.982×10^2 1.001×10^3 6.532×10^2
Rank 2 1 6 5 4 3 7
F27 Mean 5.232×10^2 5.416×10^2 6.099×10^2 6.153×10^2 5.296×10^2 5.457×10^2 6.221×10^2
Std 7.246×10^0 1.475×10^1 4.502×10^1 6.734×10^1 1.541×10^1 2.282×10^1 6.306×10^1
Rank 1 3 5 6 2 4 7
F28 Mean 3.000×10^2 4.296×10^2 4.301×10^2 4.292×10^2 4.459×10^2 5.838×10^2 1.616×10^3
Std 2.163×10^-6 3.987×10^1 1.830×10^1 2.133×10^1 5.399×10^1 9.307×10^1 9.161×10^2
Rank 1 3 4 2 5 6 7
F29 Mean 6.988×10^2 7.366×10^2 1.425×10^3 1.421×10^3 6.376×10^2 7.655×10^2 1.197×10^3
Std 6.947×10^1 1.314×10^2 2.278×10^2 2.101×10^2 1.267×10^2 2.090×10^2 2.809×10^2
Rank 2 3 7 6 1 4 5
F30 Mean 3.383×10^3 1.137×10^4 1.616×10^5 5.854×10^5 8.408×10^3 1.751×10^4 5.346×10^6
Std 5.944×10^2 1.827×10^3 7.144×10^4 2.515×10^5 4.069×10^3 2.235×10^4 7.217×10^6
Rank 1 3 4 6 2 5 7
Mean Rank 1.448 3.034 4.690 5.034 3.517 3.724 6.552
Final Rank 1 2 5 6 3 4 7

Among the baseline methods, ACEPSO ranks second with an average ranking of 3.034, followed by SDPSO with 3.517, MPSO with 3.724, AFMPSO with 4.690, VPPSO with 5.034, and the canonical PSO, which performs the worst with an average rank of 6.552. In addition to its dominant ranking performance, DCPSO also yields lower mean errors and smaller standard deviations on the majority of benchmark functions, indicating not only higher solution accuracy but also improved robustness and stability.

The performance advantage of DCPSO is more evident on the hybrid function set (F11–F20) and the composition function set (F21–F30), which are characterized by heterogeneous variable groupings and highly complex landscape structures. These functions typically require alternating phases of global exploration and refined local search, as different regions of the search space may demand different search intensities during evolution. The dual-subpopulation competitive framework allows exploration and exploitation tendencies to operate in parallel, while their relative influence is adaptively adjusted according to recent improvement trends. Such structural flexibility may help maintain diversity in early stages and reinforce local refinement when promising regions emerge. This complementary interaction mechanism could provide additional adaptability when addressing heterogeneous and dynamically shifting landscapes, thereby contributing to the observed performance trends in these complex problem categories.

4.2.2. Convergence Behavior

To further evaluate the search dynamics of DCPSO, the average convergence curves over 30 independent runs are recorded on the odd-numbered benchmark functions of the CEC2017 test suite. The corresponding convergence curves are shown in Figure 2.

Figure 2.

Average convergence trends of DCPSO and 6 comparison algorithms on selected CEC2017 functions.

DCPSO exhibits favorable convergence behavior in terms of both convergence speed and final solution quality. On 12 out of the 15 selected benchmark functions, DCPSO converges to better fitness values within the given computational budget. In most cases, DCPSO achieves a faster reduction in fitness during the early and middle stages of the optimization process and maintains superior performance throughout the search.

It is worth noting that on function F3 (shown in Figure 2b), DCPSO exhibits relatively slower convergence compared with some competitors. Function F3 in CEC2017 is a unimodal problem with strong variable coupling introduced by rotation operations. Since DCPSO maintains a persistent exploration subpopulation and applies restrained global-best influence within the exploitation group, its search process may remain slightly more conservative in fully unimodal landscapes where rapid centralized convergence is beneficial. This behavior reflects a trade-off inherent in maintaining structural diversity, which is more advantageous for complex multimodal and composite functions but may result in less aggressive convergence on simpler unimodal cases.

In contrast, several baseline algorithms converge more slowly or suffer from premature stagnation, particularly on benchmark functions with complex search landscapes. These behaviors indicate limitations in balancing exploration and exploitation during the optimization process.

In addition, the convergence curves of DCPSO are notably smooth, suggesting stable search dynamics without abrupt oscillations. This smooth convergence behavior indicates that DCPSO maintains a controlled and consistent search process, thereby achieving an effective balance between exploration and exploitation.

4.2.3. Statistical Significance Analysis

To further assess the statistical significance of the performance differences between DCPSO and the baseline algorithms, the Wilcoxon signed-rank test is employed [44]. As a nonparametric paired statistical test, the Wilcoxon signed-rank test does not rely on any distributional assumptions and is well suited for performance comparison of stochastic optimization algorithms on the same benchmark problems.

In this study, the Wilcoxon signed-rank test is conducted independently for each benchmark function (function-wise comparison) based on the paired fitness values obtained from 30 independent runs, with the significance level set to 0.05. For each benchmark function, pairwise comparisons are performed between DCPSO and each competing algorithm. The statistical test results are summarized in Table 3, where the symbols “+”, “≈”, and “−” indicate that DCPSO performs significantly better than, statistically equivalent to, or significantly worse than the corresponding algorithm, respectively.
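The function-wise testing procedure can be sketched as follows. In practice one would typically call a library routine such as scipy.stats.wilcoxon; the self-contained illustration below uses the standard normal approximation of the signed-rank statistic (without continuity correction), which is adequate for 30 paired runs. All sample data used with it are hypothetical:

```python
import math

def wilcoxon_signed_rank(x, y, alpha=0.05):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.

    x, y: paired samples (e.g., final errors of two algorithms over 30 runs).
    Returns '+' if x is significantly smaller (better, for minimization),
    '-' if significantly larger, and '≈' otherwise.
    Assumes at least one nonzero paired difference.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    pos = 0
    while pos < n:
        end = pos
        while end + 1 < n and abs(diffs[order[end + 1]]) == abs(diffs[order[pos]]):
            end += 1
        avg = (pos + end) / 2 + 1  # ranks are 1-based
        for k in range(pos, end + 1):
            ranks[order[k]] = avg
        pos = end + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p-value
    if p >= alpha:
        return '≈'
    return '+' if w_plus < mu else '-'
```

The verdict symbols correspond directly to the "+", "≈", and "−" entries of Table 3.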

Table 3.

Wilcoxon signed-rank test results of DCPSO against 6 algorithms on the CEC2017 benchmark functions with significance level α=0.05.

Func. ACEPSO AFMPSO VPPSO SDPSO MPSO PSO
F1 + + + + + +
F3 + +
F4 + + + + + +
F5 + + + + + +
F6 + + + + + +
F7 + + + + +
F8 + + + + +
F9 + + + + + +
F10 + + + + +
F11 + + + + + +
F12 + + + + + +
F13 + + + + + +
F14 + - + + +
F15 + + + + + +
F16 + + + + + +
F17 + + + + + +
F18 + + + + +
F19 + + + + + +
F20 + + + +
F21 + + + + +
F22 + + + + + +
F23 + + + + +
F24 + + + + + +
F25 + + + + + +
F26 + + + + +
F27 + + + + + +
F28 + + + + + +
F29 + + + +
F30 + + + + + +
Better 25 26 27 25 25 29
Similar 0 1 0 2 3 0
Worse 4 2 2 2 1 0

As shown in Table 3, DCPSO achieves a dominant number of statistically significant wins over all baseline algorithms. Specifically, DCPSO records 25 wins and 4 losses against ACEPSO, 26 wins, 1 tie, and 2 losses against AFMPSO, 27 wins and 2 losses against VPPSO, 25 wins, 2 ties, and 2 losses against SDPSO, and 25 wins with 3 ties and only 1 loss against MPSO. In comparison with the canonical PSO, DCPSO achieves statistically significant improvements on all 29 benchmark functions, resulting in 29 wins and no losses.

The consistently high win ratios across all pairwise comparisons indicate that the observed performance advantages of DCPSO are statistically robust rather than caused by random variations. These statistical findings are in strong agreement with the numerical results and convergence analyses presented earlier. The overall comparison results further confirm that DCPSO consistently outperforms the baseline algorithms on the CEC2017 benchmark suite, demonstrating its robustness and effectiveness across a wide range of optimization problems.

4.3. Parameter Sensitivity Analysis

In DCPSO, two key parameters play an important role in controlling the search behavior, namely the sliding window length W used for subpopulation progress evaluation and the neighborhood size K adopted in the exploitation subpopulation. To examine the influence of these parameters on the algorithm performance, a parameter sensitivity analysis is conducted. When one parameter is analyzed, all other parameters are kept unchanged to ensure the fairness and reliability of the experimental results.

The sliding window length W determines the extent to which historical iterations are considered in evaluating subpopulation performance. Five different values are investigated, including 1, 5, 10, 15, and 20. The experimental results presented in Table 4 indicate that the choice of W has a noticeable impact on the performance of DCPSO. When W is set to a small value, the competition mechanism becomes overly sensitive to short-term fluctuations, which may lead to instability in the population structure. Conversely, excessively large values of W reduce the responsiveness to performance changes, thereby weakening the adaptability of the algorithm. According to the average ranking results, DCPSO achieves the lowest average rank when W=10, suggesting that this value provides a favorable balance between responsiveness and stability. Therefore, W=10 is selected as the recommended setting for DCPSO.
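The role of W can be illustrated with a minimal sketch: each subpopulation keeps a sliding window of its recent best fitness values, and the competition step compares the improvement accumulated over the last W iterations, with zero improvement signaling stagnation. The class below is an illustrative reconstruction of this idea, not the authors' exact formulation:

```python
from collections import deque

class ProgressWindow:
    """Sliding window of recent best-fitness values for one subpopulation.

    Illustrative sketch: window length W corresponds to the parameter analyzed
    above; progress() measures the improvement over the last W iterations.
    """
    def __init__(self, W=10):
        self.history = deque(maxlen=W + 1)  # W improvements span W+1 samples

    def record(self, best_fitness):
        self.history.append(best_fitness)

    def progress(self):
        # Total improvement over the window; 0 means stagnation.
        if len(self.history) < 2:
            return float('inf')  # not enough data yet: treat as progressing
        return self.history[0] - self.history[-1]

# With W=3, only the last 3 improvements are visible to the competition step.
win = ProgressWindow(W=3)
for f in [100.0, 90.0, 85.0, 84.0, 84.0, 84.0]:
    win.record(f)
# The window now spans [85.0, 84.0, 84.0, 84.0], so progress() == 1.0;
# the large early improvements (100 -> 85) have already slid out of view.
```

A small W thus reacts only to the most recent iterations, while a large W averages over a longer history, matching the responsiveness/stability trade-off discussed above.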

Table 4.

Sensitivity analysis of parameter W on the CEC2017 benchmark functions. Reported values are the mean errors over 30 independent runs.

Func. W=1 W=5 W=10 W=15 W=20
F1 1.747×10^3 1.236×10^3 1.115×10^3 1.095×10^3 1.438×10^3
F3 7.490×10^3 7.670×10^3 7.724×10^3 7.776×10^3 8.446×10^3
F4 2.771×10^1 6.532×10^0 4.091×10^0 4.418×10^0 6.521×10^0
F5 7.184×10^1 6.585×10^1 6.537×10^1 6.766×10^1 6.615×10^1
F6 4.172×10^-4 1.806×10^-4 1.783×10^-4 2.559×10^-4 2.878×10^-4
F7 8.483×10^1 7.975×10^1 8.131×10^1 8.467×10^1 8.482×10^1
F8 6.609×10^1 6.390×10^1 6.195×10^1 6.282×10^1 6.526×10^1
F9 3.143×10^0 1.981×10^0 1.655×10^0 2.730×10^0 2.780×10^0
F10 3.917×10^3 3.414×10^3 3.504×10^3 3.472×10^3 3.573×10^3
F11 8.592×10^1 7.932×10^1 7.383×10^1 8.099×10^1 8.233×10^1
F12 2.568×10^4 1.724×10^4 1.618×10^4 1.811×10^4 1.933×10^4
F13 8.744×10^3 6.194×10^3 6.139×10^3 5.479×10^3 5.504×10^3
F14 1.662×10^3 2.650×10^3 2.523×10^3 2.627×10^3 2.692×10^3
F15 1.841×10^3 9.469×10^2 8.552×10^2 1.206×10^3 1.222×10^3
F16 5.225×10^2 5.688×10^2 5.272×10^2 5.428×10^2 5.091×10^2
F17 1.570×10^2 1.287×10^2 1.194×10^2 1.229×10^2 1.425×10^2
F18 7.338×10^4 6.425×10^4 6.730×10^4 7.060×10^4 7.012×10^4
F19 2.877×10^3 2.701×10^3 2.238×10^3 2.075×10^3 2.155×10^3
F20 2.718×10^2 2.696×10^2 2.630×10^2 2.637×10^2 2.660×10^2
F21 2.629×10^2 2.588×10^2 2.550×10^2 2.579×10^2 2.556×10^2
F22 1.000×10^2 1.000×10^2 1.000×10^2 1.000×10^2 1.000×10^2
F23 4.193×10^2 4.157×10^2 4.143×10^2 4.108×10^2 4.148×10^2
F24 4.748×10^2 4.756×10^2 4.750×10^2 4.729×10^2 4.734×10^2
F25 3.867×10^2 3.869×10^2 3.869×10^2 3.872×10^2 3.878×10^2
F26 1.049×10^3 7.221×10^2 7.666×10^2 8.333×10^2 8.464×10^2
F27 5.228×10^2 5.195×10^2 5.199×10^2 5.211×10^2 5.209×10^2
F28 3.137×10^2 3.000×10^2 3.000×10^2 3.000×10^2 3.014×10^2
F29 7.187×10^2 6.831×10^2 6.727×10^2 6.796×10^2 7.090×10^2
F30 3.922×10^3 3.636×10^3 3.418×10^3 3.492×10^3 3.643×10^3
Mean Rank 4.41 2.79 1.79 2.62 3.38
Final Rank 5 3 1 2 4

The neighborhood size K controls the rate of information sharing within the exploitation subpopulation. Five different values ranging from 1 to 5 are examined. As reported in Table 5, the results show that the algorithm performance is more sensitive when K takes smaller values. This can be attributed to the limited information exchange among particles when K is too small, which may slow down the convergence process. In contrast, a larger neighborhood size facilitates information sharing but may reduce population diversity and increase the risk of premature convergence, leading to a gradual degradation in performance. Based on the average ranking results, DCPSO achieves its best performance when K=3. Consequently, K=3 is adopted as the default neighborhood size in DCPSO.
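As an illustration of how K could regulate information sharing, the sketch below selects each particle's local guide from a ring neighborhood of K particles on each side; the exact neighborhood definition used in DCPSO may differ, so this is only a plausible reading of the parameter:

```python
def ring_neighborhood_best(pbest_fitness, i, K):
    """Index of the best personal-best among particle i's ring neighbors.

    Illustrative sketch: the neighborhood is taken as the K particles on each
    side of i in a ring topology (including i itself). Larger K means faster
    spread of good information, at the cost of diversity.
    """
    n = len(pbest_fitness)
    neighbors = [(i + d) % n for d in range(-K, K + 1)]
    return min(neighbors, key=lambda j: pbest_fitness[j])

# Hypothetical personal-best fitness values of a 6-particle subpopulation.
fits = [5.0, 3.0, 9.0, 1.0, 7.0, 8.0]
# With K=1, particle 0 sees particles 5, 0, 1, so its guide is particle 1;
# with K=3 the neighborhood covers the whole ring and the guide is particle 3.
```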

Table 5.

Sensitivity analysis of parameter K on the CEC2017 benchmark functions. Reported values are the mean errors over 30 independent runs.

Func. K=1 K=2 K=3 K=4 K=5
F1 9.712×10^2 1.115×10^3 1.187×10^3 1.184×10^3 1.147×10^3
F3 1.375×10^4 8.998×10^3 7.421×10^3 6.069×10^3 5.460×10^3
F4 2.023×10^1 7.371×10^0 4.478×10^0 1.302×10^1 7.252×10^0
F5 8.064×10^1 6.754×10^1 6.669×10^1 6.494×10^1 6.498×10^1
F6 7.179×10^-2 4.672×10^-4 2.373×10^-4 4.937×10^-4 1.008×10^-3
F7 9.829×10^1 8.784×10^1 8.306×10^1 8.175×10^1 8.169×10^1
F8 7.189×10^1 6.781×10^1 6.313×10^1 6.201×10^1 6.113×10^1
F9 3.180×10^1 4.248×10^0 2.218×10^0 3.084×10^0 2.849×10^0
F10 3.955×10^3 3.521×10^3 3.534×10^3 3.393×10^3 3.513×10^3
F11 8.947×10^1 8.383×10^1 8.265×10^1 8.119×10^1 7.951×10^1
F12 2.361×10^4 1.975×10^4 1.826×10^4 1.815×10^4 1.979×10^4
F13 7.288×10^3 5.365×10^3 5.678×10^3 5.793×10^3 5.571×10^3
F14 2.128×10^3 2.120×10^3 2.413×10^3 2.886×10^3 3.338×10^3
F15 1.260×10^3 1.375×10^3 1.228×10^3 1.044×10^3 1.569×10^3
F16 5.915×10^2 5.842×10^2 5.555×10^2 5.201×10^2 5.696×10^2
F17 1.481×10^2 1.418×10^2 1.353×10^2 1.439×10^2 1.423×10^2
F18 6.047×10^4 6.165×10^4 6.620×10^4 7.481×10^4 7.034×10^4
F19 2.648×10^3 2.048×10^3 2.660×10^3 2.344×10^3 2.523×10^3
F20 2.712×10^2 2.715×10^2 2.623×10^2 2.702×10^2 2.664×10^2
F21 2.684×10^2 2.571×10^2 2.573×10^2 2.563×10^2 2.533×10^2
F22 1.000×10^2 1.000×10^2 1.000×10^2 1.000×10^2 1.000×10^2
F23 4.290×10^2 4.188×10^2 4.110×10^2 4.161×10^2 4.138×10^2
F24 4.887×10^2 4.826×10^2 4.750×10^2 4.752×10^2 4.760×10^2
F25 3.870×10^2 3.874×10^2 3.869×10^2 3.868×10^2 3.866×10^2
F26 1.166×10^3 9.869×10^2 8.867×10^2 8.259×10^2 8.591×10^2
F27 5.246×10^2 5.226×10^2 5.216×10^2 5.223×10^2 5.211×10^2
F28 3.006×10^2 3.000×10^2 3.000×10^2 3.000×10^2 3.000×10^2
F29 7.875×10^2 7.039×10^2 6.928×10^2 6.629×10^2 7.042×10^2
F30 3.593×10^3 3.412×10^3 3.769×10^3 3.681×10^3 3.693×10^3
Mean Rank 4.38 3.24 2.34 2.41 2.62
Final Rank 5 4 1 2 3

Overall, although the two key parameters have a noticeable influence on the performance of DCPSO, they do not alter the algorithm’s significant performance advantage over the competing methods reported in Section 4.2. The main sensitivity findings can be summarized as follows:

  • A moderate sliding window length (W=10) provides a good balance between responsiveness and stability. Very small values may lead to overly frequent structural fluctuations, while excessively large values may weaken adaptivity.

  • An intermediate neighborhood size (K=3) facilitates sufficient information sharing within the exploitation subpopulation without excessively reducing diversity.

  • Within the tested ranges, DCPSO demonstrates stable relative performance ranking, indicating a certain degree of robustness to moderate parameter variation.

From a practical perspective, when applying DCPSO to new optimization problems, it is recommended to start with the default settings (W=10, K=3) and adjust them moderately if the problem exhibits extremely noisy dynamics (where slightly larger W may improve stability) or requires faster local convergence (where slightly larger K may enhance information sharing). In most standard benchmark and engineering scenarios, extensive parameter tuning is not necessary.

4.4. Ablation Study

To analyze the contribution of different components in DCPSO, ablation experiments are conducted on selected CEC2017 benchmark functions. Three degraded variants are constructed by selectively removing key components, including DCPSO-X without the exploration subpopulation strategy, DCPSO-E without the exploitation subpopulation strategy, and DCPSO-DC without the dual-subpopulation competitive mechanism. All other algorithmic settings are kept unchanged to ensure a fair comparison. The experimental results are summarized in Table 6.

Table 6.

Ablation study results of DCPSO and its variants on the CEC2017 functions.

Func. DCPSO DCPSO-E DCPSO-X DCPSO-DC PSO
F1 1.197×10^3 5.052×10^5 3.073×10^3 2.718×10^3 9.078×10^9
F3 7.418×10^3 6.368×10^4 1.086×10^4 2.728×10^4 2.606×10^4
F4 4.988×10^0 1.338×10^2 2.881×10^1 8.094×10^1 1.175×10^3
F5 6.419×10^1 1.539×10^2 7.412×10^1 8.850×10^1 1.402×10^2
F6 2.317×10^-4 2.622×10^0 1.091×10^-3 1.403×10^-3 1.571×10^1
F7 8.527×10^1 2.136×10^2 8.885×10^1 1.007×10^2 2.063×10^2
F8 6.023×10^1 1.549×10^2 7.177×10^1 9.197×10^1 1.343×10^2
F9 2.218×10^0 8.018×10^2 3.175×10^0 9.102×10^0 2.567×10^3
F10 3.443×10^3 5.974×10^3 3.312×10^3 4.032×10^3 3.916×10^3
F11 7.935×10^1 1.947×10^2 1.031×10^2 1.285×10^2 3.945×10^2
F12 1.487×10^4 2.929×10^6 2.655×10^4 1.094×10^5 4.791×10^8
F13 5.633×10^3 1.533×10^5 1.174×10^4 1.239×10^4 6.902×10^6
F14 2.195×10^3 1.393×10^4 3.355×10^3 5.619×10^3 3.965×10^4
F15 1.373×10^3 2.484×10^4 3.432×10^3 3.805×10^3 6.744×10^4
F16 5.423×10^2 8.786×10^2 5.792×10^2 6.760×10^2 1.137×10^3
F17 1.329×10^2 2.185×10^2 1.859×10^2 1.506×10^2 6.010×10^2
F18 6.136×10^4 3.746×10^5 9.743×10^4 1.797×10^5 2.851×10^5
F19 2.660×10^3 2.373×10^4 5.036×10^3 5.105×10^3 2.192×10^6
F20 2.623×10^2 3.166×10^2 2.893×10^2 2.633×10^2 3.941×10^2
F21 2.541×10^2 3.147×10^2 2.675×10^2 2.788×10^2 3.602×10^2
F22 1.000×10^2 2.720×10^2 1.000×10^2 1.007×10^2 2.892×10^3
F23 4.141×10^2 4.865×10^2 4.253×10^2 4.347×10^2 6.163×10^2
F24 4.768×10^2 5.560×10^2 4.790×10^2 4.952×10^2 7.065×10^2
F25 3.873×10^2 3.918×10^2 3.877×10^2 3.955×10^2 6.105×10^2
F26 9.212×10^2 1.382×10^3 1.218×10^3 1.173×10^3 3.534×10^3
F27 5.232×10^2 5.389×10^2 5.293×10^2 5.220×10^2 6.025×10^2
F28 3.000×10^2 4.629×10^2 3.208×10^2 3.818×10^2 1.153×10^3
F29 6.988×10^2 9.078×10^2 7.700×10^2 7.482×10^2 1.210×10^3
F30 3.383×10^3 3.132×10^5 4.515×10^3 4.819×10^3 2.721×10^6

Overall, DCPSO achieves the best performance on nearly all test functions. Removing the exploration strategy (DCPSO-X) leads to noticeable performance degradation on most benchmarks, while removing the exploitation strategy (DCPSO-E) results in a more severe decline, with substantial performance losses observed on many functions. When the competitive mechanism is disabled, DCPSO-DC also exhibits inferior performance compared with the full DCPSO, but clearly outperforms DCPSO-E on most functions, indicating an intermediate level of effectiveness. In addition, the ablation variants outperform the canonical PSO on the vast majority of functions. These results indicate that each component contributes positively to the overall performance, and that the integration of exploration, exploitation, and adaptive competition is essential for achieving the superior performance of DCPSO.

5. Simulation on Engineering Optimization Problems

To assess the effectiveness and robustness of the proposed DCPSO algorithm, three representative constrained engineering optimization problems are considered in this section, including the three-bar truss design problem, the gear train design problem, and the pressure vessel design problem. These problems are commonly adopted in the literature due to their nonlinear objective functions, complex constraint conditions, and discrete or mixed-variable characteristics, which pose substantial challenges for population-based optimization algorithms.

5.1. Problem Formulations

5.1.1. Three-Bar Truss Design

The three-bar truss design problem is a classical structural optimization problem that has been extensively studied in the context of constrained optimization. The objective is to minimize the total structural weight while satisfying stress and geometric constraints. This problem is characterized by a narrow feasible region and strong nonlinear constraints, which significantly increase the difficulty of locating feasible and high-quality solutions. The structure of the three-bar truss is illustrated in Figure 3.

Figure 3.

Schematic of the three-bar truss structure.

The design variable vector is defined as

x=[x1,x2], (16)

where x1 and x2 denote the cross-sectional areas of the truss members.

The objective function is formulated as

min f(x) = (2√2 x1 + x2) × 100. (17)

The nonlinear constraints are expressed as

g1(x) = 2(√2 x1 + x2)/(√2 x1² + 2 x1 x2) − 2 ≤ 0, g2(x) = 2 x2/(√2 x1² + 2 x1 x2) − 2 ≤ 0, g3(x) = 2/(√2 x1 + x2) − 2 ≤ 0, (18)

with variable bounds

0 ≤ x1, x2 ≤ 200. (19)
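Under this formulation (with load and stress limit both equal to 2 in the standard statement of the problem), the objective and constraints can be evaluated directly. The sketch below checks the best DCPSO solution reported later in Table 7:

```python
import math

def three_bar_truss(x1, x2):
    """Objective and constraint values of the three-bar truss problem.

    x1, x2: cross-sectional areas. Returns (weight, (g1, g2, g3)); a design is
    feasible when all constraints are <= 0.
    """
    f = (2 * math.sqrt(2) * x1 + x2) * 100
    denom = math.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g1 = 2 * (math.sqrt(2) * x1 + x2) / denom - 2
    g2 = 2 * x2 / denom - 2
    g3 = 2 / (math.sqrt(2) * x1 + x2) - 2
    return f, (g1, g2, g3)

# Best DCPSO solution reported in Table 7: x1 = 0.7887, x2 = 0.4083.
# The weight is about 263.9 and g1 is near-active (close to 0 from below).
f, g = three_bar_truss(0.7887, 0.4083)
```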

5.1.2. Gear Train Design

The gear train design problem is a representative discrete optimization problem frequently employed to evaluate the performance of algorithms on integer-valued decision variables. The objective is to determine the numbers of teeth of four gears such that the resulting gear ratio closely matches a predefined target value. Due to the discrete nature of the design variables and the nonlinear form of the objective function, the search space is highly discontinuous, which poses challenges for conventional continuous optimization techniques. The structure of the gear train is illustrated in Figure 4.

Figure 4.

Schematic of the gear train design problem.

The design variables are defined as

x=[x1,x2,x3,x4], (20)

where each variable represents the number of gear teeth and takes integer values.

The objective function is defined as

min f(x) = (1/6.931 − (x3 x2)/(x1 x4))². (21)

The design variables are constrained by

12 ≤ xi ≤ 60, xi ∈ ℤ, i = 1, 2, 3, 4. (22)

Solutions violating the constraints are penalized to restrict the search within the feasible region.
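The objective in Eq. (21) is cheap to evaluate, which makes reported solutions easy to verify. For instance, rounding the best DCPSO variables in Table 8 to integers gives the classical optimum (49, 16, 19, 43), whose error is on the order of 10^-12:

```python
def gear_train(x1, x2, x3, x4):
    """Squared gear-ratio error of Eq. (21); inputs are integer tooth counts."""
    return (1 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2

# Classical optimum obtained by rounding the best solution in Table 8.
err = gear_train(49, 16, 19, 43)  # about 2.7e-12
```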

5.1.3. Pressure Vessel Design

The pressure vessel design problem is a well-known nonlinear constrained optimization problem originating from mechanical engineering design. The objective is to minimize the total fabrication cost while satisfying strength, volume, and dimensional constraints. This problem involves mixed discrete and continuous variables, making it particularly challenging for optimization algorithms to efficiently balance exploration and exploitation. The schematic representation of the vessel structure is shown in Figure 5.

Figure 5.

Schematic of the pressure vessel design problem.

The design variable vector is defined as

x=[x1,x2,x3,x4], (23)

where x1 and x2 are discrete thickness variables with increments of 0.0625, x3 represents the inner radius, and x4 denotes the length of the cylindrical section.

The objective is to minimize the total fabrication cost, which is formulated as

min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3. (24)

The constraints are given by

g1(x) = −x1 + 0.0193 x3 ≤ 0, g2(x) = −x2 + 0.00954 x3 ≤ 0, g3(x) = −π x3² x4 − (4/3)π x3³ + 1296000 ≤ 0, g4(x) = x4 − 240 ≤ 0, (25)

with variable bounds

0 ≤ x1, x2 ≤ 100, 10 ≤ x3, x4 ≤ 200. (26)
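The cost and constraints of Eqs. (24)–(25) can likewise be checked directly. Rounding the thickness multipliers reported later in Table 9 (about 13.28 and 7.09) to the nearest feasible increments gives plate thicknesses 13 × 0.0625 = 0.8125 and 7 × 0.0625 = 0.4375, which together with the tabulated radius and length reproduce the well-known cost of roughly 6059.7:

```python
import math

def pressure_vessel(x1, x2, x3, x4):
    """Cost and constraints of the pressure vessel problem, Eqs. (24)-(25).

    x1, x2: shell and head thicknesses; x3: inner radius; x4: cylinder length.
    A design is feasible when all constraints are <= 0.
    """
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = (-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -math.pi * x3 ** 2 * x4 - (4 / 3) * math.pi * x3 ** 3 + 1296000,
         x4 - 240)
    return cost, g

# Near-optimal design: thicknesses 0.8125 and 0.4375, radius and length from Table 9.
# g1 and g3 are near-active (close to 0, up to rounding of the reported variables).
cost, g = pressure_vessel(0.8125, 0.4375, 42.0984, 176.6366)
```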

5.2. Experimental Settings

To ensure a fair and objective comparison, the proposed DCPSO algorithm and the competing algorithms are evaluated under identical experimental conditions. The comparison algorithms include ACEPSO, AFMPSO, VPPSO, SDPSO, MPSO, and the standard PSO. For all algorithms, the population size is set to 100 and the maximum number of fitness evaluations to 1.0×10^5, consistent with the settings used in Section 4. To reduce the influence of stochastic effects, each algorithm is independently executed 30 times. The best result, mean, and standard deviation (Std.) of the obtained solutions, along with the decision variables of the best solutions, are recorded to assess optimization performance and robustness.

5.3. Results and Discussion

Table 7, Table 8 and Table 9 present the experimental results for the three-bar truss design problem, the gear train design problem, and the pressure vessel design problem, respectively. As shown in these tables, the proposed method achieves accurate optimal solutions and demonstrates superior overall performance compared with several state-of-the-art PSO variants, particularly in terms of the mean and standard deviation of the obtained results.

Table 7.

Comparison results on the three-bar truss design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2
DCPSO 2.63896×10^2 2.63896×10^2 1.33666×10^-5 0.7887 0.4083
ACEPSO 2.63896×10^2 2.63896×10^2 1.56130×10^-5 0.7887 0.4082
AFMPSO 2.63896×10^2 2.63898×10^2 4.27073×10^-3 0.7887 0.4083
VPPSO 2.63896×10^2 2.63896×10^2 4.48355×10^-4 0.7887 0.4083
SDPSO 2.63896×10^2 2.63896×10^2 2.50774×10^-5 0.7887 0.4082
MPSO 2.63896×10^2 2.63896×10^2 4.01519×10^-6 0.7887 0.4082
PSO 2.63896×10^2 2.63896×10^2 2.50289×10^-5 0.7887 0.4082

Table 8.

Comparison results on the gear train design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2 x3 x4
DCPSO 2.70086×10^-12 6.09707×10^-12 7.72399×10^-12 49.4702 15.8649 19.0769 43.2132
ACEPSO 2.70086×10^-12 4.47249×10^-10 4.76498×10^-10 48.8431 15.8243 19.0019 43.0089
AFMPSO 2.70086×10^-12 5.02722×10^-10 5.97481×10^-10 42.5040 15.9593 19.2789 48.9835
VPPSO 2.70086×10^-12 5.58935×10^-10 6.75837×10^-10 48.6641 19.1968 16.4948 43.4540
SDPSO 2.70086×10^-12 1.08338×10^-10 2.76852×10^-10 48.5168 19.1123 15.6856 43.1325
MPSO 1.36165×10^-9 1.37192×10^-8 1.29235×10^-8 60.0000 33.7594 14.1274 54.9405
PSO 2.70086×10^-12 1.70802×10^-9 4.85272×10^-9 48.5989 19.3534 16.1911 42.9620

Table 9.

Comparison results on the pressure vessel design problem.

Algorithm Optimized Result Optimization Variable
Best Mean Std. x1 x2 x3 x4
DCPSO 6.05971×10^3 6.17872×10^3 1.34649×10^2 13.2808 7.0891 42.0984 176.6366
ACEPSO 6.05971×10^3 6.42932×10^3 3.02489×10^2 12.9102 7.1233 42.0984 176.6366
AFMPSO 6.05972×10^3 6.45425×10^3 2.26756×10^2 13.4097 6.5622 42.0984 176.6368
VPPSO 6.05971×10^3 6.52902×10^3 3.52286×10^2 12.5377 7.1640 42.0984 176.6366
SDPSO 6.05971×10^3 6.06619×10^3 1.24940×10^1 12.8702 7.1830 42.0984 176.6366
MPSO 6.09053×10^3 6.53380×10^3 3.39194×10^2 13.9664 7.0662 45.3368 140.2538
PSO 6.05971×10^3 6.20891×10^3 3.44106×10^2 12.5671 7.2428 42.0984 176.6366

In the gear train design problem, DCPSO attains the lowest mean objective value, and its small standard deviation across multiple independent runs indicates that it consistently reaches near-optimal solutions, reflecting enhanced robustness in discrete optimization scenarios. For the pressure vessel design problem, which involves mixed discrete–continuous variables and multiple nonlinear constraints, DCPSO is able to reliably identify feasible solutions with reduced fabrication costs. Moreover, the algorithm exhibits stable convergence behavior and effectively avoids premature convergence, highlighting its suitability for addressing complex engineering optimization tasks.

Overall, the experimental results on these constrained engineering optimization problems confirm the robustness and effectiveness of the proposed algorithm. Its consistent performance across problems with diverse characteristics further suggests its strong potential for practical engineering applications.

6. Conclusions and Future Work

This paper proposed a dual-subpopulation competitive particle swarm optimization framework, termed DCPSO, to enhance the adaptive balance between exploration and exploitation in particle swarm optimization. In DCPSO, the population is explicitly partitioned into exploration and exploitation subpopulations with distinct functional roles, each guided by a dedicated update strategy. A sliding-window-based competition mechanism is introduced to evaluate recent evolutionary efficiency and to drive adaptive particle migration between subpopulations. Through this biomimetic competition and migration process, the population structure can be dynamically reorganized in response to the current search state while preserving the continuity of particle trajectories. Extensive experimental results on the CEC2017 benchmark suite demonstrate that DCPSO achieves competitive performance compared with the canonical PSO and several representative state-of-the-art variants. Additional experiments on engineering design problems further confirm the robustness and practical effectiveness of the proposed framework.

Beyond benchmark validation, the proposed framework may be applicable to large-scale engineering design, high-dimensional parameter tuning, and data-driven optimization tasks where adaptive resource allocation and structural flexibility are beneficial. The competition-driven mechanism could also be explored in scenarios involving complex multimodal landscapes or evolving search demands.

Future research may investigate extensions to multiobjective, constrained, and dynamic optimization problems. Exploring more flexible multi-subpopulation frameworks with heterogeneous search behaviors and adaptively adjustable sizes may further enhance scalability and generalization ability. Incorporating adaptive or self-tuning parameter control strategies could improve robustness and reduce manual configuration effort. In addition, formal convergence analysis or probabilistic modeling of the competition and migration dynamics may provide deeper theoretical understanding of the proposed mechanism. These directions may further strengthen the applicability and theoretical foundation of the framework.

Acknowledgments

The authors have no additional acknowledgments beyond those reflected in the Author Contributions and Funding sections.

Abbreviations

The following abbreviations are used in this manuscript:

ACEPSO Adaptive Co-Evolved Particle Swarm Optimization
AFMPSO Adaptive Forgetting Multi-Strategy Particle Swarm Optimization
DCPSO Dual-Subpopulation Competitive Particle Swarm Optimization
MPSO Modified Particle Swarm Optimization
PSO Particle Swarm Optimization
SDPSO Strategy Dynamics Particle Swarm Optimization
SI Swarm Intelligence
VPPSO Velocity Pausing Particle Swarm Optimization

Author Contributions

Conceptualization, S.Z. and Y.G.; methodology, S.Z. and Y.Z.; software, S.Z.; validation, S.Z., Y.Z. and Y.G.; formal analysis, S.Z. and Y.G.; investigation, S.Z. and Y.Z.; resources, Y.G.; data curation, S.Z. and Y.Z.; writing—original draft preparation, S.Z.; writing—review and editing, Y.Z., M.G. and Y.G.; visualization, Y.Z. and Q.Z.; supervision, Y.G.; project administration, Y.G.; funding acquisition, Y.G.; auxiliary support, M.G. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are contained in this paper. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

This research was funded in part by the Key Scientific Research Project of the Education Department of Jilin under Grant JJKH20261750KJ2, in part by the Key Project of Educational Science Planning of Jilin Province under Grant ZD24093, in part by the Key Project of Teaching Research of Beihua University under Grant XJZD-20240008, and in part by the General Project of Teaching Research of Beihua University under Grant XJYB-2021020.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  1. Tian A.Q., Liu F.F., Lv H.X. Snow Geese Algorithm: A novel migration-inspired meta-heuristic algorithm for constrained engineering optimization problems. Appl. Math. Model. 2024;126:327–347. doi: 10.1016/j.apm.2023.10.045.
  2. Braik M., Hammouri A., Atwan J., Al-Betar M.A., Awadallah M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022;243:108457. doi: 10.1016/j.knosys.2022.108457.
  3. Xue J., Shen B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023;79:7305–7336. doi: 10.1007/s11227-022-04959-6.
  4. Wang D., Yuan Z., Tang G., Wang J., Qiao J. Evolution-Guided Q-Learning With Dual Swarm Intelligence for Model-Free Optimal Control. IEEE Trans. Autom. Sci. Eng. 2025;22:16964–16975. doi: 10.1109/TASE.2025.3581315.
  5. Xiao Y., Cui H., Khurma R.A., Castillo P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025;58:84. doi: 10.1007/s10462-024-11023-7.
  6. Zhang S., Hu X., Gao Y., Gao M., Zhang Y. Exemplar Learning and Memory Retrieval-Based Particle Swarm Optimization Algorithm with Engineering Applications. Biomimetics. 2025;10:708. doi: 10.3390/biomimetics10100708.
  7. Dai Y., Yu J., Zhang C., Zhan B., Zheng X. A novel whale optimization algorithm of path planning strategy for mobile robots. Appl. Intell. 2023;53:10843–10857. doi: 10.1007/s10489-022-04030-0.
  8. Lin X., Gao F., Bian W. A high-effective swarm intelligence-based multi-robot cooperation method for target searching in unknown hazardous environments. Expert Syst. Appl. 2025;262:125609. doi: 10.1016/j.eswa.2024.125609.
  9. Zhang Y., Wang L., Zhao J., Han X., Wu H., Li M., Deveci M. A convolutional neural network based on an evolutionary algorithm and its application. Inf. Sci. 2024;670:120644. doi: 10.1016/j.ins.2024.120644.
  10. Ying Y., Ma S., Wang L., Chen X., Chen X., Zhu Y., Xu Y., Zheng C., Shentu Y., Wang Y., et al. Swarm intelligence enhanced machine learning model for predicting prognostic outcome in IgA nephropathy patients with mild proteinuria. Biomed. Signal Process. Control. 2025;103:107392. doi: 10.1016/j.bspc.2024.107392.
  11. Kennedy J., Eberhart R. Particle swarm optimization. In: Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; pp. 1942–1948.
  12. Wu J., Sun Y., Bi J., Chen C. A Novel Hybrid Enhanced Particle Swarm Optimization for UAV Path Planning. IEEE Trans. Veh. Technol. 2025;74:11806–11819. doi: 10.1109/TVT.2025.3555285.
  13. Jafari S., Byun Y.C. AI-driven state of power prediction in battery systems: A PSO-optimized deep learning approach with XAI. Energy. 2025;331:136764. doi: 10.1016/j.energy.2025.136764.
  14. Shen J., Wang Y., Zhu H., Zhang L., Tang Y. AGWO-PSO-VMD-TEFCG-AlexNet bearing fault diagnosis method under strong noise. Measurement. 2025;242:116259. doi: 10.1016/j.measurement.2024.116259.
  15. Hussan U., Waheed A., Bilal H., Wang H., Hassan M., Ullah I., Peng J., Hosseinzadeh M. Robust Maximum Power Point Tracking in PV Generation System: A Hybrid ANN-Backstepping Approach with PSO-GA Optimization. IEEE Trans. Consum. Electron. 2025;71:6016–6026. doi: 10.1109/TCE.2025.3569871.
  16. Chen X., Li Y. A modified PSO structure resulting in high exploration ability with convergence guaranteed. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2007;37:1271–1289. doi: 10.1109/TSMCB.2007.897922.
  17. Yin S., Jin M., Lu H., Gong G., Mao W., Chen G., Li W. Reinforcement-learning-based parameter adaptation method for particle swarm optimization. Complex Intell. Syst. 2023;9:5585–5609. doi: 10.1007/s40747-023-01012-8.
  18. Liu H., Zhang X.W., Tu L.P. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020;152:113353. doi: 10.1016/j.eswa.2020.113353.
  19. Moazen H., Molaei S., Farzinvash L., Sabaei M. PSO-ELPM: PSO with elite learning, enhanced parameter updating, and exponential mutation operator. Inf. Sci. 2023;628:70–91. doi: 10.1016/j.ins.2023.01.103.
  20. Radwan M., Elsayed S., Sarker R., Essam D., Coello C.C. Neuro-PSO algorithm for large-scale dynamic optimization. Swarm Evol. Comput. 2025;94:101865. doi: 10.1016/j.swevo.2025.101865.
  21. Li T., Shi J., Deng W., Hu Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022;121:108731. doi: 10.1016/j.asoc.2022.108731.
  22. Flori A., Oulhadj H., Siarry P. QUAntum Particle Swarm Optimization: An auto-adaptive PSO for local and global optimization. Comput. Optim. Appl. 2022;82:525–559. doi: 10.1007/s10589-022-00362-2.
  23. Peng S., Zhu J., Wu T., Yuan C., Cang J., Zhang K., Pecht M. Prediction of wind and PV power by fusing the multi-stage feature extraction and a PSO-BiLSTM model. Energy. 2024;298:131345. doi: 10.1016/j.energy.2024.131345.
  24. Song M., An M., He W., Wu Y. Research on land use optimization based on PSO-GA model with the goals of increasing economic benefits and ecosystem services value. Sustain. Cities Soc. 2025;119:106072. doi: 10.1016/j.scs.2024.106072.
  25. Wan A., Zhang F., Khalil A.B., Cheng X., Ji X., Wang J., Shan T. A novel GA-PSO-SVM model for compound fault diagnosis in gearboxes with limited data. IEEE Sens. J. 2025;25:30431. doi: 10.1109/JSEN.2025.3576761.
  26. Gao W., Peng X., Guo W., Li D. A Dual-Competition-Based Particle Swarm Optimizer for Large-Scale Optimization. Mathematics. 2024;12:1738. doi: 10.3390/math12111738.
  27. Liu Z., Nishi T. Strategy dynamics particle swarm optimizer. Inf. Sci. 2022;582:665–703. doi: 10.1016/j.ins.2021.10.028.
  28. Jin X., Wei B., Deng L., Yang S., Zheng J., Wang F. An adaptive pyramid PSO for high-dimensional feature selection. Expert Syst. Appl. 2024;257:125084. doi: 10.1016/j.eswa.2024.125084.
  29. Yang C., Gao W., Liu N., Song C. Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight. Appl. Soft Comput. 2015;29:386–394. doi: 10.1016/j.asoc.2015.01.004.
  30. Song B., Wang Z., Zou L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021;100:106960. doi: 10.1016/j.asoc.2020.106960.
  31. Minh H.L., Khatir S., Rao R.V., Abdel Wahab M., Cuong-Le T. A variable velocity strategy particle swarm optimization algorithm (VVS-PSO) for damage assessment in structures. Eng. Comput. 2023;39:1055–1084. doi: 10.1007/s00366-021-01451-2.
  32. Meng Z., Zhong Y., Mao G., Liang Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022;586:176–191. doi: 10.1016/j.ins.2021.11.076.
  33. Nayeem G.M., Fan M., Daiyan G.M., Fahad K.S. UAV path planning with an adaptive hybrid PSO. In: Proceedings of the 2023 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD), Dhaka, Bangladesh, 21–23 September 2023; IEEE: New York, NY, USA, 2023; pp. 139–143.
  34. Hu G., Cheng M., Sheng G., Wei G. ACEPSO: A multiple adaptive co-evolved particle swarm optimization for solving engineering problems. Adv. Eng. Inform. 2024;61:102516. doi: 10.1016/j.aei.2024.102516.
  35. Zhou T., Han X., Wang L., Gan W., Chu Y., Gao M. A multiobjective differential evolution algorithm with subpopulation region solution selection for global and local Pareto optimal sets. Swarm Evol. Comput. 2023;83:101423. doi: 10.1016/j.swevo.2023.101423.
  36. Li X.L., Serra R., Olivier J. A multi-component PSO algorithm with leader learning mechanism for structural damage detection. Appl. Soft Comput. 2022;116:108315. doi: 10.1016/j.asoc.2021.108315.
  37. Adamu A., Abdullahi M., Junaidu S.B., Hassan I.H. An hybrid particle swarm optimization with crow search algorithm for feature selection. Mach. Learn. Appl. 2021;6:100108. doi: 10.1016/j.mlwa.2021.100108.
  38. Şenel F.A., Gökçe F., Yüksel A.S., Yiğit T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019;35:1359–1373. doi: 10.1007/s00366-018-0668-5.
  39. Shi L., Gong J., Zhai C. Application of a hybrid PSO-GA optimization algorithm in determining pyrolysis kinetics of biomass. Fuel. 2022;323:124344. doi: 10.1016/j.fuel.2022.124344.
  40. Lin S., Liu A., Wang J., Kong X. An intelligence-based hybrid PSO-SA for mobile robot path planning in warehouse. J. Comput. Sci. 2023;67:101938. doi: 10.1016/j.jocs.2022.101938.
  41. Salgotra R., Singh U., Saha S., Gandomi A.H. Improving cuckoo search: Incorporating changes for CEC 2017 and CEC 2020 benchmark problems. In: Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; IEEE: New York, NY, USA, 2020; pp. 1–7.
  42. Zhu D., Shen J., Zhang Y., Li W., Zhu X., Zhou C., Cheng S., Yao Y. Multi-strategy particle swarm optimization with adaptive forgetting for base station layout. Swarm Evol. Comput. 2024;91:101737. doi: 10.1016/j.swevo.2024.101737.
  43. Shami T.M., Mirjalili S., Al-Eryani Y., Daoudi K., Izadi S., Abualigah L. Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Comput. Appl. 2023;35:9193–9223. doi: 10.1007/s00521-022-08179-0.
  44. Wilcoxon F. Individual comparisons by ranking methods. Biom. Bull. 1945;1:80–83. doi: 10.2307/3001968.


