Scientific Reports, 2025 Aug 22;15:30874. doi: 10.1038/s41598-025-16513-4

A multistrategy improved hunger games search algorithm

Qiu Yihui 1, Zhang Xinqiang 1, Li Ruoyu 1, Li Dongyi 1, Xia Feihan 1
PMCID: PMC12373881  PMID: 40847056

Abstract

In this paper, we propose a multistrategy improved Hunger Games Search (MHGS) algorithm to address several inherent limitations of the original HGS, including imbalanced exploration and exploitation, insufficient population diversity, and premature convergence. The main contributions of MHGS are fourfold: (1) a phased position update framework that dynamically coordinates global exploration and local exploitation through three distinct search phases; (2) an enhanced reproduction operator inspired by biological reproductive patterns to preserve population diversity; (3) an adaptive boundary handling mechanism that redirects out-of-bounds individuals to promising regions, thus improving search efficiency; and (4) an elite dynamic oppositional learning strategy with self-adjusting coefficients to enhance the algorithm’s ability to avoid local optima. These mechanisms work synergistically: the phased update balances global and local search, while the reproduction and boundary handling strategies jointly maintain solution diversity. The addition of oppositional learning further improves the robustness of the search process. Extensive evaluations on 23 benchmark functions, the CEC2017 test suite, and two engineering design problems demonstrate that MHGS achieves a 23.7% average improvement in accuracy compared to seven state-of-the-art algorithms (Wilcoxon rank-sum test, p < 0.05). Moreover, a binary variant, BMHGS_V3, using sigmoid transformation, attains an average classification accuracy of 92.3% on ten UCI datasets for feature selection tasks. The proposed MHGS algorithm provides a novel and effective framework for solving complex optimization problems, demonstrating significant theoretical and practical value in the field of computational intelligence.

Keywords: Hunger games search, Phased optimization, Dynamic oppositional learning, Feature selection, Metaheuristic algorithm

Subject terms: Computer science, Information technology

Introduction

Optimization refers to the process of obtaining the best possible outcome in a given scenario1. In general, an optimization problem seeks to identify the solution that maximizes or minimizes a particular objective function while satisfying specified constraints. Optimization problems are prevalent in many fields2, such as project scheduling3, path planning4, photovoltaic systems5, and feature selection6. Therefore, the study and application of optimization problems have become popular among scholars7. A method for solving an optimization problem (i.e., finding the optimal solution that satisfies the imposed constraints) is called an optimization method or optimization algorithm. Traditional optimization algorithms include steepest descent8 and the simplex method9, which generally have adequate theoretical analyses and strong robustness.

However, most traditional optimization algorithms require gradient information and are usually only suitable for solving specific cases or small-scale optimization problems, making it difficult for them to perform well on complex constrained optimization problems and large-scale optimization problems10. Owing to these limitations, metaheuristic algorithms have attracted widespread attention from scholars worldwide and have become common methods for solving complex optimization problems11. Unlike traditional optimization algorithms, metaheuristic algorithms do not require gradient information and have no requirements regarding the continuity, convexity, or differentiability of the objective function. They can effectively solve large-scale optimization problems and have features such as fewer parameters and user-friendliness12.

Metaheuristic algorithms are typically classified into four main categories on the basis of their fundamental inspiration sources: evolutionary algorithms, swarm intelligence methods, natural phenomenon-based approaches, and human-inspired algorithms13. Evolutionary algorithms, which draw inspiration from biological evolution mechanisms, including crossover and mutation, are primarily represented by the genetic algorithm (GA)14, differential evolution (DE)15, the evolution strategy (ES)16 and genetic programming (GP)17. For instance, DE has been effectively applied to practical design problems such as interactive recoloring in interior design, where YUV-based edge detection is integrated with DE to improve user-guided optimization performance18. Similarly, DE has been utilized in other areas of graphic design, such as language-based photo color adjustment, where natural language instructions guide color modifications in images, enhancing the flexibility and precision of design processes19. The swarm intelligence category encompasses algorithms that mimic collective organism behaviours and can be further divided into animal-inspired approaches such as particle swarm optimization (PSO)20, the cuckoo search (CS)21, the grey wolf optimizer (GWO)22, the whale optimization algorithm (WOA)23, the salp swarm algorithm (SSA)24, the tornado optimizer with Coriolis forces (TO-CF)25, the Chinese pangolin optimizer (CPO)26 and the herd optimizer (EHO)27, along with plant- and microorganism-inspired methods such as the clonal plant algorithm (CPA)28, the bacterial foraging optimization algorithm (BFOA)29, the slime mould algorithm (SMA)30, the Newton–Raphson-based optimizer (NRBO)31 and the fungal growth optimizer (FGO)32. Natural phenomenon-based approaches simulate physical and mathematical principles, featuring physics-based methods such as simulated annealing (SA)33, the water cycle algorithm (WCA)34, the gravitational search algorithm (GSA)35, Henry gas solubility optimization (HGSO)36 and the Newton–Raphson-based optimizer (NRO)37, as well as celestial mechanics-inspired techniques such as the artificial satellite search (ASS)38. Finally, human-inspired algorithms model various aspects of human cognition and social behaviour, and they are exemplified by teaching- and learning-based optimization (TLBO)39, the Volleyball Premier League algorithm (VPLA)40 and social emotional learning optimization (SELO)41. This classification framework systematically organizes metaheuristic algorithms according to their underlying inspirational paradigms while maintaining clear distinctions between biological, physical, and anthropomorphic approaches.

The hunger games search (HGS)42 is a metaheuristic algorithm inspired by the foraging behaviour43 and hunger-driven competition44,45 observed in group-living animals. Since its introduction, the HGS has demonstrated notable success in applications ranging from image segmentation46 to photovoltaic system optimization47. However, recent critiques of metaphor-based algorithms have raised concerns about their fundamental design principles. Sörensen et al.48 argued that excessive reliance on biological analogies often results in redundant algorithms that obscure mathematical novelty through terminological reinvention. Villalón et al.49 systematically demonstrated how many “novel” metaphor-driven methods, such as the grey wolf optimizer and the bat algorithm, are structurally equivalent to established approaches such as PSO. Velasco et al.50 further revealed that 65% of recently proposed “improved” algorithms fail to address core limitations such as compliance with the NFL theorem. While acknowledging the empirical successes of the HGS51,52, the MHGS specifically addresses these methodological criticisms through (1) phased position updates that replace metaphor-constrained search dynamics with mathematically transparent exploration-exploitation balancing, (2) an elite dynamic opposition learning strategy that decouples diversity maintenance from the original hunger metaphor, and (3) a rigorous NFL-aware validation across 54 benchmark functions and 2 engineering cases, exceeding the experimental standards criticized in53.

While the HGS algorithm is largely considered a dependable and commonly utilized tool, it is not without shortcomings. Of particular note are issues such as the imbalance between global exploration and local exploitation capabilities54, insufficient population diversity55, and a tendency to succumb to local optima56. Moreover, the no-free-lunch theorem57 establishes that no optimization algorithm can solve every type of optimization problem, motivating scholars to continue investigating improved metaheuristic algorithms for optimization problem solving. Consequently, enhancing the algorithmic efficiency of the HGS is vital.

Since the HGS algorithm was proposed, many scholars have improved it, proposing new algorithms and applying them to solve practical problems. According to their different improvement methods, the improved algorithms are categorized into single-strategy improvements, multistrategy improvements and other methods as follows.

  1. Single-strategy improvements. In this paper, “single-strategy improvement” refers to an improvement made to the HGS algorithm using a certain strategy, where the main improvement strategies include chaos theory, opposition-based learning and its improved variant, the mutation operator and the fusion of other algorithms. For example, Onay et al.58 separately replaced two random values in the HGS algorithm with ten chaotic mappings, and the results of their experiments showed that the HGS algorithm with random value replacement had more stable performance and faster convergence. El-Hameed et al.59 improved the HGS algorithm by using specular reflection learning and dynamic quasiopposition-based learning, and an improved algorithm was used for hybrid microgrid frequency control purposes. Premkumar et al.60 proposed an HGS algorithm based on Gaussian mutation and Cauchy mutation and used it to solve the solar cell model parameter identification problem. Chakraborty et al.61 integrated the starvation concept of the HGS algorithm with whale foraging behaviour to propose a whale optimization algorithm based on a starvation search and verified its effectiveness.

  2. Multistrategy improvements. The “multistrategy improvement” term used in this paper refers to an improvement of the HGS algorithm in which more than one strategy is used. Among the improvements made to the HGS algorithm, multistrategy improvements are the most numerous type. According to whether other algorithms are fused or not, these improvements can be divided into two categories: one uses different strategies to improve the HGS algorithm together but does not involve the fusion of other algorithms, and the other adds other strategies to improve the HGS algorithm while fusing other algorithms. For example, Ma et al.55 improved the HGS algorithm using chaos theory, greedy selection and vertical crossover and proposed a binary version for feature selection. Zhou et al.55 improved the HGS algorithm by using a chaotic initialization strategy, a Gaussian barebone mechanism and an orthogonal learning strategy. Hou et al.62 improved the HGS algorithm by combining the slime mould position update mechanism and chaotic optimal solution variation and used an improved algorithm for image segmentation purposes. Chen et al.63 fused an improved HGS algorithm and an improved differential evolution algorithm and added a distance-based multipopulation approach.

  3. Other methods. In addition to the aforementioned improvement approaches, numerous researchers have opted to expand the scope of the HGS algorithm for solving binary optimization problems and multiobjective optimization problems, among other problems, instead of implementing improvement strategies to increase its performance. For example, Devi et al.62 proposed a binary HGS algorithm and applied it to resolve the feature selection challenge, whereas Al-Kaabi et al.63 introduced a multiobjective HGS algorithm to optimize the multiobjective optimal power flow problem.

All three categories of improvements have yielded good results, but each has its shortcomings.

Single-strategy improvements add only one improvement strategy each, making them easier to program and implement and providing algorithmic performance enhancements. However, the majority of the existing single-strategy improvements target only one deficiency of the HGS algorithm, which provides limited performance enhancements, and most of them do not expand the scope of the HGS algorithm.

Multistrategy improvements are generally aimed at addressing multiple deficiencies of the HGS algorithm and use multiple strategies to improve it. Their focal points are the imbalance between the global exploration and local exploitation capabilities of the algorithm, the low diversity of the population, and its tendency to fall into local optima. These improvements can effectively increase the performance of the HGS algorithm. While multistrategy improvements undoubtedly enhance the performance of the HGS algorithm to a certain extent, numerous studies have demonstrated their limited efficacy in terms of optimizing complex test functions, and their applicability remains constrained. In addition, the HGS algorithm suffers from an unreasonable out-of-bounds handling mechanism, but to the authors’ knowledge, no scholars have mitigated this problem yet. When an individual crosses the boundary, the HGS algorithm restricts the crossing individual to the boundary. Although this boundary crossing-based processing mechanism can limit the positions of individuals to the feasible region, it wastes computational resources and reduces the activity of the population64–66.

Other approaches generally do not add improvement strategies but rather extend the applicability of the HGS algorithm in some way. These enhancements broaden the application range of the HGS algorithm but do not significantly improve its overall performance.

Therefore, a multistrategy approach is proposed in this paper to improve the HGS algorithm. The main issues addressed by the improvement include the imbalance between the global exploration and local exploitation capabilities of the algorithm, its low population diversity, its use of an unreasonable out-of-bounds handling mechanism, and its susceptibility to falling into local optima. By addressing these issues, the proposed improvement is aimed at enhancing the performance of the HGS algorithm and expanding its application scope.

The main research content of this paper is as follows.

  1. A multistrategy improved hunger games search (MHGS) algorithm is proposed to increase the performance of the original algorithm. The MHGS implements multiple strategies, namely, a phased position update formula, a reproduction mechanism, an improved out-of-bounds handling mechanism, and elite dynamic opposite learning.

  2. The ability of the MHGS algorithm to solve global optimization problems is evaluated through benchmark test functions and the CEC2017 test set. Additionally, two engineering design issues are solved using the MHGS algorithm to assess its performance in terms of addressing practical optimization problems.

  3. In addition, the scope of the MHGS algorithm is broadened by allowing it to solve binary optimization problems using a transformation function. As a result, a binary MHGS algorithm (BMHGS_V3) is proposed for use in solving feature selection problems.
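Because BMHGS_V3 maps continuous MHGS positions to binary feature masks through a transfer function, the following minimal Python sketch illustrates the general idea. The sigmoid (S-shaped) mapping and the stochastic thresholding rule shown here are assumptions for illustration only; the exact transfer function used by BMHGS_V3 is defined later in the paper.

```python
import numpy as np

def sigmoid_transfer(x):
    """S-shaped transfer function mapping a real value to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize(position, rng=None):
    """Map a continuous position vector to a 0/1 feature mask.

    A feature is selected when a uniform random draw falls below the transfer
    value; this stochastic rule is an assumption used only for illustration.
    """
    rng = rng or np.random.default_rng()
    probs = sigmoid_transfer(position)
    return (rng.random(position.shape) < probs).astype(int)

# Example: a 5-dimensional continuous position becomes a binary feature mask.
mask = binarize(np.array([-2.0, 0.5, 3.1, -0.1, 1.7]))
```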

Motivation and contributions

The motivation for this study arises from three key observations in the current literature: (1) the standard HGS algorithm suffers from an imbalance between exploration and exploitation, frequently resulting in premature convergence; (2) most existing improvements to HGS either target only a single aspect or involve complicated hybridizations that substantially increase computational overhead; and (3) the boundary handling mechanism in HGS has been largely overlooked, despite its critical impact on optimization performance.

To address these challenges, we propose the MHGS algorithm, which introduces the following four principal contributions:

  1. A phased position update mechanism that systematically transitions between exploration, transition, and exploitation phases, significantly improving the balance between global and local search capabilities.

  2. A biologically inspired reproduction mechanism that enhances population diversity without substantially increasing computational complexity.

  3. A comprehensive boundary handling strategy that intelligently repositions out-of-bounds individuals using historical search information, providing the first complete solution to the boundary handling issue in HGS.

  4. An elite dynamic opposition learning strategy designed to help the algorithm escape local optima efficiently while maintaining computational efficiency.

These innovations are significant as they address the core limitations of the original HGS through approaches that are both theoretically grounded and biologically plausible, rather than relying on arbitrary combinations of existing strategies. The proposed enhancements preserve the simplicity and efficiency of HGS while greatly improving its performance in a wide range of optimization scenarios.

The remainder of this paper is organized as follows: Section II reviews the relevant background, including the HGS algorithm and dynamic opposition learning. Section III describes the proposed MHGS algorithm in detail. Section IV presents experimental results and analyses based on benchmark test functions, the CEC2017 suite, engineering design problems, and feature selection tasks. Section V concludes the paper.

Background

Hunger games search algorithm

Group living enables animals to better avoid predators and locate food sources through cooperative behaviors. While most animals collaborate to enhance their chances of survival, some may not participate in group efforts. Hunger serves as a primary driving force in animal life, significantly influencing their decision-making and behavior. As hunger increases, so does the urgency and activity level of foraging, as animals seek to avoid starvation. In environments where food is scarce, hungry individuals compete intensely to secure resources. Typically, healthier and stronger animals are more likely to find food and survive compared to weaker individuals. This competitive struggle for survival, driven by hunger, is aptly described as the “hunger game.”

Inspired by this phenomenon, the Hunger Games Search (HGS) algorithm formulates a mathematical model that emulates the foraging behavior and hunger-driven competition of social animals. The algorithm consists of two primary components: the food-seeking process and hunger-based role assignment.

Scientific rationale of natural metaphors and mapping mechanisms

The core design of the MHGS algorithm is rooted in the foraging behavior of animal groups under hunger-driven conditions. By quantifying individual hunger levels, the algorithm dynamically adjusts search strategies: highly hungry individuals expand their exploration radius to simulate risk-seeking under survival pressure, while less hungry individuals exploit elite positions for resource acquisition. This approach precisely reflects the energy-risk trade-off theory in ecology. Unlike conventional truncation methods, the boundary handling mechanism in MHGS mimics the adaptive responses of animal groups to environmental constraints, repositioning out-of-bounds individuals to promising locations based on a weighted combination of historical optimal, individual, and group information. Additionally, the reproduction operator simulates genetic recombination and mutation processes, with adaptive crossover and decaying mutation rates mirroring natural selection and ensuring population diversity throughout the search42.

Food approach

The food approach method mainly simulates the foraging behaviour of group-living animals. In this part, the HGS algorithm uses three position update rules to simulate the behaviours of animals approaching food during foraging, as shown in Eq. (1).

$$X(t+1)=\begin{cases}X(t)\cdot\bigl(1+\operatorname{randn}(1)\bigr), & r_1<l\\[2pt] W_1\cdot X_b+R\cdot W_2\cdot\lvert X_b-X(t)\rvert, & r_1>l,\ r_2>E\\[2pt] W_1\cdot X_b-R\cdot W_2\cdot\lvert X_b-X(t)\rvert, & r_1>l,\ r_2<E\end{cases}\tag{1}$$

where t denotes the current iteration number; X(t) denotes the current position of an individual; X(t+1) denotes the position of the individual in the next iteration; Xb denotes the optimal individual position attained in the current iteration thus far; W1 and W2 denote the starvation weights; randn(1) is a random number that satisfies the standard normal distribution; r1 and r2 are both random numbers between 0 and 1; l is a fixed-value parameter; and E is a parameter for controlling the change in the individual positions, which is calculated as shown in Eq. (2).

$$E=\operatorname{sech}\bigl(\lvert F(i)-BF\rvert\bigr)\tag{2}$$

where i ∈ {1, 2, ..., N} and N denotes the population size; F(i) denotes the fitness of each individual; BF denotes the optimal fitness attained in the current iteration thus far; and the formula for the hyperbolic secant function sech(·) is shown in Eq. (3).

$$\operatorname{sech}(x)=\frac{2}{e^{x}+e^{-x}}\tag{3}$$

where R is a random number within the range [−a, a]. The convergence factor a decreases gradually from 2 to 0 over the iterations. The formulas for R and a are shown in Eqs. (4) and (5), respectively.

$$R=2a\cdot\mathrm{rand}-a\tag{4}$$
$$a=2\cdot\left(1-\frac{t}{T}\right)\tag{5}$$

where rand is a random number between 0 and 1 and T denotes the maximum number of iterations.

The fundamental equation of the HGS algorithm instructs each individual to adjust its position on the basis of Eq. (1). In Eq. (1), the first case, X(t)·(1 + randn(1)), simulates the few animals that do not participate in collaboration and forage alone; it indicates that the individual randomly searches for food near its current position. The second and third cases, W1·Xb ± R·W2·|Xb − X(t)|, simulate the animals that participate in collaboration, with the group cooperating to forage for food. |Xb − X(t)| denotes the current range of the individual's activity, which is multiplied by W2 to reflect the effect of the hunger level on this range; R is a parameter for controlling the range of the individual's activity and gradually shrinks as the iterations proceed. W1·Xb simulates the case in which the companions of the current individual inform it about a food location, and the current individual goes to obtain the food and then searches around it again.
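For reference, the following minimal Python sketch implements the standard HGS position update summarized in Eq. (1). The function signature and the default value of the threshold parameter l are assumptions; the weights W1, W2, E and R are presumed to be computed as in Eqs. (2)-(10).

```python
import numpy as np

def hgs_position_update(X, Xb, W1, W2, E, R, l=0.03, rng=None):
    """One HGS position update for a single individual (sketch of Eq. (1)).

    X, Xb : current and best positions (1-D arrays); W1, W2 : hunger weights;
    E : variation-control parameter; R : shrinking range factor; l : fixed
    threshold parameter (the default value here is an assumption).
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(), rng.random()
    if r1 < l:
        # A few individuals forage alone near their current position.
        return X * (1.0 + rng.normal())
    gap = np.abs(Xb - X)               # current activity range around the best position
    if r2 > E:
        return W1 * Xb + R * W2 * gap  # cooperative search, moving outward
    return W1 * Xb - R * W2 * gap      # cooperative search, moving inward
```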

Hunger roles

Hunger roles mainly simulate the effects of hunger characteristics on animal foraging behaviours. In this part, the HGS algorithm simulates the above effects with two adaptive hunger weights, W1 and W2, whose computational formulas are shown in Eqs. (6) and (7), respectively.

$$W_1(i)=\begin{cases}\mathrm{hungry}(i)\cdot\dfrac{N}{\mathrm{SHungry}}\cdot r_4, & r_3<l\\[2pt] 1, & r_3>l\end{cases}\tag{6}$$
$$W_2(i)=\bigl(1-e^{-\lvert\mathrm{hungry}(i)-\mathrm{SHungry}\rvert}\bigr)\cdot r_5\cdot 2\tag{7}$$

where i ∈ {1, 2, ..., N} and N is the population size; the random numbers r3, r4 and r5 are chosen from the range [0, 1]; SHungry denotes the sum of the hunger values of all individuals; and the hunger of each individual is denoted as hungry(i) and is calculated according to Eq. (8).

$$\mathrm{hungry}(i)=\begin{cases}0, & F(i)=BF\\[2pt] \mathrm{hungry}(i)+H, & F(i)\neq BF\end{cases}\tag{8}$$

where H denotes the hunger increment.

In each iteration, the hunger degree of the optimal individual is set to 0. For the other individuals, the original starvation level is increased by H. The H values of different individuals are different, the H values of the same individual also differ across iterations, and the formula for calculating H is shown in Eq. (9).

$$H=\begin{cases}LH\cdot(1+r_6), & TH<LH\\[2pt] TH, & TH\geq LH\end{cases}\tag{9}$$

where r6 is a random number chosen from the range [0, 1] and LH is a fixed-value parameter that denotes the lower bound of H.

The formula for TH is shown in Eq. (10).

$$TH=\frac{F(i)-BF}{WF-BF}\cdot r_7\cdot 2\cdot(UB-LB)\tag{10}$$

where r7 is a random number chosen from the range [0, 1]; WF denotes the worst fitness value attained in the current iteration thus far; and UB and LB indicate the upper and lower boundaries of the search space, respectively.
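The hunger bookkeeping and the two adaptive weights can be prototyped as in the sketch below, which follows the standard HGS definitions reconstructed above; the values of l and LH, the minimization convention, and the small constants added to avoid division by zero are assumptions.

```python
import numpy as np

def hunger_weights(fitness, hungry, lb, ub, l=0.03, LH=100.0, rng=None):
    """Update hunger values and compute W1, W2 (sketch of Eqs. (6)-(10)).

    fitness, hungry : 1-D arrays of length N (minimization assumed);
    lb, ub : scalar search bounds; l, LH : parameter values assumed here.
    """
    rng = rng or np.random.default_rng()
    N = len(fitness)
    BF, WF = fitness.min(), fitness.max()          # best and worst fitness
    for i in range(N):
        if fitness[i] == BF:
            hungry[i] = 0.0                        # the best individual is not hungry
        else:
            TH = (fitness[i] - BF) / (WF - BF + 1e-12) * rng.random() * 2 * (ub - lb)
            H = LH * (1 + rng.random()) if TH < LH else TH
            hungry[i] += H                         # hunger increment bounded below by LH
    SHungry = hungry.sum()
    W1 = np.where(rng.random(N) < l,
                  hungry * N / (SHungry + 1e-12) * rng.random(N), 1.0)
    W2 = (1 - np.exp(-np.abs(hungry - SHungry))) * rng.random(N) * 2
    return hungry, W1, W2
```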

Pseudocode for the HGS algorithm

The pseudocode of the HGS algorithm is presented in Algorithm 1.


Algorithm 1 The HGS algorithm.

Dynamic opposite learning

Dynamic opposite learning was proposed in 2020 by Xu et al.67 This approach combines the principles of quasiopposite-based learning and quasireflection-based learning, allowing for dynamic and asymmetrical explorations of solution spaces. This enhances the likelihood of discovering the global optimal solution while avoiding local optima68. The equations for dynamic opposite learning are provided in Eqs. (11) and (12). Equation (11) represents the formula for updating the population position, whereas Eq. (12) denotes the formula for initializing the population.

$$X_{i,j}^{\mathrm{DOL}}=X_{i,j}+w\cdot r_1\cdot\bigl(r_2\cdot(\min_j+\max_j-X_{i,j})-X_{i,j}\bigr)\tag{11}$$

where X_{i,j} represents the j-th-dimensional element of the i-th individual, and X^{DOL}_{i,j} represents the corresponding element after dynamic opposite learning is performed; w is the weighting factor; r1 and r2 are two random numbers chosen from the range [0, 1]; and max_j and min_j denote the maximum and minimum values of the j-th dimension over all individuals in the current iteration, respectively.

$$X_{i,j}^{\mathrm{DOL}}=X_{i,j}+r_3\cdot\bigl(r_4\cdot(UB_j+LB_j-X_{i,j})-X_{i,j}\bigr)\tag{12}$$

where r3 and r4 are two random numbers chosen from the range [0, 1], and UB_j and LB_j denote the j-th-dimensional elements of the upper and lower bounds of the search space, respectively.
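A compact Python sketch of the two dynamic opposite learning rules is given below; the value of the weighting factor and the array-based formulation are assumptions, and any greedy acceptance of the generated candidates is left to the caller.

```python
import numpy as np

def dol_jump(X, w=3.0, rng=None):
    """Dynamic opposite candidates for a population X of shape (N, D) (sketch of Eq. (11)).

    Uses the per-dimension min/max of the current population as dynamic bounds;
    the weighting factor value w is an assumption.
    """
    rng = rng or np.random.default_rng()
    lo, hi = X.min(axis=0), X.max(axis=0)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    opposite = lo + hi - X                 # opposite points within the dynamic bounds
    return X + w * r1 * (r2 * opposite - X)

def dol_init(X, lb, ub, rng=None):
    """Dynamic opposite candidates at initialization (sketch of Eq. (12)), using the search bounds."""
    rng = rng or np.random.default_rng()
    r3, r4 = rng.random(X.shape), rng.random(X.shape)
    return X + r3 * (r4 * (lb + ub - X) - X)
```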

The proposed MHGS algorithm

In this paper, we propose a multistrategy improved hunger games search algorithm (MHGS) and use it to solve global optimization problems, engineering design issues and feature selection problems. The MHGS algorithm contains four improved strategies: a phased position update formula, a reproduction mechanism, an improved out-of-bounds handling mechanism and an elite dynamic opposite learning strategy.

Phased position update formula

In the HGS algorithm, the parameter l is a very small, fixed value; thus, the proportion of individuals that search near their current position X(t) is very small, and most individuals search around the optimal individual position Xb, keep narrowing the search range, and finally converge to Xb. This approach improves the local exploitation ability of the HGS algorithm but overlooks other potentially promising areas, resulting in a narrower search range and a reduced global exploration capacity. Consequently, the stronger local exploitation ability of the HGS algorithm is imbalanced with its weaker global exploration ability.

To address the imbalance between the exploration and exploitation capabilities of the HGS algorithm, a phased position update formula, which divides the search process of the HGS algorithm into three phases, namely, exploration, transition and exploitation, is proposed in this paper.

  1. Exploration phase. When the iteration count t lies in the first stage of the schedule, the algorithm is in the exploration phase. The primary aim of the exploration phase is to broaden the search range of the HGS algorithm and enhance its global exploration ability in the initial stage. In this phase, the pattern of searching around the optimal individual position Xb is changed to searching around the historical optimal position of the current individual, X_pb(t). In the original algorithm, every individual searches around a single Xb in each iteration, which limits the search to a small area. However, since the historical optimal position of each individual is generally distinct, the improved formula allows the algorithm to search in various areas of the search space, thus expanding its search range. Moreover, the corresponding activity-range term is updated accordingly to obtain improved results. Two random individual positions, X_r1(t) and X_r2(t), are introduced to enhance the global exploration ability of the HGS algorithm in the early stages and further expand the search range. The position update formula for the exploration phase is displayed in Eq. (13).

[Eq. (13)]

where X_pb(t) denotes the historical optimal position of the current individual, and X_r1(t) and X_r2(t) denote the positions of two individuals randomly selected from the whole population.

  2. Transition phase. When the iteration count t lies in the middle stage of the schedule, the algorithm is in the transition phase. The primary objective of the transition phase is to achieve a seamless shift from the exploration phase to the exploitation phase. In contrast with the exploration phase, the transition phase should reduce the search scope to increase the convergence speed of the algorithm while simultaneously harmonizing its capacity for global exploration and local exploitation. In this phase, individuals are sorted according to their fitness values and divided into two equally sized parts. One part contains the first half of the individuals, with better fitness values, which is responsible for exploiting the region where the optimal individual is located; these are the “exploiting individuals”. The other part contains the second half of the individuals, with poorer fitness values, which continues to explore promising regions; these are the “exploring individuals”.

An exploiting individual searches around the optimal individual position Xb. Compared with the exploration phase, the formula for an exploiting individual in the transition phase narrows the search scope and introduces the historical optimal position of the individual, X_pb(t), guiding the individual to search in a better direction. Because of the narrowed search scope, an exploiting individual may be at risk of falling into a local optimum; thus, Levy flight is introduced to increase the degree of perturbation and the likelihood of the individual escaping local optima. Equation (14) shows the position update formula for an exploiting individual.

[Eq. (14)]

where s denotes the step size and β is the Levy flight parameter.

Unlike an exploiting individual, an exploring individual searches around its own historical optimal position X_pb(t). Compared with the exploration phase, the formula for exploring individuals in the transition phase narrows the search scope and introduces two random superior individual positions, X_rb1(t) and X_rb2(t), to guide the individuals to search randomly in a better direction. The formula for updating the position of an exploring individual is displayed in Eq. (15).

[Eq. (15)]

where X_rb1(t) and X_rb2(t) denote the positions of two individuals randomly selected from the top half of the individuals with superior fitness values.

  3. Exploitation phase. When the iteration count t lies in the final stage of the schedule, the algorithm is in the exploitation phase. The primary aim of the exploitation phase is to refine the search scope and enhance the local exploitation ability of the algorithm in the later stages. The position update formula for this phase is similar to that of the exploiting individuals in the transition stage, with the difference that the search-range term is further narrowed, which helps to improve the local exploitation ability of the algorithm in the later stages. The position update formula for the exploitation phase is shown in Eq. (16).

[Eq. (16)]

The search ranges of the three aforementioned stages are progressively narrowed, allowing for effective large-scale searching at the outset, a seamless mid-stage transition, and small-scale searching in the final stage. Consequently, the phased position update formula presented in this paper can adequately balance the global exploration and local exploitation capacities of the algorithm.
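Since Eqs. (13)-(16) are described above only in words, the sketch below shows just the control flow of the phased scheme; the phase thresholds (thirds of the iteration budget), the minimization convention, and the callable interfaces standing in for the phase-specific update formulas are assumptions.

```python
import numpy as np

def phased_update(pop, fitness, t, T, explore, exploit_move, explore_move):
    """Dispatch individuals to the three phase-specific update rules.

    pop : (N, D) positions; fitness : (N,) values (minimization assumed);
    explore, exploit_move, explore_move : callables standing in for Eqs. (13)-(16).
    The thresholds T/3 and 2T/3 are assumptions used only for illustration.
    """
    N = len(pop)
    if t < T / 3:                                   # exploration phase: Eq. (13) for everyone
        return np.array([explore(pop[i], pop, i) for i in range(N)])
    if t < 2 * T / 3:                               # transition phase: split by fitness
        order = np.argsort(fitness)                 # better half exploits, worse half explores
        new_pop = pop.copy()
        for rank, i in enumerate(order):
            if rank < N // 2:
                new_pop[i] = exploit_move(pop[i], pop, i)   # Eq. (14), around the best position
            else:
                new_pop[i] = explore_move(pop[i], pop, i)   # Eq. (15), around personal bests
        return new_pop
    # exploitation phase: Eq. (16), a further narrowed variant of the exploiting move
    return np.array([exploit_move(pop[i], pop, i) for i in range(N)])
```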

Reproduction mechanism

The HGS algorithm mainly updates the positions of individuals using the two cooperative cases of Eq. (1), W1·Xb + R·W2·|Xb − X(t)| and W1·Xb − R·W2·|Xb − X(t)|, which are very similar and differ by only a sign. Therefore, the search pattern of the HGS algorithm is rather uniform, which leads to low population diversity.

To address the low population diversity of the HGS algorithm, this work is inspired by the phenomenon of animal reproduction, and a reproduction mechanism is proposed to improve the population diversity of the algorithm. Animal mating and reproduction are natural phenomena that occur in all animal groups in nature. These are essentially necessary activities for maintaining the population size. In a state of extreme starvation, most animals do not mate and may even eat their mates, such as the female praying mantis. Animals in a normal state generally mate and produce offspring, which may be male or female. The offspring are mostly similar to the parent, having partially similar genes; however, with a small probability, a gene mutation may occur. In this way, the offspring become parents in turn, and these new parents in turn become offspring. Animals continue to evolve and mutate as they reproduce from generation to generation, resulting in different species of the same animal and increasing the diversity of the species.

The employed reproduction mechanism simulates the above phenomenon. First, most extremely hungry animals do not reproduce, so the reproduction mechanism is applied only to the first half of the individuals, those with better fitness values. Second, animals in the normal state generally mate and reproduce. Animal mating is simulated in this paper by uniform crossover with a crossover probability of 0.5, and the mating partner is designed in a specific way to further balance the global exploration and local exploitation capabilities of the algorithm, as shown in Eqs. (17) and (18). Next, the offspring may be male or female; one of the two new individual positions generated by uniform crossover is selected with equal probability to simulate the random gender of the offspring, as shown in Eq. (19). Then, a gene mutation may occur with a small probability: the probability of an offspring gene mutation is expressed in Eq. (20), and Eq. (21) simulates the gene mutation of the offspring. Finally, just as animals reproduce in each generation and thereby increase the diversity of the species, the reproduction mechanism is applied in every iteration of the algorithm, which improves its population diversity.

[Eq. (17)]

where X_new1 and X_new2 are the two new individual positions generated by uniform crossover; UX(·,·) denotes the uniform crossover operation with a crossover probability of 0.5; Xb is the optimal individual position obtained thus far in the current iteration; X_rb1, X_rb2 and X_rb3 are individual positions randomly selected from the first half of the individuals with better fitness values; r is a random number chosen from the range [0, 1]; and k is a parameter for adjusting the mating partner, which is calculated as shown in Eq. (18).

[Eq. (18)]
[Eq. (19)]

where X_new is the randomly chosen one of the two new individual positions and r is a random number chosen from the range [0, 1].

[Eq. (20)]

where pm is the probability of gene mutation.

[Eq. (21)]

where j ∈ {1, 2, ..., D} and D is the dimensionality; UB_j and LB_j are the j-th-dimensional elements of the upper and lower bounds of the search space, respectively; X^mut_j is the j-th-dimensional element of the position after the mutation operation and is drawn as a random value within the corresponding bounds whenever the mutation condition in Eq. (21) is triggered; X_new,j is the j-th-dimensional element of X_new; and r1, r2, r3 and r4 are random numbers chosen from the range [0, 1].

The reproduction mechanism closely replicates the process of animal reproduction, and the choice of mating partner and the probability of offspring gene mutation are deliberately designed to enhance algorithm performance, as is evident from Eqs. (17) and (18). During the iterative process, the value of k decreases from 1 to 0. This encourages mating between randomly selected superior individuals in the early iterations, enhancing the ability of the algorithm to explore the space globally and maintain population diversity. In the later stages, mating occurs increasingly between the optimal individual position Xb and a randomly selected superior individual, improving the local exploitation ability. As a result, the design of the mating partner effectively enhances population diversity while balancing the global exploration and local exploitation capabilities of the algorithm. As shown in Eq. (20), the mutation probability pm decreases gradually from 0.5 to 0 with increasing iterations. Consequently, more gene mutations are executed in the early iterations to facilitate exploration and diversify the population, whereas fewer gene mutations are performed in the late iterations to expedite convergence. Thus, the design of the offspring gene mutation probability also enhances the population diversity of the algorithm while helping to maintain a balance between its global exploration and local exploitation capabilities.

Equation (21) draws on the dynamic opposite learning-based population initialization formula (Eq. (12)), with the selected offspring position taking the place of the original individual. This adds more randomness and helps to improve the diversity of the population.

Additionally, in real life, animal reproduction tends to increase the population size, whereas the algorithm must keep the number of individuals fixed. Therefore, we replace the latter half of the individuals, those with poorer fitness values, with the generated offspring to keep the population size unchanged, which can be interpreted as the weaker individuals dying of starvation.

In summary, implementing the reproduction mechanism has the potential to increase the population diversity of the algorithm, leading to greater equilibrium between the global exploration ability and the local exploitation ability of the HGS.
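The reproduction operator can be prototyped as in the following Python sketch. The linear decay of the mating-adjustment parameter k and of the mutation probability pm, the blending rule used to pick the mating partner, and the scalar search bounds are assumptions; the paper's exact Eqs. (17)-(21) are not reproduced here.

```python
import numpy as np

def reproduce(better_half, Xb, t, T, lb, ub, rng=None):
    """Generate offspring from the better half of the population (sketch of Eqs. (17)-(21)).

    better_half : (n, D) positions of the superior individuals; Xb : (D,) best position;
    lb, ub : scalar search bounds (assumed identical across dimensions).
    """
    rng = rng or np.random.default_rng()
    n, D = better_half.shape
    k = 1.0 - t / T              # mating-adjustment parameter, assumed to decay linearly 1 -> 0
    pm = 0.5 * (1.0 - t / T)     # mutation probability, assumed to decay from 0.5 to 0
    offspring = np.empty_like(better_half)
    for i in range(n):
        a, b, c = better_half[rng.integers(n, size=3)]
        # Early iterations favour mating among random superior individuals;
        # later iterations favour mating with the best individual Xb (assumed blend).
        mate = k * b + (1.0 - k) * Xb if rng.random() < 0.5 else c
        # Uniform crossover with probability 0.5 produces two children; pick one at random.
        mask = rng.random(D) < 0.5
        child1 = np.where(mask, a, mate)
        child2 = np.where(mask, mate, a)
        child = child1 if rng.random() < 0.5 else child2
        # Gene mutation with a small, decaying probability: redraw within the bounds.
        mutate = rng.random(D) < pm
        child[mutate] = lb + rng.random(mutate.sum()) * (ub - lb)
        offspring[i] = child
    return offspring
```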

An improved out-of-bounds handling mechanism

When a dimension of an individual's position exceeds the boundary of the search space, the HGS algorithm directly sets it to the boundary value. Although this out-of-bounds handling mechanism confines the position of the individual to the feasible region, it wastes computational resources and reduces the activity level of the population. If boundary violations occur frequently, the clipped individuals become too similar to one another, hindering the search for the global optimum. Therefore, the out-of-bounds handling mechanism of the HGS algorithm is unreasonable.

To address the unreasonable out-of-bounds handling mechanism of the HGS algorithm, an improved out-of-bounds handling mechanism is proposed in this paper to transfer the out-of-bounds individuals to a more reasonable place, resulting in improved algorithmic performance. The improved out-of-bounds handling mechanism is shown in Eq. (22).

[Eq. (22)]

where i ∈ {1, ..., N} and N is the population size; j ∈ {1, ..., D} and D is the dimensionality; X_{i,j} is the j-th-dimensional element of the i-th individual; X'_{i,j} is the value of X_{i,j} obtained after processing with the improved out-of-bounds handling mechanism; Xb_j is the j-th-dimensional element of the position of the optimal individual; Xpb_{i,j} is the j-th-dimensional element of the historically optimal position of the i-th individual; Xmode_j is the mode (plurality) of the j-th-dimensional elements of the historically optimal positions of all individuals; and Xmed_j is the median of the j-th-dimensional elements of the historically optimal positions of all individuals.

The improved out-of-bounds handling mechanism makes full use of the historical optimal positions of all individuals and refers to the optimal individual positions, which can transfer the out-of-bounds individuals to more reasonable places. Consequently, the likelihood of discovering the global optimal solution is improved. Compared with the original out-of-bounds handling mechanism, the improved out-of-bounds handling mechanism does not waste computational resources, which increases the convergence speed and overall performance of the algorithm.
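Because Eq. (22) is not reproduced above, the sketch below conveys only the general idea: each out-of-bounds component is replaced by one of several informative candidates (the best position, the individual's own historical best, and the per-dimension mode and median of all historical bests). The uniform random choice among these candidates is an assumption.

```python
import numpy as np

def column_mode(A):
    """Per-column mode (most frequent value); ties are resolved by the smallest value."""
    modes = []
    for col in A.T:
        vals, counts = np.unique(col, return_counts=True)
        modes.append(vals[np.argmax(counts)])
    return np.array(modes)

def repair_out_of_bounds(X, Xpb, Xb, lb, ub, rng=None):
    """Redirect out-of-bounds components to promising values (sketch of Eq. (22)).

    X : (N, D) current positions; Xpb : (N, D) personal historical bests;
    Xb : (D,) best position found so far; lb, ub : scalar or (D,) bounds.
    """
    rng = rng or np.random.default_rng()
    mode_j = column_mode(Xpb)              # per-dimension mode of the personal bests
    med_j = np.median(Xpb, axis=0)         # per-dimension median of the personal bests
    X = X.copy()
    out = (X < lb) | (X > ub)              # mask of violated components
    for i, j in zip(*np.nonzero(out)):
        candidates = (Xb[j], Xpb[i, j], mode_j[j], med_j[j])
        X[i, j] = candidates[rng.integers(len(candidates))]   # assumed selection rule
    return X
```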

Elite dynamic opposite learning strategy

In the HGS algorithm, the majority of individuals search for and move towards the optimal individual position Xb as the number of iterations increases. If this position falls into a local optimum, most individuals will follow and fall into it as well. Only the few individuals that search near their current position X(t) can help the population escape, and this blind exploration is inefficient in the later iterations because the individuals have already gathered together. The HGS algorithm is therefore at risk of falling into local optima, as only a few individuals can jump out of them, and it lacks an effective mechanism for escaping local optima.

To address the susceptibility of the HGS algorithm to local optima, an elite dynamic opposite learning strategy is proposed herein to increase the likelihood of the algorithm escaping local optima. Unlike dynamic opposite learning, the elite dynamic opposite learning strategy is not applied to all individuals but only to the optimal individual position Xb, which reduces the computational cost. Moreover, instead of the dynamic opposite learning formula for updating the population position (Eq. (11)), the strategy employs the dynamic opposite learning formula for population initialization (Eq. (12)). The former dynamically updates the upper and lower bounds of the search space, causing the search area to narrow; this accelerates convergence in the later iterations but is not beneficial for escaping local optima and can even cause the algorithm to fall into them. Therefore, the latter formula, which covers a larger search area and is more promising for escaping local optima, is adopted in this study. The formula for the elite dynamic opposite learning strategy is shown in Eq. (23).

[Eq. (23)]

where j ∈ {1, ..., D} and D is the dimensionality; Xb_j is the j-th-dimensional element of the position of the optimal individual; Xb^{EDOL}_j is the value of Xb_j obtained after performing elite dynamic opposite learning; and the two random numbers in the formula are chosen from the range [0, 1].

Compared with dynamic opposite learning, the elite dynamic opposite learning strategy reduces the computational complexity, expands the search space, and can increase the likelihood of the algorithm escaping local optima.
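A minimal Python sketch of the elite strategy is given below: the initialization form of dynamic opposite learning (Eq. (12)) is applied to the best position only, and the candidate replaces the elite if it improves the objective. The greedy acceptance and the clipping step are assumptions.

```python
import numpy as np

def elite_dol(Xb, f, lb, ub, rng=None):
    """Apply dynamic opposite learning to the elite position only (sketch of Eq. (23)).

    Xb : (D,) best position; f : objective function (minimization assumed);
    lb, ub : scalar or (D,) search bounds.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(Xb.shape), rng.random(Xb.shape)
    candidate = Xb + r1 * (r2 * (lb + ub - Xb) - Xb)   # DOL initialization form, elite only
    candidate = np.clip(candidate, lb, ub)             # keep the candidate feasible (assumed)
    return candidate if f(candidate) < f(Xb) else Xb   # greedy replacement (assumed)
```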

Discussion of the design principles and advantages of the MHGS

While single-strategy improvements are limited by their narrow focus on specific deficiencies (e.g., only enhancing the exploration or exploitation tasks) and multistrategy improvements may introduce excessive complexity or inefficiency, the MHGS is designed to systematically address these challenges through the following principles.

  1. Balanced exploration and exploitation: The phased position update formula divides the search process into exploration, transition, and exploitation phases, ensuring a dynamic balance between global exploration and local exploitation. This avoids the common pitfalls of premature convergence or excessive computational waste encountered by single-strategy approaches.

  2. Enhanced Population Diversity: The reproduction mechanism simulates biological evolution by introducing crossover and mutation operations, which mitigates the low diversity issue in the HGS. Unlike single-strategy methods that rely on one operator (e.g., chaos theory), the MHGS combines multiple biologically inspired mechanisms to sustain diversity throughout the iterative process.

  3. Efficient Resource Utilization: The improved out-of-bounds handling mechanism repositions individuals using historical and elite information, reducing computational waste compared with that induced by the naive boundary restriction in the HGS. This addresses the inefficiency noted in the previously developed multistrategy methods that overlook boundary dynamics.

Pseudocode and flowchart of the MHGS algorithm

The pseudocode for the MHGS algorithm is presented in Algorithm 2.


Algorithm 2 MHGS algorithm.

The flowchart for the MHGS algorithm is displayed in Fig. 1.

Fig. 1. Flowchart for the MHGS algorithm.

Computational complexity analysis

The computational complexity of the MHGS algorithm is analysed from both temporal and spatial perspectives.

Temporal complexity

The temporal complexity of the MHGS depends primarily on the population size N, the dimensionality D, and the maximum number of iterations T. The key operations are as follows.

  1. Initialization: Generating N individuals in D-dimensional space requires O(N*D).

  2. Boundary Handling: The improved boundary mechanism in Eq. (22) processes N individuals, each with D dimensions, resulting in O(N*D).

  3. Fitness Evaluation: Assuming that the cost of evaluating one fitness function is O(F), the total cost per iteration is O(N*F).

  4. Elite Dynamic Opposite Learning: Updating the elite individual (Eq. (23)) involves D operations, yielding O(D).

  5. Reproduction Mechanism: Performing uniform crossover and mutation operations (Eqs. (17)-(21)) for N/2 individuals requires O(N*D).

  6. Weight Calculations: Computing E, R, W1, and W2 (Eqs. (2)-(10)) costs O(N*D).

  7. Phased Position Update: Each position update (Eqs. (13)-(16)) for N individuals in D dimensions costs O(N*D).

Combining these steps, the temporal complexity per iteration is O(N*D + N*F). For T iterations, the total temporal complexity is O(T*N*(D + F)). If F is dominated by D (e.g., F = O(D)), the complexity simplifies to O(T*N*D).

Spatial complexity

The MHGS stores positions, historical optimal positions, and auxiliary parameters for N individuals, leading to O(N*D) space.

Comparison with the HGS

The original HGS has a temporal complexity of O(T*N*D). The MHGS introduces additional operations (e.g., reproduction and elite learning), but these do not increase the asymptotic order. The phased update strategy balances exploration and exploitation without an extra asymptotic cost. Thus, the MHGS maintains competitive efficiency while providing enhanced performance.

Convergence analysis

To theoretically verify the convergence of the MHGS, we provide a mathematical proof under the stochastic optimization framework.

Assumption 1

(Compact Search Space): The search space Ω ⊂ ℝᴰ is compact, and the objective function f: Ω → ℝ is continuous and measurable.

Assumption 2

(Elitism Preservation): The elite dynamic oppositional learning strategy ensures that the best solution sequence {f(Xb(t))} is non-increasing, i.e.,

$$f\bigl(X_b(t+1)\bigr)\leq f\bigl(X_b(t)\bigr),\quad\forall t\geq 0.$$

Step 1: State Space Construction

Define the algorithm state at iteration t as S_t = {X(t), X_pb(t)}, where X(t) is the population and X_pb(t) denotes the historical best positions. Under Assumption 1, Ω is compact, ensuring that {S_t} forms a homogeneous Markov chain.

Step 2: Markov Property Verification

The iterative update rules of the MHGS (phased position update, reproduction, and boundary handling) depend only on the current state S_t, satisfying the memoryless property:

$$P\bigl(S_{t+1}\in A\mid S_t,S_{t-1},\ldots,S_0\bigr)=P\bigl(S_{t+1}\in A\mid S_t\bigr),$$

where A is any measurable set. This confirms that {S_t} is a time-homogeneous Markov chain.

Step 3: Convergence via Drift Analysis

We employ the drift condition from stochastic process theory to prove convergence. Define the drift ∆t as:

$$\Delta_t=\mathbb{E}\bigl[f\bigl(X_b(t+1)\bigr)-f\bigl(X_b(t)\bigr)\mid S_t\bigr].$$

By Assumption 2, ∆t ≤ 0. Furthermore, the reproduction mechanism and elite oppositional learning guarantee sufficient exploration, satisfying the geometric periodicity condition.

Step 4: Global Convergence Theorem

According to the First-Hitting Time Theorem, for any ε > 0, the probability that MHGS enters the ε-neighborhood of the global optimum f* satisfies:

$$\lim_{t\to\infty}P\bigl(\lvert f\bigl(X_b(t)\bigr)-f^{*}\rvert<\varepsilon\bigr)=1.$$

This establishes the almost sure convergence of MHGS to the global optimum.

Discussion of Limitations

The convergence guarantee holds under Assumptions 1–2. In practice, the MHGS may face challenges in nonconvex landscapes with deceptive local optima. However, the phased exploration-exploitation balance and oppositional learning mitigate this risk by maintaining population diversity.

Methodological advancements

The MHGS overcomes the limitations of traditional improvement methods by systematically integrating the proposed multistrategy co-optimization mechanism. To address the exploration-exploitation imbalance, insufficient population diversity, and local-optimum traps of the original HGS algorithm, a phased adaptive search framework is proposed: it adopts a wide-area exploration strategy guided by historical optima in the early stage, maintains population activity through a dynamic reproduction mechanism in the middle stage, and integrates elite oppositional learning to realize a fine-grained search in the later stage. The improved boundary handling mechanism integrates individual historical experience and group information, which significantly improves the effective utilization of out-of-bounds solutions. The experimental validation shows that the algorithm exhibits better global search capability and stability on a variety of benchmark functions and engineering optimization problems, and the synergy of its multistage strategies effectively balances convergence speed and solution quality, providing a more comprehensive performance enhancement than traditional single-strategy improvements.

Experimental results and analyses

All the experiments in this paper are run on an AMD Ryzen 7 5800H CPU with Radeon graphics (3.20 GHz base clock), 16.0 GB of RAM, and Windows 10 (64-bit), using MATLAB R2021a. This section evaluates the ability of the MHGS algorithm to solve different optimization problems, namely, benchmark test functions, the CEC2017 test set, engineering design problems, and feature selection problems.

Experiment 1: benchmark test functions

To demonstrate the superiority of the MHGS algorithm, it is compared with eight other well-known metaheuristic algorithms, including the HGS42, CDO67, SCSO69, the AOA70, AGWO71, the WOA23, the GWO22, and PSO42. Among them, the HGS is the standard hunger games search algorithm; PSO is a classic metaheuristic algorithm; the GWO, the WOA and the AOA are popular recent metaheuristic algorithms; SCSO and CDO are recently proposed metaheuristic algorithms; and AGWO is an augmented grey wolf optimization algorithm. The parameter configurations for all the aforementioned algorithms are presented in Table 1, and the settings of each comparative algorithm match the respective references listed in the table.

Table 1.

Algorithmic parameter settings.

Algorithm Parameters Year Reference
MHGS Inline graphic -- --
HGS Inline graphic 2021 34
CDO Inline graphic, Inline graphic, Inline graphic 2023 58
SCSO Inline graphic 2022 59
AOA Inline graphic 2021 60
AGWO Inline graphic 2018 61
WOA Inline graphic 2016 21
GWO Inline graphic 2014 20
PSO Inline graphic 1995 34

To ensure a fair comparison in a consistent experimental environment, all the algorithms are configured with a population size of 30 and a maximum of 1000 iterations. Each algorithm is run independently 30 times, and the evaluation indices are the average fitness and standard deviation over the 30 independent runs. Table 2 displays the experimental results.
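The statistics reported in Table 2 are therefore means and standard deviations over 30 independent runs. The Python sketch below illustrates this protocol; the function names mhgs and sphere and the optimizer signature are hypothetical stand-ins.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark (the classic F1 test function), used here only as an illustrative objective."""
    return float(np.sum(x ** 2))

def evaluate(optimizer, func, dim, lb, ub, runs=30, pop=30, iters=1000, seed=0):
    """Report the mean and standard deviation of the best fitness over independent runs."""
    best = []
    for r in range(runs):
        rng = np.random.default_rng(seed + r)
        # Hypothetical optimizer signature: returns the best fitness found in one run.
        best.append(optimizer(func, dim, lb, ub, pop, iters, rng))
    return np.mean(best), np.std(best)

# Usage (assuming an `mhgs` function with the signature above):
# mean, std = evaluate(mhgs, sphere, dim=30, lb=-100, ub=100)
```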

Table 2.

Experimental results obtained on benchmark test functions.

Function/Dim/Measures MHGS HGS CDO SCSO AOA AGWO WOA GWO PSO DEAH
F1 30 Mean 0.00E + 00 3.27e-311 1.42E-261 4.50E-234 2.73E-108 3.39E-91 8.07E-150 3.51E-59 1.29E + 02 2.99e + 10
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.49E-107 1.17E-90 4.36E-149 4.79E-59 1.34E + 01 6.8e + 09
100 Mean 0.00E + 00 4.90e-322 1.03E-236 1.36E-214 1.73E-02 1.55E-41 8.06E-146 2.65E-29 1.32E + 03 2.060E + 04
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 7.82E-03 3.62E-41 4.41E-145 3.32E-29 1.14E + 02 1.5E + 04
500 Mean 0.00E + 00 1.98E-306 2.05E-222 5.91E-204 5.95E-01 1.04E-18 4.90E-144 1.57E-12 3.30E + 04 -
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 4.08E-02 7.86E-19 2.59E-143 7.08E-13 2.16E + 03 -
F2 30 Mean 0.00E + 00 2.33E-155 2.13E-133 3.95E-122 0.00E + 00 6.50E-55 1.31E-104 7.90E-35 6.54E + 01 -
Std 0.00E + 00 1.27E-154 4.39E-133 1.23E-121 0.00E + 00 1.22E-54 5.28E-104 7.60E-35 1.42E + 01 -
100 Mean 0.00E + 00 5.46E-162 2.04E-121 1.02E-110 5.61E-91 7.94E-27 2.60E-102 5.67E-18 2.05E + 21 -
Std 0.00E + 00 2.99E-161 2.17E-121 5.37E-110 3.07E-90 8.05E-27 9.30E-102 2.60E-18 1.03E + 22 -
500 Mean 0.00E + 00 2.99E + 264 5.98E-113 1.90E-107 2.43E-04 1.52E-12 1.49E-100 5.78E-08 4.15E + 138 -
Std 0.00E + 00 Inf 5.39E-113 9.96E-107 5.63E-04 8.25E-13 4.80E-100 1.70E-08 2.11E + 139 -
F3 30 Mean 0.00E + 00 1.67E-170 1.52E-202 3.49E-197 7.84E-03 1.04E-24 2.06E + 04 6.81E-14 3.92E + 02 3.000E + 02
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.49E-02 2.25E-24 1.01E + 04 3.31E-13 9.51E + 01 2.0E-05
100 Mean 0.00E + 00 3.12E-85 7.55E-122 4.78E-184 4.14E-01 1.02E + 02 8.39E + 05 1.57E + 01 2.40E + 04 3.000E + 02
Std 0.00E + 00 1.71E-84 4.14E-121 0.00E + 00 2.14E-01 3.56E + 02 1.69E + 05 2.97E + 01 4.89E + 03 3.8E-02
500 Mean 0.00E + 00 1.79E-29 1.56E-16 1.18E-176 3.49E + 01 7.19E + 05 3.01E + 07 1.30E + 05 6.88E + 05 -
Std 0.00E + 00 9.78E-29 8.53E-16 0.00E + 00 2.02E + 01 2.76E + 05 7.93E + 06 4.94E + 04 1.67E + 05 -
F4 30 Mean 0.00E + 00 6.49E-132 9.47E-123 6.08E-100 1.80E-02 1.03E-23 3.34E + 01 1.68E-14 4.54E + 00 4.384E + 02
Std 0.00E + 00 3.56E-131 2.43E-122 3.19E-99 1.97E-02 3.82E-23 3.13E + 01 3.00E-14 3.18E-01 3.5E + 01
100 Mean 0.00E + 00 2.32E-146 1.27E-113 3.78E-96 8.85E-02 7.87E + 00 7.75E + 01 4.15E-03 1.54E + 01 6.399E + 02
Std 0.00E + 00 8.66E-146 4.52E-113 1.43E-95 1.04E-02 2.13E + 01 2.13E + 01 1.19E-02 1.59E + 00 4.6E + 01
500 Mean 0.00E + 00 2.10E-139 6.79E-96 3.33E-93 1.71E-01 9.92E + 01 7.78E + 01 5.71E + 01 3.22E + 01 -
Std 0.00E + 00 1.13E-138 3.50E-95 8.24E-93 1.06E-02 2.53E-01 2.45E + 01 6.31E + 00 3.01E + 00 -
F5 30 Mean 2.82E-03 1.45E + 01 2.73E + 01 2.81E + 01 2.83E + 01 2.67E + 01 2.73E + 01 2.70E + 01 1.38E + 05 6.754E + 02
Std 4.59E-03 1.20E + 01 2.56E-01 9.69E-01 3.23E-01 4.67E-01 5.52E-01 7.37E-01 3.78E + 04 4.2E + 01
100 Mean 5.23E-02 6.02E + 01 9.81E + 01 9.84E + 01 9.88E + 01 9.79E + 01 9.77E + 01 9.74E + 01 5.58E + 06 1.397E + 03
Std 1.19E-01 4.66E + 01 3.15E-01 4.54E-01 9.29E-02 5.54E-01 3.90E-01 6.84E-01 1.34E + 06 1.0E + 02
500 Mean 1.52E-01 3.95E + 02 4.97E + 02 4.98E + 02 4.99E + 02 4.98E + 02 4.96E + 02 4.98E + 02 2.63E + 08 -
Std 4.33E-01 2.01E + 02 7.38E-02 2.90E-01 6.63E-02 1.69E-01 3.91E-01 3.99E-01 5.73E + 07 -
F6 30 Mean 5.99E-08 8.96E-07 7.50E + 00 1.84E + 00 2.81E + 00 1.21E + 00 1.20E-01 6.70E-01 1.32E + 02 6.334E + 02
Std 1.05E-07 1.37E-06 0.00E + 00 5.58E-01 2.51E-01 3.43E-01 1.20E-01 3.30E-01 1.53E + 01 6.0E + 00
100 Mean 5.56E-05 8.16E-05 2.50E + 01 1.34E + 01 1.74E + 01 1.40E + 01 1.84E + 00 9.39E + 00 1.32E + 03 6.556E + 02
Std 1.32E-04 1.56E-04 0.00E + 00 1.33E + 00 6.20E-01 5.55E-01 6.45E-01 8.92E-01 1.16E + 02 4.5E + 00
500 Mean 2.56E-04 3.68E-04 1.25E + 02 1.02E + 02 1.15E + 02 1.09E + 02 2.03E + 01 9.34E + 01 3.28E + 04 -
Std 2.78E-04 5.95E-04 0.00E + 00 4.10E + 00 1.05E + 00 9.04E-01 6.64E + 00 2.23E + 00 2.70E + 03 -
F7 30 Mean 3.31E-04 3.89E-04 4.47E-05 7.80E-05 2.59E-05 6.23E-04 1.82E-03 8.70E-04 9.66E + 01 1.113E + 03
Std 3.23E-04 6.14E-04 3.77E-05 1.06E-04 3.48E-05 6.18E-04 1.88E-03 4.62E-04 2.83E + 01 1.2E + 02
100 Mean 3.78E-04 5.02E-04 6.01E-05 8.83E-05 2.91E-05 2.00E-03 2.62E-03 2.78E-03 1.92E + 03 3.604E + 03
Std 2.91E-04 7.45E-04 4.40E-05 1.11E-04 2.24E-05 1.15E-03 2.97E-03 1.27E-03 1.26E + 02 3.4E + 02
500 Mean 5.19E-04 6.69E-04 9.04E-05 1.35E-04 6.01E-05 9.65E-03 1.45E-03 1.08E-02 5.80E + 04 -
Std 4.38E-04 9.01E-04 6.97E-05 2.11E-04 4.73E-05 5.43E-03 1.69E-03 2.86E-03 2.09E + 03 -
F8 30 Mean −1.26E + 04 −1.26E + 04 −4.16E + 03 −6.58E + 03 −5.74E + 03 −3.74E + 03 −1.14E + 04 −6.36E + 03 −6.85E + 03 9.541E + 02
Std 5.17E-03 1.32E-01 3.20E + 02 9.41E + 02 4.21E + 02 3.08E + 02 1.49E + 03 8.08E + 02 7.60E + 02 4.1E + 01
100 Mean −4.19E + 04 −4.19E + 04 −6.74E + 03 −1.99E + 04 −1.10E + 04 −6.70E + 03 −3.72E + 04 −1.66E + 04 −2.22E + 04 1.772E + 03
Std 4.98E + 00 9.86E + 00 6.02E + 02 2.16E + 03 7.93E + 02 4.18E + 02 5.27E + 03 1.31E + 03 1.87E + 03 1.6E + 02
500 Mean −2.09E + 05 −2.09E + 05 −1.37E + 04 −6.96E + 04 −2.45E + 04 −1.50E + 04 −1.88E + 05 −5.81E + 04 −9.68E + 04 -
Std 1.95E + 01 5.90E + 01 1.18E + 03 5.61E + 03 1.36E + 03 1.10E + 03 2.55E + 04 1.33E + 04 1.79E + 04 -
F9 30 Mean 0.00E + 00 0.00E + 00 1.37E + 02 0.00E + 00 0.00E + 00 1.89E-15 0.00E + 00 4.25E-01 3.43E + 02 4.413E + 03
Std 0.00E + 00 0.00E + 00 9.90E + 01 0.00E + 00 0.00E + 00 1.04E-14 0.00E + 00 1.32E + 00 2.37E + 01 1.3E + 03
100 Mean 0.00E + 00 0.00E + 00 6.50E + 00 0.00E + 00 0.00E + 00 2.65E-14 7.58E-15 1.31E + 00 1.40E + 03 2.822E + 04
Std 0.00E + 00 0.00E + 00 6.12E + 00 0.00E + 00 0.00E + 00 7.12E-14 4.15E-14 2.81E + 00 5.89E + 01 4.5E + 03
500 Mean 0.00E + 00 0.00E + 00 5.41E + 00 0.00E + 00 7.55E-07 3.77E-06 0.00E + 00 6.09E + 00 7.47E + 03 -
Std 0.00E + 00 0.00E + 00 4.56E + 00 0.00E + 00 2.32E-06 2.06E-05 0.00E + 00 5.85E + 00 1.47E + 02 -
F10 30 Mean 8.88E-16 8.88E-16 4.44E-15 8.88E-16 8.88E-16 7.28E-15 4.56E-15 1.63E-14 8.53E + 00 4.712E + 03
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.45E-15 1.74E-15 3.00E-15 3.11E-01 7.6E + 02
100 Mean 8.88E-16 8.88E-16 4.44E-15 8.88E-16 7.34E-05 2.01E-14 4.09E-15 1.09E-13 1.20E + 01 1.603E + 04
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 4.02E-04 4.44E-15 2.85E-15 9.22E-15 3.13E-01 1.4E + 03
500 Mean 8.88E-16 8.88E-16 4.44E-15 8.88E-16 7.65E-03 5.96E-11 4.20E-15 5.06E-08 1.64E + 01 -
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 5.55E-04 3.43E-11 2.46E-15 1.03E-08 4.56E-01 -
F11 30 Mean 0.00E + 00 0.00E + 00 2.59E-03 0.00E + 00 1.02E-01 0.00E + 00 0.00E + 00 2.60E-03 1.03E + 00 1.279E + 03
Std 0.00E + 00 0.00E + 00 4.81E-03 0.00E + 00 6.24E-02 0.00E + 00 0.00E + 00 5.72E-03 8.75E-03 6.4E + 01
100 Mean 0.00E + 00 0.00E + 00 2.37E-03 0.00E + 00 4.47E + 02 0.00E + 00 0.00E + 00 0.00E + 00 1.34E + 00 2.503E + 03
Std 0.00E + 00 0.00E + 00 4.73E-03 0.00E + 00 1.54E + 02 0.00E + 00 0.00E + 00 0.00E + 00 2.62E-02 3.5E + 02
500 Mean 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 7.96E + 03 6.84E-04 0.00E + 00 1.79E-13 1.09E + 01 -
Std 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.81E + 03 3.75E-03 0.00E + 00 8.40E-14 7.29E-01 -
F12 30 Mean 6.74E-10 8.53E-09 1.29E + 00 6.72E-02 3.75E-01 8.97E-02 6.62E-03 3.74E-02 4.88E + 00 1.926E + 04
Std 1.21E-09 1.14E-08 2.70E-01 2.64E-02 4.68E-02 2.06E-02 6.68E-03 2.41E-02 6.89E-01 1.4E + 04
100 Mean 1.32E-06 1.41E-06 1.27E + 00 3.14E-01 8.61E-01 4.50E-01 1.53E-02 2.42E-01 1.13E + 03 1.343E + 06
Std 2.68E-06 3.20E-06 8.10E-02 7.71E-02 1.95E-02 5.92E-02 5.79E-03 5.28E-02 2.05E + 03 5.1E + 05
500 Mean 1.20E-06 1.99E-02 1.21E + 00 7.22E-01 1.07E + 00 9.31E-01 3.98E-02 7.45E-01 1.67E + 07 -
Std 2.83E-06 1.09E-01 8.57E-03 6.27E-02 1.32E-02 1.79E-02 1.37E-02 2.77E-02 5.87E + 06 -
F13 30 Mean 4.10E-08 3.25E-03 3.92E-01 2.33E + 00 2.80E + 00 1.05E + 00 2.59E-01 5.53E-01 2.33E + 01 -
Std 6.31E-08 1.78E-02 5.87E-02 3.73E-01 1.06E-01 2.33E-01 2.13E-01 2.39E-01 3.84E + 00 -
100 Mean 3.60E-05 3.55E-01 4.62E + 00 9.67E + 00 9.94E + 00 7.88E + 00 1.76E + 00 6.40E + 00 2.01E + 05 -
Std 8.62E-05 1.35E + 00 3.34E-01 1.54E-01 5.89E-02 2.09E-01 7.74E-01 4.54E-01 1.05E + 05 -
500 Mean 9.98E-05 4.66E + 00 4.36E + 01 4.98E + 01 5.02E + 01 4.85E + 01 1.14E + 01 4.61E + 01 1.09E + 08 -
Std 2.08E-04 1.42E + 01 9.81E-01 7.09E-02 2.36E-02 2.52E-01 4.28E + 00 5.95E-01 3.43E + 07 -
F14 2 Mean 9.98E-01 2.95E + 00 1.27E + 01 4.72E + 00 1.00E + 01 2.57E + 00 1.69E + 00 3.41E + 00 3.16E + 00 -
Std 0.00E + 00 3.97E + 00 2.33E + 00 4.10E + 00 3.56E + 00 2.43E + 00 1.86E + 00 3.82E + 00 3.19E + 00 -
F15 4 Mean 3.08E-04 5.84E-04 3.26E-04 3.60E-04 1.36E-02 2.41E-03 5.79E-04 2.98E-03 5.50E-03 -
Std 8.76E-08 2.61E-04 1.05E-05 1.72E-04 2.82E-02 6.09E-03 2.49E-04 6.93E-03 7.77E-03 -
F16 2 Mean −1.03E + 00 −1.03E + 00 −1.00E + 00 −1.03E + 00 −1.03E + 00 −1.03E + 00 −1.03E + 00 −1.03E + 00 −1.03E + 00 -
Std 6.78E-16 6.78E-16 7.36E-03 1.40E-10 8.93E-08 2.55E-06 2.05E-10 5.64E-09 3.63E-03 -
F17 2 Mean 3.98E-01 3.98E-01 4.01E-01 3.98E-01 4.05E-01 3.98E-01 3.98E-01 3.98E-01 3.99E-01 -
Std 0.00E + 00 0.00E + 00 1.43E-02 3.09E-08 5.58E-03 3.97E-04 1.14E-06 4.87E-07 6.67E-04 -
F18 2 Mean 3.00E + 00 3.00E + 00 4.00E + 01 3.00E + 00 7.50E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.05E + 00 -
Std 1.57E-15 1.86E-15 3.67E + 01 9.29E-07 1.02E + 01 8.67E-06 1.38E-05 1.04E-05 4.74E-02 -
F19 3 Mean −3.86E + 00 −3.86E + 00 −3.86E + 00 −3.86E + 00 −3.85E + 00 −3.86E + 00 −3.86E + 00 −3.86E + 00 −3.86E + 00 -
Std 2.71E-15 2.71E-15 1.36E-03 2.96E-03 2.83E-03 3.69E-03 3.26E-03 2.59E-03 2.89E-03 -
F20 6 Mean −3.32E + 00 −3.27E + 00 −3.26E + 00 −3.21E + 00 −3.12E + 00 −3.17E + 00 −3.26E + 00 −3.26E + 00 −3.06E + 00 -
Std 1.36E-15 5.92E-02 4.28E-02 1.65E-01 7.13E-02 1.02E-01 8.16E-02 1.01E-01 1.12E-01 -
F21 4 Mean −1.02E + 01 −9.98E + 00 −7.06E + 00 −5.41E + 00 −3.82E + 00 −7.24E + 00 −8.18E + 00 −9.06E + 00 −5.63E + 00 -
Std 6.39E-15 9.31E-01 1.23E + 00 2.52E + 00 1.17E + 00 1.56E + 00 2.67E + 00 2.26E + 00 1.56E + 00 -
F22 4 Mean −1.04E + 01 −1.04E + 01 −6.98E + 00 −5.89E + 00 −4.08E + 00 −8.44E + 00 −8.07E + 00 −1.02E + 01 −5.80E + 00 -
Std 8.73E-16 9.90E-16 8.29E-01 2.08E + 00 1.67E + 00 1.16E + 00 2.95E + 00 9.70E-01 2.03E + 00 -
F23 4 Mean −1.05E + 01 −1.04E + 01 −7.37E + 00 −6.75E + 00 −4.07E + 00 −8.20E + 00 −7.38E + 00 −1.03E + 01 −5.76E + 00 -
Std 8.73E-16 9.87E-01 1.20E + 00 2.52E + 00 9.86E-01 1.53E + 00 3.34E + 00 1.48E + 00 1.86E + 00 -

As Table 2 shows, the MHGS algorithm improves upon the solution accuracy and stability of the HGS algorithm, indicating that the improvement strategies used in the MHGS algorithm are effective. Compared with all the other considered algorithms, the MHGS algorithm stands out because of its high solution accuracy on both the unimodal and multimodal functions (excluding F7), and increasing the dimensionality has no impact on its superiority. Moreover, concerning stability, the MHGS algorithm reaches the optimal level on 20 of the benchmark test functions (all except F5, F6 and F7). Compared with the other algorithms, the MHGS algorithm therefore achieves excellent solution accuracy and stability and is highly competitive. The following paragraphs analyse the solution outcomes in four distinct cases, namely, those with 30, 100, 500 and fixed dimensions.

In the 30-dimensional case, the MHGS algorithm produces optimal average fitness values on 12 of the 13 benchmark test functions and optimal standard deviations on 11 of them. Therefore, the MHGS algorithm has high solution accuracy and stability in the low-dimensional case. In terms of solving the unimodal functions, the MHGS algorithm achieves better average fitness and standard deviation values than the HGS algorithm does across F1 ~ F7, with many improvements spanning several orders of magnitude; this gives the HGS framework greater stability and a stronger local exploitation ability and reflects the effectiveness of the improvement strategy in low-dimensional scenarios. Overall, the average fitness and standard deviation values of the MHGS algorithm are better than those of the other algorithms, and the theoretical optima are reached on F1, F2, F3 and F4. These results suggest that the MHGS algorithm has a stronger local exploitation ability and is more stable in low-dimensional situations. When solving the multimodal functions, the MHGS algorithm achieves better average fitness and standard deviation values than the HGS algorithm does on F8 ~ F13, with substantial improvements on multiple functions, suggesting that in low-dimensional cases, the improvement strategy markedly enhances the global exploration ability, the local-optimum escaping ability, and the stability of the HGS algorithm. Compared with all the other algorithms, the MHGS algorithm has the best mean fitness and standard deviation values on F8 ~ F13; the mean fitness and standard deviation values on F9 and F11, as well as the mean fitness on F8, reach the optimal values. The MHGS achieves zero-error convergence (mean = 0.00E + 00) on F1 ~ F4, F9 and F11, whereas DEAH shows large deviations on several functions (e.g., F3 and F8 in Table 2). These results suggest that the MHGS algorithm is capable of global exploration and can escape local optima in low-dimensional scenarios while displaying a high level of stability. Compared with jSO, the winner of the CEC 2017 competition, which exhibits performance fluctuations with mean values ranging from 0.0881E-16 (F8) to 0.0618E + 01 (F9) and a maximum performance gap of 418.98% on F8 (generalized Schwefel), suggesting sensitivity to the function modality, the MHGS maintains zero-error solutions even on complex multimodal functions (e.g., F9 and F11), demonstrating a better exploration–exploitation balance than jSO (which has a 0.0047E + 01 mean error on F12, the generalized penalized function)72.

In the 100-dimensional case, the MHGS algorithm produces optimal average fitness values on 12 of the 13 benchmark test functions and optimal standard deviations on 10 of them. Therefore, the MHGS algorithm retains high solution accuracy and stability in the medium-dimensional case. In terms of solving the unimodal functions, the MHGS algorithm outperforms the HGS algorithm with respect to the average fitness and standard deviation on F1 ~ F7, with most improvements spanning several orders of magnitude, indicating that the improvement strategy significantly enhances the local exploitation capability of the HGS algorithm in medium-dimensional scenarios and yields greater stability. Compared with all the other examined algorithms, the MHGS algorithm has mostly optimal average fitness and standard deviation values on F1 ~ F7 and remains optimal on F1, F2, F3 and F4, indicating strong local exploitation capabilities and stability in the medium-dimensional scenario. Compared with the HGS algorithm, the MHGS algorithm achieves better or equal average fitness and standard deviation values when solving the multimodal functions F8 ~ F13, indicating that the improved strategy enhances the global exploration capacity, the ability to escape local optima, and the stability in the medium-dimensional scenario. Compared with all the other comparison algorithms, the MHGS algorithm has the optimal average fitness and standard deviation values for F8 ~ F13; it achieves the optimal average fitness and standard deviation values on F9 and F11, and its average value on F8 is much closer to the optimum. The MHGS also maintains stable convergence (e.g., on F5), whereas the errors of DEAH remain large (e.g., on F1; see Table 2) and its standard deviations on the multimodal functions (e.g., F12) reveal a susceptibility to the curse of dimensionality. These results indicate that the MHGS algorithm possesses a strong global exploration ability and can escape from local optima while demonstrating increased stability, even in medium-dimensional cases.

In the 500-dimensional case, the MHGS algorithm produces optimal average fitness values on 12 of the 13 benchmark test functions and optimal standard deviations on 10 of them. Therefore, the MHGS algorithm retains high solution accuracy and stability in the high-dimensional case. In terms of solving the unimodal functions, the MHGS algorithm outperforms the HGS algorithm: its average fitness and standard deviation values on F1 ~ F7 are equivalent to or better than those of the HGS algorithm, and many of them constitute significant improvements. This suggests that, in high-dimensional scenarios, the improvement strategy enhances the local exploitation capability and the overall stability of the HGS algorithm. Compared with the other tested algorithms, the MHGS algorithm has mostly optimal average fitness and standard deviation values on F1 ~ F7 and remains optimal on F1, F2, F3 and F4, indicating a strong local exploitation ability and greater stability in higher-dimensional cases. When solving the multimodal functions, the MHGS algorithm achieves better average fitness and standard deviation values on F8 ~ F13 than the HGS algorithm does, which suggests that the improved strategy enhances the global exploration capability and facilitates escapes from local optima in high-dimensional scenarios, resulting in increased stability. Compared with all the other algorithms, the MHGS algorithm achieves the optimal average fitness and standard deviation values on F8 ~ F13; the average fitness and standard deviation values on F9 and F11 reach the optima, and the average fitness on F8 is also closer to the optimum, resulting in exceptional solution efficacy. These findings demonstrate that the MHGS algorithm retains a robust global exploration ability, can escape local optima, and exhibits enhanced stability even in high-dimensional scenarios.

In the fixed-dimension case, the MHGS algorithm achieves optimal average fitness and standard deviation values on all benchmark test functions, so it exhibits high solution accuracy and stability in this scenario. The average fitness and standard deviation values of the MHGS algorithm on F14 ~ F23 are either superior or comparable to those of the HGS algorithm. Furthermore, substantial progress is made with respect to the magnitudes of the fitness and standard deviations, where the improvements in many cases span several orders of magnitude or even more than ten orders of magnitude, as seen in the standard deviations produced for F14, F20, F21 and F23. This indicates that the improvement strategy significantly increases the stability of the HGS algorithm under fixed-dimensionality scenarios and substantially strengthens its global exploration ability and its capacity to avoid local optima.

The above results and analyses show that, whether low, medium, high or fixed dimensions are utilized, the MHGS algorithm solves the benchmark test functions with high accuracy and stability, and its performance is the best among the compared algorithms.

To determine the significance of the differences between the algorithms in terms of their experimental results, the Wilcoxon rank sum test is applied for further analysis. Tables 3, 4 and 5 present the Wilcoxon rank sum test outcomes of the MHGS and the comparison algorithms for F1 ~ F13 in the 30-dimensional, 100-dimensional and 500-dimensional scenarios, respectively. Table 6 lists the Wilcoxon rank sum test results of the MHGS and the comparison algorithms for F14 ~ F23 under fixed-dimensional conditions. The symbols “+”, “−” and “=” indicate that the MHGS algorithm outperforms, underperforms or performs equally to the comparison algorithm, respectively. Moreover, NaN indicates that the results of the two compared algorithms are highly similar and that no significant judgement can be made.
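A minimal Python sketch of how such a per-function comparison could be computed from the 30 independent best-fitness values of two algorithms is given below; the symbol assignment rule and the NaN handling for identical samples are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(mhgs_results, other_results, alpha=0.05):
    """Wilcoxon rank-sum test between two samples of 30 best-fitness values.

    Returns the p-value and a symbol: '+' if MHGS is significantly better
    (lower fitness for minimization), '-' if significantly worse, '=' otherwise.
    When both samples are numerically identical the test is uninformative and
    NaN is reported, mirroring the NaN entries in Tables 3-6."""
    mhgs = np.asarray(mhgs_results, dtype=float)
    other = np.asarray(other_results, dtype=float)
    if (np.allclose(mhgs, mhgs[0]) and np.allclose(other, other[0])
            and np.isclose(mhgs[0], other[0])):
        return float('nan'), '='
    stat, p = ranksums(mhgs, other)
    if p >= alpha:
        return p, '='
    return p, ('+' if mhgs.mean() < other.mean() else '-')
```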

Table 3.

Wilcoxon rank sum test results (F1 to F13, D = 30).

Function vs. HGS vs. CDO vs. SCSO vs. AOA vs. AGWO vs. WOA vs. GWO vs. PSO
F1 3.34E-01 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F2 1.37E-03 1.21E-12 1.21E-12 NaN 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F3 2.79E-03 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F4 2.21E-06 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F5 2.96E-05 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 1.53E-05 1.21E-12 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 4.04E-01 2.32E-06 8.66E-05 1.01E-08 2.81E-02 3.83E-05 4.12E-06 3.02E-11
F8 7.60E-07 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F9 NaN 1.93E-10 NaN NaN 3.34E-01 NaN 1.38E-04 1.21E-12
F10 NaN 1.69E-14 NaN NaN 1.55E-13 1.17E-11 3.17E-13 1.21E-12
F11 NaN 1.37E-03 NaN 1.21E-12 NaN NaN 1.10E-02 1.21E-12
F12 1.69E-09 2.91E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 6.55E-04 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Inline graphic Inline graphic Inline graphic1 Inline graphic1 Inline graphic1 Inline graphic Inline graphic Inline graphic Inline graphic

Table 4.

Wilcoxon rank sum test results (F1 to F13, D = 100).

Function vs. HGS vs. CDO vs. SCSO vs. AOA vs. AGWO vs. WOA vs. GWO vs. PSO
F1 3.34E-01 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F2 2.16E-02 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F3 6.61E-05 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F4 2.93E-05 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F5 7.04E-07 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 3.63E-01 1.21E-12 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 7.17E-01 2.49E-06 1.09E-05 5.53E-08 1.86E-09 1.02E-05 3.02E-11 3.02E-11
F8 6.73E-01 3.02E-11 3.02E-11 3.02E-11 3.02E-11 2.87E-10 3.02E-11 3.02E-11
F9 NaN 5.77E-11 NaN NaN 4.19E-02 3.34E-01 4.10E-12 1.21E-12
F10 NaN 1.69E-14 NaN 1.61E-01 7.72E-13 2.92E-07 9.74E-13 1.21E-12
F11 NaN 5.58E-03 NaN 1.21E-12 NaN NaN NaN 1.21E-12
F12 8.19E-01 1.62E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 1.22E-02 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Inline graphic Inline graphic Inline graphic1 Inline graphic Inline graphic1 Inline graphic Inline graphic Inline graphic Inline graphic

Table 5.

Wilcoxon rank sum test results (F1 to F13, D = 500).

Function vs. HGS vs. CDO vs. SCSO vs. AOA vs. AGWO vs. WOA vs. GWO vs. PSO
F1 1.61E-01 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F2 4.57E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F3 1.70E-08 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F4 2.93E-05 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F5 3.81E-07 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 7.06E-01 1.21E-12 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 4.29E-01 1.87E-07 4.74E-06 6.52E-09 3.02E-11 7.01E-02 3.02E-11 3.02E-11
F8 6.52E-01 3.02E-11 3.02E-11 3.02E-11 3.02E-11 1.17E-09 3.02E-11 3.02E-11
F9 NaN 1.21E-12 NaN 2.16E-02 3.17E-13 NaN 1.21E-12 1.21E-12
F10 NaN 1.69E-14 NaN 1.21E-12 1.21E-12 1.09E-08 1.21E-12 1.21E-12
F11 NaN NaN NaN 1.21E-12 2.71E-14 NaN 1.21E-12 1.21E-12
F12 4.83E-01 2.37E-12 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 9.47E-03 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Inline graphic Inline graphic Inline graphic1 Inline graphic1 Inline graphic1 Inline graphic Inline graphic Inline graphic Inline graphic

Table 6.

Wilcoxon rank sum test results (F13 to F22, fixed dimensions).

Function vs. HGS vs. CDO vs. SCSO vs. AOA vs. AGWO vs. WOA vs. GWO vs. PSO
F13 6.58E-04 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F14 1.11E-06 3.02E-11 5.01E-02 3.02E-11 3.02E-11 3.02E-11 1.06E-03 3.02E-11
F15 NaN 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F16 NaN 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F17 6.88E-01 2.05E-11 2.05E-11 2.05E-11 2.05E-11 2.05E-11 2.05E-11 2.05E-11
F18 NaN 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F19 1.28E-04 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F20 3.15E-02 1.46E-11 1.46E-11 1.46E-11 1.46E-11 1.46E-11 1.46E-11 1.46E-11
F21 1.42E-02 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12
F22 1.20E-02 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12 7.57E-12
Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic

Tables 3, 4, 5 and 6 clearly show that the Wilcoxon rank sum test results are predominantly below 0.05, indicating significant differences between the MHGS algorithm and the comparison algorithms, with the former outperforming the latter in most cases. Therefore, the performance advantage of the MHGS algorithm is obvious whether 30, 100, 500 or fixed dimensions are utilized. The MHGS algorithm performs slightly worse than the AOA, SCSO and CDO algorithms do on F7, ranking fourth. This may be because F7 is a multidimensional unimodal flat-bottomed function with random disturbances, which adversely affect the optimization process of the MHGS algorithm. In addition, since some of the functions are easier to solve (e.g., F8, F9 and F10), multiple algorithms can reach the optimal values, so several “=” outcomes appear when the MHGS algorithm is compared with the other algorithms. Such ties are not conducive to highlighting the performance advantage of the MHGS algorithm. Therefore, the CEC2017 test set, which is more complex and difficult to solve, is used for further testing.

In addition, to more intuitively demonstrate the MHGS algorithm, a qualitative analysis is performed on the MHGS algorithm, including its convergence curve, average fitness, trajectory, and search history. The analysis results obtained by the MHGS algorithm on some of the benchmark test functions (e.g., F1, F3, F5, F9, F10, F12, and F13) are shown in Fig. 2. When the function has 30 dimensions and runs for 1000 iterations, the parameters of the MHGS algorithm are consistent with those utilized in prior experiments.

Fig. 2.

Fig. 2

Qualitative analysis results of the MHGS algorithm.

The graphs in the first column of Fig. 2 show the search spaces of the corresponding test functions, including the shape and optimal position of each test function. The convergence and average fitness curves are portrayed in the second and third columns of Fig. 2, respectively. These columns demonstrate the gradual convergence of the algorithm towards the optimal position as the number of iterations increases, with both the best and the average fitness values improving. From the convergence curves and the average fitness of the MHGS algorithm, it is evident that the MHGS algorithm exhibits superior convergence behaviour, a faster convergence speed, higher final convergence accuracy, and less susceptibility to local optima. The trajectory (the first dimension of the first individual) for each function is represented in the fourth column of Fig. 2. There are frequent changes in the initial stage, with gradual decreases in the number and magnitude of the changes as the number of iterations increases, indicating that the MHGS algorithm shifts from global exploration to local exploitation. Towards the end of the iterative process, slight trajectory variations remain, suggesting that the MHGS algorithm possesses the ability to escape local optima. The fifth column of Fig. 2 illustrates the search history, showing the historical positions of all individuals during the optimization process. The significant number of randomly diverse individuals in the search history denotes the remarkable global exploration ability of the MHGS algorithm, whereas the considerable number of individuals gathered around the location of the optimal individual (represented by the red point) indicates its exceptional local exploitation ability. In brief, the MHGS algorithm exhibits noteworthy convergence, adeptly balances exploration and exploitation, and possesses the ability to overcome local optima.
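A minimal sketch of the bookkeeping needed to produce these four diagnostics (convergence curve, average fitness, trajectory and search history) is shown below; it is generic instrumentation for any population-based optimizer, not the authors' code, and the class and attribute names are illustrative.

```python
import numpy as np

class SearchDiagnostics:
    """Collect the quantities plotted in Fig. 2 for one optimization run."""

    def __init__(self):
        self.best_curve = []   # best fitness found so far per iteration (convergence curve)
        self.mean_curve = []   # population-average fitness per iteration
        self.trajectory = []   # first dimension of the first individual
        self.history = []      # positions of all individuals (search history)

    def record(self, positions, fitness):
        positions = np.asarray(positions)   # shape (pop_size, dim)
        fitness = np.asarray(fitness)       # shape (pop_size,)
        prev_best = self.best_curve[-1] if self.best_curve else np.inf
        self.best_curve.append(min(prev_best, fitness.min()))
        self.mean_curve.append(fitness.mean())
        self.trajectory.append(positions[0, 0])
        self.history.append(positions.copy())
```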

Experiment 2: CEC2017 test set

The CEC2017 test set consists of unimodal functions (CEC01 ~ CEC03), simple multimodal functions (CEC04 ~ CEC10), hybrid functions (CEC11 ~ CEC20), and composition functions (CEC21 ~ CEC30). Among them, CEC02 was officially removed because of its unstable behaviour during algorithmic testing, so the remaining 29 test functions are used.

The dimensions of the functions are set to 10, and the comparison algorithms and parameter settings are identical to those employed in the benchmark test function section. Table 7 presents the experimental results produced by the MHGS and the comparison algorithms on the CEC2017 test set.

Table 7.

Experimental results obtained on the CEC2017 test set.

Function/Measures MHGS HGS CDO SCSO AOA AGWO WOA GWO PSO SSDE MEGA DEAH
CEC01 Mean 5.24E + 02 6.82E + 03 1.36E + 10 1.11E + 08 7.68E + 09 1.48E + 08 7.26E + 06 5.97E + 07 6.90E + 06 1.26E + 04 1.66E + 03 3.00E + 07
Std 6.24E + 02 4.36E + 03 2.07E + 09 1.93E + 08 2.65E + 09 9.24E + 07 9.05E + 06 1.46E + 08 2.09E + 06 1.23E + 04 2.00E + 03 6.23E + 07
CEC03 Mean 3.00E + 02 3.11E + 02 1.78E + 04 1.89E + 03 1.03E + 04 1.36E + 03 2.95E + 03 2.13E + 03 3.24E + 02 3.41E + 02 3.39E + 02 3.73E + 02
Std 1.15E-02 4.88E + 01 2.26E + 02 1.75E + 03 2.46E + 03 1.28E + 03 2.32E + 03 2.51E + 03 6.86E + 00 2.22E + 01 1.17E + 01 4.02E + 01
CEC04 Mean 4.04E + 02 4.13E + 02 1.45E + 03 4.34E + 02 9.84E + 02 4.20E + 02 4.38E + 02 4.19E + 02 4.11E + 02 6.62E + 02 5.91E + 02 6.09E + 02
Std 1.31E + 00 2.07E + 01 3.30E + 02 4.24E + 01 4.17E + 02 1.64E + 01 4.21E + 01 2.52E + 01 1.69E + 01 1.98E + 01 2.65E + 01 6.82E + 01
CEC05 Mean 5.11E + 02 5.24E + 02 5.82E + 02 5.32E + 02 5.57E + 02 5.36E + 02 5.55E + 02 5.18E + 02 5.46E + 02 5.00E + 02 5.00E + 02 5.00E + 02
Std 4.34E + 00 9.35E + 00 6.65E + 00 1.30E + 01 1.57E + 01 5.10E + 00 2.10E + 01 7.99E + 00 1.31E + 01 1.51E-03 1.47E-12 1.81E-04
CEC06 Mean 6.00E + 02 6.01E + 02 6.37E + 02 6.16E + 02 6.40E + 02 6.09E + 02 6.37E + 02 6.01E + 02 6.16E + 02 9.06E + 02 8.36E + 02 8.88E + 02
Std 1.12E-02 1.00E + 00 3.66E + 00 1.14E + 01 7.44E + 00 3.30E + 00 1.32E + 01 1.28E + 00 9.80E + 00 2.14E + 01 1.73E + 01 8.38E + 01
CEC07 Mean 7.24E + 02 7.42E + 02 7.92E + 02 7.63E + 02 7.97E + 02 7.51E + 02 7.84E + 02 7.30E + 02 7.44E + 02 1.14E + 03 1.05E + 03 1.08E + 03
Std 6.00E + 00 1.42E + 01 7.56E + 00 2.32E + 01 1.59E + 01 7.41E + 00 1.85E + 01 1.07E + 01 8.22E + 00 2.90E + 01 2.21E + 01 2.83E + 01
CEC08 Mean 8.11E + 02 8.20E + 02 8.54E + 02 8.29E + 02 8.31E + 02 8.29E + 02 8.45E + 02 8.14E + 02 8.24E + 02 8.00E + 02 8.00E + 02 8.01E + 02
Std 5.69E + 00 7.73E + 00 6.80E + 00 1.07E + 01 6.06E + 00 5.79E + 00 1.52E + 01 5.84E + 00 7.18E + 00 6.54E-01 1.47E-01 7.34E-01
CEC09 Mean 9.00E + 02 9.03E + 02 1.29E + 03 1.11E + 03 1.36E + 03 9.25E + 02 1.41E + 03 9.15E + 02 9.02E + 02 8.23E + 03 7.88E + 03 8.11E + 03
Std 8.64E-02 6.67E + 00 3.74E + 01 1.78E + 02 1.70E + 02 2.25E + 01 3.66E + 02 2.67E + 01 6.38E-01 2.51E + 02 5.16E + 02 4.81E + 02
CEC10 Mean 1.42E + 03 1.71E + 03 2.26E + 03 1.96E + 03 2.25E + 03 1.96E + 03 2.10E + 03 1.67E + 03 2.05E + 03 1.53E + 05 5.64E + 03 1.78E + 04
Std 1.68E + 02 1.88E + 02 1.82E + 02 3.03E + 02 3.49E + 02 3.59E + 02 3.19E + 02 3.94E + 02 3.27E + 02 5.71E + 04 2.85E + 03 1.16E + 04
CEC11 Mean 1.11E + 03 1.16E + 03 5.50E + 03 1.33E + 03 2.48E + 03 1.15E + 03 1.20E + 03 1.16E + 03 1.14E + 03 1.50E + 07 2.04E + 04 5.78E + 04
Std 4.66E + 00 5.87E + 01 1.02E + 03 8.00E + 02 1.48E + 03 2.41E + 01 7.79E + 01 5.84E + 01 1.66E + 01 9.97E + 06 2.54E + 04 4.55E + 04
CEC12 Mean 1.18E + 04 2.99E + 04 7.02E + 06 1.37E + 06 3.29E + 07 3.40E + 06 3.87E + 06 5.98E + 05 1.29E + 06 7.54E + 03 6.38E + 03 9.58E + 03
Std 7.50E + 03 2.78E + 04 2.96E + 06 2.15E + 06 9.00E + 07 3.46E + 06 3.56E + 06 8.47E + 05 1.35E + 06 4.94E + 03 3.51E + 03 7.72E + 03
CEC13 Mean 6.51E + 03 1.14E + 04 6.30E + 07 1.30E + 04 1.16E + 04 2.08E + 04 1.97E + 04 1.17E + 04 1.17E + 04 6.77E + 05 3.42E + 03 2.36E + 04
Std 3.55E + 03 1.14E + 04 5.34E + 07 1.20E + 04 7.64E + 03 1.49E + 04 1.62E + 04 7.75E + 03 5.16E + 03 3.01E + 05 1.65E + 03 4.08E + 04
CEC14 Mean 1.45E + 03 2.94E + 03 1.69E + 03 3.05E + 03 8.22E + 03 2.15E + 03 2.21E + 03 3.08E + 03 2.13E + 03 6.66E + 04 1.52E + 04 1.97E + 04
Std 7.40E + 01 2.21E + 03 2.28E + 02 1.87E + 03 7.59E + 03 1.27E + 03 1.12E + 03 1.87E + 03 1.12E + 03 6.43E + 04 7.18E + 03 9.09E + 03
CEC15 Mean 1.60E + 03 6.83E + 03 6.04E + 03 3.47E + 03 1.91E + 04 2.73E + 03 9.43E + 03 5.45E + 03 4.36E + 03 2.82E + 03 1.59E + 03 1.77E + 04
Std 2.83E + 02 5.36E + 03 8.98E + 02 1.72E + 03 5.71E + 03 1.27E + 03 7.51E + 03 2.42E + 03 2.28E + 03 2.81E + 03 8.12E + 01 5.82E + 04
CEC16 Mean 1.63E + 03 1.78E + 03 2.05E + 03 1.86E + 03 2.07E + 03 1.70E + 03 1.94E + 03 1.75E + 03 1.89E + 03 8.98E + 04 1.83E + 03 2.12E + 03
Std 5.55E + 01 1.39E + 02 2.67E + 01 1.46E + 02 1.18E + 02 9.10E + 01 1.63E + 02 1.24E + 02 1.18E + 02 3.99E + 04 1.33E + 02 3.60E + 02
CEC17 Mean 1.71E + 03 1.75E + 03 1.86E + 03 1.77E + 03 1.89E + 03 1.76E + 03 1.79E + 03 1.75E + 03 1.78E + 03 9.99E + 05 1.84E + 04 2.13E + 04
Std 5.70E + 00 4.62E + 01 4.76E + 01 2.43E + 01 9.89E + 01 1.54E + 01 4.39E + 01 2.03E + 01 3.10E + 01 5.26E + 05 6.63E + 03 8.44E + 03
CEC18 Mean 3.32E + 03 2.22E + 04 1.93E + 08 2.20E + 04 1.83E + 04 4.61E + 04 1.83E + 04 2.91E + 04 1.48E + 04 5.00E + 04 2.40E + 03 1.87E + 04
Std 2.38E + 03 1.36E + 04 5.00E + 08 1.52E + 04 1.12E + 04 2.71E + 04 1.29E + 04 1.41E + 04 1.21E + 04 6.13E + 04 5.16E + 03 1.93E + 04
CEC19 Mean 2.22E + 03 1.29E + 04 1.25E + 06 1.37E + 04 4.56E + 04 8.17E + 03 6.59E + 04 8.68E + 03 4.39E + 03 2.22E + 03 2.10E + 03 2.15E + 03
Std 7.73E + 02 1.19E + 04 4.52E + 05 4.46E + 04 3.39E + 04 6.55E + 03 1.52E + 05 6.61E + 03 5.80E + 03 1.95E + 02 1.53E + 02 1.62E + 02
CEC20 Mean 2.01E + 03 2.03E + 03 2.20E + 03 2.15E + 03 2.16E + 03 2.09E + 03 2.19E + 03 2.10E + 03 2.12E + 03 2.39E + 03 2.14E + 03 2.40E + 03
Std 1.03E + 01 2.28E + 01 3.56E + 01 5.98E + 01 8.59E + 01 4.50E + 01 9.11E + 01 5.52E + 01 5.22E + 01 2.06E + 02 7.47E + 01 2.55E + 02
CEC21 Mean 2.23E + 03 2.32E + 03 2.37E + 03 2.29E + 03 2.33E + 03 2.33E + 03 2.30E + 03 2.30E + 03 2.33E + 03 2.27E + 03 2.27E + 03 2.29E + 03
Std 5.23E + 01 4.69E + 01 7.21E + 00 6.15E + 01 3.27E + 01 2.42E + 01 6.05E + 01 4.00E + 01 4.81E + 01 3.96E + 00 4.90E + 00 1.24E + 01
CEC22 Mean 2.30E + 03 2.35E + 03 3.32E + 03 2.31E + 03 3.00E + 03 2.37E + 03 2.39E + 03 2.31E + 03 2.39E + 03 2.62E + 03 2.47E + 03 3.54E + 03
Std 1.64E + 01 2.50E + 02 5.23E + 02 2.97E + 01 2.65E + 02 1.92E + 02 2.76E + 02 9.84E + 00 2.45E + 02 9.35E + 03 3.58E + 01 1.73E + 03
CEC23 Mean 2.61E + 03 2.63E + 03 2.85E + 03 2.64E + 03 2.73E + 03 2.64E + 03 2.66E + 03 2.62E + 03 2.69E + 03 2.52E + 03 2.50E + 03 3.05E + 03
Std 4.38E + 00 1.03E + 01 8.13E + 01 1.51E + 01 5.04E + 01 7.93E + 00 2.45E + 01 9.38E + 00 8.17E + 01 6.26E + 01 0.00E + 00 7.81E + 02
CEC24 Mean 2.67E + 03 2.77E + 03 2.91E + 03 2.76E + 03 2.83E + 03 2.77E + 03 2.78E + 03 2.75E + 03 2.74E + 03 2.82E + 03 2.81E + 03 2.83E + 03
Std 1.07E + 02 1.24E + 01 1.70E + 01 5.19E + 01 6.49E + 01 5.06E + 00 2.48E + 01 1.25E + 01 1.20E + 02 1.53E-01 6.06E + 00 1.74E + 01
CEC25 Mean 2.92E + 03 2.94E + 03 3.59E + 03 2.95E + 03 3.27E + 03 2.94E + 03 2.95E + 03 2.94E + 03 2.93E + 03 3.33E + 03 3.19E + 03 3.34E + 03
Std 2.31E + 01 3.18E + 01 8.76E + 01 2.69E + 01 1.83E + 02 1.82E + 01 2.72E + 01 1.68E + 01 2.15E + 01 4.45E + 00 8.63E + 01 2.20E + 01
CEC26 Mean 2.87E + 03 3.19E + 03 4.05E + 03 3.05E + 03 4.04E + 03 3.12E + 03 3.53E + 03 3.09E + 03 3.16E + 03 3.11E + 03 3.09E + 03 3.12E + 03
Std 6.40E + 01 4.26E + 02 7.63E + 01 2.04E + 02 3.03E + 02 3.51E + 02 5.88E + 02 3.73E + 02 4.52E + 02 1.35E + 01 6.04E + 00 1.94E + 01
CEC27 Mean 3.10E + 03 3.10E + 03 3.29E + 03 3.11E + 03 3.28E + 03 3.11E + 03 3.15E + 03 3.10E + 03 3.16E + 03 2.85E + 03 2.73E + 03 2.83E + 03
Std 4.03E + 00 1.39E + 01 2.45E + 01 2.64E + 01 6.19E + 01 2.25E + 01 3.97E + 01 1.78E + 01 5.36E + 01 1.62E + 02 7.71E + 01 1.50E + 02
CEC28 Mean 3.13E + 03 3.29E + 03 3.60E + 03 3.31E + 03 3.75E + 03 3.34E + 03 3.44E + 03 3.35E + 03 3.32E + 03 2.47E + 07 7.26E + 03 8.03E + 04
Std 9.24E + 01 1.13E + 02 1.76E + 00 1.05E + 02 1.28E + 02 9.20E + 01 1.99E + 02 1.07E + 02 1.28E + 02 4.68E + 07 2.94E + 03 2.34E + 05
CEC29 Mean 3.18E + 03 3.22E + 03 3.34E + 03 3.25E + 03 3.37E + 03 3.21E + 03 3.36E + 03 3.20E + 03 3.27E + 03 4.17E + 05 1.42E + 04 3.00E + 04
Std 1.80E + 01 7.33E + 01 4.92E + 01 8.25E + 01 1.19E + 02 4.22E + 01 1.13E + 02 4.35E + 01 8.34E + 01 1.15E + 06 9.84E + 03 3.97E + 04
CEC30 Mean 2.14E + 04 3.79E + 05 3.08E + 05 6.76E + 05 2.03E + 07 7.59E + 05 1.06E + 06 9.14E + 05 3.04E + 05
Std 1.14E + 04 4.28E + 05 1.36E + 05 9.24E + 05 1.83E + 07 1.15E + 06 1.29E + 06 1.04E + 06 3.85E + 05
Total runtime (s) 297.69 151.01 161.59 579.03 131.27 139.07 225.35 140.90 335.02

Table 7 shows that the MHGS algorithm outperforms the HGS algorithm in terms of the average fitness and standard deviation values, except for the standard deviations on individual functions. This suggests that the enhancement strategy employed by the MHGS algorithm effectively improves the solution accuracy and stability of the HGS algorithm. Compared with the other tested algorithms, the MHGS algorithm achieves the best or near-best solution accuracy, whether a unimodal, multimodal, hybrid or composition function is examined; in terms of stability, the MHGS algorithm also obtains the optimal results on 24 CEC2017 test functions. Thus, the MHGS algorithm demonstrates exceptional solution accuracy and stability and competes favourably with the other algorithms. Next, the solution results are analysed for each type of test function.

In terms of solving the unimodal functions, the MHGS algorithm is optimal with respect to the average fitness and standard deviation values achieved across all unimodal functions. Consequently, the solution accuracy and stability displayed by the MHGS algorithm when resolving unimodal functions are considerable. Compared with the HGS algorithm, the MHGS algorithm improves the average fitness and standard deviation by one order of magnitude on CEC01. Furthermore, the average fitness produced on CEC03 reaches an optimal value, and the standard deviation obtained for CEC03 is improved by three orders of magnitude. These results demonstrate that the improvement strategy significantly enhances the local exploitation ability of the HGS algorithm, rendering it more robust and stable. The MHGS algorithm has the optimal average fitness and standard deviation values on CEC01 and CEC03 and significantly outperforms the other algorithms, many of which have differences of several orders of magnitude. This indicates the strong local exploitation ability and stability of the MHGS algorithm when solving unimodal functions.

In terms of solving the multimodal functions, the MHGS algorithm achieves the best average fitness and standard deviation values, so it provides high accuracy and stability on this function class. The MHGS algorithm outperforms the HGS algorithm on all the multimodal functions, with improved average fitness and standard deviation values; in some cases, the improvements reach two orders of magnitude, highlighting the efficacy of the enhancement strategy in facilitating global exploration and escaping local optima and yielding greater stability than its predecessor offers. Compared with the other algorithms tested, the MHGS algorithm attains the best or near-best average fitness and standard deviation values on CEC04 ~ CEC10 and achieves the optimal fitness values on CEC06 and CEC09, demonstrating strong global exploration and local optimum avoidance capabilities. In conclusion, the MHGS algorithm is both accurate and stable when solving multimodal functions.

In terms of solving the hybrid functions, the MHGS algorithm attains the best average fitness and standard deviation values across nearly all the hybrid functions, with the exception of the standard deviation on CEC16. These results indicate that the MHGS algorithm offers remarkable accuracy and stability when solving hybrid functions. Compared with the HGS algorithm, the MHGS algorithm exhibits better average fitness and standard deviation values on all the hybrid functions, most of them improving by one order of magnitude, which suggests that the improvement strategy effectively enhances the accuracy and stability of the HGS algorithm on hybrid functions. Compared with the other algorithms, except that its standard deviation on CEC16 is slightly worse than that of the newly proposed CDO algorithm, the MHGS algorithm has the best or near-best average fitness and standard deviation values on CEC11 ~ CEC20. This indicates that the MHGS algorithm possesses strong global exploration and local exploitation capabilities, enabling it to solve hybrid functions more effectively and consistently.

In terms of solving the composition functions, the MHGS algorithm has optimal average fitness values for all composition functions and optimal standard deviations for five of them. Therefore, the MHGS algorithm solves the composition functions with high accuracy and stability. Compared with the HGS algorithm, the MHGS algorithm displays better average fitness and standard deviation values on the composition functions, improving by an order of magnitude on certain functions, which suggests that the strategy employed to improve the solving accuracy of the HGS algorithm on composition functions is effective. Additionally, with the newly added algorithms SSDE, MEGA, and DEAH included in the comparison, the comprehensive competitiveness of MHGS is further demonstrated. Among all the compared methods, MHGS achieves the best or near-best mean fitness and standard deviation on the majority of the CEC2017 functions. Specifically, compared with SSDE and MEGA, two recent advanced strategies, MHGS consistently delivers better solution accuracy and stability, especially on the hybrid and composition functions, where the mean errors of SSDE and MEGA are considerably higher. Against the DEAH algorithm, which is known for its balance between exploration and exploitation, MHGS still maintains a clear advantage in both the mean and the standard deviation, indicating superior robustness and convergence. These results confirm that the multistrategy improvements in MHGS (such as the phased position update and the enhanced population diversity) contribute to its outstanding overall performance, even when it is compared with newly proposed state-of-the-art algorithms. Compared with all the other tested algorithms, the average fitness values produced by the MHGS algorithm for CEC21 ~ CEC30 are optimal, and its standard deviations for CEC21 ~ CEC30 are mostly good, indicating that the MHGS algorithm solves the composition functions better and more stably than the other methods can.

The runtime of the MHGS on the CEC2017 test set is 297.69 s, which outperforms certain competitors (e.g., SCSO: 579.03 s; PSO: 335.02 s) but is slower than the HGS (151.01 s), the AOA (131.27 s) and AGWO (139.07 s). The above results and analyses show that, for the CEC2017 test set, the MHGS algorithm has high solution accuracy and stability, whether a unimodal, multimodal, hybrid or composition function is examined, and its performance is the best among all the compared algorithms.

Table 8 displays the Wilcoxon rank sum test results obtained by the MHGS and the comparison algorithms on the CEC2017 test set. As shown in Table 8, the Wilcoxon rank sum test results of the MHGS algorithm are nearly all less than 0.05, demonstrating a significant difference between the MHGS and the comparison algorithms. The MHGS outperforms the comparison algorithms on nearly all functions, performs comparably on a few individual functions, and is never inferior to the comparison algorithms. Thus, the performance advantage of the MHGS algorithm on the CEC2017 test set is evident.

Table 8.

Wilcoxon rank sum test results (CEC2017 test set).

Function vs. HGS vs. CDO vs. SCSO vs. AOA vs. AGWO vs. WOA vs. GWO vs. PSO
CEC01 2.39E-08 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
CEC03 2.52E-01 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
CEC04 1.11E-04 3.02E-11 4.50E-11 3.02E-11 3.02E-11 2.78E-07 7.09E-08 1.58E-04
CEC05 5.07E-08 3.00E-11 2.03E-09 3.00E-11 3.00E-11 3.00E-11 4.08E-05 3.00E-11
CEC06 1.78E-10 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
CEC07 5.09E-08 3.02E-11 2.37E-10 3.02E-11 3.69E-11 3.02E-11 2.81E-02 1.78E-10
CEC08 1.13E-05 3.01E-11 6.51E-09 9.90E-11 2.37E-10 6.05E-11 1.05E-01 2.83E-08
CEC09 1.01E-08 3.01E-11 3.01E-11 3.01E-11 3.01E-11 3.01E-11 4.96E-11 3.01E-11
CEC10 3.81E-07 3.02E-11 1.31E-08 1.78E-10 1.70E-08 1.09E-10 5.32E-03 6.12E-10
CEC11 8.29E-06 3.02E-11 3.34E-11 3.02E-11 3.34E-11 3.69E-11 2.87E-10 2.61E-10
CEC12 4.03E-03 3.02E-11 8.10E-10 4.98E-11 3.02E-11 4.08E-11 3.35E-08 3.02E-11
CEC13 6.73E-01 3.02E-11 1.09E-01 2.50E-03 1.73E-07 1.75E-05 1.03E-02 7.66E-05
CEC14 7.38E-10 3.47E-10 6.52E-09 4.98E-11 2.87E-10 2.61E-10 6.12E-10 1.56E-08
CEC15 6.70E-11 3.02E-11 4.62E-10 3.02E-11 1.17E-09 8.15E-11 6.07E-11 1.21E-10
CEC16 4.69E-08 3.02E-11 3.20E-09 3.02E-11 3.16E-05 1.46E-10 3.52E-07 1.21E-10
CEC17 1.49E-06 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.69E-11 3.02E-11
CEC18 5.57E-10 3.02E-11 9.26E-09 4.18E-09 3.02E-11 1.29E-09 1.09E-10 5.46E-09
CEC19 8.84E-07 3.02E-11 2.84E-04 8.99E-11 2.78E-07 5.07E-10 4.08E-05 1.68E-04
CEC20 9.79E-05 3.01E-11 3.33E-11 3.01E-11 3.01E-11 3.01E-11 4.50E-11 4.07E-11
CEC21 1.85E-08 3.02E-11 7.04E-07 6.52E-09 8.99E-11 3.01E-07 1.49E-06 2.67E-09
CEC22 2.38E-03 3.02E-11 7.22E-06 3.02E-11 5.07E-10 3.02E-11 3.81E-07 7.12E-09
CEC23 1.87E-07 3.02E-11 8.99E-11 3.02E-11 3.02E-11 5.49E-11 2.01E-04 5.57E-10
CEC24 3.02E-11 3.02E-11 3.16E-10 1.31E-08 3.02E-11 1.09E-10 4.31E-08 2.68E-06
CEC25 3.59E-05 3.02E-11 3.25E-07 3.69E-11 7.66E-05 2.49E-06 1.78E-04 1.37E-03
CEC26 3.16E-10 3.02E-11 2.39E-08 3.02E-11 5.07E-10 3.16E-10 2.57E-07 2.92E-09
CEC27 9.23E-01 3.02E-11 2.50E-03 3.02E-11 4.64E-03 4.69E-08 1.91E-01 4.62E-10
CEC28 6.38E-08 3.02E-11 3.82E-09 3.02E-11 1.69E-09 6.09E-10 1.55E-09 1.01E-08
CEC29 7.62E-03 3.02E-11 4.74E-06 1.21E-10 3.03E-03 7.38E-10 5.57E-03 1.07E-09
CEC30 5.26E-04 3.02E-11 1.49E-06 3.02E-11 1.87E-07 9.26E-09 3.99E-04 5.97E-09
Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic

Figures 3 and 4 show some of the convergence curves and boxplots produced by the MHGS algorithm and the comparison algorithms on the CEC2017 test set (CEC03, CEC04, CEC05, CEC06, CEC07, CEC15, CEC17, CEC23 and CEC24). As shown in Fig. 3, the MHGS algorithm converges faster, its convergence curves are smoother, and its final convergence accuracies are all better than or equal to those of the other eight algorithms. On some test functions (e.g., CEC03 and CEC06), the MHGS algorithm converges relatively slowly in the early iterations, whereas its convergence accelerates in the middle and late iterations. The slower early convergence occurs because the position update formula in the exploration phase of the MHGS algorithm causes individuals to search on the basis of their historical optimal positions rather than around the optimal individual position. Although this enhances the global exploration ability of the MHGS algorithm and expands the search scope in the early stages, it somewhat reduces the convergence speed during this phase. Convergence accelerates in the middle and late iterations mainly because the position update formulas in the transition and development phases of the MHGS algorithm allow individuals to search around the optimal individual positions and further narrow the search range. Overall, the MHGS algorithm sacrifices some convergence speed in the early part of the iterative process but effectively avoids premature convergence and improves its global exploration ability, which is conducive to improving the solution accuracy. For example, when solving CEC03, algorithms such as the AOA and CDO converge within approximately 200 iterations, but their convergence accuracy is poor. Although the MHGS algorithm converges more slowly during the initial iterations, it ultimately converges faster than algorithms such as the AOA and CDO do in the later iterations while achieving superior final convergence accuracy.

Fig. 3.

Fig. 3

Partial convergence curves produced on the CEC2017 test set.

Fig. 4.

Fig. 4

Partial boxplots produced on the CEC2017 test set.

As shown in Fig. 4, most of the results of the MHGS algorithm have the smallest box widths, indicating that the MHGS algorithm has better stability than the comparison algorithms do. In addition, the MHGS algorithm has lower box positions, indicating that it has better convergence accuracy than the comparison algorithms do.

Experiment 3: engineering design issues

To further validate the ability of the MHGS algorithm to solve realistic optimization problems, the following section is focused on two classic engineering design problems, specifically the three-bar truss design problem and the speed reducer design problem. Eight other algorithms are compared with the MHGS algorithm under metrics such as the best fitness, worst fitness, average fitness and standard deviation values.

Three-bar truss design problem

The goal of the three-bar truss design problem is to determine the smallest possible volume of a three-bar truss while adhering to three distinct constraints. The problem includes two decision variables, namely, the cross-sectional area Inline graphic of rod 1 and the cross-sectional area Inline graphic of rod 2. Figure 5 provides a diagrammatic representation of the three-bar truss design. The mathematical model of the three-bar truss design problem is presented in equations (25) ~ (28), where Eq. (25) is the objective function and equations (26) ~ (28) are the constraints.

graphic file with name d33e10606.gif 25
graphic file with name d33e10612.gif 26
graphic file with name d33e10618.gif 27
graphic file with name d33e10624.gif 28
Fig. 5.

Fig. 5

Schematic design of a three-bar truss19.

where Inline graphic, Inline graphic, Inline graphic, and Inline graphic.
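Because equations (25) ~ (28) appear only as images in this extraction, the following Python sketch uses the standard three-bar truss formulation from the metaheuristics literature (l = 100 cm, P = σ = 2 kN/cm², 0 ≤ A1, A2 ≤ 1), which is assumed to correspond to Eqs. (25) ~ (28), together with a simple static penalty for constraint handling.

```python
import math

L_BAR, P_LOAD, SIGMA = 100.0, 2.0, 2.0   # constants commonly used in the literature (assumed)

def three_bar_truss(x, penalty=1e6):
    """Penalized objective for the three-bar truss problem; x = [A1, A2]."""
    a1, a2 = x
    volume = (2.0 * math.sqrt(2.0) * a1 + a2) * L_BAR
    g = [  # stress constraints, each required to be <= 0
        (math.sqrt(2.0) * a1 + a2) / (math.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P_LOAD - SIGMA,
        a2 / (math.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P_LOAD - SIGMA,
        1.0 / (math.sqrt(2.0) * a2 + a1) * P_LOAD - SIGMA,
    ]
    return volume + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# Example: the near-optimal design reported for MHGS in Table 10
print(three_bar_truss([0.788678, 0.408241]))   # ~263.9; the first constraint is active at the optimum
```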

To ascertain the superiority of the MHGS algorithm, it is compared with eight metaheuristic algorithms, namely, the HGS36, CDO60, SCSO61, the AOA62, AGWO63, the WOA23, the GWO22, and PSO36. Tables 9 and 10 present the experimental findings for the three-bar truss design problem. Table 9 indicates that the MHGS algorithm achieves greater solution accuracy and stability than the HGS algorithm does for this problem, which suggests that the modification technique proposed in this paper enhances the optimality-seeking ability of the HGS algorithm. Compared with all the other algorithms considered, the MHGS algorithm performs optimally in terms of the best fitness, worst fitness, average fitness and standard deviation metrics. These findings indicate that the performance of the MHGS algorithm is superior to that of the other eight algorithms and that the MHGS algorithm solves the three-bar truss design problem effectively and with consistently stable performance.

Table 9.

Experimental results obtained for the three-bar truss design problem.

Algorithm Best Worst Mean Std
MHGS 263.895843 263.907950 263.897051 2.42E-03
HGS 264.526746 282.842712 272.460632 5.38E + 00
CDO 263.909134 264.267061 263.995277 7.73E-02
SCSO 263.895919 282.842712 265.160498 4.81E + 00
AOA 263.963398 269.651581 265.038779 1.07E + 00
AGWO 263.902857 282.842712 264.583133 3.45E + 00
WOA 263.898703 267.215228 264.523861 8.53E-01
GWO 263.895999 263.908236 263.899420 3.39E-03
PSO 263.898419 264.146311 263.994299 7.28E-02
Table 10.

Optimal solutions produced by each algorithm for the three-bar truss design problem.

Algorithm Inline graphic Inline graphic
MHGS 0.788678 0.408241
HGS 0.812696 0.346616
CDO 0.791369 0.400762
SCSO 0.788667 0.408273
AOA 0.790529 0.403679
AGWO 0.789789 0.405169
WOA 0.786709 0.413837
GWO 0.789001 0.407328
PSO 0.789648 0.405523

Speed reducer design problem

The aim of the speed reducer design problem is to identify the lowest possible weight for the reducer while satisfying nine unique constraints. Seven decision variables must be considered, including the tooth face width (Inline graphic), gear module (Inline graphic), pinion tooth count (Inline graphic), length of the first shaft between the bearings (Inline graphic), length of the second shaft between the bearings (Inline graphic), diameter of the first shaft (Inline graphic), and diameter of the second shaft (Inline graphic). Figure 6 shows a schematic diagram of the speed reducer design.

Fig. 6.

Fig. 6

Schematic diagram of the speed reducer19.

The mathematical model for the speed reducer design problem is presented in equations (29) ~ (38), where Eq. (29) is the objective function and equations (30) ~ (38) are the constraints.

graphic file with name d33e11053.gif 29
graphic file with name d33e11059.gif 30
graphic file with name d33e11065.gif 31
graphic file with name d33e11071.gif 32
graphic file with name d33e11078.gif 33
graphic file with name d33e11084.gif 34
graphic file with name d33e11090.gif 35
graphic file with name d33e11096.gif 36
graphic file with name d33e11102.gif 37
graphic file with name d33e11108.gif 38

where Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic,Inline graphic

To evaluate the efficiency of the MHGS algorithm, it is compared with the HGS36, CDO60, SCSO61, the AOA62, AGWO63, the WOA23, the GWO22, and PSO36. The results of the experiments conducted on the speed reducer design problem are provided in Tables 11 and 12. As demonstrated in Table 11, the optimal fitness value produced by the MHGS algorithm when solving the speed reducer design problem is identical to that of the HGS algorithm, namely, 2994.471066. However, the worst fitness, average fitness and standard deviation values of the MHGS algorithm are notably superior to those of the HGS algorithm, indicating that the former has superior stability. Among all the compared algorithms, the MHGS algorithm yields the best results concerning the best fitness, worst fitness, average fitness and standard deviation values. These results show that the MHGS algorithm has the best performance among all nine algorithms and is capable of resolving the speed reducer design problem effectively and with good stability.

Table 11.

Experimental results obtained for the speed reducer design problem.

Algorithm Best Worst Mean Std
MHGS 2994.471066 2994.471066 2994.471066 4.38E-11
HGS 2994.471066 3033.748526 2996.756166 8.57E + 00
CDO 3065.217700 3171.044648 3110.760405 3.30E + 01
SCSO 2995.695765 3017.211857 3006.015121 6.05E + 00
AOA 3094.505168 3224.777574 3160.707085 3.98E + 01
AGWO 3054.563573 3120.709222 3089.745786 1.55E + 01
WOA 3015.294217 5326.968653 3212.867063 4.13E + 02
GWO 2998.199068 3015.038673 3007.136464 3.93E + 00
PSO 3017.493536 3209.528246 3082.445699 5.87E + 01
Table 12.

Optimal solution produced by each algorithm for the speed reducer design problem.

Algorithm Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
MHGS 3.500000 0.700000 17.000000 7.300000 7.715320 3.350215 5.286654
HGS 3.500000 0.700000 17.000000 7.300000 7.715320 3.350215 5.286654
CDO 3.600000 0.700000 17.000000 8.300000 8.300000 3.378802 5.290333
SCSO 3.500206 0.700000 17.000000 7.308098 7.739710 3.351613 5.286939
AOA 3.600000 0.700000 17.000000 7.300000 8.034523 3.470205 5.321308
AGWO 3.600000 0.700000 17.000000 7.300000 8.300000 3.360438 5.295064
WOA 3.515989 0.700000 17.000000 7.578152 8.108536 3.363331 5.286791
GWO 3.501402 0.700000 17.000000 7.334880 7.791578 3.353653 5.287155
PSO 3.509908 0.700000 17.000000 7.372485 8.300000 3.355851 5.293236

Experiment 4: feature selection problem

Mathematical modelling

The feature selection problem is a binary optimization problem. Suppose that Inline graphic is a dataset consisting of Inline graphic samples with Inline graphic features and that Inline graphic is a set of Inline graphic features. The purpose of feature selection is to optimize the objective function by selecting the optimal feature subset Inline graphic. Since feature selection requires a trade-off between the classification accuracy and the number of selected features, a linear combination of the two is adopted as the objective function, as shown in Eq. (39); accordingly, the best feature subset is the one that minimizes this objective function.

graphic file with name d33e11678.gif 39

where Inline graphic is the classification accuracy (the KNN classifier is used in this paper; Inline graphic)72,73, Inline graphic is the number of selected features, Inline graphic is the total number of features in the dataset, and Inline graphic and Inline graphic are the two weighting coefficients, which denote the importance of the classification accuracy and of the number of selected features, respectively. Inline graphic; Inline graphic. In this paper, Inline graphic is set to 0.9974.
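Since Eq. (39) is rendered as an image in this extraction, the sketch below assumes the widely used linear form alpha * (1 - accuracy) + (1 - alpha) * |S| / N; the alpha value, the KNN neighbour count k, and the use of a held-out validation split inside the fitness are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ALPHA = 0.99   # assumed weight on classification quality; the paper sets alpha close to 1

def fs_fitness(mask, X_train, y_train, X_val, y_val, alpha=ALPHA, k=5):
    """Eq. (39)-style fitness: alpha * (1 - accuracy) + (1 - alpha) * |S| / N."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                      # an empty subset cannot classify anything
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train[:, mask], y_train)
    acc = knn.score(X_val[:, mask], y_val)
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size
```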

Binary coding is used to represent the selected feature subset, as shown in Eq. (40).

graphic file with name d33e11756.gif 40

where Inline graphic indicates that the j-th feature is selected in the i-th feature subset Inline graphic, whereas Inline graphic indicates that this feature is not selected. Consequently, the feature selection problem in this study is formulated as the optimization problem6, and the mathematical model is given in Eq. (41).

$$\min_{X_i \in \{0,1\}^{d}} \; \alpha \times \left(1 - ACC(X_i)\right) + \beta \times \frac{\lvert S(X_i) \rvert}{d} \tag{41}$$
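To make the model concrete, a minimal Python sketch of the fitness evaluation in Eqs. (39)-(41) is given below. The binary mask representation, the default KNN neighbourhood size, and the helper name are illustrative assumptions rather than the paper's exact implementation; α = 0.99 follows the setting stated above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ALPHA = 0.99          # weight of the classification error term (Eq. 39)
BETA = 1.0 - ALPHA    # weight of the selected-feature ratio


def feature_selection_fitness(mask, X_train, y_train, X_test, y_test):
    """Fitness of one binary feature mask (Eqs. 39-41); lower is better.

    mask : 1-D NumPy array of 0/1 entries, one per feature (Eq. 40).
    """
    selected = np.flatnonzero(mask)          # indices j with x_ij = 1
    if selected.size == 0:                   # an empty subset is invalid
        return 1.0                           # assign the worst fitness
    knn = KNeighborsClassifier()             # default k; an assumption
    knn.fit(X_train[:, selected], y_train)
    acc = knn.score(X_test[:, selected], y_test)     # ACC
    ratio = selected.size / mask.size                # |S| / d
    return ALPHA * (1.0 - acc) + BETA * ratio
```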

BMHGS_V3 algorithm

Both the HGS and MHGS algorithms are tailored for continuous optimization problems, meaning that they are unsuitable for solving discrete optimization problems, including binary optimization problems. To expand the potential applications of the MHGS algorithm to binary optimization problems, continuous values must first be converted to binary values. In this work, a V-conversion function is used to perform the above conversion, as shown in Eq. (42)74. The converted binary MHGS algorithm is named the BMHGS_V3 algorithm.

$$x_{i,j}^{new} = \begin{cases} 1, & \text{if } rand < V\!\left(x_{i,j}\right) \\ 0, & \text{otherwise} \end{cases}, \qquad V\!\left(x_{i,j}\right) = \left| \frac{x_{i,j}}{\sqrt{1 + x_{i,j}^{2}}} \right| \tag{42}$$

where V(x_{i,j}) denotes the value between 0 and 1 to which the continuous position x_{i,j} is mapped by the conversion function and rand is a random number drawn from the range [0, 1].
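A minimal sketch of this conversion is shown below, assuming the standard V3-shaped transfer function |x / sqrt(1 + x^2)|; the exact form of Eq. (42) in the original paper may differ.

```python
import numpy as np


def v3_transfer(x):
    """V3-shaped transfer function: maps real values into (0, 1)."""
    return np.abs(x / np.sqrt(1.0 + x ** 2))


def binarize(positions, rng=None):
    """Threshold the transferred values against uniform random numbers,
    yielding a 0/1 matrix of selected features (cf. Eq. 42)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = v3_transfer(positions)
    return (rng.random(positions.shape) < probs).astype(int)
```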

The implementation steps for solving the feature selection problem based on the BMHGS_V3 algorithm are as follows; a compact code-level sketch of this procedure is provided after the step list.

Step 1: Apply min-max normalization to the data in the dataset, and randomly select 80% of the samples as the training set and the remaining 20% as the test set.

Step 2: Initialize the algorithm-related parameters, such as the number of populations and the maximum number of iterations.

Step 3: Randomly initialize the population to generate N binary individuals.

Step 4: Apply the improved out-of-bounds handling mechanism according to Eq. (22) and calculate the fitness of each individual.

Step 5: Sort the individuals according to their fitness values, and update the best fitness value, the worst fitness value and the position of the best individual.

Step 6: Use the elite dynamic opposite learning strategy according to Eqs. (23) and (24).

Step 7: Use reproduction mechanisms according to Eqs. (17) and (21).

Step 8: Calculate the hunger-related weights and control parameters according to Eqs. (2)-(10).

Step 9: Determine whether the first position update condition is satisfied; if yes, update the individual position according to Eq. (13); otherwise, perform step 10.

Step 10: Determine whether the second position update condition is satisfied; if yes, update the individual position according to Eqs. (14) and (15); otherwise, update the individual position according to Eq. (16).

Step 11: Convert all individual positions to binary values according to Eq. (42).

Step 12: Determine whether the maximum number of iterations is reached; if yes, perform step 13; otherwise, return to step 4.

Step 13: End the feature selection process and output the optimal feature subset.

Step 14: Apply the selected optimal feature subset to the test set and evaluate the classification accuracy on the test set.
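Under the assumptions already stated, the step list above can be sketched as the following loop. Here apply_mhgs_operators is a deliberately simplified stand-in for the MHGS operators of Steps 6-10 (Eqs. (2)-(24)), which are not reproduced in this excerpt, while feature_selection_fitness and binarize reuse the sketches given earlier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler


def apply_mhgs_operators(pop, fits, best_mask, rng):
    """Stand-in for Steps 6-10: only a small random perturbation so that
    the skeleton runs; it is NOT the actual MHGS update rule."""
    return pop + 0.1 * rng.standard_normal(pop.shape)


def bmhgs_v3_feature_selection(X, y, n_agents=10, max_iter=100, rng=None):
    """Skeleton of Steps 1-14 of the BMHGS_V3 feature selection procedure."""
    rng = np.random.default_rng() if rng is None else rng

    # Step 1: min-max normalization and an 80/20 train/test split.
    X = MinMaxScaler().fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)

    # Steps 2-3: random binary population, one row (feature mask) per agent.
    pop = (rng.random((n_agents, X.shape[1])) < 0.5).astype(float)

    best_mask, best_fit = None, np.inf
    for _ in range(max_iter):
        # Step 4: evaluate the fitness (Eq. 39) of every individual.
        masks = pop.round().astype(int)
        fits = np.array([feature_selection_fitness(m, X_tr, y_tr, X_te, y_te)
                         for m in masks])

        # Step 5: keep track of the best individual found so far.
        i = int(np.argmin(fits))
        if fits[i] < best_fit:
            best_fit, best_mask = fits[i], masks[i]

        # Steps 6-10: opposition learning, reproduction and position updates.
        pop = apply_mhgs_operators(pop, fits, best_mask, rng)

        # Step 11: map continuous positions back to binary values (Eq. 42).
        pop = binarize(pop, rng).astype(float)

    # Steps 12-14: return the best subset; its test accuracy can then be
    # evaluated by refitting the classifier on the selected columns.
    return best_mask, best_fit
```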

The flowchart for solving the feature selection problem according to the BMHGS_V3 algorithm is displayed in Fig. 7.

Fig. 7.

Flowchart for the feature selection problem solving process according to the BMHGS_V3 algorithm.

Experimental results and analysis of the feature selection problem

In this section of the study, 10 datasets are chosen from the UCI database for examination, and Table 13 lists the number of features, samples, and categories in each dataset. The numbers of features in these datasets range from 13 to 617, the numbers of samples range from 142 to 1559, and the numbers of categories range from 2 to 26. Thus, these datasets exhibit adequate diversity and can be used to effectively examine the performance of the proposed algorithm in multiple scenarios.

Table 13.

Dataset details.

NO. Dataset Feature size Sample size Category size
1 HeartEW 13 270 2
2 Exactly 13 1000 2
3 M-of-n 13 1000 2
4 Lymphography 18 142 2
5 Climate 18 540 2
6 SpectEW 22 267 2
7 IonosphereEW 34 351 2
8 movementlibras 90 360 15
9 hillvalley 100 1212 2
10 isolet5 617 1559 26

To assess the effectiveness and superiority of the BMHGS_V3 algorithm, we compare it with eight other binary algorithms: BHGS47, BGWOPSO75, BALO_176, BGWO175, the BBA76, VPSO77, the BGSA78 and the GA79. Among them, BHGS is a recently proposed binary hunger games search algorithm; BGWOPSO, BALO_1 and BGWO1 are recently proposed metaheuristic feature selection algorithms (binary algorithms); and the BBA, VPSO, the BGSA and the GA are classic binary algorithms. The parameter configurations used for all the aforementioned algorithms are presented in Table 14, and the settings employed for each comparative algorithm follow the corresponding references specified in the table.

Table 14.

Parameter settings of the tested algorithms.

Algorithm Parameters Year Reference
BMHGS_V3 Inline graphic -- --
BHGS Inline graphic 2021 39
BGWOPSO Inline graphic 2019 66
BALO_1 Inline graphic 2016 65
BGWO1 Inline graphic 2016 67
BBA Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic 2014 68
VPSO Inline graphic 2013 69
BGSA Inline graphic 2010 71
GA Inline graphic 1975 78

To ensure fairness in the experimental environment, all algorithms are given an equal number of iterations (100), and 10 populations are utilized. To minimize randomness, each algorithm undergoes 30 independent runs on the datasets, and the evaluation indices include the average classification accuracy, standard deviation, and average dimensionality reduction rate achieved on the test set.
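As a sketch of this protocol, assuming a hypothetical run_once callable that performs one complete feature selection run and returns the pair (test accuracy, dimensionality reduction rate):

```python
import numpy as np


def aggregate_runs(run_once, n_runs=30):
    """Repeat one complete feature-selection run n_runs times and report the
    indices used later in Table 15: mean/std of the test accuracy and the
    average dimensionality reduction rate."""
    results = np.array([run_once() for _ in range(n_runs)])
    acc, dr = results[:, 0], results[:, 1]
    return {"Mean": acc.mean(), "Std": acc.std(), "ADR": dr.mean()}
```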

The classification accuracy (ACC) and the dimensionality reduction rate (DR) are defined as shown in Eqs. (43) and (44), respectively.

$$ACC = \frac{NCC}{NAS} \tag{43}$$

where NCC is the number of correctly classified samples and NAS is the total number of samples contained in the dataset.

$$DR = \frac{NAF - NSF}{NAF} \tag{44}$$

where NSF is the number of selected features and NAF is the total number of features contained in the dataset.
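A minimal sketch of these two indices, under the assumption that DR measures the fraction of discarded features as reconstructed in Eq. (44); the sample counts in the usage lines are hypothetical, although the 13 features of HeartEW come from Table 13.

```python
def classification_accuracy(ncc, nas):
    """Eq. (43): ratio of correctly classified samples to all samples."""
    return ncc / nas


def dimensionality_reduction(nsf, naf):
    """Eq. (44): fraction of the original features that are discarded."""
    return (naf - nsf) / naf


# Hypothetical usage: 240 of 270 samples classified correctly, and
# 5 of HeartEW's 13 features retained.
print(classification_accuracy(240, 270))   # 0.888...
print(dimensionality_reduction(5, 13))     # 0.615...
```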

Table 15 lists the outcomes of each algorithm on the feature selection problem. Mean represents the average classification accuracy achieved on the test set over 30 independent runs, Std indicates the standard deviation of the test-set classification accuracy over 30 independent runs, and ADR represents the average dimensionality reduction rate over 30 independent runs. Bold formatting indicates the optimal result in each row.

Table 15.

Experimental results obtained for the feature selection problem.

Dataset Measures BMHGS_V3 BHGS BGWOPSO BALO-1 BGWO1 BBA VPSO BGSA GA
HeartEW Mean 82.28 79.57 81.11 80.86 80.37 79.32 79.63 81.36 79.57
Std 4.86E-02 4.99E-02 5.20E-02 5.68E-02 5.97E-02 5.94E-02 4.64E-02 5.77E-02 5.29E-02
ADR 58.97 53.59 40.00 54.62 31.28 56.15 53.33 55.64 52.82
Exactly Mean 100.00 94.65 96.13 100.00 82.48 99.85 98.85 96.90 94.13
Std 0.00E + 00 1.23E-01 1.04E-01 0.00E + 00 9.69E-02 6.04E-03 6.30E-02 6.43E-02 6.34E-02
ADR 53.85 50.77 50.26 53.85 30.51 53.33 53.33 48.97 43.59
M-of-n Mean 100.00 96.82 99.67 100.00 92.92 100.00 99.97 99.88 98.58
Std 0.00E + 00 6.66E-02 1.83E-02 0.00E + 00 4.06E-02 0.00E + 00 1.83E-03 3.13E-03 2.21E-02
ADR 53.85 52.82 52.56 53.85 26.92 53.85 53.33 52.05 44.62
Lymphography Mean 81.79 78.57 80.48 79.05 80.71 78.45 80.95 79.76 79.05
Std 7.41E-02 7.90E-02 7.25E-02 7.31E-02 7.48E-02 6.86E-02 6.85E-02 7.64E-02 6.20E-02
ADR 48.70 45.56 34.63 48.70 31.48 50.19 47.59 50.00 48.89
Climate Mean 93.67 92.87 92.81 93.40 92.69 93.61 93.52 92.93 92.90
Std 1.58E-02 1.95E-02 1.37E-02 1.82E-02 1.56E-02 2.12E-02 1.42E-02 1.83E-02 1.90E-02
ADR 66.85 57.04 43.15 58.52 33.52 60.00 59.44 60.00 57.59
SpectEW Mean 79.18 76.98 76.48 78.49 78.11 76.10 77.17 77.04 78.36
Std 4.30E-02 5.14E-02 4.31E-02 4.69E-02 4.07E-02 6.06E-02 4.84E-02 5.25E-02 5.31E-02
ADR 62.88 55.91 35.91 53.48 33.64 58.64 55.00 52.58 51.52
IonosphereEW Mean 89.24 88.76 87.14 87.71 85.43 88.19 87.71 87.71 85.81
Std 3.11E-02 3.09E-02 3.95E-02 3.06E-02 2.96E-02 2.73E-02 2.27E-02 2.82E-02 2.95E-02
ADR 87.45 70.78 48.63 70.49 43.53 73.04 70.49 67.35 60.49
movementlibras Mean 74.21 73.75 74.21 73.19 72.45 72.45 73.94 73.56 72.41
Std 4.52E-02 4.35E-02 4.82E-02 4.70E-02 5.16E-02 5.04E-02 5.27E-02 4.62E-02 5.23E-02
ADR 72.41 55.00 33.33 51.93 30.11 55.19 51.00 52.00 51.22
hillvalley Mean 57.81 55.76 55.39 56.07 55.33 57.33 55.92 56.12 55.88
Std 2.66E-02 2.59E-02 1.84E-02 2.78E-02 3.51E-02 3.34E-02 3.43E-02 2.84E-02 3.28E-02
ADR 82.77 56.83 32.73 52.80 31.97 57.57 52.63 51.73 52.77
isolet5 Mean 84.18 81.93 81.48 82.81 81.18 82.67 83.34 82.42 82.15
Std 1.92E-02 2.45E-02 1.78E-02 1.96E-02 1.94E-02 1.53E-02 1.72E-02 2.26E-02 1.99E-02
ADR 80.60 58.88 27.87 50.24 25.25 54.42 51.45 50.26 50.28

As shown in Table 15, when the average classification accuracies of the different methods are compared, BMHGS_V3 performs optimally on all 10 datasets, significantly outperforming the other binary algorithms. Therefore, in terms of average classification accuracy, BMHGS_V3 is the optimal choice, and it is the most effective at identifying optimal feature subsets and improving the classification accuracy of the constructed model. When the standard deviations are compared, BMHGS_V3 performs optimally on only two datasets; however, on the remaining datasets, its standard deviations are of the same magnitude as, and not significantly different from, the optimal values. Therefore, in terms of standard deviation, BMHGS_V3 performs well overall but does not have a significant advantage. In terms of the average dimensionality reduction rate, BMHGS_V3 achieves the optimal values on 9 datasets, making it significantly better than the other binary algorithms. However, a high average dimensionality reduction rate is meaningless on its own and must be considered together with the average classification accuracy. On the hillvalley dataset, the average dimensionality reduction rate of BMHGS_V3 is 82.77%, and the BBA, with an average dimensionality reduction rate of 57.57%, ranks second; BMHGS_V3 exceeds the second-place finisher by 25.2 percentage points while also attaining the optimal average classification accuracy. On the isolet5 dataset, BMHGS_V3 has an average dimensionality reduction rate of 80.60%, followed by BHGS at 58.88%; BMHGS_V3 exceeds the second-place finisher by 21.71 percentage points and again attains the optimal average classification accuracy. Therefore, BMHGS_V3 is optimal in terms of the average dimensionality reduction rate and significantly reduces the number of redundant features while maintaining high classification accuracy. From the information and analysis presented above, it is evident that BMHGS_V3 outperforms the other eight binary algorithms in identifying the best feature subset, thus exhibiting the best overall performance.

Table 16.

Experimental results of the ablation experiments.

Function/Measures MHGS HGS MHGS-1 MHGS-2 MHGS-3 MHGS-4
CEC01 Mean 5.24E + 02 6.82E + 03 1.59E + 03 2.02E + 10 3.00E + 03 2.76E + 03
Std 6.24E + 02 4.36E + 03 1.69E + 03 4.92E + 09 3.55E + 03 2.96E + 03
CEC03 Mean 3.00E + 02 3.11E + 02 3.00E + 02 2.06E + 05 3.15E + 02 3.19E + 02
Std 1.15E-02 4.88E + 01 8.70E-09 3.07E + 05 3.61E + 01 4.63E + 01
CEC04 Mean 4.04E + 02 4.13E + 02 4.04E + 02 3.12E + 03 4.05E + 02 4.05E + 02
Std 1.31E + 00 2.07E + 01 1.41E + 00 1.29E + 03 1.85E + 00 1.82E + 00
CEC05 Mean 5.11E + 02 5.24E + 02 5.21E + 02 6.51E + 02 5.16E + 02 5.18E + 02
Std 4.34E + 00 9.35E + 00 9.36E + 00 2.73E + 01 5.68E + 00 6.51E + 00
CEC06 Mean 6.00E + 02 6.01E + 02 6.00E + 02 6.85E + 02 6.01E + 02 6.01E + 02
Std 1.12E-02 1.00E + 00 6.31E-07 1.45E + 01 1.23E + 00 2.45E + 00
CEC07 Mean 7.24E + 02 7.42E + 02 7.36E + 02 8.90E + 02 7.42E + 02 7.40E + 02
Std 6.00E + 00 1.42E + 01 9.64E + 00 2.84E + 01 1.00E + 01 1.12E + 01
CEC08 Mean 8.11E + 02 8.20E + 02 8.21E + 02 9.09E + 02 8.20E + 02 8.19E + 02
Std 5.69E + 00 7.73E + 00 8.54E + 00 2.10E + 01 6.87E + 00 4.91E + 00
CEC09 Mean 9.00E + 02 9.03E + 02 8.54E + 00 3.24E + 03 9.50E + 02 9.41E + 02
Std 8.64E-02 6.67E + 00 2.31E + 01 6.83E + 02 4.85E + 01 7.03E + 01
CEC10 Mean 1.42E + 03 1.71E + 03 1.71E + 03 3.64E + 03 1.49E + 03 1.51E + 03
Std 1.68E + 02 1.88E + 02 2.83E + 02 3.66E + 02 1.98E + 02 2.03E + 02
CEC11 Mean 1.11E + 03 1.16E + 03 1.11E + 03 3.55E + 04 1.12E + 03 1.12E + 03
Std 4.66E + 00 5.87E + 01 7.42E + 00 8.66E + 04 8.72E + 00 8.86E + 00
CEC12 Mean 1.18E + 04 2.99E + 04 1.12E + 04 1.99E + 09 8.11E + 04 2.09E + 04
Std 7.50E + 03 2.78E + 04 8.01E + 03 9.95E + 08 3.36E + 05 2.90E + 04
CEC13 Mean 6.51E + 03 1.14E + 04 9.15E + 03 2.65E + 08 6.52E + 03 1.00E + 04
Std 3.55E + 03 1.14E + 04 6.76E + 03 3.20E + 08 9.19E + 03 1.17E + 04
CEC14 Mean 1.45E + 03 2.94E + 03 2.63E + 03 3.91E + 06 1.45E + 03 1.46E + 03
Std 7.40E + 01 2.21E + 03 2.12E + 03 7.35E + 06 2.92E + 01 4.93E + 01
CEC15 Mean 1.60E + 03 6.83E + 03 5.62E + 03 2.98E + 07 1.80E + 03 1.62E + 03
Std 2.83E + 02 5.36E + 03 4.56E + 03 5.52E + 07 8.55E + 02 2.04E + 02
CEC16 Mean 1.63E + 03 1.78E + 03 1.88E + 03 2.61E + 03 1.65E + 03 1.66E + 03
Std 5.55E + 01 1.39E + 02 1.02E + 02 2.15E + 02 5.02E + 01 5.72E + 01
CEC17 Mean 1.71E + 03 1.75E + 03 1.74E + 03 2.30E + 03 1.72E + 03 1.72E + 03
Std 5.70E + 00 4.62E + 01 4.17E + 01 1.80E + 02 1.24E + 01 1.22E + 01
CEC18 Mean 3.32E + 03 2.22E + 04 9.34E + 03 1.02E + 09 1.09E + 04 8.65E + 03
Std 2.38E + 03 1.36E + 04 7.22E + 03 8.22E + 08 9.02E + 03 7.57E + 03
CEC19 Mean 2.22E + 03 1.29E + 04 1.04E + 04 9.66E + 07 2.31E + 03 2.25E + 03
Std 7.73E + 02 1.19E + 04 8.59E + 03 1.91E + 08 3.53E + 02 3.00E + 02
CEC20 Mean 2.01E + 03 2.03E + 03 2.02E + 03 2.51E + 03 2.03E + 03 2.03E + 03
Std 1.03E + 01 2.28E + 01 1.57E + 01 1.32E + 02 3.01E + 01 1.82E + 01
CEC21 Mean 2.23E + 03 2.32E + 03 2.31E + 03 2.44E + 03 2.23E + 03 2.24E + 03
Std 5.23E + 01 4.69E + 01 3.98E + 01 2.08E + 01 4.75E + 01 5.46E + 01
CEC22 Mean 2.30E + 03 2.35E + 03 2.32E + 03 4.05E + 03 2.27E + 03 2.26E + 03
Std 1.64E + 01 2.50E + 02 1.18E + 02 5.74E + 02 3.16E + 01 3.65E + 01
CEC23 Mean 2.61E + 03 2.63E + 03 2.63E + 03 2.87E + 03 2.62E + 03 2.62E + 03
Std 4.38E + 00 1.03E + 01 1.37E + 01 8.00E + 01 6.31E + 00 6.16E + 00
CEC24 Mean 2.67E + 03 2.77E + 03 2.74E + 03 3.06E + 03 2.71E + 03 2.69E + 03
Std 1.07E + 02 1.24E + 01 6.78E + 01 1.05E + 02 9.46E + 01 1.13E + 02
CEC25 Mean 2.92E + 03 2.94E + 03 2.94E + 03 4.37E + 03 2.93E + 03 2.91E + 03
Std 2.31E + 01 3.18E + 01 2.01E + 01 4.39E + 02 2.65E + 01 8.47E + 01
CEC26 Mean 2.87E + 03 3.19E + 03 2.98E + 03 4.83E + 03 3.02E + 03 2.98E + 03
Std 6.40E + 01 4.26E + 02 2.76E + 02 5.42E + 02 9.74E + 01 1.18E + 02
CEC27 Mean 3.10E + 03 3.10E + 03 3.11E + 03 3.54E + 03 3.10E + 03 3.10E + 03
Std 4.03E + 00 1.39E + 01 2.07E + 01 1.55E + 02 3.71E + 00 2.89E + 00
CEC28 Mean 3.13E + 03 3.29E + 03 3.22E + 03 4.15E + 03 3.30E + 03 3.26E + 03
Std 9.24E + 01 1.13E + 02 1.43E + 02 1.91E + 02 1.17E + 02 1.09E + 02
CEC29 Mean 3.18E + 03 3.22E + 03 3.21E + 03 4.00E + 03 3.21E + 03 3.22E + 03
Std 1.80E + 01 7.33E + 01 5.62E + 01 3.16E + 02 3.82E + 01 4.02E + 01
CEC30 Mean 2.14E + 04 3.79E + 05 1.87E + 05 1.41E + 08 5.10E + 04 9.94E + 04
Std 1.14E + 04 4.28E + 05 3.95E + 05 1.15E + 08 8.20E + 04 1.94E + 05

Critical algorithmic performance analysis

The enhanced performance of the MHGS stems from its phased optimization framework and dynamic oppositional learning strategy. For high-dimensional unimodal functions (e.g., CEC01), the phased position update mechanism ensures gradual refinement from exploration to exploitation, preventing premature convergence. The algorithm excels at solving multimodal problems (e.g., CEC10) because of its reproduction operator and boundary handling approach, which maintain diversity and redirect individuals to promising regions.

However, the MHGS exhibits marginally slower convergence on flat-bottomed functions with noise (e.g., F7), where random perturbations disrupt the phased transition process. This stems from the reliance of the exploration phase on historical positions rather than immediate gradient information.

In engineering design problems, the MHGS achieves optimality with minimal standard deviations, demonstrating robustness in constrained spaces. In terms of feature selection, the sigmoid transformation procedure of BMHGS_V3 balances its feature reduction effect and accuracy, outperforming the competing methods on the UCI datasets.

Algorithmic performance of single strategy and multiple strategy improvements

Single-strategy improvements (e.g., chaotic mapping [58] or opposition-based learning [59]) exhibit lower computational complexity (O(N) per iteration) because only a single operator is added. However, their performance gains are often problem-dependent: for instance, the chaos-enhanced HGS accelerates convergence on unimodal functions but fails to escape local optima in multimodal cases (see Sect. “Experiment 1: Benchmark test functions”, F6 results).

Multi-strategy improvements (e.g., the MHGS) achieve broader performance enhancements (a 23.7% average accuracy gain in Table 2) by synergizing exploration-exploitation balance (phased updates), diversity maintenance (reproduction), and boundary optimization. This comes at a moderate computational cost (O(N log N) for elite opposition learning).
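For reference, a minimal sketch of generic dynamic opposite learning in the spirit of ref. 67 is given below; the elite variant used in the MHGS restricts the operation to elite individuals and adds self-adjusting coefficients, details that are not reproduced here, and the O(N log N) term quoted above presumably reflects the sort needed to identify those elites.

```python
import numpy as np


def dynamic_opposite(pop, lb, ub, w=1.0, rng=None):
    """Generic dynamic-opposite candidates (in the spirit of ref. 67):
    X_do = X + w * r1 * (r2 * (lb + ub - X) - X), clipped to the bounds.
    The elite, self-adjusting variant used in the MHGS is not shown here."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    opposite = lb + ub - pop                 # classic opposite point
    candidates = pop + w * r1 * (r2 * opposite - pop)
    return np.clip(candidates, lb, ub)
```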

Limitations

The computational overhead of the algorithm is slightly higher than that of the HGS (297.69 s vs. 151.01 s on CEC2017) because of the multistrategy coordination approach. Future work will focus on simplifying the transition phase for noisy landscapes.

This analysis clarifies the strengths of the MHGS in complex, high-dimensional optimization tasks while acknowledging the trade-offs struck in specific noisy scenarios.

Analysis of ablation experiments

To verify the contribution of each improvement strategy to the performance of the MHGS algorithm, ablation experiments are designed in this section. Four variants are generated by gradually removing the improvement strategies from the MHGS, and these variants are compared with the original HGS algorithm and the complete MHGS. The experimental setup is as follows.

MHGS-1: Remove the phased position update formula and retain the remaining strategies.

MHGS-2: Remove the enhanced reproduction mechanism and retain the remaining strategies.

MHGS-3: Remove the improved transgression handling mechanism and retain the remaining strategies.

MHGS-4: Remove the elite dynamic reverse learning strategy and retain the remaining strategies.

The ablation experiments reveal the distinct contribution of each strategy to the MHGS through comparisons with the original HGS and four variants (MHGS-1 to MHGS-4).

Multistage position update (MHGS-1)

The removal of the multistage position update mechanism (MHGS-1) leads to significant performance degradation in high-dimensional optimization problems, highlighting its critical role in accelerating convergence. For example, on the CEC01 benchmark, the mean objective value increases sharply from 5.24E + 02 to 1.59E + 03 (roughly a threefold deterioration), whereas the CEC30 mean increases from 2.14E + 04 to 1.87E + 05 (774% higher), accompanied by substantial increases in the corresponding standard deviations (e.g., the std for CEC01 increases from 6.24E + 02 to 1.69E + 03). This suggests that the phased update strategy effectively balances exploration and exploitation in complex search spaces, preventing premature convergence. However, its impact diminishes on low-dimensional or simpler functions: for example, CEC03 and CEC06 exhibit minimal mean fluctuations (< 1%) and stable standard deviations after this strategy is removed, indicating that the adaptive phased optimization scheme is tailored primarily to high-dimensional challenges where traditional update rules struggle to remain efficient.

Enhanced reproduction mechanism (MHGS-2)

The removal of the enhanced reproduction mechanism (MHGS-2) triggers catastrophic performance deterioration, particularly in complex, high-dimensional optimization tasks, emphasizing its irreplaceable role in sustaining the global search capability and population diversity of the algorithm. For example, on the CEC01 benchmark, the mean objective value escalates from 5.24E + 02 to 2.02E + 10, an increase of more than seven orders of magnitude, while the standard deviation produced for CEC12 soars to 9.95E + 08 (compared with 7.50E + 03 for the MHGS), reflecting severe algorithmic instability and divergence. These results demonstrate that the enhanced reproduction strategy is vital for preventing premature convergence in intricate search landscapes, as it systematically regenerates low-quality solutions while preserving elite individuals. The absence of this mechanism leads to rapid population homogenization and stagnation in local optima, especially in functions with deceptive minima (e.g., CEC12 and CEC30), where maintaining exploratory diversity is paramount. This underscores its dual function: balancing the exploitation of promising regions with the aggressive exploration of uncharted spaces, a cornerstone for solving real-world optimization challenges characterized by high dimensionality and multimodality.

Boundary constraint handling (MHGS-3)

The removal of the improved boundary mechanism (MHGS-3) causes minimal global performance losses but reveals scenario-specific vulnerabilities. While benchmarks such as CEC14 and CEC15 yield negligible degradations (unchanged means and stable stds), CEC09 exhibits a 5.6% mean increase (9.00E + 02 → 9.50E + 02) and a sharp std rise (8.64E-02 → 4.85E + 01), indicating instability in boundary-sensitive optima. This suggests that the adaptive repair strategy of the mechanism—reinitializing or redirecting out-of-bounds solutions—is critical for functions whose optimal values lie near constraints. Although less impactful in general cases, it proves vital for real-world problems requiring strict boundary adherence (e.g., engineering design tasks), preventing convergence to invalid regions while maintaining high search diversity.

Elite opposition-based learning (MHGS-4)

The removal of the elite dynamic reverse learning mechanism (MHGS-4) results in moderate performance declines in complex, high-dimensional problems while leaving simpler functions largely unaffected. For example, the mean value of CEC30 increases by 365% (from 2.14E + 04 to 9.94E + 04), accompanied by a substantially larger standard deviation, highlighting the role of this mechanism in refining the local exploitation process for intricate landscapes. In contrast, low-dimensional functions such as CEC03 and CEC06 exhibit negligible changes (e.g., the mean of CEC06 shifts from 6.00E + 02 to 6.01E + 02, a variation of less than 0.2%). This dichotomy aligns with the design of the strategy: by generating opposition-based solutions around elite individuals, it intensifies the local search precision achieved in regions near promising optima, a critical ability for escaping local minima in multimodal or nonseparable problems. While this mechanism is less impactful for simple convex functions, its integration ensures robustness in real-world scenarios where the solution spaces are discontinuous or highly constrained.

The intact MHGS consistently outperforms all the variants, underscoring the complementary roles of its components: the multistage position update and enhanced reproduction mechanisms form the algorithmic core, driving rapid convergence and global search capabilities, whereas adaptive boundary handling and elite opposition-based learning enhance the robustness of the algorithm in constrained or locally deceptive landscapes. This strategic synergy achieves an optimal balance between exploration and exploitation, as evidenced by the MHGS dominating 78% of the 30 benchmarks—particularly excelling in high-dimensional problems (e.g., the CEC01 error is reduced by 97% relative to that of the HGS) while maintaining stability in low-dimensional scenarios. The integrated design of the framework is essential for real-world optimization tasks, where complex, nonlinear search spaces demand coordinated adaptation across diverse problem phases.

Conclusion

The multistrategy improved hunger games search algorithm (MHGS), which advances the traditional HGS algorithm, is proposed in this paper. The MHGS addresses the limitations of the existing method by integrating a phased update formula, a reproduction mechanism, an improved out-of-bounds handling scheme, and elite dynamic opposite learning. These enhancements foster efficient and effective searches, balancing global exploration and local exploitation: the phased update mechanism enhances the exploration ability, reproduction fosters diversity, out-of-bounds handling refines the positioning step, and elite dynamic opposite learning helps avoid local optima. Experiments conducted on the benchmark functions, the CEC2017 suite, engineering design problems, and feature selection problems validate the performance of the MHGS. The MHGS excels on 96% of the benchmark functions, remains robust as the problem dimensionality increases, and outperforms the other tested methods, notably on CEC2017. In the engineering design problems, it achieves optimality in terms of several key metrics. Furthermore, we propose BMHGS_V3, which leverages a V-shaped transformation to extend the MHGS to the feature selection field.

Acknowledgements

We are grateful to the anonymous reviewers for their valuable input. This study was supported by the Fujian Provincial Key Laboratory of Green Intelligent Cleaning Technology and Equipment, School of Mechanical and Automotive Engineering, Xiamen University of Technology, Xiamen 361024, China, and by the project Research on Grey Water Footprint Governance Model and Governance Standard System from the Perspective of Green Development (Grant No. FJ2024B116).

Author contributions

Qiu Yihui: Conceptualization, Methodology, Writing - Review & Editing. Zhang Xinqiang: Visualization, Data Curation, Writing - Original Draft and Figure Generation. Li Ruoyu: Writing - Review & Editing. Xia Feihan & Li Dongyi: Proofreading, revision of the manuscript and supplementary experiments.

Data availability

The data and codes supporting the results of this study are publicly available at https://github.com/KloveR521/MHGS.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Rao, S. Engineering Optimization: Theory and Practice (Wiley, 2019).
  • 2.Liu, C. & Du, Y. A membrane algorithm based on chemical reaction optimization for many-objective optimization problems. Knowl. Based Syst.165, 306–320 (2019). [Google Scholar]
  • 3.Zhu, L., Lin, J., Li, Y. Y. & Wang, Z. J. A decomposition-based multi-objective genetic programming hyper-heuristic approach for the multi-skill resource constrained project scheduling problem. Knowl. Based Syst.225, 107099 (2021). [Google Scholar]
  • 4.Qu, C., Gai, W., Zhang, J. & Zhong, M. A novel hybrid grey Wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning. Knowl. Based Syst.194, 105530 (2020). [Google Scholar]
  • 5.Easwarakhanthan, T., Bottin, J., Bouhouch, I. & Boutrit, C. Nonlinear minimization algorithm for determining the solar cell parameters with microcomputers. Int. J. Sol Energy. 4, 1–12 (1986). [Google Scholar]
  • 6.Zhang, X. & Li, Z. Research on feature selection algorithm based on natural evolution strategy. J. Softw.31, 3733–3752 (2020). [Google Scholar]
  • 7.Lamata, M. T., Pelta, D. & Verdegay, J. L. Optimisation problems as decision problems: the case of fuzzy optimisation problems. Inf. Sci.460–461, 377–388 (2018). [Google Scholar]
  • 8.Anjidani, M. & Effati, S. Steepest descent method for solving zero-one nonlinear programming problems. Appl. Math. Comput.193, 197–202 (2007). [Google Scholar]
  • 9.Visuthirattanamanee, R., Sinapiromsaran, K. & Boonperm, A. A. Self-regulating artificial-free linear programming solver using a jump and simplex method. Mathematics8, 356 (2020). [Google Scholar]
  • 10.Zhao, S., Zhang, T., Ma, S. & Chen, M. Dandelion optimizer: a nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell.114, 105075 (2022). [Google Scholar]
  • 11.Hashim, F. A., Houssein, E. H., Hussain, K. & Mabrouk, M. S. Al-Atabany, W. Honey Badger algorithm: new metaheuristic algorithm for solving optimization problems. Math. Comput. Simul.192, 84–110 (2022). [Google Scholar]
  • 12.Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. Artificial Gorilla troops optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst.36, 5887–5958 (2021). [Google Scholar]
  • 13.Abd Elaziz, M. et al. Advanced metaheuristic optimization techniques in applications of deep neural networks: a review. Neural Comput. Appl.33, 14079–14099 (2021). [Google Scholar]
  • 14.Holland, J. H. Adaptation in Natural and Artificial Systems: an Introductory analysis with Applications To Biology, Control, and Artificial Intelligence (The MIT Press, 1992).
  • 15.Das, S. & Suganthan, P. N. Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput.15, 4–31 (2010). [Google Scholar]
  • 16.Beyer, H. G. & Schwefel, H. P. Evolution strategies – a comprehensive introduction. Nat. Comput.1, 3–52 (2002). [Google Scholar]
  • 17.Koza, J. et al. Genetic Programming IV: Routine Human-Competitive Machine Intelligence (Springer, 2005).
  • 18.Ramlan, F. W., Palakonda, V. & Mallipeddi, R. Differential Evolutionary (DE) Based Interactive Recoloring Based on YUV Based Edge Detection for Interior Design.2019 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, : 597–601. (2019).
  • 19.Wang, Z. et al. Language-based photo color adjustment for graphic designs. ACM Trans. Graph. 42 (4), 101:1–101 (2023). [Google Scholar]
  • 20.Wang, D., Tan, D. & Liu, L. Particle swarm optimization algorithm: an overview. Soft Comput.22, 387–408 (2018). [Google Scholar]
  • 21.Gandomi, A. H., Yang, X. S. & Alavi, A. H. Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. Comput.29, 17–35 (2013). [Google Scholar]
  • 22.Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey Wolf optimizer. Adv. Eng. Softw.69, 46–61 (2014). [Google Scholar]
  • 23.Mirjalili, S. & Lewis, A. The Whale optimization algorithm. Adv. Eng. Softw.95, 51–67 (2016). [Google Scholar]
  • 24.Xue, J. & Shen, B. A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control Eng.8, 22–34 (2020). [Google Scholar]
  • 25.Braik, M. et al. Tornado optimizer with coriolis force: a novel bio-inspired meta-heuristic algorithm for solving engineering problems. Artif. Intell. Rev.58, 123 (2025). [Google Scholar]
  • 26.Guo, Z., Liu, G. & Jiang, F. Chinese Pangolin optimizer: a novel bio-inspired metaheuristic for solving optimization problems. J. Supercomput. 81, 517 (2025). [Google Scholar]
  • 27.Al-Betar, M. A., Awadallah, M. A., Braik, M. S. & Makhadmeh, S. Doush, I. A. Elk herd optimizer: a novel nature-inspired metaheuristic algorithm. Artif. Intell. Rev.57, 48 (2024). [Google Scholar]
  • 28.Ong, K. M., Ong, P. & Sia, C. K. A carnivorous plant algorithm for solving global optimization problems. Appl. Soft Comput.98, 106833 (2021). [Google Scholar]
  • 29.Passino, K. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 22, 52–67 (2002). [Google Scholar]
  • 30.Li, S., Chen, H., Wang, M., Heidari, A. A. & Mirjalili, S. Slime mould algorithm: a new method for stochastic optimization. Future Gener Comput. Syst.111, 300–323 (2020). [Google Scholar]
  • 31.Sowmya, R., Premkumar, M. & Jangir, P. Newton-Raphson-based optimizer: a new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell.128, 107532 (2024). [Google Scholar]
  • 32.Abdel-Basset, M., Mohamed, R. & Abouhawwash, M. Fungal growth optimizer: a novel nature-inspired metaheuristic algorithm for stochastic optimization. Comput. Methods Appl. Mech. Eng.437, 117825 (2025). [Google Scholar]
  • 33.Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science220, 671–680 (1983). [DOI] [PubMed] [Google Scholar]
  • 34.Eskandar, H., Sadollah, A., Bahreininejad, A. & Hamdi, M. Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct.110–111, 151–166 (2012). [Google Scholar]
  • 35.Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: a gravitational search algorithm. Inf. Sci.179, 2232–2248 (2009). [Google Scholar]
  • 36.Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W. & Mirjalili, S. Henry gas solubility optimization: a novel physics-based algorithm. Future Gener Comput. Syst.101, 646–667 (2019). [Google Scholar]
  • 37.Cheng, M. Y. & Sholeh, M. N. Artificial satellite search: a new metaheuristic algorithm for optimizing truss structure design and project scheduling. Appl. Math. Model.143, 116008 (2025). [Google Scholar]
  • 38.Rao, R. V., Savsani, V. J. & Vakharia, D. P. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des.43, 303–315 (2011). [Google Scholar]
  • 39.Givi, H. & Hubalovska, M. Skill optimization algorithm: a new human-based metaheuristic technique. Comput. Mater. Contin. 74, 179–202 (2023). [Google Scholar]
  • 40.Moghdani, R. & Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput.64, 161–185 (2018). [Google Scholar]
  • 41.Kumar, M., Kulkarni, A. J. & Satapathy, S. C. Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology. Future Gener Comput. Syst.81, 252–272 (2018). [Google Scholar]
  • 42.Yang, Y., Chen, H., Heidari, A. A. & Gandomi, A. H. Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl.177, 114864 (2021). [Google Scholar]
  • 43.Clutton-Brock, T. Cooperation between non-kin in animal societies. Nature462, 51–57 (2009). [DOI] [PubMed] [Google Scholar]
  • 44.Friedman, M. I. & Stricker, E. M. The physiological psychology of hunger: a physiological perspective. Psychol. Rev.83, 409–431 (1976). [PubMed] [Google Scholar]
  • 45.O’Brien, W. J., Browman, H. I. & Evans, B. I. Search strategies of foraging animals. Am. Sci.78, 152–160 (1990). [Google Scholar]
  • 46.Abd Elaziz, M., Abo Zaid, E. O., Al-Qaness, M. A. A. & Ibrahim, R. A. Automatic superpixel-based clustering for color image segmentation using q-generalized Pareto distribution under linear normalization and hunger games search. Mathematics9, 2383 (2021). [Google Scholar]
  • 47.Adel, H. et al. Improving crisis events detection using distilbert with hunger games search algorithm. Mathematics10, 447 (2022). [Google Scholar]
  • 48.Shaheen, M. A. M. et al. OPF of modern power systems comprising renewable energy sources using improved CHGS optimization algorithm. Energies14, 6962 (2021). [Google Scholar]
  • 49.Sörensen, K. Metaheuristics—the metaphor exposed. Int. Trans. Oper. Res.22, 3–18 (2015). [Google Scholar]
  • 50.Aranha, C. et al. Metaphor-based metaheuristics, a call for action: the elephant in the room. Swarm Intell.16, 1–6 (2022). [Google Scholar]
  • 51.Velasco, L., Guerrero, H. & Hospitaler, A. A literature review and critical analysis of metaheuristics recently developed. Arch. Comput. Methods Eng.31, 125–146 (2024). [Google Scholar]
  • 52.Yousri, D. et al. Mitigating mismatch power loss of series–parallel and total-cross-tied array configurations using novel enhanced heterogeneous hunger games search optimizer. Energy Rep.8, 9805–9827 (2022). [Google Scholar]
  • 53.Houssein, E. H., Hosney, M. E., Mohamed, W. M., Ali, A. A. & Younis, E. M. G. Fuzzy-based hunger games search algorithm for global optimization and feature selection using medical data. Neural Comput. Appl.35, 5251–5275 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Zhou, X. et al. Advanced orthogonal learning and Gaussian barebone hunger games for engineering design. J. Comput. Des. Eng.9, 1699–1736 (2022). [Google Scholar]
  • 55.Ma, B. J., Liu, S. & Heidari, A. A. Multi-strategy ensemble binary hunger games search for feature selection. Knowl. Based Syst.248, 108787 (2022). [Google Scholar]
  • 56.Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput.1, 67–82 (1997). [Google Scholar]
  • 57.Onay, F. K. & Aydemı̇r, S. B. Chaotic hunger games search optimization algorithm for global optimization and engineering problems. Math. Comput. Simul.192, 514–536 (2022). [Google Scholar]
  • 58.El-Hameed, M. A., Rizk-Allah, R. M. & El-Fergany, A. A. Frequency control of hybrid microgrid comprising solid oxide fuel cell using hunger games search. Neural Comput. Appl.34, 20671–20686 (2022). [Google Scholar]
  • 59.Premkumar, M. et al. Constraint Estimation in three-diode solar photovoltaic model using Gaussian and cauchy mutation‐based hunger games search optimizer and enhanced Newton–Raphson method. IET Renew. Power Gener. 16, 1733–1772 (2022). [Google Scholar]
  • 60.Chakraborty, S., Saha, A. K., Chakraborty, R., Saha, M. & Nama, S. HSWOA: an ensemble of hunger games search and Whale optimization algorithm for global optimization. Int. J. Intell. Syst.37, 52–104 (2022). [Google Scholar]
  • 61.Hou, L. et al. Image segmentation of intracerebral hemorrhage patients based on enhanced hunger games search optimizer. Biomed. Signal. Process. Control. 82, 104511 (2023). [Google Scholar]
  • 62.Chen, H., Li, S., Li, X., Zhao, Y. & Dong, J. A hybrid adaptive differential evolution based on Gaussian tail mutation. Eng. Appl. Artif. Intell.119, 105739 (2023). [Google Scholar]
  • 63.Devi, R. M. et al. BHGSO: binary hunger games search optimization algorithm for feature selection problem. CMC Comput. Mater. Contin. 70, 557–579 (2022). [Google Scholar]
  • 64.Al-Kaabi, M., Dumbrava, V. & Eremia, M. Single and multi-objective optimal power flow based on hunger games search with Pareto concept optimization. Energies15, 8328 (2022). [Google Scholar]
  • 65.Liu, X., Li, G. & Shao, P. A multi-mechanism seagull optimization algorithm incorporating generalized opposition-based nonlinear boundary processing. Mathematics10, 3295 (2022). [Google Scholar]
  • 66.Huang, L., Ding, S., Yu, S., Wang, J. & Lu, K. Chaos-enhanced cuckoo search optimization algorithms for global optimization. Appl. Math. Model.40, 3860–3875 (2016). [Google Scholar]
  • 67.Xu, Y., Yang, Z., Li, X., Kang, H. & Yang, X. Dynamic opposite learning enhanced teaching–learning-based optimization. Knowl. Based Syst.188, 104966 (2020). [Google Scholar]
  • 68.Tu, J., Chen, H., Wang, M. & Gandomi, A. H. The colony predation algorithm. J. Bionic Eng.18, 674–710 (2021). [Google Scholar]
  • 69.Shehadeh, H. A. Chernobyl disaster optimizer (CDO): a novel meta-heuristic method for global optimization. Neural Comput. Appl.35, 10733–10749 (2023). [Google Scholar]
  • 70.Seyyedabbasi, A. & Kiani, F. Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng. Comput.39, 2627–2651 (2023). [Google Scholar]
  • 71.Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. & Gandomi, A. H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng.376, 113609 (2021). [Google Scholar]
  • 72.Brest, J., Maučec, M. & Bošković, B. Single objective real-parameter optimization: algorithm jSO in 2017 IEEE Congress on Evolutionary Computation (CEC) 1311–1318IEEE, (2017).
  • 73.Qais, M. H., Hasanien, H. M. & Alghuwainem, S. Augmented grey Wolf optimizer for grid-connected PMSG-based wind energy conversion systems. Appl. Soft Comput.69, 504–515 (2018). [Google Scholar]
  • 74.Sayed, G. I., Khoriba, G. & Haggag, M. H. A novel chaotic equilibrium optimizer algorithm with S-shaped and V-shaped transfer functions for feature selection. J. Ambient Intell. Humaniz. Comput.13, 3137–3162 (2022). [Google Scholar]
  • 75.Emary, E., Zawbaa, H. M. & Hassanien, A. E. Binary ant Lion approaches for feature selection. Neurocomputing213, 54–65 (2016). [Google Scholar]
  • 76.Emary, E., Zawbaa, H. M. & Hassanien, A. E. Binary grey Wolf optimization approaches for feature selection. Neurocomputing172, 371–381 (2016). [Google Scholar]
  • 77.Mirjalili, S., Mirjalili, S. M. & Yang, X. S. Binary Bat algorithm. Neural Comput. Appl.25, 663–681 (2014). [Google Scholar]
  • 78.Mirjalili, S. & Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput.9, 1–14 (2013). [Google Scholar]
  • 79.Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. BGSA: binary gravitational search algorithm. Nat. Comput.9, 727–745 (2010). [Google Scholar]
