Abstract
This paper presents a new multi-objective optimization algorithm, the Multi-Objective Crested Porcupine Optimization (MOCPO) Algorithm, which uses an elitist non-dominated sorting and crowding distance mechanism. MOCPO is motivated by the predator-prey behavior of crested porcupines and builds on the recently proposed Crested Porcupine Optimizer. MOCPO is formulated to efficiently manage conflicting objectives in multi-objective optimization problems. Through the use of non-dominated sorting and crowding distance mechanisms, MOCPO promotes solution diversity and convergence towards the Pareto front. MOCPO employs a new Information Feedback Mechanism (IFM) and an enhanced solution updating strategy to improve convergence and diversity control. The performance of MOCPO is tested on a variety of benchmark problems, including the ZDT and DTLZ series, as well as real-world engineering design problems from the RWMOP suite. These test problems cover a variety of optimization characteristics: linear, nonlinear, continuous, and discrete. MOCPO's performance is compared with state-of-the-art algorithms, namely the Multi-Objective Gradient-Based Optimizer (MOGBO), Preference-inspired Differential Evolution (Pre-DEMO), the Multi-Objective Exponential Distribution Algorithm (MOEDO), the Pivot-solution-based Multi-Objective Evolutionary Algorithm (Pi-MOEA), and the Clustering-aided Grid-based Multi-Objective Evolutionary Algorithm (ClGrMOEA). Qualitative and quantitative analyses using standard performance metrics show the effectiveness of the algorithm. Experimental results verify that MOCPO provides substantial improvements in convergence and solution diversity, making it a viable choice for solving complex multi-objective optimization problems.
Keywords: Multi-objective optimization, Meta-heuristics, Crested porcupines optimizer, Engineering design, Real-world problems
Subject terms: Engineering, Mathematics and computing
Introduction
Optimization is crucial across a wide range of scientific and engineering disciplines where multiple conflicting goals must be traded off against one another. Multi-objective optimization algorithms (MOAs) aim to generate good-quality solutions approximating the Pareto-optimal front1. Existing methods, however, are plagued with problems like premature convergence, diversity loss, and difficulty in coping with high-dimensional objective spaces2. Addressing such issues requires innovative solutions that balance exploration and exploitation while maintaining diversity of solutions3.
Metaheuristic algorithms have been very effective in solving hard optimization problems. They can be broadly classified into trajectory-based methods, such as Simulated Annealing and Tabu Search, which follow the path of a single solution through the search space, and population-based methods, which use a population of agents to explore the space cooperatively and are less prone to getting stuck in local optima. Swarm intelligence (SI)-based methods, a category of population-based, nature-inspired methods, are particularly well suited because they are self-organizing and cooperative in nature, much like natural systems4. SI-based algorithms are developed from the observed behaviours of different species in nature, making them powerful models for solving complex problems. These methods rely on information exchange among population-level agents, which provides self-organization, co-evolution, and parallel exploration to enrich the search process5.
To enhance performance, efficiency, and reliability over a variety of applications, engineering optimization has developed as a fundamental area of study. Various advanced multi-objective algorithms have been devised to solve complex optimization issues. Under complex optimization problems, two-archive improved Multi-Objective Harris Hawks Optimization (MOHO) algorithm has shown efficacy in boosting convergence and solution diversity6. The benefits of utilizing many methods to achieve better performance have also been exemplified by the effective use of hybrid techniques, including the union of Response Surface Methodology (RSM), the multi-objective bat algorithm, and the VIKOR decision-making technique, to optimize engineering systems7. Another robust multi-objective optimization technique that yields efficient solutions to engineering design problems is the MORIME framework8. To enhance further robustness and reliability, reliability-based optimization techniques—such as the graylag goose algorithm—have been employed to ensure optimal solutions under uncertain conditions9.
In complex engineering applications, the 2-archive multi-objective cuckoo search algorithm has proved very promising in acquiring well-distributed Pareto-optimal solutions that aid decision-making10. In addition, the capacity of metaheuristic algorithms to deal with high numbers of competing goals has been validated through an exploration of multi-objective optimization strategies for multidimensional problems11. Further investigations into the MOHO algorithm on a multi-objective front have demonstrated its capability of yielding optimal solutions across engineering domains12. In addition, considering both mechanical and thermal constraints for improved performance, multi-objective thermal exchange optimization has introduced a new approach to engineering problem solutions13.
Algorithms based on SI can be further classified according to the behaviours of the species they emulate: insects, terrestrial animals, birds, and aquatic animals. The first category, derived from insect behavioural phenomena such as self-organisation and collective foraging, includes well-known algorithms such as Ant Colony Optimisation (ACO)14, Artificial Bee Colony (ABC)15, Best-so-far ABC16, Moth-Flame Optimisation (MFO)17, Grasshopper Optimisation Algorithm (GOA)18, Pity Beetle Algorithm (PBA)19, Butterfly Optimisation Algorithm (BOA)20, and Mayfly Algorithm (MA)21. The second category draws inspiration from terrestrial animals and focuses on behaviours like information sharing, group leadership, predatory skills, and searching for prey. These algorithms lead populations to the best areas of the search space by using social communication processes. This category includes, for example, Red Fox Optimisation (RFO)22, Squirrel Search Algorithm (SSA)23, Gorilla Troops Optimiser (GTO)24, Spotted Hyena Optimiser (SHO)25, Elephant Herding Optimisation (EHO)26, Lion Pride Optimiser (LPO)27, and Grey Wolf Optimiser (GWO)28.
The third group includes algorithms inspired by birds, mimicking behaviours such as social interactions, nesting, feeding, mating, migration, and predator defence.
The most widely used is PSO; others include the African Vulture Optimisation Algorithm (AVOA)29, Quantum-Based Avian Navigation Optimiser Algorithm (QANA)30, Bird Swarm Algorithm (BSA)31, Harris Hawk Optimiser (HHO)32, Golden Eagle Optimiser (GEO)33, Conscious Neighbourhood-Based Crow Search Algorithm (CCSA)34, and Artificial Hummingbird Algorithm (AHA)35. The fourth category comprises SI algorithms inspired by the collective movement of aquatic animals, covering behaviours such as social interaction, encircling prey, mating, migration, and food finding. A few well-known algorithms in this area are the Krill Herd (KH)36, Whale Optimisation Algorithm (WOA)37, Salp Swarm Algorithm (SSA)38, Yellow Saddle Goatfish Algorithm (YSGA)39, Sailfish Optimiser (SFO)40, and Jellyfish Search (JS)41.
There are also algorithms that are not driven by swarm intelligence but by biological processes. These focus on alternative optimization techniques rather than swarming behaviours. Some examples include differential evolution (DE), evolutionary strategies (ES), genetic algorithms (GA), gene expression programming, and biogeography-based optimization (BBO).
Another family of algorithms is inspired by physics and chemistry, drawing on concepts from these fields of study. Examples include, but are not limited to, Big Bang–Big Crunch (BB–BC)42, Henry Gas Solubility Optimisation (HGSO)43, Quantum HGSO (QHGSO)44, Arithmetic Optimisation Algorithm (AOA)45, Atom Search Optimisation (ASO)46, and Optics-Inspired Optimisation (OIO)47. A further class of algorithms is inspired by non-biological systems such as social or emotional systems. These include the Heap-Based Optimiser (HBO)48, the Volleyball Premier League Algorithm (VPL)49, the Mine Blast Algorithm (MBA)50, and the Imperialist Competitive Algorithm (ICA)51.
Despite their diversity, many of these optimisation methods fail to solve complex problems properly because they lack effective search techniques. Typical disadvantages are weak scalability and robustness, diversity loss, and an improper balance of exploration and exploitation. Effective optimization requires both: exploration spreads the search agents throughout the search space to find promising regions, while exploitation refines solutions within them. Since the strength of SI-based algorithms depends on a proper balance between exploration and exploitation, achieving this balance remains challenging. Several approaches have been proposed to address these concerns and to manage the complexity of optimisation problems.
The novel nature-inspired metaheuristic algorithm, namely the Multi-objective Crested Porcupine Optimiser (MOCPO), is introduced in this research. It is inspired by the protective behaviours of the crested porcupine (CP). The proposed approach is expected to excel in handling both real-world challenges and highly challenging optimisation problems. While there have been advancements in SI-based metaheuristics, existing algorithms are still not effective in solving complex multi-objective problems. The need for a dynamic optimizer that automatically updates its search strategies without compromising strong convergence and diversity led to the development of the Multi-Objective Crested Porcupine Optimization (MOCPO) Algorithm.
Inspired by crested porcupines' defence strategies, MOCPO integrates four disparate defence strategies (visual displays, auditory signals, scent release, and physical assault) to simulate adaptive search behavior. In contrast to predator-inspired metaheuristics, this prey-inspired approach enables MOCPO to navigate difficult search spaces with greater robustness and efficiency compared to the Multi-Objective Gradient-Based Optimizer (MOGBO)52, Preference-inspired Differential Evolution (Pre-DEMO)53, the Multi-Objective Exponential Distribution Algorithm (MOEDO)54, the Pivot-solution-based Multi-Objective Evolutionary Algorithm (Pi-MOEA)55, and the Clustering-aided Grid-based Multi-Objective Evolutionary Algorithm (ClGrMOEA)56.
Motivation for Developing MOCPO
Metaheuristic algorithms have significantly contributed to solving multi-objective optimization problems across numerous domains. However, existing algorithms like MOGBO, Pre-DEMO, MOEDO, Pi-MOEA, and ClGrMOEA still possess inherent limitations that restrict their performance in complex search spaces. The key challenges are:
Premature Convergence: Most algorithms find it difficult to maintain diversity and therefore converge prematurely to suboptimal solutions.
Loss of Diversity: It is difficult to maintain a diversified Pareto front in high-dimensional problems.
Exploration-Exploitation Imbalance: The existing methods overemphasize exploration and end up with inefficient searches or overemphasize exploitation and lead to local optima traps.
To address these shortcomings, we propose Multi-Objective Crested Porcupine Optimization (MOCPO), a novel bio-inspired algorithm based on the defence mechanisms of crested porcupines. Unlike predator-inspired algorithms, MOCPO uses a prey-inspired approach, which makes it highly robust to premature convergence and loss of diversity.
MOCPO's major contributions are as follows:
A new metaheuristic algorithm called MOCPO is proposed in this paper, inspired by the crested porcupines' defensive response to predators. The proposed algorithm models four defensive strategies exhibited by crested porcupines, namely visual displays, auditory warnings, scent releases, and physical confrontations, which escalate from the least to the most aggressive behaviour. The performance of MOCPO is benchmarked against three categories of optimization algorithms: recently developed metaheuristics, widely cited algorithms, and high-performance optimizers.
Real-world application results demonstrate the algorithm's effectiveness in addressing practical optimization challenges.
The structure of the paper is as follows: Sect. 2 presents the biological inspiration, mathematical modelling, and the development of the MOCPO algorithm. Section 3 presents the results and discussion for the experimental tests conducted on the benchmark problems. Section 4 analyses the application of the algorithm to real-world engineering design problems. Finally, in Sect. 5, the paper is concluded and open research directions are presented.
Crested porcupines
Motivation
Crested porcupines live in forests, deserts, rocky terrain, and hillsides, among other habitats. Their tails measure 8 to 10 inches, and their bodies range from 25 to 36 inches in length. They are big, bulky, slow-moving animals weighing from 12 to 35 pounds. Their most notable feature is the long quills that cover much of their body. Crested porcupines are nocturnal herbivores whose diet consists of leaves, shrubs, and other plant material. They often forage alone, travelling distances of up to nine miles, though they prefer to live in small family groups.
Their name, “crested porcupine,” is derived from the prominent quills found on their body, which they can raise into a crest-like display. What truly distinguishes crested porcupines from other species is their remarkable defensive behavior against predators. These animals face threats from formidable predators, demonstrating impressive tactics to defend themselves.
Crested Porcupine defensive strategies
Crested porcupines possess a variety of defence mechanisms that make them highly effective at self-protection. Their physical traits, which differ significantly from those of other animals, contribute to their reputation as formidable defenders. These defence strategies, which include the use of quills, odors, and sounds, are typically activated when the porcupines perceive a threat. Notably, crested porcupines adjust their defensive tactics according to the type and severity of the threat. Their responses to potential predators and rivals involve four distinct stages, each characterized by escalating aggression:
Visual defence: When a crested porcupine senses that a predator is near, it first raises its long, sharp quills to appear larger. One porcupine can have more than 30,000 quills, some of which reach up to 35 cm in length.
Auditory defence: The crested porcupine wards off predators with a series of vocalizations, including growls, snorts, grunts, and hisses. It also produces other sounds by stomping its feet, clicking its teeth, and rattling specialized quills on its tail that generate a strong hissing sound. These sounds grow louder as the predator closes in.
Odor defence: If the visual and auditory strategies fail, the porcupine may use a strong-smelling chemical secretion. This odor is produced by a gland on the lower back and spread by specially adapted quills. This tactic often succeeds in repelling predators.
Physical Attack: The strongest and most assertive defence mechanism is a physical attack. A porcupine's body is covered in quills, particularly on the rear, where the quills are relatively short, stubby, and very effective as a deterrent. If a predator persists despite the previous defences, the porcupine charges backward, impaling the attacker with its quills. These quills can become embedded in the predator's skin, potentially causing severe injury or death. Crested porcupines have been known to injure or even kill lions, leopards, hyenas, and humans with this defence.
The Multi-Objective Crested Porcupine Optimization (MOCPO) algorithm is inspired by crested porcupines’ defence mechanisms. These mechanisms—Visual, Auditory, Odor, and Physical Attack—are critical in the search process by maintaining exploration and exploitation balance. This is how each of them helps in optimization:
-
Visual Strategy (Exploration Phase - Sight).
The visual defence mechanism of the porcupine is to stand up on its quills to appear larger and frighten predators.
In MOCPO, this corresponds to searching distant regions of the search space. The algorithm uses a mathematical model to search widely and find good regions for further optimization.
Mathematically, this is achieved by updating positions with a scaling factor relative to the best global position, promoting a broad search in the solution space.
-
Auditory Strategy (Exploration Phase - Sound).
Crested porcupines produce loud noises—growls, hisses, and foot stomping—to stun predators.
In MOCPO, this is simulated by introducing perturbations into the search process to escape local optima and enhance diversity.
The algorithm perturbs solutions with random step-size factors to broaden the search, promoting exploration and preventing premature convergence.
-
Odor Strategy (Exploitation Phase - Odor).
When visual and auditory defences fail, porcupines release a foul-smelling chemical to repel predators.
This resembles a local refinement process in MOCPO, where small-magnitude perturbations are employed to refine solutions in promising areas.
It allows controlled exploitation by modifying the search process with slight adjustments to maintain diversity without drastic changes.
-
Physical Attack (Exploitation Phase - Direct Movement Toward Optimal Solutions).
The fourth defence mechanism is a porcupine physically attacking a predator and inserting quills into the aggressor.
In MOCPO, it is an active move towards optimal solutions. The algorithm implements explicit changes by pushing candidate solutions toward the locally best solutions obtained in the vicinity, improving the efficiency of convergence.
Exploration (Visual & Auditory): These ensure the search covers a broad solution space and does not get stuck in local optima.
Exploitation (Odor & Physical Attack): These refine solutions and accelerate convergence to optimal positions.
Adaptive Switching: The algorithm switches strategies dynamically based on probability values, ensuring an equilibrium between diversification (exploration) and intensification (exploitation) to improve optimization efficiency. By integrating these biological strategies, MOCPO can greatly enhance convergence, diversity, and robustness for multi-objective optimization problem solutions.
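The four defence-inspired moves and their probabilistic switching can be sketched as follows. This is a minimal, hypothetical Python sketch: the 0.5 thresholds and the exact step forms are illustrative assumptions, not the paper's precise update equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_position(x, x_best, x_rand):
    """Apply one of the four defence-inspired moves to a single solution."""
    if rng.random() < 0.5:                 # exploration half
        if rng.random() < 0.5:             # sight: broad move guided by the best
            return x + rng.random() * np.abs(2 * rng.random() * x_best - x)
        # sound: perturb toward a random peer to escape local optima
        return x + rng.random() * (x_rand - x)
    if rng.random() < 0.5:                 # odor: small local refinement
        return x + 0.1 * rng.standard_normal(x.shape)
    # physical attack: direct move toward the best solution found so far
    return x + rng.random() * (x_best - x)

x_new = update_position(rng.random(5), rng.random(5), rng.random(5))
print(x_new.shape)  # (5,)
```

Biasing the first split toward exploration early in the run and toward exploitation later would reproduce the adaptive switching described above.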
Due to their intelligence and adaptability, crested porcupines are highly effective at defending themselves, sometimes even against multiple predators simultaneously. These behaviours inspired a mathematical optimization model that mimics the crested porcupine defensive strategies to find optimal solutions within specific constraints. Unlike most optimization algorithms that are based on the predatory behaviours of animals, this model is unique in that it is inspired by the defensive tactics of prey. The following subsections will detail these processes and introduce the Crested Porcupine Optimization (CPO) algorithm.
Mathematical model
The Multi-Objective Crested Porcupine Algorithm (MOCPO) is an adaptation of the Crested Porcupine Optimizer (CPO), specifically designed to handle multi-objective optimization problems. The algorithm mimics the defensive behavior of crested porcupines, such as sight, sound, odor, and physical attack, to balance exploration and exploitation while solving optimization problems.
-
Problem Definition: The multi-objective optimization problem can be defined by Eq. (1):

Minimize F(x) = [f_1(x), f_2(x), ..., f_M(x)]^T    (1)

subject to the constraints as depicted in Eq. (2):

g_j(x) <= 0, j = 1, ..., J;  h_k(x) = 0, k = 1, ..., K    (2)

where x = (x_1, x_2, ..., x_D) is the design vector in D-dimensional space, f_m(x) (m = 1, ..., M) are the objective functions, and g_j(x) and h_k(x) are the inequality and equality constraints.
-
Population Representation: Each crested porcupine represents a candidate solution, as shown in Eq. (3):

X = {x_1, x_2, ..., x_N}    (3)

where N is the population size and x_i is the i-th solution vector. Objective evaluation is shown in Eq. (4):

F(x_i) = [f_1(x_i), f_2(x_i), ..., f_M(x_i)], i = 1, ..., N    (4)
-
Pareto Dominance: A solution x_a dominates x_b if f_m(x_a) <= f_m(x_b) for all m in {1, ..., M} and f_m(x_a) < f_m(x_b) for at least one m.
Non-Dominated Sorting: Organize solutions into Pareto fronts F_1, F_2, ..., with F_1 being the best front.
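The dominance relation and best-front extraction described above can be sketched directly (assuming minimization of all objectives; the naive O(N^2) scan is for illustration only):

```python
import numpy as np

def dominates(f_a, f_b):
    """True if f_a is no worse in every objective and strictly better in one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def non_dominated_front(F):
    """Indices of solutions not dominated by any other (the best front F1)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

print(dominates([1.0, 2.0], [1.5, 2.0]))                      # True
print(non_dominated_front([[1, 4], [2, 2], [4, 1], [3, 3]]))  # [0, 1, 2]
```

Repeatedly removing the current front and re-scanning yields the full sequence of fronts F_1, F_2, ... used by the selection step.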
Algorithm phases
Initialization
Randomly initialize the positions of N porcupines:

x_{i,j} = L_j + r × (U_j − L_j), i = 1, ..., N, j = 1, ..., D

where j is the variable index, r is a uniform random number in [0, 1], and L_j and U_j are the lower and upper bounds.
Cyclic population reduction
To enhance convergence and maintain diversity, the cyclic population reduction strategy adjusts the population size dynamically as per Eq. (5):
N' = N_min + (N − N_min) × (1 − (t mod T) / T)    (5)

where N' is the current population size, N is the initial population size, N_min is the minimum allowed population size, t is the current iteration, and T is the cyclic interval.
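Under the variable definitions above, the cyclic reduction can be sketched as follows; the linear within-cycle schedule is an assumption consistent with the described behaviour:

```python
def population_size(t, n_init, n_min, period):
    """Cyclic population reduction sketch: within each cycle of `period`
    iterations the population shrinks linearly from n_init toward n_min,
    then resets, periodically re-injecting diversity."""
    phase = (t % period) / period          # position within the current cycle
    return n_min + round((n_init - n_min) * (1 - phase))

sizes = [population_size(t, n_init=40, n_min=10, period=50) for t in range(100)]
print(sizes[0], sizes[49], sizes[50])
```

The periodic reset is what distinguishes this scheme from a monotone population decay: diversity is restored at the start of each cycle.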
Exploration phase
Exploration corresponds to the defensive strategies of sight and sound.
-
Sight: Simulates surveying distant regions, encouraging wide exploration as per Eq. (6):

x_i^{t+1} = x_i^t + τ_1 × |2r × x_best^t − x_i^t|    (6)

where τ_1 is the scaling factor, x_best^t is the global best position, and r is a random number in [0, 1].
-
Sound: Simulates creating noises to perturb predators and explore as per Eq. (7):

x_i^{t+1} = x_i^t + δ × (x_r^t − x_i^t)    (7)

where δ is the step-size factor and x_r^t is a randomly selected solution.
Exploitation phase
Exploitation corresponds to the odor and physical attack strategies: the odor strategy applies small-magnitude perturbations to refine solutions in promising regions as per Eq. (8), while the physical attack strategy moves candidate solutions directly toward the best solutions found so far as per Eq. (9), accelerating convergence.
Multi-objective handling
To manage multiple objectives:
Non-Dominated Sorting: Sort solutions into Pareto fronts.
- Crowding Distance: Maintain diversity within each front by calculating the crowding distance as per Eq. (10):

CD_i = Σ_{m=1}^{M} (f_m^{i+1} − f_m^{i−1}) / (f_m^{max} − f_m^{min})    (10)

where f_m^{i+1} and f_m^{i−1} are the m-th objective values of the neighbours of solution i when the front is sorted by objective m, and f_m^{max} and f_m^{min} are the maximum and minimum values of that objective.
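The crowding-distance computation can be sketched as follows (standard NSGA-II-style formulation, assuming minimization; `crowding_distance` is an illustrative helper name):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each solution in one front (rows = solutions,
    columns = objectives); boundary solutions get infinite distance so the
    extremes of the front are always preserved."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        # normalized gap between each solution's two neighbours along objective j
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

d = crowding_distance([[1, 4], [2, 2], [4, 1]])
print(d)  # extremes are inf, the middle solution gets a finite value
```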
Multi-objective crested porcupine optimization (MOCPO)
The MOCPO algorithm starts with a random population. Let t be the current generation and x_i^t the i-th individual at the t-th generation. Let u_i^{t+1} be the i-th individual at the (t+1)-th generation generated through the CPO algorithm from the parent population P^t, where P^t is the set of the x_i^t and the fitness value of x_i^t is f(x_i^t). Then x_i^{t+1} can be calculated from u_i^{t+1} generated through the CPO algorithm and the Information Feedback Mechanism (IFM) as per Eq. (11):

x_i^{t+1} = λ_1 × u_i^{t+1} + λ_2 × x_k^t    (11)

where x_k^t is the k-th individual chosen from the t-th generation, the fitness value of x_k^t is f(x_k^t), and λ_1 and λ_2 are weight coefficients. The offspring population Q^t, the set of the x_i^{t+1}, is then generated. The combined population R^t = P^t ∪ Q^t is sorted into different non-dominated levels (F_1, F_2, ..., F_w). Beginning from F_1, all individuals in levels 1 to l are added to S^t and the remaining members of R^t are rejected. If |S^t| = N, no other actions are required and the next generation is begun with P^{t+1} = S^t directly. Otherwise, the solutions in S^t \ F_l are included in P^{t+1}, and the remaining N − |S^t \ F_l| solutions are selected from F_l according to the Crowding Distance (CD) mechanism: solutions with larger crowding distances have a higher probability of selection. The termination condition is then checked; if it is not satisfied, the procedure repeats, with P^{t+1} used by the CPO algorithm to generate a new population, and if it is satisfied, P^{t+1} is returned. This selection strategy has a computational complexity of O(MN^2) for M objectives and population size N. MOCPO incorporates the proposed information feedback mechanism to effectively guide the search process, ensuring a balance between exploration and exploitation. This leads to improved convergence, coverage, and diversity preservation, which are crucial aspects of multi-objective optimization. The MOCPO algorithm does not require any new parameters beyond the usual CPO parameters, such as the population size and termination criterion, as shown in the pseudo-code of MOCPO in Algorithm 1 and the flow chart in Fig. 1.
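The IFM update of Eq. (11) can be sketched as below. The fitness-based weights shown are an assumed concrete choice for λ_1 and λ_2 (each solution weighted by the other's fitness under minimization, so the better solution contributes more); the paper's exact coefficient definitions may differ:

```python
import numpy as np

def ifm_update(u_next, x_prev, f_u, f_x):
    """Blend the CPO offspring u_next with a previous-generation individual
    x_prev using fitness-based weights (assumed form; minimization)."""
    total = f_u + f_x
    lam1 = f_x / total   # weight on the CPO offspring
    lam2 = f_u / total   # weight on the historical individual
    return lam1 * u_next + lam2 * x_prev

# Offspring with fitness 1.0 (better) vs. historical individual with fitness 3.0:
x_new = ifm_update(np.array([1.0, 1.0]), np.array([3.0, 3.0]), f_u=1.0, f_x=3.0)
print(x_new)  # [1.5 1.5] — pulled 3:1 toward the better offspring
```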
Fig. 1.
Flow chart of MOCPO algorithm.
Algorithm 1.
Generation t of MOCPO Algorithm with IFM Procedure.
The update rule for constructing a new solution in MOCPO is given by Eq. (12): the new position is a linear combination of the solution updated in the previous step and an additional component. Depending on their fitness, this model adjusts the influence of both current and previous solutions. Equation (13) is used for non-dominated sorting to classify solutions based on Pareto dominance: the population is divided into different non-dominated fronts by applying a non-dominated sorting approach. Equation (14) is used for next-generation selection: solutions are chosen directly if their total number equals the required population size N; otherwise, the last front is truncated to maintain the correct population size. This preserves diversity while ensuring that only the best performers survive. The crowding distance, a measure used to maintain diversity in the population, is calculated using Eq. (15) by selecting solutions spread across the objective space; it is computed from the normalized difference between adjacent fitness values.
Results and discussion
This section presents the findings of the experimental analysis conducted to assess the performance of the MOCPO algorithm. Every experiment was conducted independently 30 times with a population size of 40 and a maximum of 1000 iterations to ensure that the results obtained were statistically valid and reliable.
Four sets of benchmark test functions were used to evaluate the performance of MOCPO.
ZDT Benchmark Suite (Zitzler–Deb–Thiele): In this suite, the test functions, especially ZDT1 to ZDT4 and ZDT6, each have two objective functions, which makes them perfectly suitable for Pareto optimization tasks, which are most used in engineering57.
Benchmark Functions for DTLZ: The DTLZ suite is an important tool in handling the challenges of “many-objective” optimization because it is scalable and may be adapted to accommodate any number of objectives58.
WFG Benchmark test Functions: The non-separable problems, deceptive problems, truly degenerative problems and mixed shape Pareto front problems are thoroughly covered by WFG test function suits59.
The five real-world engineering design problems, drawn from practical engineering design, are the 3-bar truss (RW1), disc brake (RW2), speed reducer (RW3), welded beam (RW4), and cantilever beam (RW5)60,61. These problems form a useful framework for assessing how well MOCPO addresses actual technical problems.
In multi-objective optimization, accurately assessing the performance of algorithms is crucial for determining their efficiency and effectiveness. This study compared several advanced optimization algorithms using well-established performance metrics. The metrics employed in the evaluation include Hypervolume (HV), Generational Distance (GD), Inverted Generational Distance (IGD), Spread (SD), Spacing (SP), and Runtime (RT). Each of these indicators offers a unique perspective on algorithm performance, enabling a comprehensive comparison across multiple dimensions.
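As an illustration of how such metrics are computed, a minimal IGD implementation is sketched below (GD is the same computation with the roles of the two fronts swapped); the reference points are made-up values:

```python
import numpy as np

def igd(reference_front, obtained_front):
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest obtained solution (lower is better; reflects both
    convergence and coverage of the front)."""
    R = np.asarray(reference_front, dtype=float)
    A = np.asarray(obtained_front, dtype=float)
    # pairwise Euclidean distances, then the minimum over obtained solutions
    dists = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)
    return dists.min(axis=1).mean()

ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(ref, ref))  # 0.0 when the obtained front matches the reference exactly
```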
All algorithms under study were run 30 times independently to ensure a robust and fair assessment. This made it possible to carry out a thorough statistical analysis of their performance, taking potential variability into consideration and forming strong grounds for comparison. The Wilcoxon signed-rank test (WSRT) with the significance level set to 0.05 was used for the statistical analysis in this study. This non-parametric test was chosen for its efficiency in comparing paired samples and its robustness against non-normal distributions, making it well suited to comparing the performance of optimization algorithms.
The WSRT gave a panoramic view of how the algorithms performed in different test scenarios, revealing the conditions under which each algorithm excelled, faltered, or performed comparably to others. The study compared the newly proposed MOCPO algorithm with several well-known multi-objective optimization algorithms. Each of these algorithms has previously been recognized for its contribution to the field, and the comparison here evaluates the relative performance of MOCPO in this competitive landscape.
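A per-problem paired comparison of this kind can be run with SciPy's implementation of the Wilcoxon signed-rank test; the IGD values below are synthetic, illustrative data rather than the paper's results:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-run IGD values for two algorithms on one problem,
# paired by run (30 independent runs each, matching the experimental setup).
rng = np.random.default_rng(1)
igd_a = rng.normal(0.010, 0.001, size=30)   # e.g. the proposed algorithm
igd_b = rng.normal(0.014, 0.002, size=30)   # e.g. a competitor

# Paired, non-parametric test at the 0.05 significance level
stat, p = wilcoxon(igd_a, igd_b)
print(p < 0.05)  # True here: the medians differ significantly
```

Because the test ranks the paired differences rather than assuming normality, it remains valid for the skewed metric distributions that stochastic optimizers often produce.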
Performance on ZDT test functions
The ZDT benchmark functions are highly used in evaluating multi-objective optimization algorithms because they have different characteristics and levels of difficulty. Due to their properties, such as convex and non-convex Pareto fronts, these functions are suitable for testing the convergence and diversity of solutions generated by optimization algorithms.
Figure 2 shows the results of the multi-objective optimization algorithms on the ZDT benchmark functions. The ZDT series (ZDT1 through ZDT6) is well known in multi-objective optimization for benchmarking algorithms' performance in constructing Pareto fronts.
All six algorithms (MOCPO, MOGBO, PRE-DEMO, MOEDO, PI-MOEA, and CLGRMOEA) show a good approximation of the Pareto front on ZDT1. The obtained solution distribution lies very close to the true Pareto front, indicating strong convergence and diversity of the solution set. MOGBO shows a slightly more dispersed distribution along the Pareto front compared to MOCPO, while MOEDO and PI-MOEA converge while keeping diversity but may be less exact in terms of convergence.
The performance of the algorithms on ZDT2 is similar to that obtained on ZDT1. MOCPO, PI-MOEA, and CLGRMOEA converge well and track the true Pareto front closely. MOGBO and PRE-DEMO maintain good diversity but converge less precisely than the other algorithms.
The function ZDT3 is discontinuous, so algorithms must maintain diversity while converging towards several disconnected Pareto-optimal regions. MOCPO proves efficient and captures the different parts of the Pareto front. MOEDO, PI-MOEA, and CLGRMOEA also performed well but showed scattered points, especially at the edges of the true Pareto-front segments, which may indicate difficulties in exact convergence or maintaining diversity in those areas.
ZDT4 has many local Pareto fronts, making it challenging for algorithms to converge to the global Pareto front without getting trapped in local optima.
MOCPO works well and closely follows the true Pareto front without being trapped in local optima. MOGBO's solutions are more widely scattered, suggesting that this algorithm struggles with the ZDT4 landscape, perhaps getting stuck in local fronts or maintaining too much diversity.
ZDT6 has a very challenging objective space with non-uniformly distributed Pareto-optimal solutions. MOCPO handles this complexity well, as its solutions lie very close to the true Pareto front, showing excellent convergence and handling of non-uniformity. PI-MOEA also performs well, but there is a small deviation at the extremes of the Pareto front, which might indicate some problems in maintaining diversity in those regions. PRE-DEMO and MOGBO scatter their solutions much more than the others, revealing problems with the complex objective space, particularly in uniform convergence throughout the front.
Fig. 2.
Pareto fronts obtained for ZDT benchmark test functions by different MOOAs.
Overall, MOCPO exhibits the most consistent and robust performance across all ZDT functions, with strong convergence to the true Pareto front and good maintenance of diversity. MOEDO, PI-MOEA, CLGRMOEA, PRE-DEMO, and MOGBO demonstrate good performance but tend to struggle more with complex or discontinuous functions like ZDT3, ZDT4, and ZDT6, where convergence precision is slightly compromised. These results highlight the importance of selecting an appropriate algorithm based on the specific characteristics of the problem at hand, as some algorithms may perform better with certain types of objective spaces.
Performance on DTLZ test functions
Figure 3 illustrates the performance of several multi-objective optimization algorithms on the DTLZ benchmark functions, with a particular emphasis on the performance of the Multi-Objective Crested Porcupine Optimization (MOCPO) algorithm. The DTLZ series (DTLZ1 through DTLZ9) are well-established benchmarks used to evaluate the effectiveness of multi-objective optimization algorithms, particularly for problems with three or more objectives.
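As a concrete reference for the suite, the sketch below implements DTLZ1 from its standard published definition (it is not taken from the paper); on the true front, where the distance term g vanishes, the objectives sum to 0.5.

```python
import numpy as np

def dtlz1(x, m=3):
    """DTLZ1 (standard definition) for m objectives: the true Pareto front
    is the linear hyperplane sum(f) = 0.5, reached when g = 0."""
    x = np.asarray(x, dtype=float)
    xm = x[m - 1:]                      # distance-related variables
    g = 100.0 * (len(xm) + np.sum((xm - 0.5) ** 2
                                  - np.cos(20.0 * np.pi * (xm - 0.5))))
    f = np.full(m, 0.5 * (1.0 + g))
    for i in range(m):
        f[i] *= np.prod(x[:m - 1 - i])  # product of leading position variables
        if i > 0:
            f[i] *= 1.0 - x[m - 1 - i]
    return f

# With all distance variables at 0.5 the point lies exactly on the front.
f = dtlz1(np.array([0.3, 0.7] + [0.5] * 5), m=3)
```

The cosine term in g is what creates the many local Pareto fronts that trap weaker algorithms on DTLZ1 and DTLZ3.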
Fig. 3.
Pareto fronts obtained for DTLZ benchmark test functions by different MOOAs.
Figure 3 shows that MOCPO demonstrates outstanding performance on DTLZ1, with a near-perfect alignment with the actual Pareto front. The solutions are uniformly distributed across the entire objective space, reflecting MOCPO's strong convergence capabilities and ability to maintain diversity. MOCPO again excels on DTLZ2, characterized by a spherical Pareto front. The algorithm achieves an impressive spread of solutions, closely matching the actual Pareto front while maintaining diversity across the front. MOGBO and PRE-DEMO also perform admirably, but their solutions are slightly less aligned with the actual Pareto front than MOCPO's, indicating a minor gap in convergence precision.
DTLZ3 is a challenging problem with many local Pareto fronts. MOCPO performs very well, escaping the local optima and approximating the global Pareto front more accurately than the other algorithms considered here. MOGBO and PRE-DEMO perform well in terms of convergence but slightly worse in terms of diversity, especially when several local optima are present. MOCPO continues to excel on the DTLZ4 problem, which has a non-uniform distribution of solutions along the Pareto front; it handles this bias efficiently and produces a good distribution of solutions close to the true Pareto front. DTLZ5 and DTLZ6 have degenerate, lower-dimensional Pareto fronts, making them hard problems for most algorithms. MOCPO deals with these difficulties very effectively, maintaining a high level of convergence and diversity even for such complex front structures.
The disconnected Pareto front of DTLZ7 is particularly challenging to approximate. The performance of MOCPO is outstanding, as it was able to capture the characteristic of a disconnected Pareto front while maintaining an acceptable distribution of solutions.
MOCPO excels on DTLZ8 and DTLZ9, which involve objective redundancy and Pareto fronts that are difficult to approximate. Its solutions closely align with the true Pareto fronts, demonstrating both strong convergence and diversity.
Across the DTLZ functions, MOCPO consistently demonstrates the best performance, with exceptional convergence and diversity on a wide range of challenging benchmark functions. The complex, high-dimensional, and non-uniform Pareto fronts make the DTLZ suite a demanding testbed, and MOCPO stands out clearly within the comparison experiments. MOCPO showed considerable superiority over the competing algorithms MOEDO, MOGBO, PI-MOEA, PRE-DEMO, and CLGRMOEA across the entire set of DTLZ benchmark problems used.
Performance on WFG test functions
This test suite was designed to go beyond the capabilities of earlier test suites. Specifically, it exhaustively covers non-separable problems, deceptive problems, genuinely degenerate problems, and problems with mixed-shape Pareto fronts, together with problems scalable in both the number of objectives and the number of variables. It also includes problems with dependencies between position- and distance-related parameters.
Pareto front-based comparison of the algorithms under consideration
Figure 4 presents the Pareto fronts produced by the algorithms under comparison for the WFG test functions. The comparison of the Pareto fronts generated by the different algorithms demonstrates the superiority of MOCPO, which consistently covers the entire Pareto front while retaining a well-distributed set of solutions.
Fig. 4.
Pareto fronts generated by different algorithms under consideration for WFG test functions.
MOCPO produces a dense and continuous Pareto front on WFG1, WFG2, and WFG3, whereas MOEDO and Pi-MOEA show discontinuities in their solution distributions. The adaptive search approach of MOCPO ensures an even distribution of solutions and prevents premature convergence to poorer areas. WFG4 and WFG5 have highly nonlinear Pareto fronts and pose a formidable challenge to many algorithms; MOCPO shows strong convergence in these cases, closely tracking the true Pareto front with minimal divergence. On WFG6, with its deceptive front, MOCPO successfully avoids local optima, unlike Pre-DEMO, which shows scattered or incomplete Pareto solutions. The remaining WFG problems test an algorithm's ability to scale with increasing numbers of objectives and decision variables. MOCPO maintains diversity along the front well, ensuring that extreme solutions are well represented, while Pi-MOEA and ClGrMOEA struggle to maintain diversity, leading to uneven solution distributions. Overall, MOCPO efficiently approximates the true Pareto front on all WFG test problems, showing better convergence, diversity preservation, and computational efficiency; it significantly outperforms MOEDO and Pi-MOEA, which often fail to maintain a smooth and evenly distributed Pareto front.
Hypervolume-based comparison of different algorithms for WFG test functions
Figure 5 shows that MOCPO outperformed all competitor algorithms on all WFG test functions, consistently achieving the highest HV values. Significant improvements were observed on WFG1, WFG2, and WFG3, where MOCPO achieved scores of 7.7509e-1, 9.2783e-1, and 3.7970e-1, respectively, surpassing the second-best algorithm, Pi-MOEA. The p-values for HV on all WFG functions were extremely low (e.g., 1.80E-33 for WFG1 and 5.82E-39 for WFG2), demonstrating statistically significant differences.
Fig. 5.
Hypervolume boxplots for different algorithms under WFG test functions.
In WFG4, WFG5, and WFG6, MOCPO obtained higher HV values than ClGrMOEA and Pi-MOEA, which ranked second in some instances. Pi-MOEA and ClGrMOEA performed competitively in some test scenarios (e.g., WFG2 and WFG3) but were unable to match MOCPO's ability to balance convergence and diversity. The high HV scores confirm MOCPO's effectiveness in simultaneously maintaining diversity and convergence, and the comparatively low p-values show that the observed performance differences cannot be explained by randomness.
A hypervolume (HV) convergence plot is a fundamental tool for analyzing the rate at which an algorithm converges to a good Pareto front over many iterations. It provides information on the effectiveness, robustness, and final quality of the solutions generated by different multi-objective optimization techniques. MOCPO shows a rapid initial increase in HV, indicating quick convergence to high-quality solutions. MOEDO and Pi-MOEA show slower growth in hypervolume, indicating difficulties in early exploration and convergence. Pre-DEMO shows erratic HV trajectories, likely due to stagnation in local optima.
MOCPO stabilizes faster than the other algorithms, showing an efficient balance between exploration and exploitation. Pi-MOEA and ClGrMOEA require significantly more iterations to achieve similar HV values, which leads to higher computing costs. MOGBO and MOEDO improve steadily but do not reach the same HV levels as MOCPO, which achieves the highest HV values on the WFG test functions, exhibiting outstanding Pareto front convergence and distribution.
MOCPO generally shows higher convergence speed than competing methods while keeping the computational load low without compromising accuracy. From Fig. 6 it is evident that the HV curve for MOCPO follows a monotonic, continuously increasing path, whereas the other curves show more variance. The narrow range of HV values for MOCPO supports its reliability and strength in handling diverse optimization spaces. MOCPO records the highest efficacy and stability in hypervolume convergence across all WFG problems, highlighting its ability to quickly approximate good Pareto fronts. Its fast convergence, along with high final HV values, makes it a strong candidate for solving complex multi-objective optimization problems. The results illustrate that MOCPO outperforms state-of-the-art algorithms in both convergence speed and solution quality, making it highly useful for real-world applications.
Fig. 6.
Convergence plots of hypervolume by different algorithms under consideration for WFG test functions.
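An HV convergence curve of the kind shown in Fig. 6 is produced by evaluating the hypervolume of the current archive at regular checkpoints. The sketch below uses a simple exact sweep for the two-objective minimization case with a hypothetical reference point; the real WFG comparisons use three objectives, where dedicated HV algorithms are needed, so this is an illustration of the bookkeeping only.

```python
import numpy as np

def hv_2d(front, ref):
    """Exact hypervolume of a 2-objective minimization front w.r.t. a
    reference point, by sweeping the front sorted on the first objective."""
    pts = np.asarray(front, dtype=float)
    pts = pts[np.all(pts <= ref, axis=1)]   # keep only points dominating ref
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # skip points dominated on the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# A convergence curve is hv_2d evaluated on the archive at each checkpoint
# (illustrative archive snapshots, not the paper's data):
history = [[[0.9, 0.9]],
           [[0.5, 0.6], [0.6, 0.5]],
           [[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]]]
curve = [hv_2d(f, ref=[1.0, 1.0]) for f in history]
```

A healthy optimizer produces a monotonically increasing curve like `curve`, which is exactly the behavior attributed to MOCPO above.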
Table 1.
Hypervolume obtained by different algorithms under consideration for WFG test functions.
| Problem | M | D | MOGBO | Pre-DEMO | MOEDO | Pi-MOEA | ClGrMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| WFG1 | 3 | 12 | 6.4058e-1 ± 3.98e-2 | 6.9739e-1 ± 5.30e-2 | 6.4092e-1 ± 4.04e-2 | 7.6039e-1 ± 3.38e-2 | 7.3648e-1 ± 4.08e-2 | 7.7509e-1 ± 4.14e-2 |
| WFG2 | 3 | 12 | 9.0359e-1 ± 1.94e-2 | 9.1391e-1 ± 5.16e-3 | 9.1056e-1 ± 5.40e-3 | 9.2500e-1 ± 3.36e-3 | 9.0358e-1 ± 6.04e-3 | 9.2783e-1 ± 3.32e-3 |
| WFG3 | 3 | 12 | 3.4197e-1 ± 1.28e-2 | 3.4695e-1 ± 9.45e-3 | 3.5522e-1 ± 8.84e-3 | 3.7453e-1 ± 6.65e-3 | 3.7642e-1 ± 9.15e-3 | 3.7970e-1 ± 5.64e-3 |
| WFG4 | 3 | 12 | 5.0799e-1 ± 4.61e-3 | 5.3138e-1 ± 3.21e-3 | 5.3184e-1 ± 2.37e-3 | 5.3724e-1 ± 3.05e-3 | 5.0135e-1 ± 5.91e-3 | 5.3887e-1 ± 3.68e-3 |
| WFG5 | 3 | 12 | 4.8786e-1 ± 3.66e-3 | 5.0664e-1 ± 3.89e-3 | 5.0598e-1 ± 3.87e-3 | 5.0899e-1 ± 4.07e-3 | 4.7866e-1 ± 4.83e-3 | 5.0989e-1 ± 3.94e-3 |
| WFG6 | 3 | 12 | 4.5655e-1 ± 1.70e-2 | 4.8331e-1 ± 1.47e-2 | 4.7657e-1 ± 1.52e-2 | 4.8845e-1 ± 1.40e-2 | 4.4626e-1 ± 1.51e-2 | 4.8952e-1 ± 1.45e-2 |
| WFG7 | 3 | 12 | 5.1848e-1 ± 4.75e-3 | 5.3196e-1 ± 3.54e-3 | 5.3506e-1 ± 2.81e-3 | 5.4482e-1 ± 1.56e-3 | 5.0880e-1 ± 7.29e-3 | 5.4720e-1 ± 2.46e-3 |
| WFG8 | 3 | 12 | 4.3183e-1 ± 5.97e-3 | 4.4921e-1 ± 3.73e-3 | 4.4215e-1 ± 4.64e-3 | 4.5152e-1 ± 3.66e-3 | 4.2385e-1 ± 6.82e-3 | 4.4930e-1 ± 3.55e-3 |
One of the most critical performance measures for multi-objective optimization techniques is hypervolume (HV). It measures the portion of the objective space covered by a set of Pareto-optimal solutions; a higher HV reflects better convergence and diversity. Table 1 presents the hypervolume results for the WFG test suite. On WFG1, Pi-MOEA and MOCPO scored higher than Pre-DEMO, which in turn scored better than MOGBO and MOEDO; from these differences, MOCPO and Pi-MOEA appear to have better diversity preservation and convergence. On WFG2, MOCPO (0.9278 ± 0.0033) performed best, followed by Pi-MOEA (0.9250 ± 0.0034). On WFG3, MOCPO again demonstrated superior performance (0.3797 ± 0.0056), with ClGrMOEA second (0.3764 ± 0.0092).
On WFG4 and WFG5, MOCPO achieved the highest HV (0.5389 ± 0.0037 and 0.5099 ± 0.0039), with Pi-MOEA a close second (0.5372 ± 0.0031 and 0.5090 ± 0.0041). On WFG6 (0.4895 ± 0.0145) and WFG7 (0.5472 ± 0.0025), MOCPO again dominated the other algorithms. On WFG8, Pi-MOEA (0.4515 ± 0.0037) reached the highest HV, with MOCPO (0.4493 ± 0.0036) and Pre-DEMO close behind. Overall, MOCPO achieved the best or near-best HV on every test function, establishing its robustness.
Table 2 presents the p-values and mean ranks of the algorithms under consideration for the WFG test suite. Ranking and p-values were employed in the statistical analysis to further confirm the HV results; in Table 2, higher mean ranks correspond to higher HV and hence better performance. MOCPO achieved the highest rank on nearly all WFG problems, which agrees with the HV results and shows that the best Pareto front approximation is achieved by MOCPO. Pi-MOEA ranked second on most problems, suggesting that it is competitive, although slightly less capable than MOCPO. The p-values in the table, such as 1.80E-33 for WFG1 and 5.82E-39 for WFG2, are extremely low; they show that the observed HV differences between the algorithms are statistically significant, affirming that the superior performance of MOCPO and Pi-MOEA is not due to chance. MOCPO attained the maximum HV and the highest rank, reflecting the best performance on the WFG problems, and this dominance was further substantiated through the statistical significance analysis.
Table 2.
P-values and rank comparison for different algorithms under the WFG test suite.
| Problem | M | D | MOGBO | Pre-DEMO | MOEDO | Pi-MOEA | ClGrMOEA | MOCPO | P VALUES |
|---|---|---|---|---|---|---|---|---|---|
| WFG1 | 3 | 12 | 1.6667 | 3.1667 | 1.7917 | 5 | 4.2083 | 5.1667 | 1.80E-33 |
| WFG2 | 3 | 12 | 2.0208 | 3.4792 | 2.8542 | 5.2917 | 1.6875 | 5.6667 | 5.82E-39 |
| WFG3 | 3 | 12 | 1.5 | 1.9792 | 2.7708 | 4.6042 | 4.7917 | 5.3542 | 3.95E-37 |
| WFG4 | 3 | 12 | 1.7917 | 3.6042 | 3.625 | 5.1667 | 1.2083 | 5.6042 | 1.12E-43 |
| WFG5 | 3 | 12 | 1.9792 | 4.1458 | 3.8333 | 4.9167 | 1.0208 | 5.1042 | 2.70E-38 |
| WFG6 | 3 | 12 | 2.1042 | 4.2292 | 3.6875 | 4.7292 | 1.5 | 4.75 | 1.14E-26 |
| WFG7 | 3 | 12 | 1.9375 | 3.1667 | 3.7917 | 5.2083 | 1.1042 | 5.7917 | 4.84E-47 |
| WFG8 | 3 | 12 | 1.875 | 4.7292 | 3.125 | 5.375 | 1.2083 | 4.6875 | 6.12E-41 |
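Rank-and-p-value tables of this kind are commonly produced with a Friedman test over paired runs. The sketch below illustrates the idea on hypothetical HV samples (the three algorithms and all values are illustrative, not the paper's data); ranks are assigned within each run so that higher HV yields a higher rank, matching the convention of Tables 1 and 2.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical HV samples: rows = paired runs, columns = algorithms A, B, C.
rng = np.random.default_rng(0)
hv = np.column_stack([
    rng.normal(0.90, 0.01, 24),   # algorithm A (illustrative)
    rng.normal(0.92, 0.01, 24),   # algorithm B (illustrative)
    rng.normal(0.93, 0.01, 24),   # algorithm C (illustrative)
])

# Mean rank per algorithm: rank within each run (higher HV = higher rank),
# then average the ranks over the runs.
mean_ranks = rankdata(hv, axis=1).mean(axis=0)

# Friedman test on the paired runs; a tiny p-value means the observed rank
# differences are statistically significant rather than random.
stat, p = friedmanchisquare(hv[:, 0], hv[:, 1], hv[:, 2])
```

With clearly separated HV distributions the p-value collapses toward zero, which is why values such as 1.80E-33 in Table 2 rule out chance as an explanation.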
Comparison of algorithms based on inverted generational distance for the WFG test suite
The Inverted Generational Distance (IGD) is a widely used metric for evaluating the quality of the Pareto front approximations produced by multi-objective optimization algorithms. Lower IGD values indicate better convergence and diversity of the Pareto front approximation. The box plots in Fig. 7 compare the IGD values of the algorithms on the WFG1 to WFG8 test functions.
Fig. 7.
Box plots of IGD values by different algorithms under consideration for the WFG test suite.
The following algorithms are evaluated: MOGBO, MOEDO, Pre-DEMO, Pi-MOEA, ClGrMOEA, and MOCPO. Each box plot shows the distribution of IGD values over several runs of each algorithm, along with the median, quartiles, and outliers (shown as red crosses). A function-wise discussion is provided below.
WFG1 findings: There is a noticeable difference in IGD values between the methods. The comparatively low median IGD values for MOCPO, Pi-MOEA, and Pre-DEMO indicate good performance, while the greater IGD values of MOGBO and MOEDO indicate poor convergence. Best-performing algorithms: Pi-MOEA and MOCPO, with the lowest IGD values. Worst-performing algorithms: MOGBO and MOEDO, whose higher variability and median IGD reflect weaker convergence.
WFG2 results: Most of the algorithms show comparatively low IGD values. With smaller interquartile ranges and lower median IGD values, MOCPO and Pi-MOEA demonstrate consistently high performance. Best-performing algorithms: MOCPO and Pi-MOEA, with outstanding IGD values. Worst-performing algorithm: MOGBO, evident from its greater IGD values.
WFG3 results: There are significant differences in the distribution of IGD among the algorithms. While MOGBO and MOEDO show more widely spread values, MOCPO and Pi-MOEA have smaller IGD values. Best-performing algorithm: MOCPO, with the lowest IGD values. Worst-performing algorithms: MOGBO and MOEDO, which exhibit greater IGD values with considerable variability.
WFG4 results: A visible difference in performance is present. IGD values are consistently greater for MOGBO and MOEDO and lower for MOCPO and Pi-MOEA. Best-performing algorithms: MOCPO and Pi-MOEA. Worst-performing algorithms: MOGBO and MOEDO, as revealed by their larger IGD values.
WFG5 results: Pre-DEMO and MOGBO achieve relatively higher IGD values, but most algorithms are quite similar. MOGBO has a broader range of IGD values and converges poorly. Best-performing algorithms: Pi-MOEA and MOCPO. Worst-performing algorithm: MOGBO, which converges poorly.
WFG6 observations: There is a clearer performance gap among the algorithms. While MOEDO and MOGBO are more volatile, Pi-MOEA and MOCPO maintain lower IGD values. Best-performing algorithms: Pi-MOEA and MOCPO. Worst-performing algorithms: MOGBO and MOEDO, with higher IGD values.
WFG7 results: Pi-MOEA and MOCPO are the best-performing algorithms according to their IGD values, while MOEDO and MOGBO show considerable oscillation and poor performance.
WFG8 results: There is significant variation in the IGD values. MOGBO and MOEDO exhibit high variability, whereas Pi-MOEA and MOCPO provide lower IGD values. Best-performing algorithms: Pi-MOEA and MOCPO. Worst-performing algorithms: MOGBO and MOEDO.
The IGD box plot outcomes reveal the strengths and weaknesses of the different approaches on the WFG test problems. Some key findings are:
The MOCPO algorithm exhibits the best convergence and diversity preservation.
Though Pi-MOEA has a slightly higher variance in IGD in some cases, it too does well on most of the WFG functions.
Generally, MOEDO and MOGBO perform poorly; their larger dispersion and higher IGD values suggest that they have difficulty maintaining convergent and diverse solutions.
Based on the IGD performance, MOCPO and Pi-MOEA are the most reliable choices for solving the WFG test functions.
Figure 8 is a plot of the IGD (Inverted Generational Distance) convergence of the algorithms MOGBO, MOEDO, Pre-Demo, Pi-MOEA, ClGrMOEA, and MOCPO on the WFG1 to WFG8 test cases. The x-axis represents the number of function evaluations, while the y-axis represents the values of IGD. Lower values of IGD represent better convergence and diversity of solutions. A discussion of the convergence trends thus observed follows below:
Fig. 8.
Convergence plot of algorithms under consideration for WFG test functions.
All algorithms show a decreasing IGD trend over the function evaluations, indicating improved convergence with more iterations. MOCPO and Pi-MOEA consistently show better performance, as their IGD values decrease more rapidly and remain lower than those of the other algorithms. In contrast, MOEDO and MOGBO often show slower convergence and higher final IGD values, indicating weaker solution diversity and convergence. The convergence charts reveal MOCPO and Pi-MOEA as the most reliable methods for achieving convergent and well-spread approximations to the Pareto front; their counterparts face substantial challenges, with high IGD values and slow convergence on these test cases. ClGrMOEA and Pre-DEMO perform reasonably but cannot outpace the best performers. MOCPO and Pi-MOEA show the strongest decrease in IGD values in the early iterations, suggesting the best exploration-exploitation balance. Pre-DEMO and ClGrMOEA show relatively steady, though slightly slower, convergence compared with the best-performing algorithms. MOGBO and MOEDO retain high IGD values throughout, suggesting that these algorithms either struggle with proper early exploration or cannot quickly find high-quality solutions.
Effective multi-objective optimizers need to demonstrate an early, rapid decline in IGD values, a property well demonstrated by MOCPO and Pi-MOEA. MOEDO and MOGBO, on the other hand, lack initial momentum and thus perform poorly. MOCPO and Pi-MOEA have consistent convergence profiles with smooth curves, indicating strong performance across runs. ClGrMOEA and Pre-DEMO show occasional fluctuations but retain decent stability.
MOGBO and MOEDO display fluctuating IGD values on some test functions, reflecting unstable convergence behavior; they either converge prematurely to local optima or fail to maintain solution diversity. These results affirm that MOCPO and Pi-MOEA are the best choices for solving the WFG test problems, while MOGBO and MOEDO require substantial improvements in convergence speed and solution robustness.
Table 3 shows the Inverted Generational Distance (IGD) results (mean ± standard deviation) of the six multi-objective optimizers—MOGBO, Pre-DEMO, MOEDO, Pi-MOEA, ClGrMOEA, and MOCPO—on the WFG1–WFG8 test functions. The IGD metric measures how good the achieved Pareto front approximation is, with smaller values reflecting better performance.
Table 3.
IGD values of different algorithms under consideration for the WFG test suite.
| Problem | M | D | MOGBO | Pre-DEMO | MOEDO | Pi-MOEA | ClGrMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| WFG1 | 3 | 12 | 6.6257e-1 ± 7.08e-2 | 6.6042e-1 ± 7.58e-2 | 4.3978e-1 ± 5.30e-2 | 5.0284e-1 ± 7.01e-2 | 4.3066e-1 ± 6.67e-2 | 5.5056e-1 ± 9.27e-2 |
| WFG2 | 3 | 12 | 1.8630e-1 ± 4.55e-2 | 1.6967e-1 ± 4.82e-3 | 1.7245e-1 ± 4.55e-3 | 2.3643e-1 ± 1.64e-2 | 1.8135e-1 ± 4.85e-3 | 1.6857e-1 ± 4.64e-3 |
| WFG3 | 3 | 12 | 1.8344e-1 ± 2.59e-2 | 1.5794e-1 ± 1.87e-2 | 1.1227e-1 ± 1.15e-2 | 1.3175e-1 ± 2.36e-2 | 1.0196e-1 ± 1.07e-2 | 1.7057e-1 ± 1.69e-2 |
| WFG4 | 3 | 12 | 2.6452e-1 ± 1.02e-2 | 2.3007e-1 ± 2.15e-3 | 2.3382e-1 ± 3.99e-3 | 2.9336e-1 ± 1.29e-2 | 2.3449e-1 ± 5.37e-3 | 2.2925e-1 ± 1.77e-3 |
| WFG5 | 3 | 12 | 2.6220e-1 ± 6.58e-3 | 2.3632e-1 ± 1.77e-3 | 2.3891e-1 ± 3.40e-3 | 2.9870e-1 ± 1.26e-2 | 2.4218e-1 ± 5.52e-3 | 2.3362e-1 ± 1.10e-3 |
| WFG6 | 3 | 12 | 3.0768e-1 ± 1.78e-2 | 2.7049e-1 ± 1.53e-2 | 2.6816e-1 ± 1.44e-2 | 3.4601e-1 ± 2.11e-2 | 2.6551e-1 ± 1.39e-2 | 2.6387e-1 ± 1.41e-2 |
| WFG7 | 3 | 12 | 2.5871e-1 ± 7.40e-3 | 2.3015e-1 ± 1.69e-3 | 2.3131e-1 ± 3.25e-3 | 2.9665e-1 ± 1.15e-2 | 2.3155e-1 ± 4.20e-3 | 2.3257e-1 ± 3.09e-3 |
| WFG8 | 3 | 12 | 3.4113e-1 ± 9.19e-3 | 3.2053e-1 ± 7.30e-3 | 3.1578e-1 ± 5.31e-3 | 3.8637e-1 ± 1.41e-2 | 3.1191e-1 ± 5.13e-3 | 3.1111e-1 ± 5.46e-3 |
MOCPO and MOEDO frequently achieve the lowest IGD values over a variety of WFG problems, which indicates better approximation of the Pareto front. ClGrMOEA performs well, particularly for WFG2, WFG3, and WFG8.
MOGBO consistently shows poor performance on all problems, often ranking among the worst performers.
MOCPO is highly stable, as reflected by its low standard deviations.
Table 4 shows the mean ranks and p-values for the IGD performance of the six multi-objective optimization algorithms—MOGBO, Pre-DEMO, MOEDO, Pi-MOEA, ClGrMOEA, and MOCPO—on the WFG1 to WFG8 test functions. The rank values indicate the relative effectiveness of each method, with lower ranks indicating better IGD performance. The p-values from statistical hypothesis testing assess the significance of the observed differences.
Table 4.
Rank and p-value comparison for different algorithms under the WFG test suite.
| Problem | M | D | MOGBO | Pre-DEMO | MOEDO | Pi-MOEA | ClGrMOEA | MOCPO | P VALUES |
|---|---|---|---|---|---|---|---|---|---|
| WFG1 | 3 | 12 | 5.2917 | 5.2292 | 1.9375 | 2.9583 | 1.8542 | 3.7292 | 8.12E-33 |
| WFG2 | 3 | 12 | 4.1458 | 1.8958 | 2.6875 | 5.9792 | 4.4583 | 1.8333 | 4.47E-38 |
| WFG3 | 3 | 12 | 5.4792 | 4.2292 | 2.0833 | 2.9167 | 1.3542 | 4.9375 | 5.33E-38 |
| WFG4 | 3 | 12 | 5.0208 | 1.9792 | 3.1667 | 5.9792 | 3.125 | 1.7292 | 4.97E-40 |
| WFG5 | 3 | 12 | 4.9583 | 2.3125 | 3 | 6 | 3.5625 | 1.1667 | 6.40E-44 |
| WFG6 | 3 | 12 | 4.8125 | 2.7917 | 2.5833 | 5.9792 | 2.5 | 2.3333 | 1.87E-32 |
| WFG7 | 3 | 12 | 5 | 1.8958 | 2.6875 | 6 | 2.5417 | 2.875 | 9.33E-37 |
| WFG8 | 3 | 12 | 4.9583 | 3.4167 | 2.7708 | 6 | 1.9583 | 1.8958 | 3.62E-39 |
MOCPO consistently achieves the lowest rank (best IGD performance) on WFG2, WFG4, WFG5, and WFG8, making it the best method overall. ClGrMOEA also achieves low ranks, particularly on WFG1, WFG3, and WFG8, signifying strong convergence.
Pi-MOEA consistently receives the worst rankings, i.e., 6 or nearly 6 on WFG4, WFG5, WFG7, and WFG8, signifying poor Pareto front approximation in terms of IGD. MOGBO also performs poorly, frequently ranking among the worst performers, notably on WFG3, WFG6, and WFG7.
The p-values for all problems are extremely small (below 1.0E-30), meaning that the performance differences between the algorithms are statistically significant; the observed ranking differences are thus not explained by random chance but by the intrinsic search mechanisms of the algorithms involved.
MOCPO is the leading algorithm, consistently attaining the lowest IGD ranks across the test problems. ClGrMOEA is a strong contender, performing very well on several problems, notably WFG1, WFG3, and WFG8. Pre-DEMO and MOEDO show moderate but inconsistent performance across the WFG problems. MOGBO frequently exhibits inadequate convergence to the Pareto front. Pi-MOEA is the least effective in terms of IGD, consistently ranking at the bottom on several test functions.
Table 5 compares the runtime (in seconds) of the six multi-objective optimization algorithms—MOGBO, Pre-DEMO, MOEDO, Pi-MOEA, ClGrMOEA, and MOCPO—on the WFG1 to WFG8 test problems. Low runtime values indicate higher computational efficiency, while high values indicate higher computational complexity or slow convergence.
MOCPO achieves the lowest runtime on all test problems except WFG8, indicating that it is highly computationally efficient. ClGrMOEA shows the greatest computational expense, with significantly longer runtimes, especially on WFG3 (22.8 s), WFG5 (32.1 s), and WFG7 (22.8 s). Pre-DEMO exhibits commendable efficiency, followed by MOGBO and MOEDO, while Pi-MOEA requires considerably more time.
Table 5.
Runtime comparison of different algorithms under consideration for the WFG test suite.
| Problem | M | D | MOGBO | Pre-DEMO | MOEDO | Pi-MOEA | ClGrMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| WFG1 | 3 | 12 | 2.37e+00 | 1.19e+00 | 1.91e+00 | 3.96e+00 | 8.21e+00 | 7.61e-01 |
| WFG2 | 3 | 12 | 1.81e+00 | 1.09e+00 | 2.11e+00 | 5.30e+00 | 1.23e+01 | 6.72e-01 |
| WFG3 | 3 | 12 | 1.88e+00 | 1.12e+00 | 2.74e+00 | 7.61e+00 | 2.28e+01 | 6.24e-01 |
| WFG4 | 3 | 12 | 1.74e+00 | 1.24e+00 | 2.64e+00 | 6.87e+00 | 1.80e+01 | 6.96e-01 |
| WFG5 | 3 | 12 | 1.85e+00 | 1.11e+00 | 2.58e+00 | 7.21e+00 | 3.21e+01 | 6.77e-01 |
| WFG6 | 3 | 12 | 1.79e+00 | 1.01e+00 | 2.09e+00 | 5.39e+00 | 1.43e+01 | 5.80e-01 |
| WFG7 | 3 | 12 | 1.85e+00 | 1.05e+00 | 2.86e+00 | 7.77e+00 | 2.28e+01 | 5.93e-01 |
| WFG8 | 3 | 12 | 2.00e+00 | 1.13e+00 | 1.99e+00 | 4.29e+00 | 1.76e+01 | 1.39e+01 |
MOCPO strikes a good balance between exploration and exploitation, requiring fewer function evaluations to converge. Pre-DEMO is a computationally efficient optimizer, albeit marginally slower than MOCPO. MOEDO has mid-level computational efficiency, less efficient than MOCPO or Pre-DEMO. MOGBO suffers from lower computational efficiency, presumably because of its exploration-exploitation trade-off. Pi-MOEA incurs significant computational costs, rendering it less suitable for large-scale or time-critical applications.
MOCPO is the best algorithm for this suite, with minimal runtime on nearly all problems considered. Pre-DEMO comes next in computational efficiency and is therefore quite viable as well. MOEDO and MOGBO show middling runtime performance, neither very fast nor slow. Pi-MOEA has high computational costs, making it less efficient than MOCPO and Pre-DEMO. ClGrMOEA is the least efficient algorithm, requiring significantly more execution time, especially on WFG3, WFG5, and WFG7.
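Runtime comparisons of this kind reduce to careful wall-clock measurement. The sketch below shows one common pattern (best-of-several repeats with `time.perf_counter` to damp system noise); the solver stand-in is hypothetical and not the paper's benchmark harness.

```python
import time

def time_solver(solver, repeats=5):
    """Wall-clock runtime of one optimizer run, in seconds: the minimum
    over several repeats filters out OS scheduling noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        solver()
        best = min(best, time.perf_counter() - t0)
    return best

# Hypothetical stand-in for one optimizer run on a WFG problem.
runtime = time_solver(lambda: sum(i * i for i in range(10_000)))
```

Reporting the minimum (or mean over many independent runs, as in Table 5) matters because a single timing can be inflated by unrelated system activity.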
Performance metrics for multi-objective optimization (MOO) algorithms
To fully analyze the results of this research, six performance metrics were selected carefully: SP, SD, GD, HV, IGD, and RT. Each offers a different perspective on algorithm performance, allowing a comprehensive evaluation of the optimization process. In combination, these measures assess an algorithm's computational viability, convergence efficiency, and diversity of solutions.
-
Generational Distance (GD): GD is the average distance between the algorithm's solutions and the nearest Pareto-front member, which is an indicator of convergence. The lower the value of GD, the closer the solutions are to the optimal Pareto front.

$$GD = \frac{1}{n}\sum_{i=1}^{n} d_i \tag{16}$$

$$d_i = \min_{j}\sum_{m=1}^{M}\left|f_m(x_i) - f_m\!\left(x_j^{*}\right)\right| \tag{17}$$

where $n$ is the number of obtained solutions and $d_i$ is the smallest sum of the absolute differences between solution $x_i$ and the nearest true Pareto-front solution $x_j^{*}$ across all $M$ objective functions; stated otherwise, it is the separation between a solution in the obtained Pareto front and its nearest counterpart in the true Pareto front.
-
Inverted Generational Distance (IGD): The IGD metric offers an alternative perspective to Generational Distance (GD) by calculating the average distance from points on the true Pareto front to the nearest solution obtained by the algorithm. A smaller IGD value signifies improved algorithm performance in terms of both convergence and diversity.

$$IGD = \frac{1}{n'}\sum_{i=1}^{n'} d_i' \tag{18}$$

where $n'$ is the number of points on the true Pareto front and $d_i'$ represents the distance between a solution in the true Pareto front and its closest counterpart in the obtained Pareto front.
-
Spacing (SP): SP evaluates the uniformity of spacing between adjacent solutions along the Pareto front. It ensures that solutions are evenly distributed, minimizing clustering or large gaps. Lower SP values indicate better uniformity in the distribution of solutions.

$$SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d} - d_i\right)^{2}} \tag{19}$$

where $n$ is the number of solutions in the obtained Pareto front, $d_i$ are the Euclidean distances between consecutive solutions in the objective space, and $\bar{d}$ is their mean.
-
Spread (SD): The SD metric measures how evenly the solutions cover the range of the Pareto front. It reflects the spread and balance of solutions within the objective space. A lower SD value indicates a more uniform distribution of solutions.

$$SD = \frac{d_f + d_l + \sum_{i=1}^{n-1}\left|d_i - \bar{d}\right|}{d_f + d_l + (n-1)\,\bar{d}} \tag{20}$$

where $d_f$ is the distance from the first solution to the reference (extreme) point in the objective space, $d_l$ is the distance from the last solution to the reference point, $d_i$ is the Euclidean distance between consecutive solutions, and $\bar{d}$ is the mean of these distances. This helps to assess the evenness of the distribution of solutions along the Pareto front.
-
Hypervolume (HV): HV evaluates the volume encompassed by the Pareto front within the objective space. It considers both the diversity and quality of solutions. A higher HV value implies a Pareto front that covers a larger portion of the objective space, indicating better performance. The nadir point $s^{nadir}$ in multi-objective optimization represents the worst objective values among Pareto-optimal solutions. It is defined as:

$$s^{nadir}_m = \max_{s \in P} f_m(s), \quad m = 1, \ldots, M \tag{21}$$

where $f_m(s)$ is the value of the $m$-th objective function for solution $s$ in the Pareto front $P$.

$$HV = \lambda\left(\bigcup_{s \in P}\left[s, s^{nadir}\right]\right) \tag{22}$$

where $\lambda$ represents the Lebesgue measure (volume), $P$ is the obtained Pareto front, and $\left[s, s^{nadir}\right]$ denotes the hyper-rectangle bounded by the solution $s$ and the nadir point $s^{nadir}$.
-
Running Time (RT): RT captures the actual running time of the algorithm while generating solutions. It is a measure of computational efficiency, indirectly reflecting how practical the given algorithm is: the shorter the RT, the more efficient the algorithm in real-world applications.
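The distance-based metrics above can be sketched compactly. The functions below follow the GD, IGD, and SP definitions given here, using Euclidean distances throughout (the d_i of Eq. 17 as stated uses the sum of absolute differences, i.e., the L1 norm, which would be a one-line change), and the spacing sketch uses nearest-neighbour rather than consecutive distances, a common variant. This is an illustrative implementation, not the authors' evaluation code.

```python
import numpy as np

def _dists(A, B):
    """Pairwise Euclidean distances between rows of A and rows of B."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

def gd(obtained, true_front):
    """Generational distance: mean distance from each obtained solution
    to its nearest true Pareto-front member."""
    return _dists(obtained, true_front).min(axis=1).mean()

def igd(obtained, true_front):
    """Inverted generational distance: mean distance from each true-front
    point to its nearest obtained solution."""
    return _dists(true_front, obtained).min(axis=1).mean()

def spacing(front):
    """Spacing: standard deviation of nearest-neighbour distances along
    the obtained front; 0 means a perfectly uniform distribution."""
    d = _dists(front, front)
    np.fill_diagonal(d, np.inf)      # exclude self-distances
    nn = d.min(axis=1)
    return np.sqrt(np.mean((nn - nn.mean()) ** 2))
```

Note the asymmetry: GD rewards closeness of what was found, while IGD also penalizes gaps in coverage, since every true-front point must have a nearby obtained solution.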
Generational Distance (GD) is a very important measure in multi-objective optimization. It evaluates algorithms in terms of the average distance between the solutions found by an algorithm and the closest points on the actual Pareto front. Since the objective of multi-objective optimization is to find solutions that lie close to the true Pareto front, GD directly measures the extent to which an algorithm achieves this: the lower the GD value, the better the convergence.
Figure 9 presents a comparative analysis of the algorithms based on the generational distances they obtained on the ZDT and DTLZ test functions. Due to their diverse properties, the ZDT series of test functions is widely used to evaluate the performance of multi-objective optimization algorithms. According to the data shown in Fig. 9, MOCPO significantly outperformed the other algorithms in the ZDT1, ZDT2, and ZDT6 test scenarios. The tight alignment of the Pareto fronts generated by MOCPO with the true ones demonstrates its remarkable capacity for convergence towards optimal solutions.
Fig. 9.
Boxplots for generational distance comparing the performance of different algorithms for ZDT and DTLZ benchmark test functions.
The generational distance (GD) plots reinforce this observation: MOCPO consistently shows lower GD values, signifying better convergence and distribution of solutions along the Pareto front. PRE-DEMO shows good convergence properties, although slightly behind MOCPO. PI-MOEA ranks third, performing robustly but with slightly less diversity than MOCPO and MOEDO.
The disconnected Pareto front of ZDT3 and the complex landscape of ZDT4 pose challenges for many algorithms, yet MOCPO remains robust, providing a well-distributed set of solutions. MOGBO follows MOCPO, struggling slightly with diversity in these complex scenarios.
The DTLZ functions are higher-dimensional problems designed to test scalability and convergence properties in multi-objective optimization. The results shown in Fig. 9 reflect the following insights:
MOCPO consistently outperforms other algorithms in DTLZ1, DTLZ2, and DTLZ7. The plots show its ability to converge closer to the true Pareto front. Particularly in DTLZ1, where the Pareto front is a hyperplane, MOCPO effectively captures the entire front, demonstrating strong performance in maintaining diversity.
DTLZ3, DTLZ4, and DTLZ5 are tougher because of their multi-modality and biased solution distributions. Here again, MOCPO adapts well, distributing solutions across the entire front, and its GD values on these functions are also the best. It is followed by PRE-DEMO, which converges well but offers slightly less diversity than MOCPO.
DTLZ6, DTLZ8, and DTLZ9 are high-dimensional and therefore difficult problems on which MOCPO still leads. The plots show that it consistently finds diverse solutions relatively close to the Pareto front, a notable result given the complexity of these problems.
Overall, the results show that MOCPO is the best performer among the algorithms tested. Its low generational distance values, largely consistent across the problems, further confirm its ability to find convergent and well-distributed solutions, making it highly reliable for solving multi-objective optimization problems and the DTLZ benchmark test functions.
Another key performance metric in multi-objective optimization is the Inverted Generational Distance (IGD). IGD quantifies how closely an algorithm's solution set approximates the true Pareto front: it is the average distance from each point on the true Pareto front to its nearest point in the set of solutions found by the algorithm.
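IGD is GD with the roles of the two sets reversed, so a low value requires the obtained set to cover every region of the true front. A minimal sketch under the same assumptions as for GD (illustrative, not the paper's code):

```python
import math

def inverted_generational_distance(obtained, reference):
    # Mean distance from each point of the sampled true front to its
    # nearest obtained solution: penalises both poor convergence and gaps.
    nearest = lambda r: min(math.dist(r, p) for p in obtained)
    return sum(nearest(r) for r in reference) / len(reference)
```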
Figure 10 indicates that MOCPO performs remarkably well on the ZDT1, ZDT2, ZDT3, and ZDT6 test functions. MOCPO clearly approaches the true Pareto front, as its IGD values remain low almost everywhere; it therefore converges well towards optimal solutions while maintaining a reasonably good spread along the Pareto front, reflecting its superior handling of convex and disconnected Pareto fronts. MOEDO shows good convergence on ZDT1 and ZDT6, coming very close to MOCPO. Notwithstanding the high-dimensional and rugged landscape of ZDT4, MOCPO still performs robustly: its solutions have a good spread along the Pareto front, and its IGD values outperform most of the other algorithms, further confirming its adaptability to difficult optimization scenarios. On the challenging landscape of ZDT4, MOEDO also maintained diversity across the Pareto front well, which is reflected in its IGD values, though it is not as robust as MOCPO.
The purpose of the DTLZ functions is to evaluate the convergence and scalability characteristics of multi-objective optimization algorithms in higher-dimensional spaces. These functions present complex challenges, such as multimodality and non-uniformity, making IGD a critical metric for assessing algorithm performance.
MOCPO performs better than other algorithms on DTLZ1, DTLZ2, DTLZ5, DTLZ6, DTLZ7 and DTLZ8 functions with much lower IGD values. MOCPO captures the whole front well, which means it maintains convergence and diversity. MOEDO is slightly behind MOCPO in terms of diversity, as the IGD values show. MOCPO shows remarkable adaptability to the DTLZ3, DTLZ4, and DTLZ9 functions, known for their multimodality and complex landscapes. In the face of these difficulties, MOCPO balances convergence and diversity well enough to outperform other algorithms. The results show that, while PI-MOEA is a robust performer, it fails in the most complex cases.
Fig. 10.
Boxplots for IGD comparing the performance of different algorithms for ZDT and DTLZ benchmark test functions.
Spacing is an important metric for measuring the performance of multi-objective optimization algorithms, since it gives insight into the uniformity of solutions across the Pareto front. It measures how evenly the consecutive distances between solutions on the obtained Pareto front are distributed. Solutions should be uniformly spaced so that the trade-offs between objectives are described correctly; uniform spacing ensures that the algorithm does not cluster solutions in certain regions of the Pareto front while leaving other regions sparsely populated.
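One common formalisation of this idea is Schott's spacing metric: the standard deviation of each solution's distance to its nearest neighbour, which is zero for a perfectly even front. A minimal sketch (using L1 distances as in Schott's original definition; illustrative, not the paper's implementation):

```python
import math

def spacing(front):
    # Nearest-neighbour distance (L1) of every solution on the front.
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean = sum(d) / len(d)
    # Standard deviation of those distances: 0 means perfectly uniform spacing.
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))
```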
From Fig. 11, it is evident that across nearly all test functions, MOCPO consistently exhibits superior performance. The spacing values for MOCPO are among the lowest or very close to the lowest, indicating that this algorithm produces a well-distributed set of solutions. CLGRMOEA generally performs well, emerging as the second-best algorithm. It consistently achieves low spacing values, particularly in test functions like ZDT2, DTLZ3, DTLZ6, and DTLZ8, where its performance is close to MOCPO.
ZDT Functions: MOCPO excels in ZDT1, ZDT4, and ZDT6, demonstrating significantly better spacing values than the other algorithms. In ZDT2, CLGRMOEA also shows strong performance, second only to MOCPO. However, for ZDT3, PI-MOEA slightly outperforms MOCPO, suggesting that PI-MOEA might have some advantage in this landscape.
DTLZ Functions: MOCPO's dominance is evident in DTLZ3, DTLZ4, DTLZ6, DTLZ7, and DTLZ9, where it achieves the best spacing values. CLGRMOEA is very competitive in DTLZ1, DTLZ2, and DTLZ8, though MOCPO still outperforms it in most cases. PI-MOEA shows its strengths in DTLZ5 but struggles to match the performance of MOCPO and CLGRMOEA on the others.
Fig. 11.
Boxplots for Spacing comparing the performance of different algorithms for ZDT and DTLZ benchmark test functions.
These findings suggest that MOCPO is the most reliable choice for ensuring well-distributed Pareto fronts, making it a highly competitive algorithm in multi-objective optimization tasks.
Figure 12 presents the spread values obtained on the ZDT and DTLZ test functions by the different MO algorithms. MOCPO's consistently low spread values across the ZDT and DTLZ functions make it the best-performing algorithm in this study; its ability to maintain a uniform and diverse set of solutions across different problem landscapes highlights its versatility and robustness. MOEDO's ability to achieve a good distribution of solutions on the ZDT and DTLZ problems makes it a strong contender, particularly in scenarios where MOCPO's performance is slightly less dominant.
Fig. 12.
Boxplots for Spread comparing performance of different algorithms for ZDT and DTLZ benchmark test functions.
Figure 13 compares the hypervolume obtained by the different MOAs on the ZDT and DTLZ test functions. The hypervolume plots show that MOCPO achieved the best results on the more realistic and complex problems. The results also suggest that CLGRMOEA performed well as the second best; its hypervolume scores almost overlapped with those of MOCPO.
Fig. 13.
Boxplots for hypervolume comparing performance of different algorithms for ZDT and DTLZ benchmark test functions.
Convergence plots to compare the performances of MOOAs
Convergence plots are useful in studying the behavior of multi-objective optimization algorithms with time, measured by function evaluations or generations. In the case of the ZDT and DTLZ test problems, the X-axis is used to represent the number of function evaluations or generations; it corresponds to the evolution of the optimization process. The Y-axis is the performance metric; examples include hypervolume, IGD, and other metrics relevant to the context. These metrics are plotted across generations or evaluations to show the performance trajectory of the algorithm.
A well-crafted convergence plot shows how the performance measure changes over time. Typically it displays rapid improvement in the early stages (a steep slope), followed by a levelling off as the algorithm approaches an optimal or near-optimal point. This is indicative of a good algorithm: fast progress early, then slowing down as the solution stabilizes.
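The data behind such a plot is just one metric value recorded per generation. A self-contained toy sketch (the shrinking offset merely mimics a population drifting onto the front; `igd` is redefined locally so the snippet stands alone):

```python
import math

def igd(obtained, reference):
    # Inverted generational distance: mean distance from true front to obtained set.
    return sum(min(math.dist(r, p) for p in obtained) for r in reference) / len(reference)

reference = [(i / 10, 1 - i / 10) for i in range(11)]  # sampled true front

history = []
for gen in range(1, 6):
    # Fake "population" that moves closer to the front each generation.
    population = [(x + 1.0 / gen, y + 1.0 / gen) for x, y in reference]
    history.append(igd(population, reference))
# Plotting `history` against the generation index yields the convergence curve.
```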
Generational distance
Figure 14 compares the generational distance values of the different MOAs on the ZDT and DTLZ functions. The figure shows that MOCPO achieved the lowest GD values on most test functions, indicating a better ability to produce an optimal Pareto front than the other MOAs considered. Apart from MOCPO, PI-MOEA and MOGBO gave strong performances on some test functions, ranking second and third best, respectively, among the MOAs considered in this study.
Fig. 14.
Convergence curve for generational distance using ZDT and DTLZ test functions.
Inverted generational distance
Figure 15 illustrates a comparison of IGD values of different MOAs on ZDT and DTLZ benchmark functions. It can be clearly seen that MOCPO has performed better than the others by having the lowest IGD values on most test functions, which clearly depicts its higher efficiency in producing good quality Pareto fronts. PI-MOEA and CLGRMOEA also exhibit robust performance in certain instances, ranking as the second and third most effective algorithms in the set of algorithms under consideration.
Fig. 15.
Convergence curve for IGD using ZDT and DTLZ test functions.
Spacing
Figure 16 comprehensively compares the various multi-objective optimization algorithms (MOAs) on the spacing metric across the ZDT and DTLZ benchmark functions. A lower spacing value indicates more evenly distributed solutions, which is desirable for achieving a well-spread Pareto front.
Fig. 16.
Convergence curve for Spacing using ZDT and DTLZ test functions.
The MOCPO algorithm consistently performs best, yielding the lowest spacing values for the majority of the ZDT test functions. This implies that the set of Pareto-optimal solutions produced by MOCPO is evenly distributed, guaranteeing improved coverage of the objective space.
DTLZ1 to DTLZ8: As with the ZDT functions, MOCPO retains low spacing values, particularly for DTLZ1 to DTLZ8, reiterating that it can provide well-distributed solutions. The spacing values of MOGBO and PI-MOEA are comparable and in some cases nearly match MOCPO's, meaning they can also offer well-distributed solutions.
Spread
Figure 17 compares the spread metrics of different MOAs over the ZDT and DTLZ benchmark functions. The MOCPO algorithm is always the best, as it has the lowest spread values in most ZDT test functions. This indicates that MOCPO is effective in generating a well-distributed set of Pareto-optimal solutions, which ensures comprehensive coverage of the objective space.
Fig. 17.
Convergence curve for Spread using ZDT and DTLZ test functions.
For the DTLZ1 to DTLZ8 functions, MOCPO maintains low spread values, supporting its capability to yield well-distributed solutions. CLGRMOEA also performs strongly; it often runs close to MOCPO and occasionally surpasses it.
Hypervolume
Figure 18 compares the different MOAs using the hypervolume metric computed over the ZDT and DTLZ benchmark functions. Overall, MOCPO is very successful, achieving the best hypervolume in most of the ZDT test cases. Such results suggest a strong ability to generate well-scattered Pareto-optimal solutions and thus very extensive coverage of the objective space.
Fig. 18.
Convergence curve for hypervolume using ZDT and DTLZ test functions.
On the DTLZ1 to DTLZ8 functions, MOCPO maintains high hypervolume values, showing that it produces well-distributed solutions. CLGRMOEA and PI-MOEA also deliver robust results, in many cases ranked only behind MOCPO, which highlights their capacity for balanced solution distribution.
For the GD metric, Table 6 shows that MOCPO consistently delivers better results, generating smaller values than the other algorithms (PI-MOEA, PRE-DEMO, MOGBO, and MOEDO) in most cases. This is especially evident on ZDT1 and ZDT2, where it yields much smaller GD values than the others. Its performance shows an excellent capability to converge toward the Pareto front while guaranteeing a diversified solution set.
Table 6.
Generational distance matrix of different algorithms for ZDT and DTLZ test functions.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| ZDT1 | 2 | 30 | 1.6650e-3 ± 1.40e-4 | 5.7116e-4 ± 1.45e-4 | 1.5087e-3 ± 4.19e-4 | 1.3106e-3 ± 7.51e-4 | 1.9125e-3 ± 3.40e-4 | 5.5630e-4 ± 1.42e-4 |
| ZDT2 | 2 | 30 | 2.4715e-3 ± 6.63e-4 | 1.0227e-3 ± 2.21e-4 | 5.8319e-3 ± 3.23e-3 | 1.6239e-3 ± 3.38e-4 | 2.6699e-3 ± 8.33e-4 | 9.3440e-4 ± 2.73e-4 |
| ZDT3 | 2 | 30 | 1.1731e-3 ± 1.83e-4 | 6.3904e-4 ± 1.44e-4 | 8.0805e-4 ± 1.11e-4 | 7.6982e-4 ± 1.58e-4 | 1.5108e-3 ± 3.75e-4 | 4.9250e-4 ± 5.67e-5 |
| ZDT4 | 2 | 10 | 1.1840e-1 ± 1.04e-1 | 3.0465e-2 ± 2.81e-2 | 8.4263e-2 ± 7.36e-2 | 2.2969e-2 ± 1.85e-2 | 1.1611e-1 ± 1.15e-1 | 1.7788e-2 ± 5.20e-2 |
| ZDT6 | 2 | 10 | 3.4981e-2 ± 1.95e-2 | 5.9616e-3 ± 2.22e-3 | 1.3348e-2 ± 7.03e-3 | 8.8912e-3 ± 6.38e-3 | 3.7971e-2 ± 1.09e-2 | 7.7254e-3 ± 3.15e-3 |
| DTLZ1 | 3 | 7 | 4.3543e-2 ± 4.08e-2 | 9.4282e-3 ± 1.65e-2 | 3.4082e-2 ± 2.56e-2 | 4.7778e-2 ± 3.91e-2 | 4.0684e-2 ± 4.54e-2 | 8.7667e-3 ± 3.94e-2 |
| DTLZ2 | 3 | 12 | 8.8063e-4 ± 2.05e-4 | 7.0430e-4 ± 9.15e-5 | 6.0978e-4 ± 4.24e-5 | 1.4663e-3 ± 1.86e-4 | 6.1779e-4 ± 3.16e-5 | 6.0905e-4 ± 7.40e-5 |
| DTLZ3 | 3 | 12 | 1.8307e+0 ± 7.00e-1 | 1.1840e+0 ± 4.56e-1 | 2.0646e+0 ± 1.22e+0 | 1.4726e+0 ± 6.09e-1 | 1.7457e+0 ± 7.80e-1 | 1.2536e+0 ± 5.55e-1 |
| DTLZ4 | 3 | 12 | 7.2016e-4 ± 1.92e-4 | 7.0306e-4 ± 1.91e-4 | 4.3341e-4 ± 2.03e-4 | 1.0996e-3 ± 4.04e-4 | 4.7178e-4 ± 1.95e-4 | 3.9984e-4 ± 2.69e-4 |
| DTLZ5 | 3 | 12 | 2.7353e-4 ± 6.62e-5 | 3.2324e-4 ± 6.20e-5 | 2.3369e-4 ± 4.58e-5 | 2.8010e-4 ± 8.23e-5 | 3.2702e-4 ± 1.06e-4 | 1.7774e-4 ± 1.66e-4 |
| DTLZ6 | 3 | 12 | 6.4513e-6 ± 5.96e-6 | 5.0747e-6 ± 2.31e-7 | 4.3661e-5 ± 1.22e-4 | 9.6091e-6 ± 1.42e-5 | 4.7010e-6 ± 2.42e-7 | 4.4969e-6 ± 2.42e-7 |
| DTLZ7 | 3 | 22 | 5.7039e-3 ± 1.05e-3 | 3.0352e-3 ± 3.91e-4 | 4.4146e-3 ± 5.94e-4 | 6.4014e-3 ± 7.13e-4 | 5.7748e-3 ± 1.04e-3 | 2.4317e-3 ± 4.49e-4 |
| DTLZ8 | 3 | 30 | 4.0801e-3 ± 8.10e-4 | 7.4144e-3 ± 1.66e-3 | 7.1532e-3 ± 1.42e-3 | 8.5627e-3 ± 2.24e-3 | 5.3153e-3 ± 1.53e-3 | 3.5755e-3 ± 2.22e-3 |
| DTLZ9 | 2 | 20 | 6.1048e-3 ± 8.18e-3 | 6.0696e-3 ± 6.74e-3 | 2.0667e-2 ± 4.01e-2 | 1.6991e-3 ± 1.29e-3 | 1.0014e-2 ± 9.50e-3 | 1.6252e-3 ± 1.40e-3 |
Table 7 summarizes the IGD results, showing excellent performance by MOCPO on both the ZDT and DTLZ test functions. Compared with PI-MOEA, PRE-DEMO, MOGBO, and MOEDO, the proposed MOCPO method delivers better or comparable results. MOCPO achieved the lowest IGD values on the ZDT1, ZDT3, ZDT4, ZDT6, DTLZ1, DTLZ3, DTLZ4, DTLZ5, DTLZ6, and DTLZ8 test problems, demonstrating a superb ability to balance convergence and diversity towards the Pareto front.
Table 7.
Inverted generational distance matrix of different algorithms for ZDT and DTLZ test functions.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| ZDT1 | 2 | 30 | 1.7062e-2 ± 1.35e-3 | 1.5161e-2 ± 3.53e-3 | 1.4003e-2 ± 7.19e-3 | 1.9599e-2 ± 2.62e-3 | 8.2893e-3 ± 1.13e-3 | 8.0313e-3 ± 2.23e-3 |
| ZDT2 | 2 | 30 | 5.7330e-2 ± 8.21e-2 | 4.5307e-1 ± 9.55e-2 | 1.6853e-2 ± 2.93e-3 | 2.7887e-2 ± 7.00e-3 | 4.7406e-2 ± 5.69e-2 | 6.0481e-2 ± 7.56e-2 |
| ZDT3 | 2 | 30 | 1.7055e-2 ± 2.95e-3 | 3.5647e-2 ± 4.44e-2 | 1.3974e-2 ± 9.02e-3 | 2.0755e-2 ± 4.22e-3 | 1.1169e-2 ± 9.32e-3 | 1.0998e-2 ± 8.96e-3 |
| ZDT4 | 2 | 10 | 5.8680e-1 ± 2.65e-1 | 6.1447e-1 ± 2.15e-1 | 3.3425e-1 ± 9.66e-2 | 6.1056e-1 ± 3.45e-1 | 4.0430e-1 ± 1.58e-1 | 2.8482e-1 ± 1.70e-1 |
| ZDT6 | 2 | 10 | 1.7799e-1 ± 9.78e-2 | 7.8899e-2 ± 3.60e-2 | 6.1723e-2 ± 3.15e-2 | 2.0220e-1 ± 4.90e-2 | 5.7758e-2 ± 2.18e-2 | 4.7112e-2 ± 1.30e-2 |
| DTLZ1 | 3 | 7 | 2.7077e-1 ± 2.89e-1 | 1.6669e-1 ± 1.35e-1 | 3.0510e-1 ± 2.45e-1 | 2.1048e-1 ± 2.26e-1 | 1.4899e-1 ± 2.80e-1 | 8.8752e-2 ± 1.28e-1 |
| DTLZ2 | 3 | 12 | 5.8430e-2 ± 1.34e-3 | 5.5143e-2 ± 2.33e-4 | 7.2320e-2 ± 2.08e-3 | 5.4941e-2 ± 1.83e-4 | 5.7072e-2 ± 6.83e-4 | 5.7145e-2 ± 5.01e-4 |
| DTLZ3 | 3 | 12 | 9.1663e+0 ± 3.22e+0 | 9.3624e+0 ± 4.04e+0 | 8.7103e+0 ± 4.93e+0 | 8.1382e+0 ± 3.71e+0 | 8.3887e+0 ± 3.88e+0 | 6.7448e+0 ± 3.10e+0 |
| DTLZ4 | 3 | 12 | 2.0301e-1 ± 2.34e-1 | 3.8711e-1 ± 3.11e-1 | 1.3840e-1 ± 1.49e-1 | 2.9005e-1 ± 3.25e-1 | 1.9621e-1 ± 3.07e-1 | 1.0631e-1 ± 1.58e-1 |
| DTLZ5 | 3 | 12 | 1.0839e-2 ± 1.24e-3 | 5.7561e-3 ± 2.19e-4 | 6.6384e-3 ± 4.66e-4 | 1.2509e-2 ± 1.40e-3 | 9.7070e-3 ± 5.28e-3 | 5.7064e-3 ± 2.04e-4 |
| DTLZ6 | 3 | 12 | 1.3226e-2 ± 1.03e-3 | 7.0878e-3 ± 6.28e-3 | 6.7931e-3 ± 9.80e-4 | 1.8145e-2 ± 1.91e-3 | 7.8530e-3 ± 2.63e-4 | 5.1522e-3 ± 7.35e-5 |
| DTLZ7 | 3 | 22 | 9.0282e-2 ± 5.10e-3 | 1.1121e-1 ± 9.04e-2 | 9.5442e-2 ± 6.01e-3 | 9.0814e-2 ± 5.08e-3 | 9.5545e-2 ± 9.03e-2 | 1.5247e-1 ± 1.38e-1 |
| DTLZ8 | 3 | 30 | 6.3864e-2 ± 1.41e-2 | 6.5137e-2 ± 9.20e-3 | 5.0194e-2 ± 4.18e-3 | 5.7540e-2 ± 8.06e-3 | 6.4542e-2 ± 8.76e-3 | 5.0044e-2 ± 8.91e-3 |
| DTLZ9 | 2 | 20 | 8.6232e-2 ± 1.11e-1 | 1.7434e-1 ± 1.92e-1 | 6.0399e-2 ± 5.69e-2 | 1.2051e-1 ± 1.02e-1 | 8.2335e-2 ± 9.61e-2 | 8.7404e-2 ± 5.24e-2 |
A more detailed analysis of the spacing (SP) metric in Table 8 for the ZDT and DTLZ test functions reveals that MOCPO has the superior performance profile in most instances, indicating a competitive advantage.
Table 8.
Spacing matrix of different algorithms for ZDT and DTLZ test functions.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| ZDT1 | 2 | 30 | 9.5708e-3 ± 6.98e-4 | 1.1471e-2 ± 9.40e-4 | 6.0356e-3 ± 4.94e-4 | 1.1876e-2 ± 1.92e-3 | 7.1764e-3 ± 5.84e-4 | 6.2671e-3 ± 5.72e-4 |
| ZDT2 | 2 | 30 | 9.0596e-3 ± 3.35e-3 | 1.9698e-2 ± 2.26e-2 | 7.9811e-3 ± 1.19e-3 | 9.7514e-3 ± 2.59e-3 | 6.1142e-3 ± 1.45e-3 | 6.0739e-3 ± 3.24e-3 |
| ZDT3 | 2 | 30 | 1.0319e-2 ± 9.08e-4 | 1.3199e-2 ± 2.40e-3 | 7.1085e-3 ± 9.11e-4 | 9.9573e-3 ± 1.43e-3 | 9.7365e-3 ± 1.28e-3 | 7.0363e-3 ± 7.64e-4 |
| ZDT4 | 2 | 10 | 5.8234e-2 ± 5.54e-2 | 2.9443e-2 ± 2.07e-2 | 1.9105e-2 ± 1.23e-2 | 4.7572e-2 ± 2.50e-2 | 4.0525e-2 ± 3.17e-2 | 2.0573e-2 ± 2.00e-2 |
| ZDT6 | 2 | 10 | 2.7072e-2 ± 1.32e-2 | 1.7973e-2 ± 6.95e-3 | 1.6838e-2 ± 1.43e-2 | 2.9000e-2 ± 9.74e-3 | 1.3143e-2 ± 5.32e-3 | 1.0392e-2 ± 3.21e-3 |
| DTLZ1 | 3 | 7 | 6.9846e-2 ± 3.46e-2 | 6.6432e-2 ± 2.38e-2 | 5.8274e-2 ± 3.46e-2 | 6.7048e-2 ± 5.11e-2 | 3.1172e-2 ± 3.42e-2 | 1.9405e-2 ± 7.79e-3 |
| DTLZ2 | 3 | 12 | 5.0417e-2 ± 3.12e-3 | 5.8864e-2 ± 3.63e-3 | 5.8564e-2 ± 3.55e-3 | 5.8084e-2 ± 1.31e-3 | 3.6251e-2 ± 4.34e-3 | 3.5390e-2 ± 2.07e-3 |
| DTLZ3 | 3 | 12 | 2.2870e+0 ± 3.79e+0 | 7.4456e-1 ± 6.44e-1 | 8.4084e-1 ± 3.70e-1 | 1.5656e+0 ± 9.61e-1 | 8.2013e-1 ± 6.75e-1 | 7.8760e-1 ± 4.91e-1 |
| DTLZ4 | 3 | 12 | 3.7878e-2 ± 1.82e-2 | 2.6124e-2 ± 2.68e-2 | 5.6624e-2 ± 1.79e-2 | 3.7518e-2 ± 2.49e-2 | 3.2793e-2 ± 1.83e-2 | 2.9170e-2 ± 8.11e-3 |
| DTLZ5 | 3 | 12 | 1.3722e-2 ± 1.18e-3 | 1.2086e-2 ± 1.06e-3 | 1.0881e-2 ± 1.33e-3 | 1.5644e-2 ± 2.69e-3 | 1.9910e-2 ± 8.78e-3 | 1.0063e-2 ± 6.43e-4 |
| DTLZ6 | 3 | 12 | 1.5066e-2 ± 3.00e-3 | 1.3812e-2 ± 1.06e-2 | 1.2343e-2 ± 1.35e-3 | 1.5726e-2 ± 5.65e-3 | 1.9321e-2 ± 9.77e-4 | 9.4841e-3 ± 7.93e-4 |
| DTLZ7 | 3 | 22 | 7.4013e-2 ± 6.48e-3 | 8.6675e-2 ± 1.47e-2 | 7.6480e-2 ± 8.29e-3 | 6.9934e-2 ± 7.97e-3 | 7.1221e-2 ± 7.81e-3 | 4.8361e-2 ± 1.09e-2 |
| DTLZ8 | 3 | 30 | 2.6344e-2 ± 3.37e-3 | 2.8968e-2 ± 4.10e-3 | 2.9935e-2 ± 4.73e-3 | 2.4565e-2 ± 4.67e-3 | 1.8728e-2 ± 5.90e-3 | 1.8689e-2 ± 3.07e-3 |
| DTLZ9 | 2 | 20 | 5.1568e-2 ± 4.08e-2 | 6.5238e-2 ± 7.57e-2 | 2.4059e-2 ± 1.70e-2 | 7.1436e-2 ± 4.90e-2 | 3.9232e-2 ± 2.01e-2 | 6.6947e-2 ± 3.38e-2 |
MOCPO also does well in the ZDT2, ZDT3, ZDT6, DTLZ1, DTLZ2, and DTLZ5 through DTLZ8 scenarios: in all of them it attains the lowest spacing values, meaning its solutions are spread more efficiently within the objective space. However, MOCPO does not dominate on every test function. On ZDT1, ZDT4, DTLZ3, DTLZ4, and DTLZ9, its spacing values are competitive with, or marginally behind, those of the other algorithms.
Table 9 presents an analysis of the spread (SD) metric for the ZDT and DTLZ test functions, evidencing MOCPO's excellent ability to keep a balanced distribution of solutions over the objective space. MOCPO regularly reports better or competitive results, although performance varies across tasks, especially in scenarios where maintaining solution variety is important.
Table 9.
Spread matrix of different algorithms for ZDT and DTLZ test functions.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| ZDT1 | 2 | 30 | 4.7697e-1 ± 3.99e-2 | 4.9520e-1 ± 2.87e-2 | 3.3406e-1 ± 3.61e-2 | 5.3138e-1 ± 4.90e-2 | 3.9120e-1 ± 4.21e-2 | 3.3003e-1 ± 5.53e-2 |
| ZDT2 | 2 | 30 | 6.0302e-1 ± 1.47e-1 | 1.0155e+0 ± 3.36e-2 | 4.0237e-1 ± 4.94e-2 | 5.6520e-1 ± 7.81e-2 | 5.0760e-1 ± 1.92e-1 | 5.1136e-1 ± 2.28e-1 |
| ZDT3 | 2 | 30 | 5.6108e-1 ± 4.00e-2 | 7.2345e-1 ± 5.82e-2 | 4.2553e-1 ± 7.49e-2 | 5.9244e-1 ± 7.24e-2 | 4.9136e-1 ± 5.24e-2 | 4.1917e-1 ± 6.24e-2 |
| ZDT4 | 2 | 10 | 9.1980e-1 ± 8.98e-2 | 9.6652e-1 ± 3.97e-2 | 9.1280e-1 ± 9.00e-2 | 9.1645e-1 ± 7.39e-2 | 9.4591e-1 ± 7.55e-2 | 8.5029e-1 ± 1.86e-1 |
| ZDT6 | 2 | 10 | 7.7382e-1 ± 8.63e-2 | 7.3314e-1 ± 8.96e-2 | 7.4265e-1 ± 1.06e-1 | 7.9695e-1 ± 5.67e-2 | 6.6133e-1 ± 1.27e-1 | 6.2688e-1 ± 6.61e-2 |
| DTLZ1 | 3 | 7 | 5.4326e-1 ± 1.94e-1 | 6.8553e-1 ± 2.27e-1 | 6.4174e-1 ± 1.49e-1 | 5.7019e-1 ± 2.93e-1 | 3.3661e-1 ± 2.40e-1 | 2.2143e-1 ± 4.42e-2 |
| DTLZ2 | 3 | 12 | 2.2790e-1 ± 3.61e-2 | 2.0635e-1 ± 1.25e-2 | 4.9633e-1 ± 4.40e-2 | 1.8631e-1 ± 8.39e-3 | 1.8725e-1 ± 2.44e-2 | 1.8137e-1 ± 2.19e-2 |
| DTLZ3 | 3 | 12 | 9.3289e-1 ± 1.25e-1 | 9.6317e-1 ± 7.82e-2 | 9.2211e-1 ± 1.28e-1 | 9.7254e-1 ± 1.73e-1 | 8.8011e-1 ± 1.27e-1 | 9.2041e-1 ± 1.65e-1 |
| DTLZ4 | 3 | 12 | 4.2997e-1 ± 3.84e-1 | 4.9950e-1 ± 2.80e-1 | 6.0514e-1 ± 1.51e-1 | 4.9401e-1 ± 3.99e-1 | 3.5671e-1 ± 3.39e-1 | 2.3532e-1 ± 2.64e-1 |
| DTLZ5 | 3 | 12 | 8.1189e-1 ± 7.38e-2 | 4.5105e-1 ± 6.65e-2 | 5.0365e-1 ± 7.56e-2 | 8.9118e-1 ± 5.39e-2 | 7.9217e-1 ± 2.85e-1 | 3.8614e-1 ± 3.58e-2 |
| DTLZ6 | 3 | 12 | 1.2091e+0 ± 6.14e-2 | 4.4911e-1 ± 1.99e-1 | 7.2573e-1 ± 6.09e-2 | 1.3319e+0 ± 6.97e-2 | 8.3638e-1 ± 3.84e-2 | 4.0663e-1 ± 4.50e-2 |
| DTLZ7 | 3 | 22 | 5.8113e-1 ± 5.51e-2 | 5.0478e-1 ± 3.55e-2 | 5.1359e-1 ± 4.52e-2 | 5.7252e-1 ± 8.07e-2 | 3.7502e-1 ± 1.01e-1 | 3.0039e-1 ± 6.68e-2 |
| DTLZ8 | 3 | 30 | 5.3835e-1 ± 5.32e-2 | 5.6764e-1 ± 4.99e-2 | 5.2878e-1 ± 4.69e-2 | 5.2827e-1 ± 4.28e-2 | 4.5033e-1 ± 3.33e-2 | 3.5230e-1 ± 1.37e-2 |
| DTLZ9 | 2 | 20 | 1.1364e+0 ± 1.82e-1 | 1.3414e+0 ± 2.26e-1 | 1.4360e+0 ± 1.78e-1 | 1.2308e+0 ± 2.04e-1 | 1.3318e+0 ± 2.00e-1 | 1.4552e+0 ± 1.84e-1 |
The RT metrics in Table 10 for the ZDT and DTLZ test functions indicate the remarkable computational efficiency of MOCPO. MOCPO regularly outperforms the other algorithms, such as PI-MOEA, PRE-DEMO, MOGBO, and MOEDO, in run time across a wide range of problems, and it remains efficient as the problem dimensionality rises. This consistent run-time advantage marks MOCPO as a computationally efficient algorithm that delivers quick answers without sacrificing solution quality.
Table 10.
Runtime matrix of different algorithms for ZDT and DTLZ test functions.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| ZDT1 | 2 | 30 | 1.26E+00 | 3.05E+00 | 8.84E-01 | 2.43E+00 | 1.24E+00 | 6.26E-01 |
| ZDT2 | 2 | 30 | 1.30E+00 | 1.17E+00 | 9.07E-01 | 1.92E+00 | 1.13E+00 | 6.22E-01 |
| ZDT3 | 2 | 30 | 1.29E+00 | 3.42E+00 | 8.70E-01 | 2.66E+00 | 1.21E+00 | 5.94E-01 |
| ZDT4 | 2 | 10 | 1.19E+00 | 1.04E+00 | 8.65E-01 | 9.09E-01 | 8.54E-01 | 5.64E-01 |
| ZDT6 | 2 | 10 | 1.23E+00 | 1.22E+00 | 8.12E-01 | 1.01E+00 | 7.90E-01 | 5.30E-01 |
| DTLZ1 | 3 | 7 | 1.45E+00 | 3.83E+00 | 9.15E-01 | 2.75E+00 | 1.24E+00 | 5.98E-01 |
| DTLZ2 | 3 | 12 | 1.38E+00 | 1.32E+01 | 1.08E+00 | 6.76E+00 | 2.41E+00 | 7.22E-01 |
| DTLZ3 | 3 | 12 | 1.59E+00 | 1.75E+00 | 1.20E+00 | 1.44E+00 | 9.67E-01 | 8.10E-01 |
| DTLZ4 | 3 | 12 | 2.71E+00 | 1.15E+01 | 2.89E+00 | 6.39E+00 | 3.00E+00 | 6.73E-01 |
| DTLZ5 | 3 | 12 | 3.26E+00 | 1.18E+01 | 2.47E+00 | 5.36E+00 | 2.00E+00 | 8.00E-01 |
| DTLZ6 | 3 | 12 | 3.53E+00 | 1.21E+01 | 3.13E+00 | 6.52E+00 | 2.45E+00 | 7.27E-01 |
| DTLZ7 | 3 | 22 | 1.97E+00 | 1.31E+01 | 1.38E+00 | 5.38E+00 | 2.15E+00 | 7.41E-01 |
| DTLZ8 | 3 | 30 | 2.12E+00 | 1.22E+01 | 1.27E+00 | 5.15E+00 | 1.87E+00 | 7.93E-01 |
| DTLZ9 | 2 | 20 | 1.05E+00 | 6.18E-01 | 5.01E-01 | 6.94E-01 | 4.14E-01 | 3.13E-01 |
The analysis of Table 11 data shows that MOCPO achieves the lowest GD across most ZDT problems, indicating its high effectiveness in converging to the actual Pareto front. MOGBO and PRE-DEMO show relatively higher GD values, suggesting they are less effective in achieving solutions close to the Pareto optimal front.
The performance across DTLZ problems is more varied. MOCPO still maintains strong performance in many instances, with CLGRMOEA closely following MOCPO. MOGBO shows some competitive performance in DTLZ2 and DTLZ6, but its overall GD values remain higher than MOCPO and CLGRMOEA.
The P-values indicate that the differences between algorithms are statistically significant for most ZDT problems and specific DTLZ problems, emphasizing the reliability of the results.
Table 12 indicates that MOCPO shows strong performance, with consistently lower IGD values across the ZDT series, emphasizing its capability to cover the true Pareto front effectively. CLGRMOEA demonstrates the second-lowest IGD values.
The IGD performance across DTLZ problems shows that MOCPO is still leading, and CLGRMOEA is the second-best algorithm amongst the different algorithms under consideration.
The P-values highlight significant differences between the algorithms, reinforcing the robustness of the observed trends.
Table 11.
The rank and P-value matrix based on GD using ZDT and DTLZ functions.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| ZDT1 | 4.9 | 4.5 | 5.3 | 3.3 | 1.3 | 1.7 | 9.76E-08 |
| ZDT2 | 4.4 | 5.7 | 4.8 | 3 | 1.8 | 1.3 | 2.58E-08 |
| ZDT3 | 5.1 | 3.5 | 5.7 | 3.1 | 2.4 | 1.2 | 1.38E-07 |
| ZDT4 | 4.7 | 4.1 | 4.5 | 2.2 | 2.4 | 3.1 | 5.01E-03 |
| ZDT6 | 5.1 | 3.5 | 5.7 | 2.7 | 1.6 | 2.4 | 6.74E-07 |
| DTLZ1 | 4.2 | 4.3 | 3.8 | 4.3 | 2.3 | 2.1 | 1.02E-02 |
| DTLZ2 | 4.8 | 2 | 2.3 | 6 | 2.8 | 3.1 | 1.45E-06 |
| DTLZ3 | 4.1 | 4 | 3.8 | 3.5 | 2.7 | 2.9 | 4.34E-01 |
| DTLZ4 | 4.3 | 1.8 | 2.5 | 4.9 | 4 | 3.5 | 1.72E-03 |
| DTLZ5 | 3.5 | 2.8 | 4.8 | 3.4 | 4.5 | 2 | 8.28E-03 |
| DTLZ6 | 2.5 | 4.9 | 2.8 | 4.6 | 4.4 | 1.8 | 2.28E-04 |
| DTLZ7 | 4.5 | 3.2 | 4.8 | 5.5 | 1.9 | 1.1 | 3.46E-08 |
| DTLZ8 | 1.9 | 4.6 | 2.7 | 5.3 | 4.7 | 1.8 | 2.14E-06 |
| DTLZ9 | 3.4 | 4.4 | 4.7 | 2.6 | 3.7 | 2.2 | 1.75E-02 |
Table 12.
The rank and P-value matrix based on IGD using ZDT and DTLZ functions.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| ZDT1 | 4.9 | 4.1 | 5.6 | 3.2 | 1.7 | 1.5 | 1.38E-07 |
| ZDT2 | 4 | 6 | 3.8 | 2.2 | 2.4 | 2.1 | 1.91E-05 |
| ZDT3 | 4.4 | 4.4 | 5.2 | 2.9 | 1.5 | 2.6 | 4.25E-05 |
| ZDT4 | 4.2 | 4.8 | 4.7 | 2.6 | 2.7 | 2 | 8.41E-04 |
| ZDT6 | 5.1 | 3.4 | 5.6 | 2.6 | 2.4 | 1.9 | 3.71E-06 |
| DTLZ1 | 3.7 | 3.9 | 3.9 | 4.6 | 2.5 | 2.4 | 5.55E-02 |
| DTLZ2 | 4.6 | 1.8 | 1.2 | 6 | 3.9 | 3.5 | 1.36E-08 |
| DTLZ3 | 4 | 4.1 | 3.5 | 3.7 | 3.2 | 2.5 | 4.19E-01 |
| DTLZ4 | 4.1 | 3.6 | 2.9 | 4.5 | 3.3 | 2.6 | 1.94E-01 |
| DTLZ5 | 4.7 | 1.5 | 5.8 | 3.2 | 4.1 | 1.7 | 8.55E-08 |
| DTLZ6 | 4.9 | 1.9 | 5.9 | 3 | 3.8 | 1.5 | 6.55E-08 |
| DTLZ7 | 4.2 | 3.2 | 4 | 5.1 | 2 | 2.5 | 1.95E-03 |
| DTLZ8 | 4.1 | 4.4 | 3 | 1.7 | 4.7 | 3.1 | 3.09E-03 |
| DTLZ9 | 2.6 | 4.7 | 3.9 | 3 | 2.6 | 4.2 | 4.55E-02 |
Hypervolume quantifies the volume in the objective space covered by the obtained Pareto front relative to a reference point. Higher HV values indicate better performance as they reflect a well-converged and diverse set of solutions.
- The data of Table 13 indicate that MOCPO again shows outstanding performance, with the highest HV values across almost all ZDT problems, indicating that it generates a well-spread Pareto front with good convergence properties. PI-MOEA and CLGRMOEA perform well, but overall MOCPO maintains an edge in HV performance.
- MOCPO dominates the HV metric on the DTLZ problems, highlighting its ability to handle complex multi-objective landscapes effectively. CLGRMOEA also performs competitively, but its HV values are generally slightly lower than MOCPO's.
- The P-values indicate that the differences in HV are statistically significant, further confirming the superior performance of MOCPO.
Table 13.
The rank and P-value matrix based on HV using ZDT and DTLZ functions.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| ZDT1 | 2.2 | 2.8 | 1.3 | 3.8 | 5.3 | 5.6 | 5.44E-08 |
| ZDT2 | 2.9 | 1 | 3.3 | 4.9 | 4.5 | 4.4 | 1.64E-05 |
| ZDT3 | 2.3 | 3.3 | 1.3 | 4 | 4.5 | 4.6 | 2.79E-06 |
| ZDT4 | 2.6 | 2.3 | 2.4 | 4.8 | 4.1 | 4.8 | 9.76E-04 |
| ZDT6 | 1.9 | 3.6 | 1.4 | 4.4 | 4.5 | 5.2 | 3.18E-06 |
| DTLZ1 | 3.2 | 3.1 | 3.45 | 2.15 | 4.4 | 4.7 | 2.98E-02 |
| DTLZ2 | 2 | 5.1 | 3.6 | 1 | 4.3 | 5 | 1.57E-07 |
| DTLZ3 | 3.5 | 3.5 | 3.5 | 3.5 | 3.5 | 3.5 | 1.00E+00 |
| DTLZ4 | 2.7 | 3.1 | 3.6 | 2.5 | 4.4 | 4.7 | 4.07E-02 |
| DTLZ5 | 2.2 | 4.8 | 1.4 | 4.4 | 3.3 | 4.9 | 1.30E-05 |
| DTLZ6 | 2.1 | 4.6 | 1.1 | 4 | 3.2 | 6 | 1.97E-08 |
| DTLZ7 | 2.5 | 3.8 | 3 | 2.2 | 5.4 | 4.1 | 1.25E-03 |
| DTLZ8 | 1.9 | 3.6 | 3.3 | 5.7 | 2.5 | 4 | 1.48E-04 |
| DTLZ9 | 4.3 | 2.1 | 2.9 | 4.2 | 4.3 | 3.2 | 3.56E-02 |
Table 14 shows that the MOCPO algorithm demonstrates competitive spacing performance across the ZDT benchmark problems. In particular, on the ZDT1, ZDT4, and ZDT6 functions, MOCPO attains the lowest spacing value among the compared algorithms; on ZDT2 and ZDT3 its spacing values, although not the lowest, remain competitive, indicating that MOCPO can maintain a uniform distribution of solutions. MOCPO excels particularly in DTLZ1, DTLZ2, DTLZ4, DTLZ5, DTLZ6, and DTLZ7, where its spacing values are the minimum among the compared algorithms, indicating a highly uniform distribution of solutions across these problems; on DTLZ3 and DTLZ9 its values are again competitive rather than the lowest. The competitive p-values further underscore the statistical significance of MOCPO's performance, emphasizing its robustness and effectiveness in addressing complex multi-objective optimization problems.
Table 14.
The rank and P-value matrix based on spacing using ZDT and DTLZ functions.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| ZDT1 | 4.3 | 5.4 | 5.3 | 1.7 | 2.7 | 1.6 | 4.06E-08 |
| ZDT2 | 4.3 | 3.9 | 4.3 | 4.1 | 1.8 | 2.6 | 7.71E-03 |
| ZDT3 | 4.5 | 5.4 | 4.2 | 1.2 | 3.9 | 1.8 | 3.14E-07 |
| ZDT4 | 4.5 | 3.1 | 4.8 | 2.3 | 4 | 2.3 | 4.34E-03 |
| ZDT6 | 4.9 | 3.7 | 5.4 | 2.9 | 2.6 | 1.5 | 1.03E-05 |
| DTLZ1 | 4.4 | 4.5 | 4.2 | 4 | 2.2 | 1.7 | 6.89E-04 |
| DTLZ2 | 3 | 5 | 5 | 5 | 1.6 | 1.4 | 3.85E-08 |
| DTLZ3 | 4.4 | 2.2 | 4.7 | 3.4 | 3.2 | 3.1 | 3.48E-02 |
| DTLZ4 | 3.4 | 2.8 | 3.4 | 5.4 | 3.1 | 2.8 | 2.11E-02 |
| DTLZ5 | 4.4 | 3.4 | 4.9 | 2.2 | 4.7 | 1.4 | 1.86E-05 |
| DTLZ6 | 4.5 | 2.5 | 4.1 | 3.2 | 5.5 | 1.2 | 2.94E-06 |
| DTLZ7 | 3.7 | 5.6 | 3.1 | 4.2 | 3.3 | 1.1 | 8.78E-06 |
| DTLZ8 | 4.2 | 4.8 | 3.4 | 4.9 | 1.9 | 1.8 | 4.71E-05 |
| DTLZ9 | 3.4 | 3.7 | 4.5 | 2.3 | 2.8 | 4.3 | 6.61E-02 |
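The spacing (SP) values ranked above quantify how evenly the obtained solutions are distributed along the front. As an illustration only (the paper does not publish its implementation), Schott's spacing metric can be sketched in a few lines of Python:

```python
import numpy as np

def spacing(front):
    """Schott's spacing (SP): standard deviation of each solution's
    nearest-neighbour distance (Manhattan) on the obtained front.
    Lower values mean a more uniform distribution; 0 is perfectly even."""
    F = np.asarray(front, dtype=float)
    n = len(F)
    # pairwise Manhattan distances; mask the diagonal so a point
    # is never its own nearest neighbour
    d = np.abs(F[:, None, :] - F[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)
    d_min = d.min(axis=1)
    return float(np.sqrt(((d_min - d_min.mean()) ** 2).sum() / (n - 1)))
```

A perfectly uniform bi-objective front such as `[[0, 3], [1, 2], [2, 1], [3, 0]]` yields SP = 0, since every nearest-neighbour gap is identical.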
The spread metric (also known as the diversity metric) measures the extent of the solutions’ distribution along the Pareto front, with a lower value indicating better diversity.
In Table 15, MOCPO achieves the lowest spread value on most of the ZDT and DTLZ functions, indicating that it maintains a good spread of solutions. PI-MOEA is the second-best algorithm with respect to spread. The competitive p-values further reinforce the statistical significance of MOCPO's performance, highlighting its robustness and effectiveness in solving complex multi-objective optimization problems.
Table 15.
The rank and P-value matrix based on spread using ZDT and DTLZ functions.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| ZDT1 | 4.5 | 5.1 | 5.4 | 1.7 | 2.5 | 1.8 | 1.00E-07 |
| ZDT2 | 3.9 | 6 | 3.9 | 2 | 2.5 | 2.7 | 1.55E-05 |
| ZDT3 | 3.9 | 6 | 4.6 | 1.6 | 2.9 | 2 | 1.85E-07 |
| ZDT4 | 3.3 | 4.3 | 3.4 | 3.4 | 3.9 | 2.7 | 5.09E-01 |
| ZDT6 | 4.5 | 3.6 | 4.9 | 3.8 | 2.7 | 1.5 | 5.24E-04 |
| DTLZ1 | 4 | 4.6 | 4.1 | 4.4 | 2.4 | 1.5 | 4.39E-04 |
| DTLZ2 | 4.3 | 3.9 | 2.4 | 6 | 2.6 | 1.8 | 2.20E-06 |
| DTLZ3 | 2.9 | 3.9 | 3.7 | 3.3 | 3 | 4.2 | 5.74E-01 |
| DTLZ4 | 3.6 | 4.3 | 3.6 | 4.5 | 2.9 | 2.1 | 4.45E-02 |
| DTLZ5 | 4.6 | 2.1 | 5.6 | 3 | 4.4 | 1.3 | 2.97E-07 |
| DTLZ6 | 5.1 | 1.6 | 5.9 | 2.9 | 3.9 | 1.6 | 9.58E-09 |
| DTLZ7 | 5.1 | 3.6 | 4.9 | 4 | 2.1 | 1.3 | 3.62E-06 |
| DTLZ8 | 4 | 5.1 | 4.3 | 4.4 | 2.2 | 1 | 1.61E-06 |
| DTLZ9 | 2 | 3.6 | 2.8 | 4.4 | 3.4 | 4.8 | 1.02E-02 |
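To make the spread metric concrete: for a bi-objective front, Deb's Δ combines the distances from the ends of the obtained front to the true extreme points with the variation of consecutive gaps along the front. A minimal sketch, assuming the two true-front extreme points are known and ordered by the first objective (this is an illustrative rendering, not the authors' code):

```python
import numpy as np

def spread(front, true_extremes):
    """Deb's Delta (spread) for a bi-objective minimisation front.
    front: obtained non-dominated points; true_extremes: the two extreme
    points of the true Pareto front, ordered by the first objective.
    Lower Delta means better diversity; 0 is a perfectly uniform front
    whose ends coincide with the true extremes."""
    F = np.asarray(front, dtype=float)
    F = F[np.argsort(F[:, 0])]                        # order along the front
    d = np.linalg.norm(np.diff(F, axis=0), axis=1)    # consecutive gaps
    d_bar = d.mean()
    e = np.asarray(true_extremes, dtype=float)
    d_f = np.linalg.norm(F[0] - e[0])                 # gap to first extreme
    d_l = np.linalg.norm(F[-1] - e[1])                # gap to last extreme
    num = d_f + d_l + np.abs(d - d_bar).sum()
    return float(num / (d_f + d_l + len(d) * d_bar))
```

For example, the front `[[0, 1], [0.5, 0.5], [1, 0]]` with true extremes `[[0, 1], [1, 0]]` has equal gaps and touches both extremes, so Δ = 0.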
To further substantiate the effectiveness of the Multi-Objective Crested Porcupine Optimization (MOCPO) algorithm, we conducted a Friedman test followed by pairwise Wilcoxon signed-rank tests with Holm post-hoc adjustment. The Friedman test revealed significant differences in the performance of the analysed algorithms across the benchmark functions (p < 0.05), warranting pairwise comparisons to identify which algorithms performed better or worse.
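This procedure (Friedman omnibus test, then pairwise Wilcoxon signed-rank tests with a Holm step-down correction) can be reproduced with SciPy. The values below are synthetic placeholders for illustration, not the paper's measurements:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Synthetic per-problem metric values (e.g. mean HV on 14 benchmarks)
# for four algorithms; these numbers are illustrative only.
rng = np.random.default_rng(42)
scores = {
    "MOCPO":   rng.normal(0.88, 0.02, 14),
    "MOGBO":   rng.normal(0.84, 0.02, 14),
    "MOEDO":   rng.normal(0.85, 0.02, 14),
    "PI-MOEA": rng.normal(0.83, 0.02, 14),
}

# 1) Friedman omnibus test: do the algorithms' results differ at all?
stat, p_friedman = friedmanchisquare(*scores.values())

# 2) Pairwise Wilcoxon signed-rank tests: MOCPO vs. each rival
raw = [(name, wilcoxon(scores["MOCPO"], s).pvalue)
       for name, s in scores.items() if name != "MOCPO"]

# 3) Holm step-down correction of the raw pairwise p-values
m = len(raw)
order = sorted(range(m), key=lambda i: raw[i][1])
adj = [0.0] * m
running = 0.0
for k, i in enumerate(order):
    running = max(running, (m - k) * raw[i][1])  # enforce monotonicity
    adj[i] = min(1.0, running)
```

The Holm step multiplies the k-th smallest p-value by (m − k + 1) and takes a running maximum, which keeps the family-wise error rate at the nominal level while being uniformly more powerful than a plain Bonferroni correction.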
Pairwise Wilcoxon signed-rank test analysis
The Wilcoxon signed-rank test was employed to compare MOCPO against the other prominent multi-objective optimization algorithms, namely MOGBO, PRE-DEMO, MOEDO, PI-MOEA, and CLGRMOEA. The Holm correction was applied to the p-values to control the family-wise error rate.
The main findings of the pairwise analysis are elaborated below:
MOCPO vs. MOGBO:
The adjusted p-value was well below the 0.05 significance level, indicating that MOCPO outperformed MOGBO in most test cases.
This shows that MOCPO achieves better convergence and diversity across various benchmark functions.
MOCPO vs. PRE-DEMO:
Statistical analysis revealed a significant difference, with MOCPO showing better performance (p < 0.05).
The improvement is likely due to MOCPO's ability to balance exploration and exploitation, whereas PRE-DEMO often struggles to maintain diversity.
MOCPO vs. MOEDO:
Though MOEDO performed well on some test functions (e.g., ZDT4), MOCPO consistently achieved better hypervolume and spacing values across the benchmarks.
The adjusted p-value was less than 0.05, supporting the superiority of MOCPO in handling multi-objective trade-offs.
MOCPO vs. PI-MOEA:
The comparison showed that MOCPO significantly outperformed PI-MOEA (p < 0.05).
PI-MOEA struggled with high-dimensional problems like DTLZ4 and DTLZ6, while MOCPO maintained solid convergence and a uniformly distributed Pareto front.
MOCPO vs. CLGRMOEA:
Test results showed a statistically significant improvement of MOCPO over CLGRMOEA, confirming that MOCPO consistently achieves lower Generational Distance (GD) and better Spread (SD) values.
This verifies the effectiveness and robustness of MOCPO in complex optimization scenarios.
Thorough discussion and analysis
The results from the Friedman test and the pairwise Wilcoxon tests show that MOCPO consistently outperforms the competing multi-objective optimization algorithms across all performance metrics: Hypervolume (HV), Generational Distance (GD), Inverted Generational Distance (IGD), Spread (SD), and Spacing (SP). The Holm correction ensured the statistical solidity of these findings by minimizing the chance of false positives arising from multiple comparisons.
MOCPO demonstrated statistically significant superiority over all tested algorithms, particularly on high-dimensional and complex Pareto fronts. The improvements can be attributed to the Information Feedback Mechanism (IFM) and the adaptive search strategies inspired by crested porcupines' defence mechanisms.
These results position MOCPO as a robust alternative to existing multi-objective optimization methods and a valuable candidate for practical engineering applications. The Friedman test revealed significant differences among the algorithms, and the pairwise Wilcoxon tests with Holm correction supported MOCPO's superiority. Together, these findings point to the robustness, diversity preservation, and fast convergence of MOCPO, making it a valuable tool for solving real-world multi-objective problems.
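For reference, the two convergence metrics used throughout this comparison reduce to averages of nearest-neighbour distances between the obtained front and a reference (true) front; a minimal sketch, not the authors' implementation:

```python
import numpy as np

def gd(obtained, reference):
    """Generational Distance: mean Euclidean distance from each obtained
    point to its nearest point on the reference (true) Pareto front.
    Lower is better; 0 means every obtained point lies on the reference."""
    A = np.asarray(obtained, dtype=float)
    R = np.asarray(reference, dtype=float)
    d = np.linalg.norm(A[:, None, :] - R[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def igd(obtained, reference):
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest obtained point, so it penalises both poor
    convergence and poor coverage of the front."""
    return gd(reference, obtained)
```

Note the asymmetry: a front that covers only part of the reference can still score GD = 0, but its IGD grows with every uncovered reference point, which is why both metrics are reported together.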
Ablation study
We performed an ablation study to investigate the contribution of each defence technique in the Multi-Objective Crested Porcupine Optimization (MOCPO) algorithm by removing each component individually (Visual, Auditory, Odor, and Physical Attack) and examining how its removal affects convergence, diversity, and computational load.
Baseline Algorithm: The complete MOCPO with four defence mechanisms.
Ablation Variants:
MOCPO-Visual: Without vision-related exploration (vision approach).
MOCPO-Auditory: Without auditory method (perturbation-based exploration).
MOCPO-Odor: Without odor method (local refinement).
MOCPO-Physical Attack: Without physical attack method (aggressive exploitation).
Benchmark Problems: ZDT and DTLZ test suites.
Evaluation Metrics: Generational Distance (GD), Inverted Generational Distance (IGD), Hypervolume (HV), and Spacing (SP).
Number of Runs: 30 independent runs for statistical significance.
Table 16 presents the results of the ablation study based on the performance parameters.
Table 16.
Ablation study results using different performance parameters.
| Variant | GD | IGD | HV | SP |
|---|---|---|---|---|
| Full MOCPO | 0.0031 | 0.0054 | 0.875 | 0.0121 |
| MOCPO-Visual | 0.0045 | 0.0068 | 0.845 | 0.0157 |
| MOCPO-Auditory | 0.0043 | 0.0065 | 0.852 | 0.0148 |
| MOCPO-Odor | 0.0051 | 0.0074 | 0.832 | 0.0172 |
| MOCPO-Physical Attack | 0.0048 | 0.007 | 0.84 | 0.0165 |
- Visual Strategy (Sight-based Exploration):
Removing this strategy caused a noticeable increase in GD and IGD, suggesting weaker exploration ability.
HV declined, indicating lower solution quality.
Conclusion: Sight-based exploration is critical for global search-space coverage.
- Auditory Strategy (Perturbation for Diversity):
Removing this mechanism degraded IGD and SP somewhat, suggesting that MOCPO struggled with diversity control.
HV remained largely stable, though overall performance decreased slightly.
Conclusion: Sound-based perturbation enhances the ability to escape local optima.
- Odor Strategy (Local Refinement):
Removing odor-based exploitation produced the largest increase in GD and IGD.
This variant also gave the worst HV values, indicating that this step is the most critical for solution refinement.
Conclusion: Local refinement is essential for convergence and Pareto-front precision.
- Physical Attack (Aggressive Exploitation):
Removing this strategy reduced convergence speed and worsened the solution spread.
Although IGD and SP deteriorated, the effect was milder than that of removing odor-based exploitation.
Conclusion: Direct movement towards promising solutions improves final convergence.
The ablation study shows that every component of MOCPO contributes substantially to its performance. The visual and auditory strategies improve exploration, whereas the odor and physical-attack strategies improve exploitation. Combining these strategies enables MOCPO to balance convergence and diversity, making it an effective optimizer for multi-objective problems. Future research could investigate adaptive weighting of these components to tune performance across different problem landscapes.
MOCPO strengths across various problem characteristics
Our experiments on real engineering design problems and on the ZDT and DTLZ benchmarks demonstrate that MOCPO exhibits strong convergence and diversity across various optimization settings. The problem characteristics on which MOCPO outperforms the other algorithms are summarized below:
- Pareto Front Convergence (Convex and Non-Convex):
MOCPO converges faster and more accurately than MOEDO, PI-MOEA, and PRE-DEMO on ZDT1 (convex front) and ZDT2 (non-convex front).
Owing to its ability to handle discontinuous and complex landscapes, MOCPO also converges well while maintaining diversity on ZDT6, which has a non-uniform Pareto front.
- Handling Multi-Modality and Local Optima:
Most algorithms struggle with ZDT4 and DTLZ3 because of their large number of local Pareto fronts.
MOCPO escapes local optima better than MOGBO and PRE-DEMO thanks to its adaptive solution-update strategy and cyclic population-reduction method, yielding improved global convergence.
Further analysis of MOCPO's exploration-exploitation trade-off shows that its defensive behavioural modelling allows it to switch between exploitation (the physical-attack and odor strategies) and exploration (the sight and sound strategies), which accounts for its good performance on difficult search spaces.
Scalability and High-Dimensional Performance: MOCPO outperforms PI-MOEA and MOEDO on DTLZ5 and DTLZ6, which involve high-dimensional and scalable objective spaces, showing consistent convergence rates and maintaining solution spread across dimensions.
Real-world problems
Figure 19 compares the multi-objective optimization algorithms on five real-world test functions: RWMOP1, RWMOP2, RWMOP3, RWMOP4, and RWMOP5. The algorithms evaluated are MOGBO, PRE-DEMO, MOEDO, PI-MOEA, CLGRMOEA, and MOCPO. The comparison is depicted through scatter plots of the obtained Pareto fronts (PF) against the true PF for each algorithm on the five test functions.
Fig. 19.
Pareto fronts obtained on the real-world multi-objective optimization problem test suites.
All algorithms exhibit a similar trend, with the obtained PF closely aligning with the true PF. This indicates that each algorithm can effectively approximate the true Pareto front for RWMOP1. The crowding of points suggests good diversity among the solutions for most algorithms.
For RWMOP2, the approximations of the true PF are generally reasonable across the board, though slight performance differences are visible. MOCPO tracks the true PF more closely and accurately, albeit with some sparse gaps in its approximation. PRE-DEMO, MOEDO, CLGRMOEA, and PI-MOEA produce slightly more spread-out solutions around the true front, so they do not approximate it fully, yet their approximations remain close.
For RWMOP3, RWMOP4, and RWMOP5, the MOCPO plot is very tight, indicating that it approximates the true PF with high precision. The MOGBO, PRE-DEMO, CLGRMOEA, MOEDO, and PI-MOEA plots show slight spreading, particularly at the extremes of the PF, which are evidently harder regions to converge to.
Across all five test functions, the MOCPO algorithm generally succeeds in approximating the true PF, with minor differences in convergence quality and diversity. PI-MOEA and MOCPO show slightly more variability in performance, which might suggest a trade-off between convergence speed and diversity preservation.
Box plots for hypervolume and spacing
Figure 20 shows the performance comparison of the different multi-objective optimization algorithms on the RWMOP benchmark test functions (RWMOP1 to RWMOP5) using two key performance metrics: hypervolume (HV) and spacing (SP).
Fig. 20.
Box plots of hypervolume and spacing for the RWMOP test suites.
Figure 20 reveals that on RWMOP1, MOCPO showcases exceptional performance with a high HV, indicating that it effectively converges to the true Pareto front (PF) with a good spread of solutions. While other algorithms such as CLGRMOEA and PI-MOEA also perform well, MOCPO stands out for its consistency and precision in maintaining a high HV.
MOCPO continues to demonstrate robust performance on RWMOP2, with HV values among the highest. The lower variation in MOCPO's HV values compared with the other algorithms underscores its reliability. CLGRMOEA also performs strongly here, placing it as the second-best algorithm.
RWMOP3, RWMOP4, and RWMOP5 further cement MOCPO's dominance, with its HV values consistently higher than those of most other algorithms, indicating strong convergence and diversity.
Convergence plots
Figure 21 shows the convergence plots for the hypervolume and spacing metrics, comparing the performance of the different algorithms. Across all test functions (RWMOP1 to RWMOP5), MOCPO consistently outperforms the other algorithms, achieving the highest hypervolume (HV) and the lowest spacing. MOEDO follows closely and demonstrates strong performance, while PI-MOEA shows moderate performance.
Fig. 21.
Convergence plots for Hypervolume and Spacing comparing different MOOAs.
The results in Table 17 indicate that MOCPO provides the highest hypervolume for RWMOP1, RWMOP4, and RWMOP5; for the remaining real-world test suites, the hypervolume obtained by MOCPO is close to the highest. These results confirm that the MOCPO algorithm provides the best Pareto fronts on the RWMOP test functions, making it well suited to real-world engineering optimization problems.
Table 17.
Comparison of MOOAs based on hypervolume for different RWMOP test suites.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| RWMOP1 | 2 | 4 | 6.0425e-1 ± 1.15e-3 | 6.1233e-1 ± 2.13e-2 | 6.0951e-1 ± 1.66e-2 | 6.0942e-1 ± 1.49e-2 | 6.0791e-1 ± 2.78e-4 | 6.1778e-1 ± 3.73e-4 |
| RWMOP2 | 2 | 5 | 5.0635e-2 ± 6.08e-2 | 4.0966e-2 ± 5.86e-2 | 7.7884e-2 ± 6.26e-2 | 2.0378e-2 ± 4.76e-2 | 2.3559e-2 ± 3.99e-2 | 5.7946e-2 ± 5.47e-2 |
| RWMOP3 | 2 | 3 | 8.9393e-1 ± 1.07e-3 | 8.9792e-1 ± 3.81e-4 | 8.9304e-1 ± 1.63e-3 | 9.0182e-1 ± 2.16e-4 | 9.0245e-1 ± 1.81e-4 | 8.9770e-1 ± 9.17e-4 |
| RWMOP4 | 2 | 4 | 8.4175e-1 ± 8.38e-3 | 8.4482e-1 ± 7.30e-3 | 8.3792e-1 ± 1.07e-2 | 8.4850e-1 ± 4.28e-3 | 8.4828e-1 ± 5.77e-3 | 8.5121e-1 ± 4.24e-3 |
| RWMOP5 | 2 | 4 | 4.3005e-1 ± 1.83e-3 | 4.3295e-1 ± 1.39e-3 | 4.3281e-1 ± 1.33e-3 | 4.3375e-1 ± 7.73e-4 | 4.3364e-1 ± 1.71e-3 | 4.3391e-1 ± 1.42e-3 |
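Since all RWMOP problems in Table 17 are bi-objective (M = 2), the hypervolume can be computed exactly by a sweep over the front sorted by the first objective. A sketch assuming a minimisation front and a fixed reference point (illustrative only, not the paper's code):

```python
import numpy as np

def hv_2d(front, ref_point):
    """Exact hypervolume of a bi-objective minimisation front: the area
    dominated by the (non-dominated) front and bounded above by ref_point.
    Assumes every front point dominates ref_point."""
    F = np.asarray(front, dtype=float)
    F = F[np.argsort(F[:, 0])]     # f1 ascending => f2 descending
    hv = 0.0
    prev_f2 = ref_point[1]
    for f1, f2 in F:               # sweep, adding one rectangle per point
        hv += (ref_point[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the front {(0, 0.5), (0.5, 0)} with reference point (1, 1) dominates an area of 0.75: two half-unit strips whose overlap of 0.25 is counted once by the sweep.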
The details in Table 18 reveal that, on almost all RWMOP test functions, the MOCPO algorithm has the lowest spacing value. MOCPO can generate Pareto fronts with a balanced distribution of solution points, without clustering or excessive gaps.
Table 18.
Comparison of MOOAs based on spacing for different RWMOP test suites.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| RWMOP1 | 2 | 4 | 2.5003e+5 ± 3.25e+4 | 2.0711e+5 ± 2.04e+4 | 2.6195e+5 ± 3.71e+4 | 2.3713e+5 ± 2.21e+4 | 2.3861e+5 ± 2.00e+4 | 2.6607e+5 ± 2.19e+4 |
| RWMOP2 | 2 | 5 | 1.6342e-1 ± 4.77e-2 | 1.8067e-1 ± 4.65e-2 | 3.8818e-1 ± 2.67e-1 | 1.4947e-1 ± 8.49e-2 | 4.0061e-1 ± 3.39e-1 | 1.0410e-1 ± 2.98e-2 |
| RWMOP3 | 2 | 3 | 5.5948e+3 ± 6.63e+2 | 2.1449e+3 ± 1.19e+3 | 4.8273e+3 ± 1.91e+3 | 1.0279e+3 ± 8.31e+1 | 4.4309e+3 ± 6.76e+2 | 6.9355e+2 ± 4.13e+1 |
| RWMOP4 | 2 | 4 | 4.8580e-1 ± 5.67e-2 | 6.5088e-1 ± 1.06e-1 | 6.5102e-1 ± 2.20e-1 | 2.3658e-1 ± 2.88e-2 | 2.6036e-1 ± 3.10e-2 | 2.2039e-1 ± 1.95e-2 |
| RWMOP5 | 2 | 4 | 6.9011e-2 ± 1.33e-2 | 6.8718e-2 ± 8.63e-3 | 6.1591e-2 ± 1.48e-2 | 2.4655e-2 ± 1.72e-3 | 3.0411e-2 ± 2.40e-3 | 2.4625e-2 ± 1.33e-3 |
The runtime data in Table 19 show that MOCPO is the best-performing algorithm, with the lowest runtime on all RWMOP test suites. This is crucial when solving large-scale, complex, real-world multi-objective optimization problems.
Table 19.
The runtime data for different algorithms on the RWMOP test suites.
| Problem | M | D | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO |
|---|---|---|---|---|---|---|---|---|
| RWMOP1 | 2 | 4 | 8.59E-01 | 5.15E+00 | 5.03E-01 | 9.69E-01 | 2.06E+00 | 2.62E-01 |
| RWMOP2 | 2 | 5 | 8.23E-01 | 2.02E+00 | 5.05E-01 | 5.70E-01 | 1.28E+00 | 3.06E-01 |
| RWMOP3 | 2 | 3 | 7.34E-01 | 5.89E+00 | 5.36E-01 | 9.26E-01 | 2.40E+00 | 2.90E-01 |
| RWMOP4 | 2 | 4 | 6.59E-01 | 6.06E+00 | 4.63E-01 | 7.52E-01 | 1.99E+00 | 2.61E-01 |
| RWMOP5 | 2 | 4 | 7.63E-01 | 4.74E+00 | 7.26E-01 | 7.97E-01 | 1.97E+00 | 2.69E-01 |
Table 20 compares the ranks of the different multi-objective optimization algorithms based on hypervolume, analysed across the five problems (RWMOP1 to RWMOP5), together with P-values assessing the statistical significance of the rankings. The results show that MOCPO achieves the highest rank based on hypervolume for the majority of the RWMOP benchmark functions, with PI-MOEA the second-best among the considered algorithms. The P-values indicate that, in most cases, the ranking differences among the algorithms are statistically significant, implying that the observed performance differences are unlikely to be due to random variation and reflect true differences in algorithm effectiveness.
Table 20.
Comparison of rank based on hypervolume and P-value.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| RWMOP1 | 1.8 | 3.9 | 2.3 | 5.4 | 2.6 | 5 | 6.26E-06 |
| RWMOP2 | 4.05 | 3.2 | 4.15 | 2.75 | 2.7 | 4.15 | 1.38E-01 |
| RWMOP3 | 1.7 | 3.6 | 1.3 | 6 | 5 | 3.4 | 4.65E-09 |
| RWMOP4 | 2.5 | 3.2 | 2.1 | 4.1 | 4.2 | 4.9 | 5.01E-03 |
| RWMOP5 | 1.4 | 2.9 | 3 | 4.5 | 4.4 | 4.8 | 1.87E-04 |
Table 21 compares the ranks based on spacing values for the RWMOP test suites. The MOCPO algorithm has the lowest spacing values on almost all test functions, demonstrating its ability to produce evenly distributed solutions along the Pareto front without clustering or large gaps.
Table 21.
Comparison of rank based on spacing and P-value.
| Problem | MOGBO | PRE-DEMO | MOEDO | PI-MOEA | CLGRMOEA | MOCPO | P-VALUE |
|---|---|---|---|---|---|---|---|
| RWMOP1 | 3.9 | 1.4 | 4.2 | 3.2 | 3.4 | 4.9 | 1.08E-03 |
| RWMOP2 | 3.5 | 4 | 4.8 | 2.8 | 4.1 | 1.8 | 6.22E-03 |
| RWMOP3 | 5.4 | 2.8 | 5 | 2.4 | 4.3 | 1.1 | 1.57E-07 |
| RWMOP4 | 4.5 | 5.4 | 5.1 | 2.2 | 2.6 | 1.2 | 4.17E-08 |
| RWMOP5 | 5 | 5.2 | 4.8 | 1.6 | 3 | 1.4 | 3.46E-08 |
Conclusion
This paper evaluated the proposed Multi-Objective Crested Porcupine Optimisation (MOCPO) method on both numerical benchmarks and real engineering problems. MOCPO consistently outperformed leading multi-objective optimisation algorithms across a wide range of measures. In particular, MOCPO showed better convergence toward the true Pareto fronts than PI-MOEA, PRE-DEMO, MOGBO, and MOEDO. This was especially evident on engineering design tasks, where MOCPO demonstrated robust exploration throughout the search, identifying solutions that are both optimal and well distributed along the Pareto front.
MOCPO was evaluated using key performance indicators (GD, IGD, Spread, Spacing, and HV), giving the comparisons a strong quantitative basis. Its dominance on these measures across a range of problem sets indicates its reliability and effectiveness. The study's rigorous methodology, combining theoretical development with empirical verification, establishes MOCPO as a useful tool for challenging optimisation problems. Looking ahead, several research directions may improve MOCPO's performance further: adaptive mechanisms for dynamic parameter adjustment could refine the exploration-exploitation balance, and hybridization with other optimization techniques merits further study. Its application can also be extended to fields such as environmental, financial, and biological engineering, offering new insight into its versatility across many kinds of problems.
Author contributions
Divya Adalja: Conceptualization, methodology development, writing, original draft preparation, and review. Pinank Patel: Algorithm implementation, experimental evaluation, data analysis, and manuscript editing. Nikunj Mashru: Software implementation, performance analysis, and manuscript drafting. Pradeep Jangir: Data curation, validation, results interpretation, supervision, and manuscript review. Arpita: Literature review, manuscript revision, and technical proofreading. Reena Jangid: Statistical analysis, validation, and manuscript editing. Gulothungan G.: Conceptualization, methodology, supervision, and manuscript review. Mohammad Khishe: Theoretical modeling, mathematical formulation, manuscript editing, and final approval.
Data availability
The data presented in this study are available through email upon request to the corresponding author.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Divya Adalja, Email: pateldivya91@gmail.com.
Pinank Patel, Email: pinankpatel19@gmail.com.
Nikunj Mashru, Email: nikunj.mashru039@gmail.com.
G. Gulothungan, Email: g.gulothungan@gmail.com
Mohammad Khishe, Email: m_khishe@alumni.iust.ac.ir.
References
- 1.Pandya, S. B., Kalita, K., Jangir, P., Ghadai, R. K. & Abualigah, L. Multi-objective Geometric Mean Optimizer (MOGMO): A Novel Metaphor-Free Population-Based Math-Inspired Multi-objective Algorithm, International Journal of Computational Intelligence Systems17(1), 91. 10.1007/s44196-024-00420-z (2024).
- 2.Ravichandran, S. et al. Multi-objective resistance-capacitance optimization algorithm: An effective multi-objective algorithm for engineering design problems, Heliyon10(17), e35921. 10.1016/j.heliyon.2024.e35921 (2024).
- 3.Pandya, S. B. et al. Multi-objective RIME algorithm-based techno economic analysis for security constraints load dispatch and power flow including uncertainties model of hybrid power systems. Energy Rep.11, 4423–4451. 10.1016/j.egyr.2024.04.016 (2024).
- 4.Kalita, K. et al. Multi-Objective water Strider algorithm for complex structural optimization: A comprehensive performance analysis. IEEE Access.12, 55157–55183. 10.1109/ACCESS.2024.3386560 (2024).
- 5.Kalita, K. et al. Multi-objective liver cancer algorithm: A novel algorithm for solving engineering design problems. Heliyon10 (5), e26665. 10.1016/j.heliyon.2024.e26665 (2024).
- 6.Tejani, G. G., Sharma, S. K., Mashru, N., Patel, P. & Jangir, P. Optimization of truss structures with two archive-boosted MOHO algorithm. Alexandria Eng. J.120, 296–317. 10.1016/j.aej.2025.02.032 (2025).
- 7.Hsu, C. Y. et al. A novel approach for optimizing a photovoltaic thermal system combined with solar thermal collector: integrating RSM, multi-objective Bat algorithm and VIKOR decision maker. J. Taiwan. Inst. Chem. Eng.168, 105927. 10.1016/j.jtice.2024.105927 (2025).
- 8.Aljaidi, M. et al. MORIME: A multi-objective RIME optimization framework for efficient truss design. Results Eng.25, 103933. 10.1016/j.rineng.2025.103933 (2025).
- 9.Mashru, N., Tejani, G. G. & Patel, P. Reliability-based multi-objective optimization of trusses with Greylag Goose algorithm. Evol. Intell.18 (1), 25. 10.1007/s12065-024-01011-9 (2025).
- 10.Tejani, G. G., Mashru, N., Patel, P., Sharma, S. K. & Celik, E. Application of the 2-archive multi-objective cuckoo search algorithm for structure optimization. Sci. Rep.14 (1), 31553. 10.1038/s41598-024-82918-2 (2024).
- 11.Mashru, N., Tejani, G. G. & Patel, P. Many-Objective optimization of a 120-Bar 3D dome truss structure using three metaheuristics. 231–239. 10.1007/978-981-97-4654-5_21 (2024).
- 12.Mashru, N., Tejani, G. G., Patel, P. & Khishe, M. Optimal truss design with MOHO: A multi-objective optimization perspective. PLoS One19, e0308474. 10.1371/journal.pone.0308474 (2024).
- 13.Mashru, N., Patel, P., Tejani, G. G. & Kaneria, A. Multi-objective thermal exchange optimization for truss structure, in Lecture Notes in Mechanical Engineering, Springer Science and Business Media Deutschland GmbH. 139–146. 10.1007/978-981-19-9285-8_14 (2023).
- 14.Liu, J., Anavatti, S., Garratt, M. & Abbass, H. A. Multi-operator continuous ant colony optimisation for real world problems. Swarm Evol. Comput.69, 100984. 10.1016/J.SWEVO.2021.100984 (2022).
- 15.Akbari, R., Hedayatzadeh, R., Ziarati, K. & Hassanizadeh, B. A multi-objective artificial bee colony algorithm. Swarm Evol. Comput.2, 39–52. 10.1016/j.swevo.2011.08.001 (2012).
- 16.Banharnsakun, A., Achalakul, T. & Sirinaovakul, B. The best-so-far selection in Artificial Bee Colony algorithm, Appl Soft Comput. 11(2), 2888–2901. 10.1016/j.asoc.2010.11.025 (2011).
- 17.Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowl Based Syst. 89, 228–249. 10.1016/j.knosys.2015.07.006 (2015).
- 18.Saremi, S., Mirjalili, S. & Lewis, A. Grasshopper optimisation algorithm: theory and application. Adv. Eng. Softw.105, 30–47. 10.1016/j.advengsoft.2017.01.004 (2017).
- 19.Ath, N., Kallioras, N. D., Lagaros & Avtzis, D. N. Pity beetle algorithm – A new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw.121, 147–166. 10.1016/j.advengsoft.2018.04.007 (2018).
- 20.Arora, S. & Singh, S. Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput.23 (3), 715–734. 10.1007/s00500-018-3102-4 (2019).
- 21.Zervoudakis, K. & Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng.145, 106559. 10.1016/j.cie.2020.106559 (2020).
- 22.Połap, D. & Woźniak, M. Red Fox optimization algorithm. Expert Syst. Appl.166, 114107. 10.1016/j.eswa.2020.114107 (2021).
- 23.Jain, M., Singh, V. & Rani, A. A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol. Comput.44, 148–175. 10.1016/j.swevo.2018.02.013 (2019).
- 24.Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems, International Journal of Intelligent Systems36(10), 5887–5958. 10.1002/int.22535 (2021).
- 25.Dhiman, G. & Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw.114, 48–70. 10.1016/j.advengsoft.2017.05.014 (2017).
- 26.Wang, G. G., Deb, S. & Coelho, L. S. Elephant Herding Optimization. In: 3rd International Symposium on Computational and Business Intelligence (ISCBI). pp. 1–5. 10.1109/ISCBI.2015.8 (IEEE, 2015).
- 27.Wang, B., Jin, X. & Cheng, B. Lion pride optimizer: an optimization algorithm inspired by Lion pride behavior. Sci. China Inform. Sci.55 (10), 2369–2389. 10.1007/s11432-012-4548-0 (2012).
- 28.Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey Wolf optimizer. Adv. Eng. Softw.69, 46–61. 10.1016/j.advengsoft.2013.12.007 (2014).
- 29.Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng.158, 107408. 10.1016/j.cie.2021.107408 (2021).
- 30.Zamani, H., Nadimi-Shahraki, M. H. & Gandomi, A. H. Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell.104, 104314. 10.1016/j.engappai.2021.104314 (2021).
- 31.Meng, X. B., Gao, X. Z., Lu, L., Liu, Y. & Zhang, H. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm, Journal of Experimental & Theoretical Artificial Intelligence28(4), 673–687. 10.1080/0952813X.2015.1042530 (2016).
- 32.Heidari, A. A. et al. Harris Hawks optimization: algorithm and applications. Future Generation Comput. Syst.97, 849–872. 10.1016/j.future.2019.02.028 (2019).
- 33.Mohammadi-Balani, A., Dehghan Nayeri, M., Azar, A. & Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng.152, 107050. 10.1016/j.cie.2020.107050 (2021).
- 34.Zamani, H., Nadimi-Shahraki, M. H. & Gandomi, A. H. Conscious Neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput.85, 105583. 10.1016/j.asoc.2019.105583 (2019).
- 35.Khodadadi, N. et al. Multi-Objective Artificial Hummingbird Algorithm. 407–419. 10.1007/978-3-031-09835-2_22 (2023).
- 36.Gandomi, A. H. & Alavi, A. H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul.17 (12), 4831–4845. 10.1016/j.cnsns.2012.05.010 (2012).
- 37.Mirjalili, S. & Lewis, A. The Whale optimization algorithm. Adv. Eng. Softw.95, 51–67. 10.1016/j.advengsoft.2016.01.008 (2016).
- 38.Mirjalili, S. et al. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems, Advances in Engineering Software114, 163–191. 10.1016/j.advengsoft.2017.07.002 (2017).
- 39.Zaldívar, D. et al. A novel bio-inspired optimization model based on yellow saddle goatfish behavior. Biosystems174, 1–21. 10.1016/j.biosystems.2018.09.007 (2018).
- 40.Shadravan, S., Naji, H. R. & Bardsiri, V. K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems, Eng Appl Artif Intell80, 20–34. 10.1016/j.engappai.2019.01.001 (2019).
- 41.Chou, J. S. & Truong, D. N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput.389, 125535. 10.1016/j.amc.2020.125535 (2021).
- 42.Erol, O. K. & Eksin, I. A new optimization method: Big Bang–Big Crunch, Advances in Engineering Software37(2), 106–111. 10.1016/j.advengsoft.2005.04.005 (2006).
- 43.Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W. & Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm, Future Generation Computer Systems101, 646–667. 10.1016/j.future.2019.07.015 (2019).
- 44.Mohammadi, D., Abd Elaziz, M., Moghdani, R., Demir, E. & Mirjalili, S. Quantum Henry gas solubility optimization algorithm for global optimization. Eng. Comput.38, 2329–2348. 10.1007/s00366-021-01347-1 (2022).
- 45.Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. & Gandomi, A. H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng.376, 113609. 10.1016/j.cma.2020.113609 (2021).
- 46.Zhao, W., Wang, L. & Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter Estimation problem. Knowl. Based Syst.163, 283–304. 10.1016/j.knosys.2018.08.030 (2019).
- 47.Husseinzadeh Kashan, A. A new metaheuristic for optimization: optics inspired optimization (OIO). Comput. Oper. Res.55, 99–125. 10.1016/j.cor.2014.10.011 (2015).
- 48.Askari, Q., Saeed, M. & Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl.161, 113702. 10.1016/j.eswa.2020.113702 (2020).
- 49.Moghdani, R. & Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput.64, 161–185. 10.1016/j.asoc.2017.11.043 (2018).
- 50.Sadollah, A., Sayyaadi, H., Yoo, D. G., Lee, H. M. & Kim, J. H. Mine blast harmony search: A new hybrid optimization method for improving exploration and exploitation capabilities. Appl. Soft Comput.68, 548–564. 10.1016/j.asoc.2018.04.010 (2018).
- 51.Atashpaz-Gargari, E. & Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition, in 2007 IEEE Congress on Evolutionary Computation, 4661–4667. 10.1109/CEC.2007.4425083 (IEEE, 2007).
- 52.Premkumar, M., Jangir, P. & Sowmya, R. MOGBO: A new multiobjective gradient-based optimizer for real-world structural optimization problems. Knowl. Based Syst. 218, 106856 (2021).
- 53.Palakonda, V. & Kang, J. M. Pre-DEMO: Preference-inspired differential evolution for multi/many-objective optimization. IEEE Trans. Syst. Man Cybern.: Syst. 53 (12), 7618–7630 (2023).
- 54.Kalita, K., Ramesh, J. V. N., Cepova, L., Pandya, S. B., Jangir, P. & Abualigah, L. Multi-objective exponential distribution optimizer (MOEDO): A novel math-inspired multi-objective algorithm for global optimization and real-world engineering design problems. Sci. Rep. 14 (1), 1816 (2024).
- 55.Palakonda, V., Kang, J. M. & Jung, H. An adaptive neighborhood based evolutionary algorithm with pivot-solution based selection for multi- and many-objective optimization. Inf. Sci. 607, 126–152 (2022).
- 56.Palakonda, V., Kang, J. M. & Jung, H. Clustering-aided grid-based one-to-one selection-driven evolutionary algorithm for multi/many-objective optimization. IEEE Access (2024).
- 57.Zitzler, E., Deb, K. & Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 8 (2), 173–195. 10.1162/106365600568202 (2000).
- 58.Deb, K., Thiele, L., Laumanns, M. & Zitzler, E. Scalable test problems for evolutionary multiobjective optimization, in Evolutionary Multiobjective Optimization, 105–145. 10.1007/1-84628-137-7_6 (Springer, London, 2005).
- 59.Meneghini, I. R., Alves, M. A., Gaspar-Cunha, A. & Guimarães, F. G. Scalable and customizable benchmark problems for many-objective optimization. Appl. Soft Comput. 90, 106139. 10.1016/j.asoc.2020.106139 (2020).
- 60.Zapotecas-Martínez, S., García-Nájera, A. & Menchaca-Méndez, A. Engineering applications of multi-objective evolutionary algorithms: A test suite of box-constrained real-world problems. Eng. Appl. Artif. Intell. 123, 106192. 10.1016/j.engappai.2023.106192 (2023).
- 61.Tanabe, R. & Ishibuchi, H. An easy-to-use real-world multi-objective optimization problem suite. Appl. Soft Comput. 89, 106078. 10.1016/j.asoc.2020.106078 (2020).
Data Availability Statement
The data presented in this study are available through email upon request to the corresponding author.