Abstract
The trade-off between exploration and exploitation is one of the key challenges in evolutionary and swarm optimisers, which are driven by a combination of guided and stochastic search. This work investigates the exploration-exploitation balance in a minimalist swarm optimiser in order to offer insights into the population’s behaviour. The minimalist and vector-stripped nature of the algorithm—dispersive flies optimisation or DFO—reduces the challenges of understanding particles’ oscillation around constantly changing centres, their influence on one another, and their trajectory. The aim is to examine the population’s dimensional behaviour in each iteration and each defined exploration-exploitation zone, and to subsequently offer improvements to the working of the optimiser. The derived variants, titled unified DFO or uDFO, are successfully applied to an extensive set of test functions, as well as high-dimensional tomographic reconstruction, which is an important inverse problem in medical and industrial imaging.
Keywords: exploration, exploitation, diversity, zone analysis, dispersive flies optimisation, DFO
1. Introduction
Information exchange and communication between particles in swarm intelligence manifest themselves in a variety of forms, including the use of different update equations and strategies; deploying extra vectors in addition to the particles’ current positions; and dealing with tunable parameters. Ultimately, the goal of the optimisers is to achieve a balance between global exploration of the search space and local exploitation of potentially suitable areas in order to guide the optimisation process [1,2].
The motivation for studying dispersive flies optimisation, or DFO [3], is the algorithm’s minimalist update equation and its sole reliance on particles’ positions at time t to generate the positions at time t+1, therefore not using additional vectors. This characteristic [4] is in contrast to several other population-based algorithms and their variants which, besides using position vectors, use a subset of the following: velocities and memories (personal best and global best) in particle swarm optimisation (PSO) [5], mutant and trial vectors in differential evolution (DE) [6], pheromone and heuristic vectors in Ant Colony Optimisation (ACO) [7], and so forth. Besides only using position vectors in any given iteration (similar to some evolution strategies, such as CMA-ES [8]), the only tunable parameter in DFO, other than population size, is the restart threshold, which controls the component-wise restart in each dimension. This is again contrary to many well-known swarm and evolutionary algorithms dealing with several (theoretically- or empirically-driven) tunable parameters, such as: learning factors, inertia weight in PSO, crossover or mutation rates, tournament and elite sizes, constricting factor in DE and/or Genetic Algorithms (GA) [9], heuristic strength, greediness, pheromone decay rate in ACO, impact of distance on attractiveness, scaling factor and speed of convergence in the Firefly algorithm (FF) [10], and so on. It is worthwhile to note that DFO is not the only minimalist algorithm; there have been several attempts to present ‘simpler’, more compact algorithms to better understand the dynamics of the population’s behaviour, as well as the significance of various communication strategies, but often still with more vectors and parameters, and often at the expense of performance. Perhaps one of the most notable minimalist swarm algorithms is the barebones particle swarm [11].
Another barebones algorithm is barebones differential evolution [12], which is a hybrid of the barebones particle swarm optimiser and differential evolution, aiming to reduce the number of parameters, albeit with more than only the position vector. It is well understood that swarm intelligence techniques are dependent on the tuning of their parameters. This ultimately results in the need to adjust a growing number of parameters, a task that becomes increasingly complex.
This paper aims at identifying and investigating knowledge-based exploration and exploitation zones in a minimalist, vector-stripped algorithm, and then using the analysis to propose ways to measure exploration and exploitation probabilities, with the ultimate goal of controlling the behaviour of the population by suggesting a dimensionally-dependent exploration-exploitation balance without degrading the algorithm’s performance. Furthermore, the paper highlights the limitations and challenges of the proposed methods, which are also applied to tomographic reconstruction, where images are reconstructed from projection data.
In this work, the swarm optimiser is first presented in Section 2, followed by the analysis in Section 3, which subsequently leads to proposing adaptable exploration-exploitation mechanisms. Finally, in Section 4, the experiment results on a comprehensive set of benchmarks are presented.
2. Background
Dispersive flies optimisation (DFO) belongs to the broad family of population-based, swarm intelligence optimisers, which has been applied to various areas, including medical imaging [13], solving diophantine equations [14], PID speed control of DC motor [15], optimising machine learning algorithms [16], training deep neural networks [17], computer vision and quantifying symmetrical complexities [18,19], beer organoleptic optimisation [20], and analysis of autopoiesis in computational creativity [21].
In this algorithm, components of the position vectors are independently updated in each iteration, taking into account: the current particle’s position; the current particle’s best neighbouring individual (consider ring topology, where particles have left and right neighbours); and the best particle in the swarm. The update equation is
| $x_{id}^{t+1} = x_{i_{n}d}^{t} + u\left(x_{sd}^{t} - x_{id}^{t}\right)$ | (1) |
where
$x_{id}^{t}$: position of the ith particle in the dth dimension at time step t;
$x_{i_{n}d}^{t}$: position of $x_{i}$’s best neighbouring individual (in ring topology) in the dth dimension at time step t;
$x_{sd}^{t}$: position of the swarm’s best individual in the dth dimension at time step t;
$u \sim U(0,1)$: generated afresh for each individual and each dimension update.
As a diversity-promotion mechanism, individual components of the population’s position vectors are reset if a random number generated from a uniform distribution on the unit interval is less than the disturbance or restart threshold. This ensures a restart from otherwise permanent stagnation over a likely local minimum. In this method, which is summarised in Algorithm 1, each member of the population is assumed to have two neighbours (i.e., ring topology), and particles are not clamped to bounds; therefore, when out of bounds, they are left unevaluated. The source code for standard DFO is available at http://github.com/mohmaj/DFO, accessed on 26 July 2021.
| Algorithm 1 Dispersive flies optimisation (DFO) |
![]() |
As a population-based continuous optimiser, DFO bears several similarities with other swarm and evolutionary algorithms. Stemming from its barebones and vector-stripped nature, DFO allows for further analysis while demonstrating competitive performance, despite being bare of “accessories”. As stated, DFO’s update mechanism relies solely on the position vectors at time t to produce the position vectors for time t+1, without storing extra vectors; in terms of tunable parameters, other than population size, DFO uses one extra parameter for adjusting the global diversity of the population. To provide more context before the analysis, a number of well-known algorithms, along with their tunable (and/or theoretically-driven) parameters, are listed below.
For instance, PSO, in many of the proposed variants, commonly uses the following parameters: population size; $c_1$, controlling the impact of the cognitive component; $c_2$, controlling the impact of the social component; and the constriction factor or the inertia weight w, depending on the update equation. In addition to the position of particle i, $x_i$, each particle has associated velocity, $v_i$, and memory, $p_i$, vectors. Other variants of PSO, including barebones PSOs, were also introduced to simplify the algorithm, with the ultimate goal of offering insight into the algorithm’s underlying behaviour. In one such case, one of the inventors of PSO, Kennedy, describes the process as “strip[ping] away some traditional features” with the hope of revealing the mysteries of the algorithm [11]. In this particular model, the velocity vectors are removed, while the algorithm still benefits from having memories, a work that was carried out to shed light on the behaviour of the algorithm. Other contributions have further explored the simplified version and enhanced its performance, demonstrating its capability in contrast with the original models [22,23,24,25].
Other than PSO, parameters and adjustable configurations of other well-known algorithms include those of GAs [9]: population size, $p_c$: crossover rate, $p_m$: mutation rate, tournament size, elite size; DE [6,26]: population size, CR: crossover rate, the equation used to calculate the mutant vector (e.g., the most notable ones are DE/rand/1, DE/rand-to-best/1, DE/best/1, DE/best/2, and DE/rand/2), F: constricting factor; ACO [7]: m: number of ants (population size), $\beta$: heuristic strength, $q_0$: greediness, $\rho$: pheromone decay rate; Firefly algorithm or FA [10]: population size, m: impact of distance on attractiveness, which could be replaced with a dimension-specific exponent in cases where scales vary significantly across the d dimensions, thus adding d extra parameters; $\gamma$: determining the speed of convergence, in theory $\gamma \in [0, \infty)$; and $\beta_0$: a constant attractiveness.
Looking at the update equations of DFO, PSO and DE’s mutant vector (DE1: DE/rand-to-best/1 and DE2: DE/best/1), certain similarities can be identified:
| $v_{id}^{t+1} = w\,v_{id}^{t} + c_{1} r_{1} \left(p_{id}^{t} - x_{id}^{t}\right) + c_{2} r_{2} \left(g_{d}^{t} - x_{id}^{t}\right)$ | (2) |
| $x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$ | (3) |
| $x_{id}^{t+1} = f\left(x_{id}^{t}, v_{id}^{t}, p_{id}^{t}, g_{d}^{t}\right)$ | (4) |
| $v_{id}^{t} = x_{id}^{t} + F\left(g_{d}^{t} - x_{id}^{t}\right) + F\left(x_{r_{1}d}^{t} - x_{r_{2}d}^{t}\right)$ | (5) |
| $x_{id}^{t+1} = f\left(x_{id}^{t}, v_{id}^{t}, u_{id}^{t}\right)$ | (6) |
| $v_{id}^{t} = g_{d}^{t} + F\left(x_{r_{1}d}^{t} - x_{r_{2}d}^{t}\right)$ | (7) |
| $x_{id}^{t+1} = f\left(g_{d}^{t}, x_{r_{1}d}^{t}, x_{r_{2}d}^{t}\right)$ | (8) |
| $x_{id}^{t+1} = x_{i_{n}d}^{t} + u\left(x_{sd}^{t} - x_{id}^{t}\right)$ | (9) |
| $x_{id}^{t+1} = f\left(x_{i_{n}d}^{t}, x_{sd}^{t}, x_{id}^{t}\right)$ | (10) |
where, for PSO, w is the inertia weight whose optimal value is problem dependent [27]; $v_{id}^{t}$ is the velocity of particle i in dimension d at time step t; $c_1$ and $c_2$ are the learning factors (also referred to as acceleration constants) for personal best and neighbourhood best, respectively; $r_1$ and $r_2$ are random numbers adding stochasticity to the algorithm, drawn from a uniform distribution on the unit interval; $p_{id}^{t}$ is the personal best position of particle i in dimension d; $g_{d}^{t}$ is the swarm best in dimension d; and f takes as input the variables needed at time t in order to return the particle’s component’s position at time t+1. For DE’s mutant vector (DE1: DE/rand-to-best/1 and DE2: DE/best/1), $v_{id}^{t}$ is the dth gene of the ith chromosome’s mutant vector (v in PSO and DE are different, albeit carrying the same name in the literature); $u_{id}^{t}$ is the dth gene of the ith chromosome’s trial vector; $r_1$ and $r_2$ are different from i and are distinct random integers drawn from the population index range; $g_{d}^{t}$ is the dth gene of the best chromosome at generation t; and F is a positive control parameter for constricting the difference vectors.
In these update equations, similarities between PSO’s Equations (2)–(4) and DE1’s Equations (5) and (6) can be observed, including the use of current and best positions, as well as extra components to steer the update process (e.g., in PSO: velocity, v, and memories, p and g; and in DE1: mutant vector, v, and trial vector, u), as shown in Equations (4) and (6).
On the other hand, there are similarities between DE2 (DE/best/1) and DFO, as shown in Equations (7)–(10). In their update equations, the focus is either the best chromosome in the population (DE2) or the best neighbouring particle (DFO), and the spread is determined by taking into account two members of the population: in DE2’s instance, the distance between two random chromosomes is used, and, in DFO’s case, the distance between the best particle’s and the current particle’s positions is calculated; both of these distances are then “controlled” (i.e., by F in DE2, and by u in DFO). Furthermore, DFO’s use of evolutionary phases (i.e., mutation, crossover, and selection) can be seen in the restart mechanism, the update equation, and the elitism strategy, respectively, where particles’ current positions determine their next positions, i.e., $X^{t+1} = f\left(X^{t}\right)$, with X being a 2D matrix of particle positions.
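The structural similarity between DE2’s and DFO’s component updates can be seen side by side in the following sketch; the numeric values are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative one-dimensional component values (arbitrary choices)
x_i, x_best, x_n = 0.7, 0.2, 0.4     # current, swarm-best, best-neighbour
x_r1, x_r2 = rng.uniform(-1, 1, 2)   # two distinct random members (DE)
F = 0.5                              # DE's constriction of the difference vector
u = rng.random()                     # DFO's fresh U(0, 1) draw

# DE/best/1: focus on the best chromosome; spread from two random members,
# constricted by F
v_de2 = x_best + F * (x_r1 - x_r2)

# DFO: focus on the best neighbour; spread from the best-to-current distance,
# scaled by u
x_dfo = x_n + u * (x_best - x_i)
```

In both cases the new component is a “focus” point plus a controlled difference vector; the algorithms differ only in where the focus and the difference come from.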
Therefore, following on the above, and to quote Kennedy [11]: “The particle swarm algorithm has just enough moving parts to make it hard to understand”; this work builds on this key motivation to analyse a minimalist algorithm to:
reduce the challenges of understanding particles oscillating around the constantly changing centres (in each iteration, independently),
understand particles’ influence on one another (and their contribution to the swarm’s next iteration), and
strip the parameters in the analysis to understand the trajectory of particles (moving between different regions in the feasible search space).
To address these challenges, the minimalist, vector-stripped features of the optimiser are used to provide an analysis of the population’s exploration-exploitation behaviour.
3. Exploration-Exploitation Zones Analysis
As shown in the update equation, Equation (1), for each particle, the search focus is $x_{i_{n}d}^{t}$, and the spread, $u\left(x_{sd}^{t} - x_{id}^{t}\right)$, is the scaled distance between the best particle in the swarm and the current particle. Therefore, the equation could be rewritten for each particle’s dimension as
| $x^{t+1} = n + u\,(g - x)$ | (11) |
The spatial location of particles and their proximity to the global optimum of a given function inform the roles played by g and n. Considering one dimension of a problem, and for ease of reading in the remainder of this section, x refers to $x_{id}^{t}$; $x^{t+1}$ refers to $x_{id}^{t+1}$; g refers to $x_{sd}^{t}$; and n refers to $x_{i_{n}d}^{t}$. Furthermore, exploitation refers to x approaching g (i.e., $|x^{t+1} - g| < |x - g|$). By the same token, exploration refers to the increasing distance between x and g (i.e., $|x^{t+1} - g| > |x - g|$). This section presents the unified exploration-exploitation analysis where a number of zones are identified, and their roles in terms of exploration and exploitation are investigated and ultimately measured.
Consider x to be uniformly distributed over the scaled space, while g and n are fixed. Given this, the areas highlighting exploitation can be plotted (see the shaded regions in Figure 1).
To proceed, and as shown in Figure 1, the exploitation probabilities in four cases are presented individually.
Figure 1.
Unified exploitation probability, or p. The shaded areas in the graph represent exploitation, where particles in these areas at time t will be exploiting at time t+1.
The exploitation probability in the first case confirms the findings of the scenario-based analysis presented in Appendix A.1 for scenario 1 (see Figure 1 or Figure A2), where x lies between g and n. This illustrates the link between the unified and the scenario-based analyses. The scenario-based analysis can be found in Appendix A, where three scenarios, S1, S2, and S3, are examined. Furthermore, the scenario-based analysis assumes a start from the initial state and is based on the position of x in relation to n and g. While the scenario-based analysis is independent of the feasible bounds of the search space, this aspect is taken into account in the unified exploitation analysis.
Based on the analysis, and given the tendency of the positions in the scaled space (influenced by the proximity of g and n over time), the unified exploitation probability, or p, is summarised as:
| (12) |
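Since the value of p depends on the positions of g and n, a Monte Carlo sketch can approximate it directly from the definition of exploitation given earlier; the scaled interval [-1, 1] and the simplified update x' = n + u(g - x) are assumptions made here for illustration:

```python
import numpy as np

def exploitation_probability(g, n, samples=100_000, seed=0):
    """Monte Carlo estimate of the exploitation probability p: the fraction
    of component updates that move x closer to g. Assumes x ~ U(-1, 1) in
    the scaled space and the simplified update x' = n + u * (g - x)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, samples)
    u = rng.random(samples)
    x_next = n + u * (g - x)
    return float(np.mean(np.abs(x_next - g) < np.abs(x - g)))
```

For example, when g and n coincide, every update lands a factor of u closer to g, so the estimate approaches 1; separating g and n mixes exploring and exploiting updates, yielding an intermediate probability.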
3.1. Self-Adaptive Variants
Based on the analysis, an immediate line of research is to measure the iteration-based, dimensional probabilities of exploitation (p) to facilitate diversity adjustment. The dimensional diversity mechanism can be facilitated through an adaptable restart threshold (as opposed to a pre-determined parameter value).
3.1.1. Unified DFO (uDFO)
In one such approach, the unified exploitation probability, p, is measured for each dimension and in each iteration, and it controls the component-wise restart mechanism dynamically. In order to take into account the previously reported empirical restart threshold [3], in one set of experiments the adaptable threshold is set so that the empirical value is reached at the high end of p, giving this approach similarities with standard DFO when p is high. Alternatively, in the second approach, the empirical restart threshold is reached at a lower value of p and exceeded thereafter (see Figure 2).
Figure 2.
Component-wise restart threshold, based on p. The restart threshold of the original DFO is illustrated in black.
The adapted versions of the algorithm, which benefit from the unified exploitation probability, are termed unified DFO or uDFO. The proposed methods maintain adaptive, dimension-dependent diversity throughout the optimisation process, reducing it when the population is more inclined towards exploitation, be it local or global.
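One possible shape for such an adaptable threshold is sketched below; the mapping from p to the threshold (a simple power law anchored at a base value delta_hat) and the default values are hypothetical stand-ins, not the paper’s exact formulation:

```python
def adaptive_threshold(p, delta_hat=0.001, gamma=1.0):
    """Hypothetical mapping from the dimensional exploitation probability p
    to a component-wise restart threshold: a power law anchored at an
    assumed base value delta_hat, reached only when p = 1. Higher p
    (stronger exploitation) yields more frequent restarts, injecting
    diversity where the population has converged."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability")
    return delta_hat * p ** gamma
```

With gamma > 1 the threshold stays lower for longer, letting the average p climb higher before restarts take effect, which mirrors the qualitative difference between the two settings discussed above.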
To demonstrate the effect of individual restarts on p over the iterations, a sample run of DFO is illustrated in Figure 3; here, the behaviour of p is visualised during the optimisation of the Rastrigin function, where the restart mechanism is triggered when the dimensional average of p reaches a set level. As shown in the bottom graphs, the black circles, which represent p’s average over the dimensions, increase until reaching the trigger level (lower on the left graph, higher on the right graph), when the restart mechanism is triggered. As the graph on the right shows, the average p is allowed to increase higher before the restart mechanism is activated. The impact of p on diversity can be observed in the top graphs.
Figure 3.
Relation between exploitation probability, p, and diversity. This figure illustrates the commencing of the restart mechanism when the dimensional average of p reaches a set level. The bottom graphs show p’s average and standard deviation in each iteration. As shown, increased diversity, which is the average distance around the population centre (see top graphs), decreases the p values (see bottom graphs), and vice versa.
3.1.2. Unified DFO with Zone-Relocation (uDFO)
Figure 1 highlights the exploration- and exploitation-related, scaled zone borderlines, and, based on these, the search space is categorised into five zones. Using the zones provides a fitting way to investigate the behaviour of the individuals in the context of the unified exploitation probabilities, as well as particle trajectories. Among these zones, some are explore-only, one is exploit-only, and the remainder influence both exploration and exploitation. Figure 4 illustrates the visit-frequency of particle components in each zone over the iterations, highlighting the most and the least visited zones.
Figure 4.
Particle components’ visit-frequency in each zone. These figures illustrate the number of particle components in each zone over the iterations in two representative sample runs, optimising (a) the Sphere function, a unimodal problem, and (b) the Rastrigin function, a multimodal problem. Irrespective of the modality and the landscapes of the functions, the same zone is the most frequently visited, and the same zone the least visited.
Additionally, having these properties, investigating the state transitions from one zone at time t to the next at time t+1 provides each particle’s dimensional trajectory, which is illustrated in Figure 5.
Figure 5.
State transition between zones. This figure shows the state transition of components between zones. Transitions from one of the zones are highlighted in red, as dashed lines.
Following on the state transitions, in order to show the trajectory density for each of these transitions, 1,000,000 component updates are initiated from each of the zones. The density plots from different zones are shown in Figure 6; for instance, the top of Figure 6 illustrates the transition of components from the first zone, with a higher density of components near n. In addition to presenting the density plot for each individual zone, the bottom of Figure 6 shows the trajectory density of the independent updates across all zones, illustrating that the densest area is in line with the search focus being n.
Figure 6.
Density plots for the transition trajectory of 1 million independent components from each of the zones at time t (shaded) to time t+1 in one-step updates. The dashed lines represent zone boundaries. The x-axes represent the scaled positions, and the y-axes illustrate the trajectory density. For instance, the top graph shows the trajectory density at time t+1 of values originating from the first zone at time t. The bottom graph presents the density plot across all zones, highlighting the focus around n. Note that the number of components initialised in each zone is equal.
State transition analysis allows for devising a strategy to control diversity through the zone-relocation of particle positions. Observing the corresponding density plot in Figure 6, or the state transitions in Figure 5, it is evident that particles in one particular zone at time t can be relocated across the zones at time t+1, a unique disseminating possibility only available to components in that zone.
To better understand each zone’s coverage, the behaviour of the optimiser is investigated in a single dimension of a particle when optimising a unimodal function (Sphere) and a multimodal function (Rastrigin). The plots in Figure 7 illustrate the area covered by each zone. It is shown that some zones cover the widest range (irrespective of the problem’s modality), and, as evidenced in Figure 1, certain zones have coverage areas equal in size. The intuition that the distance between g and n reduces over time is clearly illustrated in Figure 7b, as manifested by the shrinking of the covered areas; and, as shown in Figure 7a, the occasional larger increase in the distance between g and n indicates the identification of a new local optimum (caused by a larger jump, which momentarily reduces the coverage of some zones and increases that of others).
Figure 7.
Zones coverage in the Sphere (top) and Rastrigin (bottom) functions. The plots in this figure show the coverage of each zone in each iteration. (a) highlights larger updates in the location of g or n throughout the optimisation process, as reflected in the coverage ranges, while (b), which uses a logarithmic scale for the y-axis, illustrates the continuous smaller updates in the position of g or n. The error values at the end of iteration 500 are given for the two functions, respectively.
Given the state transition analysis and the zone coverage, another experiment is proposed in which, when the restart mechanism is triggered, components are relocated to the exploit-only zone. The restart threshold is chosen from the better-performing of the two earlier settings. Using this strategy, components are effectively restarted to the exploit-only zone. As a result, while lower diversity is expected, the purpose of the zone-relocation experiment is to examine the impact of ‘targeted’ restarts, with potential follow-up exploitation and visits to other zones. The adapted algorithm using the proposed zone-relocation strategy is termed uDFO with zone-relocation.
4. Experiments and Results
The experiments reported in this section examine the results of the exploitation study over a comprehensive benchmark [28], which consists of the functions presented in various sources [29,30,31]. The combined benchmark, CEC05 + CEC13 + ENG13, provides 84 unique problems whose details are presented in Reference [28] and are summarised in Table A1. The benchmark includes functions with the following properties: U: unimodal, M: multimodal, S: separable, NS: non-separable, N: noisy, B: on bounds (where the optimum lies on the bounds), NV: in narrow valley, UB: outside initialisation volume, F: neutrality (f has flat areas), HC: high conditioning, SD: sensitivity (f has one or more sensitive directions), A: asymmetric, D: deceptive (the global optimum is far from the next local optimum), and C: composition.
In this section, the uDFO variants are compared against standard DFO and DFO without the restart mechanism, using the same population size across algorithms. Furthermore, given PSO’s structural similarity to DFO (as outlined in Section 2, both belonging to the swarm intelligence family), the standard PSO algorithm in two neighbourhood structures, global PSO (GPSO) and local PSO (LPSO), is also used, with the parameter settings recommended in Reference [31]. Furthermore, DE (DE/best/1) is also used in the experiments, with F and CR set as in Reference [32]. Each algorithm is run 50 times on each test function, and the termination criterion is set to reaching 150,000 function evaluations. The problems’ dimensionality is kept constant in all trials.
The metrics used to evaluate the results are error: the best function value and proximity to known optimal values; and the population’s terminal diversity: the mean distance between individuals and the centroid (in PSO, the memory or personal best vectors are used, as opposed to the DFO variants and DE, where particles’ positions are used). This measure illustrates the variants’ impact on the population’s diversity, investigating its presence among the population in order to facilitate exploration without hindering the population’s ability to exploit potential optimal solutions. In other words, diversity, alongside the error metric, provides an insight into the inner dynamics of the algorithms.
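The terminal diversity measure described above (mean distance between individuals and the population centroid) can be computed as follows:

```python
import numpy as np

def population_diversity(X):
    """Mean Euclidean distance between individuals (rows of X) and the
    population centroid, as used for the terminal diversity metric."""
    centroid = X.mean(axis=0)
    return float(np.mean(np.linalg.norm(X - centroid, axis=1)))
```

For PSO, the rows of X would be the personal best vectors rather than the current positions, per the convention stated above.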
In total, 33,600 trials (8 algorithms × 84 test functions × 50 runs) are analysed by grouping them in terms of functions and function properties. To analyse the performance of the algorithms over the test functions, Wilcoxon [33] non-parametric tests of significance are used.
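A sketch of the bookkeeping used to produce win/loss/tie counts might look as follows; the rank-sum form of the Wilcoxon test and the median-based win direction are assumptions about the exact configuration, not taken from the paper:

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(errors_a, errors_b, alpha=0.05):
    """Classify a pairing as 'win', 'loss', or 'tie' from algorithm A's
    perspective: a Wilcoxon rank-sum test on per-run errors decides
    significance, and the medians decide the direction of a significant
    difference."""
    _, p_value = ranksums(errors_a, errors_b)
    if p_value >= alpha:
        return "tie"
    return "win" if np.median(errors_a) < np.median(errors_b) else "loss"
```

Summing these outcomes over the 84 functions yields tables in the shape of Tables 1–3.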
Additionally, the algorithms are applied to tomographic reconstruction, which is an important inverse problem in medical and industrial imaging [34]. One of the purposes of applying the proposed variants to this particular problem is to investigate the performance of the algorithms over problems with increasing dimensionality. The termination criterion is again set to reaching 150,000 function evaluations. In this problem, downsampled standard test images, the Shepp-Logan phantoms [35], are reconstructed using two projections. The problem instances have the following dimensionalities: 25D, 100D, 225D, 400D, and 625D.
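A minimal sketch of the resulting fitness function is given below, assuming parallel-beam projections taken as row and column sums; the paper’s actual projection geometry and error measure may differ:

```python
import numpy as np

def projections(image):
    """Two orthogonal parallel-beam projections: column sums and row sums."""
    return image.sum(axis=0), image.sum(axis=1)

def reconstruction_error(candidate_flat, measured, shape):
    """Fitness of a candidate image, flattened into a search vector:
    squared error between the candidate's projections and the measured
    ones. The search dimensionality equals the number of pixels."""
    cand_cols, cand_rows = projections(candidate_flat.reshape(shape))
    meas_cols, meas_rows = measured
    return float(np.sum((cand_cols - meas_cols) ** 2)
                 + np.sum((cand_rows - meas_rows) ** 2))
```

A perfect reconstruction scores zero, but two projections generally underdetermine the image (distinct images can share identical row and column sums), which is what makes the reconstruction an ill-posed inverse problem.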
Results
Table 1a summarises the performance of the algorithms on the 84 test functions, where a ‘win’ or ‘loss’ of uDFO against another algorithm is recorded when there is a statistically significant outperformance in terms of the error values. The results demonstrate uDFO’s outperformance in 62%, 75%, 66%, 55%, and 64% of the statistically significant cases when compared against DFO, DFO without restart, GPSO, LPSO, and DE, respectively. The details of the algorithms’ performance over each of the benchmark functions are presented in the appendix in Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9. The tables provide the numerical values for the minimum, maximum, median, mean, and standard deviation associated with each algorithm over each benchmark function. In terms of diversity, and as shown in Table 1b, the statistically significant similarity between the algorithm and standard DFO is evidenced by the number of ties (i.e., 78 out of 84 cases, or 93%). This is expected, as per the original intention to take into account the previously reported restart threshold (see Figure 2).
Table 1.
Summary of the results for uDFO with the first threshold setting. The scores indicate uDFO’s wins and losses when compared against other algorithms. uDFO exhibits outperformance for the error metric in the majority of significant cases (see bold type).
| (a) Error | |||||
|---|---|---|---|---|---|
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 8 | 5 | 71 | 10% | 62% |
| uDFO (vs. DFO without restart) | 43 | 14 | 27 | 51% | 75% |
| uDFO (vs. GPSO) | 43 | 22 | 19 | 51% | 66% |
| uDFO (vs. LPSO) | 42 | 34 | 8 | 50% | 55% |
| uDFO (vs. DE) | 46 | 26 | 12 | 55% | 64% |
| (b) Diversity | |||||
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 4 | 2 | 78 | 5% | 67% |
| uDFO (vs. DFO without restart) | 84 | 0 | 0 | 100% | 100% |
| uDFO (vs. GPSO) | 62 | 19 | 3 | 74% | 77% |
| uDFO (vs. LPSO) | 25 | 57 | 2 | 30% | 30% |
| uDFO (vs. DE) | 67 | 15 | 2 | 80% | 82% |
The results of the second uDFO setting against other algorithms are reported in Table 2a. The algorithm’s outperformance in 68%, 78%, 73%, 61%, and 68% of the statistically significant cases is reported. While this variant presents higher terminal diversity than DFO without restart, GPSO, and DE, as shown in Table 2b, the contrary can be observed with standard DFO and LPSO. The rationale is the consistent value of the restart threshold in standard DFO throughout the optimisation, and the well-understood higher diversity of local-neighbourhood populations in LPSO [36]. In other words, as shown in Figure 2, the reduced rate of the restart mechanism at the tail end of p manifests itself in the reduced terminal diversity, as illustrated in the first row of Table 2b.
Table 2.
Summary of the results for uDFO with the second threshold setting. The scores indicate uDFO’s wins and losses when compared against other algorithms. uDFO exhibits outperformance for the error metric in the majority of significant cases (see bold type).
| (a) Error | |||||
|---|---|---|---|---|---|
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 21 | 10 | 53 | 25% | 68% |
| uDFO (vs. DFO without restart) | 40 | 11 | 33 | 48% | 78% |
| uDFO (vs. GPSO) | 47 | 17 | 20 | 56% | 73% |
| uDFO (vs. LPSO) | 45 | 29 | 10 | 54% | 61% |
| uDFO (vs. DE) | 47 | 22 | 15 | 56% | 68% |
| (b) Diversity | |||||
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 0 | 84 | 0 | 0% | 0% |
| uDFO (vs. DFO without restart) | 84 | 0 | 0 | 100% | 100% |
| uDFO (vs. GPSO) | 60 | 21 | 3 | 71% | 74% |
| uDFO (vs. LPSO) | 22 | 59 | 3 | 26% | 27% |
| uDFO (vs. DE) | 67 | 16 | 1 | 80% | 81% |
Table 3 presents the performance comparison of uDFO with zone-relocation against other algorithms, including the earlier uDFO, which exhibits better performance in terms of error than the zone-relocation variant. As expected, in terms of error, the winning rates of the two variants are similar when compared against other algorithms, although the earlier uDFO offers better overall performance. The last rows in Table 3a,b compare the two variants, demonstrating the largest number of ties (see the underlined values) as indicators of similarity, which are likely influenced by the coverage similarity of holistic and zone-based restarts. However, as expected and explained earlier, the earlier uDFO exhibits higher diversity than the zone-relocation variant.
Table 3.
Summary of the results for uDFO with zone-relocation. The scores indicate the algorithm’s wins and losses when compared against other methods. It exhibits outperformance for the error metric in the majority of significant cases (see bold type), except against the earlier uDFO, albeit with the majority of cases in tie states, as underlined.
| (a) Error | |||||
|---|---|---|---|---|---|
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 25 | 14 | 45 | 30% | 64% |
| uDFO (vs. DFO without restart) | 39 | 11 | 34 | 46% | 78% |
| uDFO (vs. GPSO) | 43 | 17 | 24 | 51% | 72% |
| uDFO (vs. LPSO) | 45 | 29 | 10 | 54% | 61% |
| uDFO (vs. DE) | 47 | 25 | 12 | 56% | 65% |
| uDFO (vs. uDFO) | 4 | 13 | 67 | 5% | 24% |
| (b) Diversity | |||||
| Algorithms | Win | Loss | Tie | Win Rate | Win Rate (Significant Cases) |
| uDFO (vs. DFO) | 0 | 84 | 0 | 0% | 0% |
| uDFO (vs. DFO without restart) | 84 | 0 | 0 | 100% | 100% |
| uDFO (vs. GPSO) | 57 | 21 | 6 | 68% | 73% |
| uDFO (vs. LPSO) | 22 | 60 | 2 | 26% | 27% |
| uDFO (vs. DE) | 67 | 16 | 1 | 80% | 81% |
| uDFO (vs. uDFO) | 0 | 45 | 39 | 0% | 0% |
In order to analyse the error-related strengths and weaknesses of the uDFO variants, each of the algorithm pairs is broken down in Table 4 based on fourteen function properties. The total number of function properties (shared by the test functions) is 233. The results demonstrate an overall outperformance of the uDFO variants, where the most visible contribution of the unified exploitation approaches can be seen for functions with the properties {U, S, NS, SD}, while being competitive in {M, NV, A} and less effective for {N, C}. Among the suitable function properties is non-separability (NS), where variables interact, making it challenging to decompose the problem into sub-problems; this property is amongst the more demanding in the benchmark and in real-world fitness functions. Further analysis is required to better understand the function properties in the context of the algorithms’ performance.
Table 4.
Performance comparison by function properties. Bold type indicates significantly lower error by the algorithm for a greater number of function instances with a given property.
| (a) uDFO | |||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|
| Property | Total | uDFO | DFO | uDFO | DFO | uDFO | GPSO | uDFO | LPSO | uDFO | DE |
| U: Unimodal | 22 | 14 | 0 | 8 | 8 | 14 | 6 | 17 | 3 | 12 | 6 |
| M: Multimodal | 62 | 7 | 10 | 32 | 3 | 33 | 11 | 28 | 26 | 35 | 16 |
| S: Separable | 18 | 8 | 1 | 13 | 5 | 10 | 5 | 11 | 3 | 8 | 6 |
| NS: Non-separable | 66 | 13 | 9 | 27 | 6 | 37 | 12 | 34 | 26 | 39 | 16 |
| N: Noisy | 3 | 0 | 0 | 3 | 0 | 0 | 1 | 1 | 2 | 2 | 0 |
| B: on bounds | 4 | 2 | 0 | 1 | 1 | 3 | 0 | 2 | 2 | 1 | 1 |
| NV: in narrow val | 3 | 0 | 1 | 0 | 0 | 2 | 0 | 2 | 1 | 2 | 1 |
| UB: out init vol | 2 | 0 | 1 | 2 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| F: Neutrality | 8 | 0 | 2 | 6 | 0 | 2 | 1 | 1 | 7 | 1 | 4 |
| HC: High condition | 2 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| SD: Sensitivity | 2 | 2 | 0 | 2 | 0 | 2 | 0 | 2 | 0 | 1 | 0 |
| A: Asymmetric | 20 | 4 | 1 | 7 | 0 | 9 | 4 | 7 | 7 | 11 | 9 |
| D: Deceptive | 2 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 2 | 0 |
| C: Composition | 19 | 2 | 3 | 9 | 0 | 5 | 4 | 2 | 13 | 5 | 9 |
| ∑ | 233 | 52 | 30 | 111 | 23 | 120 | 46 | 110 | 92 | 121 | 70 |
| % | | 63% | 37% | 83% | 17% | 72% | 28% | 54% | 46% | 63% | 37% |
| (b) uDFO | |||||||||||
| Property | Total | uDFO | DFO | uDFO | DFO | uDFO | GPSO | uDFO | LPSO | uDFO | DE |
| U: Unimodal | 22 | 16 | 1 | 7 | 8 | 13 | 6 | 18 | 3 | 12 | 6 |
| M: Multimodal | 62 | 9 | 13 | 32 | 3 | 30 | 11 | 27 | 26 | 35 | 19 |
| S: Separable | 18 | 9 | 5 | 12 | 5 | 9 | 5 | 12 | 4 | 7 | 6 |
| NS: Non-separable | 66 | 16 | 9 | 27 | 6 | 34 | 12 | 33 | 25 | 40 | 19 |
| N: Noisy | 3 | 0 | 1 | 2 | 0 | 0 | 1 | 1 | 2 | 1 | 0 |
| B: on bounds | 4 | 2 | 0 | 1 | 2 | 2 | 1 | 2 | 2 | 1 | 2 |
| NV: in narrow val | 3 | 1 | 0 | 1 | 0 | 2 | 0 | 2 | 1 | 2 | 1 |
| UB: out init vol | 2 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| F: Neutrality | 8 | 0 | 1 | 6 | 0 | 1 | 1 | 1 | 7 | 1 | 5 |
| HC: High condition | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| SD: Sensitivity | 2 | 2 | 0 | 2 | 0 | 2 | 0 | 2 | 0 | 1 | 0 |
| A: Asymmetric | 20 | 3 | 4 | 7 | 0 | 7 | 3 | 7 | 7 | 11 | 9 |
| D: Deceptive | 2 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 2 | 0 |
| C: Composition | 19 | 2 | 2 | 9 | 1 | 3 | 4 | 2 | 13 | 5 | 11 |
| ∑ | 233 | 61 | 38 | 108 | 25 | 106 | 46 | 110 | 92 | 120 | 80 |
| % | | 62% | 38% | 81% | 19% | 70% | 30% | 54% | 46% | 60% | 40% |
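The per-property breakdown of Table 4 amounts to mapping each function's pairwise winner onto that function's property tags and counting. A minimal sketch, with illustrative function names, properties, and winners (the real benchmark and results are in Table A1 and Tables A2–A9):

```python
# Sketch: aggregating per-function pairwise wins into per-property counts,
# as in Table 4. The function labels, property tags, and winners below are
# illustrative, not the paper's data.

def wins_by_property(properties, winners, algo):
    """properties: function -> set of property tags; winners: function -> winning algorithm."""
    counts = {}
    for f, props in properties.items():
        if winners.get(f) == algo:
            for p in props:
                counts[p] = counts.get(p, 0) + 1
    return dict(sorted(counts.items()))  # sorted keys for reproducible output

props = {
    "Sphere":     {"U", "S"},
    "Rastrigin":  {"M", "S"},
    "Rosenbrock": {"M", "NS", "NV"},
}
winners = {"Sphere": "uDFO", "Rastrigin": "uDFO", "Rosenbrock": "DE"}
print(wins_by_property(props, winners, "uDFO"))  # -> {'M': 1, 'S': 2, 'U': 1}
```

Since a function can carry several properties, the per-property totals of Table 4 deliberately exceed the number of functions (233 properties over 84 functions).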
Finally, the proposed approaches are trialled on tomographic reconstruction, taking into account problems with larger dimensionality (Table 5 and Table 6). Each algorithm is run 50 times on each problem; therefore, a total of 1500 trials are conducted (6 algorithms × 5 problems × 50 runs). Barring the lowest-dimensional problem (25D), the results illustrate the overall competitiveness of the uDFO variants in 15 out of 16, and 12 out of 12, of the algorithm-problem pairs in the high-dimensional problems (see Table 5a,b, respectively).
Table 5.
Tomographic Reconstruction: Performance comparison.
| (a) uDFO with | |||||
|---|---|---|---|---|---|
| Algorithms | D = 25 | D = 100 | D = 225 | D = 400 | D = 625 |
| uDFO vs. DFO | -- | uDFO | uDFO | DFO | uDFO |
| uDFO vs. GPSO | -- | uDFO | uDFO | uDFO | uDFO |
| uDFO vs. LPSO | uDFO | uDFO | uDFO * | uDFO * | uDFO * |
| uDFO vs. DE | uDFO | uDFO | uDFO | uDFO | uDFO |
| (b) uDFO | |||||
| Algorithms | D = 25 | D = 100 | D = 225 | D = 400 | D = 625 |
| uDFO vs. DFO | -- | uDFO | uDFO | uDFO | uDFO |
| uDFO vs. GPSO | -- | uDFO | uDFO | uDFO | uDFO |
| uDFO vs. LPSO | uDFO | uDFO | uDFO * | uDFO * | uDFO * |
| uDFO vs. DE | uDFO | uDFO | uDFO | uDFO | uDFO |
| uDFO vs. uDFO | -- | uDFO | uDFO | uDFO | uDFO |
*: LPSO does not compute solutions for D = 225, 400, and 625. This is due to a large number of particle components overshooting the bounds.
Table 6.
Tomographic Reconstruction: Error values. Bold type indicates outperforming algorithm(s) for each dimension.
| | Algorithm | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|---|
| D = 25 | uDFO | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| uDFO | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
| DFO | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
| GPSO | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | |
| LPSO | 0.00 | |||||
| DE | ||||||
| D = 100 | uDFO | |||||
| uDFO | ||||||
| DFO | ||||||
| GPSO | ||||||
| LPSO | ||||||
| DE | ||||||
| D = 225 | uDFO | |||||
| uDFO | ||||||
| DFO | ||||||
| GPSO | ||||||
| LPSO | NA | NA | NA | NA | NA | |
| DE | ||||||
| D = 400 | uDFO | |||||
| uDFO | ||||||
| DFO | ||||||
| GPSO | ||||||
| LPSO | NA | NA | NA | NA | NA | |
| DE | ||||||
| D = 625 | uDFO | |||||
| uDFO | ||||||
| DFO | ||||||
| GPSO | ||||||
| LPSO | NA | NA | NA | NA | NA | |
| DE |
In summary, while the performance of the two uDFO variants is similar on the lower-dimensional problem, uDFO demonstrates better performance on all higher-dimensional problems (i.e., 100D, 225D, 400D, 625D), with wider performance gaps as the dimensionality grows (see Table 6). Further experiments are needed to verify the extendibility of this performance to other high-dimensional problems.
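As a rough illustration of the fitness landscape involved: the problem dimensionalities in Table 5 are perfect squares (25 = 5², ..., 625 = 25²), consistent with candidate solutions being flattened n-by-n images whose projections are compared against measured ones. The sketch below uses simple row and column sums as a stand-in projection geometry; this is an assumption, since the paper does not specify the projection model in this section.

```python
# Sketch: a tomographic-reconstruction fitness of the kind such optimisers
# minimise. A candidate is a flattened n-by-n image (so D = n*n, matching
# D = 25, ..., 625 in Table 5), and the error is the L2 discrepancy between
# its projections and the measured ones. Row/column sums are an assumed,
# simplified projection geometry.

import math

def projections(flat, n):
    """Row sums and column sums of a flattened n-by-n image."""
    rows = [sum(flat[i * n:(i + 1) * n]) for i in range(n)]
    cols = [sum(flat[i * n + j] for i in range(n)) for j in range(n)]
    return rows + cols

def reconstruction_error(candidate, measured, n):
    return math.sqrt(sum((p - q) ** 2
                         for p, q in zip(projections(candidate, n), measured)))

n = 5                                                  # D = 25, the smallest problem
target = [1.0 if i % 7 == 0 else 0.0 for i in range(n * n)]
measured = projections(target, n)
print(reconstruction_error(target, measured, n))       # -> 0.0 at the true image
```

Any population-based optimiser, including the uDFO variants, would then minimise `reconstruction_error` over the D-dimensional pixel vector.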
Among the challenges of the approach is the need for a priori knowledge of the bounds of feasible solutions. Whilst setting indicative bounds is practically possible in many real-world problems, further investigation is needed in this area. Additionally, although the main computational expense is associated with function evaluation, the impact of calculating the exploitation probability, p, on the computational cost is a topic of ongoing research. Furthermore, having tested the approaches on a comprehensive set of test functions and identified a number of suitable function properties, one of the next steps is to apply the methods to other complex real-world problems with known function properties.
5. Conclusions
This work presents a framework for analysing the exploitation probabilities in a vector-stripped swarm technique, a minimalist numerical optimiser over continuous search spaces. The algorithm’s vector-stripped nature stems from its update equation’s sole reliance on particles’ position vectors, as well as from having (other than population size) one tunable parameter, , controlling the component-wise restart of the particles. This work provides an iteration-based zone analysis of particles’ movements in order to establish their exploration and exploitation behaviour. In addition to furthering the understanding of the particles’ behaviour, the work focuses on providing a strategy to control the population’s interaction in the search space. This is attempted through a unified exploitation probability, p, via (1) uDFO (with ) using a holistic restart, and (2) uDFO, which is trialled to examine a zone-relocation restart mechanism. Both methods allow adaptable dimensional control of the particles.
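The minimalist update described above can be sketched as follows. The variable names follow the text (n for the fly's best neighbour, g for the swarm's best fly, and a restart threshold for the component-wise restart); the bounds and sample values are illustrative.

```python
# Sketch of a DFO-style position update as described in the text: new positions
# depend only on current positions (no velocity or memory vectors), with one
# tunable parameter controlling the per-dimension uniform restart.

import random

def dfo_update(x, n, g, lower, upper, delta=0.001):
    """One component-wise DFO update; x, n, g are equal-length position vectors."""
    new_x = []
    for d in range(len(x)):
        if random.random() < delta:                  # component-wise restart
            new_x.append(random.uniform(lower[d], upper[d]))
        else:
            u = random.random()                      # u ~ U(0, 1)
            new_x.append(n[d] + u * (g[d] - x[d]))   # move relative to n and g
    return new_x

random.seed(1)
x, n, g = [0.2, 0.8], [0.1, 0.7], [0.0, 1.0]
print(dfo_update(x, n, g, lower=[-1, -1], upper=[1, 1]))
```

Raising `delta` increases exploration via restarts, while `delta = 0` leaves the guided move toward n and g as the sole mechanism, which is the trade-off the zone analysis quantifies.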
The proposed approaches are then examined over 84 test functions with a combined 233 function properties, where uDFO performs better in 62%, 75%, 66%, 55%, and 64% of cases with statistically significant difference when compared against DFO, DFO, GPSO, LPSO, and DE, respectively; and uDFO in 68%, 78%, 73%, 61%, and 68%; and uDFO in 64%, 78%, 72%, 61%, and 65% of the significant cases.
The performance is then investigated on the high-dimensional tomographic reconstruction problems, where the two uDFO variants exhibited better performance in 15 out of 16 and 12 out of 12 of the high-dimensional algorithm-problem pairs, respectively.
Using minimalist algorithms facilitates analysis aimed at better understanding the complex underlying behaviour of the particles, such as their oscillation around constantly changing centres, their influence on one another, and their trajectories [11,22,37]. The paper aimed to investigate the exploitation- and exploration-derived zones to inform the behaviour of the population.
Future work includes extending the exploitation and zone analyses to other swarm optimisers, as well as exploring approaches to deal with unbounded problems. Furthermore, studying the performance of the presented approaches in dynamically changing environments, and studying the combinations of function properties which benefit from the analysis, are topics of ongoing research.
Short Biography of the Author
Mohammad Majid al-Rifaie is a Senior Lecturer in Artificial Intelligence at the University of Greenwich, School of Computing and Mathematical Sciences. He holds a PhD in Artificial Intelligence and Computational Swarm Intelligence (CSI) from Goldsmiths, University of London, and since the start of his PhD he has published extensively in the field, covering various applications of CSI, Evolutionary Computation (EC), Machine Learning (ML) and Deep Neural Networks (DNNs). He has taught in higher education for more than a decade, several years of which on topics relevant to CSI and AI, their real-world applications, as well as philosophical issues in artificial intelligence and the arts. His work in the area has featured multiple times in the media, including the British Broadcasting Corporation (BBC). Over the past 10 years, he has developed a unique interdisciplinary research profile with more than 70 peer-reviewed publications, including book chapters, journal and conference papers on CSI, EC, ML and DNNs, as well as their applications in medical imaging, data science, philosophy and the arts. He has supervised (and is supervising) several PhD students researching the aforementioned areas.
Appendix A. Iteration-Based Exploration and Exploitation Analysis
For the iteration-based exploration and exploitation analysis, when , where g is the same for all x in the population in each iteration, the following three scenarios can be analysed:
S1: n ≤ x ≤ g (and its mirror, g ≤ x ≤ n);
S2: x ≤ n ≤ g (and its mirror, g ≤ n ≤ x);
S3: x ≤ g ≤ n (and its mirror, n ≤ g ≤ x).
The analyses are first presented from an initial state, in which the particles are initialised in the search space. Using the small-view exploitation and exploration concepts, the analysis in this section focuses on each scenario (and their corresponding sub-scenarios) and, by extension, the overall impact on each iteration.
Appendix A.1. Scenario 1: n ≤ x ≤ g
In this scenario, the difference between and , as well as the value of u in the update equation, determine whether approaches g. Given and , in the first scenario, (see Figure A1, top).
Figure A1.
Three scenarios for , with , . The analysis also holds for the mirrored scenarios where .
Depending on the proximity of x to either its best neighbour or the best particle in the swarm, two distinct cases need to be explored. Considering (S1.1 in Figure A1), the exploitation probability, , and the exploration probability is , where x moves away from g. On the other hand, in S1.2 in Figure A1, when , and depend on the proximity of x to n, as well as the randomly generated value of u.
Based on this, and to analyse the probability of exploitation in this scenario, the space is scaled with n at the origin and g at 1, and x is uniformly distributed in . Therefore,
This analysis holds for the mirrored case of scenario 1 (i.e., ). The plots in Figure A2 illustrate the exploitation probabilities as shaded areas in S, and mirrored S.
Figure A2.
Exploitation probability in scenario 1. (Left) ; (Right) .
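The scenario-1 probability can also be checked numerically under the stated scaling (n = 0, g = 1, x uniform in between, update x' = n + u(g − x) with u uniform in [0, 1]). Counting exploitation whenever the update moves the fly closer to g is an assumption here, since the paper's exact criterion is carried by the figures rather than the surviving text.

```python
# Monte Carlo check of the scenario-1 setting: space scaled so that n = 0 and
# g = 1, x uniform in [0, 1], update x' = n + u(g - x). "Exploitation" is
# counted, as an assumption, whenever the update lands closer to g.

import random

def exploitation_probability(trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.random()              # n <= x <= g after scaling
        u = rng.random()
        x_new = u * (1.0 - x)         # x' = n + u(g - x) with n = 0, g = 1
        if abs(x_new - 1.0) < abs(x - 1.0):
            hits += 1
    return hits / trials

print(round(exploitation_probability(), 3))  # close to 1 - ln 2, i.e. about 0.307
```

Under this assumed criterion, the condition reduces to u(1 − x) > x, and integrating over x gives 1 − ln 2 ≈ 0.307, which the simulation approaches.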
Appendix A.2. Scenario 2: x ≤ n ≤ g
Scaling the space, we have and . Therefore, there are two possible outcome cases for :
,
.
Therefore, :
The mirrored version of this scenario (i.e., ) also holds, with exploitation probability equal to 1 (see Figure A3).
Figure A3.

Exploitation probability in scenario 2 (for both and ).
Appendix A.3. Scenario 3: x ≤ g ≤ n
Scaling the space, we have and . Therefore,
The analysis also holds for the mirrored case of scenario 3 (i.e., ). The plots in Figure A4 illustrate the exploitation probabilities in S, and mirrored S.
Figure A4.
Exploitation probability in scenario 3. (Left) ; (Right) .
Appendix B. Benchmark Functions and Algorithms Error Values
This section presents the benchmark functions in Table A1, which together form the combined benchmark of the CEC05/13 and ENG13 functions; the function numbering used throughout follows Table A1. Function properties, as claimed in the original publications, are: U: unimodal, M: multimodal, S: separable, NS: non-separable, N: noisy, B: on bounds, NV: in narrow valley, UB: outside initialisation volume, F: neutrality (has flat areas), HC: high conditioning, SD: sensitivity (f has one or more sensitive directions), A: asymmetric, D: deceptive (the global optimum is far from the next local optimum), C: composition.
Furthermore, the error values for algorithms over all the benchmarks are reported in Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9.
Table A1.
Benchmark functions.
| F No. | Label/Number | Name | Properties | F No. | Label/Number | Name | Properties |
|---|---|---|---|---|---|---|---|
| 2 | CEC05 F02 | Shifted Schwefel 1.2 | U, NS | 52 | CEC13 F22 | Composition Function 2 (n = 3, Unrotated) | M, S, A, C |
| 4 | CEC05 F04 | Shifted Schwefel 1.2 with noise | U, NS, N | 53 | CEC13 F23 | Composition Function 3 (n = 3, Rotated) | M, NS, A, C |
| 5 | CEC05 F05 | Schwefel 2.6, on bounds | U, NS, B | 54 | CEC13 F24 | Composition Function 4 (n = 3, Rotated) | M, NS, A, C |
| 6 | CEC05 F06 | Shifted Rosenbrock | M, NS, NV | 55 | CEC13 F25 | Composition Function 5 (n = 3, Rotated) | M, NS, A, C |
| 7 | CEC05 F07 | Shifted Rotated Griewank without bounds | M, NS, UB | 56 | CEC13 F26 | Composition Function 6 (n = 5, Rotated) | M, NS, A, C |
| 8 | CEC05 F08 | Shifted Rotated Ackley, on bounds | M, NS, B | 57 | CEC13 F27 | Composition Function 7 (n = 5, Rotated) | M, NS, A, C |
| 11 | CEC05 F11 | Shifted Rotated Weierstrass | M, NS | 58 | CEC13 F28 | Composition Function 8 (n = 5, Rotated) | M, NS, A, C |
| 12 | CEC05 F12 | Schwefel 2.13 | M, NS | 61 | ENG13 F01 | Absolute value | U, S |
| 13 | CEC05 F13 | Expanded Extended Griewank + Rosenbrock | M, NS | 62 | ENG13 F02 | Ackley | M, NS |
| 14 | CEC05 F14 | Shifted Rotated Expanded Scaffer F6 | M, NS | 63 | ENG13 F02 Sh | Shifted Ackley | M, NS |
| 15 | CEC05 F15 | Hybrid Composition | M, NS, F, C | 64 | ENG13 F02 R | Rotated Ackley | M, NS |
| 16 | CEC05 F16 | Rotated Hybrid Composition | M, NS, F, C | 65 | ENG13 F03 | Alpine | M, S |
| 17 | CEC05 F17 | Rotated Hybrid Composition with noise | M, NS, N, F, C | 66 | ENG13 F04 | Egg holder | M, NS |
| 18 | CEC05 F18 | Rotated Hybrid Composition | M, NS, F, C | 67 | ENG13 F05 | Elliptic | U, S |
| 19 | CEC05 F19 | Rotated Hybrid Composition, in narrow basin | M, NS, NV, F, C | 68 | ENG13 F05 Sh | Shifted Elliptic | U, S |
| 20 | CEC05 F20 | Rotated Hybrid Composition, on bounds | M, NS, B, F, C | 69 | ENG13 F06 | Griewank | M, NS |
| 21 | CEC05 F21 | Rotated Hybrid Composition | M, NS, C | 70 | ENG13 F06 Sh | Shifted Griewank | M, NS |
| 22 | CEC05 F22 | Rotated Hybrid Composition, highly conditioned | M, NS, HC, C | 71 | ENG13 F06 R | Rotated Griewank | M, NS |
| 23 | CEC05 F23 | Non-Continuous Rotated Hybrid Composition | M, NS, B, C | 72 | ENG13 F07 | Hyperellipsoid | U, S |
| 24 | CEC05 F24 | Rotated Hybrid Composition | M, NS, F, C | 73 | (ENG13 F07 ShR) | Shifted Rotated Hyperellipsoid | U, NS |
| 25 | CEC05 F25 | Rotated Hybrid Composition without Bounds | M, NS, UB, F, C | 74 | ENG13 F08 | Michalewicz | M, S |
| 31 | CEC13 F01 | Sphere | U, S | 75 | ENG13 F09 | Norwegian | M, NS |
| 32 | CEC13 F02 | Rotated High Conditioned Elliptic | U, NS, HC | 76 | ENG13 F10 | Quadric | U, NS |
| 33 | CEC13 F03 | Rotated Bent Cigar | U, NS | 77 | ENG13 F11 | Quartic | U, S |
| 34 | CEC13 F04 | Rotated Discus | U, NS, SD, A | 78 | ENG13 F11 N | Quartic/Jong’s F4 | U, S, N |
| 35 | CEC13 F05 | Different Powers | U, S, SD | 79 | ENG13 F12 | Rastrigin | M, S |
| 36 | CEC13 F06 | Rotated Rosenbrock | M, NS, NV | 80 | ENG13 F12 R | Rotated Rastrigin | M, NS |
| 37 | CEC13 F07 | Rotated Schaffer’s F7 | M, NS, A | 81 | ENG13 F13 | Rosenbrock | M, NS |
| 38 | CEC13 F08 | Rotated Ackley | M, NS, A | 82 | ENG13 F13 R | Rotated Rosenbrock | M, NS |
| 39 | CEC13 F09 | Rotated Weierstrass | M, NS, A | 83 | ENG13 F14 | Salomon | M, NS |
| 40 | CEC13 F10 | Rotated Griewank | M, NS | 84 | ENG13 F15 | Schaffer 6 | M, NS |
| 41 | CEC13 F11 | Rastrigin | M, S, A | 85 | ENG13 F16 | Schwefel 1.2 | U, NS |
| 42 | CEC13 F12 | Rotated Rastrigin | M, NS, A | 86 | ENG13 F16 R | Rotated Schwefel 1.2 | U, NS |
| 43 | CEC13 F13 | Non-Continuous Rotated Rastrigin | M, NS, A | 87 | ENG13 F17 | Schwefel 2.6 | U, NS |
| 44 | CEC13 F14 | Schwefel | M, NS, A, D | 88 | ENG13 F18 | Schwefel 2.13 | M, NS |
| 45 | CEC13 F15 | Rotated Schwefel | M, NS, A, D | 89 | ENG13 F19 | Schwefel 2.21 | U, S |
| 46 | CEC13 F16 | Rotated Katsuura | M, NS, A | 90 | ENG13 F20 | Schwefel 2.22 | U, S |
| 47 | CEC13 F17 | Lunacek Bi Rastrigin | M, NS | 91 | (ENG13 F20 ShR) | Shifted Rotated Schwefel 2.22 | U, NS |
| 48 | CEC13 F18 | Rotated Lunacek Bi Rastrigin | M, NS, A | 92 | ENG13 F21 | Shubert | M, NS |
| 49 | CEC13 F19 | Expanded Griewank + Rosenbrock | M, NS | 93 | ENG13 F23 | Step | M, S |
| 50 | CEC13 F20 | Expanded Scaffer’s F6 | M, NS, A | 94 | ENG13 F24 | Vincent | M, S |
| 51 | CEC13 F21 | Composition Function 1 (n = 5, Rotated) | M, NS, A, C | 95 | ENG13 F25 | Weierstrass | M, S |
Table A2.
Error values for DFO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A3.
Error values for DFO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A4.
Error values for uDFO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A5.
Error values for uDFO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A6.
Error values for uDFO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A7.
Error values for GPSO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A8.
Error values for LPSO.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Table A9.
Error values for DE.
| F No. | Min | Max | Median | Mean | StdDev |
|---|---|---|---|---|---|
| 2 | |||||
| 4 | |||||
| 5 | |||||
| 6 | |||||
| 7 | |||||
| 8 | |||||
| 11 | |||||
| 12 | |||||
| 13 | |||||
| 14 | |||||
| 15 | |||||
| 16 | |||||
| 17 | |||||
| 18 | |||||
| 19 | |||||
| 20 | |||||
| 21 | |||||
| 22 | |||||
| 23 | |||||
| 24 | |||||
| 25 | |||||
| 31 | |||||
| 32 | |||||
| 33 | |||||
| 34 | |||||
| 35 | |||||
| 36 | |||||
| 37 | |||||
| 38 | |||||
| 39 | |||||
| 40 | |||||
| 41 | |||||
| 42 | |||||
| 43 | |||||
| 44 | |||||
| 45 | |||||
| 46 | |||||
| 47 | |||||
| 48 | |||||
| 49 | |||||
| 50 | |||||
| 51 | |||||
| 52 | |||||
| 53 | |||||
| 54 | |||||
| 55 | |||||
| 56 | |||||
| 57 | |||||
| 58 | |||||
| 61 | |||||
| 62 | |||||
| 63 | |||||
| 64 | |||||
| 65 | |||||
| 66 | |||||
| 67 | |||||
| 68 | |||||
| 69 | |||||
| 70 | |||||
| 71 | |||||
| 72 | |||||
| 73 | |||||
| 74 | |||||
| 75 | |||||
| 76 | |||||
| 77 | |||||
| 78 | |||||
| 79 | |||||
| 80 | |||||
| 81 | |||||
| 82 | |||||
| 83 | |||||
| 84 | |||||
| 85 | |||||
| 86 | |||||
| 87 | |||||
| 88 | |||||
| 89 | |||||
| 90 | |||||
| 91 | |||||
| 92 | |||||
| 93 | |||||
| 94 | |||||
| 95 |
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Trelea I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003;85:317–325. doi: 10.1016/S0020-0190(02)00447-7. [DOI] [Google Scholar]
- 2.Olorunda O., Engelbrecht A.P. Measuring exploration/exploitation in particle swarms using swarm diversity; Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence); Hong Kong, China. 1–6 June 2008; pp. 1128–1134. [Google Scholar]
- 3.al-Rifaie M.M. Dispersive Flies Optimisation; Proceedings of the IEEE 2014 Federated Conference on Computer Science and Information Systems; Warsaw, Poland. 7–10 September 2014; pp. 529–538. [DOI] [Google Scholar]
- 4.al-Rifaie M.M. Perceived Simplicity and Complexity in Nature; Proceedings of the AISB 2017: Computational Architectures for Animal Cognition; Bath, UK. 18–21 April 2017; pp. 299–305. [Google Scholar]
- 5.Kennedy J. The particle swarm: Social adaptation of knowledge; Proceedings of the IEEE International Conference on Evolutionary Computation; Indianapolis, IN, USA. 13–16 April 1997; pp. 303–308. [Google Scholar]
- 6.Storn R., Price K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997;11:341–359. doi: 10.1023/A:1008202821328. [DOI] [Google Scholar]
- 7.Dorigo M., Di Caro G. Ant colony optimization: A new meta-heuristic; Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406); Washington, DC, USA. 6–9 July 1999; pp. 1470–1477. [Google Scholar]
- 8.Hansen N., Müller S.D., Koumoutsakos P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES) Evol. Comput. 2003;11:1–18. doi: 10.1162/106365603321828970. [DOI] [PubMed] [Google Scholar]
- 9.Back T., Fogel D.B., Michalewicz Z. Handbook of Evolutionary Computation. IOP Publishing Ltd.; Bristol, UK: 1997. [Google Scholar]
- 10.Yang X.S. International Symposium on Stochastic Algorithms. Springer; Berlin/Heidelberg, Germany: 2009. Firefly algorithms for multimodal optimization; pp. 169–178. [Google Scholar]
- 11.Kennedy J. Bare Bones Particle Swarms; Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS’03); Indianapolis, IN, USA. 26 April 2003; pp. 80–87. [Google Scholar]
- 12.Omran M.G., Engelbrecht A.P., Salman A. Bare bones differential evolution. Eur. J. Oper. Res. 2009;196:128–139. doi: 10.1016/j.ejor.2008.02.035. [DOI] [Google Scholar]
- 13.al-Rifaie M.M., Aber A. Recent Advances in Computational Optimization. Springer; Berlin/Heidelberg, Germany: 2016. Dispersive Flies Optimisation and Medical Imaging; pp. 183–203. [Google Scholar]
- 14.Lazov B., Vetsov T. Sum of Three Cubes via Optimisation. arXiv. 20202005.09710 [Google Scholar]
- 15. Acharya B.B., Dhakal S., Bhattarai A., Bhattarai N. PID speed control of DC motor using meta-heuristic algorithms. Int. J. Power Electron. Drive Syst. 2021;12:822–831.
- 16. Alhakbani H. Handling Class Imbalance Using Swarm Intelligence Techniques, Hybrid Data and Algorithmic Level Solutions. Ph.D. Thesis. Goldsmiths, University of London; London, UK: 2018.
- 17. Oroojeni H., al-Rifaie M.M., Nicolaou M.A. Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units. Proceedings of the IEEE European Association for Signal Processing (EUSIPCO); Rome, Italy, 3–7 September 2018; pp. 1157–1161.
- 18. al-Rifaie M.M., Ursyn A., Zimmer R., Javid M.A.J. On symmetry, aesthetics and quantifying symmetrical complexity. International Conference on Evolutionary and Biologically Inspired Music and Art. Springer; Cham, Switzerland: 2017; pp. 17–32.
- 19. Aparajeya P., Leymarie F.F., al-Rifaie M.M. Swarm-Based Identification of Animation Key Points from 2D-medialness Maps. In: Ekárt A., Liapis A., Castro Pena M.L., editors. International Conference on Computational Intelligence in Music, Sound, Art and Design. Springer International Publishing; Cham, Switzerland: 2019; pp. 69–83.
- 20. al-Rifaie M.M., Cavazza M. Beer Organoleptic Optimisation: Utilising Swarm Intelligence and Evolutionary Computation Methods. Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (GECCO’20); Cancún, Mexico, 8–12 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 255–256.
- 21. al-Rifaie M.M., Leymarie F.F., Latham W., Bishop M. Swarmic autopoiesis and computational creativity. Connect. Sci. 2017;29:276–294. doi: 10.1080/09540091.2016.1274960.
- 22. Blackwell T. A Study of Collapse in Bare Bones Particle Swarm Optimisation. IEEE Trans. Evol. Comput. 2012;16:354–372. doi: 10.1109/TEVC.2011.2136347.
- 23. Krohling R.A., Mendel E. Bare bones particle swarm optimization with Gaussian or Cauchy jumps. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’09); Trondheim, Norway, 18–21 May 2009; pp. 3285–3291.
- 24. al-Rifaie M.M., Blackwell T. Bare Bones Particle Swarms with Jumps. In: Birattari M., Blum C., Christensen A.L., Engelbrecht A.P., Groß R., Dorigo M., Stützle T., editors. ANTS 2012, Lecture Notes in Computer Science Series, Volume 7461. Springer; Heidelberg/Berlin, Germany: 2012; pp. 49–60.
- 25. al-Rifaie M.M., Blackwell T. Cognitive Bare Bones Particle Swarm Optimisation with Jumps. Int. J. Swarm Intell. Res. (IJSIR) 2016;7:1–31. doi: 10.4018/IJSIR.2016010101.
- 26. Storn R., Price K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces. Technical Report TR-95-012; 1995. Available online: http://www.icsi.berkeley.edu/~storn/litera.html (accessed on 21 March 2012).
- 27. Shi Y., Eberhart R.C. Parameter selection in particle swarm optimization. Lecture Notes in Computer Science. Springer; Berlin/Heidelberg, Germany: 1998; pp. 591–600.
- 28. Blackwell T., Kennedy J. Impact of communication topology in particle swarm optimization. IEEE Trans. Evol. Comput. 2019;23:689–702. doi: 10.1109/TEVC.2018.2880894.
- 29. Suganthan P.N., Hansen N., Liang J.J., Deb K., Chen Y.P., Auger A., Tiwari S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005;2005005:2005.
- 30. Liang J., Qu B., Suganthan P., Hernández-Díaz A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization. Technical Report 201212. Computational Intelligence Laboratory, Zhengzhou University; Zhengzhou, China; Nanyang Technological University; Singapore: 2013; pp. 281–295.
- 31. Engelbrecht A.P. Particle swarm optimization: Global best or local best? Proceedings of the IEEE 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence; Ipojuca, Brazil, 8–11 September 2013; pp. 124–135.
- 32. al-Rifaie M.M., Bishop J.M., Blackwell T. Information sharing impact of stochastic diffusion search on differential evolution algorithm. Memetic Comput. 2012;4:327–338. doi: 10.1007/s12293-012-0094-y.
- 33. Wilcoxon F., Katti S., Wilcox R.A. Critical Values and Probability Levels for the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test. Pearl River, NY, USA: 1963; pp. 171–259.
- 34. Bruyant P.P. Analytic and iterative reconstruction algorithms in SPECT. J. Nucl. Med. 2002;43:1343–1358.
- 35. Shepp L.A., Logan B.F. The Fourier reconstruction of a head section. IEEE Trans. Nucl. Sci. 1974;21:21–43. doi: 10.1109/TNS.1974.6499235.
- 36. Cheng S., Shi Y. Diversity control in particle swarm optimization. Proceedings of the 2011 IEEE Symposium on Swarm Intelligence; Paris, France, 11–15 April 2011; pp. 1–9.
- 37. Wang H., Rahnamayan S., Sun H., Omran M.G. Gaussian bare-bones differential evolution. IEEE Trans. Cybern. 2013;43:634–647. doi: 10.1109/TSMCB.2012.2213808.
Data Availability Statement
Not applicable.