Abstract
The quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of traditional particle swarm optimization (PSO). QPSO, originally developed for continuous search spaces, outperforms traditional PSO in search ability. This paper analyzes the main factors that affect the search ability of QPSO and, by introducing a rejection region, converts the particle movement formula into a mutation condition, thus yielding a new binary algorithm named the swarm optimization genetic algorithm (SOGA), because in form it resembles a genetic algorithm (GA) more than PSO. SOGA has crossover and mutation operators as GA does, but it does not require crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested on several nonlinear high-dimensional functions in binary search space, and the results were compared with those of BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.
1. Introduction
The particle swarm optimization (PSO) algorithm is a population-based optimization method originally introduced by Eberhart and Kennedy in 1995 [1]. In PSO, the position of a particle is represented by a vector in the search space, and the movement of the particle is governed by an assigned vector called the velocity vector. Each particle updates its velocity based on its current velocity, its own best previous position, and the global best position of the population. PSO is widely used for optimization problems because it has a simple structure and is easy to implement. However, it has some disadvantages, such as easily falling into local optima when solving complex, high-dimensional problems [2, 3]. Hence a number of variant algorithms have been proposed to overcome these disadvantages [4, 5].
Particle swarm algorithms based on probabilistic convergence form one class of such variants. These algorithms let the particles move according to a probability distribution instead of the velocity-displacement movement model. The Bare Bones PSO (BBPSO) family is a typical class of probabilistic PSO algorithms [6–8]. The Gaussian distribution was used in the original version of BBPSO, proposed by Kennedy [6]; several later BBPSO variants used other distributions, which appear to generate better results [7–9].
Inspired by quantum theory and the trajectory analysis of PSO [10], Sun et al. proposed a new probabilistic algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm [11]. In QPSO, each particle has a target point, defined as a linear combination of the particle's best previous position and the global best position. The particle appears around the target point following a double exponential distribution. The QPSO algorithm essentially belongs to the BBPSO family; its update equation uses an adaptive strategy and has fewer parameters to adjust [12–14]. QPSO has been shown to perform well in finding optimal solutions for continuous optimization problems and has been successfully applied to a wide range of areas such as multiobjective optimization [15, 16], clustering [17–19], neural network training [20–22], image processing [23, 24], engineering design [25], and dynamic optimization [26].
PSO and QPSO are effective tools for global optimization, but both were originally developed for continuous search spaces. Kennedy and Eberhart introduced a binary version of PSO for discrete problems, named binary PSO (BPSO) [27], in which the trajectories are defined as changes in the probability that each particle sets its state to 1. BPSO has a simple structure and is easy to implement; hence, it is extensively employed in optimization problems [28–30]. However, it also suffers from some disadvantages when solving complex, high-dimensional problems [28]. Sun et al. proposed binary QPSO (BQPSO), in which the target point is obtained by applying the crossover operator to the best previous position of the particle and the global best position. Experimental results show that BQPSO generally finds better solutions than BPSO [31].
In recent years, BQPSO has been used successfully in many fields [32–34]. However, although BQPSO broadens the application fields of QPSO, it does not show the same advantage that QPSO enjoys in continuous space, even though a QPSO-style algorithm should perform well on problems defined over discrete spaces. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region. It then designs a new binary-coded QPSO that has crossover and mutation operators and resembles the genetic algorithm (GA) in form; that is, the proposed algorithm is a new genetic algorithm that incorporates the core idea of QPSO. It is therefore named the swarm optimization genetic algorithm (SOGA).
Compared with GA, SOGA has no selection operator, and each individual participates in evolution based on the information of the population and its own information. Moreover, the mutation probability of SOGA is not fixed: in the early stage of the algorithm, the mutation probability is large, so the population maintains its diversity; as the algorithm iterates, the mutation probability tends to zero, so the algorithm can finally converge.
The rest of this paper is organized as follows. Section 2 is a brief introduction to PSO and binary PSO; Section 3 summarizes QPSO and binary QPSO; Section 4 introduces the mutation condition for binary coding converted from the particle movement formula in QPSO; Section 5 proposes the new binary QPSO algorithm, SOGA, and then discusses the differences between this algorithm and both QPSO and GA; Section 6 presents the experimental results on the benchmark functions; finally, the paper is concluded in Section 7.
2. Particle Swarm Optimization
Particle swarm optimization (PSO) algorithm is a population-based optimization technique used in continuous spaces. It can be mathematically described as follows.
Assume the size of the population is n and the dimension of the search space is q; then the ith particle of the swarm can be represented by a position vector Xi = (xi1, xi2,…, xiq); the velocity of a particle i is denoted by vector Vi = (vi1, vi2,…, viq); vector Pi = (pi1, pi2,…, piq) is the best previous position of particle i, called personal best position, and Pg = (pg1, pg2,…, pgq) is the best position of the population, called global best position.
The velocity of particle i is calculated as follows:
$$v_{ij}^{k+1} = \omega v_{ij}^{k} + c_1 r_1 \left(p_{ij} - x_{ij}^{k}\right) + c_2 r_2 \left(p_{gj} - x_{ij}^{k}\right) \tag{1}$$
where i = 1, 2,…, n, j = 1, 2,…, q, n is the population size, k is the number of iterations, ω is the inertia weight, c1 and c2 are acceleration coefficients, and r1 and r2 are random numbers in the interval [0,1].
Then the next position is updated as follows:
$$x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1} \tag{2}$$
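As a concrete illustration, equations (1) and (2) can be sketched in a few lines of NumPy; the vectorized layout and parameter defaults below are illustrative choices, not specifications from the paper:

```python
import numpy as np

def pso_step(X, V, P, Pg, omega=0.7, c1=2.0, c2=2.0):
    """One PSO iteration: velocities via equation (1), positions via (2).

    X, V, P are (n, q) arrays of positions, velocities, and personal bests;
    Pg is the (q,) global best position.
    """
    n, q = X.shape
    r1, r2 = np.random.rand(n, q), np.random.rand(n, q)
    V = omega * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)  # equation (1)
    X = X + V                                               # equation (2)
    return X, V
```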
The PSO algorithm is applied to solve optimization problems in the real search space, but many optimization problems are set in discrete space. Kennedy and Eberhart proposed a discrete binary version of PSO, named binary PSO (BPSO), where the particle position has two possible values, “0” or “1.” The velocity formula in BPSO remains unchanged, and the particle position is updated as follows:
$$x_{ij}^{k+1} = \begin{cases} 1, & \text{if } \mathrm{rand} < S\left(v_{ij}^{k+1}\right) \\ 0, & \text{otherwise} \end{cases} \tag{3}$$
where rand is a random number in the interval [0,1] and S(v) is the sigmoid function
$$S(v) = \frac{1}{1 + e^{-v}} \tag{4}$$
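A corresponding sketch of the BPSO position update, equations (3) and (4), might look as follows (again, the array-based form is an assumption for illustration):

```python
import numpy as np

def bpso_positions(V):
    """BPSO position update, equations (3)-(4): each bit becomes 1 with
    probability S(v) = 1 / (1 + exp(-v)), given the velocity matrix V."""
    S = 1.0 / (1.0 + np.exp(-V))                       # sigmoid, equation (4)
    return (np.random.rand(*V.shape) < S).astype(int)  # equation (3)
```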
3. Quantum-Behaved Particle Swarm Optimization
Inspired by trajectory analyses of PSO in [10], Sun et al. proposed a novel variant of PSO, named quantum-behaved particle swarm optimization (QPSO), which outperforms the traditional PSO in search ability.
QPSO sets a target point for each particle; denote Gi = (gi1, gi2,…, giq) as the target point for particle i, of which the coordinates are
$$g_{ij} = \phi_{ij} p_{ij} + \left(1 - \phi_{ij}\right) p_{gj} \tag{5}$$
where ϕij is a random number in the interval [0,1]. The trajectory analysis in [10] shows that Gi is the local attractor of particle i; that is, in PSO, particle i converges to it.
The position of particle i is updated as follows:
$$x_{ij}^{k+1} = g_{ij} \pm \alpha \left| c_j - x_{ij}^{k} \right| \ln\left(\frac{1}{u}\right) \tag{6}$$
where u is a random number in the interval [0,1], the sign + or − in (6) is taken with equal probability, and C = (c1, c2,…, cq) is known as the mean best position, defined as the average of the personal best positions of all particles; accordingly,
$$c_j = \frac{1}{n} \sum_{i=1}^{n} p_{ij}, \quad j = 1, 2, \ldots, q \tag{7}$$
Parameter α is called the Contraction-Expansion Coefficient; it can be tuned to control the convergence speed of the algorithm.
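Putting equations (5)–(7) together, one QPSO iteration can be sketched as follows; the value of α and the vectorized layout are illustrative:

```python
import numpy as np

def qpso_step(X, P, Pg, alpha=0.75):
    """One QPSO iteration following equations (5)-(7); alpha is the
    Contraction-Expansion Coefficient (0.75 is only an illustrative value)."""
    n, q = X.shape
    C = P.mean(axis=0)                       # mean best position, equation (7)
    phi = np.random.rand(n, q)
    G = phi * P + (1 - phi) * Pg             # target points, equation (5)
    u = 1.0 - np.random.rand(n, q)           # u in (0, 1], so log(1/u) is finite
    sign = np.where(np.random.rand(n, q) < 0.5, 1.0, -1.0)  # +/- with equal probability
    return G + sign * alpha * np.abs(C - X) * np.log(1.0 / u)  # equation (6)
```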
Because the update equations of QPSO are different from those of PSO, the methodology of BPSO cannot be applied directly to QPSO. Sun et al. introduced the crossover operator of GA into QPSO and proposed binary QPSO (BQPSO). In BQPSO, Xi = (xi1, xi2,…, xiq) still represents the position of particle i, but it must be emphasized that Xi is a binary string rather than a vector, and xij is the jth substring of Xi, not the jth bit of the string. Assuming the length of each substring is l, the length of Xi is lq.
The target point Gi for particle i is generated through the crossover operator; that is, BQPSO applies the crossover operation to the personal best position Pi and the global best position Pg to generate two offspring binary strings, and Gi is randomly selected from them.
Define
$$p_m = \frac{\alpha \, d_H\!\left(c_j, x_{ij}^{k}\right) \ln\left(1/u\right)}{l} \tag{8}$$
where k is the number of iterations and dH(cj, xijk) is the Hamming distance between cj and xijk. For two bit strings, the Hamming distance is the number of positions at which the corresponding bits differ. Here cj is the jth substring of the mean best position C, and the dth bit of cj is determined by the states of the dth bit of all particles' personal best positions: if more particles take on 1 at the dth bit, the dth bit of cj is 1; otherwise it is 0.
For each bit of gij, when pm > rand, the bit is flipped: if the state of the bit is 1, it is set to 0; otherwise it is set to 1.
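A sketch of this bitwise mutation, assuming pm takes the form of equation (8) with α playing the role it has in continuous QPSO (the helper and its default α are illustrative, not the reference BQPSO implementation):

```python
import numpy as np

def bqpso_mutate(g, c, x, alpha=1.2):
    """Bitwise mutation of a target substring g. g, c, x are equal-length 0/1
    arrays: the target substring, the mean-best substring, and the particle's
    current substring."""
    l = g.size
    u = 1.0 - np.random.rand()                                   # u in (0, 1]
    pm = alpha * np.count_nonzero(c != x) * np.log(1.0 / u) / l  # equation (8)
    flip = np.random.rand(l) < pm                                # pm > rand, per bit
    return np.where(flip, 1 - g, g)                              # flip selected bits
```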
4. A Mutation Condition for Use in Binary Space
The QPSO algorithm has better global search capability than the traditional PSO algorithm because it abandons the velocity-displacement model of traditional PSO. In QPSO, the movement of a particle toward its target point follows no determined trajectory; the particle can appear at any position in the whole feasible search space with a certain probability, governed by the double exponential distribution [13, 14]. Such a position can be far from the target point and may be superior to the current global best position of the population. This property should also be reflected in the construction of a binary QPSO algorithm.
The probability density function of particle i in QPSO is
$$f(X_i) = \frac{1}{L_i} \exp\!\left(-\frac{2\left|X_i - G_i\right|}{L_i}\right) \tag{9}$$
Set λi = 2/Li, and yi = Xi − Gi; then (9) can be rewritten as
$$f(y_i) = \frac{\lambda_i}{2} e^{-\lambda_i \left|y_i\right|} \tag{10}$$
That is, yi obeys the double exponential distribution, whose mean and variance are E(yi) = 0 and D(yi) = 2/λi2. The graph of probability density function (10) is shown in Figure 1. Since the domain of yi is (−∞, +∞), a particle can appear at any position of the search space, but the probability that a particle appears at a position far away from its target point is small. When λi → +∞, the variance D(yi) = 2/λi2 → 0, which means that Xi converges to Gi with probability 1.
Figure 1: Probability density function of the double exponential distribution.
When the position of a particle uses binary encoding, it is hard to describe the relative position of two points with a measure on binary strings. Analogous to setting a rejection region, we set a threshold value v (v > 0). When the value of yi falls into the rejection region, as shown in Figure 2, we set yi = 0, that is, Xi = Gi; otherwise Xi = mutation(Gi), where mutation(Gi) denotes applying the mutation operation to Gi.
Figure 2: Rejection region of the probability density function.
For any u, which is a random number in the interval [0,1], the condition that yi does not fall into the rejection region is
$$\int_{-\infty}^{-v} f\left(y_i\right) dy_i + \int_{v}^{+\infty} f\left(y_i\right) dy_i > u \tag{11}$$
The left side of Condition (11) can be written as
$$\int_{-\infty}^{-v} \frac{\lambda_i}{2} e^{\lambda_i y_i}\, dy_i + \int_{v}^{+\infty} \frac{\lambda_i}{2} e^{-\lambda_i y_i}\, dy_i = e^{-\lambda_i v} \tag{12}$$
Thus Condition (11) means $e^{-\lambda_i v} > u$; accordingly,
$$v < \frac{1}{\lambda_i} \ln\left(\frac{1}{u}\right) = \frac{L_i}{2} \ln\left(\frac{1}{u}\right) \tag{13}$$
In order to ensure that the algorithm can converge, set
$$L_i = 2\, d\left(C, X_i\right) \tag{14}$$
where C is the mean best position of the population.
Then Condition (13) is
$$d\left(C, X_i\right) \ln\left(\frac{1}{u}\right) > v \tag{15}$$
where d(·) measures the difference between two binary strings; the Hamming distance can be used here.
Assume y = ln(1/u); then
$$d\left(C, X_i\right) > \frac{v}{y} \tag{16}$$
In (16), when the value of y is small, the right-hand side v/y changes rapidly, as shown in Figure 3, so Condition (15) is strongly affected by the initial value of d(C, Xi). Condition (15) can therefore be changed into its equivalent form:
$$d\left(C, X_i\right) > \frac{1}{\sigma} \ln\left(\frac{1}{u}\right) \tag{17}$$
where parameter σ is a constant that is greater than zero.
Figure 3: Graph of equation (16).
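In code, Condition (17) reduces to a one-line test. A minimal sketch, assuming d(·) is the Hamming distance:

```python
import math
import random

def should_mutate(d, sigma=1.0):
    """Mutation Condition (17): d(C, Xi) > (1/sigma) * ln(1/u), where d is
    the Hamming distance between C and Xi and u is uniform on (0, 1]."""
    u = 1.0 - random.random()        # avoid u == 0 so log(1/u) stays finite
    return d > math.log(1.0 / u) / sigma
```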
5. Swarm Optimization Genetic Algorithm
Based on the mutation condition (17), a mutation operator is introduced into BQPSO. Xi still represents the position of particle i, Pi is the personal best position of particle i, Pg is the global best position, and C is the mean best position, defined as in BQPSO.
Unlike in BQPSO, the crossover and mutation operations are applied to the whole binary string instead of to individual bits. Because the procedure of the algorithm is similar to GA, it is named the swarm optimization genetic algorithm (SOGA). The process can be described as follows.
Initialize a population of particles Xi in binary space;
Set the personal best position Pi = Xi, and compute C;
Evaluate the fitness f(Xi) of each particle and determine the global best position Pg;
while the termination condition is not reached do
  for each particle i do
    Apply crossover to Pi and Pg to generate two offspring binary strings, and select Gi randomly from them;
    if Condition (17) is true then
      Apply mutation to Gi;
    end if
    Set Xi = Gi;
    Compute the fitness f(Xi) and update Pi;
  end for
  Update Pg and the mean best position C;
end while
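For concreteness, the following Python sketch implements this pseudocode under a few assumptions of ours: NumPy 0/1 arrays for positions, single-point crossover and single-point mutation (the operators later used in Section 6), and a fitness function to be maximized:

```python
import numpy as np

def soga(fitness, length, pop_size=50, iters=500, sigma=1.0, rng=None):
    """Sketch of SOGA maximizing `fitness` over 0/1 strings of `length`."""
    rng = rng or np.random.default_rng()
    X = rng.integers(0, 2, size=(pop_size, length))   # initial population
    P = X.copy()                                      # personal best positions
    fP = np.array([fitness(x) for x in P])
    Pg = P[fP.argmax()].copy()                        # global best position

    for _ in range(iters):
        C = (P.mean(axis=0) > 0.5).astype(int)        # mean best position (majority bit)
        for i in range(pop_size):
            cut = rng.integers(1, length)             # single-point crossover of Pi, Pg
            kids = (np.concatenate([P[i][:cut], Pg[cut:]]),
                    np.concatenate([Pg[:cut], P[i][cut:]]))
            G = kids[rng.integers(2)].copy()          # Gi: one offspring at random
            d = np.count_nonzero(C != X[i])           # Hamming distance d(C, Xi)
            u = 1.0 - rng.random()                    # u in (0, 1]
            if d > np.log(1.0 / u) / sigma:           # mutation Condition (17)
                G[rng.integers(length)] ^= 1          # single-point mutation
            X[i] = G
            f = fitness(X[i])
            if f > fP[i]:                             # update personal best
                fP[i], P[i] = f, X[i].copy()
        Pg = P[fP.argmax()].copy()                    # update global best
    return Pg, fP.max()
```

For a minimization benchmark, the negated objective can be passed as the fitness, which is consistent with characteristic (1) below.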
Compared with a GA using the same crossover and mutation operators, SOGA has the following characteristics:
(1) SOGA has no selection operator and no crossover probability, and its crossover operator is applied directly to Pi and Pg. Therefore, the form of the fitness function f(Xi) has no effect on the algorithm, and the target function of a maximization problem can be used directly as the fitness function.
(2) Condition (17) can be turned into

$$e^{-\sigma d\left(C, X_i\right)} < u \tag{18}$$

Since σd(C, Xi) ≥ 0, the range of exp(−σd(C, Xi)) is (0,1), and u is a random number in the interval [0,1]; thus Condition (18) is equivalent to an adaptive mutation probability

$$p_m = 1 - e^{-\sigma d\left(C, X_i\right)} \tag{19}$$

where σ is a constant greater than zero and d(C, Xi) decreases as the iterations proceed. Therefore pm shrinks, which causes the algorithm to converge.
(3) σ is the only parameter of SOGA; it can be tuned to control the convergence speed of the algorithm, like the Contraction-Expansion Coefficient α in BQPSO.
When the value of σ is 0.5, 1, and 2, the curves of the mutation probability pm as a function of d(·) are shown in Figure 4. The figure demonstrates that the smaller the value of σ, the faster the convergence speed of the algorithm; it also shows that the global search ability of the algorithm is reduced when σ is too small. Hence σ = 1 is used in SOGA.
Figure 4: Curves of the mutation probability pm for σ = 0.5, 1, and 2.
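The quantity plotted in Figure 4 is easy to reproduce; a small sketch tabulating pm from equation (19) for the three σ values:

```python
import numpy as np

# pm = 1 - exp(-sigma * d), equation (19), for the sigma values in Figure 4
d = np.arange(0, 11)
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}:", np.round(1 - np.exp(-sigma * d), 3))
```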
6. Experimental Results
The proposed SOGA is compared with BPSO, BQPSO, and GA. They are tested on the following 10 benchmark problems to be minimized [28, 35]:
(1) Sphere Function
$$F_1(x) = \sum_{i=1}^{n} x_i^2 \tag{20}$$
(2) Schwefel's Problem 2.22
$$F_2(x) = \sum_{i=1}^{n} \left|x_i\right| + \prod_{i=1}^{n} \left|x_i\right| \tag{21}$$
(3) Schwefel's Problem 1.2
$$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 \tag{22}$$
(4) Step Function
$$F_4(x) = \sum_{i=1}^{n} \left( \left\lfloor x_i + 0.5 \right\rfloor \right)^2 \tag{23}$$
(5) Schwefel's Problem 2.21
$$F_5(x) = \max_{1 \le i \le n} \left|x_i\right| \tag{24}$$
(6) 2n Minima Function
$$F_6(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{x_i^4 - 16 x_i^2 + 5 x_i}{2} \tag{25}$$
(7) Generalized Schwefel's Problem 2.26
$$F_7(x) = -\sum_{i=1}^{n} x_i \sin\!\left(\sqrt{\left|x_i\right|}\right) \tag{26}$$
(8) Ackley Function
$$F_8(x) = -20 \exp\!\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\frac{1}{n} \sum_{i=1}^{n} \cos\left(2\pi x_i\right)\right) + 20 + e \tag{27}$$
(9) Generalized Penalized Function
$$F_9(x) = \frac{\pi}{n} \left\{ 10 \sin^2\left(\pi y_1\right) + \sum_{i=1}^{n-1} \left(y_i - 1\right)^2 \left[ 1 + 10 \sin^2\left(\pi y_{i+1}\right) \right] + \left(y_n - 1\right)^2 \right\} + \sum_{i=1}^{n} u\left(x_i, 10, 100, 4\right) \tag{28}$$

where $y_i = 1 + (x_i + 1)/4$ and

$$u(x, a, k, m) = \begin{cases} k (x - a)^m, & x > a \\ 0, & -a \le x \le a \\ k (-x - a)^m, & x < -a \end{cases}$$
(10) Griewank Function
$$F_{10}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1 \tag{29}$$
Among these functions, F1–F5 are unimodal and F6–F10 are multimodal. Their optimal values are all zero except those of F6 and F7, whose minimum values are −78.3323 and −418.9829n, respectively, where n is the dimension of the function.
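For reference, a few of these benchmarks are sketched below in NumPy form; the selection and function names are ours, and the definitions follow equations (20), (24), (27), and (29):

```python
import numpy as np

def sphere(x):          # F1, equation (20)
    return np.sum(x**2)

def schwefel_2_21(x):   # F5, equation (24)
    return np.max(np.abs(x))

def ackley(x):          # F8, equation (27)
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def griewank(x):        # F10, equation (29)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```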
In the experiments, the dimension of each function is 8, and the binary code length of each continuous variable is 15, so the particle length is 120 for each function. The population size is 50, and the total number of iterations is set to 500. The parameters of the algorithms are listed in Table 1, where pc is the crossover probability and pm is the mutation probability in GA.
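The 15-bit encoding implies a decoding step from bit strings to real variables. A sketch, in which the search bounds lo and hi are assumptions (the paper does not state the ranges used):

```python
import numpy as np

def decode(bits, lo, hi, l=15):
    """Decode a 0/1 array of length q*l into q real variables in [lo, hi];
    l = 15 matches the experiments, while lo and hi are assumed bounds."""
    q = bits.size // l
    weights = 2 ** np.arange(l - 1, -1, -1)     # MSB-first binary weights
    vals = bits.reshape(q, l) @ weights         # integer value per variable
    return lo + (hi - lo) * vals / (2**l - 1)   # map to [lo, hi]
```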
Table 1.
Parameters of algorithms applied in the experiments.
| Algorithm | Parameter settings |
|---|---|
| SOGA | σ = 1 |
| BPSO | ω = 0.7, c1 = c2 = 2, Vmax = 6 |
| BQPSO | α = 1.1~1.4 |
| GA | pc = 0.90, pm = 0.10~0.15 |
The four algorithms were each run independently 30 times on the benchmark functions, and the best target function value was recorded for each run. To compare the four algorithms, the 30 data sets were analyzed using the following statistics: the mean, the standard deviation (SD), the best, the worst, and the median; the results are reported in Tables 2 and 3.
Table 2.
Minimization results for BPSO, BQPSO, and SOGA.
| Function | Algorithm | Best | Mean | SD | Worst | Median |
|---|---|---|---|---|---|---|
| F1 | BPSO | 65.1674 | 278.1128 | 160.0441 | 702.2221 | 244.4259 |
| | BQPSO | 1.4096 | 3.3647 | 1.4272 | 6.5327 | 3.1107 |
| | SOGA | 7.4510e−05 | 1.6641e−04 | 4.8945e−04 | 0.0028 | 7.4510e−05 |
| F2 | BPSO | 1.5918 | 3.0268 | 0.9300 | 5.3401 | 2.9463 |
| | BQPSO | 0.2155 | 0.4080 | 0.1226 | 0.7025 | 0.4025 |
| | SOGA | 0.0024 | 0.0026 | 0.0005 | 0.0049 | 0.0024 |
| F3 | BPSO | 121.8395 | 1171.3480 | 911.2165 | 3261.0044 | 860.7055 |
| | BQPSO | 8.2112 | 30.8752 | 25.8621 | 120.6034 | 21.4084 |
| | SOGA | 263.9170 | 1919.6076 | 1078.9650 | 3782.1623 | 2175.0068 |
| F4 | BPSO | 56 | 312.7667 | 248.9619 | 1154 | 250.5 |
| | BQPSO | 1 | 4.8667 | 2.4174 | 11 | 5 |
| | SOGA | 0 | 0.1000 | 0.4026 | 2 | 0 |
| F5 | BPSO | 9.0243 | 16.3089 | 5.5677 | 27.5491 | 14.3406 |
| | BQPSO | 0.9430 | 1.8437 | 0.5141 | 3.3296 | 1.8342 |
| | SOGA | 0.0031 | 0.2228 | 0.6241 | 3.1281 | 0.0275 |
| F6 | BPSO | −77.5179 | −74.5146 | 1.8144 | −70.6644 | −74.5941 |
| | BQPSO | −72.7114 | −69.3385 | 1.7327 | −66.4601 | −68.9703 |
| | SOGA | −77.7357 | −76.2789 | 0.6355 | −75.1676 | −76.2048 |
| F7 | BPSO | −3136.9403 | −2724.3377 | 194.8185 | −2263.5189 | −2752.3562 |
| | BQPSO | −1810.3356 | −1498.0555 | 157.2211 | −1272.7048 | −1454.7974 |
| | SOGA | −3248.4725 | −2913.3245 | 218.1944 | −2467.9053 | −2952.6730 |
| F8 | BPSO | 4.5456 | 7.6315 | 2.0230 | 15.5215 | 7.8838 |
| | BQPSO | 0.9446 | 1.9230 | 0.3842 | 2.5818 | 1.9273 |
| | SOGA | 0.0040 | 1.3614 | 1.1703 | 3.1276 | 1.8407 |
| F9 | BPSO | 2.6925 | 12.6817 | 7.3535 | 31.3502 | 10.9785 |
| | BQPSO | 0.7294 | 1.7635 | 0.5748 | 2.9455 | 1.7687 |
| | SOGA | 0.0742 | 1.4639 | 1.4386 | 5.2594 | 0.9397 |
| F10 | BPSO | 1.5798 | 3.6606 | 1.5582 | 7.7680 | 3.4519 |
| | BQPSO | 0.3552 | 0.8360 | 0.2063 | 1.1561 | 0.8923 |
| | SOGA | 0.0546 | 0.3702 | 0.1933 | 0.6738 | 0.4183 |
Table 3.
Minimization results for SOGA and GA.
| Function | Algorithm | Best | Mean | SD | Worst | Median |
|---|---|---|---|---|---|---|
| F1 | SOGA | 7.4510e−05 | 1.6641e−04 | 4.8945e−04 | 0.0028 | 7.4510e−05 |
| | GA | 0.0013 | 76.0248 | 166.2636 | 664.4937 | 2.5246 |
| | SOGA∗ | 7.4510e−05 | 0.0074 | 0.0189 | 0.0773 | 0.0007 |
| | GA∗ | 0.0016 | 1.1302 | 2.3717 | 11.0953 | 0.2626 |
| F2 | SOGA | 0.0024 | 0.0026 | 0.0005 | 0.0049 | 0.0024 |
| | GA | 0.0031 | 0.3220 | 0.4294 | 1.3587 | 0.1172 |
| | SOGA∗ | 0.0031 | 0.0069 | 0.0076 | 0.0433 | 0.1059 |
| | GA∗ | 0.0220 | 0.1423 | 0.1452 | 0.7703 | 2.5471 |
| F3 | SOGA | 263.9170 | 1919.6076 | 1078.9650 | 3782.1623 | 2175.0068 |
| | GA | 823.0982 | 3805.95462 | 1587.4665 | 8753.2186 | 3602.7854 |
| | SOGA∗ | 0.0238 | 27.5765 | 95.7945 | 434.9900 | 0.5557 |
| | GA∗ | 482.1519 | 4922.3981 | 2417.1198 | 10470.9239 | 4705.2068 |
| F4 | SOGA | 0 | 0.1000 | 0.4026 | 2 | 0 |
| | GA | 0 | 94.3667 | 457.2142 | 2509 | 1 |
| | SOGA∗ | 0 | 0 | 0 | 0 | 0 |
| | GA∗ | 0 | 3.6333 | 7.5177 | 36 | 2 |
| F5 | SOGA | 0.0031 | 0.2228 | 0.6241 | 3.1281 | 0.0275 |
| | GA | 9.3783 | 27.0193 | 11.2953 | 53.1297 | 27.1523 |
| | SOGA∗ | 0.0458 | 0.1908 | 0.0952 | 0.4913 | 0.1831 |
| | GA∗ | 6.4913 | 20.0783 | 10.5937 | 49.6017 | 18.9795 |
| F6 | SOGA | −77.7357 | −76.2789 | 0.6355 | −75.1676 | −76.2048 |
| | GA | −77.9857 | −75.2636 | 1.8587 | −69.1535 | −75.5698 |
| | SOGA∗ | −78.3316 | −77.2289 | 0.6240 | −75.8823 | −77.1074 |
| | GA∗ | −77.5952 | −74.8886 | 1.902 | −70.5101 | −75.457 |
| F7 | SOGA | −3248.4725 | −2642.8652 | 256.1202 | −2001.7012 | −2952.6730 |
| | GA | −3113.4015 | −2088.3848 | 342.5752 | −1355.6609 | −2692.5141 |
| | SOGA∗ | −3351.7352 | −3111.4602 | 154.6330 | −2736.5711 | −3114.6197 |
| | GA∗ | −3150.5510 | −2733.6038 | 180.6351 | −2424.3367 | −2726.4449 |
| F8 | SOGA | 0.0040 | 1.3614 | 1.1703 | 3.1276 | 1.8407 |
| | GA | 1.8409 | 3.3475 | 1.8626 | 10.2185 | 2.6024 |
| | SOGA∗ | 0.0040 | 0.0223 | 0.0216 | 0.0911 | 0.0141 |
| | GA∗ | 0.3384 | 2.7764 | 0.9262 | 4.3458 | 2.7676 |
| F9 | SOGA | 0.0742 | 1.4639 | 1.4386 | 5.2594 | 0.9397 |
| | GA | 1.5468 | 10.2883 | 7.6703 | 34.3495 | 8.7194 |
| | SOGA∗ | 0.0348 | 0.6195 | 0.5317 | 1.9344 | 0.4575 |
| | GA∗ | 0.6679 | 6.0007 | 4.0074 | 13.6647 | 5.7359 |
| F10 | SOGA | 0.0546 | 0.3702 | 0.1933 | 0.6738 | 0.4183 |
| | GA | 0.1600 | 0.9633 | 1.3811 | 7.9350 | 0.6882 |
| | SOGA∗ | 0.2550 | 0.4715 | 0.1208 | 0.7548 | 0.4758 |
| | GA∗ | 0.1805 | 0.7492 | 0.2431 | 1.2999 | 0.7741 |
∗The crossover and mutation operations act on substrings.
Moreover, statistical tests were conducted to determine whether the average best results differ with statistical significance. The confidence level is fixed at 0.95, and the tests return p values, which are shown in Tables 4 and 5. SAS was used for the statistical testing; in the SAS system, p values smaller than 0.0001 are displayed as <0.0001. The value of h in Tables 4 and 5 gives the result of the pairwise comparison: h = 1 indicates that the former algorithm in the comparison is significantly better than the latter; h = 0 indicates no significant difference between the two compared algorithms; h = −1 indicates that the former algorithm is significantly worse than the latter.
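The paper states only that SAS was used; an equivalent pairwise comparison could be sketched with SciPy as follows, where the choice of Welch's two-sample t-test is our assumption:

```python
import numpy as np
from scipy import stats

def compare(a, b, alpha=0.05):
    """Pairwise comparison of two sets of 30 best values (minimization).
    Returns (p, h) with h as defined above; the specific test is an
    assumption, since the paper only names SAS and the 0.95 level."""
    t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
    if p >= alpha:
        return p, 0                                 # no significant difference
    return p, (1 if np.mean(a) < np.mean(b) else -1)
```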
Table 4.
Comparison of SOGA with other algorithms and GA∗ with GA.
| Function | Test | SOGA vs. BPSO | SOGA vs. BQPSO | SOGA vs. GA | SOGA vs. GA∗ | GA∗ vs. GA |
|---|---|---|---|---|---|---|
| F1 | p value | <0.0001 | <0.0001 | 0.0151 | 0.0115 | 0.0166 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F2 | p value | <0.0001 | <0.0001 | 0.0001 | <0.0001 | 0.0340 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F3 | p value | 0.0724 | <0.0001 | <0.0001 | <0.0001 | 0.4280 |
| | h | −1 | −1 | 1 | 1 | 0 |
| F4 | p value | <0.0001 | <0.0001 | 0.2633 | 0.0119 | 0.2816 |
| | h | 1 | 1 | 0 | 1 | 0 |
| F5 | p value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | 0.0171 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F6 | p value | <0.0001 | <0.0001 | 0.0006 | <0.0001 | 0.4430 |
| | h | 1 | 1 | 1 | 1 | 0 |
| F7 | p value | 0.0081 | <0.0001 | 0.0004 | 0.0080 | 0.2582 |
| | h | 1 | 1 | 1 | 1 | 0 |
| F8 | p value | <0.0001 | 0.0002 | <0.0001 | <0.0001 | 0.2063 |
| | h | 1 | 1 | 1 | 1 | 0 |
| F9 | p value | <0.0001 | 0.3884 | <0.0001 | <0.0001 | 0.0043 |
| | h | 1 | 0 | 1 | 1 | 1 |
| F10 | p value | <0.0001 | <0.0001 | 0.0291 | <0.0001 | 0.4065 |
| | h | 1 | 1 | 1 | 1 | 0 |
∗The crossover and mutation operations act on substrings.
Table 5.
Comparison of SOGA∗ with other algorithms.
| Function | Test | SOGA∗ vs. BPSO | SOGA∗ vs. BQPSO | SOGA∗ vs. GA | SOGA∗ vs. GA∗ | SOGA∗ vs. SOGA |
|---|---|---|---|---|---|---|
| F1 | p value | <0.0001 | <0.0001 | 0.0151 | 0.0120 | 0.0409 |
| | h | 1 | 1 | 1 | 1 | −1 |
| F2 | p value | <0.0001 | <0.0001 | 0.0002 | <0.0001 | 0.0308 |
| | h | 1 | 1 | 1 | 1 | −1 |
| F3 | p value | <0.0001 | 0.8561 | <0.0001 | <0.0001 | <0.0001 |
| | h | 1 | 0 | 1 | 1 | 1 |
| F4 | p value | <0.0001 | <0.0001 | 0.2629 | 0.0104 | 0.1555 |
| | h | 1 | 1 | 0 | 1 | 0 |
| F5 | p value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | 0.2768 |
| | h | 1 | 1 | 1 | 1 | 0 |
| F6 | p value | <0.0001 | <0.0001 | 0.0002 | <0.0001 | 0.3923 |
| | h | 1 | 1 | 1 | 1 | 0 |
| F7 | p value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F8 | p value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F9 | p value | <0.0001 | <0.0001 | <0.0001 | <0.0001 | 0.0044 |
| | h | 1 | 1 | 1 | 1 | 1 |
| F10 | p value | <0.0001 | <0.0001 | 0.0435 | <0.0001 | 0.1231 |
| | h | 1 | 1 | 1 | 1 | 0 |
∗The crossover and mutation operations act on substrings.
The results of SOGA compared with BPSO and BQPSO are listed in Tables 2 and 4. They show that SOGA surpasses BPSO and BQPSO in minimizing all ten benchmark functions except F3. Figure 5 illustrates the convergence of the best target function value of the population in a single run; as shown there, SOGA converges faster than BPSO and BQPSO.
Figure 5: Convergence processes of BPSO, BQPSO, and SOGA.
Since SOGA has almost the same form as GA, the same crossover and mutation operators, single-point crossover and single-point mutation, are used in both algorithms. In GA, the elitist strategy is applied to improve convergence and optimization results. It should be noted that GA does not converge within 500 iterations on most of the functions; for a fairer comparison, the number of iterations of GA is set to 2000 in Table 3 to ensure that the algorithm fully converges.
For high-dimensional functions, assume Xi = (xi1, xi2,…, xiq) is the binary string of particle (or individual) i, where q is the number of dimensions and xij is the jth substring of Xi. It is easy for GA or SOGA to apply the crossover and mutation operations to each substring xij in turn, instead of to the whole Xi. For instance, in SOGA, Condition (17) can be written as
$$d\left(c_j, x_{ij}\right) > \frac{1}{\sigma} \ln\left(\frac{1}{u}\right) \tag{30}$$
for each substring xij of Xi, where cj is the jth substring of C. The process of SOGA when the crossover and mutation operations act on substrings can then be described as follows; the same scheme can also be used in GA.
Initialize a population of particles Xi in binary space;
Set the personal best position Pi = Xi, and compute C;
Evaluate the fitness f(Xi) of each particle and determine the global best position Pg;
while the termination condition is not reached do
  for each particle i do
    for each substring j of particle i do
      Apply crossover to Pij and Pgj to generate two offspring binary strings, and select Gij randomly from them;
      if Condition (30) is true then
        Apply mutation to Gij;
      end if
      Set Xij = Gij;
    end for
    Compute the fitness f(Xi) and update Pi;
  end for
  Update Pg and the mean best position C;
end while
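A sketch of the per-particle update in this substring variant, under the same assumptions as the earlier SOGA sketch:

```python
import numpy as np

def soga_substring_step(Xi, Pi, Pg, C, q, l=15, sigma=1.0, rng=None):
    """Update one particle when crossover and mutation act on substrings.
    Xi, Pi, Pg, C are 0/1 arrays of length q*l, viewed as q substrings."""
    rng = rng or np.random.default_rng()
    out = np.empty_like(Xi)
    for j in range(q):
        s = slice(j * l, (j + 1) * l)
        cut = rng.integers(1, l)                   # single-point crossover of Pij, Pgj
        kids = (np.concatenate([Pi[s][:cut], Pg[s][cut:]]),
                np.concatenate([Pg[s][:cut], Pi[s][cut:]]))
        Gj = kids[rng.integers(2)].copy()          # Gij: one offspring at random
        d = np.count_nonzero(C[s] != Xi[s])        # Hamming distance d(cj, xij)
        u = 1.0 - rng.random()                     # u in (0, 1]
        if d > np.log(1.0 / u) / sigma:            # Condition (30)
            Gj[rng.integers(l)] ^= 1               # single-point mutation
        out[s] = Gj
    return out
```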
The convergence processes of SOGA and GA when the crossover and mutation operations act on substrings of particles (or individuals) are shown in Figure 6; the results of SOGA and GA are listed in Tables 3, 4, and 5. The experimental results show that SOGA is clearly superior to GA in solution accuracy and convergence. For high-dimensional functions, applying the crossover and mutation operations to substrings is an effective way to improve convergence speed and optimization ability; as shown in Tables 3 and 4, it significantly improves the convergence rate of GA. Its influence on SOGA is less pronounced; according to Table 5, the substring version of SOGA performs better in minimizing functions F3, F7, F8, and F9, especially F3.
Figure 6: Convergence processes of SOGA and GA when the crossover and mutation operations act on substrings.
7. Conclusions
In this study, SOGA, a binary swarm intelligence algorithm based on QPSO and binary QPSO, was introduced. It converts the movement formula of QPSO into a mutation condition, thereby introducing the mutation operator of GA. SOGA has a similar form to GA but does not require crossover and mutation probabilities to be set, so it has fewer parameters to control; it thus integrates the strengths of GA and PSO. The experimental results show that SOGA is distinctly superior to BPSO, BQPSO, and GA in terms of solution accuracy and convergence. Furthermore, since SOGA uses the same crossover and mutation operators as GA, many existing improvements to GA can be applied to it as well; the algorithm therefore has broad application and research prospects.
Acknowledgments
The work was supported by the National Natural Science Foundation of China (551276199).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
- 1. Eberhart R. C., Kennedy J. A new optimizer using particle swarm theory. Proceedings of the 6th International Symposium on Micromachine and Human Science; October 1995; Nagoya, Japan. pp. 39–43.
- 2. Van den Bergh F., Engelbrecht A. P. A new locally convergent particle swarm optimiser. Proceedings of the International Conference on Systems, Man and Cybernetics; October 2002; pp. 94–99.
- 3. Van den Bergh F., Engelbrecht A. P. A convergence proof for the particle swarm optimiser. Fundamenta Informaticae. 2010;105(4):341–374. doi: 10.3233/FI-2010-370.
- 4. Poli R., Kennedy J., Blackwell T. Particle swarm optimization: an overview. Swarm Intelligence. 2007;1(1):33–57. doi: 10.1007/s11721-007-0002-0.
- 5. Esmin A. A. A., Coelho R. A., Matwin S. A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artificial Intelligence Review. 2015;44(1):23–45. doi: 10.1007/s10462-013-9400-4.
- 6. Kennedy J. Bare bones particle swarms. Proceedings of the IEEE Swarm Intelligence Symposium (SIS '03); 2003; Indianapolis, Ind, USA. pp. 80–87.
- 7. Kennedy J. Probability and dynamics in the particle swarm. Proceedings of the Congress on Evolutionary Computation (CEC '04); June 2004; pp. 340–347.
- 8. Richer T. J., Blackwell T. M. The Lévy particle swarm. Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06); July 2006; pp. 808–815.
- 9. Vafashoar R., Meybodi M. R. Multi swarm bare bones particle swarm optimization with distribution adaption. Applied Soft Computing. 2016;47:534–552. doi: 10.1016/j.asoc.2016.06.028.
- 10. Clerc M., Kennedy J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002;6(1):58–73. doi: 10.1109/4235.985692.
- 11. Sun J., Feng B., Xu W. Particle swarm optimization with particles having quantum behavior. Proceedings of the Congress on Evolutionary Computation (CEC '04); June 2004; pp. 325–331.
- 12. Sun J., Xu W., Feng B. Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics; October 2005; pp. 3049–3054.
- 13. Sun J., Xu W., Feng B. A global search strategy of quantum-behaved particle swarm optimization. Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems; 2005; pp. 111–116.
- 14. Sun J., Fang W., Wu X., Palade V., Xu W. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection. Evolutionary Computation. 2012;20(3):349–393. doi: 10.1162/EVCO_a_00049.
- 15. Omkar S. N., Khandelwal R., Ananth T. V. S., Narayana Naik G., Gopalakrishnan S. Quantum behaved Particle Swarm Optimization (QPSO) for multi-objective design optimization of composite structures. Expert Systems with Applications. 2009;36(8):11312–11322. doi: 10.1016/j.eswa.2009.03.006.
- 16. Zhang T., Hu T., Chen J. W., Wan Z., Guo X. Solving bilevel multiobjective programming problem by elite quantum behaved particle swarm optimization. Abstract and Applied Analysis. 2012;2012:102482. doi: 10.1155/2012/102482.
- 17. Sun J., Xu W., Ye B. Quantum-behaved particle swarm optimization clustering algorithm. Proceedings of the International Conference on Advanced Data Mining and Applications; 2006; pp. 340–347.
- 18. Lu K., Fang K., Xie G. A hybrid quantum-behaved particle swarm optimization algorithm for clustering analysis. Proceedings of the 5th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD); October 2008; China. pp. 21–25.
- 19. Zhang C., Chen W. Quantum-behaved particle swarm optimization dynamic clustering algorithm. Advanced Materials Research. 2013;694–697:2757–2760. doi: 10.4028/www.scientific.net/AMR.694-697.2757.
- 20. Li S., Wang R., Hu W., Sun J. A new QPSO based BP neural network for face detection. Proceedings of the International Conference on Fuzzy Information and Engineering; 2007; pp. 355–363.
- 21. Lian G. Y., Huang K. L., Chen J. H., Gao F. Q. Training algorithm for radial basis function neural network based on quantum-behaved particle swarm optimization. International Journal of Computer Mathematics. 2010;87(1–3):629–641. doi: 10.1080/00207160802166465.
- 22. Cheng C.-T., Niu W.-J., Feng Z.-K., Shen J.-J., Chau K.-W. Daily reservoir runoff forecasting method using artificial neural network based on quantum-behaved particle swarm optimization. Water. 2015;7(8):4232–4246. doi: 10.3390/w7084232.
- 23. Lei X., Fu A. Two-dimensional maximum entropy image segmentation method based on quantum-behaved particle swarm optimization algorithm. Proceedings of the 4th International Conference on Natural Computation (ICNC '08); October 2008; pp. 692–696.
- 24. Su X., Fang W., Shen Q., Hao X. An image enhancement method using the quantum-behaved particle swarm optimization with an adaptive strategy. Mathematical Problems in Engineering. 2013;2013:824787. doi: 10.1155/2013/824787.
- 25. Coelho L. D. S. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Systems with Applications. 2010;37(2):1676–1683. doi: 10.1016/j.eswa.2009.06.044.
- 26. Fang W., Wang M., Li C. Solving dynamic optimization problems based on an improved clustering quantum-behaved particle swarm optimizer. Journal of Computational and Theoretical Nanoscience. 2016;13(6):3540–3547. doi: 10.1166/jctn.2016.5181.
- 27. Kennedy J., Eberhart R. C. A discrete binary version of the particle swarm algorithm. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics; October 1997; Orlando, Fla, USA. pp. 4104–4108.
- 28. Beheshti Z., Shamsuddin S. M., Hasan S. Memetic binary particle swarm optimization for discrete optimization problems. Information Sciences. 2015;299:58–84. doi: 10.1016/j.ins.2014.12.016.
- 29. Banka H., Dara S. A Hamming distance based binary particle swarm optimization (HDBPSO) algorithm for high dimensional feature selection, classification and validation. Pattern Recognition Letters. 2015;52:94–100. doi: 10.1016/j.patrec.2014.10.007.
- 30. Bharti K. K., Singh P. K. Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Applied Soft Computing. 2016;43:20–34. doi: 10.1016/j.asoc.2016.01.019.
- 31. Sun J., Xu W., Fang W., Chai Z. Quantum-behaved particle swarm optimization with binary encoding. Adaptive and Natural Computing Algorithms, Lecture Notes in Computer Science. 2007;4431:376–385.
- 32. Zhang J., Zhou Z., Gao W., Ma Y., Ye Y. Cognitive radio adaptation decision engine based on binary quantum-behaved particle swarm optimization. Chinese Journal of Scientific Instrument. 2011;32(2):221–225.
- 33. Xi M., Sun J., Liu L., Fan F., Wu X. Cancer feature selection and classification using a binary quantum-behaved particle swarm optimization and support vector machine. Computational and Mathematical Methods in Medicine. 2016;2016:3572705. doi: 10.1155/2016/3572705.
- 34. Yan J., Duan S., Huang T., Wang L. Hybrid feature matrix construction and feature selection optimization-based multi-objective QPSO for electronic nose in wound infection detection. Sensor Review. 2016;36(1):23–33. doi: 10.1108/SR-01-2015-0011.
- 35. Digalakis J. G., Margaritis K. G. An experimental study of benchmarking functions for genetic algorithms. International Journal of Computer Mathematics. 2002;79(4):403–416. doi: 10.1080/00207160210939.