The Scientific World Journal. 2013 Oct 31;2013:860289. doi: 10.1155/2013/860289

On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

Martins Akugbe Arasomwan 1, Aderemi Oluyinka Adewumi 1,*
PMCID: PMC3833296  PMID: 24324383

Abstract

The linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage of the search space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.

1. Introduction

The idea of particle swarm optimization (PSO) stems from biology, where a swarm of birds coordinates itself in order to achieve a goal. When a swarm of birds looks for food, its individuals spread out in the environment and move around independently, with some degree of randomness that enables them to discover food accumulations. After a while, one of them will find something digestible and, being social, communicates this to its neighbors, which can then approach the source of food, leading to the convergence of the swarm on that source. Following this analogy, PSO was largely derived from sociopsychology concepts and transferred to optimization [1], where each particle (bird) uses the local information regarding the displacement of its reachable closer neighbors to decide on its own displacement, resulting in complex and adaptive collective behaviors.

Since the inception of the PSO technique, a lot of work has been done by researchers to enhance its efficiency in handling optimization problems. One such work is the introduction of the linear decreasing inertia weight (LDIW) strategy into the original PSO to control its exploration and exploitation for better performance [24]. However, the LDIW-PSO algorithm is known in the literature to have the shortcoming of premature convergence in solving complex (multipeak) problems, due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. The challenge of addressing this shortcoming has persisted for a long time and has attracted much attention from researchers in the field of global optimization. Consequently, many other inertia weight PSO variants have been proposed [2, 5–16], with different levels of success. Some of these variants have claimed better performance than LDIW-PSO, thereby making it look weak or inferior. Since improving on the performance of PSO is an area which still attracts many researchers, this paper strives to establish experimentally that LDIW-PSO is very efficient if its parameters, like the velocity limits for the particles, are properly set. Using the experimentally obtained values for particle velocity limits in LDIW-PSO, results show that LDIW-PSO outperformed the other PSO variants adopted for comparison.

In the sections that follow, the inertia weight PSO technique is described in Section 2, LDIW-PSO and the PSO variants adopted for comparison are reviewed in Section 3, parameter settings are experimentally determined in Section 4, the experimental results of LDIW-PSO and its competing variants are presented and discussed in Section 5, and Section 6 concludes the paper.

2. Particle Swarm Optimization

PSO is a simple but efficient population-based, adaptive, and stochastic technique for solving simple and complex optimization problems [17, 18]. It does not need the gradient of the problem being optimized, so it can be employed for a host of optimization problems. In PSO, a swarm of particles (set of solutions) is randomly positioned (distributed) in the search space. For every particle, the objective function determines the amount of food at its position (the value of the objective function). Every particle knows its own actual value of the objective function, its own best value (locally best solution), the best value of the whole swarm (globally best solution), and its own velocity.

PSO maintains a single static population whose members are tweaked (adjusted slightly) in response to new discoveries about the space. The method is essentially a form of directed mutation. It operates almost exclusively in multidimensional metric, and usually real-valued, spaces. Because of its origin, PSO practitioners tend to refer to candidate solutions not as a population of individuals but as a swarm of particles. Generally, these particles never die [19] but are moved about in the search space by the directed mutation.

Implementing PSO involves a small number of parameters that regulate the behavior and efficacy of the algorithm in optimizing a given problem. These parameters are the particle swarm size, problem dimensionality, particle velocity, inertia weight, particle velocity limits, cognitive learning rate, social learning rate, and the random factors. The versatility of PSO comes at a price, because for it to work well on the problem at hand these parameters need tuning, which can be very laborious. The inertia weight parameter (popularly represented as ω) has attracted a lot of attention and seems to be the most important compared with the other parameters. The motivation behind its introduction was the desire to better control (or balance) the scope of the (local and global) search and to reduce the importance of (or eliminate) velocity clamping, V_max, during the optimization process [20–22]. According to [22], the inertia weight was successful in addressing the former objective but could not completely eliminate the need for velocity clamping. The divergence or convergence of particles can be controlled only by the parameter ω, in conjunction with the selection of values for the acceleration constants [22, 23] and other parameters.

Each individual in the particle swarm is composed of three n-dimensional vectors (current position, previous position, and velocity), where n is the dimensionality of the search space. Thus, in a physical n-dimensional search space, the position and velocity of each particle i are represented as the vectors X_i = (x_i1, …, x_in) and V_i = (v_i1, …, v_in), respectively. In the course of its movement in the search space looking for the optimum solution of the problem being optimized, a particle's velocity and position are updated as follows:

V_i^{k+1} = \omega V_i^k + c_1 r_1 (\text{Pbest}_i^k - X_i^k) + c_2 r_2 (\text{Gbest}_i^k - X_i^k), \quad (1)
X_i^{k+1} = X_i^k + V_i^{k+1}, \quad (2)

where c_1 and c_2 are acceleration (weighting) factors known as the cognitive and social scaling parameters that determine the magnitude of the random forces in the direction of Pbest (personal previous best) and Gbest (global previous best); r_1 and r_2 are random numbers between 0 and 1; k is the iteration index; and ω is the inertia weight. It is common for the positions and velocities of particles in the swarm to be kept within specified bounds when they are updated, as shown in Algorithms 1 and 2, respectively. An inertia weight PSO algorithm is shown in Algorithm 3.

Algorithm 1. Particle position clamping.

Algorithm 2. Particle velocity clamping.

Algorithm 3. Inertia weight PSO algorithm.

3. A Review of LDIW-PSO and Some of Its Competing PSO Variants

Despite the fact that the LDIW-PSO algorithm is known in the literature to have a shortcoming of premature convergence in solving complex (multipeak) problems, it may not always be true that LDIW-PSO is as weak or inferior as it has been portrayed to be by some PSO variants in the literature [2, 7, 13]. Reviewed below are some of these variants, together with other variants that were not directly compared with LDIW-PSO in the literature but have been adopted here for comparison with it.

3.1. Linear Decreasing Inertia Weight PSO (LDIW-PSO)

The inertia weight parameter was introduced into the original version of PSO by [20]. By introducing a linearly decreasing inertia weight into the original version of PSO, the performance of PSO was greatly improved in experimental studies [24]. In order to further illustrate the effect of this linearly decreasing inertia weight, [4] empirically studied the performance of PSO. With the conviction that a large inertia weight facilitates a global search while a small inertia weight facilitates a local search, a linearly decreasing inertia weight was used with an initial value of 0.9 and a final value of 0.4. With these values, the inertia weight can be interpreted as the fluidity of the medium in which a particle moves [21]: setting it to a relatively high initial value (e.g., 0.9) makes particles move in a low-viscosity medium and perform extensive exploration, while gradually reducing it to a much lower value (e.g., 0.4) makes the particles move in a high-viscosity medium and perform more exploitation. The experimental results in [4] showed that the PSO converged quickly towards the optimal positions but slowed down its convergence when near the optima. Thus, with the linearly decreasing inertia weight, the PSO lacks global search ability at the end of the run, even when global search ability is required to jump out of a local minimum. As a result, an adaptive strategy for adjusting the inertia weight was suggested to improve PSO's performance near the optima. Towards achieving this, there have been many improvements on LDIW-PSO in the literature [2, 3, 16, 24–26], which have made PSO perform with varying degrees of success. The LDIW is represented in (3):

\omega_t = (\omega_{start} - \omega_{end}) \left( \frac{T_{max} - t}{T_{max}} \right) + \omega_{end}, \quad (3)

where ω_start and ω_end are the initial and final values of the inertia weight, t is the current iteration number, T_max is the maximum iteration number, and ω_t ∈ [0,1] is the inertia weight value in the tth iteration.
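As a minimal illustration, the LDIW of (3), with the initial and final values 0.9 and 0.4 used later in the experiments, can be written as the following Python function (the function name and signature are assumptions, not the authors' code); it can be passed directly as the inertia schedule of the sketch in Section 2:

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    # Linear decreasing inertia weight, eq. (3): decays from w_start at t = 0
    # to w_end at t = t_max.
    return (w_start - w_end) * (t_max - t) / t_max + w_end
```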

3.2. Chaotic Descending Inertia Weight PSO (CDIW-PSO)

Chaos is a characteristic of nonlinear dynamic systems that are sensitive to their initial values; it exhibits ergodicity and stochastic-like properties. Using the idea of chaotic mapping, CDIW-PSO was proposed by [2] as shown in (5), based on the logistic map in (4). The goal was to improve on LDIW-PSO so as to avoid getting trapped in local optima during the search process by utilizing the merits of chaotic optimization:

z_{k+1} = \mu \times z_k \times (1 - z_k), \quad (4)

where μ = 4 and z_k is the kth chaotic number. The map generates values between 0 and 1, provided that the initial value z_0 ∈ (0,1) and z_0 ∉ {0.0, 0.25, 0.5, 0.75, 1.0}:

\omega_t = (\omega_{start} - \omega_{end}) \left( \frac{T_{max} - t}{T_{max}} \right) + \omega_{end} \times z_{k+1}, \quad (5)

where ω_start and ω_end are the initial and final values of the inertia weight and z_{k+1} is the chaotic number obtained from (4). The experimental results in [2] show that CDIW-PSO outperformed LDIW-PSO on all the test problems used in the experiment in terms of convergence precision, convergence velocity, and global search ability.
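A hedged Python sketch of the CDIW strategy of (4)-(5) is given below; the seed value and function name are assumptions made for illustration only:

```python
def cdiw(t, t_max, z, w_start=0.9, w_end=0.4, mu=4.0):
    # Logistic map, eq. (4): the next chaotic number in (0, 1).
    z_next = mu * z * (1.0 - z)
    # Chaotic descending inertia weight, eq. (5): the final-value term is
    # perturbed by the chaotic number.
    return (w_start - w_end) * (t_max - t) / t_max + w_end * z_next, z_next

# The chaotic sequence is seeded once with z0 in (0, 1), away from the fixed
# points 0.25, 0.5, and 0.75 of the map.
z = 0.37
for t in range(3):
    w, z = cdiw(t, 1500, z)
```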

3.3. Random Inertia Weight and Evolutionary Strategy PSO (REPSO)

This variant, proposed in [7], used the idea of simulated annealing and the fitness of particles to design another inertia weight, represented by (6). A cooling temperature was introduced to adjust the inertia weight based on a certain probability, so as to facilitate jumping out of local optimal solutions.

It was experimentally shown that REPSO is significantly superior to LDIW-PSO in terms of convergence speed and accuracy:

\omega_t = \begin{cases} \alpha_1 + \dfrac{r}{2.0}, & p \ge r, \\ \alpha_2 + \dfrac{r}{2.0}, & p < r, \end{cases} \quad (6)

where α_1, α_2 ∈ [0,1] are constants with α_1 > α_2, and r ~ U[0,1]. The simulated annealing probability is defined as follows:

p = \begin{cases} 1, & \min_{1 \le i \le m} f_i^{t} \le \min_{1 \le i \le m} f_i^{t-1}, \\ \exp\left( -\dfrac{\min_{1 \le i \le m} f_i^{t} - \min_{1 \le i \le m} f_i^{t-1}}{T_t} \right), & \min_{1 \le i \le m} f_i^{t} > \min_{1 \le i \le m} f_i^{t-1}, \end{cases} \quad (7)

where m is the number of particles, f_i^t is the fitness value of particle i in the tth iteration, and the adaptive cooling temperature in the tth iteration, T_t, is defined as shown in (8):

T_t = \dfrac{f_{avg}^{t}}{f_{best}^{t}} - 1, \quad (8)

where f_best^t is the current best fitness value and f_avg^t, which is defined in (9), is the average fitness value in the tth iteration:

f_{avg}^{t} = \dfrac{1}{m} \sum_{i=1}^{m} f_i^{t}. \quad (9)

The combined efforts of the simulated annealing idea and the fitness variance of particles improved the global search ability of PSO. In all the experiments performed, REPSO was reported to be superior to LDIW-PSO in convergence velocity and precision.
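The sketch below illustrates one possible Python reading of (6)-(9). The way the acceptance test in (7) is interpreted here (current swarm minimum against the previous iteration's minimum) and the default values of α_1 and α_2 are assumptions, since the text only states that α_1, α_2 ∈ [0,1] with α_1 > α_2:

```python
import math
import random

def repso_inertia(fitness, prev_min, alpha1=0.5, alpha2=0.2):
    # fitness: current particle fitness values f_i^t (minimisation problem);
    # prev_min: minimum fitness recorded in the previous iteration (assumption).
    f_min = min(fitness)
    f_avg = sum(fitness) / len(fitness)                       # eq. (9)
    T = f_avg / f_min - 1.0 if f_min != 0 else float("inf")   # cooling temperature, eq. (8)
    # Simulated-annealing acceptance probability, eq. (7): accept outright when
    # the swarm improved, otherwise with a Boltzmann-like probability.
    if f_min <= prev_min:
        p = 1.0
    elif T > 0:
        p = math.exp(-(f_min - prev_min) / T)
    else:
        p = 0.0
    r = random.random()
    # Random inertia weight, eq. (6).
    return (alpha1 + r / 2.0) if p >= r else (alpha2 + r / 2.0)
```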

3.4. Dynamic Adaptive Particle Swarm Optimization (DAPSO)

DAPSO was introduced by [13] with the aim of proffering a solution to the PSO premature convergence problem associated with typical multipeak, high-dimensional function optimization problems, so as to improve its global optimum and convergence speed. A dynamic adaptive strategy was introduced into the variant to adjust the inertia weight value based on the current swarm diversity and congregate degree, as well as their impact on the search performance of the swarm. The experimental results recorded showed that DAPSO was more effective compared with LDIW-PSO. The inertia weight formula that was used is represented in (10):

\omega_t = \omega_{min} + (\omega_{max} - \omega_{min}) \times F_t \times \varphi_t, \quad (10)

where ω_min and ω_max are the minimum and maximum inertia weight values, t is the current iteration number, and the diversity function F_t and adjustment function φ_t, both in the tth iteration, are represented in (11) and (12), respectively:

F_t = 1 - \dfrac{2}{\pi} \arctan(E), \quad (11)

where E is the group fitness as shown in (13):

\varphi_t = e^{-t^2 / (2\sigma^2)}, \quad (12)

where σ = T/3 and T is the total number of iterations:

E = \dfrac{1}{N} \sum_{i=1}^{N} \left( f(x_i) - f_{avg} \right)^2, \quad (13)

where N is the swarm size, f(x_i) is the fitness of particle i, and f_avg, represented in (14), is the current average fitness of the swarm:

f_{avg} = \dfrac{1}{N} \sum_{i=1}^{N} f(x_i). \quad (14)
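A minimal Python sketch of the DAPSO inertia weight (10)-(14) follows; the sign convention recovered for (11) and the default weight bounds are assumptions:

```python
import math

def dapso_inertia(t, t_total, fitness, w_min=0.4, w_max=0.9):
    # fitness: current fitness values f(x_i) of the N particles in the swarm.
    n = len(fitness)
    f_avg = sum(fitness) / n                                  # eq. (14)
    E = sum((f - f_avg) ** 2 for f in fitness) / n            # group fitness, eq. (13)
    F_t = 1.0 - (2.0 / math.pi) * math.atan(E)                # diversity function, eq. (11)
    sigma = t_total / 3.0
    phi_t = math.exp(-(t ** 2) / (2.0 * sigma ** 2))          # adjustment function, eq. (12)
    return w_min + (w_max - w_min) * F_t * phi_t              # eq. (10)
```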

3.5. Adaptive Particle Swarm Optimization (APSO)

This PSO variant was proposed in [5]; it incorporates an adaptive mutation mechanism and a dynamic inertia weight into the conventional PSO method. These mechanisms were employed to enhance global search ability and convergence speed and to increase accuracy; the mutation mechanism affects the particle position updating formula, while the dynamic inertia weight is given by (15). Though APSO was not compared with LDIW-PSO, it outperformed all its competitors as evidenced by all the experimental results:

\omega_t = 0.5 \left\{ 1 + \tanh\left[ \dfrac{1}{\alpha} \times F(P_{gd}^t) \right] \right\}, \quad (15)

where F(P_gd^t) is the fitness of the current best solution in the tth iteration, and the parameter α is predefined and could be set equal to the fitness of the best particle in the initial population. For the updating of the particle's position, when a particle is chosen for mutation, a Gaussian random disturbance is added as depicted in (16):

x_{ij} = x_{ij} + M \times \beta_{ij}, \quad (16)

where x_{ij} is the ith component of the jth particle, β_{ij} is a random variable with Gaussian distribution with zero mean and unit variance, and M is a variable step size which dynamically decreases according to the current best solution fitness. M is defined in the tth iteration according to

M_t = x_{max} \times \tanh\left[ \dfrac{1}{\alpha} \times F(P_{gd}^t) \right]. \quad (17)
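Sketched below, under the reconstruction of (15)-(17) given above, is a possible Python rendering of the APSO inertia weight and mutation step; the function names are assumptions, and alpha is expected to be a nonzero value such as the best fitness of the initial swarm:

```python
import math
import random

def apso_inertia(f_best, alpha):
    # Dynamic inertia weight, eq. (15), driven by the fitness F(P_gd^t) of the
    # current best solution.
    return 0.5 * (1.0 + math.tanh(f_best / alpha))

def apso_mutate(position, f_best, alpha, x_max):
    # Adaptive Gaussian mutation, eqs. (16)-(17): the step size M shrinks as
    # the best fitness approaches zero.
    M = x_max * math.tanh(f_best / alpha)
    return [x + M * random.gauss(0.0, 1.0) for x in position]
```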

3.6. Dynamic Nonlinear and Dynamic Logistic Chaotic Map PSO (DLPSO2)

In [11], two variants were proposed to solve the premature convergence problem of PSO. Two dynamic nonlinear methods and a logistic chaotic map were used to adjust the inertia weight in parallel according to different fitness values. One of the dynamic nonlinear inertia weights is represented by (18) and (19), while the other is represented by (20) and (21). They are meant to achieve a tradeoff between exploration and exploitation:

dn = dn_{min} + (dn_{max} - dn_{min}) \left( \dfrac{iter}{iter_{max}} \right), \quad (18)
\omega = \omega_{min} + (\omega_{max} - \omega_{min}) \left( \dfrac{iter}{iter_{max}} \right)^{dn}, \quad (19)
dn = dn_{max} - (dn_{max} - dn_{min}) \left( \dfrac{iter}{iter_{max}} \right), \quad (20)
\omega = \omega_{max} - (\omega_{max} - \omega_{min}) \left( \dfrac{iter}{iter_{max}} \right)^{dn}, \quad (21)

where dn is the dynamic nonlinear factor, ω is the inertia weight, ω_max and ω_min are the maximum and minimum values of ω, respectively, dn_max and dn_min are the maximum and minimum values of dn, respectively, and iter and iter_max are the current iteration number and the maximum iteration number, respectively.

A dynamic logistic chaotic map in (4) was incorporated into the PSO variant inertia weight as shown in (23) to enrich searching behaviors and avoid being trapped into local optima:

\alpha = \alpha_{max} - (\alpha_{max} - \alpha_{min}) \left( \dfrac{iter}{iter_{max}} \right), \quad (22)
\omega = \alpha + (1 - \alpha) L_{map}, \quad (23)

where α is the dynamic chaotic inertia weight adjustment factor, α_max and α_min are the maximum and minimum values of α, respectively, and L_map is the result of the logistic chaotic map. The variant using (19) and (23) was labeled DLPSO1, while the one using (21) and (23) was labeled DLPSO2.
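The three inertia weight formulas of DLPSO1/DLPSO2 can be sketched in Python as follows; the bounds dn_min, dn_max, α_min, and α_max are not specified in the text reproduced here, so the defaults below are placeholders:

```python
def w_nonlinear_1(t, t_max, w_min=0.4, w_max=0.9, dn_min=0.1, dn_max=1.0):
    # Eqs. (18)-(19): increasing dynamic nonlinear factor and inertia weight.
    dn = dn_min + (dn_max - dn_min) * t / t_max
    return w_min + (w_max - w_min) * (t / t_max) ** dn

def w_nonlinear_2(t, t_max, w_min=0.4, w_max=0.9, dn_min=0.1, dn_max=1.0):
    # Eqs. (20)-(21): decreasing dynamic nonlinear factor and inertia weight.
    dn = dn_max - (dn_max - dn_min) * t / t_max
    return w_max - (w_max - w_min) * (t / t_max) ** dn

def w_chaotic(t, t_max, lmap, a_min=0.2, a_max=0.8):
    # Eqs. (22)-(23): dynamic logistic chaotic inertia weight; lmap is the
    # current value of the logistic map from eq. (4).
    a = a_max - (a_max - a_min) * t / t_max
    return a + (1.0 - a) * lmap
```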

For the purpose of achieving a balance between global exploration and local exploitation and also avoiding premature convergence, (19), (21), and (23) were combined to dynamically adjust the inertia weight of the variant as shown in Algorithm 4, where f_i is the fitness value of particle i and f_avg is the average fitness value of the swarm. Experiments and comparisons showed that DLPSO2 outperformed several other well-known improved particle swarm optimization algorithms on many famous benchmark problems in all cases.

Algorithm 4.

3.7. Discussions

LDIW-PSO is relatively simple to implement and fast in convergence. When [4] experimentally ascertained that LDIW-PSO is prone to premature convergence, especially when solving complex multimodal optimization problems, a new area of research was opened up for improvements on inertia weight strategies in PSO, and LDIW-PSO became a popular yardstick for many other variants.

From the variants described previously, there are ample expectations that they should outperform LDIW-PSO, judging by the various additional strategies introduced into their inertia weight formulations. For example, CDIW-PSO introduced a chaotic optimization characteristic, REPSO combined a simulated annealing idea with the fitness variance of particles, DAPSO introduced a dynamic adaptive strategy based on a swarm diversity function, APSO introduced an adaptive mutation of the particle positions and made the inertia weight dynamic based on the best global fitness, while DLPSO2 used different formulas coupled with a chaotic map. Yet the general aim of remedying the problem of premature convergence was not achieved by these variants; they only moved a bit further than LDIW-PSO in trying to optimize the test problems, because a total solution to this problem requires an algorithm to escape all possible local optima and obtain the global optimum. It is therefore possible that LDIW-PSO was subjected to settings, for example, the particle velocity limits [24], which were not appropriate for it to operate effectively.

4. Testing with Benchmark Problems

To validate the claim in this paper, five different experiments were performed to compare LDIW-PSO with five other PSO variants, namely, CDIW-PSO, REPSO, DAPSO, APSO, and DLPSO2. Each experiment, relative to the competing PSO variant, used a different set of test problems, which were also used to test LDIW-PSO. Brief descriptions of these test problems are given next; additional information on these problems can be found in [27–29]. The application software was developed in the Microsoft Visual C# programming language.

4.1. Benchmark Problems

Six well-studied benchmark problems from the literature were used to test the performance of LDIW-PSO, CDIW-PSO, REPSO, DAPSO, APSO, and DLPSO2. These problems were selected because their combinations were used to validate the competing PSO variants in the respective papers referenced. The test problems are Ackley, Griewank, Rastrigin, Rosenbrock, Schaffer's f6, and Sphere.

The Ackley problem is multimodal and nonseparable. It is a widely used test problem, defined in (24). Its global minimum f_1(x) = 0 is obtained at x = 0, and the number of local minima is not known:

f_1(x) = -20 \exp\left( -0.2 \sqrt{ \dfrac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \dfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e. \quad (24)

The Griewank problem is similar to the Rastrigin problem. It is continuous, multimodal, scalable, and nonseparable, with many widespread local minima regularly distributed. The complexity of the problem increases with its dimensionality. Its global minimum f_2(x) = 0 is obtained at x = 0; the number of local minima for arbitrary n is not known, but in the two-dimensional case there are some 500 local minima. This problem is represented by

f_2(x) = \dfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \dfrac{x_i}{\sqrt{i}} \right) + 1. \quad (25)

The Rastrigin problem, represented in (26), is continuous, multimodal, scalable, and separable, with many local minima arranged in a lattice-like configuration. It is based on the Sphere problem with the addition of cosine modulation so as to produce frequent local minima. There are about 50 local minima in the two-dimensional case, and its global minimum f_3(x) = 0 is obtained at x = 0:

f_3(x) = \sum_{i=1}^{n} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right). \quad (26)

Shown in (27) is the Rosenbrock problem. It is continuous, unimodal, scalable, and nonseparable. It is a classic optimization problem, also known as the banana function, De Jong's second function, or the extended Rosenbrock function. Its global minimum f_4(x) = 0, obtained at x = 1, lies inside a long, narrow, parabolic-shaped valley. Though it looks simple to solve, due to a saddle point it is very difficult to converge to the global optimum:

f_4(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]. \quad (27)

The Schaffer's f6 problem, represented in (28), is 2-dimensional, continuous, multimodal, and nonseparable, with an unknown number of local minima. Its global minimum f_5(x) = 0 is obtained at x = 0:

f_5(x) = \sum_{i=1}^{n-1} \left( 0.5 + \dfrac{ \sin^2\left( \sqrt{ x_{i+1}^2 + x_i^2 } \right) - 0.5 }{ \left( 0.001 \left( x_{i+1}^2 + x_i^2 \right) + 1 \right)^2 } \right). \quad (28)

The Sphere problem, also known as De Jong's first function, is continuous, convex, unimodal, scalable, and separable. It is one of the simplest benchmark test problems. Its global minimum f_6(x) = 0 is obtained at x = 0, and the problem is represented by

f_6(x) = \sum_{i=1}^{n} x_i^2. \quad (29)
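For reference, the six benchmark problems can be implemented directly from (24)-(29); the Python definitions below are dimension-generic sketches (Schaffer's f6 is given in the 2-dimensional form described above) and are illustrative only, not the authors' C# code:

```python
import math

def ackley(x):       # eq. (24)
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def griewank(x):     # eq. (25)
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def rastrigin(x):    # eq. (26)
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def rosenbrock(x):   # eq. (27)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def schaffer_f6(x):  # eq. (28), 2-dimensional form
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1.0 + 0.001 * s) ** 2

def sphere(x):       # eq. (29)
    return sum(xi ** 2 for xi in x)
```

Each of these functions evaluates to the stated global minimum of 0 at the corresponding optimum (the zero vector, or the all-ones vector for Rosenbrock).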

4.2. Parameter Settings

The limits on particle velocity can negatively affect the performance of the PSO algorithm if they are not properly set. As a result, various studies have been carried out to determine the velocity limits of particles in order to improve the performance of PSO; research in this direction includes [4, 24, 30]. The three major methods that appear in the literature for computing the velocity clamping (V_min and V_max) are (i) multiplying the search space range by a certain percentage (δ), that is, V_max = δ(X_max − X_min) and V_min = −V_max; (ii) multiplying the minimum and maximum limits of the search space separately by a certain percentage (δ), that is, V_max = δ(X_max) and V_min = δ(X_min); and (iii) assigning the search space upper limit to V_max. It is obvious from (i) and (ii) that the parameter δ is very important. As a result, different values have been used by different authors [5, 6, 13, 30] for δ to determine velocity clamping for particles.
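The three clamping rules listed above can be summarised in a small helper; this is an illustrative Python sketch (the V_min used for method (iii) is an assumption, since the text only fixes V_max there). Method (ii), with the δ values of Table 1, is the one adopted for LDIW-PSO in Section 4.3:

```python
def velocity_limits(x_min, x_max, delta, method="per_bound"):
    # Three common ways of deriving velocity clamping from the search-space limits.
    if method == "range":        # (i)  V_max = delta * (X_max - X_min), V_min = -V_max
        v_max = delta * (x_max - x_min)
        return -v_max, v_max
    if method == "per_bound":    # (ii) V_max = delta * X_max, V_min = delta * X_min
        return delta * x_min, delta * x_max
    if method == "upper_bound":  # (iii) V_max set to the search-space upper limit
        return -x_max, x_max     #      (symmetric V_min assumed here)
    raise ValueError("unknown method: " + method)
```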

In trying to substantiate the claim that LDIW-PSO is not as weak or inferior as many authors have presented it to be, an experiment was conducted to investigate the effect of the parameter δ on the performance of LDIW-PSO using the benchmark problems described previously. The results were used as a guide to set δ in LDIW-PSO before embarking on the experimental comparisons between it and the other PSO variants described previously, in order to show that LDIW-PSO is superior to many of the variants that have been claimed to be better than it. The results of these experiments are listed in the Appendix. Using the results as a guide, the value of δ was set in LDIW-PSO for the various test problems as listed in Table 1. However, δ was set to 0.015 for f_2 in Experiment 2 and to 0.25 for f_3 in Experiments 2 and 5.

Table 1.

Settings for parameter δ in LDIW-PSO.

Problem f 1 f 2 f 3 f 4 f 5 f 6
δ 0.05 0.0075 0.05 0.015 0.075 0.015

4.3. Experimental Setup

The settings for the different experiments carried out for the comparisons are described next, one after the other. In all the experiments, LDIW-PSO was subjected to the settings of its competitors as recorded in the literature. For LDIW-PSO, c_1 = c_2 = 2.0, ω_max = 0.9, ω_min = 0.4, V_min = δX_min, and V_max = δX_max. Table 2 shows the respective search and initialization ranges for all the algorithms; the symbol "–" means that the corresponding test problem was not used by the algorithm under which the symbol appears.

Table 2.

Test problems search and initialization ranges for the PSO variants.

Label CDIW-PSO REPSO DAPSO APSO DLPSO2
f 1 – – [−32,32] – [−32,32]
f 2 [−600,600] [−600,600] [−600,600] [−600,600] [−600,600]
f 3 [−5.12,5.12] [−10,10] [−5.12,5.12] [−5.12,5.12] [−10,10]
f 4 [−30,30] [−100,100] – [−30,30] –
f 5 [−100,100] [−10,10] – – [−1.0,1.0]
f 6 [−100,100] [−10,10] – – [−100,100]

Experiment 1. The purpose of this experiment was to compare LDIW-PSO with CDIW-PSO [2]. All the test problems described previously were used in this experiment, except f_1. The dimension for f_5 was 2, while it was 30 for the others. The maximum number of iterations was set to 1500 with a swarm size of 20, and the experiment was repeated 500 times. Stated in Table 3 are the goals (success criteria) set for each of the problems.

Table 3.

Goals for the test problems in CDIW-PSO.

Label f 2 f 3 f 4 f 5 f 6
Goal 0.05 50.0 100.0 0.00001 0.01

Experiment 2. The purpose of this experiment was to compare LDIW-PSO with REPSO [7]. All the test problems in Table 1 except f_1 were used. The dimension for f_5 was 2, while it was 10 for the others. The performance of the algorithms was considered at different numbers of iterations, as shown in Section 5, in line with what is recorded in the literature [7]. The swarm size used was 30, and the experiment was repeated 50 times.

Experiment 3. The purpose of this experiment was to compare LDIW-PSO with DAPSO [13]. Test problems f_1–f_3 were used with four different problem dimensions of 20, 30, 40, and 50. The maximum number of iterations and the swarm size were set to 3000 and 30, respectively, and the experiment was repeated 50 times.

Experiment 4. The purpose of this experiment was to compare LDIW-PSO with APSO [5]. f_2, f_3, and f_4 were used as test problems with three different problem dimensions of 10, 20, and 30; the maximum numbers of iterations associated with these dimensions were 1000, 1500, and 2000, respectively. The experiment was carried out over three different swarm sizes, 20, 40, and 80, for each problem dimension, and it was repeated 30 times.

Experiment 5. This experiment compared LDIW-PSO with DLPSO2 [11]. All the test problems except f_4 were used in the experiment, with two different problem dimensions of 2 (for f_3 and f_5) and 30 (for f_1, f_2, and f_6). The maximum number of iterations was set to 2000 and the swarm size to 20, and the experiment was repeated 20 times.

5. Results and Discussions

Presented in Tables 4–8 are the results obtained for all the experiments. The results for all the competing PSO variants were obtained from the respective referenced papers and are presented here the way they were recorded; the recording of the results for LDIW-PSO was patterned after them. In each of the tables, bold values represent the best results. In the tables, the mean best fitness (solution) is a measure of the precision that the algorithm can attain within a given number of iterations, the standard deviation (Std. Dev.) is a measure of the algorithm's stability and robustness, while the success rate (SR) [31] is the rate at which an algorithm obtains an optimum fitness result within the criterion range during a given number of independent runs.

Table 4.

Experimental results for LDIW-PSO compared with CDIW-PSO.

Criteria Griewank Rastrigin Rosenbrock Schaffer's f6 Sphere
CDIW-PSO LDIW-PSO CDIW-PSO LDIW-PSO CDIW-PSO LDIW-PSO CDIW-PSO LDIW-PSO CDIW-PSO LDIW-PSO
Mean fitness 0.014773 0.007609 40.044561 33.055877 44.305058 31.148789 0.007732 0.000117 0.000092 0.000000
Std. Dev. 0.002959 0.008439 8.028912 10.498048 8.861012 20.832263 0.001546 0.001058 0.000016 0.000000
SR (%) 96.2 100 83.6 92.8 99.6 98.0 22.0 98.6 100 100

Table 8.

Experimental results for LDIW-PSO compared with DLPSO2.

Criteria Best fitness Mean fitness Std. Dev.
Ackley
 DLPSO2 8.6209e − 06 0.4743 0.6527
 LDIW-PSO 2.0441e − 07 0.0000 0.0000
Griewank
 DLPSO2 7.7589e − 06 0.0086 0.0114
 LDIW-PSO 3.5694e − 13 0.0083 0.0088
Rastrigin
 DLPSO2 −2 −2 0
 LDIW-PSO −2 −2 0
Schaffer's f6
 DLPSO2 7.5206e − 07 5.6300e − 06 2.8969e − 06
 LDIW-PSO 0.0000e + 00 0.0000e + 00 0.0000e + 00
Sphere
 DLPSO2 7.6941e − 06 9.5001e − 06 4.9557e − 07
 LDIW-PSO 4.1289e − 14 0.0000e + 00 0.0000e + 00

Experiment 1 (comparison of LDIW-PSO with CDIW-PSO). The results in Table 4 clearly reveal a great difference in performance between LDIW-PSO and CDIW-PSO [2]. The results are compared based on the final accuracy of the averaged best solutions, the success rate (SR), and the standard deviation (Std. Dev.) of the best solutions. For all the test problems, the results indicate that LDIW-PSO obtains better optimum fitness results, showing better convergence precision. LDIW-PSO is also more stable and robust than CDIW-PSO, because its standard deviation is comparatively smaller in three of the test problems. Besides, LDIW-PSO has better global search ability and escapes local optima more easily than CDIW-PSO.

Experiment 2 (comparison of LDIW-PSO with REPSO). In Table 5, the comparison between LDIW-PSO and REPSO is based on the final accuracy of the averaged best solutions relative to the specified numbers of iterations and on convergence speed, as recorded in [7]. From the results, REPSO appears to converge faster on Griewank and Rastrigin at the beginning, but it was overtaken by LDIW-PSO, which eventually converged faster and with better accuracy. On the Rosenbrock and Sphere problems, LDIW-PSO had better convergence speed and accuracy than REPSO. The symbol "—" means that the corresponding iteration number was not considered for the test problem under which the symbol appears.

Table 5.

Experimental results for LDIW-PSO compared with REPSO.

Iteration Griewank1 Rastrigin Rosenbrock2 Sphere
REPSO LDIW-PSO REPSO LDIW-PSO REPSO LDIW-PSO REPSO LDIW-PSO
50 — — — — — — — —
100 0.6705 0.7859 30.7320 44.2732 — — 0.00671 0.00493
200 0.4922 0.6437 — — — — — —
300 0.2487 0.5607 — — — — 2.1142e−04 2.9792e−04
400 0.2345 0.4318 20.6671 16.5414 — — — —
500 0.1658 0.3185 17.3751 10.4621 570.7681 352.1663 7.1144e−05 9.1853e−07
800 — — 15.5611 3.9143 — — 6.8751e−06 5.8431e−17
1000 0.1461 0.0967 10.8120 3.2609 300.1407 218.9924 5.6367e−07 1.2425e−28
1500 0.1353 0.0842 — — 260.8421 138.2756 — —
2000 0.1089 0.0794 — — 170.2157 79.9941 — —
3000 — — — — 60.4418 21.5586 — —

1This problem is slightly different from the one in (25).

2This problem is slightly different from the one in (27).

Experiment 3 (comparison of LDIW-PSO with DAPSO). The results for DAPSO were obtained from [13]. The comparison with LDIW-PSO was based on the final accuracy of the respective mean best solutions across the different problem dimensions, as shown in Table 6. In all the problems and dimensions except dimension 40 of Rastrigin, LDIW-PSO outperformed DAPSO, obtaining better fitness quality and precision. This is a clear indication that LDIW-PSO has better global search ability and is not as easily trapped in local optima as DAPSO.

Table 6.

Experimental results for LDIW-PSO compared with DAPSO.

Dim Ackley Griewank Rastrigin
DAPSO LDIW-PSO DAPSO LDIW-PSO DAPSO LDIW-PSO
20 3.906209e − 014 8.970602e − 015 8.605280e − 002 1.649481e − 002 2.159059e + 001 2.040020e + 001
30 4.159541e − 008 1.527799e − 010 2.583338e − 002 9.258783e − 003 3.263463e + 001 2.996404e + 001
40 7.046093e − 005 2.578715e − 007 1.087868e − 002 4.875733e − 003 3.890287e + 001 4.109865e + 001
50 1.025568e − 003 1.629095e − 005 1.346732e − 002 4.335978e − 003 4.823559e + 001 4.606947e + 001

Experiment 4 (comparison of LDIW-PSO with APSO). Recorded in Table 7 are the results for LDIW-PSO and APSO [5] over different swarm sizes, dimensions, and iteration counts. The basis for comparison is the final accuracy and quality of their mean best fitness. The two variants put up a good competition: on Griewank and Rastrigin, APSO performed better in smaller dimensions, while LDIW-PSO performed better in higher dimensions; on Rosenbrock, LDIW-PSO outperformed APSO in locating better solutions to the problem.

Table 7.

Experimental results for LDIW-PSO compared with APSO.

Swarm size Dim Maximum iteration Griewank Rastrigin Rosenbrock
APSO LDIW-PSO APSO LDIW-PSO APSO LDIW-PSO
20 10 1000 0.0983 0.2347 5.1565 12.4602 5.8467 4.3695
20 1500 0.0237 0.0150 16.0456 27.6708 47.9842 19.1223
30 2000 0.0117 0.0103 42.2325 33.2050 100.4528 29.3482

40 10 1000 0.0952 0.2231 2.9468 10.5713 4.5431 3.9145
20 1500 0.0201 0.0211 15.3678 19.3199 38.3464 16.5186
30 2000 0.0105 0.0099 33.7538 26.3453 72.5473 26.9638

80 10 1000 0.0689 0.1294 2.0457 9.0800 4.1680 6.5127
20 1500 0.0199 0.0184 10.0563 16.4368 27.9547 17.6043
30 2000 0.0102 0.0080 25.3473 23.2303 69.0609 24.6653

Experiment 5 (comparison of LDIW-PSO with DLPSO2). The results for LDIW-PSO and DLPSO2 [11] in Table 8 are compared based on the best fitness, mean best fitness, convergence speed, and the standard deviation (Std. Dev.) of the best solutions. On Rastrigin, the two algorithms performed equally. However, on the other test problems, the results indicate that LDIW-PSO obtains better optimum fitness results, showing better convergence speed. LDIW-PSO is also more stable and robust than DLPSO2, because its standard deviation is comparatively smaller for all the test problems. Besides, LDIW-PSO demonstrated better global search ability and escaped local optima more readily than DLPSO2.

6. Conclusion

Motivated by the superiority claims of some PSO variants over LDIW-PSO in terms of performance, a number of experiments were performed in this paper to empirically verify some of these claims. Firstly, an appropriate (approximate) percentage of the test problems' search space limits was obtained to determine the particle velocity limits for LDIW-PSO. Secondly, these values were used in the implementation of LDIW-PSO for some benchmark optimization problems, and the results obtained were compared with those of some PSO variants that have previously claimed superiority in performance. LDIW-PSO performed better than these variants. The performances of two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO on similar problems, with the latter showing competitive advantage. This work has therefore shown that, with good experimental settings, LDIW-PSO will perform competitively with similar variants. Previous claims of inferior performance might therefore be due to some unfavourable experimental settings. The Appendix provides further simulation results that give useful hints for setting the velocity thresholds of particles in LDIW-PSO.

Acknowledgment

Thanks are due to the College of Agricultural Science, Engineering and Sciences, University of Kwazulu-Natal, South Africa, for their support towards this work through financial grant.

Appendix

Tables 9, 10, 11, 12, and 13 show the results of LDIW-PSO in optimizing some benchmark problems so as to determine a suitable value for δ, which was used to set the velocity limits for the particles. The experiments were repeated 500 times for each of the problems. Two different swarm sizes of 20 and 30 were used for each of the three different problem dimensions 10, 30, and 50; the respective numbers of iterations used with these dimensions were 1000, 1500, and 2000. The LDIW strategy decreased the inertia weight from 0.9 to 0.4 in the course of searching for a solution to the problem [7, 10–12, 27], the acceleration constants (c_1 and c_2) were set to 2.0, and V_max = δ(X_max) and V_min = δ(X_min). In the tables, bold values represent the best mean fitness value.

Table 9.

Different values of parameter δ and respective mean best fitness for Griewank test problem.

δ Dimension 10 Dimension 30 Dimension 50
Size = 20 Size = 30 Size = 20 Size = 30 Size = 20 Size = 30
1.0 9.913e − 02 9.125e − 02 1.157e + 01 5.607e + 00 6.269e + 01 3.941e + 01
0.75 9.645e − 02 8.825e − 02 3.088e + 00 1.451e − 02 1.519e + 01 6.875e + 00
0.5 9.983e − 02 9.018e − 02 1.972e − 01 1.601e − 02 2.003e + 00 5.522e − 01
0.25 1.002e − 01 2.925e − 02 1.602e − 02 1.458e − 02 1.200e − 02 9.885e − 03
0.15 9.772e − 02 9.276e − 02 1.556e − 02 1.450e − 02 9.925e − 03 8.654e − 03
0.1 1.044e − 01 9.141e − 02 1.489e − 02 1.564e − 02 1.027e − 02 9.339e − 03
0.075 1.064e − 01 1.006e − 01 1.328e − 02 1.389e − 02 8.937e − 03 7.963e − 03
0.05 1.011e − 01 9.417e − 02 1.521e − 02 1.580e − 02 8.224e − 03 7.821e − 03
0.025 9.682e − 02 8.738e − 02 1.604e − 02 1.668e − 02 7.108e − 03 7.354e − 03
0.015 9.028e − 02 8.648e − 02 1.379e − 02 1.444e − 02 5.719e − 03 6.226e − 03
0.01 1.274e − 01 1.265e − 01 1.148e − 02 1.141e − 02 5.005e − 03 4.768e − 03
0.0075 2.251e − 01 2.078e − 01 7.160e − 03 7.595e − 03 4.237e − 03 4.021e − 03
0.005 5.546e − 01 3.751e − 01 8.006e − 03 8.030e − 03 4.025e − 03 4.526e − 03
0.0025 1.258e + 00 6.833e − 01 1.203e − 02 1.218e − 02 6.808e − 03 6.013e − 03
0.0015 1.895e + 01 9.642e − 01 1.415e − 02 1.434e − 02 7.226e − 03 7.419e − 03
0.001 4.061e + 00 2.083e + 00 1.366e − 02 1.622e − 02 7.184e − 03 7.462e − 03

Table 10.

Different values of parameter δ and respective mean best fitness for Rastrigin test problem.

δ Dimension 10 Dimension 30 Dimension 50
Size = 20 Size = 30 Size = 20 Size = 30 Size = 20 Size = 30
1.0 4.551e + 00 3.400e + 00 9.959e + 01 8.462e + 01 2.694e + 02 2.361e + 02
0.75 4.537e + 00 3.619e + 00 6.924e + 01 5.866e + 01 1.935e + 02 1.729e + 02
0.5 4.646e + 00 3.476e + 00 5.253e + 01 4.282e + 01 1.330e + 02 1.151e + 02
0.25 6.484e + 00 5.247e + 00 4.534e + 01 4.197e + 01 8.943e + 01 8.462e + 01
0.15 1.043e + 01 9.013e + 00 4.142e + 01 3.798e + 01 7.204e + 01 6.590e + 01
0.1 1.149e + 01 9.470e + 00 3.702e + 01 3.380e + 01 6.183e + 01 5.653e + 01
0.075 1.077e + 01 9.397e + 00 3.328e + 01 2.917e + 01 5.394e + 01 4.824e + 01
0.05 1.162e + 01 1.022e + 01 3.302e + 01 2.943e + 01 5.370e + 01 4.704e + 01
0.025 1.373e + 01 1.160e + 01 3.607e + 01 3.194e + 01 5.474e + 01 4.860e + 01
0.015 1.387e + 01 1.159e + 01 3.893e + 01 3.521e + 01 5.762e + 01 5.087e + 01
0.01 1.431e + 01 1.221e + 01 4.010e + 01 3.565e + 01 5.995e + 01 5.390e + 01
0.0075 1.475e + 01 1.213e + 01 4.164e + 01 3.692e + 01 6.256e + 01 5.476e + 01
0.005 1.868e + 01 1.398e + 01 4.300e + 01 3.663e + 01 6.451e + 01 5.464e + 01
0.0025 3.337e + 01 2.507e + 01 7.294e + 01 4.917e + 01 9.215e + 01 6.073e + 01
0.0015 4.794e + 01 4.027e + 01 1.168e + 02 7.803e + 01 1.396e + 02 8.922e + 01
0.001 5.792e + 01 5.220e + 01 1.898e + 02 1.548e + 02 2.102e + 02 1.390e + 02

Table 11.

Different values of parameter δ and respective mean best fitness for Rosenbrock test problem.

δ Dimension 10 Dimension 30 Dimension 50
Size = 20 Size = 30 Size = 20 Size = 30 Size = 20 Size = 30
1.0 1.165e + 04 1.040e + 04 1.851e + 05 2.873e + 04 3.075e + 06 1.148e + 06
0.75 6.020e + 03 4.020e + 03 2.009e + 04 1.711e + 04 8.240e + 05 1.837e + 05
0.5 2.585e + 03 2.189e + 03 1.128e + 04 8.214e + 03 1.175e + 04 1.360e + 04
0.25 1.872e + 01 5.571e + 00 4.307e + 02 4.445e + 02 2.315e + 03 1.056e + 03
0.15 1.075e + 01 4.229e + 00 4.910e + 01 4.750e + 01 1.156e + 02 9.710e + 01
0.1 4.798e + 00 4.241e + 00 4.248e + 01 4.147e + 01 9.217e + 01 8.699e + 01
0.075 4.680e + 00 4.099e + 00 4.531e + 01 3.607e + 01 1.073e + 02 7.701e + 01
0.05 5.182e + 00 4.534e + 00 3.453e + 01 3.282e + 01 6.858e + 01 6.383e + 01
0.025 5.770e + 00 5.598e + 00 3.148e + 01 3.035e + 01 5.450e + 01 5.215e + 01
0.015 7.818e + 00 6.800e + 00 2.956e + 01 2.832e + 01 5.207e + 01 5.218e + 01
0.01 7.748e + 00 6.480e + 00 2.962e + 01 2.891e + 01 5.487e + 01 5.154e + 01
0.0075 8.085e + 00 7.945e + 00 2.998e + 01 2.948e + 01 5.505e + 01 5.164e + 01
0.005 6.491e + 00 6.896e + 00 3.134e + 01 3.015e + 01 5.544e + 01 5.263e + 01
0.0025 7.943e + 01 7.682e + 00 3.052e + 01 2.915e + 01 5.656e + 01 5.163e + 01
0.0015 5.003e + 01 1.408e + 01 3.095e + 01 2.672e + 01 5.398e + 01 5.174e + 01
0.001 2.417e + 04 3.426e + 03 3.020e + 01 2.949e + 01 5.614e + 01 5.222e + 01

Table 12.

Different values of parameter δ and respective mean best fitness for Sphere test problem.

δ Dimension 10 Dimension 30 Dimension 50
Size = 20 Size = 30 Size = 20 Size = 30 Size = 20 Size = 30
1.0 1.043e − 20 3.679e − 23 1.140e + 03 5.400e + 02 7.380e + 03 4.400e + 03
0.75 9.490e − 21 1.554e − 23 1.600e + 02 4.000e + 01 1.460e + 03 7.600e + 02
0.5 5.108e − 21 1.048e − 23 1.349e − 08 4.015e − 10 1.000e + 02 2.000e + 01
0.25 8.561e − 22 5.859e − 24 3.547e − 09 6.110e − 11 1.538e − 05 4.976e − 07
0.15 5.304e − 21 9.144e − 25 1.503e − 09 2.963e − 11 6.952e − 06 2.114e − 07
0.1 6.679e − 23 1.203e − 24 4.432e − 10 1.193e − 11 2.224e − 06 7.656e − 08
0.075 8.577e − 23 2.149e − 25 2.397e − 10 8.813e − 12 1.306e − 06 4.954e − 08
0.05 3.921e − 23 1.794e − 25 1.147e − 10 3.490e − 12 5.098e − 07 2.235e − 08
0.025 1.006e − 23 4.835e − 26 2.596e − 11 7.592e − 13 1.620e − 07 6.654e − 09
0.015 2.466e − 24 1.795e − 26 1.349e − 11 2.364e − 13 5.689e − 08 2.222e − 09
0.01 1.022e − 24 4.326e − 27 3.998e − 12 1.245e − 13 3.983e − 08 8.837e − 10
0.0075 9.942e − 25 3.991e − 27 2.758e − 12 7.017e − 14 1.115e − 08 5.786e − 10
0.005 6.363e − 25 2.300e − 27 1.449e − 12 3.061e − 14 1.116e − 08 2.034e − 10
0.0025 2.003e − 23 1.376e − 26 3.638e − 13 9.420e − 15 1.592e − 09 6.778e − 11
0.0015 4.469e − 08 2.962e − 08 7.378e − 13 1.254e − 14 1.062e − 09 3.130e − 11
0.001 2.900e + 02 9.887e + 01 5.711e − 02 8.265e − 13 2.563e − 09 2.755e − 11

Table 13.

Different values of parameter δ and respective mean best fitness for Schaffer's f6 test problem.

δ Dimension 2
Size = 20 Size = 30
1.0 1.342e − 03 5.446e − 04
0.75 2.392e − 03 9.335e − 04
0.5 2.052e − 03 7.651e − 04
0.25 1.387e − 03 7.212e − 04
0.15 7.756e − 04 2.731e − 04
0.1 6.816e − 04 1.847e − 04
0.075 4.865e − 04 1.749e − 04
0.05 6.413e − 04 1.612e − 04
0.025 4.275e − 03 2.740e − 03
0.015 5.625e − 03 3.129e − 03
0.01 4.726e − 03 2.993e − 03
0.0075 4.594e − 03 2.683e − 03
0.005 5.663e − 03 3.327e − 03
0.0025 5.940e − 03 4.760e − 03
0.0015 7.582e − 03 5.449e − 03
0.001 7.776e − 03 6.092e − 03

References

  • 1.Eberhart RC, Kennedy J. New optimizer using particle swarm theory. Proceedings of the 6th International Symposium on Micro Machine and Human Science (MHS '95); October 1995; Nagoya, Japan. pp. 39–43. [Google Scholar]
  • 2.Feng Y, Teng GF, Wang AX, Yao YM. Chaotic inertia weight in particle swarm optimization. Proceedings of the 2nd International Conference on Innovative Computing, Information and Control (ICICIC '07); September 2007; Kumamoto, Japan. p. 475. [Google Scholar]
  • 3.Xin J, Chen G, Hai Y. A particle swarm optimizer with multi-stage linearly-decreasing inertia weight. Proceedings of the International Joint Conference on Computational Sciences and Optimization (CSO '09); April 2009; Sanya, China. [Google Scholar]
  • 4.Shi YH, Eberhart RC. Empirical study of particle swarm optimization. Proceedings of the IEEE International Conference on Evolutionary Computation; 1999; Washington, DC, USA. pp. 1945–1950. [Google Scholar]
  • 5.Alfi A. PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems. Acta Automatica Sinica. 2011;37(5):541–549. [Google Scholar]
  • 6.Chen G, Huang X, Jia J, Min Z. Natural exponential inertia weight strategy in particle swarm optimization. Proceedings of the 6th World Congress on Intelligent Control and Automation (WCICA '06); June 2006; Dalian, China. pp. 3672–3675. [Google Scholar]
  • 7.Gao Y-L, Duan Y-H. A new particle swarm optimization algorithm with random inertia weight and evolution strategy. Proceedings of the International Conference on Computational Intelligence and Security Workshops (CISW '07); December 2007; Heilongjiang, China. pp. 199–203. [Google Scholar]
  • 8.Gao Y, An X, Liu J. A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation. Proceedings of the International Conference on Computational Intelligence and Security (CIS '08); December 2008; Suzhou, China. pp. 61–65. [Google Scholar]
  • 9.Kentzoglanakis K, Poole M. Particle swarm optimization with an oscillating inertia weight. Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09); July 2009; Montreal, Canada. pp. 1749–1750. [Google Scholar]
  • 10.Li HR, Gao YL. Particle swarm optimization algorithm with exponent decreasing inertia weight and stochastic mutation. Proceedings of the 2nd International Conference on Information and Computing Science (ICIC '09); May 2009; Manchester, UK. pp. 66–69. [Google Scholar]
  • 11.Liu H, Su R, Gao Y, Xu R. Improved particle swarm optimization using two novel parallel inertia weights. Proceedings of the 2nd IEEE International Conference on Intelligent Computing Technology and Automation (ICICTA '09); October 2009; Hunan, China. pp. 185–188. [Google Scholar]
  • 12.Malik RF, Rahman TA, Hashim SZM, Ngah R. New particle swarm optimizer with sigmoid increasing inertia weight. International Journal of Computer Science and Security. 2007;1(2):p. 35. [Google Scholar]
  • 13.Shen X, Chi Z, Yang J, Chen C, Chi Z. Particle swarm optimization with dynamic adaptive inertia weight. Proceedings of the IEEE International Conference on Challenges in Environmental Science and Computer Engineering (CESCE '10); March 2010; Wuhan, China. pp. 287–290. [Google Scholar]
  • 14.Jian-Ru Z, Guo-Li Z, Hua Z. Hybrid linear and nonlinear weight particle swarm optimization algorithm. Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC '12); July 2012; pp. 1237–1241. [Google Scholar]
  • 15.Chauhan P, Deep K, Pant M. Novel inertia weight strategies for particle swarm optimization. Memetic Computing. 2013;5(3):229–251. [Google Scholar]
  • 16.Shi Y, Eberhart RC. Fuzzy adaptive particle swarm optimization. Proceedings of the Congress on Evolutionary Computation; May 2001; Seoul, Republic of Korea. pp. 101–106. [Google Scholar]
  • 17.Arasomwan MA, Adewumi AO. An adaptive velocity particle swarm optimization for high-dimensional function optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13); June 2013; Mexico City, Mexico. pp. 2352–2359. [Google Scholar]
  • 18.Arasomwan MA, Adewumi AO. On adaptive chaotic inertia weights in particle swarm optimization. Proceedings of the 4th IEEE Symposium Series on Computational Intelligence (SSCI '13); 2013; Singapore. pp. 72–79. [Google Scholar]
  • 19.Luke S. Essentials of Metaheuristics, a Set of Undergraduate Lecture Notes. 1st edition. 2011. (version 1.2). [Google Scholar]
  • 20.Shi YH, Eberhart RC. A modified particle swarm optimizer. Proceedings of the IEEE International Conferences on Evolutionary Computation; May 1998 ; Anchorage, Alaska, USA. pp. 69–73. [Google Scholar]
  • 21.Poli R, Kennedy J, Blackwell T. Particle swarm optimization: an overview. Swarm Intelligence. 2007;1:33–57. [Google Scholar]
  • 22.Engelbrecht AP. Computational Intelligence: An Introduction. New York, NY, USA: John Wiley & Sons; 2007. [Google Scholar]
  • 23.Iwasaki N, Yasuda K, Ueno G. Dynamic parameter tuning of particle swarm optimization. IEEJ Transactions on Electrical and Electronic Engineering. 2006;1(3):353–363. [Google Scholar]
  • 24.Shi Y, Eberhart R. Parameter selection in particle swarm optimization. In: Porto VW, Saravanan N, Waagen D, Eiben AE, editors. Evolutionary Programming VII. Vol. 1447. Berlin, Germany: Springer; 1998. pp. 591–600. [Google Scholar]
  • 25.Eberhart RC, Shi Y. Tracking and optimizing dynamic systems with particle swarms. Proceedings of the Congress on Evolutionary Computation; May 2001; Seoul, Republic of Korea. pp. 94–100. [Google Scholar]
  • 26.Nickabadi A, Ebadzadeh MM, Safabakhsh R. A novel particle swarm optimization algorithm with adaptive inertia weight. Applied Soft Computing Journal. 2011;11(4):3658–3670. [Google Scholar]
  • 27.Molga M, Smutnicki C. Test functions for optimization needs. 2005, http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf.
  • 28.Montaz AM, Khompatraporn C, Zabinsky ZB. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. Journal of Global Optimization. 2005;31(4):635–672. [Google Scholar]
  • 29.Chetty S, Adewumi AO. Three new stochastic local search algorithms for continuous optimization problems. Computational Optimization and Applications. 2013 [Google Scholar]
  • 30.Evers GI. An automatic regrouping mechanism to deal with stagnation in particle swarm optimization [M.S. thesis] Edinburg, Tex, USA: University of Texas-Pan American; 2009. [Google Scholar]
  • 31.Sawyerr BA, Ali MM, Adewumi AO. A comparative study of some real-coded genetic algorithms for unconstrained global optimization. Optimization Methods and Software. 2011;26(6):945–970. [Google Scholar]
