Empirical Study of Particle Swarm Optimization
Abstract - In this paper, we empirically study the performance of the particle swarm optimizer (PSO). Four different benchmark functions with asymmetric initial range settings are selected as testing functions. The experimental results illustrate the advantages and disadvantages of the PSO. Under all the testing cases, the PSO always converges very quickly towards the optimal positions but may slow its convergence speed when it is near a minimum. Nevertheless, the experimental results show that the PSO is a promising optimization method, and a new approach, such as using an adaptive inertia weight, is suggested to improve the PSO's performance near the optima.

1 Introduction

Through cooperation and competition among the population, population-based optimization approaches often can find very good solutions efficiently and effectively. Most of the population-based search approaches are motivated by evolution as seen in nature. Four well-known examples are genetic algorithms [1], evolutionary programming [2], evolutionary strategies [3] and genetic programming [4]. Particle swarm optimization (PSO), on the other hand, is motivated by the simulation of social behavior. Nevertheless, they all work in the same way, that is, updating the population of individuals by applying some kinds of operators according to the fitness information obtained from the environment so that the individuals of the population can be expected to move towards better solution areas.

The PSO algorithm was first introduced by Eberhart and Kennedy [5, 6, 7, 8]. Instead of using evolutionary operators to manipulate the individuals, as in other evolutionary computational algorithms, each individual in PSO flies in the search space with a velocity which is dynamically adjusted according to its own flying experience and its companions' flying experience. Each individual is treated as a volume-less particle (a point) in the D-dimensional search space. The ith particle is represented as Xi = (xi1, xi2, ..., xiD). The best previous position (the position giving the best fitness value) of the ith particle is recorded and represented as Pi = (pi1, pi2, ..., piD). The index of the best particle among all the particles in the population is represented by the symbol g. The rate of the position change (velocity) for particle i is represented as Vi = (vi1, vi2, ..., viD). The particles are manipulated according to the following equations:

    vid = vid + c1 * rand() * (pid - xid) + c2 * Rand() * (pgd - xid)    (1a)
    xid = xid + vid                                                      (1b)

where c1 and c2 are two positive constants, and rand() and Rand() are two random functions in the range [0, 1].

Unlike in genetic algorithms, evolutionary programming, and evolution strategies, in PSO the selection operation is not performed [9, 10]. All particles in PSO are kept as members of the population through the course of the run (a run is defined as the total number of generations of the evolutionary algorithms prior to termination) [9]. It is the velocity of the particle which is updated according to its own previous best position and the previous best position of its companions. The particles fly with the updated velocities. PSO is the only evolutionary algorithm that does not implement survival of the fittest [9].

By considering equation (1b) as similar to a mutation operation, the PSO algorithm is similar to the evolutionary programming algorithm, since neither algorithm performs a crossover operation. In evolutionary programming, each individual is mutated by adding a random function (the most commonly used random function is either a Gaussian or Cauchy function) [11, 12], while in PSO each particle (individual) is updated according to its own flying experience and the group's flying experience. In other words, at each generation, each particle in PSO can only fly in a limited number of directions which are expected to be good areas to fly toward according to the group's experience, while in evolutionary programming each individual has the possibility to "fly" in any direction. That ...
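To make the update concrete, the following is a minimal Python sketch of one PSO generation following equations (1a) and (1b). The function name, the optional inertia weight w (the linearly decreasing weight appears later, in the experimental setting), and the clamping of velocities to [-Vmax, Vmax] are conventions assumed for this illustration, not details quoted from the paper.

import numpy as np

def pso_generation(x, v, pbest, pbest_f, fitness, c1=2.0, c2=2.0, w=1.0, vmax=None):
    # x, v, pbest: (pop_size, D) arrays of positions, velocities and best
    # previous positions; pbest_f: (pop_size,) best fitness values (minimized).
    pop_size, dim = x.shape
    g = int(np.argmin(pbest_f))             # index g of the best particle
    r1 = np.random.rand(pop_size, dim)      # rand() in [0, 1]
    r2 = np.random.rand(pop_size, dim)      # Rand() in [0, 1]

    # Equation (1a): velocity update from own and group flying experience
    # (w = 1 gives the equation as written; w < 1 adds an inertia weight).
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
    if vmax is not None:
        v = np.clip(v, -vmax, vmax)         # assumed velocity limit Vmax

    # Equation (1b): the particles fly with the updated velocities.
    x = x + v

    # Record new personal bests; no selection is performed, so every
    # particle stays in the population for the whole run.
    f = np.apply_along_axis(fitness, 1, x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    return x, v, pbest, pbest_f

Note that, as the text stresses, only the velocity update carries the search: no particle is ever discarded or replaced.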
... the results, it can be expected that by employing a dynamically adapting velocity step size approach, the PSO ...

As in [15], for each function, three different dimension sizes are tested: 10, 20 and 30. The maximum number of generations is set as 1000, 1500 and 2000, corresponding to the dimensions 10, 20 and 30, respectively. In order to investigate whether the PSO algorithm scales well or not, different population sizes are used for each function with different dimensions; they are population sizes of 20, 40, 80, and 160. A linearly decreasing inertia weight is used which starts at 0.9 and ends at 0.4, with c1 = 2 and c2 = 2. Vmax and Xmax are set to be equal, and their values for each function are listed in Table 2. A total of 50 runs for each experimental setting are conducted.

Table 4: Mean fitness values for the Rosenbrock function.
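A small sketch of the experimental setting above, assuming the inertia weight is interpolated linearly over the generations of a run (the paper states only the start and end values, 0.9 and 0.4):

def linear_inertia_weight(gen, max_gen, w_start=0.9, w_end=0.4):
    # Linearly decreasing inertia weight: w_start at the first generation,
    # w_end at the last one (linear interpolation assumed).
    return w_start - (w_start - w_end) * gen / float(max_gen - 1)

# Settings reported above: c1 = c2 = 2, Vmax = Xmax (per-function values
# listed in Table 2), 50 runs per setting, and the following sizes:
POP_SIZES = [20, 40, 80, 160]
MAX_GEN = {10: 1000, 20: 1500, 30: 2000}   # dimension -> maximum generations
NUM_RUNS = 50

For example, with max_gen = 1000 the weight starts at 0.9 and reaches 0.4 at the final generation.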
... jump out of the local minimum in some cases. Nevertheless, the results shown illustrate that by using a linearly decreasing inertia weight, the performance of the PSO can be improved greatly, giving better results than those of both the PSO and the evolutionary programming reported in [15]. From the figures, it is also clear that the PSO with different population sizes has almost the same performance. Similar to the observation for the Sphere function, the PSO algorithm scales well for all four functions.

Conclusion

In this paper, the performance of the PSO algorithm with a linearly decreasing inertia weight has been extensively investigated by experimental studies of four non-linear functions well studied in the literature. The experimental results illustrate that the PSO has the ability to converge quickly, that its performance is not sensitive to the population size, and that it scales well.

The results also illustrate that the PSO may lack global search ability at the end of a run due to the utilization of a linearly decreasing inertia weight. The PSO may fail to find the required optima when the problem to be solved is too complicated and complex. To some extent, this can be overcome by employing a self-adapting strategy for adjusting the inertia weight.

References

1. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
2. Fogel, L. J. (1994). Evolutionary programming in perspective: the top-down view. In Computational Intelligence: Imitating Life, J. M. Zurada, R. J. Marks II, and C. J. Robinson, Eds., IEEE Press, Piscataway, NJ.
3. Rechenberg, I. (1994). Evolution strategy. In Computational Intelligence: Imitating Life, J. M. Zurada, R. J. Marks II, and C. J. Robinson, Eds., IEEE Press, Piscataway, NJ.
4. Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA.
5. Eberhart, R. C., Dobbins, R. W., and Simpson, P. (1996). Computational Intelligence PC Tools. Boston: Academic Press.
6. Eberhart, R. C., and Kennedy, J. (1995). A new optimizer using particle swarm theory. Proc. Sixth International Symposium on Micro Machine and Human Science (Nagoya, Japan), IEEE Service Center, Piscataway, NJ, 39-43.
7. Kennedy, J., and Eberhart, R. C. (1995). Particle swarm optimization. Proc. IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, IV: 1942-1948.
8. Kennedy, J. (1997). The particle swarm: social adaptation of knowledge. Proc. IEEE International Conference on Evolutionary Computation (Indianapolis, Indiana), IEEE Service Center, Piscataway, NJ, 303-308.
9. Eberhart, R. C., and Shi, Y. H. (1998). Comparison between genetic algorithms and particle swarm optimization. 1998 Annual Conference on Evolutionary Programming, San Diego.
10. Angeline, P. J. (1998). Using selection to improve particle swarm optimization. IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, May 4-9, 1998.
11. Fogel, D., and Beyer, H. A note on the empirical evaluation of intermediate recombination. Evolutionary Computation, vol. 3, no. 4.
12. Yao, X., and Liu, Y. (1996). Fast evolutionary programming. The Fifth Annual Conference on Evolutionary Programming.
13. Shi, Y. H., and Eberhart, R. C. (1998). Parameter selection in particle swarm optimization. 1998 Annual Conference on Evolutionary Programming, San Diego, March 1998.
14. Shi, Y. H., and Eberhart, R. C. (1998). A modified particle swarm optimizer. IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, May 4-9, 1998.
15. Angeline, P. J. (1998). Evolutionary optimization versus particle swarm optimization: philosophy and performance difference. 1998 Annual Conference on Evolutionary Programming, San Diego.
16. Saravanan, N., and Fogel, D. (1996). An empirical comparison of methods for correlated mutations under self-adaptation. The Fifth Annual Conference on Evolutionary Programming.
Figures 1-14 (plots not reproduced here; each shows fitness versus generation for dimensions d = 10, 20 and 30, on a log scale where indicated):

Fig. 1 Sphere function with population size = 20
Fig. 2 Sphere function with population size = 40
Fig. 3 Sphere function with population size = 80
Fig. 4 Sphere function with population size = 160
Fig. 5 Rosenbrock function with population size = 20
Fig. 6 Rosenbrock function with population size = 40
Fig. 7 Rosenbrock function with population size = 80
Fig. 8 Rosenbrock function with population size = 160
Fig. 9 Rastrigin function with population size = 20
Fig. 10 Rastrigin function with population size = 40
Fig. 11 Rastrigin function with population size = 80
Fig. 12 Rastrigin function with population size = 160
Fig. 13 Griewank function with population size = 20
Fig. 14 Griewank function with population size = 40
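For reference, here is a sketch of the four benchmark functions named in the figure captions, written with their standard textbook definitions. The paper's own listing of the functions, and the Xmax/Vmax values of Table 2, are not part of this excerpt, so these forms are assumptions rather than quotations.

import numpy as np

def sphere(x):
    # f(x) = sum x_i^2, minimum 0 at the origin
    return float(np.sum(x ** 2))

def rosenbrock(x):
    # f(x) = sum 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2, minimum 0 at (1, ..., 1)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def rastrigin(x):
    # f(x) = sum (x_i^2 - 10 cos(2 pi x_i) + 10), minimum 0 at the origin
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    # f(x) = sum x_i^2 / 4000 - prod cos(x_i / sqrt(i)) + 1, minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)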