
Comparative Study on Several PSO Algorithms

Weichang Jiang, Yating Zhang, Ruihua Wang

1. College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124
E-mail: [email protected]

Abstract: With the development of intelligent algorithms, GA and PSO have become hot spots in the study of multi-objective optimization in recent years. Information sharing is the core of the PSO algorithm. Compared with GA, PSO has fewer variables to adjust and is easier to implement, so it is widely used in engineering. This paper focuses on a comparison of several PSO algorithms and introduces a PSO variant that performs better than the others.

Key Words: PSO Algorithm, Intelligent Algorithm, SelPSO

1 Introduction

PSO (Particle Swarm Optimization), based on the hunting action of birds, was proposed by Kennedy and Eberhart in 1995. The algorithm simulates the hunting action of birds by treating every bird as a point that has no weight or volume and the solution as the food the birds are looking for. The algorithm has been widely used since it was proposed, and improved versions have appeared at the same time. This paper introduces an improved PSO algorithm and shows its performance advantages by comparing it with several PSO algorithms through experiments.

2 PSO Algorithm

2.1 Basic PSO Algorithm

The PSO algorithm is a random optimization algorithm based on swarm intelligence. It simulates the hunting action of birds, treating every bird as a point that has no weight or volume and the solution as the food the birds are looking for. Every particle has two attributes, velocity and position; it updates them through fixed rules and reaches the optimal point after many updates [1].

    v_i^(k+1) = w * v_i^k + c1*r1*(pbest_i - x_i^k) + c2*r2*(gbest - x_i^k)    (1-1)

    x_i^(k+1) = x_i^k + v_i^(k+1)    (1-2)

(1-1) and (1-2) are the update formulas. X_i = (x_i1, x_i2, x_i3, ..., x_iD) is the position of the i-th particle, D is the dimension of the problem and k the update number; c1 and c2 are learning factors, r1 and r2 are random numbers between 0 and 1, pbest stands for the best position each particle has found so far and gbest for the best position found by all particles in history.

Every particle gets a random initial position and velocity, then updates them through formulas (1-1) and (1-2). Formula (1-1) contains three parts: w is the inertia factor, and w * v_i^k stands for the influence of the current velocity, which balances global and local search; the second part stands for the influence of pbest, the memory of the particle itself; the third part stands for the influence of gbest, the memory of the entire swarm.

The basic PSO algorithm includes six steps:
(1) Initialize the position and velocity of every particle randomly.
(2) Evaluate the fitness value of every particle and update pbest and gbest based on the current search.
(3) Update the position and velocity by formulas (1-1) and (1-2).
(4) Update pbest if the fitness value is better than the current pbest.
(5) Compare all pbest with gbest and update gbest if there is a better fitness value.
(6) Stop the iteration if the stopping conditions are satisfied; otherwise go back to step (3).

2.2 Several Improved PSO Algorithms

2.2.1 LDWPSO

LDWPSO (Linearly Decreasing Weight Particle Swarm Optimization) is a means to overcome the bad influence of a constant weight on searching. A large weight value is good for jumping out of local minima, while a small weight value allows accurate local search and contributes to the convergence of the algorithm.

    w = wmax - t * (wmax - wmin) / tmax    (1-3)

The algorithm changes w through formula (1-3), where wmax is the maximum value of w, wmin is the minimum and t is the iteration number. w becomes smaller as t grows, so the algorithm is good at global search at the beginning and has accurate local search ability at the end [2].

978-1-4799-3708-0/14/$31.00 ©2014 IEEE    1117
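Taken together, the update rules (1-1)-(1-2) and the linearly decreasing weight (1-3) can be sketched in Python. This is only an illustrative reading of the formulas, not the authors' code: the function name, the weight range 0.9-0.4 and the clamping of positions to the search range are our own assumptions.

```python
import random

def pso_minimize(f, dim, bounds, swarm=80, iters=500,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Basic PSO using updates (1-1)-(1-2) with the LDWPSO weight (1-3).

    The weight range 0.9-0.4 is a common choice, not taken from the paper.
    """
    lo, hi = bounds
    # Step (1): random initial positions, zero initial velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    # Step (2): initial pbest/gbest from the first evaluation.
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(swarm), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        w = w_max - t * (w_max - w_min) / iters   # formula (1-3)
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]                          # current velocity
                           + c1 * r1 * (pbest[i][d] - x[i][d])  # particle memory
                           + c2 * r2 * (gbest[d] - x[i][d]))    # swarm memory
                # Formula (1-2), clamped to the search range (an added detail).
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_f[i]:        # steps (4)-(5): keep the best memories
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

On a smooth test function such as the sphere, `pso_minimize(lambda p: sum(c * c for c in p), dim=3, bounds=(-5.12, 5.12))` converges close to 0.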
2.2.2 AsyLnCPSO

AsyLnCPSO is a method that changes c1 and c2, which express the degree to which the memory of the particle and of the swarm affects the update of position and velocity. At the beginning of the algorithm, a large c1 and a small c2 give the algorithm a strong self-learning ability and a weak social-learning ability; at the end, a small c1 and a large c2 make the algorithm focus on social learning and reduce the effect of self-learning.

    c1 = c1.ini + (c1.fin - c1.ini) * t / tmax    (1-4)

    c2 = c2.ini + (c2.fin - c2.ini) * t / tmax    (1-5)

The update formulas for c1 and c2 are (1-4) and (1-5). c1.ini and c2.ini are the initial values of c1 and c2, and c1.fin and c2.fin stand for their final values. In this algorithm we let c1.ini equal c2.fin and c2.ini equal c1.fin.

2.2.3 SAPSO

SAPSO balances global search and local search ability by changing the weight nonlinearly.

    w = wmin + (wmax - wmin) * (f - fmin) / (favg - fmin),  if f <= favg
    w = wmax,                                               if f > favg    (1-6)

As formula (1-6) shows, wmax is the maximum weight and wmin the minimum; favg stands for the current average fitness value of all particles and fmin for the smallest of them. When the particles converge to the same value or to a local optimum, w gets larger, which contributes to global search ability; w gets smaller when the particles are scattered, and at the same time a small w protects particles that have good fitness values.

2.3 SelPSO

Unlike the three improved PSO algorithms above, which balance global and local search ability by adjusting parameters, the main idea of SelPSO is to rank the particles by fitness value and use the position and velocity of the better half to replace those of the worse half, while keeping each particle's pbest and the swarm's gbest unchanged, so that the algorithm can converge to the global optimum after iteration.

SelPSO includes seven steps:
(1) Initialize the position and velocity of every particle randomly.
(2) Evaluate the fitness value of every particle and update pbest and gbest based on the current search.
(3) Update the position and velocity by formulas (1-1) and (1-2).
(4) Update pbest if the fitness value is better than the current pbest.
(5) Compare all pbest with gbest and update gbest if there is a better fitness value.
(6) Rank the particles by fitness value and use the position and velocity of the better half to replace those of the worse half; keep pbest and gbest unchanged.
(7) Stop the iteration if the stopping conditions are satisfied; otherwise go back to step (3).

3 Experiment and Analysis

There are many standard test functions for PSO algorithms, and Rastrigin is one of them. Rastrigin has many deep local optima shaped by sinusoidal inflexions, which makes it one of the hardest functions to search. The experiment uses Rastrigin as the test function for the five algorithms. The optimal value of Rastrigin is 0 at x = [0, 0, ..., 0].

The five algorithms are set with the same parameters and the experiment shows their differences in performance. The range of x is [-5.12, 5.12]. Table 1 shows the parameter values and Table 2 the differences in performance.

Table 1. Parameters
Swarm Number:    80
w (weight):      0.73
c1:              2 (2.15/0.15)
c2:              2 (2.15/0.15)
Iteration Time:  500
Repeating Time:  100
Dimension:       3

Table 2. Performance
Algorithms  | Percentage of results (%) | Average of results
Basic PSO   | 20                        | 0.9452
LDWPSO      | 40                        | 0.6569
AsyLnCPSO   | 65                        | 0.4974
SAPSO       | 32                        | 0.8817
SelPSO      | 90                        | 0.0703
1118 2014 26th Chinese Control and Decision Conference (CCDC)
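The rank-and-replacement step (6) that distinguishes SelPSO from the other variants can be sketched as follows. This is our illustrative reading of the step, assuming minimization (lower fitness is better, as with Rastrigin); all identifiers are invented:

```python
def select_replace(x, v, fitness):
    """SelPSO step (6): overwrite the worse half of the swarm with the
    position, velocity and fitness of the better half.  pbest and gbest
    are deliberately left untouched by this step."""
    order = sorted(range(len(x)), key=fitness.__getitem__)  # rank by fitness
    half = len(x) // 2
    for good, bad in zip(order[:half], order[half:]):
        x[bad] = x[good][:]          # worse particle restarts from a good spot
        v[bad] = v[good][:]
        fitness[bad] = fitness[good]
    return x, v, fitness
```

Run once per iteration between the gbest update and the stopping test, the replacement concentrates the swarm around promising regions, while the untouched pbest and gbest memories still pull the velocity update (1-1) in diverse directions.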


From Table 2 we can see that, compared with the other four algorithms, SelPSO performs better both in the percentage of results and in the average of results. SelPSO finds 0 in 90 of 100 runs, and its average value over the 100 searches is closer to the optimal fitness value than that of the other four algorithms.

Table 3 shows the performance when the dimension goes up to 5. Compared with Table 2, all five algorithms find it hard to locate the optimal fitness value. In this situation SelPSO's performance is still better than that of the other algorithms, and its average is the closest to 0. Moreover, SelPSO uses less running time than the others when they are set to the same number of iterations.

Table 3. Performance
Algorithms  | Percentage of results (%) | Average of results
Basic PSO   | 1                         | 5.641
LDWPSO      | 0                         | 5.351
AsyLnCPSO   | 1                         | 4.278
SAPSO       | 0                         | 5.987
SelPSO      | 3                         | 1.891

Analyzing the experimental results, I have several opinions about why the algorithms cannot find the optimal fitness value 100 percent of the time: (1) At the beginning of the algorithms, every particle gets random position and velocity values. Scattered values help the algorithms find the global optimum, while concentrated values make it easy for particles to fall deeply into local optima. (2) The parameter values are set by experience and may not be the best for Rastrigin.

4 Conclusion

This paper introduces an improved PSO algorithm that changes the performance by adding a rank-and-replacement step. On one hand, the replacement promotes convergence of the particles to the best fitness value; on the other hand, the unchanged pbest and gbest keep particles from falling into local minima. From the above we can see that SelPSO has a strong search ability and performs better than the other four algorithms: it finds the optimal fitness value more often and brings the average closer to that value. This paper shows the good performance of SelPSO, and future studies will focus on its further improvement.

References
[1] Li Li, Ben Niu. PSO Algorithm. Science Press, Beijing, 2009.10.
[2] Li Liu. PSO Algorithm and Application for Engineering. Electronic Industry Press, Beijing, 2010.8.
[3] Zhen Ji, Hunlian Liao, Qinghua Wu. PSO Algorithm and Application. Science Press, Beijing, 2009.1.
[4] Ling Wang, Bo Liu. Particle Swarm Optimization and Scheduling Algorithm. Tsinghua University Press, Beijing, 2008.5.
[5] Chun Gong. Proficient in MATLAB Optimization Calculation. Electronic Industry Press, Beijing, 2009.
[6] ALFI Alireza. PSO with Adaptive Mutation and Inertia Weight and Its Application in Parameter Estimation of Dynamic Systems [J]. ACTA AUTOMATICA SINICA, 2011, 37(5): 541-549.
[7] Liping Zhang. The Particle Swarm Optimization Algorithm. Zhejiang University, 2005.
[8] Chuntao Man. Research on Particle Swarm Optimization Algorithm and Application in Steady-State Optimization of Process Control System. Harbin University of Science and Technology, 2009.
[9] Minghui Sun. Steady-State Optimization Control Study on Complicated System. Harbin University of Science and Technology, 2007.
[10] Kun Wang. Research on Steady-State Optimization of Industry Process Based on Improved PSO Algorithm. Harbin University of Science and Technology, 2009.
[11] Weibo Wang. Research on Particle Swarm Optimization Algorithm and Its Application. Southwest Jiaotong University, 2005.
[12] Guiyang Wang. Study on Optimal Control of Ground Source Heat Pump System Based on Matlab. Beijing University of Technology, 2013.
