Comparative Study on Several PSO Algorithms

Weichang Zhang, Ruihua Wang

1. College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124
E-mail: jwch1126@[Link]

Abstract: With the development of intelligent algorithms, GA and PSO have become hot spots in the study of multi-objective optimization in recent years. Information sharing is the core of the PSO algorithm. Compared with GA, the PSO algorithm has fewer variables to adjust and is easier to implement, so it is widely used. This paper focuses on the comparison of several PSO algorithms and introduces a PSO algorithm that performs better than the others.

Key Words: PSO Algorithm, Intelligent Algorithm, SelPSO

1 Introduction

PSO (Particle Swarm Optimization), based on the hunting behavior of birds, was proposed by Kennedy and Eberhart in 1995. The algorithm simulates the hunting behavior of birds by treating every bird as a point without weight or volume and the solution as the food the birds are looking for. The algorithm has been widely used since it was proposed, and many improved algorithms have come out at the same time. This paper will introduce an improved PSO algorithm and show its advantages in performance by comparing it with several PSO algorithms through experiments.

2 PSO Algorithm

2.1 Basic PSO Algorithm

The PSO algorithm is a random optimization algorithm based on swarm intelligence. It simulates the hunting behavior of birds, treating every bird as a point without weight or volume and the solution as the food the birds are looking for. Each particle has two attributes: velocity and position. It updates these attributes through fixed rules and reaches the optimal point after a large number of updates [1].

v_i^(k+1) = w*v_i^k + c1*r1*(pbest_i - x_i^k) + c2*r2*(gbest - x_i^k)   (1-1)

x_i^(k+1) = x_i^k + v_i^(k+1)   (1-2)

(1-1) and (1-2) are the update formulas. X_i = (x_i1, x_i2, x_i3, ..., x_iD) is the position of particle i, D is the dimension of the problem and k is the update number; c1 and c2 are learning factors, r1 and r2 are random numbers between 0 and 1; pbest_i stands for the best position found so far by particle i and gbest stands for the best position found so far by all particles in history.

Every particle gets random values for its position and velocity at the beginning and then updates them through formulas (1-1) and (1-2). Formula (1-1) contains three parts: w is the inertia factor, and w*v_i^k stands for the influence of the current velocity; it balances global search and local search. The second part stands for the influence of pbest; it is the memory of the particle itself. The third part stands for the influence of gbest; it is the memory of the entire swarm.

The basic PSO algorithm includes six steps:
(1) Initialize the position and velocity of every particle randomly.
(2) Evaluate the fitness value of every particle and update pbest and gbest based on the current search.
(3) Update the position and velocity by formulas (1-1) and (1-2).
(4) Update pbest if the fitness value is better than the current pbest.
(5) Compare all pbest with gbest and update gbest if there is a better fitness value.
(6) Stop iterating if the stopping conditions are satisfied; if not, go back to step (3) and continue.

2.2 Several Improved PSO Algorithms

2.2.1 LDWPSO

LDWPSO (Linearly Decreasing Weight Particle Swarm Optimization) is a means of avoiding the bad influence of a constant weight on the search. A large weight value is good for jumping out of local minima, while a small weight value allows accurate local search and contributes to the convergence of the algorithm.

w = wmax - t*(wmax - wmin)/tmax   (1-3)

The algorithm changes w through formula (1-3). wmax is the maximum value of w and wmin is the minimum; t is the current iteration and tmax is the total iteration number. w becomes smaller and smaller as t becomes larger, so the algorithm is good at global search at the beginning and at accurate local search at the end [2].

978-1-4799-3708-0/14/$31.00 ©2014 IEEE
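Formula (1-3) itself reduces to a one-line helper; a minimal sketch, where the bounds 0.9 and 0.4 are commonly used inertia-weight values assumed here rather than taken from the paper:

```python
def ldw_weight(t, t_max, w_max=0.9, w_min=0.4):
    # Formula (1-3): linearly decreasing inertia weight,
    # from w_max at t = 0 down to w_min at t = t_max.
    return w_max - t * (w_max - w_min) / t_max
```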
2.2.2 AsyLnCPSO

AsyLnCPSO is a method that changes c1 and c2, which express the degree to which the memory of the particle and of the swarm affects the update of position and velocity. At the beginning of the algorithm, a large c1 and a small c2 give the algorithm strong self-learning ability and weak social-learning ability; a small c1 and a large c2 at the end make the algorithm focus on social learning and reduce the effect of self-learning.

c1 = c1_ini + t*(c1_fin - c1_ini)/tmax   (1-4)

c2 = c2_ini + t*(c2_fin - c2_ini)/tmax   (1-5)

The update formulas for c1 and c2 are (1-4) and (1-5). c1_ini and c2_ini are the initial values of c1 and c2, and c1_fin and c2_fin are their final values. In this algorithm we let c1 decrease from 2.15 to 0.15 and c2 increase from 0.15 to 2.15 (see Table 1).

2.2.3 SAPSO

SAPSO balances global search ability and local search ability by changing the weight nonlinearly.

w = wmin + (wmax - wmin)*(f - fmin)/(favg - fmin),  if f <= favg
w = wmax,                                           if f > favg      (1-6)

As formula (1-6) shows, wmax is the maximum weight and wmin the minimum; favg stands for the average fitness value of all particles at present and fmin is the smallest of them. When the particles converge to the same value or to a local optimum, w gets larger, which contributes to the global search ability; w gets smaller when the particles are scattered, and a small w protects the particles that have good fitness values.

2.3 SelPSO

Unlike the three improved PSO algorithms above, which balance global search ability and local search ability by adjusting parameters, the main idea of SelPSO is to rank the particles by fitness value and use the positions and velocities of the better half of the particles to replace those of the worse half, while keeping the pbest of each particle and the gbest of the swarm unchanged, so that the algorithm can converge to the global optimum after iteration.

SelPSO includes seven steps:
(1) Initialize the position and velocity of every particle randomly.
(2) Evaluate the fitness value of every particle and update pbest and gbest based on the current search.
(3) Update the position and velocity by formulas (1-1) and (1-2).
(4) Update pbest if the fitness value is better than the current pbest.
(5) Compare all pbest with gbest and update gbest if there is a better fitness value.
(6) Rank the particles by fitness value and use the positions and velocities of the better half to replace those of the worse half, keeping pbest and gbest unchanged.
(7) Stop iterating if the stopping conditions are satisfied; if not, go back to step (3) and continue.

3 Experiment and Analysis

There are many standard test functions for PSO algorithms, and the Rastrigin function is one of them. Rastrigin has many deep local optima arranged in a sinusoidal pattern, which makes it one of the hardest functions for optimization. The experiment uses Rastrigin as the test function for the five algorithms. The optimal value of Rastrigin is 0, reached at x = [0, 0, ..., 0].

The five algorithms are set with the same parameters and the experiment shows their difference in performance. The range of x is [-5.12, 5.12]. Table 1 shows the parameter values and Table 2 shows the difference in performance.

Table 1. Parameters

Swarm Number        80
w (weight)          0.73
c1                  2 (2.15/0.15 for AsyLnCPSO)
c2                  2 (2.15/0.15 for AsyLnCPSO)
Iteration Number    500
Repetitions         100
Dimension           3

Table 2. Performance

Algorithm     Percentage of optimal results (%)   Average of results
Basic PSO     20                                  0.9452
LDWPSO        40                                  0.6569
AsyLnCPSO     65                                  0.4974
SAPSO         32                                  0.8817
SelPSO        90                                  0.0703

2014 26th Chinese Control and Decision Conference (CCDC)
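As a sketch only, SelPSO's distinctive step (6) — ranking by fitness and copying the better half's positions and velocities over the worse half while leaving pbest and gbest untouched — can be written in Python and tried on the Rastrigin function from Section 3. The function names and the parameter defaults (c1 = c2 = 1.49 for stable convergence) are assumptions of this example, not values from the paper:

```python
import math
import random

def rastrigin(p):
    # Rastrigin test function: global optimum 0 at the origin.
    return sum(t * t - 10 * math.cos(2 * math.pi * t) + 10 for t in p)

def select_replace(x, v, fitness):
    # SelPSO step (6): rank particles by fitness (smaller is better) and copy
    # the position and velocity of the better half onto the worse half.
    # pbest and gbest are deliberately not touched here.
    order = sorted(range(len(x)), key=lambda i: fitness[i])
    half = len(x) // 2
    for good, bad in zip(order[:half], order[half:]):
        x[bad] = x[good][:]
        v[bad] = v[good][:]

def selpso(f, dim, n=80, iters=500, w=0.73, c1=1.49, c2=1.49,
           lo=-5.12, hi=5.12):
    # Steps (1)-(5) are the basic PSO loop; step (6) is select_replace above.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], fx
                if fx < gbest_val:
                    gbest, gbest_val = x[i][:], fx
        select_replace(x, v, [f(xi) for xi in x])   # step (6)
    return gbest_val
```

Because pbest and gbest survive the replacement, particles cloned onto a good position still remember their own history, which is the mechanism the paper credits for avoiding premature convergence.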


From Table 2 we can see that, compared with the other four algorithms, SelPSO performs better in both the percentage of optimal results and the average of results: SelPSO finds 0 in 90 runs out of 100, and its average over 100 searches is closer to the optimal fitness value than that of the other four algorithms.

Table 3 shows the performance when the dimension is increased. Comparing with Table 2 we can see that all five algorithms have difficulty finding the optimal fitness value. In this situation SelPSO's performance is still better than the others', and its average is the closest to 0. Moreover, SelPSO uses less running time than the others when they are set to the same number of iterations.

Table 3. Performance

Algorithm     Percentage of optimal results (%)   Average of results
Basic PSO     1                                   5.641
LDWPSO        0                                   5.351
AsyLnCPSO     1                                   4.278
SAPSO         0                                   5.987
SelPSO        3                                   1.891

Analyzing the experimental results, I have several opinions about why the algorithms cannot find the optimal fitness value 100 percent of the time: (1) At the beginning of the algorithms, every particle gets random values of position and velocity. Scattered values of position and velocity are good for finding the global optimum; when the values are concentrated, it is easy for the particles to fall deeply into local optima. (2) The parameter values are set by experience; they may not be the best values for Rastrigin.

4 Conclusion

This paper introduces an improved PSO algorithm that changes the performance by adding ranking and replacement. On the one hand, the replacement promotes convergence of the particles to the best fitness value; on the other hand, the unchanged pbest and gbest keep the particles from falling into local optima. From the above we can see that SelPSO has strong search ability and performs better than the other four algorithms: it finds the optimal fitness value more often and its average is closer to the optimum. This paper shows the good performance of SelPSO, and future studies will focus on further improvement of SelPSO.

References

[1] Li Li, Ben Niu. PSO Algorithm. Science Press, Beijing, 2009.
[2] Li Liu. PSO Algorithm and Application for Engineering. Electronic Industry Press, 2010.
[3] Zhen Ji, Hunlian Liao, Qinghua Wu. PSO Algorithm and Application. Science Press, 2009.
[4] Ling Wang, Bo Liu. Particle Swarm Optimization and Scheduling Algorithm. Tsinghua University Press, Beijing, 2008.
[5] Chun Gong. Proficient in MATLAB Optimization Calculation. Electronic Industry Press, 2009.
[6] Alfi Alireza. PSO with Adaptive Mutation and Inertia Weight and Its Application in Parameter Estimation of Dynamic Systems. ACTA AUTOMATICA SINICA, 2011, 37(5): 541-549.
[7] Liping Zhang. The Particle Swarm Optimization Algorithm. Zhejiang University, 2005.
[8] Chuntao Man. Research on Particle Swarm Optimization Algorithm and Application in Steady-State Optimization of Process Control System. Harbin University of Science and Technology, 2009.
[9] Minghui Sun. Steady-State Optimization Control Study on Complicated System. Harbin University of Science and Technology, 2007.
[10] Kun Wang. Research on Steady-State Optimization of Industry Process Based on Improved PSO Algorithm. Harbin University of Science and Technology, 2009.
[11] Weibo Wang. Research on Particle Swarm Optimization Algorithm and Its Application. […] Jiaotong University, 2005.
[12] Guiyang […]. Research on Optimal Control of Ground Source Heat Pump System Based on […]. […] University of Technology, 2013.
