CAPSO: Chaos Adaptive Particle Swarm Optimization Algorithm
ABSTRACT As an influential technology of swarm evolutionary computing (SEC), the particle swarm optimization (PSO) algorithm has attracted extensive attention from all walks of life. However, how to rationally and effectively utilize population resources to balance exploration and exploitation remains a key unresolved issue. In this paper, we propose a novel PSO algorithm called Chaos Adaptive Particle Swarm Optimization (CAPSO), which adaptively adjusts the inertia weight parameter w and the acceleration coefficients c1 and c2, and introduces a control factor γ based on chaos theory to adaptively adjust the range of the chaotic search. This gives the algorithm favorable adaptability: the particles not only effectively avoid missing the global optimal solution, but also have a high probability of jumping out of local optimal solutions. To verify the stability, convergence speed, and accuracy of CAPSO, we conduct ample experiments on 6 test functions. In addition, to further verify the effectiveness and scalability of CAPSO, comparative experiments are carried out on the CEC2013 test suite. Finally, the results show that CAPSO outperforms the peer algorithms and achieves satisfactory performance.
INDEX TERMS Swarm evolutionary computing, particle swarm optimization, chaos theory, function optimization.
Lin et al. [21] design piecewise nonlinear acceleration coefficients (PNAC) and propose an improved SPSO algorithm (P-SPSO) based on PNAC. At the same time, they develop a mean differential mutation strategy (MDM) for the update mechanism of P-SPSO, yielding the mean-differential-mutation-strategy embedded P-SPSO (MP-SPSO). This algorithm shows significant gains in solution quality and robustness and has been successfully used in practical applications.

B. VELOCITY AND POSITION FORMULA
Combining the characteristics of wireless sensor networks, Nagireddy et al. [3] propose a PSO algorithm based on velocity adaptation (VAPSO), which improves the traditional velocity update formula by introducing the partial derivatives of the local and global optima with respect to time. It improves convergence and minimizes the positioning error, thereby helping to improve positioning accuracy and network lifetime.

C. INTRODUCE NEW RULES OR PARAMETERS
To balance the convergence and diversity of the population, Zhang et al. [22] propose an adaptive bare-bones particle swarm optimization (ABPSO) algorithm. It adds a disturbance value to each particle based on convergence and population diversity, and introduces a mutation operator to adaptively adjust the global search process. Moreover, to solve the resource-constrained project scheduling problem (RCPSP), Kumar and Vidyarthi [23] embed a valid particle generator (VPG) operator into PSO and propose an adaptive particle swarm optimization algorithm (A-PSO). This algorithm converts the invalid particles caused by the dependency behavior of RCPSP into valid particles, and it adjusts the inertia weight through three parameters (the fitness value, the previous inertia weight, and the iteration counter) to speed up convergence. Because the constriction factor of traditional PSO is large in the initial iterations, the global distribution of particles in the solution space cannot accurately track the local optimal solution, which makes convergence difficult. Acharya and Kumar [24] therefore propose a new exponential constriction factor (ECF) and apply it to channel equalization; their simulation results show that the algorithm strikes a good balance between local and global search and performs better. Yan et al. [25] introduce a constraint factor into the velocity update of SPSO and dynamically adjust the inertia weight according to an exponential decay schedule. This makes it possible not only to obtain good global search ability in the early stage of the optimization process, but also to obtain higher local search performance in the later stage. The results show that it outperforms other algorithms in terms of convergence speed and stability.

D. HYBRID APPROACH
Chuang et al. [26] propose an accelerated chaotic particle swarm optimization (ACPSO) algorithm by combining chaotic mapping and acceleration strategies. This algorithm searches for appropriate cluster centers in any data set and can efficiently find an ideal alternative solution to the data clustering problem. Wan et al. [27] combine PSO with a chaos searching technique (CST) to solve nonlinear bilevel programming problems. This method greatly increases the search diversity of the population and prevents the algorithm from being trapped by local optima. Gong et al. [28] propose a novel genetic learning PSO algorithm (GLPSO) by incorporating genetic evolution techniques. Specifically, it trains particles through the genetic algorithm and uses the experience of the particles to guide the evolution process, which gives GLPSO a significant improvement in search ability and efficiency. On this basis, Xia et al. [29] combine the genetic algorithm and propose triple archives particle swarm optimization (TAPSO), which improves search efficiency through the collaboration of particles playing three different roles. Wei et al. [30] propose multiple adaptive particle swarm optimization (MAPSO), which divides the population into multiple clusters in each iteration, adjusts the cluster distribution according to fitness, and breeds particles using differential evolution. Golchi et al. [31] propose a hybrid of the firefly algorithm and improved particle swarm optimization (IPSO) to optimize load balancing in a cloud environment, achieving a better average load and improving important indicators such as effective resource utilization and task response time. This algorithm not only has obvious advantages in convergence speed and response speed, but is also more flexible than other methods in minimizing the average load under different goals.

III. PROBLEM DEFINITION
A. NOTATION
The definitions of the notations used in this paper are shown in Table 1.

TABLE 1. Related notations.

B. PROBLEM DEFINITION
Because PSO is prone to premature convergence and has difficulty accurately locating the global optimum, Shi and Eberhart [32] introduce a new parameter, the inertia weight w, on top of the original parameters to allow finer adjustment of the algorithm. PSO algorithms with the w parameter are collectively referred to as standard particle swarm optimization (SPSO) algorithms.

Suppose a total of M particles search an N-dimensional space, where the local optimum of particle i and the global optimum of all particles can be expressed as pbest_i = (pbest_{i,1}, pbest_{i,2}, ..., pbest_{i,N}) and gbest = (gbest_1, gbest_2, ..., gbest_N). During each iteration of SPSO, the velocity v_{ij} and position x_{ij} of particle i in dimension j of the search space are adjusted by Eq. (1) and Eq. (2):

v_{ij}^{k+1} = w v_{ij}^{k} + c_1 r_1 (pbest_{ij} - x_{ij}^{k}) + c_2 r_2 (gbest_j - x_{ij}^{k}),  (1)

x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1},  (2)

where i = 1, 2, ..., M; j = 1, 2, ..., N; and r_1, r_2 are random numbers drawn from [0, 1].
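As a concrete illustration of Eq. (1) and Eq. (2), the following is a minimal Python sketch of one SPSO iteration. The function name and array shapes are our own illustrative choices, and drawing fresh random numbers per particle and per dimension is one common reading of Eq. (1), not something the paper specifies.

```python
import numpy as np

def spso_step(x, v, pbest, gbest, w, c1, c2, rng):
    """One SPSO iteration following Eq. (1)-(2).

    x, v, pbest: arrays of shape (M, N); gbest: array of shape (N,).
    r1 and r2 are drawn uniformly from [0, 1], as required by Eq. (1).
    """
    M, N = x.shape
    r1 = rng.random((M, N))
    r2 = rng.random((M, N))
    # Eq. (1): inertia term + cognitive term + social term
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Eq. (2): move each particle by its new velocity
    x = x + v
    return x, v

# Usage: x, v = spso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0,
#                         rng=np.random.default_rng(0))
```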
C. INFLUENCE OF PARAMETERS
The SPSO algorithm is simple, efficient, and easy to understand and implement. However, in the optimization process, the selection and adjustment of its parameters are closely related to the performance of the algorithm, as follows:

1) INERTIA WEIGHT w
In SPSO, the inertia weight adjusts the particle's ability to explore the solution space, and its value determines how much of the current velocity is retained during the velocity update. When the value of w is large (w > 1.2), the particles tend to search globally and constantly probe new areas; there is a high probability that the global optimal solution will be missed, and more iterations are needed to find it. When the value of w is moderate (0.8 < w < 1.2), the particles' global search ability is at its best. When the value of w is small (0.4 < w < 0.8), the particles tend to search local areas; in this case, if the particles are already searching near the global optimal solution, the possibility of finding it is higher. Shi and Eberhart [32] suggest dynamically adjusting the inertia weight during the optimization process, starting from a larger value of 0.9 (more inclined to global search) and decreasing it to 0.4 (more inclined to local search).
2) ACCELERATION COEFFICIENTS c_1, c_2
The acceleration coefficients, also called learning factors, adjust the cognitive abilities of the particle and the population. When their values are large, the particles can search quickly outside the target area and the search range is wide, but it is easy to miss the global optimal solution. When their values are small, the particles search within the target area; the search range is small, and it is difficult to jump out of a local optimum. For example, when c_1 = c_2 = 0, the particles can only move along their initial directions, the search range is small, and to a large extent the global optimal solution cannot be found. When c_1 = 0 and c_2 ≠ 0, the particles can only search based on the population experience, and it is also difficult to find the global optimal solution. When c_1 ≠ 0 and c_2 = 0, the population experience cannot be relied on; the particle itself cannot search effectively, and the search range is again small, making it difficult to find the global optimal solution. Therefore, to avoid harming the information exchange and optimization capabilities between particles, it is necessary to set appropriate learning factors. The setting strategies are generally divided into static and dynamic ones. A static strategy sets the learning factors to constants, usually 2 [32], although some scholars argue for 1.494 [33], or for 2.05 and 2.5 [34]. A dynamic strategy changes the values of the learning factors as the optimization progresses; for example, the values of c_1 and c_2 are continuously increased [35], or c_1 keeps decreasing while c_2 keeps increasing [36].

3) POPULATION SIZE M
For SPSO, a larger value of M means that more particles join the search. The search ability of the algorithm is then stronger and it is easier to find the global optimal solution, but the corresponding search takes longer. The smaller the value of M, the fewer particles join the search and the harder it is to find the global optimal solution, but the corresponding search time is shorter. For different optimization problems, M is generally set according to experience and the difficulty of the problem to be optimized.

4) MAXIMUM VELOCITY v_max
The purpose of the maximum velocity is to bound the change of particle velocity. The larger its value, the greater the amplitude of particle movement and the higher the possibility of missing the global optimal solution. The smaller its value, the smaller the amplitude of particle movement; it may then take a long time to find the required solution, and it is difficult to jump out of a local optimum. Related studies have shown that setting the maximum velocity has the same effect as adjusting the inertia weight w, so it is generally not tuned further.

Given the influence of these different parameters, CAPSO optimizes their adjustment process on the basis of SPSO, as introduced in detail in Section IV.
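To make the parameter discussion above concrete, here is a minimal Python sketch of the classic settings it cites: the linearly decreasing w of Shi and Eberhart [32], time-varying c_1/c_2 in the spirit of [36], and velocity clipping by v_max. The function names and the 2.5/0.5 endpoints for c_1 and c_2 are illustrative assumptions, not values taken from CAPSO.

```python
def linear_inertia(t, T, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from 0.9 to 0.4 over T iterations [32]."""
    return w_start - (w_start - w_end) * t / T

def time_varying_coeffs(t, T, c1_start=2.5, c1_end=0.5):
    """Dynamic strategy in the spirit of [36]: c1 decreases while c2 increases.

    The 2.5/0.5 endpoints are a common choice in the literature, assumed here.
    """
    c1 = c1_start + (c1_end - c1_start) * t / T
    c2 = c1_end + (c1_start - c1_end) * t / T  # mirror image of c1's schedule
    return c1, c2

def clip_velocity(v, vmax):
    """Bound each component of a NumPy velocity array to [-vmax, vmax]."""
    return v.clip(-vmax, vmax)
```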
IV. THE PROPOSED APPROACH CAPSO
A. ADAPTIVE INERTIA WEIGHT w
The inertia weight w determines the extent to which the current velocity is affected by the previous velocity, and its value strongly affects the accuracy and convergence speed of SPSO. In the early stage of the iteration, a larger w increases the particles' movement velocity and global search capability; in the later stage, a smaller w reduces the particles' movement velocity and makes them focus on local search to improve the accuracy of the optimal solution. Generally, w is decreased linearly as the iteration progresses. However, the optimization process of SPSO is very complicated, and a simple linear adjustment can no longer meet the needs of the algorithm.

The richer the diversity of the population, the easier it is to overcome the obstacles of local optima and find the global optimum, and the better the performance of the algorithm. The most common practice is to initialize the particles randomly, but to some extent this makes it difficult to ensure the ergodicity of the population, which affects the final result. In our work, the Logistic map is used to generate a series of chaotic variables with which the population is initialized, as shown in Eq. (6):

z_{k+1} = a z_k (1 - z_k),  (6)
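A minimal sketch of the chaotic initialization described above. The excerpt does not state the value of the Logistic-map parameter a used in CAPSO, so the common choice a = 4 (the fully chaotic regime), the seeding scheme, and the function name are all assumptions.

```python
import numpy as np

def logistic_init(M, N, lower, upper, a=4.0, seed=0):
    """Initialize an (M, N) population via the Logistic map of Eq. (6).

    lower/upper are length-N bounds of the solution space. a = 4 gives the
    fully chaotic regime; the paper's excerpt does not state its value of a.
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.1, 0.9, size=N)         # initial chaos variables per dimension
    pop = np.empty((M, N))
    for i in range(M):
        z = a * z * (1.0 - z)                 # Eq. (6): z_{k+1} = a z_k (1 - z_k)
        pop[i] = lower + z * (upper - lower)  # stretch [0, 1] chaos values to [l, u]
    return pop
```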
The range of the chaotic search is controlled by a factor γ whose definition involves an adjustment variable ξ ∈ [0, 1] that can be tuned according to the actual situation. Using γ to control the chaotic search, the process of optimizing gbest is shown in Algorithm 1.

Algorithm 1 Chaos Optimization of gbest
1. Scale each dimension of gbest = (gbest_1, gbest_2, ..., gbest_N) to the range [0, 1] according to gbest' = (gbest - l) / (u - l), where l = (l_1, l_2, ..., l_N) is the minimum value of the solution space and u = (u_1, u_2, ..., u_N) is the maximum value of the solution space;
2. Use Eq. (6) to map gbest' and generate the chaotic variable gbest^c;
3. Use the linear mapping gbest* = l + gbest^c × (u - l) to map the chaotic variable gbest^c back to the original value range and obtain gbest*;
4. Let x = (x_1, x_2, ..., x_N) with x_i = γ × gbest*_i for i = 1, 2, ..., N, which controls the neighborhood search range, and calculate the corresponding fitness value f(x);
5. If f(x) < f(gbest), then let gbest = x.

In the iterative search process, the value of the control factor γ decreases nonlinearly, so that the search range near the global optimal solution is gradually reduced and the current optimal point is replaced with a better one. In the early stage of the iterative search, the value of γ is large, so the population roughly searches the larger area around the current global optimal solution. In the middle and later stages, the value of γ is small, so a more refined search can be performed to find the global optimal point. In this way the search converges quickly, which reduces the running time.
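The following Python sketch mirrors the five steps of Algorithm 1 for a minimization problem. The objective `f`, the single chaotic iteration per call, and the map parameter a = 4 are assumptions where the excerpt leaves details open.

```python
def chaos_optimize_gbest(gbest, f, lower, upper, gamma, a=4.0):
    """One round of the chaotic gbest refinement in Algorithm 1 (minimization).

    gbest, lower, upper are NumPy arrays of shape (N,); f maps such an array
    to a scalar fitness value.
    """
    # Step 1: scale gbest into [0, 1] relative to the solution-space bounds l, u.
    g_scaled = (gbest - lower) / (upper - lower)
    # Step 2: apply the Logistic map of Eq. (6) to get the chaotic variable.
    g_chaotic = a * g_scaled * (1.0 - g_scaled)
    # Step 3: map the chaotic variable back to the original value range.
    g_star = lower + g_chaotic * (upper - lower)
    # Step 4: let gamma control how far the candidate strays in the neighborhood.
    x = gamma * g_star
    # Step 5: keep the candidate only if it improves the fitness.
    if f(x) < f(gbest):
        gbest = x
    return gbest
```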
It is assumed that the i-th iteration of the particle population takes time C_i, the maximum number of iterations is T, M is the number of particles in the population, M_i is the number of particles in the i-th iteration, and N is the dimension of the search space. Then the computational complexity of the proposed CAPSO is

complexity = \sum_{i=1}^{T} N × M_i × C_i.  (8)

V. EXPERIMENT
A. TEST FUNCTION
This work selects 6 commonly used test functions to test the performance of CAPSO, which are:

1) DROP-WAVE FUNCTION
The DROP-WAVE function is shown in Eq. (9), and its surface is illustrated in FIGURE 4(a). It can be seen that this function is multimodal and complex, and the value of its global optimal solution is -1.

f_1(x_1, x_2) = - (1 + \cos(12 \sqrt{x_1^2 + x_2^2})) / (0.5 (x_1^2 + x_2^2) + 2),  (9)
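For reference, a direct Python transcription of Eq. (9); the known global minimum f_1(0, 0) = -1 provides a quick sanity check.

```python
import numpy as np

def drop_wave(x1, x2):
    """DROP-WAVE test function of Eq. (9); global minimum is -1 at (0, 0)."""
    r2 = x1 ** 2 + x2 ** 2
    return -(1.0 + np.cos(12.0 * np.sqrt(r2))) / (0.5 * r2 + 2.0)

assert drop_wave(0.0, 0.0) == -1.0  # sanity check at the global optimum
```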
FIGURE 5. The change of the average fitness value of each algorithm on the GRIEWANK function.
The results of the mean, variance, and standard deviation of the difference between the fitness values of the repeated experiments and the actual global optimal solution of each algorithm are shown in Table 3, where the bolded data represent the smallest value achieved on the corresponding function.

It can be concluded that although the variance and standard deviation of CAPSO on the BUKIN function are not optimal, they are not much different from the results of LDWPSO. On the other test functions, CAPSO's indicators are better than those of the other algorithms. On the whole, the convergence accuracy of CAPSO is significantly better than that of the other algorithms. In addition, although CAPSO does not reach the global optimal solution on the SPHERE and GRIEWANK functions, it comes extremely close to it, indicating that CAPSO is more stable and has a lower probability of falling into a local optimal solution.

To evaluate convergence, we take the GRIEWANK function as an example. The average change of the best fitness value obtained by each algorithm over 30 repetitions on this function is shown by the curves in FIGURE 5. According to the curves, CAPSO is slower in the early stage and faster in the later stage, and it is better than SPSO and LDWPSO. Generally speaking, its speed is still within the acceptable range, and, most importantly, CAPSO has the highest convergence accuracy.
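The statistics reported throughout this section are straightforward to reproduce. Below is a minimal sketch of the protocol (30 independent runs, then the mean, variance, and standard deviation of the error against the known optimum); the function names are illustrative, not from the paper.

```python
import numpy as np

def evaluate(run_algorithm, f_star, repeats=30):
    """Run an optimizer `repeats` times and summarize its error statistics.

    run_algorithm: callable returning the best fitness value of one run.
    f_star: the known global optimal value of the test function.
    """
    errors = np.array([run_algorithm() - f_star for _ in range(repeats)])
    return errors.mean(), errors.var(), errors.std()
```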
When N = 10, since f1 and f2 are two-dimensional functions, we repeat the experiment 30 times on the f3–f6 test functions. The results of the mean, variance, and standard deviation of the difference between the fitness values of the repeated experiments and the actual global optimal solution of each algorithm are shown in Table 4, where the bolded data represent the smallest value achieved on the corresponding function.

It can be seen from Table 4 that CAPSO has obvious advantages over the other algorithms in terms of mean, variance, and standard deviation. Although the effect of CAPSO on the GRIEWANK function is not satisfactory, the results on the SPHERE and SCHWEFEL functions are approximately 0, indicating that CAPSO comes extremely close to, or reaches, the global optimal solution in the 30 repeated experiments. On the whole, the CAPSO algorithm we propose has higher convergence accuracy and stronger stability, and it finds the global optimal solution more easily.

D. CEC2013 TEST SUITE
To further verify the effectiveness of CAPSO, we conduct experiments on the CEC2013 test suite to assess the performance of the proposed algorithm in different environments. It should be noted that in CEC2013, f1–f5 are unimodal functions, f6–f20 are multimodal functions, and f21–f28 are composition functions. Moreover, we set the dimensions to N = 10 and N = 50, respectively, to verify the scalability of CAPSO.

1) PEER ALGORITHMS
We select 5 PSO variants that are widely used on CEC2013. To ensure the rigor and fairness of the comparative experiments, all relevant parameters of the peer algorithms are set according to the recommendations in the original papers. In addition, we ensure that all algorithms are run in the same environment to remove the effects of any random errors. All peer algorithms and their configuration information are recorded in Table 5.

TABLE 5. Parameter configuration information of peer algorithms.

2) PERFORMANCE ON CEC2013
Table 6 and Table 7 record the mean and standard deviation of CAPSO and the peer algorithms on the CEC2013 suite for N = 10 and N = 50, respectively. By comparison, we can gauge the effectiveness of CAPSO. It can be found that CAPSO almost always shows the best performance on the unimodal, multimodal, and composition functions with high optimization difficulty. Furthermore, CAPSO also displays better reliability when the test functions are extended to higher dimensions. This is mainly because CAPSO adaptively adjusts the inertia weight and acceleration coefficients, and adaptively adjusts the search range through the control factor based on chaos theory. Therefore, CAPSO has better adaptability, and it can more easily jump out of local optimal solutions and approach the global optimal solution. To sum up, our proposed CAPSO algorithm is effective.

TABLE 6. Test results of CAPSO and peer algorithms on the CEC2013 suite (N = 10).

TABLE 7. Test results of CAPSO and peer algorithms on the CEC2013 suite (N = 50).
VI. CONCLUSION AND FUTURE WORK
Based on in-depth research and analysis of traditional particle swarm optimization algorithms, this paper addresses complex function optimization problems and practical applications that are prone to poor convergence accuracy and an inability to effectively reach the global optimum. On the basis of chaos theory, we propose a chaos adaptive particle swarm optimization (CAPSO) algorithm. To prove the stability, convergence speed, and accuracy of CAPSO, experiments are performed on 6 test functions against other algorithms. The comparative analysis shows that although CAPSO has a slight deficiency in convergence speed, its convergence accuracy is higher, its stability is stronger, and it does not easily fall into local optima. To further prove the effectiveness and scalability of CAPSO, extensive experiments are performed on the CEC2013 test suite. All results comprehensively prove that CAPSO achieves satisfactory performance.

Furthermore, CAPSO owes its high search accuracy to a series of adaptive computations. However, its convergence speed, although within an acceptable range, is still slightly slow. In the future, we hope to further simplify the search process of the algorithm on the basis of the adaptive adjustment strategy to improve the convergence speed. Moreover, given the advantages of CAPSO in terms of convergence, stability, and accuracy, we believe that it can play a role in resource scheduling, load balancing, system optimization, and other fields. In follow-up work, we also expect to collaborate with other researchers to further explore and make breakthroughs in parameter sensitivity, high-dimensional solution spaces, multi-objective optimization, and related topics.

REFERENCES
[1] H. Han, X. Bai, H. Han, Y. Hou, and J. Qiao, "Self-adjusting multitask particle swarm optimization," IEEE Trans. Evol. Comput., vol. 26, no. 1, pp. 145–158, Feb. 2022.
[2] Z.-M. Gao, J. Zhao, Y.-R. Hu, and H.-F. Chen, "The challenge for the nature-inspired global optimization algorithms: Non-symmetric benchmark functions," IEEE Access, vol. 9, pp. 106317–106339, 2021.
[3] V. Nagireddy, P. Parwekar, and T. K. Mishra, "Velocity adaptation based PSO for localization in wireless sensor networks," Evol. Intell., vol. 14, no. 2, pp. 243–251, Jun. 2021.
[4] C. Shang, J. Gao, H. Liu, and F. Liu, "Short-term load forecasting based on PSO-KFCM daily load curve clustering and CNN-LSTM model," IEEE Access, vol. 9, pp. 50344–50357, 2021.
[5] X. Liu, P. Zhang, H. Fang, and Y. Zhou, "Multi-objective reactive power optimization based on improved particle swarm optimization with ε-greedy strategy and Pareto archive algorithm," IEEE Access, vol. 9, pp. 65650–65659, 2021.
[6] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE ICNN, vol. 4, Nov./Dec. 1995, pp. 1942–1948.
[7] E. Iranmehr, S. B. Shouraki, and M. M. Faraji, "Unsupervised feature selection for phoneme sound classification using particle swarm optimization," in Proc. 5th Iranian Joint Congr. Fuzzy Intell. Syst. (CFIS), Mar. 2017, pp. 86–90.
[8] J.-H. Seo, C.-H. Im, C.-G. Heo, J.-K. Kim, H.-K. Jung, and C.-G. Lee, "Multimodal function optimization based on particle swarm optimization," IEEE Trans. Magn., vol. 42, no. 4, pp. 1095–1098, Apr. 2006.
[9] G. Lei, "Application improved particle swarm algorithm in parameter optimization of hydraulic turbine governing systems," in Proc. IEEE 3rd Inf. Technol. Mechatronics Eng. Conf. (ITOEC), Oct. 2017, pp. 1135–1138.
[10] A. Marjani, S. Shirazian, and M. Asadollahzadeh, "Topology optimization of neural networks based on a coupled genetic algorithm and particle swarm optimization techniques (c-GA–PSO-NN)," Neural Comput. Appl., vol. 29, no. 11, pp. 1073–1076, Jun. 2018.
[11] M. Bouzidi, M. E. Riffi, and A. Serhir, "Discrete particle swarm optimization for travelling salesman problems: New combinatorial operators," in Proc. Int. Conf. Soft Comput. Pattern Recognit. Cham, Switzerland: Springer, 2017, pp. 141–150.
[12] S. N. Ghorpade, M. Zennaro, B. S. Chaudhari, R. A. Saeed, H. Alhumyani, and S. Abdel-Khalek, "A novel enhanced quantum PSO for optimal network configuration in heterogeneous industrial IoT," IEEE Access, vol. 9, pp. 134022–134036, 2021.
[13] T. M. Shami, A. A. El-Saleh, M. Alswaitti, Q. Al-Tashi, M. A. Summakieh, and S. Mirjalili, "Particle swarm optimization: A comprehensive survey," IEEE Access, vol. 10, pp. 10031–10061, 2022.
[14] X. Ji, Y. Zhang, D. Gong, and X. Sun, "Dual-surrogate-assisted cooperative particle swarm optimization for expensive multimodal problems," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 794–808, Aug. 2021.
[15] W. Zhang, J. Ma, L. Wang, and F. Jiang, "Particle-swarm-optimization-based 2D output feedback robust constraint model predictive control for batch processes," IEEE Access, vol. 10, pp. 8409–8423, 2022.
[16] Z. Zhang, H. Chen, Y. Yu, F. Jiang, and Q. S. Cheng, "Yield-constrained optimization design using polynomial chaos for microwave filters," IEEE Access, vol. 9, pp. 22408–22416, 2021.
[17] Y. Hu, F. Zhu, L. Zhang, Y. Lui, and Z. Wang, "Scheduling of manufacturers based on chaos optimization algorithm in cloud manufacturing," Robot. Comput.-Integr. Manuf., vol. 58, pp. 13–20, Aug. 2019.
[18] L. Zhang, H. Wang, J. Liang, and J. Wang, "Decision support in cancer base on fuzzy adaptive PSO for feedforward neural network training," in Proc. Int. Symp. Comput. Sci. Comput. Technol., vol. 1, 2008, pp. 220–223.
[19] N. Lynn and P. N. Suganthan, "Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation," Swarm Evol. Comput., vol. 24, pp. 11–24, Oct. 2015.
[20] W. Liu, Z. Wang, Y. Yuan, N. Zeng, K. Hone, and X. Liu, "A novel sigmoid-function-based adaptive weighted particle swarm optimizer," IEEE Trans. Cybern., vol. 51, no. 2, pp. 1085–1093, Feb. 2021.
[21] M. Lin, Z. Wang, F. Wang, and D. Chen, "Improved simplified particle swarm optimization based on piecewise nonlinear acceleration coefficients and mean differential mutation strategy," IEEE Access, vol. 8, pp. 92842–92860, 2020.
[22] Y. Zhang, D.-W. Gong, X.-Y. Sun, and N. Geng, "Adaptive bare-bones particle swarm optimization algorithm and its convergence analysis," Soft Comput., vol. 18, no. 7, pp. 1337–1352, Jul. 2014.
[23] N. Kumar and D. P. Vidyarthi, "A model for resource-constrained project scheduling using adaptive PSO," Soft Comput., vol. 20, no. 4, pp. 1565–1580, 2016.
[24] U. K. Acharya and S. Kumar, "Particle swarm optimization exponential constriction factor (PSO-ECF) based channel equalization," in Proc. 6th Int. Conf. Comput. Sustain. Global Develop., 2019, pp. 94–97.
[25] C.-M. Yan, G.-Y. Lu, Y.-T. Liu, and X.-Y. Deng, "A modified PSO algorithm with exponential decay weight," in Proc. 13th Int. Conf. Natural Comput., Fuzzy Syst. Knowl. Discovery (ICNC-FSKD), Jul. 2017, pp. 239–242.
[26] L.-Y. Chuang, C.-J. Hsiao, and C.-H. Yang, "Chaotic particle swarm optimization for data clustering," Expert Syst. Appl., vol. 38, no. 12, pp. 14555–14563, Nov. 2011.
[27] Z. Wan, G. Wang, and B. Sun, "A hybrid intelligent algorithm by combining particle swarm optimization with chaos searching technique for solving nonlinear bilevel programming problems," Swarm Evol. Comput., vol. 8, pp. 26–32, Feb. 2013.
[28] Y.-J. Gong, J.-J. Li, Y. Zhou, Y. Li, H. S.-H. Chung, Y.-H. Shi, and J. Zhang, "Genetic learning particle swarm optimization," IEEE Trans. Cybern., vol. 46, no. 10, pp. 2277–2290, Oct. 2016.
[29] X. Xia, L. Gui, F. Yu, H. Wu, B. Wei, Y.-L. Zhang, and Z.-H. Zhan, "Triple archives particle swarm optimization," IEEE Trans. Cybern., vol. 50, no. 12, pp. 4862–4875, Dec. 2020.
[30] B. Wei, X. Xia, F. Yu, Y. Zhang, X. Xu, H. Wu, L. Gui, and G. He, "Multiple adaptive strategies based particle swarm optimization algorithm," Swarm Evol. Comput., vol. 57, Sep. 2020, Art. no. 100731. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2210650220303849
[31] M. M. Golchi, S. Saraeian, and M. Heydari, "A hybrid of firefly and improved particle swarm optimization algorithms for load balancing in cloud environments: Performance evaluation," Comput. Netw., vol. 162, Oct. 2019, Art. no. 106860.
[32] Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," in Proc. Int. Conf. Evol. Program. Berlin, Germany: Springer, 1998, pp. 591–600.
[33] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 1, Jul. 2000, pp. 84–88.
[34] M. Clerc, "The swarm and the queen: Towards a deterministic and adaptive particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1951–1957.
[35] P. N. Suganthan, "Particle swarm optimiser with neighbourhood operator," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1958–1962.
[36] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, Jun. 2004.
[37] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1945–1950.
[38] M. Cheng and Y. Han, "Application of a modified CES production function model based on improved PSO algorithm," Appl. Math. Comput., vol. 387, Dec. 2020, Art. no. 125178.
[39] X. Liu, Y. Gu, S. He, Z. Xu, and Z. Zhang, "A robust reliability prediction method using weighted least square support vector machine equipped with chaos modified particle swarm optimization and online correcting strategy," Appl. Soft Comput., vol. 85, Dec. 2019, Art. no. 105873.

YOUXIANG DUAN received the B.S. degree from the College of Computer and System Science, Nankai University, in 1986, and the Ph.D. degree from the School of Geosciences, China University of Petroleum (East China), Qingdao, China, in 2017. He is currently a Professor with the College of Computer Science and Technology, China University of Petroleum (East China). His research interests include service computing, intelligent control, and machine learning.

YONGJING NI was born in 1981. She is currently pursuing the Ph.D. degree with Yanshan University. She is a Lecturer at the Hebei University of Science and Technology. Her research interests include computer networks and signal processing.

S. V. N. SANTHOSH KUMAR received the B.E. degree in computer science and engineering and the M.E. degree in software engineering from Anna University, Chennai, India, in 2011 and 2013, respectively. He is currently an Assistant Professor with VIT, Vellore Campus, India. He works in the areas of security and data dissemination in wireless sensor networks. He does research in information systems (business informatics), computer communications (networks), and computer security and reliability. He has published 20 articles in reputed international journals and conferences. His research interests include wireless sensor networks, the Internet of Things, and mobile computing. He is a Peer Reviewer of Peer-to-Peer Networking and Applications (Springer), IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, and Wireless Personal Communications.