
Received February 21, 2022, accepted March 7, 2022, date of publication March 10, 2022, date of current version March 21, 2022.

Digital Object Identifier 10.1109/ACCESS.2022.3158666

CAPSO: Chaos Adaptive Particle Swarm Optimization Algorithm

YOUXIANG DUAN 1, NING CHEN 1, LUNJIE CHANG 2, YONGJING NI 3, S. V. N. SANTHOSH KUMAR 4, AND PEIYING ZHANG 1

1 College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
2 Research Institute of Petroleum Exploration and Development, PetroChina Tarim Oilfield Company, Korla 841000, China
3 School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang, Hebei 050000, China
4 School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India

Corresponding author: Yongjing Ni ([email protected])


This work was supported in part by the Shandong Provincial Natural Science Foundation, China, under Grant ZR2020MF006; in part by
the Industry-University Research Innovation Foundation of Ministry of Education of China under Grant 2021FNA01001; in part by the
Major Scientific and Technological Projects of China National Petroleum Corporation (CNPC) under Grant ZD2019-183-006; in part by
the Fundamental Research Funds for the Central Universities of China University of Petroleum (East China) under Grant 20CX05017A;
and in part by the Youth Fund for Science and Technology Research in Colleges and Universities of Hebei Province under
Grant QN2021066.

ABSTRACT As an influential technique of swarm evolutionary computing (SEC), the particle swarm optimization (PSO) algorithm has attracted extensive attention from all walks of life. However, how to rationally and effectively utilize the population resources to balance exploration and exploitation remains a key open problem. In this paper, we propose a novel PSO algorithm called Chaos Adaptive Particle Swarm Optimization (CAPSO), which adaptively adjusts the inertia weight parameter w and the acceleration coefficients c1, c2, and introduces a controlling factor γ based on chaos theory to adaptively adjust the range of the chaotic search. This gives the algorithm favorable adaptability, so that the particles not only effectively avoid missing the global optimal solution, but also have a high probability of jumping out of local optima. To verify the stability, convergence speed, and accuracy of CAPSO, we conduct ample experiments on 6 test functions. In addition, to further verify the effectiveness and scalability of CAPSO, comparative experiments are carried out on the CEC2013 test suite. Finally, the results show that CAPSO outperforms its peer algorithms and achieves satisfactory performance.

INDEX TERMS Swarm evolutionary computing, particle swarm optimization, chaos theory, function
optimization.

I. INTRODUCTION
Most engineering optimization problems can be abstracted into the mathematical representation of multimodal functions with multiple minimum (maximum) values [1]. How to solve such problems has critically influenced academic dialogue [2]–[4]. The Particle Swarm Optimization (PSO) algorithm [5], researched and developed by Kennedy and Eberhart [6], is an important technique with uniqueness and effectiveness in optimization problems. Once published, it triggered a wave of research. For example, Iranmehr et al. [7] develop a new method based on PSO to extract audio features similar to human ears. Seo et al. [8] propose a multi-group particle swarm optimization (MGPSO) algorithm and further apply it to electromagnetic optimization problems. Lei [9] introduces the concave function form of the inertia weight into the PSO, and applies it to the model parameter optimization of the established Francis hydraulic turbine governing system. Marjani et al. [10] combine genetic algorithms and PSO to supervise neural networks. They track the connections between the applied operators and the layers through genetic algorithms and use PSO to check the values of all individual deviations and weights in the neural network to modify the best network topology. Bouzidi et al. [11] develop a new operator specifically used to solve combinatorial optimization problems, and embed it into the improved discrete particle swarm optimization algorithm (DPSO-CO), opening up a new horizon for solving the traveling salesman problem. Due to its excellent performance, PSO has been widely used in many fields [3], [4], [12] such as medicine, chemical industry, agriculture, finance, etc., and has successfully

The associate editor coordinating the review of this manuscript and approving it for publication was Kuo-Ching Ying.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 10, 2022 29393

FIGURE 1. The influence of the local best position pbest and the global best position gbest on population flight. x^k and x^{k+1} represent the position of the particle before and after the update, respectively. v^k and v^{k+1} represent the velocity of the particle before and after the update, respectively. v_pb represents the velocity when the particle moves towards pbest. v_gb represents the velocity when the particle moves towards gbest.

achieved satisfactory application results in resource scheduling, control system design, load problems, and power system optimization.

As the key algorithm for swarm intelligence optimization, the principle of PSO is to simulate the social behavior of biological communities [13], and to use the evolution and iteration of the population to achieve the purpose of optimization. In the process of aggregation and predation, the position corresponds to the solution of the problem, and the velocity determines the direction and distance of the next search [14]. The movement direction of a particle is adjusted according to the position closest to the food found by itself and by the entire group [5]. Therefore, the location of the food that the population is looking for can be abstracted as the best solution of the problem. Moreover, PSO judges whether the position of a particle is the best according to the value of the fitness function, which plays a guiding role for the flight of the population [15]. As shown in FIGURE 1, in each iteration, the direction and distance of the particle search are adjusted in time under the joint influence of its local optimal position and the global optimal position of all particles. This is iterated many times until the stopping conditions are met.

However, in the optimization process of traditional PSO, problems such as low convergence accuracy and difficulty in finding the global optimum are prone to occur. Meanwhile, the Chaos Optimization Algorithm (COA) [16] can provide search diversity in the optimization process, and has been successfully used in robot optimization control, parameter optimization in control systems, financial systems, manufacturer scheduling [17], etc. In COA, chaotic mapping, as a simple and effective mapping method, can improve the exploration of meta-heuristic algorithms. Inspired by COA making full use of the characteristics of chaotic variables within a certain range of the search space to improve the probability of finding global optimal solutions, we propose the Chaos Adaptive Particle Swarm Optimization algorithm (CAPSO). The main contributions of CAPSO can be summarized as follows:

1) We combine linear and nonlinear inertia weight to adaptively adjust the local and global search ability of the particles. A nonlinear method is adopted to adjust the acceleration coefficients adaptively so that the particles can quickly obtain the global optimal solution, avoid falling into local optima, and speed up convergence.
2) The Logistic mapping is used to initialize the population, which increases the richness and diversity of the population and makes it easier for particles to jump out of local optima.
3) To replace the current global optimal point with a better point, the control factor γ is introduced to adaptively adjust the search range of the particles near the global optimal solution.
4) Finally, extensive experiments are carried out to verify the effectiveness of the algorithm. The results show that CAPSO not only can dynamically adjust the value of each parameter, but also has high convergence accuracy and strong stability.

The rest of this paper is organized as follows. The related work is reviewed in Section II. Section III presents the problem definition of this paper. The proposed CAPSO algorithm is presented in Section IV. In Section V, the results of our experiments are provided. Finally, in Section VI, we summarize our work.

II. RELATED WORK
At present, the related work on improved PSO algorithms can be roughly summarized into the following aspects.

A. INERTIA WEIGHT AND ACCELERATION COEFFICIENT
Zhang et al. [18] introduce a fuzzy system into PSO for the breast cancer problem, and propose a fuzzy adaptive PSO algorithm to adjust the search ability of the algorithm. The application of fuzzy adaptive PSO to train a feedforward neural network is more accurate and stable, can converge to the best position faster, and reduces the risk of overfitting to a certain extent. Lynn and Suganthan propose a novel PSO called Heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [19]. It divides the population into two sub-populations for exploration and exploitation respectively, and proposes a comprehensive learning strategy combined with particle experience for the search process, which makes the algorithm obtain satisfactory convergence. Inspired by the activation functions widely used in neural networks, Liu et al. [20] combine the Sigmoid function for weighting to adaptively adjust the acceleration coefficient, and propose Adaptive Weighted Particle Swarm Optimization (AWPSO), which significantly improves the convergence of the population. To address the slightly insufficient performance of the standard particle swarm optimization algorithm (SPSO) on high-dimensional complex optimization problems, Lin et al. [21] introduce a new parameter adjustment strategy of piecewise nonlinear


acceleration coefficient (PNAC), and propose an improved SPSO algorithm (P-SPSO) based on PNAC. At the same time, they also develop a mean difference mutation strategy (MDM) for the update mechanism of P-SPSO, which is called mean-differential-mutation-strategy embedded P-SPSO (MP-SPSO). This algorithm has significant effects in terms of solution quality and robustness and has been successfully applied in practical applications.

B. VELOCITY AND POSITION FORMULA
Combining the characteristics of wireless sensor networks, Nagireddy et al. [3] propose a PSO algorithm based on velocity adaptation (VAPSO), which improves the traditional velocity update formula by introducing partial derivatives of the local and global optima with respect to time. It improves the convergence and minimizes the positioning error, thereby helping to improve the positioning accuracy and network lifetime.

C. INTRODUCE NEW RULES OR PARAMETERS
To balance the convergence and diversity of the population, Zhang et al. [22] propose an adaptive bare-bones particle swarm optimization (ABPSO) algorithm. It adds a disturbance value to each particle through convergence and population diversity, and introduces a mutation operator to adaptively adjust the global search process. Moreover, to solve the problem of resource-constrained project scheduling (RCPSP), Kumar and Vidyarthi [23] embed the valid particle generator (VPG) operator into the PSO, and propose an adaptive particle swarm optimization algorithm (A-PSO). This algorithm can convert the invalid particles caused by the dependent behavior of RCPSP into effective particles, and adjusts the inertia weight through three parameters (the fitness value, the previous inertia weight, and the iteration counter) to speed up the convergence of the algorithm. Due to the large shrinkage factor of traditional PSO in the initial iteration process, its global distribution in the solution space cannot accurately track the local optimal solution, which leads to difficulty in convergence. Acharya and Kumar [24] propose a new shrinkage factor (ECF) and apply it to channel equalization. The simulation results prove that this algorithm achieves a good balance between local and global search, and has better performance. Yan et al. [25] introduce the constraint factor into the velocity update of the SPSO, and dynamically adjust the inertia weight according to an exponential decay mode. This makes it possible not only to obtain good global search ability in the early stage of the optimization process, but also to obtain higher local search performance in the later stage. The results show that it has more advantages than other algorithms in terms of convergence speed and stability.

D. HYBRID APPROACH
Chuang et al. [26] propose an accelerated chaotic particle swarm optimization (ACPSO) algorithm by combining chaotic mapping and acceleration strategies. This algorithm searches for the appropriate cluster center through any data set, and can efficiently find an ideal alternative solution to the data clustering problem. Wang et al. [27] combine PSO and Chaos Search Technology (CST) to solve the problem of nonlinear bipolar programming. This method can greatly increase the search diversity of the population and avoid the algorithm being captured by local particles. Gong et al. [28] propose a novel Genetic Learning PSO Algorithm (GLPSO) by combining genetic evolution techniques. Specifically, it trains particles through the genetic algorithm and uses the experience of particles to guide the evolution process, which gives GLPSO a significant improvement in search ability and efficiency. On this basis, Xia et al. [29] combine the genetic algorithm and propose Triple Archive Particle Swarm Optimization (TAPSO) to improve search efficiency through the collaboration of particles in three different roles. Wei et al. [30] propose Multiple Adaptive Particle Swarm Optimization (MAPSO), which divides the population into multiple clusters in each iteration, adjusts the cluster distribution according to fitness, and breeds particles using differential evolution. Golchi [31] proposes a hybrid algorithm of firefly and improved particle swarm optimization (IPSO) to optimize load balancing in a cloud environment, achieving a better average load and improving important indicators such as effective resource utilization and task response time. This algorithm not only has obvious advantages in convergence speed and response speed, but also has better flexibility than other methods in minimizing the average load across different goals.

III. PROBLEM DEFINITION
A. NOTATION
The definitions of the related notations used in this paper are shown in Table 1.

TABLE 1. Related notations.

B. PROBLEM DEFINITION
Because PSO is prone to premature convergence and has difficulty accurately searching for the global optimum, Shi and Eberhart [32] introduce a new parameter, the inertia weight w, on top of the original parameters to make finer adjustments to the algorithm. Such PSO algorithms with the w parameter are collectively referred to as standard particle swarm optimization algorithms (SPSO).

Suppose a total of M particles search in an N-dimensional space, where the local optima of particles


and the global optima of all particles can be broadly expressed as pbest_i = (pbest_{i,1}, pbest_{i,2}, ..., pbest_{i,N}) and gbest = (gbest_1, gbest_2, ..., gbest_N). During each iteration of SPSO, the velocity v_{ij} and position x_{ij} of particle i in the j-th dimension of the search space are adjusted by Eq. 1 and Eq. 2:

v_{ij}^{k+1} = w v_{ij}^{k} + c_1 r_1 \left( pbest_{ij} - x_{ij}^{k} \right) + c_2 r_2 \left( gbest_{j} - x_{ij}^{k} \right),  (1)

x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1},  (2)

where i = 1, 2, ..., M, j = 1, 2, ..., N, and r_1, r_2 are random numbers in [0, 1].
numbers belonging to [0, 1].
C. INFLUENCE OF PARAMETERS
The SPSO algorithm is simple, efficient, and easy to understand and implement. However, in the optimization process, the selection and adjustment of the various parameters are closely related to the performance of the algorithm, as follows:

1) INERTIAL WEIGHT w
In SPSO, the inertia weight can be used to adjust the particle's ability to explore the solution space, and its value determines how strongly the current velocity is carried over during the velocity update process. When the value of w is large (w > 1.2), the particles tend to search globally and constantly try to search new areas. There is a high probability that the global optimal solution will be missed, and more iterations are needed to find it. When the value of w is moderate (0.8 < w < 1.2), the particles' global search ability is at its best. When the value of w is small (0.4 < w < 0.8), the particles tend to search local areas. At this time, if the particles search near the global optimal solution, the possibility of finding it is higher. Shi and Eberhart [32] suggest that the inertia weight w be dynamically adjusted during the optimization process, starting from a larger value of 0.9 (more inclined to global search) and dynamically reducing to 0.4 (more inclined to local search).

2) ACCELERATION COEFFICIENTS c1, c2
The acceleration coefficient is also called the learning factor, and it adjusts the cognitive ability of the particle and the population. When its value is large, the particles can search quickly outside the target area and the search range is wide, but it is easy to miss the global optimal solution. When its value is small, the particles search within the target area. The particle search range is small, but it is difficult to jump out of local optima. For example, when c1 = c2 = 0, the particles can only move along their initial direction, the search range of the particles is small, and to a large extent the global optimal solution cannot be found. When c1 = 0, c2 ≠ 0, the particles can only search based on population experience, and it is also difficult to find the global optimal solution. When c1 ≠ 0, c2 = 0, the population experience cannot be relied on. In this case, the particle itself cannot search effectively, and the search range is also small, making it difficult to find the global optimal solution. Therefore, to avoid impairing the information exchange and optimization capabilities between particles, it is necessary to set appropriate learning factors. Furthermore, the setting strategy is generally divided into static and dynamic strategies. A static strategy sets the learning factor to a constant, usually 2 [32], though some scholars argue for 1.494 [33], 2.05, or 2.5 [34], etc. A dynamic strategy lets the value of the learning factor change dynamically with the optimization process; for example, the values of c1 and c2 are continuously increased [35], or c1 keeps decreasing while c2 keeps increasing [36], etc.

3) POPULATION SIZE M
For SPSO, a larger value of M means that more particles search. The search ability of the algorithm is stronger and it is easier to find the global optimal solution, but the corresponding search takes longer. The smaller the value of M, the fewer particles search and the more difficult it is to find the global optimal solution, but the corresponding search is shorter. For different optimization problems, M is generally set according to experience and the difficulty of the problem to be optimized.

4) MAXIMUM VELOCITY vmax
The purpose of the maximum velocity is to control the change of the particle velocity. The larger its value, the greater the amplitude of particle movement and the higher the possibility of missing the global optimal solution. The lower its value, the smaller the amplitude of particle movement; it may take a long time to find the required solution and it is difficult to jump out of local optima. Related studies have shown that the effect of setting the maximum velocity is the same as that of adjusting the inertia weight w, so it is generally not adjusted further.

Given the influence of the different parameters, CAPSO optimizes the adjustment and change process of the parameters on the basis of SPSO, which will be introduced in detail in Section IV.

IV. THE PROPOSED APPROACH CAPSO
A. ADAPTIVE INERTIA WEIGHT w
The inertia weight w determines the extent to which the current velocity is affected by the previous velocity, and its value seriously affects the accuracy and convergence speed of the SPSO. In the early stage of the iteration, a larger w can increase the particle's movement velocity and global search capability. In the later stage of the iteration, a smaller w can reduce the moving velocity of the particles and make them focus on the local search to improve the accuracy of the optimal solution. Generally, as the iteration progresses, w decreases linearly. However, the optimization process of SPSO is very complicated, and a simple linear adjustment can no longer meet the needs of the algorithm. We combine


linear and nonlinear w to adaptively adjust the particle's local and global search ability. The specific adjustment method is shown in Eq. 3:

w = \begin{cases} w_{max}, & 0 \le k/T \le 0.1, \\ w_{min} + \dfrac{w_{max} + w_{min}}{e^{-w_{max}} + e^{-1.2 + 20k/T}}, & \text{else}, \end{cases}  (3)

where wmax and wmin are the predefined maximum and minimum values of the inertia weight, respectively.
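A sketch of this piecewise schedule (Eq. 3) in Python; the function name is ours, and the defaults wmax = 0.9, wmin = 0.4 follow the values suggested by Shi and Eberhart [32]:

```python
import math

def inertia_weight(k, T, w_max=0.9, w_min=0.4):
    """Eq. 3: hold w at w_max for the first 10% of the iterations, then let it
    decay nonlinearly towards w_min as k approaches T."""
    if k / T <= 0.1:
        return w_max
    return w_min + (w_max + w_min) / (math.exp(-w_max) + math.exp(-1.2 + 20 * k / T))
```

The exponential term e^{-1.2 + 20k/T} grows quickly once k/T passes 0.1, so the fraction shrinks and w slides from near w_max down towards w_min, matching the intended global-then-local search behavior.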

B. ADAPTIVE ACCELERATION COEFFICIENTS c1, c2
The acceleration coefficients c1 and c2 are designed to enable the algorithm to quickly cover the entire search space in the early stage, and to improve the accuracy and convergence speed of the algorithm in the later stage. In most existing work they are set to constants, but on the whole, adaptive adjustment can increase the ability of the particles to find the optimal solution and give the algorithm better performance. We adopt a nonlinear strategy that gradually decreases the value of c1 from 2.5 while gradually increasing the value of c2 from 0.5. In this way, the particles are more inclined to seek the optimal solution based on their own experience in the early iterations, increasing the diversity of the search while quickly approaching the global optimal solution and avoiding falling into local optima. In the later iterations, the particles are more inclined to seek the optimal solution based on the experience of the population and have a strong local search ability, which refines the accuracy of the global optimal solution and speeds up convergence. The specific adjustment strategies for the acceleration coefficients c1 and c2 are shown in Eq. 4 and Eq. 5:

c_1 = 0.5 \left( \dfrac{k}{T} \right)^2 (c_{max} - c_{min}) + c_{min},  (4)

c_2 = \left( \dfrac{k}{T} \right)^2 (c_{max} - c_{min}) + c_{max},  (5)

where cmax and cmin are the predefined maximum and minimum values of the acceleration coefficient, respectively.

C. INITIAL POPULATION
In the SPSO, initializing the population is the first and critical step. The stronger the ergodicity of the initial population, the richer the diversity of the population, and the easier it is to overcome the obstacles of local optima to find the global optimum, so that the performance of the algorithm is superior. In general, the most common approach is to initialize the particles randomly, but to some extent this makes it difficult to ensure the ergodicity of the population, which affects the final result. In our work, the Logistic mapping is used to generate a series of chaotic variables to initialize the population, as shown in Eq. 6:

z_{k+1} = a z_k (1 - z_k),  (6)

FIGURE 2. Logistic mapping.

where z_{k+1} is the value after mapping, z_k is the value before mapping, and a is a control variable. As illustrated in FIGURE 2, when a changes from 3.569945672 to 4, the chaotic state of the system gradually changes from the initial state to the complete state. When its value exceeds 4, the system becomes unstable. Therefore, the value of a is generally set to 4. To better illustrate the superiority of Logistic chaotic mapping over randomly initialized populations, we conduct experiments with 1000 iterations, and the results are shown in FIGURE 3. It can be seen that the ergodicity of the points generated by the Logistic chaotic mapping is better than that of the random function.

D. CHAOS OPTIMIZATION
When some particles are searching near the global optimal solution, if they move at their previous velocity, they may miss the position of the solution. When a particle moves to the vicinity of a local optimal value, as the iteration continues, the remaining particles will move in the direction of this particle, thereby causing the particles to fall into the local optimum. To solve this problem, we introduce the control factor γ shown in Eq. 7, which not only effectively prevents the particles from missing the global optimal solution, but also gives them a high probability of jumping out of local optima:

γ = ξ + \dfrac{1}{e^{0.1 + 5k/T}},  (7)

where ξ ∈ [0, 1] is an adjustment variable that can be tuned according to the actual situation.

We use γ to control the chaotic search, and the process of optimizing gbest is shown in Algorithm 1.

In the process of the iterative search, the value of the control factor γ decreases nonlinearly, so that the search range near the global optimal solution is gradually reduced, and the current optimal point is replaced with a better point. In the early iterative search process, the value of γ is large, so that the population roughly searches the larger area around the current global optimal solution. In the middle and later


FIGURE 3. Comparison chart of Logistic chaotic mapping and random function.

Algorithm 1 Chaos Optimization of gbest
1. Scale the value of each dimension of gbest = (gbest_1, gbest_2, ..., gbest_N) to the range [0, 1] according to gbest' = (gbest - l)/(u - l), where l = (l_1, l_2, ..., l_N) is the minimum value of the solution space and u = (u_1, u_2, ..., u_N) is the maximum value of the solution space;
2. Use Eq. 6 to map gbest' and generate the chaotic variables gbest^c;
3. Use the linear mapping gbest* = l + gbest^c × (u - l) to map the chaotic variable gbest^c back to the original value range, and get gbest*;
4. According to x_i = γ × gbest*_i, i = 1, 2, ..., N, control the search of the neighborhood range and calculate the corresponding fitness function value f(x_i);
5. If f(x_i) < gbest, then let gbest = f(x_i).

iterative search process, the value of γ is small, so that a more refined search can be performed to find the global optimal point. In this case, the search can converge quickly, thereby reducing the running time.

E. CAPSO ALGORITHM FLOW
Function optimization problems include minimum and maximum optimization. Without loss of generality, in this paper we carry out the corresponding research on the minimum optimization of multimodal functions. Based on our work, the maximum problem is also easy to derive.

Suppose the constrained optimization problem is expressed as: min f_i(x), X = (x_1, x_2, ..., x_i, ..., x_M) is the particle population, and the position of each particle is x_i = (x_{i1}, x_{i2}, ..., x_{ij}, ..., x_{iN}), where x_{ij} ∈ [l_j, u_j]. The algorithm steps of CAPSO are shown in Algorithm 2.

It is assumed that the i-th iteration of the particle population takes C_i time, the maximum number of iterations is T, M is the number of particles in the population, M_i is the number of particles in the i-th iteration of the population, and N is the dimension of the search space. Then the computational complexity of the proposed CAPSO is:

complexity = N \times \sum_{i=1}^{T} M_i \times C_i.  (8)

V. EXPERIMENT
A. TEST FUNCTION
This work selects 6 commonly used test functions to test the performance of CAPSO, which are:

1) DROP-WAVE FUNCTION
The DROP-WAVE function is shown in Eq. 9, and its image is illustrated in FIGURE 4(a). It can be seen that this function is multimodal and complex, and the value of the global optimal solution is -1.

f_1(x_1, x_2) = - \dfrac{1 + \cos\left( 12 \sqrt{x_1^2 + x_2^2} \right)}{0.5 \left( x_1^2 + x_2^2 \right) + 2},  (9)

where the domain of definition is x_i ∈ [-5.12, 5.12], i = 1, 2.

2) BUKIN FUNCTION
The BUKIN function is shown in Eq. 10, and its image is illustrated in FIGURE 4(b). It can be seen that this function has many local minima, the value of its global optimal solution is 0, and these minima all lie along a ridge.

f_2(x_1, x_2) = 100 \sqrt{\left| x_2 - 0.01 x_1^2 \right|} + 0.01 \left| x_1 + 10 \right|,  (10)

where the domain of definition is x_1 ∈ [-15, -5] and x_2 ∈ [-3, 3].


FIGURE 4. Test function.

Algorithm 2 The Algorithm Steps of CAPSO
Initialization parameters: wmax, wmin, cmax, cmin, ξ, T, l, u, k = 0.
Initialize the particle population X:
1. Randomly initialize each dimension x_{ij} of each particle x_i, where 0 ≤ x_{ij} ≤ 1 and x_{ij} ∉ {0, 0.25, 0.5, 0.75, 1};
2. Use Eq. 6 to map the population X and generate the chaotic variable X^c;
3. Use the linear mapping X^k = l + X^c × (u - l) to map X^c to the original value range, and then obtain the initial population X^k = (x_1^k, x_2^k, ..., x_i^k, ..., x_M^k).
repeat
1. Calculate the fitness value f_i(x_i^k) of each particle x_i for the fitness function f_i(x).
2. if f_i(x_i^k) < pbest_i then pbest_i = f_i(x_i^k) end
3. if f_i(x_i^k) < gbest_i then gbest_i = f_i(x_i^k) end
4. Update the inertia weight w and the acceleration coefficients c1, c2 according to Eq. 3, Eq. 4, and Eq. 5;
5. Update the velocity v_{ij}^k and the position x_{ij}^k of each particle in the population X^k according to Eq. 1 and Eq. 2;
6. According to Algorithm 1, use the control factor γ to control the chaotic search and optimize gbest;
7. k++.
until the cycle iterates T times or the fitness function value f_i(X^k) no longer changes.
Output: global optimal solution gbest.

TABLE 2. Comparison of parameter settings.

3) RASTRIGIN FUNCTION
The RASTRIGIN function is shown in Eq. 11, and its image is illustrated in FIGURE 4(c). It can be seen that this multimodal function has a global optimal solution with a value of 0 and multiple local optimal solutions.

f_3(x) = 10N + \sum_{i=1}^{N} \left[ x_i^2 - 10 \cos(2\pi x_i) \right],  (11)

where the domain of definition is x_i ∈ [-5.12, 5.12], i = 1, 2, ..., N.

4) GRIEWANK FUNCTION
The GRIEWANK function is shown in Eq. 12, and its image is illustrated in FIGURE 4(d). It can be seen that this function has a global minimum value of 0 and multiple local minima.

f_4(x) = \sum_{i=1}^{N} \dfrac{x_i^2}{4000} - \prod_{i=1}^{N} \cos\left( \dfrac{x_i}{\sqrt{i}} \right) + 1,  (12)

where the domain of definition is x_i ∈ [-10, 10], i = 1, 2, ..., N.

5) SCHWEFEL FUNCTION
The SCHWEFEL function is shown in Eq. 13, and its image is illustrated in FIGURE 4(e). It can be seen that this function has multiple local minima and the value of the global optimal solution is 0.

f_5(x) = 418.9829 N - \sum_{i=1}^{N} x_i \sin\left( \sqrt{|x_i|} \right),  (13)

where the domain of definition is x_i ∈ [-500, 500], i = 1, 2, ..., N.

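The three benchmark functions above can be transcribed directly from Eqs. 11–13. A minimal Python sketch (the function names and list-based signatures are ours, not from the paper):

```python
import math

def rastrigin(x):
    # Eq. 11: global minimum 0 at x = (0, ..., 0); x_i in [-5.12, 5.12].
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def griewank(x):
    # Eq. 12: global minimum 0 at x = (0, ..., 0); x_i in [-10, 10].
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return s - p + 1.0

def schwefel(x):
    # Eq. 13: global minimum near 0 at x_i ≈ 420.9687; x_i in [-500, 500].
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```

Evaluating each function at its stated minimizer returns (approximately) the global optimum of 0, which matches the descriptions above.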
VOLUME 10, 2022 29399

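The loop of Algorithm 2 can also be sketched in code. Since Eqs. 1–6 and Algorithm 1 are defined earlier in the paper and are not reproduced in this excerpt, the sketch below substitutes common stand-ins: the logistic map for the chaotic map of Eq. 6, linearly varying w, c1, c2 for Eqs. 3–5, the standard PSO velocity/position updates for Eqs. 1–2, and a shrinking chaotic perturbation of gbest for the γ-controlled search of Algorithm 1. All of these stand-ins are our assumptions for illustration, not the paper's exact formulas.

```python
import random

def logistic(x):
    # Assumed stand-in for Eq. 6 (logistic chaotic map).
    return 4.0 * x * (1.0 - x)

def capso(f, n_dim, l, u, n_particles=30, T=300,
          w_max=0.9, w_min=0.4, c_max=2.5, c_min=0.5, seed=1):
    rng = random.Random(seed)

    def chaos_unit():
        # Init steps 1-2: random value in (0, 1) avoiding the fixed points
        # {0, 0.25, 0.5, 0.75, 1}, pushed through the chaotic map.
        x = rng.random()
        while x in (0.0, 0.25, 0.5, 0.75, 1.0):
            x = rng.random()
        return logistic(x)

    # Init step 3: linear mapping X^k = l + X^c * (u - l).
    X = [[l + chaos_unit() * (u - l) for _ in range(n_dim)]
         for _ in range(n_particles)]
    V = [[0.0] * n_dim for _ in range(n_particles)]
    pbest_x = [p[:] for p in X]
    pbest_f = [f(p) for p in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest_x, gbest_f = pbest_x[g][:], pbest_f[g]

    for k in range(T):
        t = k / max(T - 1, 1)
        w = w_max - (w_max - w_min) * t   # stand-in for Eq. 3
        c1 = c_max - (c_max - c_min) * t  # stand-in for Eq. 4
        c2 = c_min + (c_max - c_min) * t  # stand-in for Eq. 5
        for i in range(n_particles):
            for j in range(n_dim):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (pbest_x[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest_x[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], l), u)  # clamp to [l, u]
            fi = f(X[i])
            if fi < pbest_f[i]:
                pbest_f[i], pbest_x[i] = fi, X[i][:]
                if fi < gbest_f:
                    gbest_f, gbest_x = fi, X[i][:]
        # Step 6: chaotic search around gbest; gamma shrinks the search
        # radius over time (stand-in for the control factor of Algorithm 1).
        gamma = 1.0 - t
        cand = [min(max(xj + gamma * (u - l) * (chaos_unit() - 0.5), l), u)
                for xj in gbest_x]
        fc = f(cand)
        if fc < gbest_f:
            gbest_f, gbest_x = fc, cand
    return gbest_x, gbest_f
```

On a 2-D sphere function over [−5.12, 5.12], this sketch converges close to the origin, illustrating the interplay of the chaotic initialization, the adaptive schedules, and the chaotic refinement of gbest.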

Y. Duan et al.: CAPSO: Chaos Adaptive Particle Swarm Optimization Algorithm

6) SPHERE FUNCTION
The SPHERE function is shown in Eq. 14, and its image is illustrated in FIGURE 4(f). It can be seen that this function is unimodal and convex, and the value of the global optimal solution is 0.

    f_6(x) = \sum_{i=1}^{N} x_i^2    (14)

where the domain of definition is x_i ∈ [−5.12, 5.12], i = 1, 2, ..., N.

B. BASELINE
We compare CAPSO with the Standard Particle Swarm Optimization algorithm (SPSO) [32], the Linearly Decreasing Inertia Weight Particle Swarm Optimization algorithm (LDWPSO) [37], ChPSO (a PSO algorithm improved by Cheng and Han) [38], and the Chaos Modified Particle Swarm Optimization algorithm (CMPSO) [39] in terms of convergence speed and accuracy. The parameters of each algorithm are adjusted and set for each test function, as shown in Table 2. To account for the influence of randomness, the dimension N is set to 2 and 10 for all algorithms, and each experiment is repeated 30 times.

TABLE 3. Test function results of each algorithm when N = 2.

FIGURE 5. The change of average fitness value of each algorithm on the GRIEWANK function.

C. PERFORMANCE
When N = 2, the results for the mean, variance, and standard deviation of the difference between the fitness value of the repeated experiment on the test functions f1 − f6 and the


actual global optimal solution of each algorithm are shown in Table 3, where the bolded data represent the smallest value achieved on the corresponding function.

It can be concluded that although the variance and standard deviation of CAPSO on the BUKIN function are not optimal, they differ little from the results of LDWPSO. On the other test functions, CAPSO's indicators are better than those of the other algorithms. On the whole, the convergence accuracy of CAPSO is significantly better than that of the other algorithms. In addition, although CAPSO does not reach the global optimal solution on the SPHERE and GRIEWANK functions, it comes extremely close, indicating that CAPSO is more stable and has a lower probability of falling into a local optimum.

To evaluate convergence, we take the GRIEWANK function as an example. The average change of the best fitness value obtained by each algorithm over 30 repetitions on this function is shown by the curves in FIGURE 5. According to the curves, CAPSO is slower in the early stage and faster in the later stage, and it outperforms SPSO and LDWPSO. Generally speaking, its speed is still within the acceptable range, and, most importantly, CAPSO attains the highest convergence accuracy.

TABLE 4. Test function results of each algorithm when N = 10.

When N = 10, since f1 and f2 are two-dimensional functions, we repeat the experiment 30 times on the f3 − f6 test functions. The results for the mean, variance, and standard deviation of the difference between the fitness value of the repeated experiment and the actual global optimal solution of each algorithm are shown in Table 4, where the bolded data represent the smallest value on the corresponding function.

It can be seen from Table 4 that CAPSO has obvious advantages over the other algorithms in terms of mean, variance, and standard deviation. Although the performance of CAPSO on the GRIEWANK function is not satisfactory, the results on the SPHERE and SCHWEFEL functions are approximately 0, indicating that CAPSO comes extremely close to, or reaches, the global optimal solution in the 30 repeated experiments. On the whole, the CAPSO algorithm we propose has higher convergence accuracy and stronger stability, and it finds the global optimal solution more easily.

D. CEC2013 TEST SUITE
To further verify the effectiveness of CAPSO, we conduct experiments on the CEC2013 test suite to verify the performance of the proposed algorithm in different environments. It should be noted that in CEC2013, f1-f5 are unimodal functions, f6-f20 are multimodal functions, and f21-f28 are composition functions. Moreover, we set the dimensions to N = 10 and N = 50, respectively, to verify the scalability of CAPSO.

1) PEER ALGORITHMS
We selected 5 PSO variants that are widely used on CEC2013. To ensure the rigor and fairness of the comparative experiments, all relevant parameters of the peer algorithms are set according to the recommendations in the original papers. In addition, we ensure that all algorithms are run in the same environment to remove the effects of random errors. All peer algorithms and their configuration information are recorded in Table 5.

TABLE 5. Parameter configuration information of peer algorithms.

2) PERFORMANCE ON CEC2013
Table 6 and Table 7 record the mean and standard deviation of the peer algorithms on the CEC2013 suite for N = 10 and N = 50, respectively. The comparison shows the effectiveness of CAPSO: it achieves the best performance on almost all unimodal functions, multimodal functions, and composition functions with high optimization difficulty. Furthermore, CAPSO also displays better reliability when the test functions are extended to higher dimensions. This is mainly because CAPSO adaptively adjusts the inertia weight and acceleration coefficients, and adaptively adjusts the search range through the control factor of chaos theory. Therefore, CAPSO has better adaptability and can easily jump out of local optima and approach the global optimal solution. To sum up, our proposed CAPSO algorithm is effective.

TABLE 6. Test results of CAPSO and peer algorithms on CEC2013 suite (N = 10).

TABLE 7. Test results of CAPSO and peer algorithms on CEC2013 suite (N = 50).

VI. CONCLUSION AND FUTURE WORK
Based on in-depth research and analysis of traditional particle swarm optimization algorithms, this paper aims to deal with complex function optimization problems and practical


applications that are prone to poor convergence accuracy and the inability to effectively obtain the global optimum. On the basis of chaos theory, we propose a chaotic adaptive particle swarm optimization (CAPSO) algorithm. To prove the stability, convergence speed, and accuracy of CAPSO, experiments are performed on 6 test functions against other algorithms. The comparative analysis shows that although CAPSO has a slight deficiency in convergence speed, its convergence accuracy is higher, its stability is stronger, and it does not easily fall into local optima. To further prove the effectiveness and scalability of CAPSO, extensive experiments are performed on the CEC2013 test suite. All results comprehensively prove that CAPSO achieves satisfactory performance.

Furthermore, CAPSO achieves advanced retrieval accuracy due to a series of adaptive computations. However, its convergence speed, although within an acceptable range, is still slightly slower. In the future, we hope to further simplify the search process of the algorithm based on the adaptive adjustment strategy to improve the convergence speed. Moreover, due to the advantages of CAPSO in terms of convergence, stability, and accuracy, we believe that it will play a role in resource scheduling, load problems, system optimization, and other fields. In follow-up work, we also expect to work with other researchers to further explore and make breakthroughs in parameter sensitivity, high-dimensional solution spaces, multi-objective optimization, etc.

REFERENCES
[1] H. Han, X. Bai, H. Han, Y. Hou, and J. Qiao, "Self-adjusting multitask particle swarm optimization," IEEE Trans. Evol. Comput., vol. 26, no. 1, pp. 145–158, Feb. 2022.
[2] Z.-M. Gao, J. Zhao, Y.-R. Hu, and H.-F. Chen, "The challenge for the nature-inspired global optimization algorithms: Non-symmetric benchmark functions," IEEE Access, vol. 9, pp. 106317–106339, 2021.
[3] V. Nagireddy, P. Parwekar, and T. K. Mishra, "Velocity adaptation based PSO for localization in wireless sensor networks," Evol. Intell., vol. 14, no. 2, pp. 243–251, Jun. 2021.
[4] C. Shang, J. Gao, H. Liu, and F. Liu, "Short-term load forecasting based on PSO-KFCM daily load curve clustering and CNN-LSTM model," IEEE Access, vol. 9, pp. 50344–50357, 2021.
[5] X. Liu, P. Zhang, H. Fang, and Y. Zhou, "Multi-objective reactive power optimization based on improved particle swarm optimization with ε-greedy strategy and Pareto archive algorithm," IEEE Access, vol. 9, pp. 65650–65659, 2021.
[6] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE ICNN, vol. 4, Nov./Dec. 1995, pp. 1942–1948.
[7] E. Iranmehr, S. B. Shouraki, and M. M. Faraji, "Unsupervised feature selection for phoneme sound classification using particle swarm optimization," in Proc. 5th Iranian Joint Congr. Fuzzy Intell. Syst. (CFIS), Mar. 2017, pp. 86–90.
[8] J.-H. Seo, C.-H. Im, C.-G. Heo, J.-K. Kim, H.-K. Jung, and C.-G. Lee, "Multimodal function optimization based on particle swarm optimization," IEEE Trans. Magn., vol. 42, no. 4, pp. 1095–1098, Apr. 2006.
[9] G. Lei, "Application improved particle swarm algorithm in parameter optimization of hydraulic turbine governing systems," in Proc. IEEE 3rd Inf. Technol. Mechatronics Eng. Conf. (ITOEC), Oct. 2017, pp. 1135–1138.
[10] A. Marjani, S. Shirazian, and M. Asadollahzadeh, "Topology optimization of neural networks based on a coupled genetic algorithm and particle swarm optimization techniques (c-GA–PSO-NN)," Neural Comput. Appl., vol. 29, no. 11, pp. 1073–1076, Jun. 2018.
[11] M. Bouzidi, M. E. Riffi, and A. Serhir, "Discrete particle swarm optimization for travelling salesman problems: New combinatorial operators," in Proc. Int. Conf. Soft Comput. Pattern Recognit. Cham, Switzerland: Springer, 2017, pp. 141–150.
[12] S. N. Ghorpade, M. Zennaro, B. S. Chaudhari, R. A. Saeed, H. Alhumyani, and S. Abdel-Khalek, "A novel enhanced quantum PSO for optimal network configuration in heterogeneous industrial IoT," IEEE Access, vol. 9, pp. 134022–134036, 2021.
[13] T. M. Shami, A. A. El-Saleh, M. Alswaitti, Q. Al-Tashi, M. A. Summakieh, and S. Mirjalili, "Particle swarm optimization: A comprehensive survey," IEEE Access, vol. 10, pp. 10031–10061, 2022.
[14] X. Ji, Y. Zhang, D. Gong, and X. Sun, "Dual-surrogate-assisted cooperative particle swarm optimization for expensive multimodal problems," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 794–808, Aug. 2021.
[15] W. Zhang, J. Ma, L. Wang, and F. Jiang, "Particle-swarm-optimization-based 2D output feedback robust constraint model predictive control for batch processes," IEEE Access, vol. 10, pp. 8409–8423, 2022.
[16] Z. Zhang, H. Chen, Y. Yu, F. Jiang, and Q. S. Cheng, "Yield-constrained optimization design using polynomial chaos for microwave filters," IEEE Access, vol. 9, pp. 22408–22416, 2021.
[17] Y. Hu, F. Zhu, L. Zhang, Y. Lui, and Z. Wang, "Scheduling of manufacturers based on chaos optimization algorithm in cloud manufacturing," Robot. Comput.-Integr. Manuf., vol. 58, pp. 13–20, Aug. 2019.
[18] L. Zhang, H. Wang, J. Liang, and J. Wang, "Decision support in cancer base on fuzzy adaptive PSO for feedforward neural network training," in Proc. Int. Symp. Comput. Sci. Comput. Technol., vol. 1, 2008, pp. 220–223.
[19] N. Lynn and P. N. Suganthan, "Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation," Swarm Evol. Comput., vol. 24, pp. 11–24, Oct. 2015.
[20] W. Liu, Z. Wang, Y. Yuan, N. Zeng, K. Hone, and X. Liu, "A novel sigmoid-function-based adaptive weighted particle swarm optimizer," IEEE Trans. Cybern., vol. 51, no. 2, pp. 1085–1093, Feb. 2021.
[21] M. Lin, Z. Wang, F. Wang, and D. Chen, "Improved simplified particle swarm optimization based on piecewise nonlinear acceleration coefficients and mean differential mutation strategy," IEEE Access, vol. 8, pp. 92842–92860, 2020.
[22] Y. Zhang, D.-W. Gong, X.-Y. Sun, and N. Geng, "Adaptive bare-bones particle swarm optimization algorithm and its convergence analysis," Soft Comput., vol. 18, no. 7, pp. 1337–1352, Jul. 2014.
[23] N. Kumar and D. P. Vidyarthi, "A model for resource-constrained project scheduling using adaptive PSO," Soft Comput., vol. 20, no. 4, pp. 1565–1580, 2016.
[24] U. K. Acharya and S. Kumar, "Particle swarm optimization exponential constriction factor (PSO-ECF) based channel equalization," in Proc. 6th Int. Conf. Comput. Sustain. Global Develop., 2019, pp. 94–97.
[25] C.-M. Yan, G.-Y. Lu, Y.-T. Liu, and X.-Y. Deng, "A modified PSO algorithm with exponential decay weight," in Proc. 13th Int. Conf. Natural Comput., Fuzzy Syst. Knowl. Discovery (ICNC-FSKD), Jul. 2017, pp. 239–242.
[26] L.-Y. Chuang, C.-J. Hsiao, and C.-H. Yang, "Chaotic particle swarm optimization for data clustering," Expert Syst. Appl., vol. 38, no. 12, pp. 14555–14563, Nov. 2011.
[27] Z. Wan, G. Wang, and B. Sun, "A hybrid intelligent algorithm by combining particle swarm optimization with chaos searching technique for solving nonlinear bilevel programming problems," Swarm Evol. Comput., vol. 8, pp. 26–32, Feb. 2013.
[28] Y.-J. Gong, J.-J. Li, Y. Zhou, Y. Li, H. S.-H. Chung, Y.-H. Shi, and J. Zhang, "Genetic learning particle swarm optimization," IEEE Trans. Cybern., vol. 46, no. 10, pp. 2277–2290, Oct. 2016.
[29] X. Xia, L. Gui, F. Yu, H. Wu, B. Wei, Y.-L. Zhang, and Z.-H. Zhan, "Triple archives particle swarm optimization," IEEE Trans. Cybern., vol. 50, no. 12, pp. 4862–4875, Dec. 2020.
[30] B. Wei, X. Xia, F. Yu, Y. Zhang, X. Xu, H. Wu, L. Gui, and G. He, "Multiple adaptive strategies based particle swarm optimization algorithm," Swarm Evol. Comput., vol. 57, Sep. 2020, Art. no. 100731.
[31] M. M. Golchi, S. Saraeian, and M. Heydari, "A hybrid of firefly and improved particle swarm optimization algorithms for load balancing in cloud environments: Performance evaluation," Comput. Netw., vol. 162, Oct. 2019, Art. no. 106860.
[32] Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," in Proc. Int. Conf. Evol. Program. Berlin, Germany: Springer, 1998, pp. 591–600.
[33] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 1, Jul. 2000, pp. 84–88.


[34] M. Clerc, "The swarm and the queen: Towards a deterministic and adaptive particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1951–1957.
[35] P. N. Suganthan, "Particle swarm optimiser with neighbourhood operator," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1958–1962.
[36] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, Jun. 2004.
[37] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 3, Jul. 1999, pp. 1945–1950.
[38] M. Cheng and Y. Han, "Application of a modified CES production function model based on improved PSO algorithm," Appl. Math. Comput., vol. 387, Dec. 2020, Art. no. 125178.
[39] X. Liu, Y. Gu, S. He, Z. Xu, and Z. Zhang, "A robust reliability prediction method using weighted least square support vector machine equipped with chaos modified particle swarm optimization and online correcting strategy," Appl. Soft Comput., vol. 85, Dec. 2019, Art. no. 105873.

YOUXIANG DUAN received the B.S. degree from the College of Computer and System Science, Nankai University, in 1986, and the Ph.D. degree from the School of Geosciences, China University of Petroleum (East China), Qingdao, China, in 2017. He is currently a Professor with the College of Computer Science and Technology, China University of Petroleum (East China). His research interests include service computing, intelligent control, and machine learning.

NING CHEN is currently pursuing the master's degree with the College of Computer Science and Technology, China University of Petroleum (East China). His research interests include cross-modal retrieval, deep learning, particle swarm optimization, network virtualization, blockchain, and microservice management.

LUNJIE CHANG is currently a Professor-Level Senior Engineer at the Petroleum Exploration and Development Research Institute, PetroChina Tarim Oilfield Company. He has published more than 40 papers in journals or conferences, such as Xinjiang Petroleum Geology, Natural Gas Geoscience, Geophysical and Geochemical Exploration Computing Technology, Petroleum Exploration and Development, Earth Science Frontiers, and Oil and Gas Geology. His research interests mainly include geology, mining engineering technology, oil and gas field well development engineering, composite films, computer architecture, fracture, and other directions.

YONGJING NI was born in 1981. She is currently pursuing the Ph.D. degree with Yanshan University. She is a Lecturer at the Hebei University of Science and Technology. Her research interests include computer networks and signal processing.

S. V. N. SANTHOSH KUMAR received the B.E. degree in computer science and engineering and the M.E. degree in software engineering from Anna University, Chennai, India, in 2011 and 2013, respectively. He is currently an Assistant Professor with VIT, Vellore Campus, India. He works in the areas of security and data dissemination in wireless sensor networks. He does research in information systems (business informatics), computer communications (networks), and computer security and reliability. He has published 20 articles in reputed international journals and conferences. His research interests include wireless sensor networks, the Internet of Things, and mobile computing. He is a peer reviewer of Peer-to-Peer Networking and Applications (Springer), IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, and Wireless Personal Communications.

PEIYING ZHANG is currently pursuing the Ph.D. degree in information and communication engineering with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China. He is an Associate Professor with the College of Computer and Communication Engineering, China University of Petroleum (East China), Qingdao, China. Since 2016, he has published multiple IEEE/ACM transactions, journal, and magazine articles, in venues such as IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, IEEE Network, IEEE ACCESS, IEEE INTERNET OF THINGS JOURNAL, ACM TALLIP, Computer Communications, and IEEE Communications Magazine. His current research interests include semantic computing, deep learning, network virtualization, and future network architecture. He has served on the Technical Program Committees of ISCIT 2016–2019, GLOBECOM 2019, COMNETSAT 2020, SoftIoT 2021, CBIoT 2021, DPRR 2021, IWCMC-Satellite 2019, and IWCMC-Satellite 2020.
