Reduction of Artificial Bee Colony Algorithm for Global Optimization

Michiharu Maeda, Shinya Tsuda

Neurocomputing 148 (2015) 70–74
Journal homepage: www.elsevier.com/locate/neucom
Article history: Received 6 April 2012; Accepted 12 June 2012; Available online 6 August 2014.

Keywords: Artificial bee colony algorithm; Global optimization; Metaheuristics; Reduction.

Abstract. This paper presents a reduction of the artificial bee colony algorithm for global optimization. The artificial bee colony algorithm is an optimization technique inspired by the behavior of honeybee swarms, and a multi-point search approach that finds the best solution using multiple bees. To avoid local minima, a number of bees are initially prepared and their positions are updated by the artificial bee colony algorithm. Bees are sequentially reduced, according to their evaluation values, until a predetermined number of them remains, and the artificial bee colony algorithm continues until the termination condition is met. In order to show the effectiveness of the proposed algorithm, we examine the best value on test functions in comparison with existing algorithms. Furthermore, the influence of the initial number of bees on the best value of our algorithm is discussed.
1. Introduction

As problems to be solved grow in scale and complexity, it becomes difficult to obtain an optimal solution to an optimization problem, and the processing requires an immense amount of time. Metaheuristics have attracted attention in this situation since they can obtain an approximate solution in polynomial time [1]. Metaheuristics are optimization approaches which iteratively improve the best solution found so far and use it to guide the next search. Metaheuristics include, for example, the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), and the artificial bee colony (ABC) algorithm. GA is a search algorithm which carries out the genetic operations of selection, crossover, and mutation [2]. DE constructs mutants as a weighted sum of a base vector and a difference vector: an individual selected from the population becomes the base vector, and the difference between a pair of individuals becomes the difference vector [3]. PSO is a multi-point search algorithm using multiple candidate solutions called particles, which perform the solution search by sharing information among the particles [4]. The ABC algorithm is an optimization approach inspired by the behavior of honeybee swarms [5]. Its motion model of the honeybee consists of food sources, employed bees, onlooker bees, and scout bees. Although the ABC algorithm can be applied to a number of optimization problems, the solution may fall into a local minimum, and it is difficult to find an optimal solution for a complicated objective function such as a multimodal function with many local minima. Various approaches have also been introduced for unit reduction, and many discussions have been made for multilayer neural networks [6,7]. For the purpose of vector quantization, self-organizing algorithms with reduction techniques have been proposed [8,9]. For optimization problems, particle swarm optimization with reduction exists [10], but an artificial bee colony algorithm with reduction has not been proposed.

In this study, we present a reduction of the artificial bee colony algorithm for global optimization. A number of bees are initially prepared and their positions are updated by the artificial bee colony algorithm. Bees are sequentially reduced, based on their evaluation values, until a predetermined number of them remains, and the artificial bee colony algorithm continues until the termination condition is met. In order to show that our algorithm is effective in solution quality, experimental results are presented in comparison with existing algorithms. Moreover, we examine the influence of the initial number of bees on the best value of our algorithm.

2. Artificial bee colony algorithm

The artificial bee colony (ABC) algorithm is an optimization approach inspired by the behavior of honeybee swarms. The motion model of the honeybee in the natural world consists of food sources and three kinds of bee swarms (employed bees, onlooker bees, and scout bees). The ABC algorithm sends employed bees to food sources and intensively searches around food sources with high fitness values by utilizing onlooker bees and scout bees. The bee swarm searches for food sources as described below.
(1) Phase of employed bees: Each employed bee searches for food sources of high quality around the food source associated with it. For all food sources $x_i$ ($i = 1, 2, \ldots, m$), candidate positions $v_i$ are calculated as follows:

$$v_i = x_i + \phi_i (x_i - x_j) \qquad (1)$$

where $\phi_i$ is a uniform random number in $[-1, 1]$ and $x_j$ ($j \ne i$) is chosen randomly. Comparing positions $v_i$ and $x_i$, the position with the better value becomes position $x_i$.

(2) Phase of onlooker bees: In accordance with the information obtained by the search of the employed bees, onlooker bees intensively search the neighborhoods of food sources with high evaluation. Onlooker bees choose food sources based on the probability $p_i$ expressed as follows:

$$p_i = \frac{F_i}{\sum_{n=1}^{m} F_n} \qquad (2)$$

3.2. One search point $x_c^k$ is selected by the roulette rule grounded in the probability $p_i^k$. For $x_c^k$ only, the candidate position $v_c^k$ is produced as in Step 2. Comparing positions $v_c^k$ and $x_c^k$, the position with the better value becomes position $x_c^k$.
4. Processing of scout bees: For search points $x_i^k$ which have not been updated within $T_{\mathrm{limit}}$ iterations, $x_i^k$ is changed according to Eq. (4).
5. Reduction of bees: If $k = uq$ and $m > m_f$, then reduce bee $j$ and set $m \leftarrow m - 1$, where $q$ is a positive integer and $j = \arg\max_i f(x_i^{k+1})$.
6. Update of best value: Set $g^{k+1} \leftarrow x_s^{k+1}$, where $s = \arg\min_i f(x_i^{k+1})$.
7. Termination condition: If $k = T_{\mathrm{max}}$, then terminate; otherwise set $k \leftarrow k + 1$ and go to Step 2.
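For illustration, the employed-bee and onlooker-bee updates of Eqs. (1) and (2), together with Steps 4 to 7 above, can be combined into a minimal sketch of the reduction scheme. The code below is a sketch under stated assumptions, not the authors' implementation: the fitness $F_i$ and the scout-phase rule of Eq. (4) are not reproduced in this excerpt, so a common ABC-style fitness of the form $1/(1 + f(x_i) - \min f)$ and uniform re-initialization are assumed, and the function name abc_reduction and its parameters are hypothetical.

```python
import numpy as np

def abc_reduction(f, bounds, m_init=50, m_final=20, t_max=1000,
                  q=10, limit=None, rng=None):
    """Sketch of the artificial bee colony algorithm with bee reduction (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T
    n = lo.size
    m = m_init
    limit = m * n if limit is None else limit          # T_limit = m * D as in the experiments

    x = rng.uniform(lo, hi, size=(m, n))               # food sources (search points)
    fx = np.array([f(xi) for xi in x])
    trials = np.zeros(m)                               # non-improvement counters for the scout phase
    best_i = fx.argmin()
    best, best_val = x[best_i].copy(), fx[best_i]

    def neighbour(i):
        # Eq. (1): v_i = x_i + phi_i (x_i - x_j), phi_i uniform in [-1, 1], j != i
        j = rng.choice([k for k in range(m) if k != i])
        v = x[i] + rng.uniform(-1.0, 1.0, n) * (x[i] - x[j])
        return np.clip(v, lo, hi)

    def greedy(i, v):
        # keep the better of v and x_i (greedy selection)
        fv = f(v)
        if fv < fx[i]:
            x[i], fx[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for k in range(1, t_max + 1):
        for i in range(m):                             # phase of employed bees
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + fx - fx.min())              # assumed fitness F_i for Eq. (2)
        p = fit / fit.sum()
        for _ in range(m):                             # phase of onlooker bees (roulette rule)
            c = rng.choice(m, p=p)
            greedy(c, neighbour(c))
        stale = trials.argmax()                        # Step 4: scout bees
        if trials[stale] >= limit:
            x[stale] = rng.uniform(lo, hi, n)
            fx[stale] = f(x[stale])
            trials[stale] = 0
        if k % q == 0 and m > m_final:                 # Step 5: remove the worst bee
            j = fx.argmax()
            x, fx, trials = np.delete(x, j, 0), np.delete(fx, j), np.delete(trials, j)
            m -= 1
        if fx.min() < best_val:                        # Step 6: update best value
            best_i = fx.argmin()
            best, best_val = x[best_i].copy(), fx[best_i]
    return best, best_val                              # Step 7 reached after t_max iterations
```

In this sketch, the reduction of Step 5 simply deletes the food source with the worst objective value every q iterations until the minimum population is reached, after which the loop behaves as the conventional ABC algorithm.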
Fig. 1. Shape of the test functions in two-dimensional space. (a) F1: 2^n minima function. (b) F2: Rastrigin's function. (c) F3: Levy's function. (d) F4: Schwefel's function. (e) F5: Shubert's function. (f) F6: Shekel's Foxholes function.
Table 3
Parameters.

Trials                10 000
Maximum iterations     1 000
Minimum population        20
Dimensions (n)             2
Table 4
Best value for each of real-coded genetic algorithm (RGA), differential evolution (DE), particle swarm optimization (PSO), artificial bee colony (ABC) algorithm, and the proposed algorithm (ABCR).

        RGA            DE             PSO            ABC            ABCR
F1     -1.55×10^2     -1.56×10^2     -1.55×10^2     -1.57×10^2     -1.57×10^2
F2      3.32×10^-2     6.55×10^-2     2.09×10^-2     1.39×10^-3     0.00
F3      1.35×10^-1     3.53×10^-2     9.14×10^-3     0.00           0.00
F4     -5.95×10^2     -7.80×10^2     -7.83×10^2     -8.36×10^2     -8.38×10^2
F5     -1.83×10^2     -1.86×10^2     -1.86×10^2     -1.87×10^2     -1.87×10^2
F6      5.91           2.10           1.39           1.03           9.98×10^-1
Fig. 2. Best value versus the initial number of bees (20 to 100) for each function. (a) F1: 2^n minima function. (b) F2: Rastrigin's function. (c) F3: Levy's function. (d) F4: Schwefel's function. (e) F5: Shubert's function. (f) F6: Shekel's Foxholes function.
F4 Schwefel's function:

$$F_4(x) = \sum_{i=1}^{n} \left( -x_i \sin \sqrt{|x_i|} \right) \qquad (8)$$

F5 Shubert's function:

$$F_5(x) = \left\{ \sum_{i=1}^{5} i \cos\left[(i+1)x_1 + i\right] \right\} + \left\{ \sum_{i=1}^{5} i \cos\left[(i+1)x_2 + i\right] \right\} \qquad (9)$$

F6 Shekel's Foxholes function:

$$F_6(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1} \qquad (10)$$

where

$$a_{ij} = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & \cdots & -32 & -16 & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & \cdots & 32 & 32 & 32 & 32 & 32 \end{pmatrix}$$
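For reference, the three test functions above can be transcribed directly into code. The snippet below is an illustrative two-dimensional transcription of Eqs. (8)-(10) as printed, not part of the original paper; the function names are chosen here for convenience.

```python
import numpy as np

def schwefel(x):
    """F4, Eq. (8): sum of -x_i * sin(sqrt(|x_i|))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(-x * np.sin(np.sqrt(np.abs(x)))))

def shubert(x):
    """F5, Eq. (9), two-dimensional, as printed."""
    x1, x2 = x
    i = np.arange(1, 6)
    return float(np.sum(i * np.cos((i + 1) * x1 + i))
                 + np.sum(i * np.cos((i + 1) * x2 + i)))

def shekel_foxholes(x):
    """F6, Eq. (10): Shekel's Foxholes function, two-dimensional."""
    x = np.asarray(x, dtype=float)
    base = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])
    a1 = np.tile(base, 5)                 # first row of a_ij
    a2 = np.repeat(base, 5)               # second row of a_ij
    denom = np.arange(1, 26) + (x[0] - a1) ** 6 + (x[1] - a2) ** 6
    return float(1.0 / (1.0 / 500.0 + np.sum(1.0 / denom)))

# Example: values near the known two-dimensional minimizers
print(schwefel([420.9687, 420.9687]))     # approximately -837.97
print(shekel_foxholes([-32.0, -32.0]))    # approximately 0.998
```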
Tables 1 and 2 show the domain of each test function and the position of its minimal solution, respectively. Fig. 1 shows the shape of each function in the two-dimensional case. Common parameters used in the experiments are shown in Table 3. The other parameters are given as follows: for RGA, mutation rate α = 0.5, crossover rate β = 0.35, and the number of crossovers n_c; for DE, mutation rate F = 0.5 and crossover rate C = 0.9; for PSO, cognitive weight c1 = 1.4955, social weight c2 = 1.4955, and inertia weight w = 0.729; for ABC and ABCR, limit number T_limit = m·D, where m is the total number of bees and D is the dimension of the problem.

Table 4 shows the best value averaged over 10 000 trials for each of RGA, DE, PSO, ABC, and ABCR. The number of individuals or particles for RGA, DE, and PSO is 20; the number of bees for ABC is 20, and that of ABCR is reduced from 50 to 20. The results show that the proposed algorithm reaches a lower or equal best value compared with the existing algorithms; that is, the proposed algorithm is better than or equal to the existing algorithms.

For the reduction, Fig. 2 shows the relationship between the best value and the initial number of bees. We studied the effect that the initial number of bees has on accuracy under reduction. When the initial number is 20, the proposed algorithm coincides with the conventional algorithm because there are no bees to be deleted. The best value gradually decreases as the initial number of bees increases. It is found that the proposed algorithm is more effective on the complicated functions given in this experiment.
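To make the experimental setting concrete, the sketches given earlier could be combined as follows. This is only an illustration under assumptions: the search domain for Schwefel's function and the reduction interval q are not stated in this excerpt and are chosen here arbitrarily, and the results reported in Table 4 come from the authors' own implementation rather than from this sketch.

```python
# ABCR-style run with the stated settings: 2 dimensions, 1000 iterations,
# bees reduced from an initial 50 down to 20, and T_limit = m * D by default.
bounds = [(-500.0, 500.0), (-500.0, 500.0)]   # assumed domain for Schwefel's function
best_x, best_val = abc_reduction(schwefel, bounds,
                                 m_init=50, m_final=20,
                                 t_max=1000, q=10)
print(best_x, best_val)                        # a good run approaches about -837.97
```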
5. Conclusions

We examined the best value on test functions to show the effectiveness of the proposed algorithm and the influence of the initial number of bees on the best value of our algorithm. As a result, our algorithm showed superiority in comparison with the existing algorithms on the complicated functions given in this paper. As future work, we will study more effective techniques for our algorithm.

References

[1] E. Aiyoshi, K. Yasuda, Metaheuristics and Applications, Ohmsha, Tokyo, 2007.
[2] H. Kitano, Genetic Algorithm, vol. 4, Sangyotosho, Tokyo, 2000.
[3] K. Price, R. Storn, J. Lampinen, Differential Evolution, Springer-Verlag, Berlin, Heidelberg, 2005.
[4] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[5] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[6] R. Reed, Pruning algorithms—a survey, IEEE Trans. Neural Netw. 4 (5) (1993) 740–747.
[7] M. Ishikawa, Structural learning with forgetting, Neural Netw. 9 (3) (1996) 509–521.
[8] M. Maeda, N. Shigei, H. Miyajima, Adaptive vector quantization with creation and reduction grounded in the equinumber principle, J. Adv. Comput. Intell. Intell. Inform. 9 (6) (2005) 599–606.
[9] M. Maeda, N. Shigei, H. Miyajima, K. Suzaki, Reduction models in competitive learning founded on distortion standards, J. Adv. Comput. Intell. Intell. Inform. 12 (3) (2008) 314–323.
[10] M. Maeda, S. Tsuda, Particle swarm optimization with reduction for global optimization problems, in: Proceedings of the World Academy of Science, Engineering and Technology, vol. 59, 2011, pp. 1751–1755.

Michiharu Maeda received the B.S. and M.S. degrees in theoretical physics in 1990 and 1992, respectively, and the Ph.D. degree in information and computer science in 1997, from Kagoshima University, Japan. He is currently a Professor in the Department of Computer Science & Engineering at Fukuoka Institute of Technology. His research interests include computational intelligence, mathematical and physical computation, and signal processing.

Shinya Tsuda received the B.E. and M.E. degrees in computer science in 2010 and 2012, respectively, from Fukuoka Institute of Technology, Japan. His research interests involve computational intelligence.