Nature is replete with social groups that carry out various tasks. Even if the ultimate goal of all individual and collective behaviour is survival, animals work and interact in groups, herds, schools, colonies and flocks for various reasons: hunting, protection, navigation and foraging. It is very interesting that creatures find near-optimal solutions and perform tasks efficiently in groups, and such optimal, efficient behaviour has clearly evolved over millennia. It is therefore quite logical to take inspiration from them to solve our own problems. This is the main aim of the field of study called swarm intelligence (SI). Many algorithms have been proposed in this field, e.g. Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO).
In this article, we discuss Cat Swarm Optimization (CSO), a swarm-based optimization technique.
Inspiration:
Cat Swarm Optimization is inspired by the behaviour of cats in the real world. According to biology, there are about 32 different cat species, from lions to cheetahs and from tigers to domestic cats. Even though they live in quite different environments, many of their behavioural traits are similar. Despite being inactive most of the time, cats are strongly curious.
Before defining the mathematical model of the algorithm, one must know that every cat is in one of two modes at any time.
- Seeking mode: the cat is inactive, i.e. resting, looking around or deciding whether to move to another location.
- Tracing mode: the cat is active, i.e. it chases a target and changes its current position.
Mathematical Model:
Let's define the model of cat swarm optimization. Every cat k in the N-dimensional solution space has its own position X_{k,d} in each dimension d, a velocity v_{k,d} for each dimension, a flag indicating which of the two modes (seeking or tracing) it is in, and finally a fitness value that represents how well the cat's position fits the fitness function.
We desire to find the optimal position for a cat.
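The per-cat state described above can be captured in a small data structure. The sketch below is a minimal Python illustration; the `Cat` class and the `sphere` fitness function are our own illustrative choices, not part of the algorithm's specification:

```python
from dataclasses import dataclass


def sphere(x):
    """Toy fitness function (minimisation): sum of squares."""
    return sum(v * v for v in x)


@dataclass
class Cat:
    position: list            # X_{k,d}: one coordinate per dimension d
    velocity: list            # v_{k,d}: one velocity component per dimension
    tracing: bool = False     # mode flag: True = tracing, False = seeking
    fitness: float = float("inf")


# a cat at (1, 2, 3) that is currently at rest in seeking mode
cat = Cat(position=[1.0, 2.0, 3.0], velocity=[0.0, 0.0, 0.0])
cat.fitness = sphere(cat.position)   # 1 + 4 + 9 = 14
```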
Seeking mode: As already mentioned, cats in this mode are inactive. The mode has four essential parameters: seeking memory pool (SMP), seeking range of the selected dimension (SRD), counts of dimension to change (CDC), and self-position considering (SPC).
- SMP: defines the size of the seeking memory of each cat, i.e. how many candidate points the cat will consider.
- SRD: declares the mutative ratio for the selected dimensions, i.e. by what fraction a selected dimension may change.
- CDC: indicates how many dimensions will be varied.
- SPC: a Boolean value (true or false) indicating whether the cat's current position will itself be one of the candidate points to move to.
NOTE: Regardless of the value of SPC, the total number of candidate points stays equal to SMP.
1. If SPC is true, then j = SMP - 1 (the current position is retained as one of the candidate points); otherwise j = SMP.
2. The present position of cat C_k is copied j times.
3. For each copy, randomly increase or decrease the value of CDC randomly chosen dimensions by SRD percent, replacing the old values.
4. Calculate the Fitness value (FV) of all candidate points.
5. If all the FVs are not equal, convert them to selection probabilities:

P_i = \frac{|FV_i - FV_b|}{FV_{max} - FV_{min}}, \quad 0 < i \le j

where FV_b is set to FV_{max} if the goal of the fitness function is to minimize, or to FV_{min} if the goal is to maximize. (If all the FVs are equal, every candidate point gets the same selection probability.)

6. Randomly select one of the candidate points according to these probabilities and replace the current position of cat C_k with it.
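The seeking-mode steps above can be sketched in Python as follows. This is a minimal illustration assuming minimization; the parameter defaults (SMP, SRD, CDC, SPC), the convention that SPC true means the current position is retained and only SMP - 1 copies are mutated, and the `fitness` callback are illustrative assumptions:

```python
import random


def seeking_mode(cat, fitness, SMP=5, SRD=0.2, CDC=2, SPC=True):
    """One seeking-mode step for a cat (a list of floats), minimisation assumed."""
    n = len(cat)
    # Step 1: number of copies to mutate.
    j = SMP - 1 if SPC else SMP
    # Step 2: copy the present position j times.
    candidates = [cat[:] for _ in range(j)]
    # Step 3: mutate CDC randomly chosen dimensions of each copy by +/- SRD percent.
    for cand in candidates:
        for d in random.sample(range(n), min(CDC, n)):
            cand[d] += random.choice((-1.0, 1.0)) * SRD * cand[d]
    if SPC:
        candidates.append(cat[:])  # the current position is itself a candidate
    # Step 4: evaluate all candidate points.
    fvs = [fitness(c) for c in candidates]
    fv_max, fv_min = max(fvs), min(fvs)
    # Step 5: convert fitness values to selection probabilities.
    if fv_max == fv_min:
        probs = [1.0] * len(candidates)          # all equal: uniform selection
    else:
        probs = [(fv_max - fv) / (fv_max - fv_min) for fv in fvs]  # FV_b = FV_max
    # Step 6: roulette-wheel selection of the next position.
    return random.choices(candidates, weights=probs, k=1)[0]
```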
Tracing mode: Once a cat is in tracing mode, it moves according to its velocities v_{k,d}.
1. Update the velocity of cat C_k in every dimension d \in [1, N]:

v_{k,d} = v_{k,d} + r_1 c_1 (X_{best,d} - X_{k,d})

where c_1 is a constant, r_1 is a random value in [0, 1], and X_{k,d} and X_{best,d} are the positions in dimension d of cat C_k and of the cat with the best FV, respectively.

2. If v_{k,d} > v_{max}, then set v_{k,d} = v_{max}; otherwise leave v_{k,d} unchanged.

3. Update the position of cat C_k: X_{k,d} = X_{k,d} + v_{k,d}.
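A minimal Python sketch of one tracing-mode step. The values of c_1 and v_max are illustrative, and we clamp the velocity magnitude symmetrically to [-v_max, v_max], a common variant of the upper-bound clamp described above:

```python
import random


def tracing_mode(cat, velocity, x_best, c1=2.0, v_max=1.0):
    """One tracing-mode step for a cat; lists are updated in place and returned."""
    for d in range(len(cat)):
        r1 = random.random()
        # Step 1: pull the velocity toward the best cat's position.
        velocity[d] += r1 * c1 * (x_best[d] - cat[d])
        # Step 2: clamp the velocity magnitude to v_max (symmetric variant).
        velocity[d] = max(-v_max, min(v_max, velocity[d]))
        # Step 3: move the cat.
        cat[d] += velocity[d]
    return cat, velocity
```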
CSO algorithm
In order to combine the two modes, keep in mind that cats spend most of their time in seeking mode. We therefore define a mixture ratio (MR) that specifies the fraction of cats placed in tracing mode while performing CSO. Since cats spend most of their time in seeking mode, the value of MR should be kept quite small.
1. Create M cats.
2. Randomly scatter the M cats in the N-dimensional solution space and assign each cat random velocities within the limit v_{max}.
3. Distribute the cats into tracing mode according to MR; the rest go to seeking mode.
4. Apply the position X_{k,d} of each cat to the fitness function and calculate its fitness value FV.
5. Move each cat according to its flag value: if C_k is in tracing mode, apply the tracing-mode process to it; otherwise apply the seeking-mode process.
6. Redistribute the cats between the two modes according to MR.
7. Repeat steps 4 to 6 until the termination condition is met.
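Putting the pieces together, a compact CSO loop might look like the sketch below, assuming minimization. All parameter defaults and the search bounds are illustrative, and for brevity the seeking-mode selection greedily keeps the best candidate instead of performing roulette-wheel selection:

```python
import random


def cso(fitness, n_dims, M=10, MR=0.2, iters=50, SMP=5, SRD=0.2, CDC=1,
        SPC=True, c1=2.0, v_max=1.0, bounds=(-5.0, 5.0)):
    """Minimal Cat Swarm Optimization loop for minimisation (illustrative)."""
    lo, hi = bounds
    # Steps 1-2: create M cats with random positions and velocities.
    cats = [[random.uniform(lo, hi) for _ in range(n_dims)] for _ in range(M)]
    vels = [[random.uniform(-v_max, v_max) for _ in range(n_dims)] for _ in range(M)]
    best = min(cats, key=fitness)[:]
    for _ in range(iters):
        # Steps 3/6: redistribute flags, an MR fraction goes to tracing mode.
        flags = [random.random() < MR for _ in range(M)]
        for k in range(M):
            if flags[k]:                       # tracing mode
                for d in range(n_dims):
                    vels[k][d] += random.random() * c1 * (best[d] - cats[k][d])
                    vels[k][d] = max(-v_max, min(v_max, vels[k][d]))
                    cats[k][d] += vels[k][d]
            else:                              # seeking mode
                j = SMP - 1 if SPC else SMP
                cands = [cats[k][:] for _ in range(j)]
                for c in cands:
                    for d in random.sample(range(n_dims), min(CDC, n_dims)):
                        c[d] += random.choice((-1.0, 1.0)) * SRD * c[d]
                if SPC:
                    cands.append(cats[k][:])
                # simplified selection: greedily keep the best candidate
                cats[k] = min(cands, key=fitness)
            if fitness(cats[k]) < fitness(best):
                best = cats[k][:]
    return best
```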
By applying CSO we obtain X_{best,d}, the position of the cat with the best FV.
This is how the CSO algorithm works.
Reference:
https://link.springer.com/chapter/10.1007/978-3-540-36668-3_94