
Optimisation Techniques for Engineering Design

Prof. Sanjib Kumar Acharyya

Department of Mechanical Engineering

Jadavpur University

Part – VI

Non Traditional Optimisation Algorithms


Non-traditional algorithms :

There are some optimization methods which are conceptually different from the traditional mathematical programming
techniques.
These methods are labeled as modern or non-traditional methods of optimization.
Most of these methods are based on certain characteristics and behavior of biological, molecular, insect-swarm and
neurobiological systems: they are nature-inspired, evolutionary algorithms (EA).

1. Genetic algorithms : theory of natural evolution
2. Simulated annealing : annealing process in the heat treatment of materials
3. Particle swarm optimization : movement of a swarm of fish or the flight of a flock of birds
4. Ant colony optimization : food-searching technique of a colony of ants
5. Fuzzy optimization
6. Neural-network-based methods

Features of non-traditional algorithms :

1. A population of points (trial design vectors) is used instead of a single design point.
2. Since several points are used as candidate solutions, the search is less likely to get trapped at a local optimum and tends to find the global optimum.
3. Only the values of the objective function are used; derivatives are not needed in the search procedure.
4. The design variables are represented as strings of binary variables, which makes the methods applicable to discrete and integer
programming problems. For continuous design variables, the string length can be varied to achieve any desired resolution.
5. The objective function value corresponding to a design vector plays the role of fitness.
Genetic algorithms (Theory of natural Evolution) :
• Genetic algorithms are based on the principles of natural genetics and natural selection.
• The basic elements of natural genetics—reproduction, struggle for existence, Natural selection, crossover, and mutation—
are used in the genetic search procedure.

Theory of natural evolution : the highly developed chromosomes of human beings evolved from the simplest single-celled
organism, the amoeba, through the generation of new genes by crossover and mutation and the preservation of good
genes by natural selection (survival of the fittest).

Genetic operators : crossover , mutation, natural selection ( random/probabilistic)

a) Representation of Design Variables (coding) :


In GAs, the design variables are represented as strings of binary numbers, 0 and 1 (the GA operators require a string
structure).
For example, if a design variable xi is denoted by a string of length four (or a four-bit string) as 0 1 0 1, its integer (decimal
equivalent) value will be (1)×2^0 + (0)×2^1 + (1)×2^2 + (0)×2^3 = 1 + 0 + 4 + 0 = 5.
If each design variable xi, i = 1, 2, . . . , n is coded in a string of length q, a design vector is represented using a string of total
length nq.
For example, if a string of length 5 is used to represent each variable, a total string of length 20 describes a design vector with
n = 4. The following string of 20 binary digits denotes the vector
(x1 = 18, x2 = 3, x3 = 1, x4 = 4):
10010 00011 00001 00100
Decoding : GA operators act on the binary representation of the variables, but the evaluation of the objective function needs the
decimal representation of the variables. Therefore decoding (conversion from binary to decimal) is required.
In general, if a binary number is given by b_q b_{q-1} · · · b_2 b_1 b_0, where b_k = 0 or 1, k = 0, 1, 2, . . . , q, then its equivalent decimal
number y (integer) is given by

y = b_0·2^0 + b_1·2^1 + · · · + b_q·2^q = Σ_{k=0}^{q} b_k·2^k

Precision : If binary representation is used, then a continuous design variable x can be represented by a set of only discrete
values.
If a variable x (whose bounds are given by x^(l) and x^(u)) is represented by a string of q binary numbers with decoded integer
value y, its decimal value can be computed as

x = x^(l) + ((x^(u) − x^(l)) / (2^q − 1)) · y

[Figure: the range from x^(l) to x^(u) divided into equal increments of size Δx]

Precision, Δx = (x^(u) − x^(l)) / (2^q − 1)

Precision can be controlled by selecting an appropriate q (string length) from the relationship between q and Δx.
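A minimal sketch of these decoding and precision relations in Python (the function names and the example bounds are illustrative, not from the slides):

```python
def decode_bits(bits):
    """Convert a binary string (list of 0/1 digits, most significant first) into its integer value y."""
    y = 0
    for b in bits:
        y = 2 * y + b
    return y

def bits_to_real(bits, x_l, x_u):
    """Map a q-bit string onto the continuous range [x_l, x_u]."""
    q = len(bits)
    y = decode_bits(bits)
    dx = (x_u - x_l) / (2**q - 1)   # precision achievable with q bits
    return x_l + y * dx

# Example: the 4-bit string 0 1 0 1 decodes to 5; with bounds 0 and 15 it maps to 5.0
print(decode_bits([0, 1, 0, 1]))          # 5
print(bits_to_real([0, 1, 0, 1], 0, 15))  # 5.0
```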
Design variable vector :
• Design variables in GA are represented by binary coding.
• The GA works with m candidate solutions at a time, termed a population of size m.
• The choice of the size m is a trade-off between computational burden and search accuracy.
• The population is continuously revised by the GA operators (crossover, mutation and selection).
• The best solution in the current population is the current estimate of the optimum solution.
• The population and the optimum solution evolve with each generation (one complete application of the GA operators).

• In an optimization problem, suppose the number of design variables = n, string length = q, population size = m.
• The design variable vector is then a matrix of m design solutions, one in each row.
• Each solution (row) represents n design variables, and each variable is represented by q bits (0 or 1).
• Hence each row contains a total of (n × q) bits (0 or 1).
• Such m rows form the design vector (population), as sketched below.
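A minimal sketch of how such a population matrix might be initialised (NumPy is assumed here for convenience; m, n and q are as defined above):

```python
import numpy as np

n = 4    # number of design variables
q = 5    # string length (bits) per variable
m = 6    # population size

# Each row is one design solution: n variables, q bits each, n*q bits in total
population = np.random.randint(0, 2, size=(m, n * q))
print(population.shape)   # (6, 20)
print(population[0])      # one row = one candidate design vector in binary code
```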
Representation of Objective Function and Constraints
Because in genetic algorithms the selection of fit candidates is based on the survival-of-the-fittest principle of nature, they try to
maximize a function called the fitness function.
Thus GAs are naturally suitable for solving unconstrained maximization problems.
The fitness function, F (X), can be taken to be same as the objective function f (X) of an unconstrained maximization problem
F (X) = f (X).
A minimization problem can be transformed into a maximization problem before applying the GAs. Usually the fitness function is
chosen to be nonnegative. The commonly used transformation to convert an unconstrained minimization problem to a fitness
function is given by

F(X) = 1 / (1 + f(X))

It is already discussed that this does not alter the location of the minimum of f (X) but converts the minimization problem into an
equivalent maximization problem.
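A small sketch of this transformation (assuming f(X) is non-negative, so F(X) stays in (0, 1]):

```python
def fitness(f_value):
    """Convert a non-negative objective value of a minimization problem
    into a non-negative fitness value to be maximized."""
    return 1.0 / (1.0 + f_value)

# Smaller objective values give larger fitness; the location of the
# minimum of f(X) is unchanged.
print(fitness(0.0))   # 1.0  (best)
print(fitness(9.0))   # 0.1
```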

Genetic Operators

• The solution of an optimization problem by GAs starts with a population of random strings denoting several (a population of)
design vectors. The population size in GAs (m) is usually fixed.
• Each string (or design vector) is evaluated to find its fitness value.
• The population (of designs) is operated by three operators—reproduction, crossover, and mutation—to produce a new
population of points (designs).
• The new population is further evaluated to find the fitness values and tested for the convergence of the process.
• One cycle of reproduction, crossover, and mutation and the evaluation of the fitness values is known as a generation in GAs.
• If the convergence criterion is not satisfied, the population is iteratively operated by the three operators and the resulting new
population is evaluated for the fitness values.
• The procedure is continued through several generations until the convergence criterion is satisfied and the process is
terminated.
Reproduction.
• The GA starts with an initial randomly generated population containing n design solutions.
• The fitness of each solution is calculated by evaluating the fitness function for that solution.
• The population for the next generation is evolved from the current generation with the aim that good solutions
(relatively higher fitness) in the current generation are retained (survival of the fittest) and bad solutions (relatively lower
fitness) are rejected (not selected) for the next generation by a probabilistic scheme.
• The reproduction operator is also called the selection operator because it selects good strings of the population.
• The reproduction operator is used to pick above-average strings from the current population and insert multiple copies of them
in the next generation based on a probabilistic procedure.
• In a commonly used reproduction operator, a string is selected for the mating pool with a probability proportional to its
fitness. Thus if Fi denotes the fitness of the ith string in the population of size n, the probability of selecting the ith string for the
mating pool (pi) is given by its relative fitness:

pi = Fi / Σ_{j=1}^{n} Fj

where Fi is the fitness of the ith string (design solution).


Fitness function evaluation for the population (example):

Population in binary code     Population in decimal code     Fi      pi       Cumulative Pi
11000 10101 … 00111           24 21 … 7                      1023    0.259    0.259
11010 01010 … 10000           26 10 … 16                     2018    0.509    0.768
01010 10001 … 10101           10 17 … 21                     920     0.232    1.0
Selection scheme :
Guideline : the probability of selecting the ith string for the next generation (pi) is given by the relative fitness of the ith string.

Roulette wheel scheme :

The selection process (following the above guideline) can be implemented by the rule of a roulette wheel
with its circumference divided into segments, one for each string of the population, with the
segment lengths proportional to the fitness of the strings, as shown in the figure.
By spinning the roulette wheel n times (n being the population size) and, each time, selecting for the next
generation the string chosen by the roulette-wheel pointer, we select n
members for the next generation of size n.
Since the segments of the circumference of the wheel are marked according to the
fitness of the various strings of the original population, the roulette-wheel process is
expected to select the ith string for the next generation with probability pi.
Thus the roulette-wheel selection process can be implemented
by selecting the ith string if a random number in (0, 1) falls within
the cumulative probability range (P_{i−1}, P_i).

Suppose at some generation there are 6 strings in the population.
A random number between 0 and 1 is generated in the programme.
If the random number is between 0 – 0.12, the 1st string is selected;
0.12 – 0.16, the 2nd string is selected;
…….
0.76 – 1.0, the 6th string is selected.
The same process is repeated six times to select six strings for the new generation.
The population in the new generation contains one or multiple copies of some strings and no copies of others.

Elitist method : It is sometimes preferred to preserve the best string in the current generation by directly selecting it for the
next generation, and then to select the remaining n − 1 strings by the selection operator.
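A minimal sketch of roulette-wheel selection based on the cumulative probabilities Pi (the names and the example fitness values, taken from the table above, are illustrative):

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Select len(population) strings for the mating pool, each chosen with
    probability proportional to its fitness (with replacement)."""
    total = sum(fitnesses)
    # cumulative probabilities P_1, P_2, ..., P_n
    cumulative = []
    running = 0.0
    for F in fitnesses:
        running += F / total
        cumulative.append(running)

    mating_pool = []
    for _ in range(len(population)):
        r = random.random()                   # random number in (0, 1)
        for string, P in zip(population, cumulative):
            if r <= P:                        # r falls in (P_{i-1}, P_i]
                mating_pool.append(string)
                break
        else:
            mating_pool.append(population[-1])  # guard against round-off

    return mating_pool

# Example with the three strings of the table above
pool = roulette_wheel_select(["s1", "s2", "s3"], [1023, 2018, 920])
print(pool)   # some strings appear in multiple copies, some not at all
```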
Crossover.
• After reproduction, the crossover operator is implemented.
• The purpose of crossover is to create new strings by exchanging information among strings of the mating pool.
• In most crossover operators, two individual strings (designs) are picked (or selected) at random from the mating pool
generated by the reproduction operator and some portions of the strings are exchanged between the strings.
• In the commonly used process, known as a single-point crossover operator, a crossover site is selected at random along the
string length, and the binary digits (alleles) lying on the right side of the crossover site are swapped (exchanged) between the
two strings.
• The two strings selected for participation in the crossover operators are known as parent strings and the strings generated by
the crossover operator are known as child strings.
[Figure: parent strings before crossover and child strings after crossover]

• The child strings generated using a random crossover site may or may not be as good as, or better than, their parent strings in
terms of their fitness values.
• If they are as good as or better than their parents, they will contribute to a faster improvement of the average fitness value of the
new population.
• On the other hand, if the child strings created are worse than their parent strings, they will not survive very long, as they are less
likely to be selected in the next reproduction stage (because of the survival-of-the-fittest strategy used).
Crossover.

• As indicated above, the effect of crossover may be useful or detrimental.
• Hence it is desirable not to use all the strings of the mating pool in crossover but to preserve some of the good strings of the
mating pool as part of the population in the next generation.
• In practice, a crossover probability, pc, is used in selecting the parents for crossover.
• Thus only 100 × pc percent of the strings in the mating pool are used in the crossover operator, while 100 × (1 − pc) percent of
the strings are retained as they are in the new generation (of population).
• In the elitist method, to preserve the best string, it is not subjected to crossover or mutation (see the sketch below).
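A minimal sketch of the single-point crossover operator with a crossover probability pc (the default value pc = 0.8 below is illustrative, not from the slides):

```python
import random

def single_point_crossover(parent1, parent2, pc=0.8):
    """With probability pc, swap the bits to the right of a random crossover
    site; otherwise return the parents unchanged."""
    if random.random() >= pc:
        return parent1[:], parent2[:]
    site = random.randint(1, len(parent1) - 1)     # crossover site along the string
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2

p1 = [1, 1, 0, 0, 0]
p2 = [0, 1, 0, 1, 0]
c1, c2 = single_point_crossover(p1, p2)
print(c1, c2)   # child strings exchange the tails of the two parents
```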
Mutation.
• Crossover is the main operator by which new strings with better fitness values are created for the new generations.
• The mutation operator is applied to the new strings with a specific small mutation probability, pm.
• The mutation operator changes a binary digit (allele value) from 1 to 0 and vice versa.
• In single-point mutation, a mutation site is selected at random along the string length and the binary digit at that site is then
changed from 1 to 0 or 0 to 1 with a probability of pm.
• In bit-wise mutation, each bit (binary digit) in the string is considered one at a time in sequence, and the digit is changed
from 1 to 0 or 0 to 1 with a probability pm.

• Numerically, the process can be implemented as follows.
• A random number between 0 and 1 is generated. If the random number is smaller than pm, then the binary digit is
changed. Otherwise, the binary digit is not changed.
The purpose of mutation is
(1) to generate a string (design point) in the neighborhood of the current string, thereby accomplishing a local search around the
current solution,
(2) to safeguard against a premature loss of important genetic material at a particular position, and
(3) to maintain diversity in the population.
As an example, consider a population of size n = 5 with a string length of 10 in which all five strings have a 1 in
the position of the first bit. If the true optimum solution of the problem requires a 0 as the first bit, the required 0 cannot be
created by either the reproduction or the crossover operators. However, when the mutation operator is used, the binary number
will be changed from 1 to 0 in the location of the first bit with a probability of n·pm.
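A minimal sketch of bit-wise mutation (the default value pm = 0.01 below is illustrative, not from the slides):

```python
import random

def bitwise_mutation(string, pm=0.01):
    """Visit each bit in sequence and flip it (1 -> 0 or 0 -> 1) with probability pm."""
    mutated = []
    for bit in string:
        if random.random() < pm:       # random number smaller than pm: flip the bit
            mutated.append(1 - bit)
        else:
            mutated.append(bit)
    return mutated

print(bitwise_mutation([1, 0, 0, 1, 0], pm=0.1))
```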
• Note that the three operators—reproduction, crossover, and mutation—are simple to implement.
• The reproduction operator selects good strings for the mating pool,
• the crossover operator recombines the substrings of good strings of the mating pool to create new strings (the next generation of
the population), and
• the mutation operator alters the strings locally.
• The successive use of these three operators yields new generations with improved values of the average fitness of the
population. Although the improvement of the fitness of the strings in successive generations is probabilistic, the expected
improvement is achieved after a sufficiently large number of generations.
• The process has been found to converge to the optimum fitness value of the objective function.
• Note that if any bad strings are created at any stage of the process, they are expected to be eliminated by the reproduction
operator in the next generation.
• GAs have been successfully used to solve a variety of real-life optimization problems.
• A convergence criterion is required.
• Convergence criteria :
i) a maximum number of generations M; ii) the ratio of the average fitness to the best fitness of the population
• Theoretically this ratio approaches 1, but for convergence a ratio near 1 (0.9 or 0.8) is used.
[Figure: flowchart of the genetic algorithm]
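Putting the operators together, the flowchart can be sketched in code roughly as follows. This is a minimal illustrative implementation, not the slides' own program: the fitness function (the decoded integer value of the string, to be maximized), the parameter values pc = 0.8, pm = 0.01, m = 20, q = 10, and the average-to-best convergence ratio of 0.9 are all assumptions.

```python
import random

def decode(bits):
    """Decode a binary string (list of 0/1 digits, most significant first) into its integer value."""
    y = 0
    for b in bits:
        y = 2 * y + b
    return y

def run_ga(q=10, m=20, pc=0.8, pm=0.01, max_gen=100, ratio_target=0.9):
    """Maximize an illustrative fitness (the decoded integer value of the string).
    Stops at max_gen generations or when average/best fitness reaches ratio_target."""
    fitness = decode                                   # illustrative fitness function
    pop = [[random.randint(0, 1) for _ in range(q)] for _ in range(m)]

    for generation in range(max_gen):
        F = [fitness(s) for s in pop]

        # Convergence criteria: generation limit (the loop bound) and the
        # ratio of average fitness to best fitness approaching 1.
        if max(F) > 0 and (sum(F) / m) / max(F) >= ratio_target:
            break

        # Reproduction: roulette-wheel selection of the mating pool.
        total = sum(F)
        probs = [Fi / total for Fi in F] if total > 0 else [1.0 / m] * m

        def select():
            r, running = random.random(), 0.0
            for s, p in zip(pop, probs):
                running += p
                if r <= running:
                    return s
            return pop[-1]

        pool = [select() for _ in range(m)]

        # Single-point crossover with probability pc, then bit-wise mutation with pm.
        children = []
        for i in range(0, m, 2):
            a, b = pool[i][:], pool[(i + 1) % m][:]
            if random.random() < pc:
                site = random.randint(1, q - 1)
                a, b = a[:site] + b[site:], b[:site] + a[site:]
            children += [a, b]
        pop = [[1 - bit if random.random() < pm else bit for bit in s]
               for s in children[:m]]

    best = max(pop, key=fitness)
    return best, fitness(best)

best_string, best_fitness = run_ga()
print(best_string, best_fitness)    # tends toward the all-ones string (value 1023)
```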
