
Introduction to

Optimization
Dr. Manjubala Bisi
Optimization
• The action of making the best or most effective use of a situation or
resource
• Finding the alternative with the most cost-effective or highest
achievable performance under the given constraints, by maximizing
desired factors and minimizing undesired ones
• Used for decision making
• Decision making always involves making a choice between various
possible alternatives
Category in decision making
problems
• Category 1 :
The set of possible alternatives for the decision is a finite discrete set
typically consisting of a small number of elements. (scoring/ranking
method)
Category in decision making
problems
• Category 2 :
The number of possible alternatives is either infinite, or finite but
very large, and the decision may be required to satisfy some
restrictions and constraints
(Unconstrained and constrained optimization methods)
Solution for Category 2 :
Step 1:
Get a precise definition of the problem, all relevant
data and information on it (Variables)
Step 2:
Construct a mathematical (optimization) model of the
problem (Build objective functions and constraints)
Step 3:
Solve the model (Apply the most appropriate algorithm)
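The three steps above can be sketched on a hypothetical toy problem (the objective, constraints, and coarse grid search below are illustrative; a real solver would replace the search in Step 3):

```python
# Hypothetical problem: maximize f(x, y) = 3x + 2y
# subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.

# Steps 1-2: identify variables, build the objective and constraints.
def objective(x, y):
    return 3 * x + 2 * y

def feasible(x, y):
    return x + y <= 4 and x + 3 * y <= 6 and x >= 0 and y >= 0

# Step 3: solve the model; a coarse grid search stands in for a
# proper algorithm here.
best = max(
    (objective(x / 10, y / 10), x / 10, y / 10)
    for x in range(41) for y in range(41)
    if feasible(x / 10, y / 10)
)
print(best)  # best objective value and the (x, y) achieving it
```

The optimum sits at the vertex (4, 0) with value 12, which the grid search recovers because the grid happens to contain that vertex.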
Optimization and Its Components
• Selection of the best choice (based on some criteria) from a set of
alternatives
• Decision variable
• Objective function (Relation of decision variables)
• Constraints (Restrictions on the decision variables)
• These components help in classifying optimization problems
Decision variables
• Formulation of optimization problem begins with identifying the
decision variables
• Relates objective function and constraints
• Can be continuous, semi-continuous, discrete or set
• Can be bounded or unbounded
Objective function
• Criteria with respect to which the decision variables are to be
optimized
• Every solution in the variable space is mapped to an objective value
• Can be continuous or semi-continuous
• A maximization problem can be converted to a minimization problem
(e.g., by negating the objective function)
• Can be bounded or unbounded
• Absence of objective function (In presence of constraints) leads to a
feasibility problem (Map coloring problem)
Constraints
• Inequality (usually resource constraints) (convert one form to
another)
• Equality constraints
• Feasible solution (Satisfy all constraints)
• Infeasible solution (Not satisfy at least one constraint)
• Hard constraints (Must be satisfied in order to accept a solution)
• Soft constraints (Allowed to relax to some extent to accept a solution)
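A minimal sketch of feasibility checking, with an illustrative hard constraint and a soft constraint relaxed by a tolerance (the constraints and the tolerance value are made up for illustration):

```python
def is_feasible(x, y, soft_tol=0.5):
    # Hard constraints: must hold exactly for a solution to be accepted.
    hard_ok = (x + y <= 10) and (x >= 0) and (y >= 0)
    # Soft constraint: allowed to be violated by up to soft_tol.
    soft_ok = (2 * x + y) <= 8 + soft_tol
    return hard_ok and soft_ok

print(is_feasible(3, 2))    # satisfies every constraint
print(is_feasible(4, 0.3))  # accepted only because the soft constraint is relaxed
print(is_feasible(6, 5))    # infeasible: violates the hard constraint x + y <= 10
```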
Bounded and Unbounded problem
• Non-redundant constraints reduce the feasible region
• Redundant constraints do not reduce the feasible region; they can be
removed without changing it
Feasible problem and Infeasible
problem
Contour Plot
• It shows how the objective function behaves over the search space
• Contour lines connect points having identical objective function values
Realization
• Two or more solutions with the same objective function value
Monotonic and Convex Functions
• Monotonic function: a function that is entirely non-decreasing or
entirely non-increasing
• Convex function: the line segment between any two points on the
graph of the function lies on or above the graph
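The convexity condition can be probed numerically with a midpoint check (a heuristic sketch, not a proof: for convex f, f((a+b)/2) <= (f(a)+f(b))/2, and passing the check only suggests convexity on the sampled points):

```python
def midpoint_convex(f, points):
    # Check the midpoint form of the convexity inequality over all pairs;
    # the small tolerance absorbs floating-point rounding.
    return all(
        f((a + b) / 2) <= (f(a) + f(b)) / 2 + 1e-12
        for a in points for b in points
    )

points = [x / 10 for x in range(-30, 31)]
print(midpoint_convex(lambda x: x * x, points))    # x^2 is convex: True
print(midpoint_convex(lambda x: -x * x, points))   # -x^2 is concave: False
```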
Unimodal and Multimodal function
• Unimodal function: for some value m, the function is monotonically
increasing for x <= m and monotonically decreasing for x >= m
• Maximum value of f(x) is f(m)
• No other local maxima
• Multimodal function: Function has multiple global and local optima
• Most real-life optimization problems are multimodal
Optimality

• Local optima (Minimization)


• Smallest function value in its neighborhood
• There can be multiple local optima solutions
• Global Optima (Minimization)
• Smallest function value in the feasible region
• If the function is convex, any local optimum is also the global optimum
• For multimodal functions, most algorithms fail to determine the global
optimum (non-linear programming)
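One way to see why local methods struggle on multimodal functions: simple gradient descent, started from different points on an illustrative two-minima function, lands in different local minima; only a multi-start (or global) strategy recovers the best one.

```python
def f(x):
    return (x * x - 1) ** 2 + 0.3 * x   # two minima; the left one is lower

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent with a central-difference numerical derivative.
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= lr * grad
    return x

left = descend(-2.0)    # converges near x = -1 (the global minimum)
right = descend(2.0)    # converges near x = +1 (only a local minimum)
print(left, right, f(left) < f(right))
```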
Classification of Optimization
Problem
• Linear programming (LP)
Variables (Continuous), Objective function (Linear), Constraints (Linear)
• Integer linear programming (ILP)
Variables (Discrete), Objective function (Linear), Constraints (Linear)
• Non-linear programming (NLP)
Variables (Continuous), Objective function (Linear or Nonlinear),
Constraints (Linear or Nonlinear)
• Mixed ILP (MILP)
Variables (Continuous or Discrete), Objective function (Linear),
Constraints (Linear)
• Mixed Integer Non-linear programming (MINLP)
Classification of Optimization
Problem
• Mixed Integer Non linear programming
Variable (Continuous or Discrete)
Objective function (Linear or Non linear)
Constraints (Linear or Non linear)
Classification of Optimization
problems
• Mathematical Programming Techniques
• Meta-Heuristic Techniques
• Particle swarm optimization
• Genetic Algorithm
• Teaching Learning based Optimization
• Jaya algorithm
• Rao algorithms
• Ant Colony optimization
• Simulated annealing
• Differential Evolution
• Artificial Bee colony optimization
Multi-objective Optimization
• Two or more conflicting objectives, each of which can be either
minimized or maximized
• Each objective function corresponds to a different optimal solution;
there is no single optimum
• The goal is to obtain a set of optimal solutions where a gain in one
objective deteriorates another objective
Pareto solution
• A solution s1 is said to dominate a solution s2 if both the following
conditions are true:
1. s1 is no worse than s2 in all the objectives
2. s1 is strictly better than s2 in at least one objective
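The two dominance conditions translate directly into code (minimization of every objective is assumed here):

```python
def dominates(s1, s2):
    # Condition 1: s1 is no worse than s2 in every objective.
    no_worse = all(a <= b for a, b in zip(s1, s2))
    # Condition 2: s1 is strictly better in at least one objective.
    strictly_better = any(a < b for a, b in zip(s1, s2))
    return no_worse and strictly_better

print(dominates((1, 2), (2, 3)))   # True: better in both objectives
print(dominates((1, 3), (2, 2)))   # False: a trade-off, neither dominates
print(dominates((1, 2), (1, 2)))   # False: equal is not strictly better
```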
Meta-Heuristic techniques and
Optimization problems
• Generate randomly a single solution or a set of solutions
• Based on the fitness of the current solution or set of solutions,
suggest new solutions with the help of “intelligent” operations
Generalized scheme for Meta-
Heuristic techniques
• Problem : fitness function , Bound of decision variable
• Technique: population size, max iteration (T)
Generalized scheme for Meta-
Heuristic techniques
• Define parameters
• Generate a random population (p)
• Determine fitness values to evaluate the population
• Set iteration t = 1
• New population p’ = selection(p)
• Population p’’ = survivor(p’)
• Set t = t + 1; if t <= T, repeat from the selection step; otherwise stop
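The scheme above can be sketched as a minimal loop (the fitness function, the selection and survivor operators, and all parameter values below are illustrative placeholders, not a specific published algorithm):

```python
import random

def fitness(x):
    return -(x - 3) ** 2           # toy fitness to maximize; peak at x = 3

def run(pop_size=20, T=100, lo=-10.0, hi=10.0):
    # Generate a random population p.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for t in range(T):             # iterations t = 1..T
        pop.sort(key=fitness, reverse=True)
        # selection(p): keep the fitter half as parents.
        parents = pop[: pop_size // 2]
        # "intelligent" operation: mutate copies of the parents.
        children = [min(hi, max(lo, p + random.gauss(0, 0.5)))
                    for p in parents]
        # survivor(p'): parents plus children form the next population.
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
best = run()
print(best)   # close to the optimum at x = 3
```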


Particle Swarm
Optimization Algorithm
PSO
• Inspired by the social behavior of bird flocking and fish schooling
• Suppose a group of birds is searching food in an area
• Only one piece of food is available
• Birds do not have any knowledge about location of food
• But they know how far the food is from their present location
• What is the best strategy to locate the food?
• The best strategy is to follow the bird nearest to the food
PSO
• A flying bird has a position and velocity at any time t
• In search of food, the bird changes its position by adjusting its velocity
• The velocity changes based on its past experience and also on the
feedback received from its neighbors
• This searching process can be artificially simulated for solving
non-linear optimization problems
• So, this is a population based stochastic optimization technique
inspired by social behavior of bird flocking or fish schooling
PSO
• Each solution is considered as a bird called particle
• All the particles have a fitness value, calculated using the objective
function
• All the particles preserve their individual best performance
• They also know the best performance of their group
• They adjust their velocity considering their best performance and also
considering the best performance of the particle in their group
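Putting the pieces together, a compact PSO sketch: each particle tracks its personal best, the swarm tracks a global best, and each velocity update blends inertia, cognitive, and social terms. The parameter values (w = 0.7, c1 = c2 = 1.5) are common textbook choices, not prescribed by these slides.

```python
import random

def sphere(x):                      # toy objective: minimize the sum of squares
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=f)[:]                # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                          # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):         # update personal best
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):      # update global best
                    gbest = pbest[i][:]
    return gbest

random.seed(1)
best = pso(sphere)
print(best, sphere(best))   # near the minimum at (0, 0)
```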
Flow chart of PSO
(The flow chart, a worked example, and its solution appear as figures
in the original slides)
PSO Algorithm Parameters
• Swarm size
• Iteration numbers
• Velocity components
• Acceleration coefficients
Swarm Size
• Population size
• It is the number of particles in the swarm
• A larger swarm allows a larger part of the search space to be covered
per iteration
• A large number of particles may reduce the number of iterations needed
to obtain a good optimization result
• In contrast, a very large number of particles increases the
computational complexity per iteration and makes the search more
time-consuming
Iteration Numbers
• It is problem – dependent
• A small number of iterations may stop the search process prematurely
• Too large a number of iterations adds unnecessary computational
complexity and time
Velocity Components
• Important for updating particle’s velocity
• There are three components for updating velocity
1. Initial velocity and inertia constant
2. Cognitive component
3. Social component
Velocity Components
• The inertia component provides a memory of the previous flight
direction, that is, the movement in the immediate past
• The cognitive component measures the performance of particle i
relative to its own past performance
• The social component measures the performance of particle i relative
to a group of particles or neighbors. Its effect is that each particle
flies towards the best position found by the particle's neighborhood
Acceleration Coefficients
• c1 and c2, together with random values r1 and r2, maintain the
stochastic influence of the cognitive and social components of the
particle's velocity, respectively
• c1 expresses how much confidence a particle has in itself
• c2 expresses how much confidence a particle has in its neighbors
