CONCEPT OF OPTIMIZATION: CLASSICAL OPTIMIZATION TECHNIQUES
Chapter 1
Dr. Dereje Shiferaw
What is Optimization?
Optimization is the mathematical discipline which is
concerned with finding the maxima and minima of
functions, possibly subject to constraints.
To optimize means to make something as perfect, effective, or functional as possible
Optimization is used to determine the best solution without actually testing all possible cases
Need for optimization in power system
Power system operation is required to be
Secure
Economical
Reliable
All of the following operations have to be carried out at their optimum points:
Power flow analysis
Economic Dispatch
Reactive power
Load shedding
Configuration of electrical distribution networks, etc.
General Statement of a Mathematical Programming Problem
Find x which minimizes f(x)
Subject to:
gi(x) ≤ 0, i = 1, 2, ..., h
li(x) = 0, i = h+1, h+2, ..., m
f(x), gi(x) and li(x) are twice continuously differentiable real-valued functions.
gi(x) is known as an inequality constraint and li(x) as an equality constraint
x can be a vector with several variables
Minimization of f(x) is the same as maximization of -f(x)
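Illustration (not from the slides): a minimal MATLAB sketch of how a problem in this standard form can be passed to fmincon (Optimization Toolbox assumed); the objective and constraints used here are hypothetical placeholders.

% Minimal sketch: minimize f(x) subject to g(x) <= 0 and l(x) = 0 with fmincon.
% The functions below are hypothetical examples, not taken from the slides.
f  = @(x) (x(1)-1)^2 + (x(2)-2)^2;        % objective f(x)
g  = @(x) x(1)^2 + x(2)^2 - 4;            % inequality constraint g(x) <= 0
l  = @(x) x(1) + x(2) - 1;                % equality constraint l(x) = 0
nonlcon = @(x) deal(g(x), l(x));          % fmincon expects [c, ceq]
x0 = [0; 0];                              % starting point
x  = fmincon(f, x0, [], [], [], [], [], [], nonlcon);   % expected solution near (0, 1)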
Some terminologies
Design vector
Design constraints
Constraint surface
Objective function
Mathematical programming
Design vector
Two types of variables exist:
- Pre-assigned variables: variables whose value is known beforehand
- Design vector x = [x1, x2, ..., xn]ᵀ: the vector of decision variables, whose values should be calculated using optimization techniques
Design space: the space spanned by the design vector, e.g. the (x1, x2) plane for f(x) = 9.82 x1 x2 + 2 x1
[Figure: a design vector plotted as a point in the (x1, x2) design space]
Design constraints
Restrictions on the design variables
- Behavioral constraints: e.g. power cannot be negative
- Geometric constraints: constraints due to the geometry
Find x which minimizes f(x)
Subject to:
gi(x) ≤ 0 for i = 1, 2, ..., h (inequality constraints)
li(x) = 0 for i = h+1, ..., m (equality constraints)
Constraint surface
The set of points which satisfy a constraint as an equality, i.e. the plot of gi(x) = 0
Four possible types of design points:
Free and acceptable
Free and unacceptable
Bound and acceptable
Bound and unacceptable
Example
Draw the constraint surface for the minimization problem
f(x) = 9.82 x1 x2 + 2 x1
Subject to
2 ≤ x1 ≤ 14
0.2 ≤ x2 ≤ 0.8
2500/(π x1 x2) - 500 ≤ 0
2500/(π x1 x2) - π²(8.5x10⁵)(x1² + x2²)/(8(250)²) ≤ 0
[Figure: constraint boundaries plotted in the (x1, x2) design space, x1 from 2 to 14, x2 from 0.1 to 1]
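Illustration (not from the slides): a minimal MATLAB sketch for visualizing these constraint boundaries over the design space; the grid resolution and plotting commands are arbitrary choices.

% Sketch: plot the constraint boundaries of the column example in the (x1, x2) plane.
[x1, x2] = meshgrid(linspace(2, 14, 200), linspace(0.1, 1, 200));
g1 = 2500 ./ (pi * x1 .* x2) - 500;                                           % stress constraint
g2 = 2500 ./ (pi * x1 .* x2) - pi^2 * 8.5e5 * (x1.^2 + x2.^2) / (8 * 250^2);  % buckling constraint
contour(x1, x2, g1, [0 0], 'r'); hold on       % boundary g1(x) = 0
contour(x1, x2, g2, [0 0], 'b')                % boundary g2(x) = 0
xline(2); xline(14); yline(0.2); yline(0.8)    % side constraints on x1 and x2
xlabel('x_1'); ylabel('x_2'); hold off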
Objective function
The function which gives the relation between the objective we want to achieve and the variables involved
Can be single or multiple
Example – economic dispatch problem
Objective: minimize operating cost
Variables: power output of each generator
Constraints: system load demand, generating capacity of the generators
Objective function
Example: A power company operates two thermal power plants A and B using three different grades of coal C1, C2 and C3. The minimum power to be generated at plants A and B is 30 MWh and 80 MWh, respectively.
[Table: amount of coal of each grade required per unit of power at each plant]
Write the objective function to minimize the cost.
Objective function surface
Locus of all points satisfying f(x) = C for some constant C
[Figure: objective function surfaces (red lines) for C = 50 and C = 30 drawn in the (x1, x2) design space]
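Illustration (not from the slides): a short MATLAB sketch that draws the C = 30 and C = 50 objective function surfaces of the column example as labeled contour lines.

% Sketch: objective function surfaces f(x) = C for the column example.
[x1, x2] = meshgrid(linspace(2, 14, 200), linspace(0.1, 1, 200));
f = 9.82 * x1 .* x2 + 2 * x1;                 % objective function of the example
[C, h] = contour(x1, x2, f, [30 50], 'r');    % surfaces for C = 30 and C = 50
clabel(C, h)
xlabel('x_1'); ylabel('x_2')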
Classification of optimization problems
1) Based on the existence of constraints
Constrained optimization
Formulation: Find X which minimizes F(X) subject to gj(X) ≤ 0
Unconstrained optimization
Formulation: Min F(X)
Classification cont…
2) Based on the nature of the design variables
Static: design variables are simple variables
Dynamic: design variables are functions of other variables
Classification cont…
3) Based on the nature of the expressions for the objective function and constraints
Linear programming
Objective function and constraints are linear
Used as an approximation in optimal power flow, steady-state security analysis, etc.
Nonlinear programming
If any one of the objective function or constraints is nonlinear
Power system problems are nonlinear
Classification cont…
4) Based on the expression of the objective function or constraints
Geometric programming: objective function and/or constraints are expressed as power terms (posynomials)
Quadratic programming
Special case of NLP
Objective function is a quadratic form
Classical Optimization techniques
Used for continuous and differentiable
functions
Make use of differential calculus
Disadvantage:
Many practical problems have non-differentiable objective functions
Single variable optimization
Local minimum: x* is a local minimum of f if f(x*) ≤ f(x* + h) for small positive and negative h
Local maximum: x* is a local maximum of f if f(x*) ≥ f(x* + h) for small positive and negative h
[Figure: a single-variable function with its local maxima and local minima marked]
Single variable cont…
Theorem 1 (necessary condition): if f(x) has a local minimum or maximum at x = x* and f'(x*) exists, then f'(x*) = 0
Theorem 2 (sufficient condition): if f'(x*) = f''(x*) = ... = f^(n-1)(x*) = 0 and f^(n)(x*) ≠ 0, then x* is a local minimum when f^(n)(x*) > 0 and n is even, a local maximum when f^(n)(x*) < 0 and n is even, and an inflection point when n is odd
Example
f(x) = 12x⁵ - 45x⁴ + 40x³ + 5
Soln.
Find f'(x) and equate it to zero: f'(x) = 60x⁴ - 180x³ + 120x² = 60x²(x - 1)(x - 2) = 0
The extreme points are x = 0, 1 and 2
x = 0 is an inflection point, x = 2 is a local minimum and x = 1 is a local maximum
[Figure: plot of f(x) for -1 ≤ x ≤ 3]
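Illustration (not from the slides): a quick MATLAB check of this example using polynomial derivatives; polyder, roots and polyval are standard MATLAB functions.

% Sketch: locate and classify the stationary points of f(x) = 12x^5 - 45x^4 + 40x^3 + 5.
p   = [12 -45 40 0 0 5];       % coefficients of f, highest power first
dp  = polyder(p);              % f'(x)
d2p = polyder(dp);             % f''(x)
xs  = unique(round(real(roots(dp)), 6));   % stationary points: 0, 1, 2 (x = 0 is a double root)
curv = polyval(d2p, xs);       % sign of f'' classifies each point
disp([xs curv])                % f'' > 0: local minimum, f'' < 0: local maximum, f'' = 0: check higher derivatives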
Multivariable optimization
Two cases: without constraints and with constraints
The conditions are similar to the single-variable case
Theorem 3 (necessary condition): at an extreme point x*, all first partial derivatives of f vanish, i.e. ∂f/∂xi = 0 for all i
Theorem 4 (sufficient condition): x* is a local minimum if the Hessian matrix of f evaluated at x* is positive definite, and a local maximum if it is negative definite
Example: find the extreme points of a function of two variables
Soln. Evaluate the first partial derivatives, equate them to zero and solve; then check the Hessian matrix by determining the second derivatives and its determinants
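A hedged MATLAB sketch of this procedure (Symbolic Math Toolbox assumed); the two-variable function f used below is a made-up example, not the one from the slides.

% Sketch: find and classify the extreme points of a two-variable function symbolically.
syms x1 x2
f = x1^3 + x2^2 - 3*x1 - 4*x2 + 6;          % hypothetical example function
grad = gradient(f, [x1 x2]);                % first partial derivatives
sols = solve(grad == 0, [x1 x2]);           % stationary points
H = hessian(f, [x1 x2]);                    % Hessian matrix
for k = 1:numel(sols.x1)
    Hk = double(subs(H, [x1 x2], [sols.x1(k) sols.x2(k)]));
    % all eigenvalues > 0 (positive definite): local minimum,
    % all eigenvalues < 0 (negative definite): local maximum, otherwise saddle/inconclusive
    disp([double(sols.x1(k)) double(sols.x2(k)) eig(Hk).'])
end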
Multivariable optimization with equality
constraints
Problem formulation
Find x which minimizes f(x) subject to the constraints gi(x) = 0 for i = 1, 2, ..., m, where m ≤ n
The solution can be obtained using
Direct substitution
Constrained variation
The Lagrangian multiplier method
Direct substitution
Converts a constrained optimization problem into an unconstrained one
Used for simpler problems
Technique:
Express m of the variables in terms of the remaining n-m variables using the constraint equations
Substitute these into the objective function
Direct substitution cont…
Example – find the values of x1, x2 and x3 which maximize f(x1, x2, x3) = 8 x1 x2 x3
subject to the equality constraint x1² + x2² + x3² = 1
Soln. Rewrite the constraint equation to eliminate any one of the variables:
x2 = √(1 - x1² - x3²), then f(x1, x2, x3) = 8 x1 x3 √(1 - x1² - x3²)
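Illustration (not from the slides): the reduced unconstrained problem can be checked numerically with MATLAB's fminsearch by minimizing the negative of the substituted objective.

% Sketch: maximize the substituted objective 8*x1*x3*sqrt(1 - x1^2 - x3^2).
fr = @(v) -8 * v(1) * v(2) * sqrt(max(1 - v(1)^2 - v(2)^2, 0));  % v = [x1; x3], negated for minimization
v  = fminsearch(fr, [0.5; 0.5]);
x1 = v(1);  x3 = v(2);
x2 = sqrt(1 - x1^2 - x3^2);      % recover the eliminated variable
disp([x1 x2 x3])                 % expected near 1/sqrt(3) = 0.577 for each variable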
Constrained variation method
Finds a closed-form expression for the first-order differential of f at all points where the constraints are satisfied
Example: minimize f(x1, x2)
Subject to g(x1, x2) = 0
Constrained variation
At a minimum, df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0
If we take small variations dx1 and dx2 that keep the constraint satisfied, then after a Taylor series expansion dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0
Rewriting the equation: dx2 = -[(∂g/∂x1)/(∂g/∂x2)] dx1
Substituting into df gives the necessary condition
(∂f/∂x1)(∂g/∂x2) - (∂f/∂x2)(∂g/∂x1) = 0 at the minimum point
Constrained variation
For the general case with m constraints gj(x) = 0 in n variables, the necessary conditions require the Jacobian determinants ∂(f, g1, ..., gm)/∂(xk, x1, ..., xm) to vanish for k = m+1, ..., n
Under the assumption that the Jacobian ∂(g1, ..., gm)/∂(x1, ..., xm) ≠ 0
Example: minimize a given function subject to a given constraint
Soln. Compute the partial derivatives of f and g and apply the necessary condition above
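A hedged MATLAB sketch (Symbolic Math Toolbox assumed) applying the two-variable necessary condition to a made-up problem; the objective and constraint here are illustrative only.

% Sketch: constrained variation condition (df/dx1)(dg/dx2) - (df/dx2)(dg/dx1) = 0.
syms x1 x2
f = x1^2 + x2^2;              % hypothetical objective
g = x1 + x2 - 2;              % hypothetical constraint g(x1, x2) = 0
cond = diff(f, x1)*diff(g, x2) - diff(f, x2)*diff(g, x1);   % necessary condition
sol  = solve([cond == 0, g == 0], [x1 x2]);                 % solve together with the constraint
disp([sol.x1 sol.x2])         % expected x1 = x2 = 1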
Method of Lagrangian multipliers
Problem formulation: minimize f(x)
s.t. g(x) = 0
Procedure:
A function L can be formed as L(x, λ) = f(x) + λ g(x)
Necessary conditions for an extreme point are ∂L/∂xi = 0 for all i and ∂L/∂λ = g(x) = 0
Lagrange Multiplier method cont…
For the general case with m constraints, the Lagrangian is L(x, λ) = f(x) + Σ λj gj(x)
Sufficient condition: the matrix of second derivatives of L with respect to x, evaluated at the stationary point (for variations satisfying the constraints), has to be a positive definite matrix for a minimum
Example: find the maximum of f subject to the given constraint
Soln. Form the Lagrangian, set its partial derivatives with respect to the variables and the multiplier to zero, and solve the resulting equations, giving the extreme point and the multiplier
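Since the numbers of the slide's example are not reproduced here, the following MATLAB sketch (Symbolic Math Toolbox assumed) walks through the Lagrangian procedure on a small hypothetical problem instead.

% Sketch: maximize f = x1*x2 subject to x1 + x2 = 10 by the Lagrange multiplier method.
syms x1 x2 lam
f = x1 * x2;                          % hypothetical objective
g = x1 + x2 - 10;                     % hypothetical equality constraint g = 0
L = f + lam * g;                      % Lagrangian
eqs = gradient(L, [x1 x2 lam]) == 0;  % necessary conditions dL/dx1 = dL/dx2 = dL/dlam = 0
sol = solve(eqs, [x1 x2 lam]);
disp([sol.x1 sol.x2 sol.lam])         % expected x1 = x2 = 5, lam = -5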
Multivariable optimization with inequality constraints
When the constraints are inequality constraints, i.e. gj(x) ≤ 0,
they can be transformed to equality constraints by adding slack variables, gj(x) + yj² = 0
The Lagrangian method can then be used
Kuhn-Tucker conditions
The necessary conditions for the above problem are
∂f/∂xi + Σ λj ∂gj/∂xi = 0 for i = 1, ..., n
λj gj(x) = 0, gj(x) ≤ 0 and λj ≥ 0 for j = 1, ..., m
When there are both equality and inequality constraints, the KT conditions also include the equality constraints lk(x) = 0, whose multipliers are unrestricted in sign
Kuhn-Tucker conditions cont…
Example: for a given objective function and constraints, write the KT conditions
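Illustration (not from the slides): the sketch below solves a made-up inequality-constrained problem with fmincon (Optimization Toolbox assumed) and inspects the returned multipliers, which should satisfy the KT conditions (nonnegative multipliers and complementary slackness).

% Sketch: numerical check of the KT conditions for a hypothetical problem:
% minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1^2 - x2 <= 0 and x1 + x2 - 2 <= 0.
f = @(x) (x(1) - 2)^2 + (x(2) - 1)^2;
nonlcon = @(x) deal([x(1)^2 - x(2); x(1) + x(2) - 2], []);   % [c, ceq] with c <= 0
[x, ~, ~, ~, lambda] = fmincon(f, [0; 0], [], [], [], [], [], [], nonlcon);
[c, ~] = nonlcon(x);
disp(x)                          % expected optimum near (1, 1)
disp(lambda.ineqnonlin)          % multipliers are nonnegative (about 2/3 each here)
disp(lambda.ineqnonlin .* c)     % complementary slackness: each product is approximately 0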
Linear programming
History
George B. Dantzig, 1947: the simplex method
Kuhn and Tucker: duality theory
Charnes and Cooper: industrial applications
Problem statement
Minimize f(X) = cᵀX
Subject to the constraints AX = b and X ≥ 0
Properties of LP (standard form)
The objective function is of the minimization type
All constraints are of the linear equality type
All decision variables are nonnegative
Transformations into standard form
If the problem is a maximization, use -f
If a variable can be negative, write it as the difference of two nonnegative variables
If a constraint is an inequality, add slack or surplus variables
Simplex algorithm
The objective of the simplex algorithm is to find a vector X ≥ 0 which minimizes f(X) and satisfies equality constraints of the form AX = b
Algorithm (a tableau sketch follows the example below):
1. Convert the system of equations to canonical form
2. Identify the basic solution
3. Test optimality and stop if optimum
4. If not, select the next pivot element and rewrite the equations
5. Go to step 2
Simplex algorithm
Example: maximize a given linear objective subject to given linear constraints
Solution:
Step 1. Convert the problem to canonical form
Use the tabular method to proceed with the algorithm
Basic variables are those variables having a coefficient of 1 in one of the equations and zero in the rest of the equations
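Since the example's numbers are not reproduced here, below is a minimal tableau-simplex sketch in MATLAB for a small hypothetical LP (maximize 3x1 + 5x2 subject to x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18, x ≥ 0); the data and tolerances are illustrative.

% Sketch: tabular simplex for a small hypothetical maximization LP.
c = [3 5];                       % objective coefficients (maximization)
A = [1 0; 0 2; 3 2];             % constraint coefficients
b = [4; 12; 18];                 % right-hand sides
[m, n] = size(A);
T = [A eye(m) b;                 % canonical form: slack variables added
     -c zeros(1, m) 0];          % bottom row holds -c for maximization
while any(T(end, 1:n+m) < -1e-9)         % negative reduced cost: not yet optimal
    [~, q] = min(T(end, 1:n+m));         % entering variable (most negative coefficient)
    ratios = T(1:m, end) ./ T(1:m, q);
    ratios(T(1:m, q) <= 1e-9) = inf;     % ignore non-positive pivot candidates
    [~, p] = min(ratios);                % leaving row by the minimum ratio test
    T(p, :) = T(p, :) / T(p, q);         % pivot: normalize the pivot row
    for i = [1:p-1, p+1:m+1]
        T(i, :) = T(i, :) - T(i, q) * T(p, :);   % eliminate the pivot column elsewhere
    end
end
disp(T(end, end))                % optimal objective value (36 for this data)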
Various types of solutions
Unbounded solution
If all the constraint coefficients of the entering variable are non-positive, the objective can be decreased without limit
Infinite solutions
If the objective-row coefficient of a nonbasic variable is zero at an optimal solution, alternative optima exist
Modifications to simplex method
Two-phase method
Used when an initial feasible solution is not readily available
Phase I rearranges the equations to obtain a basic feasible solution
Phase II solves the original problem
Revised simplex method
Solves using the dual of the basic solution
Using MATLAB to solve LP
Example: minimize f(x) = 5x1 + 2x2
Subject to 3x1 + 4x2 ≤ 24, x1 - x2 ≤ 3, -x1 - 4x2 ≤ -4, -3x1 - x2 ≤ -3, and x1, x2 ≥ 0 (the problem encoded by the matrices below)
Soln.
1. Form the matrices containing the coefficients of the objective function, the constraint equations and the constants separately
f=[5 2];
A=[3 4;1 -1;-1 -4;-3 -1];
b=[24;3;-4;-3];
lb=zeros(2,1);
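The remaining step is sketched here under the assumption that the slides continue with a call to linprog (Optimization Toolbox) using its standard signature, with empty matrices for the unused equality constraints.

% Sketch: solve the LP with linprog (A*x <= b, x >= lb, no equality constraints).
[x, fval] = linprog(f, A, b, [], [], lb);
disp(x)        % optimal decision variables
disp(fval)     % optimal objective value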