Multi Obj Unit 5

This presentation compares single-objective and multi-objective optimization. Single-objective optimization involves one objective function, while multi-objective optimization involves more than one. Common multi-objective optimization methods include the weighted sum method, the ε-constraint method, and weighted metric methods. These methods have advantages, such as the ability to find Pareto-optimal solutions, but also disadvantages, such as requiring user-specified parameters and normalized objectives.


Presentation on

Multi-Objective Optimization
Single vs. Multi-objective
Single-Objective Optimization:
When an optimization problem involves only one objective function, the task of finding the optimal solution is called single-objective optimization.

Example: Find a car for me with minimum cost.

Multi-objective Optimization:
When an optimization problem involves more than one objective function, the task of finding one or more optimal solutions is known as multi-objective optimization.

Example: Find a car for me with minimum cost and maximum comfort.
Single vs. Multi-objective: A Simple Visualization
[Figure: candidate cars plotted with Price on one axis and Luxury on the other; two solutions A and B illustrate the trade-off between the two objectives.]
Multi-objective Problem (ctd.)
[Figure: the mapping from the decision space $R^d$ to the objective space $F^n$.]
Concept of Domination
A solution x1 dominates another solution x2 if both of the following conditions are true:

1. The solution x1 is no worse than x2 in all objectives.

2. The solution x1 is strictly better than x2 in at least one objective.
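For concreteness, here is a minimal Python sketch of this definition (assuming, as is common, that every objective is to be minimized; the function name `dominates` is our own, not from the slides):

```python
def dominates(x1, x2):
    """Return True if objective vector x1 dominates x2.

    Assumes every objective is minimized: x1 dominates x2 when it is
    no worse in all objectives (condition 1) and strictly better in
    at least one objective (condition 2).
    """
    no_worse = all(a <= b for a, b in zip(x1, x2))
    strictly_better = any(a < b for a, b in zip(x1, x2))
    return no_worse and strictly_better

# (1, 4) dominates (2, 5), but neither of (1, 4) and (2, 3)
# dominates the other -- they are mutually non-dominated.
print(dominates((1, 4), (2, 5)))  # True
print(dominates((1, 4), (2, 3)))  # False
```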
A Simple Visualization
[Figure: solutions (labelled 1, 3, 4, 5 in the original figure) plotted with f1 (to be maximized) on the horizontal axis and f2 (to be minimized) on the vertical axis, compared for domination.]
Time complexity of finding the non-dominated set: O(MN²) for N solutions and M objectives.
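The O(MN²) bound comes from comparing each of the N solutions against every other in all M objectives. A naive sketch (again assuming all objectives are minimized, unlike the mixed max/min axes of the figure):

```python
def dominates(x1, x2):
    # x1 dominates x2 (all objectives minimized): no worse everywhere,
    # strictly better somewhere.
    return (all(a <= b for a, b in zip(x1, x2))
            and any(a < b for a, b in zip(x1, x2)))

def non_dominated(solutions):
    """Return the non-dominated subset of a list of objective vectors.

    Each solution is checked against all others, so for N solutions
    with M objectives the cost is O(M * N^2).
    """
    return [x for x in solutions
            if not any(dominates(y, x) for y in solutions)]

points = [(1, 5), (2, 3), (4, 2), (3, 4), (5, 1)]
print(non_dominated(points))  # [(1, 5), (2, 3), (4, 2), (5, 1)]
```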
Properties of Dominance
Reflexive: The dominance relation is not reflexive (no solution dominates itself).
Symmetric: The dominance relation is also not symmetric.
Transitive: The dominance relation is transitive.

For a binary relation to qualify as an ordering relation, it must be at least transitive.

Thus the dominance relation is only a strict partial order relation.
Pareto Optimality
Non-dominated Set: Among a set of solutions P, the non-dominated set of solutions P' contains those solutions that are not dominated by any member of the set P.
Globally Pareto-optimal Set: The non-dominated set of the entire feasible search space S is the globally Pareto-optimal set.
Locally Pareto-optimal Set: If for every member x in a set P'' there exists no solution y (in the neighbourhood of x) dominating any member of the set P'', then solutions belonging to the set P'' constitute a locally Pareto-optimal set.
Classification of the Classical Methods
• Classical methods have been around for the past four decades.
• They can be classified into the following classes:

No-preference methods
These methods do not assume any information about the importance of objectives; a heuristic is used to find a single optimal solution. They make no attempt to find multiple Pareto-optimal solutions.

A posteriori methods
A posteriori methods use preference information on each objective and iteratively generate a set of Pareto-optimal solutions.

A priori methods
A priori methods, on the other hand, use more information about the preferences of the objectives and usually find one preferred Pareto-optimal solution.

Interactive methods
These methods use the preference information progressively during the optimization process.
Weighted Sum Method
• This method is the simplest approach and probably the most widely used classical method.

• It scales the set of objectives into a single objective by multiplying each objective by a user-supplied weight.

• Although simple, it introduces a not-so-simple question: what values of the weights should be used? The answer depends on the relative importance of each objective.
Weighted Sum Method
formulation
$$\text{Minimize } F(x) = \sum_{m=1}^{M} w_m f_m(x), \qquad (1)$$
$$\text{subject to } g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$

• where the objectives are normalized.
• $w_m \in [0, 1]$ is the weight of the m-th objective function.
• It is usual practice to choose weights such that $\sum_{m=1}^{M} w_m = 1$.
Weighted Sum Method
Properties

Theorem
The solution to the problem presented in equation (1) is Pareto-optimal if the weights are positive for all objectives: $w_m > 0$, $m = 1, \dots, M$.

• The proof is by contradiction: assume a solution obtained with all positive weights that is not Pareto-optimal, and show that this leads to a contradiction.
• Note that the theorem does not imply that every Pareto-optimal solution can be obtained using a positive weight vector.
Weighted Sum Method
Properties

Theorem
If x is a Pareto-optimal solution of a convex multi-objective optimization problem, then there exists a non-zero positive weight vector w such that x is a solution of problem (1).

• The theorem suggests that for a convex multi-objective optimization problem, any Pareto-optimal solution can be found using the weighted sum method.
Weighted Sum Method
Illustration
Weighted Sum Method
Advantages and disadvantages
Advantages
• Simple and easy to use.
• For convex problems, it is guaranteed to find solutions on the entire Pareto-optimal set.

Disadvantages
• For mixed optimization problems (min-max), all the objectives must first be converted into one type.
• A uniformly distributed set of weights does not guarantee a uniformly distributed set of Pareto-optimal solutions.
• Two different weight vectors do not necessarily lead to two different Pareto-optimal solutions.
• There may exist multiple minimum solutions for a specific weight vector that represent different solutions on the Pareto-optimal front (wasting search effort).
Non-convex problems
Difficulties
ɛ-Constraint method
Definition
• Idea: keep only one of the objectives and restrict the rest of the objectives within some user-specified values.
• The modified problem is the following:

$$\text{Minimize } f_\mu(x),$$
$$\text{subject to } f_m(x) \le \varepsilon_m, \quad m = 1, 2, \dots, M \text{ and } m \ne \mu$$
$$g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$
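A minimal sketch of the same idea with SciPy, keeping f1 as the objective and bounding f2 from above by ε; the toy objectives and the ε values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# The same two toy objectives as in the weighted sum sketch.
def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2

# Keep f1 as the objective and restrict f2 within a user-specified
# upper bound; sweeping epsilon traces out Pareto-optimal solutions.
for eps in (0.2, 0.5, 0.8):
    con = NonlinearConstraint(f2, -np.inf, eps)  # f2(x) <= eps
    res = minimize(f1, x0=np.array([0.5, 0.5]), constraints=[con])
    print(eps, res.x, (f1(res.x), f2(res.x)))
```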
ɛ-Constraint method
Illustration
ɛ-Constraint method
Properties
Theorem
The unique solution of the ε-constraint problem stated above is Pareto-optimal for any given upper bound vector $\varepsilon = (\varepsilon_1, \dots, \varepsilon_{\mu-1}, \varepsilon_{\mu+1}, \dots, \varepsilon_M)^T$.

• The proof is again by contradiction: assume that a unique solution of the ε-constraint problem is not Pareto-optimal, and then show that this assumption violates the definition of Pareto-optimality.
ɛ-Constraint method
Advantages and disadvantages
Advantages
• Different Pareto-optimal solutions can be found using different $\varepsilon_m$ values.
• The method can also be used for non-convex multi-objective optimization problems.

Disadvantages
• The solution to the problem largely depends on the selection of the ε vector. In particular, it must be chosen so that it lies between the minimum and maximum values of each objective function.
• As the number of objectives increases, more information (the $\varepsilon_m$ values) is required from the user.
Weighted Metric Methods
Definition
• Idea: instead of using a weighted sum of the objectives, we can consider other ways
of combining multiple objectives.
$$\text{Minimize } l_p(x) = \left( \sum_{m=1}^{M} w_m \, |f_m(x) - z_m^*|^p \right)^{1/p},$$
$$\text{subject to } g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$

• Weights are non-negative and $p \ge 1$.
• $z^*$ is called the reference point.
• When p = 1 is used, this is equivalent to the weighted sum method.
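A small sketch of evaluating the weighted $l_p$ metric for a fixed objective vector, covering p = 1, p = 2, and the weighted Tchebycheff (p → ∞) case introduced on the following slides; the objective values, weights, and reference point are illustrative assumptions:

```python
import numpy as np

def weighted_lp(f, z_star, w, p):
    """Weighted l_p distance of objective vector f from reference z*.

    Implements l_p = (sum_m w_m |f_m - z*_m|^p)^(1/p); for p = infinity
    it returns the weighted Tchebycheff metric max_m w_m |f_m - z*_m|.
    """
    d = np.abs(np.asarray(f, float) - np.asarray(z_star, float))
    w = np.asarray(w, float)
    if np.isinf(p):
        return (w * d).max()
    return ((w * d ** p).sum()) ** (1.0 / p)

f = [0.6, 0.9]        # current objective values (illustrative)
z_star = [0.0, 0.0]   # reference point (illustrative)
w = [0.5, 0.5]
for p in (1, 2, np.inf):
    print(p, weighted_lp(f, z_star, w, p))
```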
Weighted Metric Methods
(p = 1) Taxicab or Manhattan norm
Weighted Metric Methods
(p = 2) Euclidean norm
Tchebycheff problem
$$\text{Minimize } l_\infty(x) = \max_{m=1}^{M} \; w_m \, |f_m(x) - z_m^*|,$$
$$\text{subject to } g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$
• If the Tchebycheff metric is used, any Pareto-optimal solution can be found.

Theorem
Let x be a Pareto-optimal solution. Then there exists a positive weighting vector w such that x is a solution of the weighted Tchebycheff problem where the reference point is the utopian objective vector.
Weighted Metric Methods
Advantages and Disadvantages
Advantages
• The Tchebycheff metric allows finding each and every Pareto-optimal solution when $z^*$ is the utopian objective vector.

Disadvantages
• It is advisable to normalize the objective functions, which requires knowledge of the minimum and maximum values of each objective.
• It requires knowledge of the ideal solution $z^*$, so each objective function must be optimized before the $l_p$ metric can be computed.
Rotated Weighted Metric Method
• Instead of using the $l_p$ metric directly as stated above, the $l_p$ metric can be applied with an arbitrary rotation from the ideal point.
• Let us say that the relation between the rotated objective vector $\tilde{f}$ and the original objective vector f is $\tilde{f} = Rf$, where R is the rotation matrix of size M × M.
• The modified $l_p$ metric is:

$$l_p(x) = \left( \sum_{m=1}^{M} w_m \, |\tilde{f}_m(x) - z_m^*|^p \right)^{1/p}$$
Rotated Weighted Metric Method
Illustration
Dynamically Changing the Ideal Solution

• Idea: update the reference point $z^*$ every time a Pareto-optimal solution is found.

• The $l_p$ distance from the ideal solution comes closer to the Pareto-optimal front, and previously undiscovered solutions can now be found.
Dynamically Changing the Ideal Solution
Illustration
Benson’s Method
• Idea: take the reference solution randomly from the feasible non-Pareto-optimal region. Let us call it $z^0$.
• The non-negative difference $(z_m^0 - f_m(x))$ for each objective is calculated and their sum is maximized:

$$\text{Maximize } \sum_{m=1}^{M} \max\left(0, \; z_m^0 - f_m(x)\right),$$
$$\text{subject to } f_m(x) \le z_m^0, \quad m = 1, 2, \dots, M$$
$$g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$
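A sketch of Benson's scalarized problem on the same toy objectives as in the earlier sketches; the reference point $z^0$ and the starting point are assumptions, and the max(0, ·) terms make the objective non-differentiable, as listed among the disadvantages below:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2

# Reference point chosen (by assumption) in the feasible,
# non-Pareto-optimal region of objective space.
z0 = np.array([1.0, 1.0])

def neg_benson(x):
    # Maximize sum_m max(0, z0_m - f_m(x)) by minimizing its negative.
    diffs = z0 - np.array([f1(x), f2(x)])
    return -np.clip(diffs, 0.0, None).sum()

# The additional constraints f_m(x) <= z0_m of the formulation.
cons = [NonlinearConstraint(f1, -np.inf, z0[0]),
        NonlinearConstraint(f2, -np.inf, z0[1])]
res = minimize(neg_benson, x0=np.array([0.4, 0.4]), constraints=cons)
print(res.x, (f1(res.x), f2(res.x)))
```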
Benson’s Method
Illustration
Benson's Method
Advantages and Disadvantages
Advantages
• It avoids scaling problems: individual differences can be normalized before the summation.
• If the point $z^0$ is chosen appropriately, then this method can also be used for non-convex multi-objective problems.

Disadvantages
• It has an additional number of constraints.
• The objective function is non-differentiable, causing difficulties for gradient-based methods.
Value Function Method
(or Utility Function Method)
• Idea: the user provides a utility function $U: \mathbb{R}^M \to \mathbb{R}$ relating all M objectives.
• The utility function must be valid over the entire feasible space.
• Among two solutions i and j, i is preferred to j if $U(f(x_i)) > U(f(x_j))$.

$$\text{Maximize } U(f(x)),$$
$$\text{subject to } g_j(x) \ge 0, \quad j = 1, 2, \dots, J$$
$$h_k(x) = 0, \quad k = 1, 2, \dots, K$$
$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \dots, n$$
$$\text{where } f(x) = (f_1(x), f_2(x), \dots, f_M(x))^T$$
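A minimal sketch of the value function approach with an assumed strongly decreasing utility; the utility form and the toy objectives are illustrative choices, not the method's prescribed ones:

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2

def utility(f):
    # An assumed strongly decreasing utility for non-negative
    # objectives: U increases whenever either objective decreases.
    return 1.0 / ((f[0] + 0.1) * (f[1] + 0.1))

def neg_utility(x):
    # Maximize U(f(x)) by minimizing its negative.
    return -utility((f1(x), f2(x)))

res = minimize(neg_utility, x0=np.array([0.3, 0.3]),
               bounds=[(-2.0, 2.0), (-2.0, 2.0)])
print(res.x, utility((f1(res.x), f2(res.x))))
```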
Value Function Method
Properties
• The utility function must be strongly decreasing before it can be used in multi-objective optimization.
• This means that the preference of a solution must increase if one of the objective function values is decreased while keeping the other objective function values the same.

Theorem
Let the utility function $U: \mathbb{R}^M \to \mathbb{R}$ be strongly decreasing. Let U attain its maximum at $f^*$. Then $f^*$ is Pareto-optimal.
Value Function Method
Illustration
Value Function Method
Advantages and Disadvantages
Advantages
• The idea is simple and ideal, if adequate value function information is available.
• It is mainly used for multi-attribute decision analysis problems with a discrete set of feasible solutions.

Disadvantages
• The solution entirely depends on the chosen value function.
• It requires the user to come up with a value function that is globally applicable over the entire search space.