Class1 Emoo 2016
[email protected]
CINVESTAV-IPN
Evolutionary Computation Group (EVOCINV)
Computer Science Department
Av. IPN No. 2508, Col. San Pedro Zacatenco
Mexico, D.F. 07360, MEXICO
Find the vector \vec{x}^* = [x_1^*, x_2^*, \ldots, x_n^*]^T which will satisfy the m
inequality constraints:

g_i(\vec{x}) \geq 0, \quad i = 1, 2, \ldots, m  (1)

the p equality constraints

h_i(\vec{x}) = 0, \quad i = 1, 2, \ldots, p  (2)
and will optimize the vector function

\vec{f}(\vec{x}) = [f_1(\vec{x}), f_2(\vec{x}), \ldots, f_k(\vec{x})]^T  (3)

Utopian Objective Vector
A utopian objective vector \vec{z}^{**} has components

z_i^{**} = z_i^* - \epsilon_i  (6)

for every i = 1, \ldots, k, where z_i^* is a component of the ideal
objective vector and \epsilon_i > 0 is a scalar which is relatively small,
but computationally significant. Clearly, the utopian objective
vector is strictly better than (i.e., it strictly dominates) every Pareto
optimal solution.
Pareto Optimality
We say that a vector of decision variables \vec{x}^* \in \mathcal{F} is Pareto
optimal if there does not exist another \vec{x} \in \mathcal{F} such that
f_i(\vec{x}) \leq f_i(\vec{x}^*) for all i = 1, \ldots, k and f_j(\vec{x}) < f_j(\vec{x}^*) for at least
one j (assuming that all the objectives are being minimized).
Formally, the Pareto optimal set \mathcal{P}^* is defined as:

\mathcal{P}^* := \{ \vec{x} \in \mathcal{F} \mid \neg \exists \, \vec{x}\,' \in \mathcal{F} : \vec{f}(\vec{x}\,') \preceq \vec{f}(\vec{x}) \}  (8)
Pareto Front
For a given multi-objective optimization problem \vec{f}(\vec{x}) and a
Pareto optimal set \mathcal{P}^*, the Pareto front (\mathcal{PF}^*) is defined as:

\mathcal{PF}^* := \{ \vec{f}(\vec{x}) \mid \vec{x} \in \mathcal{P}^* \}  (9)
Pareto Front
In general, it is impossible to find an analytical expression that
represents the line or hyper-surface corresponding to the
Pareto Optimal Front. This is possible only in very simple
(textbook) cases.
Computational Efficiency
Both Algorithm 1 and Algorithm 2 shown before have an
algorithmic complexity of O(MN^2) in the worst case, where M
is the number of objectives and N is the number of solutions.
However, in practice, Algorithm 2 requires about half of the
computational effort required by Algorithm 1.
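The naive pairwise dominance check behind this complexity bound can be sketched as follows (a generic, minimal Python example, not the exact Algorithm 1 or Algorithm 2 from these slides; function names are illustrative). Every pair of solutions is compared, each comparison costing O(M), giving O(MN^2) overall.

```python
# Naive O(M*N^2) extraction of the non-dominated solutions from a
# population of N objective vectors with M objectives each (minimization).
# Illustrative sketch only; not the exact Algorithms 1/2 from the slides.

def dominates(a, b):
    """True if vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the non-dominated subset: N^2 pairwise dominance tests,
    each costing O(M), hence O(M*N^2) in the worst case."""
    return [p for p in population
            if not any(dominates(q, p) for q in population)]

pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(non_dominated(pop))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```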
Optimality Conditions
Fritz John Necessary Condition. A necessary condition for
\vec{x}^* to be Pareto optimal is that there exist vectors \lambda \geq 0 and
\vec{u} \geq 0 (where \lambda \in \mathbb{R}^M, \vec{u} \in \mathbb{R}^J and \lambda, \vec{u} \neq 0) such that the
following conditions hold:

1. \sum_{m=1}^{M} \lambda_m \nabla f_m(\vec{x}^*) - \sum_{j=1}^{J} u_j \nabla g_j(\vec{x}^*) = 0

2. u_j g_j(\vec{x}^*) = 0 for every j = 1, 2, \ldots, J.
Optimality Conditions
These conditions are very similar to the Kuhn-Tucker conditions
of optimality for single-objective problems. The difference lies
in the addition (in this case) of the gradients of all the
objectives.
Optimality Conditions
For nonlinear objective functions, it is expected that the partial
derivatives are nonlinear. For a given vector \lambda, it is possible to
check the non-existence of a Pareto optimal solution using the
previously defined conditions.
Optimality Conditions
Kuhn-Tucker Sufficiency Conditions for Pareto
Optimality: Let's assume that the objective functions are
convex and the constraint functions g_j are concave (so that the
feasible region \mathcal{F} is convex). Let's also assume that the
objective functions and the constraints are continuously
differentiable at a feasible solution \vec{x}^*. A sufficient condition for
\vec{x}^* to be Pareto optimal is that there exist vectors \lambda > 0 and
\vec{u} \geq 0 (where \lambda \in \mathbb{R}^M and \vec{u} \in \mathbb{R}^J) such that the following
equations hold:

1. \sum_{m=1}^{M} \lambda_m \nabla f_m(\vec{x}^*) - \sum_{j=1}^{J} u_j \nabla g_j(\vec{x}^*) = 0

2. u_j g_j(\vec{x}^*) = 0 for every j = 1, 2, \ldots, J.
Game Theory
The so-called game theory can be traced back to a work by
Borel from 1921. However, many historians attribute
the origins of game theory to a paper by the famous
Hungarian mathematician John von Neumann, which was orally
presented in 1926 and published in 1928.
Game Theory
In 1944, John von Neumann and Oskar Morgenstern
mentioned (in their famous book on game theory) that they
had found a problem in economics that was a peculiar and
disconcerting mixture of several problems in conflict with each
other, which could not be solved with the classical optimization
methods known at that time. It remains a mystery why
von Neumann did not become interested in this peculiar problem.
Carlos A. Coello Coello Multi-Objective Optimization
Historical Highlights of Multi-Objective Optimization
Mathematical Foundations
The origins of the mathematical foundations of multi-objective
optimization can be traced back to the period from 1895 to
1906 in which Georg Cantor and Felix Hausdorff established
the foundations of ordered spaces of infinite dimensions.
Mathematical Foundations
Cantor also introduced equivalence classes and established the
first set of sufficiency conditions for the existence of a utility
function.
Mathematical Foundations
However, it was the concept of the maximum vector problem
introduced by Harold W. Kuhn and Albert W. Tucker (1951)
which allowed multi-objective optimization to become a
mathematical discipline on its own.
Mathematical Foundations
It is well-known that the now famous conditions of optimality
commonly attributed to Kuhn and Tucker had been previously
stated and proved by W. Karush in an unpublished Master's
thesis in 1939.
Mathematical Foundations
Nevertheless, the theory of multi-objective optimization
remained practically unexplored during the 1950s. It was not until
the 1960s that the mathematical foundations of the area were
consolidated, when Leonid Hurwicz generalized Kuhn and
Tucker's results to topological vector spaces.
Goal Programming
Perhaps the most important outcome from the 1950s was the
development of Goal Programming, which was originally
introduced by Abraham Charnes and William Wager Cooper in
1957. However, Goal Programming became popular in the
1960s.
Applications
The first application of multi-objective optimization outside
economics was done by Koopmans (1951) in production theory.
Later on, Marglin (1967) developed the first applications of
multi-objective optimization in water resources.
Applications
The first engineering application of multi-objective optimization
reported in the literature is a paper published by Lotfi Zadeh in
the early 1960s (related to automatic control). However,
multi-objective optimization applications did not become
widespread until the 1970s.
Global Criterion Method

L_p(f) = \left[ \sum_{i=1}^{k} \left( f_i^0 - f_i(\vec{x}) \right)^p \right]^{1/p}, \quad 1 \leq p \leq \infty  (13)

Relative deviations of the form

\frac{f_i^0 - f_i(\vec{x})}{f_i^0}  (14)

are preferred over absolute deviations, because they have a
substantive meaning in any context.
Compromise Programming
Using the global criterion method one non-inferior solution is
obtained. If certain parameters wi are used as weights for the
criteria, a required set of non-inferior solutions can be found.
Duckstein [1984] calls this method compromise
programming.
L_p(\vec{x}) = \left[ \sum_{i=1}^{k} w_i^p \left| \frac{f_i(\vec{x}) - f_i^0}{f_i^{max} - f_i^0} \right|^p \right]^{1/p}  (17)

where w_i are the weights, f_i^{max} is the worst value obtainable for
criterion i, and f_i(\vec{x}) is the result of implementing decision \vec{x} with
respect to the i-th criterion.
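Equation (17) is straightforward to evaluate for a candidate decision. The following is a minimal Python sketch (function and parameter names are illustrative): it computes the weighted, normalized distance of an objective vector from the ideal values.

```python
# Compromise programming distance L_p (sketch of Eq. 17): weighted,
# normalized deviation of each f_i(x) from the ideal value f_i^0, scaled
# by the range f_i^max - f_i^0. Names (f, f0, fmax, w) are illustrative.

def compromise_lp(f, f0, fmax, w, p=2):
    total = sum((wi ** p) * abs((fi - f0i) / (fmax_i - f0i)) ** p
                for fi, f0i, fmax_i, wi in zip(f, f0, fmax, w))
    return total ** (1.0 / p)
```

With p = 1 this is a weighted sum of normalized deviations; as p grows, the largest deviation dominates the metric (approaching the min-max case).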
Displaced Ideal
The Displaced Ideal technique [Zeleny, 1977], which proceeds
to define an ideal point, then a solution point, then another ideal
point, and so on, is an extension of compromise programming.
Wierzbicki's Method
Another variation of this technique is the method suggested by Wierzbicki
[1979, 1980], in which the global function has a form that penalizes the
deviations from the so-called reference objective. Any reasonable or
desirable point in the space of objectives chosen by the decision maker can
be considered as the reference objective.
Let \vec{f}^r = [f_1^r, f_2^r, \ldots, f_k^r]^T be a vector which defines this point. Then the function
which is minimized has the form

P(\vec{x}, \vec{f}^r) = \sum_{i=1}^{k} (f_i(\vec{x}) - f_i^r)^2 + \varrho \sum_{i=1}^{k} \left( \max(0, f_i(\vec{x}) - f_i^r) \right)^2  (18)
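The penalty function of equation (18) can be sketched directly in code. This is an illustrative Python version; `rho` stands for the penalty coefficient \varrho, and its default value here is an arbitrary assumption.

```python
# Wierzbicki-style penalty function (sketch of Eq. 18): squared deviation
# from the reference objective vector f_ref, plus a penalty (weight rho,
# an assumed value) on overshooting the reference components.

def wierzbicki_penalty(f, f_ref, rho=100.0):
    dev = sum((fi - fri) ** 2 for fi, fri in zip(f, f_ref))        # deviation term
    over = sum(max(0.0, fi - fri) ** 2 for fi, fri in zip(f, f_ref))  # overshoot term
    return dev + rho * over
```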
Goal Programming
Charnes and Cooper [1961] and Ijiri [1965] are credited with
the development of the goal programming method for a linear
model, and played a key role in applying it to industrial
problems. This was one of the earliest techniques specifically
designed to deal with multiobjective optimization problems.
Goal Programming
In this method, the decision maker (DM) has to assign targets
or goals that he or she wishes to achieve for each objective. These
values are incorporated into the problem as additional
constraints. The objective function then tries to minimize the
absolute deviations of the objectives from the targets. The
simplest form of this method may be formulated as follows:
\min \sum_{i=1}^{k} |f_i(\vec{x}) - T_i|, \quad \text{subject to } \vec{x} \in \mathcal{F}  (19)

where T_i denotes the target for objective i. The over- and
underachievement deviational variables are defined as:

d_i^+ = \frac{1}{2} \{ |f_i(\vec{x}) - T_i| + [f_i(\vec{x}) - T_i] \}  (20)

d_i^- = \frac{1}{2} \{ |f_i(\vec{x}) - T_i| - [f_i(\vec{x}) - T_i] \}  (21)
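The deviational variables of equations (20) and (21) can be computed directly. A minimal Python sketch (names are illustrative): `d_plus` measures overachievement of the target, `d_minus` underachievement, and for any value exactly one of them can be nonzero.

```python
# Deviational variables of goal programming (sketch of Eqs. 20-21):
# d_plus is the overachievement of target T_i, d_minus the
# underachievement; at most one of the two is nonzero.

def deviations(f_val, target):
    d_plus = 0.5 * (abs(f_val - target) + (f_val - target))
    d_minus = 0.5 * (abs(f_val - target) - (f_val - target))
    return d_plus, d_minus

print(deviations(5, 3))  # overachieved by 2 -> (2.0, 0.0)
print(deviations(1, 3))  # underachieved by 2 -> (0.0, 2.0)
```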
A Priori Preference Articulation
Goal Programming
Adding and subtracting these equations, the following
equivalent linear formulation may be found:
\min Z_0 = \sum_{i=1}^{k} (d_i^+ + d_i^-)  (22)

subject to

\vec{x} \in \mathcal{F}

f_i(\vec{x}) - d_i^+ + d_i^- = T_i  (23)

d_i^+, d_i^- \geq 0, \quad i = 1, \ldots, k
Since it is not possible to have both under- and
overachievement of a goal simultaneously, at least one
of the deviational variables must be zero. In other words:

d_i^+ \cdot d_i^- = 0  (24)
Goal Programming
Fortunately, this constraint is automatically fulfilled by the
simplex method, because the objective function drives either d_i^+
or d_i^-, or both variables simultaneously, to zero for all i.
Goal Programming
In addition, goal programming provides the flexibility to deal
with cases that have conflicting multiple goals.
Goal Programming
The resulting optimization model becomes:

\min S_0 = \sum_{i=1}^{k} p_i (w_i^+ d_i^+ + w_i^- d_i^-)  (25)

subject to

\vec{x} \in \mathcal{F}

f_i(\vec{x}) - d_i^+ + d_i^- = T_i  (26)

d_i^+, d_i^- \geq 0, \quad i = 1, \ldots, k
Goal Programming
More information on Goal Programming can be found at the following
references:
Goal Attainment
Minimize \alpha  (27)

subject to:

g_j(\vec{x}) \geq 0; \quad j = 1, 2, \ldots, m

b_i + \alpha w_i \geq f_i(\vec{x}); \quad i = 1, 2, \ldots, k  (28)

where \alpha is a scalar variable, the b_i are the goals, and the w_i are
weights that control the relative degree of attainment of each goal.
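In the goal-attainment formulation above, a scalar attainment level is minimized subject to each f_i(\vec{x}) being bounded by b_i plus a weighted multiple of that level. For a fixed candidate solution, the smallest feasible level is therefore the largest scaled goal violation. The following Python sketch (illustrative names, positive weights, finite candidate set) shows the idea:

```python
# Goal-attainment sketch: for a fixed candidate with objective vector f,
# the smallest level alpha satisfying b_i + alpha * w_i >= f_i for all i
# is the maximum scaled violation of the goals b (weights w assumed > 0).

def attainment_level(f, b, w):
    return max((fi - bi) / wi for fi, bi, wi in zip(f, b, w))

# Pick, from a finite candidate set, the solution with the smallest level:
def best_by_attainment(candidates, b, w):
    return min(candidates, key=lambda f: attainment_level(f, b, w))
```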
Lexicographic Ordering
This is a peculiar method in which the aggregations performed
are not scalar. In this method, the objectives are ranked in
order of importance by the decision maker (from best to worst).
Lexicographic Ordering
Let the subscripts of the objectives indicate not only the objective function
number, but also the priority of the objective. Thus, f1 (~x ) and fk (~x ) denote the
most and least important objective functions, respectively. Then the first
problem is formulated as
Minimize f_1(\vec{x})  (30)

subject to

g_j(\vec{x}) \geq 0; \quad j = 1, 2, \ldots, m  (31)

and its solution \vec{x}_1^* and f_1^* = f_1(\vec{x}_1^*) is obtained. Then, the second problem is
formulated as

Minimize f_2(\vec{x})  (32)

subject to

g_j(\vec{x}) \geq 0; \quad j = 1, 2, \ldots, m  (33)

f_1(\vec{x}) = f_1^*  (34)
Lexicographic Ordering
This procedure is repeated until all k objectives have been
considered. The i-th problem is given by

Minimize f_i(\vec{x})  (35)

subject to

g_j(\vec{x}) \geq 0; \quad j = 1, 2, \ldots, m  (36)

f_l(\vec{x}) = f_l^*, \quad l = 1, 2, \ldots, i - 1  (37)

The solution obtained at the end, i.e., \vec{x}_k^*, is taken as the desired
solution \vec{x}^* of the problem.
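Over a finite set of candidate solutions, the sequential scheme above reduces to comparing objective tuples in priority order. This is a Python sketch of that discrete shortcut (it replaces the equality-constrained subproblems with tuple comparison; names are illustrative):

```python
# Lexicographic ordering over a finite candidate set: minimize the most
# important objective first, break ties with the second, and so on.
# Comparing tuples of objective values implements exactly this order.

def lexicographic_best(candidates, objectives):
    """objectives: list of functions, most important first."""
    return min(candidates, key=lambda x: tuple(f(x) for f in objectives))

cands = [(1, 5), (1, 2), (3, 0)]
f1 = lambda x: x[0]  # most important objective
f2 = lambda x: x[1]  # least important objective
print(lexicographic_best(cands, [f1, f2]))  # (1, 2): tie on f1 broken by f2
```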
Lexicographic Ordering
More information on Lexicographic Ordering can be found at:
G. V. Sarma, L. Sellami, and K. D. Houam, "Application of
Lexicographic Goal Programming in Production Planning - Two
Case Studies", Opsearch, 30(2):141-162, 1993.

S. S. Rao, "Multiobjective Optimization in Structural Design
with Uncertain Parameters and Stochastic Processes", AIAA
Journal, 22(11):1670-1678, November 1984.
Min-Max Optimization
The idea of stating the min-max optimum and applying it to
multiobjective optimization problems was taken from game
theory, which deals with solving conflicting situations. The
min-max approach to a linear model was proposed by Jutler
[1967] and Solich [1969].
Min-Max Optimization
Let's consider the i-th objective function, for which the relative
deviation can be calculated from

z_i'(\vec{x}) = \frac{|f_i(\vec{x}) - f_i^0|}{|f_i^0|}  (38)

or from

z_i''(\vec{x}) = \frac{|f_i(\vec{x}) - f_i^0|}{|f_i(\vec{x})|}  (39)

It should be clear that for equations (38) and (39) it is
necessary to assume that for every i \in I (I = \{1, 2, \ldots, k\}) and for
every \vec{x} \in \mathcal{F}, f_i(\vec{x}) \neq 0.
Min-Max Optimization
If all the objective functions are going to be minimized, then
equation (38) defines relative increments of the functions, whereas if
all of them are going to be maximized, it defines relative
decrements. Equation (39) works conversely.
Min-Max Optimization
Now the min-max optimum can be defined as follows [Osyczka,
1984]:
A point \vec{x}^* \in \mathcal{F} is min-max optimal if, for every \vec{x} \in \mathcal{F}, the
following recurrence formula is satisfied:

Step 1:

v_1(\vec{x}^*) = \min_{\vec{x} \in \mathcal{F}} \max_{i \in I} \{ z_i(\vec{x}) \}  (41)

and then I_1 = \{i_1\}, where i_1 is the index for which the value of
z_i(\vec{x}) is maximal.
Min-Max Optimization
Step 2:

v_2(\vec{x}^*) = \min_{\vec{x} \in X_1} \max_{i \in I, \, i \notin I_1} \{ z_i(\vec{x}) \}  (42)

and then I_2 = \{i_1, i_2\}, where i_2 is the index for which the value
of z_i(\vec{x}) in this step is maximal.
Min-Max Optimization
Step r:

v_r(\vec{x}^*) = \min_{\vec{x} \in X_{r-1}} \max_{i \in I, \, i \notin I_{r-1}} \{ z_i(\vec{x}) \}  (43)

and then I_r = I_{r-1} \cup \{i_r\}, where i_r is the index for which the value
of z_i(\vec{x}) in the r-th step is maximal.
Min-Max Optimization
Step k:
Min-Max Optimization
The point \vec{x}^* \in \mathcal{F} which satisfies the equations of the steps above
may be called the best compromise solution, considering all the
criteria simultaneously and on equal terms of importance.
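The first step of the recurrence can be illustrated over a finite set of solutions: pick the candidate whose largest relative deviation from the ideal values is smallest. A Python sketch (illustrative names; it assumes the ideal components are nonzero, consistent with the assumption stated above):

```python
# Min-max best compromise over a finite set of objective vectors
# (first step of the recurrence only): minimize the largest relative
# deviation z_i(x) from the ideal values f0 (all f0_i assumed nonzero).

def relative_increments(f, f0):
    return [abs(fi - f0i) / abs(f0i) for fi, f0i in zip(f, f0)]

def minmax_best(candidates, f0):
    return min(candidates, key=lambda f: max(relative_increments(f, f0)))
```

Ties among candidates with the same worst deviation would be resolved by the subsequent steps of the recurrence, which this sketch omits.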
The two most representative algorithms within this class are the
following:
Linear Combination of Weights
The \varepsilon-Constraint Method
Both of them will be briefly described next.
Linear Combination of Weights

Minimize \sum_{i=1}^{k} w_i f_i(\vec{x})  (45)

subject to:

\vec{x} \in \mathcal{F}  (46)

where w_i \geq 0 for all i and w_i is strictly positive for at least one
objective.
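The scalarization can be sketched as follows over a finite candidate set (a minimal Python example; the weight values are illustrative):

```python
# Linear combination of weights: scalarize the k objectives with
# nonnegative weights and minimize the weighted sum (minimization
# assumed for all objectives).

def weighted_sum(f, w):
    return sum(wi * fi for wi, fi in zip(w, f))

def best_weighted(candidates, w):
    return min(candidates, key=lambda f: weighted_sum(f, w))

cands = [(1, 4), (2, 2), (3, 1)]
print(best_weighted(cands, (0.25, 0.75)))  # (3, 1): cheapest weighted sum
```

Note that varying the weights traces out different Pareto optimal solutions, but this method cannot reach solutions on non-convex portions of the Pareto front.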
The \varepsilon-Constraint Method

Minimize f_l(\vec{x})

subject to:

\vec{x} \in \mathcal{F}

f_i(\vec{x}) \leq \varepsilon_i; \quad i = 1, 2, \ldots, k, \; i \neq l

where f_l is the objective selected for optimization and the \varepsilon_i are
upper bounds chosen by the decision maker for the remaining objectives.
STEP Method
This method (also known as STEM) is an iterative technique
based on the progressive articulation of preferences. The basic
idea is to converge toward the best solution in the min-max
sense, in no more than k steps, where k is the number of
objectives. This technique, which is mostly useful for linear
problems, starts from an ideal point and proceeds in six steps.
STEP Method
Another problem with the STEP Method is that it does not explicitly capture
the trade-offs between the objectives. The weights in no way reflect a value
judgment on the part of the DM. The weights are artificial quantities,
generated by the analyst to reflect deviations from an ideal solution, which is
itself an artificial quantity. This definition of the weights serves to obscure
rather than capture the normative nature of the multiobjective optimization
problems [Cohon and Marks, 1975].
From the many metaheuristics currently available, one particular class has
become very popular in the last 20 years: bio-inspired metaheuristics.