Lesson 1. Introduction To Metaheuristics and General Concepts

This document introduces metaheuristics and general optimization concepts. It discusses motivation for using metaheuristics to solve non-polynomially solvable problems like the knapsack problem and traveling salesman problem. It covers key elements of metaheuristics like encoding, evaluation functions, and classification of algorithms based on construction, trajectory, or population. The document also introduces concepts like diversification versus intensification and the no free lunch theorems for optimization.


Lesson 1.

Introduction to Metaheuristics and General Concepts

Carlos García Martínez


Goals
● P vs NP problems
● Heuristic vs Metaheuristic method
– Metaheuristics' elements
– Metaheuristic classification
● No free lunch theorems for optimisation
Outline
● Motivation
● Encoding and evaluation
● Optimisation Algorithms
● Diversification vs Intensification
● Example
● Random Search
● No free lunch theorems for optimisation
● Others
Motivation
● Optimization problems
● Polynomially solvable problems
– Continuous knapsack
– Shortest path problem
– 2/3 maximum diversity problem
● Non-polynomially solvable problems
– Knapsack problem
– Traveling Salesman Problem
– Maximum diversity problem
– Real parameter optimization (cars/systems configuration)
– Super Mario Bros controller (https://www.youtube.com/watch?v=EiAWYGNpu9M)
● Other problems
Example
● How much time would you take to solve the knapsack problem?
● How much to solve the TSP?
● How much to get the best Super Mario Bros controller?

● Suppose that you can evaluate one solution every second, and you can work uninterruptedly 24 hours a day...
Other NP Optimization Problems
● Vehicle routing
● Scheduling
● Workforce distribution
● Bin packing
● Nurse rostering
● Timetabling
● Wedding table seating planning
● ....
TSP Example

http://www.math.uwaterloo.ca/tsp/county/img/usa3100_screen.jpg
Encoding and Evaluation
● Objective Function
min / max  f(x)
subject to
  h_i(x) = 0,  ∀ i = 1, ..., H
  g_j(x) ≤ 0,  ∀ j = 1, ..., G

● Search Space
– Discrete / Countable / Uncountable / Continuous
– Examples
Encoding and Evaluation
● Knapsack problem
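The bullet above can be made concrete with a short sketch, assuming the usual 0/1 binary encoding; the weights, values and capacity below are illustrative numbers, not an instance from the slides:

```python
# Hypothetical 0/1 knapsack instance (illustrative numbers only).
weights = [12, 7, 11, 8, 9]
values = [24, 13, 23, 15, 16]
capacity = 26

def evaluate(solution):
    """Binary encoding: solution[i] == 1 iff object i is packed.
    Over-capacity solutions score 0 here; a graded penalty is
    another common choice."""
    total_weight = sum(w for w, bit in zip(weights, solution) if bit)
    total_value = sum(v for v, bit in zip(values, solution) if bit)
    return total_value if total_weight <= capacity else 0

print(evaluate([1, 0, 1, 0, 0]))  # objects 0 and 2: weight 23 <= 26, value 47
```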
Encoding and Evaluation
● Travelling Salesman Problem
Encoding and Evaluation
● Quadratic Multiple Knapsack Problem
Encoding and Evaluation
● Super Mario Bros Controller
Generalization
● Main encoding schemes:

● Purpose of the evaluation function


Optimisation Algorithm
● Exact/complete methods (Dynamic programming, Backtracking,
Branch and Bound,...)

● Approximate approaches
– Heuristic and -aproximation algorithms
– Metaheuristics
● General: they can be applied to many different problems
● “Faster”: they offer a good solution in a reduced amount of time
● Easy to implement, easy to parallelise
● They can address problems with a higher complexity

When to choose them?
Ingredients
● Must
– Encoding and Evaluation
– Sampler
– Stop condition
● Others
– Memory mechanisms
– Restarts
– Variety of sampling operators
– Parameter adaptation
– Collaboration
– Parallel hardware
– ....
Pros and Cons
● Pros
  – General
  – Successful
  – Easy to implement
  – Easy to parallelise
● Cons
  – They are not exact methods
  – They are stochastic
  – Not much theoretical ground
Classification

http://thinkingmachineblog.net/wp-content/uploads/2013/06/2000px-Metaheuristics_classification.svg_.png
Classification
● Constructive MHs
● Trajectory-based MHs
● Population-based MHs
Diversification vs Intensification

http://fab.cba.mit.edu/classes/864.11/people/rachelle.villalon/pset10/rosenbrock.png
http://www.cs.bham.ac.uk/research/projects/ecb/data/150/Fractal_Volcano_F48D3N1x1.jpg

What is the most intensive MH? What is the most explorative MH?
TSP Example
● Other slides
3. Metaheuristics: iteration example (population-based algorithm)
Example: the Travelling Salesman Problem

Example: 17 cities

Order (permutation) representation

(3 5 1 13 6 15 8 2 17 11 14 4 7 9 10 12 16)
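A sketch of how an order representation like the one above is typically evaluated: the tour length is the sum of distances between consecutive cities, closing the loop at the end. The coordinates below are made up (the 17-city instance from the slides is not given here):

```python
import math

# Hypothetical city coordinates; not the 17-city instance from the slides.
cities = [(0, 0), (3, 0), (3, 4), (0, 4)]

def tour_cost(order):
    """Total Euclidean length of the closed tour visiting `cities` in `order`."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

print(tour_cost([0, 1, 2, 3]))  # perimeter of the 3x4 rectangle: 14.0
```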
Example: the Travelling Salesman Problem

17! (≈ 3.5569e14) possible solutions

Optimal solution: cost = 226.64
Travelling Salesman

Iteration: 0, cost: 403.7 → Iteration: 25, cost: 303.86
(Optimal solution: 226.64)

Travelling Salesman

Iteration: 25, cost: 303.86 → Iteration: 50, cost: 293.6
(Optimal solution: 226.64)

Travelling Salesman

Iteration: 50, cost: 293.6 → Iteration: 100, cost: 256.55
(Optimal solution: 226.64)

Travelling Salesman

Iteration: 100, cost: 256.55 → Iteration: 200, cost: 231.4
(Optimal solution: 226.64)

Travelling Salesman

Iteration: 200, cost: 231.4 → Iteration: 250: optimal solution, cost 226.64
Example: the Travelling Salesman Problem

532! possible solutions

Optimal solution cost = 27,686 miles
Random Search and No free lunch theorems

● Is Random Search a good MH?

S <- sample_solution();
do
    S' <- sample_solution();
    if S' is better than S then
        S <- S';
until a stop condition is satisfied;
return S;
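The pseudocode above can be turned into a runnable sketch; `sample_solution`, `better_than` and the draw budget are parameters the caller supplies (the names are assumptions, not from the slides):

```python
import random

def random_search(sample_solution, better_than, max_draws):
    """Random Search: keep the best of a sequence of independent samples."""
    best = sample_solution()
    for _ in range(max_draws):
        candidate = sample_solution()
        if better_than(candidate, best):
            best = candidate
    return best

# Usage sketch: maximise f(x) = -(x - 3)^2 over random reals in [0, 10].
random.seed(0)
f = lambda x: -(x - 3) ** 2
best = random_search(lambda: random.uniform(0, 10),
                     lambda a, b: f(a) > f(b),
                     10_000)
print(round(best, 2))  # close to 3, the maximiser of f
```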
Random Search
● What is the probability of sampling the best solution, among M candidates, after L draws?
  – What is the probability of sampling the best solution for a knapsack problem with N objects?
● How many draws do we need to get the optimal solution with probability α?

  L ≥ log(1 − α) / log(1 − 1/M)
Computing L
 α     M      L        α     M      L
0.9   1000   2302     0.9   3000   6907
0.8   1000   1609     0.8   3000   4828
0.7   1000   1203     0.7   3000   3612
0.6   1000    916     0.6   3000   2748
0.9   2000   4605     0.9   4000   9210
0.8   2000   3219     0.8   4000   6437
0.7   2000   2408     0.7   4000   4816
0.6   2000   1833     0.6   4000   3664
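The table follows from solving 1 − (1 − 1/M)^L ≥ α for L; a small check of that derivation (the table's entries agree with it up to rounding):

```python
import math

def draws_needed(alpha, M):
    """Smallest L such that L uniform draws over M candidates find the
    single best one with probability at least alpha:
    1 - (1 - 1/M)^L >= alpha  =>  L >= log(1 - alpha) / log(1 - 1/M)."""
    return math.ceil(math.log(1 - alpha) / math.log(1 - 1 / M))

print(draws_needed(0.9, 1000))  # 2302, matching the table's first entry
```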
No free lunch theorems

[Diagram: algorithms (A, B, C, ..., RS) on one side, problems (P1, P2, P3, ..., Pi, ..., PN) on the other.]

Averaged over all problems, any two algorithms X and Y perform equally well: R̂(X, P) = R̂(Y, P)

D. Wolpert and W. Macready, "No free lunch theorems for optimization", IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67-82, 1997
No free lunch theorems

[Diagram: a new algorithm outperforms the state of the art on problems P1 and P2; by the NFL theorems, the state of the art must then outperform it on the remaining problems P \ {P1, P2}. To be continued...]
Others
● Parallelization
● Data mining
● Multiobjective problems
● Adaptivity
● Robot controllers
Summary
● Motivation
● Encoding
● Optimisation algorithms
– Exact, Heuristic and Metaheuristic methods
● Diversification vs Intensification
● No free lunch theorems for optimisation
● Others
Some questions
● What is a NP-hard problem?
● What representations do you know?
● What is the role of the objective function?
● What is the search space?
● What is a heuristic method?
● What is a metaheuristic?
● Why use MHs?
● Name two aspects by which MHs can be categorised. Explain them.
● Which is better, diversification or intensification?
● What does the no free lunch theorem for optimisation say?
