SET394 - AI - Lecture 05 - Informed Search and Local Search Algorithms
Artificial Intelligence
Informed Search Algorithms
Associate Prof. Dr. Ayman Elshenawy
Lecture Five
[email protected]
Informed Search
• Uses problem/domain-specific knowledge beyond the
definition of the problem.
• Can find solutions more efficiently than uninformed
(blind) search.
• Depends on an evaluation function f(n) that:
• Estimates how close a node is to the goal.
• Causes nodes with better f(n) values to be explored first.
• Is imprecise, which makes the method a heuristic
(it works well in most cases).
• Is often based on empirical observations.
Informed Search Evaluation Function F(n)
• F(n) = g(n) + h(n)
• g(n) is the exact cost to reach node n from the start node.
• h(n) is a heuristic function:
• h(n): the estimated cost to reach the goal from node n.
• h(n1) < h(n2) means it is probably cheaper to reach the goal from n1.
• h(goal) = 0.
• Evaluation function f(n):
• f(n) = g(n): Uniform-cost search. ➔ h(n) = 0
• f(n) = h(n): Greedy best-first search. ➔ g(n) = 0
• f(n) = g(n) + h(n): A* search.
• Example: the Straight-Line Distance (SLD) on the map between two points.
[Figure: Start --g(n)--> n --h(n)--> Goal, with f(n) spanning the whole path.]
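The three special cases of f(n) above can be sketched as follows (the function names are illustrative, not standard library calls):

```python
# Three evaluation functions from the slide, as plain functions of
# g(n) (exact path cost so far) and h(n) (heuristic estimate to goal).

def f_uniform_cost(g, h):
    """Uniform-cost search: ignore the heuristic, h(n) = 0."""
    return g

def f_greedy(g, h):
    """Greedy best-first search: ignore the path cost, g(n) = 0."""
    return h

def f_a_star(g, h):
    """A* search: combine exact cost so far with the estimate to go."""
    return g + h

# Example: a node reached with path cost g(n) = 140 and straight-line
# estimate h(n) = 253 gets f(n) = 393 under A*.
print(f_a_star(140, 253))  # 393
```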
Informed Search Algorithms
• Greedy best-first search
• A* search
• Weighted A*
• Beam Search
• Bounded-cost search
• Unbounded-cost search
• Simple memory-bounded A* (SMA*) search
Greedy best-first search (GBFS)
• Is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm.
• A node is selected for expansion based on the evaluation function f(n).
• It uses f(n) = h(n) and ignores the path cost g(n) entirely (g(n) = 0).
• Is identical to UNIFORM-COST SEARCH except that h is used instead
of g.
Greedy best-first search GBFS
Find a route from Arad to Bucharest.
hSLD(Arad) = 366
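A minimal sketch of GBFS on a fragment of the Romania map (adjacency and hSLD values as in the classic AIMA example; treat the numbers as illustrative):

```python
import heapq

# One-way fragment of the Romania road map (AIMA example).
graph = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Bucharest'],
    'Rimnicu Vilcea': ['Pitesti'],
    'Pitesti': ['Bucharest'],
    'Zerind': [], 'Timisoara': [], 'Oradea': [], 'Bucharest': [],
}
# Straight-line distances to Bucharest (hSLD).
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
         'Bucharest': 0}

def greedy_best_first(start, goal):
    # Frontier ordered by h(n) only -- path cost g(n) is ignored.
    frontier = [(h_sld[start], start, [start])]
    reached = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for succ in graph[node]:
            if succ not in reached:
                reached.add(succ)
                heapq.heappush(frontier, (h_sld[succ], succ, path + [succ]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that GBFS finds the Arad–Sibiu–Fagaras–Bucharest route, which is not the cheapest one: it follows the smallest h(n) at each step.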
A* search Optimality conditions
• Optimal: the A* algorithm is optimal if its heuristic satisfies the following two conditions:
• Admissible: h(n) must be an admissible heuristic for A* tree search.
• For every node n, h(n) ≤ h*(n): h(n) never overestimates the true cost h*(n). Hence, f(n) never
overestimates the true cost of a solution through node n.
• An admissible heuristic is optimistic in nature.
• Consistency: a heuristic h(n) is consistent if, for every node n and every successor n′ of
n generated by an action a, we have: h(n) ≤ c(n, a, n′) + h(n′).
• Complete: if the branching factor is finite and every action cost exceeds some fixed ε > 0.
• Time and space complexity: not straightforward!
• Number of nodes explored depends on the difference between h and h* (true cost).
• If h = h*, A* expands only the nodes on the optimal solution path(s).
• If h = 0, A* consumes as much (time/space) resources as UCS.
An admissible heuristic is one that never overestimates the cost to reach a goal.
A* search
• Finally:
• A* search is complete, cost-optimal, and optimally
efficient among all such algorithms.
• Unfortunately, this does not mean that A* is the answer
to all our searching needs.
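A compact A* sketch on the same Romania fragment (AIMA step costs and hSLD values; the numbers are illustrative):

```python
import heapq

# One-way fragment of the Romania map with step costs (AIMA example).
edges = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Bucharest': 211},
    'Rimnicu Vilcea': {'Pitesti': 97},
    'Pitesti': {'Bucharest': 101},
    'Zerind': {}, 'Timisoara': {}, 'Oradea': {}, 'Bucharest': {},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
         'Bucharest': 0}

def a_star(start, goal):
    # Frontier ordered by f(n) = g(n) + h(n).
    frontier = [(h_sld[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in edges[node].items():
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h_sld[succ], g2, succ, path + [succ]))
    return None, float('inf')

path, cost = a_star('Arad', 'Bucharest')
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

Unlike GBFS, A* finds the cost-optimal route through Rimnicu Vilcea and Pitesti (cost 418) rather than the shorter-looking but more expensive Fagaras route (cost 450).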
Inadmissible heuristics and weighted A*
• A* is efficient, but it still expands a lot of nodes.
• We can explore fewer nodes (less time and space)
if we are willing to accept suboptimal solutions.
• We do this by allowing A* search to use an
inadmissible heuristic, one that may overestimate the true cost.
• Example: a detour index of 1.3 means that if two cities are
10 miles apart in straight-line distance, a good
estimate of the best path between them is 13 miles
(detour indices typically range between 1.2 and 1.6).
• Weighted A* search weights the heuristic
value more heavily:
f(n) = g(n) + W·h(n), for some W > 1.
• In general, if the optimal solution costs C*, a
weighted A* search will find a solution that costs
somewhere between C* and W·C*.
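The C* to W·C* bound can be seen on the same Romania fragment: with W = 2 the inflated heuristic steers the search to the Fagaras route, which is suboptimal but within the bound. A sketch (same illustrative costs as before):

```python
import heapq

# Same one-way Romania fragment and hSLD values as before.
edges = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Bucharest': 211},
    'Rimnicu Vilcea': {'Pitesti': 97},
    'Pitesti': {'Bucharest': 101},
    'Zerind': {}, 'Timisoara': {}, 'Oradea': {}, 'Bucharest': {},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
         'Bucharest': 0}

def weighted_a_star(start, goal, W=1.0):
    # Frontier ordered by f(n) = g(n) + W*h(n); W = 1 is plain A*.
    frontier = [(W * h_sld[start], 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for succ, cost in edges[node].items():
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + W * h_sld[succ], g2, succ))
    return float('inf')

c_star = weighted_a_star('Arad', 'Bucharest', W=1.0)  # optimal cost C*
c_w = weighted_a_star('Arad', 'Bucharest', W=2.0)     # at most W * C*
print(c_star, c_w)  # 418 450
```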
Inadmissible heuristics
Simple Memory Bounded A*
• The main issue with A* is its use of memory.
• SMA* behaves like A*, but when memory is full it deletes the worst node (the one with the largest f(n)).
• If there is a tie (equal f-values), it deletes the oldest node first.
• SMA* finds the optimal reachable solution given the memory
constraint.
• Time can still be exponential.
• Beam search limits the size of the frontier:
• The easiest approach is to keep only the k nodes with the best f-scores,
discarding any other expanded nodes.
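The keep-the-k-best idea can be sketched on a toy state space (states 0–9, each with successors n+1 and n+2, heuristic h(n) = 9 − n toward the goal state 9; all names here are illustrative):

```python
# Beam search sketch: at each level, only the k states with the best
# (lowest) heuristic score survive; everything else is discarded.
def beam_search(successors, h, start, goal, k=2):
    beam = [start]
    visited = {start}
    while beam:
        if goal in beam:
            return True
        candidates = []
        for node in beam:
            for succ in successors(node):
                if succ not in visited:
                    visited.add(succ)
                    candidates.append(succ)
        beam = sorted(candidates, key=h)[:k]  # prune frontier to k best
    return False

succ = lambda n: [m for m in (n + 1, n + 2) if m <= 9]
found = beam_search(succ, lambda n: 9 - n, start=0, goal=9, k=2)
print(found)  # True
```

With k = 1 this degenerates to pure greedy search; larger k trades memory for a better chance of not discarding the state that leads to the goal.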
Simple Memory-bounded A* (SMA*)
(Example with 3-node memory)
Search space and progress of SMA*. Each node is labeled with its current f-cost (f = g + h); values in parentheses show the value of the best forgotten descendant.
[Figure omitted: SMA* trace with 3-node memory.]
The algorithm can tell you whether the best solution found within the memory constraint is optimal or not.
Local search algorithms
• In previous search we wanted to find paths through the search
space, such as a path from Arad to Bucharest.
• But sometimes we care only about the final state, not the path to
get there.
• For example,
• In the 8-queens problem, we care only about finding a valid final
configuration of 8 queens (because if you know the configuration, it is
trivial to reconstruct the steps that created it).
• This is also true for many important applications, such as:
• Integrated-circuit design, factory floor layout, job-shop scheduling,
automatic programming, and telecommunications network optimization.
Local search algorithms
• Operate by searching from a start state to neighboring states:
• Not keeping track of the paths
• Not Keeping the set of states that have been reached.
• That means they are not systematic—they might never explore a portion
of the search space where a solution resides.
• Advantages:
• Use very little memory.
• Can find reasonable solutions in large or infinite state spaces.
• Can solve optimization problems (find the best state according to an objective
function).
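A hill-climbing sketch for the 8-queens problem mentioned above: the state is one queen per column, and only the final configuration matters. Random restarts (an assumption here, not stated on the slide) are used to escape local optima.

```python
import random

def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8, max_restarts=100):
    for _ in range(max_restarts):          # random-restart hill climbing
        state = [random.randrange(n) for _ in range(n)]
        while True:
            current = conflicts(state)
            if current == 0:
                return state               # goal: no attacking pairs
            # Steepest ascent: best single-queen move among all neighbors.
            best, best_val = None, current
            for col in range(n):
                for row in range(n):
                    if row != state[col]:
                        neighbor = state[:]
                        neighbor[col] = row
                        v = conflicts(neighbor)
                        if v < best_val:
                            best, best_val = neighbor, v
            if best is None:               # local minimum reached: restart
                break
            state = best
    return None

solution = hill_climb()
print(solution)  # a conflict-free 8-queens placement (None only if every restart fails)
```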
Local search algorithms: genetic algorithms
• Crossover: the resulting individuals combine the parts of the two parents on either side of the crossover point.
• Fitness = number of non-attacking pairs of queens.
• Probability of being in the next generation = fitness_i / Σ_i fitness_i.
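The fitness function and fitness-proportionate selection can be sketched for 8-queens as follows (the helper names are illustrative; for n = 8 the maximum fitness is 28 non-attacking pairs):

```python
import random

def fitness(state):
    """Number of non-attacking queen pairs (28 is a perfect 8-queens board)."""
    n = len(state)
    attacking = sum(1 for i in range(n) for j in range(i + 1, n)
                    if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacking

def select(population):
    """Roulette-wheel selection: P(individual i) = fitness_i / sum(fitness)."""
    weights = [fitness(s) for s in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(p1, p2):
    """Single-point crossover of two parent boards."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

pop = [[random.randrange(8) for _ in range(8)] for _ in range(4)]
child = crossover(select(pop), select(pop))
print(child, fitness(child))
```

Fitter individuals are proportionally more likely to be picked as parents, so good partial configurations tend to survive the crossover step.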