Chapter Three
• Informed search
– Greedy search
– A*-search
– Iterative improvement
– Constraint satisfaction
– etc.
Breadth-first search
•Expand the shallowest unexpanded node,
–i.e. expand all nodes on a given level of the search tree before moving to the next level
•Implementation: use a queue (FIFO) data structure to store the list:
–Expansion: put successors at the end of the queue
–Pop nodes from the front of the queue
•Properties:
– Takes space: keeps every node in memory
– Complete, and optimal when all step costs are equal: guaranteed to find a solution if one exists
Algorithm for Breadth-first search
function BFS (problem){
  open = (C_0);                  //put initial state C_0 in the list
  closed = {};                   //maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    succ = Successors (f);
    l = not-in-set (succ, closed);
    open = merge (rest(open), l);  //append successors to the end of the list
  }
  return ('fail')
}
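As a sketch, the pseudocode above translates into runnable Python; the example graph at the bottom is an illustrative assumption, not taken from the slides:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand the shallowest node first (FIFO queue).
    Returns a path from start to goal, or None if no path exists."""
    open_list = deque([[start]])   # queue of paths; initial state C_0
    closed = set()                 # nodes examined earlier
    while open_list:
        path = open_list.popleft()           # pop from the front of the queue
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):     # successors not seen before
            if succ not in closed:
                open_list.append(path + [succ])  # put at the end of the queue
    return None

# Hypothetical example graph
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
print(bfs(graph, 'S', 'G'))  # -> ['S', 'A', 'G']
```

Because the frontier is expanded level by level, the first path that reaches the goal is one with the fewest steps.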
Exercise
• Apply BFS to find an optimal path from the start node to the goal node.
Uniform-cost search
•The goal of this technique is to find the shortest path to the goal in terms of cost.
–It modifies BFS by always expanding the least-cost unexpanded node
•Implementation: nodes in the list keep track of the total path length from the start to that node
–List kept in a priority queue ordered by path cost
[Figure: example graph (start S; nodes A, B, C; goal G) with edge costs, showing the successive uniform-cost expansions of the frontier.]
•Properties:
– This strategy finds the cheapest solution, provided the cost of a path never decreases as we go along the path:
  g(successor(n)) ≥ g(n), for every node n
– Takes space, since it keeps every node in memory
Algorithm for Uniform-cost search
function uniform_cost (problem){
  open = (C_0);                   //put initial state C_0 in the list
  g(C_0) = 0;
  closed = {};                    //maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    succ = Successors (f);
    l = not-in-set (succ, closed);
    for each l_i in l: g(l_i) = g(f) + c(f, l_i);
    open = merge (rest(open), l);  //keep the open list sorted in ascending order by path cost g
  }
  return ('fail')
}
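A minimal runnable sketch of uniform-cost search using a priority queue; the example graph (with a direct S–G edge that is more expensive than the S–B–G path) is an illustrative assumption:

```python
import heapq

def uniform_cost(graph, start, goal):
    """Uniform-cost search: always expand the least-cost unexpanded node.
    graph maps node -> list of (successor, edge_cost) pairs.
    Returns (path_cost, path), or None if no path exists."""
    open_list = [(0, [start])]   # priority queue ordered by path cost g
    closed = set()
    while open_list:
        g, path = heapq.heappop(open_list)   # cheapest frontier node
        node = path[-1]
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (g + cost, path + [succ]))
    return None

# Hypothetical graph: direct S->G costs 15, but S->B->G costs only 5+5=10
graph = {'S': [('A', 1), ('B', 5), ('G', 15)],
         'A': [('G', 11)], 'B': [('G', 5)], 'G': []}
print(uniform_cost(graph, 'S', 'G'))  # -> (10, ['S', 'B', 'G'])
```

The goal test happens when a node is *popped*, not when it is generated; that is what makes the returned path the cheapest one.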
Depth-first search
•Expand one of the nodes at the deepest level of the tree.
–Only when the search hits a non-goal dead end does it go back and expand nodes at shallower levels
•Implementation: treat the list as a stack (LIFO)
–Expansion: push successors onto the top of the stack
–Pop nodes from the top of the stack
•Properties
–Incomplete and not optimal: fails in infinite-depth spaces and spaces with loops
–Takes less space (linear): only needs to remember the nodes along the current path, up to the depth expanded
Algorithm for Depth-first search
function DFS (problem){
  open = (C_0);                  //put initial state C_0 in the list
  closed = {};                   //maintain list of nodes examined earlier
  while (not (empty (open))) {
    f = remove_first(open);
    if IsGoal (f) then return (f);
    closed = append (closed, f);
    succ = Successors (f);
    l = not-in-set (succ, closed);
    open = merge (l, rest(open));  //prepend successors to the front of the list
  }
  return ('fail')
}
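The same skeleton as BFS works for DFS once the queue becomes a stack; a minimal sketch, with an illustrative example graph:

```python
def dfs(graph, start, goal):
    """Depth-first search: expand the deepest node first (LIFO stack)."""
    open_list = [[start]]   # treat the list as a stack of paths
    closed = set()
    while open_list:
        path = open_list.pop()        # pop from the top of the stack
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):   # push successors onto the stack
            if succ not in closed:
                open_list.append(path + [succ])
    return None

# Hypothetical example graph
graph = {'S': ['A', 'B'], 'A': [], 'B': ['G'], 'G': []}
print(dfs(graph, 'S', 'G'))  # -> ['S', 'B', 'G']
```

Note that successors pushed last are explored first, so the search dives down one branch before backing up.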
Iterative Deepening Search (IDS)
•IDS solves the issue of choosing the best depth limit by trying all possible depth limits:
–Perform depth-first search to a bounded depth d, starting at d = 1 and increasing it by 1 at each iteration.
•This search combines the benefits of DFS and BFS
–DFS is efficient in space, but has no path-length guarantee
–BFS finds a min-step path to the goal, but requires a lot of memory
–IDS performs a sequence of DFS searches with increasing depth cutoff until the goal is found
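The sequence of depth-bounded DFS searches can be sketched as follows (the example graph is an illustrative assumption):

```python
def depth_limited(graph, node, goal, limit, path):
    """Depth-first search bounded at depth `limit`."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        if succ not in path:            # avoid loops along the current path
            found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: repeat DFS with depth cutoff d = 1, 2, 3, ..."""
    for d in range(1, max_depth + 1):
        found = depth_limited(graph, start, goal, d, [start])
        if found:
            return found
    return None

# Hypothetical example graph
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
print(ids(graph, 'S', 'G'))  # -> ['S', 'A', 'G']
```

Shallow levels are re-expanded on every iteration, but since most nodes of a tree sit at the deepest level, the repeated work is a small constant-factor overhead.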
Bidirectional Search
• Advantages:
– Only needs to search to half the solution depth
– It can enormously reduce time complexity, but is not always applicable
• Difficulties
– Do you really know the goal state? Is it unique?
– Operators may not be reversible
– Memory requirements may be significant: must record all paths from both frontiers to check where they meet
• Memory intensive
[Figure: example graph with start S; nodes A, B, C, D, E; goal G; and edge costs.]
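A minimal sketch of bidirectional breadth-first search; the reversed-edge map `rgraph` makes the "operators may not be reversible" difficulty explicit, and the example graph is an illustrative assumption:

```python
from collections import deque

def bidirectional(graph, rgraph, start, goal):
    """Search forward from start and backward from goal; stop when the
    two frontiers meet. rgraph must give the reversed edges, i.e. the
    operators have to be reversible. Returns the joined path, or None."""
    if start == goal:
        return [start]
    fwd = {start: [start]}          # node -> path from start
    bwd = {goal: [goal]}            # node -> path from goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        node = fq.popleft()         # one step of the forward search
        for succ in graph.get(node, []):
            if succ not in fwd:
                fwd[succ] = fwd[node] + [succ]
                if succ in bwd:     # frontiers meet here
                    return fwd[succ] + bwd[succ][-2::-1]
                fq.append(succ)
        node = bq.popleft()         # one step of the backward search
        for pred in rgraph.get(node, []):
            if pred not in bwd:
                bwd[pred] = bwd[node] + [pred]
                if pred in fwd:
                    return fwd[pred] + bwd[pred][-2::-1]
                bq.append(pred)
    return None

# Hypothetical example graph and its reversal
graph = {'S': ['A'], 'A': ['G'], 'G': []}
rgraph = {'G': ['A'], 'A': ['S'], 'S': []}
print(bidirectional(graph, rgraph, 'S', 'G'))  # -> ['S', 'A', 'G']
```

Both `fwd` and `bwd` must be kept in memory so the meeting point can be detected, which is exactly the memory cost the slide warns about.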
Informed search
o Search efficiency would improve greatly if there were a way to order the choices so that the most promising nodes are explored first.
  This requires domain knowledge of the problem (i.e. a heuristic) to undertake a focused search
o An informed search strategy uses problem-specific knowledge
  It can find solutions more efficiently than an uninformed strategy
o The knowledge to make this determination is provided by an evaluation function
  that returns a number describing the desirability of expanding the node
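As a minimal sketch of how an evaluation function drives the search, here is greedy best-first search, which orders the frontier purely by a heuristic h(n); the graph and heuristic values are illustrative assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Best-first search ordered purely by the heuristic h(n):
    always expand the node that *looks* closest to the goal."""
    open_list = [(h[start], [start])]   # priority queue ordered by h
    closed = set()
    while open_list:
        _, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], path + [succ]))
    return None

# Hypothetical graph and heuristic values
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(greedy_best_first(graph, 'h' in dir() and h or h, 'S', 'G'))
```

Because it ignores the cost already paid, greedy search is fast but not optimal: a node that looks close to the goal may sit at the end of an expensive path.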
Informed search cont..
Greedy search cont..
• g(n): the path cost from the start node to node n
• h(n): the estimated cost of the cheapest path from n to the goal
– Hence f(n) = g(n) + h(n) = estimated cost of the cheapest solution through n
• Since A* search finds the cheapest solution, a reasonable thing to try first is the node with the lowest value of f(n).
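A minimal runnable sketch of A*, ordering the frontier by f(n) = g(n) + h(n); the example graph and heuristic values are illustrative assumptions:

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).
    graph maps node -> list of (successor, edge_cost) pairs."""
    open_list = [(h[start], 0, [start])]   # entries are (f, g, path)
    closed = set()
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                g2 = g + cost                      # cost so far
                heapq.heappush(open_list, (g2 + h[succ], g2, path + [succ]))
    return None

# Hypothetical graph; h never overestimates the true cost to G (admissible)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 5, 'A': 5, 'B': 1, 'G': 0}
print(astar(graph, h, 'S', 'G'))  # -> (5, ['S', 'B', 'G'])
```

With g alone this is uniform-cost search; with h alone it is greedy search; combining them keeps the optimality of the former and much of the speed of the latter.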
Admissible heuristics
– The restriction is to choose an h(n) function that never overestimates the cost to reach the goal.
– Such an h is called an admissible heuristic.
– Admissible heuristics are by nature optimistic, because they assume the cost of solving the problem is less than it actually is.
– If h is admissible, f(n) never overestimates the actual cost of the best solution through n.
– Best-first search using f(n) as the evaluation function and an admissible h function is known as A* search.
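The admissibility condition can be written as a one-line check; the cost tables below (h* denoting the true cheapest cost-to-goal) are illustrative assumptions:

```python
def is_admissible(h, h_star):
    """h is admissible iff h(n) <= h*(n), the true cheapest cost from n
    to the goal, for every node n."""
    return all(h[n] <= h_star[n] for n in h)

# Hypothetical true costs-to-goal and two candidate heuristics
h_star = {'S': 6, 'A': 5, 'G': 0}
print(is_admissible({'S': 4, 'A': 3, 'G': 0}, h_star))  # -> True
print(is_admissible({'S': 7, 'A': 3, 'G': 0}, h_star))  # -> False (overestimates at S)
```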
Conditions for optimality: Admissibility and consistency