Solving Problems by Searching
Chapter 3
1
Why Search?
To achieve goals or to maximize our utility,
we need to predict what the results of our
actions will be.
2
Search Overview
Watch this video on search algorithms:
http://videolectures.net/aaai2010_thayer_bis/
3
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities or choose next city
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
4
Example: Romania
5
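As a concrete (non-slide) illustration, the Romania formulation can be written down as a weighted graph in Python; the dictionary below uses the standard AIMA road distances and shows only a subset of the cities:

```python
# Illustrative sketch: a subset of the Romania road map as a weighted graph.
# Keys are states (cities); values map neighbouring cities to step costs (km).
ROMANIA = {
    "Arad":           {"Zerind": 75, "Timisoara": 118, "Sibiu": 140},
    "Zerind":         {"Arad": 75},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

def path_cost(path):
    """Sum of the step costs along a sequence of cities."""
    return sum(ROMANIA[a][b] for a, b in zip(path, path[1:]))

# The solution named on the slide costs 140 + 99 + 211 = 450 km.
print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))
```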
Task Environment
Static / Dynamic
Previous problem was static: the environment does not change while we deliberate.
Observable / Partially Observable / Unobservable
Previous problem was fully observable: the agent knows its state at all times.
Deterministic / Stochastic
Previous problem was deterministic: no new percepts are necessary; we can predict the future perfectly given our actions.
Discrete / Continuous
Previous problem was discrete: we can enumerate all possibilities.
Single Agent
No other agents interact with our cost function.
Sequential
Decisions depend on past decisions.
6
Example: vacuum world
Observable, start in #5.
Solution?
7
Example: vacuum world
Observable, start in #5.
Solution? [Right, Suck]
Unobservable, start in any of
{1,2,3,4,5,6,7,8}.
Solution?
8
Example: vacuum world
Unobservable, start in any of
{1,2,3,4,5,6,7,8}.
Solution?
[Right, Suck, Left, Suck]
9
Problem Formulation
A problem is defined by four items: the initial state, the actions (successor function), the goal test, and the path cost.
The abstraction must be valid: for guaranteed realizability, any real state “in Arad” must get to some real state “in Zerind”.
(Abstract) solution = a set of real paths that are solutions in the real world.
Try formulating these yourselves (a sketch follows below):
states?
initial state?
actions?
goal test?
path cost?
16
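As a rough sketch of what those items look like in code (the interface and names below are my own, not the textbook's):

```python
# Hypothetical interface: one method per component of the problem definition.
class SearchProblem:
    def initial_state(self):
        """The state the agent starts in (e.g. 'in Arad')."""
        raise NotImplementedError

    def actions(self, state):
        """The actions applicable in `state`; together with result()
        this plays the role of the successor function."""
        raise NotImplementedError

    def result(self, state, action):
        """The state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """True if `state` satisfies the goal (e.g. 'in Bucharest')."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; a path cost is the sum of its step costs."""
        return 1
```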
Example: The 8-puzzle
18
Tree search example
19
Tree search example
20
Tree search example
21
Repeated states
Failure to detect repeated states can turn a
linear problem into an exponential one!
22
Solutions to Repeated States
[Figure: a small state space with states S, B, C (left) and the corresponding search tree (right), in which the same states S, B, C reappear on many branches]
Graph search avoids repeated states, but is memory inefficient (it must store every visited state)
23
Graph Search vs Tree Search
26
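A minimal sketch of graph search in Python, written against the hypothetical SearchProblem interface sketched earlier; the only change with respect to tree search is the explored set, which filters out repeated states:

```python
from collections import deque

def graph_search(problem, fifo=True):
    """Tree search plus an explored set, so each state is expanded at most once.
    fifo=True pops the oldest node (breadth-first order);
    fifo=False pops the newest node (depth-first order)."""
    frontier = deque([(problem.initial_state(), [])])   # (state, path so far)
    explored = set()
    while frontier:
        state, path = frontier.popleft() if fifo else frontier.pop()
        if state in explored:
            continue                                    # repeated state: skip it
        explored.add(state)
        if problem.goal_test(state):
            return path + [state]
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), path + [state]))
    return None                                         # no solution exists
```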
Uninformed Search
29
Complexity Recap (app.A)
• We often want to characterize algorithms independently of their
implementation.
• A better characterization is:
“This algorithm takes O(n log n) time to run and O(n) space to store”,
because this statement abstracts away from irrelevant implementation details.
31
Breadth-first search
Expand shallowest unexpanded node
Fringe: nodes waiting in a queue to be explored
Implementation:
fringe is a first-in-first-out (FIFO) queue, i.e.,
new successors go at end of the queue.
Is A a goal state?
32
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe = [B,C]
Is B a goal state?
33
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe=[C,D,E]
Is C a goal state?
34
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe=[D,E,F,G]
Is D a goal state?
35
Example
BFS
36
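A minimal Python sketch of this FIFO-fringe loop on the example tree used in these slides (A expands to B and C, B to D and E, and so on); the first fringes it prints match the ones shown above. This is illustrative code, not the textbook's implementation:

```python
from collections import deque

# The example tree from the slides: each node mapped to its successors.
SUCCESSORS = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
              "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"],
              "G": [], "H": [], "I": [], "J": [], "K": [], "L": [], "M": []}

def breadth_first_search(start, is_goal):
    fringe = deque([start])                 # FIFO queue
    while fringe:
        node = fringe.popleft()             # expand shallowest unexpanded node
        if is_goal(node):
            return node
        fringe.extend(SUCCESSORS[node])     # new successors go at the end
        print(list(fringe))                 # ['B','C'], ['C','D','E'], ['D','E','F','G'], ...
    return None

breadth_first_search("A", is_goal=lambda n: n == "G")
```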
Properties of breadth-first search
Complete? Yes, it always reaches the goal (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
(this is the number of nodes we generate)
Space? O(b^(d+1)) (keeps every node in memory,
either in the fringe or on a path to the fringe).
Optimal? Yes, if the path cost is a nondecreasing function of depth
(e.g. step-cost = 1).
Note: in the newer edition space & time complexity are O(b^d) because we
apply the goal test when a node is generated, postponing its expansion.
37
Uniform-cost search
Breadth-first search is only optimal if the step cost is nondecreasing with
depth (e.g. constant). Can we guarantee optimality for any
(positive) step cost?
Uniform-cost Search: Expand the node with the
smallest path cost g(n).
Proof of completeness: if every step cost is at least ε > 0, the path cost g of
expanded nodes grows without bound, so any goal with finite path cost is
eventually expanded.
38
Uniform-cost search
39
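A minimal sketch of uniform-cost search with a priority queue (Python's heapq), assuming the graph is given as a dict of step costs like the Romania map sketched earlier; illustrative code only:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping node -> {neighbour: step cost}.
    Always expands the frontier node with the smallest path cost g(n)."""
    frontier = [(0, start, [start])]              # min-heap of (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                        # goal test on expansion => optimal
        if g > best_g.get(node, float("inf")):
            continue                              # stale queue entry
        for nbr, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None
```

On the ROMANIA dict from the earlier sketch, uniform_cost_search(ROMANIA, "Arad", "Bucharest") returns cost 418 via Sibiu, Rimnicu Vilcea and Pitesti, beating the 450 km route through Fagaras.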
[Figure: a weighted graph with start node S, goal node G, and intermediate nodes A, B, C, D, E, F; a step-cost labels each edge]
The graph above shows the step-costs for different paths going from the start (S) to
the goal (G).
Use uniform cost search to find the optimal path to the goal.
Exercise
40
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = Last In First Out (LIFO) queue, i.e., put
successors at front
Is A a goal state?
41
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[B,C]
Is B a goal state?
42
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[D,E,C]
Is D = goal state?
43
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[H,I,E,C]
Is H = goal state?
44
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[I,E,C]
Is I = goal state?
45
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[E,C]
Is E = goal state?
46
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[J,K,C]
Is J = goal state?
47
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[K,C]
Is K = goal state?
48
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[C]
Is C = goal state?
49
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[F,G]
Is F = goal state?
50
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[L,M,G]
Is L = goal state?
51
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[M,G]
Is M = goal state?
52
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops
(can be made complete in finite spaces by avoiding repeated states along the path).
Time? O(b^m), where m is the maximum depth; terrible if m is much larger than d.
Space? O(bm), i.e. linear in the maximum depth.
Optimal? No.
53
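A minimal Python sketch of this LIFO loop on the same example tree as in the BFS sketch; the fringes it prints match the queues shown in the trace above. Illustrative code only:

```python
# Same example tree as in the BFS sketch: node -> successors.
SUCCESSORS = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
              "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"],
              "G": [], "H": [], "I": [], "J": [], "K": [], "L": [], "M": []}

def depth_first_search(start, is_goal):
    fringe = [start]                          # LIFO: pop from the front ...
    while fringe:
        node = fringe.pop(0)                  # expand deepest unexpanded node
        if is_goal(node):
            return node
        fringe = SUCCESSORS[node] + fringe    # ... and put successors at the front
        print(fringe)                         # ['B','C'], ['D','E','C'], ['H','I','E','C'], ...
    return None

depth_first_search("A", is_goal=lambda n: n == "M")
```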
Iterative deepening search
54
Iterative deepening search L=0
55
Iterative deepening search L=1
56
Iterative deepening search L=2
57
Iterative Deepening Search L=3
58
Iterative deepening search
Number of nodes generated in a depth-limited search to
depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d = O(b^d)
Note: BFS can also be adapted to O(b^d) by waiting to expand a node until all
nodes at depth d have been checked for the goal.
59
Properties of iterative deepening search
Complete? Yes
Time? O(b^d)
Space? O(bd) (linear in d)
Optimal? Yes, if step cost = 1 or the path cost is a
nondecreasing function of depth.
60
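A minimal sketch of iterative deepening: a recursive depth-limited DFS wrapped in a loop over increasing limits L = 0, 1, 2, …; the code and the `max_depth` safety cap are my own illustrative choices:

```python
def depth_limited_search(successors, node, is_goal, limit):
    """Depth-first search that never descends more than `limit` levels below `node`."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return None                                  # cutoff reached
    for child in successors[node]:
        result = depth_limited_search(successors, child, is_goal, limit - 1)
        if result is not None:
            return [node] + result                   # path from `node` to the goal
    return None

def iterative_deepening_search(successors, start, is_goal, max_depth=50):
    """Repeat depth-limited search with limits L = 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(successors, start, is_goal, limit)
        if result is not None:
            return result
    return None
```

On the example tree from the DFS sketch, iterative_deepening_search(SUCCESSORS, "A", lambda n: n == "M") re-explores depths 0–2 and then finds M at limit L = 3.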
Bidirectional Search
Idea
simultaneously search forward from S and backwards
from G
stop when both “meet in the middle”
need to keep track of the intersection of 2 open sets of
nodes
What does searching backwards from G mean?
need a way to specify the predecessors of G
this can be difficult,
e.g., predecessors of checkmate in chess?
which one to take if there are multiple goal states?
where to start if there is only a goal test and no explicit list?
61
Bi-Directional Search
Complexity: time and space complexity are both O(b^(d/2)).
62
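A minimal sketch of bidirectional breadth-first search, assuming an undirected graph given as an adjacency dict (so predecessors equal successors) and a single explicit goal state; names and structure are my own:

```python
from collections import deque

def bidirectional_search(neighbors, start, goal):
    """Alternate one BFS layer from `start` and one from `goal`,
    stopping as soon as the two frontiers meet."""
    if start == goal:
        return [start]
    parents_f = {start: None}          # node -> predecessor on the forward side
    parents_b = {goal: None}           # node -> predecessor on the backward side
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand_layer(frontier, parents, other_parents):
        """Expand one full BFS layer; return a meeting node if the frontiers touch."""
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in neighbors[node]:
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:
                        return nbr     # the two searches meet here
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand_layer(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand_layer(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Stitch the forward half (start..meet) to the backward half (meet..goal).
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None                        # start and goal are not connected
```

Each side only needs to reach depth about d/2 before the frontiers meet, which is where the O(b^(d/2)) bound above comes from.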
Graph Search vs Tree Search
[Figure: a small state space with states S, B, C (left) and the corresponding search tree (right), in which the same states S, B, C reappear on many branches]
Graph search avoids repeated states, but is memory inefficient (it must store every visited state)
63
Summary of algorithms
64
Summary
Problem formulation usually requires abstracting away real-
world details to define a state space that can feasibly be
explored
http://www.cs.rmit.edu.au/AI-Search/Product/
http://aima.cs.berkeley.edu/demos.html (for more demos)
65
Exercise
2. Consider the graph below:
[Figure: an undirected graph with nodes A, B, C, D, E, F]
a) [2pt] Draw the first 3 levels of the full search tree with root node given by A.
Use graph search, i.e. avoid repeated states.
b) [2pt] Give an order in which we visit nodes if we search the tree breadth first.
c) [2pt] Express time and space complexity for general breadth-first search in terms
of the branching factor, b, and the depth of the goal state, d.
d) [2pt] If the step-cost for a search problem is not constant, is breadth-first search
always optimal? Is BFS graph search optimal?
e) [2pt] Now assume a constant step-cost.
Is BFS tree search optimal? Is BFS graph search optimal?
66
Next time
Questions?
67