
Artificial Intelligence

Solving Problems by searching


Uninformed/Blind search strategies
2
Outline: Problem solving by searching

• Introduction to Problem Solving

• Uninformed search (Blind Search)


• Problem formulation
• Search strategies: depth-first, breadth-first, uniform-cost search

• Informed search (next lecture)


• Search strategies: best-first, A*
• Heuristic functions

3
Example: Measuring problem!

[Figure: three buckets a, b, c with capacities 3l, 5l, and 9l]

Problem: Using these three buckets, measure 7 liters of water.

4
Example: Measuring problem!

• (one possible) Solution:

[Figure: buckets a (3l), b (5l), c (9l)]

  a  b  c
  0  0  0   start
  3  0  0
  0  0  3
  3  0  3
  0  0  6
  3  0  6
  0  3  6
  3  3  6
  1  5  6
  0  5  7   goal
5
Example: Measuring problem!

• Another Solution:

[Figure: buckets a (3l), b (5l), c (9l)]

  a  b  c
  0  0  0   start
  0  5  0
  3  2  0
  3  0  2
  3  5  2
  3  0  7   goal

19
Which solution do we prefer?

• Solution 1:            • Solution 2:

  a  b  c                  a  b  c
  0  0  0   start          0  0  0   start
  3  0  0                  0  5  0
  0  0  3                  3  2  0
  3  0  3                  3  0  2
  0  0  6                  3  5  2
  3  0  6                  3  0  7   goal
  0  3  6
  3  3  6
  1  5  6
  0  5  7   goal

20
Other Example:

21
Rational Decisions

• We’ll use the term rational in a very specific, technical way:


• Rational: maximally achieving pre-defined goals
• Goals are expressed in terms of the utility of outcomes
• World is uncertain, so we’ll use expected utility
• Being rational means acting to maximize your expected
utility
Design of Rational Agents

• An agent is an entity that perceives and acts.
• A rational agent selects actions that
maximize its (expected) utility.
• Characteristics of the percepts,
environment, and action space dictate
techniques for selecting rational actions
• This course:
• General AI techniques for a variety of problem types
• Learning to recognize when and how a new problem can
be solved with an existing technique

[Figure: agent-environment loop in which the agent receives percepts from the
environment through its sensors and acts on the environment through its actuators]
Problem-solving: (Iteratively) Selecting the right agent
design

• The design of an AI agent for a complex task might require the integration of multiple
components and techniques to deal with different aspects of the problem
Search Problems

25
Search problems (Deterministic Planning)

Search problem components:

• Start state
• Actions: available to the agent in each state
• Transition model
• the state resulting from doing action a in state s
• Goal state
• Path cost
• Assume that it is a sum of
nonnegative step costs (c(s, a, s') ≥ 0)

[Figure: example search problem with the start state and the goal state marked]

• A solution is a sequence of actions (a plan) which transforms
the start state to a goal state
• The optimal solution is the sequence of actions that gives
the lowest path cost for reaching the goal
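
As a concrete illustration of these components (not code from the slides), here is a minimal Python sketch of a problem interface; the names SearchProblem, start_state, actions, result, is_goal, and step_cost are assumptions made for the examples that follow.

```python
from abc import ABC, abstractmethod

class SearchProblem(ABC):
    """Abstract container for the components of a search problem."""

    @abstractmethod
    def start_state(self):
        """Return the start state."""

    @abstractmethod
    def actions(self, state):
        """Return the actions available to the agent in `state`."""

    @abstractmethod
    def result(self, state, action):
        """Transition model: the state resulting from doing `action` in `state`."""

    @abstractmethod
    def is_goal(self, state):
        """Return True if `state` is a goal state."""

    def step_cost(self, state, action, next_state):
        """Nonnegative step cost c(s, a, s'); defaults to 1 per action."""
        return 1
```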
Example: Romania
• On vacation in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest
• Start state
• Arad
• Actions
• Go from one city to another
• Transition model
• If you go from city A to
city B, you end up in city B
• Goal state
• Bucharest
• Path cost
• Sum of edge costs (total
distance traveled)
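
A sketch of the Romania problem as one such instance, using a fragment of the map with the distances shown on the road-map slide; the RomaniaProblem class and the adjacency-dictionary layout are illustrative assumptions, and it follows the SearchProblem interface sketched above.

```python
# A fragment of the Romania road map: city -> {neighboring city: distance in km}.
ROMANIA = {
    "Arad":           {"Zerind": 75, "Timisoara": 118, "Sibiu": 140},
    "Zerind":         {"Arad": 75},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

class RomaniaProblem:
    """Route finding from Arad to Bucharest (same interface as SearchProblem above)."""
    def start_state(self):           return "Arad"
    def actions(self, state):        return list(ROMANIA[state])   # "go to neighboring city"
    def result(self, state, action): return action                 # you end up in that city
    def is_goal(self, state):        return state == "Bucharest"
    def step_cost(self, s, a, s2):   return ROMANIA[s][s2]         # road distance in km
```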
Task Environment - PEAS

Performance measure
• -1 per step; +10 food; +500 win; -500 die;
+200 hit scared ghost
Environment
• Pacman dynamics (incl. ghost behavior)
Actuators
• North, South, East, West, (Stop)
Sensors
• Entire state is visible
Environment Types – Quiz?

Pacman Taxi
Fully or partially observable
Single agent or multi-agent
Deterministic or stochastic
Static or dynamic
Discrete or continuous
State space
• The start state, actions, and transition model define the state space
of the problem
• The set of all states reachable from start state by any sequence of actions
• Determined by start state + actions + transition model
• Can be represented as a directed graph where the nodes are states and links between
nodes are actions
• What is the state space for the Romania problem?
What’s in a State Space?

• The real-world state includes every last detail of the environment


• A search state defines a model that abstracts away details not needed to solve the problem
(or too expensive to take into account)

32
State Space Sizes?

• World state:
• Agent positions: 120
• Food count: 30
• Ghost positions: 12
• Agent facing: NSEW

• How many
• World states?
120 × 2^30 × 12^2 × 4
• States for pathing?
120
• States for eat-all-dots?
120 × 2^30
State Space Graphs

• State space graph: A mathematical


representation of a search problem
• Nodes are (abstracted) world configurations
• Arcs represent transitions resulting from actions
• The goal test is a set of goal nodes (maybe
only one)

• In a state space graph, each state occurs


only once!

• We can rarely build this full graph in


memory (it’s too big), but it’s a useful idea
More Examples

[Figure: Romania road map showing the cities Oradea, Zerind, Arad, Timisoara, Lugoj,
Mehadia, Drobeta, Sibiu, Rimnicu Vilcea, Craiova, Fagaras, Pitesti, Bucharest, Giurgiu,
Urziceni, Hirsova, Eforie, Vaslui, Iasi, and Neamt, connected by roads labeled with
distances in km]
Example: 8-puzzle

start state goal state

• State:


• Actions :
• Goal test:
• Path cost:

36
Example: 8-puzzle

start state goal state

• State: integer location of tiles (ignore intermediate locations)


• Actions: moving blank left, right, up, down (ignore jamming)
• Goal test: does state match goal state?
• Path cost: 1 per move

37
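
One possible way to encode this formulation in Python (an assumption, not the slides' code): a state is a tuple of nine tile numbers read row by row with 0 for the blank, and successors come from sliding the blank.

```python
# 8-puzzle state: tuple of 9 tile numbers read row by row, 0 = blank.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def successors(state):
    """Yield (action, next_state) pairs obtained by moving the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Skip moves that would push the blank off the board.
        if action == "up" and row == 0:     continue
        if action == "down" and row == 2:   continue
        if action == "left" and col == 0:   continue
        if action == "right" and col == 2:  continue
        board = list(state)
        swap = blank + delta
        board[blank], board[swap] = board[swap], board[blank]
        yield action, tuple(board)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # one common goal layout (an assumption)
is_goal = lambda state: state == GOAL        # goal test; path cost is 1 per move
```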
Example: 8-puzzle

start state goal state

Why search algorithms?


• 8-puzzle has 362,880 states
• 15-puzzle has 10^12 states
• 24-puzzle has 10^25 states

So, we need a principled way to look for a solution in these


huge search spaces…
38
Search

• Given:
• Start state
• Actions
• Transition model
• Goal state
• Path cost
• How do we find the optimal solution?
• How about building the state space and then using Dijkstra’s shortest path
algorithm?
• Complexity of Dijkstra’s is O(E + V log V), where V is the size of the state space
• The state space may be huge!
Search: Basic idea

• Let’s begin at the start state and expand it by making a list of


all possible successor states
• Maintain a frontier or a list of unexpanded states
• At each step, pick a state from the frontier to expand
• Keep going until you reach a goal state
• Try to expand as few states as possible
Search tree
• “What if” tree of sequences of actions and outcomes
• The root node corresponds to the starting state
• The children of a node correspond to the successor
states of that node’s state
• A path through the tree corresponds to a sequence
of actions
• A solution is a path ending in the goal
state

[Figure: search tree with the starting state at the root, actions leading to
successor states, and a goal state at one of the leaves]

• Nodes vs. states
• A state is a representation of the world,
while a node is a data structure that is part
of the search tree
• A node has to keep a pointer to its parent, the path cost,
possibly other info
State Space Graphs vs. Search Trees

Consider this 4-state graph: How big is its search tree (from S)?

[Figure: a four-state graph with states S, a, b, G, and its search tree rooted at S,
in which the states a and b appear again and again on different branches]


Important: Lots of repeated structure in the search tree!
Tree Search Algorithm Outline

Function General-Search(problem, strategy) returns a solution, or failure


initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then
return the corresponding solution
else expand the node and add resulting nodes to the search tree
end

43
Problem-Solving

• Problem solving:
• Goal formulation
• Problem formulation (states, operators)
• Search for solution

• Problem formulation:
• Initial (start) state
• Actions
• Goal test
• Path cost

44
Finding a solution

Solution: is ???

Function General-Search(problem, strategy) returns a solution, or failure


initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then return the corresponding solution
else expand the node and add resulting nodes to the search tree
end

45
Finding a solution

Solution: is a sequence of actions that brings you from the current state


to the goal state.

Function General-Search(problem, strategy) returns a solution, or failure


initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then return the corresponding solution
else expand the node and add resulting nodes to the search tree
end

Strategy: The search strategy is determined by ???

46
Finding a solution

Solution: is a sequence of actions that brings you from the current state


to the goal state

Function General-Search(problem, strategy) returns a solution, or failure


initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then return the corresponding solution
else expand the node and add resulting nodes to the search tree
end

Strategy: The search strategy is determined by the order in which


the nodes are expanded.

47
Example: Traveling from Arad To Bucharest

48
Tree search example

49
Tree search example

50
Tree search example

51
Tree search example

52
Implementation of search algorithms

Function General-Search(problem, Queuing-Fn) returns a solution, or failure


nodes ← make-queue(make-node(initial-state[problem]))
loop do
if nodes is empty then return failure
node ← Remove-Front(nodes)
if Goal-Test[problem] applied to State(node) succeeds then return node
nodes ← Queuing-Fn(nodes, Expand(node, Operators[problem]))
end

Queuing-Fn(queue, elements) is a queuing function that inserts a set


of elements into the queue and determines the order of node expansion.
Varieties of the queuing function produce varieties of the search algorithm.

53
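
A minimal Python sketch of this parameterization, assuming the problem interface used in the earlier examples; the queuing function alone decides the order of node expansion (for simplicity the frontier holds bare states and the function returns only the goal state, not the path).

```python
from collections import deque

def general_search(problem, queuing_fn):
    """Tree search whose behavior is determined entirely by `queuing_fn`."""
    nodes = deque([problem.start_state()])          # make-queue(make-node(initial state))
    while nodes:                                    # empty frontier -> failure
        node = nodes.popleft()                      # Remove-Front
        if problem.is_goal(node):                   # Goal-Test
            return node
        children = [problem.result(node, a) for a in problem.actions(node)]
        queuing_fn(nodes, children)                 # insert expanded nodes into the frontier
    return None

# Two queuing functions give two classic algorithms:
enqueue_at_end   = lambda q, kids: q.extend(kids)      # FIFO -> breadth-first search
enqueue_at_front = lambda q, kids: q.extendleft(kids)  # LIFO -> depth-first search
```

Being plain tree search, this sketch can revisit states forever on graphs with cycles; the GRAPH_SEARCH variant on the following slides adds an explored set to prevent that.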
Function TREE_SEARCH(problem) returns a solution, or failure

initialize the frontier as a specific work list (stack, queue, priority queue)
add initial state of problem to frontier
loop do
if the frontier is empty then
return failure
choose a node and remove it from the frontier
if the node contains a goal state then
return the corresponding solution

for each resulting child from node


add child to the frontier
Function GRAPH_SEARCH(problem) returns a solution, or failure
initialize the explored set to be empty
initialize the frontier as a specific work list (stack, queue, priority queue)
add initial state of problem to frontier
loop do
if the frontier is empty then
return failure
choose a node and remove it from the frontier
if the node contains a goal state then
return the corresponding solution
add the node state to the explored set
for each resulting child from node
if the child state is not already in the frontier or explored set then
add child to the frontier
Encapsulating state information in nodes

56
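
A possible node structure (names assumed, not from the slides), following the earlier remark that a node keeps a pointer to its parent, the path cost, and possibly other info:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node: wraps a state plus bookkeeping information."""
    state: Any                       # the (abstracted) world state
    parent: Optional["Node"] = None  # node whose expansion generated this one
    action: Any = None               # action applied to the parent to reach this state
    path_cost: float = 0.0           # g(n): cost of the path from the root

def solution(node):
    """Walk the parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```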
Evaluation of search strategies

• A search strategy is defined by picking the order of node expansion


• How do we score the performance of different approaches for finding a solution?

• Parameters for space and time complexity:


• b: maximum branching factor of the search tree
• d: depth of the least-cost solution
• m: maximum depth of the state space (may be ∞)
57
Binary Tree Example

Depth = 0
root

Depth = 1
N1 N2

Depth = 2 N3 N4 N5 N6

Number of nodes: n ≈ 2^(max depth)


Number of levels (max depth) ≈ log₂(n) (could be n for a degenerate tree)
58
Complexity

• Why worry about complexity of algorithms?

• because a problem may be solvable in principle but may take too long to
solve in practice

59
Complexity: Tower of Hanoi

60
Complexity:
Tower of Hanoi

61
Complexity: Tower of Hanoi

• 3-disk problem: 2^3 - 1 = 7 moves

• 64-disk problem: 2^64 - 1 moves.


• 2^10 = 1024 ≈ 1000 = 10^3
• 2^64 = 2^4 · 2^60 ≈ 2^4 · 10^18 = 1.6 × 10^19

• One year ≈ 3.2 × 10^7 seconds

62
Complexity: Tower of Hanoi

• The wizard’s speed = one disk / second

1.6 × 10^19 = 5 × 3.2 × 10^18


            = 5 × (3.2 × 10^7) × 10^11
            = (3.2 × 10^7) × (5 × 10^11)

63
Complexity: Tower of Hanoi

• The time required to move all 64 disks from needle 1 to needle 3 is


roughly 5 × 10^11 years.

• It is estimated that our universe is about 15 billion = 1.5 × 10^10


years old.

5 × 10^11 = 50 × 10^10 ≈ 33 × (1.5 × 10^10).

64
Complexity: Tower of Hanoi

• Assume: a computer with 1 billion = 10^9 moves/second.


• Moves/year = (3.2 × 10^7) × 10^9 = 3.2 × 10^16

• To solve the problem for 64 disks:


• 2^64 ≈ 1.6 × 10^19 = 1.6 × 10^16 × 10^3 =
(3.2 × 10^16) × 500

• 500 years for the computer to generate 2^64 moves at the


rate of 1 billion moves per second.

65
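
The 2^n - 1 move count behind these estimates can be checked with a short recursion (a sketch, not part of the slides):

```python
def hanoi_moves(n):
    """Optimal number of moves for n disks: move n-1 disks aside,
    move the largest disk, then move the n-1 disks back on top."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

print(hanoi_moves(3))                    # 7
print(hanoi_moves(64))                   # 18446744073709551615 = 2**64 - 1
print(hanoi_moves(64) / (1e9 * 3.2e7))   # about 577 years at 10^9 moves/second
```

The exact figure is closer to 577 years; the slides' round number of 500 comes from approximating 2^64 by 1.6 × 10^19.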
Complexity

• Why worry about complexity of algorithms?


• because a problem may be solvable in principle but may take too
long to solve in practice

• How can we evaluate the complexity of algorithms?


• through asymptotic analysis, i.e., estimate the time (or
number of operations) necessary to solve an instance of size n of
a problem when n tends towards infinity
• See AIMA, Appendix A.

66
Complexity of Algorithms

• T(n) is O(f(n)) means there exist n0 and k such that for all n > n0, T(n) ≤ k·f(n):
• n = input size
• T(n) = total number of steps of the algorithm
• Independent of the implementation, compiler, …
• Asymptotic analysis: for large n, an O(n) algorithm is better than an O(n^2) algorithm.
• O() abstracts over constant factors:
• T(n) = 100n + 1000 is better than T(n) = n^2 + 1 only for n > 110
• O() notation is a good compromise between precision and ease of analysis

67
Remember: Implementation of search algorithms

Function General-Search(problem, Queuing-Fn) returns a solution, or failure


nodes ← make-queue(make-node(initial-state[problem]))
loop do
if nodes is empty then return failure
node ← Remove-Front(nodes)
if Goal-Test[problem] applied to State(node) succeeds then return node
nodes ← Queuing-Fn(nodes, Expand(node, Operators[problem]))
end

Queuing-Fn(queue, elements) is a queuing function that inserts a set of


elements into the queue and determines the order of node expansion. Varieties of
the queuing function produce varieties of the search algorithm.

68
Types of search strategies

• Uninformed/blind search
• Can only generate successors, sum up what has happened so
far, and distinguish goals from non-goals
• Informed/heuristic search:
• Strategies that know whether one non-goal state is more
promising than another
• Adversarial Search

69
Uninformed/Blind search
strategies

70
Uninformed/Blind search strategies

• Breadth-first search (BFS)
• Depth-first search (DFS)
• Depth-limited search
• Iterative deepening depth-first search
• Uniform-cost search

71
Breadth-first search - Idea

• Strategy: Expand shallowest unexpanded node in the tree


• All nodes at a given level of the search tree are expanded before any
other node is expanded. “Horizontal”, level-by-level search
• Can be implemented by using a FIFO queue for the frontier set

72
Breadth-first search

[Figure: example search tree with nodes labeled A to G, expanded level by level:
move downwards, level by level, until the goal is reached]

73
Example: Traveling from Arad To Bucharest

74
Breadth-first search

75
Breadth-first search

76
Breadth-first search

77
BFS Graph Search for 8-Puzzle

78
Breadth-first search (BFS)

From the general search algorithm we have:

1. Initialize queue L containing only start state


2. Loop do
2.1 If (L == Empty) then
{failed search message; end}
2.2 Remove state u from the front of the queue L;
2.3 If (u == Goal state) then
{successful search message; end}
2.4 For (each state v expanded from u) do
{Put v at the end of queue L;}

79
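
The numbered outline above, sketched in Python with a FIFO deque (assuming the problem interface from the earlier examples; it returns the first goal state found rather than the path):

```python
from collections import deque

def breadth_first_search(problem):
    L = deque([problem.start_state()])      # 1. queue L containing only the start state
    while L:                                # 2.1 empty queue -> failed search
        u = L.popleft()                     # 2.2 remove the state at the front of L
        if problem.is_goal(u):              # 2.3 goal test
            return u
        for a in problem.actions(u):        # 2.4 expand u ...
            L.append(problem.result(u, a))  #     ... and put each child at the end of L
    return None
```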
Properties of breadth-first search

• Completeness:
• Time complexity:
• Space complexity:
• Optimality:

• Search algorithms are commonly evaluated according to the following four criteria:
• Completeness: does it always find a solution if one exists?
• Time complexity: how long does it take as function of num. of nodes?
• Space complexity: how much memory does it require?
• Optimality: does it guarantee the least-cost solution?

• Time and space complexity are measured in terms of:


• b – max branching factor of the search tree
• d – depth of the least-cost solution
• m – max depth of the search tree (may be infinity)

80
Properties of breadth-first search

• Completeness: Yes, if b is finite


• Time complexity: 1 + b + b^2 + … + b^d = O(b^d), i.e., exponential in d
• Space complexity: O(b^d) (see following slides)
• Optimality: Yes (assuming cost = 1 per step)

81
Time complexity of breadth-first search

• If a goal node is found on depth d of the tree, all nodes up till that
depth are created.

[Figure: search tree with branching factor b, maximum depth m, and the goal G at depth d]

• Thus: O(b^d)

82
Space complexity of breadth-first

• Largest number of nodes in QUEUE is reached on the level d of


the goal node.

[Figure: search tree with branching factor b, maximum depth m, and the goal G at depth d]

• The QUEUE contains all nodes of the goal’s level, including G. (Thus: 4 in the figure.)


• In general: b^d

83
Examples

• Assuming b=10, checking 1000 states takes 1 second, storing one state
takes 100 bytes

depth d     Time        Space


4           11 s        1 megabyte
6           18 min      111 megabytes
8           31 h        11 gigabytes
10          128 d       1 terabyte
12          35 y        111 terabytes

14          3500 y      11,111 terabytes

84
Depth First Search - Idea

• Strategy: Expand deepest unexpanded node


• In other words, we will scan the search tree by branch from the root
• Can be implemented by using a stack for the frontier (LIFO).

85
Depth First Search - Idea

86
Depth First Search

[Figure: depth-first expansion of the example tree, following one branch at a time]

87
Romania with step costs in km

88
Depth-first search

89
Depth-first search

90
Depth-first search

91
DFS for 8-Puzzle

92
DFS for 8-Puzzle
Goal found after 19 node expansions
and 32 goal checks

93
DFS vs. BFS

94
Depth-first search algorithm

From the general search algorithm we have the depth-first search algorithm:

1. Initialize stack L containing only the start state


2. Loop do
2.1 If (L == Empty) then
{failed search message; end}
2.2 Remove state u from the top of the stack L;
2.3 If (u == Goal state) then
{successful search message; end}
2.4 For (each state v expanded from u) do
{Put v at the top of stack L;}

95
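
The same outline in Python with the stack replacing the queue (again assuming the earlier problem interface):

```python
def depth_first_search(problem):
    L = [problem.start_state()]             # 1. stack L containing only the start state
    while L:                                # 2.1 empty stack -> failed search
        u = L.pop()                         # 2.2 remove the state at the top of L
        if problem.is_goal(u):              # 2.3 goal test
            return u
        for a in problem.actions(u):        # 2.4 expand u ...
            L.append(problem.result(u, a))  #     ... and push each child onto the top of L
    return None
```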
Properties of depth-first search

• Completeness: No, fails in infinite state-space (yes if finite


state space)
• Time complexity: O(b^m)
• Space complexity: O(bm)
• Optimality: No

Remember:
b = branching factor
m = max depth of search tree

96
Time complexity of depth-first: details

• In the worst case:


• the (only) goal node may be on the right-most branch,

[Figure: search tree of depth m and branching factor b, with the goal on the right-most branch]

• Time complexity = b^m + b^(m-1) + … + 1 = (b^(m+1) - 1) / (b - 1)


• Thus: O(b^m)
97
Space complexity of depth-first

• Largest number of nodes in QUEUE is reached in bottom left-


most node.
• Example: m = 3, b = 3 :

[Figure: depth-first expansion with m = 3, b = 3; the frontier holds the unexpanded
siblings along the current path]

• The QUEUE then contains 7 nodes in this example.


• In general: ((b - 1) · m) + 1
• Order: O(m·b)
98
Handling repeated states

• A search tree can contain many nodes corresponding to the same state –
these states are called repeated states
• Search algorithms will waste a lot of time re-expanding states we have
already explored
• To handle repeated states:
• Every time you expand a node, add that state to the explored set; do not put explored
states on the frontier again
• Every time you add a node to the frontier, check whether it already exists in the frontier
with a higher path cost, and if yes, replace that node with the new one

99
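
A breadth-first graph-search sketch that applies these two rules, assuming hashable states and the problem interface used earlier (the cost-based replacement rule only matters for uniform-cost search, shown later):

```python
from collections import deque

def breadth_first_graph_search(problem):
    """BFS with an explored set, so each state is expanded at most once."""
    start = problem.start_state()
    frontier = deque([start])
    in_frontier = {start}            # mirror of the frontier for O(1) membership tests
    explored = set()
    while frontier:
        u = frontier.popleft()
        in_frontier.discard(u)
        if problem.is_goal(u):
            return u
        explored.add(u)
        for a in problem.actions(u):
            v = problem.result(u, a)
            if v not in explored and v not in in_frontier:
                frontier.append(v)
                in_frontier.add(v)
    return None
```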
Exercise

Find path from A to K


by:
- Breadth-first
- Depth-first

100
Exercise (2)

101
Breadth-First Search

Strategy: expand the shallowest node first
Implementation: the frontier is a FIFO queue

[Figure: example graph with states S, a, b, c, d, e, f, G, h, p, q, r, and the
corresponding search tree expanded in tiers, level by level]
Depth-First Search

Strategy: expand the deepest node first
Implementation: the frontier is a LIFO stack

[Figure: the same example graph, with the search tree expanded one deep branch at a time]
Quiz: DFS vs BFS

• When will BFS outperform DFS?

• When will DFS outperform BFS?


Depth-limited search

Depth-limited search is depth-first search with a depth limit l


Implementation:
• In the accompanying figure, the depth limit is 1.
• Depth-first search is a special case of
depth-limited search (with l = ∞).
Complete: if the cutoff is chosen appropriately,
it is guaranteed to find a solution.
Optimal: it does not guarantee to find the
least-cost solution

105
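
A recursive depth-limited search sketch (assuming the earlier problem interface); it distinguishes hitting the cutoff from a genuine failure, which iterative deepening relies on:

```python
def depth_limited_search(problem, limit):
    """Depth-first search that never expands nodes deeper than `limit`.
    Returns a goal state, 'cutoff' if the limit was reached, or None on failure."""
    def recurse(state, depth):
        if problem.is_goal(state):
            return state
        if depth == limit:
            return "cutoff"
        cutoff_hit = False
        for a in problem.actions(state):
            result = recurse(problem.result(state, a), depth + 1)
            if result == "cutoff":
                cutoff_hit = True
            elif result is not None:
                return result
        return "cutoff" if cutoff_hit else None
    return recurse(problem.start_state(), 0)
```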
Iterative Deepening

• Idea: get DFS’s space advantage with


BFS’s time / shallow-solution
advantages …
• Run a DFS with depth limit 1. If no
solution…
• Run a DFS with depth limit 2. If no
solution…
• Run a DFS with depth limit 3. …

• Isn’t that wastefully redundant?


• Generally most work happens in the lowest
level searched, so not so bad!
Iterative deepening search

Function Iterative-Deepening-Search(problem) returns a solution,


or failure
for depth = 0 to ∞ do
result ← Depth-Limited-Search(problem, depth)
if result succeeds then return result
end
return failure

Combines the best of breadth-first and depth-first search


strategies.
• Completeness: Yes
• Time complexity: O(b^d)
• Space complexity: O(bd)
• Optimality: Yes, if step cost = 1 107
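
Iterative deepening expressed as a loop over the depth_limited_search sketch from the previous slide (an illustration, not the slides' code):

```python
import itertools

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until it stops saying 'cutoff'."""
    for limit in itertools.count():
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result            # either a goal state or None (definite failure)
```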
Romania with step costs in km

108
Iterative deepening search
Iterative deepening complexity

• In iterative deepening, nodes at the bottom level are expanded once, the


level above twice, etc., up to the root (expanded d+1 times), so the total
number of expansions is:
(d+1)·1 + d·b + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d = O(b^d)

• In general, iterative deepening is preferred to depth-first or


breadth-first when search space large and depth of solution not
known.

118
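
A quick numerical check of the "most work happens at the lowest level" claim for b = 10 and d = 5 (illustrative numbers only):

```python
b, d = 10, 5

bfs_nodes = sum(b**i for i in range(d + 1))                 # 1 + b + ... + b^d
ids_nodes = sum((d + 1 - i) * b**i for i in range(d + 1))   # (d+1)*1 + d*b + ... + 1*b^d

print(bfs_nodes)              # 111111
print(ids_nodes)              # 123456
print(ids_nodes / bfs_nodes)  # ~1.11: only about 11% more generated nodes than one BFS
```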
Uniform-cost search

• For each frontier node (queue), save the total cost of the path from the initial
state to that node
• Expand the frontier node with the lowest path cost
• Implementation: frontier is a priority queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Equivalent to Dijkstra’s algorithm in general
Function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
initialize the explored set to be empty
initialize the frontier as a priority queue using node path_cost as the priority
add initial state of problem to frontier with path_cost = 0
loop do
if the frontier is empty then
return failure
choose a node and remove it from the frontier
if the node contains a goal state then
return the corresponding solution
add the node state to the explored set
for each resulting child from node
if the child state is not already in the frontier or explored set then
add child to the frontier
else if the child is already in the frontier with higher path_cost then
replace that frontier node with child
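
A Python sketch of this pseudocode using heapq, assuming hashable states and the problem interface from the earlier examples; the "replace that frontier node" step is implemented lazily by pushing a cheaper duplicate and skipping stale entries when they are popped.

```python
import heapq, itertools

def uniform_cost_search(problem):
    """UCS: the frontier is a priority queue ordered by path cost g(n)."""
    start = problem.start_state()
    tie = itertools.count()                 # tie-breaker so the heap never compares states
    frontier = [(0, next(tie), start)]      # entries are (path_cost, tie, state)
    best_cost = {start: 0}                  # cheapest known cost to reach each state
    explored = set()
    while frontier:
        g, _, u = heapq.heappop(frontier)
        if u in explored:
            continue                        # stale duplicate: a cheaper copy was handled earlier
        if problem.is_goal(u):
            return g, u                     # goal test on removal, as in the pseudocode
        explored.add(u)
        for a in problem.actions(u):
            v = problem.result(u, a)
            g2 = g + problem.step_cost(u, a, v)
            if v not in explored and g2 < best_cost.get(v, float("inf")):
                best_cost[v] = g2
                heapq.heappush(frontier, (g2, next(tie), v))
    return None
```

On the Romania fragment sketched earlier, uniform_cost_search(RomaniaProblem()) should return a cost of 418 for the path Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest.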
Uniform-cost search example
Uniform-cost search example

• Expansion order:
(S,p,d,b,e,a,r,f,e,G)
Another example of uniform-cost search

Source: Wikipedia
Properties of uniform-cost search

• Complete?
Yes, if step cost is greater than some positive constant ε
(we don’t want infinite sequences of steps that have a
finite total cost)
• Optimal?
Yes!
Optimality of uniform-cost search

• Graph separation property: every path from the initial state to an


unexplored state has to pass through a state on the frontier
• Proved inductively

• Optimality of UCS: proof by contradiction


• Suppose UCS terminates at goal state n with path cost
g(n) but there exists another goal state n’ with g(n’) < g(n)
• By the graph separation property, there must exist a node n” on the
frontier that is on the optimal path to n’
• But because g(n”) ≤ g(n’) < g(n), n” should have been expanded first!
Properties of uniform-cost search
• What nodes does UCS expand?
• Processes all nodes with cost less than the cheapest solution!
• If that solution costs C* and arcs cost at least ε, then the
“effective depth” is roughly C*/ε …
• Takes time O(b^(C*/ε)) (exponential in effective depth)

[Figure: expansion proceeds in C*/ε cost “tiers” g1, g2, g3, …]

• How much space does the frontier take?
• Has roughly the last tier, so O(b^(C*/ε))

• Is it complete?
• Assuming C* is finite and ε > 0, yes!
• Is it optimal?
• Yes!
Comparing uninformed search strategies

Criterion     Breadth-    Uniform    Depth-    Depth-      Iterative
              first       cost       first     limited     deepening

Time          b^d         b^d        b^m       b^l         b^d

Space         b^d         b^d        bm        bl          bd

Optimal?      Yes         Yes        No        No          Yes

Complete?     Yes         Yes        No        Yes,        Yes
                                               if l ≥ d

• b – max branching factor of the search tree


• d – depth of the least-cost solution
• m – max depth of the state space (may be infinity)
• l – depth cutoff
127
Summary

• Problem formulation usually requires abstracting away real-world details to


define a state space that can be explored using computer algorithms.

• Variety of uninformed search strategies; difference lies in method used to


pick node that will be further expanded.

• Uninformed search algorithms


• BFS (problems with memory space)
• UCS (~BFS for finding cost-optimal solutions)
• DFS (Tree search version has very low space requirements)
• Depth-limited DFS (avoid infinite diving)
• IDS (progressively increase depth limits)

128
References

• AIMA textbook
• Slides of CMU AI course
• Slides of UC Berkeley AI course

129
