Topic - 4 (Solving Problems by Searching)
Department of CSE
University of Information Technology and Sciences
Topic Contents
• Problem Solving Agents
• Problem Formulation and Search Spaces
• Example Problems
• Tree Search Algorithm
Breadth-First Search
Uniform-Cost Search
Depth-First Search
Depth-Limited Search
Iterative Deepening Search
Bidirectional Search
Introduction
❑ In this topic, we will see how an agent can find a
sequence of actions that achieves its goals when no
single action will do.
Problem-Solving Agent
Four general steps in problem solving:
• Goal formulation
– What are the successful world states
• Problem formulation
– What actions and states to consider given the goal
• Search
– Examine different possible sequences of actions that
lead to states of known value and then choose the
best sequence
• Execute
– Perform actions on the basis of the solution
Example: Romania
[Figure: road map of Romania]
Example: Romania
The agent is on holiday in Romania, currently in Arad.
Its flight leaves tomorrow from Bucharest.
• Formulate goal
– Be in Bucharest
• Formulate problem
– States: various cities
– Actions: drive between cities
• Find solution
– Sequence of cities; e.g. Arad, Sibiu, Fagaras,
Bucharest,…
Problem Type
Problem Formulation
❑ A problem is defined by:
▪ An initial state, e.g. In(Arad)
▪ A description of the possible actions available to the agent. Given a
particular state s, ACTIONS(s) returns the set of actions that can be
executed in s.
▪ For example, from the state In(Arad), the possible actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
▪ A description of what each action does; the formal name for this is
the transition model, specified by a function RESULT(s, a) that
returns the state that results from doing action a in state s (this
resulting state is also called a successor). For example, we have
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
Problem Formulation
▪ Goal test
• Explicit, e.g. x=‘In(Bucharest)’
▪ Path cost (assume additive)
• e.g. sum of distances, number of actions executed, …
❑ A solution is a sequence of actions from initial to goal state
▪ the optimal solution has the lowest path cost
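The formulation above can be sketched in code. This is a minimal illustration, not a standard library API; the class name RouteProblem, its method names, and the road-distance fragment below are assumptions chosen to mirror the Romania example.

```python
# A sketch of the formal problem definition: initial state, ACTIONS(s),
# RESULT(s, a), goal test, and step costs, for a fragment of the
# Romania route-finding problem.

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial      # e.g. "Arad"
        self.goal = goal            # e.g. "Bucharest"
        self.roads = roads          # {city: {neighbour: step_cost}}

    def actions(self, state):
        """ACTIONS(s): the actions applicable in state s."""
        return [f"Go({city})" for city in self.roads[state]]

    def result(self, state, action):
        """RESULT(s, a): the transition model."""
        return action[3:-1]         # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, next_state):
        return self.roads[state][next_state]

roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211},
}
problem = RouteProblem("Arad", "Bucharest", roads)
print(problem.actions("Arad"))           # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(problem.result("Arad", "Go(Zerind)"))  # Zerind
```

The search algorithms later in this topic only need these five pieces; any problem exposing them can be plugged into the same search code.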
State Space
❑ The state space of a problem is the set of all
states reachable from the initial state.
❑ The state space forms a graph, called a state
space graph, in which the nodes are states and
the arcs (may be labeled with costs) between
nodes are actions.
❑ The map of Romania shown in an earlier slide
can be interpreted as a state space graph if we
view each road as standing for two driving
actions, one in each direction.
State Space
[Figure: the state space graph for the Romania map]
Example Problems
❑ The problem-solving approach has been applied to
a vast array of task environments.
❑ Some of the best known are listed here,
distinguishing between toy and real-world problems.
❑ A toy problem is intended to illustrate or exercise
various problem-solving methods.
❑ It is usable by different researchers to compare the
performance of algorithms.
❑ A real-world problem is one whose solutions
people actually care about.
❑ We will mainly focus on toy problems in this topic.
Toy Problems: 8-Puzzle
[Figure: a typical 8-puzzle instance and its goal state]
Toy Problems: 8-Queens
[Figure: a solution to the 8-queens problem]
Toy Problems: 8-Queens
Naive incremental formulation
• States: Any arrangement of 0 to 8 queens on the board
• Initial state: No queens
• Actions: Add queen to any empty square
• Goal test: 8 queens on board and none attacked
• Path cost: N/A
64 · 63 · 62 · ... · 57 ≈ 1.8 × 10^14 possible sequences to investigate
Toy Problems: 8-Queens
Improved incremental formulation
• States: n (0 to 8) queens on the board, one per column
in the n leftmost columns with no queen attacking any
other
• Actions: Add queen in leftmost empty column such that
the queen is not attacking any other queen
• 2057 possible sequences to investigate; solutions are
easy to find
But with 100 queens the problem is much harder
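The incremental formulation above can be sketched as a backtracking search: place one queen per column, left to right, only on squares not attacked by the queens already placed. The function names safe and solve are illustrative.

```python
# Incremental 8-queens: queens[i] holds the row of the queen in column i.

def safe(queens, row):
    """Can a queen go in `row` of the next column without being attacked?"""
    col = len(queens)
    return all(q != row and abs(q - row) != col - c
               for c, q in enumerate(queens))

def solve(queens=(), n=8):
    """Count complete non-attacking placements reachable from `queens`."""
    if len(queens) == n:
        return 1
    return sum(solve(queens + (row,), n)
               for row in range(n) if safe(queens, row))

print(solve())  # 92 distinct solutions for 8 queens
```

Because attacked squares are pruned as soon as a queen is placed, only a few thousand partial placements are ever examined, in line with the 2057 sequences mentioned above.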
Real-World Problems
❑ Route-finding problems
❑ Touring problems
❑ Traveling Salesman problem
❑ VLSI layout problem
❑ Robot navigation
❑ Automatic assembly sequencing
❑ Internet searching
Basic Search Algorithms
Simple Tree Search Example
[Figure: step-by-step expansion of an example search tree]
State Space vs. Search Tree
• A state corresponds
to a configuration of
the world
• A node is a data
structure in a search
tree
– e.g. node = <state,
parent-node, action,
path-cost, depth>
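The node structure on this slide can be rendered directly as a small data class; the field names follow the tuple above, and the helper function solution is illustrative.

```python
from dataclasses import dataclass
from typing import Any, Optional

# node = <state, parent-node, action, path-cost, depth>

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0
    depth: int = 0

def solution(node):
    """Recover the action sequence by following parent links to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Go(Sibiu)",
             path_cost=140, depth=1)
print(solution(child))  # ['Go(Sibiu)']
```

Keeping parent pointers in the node, rather than whole paths, is what lets a search reconstruct the solution only once a goal node is found.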
Search Strategies
❑ A search strategy is defined by picking the order of node
expansion
❑ Problem-solving performance is measured in four ways:
• Completeness: Is a solution found if one exists?
• Optimality: Does the strategy find the optimal solution?
• Time Complexity: How long does it take to find a solution?
• Space Complexity: How much memory is needed to perform
the search?
❑ Time and space complexity are measured in terms of
problem difficulty defined by:
• b - branching factor of the search tree
• d - depth of the shallowest goal node
• m - maximum length of any path in the state space
Uninformed Search Strategies
• Uninformed search (or blind search)
– Strategies have no additional information about states
beyond that provided in the problem definition
Informed Search Strategies
• Informed (or heuristic) search
– Search strategies know whether one state is more
promising than another
Breadth-First Search
• Expand shallowest unexpanded node
• The fringe (also called the frontier) is the set of nodes that
have been generated but not yet expanded, i.e. the set of all
leaf nodes available for expansion at any given point.
• Implementation: the fringe/frontier is a FIFO queue
Breadth-First Search
[Figure: step-by-step breadth-first expansion of an example tree, level by level]
Breadth-First Search
• Memory requirements are a bigger problem than
execution time.
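The FIFO-queue scheme above can be sketched as follows. The graph and names here are illustrative, and whole paths are stored in the queue for clarity instead of parent pointers.

```python
from collections import deque

# Breadth-first search: the frontier is a FIFO queue, so the shallowest
# unexpanded node is always expanded next. An explored set avoids revisits.

def bfs(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])        # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()      # shallowest path first
        for nxt in graph[path[-1]]:
            if nxt not in explored:
                if nxt == goal:
                    return path + [nxt]
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None                        # no solution exists

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["G"], "F": ["G"], "G": []}
print(bfs(graph, "A", "G"))  # ['A', 'B', 'E', 'G']
```

Every path kept in the queue is a node in memory, which is exactly why memory, not time, is usually what breaks breadth-first search first.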
Uniform-Cost Search
• Extension of BF-search:
– Expand node with lowest path cost
• Implementation: fringe = queue ordered by
path cost
• UC-search is the same as BF-search
when all step costs are equal
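The priority-queue idea can be sketched with Python's heapq; the example graph and the function name are illustrative. Note that the goal test is applied when a node is expanded (popped), not when it is generated, which is what preserves optimality.

```python
import heapq

# Uniform-cost search: the frontier is a priority queue ordered by the
# path cost g(n), so the cheapest known path is always expanded next.

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]       # (path_cost, state, path)
    best = {}                              # cheapest expansion cost per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test on expansion
            return cost, path
        if state in best and best[state] <= cost:
            continue                       # already expanded more cheaply
        best[state] = cost
        for nxt, step in graph[state].items():
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "G": 9},
         "C": {"G": 3}, "G": {}}
print(ucs(graph, "A", "G"))  # (5, ['A', 'B', 'C', 'G'])
```

In the example, the direct-looking edge B to G costs 9, so the optimal route detours through C; testing the goal only on expansion is what lets the cheaper path win.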
Uniform-Cost Search
❑ Let us consider the following search tree, where A
and G are the initial and goal node, respectively.
Uniform-Cost Search
[Figure: step-by-step uniform-cost expansion of the example tree]
Uniform-Cost Search
❑ Let us consider another search tree, where A
and G are the initial and goal node, respectively.
Uniform-Cost Search
[Figure: step-by-step uniform-cost expansion of the second example tree]
Uniform-Cost Search
• Completeness:
– YES, if every step cost is at least some small positive constant ε
• Optimality:
– nodes are expanded in order of increasing path cost
– YES, if complete
• Time and space complexity:
– Uniform-cost search is guided by path costs rather than
depths, so its complexity cannot easily be characterized
in terms of b and d.
– Instead, let C* be the cost of the optimal solution
– Assume that every action costs at least ε
– Worst case: O(b^(1 + ⌊C*/ε⌋))
Depth-First Search
[Figure: step-by-step depth-first expansion of an example tree, deepest node first]
DF-Search: Evaluation
• Completeness:
– Is a solution always found if one exists?
– No (unless search space is finite and no loops
are possible)
• Optimality:
– Is the least-cost solution always found?
– No
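A minimal stack-based sketch of depth-first search, assuming an explicit finite graph; the names are illustrative. The check against revisiting states on the current path is what keeps this toy version from looping, which matches the caveat above about loops and infinite spaces.

```python
# Depth-first search with an explicit LIFO stack: the deepest
# unexpanded node is always expanded next.

def dfs(graph, start, goal):
    stack = [[start]]                  # stack of paths (LIFO)
    while stack:
        path = stack.pop()             # deepest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in reversed(graph[state]):   # push so leftmost is tried first
            if nxt not in path:              # avoid cycles on the current path
                stack.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
print(dfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Note that the path found is whatever the dive happens to hit first, not necessarily the cheapest or the shallowest one, which is why the strategy is not optimal.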
DF-Search: Evaluation
• Time complexity:
– O(b^m), where m is the maximum length of any path in the
state space
• Space complexity:
– O(b·m): only a single path from the root, together with the
unexpanded sibling nodes along it, needs to be stored
Depth-Limited Search
• DF-search with depth limit l
– i.e. nodes at depth l have no successors
– Problem knowledge can be used
• Solves the infinite-path problem
• If l < d then incompleteness results
• Time complexity: O(b^l)
• Space complexity: O(b·l)
• Depth-first search can be viewed as a special
case of depth-limited search with l = ∞.
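A recursive sketch of depth-limited search, on an illustrative graph. A distinct "cutoff" return value lets the caller tell a too-shallow limit apart from genuine failure, which matters for iterative deepening later.

```python
# Depth-limited search: depth-first search that treats nodes at
# depth `limit` as if they had no successors.

def dls(graph, state, goal, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"                 # the limit was hit here
    cutoff = False
    for nxt in graph[state]:
        result = dls(graph, nxt, goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None  # None means: no solution at all

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"], "G": []}
print(dls(graph, "A", "G", limit=1))  # cutoff  (G lies at depth 3)
print(dls(graph, "A", "G", limit=3))  # ['A', 'B', 'D', 'G']
```

With limit 1 the search reports a cutoff rather than failure, so a caller knows that retrying with a larger limit might still succeed.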
Depth-Limited Search with l = 2
[Figure: depth-limited expansion of an example tree with cutoff at depth 2]
Depth-Limited Search
• Sometimes, depth limits can be based on knowledge
of the problem.
• For example, on the map of Romania there are 20
cities.
• Therefore, we know that if there is a solution, it must
be of length 19 at the longest, so l = 19 is a possible
choice.
Depth-Limited Search
• But in fact if we studied the map carefully, we would
discover that any city can be reached from any other
city in at most 9 steps.
• This number, known as the diameter of the state
space, gives us a better depth limit, which leads to a
more efficient depth-limited search.
• Diameter of state space is the maximum distance
from one state to another in the state space.
• For most problems, however, we will not know a
good depth limit until we have solved the problem.
Iterative Deepening Search
• A general strategy to find best depth limit l
– Goal is found at depth d, the depth of the shallowest
goal-node
– Often used in combination with DF-search
• Combines benefits of DF- and BF-search
• Like depth-first search, its memory requirements are
very modest: O(b·d) to be precise.
• Like breadth-first search, it is complete when the
branching factor is finite and optimal when the path
cost is a nondecreasing function of the depth of the
node.
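Iterative deepening is then just a loop over increasing limits around a depth-limited search. This sketch assumes a finite explicit graph; the names are illustrative, and the inner search here collapses cutoff and failure into None since the loop retries anyway up to a maximum depth.

```python
# Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
# until a goal is found. Memory stays O(b*d) while the shallowest
# goal (depth d) is found first.

def dls(graph, state, goal, limit, path=()):
    path = path + (state,)
    if state == goal:
        return list(path)
    if limit == 0:
        return None
    for nxt in graph[state]:
        found = dls(graph, nxt, goal, limit - 1, path)
        if found:
            return found
    return None

def ids(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        found = dls(graph, start, goal, limit)
        if found:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [],
         "D": [], "E": ["G"], "G": []}
print(ids(graph, "A", "G"))  # ['A', 'B', 'E', 'G']
```

Because the limits grow one at a time, the first limit at which the search succeeds is exactly d, so the returned path is a shallowest one, giving the breadth-first-like guarantees mentioned above.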
Iterative Deepening Search
• Iterative deepening search may seem wasteful,
because states are generated multiple times.
• It turns out this is not very costly.
• The reason is that in a search tree with the same
(or nearly the same) branching factor at each
level, most of the nodes are in the bottom level, so
it does not matter much that the upper levels are
generated multiple times.
Iterative Deepening Search
• In an iterative deepening search, the nodes on the bottom level
(depth d) are generated once, those on the next to bottom level
are generated twice, and so on, up to the children of the root,
which are generated d times.
• So the total number of nodes generated is
N(IDS) = d·b + (d − 1)·b² + ... + 1·b^d = O(b^d)
• We can compare this to the nodes generated by a breadth-first
search:
N(BFS) = b + b² + ... + b^d + (b^(d+1) − b) = O(b^(d+1))
• Notice that breadth-first search generates some nodes at depth
d+1, whereas iterative deepening does not.
• The result is that iterative deepening is actually faster than
breadth-first search, despite the repeated generation of states.
Iterative Deepening Search
• For example, if b = 10 and d = 5:
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
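The two counts can be checked directly from the formulas above:

```python
# Checking N(IDS) and N(BFS) for b = 10, d = 5.
b, d = 10, 5

# N(IDS) = d*b + (d-1)*b^2 + ... + 1*b^d
n_ids = sum((d - i) * b ** (i + 1) for i in range(d))

# N(BFS) = b + b^2 + ... + b^d + (b^(d+1) - b)
n_bfs = sum(b ** i for i in range(1, d + 1)) + (b ** (d + 1) - b)

print(n_ids)  # 123450
print(n_bfs)  # 1111100
```

Repeating the shallow levels costs iterative deepening only about 11% extra nodes here, while breadth-first search pays nearly a factor of ten for the extra depth-(d+1) layer.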
ID-Search: Example
[Figure: iterative deepening traces for successive depth limits]
Bidirectional Search
❑ Two simultaneous searches run from the start and the
goal, in the hope that they meet in the middle.
– Motivation: b^(d/2) + b^(d/2) is much less than b^d
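The idea can be sketched as bidirectional breadth-first search on an undirected graph, alternating one layer per side and stopping when the frontiers meet; each side then explores only about b^(d/2) nodes. The names and the example graph are illustrative, and this sketch returns only the number of steps on the connecting path.

```python
from collections import deque

# Bidirectional BFS: grow one frontier from the start and one from the
# goal; as soon as a generated node belongs to the other side's
# explored set, the two partial paths join into a full one.

def bidirectional(graph, start, goal):
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}     # distances from each end
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        for q, dist, other in ((q_f, dist_f, dist_b),
                               (q_b, dist_b, dist_f)):
            for _ in range(len(q)):            # one BFS layer per side
                state = q.popleft()
                for nxt in graph[state]:
                    if nxt in other:           # the frontiers meet
                        return dist[state] + 1 + other[nxt]
                    if nxt not in dist:
                        dist[nxt] = dist[state] + 1
                        q.append(nxt)
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional(graph, "A", "E"))  # 4
```

Running backward from the goal requires that actions be reversible (or that predecessors be computable), which is the main practical restriction on bidirectional search.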
Bidirectional Search
[Figure: two search frontiers growing toward each other from start and goal]