Ch3 SolvingProblemsBySearching

The document discusses problem solving using search algorithms. It describes problem solving agents and their components, such as states, actions, goals and solutions. Several example problems are presented, including the vacuum world, 8-puzzle and 8-queens problems, and their formulations are defined.


Chapter 3:

Solving Problems by
Searching

Ch3: Solving Problems by Searching 1


Outline

❖ Problem-Solving Agents
❖ Example Problems
❖ Searching for Solutions
❖ Uninformed Search Strategies
❖ Informed Search Strategies



Problem Solving
❖ Problem solving is an intelligent capability
❖ We are able to solve very different problems:
 Finding a path through a labyrinth
 Solving a crossword
 Playing a game
 Diagnosing an illness
 Deciding whether to invest in the stock market
 ...
❖ The goal is to have programs that are also able to solve these
problems



Problem Solving
❖ We want to represent any problem so we can solve it
automatically
❖ To do so we need:
 A common representation for all the problems.
▪ If we analyze the nature of a problem we can identify:
– A starting point
– A goal to achieve
– Actions that we can use to solve the problem
– Constraints on the goal

 Algorithms that use some strategy to solve the problems using
the common representation.



3.1 Problem-Solving Agents
❖ A problem-solving agent is a kind of goal-based agent that
decides what to do by finding sequences of actions that lead to a
desirable goal.
 A goal is a set of world states.

❖ We assume that the environment is


 observable, the agent always knows the current state
 discrete, at any given state there are only finitely many actions to
choose from
 known, the agent knows which states are reached by each
action
 and deterministic, each action has exactly one outcome.



3.1 Problem-Solving Agents (Cont’d)

❖ A simple problem-solving agent


 First formulates a goal and a problem,
 Then searches for a sequence of actions that would solve the
problem,
 Then executes the actions one at a time.

❖ Thus we have a simple “formulate, search, execute” design
for the agent.



3.1 Problem-Solving Agents (Cont’d)
function SIMPLE-PROBLEM-SOLVING-AGENT( percept) returns an action
persistent: seq, an action sequence, initially empty
state, some description of the current world state
goal, a goal, initially null
problem, a problem formulation

state ←UPDATE-STATE(state, percept)


if seq is empty then
goal ← FORMULATE-GOAL(state)
problem ← FORMULATE-PROBLEM(state, goal)
seq ← SEARCH( problem)
if seq = failure then return a null action
action ← FIRST(seq)
seq ← REST(seq)
return action

Note: This is offline problem-solving; while the agent is executing
the plan it ignores its percepts (open-loop system)
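As an illustration, the formulate-search-execute loop above can be sketched in Python. This is a minimal sketch: the update_state / formulate_goal / formulate_problem / search helpers below are trivial stand-ins (not part of the original pseudocode) that always produce a fixed two-action plan.

```python
from collections import deque

def update_state(state, percept):
    return percept                 # stand-in: the percept reveals the state

def formulate_goal(state):
    return "goal"                  # stand-in goal

def formulate_problem(state, goal):
    return (state, goal)           # stand-in problem description

def search(problem):
    return ["a1", "a2"]            # stand-in: a fixed two-action plan

def simple_problem_solving_agent(percept, memory):
    """One call = one step of the agent; `memory` carries the
    persistent variables seq and state from the pseudocode."""
    memory["state"] = update_state(memory.get("state"), percept)
    if not memory["seq"]:          # no plan left: formulate and search
        goal = formulate_goal(memory["state"])
        problem = formulate_problem(memory["state"], goal)
        memory["seq"] = deque(search(problem) or [])
        if not memory["seq"]:
            return None            # search failed: return a null action
    return memory["seq"].popleft() # action <- FIRST(seq); seq <- REST(seq)
```

Called repeatedly, the agent returns "a1", then "a2", then re-plans, mirroring the open-loop execution described in the note.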
Example: Romania
❖ On holiday in Romania;
currently in Arad. Flight
leaves tomorrow from
Bucharest
❖ Formulate goal:
 be in Bucharest
❖ Formulate problem:
 states: various cities
 actions: drive between cities
❖ Find solution:
 sequence of cities, e.g.,
Arad, Sibiu, Fagaras,
Bucharest
3.1.1 Well-defined problems and solutions
❖ A problem is defined by five items:
1. initial state : the state that the agent starts in, e.g., In(Arad)
2. actions : a description of the possible actions available to the agent.
 For state s, ACTIONS(s) returns the set of actions that can be executed in s. We
say that each of these actions is applicable in s.
▪ Example: from the state In(Arad), the applicable actions are {Go(Sibiu),
Go(Timisoara), Go(Zerind)}.

3. transition model : a description of what each action does


 Specified by a function RESULT(s, a) (the successor function) that returns the state that
results from doing action a in state s.
▪ Example: RESULT(In(Arad), Go(Zerind)) = In(Zerind)

❖ Together, the initial state, actions, and transition model define the
state space of the problem – the set of all states reachable from the
initial state by any sequence of actions. ( or – a graph in which the nodes are
states and the arcs between nodes are actions)
3.1.1 Well-defined problems and solutions
(Cont’d)
4. goal test : determines whether a given state is a goal state; it can
be:
 explicit, e.g., {In(Bucharest)}
 implicit, e.g., a “checkmate” state in chess
5. path cost function
 e.g., sum of distances, number of actions executed, etc.
 c(s, a, s′) is the step cost of taking action a in state s to reach state s′, assumed
to be nonnegative

❖ A solution to a problem is a sequence of actions leading from the initial
state to a goal state.
❖ A solution’s quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
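The five components above can be collected into a small class. The Problem class and the Romania road fragment below are an illustrative sketch of one possible encoding (states as city names, actions as "Go(City)" strings), not a prescribed interface.

```python
class Problem:
    """A search problem: initial state, actions, transition model,
    goal test, and step costs (the five components above)."""

    def __init__(self, initial, goal, graph):
        self.initial, self.goal = initial, goal
        self.graph = graph                       # {state: {state: cost}}

    def actions(self, s):                        # ACTIONS(s)
        return [f"Go({t})" for t in self.graph[s]]

    def result(self, s, a):                      # RESULT(s, a)
        return a[3:-1]                           # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):               # c(s, a, s')
        return self.graph[s][s2]

# A fragment of the Romania map (road distances in km)
romania = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}
p = Problem("Arad", "Bucharest", romania)
```

For example, p.actions("Arad") contains "Go(Sibiu)", and p.result("Arad", "Go(Sibiu)") is "Sibiu".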



3.1.2 formulating problems
❖ Selecting a state space
 The real world is complex, so the
state space must be abstracted for problem solving
▪ The process of removing details from a representation is called
abstraction
 (Abstract) state = set of real states
 (Abstract) action = complex combination of real actions
▪ e.g., “Arad →Zerind” represents a complex set of possible routes,
detours, rest stops, etc.
 (Abstract) solution = set of real paths that are solutions in the
real world
❖ Each abstract action should be “easier” than the original
problem!- Atomic representation
3.2 Example problems

❖ There are two types of problems:

 “Toy” problems : intended to illustrate various problem-solving
methods.
▪ Vacuum world, Sliding-block puzzles, N-Queens

 Real-world problems : tend to be more difficult, and their
solutions are ones people actually care about
▪ Route-finding, touring, assembly sequencing, Internet searching



3.2.1 Toy problems : Vacuum world
Vacuum world state space graph

❖States :
The state is
determined by
both the agent
location and the
dirt location

Initial state??:
actions??:
transition model??:
goal test??:
path cost??:



3.2.1 Toy problems : Vacuum world
Vacuum world state space graph

❖States :
The state is
determined by
both the agent
location and the
dirt location

Initial state??: any state


actions??: Left, Right, Suck, NoOp
transition model??: the arcs in the graph;
the actions have their expected effects, except that moving Left in the leftmost square, moving
Right in the rightmost square, and Sucking in a clean square have no effect
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
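A minimal sketch of this transition model, assuming states are encoded as (agent location, set of dirty squares) with locations 0 (left) and 1 (right); this encoding is one illustrative choice, not the only one.

```python
def vacuum_result(state, action):
    """Transition model for the two-square vacuum world.
    A state is (agent_location, frozenset_of_dirty_squares)."""
    loc, dirt = state
    if action == "Left":
        return (max(loc - 1, 0), dirt)   # moving Left in leftmost: no effect
    if action == "Right":
        return (min(loc + 1, 1), dirt)   # moving Right in rightmost: no effect
    if action == "Suck":
        return (loc, dirt - {loc})       # Sucking a clean square: no effect
    return state                         # NoOp

def vacuum_goal_test(state):
    return not state[1]                  # goal: no dirt anywhere
```

Starting from (0, {0, 1}), the sequence Suck, Right, Suck reaches a goal state.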



3.2.1 Toy problems : The 8-puzzle
❖States :
The state description
specifies the location of
each of the 8 tiles and
of the blank in one of the 9
squares

Initial state??: integer locations of tiles (ignore intermediate positions)


actions??: move blank left, right, up, down (ignore unjamming etc.)
Any constraints?
transition model??: effect of the actions
goal test??: = goal state (given)
path cost??: 1 per move
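The 8-puzzle actions and transition model can be sketched the same way; here a state is assumed to be a 9-tuple in row-major order with 0 marking the blank (an illustrative encoding).

```python
def puzzle_actions(state):
    """Applicable blank moves for a 3x3 sliding puzzle."""
    i = state.index(0)                   # position of the blank
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def puzzle_result(state, action):
    """Swap the blank with the neighbouring tile in that direction."""
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

With the blank in the centre all four moves are applicable; in a corner only two are, which answers the "Any constraints?" question above.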



3.2.1 Toy problems : The 8-queens
❖Place 8 queens on a chessboard so that no two
queens are in the same row, column, or diagonal.

❖States :
any arrangement of 0 to
8 queens on the board is
a state.

Not a solution

Initial state??: NO queens on the board


actions??: add a queen to any empty square
Any constraints?
transition model??: returns the board with a queen added to the specified square
goal test??: 8 queens on board, none attacked
path cost??: may or may not be applicable – depends on formulation
( incremental formulation/complete-state formulation)



3.2.2 Real-world problems
❖ Route finding : defined in terms of specified
locations and transitions along links between
them. Its applications include routing in
computer networks and automated travel
advisory systems

❖ VLSI layout : positioning components (transistors,
resistors, capacitors, etc.) and routing the
connections among a million gates on a silicon
chip is complicated, as is finding an optimal
way to place components on a printed
circuit board.

❖ Robot navigation

❖ Internet searching



3.3 Searching for solutions

❖ How do we find the solutions of previous problems?


 Search the state space
▪ State space can be represented by a search tree
▪ Root of the tree is the initial state
▪ the branches are actions
▪ Children generated through expanding
– The process of expanding nodes on the frontier continues until either a
solution is found or there are no more states to expand.

 In general we may have a search graph rather than a tree (the same
state can be reached through multiple paths)



General tree search
Simple tree search example

Applying each legal action to the


current state, thereby generating
a new set of states

The set of all leaf nodes available


for expansion at any given time is
called the frontier

Each of these six nodes is a leaf node ( a node with no children in the tree)
General tree search

❖ All search algorithms share this basic structure; they vary
according to how they choose which state to expand next
 This is called the search strategy



Avoiding repeated states
❖ In increasing order of effectiveness and computational overhead:
 do not return to state we come from, i.e., expand function will skip
possible successors that are in same state as node’s parent.
 do not create paths with cycles, i.e., expand function will skip possible
successors that are in same state as any of node’s ancestors.
 do not generate any state that was ever generated before, by keeping
track (in memory) of every state generated, unless the cost of reaching
that state is lower than last time we reached it.
▪ To do this, augment the TREE-SEARCH algorithm with a data structure
called the explored set (a.k.a the closed list ), which remembers every
expanded node.
▪ Newly generated nodes that match previously generated nodes- ones in the
explored set or the frontier- can be discarded instead of being added to the
frontier.
▪ The new algorithm is called GRAPH-SEARCH
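A minimal sketch of this idea, using a FIFO frontier for concreteness (any strategy fits the same loop); discarding newly generated states that are already in the explored set or on the frontier is exactly the GRAPH-SEARCH addition.

```python
from collections import deque

def graph_search(start, successors, is_goal):
    """Generic GRAPH-SEARCH sketch: TREE-SEARCH plus an explored set."""
    frontier = deque([(start, [start])])   # (state, path-to-state) pairs
    in_frontier = {start}
    explored = set()                       # the "closed list"
    while frontier:
        state, path = frontier.popleft()   # FIFO here; the strategy may vary
        in_frontier.discard(state)
        if is_goal(state):
            return path
        explored.add(state)
        for s2 in successors(state):
            # skip states already expanded or already waiting on the frontier
            if s2 not in explored and s2 not in in_frontier:
                frontier.append((s2, path + [s2]))
                in_frontier.add(s2)
    return None
```

On a graph with cycles (e.g. A↔B, A↔C, both leading to D) this terminates where plain TREE-SEARCH could loop forever.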



Avoiding repeated states (Cont’d)




3.3.1 Implementation: states vs. nodes

Arrows point from the child to the parent

❖ A state is a (representation of) a physical configuration


❖ A node n is a data structure constituting part of a search tree; it includes
 n.STATE: the state in the state space to which the node corresponds
 n.PARENT: the node in the search tree that generated this node
 n.ACTION: the action that was applied to the parent to generate the node
 n.PATH-COST: the cost, g(n), of the path from the initial state to the node, as
indicated by the parent pointers
❖ States do not have parents, children, depth, or path cost!
3.3.2 Measuring problem-solving
performance
❖ Problem-solving performance is measured in four ways:
 Completeness: Is a solution found if one exists?
 Optimality: Does the strategy find the optimal solution?
 Time Complexity: How long does it take to find a solution?
 Space Complexity: How much memory is needed to perform the search?
❖ Time and space complexity are measured in terms of problem difficulty
defined by:
 b - branching factor of the search tree
 d - depth of the shallowest goal node
 m - maximum length of any path in the state space



3.4 Uninformed search strategies
❖ Uninformed (blind) strategies use only the information
available in the problem definition
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
 Bidirectional search



3.4.1 Breadth-first search (BFS)
❖ Strategy
 The root node is expanded first
 Then all the successors of the root node are expanded next
 Then their successors, and so on.
❖ In general:
 Expand shallowest unexpanded node.
 Expand all nodes at depth d before expanding nodes at depth d+1
 (conventionally explore left to right at each level).
❖ Implementation
 The structure to store the open nodes is a FIFO (First In, First Out) queue
 New nodes are inserted at the end of the queue

• A search node is a path from state X to the start state (e.g. X B A S)


• The state of a search node is the most recent state of the path (e.g. X)
• Let Q be a list of search nodes (e.g. (X B A S) (C B A S)) and S be the
start state
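The trace that follows can be reproduced with a short sketch; as in the (X B A S) notation above, each queue entry is a path stored newest-state-first, and the goal test is applied when a node is removed from the frontier.

```python
from collections import deque

def bfs(start, successors, goal):
    """Breadth-first search with a FIFO queue of paths."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()             # shallowest node first
        if path[0] == goal:                # goal test on removal
            return list(reversed(path))    # root-to-goal order
        for s in successors.get(path[0], []):
            queue.append([s] + path)       # insert at the end of the queue
    return None

# The example tree from the trace
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
```

bfs("A", tree, "D") expands A, B, C and then removes (D B A), matching the four steps shown below.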
3.4.1 Breadth-first search (BFS) (Cont’d)

❖Goal: Node with value D


❖Start from the root node
❖FIFO = [(A)]
❖Remove (A) from frontier
❖Check if the current node is goal (D)
❖If not, expand the node, if possible
❖FIFO = [(B A),(C A)]



3.4.1 Breadth-first search (BFS) (Cont’d)

❖Remove (B A) from frontier


❖Check if the current node is goal (D)
❖If not, expand the node, if possible
❖FIFO = [(C A),(D B A),(E B A)]



3.4.1 Breadth-first search (BFS) (Cont’d)

❖Remove (C A) from frontier


❖Check if the current node is goal (D)
❖If not, expand the node, if possible
❖FIFO = [(D B A),(E B A),(F C A),(G C A)]



3.4.1 Breadth-first search (BFS) (Cont’d)

❖Remove (D B A) from frontier


❖Check if the current node is goal (D)
❖Yes, stop search and return solution



3.4.1 Breadth-first search (BFS) :
Evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 YES
▪ If shallowest goal node is at some finite depth d
▪ Condition: If b is finite (maximum num. of successor nodes is finite)

❖ Optimality:
 Does it always find the least-cost solution?
 In general YES
▪ Unless actions have different costs.

❖ Time complexity:
 Assume a state space where every state has b successors.
▪ The root has b successors, each node at the next level again has b successors (b^2 in total), …
▪ Assume the solution is at depth d
▪ Worst case: expand all but the last node at depth d
▪ Total number of nodes generated:
▪ 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))

❖ Space complexity : O(b^d)



3.4.2 Uniform-cost search (UCS)
❖ Strategy
 Similar to BFS, but takes path cost into account
 Uniform cost search modifies the BFS by expanding the lowest cost
node first
▪ The cost of the path to each node n is g(n) = Σ step costs (adds
up individual action costs along the path from the root to n).
 The goal is to generate a solution path of minimal cost.
❖ Implementation
 The structure to store the open nodes is a priority queue ordered by
path cost, lowest first,
❖ Uniform-cost search is equivalent to BFS if all step costs are equal –
though this is rare in practice.
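A sketch of UCS on the example graph from the trace that follows (only the edges shown there are included: S→A=1, S→B=5, S→C=15, A→G=10, B→G=5), using Python's heapq as the priority queue.

```python
import heapq

def uniform_cost_search(start, goal, graph):
    """UCS sketch: a priority queue ordered by path cost g(n);
    the goal test is applied when a node is popped, so the first
    goal popped has minimal cost."""
    frontier = [(0, start, [start])]            # (g, state, path)
    best = {}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best and best[state] <= g:
            continue                            # a cheaper path was expanded
        best[state] = g
        for s2, cost in graph.get(state, {}).items():
            heapq.heappush(frontier, (g + cost, s2, path + [s2]))
    return None

# The example graph from the trace
edges = {"S": {"A": 1, "B": 5, "C": 15}, "A": {"G": 10}, "B": {"G": 5}}
```

Although G is reached first via A (cost 11), the cheaper path through B (cost 10) is popped first and returned, matching the final priority-queue state below.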



3.4.2 Uniform-cost search (UCS) (Cont’d)

❖Goal: Find a shortest path from S to G,


❖Start from the root node

❖S.PATH-COST=0
❖Priority Queue=[(s)0]
3.4.2 Uniform-cost search (UCS) (Cont’d)

❖Goal: Find a shortest path from S to G,


❖Start from the root node

❖Expanding S
❖AS.PATH-COST=1, BS.PATH-COST=5,
CS.PATH-COST=15
❖Priority Queue=[(AS)1,(BS)5,(CS)15]
3.4.2 Uniform-cost search (UCS) (Cont’d)

❖Goal: Find a shortest path from S to G,


❖Start from the root node

❖Expanding AS
❖GAS.PATH-COST=11, BS.PATH-
COST=5, CS.PATH-COST=15
❖Priority Queue=[(BS)5,(GAS)11 ,(CS)15]
3.4.2 Uniform-cost search (UCS) (Cont’d)

❖Goal: Find a shortest path from S to G,


❖Start from the root node

❖Expanding BS
❖GAS.PATH-COST=11, GBS.PATH-
COST=10, CS.PATH-COST=15
❖Priority Queue=[(GBS)10, (GAS)11,(CS)15]
3.4.2 Uniform-cost search (UCS) :
Evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 YES
▪ If every step cost ≥ ε (some small positive constant)
▪ Uniform-cost search will get stuck in an infinite loop if there is a path with an infinite sequence of zero-
cost actions (for example : a sequence of NoOp actions))

❖ Optimality:
 Does it always find the least-cost solution?
 In general YES
❖ Time and space complexity:
 Uniform-cost search is guided by the path costs rather than depths, so its complexity is not
characterized in terms of b and d.
 Instead, let C* be the cost of the optimal solution and assume that every action costs at
least ε,
 The algorithm’s worst-case time and space complexity is O(b^(1+⌊C*/ε⌋))
▪ When all step costs are equal, b^(1+⌊C*/ε⌋) is just b^(d+1)



3.4.2 Uniform-cost search (UCS) :
Evaluation

When all step costs are the same, uniform-cost search
is similar to breadth-first search, except that the
latter stops as soon as it generates a goal, whereas
uniform-cost search examines all the nodes at the
goal’s depth to see if one has a lower cost; thus
uniform-cost search does more work by expanding
nodes at depth d unnecessarily



3.4.3 Depth-first search (DFS)
❖ Strategy
 DFS expands the deepest node first
❖ Implementation
 The structure to store the open nodes is a (LIFO)(Last In First Out)
queue (=Stack)
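The same loop as BFS with a stack reproduces the trace that follows (goal M); children are pushed in reverse so the leftmost child is popped, and hence expanded, first.

```python
def dfs(start, successors, goal):
    """Depth-first search: BFS's loop with a LIFO stack of paths."""
    stack = [[start]]
    while stack:
        path = stack.pop()                 # deepest (most recent) node first
        if path[0] == goal:
            return list(reversed(path))    # root-to-goal order
        # push children reversed so the leftmost child is popped first
        for s in reversed(successors.get(path[0], [])):
            stack.append([s] + path)
    return None

# The example tree from the trace
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"]}
```

dfs("A", tree, "M") visits A, B, D, H, I, E, J, K, C, F, L and then finds M, matching the LIFO snapshots below.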



3.4.3 Depth-first search (DFS) (Cont’d)

❖Goal: Node with value M,


❖Start from the root node
❖LIFO=[(A)]

❖Expand A
❖LIFO=[(BA),(CA)]



3.4.3 Depth-first search (DFS) (Cont’d)

❖Expand BA
❖LIFO=[(DBA),(EBA),(CA)]



3.4.3 Depth-first search (DFS) (Cont’d)
❖LIFO=[(HDBA),(IDBA),(EBA),(CA)]
❖LIFO=[(IDBA),(EBA),(CA)]
❖LIFO=[(EBA),(CA)]
❖LIFO=[(JEBA),(KEBA),(CA)]



3.4.3 Depth-first search (DFS) (Cont’d)
❖LIFO=[(KEBA),(CA)]
❖LIFO=[(CA)]
❖LIFO=[(FCA),(GCA)]
❖LIFO=[(LFCA),(MFCA),(GCA)]
❖LIFO=[(MFCA),(GCA)]



3.4.3 Depth-first search (DFS) (Cont’d)

❖Remove MFCA
❖It is the goal state
❖Return solution



3.4.3 Depth-first search (DFS) :
Evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 NO
▪ unless search space is finite and no loops are possible

❖ Optimality:
 Does it always find the least-cost solution?
 No
▪ (Might never find any solutions)

❖ Time complexity:
 O(b^m), where m is the maximum depth of any node.
 m can be much larger than d (the depth of the shallowest solution)
 m = ∞ if the tree is unbounded
❖ Space complexity : O(b·m)
 Must store all nodes on the current path
 Must store all unexplored sibling nodes of each node on the path
 At depth m, this requires storing b·m nodes
 Much better than O(b^m)
3.4.4 Depth-limited search (DLS)
❖ = depth-first search with depth limit l,
 i.e., nodes at depth l have no successors.
 Choosing an appropriate l may not be easy and could result in
incompleteness (WHY?)

❖ l = 2 in the following tree


 The nodes in gray color will not be explored



3.4.5 Iterative deepening search (IDS)
❖ Iterative deepening depth-first search
❖ It is similar to depth-limited search, but it finds the best depth
limit automatically.
❖ It tries all possible depth limits: first depth 0, then depth 1, then
depth 2, and so on, until a goal is found.
❖ Iterative deepening depth-first search combines depth-first
search’s space efficiency with breadth-first search’s
completeness (and optimality, when step costs are identical).
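Iterative deepening can be sketched as repeated depth-limited searches with growing limits; the dls helper below restates the depth-limited sketch so the block is self-contained.

```python
from itertools import count

def dls(state, successors, goal, limit):
    """Depth-limited DFS helper (nodes at depth `limit` not expanded)."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for s in successors.get(state, []):
        result = dls(s, successors, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None

def iterative_deepening_search(start, successors, goal):
    """Try limits 0, 1, 2, ... until the result is not a cutoff."""
    for limit in count():
        result = dls(start, successors, goal, limit)
        if result != "cutoff":
            return result                  # a solution path, or None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"]}
```

The shallow levels are re-generated on every iteration, but as the analysis below shows, this repetition costs only a constant factor.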



3.4.5 Iterative deepening search (IDS) (Cont’d)

❖Goal: Node with value M,


❖Start from the root node

Iteration 1



3.4.5 Iterative deepening search (IDS) (Cont’d)
Iteration 2



3.4.5 Iterative deepening search (IDS) (Cont’d)
Iteration 3



3.4.5 Iterative deepening search (IDS) (Cont’d)
Iteration 4



3.4.5 Iterative deepening search (IDS) :
Evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 YES
▪ Condition: If b is finite (maximum num. of successor nodes is finite)

❖ Optimality:
 Does it always find the least-cost solution?
 YES
▪ If step costs are all identical

❖ Time complexity:
 O(b^d), where d is the depth of the shallowest solution
 Total number of nodes generated:
▪ (d+1)·1 + d·b + (d−1)·b^2 + … + 1·b^d = O(b^d)

❖ Space complexity :
 O(b·d)

❖ In general, iterative deepening is preferred to depth-first or breadth-
first search when the search space is large and the depth of the solution
is not known.
3.4.7 Comparing uninformed search strategies



3.5 Informed (heuristic) search strategies
❖ Informed (heuristic) strategies use problem-specific
knowledge beyond the definition of the problem itself
❖ Informed (heuristic) strategies use heuristics to guide the
search.
 Best-first search
▪ Greedy Best-first search
▪ A* search



Best-first search
❖ Is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm
 In which a node is selected for expansion based on an evaluation
function, f(n)
▪ The node with the lowest evaluation is expanded first
▪ The choice of f(n) determines the search strategy

❖ Implementation
 The structure to store the open nodes is a priority queue ordered by
evaluation function, lowest first.
❖ Special cases:
 uniform-cost search: f(n) = g(n), the path cost from the initial node to n.
 greedy search: f(n) = h(n), the estimated cost of reaching the goal
from node n.
 A* search: f(n) = g(n) + h(n).



Best-first search (Cont’d)
❖ Most best-first search algorithms include as a component of f
a heuristic function, h(n)
 h(n) = estimated cost of the cheapest path from the state at
node n to a goal state
 We consider h(n) to be
▪ arbitrary,
▪ nonnegative, and
▪ problem-specific,
▪ with one constraint:
– if n is a goal node, then h(n) = 0



3.5.1 Greedy best-first search
❖ Evaluation function h(n) (heuristic)
 = estimate of cost from n to the closest goal
❖ Greedy search expands the node that appears to be closest to the goal
❖ E.g., hSLD(n) = straight-line distance from n to Bucharest
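A sketch of greedy best-first search on a fragment of the Romania map, using the straight-line-distance estimates to Bucharest from the standard example; as the evaluation below discusses, it finds the Arad–Sibiu–Fagaras–Bucharest route (450 km), which is not optimal.

```python
import heapq

# Straight-line distances to Bucharest (standard Romania example)
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
         "Bucharest": 0}
# Road segments (km); a fragment of the full map
roads = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Sibiu": 99, "Bucharest": 211},
         "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
         "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101}}

def greedy_best_first(start, goal):
    """Expand the node with the smallest h(n), ignoring g(n)."""
    frontier = [(h_sld[start], start, [start], 0)]  # (h, state, path, g)
    explored = set()
    while frontier:
        _, state, path, g = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for s2, cost in roads.get(state, {}).items():
            if s2 not in explored:
                heapq.heappush(frontier,
                               (h_sld[s2], s2, path + [s2], g + cost))
    return None
```

From Arad the search greedily follows the smallest h: Sibiu (253), then Fagaras (176), then Bucharest (0), never reconsidering the cheaper route through Rimnicu Vilcea.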



Greedy search example
❖ Greedy search to solve the Arad to Bucharest problem

(a) The initial state

Nodes are labeled with their h-values



Greedy search example (Cont’d)

❖ After expanding Arad



Greedy search example (Cont’d)

❖ After expanding Sibiu



Greedy search example (Cont’d)

❖ After expanding Fagaras



Greedy search: evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 NO
▪ can get stuck in loops
▪ Minimizing h(n) can result in false starts, e.g. Iasi to Fagaras
▪ Complete in finite space with repeated-state checking.

❖ Optimality:
 Does it always find the least-cost solution?
 No
▪ the path Arad–Sibiu–Fagaras–Bucharest is sub-optimal (total path cost = 140 + 99 + 211 = 450 km)
▪ Arad→Sibiu→Rimnicu Vilcea→Pitesti→Bucharest is shorter!
– Path cost = 140 + 80 + 97 + 101 = 418 km
▪ Best-first search can be improved by taking the actual cost to a state n into account
– The new search strategy is called A*

❖ Time complexity:
 O(b^m), where m is the maximum depth of the search space
 but a good heuristic can give dramatic improvement
❖ Space complexity : O(b^m)
3.5.2 A* search
❖ Idea: avoid expanding paths that are already expensive

❖ evaluation function: f(n) = g(n) + h(n) with:


 g(n) – cost so far to reach n
 h(n) – estimated cost to goal from n
 f(n) – estimated total cost of path through n to goal

❖ A* search is both complete and optimal if h(n) satisfies certain
conditions (admissibility and consistency)
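A sketch of A* on the same Romania fragment used in the greedy example (road segments and straight-line-distance estimates restated here so the block stands alone); ordering the frontier by f(n) = g(n) + h(n) recovers the optimal 418 km route.

```python
import heapq

h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
         "Bucharest": 0}
roads = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Sibiu": 99, "Bucharest": 211},
         "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
         "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101}}

def a_star(start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    frontier = [(h_sld[start], 0, start, [start])]  # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                   # already reached more cheaply
        best_g[state] = g
        for s2, cost in roads.get(state, {}).items():
            g2 = g + cost
            heapq.heappush(frontier,
                           (g2 + h_sld[s2], g2, s2, path + [s2]))
    return None
```

Bucharest is reached first via Fagaras with f = 450, but that entry is not popped until the cheaper f = 418 entry through Rimnicu Vilcea and Pitesti has been generated and popped, so the optimal path is returned.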



A* search example
❖ A* search to solve the Arad to Bucharest problem

(a) The initial state

Nodes are labeled with their (f=g+h) values



A* search example (Cont’d)



A* search example (Cont’d)



A* search example (Cont’d)



A* search example (Cont’d)



A* search example (Cont’d)



A* search :Evaluation
❖ Completeness:
 Does it always find a solution if one exists?
 YES
▪ unless there are infinitely many nodes with f < f(G)

❖ Optimality:
 Does it always find the least-cost solution?
 YES
▪ if h(n) satisfies certain conditions ( admissibility and consistency)

❖ Time complexity:
▪ Depends on h, can be exponential

❖ Space complexity :
▪ O(b^m)
▪ keeps all generated nodes in memory



A* and GRAPH-SEARCH
❖ GRAPH-SEARCH discards new paths to a repeated state
 So may discard the optimal path
❖ Solutions:
 Remove the more expensive of the two paths
 Ensure that the optimal path to any repeated state is always the
first one followed
▪ Requires extra condition on h(n): consistency (or monotonicity)



A* search : conditions for optimality
❖ A* search (tree search) is optimal if h(n) is an admissible heuristic
 A heuristic is admissible if it never overestimates the cost to reach the goal
▪ h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal
▪ e.g. hSLD(n) never overestimates the actual road distance

❖ A second condition, called consistency, is required only for applying A*
to graph search.
 A heuristic is consistent if: h(n) ≤ c(n, a, n′) + h(n′).
 This is a form of the triangle inequality.
 If h is consistent, we have
f(n′) = g(n′) + h(n′)
= g(n) + c(n, a, n′) + h(n′)
≥ g(n) + h(n) = f(n)
i.e., f(n′) ≥ f(n), so f is nondecreasing along any path
❖ Consistent heuristics are admissible.
❖ Not all admissible heuristics are consistent.
