Chapter 3 - Searching-Part 2

The document discusses uniform-cost search (UCS) and other search algorithms like breadth-first search, depth-first search, depth-limited search, and heuristic search techniques like hill climbing. It provides examples of applying UCS and depth-limited search to graph problems. It also compares breadth-first and depth-first search, explaining their differences in memory usage and ability to find nearby goals. Finally, it discusses how hill climbing search works by greedily moving to a better state based on a heuristic evaluation function until no better states can be found or the goal is reached.

Artificial Intelligence

Dr. Basem H. Ahmed & Dr. Mohammed A. Awadallah

First semester 2022/2023


1
Chapter 3
Solving problems by searching – Part 2

2
UCS example: traveling on a graph

[Figure: weighted graph — start state A, goal state F, with nodes B, C, D, E and edge costs including 2, 3, 4, and 9]
3
UCS example: traveling on a graph
Search tree generated by UCS, level by level (path cost shown for each node):

state = A, cost = 0
→ state = B, cost = 3; state = D, cost = 3
→ state = C, cost = 5; state = F, cost = 12; state = E, cost = 7
→ state = A, cost = 7; state = F, cost = 11 (goal state!)
→ state = B, cost = 10; state = D, cost = 10

4
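The trace above can be reproduced with a short priority-queue implementation. This is a sketch, not code from the slides; the edge costs below are assumptions chosen to be consistent with the path costs in the trace (B at cost 3, C at cost 5, the goal F dequeued at cost 11).

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the lowest path cost g."""
    frontier = [(0, start, [start])]          # (path cost, node, path so far)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on dequeue, not enqueue
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

# hypothetical edge costs consistent with the example trace
graph = {'A': [('B', 3), ('D', 3)],
         'B': [('C', 2), ('F', 9)],
         'C': [('A', 2), ('F', 6)],
         'D': [('E', 4)],
         'E': [('B', 3), ('D', 3)]}
```

With these assumed costs, `ucs(graph, 'A', 'F')` returns the cheaper path through B and C at cost 11, not the first path to F that is generated (cost 12).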
UCS example 2

5
Breadth-First vs. Uniform-Cost
• Breadth-first search (BFS) is a special case of uniform-cost search
when all edge costs are positive and identical.

• Breadth-first always expands the shallowest node

• Uniform-cost considers the overall path cost


o Optimal for any (reasonable) cost function

12
Depth-First vs. Breadth-First

BFS DFS
13
Depth-First vs. Breadth-First
• Depth-first goes off down one branch until it reaches a leaf node
o Not good if the goal is on another branch
o Uses much less space than breadth-first

• Breadth-first is more careful, checking all alternatives

o Very memory-intensive
o For a large tree, breadth-first search's memory requirements may be excessive
o For a large tree, depth-first search may take an excessively long time to find
even a very nearby goal node.

14
Depth-First vs. Breadth-First

15
Depth-limited search
• The depth-limited search (DLS) method is almost identical to depth-first search
(DFS), but DLS can handle infinite state spaces because it bounds the depth of
the search tree with a predetermined limit L. Nodes at this depth limit are
treated as if they had no successors.

• Similar to depth-first, but with a limit


o i.e., nodes at depth L have no successors
o Overcomes problems with infinite paths
o Sometimes a depth limit can be estimated from the problem description
• In other cases, a good depth limit is only known once the problem is solved
o must keep track of the depth

16
DLS: Example 1
• Now use the example in DFS to see what will happen if we use DLS with L=1.

• Below is the graph we will traverse. Same as DFS, we use the stack data structure
S1 to record the node we’ve explored. Suppose the source node is node a.

• S1 is empty

17
DLS: Example 1

• At first, the only reachable node is a. So push a into S1 and mark as


visited. Current level is 0.

• S1: a

18
DLS: Example 1
• After exploring a, now there are three nodes reachable: node b, c and
d. Suppose we pick node b to explore first. Push b into S1 and mark it
as visited. Current level is 1.

• S1: b, a

19
DLS: Example 1
• Since the current level is already the maximum depth L, node b is treated
as having no successors, so nothing new is reachable. Pop b from S1.
Current level is 0.

• S1: a

20
DLS: Example 1
• Explore a again. There are two unvisited nodes c and d that are
reachable. Suppose we pick node c to explore first. Push c into S1 and
mark it as visited. Current level is 1.

• S1: c, a

21
DLS: Example 1
• Since the current level is already the maximum depth L, node c is treated
as having no successors, so nothing new is reachable. Pop c from S1.
Current level is 0.

• S1: a

22
DLS: Example 1
• Explore a again. There is only one unvisited node d reachable. Push d
into S1 and mark it as visited. Current level is 1.

• S1: d, a.

23
DLS: Example 1
• Explore d and find no new node is reachable. Pop d from S1. Current
level is 0.

• S1: a.

24
DLS: Example 1
• Explore a again. No new reachable node. Pop a from S1

• S1 is empty

25
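The trace above can be condensed into a short recursive sketch, where the language's call stack plays the role of the stack S1. The graph below is an assumption loosely based on the example: node a with successors b, c, and d, plus deeper nodes below b and d that the limit L=1 cuts off.

```python
def dls(graph, node, limit, visited=None, order=None):
    """Depth-limited DFS: nodes at the depth limit are treated as having no successors."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    if limit == 0:
        return order                          # depth cutoff: do not expand this node
    for nbr in graph.get(node, []):
        if nbr not in visited:
            dls(graph, nbr, limit - 1, visited, order)
    return order

# hypothetical graph loosely matching the example: a -> b, c, d, with nodes below b and d
graph = {'a': ['b', 'c', 'd'], 'b': ['e'], 'd': ['f']}
```

With limit L=1 the visit order is a, b, c, d — the deeper nodes e and f are never reached, just as in the trace.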
DLS: Example 2
• If L=2

26
Informed Search

27
Heuristic search
▪ A heuristic is:
▪ A function that estimates how close a state is to a goal
▪ Designed for a particular search problem
▪ Pathing?
▪ Examples: Manhattan distance, Euclidean distance for
pathing

[Figure: grid pathing example annotated with heuristic values 10, 5, and 11.2]

28
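The two pathing heuristics named above can be written directly; the sample points used below are assumptions for illustration.

```python
import math

def manhattan(p, q):
    """Manhattan distance: sum of horizontal and vertical offsets (grid moves only)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Euclidean distance: straight-line distance between the two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

For example, between (0, 0) and (5, 5) the Manhattan distance is 10, and between (0, 0) and (5, 10) the Euclidean distance is about 11.2.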
Heuristic search

Straight line distance to Bucharest

h(x)
29
Hill climbing strategy
• Simplest way of implementing heuristic search
• Expand the current state, evaluate its children, and select the best child
for further expansion. Halt the search when it reaches a state that is better
than any of its children (i.e., no child is better than the current state).

• Blind mountain climber


• go uphill along the steepest possible path until we can go no farther.
• Because it keeps no history, the algorithm cannot recover from failures.

30
Hill climbing strategy
• Major problem: a tendency to become stuck at local maxima
• If the algorithm reaches a local maximum, it fails to find a solution.
• An example of a local maximum in the 8-puzzle:
• In order to move a particular tile to its destination, other tiles that are already in goal
position have to be moved. This move may temporarily worsen the board state.

• If the evaluation function is sufficiently informative to avoid local
maxima and infinite paths, hill climbing can be used effectively.

31
Hill climbing strategy
• Procedural steps of HC

1. Define the current state as an initial state


2. Loop until the goal state is achieved or no more operators can be
applied to the current state:
1. Apply an operation to the current state to get a new state
2. Compare the new state with the goal; quit if the goal state is achieved
3. Otherwise, evaluate the new state with the heuristic function and compare it
with the current state
4. If the new state is better than the current state, make the new state the
current state.

32
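The steps above can be sketched as a short loop. The numeric landscape below is hypothetical, using h(x) = -(x - 3)² so the single maximum sits at x = 3.

```python
def hill_climb(initial, neighbors, h):
    """Greedy hill climbing: move to the best child while it improves h, else halt."""
    current = initial
    while True:
        best = max(neighbors(current), key=h)   # evaluate children, pick the best
        if h(best) <= h(current):               # no child beats the current state
            return current
        current = best

# hypothetical 1-D landscape: h peaks at x = 3
h = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

Starting from 0, the climber steps 1, 2, 3 and halts at 3. Because it keeps no history, the same loop would stop at a local maximum on a landscape that had one.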
Hill climbing strategy

33
Blocks World Problem
• Global: For each block that has the correct support (i.e.
the complete structure underneath it is exactly as it should
be), add one point for every block in the support
structure.
• For each block that has an incorrect support structure,
subtract one point for every block in the existing support
structure.

• initial state
(-1) + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 4

• goal state
1+1+1+1+1+1+1+1=8

34
Blocks World Problem

• State 1
1+ -1 + 1 + 1 + 1 + 1 + 1 + 1 = 6

35
Blocks World Problem
• state 2-(a)
(-1) + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 4

• state 2-(b)
(-1) + 1 = 0
1 + 1 + 1 + 1 + 1 + (-1) = 4

• state 2-(c)
+1
-1
1 + 1 + 1 + 1 + 1 + (-1) = 4

36
Features of Hill-climbing algorithm
• It employs a greedy approach: it moves in the direction that optimizes the
cost function. This greedy approach can leave the algorithm settled at a local
maximum or minimum.

• No Backtracking: A hill-climbing algorithm only works on the current state and


succeeding states (future). It does not look at the previous states.

• Feedback mechanism: The algorithm has a feedback mechanism that helps it


decide on the direction of movement (whether up or down the hill). The feedback
mechanism is enhanced through the generate-and-test technique.

• Incremental change: The algorithm improves the current solution by incremental


changes.

37
Greedy Search

38
Greedy Search/Greedy best-first search
• A heuristic function h(n) = estimated cost of the cheapest path from
the state at node n to a goal state.

• At each step, best-first search sorts the queue according to a heuristic


function.

• evaluation function 𝑓(𝑛)= ℎ(𝑛)

39
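A sketch of greedy best-first search: the frontier is ordered purely by the heuristic, f(n) = h(n). The small graph and heuristic values below are assumptions for illustration; note that the returned path S-B-G has total edge cost 5 even though S-A-G costs 4, showing that greedy search is not optimal.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Best-first search that sorts the queue by the heuristic h(n) alone."""
    frontier = [(h[start], start, [start])]   # (heuristic value, node, path)
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr, _cost in graph.get(node, []):    # edge costs are ignored by greedy search
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# hypothetical graph: S-A-G costs 4, S-B-G costs 5, but B looks closer to the goal
graph = {'S': [('A', 2), ('B', 2)], 'A': [('G', 2)], 'B': [('G', 3)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
```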
Greedy Search
o Expand the node that seems closest…

o Is it optimal?
o No. The resulting path to Bucharest is not the shortest!
40
Greedy Search
o Strategy: expand the node that you think is closest to a goal state
o Heuristic: estimate of the distance to the nearest goal for each state

o A common case:
o Best-first takes you straight to the (wrong) goal

o Worst-case: like a badly-guided DFS

41
Greedy Search

42
A* search

50
A* search

UCS Greedy

A*
51
A* search
o Uniform-cost orders by path cost, or backward cost g(n)
o Greedy orders by goal proximity, or forward cost h(n)

[Figure: example graph with states S, a, b, c, d, e, G, each annotated with its backward cost g and heuristic value h]

A* Search orders by the sum: f(n) = g(n) + h(n)
52
A* search

• best-first search that uses the evaluation function

𝑓(𝑛)=𝑔(𝑛)+ℎ(𝑛)
• where 𝑔(𝑛) is the path cost from the initial state to node 𝑛 and ℎ(𝑛) is
the estimated cost of the shortest path from 𝑛 to a goal state

• UCS keeps solution cost low
• Best-first helps find a solution quickly
• A* combines these approaches
53
A* search
• When should A* terminate?
➢Should we stop when we enqueue a goal?

[Figure: S connects to A and B with cost 2 each; A connects to G with cost 2, B connects to G with cost 3; h(S)=3, h(A)=2, h(B)=1, h(G)=0]

➢No: only stop when we dequeue a goal

54
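A minimal A* sketch of the termination example above (the graph reconstructs the slide's edge costs and h-values): the frontier is ordered by f(n) = g(n) + h(n), and the search stops only when a goal is dequeued. The goal is first enqueued via B at cost 5, but the dequeued answer is the cheaper path via A at cost 4.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:                          # stop only when a goal is dequeued
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier,
                               (g + step + h[nbr], g + step, nbr, path + [nbr]))
    return None

# the termination example from the slide
graph = {'S': [('A', 2), ('B', 2)], 'A': [('G', 2)], 'B': [('G', 3)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
```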
A* search - Example

55
A* search - Example

• We stop when the node with the lowest f-value is a goal state.
• Is this guaranteed to find the shortest path?
61
Properties of A*

Uniform-Cost vs. A*

[Figure: two search trees with branching factor b — uniform-cost's broad fan-out vs. A*'s narrower fan-out toward the goal]
62
UCS vs A* Contours

o Uniform-cost expands equally in all “directions”

o A* expands mainly toward the goal, but does hedge its bets to ensure optimality

[Figure: contours of expanded nodes from Start to Goal — circular for uniform-cost, skewed toward the goal for A*]

63
Comparison

Greedy Uniform Cost A*

64
A* Applications
• Video games
• Pathing / routing problems
• Resource planning problems
• Robot motion planning
• Language analysis
• Machine translation
• Speech recognition
•…

65
A*: Summary

• A* uses both backward costs and (estimates of) forward costs

• A* is optimal with consistent heuristics

• Heuristic design is key: often use relaxed problems

66
Creating Heuristics
• Most of the work in solving hard search problems optimally is in
coming up with heuristics

• Often, heuristics are solutions to relaxed problems, where new actions


are available

67
Example: 8 Puzzle

Start State Actions Goal State

o What are the states?


o How many states? What heuristics?
o What are the actions?
o How many successors from the start state?
o What should the costs be?
68
Example: 8 Puzzle
• The set of states is all different configurations of the 9 tiles (9! of them).

• The legal moves are: move the blank tile up, right, down, or left.
• Make sure a move does not take the blank off the board.
• Not all four moves are applicable at all times.

• The start state


• The goal state

• Cycles are possible in the 8-puzzle.


69
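The move rules above can be sketched as a successor function. Representing a state as a tuple of 9 entries with 0 for the blank is a choice made here for illustration, not from the slides.

```python
def successors(state):
    """Generate the 8-puzzle states reachable by sliding the blank (0) one step."""
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:                   # keep the blank on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]                     # swap blank with the neighbor tile
            result.append(tuple(s))
    return result
```

This also shows why not all four moves are always applicable: a blank in a corner has only 2 successors, while a blank in the center has 4.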
Example: 8 Puzzle

70
Example: Tic-tac-toe

• The set of states is all different configurations of Xs and Os that the
game can have. [N]
• There are 3^9 ways to arrange {blank, X, O} in nine spaces.
• Arcs (steps) are generated by the legal moves of the game, alternating
between placing an X and an O in an unused location. [A]
• The start state is an empty board. [S]
• The goal state is a board having three Xs in a row, column, or
diagonal. [GD]

71
Example: Tic-tac-toe

72
Example: Tic-tac-toe

• There are no cycles in the state space because the directed arcs of the
graph do not allow a move to be undone.

• The complexity of the problem: 9! (362,880) different paths can be
generated.

• Need heuristics to reduce the search complexity.

• e.g., my possible winning lines – opponent's possible winning lines

• Chess has 10^120 possible game paths

73
Example: Tic-tac-toe
• The total number of states that need to be considered is 9!

• Symmetry reduction can decrease the search space to 12 × 7! (the
symmetry-distinct first moves: corner, center of a side, center)

• Heuristic is …
• Move to the board in which x has the most winning lines.

[Boards: X placed in a corner, in the center, and on a side]
74
Example: Tic-tac-toe
• Most Winning Lines

[Boards: X placed in a corner, in the center, and on a side]

3 lines 4 lines 2 lines

75
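The "most winning lines" heuristic can be sketched by counting the lines that are still open to X — lines not blocked by an O. The 9-cell board representation with ' ' for empty squares is an assumption for illustration.

```python
# the eight winning lines of a 3x3 board, cells indexed 0..8 row by row
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_lines(board, player):
    """Count winning lines through `player`'s marks that the opponent has not blocked."""
    opponent = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[i] != opponent for i in line)   # line not blocked
               and any(board[i] == player for i in line))   # line touches player's mark
```

On an otherwise empty board, an X in the center lies on 4 winning lines, an X in a corner on 3, and an X on a side on 2 — matching the values on the slide.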
Example: Tic-tac-toe
• Most Winning Lines

[Figure: the three symmetry-distinct openings for X with heuristic values 3, 4, and 2, and example boards continuing play after O's replies]

76
