3. Searching Algorithms (1)

The document discusses problem and goal formulation in artificial intelligence, detailing the components of well-defined problems including initial state, actions, transition model, goal test, and path cost. It also covers the construction of goal-based agents and various search strategies such as Breadth-First Search and Depth-First Search, along with their performance metrics like completeness, optimality, and time/space complexity. Additionally, it presents examples of search problems, illustrating how these concepts apply in practical scenarios.

CSE 3207

Artificial
Intelligence

Mohiuddin Ahmed
Assistant Professor
Department of CSE
RUET
Problem and Goal Formulation
 Problem formulation is the process of deciding what actions and states to consider, given a goal.
Example: visiting Dhaka from Rajshahi

 Goal formulation is the first step in problem solving; it is based on the current situation and the agent's performance measure.
Well Defined Problems and Solution

A problem can be defined formally by five components:

• Initial state: The state the agent starts in. For example, the initial state for our agent can be described as In(Rajshahi).

• Actions: A description of the possible actions available to the agent in a particular state s. For example, from the state In(Rajshahi), the actions are {Go(Food-village), Go(Tangail), Go(Dhaka)}.
Well Defined Problems and Solution
• Transition model: A description of what each action does, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. For example, we have
RESULT(In(Rajshahi), Go(Food-village)) = In(Food-village)

• Goal test: Determines whether a given state is a goal state.

• Path cost: A function that assigns a numeric cost to each path.
Well Defined Problems and Solution
• A solution to a problem is an action sequence that leads from the initial state to a goal state.

• Solution quality is measured by the path cost function.
 An optimal solution has the lowest path cost among all solutions.
Building Goal-Based Agents

What are the key questions that need to be addressed?

• What goal does the agent need to achieve?
• What knowledge does the agent need?
• What actions does the agent need to do?
Search Example: Route Finding

Actions: go straight, turn left, turn right
Goal: shortest? fastest? most scenic?

Search Example: 8-Puzzle

Actions: move tiles (e.g., Move2Down)
Goal: reach a certain configuration
Search Example: 8-Puzzle
• Initial state: Any state can be designated as the initial state.
• Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down.
• Transition model: Given a state and action, this returns the
resulting state; for example, if we apply Left to the start state
the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal.
• Path cost: Each step costs 1, so the path cost is the
number of steps in the path.
Search Example: Water Jugs Problem

Given 4-liter and 3-liter pitchers, how do you get exactly 2 liters into the 4-liter pitcher?
Search Example: 8-Queens
Search Example: Remove 5 Sticks Problem

Remove exactly 5 of
the 17 sticks so the
resulting figure
forms exactly 3
squares
Search Terminology
• State/Problem Space: The initial state, actions, and transition model
implicitly define the state space of the problem—the set of all states reachable
from the initial state by any sequence of actions.
• Problem Space Graph: Represents the problem space. States are shown by nodes and operators (i.e., actions) are shown by edges.
• Path: A path in the state space is a sequence of states
connected by a sequence of actions.
• Step Cost: Cost of taking action a in state s to reach state s’ is denoted by c(s, a,
s’).
• Depth of Problem: Shortest sequence of operators from initial state to goal state
i.e. length of a shortest path.
• Admissibility: A property of an algorithm to always find an optimal solution.
Searching Algorithms

• Searching is the universal technique for problem solving in AI.
• There are some single-player games such as tile games, Sudoku, crossword, etc.
• Search algorithms help to search for a particular position in such games.
State Space Graph

[Figure: example state-space graph; start node S, goal node G; edge costs: S–A 5, S–B 2, S–C 4, A–D 9, A–E 4, B–G 6, C–F 2, D–H 7, E–G 6, F–G 1]

The size of a problem is usually described in terms of the number of possible states:
Tic-Tac-Toe: 3^9 states
Checkers: 10^40 states
Rubik's Cube: 10^19 states
Chess: 10^120 states
Evaluating Search Strategies
• Completeness
If a solution exists, will it be found?
• a complete algorithm will find a solution (though not necessarily all solutions)

• Optimality / Admissibility
If a solution is found, is it guaranteed to be optimal?
• an admissible algorithm will find a solution with minimum cost

• Time Complexity
How long does it take to find a solution?
• usually measured for the worst case
• measured by counting the number of nodes expanded

• Space Complexity
How much space is used by the algorithm?
• measured in terms of the maximum size of the Frontier during the search
8-Puzzle State-Space Search
Tree
(Not all nodes shown;
e.g., no “backwards”
moves)
Search Strategies

• Uninformed / Blind Search means that the strategies have no additional information about states beyond that provided in the problem definition.
✓ All they can do is generate successors and distinguish a goal state from a non-goal state.
✓ All search strategies are distinguished by the order in which nodes are expanded.

• Informed / Heuristic Search: strategies that know whether one non-goal state is "more promising" than another.
Uninformed Search Strategies

• Breadth First Search (BFS)
• Depth First Search (DFS)
• Uniform Cost Search (UCS)
• Depth Limited Search (DLS)
• Iterative Deepening Search (IDS)
Breadth-First Search (BFS)

▪ Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
▪ In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded.
▪ The Goal Test is applied to each node when it is generated rather than when it is selected for expansion.
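
The deck gives no code, but a minimal Python sketch of the strategy may help: the FIFO queue plays the role of the Frontier, the goal test runs at generation time as stated above, and the graph dict encodes the example state-space graph used in the trace that follows.

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with the goal test at generation time."""
    if start == goal:
        return [start]
    frontier = deque([[start]])              # FIFO queue of paths
    reached = {start}
    while frontier:
        path = frontier.popleft()            # shallowest path first
        for succ in graph.get(path[-1], []):
            if succ in reached:
                continue
            if succ == goal:                 # test when generated
                return path + [succ]
            reached.add(succ)
            frontier.append(path + [succ])
    return None                              # no solution exists

# Successor lists for the example graph used in the trace below
graph = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'B': ['G'],
         'C': ['F'], 'D': ['H'], 'E': ['G'], 'F': ['G']}
print(bfs(graph, 'S', 'G'))                  # ['S', 'B', 'G']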
Breadth-First Search (BFS)

generalSearch(problem, queue) — the original deck steps through the example graph (start S, goal G) one expansion per slide; the full trace is:

expnd. node   Frontier list
              {S}
S not goal    {A,B,C}
A not goal    {B,C,D,E}
B not goal    {C,D,E,G}
C not goal    {D,E,G,F}
D not goal    {E,G,F,H}
E not goal    {G,F,H,G}
G goal        {F,H,G}  no expand

# of nodes tested: 7, expanded: 6
path: S,B,G
cost: 8
Performance of BFS

1. Completeness – Yes, it always reaches the goal (if b and d are finite).

2. Optimality – Optimal if the path cost is a non-decreasing function of the depth of the node, i.e. if we can guarantee that deeper solutions are never better, e.g. when the step cost is constant (say 1).

Performance of BFS
3. Space complexity – Suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is O(b^d).
4. Time complexity –
▪ In the worst case, the Goal will be at the far right corner leaf of the search tree.
▪ Processes all nodes above the shallowest solution.
▪ Let the depth of the shallowest solution be d.
▪ Search takes time O(b^d).
Performance of Breadth-First Search (BFS)

• A complete search tree has a total # of nodes = 1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
• d: the tree's depth
• b: the branching factor at each non-leaf node
• For example: d = 12, b = 10
1 + 10 + 100 + ... + 10^12 = (10^13 - 1)/9 = O(10^12)
• If BFS expands 1,000 nodes/sec and each node uses 100 bytes of storage, then BFS will take 35 years to run in the worst case, and it will use 111 terabytes of memory!
Depth-First Search
Expand the deepest node first:
1. Select a direction, go deep to the end
2. Slightly change the end
3. Slightly change the end some more…
Use a Stack to order nodes on the Frontier
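
As with BFS, a minimal sketch (not from the slides) makes the stack-based Frontier concrete; it assumes a finite graph, since plain DFS has no depth bound:

def dfs(graph, start, goal):
    """Depth-first search using an explicit stack as the Frontier."""
    frontier = [[start]]                     # stack of paths (LIFO)
    while frontier:
        path = frontier.pop()                # deepest path first
        node = path[-1]
        if node == goal:
            return path
        # push successors in reverse so the first child is expanded first
        for succ in reversed(graph.get(node, [])):
            if succ not in path:             # avoid cycles on the current path
                frontier.append(path + [succ])
    return None

# With the successor-list graph from the BFS sketch, dfs(graph, 'S', 'G')
# returns ['S', 'A', 'E', 'G'], matching the trace below.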
Depth-First Search (DFS)

generalSearch(problem, stack) — traced on the same example graph (start S, goal G):

expnd. node   Frontier
              {S}
S not goal    {A,B,C}
A not goal    {D,E,B,C}
D not goal    {H,E,B,C}
H not goal    {E,B,C}
E not goal    {G,B,C}
G goal        {B,C}  no expand

# of nodes tested: 6, expanded: 5
path: S,A,E,G
cost: 15
Depth-First Search (DFS)

• May not terminate without a depth bound, i.e., cutting off search below a fixed depth, D
• Not complete
• Not optimal / admissible
• Can find long solutions quickly if lucky
Depth-First Search (DFS)
• Space Complexity: A depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored. For a state space with branching factor b and depth d, depth-first search requires storage of only O(bd) nodes.
• Time complexity: O(b^d), exponential
• Performs "chronological backtracking"
• i.e., when search hits a dead end, it backs up one level at a time
• problematic if the mistake occurs because of a bad action choice near the top of the search tree
Depth-Limited Search (DLS)
• We perform DFS to a limited depth; this is called Depth Limited Search.
• Depth is limited to a certain value, e.g. l = 3.
• The depth limit solves the infinite-path problem.
• Depth limits can be based on knowledge of the problem.

• Depth-first search can be viewed as a special case of depth-limited search with l = ∞.
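
A sketch of the idea (assumed, not from the slides): recursive DFS that returns a distinct 'cutoff' marker, so that hitting the depth limit can be told apart from genuine failure:

def dls(graph, node, goal, limit, path=None):
    """Depth-limited DFS; returns a path, None, or 'cutoff'."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'                      # depth bound reached
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None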
Performance of Depth-Limited Search (DLS)
• Optimality: Not optimal, even when l > d, because DFS itself is not optimal.
• Completeness: If l < d, DLS may not find the goal. Hence, incomplete.
• Time Complexity: O(b^l)
• Space Complexity: O(bl)
Uniform-Cost Search (UCS)

• Use a "Priority Queue" to order nodes on the Frontier list, sorted by path cost
• Let g(n) = cost of the path from the start node s to the current node n
• Sort nodes by increasing value of g
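
A minimal sketch using Python's heapq as the priority queue; edge costs are stored as (successor, cost) pairs, mirroring the example graph:

import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: expand the Frontier node with the smallest g."""
    frontier = [(0, start, [start])]         # (g, node, path) min-heap
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                     # goal test at expansion time
            return path, g
        for succ, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

# Weighted version of the example graph
graph = {'S': [('A', 5), ('B', 2), ('C', 4)], 'A': [('D', 9), ('E', 4)],
         'B': [('G', 6)], 'C': [('F', 2)], 'D': [('H', 7)],
         'E': [('G', 6)], 'F': [('G', 1)]}
print(ucs(graph, 'S', 'G'))                  # (['S', 'C', 'F', 'G'], 7)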
Uniform-Cost Search (UCS)

generalSearch(problem, priorityQueue) — traced on the same example graph (start S, goal G), with the Frontier sorted by path cost g:

expnd. node   Frontier list
              {S:0}
S not goal    {B:2,C:4,A:5}
B not goal    {C:4,A:5,G:2+6}
C not goal    {A:5,F:4+2,G:8}
A not goal    {F:6,G:8,E:5+4,D:5+9}
F not goal    {G:4+2+1,G:8,E:9,D:14}
G goal        {G:8,E:9,D:14}  no expand

# of nodes tested: 6, expanded: 5
path: S,C,F,G
cost: 7
Performance of UCS

• Optimality: Based on the graph separation property and non-negative step costs, UCS is optimal.

• Completeness: Uniform-cost search does not care about the number of steps a path has, but only about their total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions. Completeness is guaranteed provided the cost of every step exceeds some small positive constant ϵ.
Uniform-Cost Search (UCS)
• Complexity: UCS is guided by path costs rather than depths, so its complexity is not easily characterized in terms of b and d. Instead, let C* be the cost of the optimal solution, and assume that every action costs at least ϵ. Then the algorithm's worst-case time and space complexity is
O(b^(1 + ⌊C*/ϵ⌋))
• When all step costs are equal, O(b^(1 + ⌊C*/ϵ⌋)) is just O(b^(d+1)).
Iterative-Deepening Search (IDS)

• Requires a modification to the DFS search algorithm:
• do DFS to depth 1 and treat all children of the start node as leaves
• if no solution is found, do DFS to depth 2
• repeat, increasing the "depth bound" until a solution is found

• The start node is at depth 0.
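
A self-contained sketch of this loop (assumed, not from the slides): an inner depth-limited DFS is re-run with increasing bounds until it stops returning 'cutoff':

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: depth-limited DFS with bounds 0, 1, 2, ..."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return 'cutoff'                  # hit the current depth bound
        cutoff = False
        for succ in graph.get(node, []):
            res = dls(succ, limit - 1, path + [succ])
            if res == 'cutoff':
                cutoff = True
            elif res is not None:
                return res
        return 'cutoff' if cutoff else None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result != 'cutoff':
            return result                    # a path, or None (proven failure)
    return None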
Iterative-Deepening Search (IDS)

deepeningSearch(problem) — traced on the same example graph (start S, goal G):

Depth bound 1:
expnd. node   Frontier
              {S}
S not goal    {A,B,C}
A not goal    {B,C}  no expand
B not goal    {C}  no expand
C not goal    { }  no expand — FAIL

Depth bound 2:
expnd. node   Frontier
S no test     {A,B,C}
A no test     {D,E,B,C}
D not goal    {E,B,C}  no expand
E not goal    {B,C}  no expand
B no test     {G,C}
G goal        {C}  no expand

# of nodes tested: 7 (plus 3 revisited without re-testing), expanded: 4
path: S,B,G
cost: 8
Iterative-Deepening Search (IDS)

• Has the advantages of BFS
• completeness
• optimality as stated for BFS

• Has the advantages of DFS
• limited space
• in practice, even with redundant effort it still finds longer paths more quickly than BFS
Iterative-Deepening Search (IDS)

• Space complexity: O(bd) (i.e., linear like DFS)

• Time complexity is a little worse than BFS or DFS
• because nodes near the top of the search tree are generated multiple times (redundant effort)

• Worst-case time complexity: O(b^d), exponential
• because most nodes are near the bottom of the tree
Example

[Figure: second example graph; start S, goal G; edge costs: S–A 1, S–B 5, S–C 8, A–D 3, A–E 7, A–G 9, B–G 4, C–G 5]

How are nodes expanded by
• Depth First Search
• Breadth First Search
• Uniform Cost Search
• Iterative Deepening

Are the solutions the same?

Nodes Expanded by:
• Depth-First Search: S A D E G
Solution found: S A G

• Breadth-First Search: S A B C D E G
Solution found: S A G

• Uniform-Cost Search: S A D B C E G
Solution found: S B G

• Iterative-Deepening Search: S A B C S A D E G
Solution found: S A G
Informed/Heuristic Search

• Informed searches use domain knowledge to guide selection of the best path to continue searching

• Heuristics are used, which are informed guesses

• Heuristic means "serving to aid discovery"
Objective & Motivation

▪ To produce a solution in a reasonable time frame that is good


enough for solving the problem.

▪ This solution may not be the best but it is valuable


because finding it does not require a prohibitively long time.
Informed/Heuristic Search
• Heuristics basically predict how far the goal state may be or how
much it will cost to get to the goal state from a particular node.
• Is a way to inform the search about the direction to a goal.
• It provides an informed way to guess which neighbor of a node will
lead to a goal.
• Estimates the cost of an optimal path between a pair of states in a
single-agent path finding problem.
Informed/Heuristic Search

• Define a heuristic function, h(n)


• uses domain-specific information in some way
• is computable from the current state description
• it estimates
• the "goodness" of node n
• how close node n is to a goal
• the cost of minimal cost path from node n to a goal state
Informed Search
• h(n) ≥ 0 for all nodes n
• h(n) close to 0 means we think n is close to a goal state
• h(n) very big means we think n is far from a goal state
• All domain knowledge used in the search is encoded in the heuristic
function, h
• An example of a “weak method” for AI because of the limited way
that domain-specific information is used to solve a problem
Heuristic Function
• In grid-based or map-like problems, common heuristic functions include the Manhattan distance and the Euclidean distance. For coordinates (x1, y1) of the current node and (x2, y2) of the goal node, these distances are calculated as:

• Manhattan distance: |x1 − x2| + |y1 − y2|

• Euclidean distance: sqrt((x1 − x2)^2 + (y1 − y2)^2)

Heuristic Function

[Figure: grid illustrations contrasting Manhattan distance and Euclidean distance]
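
A direct transcription of the two formulas into Python, with coordinates as (x, y) tuples:

import math

def manhattan(p, q):
    """Manhattan distance: |x1 - x2| + |y1 - y2| (grid moves, no diagonals)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Euclidean distance: straight-line length between the two points."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print(manhattan((0, 0), (3, 4)))             # 7
print(euclidean((0, 0), (3, 4)))             # 5.0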


Best-First Search

• Sort nodes in the Frontier list by increasing values of an evaluation function, f(n), that incorporates domain-specific information

• This is a generic way of referring to the class of informed search methods
Greedy Best-First Search

• Use as an evaluation function f(n) = h(n), sorting nodes in the Frontier by increasing values of f

• Selects the node to expand that is believed to be closest (i.e., has the smallest f value) to a goal node
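
A sketch of greedy best-first search under the same assumptions as the UCS sketch ((successor, cost) edge lists plus a heuristic table h); the only change is that the priority is h(n) alone, so edge costs are ignored:

import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the Frontier node with the smallest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]  # priority = h only
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for succ, _cost in graph.get(node, []):   # edge cost ignored
            if succ not in visited:
                visited.add(succ)
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# Second example graph and heuristic values from the trace below
graph = {'S': [('A', 1), ('B', 5), ('C', 8)],
         'A': [('D', 3), ('E', 7), ('G', 9)],
         'B': [('G', 4)], 'C': [('G', 5)]}
h = {'S': 8, 'A': 8, 'B': 4, 'C': 3, 'D': float('inf'),
     'E': float('inf'), 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))      # ['S', 'C', 'G'] (cost 13)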
Greedy Best-First Search

f(n) = h(n) — traced on the second example graph, with heuristic values h(S)=8, h(A)=8, h(B)=4, h(C)=3, h(D)=∞, h(E)=∞, h(G)=0:

expnd. node   Frontier
              {S:8}
S not goal    {C:3,B:4,A:8}
C not goal    {G:0,B:4,A:8}
G goal        {B:4,A:8}  not expanded

# of nodes tested: 3, expanded: 2
• Fast but not optimal
path: S,C,G
cost: 13
Greedy Best-First Search

• Not complete
• Not optimal/admissible
Greedy search finds the left goal (solution cost of 7); the optimal solution is the path to the right goal (solution cost of 5).

[Figure: counterexample graph with two goal nodes; edge costs S–A 2, S–B 2, A–C 1, B–D 2, C–E 1, E–G(left) 3, D–G(right) 1; heuristic values h(S)=5, h(A)=3, h(B)=4, h(C)=3, h(D)=1, h(E)=2 — greedy follows the smaller h values down the costlier left path]
A* Search
▪ A* minimizes the total path cost. Under the right conditions, A* provides the cheapest-cost solution along the optimal path.

▪ The evaluation function f is given by:

f(n) = g(n) + h(n)

Here, g(n) → the cost to get from the start state to state n.
h(n) → the estimated cost to get from state n to the goal state. This is the heuristic: a kind of smart guess.
▪ Loops are avoided. The same state is not expanded twice.
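
A compact sketch under the same assumptions as the greedy sketch; the priority becomes f(n) = g(n) + h(n), and a best_g table keeps only the cheaper of two paths to the same state (the step where G:9 replaces G:10 in the trace below):

import heapq

def astar(graph, h, start, goal):
    """A* search: order the Frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):  # cheaper route found
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

# With the second example graph and h table from the greedy sketch,
# astar(graph, h, 'S', 'G') returns (['S', 'B', 'G'], 9).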
A* Search

f(n) = g(n) + h(n) — traced on the second example graph, with the same heuristic values:

expnd. node   Frontier
              {S:0+8}
S not goal    {A:1+8,B:5+4,C:8+3}
A not goal    {B:9,G:1+9+0,C:11,D:1+3+∞,E:1+7+∞}
B not goal    {G:5+4+0,C:11,D:∞,E:∞}  (the cheaper G:9 replaces G:10)
G goal        {C:11,D:∞,E:∞}  not expanded

# of nodes tested: 4, expanded: 3
• Pretty fast and optimal
path: S,B,G
cost: 9
A* Search

Limitations:

Although often regarded as the best pathfinding algorithm, the A* Search Algorithm doesn't always produce the shortest path, as it relies heavily on heuristics/approximations to calculate h.
A* Search
Optimality:
• The tree-search version of A* is optimal if h(n) is admissible.
• Admissible heuristic is one that never overestimates the cost to reach
the goal.
• Because g(n) is the actual cost to reach n along the current path,
and f(n) = g(n) + h(n), we have as an immediate consequence that
f(n) never overestimates the true cost of a solution along the
current path through n.
A* Search
Completeness:

• The number of nodes is finite.

• Completeness is guaranteed provided the cost of every step exceeds some small positive constant ϵ.
Hill Climbing Search
▪ One of the simplest procedures for implementing heuristic search.

▪ The hill climbing search algorithm is simply a loop that continuously moves in the direction of increasing value, i.e. uphill. It stops when it reaches a 'peak' where no neighbor has a higher value.

▪ Hill climbing uses knowledge about the local terrain, providing a very useful and effective heuristic for eliminating much of the unproductive search space.
Hill Climbing Search

[Fig. 01: State-space Landscape]
Hill Climbing Search
• Local maximum: A local maximum is a solution that surpasses other
neighboring solutions or states but is not the best possible solution.
• Global maximum: This is the best possible solution achieved by the
algorithm.
• Current state: This is the existing or present state.
• Flat local maximum/Plateau: This is a flat region where the
neighboring solutions attain the same value.
• Shoulder: This is a plateau whose edge is stretching upwards.
Hill Climbing Search
• A State-space Landscape can be used to describe Hill Climbing
Search.
• A Landscape has
✓ Location (defined by the state such as Current State)
✓ Elevation (defined by the value of the heuristic cost function or
objective function)

• If elevation corresponds to cost, then the aim is to find the lowest


valley – a global minimum;

• If elevation corresponds to an objective function, then the aim is to


find the highest peak – a global maximum.
Hill Climbing Algorithm
Step-1: Evaluate the starting state. If it is a goal state, then stop and return success.
Step-2: Else, continue with the starting state as the current state.
Step-3: Continue Step-4 until a solution is found, i.e. until there are no new operators left to be applied to the current state.
Step-4:
a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b. Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If the new state is better than the current state, then make it the current state and proceed further.
iii. If it is not better, continue in the loop until a solution is found.
Step-5: Exit.
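
These steps translate into a short sketch (assumed names: neighbors(s) generates the successor states, value(s) is the objective to maximize; this is the steepest-ascent variant, which examines all neighbors at once rather than one at a time):

def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: stop when no neighbor is better."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current       # a peak — possibly only a local maximum
        current = best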
A Nutrition Problem
◆ A nutritionist advises an individual who is suffering from iron and vitamin B
deficiency to take at least 2400 milligrams (mg) of iron, 2100 mg of vitamin B1, and
1500 mg of vitamin B2 over a period of time.
◆ Two vitamin pills are suitable, brand-A and brand-B.
◆ Each brand-A pill costs 6 cents and contains 40 mg of iron,
10 mg of vitamin B1, and 5 mg of vitamin B2.
◆ Each brand-B pill costs 8 cents and contains 10 mg of iron and 15 mg each of
vitamins B1 and B2.
◆ What combination of pills should the individual purchase in order to meet the
minimum iron and vitamin requirements at the lowest cost?
A Nutrition Problem
Solution
◆ In short, we want to minimize the objective function

C = 6x + 8y

subject to the system of inequalities

40x + 10y ≥ 2400
10x + 15y ≥ 2100
5x + 15y ≥ 1500
x ≥ 0
y ≥ 0
A Nutrition Problem

[Figure: the feasible region S, bounded by the lines 40x + 10y = 2400, 10x + 15y = 2100, and 5x + 15y = 1500, with corner points A(0, 240), B(30, 120), C(120, 60), and D(300, 0)]
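
The deck stops at the figure, but the remaining step is routine: evaluate C = 6x + 8y at each corner point of the feasible region.
C(A) = 6(0) + 8(240) = 1920
C(B) = 6(30) + 8(120) = 1140
C(C) = 6(120) + 8(60) = 1200
C(D) = 6(300) + 8(0) = 1800
The minimum occurs at B(30, 120): 30 brand-A pills and 120 brand-B pills, at a cost of 1140 cents ($11.40).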
Advantages of Hill Climbing
▪ The hill climbing technique is useful in job shop scheduling, automatic programming, circuit designing, vehicle routing, and portfolio management.

▪ It is also helpful for solving pure optimization problems (e.g. the Nutrition Problem) where the objective is to find the best state according to the objective function.

▪ It requires far fewer conditions than other search techniques.
Disadvantages of Hill Climbing
Local Maxima:
Disadvantages of Hill Climbing
Local Maxima: At this point, the neighboring states have lower values than the current state. The greedy approach means the algorithm will not move to a worse state. This leads to the hill-climbing process terminating, even though this is not the best possible solution.
Solution: This problem can be solved using momentum. This technique adds a certain proportion (m) of the initial weight to the current one, where m is a value between 0 and 1. Momentum enables the hill-climbing algorithm to take large steps that carry it past the local maximum.
Disadvantages of Hill Climbing…
Ridges:
Disadvantages of Hill Climbing…
Ridges: The hill-climbing algorithm may terminate itself when it
reaches a ridge. This is because the peak of the ridge is followed by
downward movement rather than upward movement.

Solution: This impediment can be solved by going in different


directions at once.
Disadvantages of Hill Climbing…
Plateau:
Disadvantages of Hill Climbing…
•Plateau: In this region, the values attained by the neighboring
states are the same. This makes it difficult for the algorithm to choose
the best direction.

•Solution: This challenge can be overcome by taking a huge jump that


will lead you to a non-plateau space.
Simulated Annealing
(Stochastic Hill-Climbing)

1. Pick initial state, s


2. Randomly pick state t from neighbors of s
3. if f(t) better than f(s)
then s = t
else with small probability s = t
4. Goto Step 2 until bored
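
A runnable sketch of this loop (assumptions: we are maximizing f, neighbors(s) returns a list of successor states, and "until bored" is realized as a geometric cooling schedule; the acceptance rule anticipates the Metropolis criterion described on the following slides):

import math
import random

def simulated_annealing(start, neighbors, f, temp=1.0, cooling=0.995,
                        temp_min=1e-3):
    """Stochastic hill climbing (maximizing f) with occasional bad moves."""
    s = start
    while temp > temp_min:               # loop "until bored"
        nbrs = neighbors(s)
        if not nbrs:
            break
        t = random.choice(nbrs)          # randomly pick a neighbor t of s
        delta = f(t) - f(s)              # the performance change ΔE
        # better state: always accept; worse: accept with prob e^(ΔE/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            s = t
        temp *= cooling                  # temperature decreases over time
    return s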
Simulated Annealing
Origin:
The annealing process of heated solids –
Alloys manage to find a near global minimum
energy state when heated and then slowly cooled
Intuition:
By allowing occasional ascent in the
search process, we might be able to
escape the traps of local minima

Introduced by Nicholas Metropolis


in 1953
Consequences of Occasional Bad Moves

[Figure: two annotated search trajectories over a state-space landscape]

• Desired effect (when searching for a global min): helps escape local optima.
• Idea 1: use a small, fixed probability threshold, say, p = 0.1.
• Adverse effect: the search might pass the global optimum after reaching it.
Escaping Local Optima

• Modified HC can escape from a local optimum


but
– the chance of making a bad move is the same at the
beginning of the search as at the end
– magnitude of improvement, or lack of, is ignored
• Fix by replacing fixed probability, p, that a bad
move is accepted, with a probability that
decreases as the search proceeds
• Now as the search progresses, the chance of
taking a bad move goes down
Control of Annealing Process
Acceptance decision for a search step (Metropolis Criterion) in Hill-Climbing:

• Let the performance change in the search be:
ΔE = f(newNode) – f(currentNode)

• Always accept an ascending step (i.e., a better state): ΔE ≥ 0

• Accept a descending step only if it passes a test
Escaping Local Maxima
• want to find the global maximum solution
• search for a new node with a greater value of f
• replace the fixed threshold with a value, p, that decreases as the search proceeds (T is a "temperature" parameter that will change over time):

p = e^(ΔE / T)

Idea: p should decrease over time
Escaping Local Maxima

Let ΔE = f(newNode) – f(currentNode) < 0

p = e^(ΔE / T)   (Boltzmann's equation)

Idea: p decreases as the neighbor gets worse.
• As ΔE → -∞, p → 0: as the badness of the move increases, the probability of taking it decreases exponentially.
• As T → 0, p → 0: as the temperature decreases, the probability of taking a bad move decreases.
Adversary search
• Adversary Search is a type of search algorithm used in decision-making
problems where two or more opponents (adversaries) are competing against
each other, often with conflicting objectives.

• It is widely used in game theory and artificial intelligence to model and solve
competitive scenarios, such as chess, checkers, and other two-player games.
Games Vs Search
Adversary Search
• Problem formulation
– Initial state: initial board position + whose move it is
– Operators: legal moves a player can make
– Goal (terminal test): game over?
– Utility (payoff) function: measures the outcome of the game and its
desirability
• Search objective:
– Find the sequence of player’s decisions (moves) maximizing its utility
(payoff)
– Consider the opponent’s moves and their utility
• Example:
 Tic-Tac-Toe, Chess, Checkers etc.
Tic-Tac-Toe

• Player (X) moves first, then the Opponent (O).
• A two-player game where the Mini-Max algorithm is applied.
• In this game, in order to win, we must fill a row, a column, or a diagonal.

Objectives:
• Player: maximize outcome
• Opponent: minimize outcome
Tic-Tac-Toe
Mini-Max Algorithm
• The Minimax algorithm is the foundation of adversary search.

• The Mini-Max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.

• The Mini-Max algorithm uses recursion to search through the game-tree.

• The Mini-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-Tac-Toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.
Mini-Max Algorithm
• In this algorithm two players play the game; one is called MAX and the other is called MIN.
• Each player plays so that the opponent gets the minimum benefit while they themselves get the maximum benefit.
• Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.
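
A sketch of this recursion, assuming the game tree is given explicitly as nested lists with numeric utilities at the leaves; the leaf values reproduce the four-layer example worked through on the next slides:

def minimax(node, is_max):
    """Return the minimax value of a game tree given as nested lists."""
    if not isinstance(node, list):           # terminal node: utility value
        return node
    if is_max:                               # MAX picks the largest child value
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)   # MIN picks smallest

# Example tree from the next slides: A -> B(D, E), C(F, G)
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))                   # A = max(4, -3) = 4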
Mini-Max Algorithm
• Step-1: In the first step, the algorithm generates the entire game-tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let's take A as the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value = -infinity, and the minimizer takes the next turn, with worst-case initial value = +infinity.
Mini-Max Algorithm

• Step 2: Now, first we find the utility values for the Maximizer; its initial value is -∞.
• For node D: max(-1, -∞) => max(-1, 4) = 4
• For node E: max(2, -∞) => max(2, 6) = 6
• For node F: max(-3, -∞) => max(-3, -5) = -3
• For node G: max(0, -∞) = max(0, 7) = 7
Mini-Max Algorithm

• Step 3: In the next step, it's the minimizer's turn, so it will compare all node values with +∞ and find the 3rd-layer node values.
• For node B = min(4, 6) = 4
• For node C = min(-3, 7) = -3
Mini-Max Algorithm

• Step 4: Now it's the Maximizer's turn, and it will again choose the maximum of all node values and find the maximum value for the root node. In this game tree, there are only 4 layers, hence we reach the root node immediately, but in real games, there will be more than 4 layers.
• For node A: max(4, -3) = 4
Tic-Tac-Toe
Properties of Mini-Max Algorithm

• Complete – The Mini-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
• Optimal – The Mini-Max algorithm is optimal if both opponents are playing optimally.
• Time complexity – As it performs DFS for the game-tree, the time complexity of the Mini-Max algorithm is O(b^d), where b is the branching factor of the game-tree and d is the maximum depth of the tree.
• Space Complexity – The space complexity of the Mini-Max algorithm is also similar to DFS, which is O(bd).
Limitations of Mini-Max Algorithm

• The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide among.

• This limitation of the minimax algorithm can be mitigated by alpha-beta pruning.
Alpha-Beta pruning
• Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
• As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can roughly cut it in half. There is a technique by which we can compute the correct minimax decision without checking each node of the game tree; this technique is called pruning. It involves two threshold parameters, Alpha and Beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
Alpha-Beta pruning

• The two-parameter can be defined as:


 Alpha: The best (highest-value) choice we have found so far at any
point along the path of Maximizer. The initial value of alpha is -∞.
 Beta: The best (lowest-value) choice we have found so far at any point
along the path of Minimizer. The initial value of beta is +∞.

• The main condition required for alpha-beta pruning (i.e., for a branch to be pruned) is:

α ≥ β
Alpha-Beta pruning

• Key points about alpha-beta pruning:


 The Max player will only update the value of alpha.
 The Min player will only update the value of beta.
 While backtracking the tree, the node values will be passed to upper
nodes instead of values of alpha and beta.
 We will only pass the alpha, beta values to the child nodes.
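
Extending the minimax sketch above with the α and β parameters gives the following (same nested-list tree representation; the leaf values are chosen to reproduce the walk-through on the next slides, and the values of pruned leaves are irrelevant by construction):

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if not isinstance(node, list):           # terminal node: utility value
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)        # the Max player updates alpha
            if alpha >= beta:                # prune the remaining children
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)              # the Min player updates beta
        if alpha >= beta:
            break
    return value

# Tree matching the walk-through; 9 and the subtree [7, 5] stand in for
# the pruned parts, whose values never affect the result.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, True))                 # 3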
Alpha-Beta pruning

• Step 1: In the first step, the Max player makes the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Alpha-Beta pruning
• Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.
• Step 3: Now the algorithm backtracks to node B, where the value of β will change, as it is Min's turn. Now β = +∞ is compared with the available subsequent node values, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.
• In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed along as well.
Alpha-Beta pruning

• Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where α ≥ β, so the right successor of E is pruned, and the algorithm will not traverse it. The value at node E will be 5.
Alpha-Beta pruning
• Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha changes; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
• Step 6: At node F, again the value of α is compared with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Alpha-Beta pruning

• Step 7: Node F returns the node value 1 to node C; at C, α = 3 and β = +∞. Here the value of beta changes: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α ≥ β, so the next child of C, which is G, is pruned, and the algorithm will not compute the entire sub-tree G.
Alpha-Beta pruning

• Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows the nodes that were computed and the nodes that were never computed. Hence, the optimal value for the maximizer is 3 for this example.
Move Ordering in Alpha-Beta pruning

• It can be of two types:
 Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. It also consumes more time because of the alpha-beta factors; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^d).
 Ideal ordering: The ideal ordering for alpha-beta pruning occurs when lots of pruning happens in the tree and the best moves occur on the left side of the tree. Since DFS is applied, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(d/2)).
Rules to Find Good Ordering

• Pick the best move from the shallowest node.
• Order the nodes in the tree such that the best nodes are checked first.
• Use domain knowledge while finding the best move. Ex: for Chess, try this order: captures first, then threats, then forward moves, then backward moves.
• We can bookkeep the states, as there is a possibility that states may repeat.
For Practice
Thank You