AI Unit2 PPT by BSCOER
Unit 2 Problem Solving
Search Strategies at a glance
Uninformed Search Strategies
Search Space Graph: example
Problem Solving by Graph Searching
Depth first search (DFS)
DFS as an instantiation of the Generic Search Algorithm

Input: a graph
       a set of start nodes
       a Boolean procedure goal(n) that tests whether n is a goal node

frontier := [<s> : s is a start node];
while frontier is not empty:
    select and remove a path <n0, …, nk> from frontier;
    if goal(nk):
        return <n0, …, nk>;
    else:
        for every neighbor n of nk:
            add <n0, …, nk, n> to frontier;
end

In DFS, the frontier is a last-in-first-out stack.
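The generic search algorithm above can be sketched in Python; making the frontier a LIFO stack of paths yields DFS. The graph, start node, and goal test below are illustrative assumptions, not from the slides.

```python
def dfs(graph, start_nodes, is_goal):
    """Generic search with a LIFO frontier of paths (depth-first search).

    graph: dict mapping each node to a list of neighbors.
    start_nodes: iterable of start nodes.
    is_goal: the Boolean procedure goal(n) from the pseudocode.
    """
    frontier = [[s] for s in start_nodes]     # frontier := [<s> : s is a start node]
    while frontier:                           # while frontier is not empty
        path = frontier.pop()                 # LIFO: take the most recently added path
        if is_goal(path[-1]):                 # if goal(nk), return <n0, ..., nk>
            return path
        for n in graph.get(path[-1], []):     # for every neighbor n of nk
            frontier.append(path + [n])       # add <n0, ..., nk, n> to frontier
    return None                               # frontier exhausted: no solution

# Tiny illustrative graph (an assumption, not from the slides):
g = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(dfs(g, ["S"], lambda n: n == "G"))      # → ['S', 'B', 'G']
```

Note that DFS dives into the most recently discovered branch first: here it tries B (added last) before A.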
Analysis of DFS
Def. : A search algorithm is complete if
whenever there is at least one solution, the
algorithm is guaranteed to find it within a finite
amount of time.
Analysis of DFS

• Is DFS complete? No – it can get trapped following a cycle or an
infinite path, so a solution elsewhere in the graph is never found.
Analysis of DFS

Def.: The time complexity of a search algorithm is
the worst-case amount of time it will take to run,
expressed in terms of
- maximum path length m
- maximum forward branching factor b.
• For DFS this is O(b^m): in the worst case, every path of length up to m is explored.
Analysis of DFS
Def.: The space complexity of a search algorithm is the
worst-case amount of memory that the algorithm will use
(i.e., the maximal number of nodes on the frontier),
expressed in terms of
- maximum path length m
- maximum forward branching factor b.
• What is DFS’s space complexity, in terms of m and b?
It is O(bm): the frontier holds at most b − 1 untried alternatives
at each of the m levels of the current path.
Breadth-first search (BFS)
BFS as an instantiation of the Generic Search Algorithm

Input: a graph
       a set of start nodes
       a Boolean procedure goal(n) that tests whether n is a goal node

frontier := [<s> : s is a start node];
while frontier is not empty:
    select and remove a path <n0, …, nk> from frontier;
    if goal(nk):
        return <n0, …, nk>;
    else:
        for every neighbor n of nk:
            add <n0, …, nk, n> to frontier;
end

In BFS, the frontier is a first-in-first-out queue.
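Only the frontier discipline changes: popping paths from the front (FIFO) instead of the back turns the same loop into BFS. A sketch, with an assumed toy graph; `collections.deque` gives O(1) pops from the left.

```python
from collections import deque

def bfs(graph, start_nodes, is_goal):
    """Generic search with a FIFO frontier of paths (breadth-first search)."""
    frontier = deque([s] for s in start_nodes)  # frontier := [<s> : s is a start node]
    while frontier:
        path = frontier.popleft()               # FIFO: take the oldest path first
        if is_goal(path[-1]):
            return path
        for n in graph.get(path[-1], []):
            frontier.append(path + [n])
    return None

# Same illustrative graph as before (an assumption, not from the slides):
g = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(bfs(g, ["S"], lambda n: n == "G"))        # → ['S', 'B', 'G']
```

Because BFS examines all length-1 paths before any length-2 path, the first solution it returns uses the fewest arcs.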
Analysis of BFS

Def.: A search algorithm is complete if
whenever there is at least one solution, the
algorithm is guaranteed to find it within a finite
amount of time.
• Proof sketch? BFS examines paths in order of increasing length, so if a
solution exists at some finite depth d and the branching factor b is finite,
BFS reaches it after expanding at most O(b^d) paths.
Analysis of BFS

Def.: A search algorithm is optimal if,
when it finds a solution, it is the best one.
• Proof sketch? BFS finds paths in order of increasing number of arcs, so the
first solution found uses the fewest arcs; BFS is therefore optimal whenever
all arc costs are equal.
Analysis of BFS

Def.: The time complexity of a search algorithm is
the worst-case amount of time it will take to run,
expressed in terms of
- maximum path length m
- maximum forward branching factor b.
• For BFS this is O(b^m) in the worst case.
Analysis of BFS
Def.: The space complexity of a search algorithm is the
worst-case amount of memory that the algorithm will use
(i.e., the maximal number of nodes on the frontier),
expressed in terms of
- maximum path length m
- maximum forward branching factor b.
• What is BFS’s space complexity, in terms of m and b?
It is O(b^m): the frontier can hold nearly the entire deepest level.
Real Example: Solving Sudoku

• E.g. start state on the left
• Operators: fill in an allowed number
• Solution: all numbers filled in, with constraints satisfied
• Which is better suited here: BFS or DFS?
Real Example: Eight Puzzle. DFS or BFS?
Learning Goals
• Apply basic properties of search algorithms:
- completeness
- optimality
- time and space complexity of search algorithms
Comparison of Uninformed Search Algorithms
Informed Search Strategies
Outline
• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms
Best-first search

• Idea: use an evaluation function f(n) for each node
  – f(n) provides an estimate of the total cost;
    expand the node n with the smallest f(n).
• Implementation:
  order the nodes in the fringe in increasing order of cost.
• Special cases:
  – greedy best-first search
  – A* search
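Best-first search can be sketched with a priority queue keyed on f(n). The sketch below parameterizes f, so passing f = h gives greedy best-first search; the road distances and straight-line-distance values follow the standard Romania example (a fragment only, treated here as assumed input data).

```python
import heapq

def best_first(graph, start, is_goal, f):
    """Best-first search: always expand the frontier path with smallest f.

    graph: dict node -> list of (neighbor, step_cost).
    f: evaluation function taking (path, path_cost_so_far).
    """
    frontier = [(f([start], 0), [start], 0)]     # (priority, path, cost so far)
    while frontier:
        _, path, cost = heapq.heappop(frontier)  # path with smallest f value
        if is_goal(path[-1]):
            return path, cost
        for n, step in graph.get(path[-1], []):
            new_path, new_cost = path + [n], cost + step
            heapq.heappush(frontier, (f(new_path, new_cost), new_path, new_cost))
    return None

# Fragment of the Romania map; h is straight-line distance to Bucharest
# (values as in the standard textbook figure):
roads = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

# Greedy best-first: f(n) = h(n), ignoring the cost paid so far.
print(best_first(roads, "Arad", lambda path, cost: path[-1] == "Bucharest"
                 if False else None, None) if False else
      best_first(roads, "Arad", lambda n: n == "Bucharest",
                 lambda path, cost: h[path[-1]]))
# → (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450)
```

Greedy chooses Fagaras (h = 176) over Rimnicu Vilcea (h = 193), so it commits to the 450 km route; this is the non-optimal behavior the next slides discuss.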
Romania with straight-line dist.
Greedy best-first search
Greedy best-first search example
Properties of greedy best-first search

• Complete? No – can get stuck in loops.
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No –
  e.g. it finds Arad → Sibiu → Fagaras → Bucharest, but
  Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is shorter!
A* search

• Idea: avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to the goal
• f(n) = estimated total cost of the path through n to the goal
• Greedy best-first search is the special case f(n) = h(n)
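A minimal A* sketch, again on the assumed Romania fragment with the textbook straight-line-distance heuristic: ranking by f = g + h rather than h alone recovers the shorter 418 km route through Pitesti that greedy search misses.

```python
import heapq

def a_star(graph, start, is_goal, h):
    """A* search: expand the path with smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, [start])]          # (f = g + h, g, path)
    while frontier:
        f, g, path = heapq.heappop(frontier)
        if is_goal(path[-1]):
            return path, g
        for n, step in graph.get(path[-1], []):
            new_g = g + step                     # cost so far to reach n
            heapq.heappush(frontier, (new_g + h[n], new_g, path + [n]))
    return None

# Fragment of the Romania map with straight-line distances to Bucharest
# (values as in the standard textbook figure):
roads = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

print(a_star(roads, "Arad", lambda n: n == "Bucharest", h))
# → (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```

The Fagaras route reaches Bucharest first with f = 450, but A* does not stop there: the path through Pitesti is still on the frontier with f = 418 and is popped (and returned) before the more expensive goal path.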
A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
• An admissible heuristic never overestimates the cost to
reach the goal, i.e., it is optimistic
• Example: hSLD(n) (never overestimates the actual road
distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH
is optimal
Optimality of A* (proof)

• Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
We want to prove: f(n) < f(G2)
(then A* will prefer n over G2)

• f(G2) = g(G2)                     since h(G2) = 0
• g(G2) > g(G)                      since G2 is suboptimal
• f(G) = g(G)                       since h(G) = 0
• Hence f(G2) > f(G)
• h(n) ≤ h*(n)                      since h is admissible
• f(n) = g(n) + h(n) ≤ g(n) + h*(n) = f(G)   since n lies on a shortest path to G
• Hence f(n) ≤ f(G) < f(G2): n is preferred over G2
Consistent heuristics

• A heuristic is consistent if for every node n and every successor n' of n
generated by any action a:
h(n) ≤ c(n, a, n') + h(n')
• If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n, a, n') + h(n')
      ≥ g(n) + h(n) = f(n)
(it’s the triangle inequality!)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
(GRAPH-SEARCH keeps all expanded nodes in memory to avoid repeating states.)
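Consistency is easy to check numerically: verify h(n) ≤ c(n, a, n') + h(n') on every arc. A sketch on the assumed Romania fragment with the standard straight-line-distance values:

```python
# Fragment of the Romania map (distances and straight-line heuristic
# values as in the standard textbook figure; assumed input data):
roads = {"Arad": [("Sibiu", 140)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def is_consistent(graph, h):
    """True iff h(n) <= c(n, a, n') + h(n') holds for every arc."""
    return all(h[n] <= cost + h[succ]
               for n, arcs in graph.items()
               for succ, cost in arcs)

print(is_consistent(roads, h))   # → True
```

Straight-line distance satisfies the inequality on every arc here; e.g. h(Pitesti) = 100 ≤ 101 + h(Bucharest) = 101, the tightest case in this fragment.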
Optimality of A*

• A* expands nodes in order of increasing f value
• Gradually adds "f-contours" of nodes
• Contour i contains all nodes with f ≤ fi, where fi < fi+1
Properties of A*

• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G);
guaranteed when every step cost exceeds some ε > 0)
• Time/Space? Exponential, O(b^d),
except if |h(n) − h*(n)| ≤ O(log h*(n))
• Optimal? Yes
• Optimally efficient: Yes (no algorithm with the
same heuristic is guaranteed to expand fewer nodes)
Admissible heuristics

E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location)
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
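The figure for state S isn't reproduced here, but the values above match the standard textbook start state, so both heuristics can be checked directly (the start and goal layouts below are that assumed instance; 0 marks the blank):

```python
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != goal[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = goal.index(t)                  # goal position of tile t
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total

print(h1(start), h2(start))                 # → 8 18
```

Both are admissible: every misplaced tile needs at least one move (h1), and each tile must travel at least its Manhattan distance (h2).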
Dominance

• If h2(n) ≥ h1(n) for all n (both admissible),
• then h2 dominates h1.
• h2 is better for search: it is guaranteed to expand
no more nodes than h1.
Relaxed problems

• A problem with fewer restrictions on the actions is
called a relaxed problem.
• The cost of an optimal solution to a relaxed problem is an
admissible heuristic for the original problem.
• E.g., if the 8-puzzle rules are relaxed so that a tile can move anywhere,
h1(n) gives the exact solution cost; if a tile can move to any adjacent
square, h2(n) gives the exact solution cost.
Local search algorithms
• In many optimization problems, the path to the goal is irrelevant;
the goal state itself is the solution
Example: n-queens
• Put n queens on an n × n board with no two
queens on the same row, column, or
diagonal
Hill-climbing search
• Problem: depending on initial state, can get stuck in local maxima
Hill-climbing search: 8-queens problem
• h = number of pairs of queens that are attacking each other, either directly or
indirectly (h = 17 for the above state)
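The 8-queens setup above can be sketched as steepest-descent hill climbing on h (equivalently, hill climbing on −h): repeatedly move one queen within its column to whichever square most reduces the number of attacking pairs, and stop at a local minimum. The state encoding (one queen per column, `cols[i]` = row of the queen in column i) is an assumption of this sketch.

```python
import random

def conflicts(cols):
    """h = number of pairs of queens attacking each other
    (same row, or same diagonal; one queen per column by construction)."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def hill_climb(cols):
    """Steepest descent on h: apply the best single-queen move until no
    neighbor improves; may stop at a local minimum with h > 0."""
    cols = list(cols)
    while True:
        h = conflicts(cols)
        best_h, best_move = h, None
        for col in range(len(cols)):
            original = cols[col]
            for row in range(len(cols)):
                if row == original:
                    continue
                cols[col] = row                     # try moving this queen
                nh = conflicts(cols)
                if nh < best_h:
                    best_h, best_move = nh, (col, row)
            cols[col] = original                    # undo the trial move
        if best_move is None:                       # local minimum reached
            return cols, h
        cols[best_move[0]] = best_move[1]

random.seed(0)                                      # arbitrary seed for the demo
state, final_h = hill_climb([random.randrange(8) for _ in range(8)])
print(final_h)   # often 0, but hill climbing can get stuck with h > 0
```

Each move strictly decreases h, so the loop always terminates; restarting from fresh random states (random-restart hill climbing) is the usual fix when it stalls at a local minimum.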