AI Unit2 PPT by BSCOER

The document provides study material on problem-solving strategies in Artificial Intelligence, focusing on search algorithms such as Depth-First Search (DFS) and Breadth-First Search (BFS). It discusses their completeness, optimality, time and space complexities, and introduces informed search strategies like A* search with admissible heuristics. The content is aimed at TE Computer students at Bhivarabai Sawant College of Engineering and Research, under the guidance of Prof. Sourabh Natu.


THE SHETKARI SHIKSHAN MANDAL’S

BHIVARABAI SAWANT COLLEGE OF ENGINEERING


AND RESEARCH
AICTE APPROVED INSTITUTE AFFILIATED TO SAVITRIBAI PHULE PUNE UNIVERSITY
GRADE “A” ACCREDITED BY NAAC

Class: TE Computer

Subject: Artificial Intelligence (310253)
Unit 2: Problem Solving

SUBJECT TEACHER: Prof. Sourabh Natu (ME Computer Engineering)


Assistant Professor, Computer Engineering Department

DEPARTMENT OF COMPUTER ENGINEERING


Study material provided by: Vishwajeet Londhe

Unit 2 Problem Solving

Search Strategies at a glance

Uninformed Search Strategies

Search Space Graph: example

• Operators: left, right, suck
• Successor states in the graph describe the effect of each action applied to a given state
• Possible goal: no dirt
Problem Solving by Graph Searching
Depth first search (DFS)

• Frontier: shaded nodes


Depth first search (DFS)

• Frontier: shaded nodes


• Which node will be expanded next?
(expand = “remove node from frontier & put its successors on”)
Depth first search (DFS)

• Say, node in red box is a goal


• How many more nodes will be expanded?

1   2   3   4
Depth first search (DFS)

• Say, node in red box is a goal


• How many more nodes will be expanded?
• 3: you only return once the goal node itself is expanded,
• not once a goal is put onto the frontier!

DFS as an instantiation of the Generic Search
Algorithm
Input: a graph;
       a set of start nodes;
       a Boolean procedure goal(n) that tests if n is a goal node

frontier := [<s> : s is a start node]
while frontier is not empty:
    select and remove a path <n0, ..., nk> from frontier
    if goal(nk):
        return <n0, ..., nk>
    else:
        for every neighbor n of nk:
            add <n0, ..., nk, n> to frontier
end

DFS as an instantiation of the Generic Search
Algorithm
Input: a graph;
       a set of start nodes;
       a Boolean procedure goal(n) that tests if n is a goal node

frontier := [<s> : s is a start node]
while frontier is not empty:
    select and remove a path <n0, ..., nk> from frontier
    if goal(nk):
        return <n0, ..., nk>
    else:
        for every neighbor n of nk:
            add <n0, ..., nk, n> to frontier
end

In DFS, the frontier is a last-in-first-out stack.
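As a runnable sketch, the generic loop above with a last-in-first-out frontier gives DFS (the example graph and node names below are illustrative, not from the slides):

```python
def dfs(graph, start, is_goal):
    """Generic search with a LIFO frontier (a stack), i.e. depth-first search.

    graph: dict mapping a node to its list of neighbors.
    Returns a start-to-goal path, or None if no goal is reachable.
    """
    frontier = [[start]]           # stack of paths
    while frontier:
        path = frontier.pop()      # LIFO: take the most recently added path
        node = path[-1]
        if is_goal(node):
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in path:          # avoid cycles along this path
                frontier.append(path + [neighbor])
    return None

# Illustrative graph (not from the slides)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(dfs(graph, "A", lambda n: n == "E"))  # → ['A', 'C', 'E']
```

Note that the goal test fires when a path is removed from the frontier, matching the slide's point that we return only once the goal is expanded.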

Analysis of DFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

Is DFS complete? Yes / No
• No: on graphs with infinite (or cyclic) paths, DFS can follow one branch forever and never reach an existing solution.
Analysis of DFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

Is DFS optimal? Yes / No
• No: DFS returns the first solution it stumbles on, which may lie much deeper than the best one.
• E.g., goal nodes: red boxes
Analysis of DFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of
- maximum path length m
- maximum forward branching factor b.

• What is DFS's time complexity, in terms of m and b?
  O(b^m)   O(m^b)   O(bm)   O(b + m)
• O(b^m): in the worst case DFS visits every node of the search tree.
• E.g., single goal node: red box
Analysis of DFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximal number of nodes on the frontier), expressed in terms of
- maximum path length m
- maximum forward branching factor b.

• What is DFS's space complexity, in terms of m and b?
  O(b^m)   O(m^b)   O(bm)   O(b + m)

• O(bm)
• The longest possible path is m, and for every node on that path we must keep up to b siblings on the frontier.
Breadth-first search (BFS)

BFS as an instantiation of the Generic Search
Algorithm
Input: a graph;
       a set of start nodes;
       a Boolean procedure goal(n) that tests if n is a goal node

frontier := [<s> : s is a start node]
while frontier is not empty:
    select and remove a path <n0, ..., nk> from frontier
    if goal(nk):
        return <n0, ..., nk>
    else:
        for every neighbor n of nk:
            add <n0, ..., nk, n> to frontier
end

BFS as an instantiation of the Generic Search
Algorithm
Input: a graph;
       a set of start nodes;
       a Boolean procedure goal(n) that tests if n is a goal node

frontier := [<s> : s is a start node]
while frontier is not empty:
    select and remove a path <n0, ..., nk> from frontier
    if goal(nk):
        return <n0, ..., nk>
    else:
        for every neighbor n of nk:
            add <n0, ..., nk, n> to frontier
end

In BFS, the frontier is a first-in-first-out queue.
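As a sketch, only the frontier data structure changes from the DFS case: a first-in-first-out queue gives BFS (the example graph below is illustrative, not from the slides):

```python
from collections import deque

def bfs(graph, start, is_goal):
    """Generic search with a FIFO frontier (a queue), i.e. breadth-first search.

    Returns a shallowest start-to-goal path, or None if no goal is reachable.
    """
    frontier = deque([[start]])    # queue of paths
    while frontier:
        path = frontier.popleft()  # FIFO: take the oldest path first
        node = path[-1]
        if is_goal(node):
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in path:          # avoid cycles along this path
                frontier.append(path + [neighbor])
    return None

# Illustrative graph: two routes to E, one shallower than the other
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}
print(bfs(graph, "A", lambda n: n == "E"))  # → ['A', 'C', 'E']
```

Because paths leave the queue in order of length, the first goal path returned has the fewest arcs.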

Analysis of BFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

Is BFS complete? Yes / No
• Yes. Proof sketch: with a finite branching factor b, there are only finitely many paths of each length, so BFS works through the levels one by one and reaches any solution at finite depth in finite time.
Analysis of BFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

Is BFS optimal? Yes / No
• Yes, when all arc costs are equal. Proof sketch: BFS always expands a shallowest frontier node, so the first goal it finds lies on a path with the fewest arcs.
Analysis of BFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of
- maximum path length m
- maximum forward branching factor b.

• What is BFS's time complexity, in terms of m and b?
  O(b^m)   O(m^b)   O(bm)   O(b + m)
• O(b^m): in the worst case the single goal sits at the deepest level, and BFS must examine every node at every depth up to m.
• E.g., single goal node: red box
Analysis of BFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximal number of nodes on the frontier), expressed in terms of
- maximum path length m
- maximum forward branching factor b.

• What is BFS's space complexity, in terms of m and b?
  O(b^m)   O(m^b)   O(bm)   O(b + m)

• O(b^m): how many nodes are there at depth m? Up to b^m, and BFS must hold an entire level on the frontier.
Real Example: Solving Sudoku
• E.g., start state on the left
• Operators: fill in an allowed number
• Solution: all numbers filled in, with all constraints satisfied

• Which method would you rather use?
  BFS   DFS
Real Example: Eight Puzzle. DFS or BFS?

• Which method would you rather use?

BFS DFS

Learning Goals
• Apply basic properties of search algorithms:
  - completeness
  - optimality
  - time and space complexity

• Select the most appropriate search algorithm for specific problems:
  - Depth-First Search vs. Breadth-First Search
Comparison of Uninformed Search Algorithms
Informed Search Strategies

Outline
• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms

Best-first search
• Idea: use an evaluation function f(n) for each node
  – f(n) provides an estimate of the total cost
  ⇒ expand the node n with the smallest f(n)

• Implementation:
  order the nodes in the fringe in increasing order of f-cost

• Special cases:
– greedy best-first search
– A* search
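A minimal sketch of best-first search with a priority queue ordered by f(n), instantiated here as greedy best-first (f = h). The road fragment and straight-line distances (SLD) to Bucharest are the usual textbook values for the Romania example, reproduced here as an assumption:

```python
import heapq

# Fragment of the Romania road map (assumed textbook values)
ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)],
    "Zerind": [("Arad", 75)],
    "Bucharest": [],
}
# Straight-line distances to Bucharest (assumed textbook values)
SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
       "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def best_first(start, goal, f):
    """Pop the frontier path whose end node minimizes f(cost_so_far, node)."""
    frontier = [(f(0, start), 0, [start])]   # (priority, cost so far, path)
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, cost
        for neighbor, step in ROADS[node]:
            if neighbor not in path:         # avoid cycles along this path
                g = cost + step
                heapq.heappush(frontier, (f(g, neighbor), g, path + [neighbor]))
    return None, None

# Greedy best-first: f(n) = h(n), ignoring the cost already paid
path, cost = best_first("Arad", "Bucharest", lambda g, n: SLD[n])
print(path, cost)  # via Fagaras, total road cost 450 (not the optimal route)
```

Passing a different f to the same loop yields the other special cases, e.g. f = g + h for A*.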

Romania with straight-line dist.

Greedy best-first search

• f(n) = estimate of cost from n to the goal
• e.g., fSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
Greedy best-first search example

Properties of greedy best-first search
• Complete? No: it can get stuck in loops.
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m): keeps all nodes in memory
• Optimal? No
  e.g., Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is shorter!
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n
to goal
• Greedy best-first search is the special case f(n) = h(n)
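A sketch of A* on the same assumed Romania fragment as before (road and straight-line-distance values are the usual textbook ones): ordering the frontier by f(n) = g(n) + h(n) instead of h(n) alone recovers the optimal route.

```python
import heapq

# Fragment of the Romania road map (assumed textbook values)
ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)],
    "Zerind": [("Arad", 75)],
    "Bucharest": [],
}
# Straight-line distances to Bucharest: an admissible h (assumed values)
SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
       "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def astar(start, goal):
    """A*: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(SLD[start], 0, [start])]    # (f, g, path)
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for neighbor, step in ROADS[node]:
            if neighbor not in path:         # avoid cycles along this path
                g2 = g + step
                heapq.heappush(frontier,
                               (g2 + SLD[neighbor], g2, path + [neighbor]))
    return None, None

path, cost = astar("Arad", "Bucharest")
print(path, cost)  # via Rimnicu Vilcea and Pitesti, total road cost 418
```

The cheaper path through Pitesti (f = 418) is popped before the goal reached through Fagaras (f = 450), which is exactly how A* avoids the greedy mistake.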

A* search example

Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
• An admissible heuristic never overestimates the cost to
reach the goal, i.e., it is optimistic
• Example: hSLD(n) (never overestimates the actual road
distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH
is optimal

Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.

We want to prove:
f(n) < f(G2)
(then A* will prefer n over G2)

• f(G2) = g(G2) since h(G2) = 0
• f(G) = g(G) since h(G) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G2) > f(G) from the above
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

• f(G2) > f(G) copied from the last slide
• h(n) ≤ h*(n) since h is admissible (an under-estimate)
• g(n) + h(n) ≤ g(n) + h*(n) from the above
• f(n) ≤ f(G) since g(n) + h(n) = f(n) and g(n) + h*(n) = f(G)
• f(n) < f(G2) from the top line

Hence: n is preferred over G2

Consistent heuristics
• A heuristic is consistent if, for every node n and every successor n' of n generated by any action a:
  h(n) ≤ c(n, a, n') + h(n')    (it's the triangle inequality!)

• If h is consistent, we have
  f(n') = g(n') + h(n')
        = g(n) + c(n, a, n') + h(n')
        ≥ g(n) + h(n) = f(n)
  so f(n') ≥ f(n)
• i.e., f(n) is non-decreasing along any path.

• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH (which keeps all checked nodes in memory to avoid repeated states) is optimal
Optimality of A*
• A* expands nodes in order of increasing f value
• It gradually adds "f-contours" of nodes
• Contour i contains all nodes with f ≤ f_i, where f_i < f_{i+1}
Properties of A*
• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G); this cannot happen if every step cost exceeds some fixed ε > 0)
• Time/Space? Exponential, O(b^d), except if |h(n) − h*(n)| ≤ O(log h*(n))
• Optimal? Yes
• Optimally efficient? Yes (no algorithm with the same heuristic is guaranteed to expand fewer nodes)
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = ?
• h2(S) = ?

Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
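Both heuristics are easy to compute directly. The board below is an assumption: the classic textbook start state (the slide's figure is not reproduced here) whose values match the answers above; 0 marks the blank.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal square (3x3 board)."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(state)}
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    return sum(abs(pos[t][0] - goal_pos[t][0]) + abs(pos[t][1] - goal_pos[t][1])
               for t in range(1, 9))

# Assumed boards, read row by row (0 is the blank)
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # 8 18
```

Note h1(S) ≤ h2(S) on every state, which is the dominance relation discussed next.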

Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1
• h2 is better for search: it is guaranteed to expand no more nodes than h1

• Typical search costs (average number of nodes expanded):
  d = 12:  IDS = 3,644,035 nodes;  A*(h1) = 227 nodes;  A*(h2) = 73 nodes
  d = 24:  IDS = too many nodes;  A*(h1) = 39,135 nodes;  A*(h2) = 1,641 nodes
Relaxed problems
• A problem with fewer restrictions on the actions is called a relaxed problem

• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem

• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the cost of the shortest solution

• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the cost of the shortest solution
Local search algorithms
• In many optimization problems, the path to the goal is irrelevant;
the goal state itself is the solution

• State space = set of "complete" configurations

• Find configuration satisfying constraints, e.g., n-queens

• In such cases, we can use local search algorithms

• keep a single "current" state, try to improve it.

• Very memory efficient (only remember current state)

Example: n-queens
• Put n queens on an n × n board with no two
queens on the same row, column, or
diagonal

Note that a state cannot be an incomplete configuration with m<n queens

Hill-climbing search
• Problem: depending on initial state, can get stuck in local maxima

Hill-climbing search: 8-queens problem

• Each number indicates h if we move a queen in its corresponding column
• h = number of pairs of queens that are attacking each other, either directly or indirectly (h = 17 for the state above)
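A minimal sketch of steepest-descent hill climbing for n-queens with this h (the random starting state below is illustrative, not the board in the slide's figure):

```python
import random

def attacking_pairs(state):
    """h = number of pairs of queens attacking each other, directly or
    indirectly. state[c] is the row of the queen in column c."""
    n = len(state)
    h = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if state[c1] == state[c2]:                    # same row
                h += 1
            elif abs(state[c1] - state[c2]) == c2 - c1:   # same diagonal
                h += 1
    return h

def hill_climb(state):
    """Steepest descent: move one queen within its column to the successor
    with the lowest h; stop when no successor improves (a local minimum)."""
    state = list(state)
    n = len(state)
    while True:
        current_h = attacking_pairs(state)
        best_h, best_move = current_h, None
        for col in range(n):
            original_row = state[col]
            for row in range(n):
                if row == original_row:
                    continue
                state[col] = row                 # try moving this queen
                h = attacking_pairs(state)
                if h < best_h:
                    best_h, best_move = h, (col, row)
            state[col] = original_row            # undo the trial move
        if best_move is None:                    # local minimum reached
            return state, current_h
        col, row = best_move
        state[col] = row

random.seed(0)
start = [random.randrange(8) for _ in range(8)]
final, h = hill_climb(start)
print("start h =", attacking_pairs(start), "-> final h =", h)
```

Because h strictly decreases on every move, the loop always terminates, but it may stop at a local minimum with h > 0, exactly the failure mode the slides describe.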
Hill-climbing search: 8-queens problem

• A local minimum with h = 1
