AI_ppt _m2_notes

This document discusses problem-solving agents, which are goal-based agents that utilize search algorithms to find sequences of actions to achieve their goals. It outlines the four general steps in problem-solving: goal formulation, problem formulation, search, and execution, and distinguishes between uninformed and informed search strategies. Additionally, it provides examples of problem formulation and search strategies, including breadth-first and depth-first search.


Module 2

Problem Solving Agents

In this topic we see how an agent can find a sequence of actions that achieves its goals when no single action will do.
Overview of Solving Problems by Searching
• A problem-solving agent is one kind of goal-based agent that uses atomic representations:
states of the world are considered as wholes, with no internal structure visible to
the problem-solving algorithms.
• Goal-based agents that use more advanced factored or structured representations are
usually called planning agents.
• Problem solving begins with precise definitions of problems and their solutions.
• Uninformed search algorithms—algorithms that are given no information about the
problem other than its definition. Although some of these algorithms can solve any
solvable problem, none of them can do so efficiently.
• Informed search algorithms, on the other hand, can do quite well given some guidance
on where to look for solutions.
• We consider the simplest kind of task environment, for which the solution to a problem is always a
fixed sequence of actions.
• The analysis uses the concepts of asymptotic complexity (that is, O() notation) and NP-completeness.

Module 2-Contents
• Problem-solving:
• Problem-solving agents
• Example problems
• Searching for Solutions
• Uninformed Search Strategies:
• Breadth First search
• Depth First Search
• Iterative deepening depth first search.

Problem‐Solving Agents

Functionality of Problem Solving Agent

Once the solution has been executed, the agent will formulate a new goal.
Thus, we have a simple "formulate, search, execute" design for the agent.
After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.
Problem Solving Agents
Intelligent agents are supposed to maximize their performance measure.

• Goals help organize behaviour by limiting the objectives that the agent is trying to achieve and hence the actions it
needs to consider.

Four General Steps in Problem Solving.

1. Goal formulation: What are the successful world states?
• Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
• We consider a goal to be a set of world states: exactly those states in which the goal is satisfied.

2. Problem formulation: the process of deciding what actions and states to consider, given a goal.

3. Search: the process of looking for a sequence of actions that reaches the goal.
• Examine different possible sequences of actions that lead to states of known value, and then choose the best sequence.
• A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

4. Execute: Perform actions on the basis of the solution.
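The four-step loop above can be sketched as a small Python closure. The helper names `formulate_goal`, `formulate_problem`, and `search` are placeholders for whatever goal, problem, and search machinery the agent uses; they are illustrative, not names from the slides.

```python
def simple_problem_solving_agent(formulate_goal, formulate_problem, search):
    """Return an agent function following the formulate-search-execute design.

    The agent keeps a plan (action sequence). While the plan is non-empty it
    simply executes the next action; when the plan runs out it formulates a
    new goal and problem and searches again.
    """
    seq = []  # the current plan: actions still to be executed

    def agent(percept):
        nonlocal seq
        state = percept                      # assume a fully observable world
        if not seq:                          # plan exhausted: re-plan
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = list(search(problem) or [])
        return seq.pop(0) if seq else None   # None: nothing left to do

    return agent
```

Note that the agent ignores new percepts while executing an old plan, which is exactly the behaviour (and the limitation) of this simple design.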


 Our agent has now adopted the goal of driving to Bucharest and is
considering where to go from Arad.

 Three roads lead out of Arad, one toward Sibiu, one to Timisoara, and one
to Zerind.

 None of these achieves the goal, so unless the agent is familiar with the
geography of Romania, it will not know which road to follow.

 In other words, the agent will not know which of its possible actions is best.

 If the agent has no additional information—i.e., if the environment is unknown—then it has no choice but to try one of the actions at random.

 But suppose the agent has a map of Romania.
 The point of a map is to provide the agent with information about the states it
might get itself into and the actions it can take.
 The agent can use this information to consider subsequent stages of a
hypothetical journey via each of the three towns, trying to find a journey that
eventually gets to Bucharest.
 Once it has found a path on the map from Arad to Bucharest, it can achieve its
goal by carrying out the driving actions that correspond to the legs of the journey.

ASSUMPTIONS

For now, we assume that the environment is observable, so the agent always knows the
current state.
We also assume the environment is discrete, so at any given state there are only finitely
many actions to choose from.
We will assume the environment is known, so the agent knows which states are reached
by each action.
Finally, we assume that the environment is deterministic, so each action has exactly
one outcome.

Example: Romania

Example: Romania

Imagine an agent in the city of Arad in Romania, enjoying a touring holiday.

Now, suppose the agent has a nonrefundable ticket to fly out to Bucharest city in Romania the following day.

In that case, it makes sense for the agent to adopt the goal of getting to Bucharest.

Example: Romania

The agent has adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad, one
toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves the goal, so unless the agent is
familiar with the geography of Romania, it will not know which road to follow.

In general, an agent with several immediate options of unknown value can decide what to do by first
examining future actions that eventually lead to states of known value.

Well-defined problems and solutions
A problem can be defined formally by five components:

• The initial state that the agent starts in. For example, the initial state for our agent in Romania might be
described as In(Arad).
• A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of
actions that can be executed in s. We say that each of these actions is applicable in s.
For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.

• A description of what each action does; the formal name for this is the transition model, specified by a function
RESULT(s, a) that returns the state that results from doing action a in state s.
We also use the term successor to refer to any state reachable from a given state by a single action. For example,
we have RESULT(In(Arad), Go(Zerind)) = In(Zerind).

Together, the initial state, actions, and transition model implicitly define the state space of the problem.

A path in the state space is a sequence of states connected by a sequence of actions.

• The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of
possible goal states, and the test simply checks whether the given state is one of them. The agent’s goal in
Romania is the singleton set {In(Bucharest )}.

• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function
that reflects its own performance measure.
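The five components can be sketched as a small Python class for the route-finding problem. The `RouteProblem` name and method signatures are illustrative choices, and only a fragment of the Romania map is included; the distances are those of the standard map.

```python
# A fragment of the Romania road map: city -> {neighbour: road distance}.
ROMANIA = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Zerind': {'Arad': 75},
    'Timisoara': {'Arad': 118},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}

class RouteProblem:
    """The five components: initial state, actions, transition model,
    goal test, and path cost (as per-step costs)."""

    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def actions(self, s):        # ACTIONS(s): one Go action per outgoing road
        return sorted(self.graph[s])

    def result(self, s, a):      # RESULT(s, a): Go(city) lands in that city
        return a

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a):   # path cost accumulates road distances
        return self.graph[s][a]
```

Here an action is identified with its destination city, so the transition model is trivial; a richer formulation would carry explicit Go(...) action objects.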

EXAMPLE PROBLEMS

Problem Formulation

Example Problem : Vacuum world
Links denote actions: L = Left, R = Right, S = Suck.

The vacuum world can be formulated as a problem as follows:

• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two
locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A
larger environment with n locations has n · 2^n states.

• Initial state: Any state can be designated as the initial state.

• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.

• Transition model: The actions have their expected effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a clean square
have no effect.

• Goal test: This checks whether all the squares are clean.

• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
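As a rough sketch, this formulation can be written down directly. The state encoding below (agent location 0 or 1, plus the set of dirty squares) is one possible choice, not mandated by the slides.

```python
# Two-square vacuum world. A state is (agent_location, dirty_squares),
# with locations 0 (left) and 1 (right) and dirt as a frozenset.

ACTIONS = ('Left', 'Right', 'Suck')

def result(state, action):
    """Transition model: moves at the edge and Suck on a clean square
    have no effect, as in the formulation above."""
    loc, dirt = state
    if action == 'Left':
        return (max(loc - 1, 0), dirt)   # no effect in leftmost square
    if action == 'Right':
        return (min(loc + 1, 1), dirt)   # no effect in rightmost square
    return (loc, dirt - {loc})           # Suck: current square becomes clean

def goal_test(state):
    return not state[1]                  # goal: no dirty squares left

# Enumerate the full state space: 2 locations x 2^2 dirt patterns = 8 states.
all_states = [(loc, frozenset(d))
              for loc in (0, 1)
              for d in ((), (0,), (1,), (0, 1))]
```

Each step would have cost 1, so the path cost of a solution is simply its length.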

Example Problems

Toy Problems: 8 Puzzle

• States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.

• Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.

• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where the
blank is.

• Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.

• Goal test: This checks whether the state matches the goal configuration shown in the figure.

• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
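A sketch of this formulation, assuming states are 9-tuples in row-major order with 0 standing for the blank. The goal configuration used here (blank first, tiles in order) is one common convention; the figure's goal may differ.

```python
# 8-puzzle: a state is a 9-tuple in row-major order, 0 = blank.
# Actions move the blank; the blank's index changes by the offsets below.
MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

def actions(state):
    """Applicable actions depend on where the blank is (edges of the grid)."""
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append('Left')
    if i % 3 < 2: acts.append('Right')
    if i >= 3:    acts.append('Up')
    if i <= 5:    acts.append('Down')
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def goal_test(state):
    return state == GOAL
```

As the slide notes, applying Left to a start state simply exchanges the blank with the tile to its left.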
Toy Problems: 8 Queens

Toy Problems: 8 Queens
The first incremental formulation one might try is the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 · 63 · · · 57 ≈ 1.8 × 10^14 possible sequences to
investigate.

A better formulation would prohibit placing a queen in any square that is already attacked:

• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost
n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.

This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions
are easy to find.
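The improved formulation lends itself to a short backtracking enumeration. In this sketch a state is a tuple of row indices, one per filled leftmost column; the helper names are illustrative.

```python
def attacks(rows, row):
    """Would a queen placed in the next column, at `row`, be attacked?
    rows[q] is the row of the queen already in column q."""
    c = len(rows)  # the next empty column
    return any(r == row or abs(r - row) == abs(q - c)
               for q, r in enumerate(rows))

def actions(rows):
    """Squares in the leftmost empty column not attacked by any queen."""
    return [row for row in range(8) if not attacks(rows, row)]

def solve(rows=()):
    """Depth-first enumeration of every goal state of the formulation."""
    if len(rows) == 8:
        return [rows]
    return [sol for row in actions(rows) for sol in solve(rows + (row,))]
```

Enumerating the whole space confirms how small it is: the 8-queens problem has exactly 92 complete solutions.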

Toy Problems: 8 Queens

4-Queens Problem

Real-world problems

SEARCHING FOR SOLUTIONS
Example: Romania

Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that
have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in
faint dashed lines.

The set of all leaf nodes available for expansion at any given point is called the frontier.

Search algorithms all share the basic structure; they vary primarily according to how they choose which state to
expand next—the so-called search strategy.

The way to avoid exploring redundant paths is to remember where one has been. To do this, we
augment the TREE-SEARCH algorithm with a data structure called the explored set, which remembers
every expanded node.

Figure 3.8 A sequence of search trees generated by a graph search on the Romania problem. At each stage, we have
extended each path by one step. Notice that at the third stage, the northernmost city (Oradea) has become a dead
end: both of its successors are already explored via other paths.

Figure 3.9 The separation property of GRAPH-SEARCH, illustrated on a rectangular-grid problem. The frontier (white
nodes) always separates the explored region of the state space (black nodes) from the unexplored region (gray
nodes). In (a), just the root has been expanded. In (b), one leaf node has been expanded. In (c), the remaining
successors of the root have been expanded in clockwise order.

Infrastructure for search algorithms

Search algorithms require a data structure to keep track of the search tree that is being constructed.
For each node n of the tree, we have a structure that contains four
components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the
node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path
from the initial state to the node, as indicated by the parent pointers.
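The four node components translate directly into a small class. `child_node` and `solution` are the usual companion helpers; the `problem` object is assumed to provide `result` and `step_cost` methods, as in the problem sketches above.

```python
class Node:
    """A search-tree node: state, parent, action, and path cost g(n)."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def child_node(problem, parent, action):
    """Build the node reached by applying `action` in `parent`'s state."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow parent pointers back to the root; return the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```

The parent pointers are what make it possible to return an action sequence once a goal node is found, which is exactly the distinction between search nodes and search states drawn below.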

Visualize Search Space as a Tree
• States are nodes
• Actions are edges
• Initial state is root
• Solution is path from root to goal node
• Edges sometimes have associated costs
• States resulting from operator are children
Search nodes and search states are related concepts but differ in their
representation:

• Search State: A search state represents a specific configuration or
situation in the problem space. It encapsulates all relevant information
about the current state of the problem, including the positions of
objects and the state of the environment.

• Search Node: A search node is a data structure associated with a
search state in the search tree. It typically includes additional
information such as the parent node (from which it was generated), the
action that led to the current state, and the path cost from the initial
state to the current state. Search nodes are used during the search
process to keep track of the exploration and to reconstruct the path to
the solution once it is found.
The frontier needs to be stored in such a way that the search algorithm can easily choose the next node to
expand according to its preferred strategy. The appropriate data structure for this is a queue. The operations on a queue are as follows:
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
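These three operations map naturally onto Python's `collections.deque`. With `popleft` the queue is FIFO (suits breadth-first search); switching `pop` to take from the right end would make it LIFO, i.e. a stack (suits depth-first search).

```python
from collections import deque

def empty(queue):
    """EMPTY?(queue): true only if there are no more elements."""
    return len(queue) == 0

def pop(queue):
    """POP(queue): remove and return the first element (FIFO)."""
    return queue.popleft()

def insert(element, queue):
    """INSERT(element, queue): add an element, return the queue."""
    queue.append(element)
    return queue
```

`deque` gives O(1) appends and pops at both ends, which is why it is preferred over a plain list here.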

Measuring problem-solving performance

We can evaluate an algorithm’s performance in four ways:


• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution, i.e., the one with the lowest path cost?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?

Search strategies

Uninformed Search

Informed Search

Uninformed Search Strategies
• Also called Blind Search
• The term means that the strategies have no additional information
about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state
from a non-goal state
• All search strategies are distinguished by the order in which nodes are
expanded.
• Strategies that know whether one non-goal state is “more promising”
than another are called informed search or heuristic search
strategies;

Breadth-first search

• Breadth-first search is a simple strategy in which the root node is expanded
first, then all the successors of the root node are expanded next, then their
successors, and so on.
• In general, all the nodes are expanded at a given depth in the search tree
before any nodes at the next level are expanded.
• Breadth-first search is an instance of the general graph-search algorithm
(Figure 3.7) in which the shallowest unexpanded node is chosen for
expansion.
• This is achieved very simply by using a FIFO queue for the frontier.
• Thus, new nodes (which are always deeper than their parents) go to the
back of the queue, and old nodes, which are shallower than the new
nodes, get expanded first.
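A minimal breadth-first graph search, assuming hashable states and a `successors(state)` function that returns (action, next_state) pairs; both are illustrative conventions. The goal test is applied when a node is generated, and an explored set prevents revisiting states.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """BFS graph search: FIFO frontier plus an explored set.
    Returns the list of actions to a goal, or None if there is none."""
    if goal_test(initial):
        return []
    frontier = deque([(initial, [])])   # (state, actions taken so far)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()          # shallowest node first
        for action, nxt in successors(state):
            if nxt not in explored:
                if goal_test(nxt):                # test on generation
                    return path + [action]
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                                   # frontier exhausted
```

Because new nodes go to the back of the FIFO queue, all nodes at one depth are expanded before any node at the next depth, as described above.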

• The news about breadth-first search has been good.
• The news about time and space is not so good.
• Imagine searching a uniform tree where every state has b successors. The root of the search tree
generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the
second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now
suppose that the solution is at depth d.
• In the worst case, it is the last node generated at that level. Then the total number of nodes
generated is b + b^2 + b^3 + ··· + b^d = O(b^d).
• As for space complexity: for any kind of graph search, which stores every expanded node in the
explored set, the space complexity is always within a factor of b of the time complexity.
• For breadth-first graph search in particular, every node generated remains in memory. There will
be O(b^(d−1)) nodes in the explored set and O(b^d) nodes in the frontier.
Analysis of BFS
• Assume goal node at level d with constant branching factor b

• Time complexity (measured in #nodes generated)
 1 (1st level) + b (2nd level) + b^2 (3rd level) + … + b^d (goal level) + (b^(d+1) − b) =
O(b^(d+1))

• This assumes the goal is on the far right of its level

• Space complexity
 At most majority of nodes at level d + majority of nodes at level d+1 = O(b^(d+1))
 Exponential time and space

• Features
 Simple to implement
 Complete
 Finds shortest solution (not necessarily least-cost unless all operators have
equal cost)
Analysis
• See what happens with b = 10
• expand 10,000 nodes/second
• 1,000 bytes/node

Depth   Nodes     Time          Memory
2       1110      .11 seconds   1 megabyte
4       111,100   11 seconds    106 megabytes
6       10^7      19 minutes    10 gigabytes
8       10^9      31 hours      1 terabyte
10      10^11     129 days      101 terabytes
12      10^13     35 years      10 petabytes
15      10^15     3,523 years   1 exabyte
Depth-first search

• Depth-first search always expands the deepest node in the current
frontier of the search tree.
• The progress of the search is illustrated in Figure 3.16.
• The search proceeds immediately to the deepest level of the search
tree, where the nodes have no successors.
• As those nodes are expanded, they are dropped from the frontier, so
then the search "backs up" to the next deepest node that still has
unexplored successors.
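A sketch of depth-first tree search, written recursively so the call stack plays the role of the LIFO frontier. A depth limit is added here as a guard against infinite paths; plain DFS tree search has none. The `successors(state)` convention of (action, next_state) pairs is illustrative.

```python
def depth_first_search(initial, goal_test, successors, limit=50):
    """Recursive DFS tree search with a depth limit.
    Returns a list of actions to a goal, or None if none is found."""
    def recurse(state, depth):
        if goal_test(state):
            return []
        if depth == limit:                 # cut off: back up
            return None
        for action, nxt in successors(state):
            sub = recurse(nxt, depth + 1)  # dive to the deepest level first
            if sub is not None:
                return [action] + sub      # found below: prepend this action
        return None                        # dead end: backtrack
    return recurse(initial, 0)
```

Note that only the current path (plus unexpanded siblings) is ever held at once, which is exactly the O(bm) space advantage discussed below.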

Analysis of DFS
• Time complexity
 In the worst case, search entire space
 Goal may be at level d but tree may continue to level m, m >= d
 O(b^m)
 Particularly bad if tree is infinitely deep

• Space complexity
 Only need to save one set of children at each level
 1 + b + b + … + b (m levels total) = O(bm)
 For previous example, DFS requires 118 KB instead of 10 petabytes for d = 12 (10
billion times less)

• Features
 May not always find a solution
 Solution is not necessarily shortest or least cost
 If many solutions, may find one quickly (quickly moves to depth d)
 Simple to implement
 Space often bigger constraint, so more usable than BFS for large problems
• Depth-First Search (DFS) is another fundamental graph traversal
algorithm used to explore and navigate through a graph or tree.
• DFS starts at a designated node (often called the "source" node) and
explores as far as possible along each branch before backtracking.
• It uses a stack (either explicitly or through recursive calls) to keep
track of the nodes to be explored.

• Depth-first search seems to have no clear advantage over breadth-first search, so
why do we include it? The reason is the space complexity.

• For a graph search, there is no advantage, but a depth-first tree search needs to
store only a single path from the root to a leaf node, along with the remaining
unexpanded sibling nodes for each node on the path.

• Once a node has been expanded, it can be removed from memory as soon as all
its descendants have been fully explored. For a state space with branching factor b
and maximum depth m, depth-first search requires storage of only O(bm) nodes.

Drawbacks of DFS and BFS

