ArtificialIntelligence Unit2 (OLD Part1)

Prof Gaurav Mishra

Dr Ashish Avasthi
Unit II

On going through this unit, you will be able to:
 Understand the state space representation and gain
familiarity with some common problems formulated
as state space search problems.
 Understand the basics of search and the following
types of search strategies:
 Uninformed Search Strategy
 Informed Search Strategy
 Adversarial Search
Agent & Environment

• We can see the agent in the diagram. The agent
operates in an environment. The agent receives
percepts from the environment, and its actions can
change the environment.
• What the agent can perceive depends upon its
sensors. For example, the agent may be able to see if
it has a camera, or to hear if it has a sonar sensor; in
this way the agent can see, hear, or accept other
inputs from the environment.
Agent & Environment

 Inside the agent there is an agent program which, on
the basis of the current percept or the percept
sequence it has received till date, decides what would
be a good action to take in the current situation.
 So the agent has actuators or effectors to take
actions. These actions can potentially change the
environment and the agent can use its sensors to
sense the changed environment.
Agents

 Operate in an environment
 Perceive their environment through sensors
 Act upon their environment through
actuators/effectors
 Have goals
Sensors and effectors

 An agent perceives its environment through sensors
 the complete set of inputs at a given time is called a
percept
 the current percept, or a sequence of percepts can
influence the actions of an agent
 It can change the environment through
effectors/actuators
 An operation involving an actuator is called an action
 Actions can be grouped into action sequences
Agents

 Have sensors, actuators
 Have goals
 Agent program implements
mapping from percept sequence
to actions
 Performance measure to
evaluate agents
 Autonomous agents: decide
autonomously which action to
take in the current situation to
maximize progress towards their
goals
Agent & Environment (in brief)

 An agent works in an
environment.
 It receives the percepts
from the environment,
visualizes, takes actions
using its actuators and
changes the state of the
environment.
Goal-directed Agent

 A goal-directed agent needs to achieve certain goals.
 Many problems can be represented as a set of states
and a set of rules of how one state is transformed to
another.
 The agent must choose a sequence of actions to
achieve the desired goal.
Problem Solving Agent

 A problem solving agent first formulates a goal.
 Then the agent calls a search procedure to solve it.
 A problem solving agent is a kind of goal-based agent
that decides what to do by finding sequences of
actions that lead to desirable states.
 In the case of intelligent agents, a KB corresponds to
the environment, operators correspond to sensors, and
the search techniques are the actuators.

 Each state is an abstract representation of the agent’s
environment. It is an abstraction that denotes a
configuration of the agent.
 Initial states: The description of the starting
configuration of the agent.
 An action/operator takes the agent from one state to
another. A state can have a number of successor
states.
 A plan is a sequence of actions. The cost of a plan is
referred to as the path cost.

 A goal is a description of a set of desirable states. Goals
are often specified by a goal test which any goal state
must satisfy.
 Path cost: path → positive number
Usually path cost = sum of step costs.
 Search: Search is the process of considering various
possible sequences of operators applied to the initial
state and finding out a sequence which
culminates/ends in a goal state.
Problem Formulation

 Problem formulation means choosing a relevant set
of states to consider, and a feasible set of operators
for moving from one state to another.

 Hence, a problem can be defined by the following
components:
 The initial state
 The state space
 The goal test
 Path Cost

 Hence, any problem can be solved by the following series of
steps:
• Define a state space which contains all the possible
configurations.
• Specify one or more states within that space from which the
problem solving process may start (called the initial states)
• Specify one or more states which would be acceptable as
solutions (goal states)
• Specify a set of rules which describe the actions (operators)
available and a control strategy to decide the order of
application of these rules.
• DEPENDING UPON THE CONTROL STRATEGY USED,
THE PERFORMANCE OF THE PROBLEM SOLVING
PROCEDURE CAN BE IMPROVED OR DEGRADED.
Search Problem

 We now formally describe a search
problem.
 A search problem is represented by a
four-tuple {S, s0, A, G}
 S: the full set of states
 s0: the initial state, s0 ∈ S
 A: S → S, set of operators/actions that
transform one state to another state.
 G: goal, the set of final states, G ⊆ S
 Search Problem: Find a sequence of
actions which transforms the agent
from initial state to a goal state.

 The search problem consists of finding a solution plan,
which is a path from the current state to the goal state.

Representing search problems

 A search problem is represented using a directed graph.
 The states are represented as nodes.
 The allowable actions are represented as arcs
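
As a concrete sketch (not from the slides; the state names and adjacency
structure below are invented for illustration), such a problem can be
encoded in Python as a directed graph plus an initial state and a goal set:

# A small search problem {S, s0, A, G} encoded as a directed graph.
# States are the nodes; the allowable actions are the arcs.
ACTIONS = {
    "s0": ["s1", "s2"],
    "s1": ["s3"],
    "s2": ["s3", "s4"],
    "s3": [],
    "s4": [],
}
INITIAL = "s0"             # the initial state s0
GOALS = {"s4"}             # the set of goal states G

def successors(state):
    """Expand a node: return its successor states."""
    return ACTIONS[state]

def is_goal(state):
    """The goal test."""
    return state in GOALS

The later algorithm sketches in this unit reuse this successors/is_goal
interface.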

 The searching process is as follows:
 Check the current state.
 Execute allowable actions to move to the next state.
 Check if the new state is a solution state
 If it is not, the new state becomes the current state
and the process is repeated until a solution is found
or the state space is exhausted
Example: Illustration of a Search
Process
 s0 is the initial state.
 The successor
states are the
adjacent states in
the graph.
 There are three
goal states.

 The two
successor states
of the initial
state are
generated.

 The successors of
these states are
picked and their
successors are
generated.

 Successors of all
these states are
generated.

 The successors are generated.
 A Goal State has been found.
 Usually the
search tree is
extended one
node at a time.
 The order in
which the
search tree is
extended
depends upon
the search
strategy.
Example: Pegs and Disks Problem

 Let us illustrate state space search with the Pegs and
Disks Problem.
 We have 3 Pegs and 3 Disks
 Initial State:

 Operators: One may move the topmost disk on any
peg to the topmost position on any other peg.
 Goal:

 We will describe a sequence of actions that can be
applied to the initial state.
 Is there any other possible solution path to the Pegs
and Disks Problem?
 Yes!
 Move A → C
 Move A → C
 Move A → B
 Move C → B
 Move C → B
 In fact, the above strategy has only 5 steps.
Example: 8 Queens Problem

 Place 8 queens on a chessboard so that no two
queens are in the same row, column or diagonal.
TYPICAL SOLUTION
N queens problem formulation 1

 States: Any arrangement of 0 to 8 queens on the
board.
 Initial State: 0 queens on the board.
 Successor Function: Add a queen in any square.
 Goal Test: 8 queens on the board, none are attacked.

N queens problem formulation 2

 States: Any arrangement of 8 queens on the board.
 Initial State: All queens are at column 1.
 Successor Function: Change the position of any one
queen.
 Goal Test: 8 queens on the board, none are attacked.
N queens problem formulation 3

 States: Any arrangement of k queens in the first k
rows such that none are attacked.
 Initial State: 0 queens on the board.
 Successor Function: Add a queen to the (k+1)th row
so that none are attacked.
 Goal Test: 8 queens on the board, none are attacked.
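
As a hedged Python sketch of formulation 3 (the tuple-of-columns state
encoding and 0-based indexing are implementation choices of this sketch,
not from the slide):

def attacked(state, col):
    """Would a queen placed in the next row, at column col, be attacked?
    A state is a tuple of column positions, one per already-filled row."""
    row = len(state)
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(state))

def successors(state):
    """Add a queen to the (k+1)th row so that none are attacked."""
    return [state + (col,) for col in range(8) if not attacked(state, col)]

def is_goal(state):
    """8 queens placed; none attacked, by construction of the successors."""
    return len(state) == 8

Because every generated state is already attack-free, this formulation
keeps the search space far smaller than formulation 1.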

Explicit vs Implicit State Space

 The state space may be explicitly represented.
 Typically it is implicitly represented and generated
when required.
 The agent knows
 the initial state
 the operators
 An “operator” is a function which expands a node.
 compute the successor node(s)
Problem Definition – Example, 8 puzzle

8 puzzle – partial state space


Problem Definition – Example, tic-tac-toe


Example: Water Jug Problem

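
The slide's figure is not reproduced here. As a sketch under the standard
textbook assumptions (a 4-gallon jug x and a 3-gallon jug y, with the goal
of measuring exactly 2 gallons; the slide's own instance may differ), the
operators might be coded as:

def successors(state):
    """Fill, empty, or pour between a 4-gallon jug x and a 3-gallon jug y."""
    x, y = state
    pour_xy = min(x, 3 - y)          # amount that can move from x into y
    pour_yx = min(y, 4 - x)          # amount that can move from y into x
    candidates = {
        (4, y), (x, 3),              # fill a jug
        (0, y), (x, 0),              # empty a jug
        (x - pour_xy, y + pour_xy),  # pour x into y
        (x + pour_yx, y - pour_yx),  # pour y into x
    }
    candidates.discard(state)        # drop no-op moves
    return candidates

def is_goal(state):
    return state[0] == 2             # exactly 2 gallons in the 4-gallon jug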
Advantages and disadvantages
of state-space representations

 Advantages:
 This representation is very useful in AI because it
provides a set of all possible states, operations and
goals. If the entire state-space representation for a
problem is given then it is possible to trace the path
from the initial to goal state and identify the sequence
of operators required for doing it.
 Disadvantages:
 It is not possible to visualize all states for a given
problem. Also, the resources of the computer system
are limited when it comes to handling huge
(combinatorial) state-space representations.
Search through a state space

 Input:
 Set of States
 Operators (and costs)
 Start State
 Goal state (test)
 Output:
 Path: start => a state satisfying the goal test
 [May require shortest path]
Basic Search Algorithm

Let fringe be a list containing the initial state
Loop
if fringe is empty then return failure
Node  remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else generate all successors of Node, and
merge the newly generated nodes into fringe
End Loop

 The states that have been generated are the nodes.
 The search algorithm maintains a list of nodes called the fringe.
 The fringe keeps track of the nodes that have been generated
but yet to be explored.
 The fringe represents the frontier of the search tree generated.
 The algorithm always picks the first node from fringe for
expansion. If the node contains a goal state, the path to the goal
is returned. The path corresponding to a goal node can be
found by following the parent pointers. Otherwise all the
successor nodes are generated and they are added to the fringe.
 The successors of the current expanded node are put in fringe.
 The order in which the successors are put in fringe will
determine the property of the search algorithm.
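
A minimal Python sketch of this algorithm (the (state, path) node encoding
stands in for explicit parent pointers; successors and is_goal are assumed
to be supplied, e.g. as sketched earlier):

def basic_search(initial, successors, is_goal):
    """Generic fringe-based search; each node is a (state, path) pair."""
    fringe = [(initial, [initial])]
    while fringe:
        state, path = fringe.pop(0)       # remove-first (fringe)
        if is_goal(state):
            return path                   # path from initial state to Node
        # generate all successors and merge them into fringe;
        # the merge policy determines the search strategy
        fringe.extend((s, path + [s]) for s in successors(state))
    return None                           # failure

Appending the new nodes at the back of fringe makes this breadth first
search; pushing them at the front makes it depth first search.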
Search Strategy

Measuring problem solving performance:
 Completeness: Is the strategy guaranteed to find a
solution if one exists?
 Optimality: When there are several solutions, does
the strategy find the one with minimal cost?
 Time Complexity: What is the search cost, in terms
of the time required to find a solution?
 Space Complexity: Space used by the algorithm
measured in terms of the maximum size of fringe.
Search Strategies

 Blind Search
 Depth first search
 Breadth first search
 Uniform Cost Search
 Iterative Deepening Search
 Informed Search
 Generate and test
 Hill Climbing
 Best first search
 Branch and Bound Search
 A*, AO* Algorithms
 Constraint Satisfaction
 Adversary Search
Search Problem Representation

 A search problem is represented by a four-tuple
{S, s0, A, G}
 S: the full set of states
 s0: the initial state, s0 ∈ S
 A: S → S, set of operators/actions that transform one
state to another state.
 G: goal, the set of final states, G ⊆ S
 Search Problem: Find a sequence of actions which
transforms the agent from initial state to a goal state.
Uninformed Search

 Blind search or uninformed search does not use any
extra information about the problem domain.
 The two common methods of blind search are:
• BFS or Breadth First Search
• DFS or Depth First Search
 In other words, a blind or uninformed search
algorithm is one which uses no information other
than the initial state, the search operators, and a test
for a solution.

 Here, the search algorithms make the following
assumptions:
 There must be a procedure to find all successors of a
given node.
 The state space graph is a tree.
 Whenever a node is expanded, creating a node for
each of its successors, the successor nodes contain
pointers back to the parent node. Finally, when a
goal node is generated, the path from the root to the
goal can easily be found.

Search Tree - Terminology
 Root Node (initial node)

 Leaf Node (has NO children)
 Ancestor/Descendant
 Branching Factor: the
maximum number of
children of a non-leaf node
in the search tree.
 A path in the search tree is a
complete path if it begins
with the start node and ends
with a goal node; otherwise
it is a partial path.
Breadth First Search

Let fringe be a list containing the initial state
Loop
if fringe is empty then return failure
Node  remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else generate all successors of Node, and
merge the newly generated nodes into fringe:
add them to the back of fringe
End Loop

 In breadth first search the newly generated nodes are
put at the back of fringe or list/queue.
 What this implies is that the nodes will be expanded
in a FIFO (First In First Out) order.
 The node that enters earlier will be expanded earlier.
 This amounts to expanding the shallowest nodes
first.
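
In Python, a sketch of BFS (collections.deque is used for an efficient
FIFO queue; that choice is an assumption of the sketch, not the slides):

from collections import deque

def bfs(initial, successors, is_goal):
    """Breadth first search: new nodes go to the back of fringe,
    so nodes are expanded in FIFO order (shallowest first)."""
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft()        # remove-first (fringe)
        if is_goal(state):
            return path
        for s in successors(state):
            fringe.append((s, path + [s]))    # add to the back of fringe
    return None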
BFS Illustrated

Breadth First Search

 In general, in BFS, all nodes are expanded at a given
depth in the search tree before any node at the next
level is expanded.
 Enqueue nodes on the fringe in FIFO order.
 Complete
 Optimal if all operators have the same cost;
otherwise finds the solution with the shortest path length.
 Exponential time and space complexity, O(b^d), where
d is the depth of the solution and b is the branching
factor.
Breadth First Search

 Advantages of Breadth First Search
 Finds the path of minimal length to the goal.
 Disadvantages of Breadth First Search
 Requires the generation and storage of a tree whose
size is exponential in the depth of the shallowest goal
node.
 Uniform-Cost Search [Dijkstra, 1959]
 Expansion by equal cost rather than equal depth
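
Uniform-cost search is only mentioned on this slide, but a hedged sketch
using a priority queue might look like this (here successors is assumed to
yield (next_state, step_cost) pairs, an interface invented for the sketch):

import heapq

def uniform_cost_search(initial, successors, is_goal):
    """Expand nodes in order of path cost rather than depth."""
    fringe = [(0, initial, [initial])]        # (path cost, state, path)
    explored = set()
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in successors(state):
            heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))
    return None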
Depth first search

Let fringe be a list containing the initial state
Loop
if fringe is empty then return failure
Node  remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else generate all successors of Node, and
//expand deepest node first
add generated nodes to the front of the fringe
End Loop
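
The only change from the BFS sketch is where the newly generated nodes
enter fringe:

def dfs(initial, successors, is_goal):
    """Depth first search: fringe behaves as a LIFO stack, so the
    deepest (most recently generated) node is expanded first."""
    fringe = [(initial, [initial])]
    while fringe:
        state, path = fringe.pop()            # take from the top of the stack
        if is_goal(state):
            return path
        for s in successors(state):
            fringe.append((s, path + [s]))    # add to the front of fringe
    return None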
DFS illustrated

Properties of DFS

 Enqueue nodes on the fringe in LIFO order (use a
stack)
 Exponential time O(b^d), but only linear space O(b·d).
 May not terminate without a “depth bound” (see
“depth limited search”).
 Not Complete.

 Advantages of DFS:
 It has a modest memory requirement.
 By chance, it may find a solution without examining
much of the search space.

 Disadvantages of DFS
 DFS is neither complete nor optimal. If DFS goes down an
infinite path, it will not terminate unless a goal state is
found. Hence this type of search can go on and on, deeper
and deeper into the search space and we can get lost (blind
alley).
 Even if a solution is found, there may be a better solution at
a higher level in the tree.
Depth limited search

 DFS may not work in infinite state spaces, where it
may never reach a solution and can go into an infinite loop.
 Depth limited search is a variation of DFS where we
cut off the search at a particular depth.
So depth limited search works like this:
 At every node we keep track of the depth or level of
that node, and we modify depth first search so that if
the depth of the node we try to expand is equal to the
limit, then we terminate the search along that path.
Depth limited search

Let fringe be a list containing the initial state
Loop
if fringe is empty then return failure
Node ← remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else if depth of Node = limit then return cutoff
else generate all successors of Node, and
add the generated nodes to the front of the fringe
End Loop
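
A Python sketch along these lines (here nodes at the limit are pruned
rather than ending the whole search immediately, and returning the string
"cutoff" is a convention of this sketch, signalling that the depth limit,
not the state space, ended the search):

def depth_limited_search(initial, successors, is_goal, limit):
    """DFS that does not expand nodes at depth == limit."""
    fringe = [(initial, [initial], 0)]        # (state, path, depth)
    cutoff_hit = False
    while fringe:
        state, path, depth = fringe.pop()
        if is_goal(state):
            return path
        if depth == limit:
            cutoff_hit = True                 # pruned here; do not expand
            continue
        for s in successors(state):
            fringe.append((s, path + [s], depth + 1))
    return "cutoff" if cutoff_hit else None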
Depth-first Iterative Deepening
(DFID)

 If we modify depth first search by cutting off search at a
limit we get depth limited search which takes as
parameter a depth limit and does depth first search up to
that limit.
 Now, if we choose a limit beforehand, a solution may not
be found at that depth. Therefore we have a variation of
depth first search which is called depth first iterative
deepening search.
 The idea is that we do DFS up to a limit, and if we do not
find a solution we increase the limit by 1 and continue. So,
until a solution is found, do DFS with depth cutoff C, then
set C = C + 1, and so on.

 First do DFS to depth 0 (i.e., treat start node as
having no successors), then, if no solution found, do
DFS to depth 1, etc.

 Advantage
 Linear memory requirements of depth-first search
 Guarantee for goal node of minimal depth

 Procedure
Successive depth-first searches are conducted – each
with depth bounds increasing by 1
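
Combining the two gives a short DFID sketch (reusing the
depth_limited_search sketch from the previous section):

from itertools import count

def iterative_deepening_search(initial, successors, is_goal):
    """Successive depth limited searches with limits 0, 1, 2, ..."""
    for limit in count(0):
        result = depth_limited_search(initial, successors, is_goal, limit)
        if result != "cutoff":
            return result   # a goal path, or None if the space is exhausted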
Iterative Deepening Search

Properties
The algorithm is

 Complete
 Optimal/Admissible if all operators have the same cost.
Otherwise, not optimal, but guaranteed to find a solution of
shortest length (like BFS).
 Time complexity is a little worse than BFS or DFS because
nodes near the top of the search tree are generated multiple
times, but because almost all of the nodes are near the bottom
of a tree, the worst case time complexity is still exponential,
O(b^d)
 Linear space complexity, O(b·d), like DFS
 Depth First Iterative Deepening combines the advantage of BFS
(i.e., completeness) with the advantages of DFS (i.e., limited
space and finds longer paths more quickly)
 This algorithm is generally preferred for large state spaces
where the solution depth is unknown.
Bi-directional search

 In the other search methods, one starts from the start
node and then the search process explores the different
nodes in search of a goal node. So the search branches out
from the start state.
 In bidirectional search, in addition one will also start from
a goal node and search backwards from the goal node
trying to reach either the start state or one of the states
which is reachable from the start state.
 So, if one can reach from a goal to a state which is also
reachable from the start state then we have found a path
from the start state to a goal state. The strategy which
employs this is called bidirectional search.
Bi-directional search


 Bidirectional search involves alternate searching from the start state toward the
goal and from the goal state toward the start. The algorithm stops when the
frontiers intersect.
 Bidirectional search can sometimes lead to finding a solution more quickly. The
reason can be seen from inspecting the above figure.

However, there may be the case where the envelope of
the forward search and the envelope of the backward
search do not really meet. Then the two search frontiers
are disjoint, and we do not save on expanding any nodes
by doing bidirectional search.
Bi-directional search

 Alternate searching from the start state toward the
goal and from the goal state toward the start.
 Stop when the frontiers intersect.
 Works well only when there are unique start and
goal states.
 Problem: How do we search backwards from goal?
 Requires the ability to generate “predecessor” states.
Bi-directional search

 For bi-directional search to work well, there must be
an efficient way to check whether a given node
belongs to the other search tree.
 Select a given search algorithm for each half.
 Can (sometimes) lead to finding a solution more
quickly.
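
A hedged sketch that alternates one breadth-first layer from each side and
reports where the frontiers intersect (path reconstruction, which needs
parent pointers on both sides, is omitted for brevity; predecessors is the
assumed backward-move generator):

from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Alternate forward search from start and backward search from goal;
    stop when the two frontiers intersect."""
    seen_fwd, seen_bwd = {start}, {goal}
    fringe_fwd, fringe_bwd = deque([start]), deque([goal])
    while fringe_fwd and fringe_bwd:
        for fringe, seen, other, step in (
            (fringe_fwd, seen_fwd, seen_bwd, successors),
            (fringe_bwd, seen_bwd, seen_fwd, predecessors),
        ):
            for _ in range(len(fringe)):      # expand one full layer
                for s in step(fringe.popleft()):
                    if s in other:
                        return s              # the frontiers meet at s
                    if s not in seen:
                        seen.add(s)
                        fringe.append(s)
    return None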
Comparing Search Strategies
