Lecture 2 Problem Solving As Search, State Space Search

The document discusses state space search strategies for solving problems by artificial agents. It defines key concepts like the components of a search problem, state space graphs, and different search strategies like data-driven and goal-driven search. Examples like the 8-puzzle, traveling salesperson problem, and tic-tac-toe are provided to illustrate state space search.


Advanced AI & Knowledge

Representation
Dr. Basma M. Hassan

Faculty of Artificial Intelligence


Kafr el-Sheikh University

2022/2023
Lecture 3
Resources for this lecture
This lecture covers the following chapters:
• Chapter 3 (Solving Problems by Searching; only sections 3.1, 3.2, & 3.3) from Stuart J. Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach," Third Edition (2010), Pearson Education Inc.

.. AND ..

• Part II (pages 41 to 45) and Chapter 3 (Structures and Strategies for State Space Search; only sections 3.0, 3.1, and 3.2) from George F. Luger, "Artificial Intelligence: Structures and Strategies for Complex Problem Solving," Fifth Edition (2005), Pearson Education Limited.


Solving Problems by SEARCHING
Types of Agents

A Reflex Agent:
• Considers how the world IS.
• Chooses actions based on the current percept.
• Does not consider the future consequences of actions.

A Planning Agent:
• Considers how the world WOULD BE.
• Bases decisions on the (hypothesized) consequences of actions.
• Must have a model of how the world evolves in response to actions.
• Must formulate a goal.

Source: D. Klein, P. Abbeel


State Space Search
• Problems are solved by searching among alternative choices.
• Humans consider several alternative strategies on their way to solving a problem:
  • A chess player considers a few alternative moves.
  • A mathematician chooses from different strategies to find a proof for a theorem.
  • A physician evaluates several possible diagnoses.

Example: Tic-Tac-Toe Game

[Figure: a portion of the tic-tac-toe state space — boards reachable after the opening moves of X and the replies of O.]
Example: Mechanical Fault Diagnosing

Start — ask: What is the problem?
  - Engine trouble — ask: Does the car start?
      - Engine starts — ask: ...
      - Engine won't start — ask: Will engine turn over?
          - Turns over — ask: ...
          - Won't turn over — ask: Do lights come on?
              - Yes: battery ok
              - No: battery dead
  - Transmission — ask: ...
  - Brakes — ask: ...
How Human Beings Think ..?
• Human beings do not search the entire state space (exhaustive search).
• Only alternatives that experience has shown to be effective are explored.
• Human problem solving is based on judgmental rules that limit the exploration of the search space to those portions of the state space that seem somehow promising.
• These judgmental rules are known as "heuristics".
Heuristic Search
• A heuristic is a strategy for selectively exploring the search space.
• It guides the search along lines that have a high probability of success.
• It employs knowledge about the nature of a problem to find a solution.
• It does not guarantee an optimal solution to the problem but can come close most of the time.
• Human beings use many heuristics in problem solving.
Exercises
• Define the following terms:
  • State Space Graph.
  • Exhaustive Search.
  • Heuristics.
• Describe briefly the difference between a Planning Agent & a Reflex Agent.
• Give examples of heuristics that human beings employ in any domain.
Search
We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments.

[Figure: an example environment with a marked Start State and Goal State.]

The agent must find a sequence of actions that reaches the goal. The performance measure is defined by:
• reaching the goal, and ..
• how "expensive" the path to the goal is.
Search Problem Components
• Initial state
• Actions
• Transition model: what state results from performing a given action in each state?
• Goal state
• Path cost: assume that it is a sum of nonnegative step costs.

[Figure: an example with the Initial State, a Solution Path, and the Goal State marked.]

The optimal solution is the sequence of actions that gives the lowest path cost for reaching the goal.
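The five components above can be collected into a small Python class. This is a minimal sketch with illustrative names (not from any textbook's code); it assumes actions are simply "move to a neighboring state":

```python
# Minimal sketch of a search problem: initial state, actions,
# transition model, goal test, and path cost (sum of step costs).
class SearchProblem:
    def __init__(self, initial, goal, graph, costs):
        self.initial = initial   # initial state
        self.goal = goal         # goal state
        self.graph = graph       # dict: state -> list of neighboring states
        self.costs = costs       # dict: (state, state) -> nonnegative step cost

    def actions(self, state):
        # available actions: move to any neighbor
        return self.graph.get(state, [])

    def result(self, state, action):
        # transition model: moving toward a neighbor puts you in that neighbor
        return action

    def is_goal(self, state):
        return state == self.goal

    def path_cost(self, path):
        # sum of nonnegative step costs along a path
        return sum(self.costs[(a, b)] for a, b in zip(path, path[1:]))
```

The optimal solution is then the path minimizing `path_cost` among all paths from `initial` to a state satisfying `is_goal`.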
Example: Romania
- On vacation in Romania; currently in Arad.
- Flight leaves tomorrow from Bucharest.

Initial state
o Arad
Actions
o Go from one city to another
Transition Model
o If you go from city A to
city B, you end up in city B
Goal State
o Bucharest
Path Cost
o Sum of edge costs (total distance traveled)
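As a sketch, a fragment of the Romania map can be written as a Python dictionary (edge distances as in Russell & Norvig), with path cost as the sum of edge costs:

```python
# A fragment of the Romania road map; weights are road distances in km.
ROADS = {
    ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211,
    ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101,
}

def path_cost(path):
    # sum of edge costs along consecutive city pairs
    return sum(ROADS[(a, b)] for a, b in zip(path, path[1:]))

# Two candidate paths from Arad to Bucharest:
via_fagaras = ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']                    # 450 km
via_pitesti = ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest']  # 418 km
```

The second path is cheaper, which is why the optimal solution is defined by path cost rather than by number of actions.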
State Space
The initial state, actions, and transition model define the state space of the problem:
• The set of all states reachable from the initial state by any sequence of actions.
• It can be represented as a directed graph where the nodes are states and the links between nodes are actions.

What is the state space for the Romania problem?

State Space

An AI problem can be represented as a state space graph.
A graph is a set of nodes and the links that connect them.
Graph theory terms:
o Labeled graph.
o Directed graph.
o Path.
o Rooted graph.
o Tree.
o Parent.
o Child.
o Sibling.
o Ancestor.
o Descendant.
State Space
Graph Theory, Königsberg Bridges Problem, & Euler Tour ..

[Figure: the bridges of Königsberg connecting Riverbank 1, Riverbank 2, Island 1, and Island 2 across the river.]
State Space
A state space is represented by a four-tuple [N, A, S, GD]:
• N is the set of nodes or states of the graph. These correspond to the states in the problem-solving process.
• A is the set of arcs between nodes. These correspond to the steps in a problem-solving process.
• S, a nonempty subset of N, contains the start state(s) of the problem.
• GD, a nonempty subset of N, contains the goal state(s) of the problem. The states in GD are described using either:
  • a measurable property of the states encountered in the search, or
  • a property of the solution path developed in the search (a solution path is a path through this graph from a node in S to a node in GD).
Example: Vacuum World

• States:
  • Agent location and dirt locations.
  • How many possible states?
  • What if there are n possible locations?
    o The size of the state space grows exponentially with the "size" of the world!
• Actions:
  • Left, right, suck.
• Transition Model .. ?

Example: Vacuum World State Space Graph

o Transition Model:
[Figure: the vacuum world state space graph — states as nodes, actions Left/Right/Suck as arcs.]
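The exponential growth can be checked by counting: with n locations the agent can stand in any of n places, and each location is independently dirty or clean, giving n * 2^n states (8 for the two-location world above). A one-line sketch:

```python
# Vacuum world state count: n agent positions times 2**n dirt patterns.
def vacuum_states(n):
    return n * 2 ** n

# n = 2 gives the familiar 8-state diagram; n = 10 already gives 10,240 states.
```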
Example: the 8-Puzzle

• States:
  • Locations of tiles.
  o 8-puzzle: 181,440 states (9!/2)
  o 15-puzzle: ~10 trillion states
  o 24-puzzle: ~10^25 states
• Actions:
  • Move blank left, right, up, down.
• Path Cost:
  • 1 per move.
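The 181,440 figure comes from 9!/2: sliding moves preserve permutation parity, so only half of all tile arrangements are reachable from a given start. A quick check:

```python
from math import factorial

# An n x n sliding puzzle has (n*n)!/2 reachable states (parity argument).
def puzzle_states(side):
    return factorial(side * side) // 2

# 8-puzzle (3x3): 9!/2 = 181,440 states
# 15-puzzle (4x4): 16!/2, on the order of 10 trillion states
```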
Example: Robot Motion Planning

o States
o Real-valued joint parameters (angles, displacements).
o Actions
o Continuous motions of robot joints.
o Goal State
o Configuration in which object is grasped.
o Path Cost
o Time to execute, smoothness of path, etc.
Example: Tic-Tac-Toe
• Nodes (N): all the different configurations of Xs and Os that the game can have.
• Arcs (A): generated by legal moves, placing an X or an O in an unused location.
• Start state (S): an empty board.
• Goal states (GD): a board state having three Xs in a row, column, or diagonal.
• The arcs are directed, so there are no cycles in the state space: it is a directed acyclic graph (DAG).
• Complexity: 9! different paths can be generated.
Example: Traveling Salesperson

A salesperson has five cities to visit and then must return home.
• Nodes (N): represent the 5 cities.
• Arcs (A): labeled with a weight indicating the cost of traveling between connected cities.
• Start state (S): the home city.
• Goal states (GD): an entire path containing a complete circuit with minimum cost.
• Complexity: (n-1)! different cost-weighted paths can be generated.
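A brute-force sketch of this state space in Python: fix the home city and score all (n-1)! orderings of the remaining cities. The city names and distance table used here are invented for illustration:

```python
from itertools import permutations

def tour_cost(tour, dist):
    # sum of edge weights along the circuit
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def best_tour(cities, dist, home):
    # exhaustive search: try all (n-1)! orderings of the non-home cities
    rest = [c for c in cities if c != home]
    tours = ([home] + list(p) + [home] for p in permutations(rest))
    return min(tours, key=lambda t: tour_cost(t, dist))
```

This exhaustive enumeration is exactly what becomes infeasible as n grows, which motivates the heuristic strategies discussed earlier.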
State Space Search Strategies

There are two distinct ways of searching a state space graph:

▪ Data-Driven Search (forward chaining):
Start searching from the given data of a problem instance toward a goal.

▪ Goal-Driven Search (backward chaining):
Start searching from a goal state toward the facts or data of the given problem.
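The contrast can be sketched with a toy rule base in the spirit of the fault-diagnosis example; the rules and fact names below are invented for illustration:

```python
# Each rule: (frozenset of premises, conclusion).
RULES = [
    (frozenset({'battery_ok', 'fuel'}), 'engine_starts'),
    (frozenset({'engine_starts'}), 'car_moves'),
]

def data_driven(facts):
    # Forward chaining: fire rules on known facts until nothing new appears.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def goal_driven(goal, facts):
    # Backward chaining: reduce the goal to sub-goals until facts are reached.
    if goal in facts:
        return True
    return any(conclusion == goal and all(goal_driven(p, facts) for p in premises)
               for premises, conclusion in RULES)
```

Both directions traverse the same rule graph; they differ only in whether the search starts from the data or from the goal.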
State Space Search Strategies .. Selecting a Search Strategy

Data-Driven Search is suggested if:
o The data are given in the initial problem statement.
o There are few ways to use the given facts.
o There is a large number of potential goals.
o It is difficult to form a goal or hypothesis.

Goal-Driven Search is appropriate if:
o A goal is given in the problem statement or can easily be formulated.
o There is a large number of rules that produce new facts.
o Problem data are not given but must be acquired by the problem solver.
Exercises
• Define the following terms: Path, Rooted Graph, Tree.
• Describe, using a drawing, the Königsberg bridges problem and the "Euler Tour."
• Determine whether goal-driven or data-driven search would be preferable for solving each of the following problems. Justify your answer.
• You have met a person who claims to be your distant
cousin, with a common ancestor named John Doe.
You would like to verify her claim.
• Another person claims to be your distant cousin. He
doesn't know the common ancestor's name but
knows that it was no more than eight generations
back. You would like to either find this ancestor or
determine that she didn't exist.
• A theorem prover for plane geometry.
Search..?
• Given:
• Initial state
• Actions
• Transition model
• Goal state
• Path cost

• How do we find the optimal solution?


Search: Basic Idea

• Begin at the start state and expand it by making a list of all possible successor states.
• Maintain a frontier, or list of unexpanded states.
• At each step, pick a state from the frontier to expand.
• Keep going until you reach a goal state.
• Try to expand as few states as possible.
Search Tree (the What-if Tree)

A "what if" tree of sequences of actions and outcomes:
▪ i.e., when we are searching, we are not acting in the world, merely "thinking" about the possibilities.
▪ The root node corresponds to the starting state.
▪ The children of a node correspond to the successor states of that node's state.
▪ A path through the tree corresponds to a sequence of actions.
▪ A solution is a path ending in the goal state.

[Figure: a search tree from the Starting State, branching through actions to Successor States, with the Frontier of unexpanded nodes and a Goal State leaf.]

Nodes vs. States ..? A state is a representation of the world, while a node is a data structure that is part of the search tree. A node must keep a pointer to its parent, the path cost, and possibly other info.
Tree Search Algorithm Outline

Initialize the frontier using the starting state.
While the frontier is not empty:
• Choose a frontier node according to the search strategy and take it off the frontier.
• If the node contains the goal state, return the solution.
• Else, expand the node and add its children to the frontier.
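The outline transcribes almost line-for-line into Python. This sketch stores whole paths on the frontier and pops in FIFO order; choosing a different pop rule is exactly choosing a different search strategy:

```python
from collections import deque

def tree_search(start, goal, successors):
    # frontier holds paths, not bare states, so the solution path is free
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()        # choose a node per the search strategy
        state = path[-1]
        if state == goal:
            return path                  # solution: the path ending in the goal
        for child in successors(state):  # expand: add children to the frontier
            frontier.append(path + [child])
    return None                          # frontier exhausted, no solution
```

Note that, as written, this can revisit states; handling repeated states is addressed next.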
Tree Search Example

Start: Arad
Goal: Bucharest
Handling Repeated States

• Initialize the frontier using the starting state.
• While the frontier is not empty:
  • Choose a frontier node according to the search strategy and take it off the frontier.
  • If the node contains the goal state, return the solution.
  • Else, expand the node and add its children to the frontier.

To handle repeated states:
• Every time you expand a node, add that state to the explored set; do not put explored states on the frontier again.
• Every time you add a node to the frontier, check whether it already exists with a higher path cost, and if yes, replace that node with the new one.
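A sketch of the loop with both repeated-state rules, using a priority queue ordered by path cost (a uniform-cost-style realization; the `best` table makes a stale, costlier frontier entry lose to the cheaper one):

```python
import heapq

def graph_search(start, goal, successors, cost):
    frontier = [(0, [start])]            # (path cost, path), cheapest popped first
    best = {start: 0}                    # cheapest known cost per state
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        if state in explored:            # stale entry: a cheaper copy was expanded
            continue
        explored.add(state)              # never expand a state twice
        for child in successors(state):
            g2 = g + cost(state, child)
            if child not in explored and g2 < best.get(child, float('inf')):
                best[child] = g2         # effectively replaces the costlier entry
                heapq.heappush(frontier, (g2, path + [child]))
    return None
```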
Backtracking Search

"Backtracking is a technique for systematically trying all paths through a state space."

It begins at the start state and pursues a path until:
• finding a goal, then quit and return the solution path; or
• finding a dead end, then backtrack to the most recent unexamined node and continue down one of its branches.
Backtracking Search

[Figure: a search tree explored from the Start Node; several branches end in Dead Ends before a solution path is found.]
Backtracking Algorithm Data Structures

State List (SL): lists the states in the current path being tried. If the goal is found, it contains the solution path.

New State List (NSL): contains nodes awaiting evaluation.

Dead Ends (DE): lists states whose descendants have failed to contain a goal node.

Current State (CS): the state currently under consideration.
Backtracking Algorithm
Function Backtrack;
Begin
  SL := [Start]; NSL := [Start]; DE := [ ]; CS := Start;
  while NSL ≠ [ ] do
  begin
    if CS = goal (or meets goal description) then return (SL);
    if CS has no children (excluding nodes already on DE, SL, and NSL)
    then begin
      while SL is not empty and CS = the first element of SL do
      begin
        add CS to DE;
        remove first element from SL;
        remove first element from NSL;
        CS := first element of NSL;
      end;
      add CS to SL;
    end
    else begin
      place children of CS on NSL; (except nodes on DE, SL, or NSL)
      CS := first element of NSL;
      add CS to SL;
    end;
  end;
  return FAIL;
end;
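A compact recursive rendering of the same idea in Python: the call stack plays the role of NSL, the current path plays SL, and returning None from a subtree is the dead-end (DE) case:

```python
def backtrack(state, goal, successors, path=None):
    path = path or [state]
    if state == goal:
        return path                      # SL: the solution path
    for child in successors(state):
        if child in path:                # skip nodes already on the current path
            continue
        found = backtrack(child, goal, successors, path + [child])
        if found:
            return found
    return None                          # dead end: unwind to the caller
```

Unlike the iterative version, this sketch does not keep a global DE list, so a dead-end state reached along a different path would be retried; that is the trade-off for brevity.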
Exercises

• The following are two problems that can be solved using state-space search techniques. You should:
  • Suggest a suitable representation for the problem state.
  • State what the initial and final states are in this representation.
  • State the available operators/rules for getting from one state to the next, giving any conditions on when they may be applied.
  • Draw the first two levels of the directed state-space graph for the given problem.