
Chapter Three

Solving Problems by
Searching and Constraint
Satisfaction Problem

3.1 Problem Solving by Searching

• A problem is a goal and a means for achieving the
goal. The process of exploring what the means
can do is search. Search is the process of
considering various possible sequences of
operators applied to the initial state and finding
a sequence which culminates in a goal state.
• The goal specifies the state of affairs we want to
bring about, and the means specifies the operations
we can perform in an attempt to bring it about.
The solution will be a sequence of operations
(actions) leading from the initial state to the goal
state (a plan).
Problem Solving Strategies
• An important aspect of intelligence is goal-based
problem solving.
• To build a system to solve a particular problem,
we need to do four things:
– Define the problem precisely
– Analyze the problem
– Isolate and represent the task knowledge necessary to
solve the problem
– Choose the best problem-solving technique and apply
it to the particular problem
• The solution of many problems can be described
by finding a sequence of actions that leads to a
desired goal.
• A well-defined problem can be described
by:
a) Initial state
b) Operators (rules), i.e. the successor function
c) State space – all states reachable from the initial state by any
sequence of actions
d) Path – a sequence through the state space
e) Path cost – a function that assigns a cost to a path
f) Goal test – a test to determine whether a state is a goal state
• Search
• It is a systematic examination of states to find paths from
the start state to the goal state.
• The set of possible states, together with the operators
defining their connectivity, constitutes the search (problem)
space.
In real life search usually results from a lack of knowledge.

Example A: The 8-puzzle

      S (start)        G (goal)
      2 8 3            1 2 3
      1 6 4            8   4
      7   5            7 6 5

State: the locations of the blank and the eight tiles
Operators: blank moves left, right, up, and down
Goal state: match G
Path cost: each step costs 1, so the path cost is the length of the path
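The operators above can be made concrete with a small piece of code. The following Python sketch (not part of the original slides; the state encoding and the name successors are illustrative assumptions) represents a state as a 9-tuple read row by row, with 0 standing for the blank, and generates the states produced by the four blank moves:

```python
# Minimal sketch (not from the slides): an 8-puzzle state is a tuple of 9
# numbers read row by row, with 0 standing for the blank square.
START = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)
GOAL  = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)

def successors(state):
    """Yield every state reachable by sliding the blank one step."""
    i = state.index(0)                # position of the blank
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(i - 1)   # blank moves left
    if col < 2: moves.append(i + 1)   # blank moves right
    if row > 0: moves.append(i - 3)   # blank moves up
    if row < 2: moves.append(i + 3)   # blank moves down
    for j in moves:
        s = list(state)
        s[i], s[j] = s[j], s[i]       # swap the blank with the neighbouring tile
        yield tuple(s)
```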
Example B:
You are given two jugs, a 4-gallon one and a 3-gallon one.
Neither has any measuring marks on it. There is a tap
that can be used to fill the jugs with water. How can
you get exactly 2 gallons of water into the 4-gallon
jug?
Specify the initial state, the goal state, and all the possible
operators for reaching the goal state from the start state.
Solution:

• There are many possible ways to formulate the
problem as search.
– The initial state is {0,0}.
– The goal state is {2,x}, where x can take any value.
– There are only a small number of available actions (e.g. fill
the 4-gallon jug), and these can be simply represented as rules
(operators) as follows:
1. Fill the 4-gallon jug: {x,y} → {4,y}
2. Fill the 3-gallon jug: {x,y} → {x,3}
3. Empty the 4-gallon jug into the 3-gallon one: {x,y} → {0,x+y} (if x+y <= 3)
4. Empty the 3-gallon jug into the 4-gallon one: {x,y} → {x+y,0} (if x+y <= 4)
5. Fill the 4-gallon jug from the 3-gallon one: {x,y} → {4,x+y-4} (if x+y > 4)
6. Fill the 3-gallon jug from the 4-gallon one: {x,y} → {x+y-3,3} (if x+y > 3)
7. Empty the 3-gallon jug: {x,y} → {x,0}
8. Empty the 4-gallon jug: {x,y} → {0,y}
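As an illustration of how these rules could be encoded, here is a minimal Python sketch (not from the slides; the names RULES and successors are illustrative). Each rule is a guarded transformation of the state (x, y), where x is the content of the 4-gallon jug and y the content of the 3-gallon jug:

```python
# Minimal sketch (illustrative names): the eight jug rules as guarded
# transformations of the state (x, y) = (4-gallon jug, 3-gallon jug).
RULES = [
    ("fill 4-gallon jug",     lambda x, y: (4, y)),
    ("fill 3-gallon jug",     lambda x, y: (x, 3)),
    ("empty 4 into 3",        lambda x, y: (0, x + y) if x + y <= 3 else None),
    ("empty 3 into 4",        lambda x, y: (x + y, 0) if x + y <= 4 else None),
    ("fill 4 from 3",         lambda x, y: (4, x + y - 4) if x + y > 4 else None),
    ("fill 3 from 4",         lambda x, y: (x + y - 3, 3) if x + y > 3 else None),
    ("empty 3-gallon jug",    lambda x, y: (x, 0)),
    ("empty 4-gallon jug",    lambda x, y: (0, y)),
]

def successors(state):
    """Apply every applicable rule to (x, y) and yield the resulting states."""
    x, y = state
    for name, rule in RULES:
        new = rule(x, y)
        if new is not None and new != state:
            yield name, new
```

Calling successors((0, 0)) yields exactly the two states (4, 0) and (0, 3), which are the first steps of the two options on the next slide.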
Possible answers
• Option 1:
  (0,3) --- rule 2
  (3,0) --- rule 4
  (3,3) --- rule 2
  (4,2) --- rule 5
  (0,2) --- rule 8
  (2,0) --- rule 4
• Option 2:
  (4,0) --- rule 1
  (1,3) --- rule 6
  (1,0) --- rule 7
  (0,1) --- rule 3
  (4,1) --- rule 1
  (2,3) --- rule 6
  (2,0) --- rule 7
Example C:
A farmer has a goat, a wolf and a cabbage
on the west side of the river. He wants to
get all of his animals and his cabbage across
the river onto the east side. The farmer has
a boat, but it only has enough room for
himself and one other thing. The goat will
eat the cabbage if they are left together
alone. The wolf will eat the goat if they are
left alone. How can the farmer get
everything to the east side?

• Possible solution:
– State space representation:
• We can represent the states of the problem with two
sets, W and E. We can also have symbols
for the elements of the two sets: f, g, w, c representing
the farmer, goat, wolf, and cabbage.
• Operators:
– Move f from E to W and vice versa
– Move f and one of g, c, w from E to W and vice versa
• Start state:
– W={f,g,c,w}, E={}
• Goal state:
– W={}, E={f,g,c,w}

• One possible solution:
– Farmer takes goat across the river: W={w,c}, E={f,g}
– Farmer comes back alone: W={f,c,w}, E={g}
– Farmer takes wolf across the river: W={c}, E={f,g,w}
– Farmer comes back with the goat: W={f,g,c}, E={w}
– Farmer takes cabbage across the river: W={g}, E={f,w,c}
– Farmer comes back alone: W={f,g}, E={w,c}
– Farmer takes goat across the river: W={}, E={f,g,w,c}

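A minimal Python sketch (illustrative, not from the slides) of the safety constraint behind this solution: a state is described by the set of items on the east bank, and a bank is unsafe if the goat is left alone with the cabbage or the wolf is left alone with the goat:

```python
# Minimal sketch (not from the slides): a state is the set of items on the
# east bank; everything else is on the west bank.
ITEMS = {"f", "g", "w", "c"}

def safe(bank):
    """A bank is safe if the farmer is there, or if no forbidden pair is left alone."""
    if "f" in bank:
        return True
    return not ({"g", "c"} <= bank or {"g", "w"} <= bank)

def valid(east):
    """Both banks must be safe for the state to be allowed."""
    west = ITEMS - east
    return safe(east) and safe(west)
```

For instance, valid({'g'}) is True (the goat alone on the east bank is safe), while valid({'g', 'c'}) is False.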
Formulating a problem
• There are four essentially different types of problems:
– Single-state problems
– Multiple-state problems
– Contingency problems
– Exploration problems
• Single-state problems
– All world states are known
– The current state is known
– All results of actions are known
– Deterministic and static: the solution can be planned before execution

• Multiple-state problems
– All world states are known
– Only some of the current states are known
– All results of actions are known
– Deterministic
• Contingency problems
– All world states are known
– Only some of the current states are known
– Only some results of actions are known
– Non-deterministic: must use sensors during execution
• Exploration problems
– The world states are unknown
– The current state is unknown
– The results of actions are unknown
– Non-deterministic
Example: Vacuum Cleaner

[Figure: the eight possible states of the two-cell vacuum world, numbered 1–8]

Vacuum Cleaner…
• Single-state
– Start in #5. Solution?
• If the initial state is 5, then the agent can calculate the result of its
actions: move right and suck.
• Multiple-state
– Start in {1,2,3,4,5,6,7,8}
• The agent can discover that the sequence {right, suck, left, suck} is
guaranteed to reach a goal state no matter what the initial state is.
• Contingency
– The agent can solve the problem only if it can sense and act during
execution. For instance, suppose the suck action sometimes deposits
dirt where there is none.
• Exploration
– Unknown state space
– The agent has no information about the effects of its actions
– The agent must experiment, gradually discovering what its actions do
and what sorts of states exist.
Searching for Solutions
• Typical AI problems can have solutions in two
forms.
– The first is a state that satisfies the requirements.
– The second is a path specifying the way in which
one has to traverse the state space to get a solution.
• A good search technique should satisfy the following
requirements:
– A search technique should be systematic
– A search technique should make changes in the
database of states it explores

Searching…
• Search strategies are generally evaluated in
terms of the following four criteria:
– Completeness: Is the strategy guaranteed to find a
solution when there is one?
– Time Complexity: How long does it take to find a
solution?
– Space Complexity: How much memory does it need
to perform the search?
– Optimality: Does the strategy find the highest-quality
solution when there are several different
solutions?
Uninformed search
Route Planning in a Map
• A map is a graph where nodes are cities and links are
roads. This is an abstraction of the real world.
• The map gives the world dynamics: starting at city X on the
map and taking some road gets you to city Y.
• The world (the set of cities) is finite and enumerable.
• Usually, when working with a map, we assume that
we know where we are, although this is not
always true.
• The real world is far richer than the map. When
traveling:
– Bridges may be out
– Roads may be closed for construction
– The nature of the road may vary
– The traffic situation changes
Route Finding
• Romania route

[Figure: simplified Romania road map with the cities abbreviated as Z, O, F, S, B, A, R, L, P, T, D, M, C]

Don't expand a node (or add it to the agenda) if it has already
been expanded.
Expanding the graph
• Put the start state in the agenda
• Loop
– Get a state from the agenda
• If it is a goal, then return
• Expand the state (put its children in the agenda)

Which state is chosen from the agenda defines the type of
search, and it may have a huge impact on the effectiveness of
attaining the goal.
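A minimal Python sketch of this agenda loop (illustrative; the names graph_search, expand and is_goal are assumptions, not from the slides). The only thing that changes between strategies is where the next state is taken from:

```python
# Minimal sketch (illustrative): the generic agenda-driven search loop.
# `expand(state)` is assumed to return the children of a state.
def graph_search(start, is_goal, expand, pop_front=True):
    agenda = [start]
    expanded = set()                       # don't expand a node twice
    while agenda:
        state = agenda.pop(0) if pop_front else agenda.pop()
        if is_goal(state):
            return state
        if state in expanded:
            continue
        expanded.add(state)
        agenda.extend(expand(state))       # put the children in the agenda
    return None                            # no goal state is reachable
```

Taking states from the front of the agenda gives breadth-first behaviour; taking them from the back gives depth-first behaviour.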
Uninformed search
• Uninformed (blind) search
– Uses no problem-specific knowledge that would allow it to
prefer one node over another for expansion
– There are numerous different blind searches that may
be used.
– The choice of the search algorithm may lead to vastly
different results depending on the search space.
– The algorithms may have different characteristics in
terms of search completeness, time
complexity, space complexity, and
optimality.
Breadth First Search (BFS)
• It is usually implemented using a queue,
initialized with one element, the start state.
States are removed from the front of the
queue and tested to see whether they are
goal states; if so, the search terminates; if
not, the state is expanded and the resulting
states are added to the back of the queue.
Algorithm for BFS
• Start with a queue [initial state] and found=false
• While the queue is not empty and not found do
– Remove the first node N from the queue
– If N is a goal state then found=true
– Otherwise find all the successor nodes of N and put them at the end of the
queue

Breadth-first search works even in trees that are effectively
infinitely deep.
However, BFS is wasteful when all paths lead to the goal node
at more or less the same depth. It is also not effective if the
branching factor is large or infinite.
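A runnable sketch of this algorithm in Python (illustrative; the adjacency-list representation of graph is an assumption, not from the slides). It keeps whole paths on the queue so that the solution path can be returned:

```python
from collections import deque

# Minimal BFS sketch (illustrative): `graph` maps a node to its successor list.
def bfs(graph, start, goal):
    queue = deque([[start]])               # the queue holds whole paths
    visited = {start}
    while queue:
        path = queue.popleft()             # remove the first node's path
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):  # successors go to the back of the queue
            if child not in visited:
                visited.add(child)
                queue.append(path + [child])
    return None
```

For example, on the toy graph {'S': ['A', 'B'], 'A': ['D'], 'B': ['C'], 'C': ['G']}, bfs(graph, 'S', 'G') returns ['S', 'B', 'C', 'G'].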
BFS…

[Figure: the simplified Romania road map from the previous slide]

• Treat the agenda as a queue
• Expansion: put children at the end of the queue
• Get new nodes from the front of the queue
• You expand all the nodes at one level of the
tree before you go to the next level. To
make this happen, you put children at the
end of the agenda and you pop them from
the front, so the search expands the cities
outward level by level.
The start state is A; the goal state is B.

[Figure: the simplified Romania road map from the previous slides]

Step 1: Pop A and check whether it is a goal; it is not, so add its
children to the queue. Queue: Z, S, T
Step 2: Pop Z and check whether it is a goal; it is not, so add its
children to the queue.
…

[Figure: the resulting BFS search tree rooted at A]

The order of expansion of the nodes is: A, Z, S, T, O, O, F, R, L, S, F, B, P, C, D, M


• Example 2: Consider the following graph representing
the state space and operators of a navigation problem.
In what order will BFS expand the nodes?

[Figure: state-space graph with nodes S, A, B, C, D, E, F, H and goal node G]

• S is the start state and G is the goal state
• When placing expanded child nodes on the queue, assume that the
child nodes are placed in alphabetical order (e.g. if node S is expanded,
the queue will be A, B)
• Assume that we never generate child nodes that appear as ancestors
of the current node in the search tree
• Example 3: Suppose that you need to find a
path between S and G in the following graph. The
number attached to each edge represents the
COST of traversing that edge.

[Figure: weighted graph over the nodes S, A, B, D, E, F, G, with the cost of each edge marked]

List the nodes in the order in which they are expanded by
BFS while looking for the solution.
Evaluation of BFS
• Completeness: Is the strategy guaranteed
to find a solution when there is one?
– Yes, if the branching factor is finite; however, BFS is a bad idea if the
branching factor is large or infinite, because of the exponential explosion
• Optimality: Does BFS find the highest-quality
solution when there are several
different solutions?
– Not necessarily: BFS finds the shallowest goal, which is optimal only
when all step costs are equal
Evaluation of BFS
• Time Complexity: How long does it take to find a
solution?
• Space Complexity: How much memory does it
take to find a solution?
• Time and space complexity are measured in terms of
– b – maximum branching factor of the search tree
– d – depth of the least-cost solution
– m – maximum depth of the state space (may be
infinite)
• Time Complexity: O(b^d) => b^0 + b^1 + b^2 + … + b^d
• Space Complexity: O(b^d) => b^0 + b^1 + b^2 + … + b^d
Depth-First Search (DFS)
• DFS traverses the search space by expanding the state that is
deepest in the search tree first.
• The basic algorithm is implemented using a stack that is
initialized with a single value, the start state. It terminates
when the goal state is found. Upon expanding a state, each
resulting state is pushed onto the stack.
• Algorithm
– Start with a stack [initial state] and found=false
– While the stack is not empty and not found
• Remove the first node N from the stack
• If N has not been visited then:
– Add N to visited
– If N is a goal state then found=true and exit
– Otherwise put N's successors on the front (top) of the stack
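A Python sketch of this stack-based algorithm (illustrative; it uses the same adjacency-list convention as the BFS sketch above):

```python
# Minimal DFS sketch (illustrative): the agenda is used as a stack, so the
# deepest path is always extended first.
def dfs(graph, start, goal):
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()                  # take the most recently added path
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            return path
        # push successors so that the alphabetically first child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None
```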
• Example 1: A path is to be found from the start
node S to the goal state G.

[Figure: state-space graph with start node S, intermediate nodes A, B, C, D, E, F and goal node G]

A search tree can be made from the graph above. Each node of
the tree denotes a path; each child node denotes a path that is a
one-step extension of the path denoted by its parent.

[Figure: the corresponding search tree rooted at S]
• DFS is a good idea when you are confident that all partial
paths either reach dead ends or become complete paths in
a reasonable number of steps.
• The real problem with depth-first search is that it cannot
recover from early poor choices.
• Example 2: Consider the following state space, in which
the states are shown as nodes labeled A through F. A is
the initial state and E is the goal state. Show how DFS
finds a solution in this state space by writing down, in
order, the names of the nodes removed from the agenda.
Assume the search halts when the goal state is removed.

[Figure: state-space graph with nodes A, B, C, D, E, F; A is the initial state and E is the goal state]
Attributes of DFS
• Search Completeness: DFS is complete only if the search space has
finite depth and does not contain cycles
• Search Optimality: DFS is not optimal: it does not
always find a least-cost solution.
• Time Complexity: The time complexity of DFS is
O(b^m), where m is the maximum depth and b is the branching factor
• Space Complexity: DFS needs to store only the current
path from the root, together with the unexpanded siblings
of the nodes on that path, so its memory requirement is
roughly O(b·m) nodes.
• Exercise 1
– Suppose that you need to find a path between
S and G in the following graph. For each of the
following search methods, list the nodes in the
order in which they are expanded by the search
method while looking for the solution:
• DFS
• BFS

[Figure: graph with start node S, goal node G and intermediate nodes A, B, C]
• Exercise 2: Suppose that you need to find a path between
S and G in the following graph. For both the DFS and BFS search
methods, list the nodes in the order in which they are
expanded while looking for a solution.

[Figure: graph with start node S, goal node G and intermediate nodes A, C, D, F, H]
Depth Limited Search (DLS)
• DLS is simply DFS with a limit on the depth of
states to investigate.
• It avoids the main drawback of DFS by imposing a cut-off
on the maximum depth of the search.
• The problem now is that it is hard to choose a depth
limit without knowing whether a solution will be found at
that depth or a lesser one.
• The cut-off improves the completeness attribute.
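A minimal recursive Python sketch of DLS (illustrative names, not from the slides), which cuts the search off when the depth limit is reached:

```python
# Minimal sketch (illustrative): DFS with a cut-off on depth. Returns a path
# to the goal, or None if no goal is found within `limit` steps.
def dls(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                         # cut-off reached
    for child in graph.get(node, []):
        if child not in path:               # avoid cycles along the current path
            result = dls(graph, child, goal, limit - 1, path)
            if result is not None:
                return result
    return None
```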
Iterative Deepening Search (IDS)
• The limitation of DLS is that there is no reliable method
for choosing a depth limit that is guaranteed to include at least one
solution.
• Iterative deepening solves this drawback by repeatedly
searching the tree at incremental depths; i.e., it checks
the entire tree at depth 0, then depth 1, depth 2 and so on.
• The search begins by doing a DLS with a limit of l; if a
goal is not found, the limit is incremented and the search tried again,
until a goal is found.
• Intuitively one can see that this strategy is redundant,
since at each depth limit the strategy re-expands the
nodes that were already checked at the preceding limits.
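Iterative deepening can then be written as a thin wrapper around the depth-limited sketch above (illustrative; max_limit is only a safety bound for the example):

```python
# Minimal sketch (illustrative): repeatedly run the depth-limited search `dls`
# above with limits 0, 1, 2, ... until a solution is found.
def ids(graph, start, goal, max_limit=50):
    for limit in range(max_limit + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```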
Informed Search
(heuristic)
Informed Search (heuristic)
• In informed search, we make available to the strategy
some problem-specific knowledge about the likely cost of
the path from each node on the list to a goal node.
• It works by deciding which is the next best node to
expand. It is usually more efficient than blind search.
• This knowledge is usually encapsulated in the form of a
heuristic function that estimates the distance of a state
from the goal state using some meaningful measure.
• In heuristic search, you focus on the paths that seem
to be getting you nearer to your goal state.
Informed Search (heuristic)...
• In informed search there is an estimate available
of the cost (distance) from each state (city) to the
goal.
• This estimate (heuristic) can help you move in the
right direction.
• The heuristic is embodied in a function h(n), an estimate
of the remaining cost from search node n to the least-cost
goal.
• The search strategies use this h(n) to
inform the search.
Generate and Test
• The simplest approach:
1. Generate a possible solution
– Generate a particular point in the problem space, or
– Generate a path from a start state
2. Test to see whether this is actually a solution by comparing the
chosen point, or the end point of the chosen path, to the
set of acceptable goal states.
3. If a solution is found, quit; otherwise return to step 1.
• If generation is systematic, the search can be considered complete.
• If the problem space is very large, time complexity becomes
the problem.
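A minimal Python sketch of this loop (illustrative; the candidate generator and the goal test are assumed to be supplied by the caller):

```python
# Minimal generate-and-test sketch (illustrative): walk through a stream of
# candidate solutions and return the first one that passes the goal test.
def generate_and_test(candidates, is_solution):
    for candidate in candidates:      # step 1: generate a possible solution
        if is_solution(candidate):    # step 2: test it against the goal states
            return candidate          # step 3: quit if it is a solution
    return None                       # otherwise generation is exhausted
```

For instance, passing a systematic generator such as itertools.permutations(range(3)) and the test lambda p: p == (2, 1, 0) returns the first matching permutation.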
Greedy Search (Best First Search)
• Minimizes the estimated cost to reach a goal
• Expands the node that appears closest to the goal (minimum h(n))
• The criterion function f uses only the heuristic
function h:
– f(n) = h(n)
• Always expands the heuristically best node
(hence "Best First Search")
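A Python sketch of greedy best-first search (illustrative; h is assumed to be a dictionary of heuristic estimates and graph an adjacency list, neither of which comes from the slides). A priority queue keyed on h(n) ensures that the heuristically best node is expanded first:

```python
import heapq

# Minimal greedy best-first sketch (illustrative): nodes are expanded in
# order of smallest h(n), ignoring the cost of the path so far.
def greedy_search(graph, h, start, goal):
    frontier = [(h[start], [start])]        # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], path + [child]))
    return None
```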
• Example: route finding using Greedy Search

[Figure: Romania road map with the cities abbreviated as N, I, O, S, F, V, Z, B, U, H, A, R, T, L, P, E, M, G, D, C]

The initial state is A and the goal state is B.


• Straight-line distances to the goal (B):

  A – 366    F – 176    L – 244    P – 100    U – 80
  B – 0      G – 77     M – 241    R – 193    V – 199
  C – 160    H – 151    N – 234    S – 253    Z – 374
  D – 242    I – 226    O – 380    T – 329
  E – 161

[Figure: the greedy search tree rooted at A; each child is labeled with its h value
(S: 253, T: 329, Z: 374, then F: 176, O: 380, R: 193, …)]
• Example 2: Imagine the problem of finding a
route on a road map. Use the Best First Search
strategy to find a path between S and G.

[Figure: road map with start node S, goal node G and intermediate nodes A, B, C, D, E, F]

Define h(n) to be the straight-line distance from each node to G.

[Figure: the same map with the straight-line distance to G marked at each node,
e.g. A: 11, B: 6, C: 4]
A* Search
• It has been seen that Greedy Search uses a heuristic function
h(n) to decide which node to expand next. It was also
shown that this strategy might not find the least costly
path, because it does not take the cost of the path so far
into consideration when minimizing cost.
• To remove this restriction, a new evaluation function f(n) is
introduced, where
• h(n) gives the estimated cost from the node to the goal
• g(n) gives the cost of the path so far
• A* improves on greedy search by using both criterion
functions:
– f(n) = g(n) + h(n)
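A Python sketch of A* built on the same idea (illustrative; graph is assumed to map each node to (neighbour, step cost) pairs and h is the heuristic table, neither of which comes from the slides). The priority queue is ordered by f(n) = g(n) + h(n):

```python
import heapq

# Minimal A* sketch (illustrative): nodes come off the queue in order of
# f(n) = g(n) + h(n); cheaper rediscoveries of a node replace earlier ones.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, [start])]      # entries are (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, path + [child]))
    return None, float("inf")
```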
A* Example:

[Figure: Romania road map with the cost of each road segment marked on the edges,
e.g. A–S: 140, A–Z: 75, A–T: 118, S–F: 99, S–R: 80, S–O: 151]
Solution

[Figure: the A* search tree rooted at A, with each node labeled by f(n) = g(n) + h(n):
from A, the children are S (140+253), Z (75+374) and T (118+329); expanding S gives
F (140+99+176), R (140+80+193) and O (140+151+380); the expansion continues toward
the goal B.]
