
Artificial Intelligence

Chapter Three: Problem Solving by Searching


Search objective
• Building Goal-Based Agents
• Searching in State Space
• Searching Strategies
Building Goal-Based Agents
• We have a goal to reach
– Driving from City A to City B
– Put 8 queens on a chess board such that no queen
attacks another
• We have information about the initial state; where
we are now at the beginning
• We have a set of operators; a series of actions
that can be taken to move around (change state)
• Objective: find a sequence of actions which will
enable the agent to achieve its goal optimally
Searching Defined
• Examine different possible sequences of actions &
states, and come up with the specific optimal
sequence of actions that will take you from the initial
state to the goal state
–Given a state space with initial state and goal state, find
optimal sequence of actions leading through a sequence
of states to the final goal state

• Searching in State Space


– select optimal (min-cost/max-profit) node/state in the state
space
– test whether the state selected is the goal state or not
– if not, expand the node further to identify its successors.
Search algorithm
•Two functions needed for conducting search
– Generator (or successors) function: Given a state and action,
produces its successor states (in a state space)
– Tester (or IsGoal) function: tells whether a given state S is a
goal state: IsGoal(S) → True/False
IsGoal and Successors functions depend on the problem domain (see the sketch below).
•Two lists maintained during searching
–OPEN list: stores the nodes that have been generated but not yet expanded
–CLOSED list: stores the nodes that have already been expanded and explored
Generally search proceeds by examining each node on the
OPEN list; performing some expansion operation that adds its
children to the OPEN list, & moving the node to the CLOSED list
• Merge function: given the successor nodes, it either appends or
prepends them to the OPEN list, or reorders the list based on evaluation cost
• Path cost: function assigning a numeric cost to each path;
either from initial node to current node and/or from current
node to goal node
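
To make these ingredients concrete, here is a minimal, hypothetical problem definition in Python; the graph, node names, and step costs below are invented purely for illustration and are not one of the exercises in this chapter.

# Hypothetical problem definition: a tiny weighted graph invented for illustration.
ROUTE_GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 8)],
    "A": [("D", 3), ("E", 7)],
    "B": [("G", 9)],
    "C": [("G", 5)],
    "D": [],
    "E": [("G", 4)],
    "G": [],
}

def successors(state):
    """Generator function: returns (successor state, step cost) pairs."""
    return ROUTE_GRAPH.get(state, [])

def is_goal(state):
    """Tester function: IsGoal(S) -> True/False."""
    return state == "G"
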
Search algorithm
• Input: a given problem (Initial State + Goal state + transit
states and operators)
• Output: returns optimal sequence of actions to reach the goal.
function GeneralSearch (problem, strategy)
open = (initialState); //put initial state in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open)))
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
succ = Successors (f);
left = not-in-closed (succ, closed);
open = merge (rest(open), left); //append or prepend left to open list
end while
return ('fail')
end GeneralSearch
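
A minimal Python sketch of the GeneralSearch loop above, reusing the toy successors/is_goal helpers sketched earlier; the merge argument stands in for the strategy-specific queuing rule (append for breadth-first, prepend for depth-first, cost-ordered for uniform cost).

def general_search(start, successors, is_goal, merge):
    """Sketch of GeneralSearch(problem, strategy)."""
    open_list = [(start, [start])]        # each entry: (state, path from the initial state)
    closed = set()                        # states already expanded and explored
    while open_list:
        state, path = open_list.pop(0)    # f = remove_first(open)
        if is_goal(state):
            return path                   # solution: sequence of states to the goal
        closed.add(state)
        children = [(s, path + [s])
                    for s, _cost in successors(state)
                    if s not in closed]   # left = not-in-closed(succ, closed)
        open_list = merge(open_list, children)
    return None                           # 'fail'

# A FIFO merge (append to the end) gives breadth-first behaviour on the toy graph.
print(general_search("S", successors, is_goal, lambda rest, new: rest + new))  # -> ['S', 'B', 'G']
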
Algorithm Evaluation:
Completeness and Optimality
• Does the search strategy find solutions to problems?
– Does it ever produce incorrect solutions?
• Completeness
– Does the algorithm guarantee to find a solution whenever
one exists?
– Think about the density of solutions in the space and evaluate
whether the search technique is guaranteed to find a
solution or not.
• Optimality
– Does the algorithm find an optimal solution, i.e. the one that
minimizes cost or maximizes profit?
• how good is our solution?
Algorithm Evaluation: Time & Space
Tradeoffs
• With many computing projects, we worry about:
– Speed versus memory
– Time complexity: how long does it take to find a solution
– Space complexity: how much space is used by the
algorithm
• Fast programs
– tend to use up too much memory
• Memory-efficient programs
– tend to be slow
• We consider various search strategies
– In terms of their memory/speed tradeoffs
Searching Strategies
•Search strategy gives the order in which the search
space is examined
•Uninformed (= blind) search
– they do not have domain knowledge to guide them towards
the goal
– Have no information about the number of steps or the path
cost from the current state to the goal
– It is important for problems for which there is no additional
information to consider
•Informed (= heuristic) search
– have problem-specific knowledge (knowledge that is true
from experience)
– Have knowledge about how far the various states are from the
goal
– Can find solutions more efficiently than uninformed search
Search Methods:
• Uninformed search
– Breadth first
– Depth first
– Uniform cost
– Depth limited search
– Iterative deepening
– etc.

• Informed search
– Greedy search
– A*-search
– Iterative improvement
– Constraint satisfaction
– etc.
Breadth first search
•Expand shallowest unexpanded
node,
–i.e. expand all nodes on a given level
of the search tree before moving to the
next level
•Implementation: use queue data
structure to store the list:
–Expansion: put successors at the end
of queue
–Pop nodes from the front of the queue
•Properties:
– Takes space: keeps every node in
memory
– Optimal and complete: guaranteed to
find a solution
Algorithm for Breadth first search
function BFS (problem){
open = (C_0); //put initial state C_0 in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f)
succ = Successors (f);
l = not-in-set (succ, closed);
open = merge ( rest(open), l); //append to the list
}
return ('fail')
}
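
A self-contained Python sketch of breadth-first search using a FIFO queue (collections.deque); the small unweighted graph is made up for the example.

from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand the shallowest node first (FIFO queue)."""
    frontier = deque([[start]])              # queue of paths; pop from the front
    reached = {start}                        # states already generated
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in neighbors.get(node, []):
            if child not in reached:
                reached.add(child)
                frontier.append(path + [child])   # successors go to the back
    return None

# Hypothetical unweighted graph, invented for the example.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": ["G"]}
print(bfs("S", "G", graph))                  # finds a shallowest path: ['S', 'B', 'G']
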
Exercise
• Apply BFS to find an optimal path from start
node to Goal node.
Uniform cost Search
•The goal of this technique is to find the shortest path to the
goal in terms of cost.
–It modifies the BFS by always expanding least-cost
unexpanded node
•Implementation: nodes in list keep track of total path
length from start to that node
–List kept in priority queue ordered by path cost
[Figure: uniform-cost search example on a graph with start S, intermediate nodes A, B, C and goal G; at each step the open node with the lowest path cost g is expanded]

•Properties:
– This strategy finds the cheapest solution provided the cost of
a path never decreases as we go along the path:
g(successor(n)) ≥ g(n), for every node n
– Takes space since it keeps every node in memory
Algorithm for Uniform Cost search
function uniform_cost (problem){
open = (C_0); //put initial state C_0 in the List
g(C_0) = 0;
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
succ = Successors (f);
l = not-in-set (succ, closed);
for each li in l: g(li) = g(f) + c(f,li); //path cost of each successor
open = merge(rest(open), l, g); //keep the open list sorted in
ascending order by path cost
}
return ('fail')
}
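
A self-contained Python sketch of uniform-cost search with a priority queue (heapq) ordered by the path cost g(n); the weighted graph and its costs are made up for the example.

import heapq

def uniform_cost_search(start, goal, graph):
    """Uniform-cost search: always expand the open node with the lowest g(n)."""
    frontier = [(0, start, [start])]             # entries: (path cost g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in explored:
            continue                             # a cheaper copy was already expanded
        explored.add(state)
        for child, step_cost in graph.get(state, []):
            if child not in explored:
                heapq.heappush(frontier, (g + step_cost, child, path + [child]))
    return None

# Hypothetical weighted graph, invented for the example.
graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 11)], "B": [("G", 5)], "C": [("G", 5)]}
print(uniform_cost_search("S", "G", graph))      # -> (10, ['S', 'B', 'G'])
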
Depth-first search
•Expand one of the nodes at the deepest
level of the tree.
–Only when the search hits a non-goal dead
end does the search go back and expand
nodes at shallower levels
•Implementation: treat the list as stack
–Expansion: push successors at the top of
stack
–Pop nodes from the top of the stack
•Properties
–Incomplete and not optimal: fails in infinite-
depth spaces, spaces with loops.
–Takes less space (Linear): Only needs to
remember up to the depth expanded
Algorithm for Depth first search
function DFS (problem){
open = (C_0); //put initial state C_0 in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
succ = Successors (f)
l = not-in-set (succ, closed );
open = merge ( rest(open), l); //prepend to the list
}
return ('fail')
}
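
A self-contained Python sketch of depth-first search with an explicit stack; the unweighted graph is the same made-up example used for BFS.

def dfs(start, goal, neighbors):
    """Depth-first search: expand the deepest node first (LIFO stack)."""
    stack = [[start]]                        # stack of paths; pop from the top
    visited = set()
    while stack:
        path = stack.pop()                   # last pushed, first expanded
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in neighbors.get(node, []):
            if child not in visited:
                stack.append(path + [child])   # successors pushed on top
    return None

# Hypothetical unweighted graph, invented for the example.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": ["G"]}
print(dfs("S", "G", graph))                  # returns a path, not necessarily the shortest
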
Iterative Deepening Search (IDS)
•IDS solves the issue of choosing the best depth limit by trying
all possible depth limits:
–Perform depth-first search to a bounded depth d, starting at d = 1 and
increasing it by 1 at each iteration.
•This search combines the benefits of DFS and BFS
–DFS is efficient in space, but has no path-length guarantee
–BFS finds min-step path towards the goal, but requires memory space
–IDS performs a sequence of DFS searches with increasing depth-cutoff
until goal is found

[Figure: search trees explored with depth limits 0, 1, and 2]


Algorithm for IDS
function IDS (problem){
for (limit = 0; limit <= maxDepth; limit = limit + 1) { //try increasing depth limits
open = (C_0); //put initial state C_0 in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if (IsGoal (f)) then return (f);
closed = append (closed, f);
l = not-in-set (Successors (f), closed);
if (depth(f) < limit) then
open = merge (rest(open), l); //prepend to the list
}
}
return ('fail')
}
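
A self-contained Python sketch of IDS: a recursive depth-limited DFS wrapped in a loop over increasing depth limits; the graph is again made up for the example.

def depth_limited_search(node, goal, neighbors, limit, path):
    """DFS bounded at depth 'limit'; returns a path to the goal or None."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in neighbors.get(node, []):
        if child not in path:                # avoid cycles along the current path
            found = depth_limited_search(child, goal, neighbors,
                                         limit - 1, path + [child])
            if found is not None:
                return found
    return None

def iterative_deepening_search(start, goal, neighbors, max_depth=20):
    """IDS: repeat depth-limited DFS with limit d = 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, neighbors, limit, [start])
        if result is not None:
            return result
    return None

# Hypothetical unweighted graph, invented for the example.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"], "C": [], "D": ["G"]}
print(iterative_deepening_search("S", "G", graph))   # shallowest path: ['S', 'B', 'G']
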
Bidirectional Search (Reading assignment)
• Simultaneously search both forward from the initial
state to the goal and backward from the goal to the
initial state, and stop when the two searches meet
somewhere in the middle
–Requires an explicit goal state and invertible operators (or
backward chaining).
–Decide what kind of search is going to take place in each
half using BFS, DFS, uniform cost search, etc.

[Figure: two search frontiers, one growing forward from Start and one backward from Goal, meeting in the middle]
Bidirectional Search
• Advantages:
– Only need to go to half depth
– It can enormously reduce time complexity, but is not
always applicable

• Difficulties
– Do you really know the goal state? Is it unique?
– Operators may not be reversible
– Memory requirements may be important: Record all
paths to check they meet
• Memory intensive

• Note that if a heuristic function is inaccurate, the
two searches might miss one another.
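
A self-contained Python sketch of bidirectional search: one BFS frontier grows forward from the start, one backward from the goal, and the search stops when they meet; the graph is made up and treated as undirected so that operators are trivially invertible.

from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two BFS frontiers, one from each end, until they meet in the middle."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        """Expand one node; return the meeting state if the frontiers touch."""
        node = frontier.popleft()
        for child in neighbors.get(node, []):
            if child not in parents:
                parents[child] = node
                if child in other_parents:
                    return child
                frontier.append(child)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            # stitch the forward half and the backward half at the meeting state
            forward = []
            n = meet
            while n is not None:
                forward.append(n)
                n = parents_f[n]
            path = list(reversed(forward))
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

# Hypothetical undirected graph, invented for the example.
edges = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
         "C": ["A", "G"], "D": ["B", "G"], "G": ["C", "D"]}
print(bidirectional_search("S", "G", edges))     # e.g. ['S', 'A', 'C', 'G']
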
Comparing Uninformed Search Strategies

Strategy                     Complete    Optimal   Time complexity   Space complexity
Breadth first search         yes         yes       O(b^d)            O(b^d)
Depth first search           no          no        O(b^m)            O(bm)
Uniform cost search          yes         yes       O(b^d)            O(b^d)
Depth limited search         if l >= d   no        O(b^l)            O(bl)
Iterative deepening search   yes         yes       O(b^d)            O(bd)
Bi-directional search        yes         yes       O(b^(d/2))        O(b^(d/2))
• b is branching factor,
• d is depth of the shallowest solution,
• m is the maximum depth of the search tree,
• l is the depth limit
Exercise: Uninformed Search Strategies
• Assume that node 3 is the initial state and
node 5 is the goal state
Exercise: Apply Uninformed Search
Strategies to identify optimal path

[Figure: weighted graph for the exercise, with start node S, intermediate nodes A, B, C, D, E, goal node G, and edge costs on the arcs]
Informed search
o Search efficiency would improve greatly if there
is a way to order the choices so that the most
promising nodes are explored first.
 This requires domain knowledge of the problem (i.e.
heuristic) to undertake focused search
o An informed search strategy uses problem-specific
knowledge
 can find solutions more efficiently
o the knowledge to make this determination is provided
by an evaluation function
 that returns a number describing the desirability of
expanding the node
Informed search cont..

o The general approach we consider is called best-first search.


o Best-first search is an approach in which a node is selected for
expansion based on an evaluation function f(n).
 choose the node that appears to be best according to the evaluation
function
o Because they aim to find low-cost solutions, these algorithms
typically use some estimated measure of the cost of the
solution and try to minimize it.
o Two basic approaches to Best First search
1. Greedy Best First search
 tries to expand the node closest to the goal.
2. A* search
 tries to expand the node on the least-cost solution path.
a) Greedy Best First search
o One of the simplest best-first search strategies is to minimize
the estimated cost to reach the goal.
 That is, the node whose state is judged to be closest to the goal
state is always expanded first
o the cost of reaching the goal from a particular state can be
estimated but cannot be determined exactly.
o A function that calculates such cost estimates is called a
heuristic function, and is usually denoted by the letter h:
 h(n) = estimated cost of the cheapest path from the state at node n
to a goal state
o Greedy search uses h(n) to select the next node to expand
o Note that h(n) = 0 if n is a goal.
o heuristic functions are problem-specific (domain-specific)
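
A self-contained Python sketch of greedy best-first search, which orders the open list by h(n) alone; the graph and the heuristic values are made up for illustration (not the AASTU/Piassa map used in the example below).

import heapq

def greedy_best_first(start, goal, graph, h):
    """Greedy best-first search: always expand the node with the smallest h(n)."""
    frontier = [(h[start], start, [start])]          # ordered by heuristic value only
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for child, _step_cost in graph.get(state, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Hypothetical weighted graph and heuristic values, invented for the example.
graph = {"S": [("A", 2), ("B", 3)], "A": [("G", 9)], "B": [("C", 3)], "C": [("G", 3)]}
h = {"S": 7, "A": 4, "B": 5, "C": 3, "G": 0}
print(greedy_best_first("S", "G", graph, h))         # -> ['S', 'A', 'G'] (cost 11)

On this made-up data the greedy strategy commits to the node with the smallest h and returns the path through A with cost 11, even though the path through B and C costs only 9; this is one way to see why greedy search is not optimal.
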
Greedy Best First search Cont..

• A good heuristic function for route-finding problems is the
straight-line distance to the goal.
• That is, h_SLD(n) = straight-line distance between n and the
goal location
Example: greedy search to find a path from AASTU to Piassa
• With the straight-line-distance heuristic, the first node to be
expanded from AASTU will be Meshawlakia, because it is
closer to Piassa than Koye.
• The next node to be expanded will be Kality, then Stadium
because it is closest.
• Stadium in turn generates Piassa, which is the goal.
Initial State: AASTU
Goal State: Piassa

Greedy search Cont..

o For this particular problem, the heuristic leads to
minimal search cost:
 it finds a solution without ever expanding a node that is not
on the solution path.
o However, it is not perfectly optimal:
 because the path it found via Meshawlakia may be longer
than the one via Koye in general
o The path via Koye was not found because Meshawlakia is
closer to Piassa in straight-line distance than Koye, so
it was expanded first
o The strategy prefers to take the biggest possible bite out of
the remaining cost to reach the goal,
 without worrying about whether this will be best in the long
run; hence the name "greedy search"
Greedy search Cont..

 Greedy search is susceptible to false starts.
 Consider the problem of getting from Mexico to Piassa.
 The heuristic suggests that Merkato be expanded first, but it is
a dead end.
 Hence, in this case, the heuristic causes unnecessary nodes to
be expanded.
 Furthermore, if we are not careful to detect repeated states,
the solution will never be found
– the search will oscillate between Merkato and Mexico

Greedy search Cont..

Space and Time complexity of greedy


 it is not optimal
 it is incomplete because it can start down an infinite
path (dead end) and never return to try other
possibilities.
 The worst-case time complexity for greedy search is
O(b^m),
where m is the maximum depth of the search
space.
 Because greedy search retains all nodes in memory,
its space complexity is the same as its time complexity.
 With a good heuristic function, the space and time
complexity can be reduced substantially.
b) A* search
• Greedy search minimizes the estimated cost to the
goal, h(n), and thereby cuts the search cost
considerably.
 But it is neither optimal nor complete.
• Uniform-cost search, on the other hand, minimizes
the cost of the path so far, g(n);
 it is optimal and complete, but can be very inefficient.
• It would be nice if we could combine these two
strategies to get the advantages of both.
• we can do exactly that, combining the two
evaluation functions simply by summing them:
f(n) = g(n) + h(n).
A* search Cont..

 g(n): gives the path cost from the start node to node n,
 h(n): is the estimated cost of the cheapest path from n to the goal,
– Hence we have f(n) = estimated cost of the cheapest solution through n
 Since A* search tries to find the cheapest solution, a reasonable thing to try first is the
node with the lowest value of f(n).
Admissible heuristics
– The restriction is to choose an h(n) function that never
overestimates the cost to reach the goal.
– Such an h is called an admissible heuristic.
– Admissible heuristics are by nature optimistic, because they think
the cost of solving the problem is less than it actually is.
– If h is admissible, f(n) never overestimates the actual cost of the
best solution through n.
– Best-first search using f(n) as the evaluation function and an
admissible h function is known as A* search
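
A self-contained Python sketch of A* search, which orders the open list by f(n) = g(n) + h(n); it reuses the same made-up graph and admissible heuristic values as the greedy sketch above, so the two results can be compared.

import heapq

def a_star(start, goal, graph, h):
    """A* search: expand the open node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]        # entries: (f, g, state, path)
    best_g = {start: 0}                               # cheapest g found so far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                                  # stale entry; a cheaper one exists
        for child, step_cost in graph.get(state, []):
            g2 = g + step_cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

# Same hypothetical data as the greedy sketch; h never overestimates the true cost.
graph = {"S": [("A", 2), ("B", 3)], "A": [("G", 9)], "B": [("C", 3)], "C": [("G", 3)]}
h = {"S": 7, "A": 4, "B": 5, "C": 3, "G": 0}
print(a_star("S", "G", graph, h))                     # -> (9, ['S', 'B', 'C', 'G'])

Unlike the greedy sketch, which returned the more expensive path through A, A* weighs the cost already paid (g) against the estimate to go (h) and finds the cheaper route through B and C.
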
Conditions for optimality: Admissibility and
consistency

 Check admissibility of the estimated cost h(n):
- make sure that h(n) never overestimates the true cost
to reach the goal
– h*(n): actual cost of the cheapest path from n to a goal
state (not known in advance)
- h(n) is said to be an admissible heuristic
function if for all n, h(n) ≤ h*(n)
– Using an admissible heuristics guarantees that the
solution found by searching algorithm is optimal
 Notice that the A* search prefers to expand from Koye rather
than from Meshawlakia.
 Even though Meshawlakia is closer to Piassa, the path
taken to get to Meshawlakia is not as efficient in getting
close to Piassa as the path taken to get to Koye
Conditions for optimality: Admissibility and consistency cont..

• Consistency is required only for applications of A∗
to graph search.
• A heuristic h(n) is consistent if, for every node n and
every successor n’ of n generated by any action a,
the estimated cost of reaching the goal from n is not
greater than the step cost of getting to n’ plus the
estimated cost of reaching the goal from n’:
h(n) ≤ c(n, a, n’) + h(n’)
As we mentioned earlier, A∗ has the following properties: the
tree-search version of A∗ is optimal if h(n) is admissible, while
the graph-search version is optimal if h(n) is consistent.
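
Both conditions can be checked mechanically on a small graph. The sketch below, using hypothetical data, computes the true cost-to-goal with a Dijkstra pass over reversed edges to test admissibility, and scans every edge for a violation of h(n) ≤ c(n, a, n’) + h(n’).

import heapq

def check_heuristic(graph, h, goal):
    """Return (is_admissible, consistency_violations) for heuristic h on graph."""
    # True cheapest cost to the goal from every node: Dijkstra over reversed edges.
    reverse = {}
    for n, edges in graph.items():
        for m, c in edges:
            reverse.setdefault(m, []).append((n, c))
    true_cost = {goal: 0}
    queue = [(0, goal)]
    while queue:
        d, n = heapq.heappop(queue)
        if d > true_cost.get(n, float("inf")):
            continue
        for m, c in reverse.get(n, []):
            if d + c < true_cost.get(m, float("inf")):
                true_cost[m] = d + c
                heapq.heappush(queue, (d + c, m))
    admissible = all(h[n] <= true_cost.get(n, float("inf")) for n in h)
    violations = [(n, m) for n, edges in graph.items()
                  for m, c in edges if h[n] > c + h[m]]
    return admissible, violations

# Same hypothetical data as the A* sketch above.
graph = {"S": [("A", 2), ("B", 3)], "A": [("G", 9)], "B": [("C", 3)], "C": [("G", 3)]}
h = {"S": 7, "A": 4, "B": 5, "C": 3, "G": 0}
print(check_heuristic(graph, h, "G"))   # -> (True, [('S', 'A')])

On this invented data the heuristic is admissible, but the edge from S to A violates consistency, which is exactly the distinction the slide draws between the tree-search and graph-search optimality guarantees for A*.
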
Example
• Meshawlakia = 6 + 28 = 34
• Koye = 2 + 29 = 31
 So, Koye will be selected
• Goro = 14 + 29 = 33
• AASTU = 32 + 2 = 34
 Goro will be selected
• Megenagna = 7 + 12 = 19
• Koye = 29 + 14 = 43
 Then Megenagna will be selected
• Arat Kilo = 9 + 5 = 13
• Stadium = 9 + 7 = 16
• Kality = 22 + 25 = 47
• Goro = 7 + 19 = 26
 So, Arat Kilo will be selected
• Piassa = 6 + 0 = 6
• Stadium = 5 + 7 = 12
• Megenagna = 9 + 12 = 21
 Finally, Piassa will be selected and it is
a goal as h(n) = 0.
