2014wq171 06 ReviewSearch
• Problem formulation:
– Handle infinite or uncertain worlds
• Search methods:
– Uninformed, Heuristic, Local
Complete architectures for intelligence?
• Search?
– Solve the problem of what to do.
• Learning?
– Learn what to do.
• Logic and inference?
– Reason about what to do.
– Encoded knowledge/“expert” systems?
• Know what to do.
• Modern view: It’s complex & multi-faceted.
Search?
Solve the problem of what to do.
• Formulate “What to do?” as a search problem.
– Solution to the problem tells agent what to do.
• If no solution in the current search space?
– Formulate and solve the problem of finding a search
space that does contain a solution.
– Solve original problem in the new search space.
• Many powerful extensions to these ideas.
– Constraint satisfaction; means-ends analysis; etc.
• Human problem-solving often looks like search.
Problem Formulation
A problem is defined by five items:
– initial state
– actions available in each state
– transition model: Result(S,A)
– goal test
– path cost (additive, e.g., sum of step costs)
Vacuum world state space graph
Implementation: states vs. nodes
• A state is a (representation of) a physical configuration
• The Expand function creates new nodes, filling in the various fields using
the Actions(S) and Result(S,A) functions associated with the
problem.
Tree search algorithms
• Basic idea:
– Exploration of state space by generating successors of
already-explored states (a.k.a. expanding states).
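A minimal Python sketch of this basic idea (the example graph and function names are illustrative, not from the slides; the frontier's pop order is what distinguishes the different strategies covered later):

```python
def tree_search(initial, goal_test, successors):
    """Generic tree search: repeatedly expand frontier states until a goal is found."""
    frontier = [(initial, [initial])]   # (state, path); pop order fixes the strategy
    while frontier:
        state, path = frontier.pop(0)   # FIFO here; a stack or heap gives DFS/UCS
        if goal_test(state):
            return path
        for s in successors(state):     # "expanding" the state
            frontier.append((s, path + [s]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
print(tree_search('S', lambda s: s == 'G', lambda s: graph[s]))  # ['S', 'A', 'G']
```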
Tree search example
Repeated states
• Failure to detect repeated states can turn a
linear problem into an exponential one!
• Test is often implemented as a hash table.
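A sketch of the repeated-state check, assuming hashable states (the graph below is illustrative and contains cycles that would trap a plain tree search):

```python
def graph_search(initial, goal_test, successors):
    """Tree search plus an explored set; membership tests are O(1) via hashing."""
    frontier = [initial]
    parents = {initial: None}          # doubles as the explored/seen set
    while frontier:
        state = frontier.pop(0)
        if goal_test(state):
            path = []
            while state is not None:   # walk parent links back to the start
                path.append(state)
                state = parents[state]
            return path[::-1]
        for s in successors(state):
            if s not in parents:       # skip repeated states
                parents[s] = state
                frontier.append(s)
    return None

graph = {'S': ['A', 'B'], 'A': ['S', 'G'], 'B': ['S', 'G'], 'G': []}
print(graph_search('S', lambda s: s == 'G', lambda s: graph[s]))  # ['S', 'A', 'G']
```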
Solutions to Repeated States
[Figure: state space (states S, B, C) and an example of the corresponding search tree]
• Graph search optimal but memory inefficient
Search strategies
• A search strategy is defined by the order of node expansion
Is A a goal state?
Properties of breadth-first search
• Complete? Yes, it always reaches the goal (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
(this is the number of nodes we generate)
• Space? O(b^(d+1)) (keeps every node in memory,
either in the fringe or on a path to the fringe).
• Optimal? Yes (if we guarantee that deeper solutions
are less optimal, e.g. step-cost = 1).
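The generated-node count can be sanity-checked numerically (the values of b and d here are arbitrary):

```python
# Nodes generated by BFS with goal-test at generation:
# all nodes through depth d, plus the children of depth-d nodes minus b.
b, d = 10, 5
generated = sum(b**i for i in range(d + 1)) + (b**(d + 1) - b)
print(generated)  # 1111101, i.e., on the order of b^(d+1)
```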
Uniform-cost search
Uniform-cost Search:
Expand node with smallest path cost g(n).
• Frontier is a priority queue, i.e., new successors are
merged into the queue sorted by g(n).
– Remove successor states already on queue w/higher g(n).
• Goal-Test when node is popped off queue.
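A sketch of these rules with Python's `heapq` as the priority queue (the graph is illustrative; rather than removing higher-g(n) duplicates from the queue, this version simply never pushes a successor that is not cheaper than the best known copy, which has the same effect):

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """Expand the frontier node with the smallest path cost g(n).
    The goal test happens when a node is popped, which preserves optimality."""
    frontier = [(0, start, [start])]              # priority queue keyed on g(n)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # keep only the cheaper copy
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search('S', 'G', lambda s: graph[s]))  # (3, ['S', 'A', 'B', 'G'])
```

Note that the direct edge S→B (cost 5) is discarded once the cheaper route through A (cost 2) is found.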
Depth-first search
• Expand deepest unexpanded node
• Frontier = Last In First Out (LIFO) queue, i.e., new successors
go at the front of the queue.
• Goal-Test when inserted.
Is A a goal state?
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces
Can modify to avoid repeated states along path
• Time? O(b^m) with m = maximum depth
• terrible if m is much larger than d
– but if solutions are dense, may be much faster than
breadth-first
• Space? O(bm), i.e., linear space! (we only need to
remember a single path plus its unexpanded sibling nodes)
• Optimal? No (It may find a non-optimal goal first)
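A sketch of DFS with a LIFO frontier, including the along-the-path repeated-state check mentioned above (graph and names are illustrative):

```python
def depth_first_search(start, goal, successors):
    """LIFO frontier: new successors go on top, so the deepest node expands next.
    Memory is the current path plus its unexpanded siblings."""
    frontier = [(start, [start])]
    while frontier:
        state, path = frontier.pop()            # LIFO
        if state == goal:
            return path
        for s in reversed(successors(state)):   # reversed so left children pop first
            if s not in path:                   # avoid repeats along the current path
                frontier.append((s, path + [s]))
    return None

graph = {'S': ['A', 'B'], 'A': ['S', 'C'], 'B': ['G'], 'C': [], 'G': []}
print(depth_first_search('S', 'G', lambda s: graph[s]))  # ['S', 'B', 'G']
```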
Iterative deepening search
• To avoid the infinite depth problem of DFS, we can
decide to only search until depth L, i.e. we don’t expand beyond depth L.
Depth-Limited Search
Properties of iterative deepening search
• Complete? Yes
• Time? O(b^d)
• Space? O(bd) (linear, like DFS)
• Optimal? Yes, if step cost = 1 or increasing
function of depth.
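A sketch combining depth-limited search with the increasing limit L = 0, 1, 2, … (graph and `max_depth` cap are illustrative):

```python
def depth_limited(state, goal, successors, limit, path):
    """DFS that does not expand beyond the given depth limit."""
    if state == goal:
        return path
    if limit == 0:
        return None
    for s in successors(state):
        if s not in path:                       # avoid repeats along the path
            found = depth_limited(s, goal, successors, limit - 1, path + [s])
            if found:
                return found
    return None

def iterative_deepening(start, goal, successors, max_depth=20):
    """Run depth-limited search with L = 0, 1, 2, ...:
    DFS-like memory use, BFS-like shallowest-solution guarantee."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, successors, limit, [start])
        if found:
            return found
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': [], 'G': []}
print(iterative_deepening('S', 'G', lambda s: graph[s]))  # ['S', 'B', 'G']
```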
Bidirectional Search
• Idea
– simultaneously search forward from S and backwards from G
– stop when both “meet in the middle”
– need to keep track of the intersection of 2 open sets of nodes
• What does searching backwards from G mean?
– need a way to specify the predecessors of G
• this can be difficult,
• e.g., predecessors of checkmate in chess?
– which to take if there are multiple goal states?
– where to start if there is only a goal test, no explicit list?
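A sketch of the meet-in-the-middle idea, assuming an undirected graph (so predecessors equal successors) and a single explicit goal state; graph and names are illustrative:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Alternate one BFS expansion from each side; stop when the frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    q_f, q_b = deque([start]), deque([goal])

    def step(queue, parents, other):
        state = queue.popleft()
        for s in neighbors(state):
            if s not in parents:
                parents[s] = state
                if s in other:              # intersection of the two open sets
                    return s
                queue.append(s)
        return None

    while q_f and q_b:
        meet = step(q_f, parents_f, parents_b) or step(q_b, parents_b, parents_f)
        if meet:
            left, s = [], meet              # start ... meet, via forward parents
            while s is not None:
                left.append(s)
                s = parents_f[s]
            right, s = [], parents_b[meet]  # after meet ... goal, via backward parents
            while s is not None:
                right.append(s)
                s = parents_b[s]
            return left[::-1] + right
    return None

graph = {'S': ['A'], 'A': ['S', 'B'], 'B': ['A', 'G'], 'G': ['B']}
print(bidirectional_search('S', 'G', lambda s: graph[s]))  # ['S', 'A', 'B', 'G']
```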
Summary of algorithms
Criterion Breadth- Uniform- Depth- Depth- Iterative
First Cost First Limited Deepening
DLS
Complete? Yes Yes No No Yes
Time O(bd) O(bC*/ε) O(bm) O(bl) O(bd)
Space O(bd) O(bC*/ε) O(bm) O(bl) O(bd)
Optimal? Yes Yes No No Yes
Best-first search
Heuristic:
Definition: a commonsense rule (or set of rules) intended to
increase the probability of solving some problem
“using rules of thumb to find answers”
• If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n),
i.e., f(n) is nondecreasing along any path.
• Theorem:
If h(n) is consistent, A* using GRAPH-SEARCH is optimal
(GRAPH-SEARCH keeps all checked nodes in
memory to avoid repeated states)
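A sketch of A* GRAPH-SEARCH; the graph and the heuristic values below are illustrative (the h values are chosen to be consistent on this graph):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* GRAPH-SEARCH: with a consistent h, f = g + h never decreases along a
    path, so the first expansion of any node already has its optimal g."""
    frontier = [(h(start), 0, start, [start])]   # ordered by f = g + h
    closed = set()                               # checked nodes kept in memory
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in closed:                      # stale duplicate: skip it
            continue
        closed.add(state)
        for nxt, cost in neighbors(state):
            if nxt not in closed:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h_vals = {'S': 5, 'A': 4, 'B': 2, 'G': 0}        # consistent on this graph
print(astar('S', 'G', lambda s: graph[s], lambda s: h_vals[s]))
# (6, ['S', 'A', 'B', 'G'])
```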
Contours of A* Search
[Figure: A* contour diagram showing nodes (B, C, D, G, I) with their f-values]
Algorithm can tell you when best solution found within memory constraint is optimal or not.
Conclusions
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere,
then h1(n) gives the shortest solution
If the rules are relaxed so that a tile can move to any adjacent square,
then h2(n) gives the shortest solution
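The two relaxed-problem heuristics for the 8-puzzle can be sketched as follows (states are tuples read row by row with 0 as the blank; this encoding is an assumption, not from the slides):

```python
def h1(state, goal):
    """Number of misplaced tiles (blank excluded) -- exact for the
    'move anywhere' relaxation."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def h2(state, goal):
    """Sum of Manhattan distances of tiles from their goal squares -- exact
    for the 'move to any adjacent square' relaxation (3x3 board)."""
    dist = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(h1(state, goal), h2(state, goal))  # 2 2
```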
Hill-climbing Difficulties
1. Compute the gradient: ∂C(x_1, …, x_n)/∂x_i
2. Take a small step downhill in the direction of the gradient:
x_i → x'_i = x_i − λ ∂C(x_1, …, x_n)/∂x_i
3. Check if C(x_1, …, x'_i, …, x_n) < C(x_1, …, x_i, …, x_n)
4. If the cost decreased, accept the step; if not, reject it.
5. Repeat.
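A numerical sketch of this loop, using finite differences for the gradient (the test function, step size, and stopping rule are illustrative choices):

```python
def gradient_descent(C, x, step=0.1, eps=1e-6, iters=100):
    """Estimate dC/dx_i by finite differences, step downhill,
    and keep the move only if the cost actually decreased."""
    for _ in range(iters):
        grad = []
        for i in range(len(x)):                   # step 1: gradient estimate
            xp = list(x)
            xp[i] += eps
            grad.append((C(xp) - C(x)) / eps)
        x_new = [xi - step * gi for xi, gi in zip(x, grad)]   # step 2
        if C(x_new) < C(x):                       # step 3: did cost decrease?
            x = x_new                             # step 4: accept ...
        else:
            step /= 2                             # ... or reject and shrink the step
    return x                                      # step 5: repeat until done

# Minimize C(x) = (x0 - 3)^2 + (x1 + 1)^2; the minimum is at (3, -1)
x = gradient_descent(lambda v: (v[0] - 3)**2 + (v[1] + 1)**2, [0.0, 0.0])
print(round(x[0], 2), round(x[1], 2))  # 3.0 -1.0
```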
Simulated annealing search
• Idea: escape local maxima by allowing some "bad"
moves but gradually decrease their frequency
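A sketch of this idea on a toy 1-D problem (the cost landscape, cooling schedule, and fixed random seed are all illustrative choices):

```python
import math, random

def simulated_annealing(cost, neighbor, x, T0=1.0, cooling=0.995, steps=2000):
    """Always take downhill moves; take uphill ("bad") moves with probability
    exp(-dE / T), where the temperature T decreases each step."""
    random.seed(0)                      # deterministic for this demo
    T = T0
    for _ in range(steps):
        x2 = neighbor(x)
        dE = cost(x2) - cost(x)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = x2                      # accept (possibly uphill) move
        T *= cooling                    # gradually lower the temperature
    return x

# Toy problem: cost over integers 0..20 with several local minima
costs = [abs(i - 15) + 4 * math.cos(i) for i in range(21)]
best = simulated_annealing(lambda i: costs[i],
                           lambda i: max(0, min(20, i + random.choice((-1, 1)))),
                           x=0)
print(best, costs[best])
```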
Properties of simulated annealing search
• One can prove: If T decreases slowly enough, then simulated annealing search
will find a global optimum with probability approaching 1 (however, this may
take VERY long)
– However, in any finite search space RANDOM GUESSING also will find a global optimum with
probability approaching 1 .
• Accepting occasional uphill moves drives the solver away from already-explored
regions and (in principle) avoids getting stuck in local minima.
Local beam search
• Keep track of k states rather than just one.
• If any one is a goal state, stop; else select the k best successors from the
complete list and repeat.
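A sketch of keeping the k best states each round (the scoring function, successor rule, and iteration cap are illustrative):

```python
def local_beam_search(start_states, score, successors, k=2, iters=20):
    """Keep the k best states; each round, pool all successors of all current
    states and keep the k best of the pool (higher score is better)."""
    states = list(start_states)
    for _ in range(iters):
        pool = {s2 for s in states for s2 in successors(s)} | set(states)
        states = sorted(pool, key=score, reverse=True)[:k]
    return states[0]

# Maximize score(i) = -(i - 7)^2 over the integers, moving +-1 each step
best = local_beam_search([0, 20], lambda i: -(i - 7)**2,
                         lambda i: [i - 1, i + 1], k=2)
print(best)  # 7
```

Unlike k independent restarts, the shared pool lets useful states on one beam crowd out hopeless ones on another.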
[Genetic algorithms figure: an individual’s fitness determines its probability of
being regenerated in the next generation]
Linear Programming
• Problems of the sort:
maximize c^T x
subject to: Ax ≤ b; Bx = c
• Very efficient “off-the-shelf” solvers are
available for LPs.