Introduction To Multi-Agent Programming
4. Search Algorithms and Path-Finding
• Problem-Solving Agents
• General Search (Uninformed search)
• Best-First Search (Informed search)
– Greedy Search & A*
• Online Search
– Real-Time Adaptive A*
• Case Study: ResQ Freiburg path planner
• Summary
Problem-Solving Agents
→ Goal-based agents
Formulation: goal and problem
Given: an initial state
Goal: reach the specified goal (a state) through the execution of appropriate actions
→ Search for a suitable action sequence and execute the actions
A Simple Problem-Solving Agent
Problem Formulation
• Goal formulation
World states with certain properties
• Definition of the state space
important: only the relevant aspects → abstraction
• Definition of the actions that can change the world state
• Determination of the search costs (offline costs) and the execution costs (path costs, online costs)
• Successor function (actions): Left (L), Right (R), or Suck (S)
• Goal state: no dirt in the rooms
• Path costs: one unit per action
The Vacuum Cleaner State Space
[Figure: the vacuum cleaner state space; nodes are labeled with state triples such as (3,3,1)]
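The formulation above can be sketched as code. This is a minimal illustration, assuming a two-room encoding (location, dirt-left, dirt-right); the function and state names are chosen for this sketch, not taken from the slides.

```python
# A minimal sketch of the vacuum-cleaner world as a search problem.
# State encoding (an assumption for illustration): (location, dirt_left, dirt_right),
# where location is 0 (left room) or 1 (right room) and the dirt entries are booleans.

def successors(state):
    """Successor function: yields (action, next_state) pairs for L, R, S."""
    loc, dirt = state[0], list(state[1:])
    yield ("L", (0, *dirt))          # move left
    yield ("R", (1, *dirt))          # move right
    clean = dirt[:]
    clean[loc] = False               # suck up the dirt in the current room
    yield ("S", (loc, *clean))

def is_goal(state):
    """Goal state: no dirt in either room."""
    return not state[1] and not state[2]

def step_cost(state, action):
    """Path costs: one unit per action."""
    return 1
```

With this encoding the complete state space has 2 · 2 · 2 = 8 states, which is what the figure enumerates.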
Implementing Search Algorithms
Data structure for nodes in the search tree:
State: state in the state space
Node: contains a state, a pointer to its predecessor, the action, the depth, and the path cost
Depth: number of steps along the path from the initial state
Path Cost: cost of the path from the initial state to the node
Fringe: memory for storing the generated but not yet expanded nodes, for example a stack or a queue
Make-Node: constructs a search-tree node for a given state
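The node structure and the generic search loop can be sketched as follows; this is an illustrative sketch (names and the tiny test problem are assumptions), using a FIFO queue as the fringe, which yields breadth-first search.

```python
from collections import deque

# Sketch of the node data structure described above and a generic search loop.
# The fringe is a FIFO queue here (breadth-first); a stack would give depth-first.

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent          # pointer to predecessor node
        self.action = action          # action that produced this node
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = step_cost if parent is None else parent.path_cost + step_cost

def solution(node):
    """Recover the action sequence by following the predecessor pointers."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def breadth_first_search(start, successors, is_goal):
    """Generic graph search with a FIFO queue as the fringe."""
    fringe = deque([Node(start)])
    explored = {start}
    while fringe:
        node = fringe.popleft()
        if is_goal(node.state):
            return solution(node)
        for action, next_state in successors(node.state):
            if next_state not in explored:
                explored.add(next_state)
                fringe.append(Node(next_state, node, action, 1))
    return None
```

For example, searching for the number 5 from 1 with actions "+1" and "*2" returns the shallowest action sequence.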
Search Strategies
Completeness: Is the strategy guaranteed to find a solution when there is one?
Time Complexity: How long does it take to find a solution?
Space Complexity: How much memory does the search require?
Optimality: Does the strategy find the best solution (with the lowest path cost)?
Breadth-First Search (1)
Example: b = 10, d = 5
Breadth-First Search: 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100 nodes
Iterative Deepening Search: 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450 nodes
m: maximum depth of the search tree; b) if step costs are not less than ε
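The iterative deepening counts above come from restarting a depth-limited depth-first search with increasing limits, so every level is re-generated on each iteration while memory stays linear in the depth. A minimal sketch (tree search, no cycle check; names are assumptions):

```python
# Sketch of iterative deepening search: repeated depth-limited DFS with an
# increasing depth limit. Node counts match the sums above because each
# shallow level is re-generated on every iteration; memory is only O(b*d).

def depth_limited(state, successors, is_goal, limit):
    """Depth-limited DFS; returns an action list or None (no cycle check)."""
    if is_goal(state):
        return []
    if limit == 0:
        return None
    for action, next_state in successors(state):
        sub = depth_limited(next_state, successors, is_goal, limit - 1)
        if sub is not None:
            return [action] + sub
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, successors, is_goal, limit)
        if result is not None:
            return result
    return None
```

Because every limit is tried in order, the first solution found is a shallowest one, like breadth-first search but without its memory cost.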
[Figure: A* route-finding example; the node with the lowest f-value is expanded next, e.g. f = 220 + 193 = 413]
A* Grid World Example
S: start state
G: goal state
→: parent pointer in the A* search tree
g(s): accumulated path cost
h(s): heuristic estimate
f(s) = g(s) + h(s)
[Figure: A* expansion on a grid world; each cell is annotated with g(s), f(s), and its time of expansion]
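The grid world expansion can be sketched in code. This is an illustrative implementation (names and the example grid are assumptions), using a 4-connected grid, unit step costs, and the Manhattan distance as an admissible, consistent heuristic.

```python
import heapq

# Sketch of A* on a 4-connected grid with unit step costs.
# '#' cells are obstacles; the open list is ordered by f(s) = g(s) + h(s).

def astar_grid(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    open_heap = [(h(start), 0, start)]   # entries: (f, g, state)
    parent = {start: None}
    best_g = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:       # follow parent pointers back to S
                path.append(cur)
                cur = parent[cur]
            return list(reversed(path))
        if g > best_g.get(cur, float("inf")):
            continue                     # stale heap entry, skip
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1               # unit step cost
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    parent[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

With a consistent heuristic, the first time the goal is popped from the open list its path is optimal, which is why the function can return immediately.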
• The planner is used not only for path finding but also for task assignment
– For example, prefer high-utility goals with low path costs
– Hence, the planner is frequently called for different goals
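The goal-selection idea above can be sketched as a scoring loop. This is a hypothetical sketch: the function names, the trade-off formula utility − α · path_cost, and the candidate goals are all assumptions for illustration, not taken from the ResQ Freiburg planner.

```python
# Hypothetical sketch of utility-based goal selection: the planner is queried
# for the path cost to each candidate goal, and the agent commits to the goal
# with the best utility-to-cost trade-off. All names/values are illustrative.

def select_goal(candidates, path_cost, alpha=1.0):
    """Pick the goal maximizing utility - alpha * path_cost."""
    best, best_score = None, float("-inf")
    for goal, utility in candidates:
        cost = path_cost(goal)        # one planner call per candidate goal
        if cost is None:
            continue                  # goal is unreachable, skip it
        score = utility - alpha * cost
        if score > best_score:
            best, best_score = goal, score
    return best
```

This is why the planner gets called frequently: every re-evaluation of the candidate goals needs fresh path costs.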
• On my homepage:
– A. Kleiner, M. Brenner, T. Bräuer, C. Dornhege, M. Göbelbecker, M. Luber, J. Prediger, J. Stückler, and B. Nebel. Successful Search and Rescue in Simulated Disaster Areas. RoboCup 2005: Robot Soccer World Cup IX, pp. 323-334, 2005.
• Homepage of Tony Stentz:
– A. Stentz. The Focussed D* Algorithm for Real-Time Replanning. Proc. of the International Joint Conference on Artificial Intelligence, pp. 1652-1659, 1995.
• Homepage of Sven Koenig:
– S. Koenig and X. Sun. Comparing Real-Time and Incremental Heuristic Search for Real-Time Situated Agents. Journal of Autonomous Agents and Multi-Agent Systems, 2009.
– S. Koenig and M. Likhachev. Real-Time Adaptive A*. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 281-288, 2006.
– S. Koenig and M. Likhachev. Fast Replanning for Navigation in Unknown Terrain. IEEE Transactions on Robotics, 21(3), 354-363, 2005.
• Harder to find, also explained in the AIMA book (2nd ed.):
– R. Korf. Real-Time Heuristic Search. Artificial Intelligence, 42(2-3):189-211, 1990.
• Demo search code in Java on the AIMA webpage: http://aima.cs.berkeley.edu/