AI CH-3
Problem solving
Compiled By Shushay T.
Problem
• It is a gap between what actually is and what is desired.
– A problem exists when an individual becomes aware of the
existence of an obstacle which makes it difficult to achieve a
desired goal or objective.
• A number of problems are addressed in AI, both:
–Toy problems: are problems that are useful to test and
demonstrate methodologies.
• Can be used by researchers to compare the performance of different
algorithms
• e.g. 8-puzzle, n-queens, vacuum cleaner world, towers of
Hanoi, …
–Real-life problems: are problems that have much greater
commercial/economic impact if solved.
• Such problems are more difficult and complex to solve, and
there is no single agreed-upon description
• E.g. Route finding, Traveling sales person, etc.
Solving a problem
Formalize the problem: Identify the collection of information
that the agent will use to decide what to do.
• Define states
– States describe distinguishable stages during the problem-
solving process
– Example- What are the various states in route finding problem?
• The various places including the location of the agent
• Define the available operators/rules for getting from one
state to the next
– Operators cause an action that brings transitions from one state
to another by applying on a current state
• Suggest a suitable representation for the problem
space/state space
– Graph, table, list, set, … or a combination of them
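As an illustration, the route-finding state space above can be represented as a graph stored as an adjacency dictionary. A minimal Python sketch; the intermediate place names and distances are invented for illustration and are not from the slides:

```python
# Hypothetical route-finding state space between Main-Campus and
# Referral-Campus; keys are states, values map neighbours to step costs.
state_space = {
    "Main-Campus":     {"Adi-Haki": 4, "Ayder": 7},
    "Adi-Haki":        {"Main-Campus": 4, "Referral-Campus": 6},
    "Ayder":           {"Main-Campus": 7, "Referral-Campus": 3},
    "Referral-Campus": {"Adi-Haki": 6, "Ayder": 3},
}

def successors(state):
    """Operators: drive from the current place to any adjacent place."""
    return sorted(state_space[state])

print(successors("Main-Campus"))  # → ['Adi-Haki', 'Ayder']
```

The same graph could equally be stored as a table or a list of edges; the dictionary form makes the successor function a one-line lookup.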
State space of the problem
• The state space is the set of all states reachable from the
initial state by any sequence of actions, i.e. by iteratively
applying the available operators
• State space (also called search space/problem space) of the
problem includes the various states
– Initial state: defines where the agent starts or begins its task
– Goal state: describes the situation the agent attempts to achieve
– Transition states: other states in between initial and goal states
Example – Find the state space for route finding problem where the
agent wants to go from Main-Campus to Referral-Campus.
– Think of the states reachable from the initial state until we reach the goal
state.
The 8 puzzle problem
• Arrange the tiles so that all the tiles are in the correct
positions. You do this by moving a tile or the blank space.
You can move a tile/space up, down, left, or right, so long
as the following conditions are met:
A) there's no other tile blocking you in the direction of
the movement; and
B) you're not trying to move outside of the
boundaries/edges.
Initial state    Goal state
 1 2 3            1 2 3
 8 4 5            8   4
 7 6              7 6 5
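The movement rules above can be sketched as a successor function. A hedged Python sketch; representing a state as a 3×3 tuple with 0 for the blank is an assumption, not from the slides:

```python
def successors(state):
    """Return the states reachable by sliding one tile into the blank (0)."""
    flat = [t for row in state for t in row]
    i = flat.index(0)                         # position of the blank
    r, c = divmod(i, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:       # condition B: stay inside the board
            j = nr * 3 + nc                   # condition A is implicit: the blank
            new = flat[:]                     # is always the square moved into
            new[i], new[j] = new[j], new[i]
            result.append(tuple(tuple(new[k*3:k*3+3]) for k in range(3)))
    return result

start = ((1, 2, 3), (8, 0, 4), (7, 6, 5))
print(len(successors(start)))  # blank in the centre: 4 possible moves
```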
Assignment 2 August 20
Missionary-and-cannibal problem:
Three missionaries and three cannibals are on one
side of a river that they wish to cross. There is a
boat that can hold one or two people. Find an
action sequence that brings everyone safely to the
opposite bank (i.e. Cross the river). But you must
never leave a group of missionaries outnumbered
by cannibals on the same bank (in any place).
1. Identify the set of states and operators
2. Show using suitable representation the state space
of the problem
Knowledge and types of problems
• There are four types of problems:
–Single state problems (Type 1)
–Multiple state problems (Type 2)
–Exploration problems (Type 3)
–Contingency Problems (Type 4)
• This classification is based on the level of knowledge that
an agent can have concerning its action and the state of
the world
• Thus, the first step in formulating a problem for an agent
is to see what knowledge it has concerning
–The effects of its action on the environment
–Accessibility of the state of the world
• This knowledge depends on how it is connected to the
environment via its percepts and actions
Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations
– Each location may or may not contain dirt
– The agent may be in one location or the other
• Eight possible world states
• Three possible actions (Left, Right, Suck)
– Suck operator cleans the dirt
– Left and Right operators move the agent from location to
location
• Goal: to clean up all the dirt
• The vacuum cleaner searches for a path from the initial state to the goal state
Clean House Task
Vacuum Cleaner State Space
Single state problem
• Fully observable: The world is accessible to the agent
– It can determine its exact state through its sensors
– The agent’s sensor knows which state it is in
• Deterministic: The agent knows exactly the effect of its actions
– It can then calculate exactly which state it will be in after any
sequence of actions
• Action sequence is completely planned
Example - Vacuum cleaner world
– What will happen if the agent is initially at state = 5 and
formulates action sequence - [Right, Suck]?
– Agent calculates and knows that it will get to a goal state
• Right → {6}
• Suck → {8}
Multiple state problems
• Partially observable: The agent has limited access to the world state
–It might not have sensors to get full access to the environment states or as
an extreme, it can have no sensors at all (due to lack of percepts)
• Deterministic: The agent knows exactly what each of its actions do
–It can then calculate which state it will be in after any sequence of actions
• If the agent has full knowledge of how its actions change the world,
but does not know of the state of the world, it can still solve the task
Example - Vacuum cleaner world
–Agent’s initial state is one of the 8 states: {1,2,3,4,5,6,7,8}
–Action sequence: {right, suck, left, suck}
–Because agent knows what its actions do, it can discover and reach to goal
state.
Right → {2,4,6,8}   Suck → {4,8}
Left → {3,7}   Suck → {7}
Contingency Problems
• Partially observable: The agent has limited access to the world
state
• Non-deterministic: The agent is ignorant of the effect of its
actions
• Sometimes ignorance prevents the agent from finding a
guaranteed solution sequence.
• Suppose the agent is in Murphy’s law world
–The agent has to sense during the execution phase, since things
might have changed while it was carrying out an action. This
implies that
• the agent has to compute a tree of actions, rather than a linear
sequence of action
– Example - Vacuum cleaner world:
• action ‘Suck’ deposits dirt on the carpet, but only if there is no dirt
there already. Depositing dirt rather than sucking it results from the
agent’s ignorance about the effects of its actions
Contingency Problems (cont…)
• Example - Vacuum cleaner world
– What will happen given initial state {1,3}, and action
sequence: [Suck, Right, Suck]?
{1,3} →[Suck]→ {5,7} →[Right]→ {6,8} →[Suck]→ {8,6} (failure)
• Is there a way to solve this problem?
– Thus, solving this problem requires local sensing, i.e.
sensing during the execution phase,
– Start from one of the states {1,3}, and take improved action
sequence [Suck, Right, Suck (only if there is dirt there)]
• Many problems in the real world are contingency
problems (exact prediction is impossible)
– For this reason many people keep their eyes open while
walking around or driving.
Exploration problem
• The agent has no knowledge of the environment
– World not observable : No knowledge of states (environment)
• Unknown state space (no map, no sensor)
– Non-deterministic: No knowledge of the effects of its actions
– Problem faced by (intelligent) agents (like newborn babies)
• This is a kind of problem in the real world rather than in a
model, which may involve significant danger for an ignorant
agent. If the agent survives, it learns about the environment
• The agent must experiment, learn and build the model of the
environment through its results, gradually, discovering
– What sort of states exist and What its action do
– Then it can use these to solve subsequent (future) problems
• Example: in solving Vacuum cleaner world problem the agent
learns the state space and effects of its action sequences say:
[suck, Right]
Well-defined problems and solutions
To define a problem, we need the following elements:
states, operators, goal test function and cost function.
•The Initial state: is the state that the agent starts in or
begins with.
– Example- the initial state for each of the following:
• Coloring problem
– All rectangles white
• 8-puzzle problem ??
Operators
• The set of possible actions available to the agent, i.e.
–which state(s) will be reached by carrying out the action in a
particular state
–A Successor function S(x)
• Given state x, S(x) returns the set of states reachable from x by
any single action/operator
•Example:
–Coloring problem: Paint with black color
paint (color w, color b)
–Route finding problem: Drive through cities/places
drive (place x, place y)
– 8-puzzle ??
Goal test function
• A function the agent executes to determine whether it has
reached the goal state or not
• Example:
– Route finding problem: Reach Addis Ababa airport on
time
IsGoal(x, Addis_Ababa)
– Coloring problem: All rectangles black
IsGoal(rectangle[], n)
– 8-puzzle ??
Path cost function
• A function that assigns a cost to a path (sequence of
actions).
– Is often denoted by g. Usually, it is the sum of the costs of the
individual actions along the path (from one state to another state)
– Measure path cost of a sequence of actions in the state space.
For example, we may prefer paths with fewer or less costly
actions
• Example:
– Route finding problem:
• Path cost from initial to goal state
– Coloring problem:
• One for each transition from state to state till goal state is reached
– 8-puzzle?
Steps in problem solving
• Goal formulation
–is a step that specifies exactly what the agent is trying to achieve
–This step narrows down the scope that the agent has to look at
• Problem formulation
–is a step that puts down the actions and states that the agent has to
consider given a goal (avoiding any redundant states), like:
• the initial state
• the allowable actions etc…
• Search
–is the process of looking for the various sequence of actions that
lead to a goal state, evaluating them and choosing the optimal
sequence.
• Execute
–is the final step that the agent executes the chosen sequence of
actions to get it to the solution/goal
Assignment (End Class)
Consider one of the following problems:
– Missionary-and-cannibal problem
– Towers of Hanoi
– Tic-Tac-Toe
– Travelling sales person problem
– 8-queens
1. Identify the set of states and operators and
construct the state space of the problem
2. Write the goal test function
3. Determine the path cost
Searching
Searching
• Examine different possible sequences of actions & states,
and come up with specific sequence of operators/actions
that will take you from the initial state to the goal state
–Given a state space with initial state and goal state, find optimal
sequence of actions leading through a sequence of states to the
final goal state
Search Tree
•The searching process is like building a search tree that is superimposed
over the state space
– A search tree is a representation in which nodes denote paths and branches
connect paths. The node with no parent is the root node. The nodes with no
children are called leaf nodes.
Example: Route finding Problem
•Partial search tree for route finding from Pol to Keb
(a) The initial state: Pol (the goal test is applied to it)
(b) After expanding Pol: new states Pa, Ho, Ab are generated
(c) Choosing one option (Ho) and expanding it in turn generates further states (Pe, Keb, …)
Search algorithm
•Two functions needed for conducting search
– Generator (or successors) function: Given a state and action, produces
its successor states (in a state space)
– Tester (or IsGoal) function: Tells whether given state S is a goal state
IsGoal(S) → True/False
– IsGoal and Successors functions depend on problem domain.
•Two lists maintained during searching
–OPEN list: stores the nodes we have seen but not explored
–CLOSED list: the nodes we have seen and explored
–Generally, search proceeds by examining each node on the OPEN list,
performing some expansion operation that adds its children to the OPEN
list, and moving the node to the CLOSED list.
• Merge function: Given successor nodes, it either appends, prepends, or
orders them based on evaluation cost
• Path cost: function assigning a numeric cost to each path; either from
initial node to current node and/or from current node to goal node
Search algorithm
• Input: a given problem (Initial State + Goal state + transit states and
operators)
• Output: returns optimal sequence of actions to reach the goal.
function GeneralSearch (problem, strategy)
    open = (initialState);               //put the initial state in the OPEN list
    closed = {};                         //maintain list of nodes examined earlier
    while (not (empty (open)))
        f = remove_first(open);
        if IsGoal (f) then return (f);
        closed = append (closed, f);
        succ = Successors (f);
        left = not-in-closed (succ, closed);
        open = merge (rest(open), left); //append or prepend left, per strategy
    end while
    return ('fail')
end GeneralSearch
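The pseudocode above can be sketched in runnable Python. This is an illustrative interpretation, not the slides' own code: here the OPEN list holds whole paths, the state space is passed in as a successor function, and the merge step appends (BFS, queue) or prepends (DFS, stack) depending on the strategy:

```python
from collections import deque

def general_search(initial, is_goal, successors, strategy="bfs"):
    """Return a path from initial to a goal state, or None on failure."""
    open_list = deque([[initial]])        # OPEN holds paths; last element = node
    closed = set()                        # CLOSED: states already expanded
    while open_list:
        path = open_list.popleft()        # remove_first(open)
        state = path[-1]
        if is_goal(state):                # IsGoal test
            return path
        if state in closed:
            continue
        closed.add(state)
        children = [path + [s] for s in successors(state) if s not in closed]
        if strategy == "bfs":
            open_list.extend(children)    # merge = append (queue behaviour)
        else:
            open_list.extendleft(reversed(children))  # merge = prepend (stack)
    return None                           # 'fail'

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(general_search("S", lambda s: s == "G", graph.get, "bfs"))  # → ['S', 'A', 'G']
```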
Algorithm Evaluation: Completeness and
Optimality
• Does the search strategy claim to find solutions to problems
that have no solution?
– Does it produce incorrect solutions to problems?
• Completeness
– Does the algorithm guarantee to find a solution whenever one
exists?
– Think about the density of solutions in space and evaluate
whether the searching technique guaranteed to find all solutions
or not.
• Optimality
– Does the algorithm find an optimal solution, i.e. the one with
minimum cost?
• how good is our solution?
Algorithm Evaluation: Time & Space Tradeoffs
Search Methods:
• Uninformed search
– Breadth first
– Depth first
– Uniform cost, …
– Depth limited search
– Iterative deepening
– etc.
• Informed search
– Greedy search
– A*-search
– Iterative improvement,
– Constraint satisfaction
– etc.
Breadth first search
•Expand shallowest unexpanded node,
–i.e. expand all nodes on a given level of the
search tree before moving to the next level
•Implementation: use queue data
structure to store the list:
–Expansion: put successors at the end of
queue
–Pop nodes from the front of the queue
•Properties:
–Takes space: keeps every node in memory
–Complete: guaranteed to find a solution if one exists;
optimal when all step costs are equal
Algorithm for Breadth first search
function BFS (problem){
open = (C_0); //put initial state C_0 in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
l = not-in-set (Successors (f), closed );
open = merge ( rest(open), l); //append to the list
}
return ('fail')
}
Uniform cost Search
•The goal of this technique is to find the shortest path to the goal
in terms of cost.
–It modifies the BFS by always expanding least-cost unexpanded
node
•Implementation: nodes in list keep track of total path length
from start to that node
–List kept in priority queue ordered by path cost
Example: from start S the actions are S→A (cost 1), S→B (cost 5) and
S→C (cost 15); A, B and C lead to the goal G with costs 10, 5 and 5
respectively. Expanding the least-cost node first, the search visits S,
then A (g = 1), then B (g = 5), and finds the path S-B-G with g = 10,
which is cheaper than S-A-G with g = 11.
•Properties:
–This strategy finds the cheapest solution provided the cost of a
path never decreases as we go along it:
g(successor(n)) ≥ g(n), for every node n
–Takes space since it keeps every node in memory
Exercise
• Order of expansion?
• Path?
Algorithm for Uniform Cost search
function uniform_cost (problem){
open = (C_0); //put initial state C_0 in the List
g(s) = 0;
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
l = not-in-set (Successors (f), closed );
g(li) = g(f) + c(f,li);
open = merge(rest(open), l, g); //keep the OPEN list sorted in
ascending order of path cost g
}
return ('fail')
}
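A runnable Python sketch of uniform-cost search using a priority queue (`heapq`) ordered by path cost g. The weighted graph mirrors the S, A, B, C, G example above; the dictionary representation is an assumption:

```python
import heapq

def uniform_cost(start, goal, graph):
    """Return (cost, path) of the cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]          # priority queue of (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)  # least-cost node first
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for succ, cost in graph.get(state, {}).items():
            if succ not in explored:
                heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None

# Same numbers as the example above: S-A=1, S-B=5, S-C=15, A-G=10, B-G=5, C-G=5
graph = {"S": {"A": 1, "B": 5, "C": 15},
         "A": {"G": 10}, "B": {"G": 5}, "C": {"G": 5}}
print(uniform_cost("S", "G", graph))  # → (10, ['S', 'B', 'G'])
```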
Depth-first search
•Expand one of the nodes at the deepest
level of the tree.
–Only when the search hits a non-goal dead end
does the search go back and expand nodes at
shallower levels
•Implementation: treat the list as stack
–Expansion: push successors at the top of stack
–Pop nodes from the top of the stack
•Properties
–Incomplete and not optimal: fails in infinite-
depth spaces, spaces with loops.
• Modify to avoid repeated states along the path
–Takes less space (Linear): Only needs to
remember up to the depth expanded
Algorithm for Depth first search
function DFS (problem){
open = (C_0); //put initial state C_0 in the List
closed = {}; //maintain list of nodes examined earlier
while (not (empty (open))) {
f = remove_first(open);
if IsGoal (f) then return (f);
closed = append (closed, f);
l = not-in-set (Successors (f), closed );
open = merge ( rest(open), l); //prepend to the list
}
return ('fail')
}
Iterative Deepening Search (IDS)
•IDS solves the issue of choosing the best depth limit by trying all
possible depth limits:
–Perform depth-first search to a bounded depth d, starting at d = 1 and increasing
it by 1 at each iteration.
Example: for route finding problem we can take the diameter of the
state space. In our example, at most 9 steps is enough to reach any
node
•This search combines the benefits of DFS and BFS
–DFS is efficient in space, but has no path-length guarantee
–BFS finds min-step path towards the goal, but requires memory space
–IDS performs a sequence of DFS searches with increasing depth-cutoff until goal
is found
Limit=0 Limit=1 Limit=2
Algorithm for IDS
function IDS (problem){
  for (maxDepth = 1 to MAX_DEPTH) {  //repeat DFS with increasing depth cutoff
    open = (C_0);                    //restart: put initial state C_0 in the List
    closed = {};                     //maintain list of nodes examined earlier
    while (not (empty (open))) {
      f = remove_first(open);
      if (IsGoal (f)) then return (f);
      closed = append (closed, f);
      l = not-in-set (Successors (f), closed);
      if (depth(l) < maxDepth) then
        open = merge (rest(open), l); //prepend to the list
    }
  }
  return ('fail')
}
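A hedged Python sketch of IDS: a recursive depth-limited DFS wrapped in a loop over increasing cutoffs. The example graph is hypothetical:

```python
def depth_limited(state, goal, graph, limit, path):
    """DFS from state, cut off when limit reaches 0; return path or None."""
    if state == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(state, []):
        if succ not in path:               # avoid loops along the current path
            found = depth_limited(succ, goal, graph, limit - 1, path + [succ])
            if found:
                return found
    return None

def ids(start, goal, graph, max_depth=10):
    for limit in range(max_depth + 1):     # limit = 0, 1, 2, ...
        result = depth_limited(start, goal, graph, limit, [start])
        if result:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
print(ids("S", "G", graph))  # → ['S', 'B', 'G'], the minimum-step path
```

Because each iteration restarts from scratch, memory stays linear in the cutoff (as in DFS) while the first solution found is a shallowest one (as in BFS).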
Bidirectional Search
• Simultaneously search both forward from the initial state
to the goal and backward from the goal to the initial state,
and stop when the two searches meet somewhere in the
middle
–Requires an explicit goal state and invertible operators (or
backward chaining).
–Decide what kind of search is going to take place in each half
using BFS, DFS, uniform cost search, etc.
Start Goal
Bidirectional Search
• Advantages:
– Only need to go to half depth
– It can enormously reduce time complexity, but is not
always applicable
• Difficulties
– Do you really know the goal state? Is it unique?
– Operators may not be reversible
– Memory requirements may be significant: record all paths
to check whether the two searches meet
• Memory intensive
Comparing Uninformed Search Strategies

Strategy            Complete?  Optimal?  Time complexity  Space complexity
Breadth first       Yes        Yes*      O(b^d)           O(b^d)
Uniform cost        Yes        Yes       O(b^(1+C*/ε))    O(b^(1+C*/ε))
Depth first         No         No        O(b^m)           O(bm)
Depth limited       No         No        O(b^l)           O(bl)
Iterative deepening Yes        Yes*      O(b^d)           O(bd)
Bidirectional       Yes        Yes*      O(b^(d/2))       O(b^(d/2))

(b = branching factor, d = depth of the shallowest goal, m = maximum
depth, l = depth limit; * optimal when all step costs are equal)
Example: 8 Puzzle

Initial state    Goal state
 1 2 3            1 2 3
 7 8 4            8   4
 6   5            7 6 5

Expanding the initial state (sliding the blank left, right, or up)
generates three successors:
 1 2 3     1 2 3     1 2 3
 7 8 4     7 8 4     7   4
   6 5     6 5       6 8 5
8 Puzzle Heuristics
• Blind search techniques used an arbitrary
ordering (priority) of operations.
• Heuristic search techniques make use of
domain specific information - a heuristic.
• What heuristic(s) can we use to decide which
8-puzzle move is “best” (worth considering
first)?
Evaluation-function driven search
• A search strategy can be defined in terms of a node
evaluation function
• Evaluation function
– Defines the desirability of a node to be expanded
next
• Evaluation-function driven search: expand the
node (state) with the best evaluation-function
value
• Implementation: priority queue with nodes in the
decreasing order of their evaluation function
value
Arad- Bucharest
Greedy Search
A* Search
Avoiding Repeated States
• A state may be repeatedly visited during search
– This never comes up in problems whose search space is
a tree (where each state can only be reached through one
path)
– Unavoidable in some problems
- Discarding revisited states
It is not harmful to discard a node revisiting a
state if the cost of the new path to this state is ≥
cost of the previous path
…cont’d
• Remedies
– Delete looping paths
– Remember every state that has been visited
• The closed list (for expanded nodes) and open
list (for unexpanded nodes)
• If the current node matches a node on the
closed list, it is discarded instead of being expanded
(possibly missing an optimal solution?)
- Not discarding leads to an exponential number of
visited states
Games as Search Problems
Adversarial Search
• Multi-agent environment: any given agent needs
to consider the actions of other agents and how
they affect its own welfare
• introduce possible contingencies into the agent’s
problem-solving process
• cooperative vs. competitive (Games)
…cont’d
• A utility (payoff) function determines the value
of terminal states, e.g. win = +1, draw = 0, lose = −1.
• In two-player games, assume one is called MAX
(tries to maximize utility) and one is called MIN
(tries to minimize utility).
• In the search tree, first layer is move by MAX,
next layer by MIN, and alternate to terminal
states.
• States where the game has ended are called
terminal states.
Mini-max Algorithm
• Generate the complete game tree down to terminal
states, then compute the utility of each node
bottom up from the leaves toward the root.
• At each MAX node, pick the move with
maximum utility; at each MIN node, pick
the move with minimum utility
(this assumes the opponent always plays optimally
to minimize utility).
• When the root is reached, the optimal move is
determined.
Alpha-Beta Pruning
• Frequently, large parts of the search space are
irrelevant to the final decision and can be
pruned.
• No need to explore options that are already
definitely worse than the current best option.
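A sketch of the same minimax computation with alpha-beta pruning added; the nested-list tree representation (leaves are utilities) is an illustrative assumption:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax value with alpha-beta pruning over a nested-list game tree."""
    if isinstance(node, (int, float)):     # terminal state
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # MIN would never allow this branch
                break                      # prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:              # MAX already has a better option
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # same answer as plain minimax: 3
```

Pruning never changes the value at the root; it only skips branches that cannot affect the final decision.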
Constraint Satisfaction Search
• A state is defined by variables Xi with values from
domains Di
• The goal test is a set of constraints specifying
allowable combinations of values for subsets of
variables
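For instance, map coloring can be sketched as backtracking search over such variables and constraints. The regions, adjacencies, and colors below are hypothetical:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Assign a value to every variable so that no two neighbours match."""
    if len(assignment) == len(variables):
        return assignment                  # all variables assigned: goal test passed
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint check: no neighbour already has this color
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result:
                return result
    return None                            # no consistent value: backtrack

regions = ["A", "B", "C"]                  # three mutually adjacent regions
domains = {r: ["red", "green", "blue"] for r in regions}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
solution = backtrack({}, regions, domains, neighbors)
print(solution)  # → {'A': 'red', 'B': 'green', 'C': 'blue'}
```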
Map Coloring