Unit 2 - Part 1
Unit: 2 AI Problems
and Search
Outline
▪ Problems, Problem Spaces and Search: Problem as state space search
▪ Production systems
▪ Problem Characteristics
▪ Heuristic Search Techniques:
▪ Hill Climbing,
Introduction
1. Initial State
We will represent a state of the problem as a tuple (x, y), where x represents the amount of water in the 4-
gallon jug and y represents the amount of water in the 3-gallon jug.
Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.
Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
2. Production Rules
State Space Representation – Water Jug
3.  (x, y) if x > 0 → (x − d, y)                 pour some water out of the 4-gallon jug
4.  (x, y) if y > 0 → (x, y − d)                 pour some water out of the 3-gallon jug
8.  (x, y) if x + y ≥ 3 and x > 0 → (x − (3 − y), 3)
                                                 pour water from the 4-gallon jug into the
                                                 3-gallon jug until the 3-gallon jug is full
9.  (x, y) if x + y ≤ 4 and y > 0 → (x + y, 0)   pour all the water from the 3-gallon jug
                                                 into the 4-gallon jug
11. (0, 2) → (2, 0)                              pour the 2 gallons from the 3-gallon jug
                                                 into the 4-gallon jug
State Space Representation – 8 Puzzle
Initial State        Goal State
2 8 3                1 2 3
1 6 4                8   4
7   5                7 6 5
A solution to the problem is an appropriate sequence of moves, such as "move tile 5 to the right", "move tile 7 to the left", "move tile 6 down", etc.
For example, moving tile 6 down into the blank transforms the initial state:
2 8 3        2 8 3
1 6 4   →    1   4
7   5        7 6 5
Problem Characteristics
1. Is the problem decomposable into a set of independent smaller or easier sub-problems?
2. Can solution steps be ignored or at least undone if they prove unwise?
3. Is the problem’s universe predictable?
4. Is a good solution to the problem obvious without comparison to all other possible
solutions?
5. Is the desired solution a state of the world or a path to a state?
6. Is a large amount of knowledge absolutely required to solve the problem or is knowledge
important only to constrain the search?
7. Can a computer that is simply given the problem return the solution or will the solution of
the problem require interaction between the computer and a person?
For a two-player game:
Is the problem's universe predictable?     No – the moves of the other player cannot be predicted.
Is a good solution absolute or relative?   Absolute – a winning position need not be compared with others.
Is the solution a state or a path?         Path – not only the solution but how it is achieved also matters.
What is the role of knowledge?             Domain-specific knowledge is required to constrain the search.
For a single-person game:
Is the problem's universe predictable?     Yes – the problem universe is predictable; it is a single-person game.
Is a good solution absolute or relative?   Absolute – a winning position need not be compared with others.
Is the solution a state or a path?         Path – not only the solution but how it is achieved also matters.
What is the role of knowledge?             Domain-specific knowledge is required to constrain the search.
Production System
Production systems provide appropriate structures for performing and describing search
processes.
A production system has four basic components:
1. A set of rules each consisting of a left side that determines the applicability of the rule and a right side
that describes the operation to be performed if the rule is applied.
2. A database of current facts established during the process of inference.
3. A control strategy that specifies the order in which the rules will be compared with facts in the database, and a way of resolving conflicts when several rules (or several facts) match at once.
4. A rule applier.
Production systems provide us with good ways of describing the operations that can be
performed in a search for a solution to a problem.
Introduction
Search Techniques can be classified as:
1. Uninformed/Blind Search Control Strategy:
Do not have additional information about states beyond problem definition.
The entire search space is examined while looking for a solution.
Example: Breadth First Search (BFS), Depth First Search (DFS), Depth Limited Search (DLS).
2. Informed/Directed Search Control Strategy:
Some information about problem space is used to compute the preference among various possibilities for
exploration and expansion.
Examples: Best First Search, Problem Decomposition, A*, Means-Ends Analysis
Uninformed Search Techniques
[Figure: the same tree searched depth-first (left) and breadth-first (right), with ticks marking the order in which nodes are visited.]
DFS vs. BFS:
▪ By chance, DFS may find a solution without examining much of the search space at all; it then finds a solution faster. BFS, in contrast, systematically proceeds by testing each node that is reachable from a parent node before it expands to any child of those nodes.
▪ If the selected path does not reach the solution node, DFS gets stuck in a blind alley, and backtracking is required if a wrong path is selected. BFS will not get trapped exploring a blind alley.
▪ DFS does not guarantee finding a solution; if there is a solution, BFS is guaranteed to find it.
Heuristic Search Techniques
Every search process can be viewed as a traversal of a directed graph, in which the nodes
represent problem states and the arcs represent relationships between states.
The search process must find a path through this graph, starting at an initial state and ending
in one or more final states.
Domain-specific knowledge must be added to improve search efficiency.
The Domain-specific knowledge about the problem includes the nature of states, cost of
transforming from one state to another, and characteristics of the goals.
This information can often be expressed in the form of Heuristic Evaluation Function.
Traveling Salesman Problem (TSP) – Nearest Neighbor Heuristic
• Start with any random city.
• Go to the next nearest city.
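The two steps above can be sketched directly in code. The city coordinates here are made-up illustration data, not part of the lecture's example.

```python
import math

# Nearest-neighbor heuristic for the TSP: from the current city, always
# move to the closest unvisited city.

cities = {'A': (0, 0), 'B': (1, 5), 'C': (5, 2), 'D': (6, 6)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def nearest_neighbor_tour(start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        # greedy choice: nearest unvisited city
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbor_tour('A'))   # → ['A', 'B', 'C', 'D']
```

The greedy choice makes the tour fast to compute, but it is not guaranteed to be the shortest possible tour, which is exactly the trade-off heuristics make.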
Heuristic Search Techniques
Heuristic function maps from problem state descriptions to the measures of desirability,
usually represented as numbers.
The value of the heuristic function at a given node in the search process gives a good
estimate of whether that node is on the desired path to a solution.
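For the 8-puzzle, one simple heuristic function of this kind is the number of tiles out of place relative to the goal. A sketch, using the initial and goal configurations shown earlier (0 marks the blank):

```python
# Misplaced-tiles heuristic for the 8-puzzle: count tiles that are not
# in their goal position; lower values suggest the state is closer to
# the goal.

GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def misplaced_tiles(state):
    return sum(1
               for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(misplaced_tiles(start))   # → 4
```

A search process can prefer successors with lower heuristic values; stronger variants (such as summing each tile's distance from its goal square) give better guidance at slightly more cost.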
Heuristic Search Techniques
Well-designed heuristic functions can play an important role in efficiently guiding a search
process toward a solution.
In general, heuristic search improves the quality of the path that is explored.
In such problems, the search proceeds using current information about the problem to
predict which path is closer to the goal and follow it, although it does not always guarantee
to find the best possible solution.
Such techniques help in finding a solution within reasonable time and space (memory).
Some prominent intelligent search algorithms are stated below:
1. Hill Climbing
2. Best-first Search
3. A* Search
4. Constraint Satisfaction
5. Means-ends analysis
Hill Climbing Example - Blocks World Problem
Start state (single stack, top to bottom): A, D, C, B.
Goal state (single stack, top to bottom): D, C, B, A.
Local heuristic:
+1 for each block that is resting on the thing it is supposed to be resting on.
-1 for each block that is resting on a wrong thing.
[Figure: under the local heuristic the start state scores 0 and the goal state scores 4; moving A onto the table yields a successor that scores 2.]
Hill Climbing Example - Blocks World Problem
D 0
C C D C 0
Local
B A B A D
heuristic: B
Start Goal
A -3 D +3 D -2 C +2 C -1 B +1
-6 6
0 0
B A
Global heuristic:
For each block that has the correct support structure:
Hill Climbing Example - Blocks World Problem
+1 to every block in the support structure. For
each block that has a wrong support structure:
-1 to every block in the support structure.
Start
A
Goal
-6 D
6
D D C
-3
C B
C
B A A
-6 A
D -2
C C D C -1
Hill Climbing Example - Blocks World Problem
B
Global heuristic:
Simple Hill Climbing - Algorithm
1. Evaluate the initial state. If it is also a goal state, then return
it and quit. Otherwise continue with the initial state as the
current state.
2. Loop until a solution is found or until there are no new operators
left to be applied in the current state:
a. Select an operator that has not yet been applied to the current
state and apply it to produce a new state.
b. Evaluate the new state
i. If it is the goal state, then return it and quit.
ii.If it is not a goal state but it is better than the current state,
then make it the current state.
iii.If it is not better than the current state, then continue in the
loop.
▪ Local Maxima: a local maximum is a state that is better than all its neighbors but is not better than some other states further away.
▪ To overcome the local maximum problem: utilize backtracking. Maintain a list of visited states and explore a new path.
In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all
successors are compared and the closest to the solution is chosen.
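The steepest-ascent variant can be sketched as follows; `neighbours` and `value` stand in for a problem's move generator and heuristic evaluation function, and the toy landscape at the end is an illustrative assumption.

```python
# Steepest-ascent hill climbing: all successors of the current state are
# compared, and the move is made only if the best one improves on the
# current state.

def hill_climb(start, neighbours, value):
    current = start
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):   # no neighbour improves: stop
            return current                  # (this may be a local maximum)
        current = best

# Toy landscape: maximise -(x - 3)^2 over integer states.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbours, value))   # climbs 0 -> 1 -> 2 -> 3
```

On this smooth landscape the climb reaches the true maximum at x = 3; on landscapes with plateaus or foothills it can stop at a local maximum, which is why backtracking or restarts are added in practice.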
Best First Search
DFS is good because it allows a solution to be found without expanding all competing
branches. BFS is good because it does not get trapped on dead end paths.
Best first search combines the advantages of both DFS and BFS into a single method.
One way of combining BFS and DFS is to follow a single path at a time, but switch paths
whenever some competing path looks more promising than the current one does.
At each step of the Best First Search process, we select the most promising of the nodes we
have generated so far.
This is done by applying an appropriate heuristic function to each of them.
We then expand the chosen node by using the rules to generate its successors.
Best First Search
If one of them is a solution, we can quit. If not, all those new nodes are added to the set of
nodes generated so far.
Algorithm: Best First Search
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a. Pick the best node on OPEN
b. Generate its successors.
c. For each successor do:
I. If it has not been generated before, evaluate it, add it to OPEN and record its parent.
II. If it has been generated before, change the parent if this new path is better than the previous one. In that case,
update the cost of getting to this node and to any successors that this node may already have.
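A minimal sketch of the OPEN-list loop above, using a priority queue ordered by the heuristic value. The small graph and h-values here are illustrative assumptions, not the lecture's example.

```python
import heapq

# Best-first (greedy) search: always expand the most promising node on
# OPEN, as judged by the heuristic h.

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 3, 'G': 0}    # heuristic estimates to the goal

def best_first(start, goal):
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # best node on OPEN
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph[node]:                   # generate successors
            heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

print(best_first('S', 'G'))   # → ['S', 'A', 'G']
```

Note that only h guides the choice here; adding the cost already incurred (g) turns this into the A* algorithm discussed next.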
A* Example
f'(n) = g(n) + h'(n), where h'(n) is an estimate of the additional cost of getting from the current node n to a goal state.
[Figure: weighted graph with start node S, goal node G and intermediate nodes A–F, annotated with edge costs and h' estimates.]
A* Example
[Figures: successive snapshots of the A* search tree rooted at S; at each step the OPEN node with the lowest f' value is expanded, f' values are revised when better paths are found, and the search continues until the goal G is reached.]
A* Example Solution
[Figure: the example graph with the solution path highlighted.]
S → B → E → F → G, Total Cost = 18
The A* Algorithm
This algorithm uses the following functions:
1. f': Heuristic function that estimates the merit of each node we generate. f' represents an estimate of
the cost of getting from the initial state to a goal state along the path that generated the current
node. f' = g + h'
2. g: The function g is a measure of the cost of getting from initial state to the current node.
3. h’: The function h’ is an estimate of the additional cost of getting from the current node to a goal state.
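The three functions can be sketched in code; the small graph and h' estimates below are illustrative assumptions, not the figure's values.

```python
import heapq

# A* search: OPEN is a priority queue ordered by f' = g + h'.

graph = {'S': {'A': 1, 'B': 4},
         'A': {'B': 2, 'G': 12},
         'B': {'G': 5},
         'G': {}}
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}     # h': estimated cost to goal G

def a_star(start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f', g, node, path)
    best_g = {start: 0}                           # cheapest g found so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph[node].items():
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # better path found:
                best_g[succ] = g2                     # update its cost
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None

print(a_star('S', 'G'))   # → (['S', 'A', 'B', 'G'], 8)
```

The `best_g` update is the "change the parent if this new path is better" step from the best-first algorithm; with an admissible h', A* returns a cheapest path.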
Heuristic Search Techniques for AO*
[Figure: AND-OR graph with estimated costs at each node, used to trace the AO* algorithm.]
Traverse the graph starting at the initial node and following the current best path, and
accumulate the set of nodes that are on the path and have not yet been expanded.
Pick one of these unexpanded nodes and expand it. Add its successors to the graph and
compute f' (the cost of the remaining distance) for each of them.
Change the f' estimate of the newly expanded node to reflect the new information produced
by its successors. Propagate this change backward through the graph. Decide which is the
current best path.
The backward propagation of revised cost estimates through the tree is not necessary in the A*
algorithm. It is needed in the AO* algorithm because expanded nodes are re-examined so that the
current best path can be selected.
Constraint Satisfaction
Many AI problems can be viewed as problems of constraint satisfaction.
Example – Cryptarithmetic: SEND + MORE = MONEY
    S E N D          9 5 6 7
  + M O R E        + 1 0 8 5
  ---------        ---------
  M O N E Y        1 0 6 5 2
Solution: S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2.
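The constraints (each letter a distinct digit, leading letters non-zero, and the column sums must hold) can be checked by a brute-force sketch. Pinning M to 1 is a legitimate deduction: the sum of two 4-digit numbers is below 20000, so MONEY's leading digit must be 1.

```python
from itertools import permutations

# Brute-force constraint satisfaction for SEND + MORE = MONEY:
# assign distinct digits to the letters until the addition holds.

def solve():
    m = 1                                   # carry out of the top column
    for s, e, n, d, o, r, y in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        if s == 0:                          # leading digit cannot be 0
            continue
        send  = 1000 * s + 100 * e + 10 * n + d
        more  = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return dict(zip('SENDMORY', (s, e, n, d, m, o, r, y)))
    return None

print(solve())
```

Real constraint-satisfaction solvers propagate column constraints (carries) to prune the search instead of enumerating all assignments.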
Means End Analysis
Most search strategies reason either forward or backward; often, however, a mixture of the two
directions is appropriate.
Such a mixed strategy makes it possible to solve the major parts of a problem first and then
solve the smaller problems that arise when combining the parts together.
Such a technique is called Means-Ends Analysis.
The means-ends analysis process centers around finding the difference between the current
state and the goal state.
The problem space of means-ends analysis has
an initial state and one or more goal states,
a set of operators with a set of preconditions for their application, and
a difference function that computes the difference between two states s(i) and s(j).
The means-ends analysis process can be applied recursively for a problem.
Means End Analysis
Following are the main steps that describe the working of the MEA technique for solving a
problem.
First, evaluate the difference between Initial State and final State.
Select the various operators which can be applied for each difference.
Apply the operator at each difference, which reduces the difference between the current state and goal
state.
In the MEA process, we detect the differences between the current state and goal state.
Once these differences occur, then we can apply an operator to reduce the differences.
But sometimes it is possible that an operator cannot be applied to the current state.
Means End Analysis
So, we create a sub-problem of the current state in which the operator can be applied. This
type of backward chaining, in which operators are selected and then sub-goals are set up to
establish the preconditions of the operator, is called Operator Subgoaling.
Algorithm : Means-Ends Analysis
1. Compare CURRENT to GOAL; if there are no differences between them,
   then return Success and exit.
2. Else, select the most significant difference and reduce it by doing
   the following steps until success or failure occurs:
   a. Select a new operator O which is applicable to the current
      difference; if there is no such operator, then signal failure.
   b. Attempt to apply operator O to CURRENT. Make a description of two
      states:
      i.  O-Start, a state in which O's preconditions are satisfied.
      ii. O-Result, the state that would result if O were applied in
          O-Start.
   c. If (FIRST-PART ← MEA(CURRENT, O-Start)) and (LAST-PART ←
      MEA(O-Result, GOAL)) are successful, then signal Success and
      return the result of combining FIRST-PART, O, and LAST-PART.
Thank You!