Artificial Intelligence - Unit 2
PRASHANT TOMAR
Problem solving and search
In intelligent agents, the knowledge base corresponds to the environment, operators correspond to sensors, and the search techniques are the actuators. Hence, the three parameters of an intelligent agent are:
1. Knowledge base
2. Operators
3. Control strategy (search technique)
The knowledge base describes the current task domain and the goal. In other words, the goal is nothing but a state. Operators manipulate the knowledge base.
The control strategy decides which operators to apply and where. The aim of any search technique is to apply an appropriate sequence of operators to the initial state to achieve the goal.
The objective is to reach the goal state. This can be achieved in two ways.
Forward reasoning: It refers to the application of operators to those structures in the knowledge base that describe the task domain, in order to produce modified states. Such a method is also referred to as bottom-up or data-driven reasoning.
Backward reasoning: It breaks down the goal (problem) statement into subgoals which are easier to solve and whose solutions are sufficient to solve the original problem.
A problem-solving agent or system uses either forward or backward reasoning. Each of its operators works to produce a new state in the knowledge base, which is said to represent the problem in a state space.
[Figure: forward reasoning example - making tea: boil water, add tea leaves (decoction), add milk, mix, add sugar, tea is ready]
[Figure: production system architecture - the user interacts with a control strategy, which applies rules from the rule base to the knowledge/data base]
2. Production Systems are highly modular because the individual rules can be added, removed
or modified independently.
3. The production rules are expressed in a natural form, so the statements contained in the knowledge base should read like a recording of an expert thinking out loud.
One important disadvantage is that it may be very difficult to analyze the flow of control within a production system, because the individual rules do not call each other.
AI Problem:
1. Water Jug Problem
2. Playing chess
3. 8-puzzle problem
4. Tic-tac-toe problem
5. 8-queen problem
6. The Tower of Hanoi problem
7. The missionaries and cannibals problem
Water Jug Problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug? Solve the problem using production rules.
To solve the problem, we define rules whose left sides are
matched against the current state and whose right sides
describe the new state that results from applying the rule.
The assumptions are:
1. We can fill a jug from the pump.
2. We can pour water out of the jug onto the ground.
3. We can pour water from one jug to another.
4. There are no measuring devices available.
State Space: The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3, or 4 and y = 0, 1, 2, or 3. x represents the number of gallons of water in the 4-gallon jug and y represents the number of gallons in the 3-gallon jug.
9.  (x, y) if x + y <= 4 and y > 0  ->  (x + y, 0)   Pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y) if x + y <= 3 and x > 0  ->  (0, x + y)   Pour all the water from the 4-gallon jug into the 3-gallon jug
One solution (gallons in 4-gallon jug, gallons in 3-gallon jug, rule applied):
0   0   -
0   3   2
3   0   9
3   3   2
4   2   7
0   2   5 or 12
2   0   9 or 12
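As a sketch, the production-rule formulation can be run mechanically. The snippet below (Python, for illustration only) restates the rule set as (guard, action) pairs — rules 9 and 10 match the ones above, and the fill/empty/pour rules are the usual ones assumed by the problem statement — and applies them breadth-first until the goal test x = 2 succeeds:

```python
from collections import deque

# Production rules as (guard, action) pairs over states (x, y):
# x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug.
RULES = [
    (lambda x, y: x < 4,                lambda x, y: (4, y)),       # fill 4-gallon
    (lambda x, y: y < 3,                lambda x, y: (x, 3)),       # fill 3-gallon
    (lambda x, y: x > 0,                lambda x, y: (0, y)),       # empty 4-gallon
    (lambda x, y: y > 0,                lambda x, y: (x, 0)),       # empty 3-gallon
    (lambda x, y: x + y >= 4 and y > 0,
     lambda x, y: (4, y - (4 - x))),                                # pour 3 -> 4 until full
    (lambda x, y: x + y >= 3 and x > 0,
     lambda x, y: (x - (3 - y), 3)),                                # pour 4 -> 3 until full
    (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0)),   # rule 9: pour all of 3 into 4
    (lambda x, y: x + y <= 3 and x > 0, lambda x, y: (0, x + y)),   # rule 10: pour all of 4 into 3
]

def solve(start=(0, 0), goal_x=2):
    """Breadth-first search over the (x, y) state space; returns a state path."""
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:          # goal test: 2 gallons in the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for guard, action in RULES:
            if guard(*state):
                nxt = action(*state)
                if nxt not in parent:   # not generated before
                    parent[nxt] = state
                    frontier.append(nxt)
    return None

print(solve())
```

Because the search is breadth-first, the path returned is a shortest solution (6 rule applications); which of the two shortest solutions is found depends on the rule ordering.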
[Figure: part of the state-space tree for the water jug problem - the root (0, 0) expands to (4, 0) and (0, 3); (4, 0) expands to (4, 3), (0, 0), and (1, 3); (0, 3) expands to (4, 3), (0, 0), and (3, 0)]
8-puzzle example: slide the tiles to transform the initial state into the goal state.
Initial state:    Goal state:
2 8 3             1 2 3
1 6 4             8 _ 4
7 _ 5             7 6 5
One solution path:
2 8 3    2 8 3    2 _ 3    _ 2 3    1 2 3    1 2 3
1 6 4 -> 1 _ 4 -> 1 8 4 -> 1 8 4 -> _ 8 4 -> 8 _ 4
7 _ 5    7 6 5    7 6 5    7 6 5    7 6 5    7 6 5
8-queens problem :The problem is to place 8 queens on a
chessboard so that no two queens are in the same row, column or
diagonal .How do we formulate this in terms of a state space
search problem? The problem formulation involves deciding the
representation of the states, selecting the initial state
representation, the description of the operators, and the successor
states. We will now show that we can formulate the search
problem in several different ways for this problem.
N queens problem formulation 1
• States: Any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Successor function: Add a queen in any square
• Goal test: 8 queens on the board, none are attacked
The initial state has 64 successors. Each of the states at the next level has 63 successors, and so on. We can restrict the search tree somewhat by considering only those successors where no two queens attack each other. To do that we have to check the new queen against all existing queens on the board. The solutions are found at a depth of 8.
N queens problem formulation 2
• States: Any arrangement of 8 queens on the board
• Initial state: All queens are at column 1
• Successor function: Change the position of any one queen
• Goal test: 8 queens on the board, none are attacked
If we consider moving the queen in row 1, it may move to any of the seven remaining columns.
N queens problem formulation 3
• States: Any arrangement of k queens in the first k rows such that none are attacked
• Initial state: 0 queens on the board
• Successor function: Add a queen to the (k+1)th row so that none are attacked. Keep shuffling queens (backtracking) until the goal is reached.
• Goal test : 8 queens on the board, none are attacked
[Figure: a goal state - 8 queens on the board, none attacked]
Playing Chess:
State Space Search: Playing Chess
Each position can be described by an 8-by-8 array.
Initial position is the game opening position.
Goal position is any position in which the opponent does not have a legal move
and his or her king is under attack.
Legal moves can be described by a set of rules:
- Left sides are matched against the current state.
- Right sides describe the new resulting state.
White pawn at Square(file e, rank 2)
AND Square(file e, rank 3) is empty
AND Square(file e, rank 4) is empty
-> Move pawn from Square(file e, rank 2) to Square(file e, rank 4)
Searching For Solutions: The solution to an AI problem involves performing actions to reach one proper state among the agent's numerous possible states. Thus the process of finding a solution boils down to searching for that best state among all the possible states.
Search Strategy: We will evaluate strategies in terms of four criteria:
1. Completeness: Is the strategy guaranteed to find a solution when there is one?
2. Time Complexity: How long does it take to find a solution?
3. Space Complexity: How much memory does it need to perform the search?
4. Optimality: Does the strategy find the highest-quality solution when there are several different solutions?
Search Strategies:
Blind Search: Blind search, or uninformed search, does not use any extra information about the problem domain. The two common methods of blind search are:
• BFS or Breadth First Search
• DFS or Depth First Search
Search Tree
A search tree is a data structure containing a root node, from where the search starts.
Every node may have 0 or more children. If a node X is a child of node Y, node Y is said to
be a parent of node X.
[Figure: a search graph with initial state A, goal state G, and nodes B, C, D, E, F, H; the corresponding search tree repeats nodes such as D and G along different paths]
We also need to introduce some data structures that will be used in the search
algorithms.
The nodes that the algorithm has generated are kept in a data structure called OPEN
or fringe. Initially only the start node is in OPEN.
The search starts with the root node. The algorithm picks a node from
OPEN for expanding and generates all the children of the node.
Expanding a node from OPEN results in a closed node. Some search
algorithms keep track of the closed nodes in a data structure called
CLOSED.
A solution to the search problem is a sequence of operators that is
associated with a path from a start node to a goal node. The cost of a
solution is the sum of the arc costs on the solution path. For large
state spaces, it is not practical to represent the whole space.
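The OPEN/CLOSED bookkeeping described above can be sketched as a generic breadth-first search (Python; the small graph is a hypothetical example in the spirit of the A..H figure):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Generic search using OPEN (fringe) and CLOSED, as described above.
    `successors` maps a node to the list of its children."""
    open_list = deque([start])        # OPEN: generated but not yet expanded
    closed = set()                    # CLOSED: already expanded nodes
    parent = {start: None}
    while open_list:
        node = open_list.popleft()    # pick a node from OPEN
        if node == goal:
            path = []                 # reconstruct the solution path
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)              # expanding a node closes it
        for child in successors.get(node, []):
            if child not in parent:   # not generated before
                parent[child] = node
                open_list.append(child)
    return None

# hypothetical graph loosely following the A..H example above
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'E': ['H']}
print(breadth_first_search('A', 'G', graph))  # ['A', 'C', 'G']
```

Using a FIFO queue for OPEN gives breadth-first order; swapping it for a stack (LIFO) turns the same skeleton into depth-first search.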
Properties of BFS:
We assume that every non-leaf node has b children. Suppose that d is the depth of the shallowest goal node, and m is the depth of the node found first.
[Figure: a node with b children, and expansion of the example tree - the OPEN list evolves as [D, E, C], then [C, F, E, C], then [G, F, E, C]]
Node G is expanded and found to be a goal node. The solution path
A-B-D-C-G is returned and the algorithm terminates.
Properties of depth-first search:
Disadvantages:
1. If it stops after one solution is found, the minimal solution may not be found.
2. In DFS there is a possibility that it may go down the left-most path forever; even a finite graph can generate an infinite tree.
Depth Limited Search:
A solution to the depth-first search problem: define a depth limit, and expand nodes only if their depth is less than the limit. This algorithm is known as depth-limited search.
Iterative deepening: first do DFS to depth 0 (i.e., treat the start node as having no successors); then, if no solution is found, do DFS to depth 1, and so on.
Algo:
c = 0
until solution found do
    DFS with depth cutoff c
    c = c + 1
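The cutoff loop above can be sketched in Python (the graph and node names are illustrative):

```python
def depth_limited_search(node, goal, successors, limit):
    """DFS that only descends while `limit` (remaining depth) is positive."""
    if node == goal:
        return [node]
    if limit == 0:                    # cutoff reached: treat node as a leaf
        return None
    for child in successors.get(node, []):
        path = depth_limited_search(child, goal, successors, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """First DFS to depth 0, then depth 1, etc., until a solution is found."""
    for cutoff in range(max_depth + 1):
        path = depth_limited_search(start, goal, successors, cutoff)
        if path is not None:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(iterative_deepening_search('A', 'G', graph))  # ['A', 'C', 'G']
```

Each iteration regenerates the shallow nodes, which is exactly the overhead quantified in the properties below.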
Advantage :
• Linear memory requirements of depth-first search
• Guarantee for goal node of minimal depth
Properties: For large d, the ratio of the number of nodes expanded by DFID compared to that of DFS is given by b/(b-1). For a branching factor of 10 and deep goals, iterative deepening expands only about 11% more nodes than breadth-first search.
The algorithm is
• Complete
• Optimal/Admissible if all operators have the same cost.
Otherwise, not optimal but guarantees finding solution of
shortest length (like BFS).
• Time complexity is a little worse than BFS or DFS because nodes near the top of the search tree are generated multiple times, but because almost all of the nodes are near the bottom of the tree, the worst-case time complexity is still exponential, O(b^d).
If the branching factor is b and the solution is at depth d, then nodes at depth d are generated once, nodes at depth d-1 are generated twice, etc.
Hence b^d + 2b^(d-1) + ... + db <= b^d / (1 - 1/b)^2 = O(b^d).
• Linear space complexity, O(bd), like DFS.
Depth First Iterative Deepening combines the advantage of BFS (i.e., completeness) with the advantages of DFS (i.e., limited space and finding longer paths more quickly). This algorithm is generally preferred for large state spaces where the solution depth is unknown.
Informed Search
Informed Search or heuristic search: We know that uninformed search methods systematically explore the state space and find the goal, but they are inefficient in most cases. Informed search methods use problem-specific knowledge and may be more efficient. They often depend on the use of a heuristic function.
Example heuristic for route finding: h(mzn) = distance(mzn, Delhi).
Example heuristic for the 8-puzzle (comparing the start state 2 8 3 / 1 6 4 / 7 _ 5 with the goal 1 2 3 / 8 _ 4 / 7 6 5): tiles 2, 8, 1, 6 are out of place, so the heuristic value is h(n) = 4 (the number of misplaced tiles).
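The misplaced-tiles heuristic can be sketched as follows (Python; 0 stands for the blank, and the start and goal boards follow the example above):

```python
# Misplaced-tiles heuristic for the 8-puzzle: count tiles (not the blank)
# that are not in their goal position.
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h_misplaced(state):
    return sum(1
               for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(h_misplaced(start))  # 4: tiles 2, 8, 1, 6 are out of place
```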
Algorithm:
1. Generate a possible solution. For some problems, this
means generating a particular point in the problem space.
[Figure: search graph rooted at S with heuristic values - A(7), B(4), C(6), D(15), E(5), F(10), H(7), I, J(8), K, N]
Start at S
Children of S = [A(7), B(4), C(6)]
Best child of S = B(4)
Children of B = [E(5), F(10)]
Best child of B = E(5)
Children of E = [J(8)]
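The traced move-to-best-child behaviour can be sketched as follows (Python; the graph and h-values are read off the figure, with h(S) assumed). Note that this trace keeps moving to the lowest-valued child even when it is worse than the current node, which is exactly how the search runs into the dead ends discussed next:

```python
def greedy_descend(start, successors, h):
    """Repeatedly move to the lowest-h child, as in the trace above;
    the walk stops only at a dead end (a node with no children)."""
    current, path = start, [start]
    while successors.get(current):
        current = min(successors[current], key=h)   # best (lowest-h) child
        path.append(current)
    return path

# graph and h-values read off the figure (h('S') is an assumed value)
graph = {'S': ['A', 'B', 'C'], 'B': ['E', 'F'], 'E': ['J']}
h = {'S': 12, 'A': 7, 'B': 4, 'C': 6, 'E': 5, 'F': 10, 'J': 8}.get
print(greedy_descend('S', graph, h))  # ['S', 'B', 'E', 'J']
```

Strict hill climbing would additionally stop as soon as no child improves on the current node (here already at B, since h(E) = 5 > h(B) = 4).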
Difficulties in Hill climbing:
(a) A "local maximum", which is a state better than all its neighbors but not better than some other states farther away. Local maxima sometimes occur within sight of a solution; in such cases they are called "foothills".
(b) A "plateau", which is a flat area of the search space in which neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.
(c) A "ridge", which is an area of the search space that is higher than the surrounding areas but cannot be traversed by simple moves.
Some ways of dealing with these problems:
(a) Backtrack to some earlier node and try a different direction. This is a good way of dealing with local maxima.
(b) Make a big jump in some direction to a new area of the search space. If the available rules describe only single small steps, then apply them several times in the same direction.
(c) Apply two or more rules before performing the test. This corresponds to moving in several directions at once.
Difficulties in Hill Climbing method
[Figure: search graph rooted at S with heuristic values on nodes A..M and a goal node of value 0]
Local maximum: S -> A -> D -> dead end (not possible to move)
Plateau: S -> C -> G or H (same value)
Ridge: S -> B -> E -> I -> L -> dead end
Algorithm:
1. Evaluate the initial state. If it is a goal state then quit; otherwise make the initial state the current state and proceed.
2. Loop until a solution is found or until there are no new operators left to be applied to the current state.
Optimal? No.
A* search:
Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n).
g(n) = cost so far to reach n.
h(n) = estimated cost from n to goal.
f(n) = estimated total cost of path through n to goal.
Admissible heuristics:
A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
An admissible heuristic never overestimates the cost to reach the
goal, i.e., it is optimistic.
Properties:
Complete- Yes.
Time- Exponential.
Space-Keeps all nodes in memory.
Optimal- Yes.
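Putting f(n) = g(n) + h(n) to work, here is a minimal A* sketch in Python (the weighted graph and h-values are illustrative, with h chosen to be admissible for this graph):

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: always expand the node with the lowest f(n) = g(n) + h(n).
    `successors` maps a node to a list of (child, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in successors.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float('inf')

# hypothetical weighted graph; h never overestimates the true cost to G
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5)],
         'B': [('C', 1)], 'C': [('G', 3)]}
h = {'S': 6, 'A': 5, 'B': 3, 'C': 2, 'G': 0}.get
path, cost = a_star('S', 'G', graph, h)
print(path, cost)  # ['S', 'A', 'B', 'C', 'G'] 7
```

Because h is admissible here, the first time the goal is popped from the frontier its path is guaranteed optimal, which is the property proved next.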
Optimality of A* (proof):
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. Then f(G2) = g(G2) > g(G), since G2 is suboptimal and h(G2) = 0, while f(n) <= g(G) because h is admissible. Hence f(n) < f(G2), so A* will expand n before G2 and will never return a suboptimal goal.
h1(S) = 8 (number of misplaced tiles)
h2(S) = 3+1+2+2+2+3+3+2 = 18 (total Manhattan distance of the tiles)
Dominance: If h2(n) ≥ h1(n) for all n (both admissible)
then h2 dominates h1
h2 is better for search.
Properties of Heuristic Algorithms:
Example (cryptarithmetic constraint propagation):
2 + D = Y  or  2 + D = 10 + Y, i.e. D = 8 + Y
N + R = 10 + E
R = 9 and S = 8, so D = 8 or 9.
If D = 8 then Y = 0; if D = 9 then Y = 1.
Two kinds of rules:
1. Rules that define valid constraint propagation.
2. Rules that suggest guesses when necessary.
A map coloring problem: We are given a map, i.e. a planar graph, and we are told to color it using k colors, so that no two neighboring countries have the same color.
You have to color a planar map using only four colors, in such a way that no two adjacent regions have the same color.
The map is represented by a graph: each region corresponds to a vertex, and if two regions are adjacent, there is an edge connecting the corresponding vertices.
[Figure: map with regions A, B, C, D, E and the corresponding adjacency graph; a partial assignment colors A blue, B green, and E blue]
Particular state of the graph (representation of the current state): {blue, green, X, X, blue}, where X marks an unassigned region.
Goal state:
if Xi and Xj are adjacent,
color(i) is not equal to color(j).
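The goal condition — color(i) different from color(j) for every adjacent pair — suggests a simple backtracking solver. A minimal Python sketch (the adjacency below is an assumed reading of the A..E figure):

```python
def color_map(neighbors, colors):
    """Backtracking: assign colors region by region, never giving two
    adjacent regions the same color."""
    regions = list(neighbors)
    assignment = {}

    def backtrack(i):
        if i == len(regions):                  # goal: every region colored
            return True
        region = regions[i]
        for color in colors:
            # constraint: no already-colored neighbor may share this color
            if all(assignment.get(n) != color for n in neighbors[region]):
                assignment[region] = color
                if backtrack(i + 1):
                    return True
                del assignment[region]         # undo and try the next color
        return False

    return assignment if backtrack(0) else None

# assumed adjacency for the five-region A..E example above
neighbors = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'],
             'C': ['A', 'B', 'D', 'E'], 'D': ['B', 'C', 'E'],
             'E': ['C', 'D']}
print(color_map(neighbors, ['red', 'green', 'blue', 'yellow']))
```

The two kinds of rules mentioned above appear here as the constraint check (valid propagation) and the color loop (guesses that may be undone).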
[Figure: a section of the game tree for tic-tac-toe; it is X's turn (MAX)]
• Above is a section of a game tree for tic tac toe. Each node represents
a board position, and the children of each node are the legal moves
from that position
Minimax Algorithm
How do we compute our optimal move? We will assume that the
opponent is rational; that is, the opponent can compute moves just
as well as we can, and the opponent will always choose the optimal
move with the assumption that we, too, will play perfectly. One
algorithm for computing the best move is the minimax algorithm:
minimax(player, board)
    if game over in current board position
        return winner
    children = all legal moves for player from this board
    if max's turn
        return maximal score of calling minimax on all the children
    else (min's turn)
        return minimal score of calling minimax on all the children
If the game is over in the given position, then there
is nothing to compute; minimax will simply return
the score of the board. Otherwise, minimax will go
through each possible child, and (by recursively
calling itself) evaluate each possible move. Then,
the best possible move will be chosen, where ‘best’
is the move leading to the board with the most
positive score for player 1, and the board with the
most negative score for player 2.
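The pseudocode can be made concrete for tic-tac-toe. A minimal Python sketch (the board is a list of 9 squares, None for empty; scores are from MAX's, i.e. X's, point of view):

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [b[0:3], b[3:6], b[6:9],          # rows
             b[0::3], b[1::3], b[2::3],       # columns
             b[0::4], b[2:7:2]]               # diagonals
    for line in lines:
        if line[0] is not None and line.count(line[0]) == 3:
            return line[0]
    return None

def minimax(board, player):
    """Score a position: +1 if X can force a win, -1 if O can,
    0 for a forced draw. X is MAX, O is MIN."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, sq in enumerate(board) if sq is None]
    if not moves:
        return 0                               # board full, no winner: draw
    scores = []
    for m in moves:                            # evaluate every legal move
        child = board[:]
        child[m] = player
        scores.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

empty = [None] * 9
print(minimax(empty, 'X'))  # 0: perfect play from both sides is a draw
```

To pick the actual move rather than just the score, take the argmax (or argmin) over the children instead of returning the value alone.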
Heuristic evaluation function:
- An evaluation function estimates how good the current board configuration is for a player.
MAX is the player trying to maximize its score and MIN is the opponent trying to minimize MAX's score.
Optimal strategy for MINIMAX: Designed to find the optimal strategy for MAX and hence the best move.
4. When the values reach the root: choose the maximum value and the corresponding move.
Note: A higher utility value is good for MAX and a lower value is bad for MAX.
Alpha–beta pruning:
The minimax algorithm is a way of finding an optimal
move in a two player game. Alpha-beta pruning is a
way of finding the optimal minimax solution while
avoiding searching sub-trees of moves which won't be
selected. In the search tree for a two-player game,
there are two kinds of nodes, nodes representing your
moves and nodes representing your opponent's moves.
MAX nodes: The goal at a MAX node is to maximize the value of the sub-tree rooted at that node. To do this, a MAX node chooses the maximum of its children's values, and this becomes the value of the node.
Thus, when any new node is being considered as a possible path to the solution, it can only work if:
α <= N <= β
where N is the current estimate of the value of the node.
Example: Initially, all you know about the bounds is that the value is a number less than infinity and greater than negative infinity, so the search starts with α = -infinity and β = +infinity. The following problem illustrates alpha-beta pruning:
Solution: Alpha-beta pruning can be applied to trees of any depth, and it is often possible to prune entire subtrees rather than just leaves. The general principle is this: consider a node n somewhere in the tree, such that Player has a choice of moving to that node. If Player has a better choice m, either at the parent node of n or at any choice point further up, then n will never be reached in actual play. So once we have found out enough about n (by examining some of its descendants) to reach this conclusion, we can prune it.
At the last MIN level, when E comes, D is 10; F comes as 11, so D stays 10. Now at the MAX level it is clear that C will be at least 10. At the MIN level, H comes as 9, so it is confirmed that G will be less than or equal to 9. At the MAX level above, 10 has already been achieved, so there is no reason to pursue a value of at most 9; therefore I is pruned.
• So C is confirmed as 10, and at the MIN level above, B will be at most 10. At the lower MIN level, when L comes as 14, K is at most 14; after confirming M as 15, K is finalized as 14. So at the next MAX level, J is at least 14. But at the MIN level above, B has already got 10, so there is no need to explore N; it is pruned.