Artificial Intelligence (AI)
UNIT-2
Problem Solving
Methods
Constraint Propagation
Backtracking Search
Game Playing
Optimal Decisions in Games
Alpha – Beta Pruning
Stochastic Games
Lecture 1: Problem Solving
A reflex agent in AI directly maps states to actions.
Goal Formulation:
Uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
Uninformed search explores the search tree without any information about the search space beyond the initial state, the operators, and the goal test, so it is also called blind search.
It examines each node of the tree until it reaches the goal node.
Types of Uninformed Search
Breadth-first Search
Depth-first Search
Depth-limited Search
Iterative deepening depth-first search
Uniform cost search
Bidirectional Search
Breadth-first Search
Breadth-first search is the most common search strategy for traversing a tree or
graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
The breadth-first search algorithm is an example of a general graph-search algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
Advantages
BFS will provide a solution if any solution exists.
If there is more than one solution for a given problem, then BFS will find the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages
It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
BFS needs a lot of time if the solution is far away from the root node.
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
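As a sketch, the level-by-level expansion can be implemented with a FIFO queue. The graph dictionary below is an assumed tree, reverse-engineered to be consistent with the traversal order shown above; it is not taken from the slides' figure.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search using a FIFO queue.

    Expands all nodes at the current level before moving to the next
    level, and returns the path from start to goal (or None).
    """
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: oldest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Assumed tree whose BFS visit order is S, A, B, C, D, G, H, E, F, I, K
graph = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G", "H"],
    "C": ["E", "F"],
    "G": ["I"],
    "H": ["K"],
}
print(bfs(graph, "S", "K"))   # ['S', 'B', 'H', 'K']
```

Because the queue is FIFO, the first path that reaches the goal is also the one with the fewest steps, matching the "minimal solution" property noted above.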
Depth-first Search
Advantages:
DFS requires much less memory than BFS, as it only needs to store a stack of the nodes on the path from the root node to the current node.
It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
DFS goes deep down into the search space, and it may sometimes enter an infinite loop.
Example of DFS algorithm:
S ---> A ---> C ---> G
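The DFS path above can be produced with an explicit LIFO stack. The graph below is a small assumed example in which DFS reaches G via S, A, C; it is illustrative, not the slides' figure.

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO).

    Memory use stays low because only the paths branching off the
    current path are stored, not whole levels of the tree.
    """
    stack = [[start]]                    # stack of partial paths
    while stack:
        path = stack.pop()               # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in path:     # avoid cycles on this path
                stack.append(path + [neighbor])
    return None

# Assumed graph where DFS finds the goal G via S -> A -> C
graph = {"S": ["A", "B"], "A": ["C", "D"], "C": ["G"]}
print(dfs(graph, "S", "G"))   # ['S', 'A', 'C', 'G']
```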
Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is proportional to the number of nodes traversed. With branching factor b, it is given by:
T = 1 + b + b^2 + ... + b^m = O(b^m)
where m = the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, so the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
Depth-Limited Search
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
Depth-limited search also has a disadvantage of incompleteness.
It may not be optimal if the problem has more than one solution.
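A minimal recursive sketch of depth-limited search follows; the graph is an assumed example, and the second call illustrates the incompleteness mentioned above (the goal lies below the limit).

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive DFS that stops expanding below the given depth limit.

    Returns a path to the goal, or None if the goal is not found
    within the limit (the source of DLS's incompleteness).
    """
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff: do not go deeper
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"]}  # assumed example
print(depth_limited_search(graph, "S", "D", 2))   # ['S', 'A', 'D']
print(depth_limited_search(graph, "S", "D", 1))   # None (goal below limit)
```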
[Figure: depth-limited search example showing the expansion order, Steps 2-6]
Iterative Deepening Depth-First Search
This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
• Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast
search and memory efficiency.
• Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the previous phases.
1st Iteration-----> A
2nd Iteration----> A, B, C
3rd Iteration------>A, B, D, E, C, F, G
4th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
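The iterations above can be sketched as depth-limited DFS restarted with increasing limits. The tree dictionary is an assumed one, chosen so its visit orders match the iterations listed above (goal K at depth 3).

```python
def iddfs(graph, start, goal, max_depth=10):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...

    Combines DFS's low memory use with BFS's completeness; the
    shallower levels are re-expanded on every iteration, which is
    the drawback noted above.
    """
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in graph.get(node, []):
            found = dls(child, limit - 1)
            if found is not None:
                return [node] + found
        return None

    for depth in range(max_depth + 1):
        path = dls(start, depth)
        if path is not None:
            return path
    return None

# Assumed tree mirroring the iterations above
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))   # ['A', 'C', 'F', 'K']
```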
Time Complexity: Suppose b is the branching factor and d is the depth of the shallowest goal; then the worst-case time complexity is O(b^d).
Space Complexity: The space complexity of IDDFS is O(bd).
Bidirectional Search
The bidirectional search algorithm runs two simultaneous searches, one from the initial state (called forward search) and the other from the goal node (called backward search), to find the goal node.
Bidirectional search replaces one single search graph with two small subgraphs in
which one starts the search from an initial vertex and other starts from goal vertex.
The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
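A sketch using BFS in both directions follows; the search stops as soon as the two frontiers intersect. The graph is an assumed undirected example (neighbors listed in both directions so the backward search can follow edges in reverse).

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run BFS from the start and from the goal simultaneously;
    stop when the two search frontiers meet and join the two paths.
    """
    if start == goal:
        return [start]
    forward = {start: [start]}           # node -> path from start
    backward = {goal: [goal]}            # node -> path from goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        for queue, this, other, fwd in ((fq, forward, backward, True),
                                        (bq, backward, forward, False)):
            node = queue.popleft()
            for nb in graph.get(node, []):
                if nb in other:          # frontiers intersect: join paths
                    if fwd:
                        return this[node] + other[nb][::-1]
                    return other[nb] + this[node][::-1]
                if nb not in this:
                    this[nb] = this[node] + [nb]
                    queue.append(nb)
    return None

# Assumed undirected chain A - B - C - D
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(graph, "A", "D"))   # ['A', 'B', 'C', 'D']
```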
Bidirectional Search
Advantages:
Bidirectional search is fast and requires less memory, since each search only needs to cover about half the solution depth.
Disadvantages:
The implementation of bidirectional search is difficult, and the goal state must be known in advance.
Path: A ---> E ---> F ---> I
Estimated path cost = 140 + 99 + 211 = 450
Example
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is
finite.
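The greedy strategy can be sketched with a priority queue ordered purely by h(n), ignoring the cost accumulated so far. The map and heuristic values below are assumptions modeled on the classic route-finding example, arranged so the greedy path has the step costs 140, 99, 211 mentioned above.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with
    the lowest heuristic value h(n), ignoring the path cost so far.
    Returns (path, total step cost) or None.
    """
    frontier = [(h[start], [start], 0)]  # (h-value, path, cost so far)
    visited = set()
    while frontier:
        _, path, cost = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nb, step in graph.get(node, []):
            if nb not in visited:
                heapq.heappush(frontier, (h[nb], path + [nb], cost + step))
    return None

# Assumed toy map: greedy follows A -> S -> F -> B (140 + 99 + 211 = 450)
graph = {"A": [("S", 140), ("T", 118)],
         "S": [("F", 99), ("R", 80)],
         "F": [("B", 211)],
         "R": [("P", 97)]}
h = {"A": 366, "S": 253, "T": 329, "F": 176, "R": 193, "P": 100, "B": 0}
print(greedy_best_first(graph, h, "A", "B"))   # (['A', 'S', 'F', 'B'], 450)
```

Note that the search commits to whichever neighbour looks closest to the goal, which is why greedy best-first is neither complete nor optimal in general.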
Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic, i.e., one that never overestimates the true cost of reaching the goal.
4. Termination Criteria: The algorithm stops when it reaches a state where no better
neighboring states exist or when other termination criteria are met.
Local Search and Optimization Problems
•It is also called greedy local search as it only looks to its good immediate
neighbor state and not beyond that.
•A node of hill climbing algorithm has two components which are state
and value.
•Hill Climbing is mostly used when a good heuristic is available.
•In this algorithm, we don't need to maintain and handle the search tree or
graph as it only keeps a single current state.
•Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
•Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
•No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
On the Y-axis we have the function, which can be an objective function or a cost function, and the state space is on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.
Types of Hill Climbing
Simple Hill Climbing
Step 1: Evaluate the initial state; if it is the goal state then return success and stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
If it is goal state, then return success and quit.
Else if it is better than the current state then assign new state as a current state.
Else if it is not better than the current state, then return to step 2.
Step 5: Exit.
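The steps above can be sketched as follows: operators are applied one at a time, and the search moves to the first neighbour that improves on the current state. The objective function and neighbour generator are assumed toy examples with a single peak.

```python
def simple_hill_climbing(initial, neighbors, value):
    """Simple hill climbing following the steps above: move to the
    FIRST neighbour that is better than the current state, not
    necessarily the best one.
    """
    current = initial
    while True:
        moved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):
                current = candidate      # take the first improvement
                moved = True
                break
        if not moved:                    # no operator improves: stop
            return current

# Assumed toy objective with a single peak at x = 5
value = lambda x: -(x - 5) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbors, value))   # 5
```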
Different Regions in State Space Diagram
Local Maximum: Local maximum is a state which is better than its neighbour states,
but there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape.
It has the highest value of objective function.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of
current states have the same value.
Steepest-Ascent Hill Climbing
Step 1: Evaluate the initial state; if it is the goal state then return success and stop, else make the current state the initial state.
Step 2: Loop until a solution is found or the current state does not change.
a) Let SUCC be a state such that any successor of the current state will be better
than it.
b) For each operator that applies to the current state:
I. Apply the new operator and generate a new state.
II. Evaluate the new state.
III. If it is goal state, then return it and quit, else compare it to the SUCC.
IV. If it is better than SUCC, then set new state as SUCC.
V. If the SUCC is better than the current state, then set current state to
SUCC.
Step 3: Exit.
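The loop above can be condensed into a short sketch: all successors are examined, the best one plays the role of SUCC, and the search moves only if SUCC improves on the current state. The objective and neighbour functions are assumed toy examples.

```python
def steepest_ascent(initial, neighbors, value):
    """Steepest-ascent hill climbing: examine ALL successors of the
    current state and move to the best one (SUCC in the steps above),
    but only if it improves on the current state.
    """
    current = initial
    while True:
        succ = max(neighbors(current), key=value, default=current)
        if value(succ) <= value(current):
            return current               # no successor is better: stop
        current = succ

value = lambda x: -(x - 3) ** 2          # assumed objective, peak at x = 3
neighbors = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, neighbors, value))   # 3
```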
Stochastic Hill Climbing
Stochastic hill climbing does not examine all of its neighbours before moving.
Rather, this search algorithm selects one neighbour node at random and decides
whether to make it the current state or examine another state.
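A minimal sketch: one neighbour is sampled at random each step and accepted only if it improves the current state. The objective function is an assumed toy example with a single peak, and the seed is fixed only to make the demo run repeatable.

```python
import random

def stochastic_hill_climbing(initial, neighbors, value, max_steps=1000):
    """Stochastic hill climbing: at each step pick ONE neighbour at
    random and accept it only if it is better than the current state,
    instead of examining every neighbour.
    """
    current = initial
    for _ in range(max_steps):
        candidate = random.choice(neighbors(current))
        if value(candidate) > value(current):
            current = candidate
    return current

random.seed(0)                           # deterministic demo run
value = lambda x: -(x - 7) ** 2          # assumed objective, peak at x = 7
neighbors = lambda x: [x - 1, x + 1]
print(stochastic_hill_climbing(0, neighbors, value))
```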
Issues in Hill Climbing
Local Maximum: A local maximum is a peak state in the landscape which is better
than each of its neighbouring states, but there is another state also present which is
higher than the local maximum.
Solution: Backtracking technique can be a solution of the local maximum in state space
landscape. Create a list of the promising path so that the algorithm can backtrack the
search space and explore other paths as well.
Plateau: A plateau is a flat area of the search space in which all the neighbour states of the current state contain the same value; because of this, the algorithm cannot find the best direction in which to move. A hill-climbing search may get lost in the plateau area.
Solution: The solution for a plateau is to take big steps or very small steps while searching. Randomly select a state far away from the current state, so that the algorithm may find a non-plateau region.
Ridges: A ridge is a special form of local maximum: an area that is higher than its surrounding areas but itself has a slope, and it cannot be climbed in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, this problem can be improved.
A similar process is used in simulated annealing, in which the algorithm picks a
random move instead of the best move. If the random move improves the state, then
it follows the same path. Otherwise, the algorithm follows a path with a
probability of less than 1, or it moves downhill and chooses another path.
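The annealing process just described can be sketched as follows: improvements are always accepted, while worse moves are accepted with probability exp(delta / T), which shrinks as the temperature T cools. The bumpy objective function and cooling schedule are assumed examples.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, t0=10.0, cooling=0.95,
                        steps=500):
    """Simulated annealing: pick a random move; always accept an
    improvement, and accept a worse move with probability
    exp(delta / T), so early (hot) phases can escape local maxima.
    """
    current, t = initial, t0
    best = current
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate          # downhill moves allowed when hot
        if value(current) > value(best):
            best = current
        t = max(t * cooling, 1e-9)       # cool the temperature
    return best

random.seed(1)
# Assumed bumpy objective with local maxima; global maximum at x = 0
value = lambda x: -abs(x) + 3 * math.cos(x)
neighbors = lambda x: [x - 1, x + 1]
print(simulated_annealing(15, neighbors, value))
```

Tracking `best` separately guarantees the returned state is never worse than the starting state, even though the walk itself is allowed to move downhill.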
Simulated Annealing
Simulated annealing gradually decreases a temperature parameter; worse moves are accepted with a probability that shrinks as the temperature falls, which allows the search to escape local maxima.
Local Beam Search
Local beam search keeps track of k states rather than just one. At each step, it generates all the successors of all k states and selects the k best among them.
Lecture 5: Searching With Partial Observations
If the environment is not fully observable or deterministic, then the following types of
problems occur:
Mini-Max Algorithm
Alpha-Beta Pruning
Mini-Max Algorithm
This algorithm computes the minimax decision for the current state.
In this algorithm two players play the game; one is called MAX and the other is called MIN.
Both players compete, as the opponent gets the minimum benefit while they get the maximum benefit.
Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
The minimax algorithm performs a depth-first exploration of the complete game tree.
MAX-MIN Algorithm
function minimax(node, depth, maximizingPlayer) is
if depth == 0 or node is a terminal node then
return the static evaluation of node
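A runnable sketch of the minimax pseudocode above, as a depth-first recursion: MAX takes the largest child value, MIN the smallest. The `children` and `evaluate` callbacks and the tiny game tree are illustrative assumptions, not names from the slides.

```python
def minimax(node, depth, maximizing_player, children, evaluate):
    """Minimax via depth-first exploration of the game tree."""
    kids = children(node)
    if depth == 0 or not kids:           # depth cutoff or terminal node
        return evaluate(node)
    if maximizing_player:                # MAX picks the largest value
        return max(minimax(c, depth - 1, False, children, evaluate)
                   for c in kids)
    return min(minimax(c, depth - 1, True, children, evaluate)
               for c in kids)            # MIN picks the smallest value

# Assumed two-ply game tree: leaf nodes hold static evaluations
tree = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
children = lambda n: tree.get(n, [])
evaluate = lambda n: n                   # leaves are their own scores
print(minimax("root", 2, True, children, evaluate))   # 3
```

With MAX to move at the root, MIN replies with 3 on the left branch and 2 on the right, so MAX chooses the left branch for a value of 3.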