
ARTIFICIAL INTELLIGENCE

UNIT-2
Problem Solving Methods

Dr. Himanshu Rai (Asst. Professor)

Department of Computer Science & Engineering
United College of Engineering & Research,
Prayagraj
Unit 2 : Problem Solving Methods

 Problem Solving Methods
 Search Strategies
 Uninformed
 Informed
 Heuristics
 Local Search Algorithms and Optimization Problems
 Searching with Partial Observations
 Constraint Satisfaction Problems
 Constraint Propagation
 Backtracking Search
 Game Playing
 Optimal Decisions in Games
 Alpha-Beta Pruning
 Stochastic Games
Lecture 1 : Problem Solving
 A reflex agent in AI directly maps states to actions.

 Such agents fail to operate in environments where the state-to-action mapping is too large to store or compute directly. In that case the problem is handed to a problem-solving agent, which breaks the large problem into smaller subproblems and solves them one by one.

 The final integrated sequence of actions produces the desired outcome.


----Continue
 Depending on the problem and its working domain, different types of problem-solving agents are defined. They operate at an atomic level, with no internal state visible, using a problem-solving algorithm.

 The problem-solving agent works by precisely defining the problem and its possible solutions.

 So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.
Steps of Problem-Solving in AI
 Problems in AI are closely tied to the nature of humans and their activities, so we need a finite number of steps to solve a problem, which makes the human's work easy.
 The following steps are required to solve a problem:

 Goal Formulation:
This is the first and simplest step in problem-solving. It organizes finite steps to formulate a target/goal that requires some action to achieve. Today, goal formulation is performed by AI agents.

 Problem Formulation:
This is one of the core steps of problem-solving; it decides what actions should be taken to achieve the formulated goal. In AI this core part depends on a software agent, which consists of the following components to formulate the associated problem.
Components in Problem Formulation
 Initial State:
This is the state from which the AI agent starts towards the specified goal.
 Actions:
This stage works as a function over a specific class: it takes the current state and returns all actions that can be performed in that state.
 Transition model:
This stage describes what each action does: it integrates the action chosen in the previous stage with the current state and produces the resulting state, which is forwarded to the next stage.
 Goal test:
This stage determines whether the state produced by the transition model is the specified goal. Once the goal is achieved, the actions stop and control moves to determining the cost of achieving the goal.
 Path costing:
This component assigns a numeric cost to achieving the goal. It accounts for all hardware, software, and human working costs.
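The five components above can be captured in one small Python class. This is a minimal sketch, assuming a toy "number line" problem (move from 0 to a goal of 5 in steps of ±1) invented here purely to keep the example self-contained:

```python
class NumberLineProblem:
    """Toy formulation: walk along the integers from 0 to a goal value."""

    def __init__(self, initial=0, goal=5):
        self.initial_state = initial              # Initial State
        self.goal = goal

    def actions(self, state):                     # Actions available in a state
        return ["+1", "-1"]

    def result(self, state, action):              # Transition model
        return state + 1 if action == "+1" else state - 1

    def goal_test(self, state):                   # Goal test
        return state == self.goal

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1                    # Path cost: one unit per step


p = NumberLineProblem()
s = p.initial_state
while not p.goal_test(s):                         # naive agent: always move right
    s = p.result(s, "+1")
print(s)  # 5
```

Any concrete search problem (8-puzzle, route finding, etc.) can be formulated by filling in these same five methods.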
Search Algorithm Terminologies:

 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
 Search Space: The search space represents the set of possible solutions which a system may have.
 Start State: It is the state from which the agent begins the search.
 Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.
 Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: It gives the description of all the actions available to the agent.
 Transition model: A description of what each action does can be represented as a transition model.
 Path Cost: It is a function which assigns a numeric cost to each path.
 Solution: It is an action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:

 Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any input.
 Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.
 Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
 Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity of the problem.
Types of Search Algorithms
Uninformed Search

 Uninformed search does not use any domain knowledge, such as closeness to or the location of the goal.
 It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
 Uninformed search explores the search tree without any information about the search space beyond the initial state, the operators, and the goal test, so it is also called blind search.
 It examines each node of the tree until it reaches the goal node.
Informed Search

 Informed search algorithms use domain knowledge.


 In an informed search, problem information is available which can guide the
search.
 Informed search strategies can find a solution more efficiently than an
uninformed search strategy.
 Informed search is also called a Heuristic search.
A heuristic is a technique which is not always guaranteed to find the best solution but is guaranteed to find a good solution in reasonable time.
Uninformed Search (Blind Search) and Informed Search (Heuristic Search) strategies:

Uninformed or Blind Search              Informed or Heuristic Search

No additional information beyond        Uses additional information beyond
that provided in the problem            that provided in the problem
definition                              definition

Less effective                          More effective

No information about the number of      Uses problem-specific knowledge
steps or path cost beyond the
definition of the problem itself
Lecture 2 :Uninformed Search

 Breadth-first Search
 Depth-first Search
 Depth-limited Search
 Iterative deepening depth-first search
 Uniform cost search
 Bidirectional Search
Breadth-first Search

 Breadth-first search is the most common search strategy for traversing a tree or graph.
 This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
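The level-by-level expansion and FIFO queue described above can be sketched in a few lines of Python; the example graph is a hypothetical one:

```python
from collections import deque


def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # dequeue the oldest (shallowest) path
        node = path[-1]
        if node == goal:
            return path                  # shallowest solution found
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                          # no solution exists


graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": ["G"], "D": []}
print(bfs(graph, "S", "G"))  # ['S', 'A', 'C', 'G']
```

Because the queue is first-in first-out, the first path that reaches the goal is always one of the shallowest.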
Continue--

 Advantages
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
 Disadvantages
 It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
 BFS needs a lot of time if the solution is far away from the root node.
Continue--

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K

 Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed by BFS until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor (number of successors at each state).
T(b) = 1 + b + b^2 + ... + b^d = O(b^d)
 Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
 Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.
 Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
Depth-First Search

 Depth-first search is a recursive algorithm for traversing a tree or graph data


structure.
 It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Depth-First Search
 Advantage:
 DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

 Disadvantage:
 There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
 The DFS algorithm goes deep down in its search and may sometimes enter an infinite loop.
S--->A--->C--->G
Example of DFS algorithm
--Continue
 Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
 Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + ... + b^m = O(b^m)
where m = maximum depth of any node, which can be much larger than d (the shallowest solution depth).
 Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
 Optimal: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
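A minimal sketch of DFS with an explicit stack, assuming a small hypothetical graph chosen so that the traversal mirrors the S--->A--->C--->G example above:

```python
def dfs(graph, start, goal):
    """Depth-first search using a LIFO stack of paths."""
    stack = [[start]]
    while stack:
        path = stack.pop()               # pop the most recently pushed path
        node = path[-1]
        if node == goal:
            return path
        # reversed() so the left-most successor is explored first
        for succ in reversed(graph.get(node, [])):
            if succ not in path:         # avoid cycles along the current path
                stack.append(path + [succ])
    return None


graph = {"S": ["A", "B"], "A": ["C", "D"], "B": [], "C": ["G"], "D": []}
print(dfs(graph, "S", "G"))  # ['S', 'A', 'C', 'G']
```

The `succ not in path` check only prevents cycles on the current path; repeated states on different branches can still recur, which is exactly the disadvantage noted above.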
Depth-Limited Search

 The depth-limited search algorithm is similar to depth-first search with a predetermined depth limit.
 Depth-limited search removes the drawback of the infinite path in depth-first search.
 In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
 Depth-limited search can terminate with two conditions of failure:
 Standard failure value: It indicates that the problem does not have any solution.
 Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
Depth-Limited Search

 Advantages:
 Depth-limited search is Memory efficient.

 Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.
---Continue

 Completeness: The DLS search algorithm is complete if the solution lies within the depth limit.
 Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
 Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
 Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
Uniform Cost Search

 Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
 This algorithm comes into play when a different cost is available for each edge.
 The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
 Uniform-cost search expands nodes according to their path costs from the root node.
 It can be used to solve any graph/tree where the optimal cost is in demand.
 The uniform-cost search algorithm is implemented using a priority queue.
 It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm when the path cost of all edges is the same.
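The priority-queue behaviour described above can be sketched in Python with `heapq`; the weighted graph below is a hypothetical example:

```python
import heapq


def ucs(graph, start, goal):
    """Uniform-cost search; graph maps node -> list of (successor, edge_cost)."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost, path            # first goal popped is optimal
        if node in explored:
            continue
        explored.add(node)
        for succ, step in graph.get(node, []):
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None


graph = {"S": [("A", 1), ("G", 12)], "A": [("B", 3), ("C", 1)],
         "B": [("D", 3)], "C": [("D", 1), ("G", 2)], "D": [("G", 3)]}
print(ucs(graph, "S", "G"))  # (4, ['S', 'A', 'C', 'G'])
```

Note that the direct edge S-G (cost 12) is ignored in favour of the cheaper cumulative path of cost 4, which is exactly the behaviour that distinguishes UCS from plain BFS.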
---Continue

 Completeness: Uniform-cost search is complete: if there is a solution, UCS will find it.
 Time Complexity: Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the goal node. Then the number of steps is C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε).
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
 Space Complexity: The same reasoning applies to space, so the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
 Optimal: Uniform-cost search is always optimal, as it only selects the path with the lowest path cost.
Iterative Deepening DFS

 The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

 This algorithm performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.

 This search algorithm combines the benefits of breadth-first search's completeness and depth-first search's memory efficiency.

 The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Contd..

• Advantages:

• It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.

• Disadvantages:

• The main drawback of IDDFS is that it repeats all the work of the previous phase.
---Continue

 1st Iteration-----> A
 2nd Iteration----> A, B, C
 3rd Iteration------>A, B, D, E, C, F, G
 4th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
--Continue

 Completeness: This algorithm is complete if the branching factor is finite.

 Time Complexity: If b is the branching factor and d the depth of the shallowest goal, then the worst-case time complexity is O(b^d).
 Space Complexity: The space complexity of IDDFS is O(bd).
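The iteration sequence shown above (A; A,B,C; ...; goal K found in the fourth iteration) can be reproduced with a short sketch: a recursive depth-limited DFS wrapped in a loop that gradually raises the limit. The graph matches the A-K example:

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that treats nodes at the depth limit as having no successors."""
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached
    for succ in graph.get(node, []):
        found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
        if found:
            return found
    return None


def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # gradually increase the depth limit
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None


graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))  # ['A', 'C', 'F', 'K']
```

Each iteration re-runs the whole depth-limited search from scratch, which is the repeated-work drawback noted above; it is affordable because the deepest level dominates the total cost.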
Bidirectional Search

 The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called the forward search, and the other from the goal node, called the backward search, to find the goal node.
 Bidirectional search replaces a single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex.
 The search stops when these two graphs intersect each other.
 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Bidirectional Search

Advantages:

 Bidirectional search is fast.


 Bidirectional search requires less memory

Disadvantages:

 Implementation of the bidirectional search tree is difficult.


 In bidirectional search, one should know the goal state in advance.
Bidirectional Search

 Completeness: Bidirectional search is complete if we use BFS in both searches.

 Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)), since each of the two searches only needs to reach half the solution depth.

 Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

 Optimal: Bidirectional search is optimal.
Lecture 3 :Informed Search Methods

 Informed search algorithms use knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
 This knowledge helps agents explore less of the search space and find the goal node more efficiently.
 Informed search algorithms are more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic searches. The value of the heuristic function is always positive.
 Heuristic function: A heuristic is a function used in informed search which finds the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.
 The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the given state and the goal state.

Admissibility of the heuristic function

 h(n) <= h*(n). Here h(n) is the heuristic (estimated) cost, and h*(n) is the true cost of reaching the goal from n. A heuristic is admissible when the estimated cost never exceeds the true cost.
Types of Informed Search

Informed search algorithms have information about the goal state, which helps in more efficient searching. This information is obtained by something called a heuristic.

Search Heuristics: In an informed search, a heuristic is a function that estimates how close a state is to the goal state. For example: Manhattan distance, Euclidean distance, etc.

 Best First Search
 A* Search
 Greedy BFS
 AO* Search
BFS (Best First Search)

 The idea of best first search is to use an evaluation function to decide which adjacent node is most promising and then explore it.
 Best first search falls under the category of heuristic search, or informed search.
 We use a priority queue to store the costs of nodes.
 So the implementation is a variation of BFS; we just need to change the queue to a priority queue.
Best First Search Algorithm

Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, the algorithm checks its evaluation function f(n), and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
Path: S->A->C->B->G
Contd..
 Advantages:
 Best first search can switch between BFS and DFS, gaining the advantages of both algorithms.
 This algorithm is more efficient than the BFS and DFS algorithms.
 Disadvantages:
 It can behave as an unguided depth-first search in the worst-case scenario.
 It can get stuck in a loop, as DFS can.
 This algorithm is not optimal.
 The worst-case time complexity for best first search is O(n * log n), where n is the number of nodes. In the worst case, we may have to visit all nodes before we reach the goal. Note that the priority queue is implemented using a min (or max) heap, and insert and remove operations take O(log n) time.
 The performance of the algorithm depends on how well the cost or evaluation function is designed.
(Graph figure omitted) Estimated path cost along the selected path = 140 + 99 + 211 = 450
Example

Each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
Contd..
 Expand the nodes of S and put in the CLOSED list
 Initialization: Open [A, B], Closed [S]
 Iteration 1 : Open [A], Closed [S, B]
 Iteration 2 : Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
 Iteration 3 : Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
 Hence the final solution path will be: S----> B----->F----> G
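The iterations above can be reproduced with a short priority-queue sketch. The graph and the h(n) values below are assumptions reconstructed to be consistent with the iterations shown, since the original table is not reproduced here:

```python
import heapq


def best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h(n)."""
    open_list = [(h[start], start, [start])]      # priority queue ordered by h(n)
    closed = set()
    while open_list:                              # fail when OPEN is empty
        _, node, path = heapq.heappop(open_list)  # node with lowest h(n)
        if node == goal:
            return path
        closed.add(node)
        for succ in graph.get(node, []):          # expand node n
            # add a successor only if it is in neither OPEN nor CLOSED
            if succ not in closed and all(succ != q[1] for q in open_list):
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None


graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
         "E": ["H"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "C": 7, "D": 3,
     "E": 8, "F": 2, "H": 4, "I": 9, "G": 0}
print(best_first(graph, h, "S", "G"))  # ['S', 'B', 'F', 'G']
```

With these values the pops follow S (h=13), B (h=4), F (h=2), G (h=0), matching the iterations listed above.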
Contd...
 Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

 Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

 Complete: Greedy best-first search is incomplete, even if the given state space is finite.

 Optimal: The greedy best-first search algorithm is not optimal.


A* Search Algorithm
 A* search is the most commonly known form of best-first search.
 It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).
 It combines the features of UCS and greedy best-first search, by which it solves problems efficiently.
 The A* search algorithm finds the shortest path through the search space using the heuristic function.
 This search algorithm expands a smaller search tree and provides optimal results faster.
 The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
 In the A* search algorithm, we use a search heuristic as well as the cost to reach the node.
The A* algorithm generates all successor nodes and computes an estimate of the distance (cost) from the start node to the goal node through each of the successors.

It then chooses the successor with the shortest estimated distance for expansion.

Asterisks (*) are used to designate estimates of the corresponding true values: f(n) = g(n) + h(n)
Contd..
Admissibility condition: Algorithm A is admissible if it is guaranteed to return an optimal solution when one exists.

Completeness condition: Algorithm A is complete if it always terminates with a solution when one exists.

Dominance property: Let A1 and A2 be admissible algorithms with heuristic estimation functions h1 and h2, respectively. A1 is said to be more informed than A2 whenever h1(n) > h2(n) for all n. A1 is also said to dominate A2.
A* Algorithm

Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it in the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, it should be attached to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Contd....

 Advantages:

 The A* search algorithm performs better than other search algorithms.
 The A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.
 Disadvantages:
 It does not always produce the shortest path, as it is partly based on heuristics and approximation.
 The A* search algorithm has some complexity issues.
 The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Example:
We will traverse the given graph using the A* algorithm. The heuristic value of each state is given in the table below, so we will calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state. Here we will use OPEN and CLOSED lists.
Contd
Contd..

 Initialization: {(S, 5)}


 Iteration1: {(S--> A, 4), (S-->G, 10)}
 Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
 Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G,
10)}
 Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal path
with cost 6.
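A sketch of A* in Python matching the worked example above. The edge costs and h(n) values below are assumptions reconstructed from the f-values shown in the iterations (e.g. f(S→A) = 4, f(S→A→C→G) = 6), since the original graph figure and table are not reproduced here:

```python
import heapq


def a_star(graph, h, start, goal):
    """A* search; graph maps node -> list of (successor, edge_cost)."""
    open_list = [(h[start], 0, start, [start])]   # ordered by f(n) = g(n) + h(n)
    closed = {}                                    # node -> best g seen so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed and closed[node] <= g:
            continue                               # a cheaper path already expanded
        closed[node] = g
        for succ, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None


graph = {"S": [("A", 1), ("G", 10)],
         "A": [("B", 2), ("C", 1)],
         "B": [("D", 5)],
         "C": [("D", 3), ("G", 4)],
         "D": [("G", 2)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star(graph, h, "S", "G"))  # (6, ['S', 'A', 'C', 'G'])
```

The pops follow the listed iterations: S (f=5), A (f=4), C (f=4), then G (f=6), giving the optimal path S→A→C→G with cost 6.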
Contd....
Key Points:
 The A* algorithm returns the first path found, and it does not search all remaining paths.
 The efficiency of the A* algorithm depends on the quality of the heuristic. The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

 Complete: The A* algorithm is complete as long as:

 The branching factor is finite.
 The cost of every action is fixed (bounded below by a positive constant).
Contd...
 Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

 Admissibility: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
 Consistency: the second condition, consistency, is required only for A* graph search. If the heuristic function is admissible, then A* tree search will always find the least-cost path.
 Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
 Space Complexity: The space complexity of the A* search algorithm is O(b^d).
Lecture 4 :Local Search and Optimization
Problems
 In the problems we have studied so far, the solution is the path.
 In many optimization problems, the path is irrelevant; the goal state itself is the solution.
 The state space is a set of "complete" configurations; the optimal configuration is one of them.
 An iterative improvement algorithm keeps a single "current" state and tries to improve it.
 The space complexity is constant.
Local Search: Local search operates using a single current node (rather than multiple paths) and generally moves only to the neighbors of that node.

Advantages:

1. They use very little memory, usually a constant amount.
2. They can often find reasonable solutions in large or infinite search spaces.

Optimization Problem: In addition to finding goals, local search algorithms are used for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
Characteristics of Local Search Algorithms
1. Objective Function: These algorithms use an objective function (often referred to as a heuristic function) to evaluate the quality of states. The goal is to maximize or minimize this function.

2. Neighborhood: For each state, there is a set of neighboring states that can be reached by applying allowable moves or transitions.

3. Termination Criteria: The algorithm stops when it reaches a state where no better neighboring states exist, or when other termination criteria are met.
Local Search and Optimization Problems

 Hill Climbing Search


 Simulated Annealing
 Local Beam Search
 Genetic Algorithm
 Tabu Search
Hill Climbing Algorithm:

•The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value in order to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.

•The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance traveled by the salesman.

•It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
•A node of the hill climbing algorithm has two components: state and value.
•Hill climbing is mostly used when a good heuristic is available.
•In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.

Following are some main features of the hill climbing algorithm:

•Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
•Greedy approach: Hill climbing search moves in the direction which optimizes the cost.
•No backtracking: It does not backtrack in the search space, as it does not remember previous states.
On the Y-axis we have the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (or a local minimum). If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (or a local maximum).
Types of Hill Climbing

 Simple hill Climbing:


 Steepest-Ascent hill-climbing
 Stochastic hill Climbing
Simple Hill Climbing Algorithm
It evaluates only one neighbor node state at a time, selects the first one which optimizes the current cost, and sets it as the current state. It checks only one successor state, and if that successor is better than the current state, it moves there; otherwise it stays in the same state.

Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
If it is the goal state, then return success and quit.
Else, if it is better than the current state, then assign the new state as the current state.
Else, if it is not better than the current state, then return to Step 2.
Step 5: Exit.
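The steps above can be sketched in Python. The one-dimensional objective function with a single peak is a hypothetical example, and the ±1 moves stand in for the operators:

```python
def simple_hill_climbing(objective, state, step=1, max_iters=1000):
    """Move to the FIRST neighbour that improves the objective; stop when none does."""
    for _ in range(max_iters):
        moved = False
        for neighbour in (state + step, state - step):   # apply an operator
            if objective(neighbour) > objective(state):  # better than current?
                state = neighbour                        # take the first improvement
                moved = True
                break
        if not moved:                                    # no operator improves: stop
            return state
    return state


f = lambda x: -(x - 7) ** 2              # single peak at x = 7
print(simple_hill_climbing(f, 0))  # 7
```

With a single-peaked function the climb always ends at the global maximum; on a multi-peaked function this same loop would stop at whichever local maximum it reaches first, which motivates the issues discussed below.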
Different Regions in State Space Diagram

 Local Maximum: A local maximum is a state which is better than its neighbour states, but there is another state that is higher still.

 Global Maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.

 Current state: It is the state in the landscape diagram where the agent is currently present.

 Flat local maximum: It is a flat region in the landscape where all the neighbor states of the current state have the same value.

 Shoulder: It is a plateau region which has an uphill edge.
Steepest-Ascent Hill Climbing
This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. This algorithm consumes more time, as it searches multiple neighbors.

Step 1: Evaluate the initial state, if it is goal state then return success and stop, else
make current state as initial state.
Step 2: Loop until a solution is found or the current state does not change.
a) Let SUCC be a state such that any successor of the current state will be better
than it.
b) For each operator that applies to the current state:
I. Apply the new operator and generate a new state.
II. Evaluate the new state.
III. If it is goal state, then return it and quit, else compare it to the SUCC.
IV. If it is better than SUCC, then set new state as SUCC.
V. If the SUCC is better than the current state, then set current state to
SUCC.
Step 3: Exit.
Stochastic Hill Climbing

 Stochastic hill climbing does not examine all of its neighbours before moving.
 Rather, this search algorithm selects one neighbour node at random and decides whether to choose it as the current state or examine another state.
Issues in Hill Climbing

 Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighbouring states, but there is another state present which is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local maximum problem in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
--Continue
 Plateau: A plateau is a flat area of the search space in which all the neighbour states of the current state have the same value; because of this, the algorithm cannot find a best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for a plateau is to take bigger (or very small) steps while searching. Randomly select a state which is far away from the current state, so that it is possible for the algorithm to find a non-plateau region.
--Continue

Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas, but which itself has a slope and cannot be climbed in a single move.

Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Simulated Annealing
 A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum.
 And if the algorithm applies a random walk, by moving to a random successor, then it
may be complete but not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high
temperature and then cooling it gradually, which allows the material to settle into a
low-energy crystalline state.
 The same process is used in simulated annealing, in which the algorithm picks a
random move instead of picking the best move.
 If the random move improves the state, then it follows the same path. Otherwise,
the algorithm accepts the move only with a probability less than 1, i.e. it may move
downhill and then choose another path.
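A minimal sketch in Python. The slide does not fix the acceptance rule, so the standard choices are assumed here: downhill moves accepted with probability e^(ΔE/T), and a geometric cooling schedule.

```python
import math
import random

def simulated_annealing(start, neighbours, value, t0=10.0, cooling=0.95, steps=500):
    """Always accept uphill moves; accept downhill moves with probability
    e^(delta/T), which shrinks towards 0 as the temperature T cools."""
    current, t = start, t0
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate             # accept the random move
        t *= cooling                        # gradual (geometric) cooling
    return current

random.seed(1)
best = simulated_annealing(0, lambda x: [x - 1, x + 1],
                           lambda x: -(x - 5) ** 2)
```

At high temperature the algorithm behaves like a random walk; as T falls it behaves more and more like pure hill climbing.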
Local Beam Search
 In this algorithm, k states are held at any given time.
 At the start, these states are generated randomly.
 The successors of these k states are computed with the help of the objective function.
 If any of these successors attains the maximum value of the objective function, then the
algorithm stops.
 Otherwise, the initial k states and the successors of those states are placed together in
a pool.
 The pool is then sorted by objective value.
 The k highest-valued states are selected as the new initial states.
 This process continues until a maximum value is reached.
Local Beam Search Algorithm
start with k randomly generated states
loop
    generate all successors of all k states
    if any of the states = solution, then return the state
    else select the k best successors
end
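The loop above can be sketched in Python; the integer state representation and the `successors`, `value`, and `goal` callbacks are illustrative assumptions.

```python
import random

def local_beam_search(k, random_state, successors, value, goal, max_iters=100):
    """Each round, pool the current k states with all of their successors
    and keep the k best according to the objective function."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        for s in states:
            if goal(s):                     # a state reached the maximum
                return s
        pool = list(states)                 # initial k states ...
        for s in states:
            pool.extend(successors(s))      # ... plus all their successors
        states = sorted(pool, key=value, reverse=True)[:k]  # k best survive
    return max(states, key=value)

random.seed(2)
found = local_beam_search(
    k=3,
    random_state=lambda: random.randint(-10, 10),
    successors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 6) ** 2,
    goal=lambda x: x == 6,
)
# found == 6: the beam converges on the single peak of this landscape
```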
Genetic Algorithm
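A minimal generic sketch of a genetic algorithm. All concrete choices here are hypothetical: bit-string individuals, a toy OneMax fitness (count of 1-bits), rank-based parent selection, single-point crossover, and bit-flip mutation.

```python
import random

def genetic_algorithm(pop_size=20, length=12, generations=60, p_mut=0.05):
    """Evolve bit-strings toward the all-ones string (OneMax)."""
    fitness = lambda ind: sum(ind)                     # count of 1-bits
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # fitter half breeds
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]                 # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

random.seed(3)
best = genetic_algorithm()
```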
Lecture 5: Searching With Partial Observations

 If the environment is not fully observable or not deterministic, then the following
types of problems occur:

1. Sensorless problems

If the agent has no sensors, then it cannot know its current state, and hence it would
have to follow action sequences that ensure the goal state is reached regardless of its
initial state.
2. Contingency problems
This is when the environment is partially observable or when actions are uncertain.
Then after each action the agent needs to verify what effects that action has caused.
Rather than planning for every possible contingency after an action, it is usually better
to start acting and see which contingencies do arise.
 A problem is called adversarial if the uncertainty is caused by the actions of another
agent
Sensorless problems: Search in Space of Belief
States
 Beliefs are fully observable
 Belief states: every possible set of physical states; N physical states give 2^N belief
states
 Initial state: Typically the set of all physical states
 Actions: Either the union or intersection of the legal actions for the current belief
states
 Transition model: set of all possible states that could result from taking any of the
actions in any of the belief states
 Goal test: all states in current belief set are goal states

 Path cost: application-specific (it depends on the problem)

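The belief-state formulation above can be illustrated with a tiny sketch; the 4-cell corridor world and its deterministic transition function are hypothetical.

```python
def apply_action(belief, transition, action):
    """Sensorless update: the new belief is the set of states reachable by
    applying the action from EVERY state the agent might currently be in."""
    return frozenset(transition(s, action) for s in belief)

# Hypothetical world: a corridor of cells 0..3; Left/Right move one cell,
# and walls at the ends block further movement.
def corridor(state, action):
    return max(state - 1, 0) if action == "Left" else min(state + 1, 3)

belief = frozenset({0, 1, 2, 3})   # initial belief: the agent could be anywhere
for _ in range(3):
    belief = apply_action(belief, corridor, "Left")
# belief == frozenset({0}): after moving Left three times the agent is certain
# it is in cell 0, even though it sensed nothing along the way
```

The goal test of the formulation above ("all states in the current belief set are goal states") can now succeed, since the belief has shrunk to a single state.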

Lecture 6: Constraint Satisfaction Problem
Lecture 7: Game Playing

 Mini-Max Algorithm
 Alpha-Beta Pruning
Mini-Max Algorithm

 Mini-max algorithm is a recursive or backtracking algorithm which is used in


decision-making and game theory.
 It provides an optimal move for the player assuming that opponent is also playing
optimally.
 Mini-Max algorithm uses recursion to search through the game-tree.
 Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers,
Tic-Tac-Toe, Go, and various other two-player games.
Mini-Max Algorithm

 This Algorithm computes the minimax decision for the current state.
 In this algorithm two players play the game, one is called MAX and other is called
MIN.
 Both players compete so that the opponent receives the minimum benefit while they
themselves receive the maximum benefit.
 Both Players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
 The minimax algorithm performs a depth-first search algorithm for the exploration
of the complete game tree.
Mini-Max Algorithm (Pseudocode)
function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                     // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)            // maximum of the values
        return maxEva
    else                                         // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)            // minimum of the values
        return minEva
Contd…

 Complete- The Min-Max algorithm is complete: it will definitely find a
solution (if one exists) in a finite search tree.
 Optimal- Min-Max algorithm is optimal if both opponents are
playing optimally.
 Time complexity- As it performs DFS for the game-tree, so the time
complexity of Min-Max algorithm is O(b^m), where b is branching
factor of the game-tree, and m is the maximum depth of the tree.
 Space Complexity- Space complexity of Mini-max algorithm is also
similar to DFS which is O(bm).
Alpha Beta Pruning
 Alpha-beta pruning is a modified version of the mini-max algorithm.
 It is an optimization technique for the mini-max algorithm.
 As we have seen in the mini-max search algorithm, the number of
game states it has to examine is exponential in the depth of the tree.
 We cannot eliminate the exponent, but we can effectively cut it in half.
There is a technique by which, without checking each node of the game
tree, we can still compute the correct mini-max decision; this technique
is called pruning.
Alpha Beta Pruning
 It involves two threshold parameters, alpha and beta, used during future
expansion, so it is called alpha-beta pruning. It is also known as the
Alpha-Beta Algorithm.
 Alpha-beta pruning can be applied at any depth of a tree, and
sometimes it prunes not only tree leaves but entire sub-trees.
 The two-parameter can be defined as:
 Alpha: The best (highest-value) choice we have found so far at
any point along the path of Maximizer. The initial value of alpha is
-∞.
 Beta: The best (lowest-value) choice we have found so far at any
point along the path of Minimizer. The initial value of beta is +∞.
 Condition for Alpha-beta pruning: The main condition which
required for alpha-beta pruning is: α>=β
 Key points about alpha-beta pruning:
 The Max player will only update the value of alpha.
 The Min player will only update the value of beta.
 While backtracking the tree, the node values will be passed to upper
nodes instead of the values of alpha and beta.
 We will only pass the alpha and beta values down to the child nodes.
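These key points can be seen in a runnable sketch over a toy game tree (nested lists with numeric leaves; the helpers are illustrative assumptions). Note how MAX updates only alpha, MIN updates only beta, and a branch is cut off as soon as α >= β.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cut-offs: stop expanding once alpha >= beta."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for c in kids:
            value = max(value, alphabeta(c, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)       # only MAX updates alpha
            if alpha >= beta:
                break                       # prune the remaining children
        return value
    value = math.inf
    for c in kids:
        value = min(value, alphabeta(c, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)             # only MIN updates beta
        if alpha >= beta:
            break                           # prune the remaining children
    return value

tree = [[3, 5], [2, 9]]                     # MAX root, MIN children, leaf scores
children = lambda n: n if isinstance(n, list) else []
best = alphabeta(tree, 2, -math.inf, math.inf, True, children, lambda n: n)
# best == 3, and the leaf 9 is never evaluated: once MIN finds 2 in the right
# branch, beta (2) <= alpha (3), so the rest of that sub-tree is pruned
```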