
Artificial Intelligence (KCS-071)

4th year (Semester – VII)


Session – 2023 - 24
Unit – II
Lecture – 1
Manoj Mishra
Asst. Prof.
CSE Dept.
UCER, Prayagraj
State Space Search
• Formulating a problem as a state space search means specifying the legal problem states, the legal operators, and the initial and goal states.
• A state is defined by the values of all attributes of interest in the world.
• An operator changes one state into another; it has a precondition (the values certain attributes must have before the operator can be applied) and a set of effects (the attributes altered by the operator).
• The initial state is where the search starts.
• The goal state is a (possibly partial) description of the solution.

9/27/2023 A.I., KCS-071, Manoj Mishra 2


State Space Search: Contd..
• In state space search, a problem is typically
represented as a set of states, where each
state represents a specific configuration,
situation, or snapshot of the problem at a
particular point in time.
• These states are interconnected through
various actions or transitions, which define
how the system can move from one state to
another.



State Space Search: Contd..
• The primary objective of state space search is
to find a sequence of actions or transitions
that lead from an initial state to a goal state,
while satisfying certain constraints or criteria.
This sequence of actions represents the
solution to the problem or the path to the
desired outcome.



Key components and concepts in state space
search include:
• Initial State: The starting point of the problem-
solving process, representing the initial
configuration of the system.
• Goal State: The desired or target state that the
system is trying to reach. The search algorithm
aims to find a path from the initial state to the
goal state.
• Actions/Transitions: The set of permissible
actions or transitions that can be applied to move
from one state to another. These actions define
the problem's dynamics and constraints.
Key components and concepts in state
space search include: Contd..
• State Space: The entire set of possible states that
can be explored during the search process. It
includes both the initial state and all possible
states that can be reached from it.
• Search Algorithm: The method or strategy used
to explore the state space efficiently and find a
solution or a path to the goal state. Common
search algorithms include depth-first search,
breadth-first search, A* search, and more.
• Path: A sequence of actions or transitions that
lead from the initial state to a goal state. The
quality of the path may vary depending on the
specific problem-solving criteria.
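The components above can be made concrete with a tiny, hypothetical problem formulation. The two-location vacuum world below is an illustrative assumption (not an example from the slides): states are tuples, and the actions define the transitions.

```python
# A minimal sketch of a state-space problem formulation: a hypothetical
# two-location vacuum world. A state is (robot_location, dirt_at_A, dirt_at_B).
INITIAL_STATE = ("A", True, True)

def is_goal(state):
    """Goal test: both locations are clean."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

def successors(state):
    """Yield (action, next_state) pairs -- the legal transitions."""
    loc, dirt_a, dirt_b = state
    yield ("Right", ("B", dirt_a, dirt_b))   # move to location B
    yield ("Left", ("A", dirt_a, dirt_b))    # move to location A
    if loc == "A":
        yield ("Suck", ("A", False, dirt_b)) # clean the current location
    else:
        yield ("Suck", (loc, dirt_a, False))

# The state space is every state reachable from INITIAL_STATE
# via these transitions; a path of actions to a goal state is a solution.
```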
Classification of State Space Search
• There are two classes of state space search.
They are:
– Uninformed or Blind search
– Informed or Heuristic search



Uninformed Search Strategies
• Uninformed search strategies, also called blind search, are methods that have no additional information about states beyond that provided in the problem definition.
• In other words, a blind or uninformed search algorithm
is one which uses no information other than the initial
state, the search operators and a test for a solution.
• A blind search proceeds in a systematic way by
exploring nodes in a predetermined order or by
selecting nodes at random.
• Uninformed search strategies generate successors and distinguish a goal state from a non-goal state.



Uninformed Search Strategies
• The common methods of uninformed search
are:
– Breadth-first search (BFS)
– Uniform-cost search (UCS)
– Depth-first search (DFS)
– Depth-limited search (DLS)
– Iterative deepening depth-first search (IDDFS)



Breadth-first search (BFS)
• Breadth-first search is the simplest form of uninformed
search.
• In this type of search the root node is expanded first,
then all the successors of the root node are expanded
next, then their successors and so on.
• In general, all the nodes are expanded at a given depth
in the search tree before any nodes at the next level
are expanded.
• Thus, although the search may be an extremely long
one, it is guaranteed eventually to find the shortest
possible solution sequence, if any solution exists.



Breadth-first search (BFS)
• Following points are to be noted for breadth-first
search method:
– It is a simple and systematic search strategy, as it considers all nodes at level 1, then all nodes at level 2, and so on.
– If any solution exists, breadth-first search is guaranteed to find it.
– If there are several solutions, breadth-first search will always find the shallowest goal state first. If the cost of a solution is a non-decreasing function of depth, it will always find the cheapest solution.
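The method can be sketched with a FIFO queue of paths. The adjacency dict below is an illustrative assumption, not a graph from the slides.

```python
from collections import deque

# A minimal breadth-first search sketch over an adjacency-dict graph.
def bfs(graph, start, goal):
    """Return the shallowest path from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # expand shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
```

Here `bfs(graph, "A", "E")` returns the shallowest path `["A", "B", "D", "E"]`, illustrating that all nodes at one depth are expanded before any at the next.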





Advantages of BFS
• BFS will not get trapped exploring a blind alley.
• If any solution exists, this method is
guaranteed to find it.



Disadvantages of BFS
• The amount of time needed to generate all the nodes is considerable.
• The searching process remembers all the nodes generated so far, many of which are of no practical use for the search.



Uniform-Cost Search
• The breadth-first algorithm can be generalized slightly
to solve the problem of finding the cheapest path from
the start state to a goal state.
• A non-negative cost is associated with every arc joining
two nodes; the cost of a solution path is then the sum
of the arc costs along the path.
• The generalized algorithm is called a uniform-cost
search.
• If all arcs have equal cost, the algorithm reduces to
breadth-first search.
• Uniform-cost search does not care about the number
of steps in a path but only about their total cost.
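The generalization can be sketched with a priority queue ordered by the path cost g(n). The weighted graph below is an illustrative assumption.

```python
import heapq

# A uniform-cost search sketch; arc costs must be non-negative.
def uniform_cost_search(graph, start, goal):
    """Return (cost, path) for the cheapest path, or None."""
    frontier = [(0, start, [start])]     # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path               # cheapest path to the goal found
        if g > best_g.get(node, float("inf")):
            continue                     # stale queue entry, skip it
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)]}
```

Note that the cheapest path found, S⇢A⇢B⇢G with cost 4, has more steps than the direct alternatives; only total cost matters. With all arc costs equal to 1 this reduces to breadth-first search.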



Unit – II, Lecture – 2
Depth-first search (DFS)
• The depth-first search is characterized by the
expansion of the most recently generated, or deepest
node, first.
• Following points are to be noted for depth-first search
method:
– It has a modest memory requirement. It only needs to
store the path from the root to the leaf node as well as the
unexpanded nodes.
– Depth-first search is neither complete nor optimal. If
depth-first search goes down an infinite branch, it will not
terminate unless a goal state is found. Even if a solution is
found, there may be a better solution at a higher level in
the tree.
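The expand-the-deepest-node-first behaviour can be sketched with a LIFO stack of paths. The graph below is an illustrative assumption.

```python
# A minimal depth-first search sketch over an adjacency-dict graph.
def dfs(graph, start, goal):
    """Return the first path found to goal (not necessarily the shortest), or None."""
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # most recently generated node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                stack.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"]}
```

On this graph `dfs(graph, "A", "D")` returns the deeper path A⇢C⇢E⇢D even though the shallower path A⇢B⇢D exists, illustrating why depth-first search is not optimal.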





Advantages of DFS
• DFS requires less memory since the only
nodes on the current path are stored.
• It is by chance that we may find a solution
without examining much of the search space.



Disadvantages of DFS
• This type of search can go on and on, deeper
and deeper into the search space and we can
get lost (blind alley).



Comparison between BFS and DFS
1. BFS: No blind alley exists.
   DFS: Blind alleys exist.
2. BFS: If a solution exists, it will be found.
   DFS: Even if a solution exists, it explores only one branch and may declare failure; the chance of success becomes still less when loops exist.
3. BFS: May find many solutions; if many solutions exist, the minimal solution can be found.
   DFS: Stops after one solution is found; the minimal solution may not be found, as only one branch is explored.
4. BFS: Requires more memory, because all the offspring at level n must be explored before a solution at level (n+1) is examined.
   DFS: Requires less memory, because only one branch is stored; it stops after a solution is found.



Depth-Limited Search
• The depth-first search has an inclination to search deeper and deeper into the state space until a success occurs (that is, a solution is found) or a blind alley is encountered.
• In order to prevent the algorithm from blindly searching too deep into the state space, which may prove to be a rather useless venture, a depth cutoff d is usually specified and a backtrack is enforced whenever depth d is reached.



Depth-Limited Search
• It is clear that if d is too low the algorithm
terminates without finding a solution.
• On the other hand, if d is too high the algorithm
terminates as soon as it finds some solution, but
in this case it might expand too many nodes.
• Unfortunately, it is not always possible to predict d correctly. This limits the practical usability of the technique.
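The cutoff idea can be sketched as a recursive DFS with a depth bound; the graph here is an illustrative assumption. Returning a distinct "cutoff" value distinguishes "no solution within depth d" from "no solution at all".

```python
# A depth-limited search sketch: DFS truncated at depth limit d.
def depth_limited_search(graph, node, goal, limit, path=None):
    """Return a path, "cutoff" if the limit truncated the search, or None."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                  # depth bound d reached: backtrack
    cutoff_occurred = False
    for nbr in graph.get(node, []):
        if nbr in path:                  # avoid cycles along the current path
            continue
        result = depth_limited_search(graph, nbr, goal, limit - 1, path + [nbr])
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

graph = {"A": ["B"], "B": ["C"], "C": ["D"]}
```

With the goal at depth 3, a limit of 2 returns "cutoff" (d too low), while a limit of 3 finds the solution.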



Depth-First Iterative Deepening
(DFID)
• The problem with depth-first search is that the search
can go down an infinite branch and thus never return.
• This problem can be avoided by imposing a depth-limit
which terminates the search at that depth.
• However, the next question is to decide a suitable
depth-limit.
• To overcome this problem there is another search
method called iterative deepening.
• Iterative deepening search is a general strategy often
used in combination with depth-first search that finds
the best depth limit.
Depth-First Iterative Deepening
(DFID)
• This method enjoys the memory requirements of
depth-first search while guaranteeing that a goal node
of minimal depth will be found (if a goal exists).
• This search method simply tries all possible depth
limits; first 0, then 1, then 2, etc. until a solution is
found.
• In iterative deepening, successive depth-first searches
are conducted – each with depth bounds increasing by
1 – until a goal node is found.
• Hence, iterative deepening combines the benefits of
depth-first and breadth-first search.



Depth-First Iterative Deepening
(DFID)
• Like depth-first search, its memory requirements are
very modest.
• Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a non-decreasing function of the depth of the node.
• Iterative deepening search is analogous to breadth-first
search in that it explores a complete layer of new
nodes at each iteration before going to the next layer.
• Note:
– In general, iterative deepening is the preferred uninformed
search method when there is a large search space and the
depth of the solution is not known.
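The strategy can be sketched as repeated depth-limited searches with limits 0, 1, 2, and so on; the graph below is an illustrative assumption.

```python
# An iterative deepening sketch: repeated depth-limited DFS.
def dls(graph, node, goal, limit, path):
    """Depth-limited DFS helper; returns a path or None."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for nbr in graph.get(node, []):
        if nbr not in path:              # avoid cycles along this path
            found = dls(graph, nbr, goal, limit - 1, path + [nbr])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Try limits 0, 1, 2, ...: finds a minimal-depth goal with DFS-like memory."""
    for limit in range(max_depth + 1):
        found = dls(graph, start, goal, limit, [start])
        if found:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["G"], "E": ["G"]}
```

Each iteration repeats the shallower levels, but since most nodes in a tree lie at the deepest level, the repeated work is modest in practice.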
Unit – II, Lecture – 3
Bi-directional Search
• The idea behind bi-directional search is to run two
simultaneous searches – one forward from the initial
state and the other backward from the goal, stopping
when two searches meet in the middle.
• A search algorithm has to be selected for each half.
• Bi-directional search is implemented by having one or
both of the searches check each node before it is
expanded to see if it is in the fringe of the other search
tree; if so, a solution has been found.
• The most difficult case for bidirectional search is when the goal test gives only an implicit description of some possibly large set of goal states.



Example
• Consider a simple graph representing a
network of cities connected by roads. You
want to find the shortest path from City A to
City D using a bi-directional search.



Solution
• A, B, C, D, E, and S represent cities.
• The goal is to find the shortest path from City A to City D.
• Bi-directional Search:
• Initialize two search queues: one starting from
City A (forward search) and the other starting
from City D (backward search).
• Forward Queue: [A]
• Backward Queue: [D]



• Perform the search simultaneously from both directions
until they meet in the middle or one of them finds the goal.
– Forward Search:
• Expand City A. The possible neighbors are B and S. Add them to the
forward queue.
• Forward Queue: [B, S]
– Backward Search:
• Expand City D. The possible neighbors are E and C. Add them to the
backward queue.
• Backward Queue: [E, C]
– Forward Search:
• Expand B. The possible neighbors are A (already visited) and D (already found in the backward search). Goal found!
• Forward Queue: [S]
• Backward Queue: [E, C]
• Combine the paths from both directions to form the final
path. In this case, the path is A -> B -> D.
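The example above can be sketched as two alternating breadth-first searches that stop when their frontiers meet. The adjacency dict below encodes the road network as suggested by the worked example; the exact edge set is an assumption for illustration.

```python
from collections import deque

# A bidirectional BFS sketch on an undirected graph.
def bidirectional_search(graph, start, goal):
    """Alternate BFS from both ends; stop when the two searches meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}               # node -> path from start
    bwd = {goal: [goal]}                 # node -> path from goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        for queue, this, other, forward in ((fq, fwd, bwd, True),
                                            (bq, bwd, fwd, False)):
            if not queue:
                continue
            node = queue.popleft()
            for nbr in graph.get(node, []):
                if nbr in other:         # frontiers meet: join the two halves
                    if forward:
                        return this[node] + other[nbr][::-1]
                    return other[nbr] + this[node][::-1]
                if nbr not in this:
                    this[nbr] = this[node] + [nbr]
                    queue.append(nbr)
    return None

graph = {"A": ["B", "S"], "B": ["A", "D"], "S": ["A", "E"],
         "D": ["B", "E", "C"], "E": ["S", "D"], "C": ["D"]}
```

On this network the two frontiers meet at B, and the combined path is A -> B -> D, matching the worked example.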



Informed Search Strategies
• In blind search, the number of nodes expanded before reaching a solution may become prohibitively large for non-trivial problems.
• Because the order of expanding the nodes is purely arbitrary and does not use any properties of the problem being solved, too many nodes may be expanded.
• In such cases, we may run out of time or space, or both, even for simple problems.
Informed Search Strategies
• Information about the particular problem domain
can often be invoked to reduce the search.
• The techniques for doing so usually require
additional information about the properties of
the specific domain which is built into the state
and operator definitions.
• Information of this sort is called heuristic information, and the search methods using it are called heuristic search or informed search.
Informed Search Strategies
• A heuristic could be any piece of information, say a simple number representing how good or bad a path is likely to be.
• Heuristic search methods help in deciding:
– which node to expand next, instead of a depth-first or breadth-first type of expansion.
– which successor(s) of the node to generate, instead of blindly generating all possible successors of that node, thus pruning the search tree.
Heuristic Functions
• Heuristic means “rule of thumb”.
• Heuristics are criteria, methods or principles
for deciding which among several alternative
courses of action promises to be the most
effective in order to achieve some goal.
• In heuristic search or informed search,
heuristics are used to identify the most
promising search path.



Heuristic Functions
• A heuristic function is a function that maps problem state descriptions to measures of desirability, usually represented as numbers.
• The value of a heuristic function at a given node in the search process gives an estimate of whether that node is on the desired path to the solution.
• The heuristic function at a node ‘n’ is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).
– i.e., h(n) = estimated cost of the cheapest path from node n to a goal node.



Best-First Search
• Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, f(n).
• Traditionally, the node with the lowest evaluation is selected for expansion, because the evaluation measures distance to the goal.
• The algorithm maintains a priority queue of
nodes to be explored.



Greedy Best-First Search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that it is likely to lead to a solution quickly.
• Thus, it evaluates nodes by using just the
heuristic function: f(n) = h(n).
• The resulting algorithm is not optimal.
• The algorithm is also incomplete, and it may
fail to find a solution even if one exists.
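The idea can be sketched as a best-first search whose priority queue is ordered by h(n) alone. The graph and heuristic values below are illustrative assumptions.

```python
import heapq

# A greedy best-first search sketch: f(n) = h(n) only.
def greedy_best_first(graph, h, start, goal):
    """Always expand the node that appears closest to the goal."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 1, "B": 3, "G": 0}
```

Note that path cost is ignored entirely; on an unfavourable graph this can return a much more expensive path than the optimum, which is why the algorithm is not optimal.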
A* Search Algorithm
• A* algorithm is a best-first search algorithm in which
the cost associated with a node is
f(n) = g(n) + h(n)
where g(n) is the cost of path from the initial state to node n
and h(n) is the heuristic estimate of the cost of a path from
node n to a goal.
• Since g(n) gives the path cost from the start node to
node ‘n’ and h(n) is the estimated cost of the cheapest
path from ‘n’ to the goal, we have,
f(n) = estimated cost of the cheapest solution through n



A* Search Algorithm
• The following are some of the features of A* search
algorithm:
– At each point, the node with the lowest ‘f’ value is chosen for expansion. Ties among nodes of equal ‘f’ value should be broken in favour of nodes with lower ‘h’ values.
– The algorithm terminates when a goal node is chosen for expansion.
– A* finds an optimal path to a goal if the heuristic function h(n) is admissible, meaning it never overestimates the actual cost.
– Among algorithms guaranteed to find an optimal solution, A* expands the fewest nodes.
– The main drawback of A*, and indeed of any best-first search, is its memory requirement. Since at least the entire open list must be saved, A* is severely space-limited.



Solution Steps of A* Algo.
1. Initialize the open list with the 'S' cell and its associated cost (g-value) and heuristic (h-value).
2. While the open list is not empty:
• Choose the cell with the lowest f-value (f(n) = g(n) + h(n)) from the open list.
• Expand this cell and calculate the f-values of its neighbours.
• Add a neighbour to the open list if it is not blocked and this path gives it a lower f-value.
3. Repeat the above steps until the 'G' cell is reached.
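The steps above can be sketched over a general weighted graph instead of a grid; the graph and the (admissible) heuristic values below are illustrative assumptions. The open list is a priority queue ordered by f = g + h.

```python
import heapq

# An A* search sketch with an admissible heuristic.
def a_star(graph, h, start, goal):
    """Return (cost, path); always expands the node with the lowest f = g + h."""
    frontier = [(h[start], 0, start, [start])]   # open list: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path               # goal chosen for expansion: done
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier,
                               (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}     # never overestimates the true cost
```

Because h is admissible here, the first time the goal is chosen for expansion the returned cost (6, via S⇢A⇢B⇢G) is optimal.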



AO* algorithm
• The AO* algorithm performs best-first search on an AND-OR graph. It divides a given difficult problem into a smaller group of problems that are then solved, using the AND-OR graph concept. AND-OR graphs are specialized graphs used for problems that can be divided into smaller subproblems. An AND arc represents a set of tasks that must all be completed to achieve the main goal, while OR arcs represent alternative methods of accomplishing the same goal.
AO* Contd..
• The figure shows a simple AND-OR graph in which buying a car is broken down into smaller problems or tasks that can be accomplished to achieve the main goal.
• One alternative is to steal a car; the other is to earn money and use it to purchase a car. The AND symbol marks the AND part of the graph: all subproblems joined by an AND arc must be solved before the parent node is considered solved.



AO* Contd..
• In the knowledge-based search strategy known as the AO* algorithm, the start state and the target state are already known, and the best path is identified using heuristics. The informed search technique considerably reduces the algorithm’s time complexity. The AO* algorithm is far more effective at searching AND-OR trees than the A* algorithm.



Working of AO* algorithm:

• The evaluation function in AO* looks like this:

f(n) = g(n) + h(n)
f(n) = actual cost so far + estimated cost to go

here,
f(n) = the estimated total cost of the cheapest solution through n.
g(n) = the actual cost from the initial node to the current node.
h(n) = the estimated cost from the current node to the goal state.



Example:
In the example below, the value given at each node is its heuristic value, i.e. h(n). Each edge has length 1.



Solution: Step 1



Solution Step 1: Contd..
• Start from node A:
f(A⇢B) = g(B) + h(B) = 1 + 5 = 6 (here g = 1 is the edge cost)
f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 4 = 8 (C and D are added together because they are joined by an AND arc)
• So the minimum-cost path, f(A⇢B), is chosen.


Solution Step 2



Solution Step 2: Contd..
• Following the answer of step 1, explore node B. The values of E and F are calculated as follows:
f(B⇢E) = g(E) + h(E) = 1 + 7 = 8
f(B⇢F) = g(F) + h(F) = 1 + 9 = 10
• The minimum, f(B⇢E) = 8, differs from B's heuristic value (5), so the heuristic is updated: h(B) = 8.
• Because B's heuristic changed, the value of A must be recalculated:
f(A⇢B) = g(B) + updated h(B) = 1 + 8 = 9
All values in the tree are updated accordingly.



Solution Step 3



Solution Step 3: Contd..
• Comparing f(A⇢B) = 9 and f(A⇢C+D) = 8, f(A⇢C+D) is smaller (8 < 9), so explore it. The current node is C:
f(C⇢G) = g(G) + h(G) = 1 + 3 = 4
f(C⇢H+I) = g(H) + h(H) + g(I) + h(I) = 1 + 0 + 1 + 0 = 2 (H and I are added because they are joined by an AND arc)
• f(C⇢H+I) is selected as the lowest-cost path, and C's heuristic is left unchanged because it matches the actual cost. Paths H and I are solved because their heuristics are 0. Path A⇢D must also be evaluated, because D is part of the AND arc:
f(D⇢J) = g(J) + h(J) = 1 + 0 = 1, so the heuristic of node D is updated to 1.
• f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 1 = 5
Step3 Contd..
• The path f(A⇢C+D) is now solved, and the tree has become a solved tree. In simple words, the main flow of this algorithm is to compute the heuristic values level by level (first level 1, then level 2, and so on) and then propagate the updated values upward towards the root node.
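The cost revision carried out in steps 1–3 above can be sketched as a recursive backed-up cost computation. The h-values and AND/OR successor groups below are taken from the worked example (edge cost 1 per arc); note this sketch covers only the value-revision part of AO*, not its incremental expansion and solved-marking.

```python
# Heuristic values from the worked example.
h = {"A": 7, "B": 5, "C": 2, "D": 4, "E": 7, "F": 9,
     "G": 3, "H": 0, "I": 0, "J": 0}

# Successor groups: each entry is a list of alternatives (OR choices);
# each alternative is a tuple of nodes that must ALL be solved (AND arc).
succ = {"A": [("B",), ("C", "D")], "B": [("E",), ("F",)],
        "C": [("G",), ("H", "I")], "D": [("J",)]}

def revised_cost(node):
    """Backed-up cost: min over alternatives of sum of (edge cost 1 + child cost)."""
    if node not in succ:                 # leaf: its cost is its heuristic value
        return h[node]
    return min(sum(1 + revised_cost(child) for child in group)
               for group in succ[node])
```

Evaluating `revised_cost("A")` reproduces the example's final values: B backs up to 8, C to 2, D to 1, and A to 5 via the AND arc C+D, matching f(A⇢C+D) = 5.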



Difference between the A* Algorithm and AO*
algorithm
• The A* algorithm and the AO* algorithm both work on the best-first search principle.
• Both are informed searches and work on given heuristic values.
• A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
• Once AO* finds a solution, it does not explore all possible paths, whereas A* keeps exploring paths that might be cheaper.
• When compared to the A* algorithm, the AO* algorithm uses less memory.
• Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.



Unit – II, Lecture – 4
Local Search Algorithms
• Local search algorithms operate using a single
current state, rather than multiple paths, and
generally move only to neighbors of that state.
• Local search methods work on complete state
formulations.
• They keep only a small number of nodes in
memory.
• Typically, the paths followed by the search are
not retained.
Local Search Algorithms
• Although local search algorithms are not
systematic, they have two key advantages:
– They use very little memory – usually a constant
amount, and
– They can often find reasonable solutions in large or
infinite (continuous) state spaces for which systematic
algorithms are unsuitable.
• In addition to finding goals, local search
algorithms are useful for solving pure
optimization problems in which the aim is to find
the best state according to an objective function.



Local Search Algorithms
• One way to visualize an iterative improvement algorithm is to imagine every possible state laid out on a landscape, with the height of each state corresponding to its goodness.
• Optimal solutions will appear as the highest
points.
• Iterative improvement works by moving
around on the landscape seeking out the
peaks by looking only at the local vicinity.



Local Search Algorithms
• To understand local search, we will find it very useful to
consider the state space landscape as shown in the figure.
• In the figure,
– A landscape has both “location” (defined by the state) and
“elevation” (defined by the value of the heuristic cost function
or objective function).
– If elevation corresponds to cost, then the aim is to find the
lowest valley – a global minimum.
– If elevation corresponds to an objective function, then the aim is
to find the highest peak – a global maximum.
– A complete search algorithm always finds a goal if one exists.
– An optimal algorithm always finds a global minimum /
maximum.



Hill Climbing Search
• It is simply a loop that continually moves in
the direction of increasing value – that is,
uphill.
• It terminates when it reaches a “peak” where
no neighbor has a higher value.
• In other words, hill climbing iteratively maximizes the “value” of the current state by replacing it with the successor state that has the highest value, for as long as possible.



Hill Climbing Search
• The algorithm does not maintain a search
tree, so the current node data structure need
only record the state and its objective function
value.
• Hill climbing does not look ahead beyond the
immediate neighbors of the current state.
• In simple terms, no search tree is maintained
in this algorithm.



Hill Climbing Search
• Algorithm:
– Step 1: Determine successors of current state.
– Step 2: Choose successor of maximum goodness
(break ties randomly).
– Step 3: If goodness of best successor is less than
current state’s goodness, stop.
– Step 4: Otherwise make best successor the
current state and go to step 1.
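The four steps above can be sketched as a simple loop. The objective function here (maximizing -(x - 3)^2 over the integers) and the neighbour generator are illustrative assumptions; for simplicity this sketch breaks ties by taking the first best successor rather than randomly.

```python
# A hill-climbing sketch following the four steps above.
def hill_climb(state, neighbors, value):
    """Climb until no successor improves on the current state."""
    while True:
        succs = neighbors(state)                  # step 1: successors
        if not succs:
            return state
        best = max(succs, key=value)              # step 2: best successor
        if value(best) <= value(state):           # step 3: no improvement
            return state                          # a peak (possibly local)
        state = best                              # step 4: move and repeat

# Illustrative objective: f(x) = -(x - 3)**2, maximized at x = 3.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

Starting from any integer, the loop walks uphill one step at a time and halts at x = 3, the (here global) maximum; on a function with several peaks it would halt at whichever local maximum it reaches first.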



Hill Climbing Search
• Hill climbing is sometimes called greedy local
search because it grabs a good neighbor state
without thinking ahead about where to go
next.
• Hill climbing often makes very rapid progress
towards a solution, because it is usually quite
easy to improve a bad state.



Hill Climbing Search
• Unfortunately, hill climbing often gets stuck for
the following reasons:
– Local maxima: A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Here, once the top of such a hill is reached, the algorithm halts, since every possible step leads down.
– Ridges: Ridges result in a sequence of local maxima
that is very difficult for greedy algorithms to navigate.
If the landscape contains ridges, local improvements
may follow a zigzag path up the ridge, slowing down
the search.



Hill Climbing Search
– Plateaux: A plateau is an area of the state space
landscape where the evaluation function is flat. It
can be a flat local maximum, from which no uphill
exit exists, or a shoulder, from which it is possible
to make progress.
• Note:
– If the landscape is flat, meaning many states have the same
goodness, algorithm degenerates to a random walk.
– Shape of state space landscape strongly influences the success
of the search process. A very spiky surface which is flat in
between the spikes will be very difficult to solve.



Hill Climbing Search
• Hill climbing is named after situations where the biggest number is the best (a benefit rather than a cost).
• If the algorithm constantly follows the direction that gives the fastest rate of increase, it is called steepest-ascent hill climbing.
• If, instead, the lowest cost is the best, the analogous procedure is a downhill algorithm.
• If the algorithm is prepared to back up and try a different path when the steepest one leads to a dead end, the method is called steepest ascent with backtracking.



Unit – II, Lecture – 5
Comparison of search methods
• Each search procedure discussed till now has an
advantage as discussed below:
– Non-deterministic (blind) search is good when we are not sure whether depth-first or breadth-first would be better.
– Depth-first is good when unproductive partial paths are never too long, or if we always prefer to work from the state we generated last.
– Breadth-first is good when the branching factor is never too large, or if we prefer states that were generated earlier.
– Hill climbing is good when we prefer any state that looks better than the current state, according to some heuristic function.
Comparison of search methods
– Both simple hill climbing and steepest ascent hill
climbing may fail to find a solution. Either algorithm
may terminate by not finding a goal state but by
getting to a state from which no better states can be
generated.
– Best-first search is good when there is a natural measure of goal distance and a good partial path may look like a bad option before more promising partial paths are played out; in other words, when state-preference rules favour the state with the highest heuristic score. Best-first search often finds the optimal solution, but not always.
– A* search is both complete and optimal. It finds the
cheapest solution.
Adversarial Search
• Game playing is a search technique and is often called
adversarial search.
• Competitive environments, in which the agent’s goals
are in conflict, give rise to adversarial search.
• While we are doing our best to find a solution, our opponent (i.e., the adversary) is trying to beat us. Hence, the opponent not only introduces uncertainty but also wants to win.
• Therefore, adversarial search technique is a mixture of
reasoning and creativity and needs the best of human
intelligence.



Adversarial Search
• Game playing can be formally defined as a
kind of search problem with the following
components:
– Initial state of the game
– Operators defining legal moves
– Successor function
– Terminal test defining end of game state
– Goal test
– Path cost/utility/pay-off function



Adversarial Search
• In AI, games are usually of a rather specialized kind –
what game theorists call deterministic, turn-taking,
two-player, zero-sum games of perfect information.
• In simple language, this means that the game is
deterministic and has fully observable environments in
which there are two agents whose actions must
alternate and in which utility values at the end of the
game are always equal and opposite.
• Games like chess and checkers are perfect-information deterministic games, whereas games like Scrabble and bridge are imperfect-information games.



Game Trees
• The above category of games can be represented as a tree where the nodes represent the current state of the game and the arcs represent the moves.
• The game tree consists of all possible moves for the current player starting at the root, all possible moves for the next player as the children of these nodes, and so forth, as far into the future of the game as desired.
• Each individual move by one player is called a "ply".
Game Trees
• The leaves of the game tree represent terminal positions, i.e., positions where the outcome of the game is clear (a win, a loss, a draw, a pay-off).
• Each terminal position has a score.
• High scores are good for one of the players, called the MAX player.
• The other player, called the MIN player, tries to minimize the score.
• For example, we may associate 1 with a win, 0 with a draw, and -1 with a loss for MAX.
Minimax Procedure
• The heuristic search techniques are not
directly applicable to the search for a winning
strategy in board games.
• The basic difficulty is that there is not a single
searcher for the path from source to goal
state.
• Rather, games involve two players determined
to defeat each other. This fundamentally alters
the nature of the search to be conducted.
Minimax Procedure
• The minimax search procedure is a depth-first, depth-
limited search procedure.
• The procedure is based on the idea that we start at the
current position and use the plausible-move generator to
generate the set of possible successor positions.
• Then we apply the static evaluation function to those
positions and simply choose the best one.
• In other words, the plausible-move generator generates the
necessary states for further evaluation and the static
evaluation function ranks each of the positions.
• After this is done, we can back that value up to the starting
position to represent our evaluation of it.
Minimax Procedure
• Hence we have two heuristic rules, which state:
– When A has the option to move, he will always choose the alternative that leads to the MAXIMUM benefit to him; that is, he will move to the subsequent state with the largest value of fj.
– When B has the option to move, he will always choose the alternative that leads to the MINIMUM benefit for player A; that is, he will move to the subsequent state with the smallest value of fj.
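The two rules amount to taking an argmax (for A) or an argmin (for B) over the successor scores fj. A minimal sketch, with an assumed function name:

```python
def choose_move(successor_scores, maximizing):
    """Return the index j of the successor to move to, given the scores f_j.
    A (maximizing) picks the largest f_j; B (minimizing) picks the smallest."""
    pick = max if maximizing else min
    return pick(range(len(successor_scores)), key=successor_scores.__getitem__)
```

For example, with successor scores [3, 5, 2], player A moves to the second successor (score 5) and player B to the third (score 2).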
Minimax Procedure
• This process is repeated for as many plies as time permits.
• The more accurate evaluations that are produced can be used to choose the correct move at the top level.
• The alternation of maximizing and minimizing at alternate plies, as evaluations are pushed back up, corresponds to the opposing strategies of the two players. Hence this method is called minimax.
Minimax Procedure
• Features of minimax procedure:
– It assumes a perfect opponent; that is, player B never makes a bad move from his own point of view.
– The larger the number of look-ahead ply, the more
information can be brought to bear in evaluating the
current move.
• In the extreme case of a very simple game in
which an exhaustive search is possible, all nodes
are evaluated, and the win/loss/draw information
can be propagated back up the tree to indicate
precisely the optimum move for player A.
Minimax Procedure
• Limitations of the minimax procedure:
– Its effectiveness is limited by the depth of the game tree, which is itself limited by the time needed to construct and analyze the tree (this time increases exponentially with depth).
– There is another risk in the minimax heuristic: the value the static evaluation function assigns to a position may differ from the position's actual value, and such a mismatch can misdirect the decision of how far to expand the game tree.
Example:
• Consider a game which has 4 final states, where the paths to the final states run from the root to the 4 leaves of a perfect binary tree, as shown below. Assume you are the maximizing player and you get the first chance to move, i.e., you are at the root and your opponent is at the next level. Which move would you make as the maximizing player, considering that your opponent also plays optimally?
[Figure: a two-ply game tree. The maximizer is at the root; its LEFT child (a minimizer node) has the leaves 3 and 5, and its RIGHT child has the leaves 2 and 9.]
Solution
• Since this is a backtracking-based algorithm, it tries all possible moves, then backtracks and makes a decision.
• Maximizer goes LEFT: it is now the minimizer's turn. The minimizer has a choice between 3 and 5. Being the minimizer, it will definitely choose the least of the two, that is, 3.
• Maximizer goes RIGHT: it is now the minimizer's turn. The minimizer has a choice between 2 and 9, and will choose 2, as it is the least of the two values.
• Being the maximizer, you would choose the larger value, that is, 3. Hence the optimal move for the maximizer is to go LEFT, and the optimal value is 3.
• The game tree now looks as shown below:
Solution: Contd..
The above tree shows the two possible scores when the maximizer makes the left and right moves.
Note: Even though there is a value of 9 in the right subtree, the minimizer will never pick it. We must always assume that our opponent plays optimally.
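The backtracking walk through this example can be replayed with a short depth-limited minimax routine. This is a sketch under the assumption that the perfect binary tree is stored as its list of leaf scores; the function and parameter names are my own:

```python
def minimax(depth, node_index, maximizing, leaves, max_depth):
    """Depth-limited minimax over a perfect binary tree given as a leaf list."""
    if depth == max_depth:
        return leaves[node_index]        # terminal position: return its score
    # Recurse into the two children, alternating MAX and MIN at each ply.
    left = minimax(depth + 1, 2 * node_index, not maximizing, leaves, max_depth)
    right = minimax(depth + 1, 2 * node_index + 1, not maximizing, leaves, max_depth)
    return max(left, right) if maximizing else min(left, right)

# The example tree: maximizer at the root, minimizer one ply below, leaves 3, 5, 2, 9.
print(minimax(0, 0, True, [3, 5, 2, 9], 2))   # → 3, the optimal value found above
```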
Artificial Intelligence (KCS-071)
4th year (Semester – VII)
Session – 2023 - 24
Unit – II
Lecture – 6
Manoj Mishra
Asst. Prof.
CSE Dept.
UCER, Prayagraj
Alpha-Beta Pruning
• Alpha-beta pruning is a method that reduces the number of nodes explored by the minimax strategy.
• It reduces the time required for the search: the search is restricted so that no time is wasted examining moves that are obviously bad for the current player.
• The exact implementation of alpha-beta keeps track of the best move for each side as it moves through the tree.
Alpha-Beta Pruning
• In this process, the minimax procedure works under two bounds, one for each player.
• It requires the maintenance of two threshold values: one representing a lower bound on the value that a maximizing node may ultimately be assigned (call it alpha), and another representing an upper bound on the value that a minimizing node may be assigned (call it beta).
• Alpha-beta pruning is thus a form of the branch-and-bound technique.
Alpha-Beta Pruning
• Alpha is defined as the lower limit (bound) on the value of the maximizer; accordingly, the value of alpha will be the largest of the minimum values generated by expanding successive B (minimizer) nodes at a given level (ply).
• Once a successor of any B node at that level is evaluated as less than alpha, that node and all its successors may be pruned.
Alpha-Beta Pruning
• Alpha-beta pruning can be applied to trees of any depth, and it is often possible to prune entire subtrees rather than just leaves.
• The general principle is this:
– "Consider a node 'n' somewhere in the tree, such that the player has a choice of moving to that node. If the player has a better choice 'm', either at the parent node of 'n' or at any choice point further up, then 'n' will never be reached in actual play. So once we have found out enough about 'n', by examining some of its descendants, to reach this conclusion, we can prune it."
Alpha-Beta Pruning
• Illustration of alpha-beta pruning (or cut-off):
– The two golden rules that are used in the alpha-beta cut-off strategy are:
• R1: for maximizers, if the static evaluation function value found at any node is less than the alpha value, reject it.
• R2: for minimizers, if the static evaluation function value found at any node is greater than the beta value, reject it.
Alpha-Beta Pruning
• Effectiveness of the alpha-beta algorithm:
– The effectiveness of the alpha-beta procedure depends greatly on the order in which the paths are examined.
– If the worst paths are examined first, then no cut-offs will occur at all.
– Conversely, if the best path were known in advance, so that we could examine it first, the search process would not be needed at all.
– Still, knowing how effective the pruning technique is in the perfect case gives us an upper bound on its performance in other situations.
Example
[Figure: a three-ply game tree rooted at A (maximizer) with children B and C (minimizer nodes); B's children are D and E, and C's children are F and G (maximizer nodes). D's leaves are 3 and 5, E's leaves are 6 and 9, F's leaves are 1 and 2, and G has two further leaves whose values are never examined in the solution.]
Solution
• The initial call starts from A. The value of alpha here is -INFINITY and the value of beta is +INFINITY. These values are passed down to subsequent nodes in the tree.
• At A the maximizer must choose the max of B and C, so A calls B first.
• At B the minimizer must choose the min of D and E, and hence calls D first.
• At D, it looks at its left child, which is a leaf node. This node returns a value of 3. Now the value of alpha at D is max(-INF, 3), which is 3.
• To decide whether it is worth looking at D's right node or not, it checks the condition beta <= alpha. This is false, since beta = +INF and alpha = 3, so the search continues.
Solution Contd..
• D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is 5. The value of node D is now 5.
• D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5.
• At E, the values of alpha and beta are not -INF and +INF but instead -INF and 5, respectively, because the value of beta was changed at B, and that is what B passed down to E.
• Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the condition becomes true: beta is 5 and alpha is 6, so beta <= alpha holds. Hence the search at E breaks off, and E returns 6 to B.
Solution Contd..
• Note how it did not matter what the value of E's right child was. It could have been +INF or -INF; it still wouldn't matter, and we never even had to look at it, because the minimizer was guaranteed a value of 5 or less. So as soon as the maximizer saw the 6, it knew the minimizer would never come this way, since the minimizer can get a 5 on the left side of B. This way we didn't have to look at that 9, and hence saved computation time.
• E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also 5.
So far, this is how our game tree looks. The 9 is crossed out because it was never computed.
Solution Contd..
• B returns 5 to A. At A, alpha = max(-INF, 5), which is 5. Now the maximizer is guaranteed a value of 5 or greater. A now calls C to see if it can get a higher value than 5.
• At C, alpha = 5 and beta = +INF. C calls F.
• At F, alpha = 5 and beta = +INF. F looks at its left child, which is a 1; alpha = max(5, 1), which is still 5. F then looks at its right child, which is a 2. Hence the best value of this node is 2, and alpha still remains 5.
• F returns a value of 2 to C. At C, beta = min(+INF, 2), which is 2. The condition beta <= alpha becomes true, as beta = 2 and alpha = 5. So the search breaks off, and it does not even have to compute the entire subtree of G.
• The intuition behind this break-off is that at C the minimizer was guaranteed a value of 2 or less, but the maximizer was already guaranteed a value of 5 if he chose B. So why would the maximizer ever choose C and get a value less than 2? Again, you can see that it did not matter what those last 2 values were; we also saved a lot of computation by skipping a whole subtree.
• C now returns a value of 2 to A. Therefore the best value at A is max(5, 2), which is 5. Hence the optimal value that the maximizer can get is 5.
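The entire trace can be reproduced with a compact alpha-beta routine. In this sketch (names are my own) the evaluated leaves are recorded so we can check that the 9 under E and the whole subtree of G are never computed; G's two leaf values are unknown from the slides, so None placeholders stand in for them:

```python
import math

visited = []   # indices of leaves that actually get evaluated

def alphabeta(depth, idx, maximizing, leaves, alpha, beta, max_depth):
    """Minimax with alpha-beta cut-offs over a perfect binary tree of leaf scores."""
    if depth == max_depth:
        visited.append(idx)
        return leaves[idx]
    best = -math.inf if maximizing else math.inf
    for child in (2 * idx, 2 * idx + 1):
        val = alphabeta(depth + 1, child, not maximizing,
                        leaves, alpha, beta, max_depth)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:       # cut-off: the remaining children are pruned
            break
    return best

# Leaves by subtree: D = (3, 5), E = (6, 9), F = (1, 2), G = (unknown, unknown).
leaves = [3, 5, 6, 9, 1, 2, None, None]
print(alphabeta(0, 0, True, leaves, -math.inf, math.inf, 3))   # → 5
```

After running it, the 9 (leaf index 3) and both leaves of G (indices 6 and 7) never appear in `visited`, matching the crossed-out nodes in the trace.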
This is how our final game tree looks. As you can see, G has been crossed out, as it was never computed.