
Unit-3 Problem Solving by Searching

Introduction
 Search plays a major role in solving many Artificial Intelligence problems. Search is a universal
problem-solving mechanism in AI.

 Problem solving is a systematic search through a range of possible actions in order to reach some
predefined goal or solution.

 For problem solving, a kind of goal-based agent called a problem-solving agent is used.

Problem as a state space search


State space search is a problem-solving technique used in Artificial Intelligence (AI) to find the solution
path from the initial state to the goal state by exploring the various states. The state space (the set of states a
problem can be in) search approach searches through all possible states of a problem to find a solution.
It is an essential part of Artificial Intelligence and is used in various applications, from game-playing
algorithms to natural language processing.

Example of State Space Search

 Consider a sliding-tile puzzle: the objective is to move from the current state to the target state by sliding
the numbered tiles through the blank space. Let's look closer at reaching the target state from the current state.
 Four legal moves (left, right, up, down) are used for problem solving.
 To summarize, the approach involves exhaustively exploring all states reachable from the current state
and checking whether any of these states matches the target state.
 This approach guarantees a solution but can become very slow for larger state spaces.

Prepared By Keshab Pal


Problem Solving
 Problem-solving in artificial intelligence (AI) involves devising computational methods to find solutions
to complex problems. Four general steps in problem solving:

1) Goal formulation :- It is the first and simplest step in problem-solving. Goal formulation or goal setting
is the process of specifying the desired goal.

2) Problem formulation :- Problem formulation is the process of deciding what actions and states to
consider, given a goal.

3) Search for a solution :- The process of looking for a sequence of actions that reaches the goal is called
searching. A search algorithm takes a problem as input and returns a solution in the form of an action
sequence.

4) Execution :- Once a solution is found, the actions it recommends can be carried out. This is called the
execution phase. Once a solution has been executed, the agent will formulate a new goal.



Search Strategies in AI
Search strategies in AI are all about efficiently navigating a problem's state space to find a solution. Here's a
breakdown of two main categories of search strategies:

1] Uninformed (Blind) Search

 Uninformed search, also known as blind search, is a fundamental concept in computer science and
artificial intelligence (AI) related to searching for a solution in a problem space without using any
specific knowledge about the problem other than the problem's definition.

 It works with only start-state and goal-state knowledge; it searches without any domain knowledge.

 This type of search technique is more time-consuming and costly, but variants such as BFS can still return an optimal solution.

 Examples of uninformed (blind) search are:

1. Depth First Search

2. Breadth-first Search

3. Depth Limited Search

4. Iterative Deepening Search

5. Bidirectional Search

Depth-First Search
 It is an uninformed (blind) search technique.

 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.

 It is called depth-first search because it starts from the root node and follows each path to its greatest
depth before moving to the next path.

 DFS uses a stack (LIFO) data structure for its implementation.

 It always expands the deepest unexpanded node first.

 It is an incomplete search technique if the search space is infinite.

 Sometimes it provides a non-optimal result.

 Its complexity depends on the number of paths; it cannot detect duplicate nodes.

 It takes less time to reach the goal node than the BFS algorithm (if it happens to traverse the right path).

 There is the possibility that many states keep re-occurring, and there is no guarantee of finding the
solution.

 It can have high time complexity.



Algorithm of DFS

1. Start from the initial state (or root node).


2. Push the initial state onto the stack.
3. While the stack is not empty:
o Pop the top element from the stack.
o Check if it is the goal state.
o If not, expand its successors (child nodes) and push them onto the stack.
4. Repeat until the goal state is found or all possible states are explored.
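The steps above can be sketched in Python. This is a minimal sketch: the adjacency list mirrors the tree of the worked example that follows (S, A, B, C, D, E, H, G), which is an assumption since the original figure is not reproduced here.

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack (LIFO).

    graph: dict mapping each node to a list of its children.
    Returns the order in which nodes were visited, or None if the
    goal is unreachable.
    """
    stack = [start]
    visited = []
    while stack:
        node = stack.pop()               # pop the top element
        if node in visited:
            continue                     # skip already-expanded nodes
        visited.append(node)
        if node == goal:
            return visited               # goal found
        # reversed() so the leftmost child ends up on top of the stack
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(child)
    return None

# Tree assumed from the worked example: S -> A, H; A -> B, C; B -> D, E; C -> G
graph = {"S": ["A", "H"], "A": ["B", "C"], "B": ["D", "E"], "C": ["G"]}
print(dfs(graph, "S", "G"))   # ['S', 'A', 'B', 'D', 'E', 'C', 'G']
```

The printed visiting order matches the trace in the example below: S, A, B, D, E, C, G.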

Example of Depth-First Search algorithm

 In the below search tree, we have shown the flow of depth-first search.

Step1:- First, push S(Root node) onto the stack.

 STACK: - S

Step2:- POP(Remove) the top element from the stack, i.e., ‘S’, and print it. Now, PUSH all the neighbors
(child node) of ‘S’ onto the stack that are in ready state.

 PRINT: - S

 STACK: - H, A

Step3:- POP the top element from the stack, i.e., ‘A’, and print it. Now, PUSH all the neighbors of ‘A’ onto
the stack that are in ready state.

 PRINT:- A

 STACK:- H, C, B



Step4:- POP the top element from the stack, i.e. ‘B’, and print it. Now, PUSH all the neighbors of ‘B’ onto
the stack that are in ready state.

 PRINT:- B

 STACK:- H, C, E, D

Step5 :- POP the top element from the stack, i.e., ‘D’, and print it. Now, PUSH all the neighbors of ‘D’ onto
the stack that are in ready state.

 PRINT :- D

 STACK :- H, C, E

Step6 :- POP the top element from the stack, i.e., ‘E’, and print it. Now, PUSH all the neighbors of E onto
the stack that are in ready state.

 PRINT :- E

 STACK :- H, C

Step7 :- POP the top element from the stack, i.e. ‘C’, and print it. Now, PUSH all the neighbors of ‘C’ onto
the stack that are in ready state.

 PRINT :- C

 STACK :- H, G

Step8 :- POP the top element from the stack, i.e. ‘G’, and print it. Now, the target node G is encountered.

 PRINT :- G



 STACK :- H

So the searching path is S → A → B → D → E → C → G.

In the above example, we start searching from root node S and traverse A, then B, then D and E. After
traversing E, the search backtracks, as E has no other successor and the goal node is still not found. After
backtracking it traverses node C and then G, where it terminates because the goal node is found.

Breadth-First Search
 Breadth-First Search (BFS) is an uninformed (blind) search algorithm used to explore and search for a
solution in a graph or tree data structure.

 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the
current level before moving to the nodes of the next level.

 The breadth-first search algorithm is an example of a general-graph search algorithm.

 Breadth-first search is implemented using a FIFO queue data structure.

Algorithm of BFS

1. Start from an initial state (or node).


2. Add the initial state to the queue.
3. While the queue is not empty:
a. Remove the first element from the queue.
b. Check if it is the goal state.
c. If it is not the goal state, expand its successors (child nodes) and add them to the
queue.
4. Repeat until the goal state is found or all possible states have been explored.
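The steps above can be sketched in Python. The adjacency list below is an assumption, reconstructed from the QUEUE1/QUEUE2 trace in the worked example that follows; the visited list plays the role of QUEUE2.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue.

    Returns the order in which nodes were processed (the trace's
    QUEUE2), or None if the goal is unreachable.
    """
    queue = deque([start])       # nodes waiting to be processed (QUEUE1)
    visited = []                 # processed nodes (QUEUE2)
    while queue:
        node = queue.popleft()           # remove the first element
        if node in visited:
            continue
        visited.append(node)
        if node == goal:
            return visited
        for child in graph.get(node, []):
            if child not in visited and child not in queue:
                queue.append(child)
    return None

# Graph assumed from the worked example's trace
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "C": ["E", "F"], "G": ["I"], "E": ["K"]}
print(bfs(graph, "S", "K"))
# ['S', 'A', 'B', 'C', 'D', 'G', 'H', 'E', 'F', 'I', 'K']
```

The processing order matches the final QUEUE2 in the example: S, A, B, C, D, G, H, E, F, I, K.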

Example of Breadth-first search

In the example given below, there is a directed graph having 11 vertices.



 In the above graph, minimum path can be found by using the BFS that will start from Node S and end at
Node K. The algorithm uses two queues, namely QUEUE1 and QUEUE2.

 QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that are
processed and deleted from QUEUE1.

STEP 1 :- First, add S to queue1 and NULL to queue2.

 QUEUE1 = {S}

 QUEUE2 = {NULL}

STEP 2 :- Now, delete node S from queue1 and add it into queue2. Insert all neighbors of node S to queue1.

 QUEUE1 = {A, B}

 QUEUE2 = {S}

STEP 3 :- Now, delete node A from queue1 and add it into queue2. Insert all neighbors of node A to queue1.

 QUEUE1 = {B, C, D}

 QUEUE2 = {S, A}

STEP 4 :- Now, delete node B from queue1 and add it into queue2. Insert all neighbors of node B to queue1.

 QUEUE1 = {C, D, G, H}

 QUEUE2 = {S, A, B}

STEP 5 :- Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to queue1.

 QUEUE1 = {D, G, H, E, F}

 QUEUE2 = {S, A, B, C}



STEP 6 :- Delete node D from queue1 and add it into queue2. Insert all neighbors of node D to queue1.

 QUEUE1 = {G, H, E, F}

 QUEUE2 = {S, A, B, C, D}

STEP 7 :- Delete node G from queue1 and add it into queue2. Insert all neighbors of node G to queue1.

 QUEUE1 = {H, E, F, I}

 QUEUE2 = {S, A, B, C, D, G}

STEP 8 :- Delete node H from queue1 and add it into queue2. Insert all neighbors of node H to queue1.

 QUEUE1 = {E, F, I}

 QUEUE2 = {S, A, B, C, D, G, H}

STEP 9 :- Delete node E from queue1 and add it into queue2. Insert all neighbors of node E to queue1.

 QUEUE1 = {F, I, K}

 QUEUE2 = {S, A, B, C, D, G, H, E}

STEP 10 :- Delete node F from queue1 and add it into queue2. Insert all neighbors of node F to queue1.

 QUEUE1 = {I, K}

 QUEUE2 = {S, A, B, C, D, G, H, E, F}

STEP 11 :- Delete node I from queue1 and add it into queue2. Insert all neighbors of node I to queue1.

 QUEUE1 = {K}

 QUEUE2 = {S, A, B, C, D, G, H, E, F, I}

STEP 12 :- Delete node K from queue1 and add it into queue2. Now, all the nodes are visited, and the target
node K is encountered into queue2.

 QUEUE1 = {}

 QUEUE2 = {S, A, B, C, D, G, H, E, F, I, K}



So the traversal order is S → A → B → C → D → G → H → E → F → I → K using the BFS algorithm.

Depth-Limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited
search can solve the drawback of the infinite path in depth-first search. In this algorithm, a node at the
depth limit is treated as if it has no further successor nodes.

It helps in solving the problem of the DFS algorithm (infinite paths).
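A minimal recursive sketch of this idea: nodes at the depth limit are simply treated as leaves. The graph below is an assumption reconstructed from the worked example that follows (S → A, B; A → C, D; B → I, J); note that the function returns only the path to the goal, while the stack trace in the example lists every node visited along the way.

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Depth-limited DFS: a node at depth == limit is treated as if
    it has no successors.  Returns a path to the goal, or None on
    cutoff/failure."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # depth limit reached: treat as leaf
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal,
                                      limit - 1, path + [child])
        if result is not None:
            return result
    return None

# Graph assumed from the worked example, depth limit 2, target J
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["I", "J"]}
print(depth_limited_search(graph, "S", "J", 2))   # ['S', 'B', 'J']
```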

Example of Depth-Limited Search

In the given graph, consider Depth Limit (l) = 2, Target Node = J, and source
node = S.

Step1:- First, push the source node S onto the stack.

 STACK: - S

Step2:- POP(Remove) the top element from the stack, i.e., ‘S’, and print it. Now, PUSH all the neighbors
(child node) of ‘S’ onto the stack that are in ready state.

 PRINT: - S

 STACK: - B, A

Step3:- POP the top element from the stack, i.e., ‘A’, and print it. Now, PUSH all the neighbors of ‘A’ onto
the stack that are in ready state.

 PRINT:- A

 STACK:- B,D,C



Step4:- POP the top element from the stack, i.e. ‘C’, and print it. Now, PUSH all the neighbors of ‘C’ onto
the stack that are in ready state.

 PRINT:- C

 STACK:- B,D

Step5 :- POP the top element from the stack, i.e., ‘D’, and print it. Now, the depth is 2 so backtrack node
from D to B.

 PRINT :- D

 STACK :- B

Step6:- POP the top element from the stack, i.e. ‘B’, and print it. Now, PUSH all the neighbors of ‘B’ onto
the stack that are in ready state.

 PRINT:- B

 STACK:- J, I

Step7:- POP the top element from the stack, i.e. ‘I’, and print it. Now, the depth is 2, so backtrack from I.

 PRINT:- I

 STACK:- J

Step8:- POP the top element from the stack, i.e. ‘J’, and print it. The goal node J is found, so terminate the
execution.

 PRINT:- J

 STACK:- (empty)

So the path is S → A → C → D → B → I → J.

Iterative Deepening Search


 A search algorithm known as IDS combines the benefits of DFS and Breadth-First Search (BFS); it is
a combination of DFS and BFS.
 The best depth limit is found by gradually increasing the limit: initially the depth limit is 0, and every
iteration increases it by 1.
 It uses a stack data structure, i.e., the LIFO (Last In, First Out) method.
 Repeating work across iterations is the main disadvantage of the iterative deepening search algorithm.
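The iteration scheme above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits. The tree below is an assumption reconstructed from the worked example that follows (S → A, C; A → D, B; C → E, G), and `max_depth` is an illustrative safety cap, not part of the notes.

```python
def ids(graph, start, goal, max_depth=10):
    """Iterative deepening search: repeat depth-limited DFS with
    limit = 0, 1, 2, ... (re-doing earlier work each round, which is
    the disadvantage noted above) until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                  # cut off at the depth limit
        for child in graph.get(node, []):
            found = dls(child, limit - 1, path + [child])
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

# Tree assumed from the worked example: S -> A, C; A -> D, B; C -> E, G
graph = {"S": ["A", "C"], "A": ["D", "B"], "C": ["E", "G"]}
print(ids(graph, "S", "G"))   # ['S', 'C', 'G']
```

With this tree, limits 0 and 1 fail and the third iteration (limit 2) returns the path S → C → G, matching the example.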

Example of Iterative Deepening Search

In the below graph, consider Target Node = G and source node = S. The tree (reconstructed from the level
markers) is:

Level 0: S
Level 1: A, C (children of S)
Level 2: D, B (children of A); E, G (children of C)
Level 3: F, H, I

Step1:- 1st Iteration where depth = 0: only the root S is examined, and the goal node G is not found, so continue the iteration process.

STACK = [S]

Step2:- 2nd Iteration where depth = 1: the goal node G is not found at this depth, so continue the iteration process.

1] Insert root node S into stack.

Stack = [S]

2] Remove S from Stack and insert all child node of S into Stack.

Print = S

Stack = [A, C]

3] Remove C from Stack. C is at the depth limit (1), so its children are not inserted.

Print = C

Stack = [A]

4] Remove A from Stack. A is also at the depth limit, so its children are not inserted.

Print = A

Stack = [ ]

We cannot reach the goal node with depth limit = 1.

Step3:- 3rd Iteration where depth = 2: the goal node G is found at this depth, so we find the path from the root
node to the goal node and terminate the iteration process.



1] Insert root node S into stack.

Stack = [S]

2] Remove S from Stack and insert all child node of S into Stack.

Print = S

Stack = [A, C]

3] Remove C from Stack and insert all child node of C into Stack.

Print = C

Stack = [A, E, G]

4] Remove G from Stack. G is the goal node, so the search stops here.

Print = G

Stack = [A, E]

Finally we reach the goal node, so the required path is S → C → G.

Bi-directional Search
 The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward
search, and the other from the goal state, called the backward search, to find the goal node.

 The search stops when these two graphs intersect each other.

 Bidirectional search is fast and it requires less memory.

 It significantly reduces the amount of exploration done. It is implemented using the Breadth First Search
(BFS) Algorithm.
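A minimal sketch of the two-frontier idea. The notes' worked example below traces the two halves with DFS stacks, but since the text says bidirectional search is implemented with BFS, this sketch grows two BFS frontiers and stops when they intersect. The adjacency list is an assumption reconstructed from the example's traces, treated as undirected.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two BFS frontiers, forward from start and backward from goal;
    stop as soon as they meet and stitch the two half-paths together."""
    if start == goal:
        return [start]
    fwd_parent = {start: None}           # parent links, forward side
    bwd_parent = {goal: None}            # parent links, backward side
    fwd_q, bwd_q = deque([start]), deque([goal])

    def expand(queue, parent, other_parent):
        node = queue.popleft()
        for nb in graph.get(node, []):
            if nb not in parent:
                parent[nb] = node
                queue.append(nb)
                if nb in other_parent:
                    return nb            # the two frontiers intersect here
        return None

    while fwd_q and bwd_q:
        meet = expand(fwd_q, fwd_parent, bwd_parent)
        if meet is None:
            meet = expand(bwd_q, bwd_parent, fwd_parent)
        if meet is not None:
            path, n = [], meet           # walk back to start ...
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]         # ... then forward to goal
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# Undirected graph assumed from the worked example (nodes 1..16)
graph = {1: [2, 4], 2: [1], 4: [1, 8], 6: [8], 8: [4, 6, 9],
         9: [8, 10], 10: [9, 11, 12], 11: [10], 12: [10, 16],
         15: [16], 16: [12, 15]}
print(bidirectional_search(graph, 1, 16))   # [1, 4, 8, 9, 10, 12, 16]
```

The two frontiers meet at node 9, and the stitched path matches the example's final path 1 → 4 → 8 → 9 → 10 → 12 → 16.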

Example of Bidirectional Search

 In the below search tree, bidirectional search algorithm is applied. This algorithm divides one graph/tree
into two sub-graphs. It starts traversing from node 1 in the forward direction and starts from goal node 16
in the backward direction.



 The algorithm terminates at node 9 where two searches meet.

For Forward Search (using DFS)

STEP1: Insert start node 1 into Stack

Stack = 1

Step2: Delete node 1 from Stack and insert all child node of 1 into Stack

Print: 1

Stack: 2, 4

Step3: Delete node 4 from Stack and insert all child node of 4 into Stack

Print: 4

Stack: 2, 8

Step4: Delete node 8 from Stack and insert all child nodes of 8 into Stack

Print: 8

Stack: 2, 6, 9

Step5: Delete node 9 from Stack and insert all child nodes of 9 into Stack

Print: 9

Stack: 2, 6, 10

 The forward path is :- 1489


For Backward Search (using DFS)



STEP1: Insert goal node 16 into Stack

Stack = 16

Step2: Delete node 16 from Stack and insert all child node of 16 into Stack

Print: 16

Stack: 15, 12

Step3: Delete node 12 from Stack and insert all child node of 12 into Stack

Print: 12

Stack: 15,10

Step4: Delete node 10 from Stack and insert all child nodes of 10 into Stack

Print: 10

Stack: 15, 11, 9

Step5: Delete node 9 from Stack and insert all child nodes of 9 into Stack

Print: 9

Stack: 15, 11

 The backward path is :- 16 → 12 → 10 → 9

So the final path is: 1 → 4 → 8 → 9 → 10 → 12 → 16

Informed Search
 Informed search in AI is a type of search algorithm that uses additional information to guide the search
process, allowing for more efficient problem-solving compared to uninformed search algorithms.
 It is also known as Heuristic Search.
 It uses domain knowledge in the searching process, so it typically finds a solution more quickly and
consumes less time.
 This information can be in the form of heuristics, estimates of cost, or other relevant data to prioritize
which states to expand and explore.

Here are some key features of informed search algorithms in AI:

 Use of Heuristics – informed search algorithms use heuristics, or additional information, to guide the
search process and prioritize which nodes to expand.
 More efficient – informed search algorithms are designed to be more efficient than uninformed search
algorithms, such as breadth-first search or depth-first search, by avoiding the exploration of unlikely
paths and focusing on more promising ones.
 Goal-directed – informed search algorithms are goal-directed, meaning that they are designed to find a
solution to a specific problem.



 Cost-based – informed search algorithms often use cost-based estimates to evaluate nodes, such as the
estimated cost to reach the goal or the cost of a particular path.
 Prioritization – informed search algorithms prioritize which nodes to expand based on the additional
information available, often leading to more efficient problem-solving.
 Optimality – informed search algorithms may guarantee an optimal solution if the heuristic used is
admissible (it never overestimates the actual cost) and consistent (it satisfies the triangle inequality
h(n) ≤ c(n, n′) + h(n′) for every successor n′).

Examples of Informed Search Algorithm are:

1) Greedy Best first search


2) A* Search
3) Hill Climbing
4) Simulated Annealing
5) Game playing
6) Adversarial search techniques
7) Mini-max Search
8) Alpha-Beta Pruning

Greedy Best first search

 Greedy Best-First Search is an AI search algorithm that attempts to find the most promising path from a
given starting point to a goal.

 Greedy Best-First Search works by evaluating each candidate node with the heuristic and expanding the
node that appears closest to the goal. This process is repeated until the goal is reached.

 The algorithm uses a heuristic function h(n), an estimate of the cost from node n to the goal, to determine
which path is the most promising.

 At each step, the node with the lowest heuristic value among the open nodes is chosen.

 Greedy Best-First Search has several advantages, including being simple and easy to implement, fast and
efficient, and having low memory requirements.

 Greedy Best-First Search is used in many applications, including pathfinding, machine learning, and
optimization.
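A minimal sketch using a priority queue as the OPEN list, ordered by h(n) alone. The graph shape matches the worked example that follows (S → A, B; B → E, F; F → G, I), but the heuristic values are illustrative assumptions chosen to reproduce the example's expansion order, since the original h(n) table is not reproduced here.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the open node with the
    smallest heuristic value h(n).  Uses a heap as the OPEN list and a
    plain list as the CLOSED list."""
    open_list = [(h[start], start, [start])]
    closed = []
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.append(node)
        if node == goal:
            return path, closed
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None, closed

# Graph from the worked example; h values are illustrative assumptions
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["G", "I"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "G": 0, "I": 9}
path, closed = greedy_best_first(graph, h, "S", "G")
print(path)     # ['S', 'B', 'F', 'G']
print(closed)   # ['S', 'B', 'F', 'G']
```

With these values the CLOSED list grows exactly as in the example's iterations, and the solution path is S → B → F → G.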



Example of Greedy (Best-first) Search

Consider the below search problem, and we will traverse it using greedy best-first search. At each iteration,
each node is expanded using evaluation function f(n)=h(n), which is given in the below table.

In this search example, we are using two lists which are open and closed lists. Following are the iteration for
traversing the above example.

 Iteration1: Initialize node with “S”

Open [S], Closed []

 Iteration2: Expand the nodes of “S” and put in the CLOSED list

Open [B, A], Closed [S]

 Iteration3: Expand the nodes of “B” and put in the CLOSED list

Open [ A, F, E], Closed [S, B]

 Iteration4: Expand the nodes of “F” and put in the CLOSED list

Open [ A, E, G, I] Closed [S, B, F]

 Iteration5: Expand the nodes of “G” and put in the CLOSED list

Open [ A, E, I], Closed [ S, B, F, G ]

Hence the final solution path will be: S BF G

Hill Climbing Algorithm



 Hill climbing is a local search algorithm: it uses only knowledge of the local domain, not the global
domain.

 It is a technique for optimizing mathematical problems. Hill climbing is widely used when a good
heuristic is available.

 Hill climbing can be used to solve problems that have many solutions, some of which are better than
others.

 In this algorithm, backtracking is not allowed.

 It always moves in a single direction.

 It uses a greedy approach: if the next state is better than the current state, then it becomes the new current state.

 It starts with a random solution, and iteratively makes small changes to the solution, each time improving
it a little. When the algorithm cannot see any improvement anymore, it terminates.

 It terminates when it reaches a peak value where no neighbor has a higher value.

 A node of hill climbing algorithm has two components which are state and value.

 Traveling-salesman Problem is one of the widely discussed examples of the Hill climbing algorithm, in
which we need to minimize the distance traveled by the salesman.
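The loop described above (move to a better neighbor until no neighbor improves) can be sketched in a few lines. The 1-D objective and step function below are toy assumptions for illustration, not from the notes; on landscapes with local maxima or plateaus the same loop would stop early, as discussed next.

```python
def hill_climbing(objective, neighbors, start):
    """Simple hill climbing: repeatedly move to the best neighbor while
    it improves the objective; stop at a peak (possibly only a local
    maximum).  No backtracking is performed."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current           # no neighbor is better: a peak
        current = best

# Toy 1-D landscape: f(x) = -(x - 7)^2, neighbors are x - 1 and x + 1
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(f, step, 0))   # 7 (the single peak of this landscape)
```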

State-space Diagram for Hill Climbing

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing the
relationship between the various states of the algorithm and the objective function/cost.



On Y-axis we have taken the function which can be an objective function or cost function, and state-space on
the x-axis.

Different regions in the state space landscape:

Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another
state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest
value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have
the same value.

Problems in Hill Climbing Algorithm:

1. Local Maximum : A local maximum is a peak state in the landscape which is better than each of its
neighboring states, but there is another state also present which is higher than the local maximum.

Solution :- Utilize the backtracking technique. Maintain a list of visited states. If the search reaches an
undesirable state, it can backtrack to the previous configuration and explore a new path.



2. Plateau : On the plateau, all neighbors have the same value. Hence, it is not possible to select the best
direction.

Solution :- the solution for the plateau is to take big steps or very little steps while searching, to solve the
problem. Randomly select a state which is far away from the current state so it is possible that the algorithm
could find non-plateau region.

3. Ridges : Any point on a ridge can look like a peak because movement in all possible directions is
downward. Hence the algorithm stops when it reaches this state.

Solution :- With the use of bidirectional search, or by moving in different directions, we can improve this
problem.



A* Search Algorithm
 It is a searching algorithm that is used to find the shortest path between an initial and a final point.

 A* search algorithm finds the shortest path through the search space using the heuristic function. This
search algorithm expands less search tree and provides optimal result faster.

 In the A* search algorithm, we use a search heuristic as well as the cost to reach the node. Hence we can
combine both costs as follows: f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start
and h(n) is the estimated cost from n to the goal.

Example of A* Search

In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is
given in the below table so we will calculate the f(n) of each state using the formula f(n)= g(n) + h(n), where
g(n) is the cost to reach any node from start state.

Solution:

 Initialization: S (Starting node) and G (goal node)

Heuristic values: h(S) = 5, h(A) = 3, h(B) = 4, h(C) = 2, h(D) = 6, h(G) = 0.

Edge costs: S→A = 1, S→G = 10, A→B = 2, A→C = 1, C→D = 3, C→G = 4.

Step 1: Expand S.

 S → A ---- f(n) = g(n) + h(n) = 1 + 3 = 4

 S → G ---- f(n) = g(n) + h(n) = 10 + 0 = 10

 Closed [S, A]

Step 2: Expand A.

 S → A → B ---- f(n) = g(n) + h(n) = (1 + 2) + 4 = 7

 S → A → C ---- f(n) = g(n) + h(n) = (1 + 1) + 2 = 4

 Closed [S, A, C]

Step 3: Expand C.

 S → A → C → D ---- f(n) = g(n) + h(n) = (1 + 1 + 3) + 6 = 11

 S → A → C → G ---- f(n) = g(n) + h(n) = (1 + 1 + 4) + 0 = 6

 Closed [S, A, C, G]

Hence the final solution path will be: S → A → C → G (total cost 6).
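A minimal sketch of A* using a priority queue ordered by f(n) = g(n) + h(n). The edge costs and heuristic values below are taken from the worked example's computations; the `best_g` bookkeeping (skipping a node when a cheaper route to it is already known) is a standard addition, not spelled out in the notes.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with the smallest
    f(n) = g(n) + h(n), where g(n) is the path cost so far."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g     # cheaper route found
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None, float("inf")

# Graph and heuristic values from the worked example
graph = {"S": {"A": 1, "G": 10},
         "A": {"B": 2, "C": 1},
         "C": {"D": 3, "G": 4}}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)   # ['S', 'A', 'C', 'G'] 6
```

The expansion order S, A, C, G and the final cost 6 match the step-by-step f(n) calculations above.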



Simulated Annealing

 Simulated annealing is an optimization algorithm used to solve problems where it is impossible or


computationally expensive to find a global optimum.
 It is a global optimization technique.
 In simulated annealing, the computer explores candidate solutions until it finds one that meets the
requirements.
 Simulated annealing can be used to find solutions to optimization problems by slowly changing the
values of the variables in the problem until a solution is found.
 The advantage of simulated annealing over other optimization methods is that it is less likely to get stuck
in a local minimum, where the solution is not the best possible but is good enough.
 The advantages of using simulated annealing in artificial intelligence are that it is relatively fast, easy to
learn, and can produce good results even for difficult problems.
 Moves to worse states may be accepted (with some probability), which is what allows the search to
escape local optima.
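A minimal sketch of the acceptance rule described above: a worse move is accepted with probability exp(-Δ/T), and the temperature T shrinks over time so that the search gradually becomes greedy. The toy objective (minimize (x − 3)²), neighbor function, and schedule parameters are illustrative assumptions, not from the notes.

```python
import math
import random

def simulated_annealing(objective, neighbor, start,
                        temp=10.0, cooling=0.95, steps=1000):
    """Minimize `objective` by simulated annealing.  Worse moves are
    accepted with probability exp(-delta / T); as T cools, the search
    behaves more and more like greedy descent."""
    random.seed(0)                       # deterministic for illustration
    current = start
    best = current
    for _ in range(steps):
        candidate = neighbor(current)
        delta = objective(candidate) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate          # accept (always if improving)
        if objective(current) < objective(best):
            best = current
        temp *= cooling                  # geometric cooling schedule
    return best

# Toy problem: minimize f(x) = (x - 3)^2, neighbors are small random moves
f = lambda x: (x - 3) ** 2
move = lambda x: x + random.uniform(-1, 1)
x = simulated_annealing(f, move, start=-20.0)
print(round(x, 2))   # a value close to 3
```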

Advantages

 It is relatively easy to code.


 It is also used for solve complex problems.
 Given a sufficiently slow cooling schedule, it can find optimal or near-optimal results.

Disadvantages

 It is very slow and expensive.


 The method can’t tell whether it has found an optimal result.

Game Playing
 Game playing in artificial intelligence refers to the development and application of algorithms that enable
computers to engage in strategic decision-making within the context of games.
 Game playing is a popular application of artificial intelligence that involves the development of computer
programs to play games, such as chess, checkers, or Go.
 The goal of game playing in artificial intelligence is to develop algorithms that can learn how to play
games and make decisions that will lead to winning outcomes.
 These algorithms, often termed game playing algorithms in AI, empower machines to mimic human-like
gameplay by evaluating potential moves, predicting opponent responses, and making informed choices
that lead to favorable outcomes.

There are two main approaches to game playing in AI, rule-based systems and machine learning-based
systems.
1. Rule-based systems use a set of fixed rules to play the game.
2. Machine learning-based systems use algorithms to learn from experience and make decisions based on
that experience.



 Game playing in AI is an active area of research and has many practical applications, including game
development, education, and military training.
 This involves a combination of pattern recognition, probabilistic analysis, and strategic planning, all of
which are encapsulated in the game playing algorithm in AI.
 The most common search technique in game playing is the Minimax search procedure. It is a depth-first,
depth-limited search procedure. It is used for games like chess and tic-tac-toe.

Mini-max Search algorithm


 Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game
theory.

 It is widely used in two player turn-based games such as Chess. In this algorithm two players play the
game, one is called MAX and other is called MIN.

 The minimax algorithm performs a depth-first exploration of the complete game tree.

 The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs values up
the tree as the recursion unwinds.

 Maximizer will try to get the Maximum possible score, and Minimizer will try to get the minimum
possible score.
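The backing-up of values can be sketched as a short recursive function. The tree shape and terminal utilities below are taken from the worked example that follows; the terminal node names (d1, d2, ...) are invented labels, since the figure is not reproduced here.

```python
def minimax(node, maximizing, tree, values):
    """Recursive minimax over a game tree.  `tree` maps internal nodes
    to their children; `values` holds the utilities of terminal nodes."""
    if node in values:                   # terminal state: return its utility
        return values[node]
    children = [minimax(c, not maximizing, tree, values)
                for c in tree[node]]
    # MAX takes the largest child value, MIN the smallest
    return max(children) if maximizing else min(children)

# Game tree from the worked example: A (MAX) -> B, C (MIN) -> D..G (MAX)
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["d1", "d2"], "E": ["e1", "e2"],
        "F": ["f1", "f2"], "G": ["g1", "g2"]}
values = {"d1": -1, "d2": 4, "e1": 2, "e2": 6,
          "f1": -3, "f2": -5, "g1": 0, "g2": 7}
print(minimax("A", True, tree, values))   # 4
```

The backed-up values reproduce the example's steps: D = 4, E = 6, F = -3, G = 7, then B = 4, C = -3, and finally A = 4.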

Example of Min-Max Search Algorithm

The working of the minimax algorithm can be easily described using an example. Below we have taken an
example of game-tree which is representing the two-player game.

Step-1: In the first step, the algorithm generates the entire game-tree and applies the utility function to get the
utility values for the terminal states. In the below tree diagram, let's take A as the initial state of the tree.
Suppose the maximizer takes the first turn, which has worst-case initial value -∞, and the minimizer takes the
next turn, which has worst-case initial value +∞.



Step 2: Now, first we find the utility values for the Maximizer. Its initial value is -∞, so we compare each
terminal value with the Maximizer's current value and keep the higher one: each MAX node takes the
maximum of all its children.

 For node D max(-1,-∞) => max(-1,4)= 4

 For Node E max(2, -∞) => max(2, 6)= 6

 For Node F max(-3, -∞) => max(-3,-5) = -3

 For node G max(0, -∞) = max(0, 7) = 7

Step 3: In the next step, it's a turn for minimizer, so it will compare all nodes value with +∞, and will find
the 3rd layer node values.

 For node B= min(4,+∞) => min(4,6) = 4

 For node C= min(-3,+∞) => min (-3, 7) = -3



Step 4: Now it's a turn for Maximizer, and it will again choose the maximum of all nodes value and find the
maximum value for the root node. In this game tree, there are only 4 layers, hence we reach immediately to
the root node.

 For node A = max(4,-∞) => max(4, -3)= 4



 That was the complete workflow of the min-max two player game.

Adversarial search techniques


 Adversarial search in artificial intelligence is a problem-solving technique that focuses on making
decisions in competitive or adversarial scenarios.
 Adversarial search examines the problems that arise when we try to plan ahead in a world where other
agents are planning against us.
 It is used to find optimal strategies when multiple agents, often referred to as players, have opposing or
conflicting objectives.
 This algorithm works on multiple agent environment, in which each agent is an opponent of other agent
and playing against each other. Each agent needs to consider the action of other agent and effect of that
action on their performance.
 This algorithm is based on the concept of ‘Game Theory’.
 Adversarial search aims to determine the best course of action for a given player, considering the possible
moves and counter-moves of the opponent(s).
 AI agents use adversarial search to evaluate and select the best moves in a competitive environment.
 This concept is foundational in AI, impacting game-playing, decision-making, and strategic planning
across various domains.
 The adversarial search can be employed in two-player zero-sum games which means what is good for one
player will be the misfortune for the other. In such a case, there is no win-win outcome.

There are following types of adversarial search algorithms:

 Mini-max Algorithm
 Alpha-Beta Pruning

Role of Adversarial Search in AI


 Game Playing: Adversarial search is the foundation for AI agents playing various games, from chess and
Go to more complex real-time strategy games.

 Decision-Making in Strategic Environments: This can include applications in robotics, where a robot
needs to navigate an environment while avoiding obstacles or competing robots.

 Security & Threat Detection: Adversarial search can be used to model attacker behavior and design
systems that are more resilient against cyberattacks.

Alpha-Beta Pruning

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for
the minimax algorithm.
o This algorithm addresses the exponential time complexity of the Minimax algorithm by pruning
redundant branches of the game tree using its parameters Alpha (α) and Beta (β).

o As we saw in the minimax search algorithm, the number of game states it has to examine is exponential
in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. The
technique by which we can compute the correct minimax decision without checking every node of the
game tree is called pruning. Since it involves two threshold parameters, Alpha and Beta, for future
expansion, it is called alpha-beta pruning. It is also known as the Alpha-Beta algorithm.

o The two parameters can be defined as:


a. Alpha(α): The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
b. Beta(β): The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes
all the nodes that do not really affect the final decision and only make the algorithm slow. By pruning
these nodes, it makes the algorithm fast.

Example of Alpha-Beta Pruning: Let's take an example of two-player search tree to understand the
working of Alpha-beta pruning

Step 1: In the first step, the Max player will make the first move from node A, where α = -∞ and β = +∞.
These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B
passes the same values to its child D.

Step 2: At node D, the value of α will be calculated, since it is Max's turn. The value of α is compared first
with 2 and then with 3, so max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn:
β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3, hence at node B now α = -∞
and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞
and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha will change. The current value of alpha is
compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E
is pruned, the algorithm does not traverse it, and the value at node E is 5.

Step 5: Next, the algorithm backtracks up the tree, from node B to node A. At node A, the value of alpha is
updated: the maximum available value is 3, as max(-∞, 3) = 3, with β = +∞. These two values are now
passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3. It is then
compared with the right child, which is 1: max(3, 1) = 3. So α remains 3, but the node value of F becomes
max(0, 1) = 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta is updated by
comparing it with 1: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which satisfies the condition α >= β, so the
next child of C, which is G, is pruned, and the algorithm does not compute the entire subtree of G.

Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree
shows the nodes that were computed and the nodes that were never computed. Hence the optimal value for
the maximizer is 3 in this example.

 That was the complete workflow of the Alpha-Beta pruning two player game.
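The walk-through above can be condensed into a short Python sketch. It extends the plain minimax recursion with the α and β thresholds and records which terminal nodes are actually examined. The leaf values for D, E, and F match the example (D = 2, 3; E = 5 plus a pruned sibling; F = 0, 1), while the leaves under E's pruned successor (9) and under subtree G (7, 5) are assumptions, since the pruned subtrees' contents never appear in the walk-through:

```python
import math

def alphabeta(node, alpha, beta, is_maximizer, visited):
    """Minimax with alpha-beta pruning; records visited terminal nodes."""
    if isinstance(node, int):
        visited.append(node)
        return node
    if is_maximizer:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cut-off: Min will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, visited))
            beta = min(beta, value)
            if alpha >= beta:   # alpha cut-off: Max will never allow this branch
                break
        return value

# Tree from the walk-through: A -> B, C; B -> D, E; C -> F, G
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
visited = []
print(alphabeta(tree, -math.inf, math.inf, True, visited))  # optimal value: 3
print(visited)  # [2, 3, 5, 0, 1] -- leaf 9 and subtree G are never examined
```

Note that the pruned result equals the plain minimax result; only the number of leaves examined is reduced.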

Constraint Satisfaction Problems


 CSP is a specific type of problem-solving approach that involves identifying constraints that must be
satisfied and finding a solution that satisfies all the constraints.

 It is a search procedure that operates in a space of constraint sets.

 Constraint satisfaction problems in AI have the goal of discovering some problem state that satisfies a given
set of constraints.

 CSP has been used in a variety of applications, including scheduling, planning, resource allocation, and
automated reasoning.

There are mainly three basic components in the constraint satisfaction problem:

 Variables: - The things that need to be determined are variables. Variables in a CSP are the objects that
must have values assigned to them in order to satisfy a particular set of constraints.

 Domains: - The range of potential values that a variable can have is represented by domains.

 Constraints: - The rules that govern how variables relate to one another are known as constraints.
Constraints in a CSP restrict the combinations of values that the variables can take together.

 A constraint Ci = (scope, relation), where:

Scope = the set of variables that participate in the constraint.

Relation = defines the values that those variables can take.

Example of CSP

 Variables = v1 and v2

 Domains = {5, 6} for each variable

 Constraint = the values of v1 and v2 cannot be the same.

 Using CSP method

 Constraints = (scope, relation)

C1 = { (v1, v2), v1 ≠ v2 }
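The (scope, relation) formulation above can be written directly in Python. In the sketch below, the relation is a predicate over the scoped variables, and an assignment is checked against a constraint; the helper name `is_satisfied` is illustrative:

```python
# A constraint is a (scope, relation) pair: the scope names the participating
# variables, and the relation is a predicate over their values.
variables = ["v1", "v2"]
domains = {"v1": {5, 6}, "v2": {5, 6}}
c1 = (("v1", "v2"), lambda a, b: a != b)   # C1 = { (v1, v2), v1 != v2 }

def is_satisfied(constraint, assignment):
    """True if the assignment (a dict of variable -> value) meets the constraint."""
    scope, relation = constraint
    return relation(*(assignment[v] for v in scope))

print(is_satisfied(c1, {"v1": 5, "v2": 6}))  # True  -- the values differ
print(is_satisfied(c1, {"v1": 5, "v2": 5}))  # False -- violates v1 != v2
```

A full CSP solver simply searches the space of assignments, calling a check like this one for every constraint whose scope is fully assigned.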

There are several types of algorithms used in Constraint Satisfaction Problems (CSPs), including:

 The backtracking algorithm

 The forward-checking algorithm

Example 2

Variables = {1, 2, 3, 4}

Domain = {red, green, blue}

Constraint = {adjacent nodes must not have the same color}

Algorithm = backtracking

Graph:

1   2

3   4

(edges connect adjacent nodes)

Backtracking trace (r = red, g = green, b = blue):

Step      |   1   |   2   |   3   |   4
----------|-------|-------|-------|-------
Initial   | r,g,b | r,g,b | r,g,b | r,g,b
1 = r     |   r   |  g,b  |  g,b  |  g,b
2 = g     |   r   |   g   |  g,b  |  g,b
3 = g     |   r   |   g   |   b   |  g,b   (error)
4 = b     |   r   |   g   |   g   |   b

Final solution: 1 = red, 2 = green, 3 = green, 4 = blue.
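The trace above can be reproduced with a small backtracking solver. Since the figure's exact edge set is not fully recoverable from the text, the sketch below assumes the edges 1-2, 1-3, 1-4, 2-4, and 3-4 (with 2 and 3 not adjacent), which yields the final assignment shown in the last row of the trace:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Depth-first backtracking search for a consistent full assignment."""
    if len(assignment) == len(variables):
        return dict(assignment)              # every variable is colored
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # consistent if no already-colored neighbor uses the same color
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]              # undo and try the next color
    return None                              # dead end: backtrack upstream

variables = [1, 2, 3, 4]
domains = {v: ["red", "green", "blue"] for v in variables}
# Assumed edge set: 1-2, 1-3, 1-4, 2-4, 3-4
neighbors = {1: [2, 3, 4], 2: [1, 4], 3: [1, 4], 4: [1, 2, 3]}
print(backtrack({}, variables, domains, neighbors))
# -> {1: 'red', 2: 'green', 3: 'green', 4: 'blue'}
```

Trying colors in the order red, green, blue reproduces the trace: node 4 rejects red (conflict with 1) and green (conflict with 2 and 3) before settling on blue.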
