Unit-3 Problem Solving by Searching
Introduction
Search plays a major role in solving many Artificial Intelligence problems. Search is a universal
problem-solving mechanism in AI.
Problem solving is a systematic search through a range of possible actions in order to reach some
predefined goal or solution.
For problem solving, a kind of goal-based agent called a problem-solving agent is used.
Consider, for example, a sliding-tile puzzle. Our objective is to move from the current state to the target state by sliding the numbered tiles through the blank space; the blank can be moved using four legal moves (left, right, up, down). Let's look closer at reaching the target state from the current state.
To summarize, our approach involves exhaustively exploring all states reachable from the current state and checking whether any of these states matches the target state. This approach guarantees a solution but can become very slow for larger state spaces.
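As a rough illustration, the Python sketch below generates the successor states of an 8-puzzle position using the four legal moves; the tuple-based board representation and the helper name successors are assumptions made for illustration, not part of the original notes.

def successors(state):
    """Return all states reachable by sliding one tile into the blank (0)."""
    result = []
    blank = state.index(0)                     # position of the blank, 0..8
    row, col = divmod(blank, 3)
    # (row offset, col offset) for moving the blank left, right, up, down
    for dr, dc in [(0, -1), (0, 1), (-1, 0), (1, 0)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:          # stay inside the 3x3 board
            swap = r * 3 + c
            board = list(state)
            board[blank], board[swap] = board[swap], board[blank]
            result.append(tuple(board))
    return result

print(successors((1, 2, 3, 4, 0, 5, 6, 7, 8)))   # the 4 successor states when the blank is in the centre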
1) Goal formulation :- It is the first and simplest step in problem-solving. Goal formulation or goal setting
is the process of specifying the desired goal.
2) Problem formulation :- Problem formulation is the process of deciding what actions and states to
consider, given a goal.
3) Search for a solution :- The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
4) Execution :- Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Once a solution has been executed, the agent will formulate a new goal.
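These four phases can be sketched in Python roughly as follows; the Problem container, the search helper, and the agent loop are illustrative assumptions (the search phase here happens to use breadth-first search, which is described later in this unit).

from collections import deque

class Problem:
    def __init__(self, initial, goal, actions, result):
        self.initial = initial      # initial state
        self.goal = goal            # goal formulation: the desired state
        self.actions = actions      # actions(state) -> iterable of actions
        self.result = result        # result(state, action) -> next state

def search(problem):
    """Return a sequence of actions reaching the goal (breadth-first here)."""
    frontier = deque([(problem.initial, [])])
    explored = {problem.initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == problem.goal:
            return plan                            # solution: an action sequence
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

def agent(problem, execute):
    plan = search(problem)          # search phase
    if plan:
        for action in plan:         # execution phase
            execute(action)

# Tiny usage example: reach goal state 3 from 0 by incrementing.
p = Problem(initial=0, goal=3,
            actions=lambda s: ['inc'],
            result=lambda s, a: s + 1)
agent(p, execute=print)             # prints 'inc' three times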
Uninformed search, also known as blind search, is a fundamental concept in computer science and
artificial intelligence (AI) related to searching for a solution in a problem space without using any
specific knowledge about the problem other than the problem's definition.
It has knowledge only of the start state and the goal state; in other words, it searches without domain knowledge.
These techniques tend to be more time-consuming and have higher cost complexity, but some of them (such as breadth-first search) can give an optimal solution.
The uninformed search techniques covered in this unit are:
1. Depth-first Search
2. Breadth-first Search
3. Depth-limited Search
4. Iterative Deepening Depth-first Search
5. Bidirectional Search
Depth-First Search
It is an uninformed search technique, also called a blind search technique.
Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
It is called the depth-first search because it starts from the root node and follows each path to its greatest
depth node before moving to the next path.
Its complexity depends on the number of paths explored, and it does not check for duplicate nodes.
It takes less time to reach to the goal node than BFS algorithm (if it traverses in the right path).
There is the possibility that many states keep re-occurring, and there is no guarantee of finding the
solution.
In the below search tree, we have shown the flow of depth-first search.
Step1:- PUSH the source node 'S' onto the stack.
STACK: - S
Step2:- POP(Remove) the top element from the stack, i.e., ‘S’, and print it. Now, PUSH all the neighbors
(child node) of ‘S’ onto the stack that are in ready state.
PRINT: - S
STACK: - H, A
Step3:- POP the top element from the stack, i.e., ‘A’, and print it. Now, PUSH all the neighbors of ‘A’ onto
the stack that are in ready state.
PRINT:- A
STACK:- H, C, B
Step4:- POP the top element from the stack, i.e., 'B', and print it. Now, PUSH all the neighbors of 'B' onto the stack that are in ready state.
PRINT:- B
STACK:- H, C, E, D
Step5 :- POP the top element from the stack, i.e., ‘D’, and print it. Now, PUSH all the neighbors of ‘D’ onto
the stack that are in ready state.
PRINT :- D
STACK :- H, C, E
Step6 :- POP the top element from the stack, i.e., ‘E’, and print it. Now, PUSH all the neighbors of E onto
the stack that are in ready state.
PRINT :- E
STACK :- H, C
Step7 :- POP the top element from the stack, i.e. ‘C’, and print it. Now, PUSH all the neighbors of ‘C’ onto
the stack that are in ready state.
PRINT :- C
STACK :- H, G
Step8 :- POP the top element from the stack, i.e. ‘G’, and print it. Now, the target node G is encountered.
PRINT :- G
In the above example, we start searching from the root node S and traverse A, then B, then D and E. After traversing E, the algorithm backtracks, as E has no other successor and the goal node has not yet been found. After backtracking, it traverses node C and then G, where it terminates because the goal node has been found.
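A minimal iterative depth-first search sketch that mirrors the PUSH/POP trace above is given below; the adjacency list is an assumption reconstructed from the example's node names.

graph = {
    'S': ['A', 'H'], 'A': ['B', 'C'], 'B': ['D', 'E'],
    'D': [], 'E': [], 'C': ['G'], 'G': [], 'H': [],
}

def dfs(start, goal):
    stack, visited, order = [start], set(), []
    while stack:
        node = stack.pop()                 # POP the top element
        if node in visited:
            continue
        visited.add(node)
        order.append(node)                 # PRINT
        if node == goal:
            return order
        # PUSH neighbours in reverse so the first-listed child is expanded first
        stack.extend(reversed(graph[node]))
    return None

print(dfs('S', 'G'))                       # ['S', 'A', 'B', 'D', 'E', 'C', 'G']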
Breadth-First Search
Breadth-First Search (BFS) is an uninformed or blind search algorithm used to explore and search for a
solution in a graph or tree data structure.
The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.
Algorithm of BFS
QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that are
processed and deleted from QUEUE1.
STEP 1 :- Initially, insert the source node S into QUEUE1.
QUEUE1 = {S}
QUEUE2 = {NULL}
STEP 2 :- Now, delete node S from queue1 and add it into queue2. Insert all neighbors of node S to queue1.
QUEUE1 = {A, B}
QUEUE2 = {S}
STEP 3 :- Now, delete node A from queue1 and add it into queue2. Insert all neighbors of node A to queue1.
QUEUE1 = {B, C, D}
QUEUE2 = {S, A}
STEP 4 :- Now, delete node B from queue1 and add it into queue2. Insert all neighbors of node B to queue1.
QUEUE1 = {C, D, G, H}
QUEUE2 = {S, A, B}
STEP 5 :- Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to queue1.
QUEUE1 = {D, G, H, E, F}
QUEUE2 = {S, A, B, C}
STEP 6 :- Delete node D from queue1 and add it into queue2. Insert all neighbors of node D to queue1 (D has no unvisited neighbors).
QUEUE1 = {G, H, E, F}
QUEUE2 = {S, A, B, C, D}
STEP 7 :- Delete node G from queue1 and add it into queue2. Insert all neighbors of node G to queue1.
QUEUE1 = {H, E, F, I}
QUEUE2 = {S, A, B, C, D, G}
STEP 8 :- Delete node H from queue1 and add it into queue2. Insert all neighbors of node H to queue1.
QUEUE1 = {E, F, I}
QUEUE2 = {S, A, B, C, D, G, H}
STEP 9 :- Delete node E from queue1 and add it into queue2. Insert all neighbors of node E to queue1.
QUEUE1 = {F, I, K}
QUEUE2 = {S, A, B, C, D, G, H, E}
STEP 10 :- Delete node F from queue1 and add it into queue2. Insert all neighbors of node F to queue1.
QUEUE1 = {I, K}
QUEUE2 = {S, A, B, C, D, G, H, E, F}
STEP 11 :- Delete node I from queue1 and add it into queue2. Insert all neighbors of node I to queue1.
QUEUE1 = {K}
QUEUE2 = {S, A, B, C, D, G, H, E, F, I}
STEP 12 :- Delete node K from queue1 and add it into queue2. Now all the nodes have been visited, and the target node K is encountered in queue2, so the search terminates.
QUEUE1 = {}
QUEUE2 = {S, A, B, C, D, G, H, E, F, I, K}
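A minimal breadth-first search sketch following the QUEUE1/QUEUE2 convention above (QUEUE1 holds nodes waiting to be processed, QUEUE2 the nodes already processed); the adjacency list is an assumption reconstructed from the trace.

from collections import deque

graph = {
    'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'], 'C': ['E', 'F'],
    'D': [], 'G': ['I'], 'H': [], 'E': ['K'], 'F': [], 'I': [], 'K': [],
}

def bfs(start, goal):
    queue1 = deque([start])       # nodes to be processed
    queue2 = []                   # nodes already processed (visit order)
    seen = {start}
    while queue1:
        node = queue1.popleft()   # delete from QUEUE1 ...
        queue2.append(node)       # ... and add into QUEUE2
        if node == goal:
            return queue2
        for child in graph[node]:
            if child not in seen:
                seen.add(child)
                queue1.append(child)
    return queue2

print(bfs('S', 'K'))              # ['S', 'A', 'B', 'C', 'D', 'G', 'H', 'E', 'F', 'I', 'K']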
Depth-Limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
Consider the given graph with depth limit (l) = 2, target node = J, and source node = S.
Step1:- Now, the first element of the source node is pushed onto the stack.
STACK: - S
Step2:- POP(Remove) the top element from the stack, i.e., ‘S’, and print it. Now, PUSH all the neighbors
(child node) of ‘S’ onto the stack that are in ready state.
PRINT: - S
STACK: - B, A
Step3:- POP the top element from the stack, i.e., ‘A’, and print it. Now, PUSH all the neighbors of ‘A’ onto
the stack that are in ready state.
PRINT:- A
STACK:- B,D,C
Step4:- POP the top element from the stack, i.e., 'C', and print it. The depth limit of 2 has been reached, so C is not expanded.
PRINT:- C
STACK:- B,D
Step5 :- POP the top element from the stack, i.e., 'D', and print it. The depth limit of 2 has been reached, so D is not expanded and the search backtracks.
PRINT :- D
STACK :- B
Step6:- POP the top element from the stack, i.e. ‘B’, and print it. Now, PUSH all the neighbors of ‘B’ onto
the stack that are in ready state.
PRINT:- B
STACK:- J, I
Step7:- POP the top element from the stack, i.e. 'I', and print it. The depth limit of 2 has been reached, so I is not expanded and the search backtracks.
PRINT:- I
STACK:- J
Step8:- POP the top element from the stack, i.e. 'J', and print it. J is the goal node, so terminate the execution.
PRINT:- J
STACK:- NULL
So the path is S → A → C → D → B → I → J.
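A minimal recursive depth-limited search sketch is shown below; it returns a path to the goal within the limit rather than the full visit order shown in the trace, and the adjacency list is an assumption based on the example (depth limit 2, source S, target J).

graph = {
    'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['I', 'J'],
    'C': [], 'D': [], 'I': [], 'J': [],
}

def depth_limited_search(node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                      # depth limit reached: do not expand
        return None
    for child in graph.get(node, []):
        found = depth_limited_search(child, goal, limit - 1, path)
        if found:
            return found
    return None

print(depth_limited_search('S', 'J', 2))   # ['S', 'B', 'J'], a path to the goal within the limit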
Iterative Deepening Depth-First Search
Iterative deepening repeats depth-limited search with an increasing depth limit until the goal node is found. In the below graph, consider the given tree with target node = G and source node = S.
[Tree diagram: Level 0: S; Level 1: A, C; Level 2: D, B, E, G; Level 3: F, H, I]
Step1:- 1st iteration, depth limit = 0. The goal node G is not found, so continue the iteration process.
STACK = [S]
Step2:- 2nd iteration, depth limit = 1. The goal node G is not found, so continue the iteration process.
Stack = [S]
2] Remove S from the Stack and insert all child nodes of S into the Stack.
Print = S
Stack = [A, C]
3] Remove C from the Stack. C is at the depth limit, so its children are not inserted.
Print = C
Stack = [A]
4] Remove A from the Stack. A is at the depth limit, so its children are not inserted.
Print = A
Stack = [NULL]
Step3:- 3rd iteration, depth limit = 2. In this iteration the goal node G is found, so the right path from the root node to the goal node is obtained and the iteration process terminates.
Stack = [S]
2] Remove S from the Stack and insert all child nodes of S into the Stack.
Print = S
Stack = [A, C]
3] Remove C from the Stack and insert all child nodes of C into the Stack.
Print = C
Stack = [A, E, G]
4] Remove G from the Stack. G is the goal node, so the search terminates.
Print = G
Stack = [A, E]
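A minimal iterative deepening sketch is given below: depth-limited search is repeated with an increasing limit until the goal is found. The adjacency list is an assumption based on the tree above, and depth_limited_search is repeated here so that the block is self-contained.

graph = {
    'S': ['A', 'C'], 'A': ['D', 'B'], 'C': ['E', 'G'],
    'D': ['F'], 'B': ['H'], 'E': ['I'], 'G': [], 'F': [], 'H': [], 'I': [],
}

def depth_limited_search(node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                           # node at the depth limit: not expanded
        return None
    for child in graph.get(node, []):
        found = depth_limited_search(child, goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):       # depth limit = 0, 1, 2, ...
        result = depth_limited_search(start, goal, limit)
        if result:
            return result, limit             # path found at this depth limit
    return None, None

print(iterative_deepening('S', 'G'))         # (['S', 'C', 'G'], 2): the goal is found at depth limit 2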
Bi-directional Search
The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward search, and one from the goal state, called the backward search, to find the goal node.
The search stops when these two graphs intersect each other.
It significantly reduces the amount of exploration done. It is implemented using the Breadth First Search
(BFS) Algorithm.
In the below search tree, bidirectional search algorithm is applied. This algorithm divides one graph/tree
into two sub-graphs. It starts traversing from node 1 in the forward direction and starts from goal node 16
in the backward direction.
Forward search from node 1:
Step1: Insert the source node 1 into the stack.
Stack = 1
Step2: Delete node 1 from the Stack and insert all child nodes of 1 into the Stack.
Print: 1
Stack: 2, 4
Step3: Delete node 4 from the Stack and insert all child nodes of 4 into the Stack.
Print: 4
Stack: 2, 8
Step4: Delete node 8 from the Stack and insert all child nodes of 8 into the Stack.
Print: 8
Stack: 2, 6, 9
Step5: Delete node 9 from the Stack and insert all child nodes of 9 into the Stack.
Print: 9
Stack: 2, 6, 10
Backward search from node 16:
Step1: Insert the goal node 16 into the stack.
Stack = 16
Step2: Delete node 16 from the Stack and insert all child nodes of 16 into the Stack.
Print: 16
Stack: 15, 12
Step3: Delete node 12 from the Stack and insert all child nodes of 12 into the Stack.
Print: 12
Stack: 15, 10
Step4: Delete node 10 from the Stack and insert all child nodes of 10 into the Stack.
Print: 10
Stack: 15, 9
Step5: Delete node 9 from the Stack and insert all child nodes of 9 into the Stack.
Print: 9
Stack: 15, 11
Node 9 has already been visited by the forward search, so the two searches intersect at node 9 and the search terminates.
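A minimal bidirectional search sketch using breadth-first expansion from both ends, stopping when the two frontiers intersect; the graph and the neighbours helper are simplified assumptions, not the exact graph of the example above.

graph = {
    1: [2, 4], 2: [3], 4: [8], 8: [6, 9], 9: [10], 6: [7],
    16: [15, 12], 12: [10], 15: [14], 10: [], 3: [], 7: [], 14: [],
}

def neighbours(n):
    """Treat the graph as undirected so the backward search can move toward the start."""
    out = set(graph.get(n, []))
    out |= {m for m, kids in graph.items() if n in kids}
    return out

def bidirectional_search(start, goal):
    front, back = {start}, {goal}          # current frontiers
    seen_f, seen_b = {start}, {goal}       # everything visited from each side
    while front and back:
        meet = seen_f & seen_b
        if meet:
            return meet                    # the two searches intersect here
        # expand the forward frontier one level, then the backward frontier
        front = {m for n in front for m in neighbours(n)} - seen_f
        seen_f |= front
        back = {m for n in back for m in neighbours(n)} - seen_b
        seen_b |= back
    return set()

print(bidirectional_search(1, 16))    # {9}: the node where the two searches meet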
Informed Search
Informed search in AI is a type of search algorithm that uses additional information to guide the search
process, allowing for more efficient problem-solving compared to uninformed search algorithms.
It is also known as Heuristic Search.
It uses domain knowledge during the searching process, so it finds a solution more quickly and consumes less time.
This information can be in the form of heuristics, estimates of cost, or other relevant data to prioritize
which states to expand and explore.
Use of Heuristics – informed search algorithms use heuristics, or additional information, to guide the
search process and prioritize which nodes to expand.
More efficient – informed search algorithms are designed to be more efficient than uninformed search
algorithms, such as breadth-first search or depth-first search, by avoiding the exploration of unlikely
paths and focusing on more promising ones.
Goal-directed – informed search algorithms are goal-directed, meaning that they are designed to find a
solution to a specific problem.
Greedy Best-First Search
Greedy Best-First Search is an AI search algorithm that attempts to find the most promising path from a given starting point to a goal.
Greedy Best-First Search works by evaluating each candidate node with a heuristic estimate of its cost to the goal and expanding the node with the lowest estimate. This process is repeated until the goal is reached.
The algorithm uses a heuristic function to determine which path is the most promising.
If the estimated cost of the current path is lower than the estimated cost of the other candidate paths, then the current path is chosen.
Greedy Best-First Search has several advantages, including being simple and easy to implement, fast and
efficient, and having low memory requirements.
Greedy Best-First Search is used in many applications, including pathfinding, machine learning, and
optimization.
Consider the below search problem, and we will traverse it using greedy best-first search. At each iteration,
each node is expanded using evaluation function f(n)=h(n), which is given in the below table.
In this search example, we use two lists: the OPEN list and the CLOSED list. Following are the iterations for traversing the above example.
Iteration1: Place the start node "S" in the OPEN list.
Iteration2: Expand node "S" and put it in the CLOSED list.
Iteration3: Expand node "B" and put it in the CLOSED list.
Iteration4: Expand node "F" and put it in the CLOSED list.
Iteration5: Expand node "G" and put it in the CLOSED list; the goal node G has been reached, so the search stops.
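A minimal greedy best-first search sketch using a priority queue ordered by h(n), with OPEN and CLOSED lists as in the iterations above; the graph and heuristic values are assumed for illustration and are not necessarily the exact figures of the original example.

import heapq

graph = {'S': ['A', 'B'], 'A': [], 'B': ['E', 'F'], 'E': [], 'F': ['I', 'G'], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}   # assumed heuristic estimates

def greedy_best_first(start, goal):
    open_list = [(h[start], start, [start])]       # (h(n), node, path so far)
    closed = []
    while open_list:
        _, node, path = heapq.heappop(open_list)   # expand the node with lowest h(n)
        closed.append(node)
        if node == goal:
            return path, closed
        for child in graph[node]:
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None, closed

path, closed = greedy_best_first('S', 'G')
print(path)     # ['S', 'B', 'F', 'G']
print(closed)   # expansion order: ['S', 'B', 'F', 'G']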
Hill Climbing Algorithm
Hill climbing is a technique for optimizing mathematical problems. It is widely used when a good heuristic is available.
Hill climbing can be used to solve problems that have many solutions, some of which are better than
others.
It is also known as a local search algorithm (it has only local knowledge of the search space, which is why it is called a local search algorithm).
It uses a greedy approach: if the next state is better than the current state, it becomes the new current state.
It starts with a random solution, and iteratively makes small changes to the solution, each time improving
it a little. When the algorithm cannot see any improvement anymore, it terminates.
It terminates when it reaches a peak value where no neighbor has a higher value.
A node of the hill climbing algorithm has two components: state and value.
Traveling-salesman Problem is one of the widely discussed examples of the Hill climbing algorithm, in
which we need to minimize the distance traveled by the salesman.
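A minimal hill-climbing sketch is given below; the one-dimensional objective function and the neighbour step size are illustrative assumptions (a toy maximization problem, not the traveling-salesman problem itself).

import random

def objective(x):
    return -(x - 3) ** 2 + 9          # a single peak at x = 3

def neighbours(x, step=0.1):
    return [x - step, x + step]       # small changes to the current solution

def hill_climb(max_iterations=10_000):
    current = random.uniform(-10, 10)          # random initial solution
    for _ in range(max_iterations):
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current                     # no better neighbour: a peak is reached
        current = best                         # greedy move to the better state
    return current

print(hill_climb())                            # a value close to 3, the global maximum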
The state-space landscape is a graphical representation of the hill-climbing search, plotting the various states of the algorithm against the objective function (cost).
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another
state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest
value of objective function.
Flat local maximum: It is a flat region in the landscape where all the neighbor states of the current state have the same value.
Problems in the Hill Climbing Algorithm
1. Local Maximum : A local maximum is a peak state in the landscape which is better than each of its neighboring states, but another state exists which is higher than the local maximum.
Solution :- Utilize the backtracking technique. Maintain a list of visited states. If the search reaches an
undesirable state, it can backtrack to the previous configuration and explore a new path.
2. Plateau : A plateau is a flat region of the search space in which all the neighbor states of the current state have the same value, so the algorithm cannot find the best direction to move.
Solution :- The solution for a plateau is to take big steps (or very small steps) while searching. Randomly select a state far away from the current state, so that the algorithm may reach a non-plateau region.
3. Ridges : Any point on a ridge can look like a peak because movement in all possible directions is
downward. Hence the algorithm stops when it reaches this state.
Solution :- With the use of bidirectional search, or by moving in different directions, we can improve this
problem.
The A* search algorithm finds the shortest path through the search space using a heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster.
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows:
f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start state and h(n) is the estimated cost from n to the goal.
Example of A* Search
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is
given in the below table so we will calculate the f(n) of each state using the formula f(n)= g(n) + h(n), where
g(n) is the cost to reach any node from start state.
Solution: The heuristic value h(n) of each state is:
State   h(n)
S       5
A       3
B       4
C       2
D       6
G       0
Step 1: Expand the start node S. [Diagram: S with an edge of cost 1 to node A and an edge of cost 10 to node G.]
S → A : f(n) = g(n) + h(n) = 1 + 3 = 4
S → G : f(n) = g(n) + h(n) = 10 + 0 = 10
Node A has the lowest f(n), so it is expanded next. CLOSED = [S, A]
Step 2: [Diagram: S is connected to A (edge cost 1); A is connected to B (edge cost 2) and to C (edge cost 1).]
Step 3: [Diagram: the path from S through A (cost 1) to C (cost 1) is extended to the goal; the path S → A → C → G reaches the goal node G.]
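A minimal A* sketch using f(n) = g(n) + h(n); the heuristic table matches the one above, while the full set of edge costs is only partially recoverable from the figure, so treat the graph below as an assumption rather than the exact original example.

import heapq

graph = {                                   # node -> list of (neighbour, edge cost); assumed costs
    'S': [('A', 1), ('G', 10)],
    'A': [('B', 2), ('C', 1)],
    'B': [('D', 5)],
    'C': [('D', 3), ('G', 4)],
    'D': [('G', 2)],
    'G': [],
}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # expand the node with lowest f(n)
        if node == goal:
            return path, g                           # path and its total cost
        for child, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

print(a_star('S', 'G'))      # (['S', 'A', 'C', 'G'], 6)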
Advantages
Disadvantages
Game Playing
Game playing in artificial intelligence refers to the development and application of algorithms that enable
computers to engage in strategic decision-making within the context of games.
Game playing is a popular application of artificial intelligence that involves the development of computer
programs to play games, such as chess, checkers, or Go.
The goal of game playing in artificial intelligence is to develop algorithms that can learn how to play
games and make decisions that will lead to winning outcomes.
These algorithms, often termed game playing algorithms in AI, empower machines to mimic human-like
gameplay by evaluating potential moves, predicting opponent responses, and making informed choices
that lead to favorable outcomes.
There are two main approaches to game playing in AI, rule-based systems and machine learning-based
systems.
1. Rule-based systems use a set of fixed rules to play the game.
2. Machine learning-based systems use algorithms to learn from experience and make decisions based on
that experience.
Mini-Max Algorithm
The mini-max algorithm is widely used in two-player turn-based games such as Chess. In this algorithm two players play the game: one is called MAX and the other is called MIN.
The minimax algorithm performs a depth-first search algorithm for the exploration of the complete game
tree.
The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backtracks up the tree as the recursion unwinds.
Maximizer will try to get the Maximum possible score, and Minimizer will try to get the minimum
possible score.
The working of the minimax algorithm can be easily described using an example. Below we have taken an
example of game-tree which is representing the two-player game.
Step-1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the below tree diagram, let's take A as the initial state of the tree. Suppose the maximizer takes the first turn, which has a worst-case initial value of -infinity, and the minimizer takes the next turn, which has a worst-case initial value of +infinity.
Step 2: Now, the algorithm finds the utility values for the maximizer: its initial value is -∞, so each terminal value is compared with -∞ and the higher node values are determined for the maximizer's layer.
Step 3: In the next step, it is the minimizer's turn, so it will compare all node values with +∞ and find the 3rd layer node values.
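A minimal recursive minimax sketch over a game tree represented as nested lists, where the leaves are terminal utility values; the example tree is hypothetical, not the exact tree from the diagram above.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # terminal state: return its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX at the root, MIN at the next level, terminal utilities at the leaves.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))    # MIN chooses 3, 2 and 0; MAX chooses 3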
Adversarial Search
Game playing problems are handled with adversarial search; the two main techniques covered here are the Mini-Max Algorithm and Alpha-Beta Pruning. Applications of adversarial search include:
Decision-Making in Strategic Environments: This can include applications in robotics, where a robot needs to navigate an environment while avoiding obstacles or competing robots.
Security & Threat Detection: Adversarial search can be used to model attacker behavior and design systems that are more resilient against cyberattacks.
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for
the minimax algorithm.
o This algorithm solves the limitation of exponential time and space complexity in the case of the Minimax algorithm by pruning redundant branches of a game tree using its parameters Alpha (α) and Beta (β).
Example of Alpha-Beta Pruning: Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: At the first step, the Max player will make the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, it is Max's turn. The value of α is updated with the maximum of D's terminal children, and node D returns the value 3 to node B.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available successor node value, i.e. min(+∞, 3) = 3; hence at node B, α = -∞ and β = 3.
In the next step, algorithm traverse the next successor of Node B which is node E, and the values of α= -∞,
and β= 3 will also be passed.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha will
be compared with 5, so max (-∞, 5) = 5, hence at node E α= 5 and β= 3, where α>=β, so the right successor
of E will be pruned, and algorithm will not traverse it, and the value at node E will be 5.
Step 5: The algorithm now backtracks from node B to node A. At node A, the value of α changes to the maximum available value, max(-∞, 3) = 3, and β = +∞. These values are passed to A's right successor, node C. At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3. It is then compared with the right child, which is 1: max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the value 1 to node C. At C, α = 3 and β = +∞; the value of beta will change as it is compared with 1, so min(+∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, will be pruned, and the algorithm will not compute the entire sub-tree G.
Finally, node C returns the value 1 to node A, and the best value at the root A is max(3, 1) = 3. That completes the workflow of alpha-beta pruning on this two-player game tree.
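A minimal alpha-beta pruning sketch on the same nested-list representation as the minimax sketch above; branches are cut off as soon as α >= β, and the example tree is a hypothetical illustration.

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):            # terminal state: return its utility value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                     # beta cut-off: prune the remaining children
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                     # alpha cut-off: prune the remaining children
                break
        return value

game_tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(game_tree, maximizing=True))      # 3 (some leaves are pruned and never examined)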
Constraint Satisfaction Problem (CSP)
Constraint satisfaction problems in AI have the goal of discovering some problem state that satisfies a given set of constraints.
CSP has been used in a variety of applications, including scheduling, planning, resource allocation, and
automated reasoning.
There are mainly three basic components in the constraint satisfaction problem:
Variables: - The things that need to be determined are variables. Variables in a CSP are the objects that
must have values assigned to them in order to satisfy a particular set of constraints.
Domains: - The range of potential values that a variable can have is represented by domains.
Constraints: - The guidelines that control how variables relate to one another are known as constraints.
Constraints in a CSP define the ranges of possible values for variables.
Example of CSP
Variables = {v1, v2}
Domains = {5, 6} for each variable
Constraint C1 = { (v1, v2) : v1 ≠ v2 }
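A minimal backtracking sketch for the small CSP above (variables v1 and v2, domain {5, 6} for each, constraint v1 ≠ v2); the solver structure is a common textbook pattern rather than something taken verbatim from these notes.

variables = ['v1', 'v2']
domains = {'v1': [5, 6], 'v2': [5, 6]}
# C1: v1 != v2 (only checked once both variables are assigned)
constraints = [lambda a: a['v1'] != a['v2'] if 'v1' in a and 'v2' in a else True]

def consistent(assignment):
    """A partial assignment is consistent if no constraint is violated."""
    return all(c(assignment) for c in constraints)

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment                       # every variable has a value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment)
            if result:
                return result
        del assignment[var]                     # undo and try the next value
    return None

print(backtrack())    # {'v1': 5, 'v2': 6}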
There are several types of algorithms used to solve Constraint Satisfaction Problems (CSPs), such as the backtracking search sketched above.
Example 2: A CSP with variables {1, 2, 3, 4}, where each variable takes a value from {R, G, B}. [Figure residue: a partial assignment table showing assignments such as 3 = G and 4 = B, with one assignment marked as a constraint violation (error).]