CS8691 Unit 2
Time Complexity: The time complexity of the BFS algorithm is obtained from the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor (the maximum number of successors of any node).
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
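These bounds follow from the fact that BFS keeps every generated node on the frontier. The following is a minimal sketch of BFS, assuming a hypothetical successors(state) helper that returns the children of a state; the names are illustrative only:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Minimal BFS sketch. `successors(state)` is an assumed helper
    returning the states reachable in one step. Returns a path or None."""
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                # shallowest goal is reached first
            return path
        for nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None
```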
2.2.2 Depth-first Search
Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
It is called the depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
DFS uses a stack data structure for its implementation.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantage:
There is a possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
The DFS algorithm searches deep down a path and may sometimes enter an infinite loop.
Example:
In the above search tree, the flow of depth-first search is shown; it follows the order:
Root node ---> Left node ---> Right node.
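A minimal recursive sketch of this traversal order is given below; successors(state) is an assumed helper, and a visited set is kept so that the re-occurring states mentioned above do not cause an infinite loop:

```python
def depth_first_search(state, goal, successors, visited=None):
    """Recursive DFS sketch. `successors(state)` is an assumed helper.
    The `visited` set guards against revisiting states (cycles)."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    for nxt in successors(state):
        if nxt not in visited:
            sub = depth_first_search(nxt, goal, successors, visited)
            if sub is not None:
                return [state] + sub
    return None
```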
Completeness:
Uniform-cost search is complete: if a solution exists, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution, and ε the smallest step cost (every action costs at least ε). Then the number of steps needed is at most C*/ε + 1; we add 1 because we start from state 0 and take up to C*/ε steps.
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
The same logic applies to space, so the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
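Because UCS always expands the path with the lowest cost g, a priority queue is the natural frontier. A minimal sketch, assuming a hypothetical successors(state) helper that yields (next_state, step_cost) pairs:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """UCS sketch. `successors(state)` is an assumed helper yielding
    (next_state, step_cost) pairs. Expands the cheapest path first."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g
    best_cost = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None
```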
2.2.5. Iterative deepening depth-first Search
The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the limit until
a goal is found.
1st Iteration ---> A
2nd Iteration ---> A, B, C
3rd Iteration ---> A, B, D, E, C, F, G
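The iterations above correspond to depth-limited searches with limits 0, 1 and 2. A minimal sketch of the idea, assuming a hypothetical successors(state) helper:

```python
def depth_limited(state, goal, successors, limit):
    # Depth-limited DFS: stop recursing once the limit is exhausted.
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        sub = depth_limited(nxt, goal, successors, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    # Repeat depth-limited DFS with limits 0, 1, 2, ... until a goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None
```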
In this search example, we use two lists, the OPEN and CLOSED lists. Following are the iterations for traversing the above example.
At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty; if it is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, go to Step 4.
Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Otherwise, if node n' is already in OPEN or CLOSED, it should be attached to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
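The steps above can be condensed into a short sketch of the OPEN-list bookkeeping. This is only an illustration, assuming hypothetical successors(state) (yielding (next_state, step_cost) pairs) and h(state) helpers:

```python
import heapq

def a_star_search(start, goal, successors, h):
    """A* sketch. `successors(state)` and `h(state)` are assumed helpers."""
    open_list = [(h(start), 0, start, [start])]        # ordered by f = g + h
    best_g = {start: 0}
    while open_list:                                   # Step 2: fail when OPEN is empty
        f, g, state, path = heapq.heappop(open_list)   # Step 3: smallest f = g + h
        if state == goal:
            return path
        for nxt, step in successors(state):            # Step 4: expand n
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):  # keep the lowest g(n')
                best_g[nxt] = new_g
                heapq.heappush(open_list, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None
```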
Advantages:
The A* search algorithm performs better than other search algorithms.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
Solution:
c. Ridges: A ridge is a special form of the local maximum. It is an area that is higher than its surrounding areas, but it has a slope of its own and cannot be reached in a single move.
Solution: Using bidirectional search, or moving in different directions, can help overcome this problem.
Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can see each other's moves as well. Examples are Chess, Checkers, Go, etc.
Imperfect information: If in a game the agents do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, for example Battleship, Bridge, etc.
Note: In this topic, we will discuss games that are deterministic, fully observable, and zero-sum, and in which each agent acts alternately.
In final game states, the AI should select the winning move; each move is assigned a numerical value based on its board state. The ranking should be given as:
a) Win: 1
b) Draw: 0
c) Lose: -1
Winning is given the highest ranking, losing the lowest, and a draw lies in between the two. The Max part of the Minimax algorithm states that the player has to select the move with the highest value. Final game states are ranked on the basis of whether they are a win, a loss or a draw. Ranking of intermediate game states is based on whose turn it is to make the available moves. If it is X's turn, set the rank to that of the maximum available move: if a move results in a win, X can take it. If it is O's turn, set the rank to that of the minimum available move: if a move results in a loss, X can avoid it.
Search tree: A tree that is superimposed on the full game tree, and examines enough
nodes to allow a player to determine what move to make.
2.7.3 Optimal decisions in games
Optimal solution: In adversarial search, the optimal solution is a contingent strategy, which specifies MAX's (the player on our side) move in the initial state, then MAX's moves in the states resulting from every possible response by MIN (the opponent), then MAX's moves in the states resulting from every possible response by MIN to those moves, and so on.
Minimax Algorithm: The Min-Max algorithm is generally used for games consisting of two players, such as tic-tac-toe, checkers, chess, etc. All these games are logical games, so they can be described by a set of rules. It is possible to determine the next available moves from a given game state.
Step 2: Now, we first find the utility value for the Maximizer. Its initial value is -∞, so we compare each value in the terminal states with the initial value of the Maximizer and determine the higher node values.
Note: To better understand this topic, kindly study the minimax algorithm.
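For reference, a minimal recursive sketch of the minimax value computation is given below; successors, utility (returning 1, 0 or -1 as ranked above) and is_terminal are assumed helpers for whatever game is being modelled:

```python
def minimax(state, is_max_turn, successors, utility, is_terminal):
    """Minimax sketch. `successors`, `utility` and `is_terminal` are
    assumed helpers for the game at hand."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(s, not is_max_turn, successors, utility, is_terminal)
              for s in successors(state)]
    # MAX picks the highest backed-up value, MIN picks the lowest.
    return max(values) if is_max_turn else min(values)
```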
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. β = +∞ is compared with the available subsequent node value, i.e. min(∞, 3) = 3; hence at node B we now have α = -∞ and β = 3.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of β will change: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, will be pruned, and the algorithm will not compute the entire sub-tree of G.
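The pruning rule used in the steps above (cut off the remaining children as soon as α >= β) can be sketched as follows; successors(state) and evaluate(state) are assumed helpers:

```python
def alpha_beta(state, depth, alpha, beta, is_max_turn, successors, evaluate):
    """Alpha-beta sketch. `successors` and `evaluate` are assumed helpers.
    Remaining children are pruned as soon as alpha >= beta."""
    children = list(successors(state))
    if depth == 0 or not children:
        return evaluate(state)
    if is_max_turn:
        value = float("-inf")
        for child in children:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # prune the rest of this node's children
                break
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, successors, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```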
1. Compare Uninformed Search (Blind search) and informed Search (Heuristic Search)
strategies.
2. Define Best-first-search.
Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function f(n). Traditionally, the node with the lowest evaluation function value is selected for expansion.
An admissible heuristic is a heuristic h(n) that never overestimates the cost from node n to the goal node.
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function: f(n) = h(n).
7. What is A* search?
A* search is the most widely known form of best-first search. It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal: f(n) = g(n) + h(n).
Recursive best-first search is a simple recursive algorithm that attempts to mimic the
operation of standard best-first search, but using only linear space.
9. What are local search algorithms?
Local search algorithms operate using a single current state (rather than multiple paths)
and generally move only to neighbors of that state. The local search algorithms are not
systematic. The two key advantages are (i) they use very little memory, usually a constant amount, and (ii) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.
10. What are the advantages of local search?
They can often find reasonable solutions in large or infinite (e.g., continuous) state spaces.
In optimization problems, the aim is to find the best state according to an objective function. The optimization problem is then: find values of the variables that minimize or maximize the objective function while satisfying the constraints.
The hill-climbing algorithm is simply a loop that continually moves in the direction of increasing value, that is, uphill. It terminates when it reaches a "peak" where no neighbor has a higher value. The algorithm does not maintain a search tree, so the current node data structure need only record the state and its objective function value. Hill climbing does not look ahead beyond the immediate neighbors of the current state.
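A minimal sketch of this loop, assuming hypothetical neighbors(state) and value(state) helpers; note that only the current state is kept, never a search tree:

```python
def hill_climbing(start, neighbors, value):
    """Hill-climbing sketch. `neighbors(state)` and `value(state)` are
    assumed helpers."""
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):   # no uphill move: a peak (or plateau)
            return current
        current = best
```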
i. Local maxima – A local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.
ii. Ridges – Ridges result in a sequence of local maxima that is very difficult for greedy
algorithms to navigate.
iii. Plateaux – A plateau is an area of the state space landscape where the evaluation function is flat. A hill-climbing search might be unable to find its way off the plateau.
The local beam search algorithm keeps track of k states rather than just one. It begins
with k randomly generated states. At each step, all the successors of all k states are generated. If any one of them is a goal, the algorithm halts; otherwise, it selects the k best successors from the complete list and repeats.
15. Explain briefly simulated annealing search.
Simulated annealing is an algorithm that combines hill climbing with random walk in
some way that yields both efficiency and completeness.
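A minimal sketch of that combination, assuming hypothetical neighbors(state), value(state) and schedule(t) (cooling schedule) helpers; a downhill move of size delta is accepted with probability exp(delta/T), which shrinks as the temperature T cools:

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule):
    """Simulated-annealing sketch. `neighbors`, `value` and `schedule(t)`
    are assumed helpers."""
    current = start
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(list(neighbors(current)))
        delta = value(nxt) - value(current)
        # Always accept uphill moves; accept downhill moves with prob. exp(delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1
```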
(b) The constraint hypergraph for the cryptarithmetic problem, showing the Alldiff constraint as well as the column addition constraints. Each constraint is a square box connected to the variables it constrains.
Backtracking search is a depth-first search that chooses values for one variable at a time
and backtracks when a variable has no legal values left to assign.
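A minimal sketch of such a backtracking search for a CSP, assuming a hypothetical consistent(var, val, assignment) helper that checks the constraints:

```python
def backtracking_search(assignment, variables, domains, consistent):
    """Backtracking sketch for a CSP. `consistent(var, val, assignment)`
    is an assumed constraint-checking helper."""
    if len(assignment) == len(variables):
        return assignment                      # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment):
            assignment[var] = val
            result = backtracking_search(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]                # no legal values below: backtrack
    return None
```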
Competitive environments, in which the agents' goals are in conflict, give rise to adversarial search problems, often known as games.
A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that has a slope. But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse the ridge by single moves. Any point on a ridge can look like a peak because movement in all probe directions is downward. A plateau is a flat area of the search space in which a whole set of neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.