Unit2 Extra

Artificial Intelligence

Uploaded by

joshilantony

HILL CLIMBING PROBLEM

Example: A is the initial state, and H and K are the final (goal) states.

Find a suitable solution for the above hill climbing problem.


Ans: Move A -> F, since F has the least possible cost among A's successors. Then try F -> G with cost 3, but from G there is no further path.

Restart from the least-cost node other than G; that is E, because E was already
inserted in the frontier (the queue/stack/priority queue, or whatever data structure is used).

Thus E -> I, but I has a higher cost than E, so the search is stuck again.

Restart from the least-cost node other than F, E and G: we pick J, because J has a
lower cost than B, with a difference of 2 (J = 8, B = 10).

J -> K with cost 0, so K is the goal.

NOTE: One proposed variation of hill climbing is to restart from a randomly chosen
point, but restarting from the least-cost node among those not yet visited is better
than restarting randomly.
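The restart-from-least-cost variant can be sketched with a priority-queue frontier. Since the original figure is not reproduced here, the graph and heuristic costs below are hypothetical stand-ins chosen so that the trace matches the walkthrough above (A -> F, dead end at G, restart at E, then J -> K):

```python
import heapq

# Hypothetical graph and heuristic costs standing in for the missing figure.
edges = {'A': ['B', 'E', 'F', 'J'], 'B': [], 'E': ['I'], 'F': ['G'],
         'G': [], 'I': [], 'J': ['K'], 'K': []}
h = {'A': 12, 'B': 10, 'E': 7, 'F': 6, 'G': 3, 'I': 9, 'J': 8, 'K': 0}

def climb_with_restarts(start):
    """Hill climbing that restarts from the least-cost node seen so far."""
    frontier = [(h[start], start)]            # priority queue ordered by cost
    visited, order = set(), []
    while frontier:
        cost, node = heapq.heappop(frontier)  # least-cost restart point
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if cost == 0:                         # cost 0 means the goal
            return order
        for nxt in edges[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt))
    return order

climb_with_restarts('A')  # expands A, F, G, E, J, K and stops at the goal
```

Nodes already inserted in the frontier (like E) are exactly the restart candidates the note describes.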

CONSTRAINT SATISFACTION PROBLEMS (CSP) IN ARTIFICIAL INTELLIGENCE

A constraint satisfaction problem (CSP) is a type of AI problem whose goal is to find a
solution that meets a set of constraints: values for a group of variables that satisfy a set
of restrictions or rules. CSPs are frequently employed in AI for tasks such as resource
allocation, planning, scheduling, and decision-making.

There are three basic components in a constraint satisfaction problem: V, D and C.

V is a set of variables V1, V2, V3, ..., Vn.

D is a non-empty domain for every single variable: D1, D2, D3, ..., Dn.
C is a finite set of constraints C1, C2, ..., Cm.
 Each constraint Ci restricts the possible values of its variables,
e.g., V1 ≠ V2.
 Each constraint Ci is a pair <scope, relation>.
Example: <(V1, V2), V1 ≠ V2>
 Scope = the set of variables that participate in the constraint.
 Relation = the set of valid combinations of values for those variables.
 The relation may be an explicit list of permitted combinations, or an
abstract relation that supports membership testing and listing.
For example, in a crossword puzzle it is only required that words that cross each other
have the same letter in the location where they cross. It would become a general search
problem if we also required, say, that at most 15 vowels be used.
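The <V, D, C> formulation above can be turned into a tiny backtracking solver. The three-variable colouring instance below is a hypothetical example built from V1 ≠ V2 style constraints:

```python
# Hypothetical CSP instance: three variables with not-equal constraints.
variables = ['V1', 'V2', 'V3']
domains = {'V1': ['red', 'green'],
           'V2': ['red', 'green'],
           'V3': ['red', 'green', 'blue']}
# Each constraint is a pair <scope, relation>, as in the definition above.
constraints = [(('V1', 'V2'), lambda a, b: a != b),
               (('V2', 'V3'), lambda a, b: a != b)]

def consistent(assignment):
    """Check every constraint whose whole scope is already assigned."""
    return all(rel(assignment[x], assignment[y])
               for (x, y), rel in constraints
               if x in assignment and y in assignment)

def backtrack(assignment):
    """Assign variables one at a time, undoing values that violate C."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

solution = backtrack({})  # e.g. {'V1': 'red', 'V2': 'green', 'V3': 'red'}
```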

For instance, in a sudoku problem the constraints are that each row, each column, and
each 3×3 box may contain only one instance of each number from 1 to 9. Each square
must contain a single number from 1 to 9; the same number cannot appear twice in the
same row, twice in the same column, or twice in the same 3×3 box (the grid is broken
down into 9 distinct sub-grids of 9 squares each). Taken together, the 9 rows, 9 columns,
and 9 boxes give 27 such constraints.
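As a quick sketch, the sudoku constraint scopes can be enumerated to confirm that count:

```python
# Enumerate the scope (set of cells) of each sudoku constraint.
rows = [[(r, c) for c in range(9)] for r in range(9)]
cols = [[(r, c) for r in range(9)] for c in range(9)]
boxes = [[(3 * br + dr, 3 * bc + dc) for dr in range(3) for dc in range(3)]
         for br in range(3) for bc in range(3)]
scopes = rows + cols + boxes
len(scopes)  # 9 + 9 + 9 = 27 constraints, each over 9 cells
```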

Means-Ends Analysis

 The MEA technique was first introduced in 1961 by Allen Newell and
Herbert A. Simon in their problem-solving computer program, named the
General Problem Solver (GPS).
 In Artificial Intelligence we have studied many strategies that can reason
either forward or backward, but a mixture of the two directions is
appropriate for solving complex and large problems.
 Such a mixed strategy makes it possible to first solve the major parts of
a problem and then go back and solve the small problems that arise while
combining the big parts.
 Such a technique is called Means-Ends Analysis.
 The Means-Ends Analysis process is centered on finding the difference
between the current state and the goal state and applying operators to
reduce this difference.
 To solve a given problem, we find the differences between the
initial state and the goal state, and for each of these differences,
we apply an operator to generate a new state.
The operators we have for this problem are:
 MOVE - Move the diamond outside the circle.

 DELETE - Delete the black dot.

 EXPAND - Expand or increase the size of the diamond.


SOLUTION:
To solve the above problem, we first find the differences between the initial state
and the goal state, and for each difference we generate a new state by applying an
operator. The operators we have for this problem are:
 Move
 Delete
 Expand
1. Evaluating the initial state: In the first step, we evaluate the initial state and
compare it with the goal state to find the differences between the two states.
2. Applying the Delete operator: The first difference is that the dot symbol
present in the initial state is absent in the goal state, so we first apply the
Delete operator to remove this dot.

3. Applying the Move operator: After applying the Delete operator, a new state is
obtained, which we again compare with the goal state. The remaining difference
is that the diamond should be outside the circle, so we apply the Move operator.

4. Applying the Expand operator: A new state was generated in the third step,
and we compare this state with the goal state. There is still one difference,
the size of the diamond, so we apply the Expand operator, which finally
generates the goal state.
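The steps above amount to a difference-reduction loop. A minimal sketch, in which the state representation and the mapping from differences to operators are assumptions for illustration:

```python
# Hypothetical state representation: each flag is one possible difference.
initial = {'dot_present': True, 'diamond_inside': True, 'diamond_small': True}
goal    = {'dot_present': False, 'diamond_inside': False, 'diamond_small': False}

# Each operator reduces exactly one difference (an assumed mapping).
operators = {
    'dot_present':    ('Delete', lambda s: {**s, 'dot_present': False}),
    'diamond_inside': ('Move',   lambda s: {**s, 'diamond_inside': False}),
    'diamond_small':  ('Expand', lambda s: {**s, 'diamond_small': False}),
}

def means_ends(state, goal):
    """Repeatedly find a current/goal difference and apply its operator."""
    plan = []
    while state != goal:
        diff = next(k for k in goal if state[k] != goal[k])
        name, apply_op = operators[diff]
        state = apply_op(state)
        plan.append(name)
    return plan

means_ends(initial, goal)  # ['Delete', 'Move', 'Expand']
```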
GAME PLAYING:
Game playing in artificial intelligence refers to the development and application of
algorithms that enable computers to engage in strategic decision-making within
the context of games. These algorithms, often termed game playing algorithms
in AI, empower machines to mimic human-like gameplay by evaluating potential
moves, predicting opponent responses, and making informed choices that lead
to favourable outcomes. This concept extends the capabilities of AI systems
beyond mere computation and calculation, enabling them to participate in
interactive scenarios and make choices based on strategic thinking.
Two common game playing algorithms are:
 MINIMAX ALGORITHM
 ALPHA-BETA PRUNING

Mini-Max Algorithm in Artificial Intelligence:


o The mini-max algorithm is a recursive or backtracking algorithm used
in decision-making and game theory. It provides an optimal move for the
player, assuming that the opponent also plays optimally.
o The mini-max algorithm uses recursion to search through the game tree.
o The min-max algorithm is mostly used for game playing in AI, such as chess,
checkers, tic-tac-toe, Go, and various other two-player games. The algorithm
computes the minimax decision for the current state.
o In this algorithm, two players play the game; one is called MAX and the other is
called MIN.
o Each player fights so that the opponent gets the minimum benefit
while they themselves get the maximum benefit.
o Both players are opponents of each other: MAX selects the
maximized value and MIN selects the minimized value.
o The minimax algorithm performs a depth-first search to explore
the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of
the tree, then backtracks up the tree as the recursion unwinds.

Working of Min-Max Algorithm:


o The working of the minimax algorithm can be easily described using an example.
Below we have taken an example game tree representing a two-player game.
o In this example there are two players, one called Maximizer and the other called
Minimizer.
o The Maximizer tries to get the maximum possible score, and the Minimizer tries to
get the minimum possible score.
o The algorithm applies DFS, so in this game tree we have to go all the way
down to the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those
values and backtrack up the tree until the initial state is reached. The main
steps involved in solving the two-player game tree are as follows:

EXAMPLE:
Step 1: In the first step, the algorithm generates the entire game tree and applies the
utility function to obtain the utility values for the terminal states. In the tree diagram
below, let A be the initial state of the tree. Suppose the maximizer takes the first turn,
with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case
initial value +∞.

Step 2: First, we find the utility values for the Maximizer. Its initial value is -∞, so we
compare each terminal value with the Maximizer's initial value and determine
the values of the higher nodes. It finds the maximum among them all.
o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it compares all node values
with +∞ and finds the third-layer node values.

o For node B = min(4, 6) = 4

o For node C = min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all node
values and finds the value for the root node. In this game tree there are only 4
layers, so we reach the root node immediately, but in real games there will be
more than 4 layers.

o For node A max(4, -3)= 4


That was the complete workflow of the minimax algorithm for a two-player game.
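The worked example above can be reproduced in a few lines; the nested lists encode the terminal values from the tree (D = [-1, 4], E = [2, 6], F = [-3, -5], G = [0, 7]):

```python
def minimax(node, maximizing):
    """Depth-first minimax: leaves are numbers, inner nodes are lists."""
    if isinstance(node, (int, float)):
        return node                      # terminal utility value
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A (MAX) -> B, C (MIN) -> D..G (MAX) -> terminal values
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
minimax(tree, True)  # 4, matching the value found for node A
```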
Properties of Mini-Max algorithm:
o Complete - The min-max algorithm is complete. It will definitely find a solution (if
one exists) in a finite search tree.
o Optimal - The min-max algorithm is optimal if both opponents play optimally.
o Time complexity - As it performs DFS over the game tree, the time complexity
of the min-max algorithm is O(b^m), where b is the branching factor of the game
tree and m is the maximum depth of the tree.
o Space complexity - The space complexity of the mini-max algorithm is similar to
that of DFS, which is O(bm).

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it gets really slow for
complex games such as chess, Go, etc. These games have a huge branching
factor, and the player has many choices to consider. This limitation of the
minimax algorithm can be overcome by alpha-beta pruning.

ALPHA-BETA PRUNING
o Alpha-beta pruning is a modified version of the minimax algorithm: an
optimization technique for the minimax algorithm.
o As we have seen with the minimax search algorithm, the number of
game states it has to examine is exponential in the depth of the tree. We
cannot eliminate the exponent, but we can effectively cut it in half: in the
best case (with perfect move ordering), alpha-beta examines only about
O(b^(m/2)) nodes. The technique that computes the correct minimax
decision without checking every node of the game tree is
called pruning. It involves two threshold parameters, alpha and beta, for
future expansion, so it is called alpha-beta pruning. It is also called the
Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it
prunes not only tree leaves but entire sub-trees.
o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any
point along the path of the Maximizer. The initial value of alpha is -∞.

b. Beta: The best (lowest-value) choice we have found so far at any
point along the path of the Minimizer. The initial value of beta is +∞.

o Alpha-beta pruning returns the same move as the standard minimax
algorithm, but it removes all the nodes that do not really affect the final
decision and only make the algorithm slow. By pruning these nodes, it
makes the algorithm fast.

Condition for Alpha-beta pruning:


The main condition required for alpha-beta pruning is:

1. α>=β
Key points about alpha-beta pruning:
o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values are passed to the upper nodes,
not the values of alpha and beta.
o Only the alpha and beta values are passed down to the child nodes.

Working of Alpha-Beta Pruning:


Let's take an example of two-player search tree to understand the working of
Alpha-beta pruning
Step 1: In the first step, the Max player starts with the first move from node A, where
α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where
again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D,
and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is
Min's turn: β = +∞ is compared with the available successor node value, min(∞, 3) = 3,
so at node B now α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E,
and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The
current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node
E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned: the
algorithm does not traverse it, and the value at node E is 5.
Step 5: The algorithm next backtracks the tree again, from node B to node A.
At node A, the value of alpha changes to the maximum available value, 3, as
max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right
successor of A, which is node C.

At node C, α=3 and β= +∞, and the same values will be passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, 0:
max(3, 0) = 3, and then with the right child, 1: max(3, 1) = 3. So α remains 3,
but the node value of F becomes max(0, 1) = 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the
value of beta changes: min(∞, 1) = 1. Now at C, α = 3 and β = 1, and the condition
α >= β is again satisfied, so the next child of C, which is G, is pruned, and the
algorithm does not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A, and the best value for A is max(3, 1) = 3.
The final game tree shows which nodes were computed and which were never
computed. Hence the optimal value for the maximizer is 3 in this example.
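The eight steps can be sketched directly. The leaf values below match the walkthrough where it reveals them (D = [2, 3], E's first leaf 5, F = [0, 1]); the leaves under the pruned branches (E's second leaf and all of G) are hypothetical, since the walkthrough never examines them:

```python
def alphabeta(node, alpha, beta, maximizing, seen):
    """Minimax with alpha-beta cutoffs; `seen` records examined leaves."""
    if isinstance(node, (int, float)):
        seen.append(node)
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, seen))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: prune the remaining children
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, seen))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cutoff
            break
    return value

# A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
seen = []
alphabeta(tree, float('-inf'), float('inf'), True, seen)  # 3
seen  # [2, 3, 5, 0, 1] -- only 5 of the 8 leaves are examined
```

Exactly as in the walkthrough, E's second leaf is cut off at Step 4 and the whole of G at Step 7.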
