
UNIT- IV

GAME THEORY
INTRODUCTION
Game playing is an important domain of artificial intelligence. Games do not require much knowledge: the only knowledge we need to provide is the rules, the legal moves, and the conditions for winning or losing the game. Both players try to win the game.

4.1 Describe a game as a search problem


Describing a game as a search problem refers to modeling the game's progression as a search through a state space. In such a model, the game consists of a sequence of decisions or moves, and each move transitions the game from one state to another.
EX: state space search (complete topic).
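As a small illustration (not part of the original notes), tic-tac-toe can be cast this way: a state is the board plus whose turn it is, the legal moves are the empty squares, and applying a move produces a successor state. The sketch below assumes this simple board representation.

def legal_moves(board):
    # The legal moves are the indices of the empty squares.
    return [i for i, cell in enumerate(board) if cell is None]

def result(board, move, player):
    # Applying a move transitions the game to a new state.
    new_board = list(board)
    new_board[move] = player
    return tuple(new_board)

start = (None,) * 9                                     # initial state: empty 3x3 board
successors = [result(start, m, 'X') for m in legal_moves(start)]
print(len(successors))                                  # 9 possible first moves for X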

4.2 Explain the components of a search problem


Ans: state space search (complete topic).

4.3 Describe the minimax search procedure


One of the most common search techniques in game playing is the Minimax algorithm, which is a depth-
first, depth-limited search procedure. Minimax is commonly used for games like chess and tic-tac-toe.
Key Functions in Minimax:
MOVEGEN(position, player): Generates all possible moves from the current position (the start position).
STATIC(position, player): Returns a value representing the quality of a game state from the perspective of the two players (how good the position is).
In a two-player game, one player is referred to as PLAYER1 and the other as PLAYER2. The Minimax
algorithm operates by backing up values from child nodes to their parent nodes. PLAYER1 tries to
maximize the value of its moves, while PLAYER2 tries to minimize the value of its moves. The algorithm
recursively performs this procedure at each level of the game tree.

Working of the Minimax Algorithm:

o Maximizing player (Max): Max is the player seeking to maximize their utility or score. In most games, Max represents the AI or the player whose turn it is to make a move. Max aims to make moves that lead to the highest possible outcome.
o Minimizing player (Min): Min is the player seeking to minimize Max's utility or score. Min represents the opponent, whether another player or an AI-controlled opponent. Min aims to make moves that lead to the lowest possible outcome for Max.
In pseudocode, the procedure looks like this:
def minimax(node, depth, maximizingPlayer):
    # is_terminal(), evaluate() and children() are game-specific helpers.
    if depth == 0 or is_terminal(node):
        return evaluate(node)                          # evaluate the leaf node
    if maximizingPlayer:
        maxEval = float('-inf')                        # start with a very low value
        for child in children(node):
            score = minimax(child, depth - 1, False)   # Min's turn
            maxEval = max(maxEval, score)              # take the best (max) score
        return maxEval
    else:
        minEval = float('inf')                         # start with a very high value
        for child in children(node):
            score = minimax(child, depth - 1, True)    # Max's turn
            minEval = min(minEval, score)              # take the worst (min) score
        return minEval
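To see the back-up of values concretely, the helpers can be defined over a small hand-built tree (the tree, its leaf values, and the helper definitions below are illustrative, not from the notes):

tree = {'A': ['B', 'C'],        # A is a Max node
        'B': ['D', 'E'],        # B and C are Min nodes
        'C': ['F', 'G']}
leaf_values = {'D': 3, 'E': 5, 'F': 1, 'G': 9}

def children(node):
    return tree.get(node, [])

def is_terminal(node):
    return node not in tree

def evaluate(node):
    return leaf_values[node]

print(minimax('A', 2, True))    # prints 3: max(min(3, 5), min(1, 9))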

4.4 Explain additional refinements

4.5 Define pruning of the search tree


Pruning in the context of a search tree refers to the process of removing certain branches (or nodes)
from the tree that do not need to be explored further because they cannot influence the final decision
or result. This is typically done to improve efficiency and reduce computation time, especially in
algorithms like Minimax.
Pruning eliminates parts of the search tree that are guaranteed not to affect the outcome of the search,
allowing the algorithm to focus only on the relevant branches.

4.6 Describe alpha-beta pruning

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
o As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. Because it involves two threshold parameters, alpha and beta, it is called alpha-beta pruning. It is also called the alpha-beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but entire subtrees.
o The two parameters can be defined as:
o Alpha: The best (highest-value) choice found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
o Beta: The best (lowest-value) choice found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

Working of Alpha-Beta Pruning:

Let's take a two-player search tree as an example to understand the working of alpha-beta pruning.

Step 1: The Max player makes the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated because it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes because it is Min's turn. β = +∞ is compared with the value of the available successor node, i.e. min(∞, 3) = 3, so at node B we now have α = -∞ and β = 3.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5.

Step 5: The algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is updated to the maximum available value, 3, since max(-∞, 3) = 3, while β = +∞. These two values are passed to the right successor of A, which is node C. At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. α remains 3, and the node value of F becomes 1.

Step 7: Node F returns its node value 1 to node C. At C, α = 3 and β = +∞, so the value of beta is updated by comparing it with 1: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned and the algorithm does not compute the entire subtree rooted at G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the maximizer in this example is 3.
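The procedure above can be sketched in code in the same style as the minimax function in 4.3. This is a minimal sketch, assuming the same game-specific helpers (is_terminal, children, evaluate); it illustrates the technique rather than reproducing an implementation from the notes.

def alphabeta(node, depth, alpha, beta, maximizingPlayer):
    # Leaf node or depth limit reached: return the static evaluation.
    if depth == 0 or is_terminal(node):
        return evaluate(node)
    if maximizingPlayer:
        best = float('-inf')
        for child in children(node):
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)      # best choice for Max found so far
            if alpha >= beta:             # Min will never allow this branch
                break                     # prune the remaining children
        return best
    else:
        best = float('inf')
        for child in children(node):
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)        # best choice for Min found so far
            if alpha >= beta:             # Max will never allow this branch
                break                     # prune the remaining children
        return best

Called on the small hand-built tree from the minimax example above as alphabeta('A', 2, float('-inf'), float('inf'), True), it returns the same value, 3, while pruning node G.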

4.7 State the purpose of a chance node

A decision tree is a flowchart that starts with one main idea and then branches out based on the consequences of your decisions. It's called a "decision tree" because the model typically looks like a tree with branches.

Decision tree symbols

A decision tree includes the following symbols:

 Alternative branches: Alternative branches are two lines that branch out from one decision on your decision tree. These branches show two outcomes or decisions that stem from the initial decision on your tree.
 Decision nodes: Decision nodes are squares
and represent a decision being made on your
tree. Every decision tree starts with a decision
node.
 Chance nodes: Chance nodes are circles that
show multiple possible outcomes.
 End nodes: End nodes are triangles that show
a final outcome.
A decision tree analysis combines these
symbols with notes explaining your decisions
and outcomes, and any relevant values to
explain your profits or losses. You can
manually draw your decision tree or use a
flowchart tool to map out your tree digitally.
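A chance node's contribution can be quantified by weighting each outcome's value by its probability. The snippet below is a minimal illustration; the outcomes and numbers are hypothetical, not from the notes.

# Each possible outcome at a chance node: (probability, value of that branch)
outcomes = [(0.6, 200), (0.4, -50)]       # e.g. 60% chance of +200, 40% chance of -50

chance_node_value = sum(p * v for p, v in outcomes)
print(chance_node_value)                  # 100.0 -> the node's probability-weighted value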
4.8 State the importance of expected value
Expected Value: Random variables are functions that assign real numbers to outcomes in a sample space. They are very useful in the analysis of real-life random experiments that become complex. The expectation is an important part of random-variable analysis: it gives the average output of the random variable.
EX: Self-driving car navigation:
A self-driving car could use expected value to choose the safest path by considering the probabilities of different potential hazards and the consequences of each possible action.
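As a rough sketch of that idea (the routes, probabilities, and costs below are hypothetical, invented only for illustration), the expected cost of each path is the probability-weighted average of its possible costs:

# Each candidate route: (probability of a hazard, cost if a hazard occurs, cost if clear)
routes = {'route_A': (0.10, 100, 10),
          'route_B': (0.02, 100, 15)}

for name, (p, hazard_cost, clear_cost) in routes.items():
    expected_cost = p * hazard_cost + (1 - p) * clear_cost
    print(name, expected_cost)
# route_B has the lower expected cost (about 16.7 versus 19.0), so it would be chosen.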

4.9 Illustrate a game that includes an element of chance


A classic example of a game with an element of chance is Monopoly, where players roll a pair of dice to determine how many spaces they move. This introduces randomness into their movement and significantly affects their strategy and their potential to acquire properties, even though the overall gameplay relies on strategic decision-making.
Other examples of games with an element of chance include:
Roulette:
Players bet on where a spinning wheel will land, with the outcome entirely dependent on chance.
Poker:
While skill is crucial in reading opponents and managing your hand, the initial hand dealt is random, adding an
element of chance.
Bingo:
Players mark numbers on their cards as they are randomly called out, with the first to complete a pattern winning.
Key points to remember about games with chance:
Randomizing mechanics:
These games often use tools like dice, cards, spinning wheels, or random number generators to introduce chance.
Strategy within randomness:
While chance plays a role, players can still use strategy to maximize their chances of success based on the random
outcomes.
Balancing chance and skill:
Game designers often strive to balance the element of chance with strategic decision-making to create a
compelling gameplay experience.
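Tying this back to expected value (an illustrative calculation, not from the notes): the average result of rolling a pair of dice, as in Monopoly, can be computed by averaging over all 36 equally likely outcomes.

from itertools import product

# All 36 equally likely outcomes of rolling two fair six-sided dice.
rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
print(sum(rolls) / len(rolls))          # 7.0: the expected number of spaces moved per turn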
