Game Theory Slides
What is it all about?
What is game theory?
3 short answers:
Static games with complete information:
What is a game?
A static game with complete information is a set G = {I, S, U}, where

I is a nonempty, finite set of n players
• i ∈ I denotes a player

S = S1 × S2 × … × Sn = Si × S-i
• Si is a nonempty set called player i's strategy set
• S-i = S1 × … × Si-1 × Si+1 × … × Sn is the product of the strategy
sets of i's opponents
• s = (s1, …, sn) = (si, s-i) ∈ S is a strategy profile of all players
• si ∈ Si denotes a strategy of player i
• s-i = (s1, …, si-1, si+1, …, sn) ∈ S-i is the strategy profile of i's
opponents

U = (u1, u2, …, un) is the vector of payoff functions
• ui: S → R denotes player i's payoff function (R is the set of real
numbers)
Non-cooperative games: Players cannot sign
binding contracts. Three basic assumptions:
One-shot, simultaneous-move game
Each player chooses a strategy without knowledge of the other
players’ choices. Players receive their payoffs and the game ends.
Complete information
The players’ strategy sets and payoff functions are "common
knowledge" (CK) among the players.
• A fact T is CK, if every player knows T, and every player
knows that everyone knows T, and every player knows that
everyone knows that everyone knows T, and ... (ad infinitum).
Rationality
Players are rational, i.e., each player maximizes his payoff, given
his belief about what his opponents will play.
Rationality of players is CK.
Bi-matrix games
Payoffs
Example: Dating Game
                      Pat
               Opera    Boxing
Chris  Opera    2, 1     0, 0
       Boxing   0, 0     1, 2
Chris and Pat want to spend the evening together. Chris prefers going
to the opera and Pat prefers the boxing event. Being separated is
equally unpleasant, wherever they go.
Set of players: I = {Chris, Pat}
Strategy sets: S1 = S2 = {Opera, Boxing}
Payoff functions:
u1(O, O) = 2, u1(O, B) = 0, u1(B, O) = 0, u1(B, B) = 1
u2(O, O) = 1, u2(O, B) = 0, u2(B, O) = 0, u2(B, B) = 2
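The formal objects above can be encoded directly. A minimal Python sketch (the variable names `S1`, `u1`, etc. mirror the slides' notation):

```python
from itertools import product

# Strategy sets of Chris (player 1) and Pat (player 2)
S1 = S2 = ["Opera", "Boxing"]

# Payoff functions u_i: S -> R, given as dictionaries keyed by (s1, s2)
u1 = {("Opera", "Opera"): 2, ("Opera", "Boxing"): 0,
      ("Boxing", "Opera"): 0, ("Boxing", "Boxing"): 1}
u2 = {("Opera", "Opera"): 1, ("Opera", "Boxing"): 0,
      ("Boxing", "Opera"): 0, ("Boxing", "Boxing"): 2}

# Every strategy profile s = (s1, s2) with its payoff vector (u1(s), u2(s))
for s in product(S1, S2):
    print(s, (u1[s], u2[s]))
```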
Example: Matching Pennies
                         Player 2
                  Head        Tail
Player 1  Head   -1, 1        1, -1
          Tail    1, -1      -1, 1
Two players simultaneously show one side of a coin. If the faces of the
coins are different, player 1 gets player 2's coin. If the faces match,
player 2 gets player 1's coin.
Set of players: I = {Player 1, Player 2}
Strategy sets: S1 = S2 = { Head, Tail }
Payoff functions:
u1(H, H) = -1, u1(H, T) = 1, u1(T, H) = 1, u1(T, T) = -1
u2(H, H) = 1, u2(H, T) = -1, u2(T, H) = -1, u2(T, T) = 1
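Unlike the Dating Game, the players' interests here are strictly opposed: the payoffs in every cell sum to zero (a zero-sum game). This is easy to verify mechanically (a sketch):

```python
from itertools import product

S = ["Head", "Tail"]

# Payoff functions of Matching Pennies, keyed by (s1, s2)
u1 = {("Head", "Head"): -1, ("Head", "Tail"): 1,
      ("Tail", "Head"): 1, ("Tail", "Tail"): -1}
u2 = {("Head", "Head"): 1, ("Head", "Tail"): -1,
      ("Tail", "Head"): -1, ("Tail", "Tail"): 1}

# Zero-sum: u1(s) + u2(s) = 0 for every strategy profile s
assert all(u1[s] + u2[s] == 0 for s in product(S, S))
```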
Definition: strictly dominated strategy
Let si', si" ∈ Si. Strategy si' is strictly dominated by strategy si" if
ui(si', s-i) < ui(si", s-i) for all s-i ∈ S-i.
Definition: weakly dominated strategy
Let si', si" ∈ Si. Strategy si' is weakly dominated by strategy si" if
ui(si', s-i) ≤ ui(si", s-i) for all s-i ∈ S-i, and
ui(si', s-i) < ui(si", s-i) for some s-i ∈ S-i.
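Both notions can be checked mechanically. A minimal sketch, here keyed to player 1's payoffs in the Prisoners' Dilemma that appears later in the deck:

```python
def strictly_dominated(u, si_old, si_new, S_opp):
    """si_old is strictly dominated by si_new: si_new does strictly
    better against every opponent profile s_-i."""
    return all(u[(si_new, t)] > u[(si_old, t)] for t in S_opp)

def weakly_dominated(u, si_old, si_new, S_opp):
    """si_old is weakly dominated by si_new: never worse, and
    strictly better against at least one opponent profile."""
    return (all(u[(si_new, t)] >= u[(si_old, t)] for t in S_opp)
            and any(u[(si_new, t)] > u[(si_old, t)] for t in S_opp))

# Player 1's Prisoners' Dilemma payoffs: Cooperate is strictly
# (and hence also weakly) dominated by Defect.
u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
assert strictly_dominated(u1, "C", "D", ["C", "D"])
assert weakly_dominated(u1, "C", "D", ["C", "D"])
```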
Definition: dominant strategies
Definition: best responses
A strategy si ∈ Si is a best response to the opponent profile s-i if
ui(si, s-i) ≥ ui(si', s-i) for all si' ∈ Si. We denote the set of player
i's best responses to s-i by Bi(s-i). If player i holds the belief s-i,
then (since he is rational) he will play some element of this set, i.e.,
some best response to s-i.
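Computing Bi(s-i) is a simple maximization. A sketch, using Chris's payoffs from the Dating Game:

```python
def best_responses(u, Si, s_opp):
    """B_i(s_-i): all strategies of player i that maximize
    u_i(., s_-i) against the fixed opponent profile s_opp."""
    best = max(u[(si, s_opp)] for si in Si)
    return {si for si in Si if u[(si, s_opp)] == best}

# Dating Game payoffs for Chris (player 1), keyed by (s1, s2)
u1 = {("Opera", "Opera"): 2, ("Opera", "Boxing"): 0,
      ("Boxing", "Opera"): 0, ("Boxing", "Boxing"): 1}

print(best_responses(u1, ["Opera", "Boxing"], "Opera"))   # {'Opera'}
print(best_responses(u1, ["Opera", "Boxing"], "Boxing"))  # {'Boxing'}
```

Chris's best response is simply to match Pat's choice, which is what makes this a coordination game.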
Questions about best responses
IESDS – Iterated elimination of strictly dominated
strategies
A rational player will never play a strictly dominated strategy. Hence,
a strictly dominated strategy can be eliminated. Here is an algorithm
for the IESDS:
1. Check if the game has a strictly dominated strategy. If no, you are
done. If yes, eliminate it from the game.
2. The game has now been reduced; it is "smaller" and less complex.
Go to step 1.
You end up with a game without strictly dominated strategies. The
remaining strategies are said to survive IESDS. If for each player
only a single strategy survives, then the game is said to be
dominance-solvable.
Since rationality is CK, a player will never choose a strategy which
is deleted during the IESDS (an iteratively strictly dominated
strategy). Hence, if a game is dominance-solvable, each player will
play his surviving strategy.
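The two-step loop above can be sketched for two-player games. Note the simplification: only domination by pure strategies is checked, while in general a strategy may also be strictly dominated by a mixed strategy, which can eliminate more. The payoffs used are those of the example on the next slide:

```python
def eliminate(S_own, S_opp, payoff):
    """Remove one strategy that is strictly dominated by another of the
    player's pure strategies; return True if something was removed."""
    for s_old in list(S_own):
        if any(all(payoff(s_new, t) > payoff(s_old, t) for t in S_opp)
               for s_new in S_own if s_new != s_old):
            S_own.remove(s_old)
            return True
    return False

def iesds(S1, S2, u1, u2):
    """Iterated elimination of strictly dominated pure strategies."""
    S1, S2 = list(S1), list(S2)
    p1 = lambda own, opp: u1[(own, opp)]  # player 1's strategy is the row
    p2 = lambda own, opp: u2[(opp, own)]  # player 2's strategy is the column
    while eliminate(S1, S2, p1) or eliminate(S2, S1, p2):
        pass
    return S1, S2

# The example game from the next slide: only (Up, Middle) survives,
# so the game is dominance-solvable.
u1 = {("Up", "Left"): 1, ("Up", "Middle"): 1, ("Up", "Right"): 0,
      ("Down", "Left"): 0, ("Down", "Middle"): 0, ("Down", "Right"): 2}
u2 = {("Up", "Left"): 0, ("Up", "Middle"): 2, ("Up", "Right"): 1,
      ("Down", "Left"): 3, ("Down", "Middle"): 1, ("Down", "Right"): 0}
print(iesds(["Up", "Down"], ["Left", "Middle", "Right"], u1, u2))
# (['Up'], ['Middle'])
```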
IESDS – an example
                         Player 2
                  Left    Middle    Right
Player 1  Up      1, 0     1, 2      0, 1
          Down    0, 3     0, 1      2, 0
Example: Tourists & Natives
Solving "Tourists & Natives" by IESDS
                     Bar 2
               €2       €4       €5
        €2      ,        ,        ,
Bar 1   €4      ,        ,        ,
        €5      ,        ,        ,
Rationalizability and IENBR (iterated elimination
of never-best responses)
A rational player always plays a best response to his belief. Hence, a
never-best response can be eliminated. Here is an algorithm for the
IENBR:
1. Check if the game has a never-best response. If no, you are done. If
yes, eliminate it from the game.
2. The game has now been reduced; it is "smaller" and less complex.
Go to step 1.
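The IENBR loop can be sketched similarly to IESDS. One caveat: in the sketch below, beliefs range only over the opponent's pure strategies, whereas in general a belief may be mixed, so this simplification can eliminate strategies that are in fact best responses to some mixed belief:

```python
def prune(S_own, S_opp, payoff):
    """Drop strategies that are not a best response to any remaining
    pure opponent strategy; return True if something was dropped."""
    keep = set()
    for t in S_opp:
        best = max(payoff(s, t) for s in S_own)
        keep |= {s for s in S_own if payoff(s, t) == best}
    dropped = [s for s in S_own if s not in keep]
    for s in dropped:
        S_own.remove(s)
    return bool(dropped)

def ienbr(S1, S2, u1, u2):
    """Iterated elimination of never-best responses (pure beliefs only)."""
    S1, S2 = list(S1), list(S2)
    p1 = lambda own, opp: u1[(own, opp)]
    p2 = lambda own, opp: u2[(opp, own)]
    while prune(S1, S2, p1) or prune(S2, S1, p2):
        pass
    return S1, S2

# On the earlier IESDS example game, IENBR gives the same answer:
u1 = {("Up", "Left"): 1, ("Up", "Middle"): 1, ("Up", "Right"): 0,
      ("Down", "Left"): 0, ("Down", "Middle"): 0, ("Down", "Right"): 2}
u2 = {("Up", "Left"): 0, ("Up", "Middle"): 2, ("Up", "Right"): 1,
      ("Down", "Left"): 3, ("Down", "Middle"): 1, ("Down", "Right"): 0}
print(ienbr(["Up", "Down"], ["Left", "Middle", "Right"], u1, u2))
# (['Up'], ['Middle'])
```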
More on rationalizability
Definition: Nash equilibrium
We know that a player always plays a best response to his belief, and
that this belief is consistent with CK of rationality. Nevertheless, after
the game the belief can turn out to have been wrong.
If it happens to be the case that all beliefs are correct, then each player
indeed plays a best response to the actual choices of his opponents. In
this case the strategy profile is called a Nash equilibrium. Formally:
s* ∈ S is a Nash equilibrium if si* ∈ Bi(s*-i) for all i ∈ I.
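For bi-matrix games, the pure Nash equilibria can be found by checking every cell for mutual best responses; a minimal sketch using the Dating Game:

```python
from itertools import product

def pure_nash_equilibria(S1, S2, u1, u2):
    """All strategy profiles in which each player's strategy is a
    best response to the other player's actual strategy."""
    eqs = []
    for s1, s2 in product(S1, S2):
        br1 = all(u1[(s1, s2)] >= u1[(t, s2)] for t in S1)  # no profitable row deviation
        br2 = all(u2[(s1, s2)] >= u2[(s1, t)] for t in S2)  # no profitable column deviation
        if br1 and br2:
            eqs.append((s1, s2))
    return eqs

# Dating Game: both coordination profiles are Nash equilibria
S1 = S2 = ["Opera", "Boxing"]
u1 = {("Opera", "Opera"): 2, ("Opera", "Boxing"): 0,
      ("Boxing", "Opera"): 0, ("Boxing", "Boxing"): 1}
u2 = {("Opera", "Opera"): 1, ("Opera", "Boxing"): 0,
      ("Boxing", "Opera"): 0, ("Boxing", "Boxing"): 2}
print(pure_nash_equilibria(S1, S2, u1, u2))
# [('Opera', 'Opera'), ('Boxing', 'Boxing')]
```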
Why Nash equilibrium (NE) is an important concept
Pareto optimality and more on Nash equilibria
Repetition: Pareto-optimality
Strategy profile s is Pareto-better than strategy profile s' if
ui(s) ≥ ui(s') for all i ∈ I, and ui(s) > ui(s') for some i ∈ I.
In this case s is called a Pareto-improvement over s'.
A strategy profile s is Pareto-optimal, if there is no Pareto-
improvement over s, i.e. if no other strategy profile is Pareto-better
than s.
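The comparison is just a pair of quantifiers over payoff vectors; a sketch, illustrated with payoff vectors from the Dating Game:

```python
def pareto_better(ps, pt):
    """Profile s (payoffs ps) is Pareto-better than t (payoffs pt):
    nobody is worse off and somebody is strictly better off."""
    return (all(a >= b for a, b in zip(ps, pt))
            and any(a > b for a, b in zip(ps, pt)))

# Dating Game: (Opera, Opera) with payoffs (2, 1) is a
# Pareto-improvement over (Opera, Boxing) with payoffs (0, 0) ...
assert pareto_better((2, 1), (0, 0))
# ... but the payoff vectors (2, 1) and (1, 2) are Pareto-incomparable.
assert not pareto_better((2, 1), (1, 2))
assert not pareto_better((1, 2), (2, 1))
```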
How to find NE in bi-matrix games
A Note on Payoffs
Definition: Mixed strategies
More on mixed strategies
Nash's Theorem
Mixed NE as steady states of a learning process
Example: The Mafia’s prey
Dynamic games with complete information:
Trees, nodes, and actions
A dynamic game with complete information is a game
proceeding over several stages, so there is a time-structure. Its
exact definition is rather complicated, but here is a description:
A dynamic game is described by a game tree. A game tree
specifies which player moves at which stage, what information he
has, which actions are available to him, and what payoffs are
associated with different sequences of actions.
A game tree starts at a node called the root. Some player starts
with one of several available actions. Each action leads to a
different node of the tree. At each node either one of the players
moves (a decision node) or "Nature" makes a chance move
according to some probability distribution, or the game ends (in a
terminal node). Each terminal node is associated with a vector
of payoffs for the players.
Information sets and perfect information
For each player, the set of decision nodes where he has to move is
partitioned into information sets. An information set contains all
nodes the moving player cannot distinguish between. (He knows
at which information set he has arrived, but not at which node
within this information set. Necessarily, the set of possible actions
has to be the same for all nodes in an information set.) An
information set is depicted by a dashed line connecting the
respective nodes in the game tree.
If an information set contains only a single node, it is called a
singleton. A dynamic game where all information sets are
singletons and there are no chance moves is said to have perfect
information, otherwise there is imperfect information.
Plays and strategies in dynamic games
The strategic form of a dynamic game
Given a pure strategy for each player (i.e. a pure strategy profile),
the play of the game is uniquely determined (up to chance
moves), and so is the vector of payoffs.
Hence we can translate a dynamic game into a static game just
by constructing each player's pure strategies and payoff function.
The static game constructed in this way is called the strategic
form of the dynamic game.
Having done this, we can apply all the definitions, including
domination, mixed strategies, etc., from static games to dynamic
games as well. Most importantly:
An example

[Game tree omitted: players 1 and 2 move, and Nature (N) makes a chance
move with probabilities 0.8 and 0.2.]

Think about it:
• What are the pure strategies of the players?
• What is the strategic form of this game?
Example: Entry Deterrence
42
Analyzing Entry Deterrence
Analyzing Entry Deterrence (ctd.)
Subgame perfect equilibrium (SPE)
You and another guy are sitting in two separate rooms, each in front of a
slot machine. The two slot machines are connected, and each has a coin
slot and a Stop button. Whenever one of you throws €1 into his slot
machine, the other one receives €2. However, if someone presses the Stop
button, the game is over. You move alternately for at most 10 rounds.
You start.
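The game can be solved by backward induction; the encoding below is my own (the slide leaves the analysis to the reader):

```python
def backward_induction(rounds=10):
    """Solve the slot-machine game by backward induction. In round r
    (player 1 moves in odd rounds) the mover either inserts EUR 1
    (he pays 1, the other player receives 2, play continues) or stops."""
    cont = [0, 0]                    # continuation payoffs after the last round
    actions = {}
    for r in range(rounds, 0, -1):
        mover = (r - 1) % 2          # 0 = player 1, 1 = player 2
        ins = cont[:]
        ins[mover] -= 1              # inserting costs the mover EUR 1 ...
        ins[1 - mover] += 2          # ... and pays the other player EUR 2
        if ins[mover] > 0:           # compare with stopping (0 from here on)
            actions[r], cont = "insert", ins
        else:
            actions[r], cont = "stop", [0, 0]
    return actions, cont

actions, payoffs = backward_induction()
print(actions[1], payoffs)  # stop [0, 0]
```

The induction unravels the game: although inserting in every round would net each player €5 (pay €5, receive €10), the subgame perfect equilibrium has the mover stop in every round, so play ends immediately with payoffs (0, 0).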
Example: Finitely repeated PD
                            Player 2
                   Cooperate    Defect
Player 1  Cooperate   3, 3       0, 5
          Defect      5, 0       1, 1
You (player 1) and another guy (player 2) are playing this variant of
the Prisoners' Dilemma game. Obviously, in the one-shot game,
D(efect) is strictly dominant. However, you play this game repeatedly
for 5 rounds. In the end you receive the sum of the payoffs of each
round.
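The backward-induction logic (Defect is dominant in the last round, hence unravels to every round) can be contrasted with cooperative play in a small simulation; the strategy interface and the two strategies below are illustrative assumptions, not from the slides:

```python
# Stage-game payoffs of the Prisoners' Dilemma above, keyed by (own, other)
u = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat1, strat2, rounds=5):
    """Total payoffs when two strategies (each a function of the
    opponent's action history) play the repeated PD."""
    h1, h2, tot1, tot2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = strat1(h2), strat2(h1)   # each reacts to the other's history
        tot1 += u[(a1, a2)]
        tot2 += u[(a2, a1)]
        h1.append(a1)
        h2.append(a2)
    return tot1, tot2

always_defect = lambda hist: "D"                      # defect no matter what
tit_for_tat = lambda hist: hist[-1] if hist else "C"  # copy the last move

print(play(always_defect, always_defect))  # (5, 5)
print(play(tit_for_tat, tit_for_tat))      # (15, 15)
print(play(always_defect, tit_for_tat))    # (9, 4)
```

Mutual cooperation pays far more than mutual defection, yet with a known final round, backward induction selects defection in every round.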
Example: Indefinitely repeated PD (rPD)