
INDUSTRIAL ORGANIZATION

ECON 5700

Game Theory
Recap and Today’s Plan

Last Class
• Introduction to games
• Simultaneous move games: drawing a game board
• Dominant strategies
• Dominated strategies
• Nash Equilibrium

Today

• Sequential games
• Finitely repeated simultaneous move games
• Changing the outcome of a game
• Commitment and cheap talk.

Recall: Solving Simultaneous Move Games

Tools for Solving Games:

1. Are there Dominant Strategies?
• A dominant strategy is an action that is superior regardless of your opponent's action.
• Play dominant strategies, and assume your opponents will too.

2. Are there Dominated Strategies?
• Dominated strategies are those that are never a best response to your competitor's actions.
• Don't play dominated strategies, and assume your opponents won't either.

3. Is there a Nash Equilibrium?
• A Nash Equilibrium is a set of strategies such that no player would want to change their action.
Recall: Two important classes of games
Simultaneous Move Games
• Players make moves simultaneously
• One player can't respond to the other players' actions
• Examples: Rock-paper-scissors; Handshake or hug; Prisoner's dilemma; Quantity (Cournot) competition
• Diagram: Game Board
• Strategies: Play dominant strategies; Avoid dominated strategies; Look for the Nash equilibrium

Sequential Move Games
• Players make moves publicly and sequentially
• A player can respond to the actions played before his decision point
• Examples: Tic-tac-toe, Chess; Preventing the entry of a competitor; Upstream-downstream market structure
• Diagram: Game Tree
• Strategies: Look to the end and reason backward; Leverage credible threats; Look for a sequential advantage
Drawing a Game Tree
Sequential Move Game Tree
Given the setup of the game, we can draw the game as a tree where actions are branches, nodes indicate a player's turn, and payoffs are written at the end of a path.

[Game tree] Player 1 moves first. One branch leads to a node where Player 2 chooses between Action 1 and Action 2; the other branch ends the game.
• Player 2 chooses Action 1: Player 1: $15, Player 2: $10
• Player 2 chooses Action 2: Player 1: $10, Player 2: $25
• Player 1's other branch: Player 1: $30, Player 2: $0

Drawing a game tree makes it easy to solve a sequential game via backward induction:
• Start at the end and identify that player's best response. Here, Player 2 will choose Action 2 on his turn ($25 beats $10).
• Move up the tree and repeat, "pruning" the game tree.

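The pruning logic above can be written as a short routine. Below is a minimal sketch (illustrative only, not part of the course materials; the dictionary-based tree encoding and the name `solve` are my own assumptions) that applies backward induction to the tree on this slide.

```python
# Minimal backward-induction sketch (illustrative only, not from the slides).
# A node is either "terminal" (holds payoffs) or "decision" (holds the mover
# and the branches that player can choose among).

def solve(node):
    """Return the payoff profile reached when every player best-responds."""
    if node["type"] == "terminal":
        return node["payoffs"]                    # (Player 1, Player 2)
    # Solve every branch first (start at the end of the tree)...
    outcomes = [solve(child) for child in node["branches"]]
    # ...then keep the branch that is best for the player moving here.
    return max(outcomes, key=lambda payoffs: payoffs[node["player"]])

# The tree from this slide: Player 1 moves first; the upper branch leads to
# Player 2's choice between Action 1 and Action 2, the lower branch ends the game.
tree = {"type": "decision", "player": 0, "branches": [
    {"type": "decision", "player": 1, "branches": [
        {"type": "terminal", "payoffs": (15, 10)},   # Player 2 plays Action 1
        {"type": "terminal", "payoffs": (10, 25)},   # Player 2 plays Action 2
    ]},
    {"type": "terminal", "payoffs": (30, 0)},        # Player 1's other branch
]}

print(solve(tree))  # (30, 0): Player 2 would pick Action 2, so Player 1 avoids that branch
```

The same function works for any finite game tree: it solves every branch first, then keeps the branch that is best for whoever moves at that node.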
Sequential Games: Discouraging entry

Situation
• A potential entrant threatens to enter an industry that is currently served by one
incumbent firm.
• If the potential entrant enters, the incumbent can either…
• Fight: Start a price war to drive the entrant out of the industry (at great cost to
both firms)
• Accommodate: Compete, but with the objective of maximizing profits, not
driving the entrant out
• Example: US Air and JetBlue on the Boston to Philly route in early 2013
• US Air is the incumbent, JetBlue is the potential entrant.

How will JetBlue decide to enter or not?


How can US Air affect their choice?
Pruning the Game Tree
[Game tree] JetBlue moves first: "Enter" or "Don't Enter". If JetBlue enters, US Air chooses "Fight" or "Accommodate".
• Enter, then Fight: JetBlue: -$10M, US Air: $5M
• Enter, then Accommodate: JetBlue: $10M, US Air: $10M
• Don't Enter: JetBlue: $0, US Air: $30M
Starting at the end:


• If JetBlue enters, what will US Air do?
• They have a higher payoff from “Accommodate”, so that’s what they will do
Pruning the Game Tree

[Game tree, pruned] US Air will Accommodate, so the "Fight" branch is removed.
• Enter (US Air Accommodates): JetBlue: $10M, US Air: $10M
• Don't Enter: JetBlue: $0, US Air: $30M

Now, moving back in the tree:


• Will JetBlue enter, knowing US Air will Accommodate?
• They have a higher payoff from “Enter”, so that’s what they will do.
• The equilibrium outcome is (“Enter”, “Accommodate”).
• Note: US Air would like to threaten to fight. However, such a threat is not credible here: once JetBlue has entered, fighting earns US Air only $5M versus $10M from accommodating.
Advanced Backwards Induction
The Centipede Game
Two players, Red and Blue, take turns choosing whether to "end" the game. If you choose to "pass", your payoff is larger the next time it's your turn, but smaller if your opponent ends the game on their turn. The game ends at Round 6, with Red choosing.

[Game tree] Six decision nodes in a row; at each, the mover chooses "Pass" (continue) or "End" (stop the game). Blue moves at nodes 1, 3, and 5; Red moves at nodes 2, 4, and 6. Payoffs (Blue, Red) if the game is ended at each node:
• Node 1: (1, 1)   • Node 2: (0, 3)   • Node 3: (2½, 2½)   • Node 4: (1½, 4½)   • Node 5: (3½, 4)   • Node 6: (3, 6)
• If both players always "Pass": (5, 5)
This game will be played once.


If you are Blue, how would you think about your first move?
Advanced Backwards Induction
[Same game tree as above] Reasoning backward from the last node:
• Node 6: Red will choose "End" at the end (6 from ending beats 5 from passing).
• Node 5: Blue would get 3 from "pass", so will "end" (3½ beats 3).
• Node 4: Red would get 4 from "pass", so will "end" (4½ beats 4).
• … and so on back to Node 1: Blue would get 0 from "pass", so will "end".

The optimal strategy for the first player is to “end” immediately.
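For reference, here is a minimal sketch (assumed variable names, not course code) of the same unraveling, iterating backward over the six nodes with the payoffs from the diagram.

```python
# Illustrative sketch of the unraveling (assumed names, not course code).
# end_payoffs[i] is (Blue, Red) if the game is ended at node i+1; Blue moves
# at nodes 1, 3, 5 and Red at nodes 2, 4, 6; passing at node 6 yields (5, 5).

end_payoffs = [(1, 1), (0, 3), (2.5, 2.5), (1.5, 4.5), (3.5, 4), (3, 6)]
continuation = (5, 5)                       # outcome if the mover passes

for node in range(6, 0, -1):                # work backward from the last node
    mover = 0 if node % 2 == 1 else 1       # 0 = Blue, 1 = Red
    end_here = end_payoffs[node - 1]
    if end_here[mover] > continuation[mover]:
        continuation = end_here             # ending beats passing, so the game stops here
    print(f"node {node}: predicted outcome if play reaches it -> {continuation}")

# The loop finishes with continuation == (1, 1): Blue is predicted to "end" at node 1.
```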


Defect immediately?
Experiment of the Centipede Game
Students play the centipede game 10 times with partners that are randomly
selected from a group of opponents (either students or chess masters). The bar
chart records the fraction of subjects who defected immediately.

[Bar chart: fraction of subjects defecting immediately, shown separately for students playing against chess masters and students playing against other students]

Palacios-Huerta and Volij (The American Economic Review, 2009)


Backwards Induction Reasoning
“Backwards Induction”
To predict behavior in a sequential game, draw out the game tree and, starting at the leaves at the ends of the branches, prune your way back.

This is a method for being strategic and “thinking a few moves ahead”.

This works (in principle) for any finite, sequential move game
• Chess, Checkers, Tic-tac-toe
Of course, some games are so complex that even a supercomputer can't solve them, and the process relies on a strong assumption of rationality.

Example: Backwards Induction and Chess


At the end of the game, when there are only a few pieces left on the board, looking forward
to the end and reasoning backward is possible. Earlier in the game, it is entirely intractable.
• What do grandmasters do? Use experience and intuition?
• They think about five moves ahead in the game tree, assign values to the positions they would end up at, and backward-solve from there.
Tic-Tac-Toe

[Diagram: a complete strategy map for Tic-Tac-Toe]

This is a full solution to the game.
• You are X.
• Start in the upper left corner.
• Go to the cell where your opponent played O to find out your next move (in red).
• And so on.
Checkers has also been solved, by Jonathan Schaeffer's program Chinook:

Schaeffer was able to get his result by searching only a subset of board positions rather than all of them, since some of them can be considered equivalent. He carried out a mere 10^14 calculations to complete the proof in under two decades. "This pushes the envelope as far as artificial intelligence is concerned," he says.

David Levy, president of the International Computer Games Association in London, UK, says he isn't planning to play against Chinook. "There would be a certain inevitability about the result."
Repeated Simultaneous Move Games
Recall: the elements of a "game"
• Players: who gets to make a decision?
• Payoffs: how much is each outcome worth to me?
• Strategies: a plan of action for each decision point
• Actions, moves, information

In a sequential move game, the 2nd player's "information" contained the move of the first player, and so their strategy could take it into account. In a simultaneous move game, a player does not know their competitor's move.

Repeated Simultaneous Move Games


In a repeated game, a player does not know their opponent’s move for this round, but they
do know the history of their opponent’s actions. This can affect their optimal strategy.
Recall the Prisoners’ Dilemma
Payoffs (Prisoner 1, Prisoner 2):

                         Prisoner 2: Cooperate    Prisoner 2: Defect
Prisoner 1: Cooperate           (2, 2)                  (0, 3)
Prisoner 1: Defect              (3, 0)                  (1, 1)

What is the equilibrium of the one-shot game?


(Defect, Defect)

We are both better off under (Cooperate, Cooperate), but the incentive to cheat means that is not a stable outcome.
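A small sketch of how one might check this mechanically (the dictionary encoding and the name `is_nash` are illustrative assumptions, not course code): a profile is a Nash Equilibrium if neither prisoner can gain by changing only their own action.

```python
# Illustrative best-response check for the one-shot game (assumed names, not
# course code). Payoffs are (Prisoner 1, Prisoner 2).

payoffs = {
    ("Cooperate", "Cooperate"): (2, 2),
    ("Cooperate", "Defect"):    (0, 3),
    ("Defect",    "Cooperate"): (3, 0),
    ("Defect",    "Defect"):    (1, 1),
}
actions = ["Cooperate", "Defect"]

def is_nash(a1, a2):
    """True if neither prisoner can gain by changing only their own action."""
    best_for_1 = all(payoffs[(a1, a2)][0] >= payoffs[(d, a2)][0] for d in actions)
    best_for_2 = all(payoffs[(a1, a2)][1] >= payoffs[(a1, d)][1] for d in actions)
    return best_for_1 and best_for_2

print([profile for profile in payoffs if is_nash(*profile)])
# [('Defect', 'Defect')] -- (Cooperate, Cooperate) fails because each side gains by switching to Defect.
```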

How can we make the payoff to cooperating higher?


Repeated Prisoner’s Dilemma
The Scenario: a Repeated Prisoners’ Dilemma
Suppose you play 10 rounds of the Prisoners’ Dilemma with an opponent.
We know the single-period game has an equilibrium of (“defect”, “defect”).
Can we get some cooperation if we repeat the game?

Intuition: Defection in the present can be punished in the future.


What is the equilibrium of this “supergame”?

Look to the end and reason backward:


• Round 10: no future punishment is possible. Result: (“defect”, “defect”)
• Round 9: Can you threaten punishment in Round 10?
• Not credibly. No way to incentivize cooperation.
• … and so on back to Round 1.

Reasoning backward, the only equilibrium is to defect every time.
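The unraveling argument can be spelled out with the stage-game payoffs from the board above. The sketch below is illustrative only (the variable names are assumptions): once play from round t+1 onward is (defect, defect) no matter what happens today, defecting in round t is strictly better in every round.

```python
# Illustrative arithmetic for the unraveling (assumed names, not course code).
# Stage payoffs from the board above: cooperating with a cooperator pays 2,
# defecting on a cooperator pays 3, mutual defection pays 1 per round.

ROUNDS = 10
for t in range(ROUNDS, 0, -1):
    future = 1 * (ROUNDS - t)          # play after round t is (defect, defect) regardless
    cooperate_now = 2 + future         # best case: my opponent cooperates in round t
    defect_now = 3 + future            # defect against that same cooperator
    assert defect_now > cooperate_now  # so defecting is strictly better in every round t

print("Cooperation unravels: defect in every round.")
```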


Is that what we see in the real world?
Experimental Evidence
Experimental Design
Subjects played 10 prisoner's dilemmas under 3 settings:
• Strangers: Different, random person for
each of the 10 dilemmas played.
• Partners: Random person, but all 10
prisoner’s dilemmas with the same
partner
• Computer50s: Same as “Partners”, but
subjects were told there was a 50%
chance they would play against a
cooperative computer instead

Note that defecting immediately isn't "right" here – if your partner is cooperative, you should be cooperative too – just make sure you defect before he does!
Andreoni and Miller (The Economic Journal, 1993)
So when should one defect?
Experimental Evidence

When should I defect?


• Without repeated interaction, almost
immediately
• With repeated interaction, Round 5ish

What's happening
Notice that as the subjects played more "super-games", partnered subjects learned to trust one another rather than defect immediately. We don't get convergence to the backward-induction equilibrium.

We'll discuss how to achieve more cooperation when we move to infinitely-repeated games.
Changing the Outcome
Game Theory tells us about equilibrium outcome(s)
… I don’t like what the game theory tells me. What can I do?

Strategies for changing the equilibrium outcome:

1. Change the game: take actions to modify the structure or available actions.
• Create a "pre-game": a decision node before the game
• Credibly commit to a future action

2. Signaling, mis-information, deception: try to change payoffs in my opponent's game board.
• Downplay the impact of a price war or increased competition
• Imply that you have a large psychological gain from "fighting" any competitors

3. Build a reputation: use a strategy based on the history of play (Lecture 5).
• "Punish" competitors who take actions you don't like.
• This can decrease the payoff from cheating/defecting. However, it should still be rational.
Illustrative Example
Strategic thinking, bounded rationality, and The Princess Bride:
• Our hero, Westley, in the guise of the Dread Pirate Roberts, challenges Vizzini to a
“Battle of Wits”.
• Two cups of wine are on the table, one with poison (“iocane powder”).
• The challenge: select the glass that does not lead to immediate death.

Buttercup: “To think -- all that time it was your cup that was poisoned.”
Roberts: “They were both poisoned. I spent the last few years building up an
immunity to iocane powder.”
Game Theory in The Princess Bride
“Battle of Wits”: Westley chooses where to hide the poison, Vizzini picks a cup.

Here are some payoffs (Vizzini, Westley):

                          Westley: Poison W's Cup    Westley: Poison V's Cup
Vizzini: Drink W's Cup           (-10, 10)                  (10, -10)
Vizzini: Drink V's Cup           (10, -10)                  (-10, 10)

Vizzini has an obvious strategy if he knows which cup has poison. If he doesn’t know, he tries
to reason backwards and figure out which cup is more likely to have poison…

Game Theory in The Princess Bride
The twist: poison in both cups, and Westley is immune
The payoff board is actually…

                          Westley: Poison Both Cups
Vizzini: Drink W's Cup           (-10, 5)
Vizzini: Drink V's Cup           (-10, 5)

Vizzini never had a winning strategy; this was a losing game from the start.

Westley was able to change the game without Vizzini realizing it; the two players faced different payoff matrices.

So, Westley won the “Battle of Wits” before the “battle” actually began.

Price-matching guarantees
Changing Payoffs
Consider an example of holiday price matching:

Payoffs (Profits) for each, written as (Target, Best Buy):

                       Best Buy: Price "High"    Best Buy: Price "Low"
Target: Price "High"          (5, 5)                    (1, 10)
Target: Price "Low"           (10, 1)                   (2, 2)

The problem: we would like to sustain (“High”, “High”). However, the simultaneous
equilibrium is (“Low”, “Low”), because if my opponent plays “High”, I have a strong incentive
to play “Low”.

Suppose either could offer to match prices…


Payoffs with "Price Matching" (both firms announce a matching guarantee). If either firm prices "Low", the other automatically matches, so the mixed outcomes collapse to both selling at the low price:

                       Best Buy: Price "High"    Best Buy: Price "Low"
Target: Price "High"          (5, 5)                    (2, 2)
Target: Price "Low"           (2, 2)                    (2, 2)

If both firms announce price matching, (“High”, “High”) is the equilibrium.

There is no longer an incentive to “cheat”
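A small illustrative sketch (assumed names, not course code) of how the matching guarantee changes the game: the off-diagonal cells collapse to the low-price outcome, and the gain from undercutting ("cheating") at ("High", "High") drops to zero.

```python
# Illustrative sketch of the payoff change (assumed names, not course code).
# Payoffs are (Target, Best Buy).

original = {
    ("High", "High"): (5, 5),  ("High", "Low"): (1, 10),
    ("Low",  "High"): (10, 1), ("Low",  "Low"): (2, 2),
}
# With matching guarantees, any "High"/"Low" split collapses to both selling low.
matched = {profile: ((2, 2) if "Low" in profile else payoff)
           for profile, payoff in original.items()}

def gain_from_cheating(payoffs, profile):
    """Largest gain either firm can get by changing only its own price."""
    a1, a2 = profile
    gain_1 = max(payoffs[(d, a2)][0] - payoffs[profile][0] for d in ("High", "Low"))
    gain_2 = max(payoffs[(a1, d)][1] - payoffs[profile][1] for d in ("High", "Low"))
    return max(gain_1, gain_2)

print(gain_from_cheating(original, ("High", "High")))  # 5: undercutting pays, (High, High) breaks down
print(gain_from_cheating(matched,  ("High", "High")))  # 0: no gain from undercutting, (High, High) is stable
```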


US Air-JetBlue: Offer “Beat Any Price” Coupons
[Game tree] US Air now moves first, choosing whether to offer "Beat Any Price" coupons before JetBlue decides on entry.

Don't Offer: the entry game from earlier (equilibrium is "Enter", "Accommodate")
• Enter, then Fight: JetBlue: -$10M, US Air: $5M
• Enter, then Accommodate: JetBlue: $10M, US Air: $10M
• Don't Enter: JetBlue: $0, US Air: $30M

Offer coupons: US Air is committed to low fares if JetBlue enters
• Enter: JetBlue: -$10M, US Air: $0M
• Don't Enter: JetBlue: $0, US Air: $30M

Equilibrium is now ("Offer", "Don't Enter").
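As a rough sketch (the subgame encoding below is my own reading of the tree above, not course code), the backward induction behind this result can be written out as follows: solve JetBlue's entry choice under each of US Air's pre-game options, then let US Air pick the option with the higher payoff.

```python
# Illustrative backward induction for the coupon pre-game (assumed names, not
# course code). Payoffs are (JetBlue, US Air) in $M, read off the tree above.

# Without coupons, US Air would Accommodate after entry ($10M beats $5M from
# fighting), so the entry subgame reduces to:
no_coupons = {"Enter": (10, 10), "Don't Enter": (0, 30)}
# With coupons, US Air is committed to low fares, so entry is unprofitable:
coupons = {"Enter": (-10, 0), "Don't Enter": (0, 30)}

def jetblue_choice(subgame):
    """JetBlue picks the action with the higher JetBlue payoff."""
    return max(subgame, key=lambda action: subgame[action][0])

for label, subgame in (("Don't Offer", no_coupons), ("Offer", coupons)):
    action = jetblue_choice(subgame)
    print(f"{label}: JetBlue will {action}, payoffs {subgame[action]}")

# US Air compares $10M ("Don't Offer" -> Enter) with $30M ("Offer" -> Don't Enter)
# and offers the coupons: the equilibrium is ("Offer", "Don't Enter").
```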
Signaling
The Scenario
Trader Joe's and Whole Foods, both supermarket chains with overlapping
customers, are trying to decide where to locate in Columbus. There are two areas
to consider: Upper Arlington, and the Near East Side.

Payoffs (Profits) for each. Reminder: (row payoff, column payoff), i.e. (Whole Foods, Trader Joe's):

                          Trader Joe's: UA    Trader Joe's: East Side
Whole Foods: UA                (5, 5)               (10, 10)
Whole Foods: East Side        (10, 10)              (5, 5)

What will happen in a simultaneous move game?


If we cannot coordinate, we both may be unhappy.

What if TJ’s “signals” it is planning to locate in the UA area?


The game becomes sequential; Whole Foods will (happily) choose the East Side.
Signaling
Suppose UA was actually a better location:

                          Trader Joe's: UA    Trader Joe's: East Side
Whole Foods: UA                (5, 5)               (20, 10)
Whole Foods: East Side        (10, 20)              (5, 5)

If TJ's signals UA… Whole Foods might want to signal/threaten to locate in UA too. The signal lacks commitment.

Signaling only works if it is credible and rational.


What if TJ’s…
• Bought property currently used as a parking lot in UA?
Summary & Direction

Today
• Sequential games
• Finitely repeated games
• Changing the outcome of a game
• Commitment and cheap talk.

Next Lecture

• Competitive equilibrium
• Bertrand: homogeneous goods, competition in prices.
• Cournot: homogeneous goods, competition in quantities.
• Collusion and cheating.

