Structuring Decisions and Making Choices I
ENGM 603
Agenda
› One-time problems under certainty
› One-time problems under risk
› Multi-stage problems
› Incorporating risk
An Example: Magnolia Inns
Hartsfield International Airport in Atlanta, Georgia, is one of the busiest
airports in the world. It has expanded many times to handle increasing air
traffic, but commercial development around the airport now prevents it from
building more runways to handle future traffic.
Plans are being made to build another airport outside the city limits. Two
possible locations for the new airport have been identified, but a final
decision will not be made for a year.
The Magnolia Inns hotel chain intends to build a new facility near the new
airport once its site is determined. Land values around both possible sites
are increasing as investors speculate that property values will rise greatly
in the vicinity of the new airport.
Magnolia Inns has four decision alternatives:
1. Buy the parcel of land near location A.
2. Buy the parcel of land near location B.
3. Buy both parcels.
4. Buy nothing.
The Possible States of Nature
There are two possible states of nature: the new airport is built at
location A, or the new airport is built at location B.
Constructing a Payoff Matrix

                 Outcome 1    Outcome 2    ...    Outcome m
Decision 1
Decision 2
    .
    .
    .
Decision n
Payoff Matrix

Land Purchased     Airport is Built at Location
at Location(s)         A                 B
A                  $31 - $18 = $13   $??
B                  $??               $??
A&B                $5                ($1)
None               $0                $0
Decision Rules
› If the future state of nature (airport location) were
known, it would be easy to make a decision.
› Failing this, a variety of non-probabilistic
(deterministic) decision rules can be applied to this
problem:
› Maximax
› Maximin
› Minimax regret
› No decision rule is always best and each has its own
weaknesses.
Payoff Matrix
MaxiMax

Land Purchased     Airport is Built at Location
at Location(s)         A        B        Max
A                  $13      ($12)     $13  ←maximum
B                  ($8)     $11       $11
A&B                $5       ($1)      $5
None               $0       $0        $0
The Maximax Decision Rule
› Identify the maximum payoff for each alternative.
› Choose the alternative with the largest maximum payoff.
Weakness
– Consider the following payoff matrix:

             State of Nature
Decision     1        2          MAX
A            30       -10000     30  ←maximum
B            29       29         29

Maximax chooses A even though it risks a catastrophic loss of 10,000.
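The maximax rule is easy to compute directly from a payoff table. A minimal Python sketch (the dictionary layout and function name are my own, not from the slides), using the two-alternative example above:

```python
# Maximax: find each alternative's best-case payoff, then choose the
# alternative whose best case is largest.
payoffs = {"A": [30, -10000], "B": [29, 29]}  # rows = decisions, cols = states

def maximax(payoffs):
    # max over alternatives of (max over states of nature)
    return max(payoffs, key=lambda d: max(payoffs[d]))

best = maximax(payoffs)
print(best, max(payoffs[best]))  # A 30 -- the -10000 downside is ignored
```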
Payoff Matrix
MaxiMin

Land Purchased     Airport is Built at Location
at Location(s)         A        B        Min
A                  $13      ($12)    ($12)
B                  ($8)     $11      ($8)
A&B                $5       ($1)     ($1)
None               $0       $0       $0  ←maximum
The Maximin Decision Rule
› Identify the minimum payoff for each alternative.
› Choose the alternative with the largest minimum payoff.
Weakness
– Consider the following payoff matrix:

             State of Nature
Decision     1       2       MIN
A            1000    28      28
B            29      29      29  ←maximum

Maximin chooses B even though A offers a far larger payoff in state 1 for
only one unit less in state 2.
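The maximin rule can be sketched the same way (again a minimal Python illustration with my own table layout):

```python
# Maximin: find each alternative's worst-case payoff, then choose the
# alternative whose worst case is largest (the most conservative rule).
payoffs = {"A": [1000, 28], "B": [29, 29]}  # rows = decisions, cols = states

def maximin(payoffs):
    # max over alternatives of (min over states of nature)
    return max(payoffs, key=lambda d: min(payoffs[d]))

best = maximin(payoffs)
print(best, min(payoffs[best]))  # B 29 -- A's big upside is ignored
```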
The Payoff Matrix and Regret
If state of nature A occurs, the highest payoff available is $13 (buy A).
If state of nature B occurs, the highest payoff available is $11 (buy B).
The regret of any entry is the difference between the best payoff for that
state of nature and the payoff actually received.
Regret Matrix
Abolute values calculated by
Absolue values calculated by
subtracting $11 from the
subtracting $13 from the
corresponding entries of the
corresponding entries of the
Payoff Matrix
Payoff Matrix
Airport is Built at
Land Purchased Location
at Location(s) A B
A $0 $23
B $21 $0
A&B $8 $12
None $13 $11
20
Regret Matrix
The minimax regret policy

Land Purchased     Airport is Built at Location
at Location(s)         A        B        Max
A                  $0       $23      $23
B                  $21      $0       $21
A&B                $8       $12      $12  ←minimum
None               $13      $11      $13

Minimax regret selects A&B, the alternative with the smallest maximum regret.
Anomalies with the Minimax Regret Rule
Consider the following payoff matrix:

             State of Nature
Decision     1       2
A            9       2
B            4       6

The regrets are A: (0, 4) and B: (5, 0), so minimax regret prefers A
(maximum regret 4 versus 5).
Adding an Alternative
Now add a third alternative, C:

             State of Nature
Decision     1       2
A            9       2
B            4       6
C            3       9

The regrets become A: (0, 7), B: (5, 3), C: (6, 0), with maximum regrets of
7, 5, and 6. Now we prefer B to A, even though neither A nor B changed;
adding the irrelevant alternative C reversed the ranking.
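Minimax regret, and the rank reversal shown above, can be checked with a short script. A sketch (function name and table layout are my own):

```python
# Minimax regret: regret = (best payoff in that state) - (payoff received);
# choose the alternative whose largest regret is smallest.
def minimax_regret(payoffs):
    n_states = len(next(iter(payoffs.values())))
    best = [max(p[j] for p in payoffs.values()) for j in range(n_states)]
    worst_regret = {d: max(best[j] - p[j] for j in range(n_states))
                    for d, p in payoffs.items()}
    return min(worst_regret, key=worst_regret.get)

print(minimax_regret({"A": [9, 2], "B": [4, 6]}))               # A
print(minimax_regret({"A": [9, 2], "B": [4, 6], "C": [3, 9]}))  # B
```

Adding C, which is never itself chosen, flips the preference from A to B.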
Agenda
› One-time problems under certainty
› One-time problems under risk
› Multi-stage problems
› Incorporating risk
Probabilistic Methods
› At times, states of nature can be assigned probabilities that represent
their likelihood of occurrence.
› For decision problems that occur more than once, we can often
estimate these probabilities from historical data.
› Other decision problems (such as the Magnolia Inns problem)
represent one-time decisions where historical data for estimating
probabilities don’t exist.
› In these cases, subjective probabilities are often assigned based on
interviews with one or more domain experts.
› Interviewing techniques exist for eliciting probability estimates
that are reasonably accurate and free of the unconscious biases that
may affect an expert's opinions.
› We will focus on techniques that can be used once appropriate
probability estimates have been obtained.
25
Expected Monetary Value
Selects the alternative with the largest expected monetary value (EMV):

EMV_i = Σ_j r_ij · p_j

where
r_ij = the payoff for alternative i under the jth state of nature
p_j = the probability of the jth state of nature
Land Purchased     Airport is Built at Location
at Location(s)         A        B        EMV
A                  $13      ($12)    ($2.0)
B                  ($8)     $11      $3.4  ←maximum
A&B                $5       ($1)     $1.4
None               $0       $0       $0.0
Probability        0.4      0.6

EMV for "Buy A": (13)(0.4) + (-12)(0.6) = -2
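The EMV column above can be reproduced in a few lines. A sketch (variable names are my own; payoffs in $ millions):

```python
# EMV_i = sum_j r_ij * p_j for the Magnolia Inns payoff matrix
probs = [0.4, 0.6]  # P(airport at A), P(airport at B)
payoffs = {"A": [13, -12], "B": [-8, 11], "A&B": [5, -1], "None": [0, 0]}

emv = {d: round(sum(r * p for r, p in zip(rs, probs)), 1)
       for d, rs in payoffs.items()}
best = max(emv, key=emv.get)
print(emv)   # {'A': -2.0, 'B': 3.4, 'A&B': 1.4, 'None': 0.0}
print(best)  # B
```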
EMV Caution
› The EMV rule should be used with caution in one-time
decision problems.
Weakness (ignores risk)
– Consider the following payoff matrix:

             State of Nature
Decision     1         2         EMV
A            15,000    -5,000    5,000  ←maximum
B            5,000     4,000     4,500
Probability  0.5       0.5

EMV prefers A, yet A exposes the decision maker to a $5,000 loss that B
avoids entirely.
Looking Forward to Future Lectures: The Expected Value of Perfect Information
› Suppose we could hire a consultant who could predict the future with
100% accuracy.
› With such perfect information, Magnolia Inns' average payoff would be:
› EV with PI = 0.4*$13 + 0.6*$11 = $11.8 (in millions)
› Without perfect information, the maximum EMV was $3.4 million.
› The expected value of perfect information is therefore:
› EV of PI = $11.8 - $3.4 = $8.4 (in millions)
› In general:
› EV of PI = EV with PI - maximum EMV
› It will always be the case that:
› EV of PI ≥ 0
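The EVPI calculation follows the same pattern. A sketch continuing the Magnolia Inns numbers (in $ millions; variable names are my own):

```python
# EVPI = (expected value with perfect information) - (maximum EMV without it)
probs = [0.4, 0.6]
payoffs = {"A": [13, -12], "B": [-8, 11], "A&B": [5, -1], "None": [0, 0]}

# With perfect information we pick the best alternative for each state.
ev_with_pi = sum(p * max(r[j] for r in payoffs.values())
                 for j, p in enumerate(probs))   # 0.4*13 + 0.6*11
max_emv = max(sum(r * p for r, p in zip(rs, probs))
              for rs in payoffs.values())
evpi = ev_with_pi - max_emv
print(round(ev_with_pi, 1), round(max_emv, 1), round(evpi, 1))  # 11.8 3.4 8.4
```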
A Decision Tree for Magnolia Inns
(Land values and payoffs in $ millions; payoff = land value - purchase cost)

Buy A (cost 18), chance node 1:
    Airport at A: land value 31 → payoff 13
    Airport at B: land value 6  → payoff -12
Buy B (cost 12), chance node 2:
    Airport at A: land value 4  → payoff -8
    Airport at B: land value 23 → payoff 11
Buy A&B (cost 30), chance node 3:
    Airport at A: land value 35 → payoff 5
    Airport at B: land value 29 → payoff -1
Buy nothing (cost 0), chance node 4:
    Airport at A: payoff 0
    Airport at B: payoff 0
Rolling Back A Decision Tree
(P(A) = 0.4, P(B) = 0.6; rolled-back value at the root: EMV = 3.4, Buy B)

Buy A (cost 18), EMV = -2:
    Airport at A (0.4): land value 31 → payoff 13
    Airport at B (0.6): land value 6  → payoff -12
Buy B (cost 12), EMV = 3.4 ←maximum:
    Airport at A (0.4): land value 4  → payoff -8
    Airport at B (0.6): land value 23 → payoff 11
Buy A&B (cost 30), EMV = 1.4:
    Airport at A (0.4): land value 35 → payoff 5
    Airport at B (0.6): land value 29 → payoff -1
Buy nothing (cost 0), EMV = 0:
    Airport at A (0.4): payoff 0
    Airport at B (0.6): payoff 0
Alternate Decision Tree
Identical to the tree above except for the "Buy nothing" branch: since the
payoff is $0 under either airport location, the chance node after "Buy
nothing" can be omitted and the branch leads directly to a payoff of 0.
The rolled-back value at the root is unchanged: EMV = 3.4 (Buy B).
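The rollback procedure generalizes to any tree: chance nodes average their branches by probability, and decision nodes take the best branch. A minimal recursive sketch (the node representation is my own, not from the slides):

```python
# Roll back a decision tree: decision nodes take the max over branches,
# chance nodes take the probability-weighted average of branches.
def rollback(node):
    if node["type"] == "payoff":
        return node["value"]
    if node["type"] == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    return max(rollback(child) for _, child in node["branches"])  # decision

def payoff(v):
    return {"type": "payoff", "value": v}

def chance(*branches):  # branches are (probability, node) pairs
    return {"type": "chance", "branches": list(branches)}

# Magnolia Inns (net payoffs in $ millions, P(A) = 0.4, P(B) = 0.6)
tree = {"type": "decision", "branches": [
    ("Buy A",       chance((0.4, payoff(13)), (0.6, payoff(-12)))),
    ("Buy B",       chance((0.4, payoff(-8)), (0.6, payoff(11)))),
    ("Buy A&B",     chance((0.4, payoff(5)),  (0.6, payoff(-1)))),
    ("Buy nothing", payoff(0)),
]}
print(round(rollback(tree), 1))  # 3.4
```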
Agenda
› One-time problems under certainty
› One-time problems under risk
› Multi-stage problems
› Incorporating risk
Multi-Stage Decision Problems
› Many problems involve a series of decisions.
› Example:
› Should you go out to dinner tonight?
› If so,
› Who will you go with?
› Where will you go?
› How much will you spend?
› How will you get there?
› Multi-stage decisions can be analyzed using decision trees.
Multi-Stage Decision Example:
COM-TECH
› COM-TECH is considering whether to apply for an $85,000 research
grant for using wireless comms technology to enhance safety in the
coal industry.
› COM-TECH would spend approximately $5,000 preparing the grant
proposal and estimates a 50-50 chance of receiving the grant.
› If awarded the grant, COM-TECH would need to decide which of
three communications technologies to use.
› COM-TECH would need to acquire some new equipment, depending
on which technology is used; the equipment and R&D costs appear in
the decision tree below.
Submit Proposal (cost $5,000), EMV = $13,500:
    Receive Grant (p = 0.5), grant $85,000 → choose a technology ($32,000):
        Microwave (equipment $4,000): R&D cost $60,000 → payoff $16,000
        Cellular (equipment $5,000), EMV = $29,000:
            High R&D Costs (p = 0.2): $70,000 → payoff $5,000
            Low R&D Costs (p = 0.8): $40,000 → payoff $35,000
        Infrared (equipment $4,000), EMV = $32,000 ←best:
            High R&D Costs (p = 0.1): $80,000 → payoff -$4,000
            Low R&D Costs (p = 0.9): $40,000 → payoff $36,000
    Don't Receive Grant (p = 0.5): payoff -$5,000

Risk profile of the optimal strategy (submit the proposal; if funded,
choose infrared):
Probability    Payoff
0.45           $36,000
0.05           -$4,000
0.50           -$5,000
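Rolling the COM-TECH tree back by hand follows the same two rules: chance nodes take probability-weighted averages and decision nodes take the maximum. A sketch of the arithmetic (values in dollars, taken from the tree above; variable names are my own):

```python
# Chance nodes: probability-weighted averages; decision nodes: take the max.
microwave = 16000                                   # certain payoff
cellular = round(0.2 * 5000 + 0.8 * 35000)          # 29000
infrared = round(0.1 * -4000 + 0.9 * 36000)         # 32000
receive_grant = max(microwave, cellular, infrared)  # choose infrared: 32000
submit = 0.5 * receive_grant + 0.5 * -5000          # 13500.0
print(cellular, infrared, receive_grant, round(submit))  # 29000 32000 32000 13500
```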
Influence Diagrams
› Provide simple graphical representations of decision situations
using four basic shapes:
› Decision node
› Chance node
› Payoff node
› Consequence or calculation node
› Connectors (arcs)
› The beginning node of an arc is called a predecessor.
› The node at the end of an arc is a successor.
Influence Diagram - Example
[Figure: an example influence diagram]
Influence Diagrams & the Fundamental-Objectives Hierarchy - Example
[Figure: a venture capitalist's decision with two objectives]
Influence Diagrams & the Fundamental-Objectives Hierarchy - Example
[Figure: multiple objectives in selecting a bomb detection system]
Using Arcs in Influence Diagrams