
EMGT 501

HW Solutions

Chapter 12 - SELF TEST 9


Chapter 12 - SELF TEST 18

© 2005 Thomson/South-Western Slide 1


12-9

a. $P_0 = 1 - \dfrac{\lambda}{\mu} = 1 - \dfrac{2.2}{5} = 0.56$

b. $P_1 = \left(\dfrac{\lambda}{\mu}\right) P_0 = \left(\dfrac{2.2}{5}\right)(0.56) = 0.2464$

c. $P_2 = \left(\dfrac{\lambda}{\mu}\right)^{2} P_0 = \left(\dfrac{2.2}{5}\right)^{2}(0.56) = 0.1084$

d. $P_3 = \left(\dfrac{\lambda}{\mu}\right)^{3} P_0 = \left(\dfrac{2.2}{5}\right)^{3}(0.56) = 0.0477$

© 2005 Thomson/South-Western Slide 2


e. $P(\text{more than 2 waiting}) = P(\text{more than 3 in the system})$
   $= 1 - (P_0 + P_1 + P_2 + P_3) = 1 - 0.9625 = 0.0375$

f. $L_q = \dfrac{\lambda^2}{\mu(\mu - \lambda)} = \dfrac{(2.2)^2}{5(5 - 2.2)} = 0.3457$

   $W_q = \dfrac{L_q}{\lambda} = 0.157 \text{ hours } (9.43 \text{ minutes})$

© 2005 Thomson/South-Western Slide 3
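
As a quick arithmetic check on 12-9, here is a minimal Python sketch of the M/M/1 formulas used above (λ = 2.2 and μ = 5 come from the solution; the variable names are mine, not part of the original slides):

```python
# Minimal check of the M/M/1 results in 12-9 (a sketch, not from the original slides).
lam, mu = 2.2, 5.0                               # arrival rate, service rate (per hour)
rho = lam / mu                                   # utilization

P = [(1 - rho) * rho**n for n in range(4)]       # P0, P1, P2, P3
more_than_2_waiting = 1 - sum(P)                 # = P(more than 3 in the system)
Lq = lam**2 / (mu * (mu - lam))                  # average number waiting
Wq = Lq / lam                                    # average time in queue (hours)

print([round(p, 4) for p in P])                  # [0.56, 0.2464, 0.1084, 0.0477]
print(round(more_than_2_waiting, 4))             # 0.0375
print(round(Lq, 4), round(Wq, 4))                # 0.3457 0.1571
```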


12-18

a. $s = 2$ channels, $\lambda/\mu = 14/10 = 1.4$

   $P_0 = 0.1765$ (p. 558, Table 12.4)

b. $L_q = \dfrac{(\lambda/\mu)^2 \lambda\mu}{1!\,(2\mu - \lambda)^2}\, P_0 = \dfrac{(1.4)^2 (14)(10)}{(20 - 14)^2}\,(0.1765) = 1.3451$

   $L = L_q + \dfrac{\lambda}{\mu} = 1.3451 + \dfrac{14}{10} = 2.7451$

   $W_q = \dfrac{L_q}{\lambda} = \dfrac{1.3451}{14} = 0.0961 \text{ hours } (5.77 \text{ minutes})$

   $W = W_q + \dfrac{1}{\mu} = 0.0961 + \dfrac{1}{10} = 0.196 \text{ hours } (11.77 \text{ minutes})$
© 2005 Thomson/South-Western Slide 4
e. $P_0 = 0.1765$

   $P_1 = \dfrac{(\lambda/\mu)^1}{1!}\, P_0 = \left(\dfrac{14}{10}\right)(0.1765) = 0.2470$

   $P_{\text{wait}} = P(n \ge 2) = 1 - P(n \le 1) = 1 - (P_0 + P_1) = 1 - 0.4235 = 0.5765$

© 2005 Thomson/South-Western Slide 5
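
A similar sketch for the two-channel (M/M/2) system in 12-18, computing P0 from the standard formula rather than reading it from Table 12.4 (λ = 14, μ = 10 per channel, and s = 2 are from the solution above; the code itself is illustrative):

```python
from math import factorial

# M/M/s operating characteristics for 12-18 (a sketch, not the original solution).
lam, mu, s = 14.0, 10.0, 2
r = lam / mu                                     # lambda/mu = 1.4

# P0 from the standard M/M/s formula; Table 12.4 lists the same value, 0.1765.
P0 = 1 / (sum(r**n / factorial(n) for n in range(s))
          + (r**s / factorial(s)) * (s * mu) / (s * mu - lam))
P1 = (r**1 / factorial(1)) * P0                  # probability of exactly 1 in the system
Lq = (r**s * lam * mu) / (factorial(s - 1) * (s * mu - lam)**2) * P0
L = Lq + r
Wq = Lq / lam
W = Wq + 1 / mu
P_wait = 1 - P0 - P1                             # an arrival waits only if 2+ are present

print(round(P0, 4), round(Lq, 4), round(L, 4))       # 0.1765 1.3451 2.7451
print(round(Wq, 4), round(W, 4), round(P_wait, 4))   # 0.0961 0.1961 0.5765
```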


Chapter 14
Decision Analysis

 Problem Formulation
 Decision Making without
Probabilities
 Decision Making with Probabilities
 Risk Analysis and Sensitivity
Analysis
 Decision Analysis with Sample
Information
 Computing Branch Probabilities
© 2005 Thomson/South-Western Slide 6
Problem Formulation

 A decision problem is characterized by


decision alternatives, states of nature, and
resulting payoffs.
 The decision alternatives are the different
possible strategies the decision maker can
employ.
 The states of nature refer to future events,
not under the control of the decision maker,
which may occur. States of nature should
be defined so that they are mutually
exclusive and collectively exhaustive.

© 2005 Thomson/South-Western Slide 7


Influence Diagrams

 An influence diagram is a graphical device


showing the relationships among the
decisions, the chance events, and the
consequences.
 Squares or rectangles depict decision nodes.
 Circles or ovals depict chance nodes.
 Diamonds depict consequence nodes.
 Lines or arcs connecting the nodes show the
direction of influence.

© 2005 Thomson/South-Western Slide 8


Payoff Tables

 The consequence resulting from a specific


combination of a decision alternative and a
state of nature is a payoff.
 A table showing payoffs for all combinations
of decision alternatives and states of nature is
a payoff table.
 Payoffs can be expressed in terms of profit,
cost, time, distance or any other appropriate
measure.

© 2005 Thomson/South-Western Slide 9


Decision Trees

 A decision tree is a chronological


representation of the decision problem.

 Each decision tree has two types of


nodes; round nodes correspond to the
states of nature while square nodes
correspond to the decision alternatives.

© 2005 Thomson/South-Western Slide 10


The branches leaving each round node
represent the different states of nature
while the branches leaving each
square node represent the different
decision alternatives.

At the end of each limb of a tree are the


payoffs attained from the series of
branches making up that limb.

© 2005 Thomson/South-Western Slide 11


Decision Making without Probabilities

 Three commonly used criteria for


decision making when probability
information regarding the likelihood
of the states of nature is unavailable
are:
•the optimistic approach
•the conservative approach
•the minimax regret approach.

© 2005 Thomson/South-Western Slide 12


Optimistic Approach

 The optimistic approach would be used


by an optimistic decision maker.
 The decision with the largest possible
payoff is chosen.
 If the payoff table was in terms of costs,
the decision with the lowest cost would
be chosen.

© 2005 Thomson/South-Western Slide 13


Conservative Approach

 The conservative approach would be used by a


conservative decision maker.
 For each decision the minimum payoff is listed and
then the decision corresponding to the maximum
of these minimum payoffs is selected. (Hence, the
minimum possible payoff is maximized.)
 If the payoff was in terms of costs, the maximum
costs would be determined for each decision and
then the decision corresponding to the minimum
of these maximum costs is selected. (Hence, the
maximum possible cost is minimized.)

© 2005 Thomson/South-Western Slide 14


Minimax Regret Approach

 The minimax regret approach requires the


construction of a regret table or an opportunity
loss table.
 This is done by calculating for each state of nature
the difference between each payoff and the largest
payoff for that state of nature.
 Then, using this regret table, the maximum regret
for each possible decision is listed.
 The decision chosen is the one corresponding to
the minimum of the maximum regrets.

© 2005 Thomson/South-Western Slide 15


Example

Consider the following problem with three decision


alternatives and three states of nature with the
following payoff table representing profits:

States of Nature
s1 s2 s3

d1 4 4 -2
Decisions d2 0 3 -1
d3 1 5 -3

© 2005 Thomson/South-Western Slide 16


Example: Optimistic Approach

An optimistic decision maker would use the


optimistic (maximax) approach. We choose the
decision that has the largest single value in the
payoff table.

             Maximum
Decision     Payoff
   d1           4
   d2           3
   d3           5     ← maximax decision (maximax payoff = 5)

© 2005 Thomson/South-Western Slide 17


Example: Optimistic Approach

 Formula Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Maximum Recommended
4 Alternative s1 s2 s3 Payoff Decision
5 d1 4 4 -2 =MAX(B5:D5) =IF(E5=$E$9,A5,"")
6 d2 0 3 -1 =MAX(B6:D6) =IF(E6=$E$9,A6,"")
7 d3 1 5 -3 =MAX(B7:D7) =IF(E7=$E$9,A7,"")
8
9 Best Payoff =MAX(E5:E7)

© 2005 Thomson/South-Western Slide 18


Example: Optimistic Approach

 Solution Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Maximum Recommended
4 Alternative s1 s2 s3 Payoff Decision
5 d1 4 4 -2 4
6 d2 0 3 -1 3
7 d3 1 5 -3 5 d3
8
9 Best Payoff 5

© 2005 Thomson/South-Western Slide 19


Example: Conservative Approach

A conservative decision maker would use the


conservative (maximin) approach. List the minimum
payoff for each decision. Choose the decision with
the maximum of these minimum payoffs.

             Minimum
Decision     Payoff
   d1          -2
   d2          -1     ← maximin decision (maximin payoff = -1)
   d3          -3

© 2005 Thomson/South-Western Slide 20


Example: Conservative Approach

 Formula Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Minimum Recommended
4 Alternative s1 s2 s3 Payoff Decision
5 d1 4 4 -2 =MIN(B5:D5) =IF(E5=$E$9,A5,"")
6 d2 0 3 -1 =MIN(B6:D6) =IF(E6=$E$9,A6,"")
7 d3 1 5 -3 =MIN(B7:D7) =IF(E7=$E$9,A7,"")
8
9 Best Payoff =MAX(E5:E7)

© 2005 Thomson/South-Western Slide 21


Example: Conservative Approach

 Solution Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Minimum Recommended
4 Alternative s1 s2 s3 Payoff Decision
5 d1 4 4 -2 -2
6 d2 0 3 -1 -1 d2
7 d3 1 5 -3 -3
8
9 Best Payoff -1

© 2005 Thomson/South-Western Slide 22


Example: Minimax Regret Approach

For the minimax regret approach, first compute a


regret table by subtracting each payoff in a column
from the largest payoff in that column. In this
example, in the first column subtract 4, 0, and 1 from
4; etc. The resulting regret table is:

s1 s2 s3

d1 0 1 1
d2 4 2 0
d3 3 0 2

© 2005 Thomson/South-Western Slide 23


Example: Minimax Regret Approach

For each decision list the maximum regret.


Choose the decision with the minimum of these
values.

             Maximum
Decision     Regret
   d1           1     ← minimax decision (minimax regret = 1)
   d2           4
   d3           3

© 2005 Thomson/South-Western Slide 24


Example: Minimax Regret Approach

 Formula Spreadsheet
A B C D E F
1 PAYOFF TABLE
2 Decision State of Nature
3 Altern. s1 s2 s3
4 d1 4 4 -2
5 d2 0 3 -1
6 d3 1 5 -3
7
8 OPPORTUNITY LOSS TABLE
9 Decision State of Nature Maximum Recommended
10 Altern. s1 s2 s3 Regret Decision
11 d1 =MAX($B$4:$B$6)-B4 =MAX($C$4:$C$6)-C4 =MAX($D$4:$D$6)-D4 =MAX(B11:D11) =IF(E11=$E$14,A11,"")
12 d2 =MAX($B$4:$B$6)-B5 =MAX($C$4:$C$6)-C5 =MAX($D$4:$D$6)-D5 =MAX(B12:D12) =IF(E12=$E$14,A12,"")
13 d3 =MAX($B$4:$B$6)-B6 =MAX($C$4:$C$6)-C6 =MAX($D$4:$D$6)-D6 =MAX(B13:D13) =IF(E13=$E$14,A13,"")
14 Minimax Regret Value =MIN(E11:E13)

© 2005 Thomson/South-Western Slide 25


Example: Minimax Regret Approach

 Solution Spreadsheet
A B C D E F
1 PAYOFF TABLE
2 Decision State of Nature
3 Alternative s1 s2 s3
4 d1 4 4 -2
5 d2 0 3 -1
6 d3 1 5 -3
7
8 OPPORTUNITY LOSS TABLE
9 Decision State of Nature Maximum Recommended
10 Alternative s1 s2 s3 Regret Decision
11 d1 0 1 1 1 d1
12 d2 4 2 0 4
13 d3 3 0 2 3
14 Minimax Regret Value 1

© 2005 Thomson/South-Western Slide 26
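
The three criteria can also be computed directly in code; a minimal Python sketch for the example payoff table (the data come from the slides, the variable names are mine):

```python
# Maximax, maximin, and minimax regret for the example payoff table (a sketch).
payoffs = {
    "d1": [4, 4, -2],       # payoffs under s1, s2, s3
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Optimistic (maximax): the decision with the largest best-case payoff.
maximax = max(payoffs, key=lambda d: max(payoffs[d]))          # 'd3' (payoff 5)

# Conservative (maximin): the decision with the largest worst-case payoff.
maximin = max(payoffs, key=lambda d: min(payoffs[d]))          # 'd2' (payoff -1)

# Minimax regret: regret = best payoff in that state minus the payoff received.
best_per_state = [max(col) for col in zip(*payoffs.values())]
regret = {d: [b - p for b, p in zip(best_per_state, row)] for d, row in payoffs.items()}
minimax_regret = min(regret, key=lambda d: max(regret[d]))     # 'd1' (max regret 1)

print(maximax, maximin, minimax_regret)                        # d3 d2 d1
```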


Decision Making with Probabilities

 Expected Value Approach


• If probabilistic information regarding the states
of nature is available, one may use the expected
value (EV) approach.
• Here the expected return for each decision is
calculated by summing the products of the
payoff under each state of nature and the
probability of the respective state of nature
occurring.
• The decision yielding the best expected return is
chosen.

© 2005 Thomson/South-Western Slide 27


Expected Value of a Decision Alternative

 The expected value of a decision alternative is the


sum of weighted payoffs for the decision alternative.
 The expected value (EV) of decision alternative di is
defined as:
$\mathrm{EV}(d_i) = \displaystyle\sum_{j=1}^{N} P(s_j)\, V_{ij}$

where: N = the number of states of nature


P(sj ) = the probability of state of nature sj
Vij = the payoff corresponding to decision
alternative di and state of nature sj

© 2005 Thomson/South-Western Slide 28
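
In code, the expected value above is just a probability-weighted sum; a one-function Python sketch (the numbers in the example call are illustrative, not from a slide):

```python
# EV(di) = sum over j of P(sj) * Vij -- a direct translation of the formula above.
def expected_value(probabilities, payoffs):
    """Expected value of one decision alternative."""
    return sum(p * v for p, v in zip(probabilities, payoffs))

# For example, probabilities 0.4, 0.2, 0.4 and payoffs 10, 15, 14 give EV = 12.6.
print(round(expected_value([0.4, 0.2, 0.4], [10, 15, 14]), 1))   # 12.6
```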


Example: Burger Prince

Burger Prince Restaurant is considering opening


a new restaurant on Main Street. It has three
different models, each with a different
seating capacity. Burger Prince
estimates that the average number of
customers per hour will be 80, 100, or
120. The payoff table for the three
models is on the next slide.

© 2005 Thomson/South-Western Slide 29


Payoff Table

Average Number of Customers Per Hour


s1 = 80 s2 = 100 s3 = 120

Model A $10,000 $15,000 $14,000


Model B $ 8,000 $18,000 $12,000
Model C $ 6,000 $16,000 $21,000

© 2005 Thomson/South-Western Slide 30


Expected Value Approach

Calculate the expected value for each decision.


The decision tree on the next slide can assist in this
calculation. Here d1, d2, d3 represent the decision
alternatives of models A, B, C, and s1, s2, s3 represent
the states of nature of 80, 100, and 120.

© 2005 Thomson/South-Western Slide 31


Decision Tree

Node 1 (decision node) branches to chance nodes 2-4; each chance branch ends in a payoff:

  d1 (Model A) → node 2:   s1 (.4) $10,000    s2 (.2) $15,000    s3 (.4) $14,000
  d2 (Model B) → node 3:   s1 (.4)  $8,000    s2 (.2) $18,000    s3 (.4) $12,000
  d3 (Model C) → node 4:   s1 (.4)  $6,000    s2 (.2) $16,000    s3 (.4) $21,000

© 2005 Thomson/South-Western Slide 32


Expected Value for Each Decision

Node 2 (d1, Model A):  EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
Node 3 (d2, Model B):  EMV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
Node 4 (d3, Model C):  EMV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest EV: Model C.

© 2005 Thomson/South-Western Slide 33


Expected Value Approach

 Formula Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Expected Recommended
4 Alternative s1 = 80 s2 = 100 s3 = 120 Value Decision
5 d1 = Model A 10,000 15,000 14,000 =$B$8*B5+$C$8*C5+$D$8*D5 =IF(E5=$E$9,A5,"")
6 d2 = Model B 8,000 18,000 12,000 =$B$8*B6+$C$8*C6+$D$8*D6 =IF(E6=$E$9,A6,"")
7 d3 = Model C 6,000 16,000 21,000 =$B$8*B7+$C$8*C7+$D$8*D7 =IF(E7=$E$9,A7,"")
8 Probability 0.4 0.2 0.4
9 Maximum Expected Value =MAX(E5:E7)

© 2005 Thomson/South-Western Slide 34


Expected Value Approach

 Solution Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Expected Recommended
4 Alternative s1 = 80 s2 = 100 s3 = 120 Value Decision
5 d1 = Model A 10,000 15,000 14,000 12600
6 d2 = Model B 8,000 18,000 12,000 11600
7 d3 = Model C 6,000 16,000 21,000 14000 d3 = Model C
8 Probability 0.4 0.2 0.4
9 Maximum Expected Value 14000

© 2005 Thomson/South-Western Slide 35
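
The spreadsheet result above can be reproduced with a short Python sketch (payoffs and probabilities are taken from the Burger Prince payoff table; the variable names are mine):

```python
# Expected value approach for Burger Prince (a sketch of the spreadsheet logic).
probs = [0.4, 0.2, 0.4]                      # P(80), P(100), P(120) customers per hour
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

ev = {model: sum(p * v for p, v in zip(probs, row)) for model, row in payoffs.items()}
best = max(ev, key=ev.get)

for model, value in ev.items():
    print(model, round(value))               # Model A 12600, Model B 11600, Model C 14000
print("Recommended:", best)                  # Recommended: Model C
```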


Expected Value of Perfect Information

 Frequently information is available which can


improve the probability estimates for the states of
nature.
 The expected value of perfect information (EVPI) is
the increase in the expected profit that would
result if one knew with certainty which state of
nature would occur.
 The EVPI provides an upper bound on the
expected value of any sample or survey
information.

© 2005 Thomson/South-Western Slide 36


Expected Value of Perfect Information

 EVPI Calculation
• Step 1:
Determine the optimal return corresponding to
each state of nature.
• Step 2:
Compute the expected value of these optimal
returns.
• Step 3:
Subtract the EV of the optimal decision from the
amount determined in step (2).

© 2005 Thomson/South-Western Slide 37


Expected Value of Perfect Information

Calculate the expected value for the optimum


payoff for each state of nature and subtract the EV of
the optimal decision.

EVPI= .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000

© 2005 Thomson/South-Western Slide 38


Expected Value of Perfect Information

 Spreadsheet
A B C D E F
1 PAYOFF TABLE
2
3 Decision State of Nature Expected Recommended
4 Alternative s1 = 80 s2 = 100 s3 = 120 Value Decision
5 d1 = Model A 10,000 15,000 14,000 12600
6 d2 = Model B 8,000 18,000 12,000 11600
7 d3 = Model C 6,000 16,000 21,000 14000 d3 = Model C
8 Probability 0.4 0.2 0.4
9 Maximum Expected Value 14000
10
11 Maximum Payoff EVwPI EVPI
12 10,000 18,000 21,000 16000 2000

© 2005 Thomson/South-Western Slide 39
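
Continuing the same sketch, EVPI is the expected value with perfect information minus the best expected value without it (data again from the Burger Prince table; this mirrors the three-step procedure on the earlier slide):

```python
# EVPI for Burger Prince (a sketch, not the spreadsheet itself).
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

best_per_state = [max(col) for col in zip(*payoffs.values())]     # 10,000 / 18,000 / 21,000
ev_with_pi = sum(p * v for p, v in zip(probs, best_per_state))    # EV with perfect information
ev_best = max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())
evpi = ev_with_pi - ev_best

print(round(ev_with_pi), round(ev_best), round(evpi))             # 16000 14000 2000
```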


Risk Analysis

 Risk analysis helps the decision maker recognize the


difference between:
• the expected value of a decision alternative, and
• the payoff that might actually occur
 The risk profile for a decision alternative shows the
possible payoffs for the decision alternative along
with their associated probabilities.

© 2005 Thomson/South-Western Slide 40


Risk Profile

 Model C Decision Alternative

[Bar chart: probability (vertical axis, .10 to .50) versus payoff in $1,000s (horizontal axis, 5 to 25). The Model C risk profile has bars at $6,000 (probability .4), $16,000 (.2), and $21,000 (.4).]

© 2005 Thomson/South-Western Slide 41


Sensitivity Analysis

 Sensitivity analysis can be used to determine how


changes to the following inputs affect the
recommended decision alternative:
• probabilities for the states of nature
• values of the payoffs
 If a small change in the value of one of the inputs
causes a change in the recommended decision
alternative, extra effort and care should be taken in
estimating the input value.

© 2005 Thomson/South-Western Slide 42


Bayes’ Theorem and Posterior Probabilities

 Knowledge of sample (survey) information can be used


to revise the probability estimates for the states of nature.
 Prior to obtaining this information, the probability
estimates for the states of nature are called prior
probabilities.
 With knowledge of conditional probabilities for the
outcomes or indicators of the sample or survey
information, these prior probabilities can be revised by
employing Bayes' Theorem.
 The outcomes of this analysis are called posterior
probabilities or branch probabilities for decision trees.

© 2005 Thomson/South-Western Slide 43


Computing Branch Probabilities

 Branch (Posterior) Probabilities Calculation


• Step 1:
For each state of nature, multiply the prior
probability by its conditional probability for the
indicator -- this gives the joint probabilities for the
states and indicator.

© 2005 Thomson/South-Western Slide 44


Computing Branch Probabilities

 Branch (Posterior) Probabilities Calculation


• Step 2:
Sum these joint probabilities over all states -- this
gives the marginal probability for the indicator.
• Step 3:
For each state, divide its joint probability by the
marginal probability for the indicator -- this gives
the posterior probability distribution.

© 2005 Thomson/South-Western Slide 45


Expected Value of Sample Information

 The expected value of sample information (EVSI) is


the additional expected profit possible through
knowledge of the sample or survey information.

© 2005 Thomson/South-Western Slide 46


Expected Value of Sample Information

 EVSI Calculation
• Step 1:
Determine the optimal decision and its expected
return for the possible outcomes of the sample using
the posterior probabilities for the states of nature.
• Step 2:
Compute the expected value of these optimal
returns.
• Step 3:
Subtract the EV of the optimal decision obtained
without using the sample information from the
amount determined in step (2).

© 2005 Thomson/South-Western Slide 47


Efficiency of Sample Information

 Efficiency of sample information is the ratio of EVSI


to EVPI.
 As the EVPI provides an upper bound for the EVSI,
efficiency is always a number between 0 and 1.

© 2005 Thomson/South-Western Slide 48


Sample Information

Burger Prince must decide whether or not to


purchase a marketing survey from Stanton Marketing
for $1,000. The results of the survey are "favorable" or
"unfavorable". The conditional probabilities are:

P(favorable | 80 customers per hour) = .2


P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9

Should Burger Prince have the survey performed


by Stanton Marketing?

© 2005 Thomson/South-Western Slide 49


Influence Diagram

[Influence diagram. Decision nodes: Market Survey, Restaurant Size. Chance nodes: Market Survey Results, Avg. Number of Customers Per Hour. Consequence node: Profit.]

© 2005 Thomson/South-Western Slide 50


Posterior Probabilities

Favorable
State Prior Conditional Joint Posterior
80 .4 .2 .08 .148
100 .2 .5 .10 .185
120 .4 .9 .36 .667
Total .54 1.000

P(favorable) = .54

© 2005 Thomson/South-Western Slide 51


Posterior Probabilities

Unfavorable
State Prior Conditional Joint Posterior
80 .4 .8 .32 .696
100 .2 .5 .10 .217
120 .4 .1 .04 .087
Total .46 1.000

P(unfavorable) = .46

© 2005 Thomson/South-Western Slide 52


Posterior Probabilities

 Formula Spreadsheet
A B C D E
1 Market Research Favorable
2 Prior Conditional Joint Posterior
3 State of Nature Probabilities Probabilities Probabilities Probabilities
4 s1 = 80 0.4 0.2 =B4*C4 =D4/$D$7
5 s2 = 100 0.2 0.5 =B5*C5 =D5/$D$7
6 s3 = 120 0.4 0.9 =B6*C6 =D6/$D$7
7 P(Favorable) = =SUM(D4:D6)
8
9 Market Research Unfavorable
10 Prior Conditional Joint Posterior
11 State of Nature Probabilities Probabilities Probabilities Probabilities
12 s1 = 80 0.4 0.8 =B12*C12 =D12/$D$15
13 s2 = 100 0.2 0.5 =B13*C13 =D13/$D$15
14 s3 = 120 0.4 0.1 =B14*C14 =D14/$D$15
15 P(Unfavorable) = =SUM(D12:D14)

© 2005 Thomson/South-Western Slide 53


Posterior Probabilities

 Solution Spreadsheet
A B C D E
1 Market Research Favorable
2 Prior Conditional Joint Posterior
3 State of Nature Probabilities Probabilities Probabilities Probabilities
4 s1 = 80 0.4 0.2 0.08 0.148
5 s2 = 100 0.2 0.5 0.10 0.185
6 s3 = 120 0.4 0.9 0.36 0.667
7 P(Favorable) = 0.54
8
9 Market Research Unfavorable
10 Prior Conditional Joint Posterior
11 State of Nature Probabilities Probabilities Probabilities Probabilities
12 s1 = 80 0.4 0.8 0.32 0.696
13 s2 = 100 0.2 0.5 0.10 0.217
14 s3 = 120 0.4 0.1 0.04 0.087
15 P(Unfavorable) = 0.46

© 2005 Thomson/South-Western Slide 54
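
The posterior tables above follow directly from Bayes' theorem; a minimal Python sketch using the priors and conditional probabilities from the slides (function and variable names are mine):

```python
# Posterior (branch) probabilities for the Burger Prince survey (a sketch).
priors = {80: 0.4, 100: 0.2, 120: 0.4}
p_fav_given_state = {80: 0.2, 100: 0.5, 120: 0.9}       # P(favorable | state)

def posteriors(priors, conditionals):
    joint = {s: priors[s] * conditionals[s] for s in priors}    # step 1: joint probabilities
    marginal = sum(joint.values())                              # step 2: P(indicator)
    return {s: joint[s] / marginal for s in joint}, marginal    # step 3: posteriors

post_fav, p_fav = posteriors(priors, p_fav_given_state)
p_unf_given_state = {s: 1 - c for s, c in p_fav_given_state.items()}
post_unf, p_unf = posteriors(priors, p_unf_given_state)

print(round(p_fav, 2), {s: round(p, 3) for s, p in post_fav.items()})
# 0.54 {80: 0.148, 100: 0.185, 120: 0.667}
print(round(p_unf, 2), {s: round(p, 3) for s, p in post_unf.items()})
# 0.46 {80: 0.696, 100: 0.217, 120: 0.087}
```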


Decision Tree

 Top Half
Node 1 → I1, favorable (.54) → node 2 (decision):

  d1 → node 4:   s1 (.148) $10,000    s2 (.185) $15,000    s3 (.667) $14,000
  d2 → node 5:   s1 (.148)  $8,000    s2 (.185) $18,000    s3 (.667) $12,000
  d3 → node 6:   s1 (.148)  $6,000    s2 (.185) $16,000    s3 (.667) $21,000

© 2005 Thomson/South-Western Slide 55


Decision Tree

 Bottom Half

Node 1 → I2, unfavorable (.46) → node 3 (decision):

  d1 → node 7:   s1 (.696) $10,000    s2 (.217) $15,000    s3 (.087) $14,000
  d2 → node 8:   s1 (.696)  $8,000    s2 (.217) $18,000    s3 (.087) $12,000
  d3 → node 9:   s1 (.696)  $6,000    s2 (.217) $16,000    s3 (.087) $21,000

© 2005 Thomson/South-Western Slide 56


Decision Tree

Given I1 (favorable, node 2; value $17,855, choose d3):

  Node 4 (d1): EMV = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
  Node 5 (d2): EMV = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
  Node 6 (d3): EMV = .148(6,000) + .185(16,000) + .667(21,000) = $17,855

Given I2 (unfavorable, node 3; value $11,433, choose d1):

  Node 7 (d1): EMV = .696(10,000) + .217(15,000) + .087(14,000) = $11,433
  Node 8 (d2): EMV = .696(8,000) + .217(18,000) + .087(12,000) = $10,518
  Node 9 (d3): EMV = .696(6,000) + .217(16,000) + .087(21,000) = $9,475
© 2005 Thomson/South-Western Slide 57
Expected Value of Sample Information

If the outcome of the survey is "favorable", choose Model C.
If it is "unfavorable", choose Model A.

EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88

Since this is less than the cost of the survey, the


survey should not be purchased.

© 2005 Thomson/South-Western Slide 58
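
Putting the pieces together, the EVSI above and the efficiency ratio on the next slide can be checked in a few lines (the EMVs $17,855 and $11,433 and the probabilities .54/.46 come from the decision tree; EVPI = $2,000 was computed earlier):

```python
# EVSI and efficiency of sample information for the Burger Prince survey (a sketch).
p_fav, p_unf = 0.54, 0.46
emv_given_fav = 17_855          # best EMV if the survey is favorable (choose Model C)
emv_given_unf = 11_433          # best EMV if the survey is unfavorable (choose Model A)
emv_no_info = 14_000            # best EMV without sample information (Model C)
evpi = 2_000

evsi = p_fav * emv_given_fav + p_unf * emv_given_unf - emv_no_info
efficiency = evsi / evpi

print(round(evsi, 2), round(efficiency, 4))   # 900.88 0.4504
```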


Efficiency of Sample Information

The efficiency of the survey:

EVSI/EVPI = ($900.88)/($2000) = .4504

© 2005 Thomson/South-Western Slide 59


Bayes’ Decision Rule:

Using the best available estimates of the


probabilities of the respective states of nature
(currently the prior probabilities), calculate the
expected value of the payoff for each of the
possible actions. Choose the action with the
maximum expected payoff.

© 2005 Thomson/South-Western Slide 60


Bayes’ Theorem
Si: State of Nature (i = 1 ~ n)
P(Si): Prior Probability
Ij: Professional Information (Experiment)( j = 1 ~ n)
P(Ij | Si): Conditional Probability
P(Ij ∩ Si) = P(Si ∩ Ij): Joint Probability
P(Si | Ij): Posterior Probability

$P(S_i \mid I_j) = \dfrac{P(S_i \cap I_j)}{P(I_j)} = \dfrac{P(I_j \mid S_i)\,P(S_i)}{\displaystyle\sum_{i=1}^{n} P(I_j \mid S_i)\,P(S_i)}$
© 2005 Thomson/South-Western Slide 61
Homework

14-20

Due Date: Nov 7

© 2005 Thomson/South-Western Slide 62


End of Chapter 14

© 2005 Thomson/South-Western Slide 63
