MIT18 05S14 Class27-Sol
Problem 1. We compute
\[
E[X] = -2\cdot\frac{1}{15} - 1\cdot\frac{2}{15} + 0\cdot\frac{3}{15} + 1\cdot\frac{4}{15} + 2\cdot\frac{5}{15} = \frac{2}{3}.
\]
Thus
\[
\operatorname{Var}(X) = E\left(\left(X - \tfrac{2}{3}\right)^{2}\right) = \frac{14}{9}.
\]
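As a quick numerical check of this computation (a minimal Python sketch; the values and probabilities are the ones used above):

    # Check E[X] and Var(X) for the pmf in Problem 1.
    values = [-2, -1, 0, 1, 2]
    probs = [1/15, 2/15, 3/15, 4/15, 5/15]
    EX = sum(x * p for x, p in zip(values, probs))               # expectation
    VarX = sum((x - EX)**2 * p for x, p in zip(values, probs))   # variance about the mean
    print(EX, VarX)   # 0.6667 ≈ 2/3 and 1.5556 ≈ 14/9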
Problem 4. answer:
Make a table:

    X:      0        1
    prob:   1 − p    p
    X²:     0        1
From the table, E(X) = 0 · (1 − p) + 1 · p = p.
Since X and X² have the same table, E(X²) = E(X) = p.
Therefore, Var(X) = p − p2 = p(1 − p).
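A short simulation sketch confirms Var(X) = p(1 − p); the particular p and sample size below are arbitrary illustrative choices:

    import numpy as np
    rng = np.random.default_rng(0)
    p = 0.3                                   # illustrative value of p
    x = rng.binomial(1, p, size=1_000_000)    # Bernoulli(p) samples
    print(x.mean(), p)                        # both ≈ 0.3
    print(x.var(), p * (1 - p))               # both ≈ 0.21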
Problem 5. Let X be the number of people who get their own hat.
Following the hint: let Xj represent whether person j gets their own hat. That is,
X_j = 1 if person j gets their hat and 0 if not.
We have
\[
X = \sum_{j=1}^{100} X_j, \qquad\text{so}\qquad E(X) = \sum_{j=1}^{100} E(X_j).
\]
Since person j is equally likely to get any hat, we have P (Xj = 1) = 1/100. Thus,
Xj ∼ Bernoulli(1/100) ⇒ E(Xj ) = 1/100 ⇒ E(X) = 1.
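The hat-check expectation can also be checked by simulation; a minimal sketch (the number of trials is an arbitrary choice):

    import random
    n, trials = 100, 10_000
    total = 0
    for _ in range(trials):
        hats = list(range(n))
        random.shuffle(hats)                                   # random return of hats
        total += sum(1 for j in range(n) if hats[j] == j)      # people who get their own hat
    print(total / trials)   # ≈ 1, matching E(X) = 1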
E(X) = 3 E(Binomial(25, 1/6)) = 3 · 25/6 = 75/6, and Var(X) = 9 Var(Binomial(25, 1/6)) = 9 · 25 · (1/6)(5/6) = 125/4.
(c) E(X + Y) = E(X) + E(Y) = 150/6 = 25, and E(2X) = 2E(X) = 150/6 = 25.
Var(X + Y) = Var(X) + Var(Y) = 250/4, and Var(2X) = 4 Var(X) = 500/4.
The means of X + Y and 2X are the same, but Var(2X) > Var(X + Y).
This makes sense because in X + Y, sometimes X and Y will be on opposite sides of the
mean, so the distances to the mean will tend to cancel. However, in 2X the distance
to the mean is always doubled.
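A quick simulation sketch of these two facts, modeling X and Y as 3 times independent Binomial(25, 1/6) counts, consistent with the mean and variance used above:

    import numpy as np
    rng = np.random.default_rng(1)
    X = 3 * rng.binomial(25, 1/6, size=1_000_000)   # X = 3 · Binomial(25, 1/6)
    Y = 3 * rng.binomial(25, 1/6, size=1_000_000)   # independent copy of X
    print((X + Y).var())   # ≈ 62.5 = 250/4
    print((2 * X).var())   # ≈ 125  = 500/4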
\[
P(0.5 < X < 1) = F_X(1) - F_X(0.5) = 1 - \frac{0.5^{2} + 0.5^{3}}{2} = \frac{13}{16}.
\]
Problem 8. (i) yes, discrete, (ii) no, (iii) no, (iv) no, (v) yes, continuous
(vi) no, (vii) yes, continuous, (viii) yes, continuous.
Problem 10.
(a) We did this in class. Let φ(z) and Φ(z) be the PDF and CDF of Z.
FY (y) = P (Y ≤ y) = P (aZ + b ≤ y) = P (Z ≤ (y − b)/a) = Φ((y − b)/a).
Differentiating:
\[
f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy}\Phi\!\left(\frac{y-b}{a}\right) = \frac{1}{a}\,\phi\!\left(\frac{y-b}{a}\right) = \frac{1}{a\sqrt{2\pi}}\, e^{-(y-b)^{2}/2a^{2}}.
\]
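As a numerical sanity check, the derived formula can be compared against the normal density with mean b and standard deviation a (the specific a, b, and test points below are arbitrary):

    import numpy as np
    from scipy.stats import norm
    a, b = 2.0, 1.0                          # illustrative values with a > 0
    y = np.linspace(-5.0, 7.0, 7)
    derived = norm.pdf((y - b) / a) / a      # (1/a) φ((y − b)/a) from the computation above
    direct = norm.pdf(y, loc=b, scale=a)     # N(b, a²) density
    print(np.allclose(derived, direct))      # True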
Problem 12.
Method 1
U(a, b) has density f(x) = 1/(b − a) on [a, b]. So,
\[
E(X) = \int_a^b x f(x)\, dx = \int_a^b \frac{x}{b-a}\, dx = \left.\frac{x^2}{2(b-a)}\right|_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}.
\]
\[
E(X^2) = \int_a^b x^2 f(x)\, dx = \int_a^b \frac{x^2}{b-a}\, dx = \left.\frac{x^3}{3(b-a)}\right|_a^b = \frac{b^3 - a^3}{3(b-a)}.
\]
\[
\operatorname{Var}(X) = E(X^2) - E(X)^2 = \frac{b^3 - a^3}{3(b-a)} - \frac{(b+a)^2}{4}
= \frac{4(b^3 - a^3) - 3(b-a)(b+a)^2}{12(b-a)} = \frac{b^3 - 3ab^2 + 3a^2b - a^3}{12(b-a)} = \frac{(b-a)^3}{12(b-a)} = \frac{(b-a)^2}{12}.
\]
Method 2
There is an easier way to find E(X) and Var(X).
Let U ∼ U(0, 1). Then the calculations above show E(U) = 1/2 and E(U²) = 1/3,
so Var(U) = 1/3 − 1/4 = 1/12.
Now, we know X = (b − a)U + a, so E(X) = (b − a)E(U) + a = (b − a)/2 + a = (b + a)/2
and Var(X) = (b − a)² Var(U) = (b − a)²/12.
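Both formulas are easy to check by simulation; a minimal sketch with arbitrary endpoints a < b:

    import numpy as np
    rng = np.random.default_rng(2)
    a, b = 2.0, 5.0                              # illustrative endpoints
    x = rng.uniform(a, b, size=1_000_000)
    print(x.mean(), (a + b) / 2)                 # both ≈ 3.5
    print(x.var(), (b - a)**2 / 12)              # both ≈ 0.75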
Problem 13.
(a) Sn ∼ Binomial(n, p), since it is the number of successes in n independent
Bernoulli trials.
(b) Tm ∼ Binomial(m, p), since it is the number of successes in m independent
Bernoulli trials.
(c) Sn + Tm ∼ Binomial(n + m, p), since it is the number of successes in n + m
independent Bernoulli trials.
(d) Yes, Sn and Tm are independent. We haven’t given a formal definition of
independent random variables yet. But, we know it means that knowing Sn gives no
information about Tm . This is clear since the first n trials are independent of the last
m.
Problem 14. Compute the median for the exponential distribution with parameter
λ. The density for this distribution is f(x) = λe^{−λx}. We know (or can compute)
that the distribution function is F(a) = 1 − e^{−λa}. The median is the value of a such
that F(a) = 0.5. Thus, 1 − e^{−λa} = 0.5 ⇒ 0.5 = e^{−λa} ⇒ log(0.5) = −λa ⇒
a = log(2)/λ.
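A quick simulation sketch confirms the median formula (the rate λ below is an arbitrary illustrative choice):

    import numpy as np
    rng = np.random.default_rng(3)
    lam = 1.5                                           # illustrative rate
    samples = rng.exponential(scale=1/lam, size=1_000_000)
    print(np.median(samples), np.log(2) / lam)          # both ≈ 0.462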
Problem 15. (a) The joint distribution is given by
    Y\X      1           2           3          P_Y
     1    1168/5383   825/5383   305/5383   2298/5383
Problem 16. (a) Here we have two continuous random variables X and Y with
joint probability density function
\[
f(x, y) = \frac{12}{5}\, xy(1 + y) \quad\text{for } 0 \le x \le 1 \text{ and } 0 \le y \le 1,
\]
and f (x, y) = 0 otherwise. So
\[
P\!\left(\tfrac{1}{4} \le X \le \tfrac{1}{2},\ \tfrac{1}{3} \le Y \le \tfrac{2}{3}\right) = \int_{1/4}^{1/2}\!\int_{1/3}^{2/3} f(x, y)\, dy\, dx = \frac{41}{720}.
\]
(b) \(F(a, b) = \int_0^a\!\int_0^b f(x, y)\, dy\, dx = \frac{3}{5}a^2b^2 + \frac{2}{5}a^2b^3\) for 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1.
(c) Since f(x, y) = 0 for y > 1, we have F_X(a) = F(a, 1) = a² for 0 ≤ a ≤ 1.
(d) The marginal density is f_X(x) = 2x for 0 ≤ x ≤ 1.
This is consistent with (c) because d/dx (x²) = 2x.
(e) We first compute f_Y(y) for 0 ≤ y ≤ 1 as
\[
f_Y(y) = \int_0^1 f(x, y)\, dx = \frac{6}{5}\, y(y + 1).
\]
Since f (x, y) = fX (x)fY (y), we conclude that X and Y are independent.
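The probability in part (a) can be checked numerically; this is a minimal sketch using scipy's double integrator (dblquad takes the integrand as f(y, x) with y the inner variable):

    from scipy.integrate import dblquad
    f = lambda y, x: 12/5 * x * y * (1 + y)        # the joint density above
    prob, _ = dblquad(f, 1/4, 1/2, 1/3, 2/3)       # x in [1/4, 1/2], y in [1/3, 2/3]
    print(prob, 41/720)                            # both ≈ 0.05694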
Problem 19. (a) X and Y are independent, so the table is computed from the product
of the known marginal probabilities. Since they are independent, Cov(X, Y) = 0.

    Y\X     0      1     P_Y
     0     1/8    1/8    1/4
     1     1/4    1/4    1/2
     2     1/8    1/8    1/4
    P_X    1/2    1/2     1
(b) The sample space is Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
P(X = 0, Z = 0) = P({TTH, TTT}) = 1/4.
P(X = 0, Z = 1) = P({THH, THT}) = 1/4.
P(X = 0, Z = 2) = 0.
P(X = 1, Z = 0) = 0.
P(X = 1, Z = 1) = P({HTH, HTT}) = 1/4.
P(X = 1, Z = 2) = P({HHH, HHT}) = 1/4.

    Z\X     0      1     P_Z
     0     1/4     0     1/4
     1     1/4    1/4    1/2
     2      0     1/4    1/4
    P_X    1/2    1/2     1
Problem 20. (a)
\[
F(a, b) = P(X \le a,\, Y \le b) = \int_0^a\!\int_0^b (x + y)\, dy\, dx.
\]
Inner integral: \(\left. xy + \frac{y^2}{2}\right|_0^b = xb + \frac{b^2}{2}\). Outer integral: \(\left. \frac{x^2}{2}b + \frac{b^2}{2}x \right|_0^a = \frac{a^2b + ab^2}{2}\).
So
\[
F(x, y) = \frac{x^2 y + xy^2}{2} \quad\text{and}\quad F(1, 1) = 1.
\]
(b)
\[
f_X(x) = \int_0^1 f(x, y)\, dy = \int_0^1 (x + y)\, dy = \left. xy + \frac{y^2}{2}\right|_0^1 = x + \frac{1}{2}.
\]
By symmetry, fY (y) = y + 1/2.
(c) To see if they are independent we check if the joint density is the product of
the marginal densities.
f (x, y) = x + y, fX (x) · fY (y) = (x + 1/2)(y + 1/2).
Since these are not equal, X and Y are not independent.
(d)
\[
E(X) = \int_0^1\!\int_0^1 x(x + y)\, dy\, dx = \int_0^1 \left[x^2 y + x\frac{y^2}{2}\right]_0^1 dx = \int_0^1 \left(x^2 + \frac{x}{2}\right) dx = \frac{7}{12}.
\]
(Or, using (b), \(E(X) = \int_0^1 x f_X(x)\, dx = \int_0^1 x(x + 1/2)\, dx = 7/12\).)
By symmetry, E(Y) = 7/12.
\[
E(X^2 + Y^2) = \int_0^1\!\int_0^1 (x^2 + y^2)(x + y)\, dy\, dx = \frac{5}{6}.
\]
\[
E(XY) = \int_0^1\!\int_0^1 xy(x + y)\, dy\, dx = \frac{1}{3}.
\]
\[
\operatorname{Cov}(X, Y) = E(XY) - E(X)E(Y) = \frac{1}{3} - \frac{49}{144} = -\frac{1}{144}.
\]
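The moments in part (d) can be checked numerically with the same kind of double integral; a minimal sketch:

    from scipy.integrate import dblquad
    f = lambda y, x: x + y                                       # joint density on the unit square
    EX, _  = dblquad(lambda y, x: x * f(y, x), 0, 1, 0, 1)       # E(X) ≈ 7/12
    EXY, _ = dblquad(lambda y, x: x * y * f(y, x), 0, 1, 0, 1)   # E(XY) ≈ 1/3
    print(EX, EXY, EXY - EX**2)      # Cov(X, Y) ≈ −1/144, using E(Y) = E(X) by symmetry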
Problem 21.
Standardize:
\[
P\left(\sum_i X_i < 30\right) = P\left(\frac{\frac{1}{n}\sum_i X_i - \mu}{\sigma/\sqrt{n}} < \frac{30/n - \mu}{\sigma/\sqrt{n}}\right)
\approx P\left(Z < \frac{30/100 - 1/5}{1/30}\right) \quad\text{(by the central limit theorem)}
\]
\[
= P(Z < 3) = 1 - 0.0013 = 0.9987 \quad\text{(from the table)}.
\]
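The table value can be reproduced with scipy's standard normal CDF:

    from scipy.stats import norm
    print(norm.cdf(3))   # ≈ 0.99865, matching 1 − 0.0013 = 0.9987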
Problem 22. Let \(X = (X_1 + \cdots + X_{144})/144\), so \(E(X) = 2\) and \(\sigma_X = 2/12 = 1/6\) (since \(\sqrt{144} = 12\)).
A chain of algebra gives
\[
P(X_1 + \cdots + X_{144} > 264) = P\!\left(X > \frac{264}{144}\right) = P(X > 1.8333).
\]
Standardization gives
\[
P(X > 1.8333) = P\!\left(\frac{X - 2}{1/6} > \frac{1.8333 - 2}{1/6}\right) = P\!\left(\frac{X - 2}{1/6} > -1.0\right).
\]
Now, the central limit theorem says
\[
P\!\left(\frac{X - 2}{1/6} > -1.0\right) \approx P(Z > -1) = 0.84.
\]
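The individual distribution of the Xᵢ is not shown in this excerpt; assuming for illustration that each Xᵢ is exponential with mean 2 (which also has σ = 2), a simulation sketch reproduces the CLT approximation:

    import numpy as np
    rng = np.random.default_rng(4)
    # Assumed distribution: Exponential with mean 2, so E(Xi) = 2 and σ = 2.
    sums = rng.exponential(scale=2, size=(200_000, 144)).sum(axis=1)
    print((sums > 264).mean())   # ≈ 0.84, matching P(Z > −1)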
Problem 24. The data have mean x̄ = 65 and variance s² = 35.778. The number of
degrees of freedom is 9. We look up \(t_{9,.025} = 2.262\) in the t-table. The 95% confidence
interval is
\[
\left[\bar{x} - \frac{t_{9,.025}\, s}{\sqrt{n}},\ \bar{x} + \frac{t_{9,.025}\, s}{\sqrt{n}}\right] = \left[65 - 2.262\sqrt{3.5778},\ 65 + 2.262\sqrt{3.5778}\right] = [60.721,\ 69.279].
\]
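The interval can be reproduced with scipy's t distribution; a minimal sketch using the summary statistics above (n = 10):

    import numpy as np
    from scipy.stats import t
    xbar, s2, n = 65, 35.778, 10
    tcrit = t.ppf(0.975, df=n - 1)             # ≈ 2.262
    half = tcrit * np.sqrt(s2 / n)
    print(xbar - half, xbar + half)            # ≈ 60.72 and 69.28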
Problem 25. Suppose we have taken data x1 , . . . , xn with mean x̄. Remember in
these probabilities µ is a given (fixed) hypothesis.
\[
P(|\bar{x} - \mu| \le .5 \mid \mu) = .95 \;\Leftrightarrow\; P\!\left(\frac{|\bar{x} - \mu|}{\sigma/\sqrt{n}} < \frac{.5}{\sigma/\sqrt{n}} \,\Big|\, \mu\right) = .95 \;\Leftrightarrow\; P\!\left(|Z| < \frac{.5\sqrt{n}}{5}\right) = .95.
\]
Using the table, we have precisely that \(\frac{.5\sqrt{n}}{5} = 1.96\). So, \(n = (19.6)^2 \approx 384\).
If we use our rule of thumb that the .95 interval is ±2σ, we have \(\sqrt{n}/10 = 2 \Rightarrow n = 400\).
Problem 26. The rule of thumb is that a 95% confidence interval is x̄ ± 1/√n.
To be within 1% we need
\[
\frac{1}{\sqrt{n}} = .01 \;\Rightarrow\; n = 10000.
\]
Using \(z_{.025} = 1.96\) instead, the 95% confidence interval is
\[
\bar{x} \pm \frac{z_{.025}}{2\sqrt{n}}.
\]
To be within 1% we need
\[
\frac{z_{.025}}{2\sqrt{n}} = .01 \;\Rightarrow\; n = 9604.
\]
Note, we are using the standard Bernoulli approximation σ ≤ 1/2.
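Both sample-size calculations are one-liners; a minimal sketch:

    from scipy.stats import norm
    z = norm.ppf(0.975)                    # ≈ 1.96
    print((1 / 0.01)**2)                   # rule-of-thumb: n = 10000
    print((z / (2 * 0.01))**2)             # exact z-value: n ≈ 9604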
Problem 28. A 95% confidence means about 5% = 1/20 will be wrong. You’d
expect about 2 to be wrong.
With a probability p = .05 of being wrong, the number wrong follows a Binomial(40, p)
distribution. This has expected value 2 and standard deviation √(40(.05)(.95)) ≈ 1.38.
Being wrong 10 times is (10 − 2)/1.38 = 5.8 standard deviations from the mean. This
would be surprising.
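To quantify how surprising 10 wrong would be, we can compute the binomial tail directly; a minimal sketch:

    from scipy.stats import binom
    n, p = 40, 0.05
    print(binom.mean(n, p), binom.std(n, p))   # 2.0 and ≈ 1.38
    print(binom.sf(9, n, p))                   # P(10 or more wrong) ≈ 2e-5, i.e. very surprising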
We can take square roots to find the 95% confidence interval for σ: [1.9064, 3.3175].
The least squares fit is given by the values of a and b which minimize E². We solve
for them by setting the partial derivatives of E² with respect to a and b to 0. In R
we found that a = 1.0, b = 0.5.
(b) This is similar to part (a). The model is
\[
y_i = a x_{i,1} + b x_{i,2} + c + \varepsilon_i,
\]
where the errors εi are independent with mean 0 and the same variance for each i
(homoscedastic).
The total error squared is
\[
E^2 = \sum_{i=1}^{3} (y_i - a x_{i,1} - b x_{i,2} - c)^2 = (3 - a - 2b - c)^2 + (5 - 2a - 3b - c)^2 + (1 - 3a - c)^2.
\]
The least squares fit is given by the values of a, b and c which minimize E². We solve
for them by setting the partial derivatives of E² with respect to a, b and c to 0. In R
we found that a = 0.5, b = 1.5, c = −0.5.
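The fit in part (b) is easy to reproduce numerically. The data points (x₁, x₂, y) = (1, 2, 3), (2, 3, 5), (3, 0, 1) can be read off the squared-error terms above; this numpy sketch (the solutions used R) recovers the same coefficients:

    import numpy as np
    # Design matrix: columns are x1, x2, and a constant column for c.
    X = np.array([[1, 2, 1],
                  [2, 3, 1],
                  [3, 0, 1]], dtype=float)
    y = np.array([3, 5, 1], dtype=float)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coeffs)   # ≈ [ 0.5  1.5 -0.5], i.e. a = 0.5, b = 1.5, c = −0.5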
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.