Theory 2. Probability and Statistics (Textbook Chapters 2-3)

This document reviews probability and statistics concepts through an empirical example analyzing the relationship between class size and educational outcomes. It examines California test score data from 420 school districts, looking at average test scores and student-teacher ratios. Statistical analyses are conducted to estimate the difference in mean test scores between districts with small vs. large class sizes, test the hypothesis that this difference is zero, and construct a confidence interval for the difference. Key probability and statistical concepts like distributions, moments, hypothesis testing, and confidence intervals are also reviewed.


Review of Probability and Statistics

(SW Chapters 2, 3)

Empirical problem: Class size and educational output

• Policy question: What is the effect on test scores (or some other outcome measure) of reducing class size by one student per class? By 8 students per class?
• We must use data to find out (is there any way to answer this without data?)
The California Test Score Data Set

All K-6 and K-8 California school districts (n = 420)

Variables:
• 5th grade test scores (Stanford-9 achievement test, combined math and reading), district average
• Student-teacher ratio (STR) = number of students in the district divided by the number of full-time equivalent teachers
Initial look at the data:
(You should already know how to interpret this table)

This table doesn’t tell us anything about the relationship between test scores and the STR.
Do districts with smaller classes have higher test scores?
Scatterplot of test score v. student-teacher ratio

What does this figure show?
We need to get some numerical evidence on whether districts
with low STRs have higher test scores – but how?

1. Compare average test scores in districts with low STRs to those with high STRs (“estimation”)
2. Test the “null” hypothesis that the mean test scores in the two types of districts are the same, against the “alternative” hypothesis that they differ (“hypothesis testing”)
3. Estimate an interval for the difference in the mean test scores, high v. low STR districts (“confidence interval”)
Initial data analysis: Compare districts with “small” (STR <
20) and “large” (STR ≥ 20) class sizes:

Class size   Average score ($\bar{Y}$)   Standard deviation ($s_Y$)   n
Small        657.4                       19.4                         238
Large        650.0                       17.9                         182

1. Estimation of Δ = difference between group means
2. Test the hypothesis that Δ = 0
3. Construct a confidence interval for Δ
1. Estimation
$$\bar{Y}_{small} - \bar{Y}_{large} = \frac{1}{n_{small}}\sum_{i=1}^{n_{small}} Y_i - \frac{1}{n_{large}}\sum_{i=1}^{n_{large}} Y_i = 657.4 - 650.0 = 7.4$$

Is this a large difference in a real-world sense?

• Standard deviation across districts = 19.1
• Difference between 60th and 75th percentiles of test score distribution is 667.6 – 659.4 = 8.2
• Is this a big enough difference to be important for school reform discussions, for parents, or for a school committee?
2. Hypothesis testing

Difference-in-means test: compute the t-statistic,

$$t = \frac{\bar{Y}_s - \bar{Y}_l}{\sqrt{\dfrac{s_s^2}{n_s} + \dfrac{s_l^2}{n_l}}} = \frac{\bar{Y}_s - \bar{Y}_l}{SE(\bar{Y}_s - \bar{Y}_l)} \quad \text{(remember this?)}$$

where $SE(\bar{Y}_s - \bar{Y}_l)$ is the “standard error” of $\bar{Y}_s - \bar{Y}_l$, the subscripts $s$ and $l$ refer to “small” and “large” STR districts, and

$$s_s^2 = \frac{1}{n_s - 1}\sum_{i=1}^{n_s} (Y_i - \bar{Y}_s)^2 \quad \text{(etc.)}$$
Compute the difference-of-means t-statistic:

Size    $\bar{Y}$   $s_Y$   n
small   657.4       19.4    238
large   650.0       17.9    182

$$t = \frac{\bar{Y}_s - \bar{Y}_l}{\sqrt{\dfrac{s_s^2}{n_s} + \dfrac{s_l^2}{n_l}}} = \frac{657.4 - 650.0}{\sqrt{\dfrac{19.4^2}{238} + \dfrac{17.9^2}{182}}} = \frac{7.4}{1.83} = 4.05$$

|t| > 1.96, so reject (at the 5% significance level) the null hypothesis that the two means are the same.
3. Confidence interval

A 95% confidence interval for the difference between the means is

$$(\bar{Y}_s - \bar{Y}_l) \pm 1.96 \times SE(\bar{Y}_s - \bar{Y}_l) = 7.4 \pm 1.96 \times 1.83 = (3.8,\ 11.0)$$


Two equivalent statements:
1. The 95% confidence interval for Δ doesn’t include 0;
2. The hypothesis that Δ = 0 is rejected at the 5% level.
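All three steps can be reproduced in a few lines of code from the summary statistics alone. Here is a minimal Python sketch (not from the original slides; the variable names are ours) that recomputes the estimate, the t-statistic, the large-n p-value, and the 95% confidence interval:

```python
import math
from scipy import stats  # used only for the standard normal CDF

# Summary statistics from the table above (small vs. large STR districts)
y_s, s_s, n_s = 657.4, 19.4, 238  # small classes (STR < 20)
y_l, s_l, n_l = 650.0, 17.9, 182  # large classes (STR >= 20)

diff = y_s - y_l                             # estimate of Delta
se = math.sqrt(s_s**2 / n_s + s_l**2 / n_l)  # SE of the difference in means
t = diff / se                                # difference-in-means t-statistic
p = 2 * (1 - stats.norm.cdf(abs(t)))         # two-sided p-value, normal approximation
ci = (diff - 1.96 * se, diff + 1.96 * se)    # 95% confidence interval

print(f"Delta = {diff:.1f}, SE = {se:.2f}, t = {t:.2f}, p = {p:.5f}")
print(f"95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")  # approximately (3.8, 11.0)
```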

Review of Statistical Theory

1. The probability framework for statistical inference
2. Estimation
3. Testing
4. Confidence Intervals

The probability framework for statistical inference

(a) Population, random variable, and distribution
(b) Moments of a distribution (mean, variance, standard deviation, covariance, correlation)
(c) Conditional distributions and conditional means
(d) Distribution of a sample of data drawn randomly from a population: $Y_1, \ldots, Y_n$
(a) Population, random variable, and distribution

Population
• The group or collection of all possible entities of interest
(school districts)
• We will think of populations as infinitely large (∞ is an
approximation to “very big”)

Random variable Y
• Numerical summary of a random outcome (district
average test score, district STR)

Population distribution of Y

• The probabilities of different values of Y that occur in the population, e.g., Pr[Y = 650] (when Y is discrete)
• or: the probabilities of sets of these values, e.g., Pr[640 ≤ Y ≤ 660] (when Y is continuous).
(b) Moments of a population distribution: mean, variance,
standard deviation, covariance, correlation

mean = expected value (expectation) of Y = $E(Y)$ = $\mu_Y$
  = long-run average value of Y over repeated realizations of Y

variance = $E(Y - \mu_Y)^2$ = $\sigma_Y^2$
  = measure of the squared spread of the distribution

standard deviation = $\sqrt{\text{variance}}$ = $\sigma_Y$
Moments, ctd.
$$\text{skewness} = \frac{E\big[(Y - \mu_Y)^3\big]}{\sigma_Y^3}$$

= measure of asymmetry of a distribution
• skewness = 0: distribution is symmetric
• skewness > (<) 0: distribution has long right (left) tail

$$\text{kurtosis} = \frac{E\big[(Y - \mu_Y)^4\big]}{\sigma_Y^4}$$

= measure of mass in tails
= measure of probability of large values
• kurtosis = 3: normal distribution
• kurtosis > 3: heavy tails (“leptokurtotic”)
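As a quick illustration (a sketch, not part of the original slides), sample skewness and kurtosis can be computed with scipy; the lognormal distribution below is just a convenient example of a long right tail:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A symmetric distribution vs. a right-skewed, heavy-tailed one
draws = {
    "normal": rng.normal(size=100_000),       # skewness ~ 0, kurtosis ~ 3
    "lognormal": rng.lognormal(size=100_000), # long right tail
}

for name, y in draws.items():
    skew = stats.skew(y)
    kurt = stats.kurtosis(y, fisher=False)  # fisher=False: normal has kurtosis 3
    print(f"{name:9s}: skewness = {skew:5.2f}, kurtosis = {kurt:6.2f}")
```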
2 random variables: joint distributions and covariance

• Random variables X and Z have a joint distribution
• The covariance between X and Z is

$$\text{cov}(X,Z) = E[(X - \mu_X)(Z - \mu_Z)] = \sigma_{XZ}$$

• The covariance is a measure of the linear association between X and Z; its units are units of X × units of Z
• cov(X,Z) > 0 means a positive relation between X and Z
• If X and Z are independently distributed, then cov(X,Z) = 0 (but not vice versa!!)
• The covariance of a r.v. with itself is its variance:

$$\text{cov}(X,X) = E[(X - \mu_X)(X - \mu_X)] = E[(X - \mu_X)^2] = \sigma_X^2$$
The covariance between Test Score and STR is negative: so is the correlation…
The correlation coefficient is defined in terms of the
covariance:

$$\text{corr}(X,Z) = \frac{\text{cov}(X,Z)}{\sqrt{\text{var}(X)\,\text{var}(Z)}} = \frac{\sigma_{XZ}}{\sigma_X \sigma_Z} = r_{XZ}$$

• –1 ≤ corr(X,Z) ≤ 1
• corr(X,Z) = 1 means perfect positive linear association
• corr(X,Z) = –1 means perfect negative linear association
• corr(X,Z) = 0 means no linear association
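A small numerical sketch (ours, not from the slides) makes the units point concrete: the covariance depends on the scales of X and Z, while the correlation is unit-free and bounded by 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two linearly related variables: Z = 2*X + noise
x = rng.normal(size=10_000)
z = 2 * x + rng.normal(size=10_000)

cov_xz = np.cov(x, z)[0, 1]        # off-diagonal entry of the 2x2 covariance matrix
corr_xz = np.corrcoef(x, z)[0, 1]  # always between -1 and 1

print(f"cov(X,Z)  = {cov_xz:.2f}")   # ~2.0, in units of X times units of Z
print(f"corr(X,Z) = {corr_xz:.2f}")  # ~0.89, unit-free

# Rescaling X changes the covariance but not the correlation
print(f"cov(10X,Z)  = {np.cov(10 * x, z)[0, 1]:.2f}")       # ~20
print(f"corr(10X,Z) = {np.corrcoef(10 * x, z)[0, 1]:.2f}")  # still ~0.89
```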

The correlation coefficient measures linear association

(c) Conditional distributions and conditional means

Conditional distributions
• The distribution of Y, given value(s) of some other
random variable, X
• Ex: the distribution of test scores, given that STR < 20
Conditional expectations and conditional moments
• conditional mean = mean of conditional distribution
= E(Y|X = x) (important concept and notation)
• conditional variance = variance of conditional distribution
• Example: E(Test scores|STR < 20) = the mean of test
scores among districts with small class sizes
The difference in means is the difference between the means
of two conditional distributions:
Conditional mean, ctd.

Δ = E(Test scores|STR < 20) – E(Test scores|STR ≥ 20)

Other examples of conditional means:


• Wages of all female workers (Y = wages, X = gender)
• Mortality rate of those given an experimental treatment (Y
= live/die; X = treated/not treated)
• If E(X|Z) = const, then corr(X,Z) = 0 (not necessarily vice
versa however)
The conditional mean is a (possibly new) term for the
familiar idea of the group mean
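To make the notation concrete, here is a short sketch (simulated data standing in for the California data set; the numbers are made up) computing conditional means with boolean masks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical district data: 420 districts, scores that fall with STR
str_ratio = rng.uniform(14, 26, size=420)
test_score = 700 - 2 * str_ratio + rng.normal(0, 15, size=420)

small = str_ratio < 20  # the conditioning event: STR < 20

mean_small = test_score[small].mean()   # E(TestScore | STR < 20)
mean_large = test_score[~small].mean()  # E(TestScore | STR >= 20)

print(f"E(Y | STR < 20)  = {mean_small:.1f}")
print(f"E(Y | STR >= 20) = {mean_large:.1f}")
print(f"Delta            = {mean_small - mean_large:.1f}")
```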

(d) Distribution of a sample of data drawn randomly from a population: $Y_1, \ldots, Y_n$

We will assume simple random sampling
• Choose an individual (district, entity) at random from the population

Randomness and data
• Prior to sample selection, the value of Y is random because the individual selected is random
• Once the individual is selected and the value of Y is observed, then Y is just a number – not random
• The data set is $(Y_1, Y_2, \ldots, Y_n)$, where $Y_i$ = value of Y for the i-th individual (district, entity) sampled
Distribution of $Y_1, \ldots, Y_n$ under simple random sampling
• Because individuals #1 and #2 are selected at random, the value of $Y_1$ has no information content for $Y_2$. Thus:
  o $Y_1$ and $Y_2$ are independently distributed
  o $Y_1$ and $Y_2$ come from the same distribution; that is, $Y_1$, $Y_2$ are identically distributed
  o That is, under simple random sampling, $Y_1$ and $Y_2$ are independently and identically distributed (i.i.d.)
  o More generally, under simple random sampling, $\{Y_i\}$, i = 1, …, n, are i.i.d.

This framework allows rigorous statistical inferences about moments of population distributions using a sample of data from that population …
1. The probability framework for statistical inference
2. Estimation
3. Testing
4. Confidence Intervals

Estimation
$\bar{Y}$ is the natural estimator of the mean. But:
(a) What are the properties of $\bar{Y}$?
(b) Why should we use $\bar{Y}$ rather than some other estimator?
• $Y_1$ (the first observation)
• maybe unequal weights – not simple average
• median($Y_1, \ldots, Y_n$)
The starting point is the sampling distribution of $\bar{Y}$ …
(a) The sampling distribution of $\bar{Y}$

$\bar{Y}$ is a random variable, and its properties are determined by the sampling distribution of $\bar{Y}$
• The individuals in the sample are drawn at random.
• Thus the values of $(Y_1, \ldots, Y_n)$ are random
• Thus functions of $(Y_1, \ldots, Y_n)$, such as $\bar{Y}$, are random: had a different sample been drawn, they would have taken on a different value
• The distribution of $\bar{Y}$ over different possible samples of size n is called the sampling distribution of $\bar{Y}$.
• The mean and variance of $\bar{Y}$ are the mean and variance of its sampling distribution, $E(\bar{Y})$ and $\text{var}(\bar{Y})$.
• The concept of the sampling distribution underpins all of econometrics.
The sampling distribution of $\bar{Y}$, ctd.

Example: Suppose Y takes on 0 or 1 (a Bernoulli random variable) with the probability distribution

Pr[Y = 0] = .22, Pr[Y = 1] = .78

Then E(Y) = p×1 + (1 – p)×0 = p = .78, and

$\sigma_Y^2$ = E[Y – E(Y)]² = p(1 – p) [remember this?] = .78×(1 – .78) = 0.1716

The sampling distribution of $\bar{Y}$ depends on n. Consider n = 2. The sampling distribution of $\bar{Y}$ is

Pr($\bar{Y}$ = 0) = .22² = .0484
Pr($\bar{Y}$ = ½) = 2×.22×.78 = .3432
Pr($\bar{Y}$ = 1) = .78² = .6084
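This small exact calculation is easy to check by simulation. A sketch (ours, not from the slides) drawing many samples of size n = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.78, 2, 1_000_000

# Draw many samples of size n and compute each sample's mean
samples = rng.binomial(1, p, size=(reps, n))  # each row is one Bernoulli sample
ybar = samples.mean(axis=1)                   # sampling distribution of Y-bar

# Empirical frequencies should match the exact probabilities above
for value, exact in [(0.0, 0.22**2), (0.5, 2 * 0.22 * 0.78), (1.0, 0.78**2)]:
    freq = np.mean(ybar == value)
    print(f"Pr(Ybar = {value}): simulated {freq:.4f}, exact {exact:.4f}")
```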
The sampling distribution of $\bar{Y}$ when Y is Bernoulli (p = .78):
Things we want to know about the sampling distribution:

• What is the mean of $\bar{Y}$?
  o If $E(\bar{Y})$ = true μ = .78, then $\bar{Y}$ is an unbiased estimator of μ
• What is the variance of $\bar{Y}$?
  o How does var($\bar{Y}$) depend on n (the famous 1/n formula)?
• Does $\bar{Y}$ become close to μ when n is large?
  o Law of large numbers: $\bar{Y}$ is a consistent estimator of μ
• $\bar{Y}$ – μ appears bell shaped for n large… is this generally true?
  o In fact, $\bar{Y}$ – μ is approximately normally distributed for n large (Central Limit Theorem)
The mean and variance of the sampling distribution of $\bar{Y}$

General case – that is, for $Y_i$ i.i.d. from any distribution, not just Bernoulli:

mean: $E(\bar{Y}) = E\!\left(\frac{1}{n}\sum_{i=1}^n Y_i\right) = \frac{1}{n}\sum_{i=1}^n E(Y_i) = \frac{1}{n}\sum_{i=1}^n \mu_Y = \mu_Y$

variance:
$$\text{var}(\bar{Y}) = E[\bar{Y} - E(\bar{Y})]^2 = E[\bar{Y} - \mu_Y]^2 = E\left[\left(\frac{1}{n}\sum_{i=1}^n Y_i\right) - \mu_Y\right]^2 = E\left[\frac{1}{n}\sum_{i=1}^n (Y_i - \mu_Y)\right]^2$$
so

$$\text{var}(\bar{Y}) = E\left[\frac{1}{n}\sum_{i=1}^n (Y_i - \mu_Y)\right]^2 = E\left\{\left[\frac{1}{n}\sum_{i=1}^n (Y_i - \mu_Y)\right]\left[\frac{1}{n}\sum_{j=1}^n (Y_j - \mu_Y)\right]\right\}$$
$$= \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E\big[(Y_i - \mu_Y)(Y_j - \mu_Y)\big] = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \text{cov}(Y_i, Y_j) = \frac{1}{n^2}\sum_{i=1}^n \sigma_Y^2 = \frac{\sigma_Y^2}{n}$$

(the i ≠ j terms drop out because, under i.i.d. sampling, $\text{cov}(Y_i, Y_j) = 0$ for i ≠ j)
Mean and variance of sampling distribution of $\bar{Y}$, ctd.

$$E(\bar{Y}) = \mu_Y \qquad \text{var}(\bar{Y}) = \frac{\sigma_Y^2}{n}$$

Implications:
1. $\bar{Y}$ is an unbiased estimator of $\mu_Y$ (that is, $E(\bar{Y}) = \mu_Y$)
2. var($\bar{Y}$) is inversely proportional to n
   • the spread of the sampling distribution is proportional to $1/\sqrt{n}$
   • thus the sampling uncertainty associated with $\bar{Y}$ is proportional to $1/\sqrt{n}$ (larger samples, less uncertainty, but square-root law)
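A quick Monte Carlo sketch (our illustration, reusing the Bernoulli example from above) confirms the 1/n formula:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.78
var_y = p * (1 - p)  # population variance of a Bernoulli(p) = 0.1716

# Monte Carlo check that var(Ybar) = sigma_Y^2 / n for several sample sizes
for n in [2, 10, 100, 1000]:
    ybar = rng.binomial(1, p, size=(100_000, n)).mean(axis=1)
    print(f"n = {n:4d}: simulated var(Ybar) = {ybar.var():.5f}, "
          f"sigma^2/n = {var_y / n:.5f}")
```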
The sampling distribution of $\bar{Y}$ when n is large

For small sample sizes, the distribution of $\bar{Y}$ is complicated, but if n is large, the sampling distribution is simple!
1. As n increases, the distribution of $\bar{Y}$ becomes more tightly centered around $\mu_Y$ (the Law of Large Numbers)
2. Moreover, the distribution of $\bar{Y} - \mu_Y$ becomes normal (the Central Limit Theorem)
The Law of Large Numbers:

An estimator is consistent if the probability that it falls within an interval of the true population value tends to one as the sample size increases.

If $(Y_1, \ldots, Y_n)$ are i.i.d. and $\sigma_Y^2 < \infty$, then $\bar{Y}$ is a consistent estimator of $\mu_Y$; that is,

$$\Pr[|\bar{Y} - \mu_Y| < \varepsilon] \to 1 \text{ as } n \to \infty,$$

which can be written $\bar{Y} \xrightarrow{p} \mu_Y$ (“$\bar{Y} \xrightarrow{p} \mu_Y$” means “$\bar{Y}$ converges in probability to $\mu_Y$”).

(The math: as $n \to \infty$, $\text{var}(\bar{Y}) = \sigma_Y^2/n \to 0$, which implies $\Pr[|\bar{Y} - \mu_Y| < \varepsilon] \to 1$.)
The Central Limit Theorem (CLT):

If $(Y_1, \ldots, Y_n)$ are i.i.d. and $0 < \sigma_Y^2 < \infty$, then when n is large the distribution of $\bar{Y}$ is well approximated by a normal distribution.
• $\bar{Y}$ is approximately distributed $N(\mu_Y, \sigma_Y^2/n)$ (“normal distribution with mean $\mu_Y$ and variance $\sigma_Y^2/n$”)
• $\sqrt{n}(\bar{Y} - \mu_Y)/\sigma_Y$ is approximately distributed N(0,1) (standard normal)
• That is, “standardized” $\bar{Y}$ = $\dfrac{\bar{Y} - E(\bar{Y})}{\sqrt{\text{var}(\bar{Y})}} = \dfrac{\bar{Y} - \mu_Y}{\sigma_Y/\sqrt{n}}$ is approximately distributed as N(0,1)
• The larger is n, the better is the approximation.
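The CLT is easy to see by simulation. In this sketch (ours, not from the slides), standardized means of Bernoulli(p = .78) draws match a standard normal tail probability:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = 0.78
mu, sigma = p, np.sqrt(p * (1 - p))

# Standardized sample means from a skewed Bernoulli population
n = 100
ybar = rng.binomial(1, p, size=(100_000, n)).mean(axis=1)
z = np.sqrt(n) * (ybar - mu) / sigma  # should be approximately N(0,1)

# Compare a tail probability with the standard normal
print(f"simulated Pr(|Z| > 1.96) = {np.mean(np.abs(z) > 1.96):.4f}")
print(f"N(0,1)    Pr(|Z| > 1.96) = {2 * (1 - stats.norm.cdf(1.96)):.4f}")  # 0.05
```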
Sampling distribution of $\bar{Y}$ when Y is Bernoulli, p = 0.78:

Same example: sampling distribution of $\dfrac{\bar{Y} - E(\bar{Y})}{\sqrt{\text{var}(\bar{Y})}}$:
Summary: The Sampling Distribution of $\bar{Y}$

For $Y_1, \ldots, Y_n$ i.i.d. with $0 < \sigma_Y^2 < \infty$:
• The exact (finite sample) sampling distribution of $\bar{Y}$ has mean $\mu_Y$ (“$\bar{Y}$ is an unbiased estimator of $\mu_Y$”) and variance $\sigma_Y^2/n$
• Other than its mean and variance, the exact distribution of $\bar{Y}$ is complicated and depends on the distribution of Y (the population distribution)
• When n is large, the sampling distribution simplifies:
  o $\bar{Y} \xrightarrow{p} \mu_Y$ (Law of large numbers)
  o $\dfrac{\bar{Y} - E(\bar{Y})}{\sqrt{\text{var}(\bar{Y})}}$ is approximately N(0,1) (CLT)
(b) Why Use $\bar{Y}$ To Estimate $\mu_Y$?

• $\bar{Y}$ is unbiased: $E(\bar{Y}) = \mu_Y$
• $\bar{Y}$ is consistent: $\bar{Y} \xrightarrow{p} \mu_Y$
• $\bar{Y}$ is the “least squares” estimator of $\mu_Y$; $\bar{Y}$ solves

$$\min_m \sum_{i=1}^n (Y_i - m)^2,$$

so $\bar{Y}$ minimizes the sum of squared “residuals”

optional derivation (also see App. 3.2)

$$\frac{d}{dm}\sum_{i=1}^n (Y_i - m)^2 = \sum_{i=1}^n \frac{d}{dm}(Y_i - m)^2 = -2\sum_{i=1}^n (Y_i - m)$$

Set the derivative to zero and denote the optimal value of m by $\hat{m}$:

$$\sum_{i=1}^n Y_i = \sum_{i=1}^n \hat{m} = n\hat{m}, \quad \text{or} \quad \hat{m} = \frac{1}{n}\sum_{i=1}^n Y_i = \bar{Y}$$
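As a numerical check (a sketch, not part of the slides), one can minimize the sum of squared residuals directly and confirm that the minimizer is the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=1_000)  # any sample will do

# Minimize the sum of squared residuals over m
result = minimize_scalar(lambda m: np.sum((y - m) ** 2))

print(f"argmin m    = {result.x:.6f}")
print(f"sample mean = {y.mean():.6f}")  # identical up to numerical tolerance
```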

Why Use $\bar{Y}$ To Estimate $\mu_Y$, ctd.

• $\bar{Y}$ has a smaller variance than all other linear unbiased estimators: consider the estimator $\hat{\mu}_Y = \frac{1}{n}\sum_{i=1}^n a_i Y_i$, where $\{a_i\}$ are such that $\hat{\mu}_Y$ is unbiased; then $\text{var}(\bar{Y}) \le \text{var}(\hat{\mu}_Y)$ (proof: SW, Ch. 17)
• $\bar{Y}$ isn’t the only estimator of $\mu_Y$ – can you think of a time you might want to use the median instead?
1. The probability framework for statistical inference
2. Estimation
3. Hypothesis Testing
4. Confidence intervals

Hypothesis Testing

The hypothesis testing problem (for the mean): make a provisional decision, based on the evidence at hand, whether a null hypothesis is true, or instead that some alternative hypothesis is true. That is, test:
• $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) > \mu_{Y,0}$ (1-sided, >)
• $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) < \mu_{Y,0}$ (1-sided, <)
• $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) \ne \mu_{Y,0}$ (2-sided)

Some terminology for testing statistical hypotheses:
p-value = probability of drawing a statistic (e.g. Y ) at least as
adverse to the null as the value actually computed with your
data, assuming that the null hypothesis is true.

The significance level of a test is a pre-specified probability of incorrectly rejecting the null, when the null is true.

Calculating the p-value based on $\bar{Y}$:

$$\text{p-value} = \Pr_{H_0}\big[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\big]$$

where $\bar{Y}^{act}$ is the value of $\bar{Y}$ actually observed (nonrandom)
Calculating the p-value, ctd.

• To compute the p-value, you need to know the sampling distribution of $\bar{Y}$, which is complicated if n is small.
• If n is large, you can use the normal approximation (CLT):

$$\text{p-value} = \Pr_{H_0}\big[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\big]$$
$$= \Pr_{H_0}\left[\,\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right|\,\right] = \Pr_{H_0}\left[\,\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_{\bar{Y}}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_{\bar{Y}}}\right|\,\right]$$
≅ probability under left+right N(0,1) tails,

where $\sigma_{\bar{Y}}$ = std. dev. of the distribution of $\bar{Y}$ = $\sigma_Y/\sqrt{n}$.
Calculating the p-value with $\sigma_Y$ known:

• For large n, the p-value = the probability that a N(0,1) random variable falls outside $|(\bar{Y}^{act} - \mu_{Y,0})/\sigma_{\bar{Y}}|$
• In practice, $\sigma_{\bar{Y}}$ is unknown – it must be estimated
Estimator of the variance of Y:

$$s_Y^2 = \frac{1}{n-1}\sum_{i=1}^n (Y_i - \bar{Y})^2 = \text{“sample variance of }Y\text{”}$$

Fact: If $(Y_1, \ldots, Y_n)$ are i.i.d. and $E(Y^4) < \infty$, then $s_Y^2 \xrightarrow{p} \sigma_Y^2$

Why does the law of large numbers apply?
• Because $s_Y^2$ is a sample average; see Appendix 3.3
• Technical note: we assume $E(Y^4) < \infty$ because here the average is not of $Y_i$, but of its square; see App. 3.3
Computing the p-value with $\sigma_Y^2$ estimated:

$$\text{p-value} = \Pr_{H_0}\big[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\big]$$
$$= \Pr_{H_0}\left[\,\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_Y/\sqrt{n}}\right|\,\right] \cong \Pr_{H_0}\left[\,\left|\frac{\bar{Y} - \mu_{Y,0}}{s_Y/\sqrt{n}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{s_Y/\sqrt{n}}\right|\,\right] \quad (\text{large } n)$$

so

$$\text{p-value} = \Pr_{H_0}\big[\,|t| > |t^{act}|\,\big] \quad (\sigma_Y^2 \text{ estimated})$$
≅ probability under normal tails outside $|t^{act}|$,

where $t = \dfrac{\bar{Y} - \mu_{Y,0}}{s_Y/\sqrt{n}}$ (the usual t-statistic)
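Here is a minimal sketch of this calculation (simulated data; the numbers are made up) using the normal approximation to the p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=655, scale=19, size=420)  # hypothetical sample
mu_0 = 650                                   # null-hypothesized mean

n = len(y)
t = (y.mean() - mu_0) / (y.std(ddof=1) / np.sqrt(n))  # s_Y uses the n-1 divisor
p = 2 * (1 - stats.norm.cdf(abs(t)))                  # area in both N(0,1) tails

print(f"t = {t:.2f}, p-value = {p:.4f}")
```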

What is the link between the p-value and the significance
level?

The significance level is prespecified. For example, if the prespecified significance level is 5%,
• you reject the null hypothesis if |t| ≥ 1.96
• equivalently, you reject if p ≤ 0.05.
• The p-value is sometimes called the marginal significance level.
• Often, it is better to communicate the p-value than simply whether a test rejects or not – the p-value contains more information than the “yes/no” statement about whether the test rejects.
At this point, you might be wondering,...
What happened to the t-table and the degrees of freedom?

Digression: the Student t distribution

If $Y_i$, i = 1, …, n is i.i.d. $N(\mu_Y, \sigma_Y^2)$, then the t-statistic has the Student t-distribution with n – 1 degrees of freedom.
The critical values of the Student t-distribution are tabulated in the back of all statistics books. Remember the recipe?
1. Compute the t-statistic
2. Compute the degrees of freedom, which is n – 1
3. Look up the 5% critical value
4. If the t-statistic exceeds (in absolute value) this critical value, reject the null hypothesis.
Comments on this recipe and the Student t-distribution

1. The theory of the t-distribution was one of the early triumphs of mathematical statistics. It is astounding, really: if Y is i.i.d. normal, then you can know the exact, finite-sample distribution of the t-statistic – it is the Student t. So, you can construct confidence intervals (using the Student t critical value) that have exactly the right coverage rate, no matter what the sample size. This result was really useful in times when “computer” was a job title, data collection was expensive, and the number of observations was perhaps a dozen. It is also a conceptually beautiful result, and the math is beautiful too – which is probably why stats profs love to teach the t-distribution. But….
Comments on Student t distribution, ctd.

2. If the sample size is moderate (several dozen) or large (hundreds or more), the difference between the t-distribution and N(0,1) critical values is negligible. Here are some 5% critical values for 2-sided tests:

degrees of freedom (n – 1)   5% t-distribution critical value
10                           2.23
20                           2.09
30                           2.04
60                           2.00
∞                            1.96
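These critical values can be reproduced with scipy (a side note, not part of the original slides):

```python
from scipy import stats

# Two-sided 5% critical values: the 97.5th percentile of each distribution
for df in [10, 20, 30, 60]:
    print(f"df = {df:3d}: t critical value = {stats.t.ppf(0.975, df):.2f}")
print(f"normal  : critical value   = {stats.norm.ppf(0.975):.2f}")  # 1.96
```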
Comments on Student t distribution, ctd.

3. So, the Student t-distribution is only relevant when the sample size is very small; but in that case, for it to be correct, you must be sure that the population distribution of Y is normal. In economic data, the normality assumption is rarely credible. Here are the distributions of some economic data.
• Do you think earnings are normally distributed?
• Suppose you have a sample of n = 10 observations from one of these distributions – would you feel comfortable using the Student t distribution?
Comments on Student t distribution, ctd.

4. You might not know this. Consider the t-statistic testing the hypothesis that two means (groups s, l) are equal:

$$t = \frac{\bar{Y}_s - \bar{Y}_l}{\sqrt{\dfrac{s_s^2}{n_s} + \dfrac{s_l^2}{n_l}}} = \frac{\bar{Y}_s - \bar{Y}_l}{SE(\bar{Y}_s - \bar{Y}_l)}$$

Even if the population distribution of Y in the two groups is normal, this statistic doesn’t have a Student t distribution!
There is a statistic testing this hypothesis that has an exact Student t distribution, the “pooled variance” t-statistic – see SW (Section 3.6) – however the pooled variance t-statistic is only valid if the variances of the normal distributions are the same in the two groups. Would you expect this to be true, say, for men’s v. women’s wages?
The Student-t distribution – summary

• The assumption that Y is distributed $N(\mu_Y, \sigma_Y^2)$ is rarely plausible in practice (income? number of children?)
• For n > 30, the t-distribution and N(0,1) are very close (as n grows large, the $t_{n-1}$ distribution converges to N(0,1))
• The t-distribution is an artifact from days when sample sizes were small and “computers” were people
• For historical reasons, statistical software typically uses the t-distribution to compute p-values – but this is irrelevant when the sample size is moderate or large.
• For these reasons, in this class we will focus on the large-n approximation given by the CLT
1. The probability framework for statistical inference
2. Estimation
3. Testing
4. Confidence intervals

Confidence Intervals

A 95% confidence interval for $\mu_Y$ is an interval that contains the true value of $\mu_Y$ in 95% of repeated samples.

Digression: What is random here? The values of $Y_1, \ldots, Y_n$ and thus any functions of them – including the confidence interval. The confidence interval will differ from one sample to the next. The population parameter, $\mu_Y$, is not random; we just don’t know it.
Confidence intervals, ctd.

A 95% confidence interval can always be constructed as the set of values of $\mu_Y$ not rejected by a hypothesis test with a 5% significance level.

$$\left\{\mu_Y: \left|\frac{\bar{Y} - \mu_Y}{s_Y/\sqrt{n}}\right| \le 1.96\right\} = \left\{\mu_Y: -1.96 \le \frac{\bar{Y} - \mu_Y}{s_Y/\sqrt{n}} \le 1.96\right\}$$
$$= \left\{\mu_Y: -1.96\frac{s_Y}{\sqrt{n}} \le \bar{Y} - \mu_Y \le 1.96\frac{s_Y}{\sqrt{n}}\right\} = \left\{\mu_Y \in \left(\bar{Y} - 1.96\frac{s_Y}{\sqrt{n}},\; \bar{Y} + 1.96\frac{s_Y}{\sqrt{n}}\right)\right\}$$

This confidence interval relies on the large-n results that $\bar{Y}$ is approximately normally distributed and $s_Y^2 \xrightarrow{p} \sigma_Y^2$.
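The “repeated samples” interpretation can be checked by simulation. In this sketch (ours; an exponential population chosen deliberately for its non-normality), the large-n interval covers the true mean close to 95% of the time:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_y, n, reps = 5.0, 100, 10_000

covered = 0
for _ in range(reps):
    y = rng.exponential(scale=mu_y, size=n)  # a decidedly non-normal population
    half_width = 1.96 * y.std(ddof=1) / np.sqrt(n)
    covered += abs(y.mean() - mu_y) <= half_width  # does the CI contain mu_Y?

print(f"coverage of the nominal 95% CI: {covered / reps:.3f}")  # close to 0.95
```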

Summary:
From the two assumptions of:
(1) simple random sampling of a population, that is,
{Yi, i =1,…,n} are i.i.d.
(2) 0 < E(Y4) < ∞
we developed, for large samples (large n):
• Theory of estimation (sampling distribution of Y )
• Theory of hypothesis testing (large-n distribution of the t-statistic and computation of the p-value)
• Theory of confidence intervals (constructed by inverting the test statistic)
Are assumptions (1) & (2) plausible in practice? Yes

Let’s go back to the original policy question:
What is the effect on test scores of reducing STR by one
student/class?
Have we answered this question?

