Chapter 3

Interval Estimation and Hypothesis Testing

Principles of
Econometrics
Fifth Edition
R. Carter Hill William E. Griffiths Guay C. Lim
Chapter Outline
 3.1 Interval Estimation

 3.2 Hypothesis Tests

 3.3 Rejection Regions for Specific Alternatives

 3.4 Examples of Hypothesis Tests

 3.5 The p-Value

 3.6 Linear Combinations of Parameters



3.1 Interval Estimation
 Interval estimation proposes a range of values in which the true
parameter is likely to fall.
 Providing a range of values gives a sense of what the parameter
value might be, and the precision with which we have estimated
it.
 Such intervals are often called confidence intervals.

 We prefer to call them interval estimates because the term “confidence” is widely misunderstood and misused.
3.1.1 The t-Distribution (1 of 5)
 The normal distribution of b2, the least squares estimator of β2, is:

$b_2 \mid \mathbf{x} \sim N\!\left(\beta_2,\ \dfrac{\sigma^2}{\sum (x_i - \bar{x})^2}\right)$

 A standardized normal random variable is obtained from b2 by subtracting its mean and dividing by its standard deviation:

$Z = \dfrac{b_2 - \beta_2}{\sqrt{\sigma^2 / \sum (x_i - \bar{x})^2}} \sim N(0, 1)$



3.1.1 The t-Distribution (2 of 5)
 We know that:

$P(-1.96 \le Z \le 1.96) = 0.95$

 Substituting for Z:

$P\!\left(-1.96 \le \dfrac{b_2 - \beta_2}{\sqrt{\sigma^2 / \sum (x_i - \bar{x})^2}} \le 1.96\right) = 0.95$

 Rearranging:

$P\!\left(b_2 - 1.96\sqrt{\sigma^2 / \sum (x_i - \bar{x})^2}\ \le\ \beta_2\ \le\ b_2 + 1.96\sqrt{\sigma^2 / \sum (x_i - \bar{x})^2}\right) = 0.95$



3.1.1 The t-Distribution (3 of 5)
 The two end points $b_2 \pm 1.96\sqrt{\sigma^2 / \sum (x_i - \bar{x})^2}$ provide an interval estimator.

 In repeated sampling, 95% of the intervals constructed this way will contain the true value of the parameter β2.

 This easy derivation of an interval estimator rests on assumption SR6 and on the assumption that we know the variance of the error term, σ2.



3.1.1 The t-Distribution (4 of 5)

 Replacing σ2 with its estimator $\hat{\sigma}^2$ creates a random variable t:

$t = \dfrac{b_2 - \beta_2}{\sqrt{\hat{\sigma}^2 / \sum (x_i - \bar{x})^2}} = \dfrac{b_2 - \beta_2}{\sqrt{\widehat{\mathrm{var}}(b_2)}} = \dfrac{b_2 - \beta_2}{\mathrm{se}(b_2)} \sim t_{(N-2)} \quad (3.2)$

 This substitution changes the probability distribution from standard normal to a t-distribution with N − 2 degrees of freedom.

 We denote this as $t \sim t_{(N-2)}$.



3.1.1 The t-Distribution (5 of 5)

 The t-distribution is a bell shaped curve centered at zero.

 It looks like the standard normal distribution, except it is more

spread out, with a larger variance and thicker tails.

 The shape of the t-distribution is controlled by a single parameter

called the degrees of freedom, often abbreviated as df.
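The thicker tails can be seen numerically. Below is a small illustrative sketch in Python (not part of the original slides), comparing the 97.5th-percentile critical value of the t-distribution with the standard normal value as the degrees of freedom grow.

```python
# Illustrative sketch (not from the slides): t critical values approach the
# standard normal value 1.960 as the degrees of freedom increase.
from scipy import stats

for df in (3, 10, 38, 1000):
    t_crit = stats.t.ppf(0.975, df)          # 97.5th percentile of t(df)
    print(f"df = {df:4d}   t(0.975, df) = {t_crit:.3f}")

print(f"standard normal: z(0.975) = {stats.norm.ppf(0.975):.3f}")
```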



3.1.2 Obtaining Interval Estimates (1
of 4)
 We can find a “critical value” tc from a t-distribution such that

$P(t \ge t_c) = P(t \le -t_c) = \alpha/2$

where α is a probability, often taken to be α = 0.01 or α = 0.05.

 The critical value tc for m degrees of freedom is the percentile value $t_{(1-\alpha/2,\ m)}$.



3.1.2 Obtaining Interval Estimates (2
of 4)
 Each shaded ‘‘tail’’ area contains α/2 of the probability, so that 1 − α of the probability is contained in the center portion.
 Consequently, we can make the probability statement:

(3.4) $P(-t_c \le t \le t_c) = 1 - \alpha$

 Or, substituting for t and rearranging:

$P\!\left(-t_c \le \dfrac{b_k - \beta_k}{\mathrm{se}(b_k)} \le t_c\right) = 1 - \alpha$

(3.5) $P\big(b_k - t_c\,\mathrm{se}(b_k) \le \beta_k \le b_k + t_c\,\mathrm{se}(b_k)\big) = 1 - \alpha$



3.1.2 Obtaining Interval Estimates (3
of 4)
 When bk and se(bk) are estimated values (numbers) based on a given sample of data, then bk ± tc se(bk) is called a 100(1 − α)% interval estimate of βk.

 Equivalently, it is called a 100(1 − α)% confidence interval.

 Usually α = 0.01 or α = 0.05, so that we obtain a 99% confidence interval or a 95% confidence interval.
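As a minimal sketch (not part of the original slides), the interval bk ± tc se(bk) can be computed directly; the numbers below are the food expenditure estimates used in the examples later in this chapter (b2 = 10.21, se(b2) = 2.09, N − 2 = 38).

```python
# Sketch of a 100(1 - alpha)% interval estimate b_k ± t_c * se(b_k).
from scipy import stats

b2, se_b2, df, alpha = 10.21, 2.09, 38, 0.05
t_c = stats.t.ppf(1 - alpha / 2, df)          # 97.5th percentile, about 2.024
lower, upper = b2 - t_c * se_b2, b2 + t_c * se_b2
print(f"95% interval estimate for beta_2: [{lower:.2f}, {upper:.2f}]")
```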


3.1.2 Obtaining Interval Estimates (4
of 4)
 The interpretation of confidence intervals requires a great deal of care.

 The properties of the interval estimation procedure are based on the


notion of repeated sampling.
 Any one interval estimate, based on one sample of data, may or may not

contain the true parameter βk, and because βk is unknown, we will never
know whether it does or does not.
 When ‘‘confidence intervals’’ are discussed, remember that our confidence
is in the procedure used to construct the interval estimate; it is not in any
one interval estimate calculated from a sample of data.
3.1.3 The Sampling Context
 In the household food expenditure example, sampling variation arises because the households' food expenditures differ from one sample to another.
 Sampling variability causes the:

 Center of each of the interval estimates to change with the values of the least
squares estimates.
 The widths of the intervals to change with the standard errors.

 Interval estimators are a convenient way to report regression results because they
combine point estimation with a measure of sampling variability to provide a range of
values in which the unknown parameters might fall.
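The repeated-sampling idea can be illustrated with a small simulation (a sketch, not from the slides; the model, parameter values, and sample size below are assumptions chosen for illustration): interval estimates built from many simulated samples cover the true slope roughly 95% of the time.

```python
# Simulation sketch: coverage of 95% interval estimates for the slope beta_2
# under repeated sampling from an assumed simple regression model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(123)
beta1, beta2, sigma, N, reps = 80.0, 10.0, 50.0, 40, 2000
x = rng.uniform(5, 30, size=N)                 # fixed regressor values
t_c = stats.t.ppf(0.975, N - 2)
covered = 0
for _ in range(reps):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, size=N)
    X = np.column_stack([np.ones(N), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # least squares estimates
    sigma2_hat = np.sum((y - X @ b) ** 2) / (N - 2)
    se_b2 = np.sqrt(sigma2_hat / np.sum((x - x.mean()) ** 2))
    covered += (b[1] - t_c * se_b2 <= beta2 <= b[1] + t_c * se_b2)
print(f"empirical coverage: {covered / reps:.3f}")   # close to 0.95
```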



3.2 Hypothesis Tests
 Hypothesis testing procedures compare a conjecture we have about a
population to the information contained in a sample of data.
 In each and every hypothesis test, five ingredients must be present:

1. A null hypothesis

2. An alternative hypothesis

3. A test statistic

4. A rejection region

5. A conclusion



3.2.1 The Null Hypothesis
 The null hypothesis, denoted H0 (“H-naught”), specifies a value for a regression parameter, which for generality we denote as βk, for k = 1 or 2.

 The null hypothesis is stated as H0: βk = c, where c is a constant.

 A null hypothesis is the belief we will maintain until we are convinced by the sample evidence that it is not true, in which case we reject the null hypothesis.



3.2.2 The Alternative Hypothesis
 Paired with every null hypothesis is a logical alternative hypothesis that we will
accept if the null hypothesis is rejected.
 For the null hypothesis H0: βk = c, the three possible alternative hypotheses are as follows:
 H1: βk > c. Rejecting the null hypothesis in this case leads us to accept the conclusion that βk > c.

 H1: βk < c. Rejecting the null hypothesis in this case leads us to accept the conclusion that βk < c.

 H1: βk ≠ c. Rejecting the null hypothesis leads us to conclude that βk takes a value either larger or smaller than c.



3.2.3 The Test Statistic
 The sample information about the null hypothesis is embodied in the sample
value of a test statistic.
 A test statistic has a special characteristic:

 Its probability distribution is completely known when the null hypothesis is


true.
 It has some other distribution if the null hypothesis is not true.

 If the null hypothesis H0: βk = c is true, then we can substitute c for βk and it follows that:

(3.7) $t = \dfrac{b_k - c}{\mathrm{se}(b_k)} \sim t_{(N-2)}$
3.2.4 The Rejection Region (1 of 2)
 The rejection region depends on the form of the alternative. It is the range of
values of the test statistic that leads to rejection of the null hypothesis.
 It is possible to construct a rejection region only if we have:

 A test statistic whose distribution is known when the null hypothesis is


true
 An alternative hypothesis

 A level of significance

 The rejection region consists of values that are unlikely and that have low
probability of occurring when the null hypothesis is true.



3.2.4 The Rejection Region (2 of 2)
 The level of significance of the test α is usually chosen to be 0.01, 0.05, or 0.10.

 If we reject the null hypothesis when it is true, then we commit what is called a
Type I error.
 We can specify the amount of Type I error we will tolerate by setting the
level of significance α.
 If we do not reject a null hypothesis that is false, then we have committed a
Type II error.
 We cannot control or calculate the probability of this type of error.



3.2.5 A Conclusion
 When you have completed testing a hypothesis, you should state your

conclusion.

 Do you reject the null hypothesis, or do you not reject the null hypothesis?

 You should avoid saying that you “accept” the null hypothesis, which can be

very misleading.

 Say what the conclusion means in the economic context of the problem you

are working on and the economic significance of the finding.



3.3 Rejection Regions for Specific
Alternatives
 In this section, we hope to be very clear about the nature of the rejection rules for each of the three possible alternatives to the null hypothesis.

 To have a rejection region for a null hypothesis:

1. We need a test statistic.

2. We need a specific alternative.

3. We need to specify the level of significance of the test.



3.3.1 One-Tail Tests with Alternative
“Greater Than” (>)
 When testing the null hypothesis H0: βk = c against the alternative hypothesis H1: βk > c, reject the null hypothesis and accept the alternative hypothesis if t ≥ t(1−α, N−2).

 The test is called a “one-tail” test because unlikely values of the t-statistic fall only in one tail of the probability distribution.

3.3.2 One-Tail Tests with Alternative
“Less Than” (<)
 When testing the null hypothesis H0: βk = c against the alternative hypothesis H1: βk < c, reject the null hypothesis and accept the alternative hypothesis if t ≤ t(α, N−2) = −t(1−α, N−2).

 When using Statistical Table 2 to locate critical values, recall that the t-distribution is symmetric about zero, so that the α-percentile t(α, m) is the negative of the (1−α)-percentile t(1−α, m).

3.3.3 Two-Tail Tests with Alternative
“Not Equal To” (≠)
 When testing the null hypothesis H0: βk = c against the alternative hypothesis H1: βk ≠ c, reject the null hypothesis and accept the alternative hypothesis if

t ≤ −t(1−α/2, N−2) or t ≥ t(1−α/2, N−2)

 Because the rejection region is composed of portions of the t-distribution

in the left and right tails, this test is called a two-tail test.
3.4 Examples of Hypothesis Tests
Step-by-step procedure for testing hypotheses:

1. Determine the null and alternative hypotheses.

2. Specify the test statistic and its distribution if the null hypothesis is true.

3. Select α and determine the rejection region.

4. Calculate the sample value of the test statistic.

5. State your conclusion.
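A hedged sketch of these five steps as a small Python function (not from the textbook); the hypothesized value c, the level of significance, and the form of the alternative are inputs.

```python
# Sketch of the testing procedure: compute the t value from (3.7) and compare
# it with the appropriate critical value for the chosen alternative.
from scipy import stats

def t_test_slope(b_k, se_bk, c, df, alpha=0.05, alternative="two-sided"):
    t_stat = (b_k - c) / se_bk                      # step 4: sample value of t
    if alternative == "greater":                    # H1: beta_k > c
        t_crit = stats.t.ppf(1 - alpha, df)
        reject = t_stat >= t_crit
    elif alternative == "less":                     # H1: beta_k < c
        t_crit = stats.t.ppf(alpha, df)
        reject = t_stat <= t_crit
    else:                                           # H1: beta_k != c
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        reject = abs(t_stat) >= t_crit
    return t_stat, t_crit, reject

# Example 3.3 below: H0: beta_2 <= 5.5 vs H1: beta_2 > 5.5 at alpha = 0.01.
print(t_test_slope(10.21, 2.09, 5.5, 38, alpha=0.01, alternative="greater"))
```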



Example 3.2 Right-tail Test of
Significance (1 of 2)
 The null hypothesis is H0:β2 = 0. The alternative hypothesis is H1:β2 > 0.

 P 1.96 Z 1.96 0.95


The test statistic is (3.7); in this case, c = 0, so
if the null hypothesis is true.
 Select α = 0.05:
 The critical value for the right-tail rejection region is the 95 th percentile of the
t-distribution with N – 2 = 38 degrees of freedom, t(0.95,38) = 1.686.

 Thus we will reject the null hypothesis if the calculated value of t ≥ 1.686.
 If t < 1.686, we will not reject the null hypothesis.



Example 3.2 Right-tail Test of
Significance (2 of 2)
 Using the food expenditure data, we found that b2 = 10.21 with standard error se(b2) = 2.09.

 The value of the test statistic is:

$t = \dfrac{b_2}{\mathrm{se}(b_2)} = \dfrac{10.21}{2.09} = 4.88$
 Because t = 4.88 > 1.686, we reject the null hypothesis that β2 = 0 and

accept the alternative that β2 > 0.

 That is, we reject the hypothesis that there is no relationship between


income and food expenditure, and conclude that there is a statistically
significant positive relationship between household income and food
expenditure.
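A direct check of this example in Python (a sketch using the values quoted on the slide, not the textbook's own code):

```python
# Right-tail test of significance: H0: beta_2 = 0 vs H1: beta_2 > 0, alpha = 0.05.
from scipy import stats

b2, se_b2, df = 10.21, 2.09, 38
t_stat = b2 / se_b2                        # 4.88
t_crit = stats.t.ppf(0.95, df)             # 1.686
print(t_stat, t_crit, t_stat >= t_crit)    # reject H0: beta_2 = 0
```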
Example 3.3 Right-tail Test of an
Economic Hypothesis (1 of 2)
 The null hypothesis is H0:β2 ≤ 5.5. The alternative hypothesis is H1:β2 > 5.5.

 The test statistic is t = (b2 - 5.5)/se(b2) ~ t(N – 2) if the null hypothesis is true.

 Select α = 0.01:
 The critical value for the right-tail rejection region is the 99th percentile of the t-distribution with N – 2 = 38 degrees of freedom, t(0.99,38) = 2.429.

 Thus we will reject the null hypothesis if the calculated value of t ≥ 2.429.
 If t < 2.429, we will not reject the null hypothesis.



Example 3.3 Right-tail Test of an
Economic Hypothesis (2 of 2)
 Using the food expenditure data, we found that b2 = 10.21 with standard error se(b2) = 2.09.

 The value of the test statistic is:

$t = \dfrac{b_2 - 5.5}{\mathrm{se}(b_2)} = \dfrac{10.21 - 5.5}{2.09} = 2.25$

 Because t = 2.25 < 2.429 we do not reject the null hypothesis that β2 ≤ 5.5.

 We are not able to conclude that the new supermarket will be profitable, so construction will not begin.



Example 3.4 Left-tail Test of an
Economic Hypothesis (1 of 2)
 The null hypothesis is H0:β2 ≥ 15. The alternative hypothesis is H1:β2 < 15.

 The test statistic is t = (b2 - 15)/se(b2) ~ t(N – 2) if the null hypothesis is true:

 Select α = 0.05:
 The critical value for the left-tail rejection region is the 5th percentile of the t-
distribution with N – 2 = 38 degrees of freedom, t(0.05,38) = -1.686.

 Thus we will reject the null hypothesis if the calculated value of t ≤ -1.686.
 If t > -1.686, we will not reject the null hypothesis.



Example 3.4 Left-tail Test of an
Economic Hypothesis (2 of 2)
 Using the food expenditure data, we found that b2 = 10.21 with standard error se(b2) = 2.09.

 The value of the test statistic is:

$t = \dfrac{b_2 - 15}{\mathrm{se}(b_2)} = \dfrac{10.21 - 15}{2.09} = -2.29$

 Because t = -2.29 < -1.686 we reject the null hypothesis that β2 ≥ 15 and

accept the alternative that β2 < 15.

 We conclude that households spend less than $15 from each additional
$100 income on food.



Example 3.5 Two-tail Test of an
Economic Hypothesis (1 of 2)
 The null hypothesis is H0:β2 = 7.5. The alternative hypothesis is H1:β2 ≠ 7.5.

 The test statistic is t = (b2 – 7.5)/se(b2) ~ t(N – 2) if the null hypothesis is true.

 Select α = 0.05:
 The critical value for the two-tail rejection region is the 2.5th percentile of
the t-distribution with N – 2 = 38 degrees of freedom, t(0.025,38) = -2.024
and the 97.5th percentile t(0.975,38) = 2.024.
 Thus we will reject the null hypothesis if the calculated value of t ≥ 2.024 or
if t ≤ -2.024.



Example 3.5 Two-tail Test of an
Economic Hypothesis (2 of 2)
 Using the food expenditure data, we found that b2 = 10.21 with standard error se(b2) = 2.09.

 The value of the test statistic is:

$t = \dfrac{b_2 - 7.5}{\mathrm{se}(b_2)} = \dfrac{10.21 - 7.5}{2.09} = 1.29$
 Because -2.024 < t = 1.29 < 2.024 we do not reject the null hypothesis that
β2 = 7.5.

 The sample data are consistent with the conjecture that households will spend an additional $7.50 per additional $100 of income on food.



Example 3.6 Two-tail Test of Significance (1 of 2)
 The null hypothesis is H0:β2 = 0. The alternative hypothesis is H1:β2 ≠ 0.

 The test statistic is t = b2/se(b2) ~ t(N – 2) if the null hypothesis is true.

 Select α = 0.05:
 The critical value for the two-tail rejection region is the 2.5th
percentile of the t-distribution with N – 2 = 38 degrees of freedom,
t(0.025,38) = -2.024 and the 97.5th percentile t(0.975,38) = 2.024.
 Thus we will reject the null hypothesis if the calculated value of t ≥
2.024 or if t ≤ -2.024.



Example 3.6 Two-tail Test of Significance (2 of 2)
 Using the food expenditure data, we found that b2 = 10.21 with standard error se(b2) = 2.09.

 The value of the test statistic is:

$t = \dfrac{b_2}{\mathrm{se}(b_2)} = \dfrac{10.21}{2.09} = 4.88$

 Because 4.88 > 2.024 we reject the null hypothesis that β2 = 0.

 We conclude that there is a statistically significant relationship between

income and food expenditure.


3.5 The p-Value (1 of 2)
 When reporting the outcome of statistical hypothesis tests, it has become

standard practice to report the p-value (an abbreviation for probability

value) of the test.

 If we have the p-value of a test, p, we can determine the outcome of

the test by comparing the p-value to the chosen level of significance,

α, without looking up or calculating the critical values.

 This is much more convenient.



3.5 The p-Value (2 of 2)
 If t is the calculated value of the t-statistic, then:

 If H1: βK > c

 p = probability to the right of t

 If H1: βK < c

 p = probability to the left of t

 If H1: βK ≠ c

 p = sum of probabilities to the right of |t| and to the left of −|t|
The p-Value Rule

 Reject the null hypothesis when the p-value is less than, or

equal to, the level of significance α. That is, if p ≤ α then reject

H0. If p > α then do not reject H0.
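As a small sketch (not from the slides), the p-values worked out in the continued examples that follow can be reproduced with the t-distribution in scipy; the b2 and se(b2) values are those quoted on the example slides.

```python
# p-values for the three kinds of alternatives, 38 degrees of freedom.
from scipy import stats

df = 38
t_right = (10.21 - 5.5) / 2.09             # Example 3.3, H1: beta_2 > 5.5
t_left = (10.21 - 15.0) / 2.09             # Example 3.4, H1: beta_2 < 15
t_two = (10.21 - 7.5) / 2.09               # Example 3.5, H1: beta_2 != 7.5

p_right = stats.t.sf(t_right, df)          # area to the right of t
p_left = stats.t.cdf(t_left, df)           # area to the left of t
p_two = 2 * stats.t.sf(abs(t_two), df)     # both tails
print(round(p_right, 4), round(p_left, 4), round(p_two, 4))   # ~0.015, 0.014, 0.203
```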



Example 3.3 (Continued)

 The null hypothesis is H0: β2 ≤ 5.5.

 The alternative hypothesis is H1: β2 > 5.5.


b2  5.5 10.21  5.5
t  2.25
se b2  2.09

 The p-value is:


p Pt 38  2.25 1  Pt 38  2.25 1  0.9848 0.0152



Example 3.4 (Continued)

 The null hypothesis is H0: β2 ≥ 15.

 The alternative hypothesis is H1: β2 < 15.


b2  15 10.21  15
t   2.29
se b2  2.09

 The p-value is:


p Pt38   2.29 0.0139



Example 3.5 (Continued)

 The null hypothesis is H0:β2 = 7.5.

 The alternative hypothesis is H1: β2 ≠ 7.5.


b2  7.5 10.21  7.5
t  1.29
se b2  2.09

 The p-value is:


p Pt38  1.29  Pt38   1.29 0.2033



Example 3.6 (Continued)

 The null hypothesis is H0: β2 = 0.

 The alternative hypothesis is H1: β2 ≠ 0.


b2  7.5 10.21  7.5
t  1.29
se b2  2.09

 The p-value is:


p Pt38  4.88  Pt38   4.88 0.0000



3.6 Linear Combinations of Parameters
(1 of 4)
 We may wish to estimate and test hypotheses about a linear combination of

parameters λ = c1β1 + c2β2, where c1 and c2 are constants that we specify.

 Under assumptions SR1–SR5 the least squares estimators b1 and b2 are the best

linear unbiased estimators of β1 and β2.

 It is also true that λ̂ = c1b1 + c2b2 is the best linear unbiased estimator of λ = c1β1 + c2β2.
3.6 Linear Combinations of Parameters
(2 of 4)
 As an example of a linear combination, if we let c1 = 1 and c2 = x0, then we have λ = c1β1 + c2β2 = β1 + β2x0.
 The estimator λ̂ is unbiased because

$E(\hat{\lambda} \mid \mathbf{x}) = E(c_1 b_1 + c_2 b_2 \mid \mathbf{x}) = c_1 E(b_1 \mid \mathbf{x}) + c_2 E(b_2 \mid \mathbf{x}) = c_1\beta_1 + c_2\beta_2 = \lambda$

 The variance of λ̂ is (3.8)

$\mathrm{var}(\hat{\lambda} \mid \mathbf{x}) = \mathrm{var}(c_1 b_1 + c_2 b_2 \mid \mathbf{x}) = c_1^2\,\mathrm{var}(b_1 \mid \mathbf{x}) + c_2^2\,\mathrm{var}(b_2 \mid \mathbf{x}) + 2 c_1 c_2\,\mathrm{cov}(b_1, b_2 \mid \mathbf{x})$



3.6 Linear Combinations of Parameters
(3 of 4)
 We estimate the variance of λ̂ by replacing the unknown variances and covariances with their estimated variances and covariances from (2.20)–(2.22):

(3.9) $\widehat{\mathrm{var}}(\hat{\lambda} \mid \mathbf{x}) = c_1^2\,\widehat{\mathrm{var}}(b_1 \mid \mathbf{x}) + c_2^2\,\widehat{\mathrm{var}}(b_2 \mid \mathbf{x}) + 2 c_1 c_2\,\widehat{\mathrm{cov}}(b_1, b_2 \mid \mathbf{x})$

 The standard error of λ̂ is the square root of the estimated variance:

(3.10) $\mathrm{se}(\hat{\lambda}) = \sqrt{\widehat{\mathrm{var}}(\hat{\lambda} \mid \mathbf{x})}$
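A minimal sketch of (3.9)–(3.10) in Python. The weights c1 and c2 and the estimated variances and covariance below are illustrative assumptions, not numbers quoted in these slides.

```python
# Estimated variance and standard error of lambda_hat = c1*b1 + c2*b2.
import numpy as np

c1, c2 = 1.0, 20.0                          # assumed weights for the combination
b1, b2 = 83.42, 10.21                       # illustrative estimates
var_b1, var_b2, cov_b1b2 = 1884.44, 4.38, -85.90   # illustrative (2.20)-(2.22) values
lam_hat = c1 * b1 + c2 * b2
var_lam = c1**2 * var_b1 + c2**2 * var_b2 + 2 * c1 * c2 * cov_b1b2   # (3.9)
se_lam = np.sqrt(var_lam)                   # (3.10)
print(lam_hat, se_lam)
```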



3.6 Linear Combinations of Parameters
(4 of 4)
 If in addition SR6 holds, or if the sample is large, the least squares estimators b1

and b2 have normal distributions.

 It is also true that linear combinations of normally distributed variables are

normally distributed, so that



$\hat{\lambda} \mid \mathbf{x} = c_1 b_1 + c_2 b_2 \sim N\!\big(\lambda,\ \mathrm{var}(\hat{\lambda} \mid \mathbf{x})\big)$



3.6.1 Testing a Linear Combination of
Parameters (1 of 2)
 A general linear hypothesis involves both parameters, β1 and β2, and may be

stated as:

(3.12a) $H_0 : c_1\beta_1 + c_2\beta_2 = c_0$

 Or, equivalently:

(3.12b) $H_0 : (c_1\beta_1 + c_2\beta_2) - c_0 = 0$



3.6.1 Testing a Linear Combination of
Parameters (2 of 2)
 The alternative hypothesis might be any one of the following:

(i) $H_1 : c_1\beta_1 + c_2\beta_2 \ne c_0$ (two-tail test)
(ii) $H_1 : c_1\beta_1 + c_2\beta_2 > c_0$ (right-tail test)
(iii) $H_1 : c_1\beta_1 + c_2\beta_2 < c_0$ (left-tail test)

 The t-statistic is: (3.13)

$t = \dfrac{(c_1 b_1 + c_2 b_2) - c_0}{\mathrm{se}(c_1 b_1 + c_2 b_2)} \sim t_{(N-2)}$

 The rejection regions for the one- and two-tail alternatives (i)–(iii) are the same as those described in Section 3.3, and conclusions are interpreted the same way as well.
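Continuing the sketch above (same illustrative numbers, still assumptions rather than values quoted on the slides), the t-statistic in (3.13) and a two-tail p-value could be computed as follows:

```python
# t test of H0: c1*beta_1 + c2*beta_2 = c0 using (3.13), with 38 df.
import numpy as np
from scipy import stats

c1, c2, c0 = 1.0, 20.0, 250.0
b1, b2 = 83.42, 10.21
var_b1, var_b2, cov_b1b2 = 1884.44, 4.38, -85.90
lam_hat = c1 * b1 + c2 * b2
se_lam = np.sqrt(c1**2 * var_b1 + c2**2 * var_b2 + 2 * c1 * c2 * cov_b1b2)
t_stat = (lam_hat - c0) / se_lam
p_two = 2 * stats.t.sf(abs(t_stat), 38)    # two-tail p-value
print(t_stat, p_two)
```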



Key Words
 alternative hypothesis
 confidence intervals
 critical value
 degrees of freedom
 hypotheses
 hypothesis testing
 inference
 interval estimation
 level of significance
 linear combination of parameters
 linear hypothesis
 null hypothesis
 one-tail tests
 pivotal statistic
 point estimates
 probability value
 p-value
 rejection region
 test of significance
 test statistic
 two-tail tests
 Type I error
 Type II error



Copyright
Copyright © 2018 John Wiley & Sons, Inc.
All rights reserved. Reproduction or translation of this work beyond that permitted in
Section 117 of the 1976 United States Copyright Act without the express written permission of the
copyright owner is unlawful. Requests for further information should be addressed to the
Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies
for his/her own use only and not for distribution or resale. The Publisher assumes no
responsibility for errors, omissions, or damages caused by the use of these programs or
from the use of the information contained herein.

