Lecture set 5

The lecture covers nonlinear regression methods, emphasizing that linear models may misrepresent relationships between variables. It discusses polynomial and logarithmic transformations as techniques to model nonlinear relationships, providing examples with the California Test Score data set. Additionally, it addresses interactions between independent variables, suggesting that the effect of one variable may depend on the level of another.

ECONS303: Applied Quantitative Research Methods

Lecture set 5: Nonlinear Regression Functions
Outline
1. Nonlinear regression functions – general comments
2. Nonlinear functions of one variable
3. Nonlinear functions of two variables: interactions
4. Application to the California Test Score data set
Nonlinear regression functions
• The regression function so far has been linear in the X’s
• But the linear approximation is not always a good one
• The multiple regression model can handle regression functions
that are nonlinear in one or more X.
The TestScore – STR relation looks linear
(maybe)…
But the TestScore – Income relation looks
nonlinear...
Nonlinear Population Regression Functions – General Ideas (SW Section 8.1)
If a relation between Y and X is nonlinear:
• The effect on Y of a change in X depends on the value of X – that
is, the marginal effect of X is not constant
• A linear regression is mis-specified: the functional form is
wrong
• The estimator of the effect on Y of X is biased: in general it isn’t
even right on average.
• The solution is to estimate a regression function that is
nonlinear in X.
Key Concept 8.1: The Expected Change in Y of a
Change in X1 in the Nonlinear Regression Model (8.3)
The expected change in Y, ΔY, associated with the change in X1, ΔX1,
holding X2,…, Xk constant, is the difference between the value of the
population regression function before and after changing X1, holding
X2,…, Xk constant. That is, the expected change in Y is the difference:
ΔY = f (X1 + Δ X1, X2,…, Xk) – f (X1, X2,…, Xk). (8.4)

The estimator of this unknown population difference is the difference
between the predicted values for these two cases. Let f̂(X1, X2, …, Xk)
be the predicted value of Y based on the estimator f̂ of the population
regression function. Then the predicted change in Y is

ΔŶ = f̂(X1 + ΔX1, X2, …, Xk) – f̂(X1, X2, …, Xk). (8.5)
Nonlinear Functions of a Single Independent
Variable (SW Section 8.2)
We’ll look at two complementary approaches:
1. Polynomials in X
The population regression function is approximated by a
quadratic, cubic, or higher-degree polynomial
2. Logarithmic transformations
Y and/or X is transformed by taking its logarithm, which
provides a “percentages” interpretation of the coefficients
that makes sense in many applications
1. Polynomials in X
Approximate the population regression function by a polynomial:

Yi = β0 + β1 Xi + β2 Xi^2 + … + βr Xi^r + ui

• This is just the linear multiple regression model – except that the
regressors are powers of X!
• Estimation, hypothesis testing, etc. proceeds as in the multiple
regression model using OLS
• The coefficients are difficult to interpret, but the regression
function itself is interpretable
Example: the TestScore – Income relation
Incomei = average district income in the ith district (thousands of
dollars per capita)
Quadratic specification:
TestScorei = β0 + β1Incomei + β2(Incomei)2 + ui
Cubic specification:
TestScorei = β0 + β1Incomei + β2(Incomei)2 + β3(Incomei)3 + ui
Estimation of the quadratic specification in
STATA
generate avginc2 = avginc*avginc   // create the new regressor
reg testscr avginc avginc2, r

Regression with robust standard errors Number of obs = 420


F( 2, 417) = 428.52
Prob > F = 0.0000
R-squared = 0.5562
Root MSE = 12.724

------------------------------------------------------------------------------
| Robust
testscr | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
avginc | 3.850995 .2680941 14.36 0.000 3.32401 4.377979
avginc2 | -.0423085 .0047803 -8.85 0.000 -.051705 -.0329119
_cons | 607.3017 2.901754 209.29 0.000 601.5978 613.0056
------------------------------------------------------------------------------

Test the null hypothesis of linearity against the alternative that the
regression function is quadratic…
Interpreting the estimated regression
function: (1 of 3)
(a) Plot the predicted values
TestScore-hat = 607.3 + 3.85 Incomei – 0.0423 (Incomei)^2
               (2.9)    (0.27)        (0.0048)
Interpreting the estimated regression
function: (2 of 3)
(b) Compute the slope, evaluated at various values of X
TestScore-hat = 607.3 + 3.85 Incomei – 0.0423 (Incomei)^2
               (2.9)    (0.27)        (0.0048)

Predicted change in TestScore for a change in income from $5,000


per capita to $6,000 per capita:
ΔTestScore-hat = (607.3 + 3.85 × 6 – 0.0423 × 6^2)
               – (607.3 + 3.85 × 5 – 0.0423 × 5^2)
             = 3.4
Interpreting the estimated regression
function: (3 of 3)
TestScore-hat = 607.3 + 3.85 Incomei – 0.0423 (Incomei)^2
Predicted “effects” for different values of X:

Change in Income ($1000 per capita)    ΔTestScore-hat
from 5 to 6                            3.4
from 25 to 26                          1.7
from 45 to 46                          0.0

The “effect” of a change in income is greater at low than at high
income levels (perhaps reflecting a declining marginal benefit of an
increase in school budgets?)
Summary: polynomial regression functions
Yi = β0 + β1 Xi + β2 Xi^2 + … + βr Xi^r + ui
• Estimation: by OLS after defining new regressors
• The individual coefficients have complicated interpretations if the
polynomial degree r is more than two (not commonly used in
economics or general social science).
• To interpret the estimated regression function:
– plot predicted values as a function of x
– compute predicted ΔY/ΔX for different values of x
• Hypotheses concerning degree r can be tested by t- and F-tests.
• Choice of degree r
– plot the data; t- and F-tests, check sensitivity of estimated effects; judgment.
– Or use model selection criteria (later)
2. Logarithmic functions of Y and/or X
• ln(X ) = the natural logarithm of X
• Logarithmic transforms permit modeling relations in
“percentage” terms (like elasticities), rather than linearly.

Here’s why: ln(x + Δx) – ln(x) = ln(1 + Δx/x) ≈ Δx/x

(calculus: d ln(x)/dx = 1/x)

Numerically:
ln(1.01) = .00995 ≈ .01
ln(1.10) = .0953 ≈ .10 (sort of)
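These numerical claims are easy to verify (a minimal Python check):

```python
import math

# ln(1 + r) is approximately r for small r -- the basis of the
# "percentages" interpretation of log coefficients
print(round(math.log(1.01), 5))  # 0.00995, close to .01
print(round(math.log(1.10), 4))  # 0.0953, close to .10 ("sort of")
```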
The three log regression specifications:

Case Population regression function


I. linear-log Yi = β0 + β1ln(Xi) + ui
II. log-linear ln(Yi) = β0 + β1Xi + ui
III. log-log ln(Yi) = β0 + β1ln(Xi) + ui

• The interpretation of the slope coefficient differs in each case.


• The interpretation is found by applying the general “before and
after” rule: “figure out the change in Y for a given change in X.”
• Each case has a natural interpretation (for small changes in X )
I. Linear-log population regression function
(1 of 2)

Compute Y “before” and “after” changing X:


Y = β0 + β1ln(X ) (“before”)
Now change X: Y + ΔY = β0 + β1ln(X + ΔX ) (“after”)
Subtract (“after”) – (“before”): ΔY = β1[ln(X + ΔX ) – ln(X )]
now ln(X + ΔX) – ln(X) ≈ ΔX/X

so ΔY ≈ β1 (ΔX/X)

or β1 ≈ ΔY / (ΔX/X)   (small ΔX)
I. Linear-log population regression function
(2 of 2)

Yi = β0 + β1ln(Xi) + ui
for small ΔX,
β1 ≈ ΔY / (ΔX/X)

Now 100 × (ΔX/X) = percentage change in X, so a 1% increase in X
(multiplying X by 1.01) is associated with a .01β1 change in Y.

(1% increase in X → .01 increase in ln(X ) → .01β1 increase in Y )


Example: TestScore vs. ln(Income)
• First define the new regressor, ln(Income)
• The model is now linear in ln(Income), so the linear-log model
can be estimated by OLS:

TestScore-hat = 557.8 + 36.42 ln(Incomei)
               (3.8)    (1.40)
so a 1% increase in Income is associated with an increase in
TestScore of 0.36 points on the test.
• Standard errors, confidence intervals, R2 – all the usual tools of
regression apply here.
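The 0.36-point figure can be checked from the point estimates (illustrative Python; the exact change for a 1% income increase is β1·ln(1.01), which the .01β1 rule of thumb approximates):

```python
import math

b0, b1 = 557.8, 36.42  # point estimates from the slide

def predicted_testscore(income):  # linear-log fit
    return b0 + b1 * math.log(income)

# exact predicted change for a 1% income increase (e.g. 10 -> 10.1)
exact = predicted_testscore(10.1) - predicted_testscore(10)
approx = 0.01 * b1  # the .01*beta1 rule of thumb
print(round(exact, 2), round(approx, 2))  # 0.36 0.36
```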
The linear-log and cubic regression functions
II. Log-linear population regression function
(1 of 2)

ln(Y ) = β0 + β1X (b)


Now change X: ln(Y + ΔY ) = β0 + β1(X + ΔX ) (a)
Subtract (a) – (b): ln(Y + ΔY ) – ln(Y ) = β1ΔX

so ΔY/Y ≈ β1 ΔX

or β1 ≈ (ΔY/Y) / ΔX   (small ΔX)
II. Log-linear population regression function
(2 of 2)

ln(Yi) = β0 + β1 Xi + ui

for small ΔX,   β1 ≈ (ΔY/Y) / ΔX

• Now 100 × (ΔY/Y) = percentage change in Y, so a change in X by
one unit (ΔX = 1) is associated with a 100β1% change in Y.
• 1 unit increase in X → β1 increase in ln(Y) → 100β1% increase in Y
• Note: What are the units of ui and the SER?
o fractional (proportional) deviations
o for example, SER = .2 means…
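A small numeric illustration of the 100β1% rule (the slope value 0.05 here is hypothetical, chosen only for illustration):

```python
import math

b1 = 0.05  # hypothetical slope in ln(Y) = b0 + b1*X + u
# a one-unit increase in X multiplies Y by e^b1
pct_change = 100 * (math.exp(b1) - 1)
print(round(pct_change, 2))  # 5.13, roughly 100*b1 = 5 percent
```

The exact multiplicative change e^β1 and the approximation 1 + β1 agree closely only when β1 is small.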
III. Log-log population regression function
(1 of 2)

ln(Yi) = β0 + β1ln(Xi) + ui (b)


Now change X: ln(Y + ΔY ) = β0 + β1ln(X + ΔX ) (a)
Subtract: ln(Y + ΔY) – ln(Y) = β1[ln(X + ΔX) – ln(X)]

so ΔY/Y ≈ β1 (ΔX/X)

or β1 ≈ (ΔY/Y) / (ΔX/X)   (small ΔX)
III. Log-log population regression function
(2 of 2)

ln(Yi) = β0 + β1ln(Xi) + ui
for small ΔX,
β1 ≈ (ΔY/Y) / (ΔX/X)

Now 100 × (ΔY/Y) = percentage change in Y, and 100 × (ΔX/X) =
percentage change in X, so a 1% change in X is associated with a
β1% change in Y.

In the log-log specification, β1 has the interpretation of an
elasticity.
Example: ln(TestScore) vs. ln(Income) (1 of 2)
• First define a new dependent variable, ln(TestScore), and the
new regressor, ln(Income)
• The model is now a linear regression of ln(TestScore) against
ln(Income), which can be estimated by OLS:

ln(TestScore)-hat = 6.336 + 0.0554 ln(Incomei)
                   (0.006)  (0.0021)
A 1% increase in Income is associated with an increase of .0554%
in TestScore (Income up by a factor of 1.01, TestScore up by a
factor of 1.000554)
Example: ln(TestScore) vs. ln(Income) (2 of 2)
ln(TestScore)-hat = 6.336 + 0.0554 ln(Incomei)
                   (0.006)  (0.0021)

• For example, suppose income increases from $10,000 to $11,000,
or by 10%. Then TestScore increases by approximately
.0554 × 10% = .554%. If TestScore = 650, this corresponds to an
increase of .00554 × 650 = 3.6 points.
• How does this compare to the log-linear model?
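The back-of-the-envelope numbers above can be checked against the exact calculation (illustrative Python; the small discrepancy arises because a 10% change is not "small"):

```python
import math

b1 = 0.0554  # estimated elasticity from the slide
y0 = 650     # TestScore before the change

approx_pct = b1 * 10  # rule of thumb for a 10% income increase
approx_pts = y0 * approx_pct / 100
exact_pts = y0 * (math.exp(b1 * math.log(1.1)) - 1)
print(round(approx_pct, 3))  # 0.554 percent
print(round(approx_pts, 1))  # 3.6 points, as on the slide
print(round(exact_pts, 2))   # 3.44 points exactly
```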
The log-linear and log-log specifications:

• Note vertical axis


• The log-linear model doesn’t seem to fit as well as the log-log model,
based on visual inspection.
Summary: Logarithmic transformations
• Three cases, differing in whether Y and/or X is transformed by
taking logarithms.
• The regression is linear in the new variable(s) ln(Y ) and/or ln(X ),
and the coefficients can be estimated by OLS.
• Hypothesis tests and confidence intervals are now implemented
and interpreted “as usual.”
• The interpretation of β1 differs from case to case.
The choice of specification (functional form) should be guided by
judgment (which interpretation makes the most sense in your
application?), tests, and plotting predicted values.
Interactions Between Independent Variables
(SW Section 8.3)
• Perhaps a class size reduction is more effective in some
circumstances than in others.
• Perhaps smaller classes help more if there are many English
learners (PctEL), who need individual attention.
• That is, ∂TestScore/∂STR might depend on PctEL
• More generally, ∂Y/∂X1 might depend on X2
• How to model such “interactions” between X1 and X2?
• We first consider binary X’s, then continuous X’s
(a) Interactions between two binary
variables
Yi = β0 + β1D1i + β2D2i + ui
• D1i, D2i are binary
• β1 is the effect of changing D1= 0 to D1 = 1. In this specification,
this effect doesn’t depend on the value of D2.
• To allow the effect of changing D1 to depend on D2, include the
“interaction term” D1i × D2i as a regressor:
Yi = β0 + β1D1i + β2D2i + β3(D1i × D2i) + ui
Interpreting the coefficients
Yi = β0 + β1D1i + β2D2i + β3(D1i × D2i) + ui
General rule: compare the various cases
E(Yi|D1i = 0, D2i = d2) = β0 + β2d2 (b)
E(Yi|D1i = 1, D2i = d2) = β0 + β1 + β2d2 + β3d2 (a)
subtract (a) – (b):
E(Yi|D1i = 1, D2i = d2) – E(Yi|D1i = 0, D2i = d2) = β1 + β3d2
• The effect of D1 depends on d2 (what we wanted)
• β3 = increment to the effect of D1, when D2 = 1
Example: TestScore, STR, English learners (1 of 2)
Let
HiSTR = 1 if STR ≥ 20, = 0 if STR < 20
HiEL = 1 if PctEL ≥ 10, = 0 if PctEL < 10

TestScore-hat = 664.1 – 18.2 HiEL – 1.9 HiSTR – 3.5 (HiSTR × HiEL)
               (1.4)    (2.3)      (1.9)       (3.1)

• “Effect” of HiSTR when HiEL = 0 is –1.9


• “Effect” of HiSTR when HiEL = 1 is –1.9 – 3.5 = –5.4
• Class size reduction is estimated to have a bigger effect when the
percent of English learners is large
• This interaction isn’t statistically significant: t = –3.5/3.1 = –1.13
Example: TestScore, STR, English learners (2 of 2)
Let
HiSTR = 1 if STR ≥ 20, = 0 if STR < 20
HiEL = 1 if PctEL ≥ 10, = 0 if PctEL < 10

TestScore-hat = 664.1 – 18.2 HiEL – 1.9 HiSTR – 3.5 (HiSTR × HiEL)
               (1.4)    (2.3)      (1.9)       (3.1)

• Can you relate these coefficients to the following table of group
(“cell”) means?

           Low STR   High STR
Low EL     664.1     662.2
High EL    645.9     640.5
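The cell means follow directly from the estimated coefficients, which can be confirmed in a few lines (illustrative Python):

```python
# TestScore-hat = 664.1 - 18.2*HiEL - 1.9*HiSTR - 3.5*(HiSTR*HiEL)
def cell_mean(histr, hiel):
    return 664.1 - 18.2 * hiel - 1.9 * histr - 3.5 * histr * hiel

print(round(cell_mean(0, 0), 1))  # 664.1  low STR,  low EL
print(round(cell_mean(1, 0), 1))  # 662.2  high STR, low EL
print(round(cell_mean(0, 1), 1))  # 645.9  low STR,  high EL
print(round(cell_mean(1, 1), 1))  # 640.5  high STR, high EL
```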
(b) Interactions between continuous and
binary variables
Yi = β0 + β1Di + β2Xi + ui
• Di is binary, X is continuous
• As specified above, the effect on Y of X (holding constant D) = β2,
which does not depend on D
• To allow the effect of X to depend on D, include the “interaction
term” Di × Xi as a regressor:
Yi = β0 + β1Di + β2Xi + β3(Di × Xi) + ui
Binary-continuous interactions: the two
regression lines (1 of 2)
Yi = β0 + β1Di + β2Xi + β3(Di × Xi) + ui
Observations with Di = 0 (the “D = 0” group):
Yi = β0 + β2Xi + ui The D = 0 regression line
Observations with Di = 1 (the “D = 1” group):
Yi = β0 + β1 + β2Xi + β3Xi + ui
= (β0 + β1) + (β2 + β3)Xi + ui The D = 1 regression line
Binary-continuous interactions: the two
regression lines (2 of 2)
Interpreting the coefficients
Yi = β0 + β1Di + β2Xi + β3(Di × Xi) + ui
General rule: compare the various cases
Y = β0 + β1D + β2X + β3(D × X ) (b)
Now change X:
Y + ΔY = β0 + β1D + β2(X + ΔX ) + β3[D × (X + ΔX)] (a)
subtract (a) – (b):

ΔY = β2 ΔX + β3 D ΔX   or   ΔY/ΔX = β2 + β3 D
• The effect of X depends on D (what we wanted)
• β3 = increment to the effect of X, when D = 1
Example: TestScore, STR, HiEL (=1 if
PctEL ≥ 10) (1 of 2)
TestScore-hat = 682.2 – 0.97 STR + 5.6 HiEL – 1.28 (STR × HiEL)
               (11.9)   (0.59)     (19.5)     (0.97)
• When HiEL = 0:
  TestScore-hat = 682.2 – 0.97 STR
• When HiEL = 1:
  TestScore-hat = 682.2 – 0.97 STR + 5.6 – 1.28 STR
                = 687.8 – 2.25 STR
• Two regression lines: one for each HiEL group.
• Class size reduction is estimated to have a larger effect when
the percent of English learners is large.
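A quick check that the HiEL = 1 line follows from the reported coefficients (illustrative Python):

```python
# coefficients from the slide:
b0, b_str, b_hiel, b_int = 682.2, -0.97, 5.6, -1.28

# the HiEL = 1 regression line has intercept b0 + b_hiel
# and slope b_str + b_int
print(round(b0 + b_hiel, 1))    # 687.8
print(round(b_str + b_int, 2))  # -2.25
```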
Example: TestScore, STR, HiEL (=1 if
PctEL ≥ 10) (2 of 2)
TestScore-hat = 682.2 – 0.97 STR + 5.6 HiEL – 1.28 (STR × HiEL)
               (11.9)   (0.59)     (19.5)     (0.97)
• The two regression lines have the same slope ↔ the coefficient
on STR×HiEL is zero: t = –1.28/0.97 = –1.32
• The two regression lines have the same intercept ↔ the
coefficient on HiEL is zero: t = 5.6/19.5 = 0.29
• The two regression lines are the same ↔ population coefficient
on HiEL = 0 and population coefficient on STR×HiEL = 0: F =
89.94 (p-value < .001) (!!)
• We reject the joint hypothesis but neither individual hypothesis
(how can this be?)
(c) Interactions between two continuous
variables
Yi = β0 + β1X1i + β2X2i + ui
• X1, X2 are continuous
• As specified, the effect of X1 doesn’t depend on X2
• As specified, the effect of X2 doesn’t depend on X1
• To allow the effect of X1 to depend on X2, include the
“interaction term” X1i × X2i as a regressor:
Yi = β0 + β1X1i + β2X2i + β3(X1i × X2i) + ui
Interpreting the coefficients:
Yi = β0 + β1X1i + β2X2i + β3(X1i × X2i) + ui
General rule: compare the various cases
Y = β0 + β1X1 + β2X2 + β3(X1 × X2) (b)
Now change X1:
Y + ΔY = β0 + β1(X1 + ΔX1) + β2X2 + β3[(X1 + ΔX1) × X2] (a)
subtract (a) – (b):

ΔY = β1 ΔX1 + β3 X2 ΔX1   or   ΔY/ΔX1 = β1 + β3 X2

• The effect of X1 depends on X2 (what we wanted)


• β3 = increment to the effect of X1 from a unit change in X2
Example: TestScore, STR, PctEL (1 of 2)
TestScore-hat = 686.3 – 1.12 STR – 0.67 PctEL + .0012 (STR × PctEL)
               (11.8)   (0.59)     (0.37)       (0.019)
The estimated effect of class size reduction is nonlinear because
the size of the effect itself depends on PctEL:
∂TestScore-hat/∂STR = –1.12 + .0012 PctEL

PctEL    ∂TestScore-hat/∂STR
0        –1.12
20%      –1.12 + .0012 × 20 = –1.10
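The marginal-effect column can be reproduced directly (illustrative Python):

```python
# dTestScore/dSTR = -1.12 + .0012*PctEL (estimates from the slide)
def str_effect(pctel):
    return -1.12 + 0.0012 * pctel

print(round(str_effect(0), 2))   # -1.12
print(round(str_effect(20), 2))  # -1.1 (i.e., -1.10)
```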
Example: TestScore, STR, PctEL (2 of 2)
TestScore-hat = 686.3 – 1.12 STR – 0.67 PctEL + .0012 (STR × PctEL)
               (11.8)   (0.59)     (0.37)       (0.019)
• Does population coefficient on STR×PctEL = 0?
t = .0012/.019 = .06 → can’t reject null at 5% level
• Does population coefficient on STR = 0?
t = –1.12/0.59 = –1.90 → can’t reject null at 5% level
• Do the coefficients on both STR and STR×PctEL = 0?
F = 3.89 (p-value = .021) → reject null at 5% level(!!) (high but
imperfect multicollinearity)
Summary: Nonlinear Regression Functions
• Using functions of the independent variables such as ln(X ) or X1×X2,
allows recasting a large family of nonlinear regression functions as
multiple regression.
• Estimation and inference proceed in the same way as in the linear
multiple regression model.
• Interpretation of the coefficients is model-specific, but the general rule
is to compute effects by comparing different cases (different reference
value of the original X’s)
• Many nonlinear specifications are possible, so you must use judgment:
– What nonlinear effect do you want to analyze?
– What makes sense in your application?
APPENDIX
Estimation of a cubic specification in STATA
(1 of 2)

gen avginc3 = avginc*avginc2   // create the cubic regressor
reg testscr avginc avginc2 avginc3, r

Regression with robust standard errors Number of obs = 420


F( 3, 416) = 270.18
Prob > F = 0.0000
R-squared = 0.5584
Root MSE = 12.707

------------------------------------------------------------------------------
| Robust
testscr | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
avginc | 5.018677 .7073505 7.10 0.000 3.628251 6.409104
avginc2 | -.0958052 .0289537 -3.31 0.001 -.1527191 -.0388913
avginc3 | .0006855 .0003471 1.98 0.049 3.27e-06 .0013677
_cons | 600.079 5.102062 117.61 0.000 590.0499 610.108
------------------------------------------------------------------------------
Estimation of a cubic specification in STATA
(2 of 2)

Testing the null hypothesis of linearity, against the alternative that the population
regression is quadratic and/or cubic, that is, it is a polynomial of degree up to 3:
H0: population coefficients on Income2 and Income3 = 0
H1: at least one of these coefficients is nonzero.
test avginc2 avginc3
( 1) avginc2 = 0.0
( 2) avginc3 = 0.0

F( 2, 416) = 37.69
Prob > F = 0.0000

The hypothesis that the population regression is linear is rejected at the 1%
significance level against the alternative that it is a polynomial of degree up to 3.
Other nonlinear functions and nonlinear least squares

- These are less commonly applied in empirical studies.


For details, refer to SW 4th Edition, Appendix 8.1
Other nonlinear functions (and nonlinear
least squares) (SW Appendix 8.1)
• Polynomial: test score can decrease with income
• Linear-log: test score increases with income, but without bound
• Here is a nonlinear function in which Y always increases with X
and there is a maximum (asymptote) value of Y:

Y = β0 – α e^(–β1 X)
β0, β1, and α are unknown parameters. This is called a negative
exponential growth curve. The asymptote as X → ∞ is β0.
Negative exponential growth
We want to estimate the parameters of

Yi = β0 – α e^(–β1 Xi) + ui

or Yi = β0[1 – e^(–β1(Xi – β2))] + ui   (*)

where α = β0 e^(β1β2) (why would you do this???)

Compare model (*) to linear-log or cubic models:
Yi = β0 + β1 ln(Xi) + ui
Yi = β0 + β1 Xi + β2 Xi^2 + β3 Xi^3 + ui

The linear-log and polynomial models are linear in the


parameters β0 and β1 – but the model (*) is not.
Nonlinear Least Squares
• Models that are linear in the parameters can be estimated by OLS.
• Models that are nonlinear in one or more parameters can be
estimated by nonlinear least squares (NLS) (but not by OLS)
• The NLS problem for the proposed specification:
min over β0, β1, β2 of   Σ(i = 1 to n) [Yi – β0(1 – e^(–β1(Xi – β2)))]^2

This is a nonlinear minimization problem (a “hill-climbing”
problem). How could you solve this?
– Guess and check
– There are better ways…
– Implementation in STATA…
. nl (testscr = {b0=720}*(1 - exp(-1*{b1}*(avginc-{b2})))), r

(obs = 420)
Iteration 0:  residual SS = 1.80e+08
Iteration 1:  residual SS = 3.84e+07
Iteration 2:  residual SS = 4637400
Iteration 3:  residual SS = 300290.9     STATA is “climbing the hill”
Iteration 4:  residual SS = 70672.13     (actually, minimizing the SSR)
Iteration 5:  residual SS = 66990.31
Iteration 6:  residual SS = 66988.4
Iteration 7:  residual SS = 66988.4
Iteration 8:  residual SS = 66988.4

Nonlinear regression with robust standard errors Number of obs = 420


F( 3, 417) = 687015.55
Prob > F = 0.0000
R-squared = 0.9996
Root MSE = 12.67453
Res. dev. = 3322.157
------------------------------------------------------------------------------
| Robust
testscr | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
b0 | 703.2222 4.438003 158.45 0.000 694.4986 711.9459
b1 | .0552339 .0068214 8.10 0.000 .0418253 .0686425
b2 | -34.00364 4.47778 -7.59 0.000 -42.80547 -25.2018
------------------------------------------------------------------------------
(SEs, P values, CIs, and correlations are asymptotic approximations)
Negative exponential growth; RMSE = 12.675
Linear-log; RMSE = 12.618 (slightly better!)
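The “guess and check” idea can be sketched in a few lines of Python on synthetic data (a crude grid search over β1 only, holding the other parameters fixed at assumed values; real NLS routines such as Stata’s nl use iterative derivative-based minimization):

```python
import math
import random

random.seed(0)
b0_true, b1_true, b2_true = 703.0, 0.055, -34.0  # near the slide's estimates

# synthetic data from the negative exponential growth model (*)
xs = [random.uniform(5, 55) for _ in range(200)]
ys = [b0_true * (1 - math.exp(-b1_true * (x - b2_true))) + random.gauss(0, 5)
      for x in xs]

def ssr(b0, b1, b2):
    """Sum of squared residuals -- the objective NLS minimizes."""
    return sum((y - b0 * (1 - math.exp(-b1 * (x - b2)))) ** 2
               for x, y in zip(xs, ys))

# crude grid search over b1, holding b0 and b2 fixed at their true values
grid = [i / 1000 for i in range(20, 100)]
best_b1 = min(grid, key=lambda b: ssr(b0_true, b, b2_true))
print(best_b1)  # close to the true value 0.055
```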
