Chapter 2 Simple Linear Regression
CHAPTER 2
AT THE END OF THIS CHAPTER, STUDENTS SHOULD BE ABLE TO UNDERSTAND:
OVERVIEW
➢2.1 Background
➢2.2 Correlation
➢2.3 Simple Linear Regression (SLR)
➢2.4 Least Squares Method
➢2.5 ANOVA
➢2.6 Model Evaluation
➢2.7–2.8 Applications/Examples
2.1 BACKGROUND
WHAT IS LINEAR REGRESSION (LR)?
Linear regression is a linear model: a model that assumes a linear
relationship between the input variables (x) and the single output
variable (y).
LR describes a relation between variables in which changes in some
variables may “explain” the changes in other variables.
An LR model estimates the nature of the relationship between the
independent and dependent variables.
Examples:
• Does a change in class size affect students' marks?
• Does cholesterol level depend on age, sex, or amount of
exercise?
2.1 What is Linear Regression (LR)?
Investigating the dependence of one variable (the dependent variable)
on one or more other variables (the independent variables) using a
straight line.
[Figure: four scatter plots of Y against X illustrating straight-line relationships]
2.1 What is LR model used for?
Linear regression models are used to show or predict
the relationship between two variables or factors. The
factor that is being predicted is called the dependent
variable.
Example of Linear Regression
Regression analysis is used in statistics to find trends in data.
For example, you might guess that there's a connection
between how much you eat and how much you
weigh; regression analysis can help you quantify that.
2.1 LR model - how does it work?
Linear Regression is the process of finding a line that best
fits the data points available on the plot, so that we can use
it to predict output values for inputs that are not present
in the data set we have, with the belief that those outputs
would fall on the line.
2.1 What is Regression Used for?
1. Predictive Analytics: forecasting future opportunities and risks is
the most prominent application of regression analysis in business.
2. Operational Efficiency: regression models can also be used to
optimize business processes.
3. Supporting Decisions: businesses today are overloaded with data on
finances, operations and customer purchases. Executives are now
leaning on data analytics to make informed business decisions that
have statistical significance, eliminating intuition and gut feel.
4. Correcting Errors: regression is not only great for lending
empirical support to management decisions but also for identifying
errors in judgment.
5. New Insights: over time businesses have gathered a large volume of
unorganized data that has the potential to yield valuable insights.
7
2.1 Example of LR in Forecasting
2.1 Types of Regression
2.1 Types of Regression
[Diagram: simple regression — (EDUCATION) → Y (Income); multiple regression — (EDUCATION), (SEX), (EXPERIENCE), (AGE) → Y (Income)]
2.2 Correlation
Correlation is a statistical technique that can show whether
and how strongly pairs of variables are related.
The range of possible values is from -1 to +1
The correlation is high if observations lie close to a straight
line (i.e. values close to +1 or −1) and low if observations are
widely scattered (correlation value close to 0)
$r = \dfrac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\left(\sum_{i}(x_i - \bar{x})^2\right)\left(\sum_{i}(y_i - \bar{y})^2\right)}}$
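As a check on the definition above, here is a minimal Python sketch that computes r from its formula; the data values are made up for illustration and are not from the chapter:

```python
import math

# Pearson correlation coefficient r, computed from its definition.
# The data values are made up for illustration (not from the chapter).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))  # cross-deviations
sxx = sum((xi - x_bar) ** 2 for xi in x)
syy = sum((yi - y_bar) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)
# r is close to +1 here: the points lie near a rising straight line
```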
2.2 Correlation
2.2 Correlation vs Linear Regression
2.3 Simple Linear Regression
• Quantitative analysis uses the information to predict future
behavior.
• Current information is usually in the form of a set of data.
• When the data form a set of pairs of numbers, we may
interpret them as representing the observed values of an
independent (predictor) variable x and a dependent
(response) variable y.
• The goal is to find a functional relation between the response
variable y and the predictor variable x,
𝑦 = 𝑓(𝑥)
SELECTION of independent variable(s)
- choose the most important predictor variable(s).
SCOPE of model
- we may need to restrict the coverage of the model to some interval or
region of values of the independent variable(s), depending on the
needs/requirements.
2.3 Regression - Population & Sample
2.3 Regression - Regression Model
General regression model:
𝑌 = 𝛽0 + 𝛽1 𝑋 + 𝜀
β₀ and β₁ are unknown parameters; X is a known value (the predictor)
The deviations ε are independent, distributed N(0, σ²)
The values of the regression parameters β₀ and β₁ are
not known. We estimate them from data.
2.3 Regression - Regression Line
• If the scatter plot of the sample data suggests a linear
relationship between the two variables, i.e.
$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$
the relationship can be summarized by a straight-line plot.
• The least squares method gives the “best” estimated line for
our set of sample data.
• The least squares method is a statistical procedure to find
the best fit for a set of data points by minimizing the sum
of the offsets or residuals of points from the plotted curve.
Least squares method is used to predict the behavior of
dependent variables.
2.4 Least Squares Method
• Line of ‘best fit’ means the differences between the actual y values
and the predicted y values are at a minimum; but positive differences
offset negative ones, so we square the errors!
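The minimization described above has a closed-form solution: β̂₁ = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and β̂₀ = ȳ − β̂₁x̄. A small Python sketch, using made-up data:

```python
# Least-squares estimates for simple linear regression from the
# closed-form formulas. Data values are made up for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)

b1 = sxy / sxx           # slope: minimizes the sum of squared errors
b0 = y_bar - b1 * x_bar  # intercept: the fitted line passes through (x_bar, y_bar)

y_hat = [b0 + b1 * xi for xi in x]  # predicted y values on the fitted line
```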
2.4 Assumptions in SLR
◼ Linear relationship: The relationship between X and the
mean of Y is linear
◼ Same variance: The errors should have the same variance.
◼ Independent observations: Observations are independent
of each other.
◼ Normal distribution of error terms: The residuals 𝜀𝑖 are
normally distributed
2.5 ANOVA
• ANOVA (analysis of variance) is the term for statistical
analyses of the different sources of variation.
• Partitioning of sums of squares and degrees of freedom
associated with the response variable.
2.5 ANOVA Table
2.5 ANOVA – SST, SSE & SSR
▪ Sum of Squares Total (SST):
- Measures how much variance is in the dependent variable.
- Made up of the SSE and SSR:
$\mathrm{SST} = \sum_{i=1}^{n}(y_i - \bar{y})^2,\qquad \mathrm{SSR} = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2,\qquad \mathrm{SSE} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2$
$\mathrm{SST} = \mathrm{SSR} + \mathrm{SSE}$
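The partition SST = SSR + SSE can be verified numerically; a Python sketch with made-up data (any least-squares fit satisfies the identity):

```python
# Numerical check of the partition SST = SSR + SSE for a least-squares
# fit. Data values are made up; the identity holds for any such fit.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar
y_hat = [b0 + b1 * xi for xi in x]

sst = sum((yi - y_bar) ** 2 for yi in y)               # total variation in y
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)           # variation explained by the line
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # residual (unexplained) variation

assert abs(sst - (ssr + sse)) < 1e-9  # SST = SSR + SSE
```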
2.6 Model Evaluation
SLR model evaluation uses software output (e.g. the Excel regression
output shown in the examples)
2.6 Model Evaluation
(i) Standard error of estimate (s)
➢ Compute the standard error of estimate:
$s = \sqrt{\dfrac{\mathrm{SSE}}{n-2}} = \sqrt{\dfrac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{n-2}}$
2.6 Model Evaluation
(ii) Coefficient of Determination
➢ Coefficient of determination:
$R^2 = \dfrac{\mathrm{SSR}}{\mathrm{SST}} = 1 - \dfrac{\mathrm{SSE}}{\mathrm{SST}}$
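Both quantities can be recomputed from the ANOVA figures that Excel reports for Example 1 later in this chapter (SSE = 0.299110, SST = 1.511492, n = 12):

```python
import math

# Recompute R-squared and the standard error of estimate from the
# Example 1 ANOVA figures reported in the Excel output in this chapter.
sse = 0.299109998   # residual sum of squares
sst = 1.511491667   # total sum of squares
n = 12              # number of observations

r2 = 1 - sse / sst            # coefficient of determination
s = math.sqrt(sse / (n - 2))  # standard error of estimate, df = n - 2

# r2 matches "R Square" (0.802109...) and s matches
# "Standard Error" (0.172947...) in the Excel printout.
```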
2.6 R-SQUARED
Hypothesis test
• A process that uses sample statistics to test a claim about the value
of a population parameter.
• Example: An automobile manufacturer advertises that its new hybrid
car has a mean mileage of 50 miles per gallon. To test this claim, a
sample would be taken. If the sample mean differs enough from the
advertised mean, you can decide the advertisement is wrong.
© 2019 Petroliam Nasional Berhad (PETRONAS)
2.6 Model Evaluation
(iii) The Hypothesis Test
• One-sided (tailed) lower-tail test: $H_0: \beta_1 \ge 0$ vs $H_1: \beta_1 < 0$
• One-sided (tailed) upper-tail test: $H_0: \beta_1 \le 0$ vs $H_1: \beta_1 > 0$
• Two-sided (tailed) test: $H_0: \beta_1 = 0$ vs $H_1: \beta_1 \ne 0$
➢ The t-test is more flexible since it can be used for one-sided
tests as well.
2.6 Model Evaluation
(iii) The hypothesis test
– t-test
➢ The t-test is used to check whether there is an adequate
(significant) relationship between x and y:
$t = \dfrac{\hat{\beta}_1}{se(\hat{\beta}_1)}, \qquad se(\hat{\beta}_1) = \dfrac{s}{\sqrt{\sum_i (x_i - \bar{x})^2}}$
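A sketch of the slope t statistic, t = β̂₁ / se(β̂₁) with se(β̂₁) = s/√Σ(xᵢ − x̄)², using made-up data:

```python
import math

# t statistic for testing H0: beta1 = 0 in simple linear regression.
# Data values are made up for illustration; df = n - 2.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
b0 = y_bar - b1 * x_bar

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))   # standard error of estimate
se_b1 = s / math.sqrt(sxx)     # standard error of the slope estimate

t = b1 / se_b1
# Compare |t| with the critical value t(alpha/2; n-2); a large |t|
# leads to rejecting H0, i.e. x does help explain y.
```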
2.6 Model Evaluation
(iii) The hypothesis test
– F-test
• In order to be able to construct a
statistical decision rule, we need to know
the distribution of our test statistic F.
$F = \dfrac{\mathrm{MSR}}{\mathrm{MSE}}$
• When $H_0$ is true, our test statistic F follows the
F-distribution with 1 and n − 2 degrees of freedom:
𝐹(𝛼; 1, 𝑛 − 2)
2.6 Model Evaluation
(iii) The hypothesis test – F-test
• This time we will use the F-test, the null and alternative
hypothesis are:
𝐻0 : 𝛽1 = 0
𝐻𝑎 : 𝛽1 ≠ 0
Construction of the decision rule:
At the α = 5% level, reject $H_0$ if $F > F(\alpha; 1, n-2)$
Electricity Usage (y, kWh): 2.48, 2.26, 2.47, 2.77, 2.99, 3.05, 3.18, 3.46, 3.03, 3.26, 2.67, 2.53
Set up the hypothesis
H₀: β₁ = 0 (there is no relationship between x and y, i.e. no
relationship between Production and Electricity Usage)
H₁: β₁ ≠ 0
Excel Results
Regression Statistics
Multiple R 0.895605603
R Square 0.802109396
Adjusted R Square 0.782320336
Standard Error 0.172947969
Observations 12
ANOVA
df SS MS F Significance F
Regression 1 1.212381668 1.21238 40.53297031 8.1759E-05
Residual 10 0.299109998 0.02991
Total 11 1.511491667
Coefficients Standard Error t Stat P-value Lower 95% Upper 95% Lower 95.0% Upper 95.0%
Intercept 0.409048191 0.385990515 1.05974 0.314189743 -0.450992271 1.269088653 -0.45099227 1.269088653
X Variable 1 0.498830121 0.078351706 6.36655 8.1759E-05 0.324251642 0.673408601 0.32425164 0.673408601
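A quick cross-check of this printout: the F statistic is MSR/MSE, the slope t statistic is the coefficient divided by its standard error, and in simple linear regression the two are linked by F = t²:

```python
# Cross-check of the Excel printout for Example 1: F = MSR/MSE, the
# slope t statistic is the coefficient over its standard error, and
# in simple linear regression the two are linked by F = t^2.
ss_regression = 1.212381668
ss_residual = 0.299109998

msr = ss_regression / 1    # mean square regression (df = 1)
mse = ss_residual / 10     # mean square error (df = n - 2 = 10)
f_stat = msr / mse         # ~40.533, matching the ANOVA "F" column

b1 = 0.498830121           # X Variable 1 coefficient
se_b1 = 0.078351706        # its standard error
t_stat = b1 / se_b1        # ~6.36655, matching "t Stat"
```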
Excel Results : Regression Line
[Figure: Production line fit plot — observed and predicted electricity usage vs production, with fitted line y = 0.4988x + 0.409, R² = 0.8021]
2.7 Example 1 - Summary
Estimated Regression Line: ŷ = 0.4091 + 0.4988x
Electricity usage = 0.4091 + 0.4988*Production
Standard Error of Estimate = 0.173
Coefficient of Determination R2 = 0.802
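The estimated line can be used directly for prediction; a sketch (the production value 5 is an illustrative input within the sample range, not a figure from the chapter):

```python
# Predict electricity usage (kWh) from the Example 1 estimated line.
def predict_usage(production):
    """Estimated regression line from Example 1: y_hat = 0.4091 + 0.4988 x."""
    return 0.4091 + 0.4988 * production

# Illustrative production value (chosen inside the sample range, not
# a figure from the chapter):
usage = predict_usage(5.0)   # 0.4091 + 0.4988 * 5 = 2.9031
```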
Internal
2.8 Example 2 - Application of SLR
to Reservoir Quality Index (RQI)
Example: Given data on Permeability and Reservoir Quality
Index, RQI, investigate the dependence of RQI (Y) on
Permeability (X).
Excel Results – Example 2
Regression Statistics
Multiple R 0.680322
R Square 0.462837
Adjusted R
Square 0.461716
Standard Error 0.40947
Observations 481
ANOVA
df SS MS F Significance F
Regression 1 69.19926 69.19926 412.7226 1.22E-66
Residual 479 80.31167 0.167665
Total 480 149.5109
Excel Results – Example 2
[Figure: Permeability (md) line fit plot — observed and predicted RQI vs Permeability, with fitted line y = 0.3097 + 0.0017x, R² = 0.4628]
2.8 Example 2 - Interpretation of
the results
• Permeability (md) coefficient (β₁ = 0.0017): each unit
increase in Permeability adds 0.0017 to the RQI value.
• β₁ > 0 (positive relationship): RQI increases with the
increase in Permeability.
• Intercept coefficient (β₀ = 0.3097): the value of RQI when
Permeability equals zero.
• R Square = 0.462837: indicates that the model explains about 46% of
the total variability in the RQI values around their mean.
• P-value < 0.05: the regression is significant.
2.8 Example 2 (a) - Application of
SLR to Reservoir Quality Index (RQI)
Example: Given data on Permeability and Reservoir Quality
Index, RQI, investigate the dependence of RQI (Y) on
Permeability (X). For this example, outliers have been
detected and removed.
Set up the hypothesis:
H₀: β₁ = 0 (there is no relationship between x and y, i.e. no
relationship between RQI and Permeability)
H₁: β₁ ≠ 0
2.8 Example 2 (a) - Application of
SLR to Reservoir Quality Index (RQI)
Outlier detection (quartiles and bounds):
Q1 = 0.125452
Q3 = 0.536934
IQR = 0.411482
L Bound = -0.49177
U Bound = 1.154156
41 outliers were detected and treated.
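The bounds in the table follow the usual 1.5 × IQR rule (lower = Q1 − 1.5·IQR, upper = Q3 + 1.5·IQR), which can be verified from the reported quartiles:

```python
# Recompute the outlier bounds from the reported quartiles using the
# 1.5 * IQR rule.
q1 = 0.125452
q3 = 0.536934

iqr = q3 - q1            # 0.411482, matching "IQR" in the table
lower = q1 - 1.5 * iqr   # -0.491771, matching "L Bound"
upper = q3 + 1.5 * iqr   # 1.154157, matching "U Bound" up to rounding

# Observations outside [lower, upper] are flagged as outliers.
```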
Excel Results – Example 2 (a)
[Figure: RQI vs Permeability scatter plot (outliers removed), with the observed RQI values on the Y-axis and the fitted regression line overlaid]
Excel Results – Example 2 (a)
Regression Statistics
Multiple R 0.851746
R Square 0.725471
Adjusted R Square 0.724844
What can you conclude here?
Standard Error 0.134745
Observations 440
ANOVA
df SS MS F Significance F
Regression 1 21.01512 21.01511817 1157.460884 5.0238E-125
Residual 438 7.952426 0.018156223
Total 439 28.96754
2.8 Example 2 (a) - Interpretation
of the results
• Permeability (md) coefficient (β₁ = 0.003609): each unit
increase in Permeability adds 0.00361 to RQI.
• β₁ > 0 (positive relationship): RQI increases with the
increase in Permeability.
• Intercept coefficient (β₀ = 0.186763): the value of RQI when
Permeability equals zero.
• R Square = 0.725471: indicates that the model explains about 73% of
the total variability in the RQI values around their mean. This value
has improved significantly with the removal of the outliers.
• P-value < 0.05: the regression is significant.