
Chapter 10

Logistic Regression

© Galit Shmueli and Peter Bruce 2010


Logistic Regression
• Extends the idea of linear regression to situations where the outcome variable is categorical
• Widely used, particularly where a structured model is useful to explain (=profiling) or to predict
• We focus on binary classification, i.e. Y = 0 or Y = 1
The Logit
• Goal: find a function of the predictor variables that relates them to a 0/1 outcome
• Instead of Y as the outcome variable (as in linear regression), we use a function of Y called the logit
• The logit can be modeled as a linear function of the predictors
• The logit can be mapped back to a probability, which, in turn, can be mapped to a class
Step 1: Logistic Response Function
• p = probability of belonging to class 1
• Need to relate p to the predictors with a function that guarantees $0 \le p \le 1$
• A standard linear function (as shown below) does not:

$p = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_q x_q$
Logistic Response Function
$p = \dfrac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_q x_q)}}$

• $e = 2.71828\,18284\,59045\ldots$
• Reference values of the natural log:
  - ln 1 = 0, ln e = 1
  - ln 2 ≈ 0.6931, ln 3 ≈ 1.0986, ln π ≈ 1.1447, ln 4 ≈ 1.3863, ln 7 ≈ 1.9459, ln 10 ≈ 2.3026
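As a quick numeric check, here is a minimal Python sketch of this response function; the coefficient values are made up for illustration:

```python
import numpy as np

def logistic_response(x, betas):
    """p = 1 / (1 + exp(-(b0 + b1*x1 + ... + bq*xq))) -- always in (0, 1)."""
    linear = betas[0] + np.dot(betas[1:], x)
    return 1.0 / (1.0 + np.exp(-linear))

# Made-up coefficients: intercept -1.0, single slope 0.5
print(logistic_response(np.array([2.0]), np.array([-1.0, 0.5])))  # 0.5
```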
Step 2: The Odds
• The odds of an event are defined as:

$\text{Odds} = \dfrac{p}{1 - p}$, where p = probability of the event

  - Odds of "3 to 1" mean that if your bet wins, you receive 3 for every 1 that you bet
• Given the odds of an event, the probability of the event can be computed by:

$p = \dfrac{\text{Odds}}{1 + \text{Odds}}$

• We can also relate the odds to the predictors:

$\text{Odds} = e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_q x_q}$
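A minimal Python sketch of these probability–odds conversions, using the "3 to 1" example above:

```python
def odds_from_prob(p):
    """Odds = p / (1 - p)."""
    return p / (1.0 - p)

def prob_from_odds(odds):
    """p = Odds / (1 + Odds)."""
    return odds / (1.0 + odds)

# Odds of "3 to 1" correspond to a probability of 0.75, and back again
print(odds_from_prob(0.75))  # 3.0
print(prob_from_odds(3.0))   # 0.75
```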
Step 3: Take log on both sides

• This gives us the logit:

$\log(\text{Odds}) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_q x_q$

• log(Odds) = logit (eq. 10.6)


Logit, cont.
• So, the logit is a linear function of the predictors $x_1, x_2, \dots$
  - Takes values from $-\infty$ to $+\infty$
• Review the relationship between logit, odds, and probability

[Figure: Odds (a) and Logit (b) as a function of p]
Example: Personal Loan Offer

• Outcome variable: accept bank loan (0/1)
• Predictors: demographic info, and info about the customer's bank relationship
Data preprocessing
• Partition 60% training, 40% validation
• Create 0/1 dummy variables for the categorical predictors
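A sketch of both preprocessing steps in Python; the column names and values here are assumptions, not the book's exact data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical slice of the loan data; column names are assumptions
df = pd.DataFrame({
    "Income":       [49, 34, 11, 100, 45],
    "Education":    ["Undergrad", "Graduate", "Advanced", "Graduate", "Undergrad"],
    "PersonalLoan": [0, 0, 0, 1, 0],
})

df = pd.get_dummies(df, columns=["Education"], drop_first=True)  # 0/1 dummies
train, valid = train_test_split(df, test_size=0.4, random_state=1)  # 60/40 partition
```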
Single Predictor Model
• Modeling loan acceptance on income (x)
• Fitted coefficients: b0 = -6.3525, b1 = 0.0392
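Plugging these coefficients into the logistic response function gives the estimated acceptance probability at any income level; a small sketch (income assumed to be in $000s):

```python
import numpy as np

b0, b1 = -6.3525, 0.0392  # fitted coefficients from the output above

def prob_accept(income):
    """Estimated probability of accepting the loan offer."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * income)))

print(prob_accept(100))  # ~0.08 at income = 100 (assumed $000s)
```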
Seeing the Relationship
Last Step - Classify
• The model produces an estimated probability of being a "1"
• Convert to a classification by establishing a cutoff level
• If the estimated probability > cutoff, classify as "1"
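A minimal sketch of the cutoff step:

```python
import numpy as np

def classify(probs, cutoff=0.5):
    """Label a case as 1 when its estimated probability exceeds the cutoff."""
    return (np.asarray(probs) > cutoff).astype(int)

print(classify([0.10, 0.48, 0.52, 0.90]))               # [0 0 1 1]
print(classify([0.10, 0.48, 0.52, 0.90], cutoff=0.25))  # [0 1 1 1]
```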


Ways to Determine Cutoff
• 0.50 is a popular initial choice
• Additional considerations (see Chapter 5):
  - Maximize classification accuracy
  - Maximize sensitivity (subject to a minimum level of specificity)
  - Minimize false positives (subject to a maximum false-negative rate)
  - Minimize expected cost of misclassification (need to specify costs)
Example, cont.
• Estimates of the b's are derived through an iterative process called maximum likelihood estimation
• Let's now include all 12 predictors in the model
• XLMiner's output gives coefficients for the logit, as well as odds for the individual terms
Estimated Equation for Logit (Equation 10.9)

Equation for Odds (Equation 10.10)
Converting to Probability

$p = \dfrac{\text{Odds}}{1 + \text{Odds}}$
Interpreting Odds, Probability
• For predictive classification, we typically use the probability with a cutoff value
• For explanatory purposes, the odds have a useful interpretation:
  - If we increase x1 by one unit, holding x2, x3, …, xq constant, the odds of belonging to class 1 are multiplied by a factor of $e^{b_1}$
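For example, using the income coefficient from the single-predictor model above:

```python
import numpy as np

b1 = 0.0392  # income coefficient from the single-predictor model
print(np.exp(b1))  # ~1.04: each extra unit of income multiplies the odds by ~1.04
```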
Loan Example: Evaluating Classification Performance

• Performance measures: confusion matrix and % of misclassifications
• More useful in this example: lift
Multicollinearity
• Problem: as in linear regression, if one predictor is a linear combination of other predictor(s), model estimation will fail
  - Note that in such a case we have at least one redundant predictor
• Solution: remove extreme redundancies, either by dropping predictors via variable selection (see next) or by data reduction methods such as PCA
Variable Selection
• This is the same issue as in linear regression
- The number of correlated predictors can grow when we
create derived variables such as interaction terms (e.g.
Income x Family), to capture more complex
relationships
• Problem: Overly complex models have the danger
of overfitting
• Solution: Reduce variables via automated
selection of variable subsets (as with linear
regression)
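One way to automate subset selection, sketched here with scikit-learn's forward selection on synthetic stand-in data (the book itself uses XLMiner's routines):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a 12-predictor dataset like the loan example
X, y = make_classification(n_samples=500, n_features=12,
                           n_informative=4, random_state=1)

# Forward selection of a 4-variable subset, scored by cross-validation
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=4,
                                     direction="forward")
selector.fit(X, y)
print(selector.get_support())  # boolean mask of the selected predictors
```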
P-values for Predictors

• Test the null hypothesis that a coefficient = 0
• Useful for deciding whether to include a variable in the model
• Key in profiling tasks, but less important in predictive classification
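A hedged sketch of where these p-values come from, using a statsmodels maximum-likelihood fit on synthetic data (the book's output comes from XLMiner; this substitute is only for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: y depends on x1 but not on x2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 0])))
y = (rng.random(200) < p).astype(int)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(fit.pvalues)  # small p-value for x1; x2's should be large (non-influential)
```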
Complete Example:
Predicting Delayed Flights DC to NY
Variables
• Outcome: delayed or not-delayed
• Predictors:
- Day of week
- Departure time
- Origin (DCA, IAD, BWI)
- Destination (LGA, JFK, EWR)
- Carrier
- Weather (1 = bad weather)
Data Preprocessing
• Create binary dummies for the categorical variables
• Partition 60%-40% into training/validation
The Fitted Model
Model Output (Validation Data)
Lift Chart
Summary
• Logistic regression is similar to linear regression, except that it is used with a categorical response
• It can be used for explanatory tasks (=profiling) or predictive tasks (=classification)
• The predictors are related to the response Y via a nonlinear function called the logit
• As in linear regression, reducing the number of predictors can be done via variable selection
• Logistic regression can be generalized to more than two classes (softmax regression)
