Naïve Bayes and Decision Tree

Characteristics
- Data-driven, not model-driven
- Make no assumptions about the data

Naïve Bayes: The Basic Idea
- For a given new record to be classified, find other records like it (i.e., same values for the predictors)
- What is the prevalent class among those records?
- Assign that class to your new record

Usage
- Requires categorical variables
- Numerical variables must be binned and converted to categorical
- Can be used with very large data sets
- Example: spell check programs assign your misspelled word to an established "class" (i.e., a correctly spelled word)

Charges? Size Outcome
y small truthful
n small truthful
n large truthful
n large truthful
n small truthful
n small truthful
y small fraud
y large fraud
n large fraud
y large fraud

Exact Bayes Calculations
- Goal: classify (as "fraudulent" or as "truthful") a small firm with charges filed
- There are 2 firms like that, one fraudulent and the other truthful
- P(fraud | charges = y, size = small) = 1/2 = 0.50
- Note: the calculation is limited to the two firms matching those characteristics (a quick sketch follows)
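
A minimal filter-and-count sketch in Python, with the 10-firm table above hard-coded (the variable names are just for illustration):

```python
# Exact Bayes on the 10-firm table: keep only records that match the new
# firm's predictor values, then take the proportion that are fraudulent.
records = [
    ("y", "small", "truthful"), ("n", "small", "truthful"),
    ("n", "large", "truthful"), ("n", "large", "truthful"),
    ("n", "small", "truthful"), ("n", "small", "truthful"),
    ("y", "small", "fraud"),    ("y", "large", "fraud"),
    ("n", "large", "fraud"),    ("y", "large", "fraud"),
]
matches = [r for r in records if r[0] == "y" and r[1] == "small"]
p_fraud = sum(r[2] == "fraud" for r in matches) / len(matches)
print(len(matches), p_fraud)  # 2 matching firms, P(fraud | charges=y, small) = 0.5
```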

Exact Bayes Classifier
- Relies on finding other records that share the same predictor values as the record to be classified
- Want to find the "probability of belonging to class C, given specified values of the predictors"
- Even with large data sets, it may be hard to find other records that exactly match your record in terms of predictor values

Solution – Naïve Bayes
- Assume independence of the predictor variables (within each class)
- Use the multiplication rule
- Find the same probability that the record belongs to class C, given the predictor values, without limiting the calculation to records that share all those same values

Calculations
1. Take a record, and note its predictor values
2. Find the probabilities that those predictor values occur across all records in C1
3. Multiply them together, then multiply by the proportion of records belonging to C1
4. Do the same for C2, C3, etc.
5. The probability of belonging to C1 is the value from step (3) divided by the sum of all such values for C1 … Cn
6. Establish & adjust a "cutoff" probability for the class of interest (a sketch of steps 1-5 follows)
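
A short sketch of steps 1-5 in Python. The helper name naive_bayes_probs is hypothetical, the predictors are assumed categorical, and the zero-count problem noted under "Shortcomings" below is ignored:

```python
def naive_bayes_probs(record, X, y):
    """Steps 1-5: for each class, multiply the within-class frequencies of the
    record's predictor values by the class proportion, then normalize."""
    n = len(y)
    scores = {}
    for c in set(y):                                      # each class C1, C2, ...
        class_rows = [x for x, label in zip(X, y) if label == c]
        score = len(class_rows) / n                       # proportion of records in class c
        for j, value in enumerate(record):                # each predictor value of the record
            score *= sum(x[j] == value for x in class_rows) / len(class_rows)
        scores[c] = score
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}      # step 5: normalize across classes
```

Step 6 would then compare the probability for the class of interest against a chosen cutoff.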

Example: Financial Fraud
- Target variable: audit finds fraud / no fraud
- Predictors:
  - Prior pending legal charges (yes/no)
  - Size of firm (small/large)

Charges? Size Outcome
y small truthful
n small truthful
n large truthful
n large truthful
n small truthful
n small truthful
y small fraud
y large fraud
n large fraud
y large fraud

Naïve Bayes Calculations
Same goal as before. Compute 2 quantities:
- Proportion of "charges = y" among frauds, times proportion of "small" among frauds, times proportion of frauds = 3/4 * 1/4 * 4/10 = 0.075
- Proportion of "charges = y" among truthfuls, times proportion of "small" among truthfuls, times proportion of truthfuls = 1/6 * 4/6 * 6/10 = 0.067
- P(fraud | charges = y, size = small) = 0.075 / (0.075 + 0.067) = 0.53 (checked in the snippet below)
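
A quick check of the arithmetic (Python):

```python
p_fraud_num = (3/4) * (1/4) * (4/10)   # 0.075
p_truth_num = (1/6) * (4/6) * (6/10)   # ~0.067
print(round(p_fraud_num / (p_fraud_num + p_truth_num), 2))   # 0.53
```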

Advantages
- Handles purely categorical data well
- Works well with very large data sets
- Simple & computationally efficient

Shortcomings
- Requires a large number of records
- Problematic when a predictor category is not present in the training data: it assigns 0 probability to the response, ignoring the information in the other variables

On the other hand…
- Probability rankings are more accurate than the actual probability estimates
- Good for applications using lift (e.g., response to a mailing), less so for applications requiring probabilities (e.g., credit scoring)

Summary
- No statistical models involved
- Naïve Bayes (like KNN) pays attention to complex interactions and local structure
- Computational challenges remain

Trees and Rules
- Goal: classify or predict an outcome based on a set of predictors
- The output is a set of rules
- Example: to classify a record as "will accept credit card offer" or "will not accept", a rule might be "IF (Income > 92.5) AND (Education < 1.5) AND (Family <= 2.5) THEN Class = 0 (nonacceptor)" (see the sketch after this list)
- Also called CART, Decision Trees, or just Trees
- Rules are represented by tree diagrams
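
Such a rule translates directly into code. A hypothetical sketch (the fallback class for records not covered by this single rule is an assumption; a real tree has a rule for every leaf):

```python
def classify_offer(income, education, family):
    # IF (Income > 92.5) AND (Education < 1.5) AND (Family <= 2.5) THEN Class = 0
    if income > 92.5 and education < 1.5 and family <= 2.5:
        return 0   # nonacceptor
    return 1       # assumed fallback; a full tree covers the remaining cases

print(classify_offer(income=100, education=1, family=2))   # 0 (nonacceptor)
```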

Key Ideas
- Recursive partitioning: repeatedly split the records into two parts so as to achieve maximum homogeneity within the new parts
- Pruning the tree: simplify the tree by pruning peripheral branches to avoid overfitting

Recursive Partitioning

Recursive Partitioning Steps
- Pick one of the predictor variables, x_i
- Pick a value of x_i, say s_i, that divides the training data into two (not necessarily equal) portions
- Measure how "pure" or homogeneous each of the resulting portions is ("pure" = containing records of mostly one class)
- The algorithm tries different values of x_i and s_i to maximize purity in the initial split
- After you get a "maximum purity" split, repeat the process for a second split, and so on (the split search is sketched below)
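
A minimal sketch of one pass of this search in Python, using the count of majority-class records as a simple stand-in for the purity measures defined later (the function name and data layout are illustrative):

```python
def best_split(X, y, candidates):
    """Try every predictor x_j and every candidate value s, and keep the split
    whose two portions contain the most majority-class records in total."""
    best = None
    for j, values in enumerate(candidates):                # each predictor
        for s in values:                                   # each candidate split value
            left  = [label for x, label in zip(X, y) if x[j] <= s]
            right = [label for x, label in zip(X, y) if x[j] > s]
            if not left or not right:                      # skip degenerate splits
                continue
            score = (max(left.count(c) for c in set(left)) +
                     max(right.count(c) for c in set(right)))
            if best is None or score > best[0]:
                best = (score, j, s)
    return best   # (purity score, predictor index, split value)
```

Recursive partitioning then applies the same search separately to each of the two resulting portions, and so on.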

Example: Riding Mowers
- Goal: classify 24 households as owning or not owning riding mowers
- Predictors = Income, Lot Size

Income Lot_Size Ownership
60.0 18.4 owner
85.5 16.8 owner
64.8 21.6 owner
61.5 20.8 owner
87.0 23.6 owner
110.1 19.2 owner
108.0 17.6 owner
82.8 22.4 owner
69.0 20.0 owner
93.0 20.8 owner
51.0 22.0 owner
81.0 20.0 owner
75.0 19.6 non-owner
52.8 20.8 non-owner
64.8 17.2 non-owner
43.2 20.4 non-owner
84.0 17.6 non-owner
49.2 17.6 non-owner
59.4 16.0 non-owner
66.0 18.4 non-owner
47.4 16.4 non-owner
33.0 18.8 non-owner
51.0 14.0 non-owner
63.0 14.8 non-owner

How to split
- Order records according to one variable, say lot size
- Find midpoints between successive values, e.g., the first midpoint is 14.4 (halfway between 14.0 and 14.8)
- Divide records into those with lot size > 14.4 and those < 14.4
- After evaluating that split, try the next one, which is 15.4 (halfway between 14.8 and 16.0); see the sketch below
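
For example, the candidate split points for Lot_Size in the riding-mower data:

```python
# Distinct Lot_Size values from the table above, in ascending order
lot_sizes = sorted({18.4, 16.8, 21.6, 20.8, 23.6, 19.2, 17.6, 22.4, 20.0, 22.0,
                    19.6, 17.2, 20.4, 16.0, 16.4, 18.8, 14.0, 14.8})
midpoints = [(a + b) / 2 for a, b in zip(lot_sizes, lot_sizes[1:])]
print(midpoints[:2])   # [14.4, 15.4], the first two candidate splits
```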

Note: Categorical Variables
- Examine all possible ways in which the categories can be split
- E.g., categories A, B, C can be split 3 ways: {A} and {B, C}; {B} and {A, C}; {C} and {A, B}
- With many categories, the number of splits becomes huge (see the count below)
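
A small illustration of the combinatorics (for k categories there are 2**(k-1) - 1 distinct two-way splits):

```python
from itertools import combinations

categories = ("A", "B", "C")
splits = {frozenset([group, tuple(c for c in categories if c not in group)])
          for r in range(1, len(categories))
          for group in combinations(categories, r)}
print(len(splits))   # 3 two-way splits for {A, B, C}
```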

Measuring Impurity

Gini Index
- Gini index for a rectangle A, where p_k is the proportion of records in A that belong to class k (summing over all classes k):
  I(A) = 1 - Σ_k p_k^2
- I(A) = 0 when all cases belong to the same class
- Maximum value when all classes are equally represented (= 0.50 in the binary case)
- Note: XLMiner uses a variant called the "delta splitting rule"

Entropy
- With the same p_k (proportion of cases in rectangle A that belong to class k):
  entropy(A) = -Σ_k p_k log2(p_k)
- Entropy ranges between 0 (most pure) and log2(number of classes) (all classes equally represented); both measures are coded below
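
The two measures as code (a direct transcription of the formulas, not XLMiner's delta splitting rule):

```python
from math import log2

def gini(p):        # p = list of class proportions within a rectangle
    return 1 - sum(pk ** 2 for pk in p)

def entropy(p):
    return -sum(pk * log2(pk) for pk in p if pk > 0)

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))   # 0.5 1.0 for a 50/50 binary node
```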

Impurity and Recursive Partitioning
- Obtain an overall impurity measure for a split (the weighted average over the individual rectangles)
- At each successive stage, compare this measure across all possible splits in all variables
- Choose the split that reduces impurity the most
- Chosen split points become nodes on the tree (a worked example follows)
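
For instance, a hypothetical split of the 24 mower records into two groups of 12 with owner/non-owner counts of 10/2 and 2/10 (illustrative counts, not the actual first split):

```python
# Before the split: 12 owners and 12 non-owners, so Gini = 1 - (0.5**2 + 0.5**2) = 0.5
gini_left  = 1 - ((10/12)**2 + (2/12)**2)        # left rectangle
gini_right = 1 - ((2/12)**2 + (10/12)**2)        # right rectangle
weighted   = (12/24) * gini_left + (12/24) * gini_right
print(round(weighted, 3))                        # 0.278, a large reduction from 0.5
```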


First Split – The Tree
Tree after three splits

Tree Structure
- Split points become nodes on the tree (circles with the split value in the center)
- Rectangles represent "leaves" (terminal points, no further splits, classification value noted)
- Numbers on the lines between nodes indicate the # of cases
- Read down the tree to derive a rule, e.g., if lot size < 19 and income > 84.75, then class = "owner"

Determining Leaf Node Label
- Each leaf node label is determined by "voting" of the records within it, and by the cutoff value
- Records within each leaf node are from the training data
- The default cutoff = 0.5 means that the leaf node's label is the majority class
- Cutoff = 0.75 requires a majority of 75% or more "1" records in the leaf to label it a "1" node (sketched below)
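
A sketch of the voting rule, with the two classes coded as 1 and 0:

```python
def leaf_label(train_labels, cutoff=0.5):
    # Label the leaf 1 only if the proportion of 1-records meets the cutoff
    prop_ones = sum(train_labels) / len(train_labels)
    return 1 if prop_ones >= cutoff else 0

votes = [1, 1, 0, 1]                   # training records falling in this leaf
print(leaf_label(votes))               # 1: 75% ones clears the default 0.5 cutoff
print(leaf_label(votes, cutoff=0.8))   # 0: 75% falls short of a stricter 0.8 cutoff
```
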
Tree after all splits
The Overfitting Problem

Stopping Tree Growth
- The natural end of the process is 100% purity in each leaf
- This overfits the data: the tree ends up fitting noise in the data
- Overfitting leads to low predictive accuracy on new data
- Past a certain point, the error rate for the validation data starts to increase

Full Tree Error Rate

CHAID
- CHAID, older than CART, uses a chi-square statistical test to limit tree growth
- Splitting stops when the improvement in purity is not statistically significant (the underlying test is illustrated below)
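
CHAID itself is not implemented here, but the underlying test is the standard chi-square test of independence between a candidate split and the outcome. A sketch with SciPy, using made-up 2x2 counts:

```python
from scipy.stats import chi2_contingency

# Rows: the two portions of a candidate split; columns: class counts in each portion
counts = [[20, 5],
          [8, 17]]
chi2, p_value, dof, expected = chi2_contingency(counts)
print(p_value)   # split only if p is below the chosen significance level
```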

Pruning
- CART lets the tree grow to full extent, then prunes it back
- The idea is to find the point at which the validation error begins to rise
- Generate successively smaller trees by pruning leaves
- At each pruning stage, multiple trees are possible
- Use cost complexity to choose the best tree at that stage (see the sketch below)
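
In practice this is usually done with a library. A sketch using scikit-learn's cost-complexity pruning (X_train, y_train, X_valid, y_valid are assumed to already exist; XLMiner's pruning procedure differs in its details):

```python
from sklearn.tree import DecisionTreeClassifier

# Candidate complexity penalties (alphas) derived from the fully grown tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

best_acc, best_tree = -1.0, None
for alpha in path.ccp_alphas:                      # one pruned tree per alpha
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    acc = tree.score(X_valid, y_valid)             # validation accuracy
    if acc > best_acc:                             # keep the tree with the lowest validation error
        best_acc, best_tree = acc, tree
```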

Advantages of trees
- Easy to use and understand
- Produce rules that are easy to interpret & implement
- Variable selection & reduction is automatic
- Do not require the assumptions of statistical models
- Can work without extensive handling of missing data

Disadvantages
- May not perform well where there is structure in the data that is not well captured by the tree structure
- Since the process deals with one variable at a time, there is no way to capture interactions between variables

Summary
- Decision trees are an easily understandable and transparent method for predicting or classifying new records
- A tree is a graphical representation of a set of rules
- Trees must be pruned to avoid over-fitting of the training data
- As trees do not make any assumptions about the data structure, they usually require large samples
