ML-Lec-07-Decision Tree Overfitting
Measuring Node Impurity
p(i|t): fraction of records associated
with node t belonging to class i
Gini(t) = 1 – Σ_i [p(i|t)]²
Entropy(t) = – Σ_i p(i|t) log₂ p(i|t)
Classification error(t) = 1 – max_i p(i|t)
Example
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Gini = 1 – (P(C1))² – (P(C2))² = 1 – 0 – 1 = 0
Entropy = – 0 log 0 – 1 log 1 = – 0 – 0 = 0 (taking 0 log 0 = 0)
Error = 1 – max(0, 1) = 1 – 1 = 0
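These quantities are straightforward to compute from a node's class counts. Below is a minimal Python sketch (the helper names are mine, not from the lecture) that reproduces the example above and, for contrast, an evenly mixed node:

```python
import math

def gini(counts):
    """Gini(t) = 1 - sum_i p(i|t)^2, computed from raw class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """Entropy(t) = -sum_i p(i|t) * log2 p(i|t), with 0 log 0 taken as 0."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def classification_error(counts):
    """Error(t) = 1 - max_i p(i|t)."""
    n = sum(counts)
    return 1.0 - max(counts) / n

# Pure node from the example: C1 = 0, C2 = 6  ->  all three measures are 0
print(gini([0, 6]), entropy([0, 6]), classification_error([0, 6]))
# Evenly mixed node: C1 = 3, C2 = 3  ->  Gini 0.5, Entropy 1.0, Error 0.5
print(gini([3, 3]), entropy([3, 3]), classification_error([3, 3]))
```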
Splitting Based on GINI
When a node p is split into k partitions
(children), the quality of the split is computed as
GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i),
where n_i is the number of records at child i and n is the number of records at node p.
Binary Attributes: Computing GINI Index
• Splits into two partitions
• Effect of weighting partitions:
– larger and purer partitions are sought.
Split on attribute B: Yes → Node N1, No → Node N2
Gini(N1) = 1 – (5/7)² – (2/7)² = 0.408
Gini(N2) = 1 – (1/5)² – (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
This weighted Gini is the quality of the split for attribute B.
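As a quick check on the arithmetic, here is a small sketch (helper names are mine) that recomputes the split quality from the class counts implied by the fractions above, i.e. N1 = (5, 2) and N2 = (1, 4):

```python
def gini(counts):
    """Gini impurity from raw class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """Weighted average Gini of the children: sum_i (n_i / n) * Gini(i)."""
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

n1, n2 = [5, 2], [1, 4]                # class counts (C1, C2) in N1 and N2
print(round(gini(n1), 3))              # 0.408
print(round(gini(n2), 3))              # 0.32
print(round(gini_split([n1, n2]), 3))  # 0.371 -> quality of the split on B
```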
Categorical Attributes
Continuous Attributes
Use binary decisions based on one value (splits of the form A ≤ v versus A > v).
Sort the attribute values; the candidate split positions lie between
adjacent distinct values, and each candidate is evaluated to find the best split.
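One way to realize this, as a rough sketch: sort the records on the attribute, take the midpoint between each pair of adjacent distinct values as a candidate threshold, and keep the candidate with the lowest weighted Gini. The data and helper names below are illustrative, not from the lecture.

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def best_numeric_split(values, labels):
    """Sort the values, try the midpoint between adjacent distinct values as
    the split threshold, and return the threshold with the lowest weighted Gini."""
    pairs = sorted(zip(values, labels))
    classes = sorted(set(labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                              # no split between equal values
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lbl for v, lbl in pairs[:i]]
        right = [lbl for v, lbl in pairs[i:]]
        score = sum(len(side) / len(pairs) *
                    gini([side.count(c) for c in classes])
                    for side in (left, right))
        best = min(best, (score, threshold))
    return best                                   # (weighted Gini, threshold)

# Toy data: annual income (in K) vs. class label
income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
label  = ["N", "N", "N", "Y", "Y", "Y", "N", "N", "N", "N"]
print(best_numeric_split(income, label))  # best threshold 97.5, weighted Gini 0.3
```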
Splitting based on impurity
Impurity measures favor attributes with a
large number of categories.
Gain Ratio
The information gain measure tends to prefer attributes with
large numbers of possible categories.
Gain ratio: a modification of the information gain that reduces
its bias toward high-branch attributes.
The intrinsic information (SplitInfo) of a split is
large when the data is evenly spread across the branches,
small when all the data belong to one branch.
Gain ratio takes the number and size of branches into account
when choosing an attribute:
it corrects the information gain by dividing it by the intrinsic
information of the split,
GainRatio(A) = Gain(A) / SplitInfo_A(D), where
SplitInfo_A(D) = – Σ_{j=1..v} (|D_j| / |D|) log₂ (|D_j| / |D|)
(the same definitions hold with S in place of D when the training set is denoted S).
Gain Ratio
Adjusts Information Gain by the entropy of the partitioning
(SplitINFO). Higher entropy partitioning (large number of
small partitions) is penalized!
Used in C4.5
Designed to overcome the bias of information gain toward
attributes with many distinct values.
Example (Play tennis):
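A minimal sketch of the gain-ratio computation on the usual 14-example play-tennis data, restricted to the Outlook attribute and the Play label (the helper names are mine; the values quoted in the comments are the familiar textbook figures):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_and_split_info(attr_values, labels):
    """Information gain of splitting on the attribute, and the SplitInfo
    (entropy of the partition sizes) used by the gain ratio."""
    n = len(labels)
    gain, split_info = entropy(labels), 0.0
    for value, count in Counter(attr_values).items():
        subset = [lbl for v, lbl in zip(attr_values, labels) if v == value]
        gain -= count / n * entropy(subset)
        split_info -= count / n * math.log2(count / n)
    return gain, split_info

# Standard play-tennis data: Outlook attribute and Play label (14 examples).
outlook = ["Sunny", "Sunny", "Overcast", "Rain", "Rain", "Rain", "Overcast",
           "Sunny", "Sunny", "Rain", "Sunny", "Overcast", "Overcast", "Rain"]
play    = ["No", "No", "Yes", "Yes", "Yes", "No", "Yes",
           "No", "Yes", "Yes", "Yes", "Yes", "Yes", "No"]

gain, split_info = gain_and_split_info(outlook, play)
print(round(gain, 3))               # about 0.247
print(round(split_info, 3))         # about 1.577
print(round(gain / split_info, 3))  # gain ratio, about 0.156
```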
More on the gain ratio
“Outlook” still comes out top
However: “ID code” has greater gain ratio
Standard fix: In particular applications we can
use an ad hoc test to prevent splitting on that
type of attribute
Problem with gain ratio: it may overcompensate, i.e., it
may choose an attribute just because its intrinsic
information is very low.
Standard fix (see the sketch below):
• First, only consider attributes with greater-than-average
information gain
• Then, compare them on gain ratio
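A toy sketch of that two-step fix. The gains and gain ratios for the play-tennis attributes are the usual textbook figures, while "Skewed" is a hypothetical attribute invented here to show the overcompensation problem: its split is so unbalanced that a tiny intrinsic information inflates its gain ratio despite a negligible gain.

```python
def select_attribute(gains, gain_ratios):
    """C4.5-style heuristic: keep only attributes whose information gain is at
    least average, then pick the one with the largest gain ratio."""
    avg_gain = sum(gains.values()) / len(gains)
    candidates = [a for a, g in gains.items() if g >= avg_gain]
    return max(candidates, key=lambda a: gain_ratios[a])

# Approximate textbook values for the play-tennis attributes, plus the
# hypothetical "Skewed" attribute (low gain, but very low SplitInfo).
gains       = {"Outlook": 0.247, "Temperature": 0.029, "Humidity": 0.152,
               "Windy": 0.048, "Skewed": 0.020}
gain_ratios = {"Outlook": 0.157, "Temperature": 0.019, "Humidity": 0.152,
               "Windy": 0.049, "Skewed": 0.250}

print(max(gain_ratios, key=gain_ratios.get))  # "Skewed" wins on gain ratio alone
print(select_attribute(gains, gain_ratios))   # "Outlook" after the average-gain filter
```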
Comparing Attribute Selection Measures
The three measures, in general, return good
results but
Information Gain
Biased towards multivalued attributes
Gain Ratio
Tends to prefer unbalanced splits in which one
partition is much smaller than the other
Gini Index
Biased towards multivalued attributes
Has difficulties when the number of classes is
large
Tends to favor tests that result in equal-sized
partitions and purity in both partitions
Stopping Criteria for Tree Induction
Decision Tree Based Classification
Advantages:
Inexpensive to construct
Extremely fast at classifying unknown records
Easy to interpret for small-sized trees
Accuracy is comparable to other classification
techniques for many simple data sets
Example: C4.5
Simple depth-first construction.
Uses the gain ratio (information gain adjusted by SplitINFO)
as the splitting criterion.
Sorts continuous attributes at each node.
Needs the entire data set to fit in memory, so it is
unsuitable for large data sets; scaling it up would require
out-of-core sorting.
Evaluation
Underfitting and Overfitting
Underfitting: when the model is too simple, both training and test errors are large.
Overfitting: when the model is too complex, it fits the details (and noise) of the training set
and fails on the test set.
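The effect is easy to reproduce by varying tree depth: training error keeps falling while test error eventually rises again. A small sketch using scikit-learn with a noisy synthetic dataset (the data and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic data: label noise (flip_y) makes a fully grown tree overfit.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for depth in (1, 3, 5, 10, None):          # None = grow the tree fully
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth,
          round(1 - tree.score(X_tr, y_tr), 3),   # training error
          round(1 - tree.score(X_te, y_te), 3))   # test error
# Very shallow trees underfit (both errors high); very deep trees overfit
# (training error near 0, test error goes back up).
```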
Overfitting due to Noise
How to Address Overfitting:
Tree Pruning
Pre-Pruning (Early Stopping Rule)
Stop the algorithm before it becomes a fully-grown tree
Typical stopping conditions for a node:
• Stop if all instances belong to the same class
• Stop if all the attribute values are the same
More restrictive conditions:
• Stop if number of instances is less than some user-specified threshold
• Stop if the class distribution of the instances is independent of the available features
(e.g., using a χ² test)
• Stop if expanding the current node does not improve the impurity measure (e.g.,
Gini or information gain), or if the improvement falls below a threshold value.
Upon halting, the node becomes a leaf
The leaf is labeled with the most frequent class among the
subset of tuples.
Problem: it is difficult to choose an appropriate threshold.
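In practice these early-stopping rules are usually exposed as hyperparameters of the tree learner. A sketch with scikit-learn, where the threshold values are arbitrary examples rather than recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2,
                           random_state=0)

# Each argument corresponds to one of the early-stopping rules above.
pre_pruned = DecisionTreeClassifier(
    max_depth=5,                 # hard limit on tree depth
    min_samples_split=20,        # "number of instances less than a threshold"
    min_impurity_decrease=0.01,  # "expanding does not improve impurity enough"
    random_state=0,
).fit(X, y)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
print(pre_pruned.get_n_leaves(), full.get_n_leaves())  # far fewer leaves
```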
How to Address Overfitting…
Post-pruning
Grow the decision tree to its full size
Trim the nodes of the decision tree in a
bottom-up fashion
If generalization error improves after
trimming, replace sub-tree by a leaf node.
Class label of leaf node is determined from
majority class of instances in the sub-tree
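Below is a bare-bones sketch of this bottom-up procedure on a hand-rolled dictionary tree (all structure and names here are mine, not the lecture's): a held-out validation set stands in for the generalization-error estimate, and each subtree is replaced by its majority-class leaf whenever the leaf does at least as well.

```python
def predict(node, x):
    """Route one example down the tree to a leaf label."""
    while "label" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["label"]

def errors(node, X, y):
    """Number of validation examples the (sub)tree misclassifies."""
    return sum(predict(node, x) != label for x, label in zip(X, y))

def prune(node, X, y):
    """Bottom-up post-pruning: prune the children first, then replace this
    subtree by a majority-class leaf if that does not hurt validation error."""
    if "label" in node or not X:
        return node
    left_idx = [i for i, x in enumerate(X) if x[node["feature"]] <= node["threshold"]]
    right_idx = [i for i in range(len(X)) if i not in left_idx]
    node["left"] = prune(node["left"], [X[i] for i in left_idx], [y[i] for i in left_idx])
    node["right"] = prune(node["right"], [X[i] for i in right_idx], [y[i] for i in right_idx])
    leaf = {"label": node["majority"]}
    return leaf if errors(leaf, X, y) <= errors(node, X, y) else node

# Tiny hand-built tree; in practice it comes from the growing phase and
# "majority" stores each internal node's majority training class.
tree = {"feature": 0, "threshold": 5.0, "majority": "A",
        "left": {"label": "B"},
        "right": {"feature": 1, "threshold": 2.0, "majority": "A",
                  "left": {"label": "B"}, "right": {"label": "A"}}}
X_val = [[3.0, 0.0], [4.0, 9.0], [6.0, 1.0], [7.0, 3.0], [8.0, 1.5]]
y_val = ["B", "B", "A", "A", "A"]
print(prune(tree, X_val, y_val))
# The right subtree is replaced by the leaf "A"; the root split survives because
# it still beats a single majority-class leaf on the validation set.
```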
Prune the Tree OR Prune the Rule
In order to reduce the complexity of the decision
procedure we have two options: (i) we can prune the
tree first and then derive the rules, or (ii) we can
derive the rules from the full tree and then prune
the rules.
Which is better?
Option (ii) is better. Why? Pruning the rules is more flexible:
each rule can be simplified or dropped independently, without
having to keep the tree structure consistent, so a test can be
removed in one context while being kept in another.