Decision Trees
Decision tree representation
Top Down Construction
Avoiding overfitting: Bottom up pruning
Cost complexity issues (Regularization)
Missing values
Different construction criteria
Credits
Marek A. Perkowski
http://www.ece.pdx.edu/~mperkows/
Padraig Cunningham
http://www.cs.tcd.ie/Padraig.Cunningham/aic
Supplementary material
www
http://dms.irb.hr/tutorial/tut_dtrees.php
http://www.cs.uregina.ca/~dbd/cs831/notes/
ml/dtrees/4_dtrees1.html
Mitchell – Chapter 3
Decision Tree for PlayTennis
Outlook
  Sunny    -> Humidity:  High -> No,  Normal -> Yes
  Overcast -> Yes
  Rain     -> Wind:      Strong -> No,  Weak -> Yes
Decision Tree for PlayTennis
(Same PlayTennis tree as above.)
Each internal node tests an attribute
Each branch corresponds to an attribute value
Each leaf node assigns a classification
Decision Tree for PlayTennis
Outlook   Temperature   Humidity   Wind   PlayTennis
Sunny     Hot           High       Weak   ?

Outlook
  Sunny    -> Humidity:  High -> No,  Normal -> Yes
  Overcast -> Yes
  Rain     -> Wind:      Strong -> No,  Weak -> Yes

Sorting the instance down the tree: Outlook=Sunny, Humidity=High  ->  PlayTennis = No
Decision Tree for Conjunction
Outlook=Sunny ∧ Wind=Weak

Outlook
  Sunny    -> Wind:  Strong -> No,  Weak -> Yes
  Overcast -> No
  Rain     -> No
Decision Tree for Disjunction
Outlook=Sunny ∨ Wind=Weak

Outlook
  Sunny    -> Yes
  Overcast -> Wind:  Strong -> No,  Weak -> Yes
  Rain     -> Wind:  Strong -> No,  Weak -> Yes
Decision Tree for XOR
Outlook=Sunny XOR Wind=Weak
Outlook
  Sunny    -> Wind:  Strong -> Yes,  Weak -> No
  Overcast -> Wind:  Strong -> No,   Weak -> Yes
  Rain     -> Wind:  Strong -> No,   Weak -> Yes
Decision Tree
• Decision trees represent disjunctions of conjunctions

Outlook
  Sunny    -> Humidity:  High -> No,  Normal -> Yes
  Overcast -> Yes
  Rain     -> Wind:      Strong -> No,  Weak -> Yes

(Outlook=Sunny ∧ Humidity=Normal)
∨ (Outlook=Overcast)
∨ (Outlook=Rain ∧ Wind=Weak)
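To make the "disjunction of conjunctions" reading concrete, here is a minimal Python sketch (not part of the original slides) encoding the PlayTennis tree as nested conditionals; the function name and signature are illustrative:

# Each root-to-"Yes"-leaf path of the tree is one conjunct of the disjunction.
def play_tennis(outlook, humidity, wind):
    if outlook == "Sunny":
        return "Yes" if humidity == "Normal" else "No"
    if outlook == "Overcast":
        return "Yes"
    if outlook == "Rain":
        return "Yes" if wind == "Weak" else "No"
    raise ValueError("unknown Outlook value: " + outlook)

assert play_tennis("Sunny", "Normal", "Strong") == "Yes"   # first conjunct
assert play_tennis("Rain", "High", "Weak") == "Yes"        # third conjunct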
When to consider Decision
Trees
Instances describable by attribute-value pairs
Target function is discrete valued
Disjunctive hypothesis may be required
Possibly noisy training data
Missing attribute values
Examples:
Medical diagnosis
Credit risk analysis
Highly non-linear cases
History of Decision Trees
Quinlan developed ID3 in the late 70's
Mid 80's: Breiman et al. developed CART (Classification and Regression Trees)
Early 90's: Quinlan developed C4.5 and later C5.0 (www.rulequest.com)
Classification, Regression and
Clustering trees
Classification trees represent function X -> C with
C discrete (like the decision trees we just saw)
Regression trees predict numbers in leaves
could use a constant (e.g., mean), or linear
regression model, or …
Clustering trees just group examples in leaves
Most (but not all) research in machine learning
focuses on classification trees
Top-Down Induction of
Decision Trees (ID3)
Basic algorithm for TDIDT (a more formal version comes later):
start with the full data set
find the test that partitions the examples as well as possible
("good" = examples with the same class, or otherwise similar examples, should be put together)
for each outcome of the test, create a child node
move examples to the children according to the outcome of the test
repeat the procedure for each child that is not "pure"
Main question: how to decide which test is "best" (see the sketch below)
Also called "recursive partitioning"
Implicit feature selection (decision trees are sometimes used as feature selectors)
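A minimal Python sketch of this recursive-partitioning loop (illustrative only; choose_best_test stands in for whatever purity criterion is used, e.g. the information gain defined below):

from collections import Counter

def build_tree(examples, attributes, choose_best_test):
    """Recursive partitioning (TDIDT) skeleton.

    examples:         list of (attribute_dict, class_label) pairs
    attributes:       attribute names still available for testing
    choose_best_test: function(examples, attributes) -> chosen attribute
    """
    labels = [label for _, label in examples]
    # Stop when the node is "pure" or no tests remain: return a leaf
    # carrying the majority class.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]

    best = choose_best_test(examples, attributes)
    children = {}
    # One child per outcome (value) of the chosen test; move each example
    # to the child matching its outcome and recurse until children are pure.
    for value in {x[best] for x, _ in examples}:
        subset = [(x, y) for x, y in examples if x[best] == value]
        remaining = [a for a in attributes if a != best]
        children[value] = build_tree(subset, remaining, choose_best_test)
    return (best, children)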
Which Attribute is ”best”?
S = [29+, 35-]
  A1=?   True -> [21+, 5-]    False -> [8+, 30-]
  A2=?   True -> [18+, 33-]   False -> [11+, 2-]
Finding the best test
(for classification trees)
For classification trees: find test for which children
are as “pure” as possible
Purity measure borrowed from information theory:
entropy
is a measure of “missing information”; more
precisely, #bits needed to represent the missing
information, on average, using optimal encoding
Given a set S with instances belonging to class i with probability pi:
  Entropy(S) = - Σi pi log2 pi
Entropy
S is a sample of training examples
p+ is the proportion of positive examples
p- is the proportion of negative examples
Entropy measures the impurity of S
Entropy(S) = -p+ log2 p+ - p- log2 p-
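A small Python helper (an illustrative sketch, not part of the slides) computing this two-class entropy:

import math

def entropy(pos, neg):
    """Entropy of a sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:                      # define 0 * log2(0) = 0
            p = count / total
            result -= p * math.log2(p)
    return result

print(entropy(9, 5))    # ~0.940, the PlayTennis root node
print(entropy(29, 35))  # ~0.99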
Entropy
Entropy(S)= expected number of bits needed to encode
class (+ or -) of randomly drawn members of S (under
the optimal, shortest length-code)
Why?
Information theory: an optimal-length code assigns
–log2 p bits to a message having probability p.
So the expected number of bits needed to encode the class
(+ or -) of a random member of S is:
-p+ log2 p+ - p- log2 p-
Information Gain
Gain(S,A): expected reduction in entropy due to
sorting S on attribute A
Gain(S,A) = Entropy(S) - Σv∈values(A) |Sv|/|S| · Entropy(Sv)
Entropy([29+,35-]) = -29/64 log2 29/64 – 35/64 log2 35/64
= 0.99
S = [29+, 35-];   A1: True -> [21+, 5-], False -> [8+, 30-];   A2: True -> [18+, 33-], False -> [11+, 2-]
Information Gain
Entropy([21+, 5-]) = 0.71     Entropy([18+, 33-]) = 0.94
Entropy([8+, 30-]) = 0.74     Entropy([11+, 2-])  = 0.62

Gain(S, A1) = Entropy(S) - 26/64 * Entropy([21+, 5-]) - 38/64 * Entropy([8+, 30-])  = 0.27
Gain(S, A2) = Entropy(S) - 51/64 * Entropy([18+, 33-]) - 13/64 * Entropy([11+, 2-]) = 0.12
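The same computation in Python (an illustrative sketch; the helper names entropy and information_gain are introduced here, not taken from the slides):

import math

def entropy(counts):
    """Entropy of a node given a list of per-class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def information_gain(parent, children):
    """parent: class counts at the node; children: one class-count list per
    outcome of the candidate test."""
    total = sum(parent)
    remainder = sum(sum(ch) / total * entropy(ch) for ch in children)
    return entropy(parent) - remainder

# The A1 / A2 example above:
print(information_gain([29, 35], [[21, 5], [8, 30]]))   # ~0.27 -> prefer A1
print(information_gain([29, 35], [[18, 33], [11, 2]]))  # ~0.12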
Training Examples
Day Outlook Temp. Humidity Wind Play Tennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Weak Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Strong Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
Selecting the Next Attribute
S = [9+, 5-],  E = 0.940

Humidity:  High   -> [3+, 4-], E = 0.985
           Normal -> [6+, 1-], E = 0.592
Gain(S, Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151

Wind:      Weak   -> [6+, 2-], E = 0.811
           Strong -> [3+, 3-], E = 1.0
Gain(S, Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048

Humidity provides greater information gain than Wind w.r.t. the target classification.
Selecting the Next Attribute
S = [9+, 5-],  E = 0.940

Outlook:   Sunny    -> [2+, 3-], E = 0.971
           Overcast -> [4+, 0-], E = 0.0
           Rain     -> [3+, 2-], E = 0.971
Gain(S, Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247
Selecting the Next Attribute
The information gain values for the 4 attributes
are:
• Gain(S,Outlook) =0.247
• Gain(S,Humidity) =0.151
• Gain(S,Wind) =0.048
• Gain(S,Temperature) =0.029
where S denotes the collection of training examples (the sketch below recomputes these values from the table)
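A self-contained sketch (illustrative; the data layout and helper names are assumptions) that recomputes all four gains from the training table above:

import math
from collections import Counter, defaultdict

# The 14 PlayTennis training examples from the table above.
DATA = [
    # Outlook,   Temp,   Humidity, Wind,     PlayTennis
    ("Sunny",    "Hot",  "High",   "Weak",   "No"),
    ("Sunny",    "Hot",  "High",   "Strong", "No"),
    ("Overcast", "Hot",  "High",   "Weak",   "Yes"),
    ("Rain",     "Mild", "High",   "Weak",   "Yes"),
    ("Rain",     "Cool", "Normal", "Weak",   "Yes"),
    ("Rain",     "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Weak",   "Yes"),
    ("Sunny",    "Mild", "High",   "Weak",   "No"),
    ("Sunny",    "Cool", "Normal", "Weak",   "Yes"),
    ("Rain",     "Mild", "Normal", "Strong", "Yes"),
    ("Sunny",    "Mild", "Normal", "Strong", "Yes"),
    ("Overcast", "Mild", "High",   "Strong", "Yes"),
    ("Overcast", "Hot",  "Normal", "Weak",   "Yes"),
    ("Rain",     "Mild", "High",   "Strong", "No"),
]
ATTRS = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def gain(rows, attr_index):
    labels = [r[-1] for r in rows]
    groups = defaultdict(list)
    for r in rows:
        groups[r[attr_index]].append(r[-1])
    remainder = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

for name, idx in ATTRS.items():
    print(f"Gain(S, {name}) = {gain(DATA, idx):.3f}")
# Expected: Outlook 0.247, Humidity 0.151, Wind 0.048, Temperature 0.029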
ID3 Algorithm
[D1, D2, ..., D14]   [9+, 5-]

Outlook
  Sunny    -> Ssunny = [D1, D2, D8, D9, D11]   [2+, 3-]  -> ?
  Overcast -> [D3, D7, D12, D13]               [4+, 0-]  -> Yes
  Rain     -> [D4, D5, D6, D10, D14]           [3+, 2-]  -> ?

Gain(Ssunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0             = 0.970
Gain(Ssunny, Temp.)    = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570
Gain(Ssunny, Wind)     = 0.970 - (2/5)*1.0 - (3/5)*0.918           = 0.019
ID3 Algorithm
Outlook
  Sunny    -> Humidity:  High -> No [D1, D2, D8],   Normal -> Yes [D9, D11]
  Overcast -> Yes [D3, D7, D12, D13]
  Rain     -> Wind:      Strong -> No [D6, D14],    Weak -> Yes [D4, D5, D10]
Inductive bias in TDIDT
H is the power set of instances X. Unbiased?
Bias is a preference for some hypotheses, rather than a
restriction of the hypothesis space H
Preference for short trees, and for those with high
information gain attributes near the root
Occam’s razor: prefer the shortest (simplest) hypothesis that
fits the data
Occam’s Razor
Why prefer short hypotheses?
Argument in favor:
Fewer short hypotheses than long hypotheses
A short hypothesis that fits the data is unlikely to be a
coincidence
A long hypothesis that fits the data might be a coincidence
Argument opposed:
There are many ways to define small sets of hypotheses
E.g., all trees with a prime number of nodes that use attributes beginning with "Z"
What is so special about small sets based on the size of the hypothesis?
Avoiding Overfitting
Phenomenon of overfitting:
keep improving a model, making it better
and better on training set by making it more
complicated …
increases risk of modeling noise and
coincidences in the data set
may actually harm predictive power of
theory on unseen cases
Cf. fitting a curve with too many parameters
Overfitting: Definition
Consider the error of hypothesis h over
  training data: error_train(h)
  the entire distribution D of data: error_D(h)
Hypothesis h ∈ H overfits the training data if there is
an alternative hypothesis h’ ∈ H such that
  error_train(h) < error_train(h’)
and
  error_D(h) > error_D(h’)
Overfitting: example
(Figure: scatter of + and - training examples; an overly specific decision boundary around noisy examples creates an area with probably wrong predictions.)
Overfitting in Decision Tree
Learning
Avoid Overfitting
How can we avoid overfitting?
Stop growing when data split not statistically
significant
Grow full tree then post-prune
Minimum description length (MDL):
Minimize:
size(tree) + size(misclassifications(tree))
Stopping criteria
How do we know when overfitting starts?
a) use a validation set: data not considered for
choosing the best test
when accuracy goes down on validation set: stop adding
nodes to this branch
b) use some statistical test
significance test: e.g., is the change in class distribution still
significant? (χ²-test; a sketch follows after this list)
MDL: minimal description length principle
fully correct theory = tree + corrections for specific
misclassifications
minimize size(f.c.t.) = size(tree) + size(misclassifications(tree))
Cf. Occam’s razor
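As one concrete form of option (b), a sketch using SciPy's chi-squared independence test to ask whether a candidate split still changes the class distribution significantly; the function, the 0.05 threshold, and the table layout are illustrative choices, not from the slides:

from scipy.stats import chi2_contingency

def split_is_significant(children_class_counts, alpha=0.05):
    """children_class_counts: one row of class counts per child node,
    e.g. [[3, 4], [6, 1]] for Humidity = High / Normal on PlayTennis.
    Returns True if the class distribution depends on the split."""
    chi2, p_value, dof, expected = chi2_contingency(children_class_counts)
    return p_value < alpha

print(split_is_significant([[3, 4], [6, 1]]))  # the Humidity split at the root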
Post-pruning trees
After learning the tree: start pruning branches
away
For all nodes in tree:
Estimate effect of pruning tree at this node on
predictive accuracy
e.g. using accuracy on validation set
Prune node that gives greatest improvement
Continue until no improvements
Note : this pruning constitutes a second search in
the hypothesis space
Reduced-Error Pruning
Split data into training and validation set
Do until further pruning is harmful:
Evaluate impact on validation set of pruning each
possible node (plus those below it)
Greedily remove the one that most improves the
validation set accuracy
Produces the smallest version of the most accurate subtree (a sketch of the loop follows below)
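A minimal sketch of reduced-error pruning for the (attribute, children-dict) tree representation used in the earlier sketches. Note this is a simplified bottom-up variant (each subtree is replaced by its majority leaf when the leaf makes no more validation errors on the examples reaching that node), rather than the exact greedy remove-the-best-node-per-pass loop described above:

from collections import Counter

# A leaf is a class label; an internal node is (attribute, {value: subtree}).

def predict(tree, x, default="Yes"):
    # `default` handles attribute values unseen during training (an assumption).
    while isinstance(tree, tuple):
        attribute, children = tree
        tree = children.get(x.get(attribute), default)
    return tree

def leaf_labels(tree):
    if not isinstance(tree, tuple):
        return [tree]
    return [label for sub in tree[1].values() for label in leaf_labels(sub)]

def reduced_error_prune(tree, validation):
    """`validation`: list of (attribute_dict, label) examples reaching this node."""
    if not isinstance(tree, tuple):
        return tree
    attribute, children = tree
    # First prune the children, routing validation examples down the branches.
    for value in list(children):
        subset = [(x, y) for x, y in validation if x.get(attribute) == value]
        children[value] = reduced_error_prune(children[value], subset)
    # Then consider replacing this whole node by its majority-class leaf.
    majority = Counter(leaf_labels(tree)).most_common(1)[0][0]
    subtree_errors = sum(predict(tree, x) != y for x, y in validation)
    leaf_errors = sum(majority != y for _, y in validation)
    return majority if leaf_errors <= subtree_errors else tree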
Effect of Reduced Error
Pruning
Rule Post-Pruning
Convert the tree to an equivalent set of rules
Prune each rule independently of the others
Sort the final rules into a desired sequence for use
Method used in C4.5
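A sketch of the first step (converting a tree to rules) for the tuple-based tree representation used earlier; the rule pruning and rule ordering done in C4.5 are not shown:

def tree_to_rules(tree, conditions=()):
    """Yield (conditions, class_label) pairs, one per root-to-leaf path.
    `tree` is a leaf label or (attribute, {value: subtree}) as before."""
    if not isinstance(tree, tuple):
        yield list(conditions), tree
        return
    attribute, children = tree
    for value, subtree in children.items():
        yield from tree_to_rules(subtree, conditions + ((attribute, value),))

play_tennis_tree = ("Outlook", {
    "Sunny":    ("Humidity", {"High": "No", "Normal": "Yes"}),
    "Overcast": "Yes",
    "Rain":     ("Wind", {"Strong": "No", "Weak": "Yes"}),
})
for conds, label in tree_to_rules(play_tennis_tree):
    body = " AND ".join(f"{a}={v}" for a, v in conds)
    print(f"If {body} Then PlayTennis={label}")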
Converting a Tree to Rules
Outlook
  Sunny    -> Humidity:  High -> No,  Normal -> Yes
  Overcast -> Yes
  Rain     -> Wind:      Strong -> No,  Weak -> Yes

R1: If (Outlook=Sunny) ∧ (Humidity=High)   Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast)                  Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong)      Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak)        Then PlayTennis=Yes
Continuous Valued Attributes
Create a discrete attribute to test the continuous one:
  Temperature = 24.5°C
  (Temperature > 20.0°C) = {true, false}
Where to set the threshold?

Temperature   15°C   18°C   19°C   22°C   24°C   27°C
PlayTennis    No     No     Yes    Yes    Yes    No

(see the paper by [Fayyad, Irani 1993])
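A small illustrative sketch of the usual approach: sort by the attribute, consider thresholds at midpoints between adjacent examples of different classes, and score each candidate by information gain (helper names are assumptions, not from the slides):

import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def best_threshold(values, labels):
    """Return (threshold, gain) maximizing information gain for a binary
    test `value > threshold` on a continuous attribute."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, -1.0)
    for (v1, y1), (v2, y2) in zip(pairs, pairs[1:]):
        if y1 == y2 or v1 == v2:
            continue                      # only cut at class boundaries
        threshold = (v1 + v2) / 2
        left = [y for v, y in pairs if v <= threshold]
        right = [y for v, y in pairs if v > threshold]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best[1]:
            best = (threshold, gain)
    return best

temps = [15, 18, 19, 22, 24, 27]
play = ["No", "No", "Yes", "Yes", "Yes", "No"]
print(best_threshold(temps, play))  # candidate cuts are 18.5 and 25.5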
Attributes with different
costs
Tests may have different costs
e.g. medical diagnosis: blood test, visual examination,
… have different costs
Robotics: width_from_one_feet has cost 23 secs.
try to find tree with low expected cost
instead of low expected number of tests
alternative heuristics, taking cost into account, have been proposed
Replace Gain by:
  Gain²(S,A) / Cost(A)   [Tan, Schlimmer 1990]
  (2^Gain(S,A) − 1) / (Cost(A) + 1)^w ,  with w ∈ [0,1]   [Nunez 1988]
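The two cost-sensitive criteria as tiny Python helpers (illustrative; the gain and cost values would come from whatever gain and cost computations are in use):

def tan_schlimmer(gain, cost):
    """Gain^2(S,A) / Cost(A)   [Tan, Schlimmer 1990]"""
    return gain ** 2 / cost

def nunez(gain, cost, w=0.5):
    """(2^Gain(S,A) - 1) / (Cost(A) + 1)^w,  w in [0,1]   [Nunez 1988]"""
    return (2 ** gain - 1) / (cost + 1) ** w

# E.g., comparing a cheap low-gain test against an expensive higher-gain one:
print(tan_schlimmer(gain=0.15, cost=1.0), tan_schlimmer(gain=0.25, cost=23.0))
print(nunez(gain=0.15, cost=1.0), nunez(gain=0.25, cost=23.0))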
Attributes with many Values
Problem: if an attribute has many values, maximizing
InformationGain will select it.
E.g.: imagine using Date=27.3.2002 as an attribute;
it perfectly splits the data into subsets of size 1.
A solution:
Use GainRatio instead of information gain as the splitting criterion:
  GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
  SplitInformation(S,A) = - Σi=1..c |Si|/|S| log2 (|Si|/|S|)
where Si is the subset of S for which attribute A has value vi
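A sketch of GainRatio built on the same entropy helper as before (illustrative):

import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def gain_ratio(parent, children):
    """parent: class counts at the node; children: per-branch class counts."""
    total = sum(parent)
    remainder = sum(sum(ch) / total * entropy(ch) for ch in children)
    gain = entropy(parent) - remainder
    # SplitInformation: entropy of the branch sizes themselves.
    split_info = entropy([sum(ch) for ch in children])
    return gain / split_info if split_info else 0.0

# Outlook on the PlayTennis data: branches Sunny / Overcast / Rain.
print(gain_ratio([9, 5], [[2, 3], [4, 0], [3, 2]]))  # ~0.157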
Unknown Attribute Values
What if some examples have missing values of attribute A?
Use the training example anyway and sort it through the tree. If node n tests A:
  Assign it the most common value of A among the other examples sorted to node n, or
  Assign it the most common value of A among the other examples with the same target value, or
  Assign probability pi to each possible value vi of A and pass a fraction pi of the example
  down to each descendant in the tree
Classify new examples with missing values in the same fashion.
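A minimal sketch (an assumption-laden illustration, not C4.5's actual implementation) of the last option: when the tested attribute is missing, send a fraction pi of the example down each branch and sum the class weights at the leaves:

from collections import defaultdict

def classify_with_missing(tree, x, branch_freq, weight=1.0, result=None):
    """tree: leaf label or (attribute, {value: subtree});
    branch_freq[(attribute, value)]: fraction pi of training examples taking
    that branch (assumed to have been recorded while building the tree)."""
    if result is None:
        result = defaultdict(float)
    if not isinstance(tree, tuple):
        result[tree] += weight
        return result
    attribute, children = tree
    if x.get(attribute) in children:        # value known: follow one branch
        classify_with_missing(children[x[attribute]], x, branch_freq, weight, result)
    else:                                   # value missing: follow all branches
        for value, subtree in children.items():
            pi = branch_freq.get((attribute, value), 1.0 / len(children))
            classify_with_missing(subtree, x, branch_freq, weight * pi, result)
    return result

play_tennis_tree = ("Outlook", {
    "Sunny":    ("Humidity", {"High": "No", "Normal": "Yes"}),
    "Overcast": "Yes",
    "Rain":     ("Wind", {"Strong": "No", "Weak": "Yes"}),
})
freq = {("Outlook", "Sunny"): 5/14, ("Outlook", "Overcast"): 4/14, ("Outlook", "Rain"): 5/14}
# Outlook is missing here, so the example is split 5/14 : 4/14 : 5/14.
print(classify_with_missing(play_tennis_tree, {"Humidity": "High", "Wind": "Weak"}, freq))
# -> roughly {'Yes': 0.64, 'No': 0.36}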