Tree-based classifiers are machine learning models that use a decision tree as a predictive model. Decision trees classify instances by starting at the root node and moving through the tree recursively according to a test at each node until a leaf node is reached, which provides the classification or predicted value. Tree-based classifiers are powerful tools for classification and prediction that represent rules in an interpretable way. Building decision trees involves splitting the training data into nodes based on attribute values to create branches until the data is partitioned into distinct target classes.

Tree Based Classifiers

Dinesh R
Principal Engineer,
Samsung, Bangalore
Email: [email protected]
Contents
• Background
• Tree classifiers
• Applications
• Building decision trees
• Entropy and GINI index for tree building
• Tree Pruning
• Challenges
Classifiers
• Bayesian Classifier
• K-Nearest Neighbor classifier
• Decision Trees
• Boosting Classifiers
• SVM
• Neural Networks
Classification: Definition
• Given a collection of records (the training set)
  – Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
  – A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
Illustrating Classification Task
Training Set:

  Tid  Attrib1  Attrib2  Attrib3  Class
  1    Yes      Large    125K     No
  2    No       Medium   100K     No
  3    No       Small    70K      No
  4    Yes      Medium   120K     No
  5    No       Large    95K      Yes
  6    No       Medium   60K      No
  7    Yes      Large    220K     No
  8    No       Small    85K      Yes
  9    No       Medium   75K      No
  10   No       Small    90K      Yes

Test Set:

  Tid  Attrib1  Attrib2  Attrib3  Class
  11   No       Small    55K      ?
  12   Yes      Medium   80K      ?
  13   Yes      Large    110K     ?
  14   No       Small    95K      ?
  15   No       Large    67K      ?

The training set is fed to a learning algorithm (Induction → Learn Model) to produce a model; the model is then applied to the test set (Apply Model → Deduction) to predict the unknown class labels.
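A minimal sketch of this induction/deduction loop, assuming pandas and scikit-learn are available; the data mirrors the tables above, and the one-hot encoding of the categorical attributes is an implementation choice rather than anything specified in the slides.

```python
# Sketch of the induction/deduction workflow above (pandas and scikit-learn assumed).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "Attrib1": ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "Attrib2": ["Large", "Medium", "Small", "Medium", "Large",
                "Medium", "Large", "Small", "Medium", "Small"],
    "Attrib3": [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],   # in thousands
    "Class":   ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})
test = pd.DataFrame({
    "Attrib1": ["No", "Yes", "Yes", "No", "No"],
    "Attrib2": ["Small", "Medium", "Large", "Small", "Large"],
    "Attrib3": [55, 80, 110, 95, 67],
})

X_train = pd.get_dummies(train.drop(columns="Class"))          # encode categorical attributes
y_train = train["Class"]
X_test = pd.get_dummies(test).reindex(columns=X_train.columns, fill_value=0)

model = DecisionTreeClassifier(random_state=0)                 # Induction: learn the model
model.fit(X_train, y_train)
print(model.predict(X_test))                                   # Deduction: apply it to the test set
```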
Classification Using Distance
• Place items in the class to which they are "closest".
• Must determine the distance between an item and a class.
• Classes are represented by
  – Centroid: central value.
  – Medoid: representative point.
  – Individual points
• Algorithm: KNN
K Nearest Neighbor (KNN):
• The training set includes class labels.
• Examine the K items nearest to the item to be classified.
• The new item is placed in the class with the largest number of close items.
• O(q) for each tuple to be classified, where q is the size of the training set.
KNN
Limitations of KNN
• No model is learned
• Classification is extremely slow
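A brief KNN sketch for contrast (scikit-learn assumed; the one-dimensional income-style data is made up for illustration): fitting essentially just stores the training set, which is why there is no real model learning and each prediction has to compare against the stored items.

```python
# KNN sketch, assuming scikit-learn. fit() essentially memorizes the training set
# (no real model is learned), and each prediction is answered by comparing against
# the stored items, which is why classification costs O(q) per tuple.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[60], [70], [75], [85], [90], [95], [100], [120], [125], [220]]  # e.g. income in K
y_train = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.predict([[80], [130]]))   # each query is resolved from the stored training items
```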
Definition

• A decision tree is a classifier in the form of a tree structure
  – Decision node: specifies a test on a single attribute
  – Leaf node: indicates the value of the target attribute
  – Arc/edge: a split on one attribute
  – Path: a conjunction of tests that leads to the final decision (the tree as a whole is a disjunction of these paths)
• Decision trees classify instances or examples by starting at the root of the tree and moving through it until a leaf node is reached.
Why decision tree?

• Decision trees are powerful and popular tools for classification and prediction.
• Decision trees represent rules, which can be understood by humans and used in knowledge systems such as databases.
Key requirements
• Attribute-value description: each object or case must be expressible in terms of a fixed collection of properties or attributes (e.g., hot, mild, cold).
• Predefined classes (target values): the target function has discrete output values (boolean or multiclass).
• Sufficient data: enough training cases must be provided to learn the model.
Example of a Decision Tree
Training data (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        95K             Yes
  6    No      Married         60K             No
  7    Yes     Divorced        220K            No
  8    No      Single          85K             Yes
  9    No      Married         75K             No
  10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)
  Refund = Yes → NO
  Refund = No  → test MarSt
    MarSt = Married            → NO
    MarSt = Single or Divorced → test TaxInc
      TaxInc < 80K  → NO
      TaxInc >= 80K → YES
Another Example of Decision Tree
An alternative tree for the same training data (MarSt at the root):
  MarSt = Married            → NO
  MarSt = Single or Divorced → test Refund
    Refund = Yes → NO
    Refund = No  → test TaxInc
      TaxInc < 80K  → NO
      TaxInc >= 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task
The same workflow as in the earlier illustration, with the learning algorithm now a tree induction algorithm: the training set (Tids 1-10) is used to induce a decision tree (Induction → Learn Model), and the resulting tree is then applied to the test set (Tids 11-15) to predict their class labels (Apply Model → Deduction).
Apply Model to Test Data
Test Data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree:
  Refund = No     → follow the "No" branch to the MarSt node
  MarSt = Married → follow the "Married" branch, which leads to a leaf labeled NO

Assign Cheat to "No".
Decision Tree Classification Task
(The same induction/deduction workflow as shown earlier: the tree learned from the training set, Tids 1-10, is applied to the test set, Tids 11-15, to deduce the unknown class labels.)
Illustration

(1) Which attribute do we start with (the root)?
(2) Which node do we proceed to next?
(3) When do we stop / reach a conclusion?
Random split
• The tree can grow huge
• These trees are hard to understand.
• Larger trees are typically less accurate than
smaller trees.
Decision Tree Induction
• Many Algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
General Structure of Hunt’s Algorithm
• Let Dt be the set of training records that reach a node t
• General procedure:
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  – If Dt is an empty set, then t is a leaf node labeled by the default class yd
  – If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets; recursively apply the procedure to each subset
(The running example again uses the Refund / Marital Status / Taxable Income / Cheat training data, Tids 1-10.)
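A minimal recursive sketch of this procedure in plain Python; the record format, the majority-vote default, and the trivial attribute-selection callback are illustrative choices, not part of the slides. The split-selection criterion is deliberately left pluggable, since Gini- and entropy-based choices are covered later.

```python
# Recursive sketch of Hunt's algorithm. Records are (attribute-dict, class-label) pairs;
# choose_attribute stands in for the split-selection criterion (e.g. Gini or info gain).
def hunt(records, attributes, default_class, choose_attribute):
    labels = [y for _, y in records]
    if not records:                                   # empty Dt -> leaf with the default class yd
        return default_class
    if len(set(labels)) == 1:                         # all records belong to one class yt -> leaf yt
        return labels[0]
    if not attributes:                                # no tests left -> majority class
        return max(set(labels), key=labels.count)
    attr = choose_attribute(records, attributes)      # attribute test used to split Dt
    majority = max(set(labels), key=labels.count)
    subtree = {}
    for value in {x[attr] for x, _ in records}:       # one branch per observed value
        subset = [(x, y) for x, y in records if x[attr] == value]
        rest = [a for a in attributes if a != attr]
        subtree[value] = hunt(subset, rest, majority, choose_attribute)  # recurse on each subset
    return {attr: subtree}

# Toy run: always pick the first remaining attribute (a stand-in for an impurity-based choice).
data = [({"Refund": "Yes", "MarSt": "Single"},  "No"),
        ({"Refund": "No",  "MarSt": "Married"}, "No"),
        ({"Refund": "No",  "MarSt": "Single"},  "Yes")]
print(hunt(data, ["Refund", "MarSt"], "No", lambda recs, attrs: attrs[0]))
```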
Hunt's Algorithm on the example data ("Don't Cheat" is the default class):
  Step 0: All ten records reach the root; since the classes are mixed, the single-node tree labeled "Don't Cheat" must be refined.
  Step 1: Split on Refund.
    Refund = Yes → all such records are "Don't Cheat" → leaf: Don't Cheat
    Refund = No  → still mixed → split further
  Step 2: Within Refund = No, split on Marital Status.
    Married          → Don't Cheat
    Single, Divorced → still mixed → split further
  Step 3: Within Refund = No and Marital Status in {Single, Divorced}, split on Taxable Income.
    Taxable Income < 80K  → Don't Cheat
    Taxable Income >= 80K → Cheat
Tree Induction
• Greedy strategy.
  – Split the records based on an attribute test that optimizes a certain criterion.
• Issues
  – Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  – Determine when to stop splitting
How to Specify Test Condition?
• Depends on attribute types
  – Nominal
  – Ordinal
  – Continuous
• Depends on number of ways to split
  – 2-way split
  – Multi-way split
Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as there are distinct values.
    CarType → {Family} / {Sports} / {Luxury}
• Binary split: divides values into two subsets; need to find the optimal partitioning.
    CarType → {Sports, Luxury} vs {Family}   OR   {Family, Luxury} vs {Sports}
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as there are distinct values.
    Size → {Small} / {Medium} / {Large}
• Binary split: divides values into two subsets; need to find the optimal partitioning.
    Size → {Small, Medium} vs {Large}   OR   {Small} vs {Medium, Large}
• What about the split {Small, Large} vs {Medium}? It does not preserve the order of the attribute values, so it is not a valid split for an ordinal attribute.
Splitting Based on Continuous Attributes
• Different ways of handling
  – Discretization to form an ordinal categorical attribute
    • Static – discretize once at the beginning
    • Dynamic – ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
  – Binary decision: (A < v) or (A >= v)
    • consider all possible splits and find the best cut
    • can be more compute intensive
Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K?  → Yes / No
(ii) Multi-way split: Taxable Income?  → < 10K / [10K, 25K) / [25K, 50K) / [50K, 80K) / > 80K


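A small sketch of the static-discretization option, assuming pandas is available; the bucket edges mirror the multi-way split in the figure, and the income values are taken from the running Taxable Income example.

```python
# Static discretization of Taxable Income into the ordinal buckets from the figure
# (pandas assumed; left-closed, right-open intervals).
import pandas as pd

income = pd.Series([60, 70, 75, 85, 90, 95, 100, 120, 125, 220])   # in thousands
buckets = pd.cut(income,
                 bins=[0, 10, 25, 50, 80, float("inf")],
                 labels=["< 10K", "[10K,25K)", "[25K,50K)", "[50K,80K)", "> 80K"],
                 right=False)
print(buckets.value_counts().sort_index())

# The binary-decision alternative just compares against a single threshold v:
print(income >= 80)    # (A < v) or (A >= v) with v = 80K
```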
Tree Induction
• Greedy strategy.
– Split the records based on an attribute test that
optimizes certain criterion.

• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
How to determine the Best Split
Before splitting: 10 records of class C0, 10 records of class C1.

Candidate test conditions:
  Own Car? (Yes / No):                  Yes: C0: 6, C1: 4    No: C0: 4, C1: 6
  Car Type? (Family / Sports / Luxury): Family: C0: 1, C1: 3   Sports: C0: 8, C1: 0   Luxury: C0: 1, C1: 7
  Student ID? (c1 ... c20):             each child contains a single record (C0: 1, C1: 0 or C0: 0, C1: 1)

Which test condition is the best?
How to determine the Best Split
• Greedy approach:
  – Nodes with a homogeneous class distribution are preferred
• Need a measure of node impurity:
  – C0: 5, C1: 5 → non-homogeneous, high degree of impurity
  – C0: 9, C1: 1 → homogeneous, low degree of impurity
Measures of Node Impurity
• Gini Index

• Entropy

• Misclassification error
How to Find the Best Split
Before splitting: the parent node has class counts C0: N00, C1: N01 and impurity M0.

Split on A? (Yes / No): children N1 (counts N10, N11) and N2 (counts N20, N21) with impurities M1 and M2; their weighted impurity is M12.
Split on B? (Yes / No): children N3 (counts N30, N31) and N4 (counts N40, N41) with impurities M3 and M4; their weighted impurity is M34.

Gain = M0 - M12 vs M0 - M34: choose the attribute test with the larger gain.
Measure of Impurity: GINI
• Gini index for a given node t:

    GINI(t) = 1 - Σ_j [p(j | t)]²

  (NOTE: p(j | t) is the relative frequency of class j at node t.)
  – Maximum (1 - 1/nc) when records are equally distributed among all classes, implying the least interesting information
  – Minimum (0.0) when all records belong to one class, implying the most interesting information

  C1: 0, C2: 6 → Gini = 0.000
  C1: 1, C2: 5 → Gini = 0.278
  C1: 2, C2: 4 → Gini = 0.444
  C1: 3, C2: 3 → Gini = 0.500
Examples for computing GINI
GINI(t) = 1 - Σ_j [p(j | t)]²

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Gini = 1 - P(C1)² - P(C2)² = 1 - 0 - 1 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Gini = 1 - (1/6)² - (5/6)² = 0.278

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Gini = 1 - (2/6)² - (4/6)² = 0.444
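A direct translation of the formula into plain Python, reproducing the three worked examples above.

```python
# Gini index of a node from its class counts: GINI(t) = 1 - sum_j p(j|t)^2
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(round(gini([0, 6]), 3))   # 0.0   (all records in one class)
print(round(gini([1, 5]), 3))   # 0.278
print(round(gini([2, 4]), 3))   # 0.444
```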
Splitting Based on GINI
• Used in CART, SLIQ, SPRINT.
• When a node p is split into k partitions (children), the quality of the split is computed as

    GINI_split = Σ_{i=1..k} (n_i / n) GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.
Binary Attributes: Computing GINI Index
• Splits into two partitions
• Effect of weighting partitions: larger and purer partitions are sought.

  Parent: C1: 6, C2: 6, Gini = 0.500
  Split on B?: Yes → Node N1 (C1: 5, C2: 2), No → Node N2 (C1: 1, C2: 4)

  Gini(N1) = 1 - (5/7)² - (2/7)² = 0.408
  Gini(N2) = 1 - (1/5)² - (4/5)² = 0.320
  Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
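A short check of the weighted-split computation for the counts above, in plain Python (the gini helper is repeated so the block stands alone).

```python
# Weighted Gini of the B? split: GINI_split = sum_i (n_i / n) * GINI(i)
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

n1, n2 = [5, 2], [1, 4]                  # (C1, C2) counts at nodes N1 and N2
print(round(gini(n1), 3))                # 0.408
print(round(gini(n2), 3))                # 0.32
print(round(gini_split([n1, n2]), 3))    # 0.371, down from the parent's 0.5
```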
Categorical Attributes: Computing Gini Index
• For each distinct value, gather counts for each class in the dataset
• Use the count matrix to make decisions

Multi-way split (CarType):
         Family  Sports  Luxury
  C1     1       2       1
  C2     4       1       1
  Gini = 0.393

Two-way split (find the best partition of values):
         {Sports, Luxury}  {Family}          {Family, Luxury}  {Sports}
  C1     3                 1                 2                 2
  C2     2                 4                 5                 1
  Gini = 0.400                               Gini = 0.419
Continuous Attributes: Computing Gini Index
• Use binary decisions based on one value (e.g., Taxable Income > 80K? → Yes / No)
• Several choices for the splitting value
  – Number of possible splitting values = number of distinct values
• Each splitting value v has a count matrix associated with it
  – Class counts in each of the partitions, A < v and A >= v
• Simple method to choose the best v:
  – For each v, scan the database to gather the count matrix and compute its Gini index
  – Computationally inefficient! Repetition of work.
Continuous Attributes: Computing Gini Index...

• For efficient computation: for each attribute,
  – Sort the attribute on its values
  – Linearly scan these values, each time updating the count matrix and computing the Gini index
  – Choose the split position that has the least Gini index

For the Taxable Income attribute of the running example:

  Cheat (sorted by income):  No    No    No    Yes   Yes   Yes   No    No    No    No
  Sorted values:             60    70    75    85    90    95    100   120   125   220
  Split positions:        55    65    72    80    87    92    97    110   122   172   230
  Gini at each position:  0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420

The best split is at Taxable Income <= 97, with Gini = 0.300.
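A sketch of this sorted linear scan in plain Python, using the Taxable Income column and Cheat labels from the running example; it reproduces the Gini values in the table, with the minimum of 0.300 at the threshold near 97 (the exact midpoints differ slightly from the rounded positions printed on the slide).

```python
# Sorted linear scan for the best binary split on Taxable Income (plain Python).
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

values = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]                     # Taxable Income (K)
labels = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]  # Cheat

pairs = sorted(zip(values, labels))
n = len(pairs)
left = {"Yes": 0, "No": 0}                                        # counts for income <= v
right = {"Yes": labels.count("Yes"), "No": labels.count("No")}    # counts for income >  v

# Candidate positions: below the minimum, midpoints between adjacent values, above the maximum.
candidates = ([pairs[0][0] - 5]
              + [(a[0] + b[0]) / 2 for a, b in zip(pairs, pairs[1:])]
              + [pairs[-1][0] + 10])

best, i = None, 0
for v in candidates:
    while i < n and pairs[i][0] <= v:                             # shift records across the threshold
        left[pairs[i][1]] += 1
        right[pairs[i][1]] -= 1
        i += 1
    g = (sum(left.values()) / n * gini(list(left.values()))
         + sum(right.values()) / n * gini(list(right.values())))
    best = min(best, (g, v)) if best else (g, v)
    print(f"split at {v:6.1f}: Gini = {g:.3f}")
print("best split:", best)                                        # (0.3, 97.5)
```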
Alternative Splitting Criteria based on INFO
• Entropy at a given node t:

    Entropy(t) = - Σ_j p(j | t) log₂ p(j | t)

  (NOTE: p(j | t) is the relative frequency of class j at node t.)
  – Measures the homogeneity of a node.
    • Maximum (log nc) when records are equally distributed among all classes, implying the least information
    • Minimum (0.0) when all records belong to one class, implying the most information
  – Entropy-based computations are similar to the GINI index computations
Examples for computing Entropy
Entropy(t) = - Σ_j p(j | t) log₂ p(j | t)

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Entropy = - 0 log₂ 0 - 1 log₂ 1 = - 0 - 0 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Entropy = - (1/6) log₂ (1/6) - (5/6) log₂ (5/6) = 0.65

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Entropy = - (2/6) log₂ (2/6) - (4/6) log₂ (4/6) = 0.92
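The same examples in plain Python, using the convention that 0 · log₂ 0 = 0.

```python
# Entropy of a node from its class counts: Entropy(t) = -sum_j p(j|t) * log2 p(j|t)
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)   # 0*log2(0) treated as 0

print(round(entropy([0, 6]), 2))   # 0.0
print(round(entropy([1, 5]), 2))   # 0.65
print(round(entropy([2, 4]), 2))   # 0.92
```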
Splitting Based on INFO...
• Information Gain:

    GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)

  where parent node p is split into k partitions and n_i is the number of records in partition i.
  – Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
  – Used in ID3 and C4.5
  – Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
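A short sketch of the gain computation in plain Python, applied to the Own Car? split from the earlier "best split" slide (parent 10/10, children 6/4 and 4/6).

```python
# Information gain: GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child_i)
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

# "Own Car?" split from the earlier slide: parent 10/10, children (6, 4) and (4, 6).
print(round(info_gain([10, 10], [[6, 4], [4, 6]]), 3))   # ~0.029
```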
Splitting Based on INFO...
• Gain Ratio:

    GainRATIO_split = GAIN_split / SplitINFO,   where   SplitINFO = - Σ_{i=1..k} (n_i / n) log₂ (n_i / n)

  Parent node p is split into k partitions; n_i is the number of records in partition i.
  – Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
  – Used in C4.5
  – Designed to overcome the disadvantage of Information Gain
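A sketch contrasting information gain with gain ratio on the Car Type? and Student ID? splits from the earlier "best split" slide: SplitINFO penalizes the 20-way Student ID split, so its gain ratio falls below Car Type's even though its raw gain is higher (helpers repeated so the block stands alone).

```python
# Gain ratio: GainRATIO_split = GAIN_split / SplitINFO,
# where SplitINFO = -sum_i (n_i / n) * log2(n_i / n).
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    n = sum(parent)
    split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children)
    return info_gain(parent, children) / split_info

parent = [10, 10]
car_type   = [[1, 3], [8, 0], [1, 7]]            # Family / Sports / Luxury
student_id = [[1, 0]] * 10 + [[0, 1]] * 10       # 20 pure single-record partitions

print(round(info_gain(parent, car_type), 3), round(info_gain(parent, student_id), 3))    # 0.62  1.0
print(round(gain_ratio(parent, car_type), 3), round(gain_ratio(parent, student_id), 3))  # 0.408 0.231
```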
Strengths
• can generate understandable rules
• perform classification without much computation
• can handle continuous and categorical variables
• provide a clear indication of which fields are most important for prediction or classification
Weaknesses
• Not suitable for predicting continuous attributes.
• Perform poorly with many classes and a small amount of data.
• Computationally expensive to train.
– At each node, each candidate splitting field must be sorted before its
best split can be found.
– In some algorithms, combinations of fields are used and a search must
be made for optimal combining weights.
– Pruning algorithms can also be expensive since many candidate sub-
trees must be formed and compared.
• Do not handle non-rectangular regions well.
Questions!!!?
Thank You!
