
Data Mining
Classification: Basic Concepts, Decision Trees, and Model Evaluation

Liaquat Majeed Sheikh
National University of Computer and Emerging Sciences

Special thanks to: Vipin Kumar


Classification: Definition

 Given a collection of records (training set)
– Each record contains a set of attributes; one of the
attributes is the class.
 Find a model for the class attribute as a function
of the values of the other attributes.
 Goal: previously unseen records should be
assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the
model. Usually, the given data set is divided into
training and test sets, with the training set used to build
the model and the test set used to validate it.
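The train/test workflow above can be sketched in a few lines of Python. This is a minimal illustration only: the "model" here is a trivial majority-class stand-in, not a real classifier, and the data is made up.

```python
from collections import Counter

def train_majority(records):
    """'Learn' the most common class in the training set (a trivial stand-in model)."""
    return Counter(cls for _, cls in records).most_common(1)[0][0]

def accuracy(model_label, test_records):
    """Fraction of test records whose true class matches the model's prediction."""
    hits = sum(1 for _, cls in test_records if cls == model_label)
    return hits / len(test_records)

# Toy data: (attributes, class) pairs, split into training and test sets.
data = [((1,), "No"), ((2,), "No"), ((3,), "Yes"),
        ((4,), "No"), ((5,), "No"), ((6,), "Yes")]
train, test = data[:4], data[4:]
model = train_majority(train)        # majority class in the training set
print(model, accuracy(model, test))  # "No" is right on 1 of the 2 test records
```

Any real classifier slots into the same shape: fit on the training set, measure accuracy on the held-out test set.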
Illustrating Classification Task

Training Set:
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

A learning algorithm performs induction on the training set to learn a
model; the model is then applied (deduction) to the test set to classify
its records.
Examples of Classification Task

 Predicting tumor cells as benign or malignant

 Classifying credit card transactions
as legitimate or fraudulent

 Classifying secondary structures of protein
as alpha-helix, beta-sheet, or random coil

 Categorizing news stories as finance,
weather, entertainment, sports, etc.
Classification Techniques

 Decision Tree based Methods


 Rule-based Methods
 Memory based reasoning
 Neural Networks
 Naïve Bayes and Bayesian Belief Networks
 Support Vector Machines
Example of a Decision Tree

Training Data (Refund and Marital Status are categorical, Taxable Income
is continuous, Cheat is the class):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (internal nodes are the splitting attributes):

Refund?
├── Yes → NO
└── No → MarSt?
    ├── Single, Divorced → TaxInc?
    │   ├── < 80K → NO
    │   └── > 80K → YES
    └── Married → NO

Another Example of Decision Tree

The same training data also fits a tree that splits on Marital Status first:

MarSt?
├── Married → NO
└── Single, Divorced → Refund?
    ├── Yes → NO
    └── No → TaxInc?
        ├── < 80K → NO
        └── > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

The same induction/deduction picture as before, with a tree induction
algorithm as the learning algorithm and a decision tree as the learned
model: the training set (Tids 1–10) is fed to the tree induction algorithm
to learn the model, and the model is applied to the test set (Tids 11–15)
to classify its records.
Apply Model to Test Data

Test Data:
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree and follow the branches that match the
test record:

Refund?
├── Yes → NO
└── No → MarSt?
    ├── Single, Divorced → TaxInc?
    │   ├── < 80K → NO
    │   └── > 80K → YES
    └── Married → NO

Refund = No, so take the No branch to MarSt; Marital Status = Married,
so take the Married branch, which is a leaf. Assign Cheat to “No”.
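The walkthrough above can be checked with a tiny hand-coded version of the slides' tree. This is a sketch; the dictionary key names are ones chosen here for illustration, not anything prescribed by the slides.

```python
def classify(record):
    """Walk the slides' decision tree for one test record (a dict)."""
    if record["Refund"] == "Yes":
        return "No"                      # Refund = Yes -> leaf NO
    if record["MaritalStatus"] == "Married":
        return "No"                      # Married -> leaf NO
    # Single or Divorced: split on Taxable Income at 80K
    return "Yes" if record["TaxableIncome"] > 80 else "No"

test_record = {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80}
print(classify(test_record))  # "No", matching the slide's assignment
```

Changing the record to a Single person with income 95K would instead reach the TaxInc split and return "Yes".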
Decision Tree Classification Task

(The induction/deduction diagram is repeated: tree induction on the
training set, Tids 1–10, learns the decision tree; applying the model to
the test set, Tids 11–15, is deduction.)
Decision Tree Induction

 Many algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
General Structure of Hunt’s Algorithm

 Let Dt be the set of training records that reach a node t
 General procedure:
– If Dt contains records that all belong to the same class yt,
then t is a leaf node labeled as yt
– If Dt is an empty set, then t is a leaf node labeled by the
default class, yd
– If Dt contains records that belong to more than one class,
use an attribute test to split the data into smaller subsets.
Recursively apply the procedure to each subset.

(The tax-cheat training data, Tids 1–10, serves as the running example Dt.)
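The three cases above can be sketched recursively. This is a sketch only: the split-selection step is left abstract (any impurity-based chooser fits), and `choose_split` below is a hypothetical helper that simply picks the first attribute that separates the records.

```python
def choose_split(records, attributes):
    """Hypothetical chooser: first attribute that actually separates the records."""
    for attr in attributes:
        if len({a[attr] for a, _ in records}) > 1:
            return attr
    return None

def hunt(records, attributes, default="No"):
    """Recursive sketch of Hunt's algorithm. Records are (attrs_dict, cls) pairs."""
    if not records:                       # empty D_t -> leaf labeled with default class
        return default
    classes = {cls for _, cls in records}
    if len(classes) == 1:                 # pure D_t -> leaf labeled with that class
        return classes.pop()
    attr = choose_split(records, attributes)
    if attr is None:                      # no usable split -> majority-class leaf
        return max(classes, key=lambda c: sum(1 for _, y in records if y == c))
    # Split on the chosen attribute and recurse on each subset.
    values = {a[attr] for a, _ in records}
    return (attr, {v: hunt([r for r in records if r[0][attr] == v],
                           [x for x in attributes if x != attr], default)
                   for v in values})

data = [({"Refund": "Yes"}, "No"),
        ({"Refund": "No"}, "Yes"),
        ({"Refund": "No"}, "Yes")]
print(hunt(data, ["Refund"]))
```

On this toy input the root splits on Refund, with a pure leaf under each branch.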
Hunt’s Algorithm

Applied to the tax-cheat data, the tree grows in stages:

1. Start with a single leaf: Don’t Cheat (the default class).
2. Split on Refund:
   Yes → Don’t Cheat; No → Don’t Cheat (still impure).
3. Refine the No branch by splitting on Marital Status:
   Married → Don’t Cheat; Single, Divorced → Cheat (still impure).
4. Refine the Single/Divorced branch by splitting on Taxable Income:
   < 80K → Don’t Cheat; >= 80K → Cheat.
Tree Induction

 Greedy strategy
– Split the records based on an attribute test
that optimizes a certain criterion.

 Issues
– Determine how to split the records
 How to specify the attribute test condition?
 How to determine the best split?
– Determine when to stop splitting
How to Specify Test Condition?

 Depends on attribute types


– Nominal
– Ordinal
– Continuous

 Depends on number of ways to split


– 2-way split
– Multi-way split
Types of Attributes

 There are different types of attributes


– Nominal
 Examples: ID numbers, eye color, zip codes
– Ordinal
 Examples: rankings (e.g., taste of potato chips on a
scale from 1-10), grades, height in {tall, medium, short}
– Interval
 Examples: calendar dates, temperatures in Celsius or
Fahrenheit.
– Ratio
 Examples: temperature in Kelvin, length, time, counts
Properties of Attribute Values

 The type of an attribute depends on which of the
following properties it possesses:
– Distinctness: =, ≠
– Order: <, >
– Addition: +, −
– Multiplication: *, /

– Nominal attribute: distinctness
– Ordinal attribute: distinctness & order
– Interval attribute: distinctness, order & addition
– Ratio attribute: all 4 properties
Attribute types, examples, and permitted operations:

Nominal  – The values of a nominal attribute are just different names,
           i.e., nominal attributes provide only enough information to
           distinguish one object from another. (=, ≠)
           Examples: zip codes, employee ID numbers, eye color,
           sex: {male, female}
           Operations: mode, entropy, contingency correlation, χ² test

Ordinal  – The values of an ordinal attribute provide enough information
           to order objects. (<, >)
           Examples: hardness of minerals, {good, better, best},
           grades, street numbers
           Operations: median, percentiles, rank correlation,
           run tests, sign tests

Interval – For interval attributes, the differences between values are
           meaningful, i.e., a unit of measurement exists. (+, −)
           Examples: calendar dates, temperature in Celsius or Fahrenheit
           Operations: mean, standard deviation, Pearson's correlation,
           t and F tests

Ratio    – For ratio variables, both differences and ratios are
           meaningful. (*, /)
           Examples: temperature in Kelvin, monetary quantities, counts,
           age, mass, length, electrical current
           Operations: geometric mean, harmonic mean, percent variation
Allowed transformations by attribute level:

Nominal  – Any permutation of values. (If all employee ID numbers were
           reassigned, would it make any difference?)

Ordinal  – An order-preserving change of values, i.e.,
           new_value = f(old_value), where f is a monotonic function.
           (An attribute encompassing the notion of good, better, best
           can be represented equally well by the values {1, 2, 3} or
           by {0.5, 1, 10}.)

Interval – new_value = a * old_value + b, where a and b are constants.
           (Thus, the Fahrenheit and Celsius temperature scales differ
           in terms of where their zero value is and the size of a
           unit (degree).)

Ratio    – new_value = a * old_value. (Length can be measured in
           meters or feet.)
Discrete and Continuous Attributes

 Discrete Attribute
– Has only a finite or countably infinite set of values
– Examples: zip codes, counts, or the set of words in a collection
of documents
– Often represented as integer variables.
– Note: binary attributes are a special case of discrete attributes

 Continuous Attribute
– Has real numbers as attribute values
– Examples: temperature, height, or weight.
– Practically, real values can only be measured and represented
using a finite number of digits.
– Continuous attributes are typically represented as floating-point
variables.
Splitting Based on Nominal Attributes

 Multi-way split: use as many partitions as distinct values.

   CarType? → Family | Sports | Luxury

 Binary split: divides values into two subsets;
need to find the optimal partitioning.

   CarType? → {Sports, Luxury} | {Family}
   OR
   CarType? → {Family, Luxury} | {Sports}
Splitting Based on Ordinal Attributes

 Multi-way split: use as many partitions as distinct values.

   Size? → Small | Medium | Large

 Binary split: divides values into two subsets;
need to find the optimal partitioning.

   Size? → {Small, Medium} | {Large}
   OR
   Size? → {Small} | {Medium, Large}

 What about this split?

   Size? → {Small, Large} | {Medium}

(It groups non-adjacent values, so it does not respect the order.)
Splitting Based on Continuous Attributes

 Different ways of handling
– Discretization to form an ordinal categorical
attribute
 Static – discretize once at the beginning
 Dynamic – ranges can be found by equal-interval
bucketing, equal-frequency bucketing
(percentiles), or clustering.

– Binary decision: (A < v) or (A ≥ v)
 consider all possible splits and find the best cut
 can be more compute-intensive
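As a sketch of static discretization, here are equal-width and equal-frequency bucketing applied to the taxable-income values from the running example. The helper names are made up for illustration.

```python
def equal_width_bins(values, k):
    """Assign each value to one of k equal-width buckets over [min, max]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # min(..., k-1) keeps the maximum value inside the last bucket
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Assign bucket indices so each bucket gets ~len(values)/k values (by rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = rank * k // len(values)
    return bins

incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
print(equal_width_bins(incomes, 4))       # the outlier 220 gets a bucket to itself
print(equal_frequency_bins(incomes, 4))   # every bucket gets 2-3 values
```

Note the contrast: equal-width buckets are skewed by the outlier 220, while equal-frequency buckets stay balanced.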
Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K?  → Yes | No

(ii) Multi-way split: Taxable Income? →
     < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K
Tree Induction

 Greedy strategy
– Split the records based on an attribute test
that optimizes a certain criterion.

 Issues
– Determine how to split the records
 How to specify the attribute test condition?
 How to determine the best split?
– Determine when to stop splitting
How to determine the Best Split

Before splitting: 10 records of class 0, 10 records of class 1.
Three candidate test conditions:

Own Car?     Yes: C0=6, C1=4          No: C0=4, C1=6
Car Type?    Family: C0=1, C1=3       Sports: C0=8, C1=0       Luxury: C0=1, C1=7
Student ID?  c1 … c10: each C0=1, C1=0;  c11 … c20: each C0=0, C1=1

Which test condition is the best?
How to determine the Best Split

 Greedy approach:
– Nodes with homogeneous class distribution
are preferred
 Need a measure of node impurity:

   C0: 5, C1: 5 — non-homogeneous, high degree of impurity
   C0: 9, C1: 1 — homogeneous, low degree of impurity
Measures of Node Impurity

 Gini Index

 Entropy

 Misclassification error
How to Find the Best Split

Before splitting, the node holds C0: N00 and C1: N01 records, with
impurity measure M0. Candidate split A produces nodes N1 (C0: N10,
C1: N11) and N2 (C0: N20, C1: N21) with impurities M1 and M2, which
combine (weighted) to M12. Candidate split B produces N3 and N4 with
impurities M3 and M4, combining to M34.

Gain = M0 – M12  vs.  M0 – M34
Measure of Impurity: GINI

 Gini Index for a given node t:

   GINI(t) = 1 − Σ_j [p(j | t)]²

(NOTE: p(j | t) is the relative frequency of class j at node t.)

– Maximum (1 − 1/nc) when records are equally
distributed among all classes, implying least
interesting information
– Minimum (0.0) when all records belong to one class,
implying most interesting information

   C1: 0, C2: 6 → Gini = 0.000
   C1: 1, C2: 5 → Gini = 0.278
   C1: 2, C2: 4 → Gini = 0.444
   C1: 3, C2: 3 → Gini = 0.500
Examples for computing GINI

   GINI(t) = 1 − Σ_j [p(j | t)]²

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Gini = 1 − (1/6)² − (5/6)² = 0.278

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Gini = 1 − (2/6)² − (4/6)² = 0.444
Splitting Based on GINI

 Used in CART, SLIQ, SPRINT.
 When a node p is split into k partitions (children), the
quality of the split is computed as

   GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)

where n_i = number of records at child i, and
n = number of records at node p.
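These two formulas translate directly into code. A minimal sketch in pure Python, taking per-class record counts rather than raw records:

```python
def gini(counts):
    """Gini index of a node, given its per-class record counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """Weighted Gini of a split, given per-class counts for each child node."""
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

print(round(gini([3, 3]), 3))                  # 0.5: maximally impure 2-class node
print(round(gini([0, 6]), 3))                  # 0.0: pure node
print(round(gini_split([[5, 2], [1, 4]]), 3))  # 0.371: the slides' binary-split example
```

The last call reproduces the weighted value 7/12 × 0.408 + 5/12 × 0.320 from the binary-attribute example.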
Binary Attributes: Computing GINI Index

 Splits into two partitions
 Effect of weighing partitions:
– Larger and purer partitions are sought for.

Parent: C1 = 6, C2 = 6, Gini = 0.500

Split on B:  Node N1 (Yes): C1 = 5, C2 = 2
             Node N2 (No):  C1 = 1, C2 = 4

Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320

Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
Categorical Attributes: Computing Gini Index

 For each distinct value, gather counts for each class in
the dataset
 Use the count matrix to make decisions

Multi-way split:
   CarType: Family | Sports | Luxury
   C1: 1 | 2 | 1      C2: 4 | 1 | 1      Gini = 0.393

Two-way split (find the best partition of values):
   CarType: {Sports, Luxury} | {Family}
   C1: 3 | 1          C2: 2 | 4          Gini = 0.400

   CarType: {Family, Luxury} | {Sports}
   C1: 2 | 2          C2: 1 | 5          Gini = 0.419
Continuous Attributes: Computing Gini Index

 Use binary decisions based on one value
 Several choices for the splitting value
– Number of possible splitting values
= number of distinct values
 Each splitting value v has a count matrix
associated with it
– Class counts in each of the
partitions, A < v and A ≥ v
 Simple method to choose the best v
– For each v, scan the database to
gather the count matrix and compute
its Gini index (e.g., Taxable Income > 80K?)
– Computationally inefficient!
Repetition of work.
Continuous Attributes: Computing Gini Index...

 For efficient computation: for each attribute,
– Sort the attribute on values
– Linearly scan these values, each time updating the count matrix and
computing the gini index
– Choose the split position that has the least gini index

Cheat:           No   No   No   Yes  Yes  Yes  No   No   No   No
Taxable Income:  60   70   75   85   90   95   100  120  125  220   (sorted values)

Candidate split positions (between consecutive values), with Gini:
   55     65     72     80     87     92     97     110    122    172    230
   0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The best split is at 97 (≤ 97 vs. > 97), with Gini = 0.300.
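The sorted linear scan can be sketched as follows: class counts move incrementally from the right partition to the left as the scan advances, so each candidate split costs O(1) extra work. One assumption differs from the slides: candidate thresholds here are midpoints between consecutive distinct values, so the slides' position 97 appears as 97.5.

```python
def best_gini_split(values, labels):
    """Scan sorted (value, label) pairs once, tracking class counts on each side."""
    def gini(counts):
        n = sum(counts.values())
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    pairs = sorted(zip(values, labels))
    classes = set(labels)
    left = {c: 0 for c in classes}
    right = {c: labels.count(c) for c in classes}
    best = (float("inf"), None)
    for i in range(len(pairs) - 1):
        v, y = pairs[i]
        left[y] += 1
        right[y] -= 1                    # move one record to the left partition
        if v == pairs[i + 1][0]:
            continue                     # no threshold between equal values
        nl, nr = i + 1, len(pairs) - i - 1
        g = (nl * gini(left) + nr * gini(right)) / len(pairs)
        threshold = (v + pairs[i + 1][0]) / 2
        best = min(best, (g, threshold))
    return best

incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat   = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_gini_split(incomes, cheat))   # best Gini 0.3 at threshold 97.5
```

This reproduces the table's minimum: Gini = 0.300 for the split between 95 and 100.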
Alternative Splitting Criteria based on INFO

 Entropy at a given node t:

   Entropy(t) = − Σ_j p(j | t) log p(j | t)

(NOTE: p(j | t) is the relative frequency of class j at node t.)

– Measures homogeneity of a node.
 Maximum (log nc) when records are equally distributed
among all classes, implying least information
 Minimum (0.0) when all records belong to one class,
implying most information
– Entropy-based computations are similar to the
GINI index computations
Examples for computing Entropy

   Entropy(t) = − Σ_j p(j | t) log2 p(j | t)

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Entropy = − 0 log 0 − 1 log 1 = − 0 − 0 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Entropy = − (1/6) log2 (1/6) − (5/6) log2 (5/6) = 0.65

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Entropy = − (2/6) log2 (2/6) − (4/6) log2 (4/6) = 0.92
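These worked values can be reproduced with a small helper (a sketch; per-class counts in, bits out):

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a node, given per-class record counts."""
    n = sum(counts)
    # 0 * log(0) is taken as 0, so empty classes contribute nothing
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(round(entropy([0, 6]), 2))  # 0.0
print(round(entropy([1, 5]), 2))  # 0.65
print(round(entropy([2, 4]), 2))  # 0.92
```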
Splitting Based on INFO...

 Information Gain:

   GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)

Parent node p is split into k partitions;
n_i is the number of records in partition i.
– Measures the reduction in entropy achieved because of
the split. Choose the split that achieves the most reduction
(maximizes GAIN).
– Used in ID3 and C4.5
– Disadvantage: tends to prefer splits that result in a large
number of partitions, each being small but pure.
Splitting Based on INFO...

 Gain Ratio:

   GainRATIO_split = GAIN_split / SplitINFO

   SplitINFO = − Σ_{i=1..k} (n_i / n) log (n_i / n)

Parent node p is split into k partitions;
n_i is the number of records in partition i.
– Adjusts Information Gain by the entropy of the
partitioning (SplitINFO). Higher-entropy partitioning
(large number of small partitions) is penalized!
– Used in C4.5
– Designed to overcome the disadvantage of Information
Gain
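As a sketch, here are both criteria applied to the classic weather data's Outlook split: 9 play-yes / 5 play-no records, partitioned into sunny [2, 3], overcast [4, 0], and rainy [3, 2]. The result is close to the commonly quoted gain ratio of 0.157 for Outlook.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    """GAIN_split: Entropy(parent) minus the weighted entropy of the children."""
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    """Information gain divided by SplitINFO, the entropy of the partition sizes."""
    split_info = entropy([sum(ch) for ch in children])
    return info_gain(parent, children) / split_info

outlook = [[2, 3], [4, 0], [3, 2]]            # sunny, overcast, rainy (yes/no counts)
print(round(info_gain([9, 5], outlook), 3))   # 0.247
print(round(gain_ratio([9, 5], outlook), 3))  # 0.156
```

Note how SplitINFO is just the entropy of the partition sizes [5, 4, 5], so the same `entropy` helper is reused for it.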
Weather data
Which attribute to select?

(Figure: four candidate splits, labeled (a)–(d).)
A criterion for attribute selection

 Which is the best attribute?
 The one which will result in the smallest tree
– Heuristic: choose the attribute that produces the
“purest” nodes

 Popular impurity criterion: entropy of nodes
– The lower the entropy, the purer the node.

 Strategy: choose the attribute that results in the lowest
entropy of the children nodes.
Example: attribute “Outlook”
Information gain

 Usually the entropy of a node is not used directly;
rather, the information gain is used.

 Clearly, the greater the information gain, the better the
purity of a node. So, we choose “Outlook” for the root.
Continuing to split
The final decision tree

 Note: not all leaves need to be pure; sometimes identical
instances have different classes
 Splitting stops when data can’t be split any further
Highly-branching attributes
 The weather data with ID code
Tree stump for ID code attribute
Highly-branching attributes

 Subsets are more likely to be pure if there is a
large number of values
– Information gain is biased towards choosing
attributes with a large number of values
– This may result in overfitting (selection of an
attribute that is non-optimal for prediction)
The gain ratio

 Gain ratio: a modification of the information gain


that reduces its bias
 Gain ratio takes number and size of branches
into account when choosing an attribute
– It corrects the information gain by taking the
intrinsic information of a split into account
 Intrinsic information: entropy (with respect to the
attribute in focus) of the node to be split.
Computing the gain ratio
Gain ratios for weather data
More on the gain ratio

 “Outlook” still comes out top but “Humidity” is now a much closer
contender because it splits the data into two subsets instead of
three.

 However: “ID code” still has a greater gain ratio. But its
advantage is greatly reduced.

 Problem with gain ratio: it may overcompensate


– May choose an attribute just because its intrinsic information
is very low
– Standard fix: choose an attribute that maximizes the gain
ratio, provided the information gain for that attribute is at
least as great as the average information gain for all the
attributes examined.
Discussion

 Algorithm for top-down induction of decision trees
(“ID3”) was developed by Ross Quinlan (University of
Sydney, Australia)

 Gain ratio is just one modification of this basic


algorithm
– Led to development of C4.5, which can deal with
numeric attributes, missing values, and noisy data

 There are many other attribute selection criteria! (But


almost no difference in accuracy of result.)
Numerical attributes

 Tests in nodes can be of the form xj > constant


 Divides the space into rectangles.
Predicting Bankruptcy
Considering splits

 The only thing we really need to do differently in our
algorithm is to consider splitting between each data point
in each dimension.
 So, in our bankruptcy domain, we’d consider 9 different
splits in the R dimension
– In general, we’d expect to consider m − 1 splits,
if we have m data points;
– But in our data set we have some examples
with equal R values.
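The "m − 1 splits, minus duplicates" observation can be sketched directly: the thresholds worth testing are the midpoints between consecutive distinct sorted values. The data below is made up for illustration (it is not the bankruptcy data set).

```python
def candidate_splits(values):
    """Midpoints between consecutive distinct values: the thresholds worth testing."""
    distinct = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]

# Ten points, but duplicates reduce the candidates below m - 1 = 9.
r_values = [0.2, 0.2, 0.5, 0.7, 0.9, 0.9, 1.1, 1.3, 1.7, 2.0]
print(candidate_splits(r_values))  # 7 thresholds, not 9
```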
Considering splits II

 And there are another 6 possible splits in the L dimension


– because L is an integer, really, there are lots of duplicate L values.
Bankruptcy Example
Bankruptcy Example

 We consider all the possible splits in each dimension, and
compute the average entropies of the children.
 And we see that, conveniently, all the points with L not greater
than 1.5 are of class 0, so we can make a leaf there.
Bankruptcy Example

 Now, we consider all the splits of the remaining part of space.


 Note that we have to recalculate all the average entropies again,
because the points that fall into the leaf node are taken out of
consideration.
Bankruptcy Example

 Now the best split is at R > 0.9. And we see that all the points for
which that's true are positive, so we can make another leaf.
Bankruptcy Example

 Continuing in this way, we finally obtain:


Splitting Criteria based on Classification Error

 Classification error at a node t:

   Error(t) = 1 − max_i P(i | t)

 Measures the misclassification error made by a node.
 Maximum (1 − 1/nc) when records are equally distributed
among all classes, implying least interesting information
 Minimum (0.0) when all records belong to one class, implying
most interesting information
Examples for Computing Error

   Error(t) = 1 − max_i P(i | t)

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Error = 1 − max(0, 1) = 1 − 1 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3
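This criterion is a one-liner, matching the worked values above (a sketch taking per-class counts):

```python
def classification_error(counts):
    """1 minus the maximum class probability at a node, from per-class counts."""
    n = sum(counts)
    return 1.0 - max(counts) / n

print(classification_error([0, 6]))  # 0.0
print(classification_error([1, 5]))  # 1/6 ~ 0.1667
print(classification_error([2, 4]))  # 1/3 ~ 0.3333
```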
Comparison among Splitting Criteria

For a 2-class problem: (figure comparing the impurity measures as a
function of the fraction of records in one class)

Misclassification Error vs Gini

Parent: C1 = 7, C2 = 3, Gini = 0.42

Split on A:  Node N1 (Yes): C1 = 3, C2 = 0
             Node N2 (No):  C1 = 4, C2 = 3

Gini(N1) = 1 − (3/3)² − (0/3)² = 0
Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489

Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342

Gini improves!!
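The point of the example, that Gini improves while misclassification error does not, can be checked directly with a short sketch over per-class counts (the 0.343 here differs from the slides' 0.342 only because the slides round 0.489 first):

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def error(counts):
    n = sum(counts)
    return 1.0 - max(counts) / n

def weighted(measure, children):
    """Size-weighted average of an impurity measure over the child nodes."""
    n = sum(sum(ch) for ch in children)
    return sum(sum(ch) / n * measure(ch) for ch in children)

parent, children = [7, 3], [[3, 0], [4, 3]]
print(round(gini(parent), 2), round(weighted(gini, children), 3))    # 0.42 -> 0.343
print(round(error(parent), 2), round(weighted(error, children), 3))  # 0.3 -> 0.3 (no gain)
```

So a split can look worthless under classification error yet clearly useful under Gini, which is one reason Gini and entropy are preferred for growing trees.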
Tree Induction

 Greedy strategy
– Split the records based on an attribute test
that optimizes a certain criterion.

 Issues
– Determine how to split the records
 How to specify the attribute test condition?
 How to determine the best split?
– Determine when to stop splitting
Stopping Criteria for Tree Induction

 Stop expanding a node when all the records


belong to the same class

 Stop expanding a node when all the records have


similar attribute values

 Early termination (to be discussed later)


Decision Tree Based Classification

 Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification
techniques for many simple data sets
Example: C4.5

 Simple depth-first construction.


 Uses Information Gain
 Sorts Continuous Attributes at each node.
 Needs entire data to fit in memory.
 Unsuitable for Large Datasets.

– Needs out-of-core sorting.

 You can download the software from:
http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz
