
Machine Learning 10-701

Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University

January 11, 2011

Today:
• What is machine learning?
• Decision tree learning
• Course logistics

Readings:
• “The Discipline of ML”
• Mitchell, Chapter 3
• Bishop, Chapter 14.4

Machine Learning:
Study of algorithms that
• improve their performance P
• at some task T
• with experience E

well-defined learning task: <P,T,E>
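For example (the canonical illustration from Mitchell's textbook, not text from this slide): T = playing checkers, P = the percentage of games won, and E = experience gained by playing practice games against itself.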

Learning to Predict Emergency C-Sections
[Sims et al., 2000]

9714 patient records, each with 215 features

Learning to detect objects in images

(Prof. H. Schneiderman)

Example training images for each orientation

Learning to classify text documents

Company home page
vs
Personal home page
vs
University home page
vs
…

Reading a noun (vs verb) [Rustandi et al., 2005]

Machine Learning - Practice

[Figure: example application areas — Speech Recognition, Object recognition, Mining Databases, Text analysis, Control learning]

• Supervised learning
• Bayesian networks
• Hidden Markov models
• Unsupervised clustering
• Reinforcement learning
• ....

Machine Learning - Theory


PAC Learning Theory (supervised concept learning) relates:
• # of training examples (m)
• representational complexity (H)
• error rate (ε)
• failure probability (δ)

… also relating:
• # of mistakes during learning
• learner’s query strategy
• convergence rate
• asymptotic performance
• bias, variance

Other theories for:
• Reinforcement skill learning
• Semi-supervised learning
• Active student querying
• …
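For concreteness, one standard PAC result ties the first four quantities together (this is the textbook bound for a finite hypothesis space and a consistent learner, given as an illustration rather than as the exact formula on the original slide): if the learner sees at least

    m ≥ (1/ε) ( ln|H| + ln(1/δ) )

training examples, then with probability at least 1 − δ, every hypothesis in H that is consistent with all m examples has true error at most ε.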

[Figure: machine learning at the intersection of neighboring fields — Computer science, Statistics, Economics and Organizational Behavior, Animal learning (Cognitive science, Psychology, Neuroscience), Adaptive Control Theory, Evolution]

Machine Learning in Computer Science

• Machine learning already the preferred approach to


– Speech recognition, Natural language processing
– Computer vision
– Medical outcomes analysis
– Robot control
– …
[Figure: “ML apps.” shown as a growing niche within “All software apps.”]

• This ML niche is growing


– Improved machine learning algorithms
– Increased data capture, networking, new sensors
– Software too complex to write by hand
– Demand for self-customization to user, environment

Function Approximation and Decision Tree Learning

Function approximation
Problem Setting:
• Set of possible instances X
• Unknown target function f : XY
• Set of function hypotheses H={ h | h : XY }
superscript: ith training example
Input:
• Training examples {<x(i),y(i)>} of unknown target function f

Output:
• Hypothesis h ∈ H that best approximates target function f
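To make this setting concrete, here is a minimal Python sketch (the threshold-rule hypothesis space and the four training points are invented for illustration, not taken from the lecture): learning is a search over H for the hypothesis with the lowest training error.

    # A minimal sketch of the function-approximation setting (illustrative data and H).

    # Training examples {<x(i), y(i)>} drawn from some unknown target function f.
    train = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

    # Hypothesis space H: h_t(x) = 1 if x > t else 0, for a few candidate thresholds t.
    H = [lambda x, t=t: 1 if x > t else 0 for t in (0.5, 1.5, 2.5, 3.5)]

    def training_error(h, data):
        """Fraction of training examples that hypothesis h gets wrong (0-1 loss)."""
        return sum(h(x) != y for x, y in data) / len(data)

    # Output: the h in H that best approximates f on the training sample.
    best_h = min(H, key=lambda h: training_error(h, train))
    print(training_error(best_h, train))  # 0.0 (the threshold t = 2.5 fits perfectly)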

A Decision Tree for F: <Outlook, Humidity, Wind, Temp> → PlayTennis?

Each internal node: test one attribute Xi
Each branch from a node: selects one value for Xi
Each leaf node: predict Y (or P(Y|X ∈ leaf))
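As a sketch of this structure in Python: the splits below follow the standard PlayTennis tree from Mitchell, Chapter 3 — the slide's actual figure did not survive extraction, so treat the particular tree as an assumption.

    # A decision tree as a nested dict: {attribute: {value: subtree-or-leaf}}.
    # Splits follow the textbook PlayTennis tree (assumed; the slide's figure is not in the text).
    play_tennis_tree = {
        "Outlook": {
            "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
            "Overcast": "Yes",
            "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
        }
    }

    def predict(tree, x):
        """Sort instance x down the tree: test the node's attribute, follow the branch, return the leaf."""
        while isinstance(tree, dict):
            attribute = next(iter(tree))          # attribute tested at this internal node
            tree = tree[attribute][x[attribute]]  # branch selected by x's value for that attribute
        return tree                               # leaf: predicted Y

    print(predict(play_tennis_tree, {"Outlook": "Sunny", "Humidity": "High", "Wind": "Weak", "Temp": "Hot"}))  # "No"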

Decision Tree Learning
Problem Setting:
• Set of possible instances X
– each instance x in X is a feature vector
– e.g., <Humidity=low, Wind=weak, Outlook=rain, Temp=hot>
• Unknown target function f : XY
– Y is discrete valued
• Set of function hypotheses H={ h | h : XY }
– each hypothesis h is a decision tree
– trees sorts x to leaf, which assigns y

Decision Tree Learning


Problem Setting:
• Set of possible instances X
– each instance x in X is a feature vector
x = < x1, x2 … xn>
• Unknown target function f : XY
– Y is discrete valued
• Set of function hypotheses H={ h | h : XY }
– each hypothesis h is a decision tree

Input:
• Training examples {<x(i),y(i)>} of unknown target function f
Output:
• Hypothesis h ∈ H that best approximates target function f

Decision Trees
Suppose X = <X1,… Xn>
where Xi are boolean variables

How would you represent Y = X2 ∧ X5 ?   Y = X2 ∨ X5 ?

How would you represent Y = (X2 ∧ X5) ∨ (X3 ∧ X4 ∧ ¬X1) ?
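One way to see the answer, written as nested attribute tests in Python (an illustrative sketch, not the figure from the slide): a conjunction keeps testing variables along its “true” branch, while a disjunction can stop at a positive leaf as soon as one variable is true.

    # Y = X2 AND X5 as a decision tree written as nested attribute tests.
    def y_and(x):
        if x["X2"]:            # root node tests X2
            if x["X5"]:        # then test X5
                return 1       # leaf
            return 0           # leaf
        return 0               # leaf

    # Y = X2 OR X5: a true branch can stop at a positive leaf immediately.
    def y_or(x):
        if x["X2"]:
            return 1           # leaf: X2 alone is enough
        if x["X5"]:
            return 1
        return 0

    print(y_and({"X2": True, "X5": False}), y_or({"X2": True, "X5": False}))  # 0 1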

Top-Down Induction of Decision Trees [ID3, C4.5, Quinlan]

node = Root
Main loop:
1. A ← the “best” decision attribute for the next node
2. Assign A as the decision attribute for node
3. For each value of A, create a new descendant of node
4. Sort training examples to the leaf nodes
5. If training examples are perfectly classified, then STOP; else iterate over the new leaf nodes

Entropy
Entropy H(X) of a random variable X with n possible values (n = # of possible values for X):
H(X) = − Σ_{i=1..n} P(X = i) log2 P(X = i)

H(X) is the expected number of bits needed to encode a randomly drawn value of X (under the most efficient code)

Why? Information theory:
• The most efficient code assigns −log2 P(X = i) bits to encode the message X = i
• So, the expected number of bits to code one random X is:
Σ_{i=1..n} P(X = i) · (−log2 P(X = i)) = H(X)

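As a quick illustrative calculation (not from the slide): a fair coin has H(X) = −0.5 log2 0.5 − 0.5 log2 0.5 = 1 bit, while a heavily biased coin with P(heads) = 0.99 has H(X) = −0.99 log2 0.99 − 0.01 log2 0.01 ≈ 0.08 bits — nearly deterministic outcomes take almost no bits to encode on average.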
Sample Entropy
S is a sample of training examples; p⊕ is the proportion of positive examples in S and p⊖ the proportion of negative examples. Entropy measures the impurity of S:
H(S) = − p⊕ log2 p⊕ − p⊖ log2 p⊖

Entropy
Entropy H(X) of a random variable X:
H(X) = − Σ_i P(X = i) log2 P(X = i)

Specific conditional entropy H(X|Y=v) of X given Y=v:
H(X|Y=v) = − Σ_i P(X = i | Y = v) log2 P(X = i | Y = v)

Conditional entropy H(X|Y) of X given Y:
H(X|Y) = Σ_v P(Y = v) H(X|Y=v)

Mutual information (aka Information Gain) of X and Y:
I(X, Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)

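A small Python sketch makes these definitions concrete (the joint distribution below is made up for illustration, not data from the lecture):

    import math

    # Illustrative joint distribution P(X, Y) over X in {0, 1}, Y in {0, 1}.
    P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

    def H(dist):
        """Entropy in bits of a dict mapping values to probabilities."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Marginals P(X) and P(Y).
    PX = {x: P[(x, 0)] + P[(x, 1)] for x in (0, 1)}
    PY = {y: P[(0, y)] + P[(1, y)] for y in (0, 1)}

    # Conditional entropy H(X|Y) = sum over v of P(Y=v) * H(X | Y=v).
    H_X_given_Y = sum(PY[v] * H({x: P[(x, v)] / PY[v] for x in (0, 1)}) for v in (0, 1))

    # Mutual information I(X, Y) = H(X) - H(X|Y).
    print(H(PX), H_X_given_Y, H(PX) - H_X_given_Y)  # 1.0, ~0.72, ~0.28 bits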
Information Gain is the mutual information between input attribute A and target variable Y

Information Gain is the expected reduction in entropy of target variable Y for data sample S, due to sorting on variable A:
Gain(S, A) = H_S(Y) − H_S(Y | A)
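To connect this to ID3’s split-selection step, here is a short Python sketch that computes Gain(S, A) from a labeled sample and greedily picks the attribute with the largest gain (the toy sample and helper names are mine, not the lecture’s):

    import math
    from collections import Counter

    def entropy(labels):
        """Entropy (in bits) of the empirical label distribution in a sample."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(sample, labels, attribute):
        """Gain(S, A): entropy of Y minus its expected entropy after sorting S on A."""
        n = len(labels)
        groups = {}
        for x, y in zip(sample, labels):
            groups.setdefault(x[attribute], []).append(y)
        remainder = sum(len(ys) / n * entropy(ys) for ys in groups.values())
        return entropy(labels) - remainder

    # Toy sample (invented): which attribute better predicts the label?
    S = [{"Wind": "Weak", "Humidity": "High"},
         {"Wind": "Weak", "Humidity": "Normal"},
         {"Wind": "Strong", "Humidity": "High"},
         {"Wind": "Strong", "Humidity": "Normal"}]
    Y = ["Yes", "Yes", "No", "No"]

    # ID3's greedy choice: split on the attribute with the largest information gain.
    best = max(["Wind", "Humidity"], key=lambda a: information_gain(S, Y, a))
    print(best, information_gain(S, Y, best))  # Wind 1.0 (Humidity would give 0.0)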

Decision Tree Learning Applet

• http://www.cs.ualberta.ca/%7Eaixplore/learning/DecisionTrees/Applet/DecisionTreeApplet.html

Which Tree Should We Output?

• ID3 performs heuristic search through space of decision trees
• It stops at smallest acceptable tree. Why?

Occam’s razor: prefer the simplest hypothesis that fits the data


Why Prefer Short Hypotheses? (Occam’s Razor)

Argument in favor:
• Fewer short hypotheses than long ones
→ a short hypothesis that fits the data is less likely to be a statistical coincidence
→ it is highly probable that some sufficiently complex hypothesis will fit the data by coincidence

Argument opposed:
• There are also fewer hypotheses with a prime number of nodes and attributes beginning with “Z”
• What’s so special about “short” hypotheses?

Reduced-Error Pruning
Split data into training and validation set
Create tree that classifies training set correctly
Then, do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus the subtree below it)
2. Greedily remove the node whose removal most improves validation-set accuracy
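A minimal Python sketch of this procedure, assuming the nested-dict tree representation used in the earlier sketches (helper names are mine, and only the root’s immediate subtrees are considered, so this is an illustration rather than a full implementation):

    from collections import Counter

    def predict(tree, x):
        while isinstance(tree, dict):
            attribute = next(iter(tree))
            tree = tree[attribute][x[attribute]]
        return tree

    def accuracy(tree, data):
        """Fraction of (x, y) pairs the tree classifies correctly."""
        return sum(predict(tree, x) == y for x, y in data) / len(data)

    def majority_label(data):
        return Counter(y for _, y in data).most_common(1)[0][0]

    def reduced_error_prune(tree, validation):
        """Greedily replace the subtree whose replacement by a majority-class leaf
        most improves validation-set accuracy; stop when further pruning is harmful.
        For brevity, only the root's direct children are considered here."""
        attribute = next(iter(tree))
        while True:
            base = accuracy(tree, validation)
            best = None  # (accuracy gain, branch value, leaf label)
            for value, subtree in tree[attribute].items():
                if not isinstance(subtree, dict):
                    continue  # already a leaf
                reaching = [(x, y) for x, y in validation if x[attribute] == value]
                if not reaching:
                    continue
                leaf = majority_label(reaching)
                tree[attribute][value] = leaf        # tentatively prune
                gain = accuracy(tree, validation) - base
                tree[attribute][value] = subtree     # undo for now
                if best is None or gain > best[0]:
                    best = (gain, value, leaf)
            if best is None or best[0] < 0:
                return tree                          # further pruning is harmful
            tree[attribute][best[1]] = best[2]       # keep the best prune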

What you should know:
• Well posed function approximation problems:
– Instance space, X
– Sample of labeled training data { <x(i), y(i)>}
– Hypothesis space, H = { f: XY }

• Learning is a search/optimization problem over H


– Various objective functions
• minimize training error (0-1 loss)
• among hypotheses that minimize training error, select smallest (?)

• Decision tree learning


– Greedy top-down learning of decision trees (ID3, C4.5, ...)
– Overfitting and tree/rule post-pruning
– Extensions…

