
ARTIFICIAL INTELLIGENCE
UNIT VI
Learning

• Forms of Learning
• Supervised Learning
• Learning Decision Trees
Learning

Learning is essential for unknown environments,
◦ i.e., when the designer lacks omniscience
Learning is useful as a system construction method,
◦ i.e., expose the agent to reality rather than trying to write it down
Learning modifies the agent's decision mechanisms to improve performance
What is Learning?
“Learning denotes changes in a system that ... enable a system to do the same task more efficiently the next time.” -- Herbert Simon
“Learning is constructing or modifying representations of what is being experienced.” -- Ryszard Michalski
“Learning is making useful changes in our minds.” -- Marvin Minsky
Why Learn?
Understand and improve the efficiency of human learning
◦ Use to improve methods for teaching and tutoring people (e.g., better computer-aided instruction)
Discover new things or structures that are unknown to humans
◦ Example: data mining, Knowledge Discovery in Databases
Fill in skeletal or incomplete specifications about a domain
◦ Large, complex AI systems cannot be completely derived by hand and require dynamic updating to incorporate new information.
◦ Learning new characteristics expands the domain of expertise and lessens the "brittleness" of the system
Build software agents that can adapt to their users, to other software agents, and to the changing environment.
Learning agents
Learning element
Design of a learning element is affected by
◦ Which components of the performance element are to be
learned
◦ What feedback is available to learn these components
◦ What representation is used for the components

Type of feedback:
◦ Supervised learning: correct answers for each example
◦ Unsupervised learning: correct answers not given
◦ Reinforcement learning: occasional rewards
Supervised vs. Unsupervised Learning
Supervised learning: classification is seen as supervised learning from examples.
◦ Supervision: the data (observations, measurements, etc.) are labeled with pre-defined classes. It is as if a “teacher” gives the classes (supervision).
◦ Test data are classified into these classes too.

Unsupervised learning (clustering)
◦ Class labels of the data are unknown
◦ Given a set of data, the task is to establish the existence of classes or clusters in the data
Supervised learning process: two steps
 Learning (training): learn a model using the training data
 Testing: test the model using unseen test data to assess the model accuracy

Accuracy = Number of correct classifications / Total number of test cases
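As a minimal sketch of the testing step (the labels, predictions, and the accuracy() helper below are made up for illustration, not part of the slides):

# Accuracy = number of correct classifications / total number of test cases
def accuracy(predictions, labels):
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

test_labels  = ["wait", "leave", "wait", "wait"]   # hypothetical ground truth
test_predict = ["wait", "leave", "leave", "wait"]  # hypothetical model output
print(accuracy(test_predict, test_labels))         # 0.75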
Decision tree learning is one of the most widely used techniques for classification.
◦ Its classification accuracy is competitive with other methods, and
◦ it is very efficient.

The classification model is a tree, called a decision tree.
Learning decision trees
Problem: decide whether to wait for a table at a restaurant,
based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
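For illustration, one labelled training example for this problem can be written as an attribute-value record. The sketch below uses the attribute names from the list above; the particular values are made up, not taken from the textbook's example table:

# One hypothetical labelled example for the restaurant problem.
# The attributes follow the list above; "class" is the target (WillWait).
example = {
    "Alternate": True, "Bar": False, "Fri/Sat": False, "Hungry": True,
    "Patrons": "Full", "Price": "$", "Raining": False, "Reservation": False,
    "Type": "Thai", "WaitEstimate": "10-30",
    "class": True,   # WillWait = yes for this (made-up) example
}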
Decision tree learning
Aim: find a small tree consistent with the training examples
Idea: (recursively) choose "most significant" attribute as root of (sub)tree
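A compact Python sketch of this recursive idea (not the textbook's exact pseudocode; examples are assumed to be dicts of attribute values with a "class" key for the target, and the attribute selector here is only a placeholder):

def choose_best_attribute(attributes, examples):
    # Placeholder selector: a real implementation ranks attributes by
    # information gain (see the Gain(S, A) sketch later in this unit).
    return attributes[0]

def dtl(examples, attributes, default):
    # Recursive sketch: pick the "most significant" attribute, make it the
    # root of the (sub)tree, and recurse on each value's subset of examples.
    if not examples:
        return default                                   # no data: use parent majority
    classes = [e["class"] for e in examples]
    if len(set(classes)) == 1:
        return classes[0]                                # all examples agree
    majority = max(set(classes), key=classes.count)
    if not attributes:
        return majority                                  # no attributes left: majority vote
    best = choose_best_attribute(attributes, examples)
    tree = {best: {}}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        rest = [a for a in attributes if a != best]
        tree[best][value] = dtl(subset, rest, majority)  # build the subtree
    return tree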
Which Attribute Is the Best Classifier?
Information gain measures how well a given attribute separates the training examples according to their target classification.
In order to define information gain precisely, we first define entropy.
Given a collection S containing positive and negative examples of some target concept, the entropy of S relative to this boolean classification is

Entropy(S) = -p+ log2(p+) - p- log2(p-)

where p+ is the proportion of positive examples in S and p- is the proportion of negative examples.
Entropy Calculation
For a collection S with 9 positive and 5 negative examples:

Entropy(S) = -(9/14) log2(9/14) - (5/14) log2(5/14) ≈ 0.940
Information Gain Measures the Expected Reduction in Entropy

Entropy: impurity in a collection of training examples
Information gain: expected reduction in entropy caused by partitioning the examples according to this attribute

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|Sv| / |S|) · Entropy(Sv)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A has value v
Calculation of Gain(S,A)
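A small sketch of this calculation in Python, using the same dict-of-attributes representation assumed earlier; entropy() and gain() are illustrative names, not from the slides:

# Entropy(S) and Gain(S, A) as defined above; each example is assumed to be
# a dict of attribute values plus a "class" key holding the target label.
from math import log2
from collections import Counter

def entropy(examples):
    counts = Counter(e["class"] for e in examples)
    total = len(examples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gain(examples, attribute):
    # Expected reduction in entropy from partitioning on this attribute
    total = len(examples)
    expected = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]   # S_v
        expected += (len(subset) / total) * entropy(subset)
    return entropy(examples) - expected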
Inductive learning
Simplest form: learn a function from examples

f is the target function
An example is a pair (x, f(x))

Problem: find a hypothesis h
such that h ≈ f
given a training set of examples

(This is a highly simplified model of real learning:
◦ Ignores prior knowledge
◦ Assumes examples are given)
Inductive learning method
Construct/adjust h to agree with f on training set
(h is consistent if it agrees with f on all examples)
E.g., curve fitting (the original slides show several candidate curves of increasing complexity fitted to the same data points):

Ockham’s razor: prefer the simplest hypothesis consistent with data
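A quick sketch of the curve-fitting picture, assuming NumPy is available (the data points below are made up): a straight line and a high-degree polynomial can both be made consistent with the same training points, and Ockham's razor prefers the simpler hypothesis.

# Two hypotheses fitted to the same training points; the simpler one is
# preferred when both are (roughly) consistent with the data.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9, 5.1])    # roughly linear, made-up data

h_simple  = np.polyfit(x, y, deg=1)             # straight line
h_complex = np.polyfit(x, y, deg=5)             # degree-5 polynomial: exact fit

x_new = 6.0                                     # an unseen point
print(np.polyval(h_simple, x_new))              # close to 6: generalizes well
print(np.polyval(h_complex, x_new))             # can swing far from the trend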


Attribute-based
representations
Examples described by attribute values (Boolean, discrete, continuous)

E.g., situations where I will/won't wait for a table:

Classification of examples is positive (T) or negative (F)


Decision trees
One possible representation for hypotheses
E.g., here is the “true” tree for deciding whether to wait:
Expressiveness
Decision trees can express any function of the input attributes.
E.g., for Boolean functions, truth table row → path to leaf:

Trivially, there is a consistent decision tree for any training set, with one path to a leaf for each example (unless f is nondeterministic in x), but it probably won't generalize to new examples

Prefer to find more compact decision trees
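For instance (a tiny illustration, not from the slides), the XOR function's truth table maps directly onto a two-level tree, written here as nested conditionals:

# XOR of two Boolean attributes as a decision tree: each truth-table row
# corresponds to one root-to-leaf path.
def xor_tree(a, b):
    if a:                # branch on attribute A
        return not b     #   A = T: answer is the negation of B
    else:
        return bool(b)   #   A = F: answer is B
for a in (False, True):
    for b in (False, True):
        print(a, b, xor_tree(a, b))   # prints the four truth-table rows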


Choosing an attribute
Idea: a good attribute splits the examples into subsets that are
(ideally) "all positive" or "all negative"

Patrons? is a better choice


Using information theory
To implement Choose-Attribute in the DTL algorithm

Information Content (Entropy):
I(P(v1), …, P(vn)) = Σ_{i=1..n} -P(vi) log2 P(vi)

For a training set containing p positive examples and n negative examples:

I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
Information gain
A chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.

remainder(A) = Σ_{i=1..v} ((pi + ni) / (p + n)) · I(pi/(pi+ni), ni/(pi+ni))

Information Gain (IG) or reduction in entropy from the attribute test:

IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)

Choose the attribute with the largest IG
Information gain
For the training set, p = n = 6, I(6/12, 6/12) = 1 bit

Consider the attributes Patrons and Type (and others too):

IG(Patrons) = 1 - [ (2/12)·I(0, 1) + (4/12)·I(1, 0) + (6/12)·I(2/6, 4/6) ] ≈ 0.541 bits
IG(Type) = 1 - [ (2/12)·I(1/2, 1/2) + (2/12)·I(1/2, 1/2) + (4/12)·I(2/4, 2/4) + (4/12)·I(2/4, 2/4) ] = 0 bits

Patrons has the highest IG of all attributes and so is chosen by the DTL algorithm as the root
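These two figures can be checked directly from the I(·, ·) and remainder(·) definitions above. A small verification sketch follows; the (pi, ni) counts per attribute value are read off the formulas above, and the helper names are illustrative:

# Check IG(Patrons) ≈ 0.541 bits and IG(Type) = 0 bits using the counts above.
from math import log2

def I(p, n):
    # Information content of a split with p positive and n negative examples;
    # terms with a zero count contribute nothing.
    total = p + n
    return sum(-(c / total) * log2(c / total) for c in (p, n) if c > 0)

def remainder(subsets, p, n):
    # subsets: list of (pi, ni) pairs, one per attribute value
    return sum(((pi + ni) / (p + n)) * I(pi, ni) for pi, ni in subsets)

p = n = 6                                                      # 12 examples, I(6/12, 6/12) = 1 bit
print(1 - remainder([(0, 2), (4, 0), (2, 4)], p, n))           # Patrons: ≈ 0.541
print(1 - remainder([(1, 1), (1, 1), (2, 2), (2, 2)], p, n))   # Type: 0.0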
Example contd.
Decision tree learned from the 12 examples:

Substantially simpler than the “true” tree: a more complex hypothesis isn’t justified by a small amount of data
Performance measurement
How do we know that h ≈ f ?
1. Use theorems of computational/statistical learning theory
2. Try h on a new test set of examples
(use the same distribution over example space as the training set)

Learning curve = % correct on test set as a function of training set size
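One way to plot such a curve, sketched here with scikit-learn on synthetic data (both are assumptions; the slides do not prescribe a library or dataset):

# Learning-curve sketch: accuracy on a fixed held-out test set as the
# number of training examples grows.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
for m in range(10, len(X_train) + 1, 20):       # growing training-set sizes
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train[:m], y_train[:m])         # train on the first m examples
    print(m, model.score(X_test, y_test))       # accuracy on unseen test data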


Summary
Learning needed for unknown environments, lazy designers
Learning agent = performance element + learning element
For supervised learning, the aim is to find a simple hypothesis approximately consistent with training examples
Decision tree learning using information gain
Learning performance = prediction accuracy measured on test set
