ML Unit-I Chapter-I Introduction
By
Mohammed Afzal
Assistant Professor
Computer Science & Engineering Department
Sphoorthy Engineering College
UNIT - I
Chapter-I: Introduction - Well-posed learning problems, designing a
learning system, Perspectives and issues in machine learning
INTRODUCTION:
Computers can be made to learn.
If we could understand how to program them to learn, that is, to improve
automatically with experience, the impact would be dramatic.
Examples:
o Computers learning from medical records which treatments are most
effective for new diseases.
o Houses learning from experience to optimize energy costs based on the
particular usage patterns of their occupants.
o Personal Software Assistants (PSAs) learning the evolving interests of their
users in order to highlight relevant stories from the online morning
newspaper.
A successful understanding of how to make computers learn would open up
many new uses of computers and new levels of competence and
customization.
A detailed understanding of information processing algorithms for machine
learning might lead to a better understanding of human learning abilities
(and disabilities) as well.
We do not yet know how to make computers learn nearly as well as people
do. However, algorithms have been invented that are effective for
certain types of learning tasks.
Examples:
o For problems such as speech recognition, algorithms based on
machine learning outperform all other approaches that have been
attempted to date.
o In data mining, machine learning algorithms are being used routinely
to discover valuable knowledge from large commercial databases
containing equipment maintenance records, loan applications,
financial transactions, medical records, and so on.
As our understanding of computers continues to mature, it seems inevitable
that machine learning will play an increasingly central role in computer
science and computer technology.
The following table summarizes several recent applications of machine
learning.
Learning can be defined broadly to include any computer program that
improves its performance at some class of tasks through experience.
For example, a computer program that learns to play checkers might improve its
performance as measured by its ability to win at the class of tasks involving playing
checkers games, through experience obtained by playing games against itself.
In general, to have a well-defined learning problem, we must identify
these three features: the class of tasks T, the measure of performance to
be improved P, and the source of experience E.
Definition: A computer program is said to learn from experience E with respect
to some class of tasks T and performance measure P, if its performance at tasks
in T, as measured by P, improves with experience E.
A checkers learning problem:
o Task T: playing checkers
o Performance measure P: percent of games won against opponents
o Training experience E: playing practice games against itself
Definition (4) above is recursive: to determine the value V(b) of a particular
board state, it requires searching ahead for the optimal line of play, all the way to
the end of the game. Because this definition is not efficiently computable by our
checkers-playing program, we say that it is a nonoperational definition.
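For reference, the definition of the ideal target function V discussed here is the one
from Mitchell's text (its fourth, recursive clause is the (4) referred to above). Written
out as a math block, it is:

```latex
V(b) =
\begin{cases}
100   & \text{if } b \text{ is a final board state that is won} \\
-100  & \text{if } b \text{ is a final board state that is lost} \\
0     & \text{if } b \text{ is a final board state that is drawn} \\
V(b') & \text{if } b \text{ is not a final state, where } b' \text{ is the best final board state} \\
      & \quad \text{reachable from } b \text{ when both sides play optimally to the end of the game}
\end{cases}
```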
3. Choosing a representation for the Target Function
Now that we have specified the ideal target function V, we must choose a
representation that the learning program will use to describe the function ^V
that it will learn.
To keep the discussion brief, let us choose a simple representation: for any
given board state b, the function ^V will be calculated as a linear combination
of the following board features:
o x1: the number of black pieces on the board
o x2: the number of red pieces on the board
o x3: the number of black kings on the board
o x4: the number of red kings on the board
o x5: the number of black pieces threatened by red (i.e., which can be captured on red's next turn)
o x6: the number of red pieces threatened by black
Thus, our learning program will represent ^V(b) as a linear function of the form
^V(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
where w0 through w6 are numerical coefficients, or weights, to be chosen by the
learning algorithm.
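As a concrete illustration of this representation, the sketch below evaluates the linear
form in Python. This is not from the original notes: the function name v_hat, passing the
six features as a plain list, and the example numbers are all assumptions, and extracting
x1 through x6 from an actual checkers board is omitted.

```python
def v_hat(features, weights):
    """Estimate the value of a board from its six features.

    features: the board features [x1, ..., x6]
    weights:  the learned coefficients [w0, w1, ..., w6]
    """
    value = weights[0]                        # w0: the constant term
    for x_i, w_i in zip(features, weights[1:]):
        value += w_i * x_i                    # add each weighted board feature
    return value

# Example (hypothetical numbers): an opening-like position and untrained weights.
example_features = [12, 12, 0, 0, 1, 2]       # x1..x6
initial_weights = [0.0] * 7                   # w0..w6 all start at zero
print(v_hat(example_features, initial_weights))   # prints 0.0
```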
So far we have chosen the type of training experience, the target function, and its
representation.
The first three items of the overall design (the task T, the performance measure P, and the
training experience E) correspond to the specification of the learning task, whereas the final
two items (the target function and its representation) constitute design choices for the
implementation of the learning program.
4. Choosing an approximation algorithm for the Target Function:
To learn the target function ^V we need a set of training examples, each describing a
specific board state b together with a training value V_train(b) for b. One simple rule for
estimating the training value of an intermediate board state b is to use the current
estimate of the value of its successor:
V_train(b) ← ^V(Successor(b))
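The weights themselves are then adjusted to fit these training values. The function
approximation algorithm used for this step in Mitchell's treatment is the LMS (least mean
squares) weight update rule, and the sketch below is one illustrative way to write it; it
reuses the hypothetical v_hat function from the earlier sketch, and the learning rate
eta = 0.01 is an arbitrary small constant.

```python
def lms_update(features, weights, v_train, eta=0.01):
    """Adjust the weights to reduce the error on one training example.

    features: the six board features [x1, ..., x6] of board b
    weights:  the current weights [w0, ..., w6], modified in place
    v_train:  the training value V_train(b) assigned to this board
    eta:      a small constant controlling the size of each update
    """
    error = v_train - v_hat(features, weights)   # V_train(b) - ^V(b)
    weights[0] += eta * error                    # bias weight w0 (its feature is implicitly 1)
    for i, x_i in enumerate(features, start=1):
        weights[i] += eta * error * x_i          # w_i <- w_i + eta * error * x_i
    return weights
```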
5. Final Design for Checkers Learning system:
The final design of our checkers learning system can be naturally described by
four distinct program modules that represent the central components in many
learning systems.
1. The Performance System: Takes a new board as input and outputs a trace of the
game it played against itself.
2. The Critic: Takes the trace of a game as an input and outputs a set of training
examples of the target function.
3. The Generalizer: Takes training examples as input and outputs a hypothesis
that estimates the target function. Good generalization to new cases is crucial.
4. The Experiment Generator: Takes the current hypothesis (the currently learned
function) as input and outputs a new problem (an initial board state) for the
Performance System to explore.
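To show how the four modules fit together, here is a toy end-to-end training loop in
Python. Everything in it is hypothetical: the boards are faked as random feature vectors,
no real checkers is played, and the function names are mine, not the textbook's. It reuses
the v_hat and lms_update sketches from earlier and only illustrates the flow
Experiment Generator → Performance System → Critic → Generalizer.

```python
import random

def random_features():
    # Stand-in for extracting the six board features from a real board state.
    return [random.randint(0, 12) for _ in range(6)]

def experiment_generator():
    # Proposes a new practice problem; here, just a random "initial board".
    return random_features()

def performance_system(board, weights, moves=10):
    # Plays one (faked) game against itself and returns the trace of board states.
    trace = [board]
    for _ in range(moves):
        trace.append(random_features())   # a real system would choose moves using ^V
    return trace

def critic(trace, weights):
    # Turns the game trace into training examples (features, V_train).
    # Intermediate boards use V_train(b) <- ^V(Successor(b)); the final board gets
    # a faked game outcome, +100 for a win or -100 for a loss.
    examples = [(trace[i], v_hat(trace[i + 1], weights)) for i in range(len(trace) - 1)]
    examples.append((trace[-1], random.choice([100.0, -100.0])))
    return examples

def generalizer(examples, weights):
    # Fits ^V to the training examples with the LMS rule sketched earlier.
    for features, v_train in examples:
        lms_update(features, weights, v_train)
    return weights

weights = [0.0] * 7
for _ in range(100):                      # the overall self-play training loop
    board = experiment_generator()
    trace = performance_system(board, weights)
    examples = critic(trace, weights)
    weights = generalizer(examples, weights)
print(weights)
```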