CLASSIFICATION & PREDICTION
WHAT IS CLASSIFICATION &
PREDICTION?
❖ There are two forms of data analysis that can be used to extract models describing
important data classes or to predict future data trends. These two forms are as follows −
➢ Classification
➢ Prediction
❖ Classification models predict categorical (discrete) class labels, while prediction models
predict continuous-valued functions.
❖ For example, we can build a classification model to categorize bank loan applications
as either safe or risky, or a prediction model to predict the expenditures in dollars of
potential customers on computer equipment given their income and occupation.
WHAT IS CLASSIFICATION?
❖ Following are examples of cases where the data analysis task is classification −
➢ A bank loan officer wants to analyze the data in order to know which customers
(loan applicants) are risky and which are safe.
➢ A marketing manager at a company needs to analyze a customer with a given
profile and determine whether that customer will buy a new computer.
❖ In both of the above examples, a model or classifier is constructed to predict the
categorical labels. These labels are risky or safe for loan application data and yes
or no for marketing data.
WHAT IS PREDICTION?
❖ Following are examples of cases where the data analysis task is prediction −
➢ A marketing manager needs to predict how much a given customer will spend on
computer equipment, given the customer's income and occupation. The value being
predicted here is a continuous number rather than a class label.
❖ Using a decision tree, we can visualize the decisions that are made, which makes the
model easy to understand; this is why it is a popular data mining technique.
CLASSIFICATION ANALYSIS
❖ A two-step process is followed to build a classification model.
➢ In the first step, i.e. learning, a classification model is built from the training data.
➢ In the second step, i.e. classification, the accuracy of the model is checked and then
the model is used to classify new data.
➢ The class labels presented here are in the form of discrete values such as “yes” or
“no”, “safe” or “risky”.
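As an illustrative sketch of this two-step process (assuming scikit-learn, a feature matrix X with
known class labels y, and new unlabeled tuples X_new, none of which come from the original slides):

# Step 1 (learning): build a classification model from training data
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)               # learn from the training tuples

# Step 2 (classification): check accuracy on a test set, then classify new data
accuracy = model.score(X_test, y_test)    # fraction of test tuples classified correctly
if accuracy >= 0.80:                      # the "acceptable accuracy" threshold is an assumption
    new_labels = model.predict(X_new)     # classify tuples whose class labels are unknown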
GENERAL APPROACH TO BUILD CLASSIFICATION
TREE
REGRESSION ANALYSIS
❖ Regression analysis is used for the prediction of numeric
attributes.
❖ Numeric attributes are also called continuous values. A model
built to predict continuous values instead of class labels is
called a regression model. In a regression tree, the output at a
leaf node is the mean of all the observed values of that node.
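A minimal sketch of this "mean of the node" idea, using a few made-up continuous target values
that are assumed to fall into one leaf node:

# The prediction at a regression-tree leaf is the mean of the observed
# (training) target values that reach that node.
values_at_node = [12.0, 15.5, 14.0, 13.5]             # hypothetical continuous targets
prediction = sum(values_at_node) / len(values_at_node)
print(prediction)                                      # 13.75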
HOW DOES A DECISION TREE
WORK?
❖ A decision tree is a supervised learning algorithm that works for both discrete and continuous
variables. It splits the dataset into subsets on the basis of the most significant attribute in the
dataset.
❖ How the decision tree identifies this attribute, and how the splitting is done, is decided by
the particular algorithm used.
❖ The most significant predictor is designated as the root node, splitting is done to form sub-
nodes called decision nodes, and the nodes which do not split further are terminal or leaf
nodes.
❖ In the decision tree, the dataset is divided into homogeneous and non-overlapping regions. It
follows a top-down approach, as the top region holds all the observations at a single place and then
splits into two or more branches that further split. This approach is also called a greedy approach, as
it only considers the current node being worked on, without focusing on future nodes.
❖ The decision tree algorithm will continue running until a stopping criterion, such as a
minimum number of observations per node, is reached.
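A simplified sketch of this top-down, greedy loop. The helper functions and node classes below
(stop_condition, best_split, majority_class, Leaf, DecisionNode) are placeholders introduced for
illustration, not names from the original text:

def build_tree(tuples, attributes):
    # Stopping criterion, e.g. too few observations or no attributes left
    if stop_condition(tuples, attributes):
        return Leaf(label=majority_class(tuples))

    # Greedy step: choose the most significant attribute for the current node only
    attribute, partitions = best_split(tuples, attributes)
    node = DecisionNode(attribute)
    for value, subset in partitions.items():
        # Top-down recursion into each branch
        node.children[value] = build_tree(subset, attributes - {attribute})
    return node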
❖ Once a decision tree is built, many nodes may represent outliers or
noisy data. The tree pruning method is applied to remove such unwanted
data. This improves the accuracy of the classification model.
❖ To find the accuracy of the model, a test set consisting of test tuples and
their class labels is used. The percentage of test set tuples correctly
classified by the model gives the accuracy of the model. If the
model is found to be acceptably accurate, it is used to classify the data
tuples for which the class labels are not known.
❖ Some of the decision tree algorithms include Hunt’s Algorithm,
ID3, C4.5, and CART.
EXAMPLE OF CREATING A DECISION
TREE
#1) Learning Step: The training data is fed into the system to be analyzed by a
classification algorithm. In this example, the class label is the attribute
“loan decision”. The model built from this training data is represented in the
form of decision rules.
#2) Classification: The test dataset is fed to the model to check the accuracy of
the classification rules. If the model gives acceptable results, it is applied
to a new dataset with unknown class labels.
DECISION TREE INDUCTION
ALGORITHM
DECISION TREE INDUCTION
❖ Decision tree induction is the method of learning the decision trees from the training
set. The training set consists of attributes and class labels. Applications of decision tree
induction include astronomy, financial analysis, medical diagnosis, manufacturing, and
production.
❖ A decision tree is a flowchart-like tree structure that is built from the training set tuples. The
dataset is broken down into smaller subsets that are represented as the nodes of a tree. The
tree structure has a root node, internal nodes (decision nodes), leaf nodes, and branches.
❖ The root node is the topmost node. It represents the best attribute selected for classification.
Internal nodes, also called decision nodes, represent a test on an attribute of the dataset, while a
leaf node (terminal node) represents the classification or decision label. The branches show the
outcome of the test performed.
❖ Some decision trees only have binary nodes, which means exactly two branches per node,
while some decision trees are non-binary.
The image above shows a decision tree for preventing a heart attack.
ATTRIBUTE SELECTION MEASURES
Attribute selection measures are also known as splitting rules because they determine how
the tuples at a given node are to be split.
An attribute selection measure is a heuristic for selecting the splitting criterion that “best”
separates a given data partition, D, of class-labeled training tuples into individual classes.
Ideally, each resulting partition would be pure, i.e. all the tuples falling into a given partition
would belong to the same class. Conceptually, the “best” splitting criterion is the one that most
closely results in such a scenario.
The following are three popular attribute selection measures—INFORMATION GAIN,
GAIN RATIO, AND GINI INDEX.
• Suppose the class label attribute has m distinct values defining m distinct classes,
Ci (for i = 1, . . . , m), and let Ci,D be the set of tuples of class Ci in D.
• Let |D| and |Ci,D| denote the number of tuples in D and Ci,D, respectively.
INFORMATION GAIN
❖ This is the main measure used to build decision trees. It reduces
the information that is required to classify the tuples and reduces the number
of tests that are needed to classify a given tuple. The attribute with the
highest information gain is selected as the splitting attribute.
❖ ID3 uses Information Gain as its attribute selection measure
❖ The expected information needed to classify a tuple in dataset D is given
by:
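The formula, in its standard form (with m classes and probabilities pi as defined below), is:

Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)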
where pi is the non-zero probability that an arbitrary tuple in D belongs to class Ci and
is estimated by |Ci,D|/|D|.
A log function to the base 2 is used, because the information is encoded in bits.
Info(D) is just the average amount of information needed to identify the class label of a
tuple in D.
Now, suppose we were to partition the tuples in D on some attribute A having v distinct
values, {a₁, a₂, . . . , av }, as observed from the training data.
How much more information would we still need (after the partitioning) to arrive at
an exact classification? InfoA(D) is the expected information required to classify a tuple
from D based on the partitioning by A.
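In its standard form (with Dj denoting the subset of D whose tuples take value aj for A), this is:

Info_A(D) = \sum_{j=1}^{v} (|D_j| / |D|) \times Info(D_j)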
Information gain is defined as the difference between the original information
requirement (i.e., based on just the proportion of classes) and the new requirement (i.e.,
obtained after partitioning on A). That is,
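Gain(A) = Info(D) - Info_A(D)

The attribute A with the highest Gain(A) is chosen as the splitting attribute at the node.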
GINI INDEX
Here pi is the probability that a tuple in D belongs to class Ci. The Gini index measures the
impurity of data partition D, and a corresponding index is calculated for a binary split of
dataset D by attribute A.
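In their standard form, these two measures are:

Gini(D) = 1 - \sum_{i=1}^{m} p_i^2

Gini_A(D) = (|D_1| / |D|) Gini(D_1) + (|D_2| / |D|) Gini(D_2)

where the binary split on A partitions D into D1 and D2; the attribute giving the largest reduction
in impurity, Gini(D) - Gini_A(D), is selected as the splitting attribute.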
WHAT IS TREE
PRUNING?
❖ Pruning is the method of removing the unused branches from the
decision tree. Some branches of the decision tree might represent
outliers or noisy data.
❖ Tree pruning is the method of reducing the unwanted branches of the tree.
This reduces the complexity of the tree and helps in effective
predictive analysis. It reduces overfitting, as it removes the
unimportant branches from the tree.
#1) Pre Pruning: In this approach, the construction of the decision tree is stopped early. It means it
is decided not to further partition the branches. The last node constructed becomes the leaf node
and this leaf node may hold the most frequent class among the tuples.
The attribute selection measures are used to find out the weightage of the splits. Threshold values
are prescribed to decide which splits are regarded as useful. If partitioning a node results in a
split whose measure falls below the threshold, then the process is halted.
#2) Post Pruning: This method removes the outlier branches from a fully grown tree. The
unwanted branches are removed and replaced by a leaf node denoting the most frequent class label.
This technique requires more computation than pre-pruning; however, it is more reliable.
Pruned trees are more precise and compact than unpruned trees, but they can still carry the
disadvantages of replication and repetition.
Repetition occurs when the same attribute is tested again and again along a branch of a tree.
Replication occurs when the duplicate subtrees are present within the tree. These issues can be
solved by multivariate splits.
The image above shows an unpruned tree and the corresponding pruned tree.
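As an illustrative sketch only (the scikit-learn parameter names below are an assumption, not part
of the original slides, and X_train, y_train are assumed training data), pre-pruning can be
approximated with construction-time thresholds and post-pruning with cost-complexity pruning:

from sklearn.tree import DecisionTreeClassifier

# Pre-pruning: stop growing the tree early via thresholds
pre_pruned = DecisionTreeClassifier(max_depth=4,             # limit tree depth
                                    min_samples_split=20)    # minimum observations needed to split
pre_pruned.fit(X_train, y_train)

# Post-pruning: grow the tree fully, then prune it back (cost-complexity pruning)
post_pruned = DecisionTreeClassifier(ccp_alpha=0.01)         # larger alpha => heavier pruning
post_pruned.fit(X_train, y_train)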
EXAMPLE OF DECISION
TREE
Constructing a Decision Tree
Let us take an example of the last 10 days' weather dataset with attributes
outlook, temperature, wind, and humidity. The outcome variable is whether
cricket is played or not. We will use the ID3 algorithm to build the decision
tree.
https://jcsites.juniata.edu/faculty/rhodes/ida/decisionTrees.html
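A small sketch of the ID3 calculation on a toy weather-style dataset (the rows below are made up
for illustration and are not the dataset from the linked example):

import math
from collections import Counter

def entropy(labels):
    # Info(D) = -sum(p_i * log2(p_i)) over the classes present in D
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, attribute, target):
    # Gain(A) = Info(D) - Info_A(D)
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in set(r[attribute] for r in rows):
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += (len(subset) / len(rows)) * entropy(subset)
    return base - remainder

# Hypothetical mini-dataset: outlook vs. playing cricket
data = [
    {"outlook": "sunny",    "play": "no"},
    {"outlook": "sunny",    "play": "no"},
    {"outlook": "overcast", "play": "yes"},
    {"outlook": "rain",     "play": "yes"},
    {"outlook": "rain",     "play": "no"},
]
print(information_gain(data, "outlook", "play"))   # ID3 picks the attribute with the highest gain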
Advantages Of Decision Tree Classification
Enlisted below are the various merits of Decision Tree Classification:
1. Decision tree classification does not require any domain knowledge, hence, it is appropriate for
the knowledge discovery process.
2. The representation of data in the form of a tree is easily understood by humans and it is
intuitive.
3. It can handle multidimensional data.
4. It is a quick process with good accuracy.
Disadvantages Of Decision Tree Classification
Enlisted below are the demerits of Decision Tree Classification:
1. Sometimes decision trees become very complex and these are called overfitted trees.
2. The decision tree algorithm may not return an optimal solution.
3. The decision trees may return a biased solution if some class label dominates the data.
BAYESIAN CLASSIFICATION
❖ Bayesian classification is named after Thomas Bayes, who proposed the Bayes
theorem.
❖ It is a statistical method and a supervised learning method for
classification.
❖ It can solve problems involving both categorical and
continuous valued attributes.
❖ Bayesian classification is used to find conditional probabilities.
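The conditional probability at the heart of the method is given by Bayes’ theorem: for a hypothesis
H (e.g. a class label) and evidence X (an observed tuple),

P(H|X) = P(X|H) P(H) / P(X)

and the classifier predicts the class with the highest posterior probability P(H|X).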
NAIVE BAYES CLASSIFIER EXAMPLE
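A minimal hand-computed sketch of how a naive Bayes classifier scores one tuple; the tiny
buy_computer probabilities below are hypothetical values chosen only to illustrate the calculation:

# Hypothetical training statistics for the class buy_computer = yes / no
prior = {"yes": 9 / 14, "no": 5 / 14}                  # P(class)
likelihood = {                                         # P(attribute value | class)
    ("age=youth",   "yes"): 2 / 9,  ("age=youth",   "no"): 3 / 5,
    ("student=yes", "yes"): 6 / 9,  ("student=yes", "no"): 1 / 5,
}

tuple_to_classify = ["age=youth", "student=yes"]

# Naive assumption: attributes are conditionally independent given the class,
# so P(X | class) is the product of the individual likelihoods.
scores = {}
for c in prior:
    score = prior[c]
    for value in tuple_to_classify:
        score *= likelihood[(value, c)]
    scores[c] = score

print(max(scores, key=scores.get))   # predicted class label ("yes" here)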
RULE BASED
It is characterized by building rules based on an object's attributes.
A rule-based classifier makes use of a set of IF-THEN rules for classification.
We can express a rule in the following form:
IF condition THEN conclusion
Let us consider a rule R1:
R1: IF age = youth AND student = yes THEN buy_computer = yes
The IF part of the rule is called rule antecedent or
precondition. The THEN part of the rule is called rule
consequent (conclusion).
In the antecedent (IF) part, the condition consists of one or more attribute tests, and these tests are
logically ANDed together.
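A small sketch of how such IF-THEN rules can be applied in code; rule R2 and the default class
below are assumptions added for illustration:

def rule_based_classify(tuple_):
    # R1: IF age = youth AND student = yes THEN buy_computer = yes
    if tuple_["age"] == "youth" and tuple_["student"] == "yes":
        return "yes"
    # R2 (hypothetical): IF age = senior AND credit_rating = fair THEN buy_computer = no
    if tuple_["age"] == "senior" and tuple_.get("credit_rating") == "fair":
        return "no"
    # Default class when no rule antecedent is satisfied
    return "no"

print(rule_based_classify({"age": "youth", "student": "yes"}))   # -> yes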
NEURAL NETWORK
The Artificial Neural Network (ANN) bases its assimilation of data on the
way that the human brain processes information. The brain has billions of
cells called neurons that process information in the form of electric
signals. External information, or stimuli, is received, after which the brain
processes it, and then produces an output.
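As a minimal sketch of this stimulus-to-output idea, a single artificial neuron combines weighted
input signals and passes the sum through an activation function (the weights and inputs below are
arbitrary illustrative values):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming "signals", followed by a sigmoid activation
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

print(neuron(inputs=[0.5, 0.2], weights=[0.8, -0.4], bias=0.1))   # output between 0 and 1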
❖ In Professor Arackioraj’s paper, “Applications of neural networks in data mining”, he notes that finding
information that is hidden in data is as difficult as it is important [22]. Neural networks mine data in areas
such as bioinformatics, banking, and retail. Using neural networks, data warehousing organisations can
harvest information from datasets to help users make more informed decisions, thanks to the neural
network’s ability to handle complex relationships, cross-pollination of data, and machine learning.
Neural networks and AI technologies can carry out many business purposes with unstructured data,
from tracking and documenting real-time communications to finding new customers, automating
follow-ups, and flagging warm leads [23].
❖ Until recently, decision-makers had to rely primarily on data extracted from structured, highly
organised data sets, as these are easier to analyse. Unstructured data, like emails and copy, are more
difficult to analyse, and so have gone unutilised or simply ignored. Neural networks can now provide
decision-makers with much deeper insight into the ‘why’ of a customer’s behaviour, which goes
beyond what is provided by more structured data [24].
❖ In healthcare, an example of how neural networks are successfully mining data is shown by Imperial
College London, where ANNs are used to produce optimal patient care recommendations for
patients with sepsis.
PREDICTION METHODS
LINEAR AND NONLINEAR
REGRESSION
❖ It is the simplest form of regression. Linear regression attempts to model the relationship between two variables
by fitting a linear equation (a straight line) to the observed data.
❖ If the outcome is a straight line then it is considered a linear model, and if it is a curved line, then it is a non-linear
model.
❖ The relationship between the dependent variable and the independent variable is given by a straight line, and there
is only one independent variable.
Y = α + βX
❖ The value of 'Y' increases or decreases in a linear manner as the value of 'X' changes.
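A minimal sketch of fitting Y = α + βX, assuming NumPy and a few made-up (X, Y) observations:

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # independent variable
Y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])       # dependent variable (hypothetical values)

# A degree-1 polynomial fit returns the slope (beta) and intercept (alpha)
beta, alpha = np.polyfit(X, Y, 1)
print(alpha, beta)                              # fitted line: Y = alpha + beta * X
print(alpha + beta * 6.0)                       # predicted Y for a new X value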
MULTIPLE LINEAR REGRESSION
❖ Multiple linear regression is an extension of linear regression analysis.
❖ It uses two or more independent variables to predict a single continuous
dependent variable (the outcome).
Y = a0 + a1X1 + a2X2 + ... + akXk + e
where Y is the dependent variable, X1, X2, ..., Xk are the independent variables, a0 is the
intercept, a1, ..., ak are the regression coefficients, and e is the error term.
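A similar sketch for two independent variables, again with made-up data and assuming scikit-learn:

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[25, 40], [32, 55], [47, 65], [51, 70]])   # e.g. income and age (hypothetical)
Y = np.array([120, 180, 250, 270])                        # e.g. expenditure (hypothetical)

model = LinearRegression().fit(X, Y)
print(model.intercept_, model.coef_)       # a0 and (a1, a2)
print(model.predict([[40, 60]]))           # predicted Y for a new tuple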
LOGISTIC REGRESSION
❖ Logistic Regression was used in the biological sciences in the early twentieth century. It was then
used in many social science applications. Logistic Regression is used when the dependent
variable (target) is categorical.
❖ For example, predicting whether a customer will buy a new computer (yes or no), or whether a
loan applicant is safe or risky.
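In its simplest form, the model estimates the probability of one category with the logistic (sigmoid)
function:

P(Y = 1 | X) = 1 / (1 + e^{-(a_0 + a_1 X)})

so the predicted value always lies between 0 and 1 and can be thresholded to give a categorical label.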