
Practical: Week 5

Aim:

Apply a Decision Tree classifier machine learning model to decide whether to play cricket
or not under given conditions.

Theory:

A Decision Tree is a supervised learning technique that can be used for both classification
and regression problems, though it is mostly preferred for classification. It is a tree-
structured classifier in which internal nodes represent the features of a dataset, branches
represent the decision rules, and each leaf node represents an outcome. A decision tree has
two kinds of nodes: decision nodes and leaf nodes. Decision nodes are used to make a decision
and have multiple branches, whereas leaf nodes are the outputs of those decisions and contain
no further branches. The decisions or tests are performed on the basis of the features of the
given dataset. A decision tree is a graphical representation of all the possible solutions to a
problem/decision based on given conditions.
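The two kinds of nodes can be sketched in code. Below, a decision node is a nested dict that names a feature and its branches, while a leaf node is just the final outcome; the specific tree shown (splitting on outlook, then humidity or windy) is an illustrative assumption, not output from the program later in this practical.

```python
# Decision node: {"feature": ..., "branches": {...}}; leaf node: a plain string.
tree = {
    "feature": "outlook",
    "branches": {
        "overcast": "yes",                     # leaf node: no further branches
        "sunny": {                             # decision node: branches again
            "feature": "humidity",
            "branches": {"high": "no", "normal": "yes"},
        },
        "rainy": {
            "feature": "windy",
            "branches": {"true": "no", "false": "yes"},
        },
    },
}

def predict(node, sample):
    """Walk from the root, following the branch that matches the sample's
    feature value, until a leaf (a plain string) is reached."""
    while isinstance(node, dict):
        node = node["branches"][sample[node["feature"]]]
    return node

print(predict(tree, {"outlook": "sunny", "humidity": "high"}))  # no
```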
In a decision tree, to predict the class of a given record, the algorithm starts from the root
node of the tree. It compares the value of the root attribute with the corresponding attribute
of the record and, based on the comparison, follows the matching branch and jumps to the next
node. At the next node the algorithm again compares the attribute value with the sub-nodes and
moves further down. The process continues until it reaches a leaf node of the tree. The
complete process can be better understood using the algorithm below:

 Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
 Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
 Step-3: Divide S into subsets that contain the possible values of the best attribute.
 Step-4: Generate the decision tree node that contains the best attribute.
 Step-5: Recursively build new subtrees using the subsets of the dataset created in Step-3.
Continue this process until a stage is reached where the nodes cannot be classified any
further; such final nodes are the leaf nodes.
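The ASM in Step-2 can be, for example, information gain based on entropy. A minimal sketch of that measure, evaluated on illustrative counts from the classic 14-row play-cricket table (9 "yes" / 5 "no", which is an assumption about the dataset used in this practical):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction obtained by splitting the labels on one feature."""
    total = len(labels)
    subsets = {}
    for value, label in zip(feature_values, labels):
        subsets.setdefault(value, []).append(label)
    weighted = sum(len(s) / total * entropy(s) for s in subsets.values())
    return entropy(labels) - weighted

# Illustrative values from the classic play-cricket dataset (an assumption):
outlook = ['sunny'] * 5 + ['overcast'] * 4 + ['rainy'] * 5
play = (['no', 'no', 'yes', 'yes', 'no']      # sunny: 2 yes, 3 no
        + ['yes'] * 4                          # overcast: 4 yes
        + ['yes', 'yes', 'no', 'yes', 'no'])  # rainy: 3 yes, 2 no

print(round(information_gain(outlook, play), 3))  # 0.247
```

The attribute with the highest gain (here, outlook) becomes the decision node in Step-4.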

There are various algorithms in machine learning, so choosing the best algorithm for the given
dataset and problem is a key point to remember when creating a machine learning model. Below
are two reasons for using a decision tree:

 Decision trees mimic the way humans think while making a decision, so they are easy to
understand.
 The logic behind a decision tree can be easily followed because it shows a tree-like
structure.
Program:

import pandas as pd
df = pd.read_csv('PlayCricket.csv')
df

# let us convert the column data into numeric.
# This is done with LabelEncoder.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()

# apply the label encoder on all columns.
# The following conversion takes place:
''' Outlook --> 0 Overcast, 1 Rainy, 2 Sunny
Temperature-> 0 Cool, 1 Hot, 2 Mild
Humidity --> 0 High, 1 Normal
Windy --> 0 False, 1 True
Play Cricket --> 0 No, 1 Yes
'''
df['outlook'] = le.fit_transform(df['outlook'])
df['temp'] = le.fit_transform(df['temp'])
df['humidity'] = le.fit_transform(df['humidity'])
df['windy'] = le.fit_transform(df['windy'])
df['play'] = le.fit_transform(df['play'])
df
# divide the data into x and y
x = df.drop(['play'], axis='columns')
y = df['play']

# create the DecisionTreeClassifier model

from sklearn.tree import DecisionTreeClassifier

# default criterion='gini'; criterion='entropy' can be used as well.

model = DecisionTreeClassifier()
model.fit(x, y)

# predict whether to play cricket or not for the following data:


# today = (Outlook=Sunny, Temperature=Hot, Humidity=High, Windy=False)
model.predict([[2, 1, 0, 0]])  # array([0]) --> No
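Since PlayCricket.csv may not be at hand, the same program can be sketched self-contained, with the rows of the classic 14-row play-cricket table already label-encoded inline (the exact rows are an assumption about the CSV's contents). `export_text` additionally prints the learned decision rules, which makes the fitted tree easy to inspect:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: outlook (0 Overcast, 1 Rainy, 2 Sunny), temp (0 Cool, 1 Hot, 2 Mild),
#          humidity (0 High, 1 Normal), windy (0 False, 1 True)
X = [[2,1,0,0],[2,1,0,1],[0,1,0,0],[1,2,0,0],[1,0,1,0],[1,0,1,1],[0,0,1,1],
     [2,2,0,0],[2,0,1,0],[1,2,1,0],[2,2,1,1],[0,2,0,1],[0,1,1,0],[1,2,0,1]]
y = [0,0,1,1,1,0,1,0,1,1,1,1,1,0]  # play: 0 No, 1 Yes

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# print the learned decision rules as indented text
print(export_text(model, feature_names=['outlook', 'temp', 'humidity', 'windy']))

# Sunny, Hot, High humidity, not windy -- a training row, so the fully
# grown tree reproduces its label
print(model.predict([[2, 1, 0, 0]]))  # [0] --> No
```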
