ML Lab Manual
PROGRAM 5
5. The Following Training Examples Map Descriptions of Individuals onto High, Medium and Low Credit-Worthiness.
Input attributes are (from left to right) income, recreation, job, status, age-group, home-owner. Find the unconditional probability of `golf' and the conditional probability of `single' given `medRisk' in the dataset.
SOURCE CODE:
totalRecords = 10
numberGolfRecreation = 4
probGolf = numberGolfRecreation / totalRecords
print("Unconditional probability of golf:", probGolf)

# Bayes formula:
# p(single|medRisk) = p(medRisk|single) * p(single) / p(medRisk)
# p(medRisk|single) = p(medRisk ∩ single) / p(single)
numberMedRiskSingle = 2
numberMedRisk = 3
probMedRiskSingle = numberMedRiskSingle / totalRecords
probMedRisk = numberMedRisk / totalRecords
conditionalProbability = probMedRiskSingle / probMedRisk
print("Conditional probability of single given medRisk:", conditionalProbability)
OUTPUT:
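The same probabilities can also be computed generically from a list of records instead of hard-coded counts; a minimal sketch (the `records` list below is a hypothetical placeholder that only reproduces the medRisk counts used above, since the full table is not shown here):

```python
def conditional_probability(records, event, given):
    """P(event | given) = count(event AND given) / count(given)."""
    given_records = [r for r in records if given(r)]
    matching = [r for r in given_records if event(r)]
    return len(matching) / len(given_records)

# Hypothetical placeholder records: 2 of the 3 medRisk individuals are single.
records = (
    [{"status": "single", "risk": "medRisk"}] * 2
    + [{"status": "married", "risk": "medRisk"}]
    + [{"status": "married", "risk": "lowRisk"}] * 7
)

p = conditional_probability(
    records,
    event=lambda r: r["status"] == "single",
    given=lambda r: r["risk"] == "medRisk",
)
print(p)  # → 0.6666666666666666, i.e. the 2/3 computed above
```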
PROGRAM 6
6. Implement Linear Regression on a Real Dataset (Salinity vs. Temperature from bottle.csv).
SOURCE CODE:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing, svm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
df = pd.read_csv('bottle.csv')
df_binary = df[['Salnty', 'T_degC']]
df_binary.columns = ['Sal', 'Temp']   # rename the columns for convenience
df_binary = df_binary.dropna()        # drop rows with missing values
# plotting the scatter plot to check the relationship between Sal and Temp
sns.lmplot(x="Sal", y="Temp", data=df_binary, order=2, ci=None)
plt.show()
X = np.array(df_binary['Sal']).reshape(-1, 1)
y = np.array(df_binary['Temp']).reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
regr = LinearRegression()
regr.fit(X_train, y_train)
print(regr.score(X_test, y_test))
OUTPUT:
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_pred = regr.predict(X_test)
mae = mean_absolute_error(y_true=y_test, y_pred=y_pred)
# squared=True (the default) returns the MSE value, False returns the RMSE value.
mse = mean_squared_error(y_true=y_test, y_pred=y_pred)
rmse = mean_squared_error(y_true=y_test, y_pred=y_pred, squared=False)
print("MAE:",mae)
print("MSE:",mse)
print("RMSE:",rmse)
OUTPUT:
MAE: 0.7927322046360309
MSE: 1.0251137190180517
RMSE: 1.0124789968281078
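These three metrics are closely related: RMSE is simply the square root of MSE (note that newer scikit-learn releases deprecate the `squared=False` argument in favour of a separate `root_mean_squared_error` function). The formulas can be checked with plain NumPy on arbitrary sample values:

```python
import numpy as np

# Arbitrary sample values, chosen only to illustrate the formulas.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))   # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)    # mean squared error
rmse = np.sqrt(mse)                      # root mean squared error = sqrt(MSE)

print(mae, mse, rmse)  # → 0.5 0.375 0.6123724356957945
```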
PROGRAM 7
AIM: To classify text documents using the Naive Bayes classifier and evaluate accuracy, recall and precision.
SOURCE CODE:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, recall_score, precision_score, confusion_matrix

msg = pd.read_csv('document.csv', names=['message', 'label'])
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
X = msg.message
y = msg.labelnum
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y)
count_v = CountVectorizer()
Xtrain_dm = count_v.fit_transform(Xtrain)
Xtest_dm = count_v.transform(Xtest)
df = pd.DataFrame(Xtrain_dm.toarray(), columns=count_v.get_feature_names_out())
clf = MultinomialNB()
clf.fit(Xtrain_dm, ytrain)
pred = clf.predict(Xtest_dm)
print('Accuracy Metrics:')
print('Accuracy: ', accuracy_score(ytest, pred))
print('Recall: ', recall_score(ytest, pred))
print('Precision: ', precision_score(ytest, pred))
print('Confusion Matrix: \n', confusion_matrix(ytest, pred))
document.csv:
I love to dance,pos
OUTPUT:
Accuracy Metrics:
Accuracy: 0.6
Recall: 0.6666666666666666
Precision: 0.6666666666666666
Confusion Matrix:
[[1 1]
[1 2]]
PROGRAM 9
9. IMPLEMENT THE FINITE WORDS CLASSIFICATION SYSTEM USING BACK-PROPAGATION ALGORITHM
AIM: To implement a finite words classification system using the back-propagation algorithm (MLPClassifier).
SOURCE CODE:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

msg = pd.read_csv('document.csv', names=['message', 'label'])
print('Total Instances of Dataset:', msg.shape[0])
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
X = msg.message
y = msg.labelnum
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y)
count_v = CountVectorizer()
Xtrain_dm = count_v.fit_transform(Xtrain)
Xtest_dm = count_v.transform(Xtest)
df = pd.DataFrame(Xtrain_dm.toarray(), columns=count_v.get_feature_names_out())
clf = MLPClassifier()
clf.fit(Xtrain_dm, ytrain)
pred = clf.predict(Xtest_dm)
print('Accuracy Metrics:')
print('Accuracy: ', accuracy_score(ytest, pred))
print('Recall: ', recall_score(ytest, pred))
print('Precision: ', precision_score(ytest, pred))
print('Confusion Matrix: \n', confusion_matrix(ytest, pred))
document.csv:
I love this sandwich,pos
This is an amazing place,pos
I feel very good about these beers,pos
This is my best work,pos
What an awesome view,pos
I do not like this restaurant,neg
I am tired of this stuff,neg
I can't deal with this,neg
He is my sworn enemy,neg
My boss is horrible,neg
This is an awesome place,pos
I do not like the taste of this juice,neg
I love to dance,pos
I am sick and tired of this place,neg
What a great holiday,pos
That is a bad locality to stay,neg
We will have good fun tomorrow,pos
I went to my enemy's house today,neg
OUTPUT:
Total Instances of Dataset: 18
Accuracy Metrics:
Accuracy: 0.8
Recall: 1.0
Precision: 0.75
Confusion Matrix:
[[1 1]
 [0 3]]
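The reported metrics follow directly from the confusion matrix above (scikit-learn's binary convention: rows are true labels, columns are predicted labels, with the negative class first):

```python
import numpy as np

cm = np.array([[1, 1],    # [true negatives, false positives]
               [0, 3]])   # [false negatives, true positives]
tn, fp, fn, tp = cm.ravel()

accuracy = (tp + tn) / cm.sum()   # (3+1)/5 = 0.8
recall = tp / (tp + fn)           # 3/3 = 1.0
precision = tp / (tp + fp)        # 3/4 = 0.75
print(accuracy, recall, precision)  # → 0.8 1.0 0.75
```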