Practical No 6

The document presents a practical implementation of a Support Vector Machine (SVM) using Python and the scikit-learn library to classify bill authentication data. It includes data preprocessing, model training, and evaluation, achieving an accuracy of approximately 98.91%. The results are summarized with a confusion matrix and classification report, indicating high precision and recall for both classes.


Support Vector Machine (SVM):


Code:

import numpy as np

import pandas as pd

from sklearn.model_selection import train_test_split

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix


data = pd.read_csv("bill_authentication.csv")

data

      Variance  Skewness  Curtosis  Entropy  Class
0      3.62160   8.66610   -2.8073 -0.44699      0
1      4.54590   8.16740   -2.4586 -1.46210      0
2      3.86600  -2.63830    1.9242  0.10645      0
3      3.45660   9.52280   -4.0112 -3.59440      0
4      0.32924  -4.45520    4.5718 -0.98880      0
...        ...       ...       ...      ...    ...
1367   0.40614   1.34920   -1.4501 -0.55949      1
1368  -1.38870  -4.87730    6.4774  0.34179      1
1369  -3.75030 -13.45860   17.5932 -2.77710      1
1370  -3.56370  -8.38270   12.3930 -1.28230      1
1371  -2.54190  -0.65804    2.6842  1.19520      1

[1372 rows x 5 columns]


x = data.drop("Class", axis=1)

y = data["Class"]

y
0       0
1       0
2       0
3       0
4       0
       ..
1367    1
1368    1
1369    1
1370    1
1371    1
Name: Class, Length: 1372, dtype: int64

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
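Note that this split is unseeded, so the exact test set (and hence the confusion matrix below) will vary between runs. A minimal sketch of a reproducible, stratified split, using synthetic stand-in data and an arbitrary `random_state` (both are assumptions, not part of the original practical):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the banknote features/labels (hypothetical data,
# only so the sketch runs without bill_authentication.csv).
rng = np.random.default_rng(0)
x = pd.DataFrame(rng.normal(size=(1372, 4)),
                 columns=["Variance", "Skewness", "Curtosis", "Entropy"])
y = pd.Series(rng.integers(0, 2, size=1372), name="Class")

# random_state pins the shuffle so the split is repeatable;
# stratify keeps the class ratio the same in train and test.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.20, random_state=42, stratify=y)

print(len(x_train), len(x_test))  # 1097 275
```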

from sklearn.svm import SVC


svclassifier = SVC(kernel="linear")

svclassifier.fit(x_train, y_train)
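After fit(), a linear SVC exposes which training points became support vectors, which is useful for sanity-checking the model. A small self-contained sketch on synthetic data (hypothetical, not the banknote CSV):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class problem (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = SVC(kernel="linear").fit(X, y)

print(clf.n_support_)              # support vectors per class
print(clf.support_vectors_.shape)  # (total support vectors, n_features)
```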

y_pred = svclassifier.predict(x_test)

print(confusion_matrix(y_test, y_pred))

print(classification_report(y_test, y_pred))
[[150   2]
 [  1 122]]

              precision    recall  f1-score   support

           0       0.99      0.99      0.99       152
           1       0.98      0.99      0.99       123

    accuracy                           0.99       275
   macro avg       0.99      0.99      0.99       275
weighted avg       0.99      0.99      0.99       275

accuracy = accuracy_score(y_test, y_pred)

print(f"Accuracy: {accuracy}")

Accuracy: 0.9890909090909091
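The reported accuracy can be cross-checked by hand against the confusion matrix: correct predictions sit on the diagonal (150 + 122) out of the 275 test samples.

```python
# Diagonal of the confusion matrix = correct predictions.
correct = 150 + 122          # true class-0 + true class-1 predictions
total = 150 + 2 + 1 + 122    # all 275 test samples
print(correct / total)       # 0.9890909090909091, matching accuracy_score
```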
