
Experiment – 3

Name: Muskan Soni                    UID: 22BCS16851
Branch: BE-CSE                       Section/Group: DL-902/B
Semester: 6th                        Date:
Subject: Deep Learning Lab           Subject Code: 22CSP-368

1. Aim: To implement a linear classifier using Python and demonstrate its application in
binary or multi-class classification.

2. Objective:
• Understand the concept of linear classifiers and their mathematical formulation.
• Implement a linear classifier for a given dataset and evaluate its performance.
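The mathematical formulation mentioned above boils down to a decision rule of the form ŷ = sign(w·x + b): a weight vector w and bias b define a hyperplane, and the class is decided by which side of the hyperplane the feature vector x falls on. As a minimal illustrative sketch (the weights, bias, and feature values below are made up for illustration, not learned from data):

```python
import numpy as np

# Hypothetical weights and bias for a 3-feature binary linear classifier
w = np.array([0.8, -0.5, 0.3])
b = -0.1

def linear_classify(x):
    """Return 1 if the decision function w.x + b is positive, else 0."""
    return int(np.dot(w, x) + b > 0)

print(linear_classify(np.array([1.0, 0.0, 1.0])))  # w.x + b = 1.0  -> class 1
print(linear_classify(np.array([0.0, 2.0, 0.0])))  # w.x + b = -1.1 -> class 0
```

Training a linear classifier (as Logistic Regression does below) means finding values of w and b that separate the classes well; prediction is just this dot product and a threshold.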

3. Implementation:

# Program 1: Spam detection with a linear classifier (Logistic Regression)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Sample data (two extra examples added so both classes appear in the training split)
emails = [
    "Free money now!!!", "Join Internship", "Earn $1000 per day",
    "Click on this link to get $500", "You won a lottery, claim it now!!!",
    "Lecture notes attached"
]
labels = [1, 0, 0, 0, 1, 0]  # 1: Spam, 0: Not Spam

# Convert text to bag-of-words feature vectors
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# stratify keeps both classes represented in the train and test splits
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42, stratify=labels)

model = LogisticRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(classification_report(y_test, predictions, zero_division=0))
# Program 2: Sentiment classification with Multinomial Naive Bayes
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

feedback = [
    "I love this product, it's amazing!", "Absolutely wonderful experience.",
    "This is the worst product I have ever used.", "I am very disappointed.",
    "It's okay, not great but not bad either.", "Pretty average, nothing special.",
    "Fantastic quality and great value for money!", "Terrible! Will never buy again.",
    "Neutral experience, wasn't impressed or dissatisfied.", "I like it, but it's not perfect."
]
labels = [2, 2, 0, 0, 1, 1, 2, 0, 1, 2]  # 0: Negative, 1: Neutral, 2: Positive

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(feedback)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=42)

classifier = MultinomialNB()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

print("Classification Report:")
# labels=[0, 1, 2] is required: the small test split may not contain all three
# classes, and target_names must match the number of labels reported on
print(classification_report(y_test, y_pred, labels=[0, 1, 2],
                            target_names=["Negative", "Neutral", "Positive"],
                            zero_division=0))

new_feedback = ["This product is just okay.", "I absolutely love it!", "I hate this so much."]
new_feedback_transformed = vectorizer.transform(new_feedback)
predictions = classifier.predict(new_feedback_transformed)

print("\nNew Feedback Predictions:")
for text, label in zip(new_feedback, predictions):
    sentiment = ["Negative", "Neutral", "Positive"][label]
    print(f"Feedback: '{text}' -> Sentiment: {sentiment}")
# Program 3: Animal classification with a Decision Tree
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Each row holds three binary features; labels: 0 Mammal, 1 Bird, 2 Reptile
data = [
    [0, 0, 1],  # Mammal (e.g., Dog)
    [1, 1, 1],  # Bird (e.g., Sparrow)
    [0, 1, 0],  # Reptile (e.g., Snake)
    [0, 0, 1],  # Mammal (e.g., Cat)
    [1, 1, 1],  # Bird (e.g., Parrot)
    [0, 1, 0],  # Reptile (e.g., Lizard)
    [1, 1, 1],  # Bird (e.g., Eagle)
    [0, 0, 1],  # Mammal (e.g., Lion)
    [0, 1, 0],  # Reptile (e.g., Crocodile)
    [0, 0, 1],  # Mammal (e.g., Whale)
]
labels = [0, 1, 2, 0, 1, 2, 1, 0, 2, 0]

X_train, X_test, y_train, y_test = train_test_split(
    data, labels, test_size=0.3, random_state=42)

classifier = DecisionTreeClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

print("Classification Report:")
print(classification_report(y_test, y_pred, labels=[0, 1, 2],
                            target_names=["Mammal", "Bird", "Reptile"],
                            zero_division=0))

new_animals = [
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0]
]
predictions = classifier.predict(new_animals)

print("\nNew Animal Predictions:")
for features, label in zip(new_animals, predictions):
    category = ["Mammal", "Bird", "Reptile"][label]
    print(f"Features: {features} -> Category: {category}")

4. Output:
5. Learning Outcomes:
• Accuracy: The classification report displays each classifier's accuracy on its test set.
• Per-class metrics: Precision, recall, and F1-score show how many predictions were correct
and incorrect for each class; a confusion matrix presents the same counts in tabular form.
• Decision boundary: A linear classifier separates the classes with a hyperplane in the
bag-of-words feature space.
• Insight: Logistic Regression is a simple and effective linear classifier for binary
problems.
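The confusion matrix mentioned above can be computed directly with scikit-learn's confusion_matrix; a minimal sketch on hypothetical true and predicted labels (not outputs of the programs above):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted labels (1: Spam, 0: Not Spam)
y_true = [1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0]

# Rows are true classes, columns are predicted classes:
# cm[i][j] counts samples of class i predicted as class j
cm = confusion_matrix(y_true, y_pred)
print(cm)  # [[2 1]
           #  [0 2]]
```

Here the diagonal entries (2 and 2) are correct predictions, and the off-diagonal entry (1) is the single ham email misclassified as spam.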
