ML EXPT 4

The document outlines a machine learning experiment using the Iris dataset, including data loading, exploration, and splitting into training and testing sets. An SVM model is created and trained, achieving an accuracy of 100% on the test set. The classification report and confusion matrix confirm the model's perfect performance across all classes.


# Step 1: Loading the Iris dataset
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

data = load_iris()
X = data.data    # Features (sepal length, sepal width, etc.)
y = data.target  # Labels (species of Iris)

# Output for Step 1


print("Feature names:", data.feature_names)
print("Target names:", data.target_names)
print("\nFirst 5 rows of feature data:\n", X[:5])
print("\nFirst 5 target labels:", y[:5])
OUTPUT
Feature names: ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal
width (cm)']
Target names: ['setosa' 'versicolor' 'virginica']

First 5 rows of feature data:


[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]

First 5 target labels: [0 0 0 0 0]


# Step 2: Exploring the data
print("Shape of the feature matrix (X):", X.shape)
print("Shape of the target vector (y):", y.shape)
print("\nTarget counts:\n", {i: list(y).count(i) for i in range(len(data.target_names))})

OUTPUT
Shape of the feature matrix (X): (150, 4)
Shape of the target vector (y): (150,)

Target counts:
{0: 50, 1: 50, 2: 50}
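The same class counts can be obtained more directly with NumPy's bincount, since the Iris labels are already integer-coded. This is an optional sanity check, not part of the original experiment steps:

```python
import numpy as np
from sklearn.datasets import load_iris

data = load_iris()
counts = np.bincount(data.target)           # counts occurrences of 0, 1, 2
print(dict(zip(data.target_names, counts)))  # dataset is balanced: 50 per class
```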

# Step 3: Splitting the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Output for Step 3


print("\nTraining set size:", X_train.shape)
print("Testing set size:", X_test.shape)

OUTPUT
Training set size: (105, 4)
Testing set size: (45, 4)
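Because the three classes are perfectly balanced, a plain random split already yields roughly proportional test sets, but train_test_split can be asked to preserve the class balance exactly via its stratify parameter. A short sketch of this variant (an optional refinement, not used in the experiment above):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42,
    stratify=data.target)  # keep the 50/50/50 class proportions in both splits

# With 45 test samples and three balanced classes, each class gets 15
print("Test-set class counts:", np.bincount(y_test))
```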

# Step 4: Generating the SVM Model


from sklearn.svm import SVC

# Create the SVM model with a linear kernel


svm_model = SVC(kernel='linear')

# Train the model on the training data


svm_model.fit(X_train, y_train)
# Output: confirmation
print("SVM model trained successfully.")
OUTPUT
SVM model trained successfully.
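After fitting, the SVC object exposes the support vectors it selected; inspecting them shows how compact the learned decision boundary is. A minimal sketch, reproducing the training above (attribute names are standard scikit-learn API):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42)

svm_model = SVC(kernel='linear').fit(X_train, y_train)

# n_support_ holds the number of support vectors chosen for each class
print("Support vectors per class:", svm_model.n_support_)
```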

# Step 5: Evaluating the Model


from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Make predictions on the test set


y_pred = svm_model.predict(X_test)

# Accuracy score
accuracy = accuracy_score(y_test, y_pred)
print(f"\nAccuracy on test set: {accuracy:.2f}")

# Classification report
print("\nClassification Report:\n", classification_report(
    y_test, y_pred, target_names=data.target_names))

# Confusion matrix
print("\nConfusion Matrix:\n", confusion_matrix(y_test, y_pred))
OUTPUT
Accuracy on test set: 1.00

Classification Report:
              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        19
  versicolor       1.00      1.00      1.00        13
   virginica       1.00      1.00      1.00        13

    accuracy                           1.00        45
   macro avg       1.00      1.00      1.00        45
weighted avg       1.00      1.00      1.00        45
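A 100% score on a single 45-sample split can be partly luck of the split, so a useful follow-up check is k-fold cross-validation over the full dataset. A minimal sketch using scikit-learn's cross_val_score (an extra verification step, not part of the original experiment):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data = load_iris()

# 5-fold cross-validation: train/evaluate on 5 different splits
scores = cross_val_score(SVC(kernel='linear'), data.data, data.target, cv=5)
print("Per-fold accuracies:", scores.round(2))
print("Mean accuracy:", scores.mean().round(2))
```

The mean accuracy stays high but typically dips slightly below 1.00 on some folds, which is a more realistic estimate of the model's performance than one perfect split.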
