
University Institute of Engineering

Department of Electronics & Communication Engineering

Experiment No. 1
Student Name: Abhishek Kumar UID: 21BEC1003
Branch: Electronics and Communication Section/Group: 21BEC-1/A
Semester: 7th Date of Performance: 17.7.24
Subject Name: Artificial Intelligence & Machine Learning Subject Code: 21ECH-406

1. Aim of the practical: Write a program to study the perceptron in Python.

2. Tool used: Google Colab or Jupyter

3. Code/Program:
import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.01, n_iters=1000):
        self.lr = learning_rate
        self.n_iters = n_iters
        self.activation_func = self._unit_step_func
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # Initialize parameters
        self.weights = np.zeros(n_features)
        self.bias = 0
        # Make sure the labels are 0/1
        y_ = np.array([1 if i > 0 else 0 for i in y])
        for _ in range(self.n_iters):
            for idx, x_i in enumerate(X):
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_predicted = self.activation_func(linear_output)
                # Perceptron update rule
                update = self.lr * (y_[idx] - y_predicted)
                self.weights += update * x_i
                self.bias += update

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        y_predicted = self.activation_func(linear_output)
        return y_predicted

    def _unit_step_func(self, x):
        return np.where(x >= 0, 1, 0)

# Testing
if __name__ == "__main__":
    import matplotlib.pyplot as plt
    from sklearn.model_selection import train_test_split
    from sklearn import datasets

    def accuracy(y_true, y_pred):
        return np.sum(y_true == y_pred) / len(y_true)

    # Two linearly separable blobs
    X, y = datasets.make_blobs(
        n_samples=150, n_features=2, centers=2, cluster_std=1.05, random_state=2
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=123
    )

    p = Perceptron(learning_rate=0.01, n_iters=1000)
    p.fit(X_train, y_train)
    predictions = p.predict(X_test)
    print("Perceptron classification accuracy", accuracy(y_test, predictions))

    # Plot the training data and the learned decision boundary
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    plt.scatter(X_train[:, 0], X_train[:, 1], marker="o", c=y_train)

    # Decision boundary: w0*x0 + w1*x1 + b = 0  =>  x1 = -(w0*x0 + b) / w1
    x0_1 = np.amin(X_train[:, 0])
    x0_2 = np.amax(X_train[:, 0])
    x1_1 = (-p.weights[0] * x0_1 - p.bias) / p.weights[1]
    x1_2 = (-p.weights[0] * x0_2 - p.bias) / p.weights[1]
    ax.plot([x0_1, x0_2], [x1_1, x1_2], "k")

    ymin = np.amin(X_train[:, 1])
    ymax = np.amax(X_train[:, 1])
    ax.set_ylim([ymin - 3, ymax + 3])
    plt.show()

4. Output:

5. Result: The program prints the classification accuracy of the Perceptron on the test set, a value between 0 and 1 representing the proportion of correct predictions; in this run the accuracy is 1.0. It also produces a scatter plot of the training points, coloured by their true labels, together with the Perceptron's decision boundary: a straight line separating the two classes.
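
As an optional cross-check (an addition, not part of the program above), scikit-learn's built-in Perceptron can be trained on the same split and should reach a comparable accuracy. A minimal sketch, assuming X_train, X_test, y_train and y_test from the program above:

# Hedged sketch: compare the from-scratch Perceptron against scikit-learn's.
from sklearn.linear_model import Perceptron as SkPerceptron

clf = SkPerceptron(max_iter=1000, random_state=0)  # built-in implementation
clf.fit(X_train, y_train)
print("scikit-learn Perceptron accuracy:", clf.score(X_test, y_test))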
University Institute of Engineering
Department of Electronics & Communication Engineering

Experiment No. 2
Student Name: Abhishek Kumar UID: 21BEC1003
Branch: Electronics and Communication Section/Group: 21BEC-1/A
Semester: 7th Date of Performance: 18.7.24
Subject Name: Artificial Intelligence & Machine Learning Subject Code: 21ECH-406

1. Aim of the practical: Write a program to study the different parameters of an ANN using Python.

2. Tool used: Google Colab or Jupyter

3. Code/Program:
# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Generate a random two-class dataset (10 points per class)
x = np.vstack([(np.random.rand(10, 2) * 5), (np.random.rand(10, 2) * 10)])
y = np.hstack([[0] * 10, [1] * 10])

# Create a DataFrame; the feature columns are named x1/x2 so that assigning
# the target to column 'y' does not overwrite a feature
dataset = pd.DataFrame(x, columns=['x1', 'x2'])
dataset['y'] = y  # Target variable
print(dataset.head())

# Plot the dataset, one scatter call per class so the legend is correct
for cls in np.unique(y):
    plt.scatter(dataset.loc[y == cls, 'x1'], dataset.loc[y == cls, 'x2'],
                label=f'Class {cls}')
plt.xlabel('X1')
plt.ylabel('X2')
plt.title('Random Dataset')
plt.legend()
plt.show()

# Convert the target labels to a one-hot matrix (one column per class)
Z = np.zeros((20, 2))
for i in range(20):
    Z[i, y[i]] = 1  # Set a 1 in the column of the corresponding class
print("Class vector Z:\n", Z)

# Initialize random weights and biases for a 2-3-2 neural network
Wi_1 = np.random.randn(3, 2)  # Weights: input layer (2) to hidden layer (3)
Wi_2 = np.random.randn(3, 2)  # Weights: hidden layer (3) to output layer (2)
bi_1 = np.random.randn(1, 3)  # Bias for hidden layer, shape (1, 3)
bi_2 = np.random.randn(2)     # Bias for output layer, shape (2,)
print("Randomly initialized weights and biases:")
print("Wi_1 (input to hidden layer):\n", Wi_1)
print("Wi_2 (hidden to output layer):\n", Wi_2)
print("bi_1 (bias for hidden layer):\n", bi_1)
print("bi_2 (bias for output layer):\n", bi_2)
print("Shape of dataset x:", x.shape)

# Forward propagation function
def forward_prop(X, Wi_1, Bi_1, Wi_2, Bi_2):
    # Hidden layer: sigmoid activation
    M = 1 / (1 + np.exp(-(X.dot(Wi_1.T) + Bi_1)))
    # Output layer: softmax activation
    A = M.dot(Wi_2) + Bi_2
    expA = np.exp(A)
    Y = expA / expA.sum(axis=1, keepdims=True)
    return Y, M

# Run forward propagation on the dataset
output, hidden_activations = forward_prop(x, Wi_1, bi_1, Wi_2, bi_2)
print("Output from forward propagation:\n", output)
print("Hidden layer activations:\n", hidden_activations)

4. Output:

5. Result: The program prints the generated dataset, its scatter plot, the one-hot class matrix Z, and the randomly initialized weights and biases of the 2-3-2 network. It then performs a single forward pass, printing the hidden-layer (sigmoid) activations and the softmax output probabilities, which shows how the network's parameters (weights, biases, and activation functions) determine its predictions.
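
As an optional follow-up (an addition, not part of the program above), the one-hot matrix Z and the softmax output from forward_prop can be combined to quantify the untrained network: its cross-entropy loss and classification accuracy under the random initial weights. A minimal sketch, assuming Z, y and output from the program above:

# Hedged sketch: evaluate the untrained network's forward pass.
loss = -np.mean(np.sum(Z * np.log(output), axis=1))  # cross-entropy vs. one-hot targets
preds = np.argmax(output, axis=1)                    # predicted class per sample
print("Cross-entropy loss (random weights):", loss)
print("Accuracy (random weights):", np.mean(preds == y))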
University Institute of Engineering
Department of Electronics & Communication Engineering

Experiment No. 3
Student Name: Abhishek Kumar UID: 21BEC1003
Branch: Electronics and Communication Section/Group: 21BEC-1/A
Semester: 7th Date of Performance: 25.7.24
Subject Name: Artificial Intelligence & Machine Learning Subject Code: 21ECH-406

1. Aim of the practical: Write a program to study the various metrics for comparing ANN performance.

2. Tool used: MATLAB

3. Code/Program:
% Example actual values
actual = [1 0 1 1 0 1 0 0];
% Example predicted values from the ANN
predicted = [1 0 1 0 0 1 1 0];

% Confusion matrix: rows = actual class, columns = predicted class,
% with classes ordered 0 then 1 for these labels
cm = confusionmat(actual, predicted);

% True/false positives and negatives (class 1 is the positive class)
tn = cm(1,1);  % True negatives  (actual 0, predicted 0)
fp = cm(1,2);  % False positives (actual 0, predicted 1)
fn = cm(2,1);  % False negatives (actual 1, predicted 0)
tp = cm(2,2);  % True positives  (actual 1, predicted 1)

% Accuracy
accuracy = (tp + tn) / sum(cm(:));
% Precision
precision = tp / (tp + fp);
% Recall (sensitivity)
recall = tp / (tp + fn);
% F1-score
f1_score = 2 * (precision * recall) / (precision + recall);
% Matthews correlation coefficient (MCC)
mcc = (tp*tn - fp*fn) / sqrt((tp+fp)*(tp+fn)*(tn+fp)*(tn+fn));

% Display the results
fprintf('Accuracy: %.2f\n', accuracy);
fprintf('Precision: %.2f\n', precision);
fprintf('Recall: %.2f\n', recall);
fprintf('F1-score: %.2f\n', f1_score);
fprintf('Matthews correlation coefficient (MCC): %.2f\n', mcc);

4. Output:

5. Result: This program calculates and displays the following metrics for comparing the performance of an
Artificial Neural Network (ANN):
a) Accuracy: The proportion of true results (both true positives and true negatives) among the total number
of cases examined.
b) Precision: The proportion of true positive results in all positive predictions made.
c) Recall (Sensitivity): The proportion of true positive results in all actual positive cases.
d) F1-score: The harmonic mean of precision and recall.
e) Matthews Correlation Coefficient (MCC): A measure of the quality of binary classifications, which takes
into account true and false positives and negatives, and is generally regarded as a balanced measure even
if the classes are of very different sizes.
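
As an optional cross-check (an addition, not part of the MATLAB program), the same metrics can be recomputed in Python with scikit-learn, which Experiments 1 and 2 already use. For the example vectors above, the confusion matrix gives TP = 3, TN = 3, FP = 1, FN = 1, so every metric except MCC works out to 0.75, and MCC to 0.50. A minimal sketch:

# Hedged sketch: recompute Experiment 3's metrics with scikit-learn.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

actual    = [1, 0, 1, 1, 0, 1, 0, 0]  # same vectors as the MATLAB example
predicted = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(actual, predicted))    # 0.75
print("Precision:", precision_score(actual, predicted))  # 0.75
print("Recall:", recall_score(actual, predicted))        # 0.75
print("F1-score:", f1_score(actual, predicted))          # 0.75
print("MCC:", matthews_corrcoef(actual, predicted))      # 0.50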
