Week_2

The document outlines a laboratory course on Neural Networks and Deep Learning, detailing a schedule of experiments over 17 weeks, including topics such as Perceptrons, Convolutional Neural Networks, and Recurrent Neural Networks. It also provides insights into the Perceptron learning algorithm, implementation in various frameworks like TensorFlow and Sklearn, and exercises for practical application. Additionally, it compares deep learning frameworks like PyTorch and TensorFlow, highlighting their features and use cases.

19CSE456 Neural Network and Deep Learning Laboratory


List of Experiments

Week #  Experiment Title
1       Introduction to the lab and Implementation of a simple Perceptron (Hardcoding)
2       Implementation of Perceptron for Logic Gates (Hardcoding, Sklearn, TF)
3       Implementation of Multilayer Perceptron for XOR Gate and other classification problems with ML toy datasets (Hardcoding & TF)
4       Implementation of MLP for Image Classification with MNIST dataset (Hardcoding & TF)
5       Activation Functions, Loss Functions, Optimizers (Hardcoding & TF)
6       Lab Evaluation 1 (based on topics covered from w1 to w5)
7       Convolutional Neural Networks for Toy Datasets (MNIST & CIFAR)
8       Convolutional Neural Networks for Image Classification (Oxford Pets, Tiny ImageNet, etc.)
9       Recurrent Neural Networks for Sentiment Analysis with IMDB Movie Reviews
10      Long Short-Term Memory for Stock Prices (Yahoo Finance API)
List of Experiments contd.

Week #  Experiment Title
11      Implementation of Autoencoders and Denoising Autoencoders (MNIST/CIFAR)
12      Boltzmann Machines (MNIST/CIFAR)
13      Restricted Boltzmann Machines (MNIST/CIFAR)
14      Hopfield Neural Networks (MNIST/CIFAR)
15      Lab Evaluation 2 (based on CNN, RNN, LSTM, and AEs)
16      Case Study Review (Phase 1)
17      Case Study Review (Phase 2)


Perceptron
• A single-layer perceptron is the basic unit of a neural network
• A perceptron consists of input values, weights, a bias, a weighted sum, and an activation function

[Diagram: inputs x₀, x₁, …, x₄, each multiplied by its weight w₀, w₁, …, w₄, feed a summation node followed by an activation function; the 0-th input carries the bias term.]

    z = ∑ᵢ₌₀⁴ wᵢ xᵢ ,    y′ = φ(z)
Perceptron Learning Algorithm
Inputs: X = (x₁, x₂, x₃, ⋯, xₙ), W = (w₁, w₂, w₃, ⋯, wₙ), b
Output: y′
• Step 1: Initialize weights W and bias b to small random values
• Step 2: For each input vector X, compute the weighted sum:
    z = W · X + b
• Step 3: Apply an activation function φ(z):
    y′ = φ(z) = 1 if z ≥ 0, else 0
• Step 4: For each training example, update the weights based on the error:
    wᵢ = wᵢ + η(y − y′)xᵢ
• Step 5: Update the bias:
    b = b + η(y − y′)
• Step 6: Repeat steps 2–5 for a specified number of epochs or until convergence
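
As a concrete reference for Exercise 1, here is a minimal NumPy sketch of the algorithm above; the AND-gate data, learning rate, and epoch count are illustrative assumptions, not values fixed by these slides.

import numpy as np

def perceptron_fit(X, y, eta=0.1, epochs=20):
    """Perceptron learning algorithm (Steps 1-6 above)."""
    rng = np.random.default_rng(42)
    w = rng.normal(scale=0.01, size=X.shape[1])   # Step 1: small random weights
    b = rng.normal(scale=0.01)                    # ... and bias
    for _ in range(epochs):                       # Step 6: repeat for several epochs
        for xi, target in zip(X, y):
            z = np.dot(w, xi) + b                 # Step 2: weighted sum
            y_pred = 1 if z >= 0 else 0           # Step 3: step activation
            w += eta * (target - y_pred) * xi     # Step 4: weight update
            b += eta * (target - y_pred)          # Step 5: bias update
    return w, b

# Assumed example: truth table of the logic AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_fit(X, y)
print([1 if np.dot(w, xi) + b >= 0 else 0 for xi in X])   # expected: [0, 0, 0, 1]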
Deep Learning Frameworks

TensorFlow
• TensorFlow is an open-source platform for creating machine and deep learning models
• Developed by the Google Brain team
• Open-source
• Uses data flow graphs
• Has a large ecosystem and community
• Good for production deployment

PyTorch
• PyTorch is an open-source deep learning framework that provides a Python and C++ interface
• Developed by Facebook's AI Research lab
• PyTorch operates on tensors, which are like NumPy arrays
• Several powerful libraries and tools are built on top of PyTorch

Keras
• Keras is a high-level, user-friendly API for building and training neural networks
• Originally, it could run on top of TensorFlow, Theano, or CNTK
• Offers a wide range of built-in layers including dense, convolutional, recurrent, and more
• Since TensorFlow 2.0, Keras has become the primary high-level API of TensorFlow

Theano
• Theano is a Python library for fast numerical computation that can be run on both the CPU and GPU
• Can leverage the GPU for faster numerical computation
• Theano is lower-level and requires more detailed programming
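
The "tensors are like NumPy arrays" point is easy to see side by side. A minimal sketch, not from the slides, assuming the torch and tensorflow packages are installed:

import numpy as np
import torch
import tensorflow as tf

a = np.array([[1.0, 2.0], [3.0, 4.0]])    # plain NumPy array
t = torch.tensor(a)                       # PyTorch tensor holding the same data
c = tf.constant(a)                        # TensorFlow tensor holding the same data

# All three support elementwise math and round-trip easily to NumPy
print((t * 2).numpy())
print((c * 2).numpy())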
PyTorch or TensorFlow?

Key Factors              | PyTorch                                                  | TensorFlow
Ease of Use              | Generally considered more intuitive and Pythonic         | Has a steeper learning curve, especially for earlier versions
Dynamic vs Static Graphs | Uses dynamic computational graphs, easier for debugging  | Primarily uses static graphs, but supports dynamic graphs since TensorFlow 2.0
Community and Resources  | Growing community, popular in academia and research      | Larger community, more resources and tutorials available
Industry Adoption        | Gaining popularity, especially in research and startups  | More widely adopted in large-scale industrial applications
Visualization Tools      | TensorBoard support available, but less integrated       | Native TensorBoard support for better visualization

Choose PyTorch if you need ease of use and flexibility.
Opt for TensorFlow if you prioritize scalability, production deployment, and a mature ecosystem.
Perceptron in Sklearn
Import the Necessary Libraries:
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split   # used in the split step below
from sklearn.metrics import accuracy_score             # used in the evaluation step below
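
The snippets below assume a feature matrix X and label vector y already exist. A minimal assumed setup, consistent with Exercise 2 (any two classes of the Iris dataset), might look like this:

from sklearn.datasets import load_iris

# Assumed setup: keep only two Iris classes (labels 0 and 1) and only the
# first two features, so the decision boundary can be plotted in 2-D
iris = load_iris()
mask = iris.target < 2
X, y = iris.data[mask][:, :2], iris.target[mask]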

Split the Dataset into Training and Testing Sets:


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Create and Train the Perceptron Model:


model = Perceptron(max_iter=1000, tol=1e-3, random_state=42)
model.fit(X_train, y_train)

Make Predictions and Evaluate the Model:


y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
Perceptron in Sklearn
Visualize the Decision Boundary:
import numpy as np
import matplotlib.pyplot as plt

# Plot the training data
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap='viridis', marker='o', edgecolor='k')

# Create a meshgrid for the decision boundary


x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))

# Predict the class for each point in the meshgrid


Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

# Plot the decision boundary


plt.contourf(xx, yy, Z, alpha=0.2, cmap='viridis')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Perceptron Decision Boundary')
plt.show()
Perceptron in Sklearn
[Figure: decision boundary plot produced by the code above]
Perceptron in TensorFlow
Import the Necessary Libraries:
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

Split the Data:
• Training Set: the model learns patterns, weights, and biases by minimizing the loss function on this data
• Validation Set: used to evaluate the model's performance during training
• Test Set: provides an estimate of how the model will perform on completely new, unseen data
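
A sketch of such a three-way split, using the imports above; the 60/20/20 proportions and the two-class Iris filtering are assumptions, not values given in these slides:

# Assumed: two Iris classes (binary labels 0/1) with all 4 features
iris = load_iris()
mask = iris.target < 2
X, y = iris.data[mask], iris.target[mask]

# Scale the features, then hold out 20% for test and 20% for validation
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=42)  # 0.25 * 0.8 = 0.2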
Perceptron in TensorFlow
Creating the Perceptron Model
def create_perceptron(learning_rate=0.01):
    model = tf.keras.Sequential([
        # A fully connected layer with one neuron;
        # the input shape is specified to have 4 features
        tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))
    ])
    model.compile(
        # Stochastic Gradient Descent (SGD) optimizer
        optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
        # The loss function used is binary cross-entropy
        loss='binary_crossentropy',
        # The model will track accuracy as a metric
        metrics=['accuracy']
    )
    return model
Perceptron in TensorFlow
Training the Perceptron
def train_perceptron(model, X_train, y_train, X_val, y_val, epochs=100):
    history = model.fit(
        X_train, y_train,
        # The number of training epochs
        epochs=epochs,
        # Validation data and labels for evaluating the model's
        # performance after each epoch
        validation_data=(X_val, y_val),
        # Verbosity mode; when set to 1, progress messages are
        # printed during training
        verbose=1
    )
    # Returns the training history object
    return history
Perceptron in TensorFlow
Plotting Training History
def plot_training_history(history):
    plt.figure(figsize=(12, 4))

    # Plot training-validation accuracy
    plt.subplot(1, 2, 1)
    plt.plot(history.history['accuracy'], label='Training Accuracy')
    plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
    plt.title('Perceptron Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend()

    # Plot training-validation loss
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'], label='Training Loss')
    plt.plot(history.history['val_loss'], label='Validation Loss')
    plt.title('Binary Cross-Entropy Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()

    plt.tight_layout()
    plt.show()
Perceptron in TensorFlow
Evaluating the Perceptron
from sklearn.metrics import confusion_matrix

def evaluate_perceptron(model, X_test, y_test):
    # Evaluate the model on the test data and return the loss and accuracy
    loss, accuracy = model.evaluate(X_test, y_test)
    print(f"\nTest Loss: {loss:.4f}")
    print(f"Test Accuracy: {accuracy:.4f}")

    # Convert the predicted probabilities to binary class labels (0 or 1)
    # based on a threshold of 0.5
    predictions = (model.predict(X_test) >= 0.5).astype(int)
    print("\nSample Predictions (First 10 instances):")
    for i in range(10):
        print(f"True: {y_test[i]}, Predicted: {predictions[i][0]}")

    # Calculate the confusion matrix from the true and predicted labels
    cm = confusion_matrix(y_test, predictions)
    print("\nConfusion Matrix:")
    print(cm)
Perceptron in TensorFlow
Plotting Decision Boundary
def plot_decision_boundary(model, X, y):
    plt.figure(figsize=(10, 6))
    # Find the min and max of the first two features, padded by 1
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    # Generate a grid of values over the specified range
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                         np.arange(y_min, y_max, 0.02))

    # Concatenate the flattened meshgrid arrays and add two additional
    # features set to zero (the model expects 4 input features)
    Z = model.predict(np.c_[xx.ravel(), yy.ravel(),
                            np.zeros_like(xx.ravel()),
                            np.zeros_like(xx.ravel())])
    Z = (Z >= 0.5).astype(int)
    Z = Z.reshape(xx.shape)

    # Create a filled contour plot of the decision boundary
    plt.contourf(xx, yy, Z, alpha=0.4)
    plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.8)
    plt.xlabel('Sepal Length (scaled)')
    plt.ylabel('Sepal Width (scaled)')
    plt.title('Perceptron Decision Boundary (First Two Features)')
    plt.show()
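
Putting the pieces together, a hedged usage sketch (it assumes the train/validation/test split shown earlier produced X_train, X_val, X_test and their labels, with X scaled):

# Build, train, evaluate, and visualize the single-neuron model
model = create_perceptron(learning_rate=0.01)
history = train_perceptron(model, X_train, y_train, X_val, y_val, epochs=100)
plot_training_history(history)
evaluate_perceptron(model, X_test, y_test)
plot_decision_boundary(model, X, y)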
Week 2 Exercises
1. Hardcoding Perceptron Learning Algorithm to Implement Logic AND and OR Gates
   Objective: Implement a perceptron learning algorithm from scratch to simulate the logic AND and OR gates.

2. Classification with Iris Dataset using Sklearn Perceptron
   Objective: Use the Perceptron classifier from Sklearn to classify the Iris dataset (use data points of any two classes of your choice and do the classification).

3. Classification with Iris Dataset using TensorFlow
   Objective: Use a single-layer dense network classifier from TensorFlow to classify the Iris dataset (use data points of any two classes of your choice and do the classification). Plotting of training-validation loss and accuracy is necessary.
