NNDL Lab Manual
EX.No:1
Aim:
The aim of this exercise is to perform basic tensor addition in TensorFlow. The program defines two constant tensors, vector1 and vector2, representing the vectors [1, 2, 3] and [4, 5, 6] respectively, adds them using tf.add(), and prints the result.
Program:
import tensorflow as tf
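The rest of the listing is missing; below is a minimal sketch that completes the program exactly as the aim describes it.
# Define two constant tensors (names as given in the aim)
vector1 = tf.constant([1, 2, 3])
vector2 = tf.constant([4, 5, 6])
# Perform element-wise addition and print the result
result = tf.add(vector1, vector2)
print(result)  # tf.Tensor([5 7 9], shape=(3,), dtype=int32)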
Result:
The above program was executed successfully.
EX.No:2
Aim:
The aim of this exercise is to demonstrate how to implement a simple
regression model using the Keras API, a high-level neural networks library
running on top of TensorFlow.
Algorithm:
1. Generate or load the dataset.
2. Define a Sequential model.
3. Add layers to the model. For regression, a single Dense layer is typically sufficient.
4. Compile the model, specifying the optimizer and loss function.
5. Train the model on the dataset using the fit method.
6. Make predictions using the trained model.
7. Evaluate the model's performance using appropriate metrics.
Program:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
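The middle of the listing is missing; below is a minimal sketch of the absent steps. The synthetic dataset y = 2x + 1 and the test inputs [0.1, 0.2, 0.3] are assumptions, chosen to be consistent with the printed output below.
# Synthetic linear data: y = 2x + 1 plus a little noise (assumed)
X_train = np.random.rand(100, 1)
y_train = 2 * X_train + 1 + 0.05 * np.random.randn(100, 1)
X_test = np.array([[0.1], [0.2], [0.3]])
# A single Dense neuron is sufficient for linear regression
model = keras.Sequential([layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Train for 100 epochs (matching the output below)
model.fit(X_train, y_train, epochs=100)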
predictions = model.predict(X_test)
print("Predictions:")
print(predictions)
Output:
Epoch 1/100
4/4 [==============================] - 0s 749us/step - loss: 4.5494
Epoch 2/100
4/4 [==============================] - 0s 623us/step - loss: 4.4588
...
Epoch 100/100
4/4 [==============================] - 0s 499us/step - loss: 0.0345
Predictions:
[[1.2399716]
[1.4402403]
[1.640509 ]]
Result:
The above program was executed successfully.
EX.No:3
Aim:
The aim of this exercise is to implement a perceptron, a fundamental building block of
neural networks, in both TensorFlow and Keras environments. Through this exercise, participants will gain
an understanding of how to create and train a perceptron model using both of these popular deep
learning frameworks.
Algorithm:
1. Initialize Weights and Bias: Initialize weights and bias randomly or with predefined
values.
2. Define Input Data: Prepare the input data along with corresponding labels for training the
perceptron.
3. Define Placeholder (TensorFlow) or Input Shape (Keras): Depending on the framework,
define placeholders for input data (TensorFlow) or specify the input shape (Keras).
4. Define Perceptron Operation: Compute the output of the perceptron by performing the
weighted sum of inputs and adding a bias term, followed by passing the result through an
activation function (e.g., step function, sigmoid, etc.).
5. Define Loss Function: Define a suitable loss function to measure the discrepancy between
predicted and actual outputs (e.g., mean squared error, binary cross-entropy).
6. Define Optimizer: Choose an optimizer algorithm (e.g., stochastic gradient descent,
Adam) to minimize the loss and update the weights and bias accordingly.
7. Initialize Variables (TensorFlow): Initialize all variables used in the computation graph.
8. Train the Perceptron: Iterate over the training data multiple times, feeding the input data
and labels to the perceptron, and optimizing the weights and bias using the chosen
optimizer.
9. Evaluate the Perceptron (Optional): If necessary, evaluate the trained perceptron on a
separate validation or test dataset to assess its performance.
10. Make Predictions: Use the trained perceptron to make predictions on new, unseen data.
Program:
TensorFlow:
import numpy as np
import tensorflow as tf
# Define the input data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])
# Define the perceptron
class Perceptron(tf.Module):
    def __init__(self):
        super(Perceptron, self).__init__()
        self.W = tf.Variable(tf.random.uniform([2, 1]))
        self.b = tf.Variable(tf.random.uniform([1]))

    def __call__(self, x):
        # Weighted sum of inputs plus bias, passed through a sigmoid (assumed activation)
        return tf.sigmoid(tf.matmul(tf.cast(x, tf.float32), self.W) + self.b)

# Loss and optimizer (assumed: binary cross-entropy with SGD)
perceptron = Perceptron()
loss_function = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Training loop
for epoch in range(1000):
    with tf.GradientTape() as tape:
        predictions = perceptron(X)
        loss = loss_function(y, predictions)
    gradients = tape.gradient(loss, perceptron.trainable_variables)
    optimizer.apply_gradients(zip(gradients, perceptron.trainable_variables))
Keras:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Define the input data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])
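The remainder of the Keras listing is missing; below is a minimal sketch assuming a single sigmoid neuron trained with binary cross-entropy, which is consistent with the predictions printed in the output.
# Single-neuron perceptron with sigmoid activation (assumed architecture)
model = Sequential([Dense(1, input_dim=2, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train on the AND-gate data
model.fit(X, y, epochs=1000, verbose=0)
print("Predictions:")
print(model.predict(X))
Output: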
Predictions:
[[0.06273559]
[0.16103294]
[0.10974989]
[0.8888136]]
Result:
The program was executed successfully.
EX.No:4
Aim:
The aim of this exercise is to implement a feedforward neural network, also known as a
multilayer perceptron (MLP), in both TensorFlow and Keras environments. By completing this
exercise, participants will gain an understanding of how to build and train a basic feedforward
neural network using these popular deep learning frameworks.
Algorithm:
1. Initialize Weights and Biases: Initialize the weights and biases of each layer randomly or
using predefined values.
2. Define Input Data: Prepare the input data along with corresponding labels for training the
neural network.
3. Define Model Architecture:
a. TensorFlow:
- Define placeholders for input data.
- Define variables for weights and biases for each layer.
- Define the architecture of the neural network by specifying the number of
layers, number of neurons in each layer, and activation functions.
b. Keras:
- Initialize a sequential model.
- Add layers to the model using Dense layer specifying the number of
neurons and activation functions.
- Compile the model, specifying the optimizer, loss function, and metrics.
4. Forward Propagation:
a. TensorFlow:
- Implement forward propagation by applying the activation function to the
linear combination of inputs, weights, and biases for each layer.
b. Keras:
- Keras handles forward propagation internally during the training process.
5. Define Loss Function: Define a suitable loss function to measure the discrepancy between
predicted and actual outputs (e.g., mean squared error, categorical cross-entropy).
6. Define Optimizer: Choose an optimizer algorithm (e.g., stochastic gradient descent,
Adam) to minimize the loss and update the weights and biases accordingly.
7. Initialize Variables (TensorFlow): Initialize all variables used in the computation graph.
8. Train the Neural Network: Iterate over the training data multiple times, feeding the input
data and labels to the neural network, and optimizing the weights and biases using the
chosen optimizer.
9. Evaluate the Model (Optional): If necessary, evaluate the trained neural network on a
separate validation or test dataset to assess its performance.
10. Make Predictions: Use the trained neural network to make predictions on new, unseen
data.
Program:
TensorFlow:
import numpy as np
import tensorflow as tf
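Only the imports survived in this listing; below is a minimal sketch of a two-layer feedforward network written with raw TensorFlow variables, using the XOR problem as assumed example data. All names and hyperparameters below are assumptions.
# XOR data (assumed example)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

class MLP(tf.Module):
    def __init__(self):
        super().__init__()
        # Hidden layer: 2 inputs -> 4 neurons; output layer: 4 -> 1
        self.W1 = tf.Variable(tf.random.normal([2, 4]))
        self.b1 = tf.Variable(tf.zeros([4]))
        self.W2 = tf.Variable(tf.random.normal([4, 1]))
        self.b2 = tf.Variable(tf.zeros([1]))

    def __call__(self, x):
        h = tf.nn.relu(tf.matmul(x, self.W1) + self.b1)      # forward propagation, hidden layer
        return tf.sigmoid(tf.matmul(h, self.W2) + self.b2)   # output layer

model = MLP()
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

# Training loop
for epoch in range(2000):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(X))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print(model(X).numpy())
Keras:
A sketch of the equivalent Keras model (assumed architecture):
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)
print(model.predict(X))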
Result:
The program was executed successfully.
EX.No:5
Aim:
The aim of this exercise is to build and train a convolutional neural network (CNN) for image classification using Keras, demonstrated on the MNIST handwritten-digit dataset.
Algorithm:
1. Prepare Dataset:
Load and preprocess the image dataset. This may involve resizing images to a fixed size, normalizing pixel values, and splitting the dataset into training, validation, and test sets. Optionally apply data augmentation techniques such as random rotation, flipping, and shifting to increase the diversity of the training data.
2. Define Model Architecture:
Build the CNN by stacking convolutional, pooling, flatten, and dense layers.
3. Compile Model:
Define the loss function, optimizer, and evaluation metrics for training the model.
4. Train Model:
Feed the training data to the model and train it using the fit method. Monitor the training process using validation data.
5. Evaluate Model:
Evaluate the trained model on the test set to assess its performance.
6. Fine-Tuning and Hyperparameter Tuning (Optional):
Optionally fine-tune the model architecture and hyperparameters based on the performance on the validation set.
7. Make Predictions:
Use the trained model to make predictions on new, unseen images.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
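Only the imports survived in this listing; below is a minimal sketch of a small CNN on MNIST. The layer sizes and training settings are assumptions.
# Load and preprocess MNIST (pixel values scaled to [0, 1])
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# A small CNN: conv -> pool -> flatten -> dense
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, validation_split=0.1)

# Evaluate on the test set and make a few predictions
test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test accuracy:", test_acc)
print(np.argmax(model.predict(X_test[:5]), axis=1))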
Result:
The above program was executed successfully.
EX.No:6
Aim:
The aim of hyperparameter tuning in deep learning is to improve the performance of a model by systematically searching for the optimal combination of hyperparameters. Hyperparameters are settings that control the learning process, such as the learning rate, batch size, number of layers, number of neurons per layer, and activation functions. By fine-tuning these hyperparameters, we aim to achieve better accuracy, faster convergence, and improved generalization of the model.
Algorithm:
1. Define Hyperparameter Space:
Define the hyperparameters to be tuned and their respective search spaces. For example, the
learning rate may be searched in the range [0.0001, 0.1], the number of neurons per layer
may be chosen from [32, 64, 128], etc.
2. Choose Optimization Strategy:
Select a hyperparameter optimization strategy. Common strategies include grid search, random
search, Bayesian optimization, and more advanced techniques like genetic algorithms or
evolutionary strategies.
3. Split Data:
Split the dataset into training, validation, and test sets. The validation set will be used to
evaluate the performance of each set of hyperparameters during the tuning process.
4. Define Model Architecture:
Define the architecture of the deep learning model. This includes the number of layers, types of
layers (e.g., convolutional, recurrent, dense), activation functions, regularization techniques,
etc.
5. Define Training Procedure:
Define the training procedure, including the optimizer, loss function, and any additional
callbacks or metrics to monitor during training.
6. Hyperparameter Optimization Loop:
Start the hyperparameter optimization loop:
Sample a set of hyperparameters from the search space.
Build and train the model using the sampled hyperparameters on the training data.
Evaluate the model on the validation set.
Keep track of the validation performance for each set of hyperparameters.
Repeat the above steps until a predefined budget or stopping criteria is reached.
7. Select Best Hyperparameters:
Once the hyperparameter optimization loop is complete, select the set of hyperparameters that
achieved the best performance on the validation set.
8. Evaluate Final Model:
Train the final model using the selected hyperparameters on the entire training dataset
(training + validation).
Evaluate the final model on the test set to obtain an unbiased estimate of its
performance.
Program:
import numpy as np
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
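The model-construction and grid-search steps are missing from the listing; below is a minimal sketch assuming a small binary classifier on synthetic data. The create_model function, the data, and the grid values are all illustrative assumptions.
# Build function handed to KerasClassifier (assumed architecture)
def create_model(optimizer='adam'):
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

# Synthetic binary-classification data (assumed)
np.random.seed(7)
X = np.random.rand(100, 8)
y = np.random.randint(2, size=100)

# Wrap the Keras model for scikit-learn and search the hyperparameter grid
model = KerasClassifier(build_fn=create_model, verbose=0)
param_grid = {'batch_size': [10, 20], 'epochs': [5, 10], 'optimizer': ['adam', 'sgd']}
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
grid_result = grid.fit(X, y)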
# Summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
Result:
The above program was executed successfully.
EX.No:7
Aim:
The aim of this exercise is to apply transfer learning for image classification: reuse a convolutional network pre-trained on a large dataset (here VGG16 trained on ImageNet) and train only a new classification head on the target dataset.
Algorithm:
1. Select Pre-trained Model:
Choose a pre-trained model that has been trained on a large dataset with similar characteristics
to the target dataset. Common choices include VGG, ResNet, Inception, and MobileNet.
2. Load Pre-trained Model:
Load the pre-trained model and remove the top layers (fully connected layers) that are responsible
for making predictions on the original task.
3. Freeze Base Layers (Optional):
Optionally, freeze the weights of the base layers (the layers of the pre-trained model) to prevent
them from being updated during training. This step is recommended when the target dataset is
small and similar to the source dataset.
4. Add New Classification Layers:
Add new layers (typically fully connected layers) on top of the pre-trained base layers. These
layers will be responsible for making predictions on the new target task.
5. Define Training Procedure:
Define the training procedure, including the optimizer, loss function, and any additional callbacks
or metrics to monitor during training.
6. Data Augmentation (Optional):
Optionally, apply data augmentation techniques to increase the diversity of the training data and
improve the model's generalization ability.
7. Train the Model:
Train the model on the target dataset. Since the base layers are frozen (or partially frozen), only
the newly added layers will be trained. This step helps the model learn task-specific features
from the target dataset while leveraging the knowledge learned by the pre-trained model.
8. Fine-tuning (Optional):
Optionally, unfreeze some of the base layers and continue training the entire model end-to-end.
Fine-tuning allows the model to further adapt to the target dataset and potentially improve
performance.
9. Evaluate the Model:
Evaluate the trained model on a separate validation set to assess its performance. Monitor metrics
such as accuracy, precision, recall, and F1-score.
10. Deploy the Model:
Once satisfied with the model's performance, deploy it to make predictions on new, unseen
images from the target domain.
Program:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.optimizers import Adam
# Define constants
img_width, img_height = 224, 224
train_data_dir = 'train'
validation_data_dir = 'validation'
batch_size = 32
num_classes = 2
epochs = 10
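The data-generator definitions are missing from the listing; below is a minimal sketch assuming simple rescaling plus light augmentation on the training side.
# Rescale pixels to [0, 1]; augment only the training data (assumed settings)
train_datagen = ImageDataGenerator(rescale=1.0 / 255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)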
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
# Load pre-trained VGG16 model
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(img_width, img_height, 3))
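The remainder of the listing is missing; below is a minimal sketch of the freeze-and-retrain steps. The new head architecture and learning rate are assumptions.
# Freeze the convolutional base so only the new classification head is trained
for layer in base_model.layers:
    layer.trainable = False

# New classification head on top of the frozen base (assumed architecture)
model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(train_generator, epochs=epochs, validation_data=validation_generator)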
Result:
The above program was executed successfully.
EX.No:8
Aim:
The aim of this exercise is to perform image classification using a pre-trained model from the Keras applications module, adapting it to a new target dataset.
Algorithm:
1. Select Pre-trained Model:
Choose a pre-trained model from the Keras applications module. Popular choices include
VGG16, VGG19, ResNet50, InceptionV3, Xception, and MobileNet.
5. Compile Model:
Compile the model with an appropriate optimizer, loss function, and evaluation metric.
8. Fine-tuning (Optional):
Optionally, unfreeze some of the base layers and continue training the entire model end-to-end. Fine-tuning allows the model to further adapt to the target dataset and potentially improve performance.
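Program:
Only a fragment of this listing survived; the imports below are a sketch mirroring EX.No:7, with ResNet50 as the assumed choice of pre-trained model from the Keras applications module.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import ResNet50
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.optimizers import Adam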
# Define constants
img_width, img_height = 224, 224
train_data_dir = 'train'
validation_data_dir = 'validation'
batch_size = 32
num_classes = 2
epochs = 10
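As in the previous exercise, the generator definitions are missing; a minimal sketch with assumed settings follows.
# Rescale pixels to [0, 1]; augment only the training data (assumed settings)
train_datagen = ImageDataGenerator(rescale=1.0 / 255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)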
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
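The model assembly and training steps are missing; below is a minimal sketch under the same assumptions (ResNet50 base, new dense head).
# Load the pre-trained base, freeze it, and add a new classification head
base_model = ResNet50(weights='imagenet', include_top=False,
                      input_shape=(img_width, img_height, 3))
for layer in base_model.layers:
    layer.trainable = False

model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_generator, epochs=epochs, validation_data=validation_generator)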
Result:
The above program was executed successfully.
EX.No:9
Aim:
The aim of this exercise is to implement a recurrent neural network (RNN) for sentiment analysis, classifying text reviews as positive or negative.
Algorithm:
1. Dataset Preparation:
Use a labeled dataset for sentiment analysis, such as the IMDb movie review dataset, which
consists of movie reviews labeled as positive or negative.
Split the dataset into training, validation, and test sets.
2. Data Preprocessing:
Clean the text data: Remove special characters, punctuation, and unwanted symbols.
Tokenization: Split the text into individual words or tokens.
Convert words to indices: Map each word to a unique integer index.
Padding: Ensure all sequences have the same length by padding shorter sequences with
zeros or truncating longer sequences.
3. Model Architecture:
Define an RNN architecture using libraries like TensorFlow or PyTorch.
Choose the type of RNN cell (e.g., LSTM, GRU).
Stack multiple RNN layers for better performance if needed.
Add a Dense layer with sigmoid activation for binary classification (positive or negative
sentiment).
4. Model Training:
Initialize the RNN model.
Compile the model with appropriate loss function (e.g., binary cross-entropy) and optimizer
(e.g., Adam).
Train the model on the training data.
Monitor the training process using metrics like accuracy and loss on the validation set.
Tune hyperparameters like learning rate, batch size, and number of epochs based on
validation performance.
5. Evaluation:
Evaluate the trained model on the test set to assess its performance.
Calculate metrics such as accuracy, precision, recall, and F1-score.
Visualize the performance metrics and, if necessary, confusion matrices.
6. Inference:
Use the trained model to predict sentiment on new, unseen text data.
Preprocess the new text data similarly to the training data.
Feed the preprocessed data into the trained model for prediction.
7. Experimentation and Improvement:
Experiment with different model architectures, hyperparameters, and preprocessing
techniques to improve performance.
Explore the use of pre-trained word embeddings to enhance the model's understanding of
text semantics.
Consider advanced techniques like attention mechanisms or bidirectional RNNs for better
capturing context.
8. Conclusion:
Summarize the findings of the lab exercise, including the performance of the sentiment
analysis model and any insights gained during experimentation.
Discuss potential future directions for further improvement or research.
Program:
import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
from keras.preprocessing.text import Tokenizer
from sklearn.model_selection import train_test_split
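The body of the listing is missing; below is a minimal sketch of the preprocessing, model, and training steps, using a tiny illustrative corpus. The texts, labels, and all hyperparameters below are assumptions.
# Tiny illustrative corpus (assumed; stands in for the IMDb data)
texts = ["great movie", "loved it", "what a wonderful film", "brilliant acting",
         "terrible film", "waste of time", "boring plot", "awful acting"]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Tokenize the text and pad all sequences to the same length
tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=10)

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)

# Embedding -> LSTM -> sigmoid classifier for binary sentiment
model = Sequential([
    Embedding(input_dim=1000, output_dim=32),
    LSTM(32),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))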
Result:
The above program was executed successfully.
EX.No:10
Aim:
The aim of this lab exercise is to implement an LSTM-based autoencoder using
TensorFlow/Keras for sequence data compression and reconstruction.
Algorithm:
1. Dataset Preparation:
Use a dataset containing sequential data, such as time series or text data.
Split the dataset into training and test sets.
2. Data Preprocessing:
Normalize or scale the data if necessary.
Convert the sequential data into fixed-length sequences.
Optionally, add noise to the input sequences to improve the robustness of the autoencoder.
3. Model Architecture:
Define an LSTM-based autoencoder architecture using TensorFlow/Keras.
Create an encoder LSTM layer to compress the input sequence into a fixed-length latent
representation.
Create a decoder LSTM layer to reconstruct the input sequence from the latent
representation.
Connect the encoder and decoder layers to create the autoencoder model.
4. Model Training:
Initialize the LSTM autoencoder model.
Compile the model with an appropriate loss function, such as mean squared error (MSE),
and optimizer, such as Adam.
Train the model on the training data.
Monitor the training process and tune hyperparameters as needed.
5. Evaluation:
Evaluate the trained autoencoder model on the test set.
Calculate reconstruction error between the original and reconstructed sequences.
Visualize the reconstructed sequences to assess the quality of reconstruction.
6. Application:
Use the trained autoencoder model for tasks such as sequence denoising or anomaly
detection.
Apply the encoder part of the model to compress sequences into a lower-dimensional
latent space for downstream tasks.
7. Experimentation and Improvement:
Experiment with different architectures, such as adding more LSTM layers or using
bidirectional LSTMs, to improve reconstruction performance.
Explore different loss functions and regularization techniques to enhance the stability and
generalization of the model.
Consider incorporating attention mechanisms or other advanced techniques to improve the
autoencoder's ability to capture long-range dependencies.
8. Conclusion:
Summarize the findings of the lab exercise, including the performance of the LSTM-based
autoencoder and any insights gained during experimentation.
Discuss potential applications and future research directions for further exploration.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed
from tensorflow.keras.models import Model
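The data-preparation lines are missing; below is a minimal sketch assuming a single random sequence of 10 timesteps with one feature, consistent with the reconstructed output shown further down.
# One random sequence: shape (samples=1, timesteps=10, features=1)
data = np.random.rand(1, 10, 1).astype('float32')
latent_dim = 4  # size of the compressed latent representation (assumed)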
# Encoder
inputs = Input(shape=(data.shape[1], data.shape[2]))
encoded = LSTM(latent_dim)(inputs)
# Decoder
decoded = RepeatVector(data.shape[1])(encoded)
decoded = LSTM(data.shape[2], return_sequences=True)(decoded)
# Autoencoder
autoencoder = Model(inputs, decoded)
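The compilation, training, and reconstruction steps are missing; a minimal sketch follows (loss, epoch count, and print format are assumptions).
# Train the autoencoder to reproduce its own input
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(data, data, epochs=300, verbose=0)

# Reconstruct the sequence from the latent representation
reconstructed = autoencoder.predict(data)
print("Reconstructed Data:")
print(reconstructed[0])
Output: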
Reconstructed Data:
[[0.68393844]
[0.7702495 ]
[0.15474795]
[0.7850587 ]
[0.72745067]
[0.1947125 ]
[0.1536809 ]
[0.58117414]
[0.15412793]
[0.7389922 ]]
Result:
The above program was executed successfully.
EX.No:11
Aim:
The aim of this lab exercise is to implement a Generative Adversarial Network (GAN) using
TensorFlow/Keras for generating realistic images.
Algorithm:
1. Dataset Preparation:
Choose a dataset suitable for image generation, such as CIFAR-10, CelebA, or MNIST.
Preprocess the dataset, including normalization and resizing, if necessary.
Split the dataset into training and validation sets.
2. Generator Model:
Define the generator model architecture using TensorFlow/Keras.
Start with a simple architecture, such as a series of dense layers followed by upsampling layers
(e.g., transposed convolutions or upsampling layers).
Use activation functions like ReLU or LeakyReLU for intermediate layers and tanh for the
output layer to ensure pixel values are in the range [-1, 1].
3. Discriminator Model:
Define the discriminator model architecture using TensorFlow/Keras.
Start with a convolutional neural network (CNN) architecture to classify real and generated
images.
Use activation functions like LeakyReLU and sigmoid for the output layer to produce a
probability score indicating the likelihood of the input image being real.
4. GAN Model:
Combine the generator and discriminator models to form the GAN model.
Freeze the discriminator weights during GAN training to prevent the generator from
overpowering the discriminator too early.
Compile the GAN model with appropriate loss functions (e.g., binary cross-entropy) and
optimizer (e.g., Adam).
5. Training:
Train the GAN model iteratively in alternating steps:
Train the discriminator using batches of real and fake images, adjusting its weights to better
distinguish between real and generated images.
Train the generator by generating fake images and trying to fool the discriminator into
classifying them as real.
Monitor the training process and adjust hyperparameters such as learning rate and batch size as
needed.
6. Evaluation:
Evaluate the trained GAN model on the validation set to assess the quality of generated images.
Visualize generated images and compare them with real images to evaluate realism and
diversity.
Calculate metrics like Inception Score or Frechet Inception Distance (FID) to quantitatively
evaluate the quality of generated images.
7. Fine-tuning and Optimization:
Experiment with different architectures and hyperparameters to improve the quality of generated
images.
Consider techniques like progressive growing, spectral normalization, or feature matching to
stabilize training and improve image quality.
8. Conclusion:
Summarize the findings of the lab exercise, including the performance of the GAN model and
any insights gained during experimentation.
Discuss potential applications of GANs in image generation and future research directions for
further exploration.
Program:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, LeakyReLU, BatchNormalization, Reshape, Flatten, Input
from tensorflow.keras.optimizers import Adam
# Generator
def build_generator(latent_dim):
    model = Sequential()
    model.add(Dense(128, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(np.prod((28, 28, 1)), activation='tanh'))
    model.add(Reshape((28, 28, 1)))
    return model
# Discriminator
def build_discriminator():
    model = Sequential()
    model.add(Flatten(input_shape=(28, 28, 1)))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))
    return model
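The GAN assembly, data loading, and the discriminator-training half of the loop are missing; below is a minimal sketch assuming standard GAN training on MNIST. All hyperparameter values are assumptions.
# Hyperparameters (assumed values)
latent_dim = 100
batch_size = 64
epochs = 10000
sample_interval = 1000

# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(optimizer=Adam(0.0002, 0.5),
                      loss='binary_crossentropy', metrics=['accuracy'])

# Combined model: noise -> generator -> (frozen) discriminator
generator = build_generator(latent_dim)
discriminator.trainable = False
z = Input(shape=(latent_dim,))
combined = Model(z, discriminator(generator(z)))
combined.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy')

# Load MNIST and scale to [-1, 1] to match the generator's tanh output
(X_train, _), (_, _) = mnist.load_data()
X_train = X_train.astype('float32') / 127.5 - 1.0
X_train = np.expand_dims(X_train, axis=-1)

for epoch in range(epochs):
    # Train Discriminator on half real, half generated images
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    real_imgs = X_train[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    gen_imgs = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_imgs, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(gen_imgs, np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)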
    # Train Generator: push the discriminator to label fakes as real
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

    # Print progress
    if epoch % sample_interval == 0:
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]"
              % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

# Rescale the last batch of generated images from [-1, 1] to [0, 1]
gen_imgs = 0.5 * gen_imgs + 0.5
Result:
The above program was executed successfully.