NN & DL Lab Manual

The document provides step-by-step instructions for implementing various machine learning models using TensorFlow and Keras, including vector addition, regression models, perceptrons, feed-forward networks, and CNNs. Each section outlines the aim, procedure, and code necessary to create and train the models, along with expected outputs. It emphasizes the use of TensorFlow 1.x for vector addition and provides examples for data normalization, model compilation, and evaluation.


1. Implement simple vector addition in TensorFlow

Aim:
Write a Python program to implement simple vector addition in TensorFlow.

Procedure:

1. Define Two Vectors: Create two vectors as constants, e.g. tf.constant([1, 2, 3]).
2. Add the Vectors: Use the tf.add operation to add the vectors.
3. Create a TensorFlow Session: Run the computation inside a session to obtain the result.
Code:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # Disable TensorFlow 2.x behavior and enable TensorFlow 1.x behavior

# Define two constant vectors
vector1 = tf.constant([1, 2, 3], dtype=tf.float32)
vector2 = tf.constant([4, 5, 6], dtype=tf.float32)

# Add the two vectors
result = tf.add(vector1, vector2)

# To run the computation and get the result, start a session
with tf.Session() as sess:
    output = sess.run(result)
    print('The result of the vector addition is:', output)

output:

The result of the vector addition is: [5. 7. 9.]
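
For reference, the same computation in native TensorFlow 2.x needs no session, because eager execution evaluates operations immediately. A minimal sketch (run as a separate script, without the v1 compatibility shim above):

import tensorflow as tf

# Eager execution (the TF 2.x default) runs the op and returns the value directly
vector1 = tf.constant([1, 2, 3], dtype=tf.float32)
vector2 = tf.constant([4, 5, 6], dtype=tf.float32)
result = tf.add(vector1, vector2)  # equivalently: vector1 + vector2
print('The result of the vector addition is:', result.numpy())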


2. Implement a regression model in Keras

Aim:
Write a Python program to implement a regression model in Keras.

Procedure:

1. Import all the modules required for Keras regression.
2. Load the data and process it as the model requires.
3. Normalize the input features, e.g. with the tf.keras.layers.Normalization preprocessing layer (a sketch of this step follows the list; the main code below trains on raw inputs).
4. Apply a linear transformation (y = mx + b) to produce 1 output using a linear layer (tf.keras.layers.Dense).
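
A minimal sketch of the normalization step, assuming the tf.keras API (the Normalization layer learns the feature means and variances from the data via adapt):

import numpy as np
import tensorflow as tf

X = np.random.rand(100, 1).astype('float32')  # example feature matrix

normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(X)  # learn the mean and variance of each feature

linear_model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Dense(units=1)  # linear layer: y = m*x + b
])
linear_model.compile(optimizer='adam', loss='mean_squared_error')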

Code:

# Import necessary libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Generate some random data for training
np.random.seed(0)
X_train = np.random.rand(100, 1)
y_train = 3 * X_train + np.random.randn(100, 1) * 0.33

# Create a Sequential model
model = Sequential()

# Add a Dense layer with 1 neuron; input_shape should match the features of X
model.add(Dense(1, input_shape=(1,)))

# Compile the model with mean squared error loss and an optimizer
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(X_train, y_train, epochs=10)

# Generate some test data
X_test = np.array([[0.2], [0.5], [0.9]])
y_test = model.predict(X_test)

# Output the predictions
print("Predictions for the test data:")
for i, prediction in enumerate(y_test):
    print(f"Input: {X_test[i][0]}, Predicted Output: {prediction[0]}")
output:
Epoch 1/10
4/4 [==============================] - 1s 42ms/step - loss: 1.4122
Epoch 2/10
4/4 [==============================] - 0s 0s/step - loss: 1.2300
Epoch 3/10
4/4 [==============================] - 0s 5ms/step - loss: 1.2704
Epoch 4/10
4/4 [==============================] - 0s 0s/step - loss: 1.3621
Epoch 5/10
4/4 [==============================] - 0s 0s/step - loss: 1.2021
Epoch 6/10
4/4 [==============================] - 0s 0s/step - loss: 1.2980
Epoch 7/10
4/4 [==============================] - 0s 0s/step - loss: 1.2123
Epoch 8/10
4/4 [==============================] - 0s 0s/step - loss: 1.1230
Epoch 9/10
4/4 [==============================] - 0s 0s/step - loss: 1.2112
Epoch 10/10
4/4 [==============================] - 0s 5ms/step - loss: 1.1665
1/1 [==============================] - 0s 16ms/step
Predictions for the test data:
Input: 0.2, Predicted Output: 0.27151909470558167
Input: 0.5, Predicted Output: 0.6200444102287292
Input: 0.9, Predicted Output: 1.0847446918487549
3. Implement a perceptron in a TensorFlow/Keras Environment

Aim:
Write a Python program to implement a perceptron in a TensorFlow/Keras environment.

Procedure:

• Step 1: Import the required libraries: TensorFlow/Keras for building the network and NumPy for numerical operations.
• Step 2: Define the model architecture: a single-layer perceptron, i.e. one Dense neuron with 2 inputs and a sigmoid activation function.
• Step 3: Compile the model with an optimizer, a binary cross-entropy loss function, and an accuracy metric to monitor.
• Step 4: Prepare the input data and labels for the XOR logic function.
• Step 5: Train the model on the training data.
• Step 6: Evaluate the model and make predictions.

Code:

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Define the perceptron model
model = Sequential()
model.add(Dense(1, input_dim=2, activation='sigmoid'))  # Perceptron with 2 inputs

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Input data - XOR logic
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
# Labels
y = np.array([[0], [1], [1], [0]])

# Train the model
model.fit(X, y, epochs=1000, verbose=0)

# Evaluate the model
loss, accuracy = model.evaluate(X, y)
print(f'Loss: {loss}, Accuracy: {accuracy}')

# Make predictions
predictions = model.predict(X)
print('Predictions:')
for i, prediction in enumerate(predictions):
    print(f'{X[i]} -> {prediction[0]:.4f}')

output:

1/1 [==============================] - 0s 78ms/step - accuracy: 0.7500 - loss: 0.7374

Loss: 0.7374284267425537, Accuracy: 0.75

1/1 [==============================] - 0s 31ms/step

Predictions:
[0 0] -> 0.4754
[0 1] -> 0.3303
[1 0] -> 0.3564
[1 1] -> 0.2315
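
Note that XOR is not linearly separable, so a single perceptron cannot classify all four points correctly; the plateaued accuracy above reflects this limit rather than a training problem. Adding one hidden layer gives the network enough capacity. A minimal sketch using the same API (layer sizes are illustrative):

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# A hidden layer lets the network represent the non-linear XOR boundary
mlp = Sequential()
mlp.add(Dense(8, input_dim=2, activation='relu'))
mlp.add(Dense(1, activation='sigmoid'))
mlp.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
mlp.fit(X, y, epochs=2000, verbose=0)
print(mlp.predict(X).round().flatten())  # typically converges to [0. 1. 1. 0.]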


4. Implement a Feed-Forward Network in TensorFlow/Keras

Aim:
Write a Python program to implement a Feed-Forward Network in TensorFlow/Keras.
Procedure:

• Import Necessary Libraries: First, import the required libraries. We'll need numpy, tensorflow, keras, and other relevant packages.
• Load the MNIST Dataset: Load the MNIST dataset, which consists of 70,000 data points (28x28 images of handwritten digits). Normalize the pixel values to be between 0 and 1.
• Build the Neural Network Architecture: Create a feed-forward neural network using the Sequential class. For this example, use a single hidden layer with 128 neurons and a sigmoid activation function.
• Compile the Model: Compile the model by specifying the optimizer, loss function, and metrics.
• Train the Model: Train the model on the training data.
• Evaluate the Model: Evaluate the model's performance on the test data.
• Make Predictions: Use the trained model to make predictions on new data.
(The code below defines a generic feed-forward skeleton; a sketch of the MNIST variant follows this section's output.)
Code:

from tensorflow import keras

# Replace these values with appropriate values for your problem
input_size = 100   # Number of input features
hidden_units = 32  # Number of units in the hidden layer
output_units = 1   # Number of output units (e.g., 1 for binary classification)

# Define the architecture of the neural network
model = keras.Sequential([
    # Input layer (specify the input shape for the first layer)
    keras.layers.Input(shape=(input_size,)),

    # Hidden layer with sigmoid activation
    keras.layers.Dense(units=hidden_units, activation='sigmoid'),

    # Output layer
    keras.layers.Dense(units=output_units, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Print the model summary
model.summary()

output:

Model: "sequential"
┌─────────────────────────────────┬────────────────────────┬───────────────┐
│ Layer (type)                    │ Output Shape           │       Param # │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 32)             │         3,232 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 1)              │            33 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 3,265 (12.75 KB)
Trainable params: 3,265 (12.75 KB)
Non-trainable params: 0 (0.00 B)
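
The procedure above describes the MNIST variant of this experiment, while the code shows a generic skeleton; a minimal sketch of the MNIST version, assuming the standard keras.datasets API (a 10-class softmax output replaces the single sigmoid unit):

from tensorflow import keras

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

mnist_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),     # 28x28 images -> 784 features
    keras.layers.Dense(128, activation='sigmoid'),  # single hidden layer per the procedure
    keras.layers.Dense(10, activation='softmax')    # one output per digit class
])
mnist_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
mnist_model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print('Test accuracy:', mnist_model.evaluate(x_test, y_test, verbose=0)[1])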
5. Implement an Image Classifier using CNN in TensorFlow/Keras

Aim:
Write a Python program to implement an Image Classifier using CNN in TensorFlow/Keras.

Procedure:

• Import Libraries: Begin by importing the necessary libraries.
• Load and Prepare the Dataset: Use a dataset like CIFAR-10, which is readily available in Keras.
• Verify the Data: It's good practice to visualize some of the data.
• Create the Convolutional Base: Define your CNN architecture using Conv2D and MaxPooling2D layers.
• Add Dense Layers: After the convolutional base, add dense layers for classification.
• Compile the Model: Choose an optimizer, loss function, and metrics.
• Train the Model: Fit the model to the training data.
• Evaluate the Model: Finally, evaluate the model's performance on the test dataset.
Code:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Download and prepare the CIFAR10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Verify the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

# Create the convolutional base
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

# Add Dense layers on top
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

# Compile and train the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 conv2d (Conv2D)                  (None, 30, 30, 32)        896

 max_pooling2d (MaxPooling2D)     (None, 15, 15, 32)        0

 conv2d_1 (Conv2D)                (None, 13, 13, 64)        18496

 max_pooling2d_1 (MaxPooling2D)   (None, 6, 6, 64)          0

 conv2d_2 (Conv2D)                (None, 4, 4, 64)          36928

=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)

Epoch 1/10
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1698386490.372362 489369 device_compiler.h:186] Compiled cluster using
XLA! This line is logged at most once for the lifetime of the process.
1563/1563 [==============================] - 10s 5ms/step - loss: 1.5211 -
accuracy: 0.4429 - val_loss: 1.2497 - val_accuracy: 0.5531
Epoch 2/10
1563/1563 [==============================] - 6s 4ms/step - loss: 1.1408 -
accuracy: 0.5974 - val_loss: 1.1474 - val_accuracy: 0.6023
Epoch 3/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.9862 -
accuracy: 0.6538 - val_loss: 0.9759 - val_accuracy: 0.6582
Epoch 4/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8929 -
accuracy: 0.6879 - val_loss: 0.9412 - val_accuracy: 0.6702
Epoch 5/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8183 -
accuracy: 0.7131 - val_loss: 0.8830 - val_accuracy: 0.6967
Epoch 6/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7588 -
accuracy: 0.7334 - val_loss: 0.8671 - val_accuracy: 0.7039
Epoch 7/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7126 -
accuracy: 0.7518 - val_loss: 0.8972 - val_accuracy: 0.6897
Epoch 8/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.6655 -
accuracy: 0.7661 - val_loss: 0.8412 - val_accuracy: 0.7111
Epoch 9/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.6205 -
accuracy: 0.7851 - val_loss: 0.8581 - val_accuracy: 0.7109
Epoch 10/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.5872 -
accuracy: 0.7937 - val_loss: 0.8817 - val_accuracy: 0.7113
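
To classify a single image with the trained network, the raw logits from the final Dense layer can be wrapped in a softmax; a short sketch, assuming the model, test_images, and class_names defined above:

import numpy as np

# Attach a softmax so the output is a probability distribution over classes
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images[:1])
print('Predicted class:', class_names[np.argmax(predictions[0])])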
6. Improve the Deep Learning model by fine-tuning hyperparameters

Aim:
Write a Python program to improve the Deep Learning model by fine-tuning hyperparameters.

Procedure:

1. Select the Right Model: Begin with a model that suits your problem type, whether it's for classification, regression, or something else.
2. Review and Build the Hyperparameter Space: Identify all the hyperparameters that can be tuned, such as learning rate, batch size, number of epochs, and layer configurations. Create a range or a set of values for each hyperparameter to explore.
3. Choose a Search Strategy: Decide on a method for searching the hyperparameter space (a random-search sketch follows this section's output). Common strategies include:
   o Grid Search: Systematically explore a grid of hyperparameters.
   o Random Search: Randomly sample the hyperparameter space.
   o Bayesian Optimization: Use probability to find the best hyperparameters more efficiently.
4. Cross-Validation: Implement cross-validation to assess the model's performance. This helps avoid overfitting and ensures that the model generalizes well to unseen data.
5. Evaluate the Model: After training, evaluate the model using a separate test set. Look at performance metrics relevant to your problem, such as accuracy, precision, recall, or F1 score.
6. Iterate: Based on the evaluation, iterate over the steps, refining the hyperparameter space or trying different search strategies as needed.
7. Finalize: Once you've found the optimal hyperparameters, retrain your model on the full dataset and finalize your model.

Code:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

# Function to create model, required for KerasClassifier
def create_model(optimizer='adam', activation='relu'):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation=activation))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

# X_train and Y_train are assumed to be defined; as a placeholder,
# synthetic data with the expected 8 input features can be used:
X_train = np.random.rand(100, 8)
Y_train = np.random.randint(2, size=(100, 1))

# create model
model = KerasClassifier(build_fn=create_model, verbose=0)

# define the grid search parameters
param_grid = {
    'batch_size': [10, 20, 40, 60, 80, 100],
    'epochs': [10, 50, 100],
    'optimizer': ['SGD', 'Adam', 'RMSprop'],
    'activation': ['relu', 'tanh', 'sigmoid']
}

# Create Grid Search
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X_train, Y_train)

# Summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
output:
Epoch 1/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.5055 -
accuracy: 0.8205 - val_loss: 0.4009 - val_accuracy: 0.8582
Epoch 2/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3772 -
accuracy: 0.8628 - val_loss: 0.3637 - val_accuracy: 0.8685
Epoch 3/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3366 -
accuracy: 0.8766 - val_loss: 0.3698 - val_accuracy: 0.8662
Epoch 4/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3110 -
accuracy: 0.8858 - val_loss: 0.3599 - val_accuracy: 0.8703
Epoch 5/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2924 -
accuracy: 0.8906 - val_loss: 0.3289 - val_accuracy: 0.8818
Epoch 6/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2768 -
accuracy: 0.8958 - val_loss: 0.3491 - val_accuracy: 0.8743
Epoch 7/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2622 -
accuracy: 0.9022 - val_loss: 0.3127 - val_accuracy: 0.8866
Epoch 8/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2512 -
accuracy: 0.9067 - val_loss: 0.3378 - val_accuracy: 0.8822
Epoch 9/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2412 -
accuracy: 0.9104 - val_loss: 0.3282 - val_accuracy: 0.8848
Epoch 10/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2294 -
accuracy: 0.9143 - val_loss: 0.3398 - val_accuracy: 0.8838
Epoch 11/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2217 -
accuracy: 0.9166 - val_loss: 0.3158 - val_accuracy: 0.8897
Epoch 12/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2124 -
accuracy: 0.9197 - val_loss: 0.3443 - val_accuracy: 0.8858
Epoch 13/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2051 -
accuracy: 0.9226 - val_loss: 0.3649 - val_accuracy: 0.8854
Epoch 14/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1986 -
accuracy: 0.9254 - val_loss: 0.3195 - val_accuracy: 0.8901
Epoch 15/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1908 -
accuracy: 0.9287 - val_loss: 0.3173 - val_accuracy: 0.8971
Epoch 16/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1823 -
accuracy: 0.9306 - val_loss: 0.3480 - val_accuracy: 0.8911
Epoch 17/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1803 -
accuracy: 0.9314 - val_loss: 0.3258 - val_accuracy: 0.8929
Epoch 18/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1721 -
accuracy: 0.9370 - val_loss: 0.3331 - val_accuracy: 0.8950
Epoch 19/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1676 -
accuracy: 0.9383 - val_loss: 0.3331 - val_accuracy: 0.8962
Epoch 20/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1634 -
accuracy: 0.9382 - val_loss: 0.3432 - val_accuracy: 0.8932
Epoch 21/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1566 -
accuracy: 0.9405 - val_loss: 0.3597 - val_accuracy: 0.8873
Epoch 22/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1538 -
accuracy: 0.9412 - val_loss: 0.3446 - val_accuracy: 0.8933
Epoch 23/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1493 -
accuracy: 0.9435 - val_loss: 0.3677 - val_accuracy: 0.8888
Epoch 24/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1459 -
accuracy: 0.9454 - val_loss: 0.3472 - val_accuracy: 0.8961
Epoch 25/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1400 -
accuracy: 0.9469 - val_loss: 0.3984 - val_accuracy: 0.8827
Epoch 26/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1374 -
accuracy: 0.9484 - val_loss: 0.3767 - val_accuracy: 0.8931
Epoch 27/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1323 -
accuracy: 0.9491 - val_loss: 0.3849 - val_accuracy: 0.8909
Epoch 28/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1312 -
accuracy: 0.9511 - val_loss: 0.3897 - val_accuracy: 0.8903
Epoch 29/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1242 -
accuracy: 0.9533 - val_loss: 0.4042 - val_accuracy: 0.8907
Epoch 30/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1238 -
accuracy: 0.9533 - val_loss: 0.3784 - val_accuracy: 0.8934
Epoch 31/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1176 -
accuracy: 0.9554 - val_loss: 0.4152 - val_accuracy: 0.8940
Epoch 32/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1152 -
accuracy: 0.9570 - val_loss: 0.4081 - val_accuracy: 0.8886
Epoch 33/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1123 -
accuracy: 0.9578 - val_loss: 0.4372 - val_accuracy: 0.8856
Epoch 34/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1120 -
accuracy: 0.9582 - val_loss: 0.4068 - val_accuracy: 0.8937
Epoch 35/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1073 -
accuracy: 0.9607 - val_loss: 0.4246 - val_accuracy: 0.8943
Epoch 36/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1040 -
accuracy: 0.9606 - val_loss: 0.4211 - val_accuracy: 0.8934
Epoch 37/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1034 -
accuracy: 0.9613 - val_loss: 0.4291 - val_accuracy: 0.8933
Epoch 38/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0991 -
accuracy: 0.9627 - val_loss: 0.4504 - val_accuracy: 0.8942
Epoch 39/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0977 -
accuracy: 0.9635 - val_loss: 0.4331 - val_accuracy: 0.8950
Epoch 40/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0948 -
accuracy: 0.9653 - val_loss: 0.4429 - val_accuracy: 0.8944
Epoch 41/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0939 -
accuracy: 0.9643 - val_loss: 0.4727 - val_accuracy: 0.8888
Epoch 42/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0937 -
accuracy: 0.9650 - val_loss: 0.4521 - val_accuracy: 0.8969
Epoch 43/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0888 -
accuracy: 0.9673 - val_loss: 0.4801 - val_accuracy: 0.8908
Epoch 44/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0880 -
accuracy: 0.9678 - val_loss: 0.4582 - val_accuracy: 0.8973
Epoch 45/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0878 -
accuracy: 0.9668 - val_loss: 0.5006 - val_accuracy: 0.8920
Epoch 46/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0862 -
accuracy: 0.9678 - val_loss: 0.4547 - val_accuracy: 0.8942
Epoch 47/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0836 -
accuracy: 0.9680 - val_loss: 0.5050 - val_accuracy: 0.8908
Epoch 48/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0808 -
accuracy: 0.9692 - val_loss: 0.4956 - val_accuracy: 0.8954
Epoch 49/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0803 -
accuracy: 0.9696 - val_loss: 0.5260 - val_accuracy: 0.8928
Epoch 50/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0761 -
accuracy: 0.9716 - val_loss: 0.5449 - val_accuracy: 0.8914
Best epoch: 44
Epoch 1/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.5087 -
accuracy: 0.8195 - val_loss: 0.4183 - val_accuracy: 0.8519
Epoch 2/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3767 -
accuracy: 0.8639 - val_loss: 0.3740 - val_accuracy: 0.8653
Epoch 3/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3355 -
accuracy: 0.8771 - val_loss: 0.3642 - val_accuracy: 0.8691
Epoch 4/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3109 -
accuracy: 0.8860 - val_loss: 0.3444 - val_accuracy: 0.8782
Epoch 5/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2908 -
accuracy: 0.8918 - val_loss: 0.3312 - val_accuracy: 0.8801
Epoch 6/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2757 -
accuracy: 0.8969 - val_loss: 0.3437 - val_accuracy: 0.8782
Epoch 7/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2617 -
accuracy: 0.9030 - val_loss: 0.3414 - val_accuracy: 0.8788
Epoch 8/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2504 -
accuracy: 0.9062 - val_loss: 0.3221 - val_accuracy: 0.8827
Epoch 9/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2389 -
accuracy: 0.9105 - val_loss: 0.3210 - val_accuracy: 0.8858
Epoch 10/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2310 -
accuracy: 0.9140 - val_loss: 0.3371 - val_accuracy: 0.8807
Epoch 11/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2208 -
accuracy: 0.9172 - val_loss: 0.3135 - val_accuracy: 0.8898
Epoch 12/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2143 -
accuracy: 0.9191 - val_loss: 0.3253 - val_accuracy: 0.8863
Epoch 13/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2049 – ...

313/313 [==============================] - 1s 2ms/step - loss: 0.5223 - accuracy: 0.8872
[test loss, test accuracy]: [0.5223038792610168, 0.8871999979019165]
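
The grid above trains 6 x 3 x 3 x 3 = 162 configurations per cross-validation fold, which is slow; the random-search strategy from step 3 of the procedure samples the same space more cheaply. A minimal sketch, assuming the model and param_grid defined in the code above:

from sklearn.model_selection import RandomizedSearchCV

# Sample 10 random configurations instead of exhaustively trying all 162
random_search = RandomizedSearchCV(estimator=model, param_distributions=param_grid,
                                   n_iter=10, cv=3, n_jobs=-1, random_state=0)
random_result = random_search.fit(X_train, Y_train)
print("Best: %f using %s" % (random_result.best_score_, random_result.best_params_))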
7. Implement a Transfer Learning concept in Image Classification

Aim:
Write a Python program to implement a Transfer Learning concept in Image Classification.

Procedure:
1. Select a Pre-trained Model: Choose a pre-trained
model that has been trained on a large and diverse
dataset. Common models include VGG16, ResNet,
Inception, etc.
2. Prepare Your Dataset: Organize your dataset into
a structure suitable for training and validation.
Ensure you have labeled images for each category
you want to classify.
3. Preprocess the Data: Apply necessary
preprocessing to your images to match the input
requirements of the pre-trained model. This may
include resizing images, normalizing pixel values,
and applying data augmentation techniques.
4. Feature Extraction: Use the pre-trained model to
extract features from your dataset. This is done by
removing the top layer (output layer) of the pre-
trained model and passing your images through the
rest of the network.
5. Add a Classification Head: Attach new layers to
the pre-trained model that will serve as your
classifier. These layers will be trained on your
specific dataset.
6. Train the Model: Train the new layers on your
dataset while keeping the pre-trained model’s
layers frozen. This allows the model to learn from
the new data without altering the learned features.
7. Fine-Tuning (Optional): Once the new layers have been trained, you can choose to unfreeze some of the top layers of the pre-trained model and continue training (a sketch follows this section's output). This allows the model to adjust its higher-order feature representations to better suit your specific task.
8. Evaluate the Model: Test the model’s
performance on a separate validation or test set to
ensure it generalizes well to new data.
9. Prediction: Use the trained model to classify new
images.

Code:

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Placeholder hyperparameters (assumed values; adjust to your dataset)
num_classes = 10
epochs = 5
batch_size = 32

# Load the pre-trained VGG16 model
# (a fixed input_shape is needed so the Flatten layer knows its input size)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the layers of the base model
for layer in base_model.layers:
    layer.trainable = False

# Create the classification head
x = Flatten()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

# Combine the base model and the classification head
model = Model(inputs=base_model.input, outputs=predictions)

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model (train_data, train_labels, val_data, val_labels,
# test_data, test_labels, and new_data must be supplied by you)
model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size,
          validation_data=(val_data, val_labels))

# Evaluate the model
model.evaluate(test_data, test_labels)

# Predict with the model
model.predict(new_data)
output:

Compile the model
Train the model
Evaluate the model
Predict with the model
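
Step 7 of the procedure (optional fine-tuning) is not shown in the code above; a minimal sketch, assuming the base_model and model from this section, with a small learning rate so the pre-trained weights are only nudged:

# Unfreeze the last few layers of the VGG16 base (the count is illustrative)
for layer in base_model.layers[-4:]:
    layer.trainable = True

# Re-compile with a low learning rate so updates stay small
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size,
          validation_data=(val_data, val_labels))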
8. Using a pre-trained model in Keras for Transfer Learning

Aim:
Write a Python program for using a pre-trained model in Keras for Transfer Learning.

Procedure:

# Load the VGG16 model, pre-trained on ImageNet data
# Add a global spatial average pooling layer
# Add a fully-connected layer
# Add a logistic layer for classification
# Let's say we have 10 classes
# This is the model we will train
# First, we freeze all layers of the base model
# Compile the model
# Train the model on the new data
# Replace 'train_data' and 'train_labels' with your data and labels
Code:

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

base_model = VGG16(weights='imagenet', include_top=False)

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

model.fit(train_data, train_labels, ...)
output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Trai… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━┩
│ input_layer_4 (InputLayer) │ (None, 150, 150, 3) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ rescaling (Rescaling) │ (None, 150, 150, 3) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ xception (Functional) │ (None, 5, 5, 2048) │ 20,861… │ Y │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ global_average_pooling2d │ (None, 2048) │ 0 │ - │
│ (GlobalAveragePooling2D) │ │ │ │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ dropout (Dropout) │ (None, 2048) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ dense_7 (Dense) │ (None, 1) │ 2,049 │ Y │
└─────────────────────────────┴──────────────────────────┴─────────┴───────┘
Total params: 20,867,629 (79.60 MB)
Trainable params: 20,809,001 (79.38 MB)
Non-trainable params: 54,528 (213.00 KB)
Optimizer params: 4,100 (16.02 KB)
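
The last procedure comment says to replace train_data and train_labels with your own data; a minimal sketch of building a training dataset from an image directory, assuming a recent TensorFlow and a hypothetical folder layout data/train/<class_name>/*.jpg:

import tensorflow as tf

# One-hot labels ('categorical') match the categorical_crossentropy loss above
train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/train',            # hypothetical path
    image_size=(224, 224),   # VGG16's native input size
    batch_size=32,
    label_mode='categorical'
)
# model.fit accepts a tf.data.Dataset directly, in place of (train_data, train_labels)
model.fit(train_ds, epochs=5)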
9. Perform Sentiment Analysis using RNN

Aim:
Write a Python program to perform Sentiment Analysis using RNN.

Procedure:

# Set the number of words to consider as features
# Cut texts after this number of words (among the top max_features most common words)
# Load data
# Pad sequences
# Build the model
# Compile the model
# Train the model

Code:

import tensorflow as tf
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.datasets import imdb

max_features = 10000  # Number of words to consider as features
maxlen = 500          # Cut texts after this number of words (among the top max_features most common words)
batch_size = 32

# Load data
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)

# Build the model
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])

# Train the model
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
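
After training, the model can be scored on the held-out test split; a short sketch using the variables defined above:

# Evaluate on the padded test sequences
test_loss, test_acc = model.evaluate(input_test, y_test)
print(f'Test loss: {test_loss:.4f}, Test accuracy: {test_acc:.4f}')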
output:
10. Implement an LSTM-based Autoencoder in TensorFlow/Keras

Aim:
Write a Python program to implement an LSTM-based Autoencoder in TensorFlow/Keras.

Procedure:

# Define the number of features, timesteps, and latent dimensions
# Define the input layer
# Define the LSTM encoder
# Repeat the encoded output to match the number of timesteps
# Define the LSTM decoder
# Define the autoencoder model
# Compile the model
# Summary of the autoencoder model

Code:

from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector

timesteps = 10  # Example number of timesteps
input_dim = 5   # Example number of features
latent_dim = 3  # Example number of latent dimensions

# Encoder: compress each sequence into a single latent vector
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)

# Repeat the latent vector so the decoder sees one copy per timestep
repeated = RepeatVector(timesteps)(encoded)

# Decoder: reconstruct the sequence from the latent representation
decoded = LSTM(input_dim, return_sequences=True)(repeated)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
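
The code above only builds and summarizes the model; producing a training log like the recorded output below also requires fitting it. A minimal sketch on synthetic sequences (an assumption, since the manual does not specify a dataset; an autoencoder reconstructs its input, so the inputs double as targets):

import numpy as np

# Synthetic data: 1000 samples of 10 timesteps x 5 features
X = np.random.rand(1000, timesteps, input_dim).astype('float32')
autoencoder.fit(X, X, epochs=10, batch_size=32, validation_split=0.2)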
output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 2ms/step - loss: 0.0332 -
val_loss: 0.0094
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.0089 -
val_loss: 0.0081
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0079 -
val_loss: 0.0076
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0074 -
val_loss: 0.0073
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0072 -
val_loss: 0.0071
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0070 -
val_loss: 0.0070
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0069 -
val_loss: 0.0069
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0068 -
val_loss: 0.0068
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0068 -
val_loss: 0.0067
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.0067 -
val_loss: 0.0067
11. Image generation using GAN

Aim:
Write a Python program for image generation using GAN.

Procedure:

# Define the generator model
# Define the discriminator model
# Combine generator and discriminator into a GAN model
# Load and preprocess the MNIST dataset
# Alternately train the discriminator on real and fake images, and the generator through the GAN
# Generate and display images with the trained generator
Code:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models

# Define the Generator Model
def build_generator(latent_dim):
    model = models.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(latent_dim,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((7, 7, 256)))
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    return model

# Define the GAN
def build_gan(generator, discriminator):
    discriminator.trainable = False
    model = models.Sequential([generator, discriminator])
    return model
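
# NOTE: the manual's listing calls build_discriminator() below without ever
# defining it; the following is a minimal assumed sketch of a standard DCGAN
# discriminator for 28x28x1 MNIST images, not the manual's original code.
def build_discriminator():
    model = models.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                            input_shape=(28, 28, 1)))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    # Sigmoid output to match the binary_crossentropy loss used below
    model.add(layers.Dense(1, activation='sigmoid'))
    return model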

# Load and preprocess the dataset
(train_images, _), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # Normalize the images to [-1, 1]

# Set the parameters
latent_dim = 100
generator = build_generator(latent_dim)
discriminator = build_discriminator()
# The discriminator is trained directly with train_on_batch below,
# so it needs its own compile step (missing in the original listing)
discriminator.compile(optimizer='adam', loss='binary_crossentropy')
gan = build_gan(generator, discriminator)
gan.compile(optimizer='adam', loss='binary_crossentropy')

# Training loop
epochs = 50
batch_size = 128
num_examples_to_generate = 16
random_vector_for_generation = tf.random.normal([num_examples_to_generate, latent_dim])

for epoch in range(epochs):
    for batch in range(train_images.shape[0] // batch_size):
        # Generate random noise as input to the generator
        noise = tf.random.normal([batch_size, latent_dim])

        # Generate fake images with the generator
        generated_images = generator(noise, training=True)

        # Combine real images with fake images
        real_images = train_images[batch * batch_size: (batch + 1) * batch_size]
        combined_images = tf.concat([generated_images, real_images], axis=0)

        # Labels for generated and real images
        labels = tf.concat([tf.zeros((batch_size, 1)), tf.ones((batch_size, 1))], axis=0)

        # Add random noise to the labels
        labels += 0.05 * tf.random.uniform(tf.shape(labels))

        # Train the discriminator
        discriminator_loss = discriminator.train_on_batch(combined_images, labels)

        # Train the generator
        noise = tf.random.normal([batch_size, latent_dim])
        misleading_labels = tf.ones((batch_size, 1))
        generator_loss = gan.train_on_batch(noise, misleading_labels)

        # Print progress
        print(f'Epoch {epoch + 1}, Batch {batch + 1}, Discriminator Loss: {discriminator_loss}, Generator Loss: {generator_loss}')

# Generate images using the trained generator
generated_images = generator(random_vector_for_generation, training=False)

# Display generated images
plt.figure(figsize=(10, 10))
for i in range(generated_images.shape[0]):
    plt.subplot(4, 4, i + 1)
    plt.imshow(generated_images[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
    plt.axis('off')
plt.show()

output:
