NN & DL Lab Manual
Aim:
Write a Python program to implement simple
vector addition in TensorFlow.
Procedure:
Code:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # disable TensorFlow 2.x behavior and enable TensorFlow 1.x behavior
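# The listing stops after the imports; a minimal sketch of the remaining
# steps (the vector values are illustrative):
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
c = tf.add(a, b)  # element-wise vector addition

with tf.Session() as sess:  # TF1-style session, enabled by disable_v2_behavior()
    print(sess.run(c))      # expected: [5. 7. 9.]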
output:
Aim:
Write a Python program to implement a regression
model in Keras.
Procedure:
Code:
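# The original listing is missing; a minimal sketch that fits a
# single-feature linear regression on synthetic data (y = 3x + 2):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.linspace(0, 1, 100).reshape(-1, 1)
y = 3 * X + 2 + 0.1 * np.random.randn(100, 1)  # noisy linear target

model = keras.Sequential([layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=200, verbose=0)
print(model.predict(np.array([[0.5]])))  # should be close to 3.5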
Aim:
Write a Python program to implement a perceptron
in the TensorFlow/Keras environment.
Procedure:
Code:
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
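# The original listing omits the data and model; a minimal sketch that
# trains a single-neuron perceptron on the logical AND gate (an assumed task):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([0, 0, 0, 1], dtype=np.float32)

# A perceptron is a single Dense unit; sigmoid keeps outputs in [0, 1]
model = Sequential([Dense(1, input_dim=2, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=500, verbose=0)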
# Make predictions
predictions = model.predict(X)
print('Predictions:')
for i, prediction in enumerate(predictions):
    print(f'{X[i]} -> {prediction[0]:.4f}')
output:
Predictions:
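# (The next listing, experiment 4, survives only in part; below is a sketch
# of its omitted start, consistent with the model summary shown after it.
# input_shape=(100,) is inferred from the 3,232-parameter first layer, and
# keras is imported in the previous listing.)
hidden_units, output_units = 32, 1
model = keras.Sequential([
    # Hidden layer
    keras.layers.Dense(units=hidden_units, activation='relu', input_shape=(100,)),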
# Output layer
    keras.layers.Dense(units=output_units, activation='sigmoid')
])
output:
Model: "sequential"
┌─────────────────────────────────┬────────────────────────┬───────────────┐
│ Layer (type)                    │ Output Shape           │ Param #       │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 32)             │ 3,232         │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 1)              │ 33            │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 3,265 (12.75 KB)
Trainable params: 3,265 (12.75 KB)
Non-trainable params: 0 (0.00 B)
5. Implement an Image Classifier using a CNN in
TensorFlow/Keras.
Aim:
Write a Python program to implement an Image
Classifier using a CNN in TensorFlow/Keras.
Procedure:
Code:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
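# A sketch of the omitted data loading and model definition, assumed to
# follow the standard Keras CIFAR-10 CNN tutorial (consistent with the
# conv2d layer and the 1563-step epochs in the output below):
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0  # scale to [0, 1]

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)  # raw logits; softmax applied by the loss below
])
model.summary()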
model.compile(optimizer='adam',  # optimizer assumed; the listing omits the start of this call
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
output:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 30, 30, 32)        896
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
Epoch 1/10
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1698386490.372362 489369 device_compiler.h:186] Compiled cluster using
XLA! This line is logged at most once for the lifetime of the process.
1563/1563 [==============================] - 10s 5ms/step - loss: 1.5211 -
accuracy: 0.4429 - val_loss: 1.2497 - val_accuracy: 0.5531
Epoch 2/10
1563/1563 [==============================] - 6s 4ms/step - loss: 1.1408 -
accuracy: 0.5974 - val_loss: 1.1474 - val_accuracy: 0.6023
Epoch 3/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.9862 -
accuracy: 0.6538 - val_loss: 0.9759 - val_accuracy: 0.6582
Epoch 4/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8929 -
accuracy: 0.6879 - val_loss: 0.9412 - val_accuracy: 0.6702
Epoch 5/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8183 -
accuracy: 0.7131 - val_loss: 0.8830 - val_accuracy: 0.6967
Epoch 6/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7588 -
accuracy: 0.7334 - val_loss: 0.8671 - val_accuracy: 0.7039
Epoch 7/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7126 -
accuracy: 0.7518 - val_loss: 0.8972 - val_accuracy: 0.6897
Epoch 8/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.6655 -
accuracy: 0.7661 - val_loss: 0.8412 - val_accuracy: 0.7111
Epoch 9/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.6205 -
accuracy: 0.7851 - val_loss: 0.8581 - val_accuracy: 0.7109
Epoch 10/10
1563/1563 [==============================] - 7s 4ms/step - loss: 0.5872 -
accuracy: 0.7937 - val_loss: 0.8817 - val_accuracy: 0.7113
6. Improve the Deep Learning model by fine-tuning
hyperparameters.
Aim:
Write a Python program to improve the Deep
Learning model by fine-tuning hyperparameters.
Procedure:
Choose a strategy for searching the space of
hyperparameters:
o Grid Search: Exhaustively evaluate every
combination of a predefined set of hyperparameters.
o Random Search: Randomly sample the
hyperparameter space.
o Bayesian Optimization: Use probability to model
the objective and focus the search on promising
hyperparameter values.
Code:
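# A sketch of the omitted setup, assumed to follow the classic scikit-learn
# grid-search recipe for Keras (the architecture and data here are
# illustrative placeholders):
import numpy as np
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier  # legacy wrapper; newer installs use scikeras

def create_model(optimizer='adam'):
    model = Sequential([Dense(12, input_dim=8, activation='relu'),
                        Dense(1, activation='sigmoid')])
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

X = np.random.rand(100, 8)          # placeholder features
Y = np.random.randint(2, size=100)  # placeholder labels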
# create model
model = KerasClassifier(build_fn=create_model, verbose=0)
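# Define the grid and run the search (the grid values are illustrative):
param_grid = {'batch_size': [10, 20, 40], 'epochs': [10, 50]}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, Y)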
# Summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
output:
Epoch 1/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.5055 -
accuracy: 0.8205 - val_loss: 0.4009 - val_accuracy: 0.8582
Epoch 2/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3772 -
accuracy: 0.8628 - val_loss: 0.3637 - val_accuracy: 0.8685
Epoch 3/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3366 -
accuracy: 0.8766 - val_loss: 0.3698 - val_accuracy: 0.8662
Epoch 4/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3110 -
accuracy: 0.8858 - val_loss: 0.3599 - val_accuracy: 0.8703
Epoch 5/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2924 -
accuracy: 0.8906 - val_loss: 0.3289 - val_accuracy: 0.8818
Epoch 6/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2768 -
accuracy: 0.8958 - val_loss: 0.3491 - val_accuracy: 0.8743
Epoch 7/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2622 -
accuracy: 0.9022 - val_loss: 0.3127 - val_accuracy: 0.8866
Epoch 8/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2512 -
accuracy: 0.9067 - val_loss: 0.3378 - val_accuracy: 0.8822
Epoch 9/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2412 -
accuracy: 0.9104 - val_loss: 0.3282 - val_accuracy: 0.8848
Epoch 10/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2294 -
accuracy: 0.9143 - val_loss: 0.3398 - val_accuracy: 0.8838
Epoch 11/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2217 -
accuracy: 0.9166 - val_loss: 0.3158 - val_accuracy: 0.8897
Epoch 12/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2124 -
accuracy: 0.9197 - val_loss: 0.3443 - val_accuracy: 0.8858
Epoch 13/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2051 -
accuracy: 0.9226 - val_loss: 0.3649 - val_accuracy: 0.8854
Epoch 14/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1986 -
accuracy: 0.9254 - val_loss: 0.3195 - val_accuracy: 0.8901
Epoch 15/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1908 -
accuracy: 0.9287 - val_loss: 0.3173 - val_accuracy: 0.8971
Epoch 16/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1823 -
accuracy: 0.9306 - val_loss: 0.3480 - val_accuracy: 0.8911
Epoch 17/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1803 -
accuracy: 0.9314 - val_loss: 0.3258 - val_accuracy: 0.8929
Epoch 18/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1721 -
accuracy: 0.9370 - val_loss: 0.3331 - val_accuracy: 0.8950
Epoch 19/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1676 -
accuracy: 0.9383 - val_loss: 0.3331 - val_accuracy: 0.8962
Epoch 20/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1634 -
accuracy: 0.9382 - val_loss: 0.3432 - val_accuracy: 0.8932
Epoch 21/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1566 -
accuracy: 0.9405 - val_loss: 0.3597 - val_accuracy: 0.8873
Epoch 22/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1538 -
accuracy: 0.9412 - val_loss: 0.3446 - val_accuracy: 0.8933
Epoch 23/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1493 -
accuracy: 0.9435 - val_loss: 0.3677 - val_accuracy: 0.8888
Epoch 24/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1459 -
accuracy: 0.9454 - val_loss: 0.3472 - val_accuracy: 0.8961
Epoch 25/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1400 -
accuracy: 0.9469 - val_loss: 0.3984 - val_accuracy: 0.8827
Epoch 26/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1374 -
accuracy: 0.9484 - val_loss: 0.3767 - val_accuracy: 0.8931
Epoch 27/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1323 -
accuracy: 0.9491 - val_loss: 0.3849 - val_accuracy: 0.8909
Epoch 28/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1312 -
accuracy: 0.9511 - val_loss: 0.3897 - val_accuracy: 0.8903
Epoch 29/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1242 -
accuracy: 0.9533 - val_loss: 0.4042 - val_accuracy: 0.8907
Epoch 30/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1238 -
accuracy: 0.9533 - val_loss: 0.3784 - val_accuracy: 0.8934
Epoch 31/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1176 -
accuracy: 0.9554 - val_loss: 0.4152 - val_accuracy: 0.8940
Epoch 32/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1152 -
accuracy: 0.9570 - val_loss: 0.4081 - val_accuracy: 0.8886
Epoch 33/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1123 -
accuracy: 0.9578 - val_loss: 0.4372 - val_accuracy: 0.8856
Epoch 34/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1120 -
accuracy: 0.9582 - val_loss: 0.4068 - val_accuracy: 0.8937
Epoch 35/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1073 -
accuracy: 0.9607 - val_loss: 0.4246 - val_accuracy: 0.8943
Epoch 36/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1040 -
accuracy: 0.9606 - val_loss: 0.4211 - val_accuracy: 0.8934
Epoch 37/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.1034 -
accuracy: 0.9613 - val_loss: 0.4291 - val_accuracy: 0.8933
Epoch 38/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0991 -
accuracy: 0.9627 - val_loss: 0.4504 - val_accuracy: 0.8942
Epoch 39/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0977 -
accuracy: 0.9635 - val_loss: 0.4331 - val_accuracy: 0.8950
Epoch 40/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0948 -
accuracy: 0.9653 - val_loss: 0.4429 - val_accuracy: 0.8944
Epoch 41/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0939 -
accuracy: 0.9643 - val_loss: 0.4727 - val_accuracy: 0.8888
Epoch 42/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0937 -
accuracy: 0.9650 - val_loss: 0.4521 - val_accuracy: 0.8969
Epoch 43/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0888 -
accuracy: 0.9673 - val_loss: 0.4801 - val_accuracy: 0.8908
Epoch 44/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0880 -
accuracy: 0.9678 - val_loss: 0.4582 - val_accuracy: 0.8973
Epoch 45/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0878 -
accuracy: 0.9668 - val_loss: 0.5006 - val_accuracy: 0.8920
Epoch 46/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0862 -
accuracy: 0.9678 - val_loss: 0.4547 - val_accuracy: 0.8942
Epoch 47/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.0836 -
accuracy: 0.9680 - val_loss: 0.5050 - val_accuracy: 0.8908
Epoch 48/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0808 -
accuracy: 0.9692 - val_loss: 0.4956 - val_accuracy: 0.8954
Epoch 49/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0803 -
accuracy: 0.9696 - val_loss: 0.5260 - val_accuracy: 0.8928
Epoch 50/50
1500/1500 [==============================] - 4s 3ms/step - loss: 0.0761 -
accuracy: 0.9716 - val_loss: 0.5449 - val_accuracy: 0.8914
Best epoch: 44
Epoch 1/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.5087 -
accuracy: 0.8195 - val_loss: 0.4183 - val_accuracy: 0.8519
Epoch 2/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3767 -
accuracy: 0.8639 - val_loss: 0.3740 - val_accuracy: 0.8653
Epoch 3/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3355 -
accuracy: 0.8771 - val_loss: 0.3642 - val_accuracy: 0.8691
Epoch 4/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3109 -
accuracy: 0.8860 - val_loss: 0.3444 - val_accuracy: 0.8782
Epoch 5/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2908 -
accuracy: 0.8918 - val_loss: 0.3312 - val_accuracy: 0.8801
Epoch 6/44
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2757 -
accuracy: 0.8969 - val_loss: 0.3437 - val_accuracy: 0.8782
Epoch 7/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2617 -
accuracy: 0.9030 - val_loss: 0.3414 - val_accuracy: 0.8788
Epoch 8/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2504 -
accuracy: 0.9062 - val_loss: 0.3221 - val_accuracy: 0.8827
Epoch 9/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2389 -
accuracy: 0.9105 - val_loss: 0.3210 - val_accuracy: 0.8858
Epoch 10/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2310 -
accuracy: 0.9140 - val_loss: 0.3371 - val_accuracy: 0.8807
Epoch 11/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2208 -
accuracy: 0.9172 - val_loss: 0.3135 - val_accuracy: 0.8898
Epoch 12/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2143 -
accuracy: 0.9191 - val_loss: 0.3253 - val_accuracy: 0.8863
Epoch 13/44
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2049 –
7. Implement the Transfer Learning concept in
Image Classification.
Aim:
Write a Python program to implement the Transfer
Learning concept in Image Classification.
Procedure:
1. Select a Pre-trained Model: Choose a pre-trained
model that has been trained on a large and diverse
dataset. Common models include VGG16, ResNet,
Inception, etc.
2. Prepare Your Dataset: Organize your dataset into
a structure suitable for training and validation.
Ensure you have labeled images for each category
you want to classify.
3. Preprocess the Data: Apply necessary
preprocessing to your images to match the input
requirements of the pre-trained model. This may
include resizing images, normalizing pixel values,
and applying data augmentation techniques.
4. Feature Extraction: Use the pre-trained model to
extract features from your dataset. This is done by
removing the top layer (output layer) of the
pre-trained model and passing your images through
the rest of the network.
5. Add a Classification Head: Attach new layers to
the pre-trained model that will serve as your
classifier. These layers will be trained on your
specific dataset.
6. Train the Model: Train the new layers on your
dataset while keeping the pre-trained model’s
layers frozen. This allows the model to learn from
the new data without altering the learned features.
7. Fine-Tuning (Optional): Once the new layers
have been trained, you can choose to unfreeze
some of the top layers of the pre-trained model and
continue training. This allows the model to adjust
its higher-order feature representations to better
suit your specific task.
8. Evaluate the Model: Test the model’s
performance on a separate validation or test set to
ensure it generalizes well to new data.
9. Prediction: Use the trained model to classify new
images.
Code:
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
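# The listing stops after the imports; a minimal sketch of the remaining
# steps in the procedure above (input size and class count are assumed):
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the pre-trained convolutional base

x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
outputs = Dense(10, activation='softmax')(x)  # 10 target classes assumed

model = Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_data, validation_data=val_data, epochs=5)  # data pipeline omitted here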
8. Use a pre-trained model in Keras for Transfer
Learning.
Aim:
Write a Python program to use a pre-trained model
in Keras for Transfer Learning.
Procedure:
Code:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

num_classes = 10  # assumed; the original listing does not show the class count

base_model = VGG16(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
# The original listing omits this step; a softmax head is needed before
# building the Model:
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
output:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Trai… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━┩
│ input_layer_4 (InputLayer) │ (None, 150, 150, 3) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ rescaling (Rescaling) │ (None, 150, 150, 3) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ xception (Functional) │ (None, 5, 5, 2048) │ 20,861… │ Y │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ global_average_pooling2d │ (None, 2048) │ 0 │ - │
│ (GlobalAveragePooling2D) │ │ │ │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ dropout (Dropout) │ (None, 2048) │ 0 │ - │
├─────────────────────────────┼──────────────────────────┼─────────┼───────┤
│ dense_7 (Dense) │ (None, 1) │ 2,049 │ Y │
└─────────────────────────────┴──────────────────────────┴─────────┴───────┘
Total params: 20,867,629 (79.60 MB)
Trainable params: 20,809,001 (79.38 MB)
Non-trainable params: 54,528 (213.00 KB)
Optimizer params: 4,100 (16.02 KB)
9. Perform Sentiment Analysis using an RNN
Aim:
Write a Python program to perform Sentiment
Analysis using an RNN.
Procedure:
Code:
import tensorflow as tf
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.datasets import imdb
max_features = 10000  # number of words to consider as features (the 10,000 most common words)
maxlen = 500
batch_size = 32
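# Load the IMDB reviews, keeping only the top max_features words
# (this step is omitted from the original listing):
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)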
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
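# Train the model (omitted from the original listing); a typical run:
history = model.fit(input_train, y_train, epochs=10,
                    batch_size=batch_size, validation_split=0.2)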
10. Implement an LSTM-based Autoencoder in
TensorFlow/Keras.
Aim:
Write a Python program to implement an
LSTM-based Autoencoder in TensorFlow/Keras.
Procedure:
Code:
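# The listing omits the setup; a minimal sketch with illustrative shapes
# (28x28 matches the MNIST-sized runs suggested by the output below):
from tensorflow.keras.layers import Input, LSTM, RepeatVector
from tensorflow.keras.models import Model

timesteps, input_dim, latent_dim = 28, 28, 16  # assumed illustrative values
inputs = Input(shape=(timesteps, input_dim))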
encoded = LSTM(latent_dim)(inputs)
repeated = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(repeated)
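# Assemble the encoder-decoder pair into a single trainable model
autoencoder = Model(inputs, decoded)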
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 2ms/step - loss: 0.0332 -
val_loss: 0.0094
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.0089 -
val_loss: 0.0081
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0079 -
val_loss: 0.0076
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0074 -
val_loss: 0.0073
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0072 -
val_loss: 0.0071
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0070 -
val_loss: 0.0070
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0069 -
val_loss: 0.0069
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0068 -
val_loss: 0.0068
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - loss: 0.0068 -
val_loss: 0.0067
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.0067 -
val_loss: 0.0067
11. Image generation using a GAN
Aim:
Write a Python program for image generation using
a GAN.
Procedure:
Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
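# The listing omits the start of the generator; a sketch assuming the
# standard DCGAN layout for 28x28 images (latent_dim is an assumed value):
latent_dim = 100
model = models.Sequential()
model.add(layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(latent_dim,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())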
model.add(layers.Reshape((7, 7, 256)))
model.add(layers.Conv2DTranspose(128, (5, 5),
strides=(1, 1), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
# Training loop
epochs = 50
batch_size = 128
num_examples_to_generate = 16
random_vector_for_generation = tf.random.normal([num_examples_to_generate, latent_dim])
# Print progress (this statement sits inside the training loop, whose body
# is omitted from the listing):
print(f'Epoch {epoch + 1}, Batch {batch + 1}, '
      f'Discriminator Loss: {discriminator_loss}, Generator Loss: {generator_loss}')
output: