10,11 NN&DL

10. Implement an LSTM based Autoencoder in TensorFlow/Keras.

Algorithm:
Step 1: Import necessary libraries.
Step 2: Generate (or load) a set of fixed-length input sequences.
Step 3: Define the encoder: an LSTM layer that compresses each sequence into a latent vector.
Step 4: Repeat the latent vector across the sequence length using RepeatVector.
Step 5: Define the decoder: an LSTM layer with return_sequences=True that reconstructs the sequence.
Step 6: Compile the autoencoder with the Adam optimizer and MSE loss.
Step 7: Train the model using the input sequences as both inputs and targets.
Step 8: Reconstruct a sample sequence and compare it with the original.

Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, RepeatVector
from tensorflow.keras.models import Model
# 1000 random sequences, each with 10 timesteps of 5 features
data = np.random.rand(1000, 10, 5)
latent_dim = 3
inputs = Input((10, 5))
encoded = LSTM(latent_dim)(inputs)  # encoder: compresses each sequence to a 3-dim vector
decoded = LSTM(5, return_sequences=True)(RepeatVector(10)(encoded))  # decoder: rebuilds the sequence
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(data, data, epochs=20, batch_size=32, validation_split=0.2)
sample_sequence = data[0:1]
# Note: autoencoder.predict() returns the full reconstruction, not the latent code;
# to inspect the 3-dim code itself, build a separate Model(inputs, encoded).
reconstructed = autoencoder.predict(sample_sequence)
re_reconstructed = autoencoder.predict(reconstructed)  # reconstruction passed through the model again
print("Original:\n", sample_sequence[0])
print("Reconstructed:\n", reconstructed[0])
print("Re-reconstructed:\n", re_reconstructed[0])
Output:
Epoch 1/20
25/25 [==============================] - 6s 63ms/step - loss: 0.3921 - val_loss: 0.3639
Epoch 2/20
25/25 [==============================] - 0s 8ms/step - loss: 0.3364 - val_loss: 0.3145
Epoch 3/20
25/25 [==============================] - 0s 8ms/step - loss: 0.2923 - val_loss: 0.2734
Epoch 4/20
25/25 [==============================] - 0s 8ms/step - loss: 0.2543 - val_loss: 0.2371
Epoch 5/20
25/25 [==============================] - 0s 8ms/step - loss: 0.2212 - val_loss: 0.2064
Epoch 6/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1944 - val_loss: 0.1825
Epoch 7/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1739 - val_loss: 0.1643
Epoch 8/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1578 - val_loss: 0.1490
Epoch 9/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1436 - val_loss: 0.1351
Epoch 10/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1311 - val_loss: 0.1237
Epoch 11/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1219 - val_loss: 0.1165
Epoch 12/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1165 - val_loss: 0.1124
Epoch 13/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1133 - val_loss: 0.1097
Epoch 14/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1109 - val_loss: 0.1077
Epoch 15/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1089 - val_loss: 0.1060
Epoch 16/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1072 - val_loss: 0.1043
Epoch 17/20
25/25 [==============================] - 0s 9ms/step - loss: 0.1055 - val_loss: 0.1028
Epoch 18/20
25/25 [==============================] - 0s 7ms/step - loss: 0.1040 - val_loss: 0.1014
Epoch 19/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1025 - val_loss: 0.1000
Epoch 20/20
25/25 [==============================] - 0s 8ms/step - loss: 0.1011 - val_loss: 0.0987
1/1 [==============================] - 1s 1s/step
1/1 [==============================] - 0s 21ms/step
Original:
[[0.46594909 0.89358259 0.90200044 0.8339343 0.56945588]
[0.41847491 0.17016034 0.90157267 0.71930869 0.05584527]
[0.34895259 0.13718447 0.162751 0.44500557 0.55637849]
[0.71697654 0.25436079 0.65447249 0.00497761 0.39961196]
[0.05032428 0.63232701 0.01846791 0.16800408 0.23634759]
[0.38695076 0.12347642 0.00817942 0.54274944 0.7977497 ]
[0.86427886 0.82877268 0.46338422 0.20281466 0.85071124]
[0.41869187 0.84021977 0.37315503 0.38856234 0.77141786]
[0.33402822 0.93032699 0.5700405 0.66954792 0.99004989]
[0.38941963 0.42562495 0.62706435 0.01823869 0.56267593]]
Reconstructed:
[[0.13467233 0.09182165 0.21017277 0.2254508 0.1940967 ]
[0.3007508 0.21627368 0.37652236 0.37593782 0.35030696]
[0.42508027 0.33714777 0.46785116 0.4576065 0.44744495]
[0.49248993 0.42977157 0.5030912 0.4944822 0.497201 ]
[0.52422625 0.49219692 0.5133539 0.5100318 0.5209746 ]
[0.53914064 0.5330058 0.51560146 0.51703566 0.53258353]
[0.54665434 0.560159 0.5156787 0.5205509 0.5385343 ]
[0.55076003 0.57870173 0.5152291 0.52247256 0.5417279 ]
[0.5531688 0.59164125 0.514713 0.52360266 0.5435192 ]
[0.5546714 0.6008257 0.514263 0.5243197 0.5445719 ]]
Re-reconstructed:
[[0.13867386 0.09464496 0.20798753 0.22682343 0.19579542]
[0.3067389 0.22157499 0.3736154 0.37783322 0.3527872 ]
[0.4307486 0.34409672 0.4653936 0.45954534 0.44981512]
[0.49726498 0.4376482 0.501414 0.49634233 0.49917597]
[0.528339 0.5005967 0.51232094 0.5118214 0.5226012 ]
[0.5428452 0.54172784 0.5149887 0.5187749 0.53396374]
[0.5501056 0.5690969 0.5153211 0.5222538 0.53975135]
[0.5540473 0.58779615 0.5150268 0.52414757 0.5428382 ]
[0.5563447 0.6008583 0.5146078 0.52525514 0.54455894]
[0.5577693 0.6101439 0.5142202 0.5259533 0.54556334]]
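The listing above prints only reconstructions; to obtain the latent code itself, a separate encoder model can share the trained encoder weights. A minimal sketch (untrained weights, illustrative variable names) showing the shapes involved:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, RepeatVector
from tensorflow.keras.models import Model

# Same architecture as above; the encoder model reuses the encoder LSTM's weights.
latent_dim = 3
inputs = Input((10, 5))
encoded = LSTM(latent_dim)(inputs)
decoded = LSTM(5, return_sequences=True)(RepeatVector(10)(encoded))
autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)  # maps a sequence to its 3-dim latent code

sample = np.random.rand(1, 10, 5).astype("float32")
z = encoder.predict(sample, verbose=0)      # latent code, shape (1, 3)
r = autoencoder.predict(sample, verbose=0)  # reconstruction, shape (1, 10, 5)
print(z.shape)
print(r.shape)
```

Because `encoder` is built from the same layer objects as `autoencoder`, it needs no extra training: any weights learned by fitting the autoencoder are visible through it immediately.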
11. Implement image generation using a GAN in TensorFlow/Keras.

Algorithm:
Step 1: Import necessary libraries.
Step 2: Load and preprocess the MNIST dataset (scaling pixels to [-1, 1]).
Step 3: Define the generator and discriminator networks.
Step 4: Compile the discriminator with binary cross-entropy loss and the Adam optimizer.
Step 5: Define the GAN by connecting the generator to the (frozen) discriminator.
Step 6: Compile the GAN with binary cross-entropy loss and the Adam optimizer.
Step 7: Set the batch size (b) and number of epochs (e).
Step 8: For each of the e epochs:
        Train the discriminator on real and fake images.
        Train the GAN so the generator produces real-looking images.
        Print the losses at intervals.
Step 9: Generate sample images using the trained generator.
Step 10: Display the generated images using Matplotlib.

Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, Reshape
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
(X_train, _), (_, _) = mnist.load_data()
X_train = X_train / 127.5 - 1.0  # scale pixels to [-1, 1] to match the tanh output
X_train = np.expand_dims(X_train, -1)
generator = Sequential([Dense(128, input_shape=(100,), activation='relu'),
                        Dense(784, activation='tanh'),
                        Reshape((28, 28, 1))])
discriminator = Sequential([Flatten(input_shape=(28, 28, 1)),
                            Dense(128, activation='relu'),
                            Dense(1, activation='sigmoid')])
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002), metrics=['accuracy'])
# Freeze the discriminator inside the combined model so that training the GAN
# updates only the generator's weights.
discriminator.trainable = False
gan_input = Input(shape=(100,))
gan_output = discriminator(generator(gan_input))
gan = Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))
b, e = 64, 20
for epoch in range(e):
    idx = np.random.randint(0, X_train.shape[0], b)
    real_imgs = X_train[idx]
    fake_imgs = generator.predict(np.random.randn(b, 100))
    d_loss_real = discriminator.train_on_batch(real_imgs, np.ones((b, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_imgs, np.zeros((b, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
    noise = np.random.randn(b, 100)
    g_loss = gan.train_on_batch(noise, np.ones((b, 1)))  # generator tries to fool the discriminator
    if (epoch + 1) % 5 == 0:
        print(f"Epoch {epoch + 1}/{e}, D Loss: {d_loss[0]}, G Loss: {g_loss}")
num_samples = 16
generated_images = generator.predict(np.random.randn(num_samples, 100))
generated_images = 0.5 * generated_images + 0.5  # rescale from [-1, 1] to [0, 1] for display
fig, axs = plt.subplots(4, 4)
count = 0
for row in range(4):
    for col in range(4):
        axs[row, col].imshow(generated_images[count, :, :, 0], cmap='gray')
        axs[row, col].axis('off')
        count += 1
plt.show()
Output:
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 16ms/step
2/2 [==============================] - 0s 999us/step
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 2ms/step
Epoch 5/20, D Loss: 0.6820654459297657, G Loss: 0.3546869456768036
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 999us/step
2/2 [==============================] - 0s 1ms/step
2/2 [==============================] - 0s 1000us/step
2/2 [==============================] - 0s 999us/step
Epoch 10/20, D Loss: 0.9999328017001972, G Loss: 0.1527058184146881
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 1ms/step
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 1ms/step
2/2 [==============================] - 0s 999us/step
Epoch 15/20, D Loss: 1.223515163641423, G Loss: 0.10654151439666748
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 999us/step
2/2 [==============================] - 0s 999us/step
2/2 [==============================] - 0s 2ms/step
2/2 [==============================] - 0s 1ms/step
Epoch 20/20, D Loss: 1.3138971208245493, G Loss: 0.09464263916015625
1/1 [==============================] - 0s 27ms/step
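A quick sanity check on the freeze pattern used above: with `discriminator.trainable = False` set before compiling the combined model, `gan.train_on_batch` can update only the generator. A sketch (same tiny architecture, explicit Input layers; counts refer to weight tensors, not layers):

```python
from tensorflow.keras.layers import Input, Dense, Flatten, Reshape
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam

generator = Sequential([Input(shape=(100,)),
                        Dense(128, activation='relu'),
                        Dense(784, activation='tanh'),
                        Reshape((28, 28, 1))])
discriminator = Sequential([Input(shape=(28, 28, 1)),
                            Flatten(),
                            Dense(128, activation='relu'),
                            Dense(1, activation='sigmoid')])
# Compile the discriminator first: it keeps training on its own batches.
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

discriminator.trainable = False  # freeze D inside the combined model
z = Input(shape=(100,))
gan = Model(z, discriminator(generator(z)))
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

# Only the generator's two Dense layers (kernel + bias each = 4 tensors)
# remain trainable through the combined model.
print(len(gan.trainable_weights))            # 4
print(len(discriminator.trainable_weights))  # 0
```

Without the freeze, each generator step would also push the discriminator toward calling fakes "real", undoing its own training and destabilising the adversarial game.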
