Experiment 1

Aim: Setting up the Jupyter IDE Environment and Executing a Python Program

Procedure to Install Anaconda:

1. Go to the Anaconda website and choose a Python 3.x graphical installer.
2. Locate your download and double-click it.
3. Read the license agreement and click I Agree.
4. Note your installation location and then click Next.
5. This is an important part of the installation process. The recommended approach is not to check
the box that adds Anaconda to your PATH. This means you will have to use Anaconda Navigator or
the Anaconda Command Prompt whenever you wish to use Anaconda. If you want to be able to use
Anaconda from your regular command prompt, use the alternative approach and check the box.

6. This step is optional, for the case where you didn't check the box in
step 5 and now want to add Anaconda to your PATH via the environment variables.


Integrating Jupyter with Anaconda:

7. Find and open the Anaconda Prompt app using the search bar.
8. Once the Anaconda Prompt opens, navigate to the desired folder using the cd command.
9. Once in the desired folder, type jupyter notebook and press Enter.
10. The Jupyter server will start and print some server logs. You may be prompted to select an
application to open Jupyter in; Firefox or Chrome are preferred.
11. Shortly after, a browser window should open, showing the files and folders located in the
folder where you started the Jupyter server.

Executing a Python Program:
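The program referred to in the result below is not reproduced in the record; a minimal sketch of a palindrome check (one of several ways to write it):

num = input("Enter a number: ")
# A string reads the same forwards and backwards if it equals its reverse
if num == num[::-1]:
    print(num, "is a palindrome")
else:
    print(num, "is not a palindrome")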

Result: We have successfully installed Anaconda, set up the Jupyter IDE, and executed a
Python program to check whether an input number is a palindrome.


Experiment 2

Aim: Installing the TensorFlow and PyTorch libraries and making use of them

Procedure to install TensorFlow in Anaconda:

TensorFlow with conda is supported on 64-bit Windows 7 or later, 64-bit Ubuntu Linux 14.04 or
later, 64-bit CentOS Linux 6 or later, and macOS 10.10 or later.

1. On Windows, open the Start menu and open the Anaconda Command Prompt.
2. Choose a name for your TensorFlow environment, such as "tf".
3. To install the current release of CPU-only TensorFlow, recommended for beginners:
conda create -n tf tensorflow
conda activate tf
4. Or, to install the current release of GPU TensorFlow on Linux or Windows:
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
5. Now go to Anaconda Navigator and change the environment from base to tf-gpu.

6. Install Jupyter notebook and launch Jupyter in the new environment


7. Install numpy using pip install numpy==1.23.4


8. Now import tensorflow and check the version


import tensorflow as tf
print(tf.__version__)

2.6.0

9. Check the keras version using the following command


!pip show keras

Name: keras
Version: 2.13.1
Summary: Deep learning for humans.
Home-page: https://keras.io/
Author: Keras team
Author-email: keras [email protected]
License: Apache 2.0
Location: c:\users\mgit\anaconda3\envs\tf-gpu\lib\site-packages
Requires:
Required-by: tensorflow

Example program for Tensorflow basics

import tensorflow as tf
x = tf.constant([[1., 2., 3.], [4., 5., 6.]])
print(x)
print(x.shape)
print(x.dtype)

Output:
tf.Tensor(
[[1. 2. 3.]
 [4. 5. 6.]], shape=(2, 3), dtype=float32)
(2, 3)
<dtype: 'float32'>


Installing PyTorch and importing it in a Jupyter notebook


1. Use the command pip3 install torch torchvision torchaudio in the Anaconda command prompt to
install PyTorch.
2. Now import torch in a Jupyter notebook.
3. Write an example program in Jupyter:

import torch
x = torch.rand(5, 3)
print(x)

Output:
tensor([[0.8338, 0.2921, 0.2501],
        [0.8172, 0.9531, 0.9061],
        [0.4925, 0.0952, 0.3532],
        [0.3888, 0.7118, 0.3312],
        [0.4027, 0.3560, 0.8726]])

Result: We have successfully installed TensorFlow, Keras, and PyTorch and executed simple programs.


Experiment 3

Aim: Applying Convolutional Neural Network on Computer Vision Algorithms

Importing The Libraries

import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from tensorflow.keras.optimizers import Adam

Loading and Resizing the training datasets of dogs and cats


import os
from PIL import Image

f = r'C:\Users\MGIT\Desktop\cd dataset\train\dog'
for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.resize((112, 112))
    img.save(f_img)

f = r'C:\Users\MGIT\Desktop\cd dataset\train\cat'
for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.resize((112, 112))
    img.save(f_img)

Loading and Resizing the testing datasets of dogs and cats

f = r'C:\Users\MGIT\Desktop\cd dataset\test\dog'
for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.resize((112, 112))
    img.save(f_img)

f = r'C:\Users\MGIT\Desktop\cd dataset\test\cat'
for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.resize((112, 112))
    img.save(f_img)


Image Preprocessing

IMAGE_SIZE = 112
BATCH_SIZE = 32
train_data_size = 180
test_data_size = 20

train = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=90,
    shear_range=0.2, zoom_range=0.2, horizontal_flip=True)

Output: Found 180 images belonging to 2 classes

test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=90,
    shear_range=0.2, zoom_range=0.2, horizontal_flip=True)

Output: Found 20 images belonging to 2 classes
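The "Found ... images" messages above are printed by flow_from_directory calls that do not appear in the record (they were presumably captured as screenshots). A minimal reconstruction, assuming the dataset paths used in the resizing step and binary dog/cat labels:

# Assumed reconstruction: builds the generators that print the "Found ..." messages
train_data = train.flow_from_directory(r'C:\Users\MGIT\Desktop\cd dataset\train',
    target_size=(IMAGE_SIZE, IMAGE_SIZE), batch_size=BATCH_SIZE, class_mode='binary')
test_data = test.flow_from_directory(r'C:\Users\MGIT\Desktop\cd dataset\test',
    target_size=(IMAGE_SIZE, IMAGE_SIZE), batch_size=BATCH_SIZE, class_mode='binary')

These define the train_data and test_data generators consumed by model.fit below.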

Model Building

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(112, 112, 3)),
    MaxPool2D(2, 2),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPool2D(2, 2),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(1, activation='sigmoid')
])

model.summary()


model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])


model.fit(train_data, epochs=10, validation_data=test_data)

Result: Trained a convolutional neural network model to classify dog and cat images.


Experiment 4

Aim: Image Classification on the MNIST dataset (CNN model with fully connected layer)

Importing The Libraries

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from tensorflow.keras.optimizers import Adam

Preprocessing and Loading Images

image_size = 64
batch_size = 32
train = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
    rotation_range=90, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)

test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

train_set = train.flow_from_directory(r'./dataset_mnist/train',
    target_size=(image_size, image_size), batch_size=batch_size,
    class_mode='categorical')

test_set = test.flow_from_directory(r'./dataset_mnist/test',
    target_size=(image_size, image_size), batch_size=batch_size,
    class_mode='categorical')

Output: Found 100 images belonging to 10 classes


Found 100 images belonging to 10 classes


Model Building

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(image_size, image_size, 3)),
    MaxPool2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPool2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPool2D(2, 2),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(10, activation='softmax')  # 10 output units to match the 10 digit classes and class_mode='categorical'
])

model.summary()


model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])


model.fit(train_set, epochs=15, validation_data=test_set)
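
A quick sanity check on the trained model (not part of the original record; taking the argmax over the softmax output recovers the predicted digit):

import numpy as np
images, labels = next(test_set)           # one batch from the test generator
predictions = model.predict(images)
print("Predicted digits:", np.argmax(predictions, axis=1))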

Result: Performed image classification on the MNIST dataset for the numeric digits 0 to 9.


Experiment 5

Aim: Applying the pre-trained model VGG16 for MNIST Dataset Classification

Importing The Libraries

import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.layers import BatchNormalization

Loading the dataset

image_size = 64
batch_size = 32
train = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
rotation_range=90, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)

test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

Output: Found 100 images belonging to 10 classes


Found 100 images belonging to 10 classes
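
As in Experiment 4, the flow_from_directory calls that printed the messages above are not shown in the record. A plausible reconstruction, assuming the same dataset_mnist folders and a 224x224 target size to match the VGG16 input below:

# Assumed reconstruction: 224x224 matches the Input(shape=(224, 224, 3)) layer that follows
train_data = train.flow_from_directory(r'./dataset_mnist/train',
    target_size=(224, 224), batch_size=batch_size, class_mode='categorical')
test_data = test.flow_from_directory(r'./dataset_mnist/test',
    target_size=(224, 224), batch_size=batch_size, class_mode='categorical')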

input = Input(shape=(224, 224, 3))

# Block 1
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(input)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
# Block 2
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
# Block 3
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
# Block 4
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
# Block 5 (512 filters, as in the standard VGG16 architecture)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)

x = Flatten()(x)
x = Dense(units=4096, activation='relu')(x)
output = Dense(units=10, activation='softmax')(x)
model = Model(inputs=input, outputs=output)
model.summary()


model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(train_data, epochs=15, validation_data=test_data)
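
Note that the network above recreates the VGG16 topology from scratch and trains it end to end. To use genuinely pre-trained weights, Keras also ships the architecture with ImageNet weights; a minimal sketch of that alternative (not in the original record):

from tensorflow.keras.applications import VGG16

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base, train only the new head
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
output = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=output)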

Result: Successfully implemented the pre-trained CNN model VGG16 for MNIST dataset classification.


Experiment 6
Aim: Training a Sentiment Analysis model on the IMDB dataset using an RNN with LSTM nodes
Importing The Libraries

import numpy as np
from keras.models import Sequential
from keras.preprocessing import sequence
from keras.layers import Dropout, Dense, Embedding, LSTM
from keras.datasets import imdb
from keras.callbacks import EarlyStopping
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import re
import nltk

nltk.download('stopwords')
nltk.download('wordnet')

Loading Datasets

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)


word_index = imdb.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

Preprocessing Data

def preprocess_text(text):
    text = re.sub(r'<[^>]+>', '', text)   # strip HTML tags
    text = re.sub(r'\d+', '', text)       # strip digits
    text = re.sub(r'[^\w\s]', '', text)   # strip punctuation
    text = text.lower()
    stop_words = set(stopwords.words('english'))
    words = text.split()
    words = [word for word in words if word not in stop_words]
    lemmatizer = WordNetLemmatizer()
    words = [lemmatizer.lemmatize(word) for word in words]
    return ' '.join(words)
x_train_text = [' '.join([reverse_word_index.get(i - 3, '?') for i in sequence]) for sequence in x_train]
x_test_text = [' '.join([reverse_word_index.get(i - 3, '?') for i in sequence]) for sequence in x_test]
x_train_text = [preprocess_text(text) for text in x_train_text]
x_test_text = [preprocess_text(text) for text in x_test_text]
maxlen= 200
tokenizer= Tokenizer(num_words=10000)
tokenizer.fit_on_texts(x_train_text)

x_train_seq = tokenizer.texts_to_sequences(x_train_text)
x_test_seq = tokenizer.texts_to_sequences(x_test_text)

x_train = pad_sequences(x_train_seq, maxlen=maxlen)
x_test = pad_sequences(x_test_seq, maxlen=maxlen)

y_train = np.array(y_train)
y_test = np.array(y_test)

Model Building and Compiling

n_unique_words = 10000
model = Sequential()
model.add(Embedding(n_unique_words, 64, input_length=maxlen))
model.add(LSTM(32))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=10, validation_data=(x_test, y_test))
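
EarlyStopping is imported above but never used; a sketch of how it could be wired into training (the monitor and patience values here are illustrative, not from the record):

early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
history = model.fit(x_train, y_train, batch_size=128, epochs=10,
                    validation_data=(x_test, y_test), callbacks=[early_stop])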


from matplotlib import pyplot as plt

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Loss vs Accuracy')
plt.xlabel('Epoch')
plt.legend(['Loss', 'Val_Loss', 'Accuracy', 'Val_Accuracy'], loc='upper right')
plt.show()
sample_text = "This is a great movie with fantastic performances!"
sample_text = preprocess_text(sample_text)
tokenized_sample = tokenizer.texts_to_sequences([sample_text])
padded_sample = pad_sequences(tokenized_sample, maxlen=maxlen)
prediction = model.predict(padded_sample)
threshold = 0.5

if prediction[0][0] > threshold:
    print(f"The sample text is predicted as positive with confidence: {prediction[0][0]}")
else:
    print(f"The sample text is predicted as negative with confidence: {1 - prediction[0][0]}")


Output: The sample text is predicted as positive with confidence: 0.86

Result: Trained a sentiment analysis model on the IMDB dataset using RNN layers with
LSTM nodes and made predictions on sample text.


Experiment 7
Aim: Applying autoencoder algorithms for encoding real-world data

Importing The Libraries

from keras.layers import Dense, Conv2D, MaxPooling2D, UpSampling2D


from keras import Input, Model
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt

Model Architecture

encoding_dim = 15
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

Encoder and Decoder Models

encoder = Model(input_img, encoded)


encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))

Model Compilation

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

Data Preparation

(x_train, y_train), (x_test, y_test) = mnist.load_data()


x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)

Output: (60000, 784)
(10000, 784)

Model Fitting

autoencoder.fit(
x_train, x_train,
epochs=15,
batch_size=256,
validation_data=(x_test, x_test)
)


Evaluation and Visualization

plt.figure(figsize=(20, 6))
encoded_img = encoder.predict(x_test)
decoded_img = decoder.predict(encoded_img)
import random
i = random.randint(0, 10)
print("Original image")
ax = plt.subplot(3, 1, 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()

print("Encoded image")
encoded_image = encoded_img[i].reshape(encoding_dim, 1)
ax = plt.subplot(3, 1, 2)
plt.imshow(encoded_image, aspect=0.05)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()


print("Reconstructed image after decoding")


ax = plt.subplot(3, 1, 3)
plt.imshow(decoded_img[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()

Result: Applied the autoencoder algorithm to the MNIST dataset and displayed the original,
encoded, and decoded images.


Experiment 8
Aim: Applying Generative Adversarial Networks for image generation and unsupervised tasks

Importing The Libraries

import tensorflow as tf
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
import os
import time
from IPython import display

Loading Datasets

(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # scale pixels to [-1, 1] to match the generator's tanh output

BUFFER_SIZE = 10000
BATCH_SIZE = 128
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

Generator Model

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same',
                                     use_bias=False, activation='tanh'))
    return model

generator = make_generator_model()

Discriminator Model

def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))
    return model

discriminator = make_discriminator_model()

Loss Functions

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)


Optimizers

generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
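
Training can later be resumed from the latest saved checkpoint (a step not shown in the record) with:

checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))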

Training the Model

EPOCHS = 100
noise_dim = 100
num_examples_to_generate = 16
seed = tf.random.normal([num_examples_to_generate, noise_dim])
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,
                                                discriminator.trainable_variables))


def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for image_batch in dataset:
            train_step(image_batch)
        display.clear_output(wait=True)
        generate_and_save_images(generator, epoch + 1, seed)
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix=checkpoint_prefix)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
    display.clear_output(wait=True)
    generate_and_save_images(generator, epochs, seed)

Generating and Saving Images

def generate_and_save_images(model, epoch, test_input):
    predictions = model(test_input, training=False)
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i + 1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()

train(train_dataset, EPOCHS)


Output after 1st Epoch

Output after 100th Epoch

Result: Trained a Generative Adversarial Network on the MNIST dataset for image generation.
