Image classifier for the SVHN dataset


We built a neural network that classifies digits from real-world images.

Importing all the necessary libraries

import tensorflow as tf
from scipy.io import loadmat

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline

For this project, we used the SVHN dataset: over 600,000 digit images obtained from house
numbers in Google Street View. It is a harder dataset than MNIST because the digits appear
in the context of natural scene images.

Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. "Reading Digits in Natural
Images with Unsupervised Feature Learning". NIPS Workshop on Deep Learning and
Unsupervised Feature Learning, 2011.

The train and test datasets required for this project can be downloaded from here and here.
Once unzipped, we had two files, train_32x32.mat and test_32x32.mat, which were stored in
Drive for use in this Colab notebook.

Our goal is to develop an end-to-end workflow for building, training, validating, evaluating and
saving a neural network that classifies a real-world image into one of ten classes.

# Connect to Drive folder

from google.colab import drive


drive.mount('/content/gdrive')

Mounted at /content/gdrive

# Load the dataset from Drive folder

# train = loadmat('path/to/train_32x32.mat')
# test = loadmat('path/to/test_32x32.mat')

train = loadmat('/content/gdrive/My Drive/Colab Notebooks/PROJECTS/Image Classifier for t


test = loadmat('/content/gdrive/My Drive/Colab Notebooks/PROJECTS/Image Classifier for th


Both train and test are dictionaries with keys X and y for the input images and labels
respectively.

1. Inspect and preprocess the dataset


# Extracting the training and testing images and labels separately from the train and test dictionaries

X_train = train['X']
X_test = test['X']
y_train = train['y']
y_test = test['y']

X_train.shape, X_test.shape

((32, 32, 3, 73257), (32, 32, 3, 26032))
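
Besides X and y, loadmat returns MATLAB metadata keys; a quick sanity check of the loaded
structure (a sketch, assuming the cells above have already run):

# scipy.io.loadmat adds __header__, __version__ and __globals__ alongside the data arrays.
print(train.keys())
print(y_train.shape, y_test.shape)   # labels come as column vectors, e.g. (73257, 1)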

# In SVHN, the digit 0 is labelled as 10; remap label 10 to 0 in both sets.

y_train = np.where(y_train==10, 0, y_train)
y_test = np.where(y_test==10, 0, y_test)

# Selecting and displaying a random sample of 10 images and corresponding labels from the training set.

import random

plt.figure(figsize=(20,10))
for i in range(10):
    plt.subplot(2,5,i+1)
    n = random.randint(0, X_train.shape[3]-1)   # randint is inclusive at both ends
    plt.imshow(X_train[:, :, :, n])
    plt.title(y_train[n])

plt.show()



# Converting the training and test images to grayscale by taking the average across all
# colour channels, then rescaling pixel values to [0, 1].

X_train_gs = np.zeros((X_train.shape[3], X_train.shape[0], X_train.shape[1], 1))
for i in range(X_train.shape[3]):
    X_train_gs[i,:,:,0] = np.average(X_train[:,:,:,i], axis=2)

X_train_gs = X_train_gs/255.

X_test_gs = np.zeros((X_test.shape[3], X_test.shape[0], X_test.shape[1], 1))
for i in range(X_test.shape[3]):
    X_test_gs[i,:,:,0] = np.average(X_test[:,:,:,i], axis=2)

X_test_gs = X_test_gs/255.
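
The per-image loops above can be collapsed into a single vectorized expression; a minimal
sketch, assuming X_train and X_test still have their original (32, 32, 3, N) layout:

# Average over the channel axis, keep a singleton channel, move samples to the front.
X_train_gs = np.moveaxis(X_train.mean(axis=2, keepdims=True), -1, 0) / 255.
X_test_gs = np.moveaxis(X_test.mean(axis=2, keepdims=True), -1, 0) / 255.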

# Displaying a random sample of the grayscale images and corresponding labels from the dataset.

plt.figure(figsize=(20,10))
for i in range(10):
    plt.subplot(2,5,i+1)
    n = random.randint(0, X_train.shape[3]-1)
    plt.imshow(X_train_gs[n,:,:,0], cmap=plt.get_cmap('gray'))
    plt.title(y_train[n])

plt.show()


2. MLP neural network classifier


from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Softmax

# Move the sample axis to the front: (32, 32, 3, N) -> (N, 32, 32, 3)
X_train = np.moveaxis(X_train, -1, 0)
X_test = np.moveaxis(X_test, -1, 0)
X_train.shape

(73257, 32, 32, 3)

X_train[0].shape

(32, 32, 3)


# Building an MLP classifier model with the Sequential API, using only Flatten and Dense layers

def get_mlp_model():
    model = Sequential([Flatten(input_shape=X_train[0].shape),
                        Dense(2048, activation='relu'),
                        Dense(2048, activation='relu'),
                        Dense(2048, activation='relu'),
                        Dense(10, activation='softmax')])
    return model

# Printing out the model summary

model = get_mlp_model()
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 3072) 0

dense (Dense) (None, 2048) 6293504

dense_1 (Dense) (None, 2048) 4196352

dense_2 (Dense) (None, 2048) 4196352

dense_3 (Dense) (None, 10) 20490

=================================================================
Total params: 14,706,698
Trainable params: 14,706,698
Non-trainable params: 0
_________________________________________________________________
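
As a quick check, each Dense layer contributes inputs × units + units parameters, which
reproduces the totals in the summary:

# Hand-computing the parameter counts shown above.
flat = 32 * 32 * 3                  # 3072 features after Flatten
d0 = flat * 2048 + 2048             # 6,293,504
d1 = 2048 * 2048 + 2048             # 4,196,352
d2 = 2048 * 2048 + 2048             # 4,196,352
d3 = 2048 * 10 + 10                 # 20,490
print(d0 + d1 + d2 + d3)            # 14,706,698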

# Defining two callbacks for training: early stopping and best-model checkpointing.

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

def get_checkpoint_best():
    # Saves weights only when the validation loss improves on the best seen so far.
    return ModelCheckpoint(filepath='checkpoints_every_epoch/checkpoint_{epoch:03d}',
                           monitor='val_loss',
                           save_weights_only=True,
                           save_best_only=True,
                           verbose=1)

def get_early_stopping():
    # Stops training if the training loss fails to improve for 4 consecutive epochs.
    return EarlyStopping(patience=4, monitor='loss')

checkpoint_best = get_checkpoint_best()
early_stopping = get_early_stopping()


# Clear directory
! rm -r checkpoints_every_epoch

rm: cannot remove 'checkpoints_every_epoch': No such file or directory
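
An equivalent cleanup in pure Python, which also avoids the error when the directory does
not yet exist (a sketch using the standard library):

import shutil
shutil.rmtree('checkpoints_every_epoch', ignore_errors=True)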

# Compiling and training the model, making use of both training and validation sets during training.

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

callbacks = [checkpoint_best, early_stopping]

history = model.fit(X_train, y_train, epochs=30, batch_size=128, validation_split=0.15, callbacks=callbacks)

Epoch 1/30
485/487 [============================>.] - ETA: 0s - loss: 43.6617 - accuracy: 0.3
Epoch 1: val_loss improved from inf to 1.54919, saving model to checkpoints_every_
487/487 [==============================] - 13s 17ms/step - loss: 43.5344 - accurac
Epoch 2/30
486/487 [============================>.] - ETA: 0s - loss: 1.2562 - accuracy: 0.60
Epoch 2: val_loss improved from 1.54919 to 1.26725, saving model to checkpoints_ev
487/487 [==============================] - 5s 10ms/step - loss: 1.2564 - accuracy:
Epoch 3/30
480/487 [============================>.] - ETA: 0s - loss: 1.0944 - accuracy: 0.65
Epoch 3: val_loss improved from 1.26725 to 1.13981, saving model to checkpoints_ev
487/487 [==============================] - 4s 8ms/step - loss: 1.0933 - accuracy:
Epoch 4/30
480/487 [============================>.] - ETA: 0s - loss: 1.0085 - accuracy: 0.68
Epoch 4: val_loss improved from 1.13981 to 1.08111, saving model to checkpoints_ev
487/487 [==============================] - 4s 8ms/step - loss: 1.0103 - accuracy:
Epoch 5/30
482/487 [============================>.] - ETA: 0s - loss: 0.9516 - accuracy: 0.70
Epoch 5: val_loss improved from 1.08111 to 1.01067, saving model to checkpoints_ev
487/487 [==============================] - 4s 7ms/step - loss: 0.9511 - accuracy:
Epoch 6/30
486/487 [============================>.] - ETA: 0s - loss: 0.9148 - accuracy: 0.71
Epoch 6: val_loss improved from 1.01067 to 0.85161, saving model to checkpoints_ev
487/487 [==============================] - 4s 8ms/step - loss: 0.9151 - accuracy:
Epoch 7/30
487/487 [==============================] - ETA: 0s - loss: 0.9015 - accuracy: 0.71
Epoch 7: val_loss did not improve from 0.85161
487/487 [==============================] - 6s 12ms/step - loss: 0.9015 - accuracy:
Epoch 8/30
483/487 [============================>.] - ETA: 0s - loss: 0.8856 - accuracy: 0.72
Epoch 8: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 9ms/step - loss: 0.8850 - accuracy:
Epoch 9/30
481/487 [============================>.] - ETA: 0s - loss: 0.8410 - accuracy: 0.73
Epoch 9: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 8ms/step - loss: 0.8405 - accuracy:
Epoch 10/30
487/487 [==============================] - ETA: 0s - loss: 0.8414 - accuracy: 0.73
Epoch 10: val_loss did not improve from 0.85161
487/487 [==============================] - 5s 10ms/step - loss: 0.8414 - accuracy:
Epoch 11/30
482/487 [============================>.] - ETA: 0s - loss: 0.8200 - accuracy: 0.74
Epoch 11: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 8ms/step - loss: 0.8202 - accuracy:
Epoch 12/30
484/487 [============================>.] - ETA: 0s - loss: 0.8261 - accuracy: 0.74
Epoch 12: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 8ms/step - loss: 0.8257 - accuracy:
Epoch 13/30
486/487 [============================>.] - ETA: 0s - loss: 0.7865 - accuracy: 0.75
Epoch 13: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 7ms/step - loss: 0.7864 - accuracy:
Epoch 14/30
482/487 [============================>.] - ETA: 0s - loss: 0.7974 - accuracy: 0.75
Epoch 14: val_loss did not improve from 0.85161
487/487 [==============================] - 4s 9ms/step - loss: 0.7990 - accuracy:
Epoch 15/30

# Plotting the learning curves for loss vs epoch for both training and validation sets.

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Train_Loss','Validation_Loss'], loc='upper right')
plt.title("Loss v/s Epochs")
plt.show()

# Plotting the learning curves for accuracy vs epoch for both training and validation set

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend(['Train_Accuracy','Validation_Accuracy'], loc='lower right')
plt.title("Accuracy v/s Epochs")
plt.show()


# Computing and displaying the loss and accuracy of the trained model on the test set.

test_loss, test_accuracy = model.evaluate(X_test, y_test)


print(f"Test Loss is {test_loss}")
print(f"Test Accuracy is {test_accuracy}")

814/814 [==============================] - 2s 3ms/step - loss: 0.9697 - accuracy: 0.7


Test Loss is 0.9697173833847046
Test Accuracy is 0.7325983643531799

3. CNN neural network classifier


from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPool2D, BatchNormalization, Dropout

# Building a CNN classifier model with the Sequential API, using Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense and Dropout layers.
# The final layer has a 10-way softmax output.

def get_cnn_model():
    # NOTE: the kernel_regularizer arguments were cut off in the original printout;
    # l2(1e-3) below is an assumed stand-in for the truncated values.
    l2 = tf.keras.regularizers.l2(1e-3)
    model = Sequential([Conv2D(filters=256, kernel_size=3, padding="SAME", activation='relu',
                               input_shape=X_train[0].shape),
                        MaxPool2D(pool_size=(2,2)),
                        Conv2D(filters=128, kernel_size=3, padding="SAME", activation='relu'),
                        MaxPool2D(pool_size=(2,2)),
                        Flatten(),
                        Dense(512, activation='relu', kernel_regularizer=l2),
                        Dropout(0.5),
                        Dense(256, activation='relu', kernel_regularizer=l2),
                        Dropout(0.5),
                        Dense(128, activation='relu', kernel_regularizer=l2),
                        BatchNormalization(),
                        Dense(10, activation='softmax', kernel_regularizer=l2)])
    return model

model = get_cnn_model()
model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 32, 32, 256) 7168

https://colab.research.google.com/drive/1sFsW50W3dE9yFcm9QbBts8XnCpbLQoXY#scrollTo=8OrHY7TRz_Fx&printMode=true 8/16
12/3/23, 3:35 PM Image Classifier for the SVHN Dataset.ipynb - Colaboratory
max_pooling2d (MaxPooling2D (None, 16, 16, 256) 0
)

conv2d_1 (Conv2D) (None, 16, 16, 128) 295040

max_pooling2d_1 (MaxPooling (None, 8, 8, 128) 0


2D)

flatten_1 (Flatten) (None, 8192) 0

dense_4 (Dense) (None, 512) 4194816

dropout (Dropout) (None, 512) 0

dense_5 (Dense) (None, 256) 131328

dropout_1 (Dropout) (None, 256) 0

dense_6 (Dense) (None, 128) 32896

batch_normalization (BatchN (None, 128) 512


ormalization)

dense_7 (Dense) (None, 10) 1290

=================================================================
Total params: 4,663,050
Trainable params: 4,662,794
Non-trainable params: 256
_________________________________________________________________
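
The convolutional parameter counts can be verified the same way: each Conv2D layer has
(kernel_h × kernel_w × in_channels + 1) × filters parameters.

# Hand-computing the Conv2D parameter counts shown above.
print((3*3*3 + 1) * 256)      # 7168   -> conv2d
print((3*3*256 + 1) * 128)    # 295040 -> conv2d_1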

# Defining the two callbacks for the CNN training run: early stopping and best-model checkpointing.

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

def get_checkpoint_best():
    return ModelCheckpoint(filepath='checkpoints_every_epoch_cnn/checkpoint_{epoch:03d}',
                           monitor='val_loss',
                           save_weights_only=True,
                           save_best_only=True,
                           verbose=1)

def get_early_stopping():
    return EarlyStopping(patience=4, monitor='loss')

checkpoint_best_cnn = get_checkpoint_best()
early_stopping = get_early_stopping()

# Clear directory
! rm -r checkpoints_every_epoch_cnn

rm: cannot remove 'checkpoints_every_epoch_cnn': No such file or directory


# Compiling and training the model, making use of both training and validation sets during training.

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

callbacks = [checkpoint_best_cnn, early_stopping]

history = model.fit(X_train, y_train, epochs=30, batch_size=128, validation_split=0.15, callbacks=callbacks)

Epoch 1/30
487/487 [==============================] - ETA: 0s - loss: 2.8082 - accuracy: 0.15
Epoch 1: val_loss improved from inf to 2.45616, saving model to checkpoints_every_
487/487 [==============================] - 25s 29ms/step - loss: 2.8082 - accuracy
Epoch 2/30
486/487 [============================>.] - ETA: 0s - loss: 2.4293 - accuracy: 0.17
Epoch 2: val_loss improved from 2.45616 to 2.36496, saving model to checkpoints_ev
487/487 [==============================] - 10s 20ms/step - loss: 2.4293 - accuracy
Epoch 3/30
484/487 [============================>.] - ETA: 0s - loss: 2.3453 - accuracy: 0.18
Epoch 3: val_loss improved from 2.36496 to 2.31235, saving model to checkpoints_ev
487/487 [==============================] - 10s 20ms/step - loss: 2.3452 - accuracy
Epoch 4/30
486/487 [============================>.] - ETA: 0s - loss: 2.3057 - accuracy: 0.18
Epoch 4: val_loss improved from 2.31235 to 2.28506, saving model to checkpoints_ev
487/487 [==============================] - 10s 20ms/step - loss: 2.3057 - accuracy
Epoch 5/30
487/487 [==============================] - ETA: 0s - loss: 2.2867 - accuracy: 0.18
Epoch 5: val_loss improved from 2.28506 to 2.27085, saving model to checkpoints_ev
487/487 [==============================] - 10s 20ms/step - loss: 2.2867 - accuracy
Epoch 6/30
486/487 [============================>.] - ETA: 0s - loss: 2.2726 - accuracy: 0.18
Epoch 6: val_loss improved from 2.27085 to 2.26185, saving model to checkpoints_ev
487/487 [==============================] - 11s 22ms/step - loss: 2.2726 - accuracy
Epoch 7/30
485/487 [============================>.] - ETA: 0s - loss: 2.2645 - accuracy: 0.18
Epoch 7: val_loss improved from 2.26185 to 2.25620, saving model to checkpoints_ev
487/487 [==============================] - 11s 22ms/step - loss: 2.2646 - accuracy
Epoch 8/30
486/487 [============================>.] - ETA: 0s - loss: 2.2600 - accuracy: 0.18
Epoch 8: val_loss improved from 2.25620 to 2.25264, saving model to checkpoints_ev
487/487 [==============================] - 11s 23ms/step - loss: 2.2600 - accuracy
Epoch 9/30
485/487 [============================>.] - ETA: 0s - loss: 2.2569 - accuracy: 0.18
Epoch 9: val_loss improved from 2.25264 to 2.25020, saving model to checkpoints_ev
487/487 [==============================] - 12s 25ms/step - loss: 2.2569 - accuracy
Epoch 10/30
487/487 [==============================] - ETA: 0s - loss: 2.2547 - accuracy: 0.18
Epoch 10: val_loss did not improve from 2.25020
487/487 [==============================] - 11s 23ms/step - loss: 2.2547 - accuracy
Epoch 11/30
484/487 [============================>.] - ETA: 0s - loss: 2.2541 - accuracy: 0.18
Epoch 11: val_loss improved from 2.25020 to 2.24757, saving model to checkpoints_e
487/487 [==============================] - 10s 21ms/step - loss: 2.2541 - accuracy
Epoch 12/30
486/487 [============================>.] - ETA: 0s - loss: 2.2512 - accuracy: 0.18
Epoch 12: val_loss improved from 2.24757 to 2.24603, saving model to checkpoints_e
487/487 [==============================] - 12s 24ms/step - loss: 2.2513 - accuracy
Epoch 13/30
484/487 [============================>.] - ETA: 0s - loss: 2.2513 - accuracy: 0.18
Epoch 13: val_loss improved from 2.24603 to 2.24540, saving model to checkpoints_e
487/487 [==============================] - 11s 23ms/step - loss: 2.2514 - accuracy

Epoch 14/30
487/487 [==============================] - ETA: 0s - loss: 2.0720 - accuracy: 0.26
Epoch 14: val_loss improved from 2.24540 to 1.73145, saving model to checkpoints_e
487/487 [==============================] - 11s 23ms/step - loss: 2.0720 - accuracy
Epoch 15/30

# Plotting the learning curves for loss vs epoch for both training and validation sets.

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Train_Loss','Validation_Loss'], loc='upper right')
plt.title("Loss v/s Epochs")
plt.show()

# Plotting the learning curves for accuracy vs epoch for both training and validation sets.

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend(['Train_Accuracy','Validation_Accuracy'], loc='lower right')
plt.title("Accuracy v/s Epochs")
plt.show()


# Computing and displaying the loss and accuracy of the trained model on the test set.

test_loss, test_accuracy = model.evaluate(X_test, y_test)


print(f"Test Loss is {test_loss}")
print(f"Test Accuracy is {test_accuracy}")

814/814 [==============================] - 3s 4ms/step - loss: 0.5916 - accuracy: 0.8


Test Loss is 0.5916429758071899
Test Accuracy is 0.875
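
The reported accuracy can also be reproduced by hand from the softmax outputs; a minimal
sketch using the arrays already in memory:

# Argmax over the class probabilities, compared against the integer labels.
probs = model.predict(X_test, batch_size=128)
manual_acc = np.mean(np.argmax(probs, axis=1) == y_test.flatten())
print(f"Manual test accuracy: {manual_acc}")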

4. Get model predictions


! ls -lh checkpoints_every_epoch

total 1.2G
-rw-r--r-- 1 root root 85 Mar 17 13:33 checkpoint
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_001.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_001.index
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_002.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_002.index
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_003.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_003.index
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_004.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_004.index
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_005.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_005.index
-rw-r--r-- 1 root root 169M Mar 17 13:32 checkpoint_006.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:32 checkpoint_006.index
-rw-r--r-- 1 root root 169M Mar 17 13:33 checkpoint_017.data-00000-of-00001
-rw-r--r-- 1 root root 1.7K Mar 17 13:33 checkpoint_017.index

! ls -lh checkpoints_every_epoch_cnn

total 1.2G
-rw-r--r-- 1 root root 85 Mar 17 13:41 checkpoint
-rw-r--r-- 1 root root 54M Mar 17 13:36 checkpoint_001.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:36 checkpoint_001.index
-rw-r--r-- 1 root root 54M Mar 17 13:36 checkpoint_002.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:36 checkpoint_002.index
-rw-r--r-- 1 root root 54M Mar 17 13:36 checkpoint_003.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:36 checkpoint_003.index
-rw-r--r-- 1 root root 54M Mar 17 13:36 checkpoint_004.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:36 checkpoint_004.index
-rw-r--r-- 1 root root 54M Mar 17 13:37 checkpoint_005.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:37 checkpoint_005.index
-rw-r--r-- 1 root root 54M Mar 17 13:37 checkpoint_006.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:37 checkpoint_006.index
-rw-r--r-- 1 root root 54M Mar 17 13:37 checkpoint_007.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:37 checkpoint_007.index
-rw-r--r-- 1 root root 54M Mar 17 13:37 checkpoint_008.data-00000-of-00001

-rw-r--r-- 1 root root 2.9K Mar 17 13:37 checkpoint_008.index
-rw-r--r-- 1 root root 54M Mar 17 13:37 checkpoint_009.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:37 checkpoint_009.index
-rw-r--r-- 1 root root 54M Mar 17 13:38 checkpoint_011.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:38 checkpoint_011.index
-rw-r--r-- 1 root root 54M Mar 17 13:38 checkpoint_012.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:38 checkpoint_012.index
-rw-r--r-- 1 root root 54M Mar 17 13:38 checkpoint_013.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:38 checkpoint_013.index
-rw-r--r-- 1 root root 54M Mar 17 13:38 checkpoint_014.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:38 checkpoint_014.index
-rw-r--r-- 1 root root 54M Mar 17 13:38 checkpoint_015.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:38 checkpoint_015.index
-rw-r--r-- 1 root root 54M Mar 17 13:39 checkpoint_016.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:39 checkpoint_016.index
-rw-r--r-- 1 root root 54M Mar 17 13:39 checkpoint_017.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:39 checkpoint_017.index
-rw-r--r-- 1 root root 54M Mar 17 13:39 checkpoint_018.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:39 checkpoint_018.index
-rw-r--r-- 1 root root 54M Mar 17 13:39 checkpoint_019.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:39 checkpoint_019.index
-rw-r--r-- 1 root root 54M Mar 17 13:39 checkpoint_020.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:39 checkpoint_020.index
-rw-r--r-- 1 root root 54M Mar 17 13:40 checkpoint_023.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:40 checkpoint_023.index
-rw-r--r-- 1 root root 54M Mar 17 13:40 checkpoint_025.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:40 checkpoint_025.index
-rw-r--r-- 1 root root 54M Mar 17 13:41 checkpoint_028.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:41 checkpoint_028.index
-rw-r--r-- 1 root root 54M Mar 17 13:41 checkpoint_029.data-00000-of-00001
-rw-r--r-- 1 root root 2.9K Mar 17 13:41 checkpoint_029.index

# Loading the best weights for the MLP model saved during the training run.

model_val_mlp = get_mlp_model()
model_val_mlp.load_weights(filepath='checkpoints_every_epoch/checkpoint_017')

<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x7f27c7dce7f0>
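
Rather than hard-coding checkpoint_017, the most recent checkpoint can be looked up
programmatically; a sketch using tf.train.latest_checkpoint, which reads the bookkeeping
file that ModelCheckpoint writes. Since save_best_only=True, checkpoints are only written
on improvement, so the latest checkpoint is also the best one.

# Resolve the newest checkpoint prefix in the directory and load it.
latest = tf.train.latest_checkpoint('checkpoints_every_epoch')
model_val_mlp.load_weights(latest)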


# Randomly selecting 5 images and corresponding labels from the test set and displaying the model's predictions.

num_test_images = X_test.shape[0]

random_inx = np.random.choice(num_test_images, 5)
random_test_images = X_test[random_inx, ...]
random_test_labels = y_test[random_inx, ...]

predictions = model_val_mlp.predict(random_test_images)

fig, axes = plt.subplots(5, 2, figsize=(20, 16))
fig.subplots_adjust(hspace=0.4, wspace=-0.2)

for i, (prediction, image, label) in enumerate(zip(predictions, random_test_images, random_test_labels)):
    axes[i, 0].imshow(np.squeeze(image))
    axes[i, 0].get_xaxis().set_visible(False)
    axes[i, 0].get_yaxis().set_visible(False)
    axes[i, 0].text(10., -1.5, f'Digit {label}')
    axes[i, 1].bar(np.arange(len(prediction)), prediction)
    axes[i, 1].set_xticks(np.arange(len(prediction)))
    axes[i, 1].set_title(f"Categorical distribution. Model prediction: {np.argmax(prediction)}")

plt.show()


1/1 [==============================] - 0s 153ms/step

# Loading the best weights for the CNN model saved during the training run.

model_val_cnn = get_cnn_model()
model_val_cnn.load_weights(filepath='checkpoints_every_epoch_cnn/checkpoint_029')

<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x7f27ca696f70>

# Randomly selecting 5 images and corresponding labels from the test set and displaying the model's predictions.

num_test_images = X_test.shape[0]

random_inx = np.random.choice(num_test_images, 5)
random_test_images = X_test[random_inx, ...]
random_test_labels = y_test[random_inx, ...]

predictions = model_val_cnn.predict(random_test_images)

fig, axes = plt.subplots(5, 2, figsize=(20, 16))
fig.subplots_adjust(hspace=0.4, wspace=-0.2)

for i, (prediction, image, label) in enumerate(zip(predictions, random_test_images, random_test_labels)):
    axes[i, 0].imshow(np.squeeze(image))
    axes[i, 0].get_xaxis().set_visible(False)
    axes[i, 0].get_yaxis().set_visible(False)
    axes[i, 0].text(10., -1.5, f'Digit {label}')
    axes[i, 1].bar(np.arange(len(prediction)), prediction)
    axes[i, 1].set_xticks(np.arange(len(prediction)))
    axes[i, 1].set_title(f"Categorical distribution. Model prediction: {np.argmax(prediction)}")

plt.show()


1/1 [==============================] - 0s 158ms/step
