
Experiment 9

Aim: To implement a Multi-Layer Neural Network on an image dataset


Code / Output:

import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
%matplotlib inline

# MNIST in Keras is a dataset of 70,000 images of handwritten digits:
# 60,000 train and 10,000 test.
# Each image is a 28x28 grayscale image.
from keras import models
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten
from sklearn.metrics import accuracy_score

# We initialize four variables x_train, y_train, x_test, y_test to store
# the train and test splits of the independent (image) and dependent
# (label) values respectively.
from keras.datasets import mnist
mnist_D = mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist_D

x_train[0].shape

x_train.shape
# three dimensions (60000 x 28 x 28)

y_train.shape
print(x_train[0].shape)
plt.matshow(x_train[0])

x_train[0]
# Normalizing the dataset
x_train = x_train/255
x_test = x_test/255
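
The division by 255 maps pixel intensities from the range [0, 255] to [0, 1], which helps training converge. A quick sanity check (a sketch, not part of the original output):

# Sketch: confirm the normalized range (assumes it runs after the division above)
print(x_train.min(), x_train.max())  # expect 0.0 and 1.0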

# Flattening the dataset from (n, 28, 28) to (n, 784)
# in order to feed it to the Dense model
x_train_flatten = x_train.reshape(len(x_train), 28*28)
x_test_flatten = x_test.reshape(len(x_test), 28*28)

# x_train has 60,000 2D arrays


print(x_train.shape)
print(x_train_flatten.shape)
x_train
# 1st image of MNIST
x_train[0]
x_train_flatten

x_train_flatten[0]
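
Flatten is imported above but never used; as a minimal sketch (not part of the original experiment), the same reshaping can be done inside the model with keras.layers.Flatten, so the raw 28x28 images can be fed in directly:

# Sketch: let the model flatten the images itself (hypothetical alternative model_alt)
model_alt = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),  # (28, 28) -> (784,)
    keras.layers.Dense(10, activation='sigmoid')
])
# model_alt.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model_alt.fit(x_train, y_train, epochs=5)  # note: takes the unflattened x_train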
# single-layer model
# Dense means a fully connected layer of neurons
model1 = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation='sigmoid')
])
# Optimizers available in Keras include:
# SGD, RMSprop, Adam, AdamW, Adadelta, Adagrad, Adamax, Adafactor,
# Nadam, Ftrl, Lion, Loss Scale Optimizer, etc.

# Adam ("Adaptive Moment Estimation") is an iterative optimization
# algorithm used to minimize the loss function during the training of
# neural networks.
# Adam can be looked at as a combination of RMSprop and Stochastic
# Gradient Descent with momentum.
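
As a minimal sketch of that combination (not part of the original experiment; the hyperparameter defaults below are the commonly used values, an assumption here), one Adam update for a parameter vector w looks like:

# Sketch: one Adam update step in NumPy (hypothetical helper adam_step)
def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    m = beta1 * m + (1 - beta1) * grad        # momentum term (the SGD-with-momentum part)
    v = beta2 * v + (1 - beta2) * grad**2     # squared-gradient term (the RMSprop part)
    m_hat = m / (1 - beta1**t)                # bias correction for step t (t >= 1)
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v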

model1.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
model1.fit(x_train_flatten, y_train, epochs=5)  # train model1 before evaluating it below
# model with 1 hidden layer
model2 = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])
# The model is trained on 1875 batches of 32 images each per epoch:
# 1875 * 32 = 60,000 images
model2.summary()
# model2.fit(x_train_flatten, y_train, epochs=5, batch_size=32)  # batch_size=32 is the Keras default
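
The 1875 figure follows from the default batch_size of 32; a one-line check (sketch, not in the original output):

# Sketch: steps per epoch with the default batch size
print(len(x_train_flatten) // 32)  # 60000 / 32 = 1875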

model2.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
model2.fit(x_train_flatten, y_train, epochs=5)

# For each batch, a metric value is evaluated.
# The current value of the loss after k batches is the mean value of the
# metric across the k batches computed so far.
# The final result is obtained as the mean of the losses computed over
# all batches.
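
A minimal sketch of that running mean (an illustration of the idea, not Keras internals verbatim):

# Sketch: the loss shown after k batches is the mean of the per-batch losses so far
batch_losses = [0.90, 0.60, 0.45, 0.40]  # hypothetical per-batch losses
running_means = [sum(batch_losses[:k + 1]) / (k + 1)
                 for k in range(len(batch_losses))]
print(running_means)  # what the progress bar would display after each batch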

model2.evaluate(x_test_flatten, y_test)

model1.evaluate(x_test_flatten, y_test)
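
accuracy_score is imported above but never used; as a sketch (not in the original output), the accuracy reported by evaluate can be cross-checked from raw predictions:

# Sketch: cross-check model2's test accuracy with sklearn
y_pred = np.argmax(model2.predict(x_test_flatten), axis=1)  # pick the highest-scoring class
print(accuracy_score(y_test, y_pred))  # should match the accuracy from model2.evaluate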
model3 = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation='swish'),
    keras.layers.Dense(70, activation='swish'),  # input_shape is only needed on the first layer
    keras.layers.Dense(10, activation='sigmoid')
])
model3.summary()

model3.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
model3.fit(x_train_flatten, y_train, epochs=7)
model4 = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation='relu'),
    keras.layers.Dense(70, activation='swish'),
    keras.layers.Dense(50, activation='swish'),
    keras.layers.Dense(10, activation='sigmoid')
])
model4.summary()

model4.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
model4.fit(x_train_flatten, y_train, epochs=7)
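
The original run stops after fitting model3 and model4; a natural follow-up sketch (not part of the original output) compares all four models on the held-out test set:

# Sketch: side-by-side test accuracy of the four models
for name, m in [('model1', model1), ('model2', model2),
                ('model3', model3), ('model4', model4)]:
    loss, acc = m.evaluate(x_test_flatten, y_test, verbose=0)
    print(name, 'test accuracy =', round(acc, 4))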

Conclusion: Thus, we have successfully implemented a Multi-Layer Neural Network on an image dataset.
