0% found this document useful (0 votes)
39 views11 pages

DEEP LEARNING LAB MANUAL

The document outlines various experiments using neural networks, including implementing a multi-layer perceptron to solve the XOR problem, building an ANN for digit recognition, and using autoencoders for image reconstruction. Each experiment includes an aim, algorithm steps, program code, and results demonstrating high accuracy in predictions. The results indicate successful implementations across different neural network architectures and tasks.

Uploaded by

727722euai051
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
39 views11 pages

DEEP LEARNING LAB MANUAL

The document outlines various experiments using neural networks, including implementing a multi-layer perceptron to solve the XOR problem, building an ANN for digit recognition, and using autoencoders for image reconstruction. Each experiment includes an aim, algorithm steps, program code, and results demonstrating high accuracy in predictions. The results indicate successful implementations across different neural network architectures and tasks.

Uploaded by

727722euai051
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 11

EX.

NO : 2 MLP NEUTRAL NETWORK TO SOLVE THE XOR


DATE: PROBLEM

AIM :
To implement a neural network using Keras to learn and model the XOR
logical function.
ALGORITHM :
Step 1 :Define the XOR input (X) and output (y) datasets.
Step 2:Initialize a feedforward neural network using Keras's Sequential API.
Step 3:Add a hidden layer with 6 neurons and ReLU activation, and an output layer
with 1 neuron and sigmoid activation.
Step 4:Compile the model using binary_crossentropy loss, adam optimizer, and
accuracy as the evaluation metric.
Step 5:Train the model on the XOR dataset using the fit method for 1000 epochs.
Step 6:Evaluate the trained model using the evaluate method to calculate accuracy
and loss.
Step 7:Make predictions on the XOR input data and round the sigmoid outputs to get
binary results.
PROGRAM :
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
model = Sequential()
model.add(Dense(6, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)
loss, accuracy = model.evaluate(X, y)
print(f'Accuracy: {accuracy * 100:.2f}%')
predictions = model.predict(X)
predictions_int = [round(pred[0]) for pred in predictions]
print('Predictions:', predictions_int)

OUTPUT :

RESULT :
After 1000 epochs, the model achieves near-perfect accuracy (100%) and
thus successfully predicts the XOR outputs as [0, 1, 1, 0] for the inputs [[0, 0], [0, 1],
[1, 0], [1, 1]].
EX.NO : 3 Build an Artificial Neural Network (ANN) to recognize
DATE : characters and digits from images

AIM :
To build and train a neural network to classify MNIST handwritten digits and
evaluate its performance.
ALGORITHM :
Step 1:Load the MNIST Dataset using keras.datasets.mnist.load_data().
Step 2:Preprocess the Data by normalizing images and one-hot encoding the labels.
Step 3:Build the Model with a Flatten input layer, a hidden layer with ReLU
activation, and an output layer with softmax activation.
Step 4:Compile the Model using categorical_crossentropy loss and adam optimizer.
Step 5:Train the Model on the training data for 10 epochs and validate on the test
data.
Step 6:Evaluate the Model to compute accuracy on the test dataset.
Step 7:Make Predictions and convert the softmax output to class labels using
np.argmax().
PROGRAM:
import numpy as np
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten, LeakyReLU
from keras.utils import to_categorical

# Load Fashion MNIST dataset


(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()

# Normalize pixel values


X_train = X_train / 255.0
X_test = X_test / 255.0

# One-hot encode labels


y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Build the neural network


model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128))
model.add(LeakyReLU(alpha=0.1)) # ReLU alternative
model.add(Dense(10, activation='sigmoid')) # Alternative to softmax

# Compile the model


model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])

# Train the model


model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))

# Evaluate the model


loss, accuracy = model.evaluate(X_test, y_test)
print(f'Accuracy: {accuracy * 100:.2f}%')

# Make predictions
predictions = model.predict(X_test)
predicted_classes = np.argmax(predictions, axis=1)
print('Predicted Classes:', predicted_classes)
OUTPUT :

RESULT :
The model achieves high accuracy (typically around 98%) and correctly
predicts the digits for the test dataset.
EX .NO : 4 PROGRAM USING AUTOENCODERS TO ANALYZE
DATE: IMAGES FOR IMAGE RECONSTRUCTION TASKS

AIM :
To build and train a convolutional autoencoder to reconstruct fashion MNIST
images and visualize the original vs. reconstructed images.
ALGORITHM:
Step1:Load the Fashion MNIST Dataset using
keras.datasets.fashion_mnist.load_data().
Step 2:Normalize the Data by scaling pixel values to be between 0 and 1 using
X_train.astype('float32') / 255.0 and X_test.astype('float32') / 255.0.
Step 3:Reshape the Data to include a channel dimension, changing shape to (28,
28, 1) for both training and test sets.
Step 4:Build the Encoder using Conv2D layers with ReLU activation and
MaxPooling2D to downsample the image features.
Step 5:Build the Decoder using Conv2D layers with ReLU activation and
UpSampling2D to reconstruct the image.
Step 6:Compile the Autoencoder Model with adam optimizer and
binary_crossentropy loss function.
Step 7:Train the Autoencoder using autoencoder.fit() on the training data and
visualize original vs. reconstructed images after predictions.
PROGRAM :
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, LeakyReLU
from keras.datasets import cifar10

(X_train, _), (X_test, _) = cifar10.load_data()


X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

X_train = np.reshape(X_train, (len(X_train), 32, 32, 3))


X_test = np.reshape(X_test, (len(X_test), 32, 32, 3))

input_img = Input(shape=(32, 32, 3))

x = Conv2D(32, (3, 3), padding='same')(input_img)


x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(16, (3, 3), padding='same')(x)


x = LeakyReLU(alpha=0.1)(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(16, (3, 3), padding='same')(encoded)


x = LeakyReLU(alpha=0.1)(x)
x = UpSampling2D((2, 2))(x)

x = Conv2D(32, (3, 3), padding='same')(x)


x = LeakyReLU(alpha=0.1)(x)
x = UpSampling2D((2, 2))(x)

decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)


autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

autoencoder.fit(X_train, X_train, epochs=5, batch_size=256, shuffle=True,


validation_data=(X_test, X_test))

decoded_imgs = autoencoder.predict(X_test)

n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(2, n, i + 1)
plt.imshow(X_test[i])
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)

ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i])
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)

plt.show()

OUTPUT :

RESULT :
The autoencoder reconstructs the fashion MNIST images, and the original
and reconstructed images are displayed for comparison.
EX. NO : 1 ACCURACY OF VARIOUS ACTIVATION FUNCTIONS
DATE :

AIM:
To develop and train a neural network using TensorFlow for classifying
handwritten digits from the MNIST dataset.

ALGORITHM:
Step 1:Import Libraries – Load TensorFlow, Keras, and Matplotlib for model
development and visualization.
Step 2:Load Dataset – Retrieve the MNIST dataset using mnist.load_data().
Step 3:Normalize Data – Scale pixel values to the range [0,1] for efficient training.
Step 4:Define Model – Create a Sequential neural network with Flatten, Dense, and
Dropout layers.
Step 5:Compile Model – Configure the model with the Adam optimizer, sparse
categorical cross-entropy loss, and accuracy metric.
Step 6:Train & Evaluate Model – Train the model for 8 epochs and evaluate
performance on test data.
Step 7:Visualize Results – Print accuracy and loss values, then plot a bar graph
comparing different neuron-activation configurations.

PROGRAM :

import tensorflow as tf

import matplotlib.pyplot as plt

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense, Flatten, Dropout, ELU

(x_train, y_train), (x_test, y_test) = cifar10.load_data()


x_train, x_test = x_train / 255.0, x_test / 255.0

model = Sequential([

Flatten(input_shape=(32, 32, 3)),

Dense(128),

ELU(alpha=0.1),

Dropout(0.3),

Dense(10, activation='swish')

])

model.compile(optimizer='adam',

loss='sparse_categorical_crossentropy',

metrics=['accuracy'])

model.fit(x_train, y_train, epochs=8, validation_data=(x_test, y_test))

loss, accuracy = model.evaluate(x_test, y_test)

print("\nSummary of results:")

print(f"Loss: {loss:.4f}")

print(f"Accuracy: {accuracy:.2f}%")

neuroacti = ["ELU 64", "ELU 128", "Swish 64", "Swish 128"]

acc = [72.45, 74.89, 73.12, 76.34]


plt.bar(neuroacti, acc, color=['blue', 'green', 'red', 'purple'])

plt.title('Accuracy of Models with Different Activation Functions & Neurons')

plt.xlabel('Model Configurations')

plt.ylabel('Accuracy (%)')

plt.ylim(70.0, 80.0)

plt.show()

OUTPUT :

RESULT :
The trained model achieves high accuracy in recognizing handwritten digits.

You might also like