DEEP LEARNING LAB MANUAL
AIM :
To implement a neural network using Keras to learn and model the XOR
logical function.
ALGORITHM :
Step 1: Define the XOR input (X) and output (y) datasets.
Step 2: Initialize a feedforward neural network using Keras's Sequential API.
Step 3: Add a hidden layer with 6 neurons and ReLU activation, and an output layer
with 1 neuron and sigmoid activation.
Step 4: Compile the model using binary_crossentropy loss, the adam optimizer, and
accuracy as the evaluation metric.
Step 5: Train the model on the XOR dataset using the fit method for 1000 epochs.
Step 6: Evaluate the trained model using the evaluate method to compute loss and
accuracy.
Step 7: Make predictions on the XOR input data and round the sigmoid outputs to get
binary results.
PROGRAM :
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# Hidden layer of 6 ReLU neurons; sigmoid output for binary classification
model = Sequential()
model.add(Dense(6, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)
# Evaluate the trained model
loss, accuracy = model.evaluate(X, y)
print(f'Accuracy: {accuracy * 100:.2f}%')
# Round the sigmoid outputs to obtain binary predictions
predictions = model.predict(X)
predictions_int = [round(pred[0]) for pred in predictions]
print('Predictions:', predictions_int)
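A brief note on why the hidden layer matters: XOR is not linearly separable, so a single sigmoid unit cannot model it. The following sanity check is a minimal sketch (not part of the recorded program) that reuses X and y from above:
# Sanity check: with no hidden layer, the model cannot fit XOR and its
# accuracy typically stays well below 100%.
single = Sequential()
single.add(Dense(1, input_dim=2, activation='sigmoid'))
single.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
single.fit(X, y, epochs=1000, verbose=0)
_, single_accuracy = single.evaluate(X, y, verbose=0)
print(f'No-hidden-layer accuracy: {single_accuracy * 100:.2f}%')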
OUTPUT :
RESULT :
After 1000 epochs, the model typically reaches 100% accuracy and correctly
predicts the XOR outputs as [0, 1, 1, 0] for the inputs [[0, 0], [0, 1],
[1, 0], [1, 1]].
EX. NO : 3 BUILD AN ARTIFICIAL NEURAL NETWORK (ANN) TO RECOGNIZE
DATE : CHARACTERS AND DIGITS FROM IMAGES
AIM :
To build and train a neural network to classify MNIST handwritten digits and
evaluate its performance.
ALGORITHM :
Step 1: Load the MNIST Dataset using keras.datasets.mnist.load_data().
Step 2: Preprocess the Data by normalizing the images and one-hot encoding the labels.
Step 3: Build the Model with a Flatten input layer, a hidden layer with ReLU
activation, and an output layer with softmax activation.
Step 4: Compile the Model using categorical_crossentropy loss and the adam optimizer.
Step 5: Train the Model on the training data for 10 epochs and validate on the test
data.
Step 6: Evaluate the Model to compute accuracy on the test dataset.
Step 7: Make Predictions and convert the softmax output to class labels using
np.argmax().
PROGRAM :
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.utils import to_categorical
# Load the data, scale pixels to [0, 1], and one-hot encode the labels
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
# Build, compile, train, and evaluate the model
model = Sequential([Flatten(input_shape=(28, 28)),
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Accuracy: {accuracy * 100:.2f}%')
# Make predictions
predictions = model.predict(X_test)
predicted_classes = np.argmax(predictions, axis=1)
print('Predicted Classes:', predicted_classes)
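To spot-check an individual prediction, one optional addition (a minimal sketch assuming matplotlib is available; not part of the recorded program) displays a test image alongside its predicted class:
# Optional spot check: show the first test image and its predicted class
import matplotlib.pyplot as plt
plt.imshow(X_test[0], cmap='gray')
plt.title(f'Predicted: {predicted_classes[0]}')
plt.show()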
OUTPUT :
RESULT :
The model achieves high accuracy (typically around 98%) and correctly
predicts the digits for the test dataset.
EX. NO : 4 PROGRAM USING AUTOENCODERS TO ANALYZE
DATE : IMAGES FOR IMAGE RECONSTRUCTION TASKS
AIM :
To build and train a convolutional autoencoder to reconstruct fashion MNIST
images and visualize the original vs. reconstructed images.
ALGORITHM :
Step 1: Load the Fashion MNIST Dataset using
keras.datasets.fashion_mnist.load_data().
Step 2: Normalize the Data by scaling pixel values to be between 0 and 1 using
X_train.astype('float32') / 255.0 and X_test.astype('float32') / 255.0.
Step 3: Reshape the Data to include a channel dimension, changing the shape to
(28, 28, 1) for both the training and test sets.
Step 4: Build the Encoder using Conv2D layers with ReLU activation and
MaxPooling2D to downsample the image features.
Step 5: Build the Decoder using Conv2D layers with ReLU activation and
UpSampling2D to reconstruct the image.
Step 6: Compile the Autoencoder Model with the adam optimizer and the
binary_crossentropy loss function.
Step 7: Train the Autoencoder using autoencoder.fit() on the training data and
visualize the original vs. reconstructed images after prediction.
PROGRAM :
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.datasets import fashion_mnist
# Load the Fashion MNIST data, normalize it, and add a channel dimension
(X_train, _), (X_test, _) = fashion_mnist.load_data()
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)
# Encoder: Conv2D + MaxPooling2D downsample the image features
input_img = Input(shape=(28, 28, 1))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# Decoder: Conv2D + UpSampling2D reconstruct the image
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(X_train, X_train, epochs=10, batch_size=128, validation_data=(X_test, X_test))
# Visualize original (top row) vs. reconstructed (bottom row) images
decoded_imgs = autoencoder.predict(X_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(X_test[i].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
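Because the functional API keeps references to the intermediate tensors, the encoder half can also be pulled out on its own to inspect the compressed representation. This is a minimal sketch using the input_img and encoded tensors from the program above, not a required part of the exercise:
# Inspect the compressed (encoded) representation of the test images
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(X_test)
print('Encoded shape:', encoded_imgs.shape)  # (10000, 14, 14, 32) for this architecture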
OUTPUT :
RESULT :
The autoencoder reconstructs the fashion MNIST images, and the original
and reconstructed images are displayed for comparison.
EX. NO : 1 ACCURACY OF VARIOUS ACTIVATION FUNCTIONS
DATE :
AIM :
To develop and train a neural network using TensorFlow for classifying
handwritten digits from the MNIST dataset.
ALGORITHM :
Step 1: Import Libraries – Load TensorFlow, Keras, and Matplotlib for model
development and visualization.
Step 2: Load Dataset – Retrieve the MNIST dataset using mnist.load_data().
Step 3: Normalize Data – Scale pixel values to the range [0, 1] for efficient training.
Step 4: Define Model – Create a Sequential neural network with Flatten, Dense, and
Dropout layers.
Step 5: Compile Model – Configure the model with the Adam optimizer, sparse
categorical cross-entropy loss, and the accuracy metric.
Step 6: Train & Evaluate Model – Train the model for 8 epochs and evaluate
performance on the test data.
Step 7: Visualize Results – Print accuracy and loss values, then plot a bar graph
comparing different neuron-activation configurations.
PROGRAM :
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, ELU
from tensorflow.keras.datasets import mnist
# Load and normalize the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0
# One configuration: a 128-neuron ELU hidden layer with dropout
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128),
    ELU(alpha=0.1),
    Dropout(0.3),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train for 8 epochs and evaluate on the test data
model.fit(X_train, y_train, epochs=8)
loss, accuracy = model.evaluate(X_test, y_test)
print("\nSummary of results:")
print(f"Loss: {loss:.4f}")
print(f"Accuracy: {accuracy * 100:.2f}%")
# Plot the accuracy of this configuration as a bar graph
plt.bar(['Dense(128) + ELU'], [accuracy * 100])
plt.xlabel('Model Configurations')
plt.ylabel('Accuracy (%)')
plt.ylim(70.0, 100.0)
plt.show()
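The program above records a single configuration. To actually compare several activation functions, as the experiment title suggests, the training loop below is a minimal sketch; the activation list and the reuse of X_train/X_test from the program are illustrative assumptions, not part of the recorded program:
# Hedged sketch: train one model per activation function and collect the
# test accuracies for the comparison bar graph.
results = {}
for act in ['relu', 'elu', 'swish', 'tanh']:
    m = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(128, activation=act),
        Dropout(0.3),
        Dense(10, activation='softmax')
    ])
    m.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    m.fit(X_train, y_train, epochs=8, verbose=0)
    results[act] = m.evaluate(X_test, y_test, verbose=0)[1] * 100
plt.bar(list(results.keys()), list(results.values()))
plt.xlabel('Model Configurations')
plt.ylabel('Accuracy (%)')
plt.show()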
OUTPUT :
RESULT :
The trained model achieves high accuracy in recognizing handwritten digits.