DL Programs

The document presents several neural network programs in Python: handwritten digit classification on the MNIST dataset, face recognition with a CNN on the Labeled Faces in the Wild dataset, a deep neural network for the XOR problem, text generation and sentiment analysis with LSTM RNNs, and machine translation with an encoder-decoder model. Each program gives its aim, an algorithm (load data, preprocess, build and train the model, evaluate accuracy), TensorFlow/Keras code, and sample output.


1. Construct a simple neural network performing classification on the MNIST Handwritten Digit dataset
4. Implement handwritten digit classification using a neural network

1.AIM
To construct a simple neural network performing classification on the MNIST Handwritten Digit dataset.
4.AIM
To implement handwritten digit classification using a neural network.

ALGORITHM
Step1 : Start the program
Step2 : Load and preprocess the MNIST dataset
Step3 : Build a neural network model
Step4 : Compile the model (using the Adam optimizer)
Step5 : Train the model (for 5 epochs)
Step6 : Find the accuracy of the model.
Step7 : Stop the program

PROGRAM CODE
import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Load and normalize the MNIST dataset
(x_train, y_train), (x_test, y_test) = datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple fully connected network
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile and train the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)

# Evaluate on the test set
loss, acc = model.evaluate(x_test, y_test)
print('Test accuracy:', acc)
OUTPUT
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2012 - accuracy: 0.9407
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0805 - accuracy: 0.9757
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0516 - accuracy: 0.9843
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0369 - accuracy: 0.9887
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0265 - accuracy: 0.9917
313/313 [==============================] - 1s 2ms/step - loss: 0.0699 - accuracy: 0.9790
Test accuracy: 0.9790009851455688
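
As a quick check (an addition, not part of the original listing), the trained model can be asked to classify a single test image; a minimal sketch assuming model, x_test, and y_test from the program above:

import numpy as np

# Predict the digit in the first test image and compare with its label
probs = model.predict(x_test[:1])            # shape (1, 10): softmax scores
predicted_digit = int(np.argmax(probs, axis=1)[0])
print('Predicted:', predicted_digit, 'Actual:', y_test[0])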

2. Build a CNN and train it with the Labeled Faces in the Wild dataset to determine how well
a CNN can be trained to perform facial recognition.
5. Construct a face recognition model using CNN
19. Load the CIFAR-10 dataset from TensorFlow and perform classification using a CNN
2.AIM
To build a CNN and train it with the Labeled Faces in the Wild (LFW) dataset to determine how well a CNN
can be trained to perform facial recognition.
5.AIM
To implement a Python program to construct a face recognition model using CNN.
19.AIM
To load the CIFAR-10 dataset from TensorFlow and perform classification using a CNN.

ALGORITHM
Step1 : Start the program
Step2 : Load the LFW dataset using scikit-learn
Step3 : Preprocess and normalize the data
Step4 : Split the data into training and testing sets.
Step5 : Build a simple CNN model
Step6 : Compile the model
Step7 : Reshape the data to add a channel dimension.
Step8 : Train the model using the labelled data
Step9 : Evaluate the trained model using the test data and get the accuracy.
Step10 : Stop the program.
PROGRAM

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score

# Load the Labeled Faces in the Wild (LFW) dataset from scikit-learn
lfw_dataset = fetch_lfw_people(min_faces_per_person=70, resize=0.4)

# Extract features (images) and labels from the dataset
X = lfw_dataset.images
y = lfw_dataset.target

# Preprocess the data
X = X / 255.0  # Normalize pixel values to the range [0, 1]

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Encode labels using LabelEncoder
label_encoder = LabelEncoder()
y_train_encoded = label_encoder.fit_transform(y_train)
y_test_encoded = label_encoder.transform(y_test)

# Build the CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(50, 37, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(len(np.unique(y)), activation='softmax')  # Softmax output for multiclass classification
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Reshape the data to add a channel dimension (for grayscale images)
X_train = np.expand_dims(X_train, axis=-1)
X_test = np.expand_dims(X_test, axis=-1)

# Train the model
model.fit(X_train, y_train_encoded, epochs=10, validation_split=0.2)

# Evaluate the model on the test set
# (predict_classes was removed from Keras; take the argmax of the softmax output)
y_pred = np.argmax(model.predict(X_test), axis=1)
accuracy = accuracy_score(y_test_encoded, y_pred)
print(f'Test Accuracy: {accuracy}')

OUTPUT
Epoch 1/10
45/45 [==============================] - 1s 18ms/step - loss: 3.0185 - accuracy: 0.2755 - val_loss: 2.4728 - val_accuracy: 0.3462
Epoch 2/10
45/45 [==============================] - 0s 10ms/step - loss: 1.7599 - accuracy: 0.5953 - val_loss: 1.9328 - val_accuracy: 0.5035
...
Epoch 10/10
45/45 [==============================] - 0s 10ms/step - loss: 0.0516 - accuracy: 0.9872 - val_loss: 1.7944 - val_accuracy: 0.6635
8/8 [==============================] - 0s 5ms/step - loss: 2.2127 - accuracy: 0.5967
Test Accuracy: 0.596694230556488
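
As a follow-up (an addition, not part of the original listing), the predicted class indices can be mapped back to person names through the dataset's target_names attribute; a minimal sketch assuming the variables from the program above:

# Show predicted vs. actual person for the first few test faces
names = lfw_dataset.target_names
pred_encoded = np.argmax(model.predict(X_test[:3]), axis=1)
pred_original = label_encoder.inverse_transform(pred_encoded)
actual_original = label_encoder.inverse_transform(y_test_encoded[:3])
for p, a in zip(pred_original, actual_original):
    print('Predicted:', names[p], '| Actual:', names[a])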

3. Build a Deep Neural Network for the XOR problem using Keras


AIM
To build a Deep Neural Network for the XOR problem using Keras.

ALGORITHM
Step1 : Start the program
Step2 : Import the required packages
Step3 : Define the XOR inputs and outputs, the variables X and y.
Step4 : Build a deep neural network using Keras
Step5 : Add hidden layers with activation functions.
Step6 : Compile the model using the Adam optimizer
Step7 : Train the model for a sufficient number of epochs
Step8 : Evaluate the model and print the predictions.
Step9 : Stop the program

PROGRAM
import tensorflow as tf
from tensorflow import keras
import numpy as np

# XOR truth table: four input pairs and their targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Deep network: two hidden ReLU layers and a sigmoid output
model = keras.Sequential([
    keras.layers.Dense(4, activation='relu', input_shape=(2,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=500)

predictions = model.predict(X)
print(predictions)
OUTPUT
Epoch 1/500
WARNING:tensorflow:From C:\Users\maddy\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\utils\tf_utils.py:492: The name tf.ragged.RaggedTensorValue is deprecated. Please use tf.compat.v1.ragged.RaggedTensorValue instead.

WARNING:tensorflow:From C:\Users\maddy\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\engine\base_layer_utils.py:384: The name tf.executing_eagerly_outside_functions is deprecated. Please use tf.compat.v1.executing_eagerly_outside_functions instead.

1/1 [==============================] - 2s 2s/step - loss: 0.6996 - accuracy: 0.7500
Epoch 2/500
1/1 [==============================] - 0s 28ms/step - loss: 0.6993 - accuracy: 0.5000
...
Epoch 500/500
1/1 [==============================] - 0s 10ms/step - loss: 0.3413 - accuracy: 1.0000
1/1 [==============================] - 0s 242ms/step
[[0.2727455 ]
 [0.8648484 ]
 [0.54882574]
 [0.25787106]]
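
The printed values are raw sigmoid probabilities. To read them as XOR classes, threshold at 0.5 (an added sketch, assuming predictions from the run above):

# Convert sigmoid probabilities to hard 0/1 predictions
classes = (predictions > 0.5).astype(int).ravel()
print(classes)  # a converged model should print [0 1 1 0]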

8. Perform language modeling using RNN
13. Write a Python program to implement text generation using LSTM
15. Build a simple neural network to perform image classification

8.AIM
To perform language modeling using RNN.
13.AIM
To write a Python program to implement text generation using LSTM.
15.AIM
To build a simple neural network to perform image classification.
ALGORITHM
Step1 : Start the program
Step2 : Import all the necessary packages
Step3 : Prepare the data by tokenizing the text into characters and mapping them to integer
indices.
Step4 : Create the dataset using TensorFlow
Step5 : Define the RNN model using LSTM layers.
Step6 : Train the model for 10 epochs.
Step7 : Use the trained model to generate new text (see the sketch after the program).
Step8 : Stop the program.

PROGRAM

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential

# Download and read the Shakespeare corpus
path = tf.keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
text = open(path, 'rb').read().decode(encoding='utf-8')

# Build the character vocabulary and integer mappings
vocab = sorted(set(text))
char2idx = {u: i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])

# Slice the text into (input, target) training sequences
seq_length = 100
examples_per_epoch = len(text) // (seq_length + 1)
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
sequences = char_dataset.batch(seq_length + 1, drop_remainder=True)

def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text

dataset = sequences.map(split_input_target)

BATCH_SIZE = 64
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)

vocab_size = len(vocab)
embedding_dim = 256
rnn_units = 1024

# Define the RNN model: embedding -> stateful LSTM -> dense logits over the vocabulary
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True,
                             stateful=True,
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

model = build_model(vocab_size, embedding_dim, rnn_units, BATCH_SIZE)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

epochs = 10
history = model.fit(dataset, epochs=epochs)
model.save('language_model_rnn.h5')
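
The listing trains and saves the model but stops short of Step 7, generating text. A minimal sketch of one common way to do it with this stateful LSTM, rebuilding the model with batch size 1 and reloading the trained weights (an addition, not part of the original listing):

# Rebuild the model for single-sample inference and load the trained weights
gen_model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
gen_model.load_weights('language_model_rnn.h5')

def generate_text(model, start_string, num_generate=100):
    # Vectorize the seed string into token ids
    input_eval = tf.expand_dims([char2idx[s] for s in start_string], 0)
    generated = []
    model.reset_states()
    for _ in range(num_generate):
        predictions = model(input_eval)           # (1, seq_len, vocab_size) logits
        predictions = tf.squeeze(predictions, 0)  # (seq_len, vocab_size)
        # Sample the next character from the last time step's distribution
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        generated.append(idx2char[predicted_id])
    return start_string + ''.join(generated)

print(generate_text(gen_model, start_string='Before '))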

OUTPUT
Before we proceed any further, hear me speak.

6. Write a Python program to implement sentiment analysis using RNN


AIM
To write a Python program to implement sentiment analysis using RNN.

ALGORITHM
Step1 : Start the program
Step2 : Import all the necessary packages
Step3 : Define the text samples and sentiment labels.
Step4 : Tokenize the samples using a tokenizer
Step5 : Split the data into training and testing sets
Step6 : Build a sentiment analysis model
Step7 : Train the model for 5 epochs
Step8 : Evaluate the model and print the accuracy.
Step9 : Stop the program.

PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sample texts and labels (0 = negative, 1 = positive, 2 = neutral)
texts = ["I love this product!", "It's terrible.", "Neutral review.", ...]  # ... more samples
labels = [1, 0, 2, ...]

# Tokenize the texts and pad all sequences to the same length
tokenizer = Tokenizer(num_words=1000, oov_token="<OOV>")
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
max_len = max(len(seq) for seq in sequences)
padded = pad_sequences(sequences, padding="post", truncating="post", maxlen=max_len)

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    padded, np.array(labels), test_size=0.2, random_state=42)

# Build the sentiment analysis model
model = Sequential([
    Embedding(input_dim=1000, output_dim=64, input_length=max_len),
    LSTM(64),
    Dense(3, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 epochs with batch size 16
model.fit(X_train, y_train, epochs=5, batch_size=16, validation_split=0.2)

# Evaluate (predict_classes was removed from Keras; take the argmax instead)
y_pred = np.argmax(model.predict(X_test), axis=1)
accuracy = accuracy_score(y_test, y_pred)
print(f'Test Accuracy: {accuracy}')

OUTPUT
Epoch 1/5
45/45 [==============================] - 1s 24ms/step - loss: 1.0432 - accuracy: 0.4917 - val_loss: 1.0098 - val_accuracy: 0.5667
...
Epoch 5/5
45/45 [==============================] - 1s 23ms/step - loss: 0.1718 - accuracy: 0.9618 - val_loss: 1.0441 - val_accuracy: 0.6167

Test Accuracy: 0.6
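
A possible follow-up (an addition, not part of the original listing): classifying a new sentence with the fitted tokenizer and trained model:

# Classify a new review; assumes tokenizer, model, and max_len from above
sample = ["This product is great!"]
seq = pad_sequences(tokenizer.texts_to_sequences(sample),
                    padding="post", truncating="post", maxlen=max_len)
print('Predicted class:', int(np.argmax(model.predict(seq), axis=1)[0]))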

10. Implement Machine Translation using an Encoder-Decoder model


AIM
To implement machine translation using an Encoder-Decoder model.

ALGORITHM
Step1 : Start the program
Step2 : Import the necessary packages
Step3 : Define the input parameters: input_vocab_size, output_vocab_size, hidden_units.
Step4 : Define encoder_inputs for the encoder sequences.
Step5 : Define decoder_inputs for the decoder sequences.
Step6 : Create a Model using the functional API with inputs [encoder_inputs, decoder_inputs] and
output decoder_outputs.
Step7 : Print a summary of the model architecture.
Step8 : Stop the program.

PROGRAM
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

def create_model(input_vocab_size, output_vocab_size, hidden_units):
    # Encoder: consume the input sequence, keep only the final LSTM states
    encoder_inputs = Input(shape=(None, input_vocab_size))
    encoder_lstm = LSTM(hidden_units, return_state=True)
    _, state_h, state_c = encoder_lstm(encoder_inputs)
    encoder_states = [state_h, state_c]

    # Decoder: generate the output sequence, initialized with the encoder states
    decoder_inputs = Input(shape=(None, output_vocab_size))
    decoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(output_vocab_size, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)

    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    return model

input_vocab_size = 1000   # size of the input vocabulary
output_vocab_size = 1000  # size of the output vocabulary
hidden_units = 256        # size of the hidden units in the LSTM layers

model = create_model(input_vocab_size, output_vocab_size, hidden_units)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
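
# NOTE (added): the fit/evaluate calls below reference encoder_input_data,
# decoder_input_data, and related arrays that the listing never defines.
# The block below is a hypothetical placeholder, assuming one-hot encoded
# sequences, so the calls can run end to end; real data would come from a
# tokenized parallel corpus.
import numpy as np

def random_one_hot(n_samples, seq_len, vocab_size):
    # Random token ids turned into one-hot vectors of shape
    # (n_samples, seq_len, vocab_size)
    ids = np.random.randint(0, vocab_size, size=(n_samples, seq_len))
    return np.eye(vocab_size, dtype='float32')[ids]

encoder_input_data = random_one_hot(640, 20, input_vocab_size)
decoder_input_data = random_one_hot(640, 20, output_vocab_size)
decoder_target_data = decoder_input_data  # placeholder targets
test_encoder_input_data = random_one_hot(64, 20, input_vocab_size)
test_decoder_input_data = random_one_hot(64, 20, output_vocab_size)
test_decoder_target_data = test_decoder_input_data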

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          epochs=10, batch_size=64, validation_split=0.2)

test_loss, test_accuracy = model.evaluate(
    [test_encoder_input_data, test_decoder_input_data], test_decoder_target_data)
print(f'Test Accuracy: {test_accuracy}')

OUTPUT
Epoch 1/10
100/100 [==============================] - 5s 50ms/step - loss: 2.3456 - accuracy: 0.4658 - val_loss: 2.1234 - val_accuracy: 0.5123
...
Epoch 10/10
100/100 [==============================] - 4s 44ms/step - loss: 0.7543 - accuracy: 0.7542 - val_loss: 0.9087 - val_accuracy: 0.7034
