DL INTERNAL

The document outlines various experiments involving neural networks for tasks such as predicting house prices, classifying handwritten digits, and recognizing images of dogs and cats. It also covers the implementation of one-hot encoding for words and characters, as well as word embeddings for sentiment analysis using the IMDB dataset. Each experiment includes code snippets and results demonstrating the execution of the respective models.

EXPERIMENT 4

Design a neural network for predicting house prices using Boston Housing Price dataset.

from tensorflow.keras.datasets import boston_housing
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler

# Load the Boston Housing data and standardize the 13 input features
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(13,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

history = model.fit(x_train, y_train,
                    epochs=100,
                    batch_size=32,
                    validation_data=(x_test, y_test))

test_loss = model.evaluate(x_test, y_test)
print('Test loss:', test_loss)

from sklearn.metrics import mean_absolute_error
y_pred = model.predict(x_test)
mae = mean_absolute_error(y_test, y_pred)
print('Mean Absolute Error:', mae)

import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
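
To see the trained model in use, one can predict the price of a single standardized test sample. A minimal sketch, assuming the cells above have been run (targets in this dataset are median house values in thousands of dollars):

# Predict the price of the first test house (targets are in $1000s)
sample = x_test[:1]  # keep the batch dimension: shape (1, 13)
predicted = model.predict(sample)[0][0]
print(f"Predicted: {predicted:.1f}k$  Actual: {y_test[0]:.1f}k$")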

OUTPUT:

RESULT: Neural network for predicting house prices using the Boston Housing Price dataset is
executed.

EXPERIMENT 5

Build a Convolutional Neural Network for MNIST Handwritten Digit Classification.


MNIST Handwritten Digit Classification Dataset:
The MNIST dataset is a popular benchmark dataset for image classification tasks. It consists
of 60,000 grayscale images of handwritten digits (0 to 9) for training and 10,000 images for
testing. Each image is 28 x 28 pixels in size, and each pixel value ranges from 0 to 255. The
goal of the task is to correctly classify each image into one of the 10 possible digit classes.
from tensorflow.keras.datasets import mnist
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('x_train:', x_train.shape)
print('y_train:', y_train.shape)
print('x_test:', x_test.shape)
print('y_test:', y_test.shape)

(_, IMAGE_WIDTH, IMAGE_HEIGHT) = x_train.shape
IMAGE_CHANNELS = 1
print('IMAGE_WIDTH:', IMAGE_WIDTH)
print('IMAGE_HEIGHT:', IMAGE_HEIGHT)
print('IMAGE_CHANNELS:', IMAGE_CHANNELS)

# Inspect the raw pixel values of the first training image
pd.DataFrame(x_train[0])
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()

# Scale pixel values from [0, 255] to [0, 1] (only once)
x_train = x_train / 255.0
x_test = x_test / 255.0
pd.DataFrame(x_train[0])

# Add a trailing channel axis: (28, 28) -> (28, 28, 1)
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu',
                 input_shape=(28, 28, 1), padding="valid", name="Conv1"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="valid", name="Pool1"))
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), activation='relu',
                 padding="same", name="Conv2"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="valid", name="Pool2"))
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), activation='relu',
                 padding="same", name="Conv3"))
model.add(Flatten(name="Flatten"))
model.add(Dense(64, activation='relu', name="Dense1"))
model.add(Dropout(0.5, name="Dropout"))
model.add(Dense(10, activation='softmax', name="Output"))
model.summary()

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10, batch_size=128,
                    validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
plt.xlabel('Epoch Number')
plt.ylabel('Accuracy')
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.legend()
plt.title('Training vs Validation Accuracy')
plt.show()
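
As a sanity check after training, one can classify a single test digit; the softmax output is a 10-way probability vector, and np.argmax picks the predicted class. A minimal sketch, assuming the cells above have been run:

# Classify the first test image and compare with the ground-truth label
probs = model.predict(x_test[:1])  # shape (1, 10)
print('Predicted digit:', np.argmax(probs[0]), ' True digit:', y_test[0])
plt.imshow(x_test[0].squeeze(), cmap=plt.cm.binary)
plt.title(f'Predicted: {np.argmax(probs[0])}')
plt.show()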

OUTPUT:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets
/mnist.npz
11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
x_train: (60000, 28, 28)
y_train: (60000,)
x_test: (10000, 28, 28)
y_test: (10000,)
IMAGE_WIDTH: 28
IMAGE_HEIGHT: 28
IMAGE_CHANNELS: 1

RESULT: Convolutional Neural Network for MNIST Handwritten Digit Classification is executed.

EXPERIMENT 6
Build a Convolutional Neural Network for simple image (dogs and cats) classification.

!apt-get install unrar -y
!unrar x "/content/dogs-vs-cats.rar" "/content/dogs-vs-cats/"
!ls "/content/dogs-vs-cats/"

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

dataset_dir = "/content/dogs-vs-cats/"
IMG_SIZE = (150, 150)
BATCH_SIZE = 32

# Augment the training images and reserve 20% of the data for validation
train_datagen = ImageDataGenerator(
    rescale=1.0/255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2  # 80% train, 20% validation
)
train_generator = train_datagen.flow_from_directory(
    dataset_dir,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='binary',  # binary classification (dogs vs cats)
    subset='training'
)
val_generator = train_datagen.flow_from_directory(
    dataset_dir,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='binary',
    subset='validation'
)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3),
           strides=(1, 1), padding="same"),
    MaxPooling2D((2, 2), strides=(2, 2)),
    Conv2D(64, (3, 3), activation='relu', padding="same"),
    MaxPooling2D((2, 2), strides=(2, 2)),
    Conv2D(128, (3, 3), activation='relu', padding="same"),
    MaxPooling2D((2, 2), strides=(2, 2)),
    Flatten(),
    Dense(512, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid')  # binary output: dog or cat
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

history = model.fit(
    train_generator,
    validation_data=val_generator,
    epochs=10
)
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Model Accuracy')
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.title('Model Loss')
plt.show()
import numpy as np
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt
def predict_image(img_path):
    img = image.load_img(img_path, target_size=(150, 150))
    img_array = image.img_to_array(img) / 255.0
    img_array = np.expand_dims(img_array, axis=0)
    prediction = model.predict(img_array)[0][0]
    if prediction > 0.5:
        label = "Dog"  # Define label
        confidence = prediction  # Define confidence
        print(f"The image is a Dog ({prediction:.2f})")
    else:
        label = "Cat"  # Define label
        confidence = 1 - prediction  # Define confidence
        print(f"The image is a Cat ({1 - prediction:.2f})")
    plt.imshow(image.load_img(img_path))
    plt.axis("off")
    plt.title(f"Prediction: {label} ({confidence:.2f})")
    plt.show()

predict_image("/content/240_F_97589769_t45CqXyzjz0KXwoBZT9PRaWGHRk5hQqQ.jpg")
OUTPUT:

RESULT: Convolutional Neural Network for simple image (dogs and cats) classification is
executed.

EXPERIMENT 7

Use a pre-trained convolutional neural network (VGG16) for image classification.


Procedure:
VGG16 is a convolutional neural network (CNN) architecture that was developed by
researchers at the Visual Geometry Group (VGG) at the University of Oxford. It was introduced
in the paper titled "Very Deep Convolutional Networks for Large-Scale Image Recognition"
by Karen Simonyan and Andrew Zisserman in 2014.
The VGG16 architecture consists of 16 weight layers: 13 convolutional layers and 3 fully
connected layers. The input to the network is an RGB image of size 224x224. Small 3x3
convolutional filters are used throughout, which lets the network learn complex features
with relatively few parameters.
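
To inspect the architecture described above, one can load the full network, including its three fully connected layers, and print a layer-by-layer summary. A minimal sketch (note that the ImageNet weights for the full model, roughly 500 MB, are downloaded on first use):

import tensorflow as tf

# Load the complete VGG16 with its classifier head
vgg = tf.keras.applications.VGG16(weights='imagenet', include_top=True)
vgg.summary()  # 13 conv + 3 dense layers, ~138M parameters, input (224, 224, 3)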

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt

dataset_name = "cats_vs_dogs"
dataset, info = tfds.load(dataset_name, as_supervised=True, with_info=True)
train_data = dataset['train'].take(20000)           # First 20,000 for training
val_data = dataset['train'].skip(20000).take(5000)  # Next 5,000 for validation

def preprocess(image, label):
    image = tf.image.resize(image, (224, 224))  # Resize to VGG16's expected size
    image = image / 255.0                       # Normalize to [0, 1]
    return image, label

# Shuffle before batching so individual examples (not whole batches) are shuffled
train_data = train_data.map(preprocess).shuffle(1000).batch(32)
val_data = val_data.map(preprocess).batch(32)

# Load VGG16 without its classifier head and freeze the convolutional base
base_model = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
                                         include_top=False, weights='imagenet')
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')  # Binary classification
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the classifier head (3 epochs, matching the output below)
history = model.fit(train_data, validation_data=val_data, epochs=3)

loss, acc = model.evaluate(val_data)
print(f"\nValidation Accuracy: {acc * 100:.2f}%")

def show_prediction():
    image, label = next(iter(val_data))
    img = image[0].numpy()
    true_label = label[0].numpy()
    prediction = model.predict(tf.expand_dims(image[0], axis=0))
    predicted_label = "Dog" if prediction[0][0] > 0.5 else "Cat"
    plt.imshow(img)
    plt.title(f"Predicted: {predicted_label}, Actual: {'Dog' if true_label else 'Cat'}")
    plt.axis("off")
    plt.show()

show_prediction()
OUTPUT:
Epoch 1/3
40/40 ━━━━━━━━━━━━━━━━━━━━ 782s 19s/step - accuracy: 0.4909 - loss: 0.7935 - val_accuracy: 0.4970 - val_loss: 0.6942
Epoch 2/3
40/40 ━━━━━━━━━━━━━━━━━━━━ 778s 19s/step - accuracy: 0.5061 - loss: 0.6936 - val_accuracy: 0.4970 - val_loss: 0.6921
Epoch 3/3
40/40 ━━━━━━━━━━━━━━━━━━━━ 748s 18s/step - accuracy: 0.5139 - loss: 0.6908 - val_accuracy: 0.5730 - val_loss: 0.6870
8/8 ━━━━━━━━━━━━━━━━━━━━ 59s 5s/step - accuracy: 0.5688 - loss: 0.6870
Validation Accuracy: 57.30%
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 272ms/step

RESULT: Pre-trained convolutional neural network (VGG16) for image classification is executed.

EXPERIMENT 8

Implement one-hot encoding of words or characters.

One-hot encoding is a technique for representing categorical data numerically. In the
context of natural language processing (NLP), it can be used to represent words or
characters as vectors of numbers: each word or character is assigned a unique index, and a
vector of zeros is created whose length equals the size of the vocabulary. The position
corresponding to that index is set to 1, and all other positions remain 0.

Program 1:

# Build a vocabulary and map each word to a unique integer index
words = ['apple', 'banana', 'cherry', 'apple', 'cherry', 'banana', 'apple']
vocab = set(words)
word_to_int = {word: i for i, word in enumerate(vocab)}
int_words = [word_to_int[word] for word in words]

# Turn each integer index into a one-hot vector of length len(vocab)
one_hot_words = []
for int_word in int_words:
    one_hot_word = [0] * len(vocab)
    one_hot_word[int_word] = 1
    one_hot_words.append(one_hot_word)
print(one_hot_words)
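
Keras also provides a one_hot helper in tensorflow.keras.preprocessing.text that maps each word to an integer by hashing rather than by building an explicit vocabulary, so distinct words can occasionally collide. A minimal sketch of that alternative (the hash-space size of 50 is an arbitrary choice here):

from tensorflow.keras.preprocessing.text import one_hot

# one_hot hashes each word into the range [1, 50); repeated words always map
# to the same index, but unrelated words may collide as well
text = 'apple banana cherry apple'
encoded = one_hot(text, 50)
print(encoded)  # e.g. [13, 42, 7, 13] -- actual values depend on the hash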

Program 2:

# One-hot encode the characters of a string
input_string = 'hello world'
vocab = set(input_string)
char_to_int = {char: i for i, char in enumerate(vocab)}
int_chars = [char_to_int[char] for char in input_string]

one_hot_chars = []
for int_char in int_chars:
    one_hot_char = [0] * len(vocab)
    one_hot_char[int_char] = 1
    one_hot_chars.append(one_hot_char)
print(one_hot_chars)

Output:
[[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
[[0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]]
RESULT: One-hot encoding of words and characters is executed.

EXPERIMENT 9
AIM: Implement word embeddings for the IMDB dataset.
Word embedding is essential in natural language processing with deep learning: the technique
allows the network to learn about the meaning of words. In this experiment, we classify movie
reviews in the IMDB dataset as positive or negative and provide a visual illustration of the
learned embedding. The goal is to train a neural network to decide whether a piece of text is
globally positive or negative, a task called sentiment analysis. The first layer of our neural
network performs the word-embedding operation.

from numpy.random import seed


seed(0xdeadbeef)
import tensorflow as tf
tf.random.set_seed(0xdeadbeef)
from tensorflow import keras
imdb = keras.datasets.imdb
num_words = 20000
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    seed=1, num_words=num_words)
print(train_data[0])
print('label:', train_labels[0])
vocabulary = imdb.get_word_index()
vocabulary = {k:(v+3) for k,v in vocabulary.items()}
vocabulary["<PAD>"] = 0
vocabulary["<START>"] = 1
vocabulary["<UNK>"] = 2 # unknown
vocabulary["<UNUSED>"] = 3
index = dict([(value, key) for (key, value) in vocabulary.items()])
def decode_review(text):
    return ' '.join([index.get(i, '?') for i in text])
decode_review(train_data[0])
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=vocabulary["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=vocabulary["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
train_data[1]
model = keras.Sequential()
model.add(keras.layers.Embedding(len(vocabulary), 2, input_length=256))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(5))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_data,
                    train_labels,
                    epochs=5,
                    batch_size=100,
                    validation_data=(test_data, test_labels),
                    verbose=1)
import matplotlib.pyplot as plt
def plot_accuracy(history, miny=None):
    acc = history.history['accuracy']
    test_acc = history.history['val_accuracy']
    epochs = range(len(acc))
    plt.plot(epochs, acc)
    plt.plot(epochs, test_acc)
    if miny:
        plt.ylim(miny, 1.0)
    plt.title('accuracy')
    plt.xlabel('epoch')
    plt.figure()

plot_accuracy(history)
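
Because the embedding layer above maps each word to just two dimensions, the visual illustration promised earlier is straightforward: extract the weight matrix from the first layer and scatter-plot a few words. A minimal sketch, assuming the model above has been trained (the word list is an arbitrary choice):

# Extract the learned (vocab_size, 2) embedding matrix from the first layer
embeddings = model.layers[0].get_weights()[0]

# Plot a few hand-picked sentiment-bearing words in the 2-D embedding space
for word in ['great', 'terrible', 'boring', 'excellent', 'awful', 'wonderful']:
    idx = vocabulary.get(word)
    if idx is not None and idx < len(embeddings):
        x, y = embeddings[idx]
        plt.scatter(x, y)
        plt.annotate(word, (x, y))
plt.title('2-D word embeddings learned on IMDB')
plt.show()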

OUTPUT:

1, 13, 28, 1039, 7, 14, 23, 1856, 13, 104, 1, 13, 28, 1039, 7, 14,

RESULT: Word embeddings for the IMDB dataset are executed.
