DL INTERNAL
AIM: Design a neural network for predicting house prices using the Boston Housing Price dataset.
from tensorflow.keras.datasets import boston_housing
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler

# Load the Boston Housing data (13 features per sample)
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

# Standardise the features: fit on the training set, reuse the same scaler on the test set
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(13,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
history = model.fit(x_train, y_train,
epochs=100,
batch_size=32,
validation_data=(x_test, y_test))
test_loss = model.evaluate(x_test, y_test)
print('Test loss:', test_loss)

from sklearn.metrics import mean_absolute_error
y_pred = model.predict(x_test)
mae = mean_absolute_error(y_test, y_pred)
print('Mean Absolute Error:', mae)

import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
OUTPUT:
RESULT: Neural network for predicting house prices using the Boston Housing Price dataset is
executed.
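The program listing for the MNIST experiment whose output appears below is not reproduced in this excerpt. The following is a minimal sketch consistent with the shapes and constants printed in that output; the layer sizes, epochs, and batch size are assumptions, not the original values.

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Load the MNIST data and report the shapes seen in the output below
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_test:', X_test.shape)
print('y_test:', y_test.shape)

IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS = 28, 28, 1
print('IMAGE_WIDTH:', IMAGE_WIDTH)
print('IMAGE_HEIGHT:', IMAGE_HEIGHT)
print('IMAGE_CHANNELS:', IMAGE_CHANNELS)

# Scale pixel values to [0, 1] and add the channel axis expected by Conv2D
X_train = X_train.reshape(-1, IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS) / 255.0
X_test = X_test.reshape(-1, IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS) / 255.0

# A small CNN: two conv/pool stages followed by a dense classifier over 10 digits
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_data=(X_test, y_test))
model.evaluate(X_test, y_test)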
OUTPUT:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets
/mnist.npz
11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
X_train: (60000, 28, 28)
y_train: (60000,)
X_test: (10000, 28, 28)
y_test: (10000,)
IMAGE_WIDTH: 28
IMAGE_HEIGHT: 28
IMAGE_CHANNELS: 1
RESULT: Convolution Neural Network for MNIST Handwritten Digit Classification is executed.
EXPERIMENT 6
AIM: Build a Convolution Neural Network for simple image (Dogs and Cats) classification.
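No program listing for this experiment appears in this excerpt. The sketch below shows one possible implementation; the directory names, image size, and layer sizes are assumptions, not the original code. It assumes the Dogs vs. Cats images have been arranged into train/ and validation/ folders with one subfolder per class.

import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (150, 150)   # assumed input resolution
BATCH_SIZE = 32

# Assumed directory layout: train/cats, train/dogs and validation/cats, validation/dogs
train_ds = tf.keras.utils.image_dataset_from_directory(
    'train', image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    'validation', image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# Simple CNN: rescale pixels, three conv/pool stages, then a dense binary classifier
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')   # 1 = dog, 0 = cat
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=10)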
RESULT: Convolution Neural Network for simple image (Dogs and Cats) classification is
executed.
EXPERIMENT 7
AIM: Implement a pre-trained Convolution Neural Network (VGG16) for image classification.
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
dataset_name = "cats_vs_dogs"
dataset, info = tfds.load(dataset_name, as_supervised=True, with_info=True)
train_data = dataset['train'].take(20000) # First 20,000 for training
val_data = dataset['train'].skip(20000).take(5000) # Next 5,000 for validation
def preprocess(image, label):
    image = tf.image.resize(image, (224, 224))  # Resize to VGG16 expected size
    image = image / 255.0                       # Normalize to [0,1]
    return image, label

train_data = train_data.map(preprocess).batch(32).shuffle(1000)
val_data = val_data.map(preprocess).batch(32)
base_model = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
include_top=False, weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1, activation='sigmoid') # Binary classification
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the classifier head (3 epochs, as reported in the output below)
history = model.fit(train_data, validation_data=val_data, epochs=3)
loss, acc = model.evaluate(val_data)
print(f"\nValidation Accuracy: {acc * 100:.2f}%")
def show_prediction():
    image, label = next(iter(val_data))
    img = image[0].numpy()
    true_label = label[0].numpy()
    prediction = model.predict(tf.expand_dims(image[0], axis=0))
    predicted_label = "Dog" if prediction[0][0] > 0.5 else "Cat"
    plt.imshow(img)
    plt.title(f"Predicted: {predicted_label}, Actual: {'Dog' if true_label else 'Cat'}")
    plt.axis("off")
    plt.show()

show_prediction()
OUTPUT:
Epoch 1/3
40/40 - 782s 19s/step - accuracy: 0.4909 - loss: 0.7935 - val_accuracy: 0.4970 - val_loss: 0.6942
Epoch 2/3
40/40 - 778s 19s/step - accuracy: 0.5061 - loss: 0.6936 - val_accuracy: 0.4970 - val_loss: 0.6921
Epoch 3/3
40/40 - 748s 18s/step - accuracy: 0.5139 - loss: 0.6908 - val_accuracy: 0.5730 - val_loss: 0.6870
8/8 - 59s 5s/step - accuracy: 0.5688 - loss: 0.6870
Validation Accuracy: 57.30%
1/1 - 0s 272ms/step
RESULT: Pre-trained Convolution Neural Network (VGG16) for image classification is executed.
EXPERIMENT 8
AIM: Implement one-hot encoding of words or characters.
One-hot encoding is a technique used to represent categorical data as numerical data. In the
context of natural language processing (NLP), one-hot encoding can be used to represent words
or characters as vectors of numbers.
In one-hot encoding, each word or character is assigned a unique index, and a vector of zeros
is created with length equal to the total number of words or characters in the vocabulary. The
position corresponding to the word's or character's index is set to 1, and all other positions
are set to 0.
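As a minimal word-level illustration of this idea (the sample sentence below is an assumption for demonstration, not part of the original programs):

sentence = 'the cat sat on the mat'   # assumed example sentence
words = sentence.split()
vocab = sorted(set(words))            # unique words, in a fixed order
word_to_int = {word: i for i, word in enumerate(vocab)}

one_hot_words = []
for word in words:
    vec = [0] * len(vocab)            # vector of zeros, one slot per vocabulary word
    vec[word_to_int[word]] = 1        # set the word's own position to 1
    one_hot_words.append(vec)
print(one_hot_words)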
Program 1:
Program 2:
import string
input_string = 'hello world'
vocab = set(input_string)
char_to_int = {char: i for i, char in enumerate(vocab)}
int_chars = [char_to_int[char] for char in input_string]
one_hot_chars = []
for int_char in int_chars:
    one_hot_char = [0] * len(vocab)
    one_hot_char[int_char] = 1
    one_hot_chars.append(one_hot_char)
print(one_hot_chars)
Output:
[[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
[[0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]]
RESULT: One-hot encoding of words or characters is executed.
EXPERIMENT 9
AIM: Implement word embeddings for the IMDB dataset.
Word embedding is an essential technique in natural language processing with deep learning: it
allows the network to learn a representation of the meaning of words. In this experiment, we
classify movie reviews in the IMDB dataset as positive or negative and provide a visual
illustration of the embedding.
The goal is to train a neural network to decide whether a piece of text is globally positive or
negative, a task called sentiment analysis. The first layer of the network performs an operation
called word embedding, which is essential in NLP with deep learning.
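The program listing for this experiment does not appear in this excerpt. The following is a minimal sketch of one possible setup; the vocabulary size, sequence length, embedding dimension, and epoch count are assumptions rather than the original values.

import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

VOCAB_SIZE = 10000   # keep the 10,000 most frequent words (assumed)
MAX_LEN = 256        # pad/truncate every review to 256 word indices (assumed)

# Each review is already encoded as a list of integer word indices
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB_SIZE)
x_train = pad_sequences(x_train, maxlen=MAX_LEN)
x_test = pad_sequences(x_test, maxlen=MAX_LEN)

# The Embedding layer maps each word index to a dense 16-dimensional vector
model = Sequential([
    Embedding(VOCAB_SIZE, 16),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid')   # positive / negative review
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=512, validation_data=(x_test, y_test))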
OUTPUT:
1, 13, 28, 1039, 7, 14, 23, 1856, 13, 104, 1, 13, 28, 1039, 7, 14,