DL Ex 13

The document describes building an autoencoder model to compress images and then using the compressed representations as inputs to a ResNet model for classification. The autoencoder is trained to reconstruct input images, and the encoder part is used to obtain lower-dimensional representations of images. A ResNet architecture is defined for the classification model and it is trained on the compressed representations to classify images.


DEEP LEARNING

PROJECT EXP 13

NAME: Y. KRISHNA TEJA


ID NO: 2100031746
SECTION: 24

CODE:
import numpy as np
import pandas as pd
import cv2
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, UpSampling2D,
                                     Flatten, Dense, BatchNormalization, Activation, Add)
from tensorflow.keras.models import Model
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Load the CSV file containing image paths and labels
df = pd.read_csv('balanced_dataset.csv')  # Update with the path to your CSV file
image_paths = df['Image_Path'].values
labels = df['Label'].values

# Load images and resize them to a fixed size
def load_and_preprocess_image(image_path):
    image = cv2.imread(image_path)         # OpenCV reads images in BGR channel order
    image = cv2.resize(image, (100, 100))  # Resize the image to 100x100
    image = image / 255.0                  # Normalize pixel values to [0, 1]
    return image

images = [load_and_preprocess_image(image_path) for image_path in image_paths]
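Note that cv2.imread returns pixels in BGR order, not RGB. Because the autoencoder only reconstructs its own input this does not break training, but if RGB order is wanted (for example when visualizing reconstructions), a conversion can be added inside the helper; a minimal sketch:

    # Optional: convert from OpenCV's BGR order to RGB before resizing and normalizing
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)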

# Convert the list of images to a numpy array
images = np.array(images)

# Encode the string labels as integers
label_encoder = LabelEncoder()
labels_encoded = label_encoder.fit_transform(labels)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(images, labels_encoded,
                                                     test_size=0.2, random_state=42)

# Define the input shape
input_shape = (100, 100, 3)  # Images are resized to 100x100 with 3 channels (RGB)
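Because the CSV is described as a balanced dataset, the split can optionally be stratified so the class proportions are preserved in both subsets; a minimal variant of the call above:

# Optional: stratified split that keeps class proportions equal in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    images, labels_encoded, test_size=0.2, random_state=42, stratify=labels_encoded)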

# Define the autoencoder architecture
input_img = Input(shape=input_shape)

# Encoder
x = Conv2D(64, (3, 3), padding='same')(input_img)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), padding='same')(x)

# Residual blocks
num_res_blocks = 3
for _ in range(num_res_blocks):
    residual = x
    x = Conv2D(64, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(64, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([residual, x])
    x = Activation('relu')(x)

# Keep a handle on the bottleneck tensor before the decoder so the encoder model
# can be built from it later (shape (50, 50, 64) for 100x100 inputs)
encoded = x

# Decoder
x = UpSampling2D((2, 2))(encoded)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)  # Output image with 3 channels (RGB)

# Define the autoencoder model
autoencoder = Model(input_img, decoded)

# Compile the autoencoder
autoencoder.compile(optimizer='adam', loss='mse')

# Train the autoencoder to reconstruct its own inputs
autoencoder.fit(X_train, X_train, epochs=10, batch_size=32,
                validation_data=(X_test, X_test))

# Extract compressed representations using the encoder part of the autoencoder
encoder = Model(input_img, encoded)
X_train_compressed = encoder.predict(X_train)
X_test_compressed = encoder.predict(X_test)
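Before wiring the compressed representations into the classifier, it helps to confirm their shape; with 100x100 inputs and a single 2x2 max-pooling step, the encoder output should be 50x50 with 64 channels. A quick check (an illustrative sketch, not part of the original listing):

print(encoder.output_shape)      # expected: (None, 50, 50, 64)
print(X_train_compressed.shape)  # (num_training_samples, 50, 50, 64)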

# Define a ResNet architecture for classification
def resnet(input_shape, num_classes):
    input_data = Input(shape=input_shape)

    x = Conv2D(64, (7, 7), strides=(2, 2), padding='same')(input_data)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    x = res_block(x, filters=[64, 64, 256], stride=1)
    x = identity_block(x, filters=[64, 64, 256])
    x = identity_block(x, filters=[64, 64, 256])

    x = res_block(x, filters=[128, 128, 512], stride=2)
    x = identity_block(x, filters=[128, 128, 512])
    x = identity_block(x, filters=[128, 128, 512])
    x = identity_block(x, filters=[128, 128, 512])

    x = res_block(x, filters=[256, 256, 1024], stride=2)
    x = identity_block(x, filters=[256, 256, 1024])
    x = identity_block(x, filters=[256, 256, 1024])
    x = identity_block(x, filters=[256, 256, 1024])
    x = identity_block(x, filters=[256, 256, 1024])
    x = identity_block(x, filters=[256, 256, 1024])

    x = res_block(x, filters=[512, 512, 2048], stride=2)
    x = identity_block(x, filters=[512, 512, 2048])
    x = identity_block(x, filters=[512, 512, 2048])

    x = Flatten()(x)
    x = Dense(num_classes, activation='softmax')(x)

    model = Model(input_data, x, name='resnet')
    return model

def identity_block(x, filters):
    f1, f2, f3 = filters
    x_shortcut = x

    x = Conv2D(filters=f1, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f2, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f3, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)

    # Add the unchanged shortcut back to the main path (shapes already match)
    x = Add()([x, x_shortcut])
    x = Activation('relu')(x)

    return x

def res_block(x, filters, stride):
    f1, f2, f3 = filters
    x_shortcut = x

    x = Conv2D(filters=f1, kernel_size=(1, 1), strides=(stride, stride), padding='valid')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f2, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f3, kernel_size=(1, 1), strides=(1, 1), padding='valid')(x)
    x = BatchNormalization()(x)

    # Project the shortcut with a 1x1 convolution so its shape matches the main path
    x_shortcut = Conv2D(filters=f3, kernel_size=(1, 1), strides=(stride, stride),
                        padding='valid')(x_shortcut)
    x_shortcut = BatchNormalization()(x_shortcut)

    x = Add()([x, x_shortcut])
    x = Activation('relu')(x)

    return x
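The two block helpers can be sanity-checked in isolation by pushing a dummy Keras tensor through them and inspecting the resulting shapes (an illustrative check only, assuming TensorFlow 2.x):

dummy = Input(shape=(50, 50, 64))
out1 = res_block(dummy, filters=[64, 64, 256], stride=2)  # spatial size halved, 256 channels
out2 = identity_block(out1, filters=[64, 64, 256])        # shape unchanged
print(out1.shape, out2.shape)  # (None, 25, 25, 256) (None, 25, 25, 256)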

# Define the ResNet model
# The input shape must match the compressed representations produced by the encoder,
# which is (50, 50, 64) for 100x100 RGB inputs
resnet_model = resnet(input_shape=X_train_compressed.shape[1:], num_classes=5)

# Compile the ResNet model
resnet_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])

# Train the ResNet model on the compressed representations
resnet_model.fit(X_train_compressed, y_train, epochs=10, batch_size=32,
                 validation_data=(X_test_compressed, y_test))

# Evaluate the ResNet model
loss, accuracy = resnet_model.evaluate(X_test_compressed, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)
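Once training finishes, the predicted class indices can be mapped back to the original string labels with the fitted LabelEncoder; a minimal sketch:

# Map predicted class indices back to the original label names
pred_probs = resnet_model.predict(X_test_compressed)
pred_labels = label_encoder.inverse_transform(np.argmax(pred_probs, axis=1))
print(pred_labels[:10])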

OUTPUT:
