Batch3_Conversion of Grayscale Images to Colored Images Using CNN
ABSTRACT:
Image colorization, the task of converting grayscale images to colour, is a challenging problem in
computer vision. This mini-project implements an automatic system that converts grayscale images
into their coloured versions using Convolutional Neural Networks (CNNs). The project learns a
grayscale-to-colour mapping with deep learning techniques, specifically CNNs trained on a large
dataset. The basic idea is to take a dataset of paired grayscale images and colour values and train
a CNN-based model that predicts pixel-wise colour values for a grayscale image. The CNN
architecture captures low-level features such as edges and textures along with higher-level
semantic information about objects and regions in the image. This combination allows the model to
infer appropriate, contextually sensible colours for each part of the image. The project unfolds in
distinct phases.
First, we gather a dataset that includes both grayscale images and their coloured counterparts.
Next, we apply a series of preprocessing steps such as resizing and normalization.
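As an illustration of the dataset and preprocessing stages, the sketch below derives a grayscale input from each colour image to form training pairs, resizes both to a fixed size, and normalizes pixel values to [0, 1]; the directory path, target size, and use of OpenCV are assumptions for illustration rather than details given in the report.

```python
import glob
import cv2
import numpy as np

def load_pairs(image_dir, size=(256, 256)):
    """Build (grayscale, colour) training pairs from a folder of colour images.

    The directory layout and target size are placeholders; adjust to the dataset used.
    """
    gray_images, colour_images = [], []
    for path in glob.glob(f"{image_dir}/*.jpg"):
        bgr = cv2.imread(path)                               # read colour image (BGR)
        if bgr is None:
            continue
        bgr = cv2.resize(bgr, size)                          # resizing step
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)         # grayscale input
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)           # colour target
        gray_images.append(gray[..., np.newaxis] / 255.0)    # normalization to [0, 1]
        colour_images.append(rgb / 255.0)
    return np.array(gray_images), np.array(colour_images)

X, Y = load_pairs("dataset/colour_images")  # hypothetical dataset folder
```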
We then build the CNN model, typically as a hierarchy of convolutional, activation, pooling, and
upsampling layers assembled stage by stage, with the earlier layers extracting features and the
later layers reconstructing the colourised image.
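A minimal encoder-decoder sketch of such a model is shown below using Keras layers; the depth, filter counts, input size, and the choice of mean squared error as the training loss are assumptions for illustration, not the report's exact architecture.

```python
from tensorflow.keras import layers, models

def build_colorizer(input_shape=(256, 256, 1)):
    """Small encoder-decoder CNN: grayscale image in, RGB image out."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Encoder: convolution + activation + pooling extract features
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        # Decoder: upsampling layers restore resolution and predict colour
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.UpSampling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.UpSampling2D(2),
        # Three output channels (RGB), values in [0, 1]
        layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")  # pixel-wise regression loss
    return model

model = build_colorizer()
# X, Y come from the preprocessing sketch above; epochs and batch size are illustrative
model.fit(X, Y, epochs=20, batch_size=16, validation_split=0.1)
```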
Finally, the model is trained with an appropriate loss function (e.g., mean squared error or a
perceptual loss) that drives the predicted colour images as close as possible to the ground-truth
colour images. The results are evaluated quantitatively with PSNR (Peak Signal-to-Noise Ratio) and
SSIM (Structural Similarity Index), as well as visually by inspecting the colorized output.
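The quantitative evaluation can be carried out with off-the-shelf metric implementations; the sketch below uses scikit-image and assumes the predicted and ground-truth images are H x W x 3 float arrays in [0, 1].

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(true_rgb, pred_rgb):
    """Compute PSNR and SSIM for one ground-truth / predicted colour image pair."""
    psnr = peak_signal_noise_ratio(true_rgb, pred_rgb, data_range=1.0)
    ssim = structural_similarity(true_rgb, pred_rgb, channel_axis=-1, data_range=1.0)
    return psnr, ssim

pred = model.predict(X[:1])[0]              # colorize one held-out image
psnr, ssim = evaluate_pair(Y[0], pred)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```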
This work showcases what a CNN can achieve when adapted for image colorization: it generates
realistic full-colour images from black-and-white inputs and offers insight into both the strengths
and the limitations of such models in creative applications.