Image Segmentation with Watershed Algorithm - OpenCV Python
Last Updated: 24 Apr, 2025
Image segmentation is a fundamental computer vision task that involves partitioning an image into meaningful and semantically homogeneous regions. The goal is to simplify the representation of an image or make it more meaningful for further analysis. These segments typically correspond to objects or regions of interest within the image.
Watershed Algorithm
The Watershed Algorithm is a classical image segmentation technique based on the concept of watershed transformation. The segmentation process treats similarity between adjacent pixels as its key cue, connecting pixels that are close in both spatial position and gray value.
When do I use the watershed algorithm?
The Watershed Algorithm is used when segmenting images with touching or overlapping objects. It excels in scenarios with irregular object shapes, gradient-based segmentation requirements, and when marker-guided segmentation is feasible.
Working of Watershed Algorithm
The watershed algorithm divides an image into segments using topographic information. It treats the image as a topographic surface, identifying catchment basins based on pixel intensity. Local minima are marked as starting points, and flooding with colors fills catchment basins until object boundaries are reached. The resulting segmentation assigns unique colors to regions, aiding object recognition and image analysis.
The whole process of the watershed algorithm can be summarized in the following steps:
- Marker placement: The first step is to place markers on the local minima, or the lowest points, in the image. These markers serve as the starting points for the flooding process.
- Flooding: The algorithm then floods the image with different colors, starting from the markers. As the color spreads, it fills up the catchment basins until it reaches the boundaries of the objects or regions in the image.
- Catchment basin formation: As the color spreads, the catchment basins are gradually filled, creating a segmentation of the image. The resulting segments or regions are assigned unique colors, which can then be used to identify different objects or features in the image.
- Boundary identification: The watershed algorithm uses the boundaries between the different colored regions to identify the objects or regions in the image. The resulting segmentation can be used for object recognition, image analysis, and feature extraction tasks.
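To make these steps concrete, here is a condensed sketch of the full pipeline that the rest of this article implements and explains step by step (assuming the same "Coins.png" input image used below; the kernel sizes and the 0.5 threshold fraction are tunable choices, not fixed parts of the algorithm):
Python
import cv2
import numpy as np

img = cv2.imread("Coins.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize; Otsu's method picks the threshold automatically
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Clean noise, then derive sure background, sure foreground, and unknown regions
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opened, kernel, iterations=3)
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Seed markers from the sure foreground and flood; boundaries come back as -1
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1           # background becomes 1
markers[unknown == 255] = 0     # unknown region becomes 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)  # paint watershed boundaries red (BGR)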
Implementing the watershed algorithm using OpenCV
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV contains hundreds of computer vision algorithms, including object detection, face recognition, image processing, and machine learning.
Here are the implementation steps for the watershed Algorithm using OpenCV:
Import the required libraries
Python
import cv2
import numpy as np
from IPython.display import Image, display
from matplotlib import pyplot as plt
Loading the image
We define a function "imshow" to display the processed image. The code loads an image named "Coins.png".
Python
# Display helper: show inline when no axis is given, otherwise draw on the axis
def imshow(img, ax=None):
    if ax is None:
        # Encode to JPEG bytes and display inline (e.g., in a Jupyter notebook)
        ret, encoded = cv2.imencode(".jpg", img)
        display(Image(data=encoded.tobytes()))
    else:
        # OpenCV stores images as BGR; Matplotlib expects RGB
        ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        ax.axis('off')

# Image loading
img = cv2.imread("Coins.png")
# Show image
imshow(img)
Output: the input coin image is displayed.
Converting the image to grayscale
We convert the image to grayscale using OpenCV's "cvtColor" method. The grayscale image is stored in a variable "gray".
Python
# Convert the BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imshow(gray)
Output:

The cv2.cvtColor() function takes two arguments: the image and the conversion flag cv2.COLOR_BGR2GRAY, which specifies the conversion from BGR color space to grayscale.
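As a quick sanity check, the conversion collapses the three BGR channels into a single intensity channel (a small sketch; the printed shape values are examples and depend on the input image):
Python
print(img.shape)   # e.g. (480, 640, 3): rows, columns, BGR channels
print(gray.shape)  # e.g. (480, 640): a single intensity channel
print(gray.dtype)  # uint8: values from 0 (black) to 255 (white)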
Implementing thresholding
A crucial step in image segmentation is thresholding, which converts a grayscale image into a binary image. It is essential for separating the objects of interest from the background.
When the cv2.THRESH_BINARY_INV thresholding mode is combined with the cv2.THRESH_OTSU flag in OpenCV, Otsu's binarization is applied. Otsu's method automatically determines an optimal threshold by maximizing the variance between the two classes of pixels in the image: it seeks the threshold that minimizes intra-class variance and maximizes inter-class variance, effectively separating the pixels into two groups with distinct characteristics.
Otsu's binarization process
Otsu's binarization is a technique used in image processing to separate the foreground and background of an image into two distinct classes. This is done by finding the optimal threshold value that maximizes the variance between the two classes. Otsu's method is known for its simplicity and computational efficiency, making it a popular choice in applications such as document analysis, object recognition, and medical imaging.
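To make the idea concrete, below is a minimal NumPy re-implementation of Otsu's threshold search (an illustrative sketch, not how OpenCV computes it internally). It scans every candidate threshold and keeps the one that maximizes the between-class variance:
Python
import numpy as np

def otsu_threshold(gray):
    # Normalized 256-bin histogram of the grayscale image
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t
The value this returns should closely match the "ret" threshold reported by cv2.threshold when the cv2.THRESH_OTSU flag is used below.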
Python
# Threshold processing: 0 is a placeholder since Otsu's method picks the threshold
ret, bin_img = cv2.threshold(gray, 0, 255,
                             cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
imshow(bin_img)
Output:

Noise Removal
To clean up the object outlines (boundary lines), small specks of noise are removed with a morphological opening.
Morphological Opening
Morphological opening is an erosion followed by a dilation with the same structuring element. Erosion shrinks bright regions and eliminates specks smaller than the kernel, while the subsequent dilation restores the surviving shapes to roughly their original size. This makes opening well suited to removing small bright noise from a binary image without distorting the larger objects, and it is a standard preprocessing step before segmentation. (A related operation, the morphological gradient, subtracts the erosion from the dilation to highlight object boundaries, but the code here uses opening.)
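The sketch below, reusing the "bin_img" produced by the thresholding step, shows how opening is built from erosion followed by dilation and why the two forms are interchangeable (per OpenCV's documented behavior, iterations=2 means erode twice, then dilate twice):
Python
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

# Opening spelled out: erosion removes specks smaller than the kernel,
# dilation regrows the surviving shapes to their original size
eroded = cv2.erode(bin_img, kernel, iterations=2)
opened_manual = cv2.dilate(eroded, kernel, iterations=2)

# The equivalent single call
opened = cv2.morphologyEx(bin_img, cv2.MORPH_OPEN, kernel, iterations=2)
print(np.array_equal(opened_manual, opened))  # True (erode x2, then dilate x2)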
Python
# Noise removal: morphological opening (erosion followed by dilation)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
bin_img = cv2.morphologyEx(bin_img, cv2.MORPH_OPEN, kernel, iterations=2)
imshow(bin_img)
Output:

Detecting the background and foreground of the image
Next, we need to identify the black area, which is the background of this image. Since the well-filled white region represents the objects of interest, everything outside it can be treated as background.
We apply several morphological operations on our binary image:
- The first operation is dilation using "cv2.dilate" which expands the bright regions of the image, creating the "sure_bg" variable representing the sure background area. This result is displayed using the "imshow" function.
- The next operation is "cv2.distanceTransform", which calculates the distance from each white pixel in the binary image to the closest black pixel. The result is stored in the "dist" variable and displayed using "imshow" (a small numeric illustration of the distance transform follows this list).
- Then, the foreground area is obtained by applying a threshold on the "dist" variable using "cv2.threshold". The threshold is set to 0.5 times the maximum value of "dist".
- Finally, the unknown area is calculated as the difference between the sure background and sure foreground areas using "cv2.subtract". The result is stored in the "unknown" variable and displayed using "imshow".
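To see what the distance transform actually computes, here is a tiny numeric illustration on a hypothetical 5x5 binary patch (a sketch; the real code below applies the same call to the cleaned coin image):
Python
import cv2
import numpy as np

# A small binary blob: 255 = foreground, 0 = background
patch = np.zeros((5, 5), np.uint8)
patch[1:4, 1:4] = 255

dist = cv2.distanceTransform(patch, cv2.DIST_L2, 5)
print(np.round(dist, 1))
# Each foreground pixel holds its distance to the nearest background pixel.
# The blob's centre scores highest, so thresholding at a fraction of the
# maximum keeps only the "core" of each object as sure foreground.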
Python
# Create a 2 x 2 grid of subplots
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 8))
# sure background area
sure_bg = cv2.dilate(bin_img, kernel, iterations=3)
imshow(sure_bg, axes[0,0])
axes[0, 0].set_title('Sure Background')
# Distance transform
dist = cv2.distanceTransform(bin_img, cv2.DIST_L2, 5)
imshow(dist, axes[0,1])
axes[0, 1].set_title('Distance Transform')
#foreground area
ret, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
imshow(sure_fg, axes[1,0])
axes[1, 0].set_title('Sure Foreground')
# unknown area
unknown = cv2.subtract(sure_bg, sure_fg)
imshow(unknown, axes[1,1])
axes[1, 1].set_title('Unknown')
plt.show()
Output:
Output: sure background, distance transform, sure foreground, and unknown region.
Creating the marker image
Between the sure background and the clearly visible white foreground there remains a gray, undefined region that belongs to neither. We will mark this unknown area explicitly so the watershed can resolve it.
Here are the steps:
- First, the "cv2.connectedComponents" method from OpenCV is used to find the connected components in the sure foreground image "sure_fg". The result is stored in "markers".
- The values in "markers" are then incremented by 1, so the sure background is labeled 1 rather than 0; the label 0 is reserved for the unknown region.
- The unknown region, represented by pixels with a value of 255 in "unknown", is labeled 0 in "markers".
- Finally, the "markers" image is displayed using Matplotlib's "imshow" method with the "tab20b" colormap in a 6x6 figure.
Python
# Marker labelling
# sure foreground
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that background is not 0, but 1
markers += 1
# mark the region of unknown with zero
markers[unknown == 255] = 0
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(markers, cmap="tab20b")
ax.axis('off')
plt.show()
Output:
Output: marker labelling.
This marker image, created by labeling the sure foreground and marking the unknown region, serves as input to the Watershed Algorithm. It guides the algorithm in segmenting the image based on these labeled regions. Each distinct color or label represents a separate segment or region in the image.
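Before handing the markers to the algorithm, it helps to check the labeling convention that cv2.watershed expects: label 0 marks the unknown region to be flooded, while positive integers are seeds. A quick inspection (a sketch using the "markers" array built above):
Python
print(markers.dtype)       # int32, the type cv2.watershed requires
print(np.unique(markers))  # e.g. [0 1 2 3 ...]: 0 = unknown, 1 = background,
                           # 2 and up = one label per sure-foreground object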
Applying Watershed Algorithm to Markers
We now apply the "cv2.watershed" function. Steps taken:
- The "cv2.watershed" function is applied to the original image "img" and the markers image obtained in the previous step to perform the Watershed algorithm. The result is stored in "markers".
- The "markers" image is displayed using Matplotlib's "imshow" method with a color map of "tab20b".
- A loop iterates over the labels starting from the third unique value (skipping the watershed boundary label -1 and the background label 1) to extract the contours of each object.
- The contours of the binary image are found using OpenCV's "findContours" function, and the first contour is appended to a list of coins.
- Finally, the objects' outlines are drawn on the original image using "cv2.drawContours". The result is displayed using the "imshow" function.
Python
# Watershed algorithm: boundary pixels are labelled -1 in the result
markers = cv2.watershed(img, markers)
fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(markers, cmap="tab20b")
ax.axis('off')
plt.show()

labels = np.unique(markers)
coins = []
for label in labels[2:]:
    # Create a binary image in which only the area of the current label is in
    # the foreground and the rest of the image is in the background
    target = np.where(markers == label, 255, 0).astype(np.uint8)
    # Perform contour extraction on the created binary image
    contours, hierarchy = cv2.findContours(
        target, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    coins.append(contours[0])

# Draw the outlines on the original image
img = cv2.drawContours(img, coins, -1, color=(0, 23, 223), thickness=2)
imshow(img)
Output:
Output: the marker image after the watershed, and the final segmented image.
The outline of each object is drawn in red on the image.
Hence, the code implements the watershed algorithm using OpenCV to segment an image into separate objects or regions. It loads the image and converts it to grayscale, applies thresholding and morphological preprocessing, derives marker seeds from the sure-foreground regions, floods the image from those markers, and finally identifies the boundaries between the regions. The resulting segmented image is then displayed.
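As a small usage note (reusing the "markers" and "coins" variables from the code above), the number of segmented objects can be read straight off the final labels:
Python
# Label -1 (watershed boundaries) and label 1 (background) are not objects,
# so every remaining label corresponds to one detected coin
num_objects = len(np.unique(markers)) - 2
print(f"Detected {num_objects} objects ({len(coins)} contours drawn)")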
Conclusion
In conclusion, the watershed algorithm is a powerful image segmentation technique that uses topographic information to divide an image into multiple segments or regions, and its results tend to align well with how the human eye perceives object boundaries. It is widely used in medical imaging and computer vision applications and is a crucial step in many image processing pipelines. Despite its limitations, chiefly a tendency to over-segment noisy images unless good markers are supplied, the watershed algorithm remains a popular choice because, with suitable markers, it handles touching objects and irregular shapes well.