
DIP Notes

1. Explain the components of an image processing system and its applications.

An image processing system consists of several core components that work together to capture, process, analyze, and interpret images. Here’s an overview of the main components and some common applications:

Components of an Image Processing System


1. Image Acquisition:
o This is the first step, where the system captures an image. It involves
devices like cameras, scanners, or sensors that convert real-world visual
data into digital form.
o Acquisition may include pre-processing steps, such as resizing or filtering,
to prepare the image for analysis.
2. Image Enhancement:
o Enhances image quality by increasing contrast, sharpening details, or
removing noise.
o Methods like histogram equalization, filtering, and noise reduction help
improve the visual appearance or highlight important features of the
image.
3. Image Restoration:
o Aims to recover an image that has been degraded due to factors like
motion blur, sensor noise, or distortions.
o Techniques often involve mathematical models that reverse these
degradations and restore the image to its original quality.
4. Color Image Processing:
o Deals with color models (RGB, HSV, etc.) to process colored images
effectively.
o Color processing is essential for applications where color plays a
significant role, such as medical imaging or quality inspection.
5. Image Segmentation:
o Divides an image into meaningful regions or objects, such as separating a
subject from the background.
o Segmentation is a critical step in applications like object recognition, facial
recognition, and medical diagnostics.
6. Image Representation and Description:
o Represents and describes images by extracting features like edges,
textures, and shapes.
o This step reduces the complexity of images while retaining essential
information, making it easier for computer algorithms to interpret.
7. Object Recognition:
o Involves identifying objects within an image based on patterns and
features.
o Object recognition techniques are foundational in applications like
autonomous driving, where the system must recognize pedestrians, cars,
and other road elements.
8. Compression:
o Reduces the size of an image file while retaining its essential details.
o Compression standards such as JPEG (lossy) and PNG (lossless) are crucial for storing and transmitting images efficiently, especially over networks.
9. Image Analysis and Interpretation:
o Involves analyzing and interpreting image content to derive meaningful
information.
o It could include processes like pattern recognition or machine learning
algorithms that classify and make sense of the data.

Applications of Image Processing

 Medical Imaging:
o Used in X-rays, CT scans, MRI, and ultrasound to enhance images and
help doctors analyze body structures or diagnose diseases.
o Segmentation helps identify and measure tumors or organs, while
restoration techniques improve the quality of images.
 Remote Sensing:
o Satellites and drones capture images of the earth's surface for
environmental monitoring, agriculture, and urban planning.
o Image processing helps analyze vegetation health, detect changes over
time, and monitor deforestation.
 Security and Surveillance:
o Involves facial recognition, license plate recognition, and anomaly
detection.
o Image processing systems can identify individuals, recognize suspicious
activity, and enhance surveillance footage for security purposes.
 Automotive Industry:
o Used in autonomous vehicles to identify objects on the road, such as
pedestrians, traffic signs, and other vehicles.
o Image segmentation and object recognition enable safe navigation and
collision avoidance.
2. Explain the elements of visual perception, and explain sampling and quantization.
Visual perception refers to how our eyes and brain work together to interpret and
make sense of visual information from the environment. In the context of digital
image processing, understanding visual perception is essential because it
influences how images are captured, processed, and displayed.
Elements of Visual Perception
1. Light and Color Perception:
o Visual perception begins with light entering the eye, where it is focused by
the lens onto the retina. The retina contains photoreceptors (rods and
cones) that detect light and color.
o Rods are responsible for vision in low-light conditions and are sensitive to
brightness but not color. Cones are sensitive to color and work best in
well-lit conditions. There are three types of cones, each sensitive to red,
green, or blue light.
2. Spatial Resolution:
o Spatial resolution is the ability of the eye to distinguish fine details and
small objects. It varies across the retina, being highest at the center
(fovea) and decreasing toward the periphery.
o This aspect is relevant in digital imaging, as we tend to optimize images
based on the areas where humans perceive detail most clearly.
3. Contrast Sensitivity:
o Contrast sensitivity is the ability to distinguish objects from their
backgrounds based on differences in brightness. Humans are generally
more sensitive to contrast changes than to absolute brightness levels.
o Image processing often focuses on enhancing contrast to make details
more visible and distinguishable.
4. Depth Perception:
o Depth perception allows us to judge the distance between objects and see
in three dimensions. The brain uses cues such as stereoscopic vision
(difference between views of each eye) and motion parallax to estimate
depth.
o While digital images are usually 2D, understanding depth cues helps in
applications like 3D modeling, virtual reality, and object recognition.
5. Motion Perception:
o Our visual system detects movement to understand object motion and
direction. This ability is essential for tracking objects and making sense of
dynamic environments.
o In video processing, motion perception is relevant for techniques like
motion tracking and stabilization.
Sampling and Quantization
Sampling and quantization are two fundamental steps in converting an analog
image (continuous signal) into a digital form that a computer can store and
process.
1. Sampling:
o Sampling is the process of dividing a continuous image (or signal) into
discrete points, usually forming a grid of pixels in a digital image.
o The sampling rate (or resolution) determines how many samples (pixels)
are taken from the original image. Higher sampling rates capture more
detail by increasing the pixel count, while lower sampling rates may lose
details and result in a more pixelated image.
o For example, a high-definition image has a higher sampling rate than a
low-resolution image, resulting in a clearer, more detailed picture.
2. Quantization:
o Quantization is the process of mapping each sampled pixel value (often a
continuous range of intensities) to a discrete set of values (or levels).
o In grayscale images, quantization assigns a specific intensity level (shade
of gray) to each pixel, typically ranging from 0 (black) to 255 (white) in 8-
bit images, giving 256 levels. For color images, quantization occurs
separately for each color channel (red, green, and blue).
o Higher quantization levels (more bits per pixel) allow for finer detail and
smoother color gradients, while lower quantization levels can lead to
banding or loss of subtle variations in intensity or color.
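The two steps can be illustrated with a short NumPy sketch. The synthetic scene, the 4x downsampling factor, and the 256-level (8-bit) quantizer below are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Approximate a "continuous" scene on a fine 512x512 grid.
x = np.linspace(0.0, 1.0, 512)
fine = np.sin(8 * np.pi * x)[None, :] * np.cos(8 * np.pi * x)[:, None]  # values in [-1, 1]

# Sampling: keep every 4th point in each direction -> a 128x128 pixel grid.
sampled = fine[::4, ::4]

# Quantization: map the continuous range [-1, 1] onto 256 discrete 8-bit levels.
levels = 256
quantized = np.round((sampled + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)

print(sampled.shape)                     # (128, 128): spatial resolution after sampling
print(quantized.min(), quantized.max())  # spans (nearly) the full 0..255 range
```

Lowering `levels` to, say, 8 makes the banding effect described above directly visible, just as a coarser sampling step makes pixelation visible.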

3. Explain image enhancement in the spatial domain.

Image enhancement in the spatial domain involves manipulating the pixels of an image directly to improve its appearance or highlight important features. Spatial
domain techniques operate on the image's spatial coordinates (x, y) and modify
pixel values based on the surrounding pixels in a local neighborhood. This is
often done using mathematical operations like addition, subtraction,
multiplication, and convolution to achieve desired effects.
Here’s an overview of common spatial domain techniques:
1. Point Processing Techniques
Point processing techniques modify pixel values individually without considering
their neighbors. They are simple and widely used for basic image enhancement.
 Contrast Stretching:
o Adjusts the range of intensity values in an image to increase the difference
between the brightest and darkest parts, making the image appear clearer
and more detailed.
o This is often achieved by applying a linear transformation to the intensity
values.
 Logarithmic and Power-Law Transformations:
o Log Transformation: Enhances low-intensity values by taking the
logarithm of each pixel value, commonly used to bring out details in darker
regions.
o Power-Law (Gamma) Transformation: Adjusts contrast by raising pixel
values to a power. Different gamma values can brighten or darken an
image. This is widely used for gamma correction in display devices.
 Histogram Equalization:
o Redistributes intensity values to achieve a uniform histogram, which can
improve the contrast of images, especially in cases where the image
histogram is heavily skewed.
o This method is useful for enhancing images with poor contrast by
spreading out the intensity levels across the available range.
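A minimal sketch of two of these point operations, the power-law (gamma) transformation and histogram equalization, applied to a synthetic low-contrast 8-bit image (the test image and the gamma value are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(50, 120, size=(64, 64), dtype=np.uint8)  # low-contrast test image

# Power-law (gamma) transformation: s = 255 * (r / 255) ** gamma.
gamma = 0.5  # gamma < 1 brightens dark regions
gamma_out = (255 * (img / 255.0) ** gamma).astype(np.uint8)

# Histogram equalization: map each level through the normalized cumulative histogram.
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum() / img.size               # cumulative distribution in [0, 1]
equalized = (255 * cdf[img]).astype(np.uint8)

print(img.min(), img.max())              # narrow input range (50..119)
print(equalized.min(), equalized.max())  # intensities spread over a much wider range
```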
2. Masking or Filtering Techniques
Masking (or filtering) techniques involve applying a filter, or mask, over an image
to enhance specific features. These methods are also referred to as
convolution-based techniques because they typically involve convolution
operations with a kernel (mask) over the image.
 Smoothing (Low-Pass Filtering):
o Averages or smooths out pixel values to reduce noise and create a
blurring effect.
o Common filters: Mean filter (simple averaging), Gaussian filter (weighted
averaging that gives more importance to the central pixel).
o Smoothing is useful for reducing graininess or detail, which may be
desirable in noisy or overly detailed images.
 Sharpening (High-Pass Filtering):
o Emphasizes edges and fine details by highlighting the high-frequency
components of an image.
o Common filters: Laplacian filter (focuses on second-order derivatives to
detect edges), Unsharp mask (a technique that subtracts a blurred
version of the image from the original).
o This technique enhances image clarity, making edges and details more
distinct.
 Edge Detection:
o Detects significant transitions in intensity, typically where objects or
textures meet in an image.
o Common edge detection filters: Sobel and Prewitt operators (which
approximate the gradient of the image), Canny Edge Detector (multi-step
approach for detecting strong and weak edges).
o Edge detection is essential for identifying shapes and structures in images
and is widely used in feature extraction and object detection applications.
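As a concrete illustration of mask-based filtering, here is a minimal NumPy sketch of Sobel edge detection on a synthetic vertical step edge; the hand-rolled `filter2d` helper and the test image are illustrative, not from the notes:

```python
import numpy as np

def filter2d(img, kernel):
    """'Same'-size correlation of a 2-D float image with a small kernel."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

img = np.zeros((32, 32))
img[:, 16:] = 255.0                  # a vertical step edge
gx = filter2d(img, sobel_x)          # horizontal intensity gradient
gy = filter2d(img, sobel_y)          # vertical intensity gradient (zero here)
magnitude = np.hypot(gx, gy)         # gradient magnitude approximation

print(magnitude[8, 14:19])           # nonzero only beside the edge (columns 15-16)
```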
3. Image Enhancement Using Neighborhood Processing
Neighborhood processing techniques consider groups of pixels (a neighborhood)
around each target pixel, often using convolution to apply various enhancements.
 Median Filtering:
o Replaces each pixel’s value with the median value in its neighborhood,
effectively reducing salt-and-pepper noise (random white and black spots)
while preserving edges.
o Median filtering is non-linear and often provides better noise reduction
compared to simple averaging filters for certain types of noise.
 Spatial Domain Filtering with Custom Kernels:
o Custom kernels or masks allow for tailored filtering effects, such as
embossing, outlining, or texture enhancement.
o By designing custom kernels, you can achieve unique image
enhancements for specific applications or effects.
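A minimal sketch of the median filtering described above, applied to synthetic salt-and-pepper noise; the 32x32 test image and the roughly 10% corruption rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.full((32, 32), 128, dtype=np.uint8)
noisy = clean.copy()
u = rng.random(clean.shape)
noisy[u < 0.05] = 0       # "pepper" pixels
noisy[u > 0.95] = 255     # "salt" pixels

# Replace each pixel with the median of its 3x3 neighborhood.
h, w = noisy.shape
p = np.pad(noisy, 1, mode="edge")
windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
filtered = np.median(windows, axis=0).astype(np.uint8)

print(np.count_nonzero(noisy != 128))     # roughly 10% of pixels corrupted
print(np.count_nonzero(filtered != 128))  # almost all impulses removed
```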
4. Other Spatial Domain Techniques
 Image Subtraction:
o Subtracts one image from another, often used to highlight differences
between two images (e.g., detecting motion by subtracting consecutive
frames).
o Subtraction is widely used in applications such as medical imaging to
detect changes or abnormalities over time.
 Image Averaging:
o Averages multiple images to reduce random noise, which is effective if the
same scene is captured under similar conditions multiple times.
o This technique improves the signal-to-noise ratio, often used in
applications like astronomy or low-light photography.
Applications of Spatial Domain Image Enhancement
1. Medical Imaging:
o Enhances contrast and edges to make tissue structures or abnormalities
more visible for diagnosis.
2. Remote Sensing:
o Enhances satellite images to highlight features like vegetation, water
bodies, or urban areas, aiding in environmental monitoring.
3. Forensics and Surveillance:
o Enhances poor-quality images or videos from surveillance cameras to
identify people or objects more clearly.
4. Quality Control in Manufacturing:
o Improves image quality to detect product defects or anomalies on
production lines.
5. Astronomy:
o Enhances images from telescopes to reveal faint stars or galaxies and
improve contrast in low-light conditions.

4. Short Notes –

1. Gray Level Transformation Functions


Gray level transformations modify pixel intensities in an image to enhance
contrast or brightness. The most common transformations include:
 Piecewise Linear Transformation:
o In this transformation, the intensity values are mapped piece-by-piece
using linear segments.
o Examples include contrast stretching (increases image contrast by
mapping a narrow intensity range to a wider range) and thresholding (sets
pixel values above or below a threshold to specific values, often used for
binary images).
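A one-step sketch of thresholding, assuming a hypothetical threshold T = 128 on a tiny 8-bit image:

```python
import numpy as np

img = np.array([[30, 90], [150, 220]], dtype=np.uint8)
T = 128                                              # hypothetical threshold
binary = np.where(img > T, 255, 0).astype(np.uint8)  # two-level piecewise linear map
print(binary)                                        # [[0 0], [255 255]]
```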
2. Histogram Specification (Matching)
Histogram specification, or histogram matching, modifies an image’s
histogram to match a specified histogram shape.
 It adjusts pixel intensities to achieve a particular contrast style, such as
enhancing certain brightness levels.
 This is done by calculating a transformation function that maps the image’s
original histogram to the desired histogram.
 Useful for standardizing the appearance of images across different lighting
conditions or scenes.
3. Histogram Equalization
Histogram equalization enhances image contrast by redistributing pixel
intensities to achieve a uniform histogram.
 This method spreads intensity values across the entire range, often resulting in
an image with enhanced global contrast, particularly for images that are too dark
or too bright.
 It is widely used in medical and remote sensing applications to improve image
visibility and highlight details.
4. Local Enhancement
Local enhancement adjusts the contrast or intensity within specific regions
of an image, rather than the whole image.
 Methods include adaptive histogram equalization, which divides an image into
small regions and applies histogram equalization locally to each region.
 This is effective for images with varying lighting conditions, as it enhances
contrast without affecting other areas of the image.
5. Enhancement Using Arithmetic and Logical Operations
Arithmetic and logical operations combine or alter pixel values based on
mathematical functions, such as addition, subtraction, multiplication, and
division, to achieve specific effects.
 Arithmetic Operations: Examples include image averaging (to reduce noise)
and image subtraction (to detect changes or differences between images).
 Logical Operations: These include operations like AND, OR, and NOT,
commonly used in binary images for operations like masking and region-based
enhancement.
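A minimal sketch of a logical (mask-based) operation; the toy image and the region of interest are illustrative assumptions:

```python
import numpy as np

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy grayscale image
mask = np.zeros_like(img, dtype=bool)
mask[2:6, 2:6] = True                              # region of interest

# A per-pixel AND with the binary mask keeps the ROI and zeroes everything else.
roi = np.where(mask, img, 0)
print(roi[3, 3], roi[0, 0])                        # 27 0
```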
5. Explain image subtraction, image averaging, and the basics of spatial filtering.

1. Image Subtraction
Image subtraction is a technique where the pixel values of one image are
subtracted from the corresponding pixel values of another image. It’s useful for
applications like change detection and motion analysis.
 Formula: G(x,y) = |F(x,y) - H(x,y)|
o Here, G(x,y) is the output image, F(x,y) is the first image, and H(x,y) is the second image.
o The absolute value is often taken to avoid negative values.
 Applications:
o Motion Detection: Detects moving objects by subtracting a background
image from each frame in a video sequence.
o Change Detection: Used in remote sensing to observe changes in
geographical areas by comparing images taken at different times.
o Medical Imaging: Highlights changes in images, such as the appearance
or growth of abnormalities over time.
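A minimal sketch of the subtraction formula above used for change detection, with two synthetic frames and an illustrative decision threshold of 30:

```python
import numpy as np

frame1 = np.zeros((32, 32), dtype=np.uint8)
frame2 = frame1.copy()
frame2[10:15, 10:15] = 200        # an "object" appears in the second frame

# Cast to a signed type before subtracting so negative differences survive,
# then take the absolute value: G(x,y) = |F(x,y) - H(x,y)|.
diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16)).astype(np.uint8)

moving = diff > 30                # threshold the difference to flag changed pixels
print(np.count_nonzero(moving))   # 25: the 5x5 object
```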
2. Image Averaging
Image averaging reduces noise by averaging pixel values across multiple images
of the same scene. This technique is effective for reducing random noise that
appears differently in each image.
 Formula: G(x,y) = \frac{1}{N} \sum_{i=1}^{N} F_i(x,y)
o Here, G(x,y) is the averaged output image, the F_i(x,y) are the N individual images, and N is the number of images.
 How it Works:
o By taking multiple images and averaging them, random noise that differs
across images is reduced, while the consistent features are preserved.
 Applications:
o Low-Light Photography: Useful in low-light conditions where noise is
common.
o Astronomy: Enhances faint objects in images by reducing noise from low
light levels and atmospheric disturbances.
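A minimal sketch of the averaging formula above, assuming N = 16 synthetic captures with Gaussian noise of standard deviation 20; averaging reduces the noise by roughly a factor of sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(2)
scene = np.full((64, 64), 100.0)                                      # noise-free scene
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(16)]  # N noisy captures

average = np.mean(frames, axis=0)   # G(x,y) = (1/N) * sum of the F_i(x,y)

print(np.std(frames[0] - scene))    # per-frame noise, about 20
print(np.std(average - scene))      # about 20 / sqrt(16) = 5
```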
3. Basics of Spatial Filtering
Spatial filtering involves modifying an image by applying a filter (or kernel) to
each pixel and its neighbors. The filter’s effect depends on the values in the
kernel and the type of operation performed, such as smoothing or sharpening.
 Types of Spatial Filters:
o Smoothing Filters (Low-Pass Filters): Reduce noise and blur an image
by averaging pixel values.
 Example: A mean filter, where each pixel is replaced by the
average value of itself and its neighbors.
o Sharpening Filters (High-Pass Filters): Enhance edges and details by
amplifying the intensity changes in an image.
 Example: A Laplacian filter, which detects edges by calculating the
second derivative of the pixel intensities.
 Convolution:
o Spatial filtering is usually implemented via convolution, where a kernel
(small matrix) is moved over each pixel in the image.
o Each pixel’s new value is calculated by multiplying the kernel values with
the pixel values in its neighborhood and summing the results.
 Applications:
o Noise Reduction: Smoothing filters reduce unwanted noise.
o Edge Detection: Sharpening filters make edges and details stand out,
which is useful for feature extraction and object recognition.
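A minimal NumPy sketch of convolution-based spatial filtering, contrasting a 3x3 mean filter (smoothing) with Laplacian sharpening on a synthetic step edge; the `convolve` helper and all values are illustrative assumptions:

```python
import numpy as np

def convolve(img, kernel):
    """'Same'-size convolution of a float image with a 2-D kernel."""
    kernel = np.flipud(np.fliplr(kernel))  # convolution flips the kernel
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    return sum(kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(kh) for j in range(kw))

mean_kernel = np.ones((3, 3)) / 9.0                              # low-pass
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)  # high-pass

img = np.zeros((16, 16))
img[:, 8:] = 100.0                           # step edge
smoothed = convolve(img, mean_kernel)        # edge blurred across 3 columns
sharpened = img - convolve(img, laplacian)   # g = f - Laplacian(f): edge accentuated

print(smoothed[8, 6:10])    # [0.  33.3  66.7  100.]
print(sharpened[8, 6:10])   # over/undershoot around the edge
```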

6. Explain Fourier transformation in the frequency domain and the basics of filtering in the frequency domain.

Fourier Transformation and filtering in the frequency domain are key techniques
in image processing that help analyze and modify images based on their
frequency content.
1. Fourier Transformation in the Frequency Domain
The Fourier Transform (FT) is a mathematical tool that transforms an image from
the spatial domain (where pixels are arranged in terms of coordinates x and y) into the frequency domain (where image content is represented by its
frequencies). This transformation provides insights into the frequency
components (how rapidly intensity values change) within an image.
 Purpose:
o The Fourier Transform decomposes an image into sinusoidal patterns of
different frequencies and orientations.
o In the frequency domain, each point represents a specific frequency
component, with low frequencies near the center and high frequencies
toward the edges.
 Key Concepts:
o Low-Frequency Components: Represent slow changes or smooth areas
in the image (e.g., background).
o High-Frequency Components: Represent rapid changes, like edges and
fine details in the image.
 2D Fourier Transform:
o Given an image f(x,y), its Fourier Transform F(u,v) can be computed as:
F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \, e^{-j 2\pi (ux/M + vy/N)}
o Here, u and v represent the frequency coordinates, and M, N are the image dimensions.
 Inverse Fourier Transform:
o The original image can be reconstructed from its frequency representation
by applying the Inverse Fourier Transform.
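A minimal round-trip sketch using NumPy's FFT routines; the small test image is an illustrative assumption:

```python
import numpy as np

img = np.outer(np.hanning(8), np.hanning(8))  # any small real-valued image
F = np.fft.fft2(img)                          # frequency representation F(u, v)
recovered = np.real(np.fft.ifft2(F))          # inverse transform back to f(x, y)
print(np.allclose(img, recovered))            # True: the transform pair is lossless
```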
2. Basics of Filtering in the Frequency Domain
Filtering in the frequency domain involves modifying the Fourier-transformed
image to either suppress or enhance certain frequency components and then
transforming it back to the spatial domain. This can be more effective than spatial
filtering, especially for noise reduction and edge detection, as it allows for direct
control over different frequencies.
 Steps for Frequency Domain Filtering:
1. Compute the Fourier Transform of the image to obtain its frequency
representation.
2. Apply a Frequency Filter to modify specific frequencies in the
transformed image.
3. Compute the Inverse Fourier Transform to return the modified image to
the spatial domain.
 Types of Frequency Filters:
o Low-Pass Filters: Allow low frequencies to pass and block high
frequencies. These filters blur the image by suppressing details and noise.
 Example: Gaussian Low-Pass Filter.
o High-Pass Filters: Allow high frequencies to pass while blocking low
frequencies, which enhances edges and details.
 Example: Ideal High-Pass Filter, which can emphasize the edges
and fine structures in an image.
o Band-Pass and Band-Stop Filters: Allow only a certain range of
frequencies to pass or be blocked. Useful for isolating specific frequency
ranges in an image.
 Advantages of Frequency Domain Filtering:
o Efficient and precise control over specific frequency ranges, allowing for
better noise reduction and edge enhancement.
o More effective than spatial filtering for removing periodic noise patterns
and for image sharpening.
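A minimal sketch of the three-step pipeline above, using a Gaussian low-pass filter to denoise a synthetic image; the 64x64 image, the noise level, and sigma = 10 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
img += rng.normal(0, 10, img.shape)        # add high-frequency noise

# Step 1: Fourier transform, shifted so low frequencies sit at the center.
F = np.fft.fftshift(np.fft.fft2(img))

# Step 2: multiply by a Gaussian low-pass filter H(u,v) = exp(-D^2 / (2 sigma^2)).
u = np.arange(64) - 32
D2 = u[None, :] ** 2 + u[:, None] ** 2     # squared distance from the center
H = np.exp(-D2 / (2 * 10.0 ** 2))          # sigma = 10
G = F * H

# Step 3: inverse transform back to the spatial domain.
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(G)))

print(np.std(img[20:44, 20:44]))       # noisy interior, std about 10
print(np.std(smoothed[20:44, 20:44]))  # much smaller after low-pass filtering
```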

7. Explain the difference between filtering in the spatial and frequency domains, and discuss smoothing frequency-domain filters.

Filtering in the spatial and frequency domains are both image processing
techniques that serve to enhance or suppress certain features in an image.
Smoothing frequency domain filters, specifically, are a subset of frequency
domain filters that target the reduction of high-frequency components to achieve
a smoothing or blurring effect. Let’s break down the differences and nuances
between these approaches:
1. Filtering in the Spatial Domain vs. Frequency Domain
Spatial Domain Filtering:
 Operation Basis: Directly modifies pixel values based on operations applied to
the pixel and its local neighborhood in the spatial domain (image coordinates).
 Method: Uses convolution with a filter or kernel (small matrix, such as 3x3, 5x5)
that moves across the image.
 Types of Filters:
o Smoothing Filters (Low-Pass): Reduce noise or blur the image by
averaging pixel values, e.g., mean and Gaussian filters.
o Sharpening Filters (High-Pass): Enhance edges and details by
accentuating changes in intensity, e.g., Laplacian and Sobel filters.
 Applications: Noise reduction, edge enhancement, sharpening, and basic image
enhancements that do not require manipulation of specific frequency
components.
Frequency Domain Filtering:
 Operation Basis: Modifies the Fourier-transformed (frequency) representation of
an image rather than directly working with pixel values.
 Method:
1. Perform a Fourier Transform on the image to convert it to the frequency
domain.
2. Apply a frequency filter to modify specific frequencies.
3. Perform an Inverse Fourier Transform to return to the spatial domain.
 Types of Filters:
o Low-Pass Filters (Smoothing): Retain low frequencies, which
correspond to gradual intensity changes, while removing high frequencies
to achieve a smoothing effect.
o High-Pass Filters (Sharpening): Retain high frequencies to highlight
edges and fine details.
o Band-Pass and Band-Stop Filters: Allow or block specific frequency
ranges to target particular image components.
 Applications: Useful for periodic noise removal, sharpening, and situations
requiring selective frequency manipulation.
2. Smoothing Frequency Domain Filters
Smoothing filters in the frequency domain specifically target the reduction of
high-frequency components to achieve a blurring or denoising effect, making
images appear softer. These filters can be implemented as low-pass filters in the
frequency domain.
 How They Work:
o Low frequencies correspond to gradual changes in intensity (smooth
areas), while high frequencies correspond to abrupt changes (edges and
noise).
o By allowing low frequencies to pass and suppressing high frequencies,
these filters blur the image, removing details and noise.
 Common Types of Smoothing Frequency Domain Filters:
o Ideal Low-Pass Filter: A filter that sharply cuts off all frequencies above a
certain threshold. However, it can create ringing artifacts due to the abrupt
cutoff.
o Gaussian Low-Pass Filter: Smoothly attenuates frequencies, giving
more gradual filtering that avoids artifacts. It has a Gaussian-shaped
response and is widely used for blurring without creating ringing artifacts.
o Butterworth Low-Pass Filter: Provides a more gradual transition
between retained and suppressed frequencies, with smoother edges than
the ideal filter.
 Applications:
o Noise Reduction: Smoothing frequency domain filters effectively reduce
random noise by suppressing high-frequency components.
o Background Smoothing: Useful for creating a uniform background while
maintaining large-scale structural information without fine details.
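The three transfer functions can be compared directly on a common centered frequency grid; a minimal sketch, assuming an illustrative cutoff D0 = 16 and Butterworth order 2:

```python
import numpy as np

n, D0, order = 64, 16.0, 2
u = np.arange(n) - n // 2
D = np.sqrt(u[None, :] ** 2 + u[:, None] ** 2)    # distance from the frequency origin

H_ideal = (D <= D0).astype(float)                 # abrupt cutoff -> ringing artifacts
H_gauss = np.exp(-(D ** 2) / (2 * D0 ** 2))       # smooth Gaussian roll-off
H_butter = 1.0 / (1.0 + (D / D0) ** (2 * order))  # gradual Butterworth transition

# All three pass the center (DC, the image mean) unchanged...
print(H_ideal[32, 32], H_gauss[32, 32], H_butter[32, 32])   # 1.0 1.0 1.0
# ...but attenuate a high frequency very differently.
print(H_ideal[32, 63], round(H_gauss[32, 63], 3), round(H_butter[32, 63], 3))
```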

8. Explain Gaussian low-pass filters, sharpening frequency-domain filters, and Gaussian high-pass filters.

1. Gaussian Low-Pass Filters


A Gaussian Low-Pass Filter (GLPF) is used to smooth or blur an image by
attenuating high-frequency components while preserving low frequencies. It is
commonly applied to reduce noise or soften details in an image.
 How It Works:
o In the frequency domain, an image is represented by various frequency
components. Low frequencies correspond to smooth, gradual intensity
changes, while high frequencies correspond to abrupt changes (such as
edges or noise).
o A Gaussian filter has a Gaussian-shaped response, meaning it smoothly
attenuates high frequencies without a sharp cutoff, minimizing artifacts like
ringing or sudden changes in pixel values.
 Filter Formula:
H(u,v) = e^{-D(u,v)^2 / (2\sigma^2)}
where:
o H(u,v) is the filter response at frequency coordinates (u,v).
o D(u,v) is the distance from the origin of the frequency plane.
o \sigma controls the spread of the Gaussian function and determines how quickly high frequencies are attenuated.
 Applications:
o Noise Reduction: Often used to suppress high-frequency noise.
o Image Smoothing: Blurs the image while preserving the main structures,
useful for background smoothing or reducing fine textures.
2. Sharpening Frequency Domain Filters
Sharpening frequency domain filters enhance high-frequency components in an
image, making edges and fine details more pronounced. Unlike low-pass filters,
these filters are designed to emphasize or amplify rapid intensity changes (high
frequencies).
 How It Works:
o Sharpening filters work by preserving high frequencies while suppressing
low frequencies. Some common methods include high-pass filtering and
high-frequency emphasis filtering.
o In the frequency domain, this can be achieved by using a high-pass filter,
which removes low-frequency components while keeping high
frequencies, or by applying a Laplacian filter in the frequency domain to
boost high frequencies.
 Types of Sharpening Filters:
o Ideal High-Pass Filter: Retains only frequencies above a certain cutoff
but can create artifacts due to the sharp cutoff.
o Butterworth High-Pass Filter: Offers a smoother transition between high
and low frequencies, reducing artifacts.
o Laplacian Filter: Often used for edge detection by amplifying high-
frequency components related to edges.
 Applications:
o Edge Detection: Enhances edges and fine details, commonly used in
medical imaging, object recognition, and feature extraction.
o Image Sharpening: Used to enhance blurry images and make details
more visible.
3. Gaussian High-Pass Filters
A Gaussian High-Pass Filter (GHPF) is similar to the Gaussian Low-Pass Filter
but is designed to retain high-frequency components while attenuating low
frequencies, emphasizing edges and fine details.
 How It Works:
o A Gaussian high-pass filter uses a Gaussian function that suppresses the
low frequencies smoothly without a sharp cutoff. The difference from an
ideal high-pass filter is the gradual transition, which helps avoid artifacts.
 Filter Formula:
o The GHPF can be derived by subtracting the Gaussian low-pass response from 1:
H(u,v) = 1 - e^{-D(u,v)^2 / (2\sigma^2)}
o Here, H(u,v) is the high-pass filter response, D(u,v) is the distance from the origin of the frequency plane, and \sigma is the standard deviation controlling the filter's spread.
 Applications:
o Edge Enhancement: GHPFs enhance edges and fine textures without
the abruptness of ideal high-pass filters.
o Feature Extraction: Useful in applications like remote sensing or medical
imaging to highlight structures or abnormalities.
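A minimal sketch of the Gaussian low-pass/high-pass pair defined by the two formulas above; the grid size and sigma are illustrative assumptions:

```python
import numpy as np

n, sigma = 64, 12.0
u = np.arange(n) - n // 2
D2 = u[None, :] ** 2 + u[:, None] ** 2   # squared distance from the frequency origin

H_lp = np.exp(-D2 / (2 * sigma ** 2))    # GLPF: passes low frequencies
H_hp = 1.0 - H_lp                        # GHPF: one minus the low-pass response

print(H_lp[32, 32], H_hp[32, 32])                  # 1.0 0.0 at DC (the center)
print(round(H_lp[0, 0], 4), round(H_hp[0, 0], 4))  # roles reversed at a far corner
```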

9. Explain homomorphic filtering and sharpening.

1. Homomorphic Filtering
Homomorphic filtering is a technique used to simultaneously adjust the
brightness and contrast of an image, especially in cases where the lighting or
shading varies across the image (non-uniform illumination). It operates in the
frequency domain and is commonly used to enhance images with uneven
lighting, such as medical images, document scans, or photographs with varying
shadows.
How Homomorphic Filtering Works:
Homomorphic filtering is based on the illumination-reflectance model of an
image:
 An image f(x,y) can be thought of as the product of two components: f(x,y) = i(x,y) \cdot r(x,y), where:
o i(x,y) represents the illumination component, which tends to vary slowly and corresponds to low-frequency content.
o r(x,y) represents the reflectance component, which contains details like texture and edges, corresponding to high-frequency content.
Steps in Homomorphic Filtering:
1. Log Transformation: Taking the logarithm of the image makes it additive instead of multiplicative: \ln f(x,y) = \ln i(x,y) + \ln r(x,y)
2. Fourier Transform: Transform the log-transformed image to the frequency
domain, where we can independently control high and low frequencies.
3. Filtering: Apply a high-pass filter in the frequency domain to suppress the low-
frequency illumination component and enhance the high-frequency reflectance
component.
4. Inverse Fourier Transform: Transform the filtered image back to the spatial
domain.
5. Exponential Transformation: Apply the exponential function to reverse the
logarithmic effect.
Applications of Homomorphic Filtering:
 Medical Imaging: Enhances tissue details in scans with varying lighting.
 Document Processing: Improves readability of old or scanned documents with
non-uniform lighting.
 Photographs with Shadows: Enhances detail in areas affected by shadows or
highlights.
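A minimal end-to-end sketch of the five steps above, applied to a synthetic scene whose illumination ramps from dark (left) to bright (right); the high-frequency-emphasis filter and the gamma_l, gamma_h, and sigma values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
x = np.linspace(0.0, 1.0, n)
illumination = 0.2 + 0.8 * x[None, :]         # slowly varying, low-frequency
reflectance = 1.0 + 0.2 * rng.random((n, n))  # fine detail, high-frequency
img = illumination * reflectance              # f = i * r

# Steps 1-2: the log makes the product additive, then transform to frequencies.
F = np.fft.fftshift(np.fft.fft2(np.log(img)))

# Step 3: emphasis filter that attenuates lows (gamma_l) and boosts highs (gamma_h).
u = np.arange(n) - n // 2
D2 = u[None, :] ** 2 + u[:, None] ** 2
gamma_l, gamma_h, sigma = 0.5, 1.5, 8.0
H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2 * sigma ** 2)))

# Steps 4-5: back to the spatial domain, then undo the logarithm.
result = np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(F * H))))

# The left/right brightness ratio moves noticeably closer to 1.
print(round(img[:, :8].mean() / img[:, -8:].mean(), 2))       # about 0.26
print(round(result[:, :8].mean() / result[:, -8:].mean(), 2)) # roughly 0.5
```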
2. Image Sharpening
Image sharpening is a process that enhances edges and fine details by
emphasizing high-frequency content, making images appear clearer and more
defined. It can be performed in either the spatial or frequency domain, but the
basic goal is to highlight transitions and edges within an image.
How Sharpening Works:
 In the frequency domain, sharpening is achieved using high-pass filters that
amplify high-frequency components (edges, fine details).
 In the spatial domain, sharpening is typically performed using convolution filters
(kernels) that calculate the difference between a pixel and its neighbors to
highlight transitions.
Common Sharpening Techniques:
1. High-Pass Filtering:
o In the frequency domain, this involves applying a filter that allows high
frequencies to pass while attenuating low frequencies.
o Common high-pass filters include the Gaussian High-Pass Filter and the
Laplacian High-Pass Filter.
2. Laplacian Sharpening:
o The Laplacian operator is a second-order derivative filter that
accentuates regions of rapid intensity change (edges).
o In the spatial domain, the Laplacian filter highlights areas where pixel
intensity changes abruptly, which makes edges more distinct.
3. Unsharp Masking:
o A popular spatial domain technique where a blurred (low-pass filtered)
version of the image is subtracted from the original image.
o This enhances the high-frequency details in the image, making it appear
sharper.
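A minimal sketch of unsharp masking on a synthetic step edge; the 3x3 box blur used as the low-pass step and the amount k = 1.5 are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur via edge-padded window averaging (the low-pass step)."""
    p = np.pad(img, k // 2, mode="edge")
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(k) for j in range(k)]
    return np.mean(windows, axis=0)

img = np.zeros((16, 16))
img[:, 8:] = 100.0            # step edge
blurred = box_blur(img)
mask = img - blurred          # the "unsharp mask": high-frequency detail
sharpened = img + 1.5 * mask  # amount k = 1.5

print(img[8, 6:10])           # [  0.   0. 100. 100.]
print(sharpened[8, 6:10])     # over/undershoot makes the edge look crisper
```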
Applications of Image Sharpening:
 Medical and Satellite Imaging: Enhances fine details in MRI scans, CT scans,
and satellite images.
 Photography: Sharpens image features for clearer presentation of details.
 Object Detection: Enhances edges to make objects stand out for computer
vision applications.
