
UNIT 2

IMAGE ENHANCEMENT
II AIML
21CSE251T - DIP
Syllabus
• Spatial Domain:
• Basic relationship between pixels
• Basic Gray level Transformations
• Histogram Processing
• Smoothing spatial filters
• Sharpening spatial filters.
• Frequency Domain:
• Smoothing frequency domain filters
• Sharpening frequency domain filters
• Homomorphic filtering.
Recap: Introduction to the Fourier Transform and the Frequency Domain
• The one-dimensional Fourier transform and its inverse
– Fourier transform (continuous case, single dimension):
F(u) = ∫ f(x) e^(−j2πux) dx
– Inverse Fourier transform:
f(x) = ∫ F(u) e^(j2πux) du
• The two-dimensional Fourier transform and its inverse
– Fourier transform (2D continuous case):
F(u,v) = ∫∫ f(x,y) e^(−j2π(ux+vy)) dx dy
– Inverse Fourier transform:
f(x,y) = ∫∫ F(u,v) e^(j2π(ux+vy)) du dv
1D and 2D Discrete Fourier Transform
• The 1D discrete Fourier transform of a sequence f(x), x = 0, 1, …, M−1, is
F(u) = Σ_{x=0}^{M−1} f(x) e^(−j2πux/M), u = 0, 1, …, M−1
and its inverse is f(x) = (1/M) Σ_{u=0}^{M−1} F(u) e^(j2πux/M)
• The 2D discrete Fourier transform of an M×N image f(x,y) is
F(u,v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x,y) e^(−j2π(ux/M + vy/N))
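As a quick numerical check (not part of the original slides), NumPy's FFT routines compute exactly these sums, with the 1/M (or 1/MN) factor placed on the inverse transform:

```python
import numpy as np

# 1D DFT computed directly from the definition F(u) = sum_x f(x) e^{-j 2*pi*u*x/M}
def dft_1d(f):
    M = len(f)
    x = np.arange(M)
    u = x.reshape(-1, 1)
    return (f * np.exp(-2j * np.pi * u * x / M)).sum(axis=1)

f = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft_1d(f), np.fft.fft(f)))   # True: matches NumPy's FFT

# 2D DFT of a small image: np.fft.fft2 applies the 1D transform along rows and columns
img = np.random.rand(8, 8)
F = np.fft.fft2(img)
img_back = np.fft.ifft2(F).real                # inverse transform recovers the image
print(np.allclose(img, img_back))              # True
```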
Relationship between pixels
4 Neighbors

Diagonal Neighbors

Adjacency Pixels

Digital Path

Connected Set

Region, Boundary, Contour, Edge
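A minimal sketch (not from the slides) of the standard neighborhoods of a pixel p = (x, y), which underlie the adjacency, path, and connectivity definitions listed above:

```python
# 4-neighbors N4(p): the horizontal and vertical neighbors of (x, y)
def n4(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Diagonal neighbors ND(p)
def nd(x, y):
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

# 8-neighbors N8(p): union of N4 and ND
def n8(x, y):
    return n4(x, y) + nd(x, y)

print(n4(2, 2))   # [(3, 2), (1, 2), (2, 3), (2, 1)]
```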


Gray level Transformations
• Gray-level transformations refer to the techniques used to adjust the pixel values (or intensity levels) of an image.
• These transformations are applied to improve the visual quality of an image or to extract useful information.
• The transformations operate on the intensity levels of pixels and alter the image's appearance, enhancing contrast, brightness, or other features.
• Types of Gray Level Transformations
• a) Linear (Negative and Identity) Transformations
• b) Logarithmic (log and inverse-log) Transformations
• c) Power-Law (nth power and nth root) Transformations
Quiz - 1
• Gray-level transformations refer to the techniques used to adjust the
____________ of an image.
• A. depth
• B. breadth
• C. width
• D. pixel
Quiz - 1
• Answer: D. pixel
1. Linear Transformations
• Linear gray-level transformations are operations where the output pixel value is a
linear function of the input pixel value.
• The general form is:
• s=T(r)=a⋅r+b,
• Where:
• r is the original pixel intensity.
• s is the transformed pixel intensity.
• a and b are constants that control the contrast and brightness.
Linear: Identity and Negative Transformation
• In the identity transformation, each input value is mapped directly to the same value in the output image: s = r. (Contrast stretching is another common linear transformation.)
• The negative transformation is the opposite of the identity transformation: each value of the input image is subtracted from L − 1 and then mapped onto the output image:
• s = L − 1 − r
• Typical applications: photographic negatives and medical images.
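A minimal NumPy sketch of these linear mappings; the array r stands in for an image's intensities, and a and b are arbitrary example constants, not values given in the slides:

```python
import numpy as np

L = 256                                   # number of gray levels for an 8-bit image
r = np.arange(L, dtype=np.float64)        # stands in for the pixel intensities of an image

s_identity = r                            # identity: s = r
s_negative = (L - 1) - r                  # negative: s = L - 1 - r
a, b = 1.5, 20
s_linear = np.clip(a * r + b, 0, L - 1)   # general linear form s = a*r + b, clipped to the valid range

print(s_negative[:3])                     # [255. 254. 253.]
```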
Quiz - 2
• In ___________ , each value of the image is directly mapped to each other values of the output
image.
• A). Eigen transformation
• B). Euler transformation
• C). Identity transformation
• D). Negative transformation
Quiz - 2
• Answer: C). Identity transformation
2. Logarithmic Transformations
• The logarithmic transformation enhances low-intensity values while compressing high-intensity values. This is useful for images with a high dynamic range.
• s = c⋅log(1 + r)
Effect:
• Enhances details in darker regions by expanding their intensity values.
• Compresses brighter intensities, reducing contrast in bright areas.
• Useful in applications such as medical imaging, satellite imaging, and low-light
photography.
• Example:
• If an image has a high range of intensity values (such as an X-ray image where
bones appear much brighter), applying a log transformation can make the finer
details in the darker areas more visible.
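A small sketch of the log mapping; the scaling constant c is chosen here so that the output also spans [0, L − 1], which is a convenient assumption rather than a value from the slides:

```python
import numpy as np

L = 256
r = np.arange(L, dtype=np.float64)

c = (L - 1) / np.log(L)        # scale so that the maximum input maps to L - 1
s = c * np.log(1 + r)          # s = c*log(1 + r): dark values are expanded, bright values compressed

print(round(s[10], 1), round(s[200], 1))   # low intensities gain far more than high ones
```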
Quiz - 3
• The ___________ transformation enhances low-intensity values while
compressing high-intensity values.
• A). Linear
• B). Non-Linear
• C). Algorithmic
• D). Logarithmic
Quiz - 3
• Answer: D). Logarithmic
Inverse Log (Exponential)
Transformation
• The inverse log transformation performs the opposite of the log transformation.
It is used to enhance brighter regions while compressing the darker ones.
Effect:
• Enhances bright regions by expanding their intensity values.
• Compresses dark intensity values, reducing contrast in dark areas.
• Useful for applications where bright details need to be emphasized.
• Example :
• If an image has very bright areas with details that need to be enhanced, inverse
log transformation can help make those details more prominent.
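One way to realize the inverse-log (exponential) mapping is simply to invert the log transform above; this is an illustrative sketch using the same assumed constant c:

```python
import numpy as np

L = 256
r = np.arange(L, dtype=np.float64)

c = (L - 1) / np.log(L)
s = np.exp(r / c) - 1          # inverse of s = c*log(1 + r): bright values expanded, dark values compressed

print(round(s[50], 1), round(s[250], 1))   # high intensities gain far more than low ones
```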
Power-Law (nth Power and nth Root) Transformations
1. Power-law transformations are widely used to enhance
images by adjusting brightness and contrast.
2. These transformations follow a power function that
modifies pixel intensity values.
3. Two common forms are nth power transformation and nth
root transformation, which help in enhancing different
intensity regions.
a. Power-Law (Nth Power)
Transformation
• This transformation follows s = c⋅r^γ with γ > 1 (the nth power, γ = n). It is used to enhance bright regions and suppress darker ones.
a. Power-Law (Nth Power)
Transformation
• Effect:
• Expands bright pixel values (enhancing bright areas).
• Compresses dark pixel values (reducing details in darker regions).
• Higher values of γ lead to more extreme brightening.
• Example Use Case:
• Used for applications where bright regions need more emphasis, such
as medical imaging of bright tissues.
Quiz -4
• This transformation is used to enhance ________ regions and
suppress darker ones.
• A). Dull
• B). Black
• C). Brighter
• D). White
Quiz - 4
• Answer: C). Brighter
b. Power-Law (Nth Root)
Transformation
• This transformation follows s = c⋅r^(1/n), i.e., γ < 1. It is used to enhance dark regions while compressing bright regions.
b. Power-Law (Nth Root)
Transformation
• Example:
• Used for low-light images, such as night photography
or satellite images, to improve visibility of dark areas.
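A sketch of both power-law variants on normalized intensities; the γ values below are arbitrary examples, not values specified in the slides:

```python
import numpy as np

L = 256
r = np.arange(L, dtype=np.float64) / (L - 1)   # normalize intensities to [0, 1]

gamma_power = 3.0                              # nth power (gamma > 1): brightens bright regions, darkens the rest
gamma_root = 1.0 / 3.0                         # nth root  (gamma < 1): lifts dark regions, compresses bright ones

s_power = (L - 1) * r ** gamma_power           # s = c * r^gamma with c = L - 1
s_root = (L - 1) * r ** gamma_root

print(int(s_power[64]), int(s_root[64]))       # a dark pixel (r = 64/255) after each transform
```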
Recap of (5.2.2025) Gray Level Transformations

• a) Linear (Negative and Identity) Transformations
• b) Logarithmic (log and inverse-log) Transformations
• Log transform: enhances low-intensity values while compressing high-intensity values.
• Inverse-log transform: enhances brighter regions while compressing the darker ones.
• c) Power-Law (nth power and nth root) Transformations
• nth power transform: enhances bright regions and suppresses darker ones.
• nth root transform: enhances dark regions while compressing bright regions.
Quiz - 1

• What can be interpreted from the diagram? Choose all the correct answers.
• There are gaps between bars in a bar graph but in the histogram, the bars are
adjacent to each other.
• AAP lost to NDA.
• Histogram presents numerical data whereas bar graph shows categorical data.
• DeepSeek is a Chinese artificial intelligence company.
Quiz - 1
• Answer: the first and third statements (bars have gaps in a bar graph but are adjacent in a histogram; a histogram presents numerical data whereas a bar graph shows categorical data).
Histogram Processing
• What is Histogram Processing?
• A technique used to analyze and enhance images by modifying pixel
intensity distributions.
• Helps in contrast enhancement, thresholding and equalization.
• Essential in applications like medical imaging, satellite imaging and
industrial quality inspection.
• Image Histogram: A graphical representation of pixel
intensity distribution.
• X-axis: Intensity levels (0-255 for an 8-bit image).
• Y-axis: Number of pixels at each intensity level.
Interpretation from the histogram
• Intuitively, it is reasonable to conclude that an image whose pixels tend to occupy the entire range of possible intensity levels and, in addition, are distributed uniformly, will have an appearance of high contrast and will exhibit a large variety of gray tones.
Types of Histogram Processing (1/2)
1. Histogram Stretching (Normalization)
• Linearly rescales pixel intensities so that they span the full available range [0, L − 1].
2. Histogram Equalization
• Redistributes pixel intensities to achieve a uniform histogram.
• Uses cumulative distribution function (CDF).
• Improves visibility in low-contrast images
Types of Histogram Processing (2/2)
3. Histogram Specification (Matching)
• Adjusts the histogram to match a desired histogram.
• Useful in standardizing image appearances.
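A minimal sketch of histogram computation and contrast stretching, assuming an 8-bit grayscale image; the synthetic low-contrast image below is only for illustration:

```python
import numpy as np

L = 256
img = np.random.randint(60, 180, size=(64, 64))          # synthetic low-contrast image

hist, _ = np.histogram(img, bins=L, range=(0, L))         # pixel counts per intensity level

r_min, r_max = img.min(), img.max()
stretched = (img - r_min) / (r_max - r_min) * (L - 1)     # linearly rescale to span [0, L-1]
stretched = stretched.astype(np.uint8)

print(img.min(), img.max(), stretched.min(), stretched.max())   # e.g. 60 179 0 255
```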
Quiz - 2
• Choose the correct Histogram processing methods.
• Histogram Stretching
• Histogram Equalization
• Histogram Specification
• Histogram Transformation
Quiz - 2
• Answer: Histogram Stretching, Histogram Equalization and Histogram Specification.
Applications of Histogram Processing
• Medical Imaging: Enhancing X-ray and MRI images.
• Satellite Imaging: Improving visibility in aerial and space
images.
• Machine Vision: Detecting defects in manufacturing.
• Face Recognition: Enhancing facial details in low-light
conditions.
Histogram Equalization
• Let the variable r denote the intensities of an
image to be processed.
• As usual, we assume that r is in the range [0,L -
1], with r = 0 representing black and r = L - 1
representing white.
Histogram Equalization
(a) Monotonic Increasing Function
• The left graph shows that multiple values of
r (input intensity) can be mapped to a single
output intensity ‘s’.
• This means that different pixel intensities in
the original image may merge into the same
intensity in the processed image, which can
lead to loss of details.
• This type of transformation is useful for
certain applications but may reduce image
contrast.
(b) Strictly Monotonic Increasing Function
• The right graph shows a one-to-one mapping between r
and s, ensuring that each input intensity has a unique
corresponding output intensity.
• This preserves all details and prevents information loss,
making it ideal for contrast enhancement.
• If T(r) is strictly monotonic, the inverse function is single-
valued (one-to-one).

• These transformations are commonly used in histogram


equalization and contrast adjustments in digital image
processing.
Quiz - 3
• Which of the following statements correctly differentiates a monotonic
increasing function from a strictly monotonic increasing function in
histogram processing?
• A) A monotonic increasing function ensures intensity values do not decrease,
while a strictly monotonic increasing function ensures distinct input values
map to distinct output values.
B) A strictly monotonic increasing function allows intensity values to remain
constant, whereas a monotonic increasing function does not.
C) A monotonic increasing function always increases, while a strictly
monotonic increasing function can decrease in certain cases.
D) Both functions behave the same way and have no distinction in histogram
processing.
Quiz - 3
• Answer: A) A monotonic increasing function ensures intensity values do not decrease, while a strictly monotonic increasing function ensures distinct input values map to distinct output values.
Practical Challenges in Discrete Image
Processing
• Pixel intensity values are stored as integers, leading to
rounding errors.
• This may cause non-strict monotonicity, making
inverse transformations less precise.
Problem based on Histogram Equalization
• Suppose that a 3-bit image (L = 8) of size 64 x 64 pixels (MN = 4096) has the intensity
distribution in Table shown below, where the intensity levels are integers in the range
[0,L - 1] = [0, 7]. The histogram of this image is sketched as shown below. Apply
histogram equalization and enhance the image quality.
Problem based on Histogram Equalization
• Given,
• A 3-bit image (L = 8) of size 64 x 64 pixels (MN = 4096)
• Solution:
Problem based on Histogram Equalization
Probability Density Function (PDF) Transformation
• p_r(r_k) = n_k / MN, k = 0, 1, …, L − 1
• s_k = T(r_k) = (L − 1) Σ_{j=0}^{k} p_r(r_j), with each s_k rounded to the nearest integer in [0, L − 1]
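A sketch of the discrete equalization computation for a problem of this form. The counts n_k below are the ones commonly quoted for this classic 64 x 64, 3-bit textbook example; they are an assumption here and should be replaced with the values from the table on the slide:

```python
import numpy as np

L, MN = 8, 4096
# Assumed intensity counts n_k for levels r = 0..7 (substitute the table values from the slide)
n_k = np.array([790, 1023, 850, 656, 329, 245, 122, 81])

p_r = n_k / MN                              # PDF: p_r(r_k) = n_k / MN
cdf = np.cumsum(p_r)                        # cumulative distribution function
s_k = np.round((L - 1) * cdf).astype(int)   # s_k = round((L-1) * sum_{j<=k} p_r(r_j))

print(s_k)                                  # equalized levels, e.g. [1 3 5 6 6 7 7 7] for the counts above
```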
FUNDAMENTALS OF SPATIAL
FILTERING
• The name filter is borrowed from frequency domain processing, where filtering refers to passing, modifying, or rejecting specified frequency components of an image. For example, a filter that passes low frequencies is called a lowpass filter.
• The net effect produced by a lowpass filter is to smooth an
image by blurring it. We can accomplish similar smoothing
directly on the image itself by using spatial filters.
• Spatial filtering modifies an image by replacing the value
of each pixel by a function of the values of the pixel and its
neighbors. If the operation performed on the image pixels
is linear, then the filter is called a linear spatial filter.
Otherwise, the filter is a nonlinear spatial filter.
FUNDAMENTALS OF SPATIAL
FILTERING
• Spatial filtering is a fundamental technique in image
processing used to enhance or suppress specific
features in an image. The two major categories of
spatial filters are:
1. Smoothing Spatial Filters – Used to reduce noise
and blur an image.
2. Sharpening Spatial Filters – Used to enhance edges
and fine details in an image.
Quiz - 1
• __________ are used to reduce noise and blur an image.
• A). Frequency filters
• B). Rough filters
• C). Smoothing Spatial filters
• D). Sharpening Spatial filters
Quiz - 1
• Answer: C). Smoothing Spatial filters
1. Smoothing Spatial Filters
• Smoothing filters are designed to reduce noise and smooth variations in an image by averaging pixel values within a neighborhood. These filters are useful for:
• Reducing noise
• Removing small details
• Blurring an image
• Types of Smoothing Filters:
• (i) Averaging (Mean) Filter
• (ii) Gaussian Filter
• (iii) Median Filter
• Note: other terms used to refer to a spatial filter kernel are mask, template, and window. We use the term filter kernel or simply kernel.
Separable Filter Kernels
• A separable filter kernel is a filter that can be broken down into two
1D filters—one applied along the rows and the other along the
columns—instead of applying a single 2D filter. This significantly
reduces computational complexity.
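A sketch of the idea, assuming a 5×5 box kernel (which has rank 1 and is therefore separable into a row pass followed by a column pass):

```python
import numpy as np

img = np.random.rand(64, 64)

v = np.ones(5) / 5                      # 1D averaging kernel
w = np.outer(v, v)                      # the equivalent 5x5 2D kernel (rank 1, hence separable)

# Pass 1: filter every row with v; Pass 2: filter every column of the result.
rows = np.apply_along_axis(lambda m: np.convolve(m, v, mode='same'), axis=1, arr=img)
out_separable = np.apply_along_axis(lambda m: np.convolve(m, v, mode='same'), axis=0, arr=rows)

# Two 1D passes cost about 2*m multiplications per pixel instead of m*m for the full 2D kernel.
print(out_separable.shape, w.shape)     # (64, 64) (5, 5)
```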
1. Smoothing Spatial Filters
• (i) Averaging (Mean) Filter: replaces each pixel with the average of the intensities in its neighborhood (a box kernel whose coefficients are all 1/mn for an m×n mask).
• (ii) Gaussian Filter: uses weights sampled from a 2D Gaussian function, so pixels near the center of the neighborhood contribute more; it blurs more gently than the box filter.
• (iii) Median Filter: replaces each pixel with the median of its neighborhood; very effective against salt-and-pepper (impulse) noise.
• Example:
• Original Pixel Neighborhood: [12, 5, 8, 200, 7, 10, 6, 9, 15]
• Median Value: 9 (replaces the center pixel)
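A sketch applying all three filters to the single 3×3 neighborhood from the example above; the Gaussian weights shown are the common 1-2-1 approximation, an assumption rather than a kernel specified in the slides:

```python
import numpy as np

window = np.array([[12, 5, 8],
                   [200, 7, 10],
                   [6, 9, 15]], dtype=np.float64)    # the neighborhood from the median-filter example

mean_kernel = np.full((3, 3), 1 / 9)                 # averaging (box) kernel
mean_value = np.sum(window * mean_kernel)            # ~30.2: the outlier 200 drags the average up

g = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=np.float64) / 16     # common 3x3 Gaussian-like kernel
gauss_value = np.sum(window * g)                     # weighted average, center weighted most

median_value = np.median(window)                     # 9: the outlier is rejected entirely

print(round(mean_value, 1), round(gauss_value, 1), median_value)
```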
2. Sharpening Spatial Filters
• Sharpening filters highlight transitions (edges) in an image by
enhancing high-frequency components. These filters are useful for:
• Enhancing edges
• Highlighting fine details
• Increasing image contrast
• Types of Sharpening Filters:
• (i) Laplacian Filter
• (ii) High Boost Filtering
• (iii) Unsharp Masking
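A 1D sketch of Laplacian sharpening and unsharp masking; a 1D intensity profile keeps the arithmetic visible, and high-boost filtering is obtained with k > 1:

```python
import numpy as np

f = np.array([10, 10, 10, 10, 80, 80, 80, 80], dtype=np.float64)   # profile with one sharp edge

# Discrete Laplacian: f(x+1) + f(x-1) - 2 f(x)
lap = np.zeros_like(f)
lap[1:-1] = f[2:] + f[:-2] - 2 * f[1:-1]
sharpened = f - lap            # g = f - laplacian (standard choice when the kernel has a negative center)

# Unsharp masking: subtract a blurred copy, then add the mask back; k > 1 gives high-boost filtering.
blurred = np.convolve(f, np.ones(3) / 3, mode='same')
mask = f - blurred
k = 1.0
unsharp = f + k * mask

# In practice the results are clipped back to [0, L-1].
print(sharpened)               # the edge is overshot on both sides, which visually sharpens it
```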
Quiz - 2
• Sharpening filters highlight transitions (edges) in an image by
enhancing high-frequency components. These filters are useful for:
• A). Enhancing edges
• B). Highlighting fine details
• C). Increasing image contrast
• D). Blur the image
Quiz - 2
• Answer: A), B) and C)
Frequency Domain
• Images can be represented as a combination
of sinusoidal waves (low/high frequencies).
• Low frequencies: Smooth areas (e.g., walls, skies).
• High frequencies: Edges, noise, textures.
• Key Tool: Fourier Transform (DFT/FFT) converts
spatial-domain images to frequency-domain spectra.
Quiz - 1
• Why Convert from Spatial to Frequency Domain?
• A. Separation of Image Components (Low vs. High Frequency)
• B. Efficient Filtering (Convolution Theorem)
• C. Image Compression (Energy Compaction)
• D. Edge Detection and Feature Extraction
Quiz - 1
• Answer: all of the above (A, B, C and D are all standard reasons for working in the frequency domain).
Smoothing (Lowpass Filters)
• Filtering in the frequency domain consists of modifying the Fourier transform of an image,
then computing the inverse transform to obtain the spatial domain representation of the
processed result.
• Thus, given (a padded) digital image, f(x, y), of size P × Q pixels, the basic filtering equation in which we are interested has the form:
• g(x, y) = F⁻¹[ H(u,v) F(u,v) ]
• where F⁻¹ is the IDFT, F(u,v) is the DFT of the input image f(x, y),
• H(u,v) is a filter transfer function (which we often call just a filter or filter function), and
• g(x, y) is the filtered (output) image.

• Functions F, H, and g are arrays of size P ×Q, the same as the padded input image.
• The product H(u,v)F(u,v) is formed using elementwise multiplication.
• The filter transfer function modifies the transform of the input image to yield the processed
output, g(x, y).
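A sketch of this pipeline using a Gaussian lowpass transfer function; the random image, the cutoff D0, and the omission of explicit padding are all simplifying assumptions for illustration:

```python
import numpy as np

img = np.random.rand(128, 128)
P, Q = img.shape
D0 = 20.0

F = np.fft.fftshift(np.fft.fft2(img))            # DFT with the zero frequency moved to the center

u = np.arange(P) - P / 2
v = np.arange(Q) - Q / 2
V, U = np.meshgrid(v, u)
D = np.sqrt(U**2 + V**2)                         # distance from the center of the frequency rectangle
H = np.exp(-(D**2) / (2 * D0**2))                # Gaussian lowpass transfer function

g = np.fft.ifft2(np.fft.ifftshift(H * F)).real   # elementwise product, then inverse DFT

print(img.std(), g.std())                        # the filtered image has less high-frequency variation
```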
Smoothing (Lowpass Filters)
• A function H(u,v) that attenuates high
frequencies while passing low frequencies
(called a lowpass filter, as noted before)
would blur an image,
• while a filter with the opposite property
(called a highpass filter) would enhance
sharp detail, but cause a reduction in
contrast in the image.
Quiz - 2
• __________ filter, would blur an image; ________ filter
would enhance sharp detail of an image.
• A. lowpass, lopass
• B. highpass, lowpass
• C. lowpass, highpass
• D. highpass, higpass
Quiz - 2
• Answer: C. lowpass, highpass
Types of Low-Pass Filters
• 1. Ideal Low-Pass Filter (ILPF)
• 2. Gaussian Low-Pass Filter (GLPF)
• 3. Butterworth Low-Pass Filter (BLPF)

• Ringing artifacts, also known as the Gibbs phenomenon, refer to oscillations that appear near sharp edges or discontinuities when a signal or image is reconstructed using a truncated or limited-frequency representation.
• The three transfer functions are:
• ILPF: H(u,v) = 1 if D(u,v) ≤ D0, and 0 otherwise
• GLPF: H(u,v) = e^(−D²(u,v) / 2D0²)
• BLPF (order n): H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]
• where D0 is a positive constant (the cutoff frequency), and D(u,v) is the distance between a point (u,v) in the frequency domain and the center of the P × Q frequency rectangle; that is,
• D(u,v) = [(u − P/2)² + (v − Q/2)²]^(1/2)
• Comparing the cross-section plots in Figs. 4.39, 4.43, and 4.45, we see that the BLPF function can be controlled to approach the characteristics of the ILPF using higher values of n, and the GLPF for lower values of n, while providing a smooth transition from low to high frequencies.
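A sketch constructing all three lowpass transfer functions on a P × Q frequency grid; P, Q, D0 and the Butterworth order n are arbitrary example values:

```python
import numpy as np

P, Q, D0, n = 128, 128, 30.0, 2

u = np.arange(P) - P / 2
v = np.arange(Q) - Q / 2
V, U = np.meshgrid(v, u)
D = np.sqrt(U**2 + V**2)                       # D(u,v): distance from the center of the frequency rectangle

H_ideal = (D <= D0).astype(float)              # ILPF: 1 inside the cutoff, 0 outside (sharp cutoff -> ringing)
H_butter = 1.0 / (1.0 + (D / D0)**(2 * n))     # BLPF of order n: smooth, controllable transition
H_gauss = np.exp(-(D**2) / (2 * D0**2))        # GLPF: smoothest transition, no ringing

print(H_ideal[64, 64], round(H_butter[64, 64], 2), round(H_gauss[64, 64], 2))   # all ~1 at the center
```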
Frequency Domain - Sharpening frequency
domain filters
• Sharpening in the frequency domain is achieved by emphasizing
high-frequency components while suppressing low-frequency
components.
• This enhances edges, fine details, and textures in an image.
• Low frequencies represent smooth variations (backgrounds, gradual
intensity changes).
• High frequencies represent rapid changes (edges, fine textures, noise).
• Sharpening in the frequency domain focuses on boosting high-
frequency components, making edges and details more
pronounced.
Frequency Domain - Sharpening frequency
domain filters
• Subtracting a lowpass filter transfer function from 1 yields the corresponding highpass filter transfer function in the frequency domain:
• H_HP(u,v) = 1 − H_LP(u,v)
1. Ideal High-Pass Filtering (IHPF)
• This filter completely removes low-frequency components (smooth regions) and retains only high-frequency components (edges and textures).
• It acts as a binary filter:
• Frequencies below a cutoff D0 are completely removed.
• Frequencies above D0 are completely retained.
2. Butterworth High-Pass Filtering (BHPF)
• A smooth version of the ideal filter, reducing abrupt frequency
transitions.
• The filter function gradually increases from 0 to 1 around D0, avoiding sharp cutoffs.
3. Gaussian High-Pass Filtering (GHPF)
• Uses a Gaussian function to smoothly attenuate low frequencies while
preserving high frequencies.
• The transition between low and high frequencies is smoothest,
avoiding artifacts like ringing.
4. Laplacian High-Pass Filtering (LHPF)
• A sharpening filter that enhances high frequencies directly using the
Laplacian operator in the frequency domain.
• It amplifies edges more aggressively than other filters.
Summary
• Ideal HPF is the sharpest but introduces artifacts.
• Butterworth HPF is smoother and controllable.
• Gaussian HPF provides the smoothest sharpening
without artifacts.
• Laplacian HPF offers extreme sharpening but
amplifies noise
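A sketch constructing the highpass transfer functions summarized above via H_HP = 1 − H_LP, plus the frequency-domain Laplacian; the grid parameters are the same illustrative values used earlier:

```python
import numpy as np

P, Q, D0, n = 128, 128, 30.0, 2

u = np.arange(P) - P / 2
v = np.arange(Q) - Q / 2
V, U = np.meshgrid(v, u)
D = np.sqrt(U**2 + V**2)

H_ihpf = 1.0 - (D <= D0).astype(float)                # Ideal HPF: sharpest cutoff, but introduces ringing
H_bhpf = 1.0 - 1.0 / (1.0 + (D / D0)**(2 * n))        # Butterworth HPF: smoother, controllable via n
H_ghpf = 1.0 - np.exp(-(D**2) / (2 * D0**2))          # Gaussian HPF: smoothest, no ringing

# Laplacian in the frequency domain: H(u,v) = -4*pi^2*D^2(u,v); the most aggressive edge enhancement.
H_lap = -4 * np.pi**2 * D**2

print(round(H_ihpf[0, 0], 2), round(H_ghpf[0, 0], 2))   # ~1 far from the center: high frequencies pass
```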
HOMOMORPHIC FILTERING
• Homomorphic filtering is a technique used in image processing to correct non-
uniform illumination and enhance contrast by manipulating the illumination
and reflectance components of an image.
• It operates in the frequency domain and uses a high-pass filter to attenuate low
frequencies (illumination) while amplifying high frequencies (reflectance).
• An image f(x, y) can be expressed as the product of its illumination, i(x, y), and reflectance, r(x, y), components: f(x, y) = i(x, y) r(x, y)
Quiz - 1
• ____________ is a technique in digital image processing used for
image enhancement, particularly in improving contrast and correcting
non-uniform illumination.
• A). Homomorphic filtering
• B). Isomorphic filtering
• C). Mesomorphic filtering
• D). Harmonic filtering
Quiz - 1
• Answer: A). Homomorphic filtering
HOMOMORPHIC FILTERING
• 1. Image Model Representation
• An image can be modeled as:
• f (x, y) = i(x, y)r(x, y)
• where:
• i(x,y) → Illumination (low-frequency components)
• r(x,y) → Reflectance (high-frequency components)
• To apply filtering, we take the logarithm to convert multiplication into addition:
• ln f(x,y) = ln i(x,y) + ln r(x,y)
• 2. Transform to Frequency Domain
• Applying the Fourier Transform (FT):
• F(u,v)=I(u,v)+R(u,v)
• where F(u,v), I(u,v) and R(u,v) are the FT of ln f(x,y), ln i(x,y) and ln r(x,y)
respectively
HOMOMORPHIC FILTERING
• 3. Frequency-Domain Filtering
• The spectrum of the log image is multiplied by a filter transfer function H(u,v) of the form
• H(u,v) = (γH − γL)[1 − e^(−c⋅D²(u,v)/D0²)] + γL
• c is a constant controlling the steepness of the transition between high and low frequencies.
• Lower frequencies (D(u,v) ≈ 0):
• 1. The filter response starts at γL, which is a low gain factor.
• 2. This means low-frequency components (illumination variations) are attenuated, reducing uneven lighting effects.
• Higher frequencies (D(u,v) → large values):
• 3. The filter response saturates at γH, a higher gain factor.
• 4. This enhances high-frequency components (reflectance details like edges and textures).
HOMOMORPHIC FILTERING
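A sketch of the complete pipeline (log, DFT, filter, inverse DFT, exponentiate); the constants γL, γH, c and D0 below are illustrative values, not ones given in the slides:

```python
import numpy as np

img = np.random.rand(128, 128) + 0.1          # stand-in image; values must be > 0 before taking the log
P, Q = img.shape
gamma_L, gamma_H, c, D0 = 0.5, 2.0, 1.0, 30.0

z = np.log1p(img)                             # step 1: ln f = ln i + ln r  (log1p avoids log(0))
Z = np.fft.fftshift(np.fft.fft2(z))           # step 2: Fourier transform of the log image

u = np.arange(P) - P / 2
v = np.arange(Q) - Q / 2
V, U = np.meshgrid(v, u)
D2 = U**2 + V**2
H = (gamma_H - gamma_L) * (1 - np.exp(-c * D2 / D0**2)) + gamma_L   # step 3: homomorphic transfer function

s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real                      # step 4: filter and inverse transform
g = np.expm1(s)                                                     # step 5: exponentiate to undo the log

print(g.shape)                                # result: attenuated illumination, boosted reflectance detail
```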
APPLICATIONS OF HOMOMORPHIC
FILTERING
1.Medical Imaging → Enhancing X-ray, MRI, and CT scan images.
2.Document Enhancement → Improving old, degraded, or unevenly
illuminated text images.
3.Satellite Image Processing → Enhancing details in remote sensing
images.
4.Face Recognition → Improving image contrast in facial images.
Positron Emission Tomography
