Image Enhancement: Image Filtering
Processing Images in Spatial Domain: Introduction
09/08/2023
Mask (also called filter, kernel, window, or template): a small array used for neighbourhood processing.
[Figure: image coordinate convention, with the origin (0,0) at the top-left corner, x running down and y running across.]
Kernel Operator: Intro
Histogram Equalization
[Figure: source image and equalized image, with their corresponding histograms.]
Perform histogram equalization on the input image.

[Figure: histogram of the input image over grey levels 0-7.]

Grey level   Pixels (nk)   PDF = nk/Sum    CDF = Sk   Sk * 7   Equalized level
0            0             0               0          0        0
1            0             0               0          0        0
2            0             0               0          0        0
3            6             6/25 = 0.24     0.24       1.68     2
4            14            0.56            0.80       5.60     6
5            5             0.20            1.00       7.00     7
6            0             0               1.00       7.00     7
7            0             0               1.00       7.00     7

[Figure: output image and its equalized histogram.]
Numerical Example
0 1 1 3 4
7 2 5 5 7
6 3 2 1 1
1 4 4 2 1
Step 1:
• Find the range of intensity values.
• [0, 1, 2, 3, 4, 5, 6, 7]
Step 2:
• Total number of pixels = 20 = 4 × 5
• Calculate PDF = frequency of each intensity / total number of pixels, for each intensity value
• For intensity 0, the frequency is 1, so PDF = 1/20 = 0.05
Input image:        Equalized output:
0 1 1 3 4           0 2 2 4 5
7 2 5 5 7           7 4 6 6 7
6 3 2 1 1           6 4 4 2 2
1 4 4 2 1           2 5 5 4 2
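The mapping above can be sketched in a few lines of NumPy (a minimal illustration; `equalize` is a hypothetical helper, and production code would typically use OpenCV's `cv2.equalizeHist` or scikit-image's `equalize_hist`):

```python
# Histogram equalization sketch for the 4x5, 3-bit (levels 0-7) example above.
import numpy as np

def equalize(img, levels=8):
    """Map each grey level g to round((L-1) * CDF(g))."""
    hist = np.bincount(img.ravel(), minlength=levels)  # per-level counts nk
    pdf = hist / img.size                              # PDF = nk / total
    cdf = np.cumsum(pdf)                               # CDF = Sk
    lut = np.round((levels - 1) * cdf).astype(img.dtype)
    return lut[img]                                    # apply the lookup table

img = np.array([[0, 1, 1, 3, 4],
                [7, 2, 5, 5, 7],
                [6, 3, 2, 1, 1],
                [1, 4, 4, 2, 1]])
out = equalize(img)
```

Running this reproduces the equalized output shown above (e.g. level 1 maps to 2, level 4 maps to 5).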
Point Processing:
Gamma Correction
Point Processing:Contrast Stretching
[Figure: contrast-stretching transfer function; input and output intensities range over [0, L−1].]
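The two point operations named above have standard forms (a sketch; c is a scaling constant, γ the gamma value, and the stretching shown is the common min-max linear stretch):

```latex
\text{Gamma correction:}\quad s = c\, r^{\gamma}
\qquad
\text{Contrast stretching:}\quad s = (L-1)\,\frac{r - r_{\min}}{r_{\max} - r_{\min}}
```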
EM ALGORITHM
• In real-world applications of machine learning, it is very common that many relevant features are available for learning, but only a small subset of them is observable.
• First, a set of initial parameter values is chosen. A set of incomplete observed data is given to the system, with the assumption that the observed data comes from a specific model.
• The next step is the "Expectation" step, or E-step. Here, the observed data are used to estimate (guess) the values of the missing or incomplete data; this step updates the variables.
• The next step is the "Maximization" step, or M-step. Here, the complete data generated in the preceding E-step are used to update the values of the parameters; this step updates the hypothesis.
• Finally, it is checked whether the values are converging. If yes, stop; otherwise repeat the E-step and M-step until convergence occurs.
• Algorithm:
1. Given a set of incomplete data, consider a set of starting parameters.
2. E-step: use the observed data to estimate the missing data.
3. M-step: use the completed data to re-estimate the parameters.
4. Repeat steps 2-3 until the parameters converge.
Classification Results
[Diagram: feature vectors x1 = {r1, g1, b1}, x2 = {r2, g2, b2}, …, xi = {ri, gi, bi} are fed to a classifier (K-Means), which outputs classification results x1 → C(x1), x2 → C(x2), …, xi → C(xi), together with the cluster parameters: m1 for C1, m2 for C2, …, mk for Ck.]
K-Means Classifier (Cont.)
[Diagram: starting from an initial guess of the cluster parameters m1, m2, …, mk, the classifier alternates between classification results C(x1), C(x2), …, C(xi) and re-estimated cluster parameters; after ic iterations it yields Classification Results (ic) and Cluster Parameters (ic).]
K-Means (Cont.)
• Boot Step:
• Initialize K clusters: C1, …, CK
Each Cluster is represented by its mean mj
• Iteration Step:
• Estimate the cluster of each data point: xi → C(xi)
• Re-estimate the cluster parameters: set each mean mj to the mean of the points currently assigned to Cj
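The boot and iteration steps above can be sketched as follows (a minimal NumPy illustration; `kmeans` is a hypothetical helper with no empty-cluster handling, and `sklearn.cluster.KMeans` is the practical choice):

```python
# Minimal K-means sketch following the boot/iteration steps above.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Boot step: initialize the k means from randomly chosen data points
    means = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Estimate the cluster of each data point (nearest mean)
        labels = np.argmin(((X[:, None] - means[None]) ** 2).sum(-1), axis=1)
        # Re-estimate the cluster parameters (mean of assigned points)
        means = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, means

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, means = kmeans(X, k=2)
```

On this toy data the two tight pairs of points end up in separate clusters.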
K-Means Example
K-Means → EM
• Boot Step:
• Initialize K clusters: C1, …, CK
(μj, Σj) and P(Cj) for each cluster j
• Iteration Step:
• Estimate the cluster of each data point (Expectation)
• Re-estimate the cluster parameters for each cluster j (Maximization)
EM Classifier
[Diagram: feature vectors x1 = {r1, g1, b1}, x2 = {r2, g2, b2}, …, xi = {ri, gi, bi} are fed to a classifier (EM), which outputs soft classification results p(Cj|x1), p(Cj|x2), …, p(Cj|xi), together with the cluster parameters: (μ1, Σ1), p(C1) for C1; (μ2, Σ2), p(C2) for C2; …; (μk, Σk), p(Ck) for Ck.]
EM Classifier (Cont.)
[Diagram: the known inputs are the data points; the unknown outputs are the soft cluster memberships and the cluster parameters, estimated by alternating the Expectation and Maximization steps.]
• Usage of the EM algorithm:
• The E-step and M-step are often straightforward to implement for many problems.
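The EM loop described above can be sketched for a 1-D two-component Gaussian mixture (a minimal illustration under simplifying assumptions: scalar variances, crude initialization, fixed iteration count; `em_gmm` is a hypothetical helper, and `sklearn.mixture.GaussianMixture` is the practical alternative):

```python
# EM sketch for a 1-D two-component Gaussian mixture.
import numpy as np

def em_gmm(x, iters=50):
    # Boot step: crude initial guesses for (mu_j, sigma_j) and P(C_j)
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    prior = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior p(C_j | x_i) for each point and component
        lik = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = lik * prior
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate the cluster parameters from the soft assignments
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        prior = nk / len(x)
    return mu, sigma, prior

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
mu, sigma, prior = em_gmm(x)
```

On this well-separated mixture the estimated means converge near 0 and 10 with roughly equal priors.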
Fourier Transform: a review
• Basic ideas:
• A periodic function can be represented by the sum of sine/cosine functions of different frequencies, each multiplied by a different coefficient.
• Non-periodic functions can also be represented as the integral of sines/cosines multiplied by a weighting function.
Joseph Fourier
(1768-1830)
Fourier Transforms in Action
• Fourier transforms are used in MRI imaging to reconstruct images from frequency data. This allows creating detailed
anatomical pictures non-invasively.
• Engineers analyze vibration signals using Fourier transforms to detect faults in rotating machinery. The frequency
spectrum reveals problematic vibration patterns.
• Audio compression techniques like MP3 rely on Fourier transforms to eliminate inaudible frequencies. This reduces file
sizes with minimal impact on quality.
• Radio engineers use Fourier transforms to modulate and demodulate signals for radio transmission. The data is
encoded onto carrier waves as frequency variations.
• Image processing applications use Fourier transforms for noise removal, sharpening, and detecting patterns. Spatial
frequencies reveal information for enhancing images.
• Scientists study the Fourier transform of star spectra to determine celestial objects' chemical composition, motion,
and temperature.
Fourier transform basis functions
[Figure: approximating a square wave as the sum of sine waves.]
Any function can be written as the sum of an even and an odd function, where E(−x) = E(x) and O(−x) = −O(x).
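Explicitly, the decomposition is:

```latex
f(x) = E(x) + O(x),
\qquad
E(x) = \frac{f(x) + f(-x)}{2},
\qquad
O(x) = \frac{f(x) - f(-x)}{2}
```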
Fourier Series
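In its standard complex form, the Fourier series of a function f(t) with period T is:

```latex
f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{\,j \frac{2\pi n}{T} t},
\qquad
c_n = \frac{1}{T} \int_{0}^{T} f(t)\, e^{-j \frac{2\pi n}{T} t}\, dt
```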
The Fourier Transform
Let F(u) incorporate both the cosine and sine series coefficients, with the sine series distinguished by making it the imaginary component:
F(u) is called the Fourier Transform of f(t). We say that f(t) lives in the
“time domain,” and F(u) lives in the “frequency domain.” u is called the
frequency variable.
The Inverse Fourier Transform
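For reference, the standard 1-D continuous transform pair is:

```latex
F(u) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi u t}\, dt
\qquad\qquad
f(t) = \int_{-\infty}^{\infty} F(u)\, e^{\,j 2\pi u t}\, du
```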
2-D Fourier Transform
Discrete Fourier Transform (DFT)
Discrete Fourier Transform (DFT)
Let x denote the discrete sample values (x = 0, 1, 2, …, M−1); the transform then becomes a finite sum over these samples.
Discrete Fourier Transform (DFT)
• The discrete Fourier transform pair that applies to sampled functions is given by:
F(u) = (1/M) Σx f(x) e^(−j2πux/M),   u = 0, 1, 2, …, M−1
and
f(x) = Σu F(u) e^(j2πux/M),   x = 0, 1, 2, …, M−1
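The pair can be checked numerically with a direct implementation (a sketch; the 1/M factor on the forward transform follows the convention above, whereas NumPy's `np.fft.fft` puts no factor on the forward transform):

```python
# Direct 1-D DFT pair with the 1/M factor on the forward transform.
import numpy as np

def dft(f):
    """Forward DFT, F(u) = (1/M) sum_x f(x) e^{-j 2 pi u x / M}."""
    M = len(f)
    x = np.arange(M)
    return (f * np.exp(-2j * np.pi * x[:, None] * x / M)).sum(axis=1) / M

def idft(F):
    """Inverse DFT, f(x) = sum_u F(u) e^{j 2 pi u x / M} (no factor)."""
    M = len(F)
    u = np.arange(M)
    return (F * np.exp(2j * np.pi * u[:, None] * u / M)).sum(axis=1)

f = np.array([1.0, 2.0, 4.0, 3.0])
F = dft(f)
```

The result agrees with `np.fft.fft(f) / M`, and `idft(dft(f))` recovers `f`.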
2-D Discrete Fourier Transform
• In the 2-D case, the DFT pair is:
F(u,v) = (1/MN) Σx Σy f(x,y) e^(−j2π(ux/M + vy/N))
f(x,y) = Σu Σv F(u,v) e^(j2π(ux/M + vy/N))

Shift
Cont.
• When you compute the Fourier Transform of an image, the DC (Direct Current)
component (corresponding to the zero frequency) is located at the top-left corner of
the resulting frequency domain representation.
• This might not be intuitive or convenient for certain applications. Shifting the DC
component to the center of the frequency domain allows for easier visualization and
analysis.
• Multiplying the input image by (−1)^(x+y) before computing the Fourier Transform shifts the centre of the frequency domain representation to the centre of the image, making it easier to interpret and analyze the frequency content of the image.
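A quick numerical check of this equivalence (a sketch; for even image sizes, pre-multiplying by (−1)^(x+y) gives exactly the same spectrum as applying `np.fft.fftshift` afterwards):

```python
# (-1)^(x+y) modulation vs. np.fft.fftshift: both move DC to the centre.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))

x, y = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
centered = np.fft.fft2(img * (-1.0) ** (x + y))   # DC moved to the centre
shifted = np.fft.fftshift(np.fft.fft2(img))       # same result via fftshift
```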
Symmetry of FT
• For a real image f(x,y), the FT is conjugate symmetric: F(u,v) = F*(−u,−v), and hence |F(u,v)| = |F(−u,−v)|.
[Figure: an image, its Fourier transform (FT), and reconstructions obtained via the inverse transform (IFT).]
The central part of the FT, i.e. the low-frequency components, is responsible for the general grey-level appearance of an image.
Image Frequency Domain (log magnitude)
[Figure: low frequencies near the centre of the spectrum encode the general appearance; high frequencies toward the edges encode detail. Reconstructions keeping 5%, 10%, 20%, and 50% of the spectrum.]
Frequency Domain Filtering
Frequency Domain Filtering
• Edges and sharp transitions (e.g., noise) in an image contribute
significantly to high-frequency content of FT.
Convolution Theorem
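The theorem states that convolution in one domain corresponds to multiplication in the other (⇔ denotes a Fourier transform pair):

```latex
f(x,y) * h(x,y) \;\Longleftrightarrow\; F(u,v)\,H(u,v)
\qquad\qquad
f(x,y)\,h(x,y) \;\Longleftrightarrow\; F(u,v) * H(u,v)
```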
Ideal low-pass filter (ILPF)
[Figure: ideal lowpass filtering produces both ringing and blurring.]
Ideal in the frequency domain means non-ideal in the spatial domain, and vice versa.
Butterworth Lowpass Filters (BLPF)
• Smooth transfer function, no sharp discontinuity, no clear cutoff frequency.
Butterworth Lowpass Filters (BLPF)
No serious ringing artifacts.
Gaussian Lowpass Filters (GLPF)
• Smooth transfer function, smooth impulse response, no ringing.
No ringing artifacts.
Examples of Lowpass Filtering
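The three lowpass transfer functions from the preceding slides can be sketched and applied in the frequency domain as follows (a minimal illustration; `lowpass` and `distances` are hypothetical helpers, D0 is the cutoff distance, and n is the Butterworth order):

```python
# Ideal, Butterworth, and Gaussian lowpass filtering in the frequency domain.
import numpy as np

def distances(M, N):
    """D(u,v): distance from the centre of the (shifted) frequency rectangle."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    return np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)

def lowpass(img, d0=20, kind="gaussian", n=2):
    D = distances(*img.shape)
    if kind == "ideal":
        H = (D <= d0).astype(float)              # sharp cutoff -> ringing
    elif kind == "butterworth":
        H = 1.0 / (1.0 + (D / d0) ** (2 * n))    # smooth, no clear cutoff
    else:
        H = np.exp(-(D ** 2) / (2.0 * d0 ** 2))  # Gaussian: no ringing
    F = np.fft.fftshift(np.fft.fft2(img))        # spectrum, DC at centre
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
smooth = lowpass(img, d0=8)
```

Lowpass filtering suppresses the high-frequency fluctuations while preserving the mean (DC) level.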
High-pass Filters
Ideal High-pass Filtering
[Figure: results show ringing artifacts.]
Butterworth High-pass Filtering
Gaussian High-pass Filtering
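Each highpass transfer function is one minus the corresponding lowpass function of the same cutoff D0, e.g. for the Gaussian case:

```latex
H_{hp}(u,v) = 1 - H_{lp}(u,v),
\qquad\text{e.g.}\quad
H(u,v) = 1 - e^{-D^2(u,v)/2D_0^2}
```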
Subtract Laplacian from the Original Image to Enhance It
Spatial domain: g(x, y) = f(x, y) − ∇²f(x, y)
Contents
In this lecture we will look at spatial filtering techniques:
• Neighbourhood operations
• What is spatial filtering?
• Smoothing operations
• What happens at the edges?
• Correlation and convolution
• Sharpening filters
• Combining filtering techniques
Neighbourhood Operations
Neighbourhood operations simply operate on a larger neighbourhood
of pixels than point operations
[Figure: a rectangular neighbourhood centred on pixel (x, y) in image f(x, y), with the origin at the top-left.]
Neighbourhoods are mostly a rectangle around a central pixel; any size rectangle and any shape filter are possible.
Simple Neighbourhood Operations
Some simple neighbourhood operations include:
• Min: Set the pixel value to the minimum in the neighbourhood
• Max: Set the pixel value to the maximum in the neighbourhood
• Median: The median value of a set of numbers is the midpoint value in that set (e.g. in the set [1, 7, 15, 18, 24], 15 is the median). Sometimes the median works better than the average.
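The median operation can be sketched with a 3×3 sliding window (a minimal NumPy illustration; `median3x3` is a hypothetical helper, and `scipy.ndimage.median_filter` is the practical choice). On an image with a single impulse ("salt") pixel, the median removes it entirely:

```python
# 3x3 median neighbourhood operation with replicate-edge padding.
import numpy as np

def median3x3(img):
    padded = np.pad(img, 1, mode="edge")  # replicate edge pixels
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.median(windows, axis=(2, 3))

img = np.array([[10, 10, 10],
                [10, 255, 10],    # single impulse ("salt") pixel
                [10, 10, 10]])
out = median3x3(img)
```

Every 3×3 window contains at most one impulse value, so the filtered image is 10 everywhere.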
Simple Neighbourhood Operations Example
The Spatial Filtering Process
[Diagram: a 3×3 neighbourhood of pixels
  a b c
  d e f
  g h i
is combined with a 3×3 filter
  r s t
  u v w
  x y z
centred on pixel e.]
e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i
The above is repeated for every pixel in the original image to generate the filtered image.
Smoothing Spatial Filters
One of the simplest spatial filtering operations we can perform is a
smoothing operation
• Simply average all of the pixels in a neighbourhood around a central value
• Especially useful in removing noise from images
• Also useful for highlighting gross detail

Simple averaging filter:
  1/9  1/9  1/9
  1/9  1/9  1/9
  1/9  1/9  1/9
Smoothing Spatial Filtering
[Diagram: the 3×3 smoothing filter (all coefficients 1/9) centred on the neighbourhood
  104  100  108
   99  106   98
   95   90   85 ]
e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85
  = 98.3333
The above is repeated for every pixel in the original image to generate the smoothed image.
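The computation above can be reproduced directly (a sketch; `filter3x3` is a hypothetical helper that correlates a 3×3 kernel with the image using zero padding):

```python
# Spatial filtering: slide a 3x3 kernel over the image, multiply and sum.
import numpy as np

def filter3x3(img, kernel):
    padded = np.pad(img, 1)  # zero padding at the borders
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i+3, j:j+3] * kernel).sum()
    return out

img = np.array([[104., 100., 108.],
                [ 99., 106.,  98.],
                [ 95.,  90.,  85.]])
avg = np.full((3, 3), 1.0 / 9.0)   # simple averaging filter
out = filter3x3(img, avg)
```

The centre pixel comes out as 885/9 = 98.3333, matching the worked example.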
Image Smoothing Example
The image at the top left is an original image of size 500×500 pixels. The subsequent images show the image after filtering with an averaging filter of increasing sizes: 3, 5, 9, 15 and 35. Notice how detail begins to disappear.
Weighted Smoothing Filters
More effective smoothing filters can be generated by allowing different
pixels in the neighbourhood different weights in the averaging function
• Pixels closer to the central pixel are more important
• Often referred to as a weighted averaging filter

Weighted averaging filter:
  1/16  2/16  1/16
  2/16  4/16  2/16
  1/16  2/16  1/16
Another Smoothing Example
By smoothing the original image we get rid of lots of
the finer detail which leaves only the gross features for
thresholding
• The median filter preserves edges better than an averaging filter in the case of impulse noise.
Strange Things Happen At The Edges!
At the edges of an image we are missing pixels to form a
neighbourhood
[Diagram: near the image border, part of the filter window falls outside the image f(x, y).]
Strange Things Happen At The Edges! (cont…)
Filtered Image:
Zero Padding
Filtered Image:
Wrap Around Edge Pixels
Correlation & Convolution
The filtering we have been talking about so far is referred to as correlation with
the filter itself referred to as the correlation kernel
Convolution is a similar operation, with just one subtle difference
[Diagram: the same 3×3 neighbourhood
  a b c
  d e f
  g h i
and 3×3 filter
  r s t
  u v w
  x y z
but with the filter rotated by 180° before it is applied:]
e_processed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i
Therefore, convolution is simply correlation with the filter rotated by 180°; for a symmetric filter the two operations give identical results.
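The difference can be verified numerically (a sketch; `correlate3x3` and `convolve3x3` are hypothetical helpers): for a symmetric kernel the two operations agree, for an asymmetric one they do not:

```python
# Correlation vs. convolution: convolution flips the kernel by 180 degrees.
import numpy as np

def correlate3x3(img, k):
    padded = np.pad(img, 1)  # zero padding
    H, W = img.shape
    return np.array([[(padded[i:i+3, j:j+3] * k).sum() for j in range(W)]
                     for i in range(H)])

def convolve3x3(img, k):
    return correlate3x3(img, k[::-1, ::-1])  # flip rows and columns

img = np.arange(9.0).reshape(3, 3)
sym = np.full((3, 3), 1.0 / 9.0)        # symmetric averaging kernel
asym = np.array([[-1., 0., 1.]] * 3)    # asymmetric kernel
```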
Sharpening Spatial Filters
Previously we have looked at smoothing filters which remove fine detail
Sharpening spatial filters seek to highlight fine detail
• Remove blurring from images
• Highlight edges
Sharpening filters are based on spatial differentiation
Spatial Differentiation
Differentiation measures the rate of change of a function
Let’s consider a simple 1 dimensional example
Spatial Differentiation
[Figure: a 1-D intensity profile; points A and B mark a ramp and a step transition.]
Derivative Filters Requirements
First derivative filter output
• Zero at constant intensities
• Non zero at the onset of a step or ramp
• Non zero along ramps
It’s just the difference between subsequent values and measures the
rate of change of the function
1st Derivative (cont.)
• The gradient of an image:
f(x):   5  5  4  3  2  1  0  0  0  6  0  0  0  0  1  3  1  0  0  0  0  7  7  7  7
f'(x):     0 -1 -1 -1 -1 -1  0  0  6 -6  0  0  0  1  2 -2 -1  0  0  0  7  0  0  0
2nd Derivative
The formula for the 2nd derivative of a function is as follows:
∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
It simply takes into account the values both before and after the current value.
2nd Derivative (cont…)
f(x):    5  5  4  3  2  1  0  0  0  6  0  0  0  0  1  3  1  0  0  0  0  7  7  7  7
f''(x):    -1  0  0  0  0  1  0  6 -12  6  0  0  1  1 -4  1  1  0  0  7 -7  0  0
Using Second Derivatives For Image Enhancement
Edges in images are often ramp-like transitions
• 1st derivative is constant and produces thick edges
• 2nd derivative zero crosses the edge (double response at the onset and end with
opposite signs)
The Laplacian combines the second-order partial derivatives in both directions, ∇²f = ∂²f/∂x² + ∂²f/∂y², where the partial 2nd-order derivative in the x direction is ∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y). This gives the filter:
  0  1  0
  1 -4  1
  0  1  0
The Laplacian (cont…)
Applying the Laplacian to an image we get a new image that highlights
edges and other discontinuities
[Figure: Original Image − Laplacian-Filtered Image = Sharpened Image.]
This gives us a new filter which does the whole job for us in one step
0 -1 0
-1 5 -1
0 -1 0
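That the composite filter does "the whole job in one step" can be checked numerically (a sketch; `filter3x3` is a hypothetical correlation helper with zero padding):

```python
# f(x,y) - Laplacian(f) in two steps vs. the single composite filter.
import numpy as np

def filter3x3(img, k):
    padded = np.pad(img, 1)
    H, W = img.shape
    return np.array([[(padded[i:i+3, j:j+3] * k).sum() for j in range(W)]
                     for i in range(H)])

lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
composite = np.array([[0., -1., 0.], [-1., 5., -1.], [0., -1., 0.]])

rng = np.random.default_rng(0)
img = rng.random((5, 5))
two_step = img - filter3x3(img, lap)   # subtract Laplacian from original
one_step = filter3x3(img, composite)   # single composite filter
```

Both routes give the same sharpened image, since filtering is linear and the composite kernel equals the identity kernel minus the Laplacian kernel.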
Simplified Image Enhancement (cont…)
Variants On The Simple Laplacian
There are lots of slightly different versions of the Laplacian that can be
used:
Simple Laplacian:      Variant of Laplacian:
  0  1  0                1  1  1
  1 -4  1                1 -8  1
  0  1  0                1  1  1

Composite sharpening filter built from the 8-neighbour Laplacian:
 -1 -1 -1
 -1  9 -1
 -1 -1 -1
Unsharp masking
Used by the printing industry
Subtracts an unsharp (smoothed) version of an image from the original image f(x,y).
• Subtract the blurred image b(x,y) from the original (the result is called the mask):
gmask(x,y) = f(x,y) − b(x,y)
[Figure: blurred image and the resulting mask.]
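The masking step can be sketched as follows (a minimal illustration; `box_blur` and `unsharp` are hypothetical helpers, with k = 1 giving plain unsharp masking and k > 1 highboost filtering):

```python
# Unsharp masking: blur, subtract to form the mask, add the weighted mask back.
import numpy as np

def box_blur(img):
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    return np.array([[padded[i:i+3, j:j+3].mean() for j in range(W)]
                     for i in range(H)])

def unsharp(img, k=1.0):
    mask = img - box_blur(img)   # g_mask(x,y) = f(x,y) - b(x,y)
    return img + k * mask        # g(x,y) = f(x,y) + k * g_mask(x,y)

img = np.array([[0., 0., 0., 10., 10., 10.]] * 6)  # a vertical step edge
sharp = unsharp(img)
```

The output undershoots just before the edge and overshoots just after it, which is exactly what makes the edge look sharper.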
Consider a 3×3 neighbourhood with pixels labelled z1 to z9:
  z1  z2  z3
  z4  z5  z6
  z7  z8  z9
Sobel Operators
Based on the previous equations we can derive the Sobel Operators
  -1  -2  -1        -1   0   1
   0   0   0        -2   0   2
   1   2   1        -1   0   1
To filter an image, it is filtered using both operators and the results are then added together.
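The two-operator procedure can be sketched as follows (a minimal illustration; `filter3x3` is a hypothetical correlation helper with replicate padding, and |gx| + |gy| is the common cheap approximation to the gradient magnitude):

```python
# Sobel gradient: filter with both kernels, combine the two responses.
import numpy as np

def filter3x3(img, k):
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    return np.array([[(padded[i:i+3, j:j+3] * k).sum() for j in range(W)]
                     for i in range(H)])

sobel_y = np.array([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])

img = np.array([[0., 0., 10., 10.]] * 4)   # vertical edge
grad = np.abs(filter3x3(img, sobel_x)) + np.abs(filter3x3(img, sobel_y))
```

The response is strong only along the vertical edge and zero in the flat regions.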
Sobel Example
An image of a
contact lens which
is enhanced in
order to make
defects (at four
and five o’clock in
the image) more
obvious
• 1st order derivatives generally produce thicker edges (if thresholded at ramp edges)
• 2nd order derivatives have a stronger response to fine detail e.g. thin lines
• 2nd order derivatives produce a double response at step changes in grey level (which
helps in detecting zero crossings)
Combining Spatial Enhancement Methods
Successful image enhancement is
typically not achieved using a
single operation
Rather we combine a range of
techniques in order to achieve a
final result
This example will focus on
enhancing the bone scan to the
right
Combining Spatial Enhancement Methods (cont…)
[Figure panels: (a) bone scan; (b) Laplacian filter of (a); (c) sharpened version of the bone scan, obtained by subtracting (a) and (b); (d) Sobel filter of (a).]
Combining Spatial Enhancement Methods (cont…)
[Figure panels, continued: (e); (f) the product of (c) and (e), which will be used as a mask; (g) sharpened image, the sum of (a) and (f); (h) result of applying a power-law transformation to (g).]
2D Translation (cont'd)
• To translate an object, translate every point of the object by the same amount: x′ = x + tx, y′ = y + ty.
2D Scaling
• Changes the size of the object by multiplying the coordinates of its points by scaling factors: x′ = sx · x, y′ = sy · y.
2D Scaling (cont’d)
• Uniform vs non-uniform scaling
[Figure: rotation of a point about the origin; the rotated coordinates follow from the ACP′ triangle.]
2D Rotation (cont’d)
• From the above equations we have: x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ.
Summary of 2D transformations
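For reference, the three transformations in matrix form (standard conventions; θ measured counter-clockwise):

```latex
\text{Translation:}\;
\begin{bmatrix} x' \\ y' \end{bmatrix}
= \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}
\qquad
\text{Scaling:}\;
\begin{bmatrix} x' \\ y' \end{bmatrix}
= \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}
  \begin{bmatrix} x \\ y \end{bmatrix}
\qquad
\text{Rotation:}\;
\begin{bmatrix} x' \\ y' \end{bmatrix}
= \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
  \begin{bmatrix} x \\ y \end{bmatrix}
```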