
Image Enhancement & Image Filtering
Processing Images in Spatial Domain: Introduction

A spatial operator is defined on a neighbourhood N of a given pixel. When N is a single pixel the operation is called point processing; when N is a larger window the operation is called mask processing.

Mask (Filter, Kernel, Window, Template) Processing

[Figure: a mask sliding across the image plane, with the origin (0,0) at the top-left corner.]
Kernel Operator: Intro

Note: the borders of the image need special handling, since there the kernel extends beyond the image.
Histogram Equalization

[Figure: a source image and the equalized image, with their corresponding histograms.]
Perform histogram equalization on the input image below. The highest grey value is 5; representing it requires 3 bits, which give 2^3 = 8 levels in the range [0-7].

Input image:

4 4 4 4 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 4 4 4 4

[Figure: input image histogram, with 6 pixels at grey level 3, 14 at level 4 and 5 at level 5.]

Greylevel | Number of pixels (nk) | PDF = nk/Sum | CDF = Sk | Sk * 7 | Equalized level
0         | 0                     | 0            | 0        | 0      | 0
1         | 0                     | 0            | 0        | 0      | 0
2         | 0                     | 0            | 0        | 0      | 0
3         | 6                     | 6/25 = 0.24  | 0.24     | 1.68   | 2
4         | 14                    | 0.56         | 0.80     | 5.60   | 6
5         | 5                     | 0.20         | 1.00     | 7.00   | 7
6         | 0                     | 0            | 1.00     | 7      | 7
7         | 0                     | 0            | 1.00     | 7      | 7
Sum = 25

Replace 3 with 2, 4 with 6, and 5 with 7 in the input image (levels 6 and 7 do not occur):

Input image     Output image
4 4 4 4 4       6 6 6 6 6
3 4 5 4 3       2 6 7 6 2
3 5 5 5 3       2 7 7 7 2
3 4 5 4 3       2 6 7 6 2
4 4 4 4 4       6 6 6 6 6

[Figure: output image histogram, with 6 pixels at grey level 2, 14 at level 6 and 5 at level 7.]
Numerical Example

• Consider a 3-bit image of size 4×5 as shown below:

0 1 1 3 4
7 2 5 5 7
6 3 2 1 1
1 4 4 2 1
Step 1:
• Find the range of intensity values.
• [0, 1, 2, 3, 4, 5, 6, 7]
Step 2:
• Find the frequency of each intensity value.
• [1, 6, 3, 2, 3, 2, 1, 2]
Step 3: Calculate the probability density function for each frequency.
• total = 20 = 4*5
• Calculate PDF = frequency of each intensity / total sum of all frequencies, for each intensity value i.
• For intensity 0, frequency is 1: PDF = 1/20 = 0.05
• For intensity 1, frequency is 6: PDF = 6/20 = 0.3
Step 4
• Calculate CDF = cumulative distribution for each intensity value = sum of all PDF values for intensities <= i.
Step 5
• Multiply the CDF by 7.
• Why 7? Because we have a 3-bit image, so the range of intensity values is 0 to 7. The multiplication factor is the maximum intensity value.
Step 6
• Round off the final value of intensity.
Solution

Range | Frequency | PDF    | CDF    | 7*CDF  | Round-off
0     | 1         | 0.0500 | 0.0500 | 0.3500 | 0
1     | 6         | 0.3000 | 0.3500 | 2.4500 | 2
2     | 3         | 0.1500 | 0.5000 | 3.5000 | 4
3     | 2         | 0.1000 | 0.6000 | 4.2000 | 4
4     | 3         | 0.1500 | 0.7500 | 5.2500 | 5
5     | 2         | 0.1000 | 0.8500 | 5.9500 | 6
6     | 1         | 0.0500 | 0.9000 | 6.3000 | 6
7     | 2         | 0.1000 | 1.0000 | 7.0000 | 7


Output

Old image     New image
0 1 1 3 4     0 2 2 4 5
7 2 5 5 7     7 4 6 6 7
6 3 2 1 1     6 4 4 2 2
1 4 4 2 1     2 5 5 4 2
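The same mapping can be computed programmatically. Below is a minimal NumPy sketch of the procedure; the function name `histogram_equalize` is illustrative, and `np.round` reproduces the round-off column above.

```python
import numpy as np

def histogram_equalize(img, levels=8):
    """Histogram-equalize an integer image with grey levels 0..levels-1."""
    # Frequency of each grey level (Step 2)
    freq = np.bincount(img.ravel(), minlength=levels)
    # PDF and CDF (Steps 3-4)
    pdf = freq / img.size
    cdf = np.cumsum(pdf)
    # Scale by the maximum grey level and round (Steps 5-6)
    mapping = np.round(cdf * (levels - 1)).astype(img.dtype)
    return mapping[img]

img = np.array([[0, 1, 1, 3, 4],
                [7, 2, 5, 5, 7],
                [6, 3, 2, 1, 1],
                [1, 4, 4, 2, 1]])
print(histogram_equalize(img))  # reproduces the "New image" above
```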


Point Processing: Thresholding

[Figure: thresholding transfer function mapping input gray-level values to output gray-level values.]

Point Processing: Gamma Correction

Point Processing: Contrast Stretching

[Figure: contrast-stretching transfer function mapping the input range [0, L-1] to the output range [0, L-1].]
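The transfer functions themselves are not reproduced above, so the following NumPy sketch shows common textbook forms of the three point operations; the threshold value, the gamma value, and the min-max stretch are illustrative choices, not taken from the slides.

```python
import numpy as np

L = 256  # number of grey levels

def threshold(img, t=128):
    # Map pixels below t to 0 and the rest to L-1
    return np.where(img < t, 0, L - 1).astype(np.uint8)

def gamma_correct(img, gamma=0.5):
    # s = c * r^gamma with intensities normalised to [0, 1] and c = L-1
    r = img / (L - 1)
    return ((L - 1) * r ** gamma).astype(np.uint8)

def contrast_stretch(img):
    # Linearly stretch the occupied range [min, max] to [0, L-1]
    lo, hi = img.min(), img.max()  # assumes hi > lo
    return ((img - lo) * (L - 1) / (hi - lo)).astype(np.uint8)
```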
EM ALGORITHM
• In real-world applications of machine learning, it is very common that many relevant features are available for learning, but only a small subset of them is observable.

• The Expectation-Maximization algorithm can be used to handle latent variables (variables that are not directly observable and are instead inferred from the values of the other, observed variables).

• This algorithm is the basis for many unsupervised clustering algorithms in the field of machine learning.

• Let us understand the EM algorithm in detail.

• Initially, a set of initial parameter values is considered. A set of incomplete observed data is given to the system, with the assumption that the observed data comes from a specific model.

• The next step is the "Expectation" step, or E-step. In this step, we use the observed data to estimate (guess) the values of the missing or incomplete data; it is used to update the variables.

• The next step is the "Maximization" step, or M-step. In this step, we use the complete data generated in the preceding E-step to update the values of the parameters; it is used to update the hypothesis.

• In the fourth step, we check whether the values are converging; if so, we stop, otherwise we repeat the E-step and M-step until convergence occurs.
• Algorithm:
1. Given a set of incomplete data, consider a set of starting parameters.
2. Expectation step (E-step): using the observed available data of the dataset, estimate (guess) the values of the missing data.
3. Maximization step (M-step): the complete data generated after the E-step is used to update the parameters.
4. Repeat steps 2 and 3 until convergence (a code sketch follows).
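As a concrete illustration, here is a minimal NumPy sketch of EM for a 1-D mixture of K Gaussians; the function name, variable names, and the synthetic data are illustrative, not from the slides.

```python
import numpy as np

def em_gmm_1d(x, K=2, iters=50):
    # Step 1: starting parameters (rough guesses)
    mu = np.random.choice(x, K)
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step (step 2): responsibilities p(C_j | x_i) under current parameters
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step (step 3): update parameters from the completed data
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1, 200)])
print(em_gmm_1d(x))  # means near 0 and 5, variances near 1, weights near 0.5
```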


Color Clustering by the K-means Algorithm

Form K clusters from a set of n-dimensional vectors (see the sketch below):

1. Set ic (iteration count) to 1.

2. Choose randomly a set of K means m1(1), …, mK(1).

3. For each vector xi, compute the distance D(xi, mk(ic)) for k = 1, …, K and assign xi to the cluster Cj with the nearest mean.

4. Increment ic by 1 and update the means to get m1(ic), …, mK(ic).

5. Repeat steps 3 and 4 until the assignments no longer change, i.e. Ck(ic) = Ck(ic+1) for all k.
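A minimal NumPy sketch of this loop for color vectors follows; the helper name `kmeans_colors`, the fixed random seed, and the convergence test on labels are illustrative choices.

```python
import numpy as np

def kmeans_colors(x, K, max_iters=100):
    """Cluster n-dimensional color vectors x (shape [n_pixels, n_channels])."""
    rng = np.random.default_rng(0)
    # Step 2: choose K initial means at random from the data
    means = x[rng.choice(len(x), K, replace=False)].astype(float)
    labels = np.full(len(x), -1)
    for it in range(max_iters):
        # Step 3: assign each vector to the cluster with the nearest mean
        dists = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 5: stop when assignments no longer change
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 4: update each mean from its assigned vectors
        for k in range(K):
            if np.any(labels == k):
                means[k] = x[labels == k].mean(axis=0)
    return labels, means
```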


K-Means Classifier

[Diagram: color vectors x1={r1, g1, b1}, x2={r2, g2, b2}, …, xi={ri, gi, bi} enter the K-means classifier, which produces the classification results x1→C(x1), x2→C(x2), …, xi→C(xi) together with the cluster parameters m1 for C1, m2 for C2, …, mK for CK.]
K-Means Classifier (Cont.)

[Diagram: with the cluster parameters m1, …, mK known, the known inputs x1, x2, …, xi are mapped to the unknown classification results C(x1), C(x2), …, C(xi); conversely, with the classifications known, the cluster parameters can be re-estimated.]

[Diagram: the iteration alternates between the two. An initial guess of the cluster parameters m1, …, mK yields classification results (1); these yield cluster parameters (1), which yield classification results (2), and so on up to iteration ic.]



K-Means (Cont.)
• Boot Step:
  • Initialize K clusters: C1, …, CK. Each cluster Cj is represented by its mean mj.
• Iteration Step:
  • Estimate the cluster of each data point: xi → C(xi).
  • Re-estimate the cluster parameters (the means).
K-Means Example

[Figure: example clusterings produced by K-means.]
K-Means → EM
• Boot Step:
  • Initialize K clusters: C1, …, CK, with parameters (μj, Σj) and P(Cj) for each cluster j.
• Iteration Step:
  • Estimate the cluster membership of each data point (Expectation).
  • Re-estimate the parameters of each cluster j (Maximization).
EM Classifier

[Diagram: the inputs x1={r1, g1, b1}, x2={r2, g2, b2}, …, xi={ri, gi, bi} enter the EM classifier, which produces the soft classification results p(Cj|x1), p(Cj|x2), …, p(Cj|xi) together with the cluster parameters (μ1, Σ1), p(C1) for C1; (μ2, Σ2), p(C2) for C2; …; (μK, ΣK), p(CK) for CK.]

EM Classifier (Cont.)

[Diagram: with the cluster parameters known, the known inputs xi are mapped to the unknown outputs p(Cj|xi).]

Expectation Step

[Diagram: the known inputs xi plus the current estimate of the cluster parameters (μj, Σj), p(Cj) produce the classification results p(Cj|xi).]

Maximization Step

[Diagram: the known inputs xi plus the estimated classification results p(Cj|xi) produce updated cluster parameters (μj, Σj), p(Cj).]

EM Algorithm
• Boot Step:
  • Initialize K clusters: C1, …, CK, with (μj, Σj) and P(Cj) for each cluster j.
• Iteration Step:
  • Expectation Step
  • Maximization Step
• Usage of the EM algorithm:

  • It can be used to fill in missing data in a sample.
  • It can be used as the basis of unsupervised learning of clusters.
  • It can be used to estimate the parameters of a Hidden Markov Model (HMM).
  • It can be used to discover the values of latent variables.

• Advantages of the EM algorithm:

  • The likelihood is guaranteed to increase with each iteration.
  • The E-step and M-step are often easy to implement for many problems.
  • Solutions to the M-step often exist in closed form.

• Disadvantages of the EM algorithm:

  • It has slow convergence.
  • It converges only to a local optimum.
Image Enhancement

• Image enhancement techniques:
  • Spatial domain methods
  • Frequency domain methods

• Spatial (time) domain techniques operate directly on pixels.

• Frequency domain techniques are based on modifying the Fourier transform of an image.
Fourier Transform: a review
• Basic ideas:
  • A periodic function can be represented by a sum of sine/cosine functions of different frequencies, each multiplied by a different coefficient.
  • Non-periodic functions can also be represented, as an integral of sines/cosines multiplied by a weighting function.

Joseph Fourier (1768-1830)

Fourier was obsessed with the physics of heat and developed the Fourier transform theory to model heat-flow problems.
Fourier Transforms in Action
• Fourier transforms are used in MRI imaging to reconstruct images from frequency data. This allows creating detailed
anatomical pictures non-invasively.

• Engineers analyze vibration signals using Fourier transforms to detect faults in rotating machinery. The frequency
spectrum reveals problematic vibration patterns.

• Audio compression techniques like MP3 rely on Fourier transforms to eliminate inaudible frequencies. This reduces file
sizes with minimal impact on quality.

• Radio engineers use Fourier transforms to modulate and demodulate signals for radio transmission. The data is
encoded onto carrier waves as frequency variations.

• Image processing applications use Fourier transforms for noise removal, sharpening, and detecting patterns. Spatial
frequencies reveal information for enhancing images.

• Scientists study the Fourier transform of star spectra to determine celestial objects' chemical composition, motion,
and temperature.
Fourier Transform Basis Functions

[Figure: the Fourier basis functions, and the approximation of a square wave as a sum of sine waves.]
Any function can be written as the sum of an even and an odd function, where E(-x) = E(x) and O(-x) = -O(x):

f(x) = E(x) + O(x), with E(x) = [f(x) + f(-x)]/2 and O(x) = [f(x) - f(-x)]/2
Fourier Series

So if f(t) is a general function, neither even nor odd, it can be written as the sum of an even (cosine) component and an odd (sine) component.
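The series formula itself is not reproduced in the extracted text; assuming a period T, its standard form is:

```latex
f(t) = \underbrace{\sum_{n=0}^{\infty} a_n \cos\!\left(\frac{2\pi n t}{T}\right)}_{\text{even component}}
     + \underbrace{\sum_{n=1}^{\infty} b_n \sin\!\left(\frac{2\pi n t}{T}\right)}_{\text{odd component}}
```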
The Fourier Transform
Let F(u) incorporate both the cosine and sine series coefficients, with the sine series distinguished by making it the imaginary component. Letting f(t) now range over (-∞, ∞), we rewrite the series as an integral.

F(u) is called the Fourier Transform of f(t). We say that f(t) lives in the "time domain," and F(u) lives in the "frequency domain." u is called the frequency variable.
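The transform referred to above, in the usual convention, is:

```latex
F(u) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi u t}\, dt
```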
The Inverse Fourier Transform

We go from f(t) to F(u) by the Fourier transform; given F(u), f(t) can be obtained by the inverse Fourier transform.
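The inverse transform referred to above is, in the same convention:

```latex
f(t) = \int_{-\infty}^{\infty} F(u)\, e^{j 2\pi u t}\, du
```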
2-D Fourier Transform

The Fourier transform for a function f(x,y) of two variables, and its inverse, take the same form with a double integral.
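The 2-D pair referred to above takes the standard form:

```latex
F(u,v) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(x,y)\, e^{-j 2\pi (ux + vy)}\, dx\, dy
\qquad
f(x,y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F(u,v)\, e^{j 2\pi (ux + vy)}\, du\, dv
```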
Discrete Fourier Transform (DFT)
Let x denote the discrete sample indices (x = 0, 1, 2, …, M-1), i.e. f(x) denotes the M uniformly spaced samples of the continuous function; the transform then becomes a finite sum.
Discrete Fourier Transform (DFT)
• The discrete Fourier transform pair that applies to sampled functions is given below, for u = 0, 1, 2, …, M-1 and x = 0, 1, 2, …, M-1.
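The pair is not reproduced in the extracted text; in the convention that places the 1/M factor on the forward transform (as in Gonzalez & Woods), it is:

```latex
F(u) = \frac{1}{M} \sum_{x=0}^{M-1} f(x)\, e^{-j 2\pi u x / M}
\qquad
f(x) = \sum_{u=0}^{M-1} F(u)\, e^{j 2\pi u x / M}
```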
2-D Discrete Fourier Transform
• In the 2-D case, the DFT pair is given below, for u = 0, 1, 2, …, M-1, v = 0, 1, 2, …, N-1 and x = 0, 1, 2, …, M-1, y = 0, 1, 2, …, N-1.
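In the same convention, the 2-D pair is:

```latex
F(u,v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi (ux/M + vy/N)}
\qquad
f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v)\, e^{j 2\pi (ux/M + vy/N)}
```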
Fourier Transform: Shift
• It is common to multiply the input image by (-1)^(x+y) prior to computing the FT. This shifts the center of the FT to (M/2, N/2).

[Figure: the spectrum before and after the shift.]
Cont.
• When you compute the Fourier transform of an image, the DC (direct current) component, corresponding to zero frequency, is located at the top-left corner of the resulting frequency domain representation.

• This may not be intuitive or convenient for certain applications. Shifting the DC component to the center of the frequency domain allows easier visualization and analysis.

• Multiplying the input image by (-1)^(x+y) before computing the Fourier transform shifts the center of the frequency domain representation accordingly, making the frequency content of the image easier to interpret and analyze.
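In practice the same recentering is usually done with an FFT shift after the transform. A small NumPy sketch comparing the two (the random test image and its even 256×256 size are illustrative; the equivalence holds exactly only for even dimensions):

```python
import numpy as np

img = np.random.rand(256, 256)  # placeholder image

# Method 1: multiply by (-1)^(x+y) before the transform
x, y = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
F1 = np.fft.fft2(img * (-1.0) ** (x + y))

# Method 2: transform first, then recenter with fftshift
F2 = np.fft.fftshift(np.fft.fft2(img))

print(np.allclose(F1, F2))  # True: both put the DC term at (M/2, N/2)
```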
Symmetry of FT
• For a real image f(x,y), the FT is conjugate symmetric: F(u,v) = F*(-u,-v).

• Consequently, the magnitude of the FT is symmetric: |F(u,v)| = |F(-u,-v)|.
[Figures: an image and its FT, and reconstructions of the image by the inverse FT.]
The central part of the FT, i.e. the low-frequency components, is responsible for the general gray-level appearance of an image.

The high-frequency components of the FT are responsible for the detail information of an image.
Image Frequency Domain (log magnitude)

[Figure: log-magnitude spectrum; frequencies near the center account for the general appearance of the image, frequencies far from the center for the detail.]

[Figure: reconstructions that keep only 5%, 10%, 20% and 50% of the FT coefficients.]
Frequency Domain Filtering
• Edges and sharp transitions (e.g., noise) in an image contribute significantly to the high-frequency content of its FT.

• Low-frequency content in the FT is responsible for the general appearance of the image over smooth areas.

• Blurring (smoothing) is achieved by attenuating a range of high-frequency components of the FT.
Convolution Theorem

• Filtering in the frequency domain with H(u,v) is equivalent to filtering in the spatial domain with h(x,y), the inverse Fourier transform of H(u,v).
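Stated symbolically, with * denoting spatial convolution:

```latex
g(x,y) = h(x,y) * f(x,y) \;\Longleftrightarrow\; G(u,v) = H(u,v)\, F(u,v)
```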
Ideal Low-Pass Filter (ILPF)

[Figure: ILPF transfer function, and filtered results showing ringing and blurring.]

Ideal in the frequency domain means non-ideal in the spatial domain, and vice versa.
Butterworth Lowpass Filters (BLPF)
• Smooth transfer function, no sharp discontinuity, no clear cutoff frequency.

[Figure: BLPF transfer functions for orders n = 1, 2, 5 and 20.]

[Figure: filtered results; no serious ringing artifacts.]
Gaussian Lowpass Filters (GLPF)
• Smooth transfer function, smooth impulse response, no ringing.

[Figure: filtered results; no ringing artifacts.]
Examples of Lowpass Filtering

[Figure: a low-pass filter H(u,v); the original image with its FT, and the filtered image with its FT.]
High-pass Filters

Ideal High-pass Filtering

[Figure: ideal high-pass results showing ringing artifacts.]

Butterworth High-pass Filtering

Gaussian High-pass Filtering

[Figure: the original image, a Gaussian high-pass filter H(u,v), and the filtered image with its FT.]
Subtract the Laplacian from the Original Image to Enhance It

In the spatial domain: g(x,y) = f(x,y) - ∇²f(x,y), where f(x,y) is the original image, ∇²f(x,y) is the Laplacian output, and g(x,y) is the enhanced image.
Contents
In this lecture we will look at spatial filtering techniques:
• Neighbourhood operations
• What is spatial filtering?
• Smoothing operations
• What happens at the edges?
• Correlation and convolution
• Sharpening filters
• Combining filtering techniques
Neighbourhood Operations
Neighbourhood operations simply operate on a larger neighbourhood of pixels than point operations.

Neighbourhoods are mostly a rectangle around a central pixel; any size of rectangle and any shape of filter are possible.

[Figure: image f(x,y) with the origin at the top left, and a rectangular neighbourhood around the pixel (x, y).]
Simple Neighbourhood Operations
Some simple neighbourhood operations include:
• Min: Set the pixel value to the minimum in the neighbourhood
• Max: Set the pixel value to the maximum in the neighbourhood
• Median: The median value of a set of numbers is the midpoint value in that
set (e.g. from the set [1, 7, 15, 18, 24] 15 is the median). Sometimes the
median works better than the average
Simple Neighbourhood Operations Example

Original image values:

123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151

[Figure: the original image above and the corresponding enhanced image.]
The Spatial Filtering Process

A 3*3 filter with weights

r s t
u v w
x y z

is placed over a simple 3*3 neighbourhood of image pixels

a b c
d e f
g h i

and the central pixel e is replaced by the weighted sum

eprocessed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i

The above is repeated for every pixel in the original image to generate the filtered image.
Smoothing Spatial Filters
One of the simplest spatial filtering operations we can perform is a smoothing operation:
• Simply average all of the pixels in a neighbourhood around a central value
• Especially useful in removing noise from images
• Also useful for highlighting gross detail

Simple averaging filter:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
Smoothing Spatial Filtering

Applying the simple 3*3 averaging filter to the 3*3 neighbourhood

104 100 108
 99 106  98
 95  90  85

replaces the central pixel with

e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85
  = 98.3333

The above is repeated for every pixel in the original image to generate the smoothed image (a code sketch follows).
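A minimal NumPy sketch of this sliding-window average; the replicate padding at the borders is one of the border-handling options discussed later, and the function name is illustrative.

```python
import numpy as np

def mean_filter3x3(img):
    """Smooth an image with a 3x3 averaging filter."""
    # Replicate border pixels so every pixel has a full neighbourhood
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            # Accumulate each of the nine shifted copies, weighted by 1/9
            out += padded[1 + dx : padded.shape[0] - 1 + dx,
                          1 + dy : padded.shape[1] - 1 + dy] / 9.0
    return out

nbhd = np.array([[104, 100, 108],
                 [ 99, 106,  98],
                 [ 95,  90,  85]])
print(mean_filter3x3(nbhd)[1, 1])  # 98.333... for the central pixel
```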
Image Smoothing Example
The image at the top left is an original image of size 500*500 pixels. The subsequent images show the image after filtering with an averaging filter of increasing sizes: 3, 5, 9, 15 and 35. Notice how detail begins to disappear.
Weighted Smoothing Filters
More effective smoothing filters can be generated by allowing different pixels in the neighbourhood different weights in the averaging function:
• Pixels closer to the central pixel are more important
• Often referred to as weighted averaging

Weighted averaging filter:

1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
Another Smoothing Example
By smoothing the original image we get rid of lots of the finer detail, which leaves only the gross features for thresholding.

[Figure: original image, smoothed image, thresholded image.]


Averaging Filter vs. Median Filter Example

[Figure: original image with noise, image after averaging filter, image after median filter.]

Filtering is often used to remove noise from images. Sometimes a median filter works better than an averaging filter.
Spatial smoothing and image approximation
Spatial smoothing may be viewed as a process of estimating the value of a pixel from its neighbours. What value "best" approximates the intensity of a given pixel, given the intensities of its neighbours? We have to define "best" by establishing a criterion.

Spatial smoothing and image approximation (cont...)
A standard criterion is the sum of squared differences; it is minimized by the average value of the neighbourhood.

Spatial smoothing and image approximation (cont...)
Another criterion is the sum of absolute differences; it is minimized by the median value of the neighbourhood.
Spatial smoothing and image approximation (cont...)
– The median filter is nonlinear.

– It works well for impulse noise (e.g. salt and pepper).

– It requires sorting of the image values.

– It preserves edges better than an average filter in the case of impulse noise.
Strange Things Happen At The Edges!
At the edges of an image we are missing pixels to form a full neighbourhood.

[Figure: positions at the corners and edges of image f(x,y) where the neighbourhood extends past the image border.]
Strange Things Happen At The Edges! (cont…)

There are a few approaches to dealing with missing edge pixels:

• Omit missing pixels
  • Only works with some filters
  • Can add extra code and slow down processing
• Pad the image
  • Typically with either all white or all black pixels
• Replicate border pixels
• Truncate the image
• Allow pixels to wrap around the image
  • Can cause some strange image artefacts
Strange Things Happen At The Edges! (cont…)

[Figure: the original image and filtered results using zero padding, replicated edge pixels, and wrap-around edge pixels.]
Correlation & Convolution
The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel. Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180° before being applied, so for the filter

r s t
u v w
x y z

placed over the neighbourhood

a b c
d e f
g h i

the output is

eprocessed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i

For symmetric filters it makes no difference.


Effect of Low Pass Filtering on White Noise
Let f be an observed instance of the image f0 corrupted by noise w:

f(n) = f0(n) + w(n)

with the noise samples having mean value E[w(n)] = 0 and being uncorrelated with respect to location.

Effect of Low Pass Filtering on White Noise (cont...)
Applying a low pass filter h (e.g. an average filter) by convolution to the degraded image gives g = h * f = h * f0 + h * w. The expected value of the output is

E[g(n)] = (h * f0)(n)

since E[(h * w)(n)] = 0: the noise is removed on average.

Effect of Low Pass Filtering on White Noise (cont...)
Considering that h is an average filter over N pixels, the remaining noise at pixel n has variance σ²/N: averaging N uncorrelated samples reduces the noise variance by a factor of N.
Sharpening Spatial Filters
Previously we have looked at smoothing filters which remove fine detail. Sharpening spatial filters seek to highlight fine detail:
• Remove blurring from images
• Highlight edges
Sharpening filters are based on spatial differentiation.

Spatial Differentiation
Differentiation measures the rate of change of a function. Let's consider a simple 1-dimensional example.

[Figure: a 1-D intensity profile with two marked points A and B.]
Derivative Filters Requirements
First derivative filter output:
• Zero at constant intensities
• Non-zero at the onset of a step or ramp
• Non-zero along ramps

Second derivative filter output:
• Zero at constant intensities
• Non-zero at the onset and end of a step or ramp
• Zero along ramps of constant slope
1st Derivative
The formula for the 1st derivative of a function is as follows:

∂f/∂x = f(x+1) - f(x)

It's just the difference between subsequent values and measures the rate of change of the function.
1st Derivative (cont.)
• The gradient of an image: ∇f = (∂f/∂x, ∂f/∂y).
• The gradient points in the direction of most rapid increase in intensity.
• The edge strength is given by the gradient magnitude.

[Figure: gradient directions for edges of different orientations. Source: Steve Seitz]
1st Derivative (cont…)

Input:          5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
1st derivative:   0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
2nd Derivative
The formula for the 2nd derivative of a function is as follows:

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

It simply takes into account the values both before and after the current value.

2nd Derivative (cont…)

Input:          5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
2nd derivative:   -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
Using Second Derivatives For Image Enhancement
Edges in images are often ramp-like transitions:
• The 1st derivative is constant along the ramp and produces thick edges
• The 2nd derivative zero-crosses the edge (a double response at the onset and end, with opposite signs)

A common sharpening filter is the Laplacian:
• Isotropic
• One of the simplest sharpening filters
The Laplacian
The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd-order derivative in the x direction is defined as

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

and in the y direction as

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

The Laplacian (cont…)
So the Laplacian can be given as

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

We can easily build a filter based on this:

0  1  0
1 -4  1
0  1  0
The Laplacian (cont…)
Applying the Laplacian to an image we get a new image that highlights edges and other discontinuities.

[Figure: original image, Laplacian-filtered image, and the Laplacian-filtered image scaled for display.]

But That Is Not Very Enhanced!
The result of Laplacian filtering is not an enhanced image. We have to do more work to get our final image: subtract the Laplacian result from the original image to generate our final sharpened, enhanced image.
Laplacian Image Enhancement

[Figure: original image - Laplacian-filtered image = sharpened image.]

In the final sharpened image, edges and fine detail are much more obvious.

Simplified Image Enhancement
The entire enhancement can be combined into a single filtering operation:

g(x,y) = f(x,y) - ∇²f(x,y)
       = 5f(x,y) - f(x+1,y) - f(x-1,y) - f(x,y+1) - f(x,y-1)
Simplified Image Enhancement (cont…)

This gives us a new filter which does the whole job for us in one step (a code sketch follows):

 0 -1  0
-1  5 -1
 0 -1  0
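A minimal NumPy sketch applying this combined kernel; the border replication and the clipping back to the 8-bit range are implementation choices, not from the slides.

```python
import numpy as np

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

def filter2d(img, kernel):
    """Correlate an image with a 3x3 kernel, replicating border pixels."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i : i + img.shape[0], j : j + img.shape[1]]
    return out

def sharpen(img):
    # One-step Laplacian sharpening: g = f - laplacian(f)
    return np.clip(filter2d(img, SHARPEN), 0, 255).astype(np.uint8)
```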
Variants On The Simple Laplacian
There are lots of slightly different versions of the Laplacian that can be used:

Simple Laplacian:     Variant of Laplacian:
0  1  0               1  1  1
1 -4  1               1 -8  1
0  1  0               1  1  1

The corresponding one-step sharpening filter for the variant:

-1 -1 -1
-1  9 -1
-1 -1 -1
Unsharp masking
Used by the printing industry. It subtracts an unsharp (smoothed) image from the original image f(x,y):

• Blur the image:
  b(x,y) = Blur{f(x,y)}

• Subtract the blurred image from the original (the result is called the mask):
  gmask(x,y) = f(x,y) - b(x,y)

• Add the mask to the original:
  g(x,y) = f(x,y) + k*gmask(x,y), with k non-negative (a code sketch follows)
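A minimal NumPy sketch of the three steps; the 3*3 averaging blur and the value of k are illustrative choices (the slides do not prescribe a particular blur).

```python
import numpy as np

def blur3x3(img):
    # Simple 3x3 averaging blur with replicated borders
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            out += padded[i : i + img.shape[0], j : j + img.shape[1]] / 9.0
    return out

def unsharp_mask(img, k=1.0):
    b = blur3x3(img)       # step 1: blur the image
    gmask = img - b        # step 2: the mask
    g = img + k * gmask    # step 3: add back (k > 1 gives highboost filtering)
    return np.clip(g, 0, 255).astype(np.uint8)
```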
Unsharp masking (cont...)

[Figure: the sharpening mechanism. When k > 1 the process is referred to as highboost filtering.]

Unsharp masking (cont...)

[Figure: original image, blurred image, mask, unsharp masking result, and highboost filtering result (k = 4.5).]


1st Derivative Filtering
Implementing 1st derivative filters is difficult in practice. For a function f(x, y), the gradient of f at coordinates (x, y) is given as the column vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

1st Derivative Filtering (cont…)
The magnitude of this vector is given by:

mag(∇f) = sqrt(Gx² + Gy²)

For practical reasons this can be simplified as:

mag(∇f) ≈ |Gx| + |Gy|

1st Derivative Filtering (cont…)
There is some debate as to how best to calculate these gradients, but we will use an approach based on these coordinates:

z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel Operators
Based on the previous equations we can derive the Sobel operators, one for each derivative direction:

-1 -2 -1      -1  0  1
 0  0  0      -2  0  2
 1  2  1      -1  0  1

To filter an image, it is filtered using both operators and the results are added together (see the sketch below).
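A minimal NumPy sketch of Sobel filtering following this recipe; combining the two responses with |G1| + |G2| uses the magnitude approximation shown earlier, and the border replication is an implementation choice.

```python
import numpy as np

SOBEL_1 = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])
SOBEL_2 = SOBEL_1.T  # the second kernel is the transpose of the first

def filter2d(img, kernel):
    """Correlate an image with a 3x3 kernel, replicating border pixels."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i : i + img.shape[0], j : j + img.shape[1]]
    return out

def sobel_magnitude(img):
    # Edge strength: |grad f| ~ |G1| + |G2|
    return np.abs(filter2d(img, SOBEL_1)) + np.abs(filter2d(img, SOBEL_2))
```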
Sobel Example

[Figure: an image of a contact lens, enhanced in order to make defects (at four and five o'clock in the image) more obvious.]

Sobel filters are typically used for edge detection.


1st & 2nd Derivatives
Comparing the 1st and 2nd derivatives we can conclude the following:

• 1st order derivatives generally produce thicker edges (if thresholded at ramp edges)

• 2nd order derivatives have a stronger response to fine detail, e.g. thin lines

• 1st order derivatives have a stronger response to a grey-level step

• 2nd order derivatives produce a double response at step changes in grey level (which helps in detecting zero crossings)
Combining Spatial Enhancement Methods
Successful image enhancement is typically not achieved using a single operation. Rather, we combine a range of techniques in order to achieve a final result. This example will focus on enhancing the bone scan to the right.

Combining Spatial Enhancement Methods (cont…)

[Figure panels: (a) bone scan; (b) Laplacian filter of bone scan (a); (c) sharpened version of the bone scan, obtained by subtracting the Laplacian (b) from (a); (d) Sobel filter of bone scan (a).]

Combining Spatial Enhancement Methods (cont…)

[Figure panels: (e) image (d) smoothed with a 5*5 averaging filter; (f) the product of (c) and (e), which will be used as a mask; (g) sharpened image, the sum of (a) and (f); (h) result of applying a power-law transformation to (g).]

Combining Spatial Enhancement Methods (cont…)
Compare the original and final images.
Summary
In this lecture we have looked at the idea of spatial filtering and in
particular:
• Neighbourhood operations
• The filtering process
• Smoothing filters
• Dealing with problems at image edges when using filtering
• Correlation and convolution
• Sharpening filters
• Combining filtering techniques
2D Translation
• Moves a point to a new location by adding translation amounts to the coordinates of the point:
  x' = x + tx,  y' = y + ty

2D Translation (cont'd)
• To translate an object, translate every point of the object by the same amount.

2D Scaling
• Changes the size of the object by multiplying the coordinates of the points by scaling factors:
  x' = sx * x,  y' = sy * y

2D Scaling (cont'd)
• Uniform (sx = sy) vs. non-uniform scaling.
• Effect of scale factors: factors greater than 1 enlarge the object; factors between 0 and 1 shrink it.

2D Rotation
• Rotates points by an angle θ about the origin (θ > 0: counterclockwise rotation).
• The relations follow from the triangles ABP and ACP', relating the point P and its rotated image P' to the origin.

2D Rotation (cont'd)
• From the above equations we have:
  x' = x cos θ - y sin θ,  y' = x sin θ + y cos θ
Summary of 2D transformations

• Use homogeneous coordinates to express translation as matrix multiplication (see the matrices below).
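The transformation matrices on these slides are not reproduced in the extracted text; in homogeneous coordinates the three standard transforms (translation, scaling, rotation) are:

```latex
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\qquad
\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
```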
