On: Digital Image Processing
UNIT-I
What Is Digital Image Processing?
• The field of digital image processing refers to
processing digital images by means of a digital
computer.
What is a Digital Image?
• An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
Picture elements, Image elements, pels, and
pixels
• A digital image is composed of a finite number of
elements, each of which has a particular location and
value.
• These elements are referred to as picture elements,
image elements, pels, and pixels.
The Origins of Digital Image
Processing
• One of the first applications of digital images was in
the newspaper industry, when pictures were first sent
by submarine cable between London and New York.
• The figure was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.
• From 1921, the printing technique was based on photographic reproduction made from tapes perforated at the telegraph receiving terminal.
• The early Bartlane systems were capable of coding
images in five distinct levels of gray.
• This capability was increased to 15 levels in 1929.
• The figure shows the first image of the moon taken by Ranger.
Applications of DIP
• The field of image processing has applications in
medicine and the space program.
Structure of the Human Eye
• The eye is nearly a sphere, with an average
diameter of approximately 20mm.
Cornea
• The cornea is a tough, transparent tissue that covers
the anterior surface of the eye.
Choroid
• The choroid lies directly below the sclera.
• At its anterior extreme, the choroid is divided into the
ciliary body and the iris diaphragm.
• The lens is made up of concentric layers of fibrous
cells and is suspended by fibers that attach to the
ciliary body.
Retina
• The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion.
• There are two classes of receptors: cones and rods.
• Muscles controlling the eye rotate the eyeball until
the image of an object of interest falls on the fovea.
• Figure shows the density of rods and cones for a
cross section of the right eye passing through the
region of emergence of the optic nerve from the
eye.
• The absence of receptors in this area results in the
so-called blind spot.
Image Formation in the Eye
• The principal difference between the lens of the eye
and an ordinary optical lens is that the former is
flexible.
• For example, suppose the observer is looking at a tree 15 m high at a distance of 100 m. Taking the distance between the lens and the retina as about 17 mm, the height h of the retinal image satisfies 15/100 = h/17, so h ≈ 2.55 mm.
Light and the Electromagnetic Spectrum
The electromagnetic spectrum
• The electromagnetic spectrum can be expressed in
terms of wavelength, frequency, or energy.
• The function f(x, y) may be characterized by two components: the amount of source illumination incident on the scene, i(x, y), and the amount of illumination reflected by the objects in the scene, r(x, y).
• The two functions combine as a product to form f(x, y): f(x, y) = i(x, y) r(x, y).
• The intensity of a monochrome image f at any coordinates (x, y) is called the gray level (l) of the image at that point.
GRAY SCALE
• The interval [Lmin, Lmax] is called the gray scale.
• A pixel p at coordinates (x, y) has four horizontal and vertical neighbors with coordinates (x+1, y), (x-1, y), (x, y+1), (x, y-1). This set of pixels, called the 4-neighbors of p, is denoted by N4(p).
• Each pixel is a unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image.
ND(p) and N8(p)
• The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
• Some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.
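As a small illustration (not from the original slides), the neighbor sets can be written as plain coordinate functions; a minimal Python sketch, with bounds checking left to the caller:

```python
def n4(p):
    """4-neighbors N4(p) of p = (x, y)."""
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(p):
    """Diagonal neighbors ND(p)."""
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(p):
    """8-neighbors N8(p) = N4(p) union ND(p)."""
    return n4(p) + nd(p)
```

Coordinates falling outside the image must still be discarded by the caller when p is on the border, as noted above.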
Adjacency, Connectivity, Regions, and Boundaries
• To establish whether two pixels are connected, it must be determined if they are neighbors and if their gray levels satisfy a specified criterion of similarity (say, if their gray levels are equal).
• For example:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• Two pixels p and q are said to be connected in S if
there exists a path between them consisting
entirely of pixels in S.
Relations, equivalence
• A binary relation R on a set A is a set of pairs of elements from A. If the pair (a, b) is in R, the notation used is aRb (i.e., a is related to b).
• In this case R is the set of pairs of points from A that are 4-connected, that is, R = {(p1,p2), (p2,p1), (p1,p3), (p3,p1)}.
Reflexive - Symmetric - Transitive
• Reflexive: if for each a in A, aRa
• Symmetric: if for each a and b in A, aRb implies bRa
• Transitive: if for a, b and c in A, aRb and bRc implies aRc
DIGITAL IMAGE PROCESSING
UNIT 2: IMAGE ENHANCEMENT
Process an image so that the result will be more suitable than the original image for a specific application, for example:
• Highlighting interesting detail in images
• Removing noise from images
• Making images more visually appealing
So, a technique for enhancement of x-ray images may not be the best for enhancement of microscopic images.
These spatial domain processes are expressed by g(x, y) = T[f(x, y)], where T is an operator defined over a neighborhood of (x, y).
Spatial filters:
• Smoothing filters: low pass filters, median filters
• Sharpening filters: high boost filters, derivative filters
Mask/Filter
• The neighborhood of a point (x, y) can be defined by using a square/rectangular (commonly used) or circular subimage area centered at (x, y).
• The center of the subimage is moved from pixel to pixel, starting at the top left corner.
Spatial Processing:
• Intensity transformation: works on single pixels, for contrast manipulation and image thresholding.
Thresholding (piecewise linear transformation)
UNIT-II
Spatial domain: Image Enhancement
Three basic types of functions are used for image enhancement point processing:
• Linear: negative and identity transformations
• Logarithmic: log and inverse-log transformations
• Power law: nth power and nth root transformations
• Grey level slicing
• Bit plane slicing
We are dealing now with image processing methods that are based only on the intensity of single pixels.
Image Negatives
Here we consider a digital image with L intensity levels, represented from 0 to L-1 in steps of 1. The negative is obtained with the transformation
s = T(r) = (L - 1) - r
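A minimal sketch of the negative transformation, assuming an 8-bit image stored as a NumPy array (not from the slides):

```python
import numpy as np

def negative(image, L=256):
    """Image negative: s = T(r) = (L - 1) - r."""
    return ((L - 1) - image.astype(np.int32)).astype(np.uint8)
```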
Logarithmic Transformations
The general form of the log transformation is s = c * log(1 + r), where c is a constant and r is assumed to be ≥ 0.
The log transformation maps a narrow range of low input grey level values into a wider range of output values. The inverse-log transformation performs the opposite mapping.
We usually set c to 1. Grey levels must be in the range [0.0, 1.0].
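A minimal sketch of the log transformation, assuming the grey levels are first scaled to [0.0, 1.0] as stated above:

```python
import numpy as np

def log_transform(image, c=1.0):
    """s = c * log(1 + r), with r scaled to [0, 1]; output rescaled to [0, 1]."""
    r = image.astype(np.float64) / image.max()
    s = c * np.log1p(r)            # log(1 + r)
    return s / s.max()
```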
Identity Function
Output intensities are identical to input intensities. It is included in the graph only for completeness.
Power Law Transformations
Why are power laws popular? A cathode ray tube (CRT), for example, converts a video signal to light in a nonlinear way: the light intensity is proportional to a power (γ) of the source voltage VS. For a computer CRT, γ is about 2.2.
s = c·r^γ
where c and γ are positive constants.
Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. With c = γ = 1 this reduces to the identity function.
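A hedged sketch of the power-law (gamma) transformation for an 8-bit image; the normalization to [0, 1] before exponentiation is an implementation choice, not from the slides:

```python
import numpy as np

def power_law(image, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on gray levels normalized to [0, 1]."""
    r = image.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)
```

gamma < 1 (fractional γ) brightens dark regions, gamma > 1 darkens them, and gamma = c = 1 is the identity.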
Effect of decreasing gamma
When γ is reduced too much, the image begins to lose contrast, to the point where it starts to have a very slight "washed-out" look, especially in the background.
Piecewise Linear Transformation Functions
Control points (r1, s1) and (r2, s2) control the shape of the transform T(r):
• If r1 = s1 and r2 = s2, the transformation is linear and produces no change in intensity levels.
• r1 = r2, s1 = 0 and s2 = L-1 yields a thresholding function that creates a binary image.
• Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the intensity levels.
In general, r1 ≤ r2 and s1 ≤ s2 is assumed so that the function is single valued and monotonically increasing.
If (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax are the minimum and maximum levels in the image, the transformation stretches the levels linearly from their original range to the full range (0, L-1).
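The contrast stretch described above can be sketched as follows (a minimal implementation assuming r1 < r2; the guards against division by zero are an added assumption):

```python
import numpy as np

def contrast_stretch(image, r1, s1, r2, s2, L=256):
    """Piecewise linear transform through control points (r1, s1) and (r2, s2)."""
    r = image.astype(np.float64)
    out = np.empty_like(r)
    lo, hi = r <= r1, r >= r2
    mid = ~lo & ~hi
    out[lo] = s1 * r[lo] / max(r1, 1)                                # segment 1
    out[mid] = s1 + (s2 - s1) * (r[mid] - r1) / (r2 - r1)            # segment 2
    out[hi] = s2 + (L - 1 - s2) * (r[hi] - r2) / max(L - 1 - r2, 1)  # segment 3
    return out.astype(np.uint8)

# Full-range stretch: map (rmin, rmax) onto (0, L-1).
# stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
```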
Grey level slicing: two common approaches
• Set all pixel values within a range of interest to one value (white) and all others to another value (black). This produces a binary image: display a high value for the range of interest, else a low value ("discard background").
• Brighten (or darken) pixel values in a range of interest and leave all others unchanged: display a high value for the range of interest, else the original value ("preserve background").
Bit Plane Slicing
Only by isolating particular bits of the pixel values in an image can we highlight interesting aspects of that image. High-order bits contain most of the significant visual information; lower bits contain subtle details.
For an 8-bit image, the highest-order bit plane is obtained by mapping gray levels 0 to 127 to 0 and 128 to 255 to 1. This forms a binary image, which occupies less storage space.
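A minimal sketch of bit-plane extraction for an 8-bit image:

```python
import numpy as np

def bit_plane(image, k):
    """Extract bit plane k (0 = least significant, 7 = most significant)."""
    return (image.astype(np.uint8) >> k) & 1

# The MSB plane, bit_plane(img, 7), is exactly the 0..127 -> 0, 128..255 -> 1 mapping.
```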
Image Dynamic Range, Brightness and Contrast
The dynamic range of an image is the exact subset of gray values (0, 1, 2, …, L-1) that are present in the image. The image histogram gives a clear indication of its dynamic range.
When the dynamic range of the image is concentrated on the lower side of the gray scale, the image will be a dark image.
When the dynamic range of an image is biased towards the high side of the gray scale, the image will be a bright or light image.
An image with low contrast has a dynamic range that is narrow and concentrated in the middle of the gray scale; such images have a dull or washed-out look.
When the dynamic range of the image is significantly broad, the image will have high contrast and the distribution of pixels will be near uniform.
Histogram equalization
Histogram linearisation requires construction of the transformation function
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,   k = 0, 1, …, L-1
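A compact sketch of histogram equalization for an 8-bit image, built directly from the cumulative sum above:

```python
import numpy as np

def equalize(image, L=256):
    """s_k = (L - 1) * sum_{j<=k} n_j / n, applied as a lookup table."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size            # cumulative p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[image]
```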
HISTOGRAM EQUALISATION IS NOT ALWAYS DESIRED
Some applications need a histogram specified to their requirements. This is called histogram specification or histogram matching. It is a two-step process:
• Perform histogram equalization on the image.
• Perform a gray-level mapping using the inverse of the desired cumulative histogram.
Arithmetic operations
Addition:
Image averaging reduces noise. Images are to be registered before adding. An important application of image averaging is in the field of astronomy, where imaging with very low light levels is routine, causing sensor noise frequently to render single images virtually useless for analysis.
As K increases, the variability (noise) of the pixel values at each location (x, y) decreases.
In practice, the images gi(x, y) must be registered (aligned) in order to avoid the introduction of blurring and other artifacts in the output image.
Subtraction
A frequent application of image subtraction is in the enhancement of differences between images. Black (0 values) in the difference image indicates locations where there is no difference between the images.
One of the most commercially successful and beneficial uses of image subtraction is in the area of medical imaging called mask mode radiography:
g(x, y) = f(x, y) - h(x, y)
In digital angiography, the difference between a live image and a mask image with fluid injected is useful to identify blocked fine blood vessels.
The difference of two 8-bit images can range from -255 to 255, and the sum of two images can range from 0 to 510. Given an image f(x, y), fm = f - min(f) creates an image whose minimum value is zero, and
fs = k [fm / max(fm)]
is a scaled image whose values range from 0 to k. For an 8-bit image, k = 255.
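The min-shift and rescale for displaying a difference image can be sketched as (assuming NumPy arrays):

```python
import numpy as np

def scale_for_display(diff, k=255):
    """fm = f - min(f); fs = k * fm / max(fm), giving values in [0, k]."""
    fm = diff.astype(np.float64) - diff.min()
    return (k * fm / fm.max()).astype(np.uint8)
```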
Logical NOT (set complement): the output pixels are the set of elements not in A; all elements in A become 0 and the others become 1.
Image Multiplication and Division
Let g(x, y) denote an image corrupted by adding noise η(x, y) to a noiseless image f(x, y):
g(x, y) = f(x, y) + η(x, y)
The noise has zero mean value: E[z_i] = 0. At every pair of coordinates z_i = (x_i, y_i) the noise is uncorrelated: E[z_i z_j] = 0.
The noise effect is reduced by averaging a set of K noisy images. The new image is
ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)
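A one-line sketch of averaging K registered noisy frames:

```python
import numpy as np

def average_images(frames):
    """Mean of K registered noisy images; noise variance falls as 1/K."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)
```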
Spatial filters: spatial masks, kernels, templates, windows
Linear filters and nonlinear filters, based on the operation performed on the image. Filtering means accepting (passing) or rejecting some frequencies.
Mechanics of spatial filtering
A spatial filter consists of a neighbourhood (a small rectangle, typically 3x3, 5x5 or 7x7) and a predefined operation on the input image. The window centre moves from the first pixel (0,0) to the end of the first row, then the second row, and so on to the last pixel (M-1, N-1) of the input image. Filtering creates a new pixel in the output image at the window centre as it moves.
At any point (x, y) in the image, the response g(x, y) of the filter is the sum of products of the filter coefficients and the image pixels encompassed by the filter. Observe that the filter coefficient w(0,0) aligns with the pixel at location (x, y):
g(x, y) = w(-1,-1) f(x-1, y-1) + w(-1, 0) f(x-1, y) + … + w(0,0) f(x, y) + … + w(1,1) f(x+1, y+1)
Simply move the filter mask from point to point in the image; at each point (x, y), the response of the filter is calculated using this predefined relationship.
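The sum-of-products mechanics above can be sketched directly (a naive, unoptimized loop; edge replication padding is an added assumption):

```python
import numpy as np

def correlate2d(image, kernel):
    """g(x, y) = sum of kernel coefficients times the pixels under the window."""
    m, n = kernel.shape
    a, b = m // 2, n // 2
    padded = np.pad(image.astype(np.float64), ((a, a), (b, b)), mode='edge')
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(kernel * padded[i:i + m, j:j + n])
    return out

# 3x3 box (average) filter: correlate2d(img, np.ones((3, 3)) / 9.0)
```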
Smoothing (blurring) low pass filters:
• Purpose: noise reduction by removing sharp edges and sharp intensity transitions; the side effect (integration) is that this will blur sharp edges.
• Frequency-domain counterparts: ideal LPF, Butterworth LPF, Gaussian LPF.
• Average filter (linear): R = (1/9) Σ z_i, i = 1 to 9; called a box filter if all coefficients are equal.
• Weighted average: the mask will have different coefficients.
Smoothing Linear Filters (averaging filters or low pass filters)
The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters are sometimes called averaging filters; they are also referred to as lowpass filters.
Weighted average mask: the central pixel usually has a higher value. Weightage is inversely proportional to the distance of the pixel from the centre of the mask.
The general implementation for filtering an MxN image with a weighted averaging filter of size m x n (m and n odd) uses m = 2a+1 and n = 2b+1, where a and b are nonnegative integers. An important application of spatial averaging is to blur an image for the purpose of getting a gross representation of objects of interest, such that the intensity of smaller objects blends with the background after filtering and thresholding.
Examples of Low Pass Masks (Local Averaging)
Popular techniques for lowpass spatial filtering
Uniform filtering
The most popular masks for low pass filtering are masks with all their coefficients positive and equal to each other. Moreover, they sum up to 1 in order to maintain the mean of the image.
Gaussian filtering
The two-dimensional Gaussian mask has values that attempt to approximate the continuous function. In theory, the Gaussian distribution is non-zero everywhere, which would require an infinitely large convolution kernel, but in practice it is effectively zero more than about three standard deviations from the mean, and so we can truncate the kernel at this point. A suitable integer-valued convolution kernel can approximate a Gaussian with a σ of 1.0.
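A sketch of building a sampled Gaussian mask, truncated and normalized to sum to 1 (the 5x5 default is an assumption):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sampled 2-D Gaussian, normalized so the coefficients sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```

Applied with any correlation routine (e.g., the correlate2d sketch above), this gives Gaussian smoothing.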
Order-Statistics (nonlinear) Filters
The best-known example in this category is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
Although the median filter is by far the most useful order-statistics filter in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but ranking lends itself to many other possibilities. For example, using the 100th percentile results in the so-called max filter, which is useful in finding the brightest points in an image. The response of a 3x3 max filter is given by R = max{z_k | k = 1, 2, …, 9}. The 0th percentile filter is the min filter, used for the opposite purpose.
Example nonlinear spatial filters:
• Median filter: computes the median gray-level value of the neighborhood; used for noise reduction.
• Max filter: used to find the brightest points in an image, R = max{z_k | k = 1, 2, …, 9}.
• Min filter: used to find the dimmest points in an image, R = min{z_k | k = 1, 2, …, 9}.
Nonlinear median filter example: for a 3x3 neighbourhood of f(x, y) whose sorted values are 86, 91, 99, 100, 101, 102, 103, 106, 109, the median is 101, which becomes the value of g(x, y) at the window centre.
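A direct sketch of the median filter (naive loops; edge replication padding is an assumption):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighborhood."""
    a = size // 2
    padded = np.pad(image, a, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```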
High pass filter example
A high pass filtered image may be computed as the difference between the original image and a lowpass filtered version of that image: fhp(x, y) = f(x, y) - flp(x, y).
Highpass filter example: unsharp masking and high-boost filtering
A high-boost filtered image is computed as the difference between a multiple A of the original image and a lowpass filtered version of that image: fhb = A·f - flp = (A - 1)·f + fhp. (Example results with A = 1.1, 1.15 and 1.2.)
The high-boost filtered image looks more like the original with a degree of edge enhancement, depending on the value of A. A determines the nature of the filtering.
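A sketch of high-boost filtering given a precomputed blurred version of the image (the clipping to 8 bits is an assumption):

```python
import numpy as np

def high_boost(image, blurred, A=1.15):
    """f_hb = A*f - f_blur = (A - 1)*f + highpass(f); A = 1 gives unsharp masking."""
    f = image.astype(np.float64)
    return np.clip(A * f - blurred, 0, 255).astype(np.uint8)
```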
Sharpening Spatial Filters
This section deals with various ways of defining and implementing operators for image sharpening by digital differentiation.
Use of first derivatives for Image Sharpening (nonlinear) (edge enhancement)
The gradient of f at (x, y) is the vector ∇f = [∂f/∂x, ∂f/∂y]ᵀ. The magnitude M(x, y) of this vector, generally referred to simply as the gradient, is
M(x, y) = mag(∇f) = [ (∂f/∂x)² + (∂f/∂y)² ]^(1/2)
M(x, y) is the same size as the original image. It is common practice to refer to this image as the gradient image, or simply as the gradient.
Common practice is to approximate the gradient with absolute values, which is simpler to implement:
M(x, y) ≈ |∂f/∂x| + |∂f/∂y|
A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x + 1) - f(x)
and similarly in the x and y directions for an image.
Sharpening Spatial Filters
First derivative:
(1) must be zero in flat segments (areas of constant gray-level values);
(2) must be nonzero at the onset and end of a gray-level step or ramp;
(3) must be nonzero along ramps.
The digital implementation of the two-dimensional Laplacian is obtained by summing the two second-order partial derivatives:
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)
(a) Filter mask used to implement the
digital Laplacian
LAPLACIAN + ADDITION WITH ORIGINAL IMAGE DIRECTLY
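A sketch of Laplacian sharpening using the mask above (SciPy's ndimage.convolve is assumed available; any correlation routine works):

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def sharpen(image):
    """g = f - lap(f) when the mask's center coefficient is negative."""
    f = image.astype(np.float64)
    lap = convolve(f, LAPLACIAN, mode='nearest')
    return np.clip(f - lap, 0, 255).astype(np.uint8)
```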
DERIVATIVE OPERATORS
Roberts operator
The gradient can be approximated at point z5 in a number of ways. The simplest is to use the difference (z5 - z8) in the x direction and (z5 - z6) in the y direction:
∇f ≈ |z5 - z8| + |z5 - z6|
Using cross differences instead gives the Roberts operator, expressed mathematically as
∇f ≈ |z5 - z9| + |z6 - z8|
Above Equations can be implemented by using the following masks.
The original image is convolved with both masks separately and the
absolute values of the two outputs of the convolutions are added.
Prewitt operator
• The difference between the first and third rows approximates the derivative in the x direction.
• The difference between the first and third columns approximates the derivative in the y direction.
• The Prewitt operator masks may be used to implement the above approximation.
The summation of coefficients in all masks equals 0, indicating that they would give a response of 0 in an area of constant gray level.
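A sketch of the Sobel gradient with the absolute-value approximation (SciPy assumed available, as above):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(image):
    """M(x, y) ~ |Gx| + |Gy|, the absolute-value gradient approximation."""
    f = image.astype(np.float64)
    gx = convolve(f, SOBEL_X, mode='nearest')
    gy = convolve(f, SOBEL_Y, mode='nearest')
    return np.abs(gx) + np.abs(gy)
```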
Filtering in the Frequency Domain
Filters in the frequency domain can be divided into four groups.
Low pass filters (image blur):
• Remove frequencies away from the origin.
• Commonly, the frequency response of these filters is symmetric around the origin.
• The largest amount of energy is concentrated in the low frequencies, but it represents just image luminance and a visually not so important part of the image.
Low pass filters (smoothing filters):
• Ideal low pass filter (ILPF)
• Butterworth low pass filter (BLPF)
• Gaussian low pass filter (GLPF)
High pass filters (sharpening filters):
• Ideal high pass filter (IHPF)
• Butterworth high pass filter (BHPF)
• Gaussian high pass filter (GHPF)
• Laplacian in the frequency domain
• High boost and high frequency emphasis filters
IMAGE ENHANCEMENT III (Fourier)
Filtering in the Frequency Domain
Basic steps for zero padding:
• Zero pad the input image f(x, y) to P = 2M - 1 and Q = 2N - 1 if the arrays are of the same size.
• If the functions f(x, y) and h(x, y) are of size MxN and KxL, respectively, choose the padded sizes
P ≥ M + N - 1
Q ≥ K + L - 1
• Zero-pad h and f both to at least this size; a radix-2 FFT requires a power of 2. For example, if M = N = 512 and K = L = 16, then P = Q = 1024.
• This results in linear convolution; extract the center MxN portion of the result.
Filtering in the Frequency Domain
Basic steps for filtering in the frequency domain (see the sketch after this list):
1. Multiply the input padded image by (-1)^(x+y) to center the transform.
2. Compute F(u,v), the DFT of the image from (1).
3. Multiply F(u,v) by a filter function H(u,v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (-1)^(x+y).
Given the filter H(u,v) (the filter transfer function, or simply the filter) in the frequency domain, the Fourier transform of the output (filtered) image is given by
G(u,v) = H(u,v) F(u,v)
Step (3) is array multiplication. The filtered image g(x,y) is simply the inverse Fourier transform of G(u,v).
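The six steps map directly onto NumPy's FFT routines; this sketch assumes H is an already-centered M x N transfer function and skips the padding for brevity:

```python
import numpy as np

def filter_frequency(image, H):
    """Steps 1-6: center, DFT, multiply by H(u, v), inverse DFT, un-center."""
    M, N = image.shape
    y, x = np.indices((M, N))
    centering = (-1.0) ** (x + y)           # step 1 / step 6 factor
    F = np.fft.fft2(image * centering)      # step 2
    G = H * F                               # step 3: array multiplication
    g = np.real(np.fft.ifft2(G))            # steps 4 and 5
    return g * centering                    # step 6
```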
Low pass filters attenuate high frequencies while "passing" low frequencies.
Correspondence between filtering in the spatial and frequency domains
Let us find the equivalent of a frequency domain filter H(u,v) in the spatial domain. Since h(x,y) can be obtained from the response of a frequency domain filter to an impulse, the spatial filter h(x,y) is sometimes referred to as the finite impulse response (FIR) filter of H(u,v).
Consider the following filter transfer function: a filter that sets F(0,0) to zero and leaves all other frequency components unchanged. Such a filter is called the notch filter, since it is a constant function with a hole (notch) at the origin.
HOMOMORPHIC FILTERING
An image can be modeled mathematically in terms of illumination and reflectance as follows:
f(x, y) = i(x, y) r(x, y)
Note that
F{f(x, y)} ≠ F{i(x, y)} F{r(x, y)}
To accomplish separability, first map the model to the natural log domain and then take the Fourier transform of it:
z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)
Then
F{z(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}
or
Z(u, v) = I(u, v) + R(u, v)
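A sketch of the homomorphic pipeline (ln, DFT, filter, inverse DFT, exp); H is assumed to be a centered transfer function, and the use of log1p/expm1 to avoid ln 0 is an added assumption:

```python
import numpy as np

def homomorphic(image, H):
    """z = ln f; S(u,v) = H(u,v) Z(u,v); g = exp(s)."""
    z = np.log1p(image.astype(np.float64))           # ln(1 + f), avoids ln 0
    Z = np.fft.fftshift(np.fft.fft2(z))              # centered Z(u, v)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(s)                               # exp(s) - 1
```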
UNIT-III
IMAGE RESTORATION
Model for the image degradation/restoration process
The objective of restoration is to obtain an estimate of the original image from its degraded version g(x, y), while having some knowledge about the degradation function H and the additive noise η(x, y).
• Additive noise: g(x, y) = f(x, y) + η(x, y)
• Linear blurring: g(x, y) = f(x, y) * h(x, y)
Radar range and velocity images typically contain noise that can be modeled by the Rayleigh distribution.
The gray level values of uniform noise are evenly distributed across a specific range. Quantization noise has an approximately uniform distribution.
Three principal methods of estimating the degradation function for image restoration (a form of blind deconvolution, because the restored image will only be an estimate):
1) observation, 2) experimentation, 3) mathematical modeling.
Estimation by observation: select a subimage gs(x, y) with strong signal content, construct an unblurred estimate f̂s(x, y) of it, and take
Hs(u, v) = Gs(u, v) / F̂s(u, v)
Then assume H(u, v) has the same form over the full image and use it in the restoration process to reconstruct f̂(x, y). This case is used when we know only g(x, y) and cannot repeat the experiment.
Estimation by Mathematical Modeling:
A common model is atmospheric turbulence, H(u, v) = e^(-K(u² + v²)^(5/6)). If the value of K is large, the turbulence is very strong, whereas if the value of K is very low, the turbulence is not that strong.
Degradation model:
g(x, y) = h(x, y) * f(x, y) + η(x, y)
Wiener filter: a statistical approach that seeks an estimate f̂ minimizing the statistical function (mean square error)
e² = E{(f - f̂)²}
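In the frequency domain the minimizing filter has a familiar closed form; this sketch takes the DFTs G and H plus a scalar K approximating the noise-to-signal power ratio (the scalar K is a common simplification of the full spectral ratio, an assumption here):

```python
import numpy as np

def wiener_deconvolve(G, H, K=0.01):
    """F_hat(u,v) = [|H|^2 / (H (|H|^2 + K))] G(u,v)."""
    H2 = np.abs(H) ** 2
    return (H2 / (H * (H2 + K))) * G

# g_hat = np.real(np.fft.ifft2(wiener_deconvolve(np.fft.fft2(g), H)))
```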
UNIT-IV
IMAGE SEGMENTATION
HOUGH TRANSFORM
Point detection:
Detection of Lines
Apply all four line-detection masks (horizontal, vertical, +45°, -45°) on the image.
Detection of an edge in an image
What is an edge? An ideal edge can be defined as a set of connected pixels, each of which is located at an orthogonal step transition in gray level.
Calculation of Derivatives of Edges:
There are various ways in which these first derivative operators can be implemented: the Prewitt edge operator and the Sobel edge operator (in which noise is taken care of by the smoothing coefficients).
Edge Linking
There are two approaches: local processing and global processing (the Hough transform).
EDGE LINKING BY LOCAL PROCESSING
Consider a point (x, y) in the image which has already been operated on by the Sobel edge operator, with T a threshold. In the edge image, take two points (x, y) and (x', y') and link them if they are similar. Use two similarity measures: the first is the strength (magnitude) of the gradient operator, the second is the direction of the gradient.
HOUGH TRANSFORM (global processing)
The Hough transform is a mapping from the spatial domain to a parameter space. For a particular straight line, the values of m and c are constant.
So we have seen two cases:
• Case one: a straight line in the xy plane is mapped to a point in the mc plane.
• Case two: a point in the xy plane is mapped to a straight line in the mc plane.
When the straight line approaches the vertical, the slope m tends to infinity; to solve this, make use of the normal representation of a straight line:
x cos θ + y sin θ = ρ
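A sketch of the (ρ, θ) accumulator; the discretization (180 angles, 1-pixel ρ bins) is an assumption:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Each edge pixel votes along rho = x cos(theta) + y sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)                 # edge pixels vote
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas
```

Peaks in the accumulator correspond to sets of collinear edge pixels.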
Global Thresholding: a threshold value is selected that depends only on the pixel intensities in the image.
Region based segmentation operations: THRESHOLDING
• thresholding
• region growing
• region splitting and merging
For a bimodal histogram there are two peaks. The simplest form of segmentation is to choose a threshold value T in the valley region: if a pixel at location (x, y) has intensity f(x, y) ≥ T, the pixel belongs to the object, whereas if f(x, y) < T, the pixel belongs to the background.
Thresholding in a multimodal histogram
The basic aim of the thresholding operation is to create a thresholded image g(x, y), a binary image containing pixel values 0 or 1 depending upon whether the intensity f(x, y) at location (x, y) is greater than T or less than or equal to T. This is called global thresholding.
Automatic Thresholding
1. Choose an initial value of the threshold T.
2. With this threshold T, segregate the pixels into two groups G1 and G2.
3. Find the mean values of G1 and G2; let the means be μ1 and μ2.
4. Choose a new threshold as the average of the means: Tnew = (μ1 + μ2) / 2.
5. With this new threshold, segregate the two groups again and repeat the procedure while |T - Tnew| > ΔT; else stop.
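A direct sketch of the iteration (assuming a non-constant image, so both groups stay non-empty):

```python
import numpy as np

def iterative_threshold(image, delta=0.5):
    """Iterate T = (mu1 + mu2)/2 until the change is at most delta."""
    T = image.mean()                           # step 1: initial estimate
    while True:
        mu1 = image[image > T].mean()          # steps 2-3: group means
        mu2 = image[image <= T].mean()
        T_new = (mu1 + mu2) / 2.0              # step 4
        if abs(T - T_new) <= delta:            # step 5: stop when stable
            return T_new
        T = T_new
```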
Basic Adaptive Thresholding: divide the image into sub-images and use local thresholds.
In the case of non uniform illumination, finding a global threshold that is applicable over the entire image is very difficult, so a global threshold is not going to give a good result. What we have to do is subdivide the image into a number of sub regions, find a threshold value for each of the sub regions, and segment each sub region using its estimated threshold value. Because the threshold value is position dependent (it depends upon the location of the sub region), the kind of thresholding we are applying in this case is adaptive thresholding.
Basic Global and Local Thresholding
Simple thresholding schemes compare each pixel's gray level with a single global threshold. This is referred to as global thresholding.
Adaptive (local) thresholding
Adaptive thresholding: divide the image into sub-images and use local thresholds. Local properties (e.g., statistics) based criteria can be used for adapting the threshold.
OPTIMAL THRESHOLDING
Our aim in this case is to determine a threshold T which will minimize the average segmentation error.
The overall probability of error is given by
E(T) = P2·E1(T) + P1·E2(T)
For minimization of this error, set ∂E(T)/∂T = 0. Assuming Gaussian probability density functions, this leads to a quadratic AT² + BT + C = 0 with
A = σ1² - σ2²
B = 2(μ1σ2² - μ2σ1²)
C = σ1²μ2² - σ2²μ1² + 2σ1²σ2² ln(σ2P1 / σ1P2)
If the variances are equal, σ² = σ1² = σ2², and P1 = P2, then the value of T is simply (μ1 + μ2)/2, the mean of the average intensities of the foreground and background regions.
Use of Boundary Characteristics for Histogram Improvement and Local Thresholding
Region growing:
Starting from a seed pixel, grow the region based on connectivity or adjacency and similarity; this is the region-growing approach (a minimal sketch follows this list).
Basic formulation:
• Every pixel must be in a region.
• Points in a region must be connected.
• Regions must be disjoint.
• A logical predicate is needed for one region and for distinguishing between regions.
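A minimal region-growing sketch using 4-connectivity and a fixed intensity tolerance (the seed-difference predicate is one simple choice of similarity criterion, not the only one):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow from a seed using 4-connectivity and |f(q) - f(seed)| <= tol."""
    h, w = image.shape
    ref = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # N4 neighbors
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```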
Region splitting & merging: quadtree decomposition
Let R denote the full image. If all the pixels in R are similar, leave it as it is. If they are not similar, break the image into quadrants (make 4 partitions of the image). Then check whether each partition is similar.
In the quadtree representation, the root node R initially partitions into 4 nodes: R0, R1, R2 and R3. Then R1 in turn gives R10, R11, R12 and R13. Once such partitioning is completed, check all the adjacent partitions to see if they are similar. If they are similar, merge them together to form a bigger segment; say, if R12 and R13 are similar, merge them.
This is the concept of the splitting and merging technique for segmentation. Stop splitting when no more partitioning is possible, i.e., the minimum partition size is reached or every partition has become uniform; then look for adjacent partitions which can be combined to give a bigger segment.
UNIT-V
IMAGE COMPRESSION
Data redundancy is the central concept in image compression and can be mathematically defined.
Data Redundancy
Because various amounts of data can be used to represent the same amount of information, representations that contain irrelevant or repeated information are said to contain redundant data.
• The relative data redundancy RD of the first data set, n1, is defined by
RD = 1 - 1/CR
where CR, the compression ratio (alternatively expressed as bits per pixel, bpp), is defined by
CR = n1/n2
• rk denotes the pixel values defined in the interval [0,1] and pr(rk) is the probability of occurrence of rk. L is the number of gray levels. nk is the number of times that the kth gray level appears in the image and n is the total number of pixels (n = MxN).
An 8 gray level image has the following gray level distribution (table not reproduced here).
The average number of bits used for the fixed 3-bit code: Lavg = 3 bits/pixel.
Inter pixel Redundancy or Spatial Redundancy
The gray level of a given pixel can be predicted from its neighbors and the difference is used to represent the image; this type of transformation is called mapping. Run-length coding can also be employed to utilize inter pixel redundancy in image compression. Removing inter pixel redundancy is lossless.
Irrelevant information
One of the simplest ways to compress a set of data is to remove superfluous data. For images, information that is ignored by the human visual system, or is extraneous to the intended use of an image, is an obvious candidate for omission. A "gray" image that appears as a homogeneous field of gray can be represented by its average intensity alone, a single 8-bit value, giving a very large compression ratio.
Fidelity criteria
Fidelity criteria are used to measure information loss and can be divided into two classes:
1) Objective fidelity criteria (a mathematical expression is used): the amount of error in the reconstructed data is measured mathematically.
2) Subjective fidelity criteria: measured by human observation.
Mapper: transforms the image into an array of coefficients, reducing inter pixel redundancies. This is a reversible process which is not lossy. Run-length coding is an example of mapping. In video applications, the mapper uses previous (and future) frames to facilitate removal of temporal redundancy.
Huffman Coding:
The entropy of the source is
H = -[0.4 log₂(0.4) + 0.3 log₂(0.3) + 0.1 log₂(0.1) + 0.1 log₂(0.1) + 0.06 log₂(0.06) + 0.04 log₂(0.04)] = 2.14 bits/symbol
Huffman Coding: note that the shortest codeword (1) is given to the symbol/pixel with the highest probability (a2), and the longest codeword (01011) is given to the symbol/pixel with the lowest probability (a5). The average length of the code is given by
Lavg = Σ l(ak) p(ak) = 1(0.4) + 2(0.3) + 3(0.1) + 4(0.1) + 5(0.06) + 5(0.04) = 2.2 bits/symbol
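A compact Huffman construction over the probability table above; the tie-breaking (and hence the exact codewords) may differ from the slides, but the average length is the same:

```python
import heapq

def huffman_codes(probs):
    """Repeatedly merge the two least-probable entries until one tree remains."""
    heap = [(p, i, {sym: ''}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)
        p2, i, t2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in t1.items()}   # prefix left branch with 0
        merged.update({s: '1' + c for s, c in t2.items()})
        heapq.heappush(heap, (p1 + p2, i, merged))
    return heap[0][2]

probs = {'a2': 0.4, 'a6': 0.3, 'a1': 0.1, 'a4': 0.1, 'a3': 0.06, 'a5': 0.04}
codes = huffman_codes(probs)
avg_len = sum(p * len(codes[s]) for s, p in probs.items())   # 2.2 bits/symbol
```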
Transform Coding (lossy image compression):
In digital images the spatial frequencies are important, as they correspond to important image features; high frequencies are a less important part of the images. This method uses a reversible transform (e.g., Fourier, cosine) to map the image into a set of transform coefficients, which are then quantized and coded.
DCT
C(u, v) = α(u) α(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) cos[(2x + 1)uπ / 2N] cos[(2y + 1)vπ / 2N]
with α(u) = √(1/N) if u = 0 and α(u) = √(2/N) if u = 1, 2, …, N-1 (same for α(v)). The inverse DCT uses the same kernel, with the sum taken over u and v.
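SciPy's dctn/idctn implement this transform pair, with norm='ortho' giving the α(u), α(v) scaling above (assuming SciPy is available):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2(block):
    """Forward 2-D DCT (type II, orthonormal scaling)."""
    return dctn(block, type=2, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idctn(coeffs, type=2, norm='ortho')

# Round-trip check: np.allclose(idct2(dct2(x)), x)
```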
JPEG Standard
JPEG exploits spatial redundancy
JPEG Standard
Different modes such as sequential, progressive and hierarchical, and options like lossy and lossless modes of the JPEG standard exist. JPEG supports these modes of encoding.
Applications of JPEG-2000 and their requirements
• Internet
• Color facsimile
• Printing
• Scanning
• Digital photography
• Remote Sensing
• Mobile
• Medical imagery
• Digital libraries and archives
• E-commerce
Each application area has some requirements. Improved low bit-rate performance: JPEG-2000 should give acceptable quality below 0.25 bpp; networked image delivery and remote sensing applications have this requirement.
Transform coefficients: the lower-right sub-bands are the higher-frequency sub-bands.
Image decomposition, scale 2: the scale-1 sub-bands are LL1, HL1, LH1, HH1; decomposing LL1 again yields the four scale-2 sub-bands LL2, LH2, HL2, HH2.
Image Decomposition
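A sketch of the two-scale decomposition using PyWavelets (a third-party package, assumed available; the Haar wavelet and the LH/HL naming convention are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(64, 64)                    # stand-in image
LL1, (LH1, HL1, HH1) = pywt.dwt2(image, 'haar')   # scale-1 sub-bands
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, 'haar')     # split LL1 into scale-2 sub-bands
```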
Post-Compression Rate-Distortion (PCRD) optimization
Embedded Block Coding with Optimized Truncation of bit-stream
(EBCOT), which can be applied to wavelet packets and which
offers both resolution scalability and SNR scalability.
Tiling: steps in JPEG 2000
Smaller non-overlapping blocks of the image are known as tiles. The image is split into tiles, rectangular regions of the image; tiles can be any size. Dividing the image into tiles is advantageous in that the decoder will need less memory to decode the image, and it can opt to decode only selected tiles to achieve a partial decoding of the image.
MPEG-1 (MOVING PICTURE EXPERTS GROUP)
MPEG exploits temporal redundancy and is prediction based: each frame of a sequence is compared with its predecessor and only pixels that have changed are updated.
• The MPEG-1 standard is for storing and retrieving video information on digital storage media.
• The MPEG-2 standard supports digital video broadcasting and HDTV.
• The H.261 standard is for telecommunication application systems.