Image segmentation

Segmentation is a process that divides images into regions with common properties, primarily using techniques like region merging and edge detection. Key methods for detecting discontinuities include point, line, and edge detection, employing various gradient operators and advanced techniques like the Marr-Hildreth edge detector. Additionally, thresholding techniques, including global and multilevel thresholding, are used to separate objects from backgrounds based on pixel intensity.

Segmentation

 Segmentation is the underlying concept for creating objects from
pixels. The segmentation process divides the image into
regions or objects that have common properties.
 Typical image segmentation techniques involve one of two
processes:
• Region merging according to some measure of homogeneity
• Separation of objects by finding edges using the gradient of
digital numbers (DNs) between neighboring pixels.
 Region-merging approaches can be divided into two approaches:
• Region growing
• Region splitting and merging
 Detection of discontinuities in a digital image covers point detection,
line detection, and edge detection.
Image Segmentation
Image segmentation divides an image into regions that are connected and
have some similarity within the region and some difference between
adjacent regions.
The goal is usually to find individual objects in an image.
For the most part there are fundamentally two kinds of approaches to
segmentation: discontinuity and similarity.
– Similarity may be due to pixel intensity, color or texture.
– Differences are sudden changes (discontinuities) in any of these, but
especially sudden changes in intensity along a boundary line, which is
called an edge.
Detection of Discontinuities
 There are three kinds of discontinuities of intensity: points, lines and
edges.
 The most common way to look for discontinuities is to scan a small mask
over the image. The mask determines which kind of discontinuity to
look for. The response of the mask at each pixel is

R = w1·z1 + w2·z2 + ... + w9·z9 = Σ(i=1..9) wi·zi

where zi is the gray level of the pixel under mask coefficient wi.
Point Detection
 The detection of an isolated point that differs from a constant background can be
done using the following mask:

-1 -1 -1
-1 8 -1
-1 -1 -1

• A point has been detected at the location on which the mask is centered if |R| > T,
where T is a nonnegative threshold.
• The idea is that the gray level of an isolated point will be quite different from
the gray levels of its neighbors.
• The mask operation measures the weighted difference between the center
point and its neighbors. Differences large enough, as determined by T, are
considered isolated points in the image of interest.
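As a concrete illustration, the mask response and the |R| > T test can be sketched in Python (a minimal sketch assuming the image is a list of lists of gray levels; the function name and test data are illustrative):

```python
# Point detection sketch: flag pixel (y, x) as an isolated point when the
# response R of the 3x3 mask satisfies |R| > T.

MASK = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

def detect_points(img, T):
    """Return (row, col) positions of interior pixels with |R| > T."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(1, h - 1):               # interior pixels only
        for x in range(1, w - 1):
            R = sum(MASK[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            if abs(R) > T:
                points.append((y, x))
    return points

# Constant background (10) with one isolated bright point at (2, 2).
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
print(detect_points(img, T=1000))  # [(2, 2)]
```

Note that T must be large enough to reject the smaller responses at the point's neighbors: here the response at (2, 2) is 1520, while each of its neighbors responds with magnitude 190.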
Line Detection
• Only slightly more complex than point detection is finding a one-pixel-wide
line in an image.
• For digital images the only three-point straight lines are horizontal,
vertical, or diagonal (±45 degrees).
• If the horizontal mask is moved around an image it will respond more
strongly to lines oriented horizontally.
• With a constant background, the maximum response results when the
line passes through the middle row of the mask.
Edge Detection
• Edges characterize object boundaries and are therefore useful for
segmentation and identification of objects in a scene.
• An edge point can be thought of as a pixel location of an abrupt change in
gray level. An edge is the boundary between two regions with relatively
distinct gray-level properties.
• There are two types of edges: step and ramp edges.

• Step edges are detected using first-order derivative filters like the
Roberts, Sobel, Frei-Chen, and Prewitt operators.
• Ramp edges can be detected using second-order derivative filters
like the Laplacian.
Detection of Discontinuities Edge Detection
Detection of Discontinuities: Gradient Operators
First-order derivatives:
– The gradient of an image f(x,y) at location (x,y) is defined as the
vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

– The magnitude of this vector:

∇f = mag(∇f) = [Gx² + Gy²]^(1/2)

– The direction of this vector:

α(x,y) = tan⁻¹(Gy / Gx)

– It points in the direction of the greatest rate of change of f at
location (x,y).
Roberts cross-gradient operators

Prewitt operators

Sobel operators
Prewitt masks for detecting diagonal edges
Sobel masks for detecting diagonal edges

f  Gx + G y
Detection of Discontinuities Gradient
Operators
Second-order derivatives: (The Laplacian)
– The Laplacian of a 2-D function f(x,y) is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

– Two forms (two Laplacian masks) are used in practice.

More Advanced Techniques for Edge
Detection
Marr-Hildreth edge detector
Marr and Hildreth argued that
1) intensity changes are dependent on image scale, so their
detection requires the use of operators of different sizes, and
2) a sudden intensity change gives rise to a peak or trough
in the first derivative or, equivalently, to a zero crossing in the second
derivative.
These ideas suggest an operator with two salient features:
1) It should be a differential operator capable of computing first- or
second-order derivatives at every point in an image.
2) It should be capable of being tuned to act at any desired scale, so
that large operators can be used to detect blurry edges and small
operators to detect sharply focused fine detail.
More Advanced Techniques for Edge Detection

• Consider the Gaussian function:

h(r) = −e^(−r² / 2σ²)

where r² = x² + y² and σ is the standard deviation.
• The Laplacian of h is

∇²h(r) = −[(r² − σ²) / σ⁴] · e^(−r² / 2σ²)

• The Laplacian of a Gaussian (LoG) is sometimes called the
Mexican hat function. It can also be computed by smoothing
the image with the Gaussian smoothing mask, followed by
application of the Laplacian mask.
Marr-Hildreth edge detector
• Marr and Hildreth argued that the most satisfactory operator fulfilling these
conditions is the filter ∇²G, where ∇² is the Laplacian operator and G is the
2-D Gaussian function

G(x, y) = e^(−(x² + y²) / 2σ²)

• Its Laplacian is

∇²G(x, y) = ∂²G/∂x² + ∂²G/∂y²

with

∂G/∂x = −(x/σ²) · e^(−(x² + y²) / 2σ²)
∂²G/∂x² = (x²/σ⁴ − 1/σ²) · e^(−(x² + y²) / 2σ²)
∂²G/∂y² = (y²/σ⁴ − 1/σ²) · e^(−(x² + y²) / 2σ²)

so that

∇²G(x, y) = [(x² + y² − 2σ²) / σ⁴] · e^(−(x² + y²) / 2σ²)

This expression is called the Laplacian of a Gaussian
(LoG)
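The closed-form expression can be sampled directly to build a LoG mask (a minimal sketch; the mask size and σ are illustrative choices):

```python
import math

def log_value(x, y, sigma):
    """Sample the LoG: ((x² + y² − 2σ²) / σ⁴) · exp(−(x² + y²) / 2σ²)."""
    r2 = x * x + y * y
    return (r2 - 2 * sigma ** 2) / sigma ** 4 * math.exp(-r2 / (2 * sigma ** 2))

def log_kernel(size, sigma):
    """Build a size x size LoG mask centered on the origin."""
    c = size // 2
    return [[log_value(x - c, y - c, sigma) for x in range(size)]
            for y in range(size)]

k = log_kernel(5, sigma=1.0)
# The center of the "Mexican hat" is the most negative value: -2 for sigma = 1.
print(round(k[2][2], 6))  # -2.0
```

Convolving the image with such a mask and then locating zero crossings of the result gives the Marr-Hildreth edges.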
(Figures: gradient operator examples, including the Sobel gradient of an image, the Gaussian smoothing function, and the Laplacian mask.)
Hough Transforms
The Hough transform (HT) takes the edge map created by the edge
detection operators.
Most of the time, the edge map generated by the edge
detection algorithms is disconnected.
HT can be used to connect the disjointed edge
points. It is used to fit the points to plane curves,
such as lines, circles, and parabolas.

The line equation is y = mx + c.

However, the problem is that there are infinitely many lines passing through
one point.
Therefore, an edge point in the x-y plane is transformed to the c-m
plane. The equation of the line becomes c = (−x)m + y.
Hough Transforms
All the edge points in the x-y plane need to be fitted, so all the points are
transformed to the c-m plane.
The objective is to find the intersection point.
If A and B are points connected by a line in the spatial domain, then
they correspond to intersecting lines in the Hough space (c-m plane).
To find the intersections, the c-m plane is partitioned into an
accumulator array.
For every edge point (x,y), the corresponding accumulator element
is incremented in the accumulator array.
At the end of this process, the accumulator values are checked.
The significance is that a high accumulator value gives us the line equation
of the points in (x,y) space.
Hough Transforms
Hough Transform steps:
1) Load the image.
2) Find the edges of the image using any edge detector.
3) Quantize the parameter space P.
4) Repeat the following for all the pixels of the image:
if the pixel is an edge pixel, then
(a) c = (−x)m + y, or calculate ρ
(b) P(c,m) = P(c,m) + 1, or increment that position in P
5) Show the Hough space.
6) Find the local maxima in the parameter space.
7) Draw the line using the local maxima.
The major problem with this algorithm is that it does not work
for vertical lines, as they have a slope of infinity.
To handle them, convert the line into polar coordinates: ρ = x cosθ + y sinθ,
where θ is the angle between the normal to the line and the x-axis, and ρ is
the perpendicular distance from the origin to the line.
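The steps above can be sketched with the polar-form accumulator (a minimal sketch; the 1-pixel ρ bins and the very coarse θ quantization used in the demo are illustrative, real implementations use steps of 1° or finer):

```python
import math

def hough_peak(edge_points, width, height, n_theta=180):
    """Vote in a (rho, theta) accumulator; return the strongest (rho, t, votes)."""
    max_rho = int(math.hypot(width, height))
    acc = [[0] * n_theta for _ in range(2 * max_rho + 1)]  # rho may be negative
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + max_rho][t] += 1                     # one vote per cell
    r, t = max(((r, t) for r in range(len(acc)) for t in range(n_theta)),
               key=lambda rt: acc[rt[0]][rt[1]])
    return r - max_rho, t, acc[r][t]

# A vertical line x = 3: impossible in slope-intercept form, easy in polar form.
pts = [(3, y) for y in range(5)]
rho, t, votes = hough_peak(pts, 10, 10, n_theta=4)  # theta in {0, 45, 90, 135} deg
print(rho, t, votes)  # 3 0 5  (rho = 3, theta = 0: all 5 points vote together)
```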
Edge Linking and Boundary Detection
Global Processing via the Hough Transform
Hough transform: a way of finding edge points in an image that lie along a
straight line.
Example: the xy-plane vs. the ab-plane (parameter space), using
yi = a·xi + b.
The Hough transform consists of finding
all pairs of values of ρ and θ which satisfy
x cosθ + y sinθ = ρ for a given edge point (x,y).
These are accumulated in what is basically
a two-dimensional histogram.
When plotted, these pairs of ρ and θ
look like a sine wave. The process is
repeated for all appropriate (x,y)
locations.
Edge Linking and Boundary Detection
Hough Transform Example
The intersections of the curves correspond to the collinear point sets
1,3,5; 2,3,4; and 1,4.
 Thresholding can be seen as a limiting case of extreme contrast stretching. In
the contrast-stretching diagram, if we make the first and the last slopes zero
(r1 = r2 = 0) and increase the center slope, we get a thresholding
transformation.

 The formula for achieving thresholding is as follows:

 s = 0, if r ≤ a
 s = L − 1, if r > a

 where L is the number of gray levels.
Thresholding
◼ Suppose that an image, f(x,y), is composed of light objects on a
dark background, and the following figure is the histogram of the
image.

(Figure: histogram of an image with a dark background and a light object.)

◼ Then, the objects can be extracted by comparing pixel values
with a threshold T.
Thresholding
▪ One way to extract the objects from the background is to select
a threshold T that separates object from background.
▪ Any point (x,y) for which f(x,y) > T is called an object point;
otherwise the point is called a background point.

g(x, y) = 1, if f(x, y) > T
g(x, y) = 0, if f(x, y) ≤ T

▪ When T is a constant applicable over an entire image, the
above process is called global thresholding.
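The rule above is essentially one comparison per pixel (a minimal sketch assuming a list-of-lists image; the data are illustrative):

```python
# Global thresholding: g(x, y) = 1 if f(x, y) > T else 0, with a single
# threshold T for the whole image.

def global_threshold(img, T):
    return [[1 if v > T else 0 for v in row] for row in img]

img = [[12, 200, 35],
       [180, 90, 220]]
print(global_threshold(img, T=100))  # [[0, 1, 0], [1, 0, 1]]
```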
Thresholding
▪ When the value of T changes over an image,
▪ the process is referred to as variable thresholding.
▪ Sometimes it is also termed local or regional thresholding,
▪ where the value of T at any point (x,y) in an image depends
on properties of a neighborhood of (x,y).
▪ If T depends on the spatial coordinates (x,y) themselves, then
variable thresholding is often referred to as dynamic or
adaptive thresholding.
Multilevel Thresholding
◼ It is also possible to extract objects that have a specific intensity
range using multiple thresholds.
image with dark background
and two light objects

Extension to color images is straightforward: there are three color channels; in
each one, specify the intensity range of the object. Even if objects are not
separated in a single channel, they might be with all the channels combined.
Application example: detecting/tracking faces based on skin color.
Multilevel thresholding
• A point (x,y) belongs
– to one object class if T1 < f(x,y) ≤ T2
– to another object class if f(x,y) > T2
– to the background if f(x,y) ≤ T1

g(x, y) = a, if f(x, y) > T2
g(x, y) = b, if T1 < f(x, y) ≤ T2
g(x, y) = c, if f(x, y) ≤ T1
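The three-way rule can be sketched directly (a minimal sketch; the labels a, b, c are arbitrary output values, here 2, 1, 0, and the data are illustrative):

```python
# Multilevel thresholding with two thresholds T1 < T2 and three labels.

def multilevel_threshold(img, T1, T2, a=2, b=1, c=0):
    def label(v):
        if v > T2:
            return a      # one object class
        if v > T1:
            return b      # another object class
        return c          # background
    return [[label(v) for v in row] for row in img]

img = [[10, 120, 240]]
print(multilevel_threshold(img, T1=50, T2=200))  # [[0, 1, 2]]
```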

Role of Noise in Image Thresholding
Role of Illumination in Image Thresholding

◼ Non-uniform illumination may change the histogram in a


way that it becomes impossible to segment the image using
a single global threshold.

◼ Choosing local threshold values may help.


Basic Global Thresholding
Based on visual inspection of the histogram:
1. Select an initial estimate for T.
2. Segment the image using T. This produces two groups of pixels: G1,
consisting of all pixels with gray-level values > T, and G2, consisting of
pixels with gray-level values ≤ T.
3. Compute the average gray-level values μ1 and μ2 for the pixels in
groups G1 and G2.
4. Compute a new threshold value: T = 0.5(μ1 + μ2).
5. Repeat steps 2 through 4 until the difference between the values of T
in successive iterations is smaller than a predefined parameter ΔT.
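The iteration can be sketched directly (operating on a flat list of pixel values for brevity; the two-mode sample data are illustrative):

```python
# Basic global thresholding: iterate T = (mu1 + mu2) / 2 until the change
# in T is smaller than delta_T.

def iterative_threshold(pixels, delta_T=0.5):
    T = sum(pixels) / len(pixels)             # step 1: initial estimate (mean)
    while True:
        g1 = [p for p in pixels if p > T]     # step 2: segment into G1, G2
        g2 = [p for p in pixels if p <= T]
        mu1 = sum(g1) / len(g1) if g1 else T  # step 3: group means
        mu2 = sum(g2) / len(g2) if g2 else T
        T_new = 0.5 * (mu1 + mu2)             # step 4: new threshold
        if abs(T_new - T) < delta_T:          # step 5: convergence test
            return T_new
        T = T_new

# Background mode near 20, object mode near 200.
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
print(iterative_threshold(pixels))  # 110.0 -- midway between the two modes
```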
Basic Global Thresholding

Example: starting with T equal to the average intensity of the image and
ΔT = 0, the procedure converges in 3 iterations to T = 125.
Basic Global Thresholding
1. Works well in situations where there is a reasonably clear
valley between the modes of the histogram related to
objects and background.

2. ΔT is used to control the number of iterations.


3. Initial threshold must be chosen greater than the minimum
and less than the maximum intensity level in the image

4. The average intensity of the image is a good initial choice


for T.
Basic Adaptive Thresholding

• Subdivide the original image into small areas.
• Utilize a different threshold to segment each
subimage.
• Since the threshold used for each pixel depends
on the location of the pixel in terms of the
subimages, this type of thresholding is adaptive.
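A minimal sketch of this idea, thresholding each subimage by its own mean (the block size, the choice of local mean as threshold, and the data are illustrative):

```python
# Basic adaptive thresholding: subdivide into block x block subimages and
# threshold each subimage with its local mean.

def adaptive_threshold(img, block=2):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            vals = [img[y][x] for y in ys for x in xs]
            T = sum(vals) / len(vals)        # local threshold for this block
            for y in ys:
                for x in xs:
                    out[y][x] = 1 if img[y][x] > T else 0
    return out

# The left half is dim and the right half is bright; a single global T near
# the overall mean (70) would label entire halves, losing the local detail.
img = [[10, 30, 110, 130],
       [10, 30, 110, 130]]
print(adaptive_threshold(img))  # [[0, 1, 0, 1], [0, 1, 0, 1]]
```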
Example : Adaptive Thresholding
Example : Adaptive Thresholding

How to solve this problem?


Further subdivision
Optimal Global and Adaptive Thresholding
• This method treats pixel values as probability density functions.
• The goal of this method is to minimize the probability of
misclassifying pixels as either object or background.
• There are two kinds of error:
– mislabeling an object pixel as background, and
– mislabeling a background pixel as object.
Optimal Global and Adaptive Thresholding
• Method for estimating thresholds that produce the
minimum average segmentation error.
• Let an image contains only two principal gray
regions.
• Let z represents the gray-level values.
• These can be viewed as random quantities, and the
histogram may be considered an estimate of their
probability density function (PDF), p(z).
Optimal Global and Adaptive Thresholding

p(z) = P1·p1(z) + P2·p2(z), with P1 + P2 = 1

where P1 is the probability that a random pixel
with value z is an object pixel, and P2 is the
probability that it is a background pixel.
Applications of Thresholding
• Analyze and recognize fingerprints
• During the process of recovering/ analyzing/ recognizing
photographed or scanned letters
• Reduce amount of information (e.g. for image transfer, or
content recognition)
• Real-time adaptive thresholding (e.g. face detection)
• Traffic control and wireless mesh networks
• Motion detection using dynamic thresholding
• Background subtraction (e.g. real-time subtraction for biometric
face detection)
Region-Based Segmentation
Edges and thresholds sometimes do not give good results for
segmentation.
Region-based segmentation is based on the connectivity
of similar pixels in a region.
– Each region must be uniform.
– Connectivity of the pixels within the region is very important.
There are two main approaches to region-based segmentation:
region growing and region splitting.
Region-Based Segmentation Basic Formulation

Let R represent the entire image region.

Segmentation is a process that partitions R into subregions,
R1, R2, …, Rn, such that
(a) ∪(i=1..n) Ri = R
(b) Ri is a connected region, i = 1, 2, ..., n
(c) Ri ∩ Rj = ∅ for all i and j, i ≠ j
(d) P(Ri) = TRUE for i = 1, 2, ..., n
(e) P(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj
where P(Rk) is a logical predicate defined over the points in set Rk.
For example: P(Rk) = TRUE if all pixels in Rk have the same gray
level.
Region Growing
• Thresholding alone may still produce isolated or disconnected regions.
• Region growing states that a region is coherent if all the pixels of that region
are homogeneous with respect to some characteristic such as
colour, intensity, texture, or other statistical properties.
• Thus the idea is to pick a pixel inside a region of interest as a starting
point (also known as a seed point) and allow it to grow.
• The seed point is compared with its neighbours, and if the properties
match, they are merged together.
• This process is repeated until the regions converge to an extent that no
further merging is possible.
Region Growing Algorithm
• It is a process of grouping pixels or subregions to get a bigger
region present in an image.
• Selection of the initial seed: the initial seed that represents the ROI
should typically be given by the user, but it can also be chosen automatically.
The seeds can be either single or multiple.
• Seed growing criteria: the similarity criterion denotes the minimum
difference in the grey levels or the average of the set of pixels. Thus,
the initial seed 'grows' by adding the neighbours if they share the
same properties as the initial seed.
• Terminate process: if further growing is not possible, then terminate the
region-growing process.
Region Growing Algorithm
Consider the image shown in the figure:

• Assume the seed points are indicated by underlines. Let the seed pixels 1 and
9 represent the regions C and D, respectively.
• Subtract each pixel value from the seed value.
• If the absolute difference is less than or equal to 4 (i.e. T = 4), merge the pixel
with that region. Otherwise, merge the pixel with the other region.
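The procedure can be sketched as a breadth-first growth from a single seed (a minimal sketch; the 4-connectivity choice, the tiny image, and T = 4 from the example above are illustrative):

```python
from collections import deque

# Region growing: start from a seed and merge 4-connected neighbours whose
# gray level differs from the seed value by at most T.

def region_grow(img, seed, T):
    h, w = len(img), len(img[0])
    sy, sx = seed
    seed_val = img[sy][sx]
    region = {(sy, sx)}
    frontier = deque([(sy, sx)])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(img[ny][nx] - seed_val) <= T):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region

img = [[1, 2, 9],
       [2, 3, 8],
       [1, 9, 9]]
print(sorted(region_grow(img, seed=(0, 0), T=4)))
# [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)] -- the low-intensity region
```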
Split and Merge Algorithm
• The region-growing algorithm is slow,
• so the seed point can be extended to a seed region.
• Instead of a single pixel, a node of a region adjacency graph
(RAG), i.e. a region itself, is now considered as the starting point.
• The split process can be stated as follows:
1. Segment the image into regions R1, R2, …, Rn using a set of
thresholds.
2. Create the RAG. Use a similarity measure and formulate a homogeneity
test.
3. The homogeneity test is designed based on similarity criteria
such as intensity or any image statistics.
4. Repeat step 3 until no further region exists that requires merging.
Region Growing
criteria:
1. the absolute gray-
level difference
between any pixel and
the seed has to be less
than 65
2. the pixel has to be 8-
connected to at least
one pixel in that
region (if more, the
regions are merged)
Region-Based Segmentation Region
Growing
Fig. 10.41 shows the histogram of Fig. 10.40(a). It is difficult to
segment the defects by thresholding methods. (Applying region-growing
methods is better in this case.)

Figure 10.40(a) Figure 10.41


Split and Merge using Quadtree
•Entire image is assumed as a single region. Then the homogeneity
test is applied. If the conditions are not met, then the regions are split
into four quadrants.
•This process is repeated for each quadrant until all the regions meet
the required homogeneity criteria. If the regions are too small, then
the division process is stopped.
1) Split and continue the subdivision process until some stopping
criterion is fulfilled. The stopping criterion is often reached at a stage
where no further splitting is possible.
2) Merge adjacent regions if the regions share any common criteria.
Stop the process when no further merging is possible.
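The split half of the procedure can be sketched with a recursive quadtree (a minimal sketch; the homogeneity predicate P chosen here, max − min ≤ tol, and the data are illustrative):

```python
# Quadtree splitting: if P(R) is FALSE (the region is not homogeneous),
# split R into four quadrants and recurse; homogeneous regions become leaves.

def split(img, y, x, size, tol, leaves):
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if max(vals) - min(vals) <= tol or size == 1:  # P(R) TRUE: stop splitting
        leaves.append((y, x, size))
        return
    half = size // 2
    for dy in (0, half):                           # the four quadrants
        for dx in (0, half):
            split(img, y + dy, x + dx, half, tol, leaves)

img = [[5, 5, 90, 90],
       [5, 5, 90, 90],
       [5, 5, 5, 5],
       [5, 5, 5, 5]]
leaves = []
split(img, 0, 0, len(img), tol=10, leaves=leaves)
print(leaves)  # [(0, 0, 2), (0, 2, 2), (2, 0, 2), (2, 2, 2)]
```

The whole image fails P, so it splits once; each 2×2 quadrant is homogeneous and becomes a leaf. A merge pass would then join adjacent leaves for which P of the union still holds (here, the three all-5 quadrants).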
Region-Based Segmentation Region
Splitting and Merging
Region splitting is the opposite of region growing.
– First there is a large region (possibly the entire image).
– Then a predicate (measurement) is used to determine if the
region is uniform.
– If not, then the method requires that the region be split into
two regions.
– Then each of these two regions is independently tested by the
predicate (measurement).
– This procedure continues until all resulting regions are
uniform.
Region-Based Segmentation Region
Splitting
The main problem with region splitting is determining where to split a
region.
One method to divide a region is to use a quadtree structure. Quadtree:
a tree in which nodes have exactly four descendants.
Region-Based Segmentation Region
Splitting and Merging
The split and merge procedure:
– Split into four disjoint quadrants any region Ri for which P(Ri)
= FALSE.
– Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) =
TRUE. (The quadtree structure may not be preserved.)
– Stop when no further merging or splitting is possible.
Thank You
