Image Segmentation
Christophoros Nikou
[email protected]
2 Image Segmentation
• Obtain a compact representation of the image to
be used for further processing.
• Group together similar pixels
• Image intensity is not sufficient to perform
semantic segmentation
– Object recognition
• Decompose objects into simple tokens (line segments, spots,
corners)
– Finding buildings in images
• Fit polygons and determine surface orientations.
– Video summarization
• Shot detection
C. Nikou – Digital Image Processing
3 Image Segmentation (cont.)
Goal: separate an image into “coherent”
regions.
• Basic methods
– point, line, edge detection
– thresholding
– region growing
– morphological watersheds
• Advanced methods
– clustering
– model fitting.
– probabilistic methods.
–…
4 Fundamentals
• Edge information is in general not sufficient.
Constant intensity region: edge-based segmentation.
Textured region: region-based segmentation.
5 Point, line and edge detection
• First order derivatives produce
thick edges at ramps.
• Second order derivatives are non
zero at the onset and at the end
of a ramp or step edge (sign
change).
• Second order derivatives respond
stronger at isolated points and
thin lines than first order
derivatives.
g(x, y) = 1 if |R(x, y)| ≥ T; 0 otherwise
where R(x, y) is the response of the detection mask at (x, y).
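In code, point detection reduces to thresholding the absolute response of a Laplacian mask. A minimal pure-Python sketch (the 3x3 mask, toy image and threshold value are illustrative assumptions):

```python
# Isolated-point detection: threshold the absolute Laplacian response.
LAPLACIAN = [[-1, -1, -1],
             [-1,  8, -1],
             [-1, -1, -1]]

def detect_points(img, T):
    """Return a binary map: 1 where |mask response| >= T, else 0 (borders skipped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            r = sum(LAPLACIAN[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = 1 if abs(r) >= T else 0
    return out

# A flat image with one isolated bright pixel.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
mask = detect_points(img, T=500)
```

Only the isolated pixel fires: its response (8·200 − 8·10 = 1520) clears the threshold, while its neighbors do not.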
7 Line detection
• The Laplacian is
also used here.
• It has a double
response
– Positive and
negative values at
the beginning and
end of the edges.
• Lines should be thin with respect to the size of the detector.
(Figure: absolute value vs. positive values of the response.)
9 Line detection (cont.)
• The Laplacian is isotropic.
• Direction-dependent filters
localize 1-pixel-thick lines at
other orientations (0°, 45°, 90°).
10 Edge detection
• Edge models
– Ideally, edges should be 1 pixel thin.
– In practice, they are blurred and noisy.
11 Edge detection (cont.)
Observations:
• Second derivative produces
two values for an edge
(undesirable).
• Its zero crossings may be
used to locate the centres of
thick edges.
13 Edge model and noise
15 Fundamental steps in edge detection
• Image smoothing for noise reduction.
– Derivatives are very sensitive to noise.
• Detection of candidate edge points.
• Edge localization.
– Selection of the points that are true members
of the set of points comprising the edge.
16 Image gradient
• The gradient of an image:
∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
with magnitude M(x, y) = (gx² + gy²)^(1/2) and direction α(x, y) = tan⁻¹(gy/gx).
17 Gradient operators
Finite-difference approximations:
∂f(x, y)/∂x ≈ f(x+1, y) − f(x, y)
∂f(x, y)/∂y ≈ f(x, y+1) − f(x, y)
Roberts operators.
The 3x3 masks (Prewitt, Sobel) integrate image smoothing.
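The Sobel masks combine these differences with smoothing. A minimal sketch on a toy step edge; the |gx| + |gy| magnitude approximation (a common cheap surrogate for the Euclidean norm) and the image values are assumptions:

```python
# Sobel gradient magnitude on a small image (borders left at 0).
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(img, k, y, x):
    """3x3 correlation of kernel k centred at (y, x)."""
    return sum(k[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = conv3(img, SOBEL_X, y, x)
            gy = conv3(img, SOBEL_Y, y, x)
            out[y][x] = abs(gx) + abs(gy)  # approximation of sqrt(gx^2 + gy^2)
    return out

# Vertical step edge: left half 0, right half 100.
img = [[0, 0, 100, 100]] * 4
g = sobel_magnitude(img)
```

The two columns straddling the step respond strongly; gy is zero everywhere because the rows are identical.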
19 Gradient operators (cont.)
Diagonal edges
21 Gradient operators (cont.)
Image Sobel |gy|
Image smoothed
prior to edge
detection.
The wall bricks are
smoothed out.
23 Gradient operators (cont.)
Thresholded Sobel gradient amplitudes at 33% of max value.

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
25 The LoG operator (cont.)
∇²G(x, y) = ∂²G(x, y)/∂x² + ∂²G(x, y)/∂y²
          = ((x² + y² − 2σ²)/σ⁴) e^{−(x²+y²)/(2σ²)}
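The closed form above can be checked numerically by sampling it. A small sketch (this sign convention, with a negative central lobe and zero crossing at r² = 2σ², is one of two in common use — the operator is often negated):

```python
import math

def log_kernel_value(x, y, sigma):
    """Laplacian of Gaussian, nabla^2 G, sampled at (x, y)."""
    r2 = x * x + y * y
    return ((r2 - 2 * sigma ** 2) / sigma ** 4) * math.exp(-r2 / (2 * sigma ** 2))

sigma = 1.0
center = log_kernel_value(0, 0, sigma)            # most negative at the origin: -2/sigma^2
ring = log_kernel_value(math.sqrt(2), 0, sigma)   # zero crossing on the circle r^2 = 2 sigma^2
```

Beyond the zero-crossing circle the response turns positive and decays, giving the familiar "Mexican hat" profile.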
27 The LoG operator (cont.)
• Fundamental ideas
– The Gaussian blurs the image. It reduces the
intensity of structures at scales much smaller than σ.
– The Laplacian is isotropic and no other directional
mask is needed.
• The zero crossings of the operator indicate edge
pixels. They may be computed using a 3x3
window around a pixel and detecting whether two of its
opposite neighbors have different signs (and
their difference is significant compared to a
threshold).
(Figure: image; LoG; zero crossings; zero crossings with a threshold of 4% of the image max.)
29 The LoG operator (cont.)
σ² = (σ1²σ2² / (σ1² − σ2²)) ln(σ1²/σ2²)
(The σ of the LoG that best matches a DoG with standard deviations σ1 and σ2.)
31 The LoG operator (cont.)
33 Designing an optimal edge detector
• Criteria for an “optimal” edge detector [Canny 1986]:
– Good detection: the optimal detector must minimize the probability of
false positives (detecting spurious edges caused by noise), as well as
that of false negatives (missing real edges).
– Good localization: the edges detected must be as close as possible to
the true edges
– Single response: the detector must return one point only for each true
edge point; that is, minimize the number of local maxima around the true
edge.
35 Canny edge detector
• Generalization to 2D by applying the 1D operator in
the direction of the edge normal.
• However, the direction of the edge normal is
unknown beforehand and the 1D filter should be
applied in all possible directions.
• This task may be approximated by smoothing the
image with a circular 2D Gaussian, computing the
gradient magnitude to estimate edge strength, and
using the gradient angle to estimate the edge direction.
37 Canny edge detector (cont.)
original image
Gradient magnitude
39 Canny edge detector (cont.)
Interpolation provides
these values.
41 Predict the next edge point
Assume the marked
point is an edge point.
Then we construct the
tangent to the edge
curve (which is normal
to the gradient at that
point) and use this to
predict the next points
(here either r or s).
42 Hysteresis Thresholding
Reduces false edge pixels. It uses a low (TL) and a high
threshold (TH) to create two additional images from the
gradient magnitude image g (x,y):
gL(x, y) = g(x, y) if g(x, y) ≥ TL; 0 otherwise
gH(x, y) = g(x, y) if g(x, y) ≥ TH; 0 otherwise
43 Hysteresis Thresholding (cont.)
• After thresholding, all strong pixels are
assumed to be valid edge pixels. Depending
on the value of TH, the edges in gH(x,y)
typically have gaps.
• All pixels in gL(x,y) are considered valid edge
pixels if they are 8-connected to a valid edge
pixel in gH(x,y).
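The two-threshold rule can be sketched as a breadth-first growth: seed from strong pixels (≥ TH), then accept weak pixels (≥ TL) that are 8-connected to an already-accepted pixel. The toy gradient values and thresholds are illustrative assumptions:

```python
from collections import deque

def hysteresis(g, t_low, t_high):
    """Keep strong pixels plus weak pixels 8-connected to a strong one."""
    h, w = len(g), len(g[0])
    out = [[0] * w for _ in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if g[y][x] >= t_high)
    for y, x in q:                      # mark the strong seeds
        out[y][x] = 1
    while q:                            # grow along weak, connected pixels
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not out[ny][nx]
                        and g[ny][nx] >= t_low):
                    out[ny][nx] = 1
                    q.append((ny, nx))
    return out

g = [[0, 40, 90, 40, 0],
     [0,  0,  0,  0, 0],
     [0, 40,  0,  0, 0]]
edges = hysteresis(g, t_low=30, t_high=80)
```

The two weak pixels adjacent to the strong one survive; the isolated weak pixel in the last row is rejected.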
45 Canny vs LoG
Image Thresholded gradient
LoG Canny
46 Canny vs LoG
Image Thresholded gradient
LoG Canny
47 Edge Linking
• Even after hysteresis thresholding, the detected
pixels do not completely characterize edges,
due to occlusions, non-uniform
illumination and noise. Edge linking may be:
– Local: requiring knowledge of edge points in
a small neighborhood.
– Regional: requiring knowledge of edge
points on the boundary of a region.
– Global: the Hough transform, involving the
entire edge image.
49 Edge Linking by Local Processing (cont.)
A faster algorithm:
1. Compute the gradient magnitude and angle arrays
M(x,y) and a(x,y) of the input image f(x,y).
2. Form a binary image:
g(x, y) = 1 if M(x, y) > TM and a(x, y) ∈ [A − TA, A + TA]; 0 otherwise
3. Scan the rows of g(x,y) (for Α=0) and fill (set to 1) all
gaps (zeros) that do not exceed a specified length K.
4. To detect gaps in any other direction Α=θ, rotate g(x,y)
by θ and apply the horizontal scanning.
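Step 3 of the algorithm above can be sketched for a single row; the gap length K and the example row are illustrative assumptions:

```python
def fill_row_gaps(row, K):
    """Set to 1 every run of zeros of length <= K lying between two 1-pixels."""
    out = row[:]
    last_one = None
    for i, v in enumerate(row):
        if v == 1:
            if last_one is not None and 0 < i - last_one - 1 <= K:
                for j in range(last_one + 1, i):
                    out[j] = 1
            last_one = i
    return out

filled = fill_row_gaps([1, 0, 0, 1, 0, 0, 0, 0, 1], K=2)
```

The two-pixel gap is bridged; the four-pixel gap exceeds K and is left open. For other directions, the image is rotated and the same row scan applied, as in step 4.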
50 Edge Linking by Local Processing (cont.)
Image Gradient magnitude Horizontal linking
51 Edge Linking by Regional Processing
• Often, the location of regions of interest is known
and pixel membership to regions is available.
• Approximation of the region boundary by fitting a
polygon. Polygons are attractive because:
– They capture the essential shape.
– They keep the representation simple.
• Requirements
– Two starting points must be specified (e.g.
rightmost and leftmost points).
– The points must be ordered (e.g. clockwise).
52 Edge Linking by Regional Processing (cont.)
• Variations of the algorithm handle both open and
closed curves.
• If this is not provided, it may be determined by
distance criteria:
– Uniform separation between points indicates a closed
curve.
– A relatively large distance between consecutive
points, with respect to the distances between other
points, indicates an open curve.
• We present here the basic mechanism for
polygon fitting.
53 Edge Linking by Regional Processing (cont.)
– Given the end points A and
B, compute the straight line
AB.
– Compute the perpendicular
distance from all other
points to this line.
– If this distance exceeds a
threshold, the
corresponding point C
having the maximum
distance from AB is
declared a vertex.
– Compute lines AC and CB and continue.
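The recursion above (essentially the Douglas–Peucker scheme) can be sketched as follows; the point set and threshold value are illustrative assumptions:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(y2 - y1, x2 - x1)

def fit_polygon(points, threshold):
    """Return indices of the fitted polygon's vertices for an open curve."""
    def recurse(i, j):
        # Find the point farthest from the segment points[i]-points[j].
        best, best_d = None, threshold
        for k in range(i + 1, j):
            d = point_line_distance(points[k], points[i], points[j])
            if d > best_d:
                best, best_d = k, d
        if best is None:          # nothing exceeds the threshold: keep segment
            return [i]
        return recurse(i, best) + recurse(best, j)   # declare a vertex, split
    return recurse(0, len(points) - 1) + [len(points) - 1]

pts = [(0, 0), (2, 0.1), (4, 3), (6, 0.1), (8, 0)]
vertices = fit_polygon(pts, threshold=1.5)
```

The apex at (4, 3) lies 3 units from the baseline and becomes a vertex; the near-collinear points on either side are absorbed.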
54 Edge Linking by Regional Processing (cont.)
Regional processing for edge linking is used in combination
with other methods in a chain of processing.
55 Hough transform
• An early type of voting scheme.
• Each line is defined by two points (xi,yi) and (xj,yj).
• Each point (xi,yi) maps to a line in the parameter space (a,b)
because it belongs to an infinite number of lines yi=axi+b.
• All the points (x,y) on a line y=a’x+b’ have lines in
parameter space that intersect at (a’,b’).
Accumulator
array
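A minimal accumulator sketch. It uses the normal parameterisation ρ = x·cos θ + y·sin θ instead of (a, b), since that form avoids the unbounded slope of vertical lines; the angular resolution and toy points are assumptions:

```python
import math

def hough_accumulate(points, n_theta=180):
    """Vote in an (rho, theta-index) accumulator: one vote per point per angle."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

points = [(x, 3) for x in range(5)]   # five collinear points on the line y = 3
acc = hough_accumulate(points)
peak = max(acc, key=acc.get)          # the strongest cell collects all five votes
```

The cell (ρ=3, θ=90°) receives one vote from every point, so the peak count equals the number of collinear points.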
57 Hough transform (cont.)
• We only know the orientation of the runway (around 0 deg) and the
observer’s position relative to it (GPS, flight charts etc.).
• We look for the peak at the accumulator array around 0 deg and join
gaps below 20% of the image height.
• Applications in autonomous navigation.
59 Practical details
• Try to get rid of irrelevant features
– Take only edge points with significant gradient
magnitude.
• Choose a good grid / discretization
– Too coarse: large votes obtained when too many
different lines correspond to a single bucket.
– Too fine: miss lines because some points that are not
exactly collinear cast votes for different buckets.
• Increment also neighboring bins (smoothing in
accumulator array).
60 Thresholding
• Image partitioning into regions directly
from their intensity values.
Single threshold T:
g(x, y) = 1 if f(x, y) > T; 0 if f(x, y) ≤ T
Two thresholds T1 < T2:
g(x, y) = 0 if f(x, y) ≤ T1; 1 if T1 < f(x, y) ≤ T2; 2 if f(x, y) > T2
61 Noise in Thresholding
Difficulty in determining the threshold due to noise
62 Illumination in Thresholding
Difficulty in determining the threshold due to non-uniform illumination
63 Basic Global Thresholding
• Algorithm
– Select initial threshold estimate T.
– Segment the image using T
• Region G1 (values > T) and region G2 (values ≤ T).
– Compute the average intensities m1 and m2 of
regions G1 and G2 respectively.
– Set T=(m1+m2)/2
– Repeat until the change of T in successive
iterations is less than ΔT.
T=125
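The iterative algorithm above in a minimal sketch. The pixel list and convergence tolerance are illustrative; starting from the global mean guarantees both groups are non-empty for non-constant data:

```python
def global_threshold(pixels, delta=0.5):
    """Iterate T = (m1 + m2) / 2 until it changes by less than delta."""
    t = sum(pixels) / len(pixels)          # initial estimate: the global mean
    while True:
        g1 = [p for p in pixels if p > t]  # brighter group
        g2 = [p for p in pixels if p <= t] # darker group
        t_new = (sum(g1) / len(g1) + sum(g2) / len(g2)) / 2
        if abs(t_new - t) < delta:
            return t_new
        t = t_new

pixels = [10, 12, 11, 9, 200, 198, 202, 205]
t = global_threshold(pixels)
```

For this clearly bimodal data the threshold settles midway between the two cluster means, separating the groups exactly.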
65 Optimum Global Thresholding using Otsu’s Method
• The method is based on statistical decision theory.
• Minimization of the average error of assignment of pixels
to two (or more) classes.
• The Bayes decision rule has a nice closed-form solution
to this problem, provided we know:
– The pdf of each class.
– The probability of class occurrence.
• Pdf estimation is not trivial and assumptions are made
(Gaussian pdfs).
• Otsu (1979) proposed an attractive alternative maximizing
the between-class variance.
– Only the histogram of the image is used.
67 Otsu’s Method (cont.)
• The probabilities of classes C1 and C2:
P1(k) = Σ_{i=0}^{k} pi,   P2(k) = Σ_{i=k+1}^{L−1} pi = 1 − P1(k)
69 Otsu’s Method (cont.)
• Between class variance:
σB²(k) = P1(k)[m1(k) − mG]² + P2(k)[m2(k) − mG]²
       = [mG·P1(k) − m(k)]² / (P1(k)[1 − P1(k)])
Image Histogram
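Maximizing the between-class variance needs only a single pass over the normalized histogram. A sketch of Otsu's threshold selection (the toy bimodal 8-bin histogram is an assumption):

```python
def otsu_threshold(hist):
    """Return the bin k maximizing the between-class variance sigma_B^2(k)."""
    total = sum(hist)
    p = [h / total for h in hist]                 # normalized histogram
    mG = sum(i * pi for i, pi in enumerate(p))    # global mean
    best_k, best_var = 0, -1.0
    P1, m = 0.0, 0.0                              # cumulative prob. and cumulative mean
    for k in range(len(p) - 1):
        P1 += p[k]
        m += k * p[k]
        if P1 <= 0.0 or P1 >= 1.0:                # variance undefined for empty class
            continue
        var_b = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
        if var_b > best_var:
            best_k, best_var = k, var_b
    return best_k

# Bimodal histogram: dark mode around bin 1, bright mode around bin 6.
hist = [5, 20, 5, 0, 0, 4, 15, 4]
k = otsu_threshold(hist)
```

The chosen k lands in the valley between the two modes, so class C1 = bins 0..k holds the dark mode and C2 the bright one.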
71 Otsu’s Method (cont.)
Smoothing helps thresholding (may create histogram valley)
73 Otsu’s Method (cont.)
• Dealing with small structures
– The histogram is unbalanced
• The background dominates
• No valleys indicating the small structure.
– Idea: consider only the pixels around edges
• Both structure of interest and background are present equally.
• More balanced histograms
– Use gradient magnitude or zero crossings of
the Laplacian.
• Threshold it at a percentage of the maximum value.
• Use it as a mask on the original image.
• Only pixels around edges will be employed.
75 Otsu’s Method (cont.)
Image Histogram Otsu on original histogram
We wish to extract
the bright spots
(nuclei) of the cells
77 Otsu’s Method (cont.)
Image of iceberg segmented into three regions.
k1=80, k2=177
78 Variable Thresholding
Image partitioning.
The image is sub-divided and the method is applied to every sub-image.
Useful for illumination non-uniformity correction.
Shaded image Histogram Simple thresholding
79 Variable Thresholding (cont.)
Use of local image properties.
• Compute a threshold for every single pixel in the image
based on its neighborhood (mxy, σxy,…).
More accurate
nuclei extraction.
Subdivision
81 Variable Thresholding (cont.)
Moving averages.
• Generally used along lines, columns or in zigzag.
• Useful in document image processing.
• Let zk+1 be the intensity of a pixel in the scanning sequence
at step k+1. The moving average (mean intensity) at this
point is:
m(k+1) = (1/n) Σ_{i=k+2−n}^{k+1} zi = m(k) + (1/n)(z_{k+1} − z_{k+1−n})
where n is the number of points used in the average.
• Segmentation is then performed at each point comparing
the pixel value to a fraction of the moving average.
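A sketch of moving-average thresholding along one scan line. Here the output 1 marks dark strokes (z ≤ b·m); the window size n, fraction b, the polarity, and the inclusion of the current sample in the window are all illustrative assumptions:

```python
from collections import deque

def moving_average_threshold(line, n=4, b=0.5):
    """Flag each sample that falls at or below fraction b of the running mean."""
    window = deque(maxlen=n)     # keeps only the n most recent samples
    out = []
    for z in line:
        window.append(z)
        m = sum(window) / len(window)
        out.append(1 if z <= b * m else 0)
    return out

# Dark text strokes (low values) on bright paper, scanned left to right.
line = [200, 205, 195, 30, 25, 210, 200]
mask = moving_average_threshold(line, n=4, b=0.5)
```

The running mean stays high through the dark strokes, so the two low samples are flagged while the bright paper is not.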
83 Variable Thresholding (cont.)
84 Region Growing
• Start with seed points S(x,y) and grow to larger
regions satisfying a predicate.
• Needs a stopping rule.
• Algorithm
– Find all connected components in S(x,y) and erode
them to 1 pixel.
– Form image fq(x,y)=1 if f (x,y) satisfies the predicate.
– Form image g(x,y)=1 for all pixels in fq(x,y) that are
8-connected to any seed point in S(x,y).
– Label each connected component in g(x,y) with a
different label.
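The algorithm above, reduced to a single seed, can be sketched as a breadth-first growth under the |f(p) − f(seed)| ≤ T predicate from the text; the toy image and T are assumptions:

```python
from collections import deque

def region_grow(img, seed, T):
    """Grow an 8-connected region from seed, accepting pixels within T of it."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    region = {seed}
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                        and abs(img[ny][nx] - img[sy][sx]) <= T):
                    region.add((ny, nx))
                    q.append((ny, nx))
    return region

img = [[10, 12, 90],
       [11, 13, 95],
       [10, 92, 96]]
r = region_grow(img, seed=(0, 0), T=10)
```

The dark pixels form one connected region; the bright pixels fail the predicate and stay outside it.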
85 Region Growing (cont.)
The weld is very bright. The predicate used for region growing is to
compare the absolute difference between a seed point and a pixel
to a threshold. If the difference is below the threshold, we accept
the pixel as part of the crack.
87 Region Growing (cont.)
89 Region Splitting and Merging (cont.)
• Algorithm
– Split into four disjoint quadrants any region Ri for
which Q(Ri)=FALSE.
– When no further splitting is possible, merge any
adjacent regions Ri and Rk for which Q(Ri ∪ Rk)=TRUE.
– Stop when no further merging is possible.
• A maximum quadregion size is specified beyond which no
further splitting is carried out.
• Many variations have been proposed.
– Merge any adjacent regions if each one satisfies the
predicate individually (even if their union does not
satisfy it).
90 Region Splitting and Merging (cont.)
Quadregions resulted from splitting.
Merging examples:
• R2 may be merged with R41.
• R41 may be merged with R42.
91 Region Splitting and Merging (cont.)
• Image of the Cygnus Loop. We want to
segment the outer ring of less dense
matter.
• Characteristics of the region of interest:
• Standard deviation greater than the
background (which is near zero) and
the central region (which is
smoother).
• Mean value greater than the mean of
background and less than the mean
of the central region.
• Predicate:
Q = TRUE if σ > a AND 0 < m < b; FALSE otherwise
92 Region Splitting and Merging (cont.)
Varying the size of the smallest allowed quadregion.
• Larger quadregions lead to block-like segmentation (32x32).
• Smaller quadregions lead to small black regions (8x8).
• 16x16 seems to be the best result.
93 Morphological Watersheds
• Visualize an image topographically in 3D
– The two spatial coordinates and the intensity (relief
representation).
• Three types of points
– Points belonging to a regional minimum.
– Points at which a drop of water would certainly fall to
a regional minimum (catchment basin).
– Points at which the water would be equally likely to
fall to more than one regional minimum (crest lines
or watershed lines).
• Objective: find the watershed lines.
95 Morphological Watersheds (cont.)
97 Morphological Watersheds (cont.)
• Before flooding.
• To prevent water from spilling through the image
borders, we consider that the image is surrounded
by dams of height greater than the maximum image
intensity.
99 Morphological Watersheds (cont.)
101 Morphological Watersheds (cont.)
Short dam
• Further flooding.
• The water from the left basin overflowed into the
right basin.
• A short dam is constructed to prevent water from
merging.
• Further flooding.
• The effect is more pronounced.
• The first dam is now longer.
• New dams are created.
103 Morphological Watersheds (cont.)
Final watershed lines
superimposed on the
image.
105 Morphological Watersheds (cont.)
q
Cn-1(M1) Cn-1(M2)
107 Morphological Watersheds (cont.)
Conditions
1. Center of SE in q.
2. No dilation if merging.
109 Morphological Watersheds (cont.)
Image Gradient magnitude
• A common application
is the extraction of
nearly uniform, blob-
like objects from their
background.
• For this reason it is
generally applied to the
gradient of the image
and the catchment
basins correspond to
the blob like objects.
Watersheds Watersheds
on the image
111 Morphological Watersheds (cont.)
• Markers (connected components):
– internal, associated with the objects
– external, associated with the background.
• Here the problem is the large number of local
minima.
• Smoothing may eliminate them.
• Define an internal marker (after smoothing):
• Region surrounded by points of higher
altitude.
– They form connected components.
– All points in the connected component have the
same intensity.
113 Morphological Watersheds (cont.)
• Each region defined by the external marker has a single internal marker
and part of the background.
• The problem is to segment each of these regions into two segments: a
single object and background.
• The algorithms we saw in this lecture may be used (including watersheds
applied to each individual region).
Final segmentation.
115 Morphological Watersheds (cont.)
117 Basic spatial motion segmentation
• Difference image and comparison with
respect to a threshold:
dij(x, y) = 1 if |f(x, y, ti) − f(x, y, tj)| > T; 0 otherwise
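The difference image in a minimal sketch (the two toy frames and the threshold are illustrative assumptions):

```python
def difference_image(f_i, f_j, T):
    """Binary map: 1 where the two frames differ by more than T."""
    return [[1 if abs(a - b) > T else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(f_i, f_j)]

f1 = [[10, 10, 10],
      [10, 10, 10]]
f2 = [[10, 80, 10],
      [10, 10, 10]]   # one pixel changed: a small object moved in
d = difference_image(f1, f2, T=30)
```

Only the changed pixel is flagged; in practice a small T suppresses noise but misses slow intensity changes.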
119 Accumulative differences (cont.)
• Absolute ADI:
Ak(x, y) = Ak−1(x, y) + 1 if |R(x, y) − f(x, y, tk)| > T; Ak−1(x, y) otherwise
• Positive ADI:
Pk(x, y) = Pk−1(x, y) + 1 if R(x, y) − f(x, y, tk) > T; Pk−1(x, y) otherwise
• Negative ADI:
Nk(x, y) = Nk−1(x, y) + 1 if R(x, y) − f(x, y, tk) < −T; Nk−1(x, y) otherwise
• The nonzero area of the positive ADI gives the size of the object.
• The location of the positive ADI gives the location of the object in the
reference frame.
• The direction and speed may be obtained from the absolute and
negative ADIs.
• The absolute ADI contains both the positive and negative ADIs.
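The positive-ADI update can be sketched against a fixed reference frame; the frames, reference values and threshold are illustrative assumptions:

```python
def update_positive_adi(P_prev, ref, frame, T):
    """Increment the counter wherever reference minus frame exceeds T."""
    return [[p + 1 if (r - f) > T else p
             for p, r, f in zip(pr, rr, fr)]
            for pr, rr, fr in zip(P_prev, ref, frame)]

ref = [[100, 100, 100]]   # bright, static reference frame
P = [[0, 0, 0]]
# A dark object covers pixel 1, then moves to pixel 2 in the next frame.
for frame in ([[100, 10, 100]], [[100, 100, 10]]):
    P = update_positive_adi(P, ref, frame, T=50)
```

Each pixel counts how many frames the moving object darkened it, so the nonzero area and its drift encode the object's size and motion.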
121 Accumulative differences (cont.)
• To establish a reference image in a nonstationary background:
– Consider the first image as the reference image.
– When a nonstationary component has moved out of
its position in the reference frame, the corresponding
background in the current frame may be duplicated
in the reference frame. This is determined by the
positive ADI:
• When the moving object is displaced completely with
respect to the reference frame the positive ADI stops
increasing.
123 Frequency domain techniques
• Consider a sequence f (x,y,t), t=0,1,…,K-1 of size M x N.
• All frames have a homogeneous zero background
except for a single-pixel object with intensity 1 moving
with constant velocity.
• At time t=0, the object is at (x’, y’) and the image plane is
projected onto the vertical (x) axis. This results in a 1D
signal which is zero except at x’.
• If we multiply the 1D signal by exp[j2πα1xΔt], for
x=0,1,…,M-1 and sum the results we obtain the single
term exp[j2πα1x’Δt].
• In frame t=1, suppose that the object moved to (x’+1, y’),
that is, it has moved 1 pixel parallel to the x-axis. The
same procedure yields exp[j2πα1(x’+1)Δt].
124 Frequency domain techniques (cont.)
• Applying Euler’s formula, for t=0,1,…,K-1:
e^{j2πα1(x’+t)Δt} = cos[2πα1(x’+t)Δt] + j sin[2πα1(x’+t)Δt]
125 Frequency domain techniques (cont.)
• For a sequence of K images of size M x N, the
sum of the weighted projections onto the x-axis
at an integer instant of time is:
gx(t, α1) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y, t) e^{j2πα1xΔt},  t = 0, 1, ..., K−1
126 Frequency domain techniques (cont.)
• The 1D DFT of the above signals are:
Gx(u1, α1) = Σ_{t=0}^{K−1} gx(t, α1) e^{−j2πu1t/K},  u1 = 0, 1, ..., K−1
Gy(u2, α2) = Σ_{t=0}^{K−1} gy(t, α2) e^{−j2πu2t/K},  u2 = 0, 1, ..., K−1
127 Frequency domain techniques (cont.)
• The unit of velocity is pixels per total frame
time. For example, V1=10 is interpreted as a
motion of 10 pixels over the K frames.
• For frames taken uniformly, the actual physical
speed depends on the frame rate and the
distance between pixels. Thus, for V1=10, K=30,
if the frame rate is two images/sec and the
distance between pixels is 0.5 m, the speed is:
– V1=(10 pixels)(0.5 m/pixel)(2 frames/sec) / (30 frames) =
1/3 m/sec.
128 Frequency domain techniques (cont.)
• The sign of the x-component of the velocity is
obtained by using Fourier properties of sinusoids:
S1x = d²Re{gx(t, α1)}/dt² |_{t=n},  S2x = d²Im{gx(t, α1)}/dt² |_{t=n}