CV Lecture 7
CSC-455
Muhammad Najam Dar
Three Lectures
Similarity: thresholding, region growing, and region splitting/merging.
g(x, y) = 1 if f(x, y) > T   (object)
          0 if f(x, y) ≤ T   (background)
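As a minimal sketch (the array contents are illustrative, not from the slides), the rule above in NumPy:

```python
import numpy as np

def threshold(f, T):
    """Binarize image f: 1 where f(x, y) > T (object), 0 otherwise (background)."""
    return (f > T).astype(np.uint8)

f = np.array([[ 10,  50, 200],
              [120, 180,  30]])
g = threshold(f, T=100)
# g: [[0, 0, 1], [1, 1, 0]]
```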
Global Thresholding
Local/Adaptive Thresholding
Global Thresholding
• Single threshold value for the entire image
• T may be chosen by hand (fixed) or automatically from the intensity histogram
Global Thresholding
• Iterative threshold selection: estimate an initial T (e.g. the mean intensity), split the pixels into two groups at T, recompute T as the average of the two group means, and repeat until T stops changing.
• Multilevel thresholding: use several thresholds to separate more than two classes.
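The iterative selection procedure can be sketched as follows; the toy bimodal image and the stopping tolerance `eps` are assumptions for illustration:

```python
import numpy as np

def iterative_threshold(f, eps=0.5):
    """Basic iterative global thresholding:
    start from the mean intensity, then repeat T = (m1 + m2) / 2."""
    T = f.mean()                        # initial estimate of T
    while True:
        g1 = f[f > T]                   # pixels above the threshold
        g2 = f[f <= T]                  # pixels at or below the threshold
        T_new = 0.5 * (g1.mean() + g2.mean())
        if abs(T_new - T) < eps:        # stop when T no longer changes
            return T_new
        T = T_new

# Toy bimodal image: dark background (~20), bright object (~200)
f = np.array([20, 25, 22, 18, 200, 205, 198, 202], dtype=float)
T = iterative_threshold(f)
```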
Thresholding
• Non-uniform illumination: a single global threshold fails when the illumination varies across the image.
Adaptive Thresholding
• Threshold is a function of the neighboring pixels, e.g.
  T = mean of the local neighborhood
  T = median of the local neighborhood
  T = (max + min) / 2
Adaptive Thresholding (Niblack)
T = m + k·s
where m = local mean, s = local standard deviation, and k = a constant (Niblack's method)
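A direct, unoptimized sketch of Niblack's rule, assuming a square w×w window and the common choice of a small negative k (both values are assumptions, not fixed by the slides):

```python
import numpy as np

def niblack_threshold(f, w=15, k=-0.2):
    """Niblack local threshold T = m + k*s, where m and s are the mean and
    standard deviation of the (w x w) neighborhood around each pixel."""
    f = f.astype(float)
    pad = w // 2
    fp = np.pad(f, pad, mode='reflect')   # reflect borders so every pixel has a full window
    H, W = f.shape
    T = np.empty_like(f)
    for i in range(H):
        for j in range(W):
            win = fp[i:i + w, j:j + w]
            T[i, j] = win.mean() + k * win.std()
    return f > T                          # True = foreground
```

An integral-image (summed-area table) version avoids the per-pixel window scan in practice.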
Document Binarization
Region-Based Segmentation
• A segmentation into regions Ri must satisfy the predicate P(Ri) = TRUE for every region.
• Region Splitting: split any region for which P(Ri) = FALSE.
• Region Splitting/Merging: additionally merge adjacent regions Ri, Rj for which P(Ri ∪ Rj) = TRUE.
• Stop when no further split or merge is possible.
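The splitting step can be sketched as a quadtree recursion; the intensity-range predicate and the power-of-two image size are assumptions for illustration:

```python
import numpy as np

def split(region, f, P, out):
    """Recursively split region (r0, c0, size) while predicate P is FALSE."""
    r0, c0, s = region
    block = f[r0:r0 + s, c0:c0 + s]
    if P(block) or s == 1:
        out.append(region)            # region is homogeneous: keep it
        return
    h = s // 2                        # otherwise split into four quadrants
    for dr in (0, h):
        for dc in (0, h):
            split((r0 + dr, c0 + dc, h), f, P, out)

# Predicate: region is "uniform" if its intensity range is small
P = lambda b: b.max() - b.min() <= 10

f = np.zeros((8, 8))
f[:4, :4] = 200                       # bright top-left quadrant
regions = []
split((0, 0, 8), f, P, regions)       # splits once into four uniform 4x4 regions
```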
Region-Based Segmentation
Example
Source: K. Grauman
Segmentation as clustering
Source: K. Grauman
What is Cluster Analysis?
• Cluster analysis
– Finding similarities between data according to the
characteristics found in the data and grouping similar
data objects into clusters
Similarity and Dissimilarity Between Objects
• Minkowski distance:
  d(i, j) = (|xi1 − xj1|^q + |xi2 − xj2|^q + … + |xip − xjp|^q)^(1/q)
  where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q is a positive integer
• If q = 1, d is Manhattan distance:
  d(i, j) = |xi1 − xj1| + |xi2 − xj2| + … + |xip − xjp|
Similarity and Dissimilarity Between Objects
• If q = 2, d is Euclidean distance:
  d(i, j) = √(|xi1 − xj1|² + |xi2 − xj2|² + … + |xip − xjp|²)
• Also, one can use weighted distance, parametric Pearson correlation, or other dissimilarity measures
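Both special cases follow from one function; a minimal sketch (the example vectors are illustrative):

```python
def minkowski(x, y, q):
    """d(i, j) = (sum_k |x_k - y_k|^q)^(1/q) for two p-dimensional objects."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

x, y = (1, 2, 3), (4, 6, 3)
d1 = minkowski(x, y, 1)   # Manhattan: |1-4| + |2-6| + |3-3| = 7
d2 = minkowski(x, y, 2)   # Euclidean: sqrt(9 + 16 + 0) = 5
```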
Clustering Algorithms: Basic Concept
D. Comaniciu and P. Meer, "Robust Analysis of Feature Spaces: Color Image Segmentation", 1997.
K-Means Clustering
Example
[Figure: k-means on 2-D points with K = 2 — arbitrarily choose K objects as the initial cluster centers; assign each object to the most similar center; update the cluster means; reassign and repeat until the means stop changing.]
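The assign/update loop can be sketched in plain Python; the toy points and the random seed are illustrative, not from the slides:

```python
import random

def kmeans(points, k, iters=100):
    """Plain k-means: assign each object to the most similar (closest) center,
    then update each cluster mean; repeat until the means stop moving."""
    centers = random.sample(points, k)     # arbitrarily choose k initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                   # assignment step
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # update step: each center becomes the mean of its cluster
        new = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:                 # converged
            return centers, clusters
        centers = new
    return centers, clusters

random.seed(0)
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(pts, k=2)
```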
Example
• Centroid 3: assigned values {2, 3, 4, 7, 9} → new centroid (2+3+4+7+9)/5 = 5
• Centroid 5: assigned values {2, 3, 4, 7, 9} → new centroid 5 (converged)
Hierarchical clustering
• Agglomerative (Bottom-up)
• At each iteration, merge the two closest clusters.
[Figure: iterations 1–5 merge the closest pair of points/clusters in turn; finally the process stops with k = 3 clusters left.]
• Divisive (Top-down)
– Start at the top with all patterns in one cluster
– The cluster is split using a flat clustering algorithm
– This procedure is applied recursively until each
pattern is in its own singleton cluster
Hierarchical Clustering: The Algorithm
[Figure: six points A–F in feature space, merged in proximity order into a hierarchy over leaves A B C D E F.]
Hierarchical clustering
• Produces a set of nested clusters organized as a
hierarchical tree
• Can be visualized as a dendrogram
– A tree like diagram that records the sequences of
merges or splits
[Figure: dendrogram — merge heights on the vertical axis (0 to 0.2), leaf order 1 3 2 5 4 6.]
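The sequence of merges a dendrogram records can be sketched with single-link (MIN) distance on 1-D points; the data values are illustrative:

```python
def single_link_merges(points):
    """Agglomerative clustering with MIN (single-link) distance:
    record the sequence of merges and their heights, as in a dendrogram."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest single-link distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((sorted(clusters[i] + clusters[j]), d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

merges = single_link_merges([1.0, 1.2, 3.0, 3.1, 9.0])
# merge order: {3.0, 3.1} at 0.1, {1.0, 1.2} at 0.2, then the two groups, then 9.0
```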
Strengths of Hierarchical Clustering
How to Define Inter-Cluster Similarity
[Figure: candidate clusterings of points p1–p5 with their proximity matrix, illustrating each similarity definition below.]
• MIN
• MAX
• Group Average
• Distance Between Centroids
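The four inter-cluster similarity definitions can be sketched for point clusters with Euclidean point distance; the example clusters are illustrative:

```python
def min_link(c1, c2, d):
    """MIN (single link): distance between the closest pair of points."""
    return min(d(a, b) for a in c1 for b in c2)

def max_link(c1, c2, d):
    """MAX (complete link): distance between the farthest pair of points."""
    return max(d(a, b) for a in c1 for b in c2)

def group_average(c1, c2, d):
    """Average of all pairwise inter-cluster distances."""
    return sum(d(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

def centroid_link(c1, c2, d):
    """Distance between the cluster centroids."""
    m1 = tuple(sum(x) / len(c1) for x in zip(*c1))
    m2 = tuple(sum(x) / len(c2) for x in zip(*c2))
    return d(m1, m2)

d = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5  # Euclidean
c1 = [(0, 0), (0, 2)]
c2 = [(3, 0), (5, 0)]
```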
Example
• Data: 100, 200, 500, 900, 1100
SOLUTION:
• The closest two values are 100 and 200
  => the centroid of these two values is 150.
• Now we are clustering the values: 150, 500, 900, 1100
• The closest two values are 900 and 1100
  => the centroid of these two values is 1000.
• The remaining values to be joined are: 150, 500, 1000.
• The closest two values are 150 and 500
  => the centroid of these two values is 325.
• Finally, the two resulting subtrees are joined in the root of the tree.
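A short sketch that reproduces the merge order of the worked example, assuming "closest" means the smallest gap between current centroids:

```python
def centroid_merge_trace(values):
    """Repeatedly merge the two closest values, replace them by their
    centroid (mean), and record each new centroid."""
    vals = sorted(values)
    trace = []
    while len(vals) > 1:
        # closest pair is always adjacent, since vals stays sorted
        i = min(range(len(vals) - 1), key=lambda k: vals[k + 1] - vals[k])
        c = (vals[i] + vals[i + 1]) / 2
        trace.append(c)
        vals[i:i + 2] = [c]
    return trace

trace = centroid_merge_trace([100, 200, 500, 900, 1100])
# merges: (100,200)->150, (900,1100)->1000, (150,500)->325, root (325,1000)->662.5
```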
An example:
Two hierarchical clusters of the expression values of a single
gene measured in 5 experiments.
K-medoids: with medoids c1 = (3, 4) and c2 = (7, 4), assign each remaining point to the closer medoid (Manhattan distance):

Point   | d to c1 (3,4) | d to c2 (7,4) | assigned to
(2, 6)  | 3             | 7             | c1
(3, 8)  | 4             | 8             | c1
(4, 7)  | 4             | 6             | c1
(6, 2)  | 5             | 3             | c2
(6, 4)  | 3             | 1             | c2
(7, 3)  | 5             | 1             | c2
(8, 5)  | 6             | 2             | c2
(7, 6)  | 6             | 2             | c2

Cluster1 = {(3,4), (2,6), (3,8), (4,7)}
Cluster2 = {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)}
• Select one of the non-medoids O′. Let us assume O′ = (7, 3).
• Now the medoids are c1 = (3, 4) and O′ = (7, 3).
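The assignment step of this example can be checked in a few lines (Manhattan distance, since the costs in the table are |Δx| + |Δy|):

```python
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def assign(points, medoids):
    """Assign each non-medoid point to its closest medoid (Manhattan
    distance) and return the clusters and the total assignment cost."""
    clusters = {m: [m] for m in medoids}
    cost = 0
    for p in points:
        if p in medoids:
            continue
        nearest = min(medoids, key=lambda m: manhattan(p, m))
        clusters[nearest].append(p)
        cost += manhattan(p, nearest)
    return clusters, cost

pts = [(3, 4), (2, 6), (3, 8), (4, 7), (7, 4),
       (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)]
clusters, cost = assign(pts, [(3, 4), (7, 4)])
```

Recomputing the cost with O′ in place of a medoid, and keeping the swap only if the cost drops, gives the PAM update step.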
http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html
Mean Shift
[Figure: at each step, compute the center of mass of the points inside the search window; the mean shift vector points from the window center to that center of mass; translate the window along it and repeat until convergence.]
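A 1-D sketch of the procedure with a flat kernel; the data and window radius are illustrative:

```python
def mean_shift(x, data, radius=2.0, iters=50):
    """Mean shift with a flat kernel: repeatedly move x to the center of
    mass of the data points inside the window; the displacement is the
    mean shift vector, and x converges to a mode of the density."""
    for _ in range(iters):
        window = [p for p in data if abs(p - x) <= radius]
        if not window:                    # no points in the window: stay put
            return x
        m = sum(window) / len(window)     # center of mass of the window
        if abs(m - x) < 1e-6:             # mean shift vector ~ 0: at a mode
            return m
        x = m
    return x

data = [1.0, 1.1, 1.2, 0.9, 8.0, 8.1, 7.9]
mode = mean_shift(0.0, data, radius=2.0)   # converges to the mode near 1
```

Starting points near 8 converge to the other mode, which is how mean shift finds a variable number of modes from the data alone.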
http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html
More results
Mean shift pros and cons
• Pros
– Does not assume spherical clusters
– Just a single parameter (window size)
– Finds variable number of modes
– Robust to outliers
• Cons
– Output depends on window size
– Computationally expensive
– Does not scale well with dimension of feature
space
References
• Some slide material has been taken from Dr. M. Usman Akram's Computer Vision lectures.
• CSCI 1430: Introduction to Computer Vision, James Tompkin.
• "Statistical Pattern Recognition: A Review", A. K. Jain et al., IEEE TPAMI (22), 2000.
• Pattern Recognition and Analysis course, A. K. Jain, MSU.
• "Pattern Classification", Duda et al., John Wiley & Sons.
• "Digital Image Processing", Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002.
• "Machine Vision: Automated Visual Inspection and Robot Vision", David Vernon, Prentice Hall, 1991.
• www.eu.aibo.com/
• "Advances in Human Computer Interaction", Shane Pinder, InTech, Austria, October 2008.
• "Computer Vision: A Modern Approach", Forsyth & Ponce.
• http://www.cs.cmu.edu/~16385/s18/