Clustering
Clustering
• Hypothesis generation
• Hypothesis testing
However...
We may need more information to cluster well
• Many different distributions can share the same mean and covariance matrix
• ...and how many clusters should there be?
FIGURE 10.6. These four data sets have identical statistics up to second order—that is, the same mean μ and covariance Σ. In such cases it is important to include in the model more parameters to represent the structure more completely. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Steps for Clustering
1. Feature Selection
2. Choice of Proximity (Similarity/Distance) Measure
3. Choice of Clustering Criterion
4. Clustering Algorithm
5. Validation of Results
6. Interpretation of Results
FIGURE 10.7. The distance threshold affects the number and size of clusters in similarity-based clustering methods. For three different values of distance d0, lines are drawn between points closer than d0—the smaller the value of d0, the smaller and more numerous the clusters. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
FIGURE 10.8. Scaling axes affects the clusters in a minimum-distance cluster method. The original data and minimum-distance clusters are shown in the upper left; points in one cluster are shown in red, while the others are shown in gray. When the vertical axis is expanded by a factor of 2.0 and the horizontal axis shrunk by a factor of 0.5, the clustering is altered (as shown at the right). Alternatively, if the vertical axis is shrunk by a factor of 0.5 and the horizontal axis is expanded by a factor of 2.0, smaller and more numerous clusters result (shown at the bottom). In both these scaled cases, the assignment of points to clusters differs from that in the original space. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
FIGURE 10.9. If the data fall into well-separated clusters (left), normalization by scaling
for unit variance for the full data may reduce the separation, and hence be undesirable
(right). Such a normalization may in fact be appropriate if the full data set arises from a
single fundamental process (with noise), but inappropriate if there are several different
processes, as shown here. From: Richard O. Duda, Peter E. Hart, and David G. Stork,
Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Other Similarity Measures
Minkowski Metric (Dissimilarity)
Change the exponent q (q = 2 gives the Euclidean distance, q = 1 the city-block distance):
$$d(x, x') = \left( \sum_{k=1}^{d} |x_k - x'_k|^q \right)^{1/q}$$
Cosine Similarity
$$s(x, x') = \frac{x^T x'}{\|x\|\,\|x'\|}$$
If the features are binary-valued, $x^T x'$ counts the attributes shared by $x$ and $x'$; related measures include
$$s(x, x') = \frac{x^T x'}{d} \qquad \text{and} \qquad s(x, x') = \frac{x^T x'}{x^T x + x'^T x' - x^T x'} \quad \text{(Tanimoto coefficient)}$$
Example: two related videos can be compared by representing their tag sets as binary vectors A and B (e.g., B = {1, 1, 0, 1}) and computing the dot product A · B.
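To make these measures concrete, here is a minimal Python/NumPy sketch (function names and example vectors are ours, not from the text) of the Minkowski metric, cosine similarity, and the Tanimoto coefficient for binary vectors:

```python
import numpy as np

def minkowski(x, xp, q=2):
    """Minkowski dissimilarity; q=2 is Euclidean, q=1 is city-block."""
    return np.sum(np.abs(x - xp) ** q) ** (1.0 / q)

def cosine_similarity(x, xp):
    """Cosine of the angle between the two feature vectors."""
    return float(x @ xp) / (np.linalg.norm(x) * np.linalg.norm(xp))

def tanimoto(x, xp):
    """Tanimoto coefficient for binary (0/1) feature vectors."""
    shared = float(x @ xp)
    return shared / (x @ x + xp @ xp - shared)

# Illustrative binary tag vectors
a = np.array([1, 1, 0, 0])
b = np.array([1, 1, 0, 1])
print(minkowski(a, b, q=1), cosine_similarity(a, b), tanimoto(a, b))
```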
Additional Similarity Metrics
Theodoridis Text
Defines a large number of alternative
distance metrics, including:
• Hamming distance: number of locations where
two vectors (usually bit vectors) disagree
• Correlation coefficient
• Weighted distances...
Criterion Functions for Clustering
Criterion Function
Quantifies ‘quality’ of a set of clusters
• Clustering task: partition the data set D into c disjoint sets D1, ..., Dc
Criterion: Sum of Squared Error
$$J_e = \sum_{i=1}^{c} \sum_{x \in D_i} \|x - \mu_i\|^2$$
where $\mu_i$ is the mean of the samples in cluster $D_i$
• Sensitive to outliers
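As an illustration, a minimal NumPy sketch (names ours) that evaluates Je for a given partition, represented as a list of per-cluster sample arrays:

```python
import numpy as np

def sum_squared_error(clusters):
    """J_e: sum over clusters of squared distances to the cluster mean.

    `clusters` is a list of (n_i, d) arrays, one array per cluster D_i.
    """
    je = 0.0
    for Di in clusters:
        mu_i = Di.mean(axis=0)                       # cluster mean
        je += np.sum(np.linalg.norm(Di - mu_i, axis=1) ** 2)
    return je

# Two toy clusters in 2-D
D1 = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]])
D2 = np.array([[2.0, 2.0], [2.1, 1.9]])
print(sum_squared_error([D1, D2]))
```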
FIGURE 10.10. When two natural groupings have very different numbers of points, the clusters minimizing a sum-squared-error criterion Je of Eq. 54 may not reveal the true underlying structure. Here the criterion is smaller for the two clusters at the bottom (Je small) than for the more natural clustering at the top (Je large). From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Related Criteria: Min Variance
$$J_e = \frac{1}{2} \sum_{i=1}^{c} n_i \bar{s}_i \qquad \text{with} \qquad \bar{s}_i = \frac{1}{n_i^2} \sum_{x \in D_i} \sum_{x' \in D_i} \|x - x'\|^2$$
An Equivalent Formulation for SSE
$\bar{s}_i$: mean squared distance between points in the cluster (variance)
• Alternative criteria: use the median, maximum, or another descriptive statistic of the within-cluster distances in place of $\bar{s}_i$
Variation: Using Similarity (e.g. Tanimoto)
$$\bar{s}_i = \frac{1}{n_i^2} \sum_{x \in D_i} \sum_{x' \in D_i} s(x, x') \qquad \text{or} \qquad \bar{s}_i = \min_{x, x' \in D_i} s(x, x')$$
s may be any similarity function (in this case, maximize)
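A small NumPy sketch (names and toy data ours) that checks the equivalence numerically: Je computed from cluster means agrees with ½ Σ ni s̄i computed from pairwise distances.

```python
import numpy as np

def je_from_means(clusters):
    """SSE criterion computed directly from cluster means."""
    return sum(np.sum(np.linalg.norm(Di - Di.mean(axis=0), axis=1) ** 2)
               for Di in clusters)

def je_from_pairwise(clusters):
    """Equivalent form: J_e = 1/2 * sum_i n_i * s_bar_i, where s_bar_i is
    the mean squared pairwise distance within cluster D_i."""
    je = 0.0
    for Di in clusters:
        n_i = len(Di)
        diff = Di[:, None, :] - Di[None, :, :]       # all pairwise differences
        s_bar = np.sum(diff ** 2) / n_i**2           # mean squared distance
        je += 0.5 * n_i * s_bar
    return je

rng = np.random.default_rng(0)
clusters = [rng.normal(size=(5, 2)), rng.normal(loc=3.0, size=(7, 2))]
print(je_from_means(clusters), je_from_pairwise(clusters))  # the two values agree
```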
Criterion: Scatter Matrix-Based
$$S_w = \sum_{i=1}^{c} \sum_{x \in D_i} (x - \mu_i)(x - \mu_i)^T$$
Minimize Trace of Sw (within-class)
$$\mathrm{trace}[S_w] = \sum_{i=1}^{c} \mathrm{trace}[S_i] = \sum_{i=1}^{c} \sum_{x \in D_i} \|x - \mu_i\|^2 = J_e$$
Equivalent to SSE!
Recall that total scatter is the sum of within- and between-class scatter (Sm = Sw + Sb), where
$$\mathrm{trace}[S_b] = \sum_{i=1}^{c} n_i \|\mu_i - \mu_0\|^2$$
Scatter-Based Criteria, Cont'd
Determinant Criterion
$$J_d = |S_w| = \left| \sum_{i=1}^{c} \sum_{x \in D_i} (x - \mu_i)(x - \mu_i)^T \right|$$
Roughly measures the square of the scattering volume; proportional to the product of the variances in the principal axes (minimize!)
• The minimum-error partition will not change with axis scaling, unlike SSE
Scatter-Based: Invariant Criteria (Eigenvalue-Based)
Eigenvalues of $S_w^{-1} S_b$ measure the ratio of between- to within-cluster scatter in the direction of the corresponding eigenvectors (maximize!)
• The trace of a matrix is the sum of its eigenvalues (here d is the length of the feature vector)
• The eigenvalues are invariant under non-singular linear transformations (rotations, translations, scaling, etc.)
$$\mathrm{trace}[S_w^{-1} S_b] = \sum_{i=1}^{d} \lambda_i$$
$$J_f = \mathrm{trace}[S_m^{-1} S_w] = \sum_{i=1}^{d} \frac{1}{1 + \lambda_i}$$
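A minimal NumPy sketch (names ours) that computes Sw, Sb, and the scatter-based criteria above for a given partition:

```python
import numpy as np

def scatter_criteria(clusters):
    """Compute S_w, S_b and scatter-based criteria for a partition.

    `clusters` is a list of (n_i, d) arrays, one per cluster D_i.
    """
    X = np.vstack(clusters)
    mu0 = X.mean(axis=0)                                    # total mean
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for Di in clusters:
        mu_i = Di.mean(axis=0)
        Sw += (Di - mu_i).T @ (Di - mu_i)                   # within-cluster scatter
        Sb += len(Di) * np.outer(mu_i - mu0, mu_i - mu0)    # between-cluster scatter
    Sm = Sw + Sb                                            # total scatter
    # Eigenvalues of Sw^-1 Sb (Sw assumed non-singular); imaginary parts are numerical noise
    lam = np.linalg.eigvals(np.linalg.solve(Sw, Sb)).real
    return {
        "Je = trace(Sw)": np.trace(Sw),                     # minimize
        "Jd = det(Sw)": np.linalg.det(Sw),                  # minimize
        "trace(Sw^-1 Sb)": lam.sum(),                       # maximize
        "Jf = trace(Sm^-1 Sw)": np.trace(np.linalg.solve(Sm, Sw)),  # minimize
    }

rng = np.random.default_rng(1)
parts = [rng.normal(0, 0.5, size=(20, 2)), rng.normal(3, 0.5, size=(25, 2))]
print(scatter_criteria(parts))
```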
Clustering with a Criterion
Choosing Criterion
Creates a well-defined problem
• Define clusters so as to maximize the
criterion function
• A search problem
Comparison: Scatter-Based Criteria
Hierarchical Clustering
Motivation
Capture similarity/distance relationships
between sub-groups and samples within the
chosen clusters
• Common in scientific taxonomies (e.g.
biology)
Agglomerative Hierarchical Clustering
Problem: Given n samples, we want c clusters
One solution: Create a sequence of partitions (clusterings)
• First partition, k = 1: n clusters (one cluster per sample)
• Second partition, k = 2: n-1 clusters
• Continue reducing the number of clusters by one: merge the 2 closest clusters (a cluster may be a single sample) at each step k, until only c clusters remain (k = n - c + 1)
FIGURE 10.11. A dendrogram can represent the results of hierarchical clustering algo-
rithms. The vertical axis shows a generalized measure of similarity among clusters. Here,
at level 1 all eight points lie in singleton clusters; each point in a cluster is highly similar
to itself, of course. Points x6 and x7 happen to be the most similar, and are merged at
level 2, and so forth. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern
Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Distance Measures Between Clusters
$$d_{min}(D_i, D_j) = \min_{x \in D_i,\, x' \in D_j} \|x - x'\|$$
$$d_{max}(D_i, D_j) = \max_{x \in D_i,\, x' \in D_j} \|x - x'\|$$
$$d_{avg}(D_i, D_j) = \frac{1}{n_i n_j} \sum_{x \in D_i} \sum_{x' \in D_j} \|x - x'\|$$
$$d_{mean}(D_i, D_j) = \|m_i - m_j\|$$
Listed above: minimum, maximum, and average inter-sample distance (samples for clusters i, j: Di, Dj), and the difference in cluster means (mi, mj)
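A minimal NumPy sketch (names ours) of agglomerative clustering that repeatedly merges the closest pair of clusters under a chosen cluster-distance function; dmin (single linkage) is shown, and dmax or davg can be substituted:

```python
import numpy as np
from itertools import combinations

def d_min(Di, Dj):
    """Single-linkage distance: closest pair of samples across two clusters."""
    return min(np.linalg.norm(x - xp) for x in Di for xp in Dj)

def agglomerate(X, c, dist=d_min):
    """Merge the two closest clusters until only c clusters remain."""
    clusters = [[x] for x in X]                       # start: one cluster per sample
    while len(clusters) > c:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters[j])               # merge cluster j into cluster i
        del clusters[j]
    return [np.array(Di) for Di in clusters]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, size=(5, 2)), rng.normal(2, 0.3, size=(5, 2))])
for Di in agglomerate(X, c=2):
    print(len(Di), Di.mean(axis=0))
```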
Nearest-Neighbor Algorithm
Also known as the "single-linkage" algorithm: at each step, merge the pair of clusters with the smallest
$$d_{min}(D_i, D_j) = \min_{x \in D_i,\, x' \in D_j} \|x - x'\|$$
Issues
Sensitive to noise and to slight changes in the position of data points (chaining effect)
Farthest-Neighbor Algorithm
Also known as the "complete-linkage" algorithm: at each step, merge the pair of clusters with the smallest
$$d_{max}(D_i, D_j) = \max_{x \in D_i,\, x' \in D_j} \|x - x'\|$$
• Goal: minimal increase in the largest cluster diameter at each iteration (discourages elongated clusters)
Issues
Works well for clusters that are compact and roughly equal in size; with elongated clusters, the result can be meaningless
FIGURE 10.14. The farthest-neighbor clustering algorithm uses the separation between the most distant points as a criterion for cluster membership. If this distance is set very large, then all points lie in the same cluster. In the case shown at the left, a fairly large dmax leads to three clusters; a smaller dmax gives four clusters, as shown at the right. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Stepwise Optimal Hierarchical Clustering
Problem
None of the agglomerative methods discussed so far directly minimizes a specific criterion function
Modified Agglomerative Algorithm
For k = 1 to (n - c + 1): merge the pair of clusters whose union increases the criterion (e.g., Je) as little as possible; for the SSE criterion this pair minimizes
$$d_e(D_i, D_j) = \sqrt{\frac{n_i n_j}{n_i + n_j}}\; \|m_i - m_j\|$$
• de defines the cluster pair that increases Je as little as possible. The result may not minimize the SSE globally, but it is often a good starting point
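A small NumPy sketch (names ours) of the merge-selection step: choose the pair whose merger yields the smallest de, i.e., the smallest increase in Je.

```python
import numpy as np
from itertools import combinations

def d_e(Di, Dj):
    """Distance whose minimizing pair gives the smallest increase in J_e."""
    mi, mj = Di.mean(axis=0), Dj.mean(axis=0)
    ni, nj = len(Di), len(Dj)
    return np.sqrt(ni * nj / (ni + nj)) * np.linalg.norm(mi - mj)

def stepwise_merge(clusters):
    """Merge the single pair of clusters that increases J_e the least."""
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: d_e(clusters[ij[0]], clusters[ij[1]]))
    merged = np.vstack([clusters[i], clusters[j]])
    return [Dk for k, Dk in enumerate(clusters) if k not in (i, j)] + [merged]

rng = np.random.default_rng(3)
clusters = [rng.normal(m, 0.2, size=(4, 2)) for m in (0.0, 0.2, 3.0)]
print(len(stepwise_merge(clusters)))   # 3 clusters -> 2 after one merge
```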
FIGURE 10.2. The k -means clustering procedure is a form of stochastic hill climbing
in the log-likelihood function. The contours represent equal log-likelihood values for
the one-dimensional data in Fig. 10.1. The dots indicate parameter values after different
iterations of the k -means algorithm. Six of the starting points shown lead to local max-
ima, whereas two (i.e., µ1(0) = µ2(0)) lead to a saddle point near µ = 0. From: Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
FIGURE 10.1. (Above) The source mixture density used to generate sample data, and
two maximum-likelihood estimates based on the data in the table. (Bottom) Log-
likelihood of a mixture model consisting of two univariate Gaussians as a function of
their means, for the data in the table. Trajectories for the iterative maximum-likelihood
estimation of the means of a two-Gaussian mixture model based on the data are shown
as red lines. Two local optima (with log-likelihoods −52.2 and −56.7) correspond to the
two density estimates shown above. From: Richard O. Duda, Peter E. Hart, and David
G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
FIGURE 10.3. Trajectories for the means of the k -means clustering procedure applied to
two-dimensional data. The final Voronoi tesselation (for classification) is also shown—
the means correspond to the “centers” of the Voronoi cells. In this case, convergence is
obtained in three iterations. From: Richard O. Duda, Peter E. Hart, and David G. Stork,
Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
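For contrast with fuzzy k-means below, a minimal NumPy sketch (names ours) of the classical hard k-means procedure illustrated in Figures 10.1–10.3, assuming Euclidean distance:

```python
import numpy as np

def kmeans(X, c, n_iter=100, seed=0):
    """Classical k-means: alternate nearest-mean assignment and mean update."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=c, replace=False)]        # initial means
    for _ in range(n_iter):
        # Assign each sample to its nearest cluster mean
        labels = np.argmin(np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2), axis=1)
        new_mu = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else mu[i]
                           for i in range(c)])
        if np.allclose(new_mu, mu):                           # converged
            break
        mu = new_mu
    return mu, labels

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
means, labels = kmeans(X, c=2)
print(means)
```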
Fuzzy k-means
Basic Idea
Allow every point to have a probability of
membership in every cluster. The criterion (cost
function) minimized is:
$$J_{fuz} = \sum_{i=1}^{c} \sum_{j=1}^{n} [\hat{P}(\omega_i | x_j, \hat{\Theta})]^b \, \|x_j - \mu_i\|^2$$
Mean and membership updates:
$$\mu_i = \frac{\sum_{j=1}^{n} [\hat{P}(\omega_i | x_j)]^b \, x_j}{\sum_{j=1}^{n} [\hat{P}(\omega_i | x_j)]^b}, \qquad \hat{P}(\omega_i | x_j) = \frac{(1/d_{ij})^{1/(b-1)}}{\sum_{r=1}^{c} (1/d_{rj})^{1/(b-1)}}, \qquad d_{ij} = \|x_j - \mu_i\|^2$$
Algorithm
1. Initialize: compute the probability of each class for every point in the training set (uniform probabilities: equal membership in each cluster)
2. Recompute the means using the mean-update expression above
3. Recompute the probability of each class for each point using the membership-update expression above
• If the change in the means and in the membership probabilities is small, stop
• Else go to 2
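A minimal NumPy sketch (names ours) of this update loop, with fuzzifier b = 2 as in Figure 10.4; a small epsilon guards against division by zero when a point coincides with a mean.

```python
import numpy as np

def fuzzy_kmeans(X, c, b=2.0, n_iter=100, tol=1e-5, seed=0, eps=1e-12):
    """Fuzzy k-means: soft memberships P[i, j] = P_hat(w_i | x_j)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=c, replace=False)]         # initial means
    P = np.full((c, len(X)), 1.0 / c)                         # uniform memberships
    for _ in range(n_iter):
        W = P ** b                                            # weights P^b
        new_mu = (W @ X) / W.sum(axis=1, keepdims=True)       # mean update
        # Membership update from squared distances d_ij = ||x_j - mu_i||^2
        d = np.linalg.norm(X[None, :, :] - new_mu[:, None, :], axis=2) ** 2 + eps
        new_P = (1.0 / d) ** (1.0 / (b - 1.0))
        new_P /= new_P.sum(axis=0, keepdims=True)
        converged = np.abs(new_mu - mu).max() < tol and np.abs(new_P - P).max() < tol
        mu, P = new_mu, new_P
        if converged:
            break
    return mu, P

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
means, memberships = fuzzy_kmeans(X, c=2)
print(means)
```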
FIGURE 10.4. At each iteration of the fuzzy k -means clustering algorithm, the prob-
ability of category memberships for each point are adjusted according to Eqs. 32 and
33 (here b = 2). While most points have nonnegligible memberships in two or three
clusters, we nevertheless draw the boundary of a Voronoi tesselation to illustrate the
progress of the algorithm. After four iterations, the algorithm has converged to the red
cluster centers and associated Voronoi tesselation. From: Richard O. Duda, Peter E.
Hart, and David G. Stork, Pattern Classification. Copyright © 2001 by John Wiley & Sons, Inc.
Fuzzy k-means, Cont’d
Convergence Properties
Sometimes fuzzy k-means improves
convergence over classical k-means
However, the probability of cluster membership depends on the number of clusters, which can lead to problems if a poor choice of k is made
Cluster Validity
So far...
We’ve assumed that we know the number of clusters