Datamining-lect5 - Clustering. The K-means Algorithm. Hierarchical Clustering. The DBSCAN Algorithm. Clustering Evaluation
LECTURE 5
Clustering
The k-means algorithm
Hierarchical Clustering
The DBSCAN algorithm
Clustering Evaluation
What is a Clustering?
• In general a grouping of objects such that the objects in a
group (cluster) are similar (or related) to one another and
different from (or unrelated to) the objects in other groups
[Figure: intra-cluster distances are minimized; inter-cluster distances are maximized]
Applications of Cluster Analysis
• Understanding
• Group objects with similar behavior, e.g., stocks with similar price movements
[Table: discovered stock clusters and their industry group, e.g., {Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, ...} and {Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP}: Oil-UP]
• Summarization
• Reduce the size of large data sets
• Applications
• Recommendation systems
• Search Personalization
[Figure: clustering precipitation in Australia]
Early applications of cluster analysis
• John Snow, London 1854: mapping cholera deaths revealed a cluster around a contaminated water pump
Notion of a Cluster can be Ambiguous
[Figure: the same set of points can plausibly be grouped into different numbers of clusters]
Clustering objectives
• Well-separated
• A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster
3 well-separated clusters
Clustering objectives
• Center-based
• A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the “center” of a cluster, than to the
center of any other cluster
• The center of a cluster is often a centroid, the point that
minimizes the sum of distances to all the points in the cluster,
or a medoid, the most “representative” point of a cluster
4 center-based clusters
Clustering objectives
• Contiguous Cluster (Nearest neighbor or
Transitive)
• A cluster is a set of points such that a point in a cluster is
closer (or more similar) to one or more other points in the
cluster than to any point not in the cluster.
8 contiguous clusters
Types of Clusters: Density-Based
• Density-based
• A cluster is a dense region of points, which is separated by
low-density regions, from other regions of high density.
• Used when the clusters are irregular or intertwined, and when
noise and outliers are present.
6 density-based clusters
Clustering objectives
• Shared Property or Conceptual Clusters
• Finds clusters that share some common property or represent
a particular concept.
2 Overlapping Circles
Types of Clusters: Objective Function
• Clustering as an optimization problem
• Finds clusters that minimize or maximize an objective function.
• Enumerate all possible ways of dividing the points into clusters
and evaluate the `goodness' of each potential set of clusters by
using the given objective function (this is NP-hard).
• Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
• A variation of the global objective function approach is to fit the
data to a parameterized model.
• The parameters for the model are determined from the data, and they
determine the clustering
• E.g., Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.
Clustering Algorithms
• K-means and its variants
• Hierarchical clustering
• DBSCAN
K-MEANS
K-means Clustering
• Partitional clustering approach
• Each cluster is associated with a centroid
(center point)
• Each point is assigned to the cluster with the
closest centroid
• Number of clusters, K, must be specified
• The objective is to find K centroids and an
assignment of points to clusters/centroids so
as to minimize the sum of distances of the
points to their respective centroid
K-means Clustering
• Problem: Given a set X of n objects and an
integer K, group the points into K clusters
C = {C_1, ..., C_K} so as to minimize the cost
$\text{Cost}(C) = \sum_{i=1}^{K} \sum_{x \in C_i} \lVert x - c_i \rVert^2$
where $c_i$ is the centroid of cluster $C_i$
[Figure: K-means on example data - the original points, and the cluster assignments and centroids over successive iterations]
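Below is a minimal NumPy sketch of the standard (Lloyd's) iteration for the K-means problem stated above: alternately assign each point to its closest centroid, then recompute each centroid as the mean of its points. The function name and the random initialization are illustrative choices, not part of the lecture; in practice one would also do multiple restarts, as discussed below.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm sketch: X is an (n, d) array, k the number of clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # k random points as initial centroids
    for _ in range(n_iters):
        # Assignment step: each point goes to the cluster of its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of the points assigned to it.
        new_centroids = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):   # converged: centroids no longer move
            break
        centroids = new_centroids
    sse = ((X - centroids[assign]) ** 2).sum()       # the objective being minimized
    return assign, centroids, sse
```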
Importance of Choosing Initial Centroids
[Figure: iterations 1-5 of K-means on the same data from a different choice of initial centroids]
Dealing with Initialization
• Do multiple runs and select the clustering with the
smallest error
HIERARCHICAL CLUSTERING
Hierarchical Clustering
• Two main types of hierarchical clustering:
• Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters until only one cluster (or
k clusters) remains
• Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster until each cluster contains a point (or there
are k clusters)
[Figure: dendrogram over six points]
Strengths of Hierarchical Clustering
• Do not have to assume any particular number of
clusters
• Any desired number of clusters can be obtained by
‘cutting’ the dendrogram at the proper level
Intermediate Situation
• After some merging steps, we have some clusters
[Figure: current clusters C1-C5 and their proximity matrix]
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and
update the proximity matrix.
[Figure: clusters C1-C5 and their proximity matrix, with C2 and C5 about to be merged]
After Merging
• The question is “How do we update the proximity matrix?”
[Figure: proximity matrix after the merge; the row and column for the merged cluster C2 ∪ C5 are marked with “?”]
How to Define Inter-Cluster Similarity
[Figure: proximity matrix over points p1, p2, p3, p4, p5, ...]
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
• Ward’s Method uses squared error
Single Link – Complete Link
• Another way to view the processing of the
hierarchical algorithm is that we create links
between the elements in order of increasing
distance
• MIN (single link) merges two clusters as soon as a
single pair of elements, one from each cluster, is linked
• MAX (complete link) merges two clusters only when all
pairs of elements between the two clusters have been linked.
Hierarchical Clustering: MIN
• Distance matrix:

      1    2    3    4    5    6
  1   0   .24  .22  .37  .34  .23
  2  .24   0   .15  .20  .14  .25
  3  .22  .15   0   .15  .28  .11
  4  .37  .20  .15   0   .29  .22
  5  .34  .14  .28  .29   0   .39
  6  .23  .25  .11  .22  .39   0

[Figure: nested clusters and dendrogram produced by single link (MIN); dendrogram leaves ordered 3, 6, 4, 1, 2, 5]
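As a hedged illustration (not part of the lecture), the following SciPy sketch runs agglomerative clustering on the 6-point distance matrix above; method 'single' corresponds to MIN, 'complete' to MAX, and 'average' to group average (Ward's method additionally assumes Euclidean distances between raw points).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from scipy.spatial.distance import squareform

# Distance matrix from the slide above.
D = np.array([[0.00, 0.24, 0.22, 0.37, 0.34, 0.23],
              [0.24, 0.00, 0.15, 0.20, 0.14, 0.25],
              [0.22, 0.15, 0.00, 0.15, 0.28, 0.11],
              [0.37, 0.20, 0.15, 0.00, 0.29, 0.22],
              [0.34, 0.14, 0.28, 0.29, 0.00, 0.39],
              [0.23, 0.25, 0.11, 0.22, 0.39, 0.00]])

Z = linkage(squareform(D), method='single')        # MIN / single link merge sequence
labels = fcluster(Z, t=2, criterion='maxclust')    # cut the dendrogram into 2 clusters
dendrogram(Z, labels=[1, 2, 3, 4, 5, 6])           # plot the dendrogram (needs matplotlib)
```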
Strength of MAX
[Figure: complete link (MAX) clustering of the example points]
Cluster Similarity: Group Average
• Proximity of two clusters is the average of pairwise proximity between
points in the two clusters:
$\text{proximity}(\text{Cluster}_i, \text{Cluster}_j) = \dfrac{\sum_{p_i \in \text{Cluster}_i,\; p_j \in \text{Cluster}_j} \text{proximity}(p_i, p_j)}{|\text{Cluster}_i| \cdot |\text{Cluster}_j|}$
Hierarchical Clustering: Group Average
[Figure: nested clusters and dendrogram produced by group average on the same distance matrix; dendrogram leaves ordered 3, 6, 4, 1, 2, 5]
Hierarchical Clustering: Group Average
• Compromise between Single and
Complete Link
• Strengths
• Less susceptible to noise and outliers
• Limitations
• Biased towards globular clusters
Cluster Similarity: Ward’s Method
• Similarity of two clusters is based on the increase
in squared error (SSE) when two clusters are
merged
• Similar to group average if distance between points is
distance squared
[Figure: comparison of the clusterings produced by group average and Ward’s Method on the example points]
Hierarchical Clustering:
Time and Space requirements
• O(N²) space, since it uses the proximity matrix.
• N is the number of points.
• O(N³) time in many cases: there are N merge steps, and at each step
the proximity matrix must be searched and updated
Density-based clustering
• Important Questions:
• How do we measure density?
• What is a dense region?
• DBSCAN:
• Density at point p: number of points within a circle of radius Eps
• Dense Region: A circle of radius Eps that contains at least MinPts
points
DBSCAN
• Characterization of points
• A point is a core point if it has more than a specified
number of points (MinPts) within distance Eps
• These points belong to a dense region and are at the interior
of a cluster
• A border point is not a core point, but falls within the Eps-neighborhood
of some core point
• A noise point is any point that is neither a core nor a border point
• Density-connected
• A point p is density-connected to a
point q if there is a path of edges
from p to q
DBSCAN Algorithm
• Label points as core, border and noise
• Eliminate noise points
• For every core point p that has not been assigned
to a cluster
• Create a new cluster with the point p and all the
points that are density-connected to p.
• Assign border points to the cluster of the closest
core point.
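A minimal sketch of the procedure above (illustrative, not an optimized implementation; the function name and the brute-force distance computation are my own choices). It labels core points, grows one cluster per unassigned core point through density-connected points, and leaves unreached points as noise; unlike the slide, border points here join the first cluster that reaches them rather than the cluster of the closest core point.

```python
import numpy as np

def dbscan(X, eps, min_pts):
    n = len(X)
    # Brute-force pairwise distances; a spatial index would be used in practice.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]   # Eps-neighborhoods
    core = np.array([len(nb) >= min_pts for nb in neighbors])        # core-point test

    labels = np.full(n, -1)              # -1 = noise / not yet assigned
    cluster_id = 0
    for p in range(n):
        if not core[p] or labels[p] != -1:
            continue
        labels[p] = cluster_id
        frontier = [p]
        while frontier:                  # expand over density-connected points
            q = frontier.pop()
            for r in neighbors[q]:
                if labels[r] == -1:
                    labels[r] = cluster_id
                    if core[r]:          # only core points continue the expansion
                        frontier.append(r)
        cluster_id += 1
    return labels                        # points still labelled -1 are noise

# scikit-learn's sklearn.cluster.DBSCAN(eps=..., min_samples=...) provides an
# optimized implementation of the same idea.
```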
DBSCAN: Determining Eps and MinPts
• Idea is that for points in a cluster, their kth nearest neighbors are
at roughly the same distance
• Noise points have the kth nearest neighbor at farther distance
• So, plot sorted distance of every point to its kth nearest neighbor
• Find the distance d where there is a “knee” in the curve
• Eps = d, MinPts = k
[Figure: sorted distance of each point to its 4th nearest neighbor (MinPts = 4); the knee of the curve suggests Eps ≈ 7-10]
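A small sketch of the k-distance plot described above (function name and brute-force distance computation are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def k_distance_plot(X, k=4):
    """Plot each point's distance to its k-th nearest neighbor, sorted."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    kth = np.sort(dist, axis=1)[:, k]        # column 0 is the point itself
    plt.plot(np.sort(kth))
    plt.xlabel("points sorted by k-th nearest neighbor distance")
    plt.ylabel(f"{k}-th nearest neighbor distance")
    plt.show()

# Choose Eps at the "knee" of this curve and set MinPts = k.
```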
When DBSCAN Works Well
Original Points
Clusters
• Resistant to Noise
• Can handle clusters of different shapes and sizes
When DBSCAN Does NOT Work Well
[Figure: original points and DBSCAN results with (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92)]
• Varying densities
• High-dimensional data
• DBSCAN is sensitive to its parameters
Other algorithms
• PAM, CLARANS: solutions for the k-medoids problem
• BIRCH: constructs a hierarchical tree that acts as a
summary of the data, and then clusters the leaves
• MST: clustering using the Minimum Spanning Tree
• ROCK: clustering categorical data by neighbor and link
analysis
• LIMBO, COOLCAT: clustering categorical data using
information-theoretic tools
• CURE: hierarchical algorithm that uses a different
representation of a cluster
• CHAMELEON: hierarchical algorithm that uses closeness and
interconnectivity for merging
CLUSTERING EVALUATION
Clustering Evaluation
• We need to evaluate the “goodness” of the resulting
clusters
[Figure: clusterings of the same data produced by K-means and by complete link]
Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e.,
distinguishing whether non-random structure actually exists in the
data.
2. Comparing the results of a cluster analysis to externally known
results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data
without reference to external information.
- Use only the data
4. Comparing the results of two different sets of cluster analyses to
determine which is better.
5. Determining the ‘correct’ number of clusters.
Using Similarity Matrix for Cluster Validation
• Order the points of the similarity matrix with respect to cluster labels
and inspect visually
• Similarity can be obtained from the distances:
$\text{sim}(i, j) = 1 - \dfrac{d_{ij} - d_{min}}{d_{max} - d_{min}}$
[Figure: example points and the corresponding similarity matrix with points sorted by cluster label; well-separated clusters appear as blocks on the diagonal]
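A short sketch of this visualization (illustrative function name; assumes Euclidean distances and matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

def similarity_matrix_plot(X, labels):
    order = np.argsort(labels)                        # group points of the same cluster together
    D = np.linalg.norm(X[order, None, :] - X[None, order, :], axis=2)
    S = 1 - (D - D.min()) / (D.max() - D.min())       # sim(i, j) = 1 - (d_ij - d_min)/(d_max - d_min)
    plt.imshow(S)                                     # crisp clusters show up as bright diagonal blocks
    plt.colorbar()
    plt.show()
```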
Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: DBSCAN clustering of random data and the corresponding sorted similarity matrix]
Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: K-means clustering of random data and the corresponding sorted similarity matrix]
Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: complete link clustering of random data and the corresponding sorted similarity matrix]
Using Similarity Matrix for Cluster Validation
[Figure: DBSCAN clustering of a more complicated data set and the corresponding sorted similarity matrix]
• Clusters in more complicated figures are not well separated
• This technique can only be used for small datasets since it requires a
quadratic computation
Internal Measures: SSE
• Internal Index: Used to measure the goodness of a
clustering structure without reference to external
information
• Example: SSE
• SSE is good for comparing two clusterings or two clusters
(average SSE).
• Can also be used to estimate the number of clusters
[Figure: example data and the SSE curve as the number of clusters K varies from 2 to 30]
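A hedged sketch of using SSE to pick K with scikit-learn (X is assumed to be an (n, d) NumPy array; KMeans.inertia_ is exactly the within-cluster sum of squared distances, and n_init controls the multiple random restarts discussed earlier):

```python
from sklearn.cluster import KMeans

def sse_curve(X, k_values=range(2, 31)):
    # SSE for each candidate K; look for a "knee" in the resulting curve.
    return {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in k_values}
```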
Internal Measures: Cohesion and Separation
• Cluster Cohesion: Measures how closely related
are objects in a cluster
• Cluster Separation: Measure how distinct or well-
separated a cluster is from other clusters
• Example: Squared Error
• Cohesion is measured by the within-cluster sum of squares (SSE)
$WSS = \sum_{i} \sum_{x \in C_i} (x - c_i)^2$   (we want this to be small)
• Separation is measured by the between-cluster sum of squares
$BSS = \sum_{i} |C_i| \, (c - c_i)^2$, where $c_i$ is the centroid of cluster $C_i$ and $c$ the overall mean (we want this to be large)
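A small sketch computing both quantities for a clustering, following the formulas above (illustrative function name; X is an (n, d) array, labels an array of cluster ids):

```python
import numpy as np

def cohesion_separation(X, labels):
    overall_mean = X.mean(axis=0)
    wss = bss = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        wss += ((pts - centroid) ** 2).sum()                       # within-cluster SSE (cohesion)
        bss += len(pts) * ((centroid - overall_mean) ** 2).sum()   # between-cluster SS (separation)
    return wss, bss
```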
Internal Measures: Cohesion and Separation
• A proximity graph based approach can also be used for
cohesion and separation.
• Cluster cohesion is the sum of the weight of all links within a cluster.
• Cluster separation is the sum of the weights between nodes in the cluster
and nodes outside the cluster.
[Figure: cohesion as links within a cluster vs. separation as links between clusters]
Internal measures – caveats
• Internal measures have the problem that the
clustering algorithm did not set out to optimize
this measure, so it will not necessarily do well
with respect to the measure.
[Figure: example points and a histogram of SSE values (Count vs. SSE)]
Statistical Framework for Correlation
• Correlation of incidence and proximity matrices for the
K-means clusterings of the following two data sets.
[Figure: the two data sets used for the comparison]
• $n_{ij}$ = number of points in cluster i
coming from class j
• $p_{ij} = n_{ij} / n_i$ = probability that an element
of cluster i comes from class j
[Table: n_ij counts for Clusters 1-3 (rows) and Classes 1-3 (columns)]
Measures
• Entropy:
• Of a cluster i: $e_i = -\sum_{j} p_{ij} \log p_{ij}$
• Highest when uniform, zero when single class
• Of a clustering: $e = \sum_{i} \frac{n_i}{n} e_i$
• Purity:
• Of a cluster i: $p_i = \max_{j} p_{ij}$
• Of a clustering: $\text{purity} = \sum_{i} \frac{n_i}{n} p_i$
Measures
[Table: n_ij counts for Clusters 1-3 and Classes 1-3]
• Precision:
• Of cluster i with respect to class j: $\text{Prec}(i, j) = p_{ij} = n_{ij} / n_i$
• Recall:
• Of cluster i with respect to class j: $\text{Rec}(i, j) = n_{ij} / n_j$, where $n_j$ is the number of points in class j
• F-measure:
• Harmonic Mean of Precision and Recall:
$F(i, j) = \dfrac{2 \cdot \text{Prec}(i, j) \cdot \text{Rec}(i, j)}{\text{Prec}(i, j) + \text{Rec}(i, j)}$
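A sketch computing these external measures from a contingency matrix n[i, j] = number of points of class j in cluster i (function name and the base-2 logarithm are illustrative choices):

```python
import numpy as np

def external_measures(n):
    n = np.asarray(n, dtype=float)
    n_i = n.sum(axis=1, keepdims=True)            # cluster sizes
    n_j = n.sum(axis=0, keepdims=True)            # class sizes
    p = n / n_i                                   # p_ij = n_ij / n_i
    entropy_i = -np.sum(np.where(p > 0, p * np.log2(np.where(p > 0, p, 1)), 0), axis=1)
    purity_i = p.max(axis=1)
    w = (n_i / n.sum()).ravel()                   # cluster weights n_i / n
    prec, rec = p, n / n_j
    f = np.where(prec + rec > 0, 2 * prec * rec / np.where(prec + rec > 0, prec + rec, 1), 0)
    return {"entropy": float(w @ entropy_i), "purity": float(w @ purity_i),
            "precision": prec, "recall": rec, "F": f}
```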
Precision/Recall for clusters and clusterings
[Figure: two example clusterings of class-labeled points (Clusters 1-3); in one of them, Cluster 1 has Purity 1 and Precision 1, but Recall 0.35]
External Measures of Cluster Validity:
Entropy and Purity
[Table: entropy and purity per cluster for an example clustering]
Final Comment on Cluster Validity
“The validation of clustering structures is the most
difficult and frustrating part of cluster analysis.
Without a strong effort in this direction, cluster
analysis will remain a black art accessible only to
those true believers who have experience and
great courage.”