
Machine Learning (CE 40477)


Fall 2024

Ali Sharifi-Zarchi

CE Department
Sharif University of Technology

October 15, 2024


1 Unsupervised Learning Overview

2 K-Means

3 Challenges in K-Means

4 Other Clustering Algorithms


Unsupervised Learning

• Unsupervised Learning involves analyzing unlabeled data to uncover hidden patterns or structures within the data.


Some Common Tasks

• Clustering: Grouping data points into clusters based on similarity.


• Dimensionality Reduction: Reducing the number of features under consideration
and keeping (perhaps approximately) the most informative features.
• Anomaly Detection: Identifying data points that deviate significantly from the
norm (e.g., fraud detection).
• Generative Modeling: Learning the distribution of data to generate new, similar
instances.


Clustering

• Clustering organizes data points into groups of similar objects.


• Data points in a cluster are more similar to each other than to those in other
clusters.
• The notion of similarity depends on the task at hand (e.g., purchase behavior in
market segmentation).


Some Applications of Clustering

• Customer Segmentation (Marketing)


• Image Segmentation and Object Detection (Computer Vision)
• Anomaly Detection (Cybersecurity, Finance)
• Genomics and Bioinformatics
• Social Network Analysis and Community Detection


Clustering in Action: Music Recommendation Systems

• Music recommendation systems cluster songs based on similarity.

Adopted from machinelearninggeek.com


Clustering in Action: Music Recommendation Systems

• When you like a song, the system suggests others from the same cluster.


Clustering in Action: Gene Expression Clustering

• Clustering can decipher hidden patterns in gene expression data, which can help
in understanding disease mechanisms or genetic variations.


Two Beginning Questions

• How to create 'good' clusters?


• How many clusters do we need?


K-Means overview

• The most widely used clustering algorithm.


• Partitions data into K distinct groups based on feature similarity.
• It works by iteratively assigning data points to the nearest centroid (the mean of the group) and then recalculating the centroids based on the new group memberships.
• The process repeats until the assignments no longer change.


K-Means in action


Algorithm

Algorithm 1 K-Means Clustering

1: Input: K (number of clusters), D = {x^(1), ..., x^(N)} (data points)
2: Initialize: select K random points as centroids {µ_1, ..., µ_K}
3: repeat
4:   Assign each point x^(i) to the nearest centroid: f(x^(i)) = arg min_j ∥x^(i) − µ_j∥
5:   For each 1 ≤ j ≤ K, set C_j = {x^(i) | f(x^(i)) = j}
6:   Update centroids: µ_j = (1/|C_j|) Σ_{x^(i) ∈ C_j} x^(i)
7: until centroids do not change
8: Output: final clusters {C_1, C_2, ..., C_K}
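
To make the loop concrete, here is a minimal NumPy sketch of the algorithm above (assumptions: the data is a numeric (N, d) array X; the names kmeans, max_iter, and seed are illustrative, not from the slides):

import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Minimal K-Means (Lloyd's algorithm) on an (N, d) data array."""
    rng = np.random.default_rng(seed)
    # Initialize: pick K distinct data points as the starting centroids.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assignment step: nearest centroid for every point.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # assignments no longer change -> converged
        labels = new_labels
        # Update step: move each centroid to the mean of its cluster.
        for j in range(K):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy usage:
# X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
# labels, centroids = kmeans(X, K=2)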


Problem definition

• Formally: we have X_train = {x^(1), x^(2), ..., x^(N)} ⊆ R^d.
• K is the number of clusters.
• We are learning:
  1 A function (mapping) f : R^d → {1, 2, ..., K} that assigns a cluster to each data point.
  2 A set of K prototypes µ = {µ_1, µ_2, ..., µ_K} ⊆ R^d as the cluster representatives, called centroids.


Objective Function

• We want samples in the same cluster to be similar.


• In K-Means, this is expressed as:


$J = \sum_{j=1}^{K} \sum_{x^{(i)} \in C_j} \lVert x^{(i)} - \mu_j \rVert^2$

• Choose f and µ = {µ1 , µ2 , . . . , µK } to minimize this.


• This problem is NP-hard. K-Means is a heuristic and is NOT guaranteed to find the optimal solution.
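
For reference, the objective J can be computed directly from an assignment and a set of centroids; a small sketch following the conventions of the kmeans sketch above (labels[i] is the cluster index assigned to x^(i)):

import numpy as np

def kmeans_objective(X, labels, centroids):
    """Sum of squared distances from each point to its assigned centroid."""
    return float(np.sum((X - centroids[labels]) ** 2))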


K-Means Process Example

Adopted from mlbhanuyerra.github.io


Convergence

• How do we know K-Means will converge in a finite number of steps?
• First, we show that J decreases in each step, as long as we have not converged.


Convergence (cont.)

• We initially assign each sample to the nearest centroid:

  $f(x) := \arg\min_j \lVert x - \mu_j \rVert^2$

• Keep each sample's assignment fixed until a closer centroid is found.
• Each time a sample is reassigned, the total distance between samples and their centroids decreases.
• The number of possible sample-to-centroid assignments is finite.
• The algorithm terminates when no sample changes its assigned centroid.


Convergence (cont.)

• In the update step, with f(x) fixed, J is a quadratic function of µ_j (like an SSE), and by taking the derivative we can minimize it:

  $\frac{\partial J}{\partial \mu_j} = 0 \implies -2 \sum_{x^{(i)} \in C_j} \left( x^{(i)} - \mu_j \right) = 0$

• This means we should update each µ_j as the mean of cluster C_j:

  $\mu_j = \frac{1}{|C_j|} \sum_{x^{(i)} \in C_j} x^{(i)}$


Convergence (cont.)

• For each cluster, the mean of its samples minimizes the sum of squared distances.
• For C_j, if µ'_j was the old centroid, we have $\sum_{x^{(i)} \in C_j} \lVert x^{(i)} - \mu'_j \rVert^2 \ge \sum_{x^{(i)} \in C_j} \lVert x^{(i)} - \mu_j \rVert^2$, so J_new ≤ J_old.


Convergence (cont.)

• J is non-negative, and there are only finitely many partitions, so J has a minimum and cannot decrease forever.
• Therefore the algorithm must converge at some point.
• The convergence properties of the K-means algorithm were studied by MacQueen (1967).


Strengths

• Simple: easy to understand and to implement.


• Efficient: Time complexity: O(tkn), where
• n is the number of data points,
• k is the number of clusters, and
• t is the number of iterations.


Initialization

• K-Means always converges. What could go wrong?
• The K-Means algorithm is a heuristic.
• It requires initial centroids, and this choice is important: it can affect the t in O(tkn).


Local Optimum

• The algorithm finds a local minimum, but there is no guarantee of finding the global minimum.
• Its result is highly affected by the initialization.
• Some suggestions (a seeding sketch follows this list):
  • Multiple runs with random initial centroids, then select the "best" result (lowest J).
  • Initialization heuristics (K-Means++, Furthest-First Traversal).
  • Initializing with the results suggested by another method.
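
A minimal sketch of K-Means++-style seeding, in which each new centroid is sampled with probability proportional to its squared distance from the centroids chosen so far (the name kmeans_pp_init is illustrative; the result can be handed to any K-Means routine as its initial centroids):

import numpy as np

def kmeans_pp_init(X, K, seed=0):
    """K-Means++ seeding: spread the initial centroids apart."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]  # first centroid: uniform at random
    for _ in range(K - 1):
        # Squared distance of every point to its nearest chosen centroid.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        # Sample the next centroid proportionally to that squared distance.
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

Running ordinary K-Means several times from different random seeds and keeping the run with the lowest J is the "multiple runs" strategy listed above.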


Local optimum (cont.)

(Figure: an optimal clustering vs. a possible clustering found by K-Means.)


Definition of Mean

• We assume x^(i) ∈ R^d, which is not always the case: K-Means requires a space where the sample mean is defined.
• Example: categorical data.
  • A suggested solution: K-Modes, where the centroid is the most frequent category (the mode) in each cluster (a toy sketch follows).
  • The closest centroid is found by the Hamming distance.
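
For intuition only, a toy sketch of the two K-Modes ingredients mentioned above, the per-feature mode as centroid and the Hamming distance (not a full K-Modes implementation):

import numpy as np

def mode_centroid(cluster):
    """Column-wise mode of a cluster of categorical rows (object array)."""
    return np.array([max(set(col), key=list(col).count) for col in cluster.T],
                    dtype=object)

def hamming(x, y):
    """Number of feature positions where two categorical vectors disagree."""
    return int(np.sum(x != y))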


How many clusters?

Adopted from slides of Dr. Soleymani, Modern Information Retrieval course, Sharif University of Technology.


How many clusters? (cont.)

• The number of clusters is usually given in advance in a clustering problem. However, finding the right number of clusters is itself a problem.
• First, we need to know how we can evaluate a clustering.


Clustering Evaluation

• Evaluating clusters involves two key aspects:


• Intra-cluster cohesion (compactness): How similar the data points are within a
cluster.
• Often measured by the within-cluster sum of squares (WCSS):

  $\mathrm{WCSS} = \sum_{i=1}^{K} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$


Clustering Evaluation

• Inter-cluster separation (isolation): How different the data points are between clusters.
• Single-link (Minimum Distance):
  • Measures the minimum distance between any two points from different clusters.

    $d_{\text{single}}(C_i, C_j) = \min_{x \in C_i,\; y \in C_j} d(x, y)$

• Complete-link (Maximum Distance):
  • Measures the maximum distance between any two points from different clusters.

    $d_{\text{complete}}(C_i, C_j) = \max_{x \in C_i,\; y \in C_j} d(x, y)$


Clustering Evaluation

• Inter-cluster separation (isolation): How different the data points are between clusters.
• Centroid (Ward's Method):
  • Measures the distance between the centroids of two clusters.

    $d_{\text{centroid}}(C_i, C_j) = d(\mu_i, \mu_j)$

• Average-link:
  • Measures the average distance between all pairs of points from different clusters.

    $d_{\text{average}}(C_i, C_j) = \frac{1}{|C_i| \cdot |C_j|} \sum_{x \in C_i} \sum_{y \in C_j} d(x, y)$
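
All four separation measures can be read off a pairwise-distance matrix; a small sketch using SciPy (the function name linkage_distances is illustrative):

import numpy as np
from scipy.spatial.distance import cdist

def linkage_distances(Ci, Cj):
    """Separation measures between two clusters given as (n_i, d) and (n_j, d) arrays."""
    D = cdist(Ci, Cj)  # all pairwise Euclidean distances between the two clusters
    return {
        "single": D.min(),      # closest pair
        "complete": D.max(),    # farthest pair
        "average": D.mean(),    # mean over all pairs
        "centroid": float(np.linalg.norm(Ci.mean(axis=0) - Cj.mean(axis=0))),
    }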


Elbow Method for Optimal K

• Finds the optimal number of clusters K by minimizing the within-cluster sum of squares (WCSS).
• Elbow Point:
  • Plot WCSS versus K.
  • The point where the rate of decrease sharply slows down (resembling an "elbow") is considered the optimal K; a sketch follows.

Adopted from medium.com
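
A common way to produce the elbow plot is to run K-Means for a range of K and record the WCSS, which scikit-learn exposes as inertia_; a minimal sketch, assuming numeric features in an array X (placeholder data here):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.rand(300, 2)  # placeholder data; replace with your own features

ks = range(1, 11)
wcss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(list(ks), wcss, marker="o")
plt.xlabel("K (number of clusters)")
plt.ylabel("WCSS (inertia)")
plt.title("Elbow method")
plt.show()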

Silhouette Method for Cluster Evaluation

• Silhouette Score for a single point i:

$S(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))}$
• where:
• a(i) is the average distance between i and all other points in the same cluster.
• b(i) is the average distance between i and points in the nearest neighboring cluster.
• Interpretation:
• S(i) ∈ [−1, 1]
• S(i) ≈ 1 : Well-clustered.
• S(i) ≈ 0 : On or near the decision boundary between clusters.
• S(i) ≈ −1 : Misclustered.
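
scikit-learn provides both the mean silhouette score and the per-point values S(i); a minimal usage sketch with placeholder data:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

X = np.random.rand(300, 2)  # placeholder data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("mean silhouette:", silhouette_score(X, labels))  # overall clustering quality
s = silhouette_samples(X, labels)                        # S(i) for every point
print("lowest-scoring points:", np.argsort(s)[:5])       # candidates for misclustering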


How many Clusters? (cont.)

• There is a trade-off between having better focus within each cluster and having too many clusters.
• We don't want one-element clusters.
• Optimization problem: penalize having too many clusters (a selection sketch follows):

  $K^* = \arg\min_{k} \; J(k) + \lambda k$
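
A minimal sketch of this penalized selection, reusing the idea from the elbow example (the penalty λ is a user-chosen constant and the value below is only a placeholder):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(300, 2)  # placeholder data
lam = 5.0                   # penalty per additional cluster (problem-dependent)

ks = range(1, 11)
J = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks}
best_k = min(ks, key=lambda k: J[k] + lam * k)  # K* = argmin_k J(k) + lambda*k
print("selected K:", best_k)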


Outliers

• The algorithm is sensitive to outliers


• Outliers are data points that are very far away from other data points.
• Outliers could be errors in data recording or unique data points with significantly
different values.


Data Distribution

• There is a problem with how K-Means defines clusters.
• K-Means assumes clusters are spherical and of roughly equal variance, which limits its effectiveness on non-spherical or complex-shaped clusters.

Figure 1: an example where K-Means won't work



Hard vs Soft Clustering

• Hard Clustering (Partitional): Each data point belongs to exactly one cluster.
  • More common and easier to use.
• Soft Clustering (Bayesian)

Figure adapted from Pattern Recognition and Machine Learning, Bishop


Hard vs Soft Clustering (cont.)

• Hard Clustering (Partitional)
• Soft Clustering (Bayesian): Each sample is assigned to the different clusters with probabilities, rather than with hard {0, 1} memberships.
  • A data point belongs to each cluster with some probability.

Figure adapted from Pattern Recognition and Machine Learning, Bishop


Hierarchical Clustering
• Hierarchical algorithms find successive clusters using previously established
clusters. Two Types:
• Agglomerative (bottom-up): Start with individual points and merge clusters.
• Divisive (top-down): Start with all points and split clusters.
Result: A hierarchy of clusters represented by a dendrogram.


Agglomerative Clustering Algorithm

• Start with each point as its own cluster.
• Merge the "closest" clusters.
• Repeat until one cluster remains or the desired number of clusters is reached.
• The closest clusters can be determined using the inter-cluster separation measures above (a SciPy sketch follows).
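
A minimal SciPy sketch of agglomerative clustering; the method argument ("single", "complete", "average", "ward") corresponds to the separation measures discussed earlier, and fcluster performs the dendrogram cut described on the next slide (X is placeholder data):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.random.rand(30, 2)            # placeholder data

Z = linkage(X, method="average")     # bottom-up merges with average-link distance
dendrogram(Z)                        # visualize the hierarchy of merges
plt.show()

labels = fcluster(Z, t=3, criterion="maxclust")  # cut the dendrogram into 3 clusters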


Dendrogram and Cutting

• A dendrogram shows the hierarchy of merges.


• Cut the dendrogram at a desired level to form clusters.

Adopted from r-graph-gallery.com



Hierarchical Algorithms

• Advantages:
• No need to specify the number of clusters.
• Produces a dendrogram for visualization.
• Works with arbitrary-shaped clusters.
• Disadvantages
• High computational cost.
• Sensitive to noise and outliers.
• Greedy: cannot undo merges.


DBSCAN

DBSCAN (Density-Based Spatial Clustering of Applications with Noise):


• Groups points in high-density regions.
• Labels points in low-density regions as noise.
• Does not require specifying the number of clusters K .
Parameters:
• ϵ (epsilon): Maximum distance for neighbors.
• minPts: Minimum points to form a dense region.


Core Concepts in DBSCAN

DBSCAN defines three types of points:


• Core Point: A point with at least minPts neighbors within distance ϵ.
• Border Point: A point within ϵ of a core point but with fewer than minPts
neighbors.
• Noise: Points that are neither core points nor border points.

Adopted from ai.plainenglish.io


Core Concepts in DBSCAN (cont.)

Definitions:
• A point xi is a core point if:

|{xj : d(xi , xj ) ≤ ϵ}| ≥ minPts

• A point is a border point if it is within distance ϵ of a core point, but not itself a core
point.


DBSCAN Algorithm Steps

Algorithm Steps (a usage sketch follows the list):
1 For each unvisited point xi :
• Mark xi as visited.
• Find all points within distance ϵ (neighborhood).
2 If xi is a core point:
• Create a new cluster and expand it by recursively adding all reachable core and
border points.
3 If xi is not a core point:
• Label it as noise if it does not belong to any cluster.
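
For reference, a minimal scikit-learn usage sketch; eps and min_samples correspond to ε and minPts, and the values used here are placeholders that must be tuned per dataset:

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 2)                   # placeholder data

db = DBSCAN(eps=0.1, min_samples=5).fit(X)   # eps = epsilon, min_samples = minPts
labels = db.labels_                          # cluster ids; -1 marks noise points
print("clusters found:", len(set(labels) - {-1}))
print("noise points:", int(np.sum(labels == -1)))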


Advantages of DBSCAN

• Can find clusters of arbitrary shape (non-spherical).


• Does not require specifying the number of clusters K in advance.
• Robust to noise and outliers.
• Works well with large datasets.

Adopted from mrinalyadav7.medium.com

Limitations of DBSCAN

• DBSCAN struggles with datasets of varying densities.


• Sensitive to the selection of parameters ϵ and minPts.
• Does not perform well with high-dimensional data.


Clustering Algorithms

• Each algorithm is suited for different kinds of patterns and information in data.


Contributions

• These slides have been prepared thanks to:
  • Hooman Zolfaghari

