
ARTIFICIAL INTELLIGENCE
&
MACHINE LEARNING
SUBMITTED TO: Dr. Sunil Kumar Shukla
SUBMITTED BY: Isha Verma (0901AI211037)
COURSE OUTCOMES

CO1: Define basic concepts of Artificial Intelligence & Machine Learning:
• Definition of AI and its goals.
• Introduction to computation, psychology, and cognitive science in AI.
• Differentiating between AI, machine learning, deep learning, and related fields.
• Applications of AI and ML in real-world scenarios.

CO2: Illustrate various techniques for search and processing:
• Introduction to problem space and search algorithms.
• Techniques like BFS, DFS, heuristic search, hill climbing, and best-first search.

CO3: Identify various types of machine learning problems and techniques:
• Types of learning: supervised, unsupervised, and reinforcement learning.
• Differentiating between regression and classification problems.
• Unsupervised machine learning techniques: K-means clustering, DBSCAN, hierarchical clustering, etc.

CO4: Analyze various techniques in Artificial Intelligence, ANN & Machine Learning:
• Introduction to neural networks, including their history and architecture.
• Supervised machine learning techniques: linear regression, decision tree classifier, and random forest classifier.
• Performance parameters and applications of these techniques.

CO5: Apply AI and ML techniques to solve real-world problems:
• Case studies and examples throughout the units demonstrate the application of AI and ML techniques to address real-world problems.
UNSUPERVISED MACHINE LEARNING
Unsupervised learning is a kind of self-learning in which the algorithm finds previously hidden patterns in unlabeled datasets and produces the required output without human intervention.

Unsupervised machine learning has the advantage of working with unlabeled data. This means no human labor is required to make the dataset machine-readable, allowing the program to work on much larger datasets.

UNSUPERVISED LEARNING

• Partitioning methods: K-Means Clustering
• Density-based methods: DBSCAN
• Hierarchical methods: Agglomerative, Divisive
DBSCAN

Step 1: Find the distance between every pair of given points by calculating the Euclidean distance.

Step 2: For each point (P1 to P12 in the example), find all points at a distance less than epsilon (the maximum distance between two points for them to be considered neighbors) and form groups.

Step 3: Groups that contain fewer points than the minimum number of points are noise points. Groups that contain at least the minimum number of points are core points.

Noise points that fall within the epsilon neighborhood of a core point become border points.
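The steps above can be sketched as a minimal from-scratch implementation. The point coordinates, eps, and min_points values below are hypothetical, and this is a sketch rather than a production implementation (libraries such as scikit-learn ship an optimized DBSCAN):

```python
import math

def dbscan(points, eps, min_points):
    """Label each point with a cluster id, or -1 for noise (minimal sketch)."""
    n = len(points)
    # Steps 1-2: epsilon-neighborhood of every point (includes the point itself).
    neighbors = [[j for j in range(n)
                  if math.dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    # Step 3: points with at least min_points neighbors are core points.
    core = [len(neighbors[i]) >= min_points for i in range(n)]
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # Grow a new cluster outward from this core point.
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border or core point joins
                if core[j]:                  # only core points keep expanding
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels

points = [(1.0, 1.0), (1.1, 1.0), (0.9, 1.1),   # dense group
          (8.0, 8.0), (8.1, 8.2), (7.9, 8.1),   # second dense group
          (25.0, 25.0)]                          # isolated point
print(dbscan(points, eps=0.5, min_points=3))     # → [0, 0, 0, 1, 1, 1, -1]
```

The isolated point has no dense neighborhood, so it keeps the noise label -1.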
K-Means Clustering

Step 1: Initialize centroids.

Step 2: For each data point, calculate the distance to each centroid (by Euclidean distance).

Step 3: Assign each point to the cluster whose centroid is nearest.

Step 4: Update centroids:
• Recalculate the centroids of the clusters based on the mean of the data points assigned to each cluster.

Step 5: Repeat the assignment and centroid-update steps until one of the stopping criteria is met:
• The centroids no longer change significantly between iterations.
• Maximum number of iterations: a predefined number of iterations has been reached.

Step 6: Finalize clustering:
• Once convergence is achieved, the final centroids represent the centers of the clusters.
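The procedure above can be sketched with NumPy. The sample data, the value of k, and the random-data-point initialization are assumptions for illustration; this minimal sketch also does not handle the empty-cluster edge case:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means sketch: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize centroids as k randomly chosen data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Steps 2-3: assign every point to its nearest centroid (Euclidean).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: stop once the centroids no longer change significantly.
        if np.allclose(new, centroids):
            break
        centroids = new
    # Step 6: the final centroids represent the cluster centers.
    return centroids, labels

X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.2, 7.9]])
centroids, labels = kmeans(X, k=2)
print(labels)  # the two tight groups end up in different clusters
```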
Agglomerative Clustering

Step 1: Arrange data points in increasing order.

Step 2: Calculate the distance between data points by Euclidean distance, forming a distance matrix.

Step 3: Identify the two clusters with the smallest distance between them. These clusters are merged to form a new cluster.

Step 4: Recalculate the distances between the new cluster and all remaining clusters.

Step 5: Continue merging clusters iteratively until a stopping criterion is met.

Step 6: Construct the dendrogram.
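A minimal sketch of the merging loop, assuming single-linkage distance (the smallest pairwise Euclidean distance between two clusters) and a target number of clusters as the stopping criterion; the sample points are hypothetical and dendrogram construction is omitted:

```python
import math

def agglomerative(points, target_clusters):
    """Single-linkage agglomerative clustering sketch."""
    # Start with every point in its own cluster.
    clusters = [[p] for p in points]

    def cluster_dist(a, b):
        # Single linkage: distance between the closest pair of points.
        return min(math.dist(p, q) for p in a for q in b)

    # Steps 3-5: repeatedly merge the two closest clusters.
    while len(clusters) > target_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]   # merge cluster j into i
        del clusters[j]
    return clusters

points = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
print(agglomerative(points, target_clusters=2))
```

Recomputing all pairwise cluster distances each iteration (Step 4) is simple but quadratic; real implementations cache and update the distance matrix instead.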
Divisive Clustering

Step 1: Begin with a single cluster containing all data points.

Step 2: Calculate the centroid or representative point of the current cluster. This could be the mean or median of the data points in the cluster.

Step 3: Choose a splitting criterion to divide the current cluster into two smaller clusters, e.g. distance: split the cluster along the dimension with the highest variance or spread.

Step 4: Recursively apply the splitting process to each of the resulting subclusters.

Step 5: Continue splitting until a stopping criterion is met. This criterion could be:
• A predefined number of clusters is reached.
• The size of the clusters falls below a certain threshold.
• The splitting process no longer results in meaningful cluster division.
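A minimal sketch of this top-down process. The splitting rule and stopping criterion are assumptions picked from the options above: split along the highest-variance dimension at its median (the median serves as the representative point of Step 2), and stop on a cluster-size threshold:

```python
import statistics

def divisive(points, max_size):
    """Divisive clustering sketch: recursively bisect a cluster."""
    # Step 5 (assumed criterion): stop when the cluster is small enough.
    if len(points) <= max_size:
        return [points]
    # Step 3: split along the dimension with the highest variance,
    # cutting at that dimension's median (Step 2's representative point).
    dims = range(len(points[0]))
    d = max(dims, key=lambda d: statistics.pvariance(p[d] for p in points))
    cut = statistics.median(p[d] for p in points)
    left = [p for p in points if p[d] <= cut]
    right = [p for p in points if p[d] > cut]
    if not left or not right:
        return [points]  # no meaningful division possible: stop
    # Step 4: recursively split each resulting subcluster.
    return divisive(left, max_size) + divisive(right, max_size)

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
print(divisive(points, max_size=2))  # → [[(0, 0), (0, 1)], [(5, 5), (5, 6)]]
```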
THANK
YOU
