ARTIFICIAL INTELLIGENCE & MACHINE LEARNING
SUBMITTED TO: Dr. Sunil Kumar Shukla
SUBMITTED BY: Isha Verma (0901AI211037)
COURSE OUTCOMES

CO1: Define basic concepts of Artificial Intelligence & Machine Learning:
• Definition of AI and its goals.
• Introduction to computation, psychology, and cognitive science in AI.
• Differentiating between AI, machine learning, deep learning, and related fields.
• Applications of AI and ML in real-world scenarios.

CO2: Illustrate various techniques for search and processing:
• Introduction to problem space and search algorithms.
• Techniques like BFS, DFS, heuristic search, hill climbing, and best-first search.

CO3: Identify various types of machine learning problems and techniques:
• Types of learning: supervised, unsupervised, and reinforcement learning.
• Differentiating between regression and classification problems.
• Unsupervised machine learning techniques: K-means clustering, DBSCAN, hierarchical clustering, etc.

CO4: Analyze various techniques in Artificial Intelligence, ANN & Machine Learning:
• Introduction to neural networks, including their history and architecture.
• Supervised machine learning techniques: linear regression, decision tree classifier, and random forest classifier.
• Performance parameters and applications of these techniques.

CO5: Apply AI and ML techniques to solve real-world problems:
• Case studies and examples throughout the units demonstrate the application of AI and ML techniques to address real-world problems.
UNSUPERVISED MACHINE LEARNING
Unsupervised learning is a kind of self-learning in which the algorithm finds previously hidden patterns in unlabeled datasets and produces the required output without any human intervention.
Unsupervised machine learning has the advantage of being able to work with unlabeled data. Because no human labor is required to label the dataset and make it machine-readable, the program can work on much larger datasets.
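As a small illustration of this point, the minimal Python sketch below (not taken from the course material) fits a clustering model to a purely unlabeled feature matrix; scikit-learn and the toy data points are assumptions made only for this example.

import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: six 2-D points, with no target/label column at all.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

# fit() receives only X; the algorithm uncovers the grouping by itself.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)   # two hidden groups found without any human labels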
UNSUPERVISED LEARNING
Clustering techniques: K-Means, hierarchical clustering (agglomerative and divisive), and DBSCAN.
DBSCAN
Step 1: Find the distance between each pair of points by calculating the Euclidean distance.
Step 2: For each point from P1 to P12, find the points whose distance is less than epsilon (a distance that defines the maximum distance between two points for them to be considered neighbors) and form groups.
Step 3: Groups that contain fewer points than the minimum number of points are treated as noise points.
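A minimal Python sketch of these three steps is given below. It is only an illustration, not a full DBSCAN implementation (the expansion of clusters from core points is omitted); the twelve random points standing in for P1 to P12, the epsilon value, and the minimum-point count are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)
points = rng.random((12, 2))      # stand-ins for P1 .. P12
eps, min_pts = 0.3, 3             # assumed epsilon and min. points

# Step 1: Euclidean distance between every pair of points.
diff = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=2))

# Step 2: for each point, group the neighbors that lie within epsilon.
groups = [np.where(dist[i] <= eps)[0] for i in range(len(points))]

# Step 3: points whose group has fewer than min_pts members are noise.
noise = [i for i, g in enumerate(groups) if len(g) < min_pts]
print("noise points:", noise)

In practice, scikit-learn's sklearn.cluster.DBSCAN(eps=..., min_samples=...) carries out these steps and additionally grows clusters outward from the core points.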
DIVISIVE CLUSTERING
Step 3: Choose a splitting criterion to divide the current cluster into two smaller clusters. Distance criterion: split the cluster along the dimension with the highest variance or spread.
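The distance-based splitting criterion above can be sketched in a few lines of Python; the toy cluster and the median-based split point are assumptions added for illustration, not part of the course material.

import numpy as np

cluster = np.array([[1.0, 5.0], [2.0, 9.0], [1.5, 1.0],
                    [1.8, 8.5], [1.2, 2.0]])

# Pick the dimension (column) whose values show the highest variance.
split_dim = int(np.argmax(cluster.var(axis=0)))

# Divide the current cluster into two smaller clusters at the median
# of that dimension (one simple choice of split point).
threshold = np.median(cluster[:, split_dim])
left = cluster[cluster[:, split_dim] <= threshold]
right = cluster[cluster[:, split_dim] > threshold]
print(split_dim, len(left), len(right))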