AIML Observation
(Accredited by NBA)
Name :
Year / Sec :
Academic Year : 2023-24
Semester : Jan 2024 - June 2024
Vision and Mission - Institute
Vision
To carve the youth as dynamic, competent, valued and knowledgeable Technocrats through
research, innovation and entrepreneurial development for accomplishing global
expectations.
Mission
M1: Inculcate academic excellence in engineering education to create talented
professionals
M2: Promote research in basic sciences and applied engineering among faculty and
students to fulfil societal expectations.
M3: Holistic development of students through meaningful interaction with industry and
academia.
M4: Foster the students on par with sustainable development goals thereby
contributing to the process of nation building
M5: To nurture and retain a conducive lifelong learning environment towards professional
excellence.
Mission - Department
M1: Provide quality education to the students in core and allied fields by implementing
advanced pedagogies.
M2: Create ardor among faculty as well as students to achieve excellence in emerging
research areas
M3: Imbibe industry relevant skills to the students through industry interaction thereby
bridging the campus to corporate gap.
M4: To endow the students with broad intellectual spectra pertaining to the sustainable
development goals.
M5: To instill the thirst for lifelong learning among students to excel in their fields of
interest.
Program Specific Outcomes (PSOs)
PSO1: Design and develop electronic circuits assimilating futuristic technologies of Signal
Processing, Communication, VLSI and Embedded Systems using modern hardware and software
tools to cater to the expectations of solving real-time problems.
PSO2: Instill professional skill sets with ethical principles and tools for Networking,
Communication and Integrated Circuits to provide solutions for societal benefits.
Semester / Year : VI / III
Course Code & Name : CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Regulations : R-2021
Branch of the Students : B.E. - ECE
Academic Year : 2023-24
Batch / Section : 2021-2025 / III ECE - A & B
CO          PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CS3491.1     3   2   2   3   1   3   2   -   -   -    -    1    3    3    3
CS3491.2     3   2   2   3   1   3   2   -   -   -    -    1    3    3    3
CS3491.3     1   2   1   3   2   3   2   -   -   -    -    1    3    3    3
CS3491.4     1   2   3   1   3   3   2   -   -   -    -    1    3    3    3
CS3491.5     2   2   2   -   3   3   2   -   -   -    -    1    3    3    3
CS3491       2   2   2   2   2   3   2   -   -   -    -    1    3    3    3
Syllabus
CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
CONTENTS
Sl. No.   Date   Experiments   Page No.   Mark   Sign.
Introduction
Machine learning
Machine learning is a subset of artificial intelligence in the field of computer science that often
uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve
performance on a specific task) with data, without being explicitly programmed. In the past
decade, machine learning has given us self-driving cars, practical speech recognition, effective
web search, and a vastly improved understanding of the human genome.
Machine learning tasks are typically classified into broad categories, depending on whether
there is a learning "signal" or "feedback" available to the learning system:
1. Supervised learning: The computer is presented with example inputs and their desired
outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
As special cases, the input signal can be only partially available, or restricted to special feedback.
2. Active learning: The computer can only obtain training labels for a limited set of instances
(based on a budget), and also has to optimize its choice of objects to acquire labels for. When
used interactively, these can be presented to the user for labeling.
3. Reinforcement learning: Training data (in the form of rewards and punishments) is given only as
feedback to the program's actions in a dynamic environment, such as driving a vehicle or playing
a game against an opponent.
4. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden
patterns in data) or a means towards an end (feature learning).
In classification, inputs are divided into two or more classes, and the learner must produce a
model that assigns unseen inputs to one or more (multi-label classification) of these classes. This
is typically tackled in a supervised manner. Spam filtering is an example of classification, where
the inputs are email (or other) messages and the classes are "spam" and "not spam".
In regression, also a supervised problem, the outputs are continuous rather than discrete. In
clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are not
known beforehand, making this typically an unsupervised task. Density estimation finds the
distribution of inputs in some space.
Dimensionality reduction simplifies inputs by mapping them into a lower dimensional space.
Topic modeling is a related problem, where a program is given a list of human language
documents and is tasked with finding out which documents cover similar topics.
4. Deep learning
Falling hardware prices and the development of GPUs for personal use in the last few years have
contributed to the development of the concept of deep learning which consists of multiple hidden
layers in an artificial neural network. This approach tries to model the way the human brain
processes light and sound into vision and hearing. Some successful applications of deep learning
are computer vision and speech recognition.
6. Support vector machines
Support vector machines (SVMs) are supervised learning models used for classification and
regression analysis. Given a set of training examples, each marked as belonging to one
of two categories, an SVM training algorithm builds a model that predicts whether a new
example falls into one category or the other.
7. Clustering
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that
observations within the same cluster are similar according to some pre designated criterion or
criteria, while observations drawn from different clusters are dissimilar. Different clustering
techniques make different assumptions on the structure of the data, often defined by some
similarity metric and evaluated for example by internal compactness (similarity between
members of the same cluster) and separation between different clusters. Other methods are based
on estimated density and graph connectivity. Clustering is a method of unsupervised learning,
and a common technique for statistical data analysis.
8. Bayesian networks
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic
graphical model that represents a set of random variables and their conditional independencies
via a directed acyclic graph (DAG). For example, a Bayesian network could represent the
probabilistic relationships between diseases and symptoms. Given symptoms, the network can be
used to compute the probabilities of the presence of various diseases. Efficient algorithms exist
that perform inference and learning.
9. Reinforcement learning
Reinforcement learning is concerned with how an agent ought to take actions in an environment
so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt
to find a policy that maps states of the world to the actions the agent ought to take in those states.
Reinforcement learning differs from the supervised learning problem in that correct input/output
pairs are never presented, nor sub-optimal actions explicitly corrected.
10. Similarity and metric learning
In this problem, the learning machine is given pairs of examples that are considered similar and
pairs of less similar objects. It then needs to learn a similarity function (or a distance metric
function) that can predict if new objects are similar. It is sometimes used in Recommendation
systems.
1.1 AIM:
To Implement Uninformed search algorithms (BFS and DFS)
1.3 ALGORITHM
BFS Algorithm
Breadth-First Search (BFS) is an algorithm used for traversing graphs or trees.
Traversing means visiting each node of the graph. BFS visits all the vertices of a graph or
tree level by level and is typically implemented with a queue; in Python it can be built from
data structures like a dictionary and a list. BFS in a tree and in a graph is almost the same.
The only difference is that a graph may contain cycles, so visited nodes must be marked to
avoid traversing the same node again.
Step 1: Enqueue the starting node into a queue data structure.
Step 2: Dequeue a node and mark it as visited.
Step 3: Enqueue all adjacent nodes of the dequeued node that are not yet visited.
Step 4: Repeat steps 2-3 until the queue is empty.
DFS Algorithm
Depth-First Search (DFS) can be implemented recursively or with an explicit stack.
A standard Depth-First Search implementation puts every vertex of the graph into one
of two categories: 1) Visited, 2) Not Visited. The purpose of the algorithm is to
visit all the vertices of the graph while avoiding cycles.
Step 1: Start by putting any one of the graph's vertices on top of the stack.
Step 2: Take the top item of the stack and add it to the visited list.
Step 3: Create a list of that vertex's adjacent nodes. Push the ones which
aren't in the visited list onto the top of the stack.
Step 4: Keep repeating steps 2 and 3 until the stack is empty.
1.4 PROGRAM & OUTPUT
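The record shows only the driver calls for bfs and dfs; the graph definition and the two helper functions they assume are sketched below. The adjacency-list values are illustrative assumptions, chosen so that '5' is a valid start node.
# Minimal sketch (assumed): graph and the bfs/dfs helpers used by the driver code
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []   # visited nodes (use a fresh container when running the DFS driver)

def bfs(visited, graph, node):
    # Breadth-first traversal: visit nodes level by level using a queue
    visited.append(node)
    queue = [node]
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

def dfs(visited, graph, node):
    # Recursive depth-first traversal: go as deep as possible before backtracking
    if node not in visited:
        print(node, end=" ")
        visited.append(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)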
# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5') # function calling
OUTPUT
# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
OUTPUT
1.5 PROCEDURE
1.6 RESULT
2.1 AIM:
To Implement Informed search algorithms (A* and AO*)
2.3 ALGORITHM
A* Search Algorithm:
A* Search Algorithm is a path finding algorithm. It is similar in spirit to Breadth-First Search (BFS),
but it expands nodes in order of estimated total cost, searching for the shortest path using a
heuristic value assigned to each node together with the actual cost from the source node to the
destination node.
Real-life Examples
Maps
Games
Formula for the A* / AO* algorithms
h(n) = heuristic value
g(n) = actual cost
f(n) = actual cost + heuristic value
f(n) = g(n) + h(n)
Program :
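The listing below begins midway through an aStarAlgo function; the opening that the fragment assumes is sketched here for readability (an assumption, not part of the original record). The recorded fragment continues from the else branch of the inner loop.
def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])        # nodes discovered but not yet expanded
    closed_set = set()                  # nodes already expanded
    g = {start_node: 0}                 # cost from the start node to each node
    parents = {start_node: start_node}  # parent map used to rebuild the path
    while len(open_set) > 0:
        n = None
        # pick the open node with the lowest f(n) = g(n) + h(n)
        for v in open_set:
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n != stop_node and get_neighbors(n) is not None:
            for (m, weight) in get_neighbors(n):
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight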
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    H_dist = {
        'A': 10,
        'B': 8,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]

Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)],
}

aStarAlgo('A', 'J')
Output
Program
class Graph:
    def __init__(self, graph, heuristicNodeList, startNode):
        # instantiate graph object with graph topology, heuristic values, start node
        self.graph = graph
        self.H = heuristicNodeList
        self.start = startNode
        self.parent = {}
        self.status = {}
        self.solutionGraph = {}
    def getStatus(self, v):   # returns the current status of a node (0 by default)
        return self.status.get(v, 0)
    def printSolution(self):
        print("FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE:", self.start)
        print("------------------------------------------------------------")
        print(self.solutionGraph)
        print("------------------------------------------------------------")
            if flag == True:   # initialize Minimum Cost with the cost of the first set of child node/s
                minimumCost = cost
                costToChildNodeListDict[minimumCost] = nodeList   # set the Minimum Cost child node/s
                flag = False
            else:   # compare the current set's cost with the current Minimum Cost
                if minimumCost > cost:
                    minimumCost = cost
                    costToChildNodeListDict[minimumCost] = nodeList   # set the Minimum Cost child node/s
    def aoStar(self, v, backTracking):   # AO* algorithm for a start node and backTracking status flag
        print("-----------------------------------------------------------------------------------------")
        if solved == True:   # if the Minimum Cost nodes of v are solved, set the current node status as solved (-1)
            self.setStatus(v, -1)
            self.solutionGraph[v] = childNodeList   # update the solution graph with the solved nodes, which may be part of the solution
        if v != self.start:   # if the current node is not the start node, backtrack to its parent
            self.aoStar(self.parent[v], True)   # backtrack the current node value with backtracking status set to true
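The Graph class above is reproduced only partially; the driver code that follows relies on a few methods that are not shown in the record (applyAOStar, setStatus and the heuristic accessors). A minimal sketch of what such helpers typically look like, assuming the attribute names used above:
    def applyAOStar(self):                        # start the AO* search from the start node
        self.aoStar(self.start, False)

    def setStatus(self, v, val):                  # set the status of a node
        self.status[v] = val

    def getHeuristicNodeValue(self, n):           # current (revised) heuristic value of a node
        return self.H.get(n, 0)

    def setHeuristicNodeValue(self, n, value):    # update the revised heuristic value of a node
        self.H[n] = value

    def getNeighbors(self, v):                    # AND-OR successor lists of a node
        return self.graph.get(v, '')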
h1 = {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
graph1 = {
    'A': [[('B', 1), ('C', 1)], [('D', 1)]],
    'B': [[('G', 1)], [('H', 1)]],
    'C': [[('J', 1)]],
    'D': [[('E', 1), ('F', 1)]],
    'G': [[('I', 1)]]
}
G1 = Graph(graph1, h1, 'A')
G1.applyAOStar()
G1.printSolution()

h2 = {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}   # heuristic values of nodes
graph2 = {                                    # graph of nodes and edges
    'A': [[('B', 1), ('C', 1)], [('D', 1)]],  # neighbors of node 'A': B, C & D with respective weights
    'B': [[('G', 1)], [('H', 1)]],            # neighbors are included in a list of lists
    'D': [[('E', 1), ('F', 1)]]               # each sublist indicates "OR" nodes or "AND" nodes
}
G2 = Graph(graph2, h2, 'A')   # instantiate Graph object with graph, heuristic values and start node
G2.applyAOStar()              # run the AO* algorithm
G2.printSolution()            # print the solution graph found by the AO* search
Output:
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : G
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1,
'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : I
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': []}
PROCESSING NODE : G
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I']}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : C
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : J
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': []}
PROCESSING NODE : C
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 1, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0,
'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J']}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
2.5 PROCEDURE
Open Python IDLE / Google Colab.
2.6 RESULT
3.1 AIM:
To Implement Naïve Bayes Models
3.3 ALGORITHM
Conditional probability is defined as the likelihood of an event or outcome occurring, based
on the occurrence of a previous event or outcome. Conditional probability is calculated by
multiplying the probability of the preceding event by the updated probability of the
succeeding, or conditional, event
Bayes' Rule
Bayes' theorem, given by Thomas Bayes, a British mathematician, in 1763, provides a means for
calculating the probability of an event given some information. Mathematically, Bayes' theorem
can be stated as:
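For a hypothesis H and evidence E:
P(H|E) = [P(E|H) x P(H)] / P(E)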
Naive Bayes
Bayes' rule provides us with the formula for the probability of Y given some feature X. In real-
world problems we hardly find any case where there is only one feature. When the features are
independent, we can extend Bayes' rule to what is called Naive Bayes, which assumes that the
features are independent: changing the value of one feature does not influence the values of the
other variables, and this is why the algorithm is called "naive". Naive Bayes can be used for many
tasks such as face recognition, weather prediction, medical diagnosis, news classification and
sentiment analysis.
When there are multiple X variables, we simplify the computation by assuming that the X's are
independent, so the likelihood factorizes into a product over the individual features. Also, the
probability density function (PDF) of a normal distribution with mean μ and standard deviation σ is
given by:
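f(x) = (1 / (σ √(2π))) exp( -(x - μ)² / (2σ²) )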
We can use this formula to compute the likelihoods when our data is continuous.
Problem statement:
- Given features X1, X2, ..., Xn
- Predict a label Y
Example: X = (Rainy, Hot, High, False), y = No
Or
Consider a random experiment of tossing 2 coins. The sample space here will be:
S = {HH, HT, TH, TT}
P(H) is the probability of hypothesis H being true. This is known as the prior
probability.
P(E) is the probability of the evidence(regardless of the hypothesis).
P(H|E) is the probability of the hypothesis given that the evidence is there.
## import libraries (assumed: scikit-learn's synthetic-data generator and Gaussian Naive Bayes)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, ConfusionMatrixDisplay

X, y = make_classification(
    n_features=6,
    n_classes=3,
    n_samples=800,
    n_informative=2,
    random_state=1,
    n_clusters_per_class=1,
)

# Split the data and build the classifier (split values assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=125)
model = GaussianNB()
# Model training
model.fit(X_train, y_train)
# Predict Output
predicted = model.predict([X_test[6]])
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_pred, y_test)
f1 = f1_score(y_pred, y_test, average="weighted")
print("Accuracy:", accuracy)
print("F1 Score:", f1)
labels = [0,1,2]
cm = confusion_matrix(y_test, y_pred, labels=labels)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot();
OUTPUT
Accuracy: 0.8484848484848485
F1 Score: 0.8491119695890328
3.5 PROCEDURE
Open Python IDLE / Google Colab.
Write the program.
Run the program.
Observe the output and take the hard copy.
Write the program for various examples / applications, observe the output and take the hard copy.
3.6 RESULT
4.1 AIM:
To Implement Bayesian Networks
4.3 ALGORITHM
This section will be about obtaining a Bayesian network, given a set of sample data. Learning a
Bayesian network can be split into two problems:
Parameter learning: Given a set of data samples and a DAG that captures the dependencies
between the variables, estimate the (conditional) probability distributions of the individual
variables.
Structure learning: Given a set of data samples, estimate a DAG that captures the dependencies
between the variables.
This experiment illustrates how parameter learning and structure learning can be done with
pgmpy. For parameter learning on discrete nodes, the library currently supports Maximum
Likelihood Estimation and Bayesian Estimation.
The Bayesian Parameter Estimator starts with already existing prior CPDs, that express
our beliefs about the variables before the data was observed. Those "priors" are then
updated, using the state counts from the observed data.
One can think of the priors as consisting of pseudo state counts that are added to the
actual counts before normalization. Unless one wants to encode specific beliefs about
the distributions of the variables, one commonly chooses uniform priors, i.e. ones that
deem all states equiprobable.
A very simple prior is the so-called K2 prior, which simply adds 1 to the count of every
single state. A somewhat more sensible choice of prior is BDeu (Bayesian Dirichlet
equivalent uniform prior). For BDeu we need to specify an equivalent sample size N and
then the pseudo-counts are the equivalent of having observed N uniform samples of each
variable (and each parent configuration).
Parameter Learning
Parameter learning is the task of estimating the values of the conditional probability distributions
(CPDs) for the variables fruit, size, and tasty.
Program :
!pip install pgmpy
!pip install pandas
!pip install numpy
import pandas as pd
data = pd.DataFrame(data={
    'fruit': ["banana", "apple", "banana", "apple", "banana", "apple", "banana",
              "apple", "apple", "apple", "banana", "banana", "apple", "banana"],
    'tasty': ["yes", "no", "yes", "yes", "yes", "yes", "yes",
              "yes", "yes", "yes", "yes", "no", "no", "no"],
    'size': ["large", "large", "large", "small", "large", "large", "large",
             "small", "large", "large", "large", "large", "small", "small"]})
print(data)
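The output that follows (the fruit state counts and the CPD for fruit) comes from an estimation step that is not reproduced in the record. A minimal sketch based on pgmpy's parameter-learning tutorial, where the fruit/size -> tasty structure is an assumption consistent with the text above:
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import ParameterEstimator, MaximumLikelihoodEstimator, BayesianEstimator

model = BayesianNetwork([('fruit', 'tasty'), ('size', 'tasty')])

# raw (conditional) state counts from the data
pe = ParameterEstimator(model, data)
print(pe.state_counts('fruit'))

# maximum-likelihood CPDs
mle = MaximumLikelihoodEstimator(model, data)
print(mle.estimate_cpd('fruit'))
print(mle.estimate_cpd('tasty'))

# Bayesian estimation with a BDeu prior, as described in the algorithm section
est = BayesianEstimator(model, data)
print(est.estimate_cpd('tasty', prior_type='BDeu', equivalent_sample_size=10))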
OUTPUT
fruit
apple 7
banana 7
+---------------+-----+
| fruit(apple) | 0.5 |
+---------------+-----+
| fruit(banana) | 0.5 |
+---------------+-----+
4.6 RESULT
5.1 AIM:
To Build Regression Models
5.3 ALGORITHM
Regression analysis is a commonly used statistical technique for predicting the relationship between a
dependent variable and one or more independent variables. In the field of machine learning, regression
algorithms are used to make predictions about continuous variables, such as housing prices, student scores,
or medical outcomes. Python, being one of the most widely used programming languages in data science
and machine learning, has a variety of powerful libraries for implementing regression algorithms.
1.Multiple linear regression is a statistical method used to model the relationship between a
dependent variable and two or more independent variables. It is an extension of simple linear
regression, where only one independent variable is used to predict the dependent variable.
2.Polynomial regression is a form of regression analysis in which the relationship between the
independent variable x and the dependent variable y is modeled as an nth degree polynomial. It
allows for more flexibility to model non-linear relationships between variables, unlike linear
regression which assumes that the relationship is linear. Below you can see the generalized
equation for polynomial regression, where y is the dependent variable, and the x values would be
the independent variables. Notice how we could expand this by choosing higher orders of
polynomials (to some order k) and we could have also included interaction terms.
3. Ridge Regression is a variation of linear regression that addresses some of the issues of linear
regression. Linear regression can be prone to overfitting when the number of independent
variables is large, this is because the coefficients of the independent variables can become very
large leading to a complex model that fits the noise of the data. Ridge Regression solves this issue
by adding a term to the linear regression equation called L2 regularization term, also known as
Ridge Penalty, which is the sum of the squares of the coefficients multiplied by a regularization
parameter lambda.
4. LASSO (Least Absolute Shrinkage And Selection Operator) is another variation of linear
regression that addresses some of the issues of linear regression. It is used to solve the problem of
overfitting when the number of independent variables is large. Lasso Regression adds a term to
the linear regression equation called L1 regularization term, also known as Lasso Penalty, which
is the sum of the absolute values of the coefficients multiplied by a regularization parameter
lambda.
5. Elastic Net Regression is a hybrid of Ridge Regression and Lasso Regression that combines
the strengths of both. It addresses the problem of overfitting when the number of independent
variables is large by adding both L1 and L2 regularization terms to the linear regression equation .
6. Decision tree based regression is a method that uses decision trees to model the relationship
between a dependent variable and one or more independent variables. Decision Trees are widely
used machine learning algorithms that can be used for both classification and regression problems
in python. A decision tree is a tree-like structure where each internal node represents a test on an
attribute, each branch represents an outcome of the test, and each leaf node represents a predicted
value or class
7. Support Vector Regression (SVR) is a type of Support Vector Machine (SVM) algorithm,
a supervised learning method that can be used for regression problems. In SVR, the goal is to
find a function that deviates from the true value of the dependent variable by at most a margin ε
for as many training points as possible, while keeping the model as flat as possible; larger errors
are penalized. The optimization problem of SVR can therefore be formulated as minimizing
(1/2)||w||² + C Σ(ξi + ξi*) subject to the ε-insensitive constraints on the training errors.
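1. Multiple linear regression
The program for this first example is not reproduced in the record; the output below (array([16.])) matches the form produced by scikit-learn's LinearRegression documentation example, a sketch of which is (data values are illustrative):
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit y = 1*x1 + 2*x2 + 3 on four sample points
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3
reg = LinearRegression().fit(X, y)

# Predict for a new point; 3*1 + 5*2 + 3 = 16
print(reg.predict(np.array([[3, 5]])))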
OUTPUT:
array([16.])
2. Polynomial regression
# polynomial
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)
X
poly = PolynomialFeatures(2)
poly.fit_transform(X)
poly = PolynomialFeatures(interaction_only=True)
poly.fit_transform(X)
OUTPUT:
array([[ 1., 0., 1., 0.],
[ 1., 2., 3., 6.],
[ 1., 4., 5., 20.]])
3. Ridge regression
from sklearn.linear_model import Ridge
import numpy as np
n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
clf = Ridge(alpha=1.0)
clf.fit(X, y)
OUTPUT:
Ridge
Ridge()
4. Lasso regression
#lasso
from sklearn import linear_model
clf = linear_model.Lasso(alpha=0.1)
clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
print(clf.coef_)
print(clf.intercept_)
# Fragment from a separate Ridge example (GaussianFeatures and basis_plot are
# helper functions that are not defined in this record):
from sklearn.linear_model import Ridge
model = make_pipeline(GaussianFeatures(30), Ridge(alpha=0.1))
basis_plot(model, title='Ridge Regression')
OUTPUT:
[0.85 0. ]
0.15000000000000002
5. Elastic Net regression
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression
X, y = make_regression(n_features=2, random_state=0)
regr = ElasticNet(random_state=0)
regr.fit(X, y)
print(regr.coef_)
print(regr.intercept_)
print(regr.predict([[0, 0]]))
OUTPUT:
[18.83816048 64.55968825]
1.4512607561653996
[1.45126076]
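6. Decision tree regression
The decision-tree regression program is not reproduced in the record; the output below matches scikit-learn's DecisionTreeRegressor cross-validation example, a sketch of which is (dataset and parameters assumed from that example):
from sklearn.datasets import load_diabetes
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# 10-fold cross-validated R^2 scores of a decision-tree regressor
X, y = load_diabetes(return_X_y=True)
regressor = DecisionTreeRegressor(random_state=0)
print(cross_val_score(regressor, X, y, cv=10))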
OUTPUT:
array([-0.39292219, -0.46749346, 0.02768473, 0.06441362, -0.50323135,
0.16437202, 0.11242982, -0.73798979, -0.30953155, -0.00137327])
#SVR
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np
n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
regr = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.2))
regr.fit(X, y)
OUTPUT:
Application - Linear Regression (Boston housing data)
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.model_selection import train_test_split

data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
y = raw_df.values[1::2, 2]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)
reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)
# regression coefficients
print('Coefficients: ', reg.coef_)
# plotting legend
plt.legend(loc='upper right')
# plot title
plt.title("Residual errors")
OUTPUT:
Coefficients: [-8.95714048e-02  6.73132853e-02  5.04649248e-02  2.18579583e+00
 -1.72053975e+01  3.63606995e+00  2.05579939e-03 -1.36602886e+00
  2.89576718e-01 -1.22700072e-02 -8.34881849e-01  9.40360790e-03
 -5.04008320e-01]
Variance score: 0.7209056672661748
5.5 PROCEDURE
5.6 RESULT
6.1 AIM:
To build Decision Trees and Random Forests
6.3 ALGORITHM
A decision tree is a supervised machine-learning algorithm that can be used for
both classification and regression problems. The algorithm builds its model in the
structure of a tree with decision nodes and leaf nodes. A decision tree is
simply a series of sequential decisions made to reach a specific result.
The Palmer Penguins dataset
This Colab uses the Palmer Penguins dataset, which contains size measurements for
three penguin species:
Chinstrap
Gentoo
Adelie
This is a classification problem—the goal is to predict the species of penguin
based on data in the Palmer's Penguins dataset. Let’s meet the penguins.
https://www.kaggle.com/code/sohamsave/personal-loan-prediction-using-decision-tree
import numpy as np
import pandas as pd
import tensorflow_decision_forests as tfdf
path = "https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins.csv"
pandas_dataset = pd.read_csv(path)
label = "species"
classes = list(pandas_dataset[label].unique())
print(f"Label classes: {classes}")
# >> Label classes: ['Adelie', 'Gentoo', 'Chinstrap']
pandas_dataset[label] = pandas_dataset[label].map(classes.index)
np.random.seed(1)
# Use the ~10% of the examples as the testing set
# and the remaining ~90% of the examples as the training set.
test_indices = np.random.rand(len(pandas_dataset)) < 0.1
pandas_train_dataset = pandas_dataset[~test_indices]
pandas_test_dataset = pandas_dataset[test_indices]
tf_train_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pandas_train_dataset, label=label)
model = tfdf.keras.CartModel()
model.fit(tf_train_dataset)
tfdf.model_plotter.plot_model_in_colab(model, max_depth=10)
tf_test_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pandas_test_dataset, label=label)
print("Test evaluation: ", model.evaluate(tf_test_dataset, return_dict=True))
# >> Test evaluation: {'loss': 0.0, 'accuracy': 0.97142}
OUTPUT:
The plot_model_in_colab call returns a Colab HTML element showing the learned decision tree.
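The experiment title also calls for a random forest. A minimal sketch of the corresponding step on the same dataset, assuming TensorFlow Decision Forests' RandomForestModel:
# Train and evaluate a random forest on the same penguin dataset
rf_model = tfdf.keras.RandomForestModel()
rf_model.fit(tf_train_dataset)
print("Random forest test evaluation: ",
      rf_model.evaluate(tf_test_dataset, return_dict=True))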
6.6 RESULT
7.1 AIM:
To build SVM Models
7.3 ALGORITHM
The main objective is to segregate the given dataset in the best possible way. The distance
between the nearest points of the two classes is known as the margin. The objective is to select a
hyperplane with the maximum possible margin between the support vectors in the given dataset.
SVM searches for the maximum-margin hyperplane in the following steps:
1. Generate hyperplanes which segregate the classes in the best way. The left-hand figure
shows three hyperplanes, black, blue and orange; the blue and orange hyperplanes have higher
classification error, while the black one separates the two classes correctly.
2. Select the hyperplane with the maximum separation from the nearest data points of either
class, as shown in the right-hand figure.
import math
from math import exp
import random
import pandas as pd
import numpy as np
import urllib.request
import requests
from sklearn import datasets

# Load dataset
cancer = datasets.load_breast_cancer()
# kernel implementation
def K(x, xi):
    # Choose one of the following implementations:
    # Linear kernel:
    # return sum(x * xi)
    # Gaussian (RBF) kernel:
    gamma = 1   # kernel parameter
    return exp(-gamma * sum((x_i - xi_i)**2 for x_i, xi_i in zip(x, xi)))
# print data(feature)shape
cancer.data.shape
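The record stops after inspecting the data shape, while the output below also lists the feature names, labels and the first rows of data. A minimal sketch of the printing, training and evaluation steps this experiment normally performs with scikit-learn's SVC (the split values and kernel choice are assumptions):
from sklearn.model_selection import train_test_split
from sklearn import svm, metrics

# Inspect the dataset (produces the Features / Labels / data rows shown below)
print("Features:", cancer.feature_names)
print("Labels:", cancer.target_names)
print(cancer.data[0:5])

# Split the data (70% train / 30% test, split parameters assumed)
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, test_size=0.3, random_state=109)

# Train a linear-kernel SVM classifier and evaluate it
clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))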
OUTPUT:
Features: ['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
Labels: ['malignant' 'benign']
[[1.799e+01 1.038e+01 1.228e+02 1.001e+03 1.184e-01 2.776e-01 3.001e-01
1.471e-01 2.419e-01 7.871e-02 1.095e+00 9.053e-01 8.589e+00 1.534e+02
6.399e-03 4.904e-02 5.373e-02 1.587e-02 3.003e-02 6.193e-03 2.538e+01
1.733e+01 1.846e+02 2.019e+03 1.622e-01 6.656e-01 7.119e-01 2.654e-01
4.601e-01 1.189e-01]
[2.057e+01 1.777e+01 1.329e+02 1.326e+03 8.474e-02 7.864e-02 8.690e-02
7.017e-02 1.812e-01 5.667e-02 5.435e-01 7.339e-01 3.398e+00 7.408e+01
5.225e-03 1.308e-02 1.860e-02 1.340e-02 1.389e-02 3.532e-03 2.499e+01
2.341e+01 1.588e+02 1.956e+03 1.238e-01 1.866e-01 2.416e-01 1.860e-01
2.750e-01 8.902e-02]
[1.969e+01 2.125e+01 1.300e+02 1.203e+03 1.096e-01 1.599e-01 1.974e-01
1.279e-01 2.069e-01 5.999e-02 7.456e-01 7.869e-01 4.585e+00 9.403e+01
6.150e-03 4.006e-02 3.832e-02 2.058e-02 2.250e-02 4.571e-03 2.357e+01
2.553e+01 1.525e+02 1.709e+03 1.444e-01 4.245e-01 4.504e-01 2.430e-01
7.6 RESULT
8.1 AIM:
To Implement Ensembling Techniques
8.3 ALGORITHM
The steps of the EM algorithm are as follows:
1. We first consider a set of starting parameters given a set of incomplete (observed) data, and
we assume that the observed data come from a specific model.
2. We then use the model to "estimate" the missing data. In other words, after formulating some
parameters from the observed data to build a model, we use this model to guess the missing
values. This step is called the expectation step.
3. Now we use the "complete" data that we have estimated to update the parameters: using the
missing data and observed data, we find the most likely modified parameters to build the
modified model. This is called the maximization step.
4. We repeat steps 2 and 3 until convergence, that is, until there is no change in the parameters of
the model and the estimated model fits the observed data.
import numpy as np
import random

def coin_em(rolls, theta_A=None, theta_B=None, maxiter=10):
    # Initial guess
    theta_A = theta_A or random.random()
    theta_B = theta_B or random.random()
    thetas = [(theta_A, theta_B)]
    # Iterate
    for c in range(maxiter):
        print("#%d:\t%0.2f %0.2f" % (c, theta_A, theta_B))
        heads_A, tails_A, heads_B, tails_B = e_step(rolls, theta_A, theta_B)
        theta_A, theta_B = m_step(heads_A, tails_A, heads_B, tails_B)
        thetas.append((theta_A, theta_B))
    return thetas, (theta_A, theta_B)
def coin_likelihood(roll, bias):
    # P(X | Z, theta): probability of the observed flips given the coin bias
    numHeads = roll.count("H")
    flips = len(roll)
    print(flips)
    return pow(bias, numHeads) * pow(1-bias, flips-numHeads)
# Call the functions
rolls = ["HTTTHHTHTH", "HHHHTHHHHH", "HTHHHHHTHH", "HTHTTTHHTT", "THHHTHHHTH"]
thetas, _ = coin_em(rolls, 0.6, 0.5, maxiter=10)
type(thetas)
thet = thetas[1]
print(thetas)
rolls_p = "HHHTTHTHTH"
numHeads_p = rolls_p.count('H')
print('No. of Heads', numHeads_p)
flips_p = len(rolls_p)
print(flips_p)
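The coin_em function above calls e_step and m_step, which are not reproduced in the record. A minimal sketch of those helpers, consistent with how they are called (soft EM for the two-coin problem; the original implementation may differ in detail):
def e_step(rolls, theta_A, theta_B):
    # Expectation: softly assign each sequence to coin A or B and
    # accumulate the expected head/tail counts for each coin
    heads_A, tails_A, heads_B, tails_B = 0.0, 0.0, 0.0, 0.0
    for roll in rolls:
        likelihood_A = coin_likelihood(roll, theta_A)
        likelihood_B = coin_likelihood(roll, theta_B)
        p_A = likelihood_A / (likelihood_A + likelihood_B)
        p_B = 1.0 - p_A
        heads_A += p_A * roll.count("H")
        tails_A += p_A * roll.count("T")
        heads_B += p_B * roll.count("H")
        tails_B += p_B * roll.count("T")
    return heads_A, tails_A, heads_B, tails_B

def m_step(heads_A, tails_A, heads_B, tails_B):
    # Maximization: re-estimate each coin's bias from the expected counts
    theta_A = heads_A / (heads_A + tails_A)
    theta_B = heads_B / (heads_B + tails_B)
    return theta_A, theta_B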
OUTPUT:
9
10
9
10
8
10
8
10
4
10
4
10
7
8.5 PROCEDURE
8.6 RESULT
9.1 AIM:
To Implement Clustering Algorithms
9.3 ALGORITHM
K-means and the EM algorithm
We can explain K-means as an EM algorithm. First we initialize the k means (mk) of the K-means
algorithm. In the E-step we assign each point to a cluster, and in the M-step, given the clusters,
we refine the mean mk of each cluster k. This process is repeated until the change in the means
is small.
K-means and Mixture of Gaussians
A general K-means is essentially a hard classifier, and the parameter we need to find to fit the
data is the mean µk, as discussed above. When we use a mixture of Gaussians, which is a
probability model, we are instead defining a "soft" classifier. The parameters to be determined to
fit the data are then the means µk and covariances Σk, which define the Gaussian distributions,
and the mixing coefficients πk. Given the data set, we must find the mixing coefficients, means
and covariances. If we knew which component generated each data point, the maximum
likelihood solution would involve fitting each component to the corresponding cluster. However,
our problem is that the data set is unlabelled: the assignments are hidden.
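The program fragment below assumes a fitted K-means model; a minimal sketch of the setup it relies on, intended to be consistent with the output printed after it (the sixth data point is an assumption chosen to reproduce the printed cluster centers):
import numpy as np
from sklearn.cluster import KMeans

# Six 2-D points; only five appear in the truncated output below
X = np.array([[1, 2], [2, 4], [10, 12], [11, 15], [3, 2], [12, 13]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
label = kmeans.labels_
print("Cluster Labels:", label)
print("Cluster Centers:", kmeans.cluster_centers_)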
u_labels = np.unique(label)
import matplotlib.pyplot as plt
#plotting the results:
for i in u_labels:
plt.scatter(X[label == i , 0] , X[label == i , 1] , label = i)
plt.legend()
plt.title("K-Means Clustering")
plt.show()
OUTPUT
[[ 1 2]
[ 2 4]
[10 12]
[11 15]
[ 3 2]
array([[ 2. , 2.66666667],
[11. , 13.33333333]])
array([1], dtype=int32)
Cluster Labels: [0 0 1 1 0 1]
Cluster Centers: [[ 2. 2.66666667]
[11. 13.33333333]]
9.5 PROCEDURE
9.6 RESULT
10.1 AIM:
To Implement EM for Bayesian Networks
10.3 ALGORITHM
Here the E-step or expectation step is so named because it involves updating our
expectation of which cluster each point belongs to. The M-step or maximization
step is so named because it involves maximizing some fitness function that
defines the locations of the cluster centers—in this case, that maximization is
accomplished by taking a simple mean of the data in each cluster.
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200);
while True:
    # 2a. Assign labels based on closest center
    labels = pairwise_distances_argmin(X, centers)
    # (remaining loop body not reproduced in the record)

plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis');
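The fragments above use X, kmeans and centers without defining them, and the while-loop is the core of an explicit E-M style K-means. A minimal self-contained sketch of the setup and the full routine, consistent with the comments in the fragment (the blob-generation parameters are assumptions):
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

# Synthetic data and a fitted KMeans model, as the fragments above assume
X, y_true = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0)
kmeans = KMeans(n_clusters=4, n_init=10).fit(X)

def find_clusters(X, n_clusters, rseed=2):
    # 1. Randomly choose initial centers
    rng = np.random.RandomState(rseed)
    i = rng.permutation(X.shape[0])[:n_clusters]
    centers = X[i]
    while True:
        # E-step: assign each point to the nearest center
        labels = pairwise_distances_argmin(X, centers)
        # M-step: move each center to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(0) for j in range(n_clusters)])
        # stop when the centers no longer move
        if np.all(centers == new_centers):
            break
        centers = new_centers
    return centers, labels

centers, labels = find_clusters(X, 4)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')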
Figures 1 - 7: output scatter plots of the data showing the cluster assignments and centers.
10.5 PROCEDURE
10.6 RESULT
11.1 AIM:
To build Neural Network (BP) Models
11.3 ALGORITHM
Neural networks are computational models that mimic the complex functions of the human
brain. They consist of interconnected nodes, or neurons, that process and learn from data,
enabling tasks such as pattern recognition and decision making in machine learning.
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid function
def derivatives_sigmoid(x):
    return x * (1 - x)
#Variable initialization
epoch=7000 #Setting training iterations
lr=0.1 #Setting learning rate
inputlayer_neurons = 2 #number of features in data set
hiddenlayer_neurons = 3 #number of hidden layers neurons
output_neurons = 1 #number of neurons at output layer
#weight and bias initialization
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
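The record omits the training data and the training loop; only the output remains. A minimal sketch consistent with the printed Input / Actual Output (the classic two-feature marks example) and with the variables initialised above; bout and the loop body are assumptions:
# Training data implied by the printed output: two input features, one target
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)    # normalise the features column-wise
# NOTE: y is left unscaled here to match the printed "Actual Output";
# that is why the sigmoid predictions below saturate near 1

bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # forward pass
    hinp = np.dot(X, wh) + bh
    hlayer_act = sigmoid(hinp)
    outinp = np.dot(hlayer_act, wout) + bout
    output = sigmoid(outinp)

    # back-propagation of the error
    EO = y - output
    d_output = EO * derivatives_sigmoid(output)
    EH = d_output.dot(wout.T)
    d_hiddenlayer = EH * derivatives_sigmoid(hlayer_act)

    # weight updates
    wout += hlayer_act.T.dot(d_output) * lr
    wh += X.T.dot(d_hiddenlayer) * lr

print("Input:\n", X)
print("Actual Output:\n", y)
print("Predicted Output:\n", output)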
OUTPUT:
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[92.]
[86.]
[89.]]
Predicted Output:
[[0.99999891]
[0.99999805]
[0.99999883]]
Model: 2
import numpy as np

# input matrix (any n x n array of numbers)
X = np.array([[1, 2, 3],
              [3, 4, 1],
              [2, 5, 3]])

# target vector
y = np.array([[.5, .3, .2]])

# transpose of y
y = y.T

# sigma value: scalar weight applied to the inputs
sigm = 2

# initial hidden-to-output weights ("delta")
delt = np.random.random((3, 3)) - 1

for j in range(100):
    # matrix 1: output-layer error term (the loop runs for 100 iterations)
    m1 = (y - (1/(1 + np.exp(-(np.dot((1/(1 + np.exp(
        -(np.dot(X, sigm))))), delt))))))*((1/(
        1 + np.exp(-(np.dot((1/(1 + np.exp(
        -(np.dot(X, sigm))))), delt)))))*(1-(1/(
        1 + np.exp(-(np.dot((1/(1 + np.exp(
        -(np.dot(X, sigm))))), delt)))))))
    # matrix 2: hidden-layer error term
    m2 = m1.dot(delt.T) * ((1/(1 + np.exp(-(np.dot(X, sigm)))))
                           * (1-(1/(1 + np.exp(-(np.dot(X, sigm)))))))
    # update delta (hidden-to-output weights)
    delt = delt + (1/(1 + np.exp(-(np.dot(X, sigm))))).T.dot(m1)
    # update sigma (input weights)
    sigm = sigm + (X.T.dot(m2))

# print the network output
print(1/(1 + np.exp(-(np.dot(X, sigm)))))
OUTPUT:
[[0.999993 0.99999381 0.99999365]
[0.99999987 0.99999989 0.99999988]
[1. 1. 1. ]]
11.5 PROCEDURE
11.6 RESULT
12.1 AIM:
To build deep Neural Network Models
12.3 ALGORITHM
Simple Convolutional Neural Network (CNN) to classify CIFAR images
The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each
class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes
are mutually exclusive and there is no overlap between them.
The 6 lines of code below define the convolutional base using a common pattern: a stack
of Conv2D and MaxPooling2D layers.
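The listing below starts at the image-plotting step; the imports and dataset loading it relies on (as in the standard TensorFlow CIFAR-10 tutorial, which this program follows) are sketched here:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load CIFAR-10 and scale pixel values to the [0, 1] range
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']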
plt.figure(figsize=(8,8))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
# The CIFAR labels happen to be arrays,
#which is why we need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
# Adam is the best among the adaptive optimizers in most of the cases
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
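The accuracy plot below reads from a history object; the training call that produces it is not shown in the record. A sketch matching the ten epochs in the printed log:
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))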
plt.plot(history.history['accuracy'],label='accuracy')
plt.plot(history.history['val_accuracy'],label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
OUTPUT:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896
=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
Epoch 1/10
1563/1563 [==============================] - 38s 24ms/step - loss: 1.5158 -
accuracy: 0.4451 - val_loss: 1.1963 - val_accuracy: 0.5729
Epoch 2/10
1563/1563 [==============================] - 37s 24ms/step - loss: 1.1395 -
accuracy: 0.5940 - val_loss: 1.0595 - val_accuracy: 0.6315
Epoch 3/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.9965 -
accuracy: 0.6494 - val_loss: 1.0275 - val_accuracy: 0.6462
Epoch 4/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.8967 -
accuracy: 0.6829 - val_loss: 0.9410 - val_accuracy: 0.6737
Epoch 5/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.8347 -
accuracy: 0.7081 - val_loss: 0.8940 - val_accuracy: 0.6955
Epoch 6/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.7794 -
accuracy: 0.7269 - val_loss: 0.8578 - val_accuracy: 0.7054
Epoch 7/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.7296 -
accuracy: 0.7446 - val_loss: 0.8526 - val_accuracy: 0.7099
Epoch 8/10
1563/1563 [==============================] - 36s 23ms/step - loss: 0.6908 -
accuracy: 0.7581 - val_loss: 0.8534 - val_accuracy: 0.7132
12.5 PROCEDURE
12.6 RESULT
DATE:
13.1 AIM:
To build deep Neural Network for digit classification
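The record begins at the evaluation printout; a minimal sketch of the data preparation, model and training that the printed shapes and epochs imply (the layer sizes and split are assumptions):
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

digits = load_digits()
X = digits.data / 16.0                    # scale the 0-16 pixel values
y = to_categorical(digits.target, 10)     # one-hot labels, as in the printed y[0,:]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

model = Sequential([
    Dense(64, activation='relu', input_shape=(64,)),
    Dense(32, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32)
loss, accuracy = model.evaluate(X_test, y_test)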
print('Loss:', loss)
print('Accuracy:', accuracy)
pred = model.predict(X[0,:].reshape(1, -1))
print(pred)
print(y[0,:])
dgts = load_digits()
print(dgts.data.shape)
import matplotlib.pyplot as plt
plt.gray()
plt.matshow(dgts.images[0])
plt.show()
OUTPUT
Epoch 1/5
45/45 [==============================] - 1s 2ms/step - loss: 5.0013 -
accuracy: 0.2289
Epoch 2/5
45/45 [==============================] - 0s 2ms/step - loss: 1.1459 -
accuracy: 0.6354
Epoch 3/5
45/45 [==============================] - 0s 2ms/step - loss: 0.5060 -
accuracy: 0.8462
Epoch 4/5
45/45 [==============================] - 0s 2ms/step - loss: 0.3240 -
accuracy: 0.9040
Epoch 5/5
45/45 [==============================] - 0s 2ms/step - loss: 0.2358 -
accuracy: 0.9283
12/12 [==============================] - 0s 2ms/step - loss: 0.2600 -
accuracy: 0.9250
Loss: 0.2599562704563141
Accuracy: 0.925000011920929
1/1 [==============================] - 0s 52ms/step
[[9.9925131e-01 8.5686344e-07 7.1382141e-07 4.9433766e-06 6.5290674e-06
5.4754998e-04 2.5058553e-06 3.0487461e-06 3.8330847e-05 1.4414983e-04]]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
(1797, 64)
<Figure size 640x480 with 0 Axes>
13.4 RESULT