Ex No : 1 IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS (BFS, DFS)
Date :
Aim : To write a Python program to implement the uninformed search algorithms BFS and DFS.
A) BFS:
Algorithm :
Step 1 : Start.
Step 2 : Initialize an empty queue and mark all vertices as unvisited.
Step 3 : Add the starting vertex to the queue and mark it as visited.
Step 4 : While the queue is not empty:
Dequeue the next vertex from the queue.
For each adjacent vertex that is not visited, mark it as visited
and add it to the queue.
Step 5 : Repeat step 4 until the queue is empty.
Step 6 : Exit.
Program :
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.adj = [[] for i in range(vertices)]
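    # The record omits add_edge and bfs and the graph construction; a minimal
    # sketch, assuming directed edges matching part B (this reproduces the
    # recorded output):
    def add_edge(self, u, v):
        self.adj[u].append(v)

    def bfs(self, start):
        # Visit vertices level by level using a FIFO queue.
        visited = [False] * self.V
        visited[start] = True
        queue = [start]
        while queue:
            v = queue.pop(0)
            print(v, end=" ")
            for w in self.adj[v]:
                if not visited[w]:
                    visited[w] = True
                    queue.append(w)

g = Graph(6)
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 4)
g.add_edge(3, 4)
g.add_edge(3, 5)
g.add_edge(4, 5)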
print("BFS:")
g.bfs(0)
Output :
BFS:
0 1 2 3 4 5
B) DFS :
Algorithm :
Step 1 : Start.
Step 2 : Initialize an empty stack and mark all vertices as unvisited.
Step 3 : Add the starting vertex to the stack and mark it as visited.
Step 4 : While the stack is not empty :
Pop the next vertex from the stack.
For each adjacent vertex that is not visited, mark it as visited
and push it onto the stack.
Step 5 : Repeat step 4 until the stack is empty.
Step 6 : Exit.
Program :
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.adj = [[] for i in range(vertices)]
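    # The record omits add_edge and dfs; a minimal sketch, assuming directed
    # edges and an explicit stack (this reproduces the recorded output):
    def add_edge(self, u, v):
        self.adj[u].append(v)

    def dfs(self, start):
        # Visit vertices depth-first using a LIFO stack.
        visited = [False] * self.V
        visited[start] = True
        stack = [start]
        while stack:
            v = stack.pop()
            print(v, end=" ")
            for w in self.adj[v]:
                if not visited[w]:
                    visited[w] = True
                    stack.append(w)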
g = Graph(6)
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 4)
g.add_edge(3, 4)
g.add_edge(3, 5)
g.add_edge(4, 5)
print("DFS:")
g.dfs(0)
Output :
DFS:
0 2 4 5 1 3
Result :
The program has been executed successfully and the output is verified.
Ex No : 2 IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS (A*, memory-bounded A*)
Date :
Aim : To write a Python program to implement the informed search algorithms A* and memory-bounded A*.
A) A*:
Algorithm:
Step 1 : Start
Step 2 : Initialize the start and goal nodes.
Step 3 : Create an empty open list and a closed list.
Step 4 : Add the start node to the open list.
Step 5 : While the open list is not empty:
Find the node with the lowest f score in the open list.
If this node is the goal, return the path.
Otherwise, move the node to the closed list and consider its neighbors.
For each neighbor, calculate its g score (the cost to move to this node from the
start) and h score (the heuristic estimate of the cost to move from this node to the goal).
If the neighbor is not in the open list or its new g score is lower than its old
g score, update the neighbor's f, g, and h scores and set its parent to the current node.
Add the neighbor to the open list if it is not already there.
Step 6 : If the goal node is not reached, there is no path.
Step 7 : Exit
Program :
import heapq
class Node:
    def __init__(self, state, parent=None, g=0, h=0):
        self.state = state
        self.parent = parent
        self.g = g
        self.h = h

    def f_score(self):
        return self.g + self.h

class Graph:
    def __init__(self, vertices, edges):
        self.vertices = vertices
        self.edges = edges
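# The record omits the search function and the example graph; a minimal
# sketch, assuming a small weighted graph and heuristic chosen so that the
# recorded path ['A', 'C', 'E'] is optimal:
def astar_search(graph, start, goal, heuristic):
    # The open list is a min-heap ordered by f = g + h; ties break on state.
    open_list = [(heuristic[start], start, Node(start, None, 0, heuristic[start]))]
    closed = set()
    while open_list:
        _, _, current = heapq.heappop(open_list)
        if current.state == goal:
            path = []
            while current:  # walk parent pointers back to the start
                path.append(current.state)
                current = current.parent
            return path[::-1]
        closed.add(current.state)
        for (u, v), cost in graph.edges.items():
            if u == current.state and v not in closed:
                node = Node(v, current, current.g + cost, heuristic[v])
                heapq.heappush(open_list, (node.f_score(), node.state, node))
    return None

# Example data (assumed; not present in the record).
vertices = ['A', 'B', 'C', 'D', 'E']
edges = {('A', 'B'): 1, ('A', 'C'): 2, ('B', 'D'): 4, ('C', 'E'): 3, ('D', 'E'): 1}
heuristic = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}
graph = Graph(vertices, edges)
start, goal = 'A', 'E'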
print("A* Search:")
print(astar_search(graph, start, goal, heuristic))
Output :
A* Search:
['A', 'C', 'E']
B) Memory-Bounded A* :
Algorithm:
Step 1 : Start.
Step 2 : Initialize the open list with the start node.
Step 3 : While the open list is not empty:
Pop the node with the lowest f-value from the open list.
If the node is the goal, return the path.
Expand the node and add its successors to the open list.
If the open list size exceeds the memory limit, remove the node(s) with the highest
f-value.
Step 4 : If the open list is empty and the goal node has not been found, return failure.
Step 5 : Exit.
Program :
import heapq
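# The record stops at the import; a minimal sketch of a simplified
# memory-bounded A* (MA*) on a 3x3 grid, with start (0, 0), goal (2, 2),
# a Manhattan-distance heuristic, and an assumed open-list size limit:
import itertools

def mba_star(grid_size, start, goal, memory_limit):
    def h(p):
        # Manhattan distance to the goal.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    counter = itertools.count()  # tie-breaker for equal f-values
    open_list = [(h(start), next(counter), start, [start])]
    closed = set()
    while open_list:
        f, _, state, path = heapq.heappop(open_list)
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        g = len(path)  # unit step costs, so a successor's g is len(path)
        for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            r, c = state[0] + dr, state[1] + dc
            if 0 <= r < grid_size and 0 <= c < grid_size and (r, c) not in closed:
                heapq.heappush(open_list,
                               (g + h((r, c)), next(counter), (r, c), path + [(r, c)]))
        # Enforce the memory bound: drop the worst (highest-f) node(s).
        while len(open_list) > memory_limit:
            open_list.remove(max(open_list))
            heapq.heapify(open_list)
    return None

print("MA* Search Path:", mba_star(3, (0, 0), (2, 2), memory_limit=6))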
Output :
MA* Search Path: [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
Result :
The program has been executed successfully and the output has been verified.
Ex No : 3 Implement naïve Bayes models
Date :
Aim : To write a Python program to implement the naive Bayes classification model.
Algorithm:
Step 1: Start
Step 2: Import the dataset and the necessary dependencies
Step 3: Calculate Prior Probability of Classes P(y)
Step 4: Calculate the Likelihood Table for all features
Step 5: Calculate Posterior Probability for each class using the Naive
Bayesian equation
Step 6: End
Program :
import numpy as np
import pandas as pd

def pre_processing(df):
    # Split the dataframe into features X and the target y (last column).
    X = df.drop([df.columns[-1]], axis=1)
    y = df[df.columns[-1]]
    return X, y

class NaiveBayes:
    def __init__(self):
        self.features = list()
        self.likelihoods = {}
        self.class_priors = {}
        self.pred_priors = {}

    def fit(self, X, y):
        self.features = list(X.columns)
        self.X_train = X
        self.y_train = y
        self.train_size = X.shape[0]
        self.num_feats = X.shape[1]
        for feature in self.features:
            self.likelihoods[feature] = {}
            self.pred_priors[feature] = {}
        self._calc_class_prior()
        self._calc_likelihoods()
        self._calc_predictor_prior()

    def _calc_class_prior(self):
        # P(y): relative frequency of each class in the training data.
        for outcome in np.unique(self.y_train):
            self.class_priors[outcome] = sum(self.y_train == outcome) / self.train_size

    def _calc_likelihoods(self):
        # P(x|y): frequency of each feature value within each class.
        for feature in self.features:
            for outcome in np.unique(self.y_train):
                outcome_count = sum(self.y_train == outcome)
                counts = self.X_train[feature][self.y_train == outcome].value_counts().to_dict()
                for value, count in counts.items():
                    self.likelihoods[feature][(value, outcome)] = count / outcome_count

    def _calc_predictor_prior(self):
        # P(x): relative frequency of each feature value overall.
        for feature in self.features:
            counts = self.X_train[feature].value_counts().to_dict()
            for value, count in counts.items():
                self.pred_priors[feature][value] = count / self.train_size

    def predict(self, X):
        results = []
        X = np.array(X)
        for query in X:
            probs_outcome = {}
            for outcome in np.unique(self.y_train):
                prior = self.class_priors[outcome]
                likelihood = 1
                evidence = 1
                for feature, value in zip(self.features, query):
                    # Unseen feature values get zero likelihood in this simple model.
                    likelihood *= self.likelihoods[feature].get((value, outcome), 0)
                    evidence *= self.pred_priors[feature].get(value, 1)
                # Posterior by the naive Bayesian equation: P(y|x) = P(x|y)P(y)/P(x)
                posterior = (likelihood * prior) / evidence if evidence > 0 else 0
                probs_outcome[outcome] = posterior
            results.append(max(probs_outcome, key=probs_outcome.get))
        return np.array(results)

def accuracy_score(y_true, y_pred):
    # Percentage of correct predictions, rounded to two decimals.
    return round(float(np.sum(y_true == y_pred) / len(y_true)) * 100, 2)

if __name__ == "__main__":
    # Weather (play-tennis) dataset
    print("\nWeather Dataset:")
    df = pd.read_csv("E:/tennisdata.csv")
    X, y = pre_processing(df)

    nb_clf = NaiveBayes()
    nb_clf.fit(X, y)
    print("Train Accuracy: {}".format(accuracy_score(y, nb_clf.predict(X))))

    # Query 1:
    query = np.array([['Rainy', 'Mild', 'Normal', 't']])
    print("Query 1:- {} ---> {}".format(query, nb_clf.predict(query)))

    # Query 2:
    query = np.array([['Overcast', 'Cool', 'Normal', 't']])
    print("Query 2:- {} ---> {}".format(query, nb_clf.predict(query)))

    # Query 3:
    query = np.array([['Sunny', 'Hot', 'High', 't']])
    print("Query 3:- {} ---> {}".format(query, nb_clf.predict(query)))
Output :
Weather Dataset:
Train Accuracy: 7.14
Query 1:- [['Rainy' 'Mild' 'Normal' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
Query 2:- [['Overcast' 'Cool' 'Normal' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
Query 3:- [['Sunny' 'Hot' 'High' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
Result :
The program has been executed successfully and the output has been verified.
Ex No: 4 IMPLEMENT BAYESIAN NETWORKS
Date :
Algorithm :
Step 1: Start
Step 2: Construct the Bayesian network by specifying the nodes and their
conditional probability distributions.
Step 3: Identify the set of observed variables (i.e., the evidence).
Step 4: For each unobserved variable, compute its posterior probability given
the evidence using the Bayes' rule and the conditional probability distributions
of the variable and its parents.
Step 5: Return the posterior probabilities of interest.
Step 6: Stop
Program :
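# The Program section is blank in the record; a minimal sketch with pgmpy,
# assuming a small rain/sprinkler network and that the recorded output True
# comes from check_model(), which verifies the CPDs against the structure:
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

# Structure: Rain influences Sprinkler and Grass; Sprinkler influences Grass.
model = BayesianModel([('Rain', 'Sprinkler'), ('Rain', 'Grass'), ('Sprinkler', 'Grass')])

# Conditional probability distributions for each node (values assumed).
cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])
cpd_sprinkler = TabularCPD('Sprinkler', 2, [[0.6, 0.99], [0.4, 0.01]],
                           evidence=['Rain'], evidence_card=[2])
cpd_grass = TabularCPD('Grass', 2,
                       [[1.0, 0.2, 0.1, 0.01], [0.0, 0.8, 0.9, 0.99]],
                       evidence=['Rain', 'Sprinkler'], evidence_card=[2, 2])
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_grass)

# check_model() returns True when every CPD is consistent with the structure.
print(model.check_model())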
Output :
True
Result :
The program has been executed successfully and the output is verified.
Ex No: 5(a) BUILDING REGRESSION MODELS (LINEAR REGRESSION)
Date :
Algorithm :
Step 1: Start
Step 2: Import matplotlib and scipy.stats and prepare the x and y data
Step 3: Fit a straight line with stats.linregress to obtain the slope and intercept
Step 4: Compute the fitted value for every x and draw the scatter plot with the regression line
Step 5: Stop
Program :
import matplotlib.pyplot as plt
from scipy import stats

x = [89,43,36,36,95,10,66,34,38,20,26,29,48,64,6,5,36,66,72,40]
y = [21,46,3,35,67,95,53,72,58,10,26,34,90,33,38,20,56,2,47,15]

# Fit the least-squares regression line.
slope, intercept, r, p, std_err = stats.linregress(x, y)

def myfunc(x):
    return slope * x + intercept

mymodel = list(map(myfunc, x))

plt.scatter(x, y)
plt.plot(x, mymodel)
plt.show()
Output :
Result :
The program has been executed successfully and the output is verified.
Ex No: 5(b) BUILDING REGRESSION MODELS (POLYNOMIAL REGRESSION)
Date :
Algorithm :
Step 1: Start
Step 2: Import numpy and matplotlib and prepare the x and y data
Step 3: Fit a degree-3 polynomial with numpy.polyfit and wrap it with numpy.poly1d
Step 4: Draw the scatter plot and the fitted curve over a smooth range of x values
Step 5: Stop
Program :
import numpy
import matplotlib.pyplot as plt

x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]

# Fit a third-degree polynomial to the data.
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
myline = numpy.linspace(1, 22, 100)

plt.scatter(x, y)
plt.plot(myline, mymodel(myline))
plt.show()
Output :
Result :
The program has been executed successfully and the output is verified.
Ex No: 5(c) BUILDING REGRESSION MODELS (MULTIPLE REGRESSION)
Date :
Algorithm :
Step 1: Start
Step 2: Import pandas and sklearn.linear_model and read the dataset
Step 3: Select the independent variables (Weight, Volume) and the dependent variable (CO2)
Step 4: Fit a LinearRegression model and predict CO2 for a new observation
Step 5: Stop
Program :
import pandas
from sklearn import linear_model

df = pandas.read_csv("data.csv")

X = df[['Weight', 'Volume']]
y = df['CO2']

regr = linear_model.LinearRegression()
regr.fit(X, y)

# Predict the CO2 emission of a car whose weight is 2300 kg and volume is
# 1300 cm^3 (values assumed; they reproduce the recorded output).
predictedCO2 = regr.predict([[2300, 1300]])
print(predictedCO2)
Output :
[107.2087328]
Result :
The program has been executed successfully and the output is verified.
Ex No: 6 DECISION TREE
Date :
Algorithm :
Step 1: Start
Step 2: Import pandas, sklearn.tree and matplotlib and read the dataset
Step 3: Map the categorical columns to numeric values
Step 4: Select the feature columns and the target column, then fit a DecisionTreeClassifier
Step 5: Plot the fitted tree
Step 6: Stop
Program :
import sys
import pandas
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt

df = pandas.read_csv("data.csv")

# Map the categorical columns to numbers before training
# (column names and codes assumed from the usual example dataset).
d = {'UK': 0, 'USA': 1, 'N': 2}
df['Nationality'] = df['Nationality'].map(d)
d = {'YES': 1, 'NO': 0}
df['Go'] = df['Go'].map(d)

features = ['Age', 'Experience', 'Rank', 'Nationality']
X = df[features]
y = df['Go']

dtree = DecisionTreeClassifier()
dtree = dtree.fit(X, y)
tree.plot_tree(dtree, feature_names=features)

# Two lines to make our compiler able to draw:
plt.savefig(sys.stdout.buffer)
sys.stdout.flush()
Output :
Result :
The program has been executed successfully and the output is verified.
Ex No : 7 Build SVM model
Date :
Aim : To write a Python program to build a support vector machine (SVM) model.
Algorithm:
Step 1: Start
Step 2: Load the dataset you want to use for training the model.
Step 3: Preprocess the data to make it ready for training.
Step 4: Split the data into two sets - one for training the model and the other for
testing it.
Step 5: Define the SVM model, choosing its kernel and hyperparameters.
Step 6: Once the SVM model is defined, train it using the training set.
Step 7: Use cross-validation techniques like grid search to find the optimal values
for the hyperparameters.
Step 8: Stop.
Program :
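# The Program section is blank in the record; a minimal sketch with
# scikit-learn, following the steps above. The iris dataset, the parameter
# grid and the split ratio are assumptions:
from sklearn import datasets
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Steps 2-4: load the data and split it into training and testing sets.
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# Steps 5-7: define the SVM and tune its hyperparameters with grid search.
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
print("Test accuracy: {:.2f}".format(grid.score(X_test, y_test)))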
Result :
The program has been executed successfully and the output has been verified.
Ex No : 8 Ensemble techniques implementation
Date :
Aim : To write a Python program to implement ensemble techniques.
Algorithm:
Step 1: In bagging, multiple models are trained on bootstrap samples of the data
and their predictions are aggregated to make the final prediction.
Step 2: In boosting, multiple weak models are trained sequentially, each correcting
the errors of the previous model.
Step 3: In stacking, a meta-model is trained to combine the predictions of the
base models into the final prediction.
Step 4: In averaging, multiple models are trained independently and their
predictions are averaged to make the final prediction.
Step 5: In blending, multiple models are trained independently and their
predictions are combined using a weighted average.
Step 6: In an ensemble of ensembles, multiple ensembles are created using
different techniques and combined to produce the final prediction.
Program :
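# The Program section is blank in the record; a minimal sketch, assuming a
# hard-voting ensemble on the iris dataset (the recorded output suggests a
# single accuracy print):
from sklearn import datasets
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

# Combine three different base models by majority vote.
ensemble = VotingClassifier(estimators=[
    ('lr', LogisticRegression(max_iter=1000)),
    ('rf', RandomForestClassifier(random_state=1)),
    ('svc', SVC(random_state=1)),
])
ensemble.fit(X_train, y_train)

print("Accuracy: {:.2f}".format(accuracy_score(y_test, ensemble.predict(X_test))))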
Output :
Accuracy: 1.00
Result :
The program has been executed successfully and the output has been verified.
Ex No : 9 IMPLEMENT CLUSTERING ALGORITHMS – KMEANS CLUSTERING
Date :
Algorithm:
Step 1: Start
Step 2: Import the required libraries and prepare the two-dimensional data points
Step 3: For each candidate number of clusters from 1 to 10, fit a KMeans model and record its inertia
Step 4: Plot inertia against the number of clusters and choose the "elbow" point as the best number of clusters
Step 5: Stop
Program :
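# The record omits the setup; a minimal sketch, assuming the usual
# two-feature toy data for the elbow method:
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

x = [4, 5, 10, 4, 3, 11, 14, 6, 10, 12]
y = [21, 19, 24, 17, 16, 25, 24, 22, 21, 21]
data = list(zip(x, y))
inertias = []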
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(data)
    inertias.append(kmeans.inertia_)
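# Elbow plot (added to complete the sketch): inertia versus cluster count.
plt.plot(range(1, 11), inertias, marker='o')
plt.title('Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()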
Result :
The program has been executed successfully and the output has been verified.
Ex No : 10 IMPLEMENT EM FOR BAYESIAN NETWORK
Date :
Aim : To write a Python program to implement EM for a Bayesian network.
Algorithm:
Step 1: Start
Step 2: Read the heart-disease dataset and replace missing values with NaN
Step 3: Define the Bayesian network structure over the dataset attributes
Step 4: Learn the conditional probability distributions with the Maximum Likelihood Estimator
Step 5: Run variable elimination to infer the probability of heart disease given evidence
Step 6: Stop
Program :
import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)

model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'),
                       ('exang', 'trestbps'), ('trestbps', 'heartdisease'),
                       ('fbs', 'heartdisease'), ('heartdisease', 'restecg'),
                       ('heartdisease', 'thalach'), ('heartdisease', 'chol')])

print('\n Learning CPD using Maximum likelihood estimators')
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')
HeartDisease_infer = VariableElimination(model)

print('\n 1. Probability of HeartDisease given Age=28')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 28})
print(q['heartdisease'])

print('\n 2. Probability of HeartDisease given cholesterol=100')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'chol': 100})
print(q['heartdisease'])
Output :
Learning CPD using Maximum likelihood estimators

Inferencing with Bayesian Network:
1. Probability of HeartDisease given Age=28
╒════════════════╤═════════════════════╕
│ heartdisease   │   phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │              0.6791 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │              0.1212 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │              0.0810 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │              0.0939 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │              0.0247 │
╘════════════════╧═════════════════════╛
2. Probability of HeartDisease given cholesterol=100
╒════════════════╤═════════════════════╕
│ heartdisease   │   phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │              0.5400 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │              0.1533 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │              0.1303 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │              0.1259 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │              0.0506 │
╘════════════════╧═════════════════════╛
Result :
The program has been executed successfully and the output has been verified.
Ex No : 11 BUILD A SIMPLE NN MODEL
Date :
Aim : To write a Python program to build a simple neural network model.
Algorithm:
Step 1 : Start
Step 2 : Create a sigmoid function.
Step 3 : Initialize the required parameters, weights and biases.
Step 4 : Create a forward propagation function which takes x and the initialized
parameters as input and returns a2 and cache.
Step 5 : Create a calculate_cost function which takes a2 and y as input parameters
and returns the cost.
Step 6 : Create a backward_propagation function which takes x, y, cache
and parameters as input parameters and returns grads.
Step 7 : Create an update_parameters function which takes parameters,
grads and learning_rate as input parameters and returns the updated values.
Step 8 : Create a model function which takes the input parameters x, y, n_x, n_h,
n_y, num_of_iters and learning_rate and returns the trained parameters as output.
Step 9 : Create a predict function which takes x and parameters as input
and returns the prediction as output.
Step 10 : Stop
Program :
import numpy as np
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def initialize_parameters(n_x, n_h, n_y):
    w1 = np.random.randn(n_h, n_x)
    b1 = np.zeros((n_h, 1))
    w2 = np.random.randn(n_y, n_h)
    b2 = np.zeros((n_y, 1))
    parameters = {
        "w1": w1,
        "b1": b1,
        "w2": w2,
        "b2": b2,
    }
    return parameters

def forward_prop(x, parameters):
    w1 = parameters["w1"]
    b1 = parameters["b1"]
    w2 = parameters["w2"]
    b2 = parameters["b2"]
    z1 = np.dot(w1, x) + b1
    a1 = np.tanh(z1)
    z2 = np.dot(w2, a1) + b2
    a2 = sigmoid(z2)
    cache = {
        "a1": a1,
        "a2": a2,
    }
    return a2, cache
def calculate_cost(a2, y):
    # Binary cross-entropy cost averaged over the m training examples
    # (the body was missing in the record and is reconstructed here).
    cost = -np.sum(np.multiply(y, np.log(a2)) + np.multiply(1 - y, np.log(1 - a2))) / m
    cost = np.squeeze(cost)
    return cost
def backward_prop(x, y, cache, parameters):
    a1 = cache["a1"]
    a2 = cache["a2"]
    w2 = parameters["w2"]
    dz2 = a2 - y
    dw2 = np.dot(dz2, a1.T) / m
    db2 = np.sum(dz2, axis=1, keepdims=True) / m
    dz1 = np.multiply(np.dot(w2.T, dz2), 1 - np.power(a1, 2))
    dw1 = np.dot(dz1, x.T) / m
    db1 = np.sum(dz1, axis=1, keepdims=True) / m
    grads = {
        "dw1": dw1,
        "db1": db1,
        "dw2": dw2,
        "db2": db2
    }
    return grads
def update_parameters(parameters, grads, learning_rate):
    w1 = parameters["w1"]
    b1 = parameters["b1"]
    w2 = parameters["w2"]
    b2 = parameters["b2"]
    dw1 = grads["dw1"]
    db1 = grads["db1"]
    dw2 = grads["dw2"]
    db2 = grads["db2"]
    w1 = w1 - learning_rate * dw1
    b1 = b1 - learning_rate * db1
    w2 = w2 - learning_rate * dw2
    b2 = b2 - learning_rate * db2
    new_parameters = {
        "w1": w1,
        "b1": b1,
        "w2": w2,
        "b2": b2,
    }
    return new_parameters
def model(x, y, n_x, n_h, n_y, num_of_iters, learning_rate):
    parameters = initialize_parameters(n_x, n_h, n_y)
    for i in range(0, num_of_iters + 1):
        a2, cache = forward_prop(x, parameters)
        cost = calculate_cost(a2, y)
        grads = backward_prop(x, y, cache, parameters)
        parameters = update_parameters(parameters, grads, learning_rate)
        if i % 100 == 0:
            print('Cost after iteration {:d}: {:f}'.format(i, cost))
    return parameters
def predict(x, parameters):
    a2, cache = forward_prop(x, parameters)
    yhat = a2
    yhat = np.squeeze(yhat)
    # Threshold the sigmoid output at 0.5 to get a binary prediction.
    y_predict = 1 if yhat >= 0.5 else 0
    return y_predict
np.random.seed(2)
x = np.array([[0,0,1,1],[0,1,0,1]])
y = np.array([[0,1,1,0]])
m = x.shape[1]
n_x = 2
n_h = 2
n_y = 1
num_of_iters = 1000
learning_rate = 0.3
trained_parameters = model(x,y,n_x,n_h,n_y,num_of_iters,learning_rate)
x_test = np.array([[1],[0]])
y_predict = predict(x_test, trained_parameters)
print('Neural Network prediction for example ({:d}, {:d}) is {:d}'.format(
    x_test[0][0], x_test[1][0], y_predict))
Output :
Result :
The program has been executed successfully and the output has been verified.
Ex No : 12 Build deep learning NN models
Date :
Aim : To write a Python program to build a deep learning neural network model.
Algorithm:
Step 1: Start
Step 2: Import the necessary libraries
Step 3: Set the random seed for reproducibility
Step 4: Define the model architecture
Step 5: Initialize the weights and biases for the input and hidden layers
Step 6: Initialize the weights and biases for the hidden and output layers
Step 7: Define the input data, labels and hyperparameters
Step 8: Define the Tensorflow variables for the weights and biases
Step 9: Define the forward propagation and backpropagation algorithms
Step 10: Train the model and make some predictions with the trained model
Step 11: End
Program :
import tensorflow as tf
from tensorflow import keras
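# The record stops at the imports; a minimal sketch, assuming a small
# fully-connected network trained on the XOR problem (Keras creates the
# weight and bias variables and runs forward/backward propagation for us):
import numpy as np

tf.random.set_seed(42)  # Step 3: reproducibility

# Step 7: input data, labels and hyperparameters (XOR truth table).
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Steps 4-6: architecture with one hidden layer.
model = keras.Sequential([
    keras.layers.Dense(4, activation='tanh', input_shape=(2,)),
    keras.layers.Dense(1, activation='sigmoid'),
])

# Step 9: compile wires up the loss and the backpropagation optimizer.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.1),
              loss='binary_crossentropy')

# Step 10: train the model and make some predictions.
model.fit(x, y, epochs=500, verbose=0)
print(model.predict(x).round())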
Result :
The program has been executed successfully and the output has been verified.