AIML manual
Ex. No. 1a Implementation of uninformed search algorithm – BFS
Date:
AIM
To implement the uninformed search algorithm – Breadth First Search (BFS) using Python.
ALGORITHM
Step 1: Start.
Step 2: Put any one of the graph's vertices at the back of the queue.
Step 3: Take the front item of the queue and add it to the visited list.
Step 4: Create a list of that vertex's adjacent nodes. Add those that are not in the
visited list to the rear of the queue.
Step 5: Repeat steps 3 and 4 until the queue is empty.
Step 6: Stop.
PROGRAM
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}  # sample graph, assumed for illustration
visited = []   # visited nodes, in BFS order
queue = []     # FIFO queue of nodes waiting to be expanded

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)            # dequeue the front item
        print(m, end=" ")
        for neighbour in graph[m]:  # enqueue unvisited neighbours
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("BFS:")
bfs(visited, graph, 'A')
RESULT
Thus, the uninformed search algorithm – BFS has been implemented and executed
successfully.
Ex. No. 1b Implementation of uninformed search algorithm – DFS
Date:
AIM
To implement the uninformed search algorithm – Depth First Search (DFS) using Python.
ALGORITHM
Step 1: Start.
Step 2: Put any one of the graph's vertices on top of the stack.
Step 3: Take the top item of the stack and add it to the visited list.
Step 4: Create a list of that vertex's adjacent nodes. Add those that are not in the
visited list to the top of the stack.
Step 5: Repeat steps 3 and 4 until the stack is empty.
Step 6: Stop.
PROGRAM
visited = set()   # nodes already explored
print("DFS:")
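The two lines above are only a stub. A minimal runnable sketch of the stack-based DFS
described in the algorithm, reusing the sample graph assumed in Ex. No. 1a, is:

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}  # assumed sample graph

def dfs(graph, start):
    visited = set()
    stack = [start]                  # LIFO stack of nodes to expand
    while stack:
        node = stack.pop()           # take the top item of the stack
        if node not in visited:
            print(node, end=" ")
            visited.add(node)
            for neighbour in graph[node]:   # push unvisited neighbours
                if neighbour not in visited:
                    stack.append(neighbour)

print("DFS:")
dfs(graph, 'A')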
RESULT
Thus, the uninformed search algorithm – DFS has been implemented and executed
successfully.
Ex. No. 2a Implementation of informed search algorithm – A*
Date:
AIM
To implement the informed search algorithm – A* using Python.
ALGORITHM
Step 1: Start.
Step 2: Initialize the open set with the start node, the closed set as empty, the g-value
of the start node as 0, and the start node's parent as itself.
Step 3: While the open set is not empty, repeat:
1. Choose the node in the open set with the lowest f-value, where f(n) = g(n) + heuristic(n).
2. If the chosen node is the stop node, then the path is found; return the path by
following the parent links back to the start node.
3. For each neighbor of the chosen node, compute the cost of reaching it through the
chosen node.
4. If the neighbor already has a higher recorded cost, update its g-value and parent;
if it is in the closed set, move it back to the open set.
5. Set the parent of the neighbor to the chosen node and add the neighbor to the open
set if it is not already in either the open set or closed set.
6. Remove the chosen node from the open set and add it to the closed set.
Step 4: Return "No path" if the loop in Step 3 finishes without finding a path.
Step 5: Stop.
PROGRAM
def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])
    closed_set = set()
    g = {}                       # distance from the starting node
    parents = {}                 # parent of each discovered node
    g[start_node] = 0
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # choose the node with the lowest f() = g() + heuristic()
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v

        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # node m is seen for the first time: record it and its parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # a cheaper path to m was found: update cost and parent
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)

        if n == None:
            print('Path does not exist!')
            return None

        # if the current node is the stop node, reconstruct the path
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path

        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    # straight-line estimates to the goal, assumed for illustration
    H_dist = {'A': 11, 'B': 6, 'C': 99, 'D': 1, 'E': 7, 'G': 0}
    return H_dist[n]

# sample weighted graph, assumed for illustration
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}

aStarAlgo('A', 'G')
OUTPUT
RESULT
Thus, the informed search algorithm – A* has been implemented and executed successfully.
Ex. No. 2b Implementation of informed search algorithm – Memory-bounded A*
Date:
AIM
To implement the memory-bounded A* search algorithm using Python.
ALGORITHM
Step 1: Start.
Step 2: Import the heapq module and define the graph with edge weights.
Step 3: Define the heuristic function that estimates the cost from a node to the goal.
Step 4: Create an empty list heap and push a tuple of the start node and a cost of 0.
Step 5: Create sets of visited nodes and the mapping of nodes to their previous node.
Step 6: Initialize the cost of reaching the start node to 0.
Step 7: While the heap is not empty, and the number of nodes expanded is less than
the max_nodes:
1. Pop the node with the lowest cost from the heap.
2. If the popped node is the goal, stop the search.
3. If the popped node has already been visited, skip it.
4. Otherwise, add the node to the visited set.
5. For each neighbor of the node, calculate the new cost and estimate the cost to the
goal using the heuristic function.
6. If the cost of reaching the neighbor through the current node is lower than any
previous cost, add the neighbor to the heap and update the mapping to the
current node.
Step 8: Return the mapping of nodes to their previous node and the cost of
reaching each node.
Step 9: Stop.
PROGRAM
import heapq

def heuristic(node, goal):
    # straight-line estimates to the goal, assumed for illustration
    H = {'A': 6, 'B': 5, 'C': 6, 'D': 7, 'E': 2, 'F': 7, 'G': 0}
    return H[node]

def memory_bounded_a_star(graph, start, goal, max_nodes=None):
    heap = [(0, start)]            # (estimated cost, node)
    visited = set()
    came_from = {}                 # maps each node to its previous node
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0
    if max_nodes is None:
        max_nodes = float('inf')
    count = 0
    while heap and count < max_nodes:    # expand at most max_nodes nodes
        count += 1
        current = heapq.heappop(heap)[1]
        if current == goal:
            break
        if current in visited:
            continue
        visited.add(current)
        for next_node, weight in graph[current]:
            new_cost = cost_so_far[current] + weight
            # relax the edge if a cheaper path to the neighbour is found
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                heapq.heappush(heap, (new_cost + heuristic(next_node, goal), next_node))
                came_from[next_node] = current
    return came_from, cost_so_far

# sample weighted graph, assumed for illustration
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('D', 2), ('E', 5)],
    'C': [('A', 4), ('F', 5)],
    'D': [('B', 2)],
    'E': [('B', 5), ('G', 2)],
    'F': [('C', 5)],
    'G': [('E', 2)]
}
start = 'A'
goal = 'G'
max_nodes = 10
came_from, cost_so_far = memory_bounded_a_star(graph, start, goal, max_nodes)
print('Came from:', came_from)
print('Cost so far:', cost_so_far)
RESULT
Thus, the memory-bounded A* algorithm has been implemented and executed successfully.
Ex. No. 3 Implementation of Naïve Bayes model
Date:
AIM
To implement a Naïve Bayes classification model on the breast cancer dataset using Python.
ALGORITHM
Step 1: Start
Step 2: Import the required modules and read the dataset
Step 3: Print the dataset information
Step 4: Separate the malignant and benign instances and visualize them with a scatter plot
Step 5: Drop the first and last columns from the data frame
Step 6: Encode the diagnosis column and separate the features and the target
Step 7: Pre-process and divide the dataset into training and testing sections
Step 8: Fit a Gaussian Naïve Bayes model on the training data
Step 9: Stop
PROGRAM
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
dataset = pd.read_csv("data.csv")
print(dataset.info())
# drop the first (id) and last (unnamed) columns
dataset = dataset.drop(["id", "Unnamed: 32"], axis=1)
M = dataset[dataset.diagnosis == "M"]
B = dataset[dataset.diagnosis == "B"]
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="Malignant", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="Benign", alpha=0.3)
plt.xlabel("Radius Mean")
plt.ylabel("Texture Mean")
plt.legend()
plt.show()
dataset.diagnosis = [1 if d == "M" else 0 for d in dataset.diagnosis]
x = dataset.drop(["diagnosis"], axis=1)
y = dataset.diagnosis.values
# normalize the features and split into training and testing sections
x = (x - np.min(x)) / (np.max(x) - np.min(x))
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)
nb = GaussianNB()
print(nb.fit(x_train, y_train))
print("Accuracy:", nb.score(x_test, y_test))
OUTPUT
RESULT
Thus, the Naïve Bayes model has been implemented and executed successfully.
Ex. No. 4 IMPLEMENTATION OF BAYESIAN NETWORKS
Date:
AIM
To construct a Bayesian network from the heart disease dataset and perform inference on it using Python.
ALGORITHM
Step 1: Start
Step 2: Import the required modules and read the heart disease dataset
Step 3: Replace the missing values marked '?' with NaN
Step 4: Print the first few instances of the dataset and attribute datatypes
Step 5: Define the network structure and fit the model using the Maximum Likelihood Estimator
Step 6: Set up inference on the model using Variable Elimination
Step 7: Use query to compute the probability of heartdisease with restecg=1 and
cp=2 and print the values
Step 8: Stop
PROGRAM
import numpy as np
import pandas as pd
import csv
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)
print('Sample instances from the dataset are given below')
print(heartDisease.head())
print(heartDisease.dtypes)
model = BayesianModel([('age','heartdisease'),('gender','heartdisease'),('exang','heartdisease'),
                       ('cp','heartdisease'),('heartdisease','restecg'),('heartdisease','chol')])
model.fit(heartDisease,estimator=MaximumLikelihoodEstimator)
HeartDiseasetest_infer = VariableElimination(model)
q1=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'restecg':1})
print(q1)
q2=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'cp':2})
print(q2)
OUTPUT
RESULT
Thus, the Bayesian Network has been implemented and executed successfully.
Ex. No. 5 REGRESSION MODELS
Date:
AIM
To build a linear regression model.
ALGORITHM
Step 1: Start
Step 2: Import all the required modules and dataset
Step 3: Drop the columns that are not required
Step 4: Normalize the new dataset
Step 5: Train and test the dataset
Step 6: Fit the trained dataset into a linear model
Step 7: Print the mean_absolute_error
Step 8: Stop
PROGRAM
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
dataset = pd.read_csv("melb_data.csv")
dataset.head()
# keep only the numeric columns and drop rows with missing values
dataset = dataset.select_dtypes(include=["number"]).dropna()

def normalize(df):
    result = df.copy()
    for feature_name in df.columns:
        max_value = df[feature_name].max()
        min_value = df[feature_name].min()
        result[feature_name] = (df[feature_name] - min_value) / (max_value - min_value)
    return result

dataset = normalize(dataset)
x = dataset.drop(["Price"], axis=1)   # Price is the target in the Melbourne housing data
y = dataset["Price"]
train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.2, random_state=0)
model = LinearRegression()
model.fit(train_x, train_y)
predictions = model.predict(test_x)
predictions
mean_absolute_error(test_y, predictions)
OUTPUT
RESULT
Thus, the Linear regression model has been built and executed.
Ex. No. 6a DECISION TREES
Date:
AIM
To build a decision tree using a Python program.
ALGORITHM
Step 1: Start.
Step 2: Import the required modules and read the customer churn dataset.
Step 3: Inspect the dataset and check the value counts of the object-type columns.
Step 4: Encode the categorical columns using LabelEncoder.
Step 5: Separate the features and the Churn target, then split into training and testing sets.
Step 6: Fit a decision tree classifier on the training data and visualize the tree.
Step 7: Stop.
PROGRAM
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from io import StringIO
from IPython.display import Image
import pydotplus
df = pd.read_csv('Telco-Customer-Churn.csv',header=0)
df.head()
df.info()
obj_cols = df.select_dtypes(include='object').columns.tolist()
def check_value_counts(col_list):
for col in col_list:
print('-----------------------------')
print(round((df[col].value_counts()/df.shape[0])*100,2))
print('-----------------------------')
check_value_counts(obj_cols)
df.isna().sum()
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
for col in df.columns.to_numpy():
if df[col].dtypes in ('object','category'):
df[col]=le.fit_transform(df[col].astype(str))
X = df.drop('Churn', axis=1)
y = df['Churn']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
dt = DecisionTreeClassifier(max_depth=4, random_state=42)   # depth chosen for a readable tree
dt.fit(X_train, y_train)
dot_data = StringIO()
export_graphviz(dt, out_file=dot_data, filled=True, rounded=True,
                feature_names=X.columns, class_names=['Not Churn', 'Churn'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
OUTPUT
RESULT
Thus, the decision tree has been built and executed successfully.
Ex. No. 6b RANDOM FOREST
Date:
AIM
To build a random forest using a Python program.
ALGORITHM
Step 1: Start.
Step 2: Import the required modules and read the maternal health risk dataset.
Step 3: Explore the dataset with descriptive statistics and pair plots.
Step 4: Encode the RiskLevel target labels as integers.
Step 5: Split the dataset into training and testing sets.
Step 6: Fit a random forest classifier on the training data and predict on the test data.
Step 7: Stop.
PROGRAM
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
dataset = pd.read_csv("Maternal Health Risk Data Set.csv")
dataset.head()
dataset.info()
dataset.describe().T
g = sns.pairplot(dataset, hue='RiskLevel')
g.fig.suptitle("Scatterplot and histogram of pairs of variables color coded by risk level",
fontsize = 14,y=1.05);
dataset['RiskLevel'].unique()
# encode the risk level labels as integers (assumed label mapping)
dataset['RiskLevel'] = dataset['RiskLevel'].map({'low risk': 0, 'mid risk': 1, 'high risk': 2})
X = dataset.drop('RiskLevel', axis=1)
y = dataset['RiskLevel']
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rfc = RandomForestClassifier(n_estimators=100, random_state=42)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
print(rfc.score(X_test, y_test))
OUTPUT
RESULT
Thus, the random forest has been built and executed successfully.
Ex. No. 7 IMPLEMENTATION OF SVM MODEL
Date:
AIM
To build a Support Vector Machine (SVM) model using Python.
ALGORITHM
Step 1: Start.
Step 2: Import the required modules and read the cancer dataset.
Step 3: Extract the target column from the data.
Step 4: Stack the malignant and benign columns to form the feature array.
Step 5: Fit an SVM classifier with a linear kernel on the features.
Step 6: Predict the class of new sample points.
Step 7: Print the predictions.
Step 8: Stop.
PROGRAM:
import numpy as np
import pandas as pd
x = pd.read_csv("cancer.csv")
a = np.array(x)
y = a[:, -1]       # target column, assumed to be the last column of the CSV
x = np.column_stack((x.malignant, x.benign))
x.shape
print(x, y)
from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(x, y)
clf.predict([[120, 990]])
clf.predict([[85, 550]])
OUTPUT
RESULT
Thus, the program for the SVM model in Python has been executed successfully.
Ex. No. 8 IMPLEMENTATION OF ENSEMBLING TECHNIQUES
Date:
AIM:
To write a program to implement the voting technique of ensemble classifiers for
classification.
ALGORITHM:
Step 1: Start
Step 2: Import the required packages.
Step 3: Implement the individual classifiers.
Step 4: Define a voting ensemble model that combines the classifiers.
Step 5: Fit the model and combine the predictions from the classifiers.
Step 6: Use set_params to drop an estimator and refit the model.
Step 7: End
PROGRAM:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
clf1 = LogisticRegression(multi_class='multinomial', random_state=1)
clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
clf3 = GaussianNB()
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])
eclf1 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
eclf1 = eclf1.fit(X, y)
print(eclf1.predict(X))
np.array_equal(eclf1.named_estimators_.lr.predict(X),
eclf1.named_estimators_['lr'].predict(X))
eclf2 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft')
eclf2 = eclf2.fit(X, y)
print(eclf2.predict(X))
eclf2 = eclf2.set_params(lr='drop')
eclf2 = eclf2.fit(X, y)
len(eclf2.estimators_)
eclf3 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft', weights=[2,1,1],
flatten_transform=True)
eclf3 = eclf3.fit(X, y)
print(eclf3.predict(X))
print(eclf3.transform(X).shape)
OUTPUT:
RESULT:
Thus, the voting ensemble classifier has been implemented and executed successfully.
Ex. No. 9 IMPLEMENTATION OF CLUSTERING ALGORITHM
DATE:
AIM:
To implement the K-Means clustering algorithm in Python.
ALGORITHM:
STEP 1: Start
STEP 2: Find the optimal number of clusters using the elbow method
STEP 3: Train the K-means model on the dataset with the chosen number of clusters
STEP 4: Visualize the clusters
STEP 5: Stop
PROGRAM:
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd
dataset = pd.read_csv('Mall_Customers.csv')
x = dataset.iloc[:, [3, 4]].values
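The listing above ends after selecting the annual income and spending score columns. A
minimal sketch of the remaining steps (elbow method, training, and cluster plot) follows;
the choice of k = 5 and the random seed are assumptions for illustration:

from sklearn.cluster import KMeans
wcss_list = []
# elbow method: within-cluster sum of squares for k = 1..10
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=42)
    kmeans.fit(x)
    wcss_list.append(kmeans.inertia_)
mtp.plot(range(1, 11), wcss_list)
mtp.title('The Elbow Method Graph')
mtp.xlabel('Number of clusters (k)')
mtp.ylabel('WCSS')
mtp.show()
# train on the chosen number of clusters and plot the result
kmeans = KMeans(n_clusters=5, init='k-means++', random_state=42)
y_predict = kmeans.fit_predict(x)
for cluster in range(5):
    mtp.scatter(x[y_predict == cluster, 0], x[y_predict == cluster, 1],
                s=100, label='Cluster ' + str(cluster + 1))
mtp.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            s=300, c='yellow', label='Centroids')
mtp.xlabel('Annual Income (k$)')
mtp.ylabel('Spending Score (1-100)')
mtp.legend()
mtp.show()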
RESULT:
Thus, the program for the K-means clustering algorithm has been implemented
successfully.
Ex. No. 10 IMPLEMENTATION OF EXPECTATION MAXIMIZATION
DATE:
AIM:
To implement EM(Expectation Maximization) algorithm for Bayesian networks.
ALGORITHM:
STEP 1: Initialize the parameters of the Bayesian network, such as the conditional
probabilities for each variable.
STEP 2: In the E-step, compute the posterior probability of the hidden variables given
the observed data and the current parameter estimates.
STEP 3: In the M-step, update the parameter estimates by maximizing the expected
complete data log-likelihood. This involves using the posterior probabilities computed
in the E-step to estimate the conditional probabilities of each variable.
STEP 4: Repeat the E-step and the M-step until the parameter estimates converge, i.e.,
until the change in the expected log-likelihood falls below a chosen threshold.
STEP 5: Once convergence is reached, the final parameter estimates can be used to
make predictions or perform inference on new data.
PROGRAM:
import pandas as pd
import numpy as np
import pyAgrum as gum
import matplotlib.pyplot as plt
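The listing stops after the imports. The sketch below shows one way to run EM-based
parameter learning with pyAgrum's BNLearner; the tiny network structure, the file name
em_data.csv, and the '?' missing-value marker are illustrative assumptions:

# define a small Bayesian network structure (assumed for illustration)
bn = gum.fastBN("A->B<-C", 2)
# learner that reads a CSV whose missing entries are marked '?' (hypothetical file)
learner = gum.BNLearner("em_data.csv", bn, ["?"])
learner.useEM(1e-4)                     # enable EM with a convergence threshold
bn_learned = learner.learnParameters(bn.dag())
print(bn_learned.cpt("B"))              # inspect a learned conditional probability table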
RESULT:
Thus, the program for the EM algorithm has been implemented successfully.
Ex. No. 11 NEURAL NETWORK MODEL
DATE :
AIM:
To build a simple Neural Network model using Iris datasets.
ALGORITHM:
STEP 1: Start
STEP 2: Define the architecture: decide on the number of layers, the number of neurons
in each layer, and the activation function to be used in each layer.
STEP 3: Initialize the weights and biases: randomly initialize the weights and biases for
each neuron in the network.
STEP 4: Forward propagation: take the input data and pass it through the network by
multiplying the weights with the input values and adding the biases, then apply the
activation function to each neuron in each layer to generate the output.
STEP 5: Compute the loss: calculate the difference between the predicted output and the
true output using a loss function.
STEP 6: Backpropagation: calculate the gradient of the loss with respect to the weights
and biases in each layer, using the chain rule of calculus to propagate the error from
the output layer back to the input layer.
STEP 7: Update the weights and biases: use the calculated gradients to update the weights
and biases in each layer using an optimization algorithm such as stochastic gradient
descent or Adam.
STEP 8: Repeat steps 4-7 for a fixed number of epochs or until convergence is achieved.
STEP 9: Test the model: use a separate dataset to evaluate the performance of the trained
model by calculating the accuracy or other performance metrics.
STEP 10: Deploy the model: use the trained model to make predictions on new, unseen data.
PROGRAM:
import keras
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import normalize
data=pd.read_csv("/content/Iris.csv")
print("Describing the data: ",data.describe())
print("Info of the data:",data.info())
print("10 first samples of the dataset:",data.head(10))
print("10 last samples of the dataset:",data.tail(10))
sns.lmplot(x='SepalLengthCm', y='SepalWidthCm',data=data, fit_reg=False, hue="Species",
scatter_kws={"marker": "D", "s": 50})
plt.title('SepalLength vs SepalWidth')
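The listing above ends after the first scatter plot, while the output below shows
normalization, a 120/30 split, and a 10-epoch Keras run. A minimal sketch of the missing
steps follows, reusing the normalize imported above; the layer sizes, batch size, and
random seed are assumptions, not the original values:

from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# encode the three species labels as integers 0..2
data['Species'] = data['Species'].astype('category').cat.codes
X = data.drop(['Id', 'Species'], axis=1).values
y = data['Species'].values
print('Shape of X', X.shape)
print('Shape of y', y.shape)

# column-wise normalization, then an 80/20 train/test split
X_normalised = normalize(X, axis=0)
x_train, x_test, y_train, y_test = train_test_split(X_normalised, y,
                                                    test_size=0.2, random_state=42)
y_train = to_categorical(y_train, num_classes=3)
y_test = to_categorical(y_test, num_classes=3)

# small fully connected network for 4 features and 3 classes
model = Sequential()
model.add(Dense(16, input_dim=4, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=20, validation_data=(x_test, y_test))
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print('Accuracy of the dataset', accuracy * 100)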
OUTPUT:
Species
140 Iris-virginica
141 Iris-virginica
142 Iris-virginica
143 Iris-virginica
144 Iris-virginica
145 Iris-virginica
146 Iris-virginica
147 Iris-virginica
148 Iris-virginica
149 Iris-virginica
['Iris-setosa' 'Iris-versicolor' 'Iris-virginica']
Id SepalLengthCm SepalWidthCm PetalLengthCm PetalWidthCm Species
0 1 5.1 3.5 1.4 0.2 0
1 2 4.9 3.0 1.4 0.2 0
2 3 4.7 3.2 1.3 0.2 0
3 4 4.6 3.1 1.5 0.2 0
4 5 5.0 3.6 1.4 0.2 0
Id SepalLengthCm SepalWidthCm PetalLengthCm PetalWidthCm Species
11 12 4.8 3.4 1.6 0.2 0
102 103 7.1 3.0 5.9 2.1 2
73 74 6.1 2.8 4.7 1.2 1
50 51 7.0 3.2 4.7 1.4 1
93 94 5.0 2.3 3.3 1.0 1
Shape of X (150, 4)
Shape of y (150,)
Examples of X
[[4.8 3.4 1.6 0.2]
[7.1 3. 5.9 2.1]
[6.1 2.8 4.7 1.2]]
Examples of y
[0 2 1]
Examples of X_normalised
[[0.0664119 0.09000348 0.03148167 0.01150299]
[0.09823426 0.07941484 0.11608866 0.12078145]
[0.08439845 0.07412052 0.09247741 0.06901797]]
Length of train set x: 120 y: 120
Length of test set x: 30 y: 30
Shape of y_train (120, 3)
Shape of y_test (30, 3)
Epoch 1/10
6/6 [==============================] - 1s 37ms/step - loss: 1.0820 - accuracy:
0.4917 - val_loss: 1.0473 - val_accuracy: 0.7000
Epoch 2/10
6/6 [==============================] - 0s 12ms/step - loss: 1.0057 - accuracy:
0.6583 - val_loss: 0.9306 - val_accuracy: 0.7000
Epoch 3/10
6/6 [==============================] - 0s 13ms/step - loss: 0.8752 - accuracy:
0.6583 - val_loss: 0.7508 - val_accuracy: 0.7000
Epoch 4/10
6/6 [==============================] - 0s 13ms/step - loss: 0.6831 - accuracy:
0.7000 - val_loss: 0.5128 - val_accuracy: 0.8333
Epoch 5/10
6/6 [==============================] - 0s 14ms/step - loss: 0.4756 - accuracy:
0.8667 - val_loss: 0.3436 - val_accuracy: 0.9667
Epoch 6/10
6/6 [==============================] - 0s 13ms/step - loss: 0.3311 - accuracy:
0.9667 - val_loss: 0.2267 - val_accuracy: 1.0000
Epoch 7/10
6/6 [==============================] - 0s 12ms/step - loss: 0.2746 - accuracy:
0.9167 - val_loss: 0.2275 - val_accuracy: 0.8667
Epoch 8/10
6/6 [==============================] - 0s 13ms/step - loss: 0.2652 - accuracy:
0.8917 - val_loss: 0.1223 - val_accuracy: 0.9667
Epoch 9/10
6/6 [==============================] - 0s 13ms/step - loss: 0.2381 - accuracy:
0.8917 - val_loss: 0.1572 - val_accuracy: 0.9333
Epoch 10/10
6/6 [==============================] - 0s 12ms/step - loss: 0.2037 - accuracy:
0.9000 - val_loss: 0.1373 - val_accuracy: 0.9667
1/1 [==============================] - 0s 50ms/step
Accuracy of the dataset 96.66666666666667
RESULT:
Thus the program for a simple Neural Network model has been executed successfully.
Ex. No. 12 Implementation of Convolutional Neural Network
Date:
Aim:
To implement and build a Convolutional neural network model which predicts the age and gender of a
person using the given pre-trained models.
Algorithm:
Steps in CNN Algorithm:
Step 1: Start.
Step 2: Load the pre-trained face detection, age, and gender networks.
Step 3: Read the input image and detect the faces in it.
Step 4: For each detected face, crop the face region with padding and create a blob from it.
Step 5: Pass the blob through the gender network and the age network to obtain the predictions.
Step 6: Draw the bounding box and the predicted labels on the frame and display the result.
Step 7: Stop.
Program:
import cv2 as cv
import math
import time
from google.colab.patches import cv2_imshow   # Colab replacement for cv.imshow

def getFaceBox(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    bboxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            bboxes.append([x1, y1, x2, y2])
            cv.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                         int(round(frameHeight/150)), 8)
    return frameOpencvDnn, bboxes

faceProto = "/content/opencv_face_detector.pbtxt"
faceModel = "/content/opencv_face_detector_uint8.pb"
ageProto = "/content/age_deploy.prototxt"
ageModel = "/content/age_net.caffemodel"
genderProto = "/content/gender_deploy.prototxt"
genderModel = "/content/gender_net.caffemodel"

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

ageNet = cv.dnn.readNet(ageModel, ageProto)
genderNet = cv.dnn.readNet(genderModel, genderProto)
faceNet = cv.dnn.readNet(faceModel, faceProto)
padding = 20

def age_gender_detector(frame):
    # Read frame
    t = time.time()
    frameFace, bboxes = getFaceBox(faceNet, frame)
    for bbox in bboxes:
        # print(bbox)
        face = frame[max(0, bbox[1]-padding):min(bbox[3]+padding, frame.shape[0]-1),
                     max(0, bbox[0]-padding):min(bbox[2]+padding, frame.shape[1]-1)]
        blob = cv.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        print("Gender : {}, conf = {:.3f}".format(gender, genderPreds[0].max()))
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print("Age Output : {}".format(agePreds))
        print("Age : {}, conf = {:.3f}".format(age, agePreds[0].max()))
        label = "{},{}".format(gender, age)
        cv.putText(frameFace, label, (bbox[0], bbox[1]-10),
                   cv.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv.LINE_AA)
    return frameFace

input = cv.imread("2.jpg")
output = age_gender_detector(input)
cv2_imshow(output)
Output:
gender : Male, conf = 1.000
Result:
Thus, the program to implement and build a Convolutional neural network model which
predicts the age and gender of a person using the given pre-trained models has been
executed successfully and the output verified.