AIML Record Programs

EX NO:1 IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS

DATE:

Aim :

To implement uninformed search algorithms (BFS and DFS) using a Python program.

A) BFS:

Algorithm :

Step 1 : Start.
Step 2 : Initialize an empty queue and mark all vertices as unvisited.
Step 3 : Add the starting vertex to the queue and mark it as visited.
Step 4 : While the queue is not empty:
Dequeue the next vertex from the queue.
For each adjacent vertex that is not visited, mark it as visited
and add it to the queue.
Step 5 : Repeat Step 4 until the queue is empty.
Step 6 : Exit.

Program :

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.adj = [[] for i in range(vertices)]

    def add_edge(self, u, v):
        self.adj[u].append(v)

    def bfs(self, start):
        # Mark all vertices as unvisited and start from the given vertex.
        visited = [False] * self.V
        queue = []
        queue.append(start)
        visited[start] = True
        while queue:
            node = queue.pop(0)          # dequeue the next vertex
            print(node, end=" ")
            for neighbor in self.adj[node]:
                if not visited[neighbor]:
                    visited[neighbor] = True
                    queue.append(neighbor)

g = Graph(6)
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 4)
g.add_edge(3, 4)
g.add_edge(3, 5)
g.add_edge(4, 5)

print("BFS:")
g.bfs(0)

Output :

BFS:
0 1 2 3 4 5
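
A side note on the queue: list.pop(0) is O(n) because the remaining elements shift left. A minimal variant (a sketch, reusing the graph g built above) uses collections.deque, which pops from the left in O(1); the traversal and output are unchanged.

from collections import deque

def bfs_deque(adj, start):
    # Same traversal as Graph.bfs, but with an O(1) dequeue.
    visited = [False] * len(adj)
    queue = deque([start])
    visited[start] = True
    while queue:
        node = queue.popleft()
        print(node, end=" ")
        for neighbor in adj[node]:
            if not visited[neighbor]:
                visited[neighbor] = True
                queue.append(neighbor)

bfs_deque(g.adj, 0)   # prints: 0 1 2 3 4 5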

B) DFS :

Algorithm :

Step 1 : Start.
Step 2 : Initialize an empty stack and mark all vertices as unvisited.
Step 3 : Add the starting vertex to the stack and mark it as visited.
Step 4 : While the stack is not empty :
Pop the next vertex from the stack.
For each adjacent vertex that is not visited, mark it as visited
and push it onto the stack.
Step 5 : Repeat Step 4 until the stack is empty.
Step 6 : Exit.

Program :

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.adj = [[] for i in range(vertices)]

    def add_edge(self, u, v):
        self.adj[u].append(v)

    def dfs(self, start):
        # Mark all vertices as unvisited and start from the given vertex.
        visited = [False] * self.V
        stack = []
        stack.append(start)
        visited[start] = True
        while stack:
            node = stack.pop()           # pop the most recently pushed vertex
            print(node, end=" ")
            for neighbor in self.adj[node]:
                if not visited[neighbor]:
                    visited[neighbor] = True
                    stack.append(neighbor)

g = Graph(6)
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 3)
g.add_edge(1, 4)
g.add_edge(2, 4)
g.add_edge(3, 4)
g.add_edge(3, 5)
g.add_edge(4, 5)

print("DFS:")
g.dfs(0)

Output :

DFS:
0 2 4 5 1 3
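
For comparison, a recursive formulation (a sketch, not part of the recorded program) visits each vertex's first unvisited neighbour before backtracking. With the adjacency list above it prints 0 1 3 4 5 2, a different but equally valid DFS order, since the iterative version expands the most recently pushed neighbour first.

def dfs_recursive(adj, node, visited=None):
    # Visit a vertex, then recurse into each unvisited neighbour.
    if visited is None:
        visited = [False] * len(adj)
    visited[node] = True
    print(node, end=" ")
    for neighbor in adj[node]:
        if not visited[neighbor]:
            dfs_recursive(adj, neighbor, visited)

dfs_recursive(g.adj, 0)   # prints: 0 1 3 4 5 2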

Result :

The program has been executed successfully and the output is verified.
Ex No : 2 IMPLEMENTATION OF INFORMED SEARCH
Date : ALGORITHMS (A*, memory-bounded A*)

Aim :

To implement the informed search algorithms using a Python program.

A) A*:

Algorithm:

Step 1 : Start
Step 2 : Initialize the start and goal nodes.
Step 3 : Create an empty open list and a closed list.
Step 4 : Add the start node to the open list.
Step 5 : While the open list is not empty:
Find the node with the lowest f score in the open list.
If this node is the goal, return the path.
Otherwise, move the node to the closed list and consider its neighbors.
For each neighbor, calculate its g score (the cost to move to this node from the
start) and h score (the heuristic estimate of the cost to move from this node to the goal).
If the neighbor is not in the open list or its new g score is lower than its old
g score, update the neighbor's f, g, and h scores and set its parent to the current node.
Add the neighbor to the open list if it is not already there.
Step 6 : If the goal node is not reached, there is no path.
Step 7 : Exit

Program :

import heapq

class Node:
    def __init__(self, state, parent=None, g=0, h=0):
        self.state = state
        self.parent = parent
        self.g = g
        self.h = h

    def f_score(self):
        return self.g + self.h

    def __lt__(self, other):
        # Lets heapq break ties between nodes with equal f scores.
        return self.f_score() < other.f_score()

class Graph:
    def __init__(self, vertices, edges):
        self.vertices = vertices
        self.edges = edges

    def neighbors(self, node):
        return [Node(state) for state in self.edges.get(node.state, [])]

def astar_search(graph, start, goal, heuristic):
    start_node = Node(start)
    goal_node = Node(goal)
    open_list = []
    closed_list = set()
    heapq.heappush(open_list, (start_node.f_score(), start_node))
    while open_list:
        current_node = heapq.heappop(open_list)[1]
        if current_node.state == goal_node.state:
            # Reconstruct the path by walking back through the parents.
            path = []
            while current_node:
                path.append(current_node.state)
                current_node = current_node.parent
            return list(reversed(path))
        closed_list.add(current_node.state)
        for neighbor in graph.neighbors(current_node):
            if neighbor.state in closed_list:
                continue
            # Each move costs 1; h comes from the supplied heuristic.
            neighbor.g = current_node.g + 1
            neighbor.h = heuristic(neighbor.state, goal)
            neighbor.parent = current_node
            heapq.heappush(open_list, (neighbor.f_score(), neighbor))
    return None

vertices = ['A', 'B', 'C', 'D', 'E']

edges = {'A': ['B', 'C'],
         'B': ['A', 'D'],
         'C': ['A', 'D', 'E'],
         'D': ['B', 'C', 'E'],
         'E': ['C', 'D']}

graph = Graph(vertices, edges)

start = 'A'
goal = 'E'
heuristic = lambda n, goal: 0   # a zero heuristic: A* behaves like uniform-cost search

print("A* Search:")
print(astar_search(graph, start, goal, heuristic))

Output :

A* Search:
['A', 'C', 'E']
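
Because the heuristic above always returns 0, this run is effectively uniform-cost search. A sketch of a nonzero heuristic: the h values below are made-up, illustrative estimates of the remaining cost to 'E', chosen to be admissible (never larger than the true remaining cost).

# Hypothetical remaining-cost estimates to 'E'.
h_values = {'A': 2, 'B': 2, 'C': 1, 'D': 1, 'E': 0}
heuristic = lambda n, goal: h_values[n]

print(astar_search(graph, 'A', 'E', heuristic))   # still finds ['A', 'C', 'E']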
B) Memory-Bounded A* :

Algorithm:

Step 1 : Start.
Step 2 : Initialize the open list with the start node.
Step 3 : While the open list is not empty:
Pop the node with the lowest f-value from the open list.
If the node is the goal, return the path.
Expand the node and add its successors to the open list.
If the open list size exceeds the memory limit, remove the node(s) with the highest
f-value.
Step 4 : If the open list is empty and the goal node has not been found, return failure.
Step 5 : Exit.

Program :

import heapq

# Define the Node class
class Node:
    def __init__(self, state, g_value, h_value, parent):
        self.state = state
        self.g_value = g_value
        self.h_value = h_value
        self.parent = parent

    def __lt__(self, other):
        # Order nodes by f = g + h so heapq pops the most promising one.
        return (self.g_value + self.h_value) < (other.g_value + other.h_value)

# Define the heuristic function (Manhattan distance on the grid)
def heuristic(state, goal):
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

# Define the successor function for a 3x3 grid
def get_successors(state):
    successors = []
    x, y = state
    if x > 0:
        successors.append((x-1, y))
    if x < 2:
        successors.append((x+1, y))
    if y > 0:
        successors.append((x, y-1))
    if y < 2:
        successors.append((x, y+1))
    return successors

# Define the MA* search function
def ma_star_search(start, goal, memory_limit):
    open_list = []
    closed_set = set()
    heapq.heappush(open_list, Node(start, 0, heuristic(start, goal), None))
    while open_list:
        current = heapq.heappop(open_list)
        if current.state == goal:
            path = []
            while current:
                path.append(current.state)
                current = current.parent
            return list(reversed(path))
        closed_set.add(current.state)
        for successor in get_successors(current.state):
            if successor in closed_set:
                continue
            g_value = current.g_value + 1
            h_value = heuristic(successor, goal)
            new_node = Node(successor, g_value, h_value, current)
            if len(open_list) < memory_limit:
                heapq.heappush(open_list, new_node)
            else:
                # Memory bound reached: keep the new node only if it is
                # more promising than the worst node currently stored.
                max_node = max(open_list)
                if new_node < max_node:
                    open_list.remove(max_node)
                    heapq.heapify(open_list)
                    heapq.heappush(open_list, new_node)
    return None

# Define the start and goal states
start = (0, 0)
goal = (2, 2)

# Run the MA* search algorithm
path = ma_star_search(start, goal, 3)

# Print the path
print("MA* Search Path:", path)

Output :

MA* Search Path: [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
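
Since the memory bound decides which frontier nodes survive, it can change which path is found. A quick usage sketch that re-runs the search under different limits (the exact paths depend on heap tie-breaking, so the printed routes may vary):

# Re-run the search with different memory limits and compare the results.
for limit in (2, 3, 10):
    print("memory_limit =", limit, "->", ma_star_search(start, goal, limit))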

Result :
The program has been executed successfully and the output has been verified.
Ex No : 3 Implement naïve Bayes models
Date :

Aim :

To write a Python program to implement naïve Bayes models.

Algorithm:

Step 1: Start
Step 2: import the dataset and necessary dependencies
Step 3: Calculate Prior Probability of Classes P(y)
Step 4: Calculate the Likelihood Table for all features
Step 5: Calculate Posterior Probability for each class using the Naive
Bayesian equation
Step 6: End

Program :

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math

def accuracy_score(y_true, y_pred):

    """ percentage of predictions that match the true labels """

    return round(float(sum(y_pred == y_true)) / float(len(y_true)) * 100, 2)

def pre_processing(df):

    """ partitioning data into features and target """

    X = df.drop([df.columns[-1]], axis=1)
    y = df[df.columns[-1]]

    return X, y

class NaiveBayes:
    def __init__(self):
        self.features = list
        self.likelihoods = {}
        self.class_priors = {}
        self.pred_priors = {}

        self.X_train = np.array
        self.y_train = np.array
        self.train_size = int
        self.num_feats = int

    def fit(self, X, y):

        self.features = list(X.columns)
        self.X_train = X
        self.y_train = y
        self.train_size = X.shape[0]
        self.num_feats = X.shape[1]

        for feature in self.features:
            self.likelihoods[feature] = {}
            self.pred_priors[feature] = {}

            for feat_val in np.unique(self.X_train[feature]):
                self.pred_priors[feature].update({feat_val: 0})

                for outcome in np.unique(self.y_train):
                    self.likelihoods[feature].update({feat_val + '_' + outcome: 0})
                    self.class_priors.update({outcome: 0})

        self._calc_class_prior()
        self._calc_likelihoods()
        self._calc_predictor_prior()

    def _calc_class_prior(self):

        """ P(c) - Prior Class Probability """

        for outcome in np.unique(self.y_train):
            outcome_count = sum(self.y_train == outcome)
            self.class_priors[outcome] = outcome_count / self.train_size

    def _calc_likelihoods(self):

        """ P(x|c) - Likelihood """

        for feature in self.features:
            for outcome in np.unique(self.y_train):
                outcome_count = sum(self.y_train == outcome)
                feat_likelihood = self.X_train[feature][
                    self.y_train[self.y_train == outcome].index.values.tolist()
                ].value_counts().to_dict()

                for feat_val, count in feat_likelihood.items():
                    self.likelihoods[feature][feat_val + '_' + outcome] = count / outcome_count

    def _calc_predictor_prior(self):

        """ P(x) - Evidence """

        for feature in self.features:
            feat_vals = self.X_train[feature].value_counts().to_dict()
            for feat_val, count in feat_vals.items():
                self.pred_priors[feature][feat_val] = count / self.train_size

    def predict(self, X):

        """ Calculates Posterior probability P(c|x) """

        results = []
        X = np.array(X)

        for query in X:
            probs_outcome = {}
            for outcome in np.unique(self.y_train):
                prior = self.class_priors[outcome]
                likelihood = 1
                evidence = 1

                for feat, feat_val in zip(self.features, query):
                    likelihood *= self.likelihoods[feat][feat_val + '_' + outcome]
                    evidence *= self.pred_priors[feat][feat_val]

                posterior = (likelihood * prior) / evidence
                probs_outcome[outcome] = posterior

            result = max(probs_outcome, key=lambda x: probs_outcome[x])
            results.append(result)

        return np.array(results)

if __name__ == "__main__":

    # Weather dataset
    print("\nWeather Dataset:")

    df = pd.read_csv("E:/tennisdata.csv")   # the file is a CSV, so read_csv (not read_table)
    # print(df)

    # Split features and target
    X, y = pre_processing(df)

    nb_clf = NaiveBayes()
    nb_clf.fit(X, y)

    print("Train Accuracy: {}".format(accuracy_score(y, nb_clf.predict(X))))

    # Query 1:
    query = np.array([['Rainy', 'Mild', 'Normal', 't']])
    print("Query 1:- {} ---> {}".format(query, nb_clf.predict(query)))

    # Query 2:
    query = np.array([['Overcast', 'Cool', 'Normal', 't']])
    print("Query 2:- {} ---> {}".format(query, nb_clf.predict(query)))

    # Query 3:
    query = np.array([['Sunny', 'Hot', 'High', 't']])
    print("Query 3:- {} ---> {}".format(query, nb_clf.predict(query)))

Output :

Weather Dataset:
Train Accuracy: 7.14
Query 1:- [['Rainy' 'Mild' 'Normal' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
Query 2:- [['Overcast' 'Cool' 'Normal' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
Query 3:- [['Sunny' 'Hot' 'High' 't']] ---> ['D1,Sunny,Hot,High,Weak,No']
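
As a cross-check on the hand-rolled classifier, scikit-learn's CategoricalNB implements the same categorical naive Bayes model. A minimal sketch, assuming tennisdata.csv has the four categorical feature columns used above followed by the target column:

import pandas as pd
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder

df = pd.read_csv("E:/tennisdata.csv")   # assumed path and layout, as above
X_raw, y_raw = df.iloc[:, :-1], df.iloc[:, -1]

# Naive Bayes on categories requires integer codes, not strings.
X_enc = OrdinalEncoder().fit_transform(X_raw)
y_enc = LabelEncoder().fit_transform(y_raw)

clf = CategoricalNB()
clf.fit(X_enc, y_enc)
print("Train accuracy:", clf.score(X_enc, y_enc))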

Result :
The program has been executed successfully and the output has been verified.
Ex No: 4 IMPLEMENT BAYESIAN NETWORKS
Date :

Aim : To write a Python program to implement Bayesian networks.

Algorithm :

Step 1: Start.
Step 2: Construct the Bayesian network by specifying the nodes and their
conditional probability distributions.
Step 3: Identify the set of observed variables (i.e., the evidence).
Step 4: For each unobserved variable, compute its posterior probability given
the evidence using Bayes' rule and the conditional probability distributions
of the variable and its parents.
Step 5: Return the posterior probabilities of interest.
Step 6: Stop.

Program :

from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

# Defining the model structure. We can define the network by just passing a list of edges.
model = BayesianModel([('D', 'G'), ('I', 'G'), ('G', 'L'), ('I', 'S')])

# Defining individual CPDs.
cpd_d = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]])
cpd_i = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]])

# Represents P(grade | diff, intel)
cpd_g = TabularCPD(variable='G', variable_card=3,
                   values=[[0.3, 0.05, 0.9, 0.5],
                           [0.4, 0.25, 0.08, 0.3],
                           [0.3, 0.7, 0.02, 0.2]],
                   evidence=['I', 'D'],
                   evidence_card=[2, 2])
cpd_l = TabularCPD(variable='L', variable_card=2,
                   values=[[0.1, 0.4, 0.99],
                           [0.9, 0.6, 0.01]],
                   evidence=['G'],
                   evidence_card=[3])
cpd_s = TabularCPD(variable='S', variable_card=2,
                   values=[[0.95, 0.2],
                           [0.05, 0.8]],
                   evidence=['I'],
                   evidence_card=[2])

# Associating the CPDs with the network
model.add_cpds(cpd_d, cpd_i, cpd_g, cpd_l, cpd_s)

# check_model checks the network structure and verifies that the CPDs are
# correctly defined and sum to 1.
model.check_model()

Output :

True
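
With the model validated, posterior probabilities (Step 4 of the algorithm) can be computed with pgmpy's variable elimination. A short sketch that queries the grade distribution given an easy course (D=0) and a high-intelligence student (I=1), printing the resulting factor:

from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
# P(G | D=0, I=1): posterior over the three grade levels.
print(infer.query(variables=['G'], evidence={'D': 0, 'I': 1}))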

Result :

The program has been executed successfully and the output is verified.
Ex No: 5(a) BUILDING REGRESSION MODELS
Date : (LINEAR REGRESSION)

Aim : To write a linear regression program using Python.

Algorithm :

Step 1: Start the program.


Step 2: Import the necessary modules.
Step 3: Create two arrays namely x and y.
Step 4: Using the plot() function, display the output of the code.
Step 5: Display the result.
Step 6: Stop the program.
Program :

import matplotlib.pyplot as plt
from scipy import stats

x = [89,43,36,36,95,10,66,34,38,20,26,29,48,64,6,5,36,66,72,40]
y = [21,46,3,35,67,95,53,72,58,10,26,34,90,33,38,20,56,2,47,15]

slope, intercept, r, p, std_err = stats.linregress(x, y)

def myfunc(x):
    return slope * x + intercept

mymodel = list(map(myfunc, x))

plt.scatter(x, y)
plt.plot(x, mymodel)
plt.show()
Output :

A scatter plot of the data points with the fitted regression line is displayed.
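linregress also returns the correlation coefficient r (unpacked above but unused); a one-line sketch to report the goodness of fit:

# r close to +1 or -1 means a strong linear relationship; near 0 means weak.
print("Correlation coefficient r:", round(r, 4))
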
Result :

The program has been executed successfully and the output is verified.
Ex No: 5(b) BUILDING REGRESSION MODELS
Date : (POLYNOMIAL REGRESSION)

Aim : To write a polynomial regression program using Python.

Algorithm :

Step 1: Start the program.


Step 2: Import the necessary modules.
Step 3: Use the polyfit() function to fit the polynomial regression model.
Step 4: Display the scatter plot and the fitted polynomial curve.
Step 5: Stop the program.
Program :

import numpy
import matplotlib.pyplot as plt

x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]

mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))

myline = numpy.linspace(1, 22, 100)

plt.scatter(x, y)
plt.plot(myline, mymodel(myline))
plt.show()
Output :

A scatter plot of the data points with the fitted third-degree polynomial curve is displayed.
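A common way to judge the fit is the R-squared score; a sketch using scikit-learn's r2_score (values near 1 indicate a good fit):

from sklearn.metrics import r2_score

# Compare the observed y values with the model's predictions at the same x.
print(r2_score(y, mymodel(x)))
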
Result :

The program has been executed successfully and the output is verified.
Ex No: 5(c) BUILDING REGRESSION MODELS
Date : (MULTIPLE REGRESSION)

Aim : To write a multiple regression program using Python.

Algorithm :

Step 1: Start the program.


Step 2: Import the necessary modules.
Step 3: Use the LinearRegression() function to build the multiple regression model.
Step 4: Predict the target value for new inputs and display the result.
Step 5: Stop the program.

Program :

import pandas
from sklearn import linear_model

df = pandas.read_csv("data.csv")

X = df[['Weight', 'Volume']]
y = df['CO2']

regr = linear_model.LinearRegression()
regr.fit(X, y)

# Predict the CO2 emission of a car where the weight is 2300 kg
# and the volume is 1300 cm3:
predictedCO2 = regr.predict([[2300, 1300]])

print(predictedCO2)
Output :

[107.2087328]
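
The fitted model also exposes its parameters; printing them (a small sketch) shows how much the predicted CO2 changes per unit of weight and volume:

# One coefficient per feature (Weight, Volume), plus the intercept.
print(regr.coef_)
print(regr.intercept_)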

Result :

The program has been executed successfully and the output is verified.
Ex No: 6 DECISION TREE
Date :

Aim : To implement a decision tree classifier using Python.

Algorithm :

Step 1: Start the program.


Step 2: Import the necessary modules.
Step 3: Map the categorical columns to numbers and fit the decision tree classifier.
Step 4: Use the matplotlib package to plot and display the tree.
Step 5: Stop the program.

Program :

# Three lines to make our compiler able to draw:
import sys
import matplotlib
matplotlib.use('Agg')

import pandas
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt

df = pandas.read_csv("data.csv")

# Map the categorical columns to numbers.
d = {'UK': 0, 'USA': 1, 'N': 2}
df['Nationality'] = df['Nationality'].map(d)
d = {'YES': 1, 'NO': 0}
df['Go'] = df['Go'].map(d)

features = ['Age', 'Experience', 'Rank', 'Nationality']

X = df[features]
y = df['Go']

dtree = DecisionTreeClassifier()
dtree = dtree.fit(X, y)

tree.plot_tree(dtree, feature_names=features)

# Two lines to make our compiler able to draw:
plt.savefig(sys.stdout.buffer)
sys.stdout.flush()

Output :

A plot of the fitted decision tree is displayed.
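The fitted tree can also answer queries directly; a sketch, where the feature values are made-up examples in the order Age, Experience, Rank, Nationality:

# Would a 40-year-old with 10 years of experience, rank 7, from the USA "Go"?
print(dtree.predict([[40, 10, 7, 1]]))
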
Result :

The program has been executed successfully and the output is verified.
Ex No : 7 Build SVM model
Date :

Aim :

To build an SVM model using a Python program.

Algorithm:

Step 1: Start.
Step 2: Load the dataset you want to use for training the model.
Step 3: Preprocess the data to make it ready for training the model.
Step 4: Split the data into two sets - one for training the model and the
other for testing it.
Step 5: Define the SVM model (kernel and regularization parameter C).
Step 6: Train the SVM model using the training set and test it on the testing set.
Step 7: Use cross-validation techniques like grid search to find the optimal values
for the hyperparameters.
Step 8: Stop.

Program :

# Step 1: Load the dataset
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target

# Step 2: Preprocess the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Step 3: Split the data into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=42)

# Step 4: Define the SVM model
from sklearn.svm import SVC
svm_model = SVC(kernel='linear', C=1.0)

# Step 5: Train the SVM model
svm_model.fit(X_train, y_train)

# Step 6: Test the SVM model
y_pred = svm_model.predict(X_test)
from sklearn.metrics import accuracy_score
print("Accuracy: ", accuracy_score(y_test, y_pred))

# Step 7: Tune the hyperparameters
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
print("Best parameters: ", grid_search.best_params_)
Output :
Accuracy: 0.9777777777777777
Best parameters: {'C': 10, 'kernel': 'linear'}
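
Beyond a single accuracy number, per-class metrics are often worth checking. A sketch using the tuned model (GridSearchCV refits the best estimator by default, so it can predict directly):

from sklearn.metrics import classification_report

# Precision, recall and F1 for each of the three iris classes.
print(classification_report(y_test, grid_search.predict(X_test)))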

Result :
The program has been executed successfully and the output has been verified.
Ex No : 8 Ensemble techniques implementation
Date :

Aim :

To implement ensemble techniques using a Python program.

Algorithm:

Step 1: In bagging, multiple models are trained on random subsets of the data and
their predictions are aggregated to make the final prediction.
Step 2: In boosting, multiple weak models are trained sequentially, each correcting
the errors of the previous one.
Step 3: In stacking, a meta-model is trained to combine the predictions of the
base models into the final prediction (see the sketch after the program).
Step 4: In averaging, multiple models are trained independently and their
predictions are averaged to make the final prediction.
Step 5: In blending, multiple models are trained independently and their
predictions are combined using a weighted average.
Step 6: In an ensemble of ensembles, multiple ensembles built with different
techniques are combined to create the final prediction.

Program :

from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the iris dataset
data = load_iris()

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target,
                                                    test_size=0.3, random_state=42)

# Define the base models
model1 = DecisionTreeClassifier(random_state=42)
model2 = LogisticRegression(random_state=42)
model3 = KNeighborsClassifier()

# Define the ensemble models
ensemble1 = BaggingClassifier(base_estimator=model1, n_estimators=10, random_state=42)
ensemble2 = AdaBoostClassifier(base_estimator=model2, n_estimators=10, random_state=42)
ensemble3 = GradientBoostingClassifier(n_estimators=10, random_state=42)

# Define the voting classifier
voting_clf = VotingClassifier(estimators=[('bagging', ensemble1),
                                          ('adaboost', ensemble2),
                                          ('gradient_boosting', ensemble3)],
                              voting='hard')
# Train the voting classifier
voting_clf.fit(X_train, y_train)
# Make predictions on the test set
y_pred = voting_clf.predict(X_test)
# Calculate the accuracy of the ensemble model
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy: %.2f' % accuracy)

Output :

Accuracy: 1.00
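
The algorithm mentions stacking, which the program above does not demonstrate. A minimal sketch using scikit-learn's StackingClassifier with two of the same base models (an illustration, not part of the recorded output):

from sklearn.ensemble import StackingClassifier

# A logistic-regression meta-model learns to combine the base models' predictions.
stacking_clf = StackingClassifier(
    estimators=[('tree', model1), ('knn', model3)],
    final_estimator=LogisticRegression(random_state=42))
stacking_clf.fit(X_train, y_train)
print('Stacking accuracy: %.2f' % stacking_clf.score(X_test, y_test))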

Result :
The program has been executed successfully and the output has been verified.
Ex No : 9 IMPLEMENT CLUSTERING ALGORITHMS –
Date : KMEANS CLUSTERING

Aim : To implement the k-means clustering algorithm using Python.

Algorithm:

Step 1:- Start the program.


Step 2:- Import the necessary modules.
Step 3:- Create two arrays namely x and y.
Step 4:- Construct the k-means clusters using KMeans().
Step 5:- Display the result.
Step 6:- Stop the program.

Program :

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

x = [4, 5, 10, 4, 3, 11, 14, 6, 10, 12]
y = [21, 19, 24, 17, 16, 25, 24, 22, 21, 21]

data = list(zip(x, y))
inertias = []

# Fit k-means for k = 1..10 and record the inertia (within-cluster sum of squares).
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(data)
    inertias.append(kmeans.inertia_)

plt.plot(range(1, 11), inertias, marker='o')
plt.title('Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
Output :

An elbow-method plot of inertia against the number of clusters is displayed.
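Once the elbow plot suggests a good k (for this data, the bend appears around k = 2), the final clustering can be fitted and visualised; a short sketch:

# Fit the final model and colour each point by its assigned cluster.
kmeans = KMeans(n_clusters=2)
kmeans.fit(data)
plt.scatter(x, y, c=kmeans.labels_)
plt.show()
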
Result :
The program has been executed successfully and the output has been verified.
Ex No : 10 IMPLEMENT EM FOR BAYESIAN NETWORK
Date :

Aim :

To implement EM for a Bayesian network using Python.

Algorithm:

Step 1:- Start the program.


Step 2:- Import the necessary modules.
Step 3:- Read the csv file containing data.
Step 4:- display the data and compute EM for Bayesian network.
Step 5:- Stop the program

Program :

import numpy as np
import csv
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)

print('Few examples from the dataset are given below')
print(heartDisease.head())

model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'),
                       ('exang', 'trestbps'), ('trestbps', 'heartdisease'),
                       ('fbs', 'heartdisease'), ('heartdisease', 'restecg'),
                       ('heartdisease', 'thalach'), ('heartdisease', 'chol')])

print('\n Learning CPD using Maximum likelihood estimators')
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')
HeartDisease_infer = VariableElimination(model)

print('\n 1. Probability of HeartDisease given Age=28')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 28})
print(q['heartdisease'])

print('\n 2. Probability of HeartDisease given cholesterol=100')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'chol': 100})
print(q['heartdisease'])
Output :

Few examples from the dataset are given below

   age  sex  cp  trestbps  ...  slope  ca  thal  heartdisease
0   63    1   1       145  ...      3   0     6             0
1   67    1   4       160  ...      2   3     3             2
[5 rows x 14 columns]

Learning CPD using Maximum likelihood estimators

Inferencing with Bayesian Network:

1. Probability of HeartDisease given Age=28
╒════════════════╤═════════════════════╕
│ heartdisease   │   phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │              0.6791 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │              0.1212 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │              0.0810 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │              0.0939 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │              0.0247 │
╘════════════════╧═════════════════════╛

2. Probability of HeartDisease given cholesterol=100
╒════════════════╤═════════════════════╕
│ heartdisease   │   phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │              0.5400 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │              0.1533 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │              0.1303 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │              0.1259 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │              0.0506 │
╘════════════════╧═════════════════════╛
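
The exercise title asks for EM, while the program fits the CPDs with maximum-likelihood estimation; with no missing values, EM's expectation step is trivial and the two agree. Recent pgmpy versions also ship an explicit EM estimator; a hedged sketch, assuming a pgmpy version that provides pgmpy.estimators.ExpectationMaximization:

from pgmpy.estimators import ExpectationMaximization

# EM alternates expected sufficient statistics with CPD re-estimation;
# on fully observed data it converges to the MLE fit above.
model.fit(heartDisease, estimator=ExpectationMaximization)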

Result :
The program has been executed successfully and the output has been verified.
Ex No : 11 BUILD A SIMPLE NN MODEL
Date :

Aim :

To develop or build a simple neural network (NN) model using Python.

Algorithm:

Step 1 : Start.
Step 2 : Create a sigmoid function.
Step 3 : Initialize the required parameters, weight and bias.
Step 4 : Create a forward propagation function which takes x and the initialized
parameters as input and returns a2 and cache.
Step 5 : Create a calculate_cost function which takes a2 and y as input parameters
and returns the cost.
Step 6 : Now create a backward propagation function which takes x, y, cache,
and parameters as input parameters and returns grads.
Step 7 : Create an update_parameters function which takes parameters,
grads, and learning_rate as input parameters and returns the updated values.
Step 8 : Now create the model which takes the input parameters x, y, n_x, n_h,
n_y, num_of_iters, learning_rate and returns the parameters as output.
Step 9 : Create a predict function which takes x and parameters as input
parameters and returns the prediction as output.
Step 10 : Stop.

Program :

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def initialize_parameters(n_x, n_h, n_y):
    w1 = np.random.randn(n_h, n_x)
    b1 = np.zeros((n_h, 1))
    w2 = np.random.randn(n_y, n_h)
    b2 = np.zeros((n_y, 1))

    parameters = {
        "w1": w1,
        "b1": b1,
        "w2": w2,
        "b2": b2,
    }
    return parameters

def forward_prop(x, parameters):
    w1 = parameters["w1"]
    b1 = parameters["b1"]
    w2 = parameters["w2"]
    b2 = parameters["b2"]

    z1 = np.dot(w1, x) + b1
    a1 = np.tanh(z1)
    z2 = np.dot(w2, a1) + b2
    a2 = sigmoid(z2)

    cache = {
        "a1": a1,
        "a2": a2,
    }
    return a2, cache

def calculate_cost(a2, y):
    # Cross-entropy cost, averaged over the m training examples.
    cost = -np.sum(np.multiply(y, np.log(a2)) + np.multiply(1 - y, np.log(1 - a2))) / m
    cost = np.squeeze(cost)
    return cost

def backward_prop(x, y, cache, parameters):
    a1 = cache["a1"]
    a2 = cache["a2"]
    w2 = parameters["w2"]

    dz2 = a2 - y
    dw2 = np.dot(dz2, a1.T) / m
    db2 = np.sum(dz2, axis=1, keepdims=True) / m
    dz1 = np.multiply(np.dot(w2.T, dz2), 1 - np.power(a1, 2))
    dw1 = np.dot(dz1, x.T) / m
    db1 = np.sum(dz1, axis=1, keepdims=True) / m

    grads = {
        "dw1": dw1,
        "db1": db1,
        "dw2": dw2,
        "db2": db2
    }
    return grads

def update_parameters(parameters, grads, learning_rate):
    w1 = parameters["w1"]
    b1 = parameters["b1"]
    w2 = parameters["w2"]
    b2 = parameters["b2"]

    dw1 = grads["dw1"]
    db1 = grads["db1"]
    dw2 = grads["dw2"]
    db2 = grads["db2"]

    # Gradient-descent update.
    w1 = w1 - learning_rate * dw1
    b1 = b1 - learning_rate * db1
    w2 = w2 - learning_rate * dw2
    b2 = b2 - learning_rate * db2

    new_parameters = {
        "w1": w1,
        "b1": b1,
        "w2": w2,
        "b2": b2,
    }
    return new_parameters

def model(x, y, n_x, n_h, n_y, num_of_iters, learning_rate):
    parameters = initialize_parameters(n_x, n_h, n_y)

    for i in range(0, num_of_iters + 1):
        a2, cache = forward_prop(x, parameters)
        cost = calculate_cost(a2, y)
        grads = backward_prop(x, y, cache, parameters)
        parameters = update_parameters(parameters, grads, learning_rate)

        if i % 100 == 0:
            print('Cost after iteration {:d}: {:f}'.format(i, cost))

    return parameters

def predict(x, parameters):
    a2, cache = forward_prop(x, parameters)
    yhat = np.squeeze(a2)

    if yhat >= 0.5:
        y_predict = 1
    else:
        y_predict = 0
    return y_predict

np.random.seed(2)

# XOR training data: columns are the four input pairs.
x = np.array([[0, 0, 1, 1], [0, 1, 0, 1]])
y = np.array([[0, 1, 1, 0]])

m = x.shape[1]

n_x = 2
n_h = 2
n_y = 1
num_of_iters = 1000
learning_rate = 0.3

trained_parameters = model(x, y, n_x, n_h, n_y, num_of_iters, learning_rate)

x_test = np.array([[1], [0]])
y_predict = predict(x_test, trained_parameters)
print("Neural Network prediction for example ({:d}, {:d}) is {:d}".format(
    x_test[0][0], x_test[1][0], y_predict))

Output :

Cost after iteration 0: 0.856267


Cost after iteration 100: 0.347426
Cost after iteration 200: 0.101195
Cost after iteration 300: 0.053631
Cost after iteration 400: 0.036031
Cost after iteration 500: 0.027002
Cost after iteration 600: 0.021543
Cost after iteration 700: 0.017896
Cost after iteration 800: 0.015293
Cost after iteration 900: 0.013344
Cost after iteration 1000: 0.011831
Neural Network prediction for example (1, 0) is 1
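
The network is learning XOR; a quick check (a sketch) runs all four input pairs through the trained parameters:

# Expected XOR outputs: 0, 1, 1, 0.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    p = predict(np.array([[a], [b]]), trained_parameters)
    print("({}, {}) -> {}".format(a, b, p))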

Result :
The program has been executed successfully and the output has been verified.
Ex No : 12 Build deep learning NN models
Date :

Aim : To write a Python program to build a deep learning neural network model.

Algorithm:

Step 1: Start.
Step 2: Import the necessary libraries.
Step 3: Load the MNIST data and preprocess it (flatten, scale, one-hot encode).
Step 4: Define the model architecture (a dense hidden layer with dropout and
a softmax output layer).
Step 5: Compile the model with an optimizer, a loss function, and metrics.
Step 6: Train the model on the training set.
Step 7: Evaluate the trained model on the test set.
Step 8: End.

Program :

import tensorflow as tf
from tensorflow import keras

# Load the data
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Preprocess the data: flatten the 28x28 images, scale pixels to [0, 1],
# and one-hot encode the labels.
x_train = x_train.reshape((x_train.shape[0], 28 * 28)).astype('float32') / 255
x_test = x_test.reshape((x_test.shape[0], 28 * 28)).astype('float32') / 255
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)

# Define the model architecture
model = keras.models.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(28 * 28,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
Output :

The per-epoch training log is printed, followed by the test accuracy.
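After training, the model can classify individual images; a short sketch (argmax converts the softmax probabilities back to digit labels):

# Predict the first five test digits and compare with the true labels.
predictions = model.predict(x_test[:5]).argmax(axis=1)
print('Predicted:', predictions)
print('Actual:   ', y_test[:5].argmax(axis=1))
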
Result :
The program has been executed successfully and the output has been verified.
