AI and ML Lab Manual

The A* algorithm was implemented to find the shortest path between start and stop nodes. It maintains an open set of nodes still to be explored and a closed set of nodes already explored. Each node's distance from the start (g) and heuristic distance to the goal (h) are combined as f = g + h to select the best node to expand next. Neighbours of the expanded node are examined, and their distances are updated whenever a shorter path through the current node is found. The algorithm terminates when the stop node is reached, or when the open set becomes empty, in which case no path exists.


MANGAYARKARASI COLLEGE OF ENGINEERING

(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)
MANGAYARKARASI NAGAR, PARAVAI, MADURAI – 625 402
Website: http://mce-madurai.ac.in    E-Mail: [email protected]

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


LABORATORY MANUAL

Sub.Code : CS3491
Sub.Name : ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING LAB
Regulation : 2021

Prepared By,                                  Approved By,
B. SENTHIL RAJAMANOKAR, AP/CSE                HOD-CSE

SYLLABUS
LIST OF EXPERIMENTS:

1. Implementation of Uninformed search algorithms (BFS, DFS)
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models

TOTAL: 30 PERIODS

LIST OF EXPERIMENTS
S.No   Experiment                                                              Pg.No   Marks
1      Implementation of Uninformed search algorithms (BFS, DFS)
2      Implementation of Informed search algorithms (A*, memory-bounded A*)
3      Implement naïve Bayes models
4      Implement Bayesian Networks
5      Build Regression models
6      Build decision trees and random forests
7      Build SVM models
8      Implement ensembling techniques
9      Implement clustering algorithms
10     Implement EM for Bayesian networks
11     Build simple NN models
12     Build deep learning NN models

Experiment 1: Implementation of Uninformed search algorithms (BFS)

AIM:
Write a Program to Implement Breadth First Search.

Procedure:
Create a graph as a dictionary of key-value pairs (node : list of neighbours).
Initialize an empty list for visited nodes.
Initialize an empty list as a queue.
Define a function bfs taking visited, graph, and the start node as parameters.
Append the start node to both visited and the queue.
Loop while the queue is non-empty, dequeuing a node and enqueuing its unvisited neighbours.
Print each node as it is dequeued, giving the breadth-first order.
PROGRAM
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []   # list of visited nodes
queue = []     # initialize a queue

def bfs(visited, graph, node):   # function for BFS
    visited.append(node)
    queue.append(node)

    while queue:                 # loop until the queue is empty
        m = queue.pop(0)         # dequeue the oldest node
        print(m, end=" ")

        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')         # function call

OUTPUT

Following is the Breadth-First Search 5 3 7 2 4 8
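
Note: queue.pop(0) on a Python list is O(n), because every remaining element shifts one place left. For larger graphs, collections.deque gives O(1) removal from the front. A minimal sketch of the same traversal, assuming the graph dictionary defined above:

from collections import deque

def bfs_deque(graph, start):
    visited = [start]
    queue = deque([start])        # deque supports O(1) popleft()
    while queue:
        m = queue.popleft()
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

bfs_deque(graph, '5')             # prints the same order: 5 3 7 2 4 8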

Result:

Thus the Breadth First Search was successfully executed.

Experiment 2: Implementation of Uninformed search algorithms (DFS)

AIM:
Write a Program to Implement Depth First Search.

Procedure:
Create a graph as a dictionary of key-value pairs (node : list of neighbours).
Initialize an empty set to keep track of visited nodes.
Define a function dfs taking visited, graph, and the current node as parameters.
If the node has not been visited, print it and add it to the visited set.
Recursively call dfs on each neighbour, so each branch is explored as deep as possible first.
PROGRAM
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()   # set to keep track of visited nodes of the graph

def dfs(visited, graph, node):   # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')

OUTPUT

Following is the Depth-First Search
5
3
2
4
8
7
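
Note: the recursive version relies on Python's call stack, which is limited (about 1000 frames by default), so very deep graphs can raise RecursionError. A minimal iterative sketch using an explicit stack, assuming the same graph dictionary as above; it prints the same order for this graph:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()        # LIFO pop gives depth-first order
        if node not in visited:
            print(node)
            visited.add(node)
            # push neighbours in reverse so they are visited in listed order
            stack.extend(reversed(graph[node]))

dfs_iterative(graph, '5')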

Result:

Thus the Depth First Search was successfully executed.

Experiment 3: Implementation of Informed search algorithms
(Hill Climbing Algorithm)

Aim:
Write a program to implement Hill Climbing Algorithm

Program:
import random

def randomSolution(tsp):
    cities = list(range(len(tsp)))
    solution = []
    for i in range(len(tsp)):
        randomCity = cities[random.randint(0, len(cities) - 1)]
        solution.append(randomCity)
        cities.remove(randomCity)
    return solution

def routeLength(tsp, solution):
    routeLength = 0
    for i in range(len(solution)):
        routeLength += tsp[solution[i - 1]][solution[i]]
    return routeLength

def getNeighbours(solution):
    # neighbours are all tours obtained by swapping two cities
    neighbours = []
    for i in range(len(solution)):
        for j in range(i + 1, len(solution)):
            neighbour = solution.copy()
            neighbour[i] = solution[j]
            neighbour[j] = solution[i]
            neighbours.append(neighbour)
    return neighbours

def getBestNeighbour(tsp, neighbours):
    bestRouteLength = routeLength(tsp, neighbours[0])
    bestNeighbour = neighbours[0]
    for neighbour in neighbours:
        currentRouteLength = routeLength(tsp, neighbour)
        if currentRouteLength < bestRouteLength:
            bestRouteLength = currentRouteLength
            bestNeighbour = neighbour
    return bestNeighbour, bestRouteLength

def hillClimbing(tsp):
    currentSolution = randomSolution(tsp)
    currentRouteLength = routeLength(tsp, currentSolution)
    neighbours = getNeighbours(currentSolution)
    bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)

    # keep moving to the best neighbour while it improves the route
    while bestNeighbourRouteLength < currentRouteLength:
        currentSolution = bestNeighbour
        currentRouteLength = bestNeighbourRouteLength
        neighbours = getNeighbours(currentSolution)
        bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)

    return currentSolution, currentRouteLength

# distance matrix for four cities
tsp = [
    [0, 400, 500, 300],
    [400, 0, 300, 500],
    [500, 300, 0, 400],
    [300, 500, 400, 0]
]

print(hillClimbing(tsp))

OUTPUT

([1, 0, 3, 2], 1400)
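
Note: plain hill climbing halts at the first local optimum, which for TSP is not necessarily the shortest tour. A common remedy is random restarts: run the climb several times from different random starting tours and keep the best result. A minimal sketch reusing the hillClimbing function and tsp matrix above:

def hillClimbingWithRestarts(tsp, restarts=10):
    bestSolution, bestLength = hillClimbing(tsp)
    for _ in range(restarts - 1):
        solution, length = hillClimbing(tsp)
        if length < bestLength:       # keep the shortest tour seen so far
            bestSolution, bestLength = solution, length
    return bestSolution, bestLength

print(hillClimbingWithRestarts(tsp))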

Result:

Thus the Hill Climbing Algorithm was successfully executed.

Experiment 4: Implementation of Informed search algorithms

(A* Algorithm)

Aim:
Write a program to implement A* Algorithm.

Program:
def aStarAlgo(start_node, stop_node):
    open_set = {start_node}
    closed_set = set()
    g = {}           # store distance from starting node
    parents = {}     # parents contains an adjacency map of all nodes

    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set as its own parent
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # node with the lowest f() = g() + h() is selected
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v

        if n == None:
            print('Path does not exist!')
            return None

        # if the current node is the stop_node,
        # reconstruct the path from it back to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path

        if Graph_nodes[n] != None:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open or closed set are added to open,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # compare the stored distance g(m) with the distance
                    # from start through the current node n
                    if g[m] > g[n] + weight:
                        # update g(m) and change the parent of m to n
                        g[m] = g[n] + weight
                        parents[m] = n
                        # if m is in the closed set, move it back to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)

        # remove n from the open set and add it to the closed set
        # because all of its neighbours have been inspected
        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

# function to return the neighbours of the passed node
# together with their edge weights
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# heuristic: estimated distance from each node to the goal 'J'
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]

# Describe your graph here
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('A', 6), ('C', 3), ('D', 2)],
    'C': [('B', 3), ('D', 1), ('E', 5)],
    'D': [('B', 2), ('C', 1), ('E', 8)],
    'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
    'F': [('A', 3), ('G', 1), ('H', 7)],
    'G': [('F', 1), ('I', 3)],
    'H': [('F', 7), ('I', 2)],
    'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)]
}

aStarAlgo('A', 'J')

OUTPUT

Path found: ['A', 'F', 'G', 'I', 'J']
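
Note: the loop above finds the node with the lowest f = g + h by scanning the whole open set, which costs O(n) per expansion. A priority queue makes that selection O(log n). A minimal sketch using Python's heapq, reusing the get_neighbors and heuristic functions defined above; stale heap entries are skipped lazily instead of reopening closed nodes, which assumes a consistent heuristic:

import heapq

def a_star_heap(start_node, stop_node):
    # open list as a priority queue of (f, node) pairs, f = g + h
    open_heap = [(heuristic(start_node), start_node)]
    g = {start_node: 0}
    parents = {start_node: start_node}
    closed_set = set()

    while open_heap:
        f, n = heapq.heappop(open_heap)   # cheapest node in O(log n)
        if n == stop_node:
            path = [n]
            while parents[n] != n:
                n = parents[n]
                path.append(n)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        if n in closed_set:
            continue                      # stale entry left by a later improvement
        closed_set.add(n)
        for m, weight in get_neighbors(n):
            new_g = g[n] + weight
            if m not in g or new_g < g[m]:
                g[m] = new_g
                parents[m] = n
                heapq.heappush(open_heap, (new_g + heuristic(m), m))

    print('Path does not exist!')
    return None

a_star_heap('A', 'J')                     # finds the same path as above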

Result:

Thus the A* Algorithm was successfully executed.

Experiment 5: Implement naïve Bayes models and Bayesian Networks

Aim:
Write a program to construct a Bayesian network considering medical data. Use this model to
demonstrate the diagnosis of heart patients using standard Heart Disease Data Set. You can
use Java/Python ML library classes/API.

Attribute Information:
-- Only 14 used
-- 1. #3 (age)
-- 2. #4 (sex)
-- 3. #9 (cp)
-- 4. #10 (trestbps)
-- 5. #12 (chol)
-- 6. #16 (fbs)
-- 7. #19 (restecg)
-- 8. #32 (thalach)
-- 9. #38 (exang)
-- 10. #40 (oldpeak)
-- 11. #41 (slope)
-- 12. #44 (ca)
-- 13. #51 (thal)
-- 14. #58 (num)

Program:
import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach',
         'exang', 'oldpeak', 'slope', 'ca', 'thal', 'heartdisease']

heartDisease = pd.read_csv('Heart.csv', names=names)
heartDisease = heartDisease.replace('?', np.nan)

# network structure: each edge encodes a direct dependency between attributes
model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'),
                       ('exang', 'trestbps'), ('trestbps', 'heartdisease'),
                       ('fbs', 'heartdisease'), ('heartdisease', 'restecg'),
                       ('heartdisease', 'thalach'), ('heartdisease', 'chol')])

# learn the conditional probability tables from the data
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

HeartDisease_infer = VariableElimination(model)

q = HeartDisease_infer.query(variables=['heartdisease'])
print(q)

OUTPUT

╒═════════════════╤═════════════════════╕
│ heartdisease    │ phi(heartdisease)   │
╞═════════════════╪═════════════════════╡
│ heartdisease_0  │ 0.5593              │
├─────────────────┼─────────────────────┤
│ heartdisease_1  │ 0.4407              │
╘═════════════════╧═════════════════════╛
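
As a follow-up usage example, VariableElimination.query also accepts an evidence dictionary, so the posterior can be conditioned on observed attributes. A sketch, assuming the dataset encodes restecg as an integer where 1 denotes an abnormal resting ECG:

# hypothetical evidence value; it must match how restecg is coded in Heart.csv
q2 = HeartDisease_infer.query(variables=['heartdisease'], evidence={'restecg': 1})
print(q2)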

Result:

Thus the Bayesian Network was successfully executed.
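
The experiment title also covers naïve Bayes models. A minimal, self-contained sketch using scikit-learn's GaussianNB on the built-in iris dataset (a stand-in dataset chosen for illustration, not the heart-disease data above):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

nb = GaussianNB()   # assumes a Gaussian likelihood for each feature per class
nb.fit(X_train, y_train)
print("Naive Bayes accuracy:", accuracy_score(y_test, nb.predict(X_test)))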

Experiment 6: Build the Regression models in order to fit data points.
Select appropriate data set for your experiment and draw graphs.

Aim:
Write a program to build the Regression models in order to fit data points. Select appropriate
data set for your experiment and draw graphs.

Regression is a technique from statistics that is used to predict values of a desired target
quantity when the target quantity is continuous. In regression, we seek to identify (or
estimate) a continuous variable y associated with a given input vector x:

y is called the dependent variable.
x is called the independent variable.

Program:
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_iris
from sklearn.metrics import r2_score

# Load Iris dataset from scikit-learn library
iris = load_iris()
X = iris.data[:, 0].reshape(-1, 1)   # sepal length
y = iris.data[:, 2].reshape(-1, 1)   # petal length

# Create linear regression model and fit it to the data
model = LinearRegression()
model.fit(X, y)

# Generate predictions from the model
y_pred = model.predict(X)

# Calculate the coefficient of determination (R-squared) of the model
r_squared = r2_score(y, y_pred)
print("Coefficient of determination (R-squared): {:.2f}".format(r_squared))

# Create scatter plot with regression line
plt.scatter(X, y, color='blue')
plt.plot(X, y_pred, color='red', linewidth=2)

# Set plot title and axis labels
plt.title("Linear Regression of Sepal Length vs. Petal Length")
plt.xlabel("Sepal Length (cm)")
plt.ylabel("Petal Length (cm)")

# Show plot
plt.show()

OUTPUT

Coefficient of determination (R-squared): 0.76
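
As a follow-up, the fitted line's slope and intercept can be read directly from the model, which makes the relationship explicit as petal length ≈ slope × sepal length + intercept:

# the model was fitted on 2-D arrays, so coef_ has shape (1, 1) and intercept_ shape (1,)
print("Slope: {:.2f}".format(model.coef_[0][0]))
print("Intercept: {:.2f}".format(model.intercept_[0]))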

Result:

Thus the Regression Algorithm was successfully executed.

Experiment 7: Build decision trees and random forests

Aim:
Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new
sample.

Program:
# Load libraries
import pandas as pd
from sklearn.tree import DecisionTreeClassifier        # Import Decision Tree Classifier
from sklearn.model_selection import train_test_split   # Import train_test_split function
from sklearn import metrics   # Import scikit-learn metrics module for accuracy calculation

df = pd.read_csv('zoo.csv')

feature_cols = ['feathers', 'eggs', 'milk', 'airborne', 'aquatic', 'predator',
                'backbone', 'venomous', 'legs', 'tail']

X = df[feature_cols]
y = df['type']

# Split the dataset into training and test sets (70/30)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Create Decision Tree classifier object
clf = DecisionTreeClassifier()

# Train Decision Tree classifier
clf = clf.fit(X_train, y_train)

# Predict the response for the test dataset
y_pred = clf.predict(X_test)

print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

# Visualise the fitted tree
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

class_names = df['type'].unique().tolist()

plt.figure(figsize=(20, 10))
plot_tree(clf, filled=True, rounded=True, feature_names=feature_cols,
          class_names=class_names)
plt.savefig('zoo_tree.png')

Output:
Accuracy: 0.9354838709677419
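
The experiment title also asks for random forests. A minimal sketch fitting scikit-learn's RandomForestClassifier (an ensemble of decision trees, 100 by default) on the same train/test split:

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(random_state=42)   # an ensemble of decision trees
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
print("Random Forest Accuracy:", metrics.accuracy_score(y_test, rf_pred))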

Result:

Thus the Decision Tree Algorithm was successfully executed.

Experiment 8: Build SVM Models

Aim:
Write a program to demonstrate the working of the SVM models. Use an appropriate data set
for building the SVM model and apply this knowledge to classify a new sample.

Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values   # Age and Estimated Salary
y = dataset.iloc[:, 4].values        # Purchased (0/1)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Fit an SVM with an RBF kernel
from sklearn.svm import SVC
classifier = SVC(kernel='rbf', random_state=0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)

from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(accuracy_score(y_test, y_pred))

# Visualise the decision boundary on the test set
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(
    np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
    np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2,
             classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
OUTPUT
[[64 4]
[ 3 29]]
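
A design note: kernel='rbf' lets the SVM fit a curved decision boundary. Swapping in kernel='linear' restricts it to a straight line and is a quick check on whether the extra flexibility actually helps. A sketch on the same scaled split:

linear_clf = SVC(kernel='linear', random_state=0)
linear_clf.fit(X_train, y_train)
print("Linear kernel accuracy:",
      accuracy_score(y_test, linear_clf.predict(X_test)))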

Result:

Thus the SVM model was successfully executed.

Experiment 9: Implement Ensemble techniques

Aim:
Write a program to demonstrate the working of the Ensemble techniques. Use an appropriate
data set for building the Ensemble techniques and apply this knowledge to classify a new
sample.

Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv('zoo.csv')

feature_cols = ['feathers', 'eggs', 'milk', 'airborne', 'aquatic', 'predator',
                'backbone', 'venomous', 'legs', 'tail']

X = df[feature_cols]
y = df['type']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=52)

# Train three different base models
lr = LogisticRegression()
lr.fit(X_train, y_train)

rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)

dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)

# Features are [feathers, eggs, milk, airborne, aquatic, predator, backbone, venomous, legs, tail]
testdata = [1, 0, 1, 1, 1, 1, 0, 1, 4, 0]
X_new = pd.DataFrame([testdata], columns=X.columns)

lr_pred = lr.predict(X_new)
rf_pred = rf.predict(X_new)
dt_pred = dt.predict(X_new)

pred_df = pd.DataFrame({'Logistic regression': lr_pred, 'Random forest': rf_pred,
                        'Decision Tree': dt_pred})

# Max (majority) vote across the three predictions
final_pred = pred_df.mode(axis=1)[0].values

print("Prediction from 3 models \n")
print(pred_df)
print()
print("\nTHE FINAL PREDICTION by Max Vote is:", final_pred[0])

# Visualise the decision tree used in the ensemble
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

print("DECISION TREE")
class_names = df['type'].unique().tolist()
plt.figure(figsize=(20, 10))
plot_tree(dt, filled=True, rounded=True, feature_names=feature_cols,
          class_names=class_names)
plt.savefig('Zoo.png')

Output:

Prediction from 3 models

  Logistic regression Random forest Decision Tree
0              mammal       reptile       reptile

THE FINAL PREDICTION by Max Vote is: reptile
DECISION TREE
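
The manual max-vote step above can also be done with scikit-learn's VotingClassifier, which wraps the three estimators and applies hard (majority) voting internally. A minimal sketch on the same split and test sample:

from sklearn.ensemble import VotingClassifier

voting = VotingClassifier(
    estimators=[('lr', LogisticRegression()),
                ('rf', RandomForestClassifier(random_state=42)),
                ('dt', DecisionTreeClassifier(random_state=42))],
    voting='hard')   # 'hard' = majority vote over the predicted class labels
voting.fit(X_train, y_train)
print("VotingClassifier prediction:", voting.predict(X_new)[0])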

Result:
Thus the Ensemble Technique was successfully executed.

Experiment 10: Implement clustering algorithms

Aim:
To write a Python program to implement a clustering algorithm.

Program:

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Generate some random data
data = np.random.rand(100, 2)

# Specify the number of clusters
k = 3

# Initialize the KMeans model
kmeans = KMeans(n_clusters=k)

# Fit the model to the data
kmeans.fit(data)

# Get the cluster labels for each point in the data
labels = kmeans.labels_

# Get the centroids of the clusters
centroids = kmeans.cluster_centers_

# Plot the data with different colours for each cluster
colors = ['b', 'g', 'r']
for i in range(k):
    plt.scatter(data[labels == i, 0], data[labels == i, 1],
                c=colors[i], label='Cluster {}'.format(i + 1))

# Plot the centroids as black circles
plt.scatter(centroids[:, 0], centroids[:, 1], marker='o', c='k', s=100,
            linewidths=2, label='Centroids')

# Add legend and title
plt.legend()
plt.title('KMeans Clustering with k={}'.format(k))

# Show the plot
plt.show()

Output:

A scatter plot of the 100 random points coloured by cluster, with the three centroids marked as black circles.
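
Here k = 3 was fixed by hand. A common way to choose k is the elbow method: fit KMeans for a range of k values, plot the within-cluster sum of squares (the model's inertia_ attribute), and look for the bend in the curve. A sketch on the same data array:

# fit KMeans for k = 1..9 and record the within-cluster sum of squares
inertias = []
for n in range(1, 10):
    km = KMeans(n_clusters=n).fit(data)
    inertias.append(km.inertia_)

plt.plot(range(1, 10), inertias, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('Inertia (within-cluster sum of squares)')
plt.title('Elbow Method')
plt.show()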

Result:

Thus the Clustering algorithm was successfully executed.

