Artificial Intelligence and Machine Learning Fundamentals

The document outlines various programming exercises focused on algorithms for problem-solving, including Breadth First Search, Depth First Search, and comparisons of Greedy and A* algorithms. Each exercise includes an aim, algorithm steps, source code, and results demonstrating successful implementation. Additionally, it covers supervised learning techniques such as decision trees and non-parametric locally weighted regression.


INDEX

PROGRAMS FOR PROBLEM SOLVING WITH SEARCH

1. Implement Breadth First Search
2. Implement Depth First Search
3. Analysis of Breadth First and Depth First Search in Terms of Time and Space
4. Implement and Compare Greedy and A* Algorithms

SUPERVISED LEARNING

5. Implement the Non-Parametric Locally Weighted Regression Algorithm in Order to Fit Data Points. Select an Appropriate Data Set for Your Experiment and Draw Graphs
6. Write a Program to Demonstrate the Working of the Decision Tree Based Algorithm
7. Build an Artificial Neural Network by Implementing the Back Propagation Algorithm and Test the Same Using Appropriate Data Sets
8. Implement the Naive Bayesian Classifier
Ex.No: 1
IMPLEMENT BREADTH FIRST SEARCH
Date:

Aim:
To create a Python program to implement Breadth First Search.

Algorithm:
Step 1: Start the program.
Step 2: Create a set "visited" to track visited nodes and a queue (a deque) seeded with the start node and an empty path.
Step 3: While the queue is not empty, dequeue the first element (current_node, path) from the queue.
Step 4: If current_node is not visited, mark current_node as visited and append it to the path.
Step 5: If current_node is the goal_node, print the path and return.
Step 6: For each neighbor of current_node, enqueue the neighbor along with its path.
Step 7: If no path is found, print "No path found".
Step 8: Stop the program.

Source Code:
from collections import deque

def bfs(graph, start_node, goal_node):
    visited = set()
    # Each queue entry is (node, path taken to reach it)
    queue = deque([(start_node, [])])

    while queue:
        current_node, path = queue.popleft()
        if current_node not in visited:
            visited.add(current_node)
            path = path + [current_node]

            if current_node == goal_node:
                print("Path found:", path)
                return path

            # Enqueue every neighbor together with the path so far
            for neighbor in graph[current_node]:
                queue.append((neighbor, path))

    print("No path found")
    return None

# Example usage:
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E'],
}

start_node = 'A'
goal_node = 'F'

path = bfs(graph, start_node, goal_node)

Output:
Path found: ['A', 'C', 'F']
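
Because BFS expands the graph level by level, the first path it reports always uses the fewest edges. As a quick check (a sketch reusing the graph defined above):

    # 'D' is two hops from 'A'; BFS finds it through 'B'
    bfs(graph, 'A', 'D')  # prints: Path found: ['A', 'B', 'D']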

Result:
Thus the Python program to implement breadth first search was successfully
implemented and executed.

Ex.No: 2
IMPLEMENT DEPTH FIRST SEARCH
Date:

Aim:
To write a Python program to implement Depth First Search.

Algorithm:
Step 1: Start the Program.
Step 2: Start dfs function with inputs: graph, start_node, and goal_node.
Step 3: Initialize visited set and stack with start_node and empty path.
Step 4: While stack is not empty, Pop current_node and path from stack.
Step 5: If current_node not in visited, Add current_node to visited.
Step 6: Update path.
Step 7: If current_node equals goal_node, print "Path found" with the path
and return.
Step 8: For each neighbor of current_node, Push neighbor and updated path
onto stack.
Step 9: If no path found, print "No path found".
Step 10: Stop the Program.

Source Code:
def dfs(graph, start_node, goal_node):
    visited = set()
    stack = [(start_node, [])]

    while stack:
        current_node, path = stack.pop()
        if current_node not in visited:
            visited.add(current_node)
            path = path + [current_node]

            if current_node == goal_node:
                return path

            for neighbor in graph[current_node]:
                stack.append((neighbor, path))

    return None

# Example usage:
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E'],
}

start_node = 'A'
goal_node = 'F'

path = dfs(graph, start_node, goal_node)

if path:
    print("Path found:", path)
else:
    print("No path found")

Output:
Path found: ['A', 'C', 'F']
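
Unlike BFS, DFS commits to one branch and backtracks only when it must, so the path it returns is not necessarily the shortest. For example (a sketch reusing the graph above):

    # DFS reaches 'E' via the deep branch A -> C -> F -> E,
    # while BFS would return the two-hop path ['A', 'B', 'E']
    print(dfs(graph, 'A', 'E'))  # ['A', 'C', 'F', 'E']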

Result:
Thus the Python program to implement depth first search was successfully
implemented and executed.

Ex.No: 3
ANALYSIS OF BREADTH FIRST AND DEPTH FIRST SEARCH IN TERMS OF TIME AND SPACE
Date:

Aim:
To write a Python program to analyse breadth first and depth first search in
terms of time and space.

Algorithm (BFS):
Step 1: Create a queue for traversal.
Step 2: Initialize a visited array to keep track of visited vertices.
Step 3: Choose the starting vertex for traversal and enqueue it into the
queue.
Step 4: While the queue is not empty,
i) Dequeue a vertex from the queue,
ii) Print the dequeued vertex.
iii) Mark the dequeued vertex as visited.
iv) Enqueue all unvisited neighbors of the dequeued vertex.

Algorithm (DFS):
Step 1: Initialize a visited array to keep track of visited vertices.
Step 2: Choose the starting vertex for traversal and call a recursive utility
function for DFS traversal.
Step 3: Mark the current vertex as visited.
Step 4: Print the current vertex.
Step 5: Recursively call the function for each unvisited neighbor of the
current vertex.

Source Code:
from collections import deque

class Graph:
    def __init__(self, vertices):
        self.vertices = vertices
        self.adjacency_list = {vertex: [] for vertex in range(vertices)}

    def add_edge(self, u, v):
        self.adjacency_list[u].append(v)
        # For an undirected graph, uncomment the line below
        # self.adjacency_list[v].append(u)

def bfs(graph, start):
    visited = [False] * graph.vertices
    queue = deque([start])
    visited[start] = True

    while queue:
        current_vertex = queue.popleft()
        print(current_vertex, end=' ')

        for adjacent_vertex in graph.adjacency_list[current_vertex]:
            if not visited[adjacent_vertex]:
                queue.append(adjacent_vertex)
                visited[adjacent_vertex] = True

def dfs_util(graph, vertex, visited):
    visited[vertex] = True
    print(vertex, end=' ')

    for adjacent_vertex in graph.adjacency_list[vertex]:
        if not visited[adjacent_vertex]:
            dfs_util(graph, adjacent_vertex, visited)

def dfs(graph, start):
    visited = [False] * graph.vertices
    dfs_util(graph, start, visited)

# Example usage:
if __name__ == "__main__":
    graph = Graph(4)  # Example graph with 4 vertices
    graph.add_edge(0, 1)
    graph.add_edge(0, 2)
    graph.add_edge(1, 2)
    graph.add_edge(2, 0)
    graph.add_edge(2, 3)
    graph.add_edge(3, 3)

    print("BFS Traversal:")
    bfs(graph, 2)  # Start BFS from vertex 2

    print("\nDFS Traversal:")
    dfs(graph, 2)  # Start DFS from vertex 2

Output:
BFS Traversal:
2 0 3 1
DFS Traversal:
2 0 1 3
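
Both traversals visit each vertex once and scan each adjacency list once, so each runs in O(V + E) time; BFS's queue can hold a whole frontier (O(V) space), while DFS's space cost is the depth of the recursion stack. A minimal sketch for measuring wall-clock time (an addition, appended after the example usage above):

    import time

    # Time each traversal on the example graph (wall-clock, one run each)
    for name, traversal in [("BFS", bfs), ("DFS", dfs)]:
        start_time = time.perf_counter()
        traversal(graph, 2)
        print(f"\n{name} took {time.perf_counter() - start_time:.6f} seconds")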

Result:
Thus the Python program to analyse breadth first and depth first search was
successfully implemented and executed.

Ex.No: 4
IMPLEMENT AND COMPARE GREEDY AND A* ALGORITHMS
Date:

Aim:
To write a Python program to implement and compare the Greedy and A* algorithms.

Algorithm:
Step 1: Start the program.
Step 2: Import “heapq” for priority queue operations.
Step 3: Import “defaultdict from collections” for creating a graph data
structure.
Step 4: Initialize the Graph class with an empty dictionary to store edges.
Step 5: Define a method “add_edge” to add edges to the graph.
Step 6: Define a heuristic function to estimate the distance between two
nodes.
Step 7: Initialize a priority queue with a tuple containing the starting node and
cost 0.
Step 8: Initialize dictionaries to store the path and cost to reach each node.
Step 9: While the priority queue is not empty,
i) Pop the node with the lowest cost from the priority queue.
ii) If the popped node is the destination node, reconstruct and return the
path.
Step 10: Calculate the new cost to reach the neighbor via the current node.
Step 11: Update cost and priority if the new cost is lower.
Step 12: Push the neighbor and its priority into the priority queue.
Step 13: Update the path with the current node as the parent of the neighbor.
Step 14: If the destination is not reached, return None.
Step 15: Define a Graph object and add edges to it.
Step 16: Set the start and end nodes for the shortest path.
Step 17: Invoke the A* shortest path function with the graph, start, and end nodes.
Step 18: Print the shortest path if it exists, else print a message indicating no path
found.
Step 19: Stop the program.

Source Code:
import heapq
from collections import defaultdict

class Graph:
    def __init__(self):
        self.edges = defaultdict(list)

    def add_edge(self, u, v, weight):
        self.edges[u].append((v, weight))

def greedy_shortest_path(graph, start, end):
    visited = set()
    path = []
    current = start
    while current != end:
        path.append(current)
        if not graph.edges[current]:
            return None
        # Greedily follow the cheapest outgoing edge
        next_node, _ = min(graph.edges[current], key=lambda x: x[1])
        visited.add(current)
        current = next_node
        if current in visited:
            return None
    path.append(end)
    return path

def astar_shortest_path(graph, start, end):
    priority_queue = [(0, start)]
    came_from = {}
    cost_so_far = {start: 0}

    while priority_queue:
        current_cost, current_node = heapq.heappop(priority_queue)
        if current_node == end:
            path = []
            while current_node in came_from:
                path.append(current_node)
                current_node = came_from[current_node]
            path.append(start)
            return path[::-1]

        for neighbor, weight in graph.edges[current_node]:
            new_cost = cost_so_far[current_node] + weight
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                priority = new_cost + heuristic(end, neighbor)
                heapq.heappush(priority_queue, (priority, neighbor))
                came_from[neighbor] = current_node

    return None

def heuristic(a, b):
    # Heuristic function (absolute difference between node labels)
    return abs(a - b)

# Example usage:
if __name__ == "__main__":
    graph = Graph()
    graph.add_edge(0, 1, 4)
    graph.add_edge(0, 2, 2)
    graph.add_edge(1, 3, 5)
    graph.add_edge(2, 1, 1)
    graph.add_edge(2, 3, 8)
    graph.add_edge(2, 4, 10)
    graph.add_edge(3, 4, 2)

    start_node = 0
    end_node = 4

    print("Greedy Shortest Path:")
    greedy_path = greedy_shortest_path(graph, start_node, end_node)
    if greedy_path:
        print(greedy_path)
    else:
        print("No path found.")

    print("A* Shortest Path:")
    astar_path = astar_shortest_path(graph, start_node, end_node)
    if astar_path:
        print(astar_path)
    else:
        print("No path found.")

Output:
Greedy Shortest Path:
[0, 2, 1, 3, 4]
A* Shortest Path:
[0, 2, 1, 3, 4]
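
On this graph the two algorithms happen to agree. The difference shows when the locally cheapest edge leads away from the best route; a small sketch (the graph below is an illustrative assumption, reusing the classes and functions above):

    trap = Graph()
    trap.add_edge(0, 1, 1)   # cheap edge that leads into an expensive branch
    trap.add_edge(0, 2, 5)
    trap.add_edge(1, 3, 10)
    trap.add_edge(2, 3, 1)

    print(greedy_shortest_path(trap, 0, 3))  # [0, 1, 3], total cost 11
    print(astar_shortest_path(trap, 0, 3))   # [0, 2, 3], total cost 6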

Result:
Thus the Python program to implement and compare the Greedy and A* algorithms
was successfully implemented and executed.

Ex.No: 5
IMPLEMENT THE NON-PARAMETRIC LOCALLY WEIGHTED REGRESSION ALGORITHM IN ORDER TO FIT DATA POINTS. SELECT APPROPRIATE DATA SET FOR YOUR EXPERIMENT AND DRAW GRAPHS.
Date:

Aim:
To write a Python program to implement the non-parametric locally weighted
regression algorithm in order to fit data points, selecting an appropriate
data set for the experiment and drawing graphs.

Algorithm:
Step 1: Start the program.
Step 2: Generate X values from 0 to 10 with 100 points.
Step 3: Create y values by adding a bit of noise to sin(X).
Step 4: Define a function named loess_fit.
Step 5: Inside the function, use LOESS smoothing from statsmodels.
Step 6: Return the smoothed y values.
Step 7: Call the loess_fit function to get smoothed y values.
Step 8: Plot original data points (X, y) as blue dots.
Step 9: Plot smoothed y values against X as a red line.
Step 10: Add title, labels, and legend to the plot.
Step 11: Show the plot.
Step 12: Stop the program.

Source Code:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Generate synthetic dataset
np.random.seed(42)
X = np.linspace(0, 10, 100)
y = np.sin(X) + np.random.normal(0, 0.1, size=X.shape[0])

# LOESS fitting function
def loess_fit(X, y, frac=0.75):
    lowess = sm.nonparametric.lowess
    loess_smoothed = lowess(y, X, frac=frac)
    return loess_smoothed[:, 1]

# Fit the data using LOESS
smoothed_y = loess_fit(X, y)

# Plot the original data and the fitted curve
plt.figure(figsize=(10, 6))
plt.scatter(X, y, label='Original Data', color='blue')
plt.plot(X, smoothed_y, label='LOESS Fit', color='red')
plt.title('Non-parametric Locally Weighted Regression (LOESS)')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()
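
The frac argument controls how much of the data each local fit uses; smaller fractions follow the noise more closely, while larger ones smooth more heavily. A short sketch of comparing two settings (an addition, reusing loess_fit from above):

    plt.scatter(X, y, color='blue', s=10, label='Original Data')
    for frac, colour in [(0.2, 'green'), (0.75, 'red')]:
        plt.plot(X, loess_fit(X, y, frac=frac), color=colour, label=f'LOESS frac={frac}')
    plt.legend()
    plt.show()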

Output:
(A plot of the noisy sine-wave samples shown as blue dots, overlaid with the red LOESS-fitted curve.)
Result:
Thus the Python program to implement the non-parametric locally weighted
regression algorithm, fit data points on an appropriate data set, and draw
graphs has been successfully implemented and executed.

Ex.No: 6
DEMONSTRATE THE WORKING OF THE DECISION TREE BASED ALGORITHM
Date:

Aim:
To write a Python program to demonstrate the working of the decision tree
based algorithm.

Algorithm:
Step 1: Start the program.
Step 2: Load the Iris dataset using “load_iris()”.
Step 3: Split the dataset into training and testing sets using
“train_test_split()”.
Step 4: Create a decision tree classifier using “DecisionTreeClassifier()”.
Step 5: Train the classifier on the training data using “fit()”.
Step 6: Make predictions on the testing data using “predict()”.
Step 7: Calculate accuracy using “accuracy_score()”.
Step 8: Display classification report using “classification_report()”.
Step 9: Stop the program.

Source Code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a decision tree classifier
clf = DecisionTreeClassifier()

# Train the classifier on the training data
clf.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = clf.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Display classification report
print("\nClassification Report:")
print(classification_report(y_test, y_pred, target_names=iris.target_names))
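
To inspect what the classifier actually learned, one option (a sketch, not part of the original listing) is scikit-learn's export_text helper, which prints the fitted tree as nested if/else rules:

    from sklearn.tree import export_text

    # Print the learned split thresholds, one line per tree node
    print(export_text(clf, feature_names=iris.feature_names))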

Output:
(The accuracy score, followed by the per-class precision, recall, and f1-score for setosa, versicolor, and virginica.)
Result:
Thus the Python program to demonstrate the working of the decision tree based
algorithm has been successfully implemented and executed.

Ex.No: 7
BUILD AN ARTIFICIAL NEURAL NETWORK BY IMPLEMENTING THE BACK PROPAGATION ALGORITHM AND TEST THE SAME USING APPROPRIATE DATA SETS.
Date:

Aim:
To write a Python program to build an artificial neural network by implementing
the back propagation algorithm and test the same using appropriate data sets.

Algorithm:
Step 1: Start the program.
Step 2: Calculate the difference between the actual output (y) and the predicted
output (output_error = y - output).
Step 3: Multiply this error by the derivative of the sigmoid function applied to the
output layer (output_delta = output_error * sigmoid_derivative(output)).
Step 4: Multiply the output delta by the transpose of the weights between the
hidden and output layers (hidden_error = output_delta dot
transpose(weights_hidden_output)).
Step 5: Multiply this hidden error by the derivative of the sigmoid function
applied to the hidden layer (hidden_delta = hidden_error *
sigmoid_derivative(hidden_output)).
Step 6: Update the weights between the hidden and output layers by adding
the product of the transpose of hidden output and output delta, multiplied by the
learning rate (weights_hidden_output += hidden_output transpose dot output_delta
* learning_rate).
Step 7: Update the biases of the output layer by adding the sum of the output
delta multiplied by the learning rate (biases_output += sum of output_delta *
learning_rate).
Step 8: Update the weights between the input and hidden layers by adding the
product of the transpose of the input and hidden delta, multiplied by the learning
rate (weights_input_hidden += input transpose dot hidden_delta * learning_rate).
Step 9: Update the biases of the hidden layer by adding the sum of the hidden
delta multiplied by the learning rate (biases_hidden += sum of hidden_delta *
learning_rate).
Step 10: Stop the program.

Source Code:
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Initialize weights and biases
        self.weights_input_hidden = np.random.randn(input_size, hidden_size)
        self.biases_hidden = np.zeros(hidden_size)
        self.weights_hidden_output = np.random.randn(hidden_size, output_size)
        self.biases_output = np.zeros(output_size)

    def forward(self, X):
        # Forward pass
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.biases_hidden
        self.hidden_output = self.sigmoid(self.hidden_input)

        self.output_input = np.dot(self.hidden_output, self.weights_hidden_output) + self.biases_output
        self.output = self.sigmoid(self.output_input)
        return self.output

    def backward(self, X, y, output, learning_rate):
        # Backpropagation
        output_error = y - output
        output_delta = output_error * self.sigmoid_derivative(output)

        hidden_error = output_delta.dot(self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)

        # Update weights and biases
        # Note: np.sum without axis collapses the bias gradient to a single
        # scalar shared by all units; np.sum(..., axis=0) would keep a
        # per-unit gradient. The scalar form is kept here as in the original.
        self.weights_hidden_output += self.hidden_output.T.dot(output_delta) * learning_rate
        self.biases_output += np.sum(output_delta) * learning_rate

        self.weights_input_hidden += X.T.dot(hidden_delta) * learning_rate
        self.biases_hidden += np.sum(hidden_delta) * learning_rate

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already the sigmoid output, so the derivative is x * (1 - x)
        return x * (1 - x)

# Example usage:
if __name__ == "__main__":
    # Example dataset (XOR)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([[0], [1], [1], [0]])

    # Initialize neural network
    nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)

    # Train neural network
    epochs = 10000
    learning_rate = 0.1
    for epoch in range(epochs):
        output = nn.forward(X)
        nn.backward(X, y, output, learning_rate)
        if epoch % 1000 == 0:
            loss = np.mean(np.square(y - output))
            print(f"Epoch {epoch}, Loss: {loss}")

    # Test neural network
    predictions = nn.forward(X)
    print("\nPredictions:")
    print(predictions)

Output:
Epoch 0, Loss: 0.269443850781414
Epoch 1000, Loss: 0.23399845448884227
Epoch 2000, Loss: 0.1106478598053998
Epoch 3000, Loss: 0.02742732828467446
Epoch 4000, Loss: 0.011912735215721986
Epoch 5000, Loss: 0.007035550500536848
Epoch 6000, Loss: 0.004833431853875592
Epoch 7000, Loss: 0.003620424334542915
Epoch 8000, Loss: 0.0028656958644439935
Epoch 9000, Loss: 0.002356212305481433

Predictions:
[[0.03445776]
[0.95293517]
[0.95309324]
[0.04862167]]
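
Thresholding the raw sigmoid outputs at 0.5 recovers the XOR truth table; a one-line check (an addition to the listing above):

    print((predictions > 0.5).astype(int))  # expected: [[0], [1], [1], [0]]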

Result:
Thus the Python program to build an artificial neural network by implementing
the back propagation algorithm and test the same using appropriate data sets has
been successfully implemented and executed.
Ex.No: 8
IMPLEMENT THE NAIVE BAYESIAN CLASSIFIER
Date:

Aim:
To write a Python program to implement the Naive Bayesian classifier.

Algorithm:
Step 1: Start the program.
Step 2: Import numpy as np and Import datasets, train_test_split, and
StandardScaler from sklearn.
Step 3: Load Iris dataset as X (features) and y (labels).
Step 4: Split data into training and testing sets (X_train, X_test, y_train, y_test)
Step 5: Standardize features using StandardScaler.
Step 6: Calculate mean and variance for each class.
Step 7: Calculate prior probabilities for each class.
Step 8: Create function predict(X) to make predictions.
Step 9: Use predict(X_train) to get predictions for training data.
Step 10: Get predictions for testing data (y_pred_test).
Step 11: Calculate accuracy and print it.
Step 12: Stop the program.

Source Code:
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Preprocess the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Calculate the mean and variance of each feature for each class
class_means = np.zeros((3, X_train.shape[1]))
class_variances = np.zeros((3, X_train.shape[1]))
for i in range(3):
    class_data = X_train[y_train == i]
    class_means[i] = np.mean(class_data, axis=0)
    class_variances[i] = np.var(class_data, axis=0)

# Define the prior probabilities for each class
class_priors = np.bincount(y_train) / len(y_train)

# Define the Naive Bayesian classifier
def predict(X):
    # Calculate the likelihood of each class given the input data
    # (the Gaussian normalization constant is omitted, a simplification
    # that can change predictions when class variances differ)
    likelihoods = np.zeros((3, X.shape[0]))
    for i in range(3):
        likelihoods[i] = np.exp(-0.5 * np.sum((X - class_means[i])**2 / class_variances[i], axis=1))

    # Multiply the likelihoods by the prior probabilities
    likelihoods *= class_priors[:, np.newaxis]

    # Return the class with the highest likelihood
    return np.argmax(likelihoods, axis=0)

# Run the Naive Bayesian classifier on the training data
y_pred = predict(X_train)

# Evaluate the Naive Bayesian classifier on the testing data
y_pred_test = predict(X_test)
accuracy = np.mean(y_pred_test == y_test)
print("Accuracy:", accuracy)

Output:
Accuracy: 0.9555555555555556
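
As a cross-check (a sketch, not part of the original exercise), scikit-learn's GaussianNB implements the same model with the full Gaussian density, including the normalization term omitted above:

    from sklearn.naive_bayes import GaussianNB

    # Fit scikit-learn's Gaussian Naive Bayes on the same standardized split
    gnb = GaussianNB()
    gnb.fit(X_train, y_train)
    print("sklearn GaussianNB accuracy:", gnb.score(X_test, y_test))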

Result:
Thus the Python program to implement the Naive Bayesian classifier has
been successfully implemented and executed.
