Artificial Intelligence and Machine Learning Fundamentals
Ex.No: 1
IMPLEMENT BREADTH FIRST SEARCH
Date:
Aim:
To write a Python program to implement Breadth First Search.
Algorithm:
Step 1: Start the Program.
Step 2: Create a set "visited" to track visited nodes and a queue (a deque) initialized with the start node and an empty path.
Step 3: While the queue is not empty, Dequeue the first element
(current_node, path) from the queue.
Step 4: If current_node is not visited, Mark current_node as visited and
append it to the path.
Step 5: If current_node is the goal_node, print the path and return.
Step 6: For each neighbor of current_node, Enqueue the neighbor along
with its path.
Step 7: If no path is found, print "No path found".
Step 8: Stop the Program
Source Code:
from collections import deque

def bfs(graph, start_node, goal_node):
    visited = set()
    queue = deque([(start_node, [])])
    while queue:
        current_node, path = queue.popleft()
        if current_node not in visited:
            visited.add(current_node)
            path = path + [current_node]
            if current_node == goal_node:
                print("Path found:", path)
                return path
            for neighbor in graph[current_node]:
                queue.append((neighbor, path))
    print("No path found")
    return None
# Example usage:
graph = {
'A': ['B', 'C'],
'B': ['A', 'D', 'E'],
'C': ['A', 'F'],
'D': ['B'],
'E': ['B', 'F'],
'F': ['C', 'E'],
}
start_node = 'A'
goal_node = 'F'
bfs(graph, start_node, goal_node)
Output:
Path found: ['A', 'C', 'F']
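Because BFS expands the graph level by level, the first path it reports always uses the fewest edges. A quick usage sketch with the same graph and a different goal (the expected output is traced by hand):

bfs(graph, 'A', 'D')  # prints: Path found: ['A', 'B', 'D']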
Result:
Thus the Python program to implement Breadth First Search was implemented and executed successfully.
Ex.No: 2
IMPLEMENT DEPTH FIRST SEARCH
Date:
Aim:
To write a Python program to implement Depth First Search.
Algorithm:
Step 1: Start the Program.
Step 2: Start dfs function with inputs: graph, start_node, and goal_node.
Step 3: Initialize visited set and stack with start_node and empty path.
Step 4: While stack is not empty, Pop current_node and path from stack.
Step 5: If current_node not in visited, Add current_node to visited.
Step 6: Update path.
Step 7: If current_node equals goal_node, print "Path found" with the path
and return.
Step 8: For each neighbor of current_node, Push neighbor and updated path
onto stack.
Step 9: If no path found, print "No path found".
Step 10: Stop the Program.
Source Code:
def dfs(graph, start_node, goal_node):
    visited = set()
    stack = [(start_node, [])]
    while stack:
        current_node, path = stack.pop()
        if current_node not in visited:
            visited.add(current_node)
            path = path + [current_node]
            if current_node == goal_node:
                return path
            for neighbor in graph[current_node]:
                stack.append((neighbor, path))
    return None
# Example usage:
graph = {
'A': ['B', 'C'],
'B': ['A', 'D', 'E'],
'C': ['A', 'F'],
'D': ['B'],
'E': ['B', 'F'],
'F': ['C', 'E'],
}
start_node = 'A'
goal_node = 'F'
path = dfs(graph, start_node, goal_node)
if path:
    print("Path found:", path)
else:
    print("No path found")
Output:
Path found: ['A', 'C', 'F']
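Unlike BFS, DFS gives no shortest-path guarantee: it returns whichever path its stack order happens to reach first. A short sketch on the same graph (names as above; results traced by hand):

print(dfs(graph, 'A', 'E'))  # ['A', 'C', 'F', 'E']
# BFS on the same query would return ['A', 'B', 'E'], one edge shorter.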
Result:
Thus the Python program to implement Depth First Search was implemented and executed successfully.
Ex.No: 3 ANALYSIS OF BREADTH FIRST AND DEPTH
FIRST SEARCH IN TERMS OF TIME AND SPACE
Date:
Aim:
To write a Python program to analyse Breadth First and Depth First Search in terms of time and space.
Algorithm (BFS):
Step 1: Create a queue for traversal.
Step 2: Initialize a visited array to keep track of visited vertices.
Step 3: Choose the starting vertex for traversal and enqueue it into the
queue.
Step 4: While the queue is not empty,
i) Dequeue a vertex from the queue,
ii) Print the dequeued vertex.
iii) Mark the dequeued vertex as visited.
iv) Enqueue all unvisited neighbors of the dequeued vertex.
Algorithm (DFS):
Step 1: Initialize a visited array to keep track of visited vertices.
Step 2: Choose the starting vertex for traversal and call a recursive utility
function for DFS traversal.
Step 3: Mark the current vertex as visited.
Step 4: Print the current vertex.
Step 5: Recursively call the function for each unvisited neighbor of the
current vertex.
Source Code:
from collections import deque

class Graph:
    def __init__(self, vertices):
        self.vertices = vertices
        self.adjacency_list = {vertex: [] for vertex in range(vertices)}

    def add_edge(self, u, v):
        self.adjacency_list[u].append(v)

def bfs(graph, start_vertex):
    # Iterative BFS using a queue and a visited array
    visited = [False] * graph.vertices
    queue = deque([start_vertex])
    visited[start_vertex] = True
    while queue:
        current_vertex = queue.popleft()
        print(current_vertex, end=' ')
        for neighbor in graph.adjacency_list[current_vertex]:
            if not visited[neighbor]:
                visited[neighbor] = True
                queue.append(neighbor)

def dfs_util(graph, vertex, visited):
    # Recursive DFS helper
    visited[vertex] = True
    print(vertex, end=' ')
    for neighbor in graph.adjacency_list[vertex]:
        if not visited[neighbor]:
            dfs_util(graph, neighbor, visited)

def dfs(graph, start_vertex):
    visited = [False] * graph.vertices
    dfs_util(graph, start_vertex, visited)

# Example usage:
if __name__ == "__main__":
    graph = Graph(4)  # Example graph with 4 vertices
    graph.add_edge(0, 1)
    graph.add_edge(0, 2)
    graph.add_edge(1, 2)
    graph.add_edge(2, 0)
    graph.add_edge(2, 3)
    graph.add_edge(3, 3)
    print("BFS Traversal:")
    bfs(graph, 2)  # Start BFS from vertex 2
    print("\nDFS Traversal:")
    dfs(graph, 2)  # Start DFS from vertex 2
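The listing above prints only the traversal orders; the aim, however, asks for an analysis in terms of time and space. A minimal instrumentation sketch (assuming the bfs, dfs, and graph names from the code above; time and tracemalloc are standard-library modules):

import time
import tracemalloc

for name, traversal in (("BFS", bfs), ("DFS", dfs)):
    tracemalloc.start()
    start_time = time.perf_counter()
    traversal(graph, 2)  # re-runs the traversal, printing its order
    elapsed = time.perf_counter() - start_time
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"\n{name}: {elapsed:.6f} s, peak memory {peak} bytes")

Both traversals run in O(V + E) time; BFS's working space is dominated by the width of the queue frontier, while DFS's is dominated by the depth of the recursion.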
Output:
BFS Traversal:
2 0 3 1
DFS Traversal:
2 0 1 3
Result:
Thus the Python program to analyse Breadth First and Depth First Search in terms of time and space was implemented and executed successfully.
Ex.No: 4 IMPLEMENT AND COMPARE GREEDY AND
A* ALGORITHMS.
Date:
Aim:
To write a Python program to implement and compare the Greedy and A* algorithms.
Algorithm:
Step 1: Start the program.
Step 2: Import “heapq” for priority queue operations.
Step 3: Import “defaultdict from collections” for creating a graph data
structure.
Step 4: Initialize the Graph class with an empty dictionary to store edges.
Step 5: Define a method “add_edge” to add edges to the graph.
Step 6: Define a heuristic function to estimate the distance between two
nodes.
Step 7: Initialize a priority queue with a tuple containing the starting node and
cost 0.
Step 8: Initialize dictionaries to store the path and cost to reach each node.
Step 9: While the priority queue is not empty,
i) Pop the node with the lowest cost from the priority queue.
ii) If the popped node is the destination node, reconstruct and return the
path.
Step 10: Calculate the new cost to reach the neighbor via the current node.
Step 11: Update cost and priority if the new cost is lower.
Step 12: Push the neighbor and its priority into the priority queue.
Step 13: Update the path with the current node as the parent of the neighbor.
Step 14: If the destination is not reached, return None.
Step 15: Define a Graph object and add edges to it.
Step 16: Set the start and end nodes for the shortest path.
Step 17: Invoke the A* shortest path function with the graph, start, and end nodes.
Step 18: Print the shortest path if it exists, else print a message indicating no path
found.
Step 19: Stop the program.
Source Code:
import heapq
from collections import defaultdict

class Graph:
    def __init__(self):
        self.edges = defaultdict(list)

    def add_edge(self, u, v, weight):
        self.edges[u].append((v, weight))

def heuristic(end, node):
    # Toy heuristic for integer-labelled nodes (an assumption of this
    # reconstruction): the absolute difference of the labels.
    return abs(end - node)

def shortest_path(graph, start, end, use_heuristic):
    # Greedy when use_heuristic is False (priority = accumulated cost alone,
    # Dijkstra's greedy choice, which matches the output shown below);
    # A* when True (priority = accumulated cost + heuristic).
    priority_queue = [(0, start)]
    came_from = {}
    cost_so_far = {start: 0}
    while priority_queue:
        current_cost, current_node = heapq.heappop(priority_queue)
        if current_node == end:
            path = []
            while current_node in came_from:
                path.append(current_node)
                current_node = came_from[current_node]
            path.append(start)
            return path[::-1]
        for neighbor, weight in graph.edges[current_node]:
            new_cost = cost_so_far[current_node] + weight
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                priority = new_cost + (heuristic(end, neighbor) if use_heuristic else 0)
                heapq.heappush(priority_queue, (priority, neighbor))
                came_from[neighbor] = current_node
    return None

# Example usage:
if __name__ == "__main__":
    graph = Graph()
    graph.add_edge(0, 1, 4)
    graph.add_edge(0, 2, 2)
    graph.add_edge(1, 3, 5)
    graph.add_edge(2, 1, 1)
    graph.add_edge(2, 3, 8)
    graph.add_edge(2, 4, 10)
    graph.add_edge(3, 4, 2)
    start_node = 0
    end_node = 4

    print("Greedy Shortest Path:")
    greedy_path = shortest_path(graph, start_node, end_node, use_heuristic=False)
    if greedy_path:
        print(greedy_path)
    else:
        print("No path found.")

    print("A* Shortest Path:")
    astar_path = shortest_path(graph, start_node, end_node, use_heuristic=True)
    if astar_path:
        print(astar_path)
    else:
        print("No path found.")
Output:
Greedy Shortest Path:
[0, 2, 1, 3, 4]
A* Shortest Path:
[0, 2, 1, 3, 4]
Result:
Thus the Python program to implement and compare the Greedy and A* algorithms was implemented and executed successfully.
Ex.No: 5
IMPLEMENT THE NON-PARAMETRIC LOCALLY WEIGHTED REGRESSION ALGORITHM IN ORDER TO FIT DATA POINTS. SELECT APPROPRIATE DATA SET FOR YOUR EXPERIMENT AND DRAW GRAPHS.
Date:
Aim:
To write a Python program to implement the non-parametric locally weighted regression algorithm in order to fit data points, selecting an appropriate data set for the experiment and drawing graphs.
Algorithm:
Step 1: Start the program.
Step 2: Generate X values from 0 to 10 with 100 points.
Step 3: Create y values by adding a bit of noise to sin(X).
Step 4: Define a function named loess_fit.
Step 5: Inside the function, use LOESS smoothing from statsmodels.
Step 6: Return the smoothed y values.
Step 7: Call the loess_fit function to get smoothed y values.
Step 8: Plot original data points (X, y) as blue dots.
Step 9: Plot smoothed y values against X as a red line.
Step 10: Add title, labels, and legend to the plot.
Step 11: Show the plot.
Step 12: Stop the program.
Source Code:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Generate X values from 0 to 10 and noisy sin(X) targets
X = np.linspace(0, 10, 100)
y = np.sin(X) + np.random.normal(scale=0.2, size=100)

def loess_fit(X, y, frac=0.3):
    # LOESS smoothing via statsmodels; returns the smoothed y values
    return sm.nonparametric.lowess(y, X, frac=frac)[:, 1]

# Fit the data using LOESS
smoothed_y = loess_fit(X, y)

# Plot original data points and the smoothed curve
plt.scatter(X, y, color='blue', label='Data points')
plt.plot(X, smoothed_y, color='red', label='LOESS fit')
plt.title('Locally Weighted Regression (LOESS)')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()
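The degree of smoothing is controlled by the frac argument (the fraction of the data used in each local regression; the 0.3 default above is an assumption of this sketch, not a prescribed value). Reusing the loess_fit helper, a quick comparison:

for frac in (0.1, 0.3, 0.6):
    plt.plot(X, loess_fit(X, y, frac=frac), label=f'frac={frac}')
plt.scatter(X, y, color='blue', s=10)
plt.legend()
plt.show()

Smaller fractions follow the noise more closely; larger fractions give a smoother, flatter curve.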
Output:
A plot of the noisy data points (blue dots) with the LOESS-smoothed curve (red line).
Result:
Thus the Python program to implement the non-parametric locally weighted regression algorithm, fit data points on an appropriate data set, and draw graphs has been implemented and executed successfully.
Ex.No: 6 DEMONSTRATE THE WORKING OF THE DECISION
TREE BASED ALGORITHM.
Date:
Aim:
To write a Python program to demonstrate the working of the decision tree based algorithm.
Algorithm:
Step 1: Start the program.
Step 2: Load the Iris dataset using “load_iris()”.
Step 3: Split the dataset into training and testing sets using
“train_test_split()”.
Step 4: Create a decision tree classifier using “DecisionTreeClassifier()”.
Step 5: Train the classifier on the training data using “fit()”.
Step 6: Make predictions on the testing data using “predict()”.
Step 7: Calculate accuracy using “accuracy_score()”.
Step 8: Display classification report using “classification_report()”.
Step 9: Stop the program.
Source Code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Load the Iris dataset
X, y = load_iris(return_X_y=True)

# Split the dataset into training and testing sets (split ratio assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create a decision tree classifier
clf = DecisionTreeClassifier()

# Train the classifier on the training data
clf.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = clf.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Display the classification report
print(classification_report(y_test, y_pred))
Output:
Result:
Thus the Python program to demonstrate the working of the decision tree based algorithm has been implemented and executed successfully.
Ex.No: 7
BUILD AN ARTIFICIAL NEURAL NETWORK BY IMPLEMENTING THE BACK PROPAGATION ALGORITHM AND TEST THE SAME USING APPROPRIATE DATA SETS.
Date:
Aim:
To write a Python program to build an artificial neural network by implementing the back propagation algorithm and test it using appropriate data sets.
Algorithm:
Step 1: Start the program.
Step 2: After a forward pass has produced the network output, calculate the difference between the actual output (y) and the predicted output (output_error = y - output).
Step 3: Multiply this error by the derivative of the sigmoid function applied to the
output layer (output_delta = output_error * sigmoid_derivative(output)).
Step 4: Multiply the output delta by the transpose of the weights between the
hidden and output layers (hidden_error = output_delta dot
transpose(weights_hidden_output)).
Step 5: Multiply this hidden error by the derivative of the sigmoid function
applied to the hidden layer (hidden_delta = hidden_error *
sigmoid_derivative(hidden_output)).
Step 6: Update the weights between the hidden and output layers by adding
the product of the transpose of hidden output and output delta, multiplied by the
learning rate (weights_hidden_output += hidden_output transpose dot output_delta
* learning_rate).
Step 7: Update the biases of the output layer by adding the sum of the output
delta multiplied by the learning rate (biases_output += sum of output_delta *
learning_rate).
Step 8: Update the weights between the input and hidden layers by adding the
product of the transpose of the input and hidden delta, multiplied by the learning
rate (weights_input_hidden += input transpose dot hidden_delta * learning_rate).
Step 9: Update the biases of the hidden layer by adding the sum of the hidden
delta multiplied by the learning rate (biases_hidden += sum of hidden_delta *
learning_rate).
Step 10: Stop the program.
Source Code:
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Randomly initialise weights; biases start at zero
        self.weights_input_hidden = np.random.randn(input_size, hidden_size)
        self.biases_hidden = np.zeros((1, hidden_size))
        self.weights_hidden_output = np.random.randn(hidden_size, output_size)
        self.biases_output = np.zeros((1, output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already a sigmoid output, so the derivative is x * (1 - x)
        return x * (1 - x)

    def forward(self, X):
        self.hidden_output = self.sigmoid(X.dot(self.weights_input_hidden) + self.biases_hidden)
        return self.sigmoid(self.hidden_output.dot(self.weights_hidden_output) + self.biases_output)

    def train(self, X, y, epochs=10000, learning_rate=0.1):
        for epoch in range(epochs):
            output = self.forward(X)
            # Backpropagation
            output_error = y - output
            output_delta = output_error * self.sigmoid_derivative(output)
            hidden_error = output_delta.dot(self.weights_hidden_output.T)
            hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)
            # Gradient-descent weight and bias updates
            self.weights_hidden_output += self.hidden_output.T.dot(output_delta) * learning_rate
            self.biases_output += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
            self.weights_input_hidden += X.T.dot(hidden_delta) * learning_rate
            self.biases_hidden += np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate
            if epoch % 1000 == 0:
                print(f"Epoch {epoch}, Loss: {np.mean(np.square(output_error))}")

# Example usage:
if __name__ == "__main__":
    # Example dataset (XOR)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([[0], [1], [1], [0]])
    nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
    nn.train(X, y)
    print("Predictions:")
    print(nn.forward(X))
Output:
Epoch 0, Loss: 0.269443850781414
Epoch 1000, Loss: 0.23399845448884227
Epoch 2000, Loss: 0.1106478598053998
Epoch 3000, Loss: 0.02742732828467446
Epoch 4000, Loss: 0.011912735215721986
Epoch 5000, Loss: 0.007035550500536848
Epoch 6000, Loss: 0.004833431853875592
Epoch 7000, Loss: 0.003620424334542915
Epoch 8000, Loss: 0.0028656958644439935
Epoch 9000, Loss: 0.002356212305481433
Predictions:
[[0.03445776]
[0.95293517]
[0.95309324]
[0.04862167]]
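The raw predictions are sigmoid activations that approach, but never exactly reach, 0 and 1. Thresholding at 0.5 (a usage sketch; nn and X are the names from the example above) recovers the XOR labels:

print((nn.forward(X) > 0.5).astype(int))  # [[0], [1], [1], [0]]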
Result:
Thus the Python program to build an artificial neural network by implementing the back propagation algorithm and test it using appropriate data sets has been implemented and executed successfully.
Ex.No: 8
IMPLEMENT THE NAIVE BAYESIAN CLASSIFIER.
Date:
Aim:
To write a Python program to implement the Naive Bayesian classifier.
Algorithm:
Step 1: Start the program.
Step 2: Import numpy as np and Import datasets, train_test_split, and
StandardScaler from sklearn.
Step 3: Load Iris dataset as X (features) and y (labels).
Step 4: Split data into training and testing sets (X_train, X_test, y_train, y_test)
Step 5: Standardize features using StandardScaler.
Step 6: Calculate mean and variance for each class.
Step 7: Calculate prior probabilities for each class.
Step 8: Create function predict(X) to make predictions.
Step 9: Use predict(X_train) to get predictions for training data.
Step 10: Get predictions for testing data (y_pred_test).
Step 11: Calculate accuracy and print it.
Step 12: Stop the program.
Source Code:
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset
X, y = datasets.load_iris(return_X_y=True)

# Split data into training and testing sets (split parameters assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Calculate the mean and variance of each feature for each class
class_means = np.zeros((3, X_train.shape[1]))
class_variances = np.zeros((3, X_train.shape[1]))
for i in range(3):
    class_data = X_train[y_train == i]
    class_means[i] = np.mean(class_data, axis=0)
    class_variances[i] = np.var(class_data, axis=0)

# Calculate the prior probability of each class
class_priors = np.array([np.mean(y_train == i) for i in range(3)])

def predict(X):
    # Gaussian log-likelihood of each class plus the log prior
    log_probs = np.zeros((X.shape[0], 3))
    for i in range(3):
        log_likelihood = -0.5 * np.sum(
            np.log(2 * np.pi * class_variances[i])
            + (X - class_means[i]) ** 2 / class_variances[i], axis=1)
        log_probs[:, i] = log_likelihood + np.log(class_priors[i])
    return np.argmax(log_probs, axis=1)

# Get predictions for the testing data and report accuracy
y_pred_test = predict(X_test)
accuracy = np.mean(y_pred_test == y_test)
print("Accuracy:", accuracy)
Output:
Accuracy: 0.9555555555555556
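As an optional cross-check (an addition, not part of the original listing), scikit-learn's built-in GaussianNB can be fitted on the same standardized split; its accuracy should closely match the manual implementation:

from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB().fit(X_train, y_train)
print("sklearn GaussianNB accuracy:", gnb.score(X_test, y_test))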
Result:
Thus the Python program to implement the Naive Bayesian classifier has been implemented and executed successfully.