AIML Manual

INDEX

S.NO    DATE    TITLE    PAGE NO.    MARKS    SIGN
Ex. No. 1a Implementation of uninformed search algorithm – BFS

Date:

AIM

To implement an uninformed search algorithm – BFS in python.

ALGORITHM

Step 1: Start.

Step 2: Put any one of the graph’s vertices at the back of the queue.

Step 3: Take the front item of the queue and add it to the visited list.

Step 4: Create a list of that vertex's adjacent nodes. Add those which are not within the
visited list to the rear of the queue.

Step 5: Keep repeating steps 3 and 4 until the queue is empty.

Step 6: Stop.

PROGRAM

graph = eval(input("Enter your graph: "))

node = input("Enter your starting node: ")

visited = []

queue = []

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("BFS:")
bfs(visited, graph, node)
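
For a quick check without typing the graph interactively, the same bfs function can be driven with a
hard-coded adjacency-list dictionary. The sample graph below is only an illustrative assumption and is
not part of the original exercise:

# Hypothetical sample graph for testing bfs(); reuses the global queue defined above
sample_graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}
bfs([], sample_graph, 'A')   # expected traversal order: A B C D E F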


OUTPUT

RESULT

Thus, an uninformed search algorithm – BFS has been implemented and executed successfully.
Ex. No.1b Implementation of uninformed search algorithm – DFS

Date:

AIM

To implement an uninformed search algorithm – DFS in python.

ALGORITHM

Step 1: Start.

Step 2: Push any one of the graph's vertices onto the top of the stack.

Step 3: Pop the top item of the stack and add it to the visited list.

Step 4: Create a list of that vertex's adjacent nodes. Push the ones that are not
in the visited list onto the top of the stack.

Step 5: Keep repeating steps 3 and 4 until the stack is empty.

Step 6: Stop.

PROGRAM

graph = eval(input("Enter your graph: "))

node = input("Enter your starting node: ")

visited = set()

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("DFS:")
dfs(visited, graph, node)
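
The dfs function can be checked in the same way with a hard-coded graph. The dictionary below is an
illustrative assumption, not part of the original exercise; note that the depth-first order differs from BFS:

# Hypothetical sample graph for testing dfs()
sample_graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': ['F'], 'F': []}
dfs(set(), sample_graph, 'A')   # expected traversal order: A B D E F C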


OUTPUT

RESULT

Thus, an uninformed search algorithm – DFS has been implemented and executed successfully.
Ex.no:2a IMPLEMENTATION OF A* ALGORITHM

Date:

AIM

To implement A* algorithm using python.

ALGORITHM

Step 1: Start.

Step 2: Initialize the open set with the start node, the closed set as empty, the g-value of the
start node as 0, and the start node's parent as itself.

Step 3: While the open set is not empty:

1. Choose the node with the lowest f-value, where f = g + h (see the worked example after this list).

2. If the chosen node is the stop node, the path has been found; reconstruct and return it.

3. If the chosen node has no neighbours, print "No path".

4. For each neighbour, calculate its g-value and update it if the new value is lower.

5. Set the parent of the neighbour to the chosen node and add the neighbour to the open
set if it is not already in either the open set or the closed set.

6. Remove the chosen node from the open set and add it to the closed set.

Step 4: Return "No path" if the loop in Step 3 finishes without finding a path.

Step 5: Stop.
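
Worked example for the f-value in Step 3 (using the sample graph and heuristic table from the program
below): reaching B from A costs g(B) = 2 and the heuristic gives h(B) = 6, so f(B) = 2 + 6 = 8; reaching E
costs g(E) = 3 with h(E) = 7, so f(E) = 10. B therefore has the lower f-value and is expanded first.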
PROGRAM

def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}
    parents = {}
    g[start_node] = 0
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v

        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)

        if n == None:
            print('Path does not exist!')
            return None

        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path

        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    H_dist = {'A': 11, 'B': 6, 'C': 99, 'D': 1, 'E': 7, 'G': 0}
    return H_dist[n]

Graph_nodes = {'A': [('B', 2), ('E', 3)],
               'B': [('C', 1), ('G', 9)],
               'C': None,
               'E': [('D', 6)],
               'D': [('G', 1)]}

aStarAlgo('A', 'G')
OUTPUT

RESULT

Thus, the A* Algorithm has been implemented and executed successfully.


Ex.no:2b IMPLEMENTATION OF MEMORY BOUNDED A* ALGORITHM

Date:
AIM

To implement the memory bounded A* algorithm using python.

ALGORITHM

Step 1: Start.

Step 2: Import heapq module.

Step 3: Define the A* function with the required arguments.

Step 4: Create a heap initialised with a tuple of the start node and a cost of 0.

Step 5: Create a set of visited nodes and a mapping of each node to its previous node.

Step 6: If max_nodes is not specified, set it to a large number.

Step 7: While the heap is not empty, and the number of nodes expanded is less than
the max_nodes.

1. Pop the node with the lowest cost from the heap.

2. If the node is the goal, break the loop.

3. If the node has been visited, skip to the next iteration.

4. Add the node to the visited set.

5. For each neighbour of the node, calculate the new cost and estimate the cost to the
goal using the heuristic function.

6. If the cost of reaching the neighbor through the current node is lower than any
previous cost, add the neighbor to the heap and update the mapping to the
current node.

Step 8: Return the mapping of nodes to their previous node and the cost of
reaching each node.
PROGRAM

import heapq

def astar(start, goal, graph, heuristic, max_nodes=None):
    heap = [(0, start)]
    visited = set()
    came_from = {}
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0
    if max_nodes is None:
        max_nodes = float('inf')
    count = 0
    while heap and count < max_nodes:
        count += 1
        current = heapq.heappop(heap)[1]
        if current == goal:
            break
        if current in visited:
            continue
        visited.add(current)
        for next_node, weight in graph[current]:
            new_cost = cost_so_far[current] + weight
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                priority = new_cost + heuristic(goal, next_node)
                heapq.heappush(heap, (priority, next_node))
                came_from[next_node] = current
    return came_from, cost_so_far

start = 'A'
goal = 'G'

graph = {'A': [('B', 1), ('C', 3)],
         'B': [('A', 1), ('D', 2), ('E', 4)],
         'C': [('A', 3), ('F', 5)],
         'D': [('B', 2)],
         'E': [('B', 4), ('G', 2)],
         'F': [('C', 5)],
         'G': [('E', 2)]}

def heuristic(goal, node):
    return abs(ord(goal) - ord(node))

max_nodes = 10
came_from, cost_so_far = astar(start, goal, graph, heuristic, max_nodes)

print("Came from: ", came_from)
print("Cost so far: ", cost_so_far)


OUTPUT

RESULT

Thus, the memory bounded A* algorithm is implemented and executed successfully.


Ex.No:3 Implementation of Naïve Bayes models
Date:

AIM

To implement the Naïve Bayes model.

ALGORITHM

Step 1: Start

Step 2: Load initial libraries

Step 3: Import the dataset

Step 4: Print the information about the dataset

Step 5: Drop the first and last columns from the data frame

Step 6: Visualize the malignant and benign tumours

Step 7: Pre-process and divide the dataset into training and testing sets

Step 8: Use Gaussian Naïve Bayes to fit the model

Step 9: Print the accuracy score of the model

Step 10: Stop

PROGRAM

import numpy as np

import pandas as pd

import matplotlib.pyplot as plt

dataset=pd.read_csv("data.csv")

print(dataset.info())

dataset = dataset.drop(["id"], axis = 1)

dataset = dataset.drop(["Unnamed: 32"], axis = 1)


print(dataset.info())

M = dataset[dataset.diagnosis == "M"]

B = dataset[dataset.diagnosis == "B"]

plt.title("Malignant vs Benign Tumor")

plt.xlabel("Radius Mean")

plt.ylabel("Texture Mean")

plt.scatter(M.radius_mean, M.texture_mean, color = "red", label = "Malignant", alpha = 0.3)

plt.scatter(B.radius_mean, B.texture_mean, color = "lime", label = "Benign", alpha = 0.3)

plt.legend()

plt.show()

dataset.diagnosis=[1 if i=="M" else 0 for i in dataset.diagnosis]

x = dataset.drop(["diagnosis"], axis = 1)

y = dataset.diagnosis.values

import sklearn

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 42)

from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()

print(nb.fit(x_train, y_train))

print("Naive Bayes score: ",nb.score(x_test, y_test))


OUTPUT
RESULT

Thus, the Naïve Bayes model has been implemented and executed successfully.
Expt no: 4 IMPLEMENTATION OF BAYESIAN NETWORKS
Date:

AIM

To implement Bayesian Networks for Heart Disease prediction.

ALGORITHM

Step 1: Start

Step 2: Import the required libraries

Step 3: Read heart.csv file into pandas dataframe

Step 4: Print the first few instances of the dataset and attribute datatypes

Step 5: Define a Bayesian network model

Step 6: Learn the Conditional Probability Distribution of the network

Step 7: Use queries to compute the probability of heartdisease given restecg = 1 and given
cp = 2, and print the values

Step 8: Stop

PROGRAM

import numpy as np

import pandas as pd

import csv

from pgmpy.estimators import MaximumLikelihoodEstimator

from pgmpy.models import BayesianModel

from pgmpy.inference import VariableElimination

heartDisease = pd.read_csv('heart.csv')

heartDisease = heartDisease.replace('?',np.nan)
print('Sample instances from the dataset are given below')

print(heartDisease.head())

print('\n Attributes and datatypes')

print(heartDisease.dtypes)

model = BayesianModel([('age', 'heartdisease'), ('gender', 'heartdisease'), ('exang', 'heartdisease'),
                       ('cp', 'heartdisease'), ('heartdisease', 'restecg'), ('heartdisease', 'chol')])

print('\nLearning CPD using Maximum likelihood estimators')

model.fit(heartDisease,estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')

HeartDiseasetest_infer = VariableElimination(model)

print('\n 1. Probability of HeartDisease given evidence= restecg')

q1=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'restecg':1})

print(q1)

print('\n 2. Probability of HeartDisease given evidence= cp ')

q2=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'cp':2})

print(q2)
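
To inspect one of the conditional probability tables learned by Maximum Likelihood estimation, the CPD
of a single node can be printed with pgmpy's get_cpds method. This optional check is not part of the
original listing:

# Optional: print the learned CPD of the heartdisease node
print(model.get_cpds('heartdisease'))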
OUTPUT

RESULT

Thus, the Bayesian Network has been implemented and executed successfully.
Ex.No: 5 REGRESSION MODELS
Date:

AIM
To build a linear regression model.

ALGORITHM
Step 1: Start
Step 2: Import all the required modules and dataset
Step 3: Drop the columns that are not required
Step 4: Normalize the new dataset
Step 5: Split the dataset into training and testing sets
Step 6: Fit a linear regression model on the training set
Step 7: Print the mean_absolute_error
Step 8: Stop

PROGRAM
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
dataset = pd.read_csv("melb_data.csv")
dataset.head()

dataset=dataset.drop('Unnamed: 0', axis=1)


dataset.columns

dataset = dataset[["Rooms", "Price", "Bedroom2", "Bathroom", "Landsize", "BuildingArea", "YearBuilt"]]
dataset.isna().sum()
dataset = dataset.dropna()
dataset['HouseAge'] = 2022 - dataset["YearBuilt"].astype(int)
dataset = dataset.drop("YearBuilt", axis=1)

dataset.head()

def normalize(df):
    result = df.copy()
    for feature_name in df.columns:
        max_value = df[feature_name].max()
        min_value = df[feature_name].min()
        result[feature_name] = (df[feature_name] - min_value) / (max_value - min_value)
    return result

# Normalize the dataset (Step 4) before splitting
df = normalize(dataset)

train, test = train_test_split(df, test_size=0.3)


train_y = train[["Price"]]
train_x = train.drop(["Price"], axis=1)
test_y = test[["Price"]]
test_x = test.drop(["Price"], axis=1)

model = LinearRegression()
model.fit(train_x, train_y)

predictions = model.predict(test_x)
predictions

mean_absolute_error(predictions, test_y)
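
As a complementary metric to the mean absolute error, the coefficient of determination (R²) on the test
set can be printed with the model's score method. This is an optional addition, not part of the original
listing:

# Optional: R^2 of the fitted linear model on the held-out test set
print("R^2 on test set:", model.score(test_x, test_y))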
OUTPUT
RESULT
Thus, the Linear regression model has been built and executed.
Ex.No:6a DECISION TREES

Date:

AIM
To build a decision tree using a python program.

ALGORITHM

Step 1: Start.

Step 2: Import the required libraries.

Step 3: Load the dataset.

Step 4: Create the train and test sets.

Step 5: Build the model.

Step 6: Visualize the decision tree.

Step 7: Stop.

PROGRAM

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('Telco-Customer-Churn.csv',header=0)
df.head()

df.info()

obj_cols = df.select_dtypes(include='object').columns.tolist()
def check_value_counts(col_list):
    for col in col_list:
        print('-----------------------------')
        print(round((df[col].value_counts()/df.shape[0])*100, 2))
        print('-----------------------------')
check_value_counts(obj_cols)

df.isna().sum()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for col in df.columns.to_numpy():
    if df[col].dtypes in ('object', 'category'):
        df[col] = le.fit_transform(df[col].astype(str))

X = df.drop('Churn',axis=1)
y = df['Churn']

from sklearn.model_selection import train_test_split


X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7,
random_state=42)
X_train.shape, X_test.shape

from sklearn.tree import DecisionTreeClassifier


dt = DecisionTreeClassifier(max_depth=3,random_state=43)
dt.fit(X_train, y_train)

from IPython.display import Image


from six import StringIO
from sklearn.tree import export_graphviz
import pydotplus, graphviz

dot_data = StringIO()
export_graphviz(dt, out_file=dot_data, filled=True, rounded=True,
feature_names=X.columns, class_names=['Churn', "Not Churn"])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
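
The listing visualizes the tree but never scores it. A short optional check of test-set accuracy, not part
of the original program, could be added after fitting:

from sklearn.metrics import accuracy_score
# Optional: accuracy of the depth-3 tree on the held-out test set
print("Test accuracy:", accuracy_score(y_test, dt.predict(X_test)))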
OUTPUT

RESULT

Thus, the program for decision tree was executed Successfully.


Ex.No:6b RANDOM FOREST

Date:

AIM
To build random forest using a python program.

ALGORITHM

Step 1: Start.

Step 2: Import the required libraries.

Step 3: Load the dataset.

Step 4: Create the train and test sets.

Step 5: Build the model.

Step 6: Visualize the random forest.

Step 7: Stop.

PROGRAM

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
dataset = pd.read_csv("Maternal Health Risk Data Set.csv")
dataset.head()

dataset.info()

dataset.describe().T

g = sns.pairplot(dataset, hue='RiskLevel')
g.fig.suptitle("Scatterplot and histogram of pairs of variables color coded by risk level",
fontsize = 14,y=1.05);

dataset['RiskLevel'].unique()

dataset['RiskLevel'] = dataset['RiskLevel'].replace('low risk', 0).replace('mid risk', 1).replace('high risk', 2)
y = dataset['RiskLevel']
X = dataset.drop(['RiskLevel'], axis=1)
from sklearn.model_selection import train_test_split
SEED = 42
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,random_state=SEED)

from sklearn.ensemble import RandomForestClassifier


rfc = RandomForestClassifier(n_estimators=3,max_depth=2,
random_state=SEED)

rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)

from sklearn import tree


features = X.columns.values
classes = ['0', '1', '2']
for estimator in rfc.estimators_:
    print(estimator)
    plt.figure(figsize=(12,6))
    tree.plot_tree(estimator, feature_names=features,
                   class_names=classes, fontsize=8, filled=True, rounded=True)
    plt.show()
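
Since y_pred has already been computed, a per-class summary for the three risk levels can be printed with
classification_report. This is an optional addition, not part of the original listing:

from sklearn.metrics import classification_report
# Optional: precision/recall/F1 for the three risk levels (0 = low, 1 = mid, 2 = high)
print(classification_report(y_test, y_pred))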

OUTPUT
RESULT

Thus, the program for Random forest was executed Successfully.


Ex .No:7 SUPPORT VECTOR MACHINE MODEL
Date:

AIM

To build a SVM model in python.

ALGORITHM

Step 1: Start.

Step 2: Load the data.

Step 3: Create training and test split.

Step 4: Perform feature scaling.

Step 5: Instantiate the SVC classifier.

Step 6: Fit the model.

Step 7: Measure the model performance.

Step 8: Stop.

PROGRAM:

import numpy as np

import pandas as pd

import matplotlib.pyplot as plt

x = pd.read_csv("cancer.csv")

a = np.array(x)

y = a[:,30] # classes having 0 and 1

x = np.column_stack((x.malignant,x.benign))

x.shape

print(x, y)
from sklearn.svm import SVC

clf = SVC(kernel='linear')

clf.fit(x, y)

clf.predict([[120, 990]])

clf.predict([[85, 550]])
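
Step 4 of the algorithm calls for feature scaling, which the listing above skips. The sketch below shows
one way that step could be realised with StandardScaler; the scaled classifier and the variable names are
assumptions and are not part of the original program:

from sklearn.preprocessing import StandardScaler
# Optional: scale the two features before fitting, as Step 4 of the algorithm suggests
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)
clf_scaled = SVC(kernel='linear')
clf_scaled.fit(x_scaled, y)
# New samples must be transformed with the same scaler before prediction
print(clf_scaled.predict(scaler.transform([[120, 990]])))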
OUTPUT

RESULT

Thus, the program for SVM model in python has been executed successfully.
Exp No: 8 IMPLEMENTATION OF ENSEMBLING TECHNIQUES
Date:

AIM:
To write a program to implement the voting technique of ensemble classifiers for
classification

ALGORITHM:
Step 1: Start
Step 2: Import the required packages.
Step 3: Implement the base classifiers.
Step 4: Define the voting ensemble model.
Step 5: Combine the predictions from the models (hard and soft voting).
Step 6: Use set_params to drop an estimator and refit.
Step 7: End

PROGRAM:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
clf1 = LogisticRegression(multi_class='multinomial', random_state=1)
clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
clf3 = GaussianNB()
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])
eclf1 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
eclf1 = eclf1.fit(X, y)
print(eclf1.predict(X))

np.array_equal(eclf1.named_estimators_.lr.predict(X),
eclf1.named_estimators_['lr'].predict(X))

eclf2 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft')
eclf2 = eclf2.fit(X, y)
print(eclf2.predict(X))

eclf2 = eclf2.set_params(lr='drop')
eclf2 = eclf2.fit(X, y)
len(eclf2.estimators_)

eclf3 = VotingClassifier(estimators=[
('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft', weights=[2,1,1],
flatten_transform=True)
eclf3 = eclf3.fit(X, y)
print(eclf3.predict(X))

print(eclf3.transform(X).shape)
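
Because eclf3 uses soft voting, the weighted average of the base estimators' class probabilities can also
be inspected directly. This optional check is not part of the original listing:

# Optional: averaged class probabilities used by the soft-voting ensemble
print(eclf3.predict_proba(X))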

OUTPUT:

RESULT:

Thus, the program to implement ensemble techniques (voting) for classification has
been executed successfully.
EX.NO: 9 IMPLEMENTATION OF CLUSTERING ALGORITHM

DATE:

AIM:
To implement the K-Means clustering algorithm.

ALGORITHM:

STEP 1 : Start

STEP 2 : Data pre-processing step

STEP 3 : Finding the optimal number of clusters using the elbow method

STEP 4 : Training the K-means algorithm on the training dataset

STEP 5 : Visualizing the clusters

STEP 6 : Stop

PROGRAM:

import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

dataset = pd.read_csv('Mall_Customers.csv')
x = dataset.iloc[:, [3, 4]].values

from sklearn.cluster import KMeans


wcss_list= []

for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=42)
    kmeans.fit(x)
    wcss_list.append(kmeans.inertia_)
mtp.plot(range(1, 11), wcss_list)
mtp.title('The Elbow Method Graph')
mtp.xlabel('Number of clusters(k)')
mtp.ylabel('wcss_list')
mtp.show()
kmeans = KMeans(n_clusters=5, init='k-means++', random_state= 42)
y_predict= kmeans.fit_predict(x)
mtp.scatter(x[y_predict == 0, 0], x[y_predict == 0, 1], s = 100, c = 'blue', label =
'Cluster 1') #for first cluster
mtp.scatter(x[y_predict == 1, 0], x[y_predict == 1, 1], s = 100, c = 'green', label =
'Cluster 2') #for second cluster
mtp.scatter(x[y_predict== 2, 0], x[y_predict == 2, 1], s = 100, c = 'red', label =
'Cluster 3') #for third cluster
mtp.scatter(x[y_predict == 3, 0], x[y_predict == 3, 1], s = 100, c = 'cyan', label =
'Cluster 4') #for fourth cluster
mtp.scatter(x[y_predict == 4, 0], x[y_predict == 4, 1], s = 100, c = 'magenta', label =
'Cluster 5') #for fifth cluster
mtp.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c =
'k', label = 'Centroid')
mtp.title('Clusters of customers')
mtp.xlabel('Annual Income (k$)')
mtp.ylabel('Spending Score (1-100)')
mtp.legend()
mtp.show()
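
A common numeric sanity check on the number of clusters chosen from the elbow plot is the silhouette
score. This optional addition is not part of the original listing:

from sklearn.metrics import silhouette_score
# Optional: silhouette score for the k = 5 clustering found above
print("Silhouette score (k=5):", silhouette_score(x, y_predict))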
OUTPUT:

RESULT:
Thus the program for K-means Clustering algorithm has been implemented
successfully.
EX.NO:10 IMPLEMENTATION OF EXPECTATION MAXIMIZATION

DATE:

AIM:
To implement EM(Expectation Maximization) algorithm for Bayesian networks.

ALGORITHM:

STEP 1: Initialize the parameters of the Bayesian network, such as the conditional
probabilities for each variable.

STEP 2: In the E-step, compute the posterior probability of the hidden variables given
the observed data and the current parameter estimates.

STEP 3: In the M-step, update the parameter estimates by maximizing the expected
complete data log-likelihood. This involves using the posterior probabilities computed
in the E-step to estimate the conditional probabilities of each variable.

STEP 4: Repeat steps 2 and 3 until convergence is reached. Convergence can be
determined by monitoring the change in the log-likelihood or the parameter estimates
between iterations.

STEP 5: Once convergence is reached, the final parameter estimates can be used to
make predictions or perform inference on new data.

PROGRAM:

import pandas as pd
import numpy as np
import pyAgrum as gum
import matplotlib.pyplot as plt

# Read data that does not contain any missing values


data = pd.read_csv("/content/asia10K (4).csv")

# Rename columns to match variable names


#data = data.rename(columns={"smoker", "cancer", "xray"})
# Split data into test and new data
test_data = data[:2000]
new_data = data[2000:].copy()

# Learn structure of initial model from test data


learner = gum.BNLearner(test_data)
learner.useScoreBIC()
learner.useGreedyHillClimbing()
model = learner.learnBN()

# Create some missing values in new data


#new_data["smoker"][:500] = "?"

# Learn parameterization of BN using EM algorithm


bn = gum.BayesNet(model)
learner2 = gum.BNLearner(new_data, model)
learner2.useEM(1e-10)
learner2.fitParameters(bn)

# Plot the error during EM iterations


plt.plot(np.arange(1, 1 + learner2.nbrIterations()))
plt.semilogy()
plt.title("Error during EM Iterations")
plt.show()
OUTPUT:

RESULT:
Thus the program for EM algorithm has been implemented successfully.
EXP NO:11 NEURAL NETWORK MODEL
DATE :

AIM:
To build a simple Neural Network model using Iris datasets.

ALGORITHM:

STEP 1: Start

STEP 2: Define the architecture: Decide on the number of layers, the number of neurons
in each layer, and the activation function to be used in each layer.

STEP 3: Initialize the weights and biases: Randomly initialize the weights and biases for
each neuron in the network.

STEP 4: Forward propagation: Take the input data and pass it through the network by
multiplying the weights with the input values and adding the biases. Then apply the activation
function to each neuron in each layer to generate the output.

STEP 5: Compute the loss: Calculate the difference between the predicted output and the true
output using a loss function.

STEP 6:Backpropagation: Calculate the gradient of the loss with respect to the weights and
biases in each layer. This is done using the chain rule of calculus to propagate the error from
the output layer back to the input layer.

STEP 7:Update the weights and biases: Use the calculated gradients to update the weights
and biases in each layer using an optimization algorithm such as stochastic gradient descent
or Adam.

STEP 8: Repeat steps 4 to 7 for a fixed number of epochs or until convergence is achieved.

STEP 9: Test the model: Use a separate dataset to evaluate the performance of the trained
model by calculating the accuracy or other performance metrics.

STEP 10:Deploy the model: Use the trained model to make predictions on new, unseen data.

PROGRAM:

import keras
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.preprocessing import normalize
data=pd.read_csv("/content/Iris.csv")
print("Describing the data: ",data.describe())
print("Info of the data:",data.info())
print("10 first samples of the dataset:",data.head(10))
print("10 last samples of the dataset:",data.tail(10))
sns.lmplot(x='SepalLengthCm', y='SepalWidthCm', data=data, fit_reg=False, hue="Species",
           scatter_kws={"marker": "D", "s": 50})
plt.title('SepalLength vs SepalWidth')

sns.lmplot(x='PetalLengthCm', y='PetalWidthCm', data=data, fit_reg=False, hue="Species",
           scatter_kws={"marker": "D", "s": 50})
plt.title('PetalLength vs PetalWidth')

sns.lmplot(x='SepalLengthCm', y='PetalLengthCm', data=data, fit_reg=False, hue="Species",
           scatter_kws={"marker": "D", "s": 50})
plt.title('SepalLength vs PetalLength')

sns.lmplot(x='SepalWidthCm', y='PetalWidthCm', data=data, fit_reg=False, hue="Species",
           scatter_kws={"marker": "D", "s": 50})
plt.title('SepalWidth vs PetalWidth')
plt.show()
print(data["Species"].unique())
# Output: ['Iris-setosa' 'Iris-versicolor' 'Iris-virginica']
data.loc[data["Species"]=="Iris-setosa","Species"]=0
data.loc[data["Species"]=="Iris-versicolor","Species"]=1
data.loc[data["Species"]=="Iris-virginica","Species"]=2
print(data.head())
data=data.iloc[np.random.permutation(len(data))]
print(data.head())
X=data.iloc[:,1:5].values
y=data.iloc[:,5].values
print("Shape of X",X.shape)
print("Shape of y",y.shape)
print("Examples of X\n",X[:3])
print("Examples of y\n",y[:3])
X_normalized=normalize(X,axis=0)
print("Examples of X_normalised\n",X_normalized[:3])
total_length=len(data)
train_length=int(0.8*total_length)
test_length=int(0.2*total_length)
X_train=X_normalized[:train_length]
X_test=X_normalized[train_length:]
y_train=y[:train_length]
y_test=y[train_length:]

print("Length of train set x:",X_train.shape[0],"y:",y_train.shape[0])


print("Length of test set x:",X_test.shape[0],"y:",y_test.shape[0])
# Length of train set x: 120 y: 120
# Length of test set x: 30 y: 30
from keras.models import Sequential
from keras.layers import Dense,Activation,Dropout
from tensorflow.keras.layers import BatchNormalization
#from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
y_train=np_utils.to_categorical(y_train,num_classes=3)
y_test=np_utils.to_categorical(y_test,num_classes=3)
print("Shape of y_train",y_train.shape)
print("Shape of y_test",y_test.shape)
# Shape of y_train (120, 3)
# Shape of y_test (30, 3)
model=Sequential()
model.add(Dense(1000,input_dim=4,activation='relu'))
model.add(Dense(500,activation='relu'))
model.add(Dense(300,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(3,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(X_train,y_train,validation_data=(X_test,y_test),batch_size=20,epochs=10,verbose=1)
prediction=model.predict(X_test)
length=len(prediction)
y_label=np.argmax(y_test,axis=1)
predict_label=np.argmax(prediction,axis=1)
accuracy=np.sum(y_label==predict_label)/length * 100
print("Accuracy of the dataset",accuracy )

OUTPUT:

Describing the data: Id SepalLengthCm SepalWidthCm


PetalLengthCm PetalWidthCm
count 150.000000 150.000000 150.000000 150.000000 150.000000
mean 75.500000 5.843333 3.054000 3.758667 1.198667
std 43.445368 0.828066 0.433594 1.764420 0.763161
min 1.000000 4.300000 2.000000 1.000000 0.100000
25% 38.250000 5.100000 2.800000 1.600000 0.300000
50% 75.500000 5.800000 3.000000 4.350000 1.300000
75% 112.750000 6.400000 3.300000 5.100000 1.800000
max 150.000000 7.900000 4.400000 6.900000 2.500000
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 6 columns):
# Column Non-Null Count Dtype

0 Id 150 non-null int64


1 SepalLengthCm 150 non-null float64
2 SepalWidthCm 150 non-null float64
3 PetalLengthCm 150 non-null float64
4 PetalWidthCm 150 non-null float64
5 Species 150 non-null object
dtypes: float64(4), int64(1), object(1)
memory usage: 7.2+ KB
Info of the data: None
10 first samples of the dataset: Id SepalLengthCm SepalWidthCm
PetalLengthCm PetalWidthCm Species
0 1 5.1 3.5 1.4 0.2 Iris-setosa
1 2 4.9 3.0 1.4 0.2 Iris-setosa
2 3 4.7 3.2 1.3 0.2 Iris-setosa
3 4 4.6 3.1 1.5 0.2 Iris-setosa
4 5 5.0 3.6 1.4 0.2 Iris-setosa
5 6 5.4 3.9 1.7 0.4 Iris-setosa
6 7 4.6 3.4 1.4 0.3 Iris-setosa
7 8 5.0 3.4 1.5 0.2 Iris-setosa
8 9 4.4 2.9 1.4 0.2 Iris-setosa
9 10 4.9 3.1 1.5 0.1 Iris-setosa
10 last samples of the dataset: Id SepalLengthCm SepalWidthCm
PetalLengthCm PetalWidthCm \
140 141 6.7 3.1 5.6 2.4
141 142 6.9 3.1 5.1 2.3
142 143 5.8 2.7 5.1 1.9
143 144 6.8 3.2 5.9 2.3
144 145 6.7 3.3 5.7 2.5
145 146 6.7 3.0 5.2 2.3
146 147 6.3 2.5 5.0 1.9
147 148 6.5 3.0 5.2 2.0
148 149 6.2 3.4 5.4 2.3
149 150 5.9 3.0 5.1 1.8

Species
140 Iris-virginica
141 Iris-virginica
142 Iris-virginica
143 Iris-virginica
144 Iris-virginica
145 Iris-virginica
146 Iris-virginica
147 Iris-virginica
148 Iris-virginica
149 Iris-virginica
['Iris-setosa' 'Iris-versicolor' 'Iris-virginica']
Id SepalLengthCm SepalWidthCm PetalLengthCm PetalWidthCm Species
0 1 5.1 3.5 1.4 0.2 0
1 2 4.9 3.0 1.4 0.2 0
2 3 4.7 3.2 1.3 0.2 0
3 4 4.6 3.1 1.5 0.2 0
4 5 5.0 3.6 1.4 0.2 0
Id SepalLengthCm SepalWidthCm PetalLengthCm PetalWidthCm Species
11 12 4.8 3.4 1.6 0.2 0
102 103 7.1 3.0 5.9 2.1 2
73 74 6.1 2.8 4.7 1.2 1
50 51 7.0 3.2 4.7 1.4 1
93 94 5.0 2.3 3.3 1.0 1
Shape of X (150, 4)
Shape of y (150,)
Examples of X
[[4.8 3.4 1.6 0.2]
[7.1 3. 5.9 2.1]
[6.1 2.8 4.7 1.2]]
Examples of y
[0 2 1]
Examples of X_normalised
[[0.0664119 0.09000348 0.03148167 0.01150299]
[0.09823426 0.07941484 0.11608866 0.12078145]
[0.08439845 0.07412052 0.09247741 0.06901797]]
Length of train set x: 120 y: 120
Length of test set x: 30 y: 30
Shape of y_train (120, 3)
Shape of y_test (30, 3)
Epoch 1/10
6/6 [==============================] - 1s 37ms/step - loss: 1.0820 - accuracy:
0.4917 - val_loss: 1.0473 - val_accuracy: 0.7000
Epoch 2/10
6/6 [==============================] - 0s 12ms/step - loss: 1.0057 - accuracy:
0.6583 - val_loss: 0.9306 - val_accuracy: 0.7000
Epoch 3/10
6/6 [==============================] - 0s 13ms/step - loss: 0.8752 - accuracy:
0.6583 - val_loss: 0.7508 - val_accuracy: 0.7000
Epoch 4/10
6/6 [==============================] - 0s 13ms/step - loss: 0.6831 - accuracy:
0.7000 - val_loss: 0.5128 - val_accuracy: 0.8333
Epoch 5/10
6/6 [==============================] - 0s 14ms/step - loss: 0.4756 - accuracy:
0.8667 - val_loss: 0.3436 - val_accuracy: 0.9667
Epoch 6/10
6/6 [==============================] - 0s 13ms/step - loss: 0.3311 - accuracy:
0.9667 - val_loss: 0.2267 - val_accuracy: 1.0000
Epoch 7/10
6/6 [==============================] - 0s 12ms/step - loss: 0.2746 - accuracy:
0.9167 - val_loss: 0.2275 - val_accuracy: 0.8667
Epoch 8/10
6/6 [==============================] - 0s 13ms/step - loss: 0.2652 - accuracy:
0.8917 - val_loss: 0.1223 - val_accuracy: 0.9667
Epoch 9/10
6/6 [==============================] - 0s 13ms/step - loss: 0.2381 - accuracy:
0.8917 - val_loss: 0.1572 - val_accuracy: 0.9333
Epoch 10/10
6/6 [==============================] - 0s 12ms/step - loss: 0.2037 - accuracy:
0.9000 - val_loss: 0.1373 - val_accuracy: 0.9667
1/1 [==============================] - 0s 50ms/step
Accuracy of the dataset 96.66666666666667

RESULT:

Thus the program for a simple Neural Network model has been executed successfully.
Exp No: 12 Implementation of Convolutional Neural Network

Date:

Aim:
To implement and build a Convolutional neural network model which predicts the age and gender of a
person using the given pre-trained models.

Algorithm:
Steps in CNN Algorithm:

Step-1: Choose the Dataset.

Step-2: Prepare the Dataset for training.

Step-3: Create training Data.

Step-4: Shuffle the Dataset.

Step-5: Assigning Labels and Features.

Step-6: Normalising X and converting labels to categorical data.

Step-7: Split X and Y for use in CNN.

Step-8: Define, compile and train the CNN model.

Step-9: Accuracy and score of the model.

Program:
import cv2 as cv
import math
import time
from google.colab.patches import cv2_imshow

def getFaceBox(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    bboxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            bboxes.append([x1, y1, x2, y2])
            cv.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0), int(round(frameHeight/150)), 8)
    return frameOpencvDnn, bboxes

faceProto = "/content/opencv_face_detector.pbtxt"
faceModel = "/content/opencv_face_detector_uint8.pb"
ageProto = "/content/age_deploy.prototxt"
ageModel = "/content/age_net.caffemodel"
genderProto = "/content/gender_deploy.prototxt"
genderModel = "/content/gender_net.caffemodel"

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)

ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

ageNet = cv.dnn.readNet(ageModel, ageProto)
genderNet = cv.dnn.readNet(genderModel, genderProto)
faceNet = cv.dnn.readNet(faceModel, faceProto)

padding = 20  # padding (in pixels) around the detected face box; value assumed, not given in the original listing

def age_gender_detector(frame):
    # Read frame
    t = time.time()
    frameFace, bboxes = getFaceBox(faceNet, frame)
    for bbox in bboxes:
        # print(bbox)
        face = frame[max(0, bbox[1]-padding):min(bbox[3]+padding, frame.shape[0]-1),
                     max(0, bbox[0]-padding):min(bbox[2]+padding, frame.shape[1]-1)]
        blob = cv.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        # print("Gender Output : {}".format(genderPreds))
        print("Gender : {}, conf = {:.3f}".format(gender, genderPreds[0].max()))
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print("Age Output : {}".format(agePreds))
        print("Age : {}, conf = {:.3f}".format(age, agePreds[0].max()))
        label = "{},{}".format(gender, age)
        cv.putText(frameFace, label, (bbox[0], bbox[1]-10), cv.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv.LINE_AA)
    return frameFace

from google.colab import files

uploaded = files.upload()

input = cv.imread("2.jpg")

output = age_gender_detector(input)

cv2_imshow(output)

Output:
gender : Male, conf = 1.000

Age Output : [[2.8247703e-05 8.9249297e-05 3.0017464e-04 8.8183772e-03 9.3055397e-01


5.1735926e-02 7.6946630e-03 7.7927281e-04]]

Age: (25-32), conf = 0.873.

Result:
Thus the program to implement and build a Convolutional neural network model which predicts the age and
gender of a person using the given pre-trained models has been executed successfully and the output got
verified.
