AI&ML LAB MANUAL FINAL
COLLEGE OF ENGINEERING
COIMBATORE-641 105
(APPROVED BY AICTE, NEW DELHI & AFFILIATED TO ANNA UNIVERSITY, CHENNAI)
NH-47, PALAKKAD MAIN ROAD, NAVAKKARAI POST, NEAR NANDHI TEMPLE, COIMBATORE 641 105
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
CS3491
ARTIFICIAL INTELLIGENCE AND
MACHINE LEARNING
LIST OF EXPERIMENTS:
PRACTICAL EXERCISES:
1. Implementation of Uninformed search algorithms (BFS, DFS)
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN model
PREPARED BY
Mr.A.D.Saravanaprabhu/AP
1. Implementation of Uninformed search algorithms (BFS, DFS)
Aim:
To implement uninformed search algorithms such as BFS and DFS.
Algorithm(BFS):
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Algorithm(DFS):
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Program(BFS):
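A minimal Python sketch that follows the STATUS-based steps above; the adjacency list graph is a hypothetical example, not part of the original manual:

from collections import deque

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def bfs(graph, start):
    READY, WAITING, PROCESSED = 1, 2, 3           # node states
    status = {node: READY for node in graph}      # Step 1
    queue = deque([start])                        # Step 2
    status[start] = WAITING
    order = []
    while queue:                                  # Step 3
        n = queue.popleft()                       # Step 4: dequeue and process
        status[n] = PROCESSED
        order.append(n)
        for neighbour in graph[n]:                # Step 5: enqueue ready neighbours
            if status[neighbour] == READY:
                queue.append(neighbour)
                status[neighbour] = WAITING
    return order

print("BFS order:", bfs(graph, 'A'))   # for this example graph: ['A', 'B', 'C', 'D', 'E', 'F']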
Output(BFS):
Program(DFS):
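A matching Python sketch of DFS, replacing the queue with a stack as in the steps above (the same hypothetical example graph is reused):

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def dfs(graph, start):
    READY, WAITING, PROCESSED = 1, 2, 3           # node states
    status = {node: READY for node in graph}      # Step 1
    stack = [start]                               # Step 2
    status[start] = WAITING
    order = []
    while stack:                                  # Step 3
        n = stack.pop()                           # Step 4: pop and process
        status[n] = PROCESSED
        order.append(n)
        for neighbour in graph[n]:                # Step 5: push ready neighbours
            if status[neighbour] == READY:
                stack.append(neighbour)
                status[neighbour] = WAITING
    return order

print("DFS order:", dfs(graph, 'A'))   # for this example graph: ['A', 'C', 'F', 'B', 'E', 'D']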
Result:
Thus the uninformed search algorithms BFS and DFS have been executed successfully and the output was verified.
2. Implementation of Informed search algorithms (A*)
Aim:
To implement the informed search algorithm A* and find the least-cost path between two nodes of a weighted graph.
Algorithm(A*):
Step 1: Place the starting node in the OPEN set and set g(start) = 0.
Step 2: Pick the node n in OPEN with the lowest f(n) = g(n) + h(n).
Step 3: If n is the goal node, reconstruct the path by following the parent links and stop.
Step 4: Otherwise, for each neighbour m of n, set g(m) and its parent if m is new, or update them if a cheaper path through n is found, moving m back to OPEN if it was already closed.
Step 5: Move n from OPEN to CLOSED and repeat from Step 2; if OPEN empties without reaching the goal, report that no path exists.
Program(A*):
def aStarAlgo(start_node, stop_node):
    open_set = {start_node}
    closed_set = set()
    g = {}                        # cost from the start node
    parents = {}                  # parent map for path reconstruction
    g[start_node] = 0
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # pick the open node with the lowest f(n) = g(n) + h(n)
        for v in open_set:
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] is None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # a cheaper path to m has been found through n
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n is None:
            print('Path does not exist!')
            return None
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]
Graph_nodes = {
'A': [('B', 6), ('F', 3)],
'B': [('A', 6), ('C', 3), ('D', 2)],
'C': [('B', 3), ('D', 1), ('E', 5)],
'D': [('B', 2), ('C', 1), ('E', 8)],
'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
'F': [('A', 3), ('G', 1), ('H', 7)],
'G': [('F', 1), ('I', 3)],
'H': [('F', 7), ('I', 2)],
'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)],
}
aStarAlgo('A', 'J')
Output(A*):
Path found: ['A', 'F', 'G', 'I', 'J']
Result:
Thus the program to implement the informed search algorithm A* has been executed successfully and the output was verified.
3. Implement Naïve Bayes models.
Aim:
To diagnose heart patients and predict disease using heart disease dataset with
Naïve Bayes Classifier Algorithm.
Algorithm:
1. Read the heart disease dataset and split it into training and test sets.
2. For each class, compute the mean and standard deviation of every feature (Naïve Bayes from scratch), or fit sklearn's GaussianNB.
3. For each test record, compute the Gaussian class-conditional probability of each class and predict the most probable one.
4. Evaluate the predictions using accuracy, the confusion matrix, the F1 score and ROC curves.
Program:
NB_from_scratch.py
import csv
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, auc
import matplotlib.pyplot as plt
from itertools import cycle
from scipy import interp
import warnings
import random
import math
warnings.filterwarnings("ignore")
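# NOTE (assumed; the original listing omits these steps): load the heart
# disease CSV into `dataset` and summarize each class by per-feature
# (mean, standard deviation) pairs. The filename and the helper below are
# hypothetical, written only to make the excerpt runnable.
with open('heart_disease.csv') as f:
    dataset = [[float(value) for value in row] for row in csv.reader(f)]

def summarize_by_class(trainset):
    separated = {}
    for row in trainset:
        separated.setdefault(row[-1], []).append(row[:-1])
    return {class_num: [(np.mean(col), np.std(col)) for col in zip(*rows)]
            for class_num, rows in separated.items()}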
# Example of Naive Bayes implemented from scratch in Python
for z in range(5):
    print("\n\n\nTest Train Split no. ", z + 1, "\n\n\n")
    trainsize = int(len(dataset) * 0.75)
    trainset = []
    testset = list(dataset)
    for i in range(trainsize):
        index = random.randrange(len(testset))
        trainset.append(testset.pop(index))
    class_data = summarize_by_class(trainset)
    # Getting prediction vector
    y_pred = []
    for i in range(len(testset)):
        class_probability = {}
        for class_num, row in class_data.items():
            class_probability[class_num] = 1
            for j in range(len(row)):
                calculated_mean, calculated_dev = row[j]
                x = float(testset[i][j])
                if calculated_dev != 0:
                    # Gaussian likelihood of feature value x for this class
                    power = math.exp(-(math.pow(x - calculated_mean, 2) /
                                       (2 * math.pow(calculated_dev, 2))))
                    probability = (1 / (math.sqrt(2 * math.pi) * calculated_dev)) * power
                    class_probability[class_num] *= probability
        # predict the class with the highest probability (fixed: the original
        # listing used resultant_class without computing it)
        resultant_class = max(class_probability, key=class_probability.get)
        y_pred.append(resultant_class)
    # Getting accuracy
    count = 0
    for i in range(len(testset)):
        if testset[i][-1] == y_pred[i]:
            count += 1
    accuracy = (count / float(len(testset))) * 100.0
    print("\n\n Accuracy: ", accuracy, "%")
    print("\n\n\n\nF1 Score")
    f_score = f1_score(y1, y_pred1, average='weighted')
    print(f_score)
    # NOTE: y1, y_pred1, y2, y3, n_classes, all_fpr and mean_tpr are prepared
    # by ROC bookkeeping code omitted from the original listing.
    for i in range(len(y_pred1)):
        y3[i][int(y_pred1[i])] = 1
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y2[:, i], y3[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])
    fpr["macro"] = all_fpr
    tpr["macro"] = mean_tpr
    roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
    plt.plot(fpr["macro"], tpr["macro"],
             label='macro-average (area = {0:0.2f})'.format(roc_auc["macro"]),
             color='navy', linestyle=':', linewidth=4)
NB_from_Gaussian_Sklearn.py
import csv
import pandas as pd
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, auc
import matplotlib.pyplot as plt
from itertools import cycle
from scipy import interp
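# NOTE (assumed; the original listing omits this step): load the heart
# disease CSV into feature matrix x and label vector y. The filename is
# hypothetical; y2, y3, all_fpr and mean_tpr used for the ROC plot below
# are prepared by bookkeeping code omitted from the listing.
heart = pd.read_csv('heart_disease.csv')
x = heart.iloc[:, :-1].values
y = heart.iloc[:, -1:].values
n_classes = len(np.unique(y))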
for z in range(5):
    print("\n\n\nTest Train Split no. ", z + 1, "\n\n\n")
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=None)
    # Gaussian Naive Bayes from sklearn
    gnb = GaussianNB()
    gnb.fit(x_train, y_train.ravel())
    y_pred = gnb.predict(x_test)
    # convert 2D arrays to 1D arrays
    y1 = y_test.ravel()
    y_pred1 = y_pred.ravel()
    print("\n\n\n\nConfusion Matrix")
    cf_matrix = confusion_matrix(y1, y_pred1)
    print(cf_matrix)
    print("\n\n\n\nF1 Score")
    f_score = f1_score(y1, y_pred1, average='weighted')
    print(f_score)
    for i in range(len(y_pred1)):
        y3[i][int(y_pred1[i])] = 1
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y2[:, i], y3[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])
    fpr["macro"] = all_fpr
    tpr["macro"] = mean_tpr
    roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
    plt.plot(fpr["macro"], tpr["macro"],
             label='macro-average (area = {0:0.2f})'.format(roc_auc["macro"]),
             color='navy', linestyle=':', linewidth=4)
Output:
Result:
Thus the program to diagnose heart patients and predict disease using the heart disease dataset with the Naïve Bayes classifier algorithm has been executed successfully and the output was verified.
4. Implement Bayesian Networks
Aim:
To construct a Bayesian network, to demonstrate the diagnosis of heart patients
using standard Heart Disease Data Set.
Algorithm:
1. Read the heart disease dataset, in which every attribute is integer-coded.
2. Define a Dirichlet prior and an observed Categorical node for each attribute (age, gender, family history, diet, lifestyle, cholesterol).
3. Model heart disease as a Categorical node conditioned on the six attributes and learn its conditional probability table from the data.
4. Query the network interactively: given user-entered evidence for the six attributes, infer the probability of heart disease.
Program:
import bayespy as bp
import numpy as np
import csv
from colorama import init
from colorama import Fore, Back, Style
init()
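# NOTE (assumed; omitted from the original listing): attribute encodings and
# data loading. The filename and exact category names are hypothetical; the
# category counts match the Dirichlet sizes used below.
ageEnum = {'SuperSeniorCitizen': 0, 'SeniorCitizen': 1, 'MiddleAged': 2, 'Youth': 3, 'Teen': 4}
genderEnum = {'Male': 0, 'Female': 1}
familyHistoryEnum = {'Yes': 0, 'No': 1}
dietEnum = {'High': 0, 'Medium': 1, 'Low': 2}
lifeStyleEnum = {'Athlete': 0, 'Active': 1, 'Moderate': 2, 'Sedentary': 3}
cholesterolEnum = {'High': 0, 'BorderLine': 1, 'Normal': 2}
heartDiseaseEnum = {'Yes': 0, 'No': 1}
with open('heart_disease_data.csv') as csvfile:
    rows = list(csv.reader(csvfile))
data = np.array(rows, dtype=int)   # integer-coded records, one column per attribute
N = len(data)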
p_age = bp.nodes.Dirichlet(1.0*np.ones(5))
age = bp.nodes.Categorical(p_age, plates=(N,))
age.observe(data[:,0])
p_gender = bp.nodes.Dirichlet(1.0*np.ones(2))
gender = bp.nodes.Categorical(p_gender, plates=(N,))
gender.observe(data[:,1])
p_familyhistory = bp.nodes.Dirichlet(1.0*np.ones(2))
familyhistory = bp.nodes.Categorical(p_familyhistory, plates=(N,))
familyhistory.observe(data[:,2])
p_diet = bp.nodes.Dirichlet(1.0*np.ones(3))
diet = bp.nodes.Categorical(p_diet, plates=(N,))
diet.observe(data[:,3])
p_lifestyle = bp.nodes.Dirichlet(1.0*np.ones(4))
lifestyle = bp.nodes.Categorical(p_lifestyle, plates=(N,))
lifestyle.observe(data[:,4])
p_cholesterol = bp.nodes.Dirichlet(1.0*np.ones(3))
cholesterol = bp.nodes.Categorical(p_cholesterol, plates=(N,))
cholesterol.observe(data[:,5])
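# NOTE (assumed; omitted from the original listing): the heart disease node,
# a Categorical mixture conditioned on the six attributes, whose conditional
# probability table is learned from column 6 of the data.
p_heartdisease = bp.nodes.Dirichlet(np.ones(2), plates=(5, 2, 2, 3, 4, 3))
heartdisease = bp.nodes.MultiMixture([age, gender, familyhistory, diet, lifestyle, cholesterol],
                                     bp.nodes.Categorical, p_heartdisease)
heartdisease.observe(data[:, 6])
p_heartdisease.update()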
m = 0
while m == 0:
    print("\n")
    res = bp.nodes.MultiMixture([int(input('Enter Age: ' + str(ageEnum))),
                                 int(input('Enter Gender: ' + str(genderEnum))),
                                 int(input('Enter FamilyHistory: ' + str(familyHistoryEnum))),
                                 int(input('Enter dietEnum: ' + str(dietEnum))),
                                 int(input('Enter LifeStyle: ' + str(lifeStyleEnum))),
                                 int(input('Enter Cholesterol: ' + str(cholesterolEnum)))],
                                bp.nodes.Categorical, p_heartdisease).get_moments()[0][heartDiseaseEnum['Yes']]
    print("Probability(HeartDisease) = " + str(res))
    m = int(input("Enter for Continue:0, Exit :1 "))
Output:
Result:
Thus the program to implement a Bayesian network on the given heart disease dataset has been executed successfully and the output was verified.
5. Build Regression models
Aim:
To build regression models such as locally weighted linear regression and plot the
necessary graphs.
Algorithm:
1. Read the given data sample to X and the curve (linear or non-linear) to Y.
2. Set the value for the smoothening (free) parameter τ.
3. Set the point of interest x0, which is a subset of X.
4. Determine the weight of each training point using the Gaussian kernel w(x, x0) = exp(-(x - x0)^2 / (2τ^2)), collected into the diagonal weight matrix W.
5. Determine the value of the model parameter using β = (X^T W X)^-1 X^T W y.
6. Prediction = x0*β.
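Steps 4-6 can be read directly as code. The following is a minimal sketch of a single locally weighted prediction; the function and variable names are illustrative, not part of the original program:

import numpy as np

def lwr_predict(X, y, x0, tau):
    # X: (n, d) samples, y: (n,) targets, x0: (d,) query point
    Xb = np.c_[np.ones(len(X)), X]     # add a bias column to the samples
    x0b = np.r_[1, x0]                 # and to the query point
    # Step 4: Gaussian kernel weights around x0, as a diagonal matrix W
    W = np.diag(np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * tau ** 2)))
    # Step 5: beta = (X^T W X)^-1 X^T W y
    beta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    # Step 6: prediction = x0 * beta
    return x0b @ beta

The program below instead implements LOWESS, which applies the same idea with a tricube kernel and robustifying weights over several iterations.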
Program:
# imports needed by lowess (not shown in the original listing)
import numpy as np
from math import ceil
from scipy import linalg

def lowess(x, y, f, iterations):
    n = len(x)
    r = int(ceil(f * n))
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3                 # tricube kernel weights
    yest = np.zeros(n)
    delta = np.ones(n)
    for iteration in range(iterations):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = linalg.solve(A, b)     # weighted least-squares fit at x[i]
            yest[i] = beta[0] + beta[1] * x[i]
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta ** 2) ** 2     # robustifying weights
    return yest
import math
n = 100
x = np.linspace(0, 2 * math.pi, n)
y = np.sin(x) + 0.3 * np.random.randn(n)
f = 0.25
iterations = 3
yest = lowess(x, y, f, iterations)
import matplotlib.pyplot as plt
plt.plot(x, y, "r.")
plt.plot(x, yest, "b-")
Output:
Result:
Thus the program to build regression models such as locally weighted regression has been executed successfully and the necessary graphs were plotted.
6. Build decision trees and random forests
Aim:
To implement the concept of decision trees with suitable dataset from real world
problems using CART algorithm.
Algorithm:
1. Read the Social_Network_Ads dataset and select Age and EstimatedSalary as features.
2. Split the data into training and test sets.
3. Fit a CART decision tree classifier (Gini index) on the training set.
4. Predict the test set, report the accuracy score and plot the decision regions.
5. Export and render the learned tree with Graphviz, before and after pruning.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn import metrics
from six import StringIO
from IPython.display import Image
import pydotplus

data = pd.read_csv('/Users/ganesh/PycharmProjects/DecisionTree/Social_Network_Ads.csv')
data.head()
feature_cols = ['Age', 'EstimatedSalary']
x = data.iloc[:, [2, 3]].values
y = data.iloc[:, 4].values
# NOTE (assumed): the original listing omits the split and the classifier;
# a CART (Gini) decision tree is assumed here.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
classifier = DecisionTreeClassifier(criterion='gini')
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)
print('Accuracy Score:', metrics.accuracy_score(y_test, y_pred))
plt.title("Decision Tree (Test set)")
plt.xlabel("Age")
plt.ylabel("Estimated Salary")
plt.legend()
plt.show()
dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('decisiontree.png')
Image(graph.create_png())
# NOTE (assumed): the original repeats the export for an optimised (pruned)
# tree; the pruned classifier definition itself is omitted from the listing.
dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('opt_decisiontree_gini.png')
Image(graph.create_png())
Output of decision tree without pruning:
Result:
Thus the program to implement the concept of decision trees with a suitable real-world dataset using the CART algorithm has been executed successfully.
7. Build SVM models.
Aim:
To create a machine learning model which classifies the Spam and Ham E-Mails from
a given dataset using Support Vector Machine algorithm.
Algorithm:
1. Import all the necessary libraries.
2. Read the given CSV file, which contains both spam and ham emails.
3. Gather all the words in the dataset and identify the stop words, comparing word-length and stop-word distributions for spam and ham.
4. Create an ML model using the Support Vector Classifier after splitting
the dataset into training and test set.
5. Display the accuracy and f1 score and print the confusion matrix for
the classification of spam and ham.
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import string
from nltk.corpus import stopwords
import os
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from PIL import Image
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_curve, auc
from sklearn import metrics
from sklearn import model_selection
from sklearn import svm
from nltk import word_tokenize
from sklearn.metrics import roc_auc_score
from matplotlib import pyplot
from sklearn.metrics import plot_confusion_matrix
class data_read_write(object):
    def __init__(self, file_link=None):
        # merged the two original __init__ definitions into one
        if file_link is not None:
            self.data_frame = pd.read_csv(file_link)
    def read_csv_file(self, file_link):
        return self.data_frame
    def write_to_csvfile(self, file_link):
        self.data_frame.to_csv(file_link, encoding='utf-8', index=False, header=True)
        return

class generate_word_cloud(data_read_write):
    def __init__(self):
        pass
    def variance_column(self, data):
        return np.var(data)    # fixed: np.variance does not exist
    def word_cloud(self, data_frame_column, output_image_file):
        text = " ".join(review for review in data_frame_column)
        stopwords = set(STOPWORDS)
        stopwords.update(["subject"])
        wordcloud = WordCloud(width=1200, height=800, stopwords=stopwords,
                              max_font_size=50, margin=0,
                              background_color="white").generate(text)
        plt.imshow(wordcloud, interpolation='bilinear')
        plt.axis("off")
        plt.savefig("Distribution.png")
        plt.show()
        wordcloud.to_file(output_image_file)
        return

class data_cleaning(data_read_write):
    def __init__(self):
        pass
    def message_cleaning(self, message):
        # strip punctuation, then drop English stop words
        Test_punc_removed = [char for char in message if char not in string.punctuation]
        Test_punc_removed_join = ''.join(Test_punc_removed)
        Test_punc_removed_join_clean = [word for word in Test_punc_removed_join.split()
                                        if word.lower() not in stopwords.words('english')]
        final_join = ' '.join(Test_punc_removed_join_clean)
        return final_join

class apply_embeddding_and_model(data_read_write):
    def __init__(self):
        pass
data_obj = data_read_write("emails.csv")
data_frame = data_obj.read_csv_file("processed.csv")
data_frame.head()
data_frame.tail()
data_frame.describe()
data_frame.info()
data_frame.groupby('spam').describe()
data_frame['length'] = data_frame['text'].apply(len)
data_frame['length'].max()
sns.set(rc={'figure.figsize': (11.7, 8.27)})
ham_messages_length = data_frame[data_frame['spam'] == 0]
spam_messages_length = data_frame[data_frame['spam'] == 1]
data_frame[data_frame['spam'] == 0].text.values
# NOTE: ham_words_length and spam_words_length are assumed to be computed
# from the message texts in code omitted from this listing.
ax = sns.distplot(ham_words_length, norm_hist=True, bins=30, label='Ham')
ax = sns.distplot(spam_words_length, norm_hist=True, bins=30, label='Spam')
plt.title('Distribution of Number of Words')
plt.xlabel('Number of Words')
plt.legend()
plt.savefig("SVMGraph.png")
plt.show()
def mean_word_length(x):
    word_lengths = np.array([])
    for word in word_tokenize(x):
        word_lengths = np.append(word_lengths, len(word))
    return word_lengths.mean()

ham_meanword_length = data_frame[data_frame['spam'] == 0].text.apply(mean_word_length)
spam_meanword_length = data_frame[data_frame['spam'] == 1].text.apply(mean_word_length)

stop_words = set(stopwords.words('english'))   # assumed: used by the ratio below

def stop_words_ratio(x):
    num_total_words = 0
    num_stop_words = 0
    for word in word_tokenize(x):
        if word in stop_words:
            num_stop_words += 1
        num_total_words += 1
    return num_stop_words / num_total_words
ham = data_frame[data_frame['spam'] == 0]
spam = data_frame[data_frame['spam'] == 1]
spam['length'].plot(bins=60, kind='hist')
ham['length'].plot(bins=60, kind='hist')
data_frame['Ham(0) and Spam(1)'] = data_frame['spam']
print('Spam percentage =', (len(spam) / len(data_frame)) * 100, "%")
print('Ham percentage =', (len(ham) / len(data_frame)) * 100, "%")
sns.countplot(data_frame['Ham(0) and Spam(1)'], label="Count")
data_clean_obj = data_cleaning()
# NOTE: apply_to_column, apply_count_vector and apply_svm are assumed to be
# defined on the classes above in code omitted from this listing (see the
# sketch of apply_svm after the program).
data_frame['clean_text'] = data_clean_obj.apply_to_column(data_frame['text'])
data_frame.head()
data_obj.data_frame.head()
data_obj.write_to_csvfile("processed_file.csv")
cv_object = apply_embeddding_and_model()
spamham_countvectorizer = cv_object.apply_count_vector(data_frame['clean_text'])
X = spamham_countvectorizer
label = data_frame['spam'].values
y = label
cv_object.apply_svm(X, y)
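The apply_svm step is not shown in the listing above. A minimal sketch of what it is assumed to do, a linear SVC trained on the count-vector features (the function body and its parameters here are assumptions), is:

from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

def apply_svm(X, y):
    # hold out 20% of the messages for testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    classifier = svm.SVC(kernel='linear', C=1.0)
    classifier.fit(X_train, y_train)
    y_pred = classifier.predict(X_test)
    print(classification_report(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))
    return classifier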
Output:
test set
Accuracy Score: 0.9895287958115183
F1 Score: 0.9776119402985075
Recall: 0.9739776951672863
Precision: 0.9812734082397003
Normalized confusion matrix
[[0.99429875 0.00570125]
[0.0260223 0.9739777 ]]
Result:
Thus the program to create a machine learning model which classifies the spam and ham e-mails from a given dataset using the Support Vector Machine algorithm has been executed successfully.
8. Implement ensembling techniques.
Aim:
To implement the ensembling technique of Blending with the given Alcohol QCM
Dataset.
Algorithm:
1. Split the dataset into training, validation and test sets.
2. Fit all the base models using the training set.
3. Make predictions on the validation and test sets.
4. Use these predictions as meta-features to train a second-level model on the validation set.
5. Use the second-level model to make the final predictions on the test meta-features.
Program:
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)   # fixed: dropping a column needs axis=1

# 70% train, 20% validation, 10% test
train_ratio = 0.70
validation_ratio = 0.20
test_ratio = 0.10
x_train, x_test, y_train, y_test = train_test_split(train, target, test_size=1 - train_ratio)
x_val, x_test, y_val, y_test = train_test_split(
    x_test, y_test, test_size=test_ratio / (test_ratio + validation_ratio))

model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

model_1.fit(x_train, y_train)
val_pred_1 = pd.DataFrame(model_1.predict(x_val))
test_pred_1 = pd.DataFrame(model_1.predict(x_test))

model_2.fit(x_train, y_train)
val_pred_2 = pd.DataFrame(model_2.predict(x_val))
test_pred_2 = pd.DataFrame(model_2.predict(x_test))

model_3.fit(x_train, y_train)
val_pred_3 = pd.DataFrame(model_3.predict(x_val))    # fixed: was model_1
test_pred_3 = pd.DataFrame(model_3.predict(x_test))  # fixed: was model_1

# meta-features = original features + base-model predictions
df_val = pd.concat([x_val, val_pred_1, val_pred_2, val_pred_3], axis=1)
df_test = pd.concat([x_test, test_pred_1, test_pred_2, test_pred_3], axis=1)

final_model = LinearRegression()
final_model.fit(df_val, y_val)
final_pred = final_model.predict(df_test)
print(mean_squared_error(y_test, final_pred))        # fixed: was pred_final
Output:
4790
Result:
Thus the program to implement the ensembling technique of blending with the given Alcohol QCM dataset has been executed successfully and the output was verified.
9. Implement clustering algorithms
Aim:
To implement the k-Nearest Neighbour algorithm for clustering the Iris dataset.
Algorithm:
1. Load the Iris dataset and separate the features and target labels.
2. Split the data into training and test sets.
3. Fit a k-Nearest Neighbour classifier on the training set.
4. Predict the test set and print the classification report.
Program:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

iris = datasets.load_iris()
iris_data = iris.data
iris_labels = iris.target
# NOTE (assumed): the original listing omits the split and the classifier.
x_train, x_test, y_train, y_test = train_test_split(iris_data, iris_labels, test_size=0.2)
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)
print("accuracy is")
print(classification_report(y_test, y_pred))
Output:
accuracy is
              precision    recall  f1-score   support
    accuracy                           0.97        30
   macro avg       0.96      0.98     0.97        30
weighted avg       0.97      0.97     0.97        30
Result:
Thus the program to implement the k-Nearest Neighbour algorithm for clustering the Iris dataset has been executed successfully and the output was verified.
10. Implement EM for Bayesian networks.
Aim:
To implement the EM algorithm for clustering networks using the given dataset.
Algorithm:
1. Load the Iris dataset into a feature DataFrame X and target y.
2. Plot the real classes against petal length and width.
3. Cluster X with KMeans (3 clusters) and plot the resulting labels.
4. Standardize X, fit a 3-component Gaussian Mixture Model (trained with the EM algorithm) and plot its cluster assignments.
5. Compare the three plots.
Program:
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn import preprocessing
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

dataset = load_iris()
# print(dataset)
X = pd.DataFrame(dataset.data)
X.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']
y = pd.DataFrame(dataset.target)
y.columns = ['Targets']
# print(X)
plt.figure(figsize=(14,7))
colormap=np.array(['red','lime','black'])
# REAL PLOT
plt.subplot(1,3,1)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y.Targets],s=40)
plt.title('Real')
# K-PLOT
plt.subplot(1,3,2)
model=KMeans(n_clusters=3)
model.fit(X)
predY=np.choose(model.labels_,[0,1,2]).astype(np.int64)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[predY],s=40)
plt.title('KMeans')
# GMM PLOT
scaler=preprocessing.StandardScaler()
scaler.fit(X)
xsa=scaler.transform(X)
xs=pd.DataFrame(xsa,columns=X.columns)
gmm=GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm=gmm.predict(xs)
plt.subplot(1,3,3)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y_cluster_gmm],s=40)
plt.title('GMM Classification')
plt.show()
Output:
Result:
Thus the program to implement the EM algorithm for clustering networks using the given dataset has been executed successfully and the output was verified.
11. Build simple NN models.
Aim:
To build a simple neural network (multilayer perceptron) model that recognizes handwritten characters from the EMNIST dataset.
Algorithm:
1. Image Acquisition: The first step is to acquire images of paper documents with
the help of optical scanners. This way, an original image can be captured and
stored.
2. Pre-processing: The noise level on an image should be optimized and areas
outside the text removed. Pre-processing is especially vital for recognizing
handwritten documents that are more sensitive to noise.
3. Segmentation: The process of segmentation is aimed at grouping characters into
meaningful chunks. There can be predefined classes for characters. So, images can
be scanned for patterns that match the classes.
4. Feature Extraction: This step means splitting the input data into a set of
features, that is, to find essential characteristics that make one or another
pattern recognizable.
5. Training an MLP neural network using the following steps:
1. Starting with the input layer, propagate data forward to the output
layer. This step is the forward propagation.
2. Based on the output, calculate the error (the difference between
the predicted and known outcome). The error needs to be minimized.
3. Backpropagate the error. Find its derivative with respect to each
weight in the network, and update the model.
6. Post-processing: This stage is the process of refinement, as an OCR model can require some corrections. However, it isn't possible to achieve 100% recognition accuracy; the identification of characters heavily depends on the context.
Program:
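The original listing shows only the compile step. A minimal sketch of the network implied by the layer summary printed in the Output below (the layer sizes are read from that summary; the activation, dropout rate, optimizer and the final classification layer are assumptions, since the printed summary appears truncated) is:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.optimizers import RMSprop

NB_CLASSES = 47               # assumed: EMNIST 'balanced' split has 47 classes
DROPOUT = 0.3                 # assumed dropout rate
OPTIMIZER = RMSprop()         # assumed optimizer

model = Sequential()
model.add(Dense(512, input_shape=(28 * 28,)))   # dense: 784*512 + 512 = 401,920 params
model.add(Activation('relu'))
model.add(Dropout(DROPOUT))
for _ in range(3):            # dense_1..dense_3 per the printed summary
    model.add(Dense(256))
    model.add(Activation('relu'))
    model.add(Dropout(DROPOUT))
model.add(Dense(256))         # dense_4: 256*256 + 256 = 65,792 params
model.add(Activation('relu'))
# assumed output layer, apparently truncated from the printed summary:
model.add(Dense(NB_CLASSES, activation='softmax'))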
model.compile(loss='categorical_crossentropy',
optimizer=OPTIMIZER,
metrics=['accuracy'])
Output:
['balanced', 'byclass', 'bymerge', 'digits', 'letters', 'mnist']
train shape: (697932, 28, 28)
train labels: (697932,)
test shape: (116323, 28, 28)
test labels: (116323,)
697932 train samples
116323 test samples
Model: "sequential"
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 512) 401920
activation (Activation) (None, 512) 0
dropout (Dropout) (None, 512) 0
dense_1 (Dense) (None, 256) 131328
activation_1 (Activation) (None, 256) 0
dropout_1 (Dropout) (None, 256) 0
dense_2 (Dense) (None, 256) 65792
activation_2 (Activation) (None, 256) 0
dropout_2 (Dropout) (None, 256) 0
dense_3 (Dense) (None, 256) 65792
activation_3 (Activation) (None, 256) 0
dropout_3 (Dropout) (None, 256) 0
dense_4 (Dense) (None, 256) 65792
activation_4 (Activation) (None, 256) 0
=================================================================
Total params: 730,624
Trainable params: 730,624
Non-trainable params: 0
Result:
Thus the program to implement the neural network model for the given dataset has been executed successfully and the output was verified.
12. Build deep learning NN models.
Aim:
To implement and build a Convolutional neural network model which predicts the
age and gender of a person using the given pre-trained models.
Algorithm:
1. Load the pre-trained face detection, age and gender models with OpenCV's DNN module.
2. Detect the faces in the input frame and obtain their bounding boxes.
3. Crop each face (with padding) and convert it to a 227x227 blob.
4. Forward the blob through the gender network and the age network, taking the most probable class from each.
5. Annotate the frame with the predicted age and gender labels.
Program:
import cv2 as cv
import math
import time
from google.colab.patches import cv2_imshow
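# NOTE (assumed; omitted from the original listing): the pre-trained models,
# constants and helper referenced below. The file names, class lists and mean
# values follow the common OpenCV age/gender tutorial and are assumptions here;
# getFaceBox (returning the annotated frame and face bounding boxes) is also
# defined in omitted code.
faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"
MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']
padding = 20
faceNet = cv.dnn.readNet(faceModel, faceProto)
ageNet = cv.dnn.readNet(ageModel, ageProto)
genderNet = cv.dnn.readNet(genderModel, genderProto)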
def age_gender_detector(frame):
    # Read frame
    t = time.time()
    frameFace, bboxes = getFaceBox(faceNet, frame)
    for bbox in bboxes:
        # print(bbox)
        face = frame[max(0, bbox[1]-padding):min(bbox[3]+padding, frame.shape[0]-1),
                     max(0, bbox[0]-padding):min(bbox[2]+padding, frame.shape[1]-1)]
        blob = cv.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        # print("Gender Output : {}".format(genderPreds))
        print("Gender : {}, conf = {:.3f}".format(gender, genderPreds[0].max()))
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print("Age Output : {}".format(agePreds))
        print("Age : {}, conf = {:.3f}".format(age, agePreds[0].max()))
        label = "{},{}".format(gender, age)
        cv.putText(frameFace, label, (bbox[0], bbox[1]-10), cv.FONT_HERSHEY_SIMPLEX,
                   0.8, (0, 255, 255), 2, cv.LINE_AA)
    return frameFace
Result:
Thus the program to implement and build a convolutional neural network model which predicts the age and gender of a person using the given pre-trained models has been executed successfully and the output was verified.