AIml_lab_Updated
INFORMATION TECHNOLOGY
PO4 - Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
PO5 - Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.
PO6 - The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.
PO7 - Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
PO8 - Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9 - Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings.
PO11 - Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
PO12 - Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
INFORMATION TECHNOLOGY
PEO1 - To ensure the graduates to be proficient in utilizing the fundamental knowledge of basic
Sciences, Mathematics and Information Technology for the application in various
streams of Engineering and Technology.
PEO2 - To enrich the graduates with the core competencies necessary for applying knowledge
of Computers and Telecommunications equipment to store, retrieve, transmit,
manipulate and analyze data in the context of business enterprise.
PEO3 - To enable the graduates to improve logical thinking, pursue lifelong learning and
understand technical issues related to computing systems and optimal design solutions.
PEO4 - To enable the graduates in gaining employment in industry and stabilize themselves as
competent professionals in applying their technical skills to real-time problems and
meet the diversified needs of industry, academia and research.
To ensure graduates:
PSO1 - Have proficiency in programming skills to design, develop and apply appropriate
techniques, to solve complex engineering problems.
PSO2 - Have knowledge to build, automate and manage business solutions using cutting edge
technologies.
Ex No:1. Implementation of Uninformed search algorithms (BFS and DFS)
Aim:
To implement uninformed search algorithms such as BFS and DFS.
Algorithm:
Step 1: Initialize an empty list called 'visited' to keep track of the nodes visited during the
traversal.
Step 2: Initialize an empty queue called 'queue' to keep track of the nodes to be traversed in
the future.
Step 3: Add the starting node to the 'visited' list and the 'queue'.
Step 4: While the 'queue' is not empty, do the following:
a. Dequeue the first node from the 'queue' and store it in a variable called 'current'.
b. Print 'current'.
c. For each of the neighbours of 'current' that have not been visited yet, do the following:
i. Mark the neighbour as visited and add it to the 'queue'.
Step 5: When all the nodes reachable from the starting node have been visited, terminate
the algorithm.
Program:
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = []   # list of nodes already visited
queue = []     # FIFO queue of nodes still to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)          # dequeue the next node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

bfs(visited, graph, '5')          # driver call starting from node '5'
Output:
5 3 7 2 4 8
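A note on efficiency, outside the original listing: queue.pop(0) shifts every remaining element and is O(n) on a Python list, so collections.deque is the idiomatic queue for larger graphs. A minimal variant of the same traversal, assuming the graph dictionary defined above:

from collections import deque

def bfs_deque(graph, start):
    visited = [start]
    q = deque([start])            # popleft() is O(1), unlike list.pop(0)
    while q:
        m = q.popleft()
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                q.append(neighbour)

bfs_deque(graph, '5')             # prints the same order: 5 3 7 2 4 8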
Algorithm:
Step 1: Initialize an empty set called 'visited' to keep track of the nodes visited during the
traversal.
Step 2: Define a DFS function that takes the current node, the graph, and the 'visited' set as
input.
Step 3: If the current node is not in the 'visited' set, do the following:
a. Print the current node.
b. Add the current node to the 'visited' set.
c. For each of the neighbours of the current node, call the DFS function recursively with
the neighbour as the current node.
Step 4: When all the nodes reachable from the starting node have been visited, terminate
the algorithm.
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = set()   # set of nodes already visited

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

dfs(visited, graph, '5')   # driver call starting from node '5'
Output:
5
3
2
4
8
7
Result:
Thus the uninformed search algorithms such as BFS and DFS have been executed
successfully and the output got verified.
Ex No:2 Implementation of Informed search algorithm (A*)
Aim:
To implement the informed search algorithm A* and find the optimal path from a start
node to a destination node in a weighted graph.
Algorithm:
1. Initialize the distances dictionary with float('inf') for all vertices in the graph except
for the start vertex which is set to 0.
2. Initialize the parent dictionary with None for all vertices in the graph.
3. Initialize an empty set for visited vertices.
4. Initialize a priority queue (pq) with a tuple containing the sum of the heuristic value
and the distance from start to the current vertex, the distance from start to the current
vertex, and the current vertex.
5. While pq is not empty, do the following:
a. Dequeue the vertex with the smallest f-distance (sum of the heuristic value
and the distance from start to the current vertex).
b. If the current vertex is the destination vertex, return distances and parent.
c. If the current vertex has not been visited, add it to the visited set.
d. For each neighbor of the current vertex, do the following:
i. Calculate the distance from start to the neighbor (g) as the sum of the distance
from start to the current vertex and the edge weight between the current vertex and
the neighbor.
ii. Calculate the f-distance (f = g + h) for the neighbor.
iii. If the f-distance for the neighbor is less than its current distance in the
distances dictionary, update the distances dictionary with the new distance and the
parent dictionary with the current vertex as the parent of the neighbor.
iv. Enqueue the neighbor with its f-distance, distance from start to neighbor, and
the neighbor itself into the priority queue.
6. Return distances and parent.
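To make step 5 concrete with the graph and heuristic used below: from the start A, the neighbour B has g = 5 (the A-B edge weight) and h(B) = 17, so f = g + h = 22, which is exactly the value recorded for B in the distances output.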
Program:
import heapq

def a_star(graph, start, dest, heuristic):
    distances = {vertex: float('inf') for vertex in graph}
    distances[start] = 0
    parent = {vertex: None for vertex in graph}
    visited = set()
    pq = [(0 + heuristic[start], 0, start)]   # entries are (f = g + h, g, vertex)
    while pq:
        curr_f, curr_dist, curr_vert = heapq.heappop(pq)
        if curr_vert not in visited:
            visited.add(curr_vert)
            for nbor, weight in graph[curr_vert].items():
                g = curr_dist + weight           # distance from start to the neighbour
                f = g + heuristic[nbor]          # f-distance of the neighbour
                if f < distances[nbor]:
                    distances[nbor] = f
                    parent[nbor] = curr_vert
                    heapq.heappush(pq, (f, g, nbor))
                if nbor == dest:
                    # we found a path based on heuristic
                    return distances, parent
    return distances, parent

def generate_path_from_parents(parent, start, dest):
    path = [dest]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return '->'.join(path[::-1])
graph = {
'A': {'B':5, 'C':5},
'B': {'A':5, 'C':4, 'D':3 },
'C': {'A':5, 'B':4, 'D':7, 'E':7, 'H':8},
'D': {'B':3, 'C':7, 'H':11, 'K':16, 'L':13, 'M':14},
'E': {'C':7, 'F':4, 'H':5},
'F': {'E':4, 'G':9},
'G': {'F':9, 'N':12},
'H': {'E':5, 'C':8, 'D':11, 'I':3 },
'I': {'H':3, 'J':4},
'J': {'I':4, 'N':3},
'K': {'D':16, 'L':5, 'P':4, 'N':7},
'L': {'D':13, 'M':9, 'O':4, 'K':5},
'M': {'D':14, 'L':9, 'O':5},
'N': {'G':12, 'J':3, 'P':7},
'O': {'M':5, 'L':4},
'P': {'K':4, 'J':8, 'N':7},
}
heuristic = {
'A': 16,
'B': 17,
'C': 13,
'D': 16,
'E': 16,
'F': 20,
'G': 17,
'H': 11,
'I': 10,
'J': 8,
'K': 4,
'L': 7,
'M': 10,
'N': 7,
'O': 5,
'P': 0
}
start = 'A'
dest = 'P'
distances, parent = a_star(graph, start, dest, heuristic)
print('distances => ', distances)
print('parent => ', parent)
print('optimal path => ', generate_path_from_parents(parent,start,dest))
Output:
distances => {'A': 0, 'B': 22, 'C': 18, 'D': 24, 'E': 28, 'F': 36, 'G': inf, 'H': 24, 'I': 26, 'J': 28, 'K': 28,
'L': 28, 'M': 32, 'N': 30, 'O': 30, 'P': 28}
parent => {'A': None, 'B': 'A', 'C': 'A', 'D': 'B', 'E': 'C', 'F': 'E', 'G': None, 'H': 'C', 'I': 'H', 'J': 'I', 'K':
'D', 'L': 'D', 'M': 'D', 'N': 'J', 'O': 'L', 'P': 'K'}
optimal path => A->B->D->K->P
Result:
Thus the program to implement the informed search algorithm has been executed
successfully and the output got verified.
Ex No:3. Implement Naïve Bayes models
Aim:
To implement a Naïve Bayes classifier on the given dataset and measure its accuracy.
Algorithm:
1. Load the dataset and split it into input features X and output labels y.
2. Split the data into training and test sets (75% / 25%).
3. Standardize the input features using StandardScaler.
4. Train a Bernoulli Naïve Bayes classifier on the training data.
5. Predict the labels of the test data and print the accuracy score.
6. Train a Gaussian Naïve Bayes classifier on the same training data for comparison.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
dataset = pd.read_csv('NaiveBayes.csv')
# split the data into inputs and outputs
X = dataset.iloc[:, [0,1]].values
y = dataset.iloc[:, 2].values
from sklearn.model_selection import train_test_split
# assign test data size 25%
X_train, X_test, y_train, y_test =train_test_split(X,y,test_size= 0.25, random_state=0)
from sklearn.preprocessing import StandardScaler
# scaling the input data
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)   # reuse the scaler fitted on the training data
from sklearn.naive_bayes import BernoulliNB
# initializing the NB classifier
classifier = BernoulliNB()
# training the model
classifier.fit(X_train, y_train)
# testing the model
y_pred = classifier.predict(X_test)
from sklearn.metrics import accuracy_score
# printing the accuracy of the model
print(accuracy_score(y_pred, y_test))
from sklearn.naive_bayes import GaussianNB
# create a Gaussian classifier
classifier1 = GaussianNB()
# training the model
classifier1.fit(X_train, y_train)
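A short optional extension, not part of the original listing, that inspects the Bernoulli model beyond a single accuracy number; it reuses the y_test and y_pred variables defined above:

from sklearn.metrics import confusion_matrix
# rows are the true classes, columns are the predicted classes
print(confusion_matrix(y_test, y_pred))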
Result:
Thus the program with the Naïve Bayes classifier algorithm has been executed
successfully and the output got verified.
Ex No:4. Implement Bayesian Networks
Aim:
To construct a Bayesian network for the classic burglary alarm problem and verify its
structure, conditional probability distributions and implied independencies.
Algorithm:
1. Define the network structure as a list of directed edges (Burglary and Earthquake point to Alarm; Alarm points to JohnCalls and MaryCalls).
2. Define a TabularCPD (conditional probability table) for each variable.
3. Attach the CPDs to the model and check that the model is consistent.
4. Inspect the nodes, edges and independencies of the network.
Program:
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# network structure: edges point from cause to effect
alarm_model = BayesianNetwork(
    [
        ("Burglary", "Alarm"),
        ("Earthquake", "Alarm"),
        ("Alarm", "JohnCalls"),
        ("Alarm", "MaryCalls"),
    ]
)
cpd_burglary = TabularCPD(
    variable="Burglary", variable_card=2, values=[[0.999], [0.001]]
)
cpd_earthquake = TabularCPD(
    variable="Earthquake", variable_card=2, values=[[0.998], [0.002]]
)
cpd_alarm = TabularCPD(
    variable="Alarm",
    variable_card=2,
    values=[[0.999, 0.71, 0.06, 0.05], [0.001, 0.29, 0.94, 0.95]],
    evidence=["Burglary", "Earthquake"],
    evidence_card=[2, 2],
)
cpd_johncalls = TabularCPD(
    variable="JohnCalls",
    variable_card=2,
    values=[[0.95, 0.1], [0.05, 0.9]],
    evidence=["Alarm"],
    evidence_card=[2],
)
cpd_marycalls = TabularCPD(
    variable="MaryCalls",
    variable_card=2,
    values=[[0.1, 0.7], [0.9, 0.3]],
    evidence=["Alarm"],
    evidence_card=[2],
)

# attach the CPDs and verify the model; these calls produce the output below
alarm_model.add_cpds(
    cpd_burglary, cpd_earthquake, cpd_alarm, cpd_johncalls, cpd_marycalls
)
print(alarm_model.check_model())
print(alarm_model.nodes())
print(alarm_model.edges())
print(alarm_model.get_independencies())
Output:
True
NodeView(('Burglary', 'Alarm', 'Earthquake', 'JohnCalls', 'MaryCalls'))
OutEdgeView([('Burglary', 'Alarm'), ('Alarm', 'JohnCalls'), ('Alarm', 'MaryCalls'),
('Earthquake', 'Alarm')])
(Burglary ⟂ Earthquake)
(MaryCalls ⟂ Earthquake, Burglary, JohnCalls | Alarm)
(MaryCalls ⟂ Burglary, JohnCalls | Earthquake, Alarm)
(MaryCalls ⟂ Earthquake, JohnCalls | Burglary, Alarm)
(MaryCalls ⟂ Earthquake, Burglary | JohnCalls, Alarm)
(MaryCalls ⟂ JohnCalls | Earthquake, Burglary, Alarm)
(MaryCalls ⟂ Burglary | Earthquake, JohnCalls, Alarm)
(MaryCalls ⟂ Earthquake | Burglary, JohnCalls, Alarm)
(JohnCalls ⟂ Earthquake, Burglary, MaryCalls | Alarm)
(JohnCalls ⟂ Burglary, MaryCalls | Earthquake, Alarm)
(JohnCalls ⟂ Earthquake, MaryCalls | Burglary, Alarm)
(JohnCalls ⟂ Earthquake, Burglary | MaryCalls, Alarm)
(JohnCalls ⟂ MaryCalls | Earthquake, Burglary, Alarm)
(JohnCalls ⟂ Burglary | Earthquake, MaryCalls, Alarm)
(JohnCalls ⟂ Earthquake | Burglary, MaryCalls, Alarm)
(Earthquake ⟂ Burglary)
(Earthquake ⟂ MaryCalls, JohnCalls | Alarm)
(Earthquake ⟂ MaryCalls, JohnCalls | Burglary, Alarm)
(Earthquake ⟂ JohnCalls | MaryCalls, Alarm)
(Earthquake ⟂ MaryCalls | JohnCalls, Alarm)
(Earthquake ⟂ JohnCalls | Burglary, MaryCalls, Alarm)
(Earthquake ⟂ MaryCalls | Burglary, JohnCalls, Alarm)
(Burglary ⟂ Earthquake)
(Burglary ⟂ MaryCalls, JohnCalls | Alarm)
(Burglary ⟂ MaryCalls, JohnCalls | Earthquake, Alarm)
(Burglary ⟂ JohnCalls | MaryCalls, Alarm)
(Burglary ⟂ MaryCalls | JohnCalls, Alarm)
(Burglary ⟂ JohnCalls | Earthquake, MaryCalls, Alarm)
(Burglary ⟂ MaryCalls | Earthquake, JohnCalls, Alarm)
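As an optional extension beyond the original listing, diagnosis on this network can be sketched with pgmpy's exact inference. A minimal example, assuming the alarm_model assembled above (states are indexed 0/1 as in the CPDs):

from pgmpy.inference import VariableElimination

# exact inference by variable elimination on the model above
infer = VariableElimination(alarm_model)

# posterior probability of a burglary given that both John and Mary called
posterior = infer.query(variables=["Burglary"], evidence={"JohnCalls": 1, "MaryCalls": 1})
print(posterior)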
Result:
Thus the program to implement a Bayesian network has been executed successfully
and the output got verified.
Ex No: 5. Build Regression models
Aim:
To build regression models such as locally weighted linear regression and plot
the necessary graphs.
Algorithm:
1. Read the given data sample to X and the curve (linear or non-linear) to Y.
2. Set the value of the smoothening (free) parameter, say τ.
3. Set the point of interest x0, which is a subset of X.
4. Determine the weight matrix using w(x, x0) = exp(−(x − x0)² / (2τ²)).
5. Determine the value of the model parameter β using β = (XᵀWX)⁻¹ XᵀWy.
6. Prediction = x0 · β.
Program:
import math
import numpy as np
import matplotlib.pyplot as plt

def lowess(x, y, f, iterations):
    # non-parametric locally weighted regression with a tricube kernel
    n = len(x)
    r = int(math.ceil(f * n))                 # number of neighbouring points used per fit
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3                     # tricube weights
    yest = np.zeros(n)
    delta = np.ones(n)                        # robustness weights, updated each iteration
    for iteration in range(iterations):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = np.linalg.solve(A, b)      # solve the weighted least squares system
            yest[i] = beta[0] + beta[1] * x[i]
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta ** 2) ** 2
    return yest

n = 100
x = np.linspace(0, 2 * math.pi, n)
y = np.sin(x) + 0.3 * np.random.randn(n)      # noisy sine curve
f = 0.25
iterations = 3
yest = lowess(x, y, f, iterations)
plt.plot(x, y, "r.", label="noisy data")
plt.plot(x, yest, "b-", label="lowess fit")
plt.legend()
plt.show()
Output:
Result:
Thus the program to implement the non-parametric locally weighted regression
algorithm, fitting data points with a graph visualization, has been executed successfully.
Ex No:6. Build decision trees and random forests
Aim:
To implement the concept of decision trees with a suitable dataset from real-world
problems using the CART algorithm.
Algorithm:
1. Load the Social_Network_Ads dataset and separate the input features and the target.
2. Split the data into training and test sets (75% / 25%).
3. Train a decision tree classifier on the training data.
4. Plot the decision regions and the data points.
5. Export and render the unpruned tree with graphviz.
6. Retrain the tree with the gini criterion and a limited depth, and render the pruned tree.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from six import StringIO
from IPython.display import Image
import pydotplus

data = pd.read_csv('Social_Network_Ads.csv')
data.head()
# assuming the standard columns of this dataset: Age, EstimatedSalary, Purchased
feature_cols = ['Age', 'EstimatedSalary']
x = data[feature_cols].values
y = data['Purchased'].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

classifier = DecisionTreeClassifier()
classifier.fit(x_train, y_train)

# plot the decision regions of the fitted tree and the test points
x_set, y_set = x_test, y_test
x1, x2 = np.meshgrid(np.arange(x_set[:, 0].min() - 1, x_set[:, 0].max() + 1, 0.25),
                     np.arange(x_set[:, 1].min() - 1000, x_set[:, 1].max() + 1000, 250))
plt.contourf(x1, x2, classifier.predict(np.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.4, cmap=ListedColormap(("red", "green")))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(("red", "green"))(i), label=j)
plt.legend()
plt.show()

# render the unpruned tree
dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('decisiontree.png')
Image(graph.create_png())

# prune by limiting the depth and using the gini criterion
classifier = DecisionTreeClassifier(criterion="gini", max_depth=3)
classifier.fit(x_train, y_train)
dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('opt_decisiontree_gini.png')
Image(graph.create_png())
Output of decision tree without pruning:
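The exercise title also calls for random forests, which the listing above does not cover. A minimal sketch under the same train/test split, reusing x_train, x_test, y_train and y_test from above (RandomForestClassifier is scikit-learn's bagged ensemble of CART trees):

from sklearn.ensemble import RandomForestClassifier

# an ensemble of 100 trees trained on bootstrap samples of the training data
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(x_train, y_train)
print('Random forest accuracy on the test data:', rf.score(x_test, y_test))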
Result:
Thus the program to implement the concept of decision trees with a suitable dataset
from real-world problems using the CART algorithm has been executed successfully.
Ex No:7. Build SVM models
Aim:
To create a machine learning model using Support Vector Machine algorithm.
Algorithm:
Step 1: Import the required libraries (pandas, numpy, matplotlib and scikit-learn).
Step 2: Load the iris dataset using the datasets.load_iris() function and store the data and
target values in variables X and y respectively.
Step 3: Create a pandas dataframe from the iris data using iris_data.data[:, [2, 3]] and
column names as iris_data.feature_names[2:].
Step 4: Split the data into training and test sets using train_test_split
Step 5: Print the number of samples in the training and test sets
Step 6: Define the markers, colors, and colormap to be used for plotting the data.
Step 7: Plot the data using a scatter plot by iterating through the unique labels and plotting
the points with the corresponding color and marker.
Step 8: Standardize the training and test data using StandardScaler.
Step 9: Train the SVM model using the standardized training data and the SVC() function.
Store the trained model in SVM.
Step 10: Print the accuracy of the SVM model on the training and test data using
SVM.score(X_train_standard, y_train) and SVM.score(X_test_standard, y_test)
respectively.
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
iris_data = datasets.load_iris()
X = iris_data.data[:, [2, 3]]
y = iris_data.target
iris_dataframe = pd.DataFrame(iris_data.data[:, [2, 3]],
columns=iris_data.feature_names[2:])
print(iris_dataframe.head())
print('\n' + 'Unique Labels contained in this data are ' + str(np.unique(y)))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print('The training set contains {} samples and the test set contains {}
samples'.format(X_train.shape[0], X_test.shape[0]))
markers = ('x', 's', 'o')
colors = ('red', 'blue', 'green')
cmap = ListedColormap(colors[:len(np.unique(y_test))])
for idx, cl in enumerate(np.unique(y)):
    plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
                c=cmap(idx), marker=markers[idx], label=cl)
standard_scaler = StandardScaler()
standard_scaler.fit(X_train)
X_train_standard = standard_scaler.transform(X_train)
X_test_standard = standard_scaler.transform(X_test)
print('The first five rows after standardisation look like this:\n')
print(pd.DataFrame(X_train_standard, columns=iris_dataframe.columns).head())
SVM = SVC(kernel='rbf', random_state=0, gamma=.10, C=1.0)
SVM.fit(X_train_standard, y_train)
print('Accuracy of our SVM model on the training data is',(SVM.score(X_train_standard,
y_train)))
print('Accuracy of our SVM model on the test data is',(SVM.score(X_test_standard, y_test)))
Output:
petal length (cm) petal width (cm)
0 1.4 0.2
1 1.4 0.2
2 1.3 0.2
3 1.5 0.2
4 1.4 0.2
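A quick usage check, not part of the original listing: classifying a single new measurement with the fitted model. The sample values here are made up for illustration (petal length 4.5 cm, petal width 1.5 cm):

# the new sample must go through the same standardisation as the training data
sample = standard_scaler.transform([[4.5, 1.5]])
print('Predicted class label:', SVM.predict(sample)[0])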
Result:
Thus the machine learning model was created using the Support Vector Machine
algorithm.
Ex No:8. Implement ensembling techniques
Aim:
To implement the ensembling technique of blending using the given Alcohol QCM dataset.
Algorithm:
1. Split the training dataset into train, validation and test datasets.
2. Fit all the base models using the train dataset.
3. Make predictions on the validation and test datasets.
4. These predictions are used as features (meta-features) to build a second-level model.
5. The second-level model is used to make predictions on the test meta-features.
Program:
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)

# 70% train, 20% validation, 10% test
train_ratio = 0.70
validation_ratio = 0.20
test_ratio = 0.10
x_train, x_test, y_train, y_test = train_test_split(train, target, test_size=1 - train_ratio)
x_val, x_test, y_val, y_test = train_test_split(
    x_test, y_test, test_size=test_ratio / (test_ratio + validation_ratio))

# base models
model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

model_1.fit(x_train, y_train)
val_pred_1 = pd.DataFrame(model_1.predict(x_val))
test_pred_1 = pd.DataFrame(model_1.predict(x_test))

model_2.fit(x_train, y_train)
val_pred_2 = pd.DataFrame(model_2.predict(x_val))
test_pred_2 = pd.DataFrame(model_2.predict(x_test))

model_3.fit(x_train, y_train)
val_pred_3 = pd.DataFrame(model_3.predict(x_val))   # use model_3 here, not model_1
test_pred_3 = pd.DataFrame(model_3.predict(x_test))

# meta-features: original features plus the base-model predictions
# (reset_index aligns the original rows with the freshly indexed predictions)
df_val = pd.concat([x_val.reset_index(drop=True), val_pred_1, val_pred_2, val_pred_3], axis=1)
df_test = pd.concat([x_test.reset_index(drop=True), test_pred_1, test_pred_2, test_pred_3], axis=1)

# second-level model trained on the validation meta-features
final_model = LinearRegression()
final_model.fit(df_val, y_val)
final_pred = final_model.predict(df_test)
print(mean_squared_error(y_test, final_pred))
Output:
4790
Result:
Thus the program to implement the ensembling technique of blending with the given
Alcohol QCM dataset has been executed successfully and the output got
verified.
Ex No:9. Implement clustering algorithms
Aim:
To implement the K-Means clustering algorithm on the Iris dataset and visualize the clusters.
Algorithm:
1. Load the Iris dataset and store the features in x and the targets in y.
2. Plot Sepal Length vs Sepal Width and Petal Length vs Petal Width, coloured by the true class.
3. Fit a K-Means model with n_clusters=3 on the features.
4. Print the coordinates of the cluster centers.
Program:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import sklearn.metrics as sm
iris = datasets.load_iris()
x = pd.DataFrame(iris.data, columns=['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width'])
y = pd.DataFrame(iris.target, columns=['Target'])
plt.figure(figsize=(12,3))
colors = np.array(['red', 'green', 'blue'])
iris_targets_legend = np.array(iris.target_names)
red_patch = mpatches.Patch(color='red', label='Setosa')
green_patch = mpatches.Patch(color='green', label='Versicolor')
blue_patch = mpatches.Patch(color='blue', label='Virginica')
plt.subplot(1, 2, 1)
plt.scatter(x['Sepal Length'], x['Sepal Width'], c=colors[y['Target']])
plt.title('Sepal Length vs Sepal Width')
plt.legend(handles=[red_patch, green_patch, blue_patch])
plt.subplot(1,2,2)
plt.scatter(x['Petal Length'], x['Petal Width'], c= colors[y['Target']])
plt.title('Petal Length vs Petal Width')
plt.legend(handles=[red_patch, green_patch, blue_patch])
iris_k_mean_model = KMeans(n_clusters=3)
iris_k_mean_model.fit(x)
print (iris_k_mean_model.cluster_centers_)
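sklearn.metrics is imported as sm above but never used; an optional line that puts it to work by tabulating cluster assignments against the true classes. Cluster numbering is arbitrary, so the confusion matrix should be read up to a permutation of its columns:

# rows are true classes, columns are cluster labels
print(sm.confusion_matrix(y['Target'], iris_k_mean_model.labels_))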
Result:
Thus the program to implement the K-Means algorithm for clustering the Iris
dataset has been executed successfully and the output got verified.
EX NO:10. Implement EM for Bayesian networks
Aim:
To implement the EM algorithm, via a Gaussian mixture model, for clustering the given
dataset and to compare the result with K-Means.
Algorithm:
1. Load the Iris dataset into a DataFrame X of features and a DataFrame y of targets.
2. Plot the true classes using petal length vs petal width.
3. Fit a K-Means model with three clusters and plot its assignments.
4. Standardize the features and fit a three-component Gaussian mixture model (EM).
5. Plot the GMM cluster assignments.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, preprocessing
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
y = pd.DataFrame(iris.target, columns=['Targets'])
colormap = np.array(['red', 'lime', 'black'])
plt.figure(figsize=(14, 7))

# REAL PLOT
plt.subplot(1,3,1)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y.Targets],s=40)
plt.title('Real')
# K-PLOT
plt.subplot(1,3,2)
model=KMeans(n_clusters=3)
model.fit(X)
predY=np.choose(model.labels_,[0,1,2]).astype(np.int64)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[predY],s=40)
plt.title('KMeans')
# GMM PLOT
scaler=preprocessing.StandardScaler()
scaler.fit(X)
xsa=scaler.transform(X)
xs=pd.DataFrame(xsa,columns=X.columns)
gmm=GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm=gmm.predict(xs)
plt.subplot(1,3,3)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y_cluster_gmm],s=40)
plt.title('GMM Classification')
Output:
Text(0.5, 1.0, 'GMM Classification')
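An optional quantitative comparison, not in the original listing: because cluster numbers are arbitrary, the adjusted Rand index scores both clusterings against the true classes in a permutation-invariant way, reusing model and y_cluster_gmm from above:

from sklearn.metrics import adjusted_rand_score

print('KMeans ARI:', adjusted_rand_score(iris.target, model.labels_))
print('GMM ARI:', adjusted_rand_score(iris.target, y_cluster_gmm))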
Result:
Thus the program for Expectation Maximization Algorithm was executed and verified.
EX NO:11. Build simple NN models
Aim :
To implement the neural network model for the given numpy array.
Algorithm:
Step 1: Use numpy arrays to store inputs (x) and outputs (y)
Step 2: Define the network model and its arguments.
Step 3: Set the number of neurons/nodes for each layer
Step 4: Compile the model and calculate its accuracy
Step 5: Print a summary of the Keras model
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
x = np.array([[0,0], [0,1], [1,0], [1,1]])
y = np.array([[0], [1], [1], [0]])
model = Sequential()
model.add(Dense(2, input_shape=(2,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.summary()
Output:
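The listing defines and compiles the network but never trains it. An optional extension, assuming the model, x and y defined above; note that a tiny sigmoid network trained with SGD on MSE can take many epochs to learn XOR and may occasionally stall in a poor local minimum:

# train on the four XOR patterns; convergence varies from run to run
model.fit(x, y, epochs=2000, verbose=0)
print(model.predict(x))   # outputs should move towards [0, 1, 1, 0]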
Result:
Thus the program to implement the neural network model for the given dataset has
been executed successfully and the output got verified.
EX No:12. Build deep learning NN models
Aim:
To build a deep neural network model that classifies handwritten digits from the MNIST dataset.
Algorithm:
Step 1: Load the MNIST dataset and normalize the pixel values.
Step 2: Build a Sequential model with a Flatten layer, two hidden Dense layers of 128 ReLU units, and a 10-unit softmax output layer.
Step 3: Compile the model with the adam optimizer and sparse categorical cross-entropy loss.
Step 4: Train the model for three epochs and evaluate it on the test set.
Step 5: Save the model, reload it and predict the class of a test digit.
Program:
import tensorflow as tf
import matplotlib.pyplot as plt

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
plt.imshow(x_train[0],cmap=plt.cm.binary)
plt.show()
print(y_train[0])
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
print(x_train[0])
plt.imshow(x_train[0],cmap=plt.cm.binary)
plt.show()
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss)
print(val_acc)
model.save('epic_num_reader.model')
new_model = tf.keras.models.load_model('epic_num_reader.model')
predictions = new_model.predict(x_test)
import numpy as np
print(np.argmax(predictions[0]))
plt.imshow(x_test[0],cmap=plt.cm.binary)
plt.show()
Output:
5
[[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.00393124 0.02332955 0.02620568 0.02625207 0.17420356 0.17566281
0.28629534 0.05664824 0.51877786 0.71632322 0.77892406 0.89301644
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0.05780486 0.06524513 0.16128198 0.22713296
0.22277047 0.32790981 0.36833534 0.3689874 0.34978968 0.32678448
0.368094 0.3747499 0.79066747 0.67980478 0.61494005 0.45002403
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.12250613 0.45858525 0.45852825 0.43408872 0.37314701
0.33153488 0.32790981 0.36833534 0.3689874 0.34978968 0.32420121
0.15214552 0.17865984 0.25626376 0.1573102 0.12298801 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.04500225 0.4219755 0.45852825 0.43408872 0.37314701
0.33153488 0.32790981 0.28826244 0.26543758 0.34149427 0.31128482
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0.1541463 0.28272888 0.18358693 0.37314701
0.33153488 0.26569767 0.01601458 0. 0.05945042 0.19891229
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0.0253731 0.00171577 0.22713296
0.33153488 0.11664776 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.20500962
0.33153488 0.24625638 0.00291174 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.01622378
0.24897876 0.32790981 0.10191096 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.04586451 0.31235677 0.32757096 0.23335172 0.14931733 0.00129164
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.10498298 0.34940902 0.3689874 0.34978968 0.15370495
0.04089933 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.06551419 0.27127137 0.34978968 0.32678448
0.245396 0.05882702 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.02333517 0.12857881 0.32549285
0.41390126 0.40743158 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.32161793
0.41390126 0.54251585 0.20001074 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.06697006 0.18959827 0.25300993 0.32678448
0.41390126 0.45100715 0.00625034 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.05110617 0.19182076 0.33339444 0.3689874 0.34978968 0.32678448
0.40899334 0.39653769 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.04117838 0.16813739
0.28960162 0.32790981 0.36833534 0.3689874 0.34978968 0.25961929
0.12760592 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0.04431706 0.11961607 0.36545809 0.37314701
0.33153488 0.32790981 0.36833534 0.28877275 0.111988 0.00258328
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.05298497 0.42752138 0.4219755 0.45852825 0.43408872 0.37314701
0.33153488 0.25273681 0.11646967 0.01312603 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0.37491383 0.56222061
0.66525569 0.63253163 0.48748768 0.45852825 0.43408872 0.359873
0.17428513 0.01425695 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0.92705966 0.82698729
0.74473314 0.63253163 0.4084877 0.24466922 0.22648107 0.02359823
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]]
Epoch 1/3
1875/1875 [==============================] - 18s 8ms/step - loss: 0.2588 - accuracy: 0.9252
Epoch 2/3
1875/1875 [==============================] - 16s 9ms/step - loss: 0.1055 - accuracy: 0.9679
Epoch 3/3
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0723 - accuracy: 0.9773
313/313 [==============================] - 2s 4ms/step - loss: 0.1149 - accuracy: 0.9651
0.11487378180027008
0.9650999903678894
Result:
Thus the program to implement and build a deep neural network model has
been executed successfully and the output got verified.