
C. ABDUL HAKEEM COLLEGE OF ENGINEERING AND TECHNOLOGY, MELVISHARAM
Hakeem Nagar, Melvisharam - 632 509, Vellore District, Tamil Nadu, India.
(Approved by AICTE, New Delhi and Affiliated to Anna University, Chennai)
(Accredited by NBA & with ‘A’ grade by NAAC and ISO 9001:2008 Certified Institution)
(Regd. under Sec 2(F) & 12(B) of the UGC Act 1956)

Department of Information Technology


CS 3491- ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
LAB RECORD

(Regulation - 2021)

INDEX

S.NO    DATE    LIST OF EXERCISES    PAGE NUMBER    SIGN

1. Implementation of Uninformed Search Algorithms (BFS, DFS)
2. Implementation of Informed Search Algorithms (A*, memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian networks
5. Build regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models
EX.NO:1A Implementation of Uninformed Search Algorithms (BFS)
DATE:

AIM:

To implement the uninformed search algorithm BFS (breadth-first search) using Python.

BFS Algorithm:

1. Pick a starting node, mark it as visited, display it, and insert it into a queue.
2. Remove the first vertex from the queue and, for each of its unvisited adjacent vertices, mark the vertex as visited, display it, and insert it into the queue.
3. Repeat step 2 until the queue is empty or the desired node is found.

Implementation:

Consider the graph, which is implemented in the code below:

Source Code:

# adjacency-list representation of the graph
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': [],
}

visited = []  # nodes already visited
queue = []    # FIFO queue of nodes waiting to be expanded

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)   # dequeue the oldest node
        print(s, end=" ")  # space separator so the nodes do not run together
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

bfs(visited, graph, 'A')

Output:
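Tracing the queue by hand, the program prints the traversal order: A B C D E F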

Result:

Thus the program was executed successfully.

EX.NO:1B Implementation of Uninformed Search Algorithms (DFS)
DATE:

AIM:

To implement the uninformed search algorithm DFS (depth-first search) using Python.

DFS Algorithm:

1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes, and add the ones which aren't in the visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.

Implementation:

Consider the graph, which is implemented in the code below:

Source code:

# Using a Python dictionary to act as an adjacency list
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': [],
}

visited = set()  # set to keep track of visited nodes of the graph

# recursive function for DFS
def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver code
print("Following is the depth-first search:")
dfs(visited, graph, '5')

Output:
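Tracing the recursion from node '5', the header line is followed by the nodes printed one per line in the order: 5 3 2 4 8 7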

Result:

Thus the program was executed successfully.

EX.NO:2 Implementation of A*, Memory-bounded A*
DATE:

AIM:

To implement Informed search algorithms (A*, memory-bounded A*) using python.

Algorithm:

Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Source Code: A* Code


class Graph:
    def __init__(self, adjac_lis):
        self.adjac_lis = adjac_lis

    def get_neighbors(self, v):
        return self.adjac_lis[v]

    # heuristic function: a constant estimate of 1 for every node
    def h(self, n):
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1,
        }
        return H[n]

    def a_star_algorithm(self, start, stop):
        open_lst = set([start])  # discovered nodes not yet expanded
        closed_lst = set([])     # nodes already expanded
        poo = {start: 0}         # g(n): cheapest known cost from start to n
        par = {start: start}     # parent pointers for path reconstruction

        while len(open_lst) > 0:
            # pick the open node with the smallest f(n) = g(n) + h(n)
            n = None
            for v in open_lst:
                if n is None or poo[v] + self.h(v) < poo[n] + self.h(n):
                    n = v

            if n is None:
                print('Path does not exist!')
                return None

            # goal reached: walk the parent pointers back to the start
            if n == stop:
                reconst_path = []
                while par[n] != n:
                    reconst_path.append(n)
                    n = par[n]
                reconst_path.append(start)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            for (m, weight) in self.get_neighbors(n):
                if m not in open_lst and m not in closed_lst:
                    open_lst.add(m)
                    par[m] = n
                    poo[m] = poo[n] + weight
                else:
                    # found a cheaper path to m: update its cost and parent,
                    # and reopen it if it was already closed
                    if poo[m] > poo[n] + weight:
                        poo[m] = poo[n] + weight
                        par[m] = n
                        if m in closed_lst:
                            closed_lst.remove(m)
                            open_lst.add(m)

            open_lst.remove(n)
            closed_lst.add(n)

        print('Path does not exist!')
        return None

adjac_lis = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)],
}
graph1 = Graph(adjac_lis)
graph1.a_star_algorithm('A', 'D')

Output:
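Tracing the search on this graph, it should print: Path found: ['A', 'B', 'D'] (cost 6 via B, cheaper than the direct A-D edge of cost 7).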

Source code: memory-bounded A* (the listing below illustrates the idea with a greedy best-first search, which keeps only the priority-queue frontier and a visited array in memory)


from queue import PriorityQueue

v = 14                           # number of nodes in the graph
graph = [[] for i in range(v)]   # adjacency list: graph[u] holds (neighbour, cost) pairs

# Best-first search: repeatedly expands the node with the lowest edge cost,
# printing nodes in the order they are visited
def best_first_search(actual_Src, target, n):
    visited = [False] * n
    pq = PriorityQueue()
    pq.put((0, actual_Src))
    visited[actual_Src] = True

    while not pq.empty():
        u = pq.get()[1]
        # displaying the path having lowest cost
        print(u, end=" ")
        if u == target:
            break
        for v, c in graph[u]:
            if not visited[v]:
                visited[v] = True
                pq.put((c, v))
    print()

# Function for adding undirected edges to the graph
def addedge(x, y, cost):
    graph[x].append((y, cost))
    graph[y].append((x, cost))

# The nodes (shown by letters in the reference figure) are implemented as integers
addedge(0, 1, 3)
addedge(0, 2, 6)
addedge(0, 3, 5)
addedge(1, 4, 9)
addedge(1, 5, 8)
addedge(2, 6, 12)
addedge(2, 7, 14)
addedge(3, 8, 7)
addedge(8, 9, 5)
addedge(8, 10, 6)
addedge(9, 11, 1)
addedge(9, 12, 10)
addedge(9, 13, 2)

source = 0
target = 9
best_first_search(source, target, v)
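
Output:

Tracing the priority queue by hand, the visit order printed should be: 0 1 3 2 8 9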

Result:

Thus the program was executed successfully.

CAHECT
EX.NO:3 Implementation of Naïve Bayes Models
DATE:

AIM:

To write a program to implement a naïve Bayes model using Python.

NAIVE BAYES MODEL:

It is a classification technique based on Bayes' theorem with an assumption of independence among predictors. A naïve Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. It predicts membership probabilities for each class, such as the probability that a given record or data point belongs to a particular class.

Algorithm:
Step 1: Data pre-processing step.
Step 2: Fitting naïve Bayes to the training set.
Step 3: Predicting the test result.
Step 4: Testing the accuracy of the result (creation of a confusion matrix).
Step 5: Visualising the test set result.

Dataset link: https://gist.github.com/netj/8836201

Source Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# load the Iris dataset
dataset = pd.read_csv("C:/Users/Banu/Downloads/iris1.csv")
dataset.head(5)

# splitting the data into features (X) and labels (Y)
X = dataset.iloc[:, :4].values
Y = dataset['variety'].values

# to see the split
print(X)
print(Y)

# training and test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

# feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# fitting the Gaussian naive Bayes classifier
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)

# prediction on the test set
y_pred = classifier.predict(X_test)

# evaluation: confusion matrix and accuracy
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
from sklearn.metrics import accuracy_score
print("Accuracy : ", accuracy_score(y_test, y_pred))
df = pd.DataFrame({"Real Values": y_test, "Predicted Values": y_pred})
print(df.head(5))

Output:
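Because train_test_split is called without a fixed random_state, the 3×3 confusion matrix and the accuracy (typically above 0.9 on the Iris data) vary from run to run; passing random_state=0 to train_test_split makes the split reproducible.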

Result:

Thus the program was executed successfully.

EX.NO:4 Implement Bayesian Networks
DATE:

AIM:

To write a program to construct a Bayesian network for a data set and infer posterior probabilities from it.

Algorithm:
1. First, identify the main variables in the problem to solve. Each variable corresponds to a node of the network. It is important to choose the number of states for each variable; for instance, there are usually two states (true or false).
2. Second, define the structure of the network, that is, the causal relationships between all the variables (nodes).
3. Third, define the probability rules governing the relationships between the variables.

Source code:
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
import networkx as nx
import pylab as plt

# Defining the Bayesian network structure (the Monty Hall problem)
model = BayesianNetwork([('Guest', 'Host'), ('Price', 'Host')])

# Defining the CPDs:
cpd_guest = TabularCPD('Guest', 3, [[0.33], [0.33], [0.33]])
cpd_price = TabularCPD('Price', 3, [[0.33], [0.33], [0.33]])
cpd_host = TabularCPD('Host', 3,
                      [[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
                       [0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
                       [0.5, 1, 0, 1, 0.5, 0, 0, 0, 0]],
                      evidence=['Guest', 'Price'], evidence_card=[3, 3])

# Associating the CPDs with the network structure
model.add_cpds(cpd_guest, cpd_price, cpd_host)
model.check_model()

# Inferring the posterior probability
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
posterior_p = infer.query(['Host'], evidence={'Guest': 2, 'Price': 2})
print(posterior_p)

# Drawing the network
pos = nx.circular_layout(model)
nx.draw(model, node_color='#00b4d9', pos=pos, with_labels=True)
plt.savefig('model.png')

Output:
True
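
check_model() returns True (printed above). From the CPDs, the query should report P(Host=0) = 0.5, P(Host=1) = 0.5 and P(Host=2) = 0: with the guest's pick and the prize both at door 2, the host opens door 0 or door 1 with equal probability. The network drawing is saved to model.png.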

Result:
Thus the program was executed successfully.

EX.NO:5 Build Regression models
DATE:

AIM:
To write a program to build regression models using Python.

Algorithm:

1. Initialize the parameters.

2. Predict the value of the dependent variable for a given independent variable.

3. Calculate the error in prediction for all data points.

4. Calculate the partial derivatives with respect to b0 and b1.

5. Calculate the cost for each data point and sum them.

6. Update the values of b0 and b1.

(The listing below instead computes b0 and b1 directly with the closed-form least-squares formulas; a gradient-descent sketch following the steps above is given after it.)

Source code:

import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)

    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)

    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y * x) - n * m_y * m_x
    SS_xx = np.sum(x * x) - n * m_x * m_x

    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1 * m_x

    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color="m", marker="o", s=30)

    # predicted response vector
    y_pred = b[0] + b[1] * x

    # plotting the regression line
    plt.plot(x, y_pred, color="g")

    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')

    # function to show plot
    plt.show()

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))

    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()
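
The numbered steps above describe gradient descent, whereas estimate_coef() solves for the coefficients in closed form. A minimal gradient-descent sketch for the same data (the learning rate and iteration count are illustrative choices, not from the record):

import numpy as np

def estimate_coef_gd(x, y, lr=0.01, iters=5000):
    # step 1: initialize the parameters
    b0, b1 = 0.0, 0.0
    n = np.size(x)
    for _ in range(iters):
        y_pred = b0 + b1 * x                 # step 2: predict
        error = y_pred - y                   # step 3: prediction error
        d_b0 = (2 / n) * np.sum(error)       # step 4: partial derivative w.r.t. b0
        d_b1 = (2 / n) * np.sum(error * x)   # partial derivative w.r.t. b1
        b0 -= lr * d_b0                      # step 6: update b0 and b1
        b1 -= lr * d_b1
    return b0, b1

x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
print(estimate_coef_gd(x, y))  # approaches the closed-form coefficients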

Output:
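With the data in main(), the least-squares formulas give approximately b_0 = 1.236 and b_1 = 1.170, and the scatter plot with the fitted green regression line is displayed.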

Result:

Thus the program was executed successfully.

EX.NO:6A Build Decision Trees
DATE:

AIM:
To write a program to build decision tree models using Python.

ALGORITHM:

Step-1: Begin the tree with the root node, say S, which contains the complete dataset.

Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).

Step-3: Divide S into subsets that contain possible values for the best attribute.

Step-4: Generate the decision tree node which contains the best attribute.

Step-5: Recursively make new decision trees using the subsets of the dataset created in step 3. Continue this process until a stage is reached where you cannot classify the nodes further; the final nodes are leaf nodes.

Source code:

from sklearn import tree

# Using a DecisionTree classifier for prediction
clf = tree.DecisionTreeClassifier()

# Each sample holds three values: height, weight and shoe size
X = [[181, 80, 91], [182, 90, 92], [183, 100, 92], [184, 200, 93], [185, 300, 94],
     [186, 400, 95], [187, 500, 96], [189, 600, 97], [190, 700, 98], [191, 800, 99],
     [192, 900, 100], [193, 1000, 101]]
Y = ['male', 'male', 'female', 'male', 'female', 'male', 'female', 'male', 'female',
     'male', 'female', 'male']
clf = clf.fit(X, Y)

# Predicting on the basis of given values for each feature
predictionf = clf.predict([[181, 80, 91]])
predictionm = clf.predict([[183, 100, 92]])

# Printing the final predictions
print(predictionf)
print(predictionm)

Output:
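Both probe points are copies of training samples, so the fitted tree reproduces their labels and prints ['male'] followed by ['female'].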

Result:

Thus the program was executed successfully.

EX.NO:6B Build Random Forest Model
DATE:

AIM:
To write a program to build random forest models using Python.

Algorithm:

Step-1: Select random K data points from the training set.

Step-2: Build the decision trees associated with the selected data points (subsets).

Step-3: Choose the number N of decision trees that you want to build.

Step-4: Repeat steps 1 and 2 N times.

Step-5: For a new data point, average the predictions of all N trees (for regression) or take the majority vote (for classification).

Dataset link: https://github.com/selva86/datasets/blob/master/BostonHousing.csv

Source code:

import numpy as np
import pandas as pd

# load the Boston housing dataset
dataset = pd.read_csv("Boston1.csv")
dataset.head()

# features (all columns except the last) and target (the last column)
x = pd.DataFrame(dataset.iloc[:, :-1])
y = pd.DataFrame(dataset.iloc[:, -1])

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20)

# random forest with 20 trees
from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators=20, random_state=0)
regressor.fit(x_train, y_train)
y_pred = regressor.predict(x_test)

from sklearn import metrics
print('Mean absolute error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, y_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))

Output:
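The exact values of the three error metrics differ between runs because the train/test split is random; fixing random_state in train_test_split makes them reproducible.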

Result:

Thus the program was executed successfully.

EX.NO:7 Build SVM Models
DATE:

AIM:

To write a program to build SVM models using Python.

Algorithm:

Step 1: Load the important libraries.

Step 2: Import the dataset and extract the X variables and Y separately.

Step 3: Divide the dataset into train and test sets.

Step 4: Initialize the SVM classifier model.

Step 5: Fit the SVM classifier model.

Step 6: Make predictions on the test set.

Step 7: Evaluate the model's performance.

Dataset link: https://github.com/gchoi/Dataset/blob/master/UniversalBank.csv

Source code:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# load the Universal Bank dataset
df = pd.read_csv('UniversalBank (1).csv')
df.head(5)

# drop columns that carry no predictive signal
df.drop('ZIP Code', inplace=True, axis=1)
x = df.drop('Personal Loan', axis=1)
x.drop('ID', axis=1, inplace=True)
y = df['Personal Loan']

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

# linear support vector classifier
from sklearn.svm import LinearSVC
svc = LinearSVC()
svc.fit(x_train, y_train)
pred = svc.predict(x_test)

# evaluation: classification report and confusion-matrix heatmap
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
print(classification_report(y_test, pred))
sns.heatmap(confusion_matrix(y_test, pred), annot=True)

Output:
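LinearSVC may warn that it failed to converge on this unscaled data, but the classification report should still show accuracy of roughly 0.9 or higher, since about 90% of customers in this dataset do not take the personal loan; the heatmap visualises the confusion matrix.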

Result:

Thus the program was executed successfully.

EX.NO:8 Implement Ensembling Techniques
DATE:

AIM:

To write a program to implement ensembling techniques using Python.

Algorithm:

1. Import the necessary packages required to build the bagging model.
2. Download the dataset from the GitHub link provided below; the link is stored in a variable called url.
3. Set the seed value for the random states so that the generated random values remain the same across runs, and declare the KFold model with the n_splits parameter set to 10.
4. Train the model using the BaggingClassifier, with the parameters set to the values defined in the previous steps.
5. Check the performance of the model using the bagging model, X, Y, and the kfold model.
6. The output printed is the cross-validated accuracy of the model.

Source code:

import pandas
from sklearn import model_selection
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Pima Indians diabetes dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:, 0:8]   # eight input features
Y = array[:, 8]     # class label

seed = 7
kFold = model_selection.KFold(n_splits=10, random_state=None)

# bagging: 100 decision trees trained on bootstrap samples
cart = DecisionTreeClassifier()
num_trees = 100
model = BaggingClassifier(estimator=cart, n_estimators=num_trees, random_state=seed)
results = model_selection.cross_val_score(model, X, Y, cv=kFold)
print(results.mean())

Output:
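results.mean() prints a single accuracy value; for 100 bagged decision trees on this dataset it is typically around 0.77, and since neither the unshuffled KFold splits nor the bagging seed change between runs, repeated runs print the same figure.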

Result:

Thus the program was executed successfully.

EX.NO:9 Implement clustering algorithms
DATE:

AIM:
To write a program to implement clustering algorithms using Python.

Algorithm:

1. The first step in k-means is to pick the number of clusters, k.

2. Randomly select the centroid for each cluster. Let's say we want to have 2 clusters, so k is equal to 2 here.

3. Once we have initialized the centroids, assign each point to the closest cluster centroid.

4. Once all of the points have been assigned to a cluster, compute the centroids of the newly formed clusters.

5. Repeat steps 3 and 4 until the centroids are optimal and the assignments of data points to clusters no longer change.

Source Code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# thirty 2-D points to be clustered
data = {
    'x': [25, 34, 22, 27, 33, 33, 31, 22, 35, 34, 67, 54, 57, 43, 50, 57, 59, 52, 65, 47,
          49, 48, 35, 33, 44, 45, 38, 43, 51, 46],
    'y': [79, 51, 53, 78, 59, 74, 73, 57, 69, 75, 51, 32, 40, 47, 53, 36, 35, 58, 59, 50,
          25, 20, 14, 12, 20, 5, 29, 27, 8, 7]
}

df = pd.DataFrame(data)

# fit k-means with k = 4 and print the learned cluster centres
kmeans = KMeans(n_clusters=4).fit(df)
centroids = kmeans.cluster_centers_
print(centroids)

# plot the points coloured by cluster, with the centroids in red
plt.scatter(df['x'], df['y'], c=kmeans.labels_.astype(float), s=50, alpha=0.5)
plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=50)
plt.show()
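
The listing fixes n_clusters=4. For step 1 of the algorithm (choosing k), a common heuristic is the elbow method; a minimal sketch, reusing df from above (not part of the original record):

# elbow method: plot the within-cluster sum of squares (inertia) for several k
inertias = []
for k in range(1, 10):
    inertias.append(KMeans(n_clusters=k).fit(df).inertia_)
plt.plot(range(1, 10), inertias, marker='o')
plt.xlabel('k')
plt.ylabel('inertia')
plt.show()  # pick k at the "elbow", where the curve stops dropping sharply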

Output:
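k-means initialisation is random, so the printed 4 × 2 array of centroid (x, y) coordinates and the cluster colouring in the scatter plot can differ slightly between runs.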

Result:

Thus the program was executed successfully.

EX.NO:10 Implement EM for Bayesian networks
DATE:

AIM:

To write a program to implement EM for Bayesian networks using Python in a Jupyter notebook.

ALGORITHM:
1. Given a set of incomplete data, start with a set of initialized parameters.
2. Expectation step (E-step): using the observed available data of the dataset, estimate or guess the values of the missing data. After this step, we have complete data with no missing values.
3. Maximization step (M-step): use the complete data prepared in the expectation step to update the parameters.
4. Repeat steps 2 and 3 until we converge to a solution.

Source code:

import pandas as pd
import numpy as np
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
import matplotlib.pyplot as plt

# load the Asia dataset and keep three columns
data = pd.read_csv("C:/Users/Banu/Downloads/asia10K.csv")
data = pd.DataFrame(data, columns=["Smoker", "LungCancer", "X-ray"])

# first 2000 rows are kept complete; 500 Smoker values are hidden in the rest
test_data = data[:2000]
new_data = data[2000:].copy()
new_data["Smoker"][:500] = "?"

# learn a network structure from the complete data
learner = gum.BNLearner(test_data)
learner.useScoreBIC()
learner.useGreedyHillClimbing()
bn = learner.learnBN()

# re-estimate the parameters from the incomplete data with EM
learner2 = gum.BNLearner(new_data, bn)
learner2.setVerbosity(True)
learner2.useEM(1e-10)
learner2.fitParameters(bn)
print(f"Nbr iteration EM : {learner2.nbrIterations()}")

# plot the EM error across iterations
plt.plot(np.arange(1, 1 + learner2.nbrIterations()), learner2.history())
plt.semilogy()
plt.title("error during EM iterations")

Output:
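The run prints the number of EM iterations and shows a log-scale plot of the EM error decreasing towards the 1e-10 stopping threshold; the exact iteration count depends on the data.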

Result:

Thus the program was executed successfully.

EX.NO:11 Build simple NN models
DATE:

AIM:

To write a program to build simple NN models using Python in a Jupyter notebook.


Algorithm:
1. We took the inputs from the training dataset, performed some adjustments based on their weights, and siphoned them via a method that computed the output of the ANN.
2. We computed the back-propagated error rate: the difference between the neuron's predicted output and the expected output of the training dataset.
3. Based on the extent of the error, we performed some minor weight adjustments using the Error Weighted Derivative formula.
4. We iterated this process an arbitrary 15,000 times; in every iteration, the whole training set is processed simultaneously.
5. The weights of the neuron are thereby optimized for the provided training data. Consequently, if the neuron is made to think about a new situation that is the same as a previous one, it can make an accurate prediction. This is how back-propagation takes place.
6. Finally, we initialized the NeuralNetwork class and ran the code.

Source code:
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # seeding for random number generation
        np.random.seed(1)
        # converting weights to a 3-by-1 matrix with values from -1 to 1 and mean 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # applying the sigmoid function
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # computing the derivative of the sigmoid function
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        # training the model to make accurate predictions while adjusting weights continually
        for iteration in range(training_iterations):
            # siphon the training data via the neuron
            output = self.think(training_inputs)

            # computing the error rate for back-propagation
            error = training_outputs - output

            # performing weight adjustments
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        # passing the inputs via the neuron to get the output
        # converting values to floats
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    # initializing the neuron class
    neural_network = NeuralNetwork()

    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)

    # training data consisting of 4 examples -- 3 input values and 1 output
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T

    # training taking place
    neural_network.train(training_inputs, training_outputs, 15000)

    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)

    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))

    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Wow, we did it!")

Output:
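The run prints the random starting weights and the trained weights, then prompts for three 0/1 inputs and prints a prediction close to the value of the first input, since the pattern in the training data is that the output always equals the first input.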

Result:

Thus the program was executed successfully.

EX.NO:12 Build deep learning NN models
DATE:

AIM:
To write a program to build deep learning NN models using Python in a Jupyter notebook.

Algorithm:
1. Use the NumPy library to load the dataset, and two classes from the Keras library to define the model.
2. Models in Keras are defined as a sequence of layers; we create a Sequential model and add layers one at a time.
3. Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow.
4. To execute the model on some data, train or fit the model on the loaded data by calling its fit() function. Training occurs over epochs, and each epoch is split into batches.
5. After the neural network has been trained on the entire dataset, evaluate its performance on the same dataset using the evaluate() function.

Dataset link: https://machinelearningmastery.com/tutorial-first-neural-network-python-


keras/

Source code:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# load the dataset
dataset = loadtxt('C:/Users/Banu/Downloads/pima-indians-diabetes.data.csv', delimiter=',')

# split into input (X) and output (y) variables
X = dataset[:, 0:8]
y = dataset[:, 8]

# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)

# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy * 100))

Output:
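Keras logs the loss and accuracy for each of the 150 training epochs and then prints the final accuracy; the linked tutorial reports roughly 76-77% on this dataset, with some variation between runs.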

Result:

Thus the program was executed successfully.
