20F 0179, 20F 0118 - AI Lab 11
Screenshot:
Task 2 (XOR Gate):
Training XOR with the perceptron training rule is not possible, because the XOR function is not linearly separable: no single line in the input plane separates the inputs that should output 1 from those that should output 0, so a single-layer perceptron can never fit all four training pairs.
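As a quick sanity check (assuming the bias-free step unit used in the code below, which outputs 1 only when w1*x1 + w2*x2 > 0), fitting XOR would require:
(0, 1) -> 1: w2 > 0
(1, 0) -> 1: w1 > 0
(1, 1) -> 0: w1 + w2 <= 0
The last condition contradicts the first two, so no pair of weights can satisfy all four training examples; adding a bias term leads to a similar contradiction.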
Code:
import numpy as np

eeta = 0.1      # learning rate
epochs = 100

# XOR truth table
inputSet = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targetVal = np.array([[0], [1], [1], [0]])

def step_func(x):
    # Step activation: 1 if the weighted sum is positive, else 0
    return np.where(x > 0, 1, 0)

def train_perceptron(inputs, targetValues, eeta, epochs):
    weights = np.array([0.3, 0.3])   # initial weights (no bias term)
    for epoch in range(epochs):
        print(f"Epoch: {epoch + 1}")
        for i in range(len(inputs)):
            input_data = inputs[i]
            target = targetValues[i]
            weighted_sum = np.dot(input_data, weights)
            predicted_output = step_func(weighted_sum)
            error = target - predicted_output
            # Perceptron training rule: w <- w + eeta * error * x
            weights += eeta * error * input_data
            print(f"Input: {input_data}, targetValue: {target}, "
                  f"Predicted: {predicted_output}, Weights: {weights}")
    return weights

print("XOR TRAINING")
train_perceptron(inputSet, targetVal, eeta, epochs)
Screenshot:
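As an optional check (a minimal sketch, assuming the inputSet, targetVal, step_func, and train_perceptron defined above are still in scope), the weights returned by training can be re-evaluated on all four inputs; no matter how many epochs are run, at least one XOR pair stays misclassified:

# Sketch: re-run training and test the returned weights on every XOR input.
final_weights = train_perceptron(inputSet, targetVal, eeta, epochs)
predictions = step_func(inputSet.dot(final_weights))          # predictions for all four inputs
mismatches = int(np.sum(predictions != targetVal.flatten()))  # count of wrong outputs
print(f"Final weights: {final_weights}, misclassified inputs: {mismatches}")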