20F 0179,20F 0118 - Ai Lab 11

This document covers training perceptrons to perform the logical OR and XOR operations. It provides code that trains a perceptron on the OR gate dataset, which is linearly separable. The XOR gate, however, is not linearly separable, so training a perceptron on the XOR dataset with the perceptron training rule will not converge.


LAB 11 Artificial Intelligence

Task 1 (OR Gate):


Code:
import numpy as np

eeta = 0.1    # learning rate (eta)
epochs = 100
inputSet = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targetVal = np.array([[0], [1], [1], [1]])  # OR gate outputs

def step_func(x):
    # Step activation: output 1 if the weighted sum is positive, else 0
    return np.where(x > 0, 1, 0)

def train_perceptron(inputs, targetValues, eeta, epochs):
    weights = np.array([0.3, 0.3])  # initial weights
    for epoch in range(epochs):
        print(f"Epoch: {epoch + 1}")
        for i in range(len(inputs)):
            input_data = inputs[i]
            target = targetValues[i]
            weighted_sum = np.dot(input_data, weights)
            predicted_output = step_func(weighted_sum)
            error = target - predicted_output
            # Perceptron training rule: w = w + eta * (target - output) * x
            weights += eeta * error * input_data
            print(f"Input: {input_data} , targetValue: {target} , Predicted: {predicted_output}, Weights: {weights}")
    return weights

print("OR TRAINING")
train_perceptron(inputSet, targetVal, eeta, epochs)
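As a quick supplementary check (a sketch, not part of the lab template): because this perceptron has no bias term and the step function fires whenever the weighted sum exceeds 0, the positive weights already classify every OR input correctly, which is why training converges immediately.

```python
import numpy as np

# Verify the OR classification with the weights used in the lab
# (initial weights [0.3, 0.3]; both stay positive during training).
weights = np.array([0.3, 0.3])
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
predictions = np.where(inputs @ weights > 0, 1, 0)
print(predictions)  # matches the OR targets [0, 1, 1, 1]
```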

Screenshot:
Task 2 (XOR Gate):
Training XOR is not possible with the perceptron training rule
because the XOR function is not linearly separable: no single line
can separate the inputs that should output 1 ([0,1] and [1,0]) from
those that should output 0 ([0,0] and [1,1]), so the weight updates
never converge.
Code:
import numpy as np

eeta = 0.1    # learning rate (eta)
epochs = 100
inputSet = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targetVal = np.array([[0], [1], [1], [0]])  # XOR gate outputs

def step_func(x):
    # Step activation: output 1 if the weighted sum is positive, else 0
    return np.where(x > 0, 1, 0)

def train_perceptron(inputs, targetValues, eeta, epochs):
    weights = np.array([0.3, 0.3])  # initial weights
    for epoch in range(epochs):
        print(f"Epoch: {epoch + 1}")
        for i in range(len(inputs)):
            input_data = inputs[i]
            target = targetValues[i]
            weighted_sum = np.dot(input_data, weights)
            predicted_output = step_func(weighted_sum)
            error = target - predicted_output
            # The update rule keeps oscillating on XOR and never converges
            weights += eeta * error * input_data
            print(f"Input: {input_data} , targetValue: {target} , Predicted: {predicted_output}, Weights: {weights}")
    return weights

print("XOR TRAINING")
train_perceptron(inputSet, targetVal, eeta, epochs)
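Although a single perceptron cannot learn XOR, a network with one hidden layer can compute it. The sketch below hand-wires (rather than trains) two hidden step units, using the identity XOR = OR AND NOT AND; the thresholds shown are chosen by hand for illustration.

```python
import numpy as np

def step(x):
    # Same step activation as in the lab code
    return np.where(x > 0, 1, 0)

def xor_net(x):
    # Hand-wired two-layer network: XOR(x1, x2) = OR(x1, x2) AND NOT AND(x1, x2)
    h_or = step(x[0] + x[1] - 0.5)    # hidden unit computing OR
    h_and = step(x[0] + x[1] - 1.5)   # hidden unit computing AND
    return step(h_or - h_and - 0.5)   # output unit: OR minus AND

for pattern in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(pattern, int(xor_net(pattern)))  # prints 0, 1, 1, 0
```

This is the classic resolution of the XOR limitation: adding a hidden layer makes the decision boundary non-linear, which is what multi-layer perceptrons trained with backpropagation exploit.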

Screenshot:
