DEEP LEARNING LAB PRACTICALS
# McCulloch-Pitts neuron
import numpy as np

def step_function(x):
    return 1 if x >= 0 else 0  # fire (1) when the net input is non-negative

def mcculloch_pitts(inputs, weights, threshold):
    weighted_sum = np.dot(inputs, weights)
    return step_function(weighted_sum - threshold)
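The AND-gate test that produced the first block of the output below is not included in this listing; a minimal sketch, assuming the same calling pattern with weights [1, 1] and a threshold of 2, would be:

# AND Gate (sketch; weights and threshold are assumed, chosen to match the output below)
print("AND Gate:")
and_weights = [1, 1]
and_threshold = 2
for inputs in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    output = mcculloch_pitts(inputs, and_weights, and_threshold)
    print(f"Input: {inputs}, Output: {output}")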
# OR Gate
print("\nOR Gate:")
or_weights = [1, 1]
or_threshold = 1
for inputs in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    output = mcculloch_pitts(inputs, or_weights, or_threshold)
    print(f"Input: {inputs}, Output: {output}")
# NOT Gate
print("\nNOT Gate:")
not_weights = [-1]
not_threshold = 0
for inputs in [[0], [1]]:
    output = mcculloch_pitts(inputs, not_weights, not_threshold)
    print(f"Input: {inputs}, Output: {output}")
Output:
AND Gate:
Input: [0, 0], Output: 0
Input: [0, 1], Output: 0
Input: [1, 0], Output: 0
Input: [1, 1], Output: 1
OR Gate:
Input: [0, 0], Output: 0
Input: [0, 1], Output: 1
Input: [1, 0], Output: 1
Input: [1, 1], Output: 1
NOT Gate:
Input: [0], Output: 1
Input: [1], Output: 0
Result:
The McCulloch-Pitts neuron model accurately implements basic logic
gates using predefined weights and thresholds.
2. Implementation of Perceptron Learning Algorithm for Binary Classification
Aim:
To implement the perceptron learning algorithm and use it to classify inputs
based on the logical AND operation.
Procedure:
1. Define a perceptron with:
• Initialized weights and a step activation function.
2. Use a dataset for an AND gate:
• Inputs: [0, 0], [0, 1], [1, 0], [1, 1]
• Outputs: [0, 0, 0, 1].
3. Train the perceptron using the perceptron learning rule over 10 epochs.
4. Test the perceptron on the dataset and a new input [1, 1].
Code:
import numpy as np
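Only the import survives in this record; the rest of the listing is not reproduced. A minimal sketch consistent with the procedure above (step activation, perceptron learning rule w = w + lr * (target - prediction) * x, 10 epochs on the AND data) might look like the following. The class name, learning rate, and zero initialization are illustrative assumptions, not the original code:

# Sketch of a perceptron trained on the AND gate (names and initialization assumed)
class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.weights = np.zeros(n_inputs)  # weights start at zero (assumed)
        self.bias = 0.0
        self.lr = lr

    def predict(self, x):
        # Step activation on the weighted sum plus bias
        return 1 if np.dot(x, self.weights) + self.bias >= 0 else 0

    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                # Perceptron learning rule
                self.weights += self.lr * error * xi
                self.bias += self.lr * error

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.train(X, y, epochs=10)

print("Testing the perceptron:")
for xi, target in zip(X, y):
    print(f"Input: {xi}, Predicted: {p.predict(xi)}, Actual: {target}")

new_input = np.array([1, 1])
print(f"New Input: {new_input}, Predicted: {p.predict(new_input)}")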
Output:
Testing the perceptron:
Input: [0 0], Predicted: 0, Actual: 0
Input: [0 1], Predicted: 0, Actual: 0
Input: [1 0], Predicted: 0, Actual: 0
Input: [1 1], Predicted: 1, Actual: 1
New Input: [1 1], Predicted: 1
Result:
The perceptron successfully classified the inputs and demonstrated its
ability to solve linearly separable problems.
def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0, x)
def initialize_params():
    W1 = np.random.randn(input_size, hidden_size)   # Weights for hidden layer
    b1 = np.zeros((1, hidden_size))                 # Bias for hidden layer
    W2 = np.random.randn(hidden_size, output_size)  # Weights for output layer
    b2 = np.zeros((1, output_size))                 # Bias for output layer
    return W1, b1, W2, b2
# Backpropagation
dA2 = 2 * (A2 - y) / y.size # Derivative of MSE
dZ2 = dA2 * A2 * (1 - A2) # Derivative of Sigmoid
dW2 = np.dot(A1.T, dZ2)
db2 = np.sum(dZ2, axis=0, keepdims=True)
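The fragment above omits the sigmoid activation, the forward pass, and the remaining gradient and update steps. A minimal sketch of those pieces, assuming a single-hidden-layer network with a sigmoid output and MSE loss as the comments indicate (the sigmoid hidden activation and the name learning_rate are assumptions), could be:

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Forward pass (sketch)
Z1 = np.dot(X, W1) + b1
A1 = sigmoid(Z1)                 # or tanh(Z1) / relu(Z1), depending on the run
Z2 = np.dot(A1, W2) + b2
A2 = sigmoid(Z2)
loss = np.mean((A2 - y) ** 2)    # MSE loss

# Remaining backpropagation steps for the hidden layer
dA1 = np.dot(dZ2, W2.T)
dZ1 = dA1 * A1 * (1 - A1)        # derivative of the (assumed) sigmoid hidden activation
dW1 = np.dot(X.T, dZ1)
db1 = np.sum(dZ1, axis=0, keepdims=True)

# Gradient-descent parameter update (learning_rate is an assumed name)
W1 -= learning_rate * dW1
b1 -= learning_rate * db1
W2 -= learning_rate * dW2
b2 -= learning_rate * db2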
Output:
Training with sigmoid activation function:
Epoch 0, Loss: 0.2634441242433619
Epoch 1000, Loss: 0.19986959993983425
Epoch 2000, Loss: 0.1105626932304368
Epoch 3000, Loss: 0.07182504045692777
Epoch 4000, Loss: 0.05263045389550604
Epoch 5000, Loss: 0.0416202407769372
Epoch 6000, Loss: 0.03460977332101534
Epoch 7000, Loss: 0.02966756087198236
Epoch 8000, Loss: 0.025967358258101772
Epoch 9000, Loss: 0.023082678988062363