DEEP LEARNING LAB PRACTICALS

The document outlines practical implementations of deep learning concepts, including the McCulloch-Pitts neuron model for basic logic gates, the Perceptron learning algorithm for binary classification, and various activation functions in a neural network for the XOR problem. Each section details the aim, procedure, code, and results, demonstrating the effectiveness of the models and algorithms in solving specific tasks. Key findings indicate that different activation functions impact training performance, with ReLU and Leaky ReLU showing faster convergence compared to Sigmoid and Tanh.

1. Implementation of the McCulloch-Pitts neuron model for basic logic gates (AND, OR, and NOT)
Aim:
To simulate the McCulloch-Pitts neuron model for implementing basic
logic gates such as AND, OR, and NOT.
Procedure:
1. Define the McCulloch-Pitts neuron using weighted inputs, a threshold,
and a step function.
2. Set weights and thresholds for AND, OR, and NOT gates.
3. Test the gates with input combinations and compute outputs.
Code:
import numpy as np

# Step function for activation
def step_function(x):
    return 1 if x >= 0 else 0

# McCulloch-Pitts neuron
def mcculloch_pitts(inputs, weights, threshold):
    weighted_sum = np.dot(inputs, weights)
    return step_function(weighted_sum - threshold)

# Logic gate implementation
def logic_gates():
    # AND Gate
    print("AND Gate:")
    and_weights = [1, 1]
    and_threshold = 2
    for inputs in [[0, 0], [0, 1], [1, 0], [1, 1]]:
        output = mcculloch_pitts(inputs, and_weights, and_threshold)
        print(f"Input: {inputs}, Output: {output}")

    # OR Gate
    print("\nOR Gate:")
    or_weights = [1, 1]
    or_threshold = 1
    for inputs in [[0, 0], [0, 1], [1, 0], [1, 1]]:
        output = mcculloch_pitts(inputs, or_weights, or_threshold)
        print(f"Input: {inputs}, Output: {output}")

    # NOT Gate
    print("\nNOT Gate:")
    not_weights = [-1]
    not_threshold = 0
    for inputs in [[0], [1]]:
        output = mcculloch_pitts(inputs, not_weights, not_threshold)
        print(f"Input: {inputs}, Output: {output}")

# Run the logic gates
logic_gates()

Output:
AND Gate:
Input: [0, 0], Output: 0
Input: [0, 1], Output: 0
Input: [1, 0], Output: 0
Input: [1, 1], Output: 1

OR Gate:
Input: [0, 0], Output: 0
Input: [0, 1], Output: 1
Input: [1, 0], Output: 1
Input: [1, 1], Output: 1

NOT Gate:
Input: [0], Output: 1
Input: [1], Output: 0

Result:
The McCulloch-Pitts neuron model accurately implements basic logic
gates using predefined weights and thresholds.
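
As a quick check, the gate outputs can also be compared against the full truth tables programmatically. The snippet below is a minimal sketch that reuses the step_function and mcculloch_pitts definitions from the code above; the expected-output tables are written out here only for illustration.

# Sanity check: compare each gate against its truth table.
# Assumes step_function and mcculloch_pitts from the code above are in scope.
truth_tables = {
    "AND": ([1, 1], 2, {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}),
    "OR":  ([1, 1], 1, {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}),
    "NOT": ([-1], 0, {(0,): 1, (1,): 0}),
}

for name, (weights, threshold, table) in truth_tables.items():
    for inputs, expected in table.items():
        assert mcculloch_pitts(list(inputs), weights, threshold) == expected
    print(f"{name} gate matches its truth table.")
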
2. Implementation of Perceptron Learning Algorithm for Binary Classification
Aim:
To implement the perceptron learning algorithm and use it to classify inputs
based on the logical AND operation.
Procedure:
1. Define a perceptron with initialized weights and a step activation function.
2. Use a dataset for an AND gate:
• Inputs: [0, 0], [0, 1], [1, 0], [1, 1]
• Outputs: [0, 0, 0, 1].
3. Train the perceptron using the perceptron learning rule over 10 epochs (one update step is worked through after this list).
4. Test the perceptron on the dataset and a new input [1, 1].
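
To make the learning rule concrete, a single update can be worked through by hand. The fragment below is a minimal standalone sketch (the full training loop appears in the code below): with zero initial weights, the step activation fires for the first sample [0, 0], so only the bias is pushed down by learning_rate * error.

# One manual application of the perceptron learning rule (learning_rate = 0.1),
# starting from zero weights, for the sample [0, 0] with target 0.
import numpy as np

weights = np.zeros(3)                                        # [bias, w1, w2]
x_with_bias = np.array([1, 0, 0])                            # bias input 1, then features [0, 0]
prediction = 1 if np.dot(weights, x_with_bias) >= 0 else 0   # step activation -> 1
error = 0 - prediction                                       # target 0, prediction 1 -> error -1
weights = weights + 0.1 * error * x_with_bias
print(weights)                                               # only the bias changes: [-0.1, 0.0, 0.0]
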
Code:
import numpy as np

# Step 1: Define the Perceptron class
class Perceptron:
    def __init__(self, input_size, learning_rate=0.01, epochs=100):
        self.weights = np.zeros(input_size + 1)  # Initialize weights (including bias) to 0
        self.learning_rate = learning_rate
        self.epochs = epochs

    def activation_function(self, x):
        return 1 if x >= 0 else 0  # Step activation function

    def predict(self, inputs):
        # Add bias term (1) to the inputs
        x_with_bias = np.insert(inputs, 0, 1)
        # Calculate weighted sum and apply activation function
        z = np.dot(self.weights, x_with_bias)
        return self.activation_function(z)

    def train(self, X, y):
        for epoch in range(self.epochs):
            for i in range(len(X)):
                x_with_bias = np.insert(X[i], 0, 1)  # Add bias term (1) to the inputs
                prediction = self.predict(X[i])      # Make a prediction
                error = y[i] - prediction            # Calculate the error
                # Update weights using the perceptron learning rule
                self.weights += self.learning_rate * error * x_with_bias

# Step 2: Prepare the dataset
# Simple dataset for binary classification (AND gate)
# Input: two features; Output: 1 for True, 0 for False
X = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
])

y = np.array([0, 0, 0, 1])  # AND gate output

# Step 3: Train the perceptron
perceptron = Perceptron(input_size=2, learning_rate=0.1, epochs=10)
perceptron.train(X, y)

# Step 4: Test the perceptron
print("Testing the perceptron:")
for i in range(len(X)):
    print(f"Input: {X[i]}, Predicted: {perceptron.predict(X[i])}, Actual: {y[i]}")

# Test on new data
new_input = np.array([1, 1])
print(f"New Input: {new_input}, Predicted: {perceptron.predict(new_input)}")

Output:
Testing the perceptron:
Input: [0 0], Predicted: 0, Actual: 0
Input: [0 1], Predicted: 0, Actual: 0
Input: [1 0], Predicted: 0, Actual: 0
Input: [1 1], Predicted: 1, Actual: 1
New Input: [1 1], Predicted: 1
Result:
The perceptron successfully classified the inputs and demonstrated its
ability to solve linearly separable problems.
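
Since the perceptron defines a linear decision boundary, the learned weights can be inspected directly. The snippet below is a minimal sketch assuming the trained perceptron object from the code above is still in scope; the exact weight values depend on the training run.

# Inspect the learned parameters: bias first, then one weight per input feature.
# Assumes the trained `perceptron` object from the code above is in scope.
bias, w1, w2 = perceptron.weights
print(f"Decision rule: output = 1 if {bias:.2f} + {w1:.2f}*x1 + {w2:.2f}*x2 >= 0 else 0")

# Manual check for input [1, 0]: the perceptron predicted 0 for this input above,
# so the weighted sum should be negative.
z = bias + w1 * 1 + w2 * 0
print(f"Input [1, 0]: weighted sum = {z:.2f}, output = {1 if z >= 0 else 0}")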

3. Implementation of different activation functions to train a simple neural network
Aim:
To compare the performance of different activation functions (Sigmoid, Tanh,
ReLU, Leaky ReLU) in training a neural network on the XOR problem.
Procedure:
1. Define the activation functions (Sigmoid, Tanh, ReLU, Leaky ReLU).
2. Use the XOR dataset as input-output pairs.
3. Initialize neural network parameters (weights and biases).
4. Perform forward propagation and backpropagation with gradient descent (a one-parameter gradient-descent sketch follows this list).
5. Train the network for 10000 epochs and track the loss.
6. Visualize the behavior of each activation function.
7. Compare the performance of each activation function by printing the loss
every 1000 epochs.
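
Steps 4 and 5 rely on the gradient-descent update rule: each parameter is moved against its gradient, parameter = parameter - learning_rate * gradient. The fragment below is a minimal standalone illustration of that rule on a one-parameter toy loss; it is only a sketch, separate from the XOR network trained in the code section.

# Minimal illustration of gradient descent on the toy loss (w - 3)^2.
# The gradient is 2 * (w - 3); repeated updates move w toward the minimum at w = 3.
w = 0.0
learning_rate = 0.1
for step in range(50):
    grad = 2 * (w - 3)
    w -= learning_rate * grad
print(round(w, 4))  # approximately 3.0 after 50 updates
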
Code:
import numpy as np
import matplotlib.pyplot as plt

# Define activation functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Defined for reference; not used in this experiment
    e_x = np.exp(x - np.max(x))  # Subtract max for numerical stability
    return e_x / e_x.sum(axis=0, keepdims=True)

# Generate some sample data (XOR problem)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # Inputs
y = np.array([[0], [1], [1], [0]])              # XOR outputs

# Initialize weights and biases
input_size = 2
hidden_size = 4
output_size = 1

def initialize_params():
    W1 = np.random.randn(input_size, hidden_size)   # Weights for hidden layer
    b1 = np.zeros((1, hidden_size))                 # Bias for hidden layer
    W2 = np.random.randn(hidden_size, output_size)  # Weights for output layer
    b2 = np.zeros((1, output_size))                 # Bias for output layer
    return W1, b1, W2, b2

def forward(X, W1, b1, W2, b2, activation_function):
    # Forward pass
    Z1 = np.dot(X, W1) + b1       # Hidden layer linear transformation
    A1 = activation_function(Z1)  # Apply activation function
    Z2 = np.dot(A1, W2) + b2      # Output layer linear transformation
    A2 = sigmoid(Z2)              # Sigmoid for final output
    return A1, A2

# Train the neural network
def train(X, y, activation_function, epochs=10000, learning_rate=0.1):
    W1, b1, W2, b2 = initialize_params()

    for epoch in range(epochs):
        A1, A2 = forward(X, W1, b1, W2, b2, activation_function)

        # Compute loss (Mean Squared Error)
        loss = np.mean((y - A2) ** 2)

        # Backpropagation
        dA2 = 2 * (A2 - y) / y.size  # Derivative of MSE
        dZ2 = dA2 * A2 * (1 - A2)    # Derivative of sigmoid at the output
        dW2 = np.dot(A1.T, dZ2)
        db2 = np.sum(dZ2, axis=0, keepdims=True)

        dA1 = np.dot(dZ2, W2.T)

        # Derivative of the hidden-layer activation
        if activation_function == relu:
            dZ1 = dA1 * (A1 > 0)                     # ReLU derivative
        elif activation_function == leaky_relu:
            dZ1 = dA1 * np.where(A1 > 0, 1.0, 0.01)  # Leaky ReLU derivative
        elif activation_function == sigmoid:
            dZ1 = dA1 * A1 * (1 - A1)                # Sigmoid derivative
        else:
            dZ1 = dA1 * (1 - A1 ** 2)                # Tanh derivative
        dW1 = np.dot(X.T, dZ1)
        db1 = np.sum(dZ1, axis=0, keepdims=True)

        # Update weights and biases using gradient descent
        W1 -= learning_rate * dW1
        b1 -= learning_rate * db1
        W2 -= learning_rate * dW2
        b2 -= learning_rate * db2

        # Print loss every 1000 epochs
        if epoch % 1000 == 0:
            print(f"Epoch {epoch}, Loss: {loss}")

    return W1, b1, W2, b2

# Visualize the behavior of each activation function
def plot_activation_functions():
    functions = [sigmoid, tanh, relu, leaky_relu]
    labels = ["Sigmoid", "Tanh", "ReLU", "Leaky ReLU"]

    x = np.linspace(-10, 10, 100)

    plt.figure(figsize=(10, 6))
    for i, func in enumerate(functions):
        y = func(x)
        plt.plot(x, y, label=labels[i])

    plt.title("Comparison of Activation Functions")
    plt.xlabel('Input')
    plt.ylabel('Output')
    plt.legend()
    plt.grid(True)
    plt.show()

# Train the neural network with each activation function
def compare_activations():
    for activation_function in [sigmoid, tanh, relu, leaky_relu]:
        print(f"Training with {activation_function.__name__} activation function:")
        _, _, _, _ = train(X, y, activation_function)
        print("\n")

# Plot activation functions
plot_activation_functions()

# Compare the performance of each activation function
compare_activations()

Output:
Training with sigmoid activation function:
Epoch 0, Loss: 0.2634441242433619
Epoch 1000, Loss: 0.19986959993983425
Epoch 2000, Loss: 0.1105626932304368
Epoch 3000, Loss: 0.07182504045692777
Epoch 4000, Loss: 0.05263045389550604
Epoch 5000, Loss: 0.0416202407769372
Epoch 6000, Loss: 0.03460977332101534
Epoch 7000, Loss: 0.02966756087198236
Epoch 8000, Loss: 0.025967358258101772
Epoch 9000, Loss: 0.023082678988062363

Training with tanh activation function:
Epoch 0, Loss: 0.2731560076098144
Epoch 1000, Loss: 0.01220064528287151
Epoch 2000, Loss: 0.004915873757601439
Epoch 3000, Loss: 0.0029845461482293093
Epoch 4000, Loss: 0.0021208509243074714
Epoch 5000, Loss: 0.001636931017719518
Epoch 6000, Loss: 0.001329213584249857
Epoch 7000, Loss: 0.0011169866940462297
Epoch 8000, Loss: 0.0009620994619061157
Epoch 9000, Loss: 0.0008442500853734964

Training with relu activation function:
Epoch 0, Loss: 0.21849151704789851
Epoch 1000, Loss: 0.030540625845408283
Epoch 2000, Loss: 0.006045868370694794
Epoch 3000, Loss: 0.0027605365952435485
Epoch 4000, Loss: 0.001685720852234636
Epoch 5000, Loss: 0.0011807163044231018
Epoch 6000, Loss: 0.0008952894218862508
Epoch 7000, Loss: 0.0007136748260250899
Epoch 8000, Loss: 0.0005899492466833552
Epoch 9000, Loss: 0.0005002589577054359

Training with leaky_relu activation function:
Epoch 0, Loss: 0.25017879127741915
Epoch 1000, Loss: 0.019359151259485695
Epoch 2000, Loss: 0.005081120270765106
Epoch 3000, Loss: 0.002691055738705916
Epoch 4000, Loss: 0.0017874844662549762
Epoch 5000, Loss: 0.0013221740014998137
Epoch 6000, Loss: 0.0010426339956551727
Epoch 7000, Loss: 0.00085718234892018
Epoch 8000, Loss: 0.0007258770979065447
Epoch 9000, Loss: 0.0006282159609216861

Result:
• Loss decreases over time for each activation function.
• Sigmoid and Tanh: smooth outputs, but both suffer from vanishing gradients for large-magnitude inputs.
• ReLU and Leaky ReLU: faster convergence and reduced vanishing-gradient problems, especially Leaky ReLU, which allows small gradients for negative inputs (illustrated in the sketch below).
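
The vanishing-gradient point can be illustrated numerically: the sigmoid derivative, sigmoid(x) * (1 - sigmoid(x)), is at most 0.25 and approaches zero for large positive or negative inputs, whereas the ReLU derivative stays at 1 for any positive input and the Leaky ReLU derivative stays at alpha (0.01 here) for negative inputs. The sketch below reuses the sigmoid function defined above; the derivative helpers are added only for this illustration.

# Illustrate how activation-function gradients behave for different inputs.
# Assumes sigmoid from the code above is in scope; the helpers below are for this check only.
def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0

def leaky_relu_grad(x, alpha=0.01):
    return 1.0 if x > 0 else alpha

for x in [-10.0, -2.0, 0.5, 10.0]:
    print(f"x = {x:6.1f} | sigmoid' = {sigmoid_grad(x):.6f} | "
          f"relu' = {relu_grad(x):.2f} | leaky_relu' = {leaky_relu_grad(x):.2f}")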
