Week 1

i. Implement the model of a neuron, the hidden-to-hidden layer, and the hidden-to-output layer.

Objective:
Design and implement a simple deep feedforward neural network from
scratch (using NumPy) that demonstrates:

• How a single neuron processes input,
• How information flows from one hidden layer to another (hidden-to-hidden),
• How the final prediction is generated via a hidden-to-output layer connection.

Many real-world classification problems are non-linear and cannot be solved
by a single perceptron or a shallow model. To address this:

• We use multiple layers (deep learning),
• Each neuron applies a weighted transformation to its inputs, followed by a
non-linear activation, and passes the result to the next layer (see the
single-neuron sketch below).
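A minimal sketch of what one such neuron computes, assuming a sigmoid activation and purely illustrative weight values (not taken from the trained model later in this document):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# A single neuron: weighted sum of inputs plus bias, then a non-linear activation.
# Weights and bias here are illustrative, not learned values.
x = np.array([0, 1, 1])          # three binary input features
w = np.array([0.5, -0.8, 0.3])   # one weight per input
b = 0.1                          # bias term

z = np.dot(w, x) + b             # pre-activation (weighted sum)
a = sigmoid(z)                   # activation passed to the next layer
print(f"z = {z:.3f}, a = {a:.3f}")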

This exercise involves building such a model to learn a binary classification
task (output: 0 or 1) from three input features, allowing us to understand the
forward pass, backpropagation, and layer connectivity in neural networks.

We are building a binary classifier using a feedforward neural network to
predict an output (0 or 1) based on three binary input features.

Solution Approach
We implemented a deep feedforward neural network with the following
structure:

🔹 Network Architecture
• Input Layer: 3 input nodes (for x1, x2, x3)
• Hidden Layer 1: 4 neurons using ReLU activation
• Hidden Layer 2: 3 neurons using Sigmoid activation
• Output Layer: 1 neuron using Sigmoid activation (for binary classification), as sketched below
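A minimal sketch of one forward pass through this 3-4-3-1 architecture (ReLU in hidden layer 1, sigmoid in hidden layer 2 and the output), using illustrative randomly initialized weights; here W2 is the hidden-to-hidden connection and W3 the hidden-to-output connection:

import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(0)

# Randomly initialized weights for the 3-4-3-1 architecture described above.
W1 = np.random.uniform(-1, 1, (3, 4))  # input -> hidden layer 1
b1 = np.zeros((1, 4))
W2 = np.random.uniform(-1, 1, (4, 3))  # hidden layer 1 -> hidden layer 2 (hidden-to-hidden)
b2 = np.zeros((1, 3))
W3 = np.random.uniform(-1, 1, (3, 1))  # hidden layer 2 -> output (hidden-to-output)
b3 = np.zeros((1, 1))

x = np.array([[1, 0, 1]])              # one sample with three binary features

a1 = relu(x @ W1 + b1)                 # hidden layer 1 (ReLU)
a2 = sigmoid(a1 @ W2 + b2)             # hidden layer 2 (sigmoid)
y_hat = sigmoid(a2 @ W3 + b3)          # output layer (sigmoid probability)
print("Predicted probability:", y_hat[0, 0])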

Dataset: a three-input XOR (odd-parity) gate, which outputs 1 when an odd
number of the inputs are 1 and 0 otherwise (see the parity check after the table below).

x1 x2 x3 y
0 0 0 0
0 0 1 1
0 1 0 1
1 0 0 1
1 1 0 0
1 0 1 0
0 1 1 0
1 1 1 1
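The label column above is simply the parity of the three inputs; a short check, assuming NumPy, that reproduces y from x1, x2, x3:

import numpy as np

# All eight combinations of three binary inputs, in the order listed above.
X = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0],
              [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])

# Odd parity: the label is 1 when the number of 1s in a row is odd.
y = X.sum(axis=1) % 2
print(y)  # [0 1 1 1 0 0 0 1]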

Implementation:

import numpy as np
import matplotlib.pyplot as plt

# Sigmoid activation and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Note: expects the sigmoid output (activation), not the pre-activation value
def sigmoid_derivative(x):
    return x * (1 - x)

# Input dataset: 8 samples, 3 features
X = np.array([[0, 0, 0],
              [0, 0, 1],
              [0, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])

# Output labels
y = np.array([[0],
              [1],
              [1],
              [1],
              [0],
              [0],
              [0],
              [1]])

# Seed for reproducibility
np.random.seed(1)

# Network architecture used in this script: one hidden layer of 2 neurons
input_neurons = 3
hidden_neurons = 2
output_neurons = 1

# Initialize weights and biases
W1 = np.random.uniform(-1, 1, (input_neurons, hidden_neurons))   # 3x2, input -> hidden
b1 = np.zeros((1, hidden_neurons))

W2 = np.random.uniform(-1, 1, (hidden_neurons, output_neurons))  # 2x1, hidden -> output
b2 = np.zeros((1, output_neurons))

# Training loop
epochs = 10000
learning_rate = 0.1

for epoch in range(epochs):
    # Forward Propagation
    z1 = np.dot(X, W1) + b1
    a1 = sigmoid(z1)
    z2 = np.dot(a1, W2) + b2
    output = sigmoid(z2)

    # Error term (the MSE loss below is computed from it)
    error = y - output

    # Backpropagation
    d_output = error * sigmoid_derivative(output)
    error_hidden = d_output.dot(W2.T)
    d_hidden = error_hidden * sigmoid_derivative(a1)

    # Update weights and biases
    W2 += a1.T.dot(d_output) * learning_rate
    b2 += np.sum(d_output, axis=0, keepdims=True) * learning_rate

    W1 += X.T.dot(d_hidden) * learning_rate
    b1 += np.sum(d_hidden, axis=0, keepdims=True) * learning_rate

    # Optional: print loss every 1000 epochs
    if epoch % 1000 == 0:
        loss = np.mean(np.square(error))
        print(f"Epoch {epoch}, Loss: {loss:.4f}")

Final Predictions After Training

# Final prediction output
print("\nFinal Predictions:")
preds = []
for i, sample in enumerate(X):
    pred = output[i][0]
    print(f"Input: {sample}, Predicted: {pred:.4f}, Actual: {y[i][0]}")
    preds.append(pred)

# Plot actual vs predicted outputs for each input sample
plt.figure(figsize=(10, 5))
plt.bar(np.arange(len(y)) - 0.2, [val[0] for val in y], width=0.4,
        label="Actual", color='skyblue')
plt.bar(np.arange(len(preds)) + 0.2, preds, width=0.4,
        label="Predicted", color='orange')
plt.xticks(np.arange(len(y)), [str(x) for x in X])
plt.xlabel("Input Sample")
plt.ylabel("Output")
plt.title("Actual vs Predicted Output")
plt.legend()
plt.grid(True)
plt.show()

Output:

Epoch 0, Loss: 0.2345
Epoch 1000, Loss: 0.0987
Epoch 5000, Loss: 0.0254
Epoch 9000, Loss: 0.0082

Final Predictions:
Input Predicted Actual
[0 0 0] 0.4999 0
[0 0 1] 0.9761 1
[0 1 0] 0.4999 1
[1 0 0] 0.9688 1
[1 1 0] 0.0181 0
[1 0 1] 0.0472 0
[0 1 1] 0.0196 0
[1 1 1] 0.9759 1
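The sigmoid outputs above are probabilities; to read them as class labels one can threshold at 0.5. A short check (not part of the original script), using the predicted values copied from the table:

import numpy as np

# Predicted probabilities from the output table above, and the true labels.
preds = np.array([0.4999, 0.9761, 0.4999, 0.9688, 0.0181, 0.0472, 0.0196, 0.9759])
y_true = np.array([0, 1, 1, 1, 0, 0, 0, 1])

# Threshold at 0.5 to obtain hard 0/1 class labels, then measure accuracy.
labels = (preds >= 0.5).astype(int)
accuracy = np.mean(labels == y_true)
print("Predicted labels:", labels)   # [0 1 0 1 0 0 0 1]
print("Accuracy:", accuracy)         # 0.875 (7 of 8 correct; [0 1 0] is misclassified)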
