
Experiment 3

Write a program to implement Backpropagation

Aim: Write a program to implement Backpropagation

Theory: Backpropagation Algorithm

Backpropagation is an algorithm that propagates the error from the output nodes back to the
input nodes; hence it is simply referred to as the backward propagation of errors. It is used in
many neural-network applications in data mining, such as character recognition and
signature verification.

Neural Network:

Neural networks are an information processing paradigm inspired by the human nervous
system. Just as the human nervous system has biological neurons, neural networks have
artificial neurons; artificial neurons are mathematical functions derived from biological
neurons. The human brain is estimated to have about 10 billion neurons, each connected to an
average of 10,000 other neurons. Each neuron receives a signal through a synapse, which
controls the effect of that signal on the neuron.

Backpropagation:
Backpropagation is a widely used algorithm for training feedforward neural networks. It
computes the gradient of the loss function with respect to the network weights, and it does so
far more efficiently than naively computing the gradient for each weight individually. This
efficiency makes it feasible to use gradient methods, such as gradient descent or stochastic
gradient descent, to train multi-layer networks and update their weights to minimize the loss.
The backpropagation algorithm works by computing the gradient of the loss function with
respect to each weight via the chain rule, computing the gradient layer by layer, and iterating
backward from the last layer to avoid redundant computation of intermediate terms in the
chain rule.
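As a concrete illustration of this layer-by-layer chain rule, the short sketch below computes the gradient of a squared-error loss with respect to both weights of a tiny one-input, one-hidden-unit, one-output network by multiplying local derivatives backward from the output. It is a minimal, self-contained example; the network size and the numeric values are assumptions chosen only for illustration.

import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Tiny network: one input x, one hidden unit, one output unit (illustrative values)
x, t = 0.5, 1.0          # input and target
v, w = 0.4, 0.7          # hidden-layer weight and output-layer weight

# Forward pass
z = sigmoid(v * x)       # hidden activation
y = sigmoid(w * z)       # output activation
loss = 0.5 * (t - y) ** 2

# Backward pass: apply the chain rule starting from the last layer
delta_out = (y - t) * y * (1 - y)        # dLoss / d(net input of the output unit)
delta_hid = delta_out * w * z * (1 - z)  # reuse delta_out instead of recomputing the chain
grad_w = delta_out * z                   # dLoss/dw
grad_v = delta_hid * x                   # dLoss/dv
print(grad_w, grad_v)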

Features of Backpropagation:

1. It uses the gradient descent method, as in a simple perceptron network, but with
differentiable units.
2. It differs from other networks in the way the weights are calculated during the learning
phase of the network.
3. Training is done in three stages:
 the feed-forward of the input training pattern
 the calculation and backpropagation of the error
 the updating of the weights
Working of Backpropagation:
Neural networks use supervised learning to generate output vectors from the input vectors that
the network operates on. The generated output is compared with the desired output, and an
error is computed when the two do not match. The weights are then adjusted according to this
error so that the network's output moves toward the desired output.

Backpropagation Algorithm:

Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using real weights W. The weights are usually chosen randomly.
Step 3: Calculate the output of each neuron from the input layer, through the hidden layer, to
the output layer.
Step 4: Calculate the error in the outputs:
Backpropagation Error = Actual Output – Desired Output
Step 5: From the output layer, go back to the hidden layer and adjust the weights to reduce the
error.
Step 6: Repeat the process until the desired output is achieved.
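For reference, the sketch below strings these six steps together for a single training row, calling the functions that are defined later in the CODE section of this experiment; the one-hot expected vector plays the role of the desired output.

# One pass of the above loop for a single training row, using the functions
# defined in the CODE section below (the last element of row is the class label)
def train_step(network, row, expected, l_rate):
    actual = forward_propagate(network, row)           # Steps 1-3: forward pass
    error = [a - e for a, e in zip(actual, expected)]  # Step 4: actual output - desired output
    backward_propagate_error(network, expected)        # Step 5: propagate the error backwards
    update_weights(network, row, l_rate)               # adjust the weights to reduce the error
    return sum(e * e for e in error)                   # Step 6: repeat until this error is small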
Parameters:
 x = input training vector, x = (x1, x2, …, xn).
 t = target vector, t = (t1, t2, …, tn).
 δk = error term at output unit yk.
 δj = error term at hidden unit zj.
 α = learning rate.
 v0j = bias of hidden unit j.
Training Algorithm:
Step 1: Initialize the weights to small random values.
Step 2: While the stopping condition is false, do steps 3 to 9.
Step 3: For each training pair, do steps 4 to 8 (feed-forward).
Step 4: Each input unit receives the input signal xi and transmits it to all units in the hidden
layer.
Step 5: Each hidden unit zj (j = 1 to a) sums its weighted input signals to compute its net
input:
zinj = v0j + Σi xi vij (i = 1 to n)
It applies the activation function, zj = f(zinj), and sends this signal to all units in the layer
above, i.e. the output units.
Each output unit yk (k = 1 to m) sums its weighted input signals:
yink = w0k + Σj zj wjk (j = 1 to a)
and applies its activation function to compute the output signal:
yk = f(yink)
Backpropagation of Error:
Step 6: Each output unit yk (k = 1 to m) receives the target pattern corresponding to the input
pattern, and its error information term is calculated as:
δk = (tk – yk) f'(yink)
Step 7: Each hidden unit zj (j = 1 to a) sums its delta inputs from the units in the layer above:
δinj = Σk δk wjk (k = 1 to m)
and its error information term is calculated as:
δj = δinj f'(zinj)
Updating of weights and biases:
Step 8: Each output unit yk (k = 1 to m) updates its bias and weights (j = 0 to a). The weight
correction term is:
Δwjk = α δk zj
and the bias correction term is:
Δw0k = α δk
Therefore wjk(new) = wjk(old) + Δwjk and w0k(new) = w0k(old) + Δw0k.
Each hidden unit zj (j = 1 to a) updates its bias and weights (i = 0 to n). The weight
correction term is:
Δvij = α δj xi
and the bias correction term is:
Δv0j = α δj
Therefore vij(new) = vij(old) + Δvij and v0j(new) = v0j(old) + Δv0j.
Step 9: Test the stopping condition. The stopping condition can be the minimization of the
error or a fixed number of epochs.
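For concreteness, here is a minimal NumPy sketch of steps 4 to 8 for a single training pair in the notation above (x, t, v, w, α); the layer sizes and sample values are assumptions chosen only for illustration.

import numpy as np

def f(net):                      # sigmoid activation function
    return 1.0 / (1.0 + np.exp(-net))

def f_prime(net):                # derivative of the sigmoid
    s = f(net)
    return s * (1.0 - s)

# Assumed sizes and sample data: n inputs, a hidden units, m outputs
n, a, m, alpha = 2, 3, 1, 0.5
x = np.array([0.1, 0.9])                   # input training vector
t = np.array([1.0])                        # target vector
v0, v = np.zeros(a), np.random.rand(n, a)  # hidden biases v0j and weights vij
w0, w = np.zeros(m), np.random.rand(a, m)  # output biases w0k and weights wjk

# Feed-forward (steps 4 and 5)
z_in = v0 + x @ v                          # zinj = v0j + sum_i xi*vij
z = f(z_in)                                # zj = f(zinj)
y_in = w0 + z @ w                          # yink = w0k + sum_j zj*wjk
y = f(y_in)                                # yk = f(yink)

# Backpropagation of error (steps 6 and 7)
delta_k = (t - y) * f_prime(y_in)          # delta_k = (tk - yk) f'(yink)
delta_j = (w @ delta_k) * f_prime(z_in)    # delta_j = (sum_k delta_k*wjk) f'(zinj)

# Updating of weights and biases (step 8)
w += alpha * np.outer(z, delta_k)          # delta_wjk = alpha*delta_k*zj
w0 += alpha * delta_k                      # delta_w0k = alpha*delta_k
v += alpha * np.outer(x, delta_j)          # delta_vij = alpha*delta_j*xi
v0 += alpha * delta_j                      # delta_v0j = alpha*delta_j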

Advantages:

 It is simple, fast, and easy to program.
 It has no parameters to tune apart from the number of inputs.
 It is flexible and efficient.
 There is no need for users to learn any special functions.

Disadvantages:

 It is sensitive to noisy data and irregularities; noisy data can lead to inaccurate results.
 Performance is highly dependent on the input data.
 Training can be time-consuming.
 A matrix-based approach is preferred over a mini-batch approach.
CODE

from math import exp
from random import seed
from random import random

# Initialize a network with one hidden layer and one output layer
def initialize_network(n_inputs, n_hidden, n_outputs):
    network = list()
    hidden_layer = [{'weights': [random() for i in range(n_inputs + 1)], 'delta': 0.0} for i in range(n_hidden)]
    network.append(hidden_layer)
    output_layer = [{'weights': [random() for i in range(n_hidden + 1)], 'delta': 0.0} for i in range(n_outputs)]
    network.append(output_layer)
    return network

# Calculate neuron activation for an input (the last weight is the bias)
def activate(weights, inputs):
    activation = weights[-1]
    for i in range(len(weights) - 1):
        activation += weights[i] * inputs[i]
    return activation

# Transfer neuron activation (sigmoid)
def transfer(activation):
    return 1.0 / (1.0 + exp(-activation))

# Forward propagate input to a network output
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            activation = activate(neuron['weights'], inputs)
            neuron['output'] = transfer(activation)
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs

# Calculate the derivative of a neuron output (sigmoid derivative)
def transfer_derivative(output):
    return output * (1.0 - output)

# Backpropagate error and store it in neurons
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = list()
        if i != len(network) - 1:
            # Hidden layer: error is the weighted sum of the deltas in the layer above
            for j in range(len(layer)):
                error = 0.0
                for neuron in network[i + 1]:
                    error += neuron['weights'][j] * neuron['delta']
                errors.append(error)
        else:
            # Output layer: error = desired - actual, matching delta_k = (tk - yk) f'(yink),
            # so the weight update below reduces the error
            for j in range(len(layer)):
                neuron = layer[j]
                errors.append(expected[j] - neuron['output'])
        for j in range(len(layer)):
            neuron = layer[j]
            neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])

# Update network weights with error
def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row[:-1]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
            neuron['weights'][-1] += l_rate * neuron['delta']

# Train a network for a fixed number of epochs
def train_network(network, train, l_rate, n_epoch, n_outputs):
    for epoch in range(n_epoch):
        sum_error = 0
        for row in train:
            outputs = forward_propagate(network, row)
            expected = [0 for i in range(n_outputs)]
            expected[int(row[-1])] = 1
            sum_error += sum([(expected[i] - outputs[i]) ** 2 for i in range(len(expected))])
            backward_propagate_error(network, expected)
            update_weights(network, row, l_rate)
        print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))

# Test the backpropagation training algorithm on a small two-class dataset
seed(1)
dataset = [[2.7810836, 2.550537003, 0],
           [1.465489372, 2.362125076, 0],
           [3.396561688, 4.400293529, 0],
           [1.38807019, 1.850220317, 0],
           [3.06407232, 3.005305973, 0],
           [7.627531214, 2.759262235, 1],
           [5.332441248, 2.088626775, 1],
           [6.922596716, 1.77106367, 1],
           [8.675418651, -0.242068655, 1],
           [7.673756466, 3.508563011, 1]]
n_inputs = len(dataset[0]) - 1
n_outputs = len(set([row[-1] for row in dataset]))
network = initialize_network(n_inputs, 2, n_outputs)
train_network(network, dataset, 0.5, 20, n_outputs)
for layer in network:
    print(layer)
OUTPUT

>epoch=0, lrate=0.500, error=7.120
>epoch=1, lrate=0.500, error=8.188
>epoch=2, lrate=0.500, error=8.765
>epoch=3, lrate=0.500, error=9.086
>epoch=4, lrate=0.500, error=9.283
>epoch=5, lrate=0.500, error=9.413
>epoch=6, lrate=0.500, error=9.505
>epoch=7, lrate=0.500, error=9.573
>epoch=8, lrate=0.500, error=9.625
>epoch=9, lrate=0.500, error=9.666
>epoch=10, lrate=0.500, error=9.699
>epoch=11, lrate=0.500, error=9.726
>epoch=12, lrate=0.500, error=9.749
>epoch=13, lrate=0.500, error=9.768
>epoch=14, lrate=0.500, error=9.785
>epoch=15, lrate=0.500, error=9.799
>epoch=16, lrate=0.500, error=9.812
>epoch=17, lrate=0.500, error=9.823
>epoch=18, lrate=0.500, error=9.833
>epoch=19, lrate=0.500, error=9.842

[{'weights': [0.13436424411240122, 0.8474337369372327, 0.763774618976614], 'delta': 0.0, 'output': 0.99157530623528},
{'weights': [0.2550690257394217, 0.49543508709194095, 0.4494910647887381], 'delta': 0.0, 'output': 0.9844051047537846}]
[{'weights': [1.7934448647943648, 1.9521366824912416, 1.3019278533060115], 'delta': 0.006654657927291692, 'output': 0.9932546494771431},
{'weights': [1.2978585045098532, 2.032370921636056, 1.7533896304527565], 'delta': -4.102043278256817e-05, 'output': 0.9935746043075538}]
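The trained network can then be used to classify a new point by taking the index of its largest output. A minimal sketch follows; the sample point is hypothetical, and the trailing class slot in the row is ignored by forward_propagate.

# Classify a new, hypothetical point with the trained network
def predict(network, row):
    outputs = forward_propagate(network, row)
    return outputs.index(max(outputs))

print(predict(network, [3.0, 2.9, None]))   # a point near the class-0 cluster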
