Backpropagation

In a feedforward neural network, the input moves forward from the input layer
to the output layer. Backpropagation helps improve the neural network's output.
It does this by propagating the error backward from the output layer to the input
layer.
When a neural network is first trained, it is fed with input data. Since the
network is not trained yet, we do not know which weight to use for each input,
so each input connection is assigned a random weight. Because the weights are
random, the network will likely make wrong predictions and give incorrect output.
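As a minimal sketch of this random initialization (assuming NumPy; the layer sizes below are illustrative, not from the original text):

    import numpy as np

    n_inputs, n_hidden = 3, 4                   # illustrative sizes (assumption)
    rng = np.random.default_rng(0)
    # Small random values break symmetry: if all weights started out equal,
    # every hidden neuron would compute (and learn) exactly the same thing.
    W1 = rng.normal(0.0, 0.01, size=(n_inputs, n_hidden))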
What is an Artificial Neural Network?
A neural network is a group of connected input/output units where each connection has an
associated weight. It helps you build predictive models from large databases. The model is
loosely based on the human nervous system and is used for tasks such as image understanding,
human learning, and computer speech.

What is Backpropagation?
Backpropagation is the essence of neural network training. It is the method of fine-tuning the
weights of a neural network based on the error rate obtained in the previous epoch (i.e.,
iteration). Proper tuning of the weights reduces error rates and makes the model reliable by
improving its generalization.
Backpropagation is short for "backward propagation of errors." It is a standard method of
training artificial neural networks. This method calculates the gradient of a loss function
with respect to all the weights in the network.
Backpropagation allows us to readjust our weights to reduce output
error. During backpropagation, the error is propagated backward from the
output layer to the input layer. This error is then used to calculate the
gradient of the cost function with respect to each weight.

Essentially, backpropagation aims to calculate the negative gradient of
the cost function. This negative gradient is what guides the adjustment
of the weights: it tells us how we need to change the weights so that we
can reduce the cost function.
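As a concrete sketch of that adjustment, the standard gradient descent update (assuming a learning rate η, which the text does not specify) is:

    w_new = w_old - η · ∂C/∂w

Each weight takes a small step in the direction of the negative gradient, which is the direction that most quickly reduces the cost C.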

Backpropagation uses the chain rule to calculate the gradient of the cost
function. This means calculating the partial derivative of the cost with
respect to each parameter: we differentiate with respect to one weight
while treating the others as constants. Collecting these partial
derivatives gives us the gradient.
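For example, for a single neuron with pre-activation z = w·x + b, activation a = σ(z), and squared-error cost C = ½(a − y)², the chain rule decomposes the gradient for w as:

    ∂C/∂w = ∂C/∂a · ∂a/∂z · ∂z/∂w = (a − y) · σ′(z) · x

(This single-neuron setup is an illustrative assumption; the same decomposition is applied layer by layer in a full network.)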
How the Backpropagation Algorithm Works
The backpropagation algorithm computes the gradient of the loss function for a single weight by
the chain rule. It efficiently computes one layer at a time, unlike a naive direct computation.
It computes the gradient, but it does not define how the gradient is used. It generalizes the
computation in the delta rule.
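For reference, the delta rule it generalizes updates a single-layer weight w_i by

    Δw_i = η · (t − o) · x_i

where t is the target, o the actual output, x_i the input, and η the learning rate. Backpropagation extends this idea to hidden layers via the chain rule.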

Gradient: A gradient is a vector that indicates how much each parameter in a neural network needs
to change to reduce the error. The backpropagation algorithm uses the chain rule of calculus to
compute these gradients and propagate the error from the output layer back to the input layer of
a neural network.
The algorithm proceeds as follows (a minimal code sketch of these steps follows the list):
1. Inputs X arrive through the preconnected path.
2. The input is modeled using real weights W, which are usually selected randomly.
3. Calculate the output of every neuron, from the input layer through the hidden layers to the output layer.
4. Calculate the error in the outputs.
5. Travel back from the output layer to the hidden layers and adjust the weights so that the error decreases.
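Here is a minimal, self-contained sketch of these five steps for a one-hidden-layer network. The sigmoid activation, squared-error loss, layer sizes, data, and learning rate are all illustrative assumptions, not specified by the original text:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Steps 1-2: inputs X and randomly selected weights W
    X = np.array([[0.5, 0.1, 0.9]])              # one sample, three features
    y = np.array([[1.0]])                        # target output
    W1 = rng.normal(size=(3, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    lr = 0.1                                     # learning rate (assumption)
    for _ in range(100):
        # Step 3: forward pass, one layer at a time
        z1 = X @ W1 + b1; a1 = sigmoid(z1)       # hidden layer
        z2 = a1 @ W2 + b2; a2 = sigmoid(z2)      # output layer

        # Step 4: error at the output (derivative of squared-error loss)
        error = a2 - y

        # Step 5: propagate the error backward and adjust the weights
        d2 = error * a2 * (1 - a2)               # delta at the output layer
        d1 = (d2 @ W2.T) * a1 * (1 - a1)         # delta at the hidden layer
        W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

Note that each layer's delta is built from the delta of the layer after it, which is exactly the "one layer at a time" chain rule computation described above.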
Why Do We Need Backpropagation?
The most prominent advantages of backpropagation are:
Backpropagation is fast, simple, and easy to program.
It has no parameters to tune apart from the number of inputs.
It is a flexible method, as it does not require prior knowledge about the network.
It is a standard method that generally works well.
It does not need any special mention of the features of the function to be learned.

What is a Feedforward Network?
A feedforward neural network is an artificial neural network in which the connections between
nodes never form a cycle. This kind of neural network has an input layer, hidden layers, and an
output layer. It is the first and simplest type of artificial neural network.
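As a sketch, the forward pass of such a network with one hidden layer (the layer count and the activation function f are illustrative assumptions) can be written as:

    h = f(W1 · x + b1)        # hidden layer
    y = f(W2 · h + b2)        # output layer

Information flows only forward; no output is ever fed back into an earlier layer.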
Types of Backpropagation Networks
The two types of backpropagation networks are:
Static backpropagation
Recurrent backpropagation
Static backpropagation:
A kind of backpropagation network that produces a mapping from static inputs to static outputs. It is
useful for solving static classification problems such as optical character recognition.
Recurrent backpropagation:
In recurrent backpropagation, the activations are fed forward until a fixed value is reached. After that,
the error is computed and propagated backward.
The main difference between the two methods is that the mapping is immediate in static backpropagation,
while in recurrent backpropagation it is not.
History of Backpropagation
In 1961, the basic concept of continuous backpropagation was derived in the context of control theory by
Henry J. Kelley and Arthur E. Bryson.
In 1969, Bryson and Ho gave a multi-stage dynamic system optimization method.
In 1974, Werbos stated the possibility of applying this principle to artificial neural networks.
In 1982, Hopfield introduced his idea of a neural network.
In 1986, through the work of David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams,
backpropagation gained recognition.
In 1993, Wan was the first person to win an international pattern recognition contest with the help of the
backpropagation method.
