Back Propagation of Errors
Optimization: Adjusting the weights helps to optimize the neural network's performance. By
tweaking the weights, we can find the optimal combination that results in the lowest error.
Learning: Weight adjustment enables the neural network to learn from its mistakes. By adjusting the weights, the network can correct its errors and improve its performance over time.
Convergence: Weight adjustment helps the neural network converge to a stable solution. By iteratively adjusting the weights, the network can settle into a state where the error is minimized, as illustrated in the sketch below.
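As a concrete illustration of these three points, here is a minimal sketch, assuming NumPy, a made-up data set and an arbitrary learning rate: a single linear neuron is adjusted by gradient descent, and each pass reduces the error a little until the weight converges.

```python
import numpy as np

# Hypothetical training data: inputs x and expected outputs y (roughly y = 2x).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 2.1, 3.9, 6.0])

w = 0.0                                    # initial weight, deliberately wrong
learning_rate = 0.05

for epoch in range(200):
    prediction = w * x                     # forward pass of a single linear neuron
    error = prediction - y                 # difference between actual and expected output
    gradient = 2 * np.mean(error * x)      # derivative of the mean squared error w.r.t. w
    w -= learning_rate * gradient          # adjust the weight against the gradient

print(f"learned weight: {w:.3f}")          # converges towards ~2, the best-fit slope
print(f"final mean squared error: {np.mean((w * x - y) ** 2):.4f}")
```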
Summary of the process
The initial outputs from the system are compared to the expected outputs and the
system weightings are adjusted to minimise the difference between actual and
expected results.
Calculus is used to find the error gradient of the obtained outputs: the error is fed back through the neural network and the weighting on each neuron is adjusted (note: this can be used in both supervised and unsupervised networks).
Once the errors in the output have been eliminated (or reduced to acceptable limits)
the neural network is functioning correctly and the model has been successfully set up.
If the errors are still too large, the weightings are altered and the process continues until satisfactory outputs are produced; the sketch below works through these steps in code.
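A minimal sketch of the whole loop described above, assuming NumPy, a small two-layer sigmoid network, a toy XOR data set and an arbitrary learning rate and error tolerance: the outputs are computed, compared with the expected outputs, the error gradients are found via the chain rule, and the weightings are adjusted until the error falls within acceptable limits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (XOR): inputs X and expected outputs Y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weightings for a 2-4-1 network (biases start at zero).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
learning_rate = 0.5
tolerance = 0.01                                 # acceptable limit on the mean squared error

for epoch in range(20000):
    # Forward pass: compute the actual outputs.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Compare actual and expected outputs.
    error = output - Y
    mse = np.mean(error ** 2)
    if mse < tolerance:                          # errors reduced to acceptable limits
        break

    # Backward pass: error gradients via the chain rule.
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Adjust the weightings to reduce the error.
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

print(f"stopped after {epoch} epochs, mean squared error = {mse:.4f}")
print(np.round(output, 2))                       # close to the expected outputs 0, 1, 1, 0
```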
Static back propagation
Static back-propagation maps static inputs to static outputs and is mainly used to solve static classification problems such as optical character recognition (OCR).
For each input, the error gradient is computed from a single forward and backward pass; there is no iterative settling of activations before the weights are updated.
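A minimal sketch of the static case, assuming NumPy and two hypothetical 3x3 "character" bitmaps standing in for an OCR problem: each input is mapped to a class in a single forward pass, and its error gradient comes from that one pass alone (the loop below repeats over training epochs, not over time steps).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3x3 "character" bitmaps, flattened to 9 pixels: a cross and a box.
cross = np.array([0, 1, 0, 1, 1, 1, 0, 1, 0], dtype=float)
box   = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1], dtype=float)
X = np.stack([cross, box])
Y = np.array([[1, 0], [0, 1]], dtype=float)     # one-hot class labels

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(scale=0.1, size=(9, 2))          # static input -> static output mapping
b = np.zeros((1, 2))
learning_rate = 0.5

for _ in range(200):
    probs = softmax(X @ W + b)                  # one forward pass per pattern, no time steps
    grad = (probs - Y) / len(X)                 # gradient of the cross-entropy w.r.t. the logits
    W -= learning_rate * X.T @ grad
    b -= learning_rate * grad.sum(axis=0, keepdims=True)

print(np.round(softmax(X @ W + b), 2))          # each bitmap now maps to its own class
```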
Recurrent back propagation
This is the second type of back-propagation, where the mapping is non-static.
The activations are fed forward until they reach a fixed value, after which the error is computed and propagated backwards through time; the weights and biases are then updated using the error gradients.
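A minimal sketch of the "backwards through time" part, assuming NumPy, a single recurrent unit, a made-up sine sequence and plain gradient descent: the forward loop runs the recurrence over the sequence, the backward loop propagates the output errors back through every time step, and the accumulated error gradients are then used to adjust the weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical task: given a short sequence, predict each next value one step ahead.
sequence = np.sin(np.linspace(0, 2 * np.pi, 12))
inputs, targets = sequence[:-1], sequence[1:]

w_x, w_h, w_y = rng.normal(scale=0.5, size=3)   # single recurrent unit
learning_rate = 0.05

for epoch in range(500):
    # Forward pass through time, keeping every hidden state for the backward pass.
    h = np.zeros(len(inputs) + 1)
    y = np.zeros(len(inputs))
    for t in range(len(inputs)):
        h[t + 1] = np.tanh(w_x * inputs[t] + w_h * h[t])
        y[t] = w_y * h[t + 1]

    # Backward pass: propagate the error backwards through time.
    g_x = g_h = g_y = 0.0
    delta_next = 0.0                             # gradient flowing back from later time steps
    for t in reversed(range(len(inputs))):
        dy = y[t] - targets[t]                   # output error at step t
        g_y += dy * h[t + 1]
        delta = dy * w_y + delta_next            # total gradient reaching the hidden state
        dpre = delta * (1 - h[t + 1] ** 2)       # back through the tanh nonlinearity
        g_x += dpre * inputs[t]
        g_h += dpre * h[t]
        delta_next = dpre * w_h                  # pass the gradient to the previous time step

    # Adjust the weights using the accumulated error gradients.
    w_x -= learning_rate * g_x
    w_h -= learning_rate * g_h
    w_y -= learning_rate * g_y

print(f"mean squared error on the last pass: {np.mean((y - targets) ** 2):.4f}")
```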