
Back propagation of errors

What is back propagation?

Back propagation is an algorithm used in machine learning to train neural networks by minimising the error between the network's output and the desired output.
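As an illustration, here is a minimal sketch of this idea in Python, training a tiny one-hidden-layer network on XOR. The dataset, layer sizes, learning rate and epoch count are illustrative choices, not details taken from this document.

```python
# A minimal sketch of back propagation for one hidden layer, using NumPy.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

for epoch in range(5000):
    # Forward pass: compute the network's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between the network's output and the desired output.
    err = out - y                       # derivative of 0.5*(out - y)**2

    # Backward pass: propagate the error gradient layer by layer.
    d_out = err * out * (1 - out)       # sigmoid derivative at output
    d_h = (d_out @ W2.T) * h * (1 - h)  # sigmoid derivative at hidden

    # Adjust weights and biases down the gradient.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```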
Why do these errors occur?
Mismatch between predicted and actual output: The neural network predicts
an output that differs from the actual output, resulting in an error.

Gradient descent optimization: The backpropagation algorithm uses gradient descent to optimize the weights and biases, which can lead to oscillations and convergence issues (a sketch of this follows the list below).

Overfitting and underfitting: If the model is overfitting or underfitting, the error can propagate and accumulate, leading to poor generalization performance.
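A small sketch of the oscillation point above, assuming a made-up one-dimensional function f(w) = w**2: with a small learning rate, gradient descent converges smoothly, while an overly large one makes each update overshoot the minimum and oscillate around it.

```python
# A sketch of how the learning rate affects gradient descent on f(w) = w**2.
# The function, starting point and step sizes are illustrative.
def descend(lr, w=3.0, steps=6):
    trace = [w]
    for _ in range(steps):
        grad = 2 * w          # derivative of f(w) = w**2
        w = w - lr * grad     # gradient descent update
        trace.append(w)
    return trace

print(descend(lr=0.1))   # converges smoothly towards 0
print(descend(lr=0.9))   # overshoots and oscillates around 0
```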
Why are weights adjusted?
Error reduction: By adjusting the weights, we can reduce the error between the predicted output and the actual output. This is done by calculating the error gradient, which indicates how much each weight contributes to the error (see the update sketch after this list).

Optimization: Adjusting the weights helps to optimize the neural network's performance. By
tweaking the weights, we can find the optimal combination that results in the lowest error.

Learning: Weight adjustment enables the neural network to learn from its mistakes. By adjusting the weights, the network can correct its errors and improve its performance over time.

Convergence: Weight adjustment helps the neural network converge to a stable solution. By
iteratively adjusting the weights, the network can settle into a state where the error is
minimized.
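The core of all four points is the gradient descent update rule: each weight is moved against its error gradient, w ← w − (learning rate) × ∂E/∂w. A minimal sketch for a single weight, with made-up numbers:

```python
# A single weight update, as a sketch. The numbers are illustrative.
eta = 0.1           # learning rate
w = 0.8             # current weight
grad = 0.25         # error gradient dE/dw: how much w contributes to the error
w = w - eta * grad  # step against the gradient to reduce the error
print(w)            # 0.775, a small correction towards lower error
```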
Summary of the process
The initial outputs from the system are compared to the expected outputs and the
system weightings are adjusted to minimise the difference between actual and
expected results.

Calculus is used to find the error gradient in the obtained outputs: the results are fed back into the neural network and the weightings on each neuron are adjusted (note: this can be used in both supervised and unsupervised networks).

Once the errors in the output have been eliminated (or reduced to acceptable limits)
the neural network is functioning correctly and the model has been successfully set up.

If the errors are still too large, the weightings are altered – the process continues until
satisfactory outputs are produced.
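A sketch of this loop for a single linear neuron, assuming a made-up dataset where the true relationship is y = 2x: the current outputs are compared with the expected ones, and the weighting is adjusted until the error falls within an acceptable limit.

```python
# A sketch of the train-until-acceptable loop described above.
# Dataset, learning rate and tolerance are illustrative.
inputs   = [1.0, 2.0, 3.0, 4.0]
expected = [2.0, 4.0, 6.0, 8.0]   # the true relationship is y = 2x
w, lr, tol = 0.0, 0.01, 1e-6

for epoch in range(10_000):
    # Compare current outputs with the expected outputs (mean squared error).
    error = sum((w * x - t) ** 2 for x, t in zip(inputs, expected)) / len(inputs)
    if error <= tol:              # error reduced to acceptable limits
        break
    # Error gradient dE/dw, found with calculus, fed back to adjust w.
    grad = sum(2 * (w * x - t) * x for x, t in zip(inputs, expected)) / len(inputs)
    w -= lr * grad

print(f"w = {w:.4f} after {epoch} epochs, error = {error:.2e}")
```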
Static back propagation
Static back propagation maps a static input to a static output and is mainly used to solve static classification problems such as optical character recognition (OCR).

The inputs and outputs are fixed and known.

Moreover, the mapping here is more rapid than in recurrent back propagation.

The training dataset is small and can be processed in memory. The neural network has a simple architecture.

For each example, the error gradients are computed and the weights updated in a single forward and backward pass, without the iterative settling that recurrent back propagation requires.
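A sketch of one such static mapping, assuming an illustrative 3×3 "pixel" pattern loosely in the spirit of OCR: one forward pass maps the fixed input to an output, and one backward pass computes the error gradients for a weight update.

```python
# A sketch of static back propagation: fixed, known input and target,
# one forward pass and one backward pass. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

x = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # fixed 3x3 pattern
t = np.array([1.0, 0.0])                                # fixed known target

W = rng.normal(scale=0.1, size=(9, 2))

# Single forward pass: static input -> static output.
z = W.T @ x
out = 1 / (1 + np.exp(-z))        # sigmoid

# Single backward pass: error gradient for this one mapping.
err = out - t
grad_W = np.outer(x, err * out * (1 - out))

W -= 0.5 * grad_W                 # one weight update for this example
```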
Recurrent back propagation
This is the second type of back propagation, where the mapping is non-static.

The network's activation is fed forward until it reaches a fixed value, after which the error is computed and propagated backward.

This is needed in dynamic systems where the data is continually changing.

Recurrent back propagation computes an error gradient for each time step, propagates the error backwards through time, and updates the weights and biases using these gradients.

It handles recurrent connections and feedback loops.
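A sketch of this process, often called back propagation through time (BPTT), for a one-unit recurrent network; the sequence data, sizes and learning rate are illustrative. The backward loop computes an error gradient at each time step and passes it backwards through the recurrent connection.

```python
# A sketch of back propagation through time for a tiny recurrent network.
import numpy as np

rng = np.random.default_rng(2)

xs = [np.array([1.0]), np.array([0.0]), np.array([1.0])]  # input sequence
ts = [np.array([0.0]), np.array([1.0]), np.array([0.0])]  # target sequence

Wx = rng.normal(size=(1, 1))   # input -> hidden
Wh = rng.normal(size=(1, 1))   # hidden -> hidden (recurrent connection)
Wy = rng.normal(size=(1, 1))   # hidden -> output

# Forward pass through time, storing the hidden states.
hs = [np.zeros(1)]
ys = []
for x in xs:
    h = np.tanh(Wx @ x + Wh @ hs[-1])
    hs.append(h)
    ys.append(Wy @ h)

# Backward pass through time: an error gradient at each step,
# propagated backwards through the recurrent connection.
dWx = np.zeros_like(Wx); dWh = np.zeros_like(Wh); dWy = np.zeros_like(Wy)
dh_next = np.zeros(1)
for step in reversed(range(len(xs))):
    dy = ys[step] - ts[step]                 # error gradient at this step
    dWy += np.outer(dy, hs[step + 1])
    dh = Wy.T @ dy + dh_next                 # gradient flowing from the future
    dz = dh * (1 - hs[step + 1] ** 2)        # tanh derivative
    dWx += np.outer(dz, xs[step])
    dWh += np.outer(dz, hs[step])
    dh_next = Wh.T @ dz                      # propagate backwards in time

lr = 0.1
Wx -= lr * dWx; Wh -= lr * dWh; Wy -= lr * dWy  # update the weights
```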
