
A Step by Step Backpropagation Example

Matt Mazur

Background

Backpropagation is a common method for training a neural network. There is no shortage of papers online that attempt to explain how backpropagation works, but few that include an example with actual numbers. This post is my attempt to explain how it works with a concrete example that folks can compare their own calculations to in order to ensure they understand backpropagation correctly.

Backpropagation in Python

You can play around with a Python script that I wrote that implements the backpropagation algorithm in this Github repo.

Backpropagation Visualization

For an interactive visualization showing a neural network as it learns, check out my Neural Network visualization.

Additional Resources

If you find this tutorial useful and want to continue learning about neural networks, machine learning, and deep learning, I highly recommend checking out Adrian Rosebrock’s new book, Deep Learning for Computer Vision with Python. I really enjoyed the book and will have a full review up soon.

Overview

For this tutorial, we’re going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will include a bias.

Here’s the basic structure:

In order to have some numbers to work with, here are the initial weights, the
biases, and training inputs/outputs:
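
For convenience, here is the same starting point as a small Python snippet. These are the values the rest of the walkthrough works with; they are the ones that produce the results quoted later in the example (out_o1 ≈ 0.75136507 and a total error of 0.298371109).

```python
# Training sample: inputs and the outputs we want the network to learn.
i1, i2 = 0.05, 0.10
target_o1, target_o2 = 0.01, 0.99

# Initial weights. w1-w4 connect the inputs to hidden neurons h1 and h2;
# w5-w8 connect h1 and h2 to output neurons o1 and o2.
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55

# Biases: b1 feeds both hidden neurons, b2 feeds both output neurons.
b1, b2 = 0.35, 0.60
```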

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.

For the rest of this tutorial we’re going to work with a single training set: given
inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.

The Forward Pass


To begin, let’s see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we’ll feed those inputs forward through the network.

We figure out the total net input to each hidden layer neuron, squash the total
net input using an activation function (here we use the logistic function), then
repeat the process with the output layer neurons.

Total net input is also referred to as just net input by some sources.

Here’s how we calculate the total net input for h1:

We then squash it using the logistic function to get the output of h1:

Carrying out the same process for h2 we get:

We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.

Here’s the output for o1:

And carrying out the same process for o2 we get:
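
The forward-pass steps just described are easiest to follow as code. Here is a minimal sketch, continuing from the values above; the comments show roughly what each quantity works out to.

```python
import math

def sigmoid(x):
    # The logistic function used to squash each total net input.
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: total net input, then squash.
net_h1 = w1 * i1 + w2 * i2 + b1 * 1
net_h2 = w3 * i1 + w4 * i2 + b1 * 1
out_h1 = sigmoid(net_h1)  # ~0.593269992
out_h2 = sigmoid(net_h2)  # ~0.596884378

# Output layer: same process, with the hidden outputs as inputs.
net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1
net_o2 = w7 * out_h1 + w8 * out_h2 + b2 * 1
out_o1 = sigmoid(net_o1)  # ~0.75136507
out_o2 = sigmoid(net_o2)  # ~0.772928465
```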

Calculating the Total Error

We can now calculate the error for each output neuron using the squared error
function and sum them to get the total error:


Some sources refer to the target as the ideal and the output as the actual.

The ½ is included so that the exponent is cancelled when we differentiate later
on. The result is eventually multiplied by a learning rate anyway so it
doesn’t matter that we introduce a constant here [1].

For example, the target output for o1 is 0.01 but the neural network output
0.75136507, therefore its error is:

Repeating this process for o2 (remembering that the target is 0.99) we get:

The total error for the neural network is the sum of these errors:
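
Continuing the sketch, here are the two error terms and their sum; the total should come out to the 0.298371109 figure that the backwards pass starts from.

```python
# Squared error for each output neuron, then the total error.
E_o1 = 0.5 * (target_o1 - out_o1) ** 2  # ~0.274811
E_o2 = 0.5 * (target_o2 - out_o2) ** 2  # ~0.023560
E_total = E_o1 + E_o2                   # ~0.298371109
```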

The Backwards Pass

Our goal with backpropagation is to update each of the weights in the network
so that they cause the actual output to be closer to the target output, thereby
minimizing the error for each output neuron and the network as a whole.

Output Layer

Consider w5. We want to know how much a change in w5 affects the total error,
aka ∂E_total/∂w5.

∂E_total/∂w5 is read as “the partial derivative of E_total with respect to w5”. You
can also say “the gradient with respect to w5”.

By applying the chain rule we know that:

∂E_total/∂w5 = ∂E_total/∂out_o1 * ∂out_o1/∂net_o1 * ∂net_o1/∂w5

Visually, here’s what we’re doing:


We need to figure out each piece in this equation.

First, how much does the total error change with respect to the output?

−(target_o1 − out_o1) is sometimes expressed as out_o1 − target_o1

When we take the partial derivative of the total error with respect to out_o1,
the quantity ½(target_o2 − out_o2)² becomes zero because out_o1 does not
affect it, which means we’re taking the derivative of a constant, which is
zero.

Next, how much does the output of o1 change with respect to its total net input?

The partial derivative of the logistic function is the output multiplied by 1 minus
the output:

∂out_o1/∂net_o1 = out_o1 * (1 − out_o1)

Finally, how much does the total net input of o1 change with respect to w5?


Putting it all together:

You’ll often see this calculation combined in the form of the delta rule:

Alternatively, we have ∂E_total/∂out_o1 and ∂out_o1/∂net_o1, which can be written as ∂E_total/∂net_o1,
aka δ_o1 (the Greek letter delta) aka the node delta. We can use this to
rewrite the calculation above:

Therefore:

Some sources extract the negative sign from δ so it would be written as:

To decrease the error, we then subtract this value from the current weight
(optionally multiplied by some learning rate, eta, which we’ll set to 0.5):

Some sources use α (alpha) to represent the learning rate, others use η
(eta), and others even use ε (epsilon).

We can repeat this process to get the new weights w6, w7, and w8:


We perform the actual updates in the neural network after we have the new
weights leading into the hidden layer neurons (i.e., we use the original weights,
not the updated weights, when we continue the backpropagation algorithm
below).
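
As a sketch of this output-layer step in code (continuing from the forward-pass values above, with eta = 0.5): compute each output neuron's node delta, multiply by the input feeding the weight, and take a gradient-descent step. The values in the comments are what the arithmetic works out to.

```python
eta = 0.5  # learning rate

# Node deltas for the output neurons:
# delta = dE/dout * dout/dnet = -(target - out) * out * (1 - out)
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)  # ~0.138498562
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)  # ~-0.038098236

# dE_total/dw = (delta of the neuron the weight feeds) * (the weight's input).
dE_dw5 = delta_o1 * out_h1  # ~0.082167041
dE_dw6 = delta_o1 * out_h2
dE_dw7 = delta_o2 * out_h1
dE_dw8 = delta_o2 * out_h2

# Gradient-descent step. The original weights are kept around until the
# hidden-layer gradients below have been computed.
w5_new = w5 - eta * dE_dw5  # ~0.35891648
w6_new = w6 - eta * dE_dw6  # ~0.408666186
w7_new = w7 - eta * dE_dw7  # ~0.511301270
w8_new = w8 - eta * dE_dw8  # ~0.561370121
```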

Hidden Layer

Next, we’ll continue the backwards pass by calculating new values for w1, w2,
w3, and w4.

Big picture, here’s what we need to figure out:

Visually:

We’re going to use a similar process as we did for the output layer, but slightly
different to account for the fact that the output of each hidden layer neuron
contributes to the output (and therefore error) of multiple output neurons. We
know that out_h1 affects both out_o1 and out_o2, therefore ∂E_total/∂out_h1 needs to take
into consideration its effect on both output neurons:

Starting with ∂E_o1/∂out_h1:


We can calculate ∂E_o1/∂net_o1 using values we calculated earlier:

And ∂net_o1/∂out_h1 is equal to w5:

Plugging them in:

Following the same process for ∂E_o2/∂out_h1, we get:

Therefore:

Now that we have ∂E_total/∂out_h1, we need to figure out ∂out_h1/∂net_h1 and then ∂net_h1/∂w for each
weight:

We calculate the partial derivative of the total net input to h1 with respect to
w1 the same as we did for the output neuron:

Putting it all together:

You might also see this written as:


We can now update w1:

Repeating this for w2, w3, and w4:
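
And here is the hidden-layer step as code, continuing from the values above. Note that it sums each hidden output's effect through both output neurons and uses the original w5 through w8, not the freshly updated ones.

```python
# How much the total error changes with respect to each hidden output:
# sum its contribution through every output neuron it feeds.
dE_douth1 = delta_o1 * w5 + delta_o2 * w7  # ~0.036350306
dE_douth2 = delta_o1 * w6 + delta_o2 * w8

# Node deltas for the hidden neurons.
delta_h1 = dE_douth1 * out_h1 * (1 - out_h1)  # ~0.008771354
delta_h2 = dE_douth2 * out_h2 * (1 - out_h2)

# Gradients and updates for w1-w4 (each weight's input is i1 or i2).
w1_new = w1 - eta * delta_h1 * i1  # ~0.149780716
w2_new = w2 - eta * delta_h1 * i2  # ~0.199561432
w3_new = w3 - eta * delta_h2 * i1  # ~0.249751144
w4_new = w4 - eta * delta_h2 * i2  # ~0.299502287
```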

Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1
inputs originally, the error on the network was 0.298371109. After this first round
of backpropagation, the total error is now down to 0.291027924. It might not
seem like much, but after repeating this process 10,000 times, for example, the
error plummets to 0.0000351085. At this point, when we feed forward 0.05 and
0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and
0.984065734 (vs 0.99 target).
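
For anyone who wants to reproduce those numbers, here is a self-contained sketch that rolls the whole example into a training loop. Like the walkthrough, it leaves the biases b1 and b2 fixed and updates only the eight weights; after 1 round and after 10,000 rounds the reported error should land close to the figures above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(rounds, eta=0.5):
    i1, i2, t1, t2 = 0.05, 0.10, 0.01, 0.99
    w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
    w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
    b1, b2 = 0.35, 0.60

    def forward():
        out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
        out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
        out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
        out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
        return out_h1, out_h2, out_o1, out_o2

    for _ in range(rounds):
        out_h1, out_h2, out_o1, out_o2 = forward()

        # Output-layer deltas, then hidden-layer deltas (using the original
        # output-layer weights, as in the walkthrough).
        d_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
        d_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)
        d_h1 = (d_o1 * w5 + d_o2 * w7) * out_h1 * (1 - out_h1)
        d_h2 = (d_o1 * w6 + d_o2 * w8) * out_h2 * (1 - out_h2)

        # Update all eight weights; the biases stay fixed.
        w5, w6 = w5 - eta * d_o1 * out_h1, w6 - eta * d_o1 * out_h2
        w7, w8 = w7 - eta * d_o2 * out_h1, w8 - eta * d_o2 * out_h2
        w1, w2 = w1 - eta * d_h1 * i1, w2 - eta * d_h1 * i2
        w3, w4 = w3 - eta * d_h2 * i1, w4 - eta * d_h2 * i2

    _, _, out_o1, out_o2 = forward()
    return out_o1, out_o2, 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2

print(train(1))      # total error ~0.291027924
print(train(10000))  # outputs ~0.0159 and ~0.9841, total error ~0.0000351085
```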

If you’ve made it this far and found any errors in any of the above or can think of
any ways to make it clearer for future readers, don’t hesitate to drop me a note.
Thanks!


Posted on March 17, 2015 by Mazur. This entry was posted in Machine Learning and tagged ai,
backpropagation, machine learning, neural networks. Bookmark the permalink.



935 thoughts on “A Step by Step Backpropagation Example”


Pingback: Как устроена нейросеть / Блог компании BCS FinTech / Хабр [How a neural network works / BCS FinTech company blog / Habr]
Matt
— July 8, 2020 at 10:28 am

Thanks for this nice illustration of backpropagation!


I am wondering how the calculations must be modified if we have more than 1 training
sample data (e.g. 2 samples).

Reply

Shahrum
— July 9, 2020 at 10:50 am

It seems that you have totally forgotten to update b1 and b2! They are part of the
weights (parameters) of the network. Or am I missing something here?

Reply

Seong
— July 20, 2020 at 2:33 am

Thanks to your nice illustration, now I’ve understood backpropagation.

Reply

deva
— July 23, 2020 at 9:14 pm

This is exactly what I needed, great job sir, super easy explanation.

Reply

[email protected]
— August 3, 2020 at 1:59 pm

Just wondering about the range of the learning rate. Why can’t it be greater than 1?

Reply

ANIL BABU B
— August 8, 2020 at 11:19 am

Hi Matt
Thanks for giving the link, but I have the following queries, can you please clarify:
1. Why are the bias weights not updated anywhere?
2. Outputs at the hidden and output layers are not independent of the initial weights chosen
at the input layer. So for calculating optimal weights at the input layer (w1 to w4), why is the final Etotal
again differentiated w.r.t. w1? Instead, should we not calculate the errors at the hidden
layer using the revised weights w5 to w8 and then use the same method for
calculating revised weights w1 to w4 by differentiating this error at the hidden layer w.r.t. w1?
3. The error at the hidden layer can be calculated as follows: we already know the outputs at the
hidden layer from forward propagation; these we will take as initial values. Then, using the
revised weights w5 to w8, we will back-calculate the revised outputs at the hidden layer, and
the difference we can take as the errors.
4. I calculated the errors as mentioned in step 3 and got outputs at h1 and h2 of
-3.8326165 and 4.6039905. Since these are outputs at the hidden layer, they are outputs
of the sigmoid function, so the values should always be between 0 and 1, but the values here are
outside the range of the sigmoid function.

Please clarify why and where the flaw is

Reply

Clara Liu
— August 14, 2020 at 6:30 am

Awesome tutorial!

But are there possibly calculation errors for the undemonstrated weights? I kept getting
slightly different updated weight values for the hidden layer…

But let’s take a simpler one for example:


For dEtotal/dw7, the calculation should be very similar to dEtotal/dw5, by just changing
the last partial derivative to dnet o1/dw7, which is essentially out h2. So dEtotal/dw7 =
0.74136507*0.186815602*0.596884378 = 0.08266763

new w7 = 0.5-(0.5*0.08266763)= 0.458666185.

But your answer is 0.511301270…

Perhaps I made a mistake in my calculation? Some clarification would be great!

Reply



