
Widrow-Hoff Learning Rule

The document describes the Widrow-Hoff learning algorithm. It is a supervised learning algorithm used for perceptrons. The algorithm updates weights at each iteration to minimize the error between the perceptron's output and the desired output. An example is provided to demonstrate how the weights are updated using the Widrow-Hoff algorithm over multiple iterations to reduce error.

WIDROW-HOFF LEARNING ALGORITHM
Dr. Tarek A. Tutunji
Philadelphia University, Jordan

Outline
• Widrow-Hoff Algorithm
• Perceptron example

Supervised Learning

[Figure: supervised-learning block diagram; the desired output d is compared with the network output to form the error signal]

Widrow-Hoff Learning Algorithm

[Figure: perceptron with output y = f(net), weight update Δw, and learning rate α]

Widrow-Hoff Learning Algorithm

The net input is the weighted sum of the inputs:

net = Σ_{i=1}^{m} w_i x_i

Using a linear activation function, y = f(net) = net.

The error between the network output and the desired output is

e = (1/2) (d − y)²

The derivative of the error with respect to the weights is

de/dw_i = (de/dy)(dy/dw_i) = −(d − y) x_i

Update the weights using steepest descent:

w_i = w_i − α (de/dw_i)

Define Δw_i = −α (de/dw_i) = α (d − y) x_i, then w_i = w_i + Δw_i.
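The update rule above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the function name and the use of NumPy are my own choices:

```python
import numpy as np

def widrow_hoff_step(w, x, d, alpha):
    """One Widrow-Hoff (delta-rule) update for a linear unit.

    w : weight vector, x : input vector, d : desired (target) output,
    alpha : learning rate. Returns the updated weight vector.
    """
    y = np.dot(w, x)               # net = sum_i w_i * x_i; linear activation gives y = net
    delta_w = alpha * (d - y) * x  # Δw = −α de/dw = α (d − y) x
    return w + delta_w
```

With α = 1 this reproduces the hand-computed weight updates in the example slides that follow.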

Example

[Figure: perceptron network with inputs x, weights w, and output y]

Desired output: d = [1 -1 0]

Linear activation: y = f(net) = net

Use Widrow-Hoff learning to update the weights, with learning rate α = 1.

Example

Desired output: d = [1 -1 0]

Iteration One

y(1) = 3

Δw(1) = α (d(1) − y(1)) x(1) = (1 − 3) x(1) = −2 x(1)

w(2) = w(1) + Δw(1) = [-1  3  -3  0.5]ᵀ

Example

Desired output: d = [1 -1 0]

Iteration Two

net(2) = [-1  3  -3  0.5] [1  -0.5  -2  -1.5]ᵀ = 2.75  ⇒  y(2) = 2.75

Δw(2) = α (d(2) − y(2)) x(2) = 1 (−1 − 2.75) [1  -0.5  -2  -1.5]ᵀ = [-3.75  1.875  7.5  5.625]ᵀ

w(3) = w(2) + Δw(2) = [-1  3  -3  0.5]ᵀ + [-3.75  1.875  7.5  5.625]ᵀ = [-4.75  4.875  4.5  6.125]ᵀ
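Iteration two can be checked numerically. A short sketch, using the values from the slide (variable names are my own):

```python
import numpy as np

alpha = 1.0
w2 = np.array([-1.0, 3.0, -3.0, 0.5])   # weight vector w(2)
x2 = np.array([1.0, -0.5, -2.0, -1.5])  # input vector x(2)
d2 = -1.0                               # second desired output

net2 = w2 @ x2                # net(2) = 2.75
y2 = net2                     # linear activation: y(2) = net(2)
dw2 = alpha * (d2 - y2) * x2  # Δw(2) = [-3.75, 1.875, 7.5, 5.625]
w3 = w2 + dw2                 # w(3) = [-4.75, 4.875, 4.5, 6.125]
print(w3)
```

Note that each update pushes y toward the desired output for the current pattern only; repeated passes over all patterns are needed for the weights to settle.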

Conclusion

• The Widrow-Hoff learning algorithm is a simple supervised learning algorithm used for perceptrons.

• The weights change at each iteration so as to minimize an error function between the perceptron output and the desired output.
