Neural Network Learning Rules
By
Prof. Arun Kulkarni
Associate Professor and Head, Department of IT
Thadomal Shahani Engineering College
Bandra, Mumbai
General symbol of a neuron, consisting of a processing node and synaptic connections.
This symbolic representation shows a set of weights and the neuron's processing unit, or node.
The neuron output signal is given by the following relationship:

o = f(w^t x)

The function f(w^t x) is often referred to as an activation function. Its domain is the set of activation values, net, of the neuron model; we thus often write this function as f(net). The variable net is defined as the scalar product of the weight and input vectors:

net = w^t x = \sum_{i=1}^{n} w_i x_i
The soft-limiting activation functions are often called
sigmoidal characteristics, as opposed to the hard-limiting
activation functions.
Typical activation functions used are

f(net) = 2 / (1 + \exp(-\lambda \, net)) - 1   (bipolar continuous)

f(net) = sgn(net)   (bipolar binary)

where \lambda > 0 determines the steepness of the continuous activation function near net = 0.
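As a minimal sketch of these two activations (assuming NumPy; the convention sgn(0) = +1 is an assumption, since the text does not define f at net = 0):

```python
import numpy as np

def bipolar_binary(net):
    # hard-limiting activation: sgn(net); sgn(0) = +1 is an assumed convention
    return np.where(net >= 0, 1.0, -1.0)

def bipolar_continuous(net, lam=1.0):
    # soft-limiting (sigmoidal) activation with steepness lam > 0
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

# net is the scalar product of the weight and input vectors
w = np.array([1.0, -1.0, 0.0, 0.5])   # illustrative weights
x = np.array([1.0, -2.0, 1.5, 0.0])   # illustrative input
net = w @ x
print(bipolar_binary(net), bipolar_continuous(net, lam=1.0))
```

For large \lambda the continuous function approaches the hard limiter, which is why the two cases give increasingly similar outputs as the steepness grows.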
Hebbian Learning Rule
For the Hebbian learning rule, the learning signal is equal simply to the neuron's output. We have

r = o = f(w^t x)

and the weight increment is

\Delta w = c \, f(w^t x) \, x,   i.e.,   \Delta w_i = c \, o \, x_i

This learning rule requires the weights to be initialized at small random values around w_i = 0 prior to learning.
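A minimal sketch of a single Hebbian step under the formula above; the constant c, the choice of f, and the vectors below are illustrative assumptions:

```python
import numpy as np

def bipolar_binary(net):
    return 1.0 if net >= 0 else -1.0  # assumed convention: sgn(0) = +1

def hebbian_step(w, x, c=1.0, f=bipolar_binary):
    """One Hebbian update: w <- w + c * f(w^t x) * x."""
    o = f(w @ x)           # neuron output; the learning signal r = o
    return w + c * o * x   # unsupervised, correlation-type weight change

# small illustrative vectors (not the slide's values)
w = np.array([1.0, -1.0, 0.0, 0.5])
x = np.array([1.0, -2.0, 1.5, 0.0])
print(hebbian_step(w, x))  # -> [ 2.  -3.   1.5  0.5]
```

Note that no desired response appears anywhere in the update: the rule strengthens weights on inputs that correlate with the output, which is what makes Hebbian learning unsupervised.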
The following example illustrates Hebbian learning with binary and continuous activation functions of a very simple network with c = 1. Assume the network shown below with the initial weight vector w_1.

Using the bipolar binary activation function
Using the continuous bipolar activation function f(net), with input x_1 and initial weights w_1, we obtain the neuron output values and the updated weights for \lambda = 1 as summarized in Step 1. The only difference compared with the previous case is that the continuous f(net) is used instead of f(net) = sgn(net). Training continues until the difference between the new and the previous weights becomes negligibly small.
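Since the slide's step-by-step numbers are not reproduced in the text, here is a runnable sketch of the whole procedure under assumed training vectors and an assumed initial weight vector, for both activations; it sweeps over the inputs and stops when the weight change between sweeps is small (or a sweep limit is hit, since pure Hebbian weights can keep growing under the binary activation):

```python
import numpy as np

def bipolar_binary(net):
    return 1.0 if net >= 0 else -1.0

def bipolar_continuous(net, lam=1.0):
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def hebbian_train(w, inputs, f, c=1.0, tol=1e-3, max_sweeps=50):
    """Apply Hebbian steps over all inputs until weights barely change."""
    for _ in range(max_sweeps):
        w_prev = w.copy()
        for x in inputs:
            w = w + c * f(w @ x) * x
        if np.linalg.norm(w - w_prev) < tol:
            break
    return w

# assumed training vectors and initial weights (illustrative only)
inputs = [np.array([1.0, -2.0, 1.5, 0.0]),
          np.array([1.0, -0.5, -2.0, -1.5]),
          np.array([0.0, 1.0, -1.0, 1.5])]
w0 = np.array([1.0, -1.0, 0.0, 0.5])
print(hebbian_train(w0.copy(), inputs, bipolar_binary))
print(hebbian_train(w0.copy(), inputs, lambda n: bipolar_continuous(n, lam=1.0)))
```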
Perceptron Learning Rule
For the perceptron learning rule, the learning signal is the difference between the desired and the actual neuron response. Thus, learning is supervised, and the learning signal is equal to

r = d_i - o_i,   where   o_i = sgn(w^t x)

Note that this rule is applicable only for a binary neuron response; in the bipolar binary case the output is +1 or -1. The change in weight is therefore

\Delta w = c \, [d_i - sgn(w^t x)] \, x
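A minimal sketch of one perceptron update, matching the formula above; the learning constant c and the vectors are illustrative assumptions:

```python
import numpy as np

def sgn(net):
    return 1.0 if net >= 0 else -1.0  # assumed convention: sgn(0) = +1

def perceptron_step(w, x, d, c=0.1):
    """Perceptron update: w <- w + c * (d - sgn(w^t x)) * x.
    Weights change only when the actual response differs from d."""
    return w + c * (d - sgn(w @ x)) * x

# illustrative values (not the slide's)
w = np.array([1.0, -1.0, 0.0, 0.5])
x = np.array([1.0, -2.0, 0.0, -1.0])
d = -1.0                      # desired bipolar response
print(perceptron_step(w, x, d))  # -> [ 0.8 -0.6  0.   0.7]
```

When the desired and actual responses agree, the error term d - sgn(w^t x) is zero and the weights stay unchanged, which is why training can stop once every pattern is classified correctly.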
Example
This example illustrates the perceptron learning rule for the network. The set of input training vectors and the initial weight vector are as follows:
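The vectors themselves are not reproduced in the text, so the sketch below uses assumed training pairs and an assumed initial weight vector to show the full training loop; it cycles through the set until a complete sweep makes no corrections:

```python
import numpy as np

def sgn(net):
    return 1.0 if net >= 0 else -1.0

def perceptron_train(w, pairs, c=0.1, max_sweeps=100):
    """Sweep the (x, d) pairs, applying w <- w + c*(d - sgn(w^t x))*x,
    until a full sweep leaves the weights unchanged."""
    for _ in range(max_sweeps):
        changed = False
        for x, d in pairs:
            err = d - sgn(w @ x)
            if err != 0.0:
                w = w + c * err * x
                changed = True
        if not changed:
            break
    return w

# assumed training pairs (input vector, desired bipolar response)
pairs = [(np.array([1.0, -2.0, 0.0, -1.0]), -1.0),
         (np.array([0.0, 1.5, -0.5, -1.0]), -1.0),
         (np.array([-1.0, 1.0, 0.5, -1.0]), 1.0)]
w0 = np.array([1.0, -1.0, 0.0, 0.5])
print(perceptron_train(w0, pairs))
```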