Unit - I Artificial Neural Networks
5. XOR function using the MP neuron model :-
The truth table for the XOR function is:

x1  x2  y
1   1   0
1   0   1
0   1   1
0   0   0

[Figure: two-input network with inputs x1, x2 and output y]
For the hidden unit H2 (net input H_in2 = -x1 + x2), the responses are:

x1  x2  H_in2  H2
1   1    0     0
1   0   -1     0
0   1    1     1
0   0    0     0

(The first hidden unit H1 is obtained analogously from H_in1 = x1 - x2.)
The activation for the output unit y is

y = f(y_in) = 1 ; if y_in >= 1
              0 ; if y_in < 1
Presenting the input patterns H1 and H2 and calculating the net input and activation gives the output of XOR:

y_in = H1 w1 + H2 w2 = H1 + H2   (since w1 = w2 = 1)
H1  H2  y_in  y = H1 OR H2
0   0    0        0
1   0    1        1
0   1    1        1
0   0    0        0
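To make the construction concrete, here is a minimal Python sketch (not part of the original notes) of this MP-neuron network, using the weights implied by the tables above: H_in1 = x1 - x2, H_in2 = -x1 + x2, and a firing threshold of 1 throughout.

def f(v, theta=1):
    # MP-neuron threshold activation: fire iff net input >= theta
    return 1 if v >= theta else 0

def xor_mp(x1, x2):
    h1 = f(x1 - x2)    # hidden unit H1: x1 AND NOT x2
    h2 = f(-x1 + x2)   # hidden unit H2: NOT x1 AND x2
    return f(h1 + h2)  # output unit: H1 OR H2

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, xor_mp(x1, x2))   # prints 0, 1, 1, 0 - the XOR column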
... by the weight matrix W, known as the correlation matrix, computed as

W = Σ x_i y_i^T   (summed over i = 0, 1, ..., n)

where y_i^T is the transpose of the associated output vector y_i.
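As an illustration (not from the notes), the correlation matrix can be accumulated with outer products; the numpy representation and the bipolar pattern values below are assumptions for the example.

import numpy as np

def correlation_matrix(xs, ys):
    # W = sum over i of x_i y_i^T, one outer product per pattern pair
    W = np.zeros((len(xs[0]), len(ys[0])))
    for x, y in zip(xs, ys):
        W += np.outer(x, y)
    return W

# Example: two associated input/output pattern pairs (illustrative values)
xs = [np.array([1, -1, 1]), np.array([-1, 1, 1])]
ys = [np.array([1, -1]), np.array([-1, 1])]
print(correlation_matrix(xs, ys))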
2. Perceptron learning rule :-
. It is also known as the discrete perceptron learning law.
. For the perceptron learning rule, the learning signal is the difference between the desired and actual neuron responses.
. It is supervised learning.
. It is applicable only for bipolar output functions f(.).
. The perceptron learning rule states that for a finite number n of input training vectors x(n), each with an associated target value t(n) which is +1 or -1, and an activation function
y = f(y_in) = 1 ; if y_in > θ
              0 ; if -θ <= y_in <= θ
             -1 ; if y_in < -θ
then the weight update is given by

w_new = w_old + α t x ; if y ≠ t
w_new = w_old         ; if y = t
Perceptron Training Algorithm :
i. Start with a random value of w.
ii. Test whether w.x_i > 0; if the test succeeds for all i = 1, 2, ..., n, then return w.
iii. Otherwise modify w as w_new = w_prev + x_fail, where x_fail is a vector that failed the test, and repeat from step ii.
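A minimal Python sketch of this loop (not from the notes) follows; it assumes, as the test w.x_i > 0 implies, that every training vector should fall on the positive side of the boundary (negative examples can be folded in beforehand by negating them). The max_iters cap is an illustrative safeguard.

import numpy as np

def train_perceptron(X, max_iters=1000):
    # X: one training vector per row; we seek w with w.x > 0 for every row
    w = np.random.randn(X.shape[1])          # i.   start with random w
    for _ in range(max_iters):
        failed = [x for x in X if np.dot(w, x) <= 0]
        if not failed:                       # ii.  all tests succeed
            return w
        w = w + failed[0]                    # iii. w_new = w_prev + x_fail
    return w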
Limitations of Perceptron :
1. Non-linear separability is not possible, i.e., a single perceptron can only model linearly separable functions.
2. A single perceptron does not have enough computing power.
SOL : 1. Use a larger network.
2. Tolerate error.
Perceptron Learning Algorithm :
x(n) = input vector
w(n) = weight vector
b(n) = bias
y(n) = actual response
d(n) = desired response
η = learning rate parameter
i. Initialization :- Set w(0) = 0.
ii. Activation :- Activate the perceptron by applying the input x(n).
iii. Computation :- Compute the actual response of the perceptron:
y(n) = sgn[w^T(n).x(n)]
iv. Adaptation :- Adapt the weight vector, i.e., if y(n) and d(n) are different, then
w(n+1) = w(n) + η[d(n) - y(n)].x(n)
where d(n) = +1 if x(n) belongs to class c1, and
      d(n) = -1 if x(n) belongs to class c2.
v. Continuation :- Increment time step n by 1 and go back to the activation step.
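The following Python sketch (an illustration, not the notes' own code) walks through steps i-v; the bipolar AND data, the bias handling (a constant 1 appended to each input in place of b(n)), and the value of η are assumptions.

import numpy as np

def sgn(v):
    return 1 if v >= 0 else -1

def perceptron(X, d, eta=0.1, epochs=20):
    # X: input vectors x(n), one per row; d: desired responses, +1 or -1
    w = np.zeros(X.shape[1])                 # i.   initialization: w(0) = 0
    for _ in range(epochs):
        for x, target in zip(X, d):          # ii.  activation: apply x(n)
            y = sgn(np.dot(w, x))            # iii. actual response y(n)
            w = w + eta * (target - y) * x   # iv.  adapt the weight vector
    return w                                 # v.   continue over steps n

# Usage: learn the AND function on bipolar inputs (last component = bias input)
X = np.array([[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]])
d = np.array([1, -1, -1, -1])
print(perceptron(X, d))

Note that when y(n) = d(n) the bracketed error term is zero and the weights stay unchanged, exactly as in the discrete rule above.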
3. Delta Learning law :-
. It is valid only for continuous activation functions and a differentiable output function.
. It is supervised learning.
. It is also known as the continuous perceptron learning law.
It states that the adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse.
The delta rule for a single output unit changes the weights of the connections so as to minimize the difference between the net input to the output unit, y_in, and the target value t, i.e.

Δw_i = α(t - y_in) x_i

where x_i = the activation of input unit i,
y_in = the net input to the output unit, i.e. Σ x_i w_i,
t = the target value,
α = the learning rate.

The delta rule for several output units is

Δw_ij = α(t_j - y_in,j) x_i
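A short Python sketch (assumed code, not from the notes) of the single-output delta rule: each step moves the weights along x in proportion to the error t - y_in.

import numpy as np

def delta_rule(X, t, alpha=0.05, epochs=100):
    # X: input vectors, one per row; t: real-valued targets
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            y_in = np.dot(x, w)                    # net input to output unit
            w = w + alpha * (target - y_in) * x    # delta update
    return w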
4. Competitive Learning Rule :-
5. Outstar Learning Rule :-
. It is also known as Grossberg learning.
. It is supervised learning.
. It is used to provide learning of repetitive and characteristic properties of input-output relationships.
. The weight matrix is updated as

Δw_jk = α(y_k - w_jk) ; if neuron j wins the competition
        0             ; if neuron j loses the competition
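A small Python sketch of this update (the array shapes, α, and the example values are assumptions): only the winning neuron's outgoing weights move toward the target output vector y.

import numpy as np

def outstar_update(W, j, y, alpha=0.2):
    # Update only row j (the winner): w_jk <- w_jk + alpha * (y_k - w_jk);
    # losing rows are left unchanged (the zero branch of the rule).
    W[j] = W[j] + alpha * (y - W[j])
    return W

W = np.zeros((3, 2))       # 3 competing neurons, 2 outgoing weights each
print(outstar_update(W, j=1, y=np.array([1.0, -1.0])))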
6. Boltzmann learning :-
. It is also known as stochastic learning.
. Here the weights are adjusted in a probabilistic fashion.
. It is used in symmetric recurrent networks (symmetric means w_ij = w_ji).
. It consists of binary units (+1 for on, -1 for off).
. Neurons are divided into two groups, i.e. hidden and visible.
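The notes stop at these properties, but as a hedged illustration of the stochastic behaviour: in a symmetric network of +1/-1 units, the state of a unit is commonly updated probabilistically as below (learning then adjusts w_ij from correlations of such states). The function and temperature value are assumptions.

import numpy as np

def update_unit(s, W, k, T=1.0):
    # Stochastic state update for unit k in a symmetric network (W == W.T):
    # set s[k] = +1 with probability 1 / (1 + exp(-2 * net_k / T))
    net = np.dot(W[k], s)
    p_on = 1.0 / (1.0 + np.exp(-2.0 * net / T))
    s[k] = 1 if np.random.rand() < p_on else -1
    return s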
Compare supervised and unsupervised learning?
Learning (or) training is the term used to describe the process of finding values for the weights.