Soft Computing Unit-2

PERCEPTRON NETWORK

Perceptron networks come under single-layer feed-forward networks and are also called simple perceptrons. Various types of perceptrons were designed by Rosenblatt (1962) and by Minsky and Papert (1969, 1988).
PERCEPTRON NETWORK
The key points to be noted in a perceptron network are:

• The perceptron network consists of three units, namely, the sensory unit (input unit), the associator unit (hidden unit) and the response unit (output unit).

• The sensory units are connected to the associator units with fixed weights having values 1, 0 or -1, which are assigned at random.

• The binary activation function is used in the sensory unit and the associator unit.

• The response unit has an activation of 1, 0 or -1. The binary step with fixed threshold θ is used as the activation for the associator. The output signals sent from the associator unit to the response unit are only binary.
PERCEPTRON NETWORK
• The output of the perceptron network is given by $y = f(y_{in})$, where $y_{in} = b + \sum_i x_i w_i$ is the net input and

$$f(y_{in}) = \begin{cases} 1 & \text{if } y_{in} > \theta \\ 0 & \text{if } -\theta \le y_{in} \le \theta \\ -1 & \text{if } y_{in} < -\theta \end{cases}$$
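As a minimal sketch, this output computation can be written in Python with NumPy (the function name and signature are illustrative, not from the slides):

```python
import numpy as np

def perceptron_output(x, w, b, theta):
    """Perceptron response: +1, 0 or -1 around the fixed threshold theta."""
    y_in = b + np.dot(x, w)    # net input
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0                   # net input falls in the band [-theta, theta]
```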

• The perceptron learning rule is used in the weight updation between the associator unit and the response unit.

• The error calculation is based on the comparison of the target values with the calculated outputs.
PERCEPTRON NETWORK
• The weights will be adjusted on the basis of the learning rule if an error has occurred for a particular training pattern, i.e.,

$$w_i(\text{new}) = w_i(\text{old}) + \alpha\,t\,x_i, \qquad b(\text{new}) = b(\text{old}) + \alpha\,t$$

If no error occurs, there is no weight updation, and hence the training process may be stopped.
PERCEPTRON NETWORK
•Architecture of Perceptron Network:
PERCEPTRON NETWORK
PERCEPTRON Learning Rule:
• The learning signal is the difference between the desired and the actual response.
• Consider a finite number "n" of input training vectors x(n), each with an associated target vector t(n).
• The target is either +1 or -1.
• The output y is obtained by calculating the net input and applying the activation function over it.
PERCEPTRON NETWORK
PERCEPTRON Learning Rule:
• The weight updation in perceptron learning is as follows: when the output y does not match the target t,

$$w(\text{new}) = w(\text{old}) + \alpha\,t\,x$$

and the weights are left unchanged otherwise.

• The perceptron rule convergence theorem states: "If there is a weight vector W such that f(x(n)·W) = t(n) for all n, then for any starting vector w1, the perceptron learning rule will converge to a weight vector that gives the correct response for all training patterns, and this learning takes place within a finite number of steps, provided that a solution exists."
PERCEPTRON NETWORK
FLOW CHART FOR PERCEPTRON NETWORK:
PERCEPTRON NETWORK
Perceptron Training Algorithm:
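The algorithm steps appear as a figure in the original slides. The following Python sketch reconstructs the perceptron training loop described above; the stopping test (no weight change over a full epoch) and the default learning rate are the usual textbook choices, and all names are illustrative:

```python
import numpy as np

def train_perceptron(X, T, alpha=1.0, theta=0.0, max_epochs=100):
    """Perceptron training. X: input patterns (n, m); T: bipolar targets (n,)."""
    w = np.zeros(X.shape[1])            # initial weights
    b = 0.0                             # initial bias
    for _ in range(max_epochs):
        changed = False
        for x, t in zip(X, T):
            y_in = b + np.dot(x, w)     # net input
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                  # update only when an error occurs
                w = w + alpha * t * x
                b = b + alpha * t
                changed = True
        if not changed:                 # no error in a full epoch: stop
            break
    return w, b
```

For example, for the AND function with bipolar inputs and targets, train_perceptron(np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]), np.array([1, -1, -1, -1])) converges within a few epochs.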
PERCEPTRON NETWORK
Perceptron Testing Algorithm:
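Testing uses the feedforward computation only, with the trained weights held fixed; a minimal sketch, assuming the same activation as in training:

```python
import numpy as np

def test_perceptron(X, w, b, theta=0.0):
    """Classify each pattern using the trained weights (feedforward only)."""
    y_in = b + X @ w            # net inputs for all patterns at once
    return np.where(y_in > theta, 1, np.where(y_in < -theta, -1, 0))
```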
PERCEPTRON NETWORK
Perceptron Testing Algorithm: NUMERICAL
PERCEPTRON NETWORK
Problem 1:
PERCEPTRON NETWORK
Solution 1:
PERCEPTRON NETWORK
Problem 2:
Adaptive Linear Neuron (Adaline)
• Units with a linear activation function are called linear units.
• A network with a single linear unit is called an Adaline.
• In an Adaline, the input-output relationship is linear.
• Adaline uses bipolar activation for its input signals and its target output.
• The weights between the inputs and the output are adjustable.
• The bias in an Adaline acts like an adjustable weight whose connection comes from a unit whose activation is always 1.
• An Adaline network has only one output unit.
• An Adaline network may be trained using the delta rule (also called the least mean square (LMS) rule or the Widrow-Hoff rule).
• This learning rule minimizes the mean-squared error between the activation and the target value.
Adaptive Linear Neuron (Adaline)
Delta Rule for Single Output Unit:
• It is very similar to the perceptron learning rule; however, their origins are different.
• The perceptron learning rule originated from the Hebbian assumption, while the delta rule is derived from the gradient-descent method.
• It updates the weights to minimize the difference between the net input to the output unit and the target value.
• The major aim is to minimize the error over all training patterns.
• The delta rule for adjusting the ith weight (i = 1 to n) is

$$\Delta w_i = \alpha\,(t - y_{in})\,x_i$$
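The gradient-descent origin can be made explicit in one line (a standard derivation, not shown in the slides): taking the squared error on the net input,

$$E = (t - y_{in})^2, \qquad \frac{\partial E}{\partial w_i} = -2\,(t - y_{in})\,x_i \;\Rightarrow\; \Delta w_i = \alpha\,(t - y_{in})\,x_i,$$

where the constant factor 2 is absorbed into the learning rate α.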
Adaptive Linear Neuron (Adaline)
Architecture of Adaline:
Adaptive Linear Neuron (Adaline)
Training Algorithm for Adaline:
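The algorithm itself is given as a figure in the slides; below is a Python sketch of Adaline training with the delta rule, assuming small random initial weights and a stopping test on the largest weight change in an epoch (both standard choices, with illustrative names):

```python
import numpy as np

def train_adaline(X, T, alpha=0.1, tol=1e-3, max_epochs=1000):
    """Adaline training with the delta (LMS / Widrow-Hoff) rule."""
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.1, 0.1, X.shape[1])   # small random initial weights
    b = 0.0
    for _ in range(max_epochs):
        largest_change = 0.0
        for x, t in zip(X, T):
            y_in = b + np.dot(x, w)          # linear activation: output = net input
            dw = alpha * (t - y_in) * x      # delta rule update
            w += dw
            b += alpha * (t - y_in)
            largest_change = max(largest_change, float(np.max(np.abs(dw))))
        if largest_change < tol:             # weight changes negligible: stop
            break
    return w, b
```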
Adaptive Linear Neuron (Adaline)
Testing Algorithm for Adaline:
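A matching sketch of the testing phase, assuming a bipolar step applied to the net input:

```python
import numpy as np

def test_adaline(X, w, b):
    """Apply the trained Adaline: bipolar step on the net input."""
    return np.where(b + X @ w >= 0, 1, -1)
```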
Adaptive Linear Neuron (Adaline)
Question:
Multiple Adaptive Linear Neuron (Madaline)
• It consists of many Adalines in parallel with a single output unit whose value is based on certain selection rules.

• The weights connecting the Adaline layer to the Madaline layer are fixed, positive and possess equal values.

• The weights between the input layer and the Adaline layer are adjusted during the training process.

• The Adaline and Madaline layer neurons have a bias of excitation "1" connected to them.

• The training process for a Madaline system is similar to that of an Adaline.
Multiple Adaptive Linear Neuron (Madaline)

Architecture of Madaline:
Multiple Adaptive Linear Neuron (Madaline)

Training Algorithm:
• In this training algorithm, only the weights between the hidden layer and the input layer are adjusted; the weights for the output unit are fixed.
• The weights v1, v2, ..., vm and the bias b0 that enter the output unit Y are determined so that the response of unit Y is 1. Thus, the weights entering unit Y may be taken as v1 = v2 = ... = vm = 1/2, and the bias can be taken as b0 = 1/2. The training sketch below follows these fixed values.
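A Python sketch of this procedure (Madaline Rule I), under the fixed output weights and bias of 1/2 stated above; the separate updates for t = 1 and t = -1 follow the standard MRI formulation, and all names are illustrative:

```python
import numpy as np

def bipolar_step(x):
    return np.where(x >= 0, 1, -1)

def train_madaline(X, T, n_hidden=2, alpha=0.5, max_epochs=100):
    """Madaline Rule I: only input-to-Adaline weights are trained;
    Adaline-to-output weights stay fixed at 1/2."""
    rng = np.random.default_rng(0)
    W = rng.uniform(-0.1, 0.1, (X.shape[1], n_hidden))  # input-to-Adaline weights
    b = rng.uniform(-0.1, 0.1, n_hidden)                # Adaline biases
    v, b0 = 0.5, 0.5                                    # fixed output weights and bias
    for _ in range(max_epochs):
        changed = False
        for x, t in zip(X, T):
            z_in = b + x @ W                 # net inputs of the hidden Adalines
            z = bipolar_step(z_in)           # Adaline outputs
            y = 1 if b0 + v * z.sum() >= 0 else -1
            if y == t:
                continue
            changed = True
            if t == 1:
                # update only the Adaline whose net input is closest to zero
                j = int(np.argmin(np.abs(z_in)))
                W[:, j] += alpha * (1 - z_in[j]) * x
                b[j] += alpha * (1 - z_in[j])
            else:
                # update all Adalines whose net input is positive
                for k in np.flatnonzero(z_in > 0):
                    W[:, k] += alpha * (-1 - z_in[k]) * x
                    b[k] += alpha * (-1 - z_in[k])
        if not changed:
            break
    return W, b
```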
Multiple Adaptive Linear Neuron (Madaline)
Problem:
Multiple Adaptive Linear Neuron (Madaline)
Solution:

All the weights and the bias between the input layer and the hidden layer are adjusted. This completes the training for the first epoch. The same process is repeated until the weights converge. The network architecture for the Madaline network, with the final weights for the XOR function, is shown in this figure:
Multiple Adaptive Linear Neuron (Madaline)
The table shows the training performance of the Madaline network for the XOR function.
Back-Propagation Network
• It is one of the most important developments in neural networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986).
• It is applied to multilayer feed-forward networks consisting of processing elements with continuous differentiable activation functions.
• The basic concept of this weight-update algorithm is simply the gradient-descent method, as used in the case of simple perceptron networks with differentiable units.
• The training of a BPN is done in three stages:
  • the feedforward of the input training pattern
  • the calculation and back-propagation of the error
  • the updating of the weights
• The testing of the BPN involves the computation of the feedforward phase only.
Back-Propagation Network
•Architecture:
Back-Propagation Network
The terminologies used in the training algorithm are as follows:
Back-Propagation Network
Training Algorithm:
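The full algorithm appears as figures in the slides. The NumPy sketch below follows the three training stages listed earlier for a single hidden layer, assuming binary sigmoid activations throughout; the layer sizes, learning rate and initialization range are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpn(X, T, n_hidden=4, alpha=0.25, epochs=1000):
    """One-hidden-layer BPN. X: inputs (n, n_in); T: targets (n, n_out)."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], T.shape[1]
    V = rng.uniform(-0.5, 0.5, (n_in, n_hidden))    # input-to-hidden weights
    bV = np.zeros(n_hidden)                         # hidden biases
    W = rng.uniform(-0.5, 0.5, (n_hidden, n_out))   # hidden-to-output weights
    bW = np.zeros(n_out)                            # output biases
    for _ in range(epochs):
        for x, t in zip(X, T):
            # Stage 1: feedforward of the input training pattern
            z = sigmoid(bV + x @ V)                 # hidden activations
            y = sigmoid(bW + z @ W)                 # output activations
            # Stage 2: back-propagation of the error
            delta_y = (t - y) * y * (1.0 - y)           # output error terms
            delta_z = (delta_y @ W.T) * z * (1.0 - z)   # hidden error terms
            # Stage 3: weight and bias updates
            W += alpha * np.outer(z, delta_y)
            bW += alpha * delta_y
            V += alpha * np.outer(x, delta_z)
            bV += alpha * delta_z
    return V, bV, W, bW
```

Testing then reuses only the two feedforward lines (Stage 1) with the trained weights.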
Back-Propagation Network
Testing Algorithm:
Back-Propagation Network
Question:
Back-Propagation Network
Solution:
