ANN Lab Manual 2
Assignment Number: 15
Title
Content beyond Syllabus – vlab – To demonstrate the perceptron learning law
Objectives
To illustrate the concept of perceptron learning in the context of a pattern classification task
Outcomes
Students are able to implement and demonstrate the perceptron learning law.
Software
Python
Theory
https://cse22-iiith.vlabs.ac.in/exp/perceptron-learning/observations.html
Figure: Perceptron model.
A two-layer feedforward neural network with a hard-limiting output function for the units in the output layer can be used to perform the task of pattern classification. The number of units in the input layer is equal to the dimension of the input vectors. The units in the input layer are all linear, and the input layer merely fans out the input to each of the output units. The output layer may consist of one or more perceptrons. The number of perceptron units in the output layer depends on the number of distinct classes in the pattern classification task. If there are only two classes, then one perceptron in the output layer is sufficient; two perceptrons in the output layer can be used when dealing with four different classes. Here, we consider a two-class pattern classification problem, and hence only one perceptron in the output layer is needed.
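In code, the model of the figure reduces to a weighted sum passed through a hard limiter. A minimal sketch (the function name and the bipolar output convention are our own illustrative choices):

import numpy as np

def perceptron(x, w, theta):
    """Hard-limiting output unit: +1 on or above the threshold, -1 below."""
    return 1 if np.dot(w, x) - theta >= 0 else -1

# Example: a 2-dimensional input fanned out to one output perceptron.
print(perceptron(np.array([0.5, -1.0]), np.array([1.0, 1.0]), theta=0.0))  # -> -1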
Note that the learning rule modifies the weights only when an input vector is misclassified. When an input vector is classified correctly, there is no adjustment of the weights or the threshold. When presenting the input vectors to the network (any neural network in general), we use the term epoch, which denotes one presentation of all the input pattern vectors to the network. To obtain suitable weights, the learning rule may need to be applied for more than one epoch, typically several. After each epoch, it is verified whether the existing set of weights can correctly classify all the input vectors. If so, the process of updating the weights is terminated. Otherwise, the process continues until a desired set of weights is obtained. Note that once a separating hypersurface is achieved, the weights are not modified.
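A minimal Python sketch of this procedure (the learning rate, the data layout, and names such as train_perceptron are our own illustrative choices, not prescribed by the vlab):

import numpy as np

def train_perceptron(X, d, eta=0.1, max_epochs=100):
    """Perceptron learning law for a two-class task.
    X : (n_samples, n_features) input vectors
    d : (n_samples,) desired outputs, +1 or -1
    """
    w = np.zeros(X.shape[1])   # weights
    theta = 0.0                # threshold
    for epoch in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):
            y = 1 if np.dot(w, x) - theta >= 0 else -1  # hard-limiting output
            if y != target:
                # Weights and threshold are adjusted only on misclassification.
                w += eta * target * x
                theta -= eta * target
                errors += 1
        if errors == 0:   # a whole epoch with no errors: separating
            break         # hypersurface found, stop updating
    return w, theta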
3
Select a problem type for generating the two classes: 'Linearly separable' or 'Linearly inseparable'. Choose the number of samples per class and the number of iterations the perceptron network must go through.
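For the 'Linearly separable' choice, a hedged example of generating the two classes and running the sketch above (the Gaussian class centres, spread, sample count, and random seed are arbitrary illustrative values, not those of the vlab):

import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clouds with centres far enough apart to be linearly separable.
n = 20                                                        # samples per class
class_a = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(n, 2))
class_b = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(n, 2))

X = np.vstack([class_a, class_b])
d = np.hstack([np.ones(n), -np.ones(n)])   # desired outputs: +1 / -1

# train_perceptron is the sketch given earlier in this assignment.
w, theta = train_perceptron(X, d, eta=0.1, max_epochs=50)
print("weights:", w, "threshold:", theta)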
Conclusions
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________
Assignment Number: 16
Title
Content beyond Syllabus – vlab – Hopfield Model for Pattern Storage Task
Objectives
To illustrate the Hopfield model for the pattern storage task
Outcomes
Students are able to illustrate the Hopfield model for the pattern storage task.
Software
Python
Theory
https://cse22-iiith.vlabs.ac.in/exp/pattern-storage-task/
The objective in a pattern storage task is to store a given set of patterns, so that any of them can
be recalled exactly, even when an approximate version of the corresponding pattern is presented
to the network.
We use the Hopfield model of a feedback network for addressing the task of pattern storage. The perceptron neuron model is used for the units of the feedback network, where the output of each unit is fed to all the other units through the weights \(w_{ij}\), for all \(i\) and \(j\). Let the output function of each of the units be bipolar (+1 or -1), so that

\[ s_i = f(x_i) = \operatorname{sgn}(x_i) \]

and

\[ x_i = \sum_{j} w_{ij}\, s_j - \theta_i \]

where \(\theta_i\) is the threshold for unit i. Due to feedback, the state of a unit depends on the states of the other units. The update of the state of a unit can be done synchronously or asynchronously. In an asynchronous update, a randomly chosen unit is updated, and this is continued until no further change in the states takes place for any of the units. That is,

\[ s_i(t+1) = \operatorname{sgn}\Big(\sum_{j} w_{ij}\, s_j(t) - \theta_i\Big) = s_i(t) \quad \text{for all } i. \]

In this situation we can say that the network activation dynamics has reached a stable state.
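A compact Python sketch of the model (we assume the common textbook Hebbian weight prescription and zero thresholds; the vlab experiment may construct the weights differently). Patterns are stored in the weights, and an approximate probe is driven toward a stable state by asynchronous updates:

import numpy as np

def store(patterns):
    # Hebbian prescription (assumed): w_ij = (1/N) sum_p s_i^p s_j^p, w_ii = 0.
    n_units = patterns.shape[1]
    w = patterns.T @ patterns / n_units
    np.fill_diagonal(w, 0.0)            # no self-connections
    return w

def recall(w, probe, rng, steps=500):
    # Asynchronous dynamics: pick a random unit and recompute its state.
    # For this toy network a fixed number of steps suffices; a fuller
    # implementation would check explicitly for convergence.
    s = probe.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1   # threshold theta_i = 0 assumed
    return s

rng = np.random.default_rng(1)
patterns = np.array([[1, 1, -1, -1, 1],
                     [-1, 1, 1, -1, -1]])
w = store(patterns)

probe = patterns[0].copy()
probe[0] = -probe[0]          # approximate version: one bit flipped
print(recall(w, probe, rng))  # settles to a stable state (ideally pattern 0)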
Chosen states that are within a Hamming distance of 1 of each other cannot both be made stable. For chosen states that can be made stable, there is a set of values of the weights and thresholds satisfying the corresponding inequalities; with such a choice, the chosen states are the states of minimum energy.
In the resulting state transition diagram, there is a positive probability of starting from any state and ending up in one of the stable states.
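This can be checked by brute force for a small network. The sketch below enumerates every single-unit transition of a 3-unit network; the weight values are arbitrary assumptions chosen so that two states come out stable:

import numpy as np
from itertools import product

# Illustrative symmetric weights with zero diagonal (assumed values).
w = np.array([[ 0.0, 0.5, -0.5],
              [ 0.5, 0.0,  0.4],
              [-0.5, 0.4,  0.0]])
theta = np.zeros(3)

def next_states(s):
    """All states reachable by updating one (randomly chosen) unit."""
    out = set()
    for i in range(3):
        t = list(s)
        t[i] = 1 if np.dot(w[i], s) - theta[i] >= 0 else -1
        out.add(tuple(t))
    return out

# A state is stable when every single-unit update leaves it unchanged.
for s in product([-1, 1], repeat=3):
    succ = next_states(np.array(s))
    print(s, "->", sorted(succ), "(stable)" if succ == {s} else "")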
The following figures illustrate the concept of an energy landscape. Figure 1(a) shows an energy landscape in which each minimum state is supported by several non-minimum states in its neighbourhood. Figure 1(b) has no such support for the minimum states. Hence, patterns can be stored if an energy landscape of the type in Figure 1(a) is realized by a suitable design of the feedback network.
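The landscape in question is defined by the standard Hopfield energy function (a textbook expression we assume here, consistent with the update rule above):

\[ V(\mathbf{s}) = -\tfrac{1}{2} \sum_i \sum_j w_{ij}\, s_i s_j + \sum_i \theta_i s_i \]

Asynchronous updates never increase \(V\) when the weights are symmetric with zero self-connections, so the stable states sit at local minima of the landscape. A small evaluator:

import numpy as np

def energy(w, theta, s):
    # V(s) = -1/2 * s^T W s + theta . s  (lower means deeper in the landscape)
    return -0.5 * s @ w @ s + theta @ s

# With the weights from the storage sketch above and zero thresholds, a stored
# pattern evaluates to a lower energy than its one-bit-corrupted version.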
Conclusions
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________