Ann Lab Manual 2

The document discusses implementations of CNN models for character recognition using PyTorch, Keras and TensorFlow. It also discusses demonstrations of the perceptron learning law and the Hopfield model for pattern storage using Python. Specifically, it uses these frameworks to classify handwritten characters from the MNIST data and to explore how the perceptron and the Hopfield model work for classification and memory, respectively.


No.  Experiment                                                                   Software

13   TensorFlow / PyTorch implementation of CNN                                   TensorFlow   1,2  5,11  1,2
14   MNIST Handwritten Character Detection using PyTorch, Keras and TensorFlow    TensorFlow   1,2  3,4   1,2

Content beyond Syllabus

15   Vlab: Perceptron Learning Law                                                PC with Python IDE
16   Vlab: Hopfield Model for Pattern Storage Task                                PC with Python IDE

Assignment Number: 15

Title
 Content beyond Syllabus – Vlab: To demonstrate the perceptron learning law
Objectives
 To illustrate the concept of perceptron learning in the context of a pattern classification task
Outcomes
 Students are able to implement and demonstrate the perceptron learning law
Software
Python
Theory

https://cse22-iiith.vlabs.ac.in/exp/perceptron-learning/observations.html

 Structure of two-layer feedforward neural network


 A perceptron is a model of a biological neuron. The input to a perceptron is an M-dimensional
vector, and each component/dimension of the vector is scaled by a weight. The sum of the weighted
inputs is computed and compared against a threshold. If the weighted sum exceeds the threshold,
the output of the perceptron is '1'; otherwise, the output of the perceptron is '-1' (or '0'). The
output function of a perceptron is a hard-limiting function, so the output of the perceptron is
binary in nature. The following figure illustrates a perceptron.

 Perceptron learning law


The goal of the perceptron learning law is to systematically adjust the weights and the threshold in
such a manner that a dividing surface between the two classes is obtained.

Figure: Perceptron model, where M is the number of elements in the input vector.
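
As a concrete illustration, here is a minimal Python sketch of this computation; the function and variable names are our own, for illustration only, and are not part of the vlab:

import numpy as np

def perceptron_output(x, w, theta):
    """Hard-limiting perceptron: output +1 if the weighted sum of
    the M inputs exceeds the threshold theta, and -1 otherwise."""
    weighted_sum = np.dot(w, x)              # sum of weighted inputs
    return 1 if weighted_sum > theta else -1

# Example with an M = 3 dimensional input vector.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, 0.9])
print(perceptron_output(x, w, theta=1.0))    # 0.2 - 0.3 + 1.8 = 1.7 > 1.0, so prints 1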

A two-layer feedforward neural network with a hard-limiting output function for the units in the
output layer can be used to perform the task of pattern classification. The number of units in the
input layer is equal to the dimension of the input vectors. The units in the input layer are all
linear units, and the input layer merely fans out the input to each of the output units. The output
layer may consist of one or more perceptrons. The number of perceptron units in the output layer
depends on the number of distinct classes in the pattern classification task. If there are only two
classes, then one perceptron in the output layer is sufficient; two perceptrons in the output layer
can be used when dealing with four different classes. Here, we consider a two-class classification
problem, and hence only one perceptron in the output layer.

 Two-class pattern classification problem

Note that the learning rule modifies the weights only when an input vector is misclassified.
When an input vector is classified correctly, there is no adjustment of the weights or the threshold.
When presenting the input vectors to the network (any neural network in general), we use the term
epoch, which denotes one presentation of all the input pattern vectors to the network. To
obtain suitable weights, the learning rule may need to be applied for more than one epoch,
typically several epochs. After each epoch, it is verified whether the existing set of weights can
correctly classify the input vectors. If so, the process of updating the weights is terminated;
otherwise, the process continues until a desired set of weights is obtained. Note that once a
separating hypersurface is achieved, the weights are not modified.

Select a problem type for generating the two classes: 'Linearly separable' or 'Linearly inseparable'.

Then choose the number of samples per class and the number of iterations the perceptron network
must go through.
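
Below is a minimal NumPy sketch of the same experiment, assuming two Gaussian clusters as the linearly separable classes; the cluster centres, learning rate, and random seed are illustrative choices, not part of the vlab. The weights change only when an input vector is misclassified, and training stops after the first epoch in which every vector is classified correctly:

import numpy as np

rng = np.random.default_rng(0)

# Generate a 'Linearly separable' problem: two Gaussian clusters in 2-D.
n_per_class = 20                       # number of samples per class
class1 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(n_per_class, 2))
class2 = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(n_per_class, 2))
X = np.vstack([class1, class2])
d = np.array([1] * n_per_class + [-1] * n_per_class)   # desired outputs

# Absorb the threshold into the weights by appending a constant 1 input.
X = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(X.shape[1])
eta = 0.1                              # learning-rate constant

for epoch in range(100):               # number of iterations (epochs)
    errors = 0
    for x, target in zip(X, d):
        y = 1 if np.dot(w, x) > 0 else -1
        if y != target:                # adjust weights only on misclassification
            w += eta * target * x      # perceptron learning law
            errors += 1
    if errors == 0:                    # all input vectors classified correctly
        print(f"converged after epoch {epoch + 1}, weights = {w}")
        break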

Conclusions

______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________

Assignment Number: 16

Title
 Content beyond Syllabus – Vlab: Hopfield Model for Pattern Storage Task

Objectives
 To illustrate the Hopfield Model for the Pattern Storage Task
Outcomes
 Students are able to illustrate the Hopfield Model for the Pattern Storage Task
Software
Python
Theory

https://cse22-iiith.vlabs.ac.in/exp/pattern-storage-task/

The objective in a pattern storage task is to store a given set of patterns, so that any of them can
be recalled exactly, even when an approximate version of the corresponding pattern is presented
to the network.

Pattern storage network

Pattern storage is generally accomplished by a feedback network consisting of processing units
with non-linear output functions. The outputs of all the processing units at any instant of time
define the output state of the network at that instant. Associated with each output state is an
energy, which depends on the network parameters, such as the weights and biases, besides the state of
the network. The energy as a function of the state corresponds to an energy landscape. The feedback
among the units and the non-linear processing in the units may create basins of attraction in the
energy landscape when the weights satisfy certain constraints. The basins of attraction in the
energy landscape tend to be the regions of stable equilibrium states. The fixed points in these
regions correspond to the states of minimum energy, and they are used to store the desired
patterns. These stored patterns can be recalled even with approximate patterns as inputs. The
number of patterns that can be stored is called the capacity of the network.
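
For the Hopfield network described in the next section (equations (1) and (2)), the energy associated with a state \( s = (s_1, \ldots, s_N) \) takes the standard form

\( E = -\frac{1}{2}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N} w_{ij} s_i s_j + \sum\limits_{i=1}^{N} \theta_i s_i \)

For symmetric weights (\( w_{ij} = w_{ji} \)) with no self-connections (\( w_{ii} = 0 \)), each asynchronous update can only decrease this energy or leave it unchanged, which is why the dynamics settles into the minima of the energy landscape.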

The Hopfield Model

We use the Hopfield model of a feedback network for addressing the task of pattern storage. The
perceptron neuron model is used for the units of the feedback network, where the output of each
unit is fed to all the other units with weights \( w_{ij} \), for all \( i \) and \( j \). Let the output function of
each of the units be bipolar (+1 or -1),

so that

\( s_i = f(x_i) = \mathrm{sgn}(x_i) \qquad (1) \)

and

\( x_i = \sum\limits_{j=1}^{N} w_{ij} s_j - \theta_i \qquad (2) \)

where \( \theta_i \) is the threshold for unit \( i \). Due to feedback, the state of a unit depends on the
states of the other units. The update of the state of a unit can be done synchronously or
asynchronously. In an asynchronous update, one randomly chosen unit is updated at a time, and the
updating is continued until no further change in the states takes place for any of the units. That is,

\( s_i(t+1) = s_i(t) \) for all \( i \).

In this situation, we can say that the network activation dynamics has reached a stable state.
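
The following is a minimal NumPy sketch of pattern storage and recall in such a network. The Hebbian (outer-product) choice of weights with zero thresholds is an assumption here, since the text does not prescribe how the weights are obtained; the patterns, dimension, and random seed are illustrative:

import numpy as np

rng = np.random.default_rng(1)

# Two bipolar (+1/-1) patterns of dimension N to be stored.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
N = patterns.shape[1]

# Hebbian (outer-product) weights: an illustrative assumption.
# No self-connections (w_ii = 0) and zero thresholds (theta_i = 0).
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """E = -1/2 * sum_ij w_ij s_i s_j (thresholds are zero here)."""
    return -0.5 * s @ W @ s

def recall(s, max_sweeps=100):
    """Asynchronous update: visit units in random order and apply
    equations (1)-(2) until no unit changes state."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(N):
            x_i = W[i] @ s                    # equation (2) with theta_i = 0
            s_new = 1 if x_i >= 0 else -1     # equation (1): sgn(x_i)
            if s_new != s[i]:
                s[i], changed = s_new, True
        if not changed:                       # s_i(t+1) = s_i(t) for all i
            return s
    return s

# Present an approximate version of the first pattern (one bit flipped).
noisy = patterns[0].copy()
noisy[2] = -noisy[2]
print(energy(noisy), energy(patterns[0]))     # the stored pattern has lower energy
print(recall(noisy))                          # recovers [ 1 -1  1 -1  1 -1]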

 Chosen states that are within a Hamming distance of 1 of each other cannot be made stable states together.
 For states that can be made stable, there is a set of values of the weights and thresholds that
satisfies the corresponding inequalities; with such values, the model always has the chosen states as
the states of minimum energy.
 In the state transition diagram generated, there is a positive probability of starting from any state
and ending up in a stable state.

The following figures illustrate the concept of an energy landscape. Figure 1(a) shows an energy
landscape with each minimum state supported by several nonminimum states in its
neighbourhood. Figure 1(b) does not have any such support for the minimum states. Hence,
patterns can be stored if an energy landscape of the type in Figure 1(a) is realized by a suitable
design of the feedback network.

Figure: Illustration of energy landscapes.

Conclusions

______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________
