Hopfield Network Book

1. The discrete Hopfield network is a fully interconnected recurrent neural network that uses asynchronous updating. 2. It can be used for associative memory applications by training the network using the Hebb rule to store binary or bipolar patterns. 3. The network has an energy function that guarantees convergence to a stable state, allowing it to retrieve patterns from partial or corrupted inputs.

Here the energy function is bounded below by

$$E_f(x, y) \geq -\sum_{i=1}^{n} \sum_{j=1}^{m} |w_{ij}|$$

so the discrete BAM will converge to a stable state.


The memory capacity or the storage capacity of BAM may be given as

$$\min(m, n)$$

where "n" is the number of units in the X layer and "m" is the number of units in the Y layer. Also, a more conservative capacity is estimated as

$$\sqrt{\min(m, n)}$$
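As a quick worked example under these estimates, a BAM joining a 15-unit X layer to a 2-unit Y layer (like the letter-association net in Solved Problem 15 later in this chapter) has a capacity of min(15, 2) = 2 patterns, or √2 ≈ 1.4, i.e., about one pattern, by the conservative estimate.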

4.6 Hopfield Networks

John J. Hopfield developed a model in the year 1982 conforming to the asynchronous nature of biological neurons. The networks proposed by Hopfield are known as Hopfield networks, and it is his work that promoted construction of the first analog VLSI neural chip. This network has found many useful applications in associative memory and various optimization problems. In this section, two types of network are discussed: discrete and continuous Hopfield networks.

4.6.1 Discrete Hopfield Network

The Hopfield network is an autoassociative fully interconnected single-layer feedback network. It is also a symmetrically weighted network. When this is operated in discrete fashion it is called a discrete Hopfield network, and its architecture as a single-layer feedback network can be called recurrent. The network takes two-valued inputs: binary (0, 1) or bipolar (+1, -1); the use of bipolar inputs makes the analysis easier. The network has symmetrical weights with no self-connections, i.e.,

$$w_{ij} = w_{ji}; \quad w_{ii} = 0$$

The key points to be noted in the Hopfield net are: only one unit updates its activation at a time; also, each unit is found to continuously receive an external signal along with the signals it receives from the other units in the net. When a single-layer recurrent network is performing a sequential updating process, an input pattern is first applied to the network and the network's output is found to be initialized accordingly. Afterwards, the initializing pattern is removed, and the output that is initialized becomes the new updated input through the feedback connections. The first updated input forces the first updated output, which in turn acts as the second updated input through the feedback interconnections and results in the second updated output. This transition process continues until no new, updated responses are produced and the network reaches its equilibrium.

The asynchronous updation of the units allows a function, called an energy function or Lyapunov function, for the net. The existence of this function enables us to prove that the net will converge to a stable set of activations. The usefulness of content addressable memory is realized by the discrete Hopfield net.

4.6.1.1 Architecture of Discrete Hopfield Net

The architecture of the discrete Hopfield net is shown in Figure 4-7. Hopfield's model consists of processing elements with two outputs, one inverting and the other non-inverting. The outputs from each processing element are fed back to the inputs of the other processing elements but not to itself. The connections are found to be resistive, and the connection strength over a connection is represented as w_ij. Here, as such, there are no negative resistors; hence excitatory connections use positive outputs and inhibitory connections use inverted outputs. Connections are excitatory if the output of a processing element is found to be the same as the input, and they are inhibitory if the inputs differ from the output of the processing element. A connection between the processing elements i and j is found to be associated with a connection strength w_ij. This weight is positive if units i and j are both on. On the other hand, if the connection strength is negative, it represents the situation of unit i being on and j being off. Also, the weights are symmetric, i.e., the weights w_ij are the same as w_ji.

Figure 4-7 Architecture of discrete Hopfield net.

4.6.1.2 Training Algorithm of Discrete Hopfield Net

There exist several versions of the discrete Hopfield net. It should be noted that Hopfield's first description used binary input vectors and only later were bipolar input vectors used.

For storing a set of binary patterns s(p), p = 1 to P, where s(p) = (s_1(p), ..., s_i(p), ..., s_n(p)), the weight matrix W is given as

$$w_{ij} = \sum_{p=1}^{P} [2 s_i(p) - 1][2 s_j(p) - 1], \quad \text{for } i \neq j$$
For storing a set of bipolar input patterns s(p) (as defined above), the weight matrix W is given as

$$w_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p), \quad \text{for } i \neq j$$

and the weights here have no self-connection, i.e., $w_{ii} = 0$.

4.6.1.3 Testing Algorithm of Discrete Hopfield Net

In the case of testing, the update rule is formed and the initial weights are those obtained from the training algorithm. The testing algorithm for the discrete Hopfield net is as follows:

Step 0: Initialize the weights to store patterns, i.e., the weights obtained from the training algorithm using the Hebb rule.

Step 1: While the activations of the net are not converged, perform Steps 2-8.

Step 2: Perform Steps 3-7 for each input vector X.

Step 3: Make the initial activations of the net equal to the external input vector X:

$$y_i = x_i \quad (i = 1 \text{ to } n)$$

Step 4: Perform Steps 5-7 for each unit Y_i. (Here, the units are updated in random order.)

Step 5: Calculate the net input of the network:

$$y_{in,i} = x_i + \sum_{j} y_j w_{ji}$$

Step 6: Apply the activations over the net input to calculate the output:

$$y_i = \begin{cases} 1 & \text{if } y_{in,i} > \theta_i \\ y_i & \text{if } y_{in,i} = \theta_i \\ 0 & \text{if } y_{in,i} < \theta_i \end{cases}$$

where θ_i is the threshold and is normally taken as zero.

Step 7: Now feed back (transmit) the obtained output y_i to all other units. Thus, the activation vectors are updated.

Step 8: Finally, test the network for convergence.

The updation here is carried out at random, but it should be noted that each unit may be updated at the same average rate. The asynchronous fashion of updation is carried out here. This means that at a given time only a single neural unit is allowed to update its output. The next update can be carried out on a randomly chosen node which uses the already updated output. It can also be said that under asynchronous operation of the network, each output node unit is updated separately by taking into account the most recent values that have already been updated. This type of updation is referred to as an asynchronous stochastic recursion of the discrete Hopfield network. By performing the analysis of the Lyapunov function, i.e., the energy function for the Hopfield net, it can be shown that the main features for the convergence of this net are the asynchronous updation of the units and the weights with no self-connection, i.e., the zeros existing on the diagonal of the weight matrix.

A Hopfield network with binary input vectors is used to determine whether an input vector is a "known" vector or an "unknown" vector. The net has the capacity to recognize a known vector by producing a pattern of activations on the units of the net that is the same as the vector stored in the net. If the input vector is an unknown vector, the activation vectors produced during iteration will converge to an activation vector which is not one of the stored patterns; such a pattern is called a spurious stable state.

4.6.1.4 Analysis of Energy Function and Storage Capacity on Discrete Hopfield Net

An energy function generally is defined as a function that is bounded and is a nonincreasing function of the state of the system. The energy function, also called a Lyapunov function, determines the stability property of a discrete Hopfield network. The state of a system for a neural network is the vector of activations of the units. Hence, if it is possible to find an energy function for an iterative neural net, the net will converge to a stable set of activations. An energy function E_f of a discrete Hopfield network is characterized as

$$E_f = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j\neq i}}^{n} y_i y_j w_{ij} - \sum_{i=1}^{n} x_i y_i + \sum_{i=1}^{n} \theta_i y_i$$

If the network is stable, then the above energy function decreases whenever the state of any node changes. Assuming that node i has changed its state from $y_i^{(k)}$ to $y_i^{(k+1)}$, i.e., the output has changed from +1 to -1 or from -1 to +1, the energy change $\Delta E_f$ is then given by

$$\Delta E_f = E_f\!\left(y_i^{(k+1)}\right) - E_f\!\left(y_i^{(k)}\right) = -\left(\sum_{\substack{j=1 \\ j\neq i}}^{n} y_j^{(k)} w_{ij} + x_i - \theta_i\right)\left(y_i^{(k+1)} - y_i^{(k)}\right) = -(\mathrm{net}_i)\,\Delta y_i$$

where $\Delta y_i = y_i^{(k+1)} - y_i^{(k)}$. The change in energy is dependent on the fact that only one unit can update its activation at a time. The change-in-energy equation for $\Delta E_f$ exploits the fact that $y_j^{(k+1)} = y_j^{(k)}$ for $j \neq i$, and that $w_{ij} = w_{ji}$ and $w_{ii} = 0$ (symmetric weight property).

There exist two cases in which a change $\Delta y_i$ will occur in the activation of neuron Y_i. If y_i is positive, then it will change to zero if

$$\left[x_i + \sum_{j=1}^{n} y_j w_{ij}\right] < \theta_i$$

This results in a negative change for y_i and $\Delta E_f < 0$. On the other hand, if y_i is zero, then it will change to positive if

$$\left[x_i + \sum_{j=1}^{n} y_j w_{ij}\right] > \theta_i$$

This results in a positive change for y_i and $\Delta E_f < 0$. Hence $\Delta y_i$ is positive only if the net input is positive, and $\Delta y_i$ is negative only if the net input is negative. Therefore, the energy cannot increase in any manner. As a result,
because the energy is bounded, the net must reach a stable state equilibrium, such that the energy does not change with further iteration. From this it can be concluded that the energy change depends mainly on the change in activation of one unit and on the symmetry of the weight matrix with zeros existing on the diagonal.

A Hopfield network always converges to a stable state in a finite number of node-updating steps, where every stable state is found to be at a local minimum of the energy function E_f. Also, the proving process uses the well-known Lyapunov stability theorem, which is generally used to prove the stability of dynamic systems defined with arbitrarily many interlocked differential equations. A positive-definite (energy) function E_f(y) can be found such that:

1. E_f(y) is continuous with respect to all the components y_i for i = 1 to n;
2. dE_f[y(t)]/dt < 0, which indicates that the energy function is decreasing with time and hence the origin of the state space is asymptotically stable.

Hence, a positive-definite (energy) function E_f(y) satisfying the above requirements can be a Lyapunov function for any given system; this function is not unique. If at least one such function can be found for a system, then the system is asymptotically stable. According to the Lyapunov theorem, the energy function that is associated with a Hopfield network is a Lyapunov function, and thus the discrete Hopfield network is asymptotically stable.

The storage capacity is another important factor. It can be found that the number of binary patterns that can be stored and recalled in a network with reasonable accuracy is given approximately as

$$C \approx 0.15\, n$$

where n is the number of neurons in the net. It can also be given as

$$C \approx \frac{n}{2 \log_2 n}$$
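The pieces above, Hebb-rule storage, asynchronous updating and the energy function, fit together in a few lines of code. The following is a minimal NumPy sketch, assuming bipolar units (so the "off" state is -1 rather than the 0 used in the binary algorithm above); the function names and the example pattern are illustrative, not from the text.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebb rule: w_ij = sum_p s_i(p) s_j(p), with w_ii = 0 (no self-connection)."""
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, theta=0.0, max_sweeps=100):
    """Asynchronous recall: one randomly chosen unit updates at a time."""
    y = x.astype(float).copy()
    rng = np.random.default_rng(0)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(y)):
            net = x[i] + W[i] @ y                          # external signal + feedback
            new = 1.0 if net > theta else (-1.0 if net < theta else y[i])
            if new != y[i]:
                y[i], changed = new, True
        if not changed:                                    # equilibrium reached
            break
    return y

def energy(W, y, x, theta=0.0):
    """E_f = -1/2 sum_{i != j} w_ij y_i y_j - sum_i x_i y_i + sum_i theta_i y_i."""
    return -0.5 * y @ W @ y - x @ y + theta * y.sum()

stored = np.array([[1, 1, 1, -1]])
W = train_hopfield(stored)
probe = np.array([1, 1, -1, -1])                           # one flipped component
result = recall(W, probe)
print(result, energy(W, result, probe))                    # converges to [ 1  1  1 -1]
```

With n = 100 such units, the C ≈ 0.15n estimate above suggests roughly 15 reliably recallable patterns.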
4.6.2 Continuous Hopfield Network

A discrete Hopfield net can be modified to a continuous model, in which time is assumed to be a continuous variable, and can be used for associative memory problems or optimization problems like the traveling salesman problem. The nodes of this network have a continuous, graded output rather than a two-state binary output. Thus, the energy of the network decreases continuously with time. The continuous Hopfield network can be realized as an electronic circuit, which uses non-linear amplifiers and resistors. This helps in building the Hopfield network using analog VLSI technology.

4.6.2.1 Hardware Model of Continuous Hopfield Network

The continuous network built up of electrical components is shown in Figure 4-8.

Figure 4-8 Model of Hopfield network using electrical components.

The model consists of n amplifiers, mapping the input voltage u_i into an output voltage y_i over an activation function a(u_i). The activation function used can be a sigmoid function, say,

$$a(\lambda u_i) = \frac{1}{1 + e^{-\lambda u_i}}$$

where λ is called the gain parameter. The continuous model becomes a discrete one when λ → ∞. Each of the amplifiers consists of an input capacitance C_i and an input conductance g_ri. The external signals entering the circuit are x_i; the external signals supply a constant current to each amplifier in an actual circuit. The output of the jth node is connected to the input of the ith node through conductance w_ij. Since all real resistor values are positive, the inverted node outputs ȳ_i are used to simulate the inhibitory signals. If the output of a particular node excites some other node, the connection is made with the signal from the noninverted output; if the connection is inhibitory, then the connection is made with the signal from the inverted output. Here also, the important symmetric weight requirement for the Hopfield network is imposed, i.e., w_ij = w_ji and w_ii = 0.

The rule for each node in a continuous Hopfield network can be derived as shown in Figure 4-9. Consider the input of a single node as in Figure 4-9. Applying Kirchhoff's current law (KCL), which states that the total current entering a junction is equal to that leaving the same junction, we get

$$C_i \frac{du_i}{dt} = \sum_{\substack{j=1 \\ j\neq i}}^{n} w_{ij}(y_j - u_i) - g_{ri} u_i + x_i = \sum_{\substack{j=1 \\ j\neq i}}^{n} w_{ij} y_j - G_i u_i + x_i$$

where

$$G_i = \sum_{\substack{j=1 \\ j\neq i}}^{n} w_{ij} + g_{ri}$$
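As a rough illustration of these dynamics, the node equation can be integrated numerically. The sketch below uses explicit Euler steps in NumPy; the weights, gain and step size are illustrative assumptions, and G_i is formed from |w_ij| so that the total conductance stays positive when signed weights stand in for the inverted-output wiring.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                        # symmetric weights, w_ij = w_ji
np.fill_diagonal(W, 0)                   # w_ii = 0

C = np.ones(n)                           # input capacitances C_i
g = 0.5 * np.ones(n)                     # input conductances g_ri
G = np.abs(W).sum(axis=1) + g            # total conductance (using |w| is an assumption)
x = np.zeros(n)                          # external currents x_i
lam = 5.0                                # gain parameter lambda

def a(u):                                # sigmoid activation a(lambda * u)
    return 1.0 / (1.0 + np.exp(-lam * u))

u = 0.1 * rng.normal(size=n)             # initial node voltages u_i(0)
dt = 1e-3
for _ in range(5000):
    y = a(u)
    dudt = (W @ y - G * u + x) / C       # C_i du_i/dt = sum_j w_ij y_j - G_i u_i + x_i
    u += dt * dudt

print(a(u))                              # settled amplifier outputs y_i = a(u_i)
```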
The equation obtained using KCL describes the time evolution of the system completely. If each single node is given an initial value, say, u_i(0), then the value u_i(t), and thus the amplifier output y_i(t) = a(u_i(t)) at time t, can be known by solving the differential equation obtained using KCL.

Figure 4-9 Input of a single node of continuous Hopfield network.

4.6.2.2 Analysis of Energy Function of Continuous Hopfield Network

For evaluating the stability property of the continuous Hopfield network, a continuous energy function is defined such that the evolution of the system is in the negative gradient of the energy function and finally converges to one of the stable minima in the state space. The corresponding Lyapunov energy function for the model shown in Figure 4-8 is

$$E_f = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j\neq i}}^{n} w_{ij}\, y_i y_j - \sum_{i=1}^{n} x_i y_i + \frac{1}{\lambda}\sum_{i=1}^{n} G_i \int_0^{y_i} a^{-1}(y)\, dy$$

where $a^{-1}(y) = \lambda u$ is the inverse of the function $y = a(\lambda u)$. The inverse function $a^{-1}(y)$ is shown in Figure 4-10(A) and its integral in Figure 4-10(B).

Figure 4-10 (A) Inverse and (B) integral of the nonlinear activation function a^{-1}(y).

To prove that the E_f obtained is the Lyapunov function for the network, its time derivative is taken with the weights w_ij symmetric:

$$\frac{dE_f}{dt} = \sum_{i=1}^{n} \frac{\partial E_f}{\partial y_i}\frac{dy_i}{dt} = \sum_{i=1}^{n}\left(-\sum_{\substack{j=1 \\ j\neq i}}^{n} y_j w_{ij} + G_i u_i - x_i\right)\frac{dy_i}{dt} = -\sum_{i=1}^{n} C_i \frac{dy_i}{dt}\frac{du_i}{dt}$$

Since

$$u_i = \frac{1}{\lambda}\, a^{-1}(y_i)$$

we get

$$\frac{du_i}{dt} = \frac{1}{\lambda}\frac{d\, a^{-1}(y_i)}{dy_i}\frac{dy_i}{dt} = \frac{1}{\lambda}\, a^{-1\prime}(y_i)\,\frac{dy_i}{dt}$$

where the derivative of $a^{-1}(y)$ is $a^{-1\prime}(y)$. So the derivative of the energy function becomes

$$\frac{dE_f}{dt} = -\sum_{i=1}^{n} C_i \frac{1}{\lambda}\, a^{-1\prime}(y_i)\left(\frac{dy_i}{dt}\right)^2$$

From Figure 4-10(A), we know that $a^{-1}(y_i)$ is a monotonically increasing function of y_i, and hence its derivative is positive everywhere. This shows that dE_f/dt is negative, and thus the energy function E_f must decrease as the system evolves. Therefore, if E_f is bounded, the system will eventually reach a stable state, where

$$\frac{dE_f}{dt} = \frac{dy_i}{dt} = 0$$

When the values of the threshold are zero, the continuous energy function becomes equal to the discrete energy function, except for the term

$$\frac{1}{\lambda}\sum_{i=1}^{n} G_i \int_0^{y_i} a^{-1}(y)\, dy$$

From Figure 4-10(B), the integral of $a^{-1}(y)$ is zero when y_i is zero and positive for all other values of y_i. The integral becomes very large as y approaches +1 or -1. Hence, the energy function E_f is bounded from below and is a Lyapunov function. The continuous Hopfield nets are best suited for constrained optimization problems.
4.7 Iterative Autoassociative Memory Networks

There exists a situation where the net does not respond to the input signal immediately with a stored target pattern, but the response may merely be more like the stored pattern, which suggests using the first response as input to the net again. The iterative autoassociative net should be able to recover an original stored vector when presented with a test vector close to it. These types of networks can also be called recurrent autoassociative networks, and the Hopfield networks discussed in Section 4.6 come under this category.

4.7.1 Linear Autoassociative Memory (LAM)

In 1977, James Anderson focused on the development of the LAM. This was based on the Hebbian rule, which states that connections between neuron-like elements are strengthened every time they are activated. Linear algebra is used to analyze the performance of the net.

Consider an m × m nonsingular symmetric matrix having m mutually orthogonal eigenvectors. The eigenvectors satisfy the property of orthogonality. A recurrent linear autoassociator network is trained using a set of P orthogonal unit vectors u_1, ..., u_P, where the number of times each vector is presented need not be the same.

The weight matrix can be determined using the Hebb learning rule, and this allows the repetition of some of the stored vectors. Each of these stored vectors is an eigenvector of the weight matrix, and the eigenvalues represent the number of times the corresponding vector was presented.

When the input vector X is presented, the output response of the net is XW, where W is the weight matrix. From the concepts of linear algebra, we know that we obtain the largest value of ||XW|| when X is the eigenvector for the largest eigenvalue; the next largest value of ||XW|| occurs when X is the eigenvector for the next largest eigenvalue, and so on. Thus, a recurrent linear autoassociator produces as its response the stored vector to which the input vector is most similar. This may perhaps take several iterations. A linear combination of vectors may be used to represent an input pattern: when an input vector is presented, the response of the net is the linear combination of the stored vectors weighted by their corresponding eigenvalues, and the eigenvector with the largest weight in this linear expansion is the one most similar to the input vector. Although the net increases its response corresponding to components of the input pattern on which it is trained most extensively, the overall output response of the system may grow without bound.

The main condition of linearity between the associative memories is that the set of input vector pairs and output vector pairs (since the net is autoassociative, both are the same) should be mutually orthogonal with each other, i.e., if A_p is the input pattern pair, for p = 1 to P, then

$$A_i A_j^T = 0, \quad \text{for all } i \neq j$$

Also, if all the vectors A_p are normalized to unit length, i.e.,

$$\sum_{i=1}^{n} (a_i)_p^2 = 1, \quad \text{for all } p = 1 \text{ to } P$$

then the output Y_j = A_p, i.e., the desired output has been recalled.

4.7.2 Brain-in-the-Box Network

An extension to the linear associator is the brain-in-the-box model. This model was described by Anderson, 1972, as follows: an activity pattern inside the box receives positive feedback on certain components, which has the effect of forcing it outward. When its elements start to limit (when it hits the wall of the box), it moves to a corner of the box, where it remains as such. The box resides in the state space (each neuron occupies one axis) of the network and represents the saturation limits for each state. Each component here is restricted between -1 and +1. The updation of the activations of the units in the brain-in-the-box model is done simultaneously.

The brain-in-the-box model consists of n units, each connected to every other unit. There is no trained weight on the self-connection, i.e., the diagonal elements of the trained weight matrix are set to zero; instead, each unit has a fixed self-connection with weight 1. The algorithm for the brain-in-the-box model is given in Section 4.7.2.1.

4.7.2.1 Training Algorithm for Brain-in-the-Box Model

Step 0: Initialize the weights to very small random values. Initialize the learning rates α and β.

Step 1: Perform Steps 2-6 for each training input vector.

Step 2: The initial activations of the net are made equal to the external input vector X:

$$y_i = x_i$$

Step 3: Perform Steps 4 and 5 while the activations continue to change.

Step 4: Calculate the net input:

$$y_{in,i} = y_i + \alpha \sum_{j} y_j w_{ji}$$

Step 5: Calculate the output of each unit by applying its activation:

$$y_i = \begin{cases} 1 & \text{if } y_{in,i} > 1 \\ y_{in,i} & \text{if } -1 \leq y_{in,i} \leq 1 \\ -1 & \text{if } y_{in,i} < -1 \end{cases}$$

A vertex of the box will be a stable state for the activation vector.

Step 6: Update the weights:

$$w_{ij}(\text{new}) = w_{ij}(\text{old}) + \beta\, y_i y_j$$

4.7.3 Autoassociator with Threshold Unit

If a threshold unit is set, then a threshold function can be used as the activation function for an iterative autoassociator net. The testing algorithm of the autoassociator with a specified threshold, for bipolar vectors and activations with symmetric weights and no self-connections, i.e., w_ij = w_ji and w_ii = 0, is given in the following section.
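Returning to the training algorithm of Section 4.7.2.1, the loop below is a minimal NumPy sketch of it; the learning rates, epoch count, stopping test and training data are illustrative assumptions.

```python
import numpy as np

def train_brain_in_box(X, alpha=0.1, beta=0.01, epochs=10):
    n = X.shape[1]
    rng = np.random.default_rng(0)
    W = 0.01 * rng.normal(size=(n, n))        # Step 0: small random weights
    for _ in range(epochs):
        for x in X:                           # Step 1: each training vector
            y = x.astype(float).copy()        # Step 2: y_i = x_i
            for _ in range(20):               # Step 3: while activations change
                # Steps 4-5: net input, then clip to the walls of the box
                y_new = np.clip(y + alpha * (y @ W), -1.0, 1.0)
                if np.allclose(y_new, y):
                    break
                y = y_new
            W += beta * np.outer(y, y)        # Step 6: w_ij += beta * y_i * y_j
    return W

X = np.array([[1, -1, 1, -1],
              [-1, 1, -1, 1]])
W = train_brain_in_box(X)
```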
4.7.3.1 Testing Algorithm

Step 0: The weights are initialized from the training algorithm to store patterns (use Hebbian learning).

Step 1: Perform Steps 2-5 for each testing input vector.

Step 2: Set the activations of X.

Step 3: Perform Steps 4 and 5 while the stopping condition is false.

Step 4: Update the activations of all units:

$$x_i = \begin{cases} 1 & \text{if } \sum_{j=1}^{n} x_j w_{ij} > \theta_i \\ x_i & \text{if } \sum_{j=1}^{n} x_j w_{ij} = \theta_i \\ -1 & \text{if } \sum_{j=1}^{n} x_j w_{ij} < \theta_i \end{cases}$$

The threshold θ_i may be taken as zero.

Step 5: Test for the stopping condition.

The network performs iterations until the output vector X matches a stored vector, or the testing input matches a previous vector, or the maximum number of iterations allowed is reached.

4.8 Temporal Associative Memory Network

The associative memories discussed so far evolve to a stable state and stay there. All act as content addressable memories for a set of static patterns. But there is also the possibility of storing sequences of patterns in the form of dynamic transitions. These types of patterns are called temporal patterns, and an associative memory with this capability is called a temporal associative memory. In this section, we shall learn how the BAM acts as a temporal associative memory. Assume all temporal patterns are bipolar or binary vectors given by an ordered set S with p vectors:

$$S = \{s_1, s_2, \ldots, s_i, \ldots, s_p\}$$

where the column vectors are n-dimensional. The neural network can memorize the sequence S in its dynamic state transitions such that the recalled sequence is s_1 → s_2 → ... → s_i → ... → s_p → s_1 → s_2 → ..., or in the reverse order.

A BAM can be used to generate the sequence S = {s_1, s_2, ..., s_i, ..., s_p}. The pairs of consecutive vectors s_k and s_{k+1} are taken as heteroassociative pairs. From this point of view, s_1 is associated with s_2, s_2 is associated with s_3, ..., and s_p is again associated with s_1. The weight matrix is then given as

$$W = \sum_{k=1}^{p} (s_{k+1})(s_k)^T$$

A BAM for temporal patterns can be modified so that both layers X and Y are described by identical weight matrices W. Hence, the recalling is based on

$$x = f(Wy); \quad y = f(Wx)$$

where f(·) is the activation function of the network. A reverse-order recall can be implemented using the transposed weight matrices in both layers X and Y. In the case of the temporal BAM, layers X and Y update nonsimultaneously and in an alternate circular fashion.

The energy function for a temporal BAM can be defined as

$$E_f = -\sum_{k=1}^{p} s_{k+1}^T W s_k$$

The energy function E_f decreases during the temporal sequence retrieval s_1 → s_2 → ... → s_p. The energy is found to increase stepwise at the transition s_p → s_1, and then it continues to decrease in the following cycle of (p - 1) retrievals. The storage capacity of the BAM is estimated using p ≤ min(m, n). Hence the maximum-length sequence is bounded by p < n, where n is the number of components in the input vector and m is the number of components in the output vector.

4.9 Summary

Pattern association is carried out efficiently by associative memory networks. The two main algorithms used for training a pattern association network are the Hebb rule and the outer products rule. The basic architecture, flowchart for the training process and the training algorithm are discussed in detail for the autoassociative net, heteroassociative memory net, BAM, Hopfield net and iterative nets. In all cases a suitable testing algorithm is included. The variations of the BAM, the discrete BAM and the continuous BAM, are discussed in this chapter. The analysis of hamming distance, energy function and storage capacity is done for a few networks such as the BAM, the discrete Hopfield network and the continuous Hopfield network. In the case of iterative autoassociative memory networks, the linear autoassociative memory, the brain-in-the-box model and an autoassociator with a threshold unit are discussed. Temporal associative memory networks are also discussed briefly.

4.10 Solved Problems

1. Train a heteroassociative memory network using the Hebb rule to store the input row vector s = (s1, s2, s3, s4) to the output row vector t = (t1, t2). The vector pairs are given in Table 1.

Table 1
Pair | s1 s2 s3 s4 | t1 t2
1st  |  1  0  1  0 |  1  0
2nd  |  1  0  0  1 |  1  0
3rd  |  1  1  0  0 |  0  1
4th  |  0  0  1  1 |  0  1

Solution: The network for the given problem is shown in Figure 1. The training algorithm based on the Hebb rule is used to determine the weights.

Figure 1 Neural net.
For the 1st input vector:

Step 0: Initialize the weights; the initial weights are taken as zero.
Step 1: For the first pair (1, 0, 1, 0):(1, 0):
Step 2: Set the activations of the input units: x1 = 1, x2 = 0, x3 = 1, x4 = 0.
Step 3: Set the activations of the output units: y1 = 1, y2 = 0.
Step 4: Update the weights using w_ij(new) = w_ij(old) + x_i y_j:

w11(new) = w11(old) + x1 y1 = 0 + 1 × 1 = 1
w21(new) = w21(old) + x2 y1 = 0 + 0 × 1 = 0
w31(new) = w31(old) + x3 y1 = 0 + 1 × 1 = 1
w41(new) = w41(old) + x4 y1 = 0 + 0 × 1 = 0
w12(new) = w12(old) + x1 y2 = 0 + 1 × 0 = 0
w22(new) = w22(old) + x2 y2 = 0 + 0 × 0 = 0
w32(new) = w32(old) + x3 y2 = 0 + 1 × 0 = 0
w42(new) = w42(old) + x4 y2 = 0 + 0 × 0 = 0

For the 2nd input vector:
The input-output vector pair is (1, 0, 0, 1):(1, 0), so x1 = 1, x2 = 0, x3 = 0, x4 = 1, y1 = 1, y2 = 0. The final weights obtained for the first pair are used as the initial weights here:

w11(new) = w11(old) + x1 y1 = 1 + 1 × 1 = 2
w41(new) = w41(old) + x4 y1 = 0 + 1 × 1 = 1

Since x2 = x3 = y2 = 0, the other weights remain the same. The weights after the second input vector is presented are

w11 = 2, w21 = 0, w31 = 1, w41 = 1
w12 = 0, w22 = 0, w32 = 0, w42 = 0

For the 3rd input vector:
The input-output vector pair is (1, 1, 0, 0):(0, 1), so x1 = 1, x2 = 1, x3 = 0, x4 = 0, y1 = 0, y2 = 1. Since y1 = 0, the weights into the y1 unit remain the same. Computing the weights of the y2 unit, we obtain

w12(new) = w12(old) + x1 y2 = 0 + 1 × 1 = 1
w22(new) = w22(old) + x2 y2 = 0 + 1 × 1 = 1
w32(new) = w32(old) + x3 y2 = 0 + 0 × 1 = 0
w42(new) = w42(old) + x4 y2 = 0 + 0 × 1 = 0

The weights after presenting the third input vector are

w11 = 2, w21 = 0, w31 = 1, w41 = 1
w12 = 1, w22 = 1, w32 = 0, w42 = 0

For the 4th input vector:
The input-output vector pair is (0, 0, 1, 1):(0, 1), so x1 = 0, x2 = 0, x3 = 1, x4 = 1, y1 = 0, y2 = 1:

w32(new) = w32(old) + x3 y2 = 0 + 1 × 1 = 1
w42(new) = w42(old) + x4 y2 = 0 + 1 × 1 = 1

Since x1 = x2 = y1 = 0, the other weights remain the same. The final weights after presenting the fourth input vector are

w11 = 2, w21 = 0, w31 = 1, w41 = 1
w12 = 1, w22 = 1, w32 = 1, w42 = 1

Thus, the weight matrix in matrix form is

W = [ w11 w12     [ 2 1
      w21 w22   =   0 1
      w31 w32       1 1
      w41 w42 ]     1 1 ]

2. Train the heteroassociative memory network using the outer products rule to store the input row vectors s = (s1, s2, s3, s4) to the output row vectors t = (t1, t2). Use the vector pairs as given in Table 2.

Table 2
Pair | s1 s2 s3 s4 | t1 t2
1st  |  1  0  1  0 |  1  0
2nd  |  1  0  0  1 |  1  0
3rd  |  1  1  0  0 |  0  1
4th  |  0  0  1  1 |  0  1

Solution: Use the outer products rule to determine the weight matrix:

$$W = \sum_{p=1}^{P} s^T(p)\, t(p)$$

For the 1st pair, s = (1 0 1 0), t = (1 0); for p = 1,

s^T(1) t(1) = [ 1 0
                0 0
                1 0
                0 0 ]

For the 2nd pair, s = (1 0 0 1), t = (1 0); for p = 2,

s^T(2) t(2) = [ 1 0
                0 0
                0 0
                1 0 ]

For the 3rd pair, s = (1 1 0 0), t = (0 1); for p = 3,

s^T(3) t(3) = [ 0 1
                0 1
                0 0
                0 0 ]

For the 4th pair, s = (0 0 1 1), t = (0 1); for p = 4,

s^T(4) t(4) = [ 0 0
                0 0
                0 1
                0 1 ]

The final weight matrix is the summation of all the individual weight matrices obtained for each pair:

W = Σ_{p=1}^{4} s^T(p) t(p) = [ 2 1
                                0 1
                                1 1
                                1 1 ]

which matches the weight matrix obtained with the Hebb rule in Problem 1.
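Both rules above reduce to the same matrix product for these binary pairs, which is easy to confirm numerically; the following NumPy check is illustrative.

```python
import numpy as np

S = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
T = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]])

# sum over pairs of s^T(p) t(p), i.e. the outer products rule
W = S.T @ T
print(W)        # [[2 1] [0 1] [1 1] [1 1]], as computed above
```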

3. Train a heteroassociative memory network to store the input vectors s = (s1, s2, s3, s4) to the output vectors t = (t1, t2). The vector pairs are given in Table 3. Also test the performance of the network using its training input as testing input.

Table 3
Pair | s1 s2 s3 s4 | t1 t2
1st  |  1  0  0  0 |  0  1
2nd  |  1  1  0  0 |  0  1
3rd  |  0  0  0  1 |  1  0
4th  |  0  0  1  1 |  1  0

Solution: The network architecture for the given input-target vector pairs is shown in Figure 2. Training the network means determining the weights of the network. Here the outer products rule is used:

$$W = \sum_{p=1}^{P} s^T(p)\, t(p) = s^T(1) t(1) + s^T(2) t(2) + s^T(3) t(3) + s^T(4) t(4)$$

For p = 1 to 4,

W = [ 0 1     [ 0 1     [ 0 0     [ 0 0       [ 0 2
      0 0   +   0 1   +   0 0   +   0 0    =    0 1
      0 0       0 0       0 0       1 0         1 0
      0 0 ]     0 0 ]     1 0 ]     1 0 ]       2 0 ]

This is the final weight matrix.

Figure 2 Network architecture.

Testing the network

Method I: The testing algorithm for a heteroassociative memory network is used to test the performance of the net. The weights obtained from the training algorithm are the initial weights in the testing algorithm.

Step 0: Initialize the weights:

W = [ 0 2
      0 1
      1 0
      2 0 ]

Step 1: Perform Steps 2-4 for each testing input-output vector.
Step 2: Set the activations, x.
Step 3: Compute the net input, with n = 4 and m = 2; for j = 1 to 2:

$$y_{in,j} = \sum_{i=1}^{n} x_i w_{ij}$$

Step 4: Apply the activation over the net input to calculate the output. The binary activation is used, i.e., f(x) = 1 if x > 0 and f(x) = 0 if x ≤ 0.

For the 1st testing input, set the activations x = [1 0 0 0]:

y_in1 = x1 w11 + x2 w21 + x3 w31 + x4 w41 = 0 + 0 + 0 + 0 = 0
y_in2 = x1 w12 + x2 w22 + x3 w32 + x4 w42 = 1 × 2 + 0 + 0 + 0 = 2

Applying the activations, y1 = f(0) = 0 and y2 = f(2) = 1. The output [0 1] is the correct response for the first input pattern.

For the 2nd testing input, set x = [1 1 0 0]:

y_in1 = 0 + 0 + 0 + 0 = 0
y_in2 = 2 + 1 + 0 + 0 = 3

so y1 = f(0) = 0 and y2 = f(3) = 1. The output [0 1] is the correct response for the second input pattern.

For the 3rd testing input, set x = [0 0 0 1]:

y_in1 = 0 + 0 + 0 + 2 = 2
y_in2 = 0 + 0 + 0 + 0 = 0

so y1 = f(2) = 1 and y2 = f(0) = 0. The output [1 0] is the correct response for the third input pattern.

For the 4th testing input, set x = [0 0 1 1]:

y_in1 = 0 + 0 + 1 + 2 = 3
y_in2 = 0 + 0 + 0 + 0 = 0

so y1 = f(3) = 1 and y2 = f(0) = 0. The output [1 0] is the correct response for the fourth input pattern.

Method II: Since the net input is the dot product of the input row vector with the columns of the weight matrix, matrix multiplication can be used instead.

For the 1st testing input x = [1 0 0 0]: [y_in1 y_in2] = x W = [0 2]; applying the activations gives [0 1], the correct response.

For the 2nd testing input x = [1 1 0 0]: [y_in1 y_in2] = [0 3]; applying the activations gives [0 1], the correct response.

For the 3rd testing input x = [0 0 0 1]: [y_in1 y_in2] = [2 0]; applying the activations gives [1 0], the correct response.

For the 4th testing input x = [0 0 1 1]: [y_in1 y_in2] = [3 0]; applying the activations gives [1 0], the correct response.

Thus, training and testing of the heteroassociative network give the correct recall on all four training patterns.
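Method II is one line of linear algebra per probe; a hedged NumPy rendering of the same test loop:

```python
import numpy as np

W = np.array([[0, 2],
              [0, 1],
              [1, 0],
              [2, 0]])

def f(net):                        # binary step: 1 if net > 0 else 0
    return (net > 0).astype(int)

tests = {(1, 0, 0, 0): (0, 1), (1, 1, 0, 0): (0, 1),
         (0, 0, 0, 1): (1, 0), (0, 0, 1, 1): (1, 0)}
for x, t in tests.items():
    y = f(np.array(x) @ W)
    print(x, '->', tuple(y), 'correct' if tuple(y) == t else 'wrong')
```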

4. For Problem 3, test the heteroassociative network with a similar test vector and an unsimilar test vector.

Solution: The heteroassociative network has to be tested with a similar and an unsimilar test vector.

With a similar test vector: From Problem 3, the second input vector is x = [1 1 0 0] with target y = [0 1]. To test the network with a similar vector, a change is made in one component of this input vector:

x = [0 1 0 0]

The net input is calculated for the similar vector:

[y_in1 y_in2] = [0 1 0 0] W = [0 + 0 + 0 + 0   0 + 1 + 0 + 0] = [0 1]

Applying the activations over the net input gives [y1 y2] = [0 1]. The response is the same as the target; hence the vector similar to the input vector is recognized by the network.

With an unsimilar input vector: Again take the second input vector x = [1 1 0 0] with target y = [0 1], and make changes in two components of the input vector:

x = [0 1 1 0]

The net input is calculated for the unsimilar vector:

[y_in1 y_in2] = [0 1 1 0] W = [0 + 0 + 1 + 0   0 + 1 + 0 + 0] = [1 1]

Applying the activations gives [y1 y2] = [1 1]. The correct response is not obtained when a vector unsimilar to the input vectors is presented to the network.

5. Train a heteroassociative network to store the input vectors s = (s1 s2 s3 s4) to the output vectors t = (t1 t2). The training input-target output vector pairs are in binary form. Obtain the weight matrix in bipolar form. The binary vector pairs are as given in Table 4.

Table 4
Pair | s1 s2 s3 s4 | t1 t2
1st  |  1  0  0  0 |  0  1
2nd  |  1  1  0  0 |  0  1
3rd  |  0  0  0  1 |  1  0
4th  |  0  0  1  1 |  1  0

Solution: In this case, the hybrid representation of the network is adopted to find the weight matrix in bipolar form. The weight matrix can be formed using

$$w_{ij} = \sum_{p=1}^{P} [2 s_i(p) - 1][2 t_j(p) - 1]$$

Computing each weight:

w11 = (2×1-1)(2×0-1) + (2×1-1)(2×0-1) + (2×0-1)(2×1-1) + (2×0-1)(2×1-1) = -1 - 1 - 1 - 1 = -4
w12 = (2×1-1)(2×1-1) + (2×1-1)(2×1-1) + (2×0-1)(2×0-1) + (2×0-1)(2×0-1) = 1 + 1 + 1 + 1 = 4
w21 = (-1)(-1) + (1)(-1) + (-1)(1) + (-1)(1) = 1 - 1 - 1 - 1 = -2
w22 = (-1)(1) + (1)(1) + (-1)(-1) + (-1)(-1) = -1 + 1 + 1 + 1 = 2
w31 = (-1)(-1) + (-1)(-1) + (-1)(1) + (1)(1) = 1 + 1 - 1 + 1 = 2
w32 = (-1)(1) + (-1)(1) + (-1)(-1) + (1)(-1) = -1 - 1 + 1 - 1 = -2
w41 = (-1)(-1) + (-1)(-1) + (1)(1) + (1)(1) = 1 + 1 + 1 + 1 = 4
w42 = (-1)(1) + (-1)(1) + (1)(-1) + (1)(-1) = -1 - 1 - 1 - 1 = -4

The weight matrix W is given by

W = [ w11 w12     [ -4  4
      w21 w22   =   -2  2
      w31 w32        2 -2
      w41 w42 ]      4 -4 ]

6. Train a heteroassociative network to store the given bipolar input vectors s = (s1 s2 s3 s4) to the output vectors t = (t1 t2). The bipolar vector pairs are as given in Table 5.

Table 5
Pair | s1 s2 s3 s4 | t1 t2
1st  |  1 -1 -1 -1 | -1  1
2nd  |  1  1 -1 -1 | -1  1
3rd  | -1 -1 -1  1 |  1 -1
4th  | -1 -1  1  1 |  1 -1

Solution: To store a bipolar vector pair, the weight matrix is

$$w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p)$$

If the outer products rule is used, then W = Σ_p s^T(p) t(p).

For the 1st pair, s = [1 -1 -1 -1], t = [-1 1]:

s^T(1) t(1) = [ -1  1
                 1 -1
                 1 -1
                 1 -1 ]

For the 2nd pair, s = [1 1 -1 -1], t = [-1 1]:

s^T(2) t(2) = [ -1  1
                -1  1
                 1 -1
                 1 -1 ]

For the 3rd pair, s = [-1 -1 -1 1], t = [1 -1]:

s^T(3) t(3) = [ -1  1
                -1  1
                -1  1
                 1 -1 ]

For the 4th pair, s = [-1 -1 1 1], t = [1 -1]:

s^T(4) t(4) = [ -1  1
                -1  1
                 1 -1
                 1 -1 ]

The final weight matrix is

W = Σ_{p=1}^{4} s^T(p) t(p) = [ -4  4
                                -2  2
                                 2 -2
                                 4 -4 ]

These are the bipolar pairs corresponding to the binary pairs of Problem 5, and the same weight matrix results.

7. For Problem 6, test the performance of the network with missing and mistaken data in the test vector.

Solution:

With missing data: Let the test vector be x = [0 1 0 -1], with missing entries (set to 0) in two components of the second input vector [1 1 -1 -1]. Computing the net input, we get

[y_in1 y_in2] = [0 1 0 -1] W = [0 - 2 + 0 - 4   0 + 2 + 0 + 4] = [-6 6]

Applying the activations over the net input, we get [y1 y2] = [-1 1]. The response obtained is the same as the target; thus the net has recognized the vector with missing data.

With mistaken data: Let the test vector be x = [-1 1 1 -1], with mistakes made in two components of the second input vector [1 1 -1 -1]. Computing the net input for this test vector, using the final weights obtained in Problem 6, we get

[y_in1 y_in2] = [-1 1 1 -1] W = [4 - 2 + 2 - 4   -4 + 2 - 2 + 4] = [0 0]

Applying the activations over the net input, we obtain [y1 y2] = [0 0]. Thus the net does not recognize the mistaken data, because the output obtained, [0 0], has a mismatch with the target vector [-1 1].

8. Train the autoassociative network for the input vector [-1 1 1 1] and also test the network for the same input vector. Test the autoassociative network with one missing, one mistaken, two missing and two mistaken entries in the test vector.

Solution: The input vector is x = [-1 1 1 1]. The weight matrix is obtained as

W = s^T s = [  1 -1 -1 -1
              -1  1  1  1
              -1  1  1  1
              -1  1  1  1 ]

Testing the network with the same input vector: The test input is [-1 1 1 1]. The weight obtained above is used as the initial weight. Computing the net input, we get

y_in = x · W = [-4 4 4 4]

The activation used is y_j = 1 if y_in,j > 0 and y_j = -1 if y_in,j < 0. Applying the activations, we get y = [-1 1 1 1], which is the correct response.

Testing the network with one missing entry:

Test input x = [0 1 1 1]: the net input is y_in = x · W = [-3 3 3 3]; applying the activations gives y = [-1 1 1 1], the correct response.

Test input x = [-1 1 0 1]: the net input is [-3 3 3 3]; applying the activations gives y = [-1 1 1 1], the correct response.

Testing the network with one mistaken entry:

Test input x = [-1 -1 1 1]: the net input is [-2 2 2 2]; applying the activations gives y = [-1 1 1 1], the correct response.

Test input x = [1 1 1 1]: the net input is [-2 2 2 2]; applying the activations gives y = [-1 1 1 1], the correct response.

Testing the network with two missing entries:

Test input x = [0 0 1 1]: the net input is [-2 2 2 2]; applying the activations gives y = [-1 1 1 1], the correct response.

Test input x = [-1 0 0 1]: the net input is [-2 2 2 2]; applying the activations gives y = [-1 1 1 1], the correct response.

Testing the network with two mistaken entries:

Test input x = [-1 -1 -1 1]: the net input is

y_in = x · W = [0 0 0 0]

Applying the activations over the net input, we get y = [0 0 0 0], which is an incorrect response. Thus, the vector with two mistakes is not recognized by the network.

9. Check the autoassociative network for the input vector [1 1 -1]. Form the weight matrix with no self-connection. Test whether the net is able to recognize the vector with one missing entry.

Solution: The input vector is x = [1 1 -1]. The weight matrix is

W = s^T s = [  1  1 -1
               1  1 -1
              -1 -1  1 ]

The weight matrix with no self-connection (the diagonal elements set to zero) is

W0 = [  0  1 -1
        1  0 -1
       -1 -1  0 ]

Testing the network with one missing entry:

Test input x = [1 0 -1]:

y_in = x · W0 = [0 + 0 + 1   1 + 0 + 1   -1 + 0 + 0] = [1 2 -1]

Applying the activations, we get y = [1 1 -1]; hence the correct response is obtained.

Test input x = [1 1 0]:

y_in = x · W0 = [0 + 1 + 0   1 + 0 + 0   -1 - 1 + 0] = [1 1 -2]

Applying the activations, we get y = [1 1 -1]; hence the correct response is obtained.

10. Use the outer products rule to store the vectors [1 1 1 1] and [-1 1 1 -1] in an autoassociative network. (a) Find the weight matrix (do not set the diagonal terms to zero). (b) Test the vector using [1 1 1 1] as input. (c) Test the vector [-1 1 1 -1] as input. (d) Test the net using [1 1 1 0] as input. (e) Repeat (a)-(d) with the diagonal terms in the weight matrix set to zero.

Solution:

(a) The weight matrix for [1 1 1 1] is

W1 = [ 1 1 1 1
       1 1 1 1
       1 1 1 1
       1 1 1 1 ]

The weight matrix for [-1 1 1 -1] is

W2 = [  1 -1 -1  1
       -1  1  1 -1
       -1  1  1 -1
        1 -1 -1  1 ]

The weight matrix to store both vectors is

W = W1 + W2 = [ 2 0 0 2
                0 2 2 0
                0 2 2 0
                2 0 0 2 ]

(b) Test the net using x = [1 1 1 1] as input. Computing the net input, we obtain

y_in = x · W = [4 4 4 4]

Applying the activations, we get y = [1 1 1 1]; hence the correct response is obtained.

(c) Test the net using x = [-1 1 1 -1] as input:

y_in = x · W = [-4 4 4 -4]

Applying the activations, we get y = [-1 1 1 -1]; hence the correct response is obtained.

(d) Test the net using x = [1 1 1 0] as input:

y_in = x · W = [2 4 4 2]

Applying the activations, we get y = [1 1 1 1]; hence the known response is recalled from the incomplete input.

(e) Repeat parts (a)-(d) with the diagonal elements in the weight matrix set to zero.

(i) The weight matrix is

W0 = [ 0 0 0 2
       0 0 2 0
       0 2 0 0
       2 0 0 0 ]

(ii) Test the vector using x = [1 1 1 1] as input:

y_in = x · W0 = [2 2 2 2]

Applying the activations, we get y = [1 1 1 1]; hence the correct response is obtained.

(iii) Test the vector using x = [-1 1 1 -1] as input:

y_in = x · W0 = [-2 2 2 -2]

Applying the activations, we get y = [-1 1 1 -1]; hence the correct response is obtained.

(iv) Test the vector using x = [1 1 1 0] as input:

y_in = x · W0 = [0 2 2 2]

Applying the activations (taking f(0) = -1), we get y = [-1 1 1 1], which matches neither stored vector; hence an unknown response is obtained.
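Problems 8-10 all probe the same behaviour, an autocorrelation weight matrix pulling noisy probes back to a stored vector, which the following NumPy sketch reproduces (the tie rule at zero net input and the probe set are illustrative assumptions).

```python
import numpy as np

s = np.array([-1, 1, 1, 1])
W = np.outer(s, s)                      # diagonal kept, as in Problem 8

def recall(x):
    net = x @ W
    return np.where(net > 0, 1, np.where(net < 0, -1, 0))

for probe in [[-1, 1, 1, 1],            # stored vector
              [0, 1, 1, 1],             # one missing entry
              [-1, -1, 1, 1],           # one mistaken entry
              [0, 0, 1, 1],             # two missing entries
              [-1, -1, -1, 1]]:         # two mistakes -> net input all zero
    print(probe, '->', recall(np.array(probe)))
```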

11. Find the weight matrices required to store the vectors [1 1 -1 -1], [-1 1 1 -1] and [-1 1 -1 1] into W1, W2 and W3, respectively. Calculate the total weight matrix to store all the vectors, and check whether it is capable of recognizing the same vectors when presented. Let the weight matrices have no self-connection.

Solution: For the first vector [1 1 -1 -1],

W1 = s^T s = [  1  1 -1 -1
                1  1 -1 -1
               -1 -1  1  1
               -1 -1  1  1 ]

With no self-connection,

W10 = [  0  1 -1 -1
         1  0 -1 -1
        -1 -1  0  1
        -1 -1  1  0 ]

For the second vector [-1 1 1 -1],

W20 = [  0 -1 -1  1
        -1  0  1 -1
        -1  1  0 -1
         1 -1 -1  0 ]

For the third vector [-1 1 -1 1],

W30 = [  0 -1  1 -1
        -1  0 -1  1
         1 -1  0 -1
        -1  1 -1  0 ]

The total weight matrix required to store all three vectors is

W = W10 + W20 + W30 = [  0 -1 -1 -1
                        -1  0 -1 -1
                        -1 -1  0 -1
                        -1 -1 -1  0 ]

Testing the network:

With the first vector x = [1 1 -1 -1], the net input is

y_in = x · W = [1 1 -1 -1]

Applying the activations, we get y = [1 1 -1 -1], which is the correct response.

With the second vector x = [-1 1 1 -1]:

y_in = x · W = [-1 1 1 -1]

Applying the activations, we get y = [-1 1 1 -1], which is the correct response.

With the third vector x = [-1 1 -1 1]:

y_in = x · W = [-1 1 -1 1]

Applying the activations, we get y = [-1 1 -1 1], which is the correct response. Thus, the network is capable of recognizing all three vectors.

12. Construct an autoassociative network to store the vector [-1 1 1 1]. Use the iterative autoassociative network to test the vector with three missing elements.

Solution: The input vector is x = [-1 1 1 1]. The weight matrix is obtained as

W = s^T s = [  1 -1 -1 -1
              -1  1  1  1
              -1  1  1  1
              -1  1  1  1 ]

The weight matrix with no self-connection is

W0 = [  0 -1 -1 -1
       -1  0  1  1
       -1  1  0  1
       -1  1  1  0 ]

Test vector with three missing elements: For the test input vector x = [-1 0 0 0], the net input is

y_in = x · W0 = [0 1 1 1]

Applying the activations (retaining the previous activation -1 for the unit whose net input is zero), we get y = [-1 1 1 1], i.e., the known response is obtained.

For the test input vector x = [0 0 0 1], the net input is calculated as

y_in = x · W0 = [-1 1 1 0]

Applying the activations, we get y = [-1 1 1 0], i.e., an unknown response is obtained. Iterate the network again using the net input calculated above as the input vector:

y_in = [-1 1 1 0] W0 = [-2 2 2 3]

Applying the activations, we get y = [-1 1 1 1], i.e., the known response is obtained after iteration. Thus, the iterative autoassociative network recognizes the test pattern. Similarly, the network can be tested for the test input vectors [0 1 0 0] and [0 0 1 0].
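A compact sketch of this iterate-until-stable recall; the tie rule that keeps a unit's previous value is an assumption consistent with the hand calculation above.

```python
import numpy as np

s = np.array([-1, 1, 1, 1])
W0 = np.outer(s, s)
np.fill_diagonal(W0, 0)                  # no self-connection

x = np.array([0, 0, 0, 1])               # three missing elements
for step in range(5):
    net = x @ W0
    y = np.where(net > 0, 1, np.where(net < 0, -1, x))
    print(step, net, y)
    if np.array_equal(y, x):             # stable: recall finished
        break
    x = y                                # feed the output back in
# converges to the stored vector [-1  1  1  1]
```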
13. Construct an autoassociative discrete Hopfield network with the input vector [1 1 1 -1]. Test the discrete Hopfield network with missing entries in the first and second components of the stored vector.

Solution: The input vector is x = [1 1 1 -1]. The weight matrix is given by

W = s^T s = [  1  1  1 -1
               1  1  1 -1
               1  1  1 -1
              -1 -1 -1  1 ]

The weight matrix with no self-connection is

W = [  0  1  1 -1
       1  0  1 -1
       1  1  0 -1
      -1 -1 -1  0 ]

The binary representation of the given input vector is [1 1 1 0]. With missing entries in the first and second components, the binary test vector is x = [0 0 1 0]. We carry out asynchronous updation of the units; let the update order be Y1, Y4, Y3, Y2.

Iteration 1:

Step 0: Weights are initialized to store the pattern (the matrix above).
Step 1: The input vector is x = [0 0 1 0].
Step 2: For this vector, y = [0 0 1 0].
Step 3: Choosing unit Y1 for updating its activation:

y_in1 = x1 + Σ_j y_j w_j1 = 0 + 1 = 1

Applying the activations, y_in1 > 0 ⟹ y1 = 1. Broadcasting y1 to all other units gives y = [1 0 1 0]; no convergence yet.

Step 4: Choosing unit Y4 for updating its activation:

y_in4 = x4 + Σ_j y_j w_j4 = 0 + (-1 - 1) = -2

Applying the activations, y_in4 < 0 ⟹ y4 = 0. Therefore y = [1 0 1 0]; no convergence yet.

Step 5: Choosing unit Y3 for updating its activation:

y_in3 = x3 + Σ_j y_j w_j3 = 1 + 1 = 2

Applying the activations, y_in3 > 0 ⟹ y3 = 1. Therefore y = [1 0 1 0].

Step 6: Choosing unit Y2 for updating its activation:

y_in2 = x2 + Σ_j y_j w_j2 = 0 + 2 = 2

Applying the activations, y_in2 > 0 ⟹ y2 = 1. Therefore y = [1 1 1 0], which converges with the (binary) stored vector. The output y has converged in this iteration itself, but one more iteration can be done to check whether the activations change further.

Iteration 2:

Step 1: The input vector is x = [1 1 1 0].
Step 2: For this vector, y = [1 1 1 0].
Step 3: Unit Y1: y_in1 = x1 + Σ_j y_j w_j1 = 1 + 2 = 3 > 0 ⟹ y1 = 1, so y = [1 1 1 0].
Step 4: Unit Y4: y_in4 = x4 + Σ_j y_j w_j4 = 0 - 3 = -3 < 0 ⟹ y4 = 0, so y = [1 1 1 0].
Step 5: Unit Y3: y_in3 = x3 + Σ_j y_j w_j3 = 1 + 2 = 3 > 0 ⟹ y3 = 1, so y = [1 1 1 0].
Step 6: Unit Y2: y_in2 = x2 + Σ_j y_j w_j2 = 0 + 3 = 3 > 0 ⟹ y2 = 1, so y = [1 1 1 0].

Thus, further iterations do not change the activation of any unit, and the net has converged to the stored pattern.

14. Construct an autoassociative network to store the vectors x1 = [1 1 1 1 1], x2 = [1 -1 -1 1 -1] and x3 = [-1 1 -1 -1 -1]. Find the weight matrix with no self-connection. Calculate the energy of the stored patterns. Using the discrete Hopfield network, test the patterns if the test patterns are given as x1' = [1 1 1 -1 1], x2' = [1 -1 -1 -1 -1] and x3' = [1 1 -1 -1 -1]. Compare the test patterns' energy with the stored patterns' energy.

Solution: The weight matrix for the three given vectors is

W = Σ_{p=1}^{3} x_p^T x_p = [ 3 -1  1  3  1
                              -1  3  1 -1  1
                               1  1  3  1  3
                               3 -1  1  3  1
                               1  1  3  1  3 ]

The weight matrix with no self-connection is

W0 = [ 0 -1  1  3  1
      -1  0  1 -1  1
       1  1  0  1  3
       3 -1  1  0  1
       1  1  3  1  0 ]

The energy function is defined as

$$E = -0.5\, x\, W_0\, x^T$$

Energy of the first pattern:

E1 = -0.5 [x1 W0 x1^T] = -0.5 [4 + 0 + 6 + 4 + 6] = -0.5 × 20 = -10

Energy of the second pattern:

E2 = -0.5 [x2 W0 x2^T] = -0.5 [2 + 4 + 2 + 2 + 2] = -0.5 × 12 = -6

Energy of the third pattern:

E3 = -0.5 [x3 W0 x3^T] = -0.5 [6 + 0 + 4 + 6 + 4] = -0.5 × 20 = -10

Applying the test patterns:

For the first test pattern x1' = [1 1 1 -1 1], take y = [1 1 1 -1 1] and choose unit 4 for updation:

y_in4 = x4 + Σ_j y_j w_j4 = -1 + 3 - 1 + 1 + 0 + 1 = 3 > 0

Applying the activations, we get y4 = 1. Therefore y = [1 1 1 1 1] = x1, i.e., the net converges to the first stored pattern. Its energy is

E1' = -0.5 [x1 W0 x1^T] = -10

For the second test pattern x2' = [1 -1 -1 -1 -1], take y = [1 -1 -1 -1 -1] and choose unit 4 for updation:

y_in4 = x4 + Σ_j y_j w_j4 = -1 + 3 + 1 - 1 + 0 - 1 = 1 > 0

Applying the activations, we get y4 = 1. Therefore y = [1 -1 -1 1 -1] = x2, i.e., the net converges to the second stored pattern, with energy

E2' = -0.5 [x2 W0 x2^T] = -6

For the third test pattern x3' = [1 1 -1 -1 -1], take y = [1 1 -1 -1 -1] and choose unit 1 for updation:

y_in1 = x1 + Σ_j y_j w_j1 = 1 + 0 - 1 - 1 - 3 - 1 = -5 < 0

Applying the activations, we get y1 = -1. Therefore the modified vector is y = [-1 1 -1 -1 -1] = x3, i.e., the net converges to the third stored pattern, with energy

E3' = -0.5 [x3 W0 x3^T] = -10

Thus, in each case the energy of the converged test pattern is the same as that of the corresponding stored pattern.
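Problem 14's energy bookkeeping is easy to mechanize; the sketch below computes E = -1/2 x W0 x^T for the stored vectors and the test probes (the check itself mirrors the hand calculation above).

```python
import numpy as np

X = np.array([[ 1,  1,  1,  1,  1],
              [ 1, -1, -1,  1, -1],
              [-1,  1, -1, -1, -1]])
W0 = X.T @ X
np.fill_diagonal(W0, 0)                       # no self-connection

E = lambda v: -0.5 * v @ W0 @ v
for v in X:
    print(E(v))                               # -10, -6, -10 as in the solution
for t in [[1, 1, 1, -1, 1], [1, -1, -1, -1, -1], [1, 1, -1, -1, -1]]:
    print(E(np.array(t)))                     # energies of the test probes
```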
15. Construct and test a BAM network to associate the letters E and F with simple bipolar input-output vectors. The target output for E is (-1, 1) and for F is (1, 1). The display matrix size is 5 × 3. The input patterns are

* * *    * * *
*        *
* * *    * * *
*        *
* * *    *

 "E"      "F"

with target outputs (-1, 1) and (1, 1), respectively.

Solution: Reading each 5 × 3 display row-wise, with * = 1 and blank = -1, the training pairs are

Pattern | Input vector                              | Target
E       | [1 1 1  1 -1 -1  1 1 1  1 -1 -1  1 1 1]   | [-1 1]
F       | [1 1 1  1 -1 -1  1 1 1  1 -1 -1  1 -1 -1] | [1 1]

(i) X vectors as input: The weight matrix is obtained by

$$W = \sum_{p} s^T(p)\, t(p)$$

The matrix W1 = E^T [-1 1] has the row [-1 1] wherever the component of E is +1 and the row [1 -1] wherever it is -1; similarly, W2 = F^T [1 1] has rows [1 1] and [-1 -1]. The total weight matrix W = W1 + W2 is a 15 × 2 matrix whose rows are

[0 2], [0 2], [0 2], [0 2], [0 -2], [0 -2], [0 2], [0 2], [0 2], [0 2], [0 -2], [0 -2], [0 2], [-2 0], [-2 0]

Testing the network with the test vectors "E" and "F":

For test pattern E, computing the net input we get

y_in = E · W = [-4 26]

Applying the activations, we get y = [-1 1]; hence the correct response is obtained.

For test pattern F, computing the net input we get

y_in = F · W = [4 26]

Applying the activations over the net input, we get y = [1 1]; hence the correct response is obtained.

(ii) Y vectors as input: The weight matrix when the Y vectors are used as input is the transpose of the weight matrix used when the X vectors were presented as input, i.e., W^T.

(a) For test pattern E, the input is now [-1 1]. Computing the net input, we have

x_in = [-1 1] W^T = [2 2 2 2 -2 -2 2 2 2 2 -2 -2 2 2 2]

Applying the activation function, we get

x = [1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 1 1]

which is the pattern E, the correct response.

(b) For test pattern F, the input is now [1 1]. Computing the net input, we have

x_in = [1 1] W^T = [2 2 2 2 -2 -2 2 2 2 2 -2 -2 2 -2 -2]

Applying the activation function, we get

x = [1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 -1 -1]

which is the pattern F, the correct response. Thus, a BAM network has been constructed and tested in both directions, from X to Y and from Y to X.

16. (a) Find the weight matrix in bipolar form for the bidirectional associative memory using the outer products rule for the following binary input-output vector pairs:

s(1) = (1 0 0 0), t(1) = (1 0)
s(2) = (1 0 0 1), t(2) = (1 0)
s(3) = (0 1 0 0), t(3) = (0 1)
s(4) = (0 1 1 0), t(4) = (0 1)

(b) Using the unit step function (with threshold 0) as the output units' activation function, test the response of the network on each of the input patterns. (c) Test the response of the network on various combinations of input patterns with "mistakes" or "missing" data:

(i) [1 0 -1 -1]; (ii) [-1 0 0 -1]; (iii) [-1 1 0 -1]; (iv) [1 1 -1 -1]; (v) [1 1].

Solution:

(a) Converting the pairs to bipolar form and applying the outer products rule, the weight matrix for storing the four input vectors is

W = Σ_{p=1}^{4} s^T(p) t(p)

  = [  1             [  1             [ -1            [ -1
      -1   [1 -1] +    -1   [1 -1] +     1   [-1 1] +    1   [-1 1]
      -1               -1               -1               1
      -1 ]              1 ]             -1 ]            -1 ]

  = [  4 -4
      -4  4
      -2  2
       2 -2 ]

(b) The unit step function with threshold 0 is used; a unit's activation is left unchanged when its net input equals the threshold:

For the Y layer:  y_j = 1 if y_in,j > 0;  y_j if y_in,j = 0;  0 if y_in,j < 0
For the X layer:  x_i = 1 if x_in,i > 0;  x_i if x_in,i = 0;  0 if x_in,i < 0

Presenting the s-input patterns:

s(1) = [1 0 0 0]: y_in = s(1) W = [4 -4]. Applying the activations, we get t = [1 0], the correct response.

s(2) = [1 0 0 1]: y_in = [6 -6], giving t = [1 0], the correct response.

s(3) = [0 1 0 0]: y_in = [-4 4], giving t = [0 1], the correct response.

s(4) = [0 1 1 0]: y_in = [-6 6], giving t = [0 1], the correct response.

Presenting the t-input patterns (recall in the reverse direction, using W^T):

t(1) = [1 0]: x_in = [1 0] W^T = [4 -4 -2 2]. Applying the activations, we get x = [1 0 0 1].

t(3) = [0 1]: x_in = [-4 4 2 -2], giving x = [0 1 1 0].

On presenting the pattern [1 0] we obtain only [1 0 0 1] and not [1 0 0 0]; similarly, on presenting the pattern [0 1] we obtain only [0 1 1 0] and not [0 1 0 0]. This depends upon the missing data entries.

(c) Testing the response on inputs with mistakes or missing data:

(i) x = [1 0 -1 -1]: y_in = x W = [4 + 0 + 2 - 2   -4 + 0 - 2 + 2] = [4 -4]. Applying the activations, y = [1 0], the correct response.

(ii) x = [-1 0 0 -1]: y_in = [-4 - 2   4 + 2] = [-6 6]. Applying the activations, y = [0 1], the correct response.

(iii) x = [-1 1 0 -1]: y_in = [-4 - 4 - 2   4 + 4 + 2] = [-10 10]. Applying the activations, y = [0 1], the correct response.

(iv) x = [1 1 -1 -1]: y_in = [4 - 4 + 2 - 2   -4 + 4 - 2 + 2] = [0 0]. In this case, since all the y_in values are zero, the activation function takes the previous activations for y_in = 0; hence the closely related pattern can be taken to obtain the correct response.

(v) y = [1 1] presented at the Y layer: x_in = [1 1] W^T = [0 0 0 0]. Here also all the net inputs are zero, so the previous x_i values are retained, and the closely related pattern can be taken to obtain the correct response.
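A sketch of this bidirectional recall in NumPy: forward recall through W, backward recall through W^T, with ties at zero net input keeping the previous activation (the probe bookkeeping is illustrative).

```python
import numpy as np

W = np.array([[ 4, -4],
              [-4,  4],
              [-2,  2],
              [ 2, -2]])

def step(net, prev):
    """Unit step with threshold 0; a tie keeps the previous activation."""
    return np.where(net > 0, 1, np.where(net < 0, 0, prev))

pairs = [([1, 0, 0, 0], [1, 0]), ([1, 0, 0, 1], [1, 0]),
         ([0, 1, 0, 0], [0, 1]), ([0, 1, 1, 0], [0, 1])]
for s, t in pairs:
    s, t = np.array(s), np.array(t)
    y = step(s @ W, t)            # forward recall on the Y layer
    x = step(y @ W.T, s)          # backward recall on the X layer
    print(s, '->', y, '->', x)
```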
17. Find the hamming distance and the average hamming distance for the two given input vectors below.

X1 = [ 1  1 -1 -1 -1  1 -1 -1 -1  1 -1 -1]
X2 = [-1  1  1 -1  1 -1  1 -1  1 -1 -1  1]

Solution: The hamming distance is the number of different bits in two binary or bipolar vectors. The two vectors differ in components 1, 3, 5, 6, 7, 9, 10 and 12, so here

H[X1, X2] = 8

The average hamming distance is the hamming distance divided by the number of components, i.e., 8/12 = 2/3 ≈ 0.67.

4.11 Review Questions

1. What is content addressable memory?
2. Specify the functional difference between a RAM and a CAM.
3. Indicate the two main types of associative memory.
4. State the advantages of associative memory.
5. Discuss the limitations of associative memory networks.
6. Explain the Hebb rule training algorithm used in pattern association.
7. State the outer products rule used for training pattern association networks.
8. Draw the architecture of an autoassociative network.
9. Explain the testing algorithm adopted to test an autoassociative network.
10. What is a heteroassociative memory network?
11. With a neat architecture, explain the training algorithm of a heteroassociative network.
12. What is a bidirectional associative memory network?
13. Is it true that input patterns may be applied at the outputs of a BAM?
14. List the activation functions used in a BAM net.
15. What are the two types of BAM?
16. How are the weights determined in a discrete BAM?
17. State the testing algorithm of a discrete BAM.
18. What is the activation function used in a continuous BAM?
19. Define hamming distance and storage capacity.
20. What is the energy function of a discrete BAM?
21. What is a Hopfield net?
22. Compare and contrast BAM and Hopfield networks.
23. Mention the applications of the Hopfield network.
24. What is the necessity of weights with no self-connection?
25. Why are symmetrical weights and weights with no self-connection important in the discrete Hopfield net?
