NEURAL NETWORKS - Unit VIII
CHAPTER 6: Competitive Learning Neural Networks
Introduction
A competitive learning neural network consists of an input layer of linear units. The output of each of these units is given to all the units in the second (output) layer through adaptive (adjustable) feedforward weights. The output functions of the units in the second layer are either linear or nonlinear, depending on the task for which the network is designed. The output of each unit in the second layer is fed back to itself in a self-excitatory manner, and to the other units in the layer in an excitatory or inhibitory manner, depending on the task. The weights on the connections in the feedback layer are non-adaptive (fixed). This combination of feedforward and feedback connection layers results in competition among the activations of the units in the output layer, and hence such networks are called competitive learning neural networks.
Different choices of the output functions and interconnections in the feedback layer of the network can be used to perform different pattern recognition tasks. If the output functions are linear, the network performs the task of storing an input pattern temporarily. If the output functions of the units in the feedback layer are made nonlinear, the network can be used for pattern clustering. The objective in pattern clustering is to group the given input patterns in an unsupervised manner; the group for a pattern is indicated by the output unit that has a nonzero output at equilibrium. Such a network is called a pattern clustering network, and the feedback layer is called the competitive layer. The unit that gives the nonzero output at equilibrium is called the winner.
If the output functions of the units in the feedback layer are nonlinear, and the units are connected in such a way that connections to the neighboring units are all excitatory and those to the farther units inhibitory, the network can perform the task of feature mapping. The resulting network is called a self-organization network. A self-organization network can be used to obtain a mapping of features in the input patterns onto a one-dimensional or a two-dimensional feature space.
c) Basic competitive learning: The steady activation value with an external input depends on the angle between the input and weight vectors.
d) Feedback layer: In an arrangement of a group of instars, the category of an input vector can be identified by observing the instar processing unit with the maximum response. The maximum-response unit can be determined by the system itself if the outputs of the instar processing units are fed back to each other. If the processing units of a group of instars are connected as an on-centre off-surround feedback network, then the feedback layer is called a competitive layer.
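The winner-take-all effect of such a feedback layer can be sketched in code. In the following Python fragment, the instar weights, the input, and the excitation/inhibition constants are all illustrative assumptions; the on-centre off-surround iteration drives every output except that of the maximum-response instar to zero:

```python
import numpy as np

def competitive_layer(responses, self_excite=1.2, inhibit=0.2, steps=30):
    """Iterate an on-centre off-surround feedback layer (a sketch).

    Each unit excites itself (on-centre) and inhibits every other unit
    (off-surround); negative activations are clipped to zero, and the
    total activation is renormalized so the iteration stays bounded.
    """
    y = responses.astype(float).copy()
    for _ in range(steps):
        total = y.sum()
        y = np.maximum(self_excite * y - inhibit * (total - y), 0.0)
        if y.sum() > 0:
            y /= y.sum()            # keep total activation bounded
    return y

# Instar responses to an input x: one dot product per instar unit
W = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])          # rows: weight vectors of three instars
x = np.array([1.0, 0.2])
print(competitive_layer(W @ x))     # only the max-response unit stays nonzero
```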
If unit $k$ is the winner of the competition for an input vector $x$, then

$$w_k^T x \ge w_i^T x \quad \text{for all } i$$

If the weight vectors to all the units are normalized, that is, $\|w_i\| = 1$ for all $i$, the above result means that the input vector $x$ is closest to the weight vector $w_k$ among all $w_i$:

$$\|x - w_k\| \le \|x - w_i\| \quad \text{for all } i$$

If the input vectors are not normalized, then they are normalized within the weight adjustment formula:

$$\Delta w_{kj} = \eta \left( \frac{x_j}{\sum_i x_i} - w_{kj} \right) \quad \text{only for those } j \text{ for which } x_j = 1$$

This can be called minimal learning. In the case of binary input vectors, for the winner unit, only the weights which have a nonzero input are adjusted, that is,

$$\Delta w_{kj} = \begin{cases} 0, & \text{for } x_j = 0 \\ \eta \left( \dfrac{x_j}{\sum_i x_i} - w_{kj} \right), & \text{for } x_j = 1 \end{cases}$$

In minimal learning there is no automatic normalization of the weights after each adjustment, that is, $\sum_{j=1}^{M} w_{kj} \ne 1$. A unit $i$ whose initial weight vector $w_i$ is far from every input vector may never win the competition, and since a unit never learns unless it wins, another method called the leaky learning law is proposed. In this case, the weights leading to the units which do not win are also adjusted at each update:

$$\Delta w_{ij} = \begin{cases} \eta_l \left( \dfrac{x_j}{\sum_m x_m} - w_{ij} \right) & \text{for all } j, \text{ if unit } i \text{ loses the competition } (i \ne k) \\ \eta_w \left( \dfrac{x_j}{\sum_m x_m} - w_{ij} \right) & \text{for all } j, \text{ if unit } i \text{ wins the competition } (i = k) \end{cases}$$

where $\eta_w$ and $\eta_l$ are the learning rate parameters for the winning and losing units respectively ($\eta_w \gg \eta_l$). In this case, the weights of the losing units are also slightly moved for each presentation of an input.
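These update rules are compact to implement. The sketch below (unit counts, learning rates and inputs are illustrative assumptions) applies the leaky learning law; setting $\eta_l = 0$ and masking out the $x_j = 0$ components would recover minimal learning for the winner alone:

```python
import numpy as np

def leaky_update(W, x, eta_w=0.5, eta_l=0.01):
    """One presentation of a binary input vector x (leaky learning law).

    The winner k (largest w_k^T x) moves strongly toward the normalized
    input x / sum(x); every losing unit moves the same way but with a
    much smaller rate, so no unit is starved of learning.
    """
    k = int(np.argmax(W @ x))            # winner: largest net input w_i^T x
    target = x / x.sum()                 # x_j / sum_m x_m (input normalization)
    for i in range(W.shape[0]):
        eta = eta_w if i == k else eta_l # eta_w >> eta_l
        W[i] += eta * (target - W[i])
    return k

# Two binary input patterns presented repeatedly to a 3-unit layer
rng = np.random.default_rng(0)
W = rng.random((3, 4)) * 0.1
for x in [np.array([1.0, 1, 0, 0]), np.array([0.0, 0, 1, 1])] * 25:
    leaky_update(W, x)
print(np.round(W, 2))   # two rows approach the normalized inputs
```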
The function gives the lateral connection weight value for a neighboring unit k at a distance |i-k| from the current unit i. A third method of implementing the feature mapping process is to use the architecture of a competitive learning network with on-centre off-surround connections among units, but at each stage the weights are updated not only for the winning unit but also for the units in its neighborhood. The neighborhood region may be progressively reduced during learning. This is called a self-organization network with Kohonen's learning.
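A compact sketch of Kohonen's learning on a one-dimensional map follows; the map size, rates, neighborhood schedule and ring-shaped input data are illustrative assumptions rather than values from the text:

```python
import numpy as np

def som_train(X, n_units=10, epochs=30, eta=0.3, sigma0=3.0):
    """Kohonen's learning on a one-dimensional map (a sketch).

    The winner is the unit whose weight vector is closest to the input;
    its neighbors on the map are updated too, with a Gaussian
    neighborhood that shrinks as training proceeds.
    """
    rng = np.random.default_rng(1)
    W = rng.random((n_units, X.shape[1]))
    for epoch in range(epochs):
        sigma = sigma0 * (1 - epoch / epochs) + 0.5    # shrinking neighborhood
        for x in X:
            k = np.argmin(np.linalg.norm(W - x, axis=1))  # winning unit
            d = np.arange(n_units) - k                    # map distance |i - k|
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighborhood weight
            W += eta * h[:, None] * (x - W)               # update winner + neighbors
    return W

# Map 2-D points on a ring onto a 1-D chain of units
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
W = som_train(X)
print(np.round(W, 2))   # adjacent units end up with nearby weight vectors
```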
Associative Memory
Pattern storage is a pattern recognition task in which the network performs a memory function: it is expected to store the pattern information for later recall. The patterns to be stored may be of the spatial type or the spatio-temporal (pattern sequence) type. An artificial neural network behaves like an associative memory, in which a pattern is associated with another pattern or with itself. The pattern information is stored in the weight matrix of a feedback neural network. If the weight matrix stores the given patterns, the network is an autoassociative memory. If the weight matrix stores the association between a pair of patterns, the network is a bidirectional associative memory; this is called heteroassociation between two patterns. If the weight matrix stores multiple associations among several (more than two) patterns, the network is a multidirectional associative memory. If the weights store the associations between adjacent pairs of patterns in a sequence of patterns, the network is called a temporal associative memory.
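As a small illustration of autoassociation (the bipolar patterns and the Hopfield-style outer-product storage here are illustrative assumptions, not a construction given in this section), a noisy pattern presented to the feedback network recalls its stored version:

```python
import numpy as np

# Store two bipolar patterns in the weight matrix as a sum of outer
# products (illustrative values)
p1 = np.array([1, -1, 1, -1, 1])
p2 = np.array([-1, -1, 1, 1, 1])
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)                   # no self-connections

noisy = np.array([1, 1, 1, -1, 1])       # p1 with the second bit flipped
h = W @ noisy                            # net input to each unit
recalled = np.where(h != 0, np.sign(h), noisy)  # keep old bit on zero input
print(recalled)                          # -> [ 1 -1  1 -1  1], i.e. p1
```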
Bidirectional Associative Memory

The objective is to store a set of pattern pairs in such a way that any stored pattern pair can be recalled by giving either of the patterns as input. The network is a two-layer heteroassociative neural network. The BAM weight matrix from the first layer to the second layer is given by

$$W = \sum_{l=1}^{L} a_l b_l^T$$

where $a_l \in \{-1,+1\}^M$ and $b_l \in \{-1,+1\}^N$ for bipolar patterns, and $L$ is the number of training pattern pairs. The weight matrix from the second layer to the first layer is given by

$$W^T = \sum_{l=1}^{L} b_l a_l^T$$
The activation equations for the bipolar case are as follows:

$$b_j(m+1) = \begin{cases} 1, & \text{if } y_j > 0 \\ b_j(m), & \text{if } y_j = 0 \\ -1, & \text{if } y_j < 0 \end{cases} \qquad \text{where } y_j = \sum_{i=1}^{M} w_{ji}\, a_i(m)$$

$$a_i(m+1) = \begin{cases} 1, & \text{if } x_i > 0 \\ a_i(m), & \text{if } x_i = 0 \\ -1, & \text{if } x_i < 0 \end{cases} \qquad \text{where } x_i = \sum_{j=1}^{N} w_{ij}\, b_j(m)$$
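A minimal sketch of BAM storage and recall in Python is given below; the two stored pattern pairs are illustrative values, and the recall loop keeps the previous bit whenever the net input is zero, as the activation equations above prescribe:

```python
import numpy as np

def bam_recall(W, a, steps=10):
    """Recall the pair (a, b) from a BAM, iterating both directions."""
    b = np.sign(W.T @ a)                     # forward pass through W
    for _ in range(steps):
        h = W @ b                            # backward pass: update a
        a = np.where(h != 0, np.sign(h), a)  # keep old value on zero input
        h = W.T @ a                          # forward pass: update b
        b = np.where(h != 0, np.sign(h), b)
    return a, b

# Store L = 2 bipolar pattern pairs: W = sum_l a_l b_l^T  (M = 4, N = 3)
A = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
B = [np.array([1, 1, -1]), np.array([-1, 1, 1])]
W = sum(np.outer(a, b) for a, b in zip(A, B))

a, b = bam_recall(W, np.array([1, -1, 1, -1]))
print(b)   # -> [ 1  1 -1], the pattern paired with the given input
```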
Multidirectional Associative Memory

The multiple association memory is also called multidirectional associative memory. Assume three layers of units denoted A, B and C. The dimensions of the three vectors $a_l$, $b_l$ and $c_l$ are $N_1$, $N_2$ and $N_3$ respectively. The weight matrices for the pairs of layers are given by

$$W_{AB} = \sum_{l=1}^{L} a_l b_l^T, \qquad W_{BC} = \sum_{l=1}^{L} b_l c_l^T, \qquad W_{CA} = \sum_{l=1}^{L} c_l a_l^T$$
The activation equation for the units in layer B is

$$b_j(m+1) = \begin{cases} 1, & \text{if } y_j > 0 \\ b_j(m), & \text{if } y_j = 0 \\ -1, & \text{if } y_j < 0 \end{cases} \qquad \text{for } j = 1, 2, \ldots, N_2$$

where

$$y_j = \sum_{i=1}^{N_1} w_{AB,ji}\, a_i(m) + \sum_{i=1}^{N_3} w_{CB,ji}\, c_i(m)$$
and where $w_{AB,ji}$ is the $ji$th element of the weight matrix $W_{AB}$ and $w_{CB,ji}$ is the $ji$th element of the weight matrix $W_{CB} = W_{BC}^T$.

Temporal Associative Memory

The BAM can be used to store a sequence of temporal pattern vectors and recall the sequence of patterns. The basic idea is that the adjacent overlapping pattern pairs are stored in a BAM. Let $a_1, a_2, \ldots, a_L$ be a sequence of $L$ patterns, each of dimensionality $M$. Then $(a_1, a_2), (a_2, a_3), \ldots, (a_i, a_{i+1}), \ldots, (a_{L-1}, a_L)$ and $(a_L, a_1)$ form the pattern pairs to be stored in the BAM; the last pattern in the sequence is paired with the first. The weight matrix in the forward direction is given by

$$W = \sum_{i=1}^{L-1} a_i a_{i+1}^T + a_L a_1^T$$

and the weight matrix for the reverse direction is its transpose, $W^T$.
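The following sketch (with illustrative patterns) stores a three-pattern sequence this way and recalls it by repeatedly applying the forward weight matrix; following the BAM forward-pass convention used above, each application of $W^T$ advances the sequence by one step:

```python
import numpy as np

def sequence_recall(W, a, n_steps=6):
    """Advance through the stored sequence: a(m+1) = sign(W^T a(m))."""
    seq = [a]
    for _ in range(n_steps - 1):
        seq.append(np.sign(W.T @ seq[-1]))
    return seq

# Three bipolar patterns; pairs (a1,a2), (a2,a3) and (a3,a1) are stored
a1 = np.array([1, 1, -1, -1, 1, -1])
a2 = np.array([-1, 1, 1, -1, -1, 1])
a3 = np.array([1, -1, 1, 1, -1, -1])
patterns = [a1, a2, a3]

# Forward weight matrix: W = sum_i a_i a_{i+1}^T + a_L a_1^T
W = sum(np.outer(patterns[i], patterns[(i + 1) % 3]) for i in range(3))

for p in sequence_recall(W, a1):
    print(p)    # cycles a1 -> a2 -> a3 -> a1 -> ...
```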