
Exercises
E16.1 Suppose that the weight matrix for layer 2 of the Hamming network is given by

$$W^2 = \begin{bmatrix} 1 & -\tfrac{3}{4} & -\tfrac{3}{4} \\ -\tfrac{3}{4} & 1 & -\tfrac{3}{4} \\ -\tfrac{3}{4} & -\tfrac{3}{4} & 1 \end{bmatrix}.$$

This matrix violates Eq. (16.6), since

$$\varepsilon = \tfrac{3}{4} > \frac{1}{S-1} = \tfrac{1}{2}.$$

Give an example of an output from Layer 1 for which Layer 2 will fail to operate correctly.
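One way to experiment with this question is to simulate the Layer 2 recurrence directly. Below is a minimal MATLAB sketch; the transfer function poslin(n) = max(0, n) is written inline, and the candidate Layer 1 output is a hypothetical placeholder to replace with your own trial vectors.

```matlab
% Iterate Layer 2 of the Hamming network: a(t+1) = poslin(W*a(t)).
% Correct operation leaves exactly one nonzero entry (the winner).
W = [1 -3/4 -3/4; -3/4 1 -3/4; -3/4 -3/4 1];
a = [0.9; 0.6; 0.3];         % hypothetical Layer 1 output -- try your own
for t = 1:50
    a = max(0, W*a);         % poslin transfer function
end
disp(a)                      % failure modes: a tie, or all entries zero
```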

E16.2 Consider the input vectors and initial weights shown in Figure E16.1.

[Figure E16.1: Cluster Data Vectors — input vectors p1, p2, p3, p4 and initial weight vectors 1w, 2w, 3w]


i. Draw the diagram of a competitive network that could classify the
data above so that each of the three clusters of vectors would have
its own class.
ii. Train the network graphically (using the initial weights shown) by
presenting the labeled vectors in the following order:

p1 , p2 , p3 , p4 .

Recall that the competitive transfer function chooses the neuron with the lowest index to win if more than one neuron has the same net input. The Kohonen rule is introduced graphically in Figure 16.3 (and restated after part iii below).
iii. Redraw the diagram in Figure E16.1, showing your final weight vectors and the decision boundaries between each region that represents a class.
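For reference, the Kohonen rule applied to the winning neuron $i^*$, as introduced in this chapter, is

$${}_{i^*}w(q) = {}_{i^*}w(q-1) + \alpha\big(p(q) - {}_{i^*}w(q-1)\big),$$

so the winning weight vector moves a fraction $\alpha$ of the way toward the current input.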

E16.3 Train a competitive network using the following input patterns:

$$p_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\quad p_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad p_3 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}.$$

i. Use the Kohonen learning law with $\alpha = 0.5$, and train for one pass through the input patterns. (Present each input once, in the order given.) Display the results graphically. Assume the initial weight matrix is

$$W = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}.$$

(A numerical sketch of this pass appears after part iii.)

ii. After one pass through the input patterns, how are the patterns clustered? (In other words, which patterns are grouped together in the same class?) Would this change if the input patterns were presented in a different order? Explain.
iii. Repeat part (i) using $\alpha = 0.25$. How does this change affect the training?
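The graphical training of part (i) can be checked numerically. Here is a minimal MATLAB sketch of one pass, assuming the standard inner-product competitive layer; MATLAB's max returns the lowest index on ties, which matches the tie-breaking convention above.

```matlab
% One pass of Kohonen learning on p1, p2, p3 (columns of P).
P = [1 1 -1; -1 1 -1];                 % input patterns as columns
W = [2 0; 0 2];                        % rows of W are the weight vectors
alpha = 0.5;
for q = 1:size(P, 2)
    p = P(:, q);
    [~, istar] = max(W*p);             % competition (lowest index wins ties)
    W(istar, :) = W(istar, :) + alpha*(p' - W(istar, :));  % Kohonen rule
end
disp(W)
```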

E16.4 Earlier in this chapter the term "conscience" was used to refer to a technique for avoiding the dead neuron problem plaguing competitive layers and LVQ networks.
Neurons that are too far from input vectors to ever win the competition can be given a chance by using adaptive biases that get more negative each time a neuron wins the competition. The result is that neurons that win very often start to feel "guilty" until other neurons have a chance to win.
Figure E16.2 shows a competitive network with biases. A typical learning rule for the bias $b_i$ of neuron $i$ is

$$b_i^{new} = \begin{cases} 0.9\,b_i^{old}, & \text{if } i \neq i^* \\ b_i^{old} - 0.2, & \text{if } i = i^* \end{cases}.$$
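In code, this update is a one-line pair. A minimal MATLAB sketch (the function name update_biases is a hypothetical helper, not from the text):

```matlab
% Conscience bias update: non-winners decay toward zero, while the
% winner's bias becomes more negative by 0.2, handicapping it later.
function b = update_biases(b, istar)
    bold = b;
    b = 0.9*bold;                  % i ~= i*: b_new = 0.9*b_old
    b(istar) = bold(istar) - 0.2;  % i == i*: b_new = b_old - 0.2
end
```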


[Figure E16.2: Competitive Layer with Biases — input p (2x1), weight matrix W (3x2), bias b (3x1), three neurons, a = compet(Wp + b)]


i. Examine the vectors in Figure E16.3. Is there any order in which the vectors can be presented that will cause 1w to win the competition and move closer to one of the vectors? (Note: assume that adaptive biases are not being used.)

[Figure E16.3: Input Vectors and Dead Neuron — input vectors p1, p2, p3 and weight vectors 1w, 2w, 3w]


ii. Given the input vectors and the initial weights and biases defined below, calculate the weights (using the Kohonen rule) and the biases (using the above bias rule). Repeat the sequence shown below until neuron 1 wins the competition.

$$p_1 = \begin{bmatrix} -1 \\ 0 \end{bmatrix},\quad p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\quad p_3 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$$

$${}_1w = \begin{bmatrix} 0 \\ -1 \end{bmatrix},\quad {}_2w = \begin{bmatrix} -2/\sqrt{5} \\ -1/\sqrt{5} \end{bmatrix},\quad {}_3w = \begin{bmatrix} -1/\sqrt{5} \\ -2/\sqrt{5} \end{bmatrix},\quad b_1(0) = b_2(0) = b_3(0) = 0$$

Sequence of input vectors: $p_1, p_2, p_3, p_1, p_2, p_3, \ldots$


iii. How many presentations occur before 1w wins the competition?

E16.5 The net input expression for LVQ networks calculates the distance between the input and each weight vector directly, instead of using the inner product. The result is that the LVQ network does not require normalized input vectors. This technique can also be used to allow a competitive layer to classify nonnormalized vectors. Such a network is shown in Figure E16.4.

[Figure E16.4: Competitive Layer with Alternate Net Input Expression — two inputs, two neurons, $n_i = -\|{}_iw - p\|$, $a = \mathrm{compet}(n)$]


Use this technique to train a two-neuron competitive layer on the (nonnormalized) vectors below, using a learning rate $\alpha$ of 0.5.

$$p_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad p_2 = \begin{bmatrix} -1 \\ 2 \end{bmatrix},\quad p_3 = \begin{bmatrix} -2 \\ -2 \end{bmatrix}$$

Present the vectors in the following order:

p1 , p2 , p3 , p2 , p3 , p1 .

Here are the initial weights of the network:

$${}_1w = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\quad {}_2w = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$
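As a check on the graphical training, here is a minimal MATLAB sketch of the same sequence, assuming the negative-distance net input from Figure E16.4:

```matlab
% Competitive layer with n_i = -||_i w - p||; no normalization required.
P = [1 -1 -2; 1 2 -2];                  % p1, p2, p3 as columns
W = [0 1; 1 0];                         % rows are 1w' and 2w'
alpha = 0.5;
for q = [1 2 3 2 3 1]                   % presentation order from the exercise
    p = P(:, q);
    n = [-norm(W(1,:)' - p); -norm(W(2,:)' - p)];   % net inputs
    [~, istar] = max(n);                % closest weight vector wins
    W(istar, :) = W(istar, :) + alpha*(p' - W(istar, :));  % Kohonen rule
end
disp(W)
```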

E16.6 Repeat E16.5 for the following inputs and initial weights. Show the movements of the weights graphically for each step. If the network is trained for a large number of iterations, how will the three vectors be clustered in the final configuration?


$$p_1 = \begin{bmatrix} 2 \\ 0 \end{bmatrix},\quad p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\quad p_3 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$$

$${}_1w = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\quad {}_2w = \begin{bmatrix} -1 \\ 0 \end{bmatrix}.$$

E16.7 We have a competitive learning problem, where the input vectors are

$$p_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\quad p_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix},\quad p_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad p_4 = \begin{bmatrix} 2 \\ 2 \end{bmatrix},$$

and the initial weight matrix is

$$W = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}.$$
i. Use the Kohonen learning law to train a competitive network using a learning rate of $\alpha = 0.5$. (Present each vector once, in the order shown.) Use the modified competitive network of Figure E16.4, which uses negative distance instead of inner product.
ii. Display the results of part i graphically, as in Figure 16.3. (Show all four iterations.)
iii. Where will the weights eventually converge (approximately)? Explain. Sketch the approximate final decision boundaries.

E16.8 Show that the modified competitive network of Figure E16.4, which computes distance directly, will produce the same results as the standard competitive network, which uses the inner product, when the input vectors are normalized.
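A hint on where to start (not the full argument): expand the squared distance and apply the normalization assumption,

$$\|{}_iw - p\|^2 = \|{}_iw\|^2 - 2\,{}_iw^T p + \|p\|^2 = 2 - 2\,{}_iw^T p \quad \text{when } \|{}_iw\| = \|p\| = 1,$$

so the weight vector at the smallest distance is the one with the largest inner product.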

E16.9 We would like a classifier that divides the interval of the input space defined below into five classes.

$$0 \le p_1 \le 1$$

i. Use MATLAB to randomly generate 100 values in the interval shown above with a uniform distribution.
ii. Square each number so that the distribution is no longer uniform.
iii. Write a MATLAB M-file to implement a competitive layer. Use the M-file to train a five-neuron competitive layer on the squared values until its weights are fairly stable. (A minimal sketch of such an M-file appears after part iv.)
iv. How are the weight values of the competitive layer distributed? Is there some relationship between how the weights are distributed and how the squared input values are distributed?
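The sketch below is one way parts (i)-(iii) might look, assuming a distance-based competition so the scalar inputs need not be normalized; the parameter choices (learning rate, epoch count) are illustrative, not prescribed by the exercise.

```matlab
% Five-neuron competitive layer trained on squared uniform samples.
rng(0);                              % for reproducibility (optional)
p = rand(1, 100).^2;                 % parts (i)-(ii): uniform, then squared
w = rand(1, 5);                      % five scalar weights
alpha = 0.1;
for epoch = 1:200
    for q = randperm(100)
        [~, istar] = min(abs(w - p(q)));               % closest weight wins
        w(istar) = w(istar) + alpha*(p(q) - w(istar)); % Kohonen rule
    end
end
disp(sort(w))   % part (iv): compare to the density of the squared inputs
```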

E16.10 We would like a classifier that divides the square region defined below into sixteen classes of roughly equal size.

$$0 \le p_1 \le 1,\quad 2 \le p_2 \le 3$$

i. Use MATLAB to randomly generate 200 vectors in the region shown above.
ii. Write a MATLAB M-file to implement a competitive layer with Kohonen learning. Calculate the net input by finding the distance between the input and weight vectors directly, as is done by the LVQ network, so the vectors do not need to be normalized. Use the M-file to train a competitive layer to classify the 200 vectors. Try different learning rates and compare performance.
iii. Write a MATLAB M-file to implement a four-neuron by four-neuron (two-dimensional) feature map. Use the feature map to classify the same vectors. Use different learning rates and neighborhood sizes, then compare performance. (A sketch of the feature-map update appears after this exercise.)
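For part (iii), the only change from ordinary competitive learning is that the winner's grid neighbors are updated too. A minimal MATLAB sketch of that update, assuming a 4x4 grid, a fixed neighborhood radius, and R2016b+ implicit expansion:

```matlab
% 4x4 feature map with distance-based competition.
[gx, gy] = meshgrid(1:4, 1:4);
pos = [gx(:) gy(:)];                     % grid position of each of 16 neurons
P = [rand(1, 200); 2 + rand(1, 200)];    % part (i): vectors in the region
W = [rand(16, 1), 2 + rand(16, 1)];      % initial weights inside the region
alpha = 0.5; radius = 1;
for q = 1:200
    p = P(:, q)';
    [~, istar] = min(sum((W - p).^2, 2));                    % winning neuron
    hood = max(abs(pos - pos(istar, :)), [], 2) <= radius;   % grid neighbors
    W(hood, :) = W(hood, :) + alpha*(p - W(hood, :));  % move the whole hood
end
```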

E16.11 We want to train the following 1-D feature map (which uses distance instead of inner product to compute the net input):

[Figure E16.5: 1-D Feature Map for Exercise E16.11 — two inputs, four neurons connected in a chain (1-2-3-4), $n_i = -\|{}_iw - p\|$, $a = \mathrm{compet}(n)$]

The initial weight matrix is

$$W(0) = \begin{bmatrix} 2 & -1 & -1 & 1 \\ 2 & 1 & -2 & 0 \end{bmatrix}^T.$$


i. Plot the initial weight vectors as dots, and connect the neighboring weight vectors as lines (as in Figure 16.10, except that this is a 1-D feature map).
ii. The following input vector is applied to the network. Perform one iteration of the feature map learning rule. (You can do this graphically.) Use a neighborhood size of 1 and a learning rate of $\alpha = 0.5$.

$$p_1 = \begin{bmatrix} -2 & 0 \end{bmatrix}^T$$

iii. Plot the new weight vectors as dots, and connect the neighboring weight vectors as lines.

E16.12 Consider the following feature map, where distance is used instead of inner product to compute the net input.

[Figure E16.6: 2-D Feature Map for Exercise E16.12 — two inputs, four neurons in a 2x2 grid (1-2 over 3-4), $n_i = -\|{}_iw - p\|$, $a = \mathrm{compet}(n)$]

The initial weight matrix is

$$W = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}^T.$$

i. Plot the initial weights, and show their topological connections, as in Figure 16.10.
ii. Apply the input $p = \begin{bmatrix} -1 & 1 \end{bmatrix}^T$, and perform one iteration of the feature map learning rule, with a learning rate of $\alpha = 0.5$ and a neighborhood radius of 1.
iii. Plot the weights after the first iteration, and show their topological connections.


E16.13 An LVQ network has the following weights:

$$W^1 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix},\quad W^2 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{bmatrix}.$$

i. How many classes does this LVQ network have? How many subclasses? (A reminder of how $W^2$ assigns subclasses to classes follows part iii.)
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.
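As a reminder from the chapter: each row of $W^1$ is a subclass prototype, and the second layer assigns subclasses to classes, since a subclass-$i$ win gives $a^1 = e_i$ and

$$a^2 = W^2 a^1 = \text{(column } i \text{ of } W^2\text{)},$$

so subclass $i$ belongs to the class whose row of $W^2$ has the 1 in column $i$.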

E16.14 We would like an LVQ network that classifies the following vectors according to the classes indicated:

$$\text{class 1: } \left\{ \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} \right\},\quad \text{class 2: } \left\{ \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} \right\},\quad \text{class 3: } \left\{ \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix} \right\}.$$

i. How many neurons are required in each layer of the LVQ network?
ii. Define the weights for the first layer.
iii. Define the weights for the second layer.
iv. Test your network for at least one vector from each class.

E16.15 We would like an LVQ network that classifies the following vectors according to the classes indicated:

$$\text{class 1: } \left\{ p_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ p_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix} \right\},\quad \text{class 2: } \left\{ p_3 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},\ p_4 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\}$$

i. Could this classification problem be solved by a perceptron? Explain your answer.
ii. How many neurons must be in each layer of an LVQ network that can classify the above data, given that each class is made up of two convex-shaped subclasses?
iii. Define the second-layer weights for such a network.


iv. Initialize the first-layer weights of the network to all zeros and calculate the changes made to the weights by the Kohonen rule (with a learning rate $\alpha$ of 0.5) for the following series of vectors:

$$p_4,\ p_2,\ p_3,\ p_1,\ p_2.$$

v. Draw a diagram showing the input vectors, the final weight vectors
and the decision boundaries between the two classes.

E16.16 An LVQ network has the following weights and training data.

$$W^1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\quad W^2 = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix},$$

$$\left\{ p_1 = \begin{bmatrix} -2 \\ 2 \end{bmatrix},\ t_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\},\quad \left\{ p_2 = \begin{bmatrix} 2 \\ 0 \end{bmatrix},\ t_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\},\quad \left\{ p_3 = \begin{bmatrix} 2 \\ -2 \end{bmatrix},\ t_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\},\quad \left\{ p_4 = \begin{bmatrix} -2 \\ 0 \end{bmatrix},\ t_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$$

i. Plot the training data input vectors and weight vectors (as in Figure 16.14).
ii. Perform four iterations of the LVQ learning rule, with learning rate $\alpha = 0.5$, as you present the following sequence of input vectors: $p_1$, $p_2$, $p_3$, $p_4$ (one iteration for each input). Do this graphically, on a separate diagram from part i. (A code sketch of one such iteration follows part iii.)
iii. After completing the iterations in part ii, on a new diagram, sketch the regions of the input space that make up each subclass and each class. Label each region to indicate which class it belongs to.
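Here is a minimal MATLAB sketch of a single LVQ iteration on the first training pair above, assuming R2016b+ implicit expansion (LVQ rule: the winning prototype moves toward p when its class matches the target, and away otherwise):

```matlab
% One LVQ iteration on {p1, t1}.
W1 = [1 0; 0 1; 0 0; 0 0];           % subclass prototypes (rows)
W2 = [1 1 0 0; 0 0 1 1];             % subclass-to-class assignment
alpha = 0.5;
p = [-2; 2];  t = [1; 0];
[~, istar] = min(sum((W1 - p').^2, 2));   % closest prototype wins
if isequal(W2(:, istar), t)               % winner's class vs. target
    W1(istar, :) = W1(istar, :) + alpha*(p' - W1(istar, :));  % attract
else
    W1(istar, :) = W1(istar, :) - alpha*(p' - W1(istar, :));  % repel
end
disp(W1)
```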

E16.17 An LVQ network has the following weights:

$$W^1 = \begin{bmatrix} 0 & 1 & -1 & 0 & 0 & -1 & -1 \\ 0 & 0 & 0 & 1 & -1 & -1 & 1 \end{bmatrix}^T,\quad W^2 = \begin{bmatrix} 1 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}.$$

i. How many classes does this LVQ network have? How many subclasses?
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.


iv. Suppose that an input $p = \begin{bmatrix} 1 & 0.5 \end{bmatrix}^T$ from Class 1 is presented to the network. Perform one iteration of the LVQ algorithm, with $\alpha = 0.5$.

E16.18 An LVQ network has the following weights:

$$W^1 = \begin{bmatrix} 0 & 0 & 2 & 1 & 1 & -1 \\ 0 & 2 & 2 & 1 & -1 & -1 \end{bmatrix}^T,\quad W^2 = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix}.$$

i. How many classes does this LVQ network have? How many subclasses?
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.
iv. Perform one iteration of the LVQ algorithm, with the following input/target pair: $p = \begin{bmatrix} -1 & -2 \end{bmatrix}^T$, $t = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$. Use learning rate $\alpha = 0.5$.
