NNDesign
Exercises
E16.1 Suppose that the weight matrix for layer 2 of the Hamming network is given by

W2 = [  1    -3/4  -3/4
      -3/4     1   -3/4
      -3/4   -3/4    1  ].

(Note that ε = 3/4 > 1/(S-1) = 1/2.)

Give an example of an output from Layer 1 for which Layer 2 will fail to operate correctly.
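Layer 2 of the Hamming network is a recurrent winner-take-all layer: a(t+1) = poslin(W2 a(t)), iterated until the output stops changing, and it is guaranteed to operate correctly only when ε < 1/(S-1). A minimal Python/NumPy sketch (not the book's implementation) lets you try candidate Layer 1 outputs against the W2 above:

```python
import numpy as np

def poslin(n):
    """Positive linear transfer function: max(0, n), elementwise."""
    return np.maximum(0, n)

def hamming_layer2(a0, eps=0.75, max_iter=100):
    """Iterate a(t+1) = poslin(W2 a(t)) until the output is stable.
    W2 has 1 on the diagonal and -eps off the diagonal."""
    S = len(a0)
    W2 = (1 + eps) * np.eye(S) - eps * np.ones((S, S))
    a = np.array(a0, dtype=float)
    for _ in range(max_iter):
        a_next = poslin(W2 @ a)
        if np.allclose(a_next, a):
            break
        a = a_next
    return a

# With eps = 3/4 > 1/(S-1) = 1/2, some Layer 1 outputs are handled
# incorrectly; try candidate vectors and inspect the steady state.
print(hamming_layer2([0.9, 0.5, 0.5]))
```

Correct operation means exactly one nonzero element survives, at the position of the largest initial value; a failing example violates that.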
E16.2 Consider the input vectors and initial weights shown in Figure E16.1.

Figure E16.1 Input Vectors and Initial Weights (input vectors p1, p2, p3, p4 and weight vectors 1w, 2w, 3w)

(The vectors are presented in the order p1, p2, p3, p4.)
16 Competitive Networks
E16.3 Consider the following input patterns:

p1 = [1; -1], p2 = [1; 1], p3 = [-1; -1].
i. Use the Kohonen learning law with α = 0.5, and train for one pass through the input patterns. (Present each input once, in the order given.) Display the results graphically. Assume the initial weight matrix is

W = [2 0; 0 2].
ii. After one pass through the input patterns, how are the patterns clustered? (In other words, which patterns are grouped together in the same class?) Would this change if the input patterns were presented in a different order? Explain.

iii. Repeat part (i) using α = 0.25. How does this change affect the training?
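One way to check the hand calculation in part (i) is a short Python/NumPy sketch of a competitive layer with inner-product net input (a sketch of the Kohonen law stated above, not the book's code):

```python
import numpy as np

def kohonen_pass(W, patterns, alpha):
    """One pass of competitive learning with inner-product net input.
    The winner i* = argmax of W p; only the winning row moves toward p:
        i*w(new) = i*w(old) + alpha * (p - i*w(old))
    """
    W = W.astype(float).copy()
    for p in patterns:
        i_star = int(np.argmax(W @ p))        # competition
        W[i_star] += alpha * (p - W[i_star])  # Kohonen update for the winner
    return W

W0 = np.array([[2.0, 0.0], [0.0, 2.0]])
pats = [np.array([1.0, -1.0]), np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
W1 = kohonen_pass(W0, pats, alpha=0.5)
print(W1)
```

Rerunning with alpha=0.25 shows directly how a smaller learning rate moves the weight vectors less on each presentation (part iii).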
E16.4 Earlier in this chapter the term "conscience" was used to refer to a technique for avoiding the dead neuron problem plaguing competitive layers and LVQ networks.

Neurons that are too far from input vectors to ever win the competition can be given a chance by using adaptive biases that get more negative each time a neuron wins the competition. The result is that neurons that win very often start to feel "guilty" until other neurons have a chance to win.
Figure E16.2 shows a competitive network with biases. A typical learning rule for the bias b_i of neuron i is

b_i(new) = 0.9 b_i(old),        if i ≠ i*
b_i(new) = b_i(old) - 0.2,      if i = i*,

where i* is the winning neuron.
Figure E16.2 Competitive Layer with Biases (p: 2x1; W: 3x2; b, n, a: 3x1; a = compet(Wp + b))
Figure E16.3 Input Vectors and Initial Weights (input vectors p1, p2, p3 and weight vectors 1w, 2w, 3w)

p1 = [-1; 0], p2 = [0; 1], p3 = [1/√2; 1/√2],

1w(0) = [0; -1], 2w(0) = [-2/√5; -1/√5], 3w(0) = [-1/√5; -2/√5], b1(0) = b2(0) = b3(0) = 0.
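The bias rule above combines with the Kohonen weight update in a few lines. The sketch below assumes a learning rate of 0.5 (the exercise's rate is not legible in this excerpt); i* is the neuron with the largest Wp + b:

```python
import numpy as np

def conscience_step(W, b, p, alpha=0.5):
    """One iteration of competitive learning with "conscience" biases.
    The winner maximizes Wp + b; its bias drops by 0.2, handicapping it
    in later competitions, while the losers' biases decay toward zero."""
    i_star = int(np.argmax(W @ p + b))
    b_new = 0.9 * b                    # losers: b_i <- 0.9 b_i
    b_new[i_star] = b[i_star] - 0.2    # winner: b_i <- b_i - 0.2
    W_new = W.copy()
    W_new[i_star] += alpha * (p - W_new[i_star])  # Kohonen update, winner only
    return W_new, b_new, i_star

s5 = np.sqrt(5.0)
W = np.array([[0.0, -1.0], [-2/s5, -1/s5], [-1/s5, -2/s5]])
b = np.zeros(3)
W, b, i_star = conscience_step(W, b, np.array([-1.0, 0.0]))
```

Presenting the same input repeatedly shows the conscience at work: the habitual winner's bias becomes more and more negative until a different neuron wins.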
E16.5 The net input expression for LVQ networks calculates the distance between the input and each weight vector directly, instead of using the inner product. The result is that the LVQ network does not require normalized input vectors. This technique can also be used to allow a competitive layer to classify nonnormalized vectors. Such a network is shown in Figure E16.4.
Figure E16.4 Competitive Layer with Distance-Based Net Input (p: 2x1; W: 2x2; n1_i = -||iw - p||, a1 = compet(n1))
p1 = [1; 1], p2 = [-1; 2], p3 = [-2; -2]

The vectors are presented in the order p1, p2, p3, p2, p3, p1, and the initial weights are

1w = [0; 1], 2w = [1; 0].
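With distance-based net input the competition simply picks the closest weight vector, so no normalization is needed. A NumPy sketch of this training run (the learning rate α = 0.5 is an assumption, since it is not legible in this excerpt; the tie on the first presentation goes to the lower-numbered neuron, as np.argmax does):

```python
import numpy as np

def compete_dist(W, p):
    """Net input n_i = -||iw - p||; the winner is the closest weight row."""
    return int(np.argmax(-np.linalg.norm(W - p, axis=1)))

def train_sequence(W, seq, alpha=0.5):
    """Kohonen updates over a fixed presentation sequence."""
    W = W.astype(float).copy()
    for p in seq:
        i = compete_dist(W, p)
        W[i] += alpha * (p - W[i])   # winner moves halfway toward p
    return W

W0 = np.array([[0.0, 1.0], [1.0, 0.0]])   # rows: 1w, 2w
p1, p2, p3 = np.array([1., 1.]), np.array([-1., 2.]), np.array([-2., -2.])
W_final = train_sequence(W0, [p1, p2, p3, p2, p3, p1], alpha=0.5)
print(W_final)
```

Plotting each intermediate W reproduces the graphical solution: one weight vector chases p1 and p2 while the other settles near p3.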
E16.6 Repeat E16.5 for the following inputs and initial weights. Show the movements of the weights graphically for each step. If the network is trained for a large number of iterations, how will the three vectors be clustered in the final configuration?
p1 = [2; 0], p2 = [0; 1], p3 = [2; 2],

1w = [1; 0], 2w = [-1; 0].
E16.7 We have a competitive learning problem, where the input vectors are

p1 = [0; 1], p2 = [0; 2], p3 = [1; 1], p4 = [2; 2],

and the initial weight matrix is

W = [1 -1; -1 1].
i. Use the Kohonen learning law to train a competitive network using a learning rate of α = 0.5. (Present each vector once, in the order shown.) Use the modified competitive network of Figure E16.4, which uses negative distance, instead of inner product.
ii. Display the results of part i graphically, as in Figure 16.3. (Show all
four iterations.)
iii. Where will the weights eventually converge (approximately)? Explain. Sketch the approximate final decision boundaries.
E16.8 Show that the modified competitive network of Figure E16.4, which computes distance directly, will produce the same results as the standard competitive network, which uses the inner product, when the input vectors are normalized.
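The identity behind E16.8 is ||iw - p||² = ||iw||² + ||p||² - 2 iwᵀp, which reduces to 2 - 2 iwᵀp when both vectors have unit length, so the closest weight vector is exactly the one with the largest inner product. A quick numerical spot check (random unit vectors, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unit-norm weight rows (5 neurons) and a unit-norm input vector
W = rng.standard_normal((5, 3))
W /= np.linalg.norm(W, axis=1, keepdims=True)
p = rng.standard_normal(3)
p /= np.linalg.norm(p)

# ||iw - p||^2 = 2 - 2 iw.p for unit vectors, so minimizing distance
# picks the same winner as maximizing the inner product.
inner_winner = int(np.argmax(W @ p))
dist_winner = int(np.argmin(np.linalg.norm(W - p, axis=1)))
print(inner_winner, dist_winner)
```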
E16.9 We would like a classifier that divides the interval of the input space defined below into five classes.

0 ≤ p1 ≤ 1

i. Use MATLAB to randomly generate 100 values in the interval shown above with a uniform distribution.

ii. Square each number so that the distribution is no longer uniform.

iii. Write a MATLAB M-file to implement a competitive layer. Use the M-file to train a five-neuron competitive layer on the squared values.
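The same experiment can be sketched in Python/NumPy instead of a MATLAB M-file (the initial weight values, learning rate, and epoch count below are assumed choices, not values from the exercise):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100) ** 2       # parts (i)-(ii): squared uniform data

w = np.linspace(0.1, 0.9, 5)              # five 1-D prototypes (assumed init)
alpha = 0.1
for epoch in range(50):
    for p in rng.permutation(x):
        i = int(np.argmin(np.abs(w - p)))  # distance-based competition in 1-D
        w[i] += alpha * (p - w[i])         # Kohonen update for the winner
print(np.sort(w))  # prototypes crowd toward 0, where the squared data is dense
```

Because squaring concentrates the samples near zero, the trained prototypes end up unevenly spaced, with more of them near the low end of the interval.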
E16.10 We would like a classifier that divides the square region defined below into sixteen classes of roughly equal size.

0 ≤ p1 ≤ 1, 2 ≤ p2 ≤ 3

i. Use MATLAB to randomly generate 200 vectors in the region shown above.

ii. Write a MATLAB M-file to implement a competitive layer with Kohonen learning. Calculate the net input by finding the distance between the input and weight vectors directly, as is done by the LVQ network, so the vectors do not need to be normalized. Use the M-file to train a competitive layer to classify the 200 vectors. Try different learning rates and compare performance.
iii. Write a MATLAB M-file to implement a four-neuron by four-neuron
(two-dimensional) feature map. Use the feature map to classify the
same vectors. Use different learning rates and neighborhood sizes,
then compare performance.
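A Python/NumPy sketch of the four-by-four feature map in part (iii) (the exercise asks for a MATLAB M-file; the grid neighborhood, learning-rate decay, and epoch count here are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(2)
# part (i): 200 random vectors in the region 0 <= p1 <= 1, 2 <= p2 <= 3
P = np.column_stack([rng.uniform(0, 1, 200), rng.uniform(2, 3, 200)])

# 4x4 map: grid[i] is the (row, col) position of neuron i in the feature map
grid = np.array([(r, c) for r in range(4) for c in range(4)])
W = np.column_stack([rng.uniform(0, 1, 16), rng.uniform(2, 3, 16)])

alpha = 0.5
for epoch in range(20):
    for k in rng.permutation(len(P)):
        p = P[k]
        i = int(np.argmin(np.linalg.norm(W - p, axis=1)))  # winning neuron
        # neighborhood: the winner plus every neuron within one grid step
        hood = np.max(np.abs(grid - grid[i]), axis=1) <= 1
        W[hood] += alpha * (p - W[hood])                   # all neighbors move
    alpha *= 0.9                                           # decay the rate
```

Shrinking the neighborhood to radius 0 turns this into the plain competitive layer of part (ii), which is a convenient way to compare the two on the same data.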
E16.11 We want to train the following 1-D feature map (which uses distance instead of inner product to compute the net input):

Network diagram: 1-D feature map with four neurons in a chain (1-2-3-4); p: 2x1, W: 4x2, n_i = -||iw - p||, a = compet(n): 4x1.
i. Plot the initial weight vectors as dots, and connect the neighboring
weight vectors as lines (as in Figure 16.10, except that this is a 1-D
feature map).
ii. The following input vector is applied to the network. Perform one iteration of the feature map learning rule. (You can do this graphically.) Use a neighborhood size of 1 and a learning rate of α = 0.5.

p1 = [-2  0]^T
iii. Plot the new weight vectors as dots, and connect the neighboring
weight vectors as lines.
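The single iteration of part (ii) can also be checked numerically. The initial weight values are not legible in this excerpt, so the W below is a hypothetical placeholder; the mechanics (the closest weight wins, then the winner and its chain neighbors move halfway toward p) follow the exercise:

```python
import numpy as np

# Hypothetical initial weights, one row per neuron in the chain 1-2-3-4
W = np.array([[1.0, 1.0], [1.0, 0.0], [1.0, -1.0], [0.0, -1.0]])
p = np.array([-2.0, 0.0])
alpha, radius = 0.5, 1

i = int(np.argmin(np.linalg.norm(W - p, axis=1)))  # winner: closest weight
# neighborhood of radius 1 in the chain: the winner and adjacent neurons
for j in range(max(0, i - radius), min(len(W), i + radius + 1)):
    W[j] += alpha * (p - W[j])
print(i, W)
```

Only the rows inside the neighborhood move; replotting the dots and connecting lines (part iii) shows the end of the chain being pulled toward p.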
E16.12 Consider the following feature map, where distance is used instead of inner product to compute the net input.

Network diagram: feature map with four neurons in a 2x2 grid (1, 2 / 3, 4); p: 2x1, W: 4x2, n_i = -||iw - p||, a = compet(n): 4x1.
iii. Plot the weights after the first iteration, and show their topological
connections.
E16.13 An LVQ network has the following weights:

W1 = [0 0; 1 0; -1 0; 0 1; 0 -1],   W2 = [1 0 0 0 0; 0 1 1 0 0; 0 0 0 1 1].
i. How many classes does this LVQ network have? How many subclasses?
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.
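For part (i), each row of W1 is a subclass prototype and each row of W2 is a class; the single 1 in each column of W2 records which class that subclass feeds. Both counts can be read off the matrices directly (a sketch using the second-layer weights of this exercise):

```python
import numpy as np

# W2[k, j] = 1 when subclass j (first-layer neuron j) belongs to class k
W2 = np.array([[1, 0, 0, 0, 0],
               [0, 1, 1, 0, 0],
               [0, 0, 0, 1, 1]])

n_classes, n_subclasses = W2.shape
class_of_subclass = np.argmax(W2, axis=0)   # class index fed by each subclass
print(n_classes, n_subclasses, class_of_subclass)
```

The same mapping labels the subclass regions in part (iii): each first-layer Voronoi cell inherits the class of its prototype's column.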
E16.14 We would like an LVQ network that classifies the following vectors according to the classes indicated:

class 1: {[-1; 1; -1], [1; -1; -1]},  class 2: {[-1; -1; 1], [1; -1; 1], [1; 1; -1]},  class 3: {[-1; -1; -1], [-1; 1; 1]}.
i. How many neurons are required in each layer of the LVQ network?
ii. Define the weights for the first layer.
iii. Define the weights for the second layer.
iv. Test your network for at least one vector from each class.
E16.15 We would like an LVQ network that classifies the following vectors according to the classes indicated:

class 1: p1 = [1; 1], p2 = [0; 2],  class 2: p3 = [-1; 1], p4 = [1; 2]
iv. Initialize the first-layer weights of the network to all zeros and calculate the changes made to the weights by the Kohonen rule (with a learning rate of 0.5) for the following series of vectors:

p4, p2, p3, p1, p2.
v. Draw a diagram showing the input vectors, the final weight vectors
and the decision boundaries between the two classes.
E16.16 An LVQ network has the following weights and training data.

W1 = [1 0; 0 1; 0 0],   W2 = [1 1 0; 0 0 1],

{p1 = [-2; 2], t1 = [1; 0]}, {p2 = [2; 0], t2 = [0; 1]}, {p3 = [2; -2], t3 = [1; 0]}, {p4 = [-2; 0], t4 = [0; 1]}.
i. Plot the training data input vectors and weight vectors (as in Figure
16.14).
ii. Perform four iterations of the LVQ learning rule, with learning rate α = 0.5, as you present the following sequence of input vectors: p1, p2, p3, p4 (one iteration for each input). Do this graphically, on a separate diagram from part i.
iii. After completing the iterations in part ii, on a new diagram, sketch
the regions of the input space that make up each subclass and each
class. Label each region to indicate which class it belongs to.
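The LVQ rule used in part (ii) moves the winning prototype toward the input when the prototype's class matches the target, and away when it does not. A sketch of one such iteration, assuming the weights and first training pair above parse as W1 = [1 0; 0 1; 0 0], W2 = [1 1 0; 0 0 1], p1 = [-2; 2], t1 = [1; 0]:

```python
import numpy as np

def lvq_step(W1, W2, p, t, alpha=0.5):
    """One LVQ iteration: find the closest prototype, look up its class in
    W2, then move it toward p (correct class) or away from p (wrong class)."""
    i = int(np.argmin(np.linalg.norm(W1 - p, axis=1)))  # winning subclass
    k = int(np.argmax(W2[:, i]))                        # class of that subclass
    sign = 1.0 if t[k] == 1 else -1.0                   # +1 correct, -1 wrong
    W1 = W1.copy()
    W1[i] += sign * alpha * (p - W1[i])
    return W1, i, sign

W1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
W2 = np.array([[1, 1, 0], [0, 0, 1]])
p1, t1 = np.array([-2.0, 2.0]), np.array([1, 0])
W1, i, sign = lvq_step(W1, W2, p1, t1)
```

Calling lvq_step again with p2, p3, p4 completes the four iterations; plotting W1 after each call reproduces the graphical solution.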
E16.17 An LVQ network has the following weights:

W1 = [0 0; 0 1; -1 -1; 1 0],   W2 = [1 0 1 0; 0 1 0 1].
i. How many classes does this LVQ network have? How many subclasses?
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.
iv. Suppose that an input p = [1  0.5]^T from Class 1 is presented to the network. Perform one iteration of the LVQ algorithm, with α = 0.5.
0 2 2 1 –1 –1 0 0 0 1 1 1
i. How many classes does this LVQ network have? How many subclasses?
ii. Draw a diagram showing the first-layer weight vectors and the decision boundaries that separate the input space into subclasses.
iii. Label each subclass region to indicate which class it belongs to.
iv. Perform one iteration of the LVQ algorithm, with the following input/target pair: p = [-1  -2]^T, t = [1  0]^T. Use learning rate α = 0.5.