Lecture 10: Neural Networks
Introduction
✓Radial Basis Function Networks (RBFNs) were first proposed by Broomhead and Lowe
in 1988.
✓RBFNs represent a special category of feedforward neural network architectures,
inspired by the localized responses of biological receptive fields in the cerebral
cortex.
✓RBF networks, just like MLP networks, can therefore be used for classification and/or
function approximation problems.
[Figure: a radial basis function around its center, shown with a large and a small width.]

RBFN Example: training targets

i    1       2       3       4       5      6        7        8        9       10
ti   0.5878  0.9511  0.9511  0.5878  0.00  -0.5878  -0.9511  -0.9511  -0.5878  0.00
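The targets above can be fitted by an exact-interpolation RBFN in the style of Broomhead and Lowe, with one Gaussian basis function per data point. A minimal sketch; the width sigma = 1 and the assumption that the inputs are i = 1..10 are ours, not fixed by the slides:

```python
import numpy as np

# Training data: inputs i = 1..10 and the targets from the table above.
x = np.arange(1, 11, dtype=float)
t = np.array([0.5878, 0.9511, 0.9511, 0.5878, 0.00,
              -0.5878, -0.9511, -0.9511, -0.5878, 0.00])

sigma = 1.0  # assumed basis-function width

def phi(a, c):
    """Gaussian radial basis function centered at c."""
    return np.exp(-((a - c) ** 2) / (2 * sigma ** 2))

# Exact interpolation: one center per data point, so the design matrix
# Phi[i, j] = phi(x_i, x_j) is square and we solve Phi @ w = t exactly.
Phi = phi(x[:, None], x[None, :])
w = np.linalg.solve(Phi, t)

def rbfn(a):
    """Evaluate the trained network at input a."""
    return phi(a, x).dot(w)
```

Because the Gaussian kernel matrix over distinct centers is positive definite, the linear system has a unique solution and the network reproduces every training target exactly.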
RBFN Example (Cont.)
✓An MLP would separate the classes with hyper-planes in the input plane.
✓An RBF network models the separate class distributions with localized basis functions.
RBF Applications
✓The familiar case of the non-linearly separable XOR
function provides a good example.
✓XOR takes two input arguments with values in {0,1}
and returns one output in {0,1}, as specified in the
following table:
The XOR Problem
x1   x2   XOR(x1, x2)
0    0    0
0    1    1
1    0    1
1    1    0
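The classic RBF solution to XOR places a Gaussian basis function on each of the two class-0 points, (0,0) and (1,1); in the resulting 2-D hidden space the four points become linearly separable. A sketch under that assumption (the width sigma = 1 and the least-squares output layer are our choices):

```python
import numpy as np

# XOR inputs and targets from the table above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Two Gaussian basis functions centered on the two class-0 points.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])

def phi(X, c, sigma=1.0):
    """Gaussian response of one hidden unit with center c."""
    return np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * sigma ** 2))

# Hidden-layer feature space: here the classes are linearly separable.
H = np.column_stack([phi(X, c) for c in centers])

# Train the linear output layer by least squares (with a bias column).
A = np.column_stack([H, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = (A @ w > 0.5).astype(int)
```

In the hidden space the two class-1 points map to the same feature vector, so a single line (the output layer) separates the classes, which no line in the original input plane can do.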
Advantages/Disadvantages
• An RBF network trains faster than an MLP.
• Another claimed advantage is that the hidden layer of an RBF
network is easier to interpret than the hidden layer of an MLP.
• Although an RBF network is quick to train, once trained it is
slower to evaluate than an MLP, so where execution speed is a
factor an MLP may be more appropriate.
MLP                                                 RBF
Can have any number of hidden layers                Has only one hidden layer
The argument of each hidden-unit activation         The argument of each hidden-unit activation
function is the inner product of the inputs         function is the distance between the input
and the weights                                     and the weights (the center)
Trained with a single global supervised             Usually trained one layer at a time
algorithm                                           (a hybrid scheme)
After training, evaluation is much faster           After training, evaluation is much slower
than an RBF network                                 than an MLP
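The contrast in the second row of the table can be made concrete with the two hidden-unit computations side by side. A small sketch; the example input, weight vector, sigmoid activation, and sigma = 1 are our illustrative choices:

```python
import numpy as np

x = np.array([0.5, -1.0])   # example input
w = np.array([0.8, 0.3])    # MLP weight vector / RBF center

# MLP hidden unit: activation of the INNER PRODUCT of input and weights.
mlp_hidden = 1.0 / (1.0 + np.exp(-np.dot(x, w)))   # sigmoid

# RBF hidden unit: activation of the DISTANCE between input and center.
sigma = 1.0
rbf_hidden = np.exp(-np.sum((x - w) ** 2) / (2 * sigma ** 2))
```

The MLP unit responds to which side of a hyper-plane the input falls on, while the RBF unit responds most strongly when the input is close to its center, which is what makes RBF hidden units localized.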
Summary
✓Statistical feed-forward networks such as the RBF
network have become very popular, and are serious rivals
to the MLP.
✓It is appropriate to use a different learning algorithm for each layer:
✓First the hidden-node centers are determined.
✓Then the output-layer weights are trained.
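The two-stage hybrid training above can be sketched as follows. The slides only name the two stages; the use of a tiny k-means for stage 1, the toy sine regression target, and the choices k = 8 and sigma = 1 are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
t = np.sin(X[:, 0])                      # assumed toy regression target

def kmeans(X, k, iters=50):
    """Minimal k-means used to place the hidden-node centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Stage 1: determine the hidden-node centers (unsupervised).
C = kmeans(X, k=8)

# Stage 2: train the output-layer weights (supervised, linear least squares).
sigma = 1.0
Phi = np.exp(-((X[:, None, :] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
pred = Phi @ w
```

Because stage 2 is a linear problem once the centers are fixed, it has a closed-form solution, which is the main reason RBF networks train faster than MLPs.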