
Radial Basis Function Networks

Introduction
✓Radial Basis Function Networks (RBFN) were first proposed by Broomhead and Lowe in 1988.
✓RBFNs represent a special category of the feedforward neural network architecture, inspired by the powerful biological receptive fields of the cerebral cortex.
✓RBF networks, just like MLP networks, can be used for classification and/or function approximation problems.

✓Radial Basis Functions:


✓ Radial: Symmetric around its center.
✓ Basis Function: A set of functions whose linear combination can generate an arbitrary function in a
given function space.
RBFN Architecture
✓RBFNs are single-hidden-layer feed-forward networks.
✓The hidden nodes implement a set of radial basis functions (e.g. Gaussian functions).
✓The output nodes implement linear summation functions (similar to an MLP).
✓The network training is divided into two stages:
▪ First, the weights from the input to the hidden layer are determined.
▪ Then, the weights from the hidden to the output layer are found.
✓The training/learning is fairly fast.
✓RBF nets can outperform MLPs on some classification and function approximation problems.
(Figure: the typical architecture of an RBF network. It consists of an input vector, a layer of RBF neurons, and an output layer with one node per category or class of data.)
RBFN Architecture (Cont.)
✓Approximate the target function with a linear combination of M radial basis functions:

    y(x) = Σᵢ wᵢ φᵢ(x),  i = 1, …, M

✓φᵢ(x) is most often a Gaussian function:

    φᵢ(x) = exp( −‖x − cᵢ‖² / (2σᵢ²) )

✓cᵢ: center of the i-th basis function (first-layer weight).
✓σᵢ: width of the i-th basis function.
✓M: number of basis functions, the i-th centered at cᵢ.
✓φ: the activation function should be radially symmetric (i.e. if ‖x₁ − c‖ = ‖x₂ − c‖ then φ(x₁) = φ(x₂)).
✓wᵢ: connection weights in the second layer (from the hidden layer to the output).
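
A minimal sketch of this forward pass (hypothetical NumPy code; the centers, widths, and weights below are illustrative values, not from the slides):

    import numpy as np

    def rbfn_forward(x, centers, sigmas, w):
        """Compute y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2))."""
        d2 = np.sum((x[None, :] - centers) ** 2, axis=1)  # squared distance to each of the M centers
        phi = np.exp(-d2 / (2.0 * sigmas ** 2))           # M radial basis activations
        return w @ phi                                    # linear combination at the output node

    # Illustrative values: M = 3 basis functions over a 2-D input space
    centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]])
    sigmas = np.array([1.0, 1.0, 0.5])
    w = np.array([0.3, -0.2, 0.8])
    print(rbfn_forward(np.array([0.4, 0.1]), centers, sigmas, w))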
RBF Neuron Activation Function

✓The network uses nonlinear transformations at its hidden layer (the typical transfer function for a hidden unit is a Gaussian curve), but linear transformations between the hidden and the output layers.
✓The Gaussian produces the familiar bell curve shown below, centered at its mean (in the plot below, the mean is 5 and sigma is 1).
RBF Neuron Activation Function (Cont.)

(Figure: Gaussian bell curves with the same center; a large σ gives a wide curve, a small σ a narrow one.)

✓σ is a measure of how spread out the curve is.
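
A small, hypothetical NumPy sketch of this activation and the effect of σ (values chosen to echo the mean-5, sigma-1 curve described above):

    import numpy as np

    def gaussian_rbf(x, center, sigma):
        """Gaussian RBF activation: exp(-(x - c)^2 / (2 * sigma^2))."""
        return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

    x = np.linspace(0.0, 10.0, 11)
    # The bell curve centered at mean 5 with sigma 1
    print(np.round(gaussian_rbf(x, 5.0, 1.0), 4))
    # A larger sigma spreads the curve out; a smaller sigma narrows it
    print(np.round(gaussian_rbf(x, 5.0, 3.0), 4))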


RBFN Training
✓The training of the RBFN consists of two separate steps: training an RBFN is the process of finding appropriate values for the weights wₖⱼ, the centers cᵢⱼ, and the widths σⱼ.
✓Step 1: Train the RBF layer, i.e. adapt the centers and scaling parameters, using unsupervised training. Suitable methods include, but are not restricted to, the k-means-based method, the maximum likelihood estimate-based technique, the standard deviations-based approach, and the self-organizing map method. This step is very important for the construction of the RBFN, as accurate knowledge of the cᵢ and σᵢ has a major impact on the performance of the network.
✓Step 2: Adapt the weights of the output layer using a supervised training algorithm, such as the least-squares method or the gradient method, to update the weights between the hidden layer and the output layer. A sketch of both stages is shown below.
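
The following is a minimal sketch of this two-stage procedure, assuming Gaussian basis functions, k-means for stage 1, and least squares for stage 2; the function names and parameter choices are illustrative, not taken from the slides:

    import numpy as np

    def kmeans_centers(X, k, iters=20, seed=0):
        """Stage 1 (unsupervised): choose the k centers with plain k-means."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            # Assign each sample to its nearest center, then recompute the means
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers

    def fit_output_weights(X, t, centers, sigma):
        """Stage 2 (supervised): with centers and sigma fixed, the model is
        linear in w, so the output weights solve a least-squares problem."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-d2 / (2.0 * sigma ** 2))   # hidden-layer design matrix
        w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
        return w

With the centers and σ frozen after stage 1, stage 2 is an ordinary linear least-squares fit, which is why RBFN training is fast compared to backpropagating through an MLP.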
RBFN Example

✓Suppose that we have a set of 10 data samples (x1, t1), …, (x10, t10).
✓The data set is generated using t = sin(2πx).

 i    1       2       3       4       5      6        7        8        9        10
 xi   0.1     0.2     0.3     0.4     0.5    0.6      0.7      0.8      0.9      1.0
 ti   0.5878  0.9511  0.9511  0.5878  0.00   -0.5878  -0.9511  -0.9511  -0.5878  0.00
RBFN Example (Cont.)

(Figure: graphical illustration of the data samples.)


RBFN Example (Cont.)
✓Training the RBFN includes determining the locations of the centers cᵢ and the weights wᵢ; the width is held constant at σ = 1.
✓c1 = 0.2, c2 = 0.4, c3 = 0.6, c4 = 0.8.
✓This gives four basis functions.
RBFN Example (Cont.)

✓The resultant RBFN is then ready to make a prediction for any x, as in the sketch below.
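
Here is a hypothetical end-to-end sketch of this worked example, using the four centers and σ = 1 from above and fitting the output weights by least squares (the code is illustrative, not taken from the slides):

    import numpy as np

    # The 10 training samples, generated by t = sin(2*pi*x)
    x = np.linspace(0.1, 1.0, 10)
    t = np.sin(2.0 * np.pi * x)

    # Four Gaussian basis functions with the given centers and sigma = 1
    centers = np.array([0.2, 0.4, 0.6, 0.8])
    sigma = 1.0

    def design_matrix(x):
        # Phi[n, i] = exp(-(x_n - c_i)^2 / (2 * sigma^2))
        return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))

    # Fit the output-layer weights w in y(x) = sum_i w_i * phi_i(x)
    w, *_ = np.linalg.lstsq(design_matrix(x), t, rcond=None)

    # The trained RBFN can now predict y for any new x
    x_new = np.linspace(0.0, 1.0, 101)
    y_new = design_matrix(x_new) @ w
    print(y_new[::20])  # a few predicted values across [0, 1]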


RBFN Example (Cont.)

(Figure: the result of the RBFN for curve fitting.)


Example Summary
RBF Applications
✓Classification:
✓Suppose we have a data set that falls into three classes.
✓An MLP would separate the classes with hyperplanes in the input space.
✓An RBF network instead models the separate class distributions with localized basis functions.
RBF Applications (Cont.)
✓The familiar case of the non-linearly separable XOR function provides a good example.
✓XOR takes two input arguments with values in {0,1} and returns one output in {0,1}, as specified in the following table:

 x1   x2  |  XOR(x1, x2)
  0    0  |      0
  0    1  |      1
  1    0  |      1
  1    1  |      0
The XOR Problem

• Consider nonlinear functions φ1 and φ2 that map the input vector x into the φ1-φ2 space.
• Construct an RBF pattern classifier such that:
  (0,0) and (1,1) are mapped to 0, class C1
  (1,0) and (0,1) are mapped to 1, class C2

(Figure: the four points (0,0), (0,1), (1,0), (1,1) in the x1-x2 input plane; no single straight line separates class C1 from class C2.)

• When mapped into the feature space ⟨φ1, φ2⟩ (the hidden layer), C1 and C2 become linearly separable, so a linear classifier with φ1(x) and φ2(x) as inputs can be used to solve the XOR problem.
Illustrative Example - XOR Problem

(Figure: the four XOR points mapped into the φ1-φ2 plane, where the two classes become linearly separable.)
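
As a hedged sketch of this construction (the two Gaussian hidden units centered on the class-C1 corners follow the classic textbook choice for this example; the code itself is illustrative, not from the slides):

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

    # Two Gaussian hidden units centered on the class-C1 corners
    c1, c2 = np.array([1.0, 1.0]), np.array([0.0, 0.0])
    def phi(x, c):
        return np.exp(-np.sum((x - c) ** 2, axis=-1))

    # Hidden layer: each input becomes a point in (phi1, phi2) space
    Phi = np.column_stack([phi(X, c1), phi(X, c2), np.ones(4)])  # plus a bias

    # In feature space the classes are linearly separable, so a linear
    # output layer (here fit by least squares) solves XOR exactly
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    print((Phi @ w > 0.5).astype(int))  # -> [0 1 1 0]

In the ⟨φ1, φ2⟩ plane, (0,0) and (1,1) land near (0.14, 1) and (1, 0.14), while (0,1) and (1,0) both land at about (0.37, 0.37), so one straight line separates the two classes.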
Advantages/Disadvantages
• An RBF network trains faster than an MLP.
• Another claimed advantage is that the hidden layer of an RBF network is easier to interpret than the hidden layer of an MLP.
• Although an RBF network is quick to train, once training is finished and the network is in use, it is slower to execute than an MLP; so where execution speed is a factor, an MLP may be more appropriate.
MLP                                                    | RBF
-------------------------------------------------------|-------------------------------------------------------
Can have any number of hidden layers                   | Can have only one hidden layer
Can be fully or partially connected                    | Has to be fully connected
The argument of each hidden-unit activation function   | The argument of each hidden-unit activation function
is the inner product of the inputs and the weights     | is the distance between the input and the weights
                                                       | (i.e. the center)
Trained with a single global supervised algorithm      | Usually trained one layer at a time (hybrid)
Training is slower compared to RBF                     | Training is comparatively faster than MLP
After training, the MLP is much faster than the RBF    | After training, the RBF is much slower than the MLP
Summary
✓Statistical feed-forward networks such as the RBF network have become very popular, and are serious rivals to the MLP.
✓It is appropriate to use a different learning algorithm for each stage:
✓First the hidden node centers are determined.
✓Then the output layer weights are trained.
