
Radial Basis Functions

Radial Basis Function (RBF) Networks utilize a two-stage learning process to transform nonlinearly separable patterns into a linearly separable form, followed by least-squares estimation for classification. The network consists of an input layer, a hidden layer that applies nonlinear transformations, and a linear output layer. Training involves using the K-means algorithm for the hidden layer and the Recursive Least Squares (RLS) algorithm for the output layer.

Radial Basis Function Networks
Introduction
• The back-propagation learning algorithm for multilayer perceptrons may be viewed as the application of a recursive technique known in statistics as stochastic approximation.
• The radial-basis-function (RBF) learning machine, by contrast, is based on a different approach involving two stages:
• The first stage transforms a given set of nonlinearly separable patterns into a new set for which, under certain conditions, the likelihood of the transformed patterns becoming linearly separable is high.
• The second stage completes the solution to the prescribed classification problem by using least-squares estimation.
Types of Radial Basis Functions
In mathematics, a radial basis function (RBF) is a real-valued function whose value depends only on the distance between the input and some fixed point (the center):

φ(x_i, x_j) = φ(r),  where r = ‖x_i − x_j‖
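As a concrete illustration of this definition, the sketch below evaluates two commonly used radial basis functions, the Gaussian and the multiquadric; the width sigma, the constant c, and the example points are illustrative assumptions, not values taken from these notes.

```python
import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian RBF: phi(r) = exp(-r^2 / (2 sigma^2)), with r = ||x - center||."""
    r = np.linalg.norm(x - center)
    return np.exp(-r**2 / (2.0 * sigma**2))

def multiquadric_rbf(x, center, c=1.0):
    """Multiquadric RBF: phi(r) = sqrt(r^2 + c^2)."""
    r = np.linalg.norm(x - center)
    return np.sqrt(r**2 + c**2)

# The value depends only on the distance r, not on the direction of x - center.
x = np.array([1.0, 2.0])
x_j = np.array([0.0, 0.0])   # the fixed point (center)
print(gaussian_rbf(x, x_j), multiquadric_rbf(x, x_j))
```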


Cover's Theorem on the Separability of Patterns
• A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.
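A standard illustration of this theorem is the XOR problem. The sketch below maps the four XOR patterns, which are not linearly separable in the original two-dimensional space, through two Gaussian units; the centers at (0,0) and (1,1) are a conventional choice assumed here for the example.

```python
import numpy as np

# XOR patterns: not linearly separable in the original 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 1, 1, 0])                      # class labels

# Nonlinear map via two Gaussian units (centers are an illustrative assumption).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])

def phi(x):
    return np.exp(-np.sum((x - centers) ** 2, axis=1))   # one value per center

Phi = np.array([phi(x) for x in X])
print(Phi)
# In the (phi1, phi2) feature space the classes are now linearly separable:
# class 0 points have phi1 + phi2 > 1, class 1 points have phi1 + phi2 < 1,
# so the line phi1 + phi2 = 1 separates them -- consistent with Cover's theorem.
```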
The Radial-Basis-Function (RBF) Network
• The input layer is made up of source nodes (sensory units) that connect the network to its environment.
• The second layer, consisting of hidden units, applies a nonlinear transformation from the input space to the hidden (feature) space. For most applications the dimensionality of this single hidden layer is high; it is trained in an unsupervised manner using stage 1 of the hybrid learning procedure.
• The output layer is linear, designed to supply the response of the network to the activation pattern applied to the input layer; this layer is trained in a supervised manner using stage 2 of the hybrid procedure.
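A minimal forward-pass sketch of this three-layer structure, assuming Gaussian hidden units and a single linear output unit; the centers, width, and weights are placeholders, not values from these notes.

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights, bias=0.0):
    """Input layer -> Gaussian hidden layer -> linear output layer."""
    # Hidden layer: nonlinear transformation based on distances to the centers.
    r = np.linalg.norm(centers - x, axis=1)        # distance to each center
    hidden = np.exp(-r**2 / (2.0 * sigma**2))      # Gaussian activations
    # Output layer: linear combination of the hidden activations.
    return weights @ hidden + bias

# Placeholder example (all numbers are illustrative assumptions).
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, -0.5]])
weights = np.array([0.2, -0.4, 0.7])
print(rbf_forward(np.array([0.3, 0.1]), centers, sigma=1.0, weights=weights))
```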
Radial-Basis-Function Networks
• Input layer: Consists of m0 source nodes, where m0 is the dimensionality of the input vector x.
• Hidden layer: Consists of the same number of computation units as the size of the training sample, namely N; each unit is mathematically described by a radial basis function

φ_j(x) = φ(‖x − x_j‖),  j = 1, 2, ..., N

• The jth input data point x_j defines the center of the radial-basis function, and the vector x is the signal (pattern) applied to the input layer.
• Output layer: Consists of a single computational unit.
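Because this form of the network assigns one hidden unit per training point, the hidden-layer outputs form an N × N interpolation matrix, and the single linear output can be fit by least squares. Below is a hedged sketch under assumed choices (Gaussian units with a fixed width, synthetic data); it is an illustration, not a prescribed implementation.

```python
import numpy as np

def design_matrix(X, centers, sigma):
    """Phi[i, j] = phi(||x_i - x_j||) with Gaussian hidden units."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-dists**2 / (2.0 * sigma**2))

# One hidden unit per training point: the centers are the training points themselves.
X = np.random.default_rng(0).normal(size=(20, 2))    # N = 20 training points
d = np.sin(X[:, 0]) + X[:, 1]                        # illustrative targets
Phi = design_matrix(X, centers=X, sigma=1.0)         # N x N interpolation matrix
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)          # least-squares output weights
print(np.max(np.abs(Phi @ w - d)))                   # residual is (near) zero on the training set
```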
Structure of Practical RBF Network
Hybrid Learning Procedure for RBF Networks
• The K-means algorithm for training the hidden layer is applied first; it is then followed by the RLS algorithm for training the output layer.

• Input layer: The size of the input layer is determined by the dimensionality of the input vector x, which is denoted by m0.
Training the hidden layer
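A minimal sketch of stage 1, the unsupervised part: plain K-means places the hidden-unit centers, and a width is then chosen by a common heuristic (sigma = d_max / sqrt(2K)). The heuristic, the value of K, and the synthetic data are assumptions for illustration, not prescribed by these notes.

```python
import numpy as np

def kmeans_centers(X, K, n_iters=100, seed=0):
    """Plain K-means: returns K centers for the hidden layer (stage 1, unsupervised)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]   # random initial centers
    for _ in range(n_iters):
        # Assign each point to its nearest center.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # Move each center to the mean of its assigned points.
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers

# Width heuristic (an assumption): sigma = d_max / sqrt(2K), where d_max is the
# largest distance between the chosen centers.
X = np.random.default_rng(1).normal(size=(200, 2))
centers = kmeans_centers(X, K=10)
d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
sigma = d_max / np.sqrt(2 * len(centers))
```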
Training the output layer
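A hedged sketch of stage 2, the supervised part: the recursive least-squares (RLS) update for the linear output weights, applied one training pattern at a time to the hidden-layer activations produced by stage 1. The initialization constant delta is a placeholder choice.

```python
import numpy as np

def rls_train(Phi, d, delta=1e3):
    """Recursive least-squares for the linear output layer.

    Phi : (N, K) matrix of hidden-layer activations (one row per training pattern)
    d   : (N,) desired responses
    """
    K = Phi.shape[1]
    w = np.zeros(K)             # output weights
    P = delta * np.eye(K)       # inverse-correlation matrix, initialized large
    for phi_n, d_n in zip(Phi, d):
        g = P @ phi_n / (1.0 + phi_n @ P @ phi_n)   # gain vector
        alpha = d_n - w @ phi_n                     # a-priori error
        w = w + g * alpha                           # weight update
        P = P - np.outer(g, phi_n @ P)              # update inverse-correlation matrix
    return w
```

In the hybrid procedure, Phi would be built from the centers and width found in stage 1, e.g. Phi[i, k] = exp(-‖x_i − c_k‖² / (2 σ²)).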
