Advantages of Neural Networks
• A neural network can be an “expert” at analyzing the category of information given to it.
• Answers “what-if” questions.
• Adaptive learning
– The ability to learn how to do tasks based on the data given for training or initial experience.
• Self-organization
– Creates its own organization or representation of the information it receives during learning.
• Real-time operation
– Computations can be carried out in parallel.
• Fault tolerance via redundant information coding
– Partial destruction of a neural network causes only a degradation of performance.
– In some cases, performance can be retained even after major network damage.
• In the future, neural networks may also be used to accept spoken words as instructions for machines.
This figure shows the multidisciplinary point of view of neural networks.
Application Scope of Neural Networks
Air traffic control
Animal behavior
Appraisal and valuation of property
Betting on horse races and stock markets
Criminal sentencing
Complex physical and chemical processes
Data mining, cleaning and validation
Direct mail advertising
Echo patterns
Economic modeling
Employee hiring
Expert consultants
Fraud detection
Handwriting and typewriting
Lake water levels
Machinery controls
Medical diagnosis
Music composition
Photos and fingerprints
Recipes and chemical formulation
Traffic flows
Weather prediction
Fuzzy Logic
Introduced by Lotfi Zadeh, professor at the University of California.
An organized method for dealing with imprecise data.
Fuzzy logic includes 0 and 1 as extreme cases of truth (or “the state of matters” or “fact”), but also includes the various states of truth in between, so that, for example, the result of a comparison between two things could be not “tall” or “short” but “0.38 of tallness.”
Allows partial membership.
Implemented in everything from small embedded microcontrollers to large, networked, multichannel PCs or workstations.
Can be implemented in hardware, software, or both.
It mimics how a person would make decisions.
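As an illustration of partial membership, a minimal Python sketch (the membership function, its breakpoints and the example height are assumptions for illustration):

```python
def tallness(height_cm, short=150.0, tall=190.0):
    """Fuzzy membership in the set 'tall': 0 below `short`,
    1 above `tall`, linear in between (a simple ramp)."""
    if height_cm <= short:
        return 0.0
    if height_cm >= tall:
        return 1.0
    return (height_cm - short) / (tall - short)

print(tallness(165.2))  # 0.38 of tallness
```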
Genetic algorithm
Modeled on how the genes of parents combine to form those of their children.
Creates an initial population of individuals representing possible solutions to a problem.
An individual's characteristics determine whether it is more or less fit within the population.
The fitter members are selected with higher probability.
It is very effective at finding optimal or near-optimal solutions.
Follows a generate-and-test strategy, as sketched below.
Differs from normal optimization and search procedures in that it:
works with a coding of the parameter set;
works with multiple points;
searches via sampling (a blind search);
searches using stochastic operators.
Applied in business, scientific and engineering circles, etc.
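A minimal sketch of the generate-and-test loop under the ideas above (coded parameter set, selection by fitness, crossover and mutation); the one-max fitness function and all parameter values are assumptions for illustration:

```python
import random

def fitness(individual):
    # Hypothetical fitness: number of ones in the bit string.
    return sum(individual)

def evolve(pop_size=20, length=16, generations=50, p_mut=0.05):
    # Initial population of coded candidate solutions.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitter members are selected with higher probability.
        weights = [fitness(ind) + 1 for ind in pop]
        new_pop = []
        for _ in range(pop_size):
            a, b = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, length)        # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]               # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(evolve())  # best individual found, ideally all ones
```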
Hybrid System
Three types exist.
Neuro-fuzzy hybrid system
A combination of fuzzy set theory and neural networks.
The fuzzy system deals with explicit knowledge that can be explained and understood.
The neural network deals with implicit knowledge acquired by learning.
Advantages:
Can handle any kind of information.
Contd..
Neuro-genetic hybrid system
Topology optimization
A topology is selected for the ANN; a common one is backpropagation.
Genetic algorithm training
Learning in the ANN is formulated as a weight optimization problem, usually with mean squared error as the fitness measure.
Control parameter optimization
Learning rate, momentum, tolerance level, etc. are optimized using a GA.
Soft computing
The two major problem-solving techniques are:
Hard computing
Deals with a precise model where accurate solutions are achieved.
Soft computing
Deals with an approximate model, yielding approximate but usable solutions to complex problems.
Artificial Neural Network: An Introduction
Resembles the characteristics of a biological neural network.
Nodes – interconnected processing elements (units or neurons).
Each neuron is connected to others by a connection link.
Each connection link is associated with a weight, which carries information about the input signal.
ANN processing elements are called neurons or artificial neurons, since they have the capability to model networks of original neurons as found in the brain.
The internal state of a neuron is called the activation or activity level of the neuron, which is a function of the inputs the neuron receives.
A neuron can send only one signal at a time.
Basic Operation of a Neural Net
x1 and x2 – input neurons.
y – output neuron.
w1 and w2 – weighted interconnection links.
The net input calculation is:
y_in = x1w1 + x2w2
The output is:
y = f(y_in)
where f is the function applied over the net input.
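A minimal Python sketch of this computation (the example inputs, weights and step threshold are assumed values):

```python
def neuron_output(inputs, weights, f):
    # y_in = x1*w1 + x2*w2 + ... ; y = f(y_in)
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return f(y_in)

# Assumed example: binary step activation with threshold 0.5.
step = lambda y_in: 1 if y_in >= 0.5 else 0
print(neuron_output([0.6, 0.8], [0.4, 0.7], step))  # y_in = 0.8, so y = 1
```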
Contd…
The function applied over the net input is called the activation function.
A weight in an ANN is analogous to the slope m of a straight line (y = mx).
Biological Neural Network
Has three main parts:
Soma or cell body – where the cell nucleus is located.
Dendrites – where the nerve is connected to the cell body.
Axon – which carries the impulses of the neuron.
An electric impulse is passed between the synapse and the dendrites.
Synapse – the axon splits into strands, and each strand terminates in a small bulb-like organ called a synapse.
Transmission across the synapse is a chemical process which results in an increase or decrease in the electric potential inside the body of the receiving cell.
If the electric potential reaches a threshold value, the receiving cell fires, and a pulse (action potential) of fixed strength and duration is sent through the axon to the synaptic junctions of other cells.
After firing, the cell has to wait for a period called the refractory period.
Contd..
In this model, the net input is calculated by
y_in = x1w1 + x2w2 + … + xnwn = Σ xi wi
Terminology Relation Between Biological and Artificial Neurons

Biological neuron      Artificial neuron
Cell                   Neuron
Dendrites              Weights or interconnections
Soma                   Net input
Axon                   Output
Brain vs Computer

Speed
Brain: execution time is a few milliseconds.
Computer: execution time is a few nanoseconds.

Processing
Brain: performs massive parallel operations simultaneously.
Computer: performs several parallel operations simultaneously; it is faster than the biological neuron.

Size and complexity
Brain: the number of neurons is about 10^11 and the number of interconnections is about 10^15, so the complexity of the brain is higher than that of a computer.
Computer: depends on the chosen application and the network designer.

Storage capacity
Brain: (i) information is stored in the interconnections or in synapse strengths; (ii) new information is stored without destroying the old; (iii) sometimes fails to recollect information.
Computer: (i) information is stored in contiguous memory locations; (ii) overloading may destroy older locations; (iii) information can be easily retrieved.
Contd…

Tolerance
Brain: (i) fault tolerant; (ii) stores and retrieves information even if interconnections fail; (iii) accepts redundancies.
Computer: (i) not fault tolerant; (ii) information is corrupted if the network connections are disconnected; (iii) no redundancies.

Control mechanism
Brain: depends on active chemicals; neuron connections are strong or weak.
Computer: the CPU; the control mechanism is very simple.
Characteristics of ANN:
A neurally implemented mathematical model.
A large number of processing elements called neurons exist here.
Interconnections with weighted links hold informative knowledge.
Input signals arrive at the processing elements through connections and connecting weights.
The processing elements can learn, recall and generalize from the given data.
Computational power is determined by the collective behavior of the neurons.
ANNs are also described as connectionist models, parallel distributed processing models, self-organizing systems, neuro-computing systems and neuromorphic systems.
Evolution of neural networks
Year: 1943. Network: McCulloch-Pitts neuron. Designers: McCulloch and Pitts. Description: an arrangement of neurons is a combination of logic gates; its unique feature is the threshold.

Year: 1949. Network: Hebb network. Designer: Hebb. Description: if two neurons are active, then their connection strengths should be increased.

Years: 1958, 1959, 1962, 1988, 1960. Networks: Perceptron, Adaline. Designers: Frank Rosenblatt, Block, Minsky and Papert; Widrow and Hoff. Description: weights are adjusted to reduce the difference between the net input to the output unit and the desired output.
Contd…

Year: 1972. Network: Kohonen self-organizing feature map. Designer: Kohonen. Description: inputs are clustered to obtain a fired output neuron.

Years: 1982, 1984, 1985, 1986, 1987. Network: Hopfield network. Designers: John Hopfield and Tank. Description: based on fixed weights; can act as associative memory nets.
Single-layer Feed-forward Network

A layer is formed by taking processing elements and combining them with other processing elements.
The input and output are linked with each other.
Inputs are connected to the processing nodes with various weights, resulting in a series of outputs, one per node.
Multilayer feed-forward network
Formed by the interconnection of several layers.
The input layer receives the input and buffers the input signal.
The output layer generates the output (see the forward-pass sketch below).
A layer between the input and output is called a hidden layer.
The hidden layer is internal to the network.
A network may have zero to several hidden layers.
The more hidden layers, the greater the complexity of the network, but the more efficient the output produced.
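A minimal Python sketch of a forward pass through a network with one hidden layer (layer sizes, weights, biases and the sigmoid activation are assumptions for illustration):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each node applies a sigmoid
    to its bias plus the weighted sum of the inputs."""
    return [
        1.0 / (1.0 + math.exp(-(b + sum(x * w for x, w in zip(inputs, ws)))))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -0.2]                               # input layer buffers the signal
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.5]], [0.2])  # output layer generates output
print(output)
```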
Feedback network
If no neuron in the output layer is an input to a node in the same layer or a preceding layer, the network is a feed-forward network.
If outputs are directed back as inputs to processing elements in the same layer or a preceding layer, it is a feedback network.
If the outputs are directed back to the inputs of the same layer, it is lateral feedback.
Recurrent networks are feedback networks with a closed loop.
Fig. 2.8 (A) shows a simple recurrent neural network having a single neuron with feedback to itself.
Fig. 2.9 shows a single-layer network in which feedback from the output can be directed to the processing element itself, to other processing elements, or to both.
Maxnet – competitive interconnections having fixed weights.
On-center-off-surround (lateral inhibition) structure – each processing neuron receives two different classes of inputs: “excitatory” input from nearby processing elements and “inhibitory” input from more distantly located processing elements.
This type of interconnection is shown below.
A processing element's output can be directed back to the nodes in a preceding layer, forming a multilayer recurrent network.
A processing element's output can also be directed to the processing element itself or to other processing elements in the same layer.
Learning
The two broad kinds of learning in ANNs are:
i) parameter learning – updates the connecting weights in a neural net.
ii) structure learning – focuses on changes in the network structure.
Apart from these, learning in ANNs is classified into three categories:
i) supervised learning
ii) unsupervised learning
iii) reinforcement learning
Supervised learning
Learning with the help of a teacher.
Example: the learning process of a small child.
The child doesn't know how to read or write.
The child's each and every action is supervised by a teacher.
In an ANN, each input vector requires a corresponding target vector, which represents the desired output.
The input vector along with the target vector is called a training pair.
The input vector results in an actual output vector.
The actual output vector is compared with the desired output vector.
If there is a difference, an error signal is generated by the network.
The error signal is used to adjust the weights until the actual output matches the desired output (see the sketch below).
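A minimal sketch of this error-driven weight adjustment, using a simple delta-style update (the learning rate, epoch count and toy data are assumptions for illustration):

```python
def train_supervised(samples, weights, bias=0.0, alpha=0.1, epochs=20):
    """Adjust weights until the actual output approaches the target.
    `samples` is a list of (input_vector, target) training pairs."""
    for _ in range(epochs):
        for x, target in samples:
            y_in = bias + sum(xi * wi for xi, wi in zip(x, weights))
            error = target - y_in                      # error signal
            weights = [wi + alpha * error * xi for xi, wi in zip(x, weights)]
            bias += alpha * error
    return weights, bias

# Assumed toy data: learn y = 2*x.
w, b = train_supervised([([1.0], 2.0), ([2.0], 4.0)], [0.0])
print(w, b)
```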
Unsupervised learning
Learning is performed without the help of a teacher.
Example: a tadpole learns to swim by itself.
In an ANN, during the training process, the network receives input patterns and organizes them into clusters.
From the figure it is observed that no feedback is applied from the environment to inform the network what the outputs should be or whether they are correct.
The network itself discovers patterns, regularities, features or categories from the input data, and relations of the input data over the output.
Exact clusters are formed by discovering similarities and dissimilarities, so the process is called self-organizing.
Reinforcement learning
Similar to supervised learning.
Learning based on critic information is called reinforcement learning, and the feedback sent is called the reinforcement signal.
The network receives some feedback from the environment.
The feedback is only evaluative.
The external reinforcement signals are processed in the critic signal generator, and the obtained critic signals are sent to the ANN for proper adjustment of the weights, so as to get better critic feedback in the future.
Activation functions
Just as some force or activation is needed to make work more efficient and exact, an activation function is applied over the net input to calculate the output of an ANN.
The information processing of a processing element has two major parts: input and output.
An integration function (f) is associated with the input of a processing element.
Several activation functions exist.
1. Identity function:
A linear function, defined as
f(x) = x for all x
The output is the same as the input.
2. Binary step function:
Defined as
f(x) = 1 if x ≥ θ; 0 if x < θ
where θ represents the threshold value.
Contd..
3. Bipolar step function:
Defined as
f(x) = 1 if x ≥ θ; −1 if x < θ
where θ represents the threshold value.
Contd..
4. Sigmoid functions:
a) Binary sigmoid function:
f(x) = 1 / (1 + e^(−λx)), where λ is the steepness parameter
b) Bipolar sigmoid function:
f(x) = (1 − e^(−λx)) / (1 + e^(−λx))
Contd..
The derivative of the hyperbolic tangent function is
h′(x) = [1 + h(x)][1 − h(x)]
5. Ramp function:
Defined as
f(x) = 1 if x > 1; x if 0 ≤ x ≤ 1; 0 if x < 0
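A minimal Python sketch of the activation functions listed above (θ and λ are taken as parameters with assumed defaults):

```python
import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def bipolar_sigmoid(x, lam=1.0):
    return (1.0 - math.exp(-lam * x)) / (1.0 + math.exp(-lam * x))

def ramp(x):
    # 0 below 0, linear on [0, 1], 1 above 1.
    return max(0.0, min(1.0, x))
```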
Important terminologies
Weight
A weight contains information about the input signal.
It is used by the net to solve a problem.
Weights are represented in terms of a matrix, called the connection matrix.
If the weight matrix W contains all the elements of an ANN, then the set of all W matrices determines the set of all possible information-processing configurations.
The ANN can be realized by finding an appropriate matrix W.
Weights encode long-term memory (LTM), and the activation states of the network encode short-term memory (STM) in a neural network.
Contd..
Bias
The bias has an impact on calculating the net input.
The bias is included by adding a component x0 = 1 to the input vector x.
The net input is then calculated by
y_in = b + Σ xi wi
A negative bias decreases the net input.
Contd..
Threshold
A set value based upon which the final output is calculated.
The calculated net input is compared with the threshold to get the network output.
The activation function based on the threshold is defined as
f(net) = 1 if net ≥ θ; −1 if net < θ
where θ is the fixed threshold value.
Contd..
Learning rate
Denoted by α.
Controls the amount of weight adjustment at each step of training.
The learning rate ranges from 0 to 1.
Determines the rate of learning at each step.
Momentum factor
Convergence is made faster if a momentum factor is added to the weight update process (a sketch follows below).
This is done in the backpropagation network.
Vigilance parameter
Denoted by ρ.
Used in the Adaptive Resonance Theory (ART) network.
Used to control the degree of similarity.
Ranges from 0.7 to 1 to perform useful work in controlling the number of clusters.
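A minimal sketch of a weight update with a momentum term (the gradient values, α and μ are assumptions for illustration):

```python
def update_with_momentum(w, grad, prev_delta, alpha=0.1, mu=0.9):
    # New change = learning-rate step plus a fraction of the previous change.
    delta = -alpha * grad + mu * prev_delta
    return w + delta, delta

w, delta = 0.5, 0.0
for g in [0.4, 0.3, 0.2]:           # assumed gradients over three steps
    w, delta = update_with_momentum(w, g, delta)
print(w)
```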
McCulloch-Pitts Neuron
Proposed in 1943.
Usually called the M-P neuron.
M-P neurons are connected by directed weighted paths.
The activation of an M-P neuron is binary, i.e., at any time step the neuron may fire or may not fire.
Weights associated with the communication links may be excitatory (positive weights) or inhibitory (negative weights).
The threshold plays a major role here: there is a fixed threshold for each neuron, and if the net input to the neuron is greater than the threshold, the neuron fires.
M-P neurons are widely used in logic functions.
Contd…
A simple M-P neuron is shown in the figure.
A connection is excitatory with weight w (w > 0) or inhibitory with weight −p (p > 0).
In the figure, the inputs x1 to xn possess excitatory weighted connections, and the inputs xn+1 to xn+m possess inhibitory weighted interconnections.
Since the firing of the neuron is based on the threshold, the activation function is defined as
f(y_in) = 1 if y_in ≥ θ; 0 if y_in < θ
Contd…
For inhibition to be absolute, the threshold with the activation function should satisfy the following condition:
θ > nw − p
The output will fire if it receives k or more excitatory inputs but no inhibitory inputs, where
kw ≥ θ > (k − 1)w
- The M-P neuron has no particular training algorithm.
- An analysis is performed to determine the weights and the threshold.
- It is used as a building block where any function or phenomenon is modeled based on a logic function (see the AND sketch below).
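A minimal sketch of an M-P neuron realizing the logical AND function, with weights and threshold found by analysis as noted above (w = 1 for both inputs, θ = 2):

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts neuron: fires (1) if the net input
    reaches the threshold, otherwise stays silent (0)."""
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

# AND function: fires only when both excitatory inputs are on.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron([x1, x2], [1, 1], theta=2))
```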
Linear separability
It is a concept wherein the separation of the input space into regions is based on whether the network response is positive or negative.
A decision line is drawn to separate the positive and negative responses.
The decision line is also called the decision-making line, decision-support line or linear-separable line.
The net input calculation to the output unit is given as
y_in = b + Σ xi wi
The decision boundary is where this net input is zero; for two inputs, b + x1w1 + x2w2 = 0, i.e., x2 = −(w1/w2)x1 − b/w2.
Contd..
Consider a network having a positive response in the first quadrant and a negative response in all other quadrants, with either binary or bipolar data.
A decision line is drawn separating the two regions, as shown in the figure.
Using bipolar data representation, missing data can be distinguished from mistaken data; hence bipolar data is better than binary data.
Missing values are represented by 0, and mistakes by reversing the input value from +1 to −1 or vice versa (a classification sketch follows).
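A minimal sketch of classifying inputs by the sign of the net input (the weights and bias, chosen so the positive region lies toward the first quadrant, are assumptions for illustration):

```python
def response(x, w, b):
    """Positive response if the net input is positive, else negative."""
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if y_in > 0 else -1

# Assumed weights/bias; the decision line is x2 = -x1 + 1.
w, b = [1.0, 1.0], -1.0
print(response([1, 1], w, b), response([-1, -1], w, b))  # -> 1 -1
```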
Hebb network
Donald Hebb stated in 1949 that “in the brain, learning is performed by the change in the synaptic gap.”
“When an axon of cell A is near enough to excite cell B, and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.”
According to the Hebb rule, the weight vector increases proportionately to the product of the input and the learning signal.
In Hebb learning, the connection weight between two interconnected neurons is increased when both are ‘on’ simultaneously.
The weight update in the Hebb rule is given by
wi(new) = wi(old) + xi y
It is suited more for bipolar data.
If binary data is used, the weight update formula cannot distinguish two conditions, namely:
a training pair in which an input unit is “on” and the target value is “off”;
a training pair in which both the input unit and the target value are “off”.
Flowchart of training algorithm
Steps:
Step 0: Initialize the weights.
Step 1: Perform steps 2-4 for each input training vector and target output pair, s : t.
Step 2: Set the input activations. The activation function for the input layer is the identity function:
xi = si for i = 1 to n
Step 3: Set the output activation: y = t.
Step 4: Perform the weight and bias adjustments:
wi(new) = wi(old) + xi y
b(new) = b(old) + y
(A worked sketch follows.)
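A minimal sketch of steps 0-4 on the AND function with bipolar inputs and targets (the training data is the standard bipolar AND table; everything else follows the steps above):

```python
def hebb_train(samples, n_inputs):
    """Step 0: weights and bias start at zero. Then for each training
    pair (s, t): x = s, y = t, w_i += x_i * y, b += y."""
    w = [0.0] * n_inputs
    b = 0.0
    for s, t in samples:
        x, y = s, t                     # identity activations
        w = [wi + xi * y for wi, xi in zip(w, x)]
        b += y
    return w, b

# Bipolar AND: target is 1 only when both inputs are 1.
and_pairs = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
print(hebb_train(and_pairs, 2))  # -> ([2.0, 2.0], -2.0)
```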