
Artificial Neural Networks

- Introduction -
Reference Books and Journals
Neural Networks: A Comprehensive Foundation by Simon Haykin
Neural Networks for Pattern Recognition by Christopher M. Bishop
Selected research papers
Overview
A Neural Network (NN), or Artificial Neural Network (ANN), is a computing paradigm.
The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
The development of NNs dates back to the early 1940s.
In 1969, Minsky and Papert published a book (Perceptrons) that summed up a general feeling of frustration with neural networks among researchers.
Overview (Contd.)
NNs experienced an upsurge in popularity in the late 1980s, the result of the discovery of new training techniques and of general advances in computer hardware technology.
Some NNs are models of biological neural networks and some are not.
Overview (Contd.)
Historically, much of the inspiration for the field of NNs came from the desire to produce artificial systems capable of sophisticated, perhaps intelligent, computations similar to those that the human brain routinely performs, and thereby possibly to enhance our understanding of the human brain.
Overview (Contd.)
Most NNs have some sort of training rule. In other words, NNs learn from examples (as children learn to recognize dogs from examples of dogs) and exhibit some capability for generalization beyond the training data; a sketch of one such rule follows below.
Neural computing should not be considered a competitor to conventional computing, but rather as complementary.
Most successful neural solutions have been those that operate in conjunction with existing, traditional techniques.
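As an illustration of a training rule, here is a minimal sketch of the classic perceptron rule; the AND dataset, learning rate, and epoch count are illustrative assumptions, not values from the slides:

```python
# The classic perceptron training rule: nudge the weights whenever the
# network's answer on a training example is wrong.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """samples: input vectors; labels: desired outputs (0 or 1)."""
    w = [0.0] * len(samples[0])  # synaptic weights
    b = 0.0                      # bias
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if z > 0 else 0  # hard-threshold output
            error = target - y     # zero when the example is classified right
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Usage: learn the logical AND function from four labelled examples
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print(w, b)  # weights and bias that separate AND-true from AND-false inputs
```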
Overview (Contd.)

| Digital Computers | Neural Networks |
| --- | --- |
| Deductive reasoning: we apply known rules to input data to produce output. | Inductive reasoning: given input and output data (training examples), we construct rules. |
| Computation is centralized, synchronous, and serial. | Computation is collective, asynchronous, and parallel. |
| Memory is packetted, literally stored, and location-addressable. | Memory is distributed, internalized, short-term, and content-addressable. |
| Not fault tolerant: one transistor goes and it no longer works. | Fault tolerant, with redundancy and sharing of responsibilities. |
| Exact. | Inexact. |
| Static connectivity. | Dynamic connectivity. |
| Applicable if there are well-defined rules with precise input data. | Applicable if rules are unknown or complicated, or if data are noisy or partial. |
Why Neural Networks
Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.
Self-organization: an ANN can create its own organization or representation of the information it receives during learning.
Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured to take advantage of this capability.
Fault tolerance via redundant information coding: partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even with major network damage.
What can you do with an NN, and what not?
In principle, NNs can compute any computable function, i.e., they can do everything a normal digital computer can do.
In practice, NNs are especially useful for classification and function approximation problems.
NNs are, at least today, difficult to apply successfully to problems that concern manipulation of symbols and memory.
There are no methods for training NNs that can magically create information that is not contained in the training data.
Who is concerned with NNs?
Computer scientists want to find out about the properties of non-symbolic information processing with neural nets and about learning systems in general.
Statisticians use neural nets as flexible, nonlinear regression and classification models.
Engineers of many kinds exploit the capabilities of neural networks in many areas, such as signal processing and automatic control.
Cognitive scientists view neural networks as a possible apparatus to describe models of thinking and consciousness (high-level brain function).
Neurophysiologists use neural networks to describe and explore medium-level brain function (e.g. memory, sensory systems, motor control).
Who is concerned with NNs? (Contd.)
Physicists use neural networks to model phenomena in statistical mechanics and for many other tasks.
Biologists use neural networks to interpret nucleotide sequences.
Philosophers and some other people may also be interested in neural networks for various reasons.
Biological inspiration
Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to produce these behaviours.
An appropriate model or simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.
The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
Biological inspiration (Contd.)
The brain is a collection of about 10 billion interconnected neurons.
Each neuron is a cell that uses biochemical reactions to receive, process, and transmit information.
Each terminal button is connected to other neurons across a small gap called a synapse.
A neuron's dendritic tree is connected to a thousand neighbouring neurons.
When one of those neurons fires, a positive or negative charge is received by one of the dendrites.
The strengths of all the received charges are added together through the processes of spatial and temporal summation.
Artificial neurons
Neurons work by processing information. They receive and provide information in the form of spikes.

[Diagram: inputs x_1, ..., x_n arrive over synaptic weights w_1, ..., w_n; the neuron sums the weighted inputs and thresholds the result to produce the output y.]

z = \sum_{i=1}^{n} w_i x_i, \qquad y = H(z)

The McCulloch-Pitts model (H denotes the hard-threshold step function).
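A minimal sketch of this unit in code (the weights and threshold in the usage example are illustrative assumptions, not values from the slides):

```python
# A McCulloch-Pitts unit: a weighted sum of the inputs followed by a hard
# threshold H.

def mcculloch_pitts(x, w, threshold=0.0):
    """Return y = H(z), where z = sum_i w_i * x_i."""
    z = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum of the inputs
    return 1 if z > threshold else 0          # step-function output

# Usage: with weights (1, 1) and threshold 1.5, the unit computes logical AND
print(mcculloch_pitts((1, 1), (1, 1), threshold=1.5))  # -> 1
print(mcculloch_pitts((1, 0), (1, 1), threshold=1.5))  # -> 0
```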
Artificial neurons
Nonlinear generalization of the McCulloch-Pitts neuron:

y = f(x, w)

where y is the neuron's output, x is the vector of inputs, and w is the vector of synaptic weights.
Examples:

y = \frac{1}{1 + e^{-w^{T} x - a}}   (sigmoidal neuron)

y = e^{-\frac{\| x - w \|^{2}}{2 a^{2}}}   (Gaussian neuron)
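Both example neurons translate directly into code. The following is a minimal sketch using NumPy; the input vector x, weight vector w, and parameter a are illustrative assumptions:

```python
# The sigmoidal and Gaussian neurons defined above.
import numpy as np

def sigmoidal_neuron(x, w, a):
    """y = 1 / (1 + exp(-(w^T x) - a))"""
    return 1.0 / (1.0 + np.exp(-(w @ x) - a))

def gaussian_neuron(x, w, a):
    """y = exp(-||x - w||^2 / (2 a^2))"""
    return np.exp(-np.linalg.norm(x - w) ** 2 / (2.0 * a ** 2))

x = np.array([0.5, -1.0])  # illustrative input vector
w = np.array([1.0, 2.0])   # illustrative weight vector
print(sigmoidal_neuron(x, w, a=0.1))  # squashes the weighted sum into (0, 1)
print(gaussian_neuron(x, w, a=1.0))   # close to 1 only when x is near w
```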
Activation Functions
The activation function is generally non-linear.
Linear functions are limited because the output is simply proportional to the input; in particular, stacking linear units yields nothing new, since a composition of linear maps is itself linear (see the sketch below).
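A minimal sketch of this limitation; the matrix shapes and values are random illustrative choices:

```python
# Why purely linear activations are limited: two stacked linear layers are
# exactly equivalent to a single linear layer.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # weights of the first linear layer
W2 = rng.standard_normal((2, 4))  # weights of the second linear layer
x = rng.standard_normal(3)

two_layers = W2 @ (W1 @ x)  # pass through both layers, no nonlinearity
one_layer = (W2 @ W1) @ x   # one layer with the collapsed weight matrix
print(np.allclose(two_layers, one_layer))  # -> True
```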