
CET313 Artificial Intelligence

Artificial Intelligence: Artificial Neural Networks

Kate MacFarlane
[email protected]
Today we will discuss:
• A bit of background on the human brain
• How biological neural nets work
• How we can then model neural nets computationally
• Artificial Neural Network (ANN) applications
• The ANN lifecycle

Brain facts
What's up there?
• About 1.3 kg, with a surface area of roughly 0.3 m²
• Grey and white matter; 75% water, 60% fat
• Uses 20% of the body's blood, oxygen and energy
• About 100,000 miles of blood vessels
• Fully developed at around 25 years
• The brain is the most complex structure in the universe: as many neurons as stars, about 100 billion, each wired to ~10,000 other neurons, giving about 1 quadrillion links (synapses)
• Plus about 10× as many glial cells, which act as signal repeaters/boosters
• Brain uses about 23 kWh; Deep Blue uses approx. 850 kWh
• Brain information travels at up to 240 mph
Neuroscience: how our brains work
Two key parts of the brain:
• Neocortex: the largest part (in mammals/man)
• Cerebellum: fine-tunes motor sequences
Types of neuron:
1. Sensory: environment
2. Motor: movement, muscle, limbs
3. Projection: links areas
4. Inter: local processing
Nervous systems:
• Central nervous system: brain, retina, spinal cord
• Peripheral nervous system: sensory-motor, nerve axons, limbs
• Autonomous: systems e.g. heart, digestion
How does it work?
• Sensors send signals to the brain via the spinal cord
• 7 connection points with neurons
• Many agents for sensing and responding
• They communicate with a "Morse code" of spikes:
• Pressure on skin
• Eye, taste, smell, sound
• Language
Biological Neural Nets
• Human brain: 10^11 neurons (100 billion)
• 10^4 connections per neuron (10 thousand)
• Neurons respond in about 10^-3 s; electronics respond in about 10^-9 s
• Biological neural nets transfer short voltage spikes from a cell via synapses to the next neuron
• 1000s of connections to each neuron
• Inputs are combined; if the total is over a threshold, the cell 'fires'
• Inputs may inhibit or excite
• Neurotransmitter chemicals deplete, so there is a delay in passing on spikes
• Complex!
Key Brain Areas
• Left hemisphere (language) & right hemisphere
• Frontal: motor
• Orbitofrontal (above the lateral sulcus): smell
• Parietal: sensory (touch, pain, taste)
• Occipital: vision
• Auditory: mid brain
The Brain (contd.)
There are over 100 different types of neuron.

Neurons are arranged in functional and structural areas of the brain: about 10% are input/output, 90% are in the internal layers.

Discoveries are happening all the time…
The function of the endorestiform nucleus is still unknown, but it is located near the junction of the brain and the spinal cord, within the inferior cerebellar peduncle, an area that combines sensory and motor information to refine our balance, posture and fine motor control.

For this reason, Professor Paxinos theorises the new region could play a part in fine motor movements, and could aid in the search for treatments for diseases such as Parkinson's and motor neuron disease.
Neural Network

Simple Artificial Neuron
[Figure: two inputs feeding a single neuron; weight 3 on input 1, weight 2 on input 2, one output]
• Two inputs (input 1 and input 2) and one output only
• Weights: randomly chosen numbers (here 3 on input 1 and 2 on input 2)
• Threshold: also a random number (here 3)

How does it work?
• Multiply each input by its weight and add the results together:
• Input 1: 4 × 3 = 12
• Input 2: 2 × 2 = 4
• Total input = 16

How do we get the output?
• Is the total input larger than the threshold (3)?
• If so, the output is 1
• Otherwise the output is 0
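The worked example above can be sketched in a few lines of Python. The input values (4 and 2), weights (3 and 2) and threshold (3) are the numbers from the slide.

```python
# A minimal sketch of the single artificial neuron described above.

def neuron(inputs, weights, threshold):
    """Weighted sum of the inputs, then a hard threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

total_input = 4 * 3 + 2 * 2                  # 12 + 4 = 16
print(total_input)                           # 16
print(neuron([4, 2], [3, 2], threshold=3))   # 16 > 3, so the output is 1
```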
Two connected artificial neurons
[Figure: the left neuron's output (1) feeding the right neuron; right-hand weights 5 on input 1 and 3 on input 2]
• The left neuron computes as before and sends its output of 1 to the next neuron
• On the right, input 1 is 4 × 5 = 20 and input 2 is 1 × 3 = 3
• Total input is 20 + 3, i.e. 23
• 23 is higher than the threshold of 3
• So the output passes 1 on to the next neuron…
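The two-neuron chain above can be sketched the same way; the helper function is repeated so the snippet stands alone, and the values are the ones from the slide.

```python
# Chaining two artificial neurons: the first neuron's output becomes an
# input of the second.

def neuron(inputs, weights, threshold=3):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

left_out = neuron([4, 2], [3, 2])           # 16 > 3, so the left neuron fires
right_out = neuron([4, left_out], [5, 3])   # 4*5 + 1*3 = 23 > 3, fires again
print(left_out, right_out)                  # prints: 1 1
```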
What is a Perceptron?
A perceptron is a single-layer neural network; a multi-layer perceptron is called a Neural Network.

The perceptron consists of 4 parts:
1. Input values, or one input layer
2. Weights and bias
3. Net sum
4. Activation function
Perceptron – How does it work?
• All inputs (x) are multiplied by their weights (w)
• Add all the multiplied values to get the weighted sum
• We need weights and bias because:
• Weights show the strength of a particular node
• Bias allows us to shift the activation function curve up or down
• Apply the weighted sum to the correct activation function
• We need the activation function to map the input between the required values
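The steps above can be sketched as a perceptron forward pass. The step activation and the particular inputs, weights and bias are illustrative choices, not values from the lecture.

```python
# Sketch of the perceptron listed above: weighted sum plus bias, then a
# step activation function.

def step(z):
    """Step activation: maps the net sum to 0 or 1."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    z = sum(xi * wi for xi, wi in zip(x, w)) + b   # net sum (weights + bias)
    return step(z)                                  # activation function

# 1.0*0.4 + 0.5*(-0.2) - 0.1 = 0.2, which is >= 0, so the output is 1
print(perceptron([1.0, 0.5], w=[0.4, -0.2], b=-0.1))
```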
Where do we use the Perceptron?
The perceptron is usually used to classify data into two parts. It is also known as a Linear Binary Classifier.
Different Neural Architectures
• Feed-forward networks
• connections in one direction only
• input, hidden and output layers
• Fully connected networks
• every node is connected to every node
• symmetric or asymmetric connections
• self-connections allowed
Back Propagation Algorithm
1. Initialise weights
2. Present input and target
3. Calculate the actual output and the output error
4. From the last layer, work backwards, updating the weights
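The four steps above can be sketched as a small backpropagation trainer for a 2-2-1 sigmoid network on the XOR problem. The network size, learning rate and epoch count are illustrative choices for this sketch, not values from the lecture.

```python
# Backpropagation on a tiny 2-2-1 network with sigmoid activations.
import math
import random

random.seed(0)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_h, w_o):
    # Hidden activations; the third weight in each row acts as a bias.
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    out = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, out

def mse(data, w_h, w_o):
    return sum((t - forward(x, w_h, w_o)[1]) ** 2 for x, t in data) / len(data)

def train(data, epochs=10000, lr=0.5):
    # 1. Initialise weights with small random values
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for x, t in data:                          # 2. present input and target
            h, out = forward(x, w_h, w_o)          # 3. calculate actual output
            err_o = (t - out) * out * (1 - out)    #    ... and the output error
            # 4. from the last layer, work backwards, updating the weights
            err_h = [err_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                w_o[j] += lr * err_o * h[j]
                w_h[j][0] += lr * err_h[j] * x[0]
                w_h[j][1] += lr * err_h[j] * x[1]
                w_h[j][2] += lr * err_h[j]
            w_o[2] += lr * err_o
    return w_h, w_o

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w_h, w_o = train(xor)
print("MSE after training:", round(mse(xor, w_h, w_o), 4))
```

Note that, as the slides warn later, training may not converge to a good solution from every random initialisation.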
Perceptron Network
[Figure: a three-layer network for classifying traffic-light signals, with Red, Amber and Green input units feeding hidden and output units]
• 3 layers: input, hidden, output
• Train with examples, i.e. supervised learning
• Update the weights until the output matches the expected outcome:
• If there is too much output, decrease the weights
• If there is too little output, increase the weights
• Work backwards, right to left, to find the unit to blame: back propagation
• Design consideration: choose the number of neurons per layer
Multi-Layer Perceptron Guidelines
• Any continuous function can be approximated with two hidden layers
• Empirically, more layers are sometimes better
• More layers require more training time
• A single hidden layer is usually OK; sometimes 2 or 3
• The input and output layers are determined by the task
• The cerebral cortex is 6 layers deep
• But note Deep Learning: new training methods enable big neural nets
Neural Network Applications
The properties of neural networks define where they are useful:
• They can learn complex mappings from inputs to outputs, based solely on a representative set of samples
• They are hard to analyse: firm predictions about neural network behaviour are difficult, so they are unsuitable for safety-critical applications
• They are noise tolerant
Neural Network Applications (contd.)
• Computer vision: image recognition, sonar or radar target recognition, written text recognition, image generation
• Natural language processing: text classification, sentence parsing, text generation
• Financial forecasting
• Condition monitoring
• etc.
Neural Network Applications (contd.)
Example Applications
Engine Management
• Engine behaviour is influenced by a large number of parameters, such as:
• temperature at various points
• fuel/air mixture
• lubricant viscosity
Signature Recognition
• All signatures are different
• There are structural similarities which are difficult to quantify
• Neural networks can recognise features of signatures with a high level of accuracy
• They can consider the speed at which a signature was written, as well as the shape
Neural Network Applications (contd.)
Example Applications:
Stock Market Prediction
• "Technical trading" refers to trading based solely on known statistical parameters (e.g. previous price)
• Neural networks have been used to attempt to predict changes in prices
• The success of neural networks here is difficult to assess due to secrecy
Mortgage Assessment
• Neural networks can be used to assess lending risks
• Artificial networks have produced a 12% reduction in errors compared with human experts
Example ANN - Recurrent/Hopfield
• Links take the output back to the input
• Uses a sigmoid function (not a threshold/step)
• Used for associative memory

SOM/Hebbian-Kohonen
• If neurons are activated at the same time, increase the weights (reinforce)
• If activity is very low, decrease the weights (forget)
• Good for clustering
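The reinforce/forget rule above can be sketched as a single weight update. The learning and decay rates here are illustrative values, not from the lecture.

```python
# Sketch of the Hebbian rule described above: strengthen a weight when both
# neurons are active together, and let it slowly decay otherwise.

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    if pre > 0 and post > 0:
        return w + lr * pre * post   # neurons active together: reinforce
    return w * (1 - decay)           # activity very low: forget a little

w = 0.5
w = hebbian_update(w, pre=1, post=1)   # both active: weight grows to 0.6
w = hebbian_update(w, pre=0, post=1)   # low activity: weight decays slightly
print(round(w, 4))                      # prints: 0.594
```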
Example ANN - Radial Basis Function (RBF) Networks
[Figure: an input layer feeding an RBF hidden layer and a linear output layer with red, amber and green output units]
• Similar to a 2-layer Multi-Layer Perceptron
• Uses a 'radial basis function' based on the normal distribution
• The closer the input is to the norm, the higher the output (it produces the same output for inputs the same distance from the centre, hence 'radial', i.e. radius-based)
• Used for classification, pattern recognition, discrimination, process modelling
• Good: rapid training
• Bad: not good for problems with many outputs; the central norm of the function is difficult to calculate
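A single Gaussian radial basis unit can be sketched as follows; the centre and width values are illustrative.

```python
# Gaussian radial basis function: the closer the input is to the centre,
# the higher the output; inputs at equal distance give equal output.
import math

def rbf(x, centre, width=1.0):
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-dist2 / (2 * width ** 2))

centre = [0.0, 0.0]
print(rbf([0.0, 0.0], centre))           # at the centre: maximum response 1.0
print(round(rbf([1.0, 0.0], centre), 4)) # distance 1 in any direction ...
print(round(rbf([0.0, 1.0], centre), 4)) # ... gives the same response
```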
Example ANN - Competitive Learning/Kohonen Self-Organising Feature Maps (SOFM)
• Lateral connections
• Competitive learning:
• Increase the strength of near neighbours (same value)
• Decrease the strength of far neighbours (opposite value)
• Good for: clustering
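One competitive-learning step can be sketched as below. The 1-D neighbourhood and the particular learning rates are illustrative choices for this sketch; far units are nudged away from the input, following the "opposite value" rule above.

```python
# One competitive-learning step: the unit closest to the input wins and is
# pulled towards it; near neighbours are strengthened, far units weakened.

def som_step(weights, x, lr=0.5, near_lr=0.25, far_lr=-0.05):
    # Find the best-matching unit (smallest squared distance to the input).
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    win = dists.index(min(dists))
    for j, w in enumerate(weights):
        if j == win:
            rate = lr            # the winner moves towards the input
        elif abs(j - win) == 1:
            rate = near_lr       # near neighbours are strengthened too
        else:
            rate = far_lr        # far neighbours are pushed slightly away
        weights[j] = [wi + rate * (xi - wi) for wi, xi in zip(w, x)]
    return win

units = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]
print(som_step(units, [0.9, 0.1]))   # unit 1 wins and moves towards the input
```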
The ANN Lifecycle
1. Design of a neural architecture appropriate for the task
2. Preparation of training and validation data
3. Training
4. Testing (based on the generalisation properties of NNs)

Summary 1
• NNs are based on a highly simplified model of neurons in the human brain
• Neurons are joined together by weighted connections; the weighted sum passes through an activation function
• Weights hold the "knowledge" learned by the network
• Training algorithms allow network weights to be altered based solely upon training patterns
• Characterised by:
• High connectivity
• Adaptability (learning)
• NNs can generalise over untrained data and are robust to noisy input
Summary 2
Neural Networks
Advantages:
• learn from data/examples (induction)
• fast response time (operation)
• model non-linear and complex relationships; the target function doesn't need to be programmed, it is learned instead
• noise tolerant
Disadvantages:
• may take long training periods
• may not converge to a good solution
• random initial conditions
• "black box" technique: the internal structure is difficult to comprehend
Practical Work – This week
1. Complete the activity: Neural Networks
2. Complete the Build Your Own Neural Network coding tutorial

Note: Outputs from all activities should be uploaded to your ePortfolio. Please do let me know if you struggle to find a group to work with.
Thank you
WE ARE THE LIFE CHANGING
UNIVERSITY OF SUNDERLAND

Kate MacFarlane
[email protected]
