
Source: https://www.researchgate.net/publication/312086961

An introduction to neural networks (1)
Presentation · November 2013
DOI: 10.13140/RG.2.2.16523.69923

Author: Abdullatif Baba, Kuwait College of Science and Technology (Private University)


Artificial Intelligence
An Introduction to Artificial Neural Networks
(1)

Machine Learning

• The most popular approaches to machine learning are artificial neural networks and genetic algorithms.
• Machine learning involves adaptive mechanisms that enable computers to learn from experience, by example and by analogy.
• Learning capabilities can improve the performance of an intelligent system over time.

Biological Neural Network
• A neural network can be defined as a model of
reasoning based on the human brain.
• The brain consists of a densely interconnected set of
neurons.

Artificial Neural Nets Model The Brain

Primitive concepts
• An ANN consists of a number of very simple and highly interconnected processors, also called neurons.
• The neurons are connected by weighted links that pass signals from one neuron to another. Each neuron receives a number of input signals through its connections.
• Each neuron produces a single output signal, which is transmitted through the neuron's outgoing connection. This connection splits into a number of branches that all carry the same signal (the signal is not divided among the branches in any way). The outgoing branches terminate at the incoming connections of other neurons in the network.

How does an artificial neural network ‘learn’?
• The neurons are connected by links, and each link has a numerical weight
associated with it.
• Weights are the basic means of long-term memory in ANNs. They express the
strength, or in other words importance, of each neuron input.
• A neural network ‘learns’ through repeated adjustments of these weights.

How to build an ANN?

• Choose the network architecture: decide how many neurons are to be used, and how the neurons are to be connected to form a network.
• Choose the learning algorithm: decide which learning algorithm will be used to train the ANN.
• Train the neural network: initialise the weights of the network and update the weights from a set of training examples.

The neuron as a simple computing element
A neuron
• Receives several signals from its input links
• Computes a new activation level
• Sends it as an output signal through the output links.
The input signal can be raw data or outputs of other neurons.
The output signal can be either a final solution to the problem or an input to other
neurons.

How does the neuron determine its output?
The neuron computes the weighted sum of the input signals and compares the result with a predefined threshold value, θ:

  X = Σ(i=1..n) xi·wi

• If the net input X is less than the threshold, the neuron output is -1.
• If the net input X is greater than or equal to the threshold, the neuron becomes activated and its output attains the value +1.

In other words, the neuron uses the following transfer, or activation, function:

  Y = +1 if X ≥ θ,   Y = -1 if X < θ
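
As a small illustration (our own sketch, not from the original slides), this rule can be written directly in Python:

def neuron_output(inputs, weights, theta):
    # net input: weighted sum of the input signals
    x = sum(xi * wi for xi, wi in zip(inputs, weights))
    # sign-type activation: +1 once the net input reaches the threshold, otherwise -1
    return 1 if x >= theta else -1

print(neuron_output([1, 0], [0.5, 0.3], theta=0.4))   # +1, since 0.5 >= 0.4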

The activation function
This type of activation function is called a sign function. Therefore, the actual output of a neuron with a sign activation function can be represented as:

  Y = sign( Σ(i=1..n) xi·wi - θ )

Four common choices, the step, sign, linear and sigmoid functions, are illustrated here.

The step and sign activation functions, also called "hard-limit functions", are often used in decision-making neurons for:
• classification
• pattern recognition tasks.
The activation function
The sigmoid function transforms the input, which can have any value between plus and minus infinity, into a reasonable value in the range between 0 and 1:

  Y = 1 / (1 + e^(-X))

Neurons with this activation function are used in back-propagation networks.

The linear activation function provides an output equal to the neuron's net weighted input:

  Y = X

Neurons with the linear function are often used for linear approximation.
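
For reference, the four functions can be written in a few lines of Python (our own sketch; x denotes the net weighted input):

import math

def step(x):    return 1 if x >= 0 else 0           # hard limiter with outputs 0 / 1
def sign(x):    return 1 if x >= 0 else -1          # hard limiter with outputs -1 / +1
def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))   # squashes any input into (0, 1)
def linear(x):  return x                            # output equals the net weighted input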
The perceptron
The model consists of a linear combiner followed by a hard limiter. The weighted sum
of the inputs is applied to the hard limiter, which produces an output equal to +1 if its
input is positive and -1 if it is negative.

The perceptron (The aim)
• The aim of the perceptron is to classify inputs x1, x2, ..., xn into one of two classes, A1 or A2. In this case the n-dimensional space is divided by a hyperplane into two decision regions.
• The hyperplane is defined by the linearly separable function:

  Σ(i=1..n) xi·wi - θ = 0

• In the case of two inputs, x1 and x2, the decision boundary takes the form of a straight line:

  x1·w1 + x2·w2 - θ = 0

• Point 1, which lies above the boundary line, belongs to class A1; Point 2, which lies below the line, belongs to class A2.
• The threshold θ can be used to shift the decision boundary.

How does the perceptron learn its classification tasks ?
The initial weights and the threshold are randomly assigned, usually in the range [-0.5, 0.5].

Then, small adjustments are made to the weights to reduce the difference between the actual and desired outputs of the perceptron.

At iteration p, the error is given by the difference between the desired and actual outputs:

  e(p) = yd(p) - y(p)

Thus the new weight at iteration (p + 1) is given by:

  wi(p+1) = wi(p) + α · xi(p) · e(p)

where α is the learning rate, a positive constant less than unity.
Training algorithm for classification tasks.
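
As an illustrative sketch (our own, with assumed parameter defaults), the training procedure can be outlined in Python; it follows the four usual steps of initialisation, activation, weight training, and iteration:

import random

def train_perceptron(samples, alpha=0.1, theta=0.2, max_epochs=100):
    # samples: list of (inputs, desired_output); step activation with outputs 0 / 1
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]    # Step 1: initialisation
    for _ in range(max_epochs):
        converged = True
        for x, yd in samples:
            net = sum(xi * wi for xi, wi in zip(x, w))
            y = 1 if net >= theta else 0                 # Step 2: activation
            e = yd - y
            if e != 0:
                converged = False
                # Step 3: weight training (perceptron learning rule)
                w = [wi + alpha * xi * e for xi, wi in zip(x, w)]
        if converged:                                    # Step 4: iterate until convergence
            return w
    return w

# Example: learn the logical AND of two inputs
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(samples))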

Operational Amplifier: Remember!
Inverting amplifier: the operational amplifier has a very high impedance between its input terminals; for a 741, about 2 MΩ. Rin = ∞ in the ideal case.

No current flows into the amplifier, so there is no potential difference between point X and earth; X is therefore a virtual earth, and VX = 0.

So:

  Vin - VX = I1·R1   →   Vin = I1·R1        (1)

The same current I1 flows through R2, thus:

  VX - Vout = I1·R2   →   -Vout = I1·R2 = (Vin/R1)·R2        (2)

Therefore:

  Vout/Vin = -R2/R1
Operational Amplifier: Remember!

Non-inverting amplifier: Rin = ∞ in the ideal case, so no current flows into the amplifier and there is no potential difference between point X and Vin; thus VX = Vin.

With the current I flowing through the feedback divider:

  VX = I·R1
  Vout = I·(R1 + R2)

So:

  Vout/Vin = (R1 + R2)/R1 = 1 + R2/R1
Operational Amplifier: Remember!

A single stage can combine an inverting input Vin- (through R1, with feedback resistor R2) and a non-inverting input Vin+ (applied through the divider R3, R4 to the node Y). Rin = ∞.

Inverting op-amp (Vin-):

  Vo/Vin- = -R2/R1

Non-inverting op-amp (seen from node Y):

  Vo/VY = 1 + R2/R1

Non-inverting op-amp (Vin+), with VY set by the divider:

  VY = (R4/(R3 + R4))·Vin+

  Vo = (1 + R2/R1)·(R4/(R3 + R4))·Vin+

  Vo/Vin+ = (1 + R2/R1)·(R4/(R3 + R4))
Electronic neuron model design using a saturable operational amplifier

• A simple op-amp based hardware neuron model.
• Positive and negative weights are easy to implement: positive weights wi+ are associated with the positive (non-inverting) inputs xi+, and negative weights wi- with the negative (inverting) inputs xi-.
• Us is the supply voltage.
Electronic neuron model design using a saturable operational amplifier

• The activation signal S is the sum of the inhibitory and excitatory activations:

  S = Σ(i=1..n) wi-·xi- + Σ(i=1..n) wi+·xi+

• The neuron output is y = f(S).

Electronic neuron model design using a saturable operational amplifier
Example:

Design an artificial neuron model using a saturable op-amp if the following weights are to be implemented: w1 = -0.8, w2 = 0.7 (supposing R0 = RF = 1 kΩ).

For the negative weight (inverting input):

  w1 = -Vo/x- = -RF/R1-   →   R1- = RF/0.8 = 1.25 kΩ

For the positive weight (non-inverting input):

  w2 = +Vo/x+ = (1 + RF/R1-)·(R0/(R1+ + R0))   →   R1+ ≈ 1.57 kΩ
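
A quick numeric check of these resistor values (our own sketch, using the gain formulas derived above):

RF = R0 = 1.0            # kΩ
w1, w2 = -0.8, 0.7       # target weights

R1_neg = RF / abs(w1)                         # from |w1| = RF / R1-
R1_pos = (1 + RF / R1_neg) * R0 / w2 - R0     # from w2 = (1 + RF/R1-) * R0 / (R1+ + R0)

print(R1_neg, round(R1_pos, 2))               # 1.25 kΩ and ≈ 1.57 kΩ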

Electronic neuron model design using a saturable operational amplifier
Example (continued):

The activation signal is S = Σ wi-·xi- + Σ wi+·xi+, and the output is y = f(S), where f is the saturating activation function of the op-amp.

The output saturates near the supply rails. Typically:

  F1 = Us+ - 1 = 14 V
  F2 = Us- + 1 = -14 V

for an op-amp with supply voltages Us+ = 15 V and Us- = -15 V.
Example (train a perceptron to perform basic logical operations)
The operation AND
Suppose the threshold is θ = 0.2 and the learning rate is α = 0.1.
• The perceptron must be trained to classify the input patterns.
• The perceptron is activated by the sequence of four input patterns representing an epoch (a step activation function is used).
• The perceptron weights are updated after each activation.
• This process is repeated until all the weights converge to a uniform set of values.

Example (train a perceptron to perform basic logical operations)
AND operation

Initial weights:
  w1(0) = 0.3
  w2(0) = -0.1
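
This run can be reproduced step by step in Python (our own sketch; θ = 0.2, α = 0.1, step activation with outputs 0 / 1):

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
w, theta, alpha = [0.3, -0.1], 0.2, 0.1

for epoch in range(100):            # each pass over the four patterns is an epoch
    errors = 0
    for x, yd in samples:
        y = 1 if x[0] * w[0] + x[1] * w[1] >= theta else 0
        e = yd - y
        if e != 0:
            errors += 1
            w = [wi + alpha * xi * e for wi, xi in zip(w, x)]    # update after each activation
    if errors == 0:                 # all patterns classified correctly
        break

print(w)                            # converges to [0.1, 0.1]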

Example (train a perceptron to perform basic logical operations)
In a similar manner, the perceptron can learn the operation OR. However, a single-layer perceptron cannot be trained to perform the operation Exclusive-OR. The following geometry represents the AND, OR and Exclusive-OR functions as two-dimensional plots based on the values of the two inputs. Points in the input space where the function output is 1 are indicated by black dots, and points where the output is 0 are indicated by white dots.

If we substitute values for the weights w1 and w2 and the threshold θ, the region below the boundary line, where the output is 0, is given by:

  x1·w1 + x2·w2 - θ < 0

and the region above this line, where the output is 1, is given by:

  x1·w1 + x2·w2 - θ ≥ 0
Multilayer neural networks
Typically, the network consists of:
• an input layer of source neurons,
• at least one hidden layer of computational neurons,
• an output layer of computational neurons.

The input layer accepts input signals from the outside world and redistributes these signals to all neurons in the hidden layer; the input layer does not actually process input patterns.
The output layer accepts output signals from the hidden layer and establishes the output pattern of the entire network.
Neurons in the hidden layer detect the features; the weights of these neurons represent the features hidden in the input patterns.
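
To make the layered structure concrete, here is a minimal forward-pass sketch in Python (our own illustration; the layer sizes and random weights are arbitrary assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x  = np.array([1.0, 0.0])            # input layer: redistributes signals only
W1 = rng.uniform(-0.5, 0.5, (2, 3))  # input -> hidden weights
t1 = rng.uniform(-0.5, 0.5, 3)       # hidden-layer thresholds
W2 = rng.uniform(-0.5, 0.5, (3, 1))  # hidden -> output weights
t2 = rng.uniform(-0.5, 0.5, 1)       # output-layer threshold

h = sigmoid(x @ W1 - t1)             # hidden layer detects features
y = sigmoid(h @ W2 - t2)             # output layer forms the network's output
print(y)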
Multilayer neural networks

• Commercial ANNs incorporate three and sometimes four layers, including


one or two hidden layers. Each layer can contain from 10 to 1000
neurons.

• Experimental neural networks may have five or even six layers, including
three or four hidden layers, and utilise millions of neurons.

• Most practical applications use only three layers, because each additional
layer increases the computational burden exponentially.

Back-propagation neural network
The learning algorithm has two phases:
• First, a training input pattern is presented to the network input layer. The network then propagates the input pattern from layer to layer until the output pattern is generated by the output layer.
• If this pattern is different from the desired output, an error is calculated and then propagated backwards through the network, from the output layer to the input layer. The weights are modified as the error is propagated.
Forward-propagation phase
Every neuron in one layer is connected to every neuron in the adjacent forward layer. Each neuron computes its net weighted input as:

  X = Σ(i=1..n) xi·wi - θ

where n is the number of inputs and θ is the threshold applied to the neuron.

This value is then passed through the activation function. Neurons in the back-propagation network use a sigmoid activation function:

  Y = 1 / (1 + e^(-X))

• The derivative of this function is easy to compute: dY/dX = Y·(1 - Y).
• It also guarantees that the neuron output is bounded between 0 and 1.

The learning law used in the back-propagation networks
To propagate error signals, we start at the output layer and work backwards to the hidden layer.

Updating weights at the output layer:

  w_jk(p+1) = w_jk(p) + Δw_jk(p)
  Δw_jk(p) = α · y_j(p) · δ_k(p)

where Δw_jk(p) is the weight correction, y_j(p) is the output of neuron j in the hidden layer, and δ_k(p) is the error gradient at neuron k in the output layer at iteration p.
The learning law used in the back-propagation networks
The error gradient is determined as the derivative of the activation function multiplied by the error at the neuron output:

  δ_k(p) = [∂y_k(p)/∂X_k(p)] · e_k(p)

where y_k(p) is the output of neuron k at iteration p, and X_k(p) is the net weighted input to neuron k at the same iteration. For the sigmoid activation function this gives:

  δ_k(p) = y_k(p) · [1 - y_k(p)] · e_k(p)

where the error is:

  e_k(p) = y_d,k(p) - y_k(p)
The learning law used in the back-propagation networks
Updating weights at the hidden layer

The weight correction for the hidden layer is given by:

  w_ij(p+1) = w_ij(p) + Δw_ij(p)
  Δw_ij(p) = α · x_i(p) · δ_j(p)

The error gradient at neuron j in the hidden layer is:

  δ_j(p) = y_j(p) · [1 - y_j(p)] · Σ(k=1..l) δ_k(p) · w_jk(p)

where l is the number of neurons in the output layer, and:

  y_j(p) = 1 / (1 + e^(-X_j(p))),   X_j(p) = Σ(i=1..n) x_i(p) · w_ij(p) - θ_j

where n is the number of neurons in the input layer.
Example
• Consider the three-layer back-propagation network shown here. Suppose that the network is required to perform the logical operation Exclusive-OR.
• Neurons 1 and 2 in the input layer accept inputs x1 and x2, respectively, and redistribute these inputs to the neurons in the hidden layer without any processing.
• The effect of the threshold applied to a neuron in the hidden or output layer is represented by its weight, θ, connected to a fixed input equal to -1.

Example
The initial weights and threshold levels are set randomly; the sketch below assumes the values commonly used in this textbook example.

Consider a training set where inputs x1 and x2 are equal to 1 and the desired output yd,5 is 0.
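
A Python sketch of this calculation (our own illustration; the initial weights and thresholds below are an assumption, the values commonly used in this textbook example):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial weights and thresholds
w13, w14, w23, w24 = 0.5, 0.9, 0.4, 1.0   # input -> hidden
w35, w45 = -1.2, 1.1                      # hidden -> output
t3, t4, t5 = 0.8, -0.1, 0.3               # thresholds (fixed input = -1)
x1, x2, yd5 = 1, 1, 0                     # training example: x1 = x2 = 1, yd,5 = 0
alpha = 0.1

# Forward pass
y3 = sigmoid(x1 * w13 + x2 * w23 - t3)    # ≈ 0.5250
y4 = sigmoid(x1 * w14 + x2 * w24 - t4)    # ≈ 0.8808
y5 = sigmoid(y3 * w35 + y4 * w45 - t5)    # ≈ 0.5097
e = yd5 - y5                              # ≈ -0.5097

# Back-propagation: error gradient at the output neuron and its corrections
d5 = y5 * (1 - y5) * e                    # ≈ -0.1274
dw35, dw45, dt5 = alpha * y3 * d5, alpha * y4 * d5, alpha * (-1) * d5

# Error gradients at the hidden neurons and their corrections
d3 = y3 * (1 - y3) * d5 * w35             # ≈ 0.0381
d4 = y4 * (1 - y4) * d5 * w45             # ≈ -0.0147
dw13, dw23, dt3 = alpha * x1 * d3, alpha * x2 * d3, alpha * (-1) * d3
dw14, dw24, dt4 = alpha * x1 * d4, alpha * x2 * d4, alpha * (-1) * d4

print(round(y5, 4), round(e, 4))          # 0.5097 -0.5097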

Example
The actual outputs of neurons 3 and 4 in the hidden layer are calculated as:

  y3 = sigmoid(x1·w13 + x2·w23 - θ3)
  y4 = sigmoid(x1·w14 + x2·w24 - θ4)

Now the actual output of neuron 5 in the output layer is determined as:

  y5 = sigmoid(y3·w35 + y4·w45 - θ5)

Thus, the following error is obtained:

  e = y_d,5 - y5


Example
The next step is weight training. To update the weights and threshold levels in our network, we propagate the error, e, from the output layer backwards to the input layer.

First, we calculate the error gradient for neuron 5 in the output layer:

  δ5 = y5·(1 - y5)·e

Then we determine the weight corrections, assuming that the learning rate parameter α = 0.1:

  Δw35 = α·y3·δ5,   Δw45 = α·y4·δ5,   Δθ5 = α·(-1)·δ5
Example
Next we calculate the error gradients for neurons 3 and 4 in the hidden layer:

  δ3 = y3·(1 - y3)·(δ5·w35)
  δ4 = y4·(1 - y4)·(δ5·w45)

We then determine the weight corrections:

  Δw13 = α·x1·δ3,  Δw23 = α·x2·δ3,  Δθ3 = α·(-1)·δ3
  Δw14 = α·x1·δ4,  Δw24 = α·x2·δ4,  Δθ4 = α·(-1)·δ4
Example
Finally, we update all weights and threshold levels in the network, adding each correction to the corresponding weight or threshold:

  w(p+1) = w(p) + Δw(p),   θ(p+1) = θ(p) + Δθ(p)
Example
The following set of final weights and threshold levels satisfied the chosen error
criterion:

Final results of three-layer network learning: the logical operation Exclusive-OR

Example

The training process is repeated until the sum of squared errors is less than 0.001.
Example
The network in the following figure is also trained to perform the Exclusive-OR
operation (Haykin, 2008).

The positions of the decision boundaries constructed by neurons 3 and 4 in the hidden layer
are shown in (a) and (b), respectively. Neuron 5 in the output layer performs a linear
combination of the decision boundaries formed by the two hidden neurons, as shown in
Figure (c).
The back-propagation training algorithm
What is a Bias in Neural Networks?
A bias value allows the activation function to be shifted to the left or right, which can be critical for successful learning.

• Changing the weight w0 changes the "steepness" of the sigmoid.
• If we want the network to output 0 when x is 2, just changing the steepness of the sigmoid is not helpful; the entire curve has to be shifted to the right.
What is a Bias in Neural Networks?
Thus the output of the network becomes sig(w0·x + w1·1.0), where the bias weight w1 multiplies a fixed input of 1.0.

Having a weight of -5 for w1 shifts the curve to the right, which allows us to have a network that outputs 0 when x is 2.
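
A quick numeric check (our own sketch):

import math

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

w0, w1 = 1.0, -5.0
print(sig(w0 * 2 + w1 * 1.0))   # with the bias: sig(-3) ≈ 0.047, close to 0 at x = 2
print(sig(w0 * 2))              # without the bias: sig(2) ≈ 0.88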
Homework
Determine all the weights and thresholds of the following network using the back-propagation training algorithm, for only one further iteration.
Given the inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99. Suppose α = 0.1.
The thresholds (connected to fixed inputs of -1) are:
  θh1 = 0.3,  θh2 = 0.9  (hidden layer)
  θO1 = 0.1,  θO2 = -0.2  (output layer)
Train ANN using MATLAB

https://youtu.be/2Z4959acjKs (a 4-minute video)
