
CH 9: Connectionist Models

This document provides an overview of connectionist models and neural networks. It introduces Hopfield networks, which are a type of recurrent neural network that can learn and recall memories. Hopfield networks have one layer of neurons equal to the size of the input/output patterns. They are trained to store patterns and can recall corrupted or partial patterns by converging to the closest stored memory. The document also discusses learning in neural networks, distributed representation, and the relationship between connectionist and symbolic AI approaches.

Uploaded by AMI CHARADAVA

Ch 9: Connectionist Models

Introduction to Hopfield network
Learning in neural network
Application of neural network
Recurrent network
Distributed representation
Connectionist and symbolic AI

Prepared by: Prof. Ami Charadava


Neural network:

Neural networks are yet another research area in AI, inspired by the natural neural network of the human nervous system.

What are Artificial Neural Networks (ANNs)?
Dr. Robert Hecht-Nielsen, inventor of the first neurocomputer, defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."
Basic Structure of ANNs

The idea of ANNs is based on the belief that the working of the human brain can be imitated using silicon and wires in place of living neurons and dendrites, by making the right connections.

The human brain is composed of about 86 billion nerve cells called neurons, each connected to thousands of other cells by axons. Stimuli from the external environment, or inputs from sensory organs, are accepted by dendrites. These inputs create electric impulses, which quickly travel through the neural network. A neuron can then either pass the message on to other neurons or stop it.

ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain. The nodes are connected by links and interact with each other. Each node can take input data and perform simple operations on it; the result of these operations is passed to other nodes. The output at each node is called its activation or node value.

Each link is associated with a weight. ANNs are capable of learning, which takes place by altering the weight values.

The following illustration shows a simple ANN.
Relationship between biological neural network and artificial neural network:
Working of ANN:
Input Layer:
As the name suggests, it accepts inputs in several different formats provided by the programmer.
Hidden Layer:
The hidden layer lies between the input and output layers. It performs all the calculations to find hidden features and patterns.
Output Layer:
The input goes through a series of transformations in the hidden layers, finally producing the output, which is conveyed through this layer.

The artificial neural network takes the inputs, computes their weighted sum, and adds a bias. This computation is represented in the form of a transfer function. The weighted total is passed as input to an activation function, which decides whether a node should fire or not. Only the nodes that fire contribute to the output layer. Various activation functions are available, chosen according to the task being performed.
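The computation just described (weighted sum plus bias, passed through an activation function) can be sketched as a minimal single-neuron example. The sigmoid choice and the weights, bias, and inputs below are illustrative assumptions, not values from the slides:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes the total into (0, 1)

# Example: a neuron with two inputs and made-up weights
y = neuron_output([0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
```

With zero total input the sigmoid returns exactly 0.5, i.e. the neuron is "undecided"; learning shifts the weights so the total lands clearly on one side.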
How do artificial neural networks work?

An artificial neural network can best be represented as a weighted directed graph, where the artificial neurons form the nodes and the connections between neuron outputs and neuron inputs are the directed edges with weights.

The artificial neural network receives the input signal from an external source in the form of a pattern or image, represented as a vector. These inputs are denoted by the notation x(n) for each of the n inputs.

Each input is then multiplied by its corresponding weight (these weights are the details utilized by the artificial neural network to solve a specific problem). In general terms, the weights represent the strength of the interconnections between neurons inside the network. All the weighted inputs are summed inside the computing unit.

If the weighted sum is zero, a bias is added to make the output non-zero, or otherwise to scale up the system's response. The bias has a fixed input of 1 with its own weight. The total of the weighted inputs can range from 0 to positive infinity; to keep the response within the limits of the desired value, a maximum value is benchmarked and the total is passed through an activation function.

The activation function is a transfer function used to achieve the desired output. There are different kinds of activation functions, primarily either linear or non-linear. Some commonly used activation functions are the binary, linear, and tan-hyperbolic sigmoidal activation functions.
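The three activation functions named above can be sketched as follows. This is a minimal illustration; the threshold and slope parameters are assumptions added for the example:

```python
import math

def binary_step(x, threshold=0.0):
    """Binary activation: the node fires (outputs 1) only at or above the threshold."""
    return 1 if x >= threshold else 0

def linear(x, slope=1.0):
    """Linear activation: output is simply proportional to the input."""
    return slope * x

def tanh_sigmoid(x):
    """Tan-hyperbolic sigmoidal activation: squashes any input into (-1, 1)."""
    return math.tanh(x)
```

The binary step makes a hard fire/no-fire decision, the linear function passes the signal through unchanged in shape, and tanh gives a smooth, bounded non-linearity.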
Advantages of Artificial Neural Network (ANN)

Parallel processing capability:
Artificial neural networks can perform more than one task simultaneously.

Storing data on the entire network:
Unlike in traditional programming, data is stored across the whole network, not in a central database. The disappearance of a few pieces of data in one place does not prevent the network from working.

Capability to work with incomplete knowledge:
After training, an ANN may produce output even with inadequate data. The loss of performance depends on how significant the missing data is.

Having a memory distribution:
For an ANN to be able to adapt, it is important to determine suitable examples and to train the network towards the desired output by showing these examples to it. The success of the network is directly proportional to the chosen instances; if the event cannot be shown to the network in all its aspects, the network may produce false output.

Having fault tolerance:
Corruption of one or more cells of an ANN does not prevent it from generating output; this feature makes the network fault-tolerant.
Disadvantages of Artificial Neural Network:

Assurance of proper network structure:
There is no particular guideline for determining the structure of an artificial neural network. The appropriate structure is found through experience and trial and error.

Unrecognized behavior of the network:
This is the most significant issue with ANNs. When an ANN produces a solution, it provides no insight into why or how, which decreases trust in the network.

Hardware dependence:
Artificial neural networks require processors with parallel processing power, in accordance with their structure; their realization therefore depends on suitable hardware.

Difficulty of showing the problem to the network:
ANNs work only with numerical data, so problems must be converted into numerical values before being introduced to the network. The representation chosen here directly affects the performance of the network, and depends on the user's ability.

The duration of the network is unknown:
Training stops when the network's error is reduced to a certain value, but this value does not guarantee optimum results.
Types of Artificial Neural Networks
There are two artificial neural network topologies: FeedForward and FeedBack.

FeedForward ANN
In this ANN, the information flow is unidirectional. A unit sends information to other units from which it receives no information; there are no feedback loops. FeedForward ANNs are used in pattern generation, recognition, and classification. They have fixed inputs and outputs.

FeedBack ANN
Here, feedback loops are allowed. FeedBack ANNs are used in content-addressable memories.
Working of ANNs

In the topology diagrams shown, each arrow represents a connection between two neurons and indicates the pathway for the flow of information. Each connection has a weight, a number that controls the signal between the two neurons.

If the network generates a "good or desired" output, there is no need to adjust the weights. However, if the network generates a "poor or undesired" output, i.e. an error, then the system alters the weights in order to improve subsequent results.
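The error-driven weight adjustment described above can be sketched with a perceptron-style update rule. This is a minimal illustration; the learning rate, the training data (logical AND), and the number of passes are assumptions made up for the example:

```python
def train_step(weights, bias, inputs, target, lr=0.1):
    """One error-driven update: leave the weights alone if the output is
    correct, otherwise nudge each weight in the direction that reduces error."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    output = 1 if total >= 0 else 0        # binary threshold unit
    error = target - output                # zero when the desired output is produced
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# Train a single unit to realize logical AND on 0/1 inputs
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):                        # a few passes over the training set
    for x, t in data:
        w, b = train_step(w, b, x, t)
```

After a handful of passes the weights stop changing, because every example already produces the desired output and the error term is zero.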
Hopfield network:

A Hopfield network is a special kind of neural network whose response differs from that of other neural networks. Its output is calculated by a converging iterative process. It has just one layer of neurons, whose number equals the size of the input and output, which must be the same.

When such a network is used to recognize, for example, digits, we present a list of correctly rendered digits to the network. Subsequently, the network can transform a noisy input into the corresponding perfect output.

In 1982, John Hopfield introduced this artificial neural network to store and retrieve memory like the human brain. Here, a neuron is either in the on or the off state. The state of a neuron (on +1 or off 0) is updated depending on the input it receives from the other neurons.
Hopfield network (continued):

A Hopfield network is first trained to store a number of patterns or memories. Afterward, it can recognize any of the learned patterns when presented with partial or even corrupted data about that pattern: it eventually settles down and restores the closest stored pattern. Thus, similar to the human brain, the Hopfield model exhibits stability in pattern recognition.

A Hopfield network is a single-layered, recurrent network in which the neurons are fully connected, i.e., each neuron is connected to every other neuron. If there are two neurons i and j, there is a connection weight wij between them, which is symmetric: wij = wji. There is zero self-connectivity: wii = 0.

For example, three neurons i = 1, 2, 3 with values Xi = ±1 have connection weights Wij.
Update neuron input:
A Hopfield network operates in a discrete fashion: the input and output patterns are discrete vectors, either binary (0, 1) or bipolar (+1, -1) in nature. The network has symmetric weights with no self-connections, i.e., wij = wji and wii = 0.

Architecture
Following are some important points to keep in mind about the discrete Hopfield network:
This model consists of neurons with one inverting and one non-inverting output.
The output of each neuron is fed as input to the other neurons, but not back to itself.
Weight/connection strength is represented by wij.
Connections can be excitatory as well as inhibitory: excitatory if the output of a neuron is the same as its input, otherwise inhibitory.
Weights should be symmetric, i.e., wij = wji.
We have two different approaches to updating the nodes:

Synchronously:
In this approach, all the nodes are updated simultaneously at each time step. The weighted input sum of every neuron is calculated without updating any neuron value; then all neurons are set to their new values according to their weighted input sums.

Asynchronously:
In this approach, at each point in time one node is updated, chosen randomly or according to some rule: a neuron is chosen, its weighted input sum is calculated, and it is updated immediately. Asynchronous updating is more biologically realistic.
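The storage and asynchronous-update procedures above can be sketched for bipolar (+1/-1) patterns. The Hebbian storage rule used here is the standard choice for discrete Hopfield networks; the six-neuron pattern and the fixed sweep order (one valid "rule" for choosing which neuron to update next) are assumptions made for the example:

```python
def store(patterns):
    """Hebbian rule: w_ij = sum over patterns of x_i * x_j, with w_ii = 0."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]   # symmetric by construction: w_ij == w_ji
    return w

def recall(w, state, sweeps=5):
    """Asynchronous update: neurons are updated one at a time (here in a fixed
    sweep order); each new value is used immediately by subsequent updates."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1   # sign of the weighted input sum
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = store([pattern])
noisy = [1, -1, 1, -1, 1, 1]      # last bit corrupted
restored = recall(w, noisy)       # settles back to the stored pattern
```

Within the first sweep, every correct neuron keeps its value and the corrupted one is flipped back, after which the state is a stable fixed point — exactly the "settling to the closest stored memory" behavior described above.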
Learning in neural network:

Learning can be done with supervised or unsupervised training.

Supervised learning: here, both the input and the desired output are provided. The network processes the input and compares its resulting outputs against the desired outputs. Errors are then calculated, causing the system to adjust the weights, which are continually tweaked. The class of each piece of data in the training set is known: class labels are predetermined and provided in the training phase.
Unsupervised learning:

The input provided for unsupervised learning is a set of patterns p from an n-dimensional space S, but with little or no information about their classification, evaluation, interesting features, etc.

Tasks carried out in unsupervised learning are:
Clustering: group patterns based on similarity.
Vector quantization: fully divide S into a small set of regions.
Feature extraction: reduce the dimensionality of S by removing unimportant features.
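The clustering task above (grouping patterns by similarity, with no class labels) can be sketched with a minimal k-means-style procedure. The one-dimensional data and initial centroid positions are made-up illustrations:

```python
def kmeans(points, centroids, iterations=10):
    """Group patterns by similarity: assign each point to its nearest
    centroid, then move each centroid to the mean of its group."""
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p - centroids[i]) ** 2)
            groups[nearest].append(p)
        # Keep a centroid in place if its group is empty
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in groups.items()]
    return centroids

# Two obvious clusters on a line; no class labels are provided
data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
centers = kmeans(data, centroids=[0.0, 5.0])
```

The same assign-then-average idea underlies vector quantization: the final centroids divide the space into a small set of regions, one per centroid.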
Difference between supervised and unsupervised

Supervised:
Tasks performed: classification, pattern recognition.
NN models: perceptron, feed-forward NN.

Unsupervised:
Tasks performed: clustering.
NN models: self-organizing maps.
Recurrent Neural Network

Recurrent Neural Networks (RNNs) are a type of neural network where the output from the previous step is fed as input to the current step.

In traditional neural networks, all the inputs and outputs are independent of each other. But in cases such as predicting the next word of a sentence, the previous words are required, hence there is a need to remember them. Thus the RNN came into existence, solving this issue with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers some information about a sequence. An RNN has a "memory" that remembers what has been calculated so far.

An RNN uses the same parameters for each input, as it performs the same task on all the inputs or hidden layers to produce the output. This reduces the number of parameters, unlike other neural networks.
How RNN works

The working of an RNN can be understood with the help of the following example.

Example:
Suppose there is a deeper network with one input layer, three hidden layers, and one output layer. Like other neural networks, each hidden layer has its own set of weights and biases: say (w1, b1) for hidden layer 1, (w2, b2) for the second hidden layer, and (w3, b3) for the third. This means that these layers are independent of each other, i.e., they do not memorize previous outputs.

Now the RNN will do the following: it converts the independent activations into dependent activations by providing the same weights and biases to all the layers, thus reducing the number of parameters, and it memorizes each previous output by giving it as input to the next hidden layer. Hence these three layers can be joined together into a single recurrent layer, with the weights and biases of all the hidden layers being the same.
Formula for calculating the current state:
ht = f(ht-1, xt)
where:
ht -> current state
ht-1 -> previous state
xt -> input state

Formula for applying the activation function (tanh):
ht = tanh(whh * ht-1 + wxh * xt)
where:
whh -> weight at recurrent neuron
wxh -> weight at input neuron

Formula for calculating the output:
yt = why * ht
where:
yt -> output
why -> weight at output layer
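The formulas above can be sketched as one recurrent step. Scalar weights are an illustrative simplification (real RNNs use weight matrices), and the input sequence and weight values are made up:

```python
import math

def rnn_step(h_prev, x, w_hh, w_xh, w_hy):
    """One recurrent step: new state from previous state and current input,
    then the output from the new state (scalar weights for simplicity)."""
    h = math.tanh(w_hh * h_prev + w_xh * x)   # ht = tanh(whh*ht-1 + wxh*xt)
    y = w_hy * h                              # yt = why*ht
    return h, y

# Feed a short sequence through the SAME weights at every step
h = 0.0
for x in [0.5, -0.3, 0.9]:
    h, y = rnn_step(h, x, w_hh=0.8, w_xh=1.0, w_hy=0.5)
```

Note that the same three weights are reused at every time step; only the state h carries information forward, which is exactly the parameter sharing described above.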
Training through RNN

A single time step of the input is provided to the network.
The current state is then calculated using the current input and the previous state.
The current state ht becomes ht-1 for the next time step.
One can go through as many time steps as the problem requires, joining the information from all the previous states.
Once all the time steps are completed, the final current state is used to calculate the output.
The output is then compared to the actual (target) output, and an error is generated.
The error is back-propagated through the network to update the weights, and thus the network (RNN) is trained.
Advantages of Recurrent Neural Network

An RNN remembers information through time, and it is useful in time-series prediction precisely because of this ability to remember previous inputs. This is the idea behind Long Short Term Memory. Recurrent neural networks are even used with convolutional layers to extend the effective pixel neighborhood.

Disadvantages of Recurrent Neural Network
Gradient vanishing and exploding problems.
Training an RNN is a very difficult task.
It cannot process very long sequences when using tanh or relu as the activation function.
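The gradient-vanishing problem listed above can be illustrated numerically: backpropagating through many tanh steps multiplies together many chain-rule factors smaller than 1. The scalar state and recurrent weight here are made-up simplifications:

```python
import math

def tanh_grad(x):
    """Derivative of tanh: 1 - tanh(x)^2, which is always at most 1."""
    return 1.0 - math.tanh(x) ** 2

# Product of per-step derivatives along a 50-step sequence
w_hh = 0.5
grad = 1.0
h = 0.9
for _ in range(50):
    h = math.tanh(w_hh * h)
    grad *= tanh_grad(w_hh * h) * w_hh   # one chain-rule factor per time step

# grad is now vanishingly small: early inputs barely influence the weight updates
```

Each factor is at most w_hh = 0.5, so after 50 steps the product is on the order of 0.5^50 — effectively zero, which is why plain RNNs struggle with very long sequences.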
FEATURES OF ARTIFICIAL NEURAL NETWORKS (ANN)

Artificial neural networks may be physical devices or simulated on conventional computers. From a practical point of view, an ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task. Some important features of artificial neural networks are as follows:
(1) Artificial neural networks are extremely powerful computational devices (universal computers).
(2) ANNs are modeled on the basis of current brain theories, in which information is represented by weights.
(3) ANNs have massive parallelism, which makes them very efficient.
(4) They can learn and generalize from training data, so there is no need for enormous feats of programming.
(5) Storage is fault tolerant, i.e., some portions of the neural net can be removed with only a small degradation in the quality of stored data.
(6) They are particularly fault tolerant, which is equivalent to the "graceful degradation" found in biological systems.
(7) Data are naturally stored in the form of associative memory, in contrast with conventional memory, in which data are recalled by specifying the address of that data.
(8) They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.
(9) In practice, they can do anything a symbolic/logic system can do, and more.
(10) Neural networks can extrapolate and interpolate from their stored information. They can also be trained: special training teaches the net to look for significant features or relationships in the data.
Applications of Neural Networks

Neural networks can perform tasks that are easy for a human but difficult for a machine:
Aerospace − Aircraft autopilots, aircraft fault detection.
Automotive − Automobile guidance systems.
Military − Weapon orientation and steering, target tracking, object discrimination, facial recognition, signal/image identification.
Electronics − Code sequence prediction, IC chip layout, chip failure analysis, machine vision, voice synthesis.
Financial − Real estate appraisal, loan advising, mortgage screening, corporate bond rating, portfolio trading programs, corporate financial analysis, currency value prediction, document readers, credit application evaluation.
Industrial − Manufacturing process control, product design and analysis, quality inspection systems, welding quality analysis, paper quality prediction, chemical product design analysis, dynamic modeling of chemical process systems, machine maintenance analysis, project bidding, planning, and management.
Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design, transplant time optimization.
Speech − Speech recognition, speech classification, text-to-speech conversion.
Applications (continued)

Telecommunications − Image and data compression, automated information services, real-time spoken language translation.
Transportation − Truck brake system diagnosis, vehicle scheduling, routing systems.
Software − Pattern recognition in facial recognition, optical character recognition, etc.
Time Series Prediction − ANNs are used to make predictions on stocks and natural calamities.
Signal Processing − Neural networks can be trained to process an audio signal and filter it appropriately in hearing aids.
Control − ANNs are often used to make steering decisions for physical vehicles.
Anomaly Detection − As ANNs are expert at recognizing patterns, they can also be trained to generate an output when something unusual occurs that does not fit the pattern.
GTU questions:

(b) Explain the algorithm for Backpropagation in Neural Networks. 07
Q.5 (a) Describe briefly the applications of Neural Networks. 07
Explain Artificial Neural Network in brief. 2t
Describe briefly the applications of Neural Networks. 07
Write a short note on: Recurrent Networks.
Write a short note on: Hopfield Networks. 3t
Explain the perceptron learning algorithm for training a neural network. What are the limitations of this algorithm?
What is a linearly separable problem? Design a perceptron for any such problem. State one example of a problem which is not linearly separable.
Discuss the perceptron learning algorithm. 2t

GTU questions (continued)

Explain connectionist models. What is a perceptron?
What is the concept of back propagation for ANNs?
