
Adaptive Resonance Theory

Adaptive Resonance Theory (ART) was developed as a theory of human cognitive information processing. The theory has given rise to neural models for pattern recognition and unsupervised learning, and ART systems have been used to explain various kinds of cognitive and brain data.

Adaptive Resonance Theory addresses the stability-plasticity dilemma: stability is the network's ability to retain what it has already learned, while plasticity is its ability to remain flexible enough to absorb new information.
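As a rough illustration of how ART balances the two, here is a minimal Python sketch of an ART1-style matching loop; it is a simplified caricature rather than a faithful implementation, and the function name and vigilance value are illustrative assumptions:

import numpy as np

def art1_step(x, prototypes, rho=0.7):
    """One presentation of a binary input x to an ART1-style network.

    prototypes: list of binary numpy arrays (the learned categories).
    rho: vigilance in (0, 1]; it decides how close a match must be
         before the network accepts it (resonance) instead of
         creating a fresh category.
    """
    norm_x = max(x.sum(), 1)
    # Try candidate categories in order of overlap with the input.
    order = sorted(range(len(prototypes)),
                   key=lambda j: np.logical_and(x, prototypes[j]).sum(),
                   reverse=True)
    for j in order:
        match = np.logical_and(x, prototypes[j]).sum() / norm_x
        if match >= rho:
            # Resonance: refine the old category without erasing it (stability).
            prototypes[j] = np.logical_and(x, prototypes[j]).astype(int)
            return j
    # No category matched well enough: learn a new one (plasticity).
    prototypes.append(x.copy())
    return len(prototypes) - 1

A familiar pattern refines an existing prototype, while a sufficiently novel one creates a new category, so new learning never overwrites old memories.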

Applications of ART:

ART neural networks, which support fast, stable learning and prediction, have been applied in many areas, including target recognition, face recognition, medical diagnosis, signature verification, and mobile robot control.

History of Artificial Neural Network

The history of neural networks arguably began in the late 1800s with scientific endeavors to study the activity of the human brain. In 1890, William James published the first work about brain activity patterns. In 1943, McCulloch and Pitts created a model of the neuron that is still used today in artificial neural networks. This model is segmented into two parts, as sketched below:

o A summation over weighted inputs.

o An output function of the sum.
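A minimal Python sketch of such a neuron; the weights and threshold in the example are illustrative choices:

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: a weighted sum followed by a step output."""
    s = sum(w * x for w, x in zip(weights, inputs))  # part 1: summation
    return 1 if s >= threshold else 0                # part 2: output function

# Example: an AND gate realized with weights (1, 1) and threshold 2.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1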

Artificial Neural Network (ANN):

In 1949, Donald Hebb published "The Organization of Behavior," which described a law for synaptic neuron learning. This law, later known as Hebbian learning in honor of Donald Hebb, is one of the most straightforward learning rules for artificial neural networks.

In 1951, Marvin Minsky built the first Artificial Neural Network (ANN) while working at Princeton.

In 1958, "The Computer and the Brain" were published, a year after Jhon von
Neumann's death. In that book, von Neumann proposed numerous extreme changes to how
analysts had been modeling the brain.

Learning and Adaptation

Artificial Neural Networks (ANNs) are inspired by the way biological nervous systems, such as the human brain, work. The most powerful attribute of the human brain is its ability to adapt, and ANNs aim to acquire a similar characteristic. So how, exactly, does our brain learn?

A learning rule, or learning process, is a technique or mathematical logic that encourages a neural network to learn from existing conditions and improve its performance. Applying such a rule updates the weights and bias levels of the network as it is exposed to a particular data environment.

Hebbian learning rule:

The Hebbian rule was the first learning rule. In 1949, Donald Hebb proposed this learning algorithm for unsupervised neural networks. We can use this rule to determine how to update the weights between the nodes of a network.
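In its simplest form the rule strengthens a weight in proportion to the correlated activity of the two neurons it connects. A minimal sketch; the learning rate eta is an illustrative choice:

import numpy as np

def hebbian_update(w, x, eta=0.1):
    """Hebbian rule: delta_w = eta * y * x, with y = w . x.

    Neurons that fire together wire together: a weight grows when
    pre- and post-synaptic activity agree in sign. No target signal
    is used, so the rule is unsupervised.
    """
    y = np.dot(w, x)          # post-synaptic activation
    return w + eta * y * x    # strengthen correlated connections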

Perceptron learning rule:

In computer applications such as classification, pattern recognition, and prediction, a learning module can be implemented through different approaches, including structural, statistical, and neural ones. Among these techniques, artificial neural networks are inspired by the physiological operation of the brain. They build on the mathematical model of a single neural cell (neuron), called the single-neuron perceptron, and try to resemble the actual networks of neurons in the brain.
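Concretely, a single-neuron perceptron thresholds a weighted sum and nudges its weights only when it misclassifies. A minimal sketch; the learning rate and bias handling are illustrative choices:

import numpy as np

def perceptron_update(w, b, x, t, eta=0.1):
    """Perceptron rule: w <- w + eta * (t - o) * x, b <- b + eta * (t - o).

    t is the desired label (0 or 1) and o the predicted label, so the
    weights change only when the prediction is wrong.
    """
    o = 1 if np.dot(w, x) + b >= 0 else 0   # thresholded weighted sum
    w = w + eta * (t - o) * x
    b = b + eta * (t - o)
    return w, b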

Delta learning rule:

The delta rule in an artificial neural network can be seen as the single-layer special case of backpropagation: it refines the network by adjusting the associations between inputs and outputs across artificial neurons. The delta rule is also known as the Widrow-Hoff rule or the least mean squares (LMS) rule.

The delta rule was introduced by Widrow and Hoff and is one of the most significant learning rules that depend on supervised learning.
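For a linear unit, the rule performs gradient descent on the squared error between the target t and the output o. A minimal sketch:

import numpy as np

def delta_update(w, x, t, eta=0.1):
    """Delta (Widrow-Hoff / LMS) rule: delta_w = eta * (t - o) * x.

    Unlike the perceptron rule, o = w . x is a continuous output, so
    the update descends the gradient of the squared error (t - o)^2.
    """
    o = np.dot(w, x)               # linear output, no threshold
    return w + eta * (t - o) * x   # supervised: uses the target t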
Correlation learning rule:

The correlation learning rule is based on the same principle as the Hebbian learning rule. It assumes that weights between neurons with corresponding responses should be increasingly positive, and weights between neurons with opposite responses should be increasingly negative. In contrast to the Hebbian rule, the correlation rule is a supervised learning rule.
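The update has the same form as Hebb's rule, except that the teacher-supplied target replaces the neuron's own output, which is what makes it supervised. A minimal sketch:

import numpy as np

def correlation_update(w, x, t, eta=0.1):
    """Correlation rule: delta_w = eta * t * x.

    Identical in form to the Hebbian update, but t is the desired
    response, so weights between units whose desired responses have
    opposite signs are driven negative.
    """
    return w + eta * t * x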

Outstar learning rule:

In the outstar learning rule, the weights fanning out from a specific node are required to become equal to the desired outputs of the neurons connected through those weights. It is a supervised training process because the desired outputs must be known. Grossberg introduced the outstar learning rule.
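Each outgoing weight therefore drifts toward the desired output of the neuron it feeds. A minimal sketch:

import numpy as np

def outstar_update(w, d, eta=0.1):
    """Grossberg outstar rule: delta_w = eta * (d - w).

    w: weights fanning out from one source node.
    d: desired outputs of the neurons those weights feed.
    Repeated updates move each weight toward its desired output.
    """
    return w + eta * (d - w)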

Unsupervised ANNs Algorithms and Techniques

Techniques and algorithms used in unsupervised ANNs include self-organizing maps, restricted Boltzmann machines, autoencoders, and related models.

Brain-State-in-a-Box Network

The Brain-State-in-a-Box (BSB) neural network is a simple nonlinear auto-associative neural network. It was proposed by J.A. Anderson, J.W. Silverstein, S.A. Ritz, and R.S. Jones in 1977 as a memory model grounded in neurophysiological considerations.
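The BSB dynamics repeatedly feed the state back through a weight matrix and clip the result to the hypercube [-1, 1]^n, hence the "box". A minimal sketch; the feedback gain beta is an illustrative choice:

import numpy as np

def bsb_step(x, W, beta=0.5):
    """One Brain-State-in-a-Box update: x <- clip(x + beta * W @ x).

    Clipping to [-1, 1] keeps the state inside the box; repeated
    steps drive it toward a corner, which acts as a recalled memory.
    """
    return np.clip(x + beta * W @ x, -1.0, 1.0)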

Associative Memory Network

An associative memory network is a content-addressable memory structure that stores associations between a set of input patterns and a set of output patterns. A content-addressable memory enables the recollection of data based on the degree of similarity between the input pattern and the patterns stored in the memory.
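One classic way to realize such a memory is a correlation-matrix (Hopfield-style) store: patterns are memorized in an outer-product weight matrix and recalled by iterating from a noisy cue. A minimal sketch with bipolar (+1/-1) patterns; the function names are illustrative:

import numpy as np

def store(patterns):
    """Build a correlation-matrix memory from rows of +1/-1 patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, cue, steps=5):
    """Iterate x <- sign(W x); a noisy cue settles onto the stored
    pattern it most resembles."""
    x = cue.astype(float)
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)
    return x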

Boltzmann Machines

A Boltzmann machine is a network of symmetrically connected, neuron-like units that make stochastic decisions about whether to be on or off. The Boltzmann machine was invented by the renowned scientists Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The learning algorithm is usually slow in networks with many layers of feature detectors, but it is fast in "restricted Boltzmann machines" (RBMs), which have a single layer of feature detectors. Many hidden layers can be learned efficiently by composing restricted Boltzmann machines, using the feature activations of one as the training data for the next.
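A rough sketch of that fast single-layer case: a binary RBM trained with one step of contrastive divergence (CD-1). Biases are omitted for brevity, and the hyperparameters are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, v0, eta=0.01):
    """One CD-1 step for a binary RBM with weights W (visible x hidden).

    v0: batch of binary visible vectors, shape (batch, n_visible).
    Hidden units are conditionally independent given the visibles
    (and vice versa), which is what makes the sampling fast.
    """
    # Positive phase: hidden probabilities and samples from the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one reconstruction of visibles, then hiddens again.
    pv1 = sigmoid(h0 @ W.T)
    ph1 = sigmoid(pv1 @ W)
    # Contrastive divergence estimate of the gradient.
    return W + eta * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]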

The restricted Boltzmann machine itself was invented by Smolensky in 1986, under the name "harmonium".
