AIT Important Questions: 1) Explain The Concept of Biological Neuron Model With The Help of A Neat Diagram? Answer
1) Explain the concept of biological neuron model with the help of a neat
diagram?
Answer:
The biological neural network consists of nerve cells (neurons), as shown in the Fig. above, which are interconnected as in the Fig. below. The cell body of the neuron, which includes the neuron's nucleus, is where most of the neural computation takes place.
Neural activity passes from one neuron to another by means of electrical triggers, which travel from one cell to the next down the neuron's axon through an electro-chemical process of voltage-gated ion exchange along the axon and diffusion of neurotransmitter molecules through the membrane across the synaptic gap.
2) Name the different learning methods and explain any one method of
supervised learning?
Answer:
1. Error-correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
5. Boltzmann learning
1. Error-Correction Learning
Consider a neuron k driven by an input signal vector x(n); it produces an output signal y_k(n). This output signal, representing the only output of the neural network, is compared to a desired response or target output, denoted by d_k(n). Consequently, an error signal e_k(n) = d_k(n) - y_k(n) is produced. The corrective adjustments are designed to make the output signal y_k(n) come closer to the desired response d_k(n) in a step-by-step manner.
This objective is achieved by minimizing the instantaneous value of the cost function, defined in terms of the error signal as

E(n) = (1/2) e_k^2(n)
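The error-correction (delta) rule above can be sketched for a single linear neuron; the learning rate eta = 0.1 and the toy training example are illustrative choices, not values from the text.

```python
# Sketch of error-correction (delta-rule) learning for one linear neuron.
# The learning rate eta and the single training example are illustrative.
def delta_rule_step(w, x, d, eta=0.1):
    """One update: w_j <- w_j + eta * e * x_j, with e = d - y."""
    y = sum(wj * xj for wj, xj in zip(w, x))   # neuron output y_k(n)
    e = d - y                                  # error signal e_k(n) = d_k(n) - y_k(n)
    w = [wj + eta * e * xj for wj, xj in zip(w, x)]
    cost = 0.5 * e * e                         # instantaneous cost E(n) = 1/2 e_k^2(n)
    return w, cost

# Repeated updates drive the cost toward zero on this example.
w = [0.0, 0.0]
costs = []
for _ in range(50):
    w, c = delta_rule_step(w, [1.0, 0.5], d=1.0)
    costs.append(c)
```

Each step shrinks the error by a constant factor here, so the cost decays geometrically toward zero, which is the "step-by-step" behavior described above.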
2. Memory-Based Learning
In memory-based learning, all (or most of) the past experiences are explicitly stored in a large memory of correctly classified input-output examples: {(x_i, d_i)}, i = 1, ..., N, where x_i denotes an input vector and d_i denotes the corresponding desired response.
All memory-based learning algorithms involve two essential ingredients:
1. the criterion used for defining the local neighborhood of the test vector x_test, and
2. the learning rule applied to the training examples in the local neighborhood of x_test.
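The simplest instance of memory-based learning is the nearest-neighbor rule, which can be sketched as follows; the stored examples below are made up for illustration.

```python
# Sketch of memory-based learning via the nearest-neighbor rule.
# The stored example set {(x_i, d_i)} is illustrative.
def nearest_neighbor(memory, x_test):
    """Return the label of the stored example closest (Euclidean) to x_test."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    x_best, d_best = min(memory, key=lambda pair: dist2(pair[0], x_test))
    return d_best

memory = [([0.0, 0.0], 0), ([0.1, 0.2], 0), ([1.0, 1.0], 1), ([0.9, 0.8], 1)]
label = nearest_neighbor(memory, [0.95, 0.9])
```

Here the "local neighborhood" criterion is Euclidean distance and the "learning rule" is simply copying the label of the closest stored example.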
3. Hebbian Learning
Hebb's postulate of learning is the oldest and most famous of all learning
rules; it is named in honor of the neuropsychologist Hebb (1949). According to Hebb's postulate, when an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency as one of the cells firing B is increased.
In mathematical terms (Hebb's hypothesis), the adjustment applied to the synaptic weight w_kj at time step n is

Delta w_kj(n) = eta * y_k(n) * x_j(n)

where eta is a positive learning-rate parameter, y_k(n) is the postsynaptic signal, and x_j(n) is the presynaptic signal.
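The Hebbian update can be sketched for a single neuron; the initial weights, input, and learning rate are illustrative.

```python
# Sketch of the Hebbian update: delta w_kj(n) = eta * y_k(n) * x_j(n).
# Initial weights, input pattern, and eta are illustrative choices.
def hebbian_step(w, x, eta=0.1):
    y = sum(wj * xj for wj, xj in zip(w, x))           # postsynaptic activity y_k
    return [wj + eta * y * xj for wj, xj in zip(w, x)] # strengthen correlated synapses

w = [0.5, 0.5]
for _ in range(3):
    w = hebbian_step(w, [1.0, 1.0])
```

Note that the weights grow without bound when pre- and postsynaptic activity stay correlated, which is why practical Hebbian rules usually add some form of normalization or forgetting.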
4. Competitive Learning
There are three basic elements to a competitive learning rule:
1. A set of neurons that are all the same except for some randomly distributed synaptic weights, and which therefore respond differently to a given set of input patterns.
2. A limit imposed on the "strength" of each neuron.
3. A mechanism that permits the neurons to compete for the right to respond to a given subset of inputs, such that only one output neuron, or only one neuron per group, is active (i.e., "on") at a time. The neuron that wins the competition is called a winner-takes-all neuron.
In the simplest form of competitive learning, the neural network has a single
layer of output neurons, each of which is fully connected to the input nodes. The
network may include feedback connections among the neurons, as indicated in the figure below. In the network architecture described here, the feedback connections perform lateral inhibition, with each neuron tending to inhibit the neuron to which it is laterally connected. In contrast, the feedforward synaptic connections in the same network are all excitatory.
For a neuron k to be the winning neuron, its induced local field vk for a
specified input pattern x must be the largest among all the neurons in the network.
The output signal y_k of the winning neuron k is set equal to one; the output signals of all the neurons that lose the competition are set equal to zero. That is,

y_k = 1 if v_k > v_j for all j, j != k
y_k = 0 otherwise

where the induced local field v_k represents the combined action of all the forward and feedback inputs to neuron k.
Let w_kj denote the synaptic weight connecting input node j to neuron k. Suppose that each neuron is allotted a fixed amount of synaptic weight (i.e., all synaptic weights are positive), which is distributed among its input nodes; that is,

sum over j of w_kj = 1 for all k
A neuron then learns by shifting synaptic weights from its inactive to active
input nodes. If a neuron does not respond to a particular input pattern, no learning
takes place in that neuron. If a particular neuron wins the competition, each input
node of that neuron relinquishes some proportion of its synaptic weight, and the
weight relinquished is then distributed equally among the active input nodes.
According to the standard competitive learning rule, the change Delta w_kj applied to synaptic weight w_kj is defined by

Delta w_kj = eta * (x_j - w_kj) if neuron k wins the competition
Delta w_kj = 0 if neuron k loses the competition

where eta is the learning-rate parameter. This rule has the overall effect of moving the synaptic weight vector w_k of the winning neuron k toward the input pattern x.
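One step of winner-takes-all competitive learning can be sketched as follows; the two initial weight vectors, the input pattern, and eta = 0.5 are illustrative.

```python
# Sketch of the standard competitive (winner-takes-all) learning rule:
# delta w_kj = eta * (x_j - w_kj) for the winning neuron k, 0 for the losers.
# Initial weights, input, and eta are illustrative choices.
def competitive_step(W, x, eta=0.5):
    # Winner: neuron whose induced local field (here, w . x) is largest.
    fields = [sum(wj * xj for wj, xj in zip(w, x)) for w in W]
    k = max(range(len(W)), key=lambda i: fields[i])
    # Only the winner's weight vector moves toward the input pattern x.
    W[k] = [wj + eta * (xj - wj) for wj, xj in zip(W[k], x)]
    return k, W

W = [[0.9, 0.1], [0.1, 0.9]]   # two neurons with different initial weights
winner, W = competitive_step(W, [1.0, 0.0])
```

Only the winning neuron learns: its weight vector shifts toward x, while the losing neuron's weights stay untouched.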
5. Boltzmann Learning
In Boltzmann learning the neurons constitute a recurrent structure and operate in a binary manner. The machine is characterized by an energy function E, and a neuron k picked at random flips its state x_k to -x_k at pseudo-temperature T with probability

P(x_k -> -x_k) = 1 / (1 + exp(-Delta E_k / T))

where Delta E_k is the energy change (i.e., the change in the energy function of the machine) resulting from such a flip. Notice that T is not a physical temperature, but rather a pseudo-temperature, as explained in Chapter 1. If this rule is applied repeatedly, the machine will reach thermal equilibrium.
The neurons of the machine operate in one of two modes:
1. Clamped condition, in which the visible neurons are all clamped onto specific states determined by the environment.
2. Free-running condition, in which all the neurons (visible and hidden) are allowed to operate freely.
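The stochastic flip rule can be sketched numerically; the sign convention follows the text's Delta E_k, and the temperatures below are illustrative values chosen to contrast the hot and cold regimes.

```python
import math

# Sketch of the Boltzmann flip rule: a randomly picked neuron flips its
# state with probability P = 1 / (1 + exp(-dE / T)), where dE is the
# energy change of the flip and T the pseudo-temperature.
def flip_probability(dE, T):
    return 1.0 / (1.0 + math.exp(-dE / T))

# At high pseudo-temperature flips are nearly random (P close to 1/2);
# at low pseudo-temperature the flip decision becomes nearly deterministic.
p_hot = flip_probability(dE=1.0, T=100.0)
p_cold = flip_probability(dE=1.0, T=0.1)
```

Gradually lowering T while applying this rule repeatedly is what drives the machine toward thermal equilibrium.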
3) Define the AND and OR functions and give their training input, target output pairs?
Answer:
The AND function gives the response "true" if both input values are "true"; otherwise the response is "false." If we represent "true" by the value 1, and "false" by 0, this gives the following four training input, target output pairs:

(0, 0) -> 0, (0, 1) -> 0, (1, 0) -> 0, (1, 1) -> 1
The OR function gives the response "true" if either of the input values is "true"; otherwise the response is "false." This is the "inclusive or," since both input values may be "true" and the response is still "true." Representing "true" as 1 and "false" as 0, we have the following four training input, target output pairs:

(0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, (1, 1) -> 1
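Both functions are linearly separable, so each can be realized by a single threshold unit; the weights and thresholds below are one illustrative choice, not the only one.

```python
# The four training input, target output pairs for AND and OR (true=1, false=0).
AND_PAIRS = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR_PAIRS  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# A single threshold unit realizes each function; weights (1, 1) with
# threshold 2 for AND and threshold 1 for OR are illustrative choices.
def threshold_unit(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

and_ok = all(threshold_unit(x, (1, 1), 2) == d for x, d in AND_PAIRS)
or_ok  = all(threshold_unit(x, (1, 1), 1) == d for x, d in OR_PAIRS)
```

Lowering the threshold from 2 to 1 is all it takes to turn the AND unit into an OR unit, since the two functions differ only in how many active inputs are required.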
4) Name the different activation functions used in neural networks and explain them?
Answer:
Commonly used activation functions include:
1. Threshold (hard-limiter) function
2. Piecewise-linear function
3. Sigmoid (logistic) function
4. Hyperbolic tangent
5. Spline functions
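A few of these activation functions can be sketched directly; the logistic slope parameter a below is an illustrative choice.

```python
import math

# Sketches of common activation functions applied to an induced local field v.
def threshold(v):
    """Hard limiter: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def logistic(v, a=1.0):
    """Sigmoid (logistic) function 1 / (1 + exp(-a*v)); a sets the slope."""
    return 1.0 / (1.0 + math.exp(-a * v))

def tanh_act(v):
    """Hyperbolic tangent, an antisymmetric sigmoid with range (-1, 1)."""
    return math.tanh(v)
```

The threshold function gives hard binary decisions, while the logistic and hyperbolic tangent are smooth, differentiable approximations of it, which is what gradient-based training requires.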
5) Explain the different architectures of artificial neural networks?
Answer:
According to the flow of the signals within an ANN, we can divide the architectures into feedforward networks, if the signals flow only from input to output, or recurrent networks, if loops are allowed. Another possible classification depends on the existence of hidden neurons, i.e., neurons which are neither input nor output neurons. If there are hidden neurons, we call the network a multilayer NN; otherwise it is a single-layer NN. Finally, if every neuron in one layer is connected to the layer immediately above, the network is called fully connected; if not, we speak of a partially connected network.
1. Single-layer networks
The simplest form of an ANN is represented in the fig. below. On the left is the input layer, which is nothing but a buffer and therefore does not implement any processing. The signals flow to the right through the synapses, or weights, arriving at the output layer, where the computation is performed.
2. Multilayer networks
In this case there are one or more hidden layers. The output of each layer constitutes the input to the layer immediately above. For instance, an ANN [5, 4, 4, 1] has 5 neurons in the input layer, two hidden layers with 4 neurons each, and one neuron in the output layer.
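A forward pass through such an ANN [5, 4, 4, 1] can be sketched as follows; the random weights and the choice of logistic activation are illustrative assumptions.

```python
import math
import random

# Sketch of a forward pass through a fully connected feedforward ANN [5, 4, 4, 1]:
# 5 inputs, two hidden layers of 4 neurons, 1 output neuron. The weights are
# random illustrative values and the logistic activation is an assumption.
random.seed(0)

def make_layer(n_in, n_out):
    """One weight row per neuron, one weight per incoming signal."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layers, x):
    """Each layer's output becomes the next layer's input."""
    for W in layers:
        x = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
             for row in W]
    return x

layers = [make_layer(5, 4), make_layer(4, 4), make_layer(4, 1)]
y = forward(layers, [0.5, -0.2, 0.1, 0.9, -0.7])
```

The input layer appears only as the initial vector x, matching the description above of the input layer as a buffer that performs no processing.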
3. Recurrent networks
Recurrent networks are those where there are feedback loops. Notice that any feedforward network can be transformed into a recurrent network just by introducing a delay and feeding the delayed signal back to one input, as represented in the fig.
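The delay-and-feed-back construction can be sketched with a single linear unit; the weights w_in and w_fb and the input sequence are illustrative.

```python
# Sketch of turning a feedforward unit into a recurrent one: the previous
# output (a unit delay) is fed back as an extra input. Weights are illustrative.
def recurrent_step(x, y_prev, w_in=0.5, w_fb=0.5):
    return w_in * x + w_fb * y_prev   # linear unit, for simplicity

y = 0.0
outputs = []
for x in [1.0, 0.0, 0.0, 0.0]:
    y = recurrent_step(x, y)          # y_prev is last step's output
    outputs.append(y)
```

The single input pulse keeps echoing through the feedback loop with decaying strength, illustrating how the delayed connection gives the network memory of past inputs.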
6) Explain the Perceptron model with a neat diagram?
Answer:
The Perceptron model is the simplest type of neural network, developed by Frank Rosenblatt in 1962. This type of simple network is rarely used now, but it is significant in terms of its historical contribution to neural networks. A very simple form of the Perceptron model is shown in the Fig. below. It is very similar to the MCP model discussed in the last section. It has multiple inputs connected to a node that sums the linear combination of these inputs. The resulting sum then goes through a hard limiter, which produces an output of +1 if its input is positive and -1 if its input is negative. It was first developed to classify a set of externally applied inputs into one of two classes, C1 or C2, with an output of +1 signifying class C1 and -1 signifying class C2.
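The perceptron's linear combiner followed by a hard limiter can be sketched directly; the weights, bias, and test points below are illustrative choices.

```python
# Sketch of Rosenblatt's perceptron: a linear combiner followed by a hard
# limiter giving +1 (class C1) or -1 (class C2). Weights/bias are illustrative.
def perceptron(x, w, b):
    v = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear combination + bias
    return 1 if v > 0 else -1                      # hard limiter

# A unit whose decision boundary is the line x1 + x2 = 1:
out_a = perceptron([1.0, 1.0], w=[1.0, 1.0], b=-1.0)   # above the line
out_b = perceptron([0.0, 0.0], w=[1.0, 1.0], b=-1.0)   # below the line
```

Geometrically the weights and bias define a line (a hyperplane in higher dimensions), and the hard limiter simply reports on which side of it the input falls.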
7) What is a physical symbol system?
Answer:
Once you have some requirements on the nature of a solution, you must represent the problem so a computer can solve it.
The term physical is used because symbols in a physical symbol system are physical objects that are part of the real world, even though they may be internal to computers and brains.
8) Explain the different types of machine learning?
Answer:
Learning is the ability of an agent to improve its behavior based on experience.
Machine learning tasks are typically classified into three broad categories, depending on the nature of the learning "signal" or "feedback" available to the learning system.
They are
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
Supervised Learning
The machine is presented with examples of inputs and their desired outputs. These are given by a teacher, and the goal of learning is to learn the general rule that maps inputs to outputs.
Unsupervised Learning
No labels are given to the learning system; it must find structure in its input on its own. Unsupervised learning can be a goal in itself or a means towards an end.
Reinforcement Learning
The machine interacts with a dynamic environment in which it must achieve a certain goal, and it learns from feedback given in terms of rewards and punishments.