2. Learning Methods
Hebbian Learning:
• Based on the principle "cells that fire together wire
together."
• Strengthens connections between co-activated
neurons.
• Example: Associating a sound (e.g., a bell) with
food during Pavlov’s experiment leads to salivation
when hearing the bell.
Competitive Learning:
• Neurons compete to activate, with only the
"winning" neuron adjusting its weights.
• Example: Customer segmentation in marketing:
o Group customers by purchasing patterns, such as tech-savvy buyers or budget-conscious shoppers (a code sketch follows this list).
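Below is a minimal winner-take-all sketch in Python; the feature names and the tiny dataset are illustrative assumptions, not from the notes. Two units compete for each input, and only the winner moves its weights toward that input.

import numpy as np

# Winner-take-all competitive learning (illustrative data): each
# customer is described as [tech_spend_share, discount_usage].
X = np.array([[0.9, 0.1], [0.8, 0.2],    # tech-savvy buyers
              [0.1, 0.9], [0.2, 0.8]])   # budget-conscious shoppers

weights = X[[0, 2]] + 0.05   # two competing units, seeded near the data
lr = 0.2                     # (avoids "dead" units in this tiny demo)

for epoch in range(20):
    for x in X:
        # Competition: the unit closest to the input wins...
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # ...and only the winner adjusts, moving toward the input.
        weights[winner] += lr * (x - weights[winner])

print(weights)   # each row settles near one segment's centre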
Boltzmann Learning:
• Inspired by statistical mechanics.
• Seeks to minimize a network’s energy state to find
patterns in data.
• Example: Movie recommendation systems learn user preferences by analyzing rating patterns (a toy energy sketch follows this list).
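The energy-minimization idea can be sketched with a toy network; the random weights here stand in for learned user preferences, and the update is a standard Metropolis-style flip rule. Each step proposes flipping one unit, so the state drifts toward low energy.

import numpy as np

# Energy of a state: E(s) = -0.5 * s^T W s - b^T s
# (W symmetric, zero diagonal). Lower energy = more probable state.
rng = np.random.default_rng(1)
n = 4
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(size=n)
s = rng.choice([-1, 1], size=n)          # binary (bipolar) units

def energy(s):
    return -0.5 * s @ W @ s - b @ s

T = 1.0                                   # temperature
for step in range(100):
    i = rng.integers(n)
    flipped = s.copy(); flipped[i] = -flipped[i]
    dE = energy(flipped) - energy(s)
    # Downhill flips are always accepted; uphill ones only sometimes.
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s = flipped

print(s, energy(s))   # the network settles into a low-energy state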
2. Tabu Search:
Tabu Search (TS) is an iterative method that enhances
the performance of local search algorithms by using
memory structures to avoid revisiting previously
explored solutions, thereby improving the likelihood of
finding a global optimum.
Key Concepts of Tabu Search:
• Memory Structure (Tabu List): The key idea in
Tabu Search is to keep a memory of previously
visited solutions or moves. These solutions are
placed in a "tabu list" for a certain number of
iterations, preventing the algorithm from revisiting
them.
• Aspiration Criteria: If a move results in a better
solution than the best-known solution (even if it is
in the tabu list), it is allowed. This helps avoid
getting trapped in local optima.
• Neighborhood Search: TS explores neighboring
solutions iteratively, choosing the best one from
the set of neighbors that is not in the tabu list.
How Tabu Search Works:
• Start with an initial solution.
• Evaluate the neighbors of the current solution.
• Move to the best neighbor that is not in the tabu
list, or use the aspiration criteria.
• Update the tabu list to include the current
solution or move.
• Repeat until a stopping criterion is met (e.g., a maximum number of iterations or a satisfactory solution). A code sketch of this loop follows the list.
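A minimal sketch of the loop above, under illustrative assumptions: solutions are bit strings, the neighborhood is all single-bit flips, the tabu list stores recently flipped bit positions, and the objective (count of zero bits) is a toy stand-in.

import random

def tabu_search(objective, n_bits=8, tenure=3, max_iters=100):
    """Minimize `objective` over bit strings using the steps above."""
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_val = current[:], objective(current)
    tabu = {}                           # move (bit index) -> expiry iteration

    for it in range(max_iters):
        # Evaluate all single-bit-flip neighbors of the current solution.
        candidates = []
        for i in range(n_bits):
            neighbor = current[:]
            neighbor[i] ^= 1
            val = objective(neighbor)
            is_tabu = tabu.get(i, 0) > it
            # Aspiration criterion: a tabu move is allowed if it
            # beats the best-known solution.
            if not is_tabu or val < best_val:
                candidates.append((val, i, neighbor))
        if not candidates:
            continue
        val, i, current = min(candidates)   # best admissible neighbor
        tabu[i] = it + tenure               # make the reverse move tabu
        if val < best_val:
            best, best_val = current[:], val
    return best, best_val

# Toy objective: count zeros, so the optimum is the all-ones string.
sol, val = tabu_search(lambda bits: bits.count(0))
print(sol, val)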
Applications of Tabu Search:
• Combinatorial Optimization: TS is applied to
problems like the traveling salesman problem,
vehicle routing, and job-shop scheduling.
• Function Optimization: TS can also be used in
continuous function optimization problems.
Hebbian learning
Hebbian learning is a fundamental principle in soft
computing and neural networks. It is based on the
biological mechanism of synaptic plasticity and is often
summarized by the phrase: "Cells that fire together,
wire together." This principle explains how the
strength of the connection between two neurons is
increased if they are activated simultaneously.
Key Features of Hebbian Learning
1. Synaptic Modification:
o If two neurons are active at the same time (pre-synaptic and post-synaptic), the connection between them strengthens; if their activity is uncorrelated, the connection weakens.
2. Unsupervised Learning:
o Hebbian learning is unsupervised, meaning it does not require labeled data or an error-correcting learning algorithm; weights change based purely on correlated activity, as the sketch below illustrates.
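A minimal sketch of the plain Hebbian update Δw = η·x·y. The postsynaptic response rule here is an assumed toy; note there is no error signal, only correlation.

import numpy as np

# Plain Hebbian update: dw = lr * x * y -- weights grow when input and
# output are active together ("fire together, wire together").
rng = np.random.default_rng(0)
w = np.zeros(3)
lr = 0.1

for _ in range(100):
    x = rng.choice([0., 1.], size=3)              # presynaptic activity
    y = float(x @ np.array([1., 1., 0.]) > 1.0)   # toy postsynaptic response
    w += lr * x * y                               # Hebb's rule: no error term

# Weights on the co-active inputs 0 and 1 grow fastest; plain Hebbian
# learning puts no bound on this growth.
print(w)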
Adaline (Adaptive Linear Neuron):
1. Learning Algorithm:
▪ Weights are adjusted to minimize the error between the net input and the target (the LMS/delta rule); a small sketch follows this section.
2. Applications:
▪ Pattern recognition.
Madaline (Multiple Adaline):
1. Architecture:
▪ A layered network built from multiple Adaline units.
▪ Introduced the concept of hidden layers.
2. Activation Function:
▪ Uses binary step functions at the output neurons.
3. Training and Capability:
▪ Capable of solving problems that a single Adaline cannot; trained with Madaline Rules I and II.
▪ Madaline Rule I: A straightforward weight-adjustment procedure that updates the hidden Adaline whose net input is closest to zero.
4. Applications:
▪ Control systems.
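A small sketch of the Adaline delta rule mentioned above, on illustrative bipolar AND-style data; the error is taken on the linear net input during training, and the binary step is applied only at readout.

import numpy as np

# Adaline (LMS / delta rule): weights and bias are adjusted to minimize
# the squared error between the linear net input and the target.
X = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
t = np.array([1., -1., -1., -1.])     # bipolar AND targets (illustrative)

w = np.zeros(2); b = 0.0; lr = 0.1
for epoch in range(50):
    for x, target in zip(X, t):
        y = w @ x + b                 # linear net input (no step here)
        err = target - y
        w += lr * err * x             # delta rule: follow the error downhill
        b += lr * err

print(np.sign(X @ w + b))             # binary step applied only at readout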
Significance of weights & bias values in ANN. [5]
Significance of Weights:
1. Input Representation: Weights represent the strength of
the relationship between input nodes and neurons in the
hidden layers or output layers.
2. Learning: During training, weights are adjusted to
minimize the error. The magnitude and direction of weight
changes determine how well the neural network learns.
3. Model Behavior: Proper weight adjustments enable the
model to generalize and recognize patterns in data, making
it capable of accurate predictions.
Significance of Bias:
1. Shifting Activation: Bias allows the activation function to
shift left or right, helping the neural network adjust to
different patterns in data, even when all input values are
zero.
2. Flexibility: Bias helps the network fit the data better by allowing neurons to produce a non-zero output even when all inputs are zero, so the model is not forced to pass through the origin.
3. Non-Linearity: Bias works together with the non-linear activation function, shifting the point where the non-linearity takes effect; this flexibility is essential for solving complex problems (see the sketch below).
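A short sketch of how weights and bias interact in a single sigmoid neuron: with an all-zero input the weights contribute nothing, so the bias alone decides whether the neuron fires. The numbers are illustrative.

import numpy as np

def neuron(x, w, b):
    # Net input: weights scale each input's influence; bias shifts the
    # activation threshold left or right.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid activation

x = np.array([0.0, 0.0])          # all-zero input
w = np.array([0.7, -0.3])

print(neuron(x, w, b=0.0))   # 0.5   -- without bias, stuck at 0.5
print(neuron(x, w, b=2.0))   # ~0.88 -- bias lets it fire for zero input
print(neuron(x, w, b=-2.0))  # ~0.12 -- or stay quiet; threshold shifted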