ML Module 5
Key Terminology in Classification
1. Classes:
- The possible categories into which the input data can be classified. For
example, in a spam detection system, the classes might be "spam" and "not spam".
2. Features:
- The attributes or properties of the data points that are used to make predictions.
In email spam detection, features could include the presence of certain keywords,
the sender's address, etc.
3. Training Data:
- A dataset with labeled examples used to train the model. Each example in the
training set includes the features and the corresponding class label.
4. Test Data:
- A separate dataset used to evaluate the performance of the model after it has
been trained.
5. Model:
- The algorithm or mathematical function that maps input features to a class
label. It is trained using the training data.
6. Decision Boundary:
- The surface that separates different classes in the feature space. A well-trained
model will have a decision boundary that accurately distinguishes between classes.
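To make these terms concrete, here is a minimal Python sketch (assuming scikit-learn is available; the features, labels, and model choice are illustrative, not from the source):

# Toy spam-detection setup: each row is an email described by two
# illustrative features (keyword count, sender score); y holds the classes.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = [[3, 0.2], [0, 0.9], [5, 0.1], [1, 0.8], [4, 0.3], [0, 0.7]]  # features
y = [1, 0, 1, 0, 1, 0]  # class labels: 1 = "spam", 0 = "not spam"

# Split into training data (to fit the model) and test data (to evaluate it).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # the model learns from training data
print(model.predict(X_test), model.score(X_test, y_test))

The boundary the fitted model learns in feature space is exactly the decision boundary described above.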
Biological Neuron
The diagram shows a neuron, a basic cell in the brain that sends and receives
signals. Here are its main parts:
1. Dendrites:
- Function: Catch signals from other neurons.
- Description: Branch-like parts that bring signals to the cell body.
2. Cell Body (Soma):
- Function: Processes the incoming signals.
- Description: The central part of the neuron, containing the nucleus.
3. Axon:
- Function: Carries signals away from the cell body.
- Description: A long, thin fiber that transmits signals to other neurons or
muscles.
4. Synapse:
- Function: Passes signals to the next neuron.
- Description: The gap between the end of one neuron and the dendrites of
another.
Artificial Neural Network (ANN) Structure
1. Input Layer:
- Function: Receives the initial data or features.
- Description: Each neuron in this layer represents a feature of the input data
(e.g., pixels of an image).
2. Hidden Layers:
- Function: Process the inputs from the input layer through various
transformations.
- Description: These layers perform intermediate computations. There can be
multiple hidden layers, each transforming the data further. The diagram shows two
hidden layers, but in practice, there could be many more.
3. Output Layer:
- Function: Produces the final output or prediction.
- Description: Each neuron in this layer represents a possible output (e.g.,
categories in classification tasks).
How ANN Works
1. Input Reception:
- Data enters through the input layer.
2. Forward Propagation:
- Data passes through the hidden layers. Each neuron in a layer connects to every
neuron in the next layer through weighted connections.
- Each connection has a weight, and neurons apply an activation function to the
weighted sum of inputs to decide the output.
3. Output Generation:
- The final processed data reaches the output layer, which provides the prediction
or classification.
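The three steps above can be sketched in a few lines of NumPy (the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not from the diagram):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Illustrative shapes: 4 input features -> two hidden layers of 5 -> 3 outputs.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 5)), np.zeros(5)
W3, b3 = rng.normal(size=(5, 3)), np.zeros(3)

x = rng.normal(size=4)          # input layer: one value per feature
h1 = sigmoid(x @ W1 + b1)       # hidden layer 1: weighted sum + activation
h2 = sigmoid(h1 @ W2 + b2)      # hidden layer 2: further transformation
output = sigmoid(h2 @ W3 + b3)  # output layer: one neuron per class
print(output)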
MP Neuron Model
Input Signals: These are the neuron's inputs, represented as x1, x2, ..., xn. Each
input has a corresponding weight.
Weights: Each input signal is associated with a weight w1, w2, ..., wn, representing
its importance. These weights are real numbers.
Summation Junction: This calculates the weighted sum of the inputs, denoted as
z = w1·x1 + w2·x2 + ... + wn·xn.
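In the MP neuron model the summation junction is followed by a fixed threshold; a minimal Python sketch (the threshold value and example inputs are illustrative):

def mp_neuron(inputs, weights, threshold=1.0):
    # Summation junction: weighted sum of the inputs.
    z = sum(w * x for w, x in zip(weights, inputs))
    # Fire (output 1) only if the weighted sum reaches the threshold.
    return 1 if z >= threshold else 0

print(mp_neuron([1, 0, 1], [0.5, 0.4, 0.6]))  # z = 1.1 -> output 1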
Perceptron Model
1. Input Signals: Similar to the MP neuron model, the perceptron receives input
signals (x1, x2, ..., xn).
2. Weights: Each input signal is associated with a weight (w1, w2, ..., wn),
indicating its importance. These weights can be positive or negative real numbers.
3. Bias: The perceptron model incorporates a bias term (b), which represents the
threshold for activation. It allows the perceptron to learn and make decisions even
when all inputs are zero.
4. Weighted Summation: Like the MP neuron model, the inputs are multiplied by
their corresponding weights and summed up. Additionally, the bias term is added
to the weighted sum: z = w1·x1 + w2·x2 + ... + wn·xn + b.
5. Activation (Step Function): If the weighted sum plus bias (z) is greater than or
equal to zero, the output is 1; otherwise, it's 0.
6. Learning: The perceptron learns by adjusting its weights and bias based on the
observed errors. It uses a learning algorithm, such as the perceptron learning rule,
to update the weights and bias iteratively until it achieves the desired output for a
given set of inputs.
Including a bias term enhances the flexibility and performance of the perceptron
model, enabling it to learn and make decisions more effectively, especially in
situations where the inputs may not perfectly represent the desired output.
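A minimal Python sketch of steps 1-5 above (the weights and bias are illustrative; as it happens, they implement an AND gate):

def perceptron_output(inputs, weights, bias):
    # Weighted summation plus bias: z = w1*x1 + ... + wn*xn + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step activation: output 1 if z >= 0, else 0.
    return 1 if z >= 0 else 0

print(perceptron_output([1, 1], [0.6, 0.6], bias=-1.0))  # z = 0.2  -> 1
print(perceptron_output([1, 0], [0.6, 0.6], bias=-1.0))  # z = -0.4 -> 0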
What are Activation functions? Explain the Binary, Bipolar, Continuous, and
Ramp activation functions.
Activation functions are mathematical functions used in artificial neural networks
to introduce non-linearity into the system, allowing the network to learn and
model complex relationships in the data. Here's an explanation of the four
functions:
1. Binary (Step) Function: Outputs 1 if the net input is greater than or equal to
zero, and 0 otherwise. Used in the basic perceptron.
2. Bipolar Function: Outputs +1 if the net input is greater than or equal to zero,
and -1 otherwise.
3. Continuous (Sigmoid) Function: f(z) = 1 / (1 + e^(-z)); a smooth, S-shaped
curve that maps any input to a value between 0 and 1, which makes gradient-based
learning possible.
4. Ramp Function: A piecewise-linear function: 0 for z < 0, z for 0 <= z <= 1, and
1 for z > 1; it rises linearly between the binary function's two output levels.
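A short Python sketch of the four functions (NumPy assumed; the sample inputs are illustrative):

import numpy as np

def binary_step(z):
    return np.where(z >= 0, 1, 0)       # 1 if z >= 0, else 0

def bipolar_step(z):
    return np.where(z >= 0, 1, -1)      # +1 if z >= 0, else -1

def continuous_sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))     # smooth output in (0, 1)

def ramp(z):
    return np.clip(z, 0.0, 1.0)         # 0 below 0, linear in [0, 1], 1 above 1

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (binary_step, bipolar_step, continuous_sigmoid, ramp):
    print(f.__name__, f(z))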
Perceptron Neural Network
The perceptron neural network is one of the simplest forms of artificial neural
networks, consisting of a single layer of neurons. Here's a brief overview:
1. Structure:
- Single-layer network with one or more neurons.
- Each neuron receives input signals, computes a weighted sum, adds a bias, and
applies an activation function to produce an output.
2. Functionality:
- Receives input data and produces a binary classification output (0 or 1).
- Learns to classify input data into two categories based on the training examples
provided.
3. Training:
- Uses supervised learning with the perceptron learning rule to adjust weights
and bias iteratively.
- The perceptron learning rule updates weights and bias based on the observed
errors between predicted and actual outputs.
4. Activation Function:
- Typically uses a step function as the activation function, producing binary
outputs based on the weighted sum and bias.
5. Applications:
- Simple binary classification tasks.
- Historically used in pattern recognition, such as character recognition or signal
detection.
6. Limitations:
- Limited to linearly separable problems; unable to learn non-linear decision
boundaries.
- Convergence is not guaranteed if the data is not linearly separable.
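A minimal sketch of training with the perceptron learning rule on a linearly separable toy problem (AND gate; the learning rate and epoch limit are illustrative). On a non-linearly-separable problem such as XOR, the same loop would never converge, as the limitations above note:

# Train a single perceptron on the AND gate (linearly separable).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):                      # illustrative epoch limit
    for x, t in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        y = 1 if z >= 0 else 0               # step activation
        error = t - y                        # perceptron learning rule:
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b = b + lr * error                   # w <- w + lr*(t - y)*x

print(w, b)  # learned weights and bias for AND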
Draw the Delta learning rule (LMS / Widrow-Hoff) model and explain it with a
training process flowchart.
1. Model Structure:
- A single linear neuron receives inputs (x1, x2, ..., xn) with weights
(w1, w2, ..., wn) and a bias (b).
2. Mathematical Representation:
- Weighted sum: z = w1·x1 + w2·x2 + ... + wn·xn + b, with linear output y = z.
3. Error Calculation:
- Calculate the error between the actual output (y) and the target output (t): (E = t
- y).
4. Weight Update:
- Update each weight (wi) and the bias (b) using the Delta rule:
wi = wi + η(t - y)xi and b = b + η(t - y), where η is the learning rate.
Training Process Flowchart:
1. Initialization:
- Initialize weights (wi) and bias (b) to small random values.
- Set learning rate and maximum number of epochs.
2. Forward Pass:
- Provide input data to the network.
- Calculate the weighted sum (z) and output (y).
3. Error Calculation:
- Calculate the error (E) between the actual output (y) and the target output (t).
4. Weight Update:
- Update weights and bias using the Delta rule based on the error:
wi = wi + η·E·xi and b = b + η·E.
5. Repeat:
- Repeat steps 2-4 for each training example or batch of examples.
6. Epoch Completion:
- Repeat training for multiple epochs or until convergence.
7. Evaluation:
- Test the trained model on validation or test data to evaluate performance.
8. Termination:
- Stop training based on convergence criteria or maximum number of epochs.
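The whole flowchart condenses into a short NumPy sketch of LMS training (the toy data, learning rate, and epoch count are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                 # toy inputs
t = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5      # toy linear targets

w, b, lr = np.zeros(2), 0.0, 0.05            # 1. initialization
for epoch in range(100):                     # 6. epoch loop
    for x, target in zip(X, t):
        y = x @ w + b                        # 2. forward pass (linear output)
        error = target - y                   # 3. error calculation: E = t - y
        w += lr * error * x                  # 4. Delta rule weight update
        b += lr * error
print(w, b)  # approaches [2.0, -1.0] and 0.5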
Draw a block diagram of the Error Backpropagation Algorithm and explain the
Error Backpropagation concept with a flowchart.
Block Diagram:
1. Input Layer:
- Nodes represent input features.
- Each node corresponds to an input feature of the dataset.
2. Hidden Layers:
- Intermediate layers between the input and output layers.
- Each layer consists of multiple nodes/neurons.
- Nodes perform weighted summation and apply an activation function.
3. Output Layer:
- Nodes represent output classes or regression values.
- Each node computes the final output of the network.
Training Process Flowchart:
1. Initialization:
- Initialize weights and biases to small random values; set the learning rate and
maximum number of epochs.
2. Forward Pass:
- Provide input data to the network.
- Compute the output of each neuron in each layer through forward propagation.
3. Error Calculation:
- Compare the network's output with the actual target values to compute the
error.
4. Backward Pass (Backpropagation):
- Propagate the error backward through the network.
- Update weights using gradient descent to minimize the error.
5. Repeat:
- Repeat steps 2-4 for each training example or batch of examples.
6. Epoch Completion:
- Repeat training for multiple epochs or until convergence.
7. Evaluation:
- Test the trained model on validation or test data to evaluate performance.
8. Termination:
- Stop training based on convergence criteria or maximum number of epochs.
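A minimal NumPy sketch of the full backpropagation loop (one hidden layer, sigmoid activations, squared error; all sizes, seeds, and hyperparameters are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
lr = 0.5

for epoch in range(5000):
    # Forward pass: compute every layer's output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Error calculation and backward pass via the chain rule.
    dY = (Y - T) * Y * (1 - Y)          # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)      # error propagated backward
    # Gradient-descent weight updates to minimize the error.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # should approach [[0], [1], [1], [0]]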