
UNIT – II: NEURAL NETWORKS

Important Questions – 13 and 15 Mark Questions

1. Supervised Learning Neural Networks


2. Backpropagation
3. Multilayer Perceptrons / MLP

For the above 3 questions, only the definitions and diagrams will change.

The same notes apply for Architecture, Training Process, Applications, Advantages and Disadvantages.
4. Single Layer Perceptrons / SLP / Perceptrons
5. Kohonen’s Self-Organizing Map (SOM) Networks
6. Unsupervised Learning Neural Networks

For Question 6:
• Write the definition of Unsupervised Learning NN, its types and diagrams.
• Then write the Architecture, Training Process, Applications, Advantages and Disadvantages notes given for Kohonen’s SOM.
• You just need to replace the words ‘Kohonen’s SOM’ with ‘Unsupervised Learning NN’.
7. Unsupervised Learning NN Vs Supervised Learning NN – comparison / differences

In total, you need to learn only 4 questions to attempt the 13/15 mark questions of this unit.
Topic: 1. Supervised Learning Neural Networks
Or
Topic: 2. Backpropagation
Or
Topic: 3. Multilayer Feedforward Neural Network (MFNN) /
Multilayer Perceptrons (MLP)
You should write the Definition, Architecture/Structure, Training Process, Important Training Hyperparameters (any 4 parameters), common types of Activation Functions, Applications, Advantages and Disadvantages (3 each).

(The Architecture/Structure, Training Process, Important Training Hyperparameters, common Activation Function types, Applications, Advantages and Disadvantages are the same for all 3 topics; only the definition and diagrams will change.)

If Supervised Learning Neural Networks is asked, learn the below definitions and diagrams

Definition of Supervised Learning Neural Networks:

• Supervised neural networks are a type of artificial neural network that is trained on a labeled dataset to learn the mapping between input data and the corresponding output labels.
• The network is "supervised" because it is trained on labeled data, meaning the correct output is already known.
If Backpropagation is asked, learn the below definitions and diagrams
Definition of Backpropagation:

• Backpropagation is an algorithm that trains neural networks by refining weights through a two-step process:
(1) A forward pass that applies the weights and biases to the input data to produce predictions, and
(2) A backward pass for error correction, to minimize the error between the network's output and the desired output.
• Backpropagation is a supervised learning algorithm used to train artificial neural networks by adjusting the weights and biases to minimize the error.
• It is applicable to Multilayer Feedforward Neural Networks, where data flows in only one direction, from input to output, without feedback loops.
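A minimal worked sketch of the two passes (assuming a single neuron with sigmoid activation $\sigma$ and squared-error loss; this notation is illustrative, not from the notes):

Forward pass: $z = wx + b$, $\hat{y} = \sigma(z)$, $E = \tfrac{1}{2}(y - \hat{y})^2$

Backward pass (chain rule): $\frac{\partial E}{\partial w} = (\hat{y} - y)\,\sigma'(z)\,x$ and $\frac{\partial E}{\partial b} = (\hat{y} - y)\,\sigma'(z)$

Update: $w \leftarrow w - \eta\,\frac{\partial E}{\partial w}$, $b \leftarrow b - \eta\,\frac{\partial E}{\partial b}$, where $\eta$ is the learning rate.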
If MLP is asked, learn the below definitions and diagrams:
Definition of Multilayer Perceptrons:

• A Multilayer Perceptron (MLP) is a type of artificial neural network (ANN) that consists of an input layer, one or more hidden layers, and an output layer.
• Each layer is made up of neurons (or nodes), and each neuron in a layer is connected to every neuron in the previous and next layers.
• It is a feedforward neural network, where the data flows in only one direction, from the input layer to the output layer, without feedback loops.
• The primary purpose of an MLP is to learn complex patterns and relationships between inputs and outputs.
BELOW ARE COMMON SECTIONS FOR SUPERVISED LEARNING NEURAL NETWORKS /
BACKPROPAGATION / MLP

Architecture Structure of Supervised Learning Neural Networks / Backpropagation / MLP:

(1). Neurons: Basic computing units of the network, receiving inputs, applying weights and biases, and producing outputs.
(2). Layers:
1. Input Layer:

- Receives input data from external sources.

- Number of neurons equals the number of features in the input data (Example: in image recognition, pixel values, detected edges, and shape features are the input features)

- No computations are performed in this layer.

2. Hidden Layers:

- Process inputs received from the previous layer.

- The network can have multiple hidden layers, each with a different number of neurons, which can perform complex computations.

- Each neuron in a hidden layer computes a weighted sum of its inputs, then applies a non-linear activation function such as sigmoid, ReLU, or tanh.

3. Output Layer:

- Produces predicted outputs based on transformed input features

- Number of neurons equals the number of output features we need to predict.


- For example, in a classification problem with three classes, the output layer will have three
neurons.

- Uses a non-linear activation function for classification tasks or a linear activation function for
regression tasks.
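As a minimal illustration of the layer sizing described above (a hypothetical 4-feature, 3-class classification problem; the sizes are examples, not from the notes):

```python
import numpy as np

n_features, n_hidden, n_classes = 4, 8, 3   # input / hidden / output layer sizes

# Weight matrices: their shapes follow directly from the layer sizes.
W1 = np.zeros((n_features, n_hidden))       # input layer -> hidden layer
W2 = np.zeros((n_hidden, n_classes))        # hidden layer -> output layer (one neuron per class)

x = np.zeros(n_features)                    # one input sample; the input layer performs no computation
```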

Training Process of Supervised Learning Neural Networks / Backpropagation / MLP:


[Compulsory]

1. Data Preparation:

- Collect and preprocess data

- Split data into training, validation, and testing sets

2. Model Initialization:
- Define ANN architecture (number of layers, neurons, etc.)
- Initialize weights and biases randomly

- Choose activation functions; set the learning rate and regularization hyperparameters

3. Forward Propagation / Feedforward Pass:

- Pass input data into the Input layer.

- Data passes through hidden layers, where each neuron computes a weighted sum of inputs, adds
a bias, and applies an activation function (e.g., ReLU, Sigmoid).

- The Output layer produces the Network’s output.

4. Loss Function:

- Loss Function: Compare the predicted output with the actual output using a loss function (e.g.,
Mean Squared Error, Cross-Entropy).

- Error Calculation: Compute the error (loss) for the network's prediction.

5. Backward Propagation:

- Gradient Calculation: Use the chain rule to calculate error gradients with respect to the weights and biases.

- Gradient Descent: Adjust the weights and biases using an optimization algorithm to minimize the error.

6. Optimization Algorithm:

- Choose an optimization algorithm such as Gradient Descent, the Adam optimizer, etc.

- Adjust weights and biases based on error gradients and learning rate iteratively

7. Training Loop:

- Repeat the forward propagation, backward propagation, and optimization steps for multiple iterations until the error between the actual and desired outputs is minimized.

- Monitor training loss and validation loss

8. Model Evaluation:
- Evaluate model performance on Validation data

- Calculate metrics (e.g., accuracy, precision, recall, F1-score)

- Use techniques like Regularization or dropout to improve performance.

9. Prediction:

- Deployment: Once trained and validated, the ANN can be used for making predictions on
new data
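A minimal runnable sketch of this whole loop (illustrative only: a tiny NumPy MLP with one hidden layer, sigmoid activations, mean-squared-error loss, and plain gradient descent on a toy XOR dataset; all sizes and values are assumptions, not prescribed by the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1. Data preparation: XOR as a toy labeled dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2. Model initialization: random weights, zero biases, chosen learning rate.
W1 = rng.normal(0, 0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, size=(3, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):                 # 7. training loop
    # 3. Forward propagation: weighted sums + activations, layer by layer.
    h = sigmoid(X @ W1 + b1)              # hidden layer outputs
    y_hat = sigmoid(h @ W2 + b2)          # network outputs

    # 4. Loss function: mean squared error between prediction and label.
    loss = np.mean((y_hat - y) ** 2)

    # 5. Backward propagation: the chain rule gives the error gradients
    #    (constant factors are folded into the learning rate).
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # gradient at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)        # gradient at the hidden layer

    # 6. Optimization: plain gradient descent update of weights and biases.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

# 9. Prediction with the trained network (here on the same toy inputs).
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```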
Important Training Hyperparameters of Supervised Learning Neural Networks /
Backpropagation / MLP [Compulsory – at least 4]

1. Batch Size: Controls the number of samples in each batch. Typical values: 32, 64, 128

2. Number of Epochs: Controls the number of iterations through the training data. Typical values:
10, 50, 100

3. Learning Rate: Controls how quickly the model learns. Typical values: 0.01, 0.001, 0.0001

4. Optimizer: Algorithm used to update the weights and biases. Examples: Stochastic Gradient Descent (SGD), Adam optimizer, RMSProp

5. Initialization: Method for initializing weights and biases. Examples: Random, Xavier, Kaiming

6. Momentum: Parameter used in optimization algorithms to help the model escape local minima and converge faster.

7. Regularization Parameters (e.g., L2 penalty): Used to prevent overfitting by adding a penalty to the loss function.

8. Dropout Rate: The fraction of neurons to drop during training to prevent overfitting.
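These hyperparameters are often collected into a single configuration. A hypothetical example (the names and values below are illustrative, not prescribed by the notes):

```python
# Hypothetical training configuration with typical hyperparameter values.
config = {
    "batch_size": 64,        # number of samples per batch
    "epochs": 50,            # passes through the training data
    "learning_rate": 0.001,  # step size for weight updates
    "optimizer": "adam",     # e.g., "sgd", "adam", "rmsprop"
    "weight_init": "xavier", # e.g., "random", "xavier", "kaiming"
    "momentum": 0.9,         # used by momentum-based optimizers
    "l2_penalty": 1e-4,      # regularization strength
    "dropout_rate": 0.2,     # fraction of neurons dropped during training
}
```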

Activation functions: Common Types (compulsory)


(1). ReLU: Output values: 0 to infinity.
(2). Leaky ReLU: Output values: -infinity to infinity (a small negative slope is applied to negative inputs).
(3). Sigmoid: Output values: 0 to 1.
(4). Tanh (Hyperbolic Tangent): Output values: -1 to 1.
(5). Softmax: Output values: 0 to 1 (outputs sum to 1).
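A minimal NumPy sketch of these functions (illustrative definitions; the stability trick in softmax is an implementation detail, not from the notes):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)   # small slope alpha for negative inputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))              # subtract max for numerical stability
    return e / e.sum()                     # outputs lie in (0, 1) and sum to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), leaky_relu(z), sigmoid(z), np.tanh(z), softmax(z))
```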

Common MLP (Multi-Layer Perceptron) Architectures:

1) 2-Layer MLP:
- Input Layer
- Hidden Layer (1)
- Output Layer
2) 3-Layer MLP:
- Input Layer
- Hidden Layer (1)
- Hidden Layer (2)
- Output Layer
3) Deep MLP:
- Input Layer
- Multiple Hidden Layers (3 or more)
- Output Layer
Applications of Supervised Learning Neural Networks / Backpropagation / MLP:

1. Classification: Categorizing data into different classes. Example: email spam detection, classifying an email as "spam" or "not spam" based on features such as the sender domain.
2. Regression: Predicting a continuous output value. Example: predicting housing prices based on features such as location and size.
3. Feature Learning: Learning representations from raw data. Example: recommendation systems.
4. Pattern Recognition: Detecting complex patterns in data. Example: identifying patterns in image pixel arrangements to classify digits from 0 to 9.

Advantages of Supervised Learning Neural Networks / Backpropagation / MLP:

1. High accuracy: Can achieve high accuracy with large datasets.

2. Flexibility: Can learn complex relationships.


3. Interpretability: Can provide insights into learned features.

Disadvantages of Supervised Learning Neural Networks / Backpropagation / MLP:

1. Requires labeled data: Requires large amounts of labeled data.

2. Overfitting: Can suffer from overfitting if not regularized.

3. Computational resources: Requires significant computational resources.


Topic: Perceptrons / McCulloch–Pitts Neuron / Rosenblatt’s Perceptron / Single Layer Feedforward Networks / Binary Classifier / Threshold Logic Unit (TLU) / Linear Threshold Unit (LTU) / Single Layer Perceptrons (SLP)
Explain the concepts of Perceptrons / McCulloch–Pitts Neuron / Rosenblatt’s Perceptron / Single Layer Feedforward Networks / Binary Classifier / Threshold Logic Unit (TLU) / Linear Threshold Unit (LTU) / Single Layer Perceptrons (SLP):

1. Definition
2. Perceptron Convergence Theorem - important
3. Architecture / Structure – Inputs, weights, Bias, Activation function
4. Perceptron Activation function – Diagram of Step, Sign and Sigmoid functions
5. Training Process – Initialization, Input and weighted sum (formula), Activation
function, error calculation (formula), Weight and Bias update (formula), Iteration.
[Learning rate]
6. Variants of Perceptron – MLP, kernel, voting, Ensemble
7. Applications, Advantages and Disadvantages – 2 points each

Definition of Perceptrons / McCulloch–Pitts Neuron / Rosenblatt’s Perceptron / Single Layer Feedforward Networks / Binary Classifier / Threshold Logic Unit (TLU) / Linear Threshold Unit (LTU) / Single Layer Perceptrons (SLP):

• A perceptron is a type of artificial neuron that mimics the function of a biological neuron.
• A perceptron is used for supervised learning. This means it is trained on labeled data, where each input is paired with the correct output.
• It is used to classify linearly separable data.
• Feedforward: The data flows in one direction, from input to output, without any feedback cycles or loops.
• Single-Layer: It has only one layer of nodes (neurons) that process the input and produce the output.
• Binary Classification: Typically used for binary classification tasks, where the output is either 0 or 1. Example: spam vs. non-spam emails.
• Perceptrons use a threshold activation function: a step function that returns 1 if the input is positive and 0 if it is negative or zero.
• Perceptron Convergence Theorem: If the training data is linearly separable, the perceptron learning rule is guaranteed to converge to a set of weights that correctly classifies all training examples in a finite number of steps.
Architecture of Perceptron
Perceptron Activation Function:

Training Process in Single-Layer Perceptron (SLP):
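Filling in the steps named in the outline above with the standard perceptron learning rule (the notation is an assumption, as the notes give only the step names):

1. Initialization: Set the weights $w_i$ and bias $b$ to zero or small random values; choose a learning rate $\eta$.
2. Input and Weighted Sum: $z = \sum_i w_i x_i + b$.
3. Activation Function: $\hat{y} = 1$ if $z > 0$, else $0$ (step function).
4. Error Calculation: $e = y - \hat{y}$ (target output minus predicted output).
5. Weight and Bias Update: $w_i \leftarrow w_i + \eta\, e\, x_i$ and $b \leftarrow b + \eta\, e$.
6. Iteration: Repeat steps 2–5 for every training sample until the error is zero for all samples or a maximum number of epochs is reached.

A minimal runnable sketch of this rule (illustrative only, using NumPy on a toy AND dataset):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy AND inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # AND labels

w = np.zeros(2); b = 0.0; lr = 0.1         # initialization
for epoch in range(20):                    # iteration
    for xi, yi in zip(X, y):
        z = w @ xi + b                     # weighted sum
        y_hat = 1.0 if z > 0 else 0.0      # step activation
        e = yi - y_hat                     # error
        w += lr * e * xi                   # weight update
        b += lr * e                        # bias update
print(w, b)                                # learned parameters
```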


Variants of Perceptron:

1. Multilayer Perceptron (MLP)


2. Kernel Perceptron
3. Voting Perceptron
4. Ensemble Perceptron

Advantages of Perceptrons:

1. Simple to Implement
2. Fast Training for small datasets.
3. Suited for Binary Classification tasks, such as spam vs. non-spam emails.
4. Perceptrons can learn linearly separable patterns
5. Perceptrons are fundamental building blocks for Multi-Layer Perceptrons (MLPs).

Disadvantages of Perceptrons:

1. Perceptrons can only learn linearly separable patterns.
2. Perceptrons fail to converge on non-linearly separable data.
3. Not suitable for multi-class classification; suited only for binary classification tasks.
Applications of Perceptrons:

1. Image classification

2. Speech recognition

3. Natural Language Processing

4. Bioinformatics
Topic: Kohonen Self-Organizing Networks
Explain the concepts of Kohonen Self-Organizing Networks:

1. Definition
2. Architecture / Structure – Input Layer, Competitive layers – loss function, Output
layers, Neurons, Weights and Bias, Activation functions, Neighborhood functions,
Optimization algorithms
3. Training Process – Initialization, Input data, BMU, Update weights, Learning rate
and Neighborhood radius, Iteration
4. SOM variants – Kohonen, Modified, Growing, Hierarchical, Supervised SOM
5. Important Training Hyperparameters – epochs, learning rate, Momentum,
Optimizer, batch size
6. Applications, Advantages and Disadvantages – 2 points each

Definition:

• Kohonen Self-Organizing Networks are unsupervised learning neural networks that map high-dimensional input data onto a lower-dimensional output space.
• Used in data visualization, clustering, pattern recognition, feature extraction, and detecting anomalies in data clusters.
• Unsupervised Learning: No target outputs or labels are provided; the network discovers hidden patterns in the data.
• Self-Organizing: The network adapts and adjusts without external guidance.
• Dimensionality Reduction: High-dimensional input data (many features) is reduced to a lower-dimensional output space (fewer features) while preserving important features and discarding unwanted information.
Architecture of SOM: (compulsory)

1. Input Layer:

- Receives high-dimensional input data from external sources.

- Number of neurons equals the number of input features. (Dimensionality)


- No computations are performed in this layer.

2. Competitive Layers: Loss Function

- Self-Organizing Maps (SOMs) use a competitive learning approach in place of a conventional loss function to update the weights of the neurons during the training process.
- First, calculate the Euclidean distance between the input vector and each neuron's weight vector (Competition).
- Select the neuron with the smallest distance (the "winner") as the Best Matching Unit (BMU) (Winner Selection).
- Adjust the weights of the BMU and its neighboring nodes to move them closer to the input vector, using:
  • Hebbian learning
  • Competitive learning
  • Adaptive weight updates
3. Output Layer:

- Maps winning neurons to output space (lower-dimensional representation)

- Preserves topology and neighborhood relationships.

4. Neurons (Nodes):

- Basic computing units that apply activations to inputs

- Each neuron receives inputs from the previous layer, applies weights, biases, and activation
functions, and passes outputs to the next layer

5. Weights and Biases:

- Adjustable parameters that determine neural connections and node outputs


- Weights are multiplied with inputs, and biases are added to the weighted sum

6. Activation Functions:

- Non-linear functions applied to weighted sum of node inputs, such as sigmoid, ReLU, or tanh,
to produce outputs

- Introduce non-linearity, enabling the network to learn complex relationships

7. Neighborhood Functions:

- Used to update the weights of the Best Matching Unit (BMU) and its neighbors.

8. Optimization Algorithms:

- Used to adjust the weights and biases to minimize the error.

- Examples: Gradient Descent, Adam, RMSprop.


Training Process for Self-Organizing Map (SOM): (compulsory)

1. Initialization: Randomly initialize the weights of the SOM nodes.

2. Input Data: Randomly select an input vector from the dataset.


3. Best Matching Unit (BMU):

- Calculate the distance between the input vector and the weight vectors of all SOM nodes.

- Identify the node with the smallest distance (the BMU).

4. Update Weights: Adjust the weights of the BMU and its neighboring nodes using a learning rate
and neighborhood function.

5. Learning Rate and Neighborhood Radius: Both the learning rate and the neighborhood radius decrease over time.

6. Iteration: Repeat the process for many iterations until the SOM converges.
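A minimal runnable sketch of this loop (illustrative only: a small 2-D SOM grid in NumPy with a Gaussian neighborhood function and linearly decaying learning rate and radius; the grid size, data, and schedule are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 5, 5, 3                  # 5x5 output map, 3-D input vectors
W = rng.random((grid_h, grid_w, dim))          # 1. random weight initialization
X = rng.random((200, dim))                     # toy unlabeled dataset

# Grid coordinates of every node, used by the neighborhood function.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

n_iters = 1000
lr0, radius0 = 0.5, 2.5
for t in range(n_iters):
    x = X[rng.integers(len(X))]                # 2. pick a random input vector
    # 3. BMU: the node whose weight vector is closest (Euclidean distance).
    dists = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 5. Learning rate and neighborhood radius decay over time.
    lr = lr0 * (1 - t / n_iters)
    radius = radius0 * (1 - t / n_iters) + 1e-9
    # 4. Gaussian neighborhood: the BMU and nearby nodes move the most.
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_d2 / (2 * radius ** 2))[..., None]
    W += lr * h * (x - W)                      # pull weights toward the input
```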

SOM Variants

1. Kohonen SOM
2. Modified SOM
3. Growing SOM
4. Hierarchical SOM
5. Supervised SOM

Applications:

• Data Visualization
• Clustering
• Pattern Recognition
• Feature Extraction
• Anomaly Detection
SOM Advantages

1. SOMs can interpret complex data


2. SOMs can maintain the topological properties of the input space
3. SOMs do not require labeled data for training
4. SOMs can effectively cluster similar data points together
5. SOMs reduce the dimensionality of data while preserving its structure

SOM Disadvantages
1. SOMs are computationally expensive for large datasets
2. It can be challenging to determine the appropriate input weights
3. SOM training may converge to local optima.
4. Incorrect Initial weights affect SOM performance.
5. SOM theory is still evolving and requires expertise to interpret
Topic: Unsupervised Learning Neural Networks
Definition:

• Unsupervised Learning Neural Networks are a type of artificial neural network that learns patterns in data without prior knowledge of the output variables (i.e., without using labeled examples).
• These networks identify hidden structures, features, and representations in the input data.
• This is in contrast to supervised learning, where the network is trained using labeled input-output pairs.

ARCHITECTURE, TRAINING PROCESS, APPLICATIONS, ADVANTAGES, DISADVANTAGES – SAME AS KOHONEN’S SOM (JUST REPLACE THE WORDS ‘KOHONEN’S SOM’ WITH ‘UNSUPERVISED LEARNING NN’)
Unsupervised Learning NN Vs Supervised Learning NN
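A summary comparison, drawn from the definitions given above:

• Training Data: Supervised NNs are trained on labeled input-output pairs; unsupervised NNs are trained on unlabeled data.
• Goal: Supervised NNs learn a mapping from inputs to known outputs; unsupervised NNs discover hidden structures, features, and representations.
• Error Signal: Supervised NNs minimize the error between predicted and desired outputs (e.g., via backpropagation); unsupervised NNs adapt without an external error signal (e.g., competitive learning in SOMs).
• Examples: Supervised – MLP, backpropagation networks; Unsupervised – Kohonen’s SOM.
• Applications: Supervised – classification, regression; Unsupervised – clustering, data visualization, anomaly detection.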
