
20MC317 & DEEP LEARNING TECHNIQUES

PART A - (10 X 2 = 20 marks)


1. What type of algorithm is Perceptron?
The Perceptron algorithm is a two-class (binary) classification machine learning algorithm. It is
perhaps the simplest type of neural network model: it consists of a single node or neuron that
takes a row of data as input and predicts a class label.
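A minimal sketch of the Perceptron decision rule, assuming NumPy inputs; the weights, bias, and data here are purely illustrative:

```python
import numpy as np

def perceptron_predict(x, w, b):
    # Weighted sum of the inputs plus bias, followed by a step activation:
    # predict class 1 if the activation is non-negative, else class 0.
    activation = np.dot(w, x) + b
    return 1 if activation >= 0.0 else 0

# Example: fires when the sum of the two inputs exceeds 1 (illustrative values)
print(perceptron_predict(np.array([0.7, 0.6]), np.array([1.0, 1.0]), b=-1.0))  # 1
```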
2. Define L2 Regularization
L2 regularization acts like a force that shrinks each weight by a small percentage at every iteration;
therefore, the weights will never become exactly zero. L2 regularization penalizes the squared weight,
(weight)². There is an additional parameter that tunes the strength of the L2 term, called the
regularization rate (lambda).
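As an illustration (not part of the original answer), the penalized loss can be computed as below, where `lam` plays the role of the regularization rate lambda:

```python
import numpy as np

def l2_penalized_loss(data_loss, weights, lam):
    # Total loss = data loss + lambda * sum of squared weights.
    # The penalty pushes weights toward zero but never makes them exactly zero.
    return data_loss + lam * np.sum(weights ** 2)

print(l2_penalized_loss(0.5, np.array([0.3, -0.2, 0.1]), lam=0.01))  # 0.5014
```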
3. State the motivation of family of function defined by multi-layer neural network.
An activation function is a function added to an artificial neural network to help the network
learn complex patterns in the data. Comparing with a neuron-based model like the one in our
brains, the activation function is what ultimately decides what is to be fired to the next neuron.
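For concreteness, a small sketch of two widely used activation functions (the values in the comments are approximate):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Passes positive inputs through unchanged and zeroes out the rest.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # approximately [0.119 0.5 0.881]
print(relu(z))     # [0. 0. 2.]
```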
4. What is vanishing gradient and exploding gradient problem?
Vanishing: The parameters of the higher layers change significantly, whereas the parameters of the
lower layers change very little (or not at all).
Exploding: There is an exponential growth in the model parameters.

5. What is transfer learning?


Transfer learning is a machine learning method where we reuse a pre-trained model as the starting
point for a model on a new task.
To put it simply, a model trained on one task is repurposed on a second, related task, as an
optimization that allows rapid progress when modeling the second task.
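A minimal transfer-learning sketch in Keras; the choice of MobileNetV2, the input size, and the 10-class head are assumptions made purely for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Reuse a pre-trained convolutional base and freeze its weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Add a small new head for the second, related task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),  # 10 classes is an assumption
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```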
6. What is Pooling on CNN, and How Does It Work?
The pooling operation involves sliding a two-dimensional filter over each channel of the feature map
and summarizing the features lying within the region covered by the filter. A common CNN
model architecture is to have a number of convolution and pooling layers stacked one after the other.
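A small NumPy sketch of 2x2 max pooling with stride 2 on one channel, written from scratch for illustration (real frameworks provide optimized pooling layers):

```python
import numpy as np

def max_pool_2x2(feature_map):
    # Slide a 2x2 window with stride 2 and keep the maximum in each region.
    h, w = feature_map.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            out[i // 2, j // 2] = feature_map[i:i + 2, j:j + 2].max()
    return out

fm = np.array([[1, 3, 2, 4],
               [5, 6, 1, 2],
               [7, 2, 9, 1],
               [3, 4, 1, 8]])
print(max_pool_2x2(fm))  # [[6. 4.] [7. 9.]]
```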
7. Mention the advantages of LSTM over GRU.
Comparing the working of the two layer types, a GRU uses fewer training parameters and therefore
uses less memory and executes faster than an LSTM, whereas an LSTM is more accurate on larger datasets.
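The parameter-count difference is easy to check in Keras; in this illustrative sketch, a GRU of the same width carries fewer weights than the LSTM because it has three gated transforms instead of four:

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(None, 32))       # sequences of 32-dim vectors (assumed sizes)
lstm = tf.keras.Model(inp, tf.keras.layers.LSTM(64)(inp))
gru = tf.keras.Model(inp, tf.keras.layers.GRU(64)(inp))

# LSTM has 4 gates/candidates, GRU only 3, so GRU needs fewer parameters.
print(lstm.count_params())  # 24832
print(gru.count_params())   # 18816 (with reset_after=True, the Keras default)
```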
8. What is an auto-encoder? Why do we "auto-encode"?
An autoencoder encodes the input presented at its visible input nodes into a hidden representation,
and then decodes that representation at the visible output nodes so that the output is (nearly)
identical to the input. It is not a pure unsupervised deep learning algorithm; it is a
self-supervised deep learning algorithm, since the input itself serves as the training target.
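A minimal autoencoder sketch in Keras, assuming flattened 784-dimensional inputs (e.g., 28x28 images); the 32-unit bottleneck is an arbitrary illustrative choice:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Encoder compresses the input; decoder reconstructs it.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),      # hidden "code" layer (assumed size)
    layers.Dense(784, activation="sigmoid"),  # reconstruction of the input
])
# Self-supervised: the input itself is the training target.
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)  # x is both input and label
```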
9. What is bias and variance? Explain the Goodness of fit with diagram.
What is bias?
Bias is the difference between the average prediction of our model and the correct value which we are
trying to predict.
What is variance?
Variance is the variability of the model's predictions for a given data point; it tells us how
spread out our predictions are.
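In symbols (standard textbook definitions, added here for clarity), with y the true value and ŷ the model's prediction:

```latex
\mathrm{Bias}[\hat{y}] = \mathbb{E}[\hat{y}] - y,
\qquad
\mathrm{Var}[\hat{y}] = \mathbb{E}\left[\left(\hat{y} - \mathbb{E}[\hat{y}]\right)^{2}\right]
```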
10. List the applications of a Recurrent Neural Network (RNN)?
 Prediction problems
 Language Modelling and Generating Text
 Machine Translation
 Speech Recognition
 Generating Image Descriptions
 Video Tagging
 Text Summarization
 Call Center Analysis
PART B - (5 X 16 = 80 marks)
11. (a) List the types of Backpropagation. Explain in detail the working principle of
Backpropagation Neural Networks.
Types of Backpropagation Networks
Two Types of Backpropagation Networks are:
 Static Back-propagation
 Recurrent Backpropagation
Static back-propagation:
 It is one kind of backpropagation network which produces a mapping from a static input
to a static output. It is useful for solving static classification problems such as optical
character recognition.
Recurrent Backpropagation:
 In recurrent backpropagation, activations are fed forward until a fixed (stable) value is
achieved. After that, the error is computed and propagated backward.
The main difference between these two methods is that the mapping is rapid in static back-
propagation, while it is non-static in recurrent backpropagation.
What is Backpropagation?
 Backpropagation is the essence of neural network training. It is the method of fine-
tuning the weights of a neural network based on the error rate obtained in the previous
epoch (i.e., iteration).
 Proper tuning of the weights allows you to reduce error rates and make the model
reliable by increasing its generalization.
 “Backpropagation” is short for “backward propagation of errors.” It is a standard
method of training artificial neural networks.
 This method helps calculate the gradient of a loss function with respect to all the
weights in the network.
How Backpropagation Algorithm Works
 The backpropagation algorithm in a neural network computes the gradient of the loss
function for a single weight by the chain rule.
 It efficiently computes one layer at a time, unlike a naive direct computation.
 It computes the gradient, but it does not define how the gradient is used. It generalizes
the computation in the delta rule.
1. Inputs X arrive through the preconnected path.
2. The input is modelled using real weights W. The weights are usually randomly selected.
3. Calculate the output for every neuron, from the input layer through the hidden layers to the
output layer.
4. Calculate the error in the outputs:
Error = Actual Output − Desired Output
5. Travel back from the output layer to the hidden layers to adjust the weights such that the
error is decreased.
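A compact NumPy sketch of one backpropagation step for a network with one hidden layer and sigmoid activations; the sizes, data, and learning rate are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3))                           # 4 samples, 3 input features
y = rng.random((4, 1))                           # desired outputs
W1, W2 = rng.random((3, 5)), rng.random((5, 1))  # randomly selected weights
lr = 0.1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Forward pass: input layer -> hidden layer -> output layer.
h = sigmoid(X @ W1)
out = sigmoid(h @ W2)

# Error at the output, then gradients via the chain rule.
err = out - y                          # actual - desired
d_out = err * out * (1 - out)          # sigmoid derivative at the output
d_h = (d_out @ W2.T) * h * (1 - h)     # error propagated back to the hidden layer

# Travel back and adjust the weights to decrease the error.
W2 -= lr * h.T @ d_out
W1 -= lr * X.T @ d_h
```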
Why We Need Backpropagation?
Most prominent advantages of Backpropagation are:
 Backpropagation is fast, simple and easy to program.
 It has no parameters to tune apart from the number of inputs.
 It is a flexible method as it does not require prior knowledge about the network.
 It is a standard method that generally works well.
 It does not need any special mention of the features of the function to be learned.

OR
11. (b) Describe any two methods of regularization in deep learning.
https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/
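As a hedged illustration of two such methods, the Keras sketch below combines an L2 weight penalty with dropout; the layer sizes and rates are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(20,)),
    # Method 1: L2 regularization penalizes the squared weights.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    # Method 2: Dropout randomly silences units during training.
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```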
12. (a) Explain how to fix the vanishing gradient problem using ReLU.
OR
12. (b) Illustrate Gradient Descent with Nesterov Momentum.

13. (a) Explain Transfer Learning for Multi-Class Image Classification using a Deep Convolutional
Neural Network.
OR
13. (b) Write a short note on layer patterns and layer size patterns.

14. (a) Illustrate the working of Generative Adversarial Networks in detail
OR
14. (b) i. Compare Denoising with Contractive Autoencoders.
ii. Write the similarities and differences between Gated Recurrent Unit and RNN. With a
neat sketch explain GRU.
https://medium.com/analytics-vidhya/rnn-vs-gru-vs-lstm-863b0b7b1573
https://www.analyticsvidhya.com/blog/2021/03/introduction-to-gated-recurrent-unit-gru/

15. (a) Explain SSD and YOLO for object detection. Spot the differences between these approaches
and Faster R-CNN, and also write the usage scenario of each.
https://cv-tricks.com/object-detection/faster-r-cnn-yolo-ssd/
OR
15. (b) Explain how deep learning techniques enhance image processing.

******************
