Deep Learning - Week 11
Uploaded by Vj Br
© All Rights Reserved

10/24/24, 10:10 AM  Deep Learning - IIT Ropar - Unit 14 - Week 11
NPTEL (https://swayam.gov.in/explorer?ncCode=NPTEL) » Deep Learning - IIT Ropar (course)
(https://onlinecourses.nptel.ac.in/noc24_cs114/unit?unit=150&assessment=299)

Week 11 : Assignment 11

The due date for submitting this assignment has passed.
Due on 2024-10-09, 23:59 IST.
Assignment submitted on 2024-10-09, 14:59 IST.
1) What is the basic concept of a Recurrent Neural Network? (1 point)

Use a loop between inputs and outputs in order to achieve better prediction
Use recurrent features from the dataset to find the best answers
Use loops between the most important features to predict the next output
Use previous inputs to find the next output according to the training set

Yes, the answer is correct. Score: 1
Accepted Answers: Use previous inputs to find the next output according to the training set
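The accepted answer can be sketched as a minimal recurrent step: the hidden state carries information from previous inputs forward, so each output depends on everything seen so far. A NumPy sketch (all sizes and weight names here are illustrative, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4-dim inputs, 3-dim hidden state, 2-dim output.
U = rng.standard_normal((3, 4))   # input-to-hidden weights
W = rng.standard_normal((3, 3))   # hidden-to-hidden (recurrent) weights
V = rng.standard_normal((2, 3))   # hidden-to-output weights

def rnn_forward(xs):
    """Run a vanilla RNN over a sequence of input vectors."""
    s = np.zeros(3)               # initial hidden state
    outputs = []
    for x in xs:
        # The new state mixes the current input with the previous state,
        # so every output depends on all previous inputs.
        s = np.tanh(U @ x + W @ s)
        outputs.append(V @ s)
    return outputs

outs = rnn_forward([rng.standard_normal(4) for _ in range(5)])
print(len(outs), outs[0].shape)   # 5 outputs, each of shape (2,)
```

The loop over `xs` is the "recurrence": the same weights are reused at every step, with the state `s` as the only memory.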

2) Select the correct statements about GRUs (1 point)

GRUs have fewer parameters compared to LSTMs
GRUs use a single gate to control both input and forget mechanisms
GRUs are less effective than LSTMs in handling long-term dependencies
GRUs are a type of feedforward neural network

Yes, the answer is correct. Score: 1
Accepted Answers:
GRUs have fewer parameters compared to LSTMs
GRUs use a single gate to control both input and forget mechanisms
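Both accepted statements can be read off the GRU update equations: the single update gate z plays the input and forget roles at once, and there are only three weight pairs (update, reset, candidate) versus the LSTM's four. A sketch with illustrative sizes, biases omitted for brevity (one common GRU convention; sign conventions for z vary across texts):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # illustrative state/input size

# One (input, recurrent) weight pair per gate plus the candidate.
Wz, Uz = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # update gate
Wr, Ur = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # reset gate
Wh, Uh = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # candidate

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev):
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    # A single gate z plays both roles: (1 - z) decides how much of the
    # old state to forget, z decides how much new candidate to admit.
    return (1.0 - z) * h_prev + z * h_tilde

h = gru_step(rng.standard_normal(d), np.zeros(d))
print(h.shape)  # (4,)
# Parameter count: 6 d-by-d matrices here, versus 8 for an LSTM
# (forget, input, output, candidate), hence "fewer parameters".
```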

3) The statement that LSTM and GRU solve both the vanishing and exploding gradient problems in RNNs is (1 point)

True
False

Yes, the answer is correct. Score: 1
Accepted Answers: False
4) What is the vanishing gradient problem in training RNNs? (1 point)

The weights of the network converge to zero during training
The gradients used for weight updates become too large
The network becomes overfit to the training data
The gradients used for weight updates become too small

Yes, the answer is correct. Score: 1
Accepted Answers: The gradients used for weight updates become too small
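Why the gradients become too small: backpropagation through time multiplies one recurrent Jacobian per time step, and when those Jacobians have norms below 1 the product shrinks exponentially with sequence length. A toy numeric demonstration (the Jacobian here is a fixed illustrative matrix, not one learned by any network):

```python
import numpy as np

# A fixed recurrent Jacobian with spectral radius 0.5 (illustrative).
J = 0.5 * np.eye(3)

grad = np.ones(3)          # gradient arriving at the final time step
for t in range(50):        # backpropagate through 50 time steps
    grad = J.T @ grad      # one Jacobian product per step

print(np.linalg.norm(grad))   # ~0.5**50 * sqrt(3): vanishingly small
```

After 50 steps the gradient norm is on the order of 1e-15, so weight updates attributable to early time steps are effectively zero.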
5) What is the role of the forget gate in an LSTM network? (1 point)

To determine how much of the current input should be added to the cell state
To determine how much of the previous time step's cell state should be retained
To determine how much of the current cell state should be output
To determine how much of the current input should be output

Yes, the answer is correct. Score: 1
Accepted Answers: To determine how much of the previous time step's cell state should be retained
6) How does LSTM prevent the problem of vanishing gradients? (1 point)

Different activation functions, such as ReLU, are used instead of sigmoid in LSTM
Gradients are normalized during backpropagation
The learning rate is increased in LSTM
Forget gates regulate the flow of gradients during backpropagation

Yes, the answer is correct. Score: 1
Accepted Answers: Forget gates regulate the flow of gradients during backpropagation
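The mechanism behind the accepted answer is the additive cell-state update c = f * c_prev + i * c_tilde: the gradient of c with respect to c_prev is just the forget gate f, so when f stays near 1 the gradient flows back largely unattenuated instead of being squashed through repeated tanh nonlinearities. A one-step sketch (illustrative sizes, biases omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One (input, recurrent) weight pair per gate plus the candidate.
Wf, Uf = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # forget gate
Wi, Ui = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # input gate
Wo, Uo = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # output gate
Wc, Uc = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # candidate

def lstm_step(x, h_prev, c_prev):
    f = sigmoid(Wf @ x + Uf @ h_prev)   # how much old cell state to retain
    i = sigmoid(Wi @ x + Ui @ h_prev)   # how much new input to add
    o = sigmoid(Wo @ x + Uo @ h_prev)   # how much cell state to expose
    c_tilde = np.tanh(Wc @ x + Uc @ h_prev)
    # Additive path: d(c)/d(c_prev) = f, so f regulates gradient flow.
    c = f * c_prev + i * c_tilde
    h = o * np.tanh(c)
    return h, c

h, c = lstm_step(rng.standard_normal(d), np.zeros(d), np.ones(d))
print(h.shape, c.shape)  # (4,) (4,)
```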
7) We construct an RNN for sentiment classification of text, where a text can have positive or negative sentiment. Suppose the dimension of the one-hot encoded words is R^(100×1) and the dimension of the state vector s_i is R^(50×1). What is the total number of parameters in the network? (Do not include biases.) (1 point)

7550

No, the answer is incorrect. Score: 0
Accepted Answers: (Type: Range) 7599.5, 7601.5

8) What are the problems in the RNN architecture? (1 point)

Morphing of information stored at each time step
Exploding and vanishing gradient problem
Errors caused at time step t_n can't be related to faraway previous time steps
All of the above

Yes, the answer is correct. Score: 1
Accepted Answers: All of the above
9) What is the objective (loss) function in an RNN? (1 point)

Cross entropy
Sum of cross-entropy
Squared error
Accuracy

No, the answer is incorrect. Score: 0
Accepted Answers: Sum of cross-entropy
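"Sum of cross-entropy" means the per-time-step cross-entropy losses are added up over the sequence, since the RNN emits one prediction per step. A minimal sketch (the function name and toy distributions are illustrative):

```python
import numpy as np

def sequence_loss(probs, targets):
    """Sum of per-time-step cross-entropy losses.

    probs:   list of predicted class distributions, one per time step
    targets: list of true class indices, one per time step
    """
    return sum(-np.log(p[t]) for p, t in zip(probs, targets))

probs = [np.array([0.7, 0.3]), np.array([0.2, 0.8])]
targets = [0, 1]
loss = sequence_loss(probs, targets)
print(loss)  # -log(0.7) - log(0.8)
```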

10) Which of the following techniques can be used to address the exploding gradient problem in RNNs? (1 point)

Gradient clipping
Dropout
L1 regularization
L2 regularization

Yes, the answer is correct. Score: 1
Accepted Answers: Gradient clipping
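Gradient clipping caps the gradient magnitude before the weight update so a single exploding step cannot blow up training. A minimal norm-clipping sketch (the threshold 5.0 is an arbitrary illustrative choice):

```python
import numpy as np

def clip_by_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so their global L2 norm
    is at most max_norm; leave them untouched otherwise."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads

g = [np.full(4, 10.0)]                 # global norm 20: "exploding"
clipped = clip_by_norm(g, max_norm=5.0)
print(np.linalg.norm(clipped[0]))      # 5.0: direction kept, magnitude capped
```

Rescaling by the global norm preserves the gradient's direction, which is why clipping is preferred over simply truncating each component.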