Introduction to Deep Learning
The document outlines the examination structure for the B.E/B.Tech 8th Semester in Computer Science Engineering, focusing on the topic of Deep Learning. It includes sections for short answer questions, medium answer questions, and long answer questions, covering various concepts such as gradient descent, neural networks, LSTM units, CNN architectures, and optimization techniques. The exam is scheduled for October 2024, with a maximum score of 50 marks.
B.E/B.Tech 8th SEMESTER (Computer Science Engg.)
Session: October 2024, Regular Batch (2020)
Introduction to Deep Learning
Max. Marks: 50        Min. Marks: 20        Time Allowed: 2:30 Hours

NOTE: ATTEMPT ALL QUESTIONS FROM SECTION “A” & “B” AND ANY TWO QUESTIONS FROM SECTION “C”

Section – A: [Short Answer Type Questions] (10 x 1 = 10 Marks)

1. a. What is the purpose of adding momentum to the gradient descent algorithm?
   b. In a neural network, what does a "weight" signify?
   c. What is the output range of the sigmoid activation function?
   d. In logistic regression, what type of problem is being solved: classification or regression?
   e. What does "stride" refer to in a convolution operation?
   f. Which CNN architecture uses 16 or 19 layers and was developed by the Visual Geometry Group (VGG)?
   g. Which gate in an LSTM decides what information to forget?
   h. Name two commonly used unsupervised learning algorithms.
   i. What is the primary advantage of using Word2Vec over traditional word representations?
   j. Define vector space in Word2Vec.
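[Revision note, not part of the question paper] A minimal NumPy sketch of two ideas touched on in Section A: the sigmoid activation asked about in Q1(c) and the momentum update asked about in Q1(a). The function names and hyperparameter values are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        # Maps any real input into the open interval (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
        # Momentum keeps an exponentially decaying average of past gradients,
        # which damps oscillations and speeds up movement along directions
        # where the gradient is consistent.
        velocity = beta * velocity - lr * grad
        return w + velocity, velocity

    # Example: one update on a 3-parameter weight vector
    w, v = np.zeros(3), np.zeros(3)
    w, v = momentum_step(w, np.array([0.5, -0.2, 0.1]), v)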
Section – B: [Medium Answer Type Questions] (4 x 5 = 20 Marks)

2. a. Define Backpropagation Through Time (BPTT) and explain how it is applied to Recurrent Neural Networks.
   b. Explain the Word2Vec model in detail, including its architecture and how it captures semantic relationships between words.
   c. Explain Batch Normalization and its significance in CNNs. How does BatchNorm help in accelerating training and improving model stability?
   d. Explain the different types of errors encountered in machine learning and deep learning models. How can these errors be minimized to improve model performance?
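[Revision note, not part of the question paper] A minimal sketch of the batch normalization transform referred to in Q2(c); the function name and the epsilon value are assumptions made for this example.

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # Normalize each feature over the mini-batch to zero mean and unit
        # variance, then rescale and shift with the learnable gamma and beta.
        # Keeping layer inputs in a stable range is the main reason this
        # accelerates training and stabilizes the model.
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    # Example: normalize a batch of 4 samples with 2 features each
    out = batch_norm_forward(np.random.randn(4, 2), gamma=np.ones(2), beta=np.zeros(2))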
Section – C: [Long Answer Type Questions] (2 x 10 = 20 Marks)

3. a. Describe the internal components of an LSTM unit (input gate, forget gate, output gate, and cell state) and explain how they interact to control the flow of information.
   b. ResNet introduced the concept of skip connections, or residual learning. Explain the architecture of ResNet and how residual blocks help in training very deep networks.
   c. Describe the convolution operation in detail. What are the different types of filters (kernels) used in CNNs, and how do they affect feature extraction? Include a mathematical explanation of the convolution process.
   d. Provide a detailed review of optimization techniques used in deep learning. Discuss the importance of choosing an appropriate optimization algorithm and how optimization impacts model training and convergence.

#########
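[Revision note, appended after the paper] A direct NumPy implementation of the 2-D convolution (cross-correlation, as used in CNNs) discussed in Q3(c), with unit stride and no padding; the shapes, names, and example kernel are assumptions made for illustration.

    import numpy as np

    def conv2d(image, kernel, stride=1):
        # Slide the kernel over the image; each output element is the sum
        # of the elementwise product between the kernel and the patch it
        # currently covers.
        H, W = image.shape
        kH, kW = kernel.shape
        out_h = (H - kH) // stride + 1
        out_w = (W - kW) // stride + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i * stride:i * stride + kH, j * stride:j * stride + kW]
                out[i, j] = np.sum(patch * kernel)
        return out

    # Example: a 3x3 vertical-edge kernel applied to a random 5x5 image
    edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
    features = conv2d(np.random.rand(5, 5), edge_kernel)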