IMPORTANT QUESTIONS
Deep Learning and Reinforcement Learning (BAI701)
Module 1: Introduction to Deep Learning
1. Define deep learning and distinguish it from shallow learning with suitable examples.
2. Explain the need for deep learning in modern machine learning applications.
3. Explain how deep learning works with a neat diagram.
4. Explain how learning in deep learning differs from pure optimization.
5. Discuss the major challenges involved in deep learning systems.
6. Explain the challenges encountered in neural network optimization.
7. Compare shallow learning and deep learning approaches with respect to representation
capability and performance.
8. Explain why deep learning models require large datasets and high computational resources.
Module 2: Basics of Supervised Deep Learning
1. Define supervised deep learning and explain its significance.
2. Explain the concept of Convolutional Neural Networks with a neat diagram.
3. Describe the evolution of Convolutional Neural Networks.
4. Explain the architecture of a Convolutional Neural Network with a neat diagram.
5. Explain the convolution operation used in CNNs with suitable illustration.
6. Describe the role of filters and feature maps in convolutional neural networks.
7. Discuss the advantages of CNNs over traditional fully connected networks.
8. Explain the application of CNNs in image classification tasks.
9. Calculate the output feature map size when a 128 × 128 image is convolved with a 5 × 5 filter,
stride 1, and zero padding, using 5 filters (a worked sketch of the size formula follows this list).
10. Apply the convolution operation to the following 5 × 5 image pixels with a 3 × 3 filter (see the illustrative sketch after this list).
11. What is an activation function? Explain the types of activation functions.
12. Explain how Dropout reduces overfitting in deep neural networks with a neat diagram.
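
For question 9 above, a minimal sketch of the standard output-size formula, assuming "zero padding" is read as padding = 0; the function and variable names are illustrative only, not part of the question.

```python
# Common textbook output-size formula for one spatial dimension of a convolution:
#   out = floor((in - kernel + 2 * padding) / stride) + 1
# Assumption: "zero padding" in question 9 means padding = 0.

def conv_output_size(in_size, kernel, stride=1, padding=0):
    """Spatial size of one output dimension of a convolution layer."""
    return (in_size - kernel + 2 * padding) // stride + 1

side = conv_output_size(in_size=128, kernel=5, stride=1, padding=0)
num_filters = 5  # each filter produces one feature map
print(side, side, num_filters)  # 124 124 5 -> output volume of 124 x 124 x 5
```

Each of the 5 filters yields one 124 × 124 feature map, so the output volume is 124 × 124 × 5 under this reading of the padding.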
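
For question 10, the 5 × 5 pixel values are not reproduced in this question bank, so the sketch below runs a stride-1, no-padding 2-D convolution on placeholder arrays; substitute the image and filter values given in the question paper.

```python
import numpy as np

# Placeholder inputs: replace with the actual 5x5 image and 3x3 filter
# from the question paper; these values are illustrative only.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

def convolve2d_valid(img, ker):
    """Stride-1, no-padding 2-D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = ker.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

print(convolve2d_valid(image, kernel))  # 5x5 input with a 3x3 filter -> 3x3 feature map
```

Note that CNN libraries typically implement cross-correlation (no kernel flip), which is the convention followed here.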
Module 3: Training Supervised Deep Learning Networks
1. Explain the process of training Convolutional Neural Networks with a neat diagram.
2. Describe gradient descent-based optimization techniques used in deep learning (a minimal update-rule sketch follows this list).
3. Explain the challenges involved in training deep neural networks.
4. Explain the problem of vanishing and exploding gradients in deep networks.
5. Explain the architecture of LeNet-5 with a neat diagram.
6. Describe the AlexNet architecture with a neat diagram.
7. Compare LeNet-5 and AlexNet architectures.
8. Explain how optimization techniques improve convergence in deep learning models.
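
For question 2 in this module, a minimal sketch comparing the plain gradient-descent update with a momentum update on a toy quadratic loss; the loss, learning rate, and momentum coefficient are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

# Toy loss L(w) = ||w||^2, whose gradient is 2w; illustrative only.
def grad(w):
    return 2.0 * w

w_gd = np.array([2.0, -3.0])       # plain gradient descent parameters
w_mom = w_gd.copy()                # momentum parameters
velocity = np.zeros_like(w_mom)
lr, beta = 0.1, 0.9                # assumed learning rate and momentum term

for _ in range(100):
    # Plain gradient descent: step directly against the gradient.
    w_gd -= lr * grad(w_gd)
    # Momentum: accumulate a velocity, then step along the velocity.
    velocity = beta * velocity - lr * grad(w_mom)
    w_mom += velocity

print(w_gd, w_mom)  # both approach the minimum at the origin
```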
Module 4: Recurrent and Recursive Neural Networks
1. Explain unfolding of computational graphs in recurrent neural networks with a neat diagram.
2. Explain the Recurrent Neural Network with a neat diagram.
3. Describe Bidirectional Recurrent Neural Networks with suitable diagram.
4. Explain deep recurrent networks and their advantages.
5. Explain recursive neural networks with suitable example.
6. Explain the Long Short-Term Memory (LSTM) architecture with a neat diagram (a minimal gate-equation sketch follows this list).
7. Explain gated RNNs and their role in sequence modeling.
8. Compare RNN and LSTM networks with respect to long-term dependency handling.
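
For question 6 in this module, a minimal NumPy sketch of one LSTM cell step showing the forget, input, and output gates and the candidate cell state; the weight shapes, initialisation, and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W has shape (4*hidden, input+hidden), b has shape (4*hidden,)."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0:hidden])                 # forget gate
    i = sigmoid(z[hidden:2 * hidden])        # input gate
    o = sigmoid(z[2 * hidden:3 * hidden])    # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])    # candidate cell state
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c

# Illustrative sizes: 4-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(0)
x, h, c = rng.normal(size=4), np.zeros(3), np.zeros(3)
W, b = rng.normal(size=(12, 7)), np.zeros(12)
h, c = lstm_step(x, h, c, W, b)
print(h, c)
```

The additive update of the cell state (c = f * c_prev + i * g) is what lets gradients flow over long sequences, which is the point of comparison asked for in question 8.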
Module 5: Deep Reinforcement Learning
1. Define reinforcement learning and explain its significance in artificial intelligence.
2. Explain the basic framework of reinforcement learning with a neat diagram.
3. Describe the interaction between agent, environment, state, action, and reward in
reinforcement learning systems.
4. Explain stateless reinforcement learning algorithms with reference to the Multi-Armed Bandit
problem (an ε-greedy sketch follows this list).
5. Describe the exploration-exploitation trade-off in Multi-Armed Bandit algorithms.
6. Explain why reinforcement learning is suitable for problems that are easy to evaluate but hard
to specify.
7. Explain deep reinforcement learning using suitable case studies.
8. Discuss real-world applications of deep reinforcement learning such as games, robotics, or self-
driving systems.
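
For questions 4 and 5 above, a minimal ε-greedy Multi-Armed Bandit sketch illustrating the exploration-exploitation trade-off; the arm win probabilities, ε value, and number of pulls are illustrative assumptions.

```python
import random

# Illustrative Bernoulli bandit: the true win probabilities are assumptions.
true_probs = [0.2, 0.5, 0.75]
counts = [0] * len(true_probs)      # pulls per arm
values = [0.0] * len(true_probs)    # running mean reward per arm
epsilon = 0.1                       # exploration rate

for t in range(10_000):
    if random.random() < epsilon:                      # explore: pick a random arm
        arm = random.randrange(len(true_probs))
    else:                                              # exploit: pick the best estimate so far
        arm = max(range(len(true_probs)), key=lambda a: values[a])
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print(values)   # estimates approach the true probabilities; the best arm is pulled most often
```

The setting is stateless: there is no environment state to track, only action-value estimates, which is why the bandit problem is the standard entry point before full agent-environment reinforcement learning.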