
HPC QUESTION BANK

1. Define GPU computing.
2. What is the purpose of a CUDA kernel?
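
For reference, a minimal sketch of what a kernel looks like and how the host launches it (the vector-add example, names, and sizes here are illustrative assumptions, not part of the question bank):

// vector_add.cu -- a kernel is a function that executes on the GPU;
// the host launches it over a grid of thread blocks.
#include <cstdio>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);  // kernel launch: <<<grid, block>>>
    cudaDeviceSynchronize();                     // wait for the GPU to finish
    printf("c[0] = %f\n", c[0]);                 // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Built with, e.g., nvcc vector_add.cu -o vector_add.
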
3. Write a short note on profiling tools and performance aspects.
4. How do mapping techniques contribute to load balancing in parallel computing, and
what are some common methods used?
5. What are interaction overheads in parallel algorithms, and how can they be contained
or minimized?
6. What are some strategies for optimizing parallel algorithms for specific hardware
architectures?
7. Write a detailed note on dynamic memory allocation on the GPU.
8. Write a note on multi-GPU processing.
9. Differentiate between local and global barriers.
10. How does the POSIX Thread API facilitate thread management and synchronization?
11. Can you explain the process of mapping parallel algorithms onto parallel architectures?
12. Write a note on memory hierarchy.
13. How are collective computation operations structured in message-passing
programming?
14. How does MPI facilitate communication between processes in a parallel application?
15. What are the fundamental building blocks of message-passing programming, and how
do send and receive operations work?
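
A hedged sketch of the blocking send/receive pair (run with at least two processes, e.g. mpirun -np 2; the payload value 42 is an assumption):

/* mpi_sendrecv.c -- rank 0 sends one integer to rank 1, which receives it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* dest = 1, tag = 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* source = 0, tag = 0 */
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}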

16. Explain host functions and kernel functions.
17. Define the role of the GPU in gaming.
18. Can you explain the memory hierarchy features of GPGPU architectures, including
global memory, shared memory, and registers?
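
As a hedged illustration of those three levels in CUDA (the 256-thread block size and the block-sum example are assumptions):

// block_sum.cu -- data flows from slow global memory into fast on-chip
// shared memory; scalar temporaries such as 't' live in registers.
// Assumes the kernel is launched with exactly 256 threads per block.
__global__ void blockSum(const float *in, float *out) {
    __shared__ float tile[256];                 // shared memory, one copy per block
    int t = threadIdx.x;                        // held in a register
    tile[t] = in[blockIdx.x * blockDim.x + t];  // global -> shared
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction in shared memory
        if (t < s) tile[t] += tile[t + s];
        __syncthreads();
    }
    if (t == 0) out[blockIdx.x] = tile[0];      // one partial sum back to global memory
}
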
19. Describe modern GPU architecture.
20. What are the limitations of the current GPU computing software frameworks?
21. Describe GPU profiling tools.

22. Can you provide examples of how to calculate speedup, efficiency, and scalability in
parallel computing scenarios?
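
For instance, with assumed timings (serial time 100 s, parallel time 25 s on p = 8 processors): speedup S = T_s / T_p = 100 / 25 = 4; efficiency E = S / p = 4 / 8 = 0.5, i.e. 50%. Scalability is then judged by how S and E behave as p and the problem size grow.
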
23. Write a detailed note on texture memory.
24. How do you determine the optimal granularity for parallel algorithms to achieve
maximum performance?
25. How is a typical text on parallel computing organized, and what content can one expect to find in such a text?
26. Describe various parallel computing models such as SIMD, MIMD, SIMT, SPMD, and dataflow models.
27. What are the steps involved in creating and terminating threads using the POSIX
Thread API?
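
A minimal sketch of those steps (compile with -pthread; the worker function and its argument are assumptions):

/* pthread_demo.c -- create a thread, let it run, then wait for it to terminate. */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;                              /* returning terminates the thread */
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create: handle, attrs, start routine, arg */
    pthread_join(tid, NULL);                  /* wait for the thread to finish */
    return 0;
}
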
28. What is dependency analysis, and how does it impact the scheduling of parallel
algorithms?
29. How are jobs allocated and partitioned in parallel computing environments?
30. Can you provide an overview of the DGX architecture developed by NVIDIA for high-
performance computing?
31. How does CUDA programming enable developers to harness the power of GPUs for
general-purpose computing tasks?
32. What are the characteristics of tasks and interactions in parallel algorithms, and why
are they important to consider?
33. How does message-passing programming compare to other parallel programming
paradigms, such as shared memory programming?
34. What are the basics of threads, and how do they differ from processes in parallel
computing?
35. What is GPGPU (General-Purpose Graphics Processing Unit), and how does it differ
from traditional CPU architectures?

36. What is parallel computing, and why is it important in modern computing?
37. What are multi-core and multi-threaded architectures, and how do they contribute to parallel computing?
38. Differentiate between single and multi-GPU processing.
39. What are the benefits and challenges of using collective operations in parallel
applications?
40. Explain the CUDA program structure along with its compilation process.
41. What are the main principles of GPGPU programming, and how does it leverage the
parallel processing capabilities of GPUs?
42. “Thrust provides a large number of common parallel algorithms.” Illustrate any one algorithm provided by Thrust with a suitable example.
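
For example, a hedged sketch using thrust::reduce, one of Thrust's parallel algorithms (the vector size and fill value are assumptions):

// thrust_reduce.cu -- sums a device vector in parallel on the GPU.
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    thrust::device_vector<int> d(1000, 1);            // 1000 ones in GPU memory
    int sum = thrust::reduce(d.begin(), d.end(), 0);  // parallel reduction, result on host
    printf("sum = %d\n", sum);                        // prints 1000
    return 0;
}
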
43. What is scheduling in parallel computing, and why is it important?
44. Can you provide an overview of MPI (Message Passing Interface) and its role in
parallel programming?
45. Discuss various application areas of GPU computing, and explain why CPU computing falls short for those applications.
46. What are the preliminary considerations one must take into account when designing
parallel algorithms?
47. What are the main limitations of memory system performance in parallel computing?
48. What are some strategies for optimizing parallel algorithms for specific hardware
architectures?
49. Write short notes on the following:
(a) GPGPU (b) Thread mapping (c) CPU and GPU architecture
(d) Global and local memory (e) Thread blocks (f) Warps (g) Grids
50. What are some common challenges in achieving scalability in parallel computing, and
how can they be addressed?
51. Can you explain how performance profiling tools can help identify bottlenecks and
optimize parallel programs?
52. What are some best practices for maximizing speedup, efficiency, and scalability when
designing parallel algorithms?
53. How do advancements in hardware architectures, such as GPUs and multi-core
processors, impact the speedup and scalability of parallel computing applications?
