OpenMP: What, Why & How - Param Vyas (07BEC116)

OpenMP is an API that uses compiler directives to parallelize code for shared-memory systems. It provides incremental parallelism through pragmas that specify parallel regions, work-sharing constructs such as loops, and data-sharing clauses. OpenMP follows a fork-join model: the master thread forks additional threads to execute the work inside parallel regions and then joins them back together. It targets shared-memory systems rather than distributed-memory ones, and it neither guarantees efficient use of resources nor detects data dependencies for the programmer.


OpenMP: What, Why & How

- Param Vyas (07BEC116)
What is OpenMP? (Overview)

• Open Multi-Processing
• OpenMP is:
  - An API
  - Standardized
  - Portable
  - Scalable
• Incremental parallelism (see the sketch below)
• Supports data parallelism
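
What "incremental parallelism" means in practice, as a minimal sketch (the array size and the doubling operation are illustrative, not from the slides): an existing serial loop is parallelized by adding a single directive, and removing that one line gives the serial program back.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];

        /* Existing serial code: initialize the input array. */
        for (int i = 0; i < N; i++)
            b[i] = (double)i;

        /* The only change: one directive in front of the loop.
           Without an OpenMP-enabled compiler (or with the pragma
           removed) this is still the original serial program. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }
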
Where is OpenMP beneficial?
OpenMP is not…

• Meant for distributed-memory parallel systems (by itself)
• Necessarily implemented identically by all vendors
• Guaranteed to make the most efficient use of shared memory
• Required to check for data dependencies, data conflicts, race conditions, or deadlocks (see the sketch below)
• Required to check for code sequences that cause a program to be classified as non-conforming
• Meant to cover compiler-generated automatic parallelization and directives to the compiler to assist such parallelization
• Designed to guarantee that input or output to the same file is synchronous when executed in parallel; the programmer is responsible for synchronizing input and output
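
A small illustration of the point about data dependencies (the loop and array size are made up for this sketch): the loop below carries a dependency, each iteration reading the element written by the previous one, yet an OpenMP compiler accepts the directive without complaint. Detecting and removing such dependencies is entirely the programmer's responsibility.

    #include <stdio.h>

    #define N 1000

    int main(void) {
        int a[N];
        for (int i = 0; i < N; i++)
            a[i] = i;

        /* Loop-carried dependency: a[i] depends on a[i-1].
           OpenMP does not diagnose this; with more than one thread
           the result can differ from run to run. */
        #pragma omp parallel for
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + 1;

        printf("a[N-1] = %d (the serial answer is %d)\n", a[N - 1], N - 1);
        return 0;
    }
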
Programming Model in OpenMP

• Shared-memory, thread-based parallelism
• Explicit parallelism
• Compiler-directive based
• Nested parallelism support (see the sketch below)
• Dynamic threads
• I/O
• Memory model
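
A minimal sketch of the runtime-library side of this model, showing nested parallelism and dynamic thread adjustment being switched on. The thread counts are illustrative, and omp_set_nested() is the classic call from the OpenMP versions this deck describes (newer runtimes prefer omp_set_max_active_levels()).

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        omp_set_dynamic(1);       /* let the runtime adjust team sizes */
        omp_set_nested(1);        /* allow parallel regions inside parallel regions */
        omp_set_num_threads(2);   /* request two threads for the outer region */

        #pragma omp parallel      /* outer parallel region */
        {
            int outer = omp_get_thread_num();

            #pragma omp parallel num_threads(2)   /* nested inner region */
            printf("outer thread %d, inner thread %d\n",
                   outer, omp_get_thread_num());
        }
        return 0;
    }
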
Shared Memory Systems
Execution of a Parallel Region: The Fork-Join Method
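
A minimal sketch of the fork-join execution this slide illustrates: the master thread runs alone, forks a team of threads at the parallel region, and the team joins back at the implicit barrier before the master continues.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        printf("master thread only (before the fork)\n");

        /* Fork: the master thread creates a team of threads. */
        #pragma omp parallel
        {
            printf("hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }   /* Join: implicit barrier; only the master continues past here. */

        printf("master thread only (after the join)\n");
        return 0;
    }
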
Compiler Directives

• #pragma directives
• Format:
  #pragma omp directive_name [clause [clause] ...] new-line
• Case sensitive (see the usage sketch below)
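
A sketch of the directive format in use, assuming a compiler flag such as gcc's -fopenmp (other compilers use their own switch). Each directive is a #pragma omp line, optionally followed by clauses, and applies to the statement or structured block that follows; directive and clause names are lowercase and case sensitive.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* directive_name = parallel, one clause = num_threads(4) */
        #pragma omp parallel num_threads(4)
        {
            /* directive_name = critical, no clauses */
            #pragma omp critical
            printf("thread %d inside the critical section\n",
                   omp_get_thread_num());
        }
        return 0;
    }
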
OpenMP Clauses

• What are they?
• Various flavours of the clauses (several are gathered in the sketch after this list):
  - Thread creation construct: omp parallel
  - Loop constructs
  - Work-sharing constructs
  - Data-sharing constructs
  - Synchronization constructs
  - Scheduling clauses
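
A hedged example combining several of these families on one directive: the thread-creation and work-sharing construct (parallel for), data-sharing clauses (shared, private, reduction), and a scheduling clause (schedule). The array size, the scale factor, and the chunk size of 1000 are illustrative only.

    #include <omp.h>
    #include <stdio.h>

    #define N 100000

    int main(void) {
        static double x[N];
        double sum = 0.0;
        double scale = 3.0;   /* read-only, so sharing it is safe */
        double tmp;           /* scratch value, must be private per thread */

        for (int i = 0; i < N; i++)
            x[i] = 1.0;

        /* reduction(+:sum): each thread keeps a private partial sum,
           combined when the threads join.
           schedule(dynamic, 1000): iterations handed out in chunks of 1000. */
        #pragma omp parallel for shared(x, scale) private(tmp) reduction(+:sum) schedule(dynamic, 1000)
        for (int i = 0; i < N; i++) {
            tmp = scale * x[i];
            sum += tmp;
        }

        printf("sum = %f (expected %f)\n", sum, scale * N);
        return 0;
    }
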
Pros & Cons
 Simple  Currently only runs efficiently
 Data layout and decomposition in shared-memory
is handled automatically multiprocessor platforms
 Incremental parallelism Requires a compiler that
supports OpenMP.
 Original code statements need
not be modified when
 Scalability is limited by
parallelized with OpenMP. memory architecture.
 Both coarse-grained and fine-
 Reliable error handling is
missing.
grained parallelism are
possible  Lacks fine-grained
mechanisms to control thread-
processor mapping.
 Can't be used on GPU
Sources

• OpenMP.org
  - Various papers, tutorials and discussions
• Wikipedia.org
  - Articles on OpenMP and related information
Thank You.
