Implicit Parallelism and Explicit Parallelism

The document discusses implicit and explicit parallelism, highlighting that implicit parallelism is automatically managed by compilers, requiring less programming effort, while explicit parallelism involves manual management by programmers. Examples of implicit parallelism include pipelining, multithreading, vectorization, and out-of-order execution, whereas explicit parallelism includes parallel loops, message passing, and data parallelism. Each approach has its advantages and disadvantages, with implicit parallelism being easier to implement but less efficient, and explicit parallelism offering more control at the cost of increased complexity.


IMPLICIT PARALLELISM AND EXPLICIT PARALLELISM
IMPLICIT PARALLELISM

Implicit parallelism is a technique in which parallelism is exploited automatically by the compiler or interpreter. Its objective is the parallel execution of code in the runtime environment. The computations are parallelized without the programmer stating how; the compiler assigns the target machine's resources to carry out the parallel operations. Implicit parallelism requires less programming effort and has applications in shared-memory multiprocessors.
Examples of Implicit Parallelism

• Pipelining
• Multithreading
• Vectorization
• Out-of-order execution
PIPELINING

Pipelining is an example of implicit parallelism used by processors to execute multiple instructions simultaneously. Each stage of an instruction's execution overlaps with the stages of other instructions, so several instructions are in flight at once.
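
A simple way to see the benefit is to count cycles under idealized assumptions. The short Python sketch below is illustrative only; it assumes a classic five-stage pipeline with no stalls or hazards, which real processors do not guarantee.

    # Idealized cycle counts (assumed: 5 pipeline stages, no stalls or hazards).
    stages = 5                 # e.g. fetch, decode, execute, memory, write-back
    instructions = 100

    sequential_cycles = stages * instructions        # each instruction runs start to finish alone
    pipelined_cycles = stages + (instructions - 1)   # stages of different instructions overlap

    print(sequential_cycles)   # 500
    print(pipelined_cycles)    # 104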
MULTITHREADING

Multithreading is the execution of multiple threads within a single process. Each thread has its own stream of instructions and can run independently of the other threads. Multithreading is commonly used in parallel-processing applications.
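
As a rough illustration, the following Python sketch (the worker function and its data are made up for this example) starts two threads inside one process, each running independently:

    import threading

    def worker(name, items):
        # Each thread executes this function independently of the other threads.
        for item in items:
            print(f"{name} processed {item}")

    t1 = threading.Thread(target=worker, args=("thread-1", [1, 2, 3]))
    t2 = threading.Thread(target=worker, args=("thread-2", [4, 5, 6]))
    t1.start()
    t2.start()
    t1.join()   # wait for both threads to finish
    t2.join()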
VECTORIZATION

Vectorization is a technique for optimizing code so that it runs on processors that support Single Instruction, Multiple Data (SIMD) instructions. It allows the same operation to be performed on multiple data elements at the same time.
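
For example, in Python a library such as NumPy applies one operation to whole arrays, which lets the underlying implementation use SIMD instructions where the hardware supports them. The sketch below (array sizes chosen arbitrarily, NumPy assumed to be installed) contrasts an element-by-element loop with the vectorized form:

    import numpy as np

    a = np.arange(10_000, dtype=np.float64)
    b = np.arange(10_000, dtype=np.float64)

    # Scalar version: one element per loop iteration.
    c_scalar = [x + y for x, y in zip(a, b)]

    # Vectorized version: the same addition is applied to all elements at once.
    c_vector = a + b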
OUT-OF-ORDER EXECUTION

Out-of-order execution is a technique used by processors to maximize parallelism when executing instructions. Instructions execute as soon as their required operands are available, rather than in the order in which they appear in the program. For example, an independent multiply can complete while an earlier instruction is still waiting on a slow memory load.
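
The toy Python model below is a software analogy, not real hardware: each instruction lists its source registers, destination, and latency, and the out-of-order machine is modeled as one that issues an instruction as soon as its operands are ready, while the in-order machine also waits for the previous instruction to finish.

    # Each entry: (name, source registers, destination register, latency in cycles).
    program = [
        ("load", [],     "r1", 3),   # long-latency memory load
        ("add",  ["r1"], "r2", 1),   # depends on the load's result
        ("mul",  ["r0"], "r3", 1),   # independent of the load
    ]

    def completion_time(out_of_order):
        ready = {"r0": 0}            # cycle at which each register's value is available
        prev_finish = 0
        for name, sources, dest, latency in program:
            operands_ready = max((ready[s] for s in sources), default=0)
            # Out-of-order: start once operands exist.
            # In-order: also wait for the previous instruction to finish.
            issue = operands_ready if out_of_order else max(operands_ready, prev_finish)
            prev_finish = issue + latency
            ready[dest] = prev_finish
        return max(ready.values())

    print(completion_time(out_of_order=False))  # 5 cycles: the mul waits behind the stalled add
    print(completion_time(out_of_order=True))   # 4 cycles: the mul runs while the load is pending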
IMPLICIT PARALLELISM

ADVANTAGES
• Easier for programmers, since they don't need to handle threads or synchronization.
• The system optimizes execution based on available resources.

DISADVANTAGES
• Less control over how tasks are executed in parallel.
• Can be less efficient if the system does not fully utilize the available parallel resources.
EXPLICIT PARALLELISM

Explicit parallelism is a technique in which concurrent operations are executed in parallel with the help of primitives known as special-purpose directives or function calls. The compiler does not detect the parallelism or allocate the resources; the programmer does. Explicit parallelism requires more programming effort than implicit parallelism, because the concurrent tasks are managed manually in the source code written by the programmer. It uses resources more efficiently and has applications in loosely coupled multiprocessors.
Examples of Explicit Parallelism

• Parallel loops
• Message passing
• Data parallelism
PARALLEL LOOPS

A parallel loop is a construct that allows the iterations of a loop to be executed in parallel. The loop's range and conditions determine which iterations run and how they are distributed across workers.
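
As a rough sketch, Python's concurrent.futures module can spread the iterations of an independent loop across worker processes; the loop body (square) here is made up for illustration:

    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        # Hypothetical loop body; each iteration is independent of the others.
        return x * x

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # The iterations over range(10) are distributed across the workers.
            results = list(pool.map(square, range(10)))
        print(results)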
MESSAGE PASSING

Message passing is a technique used to enable communication between processes and threads running on different processors: they exchange data by sending and receiving messages rather than by sharing memory.
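
As a minimal single-machine sketch (the producer/consumer roles and message contents are invented for illustration), the Python example below uses a queue so that the two processes communicate only by sending messages, never by sharing memory:

    from multiprocessing import Process, Queue

    def producer(queue):
        queue.put("hello from the producer")   # send a message to the other process
        queue.put(None)                        # sentinel: no more messages

    def consumer(queue):
        while True:
            message = queue.get()              # blocks until a message arrives
            if message is None:
                break
            print("consumer received:", message)

    if __name__ == "__main__":
        q = Queue()
        p1 = Process(target=producer, args=(q,))
        p2 = Process(target=consumer, args=(q,))
        p1.start(); p2.start()
        p1.join(); p2.join()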
DATA PARALLELISM

Data parallelism is a technique used to divide a large dataset into smaller subsets, which can then be processed in parallel, typically by applying the same operation to each subset.
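
For example, the sketch below (dataset, chunk size, and the per-chunk operation are all chosen arbitrarily) splits a list into subsets and processes each subset in parallel with Python's multiprocessing pool:

    from multiprocessing import Pool

    def process_chunk(chunk):
        # The same operation is applied to every subset of the data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 100_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with Pool() as pool:
            partial_results = pool.map(process_chunk, chunks)   # one chunk per worker
        print(sum(partial_results))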
EXPLICIT PARALLELISM

ADVANTAGES
• More control over task distribution, load balancing, and synchronization.
• Can achieve better performance if optimized correctly.

DISADVANTAGES
• More complex to program, debug, and maintain.
• Risk of race conditions, deadlocks, and synchronization issues.
THANK YOU

Members:
Kevin Kyle Arizala
John Micheal Custodio
Carlo Egera
James Dhean Alaban
