pp assignment
Concept
The goal is to divide the work among multiple processors so that they concurrently compute
partial sums of the array, and then combine these partial sums to produce the final result. The
process combines divide-and-conquer partitioning with a parallel reduction.
1. Divide the Array:
o Split the input array of N elements into P contiguous subarrays of
roughly N/P elements each.
2. Assign Subarrays:
o Give each of the P processors one subarray.
3. Concurrent Computation:
o Each processor computes the sum of its assigned subarray
independently.
o This step produces P partial sums S_1, S_2, …, S_P.
4. Combine Results:
o Use a reduction tree to sum the P partial sums. This can also
be done in parallel in a logarithmic number of levels:
At level 1, combine pairs of partial sums (S_1 + S_2, S_3 + S_4, …).
At level 2, combine the results of level 1, and so on.
Continue until a single sum remains (a runnable sketch follows these steps).
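A minimal Go sketch of the full algorithm is given below. It is an illustration under
assumptions, not part of the original assignment: the function name parallelSum is
hypothetical, goroutines stand in for processors, and the reduction levels are combined on
a single thread for clarity (each level's pair-sums could themselves run in parallel).

package main

import (
	"fmt"
	"sync"
)

// parallelSum splits data into p chunks, sums each chunk in its own
// goroutine (one "processor" per chunk), then combines the partial
// sums pairwise, level by level, as in the reduction tree above.
func parallelSum(data []int, p int) int {
	chunk := (len(data) + p - 1) / p // ceiling of N/P elements per processor
	var mu sync.Mutex
	var wg sync.WaitGroup
	partials := make([]int, 0, p)

	// Steps 1-3: divide, assign, and compute partial sums concurrently.
	for i := 0; i < len(data); i += chunk {
		end := i + chunk
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			s := 0
			for _, v := range part {
				s += v
			}
			mu.Lock() // the order of the partial sums does not matter for addition
			partials = append(partials, s)
			mu.Unlock()
		}(data[i:end])
	}
	wg.Wait()

	// Step 4: reduction tree; each pass of the outer loop is one level.
	for len(partials) > 1 {
		next := make([]int, 0, (len(partials)+1)/2)
		for j := 0; j < len(partials); j += 2 {
			if j+1 < len(partials) {
				next = append(next, partials[j]+partials[j+1])
			} else {
				next = append(next, partials[j]) // odd one out is carried forward
			}
		}
		partials = next
	}
	return partials[0]
}

func main() {
	data := make([]int, 12)
	for i := range data {
		data[i] = i + 1 // 1..12, whose sum is 78
	}
	fmt.Println(parallelSum(data, 3)) // prints 78
}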
Example
Input:
An array of N elements split among P = 3 processors, whose subarrays have the partial
sums S_1 = 12, S_2 = 31, and S_3 = 65.
Step-by-Step Process:
1. Divide and Assign:
o Split the array into 3 subarrays and assign one to each processor.
2. Concurrent Computation:
o Processor 1 computes S_1 = 12, processor 2 computes S_2 = 31, and
processor 3 computes S_3 = 65.
3. Combine Results:
o Level 1: Combine S_1 and S_2: 12 + 31 = 43; S_3 = 65 has no pair and is carried forward.
o Level 2: Combine 43 and S_3: 43 + 65 = 108.
Final Output:
The total sum is 108.
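The two reduction levels can be traced directly in code; a tiny sketch, assuming only the
three partial sums above:

package main

import "fmt"

func main() {
	partials := []int{12, 31, 65}                            // S_1, S_2, S_3 from the example
	level1 := []int{partials[0] + partials[1], partials[2]} // [43 65]: S_3 is carried forward
	level2 := level1[0] + level1[1]                         // 43 + 65
	fmt.Println(level2)                                     // prints 108
}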
Analysis
1. Time Complexity:
o Division: O(1), as splitting the array is a constant-time operation.
o Parallel Sum Computation: O(N/P), since each processor handles
N/P elements.
o Reduction Tree: O(log P), as combining results follows a
tree structure.
o Total Time: O(N/P + log P) (a worked illustration follows this list).
2. Scalability:
o As P increases, computation time per processor decreases.
o Communication overhead in the reduction increases, but for large
N, the algorithm remains efficient.
3. Efficiency:
o Works best when N is much larger than P, to minimize idle
processors.
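As a concrete illustration of the total-time bound (the values N = 1,000,000 and P = 8 are
assumed here, not taken from the assignment):

T(N, P) ≈ N/P + log_2 P
T(1,000,000, 8) ≈ 1,000,000/8 + log_2 8 = 125,000 + 3 steps

The N/P term dominates, which is why the algorithm scales well as long as N is much
larger than P.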
This parallel algorithm minimizes computation time by utilizing all processors efficiently and
leveraging parallelism for both computation and reduction.
Parallel Processing
Advantages
1. Scalability:
o Parallel processing systems can be scaled by adding more
processors or cores, making them suitable for increasingly
complex or large-scale tasks.
o Example: Distributed systems like Hadoop allow for the addition
of nodes to process massive datasets.
Challenges
1. Complexity in Programming:
o Writing parallel programs is more challenging due to the need for
synchronization, data sharing, and task distribution.
o Example: A bug in synchronization can lead to race conditions,
where tasks produce incorrect or inconsistent results (see the first
sketch after this list).
2. Overhead in Communication:
o Processors need to communicate to share intermediate results,
which introduces latency and reduces efficiency.
o Example: In distributed systems, frequent network
communication can slow down the overall computation.
3. Dependency Issues:
o Parallelization is difficult for problems whose tasks depend
on each other or require sequential execution.
o Example: Fibonacci sequence computation cannot be easily
parallelized because each number depends on the previous two
(illustrated in the second sketch after this list).
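A minimal Go sketch of the race-condition point: two goroutines increment a shared
counter, and the sync.Mutex is what keeps the result correct. The counter and the loop
bounds are arbitrary illustrative choices.

package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var mu sync.Mutex
	var wg sync.WaitGroup

	// Two goroutines increment a shared counter 1000 times each.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()   // without this lock, counter++ is an unsynchronized
				counter++   // read-modify-write: a race condition with an
				mu.Unlock() // unpredictable final value
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 2000 with the mutex; anything up to 2000 without it
}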
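For the dependency point, a short sketch of why Fibonacci resists parallelization,
assuming the standard iterative definition: every iteration consumes the values produced
by the previous one (a loop-carried dependency), so the iterations cannot simply be handed
to different processors.

package main

import "fmt"

// fib uses the standard iterative definition. Each iteration reads the
// values written by the previous one, so the loop cannot be split
// across processors the way an array sum can.
func fib(n int) int {
	a, b := 0, 1
	for i := 0; i < n; i++ {
		a, b = b, a+b // depends on the prior iteration's a and b
	}
	return a
}

func main() {
	fmt.Println(fib(10)) // prints 55
}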
Example: Parallel Matrix Multiplication
Sequential Approach:
A single processor computes every element of the product with three nested loops,
taking O(n^3) time for n × n matrices.
Parallel Approach:
The rows of the result matrix are independent of one another, so they can be distributed
among processors; each processor computes its assigned rows concurrently, with no
processor waiting on another.
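A minimal Go sketch of the row-parallel approach, with one goroutine per result row (the
function name matMul and the per-row partitioning are illustrative assumptions):

package main

import (
	"fmt"
	"sync"
)

// matMul computes C = A * B, handing each row of C to its own goroutine.
// Rows are independent, so the only synchronization needed is waiting
// for all of them to finish.
func matMul(a, b [][]int) [][]int {
	n, m, p := len(a), len(b), len(b[0])
	c := make([][]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			row := make([]int, p)
			for k := 0; k < m; k++ {
				for j := 0; j < p; j++ {
					row[j] += a[i][k] * b[k][j]
				}
			}
			c[i] = row // each goroutine writes only its own row
		}(i)
	}
	wg.Wait()
	return c
}

func main() {
	a := [][]int{{1, 2}, {3, 4}}
	b := [][]int{{5, 6}, {7, 8}}
	fmt.Println(matMul(a, b)) // prints [[19 22] [43 50]]
}

Because each goroutine writes only to its own row of the result, no locking is required;
the WaitGroup is the only synchronization.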
Aspect | Sequential Processing | Parallel Processing
Best For | Small-scale tasks | Large-scale, computationally intensive tasks
Conclusion
Parallel processing is a powerful tool that enhances computation efficiency and speed but comes
with trade-offs in complexity and cost. It is best utilized in scenarios where large-scale
computations or time-critical tasks are involved.