HPC Answer Key

The document discusses various decomposition techniques for computation, including recursive, data, functional, exploratory, and speculative decomposition. It also explains different types of data dependencies such as flow, anti, and output dependencies, along with examples and their implications on instruction execution. Additionally, it covers uniform and non-uniform delay pipelines and the drawbacks of MISD architecture.


1.

a. Decomposition: the operation of dividing the computation into smaller parts, some of which may be executed in parallel.

Decomposition Techniques:
i. Recursive decomposition: used for traditional divide-and-conquer algorithms that are not easy to solve iteratively.
ii. Data decomposition: the data is partitioned, and this induces a partitioning of the code into tasks.
iii. Functional decomposition: the functions to be performed on the data are split into multiple tasks.
iv. Exploratory decomposition: used for problems equivalent to a search of a solution space.
v. Speculative decomposition: used when a program may take one of many possible branches depending on results from the computations preceding the choice.
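As a minimal sketch of recursive decomposition, consider a divide-and-conquer sum: at each level the two halves are independent sub-problems, so they form tasks that could run in parallel (the thread pool and base-case size here are illustrative choices, not part of the original answer):

```python
# Recursive decomposition sketch: a divide-and-conquer sum.
# The two halves at each level are independent tasks that may run in parallel.
from concurrent.futures import ThreadPoolExecutor

def recursive_sum(data):
    if len(data) <= 2:              # base case: small enough to solve directly
        return sum(data)
    mid = len(data) // 2
    # the two sub-problems are independent, so they form parallel tasks
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(recursive_sum, data[:mid])
        right = pool.submit(recursive_sum, data[mid:])
        return left.result() + right.result()

print(recursive_sum(list(range(10))))  # 45
```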

b. Stage delays and the corresponding register delay are given:

● S1 = 4 nsec
● S2 = 5 nsec
● S3 = 12 nsec
● S4 = 9 nsec

The register delay is 1 nsec for each stage, and the number of stages is 4.
Time to execute n instructions non-pipelined = (4 + 5 + 12 + 9) * n = 30n nsec
Cycle time for the pipelined case = max(4, 5, 12, 9) + 1 = 13 nsec
Time for n instructions pipelined ≈ 13n nsec (for large n, the pipeline fill time is negligible)
Speedup = 30n / 13n ≈ 2.3
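The calculation above can be checked with a short script (times in nsec):

```python
# Speedup of the pipelined design over the non-pipelined one.
stage_delays = [4, 5, 12, 9]
register_delay = 1

non_pipelined_per_instr = sum(stage_delays)           # 30 nsec per instruction
pipelined_cycle = max(stage_delays) + register_delay  # 13 nsec per cycle

# For a large number of instructions n, the pipeline fill cycles are
# negligible, so speedup ~= (30 * n) / (13 * n) = 30 / 13.
speedup = non_pipelined_per_instr / pipelined_cycle
print(round(speedup, 1))  # 2.3
```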

2.
a. Flow Dependency (True Dependency)
A flow dependency occurs when an instruction depends on the result of a previous instruction. It is also known as a read-after-write (RAW), data, or true dependency. For example:

1. A = 3
2. B = A
3. C = B

Instruction 3 is truly dependent on instruction 2, because the final value of C depends on instruction 2, where the value of B is written. Instruction 2 is truly dependent on instruction 1, because B is assigned from A, and instruction 1 sets the value of A. Since instruction 3 is truly dependent on instruction 2, and instruction 2 on instruction 1, instruction 3 is also (transitively) truly dependent on instruction 1.

Anti-Dependency
An anti-dependency occurs when an instruction requires a value that is updated by a later instruction. It is also known as a write-after-read (WAR) dependency. In the following example, instruction 2 anti-depends on instruction 3: the ordering of these instructions cannot be changed, nor can they be executed in parallel (which could change the instruction ordering), as this would affect the final value of A.

1. B = 3
2. A = B + 1
3. B = 7

Example:

MUL R3, R1, R2
ADD R2, R5, R6

The first instruction multiplies the values of R1 and R2 and writes the result to R3. The second instruction adds the values of R5 and R6 and writes the result to R2. There is clearly an anti-dependency between these two instructions: the first instruction reads R2, and the second then writes a new value to it.

An anti-dependency is an example of a name dependency: it can be removed by renaming variables. For the example above:

1. B = 3
0. C = B
2. A = C + 1
3. B = 7

A new variable, C, has been declared as a copy of B in a new instruction, instruction 0. As a result, the anti-dependency between instructions 2 and 3 has been removed, so these instructions may now be executed in parallel. However, the modification introduces new dependencies: instruction 2 is now truly dependent on instruction 0, and instruction 0 is truly dependent on instruction 1. Being flow dependencies, these new dependencies cannot be safely removed.

Output Dependency
An output dependency occurs when the ordering of instructions affects the final value of a variable. It is also known as a write-after-write (WAW) dependency. In the example below, there is an output dependency between instructions 3 and 1: changing the order of those instructions would change the final value of A, so they cannot be executed in parallel.

1. B = 3
2. A = B + 1
3. B = 7

As with anti-dependencies, output dependencies are a kind of name dependency; they may be removed by renaming variables, as in this modification of the example above:

1. B = 3
2. A = B + 1
3. C = 7

Renaming B to C in the third statement removes the output dependency. However, the true dependency between instructions 1 and 2 remains and cannot be removed.
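The effect of renaming can be checked by brute force: in the renamed sequence (1: B = 3, 2: A = B + 1, 3: C = 7), every ordering that respects the one remaining true dependency (1 before 2) produces the same final state. This is a small illustrative sketch, not part of the original answer:

```python
# Brute-force check that renaming removed the WAW hazard: after renaming,
# statement 3 may run anywhere, as long as 1 still precedes 2.
from itertools import permutations

def run_renamed(order):
    """Execute the renamed example in the given order of statement numbers."""
    env = {}
    ops = {
        1: lambda e: e.update(B=3),
        2: lambda e: e.update(A=e["B"] + 1),  # true dependency on statement 1
        3: lambda e: e.update(C=7),
    }
    for i in order:
        ops[i](env)
    return env

# Keep only orders that respect the remaining true dependency (1 before 2).
valid = [p for p in permutations([1, 2, 3]) if p.index(1) < p.index(2)]
results = {tuple(sorted(run_renamed(p).items())) for p in valid}
print(len(valid), len(results))  # 3 valid orders, all yielding 1 identical state
```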

b. 1. Uniform Delay Pipeline

All the stages in a uniform delay pipeline take the same time to complete their operations. The cycle time of this pipeline is:
Cycle Time (Tp) = Stage Delay
If there are buffers between the stages, then the cycle time is:
Cycle Time (Tp) = Stage Delay + Buffer Delay

2. Non-uniform Delay Pipeline

The stages in a non-uniform delay pipeline take different times to complete their operations. The cycle time of this pipeline is:
Cycle Time (Tp) = Maximum (Stage Delay)
For example, suppose we have four stages with stage delays of 1 ns, 2 ns, 3 ns, and 4 ns. The cycle time is:
Tp = Maximum (1 ns, 2 ns, 3 ns, 4 ns) = 4 ns
If there are buffers between the stages, then the cycle time is:
Cycle Time (Tp) = Maximum (Stage Delay + Buffer Delay)
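Both cycle-time rules can be captured in one small helper, since the uniform case is just the special case where all stage delays are equal (delays in ns):

```python
# Cycle time of a pipeline: the slowest stage, plus any per-stage buffer delay.
def cycle_time(stage_delays, buffer_delay=0):
    """For a uniform pipeline all entries of stage_delays are equal,
    so max() simply returns the common stage delay."""
    return max(d + buffer_delay for d in stage_delays)

print(cycle_time([1, 2, 3, 4]))     # 4  (non-uniform example above, no buffers)
print(cycle_time([1, 2, 3, 4], 1))  # 5  (same stages with 1 ns buffers)
print(cycle_time([3, 3, 3, 3]))     # 3  (uniform: Tp = stage delay)
```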

3.

● Let the time to distribute the jobs to k individuals be kq. Observe that this time is proportional to the number of individuals.
● The time to complete n jobs by a single individual = np
● The time to complete n jobs by k individuals = kq + np/k
● Speedup due to parallel processing = np / (kq + np/k)
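The model above can be explored numerically; the values of n, p, and q below are illustrative assumptions, not from the problem. Note that the kq distribution term means speedup does not grow without bound as k increases:

```python
# Speedup model: serial time n*p versus parallel time k*q + n*p/k.
def speedup(n, p, k, q):
    serial = n * p                  # one individual does all n jobs
    parallel = k * q + n * p / k    # distribution cost plus work split k ways
    return serial / parallel

# Assumed numbers: n = 100 jobs, p = 1 time unit per job, q = 0.5 per worker.
for k in (1, 5, 10, 20):
    print(k, round(speedup(100, 1.0, k, 0.5), 2))
```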

B. Drawbacks of MISD:

● Low degree of parallelism
● High synchronization overhead
● High bandwidth requirement
● CISC bottleneck
● High complexity
