Compiler Design
Machine-Independent Code Optimizations
6th Semester (CSE)
Course Code: 18CS1T08
Dr. Jasaswi Prasad Mohanty
School of Computer Engineering
KIIT Deemed to be University
Bhubaneswar, India
Code Optimization
Code Optimization refers to the techniques used by the compiler to improve the execution
efficiency of the generated object code.
It involves a detailed analysis of the intermediate code and the application of various
transformations; every optimizing transformation must also preserve the semantics of the
program.
Types of Code Optimization:
• Machine Independent Code Optimization (MIO): Can be performed independently of the
target machine.
Local Optimization
Global Optimization
• Machine Dependent Code Optimization (MDO): Optimizations that exploit characteristics
of the specific target machine.
Example: generating object code that utilizes the target machine's registers
more efficiently.
CD / Module-V / Jasaswi 2 5 April 2024
Criteria for Code Optimizing Transformations
When attempting any optimizing transformation, the following criteria should be
applied:
1. The optimizations should capture most of the potential improvements
without an unreasonable amount of effort.
2. The optimizations should be such that the meaning of the source program is
preserved.
3. The optimization should, on average, reduce the time and space expended
by the object code.
Machine Dependent Optimization Criteria
Machine dependent optimization can be achieved using the following criteria:
1. Allocation of a sufficient number of resources (e.g. registers) to improve the
execution efficiency of the program.
2. Using immediate instructions wherever possible.
3. The use of intermixed instructions: intermixing instructions with the data can
increase the speed of execution.
Machine Independent Optimization Criteria
Machine independent optimization can be achieved using the following criteria:
1. Analyze the code completely and use an alternative equivalent sequence of
source code that will produce a minimum amount of target code.
2. Use an appropriate program structure in order to improve the efficiency of the
target code.
3. Eliminate unreachable code from the source program.
4. Move two or more identical computations to one place and reuse the result
instead of recomputing the expression each time.
Machine Independent Code Optimization
The intermediate code generation process introduces many inefficiencies
• Extra copies of variables, use of variables instead of constants, repeated
evaluation of expressions, etc.
Code optimization removes such inefficiencies and improves the code
The improvement may be in time, space, or power consumption
It changes the structure of programs, sometimes beyond recognition
• It inlines functions, unrolls loops, eliminates some programmer-defined
variables, etc.
Code optimization consists of a collection of heuristics; the percentage of
improvement depends on the program (and may even be zero)
Examples of Machine-Independent Optimizations
Global common sub-expression elimination
Copy propagation
Constant propagation and constant folding
Loop invariant code motion
Induction variable elimination and strength reduction
Partial redundancy elimination
Loop unrolling
Function inlining
Tail recursion removal
Vectorization and Concurrentization
Loop interchange and loop blocking
Peephole Optimization
Peephole optimization is a simple and effective technique for locally improving target code.
It improves the performance of the target program by examining a short sequence of
target instructions (the peephole) and replacing these instructions by a shorter or
faster sequence.
A peephole is a small window that is moved over the target code, making
transformations within it; hence the name.
Peephole optimization can be applied to the target code using the following techniques:
1. Redundant instruction elimination
2. Flow-of-control optimization
3. Algebraic simplification
4. Use of machine idioms
Redundant Instruction Elimination
Redundant loads and stores can be eliminated by transformations of the following type.
We can eliminate the second instruction below, since x is already in R0. However, if the
second instruction carries a label, we cannot remove it, because control may reach it
from elsewhere.
MOV R0, x
MOV x, R0
We can also eliminate unreachable instructions. In the following code the body of the
if statement will never be executed (sum is always 0) and hence can be eliminated.
sum = 0;
if (sum)
    printf("%d", sum);
In the following code, Line 5 is unreachable (it follows a return) and hence can be
eliminated.
1. int fun(int a, int b)
2. {
3.     c = a + b;
4.     return c;
5.     printf("%d", c);
6. }
Flow of Control Optimization
Using peephole optimization, unnecessary jumps to jumps can be eliminated.
For example, the sequence
    goto L1
    ...
L1: goto L2
can be replaced by
    goto L2
    ...
L1: goto L2
In both examples the suggested transformation removes one jump.
Simplification Using Algebraic Identities
Peephole optimization is an effective technique for algebraic simplification.
Statements such as
x = x + 0
or
x = x * 1
can be eliminated by peephole optimization using mathematical identities.
Some of these identities are given below:
x + 0 = x    x - 0 = x    x * 1 = x    x / 1 = x
Reduction in Strength
Certain machine instructions are cheaper than others.
In order to improve the performance of the intermediate code we can replace
expensive instructions by equivalent cheaper ones.
EXPENSIVE        CHEAPER
x^2         =    x * x
2 * x       =    x + x
x / 2       =    x * 0.5
Basic Blocks
A basic block B is a sequence of consecutive instructions such that:
1. Flow of Control enters B only at its beginning
2. Flow of Control leaves B at its end (under normal execution)
3. Flow of Control cannot halt or branch out of B except at its end.
Algorithm for Partitioning Code into Basic Blocks
1. Determine the set of leaders, i.e., the first instruction of each basic block:
a) The first statement is a leader.
b) Any instruction that is the target of a branch is a leader.
c) Any instruction following a (conditional or unconditional) branch is a leader.
2. For each leader, its basic block consists of:
a) The leader itself
b) All subsequent instructions up to, but not including, the next leader.
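As a concrete sketch, the leader rules above can be implemented over a simplified instruction array. The Instr record and find_leaders are hypothetical names invented for this illustration; a real compiler IR carries much more information.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified instruction record. */
typedef struct {
    bool is_branch;  /* conditional or unconditional branch */
    int  target;     /* 0-based index of the branch target, or -1 */
} Instr;

/* Mark leaders[k] = true for every instruction that starts a basic block,
   following rules 1(a)-1(c) above. */
void find_leaders(const Instr *code, size_t n, bool *leaders) {
    for (size_t k = 0; k < n; k++)
        leaders[k] = false;
    if (n == 0)
        return;
    leaders[0] = true;                           /* rule 1(a): first statement */
    for (size_t k = 0; k < n; k++) {
        if (code[k].is_branch) {
            if (code[k].target >= 0 && (size_t)code[k].target < n)
                leaders[code[k].target] = true;  /* rule 1(b): branch target */
            if (k + 1 < n)
                leaders[k + 1] = true;           /* rule 1(c): after a branch */
        }
    }
}
```

On the 14-instruction dot-product code of the later slides (a single branch in instruction 14 targeting instruction 3), this marks exactly instructions 1 and 3 as leaders.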
Control Flow Graphs
A flow graph is a directed graph in which control-flow information is added to
the basic blocks.
• The nodes of the flow graph are the basic blocks.
• The block whose leader is the first statement is called the initial block.
• There is a directed edge from block B1 to block B2 if B2 can immediately follow
B1 in some execution sequence; B1 is then a predecessor of B2.
Construction of Control Flow Graph
1. Identify the basic blocks of the function.
2. There is an edge from block B to block C if:
a) There is a (conditional or unconditional) branch from the last instruction of
block B to the first instruction of block C, or
b) Block C immediately follows block B in the textual order of the program, and
block B does not end in an unconditional branch.
Example of Control Flow Graph
Question: Consider the following code for computing the dot product of two vectors
a and b of length 10, and partition it into basic blocks. The machine uses 4 bytes
per word.

Source Code:
prod = 0;
i = 1;
do
{
    prod = prod + a[i] * b[i];
    i = i + 1;
} while (i <= 10);

3 Address Code:
1.  prod = 0
2.  i = 1
3.  t1 = 4 * i
4.  t2 = addr(a) - 4
5.  t3 = t2[t1]
6.  t4 = 4 * i
7.  t5 = addr(b) - 4
8.  t6 = t5[t4]
9.  t7 = t3 * t6
10. t8 = prod + t7
11. prod = t8
12. t9 = i + 1
13. i = t9
14. if i <= 10 goto (3)
Example of Control Flow Graph – contd…
Applying local common-subexpression elimination and copy propagation to the
three-address code, we get the following optimized code:

3 Address Code:
1.  prod = 0
2.  i = 1
3.  t1 = 4 * i
4.  t2 = addr(a) - 4
5.  t3 = t2[t1]
6.  t4 = 4 * i
7.  t5 = addr(b) - 4
8.  t6 = t5[t4]
9.  t7 = t3 * t6
10. t8 = prod + t7
11. prod = t8
12. t9 = i + 1
13. i = t9
14. if i <= 10 goto (3)

Optimized 3 Address Code:
1.  prod = 0
2.  i = 1
3.  t1 = 4 * i
4.  t2 = addr(a) - 4
5.  t3 = t2[t1]
6.  t4 = addr(b) - 4
7.  t5 = t4[t1]
8.  t6 = t3 * t5
9.  prod = prod + t6
10. i = i + 1
11. if i <= 10 goto (3)
Example of Control Flow Graph – contd…
Basic Blocks:

B1:
1. prod = 0
2. i = 1

B2:
3.  t1 = 4 * i
4.  t2 = addr(a) - 4
5.  t3 = t2[t1]
6.  t4 = addr(b) - 4
7.  t5 = t4[t1]
8.  t6 = t3 * t5
9.  prod = prod + t6
10. i = i + 1
11. if i <= 10 goto (3)

According to the algorithm for construction of the flow graph:
• Statement 1 is a leader by rule 1(a).
• Statement 3 is also a leader by rule 1(b), since it is the target of the branch
in statement 11.
• Hence statements 1 and 2 form basic block B1.
• Similarly, statements 3 to 11 form basic block B2.
Example of Control Flow Graph – contd…
Basic Blocks: B1 (statements 1 and 2) and B2 (statements 3 to 11).

Control Flow Graph:
• B1 → B2: B2 immediately follows B1 and B1 does not end in an unconditional branch.
• B2 → B2: the branch "if i <= 10 goto (3)" targets the leader of B2.
The Principal Sources of Optimization
A compiler optimization must preserve the semantics of the original program.
Except in very special circumstances, once a programmer chooses and
implements a particular algorithm, the compiler cannot understand enough about
the program to replace it with a substantially different and more efficient
algorithm.
A compiler knows only how to apply relatively low-level semantic transformations,
using general facts such as algebraic identities like x + 0 = x, or program
semantics such as the fact that performing the same operation on the same
values yields the same result.
Causes of Redundancy
There are many redundant operations in a program.
More often, the redundancy is a side effect of having written the program in a
high-level language.
• Except in languages like C or C++, where pointer arithmetic is allowed,
programmers have no choice but to refer to elements of an array or fields in a
structure through accesses like A[i][j] or X->f1.
During compilation, these high-level data-structure accesses expand into a
number of low-level arithmetic operations.
• Example: the computation of the location of the (i, j)th element of a matrix A.
Accesses to the same data structure often share many common low-level
operations; the programmers are not aware of these operations and cannot
eliminate the redundancies themselves.
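The low-level expansion can be made concrete. The sketch below computes the address of the (i, j)th element of a 1-based, row-major matrix with 4-byte elements, as assumed elsewhere in these slides; elem_addr is a hypothetical helper name.

```c
/* Row-major address of A[i][j] for a 1-based matrix with ncols columns
   of 4-byte elements; base is the address of A[1][1]. */
unsigned long elem_addr(unsigned long base, int i, int j, int ncols) {
    /* Every A[i][j] in the source expands into this arithmetic. */
    return base + ((unsigned long)(i - 1) * (unsigned long)ncols
                   + (unsigned long)(j - 1)) * 4UL;
}
```

Two accesses A[i][j] and A[i][k] share the whole (i - 1) * ncols subcomputation, which is exactly the kind of redundancy the programmer cannot see or remove at source level.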
A Running Example: Quicksort
Consider the following fragment of a sorting program (quicksort) as a running example
to illustrate several important code-improving transformations.
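The slide's code figure is not reproduced here; the fragment below is the classic quicksort used as this running example in the Dragon book, with the array a taken to be a global.

```c
int a[100];  /* global array sorted by quicksort(m, n) */

/* Recursively sorts a[m] through a[n]. */
void quicksort(int m, int n) {
    int i, j, v, x;
    if (n <= m) return;
    /* fragment begins here */
    i = m - 1; j = n; v = a[n];          /* a[n] is the pivot */
    while (1) {
        do i = i + 1; while (a[i] < v);  /* scan up for a[i] >= v */
        do j = j - 1; while (a[j] > v);  /* scan down for a[j] <= v */
        if (i >= j) break;
        x = a[i]; a[i] = a[j]; a[j] = x; /* swap a[i] and a[j] */
    }
    x = a[i]; a[i] = a[n]; a[n] = x;     /* move the pivot into place */
    /* fragment ends here */
    quicksort(m, j);
    quicksort(i + 1, n);
}
```

The two inner do-while loops are Loop 1 and Loop 2 of the next slides, and the enclosing while (1) is Loop 3. Note that the down-scan may inspect a[m-1], so it is safest to keep a small sentinel value just below the sorted range.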
Intermediate code of Quicksort
Assumption: integers occupy four bytes of memory.
(Figure: the C code for quicksort shown alongside its three-address code, with the
loops Loop 1, Loop 2, and the enclosing Loop 3 marked in both.)
Flow Graph
(Figure: the flow graph of the three-address code.)
• Block B1: the entry node
• Block B2: Loop 1
• Block B3: Loop 2
• Blocks B2, B3, B4, B5 together: Loop 3
Semantics-Preserving Transformations
There are a number of ways in which a compiler can improve a program without
changing the function it computes.
Common examples of function-preserving (or semantics-preserving)
transformations:
• Common-subexpression elimination
• Copy propagation
• Dead-code elimination
• Constant folding
Common Subexpression Elimination
A common subexpression is an expression that appears repeatedly in the
program and whose value was computed previously. (Example figure omitted.)
If the operands of the subexpression do not change, the result of the previous
computation is used instead of recomputing it each time.
Types of common-subexpression elimination:
• Local common-subexpression elimination
Performed within a single basic block.
A basic block is a straight-line code sequence
with no branches.
• Global common-subexpression elimination
Performed across basic blocks, over an entire
procedure.
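A minimal before/after sketch of the idea, written at source level (cse_before and cse_after are hypothetical names; an actual compiler performs this rewrite on the intermediate code, not the source):

```c
/* Before: the subexpression b * c is evaluated twice. */
int cse_before(int b, int c, int d, int e) {
    int t2 = b * c + d;
    int t4 = b * c + e;   /* b * c recomputed; its operands are unchanged */
    return t2 + t4;
}

/* After elimination: b * c is evaluated once into t1 and reused. */
int cse_after(int b, int c, int d, int e) {
    int t1 = b * c;       /* single evaluation of the common subexpression */
    int t2 = t1 + d;
    int t4 = t1 + e;
    return t2 + t4;
}
```

Both functions compute the same value; the second performs one multiplication instead of two.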
Local common-subexpression elimination
Frequently, a program includes several calculations of the same value, such as
an offset in an array.
Some of these duplicate calculations cannot be avoided by the programmer
because they lie below the level of detail accessible within the source language.
Example: block B5 shown in Fig. (a) recalculates 4 * i and 4 * j, although none
of these calculations were requested explicitly by the programmer.
Global Common Subexpressions
An occurrence of an expression E is called a common subexpression if E was
previously computed and the values of the variables in E have not changed since
the previous computation.
We avoid recomputing E if we can use its previously computed value; that is, the
variable x to which the previous computation of E was assigned has not changed
in the interim.
• If x has been changed, it may still be possible to reuse the computation of E if
we assign its value to a new variable v, as well as to x, and use the value of v
in place of a recomputation of E.
Global common-subexpression elimination: Example
Before:
x = t3
a[t2] = t5
a[t4] = x

After:
x = t3
t14 = a[t1]
a[t2] = t14
a[t1] = x
Copy Propagation
Assignments of the form u = v are called copy statements, or copies.
Example:
• In order to eliminate the common subexpression from the statement c = d + e in
Fig. (a), we must use a new variable t to hold the value of d + e.
• The value of variable t, instead of that of the expression d + e, is assigned to c in
Fig. (b).
• Note: it would be incorrect to replace c = d + e by either c = a or c = b, since
d + e is assigned to a on one path and to b on another.
Copy Propagation / Variable Propagation
Copy propagation means using one variable in place of another.
Example:
x = y;
z = x + 2;    becomes    z = y + 2;
• Here the variable x is eliminated.
• The necessary condition is that the
variable must be assigned from
another variable or some constant.
Copy Propagation Example
The idea behind the copy-propagation transformation is to use v for u wherever
possible after the copy statement u = v.
Advantages of using Copy Propagation
Copy propagation reduces the required computation time by eliminating
redundant and unnecessary variable assignments in expressions.
Copy propagation aids memory optimization: only the required assignments and
variables occupy memory, and irrelevant expressions are eliminated.
Copy propagation simplifies the code by eliminating expressions that are not
required, making the code easier to understand.
Dead Code Elimination
A variable is said to be live at a point in a program if the value it contains is used
subsequently.
A variable is said to be dead at a point in a program if its value is never used
afterwards. Code that computes only dead values is known as dead code.
A programmer is unlikely to introduce dead code intentionally; it may appear as the
result of previous transformations.
Example:
Consider the following code:
int a = 10, b = 20;
if (a % 2 == 0)
    printf("Even");
else
    printf("Odd");
The statement b = 20; is dead code, since b is never used, and can be eliminated
from the program to optimize the code.
Dead Code Elimination: Example
Suppose debug is set to TRUE or FALSE at various points in the program, and used in
statements like
if (debug) print(...);
It may be possible for the compiler to deduce that each time the program reaches this
statement, the value of debug is FALSE, because one particular statement
debug = FALSE;
is the last assignment to debug prior to any test of its value,
no matter what sequence of branches the program actually takes.
If copy propagation replaces debug by FALSE, then the print statement is dead because
it cannot be reached.
We can eliminate both the test and the print operation from the object code.
More generally, deducing at compile time that the value of an expression is a constant
and using the constant instead is known as constant folding.
Advantage of copy propagation
One advantage of copy propagation is that it often turns the copy statement into
dead code.
For example, copy propagation followed by dead-code elimination removes the
assignment to x, transforming the code shown in (a) into (b):
(a) x = t3
    a[t2] = t5
    a[t4] = x
(b) a[t2] = t5
    a[t4] = t3
Constant Folding
It refers to the evaluation of expression at compilation time whose operands are known to be
constants.
Example:
• If we have an expression say 10 ∗ 2 + 4 is expanded then the compiler can calculate the result
(24 in this case) during compilation time and modify the code as if it contains the resultant (24)
rather than the original expression 10 ∗ 2 + 4 − 𝑏.
𝐿1: 𝑇1 = 10 ∗ 2 𝐿1: 𝑇1 = 24
𝐿2: 𝑇2 = 𝑇1 + 4 𝐿2: 𝑇2 = 𝑇2 − 𝑏
𝐿3: 𝑇3 = 𝑇2 − 𝑏 𝐿4: 𝑎 = 𝑇2
𝐿4: 𝑎 = 𝑇3
3AC before Constant Folding 3AC after Constant Folding
Code Movement
There are two basic goals of code movement:
1. To reduce the size of the code (space).
2. To reduce the frequency of execution of code (time).
Example:
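As the slide's example figure is not reproduced, here is a sketch of the second goal: hoisting a loop-invariant limit computation out of the loop test (count_before and count_after are hypothetical names):

```c
/* Before: max - 2 is evaluated on every iteration of the loop test. */
int count_before(const int *a, int n, int max) {
    int s = 0;
    for (int i = 0; i < n; i++)
        if (a[i] <= max - 2)      /* loop-invariant expression */
            s++;
    return s;
}

/* After code movement: the invariant max - 2 is hoisted before the loop. */
int count_after(const int *a, int n, int max) {
    int s = 0;
    int t = max - 2;              /* computed once, outside the loop */
    for (int i = 0; i < n; i++)
        if (a[i] <= t)
            s++;
    return s;
}
```

The expression is evaluated once instead of n times, without changing the result.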
Strength Reduction
Some operators are stronger (more expensive) than others; for example, the strength of
the * operator is higher than that of the + operator.
In the strength reduction technique, higher-strength operators are replaced by equivalent
lower-strength operators.
Example:
An induction variable is an integer scalar variable updated in the form v = v ± constant,
where v is the induction variable.
In the modified code, temp is the induction variable.
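A sketch of the transformation, assuming 4-byte integers as the slides do; sum_before and sum_after are hypothetical names, and temp is the induction variable that mirrors i * 4:

```c
/* Before: the byte offset i * 4 is a multiplication on every iteration. */
long sum_before(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += *(const int *)((const char *)a + i * 4);  /* i * 4 each time */
    return s;
}

/* After strength reduction: temp == i * 4 is maintained by addition. */
long sum_after(const int *a, int n) {
    long s = 0;
    int temp = 0;                 /* induction variable, in lock step with i */
    for (int i = 0; i < n; i++) {
        s += *(const int *)((const char *)a + temp);
        temp = temp + 4;          /* addition replaces multiplication */
    }
    return s;
}
```

This is exactly the kind of rewrite applied to t4 = 4 * j in the quicksort example on the following slides.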
Loop Optimization
Significant code optimization can be performed on the loops of a program.
Loops are a very important target for optimization, especially the inner loops, where
programs tend to spend the bulk of their time.
The running time of a program may be improved if we decrease the number of
instructions in an inner loop, even if we increase the amount of code outside that loop.
Loop optimization is a technique in which code optimization is performed on inner loops.
The loop optimization is carried out by the following methods:
1. Code Motion
2. Induction Variable and strength reduction
3. Loop invariant method
4. Loop unrolling
5. Loop Fusion
Code Motion
Code motion is a technique that moves code outside the loop; hence the name.
If some expression in the loop produces the same result no matter how many times
the loop executes, that expression should be placed just before the loop (i.e. outside
the loop, at the loop entry).
Example:
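Since the example figure is not reproduced, here is a minimal sketch of the transformation, with hypothetical names fill_before and fill_after and y + z as the invariant expression:

```c
/* Before: y + z is recomputed on every iteration, although y and z
   do not change inside the loop. */
void fill_before(int *a, int n, int y, int z) {
    for (int i = 0; i < n; i++)
        a[i] = y + z + i;
}

/* After code motion: the invariant y + z is moved to the loop entry. */
void fill_after(int *a, int n, int y, int z) {
    int t = y + z;                /* hoisted out of the loop */
    for (int i = 0; i < n; i++)
        a[i] = t + i;
}
```

Both versions produce the same array; the second performs one addition per iteration instead of two.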
Induction variables and reduction in strength
A variable 𝑥 is said to be an “induction variable” if
there is a positive or negative constant 𝑐 such that
each time 𝑥 is assigned, its value either increases or
decreases by 𝑐.
For instance, 𝑖 and 𝑡2 are induction variables in the
loop containing 𝐵2 in the adjacent figure.
Induction variables can be computed with a single
increment (addition or subtraction) per loop iteration.
The transformation of replacing an expensive
operation, such as multiplication, by a cheaper one,
such as addition, is known as strength reduction.
Induction variables not only sometimes allow a strength reduction; often it is
possible to eliminate all but one of a group of induction variables whose values
remain in lock step as we go around the loop.
How to process induction variables
When processing loops, it is useful to work "inside-out"; that is, we should start
with the inner loops and proceed to progressively larger, surrounding loops.
Here, we will see how this optimization applies to
our quicksort example by beginning with one of the
innermost loops: 𝐵3 by itself.
Note that the values of j and t4 remain in lock step: every time the value of j
decreases by 1, the value of t4 decreases by 4, because 4 * j is assigned to t4.
The variables j and t4 thus form a pair of induction variables.
When there are two or more induction variables in a
loop, it may be possible to get rid of all but one.
Induction variables and reduction in strength: Example 1
In the block 𝐵3, the relationship 𝑡4 = 4 ∗ 𝑗 surely holds after
assignment to 𝑡4 and 𝑡4 is not changed elsewhere in the
inner loop (block 𝐵3).
It follows that the value of t4 decreases by 4 on each
execution of the loop (block B3).
We may therefore replace the assignment 𝑡4 = 4 ∗ 𝑗 by
𝑡4 = 𝑡4 − 4.
The only problem is that 𝑡4 does not have a value when we
enter block 𝐵3 for the first time.
Since we must maintain the relationship t4 = 4 * j on entry
to block B3, we place an initialization of t4 at the end of
the block where j is initialized.
Although we have added one more instruction, which is
executed once in block 𝐵1 , the replacement of a
multiplication by a subtraction will speed up the object code if
multiplication takes more time than addition or subtraction,
as is the case on many machines.
Induction variables and reduction in strength: Example 1 contd…
(Figure: the transformed flow graph.)
Induction variables and reduction in strength: Example 2
After reduction in strength is applied to the
inner loops around 𝐵2 and 𝐵3, the only use of
𝑖 and 𝑗 is to determine the outcome of the test
in block 𝐵4.
We know that the values of 𝑖 and 𝑡2 satisfy
the relationship 𝑡2 = 4 ∗ 𝑖, while those of 𝑗
and 𝑡4 satisfy the relationship 𝑡4 = 4 ∗ 𝑗.
Thus, the test t2 ≥ t4 can be substituted for
i ≥ j.
Once i ≥ j is replaced by t2 ≥ t4, i in block
𝐵2 and 𝑗 in block 𝐵3 become dead variables,
and the assignments to them in these blocks
become dead code that can be eliminated.
The resulting flow graph is shown in the next
slide.
Induction variables and reduction in strength: Example 2 contd…
(Figure: the resulting flow graph after dead-code elimination.)
Loop Invariant Method
In the loop invariant method, a
computation whose result does not
change inside the loop is not
performed inside the loop.
Instead, the computation is performed
once outside the loop; computing the
same expression on every iteration is
overhead to the system, so moving it
out reduces computation overhead
and hence optimizes the code.
Loop Unrolling
Loop unrolling is a loop
transformation technique that
helps to optimize the
execution time of a program.
We reduce the number of
iterations by replicating the
loop body. Loop unrolling
increases the program's
speed by eliminating
loop-control and loop-test
instructions.
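A sketch of unrolling by a factor of four; sum_rolled and sum_unrolled are hypothetical names, and a remainder loop handles lengths not divisible by 4:

```c
/* Rolled: one loop test and one increment per element. */
long sum_rolled(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by 4: one loop test per four elements. */
long sum_unrolled(const int *a, int n) {
    long s = 0;
    int i = 0;
    for (; i + 3 < n; i += 4) {   /* body does four iterations' work */
        s += a[i];
        s += a[i + 1];
        s += a[i + 2];
        s += a[i + 3];
    }
    for (; i < n; i++)            /* remainder loop for the leftover elements */
        s += a[i];
    return s;
}
```

The unrolled version executes roughly a quarter of the loop tests and increments, at the cost of larger code.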
Loop Jamming
Loop jamming (also called loop
fusion) combines two or more loops
into a single loop. It reduces the
overhead of executing several
separate loops over the same range.
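A sketch of the transformation with hypothetical names init_separate and init_jammed; jamming is legal here because the two loop bodies are independent:

```c
/* Before fusion: two loops over the same index range. */
void init_separate(int *a, int *b, int n) {
    for (int i = 0; i < n; i++) a[i] = i * 2;
    for (int i = 0; i < n; i++) b[i] = i * 3;
}

/* After loop jamming: one loop performs both bodies,
   halving the loop-control overhead. */
void init_jammed(int *a, int *b, int n) {
    for (int i = 0; i < n; i++) {
        a[i] = i * 2;
        b[i] = i * 3;
    }
}
```

In general the compiler must first check that fusing the loops does not change any data dependence between the two bodies.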