Ia3-1

Code optimization in compiler design aims to enhance the performance and efficiency of executable code through various techniques such as control flow, data flow, instruction-level, memory, algorithmic optimization, and parallelization. Key issues in code generation include input to the code generator, target program, memory management, instruction selection, register allocation, and evaluation order. Additionally, parameter passing techniques and storage allocation methods play crucial roles in managing memory and optimizing performance during compilation and runtime.

1. COMPILER DESIGN: CODE OPTIMIZATION


Code optimization is a vital stage in compiler design focused on improving the
performance and efficiency of executable code. By enhancing the quality of the generated
machine code, optimizations can reduce execution time, lower resource consumption, and
boost overall system performance.
The primary sources of optimization in compiler design include:
1. Control Flow Optimization: Techniques that enhance the execution paths of
programs, such as loop unrolling and branch prediction.
2. Data Flow Optimization: Strategies that analyze the flow of data to minimize
redundancy and improve access patterns, including constant propagation and dead
code elimination.
3. Instruction-Level Optimization: Refinements at the instruction level, such as
instruction scheduling, register allocation, and reducing instruction count.
4. Memory Optimization: Approaches to manage memory usage efficiently, such as
cache optimization and memory layout adjustments.
5. Algorithmic Optimization: Improving the underlying algorithms used in code, which
can lead to more efficient execution overall.
6. Parallelization: Techniques that leverage multi-core architectures to execute code
simultaneously, thus improving performance.
2. Code generation issues:
Design Issues
In the code generation phase, various issues can arise:
1. Input to the code generator
2. Target program
3. Memory management
4. Instruction selection
5. Register allocation
6. Evaluation order
1. Input to the code generator
o The input to the code generator consists of the intermediate
representation of the source program, together with information from
the symbol table. Both are produced by the front end.
o The intermediate representation can take several forms:
a) Postfix notation
b) Syntax tree
c) Three-address code
o We assume the front end produces a low-level intermediate
representation, i.e., the values of names in it can be directly
manipulated by machine instructions.
o The code generation phase requires complete, error-free intermediate
code as input.
2. Target program:
The target program is the output of the code generator. The output can be:
a) Assembly language: It allows subprograms to be compiled separately.
b) Relocatable machine language: It makes the process of code generation
easier.
c) Absolute machine language: It can be placed in a fixed location in memory
and can be executed immediately.
3. Memory management
o During the code generation process, symbol table entries have to be
mapped to actual addresses, and labels have to be mapped to
instruction addresses.
o Mapping names in the source program to addresses of data is done
cooperatively by the front end and the code generator.
o Local variables are stack-allocated in the activation record, while
global variables reside in the static area.

4. Instruction selection:
o The instruction set of the target machine should be complete and
uniform.
o When considering the efficiency of the target machine, instruction
speeds and machine idioms are important factors.
o The quality of the generated code is determined by its speed and
size.
5. Register allocation
Registers can be accessed faster than memory. Instructions involving
register operands are shorter and faster than those involving memory
operands.
The following subproblems arise when we use registers:
Register allocation: selecting the set of variables that will reside in
registers.
Register assignment: picking the specific register in which each variable
will reside.

6. Evaluation order
The efficiency of the target code can be affected by the order in which the
computations are performed. Some computation orders need fewer registers
to hold intermediate results than others.
3. Basic blocks and flow graphs:
Basic Blocks in Compiler Design:
A basic block is a straight-line code sequence with no branches in except
at the entry and no branches out except at the exit. It is a set of
statements that always execute one after another, in sequence.
Algorithm: Partitioning three-address code into basic blocks.
Input: A sequence of three address instructions.
Process: Determine which instructions of the intermediate code are
leaders. The following rules are used for finding a leader:
1. The first three-address instruction of the intermediate code is a leader.
2. Instructions that are targets of unconditional or conditional jump/goto
statements are leaders.
3. Instructions that immediately follow unconditional or conditional
jump/goto statements are considered leaders.
Example 1:
The following sequence of three-address statements forms a basic block:
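The statement sequence itself is missing from this copy of the document; a typical basic block of three-address statements (here computing a*a + 2*a*b + b*b, a standard textbook example supplied as a stand-in) looks like:

```
t1 := a * a
t2 := a * b
t3 := 2 * t2
t4 := t1 + t3
t5 := b * b
t6 := t4 + t5
```

There are no jumps into or out of the sequence, so control enters at the first statement and leaves after the last, which is exactly the basic block property.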
FLOW GRAPH:
A flow graph is a directed graph. It contains the flow-of-control information
for the set of basic blocks.
A control flow graph depicts how program control is passed among the
blocks. It is useful in loop optimization.
Flow graph for the vector dot product is given as follows:
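The flow-graph figure is missing from this copy; the three-address code it is drawn from (the conventional textbook dot-product example, supplied here as a stand-in) is partitioned into two blocks:

```
B1:  (1)  prod := 0
     (2)  i := 1
B2:  (3)  t1 := 4 * i
     (4)  t2 := a[t1]
     (5)  t3 := 4 * i
     (6)  t4 := b[t3]
     (7)  t5 := t2 * t4
     (8)  t6 := prod + t5
     (9)  prod := t6
     (10) t7 := i + 1
     (11) i := t7
     (12) if i <= 20 goto (3)
```

B1 initializes the accumulator and loop counter; B2 is the loop body, whose conditional jump at (12) returns to its own first statement.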

o Block B1 is the initial node. Block B2 immediately follows B1, so there
is an edge from B1 to B2.
o The target of the jump at the end of B2 is the first statement of B2, so
there is also an edge from B2 to itself.
o B2 is a successor of B1, and B1 is a predecessor of B2.
4. Parameter passing techniques:
Parameter passing techniques in compiler design are crucial for managing how
functions receive and use arguments. Different methods can impact
performance, memory usage, and safety. Here are the primary techniques:

1. Pass by Value

 Mechanism: A copy of the actual parameter's value is created and passed
to the function.

 Characteristics:

o Changes made to the parameter inside the function do not affect
the original variable.

o Simple to implement and avoids side effects.

 Use Cases: Suitable for primitive data types where the overhead of
copying is minimal.

2. Pass by Reference

 Mechanism: A reference (or address) to the actual parameter is passed.

 Characteristics:

o Modifications within the function directly affect the original
variable.

o More efficient for large data structures since no copying occurs.

 Use Cases: Often used for objects or arrays where performance is a
concern.

3. Pass by Value-Result (Copy-In Copy-Out)

 Mechanism: A copy of the actual parameter is passed in, and at the end of
the function, the final value is copied back to the original variable.

 Characteristics:

o Combines aspects of both pass by value and pass by reference.

o Can be less efficient due to multiple copying.

 Use Cases: Useful when a function needs to return multiple values.


4. Pass by Name

 Mechanism: The argument is not evaluated until it is used within the
function, effectively passing the expression itself.

 Characteristics:

o Can lead to dynamic evaluation, which can optimize certain
situations.

o May introduce unexpected side effects if the argument changes.

 Use Cases: Commonly found in languages like Algol; less used in modern
languages.

5. Pass by Pointer

 Mechanism: A pointer to the variable is passed to the function.

 Characteristics:

o Allows for direct manipulation of the variable's memory.

o Can be dangerous if not handled properly (e.g., dereferencing null
pointers).

 Use Cases: Frequently used in C and C++ for dynamic data structures.

6. Pass by Closure

 Mechanism: Functions can capture and remember the environment in
which they were created, allowing them to access variables from their
enclosing scope.

 Characteristics:

o Supports higher-order functions and allows for stateful function
behavior.

o Common in functional programming languages.

 Use Cases: Useful for callbacks, event handling, and maintaining state.
5. STORAGE ALLOCATION IN COMPILER DESIGN:
Storage allocation in compiler design involves managing memory for variables, data
structures, and code during the compilation process and runtime. The methods of storage
allocation can significantly affect performance and memory usage. Here are the primary
storage allocation methods:

1. Static Allocation

 Description: Memory is allocated at compile time for variables whose sizes are known
and fixed. This includes global variables and constants.

 Characteristics:

o Fast access since the memory addresses are predetermined.

o Limited flexibility; sizes and locations cannot change during execution.

 Use Cases: Best for global constants, fixed-size arrays, and other data whose lifetime
and size are known at compile time.

2. Stack Allocation

 Description: Memory is allocated for local variables and function parameters on a
stack. Each function call creates a new stack frame.

 Characteristics:

o Automatic deallocation when a function returns, making it efficient.

o Limited in size; deep recursion can lead to stack overflow.

 Use Cases: Suitable for local variables and parameters in functions, especially in
languages with block scope.

3. Heap Allocation

 Description: Memory is allocated dynamically at runtime using functions like malloc in
C or new in C++.

 Characteristics:

o Flexible; sizes can be determined at runtime, allowing for dynamic data
structures.

o Requires manual deallocation (e.g., free in C), which can lead to memory leaks
if not handled properly.

 Use Cases: Useful for data structures like linked lists, trees, and arrays whose sizes are
not known at compile time.

4. Global Allocation

 Description: Memory is allocated for global variables and is accessible throughout the
program.
 Characteristics:

o The lifetime of global variables lasts for the duration of the program.

o Can lead to side effects and reduced modularity if used excessively.

 Use Cases: Best for data that needs to be accessed across multiple functions or
modules.

5. Memory Pooling

 Description: A pre-allocated pool of memory is used for dynamic allocation, allowing
for efficient management of small memory blocks.

 Characteristics:

o Reduces fragmentation and overhead associated with frequent
allocations/deallocations.

o Can improve performance for applications with a high frequency of small
allocations.

 Use Cases: Common in game development and real-time systems where performance
is critical.

6. Garbage Collection

 Description: Automatic memory management that identifies and reclaims memory
that is no longer in use.

 Characteristics:

o Eliminates manual deallocation and reduces memory leaks.

o Can introduce overhead and pauses in execution for memory management.

 Use Cases: Common in languages like Java, Python, and others that prioritize
developer productivity and safety.
