Compiler Construction Notes
Compiler Construction
Language Processing System
We have learnt that any computer system is made of hardware and software. The hardware understands only machine language, which is hard for humans to read and write. So we write programs in a high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to obtain the desired code that can be used by the machine. This is known as the Language Processing System.
A linker tool is used to link all the parts of the program together for execution
(executable machine code).
A loader loads all of them into memory and then the program is executed.
Preprocessor
A preprocessor, generally considered as a part of the compiler, is a tool that produces input for the compiler. It deals with macro-processing, augmentation, file inclusion, language extension, etc.
Interpreter
An interpreter, like a compiler, translates high-level language into low-
level machine language. The difference lies in the way they read the source
code or input. A compiler reads the whole source code at once, creates tokens, checks semantics, generates intermediate code, translates the whole program and may involve many passes. In contrast, an interpreter reads a statement from the input, converts it to an intermediate code, executes it, then takes the next statement in sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler reads the whole program even if it encounters several errors.
Assembler
An assembler translates assembly language programs into machine code. The output of an assembler is called an object file, which contains a combination of machine instructions as well as the data required to place these instructions in memory.
Linker
Linker is a computer program that links and merges various object files
together in order to make an executable file. All these files might have
been compiled by separate assemblers. The major task of a linker is to search and locate referenced modules/routines in a program and to determine the memory locations where these codes will be loaded, making the program instructions have absolute references.
Loader
The loader is a part of the operating system and is responsible for loading executable files into memory and executing them. It calculates the size of a program (instructions and data) and creates memory space for it. It initializes various registers to initiate execution.
Cross-compiler
A compiler that runs on platform (A) and is capable of generating
executable code for platform (B) is called a cross-compiler.
Source-to-source Compiler
A compiler that takes the source code of one programming language and
translates it into the source code of another programming language is
called a source-to-source compiler.
Structure Of Compiler:
A compiler can broadly be divided into two phases based on the way they
compile.
Analysis Phase
Known as the front-end of the compiler, the analysis phase of the compiler reads the source program, divides it into core parts and then checks for lexical, grammar and syntax errors. The analysis phase generates an intermediate representation of the source program and the symbol table, which are fed to the synthesis phase as input.
Synthesis Phase
Known as the back-end of the compiler, the synthesis phase generates
the target program with the help of intermediate source code
representation and symbol table.
Pass : A pass refers to the traversal of a compiler through the entire program.
Phases of compiler
The compilation process is a sequence of various phases. Each phase takes
input from its previous stage, has its own representation of source
program, and feeds its output to the next phase of the compiler. Let us
understand the phases of a compiler.
Lexical Analysis
The first phase of the compiler, also known as scanning, works as a text scanner. This phase scans the source code as a stream of characters and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens as:
<token-name, attribute-value>
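As a rough illustration (not part of the original notes), a scanner might represent such <token-name, attribute-value> pairs with a small C structure; the names TokenKind and Token are made up for this sketch:

#include <stdio.h>

/* A minimal token representation: <token-name, attribute-value>. */
typedef enum { TOK_ID, TOK_NUM, TOK_PLUS, TOK_ASSIGN } TokenKind;

typedef struct {
    TokenKind kind;        /* the token name, e.g. TOK_ID                 */
    char      lexeme[32];  /* the attribute value, e.g. the spelled text  */
} Token;

int main(void) {
    Token t = { TOK_ID, "sum" };              /* <id, "sum"> */
    printf("<%d, %s>\n", t.kind, t.lexeme);
    return 0;
}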
Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced by lexical analysis as input and generates a parse tree (or syntax tree).
tree). In this phase, token arrangements are checked against the source
code grammar, i.e. the parser checks if the expression made by the tokens
is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the constructed parse tree follows the rules of the language. For example, it checks that values are assigned between compatible data types and flags operations such as adding a string to an integer. Also, the semantic
analyzer keeps track of identifiers, their types and expressions; whether
identifiers are declared before use or not etc. The semantic analyzer
produces an annotated syntax tree as an output.
Intermediate Code Generation
After semantic analysis, the compiler generates an intermediate code of the source program for the target machine. It represents a program for some abstract machine, in between the high-level language and the machine language.
Code Optimization
The next phase does code optimization of the intermediate code.
Optimization can be assumed as something that removes unnecessary
code lines, and arranges the sequence of statements in order to speed up
the program execution without wasting resources (CPU, memory).
Code Generation
In this phase, the code generator takes the optimized representation of
the intermediate code and maps it to the target machine language. The
code generator translates the intermediate code into a sequence of
(generally) re-locatable machine code. Sequence of instructions of
machine code performs the task as the intermediate code would do.
Symbol Table
It is a data structure maintained throughout all the phases of a compiler. All the identifiers' names along with their types are stored here. The
symbol table makes it easier for the compiler to quickly search the
identifier record and retrieve it. The symbol table is also used for scope
management.
Lexical Analysis
Lexical analysis is the first phase of a compiler. It takes the modified source code from the language preprocessor, written in the form of sentences, and breaks it into a series of tokens, removing any whitespace and comments in the source code.
Tokens
A lexeme is a sequence of (alphanumeric) characters that forms an instance of a token. There are some predefined rules for every lexeme to be identified
as a valid token. These rules are defined by grammar rules, by means of
a pattern. A pattern explains what can be a token, and these patterns are
defined by means of regular expressions.
Specifications of Tokens
Let us understand how the language theory undertakes the following
terms:
Alphabets
Any finite set of symbols is an alphabet: {0,1} is the set of binary alphabet symbols, {0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F} is the set of hexadecimal alphabet symbols, and {a-z, A-Z} is the set of English language alphabet symbols.
Strings
Any finite sequence of alphabet symbols is called a string. The length of a string is the total number of occurrences of symbols in it, e.g., the length of the string tutorialspoint is 14 and is denoted by |tutorialspoint| = 14. A string having no symbols, i.e. a string of zero length, is known as an empty string and is denoted by ε (epsilon).
Special Symbols
A typical high-level language contains special symbols such as the following:
Assignment: =
Preprocessor: #
Language
A language is considered as a finite set of strings over some finite set of alphabet symbols. Computer languages are considered as finite sets, and mathematically, set operations can be performed on them. Finite languages can be described by means of regular expressions.
Longest Match Rule
When the lexical analyzer reads the source code, it scans it letter by letter. For example:
int intvalue;
While scanning the characters up to 'int', the lexical analyzer cannot determine whether it is the keyword int or the prefix of the identifier intvalue.
The Longest Match Rule states that the lexeme scanned should be
determined based on the longest match among all the tokens available.
The lexical analyzer also follows rule priority, where a reserved word, e.g., a keyword, of the language is given priority over user-defined identifiers. That is, if the lexical analyzer finds a lexeme that matches an existing reserved word, it emits the keyword token rather than an identifier token (so reserved words cannot be used as identifier names).
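A small sketch (assumed, not from the original notes) of how a scanner might apply the longest-match rule and keyword priority when reading int intvalue: it keeps consuming identifier characters as long as possible, and only then checks the finished lexeme against a keyword table.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Scan one identifier-or-keyword starting at p (longest match). */
static const char *scan_word(const char *p, char *out) {
    int n = 0;
    while (isalnum((unsigned char)*p) || *p == '_')  /* longest match */
        out[n++] = *p++;
    out[n] = '\0';
    return p;
}

int main(void) {
    const char *keywords[] = { "int", "while", "return" };
    const char *src = "int intvalue;";
    char lexeme[64];
    while (*src) {
        if (isalpha((unsigned char)*src) || *src == '_') {
            src = scan_word(src, lexeme);
            int is_kw = 0;
            for (int i = 0; i < 3; i++)               /* keyword priority */
                if (strcmp(lexeme, keywords[i]) == 0) is_kw = 1;
            printf("<%s, \"%s\">\n", is_kw ? "keyword" : "id", lexeme);
        } else {
            src++;   /* skip whitespace and punctuation in this sketch */
        }
    }
    return 0;
}

Running this prints <keyword, "int"> followed by <id, "intvalue">: the scanner does not stop at the prefix int of intvalue because the longest match wins.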
Regular Expressions
The lexical analyzer needs to scan and identify only a finite set of valid
string/token/lexeme that belong to the language in hand. It searches for
the pattern defined by the language rules.
Operations
The various operations on languages are:
Union: L ∪ M = {s | s is in L or s is in M}
Concatenation: LM = {st | s is in L and t is in M}
Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then:
(r)|(s) is a regular expression denoting L(r) ∪ L(s)
(r)(s) is a regular expression denoting the concatenation L(r)L(s)
(r)* is a regular expression denoting (L(r))*
Using these notations, the tokens of a language can be specified; for example:
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [ + | - ]
The only problem left with the lexical analyzer is how to verify the validity
of a regular expression used in specifying the patterns of keywords of a
language. A well-accepted solution is to use finite automata for
verification.
Finite Automata
A finite automaton is a state machine that takes a string of symbols as input and changes its state accordingly. Finite automata are recognizers for regular expressions. When a regular expression string is fed into a finite automaton, it changes its state for each literal. If the input string is successfully processed and the automaton reaches its final state, the input is accepted, i.e., the string just fed was a valid token of the language in hand.
The transition function (δ) maps a state from the finite set of states (Q) and a symbol from the finite set of input symbols (Σ) to a next state: δ : Q × Σ → Q
Start state : The state from where the automata starts, is known as the start
state. Start state has an arrow pointed towards it.
Intermediate states : All intermediate states have at least two arrows; one
pointing to and another pointing out from them.
Transition : The transition from one state to another state happens when a
desired symbol in the input is found. Upon transition, automata can either
move to the next state or stay in the same state. Movement from one state to
another is shown as a directed arrow, where the arrow points to the destination state. If the automaton stays in the same state, an arrow pointing from a state to itself is drawn.
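As an illustration of the transition function δ : Q × Σ → Q (my own sketch, not from the original notes), the following C program encodes a tiny automaton that accepts unsigned integer tokens such as 123; the state numbering is invented.

#include <ctype.h>
#include <stdio.h>

/* States: 0 = start, 1 = accepting (at least one digit seen), 2 = dead. */
static int delta(int state, char c) {
    if (state == 2) return 2;                 /* stay in the dead state   */
    if (isdigit((unsigned char)c)) return 1;  /* another digit: accepting */
    return 2;                                 /* anything else: reject    */
}

static int accepts(const char *s) {
    int state = 0;                            /* start state              */
    for (; *s; s++)
        state = delta(state, *s);
    return state == 1;                        /* ended in a final state?  */
}

int main(void) {
    printf("%d\n", accepts("123"));           /* prints 1: valid number token */
    printf("%d\n", accepts("12a"));           /* prints 0: rejected           */
    return 0;
}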
Syntax analysis
We have seen that a lexical analyzer can identify tokens with the help of
regular expressions and pattern rules. But a lexical analyzer cannot check
the syntax of a given sentence due to the limitations of regular expressions. Regular expressions cannot check balancing of tokens, such as parentheses. Therefore, this phase uses context-free grammar (CFG),
which is recognized by push-down automata.
Context-Free Grammar
In this section, we will first see the definition of context-free grammar and introduce the terminology used in parsing technology. A context-free grammar G = (V, Σ, P, S) has four components:
A set of non-terminal symbols (V): syntactic variables that denote sets of strings.
A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from which strings are formed.
A set of productions (P), each consisting of a non-terminal on the left-hand side and a string of terminals and/or non-terminals on the right-hand side.
One of the non-terminals, designated as the start symbol (S), from where the production begins.
The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the start symbol) by the right-hand side of a production for that non-terminal.
Example
We take the problem of palindrome language, which cannot be described
by means of Regular Expression. That is, L = { w | w = wR } is not a
regular language. But it can be described by means of CFG, as illustrated
below:
G = ( V, Σ, P, S )
Where:
V = { Q, Z, N }
Σ = { 0, 1 }
P = { Q → Z | Q → N | Q → 0 | Q → 1 | Q → ε | Z → 0Q0 | N → 1Q1 }
S = { Q }
Syntax Analyzers
A syntax analyzer or parser takes the input from a lexical analyzer in the
form of token streams. The parser analyzes the source code (token
stream) against the production rules to detect any errors in the code. The
output of this phase is a parse tree.
This way, the parser accomplishes two tasks, i.e., parsing the code,
looking for errors and generating a parse tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in the program. Parsers use error-recovery strategies, which we will learn later in this chapter.
Derivation
A derivation is basically a sequence of production rules used to obtain the input string. During parsing, we take two decisions for some sentential form of the input:
deciding which non-terminal is to be replaced, and
deciding the production rule by which that non-terminal will be replaced.
To decide which non-terminal to replace first, we have two options:
Left-most Derivation
If the sentential form of an input is scanned and replaced from left to right,
it is called left-most derivation. The sentential form derived by the left-
most derivation is called the left-sentential form.
Right-most Derivation
If we scan and replace the input with production rules, from right to left,
it is known as right-most derivation. The sentential form derived from the
right-most derivation is called the right-sentential form.
Example
Production rules:
E → E + E
E → E * E
E → id
Input string: id + id * id
Left-most derivation:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Right-most derivation:
E → E + E
E → E + E * E
E → E + E * id
E → E + id * id
E → id + id * id
Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see
how strings are derived from the start symbol. The start symbol of the
derivation becomes the root of the parse tree. Let us see this by an
example from the last topic.
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Step 1:
E→E*E
Step 2:
E→E+E*E
Step 3:
E → id + E * E
Step 4:
E → id + id * E
Step 5:
E → id + id * id
In a parse tree:
all leaf nodes are terminals,
all interior nodes are non-terminals, and
in-order traversal gives the original input string.
Ambiguity
A grammar G is said to be ambiguous if it has more than one parse tree
(left or right derivation) for at least one string.
Example
E → E + E
E → E – E
E → id
For the string id + id – id, the above grammar generates two parse trees:
Associativity
If an operand has operators on both sides, the side on which the operator
takes this operand is decided by the associativity of those operators. If the
operation is left-associative, then the operand will be taken by the left
operator or if the operation is right-associative, the right operator will take
the operand.
Example
Consider the expression id op id op id. If op is left-associative, it is grouped as
(id op id) op id
whereas if op is right-associative, it is grouped as
id op (id op id)
Precedence
If two different operators share a common operand, the precedence of
operators decides which will take the operand. That is, 2+3*4 can have
two different parse trees, one corresponding to (2+3)*4 and another
corresponding to 2+(3*4). By setting precedence among operators, this
problem can be easily removed. As in the previous example,
mathematically * (multiplication) has precedence over + (addition), so the
expression 2+3*4 will always be interpreted as:
2 + (3 * 4)
Left Recursion
A grammar becomes left-recursive if it has any non-terminal ‘A’ whose
derivation contains ‘A’ itself as the left-most symbol. Left-recursive
grammar is considered to be a problematic situation for top-down parsers.
Top-down parsers start parsing from the Start symbol, which in itself is
non-terminal. So, when the parser encounters the same non-terminal in
its derivation, it becomes hard for it to judge when to stop parsing the left
non-terminal and it goes into an infinite loop.
Example:
(1) A => Aα | β
(2) S => Aα | β
    A => Sd
(1) is an example of immediate left recursion, where A is any non-terminal symbol and α represents a string of terminals and non-terminals. (2) is an example of indirect left recursion: a top-down parser will first parse A, which in turn yields a string consisting of A itself, and the parser may go into a loop forever.
Removal of Left Recursion
The immediately left-recursive production
A => Aα | β
can be rewritten as the following equivalent productions:
A => βA'
A' => αA' | ε
This does not impact the strings derived from the grammar, but it removes immediate left recursion.
Example
For the indirectly left-recursive grammar
S => Aα | β
A => Sd
first substitute the productions of S into A to obtain
A => Aαd | βd
and then remove the immediate left recursion using the first technique:
A => βdA'
A' => αdA' | ε
Now none of the productions has either direct or indirect left recursion.
Left Factoring
If more than one grammar production rule has a common prefix string, then the top-down parser cannot make a choice as to which of the productions it should take to parse the string in hand.
Example
A => αβ | αγ
Both productions start with the same prefix α, so by looking at the next input symbol the parser cannot decide between them. Left factoring transforms the grammar: one production is kept for the common prefix and the rest of the derivation is moved into a new non-terminal:
A => αA'
A' => β | γ
Now the parser has only one production per prefix, which makes it easier to take decisions.
First Set
This set is created to know what terminal symbols can appear in the first position of a string derived from a non-terminal. For example, if
α → t β
where t is a terminal, then t is in FIRST(α). More generally, FIRST(α) = { t | α ⇒* tβ } ∪ { ε | α ⇒* ε }.
Follow Set
Likewise, we calculate what terminal symbol immediately follows a non-
terminal α in production rules. We do not consider what the non-terminal
can generate but instead, we see what would be the next terminal symbol
that follows the productions of a non-terminal.
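A small worked example (not taken from the original notes) for the grammar
E  → T E'
E' → + T E' | ε
T  → id
FIRST(T)  = { id }
FIRST(E') = { +, ε }
FIRST(E)  = FIRST(T) = { id }
FOLLOW(E)  = { $ }                      (E is the start symbol)
FOLLOW(E') = FOLLOW(E) = { $ }          (E' only appears at the end of right-hand sides)
FOLLOW(T)  = (FIRST(E') \ {ε}) ∪ FOLLOW(E') = { +, $ }   (T is followed by E', which can derive ε)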
Types of parsing:
Syntax analyzers follow production rules defined by means of context-free
grammar. The way the production rules are implemented (derivation)
divides parsing into two types : top-down parsing and bottom-up parsing.
Top-down Parsing
When the parser starts constructing the parse tree from the start symbol
and then tries to transform the start symbol to the input, it is called top-
down parsing.
Bottom-up Parsing
As the name suggests, bottom-up parsing starts with the input symbols
and tries to construct the parse tree up to the start symbol.
Example:
Input string : a + b * c
Production rules:
S → E
E → E + T
E → E * T
E → T
T → id
a + b * c
Read the input and check if any production matches with the input:
a + b * c
T + b * c
E + b * c
E + T * c
E * c
E * T
E
S
Top-Down Parser
We have learnt in the last chapter that the top-down parsing technique
parses the input, and starts constructing a parse tree from the root node
gradually moving down to the leaf nodes. The two main forms of top-down parsing, recursive descent with back-tracking and predictive parsing, are described below.
Back-tracking
Top-down parsers start from the root node (start symbol) and match the
input string against the production rules to replace them (if matched). To
understand this, take the following example of CFG:
S → rXd | rZd
X → oa | ea
Z → ai
For an input string: read, a top-down parser, will behave like this:
It will start with S from the production rules and will match its yield to the
left-most letter of the input, i.e. ‘r’. The very production of S (S → rXd)
matches with it. So the top-down parser advances to the next input letter
(i.e. ‘e’). The parser tries to expand non-terminal ‘X’ and checks its
production from the left (X → oa). It does not match with the next input
symbol. So the top-down parser backtracks to obtain the next production
rule of X, (X → ea).
Now the parser matches all the input letters in an ordered manner. The
string is accepted.
Predictive Parser
Predictive parser is a recursive descent parser, which has the capability to
predict which production is to be used to replace the input string. The
predictive parser does not suffer from backtracking.
Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. Both the stack and the input contain an end symbol $ to denote that the stack is empty and the input is consumed. The parser refers to the parsing table to take any decision on the combination of input symbol and stack top.
In recursive descent parsing, the parser may have more than one production to choose from for a single instance of input, whereas in predictive parsing, each step has at most one production to choose. There might be instances where no production matches the input string, making the parsing procedure fail.
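As a rough sketch (not part of the original notes), a recursive-descent predictive parser for the tiny grammar E → id E', E' → + id E' | ε could look like this in C; identifiers are single lowercase letters, and all names in the code are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Grammar (LL(1)):  E -> id E'      E' -> + id E' | epsilon
 * One function per non-terminal, one symbol of lookahead.        */
static const char *input;                 /* e.g. "a+b+c"          */

static void error(void) { printf("syntax error\n"); exit(1); }

static void match(char expected) {
    if (*input == expected) input++;      /* consume the symbol    */
    else error();
}

static void E_prime(void) {
    if (*input == '+') {                  /* E' -> + id E'         */
        match('+');
        if (*input >= 'a' && *input <= 'z') match(*input); else error();
        E_prime();
    }
    /* otherwise: E' -> epsilon (chosen on any other lookahead)    */
}

static void E(void) {                     /* E -> id E'            */
    if (*input >= 'a' && *input <= 'z') match(*input); else error();
    E_prime();
}

int main(void) {
    input = "a+b+c";
    E();
    if (*input == '\0') printf("accepted\n");
    else error();
    return 0;
}

Because this grammar has no left recursion and no common prefixes, one token of lookahead is always enough to pick the production, so no backtracking is needed.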
LL Parser
An LL Parser accepts LL grammar. LL grammar is a subset of context-free
grammar but with some restrictions to get the simplified version, in order
to achieve easy implementation. LL grammar can be implemented by
means of both algorithms namely, recursive-descent or table-driven.
An LL parser is denoted as LL(k). The first L in LL(k) stands for scanning the input from left to right, the second L stands for left-most derivation, and k itself represents the number of lookaheads. Generally k = 1, so LL(k) may also be written as LL(1).
LL Parsing Algorithm
We may stick to deterministic LL(1) for parser explanation, as the size of
table grows exponentially with the value of k. Secondly, if a given grammar
is not LL(1), then usually, it is not LL(k), for any given k.
Input:
string ω
parsing table M for grammar G
Output:
if ω is in L(G), a left-most derivation of ω;
error otherwise.
Initial state: $S on the stack (with S being the start symbol), ω$ in the input buffer, and ip pointing to the first symbol of ω$.
repeat
let X be the top stack symbol and a the symbol pointed by ip.
if X ∈ Vt or $
if X = a
POP X and advance ip
else
error()
endif
else /* X is non-terminal */
if M[X,a] = X → Y1 Y2 ... Yk
POP X
PUSH Yk, Yk-1, ..., Y1 /* Y1 on top */
else
error()
endif
endif
until X = $ /* empty stack */
Bottom-Up Parser
Bottom-up parsing starts from the leaf nodes of a tree and works in upward
direction till it reaches the root node. Here, we start from a sentence and then apply production rules in reverse in order to reach the start symbol. Shift-reduce parsing and LR parsing, described below, are the common bottom-up techniques.
Shift-Reduce Parsing
Shift-reduce parsing uses two unique steps for bottom-up parsing. These
steps are known as shift-step and reduce-step.
Shift step: The shift step refers to the advancement of the input pointer to
the next input symbol, which is called the shifted symbol. This symbol is
pushed onto the stack. The shifted symbol is treated as a single node of the
parse tree.
Reduce step : When the parser finds a complete right-hand side (RHS) of a grammar rule on the stack and replaces it with the corresponding left-hand side (LHS), it is known as a reduce-step. This occurs when the top of the stack contains a handle. To reduce, a POP function is performed on the stack, which pops off the handle, and the LHS non-terminal symbol is pushed in its place. A worked trace is shown below.
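To make the shift and reduce steps concrete, here is a possible trace (my own illustration, not from the original notes) for the earlier bottom-up grammar S → E, E → E + T, E → E * T, E → T, T → id and the input a + b * c, where a, b and c are id tokens:
Stack            Input          Action
$                a + b * c $    shift a
$ a              + b * c $      reduce T → id
$ T              + b * c $      reduce E → T
$ E              + b * c $      shift +
$ E +            b * c $        shift b
$ E + b          * c $          reduce T → id
$ E + T          * c $          reduce E → E + T
$ E              * c $          shift *
$ E *            c $            shift c
$ E * c          $              reduce T → id
$ E * T          $              reduce E → E * T
$ E              $              reduce S → E
$ S              $              accept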
LR Parser
The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide class of context-free grammars, which makes it one of the most widely used syntax analysis techniques. LR parsers are also known as LR(k) parsers,
where L stands for left-to-right scanning of the input stream; R stands for
the construction of right-most derivation in reverse, and k denotes the
number of lookahead symbols to make decisions.
LR(1) – LR Parser:
o Works on the complete set of LR(1) grammars
o Generates a large table and a large number of states
o Slow construction
LR Parsing Algorithm
Here we describe a skeleton algorithm of an LR parser:
token = next_token()
repeat forever
s = top of stack
if action[s, token] = "shift si" then
PUSH token
PUSH si
token = next_token()
else if action[s, token] = "reduce A ::= β" then
POP 2 * |β| symbols
s = top of stack
PUSH A
PUSH goto[s,A]
else if action[s, token] = "accept" then
return
else
error()
LL vs. LR
LL: Starts with the root non-terminal on the stack. LR: Ends with the root non-terminal on the stack.
LL: Uses the stack for designating what is still to be expected. LR: Uses the stack for designating what is already seen.
LL: Builds the parse tree top-down. LR: Builds the parse tree bottom-up.
LL: Reads the terminals when it pops one off the stack. LR: Reads the terminals while it pushes them on the stack.
LL: Performs a pre-order traversal of the parse tree. LR: Performs a post-order traversal of the parse tree.
Error Recovery
A parser should be able to detect and report any error in the program. It
is expected that when an error is encountered, the parser should be able
to handle it and carry on parsing the rest of the input. Mostly it is expected
from the parser to check for errors but errors may be encountered at
various stages of the compilation process. A program may have the following kinds of errors at various stages:
Lexical: the name of some identifier typed incorrectly
Syntactical: a missing semicolon or unbalanced parenthesis
Semantical: incompatible value assignment
Logical: unreachable code, infinite loop
There are four common error-recovery strategies that can be implemented in the parser to deal with errors in the code: panic mode, statement mode, error productions, and global correction.
Panic mode
When a parser encounters an error anywhere in a statement, it ignores the rest of the statement by not processing the input from the erroneous token up to a delimiter, such as a semicolon. This is the easiest way of error-recovery
and also, it prevents the parser from developing infinite loops.
Statement mode
When a parser encounters an error, it tries to take corrective measures so
that the rest of inputs of statement allow the parser to parse ahead. For
example, inserting a missing semicolon, replacing comma with a semicolon
etc. Parser designers have to be careful here because one wrong correction
may lead to an infinite loop.
Error productions
Some common errors are known to the compiler designers that may occur
in the code. In addition, the designers can create augmented grammar to
be used, as productions that generate erroneous constructs when these
errors are encountered.
Global correction
The parser considers the program in hand as a whole and tries to figure
out what the program is intended to do and tries to find out a closest match
for it, which is error-free. When an erroneous input (statement) X is fed,
it creates a parse tree for some closest error-free statement Y. This may
allow the parser to make minimal changes in the source code, but due to
the complexity (time and space) of this strategy, it has not been
implemented in practice yet.
If watched closely, we find that most of the leaf nodes are single children of their parent nodes. This information can be eliminated before feeding the tree to the next phase. By hiding the extra information, we obtain a compressed form of the tree known as the abstract syntax tree (AST).
Semantic Analysis
We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse tree constructed in that phase is generally of no use for a compiler, as it does not carry any information about how to evaluate the tree. The productions of the context-free grammar, which make up the rules of the language, do not specify how to interpret them.
For example
E → E + T
The above CFG production has no semantic rule associated with it, and it
cannot help in making any sense of the production.
Semantics
Semantics of a language provide meaning to its constructs, like tokens and
syntax structure. Semantics help interpret symbols, their types, and their
relations with each other. Semantic analysis judges whether the syntax
structure constructed in the source program derives any meaning or not.
For example:
int a = “value”;
should not issue an error in the lexical and syntax analysis phases, as it is lexically and structurally correct, but it should generate a semantic error as the types of the assignment differ. Among other things, the semantic analyzer performs:
Scope resolution
Type checking
Array-bound checking
Semantic Errors
Some of the semantic errors that the semantic analyzer is expected to recognize are:
Type mismatch
Undeclared variable
Attribute Grammar
Attribute grammar is a special form of context-free grammar where some additional information (attributes) is appended to one or more of its non-terminals in order to provide context-sensitive information. Each attribute has a well-defined domain of values, such as integer, float, character, string, and expressions.
Example:
E → E + T { E.value = E.value + T.value }
The right part of the CFG contains the semantic rule that specifies how the grammar should be interpreted. Here, the values of the non-terminals E and T on the right are added together and the result is copied to the non-terminal E on the left.
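As an illustrative sketch (the Node structure and eval function are my own, not from the notes), the synthesized rule E.value = E.value + T.value can be evaluated by a bottom-up walk over the expression tree:

#include <stdio.h>

/* A tiny expression tree with a synthesized 'value' attribute.
 * For E -> E + T the semantic rule is E.value = E.value + T.value. */
typedef struct Node {
    char op;                  /* '+' for an interior node, 0 for a leaf */
    int  value;               /* the synthesized attribute              */
    struct Node *left, *right;
} Node;

static int eval(Node *n) {
    if (n->op == 0)                       /* leaf: value is already set  */
        return n->value;
    n->value = eval(n->left) + eval(n->right);   /* synthesize from children */
    return n->value;
}

int main(void) {
    Node a    = {0, 2, NULL, NULL};
    Node b    = {0, 3, NULL, NULL};
    Node plus = {'+', 0, &a, &b};         /* represents 2 + 3            */
    printf("%d\n", eval(&plus));          /* prints 5                    */
    return 0;
}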
Synthesized attributes
These attributes get values from the attribute values of their child nodes. To illustrate, assume the following production:
S → ABC
If S takes values from its child nodes (A, B, C), it is said to have a synthesized attribute, as the values of A, B and C are synthesized into S.
As in our previous example (E → E + T), the parent node E gets its value from its child nodes. Synthesized attributes never take values from their parent nodes or any sibling nodes.
Inherited attributes
In contrast to synthesized attributes, inherited attributes can take values
from parent and/or siblings. As in the following production,
S → ABC
A can get values from S, B and C. B can take values from S, A, and C.
Likewise, C can take values from S, A, and B.
The semantic analyzer receives the AST (Abstract Syntax Tree) from its previous stage (syntax analysis) and attaches attribute information to it, yielding an attributed AST. Attributes are two-tuple values, <attribute name, attribute value>. For example:
int value = 5;
<type, “integer”>
<presentvalue, “5”>
S-attributed SDT
If an SDT uses only synthesized attributes, it is called an S-attributed SDT. In an S-attributed SDT, the semantic actions are written at the right end of the production, and the attributes can be evaluated in a single bottom-up pass.
L-attributed SDT
This form of SDT uses both synthesized and inherited attributes with
restriction of not taking values from right siblings.
In L-attributed SDTs, a non-terminal can get values from its parent, child,
and sibling nodes. As in the following production
S → ABC
S can take values from A, B, and C (synthesized). A can take values from
S only. B can take values from S and A. C can get values from S, A, and
B. No non-terminal can get values from the sibling to its right.
Symbol Table:
Symbol table is an important data structure created and maintained by
compilers in order to store information about the occurrence of various
entities such as variable names, function names, objects, classes,
interfaces, etc. Symbol table is used by both the analysis and the synthesis
parts of a compiler.
A symbol table may serve the following purposes depending upon the language in hand:
to store the names of all entities in a structured form at one place,
to verify if a variable has been declared,
to implement type checking by verifying that assignments and expressions are semantically correct, and
to determine the scope of a name (scope resolution).
A symbol table is simply a table which can be either linear or a hash table. It maintains an entry for each name in the following format:
<symbol name, type, attribute>
For example, if a symbol table has to store information about the following variable declaration:
static int interest;
then it stores an entry such as:
<interest, int, static>
Implementation
If a compiler is to handle a small amount of data, then the symbol table can be implemented as an unordered list, which is easy to code but is suitable only for small tables. A symbol table can be implemented in one of the following ways:
linear (sorted or unsorted) list,
binary search tree, or
hash table.
Among all, symbol tables are mostly implemented as hash tables, where the source code symbol itself is treated as a key for the hash function and the return value is the information about the symbol.
Operations
A symbol table, either linear or hash, should provide the following
operations.
insert()
This operation is used more frequently by the analysis phase, i.e., the first half of the compiler, where tokens are identified and names are stored in the table. This operation is used to add information in the symbol table about unique names occurring in the source code. The format or structure in which the names are stored depends upon the compiler in hand.
For example:
int a;
insert(a, int);
lookup()
The lookup() operation is used to search a name in the symbol table to determine:
whether the symbol exists in the table,
whether it is declared before it is used,
whether the name is used in the current scope, and
whether the symbol is initialized or has been declared multiple times.
The basic format of the operation is:
lookup(symbol)
This method returns 0 (zero) if the symbol does not exist in the symbol table. If the symbol exists in the symbol table, it returns its attributes stored in the table.
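A minimal sketch (not from the original notes) of insert() and lookup() over a chained hash table; the bucket count, the Entry structure and the fixed-size name and type fields are all assumptions made for this example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUCKETS 211

typedef struct Entry {
    char name[32];            /* symbol name, used as the hash key   */
    char type[16];            /* stored attribute, e.g. "int"        */
    struct Entry *next;       /* chaining for hash collisions        */
} Entry;

static Entry *table[BUCKETS];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % BUCKETS;
}

static void insert(const char *name, const char *type) {
    Entry *e = malloc(sizeof *e);
    strncpy(e->name, name, sizeof e->name - 1); e->name[31] = '\0';
    strncpy(e->type, type, sizeof e->type - 1); e->type[15] = '\0';
    e->next = table[hash(name)];          /* push onto the bucket chain */
    table[hash(name)] = e;
}

/* Returns the entry (its attributes), or NULL (0) if the symbol is absent. */
static Entry *lookup(const char *name) {
    for (Entry *e = table[hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0) return e;
    return NULL;
}

int main(void) {
    insert("a", "int");                   /* int a;  ->  insert(a, int)  */
    Entry *e = lookup("a");
    printf("%s\n", e ? e->type : "not found");
    return 0;
}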
Scope Management
A compiler maintains two types of symbol tables: a global symbol table, which can be accessed by all the procedures, and scope symbol tables that are created for each scope in the program.
. . .
int value=10;

void pro_one()
{
    int one_1;
    int one_2;
    {
        int one_4;
    }
    int one_5;
    {
        int one_7;
    }
}

void pro_two()
{
    int two_1;
    int two_2;
    {
        int two_4;
    }
    int two_5;
}
. . .
The global symbol table contains names for one global variable (int value)
and two procedure names, which should be available to all the child nodes
shown above. The names mentioned in the pro_one symbol table (and all
its child tables) are not available for pro_two symbols and its child tables.
When a name needs to be resolved, it is first searched in the current scope, i.e. the current symbol table; if it is not found there, the search moves to the enclosing (parent) symbol table, and so on, until either the name is found or the global symbol table has been searched for the name.
ICG
If source code can be translated directly into its target machine code, why do we need to translate it into an intermediate code which is then translated into the target code? Let us see the reasons why we need an intermediate code.
Intermediate code eliminates the need of a new full compiler for every unique
machine by keeping the analysis portion same for all the compilers.
Intermediate Representation
Intermediate codes can be represented in a variety of ways, each with its own benefits.
High Level IR - A high-level intermediate representation is close to the source language itself. It can be generated easily from the source code, but is less suited to machine-level optimizations.
Low Level IR - This one is close to the target machine, which makes it suitable for register and memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations.
Intermediate code can be either language specific (e.g., Byte Code for
Java) or language independent (three-address code).
Three-Address Code
The intermediate code generator receives input from its predecessor phase, the semantic analyzer, in the form of an annotated syntax tree. That syntax tree can then be converted into a linear representation, e.g., postfix notation, or into three-address code.
For example:
a = b + c * d;
The intermediate code generator will try to divide this expression into sub-
expressions and then generate the corresponding code.
r1 = c * d;
r2 = b + r1;
a = r2
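One possible way (my own sketch, not from the notes) to produce these temporaries is a post-order walk of the expression tree, emitting one three-address instruction per interior node; the Node structure and gen function are invented for illustration.

#include <stdio.h>

/* Leaves hold a variable name; interior nodes hold an operator.
 * gen() returns the name holding the node's result and prints the
 * three-address instruction on the way back up.                   */
typedef struct Node {
    char op;                          /* '+', '*', or 0 for a leaf */
    const char *name;                 /* variable name for leaves  */
    struct Node *left, *right;
} Node;

static int  temp_count = 0;
static char temp_names[16][8];

static const char *gen(Node *n) {
    if (n->op == 0) return n->name;   /* a leaf is just its name   */
    const char *l = gen(n->left);
    const char *r = gen(n->right);
    char *t = temp_names[temp_count];
    sprintf(t, "r%d", ++temp_count);  /* new temporary r1, r2, ... */
    printf("%s = %s %c %s;\n", t, l, n->op, r);
    return t;
}

int main(void) {
    /* a = b + c * d */
    Node b = {0, "b"}, c = {0, "c"}, d = {0, "d"};
    Node mul = {'*', NULL, &c, &d};
    Node add = {'+', NULL, &b, &mul};
    printf("a = %s\n", gen(&add));
    return 0;
}

Running this prints the same sequence as above: r1 = c * d;  r2 = b + r1;  a = r2.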
Quadruples
Each instruction in quadruples presentation is divided into four fields:
operator, arg1, arg2, and result. The above example is represented below
in quadruples format:
Op    arg1    arg2    result
*     c       d       r1
+     b       r1      r2
=     r2              a
Triples
Each instruction in triples presentation has three fields : op, arg1, and
arg2.The results of respective sub-expressions are denoted by the position
of expression. Triples represent similarity with DAG and syntax tree. They
are equivalent to DAG while representing expressions.
      Op    arg1    arg2
(0)   *     c       d
(1)   +     b       (0)
(2)   =     (1)
Indirect Triples
This representation is an enhancement over triples representation. It uses
pointers instead of position to store results. This enables the optimizers to
freely re-position the sub-expression to produce an optimized code.
Declarations
A variable or procedure has to be declared before it can be used.
Declaration involves allocation of space in memory and entry of type and
name in the symbol table. A program may be coded and designed keeping
the target machine structure in mind, but it may not always be possible to
accurately convert a source code to its target language.
Example:
int a;
float b;
Allocation process:
{offset = 0}
int a;
id.type = int
id.width = 2
{offset = 2}
float b;
id.type = float
id.width = 4
{offset = 6}
To enter this detail in a symbol table, a procedure enter can be used. This method may have the following structure:
enter(name, type, offset)
This procedure creates an entry for name in the symbol table, sets its type to type, and records its relative address offset in its data area.
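A small sketch (assumed, not from the notes) of such an enter routine, using the widths from the example above (2 bytes for int, 4 for float) and printing each entry as it is made:

#include <stdio.h>
#include <string.h>

/* enter(name, type): record a declaration at the current offset and
 * advance the running offset by the width of the declared type.    */
static int offset = 0;

static void enter(const char *name, const char *type) {
    int width = (strcmp(type, "float") == 0) ? 4 : 2;  /* widths as in the example */
    printf("enter(%s, %s, %d)\n", name, type, offset);
    offset += width;
}

int main(void) {
    enter("a", "int");     /* enter(a, int, 0)   -> offset becomes 2 */
    enter("b", "float");   /* enter(b, float, 2) -> offset becomes 6 */
    return 0;
}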
We will now see how the intermediate code is transformed into target
object code (assembly code, in this case).
Example:
t0 = a + b
t1 = t0 + c
d = t0 + t1
For each three-address statement, the code generator emits a corresponding group of target instructions, labelled here by the statement it implements:
[t0 = a + b]
[t1 = t0 + c]
[d = t0 + t1]
Peephole Optimization
This optimization technique works locally on the source code to transform
it into an optimized code. By locally, we mean a small portion of the code
block at hand. These methods can be applied on intermediate codes as
well as on target codes. A bunch of statements is analyzed and checked for the following possible optimizations.
Redundant code elimination
At the source-code level, redundant computations and temporaries can be removed. For example, the first version below uses an extra variable z that the second version avoids:
{
    z = x + y;
    return z;
}
{
    y = x + y;
    return y;
}
At the compilation level, the compiler searches for instructions that are redundant in nature. Multiple load and store instructions may carry the same meaning even if some of them are removed. For example:
MOV x, R0
MOV R0, R1
We can delete the first instruction and re-write the sentence as:
MOV x, R1
Unreachable code
Unreachable code is a part of the program code that is never accessed
because of programming constructs. Programmers may have accidently
written a piece of code that can never be reached.
Example:
void add_ten(int x)
{
    return x + 10;
    printf("value of x is %d", x);
}
In this code segment, the printf statement will never be executed as the program control returns before it can execute, hence the printf can be removed.
Flow of control optimization
There are instances in a code where the program control jumps back and forth without performing any significant task. Such jumps can be removed. Consider the following chunk of code:
...
MOV R1, R2
GOTO L1
...
L1 : GOTO L2
L2 : INC R1
In this code, label L1 can be removed as it merely passes control to L2, so the jump can go directly to L2:
...
MOV R1, R2
GOTO L2
...
L2 : INC R1
Strength reduction
There are operations that consume more time and space. Their ‘strength’
can be reduced by replacing them with other operations that consume less
time and space, but produce the same result.
Code Generator
A code generator is expected to have an understanding of the target
machine’s runtime environment and its instruction set. The code generator
should take the following things into consideration to generate the code:
Target language : The code generator has to be aware of the nature of the
target language for which the code is to be transformed. That language may
facilitate some machine-specific instructions to help the compiler generate the
code in a more convenient way. The target machine can have either CISC or
RISC processor architecture.
Descriptors
The code generator has to track both the registers (for availability) and addresses (locations of values) while generating the code. For both of them, the following two descriptors are used:
Register descriptor : keeps track of what value(s) each register currently holds, i.e. whether the register is free or in use.
Address descriptor : keeps track of the locations (register, stack or memory) where the current value of a name can be found at run time.
The code generator keeps both descriptors updated in real time. For example, for a load statement, LD R1, x, the code generator:
updates the register descriptor of R1 to show that it holds the value of x, and
updates the address descriptor of x to show that one instance of x is now in R1.
Code Generation
Basic blocks consist of a sequence of three-address instructions. The code generator takes this sequence of instructions as its input.
Note : If the value of a name is found at more than one place (register,
cache, or memory), the register’s value will be preferred over the cache
and main memory. Likewise cache’s value will be preferred over the main
memory. Main memory is barely given any preference.
To generate code for a statement x = y OP z, the code generator must first select a register L to hold the result:
if the value of y is already in a register that holds no other live value and y is not needed afterwards, that register is used as L;
else, if an empty register is available, it is chosen as L;
else, if both the above options are not possible, it chooses a register that requires a minimal number of load and store instructions.
Then, if y is not already in L, its present location y’ is determined and the following instruction is generated:
MOV y’, L
Determine the present location of z using the same method used for y and generate the following instruction:
OP z’, L
Finally, the address descriptor of x is updated to indicate that its value is now in L. If y and z have no further use, their registers can be given back to the system.
Code Optimization
Optimization is a program transformation technique, which tries to
improve the code by making it consume less resources (i.e. CPU, Memory)
and deliver high speed.
The output code must not, in any way, change the meaning of the program.
Optimization should increase the speed of the program and if possible, the
program should demand less number of resources.
Optimization should itself be fast and should not delay the overall compiling
process.
After generating intermediate code, the compiler can modify the intermediate
code by address calculations and improving loops.
While producing the target machine code, the compiler can make use of
memory hierarchy and CPU registers.
Machine-independent Optimization
In this optimization, the compiler takes in the intermediate code and
transforms a part of the code that does not involve any CPU registers
and/or absolute memory locations. For example:
do
{
    item = 10;
    value = value + item;
} while(value < 100);
This code involves a repeated assignment of the identifier item, which if written this way:
item = 10;
do
{
    value = value + item;
} while(value < 100);
should not only save CPU cycles, but can be used on any processor.
Machine-dependent Optimization
Machine-dependent optimization is done after the target code has been
generated and when the code is transformed according to the target
machine architecture. It involves CPU registers and may have absolute
memory references rather than relative references. Machine-dependent
optimizers put efforts to take maximum advantage of memory hierarchy.
Basic Blocks
Source codes generally have a number of instructions, which are always
executed in sequence and are considered as the basic blocks of the code.
These basic blocks do not have any jump statements among them, i.e.,
when the first instruction is executed, all the instructions in the same basic
block will be executed in their sequence of appearance without losing the
flow control of the program.
To find the basic blocks, we search for the header (leader) statements from where a basic block starts:
the first statement of the program,
statements that are the target of any branch (conditional or unconditional), and
statements that immediately follow any branch statement.
Header statements and the statements following them form a basic block. A basic block does not include the header statement of any other basic block.
Basic blocks are important concepts from both code generation and
optimization point of view.
Basic blocks play an important role in identifying variables, which are being
used more than once in a single basic block. If any variable is being used
more than once, the register memory allocated to that variable need not
be emptied unless the block finishes execution.
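A possible sketch (not from the original notes) of how the leaders of basic blocks can be identified for a list of three-address instructions; the Instr encoding used here is invented for the example.

#include <stdio.h>

/* Each instruction records whether it is a branch and, if so, its target.
 * Leaders: the first instruction, any branch target, and any
 * instruction immediately following a branch.                       */
typedef struct {
    int is_jump;       /* 1 if this instruction branches              */
    int target;        /* index of the branch target (if is_jump)     */
} Instr;

int main(void) {
    /* 0: t = a + b    1: if t goto 3    2: t = t + 1    3: d = t     */
    Instr code[] = { {0, 0}, {1, 3}, {0, 0}, {0, 0} };
    int n = 4, leader[4] = {0};

    leader[0] = 1;                            /* first instruction     */
    for (int i = 0; i < n; i++) {
        if (code[i].is_jump) {
            leader[code[i].target] = 1;       /* branch target         */
            if (i + 1 < n) leader[i + 1] = 1; /* instruction after it  */
        }
    }
    for (int i = 0; i < n; i++)
        if (leader[i]) printf("basic block starts at instruction %d\n", i);
    return 0;
}

For this small example the leaders are instructions 0, 2 and 3, giving three basic blocks.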
Loop Optimization
Most programs spend the bulk of their execution time inside loops, so it becomes necessary to optimize loops in order to save CPU cycles and memory. Loops can be optimized by the following techniques:
Invariant code : A fragment of code that resides in the loop and computes
the same value at each iteration is called a loop-invariant code. This code can
be moved out of the loop by saving it to be computed only once, rather than
with each iteration.
Strength reduction : There are expressions that consume more CPU cycles, time, and memory. These expressions should be replaced with cheaper expressions without compromising the output of the expression. For example, multiplication (x * 2) is more expensive in terms of CPU cycles than a shift (x << 1), yet yields the same result. A small before/after sketch follows this list.
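A small before/after sketch (my own example, not from the notes) combining invariant code motion and strength reduction:

#include <stdio.h>

int main(void) {
    int a[4], b[4] = {1, 2, 3, 4}, limit = 5, n = 4, t;

    /* Before: limit * 2 is loop-invariant and uses a multiplication.  */
    for (int i = 0; i < n; i++)
        a[i] = b[i] + limit * 2;

    /* After: the invariant expression is hoisted out of the loop and
     * the multiplication by 2 is replaced by a cheaper shift.         */
    t = limit << 1;
    for (int i = 0; i < n; i++)
        a[i] = b[i] + t;

    printf("%d %d\n", a[0], a[3]);   /* both versions compute 11 and 14 */
    return 0;
}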
Dead-code Elimination
Dead code is one or more code statements that are:
either never executed or unreachable, or
executed but with output that is never used.
Thus, dead code plays no role in any program operation and can therefore simply be eliminated.
Consider a chunk of program, viewed as a control flow graph, where variable ‘a’ is used to hold the output of the expression ‘x * y’. Let us assume that the value assigned to ‘a’ is never used inside the loop. Immediately after the control leaves the loop, ‘a’ is assigned the value of variable ‘z’, which would be used later in the program. We conclude here that the assignment to ‘a’ is never used anywhere, therefore it is eligible to be eliminated.
Partial Redundancy
Redundant expressions are computed more than once on every (parallel) path, without any change in operands, whereas partially redundant expressions are computed more than once on some path, without any change in operands. For example,
If (condition)
{
    a = y OP z;
}
else
{
    ...
}
c = y OP z;
We assume that the values of the operands (y and z) are not changed between the assignment to variable a and the assignment to variable c. Here, if the condition is true, then y OP z is computed twice, otherwise once. Code motion can be used to eliminate this partial redundancy, as shown below:
If (condition)
{
    ...
    tmp = y OP z;
    a = tmp;
    ...
}
else
{
    ...
    tmp = y OP z;
}
c = tmp;