Compiler Design Quantum PDF
Compiler Design (KCS-502)
Course Outcome (CO) — Bloom's Knowledge Level (KL)
At the end of the course, the student will be able to :
CO 1 : Acquire knowledge of the different phases and passes of the compiler and be able to use compiler tools like LEX, YACC, etc. Students will also be able to design different types of compiler tools to meet the requirements of the realistic constraints of compilers. (K3, K6)
CO 2 : Understand the parser and its types, i.e., top-down and bottom-up parsers, and the construction of LL, SLR, CLR, and LALR parsing tables. (K2, K6)
CO 3 : Implement the compiler using the syntax-directed translation method and gain knowledge about synthesized and inherited attributes. (K4, K5)
CO 4 : Acquire knowledge about run-time data structures such as symbol table organization and the different techniques used therein. (K2, K3)
CO 5 : Understand the target machine's run-time environment, its instruction set for code generation, and the techniques used for code optimization. (K2, K4)
DETAILED SYLLABUS (3-0-0)
Unit I (08 lectures) — Introduction to Compiler : Phases and passes, bootstrapping, finite state machines and regular expressions and their applications to lexical analysis, optimization of DFA-based pattern matchers, implementation of lexical analyzers, lexical-analyzer generator, LEX compiler, formal grammars and their application to syntax analysis, BNF notation, ambiguity, YACC. The syntactic specification of programming languages : context-free grammars, derivation and parse trees, capabilities of CFG.
Unit II (08 lectures) — Basic Parsing Techniques : Parsers, shift-reduce parsing, operator precedence parsing, top-down parsing, predictive parsers. Automatic construction of efficient parsers : LR parsers, the canonical collection of LR(0) items, constructing SLR parsing tables, constructing canonical LR parsing tables, constructing LALR parsing tables, using ambiguous grammars, an automatic parser generator, implementation of LR parsing tables.
Unit III (08 lectures) — Syntax-directed Translation : Syntax-directed translation schemes, implementation of syntax-directed translators, intermediate code, postfix notation, parse trees & syntax trees, three-address code, quadruples & triples, translation of assignment statements, Boolean expressions, statements that alter the flow of control, postfix translation, translation with a top-down parser. More about translation : array references in arithmetic expressions, procedure calls, declarations and case statements.
Unit IV (08 lectures) — Symbol Tables : Data structures for symbol tables, representing scope information. Run-Time Administration : Implementation of a simple stack allocation scheme, storage allocation in block-structured languages. Error Detection & Recovery : Lexical-phase errors, syntactic-phase errors, semantic errors.
Unit V (08 lectures) — Code Generation : Design issues, the target language, addresses in the target code, basic blocks and flow graphs, optimization of basic blocks, code generator. Code Optimization : Machine-independent optimizations, loop optimization, DAG representation of basic blocks, value numbers and algebraic laws, global data-flow analysis.
Text books :
1. K. Muneeswaran, Compiler Design, First Edition, Oxford University Press.
2. J. P. Bennet, Introduction to Compiler Techniques, Second Edition, Tata McGraw-Hill, 2003.
3. Henk Alblas and Albert Nymeyer, Practice and Principles of Compiler Building with C, PHI, 2001.
4. Aho, Sethi & Ullman, Compilers: Principles, Techniques and Tools, Pearson Education.
5. V. Raghavan, Principles of Compiler Design, TMH.
6. Kenneth Louden, Compiler Construction, Cengage Learning.
7. Charles Fischer and Richard LeBlanc, Crafting a Compiler with C, Pearson Education.
Compiler Design 1–1 C (CS/IT-Sem-5)
More pdf : www.motivationbank.in
UNIT 1 : Introduction to Compiler

CONTENTS
Part-1 : Introduction to Compiler : Phases and Passes

Questions-Answers
Answer
A compiler contains six phases, which are as follows :
i. Phase 1 (Lexical analyzer) :
a. The lexical analyzer is also called scanner.
b. The lexical analyzer phase takes the source program as input and
separates the characters of the source language into groups of strings
called tokens.
c. These tokens may be keywords, identifiers, operator symbols, and
punctuation symbols.
ii. Phase 2 (Syntax analyzer) :
a. The syntax analyzer phase is also called parsing phase.
b. The syntax analyzer groups tokens together into syntactic
structures.
c. The output of this phase is a parse tree.
iii. Phase 3 (Semantic analyzer) :
a. The semantic analyzer phase checks the source program for
semantic errors and gathers type information for subsequent code
generation phase.
b. It uses parse tree and symbol table to check whether the given
program is semantically consistent with language definition.
c. The output of this phase is an annotated syntax tree.
iv. Phase 4 (Intermediate code generation) :
a. The intermediate code generation phase takes the syntax tree as input
from the semantic phase and generates intermediate code.
b. It generates a variety of forms such as three-address code, quadruples,
and triples.
v. Phase 5 (Code optimization) : This phase improves the intermediate code so that smaller and faster-running target code results.
vi. Phase 6 (Code generation) : This phase takes the optimized intermediate code and produces the target machine code.
Fig. 1.1.1. Phases of a compiler : source program → lexical analyzer → syntax analyzer → semantic analyzer → code optimizer → code generator → target program.
For example, for the statement id1 = (id2 + id3) * (id2 + id3) * 2 :
Lexical analyzer → token stream : id1 = (id2 + id3) * (id2 + id3) * 2
Syntax analyzer → parse tree over id1, id2, id3 and the constant 2
Semantic analyzer → annotated syntax tree, with an int_to_real conversion inserted for the integer constant 2
Intermediate code generation → intermediate code :
t1 = b + c
t2 = t1 * t1
t3 = int_to_real (2)
t4 = t2 * t3
id1 = t4
Code optimization → optimized code :
t1 = b + c
t2 = t1 * t1
id1 = t2 * 2
Code generation → machine code :
MOV R1, b
ADD R1, R1, c
MUL R2, R1, R1
MUL R2, R1, #2.0
ST id1, R2
Answer
Types of passes :
1. Single-pass compiler :
a. In a single-pass compiler, when a line of source is processed, it is
scanned and the tokens are extracted.
b. Then the syntax of the line is analyzed, and the tree structure and
some tables containing information about each token are built.
2. Multi-pass compiler : A multi-pass compiler scans the input source
once and produces a first modified form, then scans that modified form
and produces a second modified form, and so on, until the object form is
produced.
Answer
Role of compiler writing tools :
1. Compiler writing tools are used for the automatic design of compiler
components.
2. Each tool uses a specialized language.
3. Writing tools are also used as debuggers and version managers.
Various compiler construction/writing tools are :
1. Parser generator : This tool produces a syntax analyzer, normally
from input that is based on a context-free grammar.
2. Scanner generator : It automatically generates a lexical analyzer,
normally from a specification based on regular expressions.
3. Syntax directed translation engine :
a. It produces a collection of routines that walk the parse tree.
b. These translations are associated with each node of the parse tree,
and each translation is defined in terms of the translations at its
neighbouring nodes in the tree.
4. Automatic code generator : These tools take a collection of rules
that define the translation of each operation of the intermediate language
into the machine language for the target machine.
5. Data flow engine : The data flow engine is used to optimize the code ;
it gathers information about how values are transmitted from one part
of the program to another.
PART-2
Bootstrapping.
Questions-Answers
Answer
Cross compiler : A cross compiler is a compiler capable of creating executable
code for a platform other than the one on which the compiler is running.
Bootstrapping :
1. Bootstrapping is the process of writing a compiler (or assembler) in the
source programming language that it intends to compile.
2. Bootstrapping leads to a self-hosting compiler.
3. An initial minimal core version of the compiler is generated in a different
language.
4. A compiler is characterized by three languages :
a. Source language (S)
b. Target language (T)
c. Implementation language (I)
5. C(S, T, I) represents a compiler for source S and target T, implemented
in I ; conventionally S and T are written as superscripts and I as a subscript.
The T-diagram shown in Fig. 1.4.1 is also used to depict the same
compiler : S and T across the top of the T shape, with I at its base.
Fig. 1.4.1.
6. To create a new language, L, for machine A :
a. Create C(S, A, A), a compiler for a subset, S, of the desired language, L,
written in language A, which runs on machine A. (Language A may be
assembly language.)
Fig. 1.4.2.
b. Create C(L, A, S), a compiler for language L written in the subset S of L.
Fig. 1.4.3.
c. Compile C(L, A, S) with C(S, A, A) to obtain C(L, A, A), a compiler for L
that runs directly on machine A, as shown by the composed T-diagrams
of Fig. 1.4.4.
Fig. 1.4.4.
PART-3
Finite State Machines and Regular Expressions and their
Application to Lexical Analysis, Optimization of DFA
Based Pattern Matchers.
Questions-Answers
Answer
1. Regular expression is a formula in a special language that is used for
specifying simple classes of strings.
2. A string is a sequence of symbols; for the purpose of most text-based
search techniques, a string is any sequence of alphanumeric characters
(letters, numbers, spaces, tabs, and punctuation).
Formal recursive definition of regular expression :
Formally, a regular expression is an algebraic notation for characterizing a
set of strings.
1. Any terminal, i.e., any symbol belonging to Σ, is a regular expression.
The null string (ε) and the null set (∅) are also regular expressions.
2. If P and Q are two regular expressions, then the union of the two
regular expressions, denoted by P + Q, is also a regular expression.
3. If P and Q are two regular expressions, then their concatenation, denoted
by PQ, is also a regular expression.
4. If P is a regular expression, then the iteration (repetition or closure),
denoted by P*, is also a regular expression.
5. If P is a regular expression, then P+ is also a regular expression.
6. The expressions obtained by repeated application of rules (1) to (5)
over Σ are also regular expressions.
Que 1.6. Define and differentiate between DFA and NFA with an
example.
Answer
DFA :
1. A finite automaton is said to be deterministic if there is only one transition
on a given input symbol from each state.
2. A DFA is a five-tuple, represented as :
M = (Q, Σ, δ, q0, F)
where, Q = a non-empty finite set of states
Σ = a non-empty finite set of input symbols
q0 = the initial state of the DFA
F = a non-empty finite set of final states
δ = the transition function, δ : Q × Σ → Q
NFA :
1. A finite automaton is said to be non-deterministic if there can be more
than one possible transition on the same input symbol from some state.
2. A non-deterministic finite automaton is a five-tuple, represented as :
M = (Q, Σ, δ, q0, F)
where, Q = a non-empty finite set of states
Σ = a non-empty finite set of input symbols
q0 = the initial state of the NFA, a member of Q
F = a non-empty finite set of final states, a subset of Q
δ = the transition function, δ : Q × Σ → 2^Q
Fig. 1.6.1. In an NFA, δ(q, a) may be a set of states {q1, q2, ..., qn}.
Example : DFA for the language that contains the strings ending with
0 over Σ = {0, 1} :
Start state q0 with a self-loop on 1 ; q0 --0--> qf (final), which has a
self-loop on 0 and returns to q0 on 1.
Fig. 1.6.2.
NFA for the language L which accepts all strings in which the third
symbol from the right end is always a, over Σ = {a, b} :
q0 has a self-loop on a, b ; q0 --a--> q1 --a, b--> q2 --a, b--> q3 (final).
Fig. 1.6.3.
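The DFA of Fig. 1.6.2 can be checked mechanically. The following sketch (not from the book; the state names q0/qf mirror the figure) encodes its transition table as a dictionary and runs it over an input string:

```python
# Sketch: DFA of Fig. 1.6.2, accepting strings over {0, 1} ending in 0.
DELTA = {
    ("q0", "0"): "qf", ("q0", "1"): "q0",
    ("qf", "0"): "qf", ("qf", "1"): "q0",
}

def dfa_accepts(s, start="q0", finals=("qf",)):
    state = start
    for ch in s:                      # exactly one move per input symbol
        state = DELTA[(state, ch)]
    return state in finals            # accept iff we halt in a final state
```

For instance, dfa_accepts("110") is True while dfa_accepts("01") is False, matching the language "strings ending with 0".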
Answer
Thompson’s construction :
1. It is an algorithm for transforming a regular expression to equivalent
NFA.
2. The following rules are defined for a regular expression as a basis for the
construction :
i. The NFA representing the empty string ε is :
0 --ε--> 1
ii. If the regular expression is just a character, a can be represented
as :
0 --a--> 1
iii. The union a|b is represented by a new start state with ε-edges into
the NFAs for a and b, and ε-edges from their accept states into a
new accept state.
iv. Concatenation simply involves connecting one NFA to the other ;
thus ab can be represented as :
0 --a--> 1 --b--> 2
v. The Kleene closure must allow for taking zero or more instances
of the letter from the input ; thus a* looks like :
0 --ε--> 1 --a--> 2 --ε--> 3, with extra ε-edges 0 --> 3 and 2 --> 1
For example :
Construct an NFA for r = (a|b)*a
For r1 = a :
start : 2 --a--> 3
For r2 = b :
start : 4 --b--> 5
For r3 = a|b : a new start state 1 with ε-edges to 2 and 4, and ε-edges
from 3 and 5 to a new accept state 6.
The NFA for r4 = (r3)* : new states 0 and 7, with ε-edges 0 --> 1,
0 --> 7, 6 --> 1 and 6 --> 7.
Finally, the NFA for r5 = r4·r1 = (a|b)*a : append an edge 7 --a--> 8,
making 8 the accept state.
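The rules above can be turned into a small program. This is an illustrative sketch of Thompson's construction (the class, method and fragment names are our own, and the state numbering will differ from the worked example); it builds the NFA for (a|b)*a and tests membership by ε-closure simulation:

```python
# Sketch of Thompson's construction: each rule yields a fragment
# (start, accept); None on an edge stands for an epsilon-move.
class NFA:
    def __init__(self):
        self.trans = {}                      # state -> [(symbol, state)]
        self.n = 0
    def new_state(self):
        s = self.n
        self.n += 1
        self.trans[s] = []
        return s
    def edge(self, a, sym, b):
        self.trans[a].append((sym, b))
    def char(self, c):                       # rule ii: single character
        a, b = self.new_state(), self.new_state()
        self.edge(a, c, b)
        return (a, b)
    def concat(self, f, g):                  # rule iv: link f's accept to g
        self.edge(f[1], None, g[0])
        return (f[0], g[1])
    def union(self, f, g):                   # rule iii: new start/accept
        a, b = self.new_state(), self.new_state()
        for frag in (f, g):
            self.edge(a, None, frag[0])
            self.edge(frag[1], None, b)
        return (a, b)
    def star(self, f):                       # rule v: Kleene closure
        a, b = self.new_state(), self.new_state()
        self.edge(a, None, f[0]); self.edge(a, None, b)
        self.edge(f[1], None, f[0]); self.edge(f[1], None, b)
        return (a, b)
    def eps_closure(self, states):
        stack, seen = list(states), set(states)
        while stack:
            for sym, t in self.trans[stack.pop()]:
                if sym is None and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    def accepts(self, frag, s):
        cur = self.eps_closure({frag[0]})
        for ch in s:
            cur = self.eps_closure(
                {t for st in cur for sym, t in self.trans[st] if sym == ch})
        return frag[1] in cur

n = NFA()   # build (a|b)*a as in the worked example
frag = n.concat(n.star(n.union(n.char("a"), n.char("b"))), n.char("a"))
```

With this fragment, n.accepts(frag, "bba") is True and n.accepts(frag, "ab") is False, as required by (a|b)*a.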
Que 1.8. Construct the NFA for the regular expression a|abb|a*b+
by using Thompson’s construction methodology.
AKTU 2017-18, Marks 10
Answer
Given regular expression : a + abb + a*b+
Step 1 :
q1 --(a + abb + a*b+)--> qf
Step 2 : split the union into three parallel paths from q1 to qf : one
labelled a ; one labelled abb, through the intermediate states q2 and q3 ;
and one labelled a*b+.
Step 3 : expand a* as a self-loop on a and b+ as a b followed by a self-loop
on b, giving the final NFA :
q1 --a--> qf
q1 --a--> q2 --b--> q3 --b--> qf
q1 --ε--> q4, q4 --a--> q4 (loop), q4 --b--> q5, q5 --b--> q5 (loop), q5 --ε--> qf
Que 1.9. Draw NFA for the regular expression ab*|ab.
Answer
Step 1 : NFA for a
Step 2 : NFA for b*
Step 3 : NFA for b
Step 4 : NFA for ab* (concatenate Steps 1 and 2)
Step 5 : NFA for ab (concatenate Steps 1 and 3)
Step 6 : NFA for ab*|ab (union of Steps 4 and 5) :
From the start state 1, ε-moves lead to two branches : the upper
branch 2 --a--> 3 --ε--> 4 --b--> 5 with a loop for b*, and the lower
branch 6 --a--> 7 --ε--> 8 --b--> 9 ; both branches reach the accept
state 10 by ε-moves.
Fig. 1.9.1. NFA of ab*|ab.
Que 1.10. Discuss conversion of an NFA into a DFA. Also give the steps involved.
Answer
Conversion from NFA to DFA :
Suppose there is an NFA N < Q, , q0, , F > which recognizes a language L.
Then the DFA D < Q', , q0, , F > can be constructed for language L as :
Step 1 : Initially Q = .
Step 2 : Add q0 to Q.
Step 3 : For each state in Q, find the possible set of states for each input
symbol using transition function of NFA. If this set of states is not in Q, add
it to Q.
Step 4 : The final states of the DFA are all those states of Q′ that contain a final
state of the NFA (a member of F).
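The four steps above can be sketched compactly as code (the helper names are our own; the NFA is assumed ε-free, with its transition function given as a dictionary):

```python
# Sketch of the subset construction for an epsilon-free NFA.
# nfa_delta maps (state, symbol) -> set of successor states.
def nfa_to_dfa(nfa_delta, start, finals, alphabet):
    start_set = frozenset({start})            # Steps 1-2
    dfa, seen, worklist = {}, {start_set}, [start_set]
    while worklist:                           # Step 3, repeated to a fixpoint
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in nfa_delta.get((s, a), ()))
            dfa[(S, a)] = T
            if T and T not in seen:
                seen.add(T)
                worklist.append(T)
    # Step 4: DFA finals are the subsets containing an NFA final state
    return dfa, start_set, {S for S in seen if S & finals}

def dfa_run(dfa, start, finals, s):
    S = start
    for ch in s:
        S = dfa.get((S, ch), frozenset())     # missing entry = dead state
    return S in finals

# NFA accepting strings over {0, 1} that end in "10":
nfa = {(0, "0"): {0}, (0, "1"): {0, 1}, (1, "0"): {2}}
dfa, s0, fin = nfa_to_dfa(nfa, 0, {2}, "01")
```

Here dfa_run(dfa, s0, fin, "110") is True and dfa_run(dfa, s0, fin, "101") is False, as expected for strings ending in 10.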
Que 1.11. Construct the minimized DFA for the regular expression
Answer
Given regular expression : (0 + 1)*(0 + 1)10
NFA for the given regular expression, built by decomposing the expression
step by step :
q1 --(0 + 1)*(0 + 1)10--> qf
q1 --(0 + 1)*--> q2 --(0 + 1)--> q3 --1--> q4 --0--> qf
Expanding (0 + 1)* as a self-loop and removing the ε-transitions (ε can be
neglected, so q1 = q5 = q2), we get :
q1 (self-loop on 0, 1) --0, 1--> q3 --1--> q4 --0--> qf
Now, we convert the above NFA into a DFA.
Transition table for the NFA :
State | 0 | 1
q1 | {q1, q3} | {q1, q3}
q3 | — | {q4}
q4 | {qf} | —
*qf | — | —
Transition table for the DFA (renaming the subset states) :
State | 0 | 1 | Renaming
{q1} | {q1, q3} | {q1, q3} | {q1} = A
{q1, q3} | {q1, q3} | {q1, q3, q4} | {q1, q3} = B
{q1, q3, q4} | {q1, q3, qf} | {q1, q3, q4} | {q1, q3, q4} = C
*{q1, q3, qf} | {q1, q3} | {q1, q3, q4} | {q1, q3, qf} = D
Transition diagram for the DFA :
A --0, 1--> B ; B --0--> B, B --1--> C ; C --1--> C, C --0--> D ;
D --0--> B, D --1--> C (D is the final state).
State | 0 | 1
A | B | B
B | B | C
C | D | C
*D | B | C
For minimization, divide the rows of the transition table into two sets :
Set-1 (non-final states) :
A | B | B
B | B | C
C | D | C
Set-2 (final states) :
*D | B | C
On input 0, C goes to D (Set-2) while A and B stay in Set-1, so C is split off ;
A and B are then distinguished on input 1 (B goes to C). Hence no states can
be merged, and the four-state DFA above is already minimal.
Que 1.12. How is a finite automaton useful for lexical analysis ?
Answer
1. Lexical analysis is the process of reading the source text of a program
and converting it into a sequence of tokens.
2. Since the lexical structure of every programming language can be specified
by a regular language, a common way to implement a lexical analyzer
is to :
a. Specify regular expressions for all of the kinds of tokens in the
language.
b. The disjunction of all of the regular expressions thus describes
any possible token in the language.
c. Convert the overall regular expression specifying all possible
tokens into a Deterministic Finite Automaton (DFA).
d. Translate the DFA into a program that simulates the DFA. This
program is the lexical analyzer.
3. This approach is so useful that programs called lexical analyzer
generators exist to automate the entire process.
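Steps (a)-(d) can be approximated in a few lines by letting a regex engine play the role of the DFA. In this hypothetical mini-lexer (the token names and patterns are invented for illustration), the master pattern is exactly the "disjunction of all of the regular expressions" from step (b):

```python
import re

# Hypothetical token kinds and patterns, joined into one master regex.
# Characters matched by no pattern are silently skipped by finditer.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),            # whitespace, discarded below
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(src):
    """Return (kind, lexeme) pairs; earlier patterns win ties."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(src)
            if m.lastgroup != "SKIP"]
```

For example, tokenize("x = y + 42") yields the pairs (ID, x), (OP, =), (ID, y), (OP, +), (NUMBER, 42). Internally the re module compiles the disjunction into an automaton, which is the point of steps (c)-(d).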
Answer
1. DFA for all strings over {a, b} such that the fifth symbol from the right is a :
Regular expression : (a + b)* a (a + b) (a + b) (a + b) (a + b)
2. Regular expression :
[00(0 + 1) (0 + 1) 0(0 + 1) 0(0 + 1) + 0(0 + 1) (0 + 1)0 + (0 + 1) 00(0 +1)
+ (0 + 1)0(0 + 1) 0 + (0 + 1) (0 + 1)00]
(The ε-NFA over {a, b, c} with states q0, q1, q2 ; its transitions are listed
in the table below.)
Fig. 1.14.1.
Answer
Transition table for the ε-NFA :
State | a | b | c
q0 | {q1} | {q2} | {q1, q2}
q1 | {q0} | {q2} | {q0, q2}
q2 | — | — | —
ε-closure of {q0} = {q0, q1, q2}
ε-closure of {q1} = {q1}
ε-closure of {q2} = {q2}
Transition table for the NFA (after eliminating ε-transitions) :
State | a | b | c
{q0, q1, q2} | {q0, q1, q2} | {q1, q2} | {q0, q1, q2}
{q1, q2} | {q0, q1, q2} | {q2} | {q0, q1, q2}
{q2} | — | — | —
Let {q0, q1, q2} = A, {q1, q2} = B, {q2} = C.
Transition table for the resulting DFA :
State | a | b | c
A | A | B | A
B | A | C | A
C | D | D | D
D | D | D | D
Here D = ∅ is the dead state, with self-loops on a, b, c.
Fig. 1.14.2.
PART-4
Implementation of Lexical Analyzers, Lexical Analyzer Generator,
LEX Compiler.
Questions-Answers
Answer
A lexical analyzer can be implemented in the following steps :
1. The input to the lexical analyzer is the source program.
2. Using an input buffering scheme, it scans the source program.
3. Regular expressions are used to represent the input patterns.
4. Each input pattern is then converted into an NFA, i.e., a finite automaton,
and the recognized identifiers are entered into the symbol table.
(Pipeline : regular expression → finite automata → symbol table.)
Answer
1. For the efficient design of a compiler, various tools are used to automate
its phases. The lexical analysis phase can be automated using a
tool called LEX.
2. LEX is a Unix utility which generates a lexical analyzer.
3. The lexical analyzer is generated with the help of regular expressions.
4. A LEX-generated lexer is very fast at finding tokens compared with a
lexer handwritten directly in C.
5. LEX scans the source program to obtain the stream of tokens, and
these tokens can be related together so that various programming
structures such as expressions, block statements, control structures, and
procedures can be recognized.
Answer
1. Automatic generation of lexical analyzer is done using LEX
programming language.
2. The LEX specification file can be denoted using the extension .l (often
pronounced as dot L).
3. For example, let us consider specification file as x.l.
4. This x.l file is then given to LEX compiler to produce lex.yy.c as shown
in Fig. 1.17.1. This lex.yy.c is a C program which is actually a lexical
analyzer program.
LEX specification file x.l --> LEX compiler --> lex.yy.c (the lexical analyzer
program)
Fig. 1.17.1.
5. The LEX specification file stores the regular expressions for the token
and the lex.yy.c file consists of the tabular representation of the
transition diagrams constructed for the regular expression.
6. In specification file, LEX actions are associated with every regular
expression.
7. These actions are simply the pieces of C code that are directly carried
over to the lex.yy.c.
8. Finally, the C compiler compiles this generated lex.yy.c and produces
an object program a.out as shown in Fig. 1.17.2.
9. When some input stream is given to a.out then sequence of tokens
gets generated. The described scenario is shown in Fig. 1.17.2.
lex.yy.c --> C compiler --> a.out (executable program)
input stream of strings from the source program --> a.out --> stream of tokens
Fig. 1.17.2. Generation of lexical analyzer using LEX.
Answer
The LEX program consists of three parts :
%{
Declaration section
%}
%%
Rule section
%%
Auxiliary procedure section
1. Declaration section :
a. In the declaration section, declaration of variable constants can be
done.
b. Some regular definitions can also be written in this section.
c. The regular definitions are basically components of regular
expressions.
2. Rule section :
a. The rule section consists of regular expressions with associated
actions. These translation rules can be given in the form as :
R1 {action1}
R2 {action2}
.
.
.
Rn {actionn}
Where each Ri is a regular expression and each actioni is a program
fragment describing what action is to be taken for the corresponding
regular expression.
b. These actions can be specified by piece of C code.
3. Auxiliary procedure section :
a. In this section, all the procedures are defined which are required
by the actions in the rule section.
b. This section consists of two functions :
i. main() function
ii. yywrap() function
Answer
%{
int count = 0;
/* program to recognize the keywords */
%}
%%
[ \t\n]+ ; /* "+" means one or more; this pattern ignores white space */
auto|double|if|static|break|else|int|struct|case|enum|long|switch|char|extern|near|typedef|const|float|register|union|unsigned|void|while|default { count++; printf("C keyword(%d) :\t%s\n", count, yytext); }
[a-zA-Z]+ { printf("%s: is not a keyword\n", yytext); }
%%
main()
{
yylex();
}
Que 1.20. What are the various LEX actions that are used in LEX
programming ?
Answer
There are following LEX actions that can be used for ease of programming
using LEX tool :
1. BEGIN : It indicates the start state. The lexical analyzer starts at state
0.
2. ECHO : It emits the input as it is.
3. yytext :
a. yytext is a null-terminated string that stores the lexeme when the
lexer recognizes a token in the input.
b. When a new token is found, the contents of yytext are replaced by
the new token.
4. yylex() : This is an important function. yylex() is called when the
scanner starts scanning the source program.
5. yywrap() :
a. The function yywrap() is called when the scanner encounters end of
file.
b. If yywrap() returns 0, the scanner continues scanning.
c. If yywrap() returns 1, end of file has been encountered.
6. yyin : It is the standard input file that stores the input source program.
7. yyleng : yyleng stores the length (number of characters) of the matched
input string.
Answer
Token :
1. A token is a pair consisting of a token name and an optional attribute
value.
2. The token name is an abstract symbol representing a kind of lexical
unit.
3. Tokens can be identifiers, keywords, constants, operators, and
punctuation symbols such as commas and parentheses.
Lexeme :
1. A lexeme is a sequence of characters in the source program that matches
the pattern for a token.
2. Lexeme is identified by the lexical analyzer as an instance of that token.
Pattern :
1. A pattern is a description of the form that the lexemes of a token may
take.
2. Regular expressions play an important role for specifying patterns.
3. If a keyword is considered as a token, the pattern is just the sequence of characters forming the keyword.
PART-5
Formal Grammars and their Application to Syntax Analysis,
BNF Notation.
Questions-Answers
Answer
A grammar or phrase structured grammar is combination of four tuples and
can be represented as G (V, T, P, S). Where,
1. V is finite non-empty set of variables/non-terminals. Generally non-
terminals are represented by capital letters like A, B, C, ……, X, Y, Z.
2. T is finite non-empty set of terminals, sometimes also represented by
or VT. Generally terminals are represented by a, b, c, x, y, z, , , etc.
3. P is finite set whose elements are in the form . Where and are
strings, made up by combination of V and T i.e., (V T). has at least
one symbol from V. Elements of P are called productions or production
rule or rewriting rules.
4. S is special variable/non-terminal known as starting symbol.
While writing a grammar, it should be noted that V T = , i.e., no terminal
can belong to set of non-terminals and no non-terminal can belong to set of
terminals.
Answer
E → E E
E → (E)
E → id
Answer
BNF notation :
1. The BNF (Backus-Naur Form) is a notation technique for context free
grammar. This notation is useful for specifying the syntax of the
language.
2. The BNF specification is of the form :
<symbol> ::= Exp1 | Exp2 | Exp3 ...
where <symbol> is a non-terminal and each Expi is a sequence of
symbols ; these symbols can be a combination of terminals and non-
terminals.
3. For example :
<address> ::= <fullname> "," <street> "," <zip code>
<fullname> ::= <first name> "-" <middle name> "-" <surname>
<street> ::= <street name> "," <city>
We can specify first name, middle name, surname, street name, city
and zip code by valid strings.
4. BNF notation is most often informal and human readable, but the
commonly used conventions in BNF are :
a. Optional symbols are written within square brackets.
b. To repeat a symbol zero or more times, an asterisk is used.
For example : {name}*
c. To repeat a symbol one or more times, + is used.
For example : {name}+
d. Alternative rules are separated by a vertical bar.
e. Groups of items are enclosed within brackets.
PART-6
Ambiguity, YACC.
Questions-Answers
Answer
Ambiguous grammar : A context free grammar G is ambiguous if there
is at least one string in L(G) having two or more distinct derivation tree.
Proof : Let the production rules be given as :
E → EE+
E → E(E)
E → id
The parse tree for id(id)id+ : the root E expands by E → EE+ ; its left E
expands by E → E(E), whose inner and outer E's expand to id ; the right E
expands to id.
Only one parse tree is possible for id(id)id+, so the given grammar is
unambiguous.
Answer
i. Context free grammar : Refer Q. 1.23, Page 1–23C, Unit-1.
ii. YACC parser generator :
1. YACC (Yet Another Compiler - Compiler) is the standard parser
generator for the Unix operating system.
2. An open source program, YACC generates code for the parser in
the C programming language.
3. It is a Look Ahead Left-to-Right (LALR) parser generator, generating
a parser, the part of a compiler that tries to make syntactic sense of
the source code.
Fig. 1.28.1. (Two distinct parse trees for the string aab under the given
grammar.)
Here for the same string, we are getting more than one parse tree. Hence,
grammar is an ambiguous grammar.
The grammar
S AB
A Aa/a
B b
is an unambiguous grammar equivalent to G. Now this grammar has only
one parse tree for string aab.
Fig. 1.28.2. The unique parse tree for aab : S derives A B ; A derives Aa
and then a ; B derives b.
PART-7
The Syntactic Specification of Programming Languages : Context
Free Grammar (CFG), Derivation and Parse Trees,
Capabilities of CFG.
Questions-Answers
Que 1.29. Define parse tree. What are the conditions for
constructing a parse tree from a CFG ?
Answer
Parse tree :
1. A parse tree is an ordered tree in which left hand side of a production
represents a parent node and children nodes are represented by the
production’s right hand side.
2. Parse tree is the tree representation of deriving a Context Free Language
(CFL) from a given Context Free Grammar (CFG). These types of trees
are sometimes called derivation trees.
Conditions for constructing a parse tree from a CFG :
i. Each vertex of the tree must have a label. The label is a non-terminal, a
terminal, or null (ε).
ii. The root of the tree is the start symbol, i.e., S.
iii. The labels of the internal vertices are non-terminal symbols, members of VN.
iv. If there is a production A → X1X2...Xk, then for a vertex with label A, the
children of that node will be X1, X2, ..., Xk.
v. A vertex n is called a leaf of the parse tree if its label is a terminal
symbol or null (ε).
Answer
7. Then we say that α1 ⇒* αm in grammar G, i.e., α1 derives αm. If α1
derives αm by exactly i steps, we say α1 ⇒i αm.
Que 1.31. What do you mean by left most derivation and right
most derivation with example ?
Answer
Left most derivation : The derivation S ⇒* s is called a leftmost derivation
if, at every step, the production is applied to the leftmost variable (non-
terminal).
Example : Let us consider a grammar G that consists of the production rules
E → E + E | E * E | id.
Firstly take the production
E ⇒ E + E ⇒ E * E + E (replace E by E * E)
⇒ id * E + E (replace E by id)
⇒ id * id + E (replace E by id)
⇒ id * id + id (replace E by id)
Right most derivation : A derivation S ⇒* s is called a rightmost derivation
if, at every step, the production is applied to the rightmost variable (non-
terminal).
Example : Let us consider a grammar G having the productions
E → E + E | E * E | id.
Start with the production
E ⇒ E * E
⇒ E * E + E (replace E by E + E)
⇒ E * E + id (replace E by id)
⇒ E * id + id (replace E by id)
⇒ id * id + id (replace E by id)
Que 1.32. Describe the capabilities of CFG.
Answer
Various capabilities of CFG are :
1. Context free grammar is useful to describe most of the programming
languages.
2. If the grammar is properly designed then an efficient parser can be
constructed automatically.
3. Using associativity and precedence information, grammars for
expressions can be constructed.
4. Context free grammar is capable of describing nested structures like :
balanced parenthesis, matching begin-end, corresponding if-then-else’s
and so on.
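Point 4 can be made concrete. The following is a minimal sketch (our own, not from the book) of a recursive-descent recognizer for the balanced-parenthesis grammar S → ( S ) S | ε :

```python
# Recognizer for S -> ( S ) S | epsilon by recursive descent.
def parse_S(s, i=0):
    """Return the index just past the longest S derivable at position i."""
    while i < len(s) and s[i] == "(":
        j = parse_S(s, i + 1)            # parse the inner S
        if j >= len(s) or s[j] != ")":
            return i                     # unmatched '(' -> S ends before it
        i = j + 1                        # consume ')' and continue with the trailing S
    return i

def balanced(s):
    return parse_S(s) == len(s)          # the whole input must derive from S
```

For instance, balanced("(()())") is True while balanced(")(") is False; no regular expression can do this check in general, which is exactly the capability CFGs add.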
(The ε-NFA over {a, b, c} of Fig. 1.14.1.)
Fig. 1.
Ans. Refer Q. 1.14.
Q. 6. Explain the term token, lexeme and pattern.
Ans. Refer Q. 1.21.
Q. 7. What is an ambiguous grammar ? Is the following grammar
ambiguous ? Prove it.
E → EE+ | E(E) | id
Ans. Refer Q. 1.26.
UNIT 3 : Syntax-Directed Translations

CONTENTS
Part-1 : Syntax-Directed Translation : Syntax-Directed Translation Scheme, Implementation of Syntax-Directed Translators
PART-1
Syntax-Directed Translation : Syntax-Directed Translation
Schemes, Implementation of Syntax-Directed Translators.
Questions-Answers
Answer
1. Syntax-directed definition/translation is a generalization of context-
free grammar in which each grammar production X → α is associated
with a set of semantic rules of the form a := f(b1, b2, ..., bk), where a is
an attribute obtained from the function f.
2. Syntax directed translation is a kind of abstract specification.
3. It is done for static analysis of the language.
4. It allows subroutines or semantic actions to be attached to the
productions of a context free grammar. These subroutines generate
intermediate code when called at appropriate time by a parser for that
grammar.
5. The syntax directed translation is partitioned into two subsets called
the synthesized and inherited attributes of grammar.
Fig. 3.1.1. The place of syntax-directed translation : lexical analysis →
token stream → syntax analysis → parse tree → semantic analysis →
dependency graph → evaluation order for semantic rules → translation
of constructs.
Fig. 3.1.2. (An annotated parse tree in which each node's val attribute is
computed from its children ; e.g., T.val = 4 * 7 = 28, E.val = 28 + 1 = 29,
and id.lexval = 2 enters the product at the root.)
Answer
Syntax directed translation : Refer Q. 3.1, Page 3–2C, Unit-3.
Semantic actions are attached to every node of the annotated parse tree.
Example : A parse tree along with the values of the attributes at nodes
(called an “annotated parse tree”) for an expression 2 + 3*5 with synthesized
attributes is shown in the Fig. 3.2.1.
Fig. 3.2.1. Annotated parse tree for 2 + 3*5 : T.val = 3 * 5 = 15 at the
inner node, and E.val = 2 + 15 = 17 at the root.
Answer
Attributes :
1. Attributes are associated information with language construct by
attaching them to grammar symbols representing that construct.
2. Attributes are associated with the grammar symbols that are the labels
of parse tree node.
3. An attribute can represent anything (reasonable) such as string, a
number, a type, a memory location, a code fragment etc.
4. The value of an attribute at parse tree node is defined by a semantic rule
associated with the production used at that node.
Synthesized attribute :
1. An attribute at a node is said to be synthesized if its value is computed
from the attribute values of the children of that node in the parse tree.
2. A syntax-directed definition that uses synthesized attributes exclusively
is said to be an S-attributed definition.
3. Thus, a parse tree for S-attributed definition can always be annotated
by evaluating the semantic rules for the attributes at each node from
leaves to root.
4. If the translations are specified using S-attributed definitions, then the
semantic rules can be conveniently evaluated by the parser itself during
the parsing.
For example : A parse tree along with the values of the attributes at the
nodes (called an "annotated parse tree") for the expression 2 + 3*5 with
synthesized attributes is shown in Fig. 3.3.1 : T.val = 3 * 5 = 15 at the
inner node and E.val = 2 + 15 = 17 at the root.
Inherited attribute :
1. An inherited attribute is one whose value at a node in a parse tree is
defined in terms of attributes at the parent and/or sibling of that node.
2. Inherited attributes are convenient for expressing the dependence of a
programming language construct on the context in which it appears.
For example : Syntax directed definitions that uses inherited attribute
are given as :
D → T L	L.type := T.type
T → int	T.type := integer
T → real	T.type := real
L → L1 , id	L1.type := L.type ;
	enter (id.ptr, L.type)
L → id	enter (id.ptr, L.type)
The parse tree, along with the attribute values at the parse tree nodes, for an
input string int id1, id2, id3 is shown in the Fig. 3.3.2.
[Fig. 3.3.2 : Parse tree for int id1, id2, id3 : T.type = int is synthesized at T, and L.type = int is inherited down the chain of L nodes, entering id1, id2 and id3 into the symbol table with type int.]
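The downward flow of L.type can be mimicked in Python (a sketch with our own helper names; enter corresponds to the enter( ) action above) :

```python
# Sketch: the inherited attribute L.type for "int id1, id2, id3".
symbol_table = {}

def enter(name, typ):
    symbol_table[name] = typ          # symbol-table entry action

def declare(type_name, id_list):
    l_type = type_name                # L.type := T.type (inherited)
    for ident in id_list:             # each L1 inherits L.type unchanged
        enter(ident, l_type)

declare('integer', ['id1', 'id2', 'id3'])
```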
PART-2
Intermediate Code, Postfix Notation, Parse Trees and Syntax Trees.
Questions-Answers
Answer
Intermediate code generation is the fourth phase of compiler which takes
parse tree as an input from semantic phase and generates an intermediate
code as output.
The benefits of intermediate code are :
1. Intermediate code is machine independent, which makes it easy to
retarget the compiler to generate code for newer and different processors.
2. Intermediate code is nearer to the target machine as compared to the
source language so it is easier to generate the object code.
3. The intermediate code allows the machine independent optimization of
the code by using specialized techniques.
4. Intermediate code generation can be implemented by syntax directed
translation, so by augmenting the parser it can be folded into the
parsing.
Answer
Postfix (reverse polish) translation : It is the type of translation in which
the operator symbol is placed after its two operands.
For example :
Consider the expression : (20 + (–5) * 6 + 12)
Postfix for the above expression can be calculated as :
(20 + t1 * 6 + 12)	t1 = 5 –
(20 + t2 + 12)	t2 = t1 6 *
(t3 + 12)	t3 = 20 t2 +
t4	t4 = t3 12 +
Now, substituting the values of t4, t3, t2, t1 :
t4 = t3 12 +
= 20 t2 + 12 +
= 20 t1 6 * + 12 +
= 20 5 – 6 * + 12 +
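The same conversion can be sketched as a postorder walk over an expression tree (illustrative Python; here –5 is treated as a single literal operand rather than as a unary-minus application, unlike the worked answer above) :

```python
# Sketch: postfix notation is the postorder traversal of the expression tree.
def postfix(node):
    if isinstance(node, tuple):               # (op, left, right)
        op, left, right = node
        return postfix(left) + postfix(right) + [op]
    return [str(node)]                        # operand (leaf)

# ((20 + (-5) * 6) + 12)
tree = ('+', ('+', 20, ('*', -5, 6)), 12)
result = ' '.join(postfix(tree))
```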
Que 3.7. Define parse tree. Why is parse tree construction only
possible for CFG ?
Answer
Parse tree : A parse tree is an ordered tree in which left hand side of a
production represents a parent node and children nodes are represented by
the production’s right hand side.
Conditions for constructing a parse tree from a CFG are :
i. Each vertex of the tree must have a label. The label is a non-terminal,
a terminal, or null (ε).
ii. The root of the tree is the start symbol, i.e., S.
iii. The labels of the internal vertices are non-terminal symbols (in VN).
iv. If there is a production A → X1X2 .... Xk, then for a vertex with label A,
the children nodes will be X1X2 .... Xk.
v. A vertex n is called a leaf of the parse tree if its label is a terminal
symbol or null (ε).
Parse tree construction is only possible for CFG. This is because the properties
of a tree match with the properties of CFG.
Que 3.8. What is syntax tree ? What are the rules to construct
syntax tree for an expression ?
Answer
1. A syntax tree is a tree that shows the syntactic structure of a program
while omitting irrelevant details present in a parse tree.
2. A syntax tree is a condensed form of the parse tree.
3. The operator and keyword nodes of a parse tree are moved to their
parent, and a chain of single productions is replaced by a single link.
Rules for constructing a syntax tree for an expression :
1. Each node in a syntax tree can be implemented as a record with several
fields.
2. In the node for an operator, one field identifies the operator and the
remaining field contains pointer to the nodes for the operands.
3. The operator often is called the label of the node.
4. The following functions are used to create the nodes of syntax trees for
expressions with binary operators. Each function returns a pointer to
newly created node.
a. Mknode(op, left, right) : It creates an operator node with label op
and two fields containing pointers to left and right.
b. Mkleaf(id, entry) : It creates an identifier node with label id and a
field containing entry, a pointer to the symbol table entry for the
identifier.
c. Mkleaf(num, val) : It creates a number node with label num and a
field containing val, the value of the number.
[Fig. 3.8.1 : The syntax tree for a – 4 + c : a + node whose left child is a – node (with leaves id → entry for a, and num 4) and whose right child is the leaf id → entry for c.]
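The mknode/mkleaf construction for a – 4 + c can be sketched in Python (nodes as small dicts standing in for records; mkleaf_id and mkleaf_num are our own names for the leaf constructors) :

```python
# Sketch: building the syntax tree for a - 4 + c bottom-up.
def mknode(op, left, right):
    return {'label': op, 'left': left, 'right': right}

def mkleaf_id(entry):
    return {'label': 'id', 'entry': entry}    # pointer to symbol-table entry

def mkleaf_num(val):
    return {'label': 'num', 'val': val}

p1 = mkleaf_id('a')
p2 = mkleaf_num(4)
p3 = mknode('-', p1, p2)                      # subtree for a - 4
p4 = mkleaf_id('c')
root = mknode('+', p3, p4)                    # whole tree for (a - 4) + c
```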
Answer
Syntax tree for given expression : a * (b + c) – d/2
        –
      /   \
     *     /
    / \   / \
   a   + d   2
      / \
     b   c
Fig. 3.9.1.
PART-3
Three Address Code, Quadruples and Triples.
Questions-Answers
Answer
1. Three address code is an abstract form of intermediate code that can be
implemented as a record with the address fields.
2. The general form of three address code representation is :
a := b op c
where a, b and c are operands that can be names, constants and op
represents the operator.
3. The operator can be a fixed or floating point arithmetic operator, a
logical operator, or a boolean operator. Only a single operation is
allowed on the right side of the expression at a time.
4. At most three addresses are allowed (two for operands and one for the
result). Hence the name of this representation is three address code.
For example : The three address code for the expression a = b + c + d
will be :
t1 := b + c
t2 := t1 + d
a := t2
Here t1 and t2 are the temporary names generated by the compiler.
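Generating those temporaries can be sketched in Python (illustrative only; newtemp mirrors the compiler's temporary generator) :

```python
# Sketch: three address code for a := b + c + d with generated temporaries.
code, counter = [], 0

def newtemp():
    global counter
    counter += 1
    return f"t{counter}"

def emit_sum(target, operands):
    acc = operands[0]
    for operand in operands[1:]:
        t = newtemp()
        code.append(f"{t} := {acc} + {operand}")   # one operation per line
        acc = t
    code.append(f"{target} := {acc}")

emit_sum('a', ['b', 'c', 'd'])
```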
Que 3.11. What are different ways to write three address code ?
Answer
Different ways to write three address code are :
1. Quadruple representation :
a. The quadruple is a structure with at most four fields such as op,
arg1, arg2, result.
b. The op field is used to represent the internal code for operator, the
arg1 and arg2 represent the two operands used and result field is
used to store the result of an expression.
For example : Consider the input statement x := – a * b + – a * b
The three address code is :
t1 := minus a
t2 := t1 * b
t3 := minus a
t4 := t3 * b
t5 := t2 + t4
x := t5
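In memory, each quadruple can be held as a four-field record; a Python sketch for the statement x := – a * b + – a * b (the tuple layout (op, arg1, arg2, result) is our own, with None marking an unused field) :

```python
# Sketch: quadruples for x := -a*b + -a*b as (op, arg1, arg2, result).
quads = [
    ('minus', 'a', None, 't1'),
    ('*', 't1', 'b', 't2'),
    ('minus', 'a', None, 't3'),
    ('*', 't3', 'b', 't4'),
    ('+', 't2', 't4', 't5'),
    (':=', 't5', None, 'x'),
]
results = [q[3] for q in quads]   # every quadruple names its result explicitly
```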
Que 3.12. Write the quadruples, triple and indirect triple for the
following expression :
(x + y) * (y + z) + (x + y + z)
AKTU 2018-19, Marks 07
Answer
The three address code for given expression :
t1 := x + y
t2 := y + z
t3 := t1* t2
t4 := t1 + z
t5 := t3 + t4
i. The quadruple representation :
Location Operator Operand 1 Operand 2 Result
(1) + x y t1
(2) + y z t2
(3) * t1 t2 t3
(4) + t1 z t4
(5) + t3 t4 t5
ii. The triple representation :
Location Operator Operand 1 Operand 2
(0) + x y
(1) + y z
(2) * (0) (1)
(3) + (0) z
(4) + (2) (3)
iii. The indirect triple representation :
Statement list : (0) → (35), (1) → (36), (2) → (37), (3) → (38), (4) → (39)
Location Operator Operand 1 Operand 2
(35) + x y
(36) + y z
(37) * (35) (36)
(38) + (35) z
(39) + (37) (38)
Que 3.13. Generate three address code for the following code :
switch a + b
{
case 1 : x = x + 1
case 2 : y = y + 2
case 3 : z = z + 3
default : c = c – 1
} AKTU 2015-16, Marks 10
Answer
101 : t1 = a + b
102 : t = t1
103 : if t = 1 goto 105
104 : goto 107
105 : t2 = x + 1
106 : x = t2
107 : if t = 2 goto 109
108 : goto 111
109 : t3 = y + 2
110 : y = t3
111 : if t = 3 goto 113
112 : goto 115
113 : t4 = z + 3
114 : z = t4
115 : t5 = c – 1
116 : c = t5
117 : Next statement
Answer
Given : low1 = 1, low2 = 1, n1 = 10, n2 = 20, w = 4.
B[i, j] = ((i × n2) + j) × w + (base – ((low1 × n2) + low2) × w)
B[i, j] = ((i × 20) + j) × 4 + (base – ((1 × 20) + 1) × 4)
B[i, j] = 4 × (20i + j) + (base – 84)
Similarly, A[i, j] = 4 × (20i + j) + (base – 84)
and, D[i, j] = 4 × (20i + j) + (base – 84)
Hence, A[i, j] + B[i, j] + D[i, j] = [4 × (20i + j) + (base – 84)] × (1 + 1 + 1)
= 4 × 3 × (20i + j) + (base – 84) × 3
= 12 × (20i + j) + (base – 84) × 3
Therefore, three address code will be
t1 = 20 × i
t2 = t1 + j
t3 = base – 84
t4 = 12 × t2
t5 = t4 + 3 × t3
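The simplification can be checked mechanically (a quick Python check, not part of the answer; base = 1000 is an arbitrary test value) :

```python
# Check: 4*(20i + j) + base - 84 equals the direct row-major formula
# base + ((i - low1)*n2 + (j - low2))*w with low1 = low2 = 1, n2 = 20, w = 4.
def simplified(i, j, base):
    return 4 * (20 * i + j) + base - 84

def direct(i, j, base, low1=1, low2=1, n2=20, w=4):
    return base + ((i - low1) * n2 + (j - low2)) * w

checks = [simplified(i, j, 1000) == direct(i, j, 1000)
          for i in range(1, 11) for j in range(1, 21)]
```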
PART-4
Translation of Assignment Statements.
Questions-Answers
Que 3.15. How would you convert the following into intermediate
code ? Give a suitable example.
i. Assignment statements
ii. Case statements AKTU 2016-17, Marks 15
Answer
i. Assignment statements :
S → id := E	{ id_entry := look_up(id.name);
	if id_entry ≠ nil then
	append (id_entry ‘:=’ E.place)
	else error; /* id not declared */
	}
E → E1 + E2	{ E.place := newtemp();
	append (E.place ‘:=’ E1.place ‘+’ E2.place)
	}
E → E1 * E2	{ E.place := newtemp();
	append (E.place ‘:=’ E1.place ‘*’ E2.place)
	}
E → – E1	{ E.place := newtemp();
	append (E.place ‘:=’ ‘minus’ E1.place)
	}
E → id	{ id_entry := look_up(id.name);
	if id_entry ≠ nil then
	E.place := id_entry
	else error; /* id not declared */
	}
1. The look_up returns the entry for id.name in the symbol table if it exists
there.
2. The function append is used for appending the three address code to the
output file. Otherwise, an error will be reported.
3. Newtemp() is the function used for generating new temporary variables.
4. E.place is used to hold the value of E.
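The scheme above can be sketched as a recursive generator in Python (illustrative only; gen plays the role of the E-productions, newtemp of newtemp( ), and appending to output of the append action) :

```python
# Sketch: E.place/newtemp translation of assignment statements.
output, n = [], 0

def newtemp():
    global n
    n += 1
    return f"t{n}"

def gen(expr):
    """expr is an identifier string or a tuple (op, left, right)."""
    if isinstance(expr, str):
        return expr                       # E -> id : E.place := id
    op, left, right = expr
    lp, rp = gen(left), gen(right)
    place = newtemp()                     # E.place := newtemp()
    output.append(f"{place} := {lp} {op} {rp}")
    return place

def assign(ident, expr):                  # S -> id := E
    output.append(f"{ident} := {gen(expr)}")

assign('x', ('*', ('+', 'a', 'b'), ('+', 'c', 'd')))
```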
Example : x := (a + b) * (c + d)
We will assume all these identifiers are of the same type. With a
bottom-up parsing method, the three address code generated is :
t1 := a + b
t2 := c + d
t3 := t1 * t2
x := t3
ii. Case statements :
The general form of a case (switch) statement is :
switch expression
{
case value : statement
case value : statement
...
case value : statement
default : statement
}
Example :
switch(ch)
{
case 1 : c = a + b;
break;
case 2 : c = a – b;
break;
}
The three address code can be
if ch = 1 goto L1
if ch = 2 goto L2
L1 : t1 := a + b
c := t1
goto last
L2 : t2 := a – b
c := t2
goto last
last :
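The lowering of a switch into tests and jumps can be sketched in Python (our own helper, not part of the answer; the labels L1, L2 and last mirror the hand-written code above) :

```python
# Sketch: lowering switch(var) into the test-and-jump form shown above;
# each case ends with "goto last", mirroring break.
def lower_switch(var, cases):
    """cases: list of (value, [statements]); returns three-address lines."""
    lines = [f"if {var} = {value} goto L{k+1}"
             for k, (value, _) in enumerate(cases)]
    for k, (_, stmts) in enumerate(cases):
        lines.append(f"L{k+1}:")
        lines.extend(stmts)
        lines.append("goto last")
    lines.append("last:")
    return lines

tac = lower_switch('ch', [(1, ['t1 := a + b', 'c := t1']),
                          (2, ['t2 := a - b', 'c := t2'])])
```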
Answer
1. Boolean expressions are used along with if-then, if-then-else,
while-do and do-while statement constructs :
2. S → if E then S1 | if E then S1 else S2 | while E do S1 | do S1 while E
3. In all these statements, E corresponds to a boolean expression evaluation.
4. This expression E is converted to three address code.
5. It is then integrated in the context of the control statement.
Translation procedure for if-then and if-then-else statement :
1. Consider a grammar for if-then and if-then-else :
S → if E then S1 | if E then S1 else S2
2. The syntax directed translation scheme for if-then is given as follows :
S → if E then S1
E.true := new_label()
E.false := S.next
S1.next := S.next
if a < b goto E.true
goto E.false
E.true : a := a + 5 /* S1 */
E.false : a := a + 7
PART-5
Boolean Expressions, Statements that alter the Flow of Control.
Questions-Answers
Answer
1. Backpatching is the activity of filling up unspecified information of labels
using appropriate semantic actions during the code generation process.
2. Backpatching refers to the process of resolving forward branches that
have been used in the code, when the value of the target becomes
known.
3. Backpatching is done to overcome the problem of processing the
incomplete information in one pass.
4. Backpatching can be used to generate code for boolean expressions and
flow of control statements in one pass.
To generate code using backpatching following functions are used :
1. Makelist(i) : Makelist is a function which creates a new list from one
item where i is an index into the array of instructions.
2. Merge(p1, p2) : Merge is a function which concatenates the lists pointed
by p1 and p2, and returns a pointer to the concatenated list.
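These helpers, together with the backpatch operation that eventually fills in the targets, can be sketched in Python (an illustrative sketch; instructions are [opcode, target] pairs whose targets start out unfilled) :

```python
# Sketch: makelist/merge/backpatch over an instruction array whose
# forward-jump targets are initially unknown (None).
instructions = []          # each entry: [opcode, target-or-None]

def emit(op):
    instructions.append([op, None])
    return len(instructions) - 1

def makelist(i):
    return [i]             # new list holding one instruction index

def merge(p1, p2):
    return p1 + p2         # concatenation of the two lists

def backpatch(lst, target):
    for i in lst:          # fill every pending jump with its target
        instructions[i][1] = target

i1 = emit('goto')          # forward jumps, targets unknown yet
i2 = emit('goto')
pending = merge(makelist(i1), makelist(i2))
backpatch(pending, 100)    # target becomes known: fill both in
```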
Answer
The translation scheme for boolean expressions can be understood by the
following example.
Consider the boolean expression generated by the following grammar :
E → E OR E
E → E AND E
E → NOT E
E → (E)
E → id relop id
E → TRUE
E → FALSE
Here relop denotes ≤, ≥, =, ≠, < or >. OR and AND are left associative.
NOT has the highest precedence, then AND, and lastly OR.
In the numerical representation, TRUE is denoted by 1 and FALSE by 0,
and the three address code evaluates the boolean expression to such a
numerical value.
PART-6
Postfix Translation : Array References in Arithmetic Expressions.
Questions-Answers
Answer
1. In a production A → α, the translation rule for A.CODE consists of the
concatenation of the CODE translations of the non-terminals in α in the
same order as the non-terminals appear in α.
2. A production can be factored to achieve postfix form.
Postfix translation of while statement :
Production : S → while M1 E do M2 S1
can be factored as :
1. S → C S1
2. C → W E do
3. W → while
A suitable translation scheme is given as :
Answer
Postfix notation : Refer Q. 3.6, Page 3–6C, Unit-3.
Numerical : A syntax directed translation scheme to specify the translation
of an expression into postfix notation is as follows :
Production :	Semantic rule :
E → E1 + T	E.code := E1.code || T.code || ‘+’
E → T	E.code := T.code
T → T1 × F	T.code := T1.code || F.code || ‘×’
T → F	T.code := F.code
F → (E)	F.code := E.code
F → id	F.code := id.code
where the sign ‘||’ is used for concatenation.
PART-7
Procedures Call.
Questions-Answers
Answer
Procedures call :
1. Procedure is an important and frequently used programming construct
for a compiler.
2. It is used to generate code for procedure calls and returns.
3. Queue is used to store the list of parameters in the procedure call.
4. The translation for a call includes a sequence of actions taken on entry
and exit from each procedure. Following actions take place in a calling
sequence :
a. When a procedure call occurs then space is allocated for activation
record.
b. Evaluate the argument of the called procedure.
c. Establish the environment pointers to enable the called procedure
to access data in enclosing blocks.
d. Save the state of the calling procedure so that it can resume execution
after the call.
e. Also save the return address. It is the address of the location to
which the called routine must transfer after it is finished.
f. Finally generate a jump to the beginning of the code for the called
procedure.
Answer
1. An array is a collection of elements of similar data type. Here, we assume
the static allocation of array, whose subscripts ranges from one to some
limit known at compile time.
2. If the width of each array element is w, then the ith element of array A
begins in location
base + (i – low) × w
where low is the lower bound on the subscript and base is the relative
address of the storage allocated for the array, i.e., base is the relative
address of A[low].
3. A two dimensional array is normally stored in one of two forms, either
row-major (row by row) or column-major (column by column).
4. The Fig. 3.22.1 for row-major and column-major are given as :
Row-major order : A[1, 1], A[1, 2], A[1, 3], A[2, 1], A[2, 2], A[2, 3]
Column-major order : A[1, 1], A[2, 1], A[1, 2], A[2, 2], A[1, 3], A[2, 3]
Fig. 3.22.1.
PART-8
Declarations Statements.
Questions-Answers
Answer
In the declarative statements the data items along with their data types are
declared.
For example :
S → D	{ offset := 0 }
D → id : T	{ enter_tab(id.name, T.type, offset);
	offset := offset + T.width }
T → integer	{ T.type := integer;
	T.width := 4 }
T → real	{ T.type := real;
	T.width := 8 }
T → array [num] of T1	{ T.type := array(num.val, T1.type);
	T.width := num.val × T1.width }
T → *T1	{ T.type := pointer(T1.type);
	T.width := 4 }
1. Initially, the value of offset is set to zero. The computation of offset can
be done by using the formula offset = offset + width.
2. In the above translation scheme, T.type, T.width are the synthesized
attributes. The type indicates the data type of corresponding identifier
and width is used to indicate the memory units associated with an
identifier of corresponding type. For instance integer has width 4 and
real has 8.
3. The rule D id : T is a declarative statements for id declaration. The
enter_tab is a function used for creating the symbol table entry for
identifier along with its type and offset.
4. The width of array is obtained by multiplying the width of each element
by number of elements in the array.
5. The width of pointer types is taken to be 4.
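The offset computation in notes 1–4 can be sketched in Python (illustrative only; enter_tab mirrors the action above, with width 4 for integer and 8 for real) :

```python
# Sketch: assigning offsets to declared identifiers, offset := offset + T.width.
WIDTH = {'integer': 4, 'real': 8}
sym, offset = {}, 0

def enter_tab(name, typ):
    global offset
    sym[name] = (typ, offset)        # record type and relative address
    offset += WIDTH[typ]             # advance by the declared width

enter_tab('i', 'integer')
enter_tab('x', 'real')
enter_tab('j', 'integer')
```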
UNIT 4 : Symbol Tables
CONTENTS
Part-1 : Symbol Tables : Data Structure for Symbol Tables ............ 4–2C to 4–7C
Questions-Answers
Answer
1. A symbol table is a data structure used by a compiler to keep track of
scope, life and binding information about names.
2. This information is used in the source program to identify the various
program elements, like variables, constants, procedures, and the labels
of statements.
3. A symbol table must have the following capabilities :
a. Lookup : To determine whether a given name is in the table.
b. Insert : To add a new name (a new entry) to the table.
c. Access : To access the information related with the given name.
d. Modify : To add new information about a known name.
e. Delete : To delete a name or group of names from the table.
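A minimal dict-backed sketch of these five capabilities (illustrative Python; the function names simply mirror the list above) :

```python
# Sketch: lookup / insert / access / modify / delete over a dict-backed table.
table = {}

def insert(name, info):
    table[name] = dict(info)          # add a new entry

def lookup(name):
    return name in table              # is the name in the table?

def access(name):
    return table[name]                # information for a known name

def modify(name, key, value):
    table[name][key] = value          # add new information

def delete(names):
    for name in names:                # remove a group of names
        table.pop(name, None)

insert('count', {'type': 'int', 'scope': 1})
modify('count', 'offset', 0)
```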
Que 4.2. What are the symbol table requirements ? What are the
demerits in the uniform structure of symbol table ?
Answer
The basic requirements of a symbol table are as follows :
1. Structural flexibility : Based on the usage of identifier, the symbol
table entries must contain all the necessary information.
2. Fast lookup/search : The table lookup/search depends on the
implementation of the symbol table and the speed of the search should
be as fast as possible.
3. Efficient utilization of space : The symbol table must be able to
grow or shrink dynamically for an efficient usage of space.
4. Ability to handle language characteristics : The characteristic of
a language such as scoping and implicit declaration needs to be handled.
Demerits in uniform structure of symbol table :
1. The uniform structure cannot handle a name whose length exceeds the
upper bound (limit) of the name field.
2. If the length of a name is small, then the remaining space is wasted.
Que 4.3. How names can be looked up in the symbol table ?
Answer
1. The symbol table is searched (looked up) every time a name is
encountered in the source text.
2. When a new name or new information about an existing name is
discovered, the content of the symbol table changes.
3. Therefore, a symbol table must have an efficient mechanism for accessing
the information held in the table as well as for adding new entries to the
symbol table.
4. In any case, the symbol table is a useful abstraction to aid the compiler
to ascertain and verify the semantics, or meaning of a piece of code.
5. It makes the compiler more efficient, since the file does not need to be
re-parsed to discover previously processed information.
For example : Consider the following outline of a C function :
void scopes ( )
{
int a, b, c; /* level 1 */
.......
{
int a, b; /* level 2 */
....
}
{
float c, d; /* level 3 */
{
int m; /* level 4 */
.....
}
}
}
The symbol table could be represented by an upwards growing stack as :
i. Initially the symbol table is empty.
ii. After the first three declarations, the symbol table will be
c int
b int
a int
iii. After the declarations of level 2, the symbol table will be
b int
a int
c int
b int
a int
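The upwards-growing stack with block scopes can be sketched in Python (illustrative only; marks records where each block's declarations begin) :

```python
# Sketch: a stack of (name, type) pairs with one mark per open block.
stack, marks = [], []

def enter_scope():
    marks.append(len(stack))

def declare(name, typ):
    stack.append((name, typ))

def exit_scope():
    del stack[marks.pop():]          # pop this block's declarations

enter_scope()                        # level 1
for v in ('a', 'b', 'c'):
    declare(v, 'int')
enter_scope()                        # level 2
declare('a', 'int'); declare('b', 'int')
snapshot = list(stack)               # table after the level-2 declarations
exit_scope()                         # leaving level 2 restores level 1
```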
Que 4.4. What is the role of symbol table ? Discuss different data
structures used for symbol table.
OR
Discuss the various data structures used for symbol table with
suitable example.
Answer
Role of symbol table :
1. It keeps the track of semantics of variables.
2. It stores information about scope.
3. It helps to achieve compile time efficiency.
Different data structures used in implementing symbol table are :
1. Unordered list :
a. Simple to implement symbol table.
b. It is implemented as an array or a linked list.
c. Linked list can grow dynamically that eliminate the problem of a
fixed size array.
d. Insertion of a variable takes O(1) time, but lookup is slow for large
tables, i.e., O(n).
2. Ordered list :
a. If an array is sorted, it can be searched using binary search in
O(log2 n) time.
b. Insertion into a sorted array is expensive : it takes O(n) time on
average.
c. An ordered list is useful when the set of names is known in advance,
i.e., a table of reserved words.
3. Search tree :
a. Search tree insertion and lookup are done in logarithmic time.
b. The search tree is kept balanced by using AVL or red-black tree
algorithms.
4. Hash tables and hash functions :
a. A hash function maps each name to a value in a fixed range, called
its hash value, which is used as an index into the hash table.
b. A hash table can be used to minimize the movement of elements in
the symbol table.
c. The hash function helps in uniform distribution of names in the
symbol table.
For example : Consider a part of C program
int x, y;
msg ( );
1. Unordered list :
S. No. Name Type
1 x int
2 msg function
3 y int
2. Ordered list :
Id Name Type
Id1 x int
Id2 y int
Id3 msg function
3. Search tree :
x
msg y
4. Hash table :
[Figure : a hash table whose entries point into a storage table of records (Name1, Data1, Link1 ; Name2, Data2, Link2 ; Name3, Data3, Link3), with the links chaining names that hash to the same bucket.]
Que 4.5. Describe symbol table and its entries. Also, discuss
various data structure used for symbol table.
AKTU 2015-16, Marks 10
Answer
Symbol table : Refer Q. 4.1, Page 4–2C, Unit-4.
Entries in the symbol table are as follows :
1. Variables :
a. Variables are identifiers whose value may change between
executions and during a single execution of a program.
b. They represent the contents of some memory location.
c. The symbol table needs to record both the variable name as well as
its allocated storage space at runtime.
2. Constants :
a. Constants are identifiers that represent a fixed value that can never
be changed.
b. Unlike variables or procedures, no runtime location needs to be
stored for constants.
c. These are typically placed right into the code stream by the compiler
at compilation time.
3. Types (user defined) :
a. A user defined type is combination of one or more existing types.
b. Types are accessed by name and reference a type definition
structure.
4. Classes :
a. Classes are abstract data types which restrict access to its members
and provide convenient language level polymorphism.
b. This includes the location of the default constructor and destructor,
and the address of the virtual function table.
5. Records :
a. Records represent a collection of possibly heterogeneous members
which can be accessed by name.
b. The symbol table probably needs to record each of the record’s
members.
Various data structure used for symbol table : Refer Q. 4.4, Page 4–4C,
Unit-4.
PART-2
Representing Scope Information.
Questions-Answers
Answer
1. Scope information characterizes the declaration of identifiers and the
portions of the program where it is allowed to use each identifier.
2. Different languages have different scopes for declarations. For example,
in FORTRAN, the scope of a name is a single subroutine, whereas in
ALGOL, the scope of a name is the section or procedure in which it is
declared.
3. Thus, the same identifier may be declared several times as distinct
names, with different attributes, and with different intended storage
locations.
4. The symbol table is thus responsible for keeping different declaration
of the same identifier distinct.
5. To make distinction among the declarations, a unique number is
assigned to each program element that in return may have its own
local data.
6. Semantic rules associated with productions that can recognize the
beginning and ending of a subprogram are used to compute the number
of currently active subprograms.
7. There are mainly two semantic rules regarding the scope of an
identifier :
a. Each identifier can only be used within its scope.
b. Two or more identifiers with the same name and of the same kind
cannot be declared within the same lexical scope.
8. The scope declaration of variables, functions, labels and objects within
a program is shown below :
Scope of variables in statement blocks :
{ int x ;
  ...                /* scope of variable x */
  { int y ;
    ...              /* scope of variable y */
  }
  ...
}
Scope of formal arguments of functions :
int mul (int n) {
  ...                /* scope of argument n */
}
Scope of labels :
void jumper ( ) {
  ... goto sim ;
  ...
  sim : ... ;        /* scope of label sim */
  ... goto sim ;
  ...
}
Answer
Difference : Refer Q. 4.8, Page 4–9C, Unit-4.
Access to non-local names in static scope :
1. Static chain is the mechanism to implement non-local names (variable)
access in static scope.
2. A static chain is a chain of static links that connects certain activation
record instances in the stack.
3. The static link, static scope pointer, in an activation record instance for
subprogram A points to one of the activation record instances of A’s
static parent.
4. When a subroutine at nesting level j has a reference to an object declared
in a static parent at the surrounding scope nested at level k, then j-k
static links forms a static chain that is traversed to get to the frame
containing the object.
5. The compiler generates code to make these traversals over frames to
reach non-local names.
For example : Subroutine A is at nesting level 1 and C at nesting level 3.
When C accesses an object of A, 2 static links are traversed to get to A's
frame that contains that object.
[Figure : nesting structure of subprograms A–E and the run-time stack after the calls A → E, E → B, B → D, D → C ; each activation record stores a static link, and C's two-link static chain leads back to A's frame.]
PART-3
Run-Time Administration : Implementation of Simple Stack
Allocation Scheme.
Questions-Answers
Answer
1. Activation record is used to manage the information needed by a single
execution of a procedure.
2. An activation record is pushed into the stack when a procedure is called
and it is popped when the control returns to the caller function.
Format of activation records in stack allocation :
Return value
Actual parameters
Control link
Access link
Saved machine status
Local data
Temporaries
Fields of activation record are :
1. Return value : It is used by the called procedure to return a value to
the calling procedure.
2. Actual parameter : It is used by calling procedures to supply
parameters to the called procedures.
3. Control link : It points to activation record of the caller.
4. Access link : It is used to refer to non-local data held in other activation
records.
5. Saved machine status : It holds the information about status of
machine before the procedure is called.
6. Local data : It holds the data that is local to the execution of the
procedure.
7. Temporaries : It stores the value that arises in the evaluation of an
expression.
Answer
Sub-division of run-time memory into codes and data areas is shown in
Fig. 4.11.1.
Code
Static
Heap
Free Memory
Stack
Fig. 4.11.1.
1. Code : It stores the executable target code, which is of fixed size and
does not change during compilation.
2. Static allocation :
a. The static allocation is for all the data objects at compile time.
b. The size of the data objects is known at compile time.
c. The names of these objects are bound to storage at compile time
only and such an allocation of data objects is done by static allocation.
d. In static allocation, the compiler can determine the amount of storage
required by each data object. Therefore, it becomes easy for the
compiler to find the addresses of these data in the activation record.
e. At compile time, compiler can fill the addresses at which the target
code can find the data on which it operates.
3. Heap allocation : There are two methods used for heap management :
a. Garbage collection method :
i. When all access paths to an object are destroyed but the data
object continues to exist, such an object is said to be garbage.
ii. Garbage collection is a technique which is used to reuse that
object's space.
iii. In garbage collection, all the elements whose garbage collection
bit is ‘on’ are collected and returned to the free space list.
b. Reference counter :
i. Reference counting attempts to reclaim each element of heap
storage immediately after it can no longer be accessed.
ii. Each memory cell on the heap has a reference counter
associated with it that contains a count of number of values
that point to it.
iii. The count is incremented each time a new value point to the
cell and decremented each time a value ceases to point to it.
4. Stack allocation :
a. Stack allocation is used to store data structure called activation
record.
b. The activation records are pushed and popped as activations begins
and ends respectively.
c. Storage for the locals in each call of the procedure is contained in
the activation record for that call. Thus, locals are bound to fresh
storage in each activation, because a new activation record is pushed
onto the stack when a call is made.
d. These values of locals are deleted when the activation ends.
Answer
Run-time storage management is required because :
1. A program needs memory resources to execute instructions.
2. Storage management binds the data objects of the program to storage.
3. It takes care of memory allocation and deallocation while the program is
being executed.
A simple stack allocation scheme is implemented as follows :
1. In stack allocation strategy, the storage is organized as stack. This stack
is also called control stack.
2. As activation begins the activation records are pushed onto the stack
and on completion of this activation the corresponding activation records
can be popped.
3. The locals are stored in the each activation record. Hence, locals are
bound to corresponding activation record on each fresh activation.
4. The data structures can be created dynamically for stack allocation.
Answer
i. Call by name :
1. In call by name, the actual parameters are substituted for formals
in all the places where formals occur in the procedure.
2. It is also referred to as lazy evaluation because a parameter is
evaluated only when needed.
For example :
main ( ) {
int n1 = 10, n2 = 20;
printf("%d %d\n", n1, n2);
swap(n1, n2);
printf("%d %d\n", n1, n2);
}
swap(int c, int d) {
int t;
t = c;
c = d;
d = t;
printf("%d %d\n", c, d);
}
Output : 10 20
20 10
20 10
ii. Call by reference :
1. In call by reference, the location (address) of actual arguments is
passed to formal arguments of the called function. This means by
accessing the addresses of actual arguments we can alter them
within the called function.
2. In call by reference, alteration to actual arguments is possible within
called function; therefore the code must handle arguments carefully
else we get unexpected results.
For example :
#include <stdio.h>
void swapByReference(int*, int*); /* Prototype */
int main() /* Main function */
{
int n1 = 10, n2 = 20;
/* actual arguments will be altered */
swapByReference(&n1, &n2);
printf(“n1: %d, n2: %d\n”, n1, n2);
}
void swapByReference(int *a, int *b)
{
int t;
t = *a; *a = *b; *b = t;
}
Output : n1: 20, n2: 10
PART-4
Storage Allocation in Block Structured Language.
Questions-Answers
Answer
1. Hashing is an important technique used to search the records of symbol
table. This method is superior to list organization.
2. In hashing scheme, a hash table and symbol table are maintained.
3. The hash table consists of k entries, numbered 0 to k – 1. These entries
are basically pointers into the symbol table, pointing to the names of the
symbol table.
4. To determine whether ‘Name’ is in the symbol table, we use a hash
function h such that h(name) yields an integer between 0 and k – 1. We
can search any name by position = h(name).
5. Using this position, we can obtain the exact location of the name in the
symbol table.
6. The hash table and symbol table are shown in Fig. 4.14.1.
[Fig. 4.14.1 : A hash table whose entries point into the symbol table ; each symbol table record holds Name, Info and a hash link chaining names (e.g., sum, i, j, avg) that hash to the same entry.]
PART-5
Error Detection and Recovery : Lexical Phase Errors, Syntactic
Phase Errors, Semantic Errors.
Questions-Answers
Que 4.15. Define error recovery. What are the properties of error
message ? Discuss the goals of error handling.
Answer
Error recovery : Error recovery is an important feature of any compiler,
through which the compiler can process the complete program even if it
has some errors.
Properties of error messages are as follows :
1. Messages should report errors in terms of the original source program rather than some internal representation of it.
2. Error messages should not be complicated.
3. Error messages should be specific and should localize errors at the correct positions.
4. There should be no duplication of error messages, i.e., the same error should not be reported again and again.
Goals of error handling are as follows :
1. Detect the presence of errors and produce “meaningful” diagnostics.
2. To recover quickly enough to be able to detect subsequent errors.
3. Error handling components should not significantly slow down the
compilation of syntactically correct programs.
Que 4.16. What are lexical phase errors, syntactic phase errors
and semantic phase errors ? Explain with suitable example.
AKTU 2015-16, Marks 10
Answer
1. Lexical phase error :
a. A lexical phase error is a sequence of characters that does not match
the pattern of any token, i.e., while scanning the source program, the
compiler may fail to generate a valid token from the source program.
b. Reasons due to which errors are found in the lexical phase are :
i. The addition of an extraneous character.
ii. The removal of a character that should be present.
iii. The replacement of a character with an incorrect character.
iv. The transposition of two characters.
For example :
i. In Fortran, an identifier more than 7 characters long is a
lexical error.
ii. In a Pascal program, occurrence of the characters ~, & and @ is a
lexical error.
2. Syntactic phase errors (syntax error) :
a. Syntactic errors are those errors which occur due to mistakes
made by the programmer during the coding process.
b. Reasons due to which errors are found in the syntactic phase are :
i. Missing semicolon
ii. Unbalanced parentheses and punctuation
For example : Let us consider the following piece of code :
int x;
int y //Syntax error
In this example, the syntax error occurs because of the missing semicolon.
3. Semantic phase errors :
a. Semantic phase errors are those errors which occur in declaration
and scope in a program.
b. Reason due to which errors are found :
i. Undeclared names
ii. Type incompatibilities
iii. Mismatching of actual arguments with the formal arguments.
For example : Let us consider the following piece of code :
scanf("%f%f", a, b);
In this example, a and b constitute a semantic error because scanf
expects the addresses of the variables, i.e., &a and &b.
4. Logical errors :
a. Logical errors are the logical mistakes found in the program
which are not detected by the compiler.
b. In these types of errors, the program is syntactically correct but does
not operate as desired.
For example :
Let us consider the following piece of code :
x = 4;
y = 5;
average = x + y/2;
The given code does not compute the average of x and y because, by
operator precedence, y/2 is evaluated first; the correct expression is (x + y)/2.
Answer
Lexical and syntactic error : Refer Q. 4.16, Page 4–17C, Unit-4.
Various error recovery methods are :
1. Panic mode recovery :
a. This is the simplest method to implement and is used by most
parsing methods.
b. When the parser detects an error, it discards input symbols
one at a time until one of a designated set of synchronizing tokens
is found.
c. Panic mode correction often skips a considerable amount of input
without checking it for additional errors, but it is guaranteed not
to go into an infinite loop.
For example :
Let us consider a piece of code :
a = b + c;
d = e + f;
Using panic mode, the parser skips the remainder of the erroneous
statement a = b + c; without checking it for further errors and
resumes at the next statement.
2. Phrase-level recovery :
a. When the parser detects an error, it may perform local
correction on the remaining input.
b. It may replace a prefix of the remaining input by some string that
allows the parser to continue.
c. A typical local correction would replace a comma by a semicolon,
delete an extraneous semicolon or insert a missing semicolon.
For example :
Let us consider a piece of code
while (x > 0) y = a + b;
In this code, phrase-level recovery makes a local correction by
inserting the missing 'do' (as in Pascal's while ... do) and parsing continues.
Symbol Tables 4–20 C (CS/IT-Sem-5)
More pdf : www.motivationbank.in
3. Error production : If error productions are used by the parser, we can
generate appropriate error messages while parsing continues.
For example :
Let us consider a grammar with the productions
E → +E | –E | *A | /A
A → E
When the parser encounters * A via an error production, it sends an error
message to the user asking whether '*' is intended as a unary operator.
4. Global correction :
a. Global correction is a theoretical concept.
b. This method increases time and space requirement during parsing.
UNIT 5
Code Generation
CONTENTS
Part-1 : Code Generation : Design Issues ............ 5–2C to 5–3C
Questions-Answers
Answer
1. Code generation is the final phase of compiler.
2. It takes as input the Intermediate Representation (IR) produced by the
front end of the compiler, along with relevant symbol table information,
and produces as output a semantically equivalent target program as
shown in Fig. 5.1.1.
PART-2
The Target Language, Address in Target Code.
Questions-Answers
Answer
1. Addresses in the target code show how names in the IR can be converted
into addresses in the target code by looking at code generation for
simple procedure calls and returns using static and stack allocation.
2. An executing target program runs in its own logical address
space, which is partitioned into four code and data
areas :
a. A statically determined area, Code, that holds the executable target
code. The size of the target code can be determined at compile
time.
b. A statically determined area, Static, for holding global constants
and other data generated by the compiler. The size of the global
constants and compiler data can also be determined at compile time.
c. A dynamically managed area, Heap, for holding data objects that
are allocated and freed during program execution. The size of the
heap cannot be determined at compile time.
d. A dynamically managed area, Stack, for holding activation records
as they are created and destroyed during procedure calls and
returns. Like the heap, the size of the stack cannot be determined
at compile time.
PART-3
Basic Blocks and Flow Graphs, Optimization of Basic
Blocks, Code Generator.
Questions-Answers
Answer
The algorithm for construction of basic block is as follows :
Input : A sequence of three address statements.
Output : A list of basic blocks with each three address statements in exactly
one block.
Method :
1. We first determine the set of leaders, the first statement of basic block.
The rules we use are given as :
a. The first statement is a leader.
b. Any statement which is the target of a conditional or unconditional
goto is a leader.
c. Any statement which immediately follows a conditional goto is a
leader.
2. For each leader, construct its basic block, which consists of the leader
and all statements up to, but not including, the next leader (or the end
of the program). Any statement not placed in a block can never be
executed and may now be removed, if desired.
Que 5.4. Explain flow graph with example.
Answer
1. A flow graph is a directed graph in which flow-of-control information is
added to the basic blocks.
2. The nodes of the flow graph are the basic blocks.
3. The block whose leader is the first statement is called the initial block.
4. There is a directed edge from block Bi – 1 to block Bi if Bi immediately
follows Bi – 1 in the given sequence. We say that Bi – 1 is a predecessor
of Bi.
For example : Consider the three address code as :
1. prod := 0
2. i := 1
3. t1 := 4 * i
4. t2 := a[t1] /* computation of a[i] */
5. t3 := 4 * i
6. t4 := b[t3] /* computation of b[i] */
7. t5 := t2 * t4
8. t6 := prod + t5
9. prod := t6
10. t7 := i + 1
11. i := t7
12. if i <= 10 goto (3)
The flow graph for the given code can be drawn as follows :
Block B1 (the initial block) :
    prod := 0
    i := 1
Block B2 :
    t1 := 4 * i
    t2 := a[t1]
    t3 := 4 * i
    t4 := b[t3]
    t5 := t2 * t4
    t6 := prod + t5
    prod := t6
    t7 := i + 1
    i := t7
    if i <= 10 goto (3)
Answer
Different issues in code optimization are :
1. Function preserving transformation : The function preserving
transformations are basically divided into following types :
a. Common sub-expression elimination :
i. A common sub-expression is an expression which has already
been computed and is then used again
in the program.
ii. If the result of the expression has not changed, we eliminate
the repeated computation of the same expression.
For example :
Before common sub-expression elimination :
a = t * 4 – b + c;
........................
........................
m = t * 4 – b + c;
........................
........................
n = t * 4 – b + c;
After common sub-expression elimination :
temp = t * 4 – b + c;
a = temp;
........................
........................
m = temp;
........................
........................
n = temp;
iii. In the given example, the expression t * 4 – b + c occurs several
times, so the repeated computation is eliminated by storing its
value in the variable temp.
b. Dead code elimination :
i. Dead code is code which can be omitted from the program
with no change in the result.
ii. A variable is live only while it is still used later in the
program; otherwise it is dead, because its value can no
longer affect the program.
iii. Dead code is generally not introduced
intentionally by the programmer.
For example :
#define FALSE 0
...
if (FALSE)
{
........................
........................
}
iv. Since FALSE is guaranteed to be zero, the code inside the 'if'
statement will never be executed, so there is no need to generate
code for it : it is dead code.
c. Copy propagation :
i. Copy propagation is the idea of copying the result of one
assignment and using it directly in later statements.
ii. In this technique the value of a variable is substituted, and
computation of an expression can then be done at compile time.
For example :
pi = 3.14;
r = 5;
Area = pi * r * r;
Here at the compilation time the value of pi is replaced by 3.14
and r by 5.
d. Constant folding (compile time evaluation) :
i. Constant folding is the replacement, at compile time, of an
expression by an equivalent constant value.
ii. In constant folding, all operands of an operation are constants,
so the evaluation can be replaced by its result, which is also
a constant.
For example : a = 3.14157/2 can be replaced by a = 1.570785
thereby eliminating a division operation.
2. Algebraic simplification :
a. Peephole optimization is an effective technique for algebraic
simplification.
b. The statements such as
x:=x+0
or x:=x*1
can be eliminated by peephole optimization.
Answer
Transformation :
1. A number of transformations can be applied to a basic block without
changing the set of expressions computed by the block.
2. Transformations help us improve the quality of code and act as an optimizer.
3. There are two important classes of local transformations that can be
applied to a basic block :
a. Structure preserving transformations : They are as follows :
i. Common sub-expression elimination : Refer Q. 5.6,
Page 5–7C, Unit-5.
ii. Dead code elimination : Refer Q. 5.6, Page 5–7C, Unit-5.
iii. Interchange of statements : Suppose we have a block with
the two adjacent statements
temp1 = a + b
temp2 = m + n
Then we can interchange the two statements without affecting
the value of the block if and only if neither 'm' nor 'n' is the
temporary temp1 and neither 'a' nor 'b' is the temporary
temp2. Thus a basic block in normal form allows us to
interchange any pair of statements for which this holds.
b. Algebraic transformation : Refer Q. 5.6, Page 5–7C, Unit-5.
PART-4
Machine Independent Optimizations, Loop Optimization.
Questions-Answers
Answer
Code optimization :
1. The code optimization refers to the techniques used by the compiler to
improve the execution efficiency of generated object code.
2. It involves a complex analysis of the intermediate code and performs
various transformations, but every optimizing transformation must
also preserve the semantics of the program.
Classification of code optimization : (classification chart not fully reproduced)
Answer
a. Basic blocks and flow graph :
1. Since the first statement of a program is a leader,
PROD = 0 is a leader.
2. Fragmented code represented by two blocks is shown below :
B1 :
    PROD = 0
    I = 1
B2 :
    T1 = 4 * I
    T2 = addr(A) – 4
    T3 = T2[T1]
    T4 = addr(B) – 4
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    I = I + 1
    If I <= 20 goto B2
Fig. 5.11.1.
b. Function preserving transformation :
1. Common sub-expression elimination : No block has a sub-expression
that is computed twice, so there is no change in the flow graph.
2. Copy propagation : No instruction in block B2 is a direct
assignment of the form x = y, so there is no change in the flow graph or
basic blocks.
3. Dead code elimination : No instruction in block B2 is dead, so
there is no change in the flow graph or basic blocks.
4. Constant folding : No constant expression is present in the basic
blocks, so there is no change in the flow graph or basic blocks.
Loop optimization :
1. Code motion : In block B2, the values of T2 and T4 are recomputed
every time the loop executes, yet they never change. So we can move these
two instructions outside the loop into block B1, as shown in Fig. 5.11.2.
B1 :
    PROD = 0
    I = 1
    T2 = addr(A) – 4
    T4 = addr(B) – 4
B2 :
    T1 = 4 * I
    T3 = T2[T1]
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    I = I + 1
    If I <= 20 goto B2
Fig. 5.11.2.
2. Induction variables : The variables I and T1 are induction variables
of the loop because every time I changes its value, T1 changes in
lockstep. To remove such variables we use another method, called
reduction in strength.
3. Reduction in strength : The value of I varies from 1 to 20 and the value
of T1 varies over (4, 8, ... , 80). The multiplication T1 = 4 * I moves into
block B1, and block B2 instead uses the addition T1 = T1 + 4.
The final flow graph is :
B1 :
    PROD = 0
    T1 = 4 * I
    T2 = addr(A) – 4
    T4 = addr(B) – 4
B2 :
    T1 = T1 + 4
    T3 = T2[T1]
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    if T1 <= 80 goto B2
Fig. 5.11.3
Que 5.12. Write short notes on the following with the help of
example :
i. Loop unrolling
ii. Loop jamming
iii. Dominators
iv. Viable prefix AKTU 2018-19, Marks 07
Answer
i. Loop unrolling : Refer Q. 5.10, Page 5–11C, Unit-5.
ii. Loop jamming : Refer Q. 5.10, Page 5–11C, Unit-5.
iii. Dominators : Refer Q. 5.5, Page 5–5C, Unit-5.
For example : Consider the flow graph over nodes 1 to 10 shown in
Fig. 5.12.1(a); its dominator tree is shown in Fig. 5.12.1(b).
Fig. 5.12.1.
The initial node, node 1, dominates every node.
Node 2 dominates only itself. Node 3 dominates all nodes except 1 and 2. Node 4
dominates all except 1, 2 and 3.
Nodes 5 and 6 dominate only themselves, since the flow of control can skip
around either by going through the other. Node 7 dominates 7, 8, 9 and 10.
Node 8 dominates 8, 9 and 10.
Nodes 9 and 10 dominate only themselves.
iv. Viable prefix : Viable prefixes are the prefixes of right sentential forms
that can appear on the stack of a shift-reduce parser.
For example :
Let : S → x1x2x3x4
A → x1x2
Let w = x1x2x3
SLR parse trace :
STACK      INPUT
$          x1x2x3$
$x1        x2x3$
$x1x2      x3$
$A         x3$
$Ax3       $
.
.
.
As we see, x1x2x3 will never appear on the stack. So, it is not a viable
prefix.
PART-5
DAG Representation of Basic Blocks.
Questions-Answers
Answer
DAG :
1. The abbreviation DAG stands for Directed Acyclic Graph.
2. DAGs are useful data structure for implementing transformations on
basic blocks.
3. A DAG gives picture of how the value computed by each statement in
the basic block is used in the subsequent statement of the block.
4. Constructing a DAG from three address statement is a good way of
determining common sub-expressions within a block.
5. A DAG for a basic block has following properties :
a. Leaves are labeled by unique identifier, either a variable name or
constants.
b. Interior nodes are labeled by an operator symbol.
c. Nodes are also optionally given a sequence of identifiers for labels.
6. DAGs are used in code optimization; the output of code optimization
is machine code, and machine code uses registers to store the variables
used in the source program.
Advantage of DAG :
1. We automatically detect common sub-expressions with the help of
DAG algorithm.
2. We can determine which identifiers have their values used in the
block.
3. We can determine which statements compute values which could be
used outside the block.
Que 5.14. What is DAG ? How DAG is created from three address
code ? Write algorithm for it and explain it with a relevant example.
Answer
DAG : Refer Q. 5.13, Page 5–17C, Unit-5.
Algorithm :
Input : A basic block.
Output : A DAG with label for each node (identifier).
Method :
1. Create nodes, each with at most two children (left and right).
2. Maintain a linked list of attached identifiers for each node.
3. Maintain, for each identifier, the node with which it is currently associated.
4. node(identifier) denotes the node whose value the identifier holds at the
current point in the DAG construction process; the symbol table stores
this association.
5. If there is a statement of the form x = y op z, then the DAG contains a node
"op" with node(y) as its left child and node(z) as its right child.
For example :
Given expression : a * (b – c) + (b – c) * d
The construction of the DAG alongside the three address code proceeds as follows :
Step 1 : t1 = b – c : create a node '–' with children b and c; attach label t1.
Step 2 : t2 = (b – c) * d : create a node '*' with the existing '–' node and d as children; attach label t2.
Step 3 : t3 = a * (b – c) : create a node '*' with a and the existing '–' node as children; attach label t3.
Step 4 : t4 = a * (b – c) + (b – c) * d : create a node '+' with the two '*' nodes as children; attach label t4.
Que 5.15. How DAG is different from syntax tree ? Construct the
DAG for the following basic blocks :
a := b + c
b := b – d
c := c + d
e=b+c
Answer
DAG v/s Syntax tree :
1. Directed Acyclic Graph is a data structure for transformations on the
basic block. While syntax tree is an abstract representation of the
language constructs.
2. DAG is constructed from three address statement while syntax tree is
constructed directly from the expression.
DAG for the given code is shown in Fig. 5.15.1 : the leaves are b0, c0
and d0 (the initial values); the node '–'(b0, d0) is labeled b, the node
'+'(c0, d0) is labeled c, the node '+'(b0, c0) is labeled a, and the node '+'
over the nodes labeled b and c is labeled e.
Fig. 5.15.1.
(Continuation of a DAG construction for the expression a + a * (b – c) + (b – c) * d :)
Step 3 : t3 = a * (b – c) : create a node '*' with a and the '–' node as children.
Step 4 : t4 = a * (b – c) + (b – c) * d : create a node '+' with the two '*' nodes as children.
Step 5 : t5 = a + a * (b – c) + (b – c) * d : create a node '+' with a and the previous '+' node as children.
Que 5.17. How would you represent the following equation using
DAG ?
a= b*–c+b*–c AKTU 2018-19, Marks 07
Answer
Code representation using DAG of the equation a = b * – c + b * – c :
Step 1 : t1 = – c : create a unary node '–' with child c.
Step 2 : t2 = b * t1 : create a node '*' with children b and the '–' node.
Step 3 : t3 = t2 + t2 : create a node '+' whose two children are both the '*' node (the common sub-expression b * – c is shared).
Step 4 : a = t3 : create an assignment node '=' with children a and the '+' node.
Que 5.18. Give the algorithm for the elimination of local and global
common sub-expressions algorithm with the help of example.
AKTU 2017-18, Marks 10
Answer
Algorithm for elimination of local common sub-expression : DAG
algorithm is used to eliminate local common sub-expression.
DAG : Refer Q. 5.13, Page 5–17C, Unit-5.
Algorithm for elimination of global common sub-expressions :
1. An expression is defined at the point where it is assigned a value and
killed when one of its operands is subsequently assigned a new value.
2. An expression is available at some point p in a flow graph if every path
leading to p contains a prior definition of that expression which is not
subsequently killed.
3. The following sets are used :
a. avail[B] = set of expressions available on entry to block B
b. exit[B] = set of expressions available on exit from B
c. killed[B] = set of expressions killed in B
d. defined[B] = set of expressions defined in B
e. exit[B] = (avail[B] – killed[B]) ∪ defined[B]
Algorithm :
1. First, compute defined and killed sets for each basic block
2. Iteratively compute the avail and exit sets for each block by running the
following algorithm until we get a fixed point:
a. Identify each statement s of the form a = b op c in some block B
such that b op c is available at the entry to B and neither b nor c
is redefined in B prior to s.
b. Follow the flow of control backwards in the graph, passing back to,
but not through, each block that defines b op c. The last computation
of b op c in such a block reaches s.
c. After each computation d = b op c identified in step 2(a), add a
statement t = d to that block (where t is a new temporary).
d. Replace s by a = t.
PART-6
Value Numbers and Algebraic Laws, Global Data Flow Analysis.
Questions-Answers
First flow graph :
d1 : y := 2        B1
d2 : x := y + 2    B2
Second flow graph :
d1 : y := 2        B1
d2 : y := y + 2    B2
d3 : x := y + 2    B3
3. The definition d1 is said to be a reaching definition for block B2 in the
first flow graph. But in the second flow graph, d1 is not a reaching
definition for block B3, because it is killed by definition d2 in block B2.
Que 5.20. Write short notes (any two) :
i. Global data flow analysis
ii. Loop unrolling
iii. Loop jamming
AKTU 2015-16, Marks 15
OR
Write short note on global data analysis.
AKTU 2017-18, Marks 05
Answer
i. Global data flow analysis :
1. Global data flow analysis collects information about the entire
program and distributes it to each block in the flow graph.
2. Data flow information can be collected for the various blocks by setting
up and solving a system of equations.
3. A data flow equation is given as :
OUT(s) = (IN(s) – KILL(s)) ∪ GEN(s)
OUT(s) : Definitions that reach the exit of block B.
GEN(s) : Definitions within block B that reach the end of B.
IN(s) : Definitions that reach the entry of block B.
KILL(s) : Definitions that never reach the end of block B.
ii. Loop unrolling : Refer Q. 5.10, Page 5–11C, Unit-5.
iii. Loop fusion or loop jamming : Refer Q. 5.10, Page 5–11C, Unit-5.
Answer
Role of macros in a programming language :
1. Macros are used to define words that are used frequently in a program.
2. They automate complex tasks.
3. They help reduce the use of complex statements in a program.
4. They can make the program run faster.
UNIT 1
Introduction to Compiler
(2 Marks Questions)
Compiler vs Interpreter :
1. A compiler scans all the lines of the source program and lists all syntax errors at once. An interpreter scans one line at a time; if there is any syntax error, execution of the program terminates immediately.
2. Object code produced by a compiler is saved in a file, so the file need not be compiled again and again. Machine code produced by an interpreter is not saved in any file, so the file must be interpreted each time.
3. Compiled code takes less time to execute. Interpreted code takes more time to execute.
Fig. 1 : (DFA diagram with states q0, q1, q2, q3 over {a, b}; not fully reproduced.)
Regular expression for the above DFA :
(aa + bb + (ab + ba)(aa + bb)* (ab + ba))*
UNIT 2
Basic Parsing Techniques
(2 Marks Questions)
UNIT 3
Syntax Directed Translation
(2 Marks Questions)
3.8. What is a syntax tree ? Draw the syntax tree for the
following statement : c b c b a – * + – * =
AKTU 2016-17, Marks 02
Ans.
1. A syntax tree is a tree that shows the syntactic structure of a
program while omitting irrelevant details present in a parse tree.
2. Syntax tree is condensed form of the parse tree.
Syntax tree of c b c b a – * + – * = :
In the given statement, the number of operands is less than the number
the operators require, so the syntax tree drawn will be incomplete.
=
c *
b –
c +
b *
a –
Fig. 3.8.1.
UNIT 4
Symbol Tables
(2 Marks Questions)
UNIT 5
Code Generation
(2 Marks Questions)
5.1. What do you mean by code optimization ?
Ans. Code optimization refers to the technique used by the compiler to
improve the execution efficiency of the generated object code.
Fig. 5.9.1 : (Flowchart : i = 0; sum = 0; while (i <= 10) { sum = sum + i; i = i + 1; } then stop.)
5.10. What is the use of algebraic identities in optimization of
basic blocks ? AKTU 2016-17, Marks 02
Ans. Uses of algebraic identities in optimization of basic blocks
are :
1. The algebraic transformation can be obtained using the strength
reduction technique.
2. The constant folding technique can be applied to achieve the
algebraic transformations.
3. Common sub-expression elimination and the use of associativity
and commutativity apply algebraic transformations on basic
blocks.
B. Tech.
(SEM. VI) EVEN SEMESTER THEORY
EXAMINATION, 2015-16
COMPILER DESIGN
Time : 3 Hours Max. Marks : 100
Section - A
g. Define DAG.
Section - B
2. Attempt any five question from this section. (10 × 5 = 50)
a. Construct an SLR(1) parsing table for the following
grammar :
S → A)
S → A, P | (P, P
P → {num, num}
SOLUTION OF PAPER (2015-16)
Compiler vs Interpreter :
1. A compiler scans all the lines of the source program and lists all syntax errors at once. An interpreter scans one line at a time; if there is any syntax error, execution of the program terminates immediately.
2. Object code produced by a compiler is saved in a file, so the file need not be compiled again and again. Machine code produced by an interpreter is not saved in any file, so the file must be interpreted each time.
3. Compiled code takes less time to execute. Interpreted code takes more time to execute.
f. How YACC can be used to generate parser ?
Ans.
1. YACC is a tool which will produce a parser for a given grammar.
2. YACC is a program designed to compile a LALR(1) grammar and
produce the source code. Hence, it is used to generate a parser.
g. Define DAG.
Ans.
1. The abbreviation DAG stands for Directed Acyclic Graph.
2. DAGs are useful data structure for implementing transformations
on basic blocks.
3. A DAG gives picture of how the value computed by each statement
in the basic block is used in the subsequent statement of the
block.
Section - B
2. Attempt any five question from this section. (10 × 5 = 50)
a. Construct an SLR(1) parsing table for the following
grammar :
S → A)
S → A, P | (P, P
P → {num, num}
Ans. The augmented grammar G′ for the above grammar G is :
S′ → S
S → A)
S → A, P
S → (P, P
P → {num, num}
The canonical collection of sets of LR(0) items for the grammar is as
follows :
I0 : S′ → • S
S → • A)
S → • A, P
S → • (P, P
P → • {num, num}
I1 = GOTO (I0, S)
I1 : S′ → S •
I2 = GOTO (I0, A)
I2 : S → A •)
S → A •, P
I3 = GOTO (I0, ( )
I3 : S → ( • P, P
P → • {num, num}
I4 = GOTO (I0, { )
I4 : P → { • num, num}
I5 = GOTO (I2, ))
I5 : S → A ) •
I6 = GOTO (I2, ,)
I6 : S → A, • P
P → • {num, num}
I7 = GOTO (I3, P)
I7 : S → ( P •, P
I8 = GOTO (I4, num)
I8 : P → {num •, num}
I9 = GOTO (I6, P)
I9 : S → A, P •
I10 = GOTO (I7, ,)
I10 : S → (P, • P
P → • {num, num}
I11 = GOTO (I8, ,)
I11 : P → {num, • num}
I12 = GOTO (I10, P)
I12 : S → (P, P •
I13 = GOTO (I11, num)
I13 : P → {num, num •}
I14 = GOTO (I13, })
I14 : P → {num, num} •
Action Goto
Item ) , ( { Num } $ S A P
Set
0 S3 S4 1 2
1 accept
2 S5 S6
3 S4 6
4 S8
5 r1
6 S4 r2 9
7 S10
8 S11
9 r2
10 S4 12
11 S13
12 r3
13 S14
14 r4 r4
a ( ) ; $
A > > >
( < < = <
) > > >
; < < > >
$ < <
Fig. 1 : (Precedence graph with nodes f(, g(, f), g), f;, g;, f$, g$; diagram not fully reproduced.)
From the precedence graph, the precedence function using
algorithm calculated as follows :
        a  (  )  ;  $
f       1  0  2  2  0
g       3  3  0  1  0
Fig. 2 : (LR automaton with item sets I0, I1, I2, I36, I47, I5, I89 and transitions on S, A, a, b; diagram not fully reproduced.)
Since the table does not have any conflict, the grammar is LR(1).
For the LALR(1) table, item set 5 and item set 9 are the same. Thus we
merge the two item sets into I59 = (I5, I9). Now the resultant
parsing table becomes :
Table 4.
State Action Goto
a b c d $ S A B
0 S3 S59 1 2 4
1 accept
2 S6
3 S59 7 8
4 S10
59 r59, r6 r6, r59
6 r1
7 S11
8 S12
10 r3
11 r2
12 r4
Fig. 3 : (DAG for the block : leaves b0, c0, d0; the shared node b + c is labeled both a and e.)
1. The two occurrences of the sub-expression b + c compute the same
value.
2. The values computed by a and e are the same.
Applications of DAG :
1. Scheduling : Directed acyclic graphs representations of partial
orderings have many applications in scheduling for systems of tasks.
2. Data processing networks : A directed acyclic graph may be
used to represent a network of processing elements.
3. Data compression : Directed acyclic graphs may also be used as a
compact representation of a collection of sequences. In this type of
application, one finds a DAG in which the paths form the sequences.
4. It helps in finding statements that can be reordered.
B2 :
    T1 = 4 * I
    T2 = addr(A) – 4
    T3 = T2[T1]
    T4 = addr(B) – 4
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    I = I + 1
    If I <= 20 goto B2
Fig. 4.
Loop optimization :
1. Code motion : In block B2, the values of T2 and T4 are recomputed
every time the loop executes, yet they never change. So we can move
these two instructions outside the loop into block B1, as shown
in Fig. 5.
B1 :
    PROD = 0
    I = 1
    T2 = addr(A) – 4
    T4 = addr(B) – 4
B2 :
    T1 = 4 * I
    T3 = T2[T1]
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    I = I + 1
    If I <= 20 goto B2
Fig. 5.
2. Induction variables : The variables I and T1 are induction
variables of the loop because every time I changes its
value, T1 changes in lockstep. To remove such variables we use
another method, called reduction in strength.
3. Reduction in strength : The value of I varies from 1 to 20 and the
value of T1 varies over (4, 8, ... , 80).
Block B2 now uses the addition T1 = T1 + 4, with the multiplication
T1 = 4 * I moved into block B1.
Now the final flow graph is given as :
B1 :
    PROD = 0
    T1 = 4 * I
    T2 = addr(A) – 4
    T4 = addr(B) – 4
B2 :
    T1 = T1 + 4
    T3 = T2[T1]
    T5 = T4[T1]
    T6 = T3 * T5
    PROD = PROD + T6
    if T1 <= 80 goto B2
Fig. 6.
5. Write short notes on :
i. Global data flow analysis
ii. Loop unrolling
iii. Loop jamming
Ans.
i. Global data flow analysis :
1. Global data flow analysis collects information about the entire
program and distributes it to each block in the flow graph.
2. Data flow information can be collected for the various blocks by
setting up and solving a system of equations.
3. A data flow equation is given as :
OUT(s) = (IN(s) – KILL(s)) ∪ GEN(s)
OUT(s) : Definitions that reach the exit of block B.
GEN(s) : Definitions within block B that reach the end of B.
IN(s) : Definitions that reach the entry of block B.
KILL(s) : Definitions that never reach the end of block B.
ii. Loop unrolling : In this method, the number of jumps and tests can be reduced by replicating the loop body.
Solved Paper (2015-16) SP–22 C (CS/IT-Sem-5)
For example :
int i = 1;
while (i <= 100)
{
    a[i] = b[i];
    i++;
}
can be written as
int i = 1;
while (i <= 100)
{
    a[i] = b[i];
    i++;
    a[i] = b[i];
    i++;
}
iii. Loop fusion or loop jamming : In the loop fusion method, several loops are merged into one loop.
For example :
for i := 1 to n do
    for j := 1 to m do
        a[i, j] := 10
can be written as
for i := 1 to n*m do
    a[i] := 10
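A minimal Python check of loop fusion over a single shared index range. Note this shows the plain jamming case; the Pascal example above additionally collapses a nested loop, which assumes the 2-D array is laid out contiguously.

```python
# Loop fusion (jamming): two loops over the same range are merged,
# so the loop control runs once instead of twice.

n = 100

# Before fusion: two separate loops.
a = [0] * n
b = [0] * n
for i in range(n):
    a[i] = 10
for i in range(n):
    b[i] = a[i] + 5

# After fusion: one loop executes both bodies per iteration.
a2 = [0] * n
b2 = [0] * n
for i in range(n):
    a2[i] = 10
    b2[i] = a2[i] + 5

assert a == a2 and b == b2
```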
B. Tech.
(SEM. VI) EVEN SEMESTER THEORY
EXAMINATION, 2016-17
COMPILER DESIGN
Time : 3 Hours Max. Marks : 100
Section-C
SOLUTION OF PAPER (2016-17)
Fig. 1 : DFA over {a, b} (diagram lost in extraction).
Regular expression for above DFA :
(aa + bb + (ab + aa)(aa + bb)* (ab + ba))*
Fig. 3 : Flowchart (i = 0; sum = 0; while i <= 10 { sum = sum + i; i = i + 1 }; stop).
Fig. 4 : Phases of a compiler :
Source program → Lexical analyzer → Syntax analyzer → Semantic analyzer → Code optimizer → Code generator → Target program
v. Phase 5 (Code optimization) : This phase is designed to improve
the intermediate code so that the ultimate object program runs
faster and takes less space.
vi. Phase 6 (Code generation) :
a. It is the final phase of the compiler.
b. It generates the assembly code of the target language.
c. In this phase, logical addresses are translated into the addresses used in the binary code.
Symbol table / table management : A symbol table is a data structure containing a record for each identifier; it allows us to find the record for an identifier quickly and to store or retrieve data from that record quickly.
Error handler : The error handler is invoked when a flaw in the
source program is detected.
Compilation of “a = (b + c)*(b + c)*2” :
Lexical analyzer output (token stream) :
id1 = (id2 + id3) * (id2 + id3) * 2
Syntax analyzer and semantic analyzer : a syntax tree is built for the expression and annotated; the constant 2 is converted with int_to_real. (Tree diagrams lost in extraction.)
Intermediate code generation :
t1 = b + c
t2 = t1 * t1
t3 = int_to_real(2)
t4 = t2 * t3
id1 = t4
Code optimization (optimized code) :
t1 = b + c
t2 = t1 * t1
id1 = t2 * 2
Machine code :
MOV R1, b
ADD R1, R1, c
MUL R2, R1, R1
MUL R2, R2, # 2.0
ST id1, R2
NFA construction for the regular expression (diagrams lost in extraction) : an NFA with states q1, ..., q5, qf is built, ε-transitions are then removed, and since ε can be neglected, q1 = q5 = q2.
Now, we convert above NFA into DFA :
Transition table for NFA :
δ/        0            1
q1     {q1, q3}     {q1, q3}
q3        –           {q4}
q4       {qf}          –
*qf       –            –
Transition table for DFA :
δ/               0               1               Let
{q1}          {q1, q3}        {q1, q3}          {q1} as A
{q1, q3}      {q1, q3}      {q1, q3, q4}        {q1, q3} as B
{q1, q3, q4}  {q1, q3, qf}  {q1, q3, q4}        {q1, q3, q4} as C
*{q1, q3, qf} {q1, q3}      {q1, q3, q4}        {q1, q3, qf} as D
Renamed :
δ/    0    1
A     B    B
B     B    C
C     D    C
*D    B    C
Thompson construction (diagrams lost in extraction) : NFAs are built step by step for b*, b, ab*, ab, and finally ab*|ab, using states 1–10.
ii. After the first three declarations, the symbol table will be
c int
b int
a int
iii. After the second set of declarations, at Level 2 :
b int
a int
c int
b int
a int
Fig. 6 : Run-time memory organization, with code and static data at the bottom, the heap growing upward, free memory in between, and the stack growing downward.
1. Code : It stores the executable target code, which is of fixed size and does not change during compilation.
2. Static allocation :
a. The static allocation is for all the data objects at compile time.
b. The size of the data objects is known at compile time.
c. The names of these objects are bound to storage at compile time
only and such an allocation of data objects is done by static allocation.
d. In static allocation, the compiler can determine amount of storage
required by each data object. Therefore, it becomes easy for a
compiler to find the address of these data in the activation record.
e. At compile time, compiler can fill the addresses at which the target
code can find the data on which it operates.
3. Heap allocation : There are two methods used for heap
management :
a. Garbage collection method :
i. When all access paths to an object are destroyed but the data object continues to exist, such an object is said to be garbage.
ii. Garbage collection is a technique used to reclaim and reuse that object space.
iii. In garbage collection, all the elements whose garbage-collection bit is ‘on’ are collected and returned to the free-space list.
b. Reference counter :
i. Reference counting attempts to reclaim each element of heap storage immediately after it can no longer be accessed.
ii. Each memory cell on the heap has a reference counter associated with it that holds a count of the number of values pointing to it.
iii. The count is incremented each time a new value points to the cell and decremented each time a value ceases to point to it.
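A reference-counting sketch in Python; Cell, retain, release and the free list are illustrative names, not a real allocator:

```python
# Each heap cell keeps a count of the values pointing to it;
# when the count drops to zero the cell is reclaimed immediately.

free_list = []

class Cell:
    def __init__(self, value):
        self.value = value
        self.refcount = 0

def retain(cell):
    cell.refcount += 1          # a new value points to the cell

def release(cell):
    cell.refcount -= 1          # a value ceases to point to the cell
    if cell.refcount == 0:
        free_list.append(cell)  # no more access paths: reclaim at once

c = Cell(42)
retain(c)
retain(c)
release(c)
assert free_list == []          # one live pointer remains
release(c)
assert free_list == [c]         # count reached zero: cell reclaimed
```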
4. Stack allocation :
a. Stack allocation is used to store a data structure called the activation record.
b. The activation records are pushed and popped as activations begin and end respectively.
c. Storage for the locals in each call of the procedure is contained in the activation record for that call. Thus, locals are bound to fresh storage in each activation, because a new activation record is pushed onto the stack when a call is made.
d. These values of locals are deleted when the activation ends.
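The push/pop discipline of activation records can be sketched as follows; the record layout (a dict of locals) is illustrative only:

```python
# Stack allocation: an activation record is pushed on each call and
# popped on return, so locals get fresh storage in every activation.

stack = []

def enter(proc, locals_):
    stack.append({"proc": proc, "locals": dict(locals_)})  # push on call

def leave():
    return stack.pop()          # pop on return: the locals disappear

enter("main", {"x": 1})
enter("f", {"i": 10})
enter("f", {"i": 20})           # a second activation of f gets fresh locals
assert len(stack) == 3
assert stack[-1]["locals"]["i"] == 20
leave()
assert stack[-1]["locals"]["i"] == 10   # outer activation's locals intact
```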
Section-C
Fig. 7 : DFA for the set of LR items, with states I0–I8 and transitions on S, A, a and b (diagram lost in extraction).
Table 2 : Parse of the input abab using the parse table (table lost in extraction).
S → id := E  { id_entry := look_up(id.name);
               if id_entry ≠ nil then
                   append(id_entry ‘:=’ E.place)
               else error; /* id not declared */
             }
E → E1 + E2  { E.place := newtemp();
               append(E.place ‘:=’ E1.place ‘+’ E2.place)
             }
E → E1 * E2  { E.place := newtemp();
               append(E.place ‘:=’ E1.place ‘*’ E2.place)
             }
E → – E1     { E.place := newtemp();
               append(E.place ‘:=’ ‘minus’ E1.place)
             }
E → id       { id_entry := look_up(id.name);
               if id_entry ≠ nil then
                   E.place := id_entry
               else error; /* id not declared */
             }
1. look_up returns the entry for id.name in the symbol table if it exists there; otherwise an error is reported.
2. The function append appends the three-address code to the output file.
3. Newtemp() is the function used for generating new temporary
variables.
4. E.place is used to hold the value of E.
Example : x := (a + b)*(c + d)
We will assume all these identifiers are of the same type. Let us use a bottom-up parsing method :
Production rule     Semantic action (attribute evaluation)     Output
E → id              E.place := a
E → id              E.place := b
E → E1 + E2         E.place := t1                              t1 := a + b
E → id              E.place := c
E → id              E.place := d
E → E1 + E2         E.place := t2                              t2 := c + d
E → E1 * E2         E.place := t3                              t3 := (a + b)*(c + d)
S → id := E                                                    x := t3
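The attribute evaluation above can be sketched in Python. The helpers newtemp and append mirror the functions of the translation scheme; the driver calls stand in for the reductions a bottom-up parser would perform:

```python
# Bottom-up attribute evaluation for x := (a + b) * (c + d).

code = []          # the emitted three-address code
_count = 0

def newtemp():
    global _count
    _count += 1
    return "t%d" % _count       # fresh temporary name

def append(stmt):
    code.append(stmt)

def reduce_binop(op, p1, p2):   # semantic action for E -> E1 op E2
    place = newtemp()           # E.place := newtemp()
    append("%s := %s %s %s" % (place, p1, op, p2))
    return place

t1 = reduce_binop("+", "a", "b")
t2 = reduce_binop("+", "c", "d")
t3 = reduce_binop("*", t1, t2)
append("x := %s" % t3)          # semantic action for S -> id := E

assert code == ["t1 := a + b", "t2 := c + d",
                "t3 := t1 * t2", "x := t3"]
```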
Step 3 : t3 = a * (b – c)
Step 4 : t4 = a * (b – c) + (b – c) * d
Step 5 : t5 = a + a * (b – c) + (b – c) * d
(The tree diagrams for each step were lost in extraction.)
B. Tech.
(SEM. VI) EVEN SEMESTER THEORY
EXAMINATION, 2017-18
COMPILER DESIGN
Time : 3 Hours Max. Marks : 100
Note : 1. Attempt all Sections. If require any missing data; then choose
suitably.
2. Any special paper specific instruction.
SECTION-A
SECTION-B
SECTION-C
SOLUTION OF PAPER (2017-18)
Note : 1. Attempt all Sections. If require any missing data; then choose
suitably.
2. Any special paper specific instruction.
SECTION-A
SECTION-B
Steps 2 and 3 : NFA construction over states q1–q6 with transitions on a and b (diagrams lost in extraction).
Removing left recursion non-terminal by non-terminal (the intermediate substitution steps were garbled in extraction) gives :
S → aSBS′ | bBS′,   S′ → ASBS′ | ε
B → bBA′B′ | aB′,   B′ → ABBA′B′ | ε
A → aABA′ | aA′,    A′ → BAABA′ | ε
The productions after left-recursion elimination are :
S → aSBS′ | bBS′
S′ → ASBS′ | ε
A → aABA′ | aA′
A′ → BAABA′ | ε
B → bBA′B′ | aB′
B′ → ABBA′B′ | ε
Model of a predictive parser (diagram lost in extraction) : an input buffer holding a + b $, a stack holding X Y Z $, the predictive parsing program, the parsing table, and the output.
F → F* | a | b
After removing left recursion :
F → aF′ | bF′
F′ → *F′ | ε
FIRST(E) = FIRST(T) = FIRST(F) = {a, b}
FIRST(E′) = {+, ε}, FIRST(F′) = {*, ε}
FIRST(T′) = {*, ε}
FOLLOW(E) = { $ }
FOLLOW(E′) = { $ }
FOLLOW(T) = { +, $ }
FOLLOW(T′) = { +, $ }
FOLLOW(F) = { *, +, $ }
FOLLOW(F′) = { *, +, $ }
Predictive parsing table :
Non-terminal      +           *           a          b          $
E                                      E → TE′    E → TE′
E′            E′ → +TE′                                      E′ → ε
T                                      T → FT′    T → FT′
T′            T′ → ε     T′ → *FT′                           T′ → ε
F                                      F → aF′    F → bF′
F′            F′ → ε     F′ → *F′                            F′ → ε
SECTION-C
No : when the LR(1) states with identical cores are merged, conflicts arise, so an LALR table cannot be constructed from this LR(1) parsing table.
Action Goto
State id + * ( ) $ E
0 S3 S2 1
1 S4 S5 accept
2 S3 S2 6
3 r4 r4 r4 r4
4 S3 S2 8
5 S3 S2 8
6 S4 S5 S3
7 r1 S5 r1 r1
8 r2 r2 r2 r2
9 r3 r3 r3 r3
B2 :  T1 = 4 * I
      T2 = addr(A) – 4
      T3 = T2[T1]
      T4 = addr(B) – 4
      T5 = T4[T1]
      T6 = T3 * T5
      PROD = PROD + T6
      I = I + 1
      if I <= 20 goto B2
Fig. 2.
b. Function-preserving transformations :
1. Common sub-expression elimination : No block contains a sub-expression that is computed twice, so there is no change in the flow graph.
2. Copy propagation : No instruction in block B2 is a direct assignment of the form x = y, so there is no change in the flow graph or basic block.
3. Dead code elimination : No instruction in block B2 is dead, so there is no change in the flow graph or basic block.
4. Constant folding : No constant expression is present in the basic block, so there is no change in the flow graph or basic block.
B. Tech.
(SEM. VI) EVEN SEMESTER THEORY
EXAMINATION, 2018-19
COMPILER DESIGN
Time : 3 Hours Max. Marks : 100
Note : 1. Attempt all Sections. If require any missing data; then choose
suitably.
SECTION-A
SECTION-B
SECTION-C
Fig. 1 : ε-NFA over states q0, q1, q2 with transitions on a, b, c and ε (diagram lost in extraction; its transition table is given in the solution).
SOLUTION OF PAPER (2018-19)
Note : 1. Attempt all Sections. If require any missing data; then choose
suitably.
SECTION-A
SECTION-B
Fig. 1 : With a left-recursive production, a top-down parser keeps expanding the leftmost A, growing the subtree down the left edge without consuming input (diagram lost in extraction).
e. This causes a major problem in top-down parsing, and therefore elimination of left recursion is a must.
3. Left factoring :
a. Left factoring is needed when it is not clear which of two alternatives should be used to expand a non-terminal.
b. If the grammar is not left factored, then it becomes difficult for the parser to make decisions.
Algorithm for FIRST and FOLLOW :
1. FIRST function :
i. FIRST(X) is the set of terminal symbols that appear as the first symbols of strings derived from X.
ii. The following rules are used to compute the FIRST function :
a. If X is a terminal symbol a, then FIRST(X) = {a}.
b. If there is a rule X → ε, then FIRST(X) contains {ε}.
c. If X is a non-terminal and X → Y1 Y2 Y3 ... Yk is a production, and ε is in all of FIRST(Y1), ..., FIRST(Yk), then
FIRST(X) = FIRST(Y1) ∪ FIRST(Y2) ∪ FIRST(Y3) ∪ ... ∪ FIRST(Yk).
2. FOLLOW function :
i. FOLLOW(A) is defined as the set of terminal symbols that appear immediately to the right of A in some sentential form.
ii. FOLLOW(A) = {a | S ⇒* αAaβ, where α and β are strings of grammar symbols, terminal or non-terminal}.
iii. The rules for computing the FOLLOW function are as follows :
a. For the start symbol S, place $ in FOLLOW(S).
b. If there is a production A → αBβ, then everything in FIRST(β) except ε is to be placed in FOLLOW(B).
c. If there is a production A → αB, or A → αBβ where FIRST(β) contains ε, then FOLLOW(B) contains FOLLOW(A). That means everything in FOLLOW(A) is in FOLLOW(B).
B1 : d1 : y := 2
B2 : d2 : y := y + 2
B3 : d3 : x := y + 2
(Flow-graph diagram lost in extraction.)
States a b c $ A B S
I0 S3 S4 S2 1
I1 Accept
I2 S3 S4 S7 6 5
I3 r4 r4 r4 r4
I4 r6 r6 r6 r6
I5 r1 r1 r1 r1
I6 r3 r3 r3 r3
I7 S3 S4 S10 8 9
I9 r5 r5 r5 r5
I10 S3 S4 S10 11
I11 r3 r3 r3 r3
SECTION-C
Fig. 3 : ε-NFA over states q0, q1, q2 with transitions on a, b, c and ε (diagram lost in extraction; its transition table follows).
Ans. Transition table for -NFA :
/ a b c
q0 q1 q2 {q1, q2}
q1 q0 q2 {q0, q2}
q2
-closure of {q0} = {q0, q1, q2}
-closure of {q1} = {q1}
-closure of {q2} = {q2}
Transition table for NFA :
/ a b c
{q0, q1, q2} {q0, q1, q2} {q1, q2} {q0, q1, q2}
{q1, q2} {q0, q1, q2} {q2} {q0, q1, q2}
{q2}
Let {q0, q1, q2} = A
{q1, q2} = B
{q2} = C
Transition table for DFA :
δ/    a    b    c
A     A    B    A
B     A    C    A
C     D    D    D
D     D    D    D
Here D is a dead state with self-loops on a, b and c (Fig. 4; diagram lost in extraction).
Step 2 : t2 = b * t1
Step 3 : t3 = t2 + t2
Step 4 : t4 = a (the assignment node)
(The DAG diagrams for each step were lost in extraction.)
Call sequence : A calls E, E calls B, B calls D, D calls C; each activation record on the stack carries a static link. (Stack diagram lost in extraction.)
iii. Dominators :
a. In control flow graphs, a node d dominates a node n if every path
from the entry node to n must go through d. This is denoted as d
dom n.
b. By definition, every node dominates itself.
c. A node d strictly dominates a node n if d dominates n and d is not
equal to n.
d. The immediate dominator (or idom) of a node n is the unique node
that strictly dominates n but does not strictly dominate any other
node that strictly dominates n. Every node, except the entry node,
has an immediate dominator.
e. A dominator tree is a tree where each node’s children are those
nodes it immediately dominates. Because the immediate dominator
is unique, it is a tree. The start node is the root of the tree.
For example : consider a flow graph with nodes 1–10 and its dominator tree, Fig. 5 (a) and (b). (Diagrams lost in extraction.)
The initial node, node 1, dominates every node. Node 2 dominates only itself. Node 3 dominates all but 1 and 2. Node 4 dominates all but 1, 2 and 3. Nodes 5 and 6 dominate only themselves, since the flow of control can skip around either by going through the other. Node 7 dominates 7, 8, 9 and 10. Node 8 dominates 8, 9 and 10. Nodes 9 and 10 dominate only themselves.
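Dominator sets can be computed with the standard iterative data-flow sketch below. The 4-node diamond graph is a hypothetical example, smaller than the 10-node graph of Fig. 5:

```python
# Iterative dominator computation:
# Dom(n) = {n} U (intersection of Dom(p) over all predecessors p of n)

def dominators(preds, entry):
    nodes = set(preds)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates n"
    dom[entry] = {entry}                   # the entry is dominated only by itself
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Diamond: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4
preds = {1: [], 2: [1], 3: [1], 4: [2, 3]}
dom = dominators(preds, 1)
assert dom[4] == {1, 4}   # neither 2 nor 3 dominates 4: control can skip around either
assert dom[2] == {1, 2}
```

The first assertion mirrors the observation above about nodes 5 and 6: when two nodes sit on parallel paths, each dominates only itself.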
iv. Viable prefix : Viable prefixes are the prefixes of right sentential
forms that can appear on the stack of a shift-reduce parser.
For example :
Let : S → x1 x2 x3 x4
      A → x1 x2
Let w = x1 x2 x3
SLR parse trace :
STACK            INPUT
$                x1 x2 x3 $
$ x1             x2 x3 $
$ x1 x2          x3 $
$ A              x3 $
$ A x3           $
...
As we see, x1x2x3 will never appear on the stack. So, it is not a viable
prefix.
if a < b goto E.true
E.true :  a := a + 5  (S1)
E.false : a := a + 7
(Flow diagram lost in extraction.)
Switch statement :
switch expression
{
case value : statement
case value : statement
...
case value : statement
default : statement
}
Example :
switch(ch)
{
case 1 : c = a + b;
break;
case 2 : c = a – b;
break;
}
The three address code can be
if ch = 1 goto L1
if ch = 2 goto L2
goto last
L1 : t1 := a + b
     c := t1
     goto last
L2 : t2 := a – b
     c := t2
     goto last
last :
Fig. 7 : Annotated parse tree for (4 * 7 + 1) * 2, with id.lexval values 4, 7, 1 and 2, T.val = 28 for 4 * 7, E.val = 29 inside the parentheses, and F.val = 2 for the last factor. (Tree diagram lost in extraction.)
Fig. 8 : Pre-header. A new pre-header block is introduced before the loop header B0; edges that formerly entered the header from outside the loop enter the pre-header instead. (Diagram lost in extraction.)
4. Reducible flow graph :
a. A flow graph G is reducible if and only if we can partition its edges into two disjoint groups, i.e., forward edges and back edges.
b. These edges have the following properties :
i. The forward edges form an acyclic graph.
ii. The back edges are edges whose heads dominate their tails.
c. A program structure that makes exclusive use of structured statements such as if-then and while-do, with no arbitrary goto, generates a flow graph that is always reducible.
Loop optimization is a process of decreasing the execution time and reducing the overhead associated with loops.
The loop optimization is carried out by following methods :
1. Code motion :
a. Code motion is a technique which moves the code outside
the loop.
b. If some expression in the loop whose result remains unchanged
even after executing the loop for several times, then such
an expression should be placed just before the loop (i.e.,
outside the loop).
c. Code motion is done to reduce the execution time of the
program.
2. Induction variables :
a. A variable x is called an induction variable of loop L if its value changes on every iteration of the loop.
b. It is either decremented or incremented by some constant.
3. Reduction in strength :
a. In strength reduction technique the higher strength operators
can be replaced by lower strength operators.
b. The strength of certain operator is higher than other.
c. The strength reduction is not applied to the floating point
expressions because it may yield different results.
4. Loop invariant method : In loop invariant method, the
computation inside the loop is avoided and thereby the computation
overhead on compiler is avoided.
5. Loop unrolling : In this method, the number of jumps and tests can be reduced by replicating the loop body.
For example :
int i = 1;
while (i <= 100)
{
    a[i] = b[i];
    i++;
}
can be written as
int i = 1;
while (i <= 100)
{
    a[i] = b[i];
    i++;
    a[i] = b[i];
    i++;
}
6. Loop fusion or loop jamming : In the loop fusion method, several loops are merged into one loop.
For example :
for i := 1 to n do
    for j := 1 to m do
        a[i, j] := 10
can be written as
for i := 1 to n*m do
    a[i] := 10
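The unrolling transformation of method 5 can be checked for equivalence in Python. This is a sketch; the arrays are 1-based here to match the C fragment, with index 0 unused:

```python
# Loop unrolling: the body is written twice, so the test-and-jump
# executes 50 times instead of 100.

def copy_rolled(b):
    a = [0] * 101
    i = 1
    while i <= 100:
        a[i] = b[i]
        i += 1
    return a

def copy_unrolled(b):
    a = [0] * 101
    i = 1
    while i <= 100:          # condition tested half as often
        a[i] = b[i]
        i += 1
        a[i] = b[i]
        i += 1
    return a

b = [0] + list(range(1, 101))    # b[1..100]
assert copy_rolled(b) == copy_unrolled(b)
```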