CD QB
The interaction is implemented by having the parser call the lexical analyzer. The call,
suggested by the getNextToken command, causes the lexical analyzer to read characters from
its input until it can identify the next lexeme and produce for it the next token, which it
returns to the parser.
PART-A
1. Define parser (or) What is the role of parser? (May 15)
The parser obtains a string of tokens from the lexical analyzer and verifies that the string of
token names can be generated by the grammar of the source language. The parser reports
any syntax errors in an intelligible fashion and recovers from commonly occurring errors so
that it can continue processing the remainder of the program. The parser also constructs a
parse tree and passes it to the rest of the compiler for further processing.
2. State the various error recovery strategies used in a parser to correct the errors. (Nov 20)
✓ Panic-Mode Recovery
✓ Phrase-Level Recovery
✓ Error Productions
✓ Global Correction
3. Define a Context Free Grammar.
A CFG G is formally defined as a 4-tuple G = (V, T, P, S), where
V – finite set of variables (non-terminals)
T – finite set of terminals
P – finite set of production rules
S – start symbol, S ∈ V
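As a concrete illustration (a representation assumed here, not one fixed by the definition), the 4-tuple can be written down directly in Python, using the expression grammar that appears in later questions:

```python
# The 4-tuple G = (V, T, P, S) for the expression grammar
# E -> E+T | T,  T -> T*F | F,  F -> (E) | id.
G = {
    "V": {"E", "T", "F"},                     # non-terminals
    "T": {"+", "*", "(", ")", "id"},          # terminals
    "P": {                                    # production rules
        "E": [["E", "+", "T"], ["T"]],
        "T": [["T", "*", "F"], ["F"]],
        "F": [["(", "E", ")"], ["id"]],
    },
    "S": "E",                                 # start symbol, S in V
}
```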
4. Briefly discuss about the concept of derivation.
The sequence of substitutions used to obtain a string is called a derivation. Each derivation
step replaces an instance of a non-terminal in the current string by the RHS of a production
rule whose LHS is that non-terminal.
5. Define Ambiguous Grammar. (May 16, Nov 21)
A CFG G is said to be ambiguous if some word has two distinct parse trees. Equivalently, a
CFG G is ambiguous if it has more than one leftmost (or rightmost) derivation for some
word w. For example, E → E+E | E*E | id is ambiguous, since id+id*id has two distinct
parse trees.
6. What is meant by left recursion?
A grammar is left recursive if it has a non-terminal A such that there is a derivation A ⇒+ Aα
for some string α. Top-down parsing methods cannot handle left-recursive grammars, so a
transformation that eliminates left recursion is needed. For example:
E → E+T | T
T → T*F | F
F → (E) | id
7. Eliminate the left recursion for the grammar (Nov 13)(May 13)
S → Aa | b
A → Ac | Sd | ε
Let's use the ordering S, A (S = A1, A = A2).
✓ When i = 1, we skip the "for j" loop and remove immediate left recursion from the S
productions (there is none).
✓ When i = 2 and j = 1, we substitute the S-productions in A → Sd to obtain the A-
productions A → Ac | Aad | bd | ε
✓ Eliminating immediate left recursion from the A productions yields the grammar:
S → Aa | b
A → bdA' | A'
A' → cA' | adA' | ε
St. Joseph’s College of Engineering Page 7 of 27
CS1601 – Compiler Design Department of CSE 2023-2024
8. Eliminate Left Recursion for the grammar.
E → E+T | T
T → T*F | F
F → (E) | id.
General Rule:
A → Aα | β is converted to
A → βA'
A' → αA'
A' → ε
After left recursion elimination the grammar is:
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → (E) | id
9. Eliminate left recursion from the following grammar A → Ac | Aad | bd | ε. (May 13)
A → bdA' | A'
A' → cA' | adA' | ε
10. Write the algorithm to eliminate left recursion from a grammar?
INPUT: Grammar G with no cycles or ε-productions.
OUTPUT: An equivalent grammar with no left recursion.
METHOD: Apply the algorithm below to G. Note that the resulting non-left-recursive grammar
may have ε-productions.
1) arrange the nonterminals in some order A1, A2, …, An
2) for ( each i from 1 to n ) {
3)   for ( each j from 1 to i − 1 ) {
4)     replace each production of the form Ai → Aj γ by the
       productions Ai → δ1γ | δ2γ | … | δkγ, where
       Aj → δ1 | δ2 | … | δk are all the current Aj-productions
5)   }
6)   eliminate the immediate left recursion among the Ai-productions
7) }
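The algorithm above can be sketched in Python. This is a minimal illustration of the method, assuming a grammar is a dict mapping each non-terminal to a list of productions (each a list of symbols), with "eps" standing for ε:

```python
def eliminate_immediate(A, prods):
    """Rewrite A -> A a1 | ... | b1 | ... as A -> b A' and A' -> a A' | eps."""
    alphas = [p[1:] for p in prods if p[0] == A]   # left-recursive tails
    betas = [p for p in prods if p[0] != A]        # non-recursive bodies
    if not alphas:                                 # no immediate left recursion
        return {A: prods}
    A2 = A + "'"                                   # fresh non-terminal A'
    return {
        A: [[s for s in b if s != "eps"] + [A2] for b in betas],
        A2: [a + [A2] for a in alphas] + [["eps"]],
    }

def eliminate_left_recursion(grammar, order):
    g = {nt: [list(p) for p in prods] for nt, prods in grammar.items()}
    result = {}
    for i, Ai in enumerate(order):
        for Aj in order[:i]:
            expanded = []
            for p in g[Ai]:
                if p[0] == Aj:                     # step 4: substitute Aj-productions
                    for q in result[Aj]:
                        expanded.append([s for s in q if s != "eps"] + p[1:])
                else:
                    expanded.append(p)
            g[Ai] = expanded
        result.update(eliminate_immediate(Ai, g[Ai]))  # step 6
    return result
```

Running it on the grammar of question 7 with the ordering S, A reproduces the answer given there.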
11. What is meant by Left Factoring?
Left factoring is a transformation that removes the common left factor appearing in two or
more productions of the same non-terminal. It is done to avoid backtracking by the parser.
Suppose the parser has a one-symbol look-ahead and consider the example A → qB | qC,
where A, B, C are non-terminals and q is a string of grammar symbols. The parser cannot
tell which of the two productions to choose and might have to backtrack. After left
factoring, the grammar is converted to
A → qD
D → B | C
In this case, a parser with a one-symbol look-ahead can always choose the right production.
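A minimal sketch of this transformation, assuming the same list-of-symbols representation of productions as in the earlier sketches; note the new non-terminal is named A' here rather than D, and only one level of common prefix is factored:

```python
from collections import defaultdict

def left_factor(A, prods):
    """Group A's productions by their first symbol; each group sharing a
    common first symbol is replaced by A -> prefix A', where A' derives
    the remaining suffixes."""
    groups = defaultdict(list)
    for p in prods:
        groups[p[0]].append(p)
    factored, new_rules = [], {}
    for head, group in groups.items():
        if len(group) == 1:
            factored.append(group[0])          # nothing to factor
        else:
            A2 = A + "'"                       # fresh non-terminal
            factored.append([head, A2])
            new_rules[A2] = [p[1:] or ["eps"] for p in group]
    return {A: factored, **new_rules}
```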
12. Construct the DAG and identify the value numbers for the sub-expressions of the
following expressions, assuming + associates from the left. (Nov 20)
i. a*b + (a*b)
ii. a*b*a*b
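For (i), a*b + (a*b), the repeated subexpression a*b gets a single value number, giving the nodes a:1, b:2, (*,1,2):3 and (+,3,3):4. For (ii), a*b*a*b is parsed as ((a*b)*a)*b, giving a:1, b:2, (*,1,2):3, (*,3,1):4 and (*,4,2):5. The construction can be sketched as a hash-based value-numbering routine (a hypothetical helper, with expressions as nested tuples):

```python
def value_number(expr, table=None, nodes=None):
    """Return the value number of expr, reusing an existing DAG node when
    the same (op, left, right) triple has been seen before."""
    if table is None:
        table, nodes = {}, []
    if isinstance(expr, str):                  # leaf: a variable name
        key = ("id", expr)
    else:
        op, left, right = expr
        ln, _, _ = value_number(left, table, nodes)
        rn, _, _ = value_number(right, table, nodes)
        key = (op, ln, rn)
    if key not in table:                       # a new node gets a fresh number
        nodes.append(key)
        table[key] = len(nodes)
    return table[key], table, nodes

# i. a*b + (a*b): only 4 DAG nodes, because a*b is shared.
n, _, nodes = value_number(("+", ("*", "a", "b"), ("*", "a", "b")))
```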
13. What are the different ways of implementing the three address code? (May 22)
The three address code can be implemented as:
✓ Quadruples
✓ Triples
✓ Indirect Triples
✓ Static Single-Assignment Form
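The first three forms can be contrasted on a small example (a representation assumed here for illustration; the statement t1 = b * c; a = t1 + d is not from the text):

```python
# Quadruples: four fields (op, arg1, arg2, result); results are named,
# so no extra copy instruction is needed for the final assignment.
quadruples = [
    ("*", "b", "c", "t1"),
    ("+", "t1", "d", "a"),
]

# Triples: three fields; a result is referred to by the position of the
# triple that computes it, so an explicit copy into 'a' is required.
triples = [
    ("*", "b", "c"),      # (0)
    ("+", 0, "d"),        # (1) uses the result of (0)
    ("=", "a", 1),        # (2) a gets the result of (1)
]

# Indirect triples: a separate instruction list points into the triple
# table, so instructions can be reordered without renumbering triples.
instruction_list = [0, 1, 2]
```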
14. Place the code x = *y ; a = &x in triples and indirect triples. (May 15)
For x = *y:
Triples:
      op   arg1   arg2
(0)   *    y
(1)   =    x      (0)
Indirect triples (an instruction list pointing into the same triple table):
      instruction
11    (0)
12    (1)
For a = &x:
Triples:
      op   arg1   arg2
(0)   &    x
(1)   =    a      (0)
Indirect triples:
      instruction
11    (0)
12    (1)
15. Write a 3-address code for x = *y ; a = &x. (May 15) (May 23)
t1:=*y
x:=t1
t1:=&x
a:=t1
16. Define type systems.
The type system of a language is a collection of rules for assigning type expressions to the
various parts of a program. An implementation of a type system is called a type checker.
17. What is type checking? (May 23)
Type checking uses logical rules to reason about the behavior of a program at run time.
Specifically, it ensures that the types of the operands match the type expected by an operator.
For example, the && operator in Java expects its two operands to be booleans; the result is
also of type boolean.
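A toy version of that check (a hypothetical helper, with types represented as strings):

```python
def check_and(left_type, right_type):
    """Java's && requires both operands to be boolean; the result is boolean."""
    if left_type == "boolean" and right_type == "boolean":
        return "boolean"
    raise TypeError("operands of && must be boolean")
```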
18. State the type expressions. (Nov 21)
Types have structure, which we represent using type expressions: a type expression is either
a basic type or is formed by applying an operator called a type constructor to a type expression.
The sets of basic types and constructors depend on the language being checked.
19. What are static and dynamic errors?
✓ Static errors are detected at compile time. Eg: use of an undeclared identifier.
✓ Dynamic errors are detected at run time. Eg: a type error caught by a dynamic type check.
20. What are the advantages of compile time checking?
✓ It can catch many common errors.
✓ Static checking is desirable when speed is important, since it can result in faster code
that does not perform any type checking during execution.
21. What are the advantages of dynamic checking?
✓ It usually permits the programmer to be less concerned with types; thus, it frees the
programmer.
✓ It may be required in some cases, such as array bounds checks, which can be performed
only during execution.
✓ It can give clearer code.
✓ It may give rise to more robust code by ensuring thorough checking of the values of
program identifiers during execution.
22. Give an account on Static vs. Dynamic Type Checking.
✓ Static: done at compile time (e.g., Java)
✓ Dynamic: done at run time (e.g., Scheme)
✓ A strong type system is one where any program that passes the static type checker
cannot contain run-time type errors. Such languages are said to be strongly typed.
23. Express the rule for checking the type of a function. (Nov 21)
Type checking can take on two forms: synthesis and inference. A typical rule for type
synthesis has the form:
    if f has type s → t and x has type s,
    then expression f(x) has type t.
Here, f and x denote expressions, and s → t denotes a function from s to t. This rule for
functions with one argument carries over to functions with several arguments.
A typical rule for type inference has the form:
    if f(x) is an expression,
    then for some α and β, f has type α → β and x has type α.
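The synthesis rule for application can be sketched directly (an assumed representation: a function type s → t written as the tuple ("->", s, t)):

```python
def synthesize_call(f_type, x_type):
    """If f has type s -> t and x has type s, then f(x) has type t;
    otherwise report a type error."""
    if isinstance(f_type, tuple) and f_type[0] == "->":
        _, s, t = f_type
        if x_type == s:
            return t
    raise TypeError("operand type does not match what the operator expects")
```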
PART –B
1. Explain the concept of syntax directed definition. Explain Synthesized attribute and Inherited
attribute with suitable examples. (May 23)