UNIT I - CS8602 Compiler Design Notes

Compiler design, BE CSE, 6th semester, Unit 1 material

Uploaded by karuna karan

UNIT 1 – INTRODUCTION TO COMPILERS

Topics to be Covered

Translators-Compilation and Interpretation-Language processors -The Phases of Compiler-


Errors Encountered in Different Phases-The Grouping of Phases-Compiler Construction Tools -
Programming Language basics.

1.1 Translators:

A translator is a computer program that translates a program written in a given programming
language into a functionally equivalent program in a different computer language, without losing
the functional or logical structure of the original code (the "essence" of the program).

Types of Computer Language Translators:

The widely used translators that translate the code of a computer program into a machine code
are:

1. Assemblers
2. Interpreters
3. Compilers

Assembler:
An Assembler converts an assembly program into machine code.

1.2 Compilation and Interpretation:


1.2.1 Compilation:
Compilation is the conceptual process of translating source code into a CPU-executable binary
target code.

Downloaded from: annauniversityedu.blogspot.com


Compiler:

A compiler is a program that reads a program written in one language – the source language –
and translates it into an equivalent program in another language – the target language.

source program --> | Compiler | --> target program
                        |
                        v
                 error messages

As an important part of this translation process, the compiler reports to its user the presence of
errors in the source program.

If the target program is an executable machine-language program, it can then be called by the
user to process inputs and produce outputs.

input --> | Target Program | --> output

Advantages of Compiler:
1. Fast in execution.
2. The object/executable code produced by a compiler can be distributed or executed without
having to have the compiler present.
3. The object program can be used whenever required, without the need for recompilation.

Disadvantages of Compiler:
1. Debugging a program is much harder, so a compiler is not as good at finding errors.
2. When an error is found, the whole program has to be re-compiled.



History of Compiler:
 Until 1952, most programs were written in assembly language.
 In 1952, Grace Hopper wrote the first compiler, for the A-0 programming language.
 Between 1957 and 1958, John Backus wrote the first Fortran compiler. Optimization
of the code was an integral component of that compiler.

Applications of Compiler Technology:


 Implementation of High Level Programming Languages
 Optimizations for Computer Architectures (both parallelism and memory hierarchies
improve the potential performance of a machine, but they must be harnessed effectively
by the compiler to deliver real performance of an application)
 Design of a new computer architecture
 Program Translations ( Program Translation techniques are: Binary Translation,
Hardware Synthesis, Database Query Interpreters, Compiled Simulation)
 Software Productivity Tools (Ex. Structure editors, type checking, bound checking,
memory management tools, etc)

1.2.2 Interpretation:
Interpretation is the conceptual process of translating a high level source code into executable
code.
Interpreter:

An interpreter is also a program that translates high-level source code into executable code.
However, the difference between a compiler and an interpreter is that an interpreter translates
one line at a time and then executes it: no object code is produced, so the program has to
be interpreted each time it is run. If the program executes a section of code 1000 times, then
that section is translated into machine code 1000 times, since each line is interpreted and then
executed.



Advantages of an Interpreter:
1. Good at locating errors in programs.
2. Debugging is easier since the interpreter stops when it encounters an error.
3. If an error is detected, there is no need to retranslate the whole program.

Disadvantages of an Interpreter:
1. Rather slow in execution.
2. No object code is produced, so the translation has to be done every time the program is run.
3. For the program to run, the interpreter must be present.

Difference between Compiler and Interpreter:

1. Compiler: Works on the complete program at once; it takes the entire program as input.
   Interpreter: Works line by line; it takes one statement at a time as input.

2. Compiler: Generates intermediate code, called the object code or machine code.
   Interpreter: Does not generate intermediate object code or machine code.

3. Compiler: Executes conditional control statements (like if-else and switch-case) and logical
   constructs faster than an interpreter.
   Interpreter: Executes conditional control statements at a much slower speed.

4. Compiler: Compiled programs take more memory because the entire object code has to reside
   in memory.
   Interpreter: Does not generate intermediate object code; as a result, interpreted programs
   are more memory efficient.

5. Compiler: Compile once and run any time; a compiled program does not need to be compiled
   every time.
   Interpreter: Interpreted programs are interpreted line by line every time they are executed.

6. Compiler: Errors are reported after the entire program is checked for syntactical and other
   errors.
   Interpreter: An error is reported as soon as the first error is encountered; the rest of the
   program is not checked until the existing error is removed.

7. Compiler: A compiled language is more difficult to debug.
   Interpreter: Debugging is easy because the interpreter stops and reports errors as it
   encounters them.

8. Compiler: Does not allow a program to run until it is completely error-free.
   Interpreter: Runs the program from the first line and stops execution only when it encounters
   an error.

9. Compiler: Compiled languages are more efficient but difficult to debug.
   Interpreter: Interpreted languages are less efficient but easier to debug.

10. Compiler examples: C, C++, COBOL.
    Interpreter examples: BASIC, VISUAL BASIC, Python, Ruby, PERL, MATLAB, Lisp.

Hybrid Compiler:

A hybrid compiler is a compiler which translates human-readable source code into an intermediate
byte code for later interpretation. Such languages have features of both a compiler and an
interpreter. These systems are commonly associated with Just-In-Time (JIT) compilers.

Example of a Hybrid Compiler:

Java is one good example for these types of compilers. Java language processors combine
compilation and interpretation. A Java Source program may be first compiled into an
intermediate form called byte codes. The byte codes are then interpreted by a virtual machine.



A benefit of this arrangement is that the byte codes compiled on one machine can be interpreted
on another machine, perhaps across a network.
In order to achieve faster processing of inputs to outputs, some Java compilers called just-in-time
compilers, translate the byte codes into machine language immediately before they run the
intermediate program to process the input.

source program --> | Translator | --> intermediate program

intermediate program + input --> | Virtual Machine | --> output
Compilers are not only used to translate a source language into the assembly or machine
language but also used in other places.

Example:

1. Text Formatters: A text formatter takes input that is a stream of characters, most of
which is text, some of which includes commands to indicate paragraphs, figures, or
mathematical structures like subscripts and superscripts.
2. Silicon Compilers: A silicon compiler has a source language that is similar or identical
to a conventional programming language. The variables of the language represent logical
signals (0 or 1) or groups of signals in a switching circuit. The output is a circuit design
in an appropriate language.
3. Query Interpreters: A query interpreter translates a predicate containing relational and
Boolean operators into commands to search a database for records satisfying that predicate.



1.3 Language Processors:

A language processor is a program that processes programs written in a programming language
(the source language). A part of a language processor is a language translator, which translates
the program from the source language into machine code, assembly language or another language.

An integrated software developmental environment includes many different kinds of language


processors. They are:
1. Pre Processor
2. Compiler
3. Assembler
4. Linker
5. Loader

1. Pre Processor
The preprocessor is system software which processes the source program before it is fed to
the compiler. It may perform the following functions:

1. Macro Processing: A preprocessor may allow a user to define macros that are shorthand
for longer constructs.
2. File Inclusion: A preprocessor may include header files into the program text. For
example, the C preprocessor causes the contents of the file <global.h> to replace the
statement #include <global.h> when it processes a file containing this statement.
3. Rational Preprocessors: These processors provide the user with built-in macros for
constructs like while-statements or if-statements.
4. Language Extensions: These provide features similar to built-in macros. For example,
the language Equel is a database query language embedded in C.

2. Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine language.
The difference lies in the way they read the source code. A compiler reads the whole source code
at once, creates tokens, checks semantics, generates intermediate code, processes the whole
program and may involve many passes. In contrast, an interpreter reads a statement from the
input, converts it to an intermediate code, executes it, then takes the next statement in
sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler
reads the whole program even if it encounters several errors.

3. Assembler
An assembler translates assembly language programs into machine code. The output of an
assembler is called an object file, which contains a combination of machine instructions as well
as the data required to place these instructions in memory.

4. Linker
A linker is a computer program that links and merges various object files together in order to
make an executable file. These files may have been compiled by separate assemblers. The major
task of a linker is to search for and locate referenced modules/routines in a program and to
determine the memory locations where these codes will be loaded, so that program instructions
have absolute references.

5. Loader
The loader is a part of the operating system and is responsible for loading executable files into
memory and executing them. It calculates the size of a program's instructions and data and
creates memory space for it. It initializes various registers to initiate execution.

1.4 Phases of Compiler:


A compiler operates in phases, each of which transforms the source program from one
representation to another.



The Analysis – Synthesis Model of Compilation:

There are two parts to compilation:

 Analysis and
 Synthesis

1. Analysis:

The first three phases form the bulk of the analysis portion of a compiler. The analysis part
breaks up the source program into constituent pieces and creates an intermediate representation
of the source program. During analysis, the operations implied by the source program are
determined and recorded in a hierarchical structure called a syntax tree, in which each node
represents an operation and the children of a node represent the arguments of the operation.



Example:

Syntax tree for position := initial + rate * 60

            :=
           /  \
   position    +
              / \
       initial   *
                / \
            rate   60

2. Synthesis Part:

The synthesis part constructs the desired target program from the intermediate representation.
This part requires the most specialized techniques.

The Analysis Phase:

Lexical Analysis: The lexical analysis phase reads the characters in the source program and
groups them into a stream of tokens, in which each token represents a logically cohesive
sequence of characters, such as an identifier, a keyword (if, while, etc.), a punctuation
character, or a multi-character operator like :=. The character sequence forming a token is
called the lexeme of the token.

Certain tokens will be augmented by a "lexical value". For example, when an identifier rate is
found, the lexical analyzer generates the token id and also enters rate into the symbol table,
if it does not already exist. The lexical value associated with this id then points to the
symbol-table entry for rate.

Example: position := initial + rate * 60

Tokens:



1. position, initial and rate are identifiers (id)
2. :=, + and * are operators
3. 60 is a number

Thus the lexical analyzer will give the output as:

id1 := id2 + id3 * 60

Syntax Analysis:

The next phase is called syntax analysis, or parsing. It takes the tokens produced by lexical
analysis as input and generates a parse tree or syntax tree. In this phase, token arrangements
are checked against the source-language grammar, i.e. the parser checks whether the expression
made by the tokens is syntactically correct.
It imposes a hierarchical structure on the token stream in the form of a parse tree or syntax
tree. The syntax tree can be represented using a suitable data structure.

Example: position := initial + rate * 60

            :=
           /  \
   position    +
              / \
       initial   *
                / \
            rate   60



Data structure of the above tree:

            :=
           /  \
        id1    +
              / \
           id2   *
                / \
             id3   60

Semantic Analysis:

Semantic analysis checks whether the parse tree constructed follows the rules of the language:
for example, whether an assignment is between compatible data types, or whether a string is
being added to an integer. The semantic analyzer also keeps track of identifiers, their types
and expressions, and whether identifiers are declared before use. It produces an annotated
syntax tree as output.
This analysis inserts a conversion from integer to real in the above syntax tree.
            :=
           /  \
   position    +
              / \
       initial   *
                / \
            rate   inttoreal
                       |
                       60



Synthesis Phase:

Intermediate Code Generation:

After semantic analysis the compiler generates an intermediate code of the source code for the
target machine. It represents a program for some abstract machine. It is in between the high-level
language and the machine language. This intermediate code should be generated in such a way
that it makes it easier to be translated into the target machine code.
Intermediate code has two properties: it should be easy to produce and easy to translate into
the target program. An intermediate representation can have many forms. One of them is three-
address code, which is like an assembly language for a machine in which every memory location
can act like a register and each instruction has at most three operands.

Example: The output of the semantic analysis can be represented in the following intermediate
form:

temp1 := inttoreal ( 60 )

temp2 := id3 * temp1

temp3 := id2 + temp2

id1 := temp3

Code Optimization:

The next phase performs code optimization of the intermediate code. Optimization can be viewed
as removing unnecessary code lines and arranging the sequence of statements in order to speed up
program execution without wasting resources (CPU, memory). In the following example, the
conversion of 60 to 60.0 is done once at compile time and the redundant temporaries are
eliminated.
Example:

The output of intermediate code can be optimized as:

temp1 := id3 * 60.0

id1 := id2 + temp1



Compilers that do the most code optimization are called "optimizing compilers".

Code Generation:

This is the final phase of the compiler, which generates the target code, consisting normally of
relocatable machine code or assembly code. Variables are assigned to registers.

Example:

The output of above optimized code can be generated as:

MOVF id3, R2

MULF #60.0, R2

MOVF id2, R1

ADDF R2, R1

MOVF R1, id1

The first and second operands of each instruction specify a source and a destination,
respectively. The F in each instruction indicates that the instruction deals with floating-point
numbers. The # signifies that 60.0 is to be treated as a constant.

Activities of Compiler:

The symbol-table manager and the error handler are two other activities in the compiler, which
are also referred to as phases. These two activities interact with all six phases of a compiler.

Symbol Table Manager:

The symbol table is a data structure containing a record for each identifier, with fields for the
attributes of the identifier.

The attributes of the identifiers may provide the information about the storage allocated for an
identifier, its type, its scope (where in the program it is valid), and in the case of procedure



names the attributes provide information about the number and types of its arguments, the
method of passing each argument (eg. by reference), and the type returned, if any.

The symbol table allows us to find the record for each identifier quickly and to store or
retrieve data from that record quickly. Attributes of the identifiers cannot be determined
during the lexical analysis phase, but they can be determined during the syntax and semantic
analysis phases. Later phases, such as the code generator, use the symbol table to retrieve
details about the identifiers.

Error Handler: ( Error Detection and Reporting)

Each phase can encounter errors. After the detection of an error, a phase must somehow deal
with that error, so that compilation can proceed, allowing further errors in the source program
to be detected.

Lexical Analysis Phase: If the characters remaining in the input do not form any token of the
language, then the lexical analysis phase detects the error.

Syntax Analysis Phase: The large fraction of errors is handled by syntax and semantic analysis
phases. If the token stream violates the structure rules (syntax) of the language, then this phase
detects the error.

Semantic Analysis Phase: If the constructs have right syntactic structure but no meaning to the
operation involved, then this phase detects the error. Ex. Adding two identifiers, one of which is
the name of the array, and the other the name of a procedure.



(Figure: translation of a statement through the phases of the compiler.)



1.5 Errors Encountered in Different Phases:
Programs submitted to a compiler often have errors of various kinds, so a good compiler should
be able to detect as many errors as possible in various ways and also recover from them.
Each phase can encounter errors. After the detection of an error, a phase must somehow deal
with that error, so that compilation can proceed, allowing further errors in the source program
to be detected.

Errors during Lexical Analysis:

If the characters remaining in the input do not form any token of the language, then the lexical
analysis phase detects the error.

There are relatively few errors which can be detected during lexical analysis.

i. Strange characters

Some programming languages do not use all possible characters, so any strange ones
which appear can be reported. However almost any character is allowed within a quoted
string.

ii. Long quoted strings (1)

Many programming languages do not allow quoted strings to extend over more than one
line; in such cases a missing quote can be detected.

iii. Long quoted strings (2)

If quoted strings can extend over multiple lines then a missing quote can cause quite a lot
of text to be 'swallowed up' before an error is detected.

iv. Invalid numbers

A number such as 123.45.67 could be detected as invalid during lexical analysis


(provided the language does not allow a full stop to appear immediately after a number).
Some compiler writers prefer to treat this as two consecutive numbers 123.45 and .67 as
far as lexical analysis is concerned and leave it to the syntax analyser to report an error.
Some languages do not allow a number to start with a full stop/decimal point, in which
case the lexical analyzer can easily detect this situation.



Error Recovery Actions:

The possible error-recovery actions are:

i) Deleting an extraneous character
ii) Inserting a missing character
iii) Replacing an incorrect character with the correct character
iv) Transposing two adjacent characters

For example:

fi ( a == 1) ....

Here fi is a valid identifier. But fi followed by an open parenthesis may indicate that fi is a
misspelling of the keyword if, or an undeclared function identifier.

Errors in Syntax Analysis:


A large fraction of errors is handled by the syntax and semantic analysis phases. If the token
stream violates the structure rules (syntax) of the language, then this phase detects the error.
The errors detected in this phase include misplaced semicolons and extra or missing braces, i.e.
"{" or "}". As another example, in C or Java, the appearance of a case statement without an
enclosing switch is a syntactic error. (However, this situation is usually allowed by the parser
and caught later in the processing, as the compiler attempts to generate code.) Unbalanced
parentheses in expressions are also handled in this phase.

During syntax analysis, the compiler is usually trying to decide what to do next on the basis of
expecting one of a small number of tokens. Hence in most cases it is possible to automatically
generate a useful error message just by listing the tokens which would be acceptable at that
point.

Source: A + * B
Error: | Found '*', expect one of: Identifier, Constant, '('

More specific hand-tailored error messages may be needed in cases of bracket mismatch.



Source: C := ( A + B * 3 ;
Error: | Missing ')' or earlier surplus '('

A parser should be able to detect and report any error in the program. It is expected that when an
error is encountered, the parser should be able to handle it and carry on parsing the rest of the
input. Mostly it is expected from the parser to check for errors but errors may be encountered at
various stages of the compilation process. A program may have the following kinds of errors at
various stages:
 Lexical : name of some identifier typed incorrectly
 Syntactical : missing semicolon or unbalanced parenthesis
 Semantical : incompatible value assignment
 Logical : code not reachable, infinite loop
There are four common error-recovery strategies that can be implemented in the parser to deal
with errors in the code.

Panic mode
When a parser encounters an error anywhere in a statement, it ignores the rest of the statement
by not processing input from the erroneous token up to a delimiter, such as a semicolon. This is
the easiest way of error recovery, and it also prevents the parser from going into an infinite
loop.

Statement mode
When a parser encounters an error, it tries to take corrective measures so that the rest of the
inputs of the statement allow the parser to parse ahead: for example, inserting a missing
semicolon, or replacing a comma with a semicolon. Parser designers have to be careful here
because one wrong correction may lead to an infinite loop.

Error productions
Some common errors that may occur in the code are known to compiler designers. The designers can
create an augmented grammar to be used, with productions that generate the erroneous constructs,
so that these errors are recognized when they are encountered.



Global correction
The parser considers the program in hand as a whole and tries to figure out what the program is
intended to do and tries to find out a closest match for it, which is error-free. When an erroneous
input (statement) X is fed, it creates a parse tree for some closest error-free statement Y. This
may allow the parser to make minimal changes in the source code, but due to the complexity
(time and space) of this strategy, it has not been implemented in practice yet.

Errors during Semantic Analysis


Semantic errors are mistakes concerning the meaning of a program construct; they may be either
type errors, logical errors or run-time errors:
(i) Type errors occur when an operator is applied to an argument of the wrong type, or to
the wrong number of arguments.
(ii) Logical errors occur when a badly conceived program is executed, for example: while x
= y do ... when x and y initially have the same value and the body of the loop does not
change the value of either x or y.
(iii)Run-time errors are errors that can be detected only when the program is executed, for
example:
var x : real; readln(x); writeln(1/x)
which would produce a run time error if the user input 0.

Syntax errors must be detected by a compiler and at least reported to the user (in a helpful way).
If possible, the compiler should make the appropriate correction(s). Semantic errors are much
harder and sometimes impossible for a computer to detect.

1.6 The Grouping of Phases:


Depending on the relationship between phases, the phases are grouped together as front end and
a back end.

Front End:



The front end consists of phases that depend primarily on the source language and are largely
independent of the target machine. The phases of front end are:

 Lexical Analysis
 Syntactic Analysis
 Creation of the symbol table
 Semantic Analysis
 Generation of the intermediate code
 A part of code optimization
 Error Handling that goes along with the above said phases

character stream --> | Lexical Analyzer | --> token stream
    --> | Syntax-Directed Translator | --> intermediate representation

Back End:

The back end includes the phases of the compiler that depend on the target machine, and these
phases do not depend on the source language, but depend on the intermediate language. The
phases of back end are:

 Code Optimization
 Code Generation
 Necessary Symbol table and error handling operations

Categories of Compiler Design:

Based on the grouping of phases, two types of compiler design are possible:

1. A Single Compiler for Different Machines – It is possible to produce a compiler for the
same source language on a different machine by keeping the front end of the compiler
common and redoing its associated back end.
2. Several Compilers for One Machine – It is possible to produce several compilers for one
machine by using a common back end for the different front ends.



1.7 Compiler Construction Tools:

In order to automate the development of compilers, some general tools have been created. These
tools use specialized languages for specifying and implementing the components. The most
successful tools hide the details of the generation algorithm and produce components which can
be easily integrated into the remainder of the compiler. These tools are often referred to as
compiler-compilers, compiler-generators, or translator-writing systems.

Some of the compiler-construction tools are:

Parser generators: Automatically produce syntax analyzers from a grammatical description


of a programming language.

Scanner generators: Produce lexical analyzers from a regular-expression description of the


tokens of a language.

Syntax-directed translation engines: Produce collections of routines for walking a parse tree
and generating intermediate code.

Code-generator generators: Produce a code generator from a collection of rules for


translating each operation of the intermediate language into the machine language for a target
machine.
Data-flow analysis engines: Facilitate the gathering of information about how values are
transmitted from one part of a program to each other part. Data-flow analysis is a key part of
code optimization.

Compiler-construction toolkits: Provide an integrated set of routines for constructing various


phases of a compiler.



1.8 Programming Language Basics:
The important terminology and distinctions that appear in the programming languages are:

1. The Static / Dynamic Distinction:


 A programming language can have a static policy and a dynamic policy.
 Static Policy: An issue that can be decided at compile time by the compiler follows a static
policy.
 Dynamic Policy: An issue that can only be decided at run time of the program follows a
dynamic policy.
 One example of such a policy decision in a language is the scope of declarations.
 Scope Rules: The scope of a declaration of x is the context in which uses of x refer to this
declaration. A language uses static scope, or lexical scope, if it is possible to determine the
scope of a declaration by looking only at the program, so that it can be determined by the
compiler. Otherwise, the language uses dynamic scope.
 Example in Java:
public static int x;
The compiler can determine the location of integer x in memory.

2. Environments and States:


The association of names with locations in memory (the store) and then with values can be
described by two state mappings that change as the program runs.

Two-State Mapping from Names to Values


The environment is a mapping from names to locations in the store.
The state is a mapping from locations in store to their values. That is, the state maps l-values to
their corresponding r-values, in the terminology of C.



Example:
The storage address 100, associated with variable pi, holds 0. After the assignment pi := 3.14,
the same storage is associated with pi, but the value held there is 3.14.

3. Static Scope and Block Structure:

Scope Rules: The scope of a declaration of x is the context in which uses of x refer to this
declaration. A language uses static scope, or lexical scope, if it is possible to determine the
scope of a declaration by looking only at the program, so that it can be determined by the
compiler. Otherwise, the language uses dynamic scope.
 Example in Java:
public static int x;
The compiler can determine the location of integer x in memory.
The static-scope policy is as follows:
1. A C program consists of a sequence of top-level declarations of variables and functions.
2. Functions may have variable declarations within them, where variables include local
variables and parameters. The scope of each such declaration is restricted to the function
in which it appears.
3. The scope of a top-level declaration of a name x consists of the entire program that
follows, with the exception of those statements that lie within a function that also has a
declaration of x.

Block Structures:
Languages that allow blocks to be nested are said to have block structure. A name x in a nested
block B is in the scope of a declaration D of x in an enclosing block if there is no other
declaration of x in an intervening block.



4. Explicit Access Control:
 Classes and structures introduce a new scope for their members.
 Through the use of keywords like public, private, and protected, object-oriented languages
such as C++ or Java provide explicit control over access to member names in a superclass.
 These keywords support encapsulation by restricting access.
 Thus,
o Private names are purposely given a scope that includes only the method
declarations and definitions associated with that class and any "friend" classes
(the C++ term).
o Protected names are accessible to subclasses.
o Public names are accessible from outside the class.

5. Dynamic Scope:
 Scope Rules: The scope of a declaration of x is the context in which uses of x refer to this
declaration.
 A language uses static scope, or lexical scope, if it is possible to determine the scope of a
declaration by looking only at the program, so that it can be determined by the compiler.
 Example in Java:
public static int x;
The compiler can determine the location of integer x in memory.
 The language uses dynamic scope if it is not possible to determine the scope of a
declaration during compile time.
 Example in Java:
public int x;
 With dynamic scope, as the program runs, the same use of x could refer to any of several
different declarations of x.

6. Parameter Passing Mechanism: Parameters are passed from a calling procedure to the callee
either by value (call by value) or by reference (call by reference). Depending on the procedure
call, the actual parameters associated with formal parameters will differ.



Call-By-Value: In call-by-value, the actual parameter is evaluated (if it is an expression) or
copied (if it is a variable). The value is placed in the location belonging to the corresponding
formal parameter of the called procedure.

Call-By-Reference:
In call-by-reference, the address of the actual parameter is passed to the callee as the value of the
corresponding formal parameter. Uses of the formal parameter in the code of the callee are
implemented by following this pointer to the location indicated by the caller. Changes to the
formal parameter thus appear as changes to the actual parameter.

Call-By-Name:
A third mechanism — call-by-name — was used in the early programming language Algol 60. It
requires that the callee execute as if the actual parameter were substituted literally for the formal
parameter in the code of the callee, as if the formal parameter were a macro standing for the
actual parameter (with renaming of local names in the called procedure, to keep them distinct).

When large objects are passed by value, the values passed are really references to the objects
themselves, resulting in an effective call-by-reference.

7. Aliasing: When parameters are (effectively) passed by reference, two formal parameters can
refer to the same object; this is called aliasing. This possibility allows a change in one
variable to change another.

