
Experiment No – 7

AIM: To study and implement lexical analysis and parsing using LEX and YACC.

THEORY:

1) LEX: Lex is a computer program that generates lexical analyzers (commonly called scanners or lexers). Lex is often used together with the yacc parser generator. Originally written by Mike Lesk and Eric Schmidt and described in 1975, Lex is the standard lexical analyzer generator on many Unix systems, and an equivalent tool is specified as part of the POSIX standard.
Lex reads an input specification describing the lexical analyzer and outputs source code implementing the lexer in the C programming language.

Structure of LEX: The structure of a Lex file is intentionally similar to that of a yacc file;
files are divided into three sections, separated by lines that contain only two percent signs,
as follows:
Definition section
%%
Rules section
%%
C code section
The definition section defines macros and imports header files written in C (a short sketch of the macro facility follows this section). It is also possible to write any C code here, which will be copied verbatim into the generated source file.
The rules section associates regular expression patterns with C statements. When the lexer
sees text in the input matching a given pattern, it will execute the associated C code.
The C code section contains C statements and functions that are copied verbatim to the
generated source file.
These statements presumably contain code called by the rules in the rules section. In large
programs it is more convenient to place this code in a separate file linked in at compile time.
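
For instance, the definition section can give frequently used patterns symbolic names, which the rules section then refers to in braces. The fragment below is a minimal sketch of this macro facility (the names DIGIT and ID and the printed messages are illustrative, not taken from any particular program):

%{
#include <stdio.h>
%}
%option noyywrap
DIGIT   [0-9]
ID      [a-z][a-z0-9]*
%%
{DIGIT}+    { printf("number: %s\n", yytext); }
{ID}        { printf("identifier: %s\n", yytext); }
.|\n        { /* ignore everything else */ }
%%
int main(void) { return yylex(); }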

2) YACC: Yacc is a parser generator for the Unix operating system. It generates a LALR parser, the part of a compiler that tries to make syntactic sense of the source code, from an analytic grammar written in a notation similar to BNF.
Yacc used to be available as the default parser generator on most Unix systems, though it has since been supplanted as the default by more recent, largely compatible programs.

Description of YACC: YACC is an acronym for "Yet Another Compiler Compiler". It was originally developed in the early 1970s by Stephen C. Johnson at AT&T Corporation and written in the B programming language, but was soon rewritten in C. It appeared as part of Version 3 Unix, and a full description of Yacc was published in 1975.

The input to Yacc is a grammar with snippets of C code (called "actions") attached to its
rules. Its output is a shift-reduce parser in C that executes the C snippets associated with
each rule as soon as the rule is recognized. Typical actions involve the construction of parse
trees. Using an example from Johnson, if the call node(label, left, right) constructs a binary parse-tree node with the specified label and children, then the rule

expr : expr '+' expr { $$ = node('+', $1, $3); }

recognizes summation expressions and constructs nodes for them. The special identifiers $$, $1 and $3 refer to items on the parser's stack.
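
In a fuller grammar, the ambiguity in expr '+' expr is usually resolved with a precedence declaration. The following sketch shows how the rule above might sit in context; here NUMBER is assumed to be a token supplied by the lexer, and node() and leaf() are assumed user-written tree constructors, none of which are defined in this document:

%token NUMBER
%left '+'
%%
expr : expr '+' expr   { $$ = node('+', $1, $3); }
     | NUMBER          { $$ = leaf($1); }
     ;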

3) LEX Example:
/*** Definition section ***/
%{
/* C code to be copied verbatim */
#include <stdio.h>
%}
/* This tells flex to read only one input file */
%option noyywrap

%%
    /*** Rules section ***/

    /* [0-9]+ matches a string of one or more digits. */
[0-9]+  {
            /* yytext is a string containing the matched text. */
            printf("Saw an integer: %s\n", yytext);
        }
.|\n    { /* Ignore all other characters. */ }

%%
/*** C Code section ***/

int main(void)
{
    /* Call the lexer, then quit. */
    yylex();
    return 0;
}
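
Assuming the example above is saved in a file named count.l (the name is arbitrary), it can be built and run with flex and a C compiler as follows; because of %option noyywrap, no extra library is needed:

flex count.l
cc lex.yy.c -o count
echo "abc 123 def 456" | ./count

which should print "Saw an integer: 123" followed by "Saw an integer: 456".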
PROGRAM:

hello1.l:

%{
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"   /* token definitions generated by yacc -d */
int yyparse(void);
int yyerror(const char *s);
%}
%%
("hi"|"oi")"\n"     { return HI; }
("tchau"|"bye")"\n" { return BYE; }
.                   { yyerror("unexpected character"); }

%%
int main(void)
{
    yyparse();
    return 0;
}
int yywrap(void)
{
    /* Return 1 so the lexer stops at end of input. */
    return 1;
}
int yyerror(const char *s)
{
    printf("Error: %s\n", s);
    exit(1);
}

hello1.y:

%{
#include <stdio.h>
#include <stdlib.h>
int yylex(void);
int yyerror(const char *s);
%}

%token HI BYE

%%

program:
        hi bye
        ;

hi:
        HI      { printf("Hello World\n"); }
        ;

bye:
        BYE     { printf("Bye World\n"); exit(0); }
        ;
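
The pair of files can be built in the usual way: yacc -d generates y.tab.c together with the y.tab.h header that hello1.l includes, lex generates lex.yy.c, and the two C files are compiled together:

yacc -d hello1.y
lex hello1.l
cc y.tab.c lex.yy.c -o hello1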
OUTPUT:
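
A typical session with the resulting hello1 program should look like the following (hi and bye are user input):

./hello1
hi
Hello World
bye
Bye World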
