
21CS51

Automata Theory

and

Compiler Design

Lecture Notes for Modules 4 and 5

Shridhar Venkatanarasimhan

Department of Computer Science and Engineering

Raja Reddy Institute of Technology

Chikkabanavara, Bengaluru 560090.

February 29, 2024


CONTENTS

1. Pushdown Automata . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.1 Definition of a PDA . . . . . . . . . . . . . . . . . . . . . . . 7

1.2 Instantaneous Description (ID) for a PDA . . . . . . . . . . 7

1.3 Moves of a PDA . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.4 Languages of a PDA . . . . . . . . . . . . . . . . . . . . . . . 8

1.5 Examples of PDA . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.5.1 {0^n 1^n} . . . . . . . . . . . . . . . . . . . . . . . . 8

1.5.2 Example of ww^R . . . . . . . . . . . . . . . . . . . . 9

2. Bottom-Up Parsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1 Introduction to Shift-Reduce Parsing . . . . . . . . . . . . . 10

2.1.1 What is a handle? . . . . . . . . . . . . . . . . . . . . 10

2.1.2 Shift-Reduce Example . . . . . . . . . . . . . . . . . . 10

2.2 LR(0) Parsers . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2.1 What is an LR(0) Item? . . . . . . . . . . . . . . . . . 11

2.2.2 Sets of LR(0) Items . . . . . . . . . . . . . . . . . . . . 12

2.2.3 The GOTO Function . . . . . . . . . . . . . . . . . . . 13

2.2.4 LR(0) Automaton . . . . . . . . . . . . . . . . . . . . . 13

2.2.5 LR(0) Parsing Table . . . . . . . . . . . . . . . . . . . 13

2.2.6 Use of the LR(0) Parsing Table . . . . . . . . . . . . . 13

2.3 LR(1) Parsers . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.3.1 LR(0) and LR(1) . . . . . . . . . . . . . . . . . . . . . . 15

2.3.2 What is k in LR(k)? . . . . . . . . . . . . . . . . . . . . 17

2.3.3 CLOSURE for an LR(1) set of items . . . . . . . . . . 18



2.3.4 Sample Grammar for LR(1) Parsing . . . . . . . . . . 18

2.3.5 The GOTO Function . . . . . . . . . . . . . . . . . . . 19

2.3.6 LR(1) Automaton . . . . . . . . . . . . . . . . . . . . . 19

2.3.7 LR(1) Parsing Table . . . . . . . . . . . . . . . . . . . 19

2.3.8 Use of the LR(1) Parsing Table . . . . . . . . . . . . . 19

2.4 LALR Parser . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3. Turing Machine and Undecidability . . . . . . . . . . . . . . . . . 24

3.1 Definition of a TM . . . . . . . . . . . . . . . . . . . . . . . . 24

3.2 Examples of TMs . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.3 Programming Techniques for TMs . . . . . . . . . . . . . . . 26

3.4 Extensions of TMs . . . . . . . . . . . . . . . . . . . . . . . . 26

3.5 Definition of Algorithm . . . . . . . . . . . . . . . . . . . . . 26

3.6 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.6.1 Recursively Enumerable and Recursive . . . . . . . . 27

3.6.2 Codes for Turing Machines . . . . . . . . . . . . . . . 27

3.6.3 The Diagonal Language Ld . . . . . . . . . . . . . . . 27

3.6.4 The Universal Turing Machine . . . . . . . . . . . . . 27

3.6.5 The Universal Language . . . . . . . . . . . . . . . . . 28

3.6.6 Reductions . . . . . . . . . . . . . . . . . . . . . . . . 28

3.6.7 Undecidable Problems . . . . . . . . . . . . . . . . . . 28


LIST OF FIGURES

2.1 Shift-Reduce Parsing . . . . . . . . . . . . . . . . . . . . . . 11

2.2 LR(0) Automaton with Sets of Items (States) and the GOTO

Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.3 LR(0) Parsing Table for the Expression Grammar . . . . . . 15

2.4 LR Parsing Algorithm . . . . . . . . . . . . . . . . . . . . . . 16

2.5 LR(0) Parsing Steps for id * id . . . . . . . . . . . . . . . . . 17

2.6 LR(1) Automaton with Sets of Items (States) and the GOTO

Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.7 LR(1) Parsing Table for the Sample Grammar . . . . . . . . 21

2.8 Merged LALR states formed from LR(1) . . . . . . . . . . . . 22

2.9 LALR Parsing Table . . . . . . . . . . . . . . . . . . . . . . . 23

3.1 TM recognizing {0^n 1^n | n ≥ 0} . . . . . . . . . . . . . . . . 25

3.2 TM for interchanging 0’s and 1’s . . . . . . . . . . . . . . 25


ABSTRACT

These are lecture notes for modules 4 and 5 of 21CS51: Automata

Theory and Compiler Design.

First, we look at Pushdown Automata, with definitions and examples.

Next, we consider bottom-up parsing. In bottom-up parsing, the input (a string of terminals) is scanned (usually from left to right) and a form of the parse tree is constructed from the bottom to the top. Shift-reduce parsers are bottom-up parsers which make use of a stack of grammar symbols. In each step, a shift-reduce parser either shifts a terminal from the remainder of the input onto the stack or reduces a string of grammar symbols forming a "handle" to the left-hand-side variable of a production. LR(0), LR(1) and LALR parsers can be constructed from a grammar; they generate and make use of parsing tables which help in making the shift or reduce decisions.

Thirdly, we look at Turing Machines, which are the most powerful models of automata that can exist. Unfortunately, even Turing Machines cannot solve certain problems; these are the undecidable or unsolvable problems (unsolvable by any computational means).

Finally, we cover the remainder of Compiler Design in brief: Syntax-Directed Translation, Syntax-Directed Definitions, and forms of Intermediate Code: DAGs, Three-Address Code and SSA. We cover issues in Target Code Generation very briefly.



Disclaimer: Students are requested to read the prescribed textbooks and the reference books. There might be errors in these notes. If you come across any mistake, please let me know.


1. PUSHDOWN AUTOMATA

A Pushdown Automaton (PDA) uses a stack of symbols. There is an

input alphabet Σ and a stack alphabet Γ. It has a finite set of states

Q. Given a state q ∈ Q, the current input symbol a, and the current symbol X on the top of the stack, the PDA changes to a possibly different state p ∈ Q and replaces X by a string γ ∈ Γ*.

1.1 Definition of a PDA

A PDA is defined as the 7-tuple P = (Q, Σ, Γ, δ, q0, Z0, F).

Here, δ is the transition function which determines the changes in state and on the stack. q0 is the initial state, and Z0 is initially the only symbol on the stack.

Note that δ is in general non-deterministic; that is, δ(q, a, X) is multi-valued, and there can even be ϵ-moves: δ(q, ϵ, X) is a set of moves which consume no input.

F ⊆ Q is a set of final states.

1.2 Instantaneous Description (ID) for a PDA

The ID for a PDA is given by the current state q, the remainder of the

input w and the current string of stack symbols γ. That is, (q, w, γ) is

the ID for a PDA.



1.3 Moves of a PDA

A move of a PDA can be given by: (q, aw, Xα) ⊢ (p, w, γα) where state

changes from q to p, input symbol a is consumed and stack symbol X

is replaced by γ. For an ϵ-move, we have (q, w, Xα) ⊢ (p, w, γα).

Multiple moves are given by ⊢∗ of the form: (q, w1 , α) ⊢∗ (p, w2 , β).

1.4 Languages of a PDA

Let P be a PDA. Then N(P) is the language containing all strings for which P empties its stack after consuming the string. L(P) is the set of all strings for which P enters a state in F.

Using the ID notation of moves in section 1.3, we define: N (P ) =

{w|(q0 , w, Z0 ) ⊢∗ (p, ϵ, ϵ) for some p ∈ Q}. Similarly: L(P ) = {w|(q0 , w, Z0 ) ⊢∗

(p, ϵ, α) for some p ∈ F and some α ∈ Γ∗ }.

It can be shown that acceptance by final state is equivalent to acceptance by empty stack. Please refer to [1] for details.

1.5 Examples of PDA

1.5.1 {0^n 1^n}

We know that the language {0^n 1^n | n ≥ 0} is not regular. We have a PDA which accepts it by empty stack, as follows.

Moves:

1. δ(q0, 0, Z0) = {(q0, 0Z0)}

2. δ(q0, 0, 0) = {(q0, 00)}

3. δ(q0, 1, 0) = {(q0, ϵ)}

4. δ(q0, ϵ, Z0) = {(q0, ϵ)}



Basically, we push 0’s on the stack as they occur and pop a 0 for

every 1. Here, Γ = {0, Z0 }.

Note that this PDA is non-deterministic as it has an ϵ-move.

1.5.2 Example of ww^R

Consider the language over {0, 1} which consists of all even-length palindromes, that is, all strings of the form ww^R.

Moves:

1. δ(q0, 0, Z0) = {(q0, 0Z0)}

2. δ(q0, 1, Z0) = {(q0, 1Z0)}

3. δ(q0, 0, 0) = {(q0, 00)}

4. δ(q0, 1, 0) = {(q0, 10)}

5. δ(q0, 0, 1) = {(q0, 01)}

6. δ(q0, 1, 1) = {(q0, 11)}

7. δ(q0, ϵ, Z0) = {(q0, ϵ), (q1, Z0)}

8. δ(q0, ϵ, 0) = {(q1, 0)}

9. δ(q0, ϵ, 1) = {(q1, 1)}

10. δ(q1, 0, 0) = {(q1, ϵ)}

11. δ(q1, 1, 1) = {(q1, ϵ)}

12. δ(q1, ϵ, Z0) = {(q1, ϵ)}

Explanation: State q0 pushes everything it sees onto the stack. Non-deterministically, it switches to q1 without consuming input. q1 matches and pops every 0 against a 0 and every 1 against a 1. q1 pops off the Z0 without consuming input. q0 can also pop off the Z0 without consuming input (this accepts the empty string). Here, Γ = {Z0, 0, 1}.
2. BOTTOM-UP PARSING

2.1 Introduction to Shift-Reduce Parsing

Shift-Reduce is a kind of bottom-up parsing which uses a stack for

holding grammar symbols. The input is scanned from left-to-right. In

each step, the current input symbol is either shifted onto the stack, or a handle on the top of the stack is reduced to a non-terminal.

2.1.1 What is a handle?

Consider a production A → β. Assume that αβ is on the stack. The shift-reduce parser might decide to replace β with A. In this case, we can think of the occurrence of β on the top of the stack as a handle which is pruned to A.

2.1.2 Shift-Reduce Example

An example of shift-reduce parsing is given for the string id ∗ id$ for the

following expression grammar.

1. E → E + T

2. E → T

3. T → T ∗ F

4. T → F

5. F → (E)

6. F → id

Fig. 2.1: Shift-Reduce Parsing

Note that this grammar is unambiguous, left recursion is not eliminated, and the productions are numbered from 1 to 6.

Please refer to figure 2.1 for how shift-reduce parsing occurs.

Note that this parser makes a left-to-right scan of the input and

traces a rightmost derivation in reverse.
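One valid shift-reduce action sequence for id * id can be replayed mechanically. The sketch below is an illustration (the exact presentation of steps in figure 2.1 may differ); the assertion checks that every reduction really finds its handle on top of the stack.

```python
# Replaying a shift-reduce parse of "id * id" by hand: at each step we
# either shift the next token or reduce the handle on top of the stack
# using one of the six numbered productions.
productions = {
    1: ('E', ['E', '+', 'T']),
    2: ('E', ['T']),
    3: ('T', ['T', '*', 'F']),
    4: ('T', ['F']),
    5: ('F', ['(', 'E', ')']),
    6: ('F', ['id']),
}

tokens = ['id', '*', 'id', '$']
actions = ['shift', 6, 4, 'shift', 'shift', 6, 3, 2, 'accept']

stack, pos = [], 0
for act in actions:
    if act == 'shift':
        stack.append(tokens[pos]); pos += 1
    elif act == 'accept':
        assert stack == ['E'] and tokens[pos] == '$'
    else:                       # reduce by production number `act`
        head, body = productions[act]
        assert stack[-len(body):] == body, 'handle must be on top'
        del stack[-len(body):]
        stack.append(head)
    print(stack, tokens[pos:])
```

Reading the printed reductions bottom-to-top gives the rightmost derivation of id * id, which is exactly the "rightmost derivation in reverse" property noted above.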

2.2 LR(0) Parsers

An LR(0) parser is a kind of shift-reduce parser. L stands for a left-to-right scan of the input, R stands for a rightmost derivation in reverse, and 0 stands for 0 symbols of lookahead.

In an LR(0) parser, states are pushed on the stack. A state represents a set of LR(0) items.

2.2.1 What is an LR(0) Item?

If A → αβ is a production, then A → α.β is an item representing that, in the process of recognizing this production, the parser has seen α thus far.

2.2.2 Sets of LR(0) Items

Let us begin with a set of items which represents a state of what the parser has recognized. Let I be this set. If A → α.Bβ is in I and B → γ is a production, add B → .γ to I. Keep doing this till no more items can be added. The resulting set is called CLOSURE(I).

For constructing an LR(0) parser, we need an augmented grammar. That is, a production S′ → S is added to the grammar. The expression grammar after augmentation becomes:

• E′ → E

• E →E+T

• E→T

• T →T ∗F

• T →F

• F → (E)

• F → id

The initial set of items after CLOSURE of E ′ → .E is

• E ′ → .E

• E → .E + T

• E → .T

• T → .T ∗ F

• T → .F

• F → .(E)

• F → .id

2.2.3 The GOTO Function

If a set of LR(0) items I contains an item A → α.Xβ, where X is a grammar symbol (variable or terminal), then GOTO(I, X) contains A → αX.β. Thus from a set of items I we GOTO on X to a set J, where J is closed (J = CLOSURE(J)).
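CLOSURE and GOTO can be sketched in a few lines for the augmented expression grammar. This is an illustration (the item representation and names are ours, not from the notes): an item A → α.β is stored as (head, body, dot position).

```python
# Productions of the augmented expression grammar:
# head -> list of alternative bodies (each body a list of symbols).
GRAMMAR = {
    "E'": [['E']],
    'E':  [['E', '+', 'T'], ['T']],
    'T':  [['T', '*', 'F'], ['F']],
    'F':  [['(', 'E', ')'], ['id']],
}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    """Items are (head, body tuple, dot position)."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot in list(items):
            # If the dot precedes a non-terminal B, add B -> .gamma
            if dot < len(body) and body[dot] in NONTERMINALS:
                for alt in GRAMMAR[body[dot]]:
                    item = (body[dot], tuple(alt), 0)
                    if item not in items:
                        items.add(item); changed = True
    return frozenset(items)

def goto(items, X):
    """Advance the dot over X in every applicable item, then close."""
    moved = {(h, b, d + 1) for h, b, d in items if d < len(b) and b[d] == X}
    return closure(moved)

I0 = closure({("E'", ('E',), 0)})
# I0 should match the seven items listed in section 2.2.2.
```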

2.2.4 LR(0) Automaton

Refer to figure 2.2 for the LR(0) automaton which has these states and

the GOTO functions for the augmented expression grammar.

2.2.5 LR(0) Parsing Table

An LR(0) Parsing Table has a row for every state. It has a column for

every terminal and a column for $ which signifies end-of-input. It has

columns for every variable too.

Refer to figure 2.3 for the LR(0) parsing table for the expression

grammar.

If GOTO(Ii, a) = Ij for a terminal a, then the entry for state i on input a is sj, where s stands for shift. If GOTO(Ii, A) = Ij for a non-terminal A, then the entry for row i and column A is j. If a set of items Ii contains the complete item A → α., then for every terminal in FOLLOW(A), the entry in that row is rk, where k is the number of the production. If a set of items Ii contains the item S′ → S., then the action of state i on $ is accept.

2.2.6 Use of the LR(0) Parsing Table

Refer to figure 2.4 for the parsing algorithm which makes use of the

parsing table for shift/reduce decisions.

Refer to figure 2.5 for the example of parsing the input id ∗ id.
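The driver of figure 2.4 can be sketched against a hand-entered table. The ACTION/GOTO entries below are the standard SLR(1) table for this expression grammar (as found in [2]); the state numbering may differ from figure 2.3, so treat this as an illustrative sketch rather than a copy of the figure.

```python
# sN = shift and push state N, rK = reduce by production K, 'acc' = accept.
ACTION = {
    0:  {'id': 's5', '(': 's4'},
    1:  {'+': 's6', '$': 'acc'},
    2:  {'+': 'r2', '*': 's7', ')': 'r2', '$': 'r2'},
    3:  {'+': 'r4', '*': 'r4', ')': 'r4', '$': 'r4'},
    4:  {'id': 's5', '(': 's4'},
    5:  {'+': 'r6', '*': 'r6', ')': 'r6', '$': 'r6'},
    6:  {'id': 's5', '(': 's4'},
    7:  {'id': 's5', '(': 's4'},
    8:  {'+': 's6', ')': 's11'},
    9:  {'+': 'r1', '*': 's7', ')': 'r1', '$': 'r1'},
    10: {'+': 'r3', '*': 'r3', ')': 'r3', '$': 'r3'},
    11: {'+': 'r5', '*': 'r5', ')': 'r5', '$': 'r5'},
}
GOTO = {0: {'E': 1, 'T': 2, 'F': 3}, 4: {'E': 8, 'T': 2, 'F': 3},
        6: {'T': 9, 'F': 3}, 7: {'F': 10}}
PROD = {1: ('E', 3), 2: ('E', 1), 3: ('T', 3), 4: ('T', 1),
        5: ('F', 3), 6: ('F', 1)}     # head and body length only

def parse(tokens):
    stack, i = [0], 0                  # stack of states, as in figure 2.4
    while True:
        act = ACTION[stack[-1]].get(tokens[i])
        if act is None:
            return False               # blank (error) entry: reject
        if act == 'acc':
            return True
        if act[0] == 's':              # shift: consume a token, push a state
            stack.append(int(act[1:])); i += 1
        else:                          # reduce: pop |body| states, then GOTO
            head, n = PROD[int(act[1:])]
            del stack[-n:]
            stack.append(GOTO[stack[-1]][head])
```

For example, parse(['id', '*', 'id', '$']) walks through the same shift/reduce decisions shown in figure 2.5 and accepts.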

Fig. 2.2: LR(0) Automaton with Sets of Items (States) and the GOTO Function

Fig. 2.3: LR(0) Parsing Table for the Expression Grammar

2.3 LR(1) Parsers

2.3.1 LR(0) and LR(1)

A parsing table for a shift-reduce parser such as an LR(0) parser can

have two types of conflicts:

1. shift-reduce conflict: This is a confusion between a shift action

and a reduce action.

2. reduce-reduce conflict: This is a confusion between two different possible reductions.

Point to ponder: why cannot an LR parsing table have a shift-shift

conflict?

An LR(1) parser is strictly more powerful than an LR(0) parser. That is, if a grammar has an LR(0) parser without conflicts, it also has an LR(1) parsing table without conflicts; however, the reverse is not true.

Fig. 2.4: LR Parsing Algorithm

Fig. 2.5: LR(0) Parsing Steps for id * id

There are grammars with LR(1) parsing tables without conflicts for which the LR(0) parsing table has conflicts. A grammar is said to be LR(0) (or in LR(0)) if there is an LR(0) parsing table without conflicts. A grammar is said to be LR(1) (or in LR(1)) if there is an LR(1) parsing table without conflicts. LR(0) is a proper subset of LR(1).

2.3.2 What is k in LR(k)?

LR(k) stands for a Left-to-right scan of the input, a Rightmost derivation in reverse, and k symbols of lookahead. When k is 0, we get LR(0), which (with the FOLLOW-based reduce entries used in section 2.2.5) is also called simple LR, SLR or SLR(1). When k is 1, we get LR(1). If k is omitted, it defaults to 1, so LR(1) is assumed.

Remember, an LR(0) item is of the form: A → α.β. There is no

lookahead symbol.

An LR(1) item has a lookahead symbol, which can be either a terminal or $ (end of input). It is of the form A → α.β, a where a is either a terminal or $. If there are multiple lookahead symbols for the same "item", they can be combined, as in A → α.β, a|b|c.

2.3.3 CLOSURE for an LR(1) set of items

Let the set I of items contain A → α.Bβ, a and let B → γ be a production.

Add B → .γ, F IRST (βa) to I. Keep doing these, till no more items can

be added to I.

2.3.4 Sample Grammar for LR(1) Parsing

1. S → CC

2. C → cC

3. C → d

Note that this grammar is unambiguous, left recursion is not eliminated, and the productions are numbered from 1 to 3.

For constructing an LR(1) parser, we need an augmented grammar. That is, a production S′ → S is added to the grammar. The sample grammar after augmentation becomes:

• S′ → S

• S → CC

• C → cC

• C→d

The initial set of items after CLOSURE of S ′ → .S, $ is

• S ′ → .S, $

• S → .CC, $

• C → .cC, c|d

• C → .d, c|d

Note how the lookaheads are c and d in the last two items.
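The LR(1) CLOSURE computation just described can be checked mechanically for the sample grammar. In this sketch (an illustration; representation and names are ours) FIRST is hardcoded, which is safe here because no symbol of this grammar derives ϵ, so FIRST of a string is just FIRST of its first symbol.

```python
# LR(1) items: (head, body tuple, dot position, lookahead).
GRAMMAR = {"S'": [('S',)], 'S': [('C', 'C')], 'C': [('c', 'C'), ('d',)]}
FIRST = {'S': {'c', 'd'}, 'C': {'c', 'd'}, 'c': {'c'}, 'd': {'d'}, '$': {'$'}}

def first_of(seq):
    # FIRST of a string; no symbol here is nullable, so the first
    # symbol of the string decides.
    return FIRST[seq[0]]

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot, la in list(items):
            if dot < len(body) and body[dot] in GRAMMAR:
                # For A -> alpha.B beta, a: add B -> .gamma, b
                # for every b in FIRST(beta a).
                for alt in GRAMMAR[body[dot]]:
                    for b in first_of(body[dot + 1:] + (la,)):
                        item = (body[dot], alt, 0, b)
                        if item not in items:
                            items.add(item); changed = True
    return frozenset(items)

I0 = closure({("S'", ('S',), 0, '$')})
# I0 should match the six items listed above (with c|d written out).
```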

2.3.5 The GOTO Function

If a set of LR(1) items I contains an item A → α.Xβ, a, where X is a grammar symbol (variable or terminal), then GOTO(I, X) contains A → αX.β, a. Thus from a set of items I we GOTO on X to a set J, where J is closed (J = CLOSURE(J)).

2.3.6 LR(1) Automaton

Refer to figure 2.6 for the LR(1) automaton which has these states and

the GOTO functions for the augmented sample grammar.

2.3.7 LR(1) Parsing Table

An LR(1) Parsing Table has a row for every state. It has a column for

every terminal and a column for $ which signifies end-of-input. It has

columns for every variable too.

Refer to figure 2.7 for the LR(1) parsing table for the sample gram-

mar.

If GOTO(Ii, a) = Ij for a terminal a, then the entry for state i on input a is sj, where s stands for shift. If GOTO(Ii, A) = Ij for a non-terminal A, then the entry for row i and column A is j. If a set of items contains the complete item A → α., a|b|c..., then for the lookaheads a, b, c, ..., the entry in that row is rk, where k is the number of the production. If a set of items Ii contains the item S′ → S., $, then the action of state i on $ is accept.

2.3.8 Use of the LR(1) Parsing Table

The parsing algorithm for LR(1) is just like that of LR(0), except that it makes use of the LR(1) parsing table instead. We have covered the example of parsing the sample input ccdd$ in class.

Fig. 2.6: LR(1) Automaton with Sets of Items (States) and the GOTO Function

Fig. 2.7: LR(1) Parsing Table for the Sample Grammar

Fig. 2.8: Merged LALR states formed from LR(1)

2.4 LALR Parser

LALR stands for lookahead LR. An LALR parser is intermediate in power between LR(0) and LR(1). For any grammar, the LR(0) and LALR parsers always have the same number of states, which is much less than the number of states of an LR(1) parser. For a language like C, an LR(1) parser has thousands of states, whereas the LR(0) and LALR parsers have hundreds of states.

Note that LALR is the parser of choice for many language parsers and for tools like YACC.

If we examine states I4 and I7 in the LR(1) parser (automaton), they are the same except that the lookaheads are different. In LALR, we merge these states to form a single state I47 which has the same items with the union of lookaheads {c, d, $}. Similarly, I36 and I89 are formed, as given in figure 2.8.
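The merging step can be sketched as grouping LR(1) states by their core (the items with lookaheads dropped) and taking the union of lookaheads per core. This is an illustrative sketch (item representation is ours); the two states below stand for I4 and I7 of the sample grammar's LR(1) automaton.

```python
# An LR(1) item is (head, body, dot, lookahead); the core drops the lookahead.
I4 = {('C', ('d',), 1, 'c'), ('C', ('d',), 1, 'd')}
I7 = {('C', ('d',), 1, '$')}

def core(items):
    return frozenset((h, b, d) for h, b, d, _ in items)

def merge(states):
    # Group states by core; items (and hence lookaheads) are unioned.
    merged = {}
    for items in states:
        merged.setdefault(core(items), set()).update(items)
    return list(merged.values())

I47, = merge([I4, I7])   # one merged state: same core, union of lookaheads
```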

Finally, the LALR parsing table is given in figure 2.9.



Fig. 2.9: LALR Parsing Table

The parsing algorithm is just like that for LR(0) and LR(1).
3. TURING MACHINE AND UNDECIDABILITY

The Turing Machine (TM) is the most general and powerful model of an automaton that exists. It is a mathematical model that was created by Alan M. Turing in 1936, well before the era of digital computers, and it influenced the design of modern computers.

3.1 Definition of a TM

A TM consists of a two-way infinite tape divided into cells or squares, each of which can hold a symbol from a tape alphabet Γ. The input is a string over an alphabet Σ. On both sides of this string there is initially an infinite sequence of blanks. The blank (B) is a special tape symbol not found in Σ. The TM has a finite set of states Q. The initial state is q0, and the tape head initially scans the first symbol of the input. The TM makes a move as follows: when in state q on tape symbol X, it changes to state p, replaces X with a possibly different symbol Y, and moves either left (D = L) or right (D = R). The TM is given by M = (Q, Σ, Γ, δ, q0, B, F), where F ⊆ Q is a set of accepting (final) states.

3.2 Examples of TMs

Figure 3.1 shows a TM which recognizes the language {0^n 1^n | n ≥ 0}.

Figure 3.2 shows a TM which changes 0’s to 1’s and 1’s to 0’s in the

input.
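Figure 3.2 itself is not reproduced in these notes, so the transition table in the sketch below is a plausible reconstruction rather than a copy: a single working state sweeps right, flipping each symbol, and halts in an accepting state on the first blank.

```python
B = 'B'   # the blank tape symbol

# Assumed transition table in the spirit of figure 3.2:
# (state, scanned symbol) -> (next state, symbol written, head move)
delta = {
    ('q0', '0'): ('q0', '1', 'R'),
    ('q0', '1'): ('q0', '0', 'R'),
    ('q0', B):   ('qf', B, 'R'),   # qf is the accepting (halting) state
}

def run(tape_input):
    tape = dict(enumerate(tape_input))   # unwritten cells hold blanks
    state, head = 'q0', 0
    while state != 'qf':
        sym = tape.get(head, B)
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    # Return the non-blank tape contents, left to right.
    return ''.join(tape[i] for i in sorted(tape) if tape[i] != B)
```

On input 0110 the machine halts with 1001 on the tape; the same dict-based simulation loop works for any single-tape TM given its δ.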

Fig. 3.1: TM recognizing {0^n 1^n | n ≥ 0}

Fig. 3.2: TM for interchanging 0’s and 1’s



3.3 Programming Techniques for TMs

The programming techniques for TMs can be summarized as:

1. Storage in Finite Control

2. Multiple Tracks

3. Subroutines

3.4 Extensions of TMs

The following are extensions to the TM model.

1. Multi-tape TM: There can be a fixed number of tapes, each with its own head scanning a cell on that tape; each move reads, and may update, all the tapes.

2. Non-deterministic TM: For every state and scanned tape symbol, there can be multiple possible moves; the machine accepts if some sequence of choices leads to acceptance.

Note that these extensions DO NOT increase the power of the TM to recognize languages. However, if a multi-tape TM takes O(T(n)) time on an input, the time taken by the corresponding single-tape TM is O(T(n)^2). The detailed constructions of the equivalent Turing Machines are given in [1].

3.5 Definition of Algorithm

A problem has an algorithm if there is a TM which solves the problem (that is, decides it) and is guaranteed to halt on all inputs. The TM is the algorithm.

3.6 Undecidability

3.6.1 Recursively Enumerable and Recursive

A language is said to be recursively enumerable if there is a TM which accepts exactly its members. A language is recursive if there is a TM which halts on all inputs and accepts exactly the strings in the language. A recursive language is recursively enumerable. A recursively enumerable language need not be recursive.

For an input string not in a recursively enumerable language, the corresponding TM might either halt and reject or never halt.

For an input string not in a recursive language, the corresponding TM is sure to halt and reject.

3.6.2 Codes for Turing Machines

Every TM can be coded in an alphabet such as {0, 1}. We let M stand also for the code of the TM M.

3.6.3 The Diagonal Language Ld

Consider the diagonal language Ld = {M | M does not accept M}. We will show that Ld is not recursively enumerable. Suppose, if possible, that Md is a Turing machine accepting Ld. The question is: does Md belong to Ld? If Md belongs to Ld, then by the definition of Ld, Md does not accept Md; but Md accepts every member of Ld, so Md accepts Md, a contradiction. On the other hand, if Md is not in Ld, then Md accepts Md; but Md accepts only members of Ld, so Md is in Ld, again a contradiction. Hence there can be no such Md. Thus, Ld is not recursively enumerable.

3.6.4 The Universal Turing Machine

A Universal Turing Machine U is a TM which takes as input (the binary coding of) the ordered pair <M, w> and simulates M on input w. The

detailed construction of such a U involves a multi-tape TM and can be

found in [1].

3.6.5 The Universal Language

Lu = {<M, w> | M accepts w} is called the Universal Language. We can use the Universal Turing Machine U described in section 3.6.4: pass <M, w> as input to U, and U will eventually halt and accept if M accepts w. Thus Lu is recursively enumerable.

We know that the diagonal language Ld = {M | M does not accept M} is not recursively enumerable. Suppose Lu were recursive. Then there is a TM which halts on all its inputs and accepts Lu; let this TM be Mu. Then we can construct an algorithm for Ld as follows: on input M, construct <M, M> and pass it to Mu. If Mu accepts, then reject. If Mu rejects, then accept. This would be an algorithm for Ld, making Ld recursive and, in particular, recursively enumerable. But Ld is not recursively enumerable. This contradiction shows that there can be no such Mu: Lu is not recursive.

3.6.6 Reductions

Let us say P1 and P2 are decision problems, and let P1 be non-recursive; then P1 has no algorithm. Given any instance (input) w1 of P1, if we can construct an instance w2 of P2 such that w2 has a yes answer in P2 if and only if w1 has a yes answer in P1, then we say this is a reduction of P1 to P2. If P1 is undecidable, P2 must also be undecidable; otherwise we would have an algorithm for P1 using the reduction (construction) followed by the algorithm for P2.

3.6.7 Undecidable Problems

The problem of membership in the language Lu is undecidable. This is a decision problem for which there can be no algorithm. This problem can be reduced to other problems, showing that those problems too have no algorithm. For example, [1] describes the reduction of Lu to Post's Correspondence Problem, or PCP. PCP is a concrete combinatorial problem which can have no algorithm.


BIBLIOGRAPHY

[1] John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman; Introduction to Automata Theory, Languages, and Computation, Third Edition, Pearson.

[2] Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman; Compilers: Principles, Techniques, & Tools, Second Edition, Pearson.
