Unit-4 Turing Machine (TOC)

The document discusses theory of computation and Turing machines. It provides the vision, mission and course objectives of the department. It then explains Turing machines formally, including their transition function, actions, conventions, examples and instantaneous descriptions. It defines the languages accepted and halted by a Turing machine, and proves their equivalence. It introduces recursively enumerable and recursive languages, and provides examples of recursive languages. Finally, it outlines further topics about Turing machines, including programming tricks, restrictions, extensions and closure properties.

Theory of Computation

UNIT – IV

Theory of Computation 1
Vision of the Department
To become a renowned Centre of Excellence in computer science and
engineering and to produce competent engineers and professionals with
high ethical values, prepared for lifelong learning.
 
Mission of the Department
M1 - To impart outcome-based education for emerging technologies
in the field of computer science and engineering.
M2 - To provide opportunities for interaction between academia and
industry.
M3 - To provide a platform for lifelong learning by embracing
changes in technology.
M4 - To develop an aptitude for fulfilling social responsibilities.

Theory of Computation 2
Course Objectives
CO1: Examine Finite Automata and Regular Expressions.

CO2: Classify regular sets of Regular Grammars.


CO3: Categorize Context-Free Languages and Design Pushdown
Automata.
CO4: Design Turing machine, compare Chomsky hierarchy
languages and analyze Linear bounded automata.

Theory of Computation 3
CO PO Mapping

Theory of Computation 4
Turing-Machine Theory
 The purpose of the theory of Turing
machines is to prove that certain
specific languages have no algorithm that decides membership.
 Start with a language about Turing
machines themselves.
 Reductions are used to prove more
common questions undecidable.

5
Picture of a Turing Machine

[Figure: a finite control (labelled "State") with a head scanning one square of
an infinite tape ... A B C A D ...]

Action: based on the state and the tape symbol under the head: change
state, rewrite the symbol and move the head one square.

Infinite tape with squares containing tape symbols chosen from a finite alphabet.
6
Why Turing Machines?
 Why not deal with C programs or
something like that?
 Answer: You can, but it is easier to prove
things about TM’s, because they are so
simple.
 And yet they are as powerful as any
computer.
• More so, in fact, since they have infinite memory.
7
Then Why Not Finite-State
Machines to Model Computers?
 In principle, you could, but it is not
instructive.
 Programming models don’t build in a
limit on memory.
 In practice, you can go to Fry’s and buy
another disk.
 But finite automata are vital at the chip
level (model-checking).
8
Turing-Machine Formalism
 A TM is described by:
1. A finite set of states (Q, typically).
2. An input alphabet (Σ, typically).
3. A tape alphabet (Γ, typically; contains Σ).
4. A transition function (δ, typically).
5. A start state (q0, in Q, typically).
6. A blank symbol (B, in Γ- Σ, typically).
 All tape except for the input is blank initially.
7. A set of final states (F ⊆ Q, typically).
9
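For concreteness, here is a minimal Python sketch (not part of the slides) of the seven components; all identifiers below are illustrative, and the blank is written "B":

```python
# A minimal sketch of the 7-tuple (Q, Sigma, Gamma, delta, q0, B, F).
# All names here are illustrative, not from the slides.
tm = {
    "Q":     {"q", "f"},          # finite set of states
    "Sigma": {"0", "1"},          # input alphabet
    "Gamma": {"0", "1", "B"},     # tape alphabet (contains Sigma)
    "delta": {},                  # transition function, filled in later
    "q0":    "q",                 # start state
    "B":     "B",                 # blank symbol (in Gamma - Sigma)
    "F":     {"f"},               # set of final states
}
```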
Conventions
 a, b, … are input symbols.
 …, X, Y, Z are tape symbols.
 …, w, x, y, z are strings of input
symbols.
 , ,… are strings of tape symbols.

10
The Transition Function
 Takes two arguments:
1. A state, in Q.
2. A tape symbol in Γ.
 δ(q, Z) is either undefined or a triple of
the form (p, Y, D).
 p is a state.
 Y is the new tape symbol.
 D is a direction, L or R.
11
Actions of the TM
 If δ(q, Z) = (p, Y, D) then, in state q,
scanning Z under its tape head, the
TM:
1. Changes the state to p.
2. Replaces Z by Y on the tape.
3. Moves the head one square in direction D.
 D = L: move left; D = R: move right.

12
Example: Turing Machine
 This TM scans its input right, looking for
a 1.
 If it finds one, it changes it to a 0, goes
to final state f, and halts.
 If it reaches a blank, it changes it to a 1
and moves left.

13
Example: Turing Machine – (2)
 States = {q (start), f (final)}.
 Input symbols = {0, 1}.
 Tape symbols = {0, 1, B}.
 δ(q, 0) = (q, 0, R).
 δ(q, 1) = (f, 0, R).
 δ(q, B) = (q, 1, L).

14
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

... B B 0 0 B B ...

15
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

... B B 0 0 B B ...

16
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

... B B 0 0 B B ...

17
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

... B B 0 0 1 B ...

18
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

... B B 0 0 1 B ...

19
Simulation of TM
δ(q, 0) = (q, 0, R)
δ(q, 1) = (f, 0, R)
δ(q, B) = (q, 1, L)

f

... B B 0 0 0 B ...

No move is possible.
The TM halts and accepts.

20
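The trace above can be reproduced with a minimal Python sketch of this particular TM (illustrative only; the sparse-dictionary tape and the function name are assumptions of the sketch, not part of the slides):

```python
# Minimal sketch of the example TM; illustrative only.
delta = {
    ("q", "0"): ("q", "0", "R"),
    ("q", "1"): ("f", "0", "R"),
    ("q", "B"): ("q", "1", "L"),
}

def run(w, max_steps=20):
    tape = dict(enumerate(w))          # sparse tape; unset cells are blank
    state, head = "q", 0
    for _ in range(max_steps):
        symbol = tape.get(head, "B")
        if (state, symbol) not in delta:
            return state, tape          # no move possible: the TM halts
        state, new_sym, direction = delta[(state, symbol)]
        tape[head] = new_sym
        head += 1 if direction == "R" else -1
    return state, tape

state, tape = run("00")
print(state)   # 'f' -- halts in the final state, so 00 is accepted
```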
Instantaneous Descriptions of
a Turing Machine
 Initially, a TM has a tape consisting of a
string of input symbols surrounded by
an infinity of blanks in both directions.
 The TM is in the start state, and the
head is at the leftmost input symbol.

21
TM ID’s – (2)
 An ID is a string αqβ, where αβ is the
tape between the leftmost and
rightmost nonblanks (inclusive).
 The state q is immediately to the left of
the tape symbol scanned.
 If q is at the right end, it is scanning B.
 If q is scanning a B at the left end, then
consecutive B’s at and to the right of q are
part of β.
22
TM ID’s – (3)
 As for PDA’s we may use symbols ⊦ and
⊦* to represent “becomes in one move”
and “becomes in zero or more moves,”
respectively, on ID’s.
 Example: The moves of the previous TM
are q00 ⊦ 0q0 ⊦ 00q ⊦ 0q01 ⊦ 00q1 ⊦ 000f

23
Formal Definition of Moves
1. If δ(q, Z) = (p, Y, R), then
 αqZβ ⊦ αYpβ
 If Z is the blank B, then also αq ⊦ αYp
2. If δ(q, Z) = (p, Y, L), then
 For any X, αXqZβ ⊦ αpXYβ
 In addition, qZβ ⊦ pBYβ

24
Languages of a TM
 A TM defines a language by final state,
as usual.
 L(M) = {w | q0w⊦*I, where I is an ID
with a final state}.
 Or, a TM can accept a language by
halting.
 H(M) = {w | q0w⊦*I, and there is no
move possible from ID I}.
25
Equivalence of Accepting and
Halting

1. If L = L(M), then there is a TM M’
   such that L = H(M’).
2. If L = H(M), then there is a TM M”
   such that L = L(M”).

26
Proof of 1: Acceptance ->
Halting
 Modify M to become M’ as follows:
1. For each accepting state of M, remove any
moves, so M’ halts in that state.
2. Avoid having M’ accidentally halt.
 Introduce a new state s, which runs to the right
forever; that is δ(s, X) = (s, X, R) for all symbols X.
 If q is not accepting, and δ(q, X) is undefined, let
δ(q, X) = (s, X, R).

27
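As a rough illustration of this construction (not from the slides), assuming the transition table is a dict from (state, symbol) to (state, symbol, direction), one might write:

```python
# Sketch of the acceptance-to-halting construction; names are illustrative.
def make_halting(delta, states, tape_symbols, accepting):
    new_delta = dict(delta)
    # 1. Accepting states lose all outgoing moves, so M' halts there.
    for q in accepting:
        for X in tape_symbols:
            new_delta.pop((q, X), None)
    # 2. Add a sink state s that runs right forever, and send every
    #    non-accepting dead end to it so M' never halts "by accident".
    s = "s_sink"
    for X in tape_symbols:
        new_delta[(s, X)] = (s, X, "R")
    for q in states:
        if q in accepting:
            continue
        for X in tape_symbols:
            new_delta.setdefault((q, X), (s, X, "R"))
    return new_delta
```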
Proof of 2: Halting ->
Acceptance
 Modify M to become M” as follows:
1. Introduce a new state f, the only
accepting state of M”.
2. f has no moves.
3. If δ(q, X) is undefined for any state q and
symbol X, define it by δ(q, X) = (f, X, R).

28
Recursively Enumerable
Languages
 We now see that the classes of
languages defined by TM’s using final
state and halting are the same.
 This class of languages is called the
recursively enumerable languages.
 Why? The term actually predates the
Turing machine and refers to another
notion of computation of functions.
29
Recursive Languages
 An algorithm is a TM that is
guaranteed to halt whether or not it
accepts.
 If L = L(M) for some TM M that is an
algorithm, we say L is a recursive
language.
 Why? Again, don’t ask; it is a term with a
history.
30
Example: Recursive
Languages
 Every CFL is a recursive language.
 Use the CYK algorithm.
 Every regular language is a CFL (think
of its DFA as a PDA that ignores its
stack); therefore every regular
language is recursive.
 Almost anything you can think of is
recursive.
31
More About Turing Machines

“Programming Tricks”
Restrictions
Extensions
Closure Properties
32
Overview
 At first, the TM doesn’t look very
powerful.
 Can it really do anything a computer can?
 We’ll discuss “programming tricks” to
convince you that it can simulate a real
computer.

33
Overview – (2)
 We need to study restrictions on the
basic TM model (e.g., tapes infinite in
only one direction).
 Assuming a restricted form makes it
easier to talk about simulating arbitrary
TM’s.
 That’s essential to exhibit a language that
is not recursively enumerable.
34
Overview – (3)
 We also need to study generalizations
of the basic model.
 Needed to argue there is no more
powerful model of what it means to
“compute.”
 Example: A nondeterministic TM with
50 six-dimensional tapes is no more
powerful than the basic model.
35
Programming Trick: Multiple Tracks
 Think of tape symbols as vectors with k
components.
 Each component chosen from a finite
alphabet.
 Makes the tape appear to have k tracks.
 Let input symbols be blank in all but one
track.
36
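For instance, in code one might represent a k-track tape symbol simply as a k-tuple of components (a hedged sketch; the names are illustrative):

```python
# Illustrative only: a 3-track tape symbol is just a tuple of components.
BLANK  = ("B", "B", "B")    # the TM blank is blank on every track
cell   = ("0", "B", "B")    # input symbol 0, blank on tracks 2 and 3
marked = ("0", "X", "B")    # the same cell with a mark X on track 2
```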
Picture of Multiple Tracks

[Figure: a 3-track tape under state q. The column 0/B/B represents input
symbol 0, the column B/B/B represents the blank, and a column such as
X/Y/Z represents the single tape symbol [X,Y,Z].]

37
Programming Trick: Marking
 A common use for an extra track is to
mark certain positions.
 Almost all cells hold B (blank) in this
track, but several hold special symbols
(marks) that allow the TM to find
particular places on the tape.

38
Marking

[Figure: a two-track tape. Mark track: B X B; data track: W Y Z.
Y is marked (X above it); W and Z are unmarked.]
39
Programming Trick: Caching
in the State
 The state can also be a vector.
 First component is the “control state.”
 Other components hold data from a
finite alphabet.

40
Example: Using These Tricks
 This TM doesn’t do anything terribly
useful; it copies its input w infinitely.
 Control states:
 q: Mark your position and remember the
input symbol seen.
 p: Run right, remembering the symbol and
looking for a blank. Deposit symbol.
 r: Run left, looking for the mark.
41
Example – (2)
 States have the form [x, Y], where x is
q, p, or r and Y is 0, 1, or B.
 Only p uses 0 and 1.
 Tape symbols have the form [U, V].
 U is either X (the “mark”) or B.
 V is 0, 1 (the input symbols) or B.
 [B, B] is the TM blank; [B, 0] and [B, 1]
are the inputs.
42
The Transition Function
 Convention: a and b each stand for
“either 0 or 1.”
 δ([q,B], [B,a]) = ([p,a], [X,a], R).
 In state q, copy the input symbol under the
head (i.e., a ) into the state.
 Mark the position read.
 Go to state p and move right.

43
Transition Function – (2)
 δ([p,a], [B,b]) = ([p,a], [B,b], R).
 In state p, search right, looking for a blank
symbol (not just B in the mark track).
 δ([p,a], [B,B]) = ([r,B], [B,a], L).
 When you find a B, replace it by the
symbol (a ) carried in the “cache.”
 Go to state r and move left.

44
Transition Function – (3)
 δ([r,B], [B,a]) = ([r,B], [B,a], L).
 In state r, move left, looking for the mark.
 δ([r,B], [X,a]) = ([q,B], [B,a], R).
 When the mark is found, go to state q and
move right.
 But remove the mark from where it was.
 q will place a new mark and the cycle
repeats.
45
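The whole table can be generated mechanically; a hedged Python sketch (illustrative only, not from the slides), with states and tape symbols represented as pairs:

```python
# Illustrative sketch of the copy-forever TM's transition table.
# States are pairs (control, cached symbol); tape symbols are pairs (mark, data).
delta = {}
for a in "01":
    # q: cache the symbol, mark the cell, go right in state p.
    delta[(("q", "B"), ("B", a))] = (("p", a), ("X", a), "R")
    for b in "01":
        # p: run right over non-blank data, keeping the cached symbol.
        delta[(("p", a), ("B", b))] = (("p", a), ("B", b), "R")
    # p: on a true blank, deposit the cached symbol and go left in state r.
    delta[(("p", a), ("B", "B"))] = (("r", "B"), ("B", a), "L")
    # r: run left over data, looking for the mark.
    delta[(("r", "B"), ("B", a))] = (("r", "B"), ("B", a), "L")
    # r: on the mark, erase it and go right in state q.
    delta[(("r", "B"), ("X", a))] = (("q", "B"), ("B", a), "R")
```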
Simulation of the TM

[Figure sequence, slides 46–52: the copying TM run on input 01, shown as the
cached state above a two-track tape (mark track over data track).
  State [q,B]:  mark  B B B B   data  0 1 B B
  State [p,0]:  mark  X B B B   data  0 1 B B
  State [p,0]:  mark  X B B B   data  0 1 B B
  State [r,B]:  mark  X B B B   data  0 1 0 B
  State [r,B]:  mark  X B B B   data  0 1 0 B
  State [q,B]:  mark  B B B B   data  0 1 0 B
  State [p,1]:  mark  B X B B   data  0 1 0 B]
52
Semi-infinite Tape
 We can assume the TM never moves
left from the initial position of the head.
 Let this position be 0; positions to the
right are 1, 2, … and positions to the
left are –1, –2, …
 New TM has two tracks.
 Top holds positions 0, 1, 2, …
 Bottom holds a marker, positions –1, –2, …
53
Simulating Infinite Tape by Semi-infinite Tape

[Figure: a two-track semi-infinite tape. The top track holds positions 0 1 2 3 ...;
the bottom track holds positions -1 -2 -3 ..., with a * marker in its first cell.
The state carries q plus a U/L flag recording whether the upper or lower track
is being simulated; directions are reversed on the lower track. The * is put in
the bottom track's first cell at the first move; nothing else needs to be done,
because the remaining lower-track cells are initially B.]
54
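A tiny helper (illustrative only, not from the slides) makes the position mapping explicit:

```python
# Illustrative only: map a position on the two-way infinite tape to a
# (track, cell) pair on the semi-infinite two-track tape.
def to_semi_infinite(pos):
    # Cell 0 of the lower track is reserved for the * end-marker.
    if pos >= 0:
        return ("upper", pos)     # positions 0, 1, 2, ... stay on the top track
    return ("lower", -pos)        # position -k is cell k of the bottom track
```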
More Restrictions – Read in Text
 Two stacks can simulate one tape.
 One holds positions to the left of the head;
the other holds positions to the right.
 In fact, by a clever construction, the
two stacks can be counters = only two
stack symbols, one of which can only
appear at the bottom.
Factoid: Invented by Pat Fischer,
whose main claim to fame is that
he was a victim of the Unabomber.
55
Extensions
 More general than the standard TM.
 But still only able to define the RE
languages.
1. Multitape TM.
2. Nondeterministic TM.
3. Store for key-value pairs.

56
Multitape Turing Machines
 Allow a TM to have k tapes for any
fixed k.
 Move of the TM depends on the state
and the symbols under the head for
each tape.
 In one move, the TM can change state,
write symbols under each head, and
move each head independently.
57
Simulating k Tapes by One
 Use 2k tracks.
 Each tape of the k-tape machine is
represented by a track.
 The head position for each track is
represented by a mark on an additional
track.

58
Picture of Multitape Simulation

[Figure: four tracks on one tape.
  Track 1:  X                     (marks the head position for tape 1)
  Track 2:  ... A B C A C B ...   (contents of tape 1)
  Track 3:  X                     (marks the head position for tape 2)
  Track 4:  ... U V U U W V ...   (contents of tape 2)]

59
Nondeterministic TM’s
 Allow the TM to have a choice of move
at each step.
 Each choice is a state-symbol-direction
triple, as for the deterministic TM.
 The TM accepts its input if any
sequence of choices leads to an
accepting state.

60
Simulating a NTM by a DTM
 The DTM maintains on its tape a
queue of ID’s of the NTM.
 A second track is used to mark certain
positions:
1. A mark for the ID at the head of the
queue.
2. A mark to help copy the ID at the head
and make a one-move change.
61
Picture of the DTM Tape

[Figure: the DTM tape holds ID0 # ID1 # … # IDk # IDk+1 … # IDn # New ID.
A mark X on the second track indicates the ID at the front of the queue;
a mark Y indicates where you are copying IDk with a move; the rear of the
queue is after IDn, where the new ID is being written.]

62
Operation of the Simulating DTM
 The DTM finds the ID at the current
front of the queue.
 It looks for the state in that ID so it can
determine the moves permitted from
that ID.
 If there are m possible moves, it
creates m new ID’s, one for each move,
at the rear of the queue.
63
Operation of the DTM – (2)
 The m new ID’s are created one at a
time.
 After all are created, the marker for the
front of the queue is moved one ID
toward the rear of the queue.
 However, if a created ID has an
accepting state, the DTM instead
accepts and halts.
64
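In spirit the construction is a breadth-first search over IDs; a hedged Python sketch (illustrative only), where `successors` and `is_accepting` are assumed stand-ins for the NTM's move relation and accepting test:

```python
from collections import deque

# Illustrative BFS over NTM IDs; successors() and is_accepting() stand in for
# the NTM's move relation and accepting test, and are assumptions of this sketch.
def ntm_accepts(initial_id, successors, is_accepting, max_ids=10**6):
    queue = deque([initial_id])
    processed = 0
    while queue and processed < max_ids:
        current = queue.popleft()              # ID at the front of the queue
        for nxt in successors(current):        # one new ID per possible move
            if is_accepting(nxt):
                return True                    # accept as soon as any branch accepts
            queue.append(nxt)                  # otherwise enqueue at the rear
        processed += 1
    return False   # caution: a true NTM simulation may simply never halt
```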
Why the NTM -> DTM
Construction Works
 There is an upper bound, say k, on the
number of choices of move of the NTM
for any state/symbol combination.
 Thus, any ID reachable from the initial
ID by n moves of the NTM will be
constructed by the DTM after
constructing at most (k^(n+1) – k)/(k – 1) ID’s.

Sum of k + k^2 + … + k^n
65
Why? – (2)
 If the NTM accepts, it does so in some
sequence of n choices of move.
 Thus the ID with an accepting state will
be constructed by the DTM in some
large number of its own moves.
 If the NTM does not accept, there is no
way for the DTM to accept.

66
Taking Advantage of Extensions
 We now have a really good situation.
 When we discuss construction of
particular TM’s that take other TM’s as
input, we can assume the input TM is
as simple as possible.
 E.g., one, semi-infinite tape, deterministic.
 But the simulating TM can have many
tapes, be nondeterministic, etc.
67
Real Computers
 Recall that, since a real computer has
finite memory, it is in a sense weaker
than a TM.
 Imagine a computer with an infinite
store for name-value pairs.
 Generalizes an address space.

68
Simulating a Name-Value
Store by a TM
 The TM uses one of several tapes to
hold an arbitrarily large sequence of
name-value pairs in the format
#name*value#…
 Mark, using a second track, the left end
of the sequence.
 A second tape can hold a name whose
value we want to look up.
69
Lookup
 Starting at the left end of the store,
compare the lookup name with each
name in the store.
 When we find a match, take what
follows between the * and the next #
as the value.

70
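A hedged Python sketch of this lookup scan (illustrative only, not from the slides), modelling the store as a plain string in the #name*value# format:

```python
# Illustrative only: scan a store of the form "#n1*v1#n2*v2#...#" for a name.
def lookup(store, name):
    for entry in store.strip("#").split("#"):   # each entry is "name*value"
        entry_name, _, value = entry.partition("*")
        if entry_name == name:
            return value                        # take what follows the *
    return None                                 # not found

store = "#x*10#y*hello#"
print(lookup(store, "y"))   # -> 'hello'
```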
Insertion
 Suppose we want to insert name-value
pair (n, v), or replace the current value
associated with name n by v.
 Perform lookup for name n.
 If not found, add n*v# at the end of
the store.

71
Insertion – (2)
 If we find #n*v’#, we need to replace
v’ by v.
 If v is shorter than v’, you can leave
blanks to fill out the replacement.
 But if v is longer than v’, you need to
make room.

72
Insertion – (3)
 Use a third tape to copy everything from
the first tape at or to the right of v’.
 Mark the position of the * to the left of v’
before you do.
 Copy from the third tape to the first,
leaving enough room for v.
 Write v where v’ was.

73
Closure Properties of
Recursive and RE Languages

 Both closed under union, concatenation,
star, reversal, intersection, inverse
homomorphism.
 Recursive closed under difference,
complementation.
 RE closed under homomorphism.

74
Union
 Let L1 = L(M1) and L2 = L(M2).
 Assume M1 and M2 are single-semi-
infinite-tape TM’s.
 Construct 2-tape TM M to copy its input
onto the second tape and simulate the
two TM’s M1 and M2 each on one of the
two tapes, “in parallel.”
75
Union – (2)
 Recursive languages: If M1 and M2 are
both algorithms, then M will always halt
in both simulations.
 Accept if either accepts.
 RE languages: accept if either accepts,
but you may find both TM’s run forever
without halting or accepting.

76
Picture of Union/Recursive

[Figure: block diagram of M. The input w is fed to both M1 and M2.
The Accept outputs of M1 and M2 feed an OR → Accept of M.
The Reject outputs of M1 and M2 feed an AND → Reject of M.
Remember: "Reject" = "halt without accepting".]
77
Picture of Union/RE

[Figure: block diagram of M. The input w is fed to both M1 and M2;
their Accept outputs feed an OR → Accept of M. There is no Reject output.]
78
Intersection/Recursive – Same Idea

[Figure: block diagram of M. The input w is fed to both M1 and M2.
Their Accept outputs feed an AND → Accept of M;
their Reject outputs feed an OR → Reject of M.]

79
Intersection/RE

[Figure: block diagram of M. The input w is fed to both M1 and M2;
their Accept outputs feed an AND → Accept of M. There is no Reject output.]

80
Difference, Complement
 Recursive languages: both TM’s will
eventually halt.
 Accept if M1 accepts and M2 does not.
 Corollary: Recursive languages are closed
under complementation.
 RE Languages: can’t do it; M2 may
never halt, so you can’t be sure input is
in the difference.
81
Concatenation/RE
 Let L1 = L(M1) and L2 = L(M2).
 Assume M1 and M2 are single-semi-infinite-
tape TM’s.
 Construct 2-tape Nondeterministic TM M:
1. Guess a break in input w = xy.
2. Move y to second tape.
3. Simulate M1 on x, M2 on y.
4. Accept if both accept.
82
Concatenation/Recursive
 Can’t use a NTM.
 Systematically try each break w = xy.
 M1 and M2 will eventually halt for each
break.
 Accept if both accept for any one break.
 Reject if all breaks tried and none lead
to acceptance.
83
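A hedged Python sketch of this strategy (illustrative only; accepts1 and accepts2 stand in for the algorithms deciding L1 and L2):

```python
# Illustrative only: decide L1.L2 by trying every break w = x y.
def in_concatenation(w, accepts1, accepts2):
    for i in range(len(w) + 1):            # i = 0 .. len(w): all possible breaks
        x, y = w[:i], w[i:]
        if accepts1(x) and accepts2(y):    # both halt, because they are algorithms
            return True                    # accept if any break works
    return False                           # reject once all breaks are exhausted
```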
Star
 Same ideas work for each case.
 RE: guess many breaks, accept if M1
accepts each piece.
 Recursive: systematically try all ways to
break input into some number of
pieces.

84
Reversal
 Start by reversing the input.
 Then simulate the TM for L, so as to accept w if
and only if wR is in L.
 Works for either Recursive or RE
languages.

85
Inverse Homomorphism
 Apply h to input w.
 Simulate TM for L on h(w).
 Accept w iff h(w) is in L.
 Works for Recursive or RE.

86
Homomorphism/RE
 Let L = L(M1).
 Design NTM M to take input w and
guess an x such that h(x) = w.
 M accepts whenever M1 accepts x.
 Note: won’t work for Recursive
languages.

87
