TOC-Notes All Units

The document provides comprehensive notes on the Theory of Computation, focusing on Finite Automata, including definitions of key concepts such as symbols, strings, languages, and grammar. It explains the operations on languages, types of finite automata (DFA and NFA), and their mathematical models, along with examples and applications in computer science. Additionally, it discusses the acceptability of strings in both DFA and NFA frameworks.

THEORY OF COMPUTATION

NOTES

B. Tech (CSE) V-Semester,


Dept of CSE, MBU
2024-25
MODULE 1

FINITE AUTOMATA

1. The Central Concepts of Automata Theory


Symbol: A symbol is the smallest indivisible entity of a language. It can be a single digit, a
single letter, or a single special character.

Ex: 1, a, b, #

Alphabet: Alphabet is a finite set of symbols. It is denoted by ∑.

Ex: ∑ = {a, b}

∑ = {0, 1, 2, +}

∑ = {#, β, Δ}

String: It is a finite sequence of symbols over an alphabet ∑. The string is denoted by w.

Ex: Let ∑ = {#, β, Δ} ,

the valid set of strings w1 = # , w2 = β , w3 = Δ

w4 = #β , w5 = #Δ , w6 = βΔ,

w7 = #βΔ, w8 = ### etc

Length of string: number of symbols in the string. It is represented as |w|.

Ex: if string w = # , then |w| = 1

Ex: if string w = #βΔ , then |w| = 3

Empty String: A string with zero symbols. It is represented as ε or Ʌ.

Ex: if string w = ε , then |w| = 0


Prefix of a String: It is any number of leading symbols of the string.

Ex: if string w = #βΔ, then

prefixes are ε , # , #β , #βΔ

proper prefixes are ε , # , #β


Suffix of a String: It is any number of trailing symbols of the string.

Ex: if string w = #βΔ, then suffixes

are #βΔ , βΔ , Δ , ε

proper suffixes are βΔ , Δ , ε


Concatenation of Strings: If w1 and w2 are any two strings, then their concatenation is
represented as “w1.w2”. Concatenation is represented with “.”

Ex: if w1 = #β, w2 = Δ, then w1.w2 = #βΔ
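The string operations above map directly onto ordinary programming-language strings. A small illustrative sketch in Python, using ASCII stand-ins 'b' and 'D' for β and Δ:

```python
# String operations over the alphabet {'#', 'b', 'D'} ('b', 'D' stand in for β, Δ).
w = "#bD"

length = len(w)                                 # |w| = 3
prefixes = [w[:i] for i in range(len(w) + 1)]   # ['', '#', '#b', '#bD']
suffixes = [w[i:] for i in range(len(w) + 1)]   # ['#bD', 'bD', 'D', '']
concat = "#b" + "D"                             # w1.w2 = '#bD'

print(length, prefixes, suffixes, concat)
```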


Language: Let the finite automaton M be defined over the alphabet ∑. The formal
language of M is the set of valid strings over the alphabet ∑. It is represented as
L.

Ex: Let M be the finite automaton accepting all strings over ∑ = {#, β, Δ}

then L = { ε , # , β , Δ , #β , #Δ , βΔ , #βΔ , ### , … }


Ex: Let M be the finite automaton accepting all strings of length two over ∑ = {#, β, Δ}

then L = { ## , #β , #Δ , β# , ββ , βΔ , Δ# , Δβ , ΔΔ }

Grammar: A formal grammar defines the language accepted by an automaton. It is a set
of production rules that specifies the structure of the valid strings, or sequences of
symbols, that the automaton can recognize or generate.

Operations on Languages:

a) Union – Let L1 and L2 be two languages. Then their union is a language
containing all strings from both L1 and L2. It is represented as L1 U L2.

Ex: Let L1 = { #β , #Δ } , L2 = { Δ , #βΔ }

L1 U L2 = { #β , #Δ , Δ , #βΔ }

b) Concatenation – Let L1 and L2 be two languages. Then their concatenation is a
language containing all strings of L1 concatenated with strings of L2. It is represented
as L1.L2 = { X.Y | X ϵ L1 , Y ϵ L2 }

Ex: Let L1 = { #β , #Δ } , L2 = { Δ , #βΔ }

then L1.L2 = { #βΔ , #β#βΔ , #ΔΔ , #Δ#βΔ }

c) Closure Operations

i) Kleene Closure (∑*) – The set of all possible strings over ∑, including the empty string ε.

Ex: Let ∑ = {# , β , Δ}

then ∑* = { ε , # , β , Δ , #β , #Δ , βΔ , #βΔ , ### , … }

ii) Positive Closure (∑+) – The set of all possible strings over ∑, excluding the empty string ε.

Ex: Let ∑ = {#, β, Δ}

then ∑+ = { # , β , Δ , #β , #Δ , βΔ , #βΔ , ### , … }
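∑* is infinite, but any finite slice of it (all strings up to a chosen length) can be enumerated mechanically. A sketch, where the function name and the length bound are illustrative:

```python
from itertools import product

def kleene_star(alphabet, max_len):
    """All strings over `alphabet` of length 0..max_len (a finite slice of Sigma*)."""
    return [''.join(p) for n in range(max_len + 1)
            for p in product(alphabet, repeat=n)]

sigma = ['#', 'b', 'D']            # 'b', 'D' stand in for β, Δ
star = kleene_star(sigma, 2)       # includes the empty string ε ('')
plus = [w for w in star if w]      # the positive closure excludes ε

print(len(star), len(plus))        # 1 + 3 + 9 = 13 strings, 12 without ε
```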

2. Problems on Automata Languages


Problem 1: Determine the language over ∑ = {#, β, Δ} containing all strings that
have at least one Δ.

Answer: L = { Δ , #Δ , βΔ , ##Δ , #Δ# , Δ## , ΔΔ# , Δ#Δ , ΔΔΔ , #βΔ , #Δβ , … }

Problem 2: Determine the language over ∑ = {#, β, Δ} containing all strings in which
the 2nd symbol is Δ.

Answer: L = { #Δ , βΔ , ΔΔ , #Δ# , ΔΔ# , ΔΔΔ , #Δβ , … }


3. Introduction to Automata
An automaton is a “system where energy, materials and information are transformed,
transmitted and used for performing some functions without direct participation of man”.
Automaton is a discrete machine. Figure 1.1 shows the model of discrete automaton.

Figure 1.1 Model of Discrete Automaton

The characteristics of discrete automaton are:

(i) Input – At each of the discrete instants of time t1, t2, . . ., tm, the input values
I1, I2, . . ., Ip are applied as input to the model. Each input symbol takes a value from
the input alphabet ∑.

(ii) Output – O1 , O2 , . . ., Oq are the outputs of the model; each output symbol takes
a value from the output alphabet O.

(iii) States – At any instant of time the automaton can be in one of the states q1, q2 , . .
. , qn.

(iv) State relation – At any instant of time, the next state of the automaton is
determined by the present state and the present input.

(v) Output relation – The output is related to either state only or to both the input and
the state. It should be noted that at any instant of time the automaton is in some state.
On ‘reading’ an input symbol, the automaton moves to a next state which is given by the
state relation.

4. Introduction to Finite Automata


FA is a mathematical model of a system with discrete input and output. Finite automata
(FA), often referred to as Finite State Machines (FSMs), represent a fundamental concept
in theoretical computer science and are essential in understanding computation and
formal language theory. They serve as abstract mathematical models of computation
with a finite set of states and a well-defined set of transitions between these states based
on input symbols.

FA serve as fundamental models of computation with a finite set of states and transitions
between these states based on input symbols. These models help computer scientists
and researchers understand the nature of computation, formal languages, and the
fundamental limits of what can be efficiently computed within the realm of regular
languages.
FA finds applications in various domains of computer science, including:

a) Lexical analysis in compilers: They are used to recognize and tokenize strings
based on specific patterns or regular expressions.

b) Modeling and understanding regular languages and their properties.

c) Pattern matching:

 Automata aid in searching for patterns within text or sequences efficiently.

 They are used for scanning large bodies of text, such as web pages, to find
occurrences of words or other patterns.

d) Network protocols:

 Finite automata are employed in protocol specifications and network traffic
analysis.

 They are used for verifying systems of all types that have a finite number of
distinct states, such as communication protocols.

e) Software: for designing and checking the behavior of digital circuits.

f) Hardware: to implement switches.

At its core, a finite automaton consists of:

a) States: These are distinct configurations or conditions in which the automaton


can exist at any given point during its operation.

b) Transitions: These depict the movement between states based on input


symbols. Transitions are governed by a set of rules or a transition function.

c) Accepting States (optional): In certain types of finite automata, there might be


specific states that are designated as accepting or final states, indicating
successful recognition of an input string.

Figure 1.2 Block diagram of the Finite Automaton


A FA operates by transitioning between different states in response to the input symbols
it reads. It follows a set of rules defined by its structure and transition function. Figure
1.2 shows the block diagram of the finite automaton.

Finite Automata can recognize and accept strings that belong to the languages they are
designed for (e.g., regular languages for DFAs and NFAs). The step-by-step explanation
of how a Finite Automaton operates:

a) States: The FA starts in a designated initial state from a set of finite states. Each
state represents a particular configuration or condition of the automaton at a
given moment.

b) Transition Function: The FA has a transition function that defines the rules for
transitioning between states based on the input symbols it receives. This function
specifies the next state the FA moves to when it reads a particular input symbol
while being in a certain state. The transition function can be represented in the
form of a transition table or a transition diagram or a transition relation.

c) Reading Input: As the FA processes an input string symbol by symbol, it


transitions between states according to the transition function. For each symbol it
reads from the input, the automaton moves from its current state to a new state
as determined by the transition function.

d) Acceptance (for DFAs): In the case of a Deterministic Finite Automaton (DFA), if


the input string is completely processed and the automaton ends up in one of its
designated accepting (or final) states, the input string is accepted by the
automaton. Otherwise, if the automaton ends up in a non-accepting state or
cannot transition further, the input string is rejected.

e) Behavior (for NFAs): For a Nondeterministic Finite Automaton (NFA), the


process is similar, but with more flexibility. An NFA might have multiple possible
transitions for a given state and input symbol. It can be in multiple states
simultaneously, and it accepts an input string if there exists at least one path that
leads to an accepting state.

f) Completion of Input: Once the entire input string is processed, the FA halts,
and its final state or states determine whether the input string is accepted or
rejected based on the language it recognizes.

5. Types of Finite Automata


Finite automata are classified into two main types:

a) Deterministic Finite Automata (DFA): These machines accept or reject strings


of symbols by transitioning between states according to a single, unique
transition for each input symbol. DFAs recognize regular languages and are
defined by a precise set of rules for each state and input symbol combination.

b) Nondeterministic Finite Automata (NFA): Unlike DFAs, NFAs can have


multiple possible transitions for a given state and input symbol. They are more
flexible in their behavior, allowing transitions to multiple states simultaneously or
the option to "guess" the correct path. NFAs recognize the same class of
languages as DFAs, but their design and operation are more versatile.
6. Mathematical Model of DFA
A DFA can be represented mathematically as a 5-tuple:

M = ( Q , ∑ , δ , q0 , F )

Where Q: finite nonempty set of states

∑: finite nonempty set of the input symbols called input alphabet

q0: initial state q0 ϵ Q

F: set of final states F ⊆ Q


δ: Transition function Q × ∑ → Q

Every transition in a DFA has only one possible next state.

Example DFA: Let the DFA be M = ( Q , ∑ , δ , q4 , F )

where Q = { q4 , q7 } , ∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X
→ q4            |  q7 |  q4
  q7            |  q4 |  q7

Transition Diagram
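A transition table of this kind can be simulated directly. The following is a minimal sketch of the example DFA above; the dictionary encoding and the function name are illustrative, not part of the notes:

```python
# The example DFA M = (Q, Sigma, delta, q4, F) as a transition dictionary.
delta = {
    ('q4', '+'): 'q7', ('q4', 'X'): 'q4',
    ('q7', '+'): 'q4', ('q7', 'X'): 'q7',
}

def dfa_accepts(w, start='q4', final={'q4'}):
    state = start
    for symbol in w:
        state = delta[(state, symbol)]   # exactly one next state per symbol
    return state in final

print(dfa_accepts('XX+X'))   # False: the run ends in q7, which is not final
print(dfa_accepts('X+X+'))   # True: each '+' toggles the state, ending back in q4
```

Notice that '+' toggles between q4 and q7 while 'X' leaves the state unchanged, so this DFA accepts exactly the strings with an even number of '+' symbols.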

7. Mathematical Model of NFA


A NFA can be represented mathematically as a 5-tuple:

M = ( Q , ∑ , δ , q0 , F )

Where Q: finite nonempty set of states

∑: finite nonempty set of the input symbols called input alphabet

q0: initial state q0 ϵ Q

F: set of final states F ⊆ Q

δ: Transition function Q × ∑ → 2^Q

A transition in an NFA can have more than one possible next state.
Example NFA: Let the NFA be M = ( Q , ∑ , δ , q4 , F )

where Q = { q4 , q7 } , ∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q4 , q7

Transition Table:
Present State/∑ |  +  |  X
→ q4            |  q7 |  q4
  q7            |  q4 |  q4 , q7

Transition Diagram
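Because δ now maps to a set of states, an NFA run is simulated by tracking the whole set of currently reachable states. A sketch for the example NFA above (names are illustrative):

```python
# The example NFA: delta maps (state, symbol) to a *set* of next states.
delta = {
    ('q4', '+'): {'q7'}, ('q4', 'X'): {'q4'},
    ('q7', '+'): {'q4'}, ('q7', 'X'): {'q4', 'q7'},
}

def nfa_accepts(w, start='q4', final={'q4'}):
    states = {start}
    for symbol in w:
        # Union of successor sets over every currently reachable state.
        states = set().union(*(delta.get((q, symbol), set()) for q in states))
    return bool(states & final)   # accept if any reachable state is final

print(nfa_accepts('XX+X'))   # True: the run ends in {q4, q7}, which contains q4
```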

8. Acceptability of Strings
Let the DFA be M = ( Q , ∑ , δ , q0 , F ). A string X is said to be accepted by M if δ (q0, X)
= q where q ϵ F.

A string is said to be accepted by an NFA if there exists at least one complete path that
ends in a final state. Let the NFA be M = ( Q , ∑ , δ , q0 , F ). A string X is said to be
accepted by M if δ (q0, X) contains some final state.

Example Problem 1: For the following DFA, determine the acceptability of the string
XX+X.

Answer: Let the given DFA be M = ( Q , ∑ , δ , q4 , F ) where Q = { q4 , q7 } ,

∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X
→ q4            |  q7 |  q4
  q7            |  q4 |  q7

Transition Diagram

Acceptability of the string “XX+X”

δ ( q4 , XX+X ) = δ ( q4 , X+X ) = δ ( q4 , +X ) = δ ( q7 , X ) = q7

As q7 is not a final state, string “XX+X” is not accepted by the given FA.

Example Problem 2: For the following NFA, determine the acceptability of the string
XX+X.

Answer: Let the given NFA be M = ( Q , ∑ , δ , q4 , F ) where Q = { q4 , q7 } ,

∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q4 , q7

Transition Table:
Present State/∑ |  +  |  X
→ q4            |  q7 |  q4
  q7            |  q4 |  q4 , q7

Transition Diagram

Acceptability of the string “XX+X”


δ ( q4 , XX+X ) = δ ( q4 , X+X ) = δ ( q4 , +X ) = δ ( q7 , X ) = { q4 , q7 }

As the resulting set contains the final state q4, string “XX+X” is accepted by the given NFA.

9. Problems on Design of FA
Problem 1: Design a DFA that recognizes the language over { v , p } containing strings
that start with 'v' and have an odd length.

Answer: Let the FA that recognizes the language over { v , p } containing strings that
start with 'v' and have an odd length be

M = ( Q , ∑ , δ , q4 , F ) where Q = { q4 , q5 , q6 } ,

∑ = { v , p } , q4 is the initial state , F = { q5 }

δ is defined as

Transition Relation:
δ ( q4 , v ) = q5
δ ( q5 , v ) = q6
δ ( q5 , p ) = q6
δ ( q6 , v ) = q5
δ ( q6 , p ) = q5

(δ ( q4 , p ) is left undefined; strings that start with 'p' are rejected.)

Transition Table:
Present State/∑ |  v  |  p
→ q4            |  q5 |  –
  q5            |  q6 |  q6
  q6            |  q5 |  q5

Transition Diagram

Acceptability of the string “v p v p p ”

δ ( q4 , v p v p p ) = δ ( q5 , p v p p ) = δ ( q6 , v p p ) = δ ( q5 , pp ) = δ ( q6 , p ) = q5

As q5 is a final state, string “v p v p p ” is accepted by the given FA.
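The same design can be checked programmatically. A sketch with an explicit dead state added for strings that start with 'p', since the notes leave δ(q4, p) undefined (the state name 'dead' is illustrative):

```python
# DFA for: starts with 'v' and has odd length, with an explicit dead state.
delta = {
    ('q4', 'v'): 'q5', ('q4', 'p'): 'dead',
    ('q5', 'v'): 'q6', ('q5', 'p'): 'q6',
    ('q6', 'v'): 'q5', ('q6', 'p'): 'q5',
    ('dead', 'v'): 'dead', ('dead', 'p'): 'dead',
}

def accepts(w):
    state = 'q4'
    for c in w:
        state = delta[(state, c)]
    return state == 'q5'

print(accepts('vpvpp'))   # True: starts with 'v', length 5 is odd
print(accepts('vp'))      # False: even length
print(accepts('pvp'))     # False: starts with 'p'
```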

10. Equivalence of NFA and DFA

Note: δ’ ( [q0 , q1 , … , qn ] , a ) = δ ( q0 , a ) ∪ δ ( q1 , a ) ∪ … ∪ δ ( qn , a )

Example Problem: Convert the following NFA to equivalent DFA.


Answer: Let the given NFA be M = ( Q , ∑ , δ , q3 , F ) where Q = { q3 , q5 } ,

∑ = { + , X } , q3 is the initial state , F = { q3 }

δ is defined as

Transition Relation:
δ ( q3 , + ) = q5
δ ( q3 , X ) = q3
δ ( q5 , + ) = q3
δ ( q5 , X ) = q3 , q5

Transition Table:
Present State/∑ |  +  |  X
→ q3            |  q5 |  q3
  q5            |  q3 |  q3 , q5

Transition Diagram

Let the equivalent DFA be M’ = ( Q’ , ∑ , δ’ , [q3] , F’ )

where Q’ = { [q3] , [q5] , [q3 , q5] } , ∑ = { + , X } , [q3] is the initial state ,

F’ = { [q3] , [q3 , q5] }

δ’ is defined as

Transition Relation:
δ’ ( [q3] , + ) = [q5]
δ’ ( [q3] , X ) = [q3]
δ’ ( [q5] , + ) = [q3]
δ’ ( [q5] , X ) = [q3 , q5]
δ’ ( [q3 , q5] , + ) = [q3 , q5]
δ’ ( [q3 , q5] , X ) = [q3 , q5]

Transition Table:
Present State/∑ |  +         |  X
→ [q3]          |  [q5]      |  [q3]
  [q5]          |  [q3]      |  [q3 , q5]
  [q3 , q5]     |  [q3 , q5] |  [q3 , q5]


Transition Diagram
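The subset construction used in this example can be written as a short generic procedure: starting from the set {q0}, repeatedly compute the union of NFA successor sets for each input symbol until no new subsets appear. A sketch applied to the NFA of this example (function and variable names are illustrative):

```python
from itertools import chain

# The example NFA's transition function.
nfa_delta = {
    ('q3', '+'): {'q5'}, ('q3', 'X'): {'q3'},
    ('q5', '+'): {'q3'}, ('q5', 'X'): {'q3', 'q5'},
}
alphabet = ['+', 'X']

def subset_construction(start, delta, alphabet):
    """Build the reachable DFA states (frozensets of NFA states) and their moves."""
    dfa, todo = {}, [frozenset({start})]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            # delta'(S, a) = union of delta(q, a) over all q in S.
            T = frozenset(chain.from_iterable(delta.get((q, a), set()) for q in S))
            dfa[S][a] = T
            todo.append(T)
    return dfa

dfa = subset_construction('q3', nfa_delta, alphabet)
for S, row in sorted(dfa.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(S), {a: sorted(T) for a, T in row.items()})
```

Only three subsets are reachable, [q3], [q5], and [q3, q5], matching the table above.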

11. NFA with ε-transitions (NFA-ε)


It is an NFA including transitions on the empty input symbol ε.
An NFA-ε can be represented mathematically as a 5-tuple:

M = ( Q , ∑ , δ , q0 , F )

Where Q: finite nonempty set of states

∑: finite nonempty set of the input symbols called input alphabet

q0: initial state q0 ϵ Q

F: set of final states F ⊆ Q

δ: Transition function Q × (∑ ∪ {ε}) → 2^Q

Example NFA-ε: Let the NFA-ε be M = ( Q , ∑ , δ , q4 , F )

where Q = { q4 , q7 } , ∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , ε ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X  |  ε
→ q4            |  –  |  q4 |  q7
  q7            |  q4 |  q7 |  –

Transition Diagram
12. Conversion of NFA-ε to NFA

Note: ε-closure(q) = set of all states p such that there is a path from q to p with label ε.
ε-closure(q) includes “q” itself.
δ̂ ( q , ε ) = ε-closure(q)

δ̂ ( q , a ) = ε-closure( δ ( δ̂ ( q , ε ) , a ) )
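ε-closure(q) is just graph reachability along ε-transitions only. A minimal sketch, where `eps_delta` is an assumed dictionary mapping a state to its set of ε-successors:

```python
# Compute ε-closure(q) by depth-first search over ε-transitions only.
def eps_closure(state, eps_delta):
    closure, stack = {state}, [state]
    while stack:
        q = stack.pop()
        for p in eps_delta.get(q, set()):
            if p not in closure:
                closure.add(p)
                stack.append(p)
    return closure

eps_delta = {'q4': {'q7'}}                   # the example's single ε-transition
print(sorted(eps_closure('q4', eps_delta)))  # ['q4', 'q7'] — includes q4 itself
print(sorted(eps_closure('q7', eps_delta)))  # ['q7']
```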

Example Problem: Convert the following NFA-ε to equivalent NFA without ε-transitions.

Answer: Let the NFA-ε be M = ( Q , ∑ , δ , q4 , F )

where Q = { q4 , q7 } , ∑ = { + , X } , q4 is the initial state , F = { q4 }

δ is defined as

Transition Relation:
δ ( q4 , ε ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X  |  ε
→ q4            |  –  |  q4 |  q7
  q7            |  q4 |  q7 |  –

Transition Diagram

Step 1: Find ε-closure(q) for all states q ϵ Q

ε-closure(q4) = δ̂ ( q4 , ε ) = { q4 , q7 }

ε-closure(q7) = δ̂ ( q7 , ε ) = { q7 }

Step 2: Find the mapping function δ̂ ( q , a ) for all states q ϵ Q and all inputs a ϵ ∑


𝛿̂(𝑞4 , +) = ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒 (𝛿(𝛿̂(𝑞4, ε), +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(ε − closure(𝑞4), +))


= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿({𝑞4 , 𝑞7}, +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(𝑞4, +) ∪ 𝛿(𝑞7, +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( ∅ ∪ { 𝑞4 })

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( { 𝑞4 })

= { 𝑞4 , 𝑞7 }

𝛿̂(𝑞4 , 𝑋) = ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒 (𝛿(𝛿̂(𝑞4, ε), 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(ε − closure(𝑞4), 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿({𝑞4 , 𝑞7}, 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(𝑞4, 𝑋) ∪ 𝛿(𝑞7, 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( { 𝑞4 } ∪ { 𝑞7 })

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( { 𝑞4 , 𝑞7 })

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( 𝑞4) ∪ ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( 𝑞7)

= { 𝑞4 , 𝑞7 } ∪ { 𝑞7 }

= { 𝑞4 , 𝑞7 }

𝛿̂(𝑞7 , +) = ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒 (𝛿(𝛿̂(𝑞7, ε), +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(ε − closure(𝑞7), +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿({𝑞7} , +))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( { 𝑞4 })

= { 𝑞4 , 𝑞7 }

𝛿̂(𝑞7 , 𝑋) = ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒 (𝛿(𝛿̂(𝑞7, ε), 𝑋))


= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿(ε − closure(𝑞7), 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒(𝛿({𝑞7} , 𝑋))

= ε − 𝑐𝑙𝑜𝑠𝑢𝑟𝑒( { 𝑞7 })

= { 𝑞7 }

Let the NFA without ε-transitions be M’ = ( Q , ∑ , δ̂ , q4 , F’ )

where Q = { q4 , q7 } , ∑ = { + , X } , q4 is the initial state ,

F’ = F ∪ { q | ε-closure(q) contains a state of F } = { q4 } ∪ { q4 } = { q4 }

δ̂ is defined as

Transition Relation:
δ̂ ( q4 , + ) = { q4 , q7 }
δ̂ ( q4 , X ) = { q4 , q7 }
δ̂ ( q7 , + ) = { q4 , q7 }
δ̂ ( q7 , X ) = { q7 }

Transition Table:
Present State/∑ |  +       |  X
→ q4            |  q4 , q7 |  q4 , q7
  q7            |  q4 , q7 |  q7

Transition Diagram

13. FA with Output


Finite Automata with outputs are extensions of traditional Finite Automata (FA) where
transitions not only depend on the current state and input symbol but also produce
outputs during state transitions. These outputs could be associated with transitions or
states themselves. FA with output is of two types - Mealy and Moore machines.

a) Moore Machine: The output is associated with each state rather than with
transitions. Upon entering a state, the machine produces an output determined
by that state.

b) Mealy Machine: Outputs are associated with transitions, meaning the output
depends on both the current state and the input symbol. Outputs are produced
when a transition occurs from one state to another due to an input symbol.

Moore Machine:

In Moore machine, output function Z(t) depends only on the present state q(t) and is
independent of the current input. "t” is a discrete instant of time.

Z(t) = λ (q(t) )

A Moore machine is represented mathematically using a 6-tuple:

M = ( Q , ∑ , Δ , δ , λ , q0 )

Where Q: finite nonempty set of states

∑: finite nonempty set of the input symbols called the input alphabet

Δ: finite nonempty set of the output symbols called the output alphabet

q0: initial state q0 ϵ Q


δ: Transition function Q × ∑ → Q

λ: Output function Q → Δ

For a Moore machine if the input string is of length “n”, the output string is of length “n+1”.

Example Moore Machine:

Let the Moore Machine be M = ( Q , ∑ , Δ , δ , λ , q4 )

where Q = { q4 , q7 } , ∑ = { + , X } , Δ = { P , M } , q4 is the initial state

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X  | Output λ
→ q4            |  q7 |  q4 |  P
  q7            |  q4 |  q7 |  M

Transition Diagram

Output Function λ

λ ( q4 ) = P

λ ( q7 ) = M

Problem: For the above Moore Machine, determine the output for the input string “XX+X”.

δ ( q4 , XX+X ) = δ ( q4 , X+X ) = δ ( q4 , +X ) = δ ( q7 , X ) = q7

States visited: q4 → q4 → q4 → q7 → q7
Outputs:        λ(q4) = P , λ(q4) = P , λ(q4) = P , λ(q7) = M , λ(q7) = M

Output for the given input string “XX+X” is “PPPMM”
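The trace above can be sketched as a short simulation. Since the output is attached to states, a length-n input produces a length-(n+1) output, one symbol per state visited (including the initial state):

```python
# Simulating the example Moore machine: output is a function of the state.
delta = {('q4', '+'): 'q7', ('q4', 'X'): 'q4',
         ('q7', '+'): 'q4', ('q7', 'X'): 'q7'}
lam = {'q4': 'P', 'q7': 'M'}

def moore_output(w, start='q4'):
    state, out = start, [lam[start]]   # initial state emits its output first
    for c in w:
        state = delta[(state, c)]
        out.append(lam[state])
    return ''.join(out)

print(moore_output('XX+X'))   # 'PPPMM'
```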

Mealy Machine:

In Mealy machine, output function Z(t) depends on the present state q(t) and the current
input x(t).

Z(t) = λ ( q(t) , x(t) )

A Mealy machine is represented mathematically using a 6-tuple:

M = ( Q , ∑ , Δ , δ , λ , q0 )

Where Q: finite nonempty set of states


∑: finite nonempty set of the input symbols called the input alphabet

Δ: finite nonempty set of the output symbols called the output alphabet

q0: initial state q0 ϵ Q

δ: Transition function Q × ∑ → Q

λ: Output function Q × ∑ → Δ

For a Mealy machine if the input string is of length “n”, the output string is of length “n”.

Example Mealy Machine:

Let the Mealy Machine be M = ( Q , ∑ , Δ , δ , λ , q4 )

where Q = { q4 , q7 } , ∑ = { + , X } , Δ = { P , M } , q4 is the initial state

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
                |  Input +            |  Input X
Present State/∑ | Next State | Output | Next State | Output
→ q4            |  q7        |  M     |  q4        |  P
  q7            |  q4        |  P     |  q7        |  M

Transition Diagram

Output Function λ

λ ( q4 , + ) = M

λ ( q4 , X ) = P

λ ( q7 , + ) = P

λ ( q7 , X ) = M

Problem: For the above Mealy Machine, determine the output for the input string “XX+X”.

δ ( q4 , XX+X ) = δ ( q4 , X+X ) = δ ( q4 , +X ) = δ ( q7 , X ) = q7

Outputs: λ ( q4 , X ) = P , λ ( q4 , X ) = P , λ ( q4 , + ) = M , λ ( q7 , X ) = M

Output for the given input string “XX+X” is “PPMM”
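The Mealy trace can be sketched the same way. Here the output is attached to transitions, so a length-n input produces a length-n output:

```python
# Simulating the example Mealy machine: output depends on (state, input).
delta = {('q4', '+'): 'q7', ('q4', 'X'): 'q4',
         ('q7', '+'): 'q4', ('q7', 'X'): 'q7'}
lam = {('q4', '+'): 'M', ('q4', 'X'): 'P',
       ('q7', '+'): 'P', ('q7', 'X'): 'M'}

def mealy_output(w, start='q4'):
    state, out = start, []
    for c in w:
        out.append(lam[(state, c)])   # emit on the transition, then move
        state = delta[(state, c)]
    return ''.join(out)

print(mealy_output('XX+X'))   # 'PPMM'
```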


14. Converting Moore Machine to Mealy Machine
Problem: Convert the following Moore Machine to Mealy Machine.

Answer:

Let the given Moore Machine be M = ( Q , ∑ , Δ , δ , λ , q4 )

where Q = { q4 , q7 } , ∑ = { + , X } , Δ = { P , M } , q4 is the initial state

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
Present State/∑ |  +  |  X  | Output λ
→ q4            |  q7 |  q4 |  P
  q7            |  q4 |  q7 |  M

Transition Diagram

Output Function λ

λ ( q4 ) = P

λ ( q7 ) = M

Let the equivalent Mealy Machine be M’ = ( Q , ∑ , Δ , δ , λ’ , q4 )

where Q = { q4 , q7 } , ∑ = { + , X } , Δ = { P , M } , q4 is the initial state

δ is unchanged; the output of each transition is the Moore output of the state the
transition enters: λ’ ( q , a ) = λ ( δ ( q , a ) )

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
                |  Input +            |  Input X
Present State/∑ | Next State | Output | Next State | Output
→ q4            |  q7        |  M     |  q4        |  P
  q7            |  q4        |  P     |  q7        |  M

Transition Diagram

Output Function λ’

λ’ ( q4 , + ) = λ ( q7 ) = M

λ’ ( q4 , X ) = λ ( q4 ) = P

λ’ ( q7 , + ) = λ ( q4 ) = P

λ’ ( q7 , X ) = λ ( q7 ) = M

15. Converting Mealy Machine to Moore Machine


Problem: Convert the following Mealy Machine to Moore Machine.

Answer:

Let the given Mealy Machine be M = ( Q , ∑ , Δ , δ , λ , q4 )

where Q = { q4 , q7 } , ∑ = { + , X } , Δ = { P , M } , q4 is the initial state

δ is defined as

Transition Relation:
δ ( q4 , + ) = q7
δ ( q4 , X ) = q4
δ ( q7 , + ) = q4
δ ( q7 , X ) = q7

Transition Table:
                |  Input +            |  Input X
Present State/∑ | Next State | Output | Next State | Output
→ q4            |  q7        |  M     |  q4        |  M
  q7            |  q4        |  P     |  q7        |  M

Transition Diagram

Output Function λ

λ ( q4 , + ) = M

λ ( q4 , X ) = M

λ ( q7 , + ) = P

λ ( q7 , X ) = M

Let the equivalent Moore Machine be M’ = ( Q’ , ∑ , Δ , δ’ , λ’ , q4’ )

δ’ is defined as

Transition Table (after splitting q4 into q4M and q4P, one copy per incoming output):

                |  Input +            |  Input X
Present State/∑ | Next State | Output | Next State | Output
→ q4M           |  q7        |  M     |  q4M       |  M
  q4P           |  q7        |  M     |  q4M       |  M
  q7            |  q4P       |  P     |  q7        |  M

Transition Table (after adding the new initial state q4’):

                |  Input +            |  Input X
Present State/∑ | Next State | Output | Next State | Output
→ q4’           |  q7        |  M     |  q4M       |  M
  q4M           |  q7        |  M     |  q4M       |  M
  q4P           |  q7        |  M     |  q4M       |  M
  q7            |  q4P       |  P     |  q7        |  M

Moore Transition Table:

Present State/∑ |  +   |  X   | Output λ
→ q4’           |  q7  |  q4M |  ε
  q4M           |  q7  |  q4M |  M
  q4P           |  q7  |  q4M |  P
  q7            |  q4P |  q7  |  M

Transition Relation Transition Diagram

δ ( q4’ , + ) = q7

δ ( q4’ , X ) = q4M

δ ( q4M , + ) = q7

δ ( q4M , X ) = q4M

δ ( q4P , + ) = q7

δ ( q4P , X ) = q4M

δ ( q7 , + ) = q4P

δ ( q7 , X ) = q7

where Q’ = { q4’ , q4M , q4P , q7 } , ∑ = { + , X } , Δ = { P , M } , q4’ is the initial state

Output Function λ’

λ’ ( q4’ ) = ε

λ’ ( q4M ) = M

λ’ ( q4P ) = P

λ’ ( q7 ) = M
Module-II
Regular Expressions, Grammar and Languages
Finite automata can only recognize regular languages, which are a particularly
restricted category of languages.

Regular Expressions
Regular languages are denoted by regular expressions.

Identities for regular expressions –

Regular expressions satisfy a number of identities. For regular expressions p, q, and r:

1. ∅ + p = p
2. ∅.p = p.∅ = ∅
3. ε.p = p.ε = p
4. ε* = ε and ∅* = ε
5. p + p = p
6. p* p* = p*
7. p (q + r) = pq + pr and (p + q) r = pr + qr
8. (p + q)* = (p* q*)* = (p* + q*)*


Applications of Regular expressions:

1. Defining Regular Languages:


Regular expressions are used to define regular languages. A regular language is a
type of formal language that can be recognized by a finite automaton, and regular
expressions provide a concise and expressive way to represent such languages.
2. Finite Automata:
Regular expressions and finite automata are closely related. Regular expressions can be
converted into equivalent finite automata, and vice versa. This connection is formalized
by the Kleene's Theorem, which states that a language is regular if and only if it can be
described by a regular expression or recognized by a finite automaton.

3. Pattern Matching:
Regular expressions are used for pattern matching within strings. In the context of formal
languages, this involves checking if a given string belongs to a particular regular
language defined by a regular expression.
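Practical regex engines implement exactly this membership check. A small illustration with Python's `re` module (its syntax is a superset of the formal notation; the identifier pattern below is a typical lexical-analysis example, not taken from the notes):

```python
import re

# A typical identifier token pattern: a letter or underscore, then
# any number of letters, digits, or underscores.
token = re.compile(r'[A-Za-z_][A-Za-z0-9_]*')

# fullmatch asks whether the *entire* string belongs to the language.
print(bool(token.fullmatch('count_1')))   # True
print(bool(token.fullmatch('1count')))    # False: cannot start with a digit
```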

4. Lexical analysis:
Regular expressions are commonly used in the design of lexical analyzers (lexers) for
compilers. Lexers are responsible for breaking down the source code into tokens, and
regular expressions are used to describe the patterns of valid tokens in the programming
languages.

5. Text Editors and Search algorithms:


Regular expressions are applied in text editors and search algorithms to efficiently find
and manipulate patterns in textual data. This application is related to the concept of
regular languages and the efficient algorithms for pattern matching.

6. String Matching algorithms:


Regular expressions are used as patterns in string matching algorithms. These algorithms,
such as the Knuth-Morris-Pratt algorithm or the Boyer-Moore algorithm, are used to find
occurrences of a pattern within a text.

7. Network Protocol Specification:


Regular expressions are used in the specification of network protocols and in the design
of protocol analyzers. They help define the syntactic rules for valid messages or packets
in a network communication protocol.

8. Database Query Languages:


Regular expressions are integrated into some database query languages to provide
expressive pattern matching capabilities when searching or filtering data.

9. Automated String Processing:


Regular expressions are used in various automated string processing tasks, such as text
processing, data validation, and information retrieval.

They provide a concise and powerful notation for describing patterns and sets of strings
within the context of formal language theory.

Pumping Lemma:
Two Pumping Lemmas have been defined for the following:

1. Context – Free Languages


2. Regular Languages
Pumping Lemma for Regular Languages

The lemma states that for every regular language L there is a constant n such that any
string w ϵ L with |w| ≥ n can be written as w = xyz with |xy| ≤ n and |y| ≥ 1, where the
substring y can be "pumped" (repeated any number of times) and the resulting string
xyⁱz stays in L.

The lemma is used as evidence of non-regularity: every regular language satisfies the
Pumping Lemma, so if a language has at least one string whose pumped versions fall
outside L, that language is definitely not regular.

The converse is not always true: a language that satisfies the Pumping Lemma is not
necessarily regular.

Ex: Prove L01 = {0n1n | n ≥ 0} is not regular.

Assume L01 is regular, and let n be the constant given by the Pumping Lemma. Choose
w = 0n1n ϵ L01. Any split w = xyz with |xy| ≤ n and |y| ≥ 1 places y entirely within the
leading 0s, so y = 0k for some k ≥ 1. Pumping y once gives xy²z = 0n+k1n, which has
more 0s than 1s and hence is not in L01. This contradicts the Pumping Lemma, so L01
is not regular.

Applications of Pumping Lemma:

1. Proving Non-Regularity:
One of the primary applications of the Pumping Lemma is to prove that a given language
is not regular. If a language cannot satisfy the conditions of the Pumping Lemma, then it
cannot be regular

2. Identifying Non-Regular Languages


By applying the Pumping Lemma to a language, one can identify certain properties that
are not satisfied if the language is not regular. This helps in understanding the limitations
of regular language.

3. Compiler Design:
In the context of compiler design, the Pumping Lemma can be applied to analyze the
regularity of the language defined by the lexical structure of a programming language. It
helps ensure that the lexical analyzer can efficiently recognize valid tokens.

The Pumping Lemma is a powerful tool in formal language theory, and its applications
extend to various areas, including language design, compiler construction, and the
theoretical analysis of computational complexity.

Equivalence of Two Regular Expressions

Equivalence of two finite automata


If two automata A and B accept precisely the same set of input strings, then they are
considered equivalent. If automata A and B are equivalent, then

i. If a path exists from the initial state of A to a final state of A, labeled a1a2..ak,
then a path exists from the initial state of B to a final state of B, also labeled
a1a2..ak.
ii. If a path exists from the initial state of B to a final state of B, labeled b1b2..bj,
then a path exists from the initial state of A to a final state of A, also labeled
b1b2..bj.

Minimization of DFA

Conversion of a DFA to an equivalent DFA with the fewest possible states is known as
DFA minimization, also called optimization of DFA. Partitioning algorithms are used
for DFA minimization.

Assume a DFA D < Q, Σ, q0, δ, F > that recognizes the language L. Then, for language
L, the reduced DFA D' < Q', Σ, q0, δ', F' > can be built as follows:

Step 1: Q (the set of states) is split into two sets: one containing all final states, the
other containing all non-final states. This partition is named P0.

Step 2: Set k = 1.

Step 3: Partition the sets of Pk-1 to find Pk. Consider every possible pair of states in
each set of Pk-1. If two states inside a set can be distinguished from one another,
divide the set into distinct sets in Pk.

Step 4: If Pk equals Pk-1 (no partition changed), stop.

Step 5: The states in each set are merged into a single state. The sets in Pk become
the states of the reduced DFA.

How is distinguishability between two states in partition Pk determined?
Two states (qi, qj) are distinguishable in partition Pk if δ (qi, a) and δ (qj, a) are in
different sets of partition Pk-1 for some input symbol a.

Ex: Examine the DFA depicted in Figure 1.

Step 1. P0 contains two sets. The final states q1, q2, and q4 will be in one set, and
the remaining states will be in the other.

Step 2. Now determine whether or not sets of partition P0 can be partitioned in order to
compute P1:

As a result, q1 and q2, q1 and q4, and q2 and q4 are pairwise indistinguishable.

Consequently, P1 will not partition the set { q1, q2, q4 }.

Similarly, q0 and q3 are indistinguishable and are merged into a single state. Figure 2 displays the
minimized DFA corresponding to the DFA of Figure 1.

Ex:
Examine the provided DFA A. Which statement below is not true?
1. The complement of L(A) is context-free.
2. L(A) = L((11*0 + 0) (0 + 1)* 0* 1*)
3. A is the minimal DFA for the language accepted by A.
4. A accepts all strings over {0, 1} of length at least 2.

Solution:

According to statement 4, A accepts only strings of length at least two. However, A accepts 0
(which has length 1). Thus statement 4 is false.

According to statement 3, the DFA is minimal. We verify with the algorithm given above:
P1 equals P0, so P1 is the final partition. In it, q0 and q1 lie in the same set and can be
combined, so the minimal DFA has only two states. As a result, statement 3 is likewise not true.
Thus, (D) is the correct choice.
Module 3

Context Free Grammars

3.1 CFG

A CFG is a formal grammar used to describe the syntax or structure of a language in terms of
production rules. These rules define how strings of symbols can be generated in the language.
Context-free grammars are widely used in computer science for tasks such as defining the syntax
of programming languages, parsing natural language, and modeling biological sequences.

Components of CFG:

Symbols (Terminals):

These are the basic units of the language being generated. Terminals are the symbols that appear
in the strings generated by the CFG.

Non-terminals (Variables):

Non-terminals are placeholders representing syntactic categories or groups of symbols.

Production Rules:

Production rules specify how non-terminals can be expanded into sequences of terminals and/or
other non-terminals. Each production rule consists of a non-terminal symbol (left-hand side) and
a sequence of symbols (right-hand side).

Start Symbol:

It is a special non-terminal representing the initial symbol from which the derivation of strings
begins. It serves as the root of the derivation tree and marks the starting point for generating
strings in the language.

Formal Definition of a CFG:

A CFG is a 4-tuple G = (V, T, P, S), where V is a finite set of non-terminals (variables), T is a
finite set of terminals with V ∩ T = Ф, P is a finite set of production rules of the form A → α
with A ∈ V and α ∈ (V ∪ T)*, and S ∈ V is the start symbol.
Parsing:

Context-free grammars are used in parsing algorithms to analyze and recognize the syntactic
structure of strings according to the CFG rules. Parsing involves determining whether a given
string can be generated by the CFG and constructing a derivation tree to represent the syntactic
structure of the string.
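As a small illustration of these components, here is a sketch (the grammar and its dictionary representation are illustrative choices) of a CFG for {a^n b^n | n >= 1} together with a leftmost derivation:

```python
# A CFG represented as a dict, and a hand-rolled leftmost derivation.
# Grammar: S -> a S b | ab, generating {a^n b^n | n >= 1}.

grammar = {"S": [["a", "S", "b"], ["a", "b"]]}

def derive(steps):
    """Apply the given sequence of production indices to the start symbol,
    expanding the leftmost non-terminal at each step."""
    sentential = ["S"]
    for idx in steps:
        # Find the leftmost non-terminal in the sentential form.
        pos = next(i for i, sym in enumerate(sentential) if sym in grammar)
        sentential = sentential[:pos] + grammar[sentential[pos]][idx] + sentential[pos + 1:]
    return "".join(sentential)

# S => aSb => aaSbb => aaabbb
print(derive([0, 0, 1]))  # -> aaabbb
```

Each index in `steps` selects which production of the leftmost non-terminal to apply, so the list records the derivation itself.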

3.2 Parse trees

Parse trees play a crucial role in understanding and analyzing the syntactic structure of strings
generated by formal grammars.

A parse tree (PT), also known as a derivation tree, illustrates the syntactic structure of a string
according to the production rules of a formal grammar. Each leaf of the PT corresponds to a
symbol of the input string, each internal node to a non-terminal, and each edge represents one
production rule application during the derivation process.

Components of PT:

Root Node:

The topmost node of the PT represents the start symbol of the CFG, from which the derivation of
the string begins.

Internal Nodes:

Internal nodes of the parse tree represent the non-terminal symbols of the CFG. Each internal
node is labeled with a non-terminal symbol, and its children correspond to the symbols derived
from that non-terminal.

Leaf Nodes:

Leaf nodes of the PT are the terminal symbols of the CFG. Each leaf node is a terminal symbol
from the input string.

Edges:

Edges connecting nodes in the PT represent applications of production rules during the derivation
process. Each edge is labeled with the production rule used to derive the child node from the
parent node.

Construction of Parse Trees:

Start Symbol: It forms the root node.

Expansion: Apply productions recursively to expand non-terminal symbols into sequences of
terminals / non-terminals until all symbols in the string are derived.

Terminal Placement: Label leaf nodes with the corresponding terminal symbols from the input.

Derivation Path: Each path from the root to a leaf in the PT represents a derivation of the input
from the start symbol.
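The construction steps above can be sketched with a minimal tree type (an assumed representation, reusing the toy grammar S -> a S b | ab):

```python
# A minimal parse-tree sketch: internal nodes hold non-terminals,
# leaves hold terminals, and the leaves read left to right give the string.

class Node:
    def __init__(self, symbol, children=None):
        self.symbol = symbol
        self.children = children or []   # empty list => leaf node

    def yield_(self):
        """Concatenate the leaf labels left to right (the derived string)."""
        if not self.children:
            return self.symbol
        return "".join(c.yield_() for c in self.children)

# Parse tree for the derivation S => aSb => aabb
tree = Node("S", [Node("a"),
                  Node("S", [Node("a"), Node("b")]),
                  Node("b")])
print(tree.yield_())  # -> aabb
```

The derivation path from the root "S" down to each leaf mirrors the production applications described above.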
3.3 Applications of CFG

Language Recognition:

PTs are used in parsing algorithms to recognize whether a given string belongs to the language
generated by CFG.

Ambiguity Detection:

PTS help identify ambiguity in grammars by revealing multiple valid interpretations or


derivations of the same string.

Syntax Analysis:

Parse trees provide insights into the syntactic structure of strings, aiding in the understanding and
analysis of programming languages and natural languages.

Compiler Design:

PTs are utilized in the syntax-analysis phase of compilers to validate and parse source code
according to the grammar of the programming language.

Context-free grammars (CFGs) find applications in various fields, primarily in computer science,
linguistics, and related areas. Here are some of the key applications of context-free grammars:

Programming Languages:

CFGs are extensively used to define the syntax of programming languages (PLs). The structure
of valid programs is described by the rules for constructing statements, expressions, and other
language constructs.
Parser generators like Yacc/Bison and ANTLR use CFGs to generate parsers for programming
languages, allowing developers to write compilers, interpreters, and other language-processing
tools.

Compiler Design:

In compiler construction, CFGs are used in the syntax analysis phase (parsing) to analyze the
structure of the source code in order to build the PT.

The PT, an intermediate representation of the source code, is subsequently used in later phases of
the compilation process, such as semantic analysis and code generation.

Natural Language Processing (NLP):

CFGs are employed in NLP to model the syntax of natural languages. They describe the
grammatical rules governing the formation of sentences, phrases, and other linguistic structures.

CFG-based parsers can be used to parse and analyze text for tasks such as grammar checking
and sentence-structure analysis.

Syntax Highlighting and Code Analysis:

Text editors and integrated development environments (IDEs) use CFG-based grammars to
perform syntax highlighting, which visually distinguishes different language constructs in source
code based on their syntactic roles.

CFG-based static analysis tools can analyze source code for potential errors, code smells, and style
violations by parsing the code and checking it against predefined grammar rules.

Data Validation and Parsing:

CFGs are employed in data validation and parsing tasks across various domains, including markup
languages (e.g., XML, HTML), configuration files, log files, and network protocols.

By defining a grammar for the expected structure of data formats, CFG-based parsers can
validate input data for correctness and extract relevant information for further processing.

3.4 Ambiguity in grammars and languages

Ambiguity in grammars refers to situations where a single string in the language can be derived
by more than one PT. This can lead to confusion in parsing and interpretation, as there may be
multiple valid interpretations of the same input. Ambiguity can arise in both natural and formal
languages.
Example: Consider, for instance, the ambiguous grammar E → E + E | E * E | digit and the
string "2 * 3 + 4".

Parse Tree 1:

        +
       / \
      *   4
     / \
    2   3

According to this interpretation, "2 * 3" is evaluated first, resulting in 6, which is then added to 4
to produce the final result of 10.

Parse Tree 2:

        *
       / \
      2   +
         / \
        3   4

In this interpretation, "3 + 4" is evaluated first, resulting in 7, which is then multiplied by 2 to
produce the final result of 14.
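Ambiguity can also be checked mechanically by counting parses. The sketch below assumes the grammar E → E + E | E * E | number and brute-forces every way of choosing a top-level operator in "2 * 3 + 4":

```python
# Counting the distinct parse trees of "2 * 3 + 4" under the ambiguous
# grammar E -> E + E | E * E | number.

from functools import lru_cache

tokens = ["2", "*", "3", "+", "4"]

@lru_cache(maxsize=None)
def parses(i, j):
    """Number of distinct parse trees deriving tokens[i:j] as an E."""
    if j - i == 1:
        return 1 if tokens[i].isdigit() else 0
    total = 0
    for k in range(i + 1, j - 1):          # k indexes a candidate top operator
        if tokens[k] in ("+", "*"):
            total += parses(i, k) * parses(k + 1, j)
    return total

print(parses(0, len(tokens)))  # -> 2: the two parse trees shown above
```

Each choice of top-level operator corresponds to one of the two parse trees, confirming that the grammar is ambiguous on this string.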

To resolve ambiguity, the grammar can be modified to explicitly specify the precedence and
associativity of operators. For example, adding separate production rules for addition and
multiplication with appropriate precedence levels can clarify the intended parsing behavior:

In this grammar:

S represents a statement.

E represents an expression.

a represents some arbitrary terminal symbol.

Ambiguity: Let's look at the sentence "if E1 then if E2 then a else a". This sentence can be parsed
in two different ways:

Parse Tree 1 (the "else" attached to the inner "if"):

    if E1 then ( if E2 then a else a )

In this interpretation, the "else" clause belongs to the inner "if" statement.

Parse Tree 2 (the "else" attached to the outer "if"):

    if E1 then ( if E2 then a ) else a

In this interpretation, the "else" clause belongs to the outer "if" statement.

The ambiguity arises because the grammar does not specify the associativity of the "if-then-else"
construct. As a result, there are multiple valid ways to interpret the nesting of "if-then-else"
statements, leading to different parse trees and interpretations of the sentence.

To resolve ambiguity, the grammar can be modified to explicitly specify the associativity of the
"if-then-else" construct. For example, adding parentheses to indicate the associativity can clarify
the intended parsing behavior:

With this modified grammar, the ambiguity in parsing the sentence "if E1 then if E2 then a else
a" would be eliminated, as the parentheses would enforce a specific grouping of the "if-then-
else" constructs.
Example:

Consider the following context-free grammar for arithmetic expressions with explicit precedence
rules:

Sentence:

Let's consider the sentence "2 * 3 + 4".

Parse Tree:

The unambiguous parse tree for the sentence "2 * 3 + 4" would be as follows:

        +
       / \
      *   4
     / \
    2   3

In this parse tree, the multiplication operation ("2 * 3") is evaluated first, and then the addition
operation ("result of 2 * 3 + 4") is performed. This unambiguous interpretation follows the
precedence rules specified in the grammar, where multiplication takes precedence over addition.

This grammar specifies explicit precedence rules for + and * operations. * has higher precedence
than +, which means that * operations are evaluated before + operations. Additionally, the
grammar enforces left associativity for both operations, meaning that when there are multiple
operators of the same precedence level, they are evaluated from left to right.

Benefits:

Using an unambiguous grammar with explicit precedence rules ensures that there is only one
valid interpretation of a given sentence, eliminating ambiguity and ensuring predictable parsing
behavior. This clarity is crucial for language processing tasks such as compiler design, syntax
analysis, and natural language processing, where unambiguous interpretations are essential for
correct program execution or understanding of natural language expressions.

3.5 Normal forms for CFG

CFGs can be transformed into various normal forms to simplify their analysis and processing.

The two most common normal forms for context-free grammars are the Chomsky Normal Form
(CNF) and the Greibach Normal Form (GNF).

Chomsky Normal Form (CNF):

A CFG is in CNF if every production is of the form A → BC or A → a, where A, B, C are
non-terminals and a is a terminal (with S → ϵ permitted only if ϵ is in the language).

Greibach Normal Form (GNF):

A CFG is in GNF if every production is of the form A → aα, where a is a terminal and α is a
(possibly empty) string of non-terminals.
These are the transformations needed to convert a CFG into CNF and GNF, respectively. Each
form has its advantages and is useful in different contexts, depending on the specific
requirements of the application or parsing algorithm being used.
B -> FF | b

C -> XY | a

X -> AB

Y -> AB

D -> FF

E -> FF

F -> C

Resulting CNF Grammar is:

S -> XY | BC

A -> BA | a

B -> FF | b

C -> XY | a

X -> AB

Y -> AB

D -> FF

E -> FF

F -> C
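To make the CNF condition concrete, here is a small structural checker sketch (the two grammars below are invented examples, not the grammar above):

```python
# Structural CNF check: every right-hand side must be either a single
# terminal or exactly two non-terminals.

def in_cnf(g):
    nonterminals = set(g)
    for rhss in g.values():
        for rhs in rhss:
            two_nts = len(rhs) == 2 and all(s in nonterminals for s in rhs)
            one_term = len(rhs) == 1 and rhs[0] not in nonterminals
            if not (two_nts or one_term):
                return False
    return True

grammar_cnf = {"S": [["A", "B"]], "A": [["a"]], "B": [["b"]]}
grammar_not = {"S": [["A", "B"]], "A": [["a"]], "B": [["S"]]}  # unit production B -> S

print(in_cnf(grammar_cnf), in_cnf(grammar_not))  # -> True False
```

The second grammar fails because B -> S is a unit production, one of the rule shapes the CNF transformation must eliminate.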
Example:

Let's consider the language L = {a^n b^n c^n | n ≥ 0}, which consists of strings with equal
numbers of a's, b's, and c's in that order. We can use the pumping lemma for context-free
languages to prove that L is not context-free.

Assume L is context-free.

Let p be the pumping length given by the pumping lemma.

Consider the string s = a^p b^p c^p in L.

According to the pumping lemma, s can be written as s = uvwxy with |vwx| ≤ p and |vx| ≥ 1,
such that u v^i w x^i y ∈ L for every i ≥ 0.

Since |vwx| ≤ p, the substring vwx can span at most two of the three blocks a^p, b^p, c^p.
Pumping down or up (i.e., setting i = 0 or i = 2) therefore changes the counts of at most two of
the three symbols, so the numbers of a's, b's, and c's are no longer all equal.

The pumped string does not belong to L, contradicting the pumping lemma.

Therefore, L cannot be context-free.
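The full proof must rule out every legal decomposition of s; the sketch below (with an assumed p = 4 and one representative split satisfying |vwx| ≤ p and |vx| ≥ 1) just illustrates how pumping breaks membership:

```python
# Illustrating the pumping argument for L = {a^n b^n c^n | n >= 0}.

def in_L(s):
    """Membership test for L."""
    n = len(s) // 3
    return len(s) % 3 == 0 and s == "a" * n + "b" * n + "c" * n

p = 4
s = "a" * p + "b" * p + "c" * p
# One legal split: v in the a-block, x in the b-block (|vwx| = 2 <= p).
u, v, w, x, y = "aaa", "a", "", "b", "bbb" + "c" * p
assert u + v + w + x + y == s          # the split really decomposes s

for i in (0, 1, 2):
    pumped = u + v * i + w + x * i + y
    print(i, in_L(pumped))
# -> 0 False   (3 a's, 3 b's, 4 c's)
#    1 True    (the original string s)
#    2 False   (5 a's, 5 b's, 4 c's)
```

Only i = 1 reproduces s; every other pumping count unbalances the c-block, exactly as the argument above predicts.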


MODULE - 4
4.1 Definition of the pushdown automaton:
A Push Down Automaton (PDA) is an NFA with ϵ-transitions and an added stack.
A PDA can remember an unbounded amount of information due to the presence of
the stack. PDAs recognize all and only the Context Free Languages (CFLs).

Figure 4. 1: Push Down Automata


A PDA is a 7-tuple P = (Q, Σ, Γ, δ, q0, Z0, F), where:
Q – Finite set of states
Σ – Input alphabet
Γ – Stack alphabet
δ – Transition function from Q × (Σ ∪ {ϵ}) × Γ to finite subsets of Q × Γ*
q0 – Start state
Z0 – Start symbol of stack
F – Set of accepting states or final states
Graphical notation of PDA:
Visual representation, generally called a transition diagram, gives a clearer
understanding of the machine. The transition diagram of a PDA contains:
 Nodes, which represent the states of the PDA.
 Doubly circled states, which indicate the final/accepting states; an arrow
labelled start indicates the start state.
 Arcs corresponding to the transitions, labelled “a, X | Y”, where
a is an input symbol and X, Y are stack strings: on input a with X on top of
the stack, X is replaced by Y.
Figure 4.2 shows the transition diagram for PDA.

Figure 4. 2: Transition diagram of PDA


(q0, "(())", "Z0") ⊢ (q0, "())", "(Z0")
⊢ (q0, "))", "((Z0")
⊢ (q0, ")", "(Z0")
⊢ (q0, ϵ, "Z0")
⊢ (q0, ϵ, ϵ)
After the consumption of complete input string the stack is empty.
Hence, the given string is Accepted by the PDA.
Constructing a final state PDA (PF) from an empty stack PDA (PN):
Whenever PN's stack becomes empty, PF moves to a final state without
consuming any additional input symbol. To detect the empty stack in PN, PF pushes a new
stack symbol X0 (not in Γ of PN) initially, before simulating PN. The resultant PF

is as follows.

Example:

Constructing an empty stack PDA (PN) from a final state PDA (PF):
Add a new start state and push a new symbol X0 onto the stack. Whenever PF reaches a final
state, make an ϵ-transition into a new end state and pop the stack until it is empty, so that the
string is accepted by empty stack.
Example:

Q1. Design a PDA for the language L ={anbn|n>=1}

Q2. Design a PDA for the language L = {wCwr| w ϵ (a,b)* }
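One possible design for Q1, simulated directly with a Python list as the stack (the state and stack-symbol names here are assumptions, one design among many):

```python
# PDA sketch for L = {a^n b^n | n >= 1}: push an A per a, pop an A per b.

def accepts(s):
    stack = ["Z0"]
    state = "q0"                       # q0: reading a's, q1: reading b's
    for ch in s:
        if state == "q0" and ch == "a":
            stack.append("A")          # push an A for every a
        elif state == "q0" and ch == "b" and stack[-1] == "A":
            stack.pop(); state = "q1"  # first b: start matching
        elif state == "q1" and ch == "b" and stack[-1] == "A":
            stack.pop()                # match a further b
        else:
            return False               # no transition defined: reject
    # accept when every A is matched and at least one a was read
    return state == "q1" and stack == ["Z0"]

for w in ["ab", "aaabbb", "", "aab", "abb", "ba"]:
    print(w, accepts(w))
```

Because at most one move is defined for each (state, input, top-of-stack) combination, this particular design is in fact deterministic.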

4.3 Equivalence of PDA’s and CFG’s:


For every PDA that accepts a string by final state there exists a PDA that accepts
the same string by empty stack, and vice versa.
It is also always possible to construct an equivalent PDA from a given CFG, and
vice versa.
(q, 0011, S) ⊢ (q, 0011, AS)
⊢ (q, 0011, 0A1S)
⊢ (q, 011, A1S)
⊢ (q, 011, 011S)
⊢ (q, 11, 11S)
⊢ (q, 1, 1S)
⊢ (q, ϵ, S)
⊢ (q, ϵ, ϵ)
Rename the variables:
[p, X, p] = A
[p, X, q] = B
[q, X, p] = C
[q, X, q] = D
[p, Z0, p] = E
[p, Z0, q] = F
[q, Z0, p] = G
[q, Z0, q] = H

Then the production rules can be re-written as:

S → E | F
E → 0AE | 0BG
F → 0AF | 0BH
A → 0AA | 0BC
B → 0AB | 0BD | 1
D → 1 | ϵ
H → ϵ
4.4 Deterministic pushdown automata:
Push Down Automata can be categorised in to two types:
 Deterministic Push Down Automata (DPDA)
 Non- Deterministic Push Down Automata (NDPDA)
Deterministic pushdown automaton: A DPDA has at most one transition from the
current state for a given input symbol and top-of-stack symbol. A DPDA follows
these rules:
 For any q ∈ Q, a ∈ Σ, Z ∈ Γ, the set δ (q, a, Z) has at most one element.
 For any q ∈ Q, Z ∈ Γ, if δ (q, ϵ, Z) ≠ Ф, then δ (q, a, Z) = Ф for every a ∈ Σ.

Module 5
1. Turing Machine Model:

The Turing machine is a foundational concept in theoretical computer science, introduced by


Alan Turing in 1936. It serves as a formal mathematical model for computation and forms the
basis for understanding computability and complexity.

A Turing machine consists of several components:

 An infinite tape divided into cells, each holding one tape symbol.
 A read/write head that scans one cell at a time and can move left or right.
 A finite control with a set of states, including a start state and halting
(accept/reject) states.
 A transition function δ(q, X) = (p, Y, D) giving, for the current state q and
scanned symbol X, the next state p, the symbol Y to write, and the direction D to move.
The operation of a Turing machine involves a sequence of steps where it reads the symbol at its
current position, consults the transition function to determine the next action (which may involve
changing state, writing a symbol, or moving the head), and repeats until it reaches a halting state.
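This read/consult/write/move loop can be sketched as a direct simulation. The machine below is an assumed toy example (it overwrites every 0 with 1, moving right, and halts at the first blank), not a machine from the text:

```python
# Direct simulation of the Turing-machine step loop.

BLANK = "_"
delta = {
    ("q0", "0"): ("q0", "1", +1),       # overwrite 0 with 1, move right
    ("q0", "1"): ("q0", "1", +1),       # leave 1 unchanged, move right
    ("q0", BLANK): ("halt", BLANK, 0),  # first blank: halt
}

def run(tape_str):
    tape = dict(enumerate(tape_str))    # sparse tape: position -> symbol
    state, head = "q0", 0
    while state != "halt":
        symbol = tape.get(head, BLANK)  # read the scanned cell
        state, write, move = delta[(state, symbol)]
        tape[head] = write              # write, then move the head
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != BLANK)

print(run("0101"))  # -> 1111
```

The `while` loop is exactly the sequence of steps described above: read the scanned symbol, consult δ, write, move, and repeat until a halting state is reached.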

2. Representation of Turing Machine:

Turing machines can be represented using various formalisms:

 Transition Diagrams: Graphical representations where nodes represent states and


edges represent transitions between states.
 Transition Tables: Tabular representations listing possible transitions for each
combination of current state and symbol.
 Formal Descriptions: Mathematical notations such as Turing machine tuples that specify
the machine's states, alphabet, transition function, and other parameters.

Each representation provides a way to describe the behavior of a Turing machine and is useful
for different purposes, such as analysis, design, and simulation.

3. Language Acceptability by Turing Machine:

A language is considered acceptable by a Turing machine if the machine, when provided with an
input string from that language, halts in an accepting state. Conversely, if the machine either
halts in a non-accepting state or loops indefinitely, the input string is not considered part of the
language.

The set of all strings accepted by a Turing machine defines the language recognized by that
machine. Turing machines can recognize a wide range of languages, including regular
languages, context-free languages, recursively enumerable languages, and more.
4. Design of Turing Machine:

Designing a Turing machine involves specifying its components in a way that correctly
recognizes the desired language. This includes defining:

 the set of states, together with the start and halting (accept/reject) states;
 the input alphabet and the tape alphabet, including the blank symbol;
 the transition function that drives the computation.

The design process often requires careful consideration of the language's properties and the
computational resources available to the Turing machine.

5. Techniques for Turing Machine Construction:

Constructing a Turing machine to recognize a specific language involves various techniques:

 Direct Construction: Designing the machine directly based on the properties of


the language.
 Reduction: Transforming the problem of recognizing one language into another
language already known to be recognizable by a Turing machine.
 Simulation: Simulating the behavior of another computational model known to
recognize the target language.

These techniques require creativity and insight into the properties of languages and
computational models.

6. Variants of Turing Machines:

Turing machines come in several variants, each extending or modifying the basic model in
different ways:

 Multi-Tape Turing Machines: Machines with multiple tapes operating in


parallel, allowing for more efficient computation in certain cases.
 Non-deterministic Turing Machines: Machines where multiple transition options are
possible from a given state and symbol, useful for modeling certain computational
phenomena.
 Probabilistic Turing Machines: Machines that make probabilistic choices
during computation, often used in the study of randomness and complexity.
 Quantum Turing Machines: Machines based on quantum mechanics, capable of
exploiting quantum superposition and entanglement for computation.

Each variant offers unique computational capabilities and insights into the nature of
computation.
9. Properties of Recursive and Recursively Enumerable Languages:

Recursive and recursively enumerable languages exhibit distinct properties:

 Closure Properties: Recursive languages are closed under various operations such as
union, intersection, complement, concatenation, and Kleene star. Recursively
enumerable languages have different closure properties, often more restricted than
recursive languages.
 Decidability: Recursive languages are decidable, meaning there exists an algorithm
that can determine membership for any input string. Recursively enumerable languages
may not be decidable; there may not be an algorithm that always halts and correctly
determines membership.
 Solvability: Problems related to recursive languages often have effective solutions,
while problems related to recursively enumerable languages may have solutions that are
not effective or require non-trivial resources.

Understanding these properties is crucial for analyzing the computational complexity and
expressiveness of different language classes.
10. Model of Linear Bounded Automaton (LBA):

A linear bounded automaton (LBA) is a restricted version of a Turing machine in which the tape
is bounded by the length of the input string. LBAs were introduced by John Myhill in 1960 and
are capable of recognizing precisely the context-sensitive languages.

LBAs have the same basic components as Turing machines but with a finite tape. The restriction
imposed by the finite tape ensures that the machine operates within a limited space, making it a
powerful tool for studying the computational complexity of languages and problems.

In summary, Turing machines and related concepts provide a formal framework for
understanding computation and language recognition. Exploring the nuances of Turing machine
variants, language classes, and computational models deepens our understanding of the
fundamental principles of computer science and computability theory.
