TCS IMPs
MODULE 1
Design DFA or NFA
Design DFA that accepts strings with at least 3 a's over Σ = {a, b}.
Design DFA that accepts strings that end in either "110" or "101" over Σ = {0, 1}.
Design a DFA that accepts binary strings that are multiples of 4, Σ = {0, 1}.
Design a DFA that accepts strings that have "ba" or "ab" as a suffix over Σ = {a, b}.
Design NFA that accepts strings starting with "abb" or "bba".
Design an NFA that accepts strings starting with a and ending with a, or starting with b and ending
in b.
NFA to DFA
Given an NFA with ε-moves, find the equivalent DFA; Q1 is the initial state, Q3 is the final state.
States | 0 | 1 | 2 | ε
→ Q1 | {Q1} | ∅ | ∅ | {Q2}
Q2 | ∅ | {Q2} | ∅ | {Q3}
* Q3 | ∅ | ∅ | {Q3} | ∅
Represent RE epsilon for L = {w : w has prefix bab and suffix abb and w is a string over {a, b}}.
Design an NFA with epsilon moves for accepting L. Convert it to a minimized DFA.
Moore and Mealy
Compare and contrast Moore and Mealy Machines. Design a Moore machine for Σ = {0, 1}; print the residue modulo 3 for binary numbers.
Design a Mealy machine that replaces every occurrence of a with x and b with y, keeping c unchanged. Convert the same to an equivalent Moore machine.
1. Finite State Machine (IMP)
a. Design a Finite State Machine to accept the following language over the alphabet {0, 1}: L(R) = {w | w starts with 0 and has odd length, or starts with 1 and has even length}.
b. Design a Finite State Machine to determine whether a given number (base 3) is divisible by 5.
2. NFA and DFA (IMP)
a. Convert (0+1)(10)*(0+1) into an NFA with ε-moves and obtain the DFA.
b. Design an NFA for recognizing the strings that end in "aa" over Σ = {a, b} and convert the above NFA to a DFA.
c. Convert the following RE to an NFA and then convert it to a DFA: R = ((0+1)*10 + (00)*(11)*)*.
RL -> RE -> NFA -> DFA -> DFA Min
3. Moore and Mealy (IMP)
a. Construct Moore and Mealy Machines to replace each occurrence of 101 with 111.
b. Design a Moore m/c for the following: if the input ends in '101' the output should be A; if the input ends in '110' the output should be B; otherwise the output should be C. Convert it into a Mealy m/c.
c. Give the Moore and Mealy Machine for the following processes: " for input from (0+1), if input ends
in 101, output x; if input ends in 110, output y; otherwise output z".
d. Moore and Mealy Machine
EXTRA:
3. Design DFA to accept strings of 0's and 1's ending with the string 100
4. Obtain DFA to accept Strings of 0's and 1's with even no. of 0's and even no. of 1's.
PYQs
Design Moore machine for Σ = {0, 1}; print the residue modulo 3 for binary numbers.
Construct a Moore machine to output remainder modulo 4 for any binary number.
Design a Mealy machine that replaces every occurrence of a with x and b with y, keeping c unchanged. Convert the same to an equivalent Moore machine.
Design a Mealy machine to recognize r = (0+1)* 00 (0+1)* and then convert it to a Moore machine.
MODULE 2
Explain the Pumping Lemma for regular languages. Prove that the given language is not a regular language.
L = {a^n b^(n+1) | n >= 1}
L = {0^n 1^(n+1) | n >= 1}
PYQs
Explain Pumping Lemma with the help of a diagram to prove that the given language is not a
regular language. L = {0^m 1^(m+1) | m > 0}
Convert the following RE to NFA and convert it to minimized DFA corresponding to it:
(0+11)*(10)(11+0)*
Give formal definition of Pumping Lemma for Regular Language. Prove that the following language
is not regular: L = {ww^R | w ∈ {a,b}*, |w| >= 1}
Represent RE epsilon for L = {w: w has prefix bab and suffix abb and w is a string over {a,b}}. Design
NFA with epsilon moves for accepting L. Convert it to minimized DFA.
Explain Pumping Lemma for regular languages. Prove that given language L = {a^n b^(n+1) | n >= 1}
is not a regular language.
Convert the following RE into NFA with ε-moves and hence obtain the DFA:
RE = (0+ ε)* (10) (ε +1)*
Give the formal definition of pumping lemma for regular language and then prove that the following
language is not regular:
L = {a^n b^(n+1) | n >= 0}
MODULE 3
Construct CFG for given language. L = {0^i 1^j 0^k | j > i+k}
Construct CFG to generate the language L = {a^i b^j c^k | k = i+j; i, j >= 1}
Parse Tree + Check Ambiguous
The grammar G is:
S → aB | bA
A → a| aS | bAA
B → b | bS | aBB
o Obtain the parse tree for the following string "aabab" and check if the grammar is
ambiguous.
o Derive using Leftmost Derivation (LMD) and Rightmost Derivation (RMD) for the string
"aaabbb". Draw the Parse Tree (MODULE 3)
CFG to CNF
Consider the following CFG. Is it already simplified? Explain your answer. Convert it to CNF form
S → ASB | a | bb
A → aSA | a
B → SbS | bb
CFG to GNF
Find the equivalent Greibach Normal Form (GNF) for the given CFG.
S → AA | a
A → SS | b
Extra
Simplify the given grammar and convert to CNF
S → ASB | ε
A → aAS | a
B → SbS | A | bb
PYQs
Construct CFG for given language.
L = {0^i 1^j 0^k | j > i+k}
The grammar G is
S → aAb | bS | ε
A → aA | bB
B → b | bS | aBB
Obtain parse tree for the following string “aababb” and check if the grammar is ambiguous
Find Equivalent Greibach Normal Form (GNF) for given CFG.
S -> AA | a
A -> SS | b
Show that grammar represented by production rules given below is ambiguous.
S → S+S | S-S | S*S | S/S | (S) | a
Construct CFG for the following:
i. An alternating sequence of 0 and 1 starting with 0.
ii. Strings that do not contain 3 consecutive a's, over {a, b}.
iii. L = {x ∈ {0,1}* | x has equal number of 0's and 1's}
The grammar G is S → aAb | bS | ε, A → aA | bB, B → b | bS | aBB. Derive using Left Most Derivation
(LMD) and Rightmost Derivation (RMD) for the following string "aaabbb". Draw Parse Tree.
Consider the following CFG. Is it already simplified? Explain your answer. Convert it to CNF form.
S -> ASB | a | bb
A -> aSA | a
B -> SbS | bb
Show that the following grammar is ambiguous:
S -> aSbS | bSaS
Consider the following grammar G: V = {S, X}, T = {a, b}
Productions P are: S -> Xa | Xb
X -> Xa | Xb | a
Convert the grammar in Greibach Normal Form.
Consider the following grammar:
S -> iCtS | iCtSaS | ε
C -> b
For the string "ibtaibta", find the following: LMD, RMD, Parse Tree, Ambiguity.
MODULE 4
Design Push Down Machine that accepts L = {a^m b^n c^n d^m | n, m > 0}
Give the formal definition of a Pushdown Automaton (PDA). Design a PDA that accepts odd
palindromes over {a, b, c}, where c exists only at the center of every string. (MODULE 4)
a. Construct PDA accepting the language L = {a^n b^n | n >= 0}.
b. Construct PDA to check {w c w^R | w ∈ {a, b}*}, where w^R is the reverse of w and c is a fixed marker symbol.
c. Construct the PDA accepting the following language: L = {a^n b^m c^n | m, n >= 1}.
EXTRA:
1. Differentiate between PDA and NPDA [IMP]
PYQs
Construct a PDA for accepting L = {a^m b^n c^n | m,n >= 1}.
Design Push Down Machine that accepts L = {a^m b^n c^n d^m | m,n > 0}
Give formal definition of Push Down Automata. Design PDA that accepts odd palindromes over
{a,b,c}, where c exists only at the center of every string.
Construct PDA accepting the language L = {a^n b^(n+1) | n >= 0}. b) Construct TM to check well-formedness of parentheses.
MODULE 5
Define and design Turing Machine to accept 0^n 1^n 2^n over Σ = {0, 1, 2}.
Design a TM for converting an input binary number to its one's complement.
2. Construct Turing Machine to check well-formedness of parentheses.
3. Design a Turing Machine which recognizes words of the form a^n b^n c^n, n >= 1.
5. Universal Turing Machine [Extra]
PYQs
Design a TM accepting all palindromes over {0,1}.
Design a TM for converting a binary number to its one's complement.
Design a Turing machine that computes the function f(m, n) = m + n, the addition of two integers.
MODULE 6
TM-Halting Problem.
Recursive and Recursively enumerable languages.
Post Correspondence Problem
Rice's Theorem, Write short note on Decidability and Undecidability
THEORY
Chomsky Hierarchy
The Chomsky Hierarchy is a classification of formal languages based on their generative power and
the type of computational model required to recognize or generate them. Proposed by linguist Noam
Chomsky in 1956, the hierarchy categorizes languages into four levels, each with increasing
expressive power. These levels are important in computer science and linguistics as they define the
capabilities of different types of grammars and automata.
Levels of the Chomsky Hierarchy
1. Type 0: Recursively Enumerable Languages (Unrestricted Grammars)
2. Type 1: Context-Sensitive Languages
3. Type 2: Context-Free Languages
4. Type 3: Regular Languages
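For reference (a standard summary, not taken from these notes), the production-rule forms allowed at each level are, writing A, B for non-terminals, a for a terminal, and α, β, γ for strings of terminals and non-terminals:
Type 0 (unrestricted): α → β, where α contains at least one non-terminal.
Type 1 (context-sensitive): αAβ → αγβ with γ non-empty, so the right-hand side is never shorter than the left.
Type 2 (context-free): A → γ, i.e. a single non-terminal on the left-hand side.
Type 3 (regular): A → aB or A → a (right-linear), or the corresponding left-linear forms.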
Variants of the Turing Machine
1. Multi-tape Turing Machine
Structure: Has multiple tapes, each with its own read/write head.
Operation: Each tape operates independently, allowing the machine to read and write on
multiple tapes simultaneously.
Use Case: Efficiently simulates algorithms that require multiple work areas, such as
intermediate storage or separate stages of computation.
Power: Equivalent in power to a standard single-tape Turing Machine, but can often perform
computations faster due to parallelism on the tapes.
2. Multi-track Turing Machine
Structure: Has a single tape divided into multiple tracks, where each track holds different
information for a single cell position.
Operation: Each cell position contains a tuple of symbols, one per track, and the read/write
head reads and writes across all tracks simultaneously.
Use Case: Useful for keeping track of multiple pieces of information at once, such as control
signals and data.
3. Non-deterministic Turing Machine (NDTM)
Structure: Similar to a standard TM but allows for non-deterministic choices in its transition
function.
Operation: At any point, the NDTM can choose between multiple possible moves. The
machine accepts if any computation path leads to an accepting state.
Use Case: Useful for problems involving “guessing” solutions, such as combinatorial search or
parsing ambiguous grammars.
4. Universal Turing Machine (UTM)
Structure: A Turing Machine that can simulate any other Turing Machine.
Operation: Takes as input the description (encoding) of another Turing Machine and its input,
then simulates the computation of that machine on that input.
5. Linear Bounded Automaton (LBA)
Structure: A Turing Machine with a tape bounded by the length of the input (it cannot use
more space than the length of the input).
Operation: The head can move within the limits of the input tape, effectively making the LBA a
“space-limited” Turing Machine.
Use Case: Recognizes context-sensitive languages, which include languages that cannot be
recognized by context-free grammars.
Power: A restricted TM, less powerful than a general TM in terms of memory usage but still
more powerful than a Pushdown Automaton (PDA).
Acceptance by PDA
A Pushdown Automaton (PDA) can accept a language in two primary ways: Acceptance by Final
State and Acceptance by Empty Stack. Here’s a breakdown of each:
1. Acceptance by Final State
In this method, a PDA accepts an input string if it reaches a final (or accepting) state after
reading the entire input.
A PDA that accepts by final state typically has one or more designated final states. When the
input is completely processed, if the PDA is in a final state, the input string is accepted.
The condition for acceptance is that the PDA must have reached a final state with the input
read fully, regardless of the stack contents.
Example:
If the PDA is in a final state after consuming the input, the string is accepted.
If not, the string is rejected.
2. Acceptance by Empty Stack
In this method, a PDA accepts an input string if the stack is empty after reading the entire
input.
Here, no final state is necessarily required. Instead, the PDA accepts if the stack has been
entirely "popped out" by the end of the input.
This type of acceptance is often useful in contexts where the stack operations alone are
enough to recognize the language structure, such as balanced parentheses or palindromes.
Example:
If the PDA’s stack is empty after processing the input, the string is accepted.
If the stack still has symbols, the string is rejected.
Relation Between Acceptance by Final State and Empty Stack
For any language that is recognizable by one type of acceptance, there exists a PDA using the
other type that also recognizes the same language.
Thus, both types of acceptance are equally powerful in terms of language recognition; they are
interchangeable for recognizing context-free languages.
Practical Use in Language Recognition
PDAs often use acceptance by empty stack for languages involving nested or hierarchical
structures (like balanced brackets).
Acceptance by final state may be simpler to implement in cases where the PDA needs to reach
a certain configuration to signify acceptance.
Both methods have their place depending on the structure of the language the PDA is designed to
recognize.
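As an illustrative sketch (not part of the original notes), the following Python snippet simulates a tiny PDA for balanced parentheses and reports both acceptance criteria for the same run; the state names (q0, q_acc) and stack symbols (Z, X) are arbitrary choices for this example.

    # Sketch: a PDA for balanced parentheses over {'(', ')'}.
    # On '(' push X; on ')' pop X. Z is the bottom-of-stack marker.
    def run_pda(word):
        state, stack = "q0", ["Z"]              # initial state and stack
        for ch in word:
            if ch == "(":
                stack.append("X")               # remember one unmatched '('
            elif ch == ")" and stack[-1] == "X":
                stack.pop()                     # match it against a ')'
            else:
                return {"final_state": False, "empty_stack": False}
        # epsilon-moves once the input is consumed:
        if stack == ["Z"]:
            state = "q_acc"                     # final-state acceptance: enter the accepting state
            stack.pop()                         # empty-stack acceptance: pop the bottom marker
        return {"final_state": state == "q_acc", "empty_stack": not stack}

    print(run_pda("(()())"))   # {'final_state': True, 'empty_stack': True}
    print(run_pda("(()"))      # {'final_state': False, 'empty_stack': False}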
Conversion of Moore to Mealy Machine
Moore and Mealy Machines are both types of finite state machines used to model sequential circuits
and systems, but they differ in how they produce outputs. In a Moore machine, outputs depend only
on the current state, while in a Mealy machine, outputs depend on both the current state and the
current input.
The conversion from a Moore machine to a Mealy machine is straightforward; the converted machine keeps the same set of states, although a Mealy machine designed directly for the same task is often more compact than the corresponding Moore machine.
o In a Moore machine, each state has a specific output associated with it.
o The output for any input is determined solely by the current state.
Q: Set of states.
Σ: Input alphabet.
Δ: Output alphabet.
o In a Mealy machine, the output depends on both the current state and the input.
o Define a new output function λ′ : Q × Σ → Δ, which assigns an output to each transition.
In the Mealy machine, associate with the transition (q, a) the output of the state that the Moore machine enters on that transition, i.e. the output of δ(q, a).
o The Mealy machine's output function depends on both the current state and the input, so the output is produced on the transition itself, as soon as the input is read.
o For each state q in the Moore machine and each input a in the input alphabet:
The transition function remains the same as in the Moore machine: δ(q, a) = q′.
The new output function in the Mealy machine is set to λ′(q, a) = λ(q′), the Moore output of the state q′ entered by the transition.
Example
States Q = {A, B}
Transition function:
o δ(A, 0) = A, δ(A, 1) = B
o δ(B, 0) = A, δ(B, 1) = B
Output function λ:
o λ(A) = X
o λ(B) = Y
Conversion Steps:
o Keep the transition function δ unchanged and attach to each transition the output of the state it enters: λ′(A, 0) = λ(A) = X, λ′(A, 1) = λ(B) = Y, λ′(B, 0) = λ(A) = X, λ′(B, 1) = λ(B) = Y.
o The resulting transition table lists, for each (state, input) pair, both the next state and this output.
In this way, a Moore machine can be converted to an equivalent Mealy machine. The conversion keeps the state set unchanged; the compactness advantage of Mealy machines arises when they are designed directly, since producing outputs on transitions often removes the need for extra output-holding states.
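A minimal Python sketch (added for illustration, not from the original notes) of this conversion, applied to the example above; the dictionary encoding of the tables is an arbitrary choice:

    # Sketch: convert the example Moore machine to a Mealy machine by attaching
    # the destination state's output to each transition: lambda'(q, a) = lambda(delta(q, a)).
    moore_delta  = {("A", "0"): "A", ("A", "1"): "B",
                    ("B", "0"): "A", ("B", "1"): "B"}
    moore_lambda = {"A": "X", "B": "Y"}

    # Mealy table: (state, input) -> (next_state, output)
    mealy = {(q, a): (q2, moore_lambda[q2]) for (q, a), q2 in moore_delta.items()}

    for (q, a), (q2, out) in sorted(mealy.items()):
        print(f"delta'({q},{a}) = {q2}, lambda'({q},{a}) = {out}")
    # e.g. lambda'(A,1) = Y, because the Moore machine outputs Y on entering state B.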
Arden's Theorem
Arden's Theorem is a fundamental theorem in formal language theory, particularly useful in the study
of regular expressions and finite automata. It provides a method for solving equations involving regular
expressions, helping to convert finite automata (FAs) into regular expressions.
Statement of Arden's Theorem
Arden's Theorem states that:
Given two regular expressions P and Q over an alphabet Σ, if R = Q + RP has a solution, then the solution is:
R = QP*
where:
R is a regular expression we want to solve for.
Q is a regular expression representing an initial segment of a regular language.
P is a regular expression representing the part of the language that can be repeated.
Conditions:
1. P must not contain the empty string (i.e., ε ∉ L(P)).
2. Q is any regular expression over the same alphabet.
Purpose of Arden's Theorem
Arden's Theorem is used primarily for:
1. Deriving Regular Expressions from Finite Automata (FA): It helps in the state elimination
method, where each state transition is represented by an equation, and Arden's Theorem is
applied to solve these equations.
2. Constructing Regular Languages: It provides a systematic way to represent the behavior of
recursive structures in terms of regular expressions.
Proof of Arden's Theorem
The theorem can be understood by rewriting and expanding the equation:
Suppose R = Q + RP.
Then by substitution:
o R = Q + RP
o = Q + (Q + RP)P
o = Q + QP + RP^2
o Continuing this substitution, we get R = Q + QP + QP^2 + QP^3 + …
o This infinite sum equals Q(ε + P + P^2 + …) = QP*, so R = QP*, where P* denotes zero or more repetitions of P.
Thus, the theorem asserts that R = QP* is a solution for R = Q + RP.
Applications of Arden's Theorem
1. Converting Finite Automata to Regular Expressions:
o By creating equations for each state in an FA, Arden's Theorem can be applied to solve
these equations and express the language of the FA as a regular expression.
2. Simplifying Regular Expressions:
o Arden's Theorem helps to simplify complex expressions by eliminating recursive terms,
making it easier to analyze and work with regular expressions.
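A small worked example of the state-equation method described above (added for illustration, not from the original notes). Consider a two-state DFA over {a, b} with start state A, final state B, and transitions δ(A, a) = A, δ(A, b) = B, δ(B, a) = A, δ(B, b) = B; it accepts exactly the strings ending in b. Write one equation per state, where each variable denotes the set of strings leading from the start state to that state:
A = ε + Aa + Ba
B = Ab + Bb
Applying Arden's Theorem to the second equation (Q = Ab, P = b) gives B = Ab b*. Substituting into the first gives A = ε + Aa + Ab b* a = ε + A(a + b b* a), and applying the theorem again gives A = (a + b b* a)*. The language of the DFA is the expression for its final state: B = (a + b b* a)* b b*.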
Rice's theorem
Rice's Theorem is a foundational result in computability theory, establishing that any non-trivial
property of the language recognized by a Turing Machine is undecidable. This theorem highlights that
it is impossible to determine any meaningful property of the language accepted by a Turing Machine if
that property is non-trivial.
Statement of Rice's Theorem
Let P be a property of the language recognized by a Turing Machine (i.e., P applies to the set of strings
that the machine accepts, not to its internal structure). Rice’s theorem states:
If P is non-trivial (meaning that there exists at least one Turing Machine with the property and at
least one Turing Machine without it), then deciding whether a Turing Machine M has property P
is undecidable.
Examples of Non-Trivial Properties
1. Language Emptiness: Does a given Turing Machine recognize the empty language?
(Undecidable)
2. Language Finiteness: Does a Turing Machine recognize a finite language? (Undecidable)
3. Specific Membership: Does the language recognized by the Turing Machine contain a specific
string w? (Undecidable)
Proof Outline
The proof of Rice's theorem is typically by contradiction, using a reduction from the halting problem.
The basic idea is that if a decision algorithm existed for any non-trivial property, it could be used to
decide the halting problem, which is known to be undecidable.
Implication: Rice’s theorem implies that any question about the languages recognized by Turing
Machines—aside from structural questions (like counting the states)—is undecidable.
Definition and working of PDA
A Pushdown Automaton (PDA) is a type of computational model that extends the concept of a finite
automaton by incorporating a stack as a memory structure. This additional memory allows a PDA to
recognize certain types of languages that finite automata cannot, specifically context-free
languages. PDAs are often used to represent systems with nested or hierarchical structures, like
balanced parentheses or programming language syntax.
Definition of a PDA
M=(Q,Σ,Γ,δ,q0,Z0,F)
where:
Q: A finite set of states.
Σ: The input alphabet.
Γ: The stack alphabet (symbols that can be pushed onto or popped from the stack).
δ: The transition function δ : Q × (Σ ∪ {ε}) × Γ → P(Q × Γ*), which describes the transitions based on the current state, input symbol (or ε), and top of the stack.
q₀: The initial state.
Z₀: The initial stack symbol, which is at the bottom of the stack.
F: A set of accepting states (subset of Q), which determines if the PDA accepts a string.
Working of a PDA
A PDA works by reading an input string one symbol at a time and making decisions based on:
1. Current State: The state the PDA is currently in.
2. Current Input Symbol: The symbol being read from the input string. A PDA can also make ε-moves, meaning it can change states without consuming any input symbol.
3. Top Stack Symbol: The symbol currently at the top of the stack.
The stack allows the PDA to "remember" an unbounded number of symbols, which is particularly
useful for recognizing languages with nested or recursive structures.
A PDA accepts a string if it reaches an accepting state (final state) or if the stack becomes empty,
depending on the design:
1. Acceptance by Final State: The PDA accepts the input if it ends in an accepting state after
reading the entire input string.
2. Acceptance by Empty Stack: The PDA accepts the input if it empties the stack at the end of
the input string, regardless of the final state.
Example of a PDA
Consider the language L = { a^n b^n | n ≥ 1 }. This language contains strings with an equal number of
a's followed by an equal number of b's, such as "ab", "aabb", "aaabbb", etc. This is a classic example
of a context-free language that cannot be recognized by a finite automaton but can be recognized by a
PDA.
1. States: Let the PDA have three states: q₀ (initial), q₁, and q₂ (accepting).
2. Stack Operations: In q₀, push a symbol onto the stack for each a that is read; on reading the first b, move to q₁ and pop one symbol for each b.
3. Transitions:
o If the stack becomes empty (down to the bottom marker) after reading the last b, move to q₂ to accept.
This PDA effectively pushes a symbol for each a onto the stack and pops one for each b, ensuring the
counts are equal.
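A minimal simulation sketch of this PDA (added for illustration, not from the original notes); the state names follow the three states above, 'A' is an arbitrary stack symbol, and Z is the bottom-of-stack marker:

    # Sketch: PDA for L = {a^n b^n | n >= 1}, acceptance by final state.
    # q0: read a's and push; q1: read b's and pop; q2: accepting state.
    def accepts(word):
        state, stack = "q0", ["Z"]
        for ch in word:
            if state == "q0" and ch == "a":
                stack.append("A")                       # push one A per a
            elif state in ("q0", "q1") and ch == "b" and stack[-1] == "A":
                stack.pop()                             # pop one A per b
                state = "q1"
            else:
                return False                            # no valid transition
        if state == "q1" and stack == ["Z"]:
            state = "q2"                                # epsilon-move to the accepting state
        return state == "q2"

    print(accepts("aaabbb"))  # True
    print(accepts("aab"))     # False
    print(accepts(""))        # False (n >= 1 here)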
Describe Finite State Machine
A Finite State Machine (FSM) is a computational model used to design systems that can be in exactly
one of a finite number of states at any given time. FSMs transition between states based on inputs and
predefined rules, making them powerful tools for modeling and controlling sequential logic in a wide
range of applications, from software to hardware systems.
Key Concepts of Finite State Machine
1. States: Distinct modes or conditions that the machine can be in at any point.
2. Alphabet (Σ): A set of input symbols that trigger transitions between states.
3. Transition Function (δ): Defines how the machine moves from one state to another based on
an input symbol.
4. Initial State (q0): The state where the FSM starts.
5. Final/Accepting State(s): Special states that signify acceptance of input in some FSMs
(particularly in recognizing languages).
Types of Finite State Machines
1. Deterministic Finite Automaton (DFA):
o Each state has exactly one transition for each input symbol, leading to a unique next
state.
o Ideal for applications requiring precise control with no ambiguity, such as lexical
analysis in compilers.
2. Nondeterministic Finite Automaton (NFA):
o Each state can have zero, one, or multiple transitions for the same input symbol,
possibly leading to multiple states.
o Though less restrictive, NFAs can be converted to equivalent DFAs.
o Commonly used in the initial stages of pattern recognition and regular expression
processing.
Components of a Finite State Machine
A finite state machine can be formally defined by the tuple M=(Q,Σ,δ,q0,F), where:
Q: A finite set of states.
Σ: A finite set of input symbols (alphabet).
δ: A transition function δ:Q×Σ→Q, which maps a state and input to a new state.
q₀: The initial state, where the machine starts.
F: A set of final or accepting states (a subset of Q), where the input is considered accepted.
How Finite State Machines Work
1. Start: The FSM begins in the initial state q0.
2. Input Processing: Each input symbol from a sequence is processed in order, causing transitions between states based on the transition function δ.
3. Transitions: The machine follows defined paths between states depending on the current
state and the input symbol.
4. Acceptance: For FSMs that recognize patterns or languages, the input is accepted if the
machine ends in an accepting state after processing all symbols.
Example of a Finite State Machine
Consider a DFA that recognizes the binary language consisting of strings ending in “10”.
States: Q={q0,q1,q2}
Alphabet: Σ={0,1}
Transitions:
o δ(q0,0)=q0
o δ(q0,1)=q1
o δ(q1,0)=q2
o δ(q1,1)=q1
o δ(q2,0)=q0
o δ(q2,1)=q1
Initial State: q0
Accepting State: q2
In this example:
Starting at q0, the machine will transition through states based on each input bit.
If the input ends with “10”, the FSM will end in q2, the accepting state, and accept the string.
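A table-driven Python sketch of this DFA (added for illustration, not from the original notes); it looks up exactly one move per (state, symbol) pair, which is what makes it deterministic:

    # Sketch: simulate the DFA above (binary strings ending in "10").
    delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
             ("q1", "0"): "q2", ("q1", "1"): "q1",
             ("q2", "0"): "q0", ("q2", "1"): "q1"}

    def accepts(word, start="q0", finals=("q2",)):
        state = start
        for symbol in word:                 # one deterministic move per symbol
            state = delta[(state, symbol)]
        return state in finals

    print(accepts("0110"))   # True  (ends in 10)
    print(accepts("0101"))   # False (ends in 01)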
Applications of Finite State Machines
Finite State Machines are widely used in applications where behavior can be divided into distinct
states with rules governing transitions:
1. Lexical Analysis in Compilers: Recognizing tokens in source code.
2. Network Protocols: Modeling states of connections and packet transmissions.
3. Digital Circuits: Designing sequential circuits like counters, registers, and controllers.
4. Control Systems: Managing automated systems like vending machines, traffic lights, and
elevators.
5. User Interfaces: Managing component states and transitions (e.g., button press states).
Advantages and Limitations of Finite State Machines
Advantages:
o Simple to understand and implement.
o Provide a clear visual representation of system states and transitions.
o Highly predictable and suitable for deterministic control.
Limitations:
o Limited memory: FSMs do not retain historical data beyond the current state.
o Inefficient for languages or processes requiring deep memory or nested structures (e.g.,
context-free languages).
Conclusion
Finite State Machines are fundamental models in computer science and engineering, providing a
structured and efficient way to model and control sequential processes. Despite their simplicity,
FSMs are powerful for tasks requiring deterministic, rule-based state transitions, making them
essential for designing compilers, network protocols, control systems, and more.
Summary of Differences
Criteria | Finite Automata (FA) | Push Down Automata (PDA) | Turing Machine (TM)
Operation | Based on input only | Based on input and stack top | Based on input, tape content, and movement
Differentiate Finite Automata, Push Down Automata, and Turing Machine.
Criteria | Finite Automata (FA) | Push Down Automata (PDA) | Turing Machine (TM)
Definition | A computational model that accepts regular languages. | A computational model that accepts context-free languages. | A computational model capable of simulating any algorithm (accepts recursively enumerable languages).
Memory | No additional memory apart from states. | Has a stack as auxiliary memory. | Has an infinite tape as memory, which can be read and written on both ends.
States | Consists of a finite number of states. | Consists of a finite number of states. | Consists of a finite number of states but uses the tape for additional computation power.
Acceptance Power | Can recognize only regular languages. | Can recognize context-free languages. | Can recognize recursively enumerable languages.
Transitions | Defined based on current state and input symbol. | Defined based on current state, input symbol, and top of the stack. | Defined based on current state and symbol on the tape, moving the tape head left or right.
Determinism | Can be deterministic (DFA) or non-deterministic (NFA). | Can be deterministic or non-deterministic (the non-deterministic PDA is more powerful). | Can be deterministic or non-deterministic (both have equivalent power for TMs).
Language Examples | Regular languages (e.g., set of all strings with an even number of a's). | Context-free languages (e.g., balanced parentheses, palindromes). | Recursively enumerable languages (e.g., the language of all valid programs).
Limitations | Cannot handle nested or recursive structures. | Can handle nested structures but cannot handle arbitrary computations. | Can perform arbitrary computations and simulate both FA and PDA.
Discuss different applications of Finite Automata.
Finite Automata (FA) have a wide range of practical applications, particularly in areas that involve recognizing patterns, processing text, and modeling computational systems. Key applications include lexical analysis in compilers, pattern matching with regular expressions (text search and input validation), modeling of network protocols, design of digital circuits, and control systems such as vending machines and traffic lights (see the applications list in the Finite State Machine section above).
Summary
Finite Automata are fundamental in any application that requires structured, pattern-based input
recognition, especially when the patterns can be expressed by regular expressions. Their
deterministic and structured approach makes them ideal for applications where a sequence of
operations needs to be managed with clear and predictable state transitions.
Differentiate between NFA and DFA.
Criteria | DFA | NFA
Determinism | Deterministic: each state has exactly one transition for each input symbol. | Nondeterministic: a state can have zero, one, or multiple transitions for the same input symbol.
Transition Function | Defined as δ : Q × Σ → Q, meaning a single next state is defined for each state and input pair. | Defined as δ : Q × Σ → 2^Q, meaning multiple possible next states (including no transition) are allowed for each state and input pair.
Uniqueness of Next State | Only one unique path exists for each input string. | Multiple paths may exist for a single input string, and the NFA can "choose" any path.
Epsilon (ε) Transitions | Does not allow ε (empty) transitions; each transition requires a specific input symbol. | Allows ε transitions, meaning the automaton can change states without consuming any input.
Acceptance of Input | The input string is accepted if the DFA reaches a final state after processing all input symbols. | The input string is accepted if at least one path leads to a final state after processing all input symbols.
Memory Requirements | Generally requires more memory for complex languages, as it explicitly defines all possible transitions for each input symbol. | Often uses less memory for complex languages, as it does not need to explicitly define every possible path; instead, it allows multiple paths implicitly.
Ease of Implementation | Easier to implement in hardware and software due to its deterministic nature. | More complex to implement directly because of its nondeterminism, but can be simulated by converting it to a DFA.
Conversion | Directly represents the language as is. | Can be converted to an equivalent DFA using the subset construction method, which may result in a DFA with more states.
Processing Time | Each input symbol is processed in one deterministic path, making it generally faster in practice. | May explore multiple paths, which can increase complexity, though the paths are conceptually processed in parallel (simulated in software).
State Complexity | May require more states than an equivalent NFA to represent the same language. | Often requires fewer states, as multiple transitions from a single state allow for a more compact representation.
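The subset construction mentioned in the Conversion row can be sketched as follows (an illustrative Python snippet, not from the original notes; it assumes an ε-free NFA given as a dictionary from (state, symbol) to a set of next states, and the state names p, q, r are hypothetical):

    # Sketch: subset construction for an epsilon-free NFA.
    def nfa_to_dfa(delta, start, finals, alphabet):
        start_set = frozenset([start])
        dfa_delta, todo, seen = {}, [start_set], {start_set}
        while todo:
            S = todo.pop()
            for a in alphabet:
                # the DFA state reachable from subset S on symbol a
                T = frozenset(q2 for q in S for q2 in delta.get((q, a), set()))
                dfa_delta[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        dfa_finals = {S for S in seen if S & finals}   # subsets containing an NFA final state
        return dfa_delta, start_set, dfa_finals

    # Example: NFA for strings ending in "aa" over {a, b}.
    delta = {("p", "a"): {"p", "q"}, ("p", "b"): {"p"}, ("q", "a"): {"r"}}
    dfa_delta, dfa_start, dfa_finals = nfa_to_dfa(delta, "p", {"r"}, ["a", "b"])
    print(len({S for (S, _) in dfa_delta}))   # 3 reachable DFA subsets for this NFA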
Compare and contrast Moore and Mealy machines.
Moore and Mealy Machines are two types of finite state machines used in digital logic and
computational theory. Both are used to model systems that transition between states based on
inputs, but they differ in how and when outputs are generated.
Comparison and Contrast: Moore vs. Mealy Machine
Criteria | Moore Machine | Mealy Machine
Definition | A finite state machine where the output depends only on the current state. | A finite state machine where the output depends on both the current state and the current input.
Output Generation | Output is associated with each state, so it is produced as soon as the machine enters that state, regardless of the input. | Output is associated with each transition, meaning it changes immediately with the input, even if the state remains the same.
Output Function | Defined as a function of the state only: Output = f(State). | Defined as a function of both the state and input: Output = f(State, Input).
Timing of Output Changes | Output changes only when there is a state transition. | Output can change in the middle of a state if the input changes, making it more responsive to input changes.
Complexity | Simpler in design as outputs are tied only to states, but can require more states to implement the same functionality as a Mealy machine. | Typically requires fewer states than a Moore machine for the same functionality, as outputs are generated on transitions.
Implementation | More stable output, as it is only state-dependent and does not vary with input changes within a state. | More flexible output, as it can change immediately with changes in the input, even if the state remains the same.
Difference between Finite Automata and Pushdown Automata
Criteria | Finite Automaton (FA) | Pushdown Automaton (PDA)
Definition | A state machine that recognizes regular languages. | A state machine that recognizes context-free languages.
Memory | Has no memory beyond its current state. | Has a stack as memory, allowing it to keep track of nested structures.
Input Handling | Reads input symbols and transitions based on states. | Reads input symbols, transitions based on states, and uses a stack for additional context.
Stack | Does not use a stack. | Uses a stack to store symbols, providing more computational power.
Example Languages | Strings with an even number of a's, ab*, etc. | Strings with balanced parentheses, palindromes, etc.
Non-Deterministic Pushdown Automaton (NPDA)
A Non-Deterministic Pushdown Automaton (NPDA) is a type of PDA that has the ability to make
multiple transitions for a given input and stack configuration. Unlike a Deterministic Pushdown
Automaton (DPDA), where each move is uniquely determined by the current state, input symbol, and
stack top, an NPDA may have several possible moves for the same configuration.
Key Features of NPDA
Multiple Paths: The NPDA can pursue multiple computation paths simultaneously. If at least
one of these paths leads to an accepting state or an empty stack (depending on the
acceptance criteria), the input string is accepted.
Guessing Capability: Non-determinism allows the NPDA to "guess" which path to take for complex languages such as palindromes (with no centre marker), which a DPDA cannot recognize.
Power: NPDAs recognize exactly the class of context-free languages: any language generated by a context-free grammar can be recognized by an NPDA, whereas deterministic PDAs recognize only a proper subset of them.
Example of NPDA Use
Consider the language of palindromes over {a, b}, such as abba or aba. An NPDA can be designed to
"guess" the midpoint of the palindrome and, from that point, match characters symmetrically by using
its stack to keep track of the first half of the string.
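A backtracking sketch of this "guess the midpoint" behaviour (added for illustration, not from the original notes), restricted to even-length palindromes over {a, b}; each call explores both non-deterministic choices, which is how the NPDA's multiple computation paths are simulated here:

    # Sketch: NPDA for even-length palindromes over {a, b}, simulated by
    # exploring both choices: keep pushing, or guess the midpoint and start matching.
    def is_even_palindrome(word, i=0, stack=(), matching=False):
        if matching:                                    # phase 2: pop and compare
            if i == len(word):
                return len(stack) == 0                  # accept: input consumed, stack empty
            if stack and stack[-1] == word[i]:
                return is_even_palindrome(word, i + 1, stack[:-1], True)
            return False
        # phase 1: non-deterministically guess the midpoint now, or push and continue
        if is_even_palindrome(word, i, stack, True):
            return True
        if i < len(word):
            return is_even_palindrome(word, i + 1, stack + (word[i],), False)
        return False

    print(is_even_palindrome("abba"))   # True
    print(is_even_palindrome("abab"))   # False
    print(is_even_palindrome(""))       # True (the empty string, trivially)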