Digital Systems
Aim: At the end of the course the student will be able to analyze, design, and evaluate digital circuits of medium complexity that are based on SSIs, MSIs, and programmable logic devices.
Module 1: Number Systems and Codes (3)
Number systems: Binary, octal, and hexadecimal number systems, binary arithmetic. Codes: Binary code, excess-3 code, gray code, and error detection and
correction codes.
Module 2: Boolean Algebra and Logic Functions (5)
Boolean algebra: Postulates and theorems. Logic functions, minimization of Boolean
functions using algebraic, Karnaugh map and Quine-McCluskey methods. Realization using logic gates.
Module 3: Logic Families (4)
Logic families: Characteristics of logic families. TTL, CMOS, and ECL families.
Module 4: Combinational Functions (8)
Realizing logical expressions using different logic gates and comparing their
performance. Hardware aspects of logic gates and combinational ICs: delays and
hazards. Design of combinational circuits using combinational ICs: Combinational
functions: code conversion, decoding, comparison, multiplexing, demultiplexing,
addition, and subtraction.
Module 5: Analysis of Sequential Circuits (5)
Structure of sequential circuits: Moore and Mealy machines. Flip-flops, excitation
tables, conversions, practical clocking aspects concerning flip-flops, timing and
triggering considerations. Analysis of sequential circuits: State tables, state diagrams
and timing diagrams.
Module 6: Designing with Sequential MSIs (6)
Realization of sequential functions using sequential MSIs: counting, shifting,
sequence generation, and sequence detection.
Module 7: PLDs (3)
Programmable Logic Devices: Architecture and characteristics of PLDs.
Module 8: Design of Digital Systems (6)
State diagrams and their features. Design flow: functional partitioning, timing relationships, state assignment, output races. Examples of design of digital systems using PLDs.
Lecture Plan
1. Recall
1.1 List different criteria that could be used for optimization of a digital circuit.
1.2 List and describe different problems of digital circuits introduced by the hardware
limitations.
2. Comprehension
2.1 Describe the significance of different criteria for design of digital circuits.
2.2 Describe the significance of different hardware related problems encountered in
digital circuits.
2.3 Draw the timing diagrams for identified signals in a digital circuit.
3. Application
3.1 Determine the output and performance of given combinational and sequential
circuits.
3.2 Determine the performance of a given digital circuit with regard to an identified
optimization criterion.
4. Analysis
4.1 Compare the performances of combinational and sequential circuits implemented
with SSIs/MSIs and PLDs.
4.2 Determine the function and performance of a given digital circuit.
4.3 Identify the faults in a given circuit and determine the consequences of the same
on the circuit performance.
4.4 Draw conclusions on the behavior of a given digital circuit with regard to
hazards, asynchronous inputs, and output races.
4.5 Determine the appropriateness of the choice of the ICs used in a given digital
circuit.
4.6 Determine the transition sequence of a given state in a state diagram for a given
input sequence.
5. Synthesis
5.1 Generate multiple digital solutions to a verbally described problem.
5.2 Modify a given digital circuit to change its performance as per specifications.
6. Evaluation
6.1 Evaluate the performance of a given digital circuit.
6.2 Assess the performance of a given digital circuit with Moore and Mealy
configurations.
6.3 Compare the performance of given digital circuits with respect to their speed,
power consumption, number of ICs, and cost.
Digital Systems: Motivation
A digital circuit is one that is built with devices with two well-defined states. Such circuits
can process information represented in binary form. Systems based on digital circuits touch all aspects of our present-day lives. Home products such as electronic games and appliances, communication and office-automation products, computers with a wide range of capabilities, industrial instrumentation and control systems, electro-medical equipment, and defence and aerospace systems are all heavily dependent on digital circuits. Many fields that emerged after digital electronics have peaked and levelled off, but the application of digital concepts appears to be still growing exponentially. This unprecedented growth is powered by semiconductor technology, which enables the introduction of ever more complex integrated circuits. The complexity of an integrated circuit is measured in terms of the number of transistors that can be integrated into a single unit. The number of transistors in a single integrated circuit has been doubling roughly every eighteen months (Moore's Law) for several decades and has reached almost one billion transistors per chip. This allows circuit designers to provide more and more complex functions in a single unit.
The central role of digital circuits in all our professional and personal lives makes it
imperative that every electrical and electronics engineer acquire a good knowledge of the relevant basic concepts and the ability to work with digital circuits.
At present many of the undergraduate programmes offer two to four courses in the area of
digital systems, with at least two of them being core courses. The course under
consideration constitutes the first course in the area of digital systems. The rate of
obsolescence of knowledge, design methods, and design tools is uncomfortably high. Even
the first level course in digital electronics is not exempt from this obsolescence.
Any course in electronics should enable the students to design circuits to meet some stated
requirements as encountered in real life situations. However, the design approaches should
be based on a sound understanding of the underlying principles. The basic feature of all
design problems is that all of them admit multiple solutions. The selection of the final
solution depends on a variety of criteria that could include the size and cost of the substrate
on which the components are assembled, the cost of components, manufacturability,
reliability, speed etc.
The course contents are designed to enable the students to design digital circuits of medium complexity, taking the functional and hardware aspects into account in an integrated manner within the context of commercial and manufacturing constraints. However, no
compromises are made with regard to theoretical aspects of the subject.
Learning Objectives
Recall
Comprehension
1. Explain how a number with one radix is converted into a number with another
radix.
5. Explain how errors are detected and/or corrected using different codes.
Application
Analysis: Nil
Synthesis: Nil
Evaluation: Nil
Digital Electronics
Module 1: Number Systems and
Codes - Number Systems
N.J. Rao
Indian Institute of Science
Numbers
We use numbers
– to communicate
– to perform tasks
– to quantify
– to measure
• Numbers have become symbols of the present era
• Many consider that what is not expressible in terms of numbers is not worth knowing
(N)2 = (11100110)2
Its decimal value is given by
(N)2 = 1 x 2^7 + 1 x 2^6 + 1 x 2^5 + 0 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0
     = 128 + 64 + 32 + 0 + 0 + 4 + 2 + 0 = (230)10

D = ∑_{i = −n}^{p−1} d_i r^i
Examples
(331)8 = 3 x 8^2 + 3 x 8^1 + 1 x 8^0 = 192 + 24 + 1 = (217)10
(D9)16 = 13 x 16^1 + 9 x 16^0 = 208 + 9 = (217)10
(33.56)8 = 3 x 8^1 + 3 x 8^0 + 5 x 8^-1 + 6 x 8^-2 = (27.71875)10
(E5.A)16 = 14 x 16^1 + 5 x 16^0 + 10 x 16^-1 = (229.625)10
Quotient Remainder
156 ÷ 2 78 0
78 ÷ 2 39 0
39 ÷ 2 19 1
19 ÷ 2 9 1
9÷2 4 1
4÷2 2 0
2÷2 1 0
1÷2 0 1
(156)10 = (10011100)2
Example of Conversion
Quotient Remainder
678 ÷ 8 84 6
84 ÷ 8 10 4
10 ÷ 8 1 2
1÷8 0 1
(678)10 = (1246)8
Quotient Remainder
678 ÷ 16 42 6
42 ÷ 16 2 A
2 ÷ 16 0 2
(678)10 = (2A6)16
1111001 → (1)(111001)
• First (sign) bit is 1: the number is negative
• Ones' complement of 111001 → 000110 = (6)10, so the number is -(6)10
1111010 → (1)(111010)
• First (sign) bit is 1: the number is negative
• Complement 111010 and add 1: 000101 + 1 = 000110 = (6)10, so the number is -(6)10
We all use numbers to communicate and perform several tasks in our daily lives.
Our present day world is characterized by measurements and numbers associated
with everything. In fact, many consider that anything we cannot express in terms of numbers is not worth knowing. While this is an extreme view that is difficult to
justify, there is no doubt that quantification and measurement, and consequently
usage of numbers, are desirable whenever possible. Manipulation of numbers is one
of the early skills that the present day child is trained to acquire. The present day
technology and the way of life require the usage of several number systems. Usage
of decimal numbers starts very early in one’s life. Therefore, when one is confronted
with number systems other than decimal, some time during the high-school years, it
calls for a fundamental change in one’s framework of thinking.
There have been two types of numbering systems in use throughout the world. One type is symbolic in nature. The most important example of this symbolic numbering system is the one based on Roman numerals:
MMVII = 2007
While this system was in use for several centuries in Europe, it has been completely superseded by the weighted-position system based on Indian numerals. The Roman number system is still used in some places, such as watch faces and the release dates of movies.
The weighted-positional system based on the use of radix 10 is the most commonly
used numbering system in most of the transactions and activities of today’s world.
However, the advent of computers and the convenience of using devices that have
two well defined states brought the binary system, using the radix 2, into extensive
use. The use of the binary number system in the field of computers and electronics also led to the use of the octal (based on radix 8) and hexadecimal (based on radix 16) systems. The usage of binary numbers at various levels has become so essential that it
is also necessary to have a good understanding of all the binary arithmetic
operations.
Here we explore the weighted-position number systems and conversion from one
system to the other.
Weighted-Position Number System
In the decimal system, 10 is called the base or radix of the number system. In a general positional number system, the radix may be any integer r ≥ 2, and a digit position i has weight r^i. The general form of a number in such a system is
d_{p−1} d_{p−2} … d_1 d_0 . d_{−1} d_{−2} … d_{−n}
where there are p digits to the left of the point (called the radix point) and n digits to the right of the point. The value of the number is the sum of each digit multiplied by the corresponding power of the radix:
D = ∑_{i = −n}^{p−1} d_i r^i
Except for possible leading and trailing zeros, the representation of a number in
positional system is unique (00256.230 is the same as 256.23). Obviously the
values di’s can take are limited by the radix value. For example a number like
(356)5, where the suffix 5 represents the radix, will be incorrect, as there cannot be a digit like 5 or 6 in a weighted-position number system with radix 5.
If the radix point is not shown in the number, then it is assumed to be located to the immediate right of the rightmost digit. The symbol used for the radix point is a point (.). However, a comma is used in some countries. For example 7,6 is used, instead of 7.6, to represent a number having seven as its integer part and six as its fractional part.
As much of the present day electronic hardware is dependent on devices that work
reliably in two well defined states, a numbering system using 2 as its radix has
become necessary and popular. With the radix value of 2, the binary number system
will have only two numerals, namely 0 and 1.
Consider the binary number (N)2 = (11100110)2. It is an eight-digit binary number. The binary digits are also known as bits. Consequently the above number would be referred to as an 8-bit number. Its decimal value is given by
(N)2 = 1 x 2^7 + 1 x 2^6 + 1 x 2^5 + 0 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0
     = 128 + 64 + 32 + 0 + 0 + 4 + 2 + 0 = (230)10
From here on we consider any number without its radix specifically mentioned, as a
decimal number.
With the radix value of 2, the binary number system requires very long strings of 1s
and 0s to represent a given number. Some of the problems associated with handling
large strings of binary digits may be eased by grouping them into three digits or four digits: grouping three bits at a time gives the octal representation, and grouping four bits at a time gives the hexadecimal representation.
In the octal number system the digits will have one of the following eight values 0, 1,
2, 3, 4, 5, 6 and 7.
In the hexadecimal system each digit takes one of the sixteen values 0 through 15. However, the decimal values from 10 to 15 are represented by the letters A (=10), B (=11), C (=12), D (=13), E (=14) and F (=15).
Note that adding a leading zero does not alter the value of the number. Similarly for
grouping the digits in the fractional part of a binary number, trailing zeros may be
added without changing the value of the number.
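As an illustration, the following Python sketch (not part of the original notes; the function name and example value are only illustrative) converts a binary string to its octal and hexadecimal forms by the grouping rule just described.

# Grouping bits in threes gives the octal form, in fours the hexadecimal form.
def group_convert(bits, group):
    pad = (-len(bits)) % group              # leading zeros to complete the left-most group
    bits = "0" * pad + bits
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + group], 2)] for i in range(0, len(bits), group))

print(group_convert("11100110", 3))   # 346  (octal)
print(group_convert("11100110", 4))   # E6   (hexadecimal)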
Number System Conversions
D = ∑_{i = −n}^{p−1} d_i r^i
where r is the radix of the number and there are p digits to the left of the radix point
and n digits to the right. Decimal value of the number is determined by converting
each digit of the number to its radix-10 equivalent and expanding the formula using
radix-10 arithmetic.
This forms the basis for converting a decimal number D to a number with radix r. If
we divide the right hand side of the above formula by r, the remainder will be d0,
and the quotient will be
Q = d_{p−1} r^{p−2} + d_{p−2} r^{p−3} + … + d_2 r + d_1
Thus, d0 can be computed as the remainder of the long division of D by the radix r.
As the quotient Q has the same form as D, another long division by r will give d1 as
the remainder. This process can continue to produce all the digits of the number
with radix r. Consider the following examples.
Quotient Remainder
156 ÷ 2 78 0
78 ÷ 2 39 0
39 ÷ 2 19 1
19 ÷ 2 9 1
9 ÷ 2 4 1
4 ÷ 2 2 0
2 ÷ 2 1 0
1 ÷ 2 0 1
(156)10 = (10011100)2
Quotient Remainder
678 ÷ 8 84 6
84 ÷ 8 10 4
10 ÷ 8 1 2
1 ÷ 8 0 1
(678)10 = (1246)8
Quotient Remainder
678 ÷ 16 42 6
42 ÷ 16 2 A
2 ÷ 16 0 2
(678)10 = (2A6)16
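The repeated-division procedure illustrated in the tables above can be sketched in Python as follows (this sketch is not part of the original notes; the function name is only illustrative). The remainders are produced least significant digit first, so they are reversed at the end.

DIGITS = "0123456789ABCDEF"

def to_radix(value, radix):
    # Convert a non-negative decimal integer to the given radix by repeated division.
    if value == 0:
        return "0"
    digits = []
    while value > 0:
        value, remainder = divmod(value, radix)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits))

print(to_radix(156, 2))    # 10011100
print(to_radix(678, 8))    # 1246
print(to_radix(678, 16))   # 2A6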
Representation of Negative Numbers
In our traditional arithmetic we use the “+” sign before a number to indicate it as a
positive number and a “-” sign to indicate it as a negative number. We usually omit the
sign before the number if it is positive. This method of representation of numbers is
called “sign-magnitude” representation. But using “+” and “-” signs on a computer is
not convenient, and it becomes necessary to have some other convention to represent
the signed numbers. We replace “+” sign with “0” and “-” with “1”. These two symbols
already exist in the binary system. Consider the following examples:
(+1100101)2 (01100101)2
(+101.001)2 (0101.001)2
(-10010)2 (110010)2
(-110.101)2 (1110.101)2
In the sign-magnitude representation of binary numbers the first digit is always treated
separately. Therefore, in working with the signed binary numbers in sign-magnitude
form the leading zeros should not be ignored. However, the leading zeros can be
ignored after the sign bit is separated. For example,
1000101.11 = - 101.11
In digital systems, negative numbers are more commonly represented in one's complement or two's complement form. If these representations are extended to decimal numbers they are known as 9's-complement and 10's-complement respectively.
The most significant bit represents the sign. If it is a “0” the number is positive and if it
is a “1” the number is negative.
The remaining (n-1) bits represent the magnitude, but not necessarily as a simple
weighted number.
Consider the following one's complement numbers and their decimal equivalents:
0111111 --> + 63
0000110 --> + 6
0000000 --> + 0
1111111 --> - 0
1111001 --> - 6
1000000 --> - 63
If the most significant bit (MSB) is zero the remaining (n-1) bits directly indicate
the magnitude.
Leaving the first bit ‘1’ for the sign, the remaining bits 111001 do not directly
represent the magnitude of the number -6.
In the example shown above a 7-bit number can cover the range from +63 to -63. In
general an n-bit number has a range from +(2n-1 - 1) to -(2n-1 - 1) with two
representations for zero.
The representation also suggests that if A is an integer in one’s complement form, then
one’s complement of A = -A
For example if A = 0.101 (+0.625)10, then the one’s complement of A is 1.010, which is
one's complement representation of (-0.625)10. Similarly consider the case of a mixed number:
A = 010011.0101 (+19.3125)10
One's complement of A = 101100.1010, which is the one's complement representation of (-19.3125)10.
Decimal number 75 requires 7 bits to represent its magnitude in the binary form. One additional bit is needed to represent the sign. Therefore, (+75)10 = (01001011)2 and, in one's complement form, (-75)10 = (10110100)2.
In the two's complement representation also, the most significant bit represents the sign. If it is a "0", the number is positive, and if it is "1" the number is negative.
The remaining (n-1) bits represent the magnitude, but not as a simple weighted
number.
Consider the following two’s complement numbers and their decimal equivalents:
0111111 --> + 63
0000110 --> + 6
0000000 --> + 0
1111010 --> - 6
1000001 --> - 63
1000000 --> - 64
If the most significant bit (MSB) is zero, the remaining (n-1) bits directly indicate the
magnitude.
If the MSD is 1, the magnitude of the number is obtained by taking the complement of
all the remaining (n-1) bits and adding a 1.
The representation also suggests that if A is an integer in two’s complement form, then
Two’s complement of A = -A
Two’s complement of a number is obtained by complementing all the digits and adding
‘1’ to the LSB.
A = 010011.0101 (+19.3125)10
Two's complement of A = 101100.1011, which represents (-19.3125)10.
Decimal number 75 requires 7 bits to represent its magnitude in the binary form. One additional bit is needed to represent the sign. Therefore, (+75)10 = (01001011)2 and, in two's complement form, (-75)10 = (10110101)2.
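A small Python sketch (not part of the original notes; the function names are only illustrative) of the complement-and-add-1 rule and its inverse is given below; masking with 2^n − 1 produces the same bit pattern as complementing all bits and adding 1.

def twos_complement(value, n):
    # n-bit two's complement pattern of a (possibly negative) integer
    return format(value & ((1 << n) - 1), "0{}b".format(n))

def from_twos_complement(bits):
    # Interpret a bit string as an n-bit two's complement number
    value = int(bits, 2)
    return value - (1 << len(bits)) if bits[0] == "1" else value

print(twos_complement(-6, 7))           # 1111010, as in the table above
print(from_twos_complement("1000000"))  # -64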
5. How many bits have to be grouped together to convert the binary number to its
corresponding octal number?
8. How many bits are required to cover the numbers from +63 to -63 in one’s
complement representation?
Simple Scheme
• Convert decimal number inputs into binary form
• Manipulate these binary numbers
• Convert resultant binary numbers back into decimal
numbers
However, it
• requires more hardware
• slows down the system
[Figure: information bits arranged in rows, with row parity bits and column parity bits]
Sometimes ‘word’ is used to designate a larger group of bits also, for example 32 bit
or 64 bit words.
Coding schemes have to be designed to suit the security requirements and the
complexity of the medium over which information is transmitted.
In view of the modern day requirements of efficient, error free and secure
information transmission coding theory is an extremely important subject. However,
at this stage of learning digital systems we confine ourselves to familiarising with a
few commonly used codes and their properties.
We will be mainly concerned with binary codes. In binary coding we use binary digits
or bits (0 and 1) to code the elements of an information set. Let n be the number of bits in the code word and x be the number of unique code words.
If n = 1, then x = 2 (0, 1)
If n = 2, then x = 4 (00, 01, 10, 11)
If n = j, then x = 2^j
From this we can conclude that if we are given x elements of information to code into binary coded format, we need j bits, where
x ≤ 2^j, or j ≥ log2 x
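The relation j ≥ log2 x can be evaluated directly; the short Python sketch below (not part of the original notes) computes the smallest such j.

import math

def bits_needed(x):
    # Smallest number of bits j that provides at least x distinct code words
    return math.ceil(math.log2(x))

print(bits_needed(10))   # 4 bits are needed to code the ten decimal digits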
The main motivation for binary number system is that there are only two elements in
the binary set, namely 0 and 1. While it is advantageous to perform all
computations on hardware in binary forms, human beings still prefer to work with
decimal numbers. Any electronic system should then be able to accept decimal
numbers, and make its output available in the decimal form.
However, this kind of conversion requires more hardware, and in some cases
considerably slows down the system. Faster systems can afford the additional
circuitry, but the delays associated with the conversions would not be acceptable. In
case of smaller systems, the speed may not be the main criterion, but the additional
circuitry may make the system more expensive.
We can solve this problem by encoding decimal numbers as binary strings, and use
them for subsequent manipulations.
As four bits are required to encode one decimal digit, there are sixteen four-bit groups from which ten have to be selected. This leads to nearly 3 x 10^10 (16C10 x 10!) possible codes. However, most of them will not have any special properties that would be
useful in hardware design. We wish to choose codes that have some desirable
properties like
ease of coding
ease in arithmetic operations
minimum use of hardware
error detection property
ability to prevent wrong output during transitions
In a weighted code the decimal value of a code is the algebraic sum of the weights
of 1s appearing in the number. Let (A)10 be a decimal number encoded in the binary form as a3a2a1a0. Then
(A)10 = w3.a3 + w2.a2 + w1.a1 + w0.a0
where w3, w2, w1 and w0 are the weights selected for a given code, and a3, a2, a1 and
w3 w2 w1 w0
8 4 2 1
2 4 2 1
8 4 -2 -1
In all the cases only ten combinations are utilized to represent the decimal digits.
The remaining six combinations are illegal. However, they may be utilized for error
detection purposes.
Consider, for example, the representation of the decimal number 16.85 in Natural Binary Coded Decimal code (NBCD):
1 → 0001, 6 → 0110, 8 → 1000, 5 → 0101
(16.85)10 = (0001 0110 . 1000 0101)NBCD
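A minimal Python sketch of NBCD encoding (not part of the original notes, with an illustrative function name) is given below; each decimal digit is simply replaced by its 4-bit binary equivalent.

def to_nbcd(number):
    # Encode a decimal number, given as a string, in NBCD (8421) code.
    groups = []
    for ch in number:
        groups.append("." if ch == "." else format(int(ch), "04b"))
    return " ".join(groups)

print(to_nbcd("16.85"))   # 0001 0110 . 1000 0101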
There are many possible weights to write a number in BCD code. Some codes have
desirable properties, which make them suitable for specific applications. Two such
desirable properties are:
1. Self-complementing codes
2. Reflective codes
A reflective code is characterized by the fact that it is imaged about the centre
entries with one bit changed. For example, the 9's complement of a reflected BCD code word is formed by changing only one of its bits. Two such examples of reflective
BCD codes are
Decimal Code-A Code-B
0 0000 0100
1 0001 1010
2 0010 1000
3 0011 1110
4 0100 0000
5 1100 0001
6 1011 1111
7 1010 1001
8 1001 1011
9 1000 0101
The BCD codes are widely used and the reader should become familiar with reasons
for using them and their application. The most common application of NBCD codes is
in the calculator.
Unit Distance Codes
There are many applications in which it is desirable to have a code in which the
adjacent codes differ only in one bit. Such codes are called Unit distance Codes.
"Gray code" is the most popular example of a unit distance code. The 3-bit Gray code sequence is 000, 001, 011, 010, 110, 111, 101, 100; the 4-bit Gray code is constructed in a similar manner. These Gray codes also have the reflective property. Some additional unit distance codes exist as well.
The most popular use of Gray codes is in the position sensing transducer known as
shaft encoder. A shaft encoder consists of a disk in which concentric circles have
alternate sectors with reflective surfaces while the other sectors have non-reflective
surfaces. The position is sensed by the reflected light from a light emitting diode.
However, there is choice in arranging the reflective and non-reflective sectors. A 3-
bit binary coded disk will be as shown in the figure 1.
[Figure 1: 3-bit binary-coded shaft-encoder disk, with sectors labelled 000 through 111]
From this figure we see that straight binary code can lead to errors because of
mechanical imperfections. When the code is transiting from 001 to 010, a slight
misalignment can cause a transient code of 011 to appear. The electronic circuitry
associated with the encoder will receive 001 --> 011 --> 010. If the disk is patterned
to give Gray code output, the possibilities of wrong transient codes will not arise.
This is because the adjacent codes will differ in only one bit. For example the
adjacent code for 001 is 011. Even if there is a mechanical imperfection, the
transient code will be either 001 or 011. The shaft encoder using 3-bit Gray code is
shown in the figure 2.
[Figure 2: 3-bit Gray-coded shaft-encoder disk]
There are two convenient methods to construct Gray code with any number of
desired bits. The first method is based on the fact that Gray code is also a reflective
code. The following rule may be used to construct Gray code:
The last 2^n code words of an (n+1)-bit Gray code equal the code words of an n-bit Gray code, written in reverse order with a leading 1 appended.
However, this method requires Gray codes with all bit lengths less than ‘n’ also be
generated as a part of generating n-bit Gray code. The second method allows us to
derive an n-bit Gray code word directly from the corresponding n-bit binary code
word:
The bits of an n-bit binary code word or Gray code word are numbered from right to left, from 0 to n-1.
Bit i of the Gray code word is 0 if bits i and i+1 of the binary code word are the same, and 1 if they differ (bit n of the binary word is taken as 0).
(68)10 = (1000100)2
Binary code: 1 0 0 0 1 0 0
Gray code : 1 1 0 0 1 1 0
The following rule can be followed to convert a Gray coded number to a straight binary number:
The most significant bit of the binary number is the same as the most significant bit of the Gray code word.
Scan the Gray code word from left to right. Each subsequent bit of the binary number is the same as the previous binary bit if the corresponding Gray bit is 0, and is the complement of the previous binary bit if the corresponding Gray bit is 1.
Consider the following examples of Gray code numbers converted to binary numbers
Gray code : 1 1 0 1 1 0 1 0 0 0 1 0 1 1
Binary code: 1 0 0 1 0 0 1 1 1 1 0 0 1 0
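Both conversion rules can be expressed compactly: a Gray bit is the XOR of two adjacent binary bits, and a binary bit is the running XOR of all Gray bits scanned so far. The following Python sketch (not part of the original notes) implements the two directions.

def binary_to_gray(bits):
    gray = bits[0]
    for i in range(1, len(bits)):
        gray += str(int(bits[i - 1]) ^ int(bits[i]))      # XOR of adjacent binary bits
    return gray

def gray_to_binary(gray):
    binary = gray[0]
    for i in range(1, len(gray)):
        binary += str(int(binary[i - 1]) ^ int(gray[i]))  # running XOR
    return binary

print(binary_to_gray("1000100"))          # 1100110
print(gray_to_binary("11011010001011"))   # 10010011110010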
Alphanumeric Codes
b4 b3 b2 b1 \ b7b6b5: 000 001 010 011 100 101 110 111
0 0 0 0 NUL DLE SP 0 @ P ` p
0 0 0 1 SOH DC1 ! 1 A Q a q
0 0 1 0 STX DC2 “ 2 B R b r
0 0 1 1 ETX DC3 # 3 C S c s
0 1 0 0 EOT DC4 $ 4 D T d t
0 1 0 1 ENQ NAK % 5 E U e u
0 1 1 0 ACK SYN & 6 F V f v
0 1 1 1 BEL ETB ' 7 G W g w
1 0 0 0 BS CAN ( 8 H X h x
1 0 0 1 HT EM ) 9 I Y i y
1 0 1 0 LF SUB * : J Z j z
1 0 1 1 VT ESC + ; K [ k {
1 1 0 0 FF FS , < L \ l |
1 1 0 1 CR GS - = M ] m }
1 1 1 0 SO RS . > N ^ n ~
1 1 1 1 SI US / ? O _ o DEL
Alphanumeric codes like EBCDIC (Extended Binary Coded Decimal Interchange Code)
and 12-bit Hollerith code are in use for some applications. However, ASCII code is
now the standard code for most data communication networks. Therefore, the
reader is urged to become familiar with the ASCII code.
Error Detection and Correcting Codes
When data is transmitted in digital form from one place to another through a
transmission channel/medium, some data bits may be lost or modified. This loss of
data integrity occurs due to a variety of electrical phenomena in the transmission
channel. As there are needs to transmit millions of bits per second, the data
integrity should be very high. The error rate cannot be reduced to zero. We would then ideally like to have a mechanism of correcting the errors that occur. If this is
not possible or proves to be expensive, we would like to know if an error occurred.
If an occurrence of error is known, appropriate action, like retransmitting the data,
can be taken. One of the methods of improving data integrity is to encode the data
in a suitable manner. This encoding may be done for error correction or merely for
error detection.
A simple process of adding a special code bit to a data word can improve its
integrity. This extra bit will allow detection of a single error in a given code word in
which it is used, and is called the ‘Parity Bit’. This parity bit can be added on an odd
or even basis. The odd or even designation of a code word is determined by the actual number of 1's in the code word, including the added parity bit. For example, the letter S in ASCII code is
(S) = (1010011)ASCII
In this case the coded word has an even number (four) of ones.
Thus the parity encoding scheme is a simple one and requires only one extra bit. If
the system is using even parity and we find odd number of ones in the received data
word we know that an error has occurred. However, this scheme is meaningful only
for single errors. If two bits in a data word were received incorrectly the parity bit
scheme will not detect the faults. Then the question arises as to the level of
improvement in the data integrity if occurrence of only one bit error is detectable.
The improvement in the reliability can be mathematically determined.
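The mechanics of the single parity bit can be sketched in a few lines of Python (not part of the original notes; even parity is assumed here).

def add_even_parity(bits):
    # Append a parity bit so that the total number of 1s is even.
    return bits + str(bits.count("1") % 2)

def has_error(received):
    # An odd number of 1s violates even parity and signals a single-bit error.
    return received.count("1") % 2 == 1

word = add_even_parity("1010011")   # ASCII 'S' from the example above
print(word)                          # 10100110
print(has_error(word))               # False
print(has_error("10100100"))         # True: one bit has been changed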
Adding a parity bit allows us only to detect the presence of one bit error in a group of
bits. But it does not enable us to exactly locate the bit that changed. Therefore,
addition of one parity bit may be called an error detecting coding scheme. In a
digital system detection of error alone is not sufficient. It has to be corrected as
well. Parity bit scheme can be extended to locate the faulty bit in a block of
information. The information bits are conceptually arranged in a two-dimensional
array, and parity bits are provided to check both the rows and the columns.
If we can identify the code word that has an error with the parity bit, and the column
in which that error occurs by a way of change in the column parity bit, we can both
detect and correct the wrong bit of information. Hence such a scheme is single error
detecting and single error correcting coding scheme.
This method of using parity bits can be generalized for detecting and correcting more
than one-bit error. Such codes are called parity-check block codes. In this class
known as (n, k) codes, r (= n-k) parity check bits, formed by linear operations on
the k data bits, are appended to each block of k bits to generate an n-bit code word.
An encoder outputs a unique n-bit code word for each of the 2^k possible input k-bit
blocks. For example a (15, 11) code has r = 4 parity-check bits for every 11 data
bits. As r increases it should be possible to correct more and more errors.
With r = 1 error correction is not possible, as such a code will only detect an odd
number of errors.
It can also be established that as k increases the overall probability of error should
also decrease. Long codes with a relatively large number of parity-check bits should
thus provide better performance. Consider the case of (7, 3) code
Innumerable varieties of codes exist, with different properties. There are various
types of codes for correcting independently occurring errors, for correcting burst
errors, for providing relatively error-free synchronization of binary data etc. The
theory of these codes, methods of generating the codes and decoding the coded
data, is a very important subject of communication systems, and needs to be studied
as a separate discipline.
Problems
M1L2: Codes
1. Write the following decimal number in Excess-3, 2421, 84-2-2 BCD codes:
(a) 563 (b) 678 (c) 1465
1/ = 0; 0/ = 1
• The not operator is also called the complement:
  x/ is the complement of x
Property 2:
• The element 0 is unique.
• The element 1 is unique.
Proof for Part b by contradiction:
Assume that there are two 1s, denoted 1₁ and 1₂.
x . 1₁ = x and y . 1₂ = y (Postulate 2b)
1₁ . x = x and 1₂ . y = y (Postulate 3b)
Also x// = x// . 1 (postulate 2b)
= x// . (x + x/) (postulate 5a)
= x//.x + x//.x/ (postulate 4b)
= x//.x + 0 (postulate 5b)
= x//.x (postulate 2a)
Therefore, by the law of identity, we have x// = x
(x + y + z)/ = x/.y/.z/
Let y + z = w; then (x + y + z)/ = (x + w)/
Since (x + w)/ = x/.w/ (by DeMorgan's law)
Therefore (x + w)/ = (x + y + z)/ = x/.(y + z)/ (by substitution)
= x/.(y/.z/) (by DeMorgan's law)
= x/.y/.z/
BS = {0, 1}
Resulting Boolean algebra is more suited to working with
switching circuits
Variables associated with electronic switching circuits
take only one of the two possible values.
The operations "+" and "." also need to be given
appropriate meaning
A B A+B A.B
0 0 0 0
0 1 1 0
1 0 1 0
1 1 1 1
A B   NAND   NOR   EX-OR   EX-NOR
0 0     1      1      0       1
0 1     1      0      1       0
1 0     1      0      1       0
1 1     0      0      0       1
[Figure: circuit symbols for the NAND, NOR, EX-OR and EX-NOR operations, and their realizations using other gates]
Four switches control the operation of the bulb. The manner in which the operation of
the bulb is controlled can be stated as
The bulb switches on if the switches S1 and S2 are closed, and S3 or S4 is also
closed, otherwise the bulb will not switch on
There are many situations of engineering interest where the variables take only a small
number of possible values.
Some examples:
Can you identify a situation of significance where the variables can take only a small
number of distinctly defined states?
How do we implement functions similar to the example shown above? We need devices that have a finite number of states. It seems to be easy to create devices with two well defined states. It is more difficult and more expensive to create devices with more than two states.
Let us consider devices with two well defined states. We should also have the ability to switch the state of the device from one state to the other. We call devices having two well defined states "two-valued switching devices".
• Simple relays
• Electromechanical switch
Very complex functions can be represented using several binary variables. As we can
also build systems using millions of electronic two-state devices at very low costs, the
mathematics of binary variables becomes very important.
An English mathematician, George Boole, introduced the idea of examining the truth or
falsehood of language statements through a special algebra of logic. His work was
published in 1854, in a book entitled “An Investigation of the Laws of Thought”. Boole's
algebra was applied to statements that are either completely correct or completely false.
A value 1 is assigned to those statements that are completely correct and a value 0 is
assigned to statements that are completely false. As these statements are given
numerical values 1 or 0, they are referred to as digital variables.
In our study of digital systems, we use the words switching variables, logical variables,
and digital variables interchangeably.
Boole's algebra is referred to as Boolean algebra. Originally Boolean algebra was mainly
applied to establish the validity or falsehood of logical statements.
Logic designers of today use Boolean algebra to functionally design a large variety of
electronic equipment such as
• hand-held calculators,
• traffic light controllers,
• personal computers,
• super computers,
• communication systems
• aerospace equipment
• etc.
We next explore Boolean algebra at the axiomatic level. However, we do not worry about
the devices that would be used to implement them and their limitations.
Boolean Algebra and Huntington Postulates
In Boolean algebra as applied to the switching circuits, all variables and relations are
two-valued. The two values are normally chosen as 0 and 1, with 0 representing
false and 1 representing true. If x is a Boolean variable, then
x = 1 means x is true
x = 0 means x is false
When we apply Boolean algebra to digital circuits we will find that the qualifications
“asserted” and “not-asserted” are better names than “true” and “false”. That is when
x = 1 we say x is asserted, and when x = 0 we say x is not-asserted.
• For every element x and y ∈ BS the operations x/ (not x), x.y and x + y are uniquely defined.
1/ = 0; 0/ = 1
The not operator is also called the complement, and consequently x/ is the complement of x.
The binary operator ‘and’ is symbolized by a dot. The ‘and’ operator is defined by the
relations
0.0= 0
0.1= 0
1.0= 0
1.1= 1
The binary operator ‘or’ is represented by a plus (+) sign. The ‘or’ operator is
defined by the relations
0+0= 0
0+1= 1
1+0= 1
1+1= 1
If A and B are Boolean expressions, then A/, B/, A+B and A.B are also Boolean expressions.
Duality: Many of the Huntington’s postulates are given as pairs, and differ only by
the simultaneous interchange of operators "+" and "." and the elements "0" and "1".
This special property is called duality.
The property of duality can be utilized effectively to establish many useful properties
of Boolean algebra.
This implies that for each Boolean property, which we establish, the dual property is
also valid without needing additional proof.
Property 1:
a. x + 1 = 1
b. x . 0 = 0
Proof of part b: x . 0 = (x . 0) + 0 (postulate 2a)
= (x . 0) + (x . x/) (postulate 5b)
= x . (0 + x/) (postulate 4b)
= x . x/ (postulate 2a)
= 0 (postulate 5b)
Part a is valid by the principle of duality.
Property 2:
a. The element 0 is unique.
b. The element 1 is unique.
Proof for Part b by contradiction: Let us assume that there are two 1s denoted 1₁ and 1₂. Postulate 2b states that
x . 1₁ = x and y . 1₂ = y
Applying the postulate 3b on commutativity to these relationships, we get
1₁ . x = x and 1₂ . y = y
Letting x = 1₂ and y = 1₁, we obtain
1₁ . 1₂ = 1₂ and 1₂ . 1₁ = 1₁
Using the transitivity property of any equivalence relationship we obtain 1₁ = 1₂, which becomes a contradiction of our initial assumption.
Property 3
a. The complement of 0 is 0/ = 1.
b. The complement of 1 is 1/ = 0.
Proof: x + 0 = x (postulate 2a)
Substituting x = 0/: 0/ + 0 = 0/
Also 0 + 0/ = 1 (postulate 5a)
Therefore 0/ = 1
Part b is valid by the application of principle of duality.
Property 4: Idempotency law
For all x ∈ BS,
a. x+x=x
b. x.x =x
Proof: x + x = (x + x) . 1 (postulate 2b)
= (x + x) . (x + x/) (postulate 5a)
= x + (x . x/) (postulate 4a)
= x + 0 (postulate 5b)
= x (postulate 2a)
x.x=x (by duality)
a. x . y + x . y/ = x
b. (x + y) . (x + y/) = x
(x + y) . (x + y/) = x (by duality)
The adjacency law is very useful in simplifying logical expressions encountered in the
design of digital circuits. This property will be extensively used in later learning
units.
a. x + (x/ . y) = x + y
b. x . (x/ + y) = x . y
x . (x/ + y) = x . y (by duality)
Property 8: Consensus law
For all x, y and z ∈ BS,
a. x . y + x/ . z + y . z = x . y + x/ . z
b. (x + y) . (x/ + z) . (y + z) = (x + y) . (x/ + z)
Proof: x . y + x/ . z + y . z
= x . y + x/ . z + (x + x/) . y . z (postulate 5a)
= x . y + x/ . z + x . y . z + x/ . y . z (postulate 4b)
= x . y . (1 + z) + x/ . z . (1 + y) (postulate 4b)
= x . y + x/ . z (postulate 2b)
(x + y) . (x/ + z) . (y + z) = (x + y) . (x/ + z) (by duality)
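For the two-element Boolean algebra used with switching circuits, such identities can also be checked exhaustively. The short Python sketch below (not part of the original notes) verifies the consensus law over all eight combinations of x, y and z in BS = {0, 1}.

from itertools import product

def consensus_holds():
    for x, y, z in product([0, 1], repeat=3):
        lhs = (x & y) | ((1 - x) & z) | (y & z)   # x.y + x/.z + y.z
        rhs = (x & y) | ((1 - x) & z)             # x.y + x/.z
        if lhs != rhs:
            return False
    return True

print(consensus_holds())   # True: the term y.z is redundant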
x . (x + y) = y
x . (x + y) = x (property 6)
Therefore, by transitivity x = y
DeMorgan's law
For all x, y ∈ BS,
a. (x + y)/ = x/ . y/
b. (x . y)/ = x/ + y/
Proof of part a: We need to show that x/.y/ satisfies the complementation postulates with respect to (x + y), that is, that
(x + y) + (x/.y/) = 1 and (x + y) . (x/.y/) = 0
(x + y) . (x/.y/) = x.x/.y/ + y.x/.y/ (postulate 4b)
= 0 + 0
= 0 (postulate 2a)
(x + y) + (x/.y/) = (x + x/.y/) + y (postulate 3a)
= (x + y/) + y (since x + x/.z = x + z)
= x + (y/ + y) (postulate 3a)
= x + 1 = 1
Since the complement of an element is unique, (x + y)/ = x/.y/. Part b is valid by duality.
DeMorgan's law bridges the AND and OR operations, and establishes a method for
converting one form of a Boolean function into another. More particularly it gives a
method to form complements of expressions involving more than one variable. By
employing the property of substitution, DeMorgan's law can be extended to
expressions of any number of variables. Consider the following example:
(x + y + z)/ = x/ . y/ . z/
Let y + z = w; then (x + y + z)/ = (x + w)/ = x/ . w/ = x/ . (y + z)/ = x/ . y/ . z/
At the end of this Section the reader should remind himself that all the postulates
and properties of Boolean algebra are valid when the number of elements in the BS
is finite. The case of the set BS having only two elements is of more interest here
and in the topics that follow in this course on Design of Digital systems.
All the identities derived in this Section are listed in the Table 1 to serve as a ready
reference.
Complementation    x . x/ = 0
                   x + x/ = 1
Idempotency        x . x = x
                   x + x = x
Involution         (x/)/ = x
Commutative law    x . y = y . x
                   x + y = y + x
Adjacency          (x + y) . (x + y/) = x
Simplification     x . (x/ + y) = x . y
DeMorgan's law     (x + y)/ = x/ . y/
                   (x . y)/ = x/ + y/
The properties of Boolean algebra when the set BS has two elements, namely 0 and
1, will be explored next.
BOOLEAN OPERATORS
Recall that Boolean Algebra is defined over a set (BS) with finite number of elements. If
the set BS is restricted to two elements {0, 1} then the Boolean variables can take only
one of the two possible values. As all switches take only two possible positions, for
example ON and OFF, Boolean Algebra with two elements is more suited to working with
switching circuits. In all the switching circuits encountered in electronics, the variables
take only one of the two possible values.
Definition: A binary variable is one that can assume one of the two values, 0 or 1.
These two values, however, are meant to express two exactly opposite states. It means,
if a binary variable A ≠ 0 then A = 1. Similarly if A ≠ 1, then A = 0.
Note that it agrees with our intuitive understanding of electrical switches we are familiar
with.
The values 0 and 1 should not be treated numerically, such as to say "0 is less than 1"
or " 1 is greater than 0".
Definition: The Boolean operator NOT, also known as the complement operator, is represented by an overbar on the variable, or by " / " (a superscript slash) after the variable, and is defined by the following table.
A   A/
0   1
1   0
Though it is more popular to use the overbar in most text-books, we will adopt " / " to represent the complement of a variable, for convenience of typing.
Definition: The Boolean operator "+" known as OR operator is defined by the table
given in the following.
A B A+B
0 0 0
0 1 1
1 0 1
1 1 1
The circuit symbol for logical OR operation is given in the following.
Definition: The Boolean operator "." known as AND operator is defined by the table
given below
A B A.B
0 0 0
0 1 0
1 0 0
1 1 1
The circuit symbol for the logical AND operation is given in the following.
The relationship of these operators to the electrical switching circuits can be seen from
the equivalent circuits given in the following.
A        A/
open     closed
closed   open
A B A+B A.B
Open Open Open Open
Open Closed Closed Open
Closed Open Closed Open
Closed Closed Closed Closed
We can define several other logic operations besides these three basic logic operations.
These include
• NAND
• NOR
• Exclusive-OR (Ex-OR)
• Exclusive-NOR (Ex-NOR)
These are defined in terms of the different combinations of values the variables assume, as indicated in the following table:
A B   NAND   NOR   EX-OR   EX-NOR
0 0     1      1      0       1
0 1     1      0      1       0
1 0     1      0      1       0
1 1     0      0      0       1
A set of Boolean operations is called functionally complete set if all Boolean expressions
can be expressed by that set of operations. AND, OR and NOT constitute a functionally
complete set. However, it is possible to have several combinations of Boolean operations
as functionally complete sets.
For example, all Boolean functions can be realized through the NAND function alone, so NAND by itself forms a functionally complete set.
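A quick way to see this is to build NOT, AND and OR out of NAND alone, as in the Python sketch below (not part of the original notes).

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                      # x NAND x = x/

def and_(a, b):
    return nand(nand(a, b), nand(a, b))    # (x NAND y)/ = x.y

def or_(a, b):
    return nand(nand(a, a), nand(b, b))    # x/ NAND y/ = x + y

for a in (0, 1):
    for b in (0, 1):
        print(a, b, not_(a), and_(a, b), or_(a, b))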
• Algebraic
• Truth-table
• Logic circuit
• Hardware description language
• Maps
Each form of representation is convenient in a different
context.
A B F
0 0 0
0 1 0
1 0 1
1 1 1
F = A.B + A.B/
• Read it as "F is asserted when A and B are asserted or
A is asserted and B is not asserted".
• We will continue to use the term "truth table" for
historical reasons
We understand it as
an input-output table associated with a logic function
but not as something that is concerned with the
establishment of truth.
A B C D E F
1 1 1 0 0 1
1 0 1 0 1 1
0 0 1 1 1 1
1 0 0 1 1 1
F = A.B + C/.D
Many types of electrical and electronic circuits can be built with devices that have two possible states. We are, therefore, interested in working with variables that can take only two values. Such two-valued variables are called logic variables or switching variables.
We defined several Boolean operators, which can also be called Logic operators. We will
find that it is possible to describe a wide variety of situations and problems using logic
variables and logic operators. This is done through defining a “logic function” of logic
variables.
• Algebraic
• Truth-table
• Logic circuit
• Hardware description language
• Maps
We use all these forms to express logic functions in working with digital circuits. Each
form of representation is convenient in some context. Initially we will work with
algebraic, truth-table, and logic circuit representation of logic functions.
Let A1, A2, . . . An be logic variables defined on the set BS = {0,1}. A logic function of n
variables associates a value 0 or 1 to every one of the possible 2^n combinations of the n
variables. Let us consider a few examples of such functions.
F1 is a function of 4 variables. You notice that all terms in the function have all the four
variables. It is not necessary to have all the variables in all the terms. Consider the
following example.
1. If F1(A1, A2, ... An ) is a logic function, then (F1(A1, A2, ... A n))/ is also a Boolean
function.
2. If F1 and F2 are two logic functions, then F1+F2 and F1.F2 are also Boolean
functions.
3. Any function that is generated by the finite application of the above two rules is
also a logic function
Try to understand the meaning of these properties by solving the following examples.
If F1 = A.B.C + A.B/.C + A.B.C/ what is the logic function that represents F1/ ?
If F1 = A.B + A/.C and F2 = A.B/ + B.C write the logic functions F1 + F2 and F1.F2?
As each one of the 2^n combinations of n variables can take a value of 0 or 1, there are a total of 2^(2^n) distinct logic functions of n variables.
"Product term" or "product" refers to a series of literals related to one another through
an AND operator. Examples of product terms are A.B/.D, A.B.D/.E, etc.
"Sum term" or "sum" refers to a series of literals related to one another through an OR
operator. Examples of sum terms are A+B/+D, A+B+D/+E, etc.
The choice of terms "product" and "sum" is possibly due to the similarity of OR and AND
operator symbols "+" and "." to the traditional arithmetic addition and multiplication
operations.
Truth Table Description of Logic Functions
The truth table is a tabular representation of a logic function. It gives the value of the
function for all possible combinations of values of the variables. If there are three variables in a given function, there are 2^3 = 8 combinations of these variables. For each
combination, the function takes either 1 or 0. These combinations are listed in a table,
which constitutes the truth table for the given function. Consider the expression F = A.B + A.B/, whose truth table is given below:
A B F
0 0 0
0 1 0
1 0 1
1 1 1
The information contained in the truth table and in the algebraic representation of the function is the same.
The term ‘truth table’ came into usage long before Boolean algebra came to be
associated with digital electronics. Boolean functions were originally used to establish
truth or falsehood of statements. When statement is true the symbol "1" is associated
with it, and when it is false "0" is associated. This usage got extended to the variables
associated with digital circuits. However, this usage of adjectives "true" and "false" is
not appropriate when associated with variables encountered in digital systems. All
variables in digital systems are indicative of actions. Typical examples of such signals
are "CLEAR", "LOAD", "SHIFT", "ENABLE", and "COUNT". These are suggestive of
actions. Therefore, it is appropriate to state that a variable is ASSERTED or NOT
ASSERTED than to say that a variable is TRUE or FALSE. When a variable is asserted,
the intended action takes place, and when it is not asserted the intended action does not
take place. In this context we associate "1" with the assertion of a variable, and "0" with
the non-assertion of that variable. Consider the logic function,
F = A.B + A.B/
It should now be read as "F is asserted when A and B are asserted or A is asserted and
B is not asserted". This convention of using "assertion” and “non-assertion" with the
logic variables will be used in all the Learning Units of this course on Digital Systems.
The term ‘truth table’ will continue to be used for historical reasons. But we understand
it as an input-output table associated with a logic function, but not as something that is
concerned with the establishment of truth.
As the number of variables in a given function increases, the number of entries in the
truth table increases exponentially. For example, a five variable expression would
require 32 entries and a six-variable function would require 64 entries. It, therefore,
becomes inconvenient to prepare the truth table if the number of variables increases
beyond four. However, a simple artefact may be adopted. A truth table can have
entries only for those terms for which the value of the function is "1", without loss of any
information. This is particularly effective when the function has only a small number of
terms. Consider the Boolean function with six variables
A B C D E F
1 1 1 0 0 1
1 0 1 0 1 1
0 0 1 1 1 1
1 0 0 1 1 1
The truth table is a very effective tool in working with digital circuits, especially when the number of variables in a function is small (less than or equal to five).
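A truth table of this kind is easily generated by enumeration. The Python sketch below (not part of the original notes) prints only the asserted rows of the four-variable function F = A.B + C/.D, the function obtained from the class-attendance sentence later in this unit.

from itertools import product

def f(a, b, c, d):
    return (a & b) | ((1 - c) & d)        # F = A.B + C/.D

print("A B C D  F")
for a, b, c, d in product([0, 1], repeat=4):
    if f(a, b, c, d) == 1:                # list only the rows where F is asserted
        print(a, b, c, d, " 1")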
Conversion of English Sentences to Logic Functions
Some of the problems that can be solved using digital circuits are expressed through one
or more sentences. For example,
At the traffic junction the amber light should come on 60 seconds after the red light, and get switched off after 5 seconds.
If the value of the coins put into the vending machine exceeds five rupees it should dispense a Thums Up bottle.
The lift should start moving only if the doors are closed and a floor number is
chosen.
These sentences should initially be translated into logic equations. This is done through
breaking each sentence into phrases and associating a logic variable with each phrase.
As stated earlier many of these phrases will be indicative of actions or directly represent
actions. We first mark each action related phrase in the sentence. Then we associate a
logic variable with it. Consider the following sentence, which has three phrases:
Anil freaks out with his friends if it is Saturday and he completed his assignments
We will now associate logic variables with each phrase. The words “if” and “and” are not
included in any phrase and they show the relationship among the phrases.
Rahul will attend the Networks class if and only if his friend Shaila is attending the class
and the topic being covered in class is important from examination point of view or there
is no interesting matinee show in the city and the assignment is to be submitted. Let us
associate different logic variables with different phrases.
Rahul will attend the Networks class (F) if and only if his friend Shaila is attending the class (A) and the topic being covered in class is important from the examination point of view (B), or there is no interesting matinee show in the city (C/) and the assignment is to be submitted (D).
With the above assigned variables the logic function can be written as
F = A.B + C/.D
Minterms and Maxterms
A logic function has product terms. Product terms that consist of all the variables of a
function are called "canonical product terms", "fundamental product terms" or
"minterms". For example the logic term A.B.C' is a minterm in a three variable logic
function, but will be a non-minterm in a four variable logic function. Sum terms which
contain all the variables of a Boolean function are called "canonical sum terms",
"fundamental sum terms" or "maxterms". (A+B/+C) is an example of a maxterm in a
three variable logic function.
Consider the Table which lists all the minterms and maxterms of three variables. The
minterms are designated as m0, m1, . . . m7, and maxterms are designated as M0, M1, . .
. M7.
The SOP and POS forms are also referred to as two-level forms. In the SOP form, AND
operation is performed on the variables at the first level, and OR operation is performed
at the second level on the product terms generated at the first level.
Similarly, in the POS form, OR operation is performed at the first level to generate sum
terms, and AND operation is performed at the second level on these sum terms.
In any logical expression (the right-hand side of a logic function) there are certain priorities in performing the logical operations. In the expression for F1 the operations are to be performed in the following sequence: complementation (NOT) first, then AND, and then OR.
However, the order of priority can be modified by using parentheses. It is also
common to express logic functions through multi-level expressions using parentheses. A
simple example is shown in the following.
F1 = A.(B+C/) + A/.(C+D)
These expressions can be brought into the SOP form by applying the distributive law.
More detailed manipulation of algebraic form of logic functions will be explored in
another Learning Unit.
Circuit Representation of Logic Functions
Representation of basic Boolean operators through circuits was already presented in the
earlier Learning Unit. A logic function can be represented in a circuit form using these
circuit symbols. Consider the logic function
F1 = A.B + A.B/
F2 = (A+B+C) . (A+B/+C/)
NOR is another functionally complete set. NOR representation of the same function F1 is
Digital Electronics
Module 2: Boolean Algebra and Boolean
Operators: Karnaugh Map Method
N.J. Rao
Indian Institute of Science
Karnaugh Map
[Figure: two-variable Karnaugh map, shown with the two common ways of labelling it; the four cells, numbered 0, 1, 2 and 3, represent the minterms m0, m1, m2 and m3]
[Figure: K-map of the function F]
[Figure: four-variable K-map with the cells numbered
 0  4 12  8
 1  5 13  9
 3  7 15 11
 2  6 14 10]
Groups with cyclic adjacency:
• 0, 1, 5 and 4
• 1, 5, 7 and 3, etc.
• 0, 1, 3, 2, 10, 11, 9 and 8
• 4, 12, 13, 15, 14, 6, 7 and 5, etc.
[Figure: two example K-maps with the function values entered in the cells:
1 1 0 0      1 0 0 1
1 1 1 0      0 0 0 1
0 1 0 0      0 1 1 1
0 1 1 0      1 1 1 0]
F1 = X2 + X3 + X4
F = X1 + X2 + X3 + X4
= X4 + X6 + X7 + X8 + X9
= X7 + X10 + X11 + X12 + X13 + X14
A B C F K-map
0 0 0 X
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 X
1 1 1 X
F=A
F = AB/
The expressions for a logical function (right hand side of a function) can be very long
and have many terms and each term many literals. Such logical expressions can be
simplified using different properties of Boolean algebra. This method of minimization
requires our ability to identify the patterns among the terms. These patterns should
conform to one of the four laws of Boolean algebra. However, it is not always very
convenient to identify such patterns in a given expression. If we can represent the
same logic function in a graphic form that allows us to identify the inherent patterns,
then the simplification can be performed more conveniently.
Karnaugh Map is one such graphic representation of a Boolean function in the form
of a map. Karnaugh Map is due to M. Karnaugh, who introduced (1953) his version
of the map in his paper "The Map Method for Synthesis of Combinational Logic
Circuits". Karnaugh Map, abbreviated as K-map, is actually pictorial form of the
truth-table. This Learning Unit is devoted to the Karnaugh map and its method of
simplification of logic functions.
F = A/B + AB/
Any two-variable function has 2^2 = 4 minterms. The truth table of this function is
A B F
0 0 0
0 1 1
1 0 1
1 1 0
It can be seen that the values of the variables are arranged in the ascending order
(in the numerical decimal sense).
We consider that any two terms are logically adjacent if they differ only with respect
any one variable.
For example ABC is logically adjacent to A/BC, AB/C and ABC/. But it is not logically
adjacent to A/B/C, A/BC/, A/B/C/, AB/C/.
The entries in the truth-table that are positionally adjacent are not necessarily logically adjacent. For example A/B (01) and AB/ (10) are positionally adjacent but are not logically
adjacent. The combination of 00 is logically adjacent to 01 and 10. Similarly 11 is
adjacent to 10 and 01. Karnaugh map is a method of arranging the truth-table
entries so that the logically adjacent terms are also physically adjacent.
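Logical adjacency is the same as a Hamming distance of one between the minterm numbers, which the small Python sketch below (not part of the original notes) checks directly.

def logically_adjacent(m1, m2):
    # True if the two minterm numbers differ in exactly one bit position.
    diff = m1 ^ m2
    return diff != 0 and (diff & (diff - 1)) == 0

print(logically_adjacent(0b111, 0b110))   # True : ABC and ABC/
print(logically_adjacent(0b101, 0b010))   # False: AB/C and A/BC/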
The K-map of a two-variable function is shown in the figure. There are two popular
ways of representing the map, both of which are shown in the figure. The representation in which the variable is written above the column, or at the side of the row, in which it is asserted will be followed in this and the associated units.
The cells labelled as 0, 1, 2 and 3 represent the four minterms m0, m1, m2 and
m 3.
The numbering of the cells is chosen to ensure that the logically adjacent
minterms are positionally adjacent.
Cell 1 is adjacent to cell 0 and cell 3, indicating the minterm m1 (01) is logically
adjacent to the minterm m0 (00) and the minterm m3 (11).
The second column, above which the variable A is indicated, has the cells 2 and 3
representing the minterms m2 and m3. The variable A is asserted in these two
minterms.
Let us define the concept of position adjacency. Position adjacency means two
adjacent cells sharing one side. Such an adjacency is called simple adjacency. Cell 0
is positionally adjacent to cell 1 and cell 2, because cell 0 shares one side with each
of them. Similarly, cell 1 is positionally adjacent to cell 0 and cell 3, as cell 2 is
adjacent to cell 0 and cell 3, and cell 3 is adjacent to cell 1 and cell 2.
There are other kinds of positional adjacencies, which become relevant when the
number of variables is more than 3. We will explore them at a later time.
The main feature of the K-map is that by merely looking at the position of a cell, it is
possible to find immediately all the logically adjacent combinations in a function.
The function F = (A/B + AB/) can now be incorporated into the K-map by entering "1"
in cells represented by the minterms for which the function is asserted. A "0" is
entered in all other cells. K-map for the function F is
You will notice that the two cells in which "1" is entered are not positionally adjacent.
Therefore, they are not logically adjacent.
F = A/B + AB
You will notice that the cells in which "1" is entered are positionally adjacent and
hence are logically adjacent.
K-map for three variables will have 2^3 = 8 cells as shown in the figure.
The cells are labelled 0,1,..,7, which stand for combinations 000, 001,...,111
respectively. Notice that cells in two columns are associated with assertion of A, two
columns with the assertion of B and one row with the assertion of C.
Let us consider the logic adjacency and position adjacency in the map.
Cell 2 (010) is adjacent to the cell 0 (000), cell 6 (110) and cell 3 (011).
We know from logical adjacency the cell 0 (000) and the cell 4 (100) should also
be adjacent. But we do not find them positionally adjacent. Therefore, a new
adjacency called "cyclic adjacency" is defined to bring the boundaries of a row or
a column adjacent to each other. In a three-variable map cells 4 (100) and 0
(000), and cells 1 (001) and 5 (101) are adjacent. The boundaries on the
opposite sides of a K-map are considered to be one common side for the
associated two cells.
Adjacency is not merely between two cells. Consider the following function:
F = Σ (1, 3, 5, 7)
= m1 + m3 + m5 + m7
= A'B'C + A'BC + AB'C + ABC
= A'C(B'+B) + AC(B'+B)
= A'C + AC
= (A'+A)C
=C
It is shown clearly that although there is no logic adjacency between some pairs of
the terms, we are able to simplify a group of terms. For example A/B/C, ABC, A/BC
and AB/C are simplified to result in an expression "C". A cyclic relationship among
the cells 1, 3, 5 and 7 can be observed on the map in the form 1 → 3 → 7 → 5 → 1
("→" indicating "adjacent to"). When a group of cells, always 2^i (i < n) in number,
are adjacent to one another in a sequential manner, those cells are considered to be
cyclically adjacent.
Examples of such cyclically adjacent groups of four cells in the three-variable map are:
0, 1, 3 and 2
2, 3, 7 and 6
6, 7, 5 and 4
4, 5, 1 and 0
0, 2, 6 and 4
We thus have two kinds of positional adjacency:
Simple adjacency
Cyclic adjacency (this has two cases: one between two cells, and the other among a
group of 2^i cells)
The K-map for a four-variable function has 2^4 = 16 cells. These cells are labelled 0, 1, ..., 15, which stand for combinations 0000, 0001, ..., 1111
respectively. Notice that the two sets of columns are associated with assertion of A
and B, and two sets of rows are associated with the assertion of C and D.
We will be able to observe both simple and cyclic adjacencies in a four-variable map
also. 4, 8 and 16 cells can form groups with cyclic adjacency. Some examples of
such groups are
0, 1, 5 and 4
0, 1, 3 and 2
10, 11, 9 and 8
14, 12,13 and15
14, 6, 7 and 15
3, 7, 15, 11, 10, 14, 6 and 2
The K-map for a five-variable function has 2^5 = 32 cells. The map is divided vertically into two symmetrical parts. Variable A is not-
asserted on the left side, and is asserted on the right side. The two parts of the
map, except for the assertion or non-assertion of the variable A are identical with
respect to the remaining four variables B, C, D and E.
Simple and cyclic adjacencies are applicable to this map, but they need to be
applied separately to the two sections of the map. For example cell 8 and cell 0
are adjacent. The largest number of cells coming under cyclic adjacency can go
up to 25 = 32.
From the study of two-, three-, four- and five-variable Karnaugh maps, we can
summarise the following properties:
We have already seen how a K-map can be prepared provided the Boolean function
is available in the canonical SOP form.
A "1" is entered in all those cells representing the minterms of the expression, and
"0" in all the other cells.
However, the Boolean functions are not always available to us in the canonical form.
One method is to convert the non-canonical form into canonical SOP form and
prepare the K-map. The other method is to convert the function into the standard
SOP form and directly prepare the K-map.
We notice that there are four variables in the expression. The first term, A/B,
containing two variables actually represents four minterms, and the term A/B/C/
represents two minterms. The K-map for this function is
[K-map of the function, with the columns in the order A/B/, A/B, AB, AB/ and the rows in the order C/D/, C/D, CD, CD/:
1 1 0 0
1 1 1 0
0 1 0 0
0 1 1 0 ]
Notice that the second column represents A/B, and similarly A/B/C/ represents the
two top cells in the first column. With a little practice it is always possible to fill the
K-map with 1s representing a function given in the standard SOP form.
Boolean functions, sometimes, are also available in POS form. Let us assume that
the function is available in the canonical POS form. Consider an example of such a
function
In preparing the K-map for the function given in POS form, 0s are filled in the cells
represented by the maxterms. The K-map of the above function is
Sometimes the function may be given in the standard POS form. In such situations
we can initially convert the standard POS form of the expression into its canonical
form, and enter 0s in the cells representing the maxterms. Another procedure is to
enter 0s directly into the map by observing the sum terms one after the other.
F = (A+B+D/).(A/+B+C/+D).(B/+C)
  = (A+B+C+D/).(A+B+C/+D/).(A/+B+C/+D).(A+B/+C+D).(A/+B/+C+D).(A+B/+C+D/).(A/+B/+C+D/)
  = M1 . M3 . M10 . M4 . M12 . M5 . M13
The cells 1, 3, 4, 5, 10, 12 and 13 can have 0s entered in them while the remaining
cells are filled with 1s.
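As a small illustration of this bookkeeping, the sketch below (the helper name and the bit order A, B, C, D are assumptions made for the example) lists the maxterm indices covered by each sum term of the function above:

from itertools import product

VARS = "ABCD"   # bit order: A is the most significant bit

def maxterms(sum_term: dict) -> set:
    """Maxterm indices covered by a sum term.  sum_term maps a variable to
    0 (appears uncomplemented) or 1 (appears complemented); e.g. (A + B + D/)
    is {'A': 0, 'B': 0, 'D': 1}.  The sum is 0 exactly when every listed
    variable takes the indicated value; missing variables are free."""
    free = [v for v in VARS if v not in sum_term]
    covered = set()
    for values in product((0, 1), repeat=len(free)):
        bits = dict(sum_term, **dict(zip(free, values)))
        covered.add(int("".join(str(bits[v]) for v in VARS), 2))
    return covered

# F = (A+B+D/).(A/+B+C/+D).(B/+C)
for term in ({'A': 0, 'B': 0, 'D': 1},
             {'A': 1, 'B': 0, 'C': 1, 'D': 0},
             {'B': 1, 'C': 0}):
    print(sorted(maxterms(term)))
# prints [1, 3], [10], [4, 5, 12, 13]: 0s go into cells 1, 3, 4, 5, 10, 12, 13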
Implicants: A Karnaugh map not only includes all the minterms that represent a Boolean
function, but also arranges the logically adjacent terms in positionally adjacent cells. As the
information is pictorial in nature, it becomes easier to identify any patterns (relations) that
exist among the 1-entered cells (minterms). These patterns or relations are referred to as
implicants.
A study of implicants enables us to use the K-map effectively for simplifying a Boolean
function. Consider the K-map
An implicant represents a product term; the more 1-entered cells an implicant covers, the
fewer the variables that appear in the term (a group of 2^i cells in an n-variable map gives
a product term with n - i literals).
The smaller the number of implicants, and the larger the number of cells that each
implicant represents, the smaller the number of product terms in the simplified Boolean
expression.
In this example we notice that there are different ways of identifying the implicants.
Five implicants are identified in the figure (a) and three implicants in the figure (b) for the
same K-map (Boolean function). It is then necessary to have a procedure to identify the
minimum number of implicants to represent a Boolean function.
A prime implicant is one that is not a subset of any other implicant.
An essential prime implicant is a prime implicant which includes a 1-entered cell that is
not included in any other prime implicant.
A redundant implicant is one in which all the 1-entered cells are covered by other
implicants. A redundant implicant represents a redundant term in an expression.
Implicants 2, 3, 4 and 5 in the figure (a), and 1, 2 and 3 in the figure (b) are prime
implicants.
Implicants 2, 4 and 5 in the figure (a), and 1, 2 and 3 in the figure (b) are essential prime
implicants.
"find the smallest set of prime implicants that includes all the essential prime
implicants accounting for all the 1-entered cells of the K-map".
If there is a choice, the simpler prime implicant should be chosen. The minimisation
procedure is best understood through examples.
Example 1: Find the minimised expression for the function given by the K-map in the
figure.
Fifteen implicants of the K-map are:
Obviously all these implicants are not prime implicants and there are several redundant
implicants. Several combinations of prime implicants can be worked out to represent the
function. Some of them are listed in the following.
F1 = X1 + X4 + X6 + X16
F2 = X4 + X5 + X6 + X7 + X8
F3 = X2 + X3 + X4
F4 = X10 + X11 + X8 + X4 + X6
The simplest of these, using three prime implicants, is
F = X2 + X3 + X4
  = B/C/ + BD/ + ACD
Example 2: Minimise the Boolean function represented by the K-map shown in the figure.
F = X1 + X2 + X3 + X4
or F = X4 + X6 + X7 + X8 + X9
or F = X7 + X10 + X11 + X12 + X13 + X14
As mentioned earlier, the POS form always follows a kind of duality, though different from the
principle of duality. Here the implicants are defined as groups of maxterms, which
in the map representation are the positionally adjacent 0-entered cells rather than the
1-entered cells of the SOP case. When converting an implicant covering some 0-entered
cells into a sum term: a variable appears in complemented form if it is 1 in every
combination covered by the implicant; it appears in uncomplemented form if it is 0 in
every combination; and it does not appear at all if it changes value within the
implicant. We obtain a standard POS form of the expression from the map representation
by ANDing all the sum terms converted from the implicants.
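The conversion rule just stated is mechanical enough to write down directly. The sketch below is an illustrative helper (not part of the original text) that turns a group of 0-entered cells into the corresponding sum term, using the "/" notation for complements:

def group_to_sum(cells, num_vars=4, names="ABCD"):
    """Convert a group of 0-entered cells (maxterm indices) into a sum term:
    a variable that is 1 in every cell appears complemented, one that is 0 in
    every cell appears uncomplemented, and one that changes is dropped."""
    literals = []
    for pos, name in enumerate(names[:num_vars]):
        bit = num_vars - 1 - pos                    # the first variable is the MSB
        values = {(c >> bit) & 1 for c in cells}
        if values == {1}:
            literals.append(name + "/")
        elif values == {0}:
            literals.append(name)
        # a variable taking both values does not appear in the sum
    return "(" + " + ".join(literals) + ")"

print(group_to_sum([4, 5, 12, 13]))   # (B/ + C), from cells 0100, 0101, 1100, 1101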
Example 3: Consider a Boolean function in the POS form represented in the K-map shown
in the figure
If we choose the implicant 5 (shown by the dotted line in the figure 19) instead of 4,
the simplified expression gets modified as:
We may summarise the procedure for minimization of a Boolean function through a K-map
as follows:
1. Draw the K-map with 2^n cells, where n is the number of variables in a Boolean function.
2. Fill in the K-map with 1s and 0s as per the function given in the algebraic form (SOP or
POS) or truth-table form.
3. Determine the set of prime implicants that consist of all the essential prime implicants
as per the following criteria:
All the 1-entered or 0-entered cells are covered by the set of implicants, while
making the number of cells covered by each implicant as large as possible.
Eliminate the redundant implicants.
Identify all the essential prime implicants.
Whenever there is a choice among the prime implicants select the prime
implicant with the smaller number of literals.
4. If the final expression is to be generated in SOP form, the prime implicants should be
identified by suitably grouping the positionally adjacent 1-entered cells, and converting
each of the prime implicants into a product term. The final SOP expression is the OR of
the identified product terms.
5. If the final simplified expression is to be given in the POS form, the prime implicants
should be identified by suitably grouping the positionally adjacent 0-entered
cells, and converting each of the prime implicants into a sum term. The final POS
expression is the AND of the identified sum terms.
So far we assumed that the Boolean functions are always completely specified, which
means a given function assumes strictly a specific value, 1 or 0, for each of its 2^n
input combinations. This, however, is not always the case.
The ten outputs are decoded from sixteen possible input combinations produced by
four inputs representing BCD codes.
Irrespective of the encoding scheme there are always six combinations of the inputs
that would be considered as invalid codes.
If the input unit to the BCD decoder works in a functionally correct way, then the six
invalid combinations of the inputs should never occur.
In such a case, it does not matter what the output of the decoder is for these six
combinations. As we do not mind what the values of the outputs are in such situations, we
call them “don’t-care” situations. These don’t-care situations can be used advantageously to
generate a simpler Boolean expression than would otherwise be possible.
Such don’t-care combinations of the variables are represented by an "X" in the appropriate
cell of the K-map.
Example: This example shows how an incompletely specified function can be represented
in truth-table, Karnaugh map and canonical forms.
The decoder has three inputs A, B and C representing three-bit codes and an output F. Out
of the 2^3 = 8 possible combinations of the inputs, only five are described and hence
constitute the valid codes. F is not specified for the remaining three input codes, namely,
000, 110 and 111.
Functional description of a decoder
Treating these three combinations as the don’t-care conditions, the truth-table may be
written as:
A B C F
0 0 0 X
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 X
1 1 1 X
F = Σ(4, 5) + d (0, 6, 7)
F = Π (1, 2, 3) . d (0, 6, 7)
The don’t-cares bring some advantages to the simplification of Boolean functions. The Xs
can be treated either as 0s or as 1s depending on the convenience. For example the above
map can be redrawn in two different ways as
The simplification can be done, therefore, in two different ways. The resulting expressions
for the function F are:
F=A
F = AB/
We can generate a simpler expression for a given function by utilising some of the don’t-
care conditions as 1s.
The simplified expression taking the full advantage of the don’t cares is,
F = B/ + C/D/
As there could be several product terms that could be made common to more than one
function, special attention needs to be paid to the simplification process.
Example: Consider the following set of functions defined on the same set of variables:
F1 (A, B, C) = Σ (0, 3, 4, 5, 6)
F2 (A, B, C) = Σ (1, 2, 4, 6, 7)
F3 (A, B, C) = Σ (1, 3, 4, 5, 6)
Let us first consider the simplification process independently for each of the functions. The
K-maps for the three functions and the groupings are
These three functions have nine product terms and twenty one literals.
If the groupings can be done to increase the number of product terms that can be shared
among the three functions, a more cost effective realisation of these functions can be
achieved. One may consider, at least as a first approximation, that the cost of realising a
function is proportional to the number of product terms and the total number of literals
present in the expression. Consider the minimisation shown in the figure
N.J. Rao
Indian Institute of Science
Motivation
• Map methods unsuitable if the number of variables is
more than six
• Quine formulated the concept of tabular minimisation in
1952
• Improved by McCluskey in 1956
Quine-McCluskey method
• Can be performed by hand, but tedious, time-consuming
and subject to error
• Better suited to implementation on a digital computer
Initial tabulation of the minterms of F = Σ (1, 2, 5, 6, 7, 9, 10, 11, 14), grouped by the number of 1s:
Section 1: 0001 (1), 0010 (2)
Section 2: 0101 (5), 0110 (6), 1001 (9), 1010 (10)
Section 3: 0111 (7), 1011 (11), 1110 (14)
• All those terms which are not checked off constitute the
set of prime implicants
• The repeated terms should be eliminated (--10 in the
column 3)
• The seven prime implicants:(1,5), (1,9), (5,7), (6,7),
(9,11), (10,11), (2,6,10,14)
• This is not a minimal set of prime implicants
• The next stage is to determine the minimal set of prime
implicants
Choose AB/D
Dominates over the row AB/C
Mark the row AB/D by an asterisk
Eliminate the row AB/C
Check off columns 9 and 11
Select A/C/D
Dominates over B/C/D.
B/C/D also dominates over A/C/D
Either B/C/D or A/C/D can be chosen as the dominant
prime implicant
Prime implicants and the minterms they cover:
a: (0,2,8,10)
b: (0,2,16,18)
c: (8,9,10,11)
d: (16,17,18,19)
e: (11,15)
g: (15,31)
h: (23,31)
i: (19,23)
j: (17,25)
k: (9,25)
F(A,B,C,D,E) = Σ (1,4,6,10,20,22,24,26) + d (0,11,16,27)
• Pay attention to the don’t-care terms
• Mark the combinations among themselves (d)
Prime implicant chart (columns are the specified minterms 1, 4, 6, 10, 20, 22, 24, 26):
a (0,1)           x under 1
b (16,24)         x under 24
c (24,26)         x under 24, 26
d (0,4,16,20)     x under 4, 20
e (4,6,20,22)     x under 4, 6, 20, 22
g (10,11,26,27)   x under 10, 26
F(A,B,C,D,E) = a + c + e + g
= A/B/C/D/ + ABC/E/ + B/CE/ + BC/D
Karnaugh Map provides a good method of minimizing a logic function. However, it depends
on our ability to observe appropriate patterns and identify the necessary implicants. If the
number of variables increases beyond five, K-map or its variant Variable Entered Map can
become very messy and there is every possibility of committing a mistake. What we
require is a method that is more suitable for implementation on a computer, even if it is
inconvenient for paper-and-pencil procedures. The concept of tabular minimisation was
originally formulated by Quine in 1952. This method was later improved upon by
McCluskey in 1956, hence the name Quine-McCluskey.
This Learning Unit is concerned with the Quine-McCluskey method of minimisation. This
method is tedious, time-consuming and subject to error when performed by hand. But it is
better suited to implementation on a digital computer.
The method works in two stages: the set of prime implicants is generated first, and the
minimal set of implicants is then determined from the implicants generated in the first stage.
The tabulation process starts with a listing of the specified minterms for the 1s (or 0s)
of a function and don’t-cares (the unspecified minterms) in a particular format. All the
prime implicants are generated from them using the simple logical adjacency theorem,
namely, AB/ + AB = A. The main feature of this stage is that we work with the equivalent
binary number of the product terms. For example in a four variable case, the minterms
A/BCD/ and A/BC/D/ are entered as 0110 and 0100. As the two logically adjacent
minterms A/BCD/ and A/BC/D/ can be combined to form the product term A/BD/, the two
binary terms 0110 and 0100 are combined to form a term represented as "01-0", where '-'
(dash) indicates the position where the combination took place.
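The combining rule can be captured in a few lines. The helper below (an illustrative sketch, not part of the original text) merges two terms written over the symbols 0, 1 and '-' when they differ in exactly one position:

def combine(t1: str, t2: str):
    """Merge two terms if they differ in exactly one position and their dashes
    line up; the differing position is replaced by '-'.  Returns None when no
    combination is possible."""
    diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff) != 1 or '-' in (t1[diff[0]], t2[diff[0]]):
        return None
    return t1[:diff[0]] + '-' + t1[diff[0] + 1:]

print(combine("0110", "0100"))   # '01-0'  (A/BCD/ + A/BC/D/ = A/BD/)
print(combine("0110", "0101"))   # None    (two positions differ)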
Stage two involves creating a prime implicant table. This table provides a means of
identifying, by a special procedure, the smallest number of prime implicants that represents
the original Boolean function. The selected prime implicants are combined to form the
simplified expression in the SOP form. While we confine our discussion to the creation of
minimal SOP expression of a Boolean function in the canonical form, it is easy to
extend the procedure to functions that are given in the standard or any other forms.
Example 1: F = Σ (1,2,5,6,7,9,10,11,14)
All the minterms are tabulated as binary numbers in sectionalised format, so that each
section consists of the equivalent binary numbers containing the same number of 1s, and
the number of 1s in the equivalent binary numbers of each section is always more than that
in its previous section. This process is illustrated in the table as below.
The next step is to look for all possible combinations between the equivalent binary
numbers in the adjacent sections by comparing every binary number in each section with
every binary number in the next section. The combination of two terms in the adjacent
sections is possible only if the two numbers differ from each other with respect to only one
bit. For example 0001 (1) in section 1 can be combined with 0101 (5) in section 2 to result
in 0-01 (1, 5). Notice that combinations cannot occur among the numbers belonging to the
same section. The results of such combinations are entered into another column,
sequentially along with their decimal equivalents indicating the binary equivalents from
which the result of combination came, like (1, 5) as mentioned above. The second column
also will get sectionalised based on the number of 1s. The entries of one section in the
second column can again be combined together with entries in the next section, in a
similar manner. These combinations are illustrated in the Table below
All the entries in a column which are paired with entries in the next section are
checked off. Column 2 is again sectionalised with respect to the number of 1s. Column 3
is generated by pairing off entries in the first section of column 2 with those in
the second section. In principle this pairing could continue until no further combinations
can take place. All those entries that are paired can be checked off. It may be noted that
combination of entries in column 2 can take place only if the corresponding entries have the
dashes at the same place. This rule is applicable for generating all other columns as well.
After the tabulation is completed, all those terms which are not checked off constitute the
set of prime implicants of the given function. The repeated terms, like --10 in the column
3, should be eliminated. Therefore, from the above tabulation procedure, we obtain
seven prime implicants (denoted by their decimal equivalents) as (1,5), (1,9), (5,7),
(6,7), (9,11), (10,11), (2,6,10,14). The next stage is to determine the minimal set of
prime implicants.
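Stage one of the tabulation can also be expressed compactly in code. The sketch below (function names are illustrative) groups the terms by their number of 1s, repeatedly pairs adjacent sections, and returns every term that is never checked off; run on Example 1 it reproduces the seven prime implicants listed above:

from collections import defaultdict

def combine(t1, t2):
    # same rule as the earlier sketch: merge terms differing in exactly one position
    diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff) != 1 or '-' in (t1[diff[0]], t2[diff[0]]):
        return None
    return t1[:diff[0]] + '-' + t1[diff[0] + 1:]

def prime_implicants(minterms, num_vars):
    """Group terms by the number of 1s, combine terms in adjacent sections,
    and collect every term that is never checked off."""
    current = {format(m, "0{}b".format(num_vars)) for m in minterms}
    primes = set()
    while current:
        sections = defaultdict(set)
        for t in current:
            sections[t.count("1")].add(t)
        next_terms, used = set(), set()
        for ones in sorted(sections):
            for t1 in sections[ones]:
                for t2 in sections.get(ones + 1, ()):
                    c = combine(t1, t2)
                    if c is not None:
                        next_terms.add(c)
                        used.update((t1, t2))
        primes |= current - used       # unchecked terms are prime implicants
        current = next_terms           # repeated terms merge automatically in the set
    return primes

print(sorted(prime_implicants([1, 2, 5, 6, 7, 9, 10, 11, 14], 4)))
# ['--10', '-001', '0-01', '01-1', '011-', '10-1', '101-']
# i.e. (2,6,10,14), (1,9), (1,5), (5,7), (6,7), (9,11), (10,11)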
The prime implicants generated through the tabular method do not constitute the minimal
set. The prime implicants are represented in so called "prime implicant table". Each column
in the table represents a decimal equivalent of the minterm. A row is placed for each prime
implicant with its corresponding product appearing to the left and the decimal group to the
right side. Asterisks are entered at those intersections where the columns of binary
equivalents intersect the row that covers them. The prime implicant table for the
function under consideration is shown in the figure.
In the selection of minimal set of implicants, similar to that in a K-map, essential implicants
should be determined first. An essential prime implicant in a prime implicant table is
one that covers (at least one) minterms which are not covered by any other prime
implicant. This can be done by looking for that column that has only one asterisk. For
example, the columns 2 and 14 have only one asterisk each. The associated row,
indicated by the prime implicant CD/, is an essential prime implicant. CD/ is selected as a
member of the minimal set (mark that row by an asterisk). The corresponding columns,
namely 2, 6, 10, 14, are also removed from the prime implicant table, and a new
table is constructed as shown in the figure.
We then select dominating prime implicants, which are the rows that have more asterisks
than others. For example, the row A/BD includes the minterm 7, which is the only one
included in the row represented by A/BC. A/BD is dominant implicant over A/BC, and hence
A/BC can be eliminated. Mark A/BD by an asterisk and check off the column 5 and 7.
We then choose AB/D as the dominating row over the row represented by AB/C.
Consequently, we mark the row AB/D by an asterisk, and eliminate the row AB/C and the
columns 9 and 11 by checking them off.
Similarly, we select A/C/D as the dominating one over B/C/D. However, B/C/D can also be
chosen as the dominating prime implicant and eliminate the implicant A/C/D.
Retaining A/C/D as the dominant prime implicant, the minimal set of prime implicants is
{CD/, A/C/D, A/BD, AB/D}. The corresponding minimal SOP expression for the Boolean
function is F = CD/ + A/C/D + A/BD + AB/D.
This indicates that if the selection of the minimal set of prime implicants is not unique,
then the minimal expression is also not unique.
There are two types of implicant tables that have some special properties. One is referred
to as cyclic prime implicant table, and the other as semi-cyclic prime implicant table. A
prime implicant table is considered to be cyclic if
1. it does not have any essential prime implicants, which implies that there are at least two
asterisks in every column, and
2. there are no dominating implicants, which implies that there is the same number of
asterisks in every row.
Example 2: A Boolean function with a cyclic prime implicant table is shown in the figure 3.
The function is given by
As it may be noticed from the prime implicant table in the figure that all columns have two
asterisks and there are no essential prime implicants. In such a case we can choose any
one of the prime implicants to start with. If we start with prime implicant a, it can be
marked with asterisk and the corresponding columns, 0 and 1, can be deleted from the
table. After their removal, row c becomes dominant over row b, so row c is selected
and row b can be eliminated. The columns 3 and 7 can now be deleted. We
observe then that the row e dominates row d, and row d can be eliminated. Selection of
row e enables us to delete columns 14 and 15.
If, from the reduced prime implicant table shown in the figure, we choose row g it covers
the remaining asterisks associated with rows h and f. That covers the entire prime
implicant table. The minimal set for the Boolean function is given by:
F=a+c+e+g
= A'B'C' + A'CD + ABC + BC'D'
A semi-cyclic prime implicant table differs from a cyclic prime implicant table in one respect.
In the cyclic case the number of minterms covered by each prime implicant is identical. In
a semi-cyclic function this is not necessarily true.
Example 3: Consider a semi-cyclic prime implicant table of a five variable Boolean function
shown in the figure.
Examination of the prime-implicant table reveals that rows a, b, c and d contain four
minterms each. The remaining rows in the table contain two asterisks each. Several
minimal sets of prime implicants can be selected. Based on the procedures presented
through the earlier examples, we find the following candidates for the minimal set:
F=a+c+d+e+h+j
or F=a+c+d+g+h+j
or F=a+c+d+g+j+i
or F=a+c+d+g+i+k
Based on the examples presented we may summarise the procedure for determination of
the minimal set of implicants:
1. Find, if any, all the essential prime implicants, mark them with *, and remove the
corresponding rows and columns covered by them from the prime implicant table.
2. Find, if any, all the dominating prime implicants, and remove all dominated prime
implicants from the table marking the dominating implicants with *s. Remove the
corresponding rows and columns covered by the dominating implicants.
3. For cyclic or semi-cyclic prime implicant table, select any one prime implicant as the
dominating one, and follow the procedure until the table is no longer cyclic or semi-
cyclic.
4. After covering all the columns, collect all the * marked prime implicants together to
form the minimal set, and convert them to form the minimal expression for the
function.
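The selection stage can be sketched in the same spirit. The version below first picks the essential prime implicants and then falls back on a simple greedy choice in place of the full dominance analysis; the dictionary of prime implicants is taken from Example 1, and all names are illustrative:

def minimal_cover(prime_impls, minterms):
    """Pick the essential prime implicants first, then cover the remaining
    minterms greedily (a stand-in for the dominance arguments above).
    prime_impls maps a product term to the set of minterms it covers."""
    uncovered, chosen = set(minterms), []
    for m in minterms:                       # essential: sole cover of some minterm
        rows = [p for p, cells in prime_impls.items() if m in cells]
        if len(rows) == 1 and rows[0] not in chosen:
            chosen.append(rows[0])
            uncovered -= prime_impls[rows[0]]
    while uncovered:                         # cover the rest, largest cover first
        best = max(prime_impls, key=lambda p: len(prime_impls[p] & uncovered))
        chosen.append(best)
        uncovered -= prime_impls[best]
    return chosen

pis = {"A/C/D": {1, 5}, "B/C/D": {1, 9}, "A/BD": {5, 7}, "A/BC": {6, 7},
       "AB/D": {9, 11}, "AB/C": {10, 11}, "CD/": {2, 6, 10, 14}}
print(minimal_cover(pis, [1, 2, 5, 6, 7, 9, 10, 11, 14]))
# ['CD/', 'A/C/D', 'AB/D', 'A/BD'], the minimal set obtained in Example 1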
Simplification of Incompletely Specified functions
The simplification procedure for completely specified functions presented in the earlier
sections can easily be extended to incompletely specified functions. The initial
tabulation is drawn up including the don’t-cares. However, when the prime implicant table is
constructed, columns associated with don’t-cares need not be included because they do not
necessarily have to be covered. The remaining part of the simplification is similar to that
for completely specified functions.
Pay attention to the don’t-care terms as well as to the combinations among themselves, by
marking them with (d).
Six binary equivalents are obtained from the procedure. These are 0000- (0,1), 1-000
(16,24), 110-0 (24,26), -0-00 (0,4,16,20), -01-0 (4,6,20,22) and -101- (10,11,26,27)
and they correspond to the following prime implicants:
A/B/C/D/, AC/D/E/, ABC/E/, B/D/E/, B/CE/ and BC/D respectively.
It may be noted that the don’t-cares are not included.
The minimal expression is given by:
F(A,B,C,D,E) = a + c + e + g
= A'B'C'D' + ABC'E' + B'CE' + BC'D
Digital Electronics – Module 3
Logic Families: Introduction
N.J. Rao
Indian Institute of Science
Logic families
A logic family is characterized by
• Its circuit configuration
• Its technology
• Specific optimization of a set of desirable properties
Many logic families were introduced into the market since
the introduction of integrated circuits in 1960s.
Some of the IC families had very short life spans.
• Standard TTL family which dominated the IC market got
superseded by the Low Power Schottky family.
• Necessary to be aware of the evolving technologies
Since the introduction of integrated circuits in 1960s, many logic families were
introduced into the market. Each logic family is characterised by
a circuit configuration
a particular semiconductor technology
a specific optimisation of a set of desirable properties
Some of the IC families had very short life spans. With continuously changing
technologies, ICs that were quite popular suddenly become unattractive and
uneconomical. For example Standard TTL family which dominated the IC market
for a long period got superseded by the Low Power Schottky family. A digital
designer should not only have a good knowledge of the existing logic families but
should also be aware of the trends. The major requirements and the
desirable features of a logic family are:
Logic flexibility
Availability of complex functions
High noise immunity
Wide operating temperature range
Loading
Speed
Low power dissipation
Lack of generated noise
Input and output structures
Packaging
Low cost
Logic Flexibility
Logic flexibility is a measure of the capability and versatility of a logic family, that is,
of the amount of work or variety of uses that can be obtained from it; in other words, it is
a measure of the utility of a logic family in meeting various system needs. Factors
that enhance logic flexibility are wired-logic capability, asserted/not-asserted
outputs, line-driving capability, indicator driving, I/O interfacing, the ability to drive
other logic families, and multiple gates.
Wired logic refers to the capability of tying the outputs of gates together to perform
additional logic without extra hardware and components. Frequently, asserted/not-
asserted versions of a variable are required in a logic system. If the logic family has
gates with not-asserted outputs, use of inverters can be avoided. If the circuits can
drive non-standard loads such as long signal lines, lamps and indicator tubes,
additional discrete circuits can be avoided. The gate count can be minimised in a
digital system if AND, NAND, OR, NOR and EX-OR gates are all available in the
family. The logic families currently popular in the market, namely TTL, CMOS and, to a
limited extent, ECL, have similar logic flexibility, and as such this factor does
not constitute a deciding issue in selecting a logic family.
Complex Function
Noise Immunity
In order to prevent the occurrence of false logic signals in a system, high immunity
to noise is desired. Common sources of noise in digital circuits are
For example, a gate may accept an input signal in the range of 0.0 V to 0.8 V as
logic Low while it produces at its output a Low voltage of 0.4 V under worst loading
and voltage supply conditions.
In such a situation 0.4 V (0.8 - 0.4 = 0.4 V) is considered to be the dc noise margin.
When the output of one gate is connected to the input of another gate, as the output
is limited to 0.4 V even if a noise voltage up to 0.4 is superimposed on it, the second
gate would accept it as logic Low signal.
The noise margins and the voltage levels associated with the gates can be
graphically shown as in the figure.
[Figure: input and output voltage levels (VCC, VOH(min), VIH(min), VIL(max), VOL(max), GND) and the corresponding High-state and Low-state noise margins]
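The dc noise margins follow directly from the four worst-case levels. A small sketch, using the widely quoted worst-case levels of a standard TTL gate as illustrative inputs:

def noise_margins(VOH_min, VOL_max, VIH_min, VIL_max):
    """DC noise margins: how much noise can be superimposed on a worst-case
    output level before the receiving gate misreads it."""
    return {"NM_high": round(VOH_min - VIH_min, 3),   # High-state margin
            "NM_low":  round(VIL_max - VOL_max, 3)}   # Low-state margin

print(noise_margins(VOH_min=2.4, VOL_max=0.4, VIH_min=2.0, VIL_max=0.8))
# {'NM_high': 0.4, 'NM_low': 0.4}, i.e. 400 mV in each state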
AC Noise Margin: The term ac noise margin refers to the noise immunity of a gate
to noise of very short durations. In short duration noise, both the amplitude and
duration of the noise signals become important. The noise signal must contain
enough energy to effect a change in the state of the circuit. Therefore, the ac noise
margins are considerably higher than dc noise margins.
The ability of a logic element to operate in a noisy environment involves more than
the dc and ac noise margins. To be a problem, an externally generated noise pulse
must be received into the system and cause malfunction. The noise voltage must be
introduced into the circuit by radiated or coupled means. The amount of noise power
required to develop a given voltage is strictly a function of the circuit impedances.
Noise power must be transferred from the noise source with some arbitrary
impedance, through a coupling to the impedance of the circuit under consideration.
The ability to operate in a noisy environment is, then, an interaction of the built-in
operating margins, the time required for the device to react, and the ease with which
a noise voltage is developed. Therefore, the noise rejection capabilities of a logic
family represent a combination of a number of circuit parameters.
For commercial and industrial needs, temperatures usually range from 0 °C or -30 °C
to 55 °C, 70 °C or 85 °C.
The military has a universal requirement for operability from -55 °C to 125 °C.
In most cases a logic line specified from -55 °C to 125 °C will exhibit better
characteristics at room temperature conditions than a line specified by commercial
requirements. It means performance of a logic circuit with regard to fan out, noise
immunity and tolerance to power supply variations is usually better, since the circuits
must still be within specifications even when the inherent degradation due to
temperature extremes occurs. The advantages of a wide temperature specification
are often offset by the increased cost.
Loading
In digital systems many digital ICs are interconnected to perform different functions.
The output of a logic gate may be connected to the inputs of several other similar
gates so the load on the driving gate becomes an important factor. The fan-out of a
gate is the maximum number of inputs of ICs from the same IC family that the gate
can drive while maintaining its output levels within specified limits. In other words,
the fan-out specifies the maximum loading that a given gate is capable of handling.
The input and output loading parameters are generally normalised, with regard to
TTL devices, to the following values.
1 TTL Unit Load (U.L.) = -1.6 mA in the Low state (Logic “0”)
1 TTL Unit Load (U.L.) = 40 µA in the High state (Logic “1”)
For example the output of a 74LS00 will sink 8.0 mA in the Low state and source 400 µA in
the High state, giving
Low-state fan-out = 8.0/1.6 = 5 U.L.
High-state fan-out = 400/40 = 10 U.L.
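The same arithmetic gives the fan-out in general; the sketch below assumes the TTL unit loads quoted above (1.6 mA in the Low state, 40 µA in the High state) and the 74LS00 drive figures:

def fan_out(IOL_mA, IOH_uA, IIL_mA=1.6, IIH_uA=40.0):
    """Fan-out in TTL unit loads: the number of standard inputs the output can
    sink (Low state) or source (High state); the overall fan-out is the smaller."""
    low = IOL_mA / IIL_mA            # Low-state drive / Low-state unit load
    high = IOH_uA / IIH_uA           # High-state drive / High-state unit load
    return low, high, min(low, high)

print(fan_out(IOL_mA=8.0, IOH_uA=400.0))   # (5.0, 10.0, 5.0) unit loads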
Speed
Propagation delay is a very important characteristic of logic circuits because it limits
the speed (frequency) at which they can operate. The shorter the propagation
delay, the higher the speed of the circuit.
The propagation delay of a gate is basically the time interval between the application
of an input pulse and the occurrence of the resulting output pulse.
1. tPHL: The time between a specified reference point on the input pulse and a
corresponding reference point on the output pulse, with the output changing
from the High level to the Low level.
2. tPLH: The time between specified reference point on the input pulse and a
corresponding reference point on the output pulse, with the output changing
from the Low level to the High level.
The reference points on the wave forms with respect to which the time delays are
measured can be chosen as
The 50% of the leading and trailing edges of the wave forms
or
The threshold voltage (where the input and output voltages of the gate are equal)
point.
These propagation delays are illustrated in the figure for both inverted and non-
inverted outputs, with 50% point taken as the reference.
[Figure: propagation delays tPHL and tPLH between input A and output B, measured at the 50% points of the waveforms]
Power Dissipation
Logic with low power dissipation is desired in large systems because it lowers cooling
costs, and power supply and distribution costs, thereby reducing mechanical design
problems as well. In an air-borne or satellite application, power dissipation may be
the most critical parameter because of power-source limitations. As chip complexity
and packaging density continue to increase, power dissipation will decrease on a per-
gate basis, but will increase per-chip basis. This is dictated by heat dissipation
restriction arising from system design and maximum allowable semiconductor
junction temperatures.
Normally, the value of ICC for a Low gate output is higher than for a High output. The
manufacturer's data sheet usually specifies both these values as ICCL and ICCH. The
average ICC is then determined based on a 50% duty cycle operation of the gate.
The supply current drawn is generally very different during the transition time than
during steady-state operation in the logic High or Low states. During the transition
times a larger number of active devices is likely to come into operation, and parasitic
capacitors have to be charged and discharged. Therefore, there is more
dissipation every time a logic circuit switches its state. It also means that the power
dissipation increases linearly as a function of the frequency of switching. A gate that
operates at higher frequency will dissipate more power than the same gate operating
at a lower frequency. This phenomenon will have a significant effect on the design of
high frequency circuits.
In view of this another parameter known as speed-power product (SPP) is specified
by the manufacturer as a measure of the performance of a logic circuit based on the
product of the propagation delay time with the power dissipation at a specified
frequency.
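A short sketch of this power bookkeeping; the ICCL and ICCH figures used are assumed per-gate illustrative values, not datasheet numbers:

def average_power_mW(ICCL_mA, ICCH_mA, VCC=5.0):
    """Average supply power assuming a 50% duty cycle, as described above."""
    return VCC * (ICCL_mA + ICCH_mA) / 2.0

def speed_power_product_pJ(tpd_ns, power_mW):
    """Speed-power product: propagation delay times power dissipation (ns x mW = pJ)."""
    return tpd_ns * power_mW

P = average_power_mW(ICCL_mA=0.6, ICCH_mA=0.2)                 # 2.0 mW
print(P, speed_power_product_pJ(tpd_ns=9.0, power_mW=P))       # 2.0 mW, 18.0 pJ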
Generated Noise
The switching transients either on power line or signal line can be very serious
sources of noise. They can conduct and radiate through different channels and
influence the functioning of nearby circuits or systems. Therefore, the lack of
generated noise is an important requirement of a logic family. When the switching
noise is significant, special care has to be taken to design the power, ground and
signal interconnections.
Supply distribution is less expensive if the logic family generates minimal noise.
Also, the maximum line lengths in the back plane and wiring on the printed wiring
board are functions of cross talk generated by the logic family. A logic family that
draws constant current in both logic Low and High states, and does not change
supply current when switching states will generate less noise.
A logic family should provide features for effective interfacing both at the input and
output. Interfacing at the input requires facility to accept different voltage levels for
the two logic states, and to accept signals with rise and fall times very different from
those of the signals associated with that logic family. At the output we require larger
current driving capability, facility to increase the voltages associated with the two
logic levels, and the ability to tie the outputs of gates to have wired logic operations.
Interfacing the slow varying signals (signals with rise and fall times greater than one
microsecond) is achieved through Schmitt triggers. Voltage levels of the output
signals can be increased by providing open-collector (or open-drain) configurations.
Such open-collector (open-drain) configurations also permit us to achieve wired-logic
operations. The outputs of gates can be tied together by having tristate outputs.
Schmitt Trigger Inputs: When a slow changing signal superposed with noise is
applied to a gate which has a single threshold VT, there is a possibility of the output
changing several times during signal transition period, as shown in the figure (b).
Clearly, such a response is not acceptable. When the input signal to a gate has long
transition times, the gate is likely to stay in the linear region of its operation for a
long period. During this period the gate is likely to get into oscillations because of
the parasitics associated with the circuit, which are not desirable. The problems
associated with slow changing signals and the superposed noise can be solved if the
gate has Schmitt trigger type of input.
[Figure: (a) a slowly changing noisy input together with the single threshold VT and the hysteresis thresholds VT+ and VT-; (b) output of an ordinary gate, switching several times; (c) output of a Schmitt-trigger gate, switching once in each direction]
A Schmitt trigger is a special circuit that uses feedback internally to shift the
switching threshold depending on whether the input is changing from Low to High or
from High to Low. For example, suppose the input of a Schmitt-trigger inverter is
initially at 0 V (solidly Low) and the output is High, close to VCC (or VDD). As the
input voltage is increased, the output does not go Low until the input reaches the
upper threshold voltage VT+; once it has switched, the output remains Low until the
input falls back below the lower threshold VT-. The output of a Schmitt gate for a
slowly changing noisy signal is shown in the figure (c). Every logic family should have
a few gates which provide Schmitt inputs to interface effectively with real-world signals.
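The hysteresis behaviour can be modelled in a few lines; the threshold values below are assumed for illustration and are not taken from any particular device:

def schmitt_inverter(vin_samples, vt_plus=1.7, vt_minus=0.9, vcc=5.0):
    """Behavioural model of a Schmitt-trigger inverter: the switching threshold
    depends on the direction of the input change (hysteresis)."""
    out, vout = [], vcc                      # input starts Low, so output starts High
    for v in vin_samples:
        if vout == vcc and v >= vt_plus:     # rising input crosses the upper threshold
            vout = 0.0
        elif vout == 0.0 and v <= vt_minus:  # falling input crosses the lower threshold
            vout = vcc
        out.append(vout)
    return out

# a slow ramp with superposed noise switches the output only once in each direction
ramp = [0.0, 0.8, 1.5, 1.3, 1.8, 1.6, 2.5, 3.0, 2.0, 1.2, 0.8, 0.4]
print(schmitt_inverter(ramp))
# [5.0, 5.0, 5.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 5.0]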
Three-State Outputs: Logic outputs have two normal states, Low and High,
corresponding to logic values 0 and 1. It is desirable to have another electrical
state, not a logic state at all, in which the output of the circuit offers very high
impedance. In this state, it is equivalent to disconnecting the circuit at its output,
except for a small leakage current. Such a state is called high-impedance, Hi-z or
floating state. Thus we have an output that could go into one of the three states:
logic 0, logic 1 and Hi-z. An output with three possible states is called tri-state
output.
Devices that have three-state outputs have an extra input signal, usually called
“output enable” (OE), for placing the device in either the low-impedance or the
high-impedance state. The outputs of devices with three-state outputs can be
tied together to create a three-state bus. The control circuitry must ensure that at
any given time only one output is enabled while all other outputs are kept in the Hi-Z
state.
Open-Collector (or Drain) Outputs: The collector terminal of a transistor (or the
drain terminal of a MOSFET) is normally connected in a logic device to a pull-up
resistor or a special pull-up circuit. Such circuits prevent us from tying the outputs
of two such devices together. If the internal pull-up elements are removed, then it
gives freedom to the designer to tie up the outputs of more than one device
together, or to connect external pull-up resistor to increase the output voltage swing.
Devices with open-collector (open-drain) outputs are very useful for creating wired
logic operations or for interfacing loads which are incompatible with the electrical
characteristics of the logic family. It is, therefore, desirable for a logic family to have
devices, at least some, which have open-collector (or open-drain) outputs.
Packaging
Until a few years ago most of the digital ICs were made available in dual-in-line
packages (DIP). If the devices are to be operated in the commercial temperature
range, they come in plastic DIPs, and if they are to be used over a larger
temperature range, they come in ceramic DIPs. With increasing
miniaturisation at systems level and integration at the chip level the number of
pins/IC have been steadily increasing. This increase in the pin count led to the
introduction of different packages for the ICs. Selecting an appropriate package is
one of the design decisions today’s digital designer has to make.
Cost
The last consideration, and often the most important one, is the cost of a given logic
family. The first approximate cost comparison can be obtained by pricing a common
function such as a dual four-input or quad two-input gate. But the cost of a few
types of gates alone cannot indicate the cost of the total system. The total system
cost is decided not only by the cost of the ICs but also by the cost of
In many instances the cost of ICs could become a less important component of the
total cost of the system.
Concluding Note
The question that arises after considering all the desirable features of a logic family
is “why not design a family that best meets these needs and then mass produce it
and drive the costs down?” Unfortunately, this cannot be achieved, as there is no
universal logic family that does a good job of meeting all the previously stated
needs. Silicon technology, though better understood and studied than any other
solid-state technology, still has its own limitations. Besides, the demand for higher
and higher performance specifications continues to grow.
Electrical Characteristics of Schottky TTL Family
Table gives the worst case values for the input and output voltage levels in both the
logic states.
The noise margin levels are different in High and Low states and are shown in the
following Table. These levels are lower in comparison to the noise levels of CMOS
circuits.
TTL Family                                        Military (-55 to 125 °C)    Commercial (0 to 70 °C)
                                                  Low NM    High NM           Low NM    High NM
TTL     Standard (54/74)                          400       400               300       400
STTL    Schottky (54/74S)                         300       500               300       700
LSTTL   Low-power Schottky (54/74LS)              300       500               300       700
ALSTTL  Advanced Low-power Schottky (54/74ALS)    400       500               300       500
ASTTL   Advanced Schottky (54/74AS)               400       500               300       500
FAST    Fairchild Advanced Schottky (54/74F)      300       500               300       500
(All noise margins in mV.)
Loading: The load characteristics of Schottky TTL families are given in the following Table.
Fan out is a measure of the number of gate inputs that are connected to (or driven by)
a single output. The currents associated with LSTTL family are:
IILmax = -0.4 mA (this current flows out of an LSTTL input; it is sometimes
called the Low-state unit load for LSTTL)
IIHmax = 20 µA (this current flows into an LSTTL input; it is called the
High-state unit load for LSTTL)
IOLmax = 8 mA
IOHmax = -400 µA
Fan out in both the High and Low states is 20
Both the speed and the power consumption of an LSTTL device depend, to a large
extent, on the AC or dynamic characteristics of the device and its load, that is, on what
happens when the output changes between states. The speed depends on two factors,
transition times and propagation delay.
Transition Time: The amount of time that the output of a logic circuit takes to change
from one state to another is called the transition time. The ideal situation we would like
to have is shown in the figure (a).
[Figure: output transitions; (a) ideal zero-time transitions, (b) idealised linear transitions with rise time tr and fall time tf, (c) realistic rounded transitions, with tr and tf measured across the undefined voltage zone]
However, in view of the parasitics associated with circuits and boards, it is neither
possible nor desirable to have such zero transition times. Realistically, an output takes
some finite time to transit from one state to the other. These transition times are also
known as rise time and fall time. The semi-idealistic transitions are shown in the figure
(b). But in actuality the transitions are never sharp in view of the parasitic elements,
and edges are always rounded. We may identify the transition times as the times taken
for the output to traverse the undefined voltage zones, as shown in the figure (c).
The rise and fall times of a LSTTL output depend mainly on two factors, the ON
transistor resistance and the load capacitance. The load capacitance comes from three
different sources: output circuits including a gate’s output transistors, internal wiring
and packaging, have capacitances associated with them (of the order of 2-10 pF);
wiring that connects an output to other inputs (about 1pF per inch or more depending
on the wiring technology); and input circuits including transistors, internal wiring and
packaging (2-15 pF per input).
Power Consumption: The currents drawn by the TTL circuits would be different in
logic 0 and 1 states, as different sets of transistors get switched on in different states.
Hence the designations of the supply current are ICCL and ICCH. For computing the power
consumed by the gate an average (ICC) of these two currents is taken. The power
consumed is given by
PD = ICC x VCC
When a TTL circuit changes its state, the current drawn during the transition time would
be larger than in either of the steady states, as a larger number of transistors would come
into conduction. The transition peak creates a large noise signal on the power
supply line. If this is not properly filtered by using a bypass capacitance very close to
the IC, it can constitute a major source of noise signals in TTL based digital systems.
Therefore, there is a component of power dissipation that is proportional to frequency.
However, this frequency dependent power dissipation becomes significant with regard to
quiescent power dissipation only at very high frequencies.
Table gives the performance characteristics of TTL family, which also enables us to
appreciate how the technology improvements lead to the performance improvements.
N.J. Rao
Indian Institute of Science
TTL Family
[Figures: diode model (slope 1/Rf, diode drop Vd = 0.6 V), npn transistor in the common-emitter configuration, and the NAND-gate schematics of the LSTTL, FAST, ALS and AS families discussed in the text below; component-level detail is not reproduced here]
Family   Propagation delay (ns)   Power dissipation (mW)   Speed-power product (pJ)   Max. clock frequency (MHz)
TTL      10      10      100     35
HTTL     6       22      132     50
LTTL     33      1       33      3
LSTTL    9       2       18      45
STTL     3       19      57      125
ALS      4       1.2     4.8     70
AS       1.7     8       13.6    200
FAST     3.5     5.4     18.9    125
Introduction
Transistor-Transistor Logic (TTL) and Emitter Coupled Logic (ECL) are the most commonly
used bipolar logic families. Bipolar logic families use semiconductor diodes and bipolar
junction transistors as the basic building blocks of logic circuits. The simplest bipolar logic
elements use diodes and resistors to perform logic operations; this is called diode logic.
Many TTL logic gates use diode logic internally, and boost their output drive capability using
transistor circuits. Other TTL gates use parallel configurations of transistors to perform logic
functions.
It turned out at the time of introducing TTL circuits that they were adaptable to virtually all
forms of IC logic and produced the highest performance-to-cost ratio of all logic types. In
view of its versatility a variety of subfamilies (Low Power, High Frequency, Schottky)
representing a wide range of speed-power product have also been introduced. The Schottky
family has been selected by the industry to further enhance the speed-power product. In
Schottky family circuits, a Schottky diode is used as a clamp across the base-collector
junction of a transistor to prevent it from going into saturation, thereby reducing the
storage time. Several sub-families have evolved in the Schottky TTL family to offer several
speed-power products to meet a wide variety of design requirements. These sub-families
are:
Diodes
A semiconductor diode is fabricated from two types, p-type and n-type, of semiconductor
material that are brought into contact with each other. The point of contact between the p
and n materials is called p-n junction. Actually, a diode is fabricated from a single
monolithic crystal of semiconductor material in which the two halves are doped with
different impurities to give them p-type and n-type properties. A real diode can be modelled
as shown in the figure 1.
It is an open circuit when it is reverse biased (we ignore its leakage current)
It acts like a small resistance, Rf, called the forward resistance, in series with Vd,
called a diode drop, a small voltage source.
Diode action is exploited to perform logical operations. The circuit shown in the figure 2
performs AND function if 0-2 V (Low) input is considered logic 0 and 3-5 V (High) input is
considered as logic 1. When both A and B inputs are High, the output X will be High. If any
one of the inputs is at Low level, the output will also be at Low level.
A bipolar junction transistor is a three terminal device and acts like a current-controlled
switch. If a small current is injected into the base, the switch is “on”, that is, the current
will flow between the other two terminals, namely, collector and emitter. If no current is put
into the base, then the switch is “off” and no current flows between the emitter and the
collector. A transistor will have two p-n junctions, and consequently it could be pnp
transistor or npn transistor. An npn transistor, found more commonly in IC logic circuits, is
shown in the figure 3 in its common-emitter configuration.
IC = β . Ib
VCE = VCC - IC . R2
= VCC - β . Ib . R2
where β is called the gain of the transistor and is in the range of 10 to 100 for typical
transistors. Figure 4 shows a logic inverter built from an npn transistor in the
common-emitter configuration. When the input voltage VIN is Low, the output voltage is
High, and vice versa.
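A small numerical illustration of these equations; the resistor values, the value of β and the saturation limit are assumptions for illustration, not values taken from figure 4:

def vce(vin, beta=50.0, r1=10e3, r2=1e3, vcc=5.0, vbe=0.7, vce_sat=0.2):
    """Output of the common-emitter inverter, VCE = VCC - beta*Ib*R2, limited
    below by the saturation voltage of the transistor."""
    ib = max((vin - vbe) / r1, 0.0)          # base current through the input resistor
    return max(vcc - beta * ib * r2, vce_sat)

print(vce(0.0))   # 5.0 : input Low gives output High
print(vce(5.0))   # 0.2 : input High drives the transistor into saturation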
When the input of a saturated transistor is changed, the output does not change
immediately; it takes extra time, called storage time, to come out of saturation. In fact,
storage time accounts for a significant portion of the propagation delay in the earlier TTL
families. Present day TTL logic families reduce this storage time by placing a Schottky diode
between the base and collector of each transistor that might saturate.
When the SBD is reverse biased, electrons in the semiconductor require greater energy to
cross the barrier. However, electrons in the metal see a barrier potential from the side
essentially independent of the bias voltage and small net reverse current will flow. Since
this current flow is relatively independent of the applied reverse bias, the reverse current
flow will not increase significantly until avalanche breakdown occurs. A simple metal/n-
semiconductor collector contact is an ohmic contact while the SBD contact is a rectifying
contact. The difference is controlled by the level of doping in the semiconductor material.
Schottky Transistor
The Schottky transistor makes use of two earlier concepts: Baker clamp and the Schottky-
Barrier-Diode (SBD). The Schottky clamped transistor is responsible for increasing the
switching speed. The use of Baker Clamp, shown in the figure 7, is a method of avoiding
saturation of a discrete transistor.
FIG. 7: Baker Clamp
The germanium diode forward voltage is 0.3 V to 0.4 V as compared to 0.7 V for the base-
emitter junction silicon diode. When the transistor is turned on, base current drives the
transistor toward saturation. The collector voltage drops, the germanium diode begins to
conduct forward current, and excess base drive is diverted from the base-collector junction
of the transistor. This causes the transistor to be held out of deep saturation, the excess
base charge not stored, and the turn-off time to be dramatically reduced. However, a
germanium diode cannot be incorporated into a monolithic silicon integrated circuit.
Therefore, the germanium diode must be replaced with a diode that can be fabricated in
silicon and has a lower forward voltage drop than the base-collector junction of the
transistor. A normal silicon p-n diode
will not meet this requirement. An SBD can be used to meet the requirement as shown in
the figure 8.
Familiarity with a logic family is acquired, in general, through understanding the
circuit features of its NAND gate. The circuit diagram of a two-input LSTTL NAND gate,
74LS00, is shown in the figure 9.
D1 and D2 along with 18 KΩ resistor perform the AND function. Diodes D3 and D4 do
nothing in normal operation, but limit undesirable negative excursions on the inputs to a
signal diode drop. Such negative excursions may occur on High-to-Low input transitions as
a result of transmission-line effects. Transistor Q1 serves as an inverter, so the output at its
collector represents the NAND function. It also, along with its resistors, forms a phase
splitter that controls the output stage. The output state has two transistors, Q3 and Q4,
only one of which is on at any time. The TTL output state is sometimes called a totem-pole
output. Q2 and Q5 provide active pull-up and pull-down to the High and Low states,
respectively. Transistor Q5 regulates current flow into the base of Q4 and aids in turning Q4
off rapidly. Transistors Q3 and Q2 constitute a Darlington driver, with Q3 not being
permitted to saturate. The network consisting of Schottky diodes D3 and D4 and a 5 KΩ
resistor is connected to the output and aids in charging and discharging load capacitance
when Q3 and Q4 are changing states. Transistor Q4 conducts when the output is in Low
state.
The FAST Schottky TTL family provides a 75-80% power reduction compared to standard
Schottky TTL and yet offers 20-40% improved circuit performance over the standard
Schottky due to the MOSAIC process. Also, FAST circuits contain additional circuitry to
provide a flatter power/frequency curve. The input configuration of FAST uses a lower input
current which translates into higher fan-out. The NAND gate of FAST family is shown in the
figure 10.
The F00 input configuration utilises p-n diodes (D1 and D2) rather than pnp transistors.
The p-n diodes offer a much smaller capacitance and result in much better ac noise
immunity, at the expense of increased input current.
Figure 11 shows one gate of the 74ALS00A quad 2-input NAND gate. Parallel-connected pnp
transistors Q1 and Q2 are used at the input. These transistors reduce the current flow, IR,
when the inputs are low and thus increase fan-out. If inputs A, B, or both are low, then the
respective pnp transistors turn on because their emitters are then more positive than their
bases. If at least one of the inputs is low, the corresponding pnp transistor conducts,
making the base of Q3 low and keeping Q3 off. If both the inputs A and B are high, both
switches are open and Q3 turns on. Q3 drives Q4 (by emitter follower action), and Q4
drives the output totem pole. Schottky diodes D3, D4 and D5 are used to speed the
switching and do not affect the logic. Note that the output and the inputs have Schottky
protective diodes. Figure 12 shows one gate in 74AS00 gate.
FIG. 11: ALS NAND gate
(74ALS00A)
2. Elimination of transistor storage time provides stable switching times across the
temperature range.
4. Input and output clamping is implemented with Schottky diodes to reduce negative-
going excursions on the inputs and outputs. Because of its lower forward voltage
drop and fast recovery time, the Schottky input diode provides improved clamping
action over a conventional p-n junction diode.
5. The ion implantation process allows small geometries giving less parasitic
capacitances so that switching times are decreased.
6. The reduction of the epi-substrate capacitance using oxide isolation also decreases
switching times.
Digital Electronics
Module 3: CMOS Family
N.J. Rao
Indian Institute of Science
CMOS Family
When VIN is at 0.0 V, the lower n-channel MOSFET Q1 is OFF since its
VGS is 0, but the upper p-channel MOSFET Q2 is ON since its VGS
would be -5.0 V
VOUT at the output terminal would be +5.0 V.
Similarly, when VIN is at 5.0 V, Q1 will be ON, presenting a small
resistance, while Q2 will be OFF, presenting a large resistance.
VOUT would be 0 V
A B Q1 Q2 Q3 Q4 X
L L OFF OFF ON ON H
L H OFF ON ON OFF H
H L ON OFF OFF ON H
H H ON ON OFF OFF L
EN A Q1 Q2 OUT
H L ON OFF L
H H OFF ON H
A B Q1 Q2 X
L L off off open
L H off on open
H L on off open
H H on on L
[Figure: open-drain gate outputs tied together through an external pull-up resistor R = 1.5 K to realise wired logic]
Family   VIHMIN     VILMAX     VOHMIN     VOLMAX    NM LOW @VCC=5V   NM HIGH @VCC=5V
4000B    2/3 VCC    1/3 VCC    VCC-0.1    0.01      1.6 V            1.6 V
HCMOS    3.5        1.5        VCC-0.1    0.1       1.4 V            1.4 V
For HCMOS
• IILmax is ±1 µA in any state
• IOHmax = -20 µA and IOLmax = 20 µA
• Low-state fan-out is 20
• High-state fan-out is 20
• If we are willing to work with slightly degraded output
voltages, which would reduce the available noise
margins, we can go for a much larger fan-out
Device outputs in AC and ACT families have very fast rise and fall times.
Input signals should have rise and fall times of 3.0 ns (400 ns for HC and
HCT devices) and signal swing of 0V to 3.0V for ACT devices or 0V to
VDD for AC devices.
PL = CL · VDD² · f
PD = PT + PL = CPD · VDD² · f + CL · VDD² · f = (CPD + CL) · VDD² · f
‘00 24 24 30 30 pF
‘138 85 85 60 60 pF
CMOS has often been called the ideal technology. It has low power dissipation, high immunity to power-supply noise, symmetric switching characteristics and a large supply-voltage tolerance. Reducing power requirements leads to a reduction in the cost of power supplies, simplifies power distribution, allows the possible elimination of cooling fans and permits a denser PCB, ultimately leading to a lower cost of the system. Though the operation of a MOS transistor was understood long before the bipolar transistor was invented, its fabrication could not be mastered. Consequently the development of MOS circuits lagged bipolar circuits considerably, and initially they were attractive only in selected applications. In recent years, advances in the design of MOS circuits have vastly increased their performance and popularity. By far the majority of large scale integrated circuits, such as microprocessors and memories, use CMOS. The usage of CMOS logic is also increasing in applications that use small and medium scale integrated circuits, as CMOS circuits, while offering functionality and speed similar to bipolar logic circuits, consume far less power.
The basic building blocks in CMOS logic circuits are MOS transistors. A MOS transistor can be viewed as a 3-terminal device that acts like a voltage-controlled resistance, as shown in the figure 1.
[Figure 1: n-channel and p-channel MOS transistors viewed as three-terminal devices (gate, source, drain) acting as voltage-controlled resistances]
Basic CMOS Inverter circuit: NMOS and PMOS transistors are used together in a
complementary way to form CMOS logic, as shown in the figure 3. The power
supply voltage VDD, typically is in the range of 2- 6 V, and is most often set at 5.0
V for compatibility with TTL circuits.
[Figure 3: basic CMOS inverter: p-channel transistor Q2 connected between VDD and VOUT, n-channel transistor Q1 connected between VOUT and ground]
VIN     Q1   Q2   VOUT
0.0 V   Off  On   5 V
5.0 V   On   Off  0 V
As we associated a logic state 0 or 1 with a voltage, we can say when the input
signal is asserted Q1 is ON and Q2 is OFF, and when the input signal is not
asserted Q1 is OFF and Q2 is ON. We make use of this interpretation to further
simplify the circuit representation of MOSFETs, as shown in the figure 4. The
bubble convention goes along with the convention followed in drawing logic
diagrams.
[Figure 4: the CMOS inverter drawn with simplified switch symbols, using the bubble convention]
[Figure: CMOS 2-input NAND gate: p-channel transistors Q3 and Q4 in parallel form the pull-up, n-channel transistors Q1 and Q2 in series form the pull-down; X = /(A.B)]
A B  Q1   Q2   Q3   Q4   X
L L  OFF  OFF  ON   ON   H
L H  OFF  ON   ON   OFF  H
H L  ON   OFF  OFF  ON   H
H H  ON   ON   OFF  OFF  L

[Figure: CMOS 2-input NOR gate: p-channel transistors in series form the pull-up, n-channel transistors in parallel form the pull-down; X = /(A+B)]
A B  Q1   Q2   Q3   Q4   X
L L  OFF  OFF  ON   ON   H
L H  OFF  ON   ON   OFF  L
H L  ON   OFF  OFF  ON   L
H H  ON   ON   OFF  OFF  L
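The complementary pull-up/pull-down structure of these gates is easy to capture in code. The following Python fragment is a minimal sketch added to these notes for illustration (it is not part of the original circuit description): it models a p-channel device as conducting when its gate is Low and an n-channel device as conducting when its gate is High, and reproduces the NAND and NOR function tables given above.

```python
# Illustrative model of static CMOS gates.
def p_on(x):   # p-channel transistor: conducts when its gate input is Low (0)
    return x == 0

def n_on(x):   # n-channel transistor: conducts when its gate input is High (1)
    return x == 1

def nand2(a, b):
    pull_up = p_on(a) or p_on(b)      # p-channel devices in parallel
    pull_down = n_on(a) and n_on(b)   # n-channel devices in series
    assert pull_up != pull_down       # exactly one network conducts at a time
    return 1 if pull_up else 0

def nor2(a, b):
    pull_up = p_on(a) and p_on(b)     # p-channel devices in series
    pull_down = n_on(a) or n_on(b)    # n-channel devices in parallel
    assert pull_up != pull_down
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand2(a, b), "NOR:", nor2(a, b))
```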
[Figures: logic symbols of the CMOS inverter (X = A') and NAND gate (X = /(A.B)), and the CMOS transmission gate controlled by the complementary enable signals EN and /EN]
When EN is High and /EN is Low, the transmission gate provides a low-resistance connection (as low as 5 Ω) between points A and B. When EN is Low and /EN is
High, points A and B are disconnected. Once transmission gate is enabled, the
propagation delay from A to B (or vice versa) is very short. Because of their short
delays and conceptual simplicity, transmission gates are often used internally in
larger-scale CMOS devices such as multiplexers and flip-flops. For example, figure
10 shows how transmission gates can be used to create a 2-input multiplexer
[Figure 10: a 2-input multiplexer built from transmission gates, steered by the select signal S]
A circuit diagram (including schematics for gates) for a CMOS three-state buffer is
shown in the figure 11. When enable (EN) is Low, both output transistors are off,
and the output is in the Hi-Z state. Otherwise, the output is High or Low as
controlled by the “data” input A. The figure also shows logic symbol for a three-
state buffer. There is a leakage current of up to 10 µA associated with a CMOS
three-state output in its Hi-Z state. This current, as well as the input currents of
receiving gates, must be taken into account when calculating the maximum
number of devices that can be placed on a three-state bus. That is, in the Low or
High state, an enabled three-state output must be capable of sinking or sourcing
10µA of leakage current for every other three-state output on the bus, as well as
sinking the current required by every input on the bus.
[Figure 11: CMOS three-state buffer and its logic symbol]
EN A  Q1   Q2   OUT
L  L  OFF  OFF  Hi-Z
L  H  OFF  OFF  Hi-Z
H  L  ON   OFF  L
H  H  OFF  ON   H
CMOS open-drain NAND gate (the High level must be provided by an external pull-up resistor, e.g. R = 1.5 KΩ):
A B  Q1   Q2   X
L L  off  off  open
L H  off  on   open
H L  on   off  open
H H  on   on   L
[Figure: open-drain outputs with an external pull-up resistor, and the wired connection of two such gates producing a single output Y]
The HC is mainly optimised for use in systems that use CMOS logic
exclusively, and can use any power supply voltage between 2 and 6 V. A
higher voltage is used for higher speed, and lower voltage for lower power
dissipation. Lowering the supply voltage is especially effective, since most
CMOS power dissipation is proportional to the square of the voltage (CV2f).
Even when used with a 5 V power supply, HC devices are not quite
compatible with TTL. In particular, HC circuits are designed to recognise CMOS input levels. The output levels produced by TTL devices do not quite match this range, so HCT devices use different input levels. These levels are established in the fabrication process by making transistors with different switching thresholds, producing different transfer characteristics.
Two more CMOS families, known as AC (Advanced CMOS) and ACT
(Advanced CMOS, TTL compatible) were introduced in mid-1980s. These
families are fast, comparable to ALSTTL, and they can source or sink more
current than most of the TTL circuits can. Like HC and HCT, the AC and ACT
families differ only in the input levels that they recognise; their output
characteristics are the same. Also like HC/HCT, AC/ACT outputs have
symmetric output drive.
In the early 1990s, yet another CMOS family was launched. The FCT (Fast
CMOS, TTL compatible) family combines circuit innovations with smaller
transistor geometries to produce devices that are even faster than AC and
ACT while reducing power consumption and maintaining full compatibility
with TTL. There are two subfamilies, FCT-T and FCT2-T. These families
represent a “technology crossover point” that occurred when the performance
achieved using CMOS technology matched that of bipolar technology, and
typically one third the power. Both the logic families are TTL compatible,
which means that they conform to the industry-standard TTL voltage levels
and threshold point (1.5 V), and operate from a 5 Volt VCC power source. All
inputs are designed to have a hysteresis of 200 mV (low-to-high threshold of
1.6 V and high-to-low threshold of 1.4V). This hysteresis increases both the
static and dynamic noise immunity, as well as reducing the sensitivity to
noise superimposed on slowly rising or falling inputs. Individual logic gates
are not manufactured in the FCT families. Just about the simplest FCT logic
element is a 74FCT138/74FCT138T decoder, which has six inputs, eight
outputs and contains the equivalent of about twelve 4-input gates internally.
Logical Levels and Noise Margins: The generated voltage levels given by
the manufacturing data sheet for HCMOS circuits operating at VDD = 5 V, are
given in the Table 1. The input parameters are mainly determined by the
switching threshold of the two transistors, while the output parameters are
determined by the ON resistance of the transistors. These parameters apply
when the device inputs and outputs are connected only to other CMOS
devices. The dc voltage levels and noise margins of CMOS families are given
in the Table 1.
TABLE 1: DC Characteristics of CMOS Families
Family   VIHMIN    VILMAX    VOHMIN    VOLMAX   NM LOW    NM HIGH   Units
                                                @VCC=5V   @VCC=5V
4000B    2/3 VCC   1/3 VCC   VCC-0.1   0.01     1.6       1.6       V
HCMOS    3.5       1.5       VCC-0.1   0.1      1.4       1.4       V
These dc noise margins are significantly better than those associated with TTL families. As CMOS circuits can be operated with VDD anywhere from 2 V to 6 V, the voltage levels associated with CMOS gates may be expressed as fractions of VDD (for the 4000B family, for example, VIHMIN = 2/3 VDD and VILMAX = 1/3 VDD, as in Table 1).
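To make the arithmetic concrete, the short Python sketch below (an illustration added to these notes, not a data-sheet extract) recomputes the noise margins of Table 1 at VCC = 5 V from NM(High) = VOHmin - VIHmin and NM(Low) = VILmax - VOLmax.

```python
# DC noise margins at VCC = 5 V, using the levels of Table 1.
VCC = 5.0
families = {
    # family: (VIHmin, VILmax, VOHmin, VOLmax)
    "4000B": (2 * VCC / 3, VCC / 3, VCC - 0.1, 0.01),
    "HCMOS": (3.5, 1.5, VCC - 0.1, 0.1),
}
for name, (vih, vil, voh, vol) in families.items():
    nm_high = voh - vih     # High-state noise margin
    nm_low = vil - vol      # Low-state noise margin
    print(f"{name}: NM(High) = {nm_high:.2f} V, NM(Low) = {nm_low:.2f} V")
# 4000B: about 1.6 V in both states; HCMOS: 1.4 V in both states.
```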
Regardless of the voltage applied to the input of a CMOS inverter, the input
currents are very small. The maximum leakage current that can flow,
designated as II max, is + 1µA for HCMOS with 5 V power supply. As the load
on a CMOS gate could vary, the output voltage would also vary. Instead of
specifying the output impedance under all conditions of loading the
manufacturers specify a maximum load for the output in each state, and
guarantee a worst-case output voltage for that load. The load is specified in
terms of currents. The input and output currents are given in the Table 2.
These specifications are given at voltages which are normally associated with TTL
gates. If the current drawn by the load is smaller, the voltage levels would improve
significantly. This happens when CMOS gates are connected to CMOS loads.
It is important to note that in a CMOS circuit the output structure by itself
consumes very little current in either state, High or Low. In either state, one of the
transistors is in the high impedance OFF state. When no load is connected the only
current that flows through the transistors is their leakage current. With a load,
however, current flows through both the load and the ON transistor, and power is
consumed in both.
Fan out: The fan out of a logic gate is the number of inputs that the gate can drive
without exceeding its worst-case loading specifications. The fan out depends not
only on the characteristics of the output, but also on the inputs that it is driving.
When a HCMOS gate is driving HCMOS gates, we note that IILmax is +1 µA in any
state, and IOHmax = -20 µA and IOLmax = 20 µA. Therefore, the Low-state fan out is
20 and High-state fan out is 20 for HCMOS gates. However, if we are willing to
work with slightly degraded output voltages, which would reduce the available
noise margins, we can go for IOHmax and IOLmax of 4.0 mA. This would mean that an
HCMOS gate can drive as many as 4000 HCMOS gates. But in actuality this would
not be true, as the currents we are considering are only the steady state currents
and not the transition currents. The actual fan out under degraded load conditions
would be far less than 4000. During the transitions, the CMOS output must charge or discharge the capacitance associated with the inputs that it drives. If this
capacitance is too large, the transition from Low to High (or vice versa) may be too
slow causing improper system operation.
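The fan-out figures quoted above follow directly from the current specifications; the following Python fragment (added purely as an illustration) repeats the arithmetic.

```python
# HCMOS driving HCMOS: steady-state fan-out limited by input leakage current.
I_I_max = 1e-6          # worst-case input current of one HCMOS input (1 uA)
I_O_max = 20e-6         # output current at fully specified output levels (20 uA)
I_O_degraded = 4.0e-3   # output current if degraded output levels are accepted

print("Fan-out (specified levels):", int(I_O_max / I_I_max))       # 20
print("Fan-out (degraded levels): ", int(I_O_degraded / I_I_max))  # 4000
# The figure of 4000 ignores the transient (capacitive) currents, so the usable
# fan-out is far smaller in practice, as noted above.
```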
Both the speed and the power consumption of CMOS devices depend to a large extent on the AC or dynamic characteristics of the device and its load, that is, what
happens when the output changes between states. The speed depends on two
factors, transition times and propagation delay.
The rise and fall times of an output of CMOS IC depend mainly on two factors, the
ON transistor resistance and the load capacitance. The load capacitance comes
from three different sources: output circuits including a gate’s output transistors,
internal wiring and packaging, have capacitances associated with them (of the
order of 2-10 pF); wiring that connects an output to other inputs (about 1pF per
inch or more depending on the wiring technology); and input circuits including
transistors, internal wiring and packaging (2-15 pF per input). The OFF transistor
resistance would be about 1 MΩ, while the ON resistance of a p-channel transistor would be about 100 Ω. We can compute the rise and fall times from the equivalent circuits.
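As a rough illustration of such an equivalent-circuit estimate, the sketch below (added to these notes) uses the standard single-pole RC result that the 10%-90% transition time is about 2.2·R·C. The ON resistance of 100 Ω comes from the figure quoted above; the total load capacitance of 50 pF is an assumed example value.

```python
# Rough 10%-90% transition-time estimate for a CMOS output driving a capacitive load.
R_on = 100.0      # ON resistance of the pull-up transistor, ohms (order of magnitude)
C_load = 50e-12   # assumed total load capacitance: output + wiring + driven inputs

t_transition = 2.2 * R_on * C_load   # 10%-90% time of a single-pole RC circuit
print(f"Estimated transition time: {t_transition * 1e9:.1f} ns")   # about 11 ns
```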
Several factors lead to nonzero propagation delays. In a CMOS device, the rate at
which transistors change state is influenced both by the semiconductor physics of
the device and by the circuit environment including input-signal transition rate,
input capacitance, and output loading. The speed characteristics of CMOS families
are given in the Table 3.
Device outputs in AC and ACT families have very fast rise and fall times. Input
signals should have rise and fall times of 3.0 ns (400 ns for HC and HCT devices)
and signal swing of 0V to 3.0V for ACT devices or 0V to VDD for AC devices.
Obviously such signal transition times are a major source of analog problems,
including switching noise and “ground bounce”.
PT = CPD · VDD² · f
PT is the internal power dissipation given in watts, VDD is the supply voltage in
volts, f is frequency of output transitions in Hz, and CPD is the power dissipation
capacitance in farads. CPD for a gate of HCMOS is about 24 pF. This relationship is
valid only if the rise and fall times of the input signal are within the recommended
maximum values.
Second source of dynamic power dissipation is the CMOS power consumption due
to the capacitive load (CL) on the output. During the Low-to-High transition,
current passes through the p-channel transistor to charge the load capacitance.
Likewise, during the High-to-Low transition current flows through the n-channel
transistor to discharge the load capacitor. During these transitions the voltage
across the capacitor changes by + VDD. For each pulse there would be two
transitions. Since the currents pass through the transistors, and the capacitor itself does not dissipate any power, the power dissipated due to the capacitive load is
PL = CL · (VDD²/2) · 2f = CL · VDD² · f
The total dynamic power dissipation of a CMOS circuit is the sum of PT and PL:
PD = PT + PL = (CPD + CL) · VDD² · f
In most applications of CMOS circuits, CV²f power is the main type of power dissipation. CV²f power is also dissipated by bipolar circuits such as TTL, but at low to moderate frequencies it is insignificant compared to the static power dissipation of bipolar circuits.
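The entries of Table 4 for the HC '00 gate can be reproduced from these relationships. The Python fragment below is an added illustration; it uses CPD = 24 pF and the quiescent dissipation of 0.0025 mW from Table 4, and assumes no external load (CL = 0).

```python
# Power dissipation of one HC '00 gate at VCC = 5 V (figures from Table 4).
C_PD = 24e-12             # power dissipation capacitance, farads
V_DD = 5.0
P_quiescent = 0.0025e-3   # quiescent dissipation, watts

for f in (100e3, 1e6, 10e6):
    P_T = C_PD * V_DD ** 2 * f      # internal dynamic dissipation, CPD.VDD^2.f
    P_total = P_quiescent + P_T     # external load CL assumed zero here
    print(f"f = {f/1e6:>4} MHz: PT = {P_T*1e3:.4f} mW, total = {P_total*1e3:.4f} mW")
# 0.0625 mW at 100 kHz, 0.6025 mW at 1 MHz and 6.0025 mW at 10 MHz,
# matching Table 4.  A load CL adds a further CL.VDD^2.f to these figures.
```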
Unlike other CMOS families, FCT does not have a CPD specification. However, ICCD
specification gives the same information in a different way. The internal power
dissipation due to transition at a given frequency f can be calculated by the formula
PT = VCC . ICCD . f
This family also makes different speed grades of the same function available.
Power dissipation characteristics of CMOS families operated at 5V are given in the
Table 4.
TABLE 4: Power Dissipation Characteristics of CMOS Families
Parameter HC HCT AC ACT FCT Units
Quiescent power
dissipation
‘00 0.0025 0.0025 0.005 0.005 mW
‘138 0.04 0.04 0.04 0.04 7.5 mW
Power dissipation
capacitance
‘00 24 24 30 30 pF
‘138 85 85 60 60 pF
Dynamic power
dissipation
‘00 at 1 MHz 0.6 0.6 0.75 0.75 mW
‘138 at 1 MHz 2.1 2.1 1.5 1.5 1.5 mW
Total power
dissipation
'00 at 100 kHz 0.0625 0.0625 0.08 0.08 mW
'00 at 1 MHz 0.6025 0.6025 0.755 0.755 mW
'00 at 10 MHz 6.0025 6.0025 7.505 7.505 mW
'138 at 100 kHz 0.25 0.25 0.19 0.19 7.5 mW
‘138 at 1 MHz 2.14 2.14 1.54 1.54 9 mW
‘138 at 10 MHz 21.04 21.04 21.04 21.04 30 mW
Digital Electronics
Module 3: ECL Family
N.J. Rao
Indian Institute of Science
ECL Family
The key to reducing propagation delay in a bipolar logic family is to prevent the transistors in a gate from saturating. Schottky families prevent saturation by using Schottky diodes across the base-collector junctions of transistors. It is also possible to prevent saturation by using a structure called Current Mode Logic (CML). Unlike other logic families considered so far, CML does not produce a large voltage swing between low and high levels. Instead, it has a small voltage swing, less than a volt, and it internally switches current between two possible paths depending on the output state.
The first CML logic family was introduced by General Electric in 1961. The concept
was refined by Motorola and others to produce today’s 10K, 100K Emitter
Coupled Logic (ECL) families. These ECL families are fast and offer propagation
delays as short as 1 ns. In fact, throughout the evolution of digital circuit technology, some type of CML has always been the fastest commercial logic family. However, commercial ECL families are not nearly as popular as TTL and CMOS, mainly because they consume too much power. In fact, high power consumption has made the design of ECL supercomputers, such as the CRAY, as much of a challenge in cooling technology as in digital design. In addition, ECL has a poor power-speed product, does not provide a high level of integration, and has fast edge rates that require the designer to account for transmission-line effects; it is also not directly compatible with TTL and CMOS. But the ECL family continues to survive in applications which require maximum speed regardless of cost.
ECL Circuits
Basic CML Circuit: The basic idea of current mode logic is illustrated by the
inverter/buffer circuit in the figure 1. This circuit has both inverting (OUT1) and
non-inverting output (OUT2). Two transistors are connected as a differential
amplifier with a common emitter resistor R3. Let the supply VCC = 5 V, VBB = 4 V
and VEE = 0 V. Input Low and High levels are defined to be 3.6 and 4.4 V. This
circuit produces output Low and High levels 0.6 V higher (4.2 and 5.0 V). When
VIN is high transistor Q1 is ON, but not saturated, and transistor Q2 is OFF. When
Q1 is ON VE is one diode drop lower than VIN, or 3.8 V. Therefore, current through
R3 is (3.8/1.3 KΩ) 2.92 mA. If Q1 has a β of 10, then 2.65 mA of this current
comes through the collector and R1, so VOUT1 is 4.2 V (Low). Since the voltage across Q1 (4.2 - 3.8 = 0.4 V) is greater than VCEsat, Q1 is not saturated. Q2 is off because its base-to-emitter voltage (4.0 - 3.8 = 0.2 V) is less than 0.6 V. Thus VOUT2 is at 5.0 V (High) as no current passes through R2.
[Figure 1: basic CML inverter/buffer: differential pair Q1, Q2 with common emitter resistor R3 = 1.3 KΩ, collector resistors R1 = 300 Ω and R2 = 330 Ω, VBB = 4 V, VEE = 0 V]
When VIN is Low (3.6 V), Q2 is ON; VE is one diode drop lower than VBB, or 3.4 V, and the current through R3 is (3.4/1.3 KΩ =) 2.6 mA. The collector current of Q2 is 2.38 mA for a β of 10. The voltage drop across R2 is (2.38 x 0.33 =) 0.8 V, and VOUT2 is about 4.2 V. Since the collector-emitter voltage of Q2 is (4.2 - 3.4 =) 0.8 V, it is not saturated. Q1 is off because its base-emitter voltage is (3.6 - 3.4 =) 0.2 V, less than 0.6 V. Thus VOUT1 is pulled up to 5.0 V through R1.
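The two bias conditions worked out above can be checked numerically. The sketch below (Python, added to these notes only as an illustration) uses the component values of figure 1 and a transistor β of 10.

```python
# DC analysis of the basic CML inverter/buffer of figure 1.
VCC, VBB, VBE = 5.0, 4.0, 0.6
R1, R2, R3 = 300.0, 330.0, 1300.0
beta = 10.0

def output_levels(v_in):
    if v_in > VBB:                   # input High: Q1 conducts, Q2 is off
        ve = v_in - VBE
        ic = (ve / R3) * beta / (beta + 1)
        return VCC - ic * R1, VCC    # (VOUT1, VOUT2)
    else:                            # input Low: Q2 conducts, Q1 is off
        ve = VBB - VBE
        ic = (ve / R3) * beta / (beta + 1)
        return VCC, VCC - ic * R2

for vin in (4.4, 3.6):               # defined High and Low input levels
    out1, out2 = output_levels(vin)
    print(f"VIN = {vin} V -> VOUT1 = {out1:.2f} V, VOUT2 = {out2:.2f} V")
# VIN = 4.4 V -> VOUT1 ~ 4.2 V (Low),  VOUT2 = 5.0 V (High)
# VIN = 3.6 V -> VOUT1 = 5.0 V (High), VOUT2 ~ 4.2 V (Low)
```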
To perform logic with the basic unit of figure 1, we simply place additional
transistors in parallel with Q1. Figure 2 shows a 2-input OR/NOR gate. If any
input is High, the corresponding input transistor is active, and VOUT1 is Low (NOR
output). At the same time, Q3 is off, and VOUT2 is High (OR output). However, the
circuit shown in figure 2 cannot meet the input/output loading requirements
effectively.
FIG. 2: CML 2-input OR/NOR gate
ECL 10K Family: The most popular ECL family is designated ECL 10K, as its ICs carry 5-digit part numbers. The ECL 10K OR/NOR gate is shown in the figure 3.
[Figure 3: ECL 10K 2-input OR/NOR gate: input transistors with 50 KΩ pull-down resistors, differential pair with collector resistors of 220 Ω and 245 Ω, 779 Ω emitter resistor, internal bias network, and emitter-follower outputs Vout1 and Vout2]
Noise on VEE is a "common mode" signal that is rejected by the input structure's differential amplifier.
A pull-down resistor on each input ensures that if an input is left unconnected, it is treated as Low. The emitter-follower outputs used in ECL 10K require external pull-down resistors as shown in the figure. This is because of the fast transition times (typically 2 ns). The short transition times require special attention, as any interconnection longer than a few centimetres must be treated as a transmission line. By removing the internal pull-down resistor, the designer can select a resistor that satisfies the pull-down requirements as well as the transmission-line termination requirements. The simplest terminator for short connections is a resistor in the range of 270 Ω to 2 KΩ.
ECL SUBFAMILIES
Motorola has offered MECL circuits in five logic families: MECL I, MECL II, MECL
III, MECL 10000 (MECL 10K), and MECL 10H000 (MECL 10KH). The MECL I family
was introduced in 1962, offering 8 ns gate propagation delay and 30 MHz toggle
rates. This was the highest performance from any logic family at that time.
However, this family required a separate bias driver package to be connected to
each logic function. The ten pin packages used by this family limited the number
of gates per package and the number of gate inputs. MECL II was introduced in
1966. This family offered 4 ns propagation delay for the basic gate, and 70 MHz
toggle rates. MECL II circuits have a temperature compensated bias driver
internal to the circuits, which simplifies circuit interconnections.
MECL III was introduced in 1968. They offered 1 ns gate propagation delays and
flip-flop toggle rates higher than 500 MHz. The 1 ns rise and fall times required a
transmission line environment for all but the smallest systems. For this reason, all
circuit outputs are designed to drive transmission lines and all output logic levels
are specified when driving 50-ohm loads. For the first time with MECL, internal
input pull down resistors are included with the circuits to eliminate the need to tie
unused inputs to VEE..
speed of 2.5 ns and flip-flop toggle rate of 250 MHz), and 10800 LSI family
(propagation delay of 1 - 2.5 ns and edge speed of 3.5 ns)
MECL 10KH family was introduced in 1981. This family provides a propagation
delay of 1 ns with edge speed at 1.8 ns. These speeds, which were attained with
no increase in power over MECL 10K, are due to both advanced circuit design
techniques and new oxide isolated process called MOSAIC. To enhance the
existing systems, many of the MECL 10KH devices are pin-out/functional
duplications of the MECL 10K family. Also, MECL 10K/10KH are provided with
logic levels that are completely compatible with MECL III. Another important
feature of MECL 10K/10KH is the significant power reduction over both MECL III
and the older MECL II. Because of the power reductions and advanced circuit
design techniques, the MECL 10KH family has many new functions not available
with the other families.
The latest entrant to the ECL family is ECL 100K, having 6-digit part numbers.
This family offers functions, in general, different from those offered by 10K series.
This family operates with a reduced power supply voltage -4.5 V, has shorter
propagation delay of 0.75 ns, and transition time of 0.7 ns. However, the power
consumption per gate is about 40 mW.
The input and output levels, and noise margins of ECL gates are given in the
Table 1. These values are specified at TA = 25oC and the nominal power supply
voltage of VEE = -5.2 V.
The noise margin levels are slightly different in the High and Low states. This specification by itself does not give a complete picture of the noise immunity of a system built with a particular set of circuits. In general, noise immunity involves line impedances, circuit output impedances, and propagation delay in addition to the noise-margin specifications.
Loading Characteristics: The differential input to ECL circuits offers several
advantages. Its common-mode-rejection feature offers immunity against power-
supply noise injection, and its relatively high input impedance makes it possible
for any circuit to drive a relatively large number of inputs without deterioration of
the guaranteed noise margin. Hence, DC fan out with ECL circuits does not
normally present a design problem. Graphs given by the vendor showing the
output voltage levels as a function of load current can be used to determine the
actual output voltages for loads exceeding normal operation.
Transition Times and Propagation Delays: The transition times and delays
associated with different ECL families are given in the following.
The rise and fall times of an ECL output depend mainly on two factors, the
termination resistor and the load capacitance. Most of the ECL circuits typically
have a 7 ohm output impedance and are relatively unaffected by capacitive
loading on positive going output signal. However, the negative-going edge is
dependent on the output pull-down or termination resistor. Loading close to an ECL output pin will cause an additional propagation delay of 0.1 ns per fan-out load with a 50 ohm resistor to -2.0 Vdc, or 270 ohms to -5.2 Vdc. The input loading capacitance of an ECL 10K gate is about 2.9 pF. To allow for the IC connector or solder connection and a short stub length, 5 to 7 pF is commonly used in loading calculations.
Digital Electronics
Module 4: Combinational Circuits:
An Introduction
N.J. Rao
Indian Institute of Science
We are familiar with logic circuits, classified as:
• Combinational Circuits
• Sequential Circuits
Early "logic gates":
• were built with discrete devices
• were expensive
• consumed considerable power
• occupied significant amount of space on the printed circuit board
• minimisation of the number of gates was therefore one of the major design objectives
AND gate (&):
A B Y        A B Y        /A /B /Y
L L L        0 0 0        1  1  1
L H L        0 1 0        1  0  1
H L L        1 0 0        0  1  1
H H H        1 1 1        0  0  0

OR gate (≥1):
A B Y        A B Y        /A /B /Y
L L L        0 0 0        1  1  1
L H H        0 1 1        0  1  0
H L H        1 0 1        1  0  0
H H H        1 1 1        0  0  0
Incorrect examples
MINI
MA PRINCA = MINI.MA.CLR/
CLR
/PRIN1
PRIN=PRIN1/+PRIN2
/PRIN2
Mode Signals
• Assigning Assertion levels is not meaningful
• These signals are indicative of more than one action.
• Different actions take place in both the states of the
signals.
• The two actions are mutually exclusive and one of the
actions is always implied
• Examples are R/W/, U/D/ and IO/M/.
R/W/:
• when the signal takes `High voltage' (H) it is indicative of
READ operation
• when it takes `Low voltage' (L) it is indicative of WRITE
operation
A B X
0 0 0
0 1 1
1 0 1
1 1 1
The symbolic representation is
The truth table presents a simple listing of the possible combinations of A and B
rather than having anything to do with truth or falsehood of the variables concerned.
It will be more appropriate if the truth table can be interpreted more as the input-
output relation of a logic function. With this understanding we will continue to use
the word truth table.
A digital system may more conveniently be considered as a unit that processes
binary input actions and generates binary output actions. Most hardware responses
generally are either responses to some physical operation or some conditions
resulting from physical action. For example many of the signals that you come
across in digital systems are of the type
• START
• LOAD
• CLEAR
These signals are indicative of actions to be performed rather than establishing the
Truth or Falsehood of something.
For example, to say when LOAD is true does not convey the intended meaning. It
appears more appropriate to say when the signal LOAD is Asserted, the intended
action, namely, loading takes place.
Therefore, Asserted/Not Asserted qualification is more meaningful and appropriate
than the True/False qualification in the case of signals that clearly indicate action.
The entries in the truth table can now be interpreted in a different manner.
• The entry 0 is to be read as the corresponding variable Not Asserted
• The entry 1 is to be read as the corresponding variable Asserted
Consider the Truth Table given in the following.
A B X
0 0 1
0 1 0
1 0 0
1 1 0
Logic Gates
Logic gate refers to a unit of hardware that generates output voltage levels in a well-
defined relationship to the input voltage levels. A given gate may perform a variety
of functions depending upon the Assertion levels of the input and output signal
levels.
Consider an AND Gate
Figure shows a two input AND gate and the relationship between the input and
output voltage levels.
Let us assume the input and output variables are Asserted High. The corresponding
truth table can be written as;
A B X
0 0 0
0 1 0
1 0 0
1 1 1
/A /B /X
1 1 1
1 0 0
0 1 0
0 0 0
• The absence of the polarity indicator at that output of a logic unit defines that
signal at that point as Asserted High.
• The presence of the polarity indicator at the output of a logic unit defines the
signal at that point as Asserted Low
Typical correct examples are shown in the figure 2.
STRT /CLR
PTRL
/LOCK /STRT
SEQ A
CLR /PRINCA = MINI/.CLR/.SEQ A
MINI
MINI
MA PRINCA = MINI.MA.CLR/
CLR
/PRIN1
PRIN=PRIN1/+PRIN2
/PRIN2
Exceptions:
Mode Signals: There is one class of signals, designated as MODE signals, for which
assigning Assertion levels is not meaningful. These signals are indicative of more
than one action. Different actions take place in both the states of the signals.
Typical examples are R/W/, U/D/ and IO/M/.
In the case of R/W', when the signal takes `High voltage' (H) it is indicative of READ
operation, and when it takes `Low voltage' (L) it is indicative of WRITE operation.
These two operations are mutually exclusive and one of the operations is always
implied.
UP/DOWN/ signal is encountered in the counters. When U/D' takes H the counter
counts up, and when it takes L the counter counts down.
Usage of such signals should be kept to a minimum. A more convenient way of
designating such signals is to say MODE 0, MODE 1 etc.
Binary Data: Digital systems process binary data besides binary signals. It does
not sound appropriate to state that a data bit is Asserted or Not Asserted. The line
will assume High or Low voltage values as per the numerical value of that data bit.
In this sense it is more like the mode signal, which implies different actions in
different states of the signal. In this case of data line it will either convey a
numerical value of 0 or 1. A data line, designated with mnemonics like DBIT-4, is
always Asserted High signal, i.e., when it takes High voltage it is considered having a
numerical value of ‘1’ and when it takes Low voltage it is considered having a
numerical value of `0'.
Unused Inputs: Integrated circuits are available in standard SSI and MSI packages.
These ICs are designed to have widest possible applicability. Therefore, all the
inputs and capabilities may not be used every time an IC is incorporated into a
circuit. The unused inputs of such IC gates as well as sequential circuits will have to
be tied at known states. For example, if a 3-input OR gate is used only as a 2-input
OR gate, the unused input should be kept in Not Asserted state. This may
correspond to a high voltage or a low voltage. A high voltage input is shown by the
letter H and a low voltage input is shown by the letter L. A few examples with gates
are shown in the figure 5.
[Figure 5: examples of gates with unused inputs tied to H or L]
NOR Gates: Quad 2-input NOR - 02 / 02 / 02A; Triple 3-input NOR - 27 / --- / 27
Parity checker
EP = A ⊕ B ⊕ C ⊕ D ⊕ E
Its canonical version:
EP = A B C D E + A B C' D' E + A B C' D E' + A B C D' E' +
A' B' C' D' E + A' B' C' D E' + A' B' C D' E' + A' B' C D E +
A B' C' D' E' + A B' C D E' + A B' C D' E + A B' C' D E +
A' B C' D' E' + A' B C D E' + A' B C D' E + A' B C' D E
Multiple levels
[Figures: gate-level realizations of the STRT expression using the signals KICK, PTRL, IGNI, SLOP, NEUTR and LOCK, with unused inputs tied to H or L]
FIG. 2: Implementation of the expression for STRT with commercially available gates
Any logic expression in SOP form can essentially be considered to be ANDing of
different groups of variables and ORing the outputs of the AND gates. Therefore, it
was considered convenient to make available in the same package AND and OR gates suitably interconnected, as shown in the figure 3.
[Figures: AND-OR and NAND/INVERTER realizations of the STRT expression, with unused gate inputs tied to H or L]
FIG. 5: NAND and INVERTER realization of a logical expression in SOP form with
variables in AH form
The major advantage of realizing logical expressions through NAND gates is that the
inventory in an organization can be kept to a single variety of gates.
If the expression is available in the POS form then it is better to realize it using NOR
gates. Consider the expression for STRT in the POS form as given below.
STRT = (PTRL+IGNI+NEUTR+KICK).(PTRL+IGNI+SLOP+NEUTR+LOCK)
Realization using the commercially available NOR gates is shown in the figure 6.
[Figure 6: NOR-gate realization of the STRT expression in POS form]
EP = A ⊕ B ⊕ C ⊕ D ⊕E
This is more conveniently realized in this form rather than realizing it in its canonical
version. The logical expression for the parity checking, in its canonical form is
EP = A B C D E + A B C' D' E + A B C'D E' + A B C D' E' +
A' B' C' D' E + A' B' C' D E' + A' B' C D' E' + A' B' C D E +
A B' C' D' E' + A B' C D E' + A B' C D' E + A B' C' D E +
A' B C' D' E' + A' B C D E' + A' B C D' E + A' B C' D E
Its implementation using 74LS86/HC86s (Quad 2-input EX-OR) is shown in the figure
7. Notice that while this realization appears simple, the number of levels of gating is
considerably more. This would mean more delay to generate the output variable.
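The equivalence of the EX-OR form and the 16-term canonical form can be verified exhaustively. The Python fragment below is an added illustration; it simply checks all 32 input combinations.

```python
from itertools import product

# The 16 minterms of the canonical expression, as values of (A, B, C, D, E).
minterms = {
    (1,1,1,1,1), (1,1,0,0,1), (1,1,0,1,0), (1,1,1,0,0),
    (0,0,0,0,1), (0,0,0,1,0), (0,0,1,0,0), (0,0,1,1,1),
    (1,0,0,0,0), (1,0,1,1,0), (1,0,1,0,1), (1,0,0,1,1),
    (0,1,0,0,0), (0,1,1,1,0), (0,1,1,0,1), (0,1,0,1,1),
}

for bits in product((0, 1), repeat=5):
    ep_xor = bits[0] ^ bits[1] ^ bits[2] ^ bits[3] ^ bits[4]   # A ⊕ B ⊕ C ⊕ D ⊕ E
    ep_sop = 1 if bits in minterms else 0                      # canonical SOP
    assert ep_xor == ep_sop
print("EP = A xor B xor C xor D xor E matches the canonical sum of 16 minterms.")
```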
[Figure 7: parity checker for A, B, C, D, E built from 2-input EX-OR gates]
[Waveforms defining tPLH and tPHL between an input A and the output /A of an inverting gate, measured at the threshold voltage VT]
FIG. 3: Realisation of the logic expression for START in its non-canonical form
You should note that the minimization of propagation delay may not always be the
design objective. In such cases you may use other forms of the logical expression if
they do not increase the chip count. The form of the expression may be chosen to
make the design more easily understandable.
Let us consider some hardware aspects of propagation delay. The value given for tPHL
and tPLH are not the guaranteed maximums under all operating conditions. Normally
the delays for LSTTL family are defined at the following operating conditions:
T = 25°C, V = 5 volts, C = 15 pF, and R = 2 KΩ
For HCMOS family the delay times are defined at many operating conditions, as
these devices can be operated at voltage levels below 6 volts. The dc and ac
characteristics including the time delays are specified at nine operating points (three
voltages and three temperatures)
1. VCC = 2.0 V; T: 25°C to -55°C, < 85°C, and < 125°C; CL = 50 pF; input tr = tf = 6 ns
2. VCC = 4.5 V; T: 25°C to -55°C, < 85°C, and < 125°C; CL = 50 pF; input tr = tf = 6 ns
3. VCC = 6.0 V; T: 25°C to -55°C, < 85°C, and < 125°C; CL = 50 pF; input tr = tf = 6 ns
FIG. 4: Test circuits for LSTTL ICs with totem-pole outputs and for HCMOS ICs
Other types of load circuits we are likely to encounter are shown in the figure 5.
[Figure 5: load circuits using RL, CT and RE; an accompanying plot shows the variation of tPHL and tPLH from their 25°C values]
The analysis and minimisation methods presented so far predict the behaviour of combinational circuits under steady state. This means that the output of the circuit is considered only after all the transients that are likely to be produced when the state of the input signals changes have died out. However, the finite delays associated with gates make the transient response of a logic circuit different from its steady-state behaviour. These transients occur because different paths that exist from input to output may have
different propagation delays. Because of these differences in the propagation delays
combinational circuits, as we will demonstrate, can produce short pulses, known as
glitches, under certain conditions, though the steady state analysis does not predict
this behaviour. A hazard is said to exist when a circuit has the possibility of
generating a glitch. However, the actual occurrence of the glitch and its pulse width
depend on the exact delays associated with the actual devices used in the circuit.
Since the designer has no control over this parameter it is necessary for him to
design the circuit in a manner that avoids the occurrence of glitches. While a given
circuit can be analysed for the presence of glitches, it is necessary to design the
system in a manner that hazard analysis of the circuit would not be necessary. One
simple method is not to look at the outputs until they settle down to their final value.
Consider the realization of the logical expression X = A B' + B C' D' as shown in the figure 3.14. In this circuit the hazard is caused by the propagation delay associated with gate-1. Let A and B be Asserted and C and D be Not-asserted. When B changes from its Asserted state to its Not-asserted state, with the other variables remaining the same, the output should remain in its Asserted state. However, when B changes from 1-to-0 the output of gate-5 changes from 1-to-0. The output of gate-4 should change from 0-to-1 at the same time. But the delay associated with gate-1 makes this transition of the gate-4 output happen a little later than that of gate-5. This can cause a brief transition of X from 1-to-0 and then from 0-to-1, as shown in the figure 1.
[Figure 1: gate-level circuit for X = A B' + B C' D' (gates 1-6) and the K-map showing the groupings A B' and B C' D']
It is clear from the K-Map that the hazard associated with the 1-to-1 transition
occurred when the change of state of the variable B caused the transition from one
grouping BC'D' to another grouping AB'. This jump made it necessary for the signal B
to go through another path of longer delay to keep the output at the same state.
While the K-Map makes it easy to identify the hazard associated with 1-to-1
transition it is much more difficult to detect the other three transitions. Fortunately
one result from Logic and Switching Theory comes to our rescue. The theorem for
the hazard free design states that a two level gate implementation of a logical
expression will be hazard free for all transitions of the output if it is free from the
hazard associated with 1-to-1 transition. This theorem makes it very easy to detect
and correct for the hazards in a combinational circuit, since the 1-to-1 transition can
easily be detected through K-Map. When the input variables change in such manner
as to cause a transition from one grouping to another grouping, the 1-to-1 transition
can occur. Therefore, the procedure to eliminate hazards in two level gating
realization of a logical expression is to include all 1s which are unit distance apart at
least in one grouping. In the example considered above the hazard occurred
because when B changed its state, it caused a transition from BC'D' grouping to A B'
grouping. Therefore the solution to remove hazard is to group the terms ABC'D' and
AB'C'D' together. This would lead to an additional gate. This procedure is illustrated
in the figure 4. The added gate defines the output during the transition of B from
one state to the other. This procedure can be applied to all two level gate situations
to eliminate hazards.
[Figure 4: hazard-free realization of X, with an additional gate implementing the grouping of A B C' D' and A B' C' D']
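The glitch described above, and its removal, can be reproduced with a very small gate-level simulation. The sketch below (Python, added to these notes as an illustration) gives every gate a delay of one time unit and watches the output X while B changes from 1 to 0 with A = 1 and C = D = 0; including the redundant grouping A C' D' holds the output at 1 throughout.

```python
# Unit-delay simulation of X = A.B' + B.C'.D' and of the hazard-free version.
def simulate(add_redundant_term):
    A, C, D = 1, 0, 0
    B = 1
    # settled values of the internal nodes for B = 1
    n1 = 0          # gate 1: B'
    g4 = 0          # gate 4: A.B'
    g5 = 1          # gate 5: B.C'.D'
    g7 = 1          # extra gate: A.C'.D' (used only in the hazard-free circuit)
    X = 1           # gate 6: OR of the product terms
    waveform = []
    for t in range(8):
        if t == 2:
            B = 0   # the input change that exposes the hazard
        # every gate output is recomputed from the *previous* values of its inputs
        n1, g4, g5, g7, X = (
            1 - B,
            A & n1,
            B & (1 - C) & (1 - D),
            A & (1 - C) & (1 - D),
            (g4 | g5 | g7) if add_redundant_term else (g4 | g5),
        )
        waveform.append(X)
    return waveform

print("X = A.B' + B.C'.D'          :", simulate(False))  # a brief 0 appears: the glitch
print("X = A.B' + B.C'.D' + A.C'.D':", simulate(True))   # stays at 1 throughout
```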
A logic gate has limited capacity to source and sink current at its output. As the
output of a gate is likely to be connected to more than one similar gate, the designer
has to ensure that the driving unit has the necessary current capability. The loading
in the case of digital circuits, built with TTL integrated circuits, is defined in terms of
Unit Loads (UL). One UL is defined as that of the input of Std.TTL gate. This is given
by,
IIL : Input LOW Current (The current flowing out of an input when a
specified LOW level voltage is applied to that input)
IIH : Input HIGH Current (The current flowing into an input when a specified
HIGH level voltage is applied to that input)
IOL : Output LOW Current (The current flowing into an output which is in the
LOW state)
IOH - Output HIGH Current (The current flowing out of an output which is in
HIGH state)
= - 400 µA(with VCC at Minimum, VIL at Maximum and VOH = 2.4 volts)
If the LOW state output VOL is to be maintained at 0.4 volts LSTTL gates have 2.5 UL
capability, and if VOL can be tolerated at 0.5 volts it can support 5 ULs. However, it is
unlikely that we need to drive Std.TTL gates. The LSTTL gate has the input
characteristics as given below:
The input and output current specifications of a HCMOS gate are given by;
IOH = - 4.0 mA at VOH = 3.98 V with VCC = 4.5 V and T: -55o to 25o C
at VOH = 3.84 V with VCC = 4.5 V and T: < 85oC
IOL = 4.0 mA at VOL = 0.26 V with VCC = 4.5 V and T: 25o to -55o C,
As it can be seen the designer has to consider a wide range of operating conditions
to take loading effects into consideration when working with HCMOS family circuits.
For the HCTMOS family ICs the currents specified at VCC = 4.5 V need only to be
considered.
There may arise certain occasions, like a clock source driving many units and setting
up LOW(L) and HIGH (H) voltage levels to be connected to unused inputs, wherein it
may become necessary to provide more drive capability than the standard values. In
such cases buffers have to be used. The available buffers in LSTTL family are;
Quad 2 - input NAND Buffer - 74LS37
IOL = 24 mA
IOH = - 1200µA
They have the capacity to drive as many as 60 LSTTL loads. There is a small price to
be paid in terms of increased propagation delay (tPHL = tPLH = 24 n secs against the
usual 15 n secs) for this enhanced drive capability. This increased time delay should
not normally make any difference as these buffers are unlikely to be used for
implementing logic expressions. There are no similar buffers available in the HCMOS
and HCTMOS families. When we are required to drive a load even beyond the
capability of a buffer, discrete components have to be used.
LARGER OUTPUT VOLTAGE SWING
The worst case output voltage level of a gate when it is in HIGH state can be as low
as 2.7 volts in the case of LSTTL family and only 2.4 volts in the case of Std. TTL
family. In the case of HCMOS family the output voltage levels can go up to 5.5V if
6.0V power supply is used. If it is desired to have a higher output voltage swing one
simple way is to connect a 1 KΩ or a 2 KΩ resistor from VCC to the output terminal.
However, it should be remembered that this modification of the output circuit would
increase the propagation delay. Larger output voltage swings can be obtained with
the help of open-collector gates. In the open-collector (OC) gates the active pull-up
circuit of the output totem-pole configuration in the LSTTL circuit is deleted as shown
in the figure 1.
vcc
The designer can now have the choice of returning the open collector terminal to the
desired supply voltage, as long as its value is less than or equal to the VOH(max)
specified, through a suitable load resistor.
Quad 2-input NAND(OC) gate - 74LS03 [VOH (max) = 5.5 V, IOL = 8 mA]
The manner in which the load resistor is to be connected is shown in the figure 2. As
the pull-up is through a passive resistor the propagation delay will be higher than
that of the gate with the totem-pole output. For example 74LS26 operated at VCC of
5 V, RL = 2 KΩ and CL = 15 pF has tPLH = 32 ns (max) and a tPHL = 28 ns (max)
against tPHL = tPLH = 15 ns (max) in the case of 74LS00 under the same operating
conditions. Open collector gates are useful for interfacing ICs from different logic
families, and ICs with discrete circuits operating with different supply voltages.
vcc vC
The HCMOS and HCTMOS families do not offer many open drain circuits. Whenever
larger voltage swings are needed it is possible to use CD4000 series circuits. The
only open drain gate that is available in the HCMOS family is 74HC03, which is quad
2-input NAND gate. The major application of open-collector gates is in implementing
wired-logic operation needed in bussing signal lines.
WIRED-LOGIC OPERATIONS
If the outputs of the gates can be tied together as shown in the figure 1 it would be
possible to realise AND operation without the actual use of hardware.
[Figure 1: the outputs AB and CD of two gates tied together to realise X = A·B·C·D]
FIG.4: Connection of OC gates in parallel with all the outputs in the High state
It may be noted that when two OC gates are interconnected to perform wired-AND
operation they are capable of driving one to nine Unit Loads, and when an OC gate is
not paralleled with other gates then it can drive up to ten Unit Loads. The maximum
value of the load resistance RL must be selected to ensure that sufficient load current is available (to drive the gates connected as loads) when the output is High. Using the worst-case values for the High and Low states for designing the load resistor RL will give a guaranteed dc noise margin of 700 mV in the logic High state. Since 2.7 V should be present at the output, no more than 2.3 V can be dropped across RL. The current through RL is the sum of the current into the loads, m·IIH, and the leakage current into the output transistors which are biased into the off state, n·IOH. Both IOH and IIH are data sheet specifications; they are 250 µA and 20 µA respectively in the case of the 74LS38. The maximum value of the load resistor is calculated from the relationship given below:
RL(max) = (VCC − VOH(required)) / (n·IOH + m·IIH)
With n = 4, m = 3, VCC = 5 V and VOH(required) = 2.7 V, the maximum value of RL is 2170 ohms. A greater value will result in the deterioration of the High state
voltage value. The minimum value of RL is found by considering Low state at the
output of the paralleled gates as shown in the figure 5. RL is permitted to drop a
maximum voltage dictated by the noise margin in the Low state, which is 400 mV. In
the circuit shown in the figure 5, wherein the worst case situation is indicated, the
output of one gate is in Low state while the outputs of the remaining gates are in
High state. The resistor must be able to maintain the Low level while sinking the
load current from all the gates connected as load.
The minimum value of the load resistor RL may now be calculated from the
relationship given as below:
RL(min) = (VCC − VOL(required)) / (IOL(capability) − Isink(load))
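The worked example above is repeated in the Python fragment below (an added illustration). RL(max) is evaluated with the 74LS38 figures quoted in the text; RL(min) is only defined as a function, to be evaluated once IOL(capability) and the load sink current have been taken from the data sheet.

```python
# Load-resistor limits for wired-AND connected open-collector outputs.
def rl_max(vcc, voh_required, n, ioh_leak, m, iih):
    # n paralleled OC outputs (each leaking IOH) and m driven inputs (each drawing IIH)
    return (vcc - voh_required) / (n * ioh_leak + m * iih)

def rl_min(vcc, vol_required, iol_capability, i_sink_load):
    # evaluated with the Low-state data-sheet figures
    return (vcc - vol_required) / (iol_capability - i_sink_load)

# Example from the text: four 74LS38 outputs in parallel driving three inputs.
print("RL(max) = %.0f ohms" % rl_max(5.0, 2.7, n=4, ioh_leak=250e-6, m=3, iih=20e-6))
# about 2170 ohms, as calculated above.
```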
[Figure 5: paralleled OC gates with one output in the Low state and the rest High, the worst case for determining RL(min)]
When the output of an LSTTL tristate gate is in the Hi-Z state, the maximum leakage current at the output, which occurs when it is tied to a gate whose output is in the low-impedance High state, is +20 µA (into the output terminal). When the device is placed in its low-impedance state it has
all the desirable properties of the usual LSTTL gate. Another important factor in driving a bus
line is the current capability in sinking and sourcing. For this reason the output stage of a
tristate gate is designed to source 2.6 mA at a VOH of 2.7 V, and sink 24 mA at VOL of 0.5 V
and 12 mA at a VOL of 0.4 V. This is 6.5 times more sourcing capability than a LSTTL gate.
This will permit as many as 128 tristate logic (TSL) outputs to be tied to a common bus and still provide enough sourcing current to drive three LSTTL loads. If one device is ON and 127 are OFF, the following is valid: the enabled output must source the leakage current of the 127 OFF outputs (127 x 20 µA = 2.54 mA) plus the high-level input current of the three LSTTL loads (3 x 20 µA = 0.06 mA), a total of 2.6 mA, which is exactly its sourcing capability.
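The same bus-loading check is carried out by the short Python fragment below (added here only as an illustration of the arithmetic).

```python
# One enabled LSTTL tristate output driving a bus carrying 127 disabled (Hi-Z)
# tristate outputs and three LSTTL inputs, all held in the High state.
leakage_per_hi_z_output = 20e-6   # current drawn by each Hi-Z output, amperes
iih_per_lsttl_input = 20e-6       # high-level input current of an LSTTL load
source_capability = 2.6e-3        # IOH of the enabled tristate output at VOH = 2.7 V

required = 127 * leakage_per_hi_z_output + 3 * iih_per_lsttl_input
print(f"Required source current: {required*1e3:.2f} mA")        # 2.60 mA
print("Within capability:", required <= source_capability)      # True
```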
At present many of the combinational and sequential MSI circuits are available commercially
with tristate outputs. This option makes the usage of these ICs very convenient.
Digital Electronics
Module 4: Combinational Circuits:
Multiplexers
N.J. Rao
Indian Institute of Science
Multiplexers
[Figures: the IEEE symbol of a 4-input multiplexer with select inputs S0 and S1 (G-dependency), data inputs DI0-DI3, enable EN and output Y; an 8-input multiplexer with select inputs C, B, A; and a 32-channel multiplexer built from four 8-input multiplexers (channels ch1-ch32, additional select inputs s3 and s4)]
Notice that the relationship between the SELECT inputs and the DATA inputs is G
dependency.
Number of inputs
Nature of outputs
Propagation delay
The choice on the number of inputs enables us to select the appropriate multiplexer,
to minimize the number of ICs needed to implement a given logic function.
For example, if data is to be selected from two 16-bit sources, it is more convenient
to use 2-input multiplexers, than 4-input or 8-input multiplexers.
[Waveforms: tPLH and tPHL from the data and select inputs to the multiplexer output]
The multiplexer was mainly designed for selecting data from several sources. For
example, if we are required to select an 8-bit data from one of four possible sources,
then, it can be realised through four dual 4-input multiplexers, like 74LS153. The
circuit that realises such a data selection is shown in figure 3
[Figures: data-selection circuits, including a 32-input multiplexer assembled from four 8-input multiplexers (channels ch1-ch32), with the additional select inputs s3 and s4 decoded onto the enable inputs]
Y = m0 + m3 + m5 + m6
This expression can in turn be rewritten as:
Y = (A'B'C')·1 + (A'B'C)·0 + (A'BC')·0 + (A'BC)·1 + (AB'C')·0 + (AB'C)·1 + (ABC')·1 + (ABC)·0
Connect the logic variables A, B, and C to the Select Inputs and the binary constants to the Data lines of an 8-input multiplexer, as indicated in the figure 5. This is a very general method and it allows any expression of n logic variables to be realized by a 2^n-input multiplexer.
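The method is easy to check in code. The Python fragment below (an illustration added to these notes) models an 8-input multiplexer and, assuming A is connected to the most significant select input as in figure 5, verifies that the data pattern 1, 0, 0, 1, 0, 1, 1, 0 produces Y = m0 + m3 + m5 + m6.

```python
# Realizing Y = m0 + m3 + m5 + m6 with an 8-input multiplexer.
def mux8(data, s2, s1, s0):
    return data[(s2 << 2) | (s1 << 1) | s0]

data_inputs = [1, 0, 0, 1, 0, 1, 1, 0]   # H on data inputs 0, 3, 5, 6; L elsewhere
wanted_minterms = {0, 3, 5, 6}

for m in range(8):
    a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1   # A is the most significant variable
    y = mux8(data_inputs, a, b, c)
    assert y == (1 if m in wanted_minterms else 0)
print("The multiplexer output equals Y = m0 + m3 + m5 + m6 for all 8 combinations.")
```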
[Figure 5: realization of Y = m0 + m3 + m5 + m6 with an 8-input multiplexer; A, B and C drive the select inputs and each data input is tied to H or L]
Y = Σ (1, 2, 4, 7, 8, 9, 13)
Let X1, X2, X3 and X4 be the four variables, of which X1 is the most significant and X4 is the least significant variable. All variables are considered Asserted High. A 4-input function can be reduced to a 3-input function by expressing the output Y in terms of X4, as shown in the truth tables given in the following.

X1 X2 X3 X4  Y
0  0  0  1   1
0  0  1  0   1
0  1  0  0   1
0  1  1  1   1
1  0  0  0   1
1  0  0  1   1
1  1  0  1   1

X1 X2 X3  Y
0  0  0   X4
0  0  1   X4/
0  1  0   X4/
0  1  1   X4
1  0  0   1
1  0  1   0
1  1  0   X4
1  1  1   0

Y = X4 (0, 3, 4, 6) + X4/ (1, 2, 4)
  = X4 (0, 3, 6) + X4/ (1, 2) + (1) (4)
The realisation of the above expression with a 3-input (8- data input) multiplexer is
shown in the figure 6.
[Figure 6: realization with an 8-data-input multiplexer; X1, X2 and X3 drive the select inputs and the data inputs are connected to X4, H or L as required]
FIG. 7: Realisation of the expression in Example 2 when some of the variables are
Asserted Low
Digital Electronics
Module 4: Combinational Circuits:
Demultiplexers
N.J. Rao
Indian Institute of Science
Demultiplexers
• Demultiplexing
• Realisation of logic functions
[Figure: a multiplexer-demultiplexer pair transmitting channels CH1-CH8 over a single data line, with common select inputs S0-S2; the data is applied to one of the enable inputs of the demultiplexer]
Notice that one of the Enable inputs of the demultiplexers is used as its data input.
Though this illustration indicates that the address lines are tied together, in an actual
signal transmission unit that uses such a MUX-DEMUX combination a different
method will have to be used to change the addresses of both the units
simultaneously.
[Figure: a 1-of-32 demultiplexer built from four 74LS138s (outputs O9-O32 shown), with a '139 decoding the two most significant address bits X1 and X2 onto the enable inputs]
The important characteristics the designer must compute and take into account are
the delay times from the data and the address inputs to the output. The delay times
associated with 74LS138 are:
The worst case delay time from the most significant bits of the input address to
output of 1-to-32 demultiplexer is 76 n secs. The delay from the data input to output
is 70 n secs.
Y1 = Σ (0, 2, 4, 5, 6, 11)
Y2 = Σ (0, 3, 4, 7, 8)
If these expressions are to be realised using INVERTERS and NAND gates, we require
three-level gating (one level of INVERTERS and two levels of NANDs), which would
result in a propagation delay of 55 (15+20+20) n secs. Hence, the demultiplexer
solution does not give any speed advantage over the traditional realisation of logical
expressions using gates, whereas, the multiplexer realisation gave a marginal speed
advantage. However, demultiplexer solution to the realisation of logical expressions
can greatly reduce the net chip count, at least in some cases.
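The way a decoder/demultiplexer realises such expressions can be sketched in code. The example below (Python, an added illustration; it assumes a 1-of-16 decoder with active-Low outputs, such as two 74LS138s or a 74LS154) forms Y1 and Y2 by NANDing the decoder outputs that correspond to the required minterms, and checks the result for all sixteen input combinations.

```python
from itertools import product

def decoder_active_low(address, size=16):
    # 1-of-16 decoder: the selected output goes Low, all others stay High.
    return [0 if i == address else 1 for i in range(size)]

def nand_of(lines):
    return 0 if all(lines) else 1     # High when any selected decoder line is Low

Y1_minterms = [0, 2, 4, 5, 6, 11]
Y2_minterms = [0, 3, 4, 7, 8]

for bits in product((0, 1), repeat=4):             # most significant variable first
    address = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]
    outputs = decoder_active_low(address)
    y1 = nand_of([outputs[m] for m in Y1_minterms])
    y2 = nand_of([outputs[m] for m in Y2_minterms])
    assert y1 == (1 if address in Y1_minterms else 0)
    assert y2 == (1 if address in Y2_minterms else 0)
print("Y1 and Y2 are realised from the same decoder with one NAND gate each.")
```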
It is also possible to take into account if some of the variables in the logic expression
are Asserted Low. The solution is very similar to the procedure adapted in the case of
multiplexers, viz., either through changing assertion levels of the Asserted Low
variables or by taking into account the fact that incompatibility at the input results in
the complementation of the variables in the output logic expression. This is
illustrated in the example 2
Example: Realise the expression Y1 = Σ (0, 2, 5, 6) in terms of the variables X1, X2 and X3 using 74LS138. Indicate how the realization of the expression would vary if the variable X1 is changed to an asserted-low variable.
[Figure: realization of Y1 with a 74LS138; X1, X2 and X3 drive the address inputs and a NAND gate combines the selected outputs]
The modified truth-table for this expression is as given in the following. The
corresponding hardware realisation of the logic expression, where the assertion level
of the variable X1 is not altered is given in the figure 6.
X1 X2 X3 /X1 X2 X3 Y
0 0 0 1 0 0 1
0 1 0 1 1 0 1
1 0 1 0 0 1 1
1 1 0 0 1 0 1
[Figure 6: the same realization with /X1 applied to the most significant address input]
[Figures: the full-adder truth table and its realization from two half adders; symbols and delay specifications of the 4-bit adder ('283) and the ALU ('181); a 16-bit ripple-carry adder; the carry generate/propagate relations; and a two's-complement adder-subtractor using '86 EX-OR gates]
Simple Adders
The simplest binary addition is to add two one-bit numbers. When the sum of two
bits is more than 1 it is considered as an overflow and we generate a ‘carry’ bit. The
truth-table associated with this addition process is given in the following, with A and
B as the input one-bit numbers, S as the sum and C as the carry.
A B S C
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
The combinational circuit for the addition of two one-bit numbers is known as Half
Adder. The logical expressions for the two outputs, S and C, may be written from
the above truth-table as;
S = A B/ + A/ B = A ⊕ B
C = A . B
The gate level realisation of a half-adder is shown in the figure 1.
[Figure 1: half adder realized with an EX-OR gate and an AND gate]
Ai Bi Ci-1 Si Ci
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
The logical expressions for the Sum and Carry bits can be written as in the following
Si = Ai/ Bi / Ci-1 + Ai/ Bi Ci-1/ + Ai Bi/ Ci-1/ + Ai Bi Ci-1
Ci = Ai/ Bi Ci-1 + Ai Bi/ Ci-1 + Ai Bi Ci-1/ + Ai Bi Ci-1
= Bi Ci-1 + Ai Ci-1 + Ai Bi
The realisation of the full adder using gates is shown in the figure 2.
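These expressions translate directly into code. The Python fragment below (added for illustration) builds the half adder, combines two of them into a full adder as in figure 2, and checks a 4-bit ripple-carry connection against ordinary integer addition.

```python
def half_adder(a, b):
    return a ^ b, a & b               # (sum, carry)

def full_adder(a, b, c_in):
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, c_in)
    return s, c1 | c2                 # (sum, carry out)

def ripple_adder(a_bits, b_bits, c_in=0):
    # a_bits / b_bits are given least significant bit first
    result, carry = [], c_in
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

for a in range(16):
    for b in range(16):
        s_bits, c4 = ripple_adder(to_bits(a, 4), to_bits(b, 4))
        value = sum(bit << i for i, bit in enumerate(s_bits)) + (c4 << 4)
        assert value == a + b
print("4-bit ripple-carry adder verified for all 256 input pairs.")
```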
[Figure 2: full adder built from two half adders and an OR gate. Figure 3: 4-bit adder built by cascading full adders, with the carry rippling from stage to stage (inputs A1-A4 and B1-B4, outputs S1-S4 and C4)]
Delays associated with the 4-bit adder:
                   tPLH   tPHL
CI to Σ (max)      24     24    ns
CI to CO (max)     17     22    ns
A, B to Σ (max)    24     24    ns
A, B to CO (max)   17     17    ns
[Figure: a 16-bit adder built by cascading four 4-bit adders; the carry ripples from the CO of each stage to the CI of the next (carry in C0, carry out C15)]
While the internal circuitry of the available adder units is optimised to provide
minimum delay for the addition of 4-bit numbers, the carry bit has to ripple from one
group of bits to the next group in the case of a 16-bit adder. When the addition
involves numbers that are 32-bit or 64-bit long, the carry bit will have to ripple
through 8 and 16 stages of adders respectively. This will increase the addition time
significantly. One method of reducing the addition time is to add extra circuitry that
enables the determination of the final carry bit without waiting for it to ripple through
all the stages. Such an arrangement is called the Carry Look-Ahead feature. It is
based on deciding independently whether a particular stage of the addition generates a
carry bit of its own or merely propagates the carry bit coming from the previous stage. Let Ai
and Bi be the i-th bits of the multi-bit numbers A and B respectively. A carry bit is
generated from this stage to the next one, whether or not there is a carry bit from the
previous stage, if both bits are 1s. The carry bit from the previous stage is
propagated to the next stage if one of the bits, or both of them, are 1s. These two
functions, namely carry generate and carry propagate, can be defined as:
Carry Generate, Gi = Ai Bi
Carry Propagate, Pi = Ai + Bi
Let, in a 4-bit adder, C0 be the carry bit into the first stage and C1, C2, C3 and C4 be
the carry bits out of the four stages of addition. G0, G1, G2 and G3 are the carry
generates and P0, P1, P2 and P3 are the carry propagates of the four stages of the
4-bit adder. Then the relationships can be stated as below:
C1 = A0 B0 + C0 (A0 + B0) = G0 + C0 P0
C2 = A1 B1 + C1 (A1 + B1) = G1 + C1 P1
   = G1 + P1 G0 + P1 P0 C0
C3 = A2 B2 + C2 (A2 + B2) = G2 + C2 P2
   = G2 + P2 G1 + P2 P1 G0 + P2 P1 P0 C0
C4 = A3 B3 + C3 (A3 + B3) = G3 + C3 P3
   = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 + P3 P2 P1 P0 C0
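The look-ahead equations can be verified with a short sketch (the function name is ours). Given the operand bits of a 4-bit group and the incoming carry C0, all four carries are computed directly from the Gi and Pi terms instead of being allowed to ripple, and the result is checked against a plain ripple computation for every possible input:

```python
def lookahead_carries(a_bits, b_bits, c0):
    """Compute C1..C4 of a 4-bit group from Gi = Ai.Bi and Pi = Ai + Bi."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # carry generate terms
    p = [a | b for a, b in zip(a_bits, b_bits)]   # carry propagate terms
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    return c1, c2, c3, c4

# Cross-check against a plain ripple computation for every input combination.
for n in range(256):
    for c0 in (0, 1):
        a = [(n >> i) & 1 for i in range(4)]
        b = [(n >> (i + 4)) & 1 for i in range(4)]
        ripple, carry = [], c0
        for ai, bi in zip(a, b):
            carry = (ai & bi) | (carry & (ai | bi))   # Ci = AiBi + Ci-1(Ai + Bi)
            ripple.append(carry)
        assert lookahead_carries(a, b, c0) == tuple(ripple)
print("look-ahead carries match the ripple carries")
```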
If a 64-bit adder is to be built, a second-level carry look-ahead generator, taking the
group carry signals from each group of 16 bits, will have to be used.
[Figure: The '182 look-ahead carry generator, which accepts the carry propagate (CP0 to CP3) and carry generate (CG0 to CG3) signals of four groups and produces the higher-order carries along with group CP and CG outputs]
[Figure: A 16-bit adder built from four 4-bit stages (operand inputs P and Q, sum outputs F) operated with carry look-ahead]
When two's complement numbers are added, the result may fall outside the range that can be
represented, and this condition has to be detected. The rule for detecting it can be stated as:
“Overflow occurs when there is a carry into the sign-bit position and no carry
out of the sign-bit position, and vice-versa.”
The sign change (negation of the subtrahend) is done by complementing the subtrahend bits
and adding a 1 in the least significant bit position. A mode signal, therefore, has to be
provided to instruct the arithmetic unit whether addition or subtraction should take place.
A 9-bit two's complement adder-subtractor is shown in the figure 8.
[Figure 8: A 9-bit two's complement adder-subtractor built from 4-bit adders and '86 exclusive-OR gates; the ADD'/SUB mode signal controls the XOR gates on the B operand]
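The behaviour of the adder-subtractor of figure 8 can be modelled in software. The sketch below is only an illustrative model (the function name and the bit ordering are ours): the ADD'/SUB mode bit is XOR-ed with every subtrahend bit and is also fed in as the initial carry, and overflow is flagged by comparing the carry into the sign-bit position with the carry out of it, exactly as the rule quoted above states.

```python
def add_sub(a_bits, b_bits, sub):
    """Two's complement add/subtract on equal-length bit lists (LSB first).

    sub = 0: result = A + B;  sub = 1: result = A - B
    (B is complemented bit-by-bit via XOR and the mode bit enters as the
    carry-in, the way the '86 gates and CI of figure 8 arrange it).
    Returns (sum_bits, overflow).
    """
    carry = sub
    out = []
    carry_into_sign = 0
    for i, (a, b) in enumerate(zip(a_bits, b_bits)):
        b = b ^ sub                              # complement B when subtracting
        s = a ^ b ^ carry
        if i == len(a_bits) - 1:
            carry_into_sign = carry              # carry entering the sign bit
        carry = (a & b) | (carry & (a | b))
        out.append(s)
    overflow = carry_into_sign ^ carry           # carries differ -> overflow
    return out, overflow

# 4-bit examples: 5 - 3 = 2 (no overflow); 5 + 6 = 11 overflows in 4-bit two's complement.
print(add_sub([1, 0, 1, 0], [1, 1, 0, 0], sub=1))   # ([0, 1, 0, 0], 0)
print(add_sub([1, 0, 1, 0], [0, 1, 1, 0], sub=0))   # ([1, 1, 0, 1], 1)
```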
General Definitions: The IEEE Standard supports the notion of bubble-to-bubble logic design.
Some important terms encountered in connection with it are explained in the following.
Logic State: One of two possible abstract states that may be taken on by a logic (binary)
variable.
0-State: The logic state represented by the binary number 0 and usually standing for Not
Asserted state of a logic variable.
1-State: The logic state represented by the binary number 1 and usually standing for
Asserted state of a logic variable.
External Logic State: A logic state assumed to exist outside a symbol outline: (1) on an
input line prior to any external qualifying symbol at that input, or (2) on an output line
beyond any external qualifying symbol at that output.
Internal Logic State: A logic state assumed to exist inside a symbol outline at an input
or an output.
Qualifying Symbol: It is graphics or text added to the basic outline of a device logic
symbol to describe the physical or logical characteristics of the device. The “external
qualifying symbol” mentioned above is typically an inversion bubble, which denotes a
“negated” input or output, for which the external 0-state corresponds to the internal 1-
state. “Internal 1-state” may be interpreted as the corresponding signal being asserted.
Similarly, “internal 0-state” may be interpreted as the corresponding signal not being
asserted.
[Figure: General form of a symbol outline with input lines on the left and output lines on the right]
The general qualifying symbols, placed inside the symbol outline to denote the function of
the element, include the following:
≥1 OR
& AND
=1 Exclusive OR
= All inputs at the same state
2k Even number of inputs Asserted
2k+1 Odd number of inputs Asserted
Buffer
Schmitt Trigger
X/Y Code Converter
MUX Multiplexer
DX Demultiplexer
Σ Adder
P-Q Subtractor
CPG Carry look-ahead generator
ALU Arithmetic logic unit
COMP Magnitude comparator
The input and output lines will have qualifying symbols inside the symbol outlines. These
qualifying symbols are illustrated in the following.
∇ 3-State output
EN Enable input; when at its internal 1-state, all outputs are enabled.
When at its internal 0-state, all outputs are at the internal 0-state.
0 … m } Binary Grouping. m is the highest power of 2.
CT=9 Content equals 9 (9 is taken here as an example).
Internal connection
[Figure: Examples illustrating G (AND) dependency and its equivalent gate circuit]
When a Gm input or output (m is a number) stands at its internal 1-state (Asserted) all
the inputs and outputs affected by this Gm stand at their normally defined internal logic
states.
When Gm input or Gm output stands at its internal 0-state (Not Asserted) all the inputs
and outputs affected by it stand at their 0-state (Not Asserted).
Conventions for the Application of Dependency Notation in General: The rules for applying
dependency relationships in general follow the same pattern as was illustrated for G-
dependency. Application of dependency notation is accomplished by:
Labelling the input (or output) affecting other inputs or outputs with a letter symbol
indicating the relationship involved followed by an identifying number, arbitrarily chosen.
Labelling each input or output affected by that affecting input (or output) with that same
number.
If it is the complement of the internal logic state of the affecting input or output that does
the affecting, then a bar is placed over the identifying numbers at the affected inputs or
outputs. If the affected input or output requires a label to denote its function this label will
be prefixed by the identifying number of affecting input. If an input or output is affected
by more than one affecting input, the identifying numbers of each of the affecting inputs
will appear in the label of the affected one, separated by commas. The left-to-right
sequence of these numbers is the same as the sequence of the affecting relationships.
If the labels denoting the functions of affected inputs or outputs must be numbers, the
identifying numbers to be associated with both affecting inputs and affected inputs or
outputs will be replaced by another character selected to avoid ambiguity.
Control Dependency (C Dependency): The symbol denoting control dependency is C. When a Cm
input or Cm output stands at its internal 1-state (Asserted), the inputs affected by this
Cm input or Cm output have their normally defined
effect on the function of the element. When a Cm input or Cm output stands at its internal
0-state (Not asserted), the inputs affected by Cm are disabled and have no effect on the
function of the element. This dependency is explained through examples in the figure 10.
Reset and Set Dependencies (R and S Dependencies): The symbols denoting these dependencies
are R and S. When an Rm input stands at its internal 1-state (Asserted), the outputs affected
by this Rm input will take on the internal logic states they normally would take on for the
combination S = 0, R = 1, regardless of the state of the S input. When an Rm input stands
at its internal 0-state it has no effect. Similarly, when an Sm input stands at its internal
1-state (Asserted), the affected outputs take on the internal logic states they normally
would take on for the combination S = 1, R = 0, regardless of the state of the R input; at
its internal 0-state an Sm input has no effect. The R and S dependencies are illustrated in
the figure 11.
a b c d
0 0 No change
0 1 0 1
1 0 1 0
1 1 Not specified

a b c d
0 0 No change
0 1 0 1
1 0 1 0
1 1 1 0

a b c d
0 0 No change
0 1 0 1
1 0 1 0
1 1 0 1

a b c d
0 0 No change
0 1 0 1
1 0 1 0
1 1 1 1

a b c d
0 0 No change
0 1 0 1
1 0 1 0
1 1 0 0
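The five tables describe elements that differ only in their response when both inputs are asserted at the same time. Assuming that columns a and b are the two inputs (of the S and R kind) and columns c and d the two outputs, which the examples of figure 11 would confirm, the five behaviours can be modelled with a small sketch (the names used here are hypothetical):

```python
# Hypothetical model of the five elements of figure 11: each differs only in the
# outputs produced when both inputs (a, b) are asserted simultaneously.
BOTH_ASSERTED = {
    1: None,        # outputs not specified
    2: (1, 0),
    3: (0, 1),
    4: (1, 1),
    5: (0, 0),
}

def next_outputs(variant, a, b, current):
    """Return the (c, d) outputs for inputs (a, b); 'current' is kept when a = b = 0."""
    if (a, b) == (0, 0):
        return current                 # no change
    if (a, b) == (0, 1):
        return (0, 1)
    if (a, b) == (1, 0):
        return (1, 0)
    return BOTH_ASSERTED[variant]      # a = b = 1: variant-specific (or unspecified)

print(next_outputs(2, 1, 1, (0, 1)))   # (1, 0)
print(next_outputs(1, 0, 0, (1, 0)))   # (1, 0): no change
```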
Enable Dependency (EN Dependency): The symbol denoting enable dependency is EN.
Enable dependency is used to indicate an Enable input that does not necessarily affect all
outputs of an element. It can also be used when one or more inputs of an element are
affected. When this input stands at its internal 1-state (Asserted), all the affected inputs
and outputs stand at their normally defined internal logic states and have their normally
defined effect on elements or distributed functions that may be connected to the outputs,
provided no other inputs or outputs have an overriding and contradicting effect. When this
input stands at its internal 0 state (Not Asserted), all the affected open-circuit outputs
stand at their external high-impedance states, all 3-state outputs stand at their normally
defined internal logic states and at their external high-impedance states, and other types
of outputs stand at their internal 0-states. The nature of EN dependency is illustrated in
the figure 12.
[Figure 12: Illustration of EN dependency]
Mode Dependency (M Dependency): The symbol denoting mode dependency is M. It is used when
the effect of particular inputs or outputs of an element depends on the mode in which the
element is operating. The circuit in the figure 13 has two inputs, B and C, that control
which one of four modes (0, 1, 2, or 3) will exist at any time. Inputs D, E and F are
D-inputs subject to dynamic control (clocking) by the A input. The numbers 1 and 2 are in
the series chosen to indicate the modes; inputs E and F are enabled only in mode 1
(parallel loading) and input D is enabled only in mode 2 (serial loading). Note that input
A has three functions. It is the clock for entering data. In mode 2, it causes right
shifting of data, which means a shift away from the control block. In mode 3, it causes
the contents of the register to be incremented by one count.
[Figure: register symbol with input A labelled C4/2->/3+, inputs B and C forming a binary group that selects the mode (M0 to M3), and data inputs D = 2,4D, E = 1,4D, F = 1,4D]
FIG.13: Illustration of M dependency.
When an Mm input or Mm output stands at its internal 0-state, at each affected output any
set of labels containing the identifying number of that Mm input or Mm output has no
effect and is to be ignored. When an output has several different sets of labels separated
by slashes (e.g., C4/2->/3+), only those sets in which the identifying number of this Mm
input or Mm output appears are to be ignored. In the figure 14, mode 1 exists when the
A input stands at its internal 1-state. The delayed output symbol is effective only in mode
1 (when input A = 1), in which case the device functions as a pulse-triggered flip-flop
(master-slave flip-flop). When the input A = 0, the device is not in mode 1, so the delayed
output symbol has no effect and the device functions as a transparent latch.
[Figure 14: Examples of M dependency applied to outputs; the delayed output symbol of the latch is effective only in mode 1]
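The difference that the delayed output symbol makes, namely pulse-triggered (master-slave) behaviour in mode 1 as against transparent-latch behaviour otherwise, can be illustrated with a small behavioural sketch (ours, not part of the standard): the latch output follows D whenever the clock is high, whereas the master-slave output changes only when the clock returns low, taking the value the master captured during the pulse.

```python
class TransparentLatch:
    """Output follows D whenever the clock/enable is high."""
    def __init__(self):
        self.q = 0
    def step(self, clk, d):
        if clk:
            self.q = d
        return self.q

class MasterSlaveFF:
    """Pulse-triggered: the master samples D while clk is high,
    and the slave (the visible output) updates when clk goes low."""
    def __init__(self):
        self.master = 0
        self.q = 0
        self.prev_clk = 0
    def step(self, clk, d):
        if clk:
            self.master = d           # master follows D during the pulse
        elif self.prev_clk and not clk:
            self.q = self.master      # output postponed to the trailing edge
        self.prev_clk = clk
        return self.q

latch, msff = TransparentLatch(), MasterSlaveFF()
# D changes while the clock is high: the latch output follows it immediately,
# whereas the master-slave output changes only after the clock falls.
for clk, d in [(0, 0), (1, 1), (1, 0), (0, 0)]:
    print(clk, d, latch.step(clk, d), msff.step(clk, d))
```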
The gate shown in the figure 18(a) is not commercially available. However, a minor
modification will make the symbol correspond to gates that are actually available. This is
shown in the figure 18(b); each of the gates shown there corresponds to a commercially
available gate.
FIG. 18: Redrawing of a logic symbol to correspond to the actual gates used.
The following rules should be observed while drawing circuit diagrams:
1. External inputs should enter the left hand side of the diagram. Outputs from the
circuit should be shown on the right hand side.
2. Use polarised mnemonic notation and all the standard symbols thereof.
3. All signals should have properly defined mnemonics with their assertion levels
indicated.
4. All gates represented in the circuit diagram must correspond to the actual
hardware elements used, but the choice of operator symbol (NAND, NOR, OR,
EX-OR, etc.) for gates must be indicative of the function they perform.
5. Each operator symbol should be given a number to correspond with the actual IC
used. These are designated as U1, U2, etc. A particular number, say U2, may be
given to more than one logic operator, as an IC may have more than one functional
element.
6. The pin numbers corresponding to the specific IC used should be shown near the
inputs or the outputs of the logic operator, or outside the symbol outline in the
case of MSIs and LSIs.
7. The specific ICs used, along with their pin numbers for VCC and GND (VBB), should be
shown at a convenient place on the circuit diagram.
8. If the circuit diagram is large and is to be drawn on a large sheet, zonal co-
ordinates should be incorporated.
Consider the circuit diagram shown in the figure 19. It is redrawn as per the rules stated
above and shown in the figure 20.