
21AD1301 INTERNALS OF COMPUTER SYSTEMS

UNIT I DIGITAL FUNDAMENTALS

Digital Systems – Features of Digital Systems – Binary Numbers – Octal – Hexadecimal Conversions – Signed Binary Numbers – Complements – Logic Gates – Boolean Algebra – Standard Forms – NAND – NOR Implementation – K-Maps – Quine-McCluskey Method

***********************************************************************************

DIGITAL SYSTEM

A digital system is an interconnection of digital modules; it is a system that manipulates discrete elements of information represented internally in binary form.
Nowadays, digital systems are used in a wide variety of industrial and consumer products such as automated industrial machinery, pocket calculators, microprocessors, digital computers, digital watches, TV games, signal processing and so on.

Characteristics of Digital systems


 Digital systems manipulate discrete elements of information.
 Discrete elements are nothing but digits, such as the 10 decimal digits or the 26 letters of the alphabet, and so on.
 Digital systems use physical quantities called signals to represent discrete elements.
 In digital systems, the signals have two discrete values and are therefore said to be binary.
 A signal in a digital system represents one binary digit, called a bit. A bit has the value 0 or 1.

Analog systems vs Digital systems


Analog systems process information that varies continuously, i.e., they process time-varying signals that can take on any value across a continuous range of voltage, current or any other physical parameter.
Digital systems use digital circuits that process digital signals, which can take only the value 0 or 1 in the binary system.
Advantages of Digital systems over Analog systems
1. Ease of programmability
Digital systems can be used for different applications by simply changing the program, without additional changes in hardware.
2. Reduction in cost of hardware
The cost of hardware is reduced by the use of digital components, made possible by advances in IC technology. With ICs, the number of components that can be placed in a given area of silicon is increased, which helps in cost reduction.
3. High speed
Digital processing of data ensures a high speed of operation, made possible by advances in digital signal processing.
4. High reliability
Digital systems are highly reliable; one of the reasons is the use of error-correcting codes.
5. Ease of design
The design of digital systems, which uses Boolean algebra and other digital techniques, is easier than analog design.
6. Results can be reproduced easily
Since the output of digital systems, unlike analog systems, is independent of temperature, noise, humidity and other characteristics of components, the reproducibility of results is higher in digital systems than in analog systems.

Disadvantages of Digital Systems


 Digital systems use more energy than analog circuits to accomplish the same tasks, and thus also produce more heat.
 Digital circuits are often fragile, in that if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change.
 A digital computer manipulates only discrete elements of information, so analog quantities must first be encoded in a binary code.
 Quantization error occurs during analog signal sampling.
Number Systems
A number system is a set of values used to represent quantity: a collection of numbers together with operations on those numbers and the properties that the operations satisfy.
Types of Number System:
1. Decimal Number System
2. Binary Number System
3. Octal Number System
4. Hexadecimal Number System

Decimal Number System


 Decimal number system is a base-10 number system having 10 digits, 0 to 9.
 This means that any numerical quantity can be represented using these 10 digits.
 Decimal number system is also a positional value system: the value of a digit depends on its position.
Eg:
In 734, the value of 7 is 7 hundreds, i.e., 700 or 7 × 10². The weightage of each position can be represented as follows –

Octal Number System


Characteristics
 Uses eight digits: 0, 1, 2, 3, 4, 5, 6, 7.
 Also called the base-8 number system.
 The rightmost position in an octal number represents the 0th power of the base, i.e., 8⁰.
 The leftmost position represents the (n−1)th power of the base, 8ⁿ⁻¹, where n is the number of digits.
 Eg: (12570)₈
Hexadecimal Number System
Characteristics
 Uses 10 digits and 6 letters: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
 The letters represent numbers starting from 10: A = 10, B = 11, C = 12, D = 13, E = 14, F = 15.
 Also called the base-16 number system.
 The rightmost position in a hexadecimal number represents the 0th power of the base, i.e., 16⁰.
 The leftmost position represents the (n−1)th power of the base, 16ⁿ⁻¹, where n is the number of digits.
 Eg: (19FDE)₁₆
Number Base Conversion:
The number base conversion can be done by two methods: the first method is conversion of the decimal system to any other base system, and the second method is conversion of any other radix system to the decimal system. There are 12 types of conversions. They are
1. Conversion from any Radix(r) to Decimal Number System
 Binary to Decimal Number System
 Octal to Decimal Number System
 Hexadecimal to Decimal Number System
 Other Base to Decimal Number System
2. Conversion of Decimal Number System to any Radix (r) System
 Conversion of Decimal integer part to any radix system
 Conversion of Decimal fraction number to any radix system
3. Special Conversion methods
 Binary to Octal Conversion
 Octal to Binary Conversion
 Binary to Hexadecimal Conversion
 Hexadecimal to Binary Conversion
 Octal to Hexadecimal Conversion
 Hexadecimal to Octal Conversion

Conversion from any Radix(r) to Decimal Number System

The conversion of any radix system to the decimal number system is given by the following steps.
Step 1: Write the given number.
Step 2: Write the weights of the different positions.
Step 3: Multiply each digit in the given number with the corresponding weight to obtain a product number.
Step 4: Add all the product numbers to get the decimal equivalent.

Binary to Decimal Number System
Example 1:

Calculate the decimal equivalent of the binary number (10101)₂.

Step	Binary Number	Decimal Number
Step 1	(10101)₂	10101
Step 2	(10101)₂	((1 × 2⁴) + (0 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀
Step 3	(10101)₂	(16 + 0 + 4 + 0 + 1)₁₀
Step 4	(10101)₂	(21)₁₀

Example 2:
Calculate the decimal equivalent of the binary number (10.1)₂.

Step	Binary Number	Decimal Number
Step 1	(10.1)₂	10.1
Step 2	(10.1)₂	((1 × 2¹) + (0 × 2⁰) + (1 × 2⁻¹))₁₀
Step 3	(10.1)₂	(2 + 0 + 0.5)₁₀
Step 4	(10.1)₂	(2.5)₁₀
Octal to Decimal Number System
Example 1:
Calculate the decimal equivalent of the octal number (12570)₈.

Step	Octal Number	Decimal Number
Step 1	(12570)₈	12570
Step 2	(12570)₈	((1 × 8⁴) + (2 × 8³) + (5 × 8²) + (7 × 8¹) + (0 × 8⁰))₁₀
Step 3	(12570)₈	(4096 + 1024 + 320 + 56 + 0)₁₀
Step 4	(12570)₈	(5496)₁₀

Example 2:
Convert the octal number (235.23)₈ to a decimal number.
(235.23)₈ = (2 × 8²) + (3 × 8¹) + (5 × 8⁰) + (2 × 8⁻¹) + (3 × 8⁻²)
          = (157.296875)₁₀
Hexadecimal to Decimal Number System
Example 1:
Calculate the decimal equivalent of the hexadecimal number (19FDE)₁₆.

Step	Hexadecimal Number	Decimal Number
Step 1	(19FDE)₁₆	((1 × 16⁴) + (9 × 16³) + (F × 16²) + (D × 16¹) + (E × 16⁰))₁₀
Step 2	(19FDE)₁₆	((1 × 16⁴) + (9 × 16³) + (15 × 16²) + (13 × 16¹) + (14 × 16⁰))₁₀
Step 3	(19FDE)₁₆	(65536 + 36864 + 3840 + 208 + 14)₁₀
Step 4	(19FDE)₁₆	(106462)₁₀

Example 2:
Convert the hexadecimal number (ABC.3C)₁₆ to a decimal number.
(ABC.3C)₁₆ = (A × 16²) + (B × 16¹) + (C × 16⁰) + (3 × 16⁻¹) + (C × 16⁻²)
           = (10 × 16²) + (11 × 16¹) + (12 × 16⁰) + (3 × 16⁻¹) + (12 × 16⁻²)
           = (2748.234)₁₀
Other Base to Decimal Number System
Example:
Convert the radix-5 number (4310)₅ to a decimal number.
(4310)₅ = (4 × 5³) + (3 × 5²) + (1 × 5¹) + (0 × 5⁰)
        = (580)₁₀
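The four-step procedure above can be sketched in Python (an illustrative sketch; the function name and digit table are our own, not from the text):

```python
# Illustrative sketch of Steps 1-4: sum each digit times its positional weight.

DIGITS = "0123456789ABCDEF"

def to_decimal(number, radix):
    """Convert a number string in the given radix (2..16) to decimal."""
    if "." in number:
        integer_part, fraction_part = number.split(".")
    else:
        integer_part, fraction_part = number, ""
    value = 0.0
    # Integer digits carry weights radix^(n-1), ..., radix^1, radix^0.
    for position, digit in enumerate(reversed(integer_part)):
        value += DIGITS.index(digit) * radix ** position
    # Fractional digits carry weights radix^-1, radix^-2, ...
    for position, digit in enumerate(fraction_part, start=1):
        value += DIGITS.index(digit) * radix ** -position
    return value

print(to_decimal("10101", 2))   # 21.0
print(to_decimal("12570", 8))   # 5496.0
print(to_decimal("19FDE", 16))  # 106462.0
print(to_decimal("4310", 5))    # 580.0
```

The same helper also handles the fractional examples, e.g. `to_decimal("10.1", 2)` gives 2.5.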

Conversion of Decimal Number System to any Radix (r) System

Conversion of Decimal Integer part to any radix system


The conversion of the integer decimal part to any radix is given as follows.
Step 1: Repeatedly divide the integer part of the decimal number by the base until it cannot be divided further.
Step 2: The remainders, taken in reverse order, form the new base number.
Step 3: The first remainder is the least significant digit (LSD) and the last remainder is the most significant digit (MSD).

Example 1:
Convert (139)₁₀ to (?)₂

(139)₁₀ = (10001011)₂

Example 2:
Convert (2705)₁₀ to (?)₈

(2705)₁₀ = (5221)₈

Example 3:
Convert (2705)₁₀ to (?)₁₆

(2705)₁₀ = (A91)₁₆
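The repeated-division procedure above can be sketched as follows (illustrative; the names are our own):

```python
# Illustrative sketch: repeatedly divide by the radix and collect remainders.

DIGITS = "0123456789ABCDEF"

def from_decimal(value, radix):
    """Convert a non-negative decimal integer to a string in the given radix."""
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        value, remainder = divmod(value, radix)
        remainders.append(DIGITS[remainder])  # first remainder is the LSD
    return "".join(reversed(remainders))      # last remainder is the MSD

print(from_decimal(139, 2))    # 10001011
print(from_decimal(2705, 8))   # 5221
print(from_decimal(2705, 16))  # A91
```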

Conversion of Decimal fraction number to any radix system


The conversion of a fractional decimal number to any radix system is given as follows.

Step 1: Multiply the number to be converted by the radix.
Step 2: The product has an integer part and a fractional part.
Step 3: The integer part is taken as the carry.
Step 4: The fractional part from Step 2 is multiplied by the base.
Step 5: Steps 3 and 4 are repeated until the fractional part becomes 0 (or the desired precision is reached).
Step 6: The carries, written downwards, give the required number.

Example 1:
Convert the decimal number (0.39)₁₀ to a binary number.

0.39 × 2 = 0.78 --> with a carry of 0
0.78 × 2 = 1.56 --> with a carry of 1
0.56 × 2 = 1.12 --> with a carry of 1
Ans
(0.39)₁₀ ≈ (0.011)₂

Example 2:
Convert the decimal number (0.39)₁₀ to an octal number.
0.39 × 8 = 3.12 --> with a carry of 3
0.12 × 8 = 0.96 --> with a carry of 0
0.96 × 8 = 7.68 --> with a carry of 7
0.68 × 8 = 5.44 --> with a carry of 5
Ans
(0.39)₁₀ ≈ (0.3075)₈
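The repeated-multiplication procedure can be sketched as below (illustrative; the function name and the precision cutoff are our own additions, since a fraction such as 0.39 never terminates exactly):

```python
# Illustrative sketch: multiply by the radix, take the integer part as the
# carry, and repeat on the remaining fraction.

def fraction_from_decimal(fraction, radix, places):
    """Convert a decimal fraction (0 <= fraction < 1) to `places` digits."""
    digits = []
    for _ in range(places):
        fraction *= radix
        carry = int(fraction)        # the integer part is the carry
        digits.append("0123456789ABCDEF"[carry])
        fraction -= carry            # keep only the fractional part
        if fraction == 0:            # terminated exactly
            break
    return "0." + "".join(digits)

print(fraction_from_decimal(0.39, 2, 3))  # 0.011
print(fraction_from_decimal(0.39, 8, 4))  # 0.3075
```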

Special Conversion method.

Binary to Octal Conversion


Step 1: Group the binary number into sets of three digits, starting from the binary point (add leading zeros if needed).
Step 2: Each group is represented by an octal value.
Step 3: All the octal values together give the equivalent octal number.
Example:
Convert (10001011)₂ to an octal number.
(10001011)₂ = 010 001 011 = (213)₈

Octal to Binary Conversion


Step 1: Each octal digit in the number is represented by three bits of binary.
Step 2: All the binary values together give the equivalent binary number.
Example:
Convert (213)₈ to a binary number.
(213)₈ = 010 001 011 = (10001011)₂

Binary to Hexadecimal Conversion


Step 1: Group the binary number into sets of four digits, starting from the binary point (add leading zeros if needed).
Step 2: Each group is represented by a hexadecimal value.
Step 3: All the hexadecimal values together give the equivalent hexadecimal number.
Example:
Convert (10001011)₂ to a hexadecimal number.
(10001011)₂ = 1000 1011 = (8B)₁₆

Hexadecimal to Binary Conversion


Step 1: Each hexadecimal digit in the number is represented by four bits of binary.
Step 2: All the binary values together give the equivalent binary number.
Example:
Convert (8B)₁₆ to a binary number: 8 → 1000, B → 1011, so (8B)₁₆ = (10001011)₂.

Octal to Hexadecimal Conversion

Step 1: First convert each octal value to its equivalent binary number.
Step 2: Then convert the binary number to its equivalent hexadecimal number.
Example:
Convert the octal number (615)8 to hexadecimal number.
Ans:
First Convert Octal to binary number

6->110
1->001
5->101
The binary number is (110001101)₂.

The binary number is converted into a hexadecimal number (by adding leading zeros as needed):
(110001101)₂ = 0001 1000 1101
             =    1    8    D
The hexadecimal equivalent of the octal number (615)₈ is (18D)₁₆.
Hexadecimal to Octal Conversion
Step 1: First convert each hexadecimal digit to its equivalent binary number.
Step 2: Then convert the binary number to its equivalent octal number.
Example:
Convert (25B)16 to octal number.
Ans:
Convert hexadecimal number to binary number

2->0010
5->0101
B->1011
The binary number is (001001011011)₂.

(001001011011)₂ = 001 001 011 011
                =   1   1   3   3

(25B)₁₆ = (1133)₈
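All of the grouping-based conversions above follow one pattern, sketched below (illustrative; the helper names are our own):

```python
# Illustrative sketch: binary <-> octal/hex conversion by grouping bits.
# group=3 gives octal, group=4 gives hexadecimal.

def regroup(binary, group):
    """Pad on the left, then split a bit string into groups of `group` bits."""
    padded = binary.zfill(-(-len(binary) // group) * group)
    return [padded[i:i + group] for i in range(0, len(padded), group)]

def binary_to_base(binary, group):
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in regroup(binary, group))

def base_to_binary(digits, group):
    return "".join(format(int(d, 16), "0{}b".format(group)) for d in digits)

print(binary_to_base("10001011", 3))                # 213 (octal)
print(binary_to_base("10001011", 4))                # 8B  (hexadecimal)
print(binary_to_base(base_to_binary("615", 3), 4))  # 18D (octal -> hex)
print(binary_to_base(base_to_binary("25B", 4), 3))  # 1133 (hex -> octal)
```

The octal-to-hex and hex-to-octal cases simply chain the two helpers through binary, exactly as in the worked examples.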

One's (1's) Complement:

The 1's complement is defined only for binary numbers. It is obtained by changing each '0' to '1' and each '1' to '0'.

Example: The 1's complement of (101101)₂ is (010010)₂.

Nine's (9's) Complement:

The 9's complement is defined only for decimal numbers. It is obtained by subtracting each digit from 9.

Example: The 9's complement of 456 is 999 − 456 = 543.

Two's (2's) Complement:
The 2's complement is defined only for binary numbers. First take the 1's complement, then add 1 to the LSB.

Example: The 2's complement of (101100)₂ is 010011 + 1 = (010100)₂.

Ten's (10's) Complement:

The 10's complement is defined only for decimal numbers. First take the 9's complement, then add 1 to the least significant digit.

Example: The 10's complement of 456 is 543 + 1 = 544.
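The four complements can be sketched as small Python helpers (illustrative; the function names are our own):

```python
# Illustrative sketches of the four complements defined above.

def ones_complement(bits):
    """Flip every bit: '0' -> '1', '1' -> '0'."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """1's complement plus 1, kept to the same bit width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (2 ** width), "0{}b".format(width))

def nines_complement(digits):
    """Subtract each decimal digit from 9."""
    return "".join(str(9 - int(d)) for d in digits)

def tens_complement(digits):
    """9's complement plus 1, kept to the same number of digits."""
    width = len(digits)
    return str((int(nines_complement(digits)) + 1) % 10 ** width).zfill(width)

print(ones_complement("10011"))  # 01100
print(twos_complement("10011"))  # 01101
print(nines_complement("456"))   # 543
print(tens_complement("456"))    # 544
```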

Subtracting using 1's complement

Case 1: For subtracting a smaller number from a larger number, the 1's complement method is as follows:

1. Determine the 1's complement of the smaller number.

2. Add the 1's complement to the larger number.

3. Remove the final carry and add it to the result. This is called the end-around carry.

Example:
11001 - 10011

Result from Step 1: 01100

Result from Step 2: 100101

Result from Step 3: 00110

To verify, note that 25 - 19 = 6.

Case 2: For subtracting a larger number from a smaller number, the 1's complement method is as follows:

1. Determine the 1's complement of the larger number.

2. Add the 1's complement to the smaller number.

3. There is no carry. The result has the opposite sign from the answer and is the 1's complement of the answer.

4. Change the sign and take the 1's complement of the result to get the final answer.

Example:

1001 - 1101

Result from Step 1: 0010

Result from Step 2: 1011

Result from Step 4: -0100

To verify, note that 9 - 13 = -4.

Subtracting using 2's complement

Case 1: For subtracting a smaller number from a larger number, the 2's complement method is as follows:

1. Determine the 2's complement of the smaller number.

2. Add the 2's complement to the larger number.

3. Discard the final carry (there is always one in this case).

Example:

11001 - 10011

Result from Step 1: 01101

Result from Step 2: 100110

Result from Step 3: 00110

Again, to verify, note that 25 - 19 = 6.

Case 2: For subtracting a larger number from a smaller number, the 2's complement method is as follows:

1. Determine the 2's complement of the larger number.

2. Add the 2's complement to the smaller number.

3. There is no carry from the left-most column. The result is in 2's complement form and is negative.

4. Change the sign and take the 2's complement of the result to get the final answer.

Example:

1001 - 1101

Result from Step 1: 0011

Result from Step 2: 1100

Result from Step 4: -0100

Again, to verify, note that 9 - 13 = -4.
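Both cases of 2's-complement subtraction can be sketched in one function (illustrative; the helper names are our own):

```python
# Illustrative sketch of the two 2's-complement subtraction cases above,
# for equal-width bit strings.

def twos_complement(bits):
    flipped = "".join("1" if b == "0" else "0" for b in bits)
    width = len(bits)
    return format((int(flipped, 2) + 1) % 2 ** width, "0{}b".format(width))

def subtract(minuend, subtrahend):
    """Add the 2's complement of the subtrahend, then apply Case 1 or Case 2."""
    width = len(minuend)
    total = int(minuend, 2) + int(twos_complement(subtrahend), 2)
    if total >= 2 ** width:                            # Case 1: discard the carry
        return format(total - 2 ** width, "0{}b".format(width))
    # Case 2: no carry -> the result is negative and in 2's complement form,
    # so re-complement it and attach a minus sign.
    return "-" + twos_complement(format(total, "0{}b".format(width)))

print(subtract("11001", "10011"))  # 00110  (25 - 19 = 6)
print(subtract("1001", "1101"))    # -0100  (9 - 13 = -4)
```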

BASIC LOGIC GATES:

Logic gates are electronic circuits that can be used to implement the most elementary logic expressions, also known as Boolean expressions. The logic gate is the most basic building block of combinational logic.
There are three basic logic gates, namely the OR gate, the AND gate and the NOT gate. Other logic gates that are derived from these basic gates are the NAND gate, the NOR gate, the EXCLUSIVE-OR gate and the EXCLUSIVE-NOR gate.
GATE (IC)	OPERATION

NOT (7404)	NOT gate (inversion): produces an inverted output pulse for a given input pulse.

AND (7408)	AND gate performs logical multiplication. The output is HIGH only when all the inputs are HIGH; when any of the inputs is LOW, the output is LOW.

OR (7432)	OR gate performs logical addition. It produces a HIGH output when any of the inputs is HIGH; the output is LOW only when all inputs are LOW.

NAND (7400)	A universal gate. When any of the inputs is LOW, the output is HIGH; a LOW output occurs only when all inputs are HIGH.

NOR (7402)	A universal gate. A LOW output occurs when any of its inputs is HIGH; when all its inputs are LOW, the output is HIGH.

EX-OR (7486)	The output is HIGH only when an odd number of inputs is HIGH.

EX-NOR	The output is HIGH only when an even number of inputs is HIGH, or when all inputs are zero.
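The behaviour of each gate in the table can be checked with one-line Python functions (an illustrative sketch; the function names are our own and model only the logic, not the 74xx ICs themselves):

```python
# Illustrative sketch: each basic gate as a function on bits 0/1.

def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)
def XOR(a, b):  return a ^ b          # HIGH for an odd number of HIGH inputs
def XNOR(a, b): return 1 - (a ^ b)    # HIGH for an even number of HIGH inputs

# Print the combined truth table for all two-input gates.
print("a b AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b),
              NOR(a, b), XOR(a, b), XNOR(a, b))
```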
BOOLEAN ALGEBRA

INTRODUCTION:

In 1854, George Boole, an English mathematician, proposed an algebra for symbolically representing problems in logic so that they may be analyzed mathematically. The mathematical systems founded upon the work of Boole are called Boolean algebra in his honor.
The application of Boolean algebra to certain engineering problems was introduced in 1938 by C.E. Shannon.
For the formal definition of Boolean algebra, we shall employ the postulates formulated by E.V. Huntington in 1904.

Fundamental postulates of Boolean algebra:

The postulates of a mathematical system form the basic assumptions from which it is possible to deduce the theorems, laws and properties of the system.
The most common postulates used to formulate various structures are:
i) Closure:
A set S is closed with respect to a binary operator if, for every pair of elements of S, the binary operator specifies a rule for obtaining a unique element of S.
The result of each operation with the operator (+) or (·) is either 1 or 0, and 1, 0 Є B.

ii) Identity element:

A set S is said to have an identity element with respect to a binary operation * on S if there exists an element e Є S with the property

e * x = x * e = x

Eg: 0 + 0 = 0, 0 + 1 = 1 + 0 = 1	a) x + 0 = x
    1 · 1 = 1, 1 · 0 = 0 · 1 = 0	b) x · 1 = x

iii) Commutative law:


A binary operator * on a set S is said to be commutative if,

x*y=y*x for all x, y Є S

Eg: 0+ 1 = 1+ 0 = 1 a) x+ y= y+ x
0.1=1.0=0 b) x. y= y. x
iv) Distributive law:
If (+) and (·) are two binary operations on a set S, (·) is said to be distributive over (+) whenever

x · (y + z) = (x · y) + (x · z)

Similarly, (+) is said to be distributive over (·) whenever

x + (y · z) = (x + y) · (x + z)


v) Inverse:

A set S having the identity element e with respect to a binary operator * is said to have an inverse whenever, for every x Є S, there exists an element x' Є S such that

a) x + x' = 1, since 0 + 0' = 0 + 1 = 1 and 1 + 1' = 1 + 0 = 1
b) x · x' = 0, since 0 · 0' = 0 · 1 = 0 and 1 · 1' = 1 · 0 = 0

Summary:

Postulates of Boolean algebra:

POSTULATES (a) (b)


Postulate 2 (Identity) x+0=x x.1=x
Postulate 3 (Commutative) x+ y = y+ x x . y = y. x
Postulate 4 (Distributive) x (y+ z) = xy+ xz x+ yz = (x+ y). (x+ z)
Postulate 5 (Inverse) x+x’ = 1 x. x’ = 0

Basic theorem and properties of Boolean algebra:


Basic Theorems:

The theorems, like the postulates, are listed in pairs; each relation is the dual of the one paired with it. The postulates are basic axioms of the algebraic structure and need no proof. The theorems must be proven from the postulates. The proofs of the theorems with one variable are presented below. At the right is listed the number of the postulate that justifies each step of the proof.
1) a) x+ x = x
x+ x = (x+ x) . 1------------------- by postulate 2(b) [ x. 1 = x ]
= (x+ x). (x+ x’)------------------- 5(a) [ x+ x’ = 1]
= x+ xx’------------------- 4(b) [ x+yz = (x+y)(x+z)]
= x+ 0------------------- 5(b) [ x. x’ = 0 ]
= x------------------- 2(a) [ x+0 = x ]

b) x. x = x
x. x = (x. x) + 0------------------- by postulate 2(a) [ x+ 0 = x ]
= (x. x) + (x. x’)------------------- 5(b) [ x. x’ = 0]
= x ( x+ x’)------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x (1)------------------- 5(a) [ x+ x’ = 1 ]
= x------------------- 2(b) [ x.1 = x ]

2) a) x+ 1 = 1

x+ 1 = 1 . (x+ 1) ------------------- by postulate 2(b) [ x. 1 = x ]


= (x+ x’). (x+ 1) ------------------- 5(a) [ x+ x’ = 1]
= x+ x’.1 ------------------- 4(b) [ x+yz = (x+y)(x+z)]
= x+ x’ ------------------- 2(b) [ x. 1 = x ]
=1 ------------------- 5(a) [ x+ x’= 1]

b) x .0 = 0

3) (x’)’ = x
From postulate 5, we have x+ x’ = 1 and x. x’ = 0, which defines thecomplement of
x. The complement of x’ is x and is also (x’)’.
Therefore, since the complement is unique,
(x’)’ = x.

4) Absorption Theorem:
a) x+ xy = x
x+ xy = x. 1 + xy ------------------- by postulate 2(b) [ x. 1 = x ]
= x (1+ y) ------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x (1) ------------------- by theorem 2(a) [x+ 1 = 1]
= x. ------------------- by postulate 2(b) [x. 1 = x]
b) x. (x+ y) = x
x. (x+ y) = x. x+ x. y------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x + x.y------------------- by theorem 1(b) [x. x = x]
= x.------------------- by theorem 4(a) [x+ xy = x]

c) x+ x’y = x+ y
x+ x’y = x+ xy+ x’y------------------- by theorem 4(a) [x+ xy = x]
= x+ y (x+ x’)------------------- by postulate 4(a) [ x (y+z) = (xy)+ (xz)]
= x+ y (1)------------------- 5(a) [x+ x’ = 1]
= x+ y------------------- 2(b) [x. 1= x]

d) x. (x’+y) = xy
x. (x’+y) = x.x’+ xy------------------- by postulate 4(a) [ x (y+z) = (xy)+ (xz)]
= 0+ xy------------------- 5(b) [x. x’ = 0]
= xy.------------------- 2(a) [x+ 0= x]

Properties of Boolean algebra:

1. Commutative property:

Boolean addition is commutative, given by

x+ y = y+ x

According to this property, the order of the OR operation conducted on the variables makes no difference.
Boolean algebra is also commutative over multiplication, given by

x. y = y. x

This means that the order of the AND operation conducted on the variables makes no difference.
2. Associative property:
The associative property of addition is given by

x+ (y+ z) = (x+ y)+ z

The OR operation of several variables gives the same result regardless of the grouping of the variables.
The associative law of multiplication is given by

x (yz) = (xy) z

It makes no difference in what order the variables are grouped during the AND operation of several variables.
3. Distributive property:

Boolean addition is distributive over Boolean multiplication, given by

x+ yz = (x+ y) (x+ z)

Boolean multiplication is distributive over Boolean addition, given by

x (y+ z) = xy+ xz
4. Duality:

It states that every algebraic expression deducible from the postulates of Boolean algebra remains valid if the operators and identity elements are interchanged.
To obtain the dual of an algebraic expression, we simply interchange the OR and AND operators and replace 1's by 0's and 0's by 1's.
Eg: the dual of x+ x' = 1 is x. x' = 0.
Duality is a very important property of Boolean algebra.

Summary:

Theorems of Boolean algebra:

THEOREMS (a) (b)


1 Idempotency x+x=x x.x=x
2 x+1=1 x.0=0
3 Involution (x’)’ = x
4 Absorption x+ xy = x x (x+ y) = x
x+ x’y = x+ y x. (x’+ y)= xy
5 Associative x+(y+ z)= (x+ y)+ z x (yz) = (xy) z
6 DeMorgan’s Theorem (x+ y)’= x’. y’ (x. y)’= x’+ y’

DeMorgan’s Theorems:

Two theorems that are an important part of Boolean algebra were proposed by DeMorgan.
The first theorem states that the complement of a product is equal to the sum of the complements:

(x. y)' = x'+ y'

The second theorem states that the complement of a sum is equal to the product of the complements:

(x+ y)' = x'. y'
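Both theorems can be verified exhaustively over all input combinations (a quick illustrative truth-table check):

```python
# Illustrative sketch: verify DeMorgan's theorems over every input row.

from itertools import product

for x, y in product((0, 1), repeat=2):
    # First theorem: (x . y)' = x' + y'
    assert 1 - (x & y) == (1 - x) | (1 - y)
    # Second theorem: (x + y)' = x' . y'
    assert 1 - (x | y) == (1 - x) & (1 - y)

print("DeMorgan's theorems hold for all input combinations")
```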

Consensus Theorem:

In the simplification of a Boolean expression of the form AB+ A'C+ BC, the term BC is redundant and can be eliminated to form the equivalent expression AB+ A'C. The theorem used for this simplification is known as the consensus theorem and is represented as

AB+ A'C+ BC = AB+ A'C
The dual form of consensus theorem is stated as,

(A+B) (A’+C) (B+C) = (A+B) (A’+C)

KARNAUGH MAP MINIMIZATION:

The simplification of functions using Boolean laws and theorems becomes complex with an increase in the number of variables and terms. The map method, first proposed by Veitch and slightly improved by Karnaugh, provides a simple, straightforward procedure for the simplification of Boolean functions. The method is called the Veitch diagram or Karnaugh map, which may be regarded as a pictorial representation of a truth table.
The Karnaugh map technique provides a systematic method for simplifying and manipulating Boolean expressions. A K-map is a diagram made up of squares, with each square representing one minterm of the function that is to be minimized. For n variables, a Karnaugh map has 2ⁿ squares. Each square or cell represents one of the minterms. It can be drawn directly from either minterm (sum-of-products) or maxterm (product-of-sums) Boolean expressions.
Two-Variable, Three-Variable and Four-Variable Maps

Karnaugh maps can be used for expressions with two, three, four and five variables. The number of cells in a Karnaugh map is equal to the total number of possible input variable combinations, as is the number of rows in a truth table. For three variables, the number of cells is 2³ = 8. For four variables, the number of cells is 2⁴ = 16.
Product terms are assigned to the cells of a K-map by labeling each row and each column of the map with a variable, with its complement, or with a combination of variables and complements. The figure below shows the way to label the rows and columns of 1-, 2-, 3- and 4-variable maps and the product terms corresponding to each cell.

It is important to note that when we move from one cell to the next along any row, or from one cell to the next along any column, one and only one variable in the product term changes (to a complemented or to an uncomplemented form). Irrespective of the number of variables, the labels along each row and column must conform to a single change. Hence the Gray code is used to label the rows and columns of a K-map, as shown below.
Grouping cells for Simplification:

Grouping is nothing but combining terms in adjacent cells. The simplification is achieved by grouping adjacent 1's or 0's in groups of 2ⁱ, where i = 1, 2, …, n and n is the number of variables. When adjacent 1's are grouped, the result is in sum-of-products form; otherwise the result is in product-of-sums form.

Grouping Two Adjacent 1's: (Pair)

In a Karnaugh map we can group two adjacent 1's. The resultant group is called a pair.
Grouping Four Adjacent 1's: (Quad)
In a Karnaugh map we can group four adjacent 1's. The resultant group is called a quad. Fig (a) shows four 1's that are horizontally adjacent and Fig (b) shows four that are vertically adjacent. Fig (c) contains four 1's in a square, and they are considered adjacent to each other.

Examples of Quads

The four 1's in Fig (d) and Fig (e) are also adjacent, as are those in Fig (f), because the top and bottom rows are considered adjacent to each other and the leftmost and rightmost columns are also adjacent to each other.

Grouping Eight Adjacent 1's: (Octet)

In a Karnaugh map we can group eight adjacent 1's. The resultant group is called an octet.
Simplification of Sum-of-Products Expressions: (Minimal Sums)

The generalized procedure to simplify Boolean expressions is as follows:

1. Plot the K-map and place 1's in those cells corresponding to the 1's in the sum-of-products expression. Place 0's in the other cells.
2. Check the K-map for adjacent 1's and encircle those 1's which are not adjacent to any other 1's. These are called isolated 1's.
3. Check for those 1's which are adjacent to only one other 1 and encircle such pairs.
4. Check for quads and octets of adjacent 1's even if they contain some 1's that have already been encircled. While doing this, make sure that there are a minimum number of groups.
5. Combine any pairs necessary to include any 1's that have not yet been grouped.
6. Form the simplified expression by summing the product terms of all the groups.
Three- Variable Map:

1. Simplify the Boolean expression,
F(x, y, z) = ∑m (3, 4, 6, 7).

Soln:

F = yz+ xz’

2. F(x, y, z) = ∑m (0, 2, 4, 5, 6).

Soln:

F = z’+ xy’

3. F = A'C + A'B + AB'C + BC
Soln:
F = A'C (B+ B') + A'B (C+ C') + AB'C + BC (A+ A')
  = A'BC+ A'B'C + A'BC + A'BC' + AB'C + ABC + A'BC
  = A'BC+ A'B'C + A'BC' + AB'C + ABC
  = m3+ m1+ m2+ m5+ m7

  = ∑ m (1, 2, 3, 5, 7)

F = C + A'B
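Each simplified expression above can be double-checked by brute force: enumerate every input combination and list the minterms where the expression evaluates to 1 (illustrative sketch; the `minterms` helper is our own):

```python
# Illustrative sketch: recover the minterm list of a Boolean function by
# evaluating it on every input row (first argument is the MSB).

from itertools import product

def minterms(f, n):
    """Return the minterm numbers where f evaluates to 1 over n variables."""
    result = []
    for i, bits in enumerate(product((0, 1), repeat=n)):
        if f(*bits):
            result.append(i)
    return result

# Example 1: F(x, y, z) = yz + xz' should give minterms (3, 4, 6, 7)
print(minterms(lambda x, y, z: (y & z) | (x & (1 - z)), 3))  # [3, 4, 6, 7]

# Example 3: F = C + A'B should give minterms (1, 2, 3, 5, 7)
print(minterms(lambda a, b, c: c | ((1 - a) & b), 3))        # [1, 2, 3, 5, 7]
```

Comparing the recovered list against the original ∑m list confirms that a K-map grouping did not drop or add any minterms.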
Four - Variable Map:

1. Simplify the Boolean expression,


Y = A’BC’D’ + A’BC’D + ABC’D’ + ABC’D + AB’C’D + A’B’CD’
Soln:

Therefore,
Y= A’B’CD’+ AC’D+ BC’

2. F (w, x, y, z) = ∑ m(0, 1, 2, 4, 5, 6, 8, 9, 12, 13, 14)

Soln:

Therefore,

F= y’+ w’z’+ xz’

3. F= A’B’C’+ B’CD’+ A’BCD’+ AB’C’
= A’B’C’ (D+ D’) + B’CD’ (A+ A’) + A’BCD’+ AB’C’ (D+ D’)
= A’B’C’D+ A’B’C’D’+ AB’CD’+ A’B’CD’+ A’BCD’+ AB’C’D+ AB’C’D’
= m1+ m0+ m10+ m2+ m6+ m9+ m8

= ∑ m (0, 1, 2, 6, 8, 9, 10)

Therefore,
F= B’D’+ B’C’+ A’CD’.

4. Y= ABCD+ AB’C’D’+ AB’C+ AB


= ABCD+ AB’C’D’+ AB’C (D+D’)+ AB (C+C’) (D+D’)
= ABCD+ AB’C’D’+ AB’CD+ AB’CD’+ (ABC+ ABC’) (D+ D’)
= ABCD+ AB’C’D’+ AB’CD+ AB’CD’+ ABCD+ ABCD’+ ABC’D+ ABC’D’
= ABCD+ AB’C’D’+ AB’CD+ AB’CD’+ ABCD’+ ABC’D+ ABC’D’
= m15+ m8+ m11+ m10+ m14+ m13+ m12

= ∑ m (8, 10, 11, 12, 13, 14, 15)

Therefore,
Y= AB+ AC+ AD’.
5. Y (A, B, C, D)= ∑ m (7, 9, 10, 11, 12, 13, 14, 15)

Therefore,
Y= AB+ AC+ AD+BCD.

6. Y= A’B’C’D+ A’BC’D+ A’BCD+ A’BCD’+ ABC’D+ ABCD+ AB’CD


= m1+ m5+ m7+ m6+ m13+ m15+ m11

= ∑ m (1, 5, 6, 7, 11, 13, 15)

In the above K-map, the cells 5, 7, 13 and 15 can be grouped to form a quad as indicated
by the dotted lines. In order to group the remaining 1’s, four pairs have to be formed. However,
all the four 1’s covered by the quad are also covered by the pairs. So, the quad in the above k-
map is redundant.
Therefore, the simplified expression will be,
Y = A’C’D+ A’BC+ ABD+ ACD.

7. Y= ∑ m (1, 5, 10, 11, 12, 13, 15)

Therefore, Y= A’C’D+ ABC’+ ACD+ AB’C.

8. Y= A’B’CD’+ ABCD’+ AB’CD’+ AB’CD+ AB’C’D’+ ABC’D’+ A’B’CD+

A’B’C’D’
Therefore, Y= AD’+ B’C+ B’D’

9. F (A, B, C, D) = ∑ m (0, 1, 4, 8, 9, 10)

Therefore, F= A’C’D’+ AB’D’+ B’C’.

Don’t care Conditions:


A don’t care minterm is a combination of variables whose logical value is not
specified. When choosing adjacent squares to simplify the function in a map, the don’tcare
minterms may be assumed to be either 0 or 1. When simplifying the function, we can
choose to include each don’t care minterm with either the 1’s or the 0’s, depending on
which combination gives the simplest expression.

1. F (x, y, z) = ∑m (0, 1, 2, 4, 5)+ ∑d (3, 6, 7)

F (x, y, z) = 1

2. F (w, x, y, z) = ∑m (1, 3, 7, 11, 15)+ ∑d (0, 2, 5)

F (w, x, y, z) = w’x’+ yz

3. F (w, x, y, z) = ∑m (0, 7, 8, 9, 10, 12)+ ∑d (2, 5, 13)

F (w, x, y, z) = w’xz+ wy’+ x’z’.


II YR AI & DS

4. F (w, x, y, z) = ∑m (0, 1, 4, 8, 9, 10)+ ∑d (2, 11)


Soln:

F (w, x, y, z) = wx’+ x’y’+ w’y’z’.

5. F( A, B, C, D) = ∑m (0, 6, 8, 13, 14)+ ∑d (2, 4, 10)

Soln:

F( A, B, C, D) = CD'+ B'D'+ ABC'D.

Five-Variable Maps:

A 5-variable K-map requires 2⁵ = 32 cells, but adjacent cells are difficult to identify on a single 32-cell map. Therefore, two 16-cell K-maps are used.
If the variables are A, B, C, D and E, two identical 16-cell maps containing B, C, D and E can be constructed. One map is used for A and the other for A'.
In order to identify the adjacent groupings in the 5-variable map, we must imagine the two maps superimposed on one another, i.e., every cell in one map is adjacent to the corresponding cell in the other map, because only one variable changes between such corresponding cells.


Five- Variable Karnaugh map (Layer Structure)

Thus, every row on one map is adjacent to the corresponding row (the one occupying
the same position) on the other map, as are corresponding columns. Also, the rightmost and
leftmost columns within each 16- cell map are adjacent, just as they are in any 16- cell map,
as are the top and bottom rows.

Typical subcubes on a five-variable map


However, the rightmost column of the map is not adjacent to the leftmost
column of the other map.
1. Simplify the Boolean function

F (A, B, C, D, E) = ∑m (0, 2, 4, 6, 9, 11, 13, 15, 17, 21, 25, 27, 29, 31)
Soln:


F (A, B, C, D, E) = A’B’E’+ BE+ AD’E

2. F (A, B, C, D, E) = ∑m (0, 5, 6, 8, 9, 10, 11, 16, 20, 24, 25, 26, 27, 29, 31)
Soln:

F (A, B, C, D, E) = C’D’E’+ A’B’CD’E+ A’B’CDE’+ AB’D’E’+ ABE+ BC’

3. F (A, B, C, D, E) = ∑m ( 1, 4, 8, 10, 11, 20, 22, 24, 25, 26)+∑d (0, 12, 16, 17)
Soln:

F (A, B, C, D, E) = B’C’D’+ A’D’E’+ BC’E’+ A’BC’D+ AC’D’+ AB’CE’

4. F (A, B, C, D, E) = ∑m (0, 1, 2, 6, 7, 9, 12, 28, 29, 31)

Soln:

F (A, B, C, D, E) = BCD’E’+ ABCE+ A’B’C’E’+ A’C’D’E+ A’B’CD

5. F (x1, x2, x3, x4, x5) = ∑m (2, 3, 6, 7, 11, 12, 13, 14, 15, 23, 28, 29, 30, 31)

Soln:

F (x1, x2, x3, x4, x5) = x2x3+ x3x4x5+ x1’x2’x4+ x1’x3’x4x5

6. F (x1, x2, x3, x4, x5) = ∑m (1, 2, 3, 6, 8, 9, 14, 17, 24, 25, 26, 27, 30, 31 )+ ∑d (4, 5)
Soln:

F (x1, x2, x3, x4, x5) = x2x3’x4’+ x2x3x4x5’+ x3’x4’x5+ x1x2x4+ x1’x2’x3x5’+ x1’x2’x3’x4

CANONICAL AND STANDARD FORMS:

Minterms and Maxterms:

A binary variable may appear either in its normal form (x) or in its complemented form (x'). Consider two binary variables x and y combined with an AND operation. Since each variable may appear in either form, there are four possible combinations:
x'y', x'y, xy' and xy
Each of these four AND terms is called a 'minterm'.
In a similar fashion, when two binary variables x and y combined with an OR

operation, there are four possible combinations:
x’+ y’, x’+ y, x+ y’ and x+ y
Each of these four OR terms is called a ‘maxterm’.

The minterms and maxterms of a 3-variable function can be represented as in the
table below.

Variables        Minterms          Maxterms
x  y  z          mi                Mi
0  0  0          x’y’z’ = m0       x+ y+ z  = M0
0  0  1          x’y’z  = m1       x+ y+ z’ = M1
0  1  0          x’yz’  = m2       x+ y’+ z = M2
0  1  1          x’yz   = m3       x+ y’+ z’= M3
1  0  0          xy’z’  = m4       x’+ y+ z = M4
1  0  1          xy’z   = m5       x’+ y+ z’= M5
1  1  0          xyz’   = m6       x’+ y’+ z= M6
1  1  1          xyz    = m7       x’+ y’+ z’= M7

Sum of Minterm: (Sum of Products)


The logical sum of two or more logical product terms is called a sum of products
expression. It is logically an OR operation of AND operated variables such as:

Sum of Maxterm: (Product of Sums)


A product of sums expression is a logical product of two or more logical sum terms.
It is basically an AND operation of OR operated variables such as,

Canonical Sum of product expression:

If each term in SOP form contains all the literals, then the SOP is known as the standard
(or) canonical SOP form. Each individual term in the standard SOP form is called a minterm;
hence it is also known as the minterm canonical form.
F (A, B, C) = AB’C+ ABC+ ABC’

Steps to convert general SOP to standard SOP form:


1. Find the missing literals in each product term if any.
2. AND each product term having missing literals with the sum (OR) of each missing
literal and its complement.
3. Expand the term by applying distributive law and reorder the literals in the
product term.
4. Reduce the expression by omitting repeated product terms if any.

Obtain the canonical SOP form of the function:


1. Y(A, B) = A+ B
= A. (B+ B’)+ B (A+ A’)
= AB+ AB’+ AB+ A’B
= AB+ AB’+ A’B.

2. Y (A, B, C) = A+ ABC
= A. (B+ B’). (C+ C’)+ ABC
= (AB+ AB’). (C+ C’)+ ABC
= ABC+ ABC’+ AB’C+ AB’C’+ ABC
= ABC+ ABC’+ AB’C+ AB’C’
= m7+ m6+ m5+ m4

= ∑m (4, 5, 6, 7).
3. Y (A, B, C) = A+ BC
= A. (B+ B’). (C+ C’)+(A+ A’). BC
= (AB+ AB’). (C+ C’)+ ABC+ A’BC
= ABC+ ABC’+ AB’C+ AB’C’+ ABC+ A’BC
= ABC+ ABC’+ AB’C+ AB’C’+ A’BC
= m7+ m6+ m5+ m4+ m3

= ∑m (3, 4, 5, 6, 7).

4. Y (A, B, C) = AC+ AB+ BC


= AC (B+ B’)+ AB (C+ C’)+ BC (A+ A’)

= ABC+ AB’C+ ABC+ ABC’+ ABC+ A’BC
= ABC+ AB’C+ ABC’+ A’BC
= ∑m (3, 5, 6, 7).

5. Y (A, B, C, D) = AB+ ACD


= AB (C+ C’) (D+ D’) + ACD (B+ B’)
= (ABC+ ABC’) (D+ D’) + ABCD+ AB’CD
= ABCD+ ABCD’+ ABC’D+ ABC’D’+ ABCD+ AB’CD
= ABCD+ ABCD’+ ABC’D+ ABC’D’+ AB’CD.
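All of these expansions can be cross-checked by enumerating the truth table; a short Python sketch (the helper name `minterms_of` is our own):

```python
# Return the minterm indices (MSB-first variable order) where f evaluates to 1.
def minterms_of(f, n):
    return [m for m in range(2 ** n)
            if f(*(((m >> (n - 1 - i)) & 1) for i in range(n)))]

# Example 3: Y = A + BC  ->  sum of m(3, 4, 5, 6, 7)
print(minterms_of(lambda a, b, c: a or (b and c), 3))
# Example 4: Y = AC + AB + BC  ->  sum of m(3, 5, 6, 7)
print(minterms_of(lambda a, b, c: (a and c) or (a and b) or (b and c), 3))
```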

Canonical Product of sum expression:

If each term in POS form contains all literals then the POS is known as standard (or)
Canonical POS form. Each individual term in standard POS form is called Maxterm canonical
form.
 F (A, B, C) = (A+ B+ C). (A+ B’+ C). (A+ B+ C’)
 F (x, y, z) = (x+ y’+ z’). (x’+ y+ z). (x+ y+ z)

Steps to convert general POS to standard POS form:

1. Find the missing literals in each sum term if any.


2. OR each sum term having missing literals by ANDing the literal and its
complement.
3. Expand the term by applying distributive law and reorder the literals in thesum
term.
4. Reduce the expression by omitting repeated sum terms if any.
Obtain the canonical POS expression of the functions:
1. Y= A+ B’C
= (A+ B’) (A+ C) [ A+ BC = (A+B) (A+C)]

= (A+ B’+ C.C’) (A+ C+ B.B’)


= (A+ B’+C) (A+ B’+C’) (A+ B+ C) (A+ B’+ C)
= (A+ B’+C). (A+ B’+C’). (A+ B+ C)
= M2. M3. M0

= ∏M (0, 2, 3)

2. Y= (A+B) (B+C) (A+C)


= (A+B+ C.C’) (B+ C+ A.A’) (A+C+B.B’)
= (A+B+C) (A+B+C’) (A+B+C) (A’+B+C) (A+B+C) (A+B’+C)
= (A+B+C) (A+B+C’) (A’+B+C) (A+B’+C)
= M0. M1. M4. M2
= ∏M (0, 1, 2, 4)

3. Y= A. (B+ C+ A)
= (A+ B.B’+ C.C’). (A+ B+ C)
= (A+B+C) (A+B+C’) (A+B’+C) (A+ B’+C’) (A+B+C)
= (A+B+C) (A+B+C’) (A+B’+C) (A+ B’+C’)
= M0. M1. M2. M3

= ∏M (0, 1, 2, 3)

4. Y= (A+B’) (B+C) (A+C’)


= (A+B’+C.C’) (B+C+ A.A’) (A+C’+ B.B’)
= (A+B’+C) (A+B’+C’) (A+B+C) (A’+B+C) (A+B+C’) (A+B’+C’)
= (A+B’+C) (A+B’+C’) (A+B+C) (A’+B+C) (A+B+C’)
= M2. M3. M0. M4. M1

= ∏M (0, 1, 2, 3, 4)
5. Y= xy+ x’z
= (xy+ x’) (xy+ z)    [Using distributive law, convert the function into OR terms]

= (x+x’) (y+x’) (x+z) (y+z) [x+ x’=1]

= (x’+y) (x+z) (y+z)


= (x’+y+ z.z’) (x+z+y.y’) (y+z+ x.x’)
= (x’+ y+ z) (x’+ y+ z’) (x+ y+ z) (x+ y’+ z) (x+ y+ z) (x’+ y+ z)

= (x’+ y+ z) (x’+ y+ z’) (x+ y+ z) (x+ y’+ z)


= M4. M5. M0. M2

= ∏M (0, 2, 4, 5).
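Since a maxterm corresponds to a truth-table row where the function is 0, the POS results above can be verified the same way; a sketch (the helper name is ours):

```python
# The maxterm indices are the rows where f evaluates to 0 (MSB-first order).
def maxterms_of(f, n):
    return [m for m in range(2 ** n)
            if not f(*(((m >> (n - 1 - i)) & 1) for i in range(n)))]

# Last example: Y = xy + x'z  ->  product of M(0, 2, 4, 5)
print(maxterms_of(lambda x, y, z: (x and y) or ((not x) and z), 3))
```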

UNIVERSAL GATES:

The NAND and NOR gates are known as universal gates, since any logic
function can be implemented using NAND or NOR gates. This is illustrated in the
following sections.


a) NAND Gate:
The NAND gate can be used to generate the NOT function, the AND function,
the OR function and the NOR function.
i) NOT function:
By connecting all the inputs together and creating a single common input.

NOT function using NAND gate

ii) AND function:


By simply inverting output of the NAND gate. i.e.,

AND function using NAND gates

iii) OR function:

By simply inverting inputs of the NAND gate. i.e.,

OR function using NAND gates

Bubble at the input of NAND gate indicates inverted input.

iv) NOR function:


By inverting inputs and outputs of the NAND gate.

NOR function using NAND gates

b) NOR Gate:

Similar to the NAND gate, the NOR gate is also a universal gate, since it can be used
to generate the NOT, AND, OR and NAND functions.

i) NOT function:

By connecting all the inputs together and creating a single common input.


NOT function using NOR gates

ii) OR function:
By simply inverting output of the NOR gate. i.e.,

OR function using NOR gates

iii) AND function:


By simply inverting inputs of the NOR gate. i.e.,

AND function using NOR gates

Bubble at the input of NOR gate indicates inverted input.

Truth table

iv) NAND Function:


By inverting inputs and outputs of the NOR gate.

NAND function using NOR gates
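The universality of the NAND gate can also be checked exhaustively in software; a minimal Python sketch mirroring the constructions above (the function names are ours):

```python
# Everything below is built from a single 2-input NAND primitive.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                        # tied inputs

def and_(a, b):
    return nand(nand(a, b), nand(a, b))      # NAND with inverted output

def or_(a, b):
    return nand(nand(a, a), nand(b, b))      # NAND with inverted inputs

def nor_(a, b):
    return not_(or_(a, b))                   # inverted inputs and output

# Exhaustive check against the native operators.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert nor_(a, b) == 1 - (a | b)
```

The same exercise works with a single `nor` primitive, swapping the roles of AND and OR exactly as in section b).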

Conversion of AND/OR/NOT to NAND/NOR:

1. Draw AND/OR logic.


2. If NAND hardware has been chosen, add bubbles on the output of each
AND gate and bubbles on input side to all OR gates.
If NOR hardware has been chosen, add bubbles on the output of each OR gateand
bubbles on input side to all AND gates.
3. Add or subtract an inverter on each line that received a bubble in step 2.
4. Replace bubbled OR by NAND and bubbled AND by NOR.
5. Eliminate double inversions.

1. Implement Boolean expression using NAND gates:

Original Circuit:


Soln:
NAND Circuit:


NOR Circuit:

2. Implement Boolean expression for EX-OR gate using NAND gates.

Soln:
Adding bubbles on the output of each AND gates and on the inputs of each OR
gate.

Adding an inverter on each line that received bubble,

Eliminating double inversion,

Replacing inverter and bubbled OR with NAND, we have

Quine-McCluskey Tabular Method

Boolean function simplification is one of the basics of Digital Electronics. The Quine-McCluskey
method, also called the tabulation method, is a very useful and convenient method for simplification of
Boolean functions with a large number of variables (greater than 4). This method is preferable to the K-
map when the number of variables is large, for which K-map formation is difficult. This method uses
prime implicants for simplification.

Follow these steps for simplifying Boolean functions using the Quine-McCluskey tabular method.
Step 1 − Arrange the given min terms in an ascending order and make the groups based on the number of
ones present in their binary representations. So, there will be at most ‘n+1’ groups if there are ‘n’ Boolean
variables in a Boolean function or ‘n’ bits in the binary equivalent of min terms.
Step 2 − Compare the min terms present in successive groups. If there is a change in only one bit position,
then take the pair of those two min terms. Place the symbol ‘-’ in the differing bit position and keep the
remaining bits as they are.
Step 3 − Repeat step 2 with the newly formed terms till we get all prime implicants.
Step 4 − Formulate the prime implicant table. It consists of set of rows and columns. Prime implicants can
be placed in row wise and min terms can be placed in column wise. Place ‘1’ in the cells corresponding to the
min terms that are covered in each prime implicant.
Step 5 − Find the essential prime implicants by observing each column. If the min term is covered only by one
prime implicant, then it is essential prime implicant. Those essential prime implicants will be part of the
simplified Boolean function.
Step 6 − Reduce the prime implicant table by removing the row of each essential prime implicant and the
columns corresponding to the min terms that are covered in that essential prime implicant. Repeat step 5 for
Reduced prime implicant table. Stop this process when all min terms of given Boolean function are over.
Example
Let us simplify the following Boolean function, f(W, X, Y, Z) = ∑m(2, 6, 8, 9, 10, 11, 14, 15), using
the Quine-McCluskey tabular method.
The given Boolean function is in sum of min terms form. It is having 4 variables W, X, Y & Z. The given
min terms are 2, 6, 8, 9, 10, 11, 14 and 15. The ascending order of these min terms based on the number of
ones present in their binary equivalent is 2, 8, 6, 9, 10, 11, 14 and 15. The following table shows these min
terms and their equivalent binary representations.

Group Name Min terms W X Y Z

2 0 0 1 0
GA1
8 1 0 0 0

6 0 1 1 0

GA2 9 1 0 0 1

10 1 0 1 0

11 1 0 1 1
GA3
14 1 1 1 0

GA4 15 1 1 1 1

The given min terms are arranged into 4 groups based on the number of ones present in their binary
equivalents. The following table shows the possible merging of min terms from adjacent groups.

Group Name Min terms W X Y Z

2,6 0 - 1 0

2,10 - 0 1 0
GB1
8,9 1 0 0 -

8,10 1 0 - 0

6,14 - 1 1 0

9,11 1 0 - 1
GB2
10,11 1 0 1 -

10,14 1 - 1 0

11,15 1 - 1 1
GB3
14,15 1 1 1 -

The min terms which differ in only one bit position between adjacent groups are merged. That differing bit
is represented with the symbol ‘-’. In this case, there are three groups and each group contains combinations
of two min terms. The following table shows the possible merging of min term pairs from adjacent groups.

Group Name Min terms W X Y Z

2,6,10,14 - - 1 0

2,10,6,14 - - 1 0
GC1
8,9,10,11 1 0 - -

8,10,9,11 1 0 - -

10,11,14,15 1 - 1 -
GC2
10,14,11,15 1 - 1 -

The successive groups of min term pairs which differ in only one bit position are merged. That differing
bit is represented with the symbol ‘-’. In this case, there are two groups and each group contains
combinations of four min terms. Here, these combinations of 4 min terms are available in two rows. So, we
can remove the repeated rows. The reduced table after removing the redundant rows is shown below.

Group Name Min terms W X Y Z

GC1 2,6,10,14 - - 1 0

8,9,10,11 1 0 - -

GC2 10,11,14,15 1 - 1 -

Further merging of the combinations of min terms from adjacent groups is not possible, since they are differed
in more than one-bit position. There are three rows in the above table. So, each row will give one prime
implicant. Therefore, the prime implicants are YZ’, WX’ & WY.
The prime implicant table is shown below.

Min terms / Prime Implicants 2 6 8 9 10 11 14 15

YZ’ 1 1 1 1

WX’ 1 1 1 1

WY 1 1 1 1

The prime implicants are placed in row wise and min terms are placed in column wise. 1s are placed in the
common cells of prime implicant rows and the corresponding min term columns.
The min terms 2 and 6 are covered only by one prime implicant YZ’. So, it is an essential prime implicant.
This will be part of simplified Boolean function. Now, remove this prime implicant row and the corresponding
min term columns. The reduced prime implicant table is shown below.
Min terms / Prime Implicants 8 9 11 15

WX’ 1 1 1

WY 1 1

The min terms 8 and 9 are covered only by one prime implicant WX’. So, it is an essential prime implicant.
This will be part of simplified Boolean function. Now, remove this prime implicant row and the corresponding
min term columns. The reduced prime implicant table is shown below.

Min terms / Prime Implicants 15

WY 1

The min term 15 is covered only by one prime implicant WY. So, it is an essential prime implicant. This will
be part of simplified Boolean function.
In this example problem, we got three prime implicants and all the three are essential. Therefore,
the simplified Boolean function is

F(W,X,Y,Z) = YZ’ + WX’ + WY
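The tabulation procedure above translates almost line-for-line into code. The following Python sketch (our own function names) reproduces the worked result; for simplicity it compares all term pairs rather than only adjacent ones-count groups (same outcome), and it selects essential prime implicants only, omitting the cyclic-cover step that this example does not need:

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicant strings that differ in exactly one literal."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    # Steps 1-3: repeatedly merge terms; anything never merged is prime.
    terms = {format(m, '0%db' % nbits) for m in minterms}
    primes = set()
    while terms:
        used, merged = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes

def covers(imp, m, nbits):
    return all(x in ('-', y) for x, y in zip(imp, format(m, '0%db' % nbits)))

def essentials(primes, minterms, nbits):
    # Steps 4-5: an implicant that is the sole cover of some minterm is essential.
    ess = set()
    for m in minterms:
        cov = [p for p in primes if covers(p, m, nbits)]
        if len(cov) == 1:
            ess.add(cov[0])
    return ess

def literal(imp, names):
    return ''.join(n + ("'" if b == '0' else '') for n, b in zip(names, imp) if b != '-')

ms = [2, 6, 8, 9, 10, 11, 14, 15]
primes = prime_implicants(ms, 4)
result = sorted(literal(p, 'WXYZ') for p in essentials(primes, ms, 4))
print(' + '.join(result))  # WX' + WY + YZ', the same three terms as above
```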

UNIT II

COMBINATIONAL AND SEQUENTIAL CIRCUITS

Combinational circuits – Adder – Subtractor – ALU Design – Decoder – Encoder – Multiplexers –


Introduction to Sequential Circuits – Flip-Flops – Registers – Counters.
**********************************************************************************
INTRODUCTION- COMBINATIONAL CIRCUITS:
The digital system consists of two types of circuits, namely
(i) Combinational circuits
(ii) Sequential circuits

Combinational circuit consists of logic gates whose output at any time is determined
from the present combination of inputs. The logic gate is the most basic building block of
combinational logic. The logical function performed by a combinational circuit is fully defined
by a set of Boolean expressions.

Sequential logic circuit comprises both logic gates and the state of storage elements
such as flip-flops. As a consequence, the output of a sequential circuit depends not only on
present value of inputs but also on the past state of inputs.
In the previous chapter, we have discussed binary numbers, codes, Boolean algebra and
simplification of Boolean function and logic gates. In this chapter, formulation and analysis of
various systematic designs of combinational circuits will be discussed.

A combinational circuit consists of input variables, logic gates, and output variables. The
logic gates accept signals from inputs and output signals are generated according to the logic
circuits employed in it. Binary information from the given data transforms to desired output data
in this process. Both input and output are obviously the binary signals, i.e., both the input and
output signals are of two possible states, logic 1 and logic 0.

Block diagram of a combinational logic circuit

For n input variables to a combinational circuit, 2^n possible combinations of


binary input states are possible. For each possible combination, there is one and only one
possible output combination. A combinational logic circuit can be described by m Boolean
functions and each output can be expressed in terms of n input variables.
ARITHMETIC CIRCUITS – BASIC BUILDING BLOCKS:

In this section, we will discuss those combinational logic building blocks that can be used
to perform addition and subtraction operations on binary numbers. Addition and subtraction are
the two most commonly used arithmetic operations, as the other two, namely multiplication and
division, are respectively the processes of repeated addition and repeated subtraction.
The basic building blocks that form the basis of all hardware used to perform the
arithmetic operations on binary numbers are the half-adder, full-adder, half-subtractor and
full-subtractor.

Half-Adder:
A half-adder is a combinational circuit that can be used to add two binary bits. It has two
inputs that represent the two bits to be added and two outputs, with one producing the SUM
output and the other producing the CARRY.

Block schematic of half-adder

The truth table of a half-adder, showing all possible input combinations and the
corresponding outputs are shown below.

Inputs Outputs
A B Carry (C) Sum (S)
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
Truth table of half-adder
K-map simplification for carry and sum:

The Boolean expressions for the SUM and CARRY outputs are given by the equations,
Sum, S = A’B + AB’ = A ⊕ B
Carry, C = A . B

The first one representing the SUM output is that of an EX-OR gate, the second
one representing the CARRY output is that of an AND gate.
The logic diagram of the half adder is,

Logic Implementation of Half-adder
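In software, the half-adder is literally one XOR and one AND; a quick Python check against the truth table:

```python
# Half-adder: SUM = A xor B, CARRY = A and B.
def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert a + b == 2 * c + s    # the pair (carry, sum) is the 2-bit total
```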

Full-Adder:
A full adder is a combinational circuit that forms the arithmetic sum of three
input bits. It consists of 3 inputs and 2 outputs.
Two of the input variables, represent the significant bits to be added. The third input
represents the carry from previous lower significant position. The block diagram of full adder is

given by,
Block schematic of full-adder

The full adder circuit overcomes the limitation of the half-adder, which can be used to
add two bits only. As there are three input variables, eight different input combinations are
possible. The truth table is shown below,
Truth Table:

Inputs            Outputs
A  B  Cin     Sum (S)  Carry (Cout)
0  0  0          0         0
0  0  1          1         0
0  1  0          1         0
0  1  1          0         1
1  0  0          1         0
1  0  1          0         1
1  1  0          0         1
1  1  1          1         1

To derive the simplified Boolean expression from the truth table, the Karnaugh map method is
adopted as,

The Boolean expressions for the SUM and CARRY outputs are given by the
equations,
Sum, S = A’B’Cin+ A’BC’in + AB’C’in + ABCin
Carry, Cout = AB+ ACin + BCin .

The logic diagram for the above functions is shown as,


Implementation of full-adder in Sum of Products

The logic diagram of the full adder can also be implemented with two half-adders and
one OR gate. The S output from the second half-adder is the exclusive-OR of Cin and the
output of the first half-adder, giving

Sum, S = Cin ⊕ (A ⊕ B) = Cin ⊕ (A’B + AB’)
= C’in (A’B + AB’) + Cin (A’B + AB’)’      [(x’y + xy’)’ = (xy + x’y’)]
= C’in (A’B + AB’) + Cin (AB + A’B’)

and the carry output is,
Carry, Cout = AB + Cin (A’B + AB’)
= AB + A’BCin + AB’Cin
= AB (Cin + 1) + A’BCin + AB’Cin           [Cin + 1 = 1]
= ABCin + AB + A’BCin + AB’Cin
= AB + ACin (B + B’) + A’BCin
= AB + ACin + A’BCin
= AB (Cin + 1) + ACin + A’BCin             [Cin + 1 = 1]
= ABCin + AB + ACin + A’BCin
= AB + ACin + BCin (A + A’)
= AB + ACin + BCin.
Implementation of full adder with two half-adders and an OR gate
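The two-half-adder construction can be verified against the full-adder truth table; a small Python sketch:

```python
def half_adder(a, b):
    return a ^ b, a & b                  # (sum, carry)

# Full adder from two half-adders plus an OR gate merging the two carries.
def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)            # first half-adder
    s, c2 = half_adder(s1, cin)          # second half-adder
    return s, c1 | c2                    # OR gate

for n in range(8):                       # all 8 rows of the truth table
    a, b, cin = (n >> 2) & 1, (n >> 1) & 1, n & 1
    s, cout = full_adder(a, b, cin)
    assert a + b + cin == 2 * cout + s
```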
Half -Subtractor:

A half-subtractor is a combinational circuit that can be used to subtract one binary digit
from another to produce a DIFFERENCE output and a BORROW output. The BORROW output
here specifies whether a ‘1’ has been borrowed to perform the subtraction.
Block schematic of half-subtractor

The truth table of half-subtractor, showing all possible input combinations andthe
corresponding outputs are shown below.

Input Output
A B Difference (D) Borrow
(Bout)
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0

K-map simplification for half subtractor:

The Boolean expressions for the DIFFERENCE and BORROW outputs are given by the
equations,
Difference, D = A’B + AB’ = A ⊕ B
Borrow, Bout = A’ . B

The first one, representing the DIFFERENCE (D) output, is that of an exclusive-OR gate;
the expression for the BORROW output (Bout) is that of an AND gate with input A
complemented before it is fed to the gate.
The logic diagram of the half-subtractor is,

Logic Implementation of Half-Subtractor

Comparing a half-subtractor with a half-adder, we find that the expressions for the SUM
and DIFFERENCE outputs are just the same. The expression for BORROW in the case of the
half-subtractor is also similar to what we have for CARRY in the case of the half-adder. If the
input A, ie., the minuend is complemented, an AND gate can be used to implement the
BORROW output.

Full Subtractor:
A full subtractor performs subtraction operation on two bits, a minuend and a subtrahend,
and also takes into consideration whether a ‘1’ has already been borrowed by the previous
adjacent lower minuend bit or not.
As a result, there are three bits to be handled at the input of a full subtractor, namely the
two bits to be subtracted and a borrow bit designated as Bin. There are two outputs, namely the
DIFFERENCE output D and the BORROW output Bo. The BORROW output bit tells whether
the minuend bit needs to borrow a ‘1’ from the next possible higher minuend bit.

Block schematic of full-subtractor

The truth table for full-subtractor is,


Inputs Outputs
A B Bin Difference(D) Borrow(Bout
)
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

K-map simplification for full-subtractor:

The Boolean expressions for the DIFFERENCE and BORROW outputs are given by the
equations,
Difference, D = A’B’Bin + A’BB’in + AB’B’in + ABBin
Borrow, Bout = A’B + A’Bin + BBin .

The logic diagram for the above functions is shown as,

Implementation of full-subtractor in Sum of Products

The logic diagram of the full-subtractor can also be implemented with two half-
subtractors and one OR gate. The difference, D, output from the second half-subtractor is the
exclusive-OR of Bin and the output of the first half-subtractor, giving

Difference, D = Bin ⊕ (A ⊕ B) = B’in (A’B + AB’) + Bin (AB + A’B’)

and the borrow output is,

Borrow, Bout = A’B + Bin (A’B + AB’)’ = A’B + Bin (AB + A’B’)
= A’B + ABBin + A’B’Bin
= A’B + ABBin + A’BBin + A’B’Bin           [A’BBin is absorbed in A’B]
= A’B + BBin (A + A’) + A’Bin (B + B’)
= A’B + BBin + A’Bin

Therefore, we can implement the full-subtractor using two half-subtractors and an OR
gate as,

Implementation of full-subtractor with two half-subtractors and an OR gate
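The same exhaustive check works for the two-half-subtractor construction; a Python sketch (the helper names are ours):

```python
# Half-subtractor: DIFFERENCE = A xor B, BORROW = A'B.
def half_sub(a, b):
    return a ^ b, (1 - a) & b

# Full subtractor from two half-subtractors plus an OR gate for the borrows.
def full_sub(a, b, bin_):
    d1, b1 = half_sub(a, b)
    d, b2 = half_sub(d1, bin_)
    return d, b1 | b2

for n in range(8):
    a, b, bin_ = (n >> 2) & 1, (n >> 1) & 1, n & 1
    d, bout = full_sub(a, b, bin_)
    assert a - b - bin_ == d - 2 * bout   # a borrow is worth -2 at this position
```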

Binary Adder (Parallel Adder):


The 4-bit binary adder using full adder circuits is capable of adding two 4-bit numbers
resulting in a 4-bit sum and a carry output as shown in figure below.

4-bit binary parallel Adder

Since all the bits of augend and addend are fed into the adder circuits simultaneously and
the additions in each position are taking place at the same time, this circuit is known as parallel
adder.

Let the 4-bit words to be added be represented by,


A3A2A1A0= 1111 and B3B2B1B0= 0011.
The bits are added with full adders, starting from the least significant position, to form the sum
bit and the carry bit. The input carry C0 in the least significant position must be
0. The carry output of the lower order stage is connected to the carry input of the next higher order
stage. Hence this type of adder is called ripple-carry adder.

In the least significant stage, A0, B0 and C0 (which is 0) are added resulting in sum S0 and
carry C1. This carry C1 becomes the carry input to the second stage. Similarly in the second stage, A1,
B1 and C1 are added resulting in sum S1 and carry C2; in the third stage, A2, B2 and C2 are added
resulting in sum S2 and carry C3; in the fourth stage, A3, B3 and C3 are added resulting in sum S3 and
C4, which is the output carry. Thus the circuit results in a sum (S3S2S1S0) and a carry output (Cout).
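The ripple-carry chain described above is easy to model directly; a Python sketch using LSB-first bit lists, run on the worked operands A = 1111 and B = 0011:

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Chain full adders LSB -> MSB; each stage's carry feeds the next stage.
def ripple_add(abits, bbits, c=0):
    out = []
    for a, b in zip(abits, bbits):       # bit lists are LSB first
        s, c = full_adder(a, b, c)
        out.append(s)
    return out, c                        # (sum bits, final carry-out)

# A = 1111 (15), B = 0011 (3): sum = 0010 with carry-out 1, i.e. 18.
s, cout = ripple_add([1, 1, 1, 1], [1, 1, 0, 0])
print(s, cout)
```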

Though the parallel binary adder is said to generate its output immediately after the inputs are
applied, its speed of operation is limited by the carry propagation delay through all stages. However,
there are several methods to reduce this delay.

One of the methods of speeding up this process is look-ahead carry addition which eliminates
the ripple-carry delay.

Carry Propagation–Look-Ahead Carry Generator:


In Parallel adder, all the bits of the augend and the addend are available for computation at the same
time. The carry output of each full-adder stage is connected to the carry input of the next high-order
stage. Since each bit of the sum output depends on the value of the input carry, time delay occurs in the
addition process. This time delay is called as carry propagation delay.

For example, addition of two numbers (0011+ 0101) gives the result as 1000. Addition of the
LSB position produces a carry into the second position. This carry when added to the bits of the
second position, produces a carry into the third position. This carry when added to bits of the third
position, produces a carry into the last position. The sum bit generated in the last position (MSB)
depends on the carry that was generated by the addition in the previous position. i.e., the adder will not
produce correct result until LSB carry has propagated through the intermediate full-adders. This
represents a time delay that depends on the propagation delay produced in each full-adder. For
example, if each full adder is considered to have a propagation delay of 30 nsec, then S3 will not reach its
correct value until 90 nsec after the LSB carry is generated. Therefore the total time required to perform
the addition is 90 + 30 = 120 nsec.
4-bit Parallel Adder
The method of speeding up this process by eliminating inter-stage carry delay is called
look-ahead carry addition. This method utilizes logic gates to look at the lower order bits of the
augend and addend to see if a higher-order carry is to be generated. It uses two functions: carry

generate and carry propagate.


Full-Adder circuit

Consider the circuit of the full-adder shown above. Here we define two functions: carry
generate (Gi) and carry propagate (Pi) as,
Gi = Ai . Bi
Pi = Ai ⊕ Bi
The output sum and carry can then be expressed as,
Si = Pi ⊕ Ci
Ci+1 = Gi + Pi . Ci
Gi is called carry generate, since it produces a carry of 1 when both Ai and Bi are 1, regardless of the
input carry Ci.
Pi is called carry propagate, because it is the term associated with the propagation of the carry from
Ci to Ci+1.
Writing the Boolean functions for the carry outputs of each stage and substituting for each Ci its
value from the previous equation:
C0 = input carry
C1 = G0 + P0C0
C2 = G1 + P1C1 = G1 + P1G0 + P1P0C0
C3 = G2 + P2C2 = G2 + P2G1 + P2P1G0 + P2P1P0C0
Since the Boolean function for each output carry is expressed in sum of products, each
function can be implemented with one level of AND gates followed by an OR gate. The three Boolean functions
for C1, C2 and C3 are implemented in the carry look-ahead generator as shown below. Note that C3 does not
have to wait for C2 and C1 to propagate; in fact C3 is propagated at the same time as C1 and C2.
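The generate/propagate equations can be exercised in a few lines; this Python sketch computes each Ci+1 = Gi + Pi·Ci (written here as an iteration for clarity; the hardware expands the products so that all carries appear after only two gate levels):

```python
# Carry look-ahead terms: Gi = Ai AND Bi, Pi = Ai XOR Bi.
def lookahead_carries(abits, bbits, c0=0):
    g = [a & b for a, b in zip(abits, bbits)]
    p = [a ^ b for a, b in zip(abits, bbits)]
    c = [c0]
    for gi, pi in zip(g, p):
        c.append(gi | (pi & c[-1]))      # Ci+1 = Gi + Pi.Ci
    return c                             # [C0, C1, C2, C3, C4]

# Same operands as the ripple example: A = 1111, B = 0011 (LSB-first lists).
print(lookahead_carries([1, 1, 1, 1], [1, 1, 0, 0]))
```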

Logic diagram of Carry Look-ahead Generator

Using a Look-ahead Generator we can easily construct a 4-bit parallel adder witha Look-
ahead carry scheme. Each sum output requires two exclusive-OR gates. The
output of the first exclusive-OR gate generates the Pi variable, and the AND gategenerates the Gi
variable. The carries are propagated through the carry look-ahead generator and applied as inputs
to the second exclusive-OR gate. All output carries are generated after a delay through two levels
of gates. Thus, outputs S1 through S3 have equal propagation delay times.

4-Bit Adder with Carry Look-ahead

Binary Subtractor (Parallel Subtractor):


The subtraction of unsigned binary numbers can be done most conveniently by means of
complements. The subtraction A - B can be done by taking the 2’s complement of B and adding it
to A. The 2’s complement can be obtained by taking the 1’s complement and adding 1 to the
least significant pair of bits. The 1’s complement can be implemented with inverters and a 1 can
be added to the sum through the input carry.
The circuit for subtracting A - B consists of an adder with inverters placed between each
data input B and the corresponding input of the full adder. The input carry C0 must be equal to 1
when performing subtraction. The operation thus performed becomes A, plus the 1’s
complement of B, plus 1. This is equal to A plus the 2’s complement of B.

4-bit Parallel Subtractor

Parallel Adder/ Subtractor:


The addition and subtraction operation can be combined into one circuit with one
common binary adder. This is done by including an exclusive-OR gate with each full adder. A 4-
bit adder Subtractor circuit is shown below.

4-Bit Adder Subtractor

The mode input M controls the operation. When M = 0, the circuit is an adder and when
M = 1, the circuit becomes a subtractor. Each exclusive-OR gate receives input M
and one of the inputs of B. When M = 0, we have B ⊕ 0 = B. The full adders receive the value of
B, the input carry is 0, and the circuit performs A plus B. When M = 1, we have
B ⊕ 1 = B’ and C0 = 1. The B inputs are all complemented and a 1 is added through the input
carry. The circuit performs the operation A plus the 2’s complement of B. The exclusive-OR with
output V is for detecting an overflow.
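This M-controlled behaviour is a one-line change to the ripple adder: XOR every Bi with M and feed M into C0. A Python sketch (function names are ours):

```python
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

# M = 0: A + B.  M = 1: A + B' + 1, i.e. A minus B in 2's complement.
def add_sub(abits, bbits, m):
    c, out = m, []                       # mode M also supplies the input carry
    for a, b in zip(abits, bbits):       # LSB-first bit lists
        s, c = full_adder(a, b ^ m, c)   # each Bi is XORed with M
        out.append(s)
    return out, c

# 0110 (6) minus 0011 (3) = 0011 (3); the end carry is discarded.
diff, _ = add_sub([0, 1, 1, 0], [1, 1, 0, 0], 1)
print(diff)
```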

Decimal Adder (BCD Adder):


The digital system handles the decimal number in the form of binary coded decimal
numbers (BCD). A BCD adder is a circuit that adds two BCD bits and produces a sum digit also
in BCD.
Consider the arithmetic addition of two decimal digits in BCD, together with an input
carry from a previous stage. Since each input digit does not exceed 9, the output sum cannot be
greater than 9 + 9 + 1 = 19, the 1 in the sum being an input carry. The adder will form the sum
in binary and produce a result that ranges from 0 through 19.
These binary numbers are labeled by the symbols K, Z8, Z4, Z2, Z1, where K is the carry. The
columns under the binary sum list the binary values that appear in the outputs of the 4- bit binary
adder. The output sum of the two decimal digits must be represented in BCD.

Binary Sum BCD Sum


Decimal
K Z8 Z4 Z2 Z1 C S8 S4 S2 S1
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1 1
0 0 0 1 0 0 0 0 1 0 2
0 0 0 1 1 0 0 0 1 1 3
0 0 1 0 0 0 0 1 0 0 4
0 0 1 0 1 0 0 1 0 1 5
0 0 1 1 0 0 0 1 1 0 6
0 0 1 1 1 0 0 1 1 1 7
0 1 0 0 0 0 1 0 0 0 8
0 1 0 0 1 0 1 0 0 1 9
0 1 0 1 0 1 0 0 0 0 10
0 1 0 1 1 1 0 0 0 1 11
0 1 1 0 0 1 0 0 1 0 12
0 1 1 0 1 1 0 0 1 1 13
0 1 1 1 0 1 0 1 0 0 14
0 1 1 1 1 1 0 1 0 1 15
1 0 0 0 0 1 0 1 1 0 16
1 0 0 0 1 1 0 1 1 1 17
1 0 0 1 0 1 1 0 0 0 18
1 0 0 1 1 1 1 0 0 1 19

In examining the contents of the table, it is apparent that when the binary sum is equal to
or less than 1001, the corresponding BCD number is identical, and therefore no conversion is
needed. When the binary sum is greater than 9 (1001), we obtain an invalid BCD
representation. The addition of binary 6 (0110) to the binary sum converts it to the correct BCD
representation and also produces an output carry as required.
The logic circuit to detect a sum greater than 9 can be determined by simplifying the
Boolean expression of the given truth table.

To implement BCD adder we require:


 4-bit binary adder for initial addition
 Logic circuit to detect sum greater than 9 and
 One more 4-bit adder to add 0110 (binary 6) to the sum if the sum is greater than 9 or the carry is 1.

The two decimal digits, together with the input carry, are first added in the top 4-bit
binary adder to provide the binary sum. When the output carry is equal to zero, nothing is
added to the binary sum. When it is equal to one, binary 0110 is added to the binary sum
through the bottom 4-bit adder. The output carry generated from the bottom adder can be
ignored, since it supplies information already available at the output carry terminal. The
output carry from one stage must be connected to the input carry of the next higher-order
stage.
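The whole BCD stage reduces to "add in binary, then add 6 on overflow"; a Python sketch of one digit stage (the function name is ours):

```python
# One BCD digit stage: binary add, then correct by +0110 when the binary
# sum exceeds 9 (equivalently, when K = 1 or Z8.Z4 + Z8.Z2 is true).
def bcd_digit_add(a, b, cin=0):
    s = a + b + cin                  # top 4-bit binary adder
    if s > 9:
        return 1, (s + 6) & 0xF      # bottom adder adds 6; BCD carry out = 1
    return 0, s

print(bcd_digit_add(9, 9, 1))        # 9 + 9 + 1 = 19 -> (carry 1, digit 9)
```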

Block diagram of BCD adder

ALU OPERATIONS

A 1-Bit ALU
The arithmetic logic unit (ALU) is the brawn of the computer, the device that performs arithmetic
operations like addition and subtraction or logical operations like AND and OR.
An adder must have two inputs for the operands and a single-bit output for the sum. There must be a
second output to pass on the carry, called CarryOut. Since the CarryOut from the neighbor adder
must be included as an input, we need a third input. This input is called CarryIn. This adder is called a
full adder. It is also called a (3,2) adder because it has three inputs and two outputs. An adder with
only the inputs a and b is called a (2,2) adder or half adder.
Fig.1(a) The 1-bit logical unit for AND, OR and Fig1 (b) 1-Bit adder (Full adder)
Adder.

Table 1. Input and output specification of 1-bit adder

A 32-bit ALU
The full 32-bit ALU is created by connecting adjacent 1-bit ALU black boxes, using xi to mean the
ith bit of x. The adder created by directly linking the carries of 1-bit adders is called a ripple-carry
adder (Fig 2). Subtraction is the same as adding the negative version of an operand, and this is how
adders perform subtraction.
Fig 2.(a) A 32-bit ALU performing addition Fig 2.(b) A 1-bit ALU for the most significant bit
(Ripple carry adder)

DECODERS:

A decoder is a combinational circuit that converts binary information from ‘n’ input
lines to a maximum of ‘2^n’ unique output lines. The general structure of the decoder circuit is –

General structure of decoder


The encoded information is presented as ‘n’ inputs producing ‘2^n’ possible outputs. The
2^n output values are from 0 through 2^n - 1. A decoder is provided with enable inputs to activate
decoded outputs based on the data inputs. When any one enable input is unasserted, all outputs of
the decoder are disabled.
Binary Decoder (2 to 4 decoder):
A binary decoder has an 'n'-bit binary input and one activated output out of 2^n outputs. A
binary decoder is used when it is necessary to activate exactly one of 2^n outputs based on an
n-bit input value.

2-to-4 Line decoder

Here the 2 inputs are decoded into 4 outputs, each output representing one of the minterms
of the two input variables.

Inputs            Outputs
Enable  A  B     Y3  Y2  Y1  Y0
0       x  x     0   0   0   0
1       0  0     0   0   0   1
1       0  1     0   0   1   0
1       1  0     0   1   0   0
1       1  1     1   0   0   0

As shown in the truth table, if enable input is 1 (EN= 1) only one of the outputs (Y0 –
Y3), is active for a given input.
The output Y0 is active, i.e., Y0= 1, when inputs A= B= 0;
Y1 is active when inputs A= 0 and B= 1;
Y2 is active when inputs A= 1 and B= 0;
Y3 is active when inputs A= B= 1.
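The truth table above can be modelled in a few lines of Python (the output list is ordered [Y0, Y1, Y2, Y3]; the function name is illustrative):

```python
def decoder_2to4(a, b, enable=1):
    """2-to-4 line decoder: exactly one output high when enabled.

    Returns outputs in the order [Y0, Y1, Y2, Y3].
    """
    if not enable:
        return [0, 0, 0, 0]         # all outputs disabled
    y = [0, 0, 0, 0]
    y[2 * a + b] = 1                # output index = binary value of (A, B)
    return y

print(decoder_2to4(1, 0))  # → [0, 0, 1, 0], i.e. Y2 is active
```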
3-to-8 Line Decoder:
A 3-to-8 line decoder has three inputs (A, B, C) and eight outputs (Y0- Y7). Based on
the 3 inputs one of the eight outputs is selected.
The three inputs are decoded into eight outputs, each output representing one of the
minterms of the 3-input variables. This decoder is used for binary-to-octal conversion. The input
variables may represent a binary number and the outputs will represent the eight digits in the
octal number system. The output variables are mutually exclusive because only one output can be
equal to 1 at any one time. The output line whose value is equal to 1 represents the minterm
equivalent of the binary number presently available in the input lines.

Inputs          Outputs

A  B  C     Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7

0 0 0 1 0 0 0 0 0 0 0

0 0 1 0 1 0 0 0 0 0 0

0 1 0 0 0 1 0 0 0 0 0

0 1 1 0 0 0 1 0 0 0 0

1 0 0 0 0 0 0 1 0 0 0

1 0 1 0 0 0 0 0 1 0 0

1 1 0 0 0 0 0 0 0 1 0

1 1 1 0 0 0 0 0 0 0 1
3-to-8 line decoder

BCD to 7-Segment Display Decoder:


A seven-segment display is normally used for displaying any one of the decimal
digits, 0 through 9. A BCD-to-seven segment decoder accepts a decimal digit in BCD and
generates the corresponding seven-segment code.
Digit Display Segments Activated

0 a, b, c, d, e, f

1 b, c

2 a, b, d, e, g

3 a, b, c, d, g

4 b, c, f, g

5 a, c, d, f, g
6 a, c, d, e, f, g

7 a, b, c

8 a, b, c, d, e, f, g

9 a, b, c, d, f, g

Truth table:

BCD code 7-Segment code


Digit A B C D a b C d e f g
0 0 0 0 0 1 1 1 1 1 1 0
1 0 0 0 1 0 1 1 0 0 0 0
2 0 0 1 0 1 1 0 1 1 0 1
3 0 0 1 1 1 1 1 1 0 0 1
4 0 1 0 0 0 1 1 0 0 1 1
5 0 1 0 1 1 0 1 1 0 1 1
6 0 1 1 0 1 0 1 1 1 1 1
7 0 1 1 1 1 1 1 0 0 0 0
8 1 0 0 0 1 1 1 1 1 1 1
9 1 0 0 1 1 1 1 1 0 1 1

K-map Simplification:
Logic Diagram:
BCD to 7-segment display decoder
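The truth table above can be captured directly as a lookup table; the segment letters per digit follow the table, and the function name is illustrative:

```python
# Segment patterns (a-g) lit for each decimal digit, from the truth table above.
SEGMENTS = {
    0: 'abcdef', 1: 'bc',     2: 'abdeg',   3: 'abcdg', 4: 'bcfg',
    5: 'acdfg',  6: 'acdefg', 7: 'abc',     8: 'abcdefg', 9: 'abcdfg',
}

def bcd_to_7seg(bcd):
    """Map a BCD digit (0-9) to its (a..g) segment outputs as a 7-tuple."""
    lit = SEGMENTS[bcd]
    return tuple(int(seg in lit) for seg in 'abcdefg')

print(bcd_to_7seg(2))  # → (1, 1, 0, 1, 1, 0, 1)
```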

Applications of decoders:
1. Decoders are used in counter system.
2. They are used in analog to digital converter.
3. Decoder outputs can be used to drive a display system.
ENCODERS:
An encoder is a digital circuit that performs the inverse operation of a decoder. Hence,
the opposite of the decoding process is called encoding. An encoder is a combinational circuit
that converts binary information from 2^n input lines to a maximum of 'n' unique output lines.
The general structure of encoder circuit is –

General structure of Encoder

It has 2^n input lines, only one of which is active at any time, and 'n' output lines. It
encodes the active input to a coded binary output with 'n' bits. In an encoder, the
number of outputs is less than the number of inputs.

Octal-to-Binary Encoder:
It has eight inputs (one for each of the octal digits) and the three outputs that generate the
corresponding binary number. It is assumed that only one input has a value of 1 at any given
time.

Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 A B C
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1

The encoder can be implemented with OR gates whose inputs are determined directly
from the truth table. Output z is equal to 1 when the input octal digit is 1, 3, 5 or 7. Output
y is 1 for octal digits 2, 3, 6 or 7, and output x is 1 for digits 4, 5, 6 or
7. These conditions can be expressed by the following output Boolean functions:

z= D1+ D3+ D5+ D7


y= D2+ D3+ D6+ D7
x= D4+ D5+ D6+ D7
The encoder can be implemented with three OR gates. The encoder defined in the
below table, has the limitation that only one input can be active at any given time. If two inputs
are active simultaneously, the output produces an undefined combination.
For eg., if D3 and D6 are 1 simultaneously, the output of the encoder may be 111. This
does not represent either D6 or D3. To resolve this problem, encoder circuits must establish an
input priority to ensure that only one input is encoded. If we establish a higher priority for inputs
with higher subscript numbers and if D3 and D6 are 1 at the same time, the output will be 110
because D6 has higher priority than D3.

Octal-to-Binary Encoder
Another problem in the octal-to-binary encoder is that an output with all 0's is generated
when all the inputs are 0; this output is the same as when D0 is equal to 1. The discrepancy can be
resolved by providing one more output to indicate that at least one input is equal to 1.
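The three OR-gate equations can be checked with a short sketch (it assumes a one-hot input list, as the text requires, and does no priority resolution):

```python
def octal_encoder(d):
    """Encode a one-hot input list d[0..7] with the three OR gates:

    x = D4+D5+D6+D7, y = D2+D3+D6+D7, z = D1+D3+D5+D7.
    """
    x = d[4] | d[5] | d[6] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    z = d[1] | d[3] | d[5] | d[7]
    return x, y, z

inputs = [0] * 8
inputs[5] = 1                     # only D5 active
print(octal_encoder(inputs))      # → (1, 0, 1), binary for octal digit 5
```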

Priority Encoder:
A priority encoder is an encoder circuit that includes the priority function. In priority
encoder, if two or more inputs are equal to 1 at the same time, the input having the highest
priority will take precedence.
In addition to the two outputs x and y, the circuit has a third output, V (valid bit
indicator). It is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no
valid input and V is equal to 0.
The higher the subscript number, higher the priority of the input. Input D3, has the
highest priority. So, regardless of the values of the other inputs, when D3 is 1, the output for xy
is 11.
D2 has the next priority level. The output is 10, if D2= 1 provided D3= 0. The output for
D1 is generated only if higher priority inputs are 0, and so on down the priority levels.

Truth table:

Inputs Outputs
D0 D1 D2 D3 x y V
0 0 0 0 x x 0
1 0 0 0 0 0 1
x 1 0 0 0 1 1
x x 1 0 1 0 1
x x x 1 1 1 1

Although the above table has only five rows, when each don‘t care condition is replaced
first by 0 and then by 1, we obtain all 16 possible input combinations. For example, the third row
in the table with X100 represents minterms 0100 and 1100. The don‘t care condition is replaced
by 0 and 1 as shown in the table below.
Modified Truth table:

Inputs              Outputs
D0  D1  D2  D3     x  y  V
0   0   0   0      x  x  0
1   0   0   0      0  0  1
0   1   0   0      0  1  1
1   1   0   0      0  1  1
0   0   1   0      1  0  1
0   1   1   0      1  0  1
1   0   1   0      1  0  1
1   1   1   0      1  0  1
0   0   0   1      1  1  1
0   0   1   1      1  1  1
0   1   0   1      1  1  1
0   1   1   1      1  1  1
1   0   0   1      1  1  1
1   0   1   1      1  1  1
1   1   0   1      1  1  1
1   1   1   1      1  1  1
K-map Simplification:
3- Input Priority Encoder
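The priority rule (highest subscript wins, with a valid bit V) can be sketched as follows; the None values stand for the don't-care outputs when V = 0:

```python
def priority_encoder(d0, d1, d2, d3):
    """4-input priority encoder: highest subscript wins; V flags a valid input."""
    if d3:
        return 1, 1, 1          # (x, y, V): D3 has the highest priority
    if d2:
        return 1, 0, 1
    if d1:
        return 0, 1, 1
    if d0:
        return 0, 0, 1
    return None, None, 0        # x, y are don't-cares when V = 0

print(priority_encoder(1, 1, 0, 1))  # → (1, 1, 1): D3 outranks D0 and D1
```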
MULTIPLEXER: (Data Selector)

A multiplexer or MUX, is a combinational circuit with more than one input line, one
output line and more than one selection line. A multiplexer selects binary information present
from one of many input lines, depending upon the logic status of the selection inputs, and routes
it to the output line. Normally, there are 2^n input lines and n selection lines whose bit
combinations determine which input is selected. The multiplexer is often labeled as MUX in
block diagrams.

A multiplexer is also called a data selector, since it selects one of many inputs and
steers the binary information to the output line.

Block diagram of Multiplexer

2-to-1- line Multiplexer:


The circuit has two data input lines, one output line and one selection line, S. When
S= 0, the upper AND gate is enabled and I0 has a path to the output.
When S=1, the lower AND gate is enabled and I1 has a path to the output.
Logic diagram
The multiplexer acts like an electronic switch that selects one of the two sources.

Truth table:
S   Y
0   I0
1   I1

4-to-1-line Multiplexer:
A 4-to-1-line multiplexer has four (2^n) input lines, two (n) select lines and one output
line. It is the multiplexer consisting of four input channels and information of one of the channels
can be selected and transmitted to an output line according to the select inputs combinations.
Selection of one of the four input channel is possible by two selection inputs.
Each of the four inputs I0 through I3, is applied to one input of AND gate. Selection lines
S1 and S0 are decoded to select a particular AND gate. The outputs of the AND gate are applied
to a single OR gate that provides the 1-line output.
4-to-1-Line Multiplexer

Function table:

S1 S0 Y
0 0 I0
0 1 I1
1 0 I2
1 1 I3

To demonstrate the circuit operation, consider the case when S1S0= 10. The AND gate
associated with input I2 has two of its inputs equal to 1 and the third input connected to I2. The
other three AND gates have at least one input equal to 0, which makes their outputs equal to 0.
The OR output is now equal to the value of I2, providing a path from the selected input to the
output.

The data output is equal to I0 only if S1= 0 and S0= 0; Y= I0S1‘S0‘.


The data output is equal to I1 only if S1= 0 and S0= 1; Y= I1S1‘S0.
The data output is equal to I2 only if S1= 1 and S0= 0; Y= I2S1S0‘.
The data output is equal to I3 only if S1= 1 and S0= 1; Y= I3S1S0.
When these terms are ORed, the total expression for the data output is,
Y= I0S1’S0’+ I1S1’S0 +I2S1S0’+ I3S1S0.
As in decoder, multiplexers may have an enable input to control the operation of the unit.
When the enable input is in the inactive state, the outputs are disabled, and when it is in the
active state, the circuit functions as a normal multiplexer.
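The output equation Y = I0S1'S0' + I1S1'S0 + I2S1S0' + I3S1S0 can be modelled directly (an enable input is included, as described above; names are illustrative):

```python
def mux4(i0, i1, i2, i3, s1, s0, enable=1):
    """4-to-1 MUX: Y = I0·S1'S0' + I1·S1'S0 + I2·S1S0' + I3·S1S0."""
    if not enable:
        return 0                      # outputs disabled
    n1, n0 = s1 ^ 1, s0 ^ 1          # complemented select lines
    return (i0 & n1 & n0) | (i1 & n1 & s0) | (i2 & s1 & n0) | (i3 & s1 & s0)

print(mux4(0, 1, 0, 1, s1=1, s0=0))  # → 0 (routes I2 to the output)
```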

Quadruple 2-to-1 Line Multiplexer:

This circuit has four multiplexers, each capable of selecting one of two input lines.
Output Y0 can be selected to come from either A0 or B0. Similarly, output Y1 may have the
value of A1 or B1, and so on. Input selection line, S selects one of the lines in each of the four
multiplexers. The enable input E must be active for normal operation.
Although the circuit contains four 2-to-1-Line multiplexers, it is viewed as a circuit that
selects one of two 4-bit sets of data lines. The unit is enabled when E= 0. Then if S= 0, the four
A inputs have a path to the four outputs. On the other hand, if S=1, the four B inputs are
applied to the outputs. The outputs have all 0‘s when E= 1, regardless of the value of S.

Application:
The multiplexer is a very useful MSI function and has various ranges of applications in
data communication. Signal routing and data communication are the important applications of a
multiplexer. It is used for connecting two or more sources to guide to a single destination among
computer units and it is useful for constructing a common bus system. One of the general
properties of a multiplexer is that Boolean functions can be implemented by this device.

Implementation of Boolean Function using MUX:

Any Boolean or logical expression can be easily implemented using a multiplexer. If a


Boolean expression has (n+1) variables, then ‗n‘ of these variables can be connected to the
select lines of the multiplexer. The remaining single variable along with constants 1 and 0 is used
as the input of the multiplexer. For example, if C is the single variable, then the inputs of the
multiplexers are C, C‘, 1 and 0. By this method any logical expression can be implemented.
In general, a Boolean expression of (n+1) variables can be implemented using a
multiplexer with 2^n inputs.

1. Implement the following Boolean function using a 4:1 multiplexer:
F (A, B, C) = ∑m (1, 3, 5, 6).
Solution:
Variables, n= 3 (A, B, C)
Select lines = n-1 = 2 (S1, S0)
2^(n-1) to 1 MUX i.e., 2^2 to 1 = 4 to 1 MUX
Inputs = 2^(n-1) = 2^2 = 4 (D0, D1, D2, D3)
Implementation table:

Apply variables A and B to the select lines. The procedures for implementing the
function are:

i. List the input of the multiplexer


ii. List under them all the minterms in two rows as shown below.
With A and B on the select lines, the top row of minterms is associated with C‘ and the
bottom row with C. The given function is implemented by circling the minterms of the
function and applying the following rules to find the values for the inputs of the multiplexer.

1. If both the minterms in the column are not circled, apply 0 to the corresponding input.
2. If both the minterms in the column are circled, apply 1 to the corresponding input.
3. If the bottom minterm is circled and the top is not circled, apply C to the input.
4. If the top minterm is circled and the bottom is not circled, apply C‘ to the input.

Multiplexer Implementation:
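The four rules can be exercised on problem 1. Assuming A and B drive the select lines and C is the residual variable (one common convention), the derived inputs come out as D0 = D1 = D2 = C and D3 = C'; the sketch below verifies that this reproduces ∑m (1, 3, 5, 6):

```python
def mux4(i0, i1, i2, i3, s1, s0):
    """Behavioural 4-to-1 multiplexer."""
    n1, n0 = s1 ^ 1, s0 ^ 1
    return (i0 & n1 & n0) | (i1 & n1 & s0) | (i2 & s1 & n0) | (i3 & s1 & s0)

# F(A,B,C) = Σm(1,3,5,6) with A, B on the select lines:
# column AB=00 covers m0, m1 -> only the C row circled -> D0 = C
# column AB=01 -> D1 = C; AB=10 -> D2 = C; AB=11 -> D3 = C'
def F(a, b, c):
    return mux4(c, c, c, c ^ 1, s1=a, s0=b)

minterms = [4*a + 2*b + c
            for a in (0, 1) for b in (0, 1) for c in (0, 1) if F(a, b, c)]
print(minterms)  # → [1, 3, 5, 6]
```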

2. F (x, y, z) = ∑m (1, 2, 6, 7)
Solution:
Implementation table:
Multiplexer Implementation:

3. F (A, B, C) = ∑m (1, 2, 4, 5)
Solution:
Variables, n= 3 (A, B, C)
Select lines = n-1 = 2 (S1, S0)
2^(n-1) to 1 MUX i.e., 2^2 to 1 = 4 to 1 MUX
Inputs = 2^(n-1) = 2^2 = 4 (D0, D1, D2, D3)
Implementation table:

Multiplexer Implementation:
4. F( P, Q, R, S)= ∑m (0, 1, 3, 4, 8, 9, 15)

Solution:
Variables, n= 4 (P, Q, R, S)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:
5. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (A, B, C, D) = ∑m (0, 1, 2, 4, 6, 9, 12, 14)

Solution:

Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation (Using 8: 1 MUX):

Using 4: 1 MUX:
6. F (A, B, C, D) = ∑m (1, 3, 4, 11, 12, 13, 14, 15)

Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:
7. Implement the Boolean function using 8: 1 multiplexer.
F (A, B, C, D) = A’BD’ + ACD + B’CD + A’C’D.
Solution:
Convert into standard SOP form,
= A‘BD‘ (C‘+C) + ACD (B‘+B) + B‘CD (A‘+A) + A‘C‘D (B‘+B)
= A‘BC‘D‘ + A‘BCD‘ + AB‘CD + ABCD + A‘B‘CD + AB‘CD + A‘B‘C‘D + A‘BC‘D
= A‘BC‘D‘ + A‘BCD‘ + AB‘CD + ABCD + A‘B‘CD + A‘B‘C‘D + A‘BC‘D
= m4 + m6 + m11 + m15 + m3 + m1 + m5
= ∑m (1, 3, 4, 5, 6, 11, 15)

Implementation table:
Multiplexer Implementation:

8. Implement the Boolean function using 8: 1 multiplexer.


F (A, B, C, D) = AB’D + A’C’D + B’CD’ + AC’D.
Solution:
Convert into standard SOP form,
= AB‘D (C‘+C) + A‘C‘D (B‘+B) + B‘CD‘ (A‘+A) + AC‘D (B‘+B)
= AB‘C‘D + AB‘CD + A‘B‘C‘D + A‘BC‘D + A‘B‘CD‘ + AB‘CD‘ + AB‘C‘D + ABC‘D
= AB‘C‘D + AB‘CD + A‘B‘C‘D + A‘BC‘D + A‘B‘CD‘ + AB‘CD‘ + ABC‘D
= m9 + m11 + m1 + m5 + m2 + m10 + m13
= ∑m (1, 2, 5, 9, 10, 11, 13).
Implementation Table:

Multiplexer Implementation:

9. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (w, x, y, z) = ∑m (1, 2, 3, 6, 7, 8, 11, 12, 14)

Solution:
Variables, n= 4 (w, x, y, z)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:
Multiplexer Implementation (Using 8:1 MUX):

(Using 4:1 MUX):


10. Implement the Boolean function using 8: 1 multiplexer
F (A, B, C, D) = ∏m (0, 3, 5, 8, 9, 10, 12, 14)
Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:
11. Implement the Boolean function using 8: 1 multiplexer
F (A, B, C, D) = ∑m (0, 2, 6, 10, 11, 12, 13) + d (3, 8, 14)
Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX i.e., 2^3 to 1 = 8 to 1 MUX
Inputs = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation Table:

Multiplexer Implementation:
12. An 8×1 multiplexer has inputs A, B and C connected to the selection inputs S2, S1,and
S0 respectively. The data inputs I0 to I7 are as follows:
I1=I2=I7= 0; I3=I5= 1; I0=I4= D; I6= D'.
Determine the Boolean function that the multiplexer implements.
Multiplexer Implementation:

Implementation table:
F (A, B, C, D) = ∑m (3, 5, 6, 8, 11, 12, 13).

DEMULTIPLEXER:

Demultiplex means one into many. Demultiplexing is the process of taking information
from one input and transmitting the same over one of several outputs.
A demultiplexer is a combinational logic circuit that receives information on a single
input and transmits the same information over one of several (2^n) output lines.

Block diagram of demultiplexer

The block diagram of a demultiplexer, which is opposite to a multiplexer in its operation,
is shown above. The circuit has one input signal, 'n' select signals and 2^n output signals. The
select inputs determine to which output the data input will be connected. As the serial data
is changed to parallel data, i.e., the input is caused to appear on one of the 2^n output lines,
the demultiplexer is also called a "data distributor" or a serial-to-parallel converter.
1-to-4 Demultiplexer:
A 1-to-4 demultiplexer has a single input, Din, four outputs (Y0 to Y3) and two
select inputs (S1 and S0).
Logic Symbol

The input variable Din has a path to all four outputs, but the input information is directed to
only one of the output lines. The truth table of the 1-to-4 demultiplexer is shown below.

Enable S1 S0 Din Y0 Y1 Y2 Y3
0 x x x 0 0 0 0
1 0 0 0 0 0 0 0
1 0 0 1 1 0 0 0
1 0 1 0 0 0 0 0
1 0 1 1 0 1 0 0
1 1 0 0 0 0 0 0
1 1 0 1 0 0 1 0
1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 1
Truth table of 1-to-4 demultiplexer

From the truth table, it is clear that the data input, Din is connected to the output Y0,
when S1= 0 and S0= 0 and the data input is connected to output Y1 when S1= 0 and S0= 1.
Similarly, the data input is connected to output Y2 and Y3 when S1= 1 and S0= 0 and when S1=
1 and S0= 1, respectively. Also, from the truth table, the expression for outputs can be written as
follows,

Y0 = S1’S0’Din
Y1 = S1’S0 Din
Y2 = S1S0’Din
Y3 = S1S0 Din
Logic diagram of 1-to-4 demultiplexer

Now, using the above expressions, a 1-to-4 demultiplexer can be implemented using four
3-input AND gates and two NOT gates. Here, the input data line Din, is connected to all the
AND gates. The two select lines S1, S0 enable only one gate at a time, and the data that
appears on the input line passes through the selected gate to the associated output line.
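The four output expressions above can be modelled compactly (the output list is ordered [Y0, Y1, Y2, Y3]; an enable input is included as in the truth table):

```python
def demux_1to4(din, s1, s0, enable=1):
    """Route Din to the one output selected by (S1, S0); others stay 0."""
    y = [0, 0, 0, 0]                 # [Y0, Y1, Y2, Y3]
    if enable:
        y[2 * s1 + s0] = din         # selected output follows Din
    return y

print(demux_1to4(1, s1=0, s0=1))  # → [0, 1, 0, 0]: Din appears on Y1
```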

1-to-8 Demultiplexer:
A 1-to-8 demultiplexer has a single input, Din, eight outputs (Y0 to Y7) and three
select inputs (S2, S1 and S0). It distributes one input line to eight output lines based on the
select inputs. The truth table of 1-to-8 demultiplexer is shown below.

Din S2 S1 S0 Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0
0 x x x 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1
1 0 0 1 0 0 0 0 0 0 1 0
1 0 1 0 0 0 0 0 0 1 0 0
1 0 1 1 0 0 0 0 1 0 0 0
1 1 0 0 0 0 0 1 0 0 0 0
1 1 0 1 0 0 1 0 0 0 0 0
1 1 1 0 0 1 0 0 0 0 0 0
1 1 1 1 1 0 0 0 0 0 0 0

Truth table of 1-to-8 demultiplexer


From the above truth table, it is clear that the data input is connected with one of the
eight outputs based on the select inputs. Now from this truth table, the expression for eight
outputs can be written as follows:

Now using the above expressions, the logic diagram of a 1-to-8 demultiplexer can be drawn as
shown below. Here, the single data line, Din is connected to all the eight AND gates, but only
one of the eight AND gates will be enabled by the select input lines. For example, if S2S1S0=
000, then only AND gate-0 will be enabled and thereby the data input, Din will appear at Y0.
Similarly, the different combinations of the select inputs, the input Din will appear at the
respective output.
Logic diagram of 1-to-8 demultiplexer
1. Design 1:8 demultiplexer using two 1:4 DEMUX.

2. Implement full subtractor using demultiplexer.


Inputs Outputs
A B Bin Difference(D) Borrow(Bout
)
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
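The demultiplexer-based full subtractor can be checked behaviourally: with Din tied to 1, a 1-to-8 demultiplexer acts as a minterm generator, and OR gates (an assumed part of this sketch) collect the minterms of D and Bout read off the truth table above:

```python
def full_subtractor_via_demux(a, b, bin_):
    """Demux Din = 1 to line m = (A,B,Bin); OR decoded lines into D and Bout."""
    y = [0] * 8
    y[4 * a + 2 * b + bin_] = 1           # 1-to-8 demux with Din tied to 1
    d    = y[1] | y[2] | y[4] | y[7]      # Difference = Σm(1, 2, 4, 7)
    bout = y[1] | y[2] | y[3] | y[7]      # Borrow out = Σm(1, 2, 3, 7)
    return d, bout

print(full_subtractor_via_demux(0, 1, 1))  # → (0, 1), matching the table row 0 1 1
```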

INTRODUCTION - SEQUENTIAL CIRCUIT
In combinational logic circuits, the outputs at any instant of time depend only on
the input signals present at that time. For a change in input, the output occurs
immediately.

Combinational Circuit- Block Diagram

Sequential logic circuits consist of combinational circuits to which storage
elements are connected to form a feedback path. The storage elements are devices
capable of storing binary information, either 1 or 0.
The information stored in the memory elements at any given time defines the present
state of the sequential circuit. The present state and the external inputs determine the
output and the next state of sequential circuits.
Sequential Circuit- Block Diagram

Thus in sequential circuits, the output variables depend not only on the present
input variables but also on the past history of input variables.
The rotary channel selector knob on an old-fashioned TV is like a combinational
circuit: its output selects a channel based only on its current input, the position of the
knob. The channel-up and channel-down push buttons on a TV are like a sequential
circuit: the channel selection depends on the past sequence of up/down pushes.
The comparison between combinational and sequential circuits is given in table
below.

S.No  Combinational logic                           Sequential logic
1     The output variable, at all times, depends    The output variable depends not only on the
      only on the combination of input variables.   present input but also on the past history
                                                    of inputs.
2     Memory unit is not required.                  Memory unit is required to store the past
                                                    history of input variables.
3     Faster in speed.                              Slower than combinational circuits.
4     Easy to design.                               Comparatively harder to design.
5     Eg. Parallel adder                            Eg. Serial adder

Classification of Logic Circuits

The sequential circuits can be classified depending on the timing of their signals:
 Synchronous sequential circuits
 Asynchronous sequential circuits.
In synchronous sequential circuits, signals can affect the memory elements only at discrete
instants of time. In asynchronous sequential circuits change in input signals can affect memory
element at any instant of time. The memory elements used in both circuits are Flip-Flops,
which are capable of storing 1-bit information.

S.No  Synchronous sequential circuits             Asynchronous sequential circuits
1     Memory elements are clocked Flip-Flops.     Memory elements are either unclocked
                                                  Flip-Flops or time delay elements.
2     The change in input signals can affect      The change in input signals can affect the
      the memory element upon activation of       memory element at any instant of time.
      the clock signal.
3     The maximum operating speed of the clock    Because of the absence of a clock, it can
      depends on the time delays involved.        operate faster than synchronous circuits.
4     Easier to design.                           More difficult to design.

FLIP-FLOPS
The state of a Flip-Flop is switched by a momentary change in the input signal. This
momentary change is called a trigger and the transition it causes is said to trigger the Flip-
Flop. Clocked Flip-Flops are triggered by pulses. A clock pulse starts from an initial value of
0, goes momentarily to 1 and, after a short time, returns to its initial 0 value.
Latches are controlled by enable signal, and they are level triggered, either positive level
triggered or negative level triggered. The output is free to change according to the S and R
input values, when active level is maintained at the enable input.
Flip-Flops are different from latches. Flip-Flops are pulse or clock edge triggered
instead of level triggered.
EDGE TRIGGERED FLIP-FLOPS

Flip-Flops are synchronous bistable devices (has two outputs Q and Q’). In this case,
the term synchronous means that the output changes state only at a specified point on the
triggering input called the clock (CLK), i.e., changes in the output occur in synchronization
with the clock.
An edge-triggered Flip-Flop changes state either at the positive edge (rising edge) or at the
negative edge (falling edge) of the clock pulse and is sensitive to
its inputs only at this transition of the clock. The different types of edge-triggered Flip-Flops are—
 S-R Flip-Flop,

 J-K Flip-Flop,

 D Flip-Flop,

 T Flip-Flop.

Although the S-R Flip-Flop is not available in IC form, it is the basis for the D
and J-K Flip-Flops. Each type can be either positive edge-triggered (no bubble at C
input) or negative edge-triggered (bubble at C input). The key to identifying an edge-
triggered Flip-Flop by its logic symbol is the small triangle inside the block at the clock
(C) input. This triangle is called the dynamic input indicator.
S-R Flip-Flop
The S and R inputs of the S-R Flip-Flop are called synchronous inputs because
data on these inputs are transferred to the Flip-Flop's output only on the triggering edge
of the clock pulse. The circuit is similar to SR latch except enable signal is replaced by
clock pulse (CLK). On the positive edge of the clock pulse, the circuit
responds to the S and R inputs.

SR Flip-Flop
When S is HIGH and R is LOW, the Q output goes HIGH on the triggering
edge of the clock pulse, and the Flip-Flop is SET. When S is LOW and R is HIGH, the
Q output goes LOW on the triggering edge of the clock pulse, and the Flip-Flop is
RESET. When both S and R are LOW, the output does not change from its prior state. An
invalid condition exists when both S and R are HIGH.
CLK  S  R  Qn  Qn+1  State
1    0  0  0   0     No Change (NC)
1    0  0  1   1     No Change (NC)
1    0  1  0   0     Reset
1    0  1  1   0     Reset
1    1  0  0   1     Set
1    1  0  1   1     Set
1    1  1  0   x     Indeterminate *
1    1  1  1   x     Indeterminate *
0    x  x  0   0     No Change (NC)
0    x  x  1   1     No Change (NC)
Truth table for SR Flip-Flop
Input and output waveforms of SR Flip-Flop

J-K Flip-Flop:

JK means Jack Kilby, Texas Instrument (TI) Engineer, who invented IC in 1958.
JK Flip-Flop has two inputs J(set) and K(reset). A JK Flip-Flop can be obtained from the
clocked SR Flip-Flop by augmenting two AND gates as shown below.

JK Flip Flop
The data input J and the output Q’ are applied to the first AND gate and its output
(JQ’) is applied to the S input of SR Flip-Flop. Similarly, the data input K and the output
Q are applied to the second AND gate and its output (KQ) is applied to the R input of
SR Flip-Flop.
J= K= 0
When J=K= 0, both AND gates are disabled. Therefore clock pulse have no
effect, hence the Flip-Flop output is same as the previous output.

J= 0, K= 1
When J= 0 and K= 1, AND gate 1 is disabled i.e., S= 0 and R= 1. This condition
will reset the Flip-Flop to 0.

J= 1, K= 0
When J= 1 and K= 0, AND gate 2 is disabled i.e., S= 1 and R= 0. Therefore the
Flip-Flop will set on the application of a clock pulse.

J= K= 1
When J= K= 1, it is possible to set or reset the Flip-Flop. If Q is high, AND
gate 2 passes on a reset pulse to the next clock. When Q is low, AND gate 1 passes on a
set pulse to the next clock. Either way, Q changes to the complement of the last state,
i.e., it toggles. Toggle means to switch to the opposite state.
The truth table of JK Flip-Flop is given below.

CLK   J  K   Qn+1   State
1 0 0 Qn No Change
1 0 1 0 Reset
1 1 0 1 Set
1 1 1 Qn’ Toggle

Input and output waveforms of JK Flip-Flop


Characteristic table and Characteristic equation:
The characteristic table for JK Flip-Flop is shown in the table below. From the table,
the K-map for the next state transition (Qn+1) can be drawn and the simplified logic
expression which represents the characteristic equation of JK Flip-Flop can be found.

Characteristic table

Qn  J  K  Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

K-map Simplification:

Characteristic equation: Qn+1= JQn’+ K’Qn.
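The characteristic equation can be verified against the JK truth table behaviour with a few lines of Python (the function name is illustrative):

```python
def jk_next(q, j, k):
    """Characteristic equation of the JK Flip-Flop: Qn+1 = J·Qn' + K'·Qn."""
    return (j & (q ^ 1)) | ((k ^ 1) & q)

assert jk_next(0, 0, 0) == 0 and jk_next(1, 0, 0) == 1   # J=K=0: no change
assert jk_next(0, 0, 1) == 0 and jk_next(1, 0, 1) == 0   # K=1: reset
assert jk_next(0, 1, 0) == 1 and jk_next(1, 1, 0) == 1   # J=1: set
assert jk_next(0, 1, 1) == 1 and jk_next(1, 1, 1) == 0   # J=K=1: toggle
print("Qn+1 = JQn' + K'Qn matches the JK truth table")
```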

D Flip-Flop:

Like in D latch, in D Flip-Flop the basic SR Flip-Flop is used with complemented


inputs. The D Flip-Flop is similar to D-latch except clock pulse is used instead of
enable input.
D Flip-Flop

To eliminate the undesirable indeterminate state of the SR Flip-Flop, the inputs
S and R must never be equal to 1 at the same time. This is ensured in the D Flip-Flop.
The D (delay) Flip-Flop has one data input, called the delay input, and a clock pulse
input. The D Flip-Flop built from an SR Flip-Flop is shown below.

The truth table of D Flip-Flop is given below.

Clock  D  Qn+1  State
1      0  0     Reset
1      1  1     Set
0      x  Qn    No Change
Truth table for D Flip-Flop

Input and output waveforms of clocked D Flip-Flop


Looking at the truth table for D Flip-Flop we can realize that Qn+1 function
follows the D input at the positive going edges of the clock pulses.

Characteristic table and Characteristic equation:

The characteristic table for D Flip-Flop shows that the next state of the Flip- Flop is
independent of the present state since Qn+1 is equal to D. This means that an input pulse
will transfer the value of input D into the output of the Flip-Flop independent of the value of
the output before the pulse was applied.
The characteristic equation is derived from K-map.

Qn D Qn+1
0 0 0

0 1 1

1 0 0

1 1 1
Characteristic table

Characteristic equation: Qn+1= D.

T Flip-Flop
The T (Toggle) Flip-Flop is a modification of the JK Flip-Flop. It is obtained
from JK Flip-Flop by connecting both inputs J and K together, i.e., single input.
Regardless of the present state, the Flip-Flop complements its output when the clock pulse
occurs while input T= 1.

T Flip-Flop

When T= 0, Qn+1= Qn, i.e., the next state is the same as the present state and no
change occurs.
When T= 1, Qn+1= Qn’, i.e., the next state is the complement of the present state.

The truth table of T Flip-Flop is given below.


T  Qn+1  State
0  Qn    No Change
1  Qn’   Toggle
Truth table for T Flip-Flop
Characteristic table and Characteristic equation:
The characteristic table for T Flip-Flop is shown below and characteristic equation is
derived using K-map.
Qn T Qn+1
0 0 0

0 1 1

1 0 1

1 1 0
K-map Simplification:

Characteristic equation: Qn+1= TQn’+ T’Qn.
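The characteristic equation (equivalent to T XOR Qn) can be exercised in a short sketch; holding T at 1 shows the toggle behaviour described above:

```python
def t_next(q, t):
    """Characteristic equation Qn+1 = T·Qn' + T'·Qn (i.e., T XOR Qn)."""
    return (t & (q ^ 1)) | ((t ^ 1) & q)

# With T held at 1, the output complements on every clock pulse (toggle):
q, history = 0, []
for _ in range(4):
    q = t_next(q, 1)
    history.append(q)
print(history)  # → [1, 0, 1, 0]
```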

Master-Slave JK Flip-Flop

A master-slave Flip-Flop is constructed using two separate JK Flip-Flops. The


first Flip-Flop is called the master. It is driven by the positive edge of the clock pulse. The
second Flip-Flop is called the slave. It is driven by the negative edge of the clock pulse.
The logic diagram of a master-slave JK Flip-Flop is shown below.

Logic diagram
When the clock pulse has a positive edge, the master acts according to its J-K
inputs, but the slave does not respond, since it requires a negative edge at the clock input.
When the clock input has a negative edge, the slave Flip-Flop copies the master
outputs. But the master does not respond since it requires a positive edge at its clock
input.
The clocked master-slave J-K Flip-Flop using NAND gates is shown below.

Master-Slave JK Flip-Flop

APPLICATION TABLE (OR) EXCITATION TABLE:

The characteristic table is useful for analysis and for defining the operation of
the Flip-Flop. It specifies the next state (Qn+1) when the inputs and present state are
known.
The excitation or application table is useful for design process. It is used to find
the Flip-Flop input conditions that will cause the required transition, when the present state
(Qn) and the next state (Qn+1) are known.
SR Flip-Flop:

Characteristic Table

Present State  Inputs   Next State
Qn             S  R     Qn+1
0              0  0     0
0              0  1     0
0              1  0     1
0              1  1     x
1              0  0     1
1              0  1     0
1              1  0     1
1              1  1     x

Modified Table

Present State  Next State  Inputs        Inputs
Qn             Qn+1        S  R          S  R
0              0           0  0
0              0           0  1          0  x
0              1           1  0          1  0
1              0           0  1          0  1
1              1           0  0
1              1           1  0          x  0

Excitation Table

Present State  Next State  Inputs
Qn             Qn+1        S  R
0              0           0  x
0              1           1  0
1              0           0  1
1              1           x  0

The above table presents the excitation table for SR Flip-Flop. It consists of present
state (Qn), next state (Qn+1) and a column for each input to show how the required transition
is achieved.
There are 4 possible transitions from present state to next state. The required Input
conditions for each of the four transitions are derived from the information available in the
characteristic table. The symbol ‘x’ denotes the don’t care condition, it does not matter
whether the input is 0 or 1.
JK Flip-Flop:

Characteristic Table

Present State  Inputs   Next State
Qn             J  K     Qn+1
0              0  0     0
0              0  1     0
0              1  0     1
0              1  1     1
1              0  0     1
1              0  1     0
1              1  0     1
1              1  1     0

Modified Table

Present State  Next State  Inputs        Inputs
Qn             Qn+1        J  K          J  K
0              0           0  0
0              0           0  1          0  x
0              1           1  0
0              1           1  1          1  x
1              0           0  1
1              0           1  1          x  1
1              1           0  0
1              1           1  0          x  0

Excitation Table

Present State  Next State  Inputs
Qn             Qn+1        J  K
0              0           0  x
0              1           1  x
1              0           x  1
1              1           x  0
D Flip-Flop

Characteristic Table            Excitation Table

Qn  D  Qn+1                     Qn  Qn+1  D
0   0  0                        0   0     0
0   1  1                        0   1     1
1   0  0                        1   0     0
1   1  1                        1   1     1

T Flip-Flop

Present State    Input    Next State
Qn               T        Qn+1
0                0        0
0                1        1
1                0        1
1                1        0

Characteristic Table

Present State    Next State    Input
Qn               Qn+1          T
0                0             0
0                1             1
1                0             1
1                1             0

Excitation Table
REALIZATION OF ONE FLIP-FLOP USING OTHER FLIP-FLOPS

It is possible to convert one Flip-Flop into another Flip-Flop with some


additional gates or simply doing some extra connection. The realization of one Flip- Flop
using other Flip-Flops is implemented by the use of characteristic tables and excitation
tables. Let us see few conversions among Flip-Flops.

SR Flip-Flop to D Flip-Flop
SR Flip-Flop to JK Flip-Flop
SR Flip-Flop to T Flip-Flop
JK Flip-Flop to T Flip-Flop
JK Flip-Flop to D Flip-Flop
D Flip-Flop to T Flip-Flop
T Flip-Flop to D Flip-Flop

SR Flip-Flop to D Flip-Flop:
 Write the characteristic table for required Flip-Flop (D Flip-Flop).
 Write the excitation table for given Flip-Flop (SR Flip-Flop).
 Determine the expression for the given Flip-Flop inputs (S and R) by using K-map.
 Draw the Flip-Flop conversion logic diagram to obtain the required Flip-
Flop (D Flip-Flop) by using the above obtained expression.

The excitation table for the above conversion is

Required Flip-Flop (D)                      Given Flip-Flop (SR)
Input    Present state    Next state       Flip-Flop Inputs
D        Qn               Qn+1             S   R
0 0 0 0 x
0 1 0 0 1
1 0 1 1 0
1 1 1 x 0

D Flip-Flop
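Solving the K-maps for this excitation table gives the standard result S = D and R = D' (assumed here; the text leaves the K-map solution to the figure). A quick exhaustive check that the converted circuit behaves as a D Flip-Flop:

```python
# Verify the SR-to-D conversion S = D, R = D' by exhaustive simulation.
def sr_next(q, s, r):
    return s | ((1 - r) & q)        # characteristic equation: Q+ = S + R'Q

def d_via_sr(q, d):
    s, r = d, 1 - d                 # conversion logic: S = D, R = D'
    return sr_next(q, s, r)

for q in (0, 1):
    for d in (0, 1):
        assert d_via_sr(q, d) == d  # a D Flip-Flop simply follows its D input
print("SR-to-D conversion verified")
```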

SR Flip-Flop to JK Flip-Flop

The excitation table for the above conversion is,


Inputs      Present state    Next state    Flip-Flop Inputs
J   K       Qn               Qn+1          S   R
0 0 0 0 0 x
0 0 1 1 x 0
0 1 0 0 0 x
0 1 1 0 0 1
1 0 0 1 1 0
1 0 1 1 x 0
1 1 0 1 1 0
1 1 1 0 0 1

JK Flip-Flop

SR Flip-Flop to T Flip-Flop
The excitation table for the above conversion is

Input    Present state    Next state    Flip-Flop Inputs
T        Qn               Qn+1          S   R
0 0 0 0 x
0 1 1 x 0
1 0 1 1 0
1 1 0 0 1

JK Flip-Flop to T Flip-Flop

The excitation table for the above conversion is


Input    Present state    Next state    Flip-Flop Inputs
T        Qn               Qn+1          J   K
0 0 0 0 x
0 1 1 x 0
1 0 1 1 x
1 1 0 x 1

JK Flip-Flop to D Flip-Flop

The excitation table for the above conversion is


Input    Present state    Next state    Flip-Flop Inputs
D        Qn               Qn+1          J   K
0 0 0 0 x
0 1 0 x 1
1 0 1 1 x
1 1 1 x 0

D Flip-Flop to T Flip-Flop

The excitation table for the above conversion is

Input    Present state    Next state    Flip-Flop Input
T        Qn               Qn+1          D
0        0                0             0
0        1                1             1
1        0                1             1
1        1                0             0

T Flip-Flop to D Flip-Flop

The excitation table for the above conversion is

Input    Present state    Next state    Flip-Flop Input
D        Qn               Qn+1          T
0 0 0 0
0 1 0 1
1 0 1 1
1 1 1 0

SHIFT REGISTERS:

A register is simply a group of Flip-Flops that can be used to store a binary
number. There must be one Flip-Flop for each bit in the binary number. For instance, a
register used to store an 8-bit binary number must have 8 Flip-Flops.
The Flip-Flops must be connected such that the binary number can be entered
(shifted) into the register and possibly shifted out. A group of Flip-Flops connected to
provide either or both of these functions is called a shift register.
The bits in a binary number (data) can be moved from one place to another
in either of two ways. The first method involves shifting the data one bit at a time in a
serial fashion, beginning with either the most significant bit (MSB) or the least
significant bit (LSB). This technique is referred to as serial shifting. The second
method involves shifting all the data bits simultaneously and is referred to as parallel
shifting.
There are two ways to shift data into a register (serial or parallel) and similarly two
ways to shift the data out of the register. This leads to the construction of four basic
register types—
i. Serial in- serial out,

ii. Serial in- parallel out,

iii. Parallel in- serial out,

iv. Parallel in- parallel out.

Serial-In Serial-Out Shift Register:

The serial in/serial out shift register accepts data serially, i.e., one bit at a time on a
single line. It produces the stored information on its output also in serial form.

Serial-In Serial-Out Shift Register
The entry of the four bits 1010 into the register is illustrated below, beginning with
the right-most bit. The register is initially clear. The 0 is put onto the data input line,
making D=0 for FF0. When the first clock pulse is applied, FF0 is reset, thus storing the 0.
Next the second bit, which is a 1, is applied to the data input, making D = 1 for
FF0 and D = 0 for FF1 because the D input of FF1 is connected to the Q0 output. When
the second clock pulse occurs, the 1 on the data input is shifted into FF0, causing FF0 to
set, and the 0 that was in FF0 is shifted into FF1.
The third bit, a 0, is now put onto the data-input line, and a clock pulse is applied.
The 0 is entered into FF0, the 1 stored in FF0 is shifted into FF1, and the 0 stored in FF1
is shifted into FF2.
The last bit, a 1, is now applied to the data input, and a clock pulse is applied. This
time the 1 is entered into FF0, the 0 stored in FF0 is shifted into FF1, the 1 stored in FF1 is
shifted into FF2, and the 0 stored in FF2 is shifted into FF3. This completes the serial
entry of the four bits into the shift register, where they can be stored for any length of
time as long as the Flip-Flops have dc power.
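The entry sequence just traced can be mirrored in a few lines of Python (a behavioral sketch, not from the text; the list holds [Q0, Q1, Q2, Q3]):

```python
# Behavioral model of a 4-bit serial-in serial-out shift register.
def shift_in(register, bit):
    """On a clock pulse each stage copies its left neighbour; FF0 takes the input."""
    return [bit] + register[:-1]    # register is [Q0, Q1, Q2, Q3]

reg = [0, 0, 0, 0]                  # register initially clear
for bit in [0, 1, 0, 1]:            # 1010 entered right-most bit first
    reg = shift_in(reg, bit)
print(reg)                          # stored bits: [1, 0, 1, 0] = Q0..Q3
```

After the four pulses Q0 = 1, Q1 = 0, Q2 = 1, Q3 = 0, matching the figure; four further pulses with 0 on the input would shift the word out of Q3 and leave the register clear.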

Four bits (1010) being entered serially into the register

To get the data out of the register, the bits must be shifted out serially and taken
off the Q3 output. After CLK4, the right-most bit, 0, appears on the Q3 output.
When clock pulse CLK5 is applied, the second bit appears on the Q3 output.
Clock pulse CLK6 shifts the third bit to the output, and CLK7 shifts the fourth bit to the
output. While the original four bits are being shifted out, more bits can be shifted in. In
the illustration, all zeros are shifted in to replace the original bits.

Four bits (1010) being entered serially-shifted out of the register and replaced by all zeros

Serial-In Parallel-Out Shift Register:


In this shift register, data bits are entered into the register in the same manner as in a
serial-in serial-out shift register, but the output is taken in parallel. Once the data are stored,
each bit appears on its respective output line and all bits are available simultaneously
instead of on a bit-by-bit basis.

Serial-In Parallel-Out Shift Register

Four bits (1111) being serially entered into the register

Parallel-In Serial-Out Shift Register:

In this type, the bits are entered in parallel i.e., simultaneously into their
respective stages on parallel lines.
A 4-bit parallel-in serial-out shift register is illustrated below. There are four data
input lines, X0, X1, X2 and X3 for entering data in parallel into the register. SHIFT/
LOAD input is the control input, which allows four bits of data to load in parallel into
the register.
When SHIFT/LOAD is LOW, gates G1, G2, G3 and G4 are enabled, allowing
each data bit to be applied to the D input of its respective Flip-Flop. When a clock pulse
is applied, the Flip-Flops with D = 1 will set and those with D = 0 will reset, thereby
storing all four bits simultaneously.

Parallel-In Serial-Out Shift Register

When SHIFT/LOAD is HIGH, gates G1, G2, G3 and G4 are disabled and
gates G5, G6 and G7 are enabled, allowing the data bits to shift right from one stage to
the next. The OR gates allow either the normal shifting operation or the parallel data-
entry operation, depending on which AND gates are enabled by the level on the
SHIFT/LOAD input.
Parallel-In Parallel-Out Shift Register:

In this type, there is simultaneous entry of all data bits and the bits appear on
parallel outputs simultaneously.

Parallel-In Parallel-Out Shift Register


UNIVERSAL SHIFT REGISTERS

If the register has shift and parallel load capabilities, then it is called a shift
register with parallel load or universal shift register. Shift register can be used for
converting serial data to parallel data, and vice-versa. If a parallel load capability is
added to a shift register, the data entered in parallel can be taken out in serial fashion by
shifting the data stored in the register.
The functions of universal shift register are:

 A clear control to clear the register to 0.

 A clock input to synchronize the operations.

 A shift-right control to enable the shift right operation and the serial input and
output lines associated with the shift right.
 A shift-left control to enable the shift left operation and the serial input and
output lines associated with the shift left.
 A parallel-load control to enable a parallel transfer and the n input lines
associated with the parallel transfer.
 ‘n’ parallel output lines.

 A control line that leaves the information in the register unchanged even
though the clock pulses are continuously applied.
It consists of four D-Flip-Flops and four 4 input multiplexers (MUX). S0 and S1
are the two selection inputs connected to all the four multiplexers. These two selection
inputs are used to select one of the four inputs of each multiplexer.
The input 0 in each MUX is selected when S1S0 = 00 and input 1 is selected when
S1S0 = 01. Similarly, inputs 2 and 3 are selected when S1S0 = 10 and S1S0 = 11
respectively. The inputs S1 and S0 control the mode of operation of the register.

4-Bit Universal Shift Register

When S1S0 = 00, the present value of the register is applied to the D inputs of the
Flip-Flops. This is done by connecting the output of each Flip-Flop to the 0 input of the
respective multiplexer. The next clock pulse transfers into each Flip-Flop the binary
value it held previously, and hence no change of state occurs.
When S1S0 = 01, terminal 1 of the multiplexer inputs has a path to the D inputs of the
Flip-Flops. This causes a shift-right operation, with the left serial input transferred into
Flip-Flop FF3.
When S1S0= 10, a shift-left operation results with the right serial input going into Flip-
Flop FF1.
Finally when S1S0= 11, the binary information on the parallel input lines (I1, I2, I3
and I4) are transferred into the register simultaneously during the next clock pulse. The
function table of bi-directional shift register with parallel inputs and parallel outputs is shown
below.

Mode Control
S1   S0    Operation
0    0     No change
0    1     Shift-right
1    0     Shift-left
1    1     Parallel load
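The four modes in the function table can be modeled directly (an illustrative behavioral sketch; the function name and register ordering are my own, not from the text):

```python
def universal_step(mode, reg, ser_in=0, parallel=None):
    """One clock pulse of a 4-bit universal shift register.
    mode: 0b00 hold, 0b01 shift right, 0b10 shift left, 0b11 parallel load.
    reg lists the stages left to right."""
    if mode == 0b00:                       # no change
        return reg[:]
    if mode == 0b01:                       # shift right, serial input on the left
        return [ser_in] + reg[:-1]
    if mode == 0b10:                       # shift left, serial input on the right
        return reg[1:] + [ser_in]
    return list(parallel)                  # parallel load

reg = [1, 0, 1, 1]
reg = universal_step(0b01, reg, ser_in=0)  # shift right -> [0, 1, 0, 1]
reg = universal_step(0b10, reg, ser_in=1)  # shift left  -> [1, 0, 1, 1]
reg = universal_step(0b11, reg, parallel=[0, 0, 1, 0])
print(reg)                                 # [0, 0, 1, 0]
```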

BI-DIRECTIONAL SHIFT REGISTERS:

A bidirectional shift register is one in which the data can be shifted either left or
right. It can be implemented by using gating logic that enables the transfer of a data bit
from one stage to the next stage to the right or to the left depending on the level of a
control line.
A 4-bit bidirectional shift register is shown below. A HIGH on the RIGHT/LEFT
control input allows data bits inside the register to be shifted to the right, and a LOW
enables data bits inside the register to be shifted to the left.
When the RIGHT/LEFT control input is HIGH, gates G1, G2, G3 and G4 are
enabled, and the state of the Q output of each Flip-Flop is passed through to the D input
of the following Flip-Flop. When a clock pulse occurs, the data bits are shifted one place
to the right.
When the RIGHT/LEFT control input is LOW, gates G5, G6, G7 and G8 are
enabled, and the Q output of each Flip-Flop is passed through to the D input of the
preceding Flip-Flop. When a clock pulse occurs, the data bits are then shifted one place
to the left.

4-bit bi-directional shift register

SYNCHRONOUS COUNTERS

Flip-Flops can be connected together to perform counting operations. Such a


group of Flip- Flops is a counter. The number of Flip-Flops used and the way in which
they are connected determine the number of states (called the modulus) and also the
specific sequence of states that the counter goes through during each complete cycle.
Counters are classified into two broad categories according to the way they are
clocked:
Asynchronous counters,
Synchronous counters.
In asynchronous (ripple) counters, the first Flip-Flop is clocked by the external clock
pulse and then each successive Flip-Flop is clocked by the output of the preceding Flip-
Flop.
In synchronous counters, the clock input is connected to all of the Flip-Flops so that
they are clocked simultaneously. Within each of these two categories, counters are
classified primarily by the type of sequence, the number of states, or the number of Flip-
Flops in the counter.
The term ‘synchronous’ refers to events that have a fixed time relationship with
each other. In synchronous counter, the clock pulses are applied to all Flip- Flops
simultaneously. Hence there is minimum propagation delay.

S.No   Asynchronous (ripple) counter              Synchronous counter
1      All the Flip-Flops are not clocked         All the Flip-Flops are clocked
       simultaneously.                            simultaneously.
2      The delay times of all Flip-Flops are      There is minimum propagation delay.
       added; therefore there is considerable
       propagation delay.
3      Speed of operation is low.                 Speed of operation is high.
4      Logic circuit is very simple even for      Design involves complex logic circuit
       more number of states.                     as the number of states increases.
5      Minimum number of logic devices is         The number of logic devices is more
       needed.                                    than in ripple counters.
6      Cheaper than synchronous counters.         Costlier than ripple counters.

2-Bit Synchronous Binary Counter

In this counter the clock signal is connected in parallel to clock inputs of both the
Flip-Flops (FF0 and FF1). The output of FF0 is connected to J1 and K1 inputs of the second
Flip-Flop (FF1).

2-Bit Synchronous Binary Counter

Assume that the counter is initially in the binary 0 state: i.e., both Flip-Flops are
RESET. When the positive edge of the first clock pulse is applied, FF0 will toggle
because J0 = K0 = 1, whereas the FF1 output will remain 0 because J1 = K1 = 0. After the
first clock pulse, Q0 = 1 and Q1 = 0.
When the leading edge of CLK2 occurs, FF0 will toggle and Q0 will go LOW.
Since FF1 has a HIGH (Q0 = 1) on its J1 and K1 inputs at the triggering edge of this
clock pulse, the Flip-Flop toggles and Q1 goes HIGH. Thus, after CLK2, Q0 = 0 and Q1 = 1.
When the leading edge of CLK3 occurs, FF0 again toggles to the SET state (Q0
= 1), and FF1 remains SET (Q1 = 1) because its J1 and K1 inputs are both LOW (Q0 = 0).
After this triggering edge, Q0 = 1 and Q1 = 1.
Finally, at the leading edge of CLK4, Q0 and Q1 go LOW because they both
have a toggle condition on their J1 and K1 inputs. The counter has now recycled to its
original state, Q0 = Q1 = 0.
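The four-state sequence traced above can be reproduced with a small behavioral model of the two JK Flip-Flops (an illustrative sketch using the JK characteristic equation, not part of the original circuit description):

```python
def jk_next(q, j, k):
    """JK characteristic equation: Q+ = JQ' + K'Q."""
    return (j & (1 - q)) | ((1 - k) & q)

q0, q1 = 0, 0          # both Flip-Flops initially RESET
states = []
for _ in range(4):
    # J0 = K0 = 1, so FF0 toggles every pulse; J1 = K1 = Q0, so FF1
    # toggles only when the old Q0 is 1 (both use the pre-clock Q0).
    q0, q1 = jk_next(q0, 1, 1), jk_next(q1, q0, q0)
    states.append((q1, q0))
print(states)          # [(0, 1), (1, 0), (1, 1), (0, 0)]
```

Note that the tuple assignment evaluates both new states from the old values, which models the simultaneous clocking of a synchronous counter.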

Timing diagram

3-Bit Synchronous Binary Counter

A 3 bit synchronous binary counter is constructed with three JK Flip-Flops and an


AND gate. The output of FF0 (Q0) changes on each clock pulse as the counter
progresses from its original state to its final state and then back to its original state. To
produce this operation, FF0 must be held in the toggle mode by constant HIGH, on its J0
and K0 inputs.
3-Bit Synchronous Binary Counter
The output of FF1 (Q1) goes to the opposite state following each clock pulse for
which Q0 = 1. This change occurs at CLK2, CLK4, CLK6, and CLK8. The CLK8 pulse
causes the counter to recycle. To produce this operation, Q0 is connected to the J1 and K1
inputs of FF1. When Q0 = 1 and a clock pulse occurs, FF1 is in the toggle mode and
therefore changes state. When Q0 = 0, FF1 is in the no-change mode and remains in its
present state.
The output of FF2 (Q2) changes state both times it is preceded by the unique
condition in which both Q0 and Q1 are HIGH. This condition is detected by the AND
gate and applied to the J2 and K2 inputs of FF2. Whenever Q0 = Q1 = 1, the output of
the AND gate makes J2 = K2 = 1 and FF2 toggles on the following clock pulse.
Otherwise, the J2 and K2 inputs of FF2 are held LOW by the AND gate output, and
FF2 does not change state.

CLOCK Pulse     Q2  Q1  Q0
Initially       0   0   0
1               0   0   1
2               0   1   0
3               0   1   1
4               1   0   0
5               1   0   1
6               1   1   0
7               1   1   1
8 (recycles)    0   0   0

Timing diagram
4-Bit Synchronous Binary Counter

This particular counter is implemented with negative edge-triggered Flip- Flops.


The reasoning behind the J and K input control for the first three Flip- Flops is the same
as previously discussed for the 3-bit counter. For the fourth stage, the Flip- Flop has to
change the state when Q0= Q1= Q2= 1. This condition is decoded by AND gate G3.

4-Bit Synchronous Binary Counter

Therefore, when Q0= Q1= Q2= 1, Flip-Flop FF3 toggles and for all other times it
is in a no-change condition. Points where the AND gate outputs are HIGH are indicated
by the shaded areas.

Timing diagram

4-Bit Synchronous Decade Counter: (BCD Counter):


BCD decade counter has a sequence from 0000 to 1001 (9). After 1001 state it
must recycle back to 0000 state. This counter requires four Flip-Flops and AND/OR
logic as shown below.
4-Bit Synchronous Decade Counter

CLOCK Pulse     Q3  Q2  Q1  Q0
Initially       0   0   0   0
1               0   0   0   1
2               0   0   1   0
3               0   0   1   1
4               0   1   0   0
5               0   1   0   1
6               0   1   1   0
7               0   1   1   1
8               1   0   0   0
9               1   0   0   1
10 (recycles)   0   0   0   0

 First, notice that FF0 (Q0) toggles on each clock pulse, so the logic equation for its
J0 and K0 inputs is

J0= K0= 1

This equation is implemented by connecting J0 and K0 to a constant HIGH level.

 Next, notice from table, that FF1 (Q1) changes on the next clock pulse each
time Q0 = 1 and Q3 = 0, so the logic equation for the J1 and K1 inputs is
J1= K1= Q0Q3’

This equation is implemented by ANDing Q0 with the complement of Q3 and connecting
the gate output to the J1 and K1 inputs of FF1.
 Flip-Flop 2 (Q2) changes on the next clock pulse each time both Q0 = Q1 = 1.
This requires an input logic equation as follows:

J2= K2= Q0Q1


This equation is implemented by ANDing Q0 and Q1 and connecting the gate output to
the J2 and K2 inputs of FF2.
 Finally, FF3 (Q3) changes to the opposite state on the next clock pulse each time
Q0 = 1, Q1 = 1, and Q2 = 1 (state 7), or when Q0 = 1 and Q3 = 1 (state 9). The
equation for this is as follows:

J3= K3= Q0Q1Q2+ Q0Q3

This function is implemented with the AND/OR logic connected to the J3 and K3 inputs
of FF3.
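All four input equations can be checked at once by clocking a behavioral model for ten pulses and confirming that it visits 0001 through 1001 and then recycles to 0000 (an illustrative sketch, not from the text):

```python
def jk_next(q, j, k):
    # JK characteristic equation: Q+ = JQ' + K'Q
    return (j & (1 - q)) | ((1 - k) & q)

q = [0, 0, 0, 0]                             # [Q0, Q1, Q2, Q3], counter cleared
seq = []
for _ in range(10):
    q0, q1, q2, q3 = q
    j0 = k0 = 1                              # J0 = K0 = 1
    j1 = k1 = q0 & (1 - q3)                  # J1 = K1 = Q0.Q3'
    j2 = k2 = q0 & q1                        # J2 = K2 = Q0.Q1
    j3 = k3 = (q0 & q1 & q2) | (q0 & q3)     # J3 = K3 = Q0Q1Q2 + Q0Q3
    q = [jk_next(q0, j0, k0), jk_next(q1, j1, k1),
         jk_next(q2, j2, k2), jk_next(q3, j3, k3)]
    seq.append(q[3] * 8 + q[2] * 4 + q[1] * 2 + q[0])
print(seq)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```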

Timing diagram

Synchronous UP/DOWN Counter

An up/down counter is a bidirectional counter, capable of progressing in either


direction through a certain sequence. A 3-bit binary counter that advances upward through its
sequence (0, 1, 2, 3, 4, 5, 6, 7) and then can be reversed so that it
goes through the sequence in the opposite direction (7, 6, 5, 4, 3, 2, 1,0) is an illustration of
up/down sequential operation.
The complete up/down sequence for a 3-bit binary counter is shown in table
below. The arrows indicate the state-to-state movement of the counter for both its UP and
its DOWN modes of operation. An examination of Q0 for both the up and down
sequences shows that FF0 toggles on each clock pulse. Thus, the J0 and K0 inputs of FF0
are,
J0= K0= 1

To form a synchronous UP/DOWN counter, the control input (UP/DOWN) is
used to allow either the normal output or the inverted output of one Flip-Flop to the J
and K inputs of the next Flip-Flop. When UP/DOWN= 1, the MOD 8 counter will
count from 000 to 111 and UP/DOWN= 0, it will count from 111 to 000.

When UP/DOWN= 1, it will enable AND gates 1 and 3 and disable AND gates
2 and 4. This allows the Q0 and Q1 outputs through the AND gates to the J and K inputs
of the following Flip-Flops, so the counter counts up as pulses are applied.
When UP/DOWN= 0, the reverse action takes place.

J1= K1= (Q0.UP)+ (Q0’.DOWN)

J2= K2= (Q0. Q1.UP)+ (Q0’.Q1’.DOWN)

3-bit UP/DOWN Synchronous Counter
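These input equations can be exercised in a small model: with UP/DOWN = 1 the state advances 1, 2, …, 7, 0, and with UP/DOWN = 0 it steps downward (an illustrative behavioral sketch; the function names are my own):

```python
def jk_next(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)     # Q+ = JQ' + K'Q

def step(state, up):
    """One clock pulse of the 3-bit up/down counter; up=1 counts up, up=0 down."""
    q0, q1, q2 = state
    down = 1 - up
    j1 = (q0 & up) | ((1 - q0) & down)                   # J1 = K1 = Q0.UP + Q0'.DOWN
    j2 = (q0 & q1 & up) | ((1 - q0) & (1 - q1) & down)   # J2 = K2 = Q0Q1.UP + Q0'Q1'.DOWN
    return (jk_next(q0, 1, 1), jk_next(q1, j1, j1), jk_next(q2, j2, j2))

s = (0, 0, 0)
ups = []
for _ in range(8):
    s = step(s, up=1)
    ups.append(s[2] * 4 + s[1] * 2 + s[0])
print(ups)                      # counting up: [1, 2, 3, 4, 5, 6, 7, 0]
print(step((0, 0, 0), up=0))    # counting down from 000 wraps to 111: (1, 1, 1)
```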

MODULUS-N-COUNTERS

The counter with ‘n’ Flip-Flops has maximum MOD number 2^n. Find the
number of Flip-Flops (n) required for the desired MOD number (N) using the
equation,
2^n ≥ N
(i) For example, a 3-bit binary counter is a MOD-8 counter. The basic counter
can be modified to produce MOD numbers less than 2^n by allowing the
counter to skip states that are normally part of the counting sequence.
n = 3, N = 8
2^n = 2^3 = 8 = N

(ii) MOD 5 Counter:
2^n ≥ N = 5
2^2 = 4, which is less than N.
2^3 = 8 > N (5)
Therefore, 3 Flip-Flops are required.

(iii) MOD 10 Counter:
2^n ≥ N = 10
2^3 = 8, which is less than N.
2^4 = 16 > N (10)
Therefore, 4 Flip-Flops are required.

To construct any MOD-N counter, the following methods can be used.

1. Find the number of Flip-Flops (n) required for the desired MOD number
(N) using the equation,
2^n ≥ N.
2. Connect all the Flip-Flops as a required counter.

3. Find the binary number for N.

4. Connect all Flip-Flop outputs for which Q= 1 when the count is N, as
inputs to NAND gate.
5. Connect the NAND gate output to the CLR input of each Flip-Flop.

When the counter reaches Nth state, the output of the NAND gate goes
LOW, resetting all Flip-Flops to 0. Therefore the counter counts from 0
through N-1.
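Step 1 of this procedure, finding the smallest n with 2^n ≥ N, can be expressed directly (an illustrative helper, not from the text):

```python
# Smallest number of Flip-Flops n such that 2**n >= N.
def flip_flops_needed(N):
    n = 1
    while 2 ** n < N:
        n += 1
    return n

print(flip_flops_needed(5))    # 3  (2**2 = 4 < 5, 2**3 = 8 >= 5)
print(flip_flops_needed(10))   # 4  (2**3 = 8 < 10, 2**4 = 16 >= 10)
```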

For example, a MOD-10 counter reaches state 10 (1010), i.e., Q3Q2Q1Q0 = 1010.
The outputs Q3 and Q1 are connected to the NAND gate; its output goes LOW,
resetting all Flip-Flops to zero. Therefore the MOD-10 counter counts from 0000
to 1001 and then recycles to the zero value.

The MOD-10 counter circuit is shown below.

MOD-10 (Decade) Counter

UNIT III COMPUTER FUNDAMENTALS
Functional Units of a Digital Computer: Von Neumann Architecture – Operation and Operands
of Computer Hardware Instruction – Instruction Set Architecture (ISA): Memory Location,
Address and Operation – Instruction and Instruction Sequencing – Addressing Modes,
Encoding of Machine Instruction – Interaction between Assembly and High Level Language.
****************************************************************************

FUNCTIONAL UNITS

A computer consists of five functionally independent main parts:


1. Input unit,
2. Memory unit ,
3. Arithmetic and logic unit ,
4. Output unit , and
5. Control Unit

Basic functional units of a computer


 The Input Unit accepts coded information from human operators using devices such as
keyboards or from other computers over digital communication lines.
 The information received is stored in the computer’s memory, either for later use or to be
processed immediately by the Arithmetic and Logic Unit.
 The processing steps are specified by a program that is also stored in the Memory.
 The results are sent back to the outside world through the Output Unit.
 All of these actions are coordinated by the Control Unit.
 An Interconnection Network provides the means for the functional units to exchange
information and coordinate their actions.
 Input and output equipment is referred to as the Input-Output (I/O) Unit.

1. INPUT UNIT

 Computers accept coded information through input units. The most common input device
is the keyboard.

 Whenever a key is pressed, the corresponding letter or digit is automatically translated into
its corresponding binary code and transmitted to the processor.
 The other kinds of input devices for human-computer interaction are available, including
the touchpad, mouse, joystick, and trackball. These are often used as graphic input devices
in conjunction with displays.
 Microphones can be used to capture audio input which is then sampled and converted into
digital codes for storage and processing.
 Cameras can be used to capture video input.
 Digital communication facilities, such as the Internet, can also provide input to a computer
from other computers and database servers.

2. MEMORY UNIT
 The function of the memory unit is to store programs and data. There are two classes of
storage, called Primary and Secondary.
Primary Memory
 Primary memory, also called main memory, is a fast memory that operates at electronic
speeds. Programs must be stored in this memory while they are being executed.
 The memory consists of a large number of semiconductor storage cells, each capable of
storing one bit of information.
 They are handled in groups of fixed size called words.
 The number of bits in each word is referred to as the word length of the computer,
typically 16, 32, or 64 bits.
 To provide easy access to any word in the memory, a distinct address is associated with
each word location.
 Addresses are consecutive numbers, starting from 0, that identify successive locations.

 A particular word is accessed by specifying its address and issuing a control command to
the memory that starts the storage or retrieval process.
 Instructions and data can be written into or read from the memory under the control of the
processor.
 A memory in which any location can be accessed in a short and fixed amount of time after
specifying its address is called a random-access memory (RAM).
 The time required to access one word is called the memory access time.
 This time is independent of the location of the word being accessed. It typically ranges
from a few nanoseconds (ns) to about 100 ns for current RAM units.
Cache memory
 Along with the main memory, a smaller, faster RAM unit, called a cache, is used to hold
sections of a program that are currently being executed, along with any associated data.
 The cache is tightly coupled with the processor and is usually contained on the same
integrated-circuit chip. The purpose of the cache is to facilitate high instruction execution
rates.
Secondary Storage
 Secondary storage is used when large amounts of data and many programs have to be
stored, particularly for information that is accessed infrequently.
 Access times for secondary storage are longer than for primary memory.
 Example:
o Magnetic disks,
o Optical disks (DVD and CD), and
o Flash memory devices.

3. ARITHMETIC AND LOGIC UNIT


 Most computer operations are executed in the arithmetic and logic unit (ALU) of the
processor.
 Any arithmetic or logic operation, such as addition, subtraction, multiplication, division, or
comparison of numbers is performed by the ALU.
Example:
If two numbers located in the memory are to be added, they are brought into the processor,
and the addition is carried out by the ALU. The sum may then be stored in the memory or
retained in the processor for immediate use.
 When operands are brought into the processor, they are stored in high-speed storage
elements called registers. Each register can store one word of data.

4. OUTPUT UNIT
 The output unit function is to send processed results to the outside world.
Examples:
o Printers
o Graphic Displays

5. CONTROL UNIT
 The memory, arithmetic and logic, and I/O units store and process information and perform
input and output operations. The operation of these units must be coordinated in some way.
This is the responsibility of the control unit.
 The control unit is used to send control signals to other units.
 I/O transfers, consisting of input and output operations, are controlled by program
instructions that identify the devices involved and the information to be transferred.
 Control circuits are responsible for generating the timing signals that govern the transfers
and determine when a given action is to take place.
 Data transfers between the processor and the memory are also managed by the control unit
through timing signals.
OPERATIONS OF COMPUTER HARDWARE INSTRUCTION
OPERANDS OF COMPUTER HARDWARE INSTRUCTION
Machine instructions operate on data. The most important general categories of data are :
Addresses

Numbers

Characters

Logical data

Address:
Addresses are in fact a form of data. In many situations, some calculation must be
performed on the operand reference in an instruction to determine the physical address.

Numbers:
All computers support numeric data types. The common numeric data types are:
Integers
Floating Point
Decimal

Characters:
A common form of data for documentation is text or character strings. Most computers
use the ASCII code, in which each character is represented by a unique 7-bit pattern.
Logical data:
Most processors interpret data as bits, bytes, words, or doublewords; these are referred
to as units of data. When data is instead viewed as n 1-bit items, each having the value 0
or 1, it is considered logical data.
INSTRUCTIONS AND INSTRUCTION SEQUENCING
INSTRUCTION
The words of a computer's language are called instructions, and its vocabulary is called
an instruction set.

Instruction Set
The vocabulary of commands understood by a given architecture.
A computer must have instructions capable of performing four types of operations:
 Data transfers between the memory and the processor registers
 Arithmetic and logic operations on data
 Program sequencing and control
 I/O transfers
The two basic types of notations used are:
1. Register Transfer Notation
2. Assembly Language Notation
Register Transfer Notation
The transfer of information from one location in the computer to another location such as
transfer between memory locations, processor registers, or registers in the I/O subsystem
involves Register Transfer Notation.
A location is represented by a symbolic name standing for its hardware binary
address.

Example: 1
The names for the addresses of memory locations may be LOC, PLACE, A, VAR2;
Processor register names may be R0, R5; and I/O register names may be DATAIN,
OUTSTATUS, and so on.
The contents of a location are denoted by placing square brackets around the name of the
location. The expression is,
R1 ← [LOC] means that the contents of memory location LOC are transferred
into processor register R1.
Example: 2
Consider the operation that adds the contents of registers R1 and R2, and then
places their sum into register R3.
It is indicated as,
R3 ← [R1] + [R2]
This type of notation is known as Register Transfer Notation (RTN).
Assembly Language Notation

This type of notation to represent machine instructions and programs uses an


assembly language format.
Example: 1
Consider an instruction that causes the transfer from memory location LOC to processor
register R1, is specified by the statement,
Move LOC, R1
The contents of LOC are unchanged by the execution of this instruction, but the old
contents of register R1 are overwritten.
Example: 2
Adding two numbers contained in processor registers R1 and R2 and placing their sum
in R3 can be specified by the assembly language statement,
Add R1, R2, R3
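The effect of these two notations can be made concrete with a toy register file in Python (the operand values 7 and 42 are made-up examples, not from the text):

```python
# Toy register file illustrating RTN / assembly-language semantics.
regs = {"R1": 5, "R2": 7, "R3": 0}   # made-up initial register contents
mem = {"LOC": 42}                    # made-up memory contents

def move(src, dst):                  # Move LOC, R1   =>  R1 <- [LOC]
    regs[dst] = mem[src]

def add(a, b, dst):                  # Add R1, R2, R3 =>  R3 <- [R1] + [R2]
    regs[dst] = regs[a] + regs[b]

move("LOC", "R1")
add("R1", "R2", "R3")
print(regs["R3"])                    # 42 + 7 = 49
```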

BASIC INSTRUCTION TYPES


The instruction types are generally classified in to:
 Three-address instruction
 Two-address instruction
 One-address instruction
 Zero-address instruction
1. Three-Address Instructions

The three-address instruction contains the memory addresses of the three operands— A,
B, and C.
A general instruction of three-address type has the format:
Operation Source1, Source2, Destination
This three-address instruction can be represented symbolically as

Add A, B, C
Operands A and B are called the source operands
C is called the destination operand
Add is the operation to be performed on the operands.

2. Two-Address Instructions
In a two-address instruction, each instruction has only one or two operands. A
general instruction of two-address type has the format:
Operation Source, Destination
Example
An Add instruction of this type is Add A, B performs the operation
B← [A] + [B].
When the sum is calculated, the result is sent to the memory and stored in location B,
replacing the original contents of this location. This means that operand B is both a source
and a destination.

3. One-Address Instructions


A machine instruction that specifies only one memory operand is called a one-address
instruction.
When a second operand is needed, it is understood implicitly to be in a unique location.
A processor register, usually called the accumulator, may be used for this purpose.
The access to data in these registers is much faster than to data stored in memory locations
because registers are inside the processor.
Example 1:
Consider the instruction,
Add A
It adds the contents of memory location A to the contents of the accumulator register and
place the sum back into the accumulator.
Example 2:
Consider the one-address instructions,
Load A
Store A
The Load instruction copies the contents of memory location A into the accumulator, and
The Store instruction copies the contents of the accumulator into memory location A.
Using only one-address instructions, the operation
C←[A]+[B] can be performed by
executing the sequence of instructions,
Load A
Add B
Store C
In the Load instruction, address A specifies the source operand, and the destination
location, the accumulator, is implied. On the other hand, C denotes the destination location
in the Store instruction, whereas the source, the accumulator, is implied.
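As a minimal illustration, the Load/Add/Store sequence above can be traced with a tiny accumulator-machine simulation in Python (the instruction names and memory values below are illustrative assumptions, not any real ISA):

```python
# Minimal one-address (accumulator) machine: the second operand of every
# arithmetic instruction is implicitly the accumulator.
def run_accumulator(program, memory):
    acc = 0
    for op, addr in program:
        if op == "LOAD":     # acc <- [addr]
            acc = memory[addr]
        elif op == "ADD":    # acc <- acc + [addr]
            acc += memory[addr]
        elif op == "STORE":  # [addr] <- acc
            memory[addr] = acc
    return memory

# C <- [A] + [B] using only one-address instructions
mem = {"A": 10, "B": 32, "C": 0}
run_accumulator([("LOAD", "A"), ("ADD", "B"), ("STORE", "C")], mem)
print(mem["C"])  # 42
```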

4. Zero-Address Instructions
It is also possible to use instructions in which the locations of all operands are defined
implicitly. Such instructions are found in machines that store operands in a structure called
a pushdown stack. These instructions are called zero-address instructions.
Example:
Stack operation:
PUSH, POP, and PEEK
Instruction Format
Example:
Write a program to evaluate the arithmetic statement Y = (A+B)*(C+D) using three-
address, two-address, one-address and zero-address instructions.
Solution:

Using Three-address instructions:


ADD R1, A, B;
ADD R2, C, D;
MUL Y, R1, R2;

Using Two-address instructions:


MOV R1, A;
ADD R1, B;
MOV R2, C;
ADD R2, D;
MUL R1, R2;
MOV Y, R1;

Using One-address instructions:


LOAD A;
ADD B;
STORE T;
LOAD C;
ADD D;
MUL T;
STORE Y;

Using Zero-address instructions:


PUSH A;
PUSH B;
ADD;
PUSH C;
PUSH D;
ADD;
MUL;
POP Y;
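The zero-address program above can be traced with a small pushdown-stack simulation (a Python sketch; the variable values are illustrative):

```python
# Zero-address (stack) machine: ADD and MUL take their operands implicitly
# from the top of the stack, so the instructions name no operand locations.
def run_stack(program, memory):
    stack = []
    for instr in program:
        op, *operand = instr.split()
        if op == "PUSH":
            stack.append(memory[operand[0]])
        elif op == "POP":
            memory[operand[0]] = stack.pop()
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory

mem = {"A": 1, "B": 2, "C": 3, "D": 4, "Y": 0}
prog = ["PUSH A", "PUSH B", "ADD", "PUSH C", "PUSH D", "ADD", "MUL", "POP Y"]
run_stack(prog, mem)
print(mem["Y"])  # (1+2)*(3+4) = 21
```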

ADDRESSING MODES
ENCODING OF MACHINE INSTRUCTION
The form in which we have presented the instructions is indicative of the
form used in assembly languages, except that we tried to avoid using acronyms for the
various operations, which are awkward to memorize and are likely to be specific to a
particular commercial processor.
To be executed in a processor, an instruction must be encoded in a compact
binary pattern. Such encoded instructions are properly referred to as machine
instructions.

The instructions that use symbolic names and acronyms are called
assembly language instructions; they are converted into machine instructions
by the assembler program. Instructions perform operations such as add, subtract,
move, shift, rotate, and branch. These instructions may use operands of different sizes,
such as 32-bit and 8-bit numbers or 8-bit ASCII-encoded characters.
The type of operation that is to be performed and the type of operands used
may be specified using an encoded binary pattern referred to as the OP code for the
given instruction. Suppose that 8 bits are allocated for this purpose, giving 256
possibilities for specifying different instructions. This leaves 24 bits to specify the rest
of the required information. Consider, for example, the instruction
Add R1, R2

This instruction must specify the registers R1 and R2, in addition to the OP code. If the
processor has 16 registers, then four bits are needed to identify each register.
Additional bits are needed to indicate that the Register addressing mode is used for
each operand.
The instruction
Move 24(R0), R5
requires 16 bits to denote the OP code and the two registers, and some bits
to express that the source operand uses the Index addressing mode and that the index
value is 24.
The instructions can be encoded in a 32-bit word; the formats below depict the
possibilities. There is an 8-bit OP-code field and two 7-bit fields for specifying the
source and destination operands. Each 7-bit field identifies the addressing mode and the
register involved (if any). The "Other info" field allows us to specify the additional
information that may be needed, such as an index value or an immediate operand.
(a) One-word instruction

Opcode Source Dest Other info

(b) Two-word instruction

Opcode Source Dest Other info

Memory address/Immediate operand

(c) Three-operand instruction

Op code Ri Rj Rk Other info

An instruction in which one operand is specified using the Absolute addressing
mode requires 18 bits to denote the OP code, the addressing modes, and the
register. This leaves only 14 bits to express the address that corresponds to LOC, which is
clearly insufficient, so a second word is needed to hold the full address.

A second word can also hold an immediate operand, for example

And #$FF000000, R2

in which case the second word gives the full 32-bit immediate operand.

If we want to allow an instruction in which two operands can be specified
using the Absolute addressing mode, for example

Move LOC1, LOC2

then it becomes necessary to use two additional words for the 32-bit
addresses of the operands.
This approach results in instructions of variable length, dependent on the
number of operands and the type of addressing modes used.
Move (R3), R1
Add R1, R2
If the Add instruction only has to specify the two registers, it will need just a
portion of a 32-bit word. So, we may provide a more powerful instruction that uses
three operands
Add R1, R2, R3
which performs the operation

R3 ← [R1] + [R2]
A possible format for such an instruction is shown in (c) above. Of course, the
processor has to be able to deal with such three-operand instructions. In an instruction
set where all arithmetic and logical operations use only register operands, the only
memory references are made to load/store the operands into/from the processor
registers.
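The one-word format described above (an 8-bit OP code, two 7-bit operand fields, and the remaining 10 bits for other info) can be sketched as bit-packing code. This is only an illustration: the field ordering and the numeric codes below are assumptions, not a real encoding.

```python
# Pack an instruction into a 32-bit word: 8-bit OP code, two 7-bit operand
# fields (addressing mode + register), and a 10-bit "other info" field.
def encode(opcode, src, dst, other=0):
    assert opcode < (1 << 8) and src < (1 << 7) and dst < (1 << 7) and other < (1 << 10)
    return (opcode << 24) | (src << 17) | (dst << 10) | other

def decode(word):
    return ((word >> 24) & 0xFF, (word >> 17) & 0x7F,
            (word >> 10) & 0x7F, word & 0x3FF)

w = encode(opcode=0x12, src=0x05, dst=0x0A, other=24)  # e.g. index value 24
print(decode(w))  # (18, 5, 10, 24)
```

Note that 8 + 7 + 7 + 10 = 32, so the whole instruction fits exactly in one word.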

INTERACTION BETWEEN ASSEMBLY AND HIGH LEVEL LANGUAGE

Definition of assembly language:


A low-level programming language that uses symbols, lacks variables and
functions, and works directly with the CPU. Assembly language is coded differently for
every type of processor; x86 and x64 processors have different assembly language code
for performing the same tasks. Assembly language has the same commands as machine
language, but instead of 0s and 1s it uses names.

Definition of high-level language:


A high-level language is a human-friendly language that uses variables and
functions and is independent of computer architecture. The programmer writes
general-purpose code without worrying about the hardware integration part. A program
written in a high-level language needs to be first translated into machine code and then
processed by the computer.

Assembly language vs high-level language

1. In assembly language, programs written for one processor will not run on another type
of processor. In a high-level language, programs run independently of the processor type.
2. Performance and accuracy of assembly language code are better than those of high-level code.
3. High-level languages have to give extra instructions to run code on the computer.
4. Assembly language code is more difficult to understand and debug than high-level code.
5. One or two statements of a high-level language expand into many assembly language
instructions.
6. Assembly language can communicate with hardware better than a high-level language;
some types of hardware actions can only be performed by assembly language.
7. In assembly language, we can directly read pointers at a physical address, which is not
possible in a high-level language.
8. Working with bits is easier in assembly language.
9. An assembler is used to translate assembly language code, while a compiler is used
to compile high-level code.
10. The executable code of a high-level language is larger than assembly language code, so
it takes a longer time to execute.

11. Due to the longer executable code, high-level programs are less efficient than assembly
language programs.
12. A high-level language programmer does not need to know hardware details, such as
processor registers, unlike an assembly programmer.
13. Most high-level language code is first automatically converted into assembly
code.

Examples of assembly language:

Assembly languages are different for every processor. Some of assembly languages examples
are below.

 ARM
 MIPS
 x86
 Z80
 68000
 6502
 6510
Examples of high-level language:

 C
 Fortran
 Lisp
 Prolog
 Pascal
 Cobol
 Basic
 Algol
 Ada
 C++
 C#
 PHP
 Perl
 Ruby
 Common Lisp
 Python
 Golang
 Javascript
 Pharo

ASSEMBLY LEVEL LANGUAGE vs HIGH LEVEL LANGUAGE

 Assembly level language needs an assembler for conversion; high level language needs a
compiler/interpreter for conversion.
 Assembly level language is converted directly to machine level language; high level
language is converted first to assembly level language and then to machine level language.
 Assembly level language is machine dependent; high level language is machine independent.
 Mnemonic codes are used in assembly level language; English-like statements are used in
high level language.
 Assembly level language supports low-level operations; high level language does not.
 Hardware components are easy to access in assembly level language; difficult to access in
high level language.
 Assembly level language code is more compact; high level language code is not.


UNIT - IV

PROCESSORS

Instruction Execution – Building a Data Path – Designing a Control Unit –


Hardwired Control, Micro programmed Control – Pipelining – Data Hazard –
Control Hazards.

1. BASIC MIPS IMPLEMENTATION

 A Basic MIPS implementation includes a subset of the core MIPS instruction set.
 The MIPS instruction set is divided into three classes:
 Memory-reference instructions - load word (lw), store word (sw)
 Arithmetic-logical instructions - add, sub, and, or, slt
 Branch instructions - beq, jump (j).
 For every instruction, the first two steps are identical:
1. Fetch Instruction
2. Fetch Operands
 The remaining steps depend on the instruction class.
 Use of ALU in MIPS:
 The memory-reference instructions use the ALU for memory address
calculation
 The arithmetic-logical instructions use the ALU for operation execution
 The branch instructions use the ALU for comparison.
 After using the ALU,
 A memory-reference instruction will need to access the memory either to read
data for a load or write data for a store.
 An arithmetic-logical or load instruction must write the data from the ALU or
memory back into a register.
 A branch instruction may change the next instruction address based on the
comparison; otherwise, the PC is incremented by 4 to get the address of
the next instruction.
WORKING OF A BASIC MIPS IMPLEMENTATION

 All instructions start by using the program counter to supply the instruction address to
the instruction memory.
 After the instruction is fetched, the register operands used by an instruction are specified
by fields of that instruction.

 Once the register operands have been fetched, they can be operated on to compute a
memory address (for a load or store), to compute an arithmetic result (for an integer
arithmetic-logical instruction), or a compare (for a branch).
 If the instruction is an arithmetic-logical instruction, the result from the ALU must be
written to a register.
 If the operation is a load or store, the ALU result is used as an address to either store a
value from the registers or load a value from memory into the registers. The result from
the ALU or memory is written back into the register file.
 Branches require the use of the ALU output to determine the next instruction address,
which comes either from the ALU (where the PC and branch offset are summed) or from
an adder that increments the current PC by 4.
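The instruction flow described above can be sketched as a highly simplified single-cycle execution loop. This is only an illustration of the fetch / register-read / ALU / memory / write-back flow: the tuple representation of instructions, the register names, and the memory contents below are assumptions, not real MIPS encodings.

```python
# One simplified clock cycle: fetch, read registers, use the ALU,
# access memory if needed, write back, and update the PC.
def step(state):
    pc = state["pc"]
    op, a, b, c = state["imem"][pc // 4]   # fetch (word-indexed instruction memory)
    R, M = state["regs"], state["dmem"]
    if op == "add":                        # c <- R[a] + R[b]
        R[c] = R[a] + R[b]
    elif op == "lw":                       # c <- M[R[a] + b]
        R[c] = M[R[a] + b]
    elif op == "sw":                       # M[R[a] + b] <- R[c]
        M[R[a] + b] = R[c]
    elif op == "beq":                      # branch by word offset c if R[a] == R[b]
        if R[a] == R[b]:
            state["pc"] = pc + 4 + (c << 2)
            return state
    state["pc"] = pc + 4
    return state

state = {"pc": 0,
         "regs": {"t0": 5, "t1": 7, "t2": 0},
         "dmem": {12: 30},
         "imem": [("add", "t0", "t1", "t2"),   # t2 = t0 + t1
                  ("lw", "t0", 7, "t1")]}      # t1 = M[t0 + 7]
step(state)
step(state)
print(state["regs"]["t2"], state["regs"]["t1"])  # 12 30
```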

2. BUILDING A MIPS DATAPATH

 Data path design begins in examining the major components required to execute each
class of MIPS instructions.
 The major components required to execute each class of MIPS instruction are called as
data path elements.
 A data path element is a unit used to operate on or hold data within a processor.
 In the MIPS implementation, the data path elements include
 Instruction Memory
 Data Memory
 Register File
 ALU
 Adders

 Building a MIPS data path consists of
1. DataPath for Fetching the instruction and incrementing the PC
2. DataPath for Executing arithmetic and logic instructions
3. Datapath for Executing a memory-reference instruction
4. DataPath for Executing a branch instruction

1. DATAPATH FOR FETCHING THE INSTRUCTION AND INCREMENTING THE PC


 A memory unit to store the instructions of a program and supply instructions given an
address.
 The program counter is used to hold the address of the current instruction.
 An adder to increment the PC to the address of the next instruction.
 To execute any instruction, fetch the instruction from memory.
 To fetch the next instruction, increment the program counter so that it points at the next
instruction, 4 bytes later.

Combined all three elements into single stage

2. DATAPATH FOR EXECUTING ARITHMETIC AND LOGIC INSTRUCTIONS (R-Type)
 The processor’s 32 general-purpose registers are stored in a structure called a register
file.
 A register file is a collection of registers in which any register can be read or written by
specifying the number of the register in the file.
 An ALU is used to operate on the values read from the registers.
 It reads two registers, performs an ALU operation on the contents of the registers, and
write the result to a register.
 These instructions are either called R-type instructions or arithmetic logical
instructions.
 This instruction class includes add, sub, AND, OR, and slt.
 R-format Instruction Operations :
1. Read the two register operands
2. Perform the arithmetic/logical operation
3. Write the register result

Combined two elements into single stage

3. DATAPATH FOR EXECUTING A MEMORY-REFERENCE INSTRUCTION
 The MIPS load word and store word instructions have the general form
(i) lw $t1,offset($t2)
(ii) sw $t1,offset ($t2).
 These instructions compute a memory address by adding the base register, which is $t2,
to the 16-bit signed offset field contained in the instruction.
 If the instruction is a load, the value read from memory must be written into the register
file in the specified register, which is $t1.Thus, we need both the register file and the
ALU.
 If the instruction is a store, the value to be stored must also be read from the register file
where it resides in $t1.
 In addition, a unit to sign-extend the 16-bit offset field in the instruction to a 32-bit
signed value, and a data memory unit to read from or write to.
 The data memory must be written on store instructions; hence, it has both read and write
control signals, an address input, as well as an input for the data to be written into
memory.
 Load/Store Instructions Operations :
1. Read register operands
2. Calculate the memory address using the 16-bit offset
- Use the ALU with the sign-extended offset (the shift left by 2 applies
only to branch offsets, not to load/store offsets)
3. Load: Read memory at $t2 + offset and update register $t1
4. Store: Write the value of register $t1 to memory at $t2 + offset

4. DATAPATH FOR EXECUTING A BRANCH INSTRUCTION


 The general form of a Branch Instruction is
beq $t1,$t2,offset.
 The branch datapath does two operations:
 Compute the branch target address and
 Compare the register contents.

 Branch Target Address = PC + 4 + Offset (Sign Extended and Shifted left 2 times)
 When the condition is true (operands are equal), the branch target address becomes the
new PC, and we say that the branch is taken.
 When the condition is false(operands are not equal), the incremented PC should replace
the current PC; we say that the branch is not taken.
 Branch Instruction Operations:
1. Read register operands
2. Compare operands
 Use ALU - Subtract the two operands and
Check for Zero output
3. Calculate target address
 Sign-extend the offset value
 Shift left 2 times
 Add to PC + 4
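The target-address computation listed above can be checked with a short Python sketch (the PC and offset values are illustrative):

```python
# Branch target = PC + 4 + (sign-extended 16-bit offset << 2)
def sign_extend16(x):
    return x - (1 << 16) if x & 0x8000 else x

def branch_target(pc, offset16):
    return pc + 4 + (sign_extend16(offset16) << 2)

print(hex(branch_target(0x1000, 0x0003)))  # 0x1010 (forward branch, 3 words ahead)
print(hex(branch_target(0x1000, 0xFFFF)))  # 0x1000 (offset -1 branches back to the branch itself)
```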

CREATING A SINGLE (or) COMBINED DATAPATH

Show how to build a datapath for the operational portion of the memory reference
and arithmetic-logical instructions that uses a single register file and a single ALU
to handle both types of instructions, adding any necessary multiplexors.
We can combine the datapath components needed for the individual instruction classes, into a
single datapath and add the control to complete the implementation.
This simplest datapath will execute all instructions in one clock cycle. To share a datapath
element between two different instruction classes, we may need to allow multiple

connections to the input of an element, using a multiplexor and control signal to select among
the multiple inputs.
Step 1
To create a datapath with only a single register file and a single ALU, we must have two
different sources for the second ALU input, as well as two different sources for the data
stored into the register file. Thus, one multiplexor is placed at the ALU input and another at
the data input to the register file.

Step 2
Combine all the pieces to make a simple datapath for the MIPS architecture by adding the
 Datapath for Instruction fetch
 Datapath for Arithmetic-Logical instructions
 Datapath for Memory instructions
 Datapath for Branch instruction

Step 3
In the datapath obtained by composing separate pieces,
 The branch instruction uses the main ALU for comparison of the register operands, so
we must keep the adder for computing the branch target address.
 An additional multiplexor is required to select either the sequentially following
instruction address (PC + 4) or the branch target address to be written into the PC.

Step 4
The control unit must be able to take inputs and generate a write signal for each state
element, the selector control for each multiplexor, and the ALU control.

Design of Control Unit
The Control Unit is classified into two major categories:
1. Hardwired Control
2. Micro programmed Control

Hardwired Control

The Hardwired Control organization involves the control logic to be implemented with
gates, flip-flops, decoders, and other digital circuits.

The following image shows the block diagram of a Hardwired Control organization.

o A hardwired control consists of two decoders, a sequence counter, and a number of logic gates.
o An instruction fetched from the memory unit is placed in the instruction register (IR).
o The instruction register is divided into three parts: the I bit, the operation code, and bits 0 through 11.
o The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
o The outputs of the decoder are designated by the symbols D0 through D7.
o Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.
o Bits 0 through 11 are applied to the control logic gates.
o The sequence counter (SC) can count in binary from 0 through 15.

Micro-programmed Control

The Micro programmed Control organization is implemented by using the programming approach. In
Micro programmed Control, the micro-operations are performed by executing a program consisting of
micro-instructions.

The following image shows the block diagram of a Micro programmed Control organization.

o The Control memory address register specifies the address of the micro-instruction.
o The Control memory is assumed to be a ROM, within which all control information is permanently
stored.
o The control register holds the microinstruction fetched from the memory.
o The micro-instruction contains a control word that specifies one or more micro-operations for the
data processor.
o While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next
microinstruction.
o The next address generator is often referred to as a micro-program sequencer, as it determines the
address sequence that is read from control memory.
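The fetch cycle of the control memory described above can be sketched as a small simulation: the control address register (CAR) indexes a ROM of microinstructions, each carrying a control word plus next-address information. The micro-operation names and ROM contents below are illustrative assumptions.

```python
# Micro-programmed control cycle: fetch a microinstruction from the control
# ROM, issue its control word to the data path, then load the CAR with the
# address produced by the next-address generator.
def run_microprogram(control_rom, steps):
    car = 0                                        # control address register
    issued = []
    for _ in range(steps):
        control_word, next_addr = control_rom[car]  # fetch microinstruction
        issued.append(control_word)                 # drive the data processor
        car = next_addr                             # micro-program sequencer
    return issued

rom = {
    0: ("PC_to_MAR", 1),
    1: ("MEM_READ_to_IR", 2),
    2: ("DECODE", 0),           # loop back to the start of the fetch sequence
}
print(run_microprogram(rom, 4))
# ['PC_to_MAR', 'MEM_READ_to_IR', 'DECODE', 'PC_to_MAR']
```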

5. PIPELINING

 Pipelining (or) Instruction Pipelining is an implementation technique in which


multiple instructions are overlapped in execution.
 Pipelining is a process of arrangement of hardware elements of the CPU such that its
overall performance is increased.
 The computer pipeline is divided in stages.
 The stages are connected to one another. Each stage completes a part of an
instruction in parallel.
 Pipelining is widely used in modern processors.
 Pipelining is a particularly effective way of organizing concurrent activity in a
computer system.
 It uses faster circuit technology to build the processor and the main memory.
Advantages :
 Pipelining is a key to make processing fast.
 Pipelining improves system performance in terms of throughput.
 Pipelining makes the system reliable.
Disadvantages:
1. The design of a pipelined processor is complex and costly to manufacture.
2. The latency of an individual instruction increases.

DIFFERENCE BETWEEN SEQUENTIAL EXECUTION AND PIPELINED EXECUTION

SEQUENTIAL EXECUTION PIPELINED EXECUTION


In the Sequential Execution, the processor In Pipelined Execution, the processor
executes a program by fetching and executes a program by overlapping the
executing instructions, one after another. instructions.

PIPELINED EXECUTION / ORGANIZATION
2 - STAGE PIPELINED EXECUTION
 Execution of a program consists of a sequence of fetch and executes steps.
 Let Fi and Ei refer to the fetch and execute steps for instruction Ii.
 A computer has two separate hardware units.
 They are:
 Instruction fetch unit
 Instruction execution unit
 The instruction fetched by the fetch unit is stored in an intermediate storage buffer.
 This buffer is needed to enable the execution unit to execute the instruction while the
fetch unit is fetching the next instruction.
 The execution results are stored in the destination location specified by the instruction.
 The fetch and execute steps of any instruction can each be completed in one cycle.

3 - STAGE PIPELINED EXECUTION

 The stages are:


F - Fetch : Read the instruction from the memory
D - Decode : Decode the instruction and fetch the source operand(s)
E - Execute : Perform the operation specified by the instruction

4 - STAGE PIPELINED EXECUTION
 The stages are:
F - Fetch : Read the instruction from the memory
D - Decode : Decode the instruction and fetch the source operand(s)
E - Execute : Perform the operation specified by the instruction
W - Write : Store the result in the destination location

5 - STAGE PIPELINED EXECUTION

 Instruction Fetch - The CPU reads instructions from the address in the memory
whose value is present in the program counter.
 Instruction Decode - Instruction is decoded and the register file is accessed to get the
values from the registers used in the instruction.
 Execute - ALU operations are performed.
 Memory Access - Memory operands are read and written from/to the memory that is
present in the instruction.
 Write Back – Computed value is written back to the register.
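The benefit of overlapping the stages above can be quantified: with an ideal k-stage pipeline and no stalls, n instructions take k + (n - 1) cycles instead of the n × k cycles of sequential execution. A small sketch (the n and k values are illustrative):

```python
# Ideal cycle counts for sequential vs. k-stage pipelined execution of
# n instructions (no stalls assumed).
def sequential_cycles(n, k):
    return n * k

def pipelined_cycles(n, k):
    return k + (n - 1)

n, k = 100, 5
print(sequential_cycles(n, k), pipelined_cycles(n, k))  # 500 104
# Speedup approaches k as n grows large.
print(round(sequential_cycles(n, k) / pipelined_cycles(n, k), 2))  # 4.81
```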

6- STAGE PIPELINED EXECUTION

STRUCTURAL HAZARD

 A structural hazard occurs when two or more instructions that are already in pipeline
need the same resource.
 These hazards are because of conflicts due to insufficient resources.
 The result is that the instructions must be executed in series rather than parallel for a
portion of pipeline.
 Structural hazards are sometime referred to as resource hazards.
 Example:
 A situation in which multiple instructions are ready to enter the execute
instruction phase and there is a single ALU (Arithmetic Logic Unit).
 One solution to such resource hazard is to increase available resources, such as
having multiple ALU.

DATA HAZARD

A data hazard occurs when there is a conflict in the access of an operand location.
There are three types of data hazards. They are
Read After Write (RAW) or True Dependency:
 An instruction modifies a register or memory location and a succeeding instruction
reads the data in that memory or register location.
 A RAW hazard occurs if the read takes place before the write operation is complete.
 Example
I1 : R2 ← R5 + R3
I2 : R4 ← R2 + R3

Write After Read (WAR) or Anti Dependency:
 An instruction reads a register or memory location and a succeeding instruction writes
to the location.
 A WAR hazard occurs if the write operation completes before the read operation takes
place.
 Example
I1 : R4 ← R1 + R5
I2 : R5 ← R1 + R2

Write After Write (WAW) or Output Dependency:


 Two instructions both write to the same location.
 A WAW hazard occurs if the write operations take place in the reverse order of the
intended sequence.
 Example:
I1 : R2 ← R4 + R7
I2 : R2 ← R1 + R3
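The three dependency types above can be detected mechanically from the register sets each instruction reads and writes. A sketch follows; modeling an instruction as a (writes, reads) pair of register sets is an illustrative assumption.

```python
# Classify the data hazard between instruction i1 and a succeeding i2,
# where each instruction is a (writes, reads) pair of register-name sets.
def classify_hazards(i1, i2):
    w1, r1 = i1
    w2, r2 = i2
    hazards = []
    if w1 & r2: hazards.append("RAW")  # i2 reads what i1 writes
    if r1 & w2: hazards.append("WAR")  # i2 writes what i1 reads
    if w1 & w2: hazards.append("WAW")  # both write the same location
    return hazards

# I1: R2 <- R5 + R3 ; I2: R4 <- R2 + R3  -> RAW on R2
print(classify_hazards(({"R2"}, {"R5", "R3"}), ({"R4"}, {"R2", "R3"})))  # ['RAW']
# I1: R4 <- R1 + R5 ; I2: R5 <- R1 + R2  -> WAR on R5
print(classify_hazards(({"R4"}, {"R1", "R5"}), ({"R5"}, {"R1", "R2"})))  # ['WAR']
# I1: R2 <- R4 + R7 ; I2: R2 <- R1 + R3  -> WAW on R2
print(classify_hazards(({"R2"}, {"R4", "R7"}), ({"R2"}, {"R1", "R3"})))  # ['WAW']
```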

INSTRUCTION / CONTROL / BRANCH HAZARD
 An instruction (or) control (or) branch hazard, occurs when the pipeline makes the
wrong decision on a branch prediction and therefore brings instructions into the
pipeline that must subsequently be discarded.
 Whenever the stream of instructions supplied by the instruction fetch unit is
interrupted, the pipeline stalls.

8. HANDLING DATA HAZARDS (or) DATA DEPENDENCY

 Consider the two instructions:


 Add R2, R3, #100
 Subtract R9, R2, #30
 The destination register R2 for the Add instruction is a source register for the
Subtract instruction.
 There is a data dependency between these two instructions, because register R2
carries data from the first instruction to the second.

 There are two techniques using which we can handle data hazards.
 They are
(1) Using Operand Forwarding (2) Using Software

Handling Data Dependencies Using Operand Forwarding


 Pipeline stalls due to data dependencies can be improved through the use of operand
forwarding.
 Rather than stalling the instruction, the hardware can forward the value from result register
to the ALU input through the Multiplexers.
 The second instruction can get data directly from the output of ALU after the previous
instruction is completed.

 A special arrangement needs to be made to “forward” the output of ALU to the input of
ALU.
Example :
I1 : ADD R1,R2,R3
I2: SUB R4,R1,R5

Handling Data Dependencies Using Software


 An alternative approach is for detecting data dependencies and dealing with them.
 When the compiler identifies a data dependency between two successive instructions
Ij and Ij+1, it can insert three explicit NOP (No-operation) instructions between them.
 The NOP’s introduce the necessary delay to enable instruction Ij+1 to read the new
value from the register file after it is written.
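The compiler technique described above can be sketched as follows. The three-NOP count follows the text; modeling an instruction as a (destination, sources) pair is an illustrative assumption.

```python
# Insert NOPs between an instruction and its immediate successor whenever
# the successor reads the register the predecessor writes.
def insert_nops(program, nops=3):
    out = []
    for i, (dest, srcs) in enumerate(program):
        if i > 0:
            prev_dest = program[i - 1][0]
            if prev_dest in srcs:              # data dependency detected
                out.extend([("NOP", set())] * nops)
        out.append((dest, srcs))
    return out

# Add R2, R3, #100 ; Subtract R9, R2, #30  (R2 carries data between them)
prog = [("R2", {"R3"}), ("R9", {"R2"})]
result = insert_nops(prog)
print([d for d, _ in result])  # ['R2', 'NOP', 'NOP', 'NOP', 'R9']
```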

9. HANDLING INSTRUCTION HAZARDS (or) CONTROL HAZARDS


A variety of approaches have been taken for dealing with Instruction/Control/Branch
Hazards.(Conditional branches)
1) Multiple Streams
2) Prefetch Branch Target

3) Loop Buffer
4) Branch Prediction
5) Delayed Branch

1) MULTIPLE STREAMS
o The approach is to replicate the initial portions of the pipeline and allow the
pipeline to fetch both instructions, making use of multiple streams.
o There are two problems with this approach:
1. Contention delays for access to the registers and to memory.
2. Additional branch instructions may enter the pipeline before the original
branch decision is resolved.
2) PREFETCH BRANCH TARGET
o When a conditional branch is recognized, the target of the branch is prefetched, in
addition to the instruction following the branch.
o This target is then saved until the branch instruction is executed.
o If the branch is taken, the target has already been prefetched.
3) LOOP BUFFER
o A loop buffer is a small, very-high-speed memory maintained by the instruction
fetch stage of the pipeline and containing the ‘n’ most recently fetched
instructions, in sequence.
o If a branch is to be taken, the hardware first checks whether the branch target is within
the buffer. If so, the next instruction is fetched from the buffer.
4) BRANCH PREDICTION
o To reduce the branch penalty, the processor needs to anticipate that an instruction
being fetched is a branch instruction and predict its outcome to determine which
instruction should be fetched.
o It is generally of two types:
 Static Branch Prediction
 Dynamic Branch Prediction
o Static Branch Prediction - Assume that the branch will not be taken and to fetch the
next instruction in sequential address order.
o Dynamic Branch Prediction - Uses the recent branch history, to see if a branch was
taken the last time this instruction was executed.

o Techniques for Branch Prediction


Various techniques can be used to predict whether a branch will be taken. The most
common are the following:
 Predict never taken
 Predict always taken
 Predict by opcode

 Taken/not taken switch
 Branch history table

o Branch Prediction Buffer (or) Branch History Table

 One implementation of that approach is a branch prediction buffer or branch


history table.
 A branch prediction buffer is a small memory indexed by the lower portion of the
address of the branch instruction. The memory contains a bit that says whether the
branch was recently taken or not.
 A branch predictor tells us whether or not a branch is taken and calculates the
branch target address.
 A cache, called the branch target buffer, can be used to hold the predicted target
addresses.

o Branch Prediction Flowchart


 If the instruction is predicted as taken, fetching begins from the target as soon as the
PC is known; it can be as early as the ID stage.
 If the instruction is predicted as not taken, sequential fetching and executing
continue.
 If the prediction turns out to be wrong, the prediction bits are changed.
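One common implementation of the prediction bits mentioned above is a 2-bit saturating counter per branch history table entry: states 0-1 predict not-taken, states 2-3 predict taken, and the counter moves one step per actual outcome. A sketch (the outcome sequence is illustrative):

```python
# 2-bit saturating-counter branch predictor. Two wrong outcomes in a row
# are needed to flip the prediction, which tolerates a single anomaly
# (e.g. a loop exit) without losing the established direction.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0                 # 0 = strongly not-taken

    def predict(self):
        return self.state >= 2         # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True]   # actual branch behavior
predictions = []
for taken in outcomes:
    predictions.append(p.predict())
    p.update(taken)
print(predictions)  # [False, False, True, True, True]
```

Note that the single not-taken outcome does not flip the prediction back to not-taken: that hysteresis is the point of using two bits instead of one.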

o Types Of Branch Predictor
1. Correlating Predictor - Combines local behavior and global behavior of a particular
branch.
2. Tournament Predictor - Makes multiple predictions for each branch and a selection
mechanism that chooses which predictor to enable for a given branch.
5) DELAYED BRANCH
o In MIPS, branches are delayed.
o This means that the instruction immediately following the branch is always executed,
independent of whether the branch condition is true or false. The position immediately
following the branch is known as the branch delay slot.
o When the condition is false, the execution looks like a normal branch.
o When the condition is true, a delayed branch first executes the instruction immediately
following the branch in sequential instruction order before jumping to the specified
branch target address.

UNIT V

MEMORY AND I/O

Memory Concepts and Hierarchy – Memory Management – Cache Memories: Mapping


and Replacement Techniques – Virtual Memory – DMA – I/O – Accessing I/O: Parallel
and Serial Interface – Interrupt I/O.
MEMORY AND I/O SYSTEMS

Introduction to Memory Systems:


o The memory unit consists of k–bit address lines, n-bit data input lines, and n-bit data
output lines and control lines (Read/Write).

o The speed of the memory unit is measured using:

Memory Access Time and Memory Cycle Time.

Memory access time - It is the time that elapses between the initiation of
an operation to transfer a word of data and the
completion of that operation.
Memory cycle time - It is the minimum time delay required between the
initiation of two successive memory operations.

Characteristics of Memory Systems

MEMORY HIERARCHY

 The memory hierarchy consists of multiple levels of memory with different


speeds and sizes.
 The memory hierarchy system consists of all storage devices employed in a computer
system.
 The goal of using a memory hierarchy is to obtain the highest possible average
access speed while minimizing the total cost of the entire memory system.
 The faster memories are more
expensive than the slower memories
and are smaller.
 The faster memory is close to the processor.
 Going down the hierarchy, the following occur.
 Decreasing cost per bit
 Increasing capacity
 Increasing access time
 Decreasing frequency of access of the memory by the processor
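The goal of the highest possible average access speed stated above is usually quantified as the Average Memory Access Time (AMAT): hit time plus miss rate times miss penalty. A minimal sketch; the timing numbers are illustrative assumptions, not from any particular machine:

```python
# AMAT = hit_time + miss_rate * miss_penalty
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# 1 ns cache hit, 5% miss rate, 100 ns main-memory penalty
print(amat(1.0, 0.05, 100.0))  # 6.0 ns on average
```

This shows why a small fast cache in front of a large slow memory pays off: most accesses cost the hit time, and only the few misses pay the full penalty.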

 In the memory hierarchy, the Registers are at the top in terms of speed of access.

 At the next level of the hierarchy is a relatively small amount of memory that can be
implemented directly on the processor chip, called a Cache.
o Cache memory holds copies of the instructions and data stored in a much larger
memory that is provided externally.
o The cache memory is divided into three levels :
 Level 1 (L1) cache –Primary Cache
 Level 2 (L2) cache – Secondary Cache
 Level 3 (L3) cache

 A primary cache is always located on the processor chip. This cache is small and its access
time is comparable to that of processor registers. The primary cache is referred to as the
Level 1 (L1) cache.

 A larger, and slower, secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the Level 2 (L2) cache.

 Some computers have a Level 3 (L3) cache of even larger size, in addition to the L1 and L2
caches. An L3 cache is also implemented in SRAM technology.

 The next level in the hierarchy is the Main Memory. The main memory is much larger
but slower than cache memories.

 At the bottom level in the memory hierarchy is the Secondary Memory -Magnetic Disk and
Magnetic tape. They provide a very large amount of inexpensive memory.

Memory Management

Primary technologies used in memory hierarchies:


o Primary Memory (or) Main Memory.
o Secondary Memory (or) Auxiliary memory.

PRIMARY (or) MAIN MEMORY


The main features of primary memory are
 It is accessed directly by the processor
 It is the fastest memory available
 Each word can be read as well as written individually
 It is volatile, i.e. its contents are lost once power is switched off.
As primary memory is expensive, technologies are developed to optimize its use. The broad
types of primary memory are RAM and ROM.

RAM - RANDOM ACCESS MEMORY


o RAM is also known as Read/Write memory.
o The information stored in RAM can be read and also written.
o RAM is volatile in nature.
o There are various kinds of RAM such as :
(1) SRAM (2) DRAM - SDRAM, DDR SDRAM
SRAM (STATIC RANDOM ACCESS MEMORY)

o Memories that consist of circuits capable of retaining their state as long as power is applied
are known as static memories.

Advantages:
o Very low power consumption
o Can be accessed very quickly
Disadvantage:
o SRAM cells require several transistors, so static RAMs are fast but expensive;
less expensive and higher density RAMs can be implemented with simpler cells (as in DRAM).

DRAM (DYNAMIC RANDOM ACCESS MEMORY)

o Memories that use cells that do not retain their state indefinitely are called
dynamic RAMs (DRAMs).
o The information is stored in a dynamic memory cell in the form of a charge on a
capacitor.
o The contents must be periodically refreshed.
o The contents may be refreshed while accessing them for reading.
SDRAM (SYNCHRONOUS DRAM)
• DRAMs whose operation is synchronized with a clock signal are known as
synchronous DRAM (SDRAM).
• SDRAMs have built-in refresh circuitry
• SDRAMs operate with clock speeds that can exceed 1 GHz.
• SDRAMs have high data rate

ROM - READ-ONLY MEMORY


o A memory is called a read-only memory (ROM), when information can be written into
it only once at the time of manufacture.
o The information stored in ROM can then only be read.
o It is used to store programs that are permanently resident in the computer.
o ROM is non-volatile.
o A logic value 0 is stored in the cell if the transistor is connected to ground at point P;
otherwise, a 1 is stored.

o A sense circuit at the end of the bit line generates the proper output value.
o The state of the connection to ground in each cell is determined when the chip is
manufactured.
o There are various kinds of ROM such as :
PROM
EPROM
EEPROM

PROM (Programmable ROM)
o The PROM is nonvolatile and may be written into only once.
o For the PROM, the writing process is performed electrically and may be performed by a
supplier or customer at a time later than the original chip fabrication.
o Special equipment is required for the writing or “programming” process.
o PROMs provide flexibility and convenience. It is less expensive.

EPROM (Erasable Programmable ROM)


o EPROM chip provides a higher level of convenience.
o It allows the stored data to be erased and new data to be written into it. Such an erasable,
reprogrammable ROM is usually called an EPROM.
o They store 1’s and 0’s as a packet of charge in a buried layer of the IC chip.
o Erasure is done by exposing the chip to ultraviolet light, which erases the entire
contents of the chip.
o Each erasure can take as much as 20 minutes to perform.

EEPROM (Electrically Erasable Programmable ROM)


o An EPROM must be physically removed from the circuit for reprogramming. Also, the
stored information cannot be erased selectively.
o The entire contents of the chip are erased when exposed to ultraviolet light.
o Another type of erasable PROM can be programmed, erased, and reprogrammed
electrically. Such a chip is called an electrically erasable PROM, or EEPROM.
o It does not have to be removed for erasure. It is possible to erase the cell contents
selectively.
o EEPROM needs different voltages for erasing, writing, and reading the
stored data, which increases circuit complexity.
o They have replaced EPROMs in practice.

SECONDARY MEMORY (AUXILIARY MEMORY)


o Devices that provide backup storage are called auxiliary memory.
o They are used for storing system programs, large data files and other backup information.
o The most common auxiliary memory devices used in computer systems are Magnetic
Disks and Magnetic Tapes.

MAGNETIC DISKS
o Magnetic Disks consist of one or more disk platters mounted on a common spindle.
o A thin magnetic film is deposited on each platter, usually on both sides.
o The assembly is placed in a drive that causes it to rotate at a constant speed.

o The read/write heads of a disk system are movable. There is one head per surface.
o All heads are mounted on a comb-like arm that can move radially across the stack of disks
to provide access to individual tracks.
o Each surface is divided into concentric tracks, and each track is divided into sectors.
o The set of corresponding tracks on all surfaces of a stack of disks forms a logical
cylinder.
o All tracks of a cylinder can be accessed without moving the read/write heads.
o Data are accessed by specifying the surface number, the track number, and the sector
number. Read and Write operations always start at sector boundaries.
o Data bits are stored serially on each track. Each sector may contain 512 or more bytes.
o The data are preceded by a sector header that contains identification (addressing)
information used to find the desired sector on the selected track.

FLOPPY DISK
o Disks with rigid platters are known as hard or rigid disk units. Floppy disks are
smaller, simpler, and cheaper disk units that consist of a flexible, removable,
plastic diskette coated with magnetic material.
o The diskette is enclosed in a plastic jacket, which has an opening where the read/write
head can be positioned.
o A hole in the center of the diskette allows a spindle mechanism in the disk drive to
position and rotate the diskette.
o Advantages - Low cost , Portability
o Disadvantages - Smaller storage capacities, Longer access times , Higher failure rates

OPTICAL DISK
CD
o The optical technology was adapted to the computer environment to provide a high
capacity storage medium known as a CD.
o The CDs are used to store information in a binary form; they are suitable for use as
a storage medium in computer systems.
o Stored data are organized on CD-ROM tracks in the form of blocks called sectors.
o The optical technology used in CD systems relies on the fact that laser light can
be focused on a very small spot.
o A laser beam is directed onto a spinning disk, with tiny indentations arranged to

form a long spiral track on its surface.
o The indentations reflect the focused beam toward a photo detector, which detects
the stored binary patterns.
o The total thickness of the disk is 1.2 mm.
o Advantages:
 Small physical size
 Low cost
 Ease of handling as a removable and transportable mass-storage medium.
DVD
o DVD (Digital Versatile Disk) technology is the same as that of CDs.
o The disk is 1.2 mm thick, and it is 120 mm in diameter.
o Its storage capacity is made much larger than that of CDs.
MAGNETIC TAPES
o Magnetic tapes are suited for off-line storage of large amounts of data. They are
typically used for backup purposes and for archival storage.
o Data on the tape are organized in the form of records separated by gaps.
o Tape motion is stopped only when a record gap is underneath the read/write heads.
o A group of related records is called a file. The beginning of a file is identified by a file
mark.
o The file mark is a special single- or multiple-character record, usually preceded by a gap
longer than the inter-record gap.
o The first record following a file mark can be used as a header or identifier for the file.
o This allows the user to search a tape containing a large number of files for a particular
file.

CACHE MEMORY
o Cache memory is a high-speed static random access memory (SRAM).
o Cache memory is responsible for speeding up computer operations and processing.
o This memory is integrated directly into the CPU chip or placed on a separate chip
that has a separate bus interconnect with the CPU.

o The purpose of cache memory is to store program instructions and data that are
used repeatedly in the operation of programs, or information that the CPU is
likely to need next.
o The CPU can access this information quickly from the cache rather than having to
get it from the computer's main memory.
o Fast access to these instructions increases the overall speed of the program.

o A cache memory system includes a small amount of fast memory (SRAM) and a large
amount of slow memory (DRAM). This system is configured to simulate a large
amount of fast memory.
o The cache memory system consists of the following units:
 Cache - consists of static RAM(SRAM)
 Main Memory –consists of dynamic RAM(DRAM)
 Cache Controller – implements the cache logic. This controller decides which
block of memory should be moved in or out of the cache.
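The benefit of this arrangement can be estimated with the usual average access time formula, t_avg = h * t_cache + (1 - h) * t_main, where h is the hit rate. A minimal sketch, using hypothetical timing figures (not values from this text):

```python
def avg_access_time(hit_rate, cache_time, main_time):
    """Average access time of a cache + main memory system:
    t_avg = h * t_cache + (1 - h) * t_main."""
    return hit_rate * cache_time + (1 - hit_rate) * main_time

# Hypothetical figures: 95% hit rate, 2 ns cache, 50 ns main memory
t = avg_access_time(0.95, 2, 50)   # 0.95*2 + 0.05*50 = 4.4 ns
```

Even a modest hit rate pulls the average close to the cache's speed, which is why a small SRAM in front of a large DRAM "simulates" a large fast memory.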

CACHE LEVELS
o The processor cache has two or more levels :
 Level 1 (L1) cache
 Level 2 (L2) cache
 Level 3 (L3) cache

o A primary cache is always located on the processor chip. This cache is small and its
access time is comparable to that of processor registers. The primary cache is referred to
as the Level 1 (L1) cache.

o A larger, and slower, secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the Level 2 (L2) cache.
o Some computers have a Level 3 (L3) cache of even larger size, in addition to the L1 and
L2 caches.

TYPES OF CACHE
Two types of cache exist. They are Unified Cache and Split Cache.

Unified cache : Data and instructions are stored together (Von Neumann Architecture)

Split cache : Data and instructions are stored separately (Harvard architecture)

ELEMENTS OF CACHE DESIGN

CACHE MAPPING FUNCTIONS
o The correspondence between the main memory blocks and cache is specified by a
“Mapping Function”.
o When a processor issues a Read request, a block of words is transferred from the main
memory to the cache, one word at a time.
o When the program references any of the location in the block, the desired contents are
read directly from the cache.
o Mapping functions determine how memory blocks are placed in the cache.
o The three mapping functions:
o Direct Mapping
o Associative Mapping
o Set-Associative Mapping
Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K)
words, and assume that the main memory is addressable by a 16-bit address. Main memory
is 64K words, which will be viewed as 4K blocks of 16 words each.

(1) Direct Mapping:-

 The simplest way to determine cache locations in which to store memory blocks is the
direct mapping technique.
 In this, block J of the main memory maps on to block J modulo 128 of the cache.
 Thus main memory blocks 0, 128, 256, …. are loaded into the cache at block 0.
 Blocks 1, 129, 257, …. are stored at block 1, and so on.
 Placement of a block in the cache is determined from the memory address.
 The memory address is divided into 3 fields; the lower 4 bits select one of the 16
words in a block.
 When a new block enters the cache, the 7-bit cache block field determines the cache
position in which this block must be stored.
 The higher order 5 bits of the memory address of the block are stored in 5 tag bits
associated with its location in the cache.
 They identify which of the 32 blocks that are mapped into this cache position is
currently resident in the cache.
 It is easy to implement, but not flexible.
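The field split described above (4-bit word, 7-bit cache block, 5-bit tag for a 16-bit address) can be sketched directly:

```python
def direct_map_fields(addr):
    """Split a 16-bit main-memory address into the three direct-mapping
    fields: 5-bit tag, 7-bit cache block, 4-bit word."""
    word  = addr & 0xF           # lower 4 bits: word within the block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block position
    tag   = (addr >> 11) & 0x1F  # upper 5 bits: tag
    return tag, block, word

# Memory block 129 (first word at address 129*16) maps to cache block 129 % 128 = 1
tag, block, word = direct_map_fields(129 * 16)
```

Because the block field is fixed by the address, two memory blocks that share it (e.g. 1 and 129) always evict each other, which is the inflexibility noted above.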

(2) Associative Mapping:-
 This is a more flexible mapping method, in which a main memory block can be placed
into any cache block position.
 In this, 12 tag bits are required to identify a memory block when it is resident in
the cache.
 The tag bits of an address received from the processor are compared to the tag bits
of each block of the cache to see if the desired block is present. This is known as
the associative mapping technique.
 The cost of an associative-mapped cache is higher than that of a direct-mapped cache
because of the need to search all 128 tag patterns to determine whether a block is
in the cache. This is known as an associative search.
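For the same 16-bit address, the associative lookup can be sketched as below; the set of resident tags stands in for the 128 parallel tag comparators of real hardware:

```python
def assoc_lookup(resident_tags, addr):
    """Fully associative lookup: the 12-bit block number is the tag, and it
    is compared against the tag of every resident cache block (here via a
    set; real hardware compares all 128 tags in parallel)."""
    tag, word = addr >> 4, addr & 0xF   # 12-bit tag, 4-bit word offset
    return tag in resident_tags

# If memory blocks 5 and 9 are resident, any word of block 9 hits:
hit = assoc_lookup({5, 9}, (9 << 4) | 3)
```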

(3) Set-Associative Mapping:-

 It is the combination of the direct and associative mapping techniques.
 Cache blocks are grouped into sets, and the mapping allows a block of main memory to
reside in any block of a specific set.
 Consider a cache with two blocks per set. In this case, memory blocks 0, 64,
128, ….., 4032 map into cache set 0, and they can occupy either of the two blocks
within this set.
 Having 64 sets means that the 6-bit set field of the address determines which set of
the cache might contain the desired block.
 The tag bits of the address must be associatively compared to the tags of the two
blocks of the set to check if the desired block is present. This is a two-way
associative search.
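The set-associative field split (4-bit word, 6-bit set, 6-bit tag) can be sketched as:

```python
def set_assoc_fields(addr):
    """Field split for the two-way set-associative cache above:
    4-bit word, 6-bit set (64 sets), 6-bit tag."""
    word = addr & 0xF
    set_ = (addr >> 4) & 0x3F
    tag  = (addr >> 10) & 0x3F
    return tag, set_, word

# Memory blocks 0 and 64 both map into set 0 (64 % 64 == 0), with different tags,
# so both can be resident at once in the two blocks of that set.
```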

COMPARISON BETWEEN MAPPING TECHNIQUES

CACHE REPLACEMENT TECHNIQUES

o When a new block is to be brought into the cache and if the cache is full, then one of
the existing blocks must be replaced.
o For direct mapping, there is only one possible line for any particular block, and no
choice is possible.
o For the associative and set-associative techniques, a replacement algorithm is needed.
A number of algorithms have been tried.
o Four replacement algorithms are
1. Random
2. LRU (Least-recently used)
3. LFU (Least frequently used)
4. FIFO (First in First out)
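As a sketch of one of these policies, LRU for a single cache set can be modelled with an ordered dictionary; the `LRUSet` name and the string return values are illustrative, not from the text:

```python
from collections import OrderedDict

class LRUSet:
    """One cache set using LRU replacement: on a miss with the set full,
    the block unused for the longest time is evicted."""
    def __init__(self, ways):
        self.ways = ways
        self.tags = OrderedDict()        # least recently used tag first

    def access(self, tag):
        if tag in self.tags:             # hit: becomes most recently used
            self.tags.move_to_end(tag)
            return "hit"
        if len(self.tags) >= self.ways:  # miss on a full set: evict the LRU tag
            self.tags.popitem(last=False)
        self.tags[tag] = None
        return "miss"
```

Random replacement would instead evict an arbitrary resident tag, and FIFO evicts in insertion order regardless of how recently a block was reused.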

VIRTUAL MEMORY
 Virtual memory is an architectural solution to increase the effective size of the memory
system.
 Virtual memory is a memory management technique that allows the execution of
processes that are not completely in memory.
 In some cases during the execution of the program the entire program may not be needed.
 Virtual memory allows files and memory to be shared by two or more processes through
page sharing.
 The techniques that automatically move programs and data between main memory and
secondary storage when they are required for execution are called virtual-memory techniques.

ADVANTAGES

 One major advantage of this scheme is that programs can be larger than physical memory.
 Virtual memory also allows processes to share files easily and to implement shared
memory.
 Increase in processor utilization and throughput.
 Less I/O would be needed to load or swap user programs into memory.

LOGICAL and PHYSICAL ADDRESS SPACE

o An address generated by the processor is commonly referred to as a logical address,
which is also called a virtual address.
o The set of all logical addresses generated by a program is a logical address space.
o An address seen by the memory unit—that is, the one loaded into the memory-address
register of the memory—is commonly referred to as a physical address.
o The set of all physical addresses is a physical address space or memory space.
MEMORY MANAGEMENT UNIT

 The mapping from virtual to physical addresses is done by a device called the
memory-management unit (MMU).
 The MMU is a hardware device that maps virtual addresses to physical addresses at
run time.
 The MMU is also called address translation hardware.

VIRTUAL TO PHYSICAL ADDRESS TRANSLATION

Each virtual address generated by the processor contains a virtual page number and an offset.

Each physical address contains a page frame number and an offset.
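The translation step can be sketched in software; the 4 KB page size and the page-table contents below are assumed for illustration:

```python
PAGE_SIZE = 4096   # an assumed 4 KB page size, not specified in the text

def translate(virtual_addr, page_table):
    """What the MMU does: the virtual page number indexes the page table
    to find a page frame, and the offset is carried over unchanged."""
    vpn    = virtual_addr // PAGE_SIZE
    offset = virtual_addr %  PAGE_SIZE
    frame  = page_table[vpn]
    return frame * PAGE_SIZE + offset

# Hypothetical page table: virtual page 0 -> frame 2, virtual page 1 -> frame 5
table = {0: 2, 1: 5}
phys = translate(4100, table)   # page 1, offset 4 -> 5*4096 + 4 = 20484
```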

DIRECT MEMORY ACCESS

 A Direct Memory Access (DMA) is a mechanism that allows an input/output (I/O)
device to send or receive data directly to or from the memory without involving the
processor.
 DMA is implemented with a specialized controller called DMA controller.
 DMA Controller is a control unit that transfers blocks of data between an I/O device
and memory independent of the processor.
 DMA controller provides an interface between the bus and the input-output devices.
 More than one external device can be connected to the DMA controller.

 DMA controller contains an address unit, for generating addresses and selecting I/O
device for transfer.
 It also contains the control unit and data count for keeping counts of the number of
blocks transferred and indicating the direction of transfer of data.
 When the transfer is completed, DMA informs the processor by raising an interrupt.

CPU SIGNALS FOR DMA TRANSFER

Bus Request :
o It is used by the DMA controller to request the CPU to relinquish(release) the control
of the buses.
Bus Grant :
o It is activated by the CPU to inform the external DMA controller that the buses are
in high impedance state and the requesting DMA can take control of the buses.
o Once the DMA has taken the control of the buses, it transfers the data.

STEPS IN DMA TRANSFER


o DMA transfer is controlled by the DMA controller.
o The DMA Controller requests the control of the buses from the CPU.

o After gaining control, the DMA controller performs read and write operations directly
between devices and memory.

o The DMA requires the CPU to provide two additional bus signals:
 The Hold (HLD) signal is an input to the CPU through which the DMA
controller asks for ownership of the bus.
 The Hold Acknowledge (HLDA) signal tells that the bus has been granted.
o The CPU will finish all pending bus operations before granting control of the bus to
the DMA controller.
o Once the DMA controller gets the control of the buses, it can perform any transaction
(reads and writes) using the same bus.
o After the transaction is finished, the DMA controller returns the bus to the CPU.

MODES OF DMA OPERATION


There are three modes of DMA Operation.
They are
Byte Transfer (or) Cycle Stealing DMA Transfer : In this mode, DMA gives control
of buses to CPU after transfer of every byte.

Burst DMA Transfer : In this mode, DMA hands over the buses to the CPU only after
completion of the whole data transfer.

Transparent Transfer : Here, DMA transfers data only when the CPU is executing
instructions that do not require the use of the buses.
(a) Byte (or) Cycle stealing DMA transfer Mode

(b) Burst DMA Transfer Mode

(c) Transparent DMA transfer Mode

ACCESSING INPUT /OUTPUT SYSTEM
o The input-output subsystem of a computer, referred to as I/O, provides an efficient
mode of communication between the central system and the outside environment.

o Programs and data must be entered into computer memory for processing and results
obtained from computations must be recorded or displayed for the user.

I/O INTERFACES
 Input-Output interface provides a method for transferring information between
internal storage and external I/O devices.
 The I/O bus from the processor is attached to all peripheral interfaces.
 To communicate with a particular device, the processor places a device address on the
address lines.
 The I/O bus consists of data lines, address lines, and control lines.
The I/O Interface consists of address decoder, control circuits, data register and status register to
coordinate the I/O transfers.

 The address decoder enables the device to recognize its address when this address
appears on the address lines.
 The data register holds the data. A data command causes the interface to respond by
transferring data from the bus into one of its registers.
 The status register contains status information. A status command is used to test various
status conditions in the interface and the peripheral.

 A control command is issued to activate the peripheral and to inform it what to do.

I/O INTERFACING TECHNIQUES


o I/O devices can be interfaced to a computer system in two ways, which are called
interfacing techniques.
o They are
 Memory mapped I/O
 I/O mapped I/O (Isolated I/O)

Memory Mapped I/O

 Memory-mapped I/O uses the same address space to address both memory and I/O
devices.
 The memory and registers of the I/O devices are mapped to address values.
 So when an address is accessed by the CPU, it may refer to a portion of physical
RAM, or it may instead refer to memory of the I/O device.

I/O mapped I/O (Isolated I/O)

 I/O mapped I/O (also known as port-mapped I/O or isolated I/O) uses a separate,
dedicated address space and is accessed via a dedicated set of microprocessor
instructions.
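A minimal sketch of memory-mapped address decoding, with an invented address map and device-register names:

```python
# Hypothetical 16-bit address map: 0x0000-0x7FFF is RAM, a few addresses
# above it are device registers (names invented for illustration).
RAM_END = 0x7FFF
DEVICE_REGS = {0x8000: "uart_data", 0x8001: "uart_status"}

def decode(addr):
    """Memory-mapped I/O: a single address decoder routes every CPU access
    either to RAM or to a device register in the same address space."""
    if addr <= RAM_END:
        return "ram"
    return DEVICE_REGS.get(addr, "unmapped")
```

Under isolated I/O, by contrast, the same numeric address can name both a memory word and a port, because port accesses use separate instructions and a separate decoder.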

COMPARISON BETWEEN MEMORY MAPPED I/O & ISOLATED I/O

INTERRUPTS
o An interrupt is defined as hardware or software generated event external to the currently
executing process that affects the normal flow of the instruction execution.
o The processor responds by suspending its current activities, saving its state, and
executing a function called an interrupt handler (or an interrupt service routine, ISR)
to deal with the event.
o This interruption is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.

CLASSES OF INTERRUPTS

TYPES OF INTERRUPTS
There are two types of interrupts:
1. Hardware interrupts
2. Software interrupts
Hardware Interrupts :
o Used by devices to communicate that they require attention from the operating
system.
o For example, pressing a key on the keyboard (or) moving the mouse triggers
hardware interrupts that cause the processor to read the keystroke or mouse
position.
Software Interrupts:
o Caused either by an exceptional condition in the processor itself, or a
special instruction in the instruction set which causes an interrupt when it is
executed.
o Example : Divide-by-zero exception

STEPS IN INTERRUPT PROCESSING


1. The device issues an interrupt signal to the processor.
2. The processor finishes execution of the current instruction before responding to the
interrupt.
3. The processor tests for an interrupt, determines that there is one, and sends an
acknowledgment signal to the device that issued the interrupt. The acknowledgment allows
the device to remove its interrupt signal.
4. The processor needs to prepare to transfer control to the interrupt routine.
5. The processor now loads the program counter with the entry location of the interrupt-
handling program that will respond to this interrupt.
6. Once the program counter has been loaded, the processor proceeds to the next instruction
cycle, which begins with an instruction fetch. The contents of the processor registers need
to be saved, because these registers may be used by the interrupt handler. So all of these
values, plus any other state information, need to be saved.
7. The interrupt handler next processes the interrupt.
8. When interrupt processing is complete, the saved
register values are retrieved from the stack and restored to the registers.
9. The final act is to restore the PSW and program counter values from the stack.
10. As a result, the next instruction to be executed will be from the previously interrupted
program.
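The state-saving steps 4 through 9 can be sketched as a software simulation; the function and register names below are invented for illustration:

```python
def service_interrupt(pc, registers, handler_entry, handler):
    """Sketch of steps 4-9: push the PC and register state onto a stack,
    run the handler, then pop the saved state so the interrupted program
    resumes exactly where it left off."""
    stack = [(pc, dict(registers))]      # save PC and register contents (step 6)
    pc = handler_entry                   # load PC with handler entry (step 5)
    handler(registers)                   # handler may clobber registers (step 7)
    pc, saved = stack.pop()              # restore saved state (steps 8-9)
    registers.clear()
    registers.update(saved)
    return pc, registers

# A hypothetical handler that overwrites a register while servicing the device:
pc, regs = service_interrupt(100, {"r0": 7}, 0x2000, lambda r: r.update(r0=99))
```

After the call, `pc` and `regs` hold their pre-interrupt values again, which is why the interrupted program cannot tell it was suspended.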
