DIGITAL SYSTEM
A digital system is an interconnection of digital modules; it is a system that manipulates discrete
elements of information represented internally in binary form.
Nowadays digital systems are used in a wide variety of industrial and consumer products, such as
automated industrial machinery, pocket calculators, microprocessors, digital computers, digital watches, TV
games, signal processing and so on.
The conversion of any radix system to the decimal number system is given by the
following steps.
Step 1: Write the given number.
Step 2: Multiply each digit by its positional weight (the radix raised to the power of the digit position).
Step 3: Add the products to obtain the decimal equivalent.
Example 1:
Calculate the decimal equivalent for the binary number (10.1)2.
(10.1)2 = (1x2^1) + (0x2^0) + (1x2^-1) = (2.5)10
Example 2:
Convert octal number (235.23)8 to decimal number
(235.23)8 = (2x8^2) + (3x8^1) + (5x8^0) + (2x8^-1) + (3x8^-2)
= (157.296875)10
Hexadecimal to Decimal Number System
Example 1:
Calculating the decimal equivalent of the hexadecimal number (19FDE)16:
Step 1: (19FDE)16 = ((1 x 16^4) + (9 x 16^3) + (F x 16^2) + (D x 16^1) + (E x 16^0))10
Step 2: (19FDE)16 = ((1 x 16^4) + (9 x 16^3) + (15 x 16^2) + (13 x 16^1) + (14 x 16^0))10
Step 3: (19FDE)16 = (65536 + 36864 + 3840 + 208 + 14)10 = (106462)10
Example 2:
Convert Hexadecimal number (ABC.3C)16 to decimal number
(ABC.3C)16 = (Ax16^2) + (Bx16^1) + (Cx16^0) + (3x16^-1) + (Cx16^-2)
= (2748.234375)10
Example 1:
Convert (139)10 to (?)2
(139)10 = (10001011)2
Example 2:
Convert (2705)10 to (?)8
(2705)10 = (5221)8
Example 3:
Convert (2705)10 to (?)16
(2705)10 = (A91)16
Example 2:
Convert the decimal number 0.39(10) to octal number.
0.39×8 =3.12 ---> with a carry of 3
0.12×8 =0.96 ---> with a carry of 0
0.96×8 =7.68 ---> with a carry of 7
0.68×8 =5.44 ---> with a carry of 5
Ans: (0.39)10 = (0.3075)8 (to four octal places)
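The repeated-multiplication procedure above can be sketched in Python; the function name fraction_to_base and the four-digit limit are illustrative choices, not part of the original method.

def fraction_to_base(fraction, base, digits=4):
    """Convert a decimal fraction (0 <= fraction < 1) to the given base
    by repeated multiplication, keeping the integer part of each step."""
    result = []
    for _ in range(digits):
        fraction *= base
        integer_part = int(fraction)   # the "carry" produced at this step
        result.append(integer_part)
        fraction -= integer_part       # keep only the fractional part
    return result

# 0.39 (decimal) -> first four octal digits [3, 0, 7, 5], i.e. 0.3075 (base 8)
print(fraction_to_base(0.39, 8))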
Octal to Hexadecimal Conversion
Step 1: First convert each octal value to its equivalent binary number.
Step 2: Then convert the binary number to its equivalent hexadecimal number.
Example:
Convert the octal number (615)8 to hexadecimal number.
Ans:
First Convert Octal to binary number
6->110
1->001
5->101
The Binary number is (110001101)2
Binary number is converted into hexadecimal number (by adding zeros as needed)
(110001101)2 = 0001 1000 1101
1 8 D
The hexadecimal equivalent of the octal number (615)8 is (18D)16
Hexadecimal to Octal Conversion
Step 1: First convert each hexadecimal value to its equivalent binary number.
Step 2: Then convert the binary number to its equivalent octal number.
Example:
Convert (25B)16 to octal number.
Ans:
Convert hexadecimal number to binary number
2->0010
5->0101
B->1011
Binary number is (001001011011)2
(25B)16 = (1133)8
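A minimal Python sketch of the same two-step idea (octal to binary groups of three, then regroup by four for hexadecimal); the function name octal_to_hex is illustrative.

def octal_to_hex(octal_str):
    """Octal -> hexadecimal via binary, mirroring the two-step procedure above."""
    # Step 1: each octal digit becomes exactly three binary bits.
    bits = "".join(format(int(d, 8), "03b") for d in octal_str)
    # Step 2: left-pad to a multiple of four bits and regroup in fours.
    bits = bits.zfill(-(-len(bits) // 4) * 4)
    return "".join(format(int(bits[i:i + 4], 2), "X") for i in range(0, len(bits), 4))

print(octal_to_hex("615"))   # prints 18D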
One's (1's) Complement:
The 1's complement is taken only for binary numbers. It is obtained by changing
every '0' to '1' and every '1' to '0'.
Example:
Nine’s (9’s)Complement:
The 9’s complement is taken only for the decimal numbers. It is obtained by
subtracting each digit from 9.
Two's (2's) Complement:
The 2's complement is taken only for binary numbers. First take the 1's
complement and then add 1 to the LSB.
Ten's (10's) Complement:
The 10's complement is taken only for decimal numbers. First take the 9's
complement and then add 1 to the least significant digit.
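The four complements can be sketched in Python as shown below; the inputs are digit strings and the function names are illustrative. The 2's complement is also what is added when an adder performs subtraction, as described later.

def ones_complement(bits):
    """1's complement: flip every bit of a binary string."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """2's complement: 1's complement plus 1, kept to the same width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (2 ** width), "0{}b".format(width))

def nines_complement(digits):
    """9's complement: subtract every decimal digit from 9."""
    return "".join(str(9 - int(d)) for d in digits)

def tens_complement(digits):
    """10's complement: 9's complement plus 1, same number of digits."""
    width = len(digits)
    value = int(nines_complement(digits)) + 1
    return str(value % (10 ** width)).zfill(width)

print(ones_complement("1010"))   # 0101
print(twos_complement("1010"))   # 0110
print(nines_complement("546"))   # 453
print(tens_complement("546"))    # 454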
Example:
Case 1: For subtracting a smaller number from a larger number, the 1's complement method
is as follows:
1. Determine the 1's complement of the smaller number.
2. Add the 1's complement to the larger number.
3. Remove the final carry and add it to the result. This is called the end-around carry.
Example:
11001-10011
Case 2: For subtracting a larger number from a smaller number, the 1's complement method is
as follows:
1. Determine the 1's complement of the larger number.
2. Add the 1's complement to the smaller number.
3. There is no carry. The result has the opposite sign from the answer and is the 1's
complement of the answer.
4. Change the sign and take the 1's complement of the result to get the final answer.
Example:
1001 - 1101
Case 1: For subtracting a smaller number from a larger number, the 2's complement method is
as follows:
1. Determine the 2's complement of the smaller number.
2. Add the 2's complement to the larger number.
3. Discard the final carry.
Example:
11001 - 10011
Case 2: For subtracting a larger number from a smaller number, the 2's complement method is
as follows:
1. Determine the 2's complement of the larger number.
2. Add the 2's complement to the smaller number.
3. There is no carry from the left-most column. The result is in 2's complement form and
is negative.
4. Change the sign and take the 2's complement of the result to get the final answer.
Example:
1001 - 1101
Logic gates are electronic circuits that can be used to implement the most elementary
logic expressions, also known as Boolean expressions. The logic gate is the most basic
building block of combinational logic.
There are three basic logic gates, namely the OR gate, the AND gate and the NOT
gate. Other logic gates that are derived from these basic gates are the NAND gate, the NOR
gate, the EXCLUSIVE- OR gate and the EXCLUSIVE-NOR gate
GATE SYMBOL OPERATION TRUTH TABLE
INTRODUCTION:
The postulates of a mathematical system form the basic assumptions from which it is
possible to deduce the theorems, laws and properties of the system.
The most common postulates used to formulate various structures are—
i) Closure:
A set S is closed with respect to a binary operator if, for every pair of elements of S, the
binary operator specifies a rule for obtaining a unique element of S.
The result of each operation with the operator (+) or (.) is either 1 or 0, and 1, 0 ∈ B.
ii) Identity element:
A set S has an identity element e with respect to a binary operator * if e * x = x * e = x.
a) x+ 0 = x    Eg: 0+ 0 = 0, 1+ 0 = 0+ 1 = 1
b) x. 1 = x    Eg: 1. 1 = 1, 0. 1 = 1. 0 = 0
iii) Commutative law:
a) x+ y = y+ x    Eg: 0+ 1 = 1+ 0 = 1
b) x. y = y. x    Eg: 0. 1 = 1. 0 = 0
iv) Distributive law:
If * and . are two binary operators on a set S, . is said to be distributive over +
whenever x. (y+ z) = (x. y)+ (x. z).
v) Inverse:
A set S having the identity element e with respect to a binary operator * is said to have an
inverse whenever, for every x ∈ S, there exists an element x' ∈ S such that x * x' = e.
a) x+ x' = 1, since 0+ 0' = 0+ 1 = 1 and 1+ 1' = 1+ 0 = 1
b) x. x' = 0, since 0. 0' = 0. 1 = 0 and 1. 1' = 1. 0 = 0
Summary:
The theorems, like the postulates are listed in pairs; each relation is the dual of the one
paired with it. The postulates are basic axioms of the algebraic structure and need no proof. The
theorems must be proven from the postulates. The proofs of the theorems with one variable are
presented below. At the right is listed the number of the postulate that justifies each step of the
proof.
1) a) x+ x = x
x+ x = (x+ x) . 1------------------- by postulate 2(b) [ x. 1 = x ]
= (x+ x). (x+ x’)------------------- 5(a) [ x+ x’ = 1]
= x+ xx’------------------- 4(b) [ x+yz = (x+y)(x+z)]
= x+ 0------------------- 5(b) [ x. x’ = 0 ]
= x------------------- 2(a) [ x+0 = x ]
b) x. x = x
x. x = (x. x) + 0------------------- by postulate 2(a) [ x+ 0 = x ]
= (x. x) + (x. x’)------------------- 5(b) [ x. x’ = 0]
= x ( x+ x’)------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x (1)------------------- 5(a) [ x+ x’ = 1 ]
= x------------------- 2(b) [ x.1 = x ]
2) a) x+ 1 = 1
b) x .0 = 0
3) (x’)’ = x
From postulate 5, we have x+ x’ = 1 and x. x’ = 0, which defines thecomplement of
x. The complement of x’ is x and is also (x’)’.
Therefore, since the complement is unique,
(x’)’ = x.
4) Absorption Theorem:
a) x+ xy = x
x+ xy = x. 1 + xy ------------------- by postulate 2(b) [ x. 1 = x ]
= x (1+ y) ------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x (1) ------------------- by theorem 2(a) [x+ 1 = 1]
= x. ------------------- by postulate 2(b) [x. 1 = x]
b) x. (x+ y) = x
x. (x+ y) = x. x+ x. y------------------- 4(a) [ x (y+z) = (xy)+ (xz)]
= x + x.y------------------- by theorem 1(b) [x. x = x]
= x.------------------- by theorem 4(a) [x+ xy = x]
c) x+ x’y = x+ y
x+ x’y = x+ xy+ x’y------------------- by theorem 4(a) [x+ xy = x]
= x+ y (x+ x’)------------------- by postulate 4(a) [ x (y+z) = (xy)+ (xz)]
= x+ y (1)------------------- 5(a) [x+ x’ = 1]
= x+ y------------------- 2(b) [x. 1= x]
d) x. (x’+y) = xy
x. (x’+y) = x.x’+ xy------------------- by postulate 4(a) [ x (y+z) = (xy)+ (xz)]
= 0+ xy------------------- 5(b) [x. x’ = 0]
= xy.------------------- 2(a) [x+ 0= x]
1. Commutative property:
x+ y = y+ x
According to this property, the order of the OR operation conducted on the variables makes no
difference.
Boolean algebra is also commutative over multiplication given by,
x. y = y. x
This means that the order of the AND operation conducted on the variables makes no difference.
2. Associative property:
The associative property of addition is given by,
x+ (y+ z) = (x+ y)+ z
The OR operation of several variables results in the same value, regardless of the grouping of the
variables.
The associative law of multiplication is given by,
x. (y. z) = (x. y). z
It makes no difference in what order the variables are grouped during the AND
operation of several variables.
3. Distributive property:
x. (y+ z) = x. y+ x. z and x+ (y. z) = (x+ y). (x+ z)
4. Duality:
It states that every algebraic expression deducible from the postulates of Boolean algebra
remains valid if the operators and identity elements are interchanged.
If the dual of an algebraic expression is desired, we simply interchange OR and AND
operators and replace 1's by 0's and 0's by 1's.
For example, the dual of x+ x' = 1 is x. x' = 0.
Duality is a very important property of Boolean algebra.
Summary:
DeMorgan’s Theorems:
Two theorems that are an important part of Boolean algebra were proposed by DeMorgan.
The first theorem states that the complement of a product is equal to the sum of the
complements.
(x. y)' = x'+ y'
The second theorem states that the complement of a sum is equal to the product of the
complements.
(x+ y)' = x'. y'
Consensus Theorem:
In the simplification of Boolean expressions, in an expression of the form AB+ A'C+ BC, the
term BC is redundant and can be eliminated to form the equivalent expression AB+ A'C. The
theorem used for this simplification is known as the consensus theorem and is represented as
AB+ A'C+ BC = AB+ A'C
The simplification of functions using Boolean laws and theorems becomes
complex with the increase in the number of variables and terms. The map method, first
proposed by Veitch and slightly improved by Karnaugh, provides a simple, straightforward
procedure for the simplification of Boolean functions. The method is called the Veitch diagram
or the Karnaugh map, which may be regarded as a pictorial representation of a truth table.
The Karnaugh map technique provides a systematic method for simplifying and
manipulating Boolean expressions. A K-map is a diagram made up of squares, with each
square representing one minterm of the function that is to be minimized. For n variables on a
Karnaugh map there are 2^n squares. Each square or cell represents one of the minterms.
It can be drawn directly from either minterm (sum-of-products) or maxterm (product-of-sums)
Boolean expressions.
Two- Variable, Three Variable and Four Variable Maps Karnaugh maps can be used for
expressions with two, three, four and five variables. The number of cells in a Karnaugh map is
equal to the total number of possible input variable combinations as is the number of rows in a
truth table. For three variables, the number of cells is 2^3 = 8. For four variables, the number of
cells is 2^4 = 16.
Product terms are assigned to the cells of a K-map by labeling each row and each column
of the map with a variable, with its complement or with a combination of variables and
complements. The figure below shows the way to label the rows and columns of 1-, 2-, 3- and 4-
variable maps and the product terms corresponding to each cell.
It is important to note that when we move from one cell to the next along any row or
from one cell to the next along any column, one and only one variable in the product term
changes (to a complemented or to an uncomplemented form). Irrespective of the number of variables,
the labels along each row and column must conform to a single change. Hence the Gray code is used
to label the rows and columns of the K-map as shown below.
Grouping cells for Simplification:
The grouping is nothing but combining terms in adjacent cells. The simplification is
achieved by grouping adjacent 1's or 0's in groups of 2^i, where i = 1, 2, …, n and n is the
number of variables. When adjacent 1's are grouped, the result is obtained in sum-of-products
form; otherwise the result is obtained in product-of-sums form.
Grouping Four Adjacent 1’s: (Quad)
In a Karnaugh map we can group four adjacent 1's. The resultant group is called a quad.
Fig (a) shows four 1's that are horizontally adjacent and Fig (b) shows four that are vertically
adjacent. Fig (c) contains four 1's in a square, and they are considered adjacent to each other.
Examples of Quads
The four 1's in fig (d) and fig (e) are also adjacent, as are those in fig (f), because the
top and bottom rows are considered to be adjacent to each other and the leftmost and rightmost
columns are also adjacent to each other.
Grouping Eight Adjacent 1's: (Octet)
In a Karnaugh map we can group eight adjacent 1's. The resultant group is called an octet.
1. Simplify the Boolean expression,
F(x, y, z) = ∑m (3, 4, 6, 7).
Soln:
F = yz+ xz’
2. F(x, y, z) = ∑m (0, 2, 4, 5, 6).
Soln:
F = z’+ xy’
3. F (A, B, C) = ∑ m (1, 2, 3, 5, 7)
F = C + A’B
Four - Variable Map:
Therefore,
Y= A’B’CD’+ AC’D+ BC’
Soln:
Therefore,
3. F= A’B’C’+ B’CD’+ A’BCD’+ AB’C’
= A’B’C’ (D+ D’) + B’CD’ (A+ A’) + A’BCD’+ AB’C’ (D+ D’)
= A’B’C’D+ A’B’C’D’+ AB’CD’+ A’B’CD’+ A’BCD’+ AB’C’D+ AB’C’D’
= m1+ m0+ m10+ m2+ m6+ m9+ m8
= ∑ m (0, 1, 2, 6, 8, 9, 10)
Therefore,
F= B’D’+ B’C’+ A’CD’.
Therefore,
Y= AB+ AC+ AD’.
5. Y (A, B, C, D)= ∑ m (7, 9, 10, 11, 12, 13, 14, 15)
Therefore,
Y= AB+ AC+ AD+BCD.
In the above K-map, the cells 5, 7, 13 and 15 can be grouped to form a quad as indicated
by the dotted lines. In order to group the remaining 1’s, four pairs have to be formed. However,
all the four 1’s covered by the quad are also covered by the pairs. So, the quad in the above k-
map is redundant.
Therefore, the simplified expression will be,
Y = A’C’D+ A’BC+ ABD+ ACD.
7. Y= ∑ m (1, 5, 10, 11, 12, 13, 15)
Therefore, Y= AD’+ B’C+ B’D’
Therefore, F= A’C’D’+ AB’D’+ B’C’.
F (x, y, z) = 1
F (w, x, y, z) = w’x’+ yz
Soln:
Thus, every row on one map is adjacent to the corresponding row (the one occupying
the same position) on the other map, as are corresponding columns. Also, the rightmost and
leftmost columns within each 16- cell map are adjacent, just as they are in any 16- cell map,
as are the top and bottom rows.
F (A, B, C, D, E) = ∑m (0, 2, 4, 6, 9, 11, 13, 15, 17, 21, 25, 27, 29, 31)
Soln:
2. F (A, B, C, D, E) = ∑m (0, 5, 6, 8, 9, 10, 11, 16, 20, 24, 25, 26, 27, 29, 31)
Soln:
3. F (A, B, C, D, E) = ∑m ( 1, 4, 8, 10, 11, 20, 22, 24, 25, 26)+∑d (0, 12, 16, 17)
Soln:
Soln:
5. F (x1, x2, x3, x4, x5) = ∑m (2, 3, 6, 7, 11, 12, 13, 14, 15, 23, 28, 29, 30, 31)
Soln:
6. F (x1, x2, x3, x4, x5) = ∑m (1, 2, 3, 6, 8, 9, 14, 17, 24, 25, 26, 27, 30, 31 )+ ∑d (4, 5)
Soln:
F (x1, x2, x3, x4, x5) = x2x3’x4’+ x2x3x4x5’+ x3’x4’x5+ x1x2x4+ x1’x2’x3x5’+ x1’x2’x3’x4
A binary variable may appear either in its normal form (x) or in its complemented form
(x'). Consider two binary variables x and y combined with an AND operation. Since each
variable may appear in either form, there are four possible combinations:
x'y', x'y, xy' and xy
Each of these four AND terms is called a ‘minterm’.
In a similar fashion, when two binary variables x and y are combined with an OR
operation, there are four possible combinations:
x’+ y’, x’+ y, x+ y’ and x+ y
Each of these four OR terms is called a ‘maxterm’.
Variables    Minterms          Maxterms
x y z        mi                Mi
0 0 0        x'y'z' = m0       x+ y+ z = M0
0 0 1        x'y'z  = m1       x+ y+ z' = M1
0 1 0        x'yz'  = m2       x+ y'+ z = M2
0 1 1        x'yz   = m3       x+ y'+ z' = M3
1 0 0        xy'z'  = m4       x'+ y+ z = M4
1 0 1        xy'z   = m5       x'+ y+ z' = M5
1 1 0        xyz'   = m6       x'+ y'+ z = M6
1 1 1        xyz    = m7       x'+ y'+ z' = M7
Canonical Sum of Product expression:
If each term in SOP form contains all the literals then the SOP is known as standard
(or) canonical SOP form. Each individual term in standard SOP form is called minterm
canonical form.
F (A, B, C) = AB’C+ ABC+ ABC’
2. Y (A, B, C) = A+ ABC
= A. (B+ B’). (C+ C’)+ ABC
= (AB+ AB’). (C+ C’)+ ABC
= ABC+ ABC’+ AB’C+ AB’C’+ ABC
= ABC+ ABC’+ AB’C+ AB’C’
= m7+ m6+ m5+ m4
= ∑m (4, 5, 6, 7).
3. Y (A, B, C) = A+ BC
= A. (B+ B’). (C+ C’)+(A+ A’). BC
= (AB+ AB’). (C+ C’)+ ABC+ A’BC
= ABC+ ABC’+ AB’C+ AB’C’+ ABC+ A’BC
= ABC+ ABC’+ AB’C+ AB’C’+ A’BC
= m7+ m6+ m5+ m4+ m3
= ∑m (3, 4, 5, 6, 7).
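The expansion above can be checked mechanically by evaluating the function over all input combinations; the sketch below (function name minterms is illustrative) reproduces ∑m(3, 4, 5, 6, 7) for Y = A + BC.

from itertools import product

def minterms(func, num_vars):
    """Return the minterm numbers for which func evaluates to 1.
    Inputs are generated MSB-first, matching the A, B, C ordering above."""
    terms = []
    for index, bits in enumerate(product((0, 1), repeat=num_vars)):
        if func(*bits):
            terms.append(index)
    return terms

# Y(A, B, C) = A + BC  ->  sum of minterms (3, 4, 5, 6, 7)
print(minterms(lambda a, b, c: a | (b & c), 3))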
= ABC+ AB'C+ ABC+ ABC'+ ABC+ A'BC
= ABC+ AB’C+ ABC’+ A’BC
= ∑m (3, 5, 6, 7).
If each term in POS form contains all literals then the POS is known as standard (or)
Canonical POS form. Each individual term in standard POS form is called Maxterm canonical
form.
F (A, B, C) = (A+ B+ C). (A+ B’+ C). (A+ B+ C’)
F (x, y, z) = (x+ y’+ z’). (x’+ y+ z). (x+ y+ z)
= ∏M (0, 3, 4)
3. Y= A. (B+ C+ A)
= (A+ B.B’+ C.C’). (A+ B+ C)
= (A+B+C) (A+B+C’) (A+B’+C) (A+ B’+C’) (A+B+C)
= (A+B+C) (A+B+C’) (A+B’+C) (A+ B’+C’)
= M0. M1. M2. M3
= ∏M (0, 1, 2, 3)
= ∏M (0, 1, 2, 3, 4)
6. Y= xy+ x’z
= (xy+ x') (xy+ z), using the distributive law to convert the function into OR terms.
= ∏M (0, 2, 4, 5).
UNIVERSAL GATES:
The NAND and NOR gates are known as universal gates, since any logic
function can be implemented using NAND or NOR gates. This is illustrated in the
following sections.
a) NAND Gate:
The NAND gate can be used to generate the NOT function, the AND function, the
OR function and the NOR function.
i) NOT function:
By connecting all the inputs together and creating a single common input.
iii) OR function:
By simply inverting the inputs of the NAND gate, i.e.,
OR function using NAND gates
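A minimal Python sketch of the universality of the NAND gate: NOT, AND and OR are built from a single nand() primitive, exactly as the connections above describe (function names are illustrative).

def nand(a, b):
    """2-input NAND: the only primitive gate used below."""
    return 0 if (a and b) else 1

def not_gate(a):          # NOT: tie both NAND inputs together
    return nand(a, a)

def and_gate(a, b):       # AND: NAND followed by NOT
    return not_gate(nand(a, b))

def or_gate(a, b):        # OR: invert both inputs, then NAND (DeMorgan)
    return nand(not_gate(a), not_gate(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b), "NOT a:", not_gate(a))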
b) NOR Gate:
Similar to the NAND gate, the NOR gate is also a universal gate, since it can be used
to generate the NOT, AND, OR and NAND functions.
i) NOT function:
By connecting all the inputs together and creating a single common input.
ii) OR function:
By simply inverting output of the NOR gate. i.e.,
A bubble at the input of the NOR gate indicates an inverted input.
Truth table
1. Implement the Boolean expression using NAND gates:
Original Circuit:
Soln:
NAND Circuit:
NOR Circuit:
Soln:
Adding bubbles on the output of each AND gate and on the inputs of each OR
gate.
Adding an inverter on each line that received a bubble,
Quine-McCluskey Tabular Method
Boolean function simplification is one of the basics of digital electronics. The Quine-McCluskey
method, also called the tabulation method, is a very useful and convenient method for simplification of
Boolean functions with a large number of variables (greater than 4). This method is useful over the K-map
when the number of variables is larger, for which K-map formation is difficult. This method uses
prime implicants for simplification.
Follow these steps for simplifying Boolean functions using the Quine-McCluskey tabular method.
Step 1 − Arrange the given min terms in an ascending order and make the groups based on the number of
ones present in their binary representations. So, there will be at most ‘n+1’ groups if there are ‘n’ Boolean
variables in a Boolean function or ‘n’ bits in the binary equivalent of min terms.
Step 2 − Compare the min terms present in successive groups. If there is a change in only one bit position,
then take the pair of those two min terms. Place the symbol '-' in the differing bit position and keep the
remaining bits as they are.
Step 3 − Repeat step 2 with the newly formed terms till we get all prime implicants.
Step 4 − Formulate the prime implicant table. It consists of set of rows and columns. Prime implicants can
be placed in row wise and min terms can be placed in column wise. Place ‘1’ in the cells corresponding to the
min terms that are covered in each prime implicant.
Step 5 − Find the essential prime implicants by observing each column. If the min term is covered only by one
prime implicant, then it is essential prime implicant. Those essential prime implicants will be part of the
simplified Boolean function.
Step 6 − Reduce the prime implicant table by removing the row of each essential prime implicant and the
columns corresponding to the min terms that are covered in that essential prime implicant. Repeat step 5 for
Reduced prime implicant table. Stop this process when all min terms of given Boolean function are over.
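The step-by-step procedure can be sketched compactly in Python. The sketch below finds prime implicants by repeated merging (steps 1-3) and then picks the essential ones (steps 4-5); it does not iterate the chart reduction of step 6, which is sufficient for the worked example that follows, where the essential prime implicants already cover every min term. Function names are illustrative.

from itertools import combinations

def merge(a, b):
    """Merge two implicant patterns ('0', '1', '-') differing in exactly one
    non-dash position; return None if they cannot be merged."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and a[diff[0]] != "-" and b[diff[0]] != "-":
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(minterms, num_vars):
    """Steps 1-3: repeatedly combine terms that differ in one bit."""
    terms = {format(m, "0{}b".format(num_vars)) for m in minterms}
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = merge(a, b)
            if m is not None:
                next_terms.add(m)
                used.update((a, b))
        primes |= terms - used          # unmerged terms are prime implicants
        terms = next_terms
    return primes

def covers(implicant, minterm, num_vars):
    bits = format(minterm, "0{}b".format(num_vars))
    return all(p == "-" or p == b for p, b in zip(implicant, bits))

def essential_primes(minterms, num_vars):
    """Steps 4-5: keep the implicants that are the only cover of some min term."""
    primes = prime_implicants(minterms, num_vars)
    essential = set()
    for m in minterms:
        covering = [p for p in primes if covers(p, m, num_vars)]
        if len(covering) == 1:
            essential.add(covering[0])
    return essential

# f(W, X, Y, Z) = sum m(2, 6, 8, 9, 10, 11, 14, 15): expect {'--10', '10--', '1-1-'}
print(essential_primes([2, 6, 8, 9, 10, 11, 14, 15], 4))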
Example
Let us simplify the following Boolean function, f(W,X,Y,Z) = ∑m(2, 6, 8, 9, 10, 11, 14, 15),
using the Quine-McCluskey tabular method.
The given Boolean function is in sum of min terms form. It has 4 variables W, X, Y & Z. The given
min terms are 2, 6, 8, 9, 10, 11, 14 and 15. The ascending order of these min terms based on the number of
ones present in their binary equivalents is 2, 8, 6, 9, 10, 11, 14 and 15. The following table shows these min
terms and their equivalent binary representations.
Group Name   Min terms   W X Y Z
GA1          2           0 0 1 0
             8           1 0 0 0
GA2          6           0 1 1 0
             9           1 0 0 1
             10          1 0 1 0
GA3          11          1 0 1 1
             14          1 1 1 0
GA4          15          1 1 1 1
The given min terms are arranged into 4 groups based on the number of ones present in their binary
equivalents. The following table shows the possible merging of min terms from adjacent groups.
Group Name   Min terms   W X Y Z
GB1          2,6         0 - 1 0
             2,10        - 0 1 0
             8,9         1 0 0 -
             8,10        1 0 - 0
GB2          6,14        - 1 1 0
             9,11        1 0 - 1
             10,11       1 0 1 -
             10,14       1 - 1 0
GB3          11,15       1 - 1 1
             14,15       1 1 1 -
The min terms which differ in only one bit position from adjacent groups are merged. The differing bit
is represented with the symbol '-'. In this case, there are three groups and each group contains combinations
of two min terms. The following table shows the possible merging of min term pairs from adjacent groups.
Group Name   Min terms      W X Y Z
GB1          2,6,10,14      - - 1 0
             2,10,6,14      - - 1 0
             8,9,10,11      1 0 - -
             8,10,9,11      1 0 - -
GB2          10,11,14,15    1 - 1 -
             10,14,11,15    1 - 1 -
The successive groups of min term pairs which differ in only one bit position are merged. The differing
bit is represented with the symbol '-'. In this case, there are two groups and each group contains
combinations of four min terms. Here, these combinations of 4 min terms are available in two rows. So, we
can remove the repeated rows. The reduced table after removing the redundant rows is shown below.
Group Name   Min terms      W X Y Z
GC1          2,6,10,14      - - 1 0
             8,9,10,11      1 0 - -
GC2          10,11,14,15    1 - 1 -
Further merging of the combinations of min terms from adjacent groups is not possible, since they differ
in more than one bit position. There are three rows in the above table, so each row gives one prime
implicant. Therefore, the prime implicants are YZ', WX' & WY.
The prime implicant table is shown below.
Min terms / Prime Implicants   2  6  8  9  10  11  14  15
YZ'                            1  1        1       1
WX'                                  1  1  1   1
WY                                         1   1   1   1
The prime implicants are placed in row wise and min terms are placed in column wise. 1s are placed in the
common cells of prime implicant rows and the corresponding min term columns.
The min terms 2 and 6 are covered only by one prime implicant YZ’. So, it is an essential prime implicant.
This will be part of simplified Boolean function. Now, remove this prime implicant row and the corresponding
min term columns. The reduced prime implicant table is shown below.
Min terms / Prime Implicants   8  9  11  15
WX'                            1  1  1
WY                                    1   1
The min terms 8 and 9 are covered only by one prime implicant WX’. So, it is an essential prime implicant.
This will be part of simplified Boolean function. Now, remove this prime implicant row and the corresponding
min term columns. The reduced prime implicant table is shown below.
Min terms / Prime Implicants   15
WY                             1
The min term 15 is covered only by one prime implicant WY. So, it is an essential prime implicant. This will
be part of simplified Boolean function.
In this example problem, we got three prime implicants and all three are essential. Therefore,
the simplified Boolean function is
f(W, X, Y, Z) = YZ' + WX' + WY
UNIT II
Combinational circuit consists of logic gates whose output at any time is determined
from the present combination of inputs. The logic gate is the most basic building block of
combinational logic. The logical function performed by a combinational circuit is fully defined
by a set of Boolean expressions.
Sequential logic circuit comprises both logic gates and the state of storage elements
such as flip-flops. As a consequence, the output of a sequential circuit depends not only on
present value of inputs but also on the past state of inputs.
In the previous chapter, we have discussed binary numbers, codes, Boolean algebra and
simplification of Boolean function and logic gates. In this chapter, formulation and analysis of
various systematic designs of combinational circuits will be discussed.
A combinational circuit consists of input variables, logic gates, and output variables. The
logic gates accept signals from inputs and output signals are generated according to the logic
circuits employed in it. Binary information from the given data transforms to desired output data
in this process. Both input and output are obviously the binary signals, i.e., both the input and
output signals are of two possible states, logic 1 and logic 0.
In this section, we will discuss those combinational logic building blocks that can be used
to perform addition and subtraction operations on binary numbers. Addition and subtraction are
the two most commonly used arithmetic operations, as the other two, namely multiplication and
division, are respectively the processes of repeated addition and repeated subtraction.
The basic building blocks that form the basis of all hardware used to perform the
arithmetic operations on binary numbers are half-adder, full adder, half-subtractor, full-
subtractor.
Half-Adder:
A half-adder is a combinational circuit that can be used to add two binary bits. It has two
inputs that represent the two bits to be added and two outputs, with one producing the SUM
output and the other producing the CARRY.
The truth table of a half-adder, showing all possible input combinations and the
corresponding outputs are shown below.
Inputs Outputs
A B Carry (C) Sum (S)
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
Truth table of half-adder
K-map simplification for carry and sum:
The Boolean expressions for the SUM and CARRY outputs are given by the equations,
Sum, S = A'B+ AB' = A ⊕ B
Carry, C = A . B
The first one representing the SUM output is that of an EX-OR gate, the second
one representing the CARRY output is that of an AND gate.
The logic diagram of the half adder is,
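A minimal Python sketch of the half-adder equations just derived (Sum = A XOR B, Carry = A AND B), printed over all input combinations to reproduce the truth table above.

def half_adder(a, b):
    """Half adder: Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "-> carry:", c, "sum:", s)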
Full-Adder:
A full adder is a combinational circuit that forms the arithmetic sum ofthree
input bits. It consists of 3 inputs and 2 outputs.
Two of the input variables, represent the significant bits to be added. The third input
represents the carry from previous lower significant position. The block diagram of full adder is
given by,
Block schematic of full-adder
The full adder circuit overcomes the limitation of the half-adder, which can be used to
add two bits only. As there are three input variables, eight different input combinations are
possible. The truth table is shown below,
Truth Table:
Inputs            Outputs
A  B  Cin     Sum (S)  Carry (Cout)
0  0  0       0        0
0  0  1       1        0
0  1  0       1        0
0  1  1       0        1
1  0  0       1        0
1  0  1       0        1
1  1  0       0        1
1  1  1       1        1
To derive the simplified Boolean expression from the truth table, the Karnaugh map method is
adopted as,
The Boolean expressions for the SUM and CARRY outputs are given by the
equations,
Sum, S = A'B'Cin + A'BCin' + AB'Cin' + ABCin
Carry, Cout = AB + ACin + BCin.
The logic diagram of the full adder can also be implemented with two half-adders and
one OR gate. The S output from the second half-adder is the exclusive-OR of Cin and the output
of the first half-adder, giving
Sum, S = Cin ⊕ (A ⊕ B)
and the carry output, Cout = AB + Cin (A ⊕ B).
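The two-half-adder construction described above can be sketched directly in Python; the OR gate combines the two carries, and the loop reproduces the full-adder truth table.

def half_adder(a, b):
    return a ^ b, a & b                      # (sum, carry)

def full_adder(a, b, cin):
    """Full adder from two half adders and an OR gate:
    S = Cin XOR (A XOR B), Cout = AB + Cin(A XOR B)."""
    s1, c1 = half_adder(a, b)                # first half adder
    s, c2 = half_adder(s1, cin)              # second half adder
    return s, c1 | c2                        # OR gate combines the two carries

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))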
A half-subtractor is a combinational circuit that can be used to subtract one binary digit
from another to produce a DIFFERENCE output and a BORROW output. The BORROW output
here specifies whether a '1' has been borrowed to perform the subtraction.
Block schematic of half-subtractor
The truth table of half-subtractor, showing all possible input combinations andthe
corresponding outputs are shown below.
Inputs       Outputs
A  B     Difference (D)   Borrow (Bout)
0  0     0                0
0  1     1                1
1  0     1                0
1  1     0                0
The Boolean expressions for the DIFFERENCE and BORROW outputs are given by the
equations,
Difference, D = A'B+ AB' = A ⊕ B
Borrow, Bout = A' . B
The first one representing the DIFFERENCE (D)output is that of an exclusive-OR gate,
the expression for the BORROW output (Bout) is that of an AND gate with input A
complemented before it is fed to the gate.
The logic diagram of the half-subtractor is,
Comparing a half-subtractor with a half-adder, we find that the expressions for the SUM
and DIFFERENCE outputs are just the same. The expression for BORROW in the case of the
half-subtractor is also similar to what we have for CARRY in the case of the half-adder. If the
input A, ie., the minuend is complemented, an AND gate can be used to implement the
BORROW output.
Full Subtractor:
A full subtractor performs subtraction operation on two bits, a minuend and a subtrahend,
and also takes into consideration whether a '1' has already been borrowed by the previous
adjacent lower minuend bit or not.
As a result, there are three bits to be handled at the input of a full subtractor, namely the
two bits to be subtracted and a borrow bit designated as Bin. There are two outputs, namely the
DIFFERENCE output D and the BORROW output Bo. The BORROW output bit tells whether
the minuend bit needs to borrow a '1' from the next possible higher minuend bit.
The Boolean expressions for the DIFFERENCE and BORROW outputs are given by the
equations,
Difference, D = A'B'Bin + A'BBin' + AB'Bin' + ABBin
Borrow, Bout = A'B + A'Bin + BBin.
The logic diagram of the full-subtractor can also be implemented with two half-
subtractors and one OR gate. The difference output D from the second half-subtractor is the
exclusive-OR of Bin and the output of the first half-subtractor, giving
Difference, D = Bin ⊕ (A ⊕ B) = Bin'(A'B+ AB') + Bin (AB+ A'B')
and the borrow output is,
Bout = A'B + Bin (AB+ A'B')
Therefore,
we can implement the full-subtractor using two half-subtractors and an OR gate as,
Since all the bits of augend and addend are fed into the adder circuits simultaneously and
the additions in each position are taking place at the same time, this circuit is known as parallel
adder.
The carry output of the lower-order stage is connected to the carry input of the next higher-order
stage. Hence this type of adder is called a ripple-carry adder.
In the least significant stage, A0, B0 and C0 (which is 0) are added resulting in sum S0 and
carry C1. This carry C1 becomes the carry input to the second stage. Similarly, in the second stage, A1,
B1 and C1 are added resulting in sum S1 and carry C2; in the third stage, A2, B2 and C2 are added
resulting in sum S2 and carry C3; and in the fourth stage, A3, B3 and C3 are added resulting in sum S3 and
C4, which is the output carry. Thus the circuit results in a sum (S3S2S1S0) and a carry output (Cout).
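The stage-by-stage connection just described can be sketched in Python; bits are given LSB first, and the example reproduces 0011 + 0101 = 1000. Function names are illustrative.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_adder(a_bits, b_bits, c0=0):
    """Ripple-carry adder: the carry out of each stage feeds the carry in
    of the next higher-order stage. Bit lists are LSB first (A0..A3)."""
    carry, sums = c0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry                       # (S0..S3, Cout)

# 0011 + 0101 = 1000 (3 + 5 = 8); bits listed LSB first
print(ripple_carry_adder([1, 1, 0, 0], [1, 0, 1, 0]))   # ([0, 0, 0, 1], 0)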
Though the parallel binary adder is said to generate its output immediately after the inputs are
applied, its speed of operation is limited by the carry propagation delay through all stages. However,
there are several methods to reduce this delay.
One of the methods of speeding up this process is look-ahead carry addition which eliminates
the ripple-carry delay.
For example, addition of two numbers (0011 + 0101) gives the result 1000. Addition in the
LSB position produces a carry into the second position. This carry, when added to the bits of the
second position, produces a carry into the third position. This carry, when added to the bits of the third
position, produces a carry into the last position. The sum bit generated in the last position (MSB)
depends on the carry that was generated by the addition in the previous position, i.e., the adder will not
produce a correct result until the LSB carry has propagated through the intermediate full-adders. This
represents a time delay that depends on the propagation delay produced in each full-adder. For
example, if each full adder is considered to have a propagation delay of 30 nsec, then S3 will not reach its
correct value until 90 nsec after the LSB carry is generated. Therefore the total time required to perform the
addition is 90 + 30 = 120 nsec.
4-bit Parallel Adder
The method of speeding up this process by eliminating inter-stage carry delay is called
look-ahead carry addition. This method utilizes logic gates to look at the lower-order bits of the
augend and addend to see if a higher-order carry is to be generated. It uses two functions: carry
generate and carry propagate.
Consider the circuit of the full-adder shown above. Here we define two functions: carry
generate (Gi) and carry propagate (Pi) as,
Gi = Ai Bi
Pi = Ai ⊕ Bi
The output sum and carry can then be expressed as,
Si = Pi ⊕ Ci
Ci+1 = Gi + Pi Ci
Gi is called the carry generate, since it produces a carry of 1 when both Ai and Bi are 1, regardless of the input
carry Ci.
Pi is called the carry propagate, because it is the term associated with the propagation of the carry from Ci to
Ci+1.
Writing the Boolean functions for the carry outputs of each stage and substituting for each Ci its
value from the previous equation:
C0 = input carry
C1 = G0 + P0C0
C2 = G1 + P1C1 = G1 + P1G0 + P1P0C0
C3 = G2 + P2C2 = G2 + P2G1 + P2P1G0 + P2P1P0C0
Since the Boolean function for each output carry is expressed in sum of products, each
function can be implemented with one level of AND gates followed by an OR gate. The three Boolean functions
for C1, C2 and C3 are implemented in the carry look-ahead generator as shown below. Note that C3 does not
have to wait for C2 and C1 to propagate; in fact C3 is propagated at the same time as C1 and C2.
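The carry look-ahead equations above can be sketched in Python: every carry is written as a sum of products of the generate and propagate signals and C0 only, so no carry waits on the previous stage. The example repeats 0011 + 0101 = 1000 with bits LSB first; the inner loop is just one way of building the expanded products.

def carry_lookahead(a_bits, b_bits, c0=0):
    """Carry look-ahead addition: Gi = Ai.Bi, Pi = Ai XOR Bi,
    C(i+1) = Gi + Pi.Ci expanded in terms of G, P and C0 only."""
    g = [a & b for a, b in zip(a_bits, b_bits)]     # carry generate
    p = [a ^ b for a, b in zip(a_bits, b_bits)]     # carry propagate
    carries = [c0]
    for i in range(len(a_bits)):
        # Ci+1 = Gi + Pi.Gi-1 + ... + Pi...P1.G0 + Pi...P0.C0
        ci1, term = g[i], p[i]
        for j in range(i - 1, -1, -1):
            ci1 |= term & g[j]
            term &= p[j]
        ci1 |= term & c0
        carries.append(ci1)
    sums = [p[i] ^ carries[i] for i in range(len(a_bits))]   # Si = Pi XOR Ci
    return sums, carries[-1]

# Same example as the ripple adder: 0011 + 0101 = 1000, bits LSB first
print(carry_lookahead([1, 1, 0, 0], [1, 0, 1, 0]))   # ([0, 0, 0, 1], 0)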
Using a look-ahead carry generator we can easily construct a 4-bit parallel adder with a look-
ahead carry scheme. Each sum output requires two exclusive-OR gates. The
output of the first exclusive-OR gate generates the Pi variable, and the AND gate generates the Gi
variable. The carries are propagated through the carry look-ahead generator and applied as inputs
to the second exclusive-OR gate. All output carries are generated after a delay through two levels
of gates. Thus, outputs S1 through S3 have equal propagation delay times.
The mode input M controls the operation. When M = 0, the circuit is an adder and when
M = 1, the circuit becomes a subtractor. Each exclusive-OR gate receives input M
and one of the inputs of B. When M = 0, we have B ⊕ 0 = B. The full adders receive the value of
B, the input carry is 0, and the circuit performs A plus B. When M = 1, we have
B ⊕ 1 = B' and C0 = 1. The B inputs are all complemented and a 1 is added through the input carry.
The circuit performs the operation A plus the 2's complement of B. The exclusive-OR with
output V is for detecting an overflow.
In examining the contents of the table, it is apparent that when the binary sum is equal to
or less than 1001, the corresponding BCD number is identical, and therefore no conversion is
needed. When the binary sum is greater than 9 (1001), we obtain a non- valid BCD
representation. The addition of binary 6 (0110) to the binary sum converts it to the correct BCD
representation and also produces an output carry as required.
The logic circuit to detect sum greater than 9 can be determined by simplifying the
boolean expression of the given truth table.
The two decimal digits, together with the input carry, are first added in the top 4-bit
binary adder to provide the binary sum. When the output carry is equal to zero, nothing is
added to the binary sum. When it is equal to one, binary 0110 is added to the binary sum
through the bottom 4-bit adder. The output carry generated from the bottom adder can be
ignored, since it supplies information already available at the output carry terminal. The
output carry from one stage must be connected to the input carry of the next higher-order
stage.
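One BCD digit stage of the adder just described can be sketched in Python: the digits are added in binary, and 6 (0110) is added whenever the binary sum is not a valid BCD digit, producing the decimal carry. The function name is illustrative.

def bcd_digit_adder(a, b, cin=0):
    """One BCD digit stage: add the two BCD digits in binary, and if the
    binary sum exceeds 9, add 6 (0110) to correct it and raise the carry."""
    binary_sum = a + b + cin
    if binary_sum > 9:                        # invalid BCD result for this digit
        return (binary_sum + 6) & 0b1111, 1   # corrected digit, carry = 1
    return binary_sum, 0

print(bcd_digit_adder(8, 5))   # (3, 1)  -> 8 + 5 = 13 in BCD: carry 1, digit 3
print(bcd_digit_adder(4, 3))   # (7, 0)  -> no correction needed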
ALU OPERATIONS
A 1-Bit ALU
The arithmetic logic unit (ALU) is the brawn of the computer, the device that performs the arithmetic
operations like addition and subtraction or logical operations like AND and OR.
An adder must have two inputs for the operands and a single-bit output for the sum. There must be a
second output to pass on the carry, called CarryOut. Since the CarryOut from the neighbor adder
must be included as an input, we need a third input. This input is called CarryIn. This adder is called a
full adder; it is also called a (3,2) adder because it has three inputs and two outputs. An adder with only
the inputs a and b is called a (2,2) adder or half adder.
Fig. 1(a): The 1-bit logical unit for AND, OR and adder. Fig. 1(b): 1-bit adder (full adder).
A 32-bit ALU
The full 32-bit ALU is created by connecting adjacent black boxes, using xi to mean the ith bit of x.
Hence the adder created by directly linking the carries of 1-bit adders is called a ripple-carry adder
(Fig. 2). Subtraction is the same as adding the negative version of an operand, and this is how adders
perform subtraction.
Fig. 2(a): A 32-bit ALU performing addition (ripple-carry adder). Fig. 2(b): A 1-bit ALU for the most significant bit.
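A minimal sketch of the idea: a 1-bit ALU that selects AND, OR or ADD, and a wider ALU built by rippling the carry between 1-bit slices. The op encoding (0 = AND, 1 = OR, 2 = ADD) is an assumption for illustration, not the book's control-line layout.

def alu_1bit(a, b, carry_in, op):
    """1-bit ALU slice: op selects AND (0), OR (1) or ADD (2)."""
    if op == 0:
        return a & b, 0
    if op == 1:
        return a | b, 0
    s = a ^ b ^ carry_in                        # full-adder sum
    carry_out = (a & b) | (carry_in & (a ^ b))  # full-adder carry
    return s, carry_out

def alu_nbit(a_bits, b_bits, op, carry_in=0):
    """Ripple-carry connection of 1-bit ALU slices, bits LSB first."""
    result = []
    for a, b in zip(a_bits, b_bits):
        r, carry_in = alu_1bit(a, b, carry_in, op)
        result.append(r)
    return result

# ADD on two 4-bit slices (LSB first): 0011 + 0101 = 1000
print(alu_nbit([1, 1, 0, 0], [1, 0, 1, 0], op=2))   # [0, 0, 0, 1]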
DECODERS:
A decoder is a combinational circuit that converts binary information from 'n' input
lines to a maximum of 2^n unique output lines. The general structure of the decoder circuit is –
Here the 2 inputs are decoded into 4 outputs, each output representing one of the minterms
of the two input variables.
Inputs            Outputs
Enable  A  B    Y3  Y2  Y1  Y0
0       x  x    0   0   0   0
1       0  0    0   0   0   1
1       0  1    0   0   1   0
1       1  0    0   1   0   0
1       1  1    1   0   0   0
As shown in the truth table, if enable input is 1 (EN= 1) only one of the outputs (Y0 –
Y3), is active for a given input.
The output Y0 is active, ie., Y0= 1 when inputs A= B= 0, Y1
is active when inputs, A= 0 and B= 1,
Y2 is active, when input A= 1 and B= 0,
Y3 is active, when inputs A= B= 1.
3-to-8 Line Decoder:
A 3-to-8 line decoder has three inputs (A, B, C) and eight outputs (Y0- Y7). Based on
the 3 inputs one of the eight outputs is selected.
The three inputs are decoded into eight outputs, each output representing one of the
minterms of the 3-input variables. This decoder is used for binary-to-octal conversion. The input
variables may represent a binary number and the outputs will represent the eight digits in the
octal number system. The output variables are mutually exclusive because only one output can be
equal to 1 at any one time. The output line whose value is equal to 1 represents the minterm
equivalent of the binary number presently available in the input lines.
Inputs       Outputs
A  B  C   Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7
0  0  0   1   0   0   0   0   0   0   0
0  0  1   0   1   0   0   0   0   0   0
0  1  0   0   0   1   0   0   0   0   0
0  1  1   0   0   0   1   0   0   0   0
1  0  0   0   0   0   0   1   0   0   0
1  0  1   0   0   0   0   0   1   0   0
1  1  0   0   0   0   0   0   0   1   0
1  1  1   0   0   0   0   0   0   0   1
3-to-8 line decoder
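A minimal Python sketch of the 3-to-8 decoder truth table above: exactly one output is 1, selected by the binary value of the inputs (A is the MSB). The function name is illustrative.

def decoder_3to8(a, b, c, enable=1):
    """3-to-8 line decoder: one of Y0..Y7 is 1 when enabled."""
    outputs = [0] * 8
    if enable:
        outputs[(a << 2) | (b << 1) | c] = 1
    return outputs                 # [Y0, Y1, ..., Y7]

print(decoder_3to8(1, 0, 1))       # Y5 = 1, all others 0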
Digit   Segments
0       a, b, c, d, e, f
1       b, c
2       a, b, d, e, g
3       a, b, c, d, g
4       b, c, f, g
5       a, c, d, f, g
6       a, c, d, e, f, g
7       a, b, c
8       a, b, c, d, e, f, g
9       a, b, c, d, f, g
Truth table:
K-map Simplification:
Logic Diagram:
BCD to 7-segment display decoder
Applications of decoders:
1. Decoders are used in counter system.
2. They are used in analog to digital converter.
3. Decoder outputs can be used to drive a display system.
ENCODERS:
An encoder is a digital circuit that performs the inverse operation of a decoder. Hence,
the opposite of the decoding process is called encoding. An encoder is a combinational circuit
that converts binary information from 2^n input lines to a maximum of 'n' unique output lines.
The general structure of the encoder circuit is –
It has 2^n input lines, of which only one is active at any time, and 'n' output lines. It
encodes one of the active inputs to a coded binary output with 'n' bits. In an encoder, the
number of outputs is less than the number of inputs.
Octal-to-Binary Encoder:
It has eight inputs (one for each of the octal digits) and the three outputs that generate the
corresponding binary number. It is assumed that only one input has a value of 1 at any given
time.
Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 A B C
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
The encoder can be implemented with OR gates whose inputs are determined directly
from the truth table. Output z is equal to 1 when the input octal digit is 1, 3, 5 or 7. Output
y is 1 for octal digits 2, 3, 6 or 7, and output x is 1 for digits 4, 5, 6 or 7.
These conditions can be expressed by the following output Boolean functions:
z = D1 + D3 + D5 + D7
y = D2 + D3 + D6 + D7
x = D4 + D5 + D6 + D7
Octal-to-Binary Encoder
Another problem in the octal-to-binary encoder is that an output with all 0's is generated
when all the inputs are 0; this output is the same as when D0 is equal to 1. The discrepancy can be
resolved by providing one more output to indicate that at least one input is equal to 1.
Priority Encoder:
A priority encoder is an encoder circuit that includes the priority function. In priority
encoder, if two or more inputs are equal to 1 at the same time, the input having the highest
priority will take precedence.
In addition to the two outputs x and y, the circuit has a third output, V (valid bit
indicator). It is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no
valid input and V is equal to 0.
The higher the subscript number, higher the priority of the input. Input D3, has the
highest priority. So, regardless of the values of the other inputs, when D3 is 1, the output for xy
is 11.
D2 has the next priority level. The output is 10, if D2= 1 provided D3= 0. The output for
D1 is generated only if higher priority inputs are 0, and so on down the priority levels.
Truth table:
Inputs Outputs
D0 D1 D2 D3 x y V
0 0 0 0 x x 0
1 0 0 0 0 0 1
x 1 0 0 0 1 1
x x 1 0 1 0 1
x x x 1 1 1 1
Although the above table has only five rows, when each don't care condition is replaced
first by 0 and then by 1, we obtain all 16 possible input combinations. For example, the third row
in the table with X100 represents minterms 0100 and 1100. The don't care conditions are replaced
by 0 and 1 as shown in the table below.
Modified Truth table:
Inputs              Outputs
D0  D1  D2  D3    x  y  V
0   0   0   0     x  x  0
1   0   0   0     0  0  1
0   1   0   0     0  1  1
1   1   0   0     0  1  1
0   0   1   0     1  0  1
0   1   1   0     1  0  1
1   0   1   0     1  0  1
1   1   1   0     1  0  1
0   0   0   1     1  1  1
0   0   1   1     1  1  1
0   1   0   1     1  1  1
0   1   1   1     1  1  1
1   0   0   1     1  1  1
1   0   1   1     1  1  1
1   1   0   1     1  1  1
1   1   1   1     1  1  1
K-map Simplification:
3- Input Priority Encoder
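The priority behaviour described above can be sketched in Python: the highest-numbered active input wins, and V indicates whether any input is active. The function name is illustrative, and 0 is chosen arbitrarily for the don't-care outputs when no input is active.

def priority_encoder(d0, d1, d2, d3):
    """4-input priority encoder: returns (x, y, V)."""
    if d3:
        return 1, 1, 1
    if d2:
        return 1, 0, 1
    if d1:
        return 0, 1, 1
    if d0:
        return 0, 0, 1
    return 0, 0, 0                 # x, y are don't-cares here

print(priority_encoder(1, 0, 1, 0))   # (1, 0, 1): D2 outranks D0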
MULTIPLEXER: (Data Selector)
A multiplexer or MUX, is a combinational circuit with more than one input line, one
output line and more than one selection line. A multiplexer selects binary information present
from one of many input lines, depending upon the logic status of the selection inputs, and routes
it to the output line. Normally, there are 2n input lines and n selection lines whose bit
combinations determine which input is selected. The multiplexer is often labeled as MUX in
block diagrams.
A multiplexer is also called a data selector, since it selects one of many inputs and
steers the binary information to the output line.
Truth table:
S    Y
0    I0
1    I1

4-to-1-line Multiplexer:
A 4-to-1-line multiplexer has four (2^n) input lines, two (n) select lines and one output
line. It is the multiplexer consisting of four input channels and information of one of the channels
can be selected and transmitted to an output line according to the select inputs combinations.
Selection of one of the four input channel is possible by two selection inputs.
Each of the four inputs, I0 through I3, is applied to one input of an AND gate. Selection lines
S1 and S0 are decoded to select a particular AND gate. The outputs of the AND gates are applied
to a single OR gate that provides the 1-line output.
4-to-1-Line Multiplexer
Function table:
S1 S0 Y
0 0 I0
0 1 I1
1 0 I2
1 1 I3
To demonstrate the circuit operation, consider the case when S1S0= 10. The AND gate
associated with input I2 has two of its inputs equal to 1 and the third input connected to I2. The
other three AND gates have atleast one input equal to 0, which makes their outputs equal to 0.
The OR output is now equal to the value of I2, providing a path from the selected input to the
output.
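The selection behaviour just described can be sketched in Python: the select lines S1 S0 form an index that picks one of the four data inputs. The function name is illustrative.

def mux_4to1(i0, i1, i2, i3, s1, s0):
    """4-to-1 multiplexer: select lines S1 S0 choose one data input."""
    inputs = [i0, i1, i2, i3]
    return inputs[(s1 << 1) | s0]

print(mux_4to1(0, 0, 1, 0, 1, 0))   # S1S0 = 10 selects I2 -> 1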
This circuit has four multiplexers, each capable of selecting one of two input lines.
Output Y0 can be selected to come from either A0 or B0. Similarly, output Y1 may have the
value of A1 or B1, and so on. Input selection line, S selects one of the lines in each of the four
multiplexers. The enable input E must be active for normal operation.
Although the circuit contains four 2-to-1-Line multiplexers, it is viewed as a circuit that
selects one of two 4-bit sets of data lines. The unit is enabled when E= 0. Then if S= 0, the four
A inputs have a path to the four outputs. On the other hand, if S=1, the four B inputs are
applied to the outputs. The outputs have all 0‘s when E= 1, regardless of the value of S.
Application:
The multiplexer is a very useful MSI function and has various ranges of applications in
data communication. Signal routing and data communication are the important applications of a
multiplexer. It is used for connecting two or more sources to guide to a single destination among
computer units and it is useful for constructing a common bus system. One of the general
properties of a multiplexer is that Boolean functions can be implemented by this device.
Apply variables A and B to the select lines. The procedures for implementing the
function are:
1. If both the minterms in the column are not circled, apply 0 to the corresponding input.
2. If both the minterms in the column are circled, apply 1 to the corresponding input.
3. If the bottom minterm is circled and the top is not circled, apply C to the input.
4. If the top minterm is circled and the bottom is not circled, apply C' to the input.
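The four rules above can be applied mechanically, as in the Python sketch below for a 3-variable function F(A, B, C) realised with a 4:1 MUX (A, B on the select lines, C as the data variable). The assumption that column k of the implementation table holds minterms 2k (top, C = 0) and 2k + 1 (bottom, C = 1) follows from the standard minterm numbering; the result shown matches problem 2 below, F(x, y, z) = ∑m(1, 2, 6, 7).

def mux_inputs_from_minterms(minterms):
    """Apply the four rules: both circled -> 1, neither -> 0,
    only bottom -> C, only top -> C'."""
    data = []
    for k in range(4):
        top, bottom = (2 * k) in minterms, (2 * k + 1) in minterms
        if top and bottom:
            data.append("1")
        elif not top and not bottom:
            data.append("0")
        elif bottom:
            data.append("C")
        else:
            data.append("C'")
    return data                    # [I0, I1, I2, I3]

print(mux_inputs_from_minterms({1, 2, 6, 7}))   # ['C', "C'", '0', '1']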
Multiplexer Implementation:
2. F (x, y, z) = ∑m (1, 2, 6, 7)
Solution:
Implementation table:
Multiplexer Implementation:
F ( A, B, C) = ∑m (1, 2, 4, 5)
Solution:
Variables, n= 3 (A, B, C)
Select lines= n-1 = 2 (S1, S0)
2^(n-1) to 1 MUX, i.e., 2^2 to 1 = 4-to-1 MUX
Inputs = D0, D1, D2, D3
Implementation table:
Multiplexer Implementation:
4. F( P, Q, R, S)= ∑m (0, 1, 3, 4, 8, 9, 15)
Solution:
Variables, n= 4 (P, Q, R, S)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX, i.e., 2^3 to 1 = 8-to-1 MUX
Inputs = D0, D1, D2, D3, D4, D5, D6, D7
Implementation table:
Multiplexer Implementation:
5. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (A, B, C, D) = ∑m (0, 1, 2, 4, 6, 9, 12, 14)
Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX, i.e., 2^3 to 1 = 8-to-1 MUX
Inputs = D0, D1, D2, D3, D4, D5, D6, D7
Implementation table:
Using 4: 1 MUX:
6. F (A, B, C, D) = ∑m (1, 3, 4, 11, 12, 13, 14, 15)
Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX, i.e., 2^3 to 1 = 8-to-1 MUX
Inputs = D0, D1, D2, D3, D4, D5, D6, D7
Implementation table:
Multiplexer Implementation:
7. Implement the Boolean function using 8: 1 multiplexer.
F (A, B, C, D) = A’BD’ + ACD + B’CD + A’C’D.
Solution:
Convert into standard SOP form,
= A'BD' (C'+C) + ACD (B'+B) + B'CD (A'+A) + A'C'D (B'+B)
= A'BC'D' + A'BCD' + AB'CD + ABCD + A'B'CD + AB'CD + A'B'C'D + A'BC'D
= A'BC'D' + A'BCD' + AB'CD + ABCD + A'B'CD + A'B'C'D + A'BC'D
= m4+ m6+ m11+ m15+ m3+ m1+ m5
= ∑m (1, 3, 4, 5, 6, 11, 15)
Implementation table:
Multiplexer Implementation:
Multiplexer Implementation:
9. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (w, x, y, z) = ∑m (1, 2, 3, 6, 7, 8, 11, 12, 14)
Solution:
Variables, n = 4 (w, x, y, z)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX, i.e., 2^3 to 1 = 8-to-1 MUX
Inputs = D0, D1, D2, D3, D4, D5, D6, D7
Implementation table:
Multiplexer Implementation (Using 8:1 MUX):
Multiplexer Implementation:
11. Implement the Boolean function using 8: 1 multiplexer
F (A, B, C, D) = ∑m (0, 2, 6, 10, 11, 12, 13) + d (3, 8, 14)
Solution:
Variables, n= 4 (A, B, C, D)
Select lines= n-1 = 3 (S2, S1, S0)
2^(n-1) to 1 MUX, i.e., 2^3 to 1 = 8-to-1 MUX
Inputs = D0, D1, D2, D3, D4, D5, D6, D7
Implementation Table:
Multiplexer Implementation:
12. An 8×1 multiplexer has inputs A, B and C connected to the selection inputs S2, S1,and
S0 respectively. The data inputs I0 to I7 are as follows
I1 = I2 = I7 = 0; I3 = I5 = 1; I0 = I4 = D; I6 = D'.
Determine the Boolean function that the multiplexer implements.
Multiplexer Implementation:
Implementation table:
F (A, B, C, D) = ∑m (3, 5, 6, 8, 11, 12, 13).
DEMULTIPLEXER:
Demultiplex means one into many. Demultiplexing is the process of taking information
from one input and transmitting the same over one of several outputs.
A demultiplexer is a combinational logic circuit that receives information on a single
input and transmits the same information over one of several (2n) output lines.
The input variable Din has a path to all four outputs, but the input information is directed to
only one of the output lines. The truth table of the 1-to-4 demultiplexer is shown below.
Enable S1 S0 Din Y0 Y1 Y2 Y3
0 x x x 0 0 0 0
1 0 0 0 0 0 0 0
1 0 0 1 1 0 0 0
1 0 1 0 0 0 0 0
1 0 1 1 0 1 0 0
1 1 0 0 0 0 0 0
1 1 0 1 0 0 1 0
1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 1
Truth table of 1-to-4 demultiplexer
From the truth table, it is clear that the data input, Din is connected to the output Y0,
when S1= 0 and S0= 0 and the data input is connected to output Y1 when S1= 0 and S0= 1.
Similarly, the data input is connected to output Y2 and Y3 when S1= 1 and S0= 0 and when S1=
1 and S0= 1, respectively. Also, from the truth table, the expression for outputs can be written as
follows,
Y0 = S1'S0'Din
Y1 = S1'S0 Din
Y2 = S1 S0'Din
Y3 = S1 S0 Din
Logic diagram of 1-to-4 demultiplexer
Now, using the above expressions, a 1-to-4 demultiplexer can be implemented using four
3-input AND gates and two NOT gates. Here, the input data line Din, is connected to all the
AND gates. The two select lines S1, S0 enable only one gate at a time,
and the data that appears on the input line passes through the selected gate to the associated output
line.
1-to-8 Demultiplexer:
A 1-to-8 demultiplexer has a single input, Din, eight outputs (Y0 to Y7) and three
select inputs (S2, S1 and S0). It distributes one input line to eight output lines based on the
select inputs. The truth table of 1-to-8 demultiplexer is shown below.
Din S2 S1 S0 Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0
0 x x x 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1
1 0 0 1 0 0 0 0 0 0 1 0
1 0 1 0 0 0 0 0 0 1 0 0
1 0 1 1 0 0 0 0 1 0 0 0
1 1 0 0 0 0 0 1 0 0 0 0
1 1 0 1 0 0 1 0 0 0 0 0
1 1 1 0 0 1 0 0 0 0 0 0
1 1 1 1 1 0 0 0 0 0 0 0
Now using the above expressions, the logic diagram of a 1-to-8 demultiplexer can be drawn as
shown below. Here, the single data line, Din is connected to all the eight AND gates, but only
one of the eight AND gates will be enabled by the select input lines. For example, if S2S1S0=
000, then only AND gate-0 will be enabled and thereby the data input, Din will appear at Y0.
Similarly, the different combinations of the select inputs, the input Din will appear at the
respective output.
Logic diagram of 1-to-8 demultiplexer
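A minimal Python sketch of the 1-to-8 demultiplexer described above: Din is routed to the single output selected by S2 S1 S0, and all other outputs stay 0. The function name is illustrative.

def demux_1to8(din, s2, s1, s0):
    """1-to-8 demultiplexer: route Din to the output selected by S2 S1 S0."""
    outputs = [0] * 8
    outputs[(s2 << 2) | (s1 << 1) | s0] = din
    return outputs                  # [Y0, Y1, ..., Y7]

print(demux_1to8(1, 0, 1, 1))       # Din appears on Y3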
1. Design 1:8 demultiplexer using two 1:4 DEMUX.
INTRODUCTION - SEQUENTIAL CIRCUIT
In combinational logic circuits, the outputs at any instant of time depend only on
the input signals present at that time. For a change in input, the output occurs
immediately.
In sequential circuits, on the other hand, the output variables depend not only on the present
input variables but also on the past history of the input variables.
The rotary channel selector knob on an old-fashioned TV is like a combinational circuit:
its output selects a channel based only on its current input – the
position of the knob. The channel-up and channel-down push buttons on a TV are like a
sequential circuit: the channel selection depends on the past sequence of up/down pushes.
The comparison between combinational and sequential circuits is given in table
below.
The sequential circuits can be classified depending on the timing of their signals:
Synchronous sequential circuits
Asynchronous sequential circuits.
In synchronous sequential circuits, signals can affect the memory elements only at discrete
instants of time. In asynchronous sequential circuits change in input signals can affect memory
element at any instant of time. The memory elements used in both circuits are Flip-Flops,
which are capable of storing 1- bit information.
FLIP-FLOPS
The state of a Flip-Flop is switched by a momentary change in the input signal. This
momentary change is called a trigger and the transition it causes is said to trigger the Flip-
Flop. Clocked Flip-Flops are triggered by pulses. A clock pulse starts from an initial value of
0, goes momentarily to 1, and after a short time returns to its initial 0 value.
Latches are controlled by enable signal, and they are level triggered, either positive level
triggered or negative level triggered. The output is free to change according to the S and R
input values, when active level is maintained at the enable input.
Flip-Flops are different from latches. Flip-Flops are pulse or clock edge triggered
instead of level triggered.
EDGE TRIGGERED FLIP-FLOPS
Flip-Flops are synchronous bistable devices (has two outputs Q and Q’). In this case,
the term synchronous means that the output changes state only at a specified point on the
triggering input called the clock (CLK), i.e., changes in the output occur in synchronization
with the clock.
An edge-triggered Flip-Flop changes state either at the positive edge (rising edge) or at the negative edge
(falling edge) of the clock pulse and is sensitive to its inputs only at this transition of the clock. The
different types of edge-triggered Flip-Flops are—
S-R Flip-Flop,
J-K Flip-Flop,
D Flip-Flop,
T Flip-Flop.
Although the S-R Flip-Flop is not available in IC form, it is the basis for the D
and J-K Flip-Flops. Each type can be either positive edge-triggered (no bubble at C
input) or negative edge-triggered (bubble at C input). The key to identifying an edge-
triggered Flip-Flop by its logic symbol is the small triangle inside the block at the clock
(C) input. This triangle is called the dynamic input indicator.
S-R Flip-Flop
The S and R inputs of the S-R Flip-Flop are called synchronous inputs because
data on these inputs are transferred to the Flip-Flop's output only on the triggering edge
of the clock pulse. The circuit is similar to SR latch except enable signal is replaced by
clock pulse (CLK). On the positive edge of the clock pulse, the circuit
responds to the S and R inputs.
SR Flip-Flop
When S is HIGH and R is LOW, the Q output goes HIGH on the triggering
edge of the clock pulse, and the Flip-Flop is SET. When S is LOW and R is HIGH, the
Q output goes LOW on the triggering edge of the clock pulse, and the Flip-Flop is
RESET. When both S and R are LOW, the output does not change from its prior state. An
invalid condition exists when both S and R are HIGH.
CLK  S  R  Qn  Qn+1  State
1    0  0  0   0     No Change (NC)
1    0  0  1   1     No Change (NC)
1    0  1  0   0     Reset
1    0  1  1   0     Reset
1    1  0  0   1     Set
1    1  0  1   1     Set
1    1  1  0   x     Indeterminate *
1    1  1  1   x     Indeterminate *
0    x  x  0   0     No Change (NC)
0    x  x  1   1     No Change (NC)
Truth table for SR Flip-Flop
Input and output waveforms of SR Flip-Flop
J-K Flip-Flop:
JK means Jack Kilby, Texas Instrument (TI) Engineer, who invented IC in 1958.
JK Flip-Flop has two inputs J(set) and K(reset). A JK Flip-Flop can be obtained from the
clocked SR Flip-Flop by augmenting two AND gates as shown below.
JK Flip Flop
The data input J and the output Q' are applied to the first AND gate and its output
(JQ') is applied to the S input of the SR Flip-Flop. Similarly, the data input K and the output
Q are applied to the second AND gate and its output (KQ) is applied to the R input of the
SR Flip-Flop.
J= K= 0
When J = K = 0, both AND gates are disabled. Therefore clock pulses have no
effect; hence the Flip-Flop output is the same as the previous output.
J= 0, K= 1
When J= 0 and K= 1, AND gate 1 is disabled i.e., S= 0 and R= 1. This condition
will reset the Flip-Flop to 0.
J= 1, K= 0
When J= 1 and K= 0, AND gate 2 is disabled i.e., S= 1 and R= 0. Therefore the
Flip-Flop will set on the application of a clock pulse.
J= K= 1
When J = K = 1, it is possible to set or reset the Flip-Flop. If Q is high, AND
gate 2 passes on a reset pulse to the next clock. When Q is low, AND gate 1 passes on a
set pulse to the next clock. Either way, Q changes to the complement of the last state, i.e.,
it toggles. Toggle means to switch to the opposite state.
The truth table of JK Flip-Flop is given below.
CLK  J  K  Qn+1  State
1    0  0  Qn    No Change
1    0  1  0     Reset
1    1  0  1     Set
1    1  1  Qn'   Toggle
K-map Simplification:
D Flip-Flop:
The characteristic table for D Flip-Flop shows that the next state of the Flip- Flop is
independent of the present state since Qn+1 is equal to D. This means that an input pulse
will transfer the value of input D into the output of the Flip-Flop independent of the value of
the output before the pulse was applied.
The characteristic equation is derived from K-map.
Qn D Qn+1
0 0 0
0 1 1
1 0 0
1 1 1
Characteristic table
T Flip-Flop
The T (Toggle) Flip-Flop is a modification of the JK Flip-Flop. It is obtained
from JK Flip-Flop by connecting both inputs J and K together, i.e., single input.
Regardless of the present state, the Flip-Flop complements its output when the clock pulse
occurs while input T= 1.
T Flip-Flop
When T = 0, Qn+1 = Qn, i.e., the next state is the same as the present state and no
change occurs.
When T = 1, Qn+1 = Qn', i.e., the next state is the complement of the present state.
T    Qn+1   State
0    Qn     No Change
1    Qn'    Toggle
Truth table for T Flip-Flop
Characteristic table and Characteristic equation:
The characteristic table for T Flip-Flop is shown below and characteristic equation is
derived using K-map.
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
K-map Simplification:
Master-Slave JK Flip-Flop
Logic diagram
When the clock pulse has a positive edge, the master acts according to its J- K
inputs, but the slave does not respond, since it requires a negative edge at the clock input.
When the clock input has a negative edge, the slave Flip-Flop copies the master
outputs. But the master does not respond since it requires a positive edge at its clock
input.
The clocked master-slave J-K Flip-Flop using NAND gates is shown below.
Master-Slave JK Flip-Flop
The characteristic table is useful for analysis and for defining the operation of
the Flip-Flop. It specifies the next state (Qn+1) when the inputs and present state are
known.
The excitation or application table is useful for design process. It is used to find
the Flip-Flop input conditions that will cause the required transition, when the present state
(Qn) and the next state (Qn+1) are known.
SR Flip-Flop:

Characteristic Table

Qn   S   R   Qn+1
0    0   0    0
0    0   1    0
0    1   0    1
0    1   1    x
1    0   0    1
1    0   1    0
1    1   0    1
1    1   1    x

Modified Table

Qn   Qn+1   S   R
0     0     0   x
0     1     1   0
1     0     0   1
1     1     x   0
Qn   Qn+1   S   R
0     0     0   x
0     1     1   0
1     0     0   1
1     1     x   0

Excitation Table
The above table presents the excitation table for SR Flip-Flop. It consists of present
state (Qn), next state (Qn+1) and a column for each input to show how the required transition
is achieved.
There are 4 possible transitions from present state to next state. The required Input
conditions for each of the four transitions are derived from the information available in the
characteristic table. The symbol ‘x’ denotes the don’t care condition, it does not matter
whether the input is 0 or 1.
JK Flip-Flop:
Characteristic Table
Modified Table
Qn   Qn+1   J   K
0     0     0   x
0     1     1   x
1     0     x   1
1     1     x   0
Excitation Table
D Flip-Flop
Characteristic Table

Qn   D   Qn+1
0    0    0
0    1    1
1    0    0
1    1    1

Excitation Table

Qn   Qn+1   D
0     0     0
0     1     1
1     0     0
1     1     1
T Flip-Flop
Characteristic Table

Qn   T   Qn+1
0    0    0
0    1    1
1    0    1
1    1    0

Modified Table

Qn   Qn+1   T
0     0     0
0     1     1
1     0     1
1     1     0
REALIZATION OF ONE FLIP-FLOP USING OTHER FLIP-FLOPS
SR Flip-Flop to D Flip-Flop
SR Flip-Flop to JK Flip-Flop
SR Flip-Flop to T Flip-Flop
JK Flip-Flop to T Flip-Flop
JK Flip-Flop to D Flip-Flop
D Flip-Flop to T Flip-Flop
T Flip-Flop to D Flip-Flop
SR Flip-Flop to D Flip-Flop:
Write the characteristic table for required Flip-Flop (D Flip-Flop).
Write the excitation table for given Flip-Flop (SR Flip-Flop).
Determine the expression for the given Flip-Flop inputs (S and R) by using K-
map.
Draw the Flip-Flop conversion logic diagram to obtain the required Flip-
Flop (D Flip-Flop) by using the above obtained expression.
Required Flip-Flop (D)                       Given Flip-Flop (SR)
Input D   Present state Qn   Next state Qn+1    S   R
  0              0                 0            0   x
  0              1                 0            0   1
  1              0                 1            1   0
  1              1                 1            x   0
D Flip-Flop
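As a quick check of the conversion (assuming, as the K-map of the table above gives, S = D and R = D'), the short Python sketch below drives the SR characteristic equation with these inputs and confirms that the next state always equals D.

# Sketch: an SR Flip-Flop driven by S = D, R = D' behaves as a D Flip-Flop.

def sr_next(s, r, q):
    return int(s or ((not r) and q))      # SR characteristic equation

for d in (0, 1):
    for qn in (0, 1):
        s, r = d, 1 - d                   # conversion logic: S = D, R = D'
        print(f"D={d} Qn={qn} -> Qn+1={sr_next(s, r, qn)} (expected {d})")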
SR Flip-Flop to JK Flip-Flop
JK Flip-Flop
SR Flip-Flop to T Flip-Flop
The excitation table for the above conversion is

Input T   Present state Qn   Next state Qn+1   S   R
  0              0                 0           0   x
  0              1                 1           x   0
  1              0                 1           1   0
  1              1                 0           0   1
JK Flip-Flop to D Flip-Flop
D Flip-Flop to T Flip-Flop
Input T   Present state Qn   Next state Qn+1   Flip-Flop Input D
  0              0                 0                  0
  0              1                 1                  1
  1              0                 1                  1
  1              1                 0                  0
T Flip-Flop to D Flip-Flop
Input D   Present state Qn   Next state Qn+1   Flip-Flop Input T
  0              0                 0                  0
  0              1                 0                  1
  1              0                 1                  1
  1              1                 1                  0
SHIFT REGISTERS:
(i) Serial in - serial out          (ii) Serial in - parallel out
(iii) Parallel in - serial out      (iv) Parallel in - parallel out
Serial-In Serial-Out Shift Register:
The serial in/serial out shift register accepts data serially, i.e., one bit at a time on a
single line. It produces the stored information on its output also in serial form.
Serial-In Serial-Out Shift Register
The entry of the four bits 1010 into the register is illustrated below, beginning with
the right-most bit. The register is initially clear. The 0 is put onto the data input line,
making D=0 for FF0. When the first clock pulse is applied, FF0 is reset, thus storing the 0.
Next the second bit, which is a 1, is applied to the data input, making D=1 for
FF0 and D=0 for FF1 because the D input of FF1 is connected to the Q0 output. When
the second clock pulse occurs, the 1 on the data input is shifted into FF0, causing FF0 to
set; and the 0 that was in FF0 is shifted into FF1.
The third bit, a 0, is now put onto the data-input line, and a clock pulse is applied.
The 0 is entered into FF0, the 1 stored in FF0 is shifted into FF1, and the 0 stored in FF1
is shifted into FF2.
The last bit, a 1, is now applied to the data input, and a clock pulse is applied. This
time the 1 is entered into FF0, the 0 stored in FF0 is shifted into FF1, the 1 stored in FF1 is
shifted into FF2, and the 0 stored in FF2 is shifted into FF3. This completes the serial
entry of the four bits into the shift register, where they can be stored for any length of
time as long as the Flip-Flops have dc power.
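The serial entry just described can be mimicked with a short Python sketch (an illustration only, not part of the original text): each clock pulse places the data-input bit in FF0 and moves every stored bit one stage to the right.

# Sketch of 4-bit serial-in serial-out behaviour: register[0] is Q0 (FF0),
# register[3] is Q3 (FF3); each pulse shifts every bit one stage to the right.

def clock_pulse(register, data_in):
    return [data_in] + register[:-1]

register = [0, 0, 0, 0]                 # register initially clear
for bit in (0, 1, 0, 1):                # 1010 entered right-most bit first
    register = clock_pulse(register, bit)
    print("Q0 Q1 Q2 Q3 =", register)
# After the fourth pulse the register holds Q0=1, Q1=0, Q2=1, Q3=0.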
Four bits (1010) being entered serially into the register
To get the data out of the register, the bits must be shifted out serially and taken
off the Q3 output. After CLK4, the right-most bit, 0, appears on the Q3 output.
When clock pulse CLK5 is applied, the second bit appears on the Q3 output.
Clock pulse CLK6 shifts the third bit to the output, and CLK7 shifts the fourth bit to the
output. While the original four bits are being shifted out, more bits can be shifted in.
In the illustration, all zeros are shown being shifted in to replace the original bits.
Four bits (1010) being entered serially-shifted out of the register and replaced by all zeros
Four bits (1111) being serially entered into the register
Parallel-In Serial-Out Shift Register:
In this type, the bits are entered in parallel, i.e., simultaneously into their
respective stages on parallel lines.
A 4-bit parallel-in serial-out shift register is illustrated below. There are four data
input lines, X0, X1, X2 and X3, for entering data in parallel into the register. The
SHIFT/LOAD input is the control input, which allows four bits of data to be loaded in
parallel into the register.
When SHIFT/LOAD is LOW, gates G1, G2, G3 and G4 are enabled, allowing
each data bit to be applied to the D input of its respective Flip-Flop. When a clock pulse
is applied, the Flip-Flops with D = 1 will set and those with D = 0 will reset, thereby
storing all four bits simultaneously.
Parallel-In Serial-Out Shift Register
When SHIFT/LOAD is HIGH, gates G1, G2, G3 and G4 are disabled and
gates G5, G6 and G7 are enabled, allowing the data bits to shift right from one stage to
the next. The OR gates allow either the normal shifting operation or the parallel data-
entry operation, depending on which AND gates are enabled by the level on the
SHIFT/LOAD input.
Parallel-In Parallel-Out Shift Register:
In this type, there is simultaneous entry of all data bits and the bits appear on
parallel outputs simultaneously.
If the register has shift and parallel load capabilities, then it is called a shift
register with parallel load or universal shift register. Shift register can be used for
converting serial data to parallel data, and vice-versa. If a parallel load capability is
added to a shift register, the data entered in parallel can be taken out in serial fashion by
shifting the data stored in the register.
The functions of universal shift register are:
A shift-right control to enable the shift right operation and the serial input and
output lines associated with the shift right.
A shift-left control to enable the shift left operation and the serial input and
output lines associated with the shift left.
A parallel-load control to enable a parallel transfer and the n input lines
associated with the parallel transfer.
‘n’ parallel output lines.
A control line that leaves the information in the register unchanged even
though the clock pulses are continuously applied.
It consists of four D-Flip-Flops and four 4 input multiplexers (MUX). S0 and S1
are the two selection inputs connected to all the four multiplexers. These two selection
inputs are used to select one of the four inputs of each multiplexer.
The input 0 in each MUX is selected when S1S0= 00 and input 1 is selected when
S1S0= 01. Similarly, inputs 2 and 3 are selected when S1S0= 10 and S1S0= 11
respectively. The inputs S1 and S0 control the mode of operation of the register.
4-Bit Universal Shift Register
When S1S0= 00, the present value of the register is applied to the D-inputs of the
Flip-Flops. This is done by connecting the output of each Flip-Flop to the 0 input of the
respective multiplexer. The next clock pulse transfers into each Flip-Flop the binary
value it held previously, and hence no change of state occurs.
When S1S0= 01, terminal 1 of the multiplexer inputs has a path to the D inputs of the
Flip-Flops. This causes a shift-right operation, with the left serial input transferred into
Flip-Flop FF3.
When S1S0= 10, a shift-left operation results with the right serial input going into Flip-
Flop FF1.
Finally, when S1S0= 11, the binary information on the parallel input lines (I1, I2, I3
and I4) is transferred into the register simultaneously during the next clock pulse. The
function table of bi-directional shift register with parallel inputs and parallel outputs is shown
below.
Mode Control
S1   S0   Operation
0    0    No change
0    1    Shift-right
1    0    Shift-left
1    1    Parallel load
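The mode table can also be expressed as a small Python sketch (the register ordering and the single serial-input name below are illustrative assumptions, not taken from the figure):

# Sketch of the universal shift register modes selected by S1 S0.
def universal_shift(q, s1, s0, serial_in=0, parallel=None):
    if (s1, s0) == (0, 0):               # no change
        return q[:]
    if (s1, s0) == (0, 1):               # shift right: serial input enters one end stage
        return [serial_in] + q[:-1]
    if (s1, s0) == (1, 0):               # shift left: serial input enters the other end
        return q[1:] + [serial_in]
    return list(parallel)                # parallel load

q = [0, 0, 0, 0]
q = universal_shift(q, 1, 1, parallel=[1, 0, 1, 1])   # load 1011
q = universal_shift(q, 0, 1, serial_in=0)             # shift right -> 0101
print(q)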
Bidirectional Shift Register:
A bidirectional shift register is one in which the data can be shifted either left or
right. It can be implemented by using gating logic that enables the transfer of a data bit
from one stage to the next stage to the right or to the left depending on the level of a
control line.
A 4-bit bidirectional shift register is shown below. A HIGH on the RIGHT/LEFT
control input allows data bits inside the register to be shifted to the right, and a LOW
enables data bits inside the register to be shifted to the left.
When the RIGHT/LEFT control input is HIGH, gates G1, G2, G3 and G4 are
enabled, and the state of the Q output of each Flip-Flop is passed through to the D input
of the following Flip-Flop. When a clock pulse occurs, the data bits are shifted one place
to the right.
When the RIGHT/LEFT control input is LOW, gates G5, G6, G7 and G8 are
enabled, and the Q output of each Flip-Flop is passed through to the D input of the
preceding Flip-Flop. When a clock pulse occurs, the data bits are then shifted one place
to the left.
4-bit bi-directional shift register
SYNCHRONOUS COUNTERS
S.No   Asynchronous (ripple) counter                Synchronous counter
1      All the Flip-Flops are not clocked           All the Flip-Flops are clocked
       simultaneously.                              simultaneously.
In this counter the clock signal is connected in parallel to clock inputs of both the
Flip-Flops (FF0 and FF1). The output of FF0 is connected to J1 and K1 inputs of the second
Flip-Flop (FF1).
Assume that the counter is initially in the binary 0 state: i.e., both Flip-Flops are
RESET. When the positive edge of the first clock pulse is applied, FF0 will toggle
because J0= K0= 1, whereas the FF1 output will remain 0 because J1= K1= 0. After the first
clock pulse, Q0=1 and Q1=0.
When the leading edge of CLK2 occurs, FF0 will toggle and Q0 will go LOW.
Since FF1 has a HIGH (Q0 = 1) on its J1 and K1 inputs at the triggering edge of this
clock pulse, the Flip-Flop toggles and Q1 goes HIGH. Thus, after CLK2,
Q0 = 0 and Q1 = 1. When the leading edge of CLK3 occurs, FF0 again toggles to the SET state (Q0
= 1), and FF1 remains SET (Q1 = 1) because its J1 and K1 inputs are both LOW (Q0 = 0).
After this triggering edge, Q0 = 1 and Q1 = 1.
Finally, at the leading edge of CLK4, Q0 and Q1 go LOW because they both
have a toggle condition on their J1 and K1 inputs. The counter has now recycled to its
original state, Q0 = Q1 = 0.
Timing diagram
Counter
The output of FF1 (Q1) goes to the opposite state following each time Q0= 1.
This change occurs at CLK2, CLK4, CLK6, and CLK8. The CLK8 pulse causes the
counter to recycle. To produce this operation, Q 0 is connected to the J1 and K1 inputs of
FF1. When Q0= 1 and a clock pulse occurs, FF1 is in the toggle mode and therefore
changes state. When Q0= 0, FF1 is in the no-change mode and remains in its present
state.
The output of FF2 (Q2) changes state both times it is preceded by the unique
condition in which both Q0 and Q1 are HIGH. This condition is detected by the AND
gate and applied to the J2 and K2 inputs of FF2. Whenever Q0= Q1= 1, the output of
the AND gate makes J2= K2= 1, and FF2 toggles on the following clock pulse.
Otherwise, the J2 and K2 inputs of FF2 are held LOW by the AND gate output and
FF2 does not change state.
CLOCK Pulse      Q2  Q1  Q0
Initially         0   0   0
1                 0   0   1
2                 0   1   0
3                 0   1   1
4                 1   0   0
5                 1   0   1
6                 1   1   0
7                 1   1   1
8 (recycles)      0   0   0
Timing diagram
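A short Python sketch (an illustration, using the JK toggling conditions J0 = K0 = 1, J1 = K1 = Q0 and J2 = K2 = Q0Q1 described above) reproduces this count sequence:

# Sketch: 3-bit synchronous counter built from JK Flip-Flops clocked in parallel.
def jk(q, j, k):
    if j and k:  return 1 - q        # toggle
    if j:        return 1            # set
    if k:        return 0            # reset
    return q                         # no change

q0 = q1 = q2 = 0
for pulse in range(1, 9):
    j0 = k0 = 1                      # FF0 toggles on every pulse
    j1 = k1 = q0                     # FF1 toggles when Q0 = 1
    j2 = k2 = q0 & q1                # FF2 toggles when Q0 = Q1 = 1
    q0, q1, q2 = jk(q0, j0, k0), jk(q1, j1, k1), jk(q2, j2, k2)
    print(f"after CLK{pulse}: Q2 Q1 Q0 = {q2}{q1}{q0}")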
4-Bit Synchronous Binary Counter
Therefore, when Q0= Q1= Q2= 1, Flip-Flop FF3 toggles and for all other times it
is in a no-change condition. Points where the AND gate outputs are HIGH are indicated
by the shaded areas.
Timing diagram
CLOCK Pulse      Q3  Q2  Q1  Q0
Initially         0   0   0   0
1                 0   0   0   1
2                 0   0   1   0
3                 0   0   1   1
4                 0   1   0   0
5                 0   1   0   1
6                 0   1   1   0
7                 0   1   1   1
8                 1   0   0   0
9                 1   0   0   1
10 (recycles)     0   0   0   0
First, notice that FF0 (Q0) toggles on each clock pulse, so the logic equation for its
J0 and K0 inputs is
J0= K0= 1
Next, notice from table, that FF1 (Q1) changes on the next clock pulse each
time Q0 = 1 and Q3 = 0, so the logic equation for the J1 and K1 inputs is
J1= K1= Q0Q3’
This equation is implemented by ANDing Q0 and Q3 and connecting the gate output to
the J1 and K1 inputs of FFl.
Flip-Flop 2 (Q2) changes on the next clock pulse each time both Q0 = Q1 = 1.
This requires an input logic equation as follows:
J2= K2= Q0Q1
Finally, FF3 (Q3) changes to the opposite state on the next clock pulse each time
Q0 = Q1 = Q2 = 1 (state 7) and each time Q0 = Q3 = 1 (state 9, which produces the
recycle to 0), so the logic equation is
J3= K3= Q0Q1Q2 + Q0Q3
This function is implemented with the AND/OR logic connected to the J3 and K3 inputs
of FF3.
Timing diagram
To form a synchronous UP/DOWN counter, the control input (UP/DOWN) is
used to pass either the normal output or the inverted output of one Flip-Flop to the J
and K inputs of the next Flip-Flop. When UP/DOWN= 1, the MOD 8 counter will
count up from 000 to 111, and when UP/DOWN= 0, it will count down from 111 to 000.
When UP/DOWN= 1, it will enable AND gates 1 and 3 and disable AND gates
2 and 4. This allows the Q0 and Q1 outputs through the AND gates to the J and K inputs
of the following Flip-Flops, so the counter counts up as pulses are applied.
When UP/DOWN= 0, the reverse action takes place.
MODULUS-N-COUNTERS
The counter with 'n' Flip-Flops has a maximum MOD number of 2^n. Find the
number of Flip-Flops (n) required for the desired MOD number (N) using the
equation
2^n ≥ N
(i) For example, a 3-bit binary counter is a MOD 8 counter. The basic counter
can be modified to produce MOD numbers less than 2^n by allowing the
counter to skip states that are normally part of the counting sequence.
n = 3
N = 8
2^n = 2^3 = 8 = N
(ii) MOD 5 Counter:
2^n ≥ N
2^3 = 8 ≥ 5
Therefore n = 3 Flip-Flops are required.
1. Find the number of Flip-Flops (n) required for the desired MOD number
(N) using the equation,
2n ≥ N.
2. Connect all the Flip-Flops as a required counter.
3. Find the binary number for N.
4. Connect all Flip-Flop outputs for which Q= 1 when the count is N, as
inputs to NAND gate.
5. Connect the NAND gate output to the CLR input of each Flip-Flop.
When the counter reaches the Nth state, the output of the NAND gate goes
LOW, resetting all Flip-Flops to 0. Therefore the counter counts from 0
through N-1.
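The truncation scheme can be illustrated with a minimal Python sketch (the function name and the use of a plain integer in place of real Flip-Flops are assumptions for illustration):

# Sketch of a MOD-N counter: an n-bit counter is cleared as soon as the
# NAND gate detects count N, so the visible sequence is 0 ... N-1.
import math

def mod_n_counter(N, pulses):
    n = math.ceil(math.log2(N))          # number of Flip-Flops, 2^n >= N
    count, sequence = 0, []
    for _ in range(pulses):
        count = (count + 1) % (2 ** n)
        if count == N:                   # NAND output goes LOW and clears all Flip-Flops
            count = 0
        sequence.append(count)
    return n, sequence

print(mod_n_counter(5, 10))              # (3, [1, 2, 3, 4, 0, 1, 2, 3, 4, 0])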
UNIT III COMPUTER FUNDAMENTALS
Functional Units of a Digital Computer: Von Neumann Architecture – Operation and Operands
of Computer Hardware Instruction – Instruction Set Architecture (ISA): Memory Location,
Address and Operation – Instruction and Instruction Sequencing – Addressing Modes,
Encoding of Machine Instruction – Interaction between Assembly and High Level Language.
****************************************************************************
FUNCTIONAL UNITS
1. INPUT UNIT
Computers accept coded information through input units. The most common input device
is the keyboard.
Whenever a key is pressed, the corresponding letter or digit is automatically translated into
its corresponding binary code and transmitted to the processor.
Other kinds of input devices for human-computer interaction are available, including
the touchpad, mouse, joystick, and trackball. These are often used as graphic input devices
in conjunction with displays.
Microphones can be used to capture audio input which is then sampled and converted into
digital codes for storage and processing.
Cameras can be used to capture video input.
Digital communication facilities, such as the Internet, can also provide input to a computer
from other computers and database servers.
2. MEMORY UNIT
The function of the memory unit is to store programs and data. There are two classes of
storage, called Primary and Secondary.
Primary Memory
Primary memory, also called main memory, is a fast memory that operates at electronic
speeds. Programs must be stored in this memory while they are being executed.
The memory consists of a large number of semiconductor storage cells, each capable of
storing one bit of information.
They are handled in groups of fixed size called words.
The number of bits in each word is referred to as the word length of the computer,
typically 16, 32, or 64 bits.
To provide easy access to any word in the memory, a distinct address is associated with
each word location.
Addresses are consecutive numbers, starting from 0, that identify successive locations.
A particular word is accessed by specifying its address and issuing a control command to
the memory that starts the storage or retrieval process.
Instructions and data can be written into or read from the memory under the control of the
processor.
A memory in which any location can be accessed in a short and fixed amount of time after
specifying its address is called a random-access memory (RAM).
The time required to access one word is called the memory access time.
This time is independent of the location of the word being accessed. It typically ranges
from a few nanoseconds (ns) to about 100 ns for current RAM units.
Cache memory
Along with the main memory, a smaller, faster RAM unit, called a cache, is used to hold
sections of a program that are currently being executed, along with any associated data.
The cache is tightly coupled with the processor and is usually contained on the same
integrated-circuit chip. The purpose of the cache is to facilitate high instruction execution
rates.
Secondary Storage
Secondary storage is used when large amounts of data and many programs have to be
stored, particularly for information that is accessed infrequently.
Access times for secondary storage are longer than for primary memory.
Example:
o Magnetic disks,
o Optical disks (DVD and CD), and
o Flash memory devices.
4. OUTPUT UNIT
The output unit function is to send processed results to the outside world.
Examples:
o Printers
o Graphic Displays
5. CONTROL UNIT
The memory, arithmetic and logic, and I/O units store and process information and perform
input and output operations. The operation of these units must be coordinated in some way.
This is the responsibility of the control unit.
The control unit is used to send control signals to other units.
I/O transfers, consisting of input and output operations, are controlled by program
instructions that identify the devices involved and the information to be transferred.
Control circuits are responsible for generating the timing signals that govern the transfers
and determine when a given action is to take place.
Data transfers between the processor and the memory are also managed by the control unit
through timing signals.
OPERATIONS OF COMPUTER HARDWARE INSTRUCTION
OPERANDS OF COMPUTER HARDWARE INSTRUCTION
Machine instructions operate on data. The most important general categories of data are :
Addresses
Numbers
Characters
Logical data
Address:
The addresses are in fact a form of data. In many situations, some calculation must be
performed on the operand reference in an instruction to determine the physical address.
Numbers:
All computers support numeric data types. The common numeric data types are:
Integers
Floating Point
Decimal
Characters:
For documentation, a common form of data is text or character strings.
Most computers use the ASCII code, in which each character is represented by a unique 7-bit pattern.
Logic Data:
Most processors interpret data as a bit, byte, word, or doubleword; these are referred to as units of
data. When data is viewed as n 1-bit items, each having the value 0 or 1, it is
considered logical data.
INSTRUCTIONS AND INSTRUCTION SEQUENCING
INSTRUCTION
The words of a computer's language are called instructions, and its vocabulary is called
an instruction set.
Instruction Set
The vocabulary of commands understood by a given architecture.
A computer must have instructions capable of performing four types of operations:
Data transfers between the memory and the processor registers
Arithmetic and logic operations on data
Program sequencing and control
I/O transfers
The two basic types of notations used are:
1. Register Transfer Notation
2. Assembly Language Notation
Register Transfer Notation
The transfer of information from one location in the computer to another location such as
transfer between memory locations, processor registers, or registers in the I/O subsystem
involves Register Transfer Notation.
A location is represented by a symbolic name standing for its hardware binary
address.
Example: 1
The names for the addresses of memory locations may be LOC, PLACE, A, VAR2;
Processor register names may be R0, R5; and I/O register names may be DATAIN,
OUTSTATUS, and so on.
The contents of a location are denoted by placing square brackets around the name of the
location. The expression is,
R1 ← [LOC] means that the contents of memory location LOC are transferred
into processor register R1.
Example: 2
Consider the operation that adds the contents of registers R1 and R2, and then
places their sum into register R3.
It is indicated as,
R3 ← [R1] + [R2]
This type of notation is known as Register Transfer Notation (RTN).
Assembly Language Notation
1. Three-Address Instructions
The three-address instruction contains the memory addresses of the three operands: A,
B, and C.
A general instruction of three-address type has the format:
Operation Source1, Source2, Destination
This three-address instruction can be represented symbolically
as
Add A, B, C
Operands A and B are called the source operands
C is called the destination operand
Add is the operation to be performed on the operands.
2. Two-Address Instructions
In two-address instructions, each instruction specifies only two operands. A
general instruction of two-address type has the format:
Operation Source, Destination
Example
An Add instruction of this type is Add A, B performs the operation
B← [A] + [B].
When the sum is calculated, the result is sent to the memory and stored in location B,
replacing the original contents of this location. This means that operand B is both a source
and a destination.
3. Zero-Address Instructions
It is also possible to use instructions in which the locations of all operands are defined
implicitly. Such instructions are found in machines that store operands in a structure called
a pushdown stack. These instructions are called zero-address instructions.
Example:
Stack operation:
PUSH, POP, and PEEK
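As an illustration of zero-address (stack) operation, the hypothetical Python helpers below (not an instruction set defined in this text) evaluate an expression such as (A+B)*(C+D) purely with PUSH, POP, and stack-based ADD/MUL, in the spirit of the example that follows.

# Sketch: zero-address evaluation of (A+B)*(C+D) on a pushdown stack.
stack = []

def PUSH(value): stack.append(value)
def POP():       return stack.pop()
def ADD():       PUSH(POP() + POP())
def MUL():       PUSH(POP() * POP())

A, B, C, D = 2, 3, 4, 5
PUSH(A); PUSH(B); ADD()      # stack holds A+B
PUSH(C); PUSH(D); ADD()      # stack holds A+B, C+D
MUL()                        # stack holds (A+B)*(C+D)
print(POP())                 # 45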
Instruction Format
Example:
Write a program to evaluate the arithmetic statement Y = (A+B)*(C+D) using three-
address, two-address, one-address and zero-address instructions.
Solution:
ADDRESSING MODES
ENCODING OF MACHINE INSTRUCTION
The form in which we have presented the instructions is indicative of the
form used in assembly languages, except that we tried to avoid using acronyms for the
various operations, which are awkward to memorize and are likely to be specific to a
particular commercial processor.
To be executed in a processor, an instruction must be encoded in a compact
binary pattern. Such encoded instructions are properly referred to as machine
instructions.
The instructions that use symbolic names and acronyms are called
assembly language instructions, which are converted into the machine instructions
using the assembler program Instructions perform operations such as add, subtract,
move, shift, rotate, and branch. These instructions may use operands of different sizes,
such as 32-bit and 8-bit numbers or 8-bit ASCII-encoded characters.
The type of operation that is to be performed and the type of operands used
may be specified using an encoded binary pattern referred to as the OP code for the
given instruction. Suppose that 8 bits are allocated for this purpose, giving 256
possibilities for specifying different instructions. This leaves 24 bits to specify the rest
of the required information. Consider the instruction
Add R1, R2
This instruction specifies the registers R1 and R2, in addition to the OP code. If the
processor has 16 registers, then four bits are needed to identify each register.
Additional bits are needed to indicate that the Register addressing mode is used for
each operand.
The instruction
Move 24(R0), R5
requires 16 bits to denote the OP code and the two registers, and some bits
to express that the source operand uses the Index addressing mode and that the index
value is 24.
The instructions can be encoded in a 32-bit word; the figure below depicts a possible format. There is an
8-bit OP-code field and two 7-bit fields for specifying the source and destination
operands. Each 7-bit field identifies the addressing mode and the register involved (if
any). The "Other info" field allows us to specify any additional information that may be
needed, such as an index value or an immediate operand.
One-word instruction
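The one-word format described above (an 8-bit OP code, two 7-bit operand fields, and the remaining bits of "Other info") can be sketched with simple bit operations in Python; the exact field ordering chosen below is an assumption for illustration only.

# Sketch: packing and unpacking a 32-bit one-word instruction.
def encode(opcode, src, dst, other=0):
    assert opcode < 2**8 and src < 2**7 and dst < 2**7 and other < 2**10
    return (opcode << 24) | (src << 17) | (dst << 10) | other

def decode(word):
    return ((word >> 24) & 0xFF,      # 8-bit OP code
            (word >> 17) & 0x7F,      # 7-bit source field (mode + register)
            (word >> 10) & 0x7F,      # 7-bit destination field
            word & 0x3FF)             # 10 bits of "Other info", e.g. an index value

word = encode(opcode=0x12, src=0x05, dst=0x0A, other=24)
print(hex(word), decode(word))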
(c) Three-operand instruction
An instruction that refers to a memory operand LOC requires 18 bits to denote the OP code,
the addressing modes, and the register. This leaves 14 bits to express the address that
corresponds to LOC, which is clearly insufficient.
And #$FF000000, R2
in which case the second word gives a full 32-bit immediate operand.
R3 ← [R1] + [R2]
A possible format for such an instruction is shown in figure (c). Of course, the
processor has to be able to deal with such three-operand instructions. In an instruction
set where all arithmetic and logical operations use only register operands, the only
memory references are made to load/store the operands into/from the processor
registers.
INTERACTION BETWEEN ASSEMBLY AND HIGH LEVEL LANGUAGE
1. Assembly language programs written for one processor will not run on another type
of processor, whereas high-level language programs run independently of processor type.
2. The performance and accuracy of assembly language code are better than those of a high-level language.
3. High-level languages have to give extra instructions to run code on the computer.
4. Assembly language code is more difficult to understand and debug than high-level code.
5. One or two statements of high-level language expand into many assembly language
codes.
6. Assembly language can communicate with hardware better than a high-level language; some types
of hardware actions can only be performed by assembly language.
7. In assembly language, we can directly read pointers at a physical address, which is not
possible in a high-level language.
8. Working with bits is easier in assembly language.
9. An assembler is used to translate assembly language code, while a compiler is used
to compile code in a high-level language.
10. The executable code of high-level language is larger than assembly language code so
it takes a longer time to execute.
11. Due to long executable code, high-level programs are less efficient than assembly
language programs.
12. A high-level language programmer does not need to know hardware details, such as
registers in the processor, unlike an assembly programmer.
13. Most high-level language code is first automatically converted into assembly
code.
Assembly languages are different for every processor. Some examples of assembly languages
are given below.
ARM
MIPS
x86
Z80
68000
6502
6510
Examples of high-level language:
C
Fortran
Lisp
Prolog
Pascal
Cobol
Basic
Algol
Ada
C++
C#
PHP
Perl
Ruby
Common Lisp
Python
Golang
Javascript
Pharo
PROCESSORS
A Basic MIPS implementation includes a subset of the core MIPS instruction set.
The MIPS instruction set is divided into three classes:
Memory-reference instructions - load word (lw), store word (sw)
Arithmetic-logical instructions - add, sub, and, or, slt
Branch instructions - beq, jump (j)
For every instruction, the first two steps are identical:
1. Fetch Instruction
2. Fetch Operands
The remaining steps depend on the instruction class.
Use of ALU in MIPS:
The memory-reference instructions use the ALU for memory address
calculation.
The arithmetic-logical instructions use the ALU for operation execution.
The branch instructions use the ALU for comparison.
After using the ALU,
A memory-reference instruction will need to access the memory either to read
data for a load or write data for a store.
An arithmetic-logical or load instruction must write the data from the ALU or
memory back into a register.
A branch instruction may change the next instruction address based on the
comparison; otherwise, the PC should be incremented by 4 to get the address of
the next instruction.
WORKING OF A BASIC MIPS IMPLEMENTATION
All instructions start by using the program counter to supply the instruction address to
the instruction memory.
After the instruction is fetched, the register operands used by an instruction are specified
by fields of that instruction.
Once the register operands have been fetched, they can be operated on to compute a
memory address (for a load or store), to compute an arithmetic result (for an integer
arithmetic-logical instruction), or to perform a comparison (for a branch).
If the instruction is an arithmetic-logical instruction, the result from the ALU must be
written to a register.
If the operation is a load or store, the ALU result is used as an address to either store a
value from the registers or load a value from memory into the registers. The result from
the ALU or memory is written back into the register file.
Branches require the use of the ALU output to determine the next instruction address,
which comes either from the ALU (where the PC and branch offset are summed) or from
an adder that increments the current PC by 4.
Data path design begins in examining the major components required to execute each
class of MIPS instructions.
The major components required to execute each class of MIPS instruction are called as
data path elements.
A data path element is a unit used to operate on or hold data within a processor.
In the MIPS implementation, the data path elements include
Instruction Memory
Data Memory
Register File
ALU
Adders
Building a MIPS data path consists of
1. DataPath for Fetching the instruction and incrementing the PC
2. DataPath for Executing arithmetic and logic instructions
3. Datapath for Executing a memory-reference instruction
4. DataPath for Executing a branch instruction
2. DATAPATH FOR EXECUTING ARITHMETIC AND LOGIC INSTRUCTIONS (R-Type)
The processor’s 32 general-purpose registers are stored in a structure called a register
file.
A register file is a collection of registers in which any register can be read or written by
specifying the number of the register in the file.
An ALU is used to operate on the values read from the registers.
It reads two registers, performs an ALU operation on the contents of the registers, and
writes the result to a register.
These instructions are either called R-type instructions or arithmetic logical
instructions.
This instruction class includes add, sub, AND, OR, and slt.
R-format Instruction Operations :
1. Read the two register operands
2. Perform the arithmetic/logical operation
3. Write the register result
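A minimal Python sketch of this R-type flow (register numbers and values are chosen only for illustration) reads two registers, applies the ALU operation, and writes the result back:

# Sketch: register file + ALU executing an R-type instruction.
regfile = [0] * 32                 # 32 general-purpose registers
regfile[8], regfile[9] = 7, 5      # e.g. $t0 = 7, $t1 = 5

def alu(op, a, b):
    return {"add": a + b, "sub": a - b, "and": a & b,
            "or": a | b, "slt": int(a < b)}[op]

def r_type(op, rd, rs, rt):
    regfile[rd] = alu(op, regfile[rs], regfile[rt])   # read rs and rt, write rd

r_type("add", 10, 8, 9)            # add $t2, $t0, $t1
print(regfile[10])                 # 12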
3. DATAPATH FOR EXECUTING A MEMORY-REFERENCE INSTRUCTION
The MIPS load word and store word instructions have the general form
(i) lw $t1,offset($t2)
(ii) sw $t1,offset ($t2).
These instructions compute a memory address by adding the base register, which is $t2,
to the 16-bit signed offset field contained in the instruction.
If the instruction is a load, the value read from memory must be written into the register
file in the specified register, which is $t1.Thus, we need both the register file and the
ALU.
If the instruction is a store, the value to be stored must also be read from the register file
where it resides in $t1.
In addition, a unit to sign-extend the 16-bit offset field in the instruction to a 32-bit
signed value, and a data memory unit to read from or write to.
The data memory must be written on store instructions; hence, it has both read and write
control signals, an address input, as well as an input for the data to be written into
memory.
Load/Store Instructions Operations :
1. Read register operands
2. Calculate the memory address using 16-bit offset
- Use the ALU to add the base register to the sign-extended offset
3. Load: Read memory at the computed address and update the register ($t1)
4. Store: Write the register value ($t1) to memory at address $t2 + offset
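The address calculation for a load can be sketched as follows (the dict standing in for the data memory unit and the register names are assumptions for illustration):

# Sketch: lw $t1, offset($t2) -- sign-extend the 16-bit offset and add the base.
def sign_extend16(value):
    return value - 0x10000 if value & 0x8000 else value

regs = {"$t1": 0, "$t2": 0x1000}
memory = {0x0FFC: 99}                     # data memory modelled as a dict

def lw(rt, offset16, base):
    address = regs[base] + sign_extend16(offset16)
    regs[rt] = memory[address]

lw("$t1", 0xFFFC, "$t2")                  # offset 0xFFFC = -4 -> address 0x0FFC
print(regs["$t1"])                        # 99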
4. DATAPATH FOR EXECUTING A BRANCH INSTRUCTION
Branch Target Address = PC + 4 + Offset (Sign Extended and Shifted left 2 times)
When the condition is true (operands are equal), the branch target address becomes the
new PC, and we say that the branch is taken.
When the condition is false (operands are not equal), the incremented PC should replace
the current PC; we say that the branch is not taken.
Branch Instruction Operations:
1. Read register operands
2. Compare operands
Use ALU - Subtract the two operands and
Check for Zero output
3. Calculate target address
Sign-extend the offset value
Shift left 2 times
Add to PC + 4
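These steps amount to a subtract-and-test for the Zero output plus a target-address adder, which the following sketch illustrates:

# Sketch: beq decision and branch target = PC + 4 + (sign-extended offset << 2).
def sign_extend16(value):
    return value - 0x10000 if value & 0x8000 else value

def beq_next_pc(pc, reg_a, reg_b, offset16):
    zero = (reg_a - reg_b) == 0                        # ALU subtracts and checks Zero
    target = pc + 4 + (sign_extend16(offset16) << 2)
    return target if zero else pc + 4                  # taken vs. not taken

print(hex(beq_next_pc(0x0040, 7, 7, 0x0003)))          # taken -> 0x50
print(hex(beq_next_pc(0x0040, 7, 9, 0x0003)))          # not taken -> 0x44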
Show how to build a datapath for the operational portion of the memory reference
and arithmetic-logical instructions that uses a single register file and a single ALU
to handle both types of instructions, adding any necessary multiplexors.
We can combine the datapath components needed for the individual instruction classes, into a
single datapath and add the control to complete the implementation.
This simplest datapath will execute all instructions in one clock cycle. To share a datapath
element between two different instruction classes, we may need to allow multiple
connections to the input of an element, using a multiplexor and control signal to select among
the multiple inputs.
Step 1
To create a datapath with only a single register file and a single ALU, we must have two
different sources for the second ALU input, as well as two different sources for the data
stored into the register file. Thus, one multiplexor is placed at the ALU input and another at
the data input to the register file.
Step 2
Combine all the pieces to make a simple datapath for the MIPS architecture by adding the
Datapath for Instruction fetch
Datapath for Arithmetic-Logical instructions
Datapath for Memory instructions
Datapath for Branch instruction
Step 3
In the datapath obtained by composing separate pieces,
The branch instruction uses the main ALU for comparison of the register operands, so
we must keep the adder for computing the branch target address.
An additional multiplexor is required to select either the sequentially following
instruction address (PC + 4) or the branch target address to be written into the PC.
Step 4
The control unit must be able to take inputs and generate a write signal for each state
element, the selector control for each multiplexor, and the ALU control.
Design of Control Unit
The Control Unit is classified into two major categories:
1. Hardwired Control
2. Micro programmed Control
Hardwired Control
The Hardwired Control organization involves the control logic to be implemented with
gates, flip-flops, decoders, and other digital circuits.
The following image shows the block diagram of a Hardwired Control organization.
o A Hard-wired Control consists of two decoders, a sequence counter, and a number of logic gates.
o An instruction fetched from the memory unit is placed in the instruction register (IR).
o The components of the instruction register include the I bit, the operation code, and bits 0 through 11.
o The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
o The outputs of the decoder are designated by the symbols D0 through D7.
o Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.
o Bits 0 through 11 of the instruction are applied to the control logic gates.
o The Sequence counter (SC) can count in binary from 0 through 15.
Micro-programmed Control
The Micro programmed Control organization is implemented by using the programming approach. In
Micro programmed Control, the micro-operations are performed by executing a program consisting of
micro-instructions.
The following image shows the block diagram of a Micro programmed Control organization.
o The Control memory address register specifies the address of the micro-instruction.
o The Control memory is assumed to be a ROM, within which all control information is permanently
stored.
o The control register holds the microinstruction fetched from the memory.
o The micro-instruction contains a control word that specifies one or more micro-operations for the
data processor.
o While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next
microinstruction.
o The next address generator is often referred to as a micro-program sequencer, as it determines the
address sequence that is read from control memory.
5. PIPELINING
PIPELINED EXECUTION / ORGANIZATION
2 - STAGE PIPELINED EXECUTION
Execution of a program consists of a sequence of fetch and execute steps.
Let Fi and Ei refer to the fetch and execute steps for instruction Ii.
A computer has two separate hardware units.
They are:
Instruction fetch unit
Instruction execution unit
The instruction fetched by the fetch unit is stored in an intermediate storage buffer.
This buffer is needed to enable the execution unit to execute the instruction while the
fetch unit is fetching the next instruction.
The execution results are stored in the destination location specified by the instruction.
The fetch and execute steps of any instruction can each be completed in one cycle.
4 - STAGE PIPELINED EXECUTION
The stages are:
F - Fetch : Read the instruction from the memory
D - Decode : Decode the instruction and fetch the source operand(s)
E - Execute : Perform the operation specified by the instruction
W - Write : Store the result in the destination location
Instruction Fetch - The CPU reads instructions from the address in the memory
whose value is present in the program counter.
Instruction Decode - Instruction is decoded and the register file is accessed to get the
values from the registers used in the instruction.
Execute - ALU operations are performed.
Memory Access - Memory operands are read and written from/to the memory that is
present in the instruction.
Write Back – Computed value is written back to the register.
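The overlap of stages can be visualised with a small sketch: in an ideal pipeline, instruction Ii enters stage s in clock cycle i + s, so once the pipeline is full, one instruction completes every clock cycle.

# Sketch: stage occupancy of an ideal 4-stage (F, D, E, W) pipeline.
STAGES = ["F", "D", "E", "W"]

def schedule(num_instructions):
    width = num_instructions + len(STAGES) - 1
    for i in range(num_instructions):
        row = ["--"] * width
        for s, stage in enumerate(STAGES):
            row[i + s] = stage + str(i + 1)   # instruction i+1 occupies this stage
        print(" ".join(row))

schedule(4)
# F1 D1 E1 W1 -- -- --
# -- F2 D2 E2 W2 -- --   ... one instruction completes per cycle once the pipe is full.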
6- STAGE PIPELINED EXECUTION
STRUCTURAL HAZARD
A structural hazard occurs when two or more instructions that are already in the pipeline
need the same resource.
These hazards are because of conflicts due to insufficient resources.
The result is that the instructions must be executed in series rather than in parallel for a
portion of the pipeline.
Structural hazards are sometimes referred to as resource hazards.
Example:
A situation in which multiple instructions are ready to enter the execute
instruction phase and there is a single ALU (Arithmetic Logic Unit).
One solution to such a resource hazard is to increase the available resources, such as
having multiple ALUs.
DATA HAZARD
A data hazard occurs when there is a conflict in the access of an operand location.
There are three types of data hazards. They are
Read After Write (RAW) or True Dependency:
An instruction modifies a register or memory location and a succeeding instruction
reads the data in that memory or register location.
A RAW hazard occurs if the read takes place before the write operation is complete.
Example
I1 : R2 ← R5 + R3
I2 : R4 ← R2 + R3
Write After Read (WAR) or Anti Dependency:
An instruction reads a register or memory location and a succeeding instruction writes
to the location.
A WAR hazard occurs if the write operation completes before the read operation takes
place.
Example
I1 : R4 ← R1 + R5
I2 : R5 ← R1 + R2
INSTRUCTION / CONTROL / BRANCH HAZARD
An instruction (or) control (or) branch hazard, occurs when the pipeline makes the
wrong decision on a branch prediction and therefore brings instructions into the
pipeline that must subsequently be discarded.
Whenever the stream of instructions supplied by the instruction fetch unit is
interrupted, the pipeline stalls.
There are two techniques by which we can handle data hazards.
They are
(1) Using Operand Forwarding (2) Using Software
A special arrangement needs to be made to “forward” the output of ALU to the input of
ALU.
Example :
I1 : ADD R1,R2,R3
I2: SUB R4,R1,R5
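In this pair, I2 needs R1 before I1 has written it back; a simple check (with illustrative field names) detects the RAW dependency that forwarding resolves:

# Sketch: RAW hazard detection -- the destination of I1 is a source of I2,
# so the ALU output of I1 must be forwarded to the ALU input of I2.
i1 = {"op": "ADD", "dest": "R1", "src": ["R2", "R3"]}
i2 = {"op": "SUB", "dest": "R4", "src": ["R1", "R5"]}

def raw_hazard(producer, consumer):
    return producer["dest"] in consumer["src"]

if raw_hazard(i1, i2):
    print("RAW hazard on", i1["dest"], "- forward the ALU result of I1 to I2")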
The following approaches are used to deal with branch (control) hazards:
1) Multiple Streams
2) Prefetch Branch Target
3) Loop Buffer
4) Branch Prediction
5) Delayed Branch
1) MULTIPLE STREAMS
o The approach is to replicate the initial portions of the pipeline and allow the
pipeline to fetch both instructions, making use of multiple streams.
o There are two problems with this approach:
1. Contention delays for access to the registers and to memory.
2. Additional branch instructions may enter the pipeline before the original
branch decision is resolved.
2) PREFETCH BRANCH TARGET
o When a conditional branch is recognized, the target of the branch is prefetched, in
addition to the instruction following the branch.
o This target is then saved until the branch instruction is executed.
o If the branch is taken, the target has already been prefetched.
3) LOOP BUFFER
o A loop buffer is a small, very-high-speed memory maintained by the instruction
fetch stage of the pipeline and containing the ‘n’ most recently fetched
instructions, in sequence.
o If a branch is to be taken, the hardware first checks whether the branch target is within
the buffer. If so, the next instruction is fetched from the buffer.
4) BRANCH PREDICTION
o To reduce the branch penalty, the processor needs to anticipate that an instruction
being fetched is a branch instruction and predict its outcome to determine which
instruction should be fetched.
o It is generally of two types:
Static Branch Prediction
Dynamic Branch Prediction
o Static Branch Prediction - Assume that the branch will not be taken and to fetch the
next instruction in sequential address order.
o Dynamic Branch Prediction - Uses the recent branch history, to see if a branch was
taken the last time this instruction was executed.
o Dynamic branch prediction is commonly implemented using:
Taken/not taken switch
Branch history table
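A common form of branch history table (the 2-bit saturating-counter organisation sketched below is an assumption, not spelled out in the text) remembers recent outcomes per branch and predicts accordingly:

# Sketch: branch history table of 2-bit saturating counters.
# Counter values 0,1 predict "not taken"; 2,3 predict "taken".
TABLE_SIZE = 16
table = [1] * TABLE_SIZE                      # start weakly not taken

def predict(branch_pc):
    return table[branch_pc % TABLE_SIZE] >= 2

def update(branch_pc, taken):
    i = branch_pc % TABLE_SIZE
    table[i] = min(3, table[i] + 1) if taken else max(0, table[i] - 1)

for outcome in (True, True, False, True):     # observed outcomes of one branch
    print("predict taken?", predict(0x40), "actual:", outcome)
    update(0x40, outcome)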
o Types Of Branch Predictor
1. Correlating Predictor - Combines local behavior and global behavior of a particular
branch.
2. Tournament Predictor - Makes multiple predictions for each branch and a selection
mechanism that chooses which predictor to enable for a given branch.
5) DELAYED BRANCH
o In MIPS, branches are delayed.
o This means that the instruction immediately following the branch is always executed,
independent of whether the branch condition is true or false. This is known as Branch
Folding Technique.
o When the condition is false, the execution looks like a normal branch.
o When the condition is true, a delayed branch first executes the instruction immediately
following the branch in sequential instruction order before jumping to the specified
branch target address.
UNIT V
Memory access time - The time that elapses between the initiation of an operation to
transfer a word of data and the completion of that operation.
Memory cycle time - The minimum time delay required between the initiation of two
successive memory operations.
Characteristics of Memory Systems
MEMORY HIERARCHY
In the memory hierarchy, the Registers are at the top in terms of speed of access.
At the next level of the hierarchy is a relatively small amount of memory that can be
implemented directly on the processor chip, called a Cache.
o Cache memory holds copies of the instructions and data stored in a much larger
memory that is provided externally.
o The cache memory is divided into three levels :
Level 1 (L1) cache –Primary Cache
Level 2 (L2) cache – Secondary Cache
Level 3 (L3) cache
A primary cache is always located on the processor chip. This cache is small and its access
time is comparable to that of processor registers. The primary cache is referred to as the
Level 1 (L1) cache.
A larger, and slower, secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the Level 2 (L2) cache.
Some computers have a Level 3 (L3) cache of even larger size, in addition to the L1 and L2
caches. An L3 cache is also implemented in SRAM technology.
The next level in the hierarchy is the Main Memory. The main memory is much larger
but slower than cache memories.
At the bottom level in the memory hierarchy is the Secondary Memory -Magnetic Disk and
Magnetic tape. They provide a very large amount of inexpensive memory.
Memory Management
o Memories that consist of circuits capable of retaining their state as long as power is applied
are known as static memories.
Advantages:
o Very low power consumption
o Can be accessed very quickly
o Static RAMs are fast, but their cells require several transistors.
o Less expensive and higher density RAMs can be implemented with simpler cells.
o A sense circuit at the end of the bit line generates the proper output value.
o The state of the connection to ground in each cell is determined when the chip is
manufactured.
o There are various kinds of ROM such as :
PROM
EPROM
EEPROM
PROM (Programmable ROM)
o The PROM is nonvolatile and may be written into only once.
o For the PROM, the writing process is performed electrically and may be performed by a
supplier or customer at a time later than the original chip fabrication.
o Special equipment is required for the writing or “programming” process.
o PROMs provide flexibility and convenience. It is less expensive.
MAGNETIC DISKS
o Magnetic Disks consist of one or more disk platters mounted on a common spindle.
o A thin magnetic film is deposited on each platter, usually on both sides.
o The assembly is placed in a drive that causes it to rotate at a constant speed.
o The read/write heads of a disk system are movable. There is one head per surface.
o All heads are mounted on a comb-like arm that can move radially across the stack of disks
to provide access to individual tracks.
o Each surface is divided into concentric tracks, and each track is divided into sectors.
o The set of corresponding tracks on all surfaces of a stack of disks forms a logical
cylinder.
o All tracks of a cylinder can be accessed without moving the read/write heads.
o Data are accessed by specifying the surface number, the track number, and the sector
number. Read and Write operations always start at sector boundaries.
o Data bits are stored serially on each track. Each sector may contain 512 or more bytes.
o The data are preceded by a sector header that contains identification (addressing)
information used to find the desired sector on the selected track.
FLOPPY DISK
o The disks are known as hard or rigid disk units. Floppy disks are smaller, simpler, and
cheaper disk units that consist of a flexible, removable, plastic diskette coated with
magnetic material.
o The diskette is enclosed in a plastic jacket, which has an opening where the read/write
head can be positioned.
o A hole in the center of the diskette allows a spindle mechanism in the disk drive to
position and rotate the diskette.
o Advantages - Low cost , Portability
o Disadvantages - Smaller storage capacities, Longer access times , Higher failure rates
OPTICAL DISK
CD
o The optical technology was adapted to the computer environment to provide a high
capacity storage medium known as a CD.
o The CDs are used to store information in a binary form; they are suitable for use as
a storage medium in computer systems.
o Stored data are organized on CD-ROM tracks in the form of blocks called sectors.
o The optical technology used for CD systems is based on the fact that laser light can be
focused on a very small spot.
o A laser beam is directed onto a spinning disk, with tiny indentations arranged to
form a long spiral track on its surface.
o The indentations reflect the focused beam toward a photo detector, which detects
the stored binary patterns.
o The total thickness of the disk is 1.2 mm.
o Advantages:
Small physical size
Low cost
Ease of handling as a removable and transportable mass-storage medium.
DVD
o DVD (Digital Versatile Disk) technology is the same as that of CDs.
o The disk is 1.2 mm thick, and it is 120 mm in diameter.
o Its storage capacity is made much larger than that of CDs.
MAGNETIC TAPES
o Magnetic tapes are suited for off-line storage of large amounts of data. They are
typically used for backup purposes and for archival storage.
o Data on the tape are organized in the form of records separated by gaps.
o Tape motion is stopped only when a record gap is underneath the read/write heads.
o A group of related records is called a file. The beginning of a file is identified by a file
mark.
o The file mark is a special single- or multiple-character record, usually preceded by a gap
longer than the inter-record gap.
o The first record following a file mark can be used as a header or identifier for the file.
o This allows the user to search a tape containing a large number of files for a particular
file.
CACHE MEMORY
o Cache memory is a high-speed static random access memory (SRAM).
o Cache memory is responsible for speeding up computer operations and processing.
o This memory is integrated directly into the CPU chip or placed on a separate chip that
has a separate bus interconnect with the CPU.
o The purpose of cache memory is to store program instructions and data that are used
repeatedly in the operation of programs, or information that the CPU is likely to need next.
o The CPU can access this information quickly from the cache rather than having to get it
from the computer's main memory.
o Fast access to these instructions increases the overall speed of the program.
o A cache memory system includes a small amount of fast memory and a large amount of
slow memory(DRAM). This system is configured to simulate a large amount of fast
memory.
o The cache memory system consists of the following units:
Cache - consists of static RAM(SRAM)
Main Memory –consists of dynamic RAM(DRAM)
Cache Controller – implements the cache logic. This controller decides which
block of memory should be moved in or out of the cache.
CACHE LEVELS
o The processor cache is of two or more levels :
Level 1 (L1) cache
Level 2 (L2) cache
Level 3 (L3) cache
o A primary cache is always located on the processor chip. This cache is small and its
access time is comparable to that of processor registers. The primary cache is referred to
as the Level 1 (L1) cache.
o A larger, and slower, secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the Level 2 (L2) cache.
o Some computers have a Level 3 (L3) cache of even larger size, in addition to the L1 and
L2 caches.
TYPES OF CACHE
Two types of cache exists. They are
Split cache : Data and instructions are stored separately (Harvard architecture)
CACHE MAPPING FUNCTIONS
o The correspondence between the main memory blocks and cache is specified by a
“Mapping Function”.
o When a processor issues a Read request, a block of words is transferred from the main
memory to the cache, one word at a time.
o When the program references any of the location in the block, the desired contents are
read directly from the cache.
o Mapping functions determine how memory blocks are placed in the cache.
o The three mapping functions:
o Direct Mapping
o Associative Mapping
o Set-Associative Mapping
Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words, and
assume that the main memory is addressable by a 16-bit address. The main memory has 64K words,
which will be viewed as 4K blocks of 16 words each.
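For this configuration the address fields follow directly; the worked sketch below derives the 4-bit word field and 7-bit block field, the resulting 5-bit tag for direct mapping, and the 12-bit tag for associative mapping mentioned in the next subsection.

# Sketch: address field sizes for a 2K-word cache of 128 blocks x 16 words,
# with 16-bit main-memory addresses.
from math import log2

words_per_block, cache_blocks, address_bits = 16, 128, 16

word_bits  = int(log2(words_per_block))             # 4 bits select a word within a block
block_bits = int(log2(cache_blocks))                # 7 bits select a cache block (direct mapping)
tag_direct = address_bits - block_bits - word_bits  # 5-bit tag for direct mapping
tag_assoc  = address_bits - word_bits               # 12-bit tag for associative mapping

print(word_bits, block_bits, tag_direct, tag_assoc)  # 4 7 5 12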
(2) Associative Mapping:-
This is a more flexible mapping method, in which a main memory block can be placed
into any cache block position.
In this case, 12 tag bits are required to identify a memory block when it is resident in
the cache.
The tag bits of an address received from the processor are compared to the tag bits of
each block of the cache to see if the desired block is present. This is known as the
Associative Mapping technique.
The cost of an associative-mapped cache is higher than the cost of a direct-mapped
cache because of the need to search all 128 tag patterns to determine whether a block
is in the cache. This is known as an associative search.
COMPARISION BETWEEN MAPPING TECHNIQUES
o When a new block is to be brought into the cache and if the cache is full, then one of
the existing blocks must be replaced.
o For direct mapping, there is only one possible line for any particular block, and no
choice is possible.
o For the associative and set-associative techniques, a replacement algorithm is needed.
A number of algorithms have been tried.
o Four replacement algorithms are
1. Random
2. LRU (Least-recently used)
3. LFU (Least frequently used)
4. FIFO (First in First out)
VIRTUAL MEMORY
Virtual memory is an architectural solution to increase the effective size of the memory
system.
Virtual memory is a memory management technique that allows the execution of
processes that are not completely in memory.
In some cases during the execution of the program the entire program may not be needed.
Virtual memory allows files and memory to be shared by two or more processes through
Page sharing.
The techniques that automatically move program and data between main memory and
secondary storage when they are required for execution is called virtual-memory techniques.
ADVANTAGES
One major advantage of this scheme is that programs can be larger than physical memory
Virtual memory also allows processes to share files easily and to implement shared
memory.
Increase in processor utilization and throughput.
Less I/O would be needed to load or swap user programs into memory.
VIRTUAL TO PHYSICAL ADDRESS TRANSLATION
Each virtual address generated by the processor contains a
virtual page number and an offset.
DIRECT MEMORY ACCESS (DMA)
The DMA controller contains an address unit for generating addresses and selecting the I/O
device for transfer.
It also contains a control unit and a data count register for keeping count of the number of
blocks transferred and indicating the direction of data transfer.
When the transfer is completed, DMA informs the processor by raising an interrupt.
Bus Request :
o It is used by the DMA controller to request the CPU to relinquish(release) the control
of the buses.
Bus Grant :
o It is activated by the CPU to inform the external DMA controller that the buses are
in high impedance state and the requesting DMA can take control of the buses.
o Once the DMA has taken the control of the buses, it transfers the data.
o After gaining control, the DMA controller performs read and write operations directly
between devices and memory.
o The DMA requires the CPU to provide two additional bus signals:
The Hold (HLD)Signal is an input to the CPU through which DMA
controllers asks for ownership of the bus.
The Hold Acknowledge (HLDA) signal tells that the bus has been granted.
o The CPU will finish all pending bus operations before granting control of the bus to
the DMA controller.
o Once the DMA controller gets the control of the buses, it can perform any transaction
(reads and writes) using the same bus.
o After the transaction is finished, the DMA controller returns the bus to the CPU.
Burst DMA Transfer : In this mode, the DMA hands over the buses to the CPU only after
completion of the whole data transfer.
Block Transfer : Here, the DMA transfers data only when the CPU is executing an
instruction that does not require the use of the buses.
(a) Byte (or) Cycle stealing DMA transfer Mode
(b) Burst DMA Transfer Mode
ACCESSING INPUT /OUTPUT SYSTEM
o The input-output subsystem of a computer, referred to as I/O, provides an efficient
mode of communication between the central system and the outside environment.
o Programs and data must be entered into computer memory for processing and results
obtained from computations must be recorded or displayed for the user.
I/O INTERFACES
Input-Output interface provides a method for transferring information between
internal storage and external I/O devices.
The I/O bus from the processor is attached to all peripheral interfaces.
To communicate with a particular device, the processor places a device address on the
address lines.
The I/O bus consists of data lines, address lines, and control lines.
The I/O Interface consists of address decoder, control circuits, data register and status register to
coordinate the I/O transfers.
The address decoder enables the device to recognize its address when this address
appears on the address lines.
The data register holds the data. A data command causes the interface to respond by
transferring data from the bus into one of its registers.
The status register contains status information about the device. A status command is used to test various
status conditions in the interface and the peripheral.
A control command is issued to activate the peripheral and to inform it what to do.
INTERRUPTS
o An interrupt is defined as a hardware- or software-generated event, external to the currently
executing process, that affects the normal flow of instruction execution.
o The processor responds by suspending its current activities, saving its state, and
executing a function called an interrupt handler (or an interrupt service routine, ISR)
to deal with the event.
o This interruption is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.
CLASSES OF INTERRUPTS
TYPES OF INTERRUPTS
There are two types of interrupts:
1. Hardware interrupts
2. Software interrupts
Hardware Interrupts :
o Used by devices to communicate that they require attention from the operating
system.
o For example, pressing a key on the keyboard (or) moving the mouse triggers
hardware interrupts that cause the processor to read the keystroke or mouse
position.
Software Interrupts:
o Caused either by an exceptional condition in the processor itself, or a
special instruction in the instruction set which causes an interrupt when it is
executed.
o Example : Divide-by-zero exception