Array Multiplier in Digital Logic
Addressing Modes – The term addressing mode refers to the way in which the operand of an
instruction is specified. The addressing mode specifies a rule for interpreting or modifying the
address field of the instruction before the operand is actually referenced.
Addressing modes for 8086 instructions are divided into two categories:
1) Addressing modes for data
2) Addressing modes for branch
The 8086 memory addressing modes provide flexible access to memory, allowing you to easily
access variables, arrays, records, pointers, and other complex data types. The key to good
assembly language programming is the proper use of memory addressing modes.
An assembly language program instruction consists of two parts: an operation code (opcode) and an operand.
Example: MOV AL, 35H (move the data 35H into AL register)
• Register mode: In register addressing the operand is placed in an 8-bit or 16-bit general-
purpose register. The data is in the register that is specified by the instruction.
Here one register reference is required to access the data.
• Register indirect mode: The 8086 CPUs let you access memory indirectly through a register
using the register indirect addressing modes.
• MOV AX, [BX] (move the contents of the memory location addressed by the register BX to
the register AX)
• Auto Indexed (increment mode): Effective address of the operand is the contents of a
register specified in the instruction. After accessing the operand, the contents of this register
are automatically incremented to point to the next consecutive memory location. (R1)+.
Here one register reference, one memory reference and one ALU operation is required to
access the data.
• Example:
• Add R1, (R2)+ // equivalent to:
• R1 = R1 + M[R2]
• R2 = R2 + d
Useful for stepping through arrays in a loop. R2 – start of array, d – size of an element.
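The array-stepping use of Add R1, (R2)+ can be sketched in Python (a hypothetical simulation; memory is modeled as a word-addressed list, so the element size d is 1):

```python
# Sketch of auto-increment addressing: Add R1, (R2)+ in a loop.
memory = [10, 20, 30, 40]   # array starting at address 0

R1 = 0      # accumulator
R2 = 0      # points at the start of the array

for _ in range(len(memory)):
    R1 = R1 + memory[R2]    # one memory reference: operand at the address in R2
    R2 = R2 + 1             # register auto-incremented to the next element

print(R1)   # sum of the array: 100
print(R2)   # now points one past the last element: 4
```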
• Auto indexed (decrement mode): Effective address of the operand is the contents of a
register specified in the instruction. Before accessing the operand, the contents of this
register are automatically decremented to point to the previous consecutive memory
location. –(R1)
Here one register reference, one memory reference and one ALU operation is required to
access the data.
Example:
Add R1, -(R2) // equivalent to:
R2 = R2 - d
R1 = R1 + M[R2]
Auto-decrement mode works like auto-increment mode, except that the register is decremented
before the access rather than incremented after it. Together the two modes can be used to
implement a stack via push and pop; in general, auto-increment and auto-decrement modes are
useful for implementing “Last-In-First-Out” data structures.
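A minimal sketch of how the two modes pair up as push and pop, assuming a stack that grows toward lower addresses (push uses -(SP), pop uses (SP)+):

```python
# Push uses auto-decrement -(SP): decrement first, then store.
# Pop uses auto-increment (SP)+: load first, then increment.
memory = [0] * 8
SP = 8                      # stack pointer starts just past the top of memory

def push(value):
    global SP
    SP = SP - 1             # -(SP): decrement before the memory reference
    memory[SP] = value

def pop():
    global SP
    value = memory[SP]      # (SP)+: memory reference before the increment
    SP = SP + 1
    return value

push(1); push(2); push(3)
print(pop(), pop(), pop())  # 3 2 1 — Last-In-First-Out
```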
• Direct addressing / Absolute addressing mode (symbol [ ]): The operand’s offset is given
in the instruction as an 8-bit or 16-bit displacement element. In this addressing mode the 16-
bit effective address of the data is part of the instruction.
Here only one memory reference operation is required to access the data.
Note:
1. Both PC-relative and based-register addressing modes are suitable for program
relocation at runtime.
2. Based-register addressing mode is best suited for writing position-independent
code.
Advantages of Addressing Modes
1. To give programmers facilities such as pointers, counters for loop control, indexing of
data, and program relocation.
2. To reduce the number of bits in the addressing field of the instruction.
Carry Look-Ahead Adder
An adder’s carry propagation delay also slows other arithmetic operations such as
multiplication and division, since these are built from repeated addition or subtraction steps.
Improving the speed of addition therefore improves the speed of all other arithmetic
operations, so reducing the carry propagation delay of adders is of great importance. Different
logic design approaches have been employed to overcome the carry propagation problem.
One widely used approach is carry look-ahead, which solves the problem by calculating the
carry signals in advance, based on the input signals. This type of adder circuit is called a carry
look-ahead adder.
Here a carry signal will be generated in two cases:
1. When both input bits A and B are 1.
2. When one of the two bits is 1 and the carry-in is 1.
In ripple carry adders, for each adder block, the two bits that are to be added are available
instantly. However, each adder block waits for the carry to arrive from its previous block. So,
it is not possible to generate the sum and carry of any block until the input carry is known.
The ith block waits for the (i-1)th block to produce its carry, so there is a considerable time
delay, called the carry propagation delay.
Consider the above 4-bit ripple carry adder. The sum S3 is produced by the corresponding full
adder as soon as the input signals are applied to it. But the carry input C4 does not reach its
final steady-state value until C3 is available at its steady-state value. Similarly, C3
depends on C2 and C2 on C1. The carry must therefore propagate through all the stages before
the output S3 and carry C4 settle to their final steady-state values.
The propagation time is equal to the propagation delay of each adder block, multiplied by the
number of adder blocks in the circuit. For example, if each full adder stage has a propagation
delay of 20 nanoseconds, then S3 will reach its final correct value after 60 (20 × 3)
nanoseconds. The situation gets worse if we extend the number of stages to add more bits.
Carry Look-ahead Adder :
A carry look-ahead adder reduces the propagation delay by introducing more complex
hardware. In this design, the ripple carry design is suitably transformed such that the carry logic
over fixed groups of bits of the adder is reduced to two-level logic. Let us discuss the design in
detail.
Consider the full adder circuit shown above with its corresponding truth table. We define two
variables, ‘carry generate’ Gi and ‘carry propagate’ Pi, as
Gi = Ai · Bi
Pi = Ai ⊕ Bi
The sum output and carry output can then be expressed in terms of carry generate Gi and carry
propagate Pi as
Si = Pi ⊕ Ci
Ci+1 = Gi + Pi · Ci
where Gi produces a carry when both Ai and Bi are 1, regardless of the input carry, while Pi is
associated with the propagation of the carry from Ci to Ci+1.
The carry output Boolean function of each stage in a 4-stage carry look-ahead adder can be
expressed as
C2 = G1 + P1C1
C3 = G2 + P2G1 + P2P1C1
C4 = G3 + P3G2 + P3P2G1 + P3P2P1C1
From the above Boolean equations we can observe that C4 does not have to wait for C3 and C2
to propagate; C4 is produced at the same time as C3 and C2. Since the Boolean
expression for each carry output is a sum of products, each can be implemented with one
level of AND gates followed by an OR gate.
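This two-level carry logic can be checked with a short Python sketch that computes C2–C4 directly from the generate and propagate signals and cross-checks them against the ripple recurrence Ci+1 = Gi + Pi·Ci (the bit-list interface is illustrative, not from the source):

```python
def cla_carries(A, B, C1):
    """Compute C2..C4 of a 4-bit carry look-ahead adder.
    A and B are bit lists [A1, A2, A3, A4]; C1 is the carry-in."""
    G = [a & b for a, b in zip(A, B)]   # Gi = Ai AND Bi (carry generate)
    P = [a ^ b for a, b in zip(A, B)]   # Pi = Ai XOR Bi (carry propagate)
    C2 = G[0] | (P[0] & C1)
    C3 = G[1] | (P[1] & G[0]) | (P[1] & P[0] & C1)
    C4 = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & C1)
    return C2, C3, C4

# Cross-check against the ripple recurrence Ci+1 = Gi + Pi*Ci.
from itertools import product
for bits in product([0, 1], repeat=9):
    A, B, C1 = list(bits[0:4]), list(bits[4:8]), bits[8]
    ripple, C = [], C1
    for a, b in zip(A, B):
        C = (a & b) | ((a ^ b) & C)
        ripple.append(C)
    assert cla_carries(A, B, C1) == tuple(ripple[0:3])
print("carry look-ahead matches ripple on all 512 input combinations")
```

The exhaustive comparison succeeds because the look-ahead equations are exactly the ripple recurrence with each Ci substituted in, flattened to two levels of logic.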
The implementation of the three Boolean functions for the carry outputs (C2, C3 and C4) of a
carry look-ahead carry generator is shown in the figure below.
Time Complexity Analysis :
We could think of a carry look-ahead adder as made up of two “parts”:
1. The part that computes the carry for each bit.
2. The part that adds the input bits and the carry for each bit position.
The log(n) complexity arises from the part that generates the carry, not the circuit that adds the
bits.
Now, for the generation of the n-th carry bit, we need to perform an AND between (n + 1)
inputs. The complexity of the adder comes down to how we perform this AND operation. If we
have AND gates, each with a fan-in (number of inputs accepted) of k, then we can find the AND
of all the bits in log_k(n + 1) time. This is represented in asymptotic notation as Θ(log n).
Binary Multiplication :
❖ In the multiplication process we are considering successive bits of the multiplier, least
significant bit first.
❖ If the multiplier bit is 1, the multiplicand is copied down else 0’s are copied down.
❖ The numbers copied down in successive lines are shifted one position to the left from
the previous number.
❖ Finally, the numbers are added and their sum forms the product.
❖ The sign of the product is determined from the signs of the multiplicand and multiplier.
If they are alike, the sign of the product is positive; otherwise it is negative.
Hardware Implementation :
The following components are required for the hardware implementation of the multiplication
algorithm :
1. Registers:
Two registers, B and Q, are used to store the multiplicand and multiplier respectively.
Register A is used to store the partial product during multiplication.
The Sequence Counter register (SC) is used to store the number of bits in the multiplier.
2. Flip-flops:
To store the sign bits of the registers we require three flip-flops (A sign, B sign and Q sign).
Flip-flop E is used to store the carry bit generated during partial-product addition.
3. Complementer and parallel adder:
This hardware unit is used in calculating the partial product, i.e., it performs the additions required.
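The register transfers described above can be sketched in software. The register names (A, B, Q, E, SC) follow the text; this sketch handles unsigned operands only, so the sign flip-flops are omitted:

```python
def shift_add_multiply(multiplicand, multiplier, n):
    """Unsigned shift-add multiplication with n-bit registers.
    B holds the multiplicand, Q the multiplier, A the partial product,
    E the carry flip-flop, SC the sequence counter."""
    mask = (1 << n) - 1
    B = multiplicand & mask
    Q = multiplier & mask
    A = 0                       # partial product starts at zero
    E = 0                       # carry flip-flop
    SC = n                      # number of bits in the multiplier
    while SC > 0:
        if Q & 1:               # if the low bit of Q is 1, add the multiplicand
            total = A + B
            E = (total >> n) & 1
            A = total & mask
        # shift E, A, Q right one place (E -> MSB of A, LSB of A -> MSB of Q)
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
        SC -= 1
    return (A << n) | Q         # product sits in the A:Q register pair

print(shift_add_multiply(9, 13, 4))   # 117
```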
Booth’s Multiplication Algorithm :
Step 1 − Determine the values of A, S and P, each of which has a bit length of (x + y + 1),
where x is the number of bits in the multiplicand m and y is the number of bits in the
multiplier r.
• For A, the MSB is filled with the value of m, and the remaining (y+1) bits are filled
with zeros.
• For S, the MSB is filled with the value of (-m) in two’s complement notations, and the
remaining (y + 1) bits are filled with zeros.
• For P, the first x bits are filled with zeros. To the right of this, the value of r is
appended. Then, the LSB is filled with a zero.
Step 2 − The LSBs of P are determined.
• In case they are 01, find the value of P + A, and ignore the overflow or carry if any.
• In case they are 10, find the value of P + S, and ignore the overflow or carry if any.
• In case they are 00, use P directly in the next step.
• In case they are 11, use P directly in the next step.
Step 3 − The value obtained in the second step is arithmetically shifted by one place to the
right. P is now assigned the new value.
Step 4 − Step 2 and Step 3 are repeated y times.
Step 5 − The LSB is dropped from P, which gives the product of m and r.
Example − Find the product of 3 × (-4), where m = 3, r = -4, x = 4 and y = 4.
A = 001100000
S = 110100000
P = 000011000
The loop has to be performed four times since y = 4.
P = 000011000
Here, the last two bits are 00.
Therefore, P = 000001100 after performing the arithmetic right shift.
P = 000001100
Here, the last two bits are 00.
Therefore, P = 000000110 after performing the arithmetic right shift.
P = 000000110
Here, the last two bits are 10.
Therefore, P = P + S, which is 110100110.
P = 111010011 after performing the arithmetic right shift.
P = 111010011
Here, the last two bits are 11.
Therefore, P = 111101001 after performing the arithmetic right shift.
The product is 11110100 after dropping the LSB from P.
11110100 is the binary representation of -12.
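The steps of the example can be reproduced with a short Python implementation of the procedure above; the function name and bit-packing details are illustrative:

```python
def booth_multiply(m, r, x, y):
    """Booth's algorithm as described above: A = m followed by zeros,
    S = -m followed by zeros, P = zeros, r, then a zero LSB,
    each (x + y + 1) bits wide; the add/shift step is repeated y times."""
    n = x + y + 1
    mask = (1 << n) - 1
    A = (m & ((1 << x) - 1)) << (y + 1)     # m in the top x bits, then zeros
    S = ((-m) & ((1 << x) - 1)) << (y + 1)  # -m in two's complement, then zeros
    P = (r & ((1 << y) - 1)) << 1           # x zeros, then r, then a zero LSB
    for _ in range(y):
        if (P & 0b11) == 0b01:              # last two bits 01: P = P + A
            P = (P + A) & mask
        elif (P & 0b11) == 0b10:            # last two bits 10: P = P + S
            P = (P + S) & mask
        # arithmetic right shift by one place (the sign bit is replicated)
        P = (P >> 1) | (P & (1 << (n - 1)))
    product = P >> 1                        # drop the LSB of P
    if product & (1 << (x + y - 1)):        # reinterpret as a signed value
        product -= 1 << (x + y)
    return product

print(booth_multiply(3, -4, 4, 4))   # -12
```

Tracing the call with m = 3, r = -4 reproduces every intermediate value of P shown in the worked example above.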
Assuming A = a1a0 and B= b1b0, the various bits of the final product term P can be written
as:-
1. P(0)= a0b0
2. P(1)=a1b0 + b1a0
3. P(2) = a1b1 + c1 where c1 is the carry generated during the addition for the P(1) term.
4. P(3) = c2 where c2 is the carry generated during the addition for the P(2) term.
For the above multiplication, an array of four AND gates is required to form the various product
terms like a0b0 etc. and then an adder array is required to calculate the sums involving the
various product terms and carry combinations mentioned in the above equations in order to get
the final Product bits.
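The four product-bit equations can be realized exactly as stated, with four AND gates and two half adders; a gate-level sketch in Python (function names are illustrative):

```python
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def array_multiply_2x2(a1, a0, b1, b0):
    """2x2 array multiplier: four AND gates plus two half adders."""
    p0 = a0 & b0                            # P(0) = a0b0, needs no adder
    s1, c1 = half_adder(a1 & b0, a0 & b1)   # P(1) = a1b0 + a0b1
    s2, c2 = half_adder(a1 & b1, c1)        # P(2) = a1b1 + c1
    p3 = c2                                 # P(3) = c2
    return p3, s2, s1, p0                   # product bits, MSB first

# Exhaustive check against ordinary integer multiplication
for a in range(4):
    for b in range(4):
        bits = array_multiply_2x2((a >> 1) & 1, a & 1, (b >> 1) & 1, b & 1)
        value = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]
        assert value == a * b
print("2x2 array multiplier verified for all inputs")
```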
1. The first partial product is formed by multiplying a0 by b1b0. The multiplication of two
bits such as a0 and b0 produces a 1 if both bits are 1; otherwise, it produces 0. This is
identical to an AND operation and can be implemented with an AND gate.
2. The first partial product is formed by means of two AND gates.
3. The second partial product is formed by multiplying a1 by b1b0 and is shifted one position
to the left.
4. The above two partial products are added with two half-adder(HA) circuits. Usually there
are more bits in the partial products and it will be necessary to use full-adders to produce
the sum.
5. Note that the least significant bit of the product does not have to go through an adder since
it is formed by the output of the first AND gate.
A combinational circuit binary multiplier with more bits can be constructed in similar fashion.
A bit of the multiplier is ANDed with each bit of the multiplicand in as many levels as there
are bits in the multiplier. The binary output in each level of AND gates is added in parallel with
the partial product of the previous level to form a new partial product. The last level produces
the product. For j multiplier bits and k multiplicand bits we need j × k AND gates and (j − 1)
k-bit adders to produce a product of j + k bits.
Unsigned representation:
For example, fixed<8,3> signifies an 8-bit fixed-point number, the rightmost 3 bits of which
are fractional.
Representation of a real number:
00010.110₂
= 1 × 2^1 + 1 × 2^-1 + 1 × 2^-2
= 2 + 0.5 + 0.25
= 2.75
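The fixed<8,3> interpretation amounts to dividing the stored integer by 2^3; a small sketch (the helper function is hypothetical, not a standard API):

```python
def fixed_to_real(bits, frac_bits):
    """Interpret an unsigned bit string as a fixed-point number whose
    rightmost frac_bits bits are fractional."""
    return int(bits, 2) / (2 ** frac_bits)

# fixed<8,3>: 8 bits total, the rightmost 3 of which are fractional
print(fixed_to_real("00010110", 3))   # 00010.110 in binary = 2.75
```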
Signed representation:
Negative integers in binary number systems must be encoded using signed number
representations. In mathematics, negative numbers are denoted by a minus sign (“ -“) before
them. In contrast, numbers are exclusively represented as bit sequences in computer hardware,
with no additional symbols.
Signed binary numbers (+ve or -ve) can be represented in one of three ways:
1. Sign-Magnitude form
2. 1’s complement form
3. 2’s complement form
Sign-Magnitude form: In sign-magnitude form, the number’s sign is represented by the MSB
(Most Significant Bit also called as Leftmost Bit), while its magnitude is shown by the
remaining bits (In the case of 8-bit representation Leftmost bit is the sign bit and remaining
bits are magnitude bit).
55₁₀ = 00110111₂
−55₁₀ = 10110111₂
1’s complement form: By complementing each bit of a signed binary number, the 1’s
complement of the number is derived. Complementing a positive number in this way yields the
representation of the corresponding negative number, and complementing a negative number
yields the positive one.
55₁₀ = 00110111₂
−55₁₀ = 11001000₂
2’s complement form: By adding one to a signed binary number’s 1’s complement, the
number’s 2’s complement is obtained. The 2’s complement of a positive number thus
represents the corresponding negative number, and the 2’s complement of a negative number
yields the positive one.
−55₁₀ = 11001000 + 1 (1’s complement + 1 = 2’s complement)
−55₁₀ = 11001001₂
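The three 8-bit encodings of −55 can be reproduced with a short Python sketch (function names are illustrative):

```python
def sign_magnitude(n, bits=8):
    """MSB is the sign bit; the remaining bits hold the magnitude."""
    sign = 1 if n < 0 else 0
    return format((sign << (bits - 1)) | abs(n), f'0{bits}b')

def ones_complement(n, bits=8):
    """Negative values: every bit of the magnitude is flipped."""
    mag = format(abs(n), f'0{bits}b')
    return ''.join('1' if c == '0' else '0' for c in mag) if n < 0 else mag

def twos_complement(n, bits=8):
    """1's complement plus one; masking with 2^bits - 1 achieves this."""
    return format(n & ((1 << bits) - 1), f'0{bits}b')

print(sign_magnitude(-55))    # 10110111
print(ones_complement(-55))   # 11001000
print(twos_complement(-55))   # 11001001
```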