CO unit-3

The document discusses computer arithmetic, focusing on addition, subtraction, multiplication, and division algorithms, including the Booth multiplication algorithm and floating-point arithmetic operations. It explains the hardware implementation for these operations using signed-magnitude and signed-2's complement representations, detailing the necessary registers and algorithms for each arithmetic function. Additionally, it addresses potential overflow conditions in division operations and the importance of detecting these in hardware or software.


UNIT-3

Computer Arithmetic: Addition and Subtraction, Multiplication Algorithms, Booth Multiplication Algorithm, Division Algorithms, Floating-Point Arithmetic Operations
Introduction
• Arithmetic instructions in digital computers manipulate data to produce results necessary for the
solution of computational problems.
• These instructions perform arithmetic calculations and are responsible for the bulk of activity
involved in processing data in a computer.
• The four basic arithmetic operations are addition, subtraction, multiplication and division. From these four basic operations, it is possible to formulate other arithmetic functions and solve scientific problems by means of numerical analysis methods.
• An arithmetic processor is the part of a processor unit that executes arithmetic operations. The data type assumed to reside in processor registers during the execution of an arithmetic instruction is specified in the definition of the instruction. An arithmetic instruction may specify binary or decimal data, and in each case the data may be in fixed-point or floating-point form.
• We must be thoroughly familiar with the sequence of steps to be followed in order to carry out the
operation and achieve a correct result. The solution to any problem that is stated by a finite
number of well-defined procedural steps is called an algorithm.
• Usually, an algorithm will contain a number of procedural steps which are dependent on results of
previous steps. A convenient method for presenting algorithms is a flowchart.
Addition and Subtraction
• As we have discussed, there are three ways of representing negative fixed-point binary numbers:
signed-magnitude, signed-1's complement, or signed-2's complement. Most computers use the
signed-2's complement representation when performing arithmetic operations with integers.
• i. Addition and Subtraction with Signed-Magnitude Data: When the signed numbers are added or
subtracted, we find that there are eight different conditions to consider, depending on the sign of
the numbers and the operation performed. These conditions are listed in the first column of Table
shown below.
• Algorithm: (Addition with Signed-Magnitude Data)
• i. When the signs of A and B are identical, add the two magnitudes and attach the sign of A to the result.
• ii. When the signs of A and B are different, compare the magnitudes and subtract the smaller
number from the larger. Choose the sign of the result to be the same as A if A > B or the
complement of the sign of A if A < B.
• iii. If the two magnitudes are equal, subtract B from A and make the sign of the result positive.
• Algorithm: (Subtraction with Signed-Magnitude Data)
• i. When the signs of A and B are different, add the two magnitudes and attach the sign of A to
the result.
• ii. When the signs of A and B are identical, compare the magnitudes and subtract the smaller
number from the larger. Choose the sign of the result to be the same as A if A > B or the
complement of the sign of A if A < B.
• iii. If the two magnitudes are equal, subtract B from A and make the sign of the result positive.
The two algorithms are similar except for the sign comparison. The procedure to be followed
for identical signs in the addition algorithm is the same as for different signs in the subtraction
algorithm, and vice versa.
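The two algorithms above can be sketched in software. The following is an illustrative Python model (not the hardware itself): signs are 0 for plus and 1 for minus, and magnitudes are plain integers rather than fixed-width registers.

```python
def sm_add(a_sign, a_mag, b_sign, b_mag):
    """Addition with signed-magnitude data (sign: 0 = +, 1 = -)."""
    if a_sign == b_sign:
        # Identical signs: add the magnitudes, attach the sign of A.
        return a_sign, a_mag + b_mag
    # Different signs: subtract the smaller magnitude from the larger.
    if a_mag > b_mag:
        return a_sign, a_mag - b_mag
    if a_mag < b_mag:
        return b_sign, b_mag - a_mag
    # Equal magnitudes: the result is made positive.
    return 0, 0

def sm_subtract(a_sign, a_mag, b_sign, b_mag):
    # Subtraction is addition with the sign of B complemented,
    # which swaps the "identical" and "different" cases above.
    return sm_add(a_sign, a_mag, 1 - b_sign, b_mag)
```

For example, `sm_add(0, 5, 1, 3)` models (+5) + (-3) and yields a positive magnitude of 2.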
Hardware Implementation
• To implement the two arithmetic operations with hardware, it is first necessary that the two
numbers be stored in registers.
• i. Let A and B be two registers that hold the magnitudes of the numbers, and AS and BS be two flip-flops that
hold the corresponding signs.
• ii. The result of the operation may be transferred to a third register; however, a saving is achieved if the result is transferred into A and AS. Thus A and AS together form an accumulator register.
• Consider now the hardware implementation of the algorithms above
• First, a parallel-adder is needed to perform the microoperation A + B.
• Second, a comparator circuit is needed to establish if A > B, A = B, or A < B.
• Third, two parallel-subtractor circuits are needed to perform the microoperations A - B and B - A. The sign
relationship can be determined from an exclusive-OR gate with AS and BS as inputs.
The figure below shows a block diagram of the hardware for implementing the addition and subtraction operations.
It consists of registers A and B and sign flip-flops
AS and BS.
Subtraction is done by adding A to the 2's complement of B. The output carry is transferred to flip-flop E, where it can be checked to determine the relative magnitudes of the two numbers.
The add-overflow flip-flop AVF holds the
overflow bit when A and B are added.
The complementer provides an output of B or the
complement of B depending on the state of the
mode control M.
When M = 0, the output of B is transferred to the adder, the input carry is 0, and the output of the adder is equal to the sum A + B.
When M = 1, the 1's complement of B is applied to the adder, the input carry is 1, and the output S = A + B' + 1. This is equal to A plus the 2's complement of B, which is equivalent to the subtraction A - B.
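The complementer-and-adder arrangement can be modeled in a few lines. This is a sketch assuming an n-bit datapath; the function name and the default width of 8 bits are illustrative, not part of the original hardware description.

```python
def add_sub(A, B, M, n=8):
    """n-bit parallel adder with a complementer on the B input.

    M = 0: S = A + B (input carry 0).
    M = 1: B is 1's-complemented and the input carry is 1, so
           S = A + B' + 1, i.e. A - B in 2's-complement arithmetic.
    Returns (E, S): the output carry and the n-bit sum.
    """
    mask = (1 << n) - 1
    b_in = (B ^ mask) if M else B   # complementer controlled by M
    total = A + b_in + M            # the input carry equals M
    return (total >> n) & 1, total & mask
```

With M = 1, the output carry E = 1 indicates A >= B, which is exactly how the relative magnitudes are checked in flip-flop E.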
Hardware Algorithm
ii. Addition and Subtraction with Signed-2's Complement Data
The register configuration for the hardware
implementation is shown in the below Figure(a).
We name the A register AC (accumulator) and the B register BR. The leftmost bit in AC and in BR represents the sign bit of the number.
The two sign bits are added or subtracted together
with the other bits in the complementer and parallel
adder.
The overflow flip-flop V is set to 1 if there is an
overflow. The output carry in this case is discarded.
The algorithm for adding and subtracting two binary numbers in signed-2's complement representation is shown in the flowchart of Figure(b).
The sum is obtained by adding the contents of
AC and BR (including their sign bits). The
overflow bit V is set to 1 if the exclusive-OR
of the last two carries is 1, and it is cleared to
0 otherwise.
The subtraction operation is accomplished by
adding the content of AC to the 2's
complement of BR.
Comparing this algorithm with its signed-magnitude counterpart, we note that it is much simpler to add and subtract numbers if negative numbers are maintained in signed-2's complement representation.
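A minimal model of this addition, with V computed as the exclusive-OR of the carries into and out of the sign position, might look as follows (the register width n is an assumed parameter, and values are held as n-bit unsigned patterns):

```python
def add_2c(AC, BR, n=8):
    """Add two n-bit signed-2's-complement numbers.

    V is the exclusive-OR of the carry into the sign position and the
    carry out of it; the carry out of the sign bit is discarded.
    """
    mask = (1 << n) - 1
    # Carry into the sign position: add the low n-1 bits only.
    c_in = ((AC & (mask >> 1)) + (BR & (mask >> 1))) >> (n - 1)
    total = AC + BR
    c_out = (total >> n) & 1        # carry out of the sign position
    V = c_in ^ c_out
    return total & mask, V
```

For example, adding the 8-bit patterns for +100 and +100 sets V = 1, since +200 is outside the 8-bit signed range.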
Multiplication Algorithms
• Multiplication of two fixed-point binary numbers in signed-magnitude representation is done with paper and pencil by a process of successive shift-and-add operations. This process is best illustrated with a numerical example.
The process of multiplication:
• It consists of looking at successive bits of the multiplier, least significant bit first.
• If the multiplier bit is a 1, the multiplicand is copied down; otherwise, zeros are copied down.
• The numbers copied down in successive lines are shifted one position to the left from the previous
number.
• Finally, the numbers are added and their sum forms the product.
• The sign of the product is determined from the signs of the multiplicand and multiplier. If they are alike, the sign of the product is positive. If they are unlike, the sign of the product is negative.
Hardware Implementation for Signed-Magnitude Data
The registers A, B and other equipment are shown in Figure (a).
The multiplier is stored in the Q register and its sign in Qs.
The sequence counter SC is initially set to a number equal to the
number of bits in the multiplier.
The counter is decremented by 1 after forming each partial product.
When the content of the counter reaches zero, the product is formed
and the process stops.
Initially, the multiplicand is in register B and the multiplier in Q. Their corresponding signs are in Bs and Qs, respectively.
The sum of A and B forms a partial product which is transferred to
the EA register.
Both partial product and multiplier are shifted to the right. This
shift will be denoted by the statement shr EAQ to designate the
right shift.
The least significant bit of A is shifted into the most significant
position of Q, the bit from E is shifted into the most significant
position of A, and 0 is shifted into E. After the shift, one bit of the
partial product is shifted into Q, pushing the multiplier bits one
position to the right.
In this manner, the rightmost flip-flop in register Q, designated by
Qn, will hold the bit of the multiplier, which must be inspected
next.
Hardware Algorithm
Initially, the multiplicand is in B and the multiplier in Q. Their
corresponding signs are in Bs and Qs, respectively.
The signs are compared, and both A and Q are set to correspond
to the sign of the product since a double-length product will be
stored in registers A and Q.
Registers A and E are cleared and the sequence counter SC is
set to a number equal to the number of bits of the multiplier.
After the initialization, the low-order bit of the multiplier in Qn
is tested.
i. If it is 1, the multiplicand in B is added to the present
partial product in A .
ii. If it is 0 , nothing is done. Register EAQ is then shifted
once to the right to form the new partial product.
The sequence counter is decremented by 1 and its new value
checked. If it is not equal to zero, the process is repeated and a
new partial product is formed. The process stops when SC = 0.
The final product is available in both A and Q, with A holding
the most significant bits and Q holding the least significant bits.
A flowchart of the hardware multiply algorithm is shown in the
below figure (l).
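The hardware algorithm above can be sketched as a Python loop over the E, A, Q registers. This models only the magnitude multiplication; sign handling and the initialization of Qs are omitted for brevity, and the register width n is a parameter:

```python
def multiply(B, Q, n):
    """Shift-and-add multiply of two n-bit magnitudes.

    A accumulates partial products, EAQ is shifted right once per
    multiplier bit, and the double-length product ends up in A (most
    significant half) and Q (least significant half).
    """
    A, E = 0, 0
    SC = n
    while SC > 0:
        if Q & 1:                       # Qn = 1: add the multiplicand
            s = A + B
            E, A = s >> n, s & ((1 << n) - 1)
        # shr EAQ: E -> msb of A, lsb of A -> msb of Q, 0 -> E
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
        SC -= 1                         # decrement sequence counter
    return A, Q
```

Multiplying the 5-bit magnitudes 23 and 19 yields the 10-bit product 437 split across A and Q.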
Booth Multiplication Algorithm:(multiplication of 2’s complement data):
• Booth algorithm gives a procedure for multiplying binary integers in signed-2's complement representation.
• Booth algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the
multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged according to the
following rules:
• 1. The multiplicand is subtracted from the partial product upon encountering the first least significant 1 in a string of 1's in
the multiplier.
• 2. The multiplicand is added to the partial product upon encountering the first 0 (provided that there was a previous 1) in a string of 0's in the multiplier.
• 3. The partial product does not change when the multiplier bit is identical to the previous multiplier bit.
• The hardware implementation of Booth algorithm requires the
register configuration shown in figure (n).
• This is similar to the addition and subtraction hardware, except that the sign bits are not separated from the rest of the registers.
• To show this difference, we rename registers A, B, and Q, as
AC, BR, and QR, respectively.
• Qn designates the least significant bit of the multiplier in
register QR.
• An extra flip-flop Qn+1, is appended to QR to facilitate a
double bit inspection of the multiplier
Hardware Algorithm for Booth Multiplication:
• AC and the appended bit Qn+1 are initially cleared to 0
and the sequence counter SC is set to a number n equal to
the number of bits in the multiplier.
• The two bits of the multiplier in Qn and Qn+1 are
inspected.
• i. If the two bits are equal to 10, it means that the first 1 in
a string of 1's has been encountered. This requires a
subtraction of the multiplicand from the partial product in
AC.
• ii. If the two bits are equal to 01, it means that the first 0 in
a string of 0's has been encountered. This requires the
addition of the multiplicand to the partial product in AC.
• iii. When the two bits are equal, the partial product does
not change.
• iv. The next step is to shift right the partial product and the
multiplier (including bit Qn+1).
• This is an arithmetic shift right (ashr) operation which
shifts AC and QR to the right and leaves the sign bit in AC
unchanged. The sequence counter is decremented and the
computational loop is repeated n times.
Example: the multiplication (-9) x (-13) = +117 is shown below. Note that the multiplier in QR is negative and that the multiplicand in BR is also negative. The 10-bit product appears in AC and QR and is positive.
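As a sketch, the Booth loop can be modeled in Python with the registers held as n-bit unsigned patterns; the arithmetic shift right (ashr) replicates the sign bit of AC. The function name and representation are illustrative:

```python
def booth_multiply(BR, QR, n):
    """Booth multiplication of two n-bit signed-2's-complement numbers."""
    mask = (1 << n) - 1
    AC, Qn1, SC = 0, 0, n
    while SC > 0:
        pair = (QR & 1, Qn1)            # inspect Qn and Qn+1
        if pair == (1, 0):              # first 1 of a string: AC <- AC - BR
            AC = (AC - BR) & mask
        elif pair == (0, 1):            # first 0 of a string: AC <- AC + BR
            AC = (AC + BR) & mask
        # pair equal: the partial product does not change
        # ashr AC:QR:Qn+1 -- the sign bit of AC is left unchanged
        Qn1 = QR & 1
        QR = (QR >> 1) | ((AC & 1) << (n - 1))
        sign = (AC >> (n - 1)) & 1
        AC = (AC >> 1) | (sign << (n - 1))
        SC -= 1
    return AC, QR                       # 2n-bit product in AC:QR
```

Running it on the 5-bit patterns for -9 (BR = 10111) and -13 (QR = 10011) reproduces the example: AC:QR = 0001110101, which is +117.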
Division Algorithms
• Division of two fixed-point binary numbers in signed-magnitude representation is done with paper
and pencil by a process of successive compare, shift, and subtract operations.
• The division process is illustrated by a numerical example in the below figure (q).
• The divisor B consists of five bits and the dividend A consists of ten bits. The five most
significant bits of the dividend are compared with the divisor. Since the 5-bit number is smaller
than B, we try again by taking the sixth most significant bits of A and compare this number with
B. The 6-bit number is greater than B, so we place a 1 for the quotient bit. The divisor is then
shifted once to the right and subtracted from the dividend.
• The difference is called a partial remainder because the division could have stopped here to obtain
a quotient of 1 and a remainder equal to the partial remainder. The process is continued by
comparing a partial remainder with the divisor.
• If the partial remainder is greater than or equal to the divisor, the quotient bit is equal to 1. The
divisor is then shifted right and subtracted from the partial remainder.
• If the partial remainder is smaller than the divisor, the quotient bit is 0 and no subtraction is
needed. The divisor is shifted once to the right in any case. Note that the result gives both a
quotient and a remainder.
Hardware Implementation for Signed-Magnitude Data
• The hardware for implementing the division operation is identical to that required for
multiplication.
• The divisor is stored in the B register and the double-length dividend is stored in registers A and
Q. The dividend is shifted to the left and the divisor is subtracted by adding its 2's complement
value. The information about the relative magnitude is available in E.
• If E = 1, it signifies that A≥B. A quotient bit 1 is inserted into Q, and the partial remainder is
shifted to the left to repeat the process.
• If E = 0, it signifies that A < B so the quotient in Qn remains a 0. The value of B is then added to
restore the partial remainder in A to its previous value. The partial remainder is shifted to the left
and the process is repeated again until all five quotient bits are formed.
• Note that while the partial remainder is shifted left, the quotient bits are shifted also and after five
shifts, the quotient is in Q and the final remainder is in A
• The sign of the quotient is determined from the signs of the dividend and the divisor. If the two signs are alike, the sign of the quotient is plus. If they are unalike, the sign is minus. The sign of the remainder is the same as the sign of the dividend.
Divide Overflow
• The division operation may result in a quotient with an overflow. This is not a problem when working with
paper and pencil but is critical when the operation is implemented with hardware.
• This is because the length of registers is finite and will not hold a number that exceeds the standard length.
• To see this, consider a system that has 5-bit registers. We use one register to hold the divisor and two registers to hold the dividend. From the example shown above, we note that the quotient will consist of six bits if the five most significant bits of the dividend constitute a number greater than the divisor. The quotient is to be stored in a standard 5-bit register, so the overflow bit will require one more flip-flop for storing the sixth bit.
• This divide-overflow condition must be avoided in normal computer operations because the entire quotient
will be too long for transfer into a memory unit that has words of standard length, that is, the same as the
length of registers.
• This condition detection must be included in either the hardware or the software of the computer, or in a
combination of the two.
• When the dividend is twice as long as the divisor,
• i. A divide-overflow condition occurs if the high-order half bits of the dividend constitute a number greater
than or equal to the divisor.
• ii. A division by zero must be avoided. This occurs because any dividend will be greater than or equal to a divisor which is equal to zero. The overflow condition is usually detected when a special flip-flop is set. We will call it a divide-overflow flip-flop and label it DVF.
Hardware Algorithm:
1. The dividend is in A and Q and the divisor in B. The sign of the result is transferred into Qs to be part of the quotient. A constant is set into the sequence counter SC to specify the number of bits in the quotient.
2. A divide-overflow condition is tested by subtracting the
divisor in B from half of the bits of the dividend stored in
A. If A ≥ B, the divide-overflow flip-flop DVF is set and
the operation is terminated prematurely. If A < B, no divide
overflow occurs so the value of the dividend is restored by
adding B to A.
3. The division of the magnitudes starts by shifting the dividend in AQ to the left with the high-order bit shifted into E. If the bit shifted into E is 1, we know that EA > B because EA consists of a 1 followed by n-1 bits while B consists of only n-1 bits. In this case, B must be subtracted from EA and 1 inserted into Qn for the quotient bit.
4. If the shift-left operation inserts a 0 into E, the divisor is subtracted by adding its 2's complement value and the carry is transferred into E. If E = 1, it signifies that A ≥ B; therefore, Qn is set to 1. If E = 0, it signifies that A < B and the original number is restored by adding B to A. In the latter case we leave a 0 in Qn.
• This process is repeated with registers EAQ. After n times, the quotient is formed in register Q and the remainder is found in register A.
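A software sketch of this restoring-division loop (magnitudes only, with divide overflow assumed already ruled out so that A < B initially) might look as follows:

```python
def divide(A, Q, B, n):
    """Restoring division of a 2n-bit dividend A:Q by an n-bit divisor B.

    Models the hardware loop: shift EAQ left, subtract B by adding its
    2's complement, and restore when the carry shows A < B.
    Returns (quotient, remainder).
    """
    mask = (1 << n) - 1
    for _ in range(n):
        # shl EAQ: high bit of A goes to E, high bit of Q enters A
        E = (A >> (n - 1)) & 1
        A = ((A << 1) & mask) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & mask
        if E == 1:
            # EA is a 1 followed by n-1 bits, so EA > B: subtract
            A = (A - B) & mask
            Q |= 1                       # quotient bit 1
        else:
            t = A + ((B ^ mask) + 1)     # A - B via 2's complement
            if (t >> n) & 1:             # carry E = 1 means A >= B
                A = t & mask
                Q |= 1
            # else: restore (A left unchanged), quotient bit stays 0
    return Q, A
```

Dividing the double-length dividend 437 (A = 13, Q = 21 with n = 5) by B = 23 leaves the quotient 19 in Q and remainder 0 in A.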
FLOATING POINT NUMBERS
• In many high-level programming languages we have a facility for specifying floating-point numbers. The
most common way is by a real declaration statement.
• High level programming languages must have a provision for handling floating-point arithmetic operations.
The operations are generally built in the internal hardware.
• If no hardware is available, the compiler must be designed with a package of floating-point software subroutines.
• Although the hardware method is more expensive, it is much more efficient than the software method.
Therefore, floating- point hardware is included in most computers and is omitted only in very small ones.
Basic Considerations:

• There are two parts of a floating-point number in a computer: a mantissa m and an exponent e. The two parts represent a number generated from multiplying m times a radix r raised to the value of e. Thus

• m x r^e

• The mantissa may be a fraction or an integer. The position of the radix point and the value of the radix r are not included in the registers. For example, assume a fraction representation and a radix of 10. The decimal number 537.25 is represented in a register with m = 53725 and e = 3 and is interpreted to represent the floating-point number

• .53725 x 10^3
• A floating-point number is said to be normalized if the most significant digit of the mantissa is nonzero. So the mantissa contains the maximum possible number of significant digits. We cannot normalize a zero because it does not have a nonzero digit. Zero is represented in floating-point by all 0's in the mantissa and exponent.
• Floating-point representation increases the range of numbers for a given register. Consider a computer with 48-bit words. Since one bit must be reserved for the sign, the range of fixed-point integer numbers will be ±(2^47 - 1), which is approximately ±10^14. The 48 bits can instead be used to represent a floating-point number with 36 bits for the mantissa and 12 bits for the exponent. Assuming fraction representation for the mantissa and taking the two sign bits into consideration, the range of numbers that can be represented is

• ±(1 - 2^-35) x 2^2047

• This number is derived from a fraction that contains 35 1's and an exponent of 11 bits (excluding its sign), since 2^11 - 1 = 2047. The largest number that can be accommodated is approximately 10^615. The mantissa that can be accommodated is 35 bits (excluding the sign), and if considered as an integer it can store a number as large as 2^35 - 1. This is approximately equal to 10^10, which is equivalent to a decimal number of 10 digits.
• Computers with shorter word lengths use two or more words to represent a floating-point number. An 8-bit microcomputer may use four words to represent one floating-point number: one 8-bit word is reserved for the exponent, and the 24 bits of the other three words are used for the mantissa.
• Arithmetic operations with floating-point numbers are more complicated than with fixed-point numbers. Their execution
also takes longer time and requires more complex hardware. Adding or subtracting two numbers requires first an
alignment of the radix point since the exponent parts must be made equal before adding or subtracting the mantissas. We
do this alignment by shifting one mantissa while its exponent is adjusted until it becomes equal to the other exponent.
Consider the sum of the following floating-point numbers:
• .5372400 x 10^2
• + .1580000 x 10^-1
• The second number has the smaller exponent, so its mantissa is shifted right three places to give .0001580 x 10^2; the mantissas can then be added, giving the sum .5373980 x 10^2.

• Floating-point multiplication and division need not do an alignment of the mantissas. Multiplying the two mantissas and adding the exponents forms the product. Dividing the mantissas and subtracting the exponents performs division.
• The operations done with the mantissas are the same as in fixed-point numbers, so the two can share the same registers and circuits. The operations performed with the exponents are compare and increment (for aligning the mantissas), add and subtract (for multiplication and division), and decrement (to normalize the result). We can represent the exponent in any one of three representations: signed-magnitude, signed-2's complement or signed-1's complement.
• Biased exponents have the advantage that they contain only positive numbers. It then becomes simpler to compare their relative magnitude without bothering about their signs. Another advantage is that the smallest possible biased exponent contains all zeros. The floating-point representation of zero is then a zero mantissa and the smallest possible exponent.
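A tiny illustration of the biased-exponent idea, assuming an excess-64 bias for a 7-bit exponent field (the bias value here is only an example, not taken from the text):

```python
BIAS = 64  # assumed excess-64 bias for a 7-bit exponent field

def to_biased(e):
    """Store a signed exponent e as the positive quantity e + BIAS."""
    return e + BIAS

# Biased exponents are all positive, so a plain unsigned comparison
# orders them correctly without examining any sign bits.
assert to_biased(-3) < to_biased(0) < to_biased(5)

# The smallest representable exponent is stored as all zeros, so a
# zero mantissa with an all-zero exponent field represents zero.
assert to_biased(-BIAS) == 0
```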
Register Configuration

• The register configuration for floating-point operations is shown in Fig. 4.13. As a rule, the same registers and adder used for fixed-point arithmetic are used for processing the mantissas. The difference lies in the way the exponents are handled.
• There are three registers: BR, AC, and QR. Each register is subdivided into two parts. The mantissa part has the same uppercase letter symbols as in fixed-point representation. The exponent part uses the corresponding lowercase letter symbol.
• Assume that each floating-point number has a mantissa in signed-magnitude representation and a biased exponent. Thus the AC has a mantissa whose sign is in As, and a magnitude that is in A. The diagram shows the most significant bit of A, labeled A1. The bit in this position must be a 1 to normalize the number. Note that the symbol AC represents the entire register, that is, the concatenation of As, A, and a.
• In a similar way, register BR is subdivided into Bs, B, and b, and QR into Qs, Q, and q. A parallel adder adds the two mantissas and loads the sum into A and the carry into E. A separate parallel adder can be used for the exponents. The exponents do not have a distinct sign bit because they are represented as a biased positive quantity. It is assumed that the floating-point numbers are so large that the chance of an exponent overflow is very remote, and so exponent overflow will be neglected. The exponents are also connected to a magnitude comparator that provides three binary outputs to indicate their relative magnitude.
• The numbers in the mantissa will be taken as fractions, so the binary point is assumed to reside to the left of the magnitude part. Integer representation for floating point causes certain scaling problems during multiplication and division. To avoid these problems, we adopt a fraction representation.

• The numbers in the registers should initially be normalized. After each arithmetic operation, the result
will be normalized. Thus all floating-point operands are always normalized.
Addition and Subtraction of Floating Point Numbers

• During addition or subtraction, the two floating-point operands are kept in AC and BR. The sum or difference is formed
in the AC. The algorithm can be divided into four consecutive parts:

1. Check for zeros.


2. Align the mantissas.
3. Add or subtract the mantissas
4. Normalize the result

• A floating-point number cannot be normalized if it is 0. If such a number is used in a computation, the result may also be zero. Instead of checking for zeros during the normalization process, we check for zeros at the beginning and terminate the process if necessary. The alignment of the mantissas must be carried out prior to their addition or subtraction. After the mantissas are added or subtracted, the result may be unnormalized. The normalization procedure ensures that the result is normalized before it is transferred to memory.
• If the magnitudes were subtracted, the result may be zero or may have an underflow. If the mantissa is equal to zero, the entire floating-point number in the AC is cleared to zero. Otherwise, the mantissa must have at least one bit that is equal to 1. The mantissa has an underflow if the most significant bit in position A1 is 0. In that case, the mantissa is shifted left and the exponent decremented. The bit in A1 is checked again and the process is repeated until A1 = 1. When A1 = 1, the mantissa is normalized and the operation is completed.
MULTIPLICATION
Basic Computer Organization and Design: Stored program concept,
computer Registers, common bus system, Computer instructions,
Timing and Control, Instruction cycle, Memory Reference Instructions,
Input–Output configuration and program Interrupt.
Stored Program Organization
• The simplest way to organize a computer is to have one processor register
and an instruction code format with two parts. The first part specifies the
operation to be performed and the second specifies an address.
• The memory address tells the control where to find an operand in
memory. This operand is read from memory and used as the data to be
operated on together with the data stored in the processor register.
• EX: For a memory unit with 4096 words, we need 12 bits to specify an address since 2^12 = 4096. If we store each instruction code in one 16-bit memory word, we have available four bits for the operation code (opcode) to specify one out of 16 possible operations, and 12 bits to specify the address of an operand.
• The control reads a 16-bit instruction from the program portion of
memory. It uses the 12-bit address part of the instruction to read a 16-bit
operand from the data portion of memory. It then executes the operation
specified by the operation code.
• Computers that have a single-processor register usually assign to it the
name accumulator and label it AC .
• The operation is performed with the memory operand and the content of
AC .
• If an operation in an instruction code does not need an operand from
memory, the rest of the bits in the instruction can be used for other
purposes. For example, operations such as clear AC, complement AC, and
increment AC operate on data stored in the AC register. They do not need
an operand from memory
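The instruction format just described can be modeled directly: a 16-bit word splits into a 4-bit opcode field and a 12-bit address field. A minimal sketch (the function name is illustrative):

```python
def decode(word):
    """Split a 16-bit instruction word into a 4-bit opcode
    and a 12-bit address, per the format described above."""
    opcode = (word >> 12) & 0xF
    address = word & 0xFFF
    return opcode, address

# A 4096-word memory needs 12 address bits since 2**12 == 4096.
assert 2 ** 12 == 4096
```

For instance, decoding the word 0x2F4A gives opcode 2 and address 0xF4A.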
Indirect Address
• When the second part of an instruction
code specifies an operand, the instruction
is said to have an immediate operand.
• When the second part specifies the
address of an operand, the instruction is
said to have a direct address.
• When the bits in the second part of the instruction designate an address of a memory word in which the address of the operand is found, the instruction is said to have an indirect address. One bit of the instruction code can be used to distinguish between a direct and an indirect address.
• An effective address is the address of the
operand
COMPUTER REGISTERS
• Computer instructions are normally stored in
consecutive memory locations and are executed
sequentially one at a time.
• The control reads an instruction from a specific
address in memory and executes it. It then continues
by reading the next instruction in sequence and
executes it, and so on.
• This type of instruction sequencing needs a counter to
calculate the address of the next instruction after
execution of the current instruction is completed.
• It is also necessary to provide a register in the control
unit for storing the instruction code after it is read from
memory.
• The computer needs processor registers for
manipulating data and a register for holding a memory
address.
• The registers available in the computer are shown in
the below figure (m) and table (f), a brief description
of their function and the number of bits that they
contain also given.
Common Bus System
• The basic computer has eight registers, a memory unit, and a control unit.
Paths must be provided to transfer information from one register to another
and between memory and registers.
• The number of wires will be excessive if connections are made between
the outputs of each register and the inputs of the other registers.
• A more efficient scheme for transferring information in a system with
many registers is to use a common bus.
• The connection of the registers and memory of the basic computer to a
common bus system is shown in the below figure (n)
• The outputs of seven registers and memory are connected to the common
bus. The specific output that is selected for the bus lines at any given time
is determined from the binary value of the selection variables S2, S1, and
S0.
• For example, the number along the output of DR is 3. The 16-bit outputs of DR are placed on the bus lines when S2S1S0 = 011, since this is the binary value of decimal 3.
• As another example, the memory places its 16-bit output onto the bus when the read input is activated and S2S1S0 = 111.
• The content of any register can be applied onto the bus and an operation
can be performed in the adder and logic circuit during the same clock
cycle. The clock transition at the end of the cycle transfers the content of
the bus into the designated destination register and the output of the adder
and logic circuit into AC.
• For example, the two microoperations DR ← AC and AC ← DR can be executed at the same time. This is done by placing the content of AC on the bus (with S2S1S0 = 100), enabling the LD (load) input of DR, transferring the content of DR through the adder and logic circuit into AC, and enabling the LD (load) input of AC, all during the same clock cycle.
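As a minimal sketch of the selection logic described above, the following Python fragment models which source drives the bus for a given S2S1S0 code. The register names follow the text, but the exact select-code assignments other than DR = 011 and memory = 111 are assumptions, not taken from figure (n):

```python
# Hypothetical select-code map for the common bus (codes for AR, PC, AC,
# IR, TR are assumed; DR = 011 and MEMORY = 111 follow the text).
BUS_SOURCES = {
    0b001: "AR",
    0b010: "PC",
    0b011: "DR",
    0b100: "AC",
    0b101: "IR",
    0b110: "TR",
    0b111: "MEMORY",
}

def bus_value(select, registers, memory_out):
    """Return the 16-bit value placed on the common bus for a select code."""
    source = BUS_SOURCES.get(select)
    if source is None:
        return 0                       # select = 000: no source drives the bus
    if source == "MEMORY":
        return memory_out & 0xFFFF     # memory output gated onto the bus
    return registers[source] & 0xFFFF

# Example: with S2S1S0 = 011 the 16-bit output of DR appears on the bus.
regs = {"AR": 0, "PC": 0, "DR": 0x1234, "AC": 0, "IR": 0, "TR": 0}
print(hex(bus_value(0b011, regs, 0)))
```

In hardware this selection is a bank of multiplexers rather than a lookup table; the sketch only shows the one-source-at-a-time behavior of the bus.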
Computer Instructions
• The basic computer has three types of instruction code formats,
• 1. Memory-reference instruction.
• 2. Register-reference instruction.
• 3. An input-output instruction.
• Each format has 16 bits. The operation code (opcode) part of the instruction contains three bits and the
meaning of the remaining 13 bits depends on the operation code encountered.
• The type of instruction is recognized by the computer control from the four bits in
positions 12 through 15 of the instruction.
• If the three opcode bits in positions 12 to 14 are not equal to 111, the instruction is
a memory-reference type and the bit in position 15 is taken as the addressing
mode I. A memory-reference instruction uses 12 bits to specify an address and one
bit to specify the addressing mode I. I = 0 for direct address and I = 1 for indirect
address.
• If the 3-bit opcode = 111, control then inspects the bit in position 15. If this bit =
0, the instruction is a register-reference type. These instructions use 16 bits to
specify an operation.
• If the bit I = 1, the instruction is an input-output type. These instructions also use
all 16 bits to specify an operation
• The hexadecimal code is equal to the equivalent hexadecimal number of the
binary code used for the instruction. By using the hexadecimal equivalent we
reduced the 16 bits of an instruction code to four digits with each hexadecimal
digit being equivalent to four bits.
• A) A memory-reference instruction has an address part of 12 bits. The address part is denoted by three x's, which stand for the three hexadecimal digits corresponding to the 12-bit address. The last bit of the instruction is designated by the symbol I.
i. When I = 0, the last four bits of an instruction have a hexadecimal digit equivalent from 0 (000) to 6 (110), since the last bit is 0.
ii. When I = 1, the hexadecimal digit equivalent of the last four bits of the instruction ranges from 8 (1000) to E (1110), since the last bit is 1.
• B) Register-reference instructions use 16 bits to specify an operation. The leftmost
four bits are always 0111, which is equivalent to hexadecimal 7. The other three
hexadecimal digits give the binary equivalent of the remaining 12 bits.
• C) The input-output instructions also use all 16 bits to specify an operation. The
last four bits are always 1111, equivalent to hexadecimal F.
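The classification rules above can be sketched as a small decoder. This is an illustrative Python fragment (the function name `classify` is an assumption), testing bits 12-14 for the opcode and bit 15 for I, exactly as the text describes:

```python
def classify(instr):
    """Classify a 16-bit basic-computer instruction word by its format."""
    opcode = (instr >> 12) & 0b111     # opcode bits, positions 12-14
    i_bit = (instr >> 15) & 1          # bit 15: addressing mode / type bit I
    if opcode != 0b111:
        # Memory-reference: 12-bit address plus direct/indirect mode bit.
        mode = "indirect" if i_bit else "direct"
        return ("memory-reference", mode, instr & 0x0FFF)
    if i_bit == 0:
        return ("register-reference",)   # hex form 7xxx
    return ("input-output",)             # hex form Fxxx

print(classify(0x1234))   # memory-reference, direct, address 0x234
print(classify(0x7800))   # register-reference
print(classify(0xF400))   # input-output
```

Note how the hexadecimal forms follow directly: the leftmost hex digit is 0-6 or 8-E for memory-reference instructions, 7 for register-reference, and F for input-output.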
Instruction Cycle
• A program residing in the memory unit of the computer consists of a sequence of instructions. The program is
executed in the computer by going through a cycle for each instruction.
• Each instruction cycle in turn is subdivided into a sequence of subcycles or phases. In the basic computer
each instruction cycle consists of the following phases:
• 1. Fetch an instruction from memory.
• 2. Decode the instruction.
• 3.Read the effective address from memory if the instruction has an indirect address.
• 4. Execute the instruction.
• Upon the completion of step 4, control goes back to step 1 to fetch, decode, and execute the next instruction. This process continues indefinitely unless a HALT instruction is encountered.
Fetch and Decode:
• Initially, the program counter PC is loaded with the address of the first instruction in the program.
• The sequence counter SC is cleared to 0, providing a decoded timing signal T0.
• After each clock pulse, SC is incremented by one, so that the timing signals go through a sequence T0, T1, T2, and so on.
• The microoperations for the fetch and decode phases can be specified by the following register transfer statements (reconstructed here from the description that follows):
T0: AR ← PC
T1: IR ← M[AR], PC ← PC + 1
T2: D0, ..., D7 ← Decode IR(12-14), AR ← IR(0-11), I ← IR(15)
• Since only AR is connected to the address inputs of memory, it is necessary to transfer the address from PC to AR during
the clock transition associated with timing signal T0.
• The instruction read from memory is then placed in the instruction register IR with the clock transition associated with
timing signal T1.
• At the same time, PC is incremented by one to prepare it for the address of the next instruction in the program.
• At time T2, the operation code in IR is decoded, the indirect bit is transferred to flip-flop I, and the address part of the
instruction is transferred to AR.
• Note that SC is incremented after each clock pulse to produce the sequence T0, T1, and T2
• The Figure shows how the first two register transfer statements are
implemented in the bus system.
• To provide the data path for the transfer of PC to AR we must
apply timing signal T0 to achieve the following connection:
• 1. Place the content of PC onto the bus by making the bus
selection inputs S2 S1 S0 equal to 010.
• 2. Transfer the content of the bus to AR by enabling the LD input
of AR. The next clock transition initiates the transfer from PC to
AR since T0 = 1. In order to implement the second statement
T1: IR ← M[AR], PC ← PC + 1
It is necessary to use timing signal T1 to provide the following
connections in the bus system.
1. Enable the read input of memory.
2. Place the content of memory onto the bus by making S2 S1 S0 =
111.
3. Transfer the content of the bus to IR by enabling the LD input of
IR.
4. Increment PC by enabling the INR input of PC.
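The T0 and T1 microoperations above can be mimicked in a short simulation sketch. The register names follow the text; the memory contents and widths shown are illustrative assumptions, not values from the source:

```python
def fetch(state, memory):
    """Simulate the T0 and T1 fetch microoperations of the basic computer."""
    # T0: AR <- PC  (PC drives the bus with S2S1S0 = 010, LD(AR) enabled)
    state["AR"] = state["PC"]
    # T1: IR <- M[AR], PC <- PC + 1
    #     (memory read, S2S1S0 = 111, LD(IR) and INR(PC) enabled)
    state["IR"] = memory[state["AR"]] & 0xFFFF
    state["PC"] = (state["PC"] + 1) & 0x0FFF   # PC holds a 12-bit address
    return state

mem = {0x010: 0x2345}                          # one instruction word in memory
s = fetch({"PC": 0x010, "AR": 0, "IR": 0}, mem)
print(hex(s["IR"]), hex(s["PC"]))              # instruction in IR, PC advanced
```

The key point the sketch preserves is that both T1 transfers (loading IR and incrementing PC) happen in the same clock cycle, because incrementing PC does not use the bus.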
Determine the Type of Instruction
• The timing signal that is active after the decoding is T3. During time T3 the
control unit determines the type of instruction that was just read from memory.
• Decoder output D7 is equal to 1 if the operation code is equal to binary 111.
• If D7 = 1, the instruction must be a register-reference or input-output type.
• If D7 = 0, the operation code must be one of the other seven values 000
through 110, specifying a memory-reference instruction.
• The three instruction types are subdivided into four separate paths. The
selected operation is activated with the clock transition associated with timing
signal T3. This can be symbolized as follows (reconstructed from the description below):
D7′ I T3: AR ← M[AR] (indirect memory-reference)
D7′ I′ T3: nothing (direct memory-reference)
D7 I′ T3: execute a register-reference instruction
D7 I T3: execute an input-output instruction
• When a memory-reference instruction with I = 0 is encountered, it is not necessary to do anything since the effective address is already in AR.
• However, the sequence counter SC must be incremented when D7’T3 =
1, so that the execution of the memory-reference instruction can be
continued with timing variable T4.
• A register-reference or input-output instruction can be executed with the
clock associated with timing signal T3.
• After the instruction is executed, SC is cleared to 0 and control returns to
the fetch phase with T0 = 1.
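The four T3 decision paths can be captured in one small function. This is a hedged sketch (the function name and return strings are assumptions) showing how D7 and I jointly select the action taken at T3:

```python
def t3_action(opcode, i_bit):
    """Return the action taken at timing signal T3 for a decoded instruction."""
    d7 = (opcode == 0b111)             # decoder output D7
    if not d7 and i_bit:               # D7' I  T3: indirect address
        return "AR <- M[AR]"
    if not d7 and not i_bit:           # D7' I' T3: effective address already in AR
        return "nothing"
    if d7 and not i_bit:               # D7  I' T3
        return "execute register-reference"
    return "execute input-output"      # D7  I  T3

print(t3_action(0b010, 1))   # indirect memory-reference: read effective address
print(t3_action(0b111, 0))   # register-reference executes at T3 itself
```

Note the asymmetry the text describes: memory-reference instructions continue into T4 (SC is incremented), while register-reference and input-output instructions finish at T3 and clear SC to 0.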