CA Full Notes

The document outlines the fundamentals of computer architecture, covering topics such as basic computer organization, CPU structure, computer arithmetic, input-output organization, and memory organization. It details instruction codes, computer registers, addressing modes, and the instruction cycle, as well as control memory and timing mechanisms. The content is structured into five units, each focusing on different aspects of computer architecture, supported by references to textbooks and additional literature.

Computer Architecture

UNIT I 12 Hours
Basic Computer Organization And Design : Instruction codes – Computer Registers -
Computer Instructions - Timing and Control - Instruction Cycle - Control Memory-
Address Sequencing
UNIT II 12 Hours
Central Processing Unit : General Register Organization – Stack Organization –
Instruction Formats – Addressing Modes – Data transfer and manipulation – Program
Control.
UNIT III 12 Hours
Computer Arithmetic : Hardware Implementation and Algorithm for Addition,
Subtraction, Multiplication, Division-Booth Multiplication Algorithm-Floating Point
Arithmetic.
UNIT IV 12 Hours
Input Output Organization : Input – Output Interface – Asynchronous data transfer –
Modes of transfer – Priority Interrupt – Direct Memory Access (DMA).
UNIT V 12 Hours
Memory Organisation: Memory Hierarchy - Main memory - Auxiliary memory -
Associative memory - Cache memory - Virtual memory.

Text Book:
Computer System Architecture, Morris Mano, Third Edition, PHI Private Limited.
Reference Books:
1. Computer System Architecture, John P. Hayes.
2. Computer Organization, C. Hamacher, Z. Vranesic.
3. Computer Architecture and Parallel Processing, Kai Hwang and F. A. Briggs.
4. Computer Organization and Architecture, William Stallings , Sixth Edition, Pearson
Education, 2003.
UNIT I
Basic Computer Organization and Design
Instruction codes – Computer Registers - Computer Instructions - Timing and Control
- Instruction Cycle - Control Memory-Address Sequencing
I Instruction Codes
 An instruction code is a group of bits that instruct the computer to perform a
specific operation.
 The operation code of an instruction is a group of bits that define operations such
as addition, subtraction, shift, complement, etc.
 An instruction must also include one or more operands, which indicate the
registers and/or memory addresses from which data is taken or to which data is
deposited.
Microoperations
 The instructions are stored in computer memory in the same manner that data is
stored.
 The control unit interprets these instructions and uses the operations code to
determine the sequences of microoperations that must be performed to execute
the instruction.
Stored Program Organization
 The operands are specified by indicating the registers and/or memory locations
in which they are stored.
 k bits can be used to specify which of 2^k registers (or memory locations) are to be used.

Figure: Stored program organization

 The simplest design is to have one processor register (called the accumulator)
and two fields in the instruction, one for the opcode and one for the operand.
 Any operation that does not need a memory operand frees the other bits to be
used for other purposes, such as specifying different operations.

Addressing Modes
There are three different types of operands that can appear in an instruction:

 Direct operand - an operand stored in the register or in the memory location specified.
 Indirect operand - an operand whose address is stored in the register or in the memory location specified.
 Immediate operand - an operand whose value is specified in the instruction.

Figure 1: Demonstration of direct and indirect addressing.
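As a rough illustration, the following Python sketch resolves a direct versus an indirect operand, assuming the 16-bit word layout of the basic computer described above (bit 15 = I, bits 14-12 = opcode, bits 11-0 = address); the memory contents are made up for the example.

# Minimal sketch: resolving a direct vs. indirect operand.
memory = [0] * 4096                      # 4096-word memory of the basic computer

def fetch_operand(instruction):
    i_bit = (instruction >> 15) & 0x1
    address = instruction & 0x0FFF       # 12-bit address field
    if i_bit == 1:                       # indirect: the address field points to the
        address = memory[address] & 0x0FFF   # location holding the effective address
    return memory[address]               # fetch with the effective address

# Example: memory[300] holds 1350, memory[1350] holds the operand 42.
memory[300], memory[1350] = 1350, 42
direct = fetch_operand((0b0000 << 12) | 300)     # I = 0 -> operand at 300 (1350)
indirect = fetch_operand((0b1000 << 12) | 300)   # I = 1 -> operand at memory[300] = 42
print(direct, indirect)                          # 1350 42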

II Computer Registers
 Registers are a type of computer memory used to quickly accept, store, and
transfer data and instructions that are being used immediately by the CPU.
 The registers used by the CPU are often termed as Processor registers.
 A processor register may hold an instruction, a storage address, or any data (such
as bit sequence or individual characters).
 The computer needs processor registers for manipulating data and a register for
holding a memory address.
 The register holding a memory address is used to supply the address of the next instruction after the execution of the current instruction is completed.

The following image shows the register and memory configuration for a basic
computer.

o The Memory unit has a capacity of 4096 words, and each word contains 16 bits.
o The Data Register (DR) contains 16 bits which hold the operand read from the
memory location.
o The Memory Address Register (MAR) contains 12 bits which hold the address
for the memory location.
o The Program Counter (PC) also contains 12 bits which hold the address of the
next instruction to be read from memory after the current instruction is executed.
o The Accumulator (AC) register is a general purpose processing register.
o The instruction read from memory is placed in the Instruction register (IR).
o The Temporary Register (TR) is used for holding the temporary data during the
processing.
o The Input Register (INPR) holds the input character given by the user.
o The Output Register (OUTR) holds the output after processing the input data.

Common Bus
To avoid excessive wiring, the memory and all the registers are connected via a
common bus.
• The specific output that is selected for the bus is determined by the selection inputs S2, S1, and S0.
• The register whose LD (load) input is enabled receives the data from the bus.
• Registers can be incremented by setting the INR control input and can be cleared by
setting the CLR control input.
• The Accumulator’s input must come via the Adder & Logic Circuit. This allows the
Accumulator and Data Register to swap data simultaneously.
• The address of any memory location being accessed must be loaded in the Address
Register.
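As a rough picture of the bus selection logic, the following Python sketch models one bus cycle: the select inputs choose the source and only registers with LD enabled latch the value on the clock edge. The register-to-select-code mapping used here is illustrative, not the exact assignment of the figure, and "MEM" stands in for the memory output.

# Sketch of common-bus selection: S2 S1 S0 pick the source register,
# and only registers whose LD input is enabled latch the bus value.
registers = {"AR": 0, "PC": 0, "DR": 0, "AC": 0, "IR": 0, "TR": 0, "MEM": 0}
bus_sources = {1: "AR", 2: "PC", 3: "DR", 4: "AC", 5: "IR", 6: "TR", 7: "MEM"}

def bus_cycle(select, load_enables):
    bus = registers[bus_sources[select]]   # the selected register drives the bus
    for name in load_enables:              # every register with LD = 1
        registers[name] = bus              # receives the bus value on the clock edge
    return bus

registers["PC"] = 0x020
bus_cycle(select=2, load_enables=["AR"])   # AR <- PC, as in the fetch phase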

III Computer Instructions
 Computer instructions are a set of machine language instructions that a particular
processor understands and executes.
 A computer performs tasks on the basis of the instruction provided.
 An instruction comprises groups of bits called fields. These fields include:

 The Operation code (Opcode) field which specifies the operation to be performed.
 The Address field which contains the location of the operand, i.e., register or memory location.
 The Mode field which specifies how the operand will be located.

A basic computer has three instruction code formats which are:

1. Memory - reference instruction
2. Register - reference instruction
3. Input-Output instruction

Memory - reference instruction

In a Memory-reference instruction, 12 bits are used to specify an address and one bit specifies the addressing mode 'I'.

Register - reference instruction

 The Register-reference instructions are represented by the Opcode 111 with a 0 in the leftmost bit (bit 15) of the instruction.
 A Register-reference instruction specifies an operation on or a test of the AC
(Accumulator) register.

Input-Output instruction

 An Input-Output instruction does not need a reference to memory and is recognized by the operation code 111 with a 1 in the leftmost bit of the instruction.
 The remaining 12 bits are used to specify the type of the input-output operation
or test performed.
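The three formats can therefore be told apart from the opcode bits and the I bit. A minimal Python sketch of this classification is shown below; the example words 7800 and F800 correspond to Mano's CLA and INP encodings.

# Sketch: classifying a 16-bit basic-computer instruction word.
# Bits 12-14 form the opcode and bit 15 is I.  Opcode 111 with I = 0 is a
# register-reference instruction, 111 with I = 1 is an input-output
# instruction, and anything else is a memory-reference instruction.
def classify(word):
    i_bit = (word >> 15) & 0x1
    opcode = (word >> 12) & 0x7
    if opcode != 0b111:
        return "memory-reference", opcode, word & 0x0FFF   # 12-bit address
    if i_bit == 0:
        return "register-reference", None, word & 0x0FFF   # operation on AC/E
    return "input-output", None, word & 0x0FFF             # I/O operation bits

print(classify(0x1234))   # opcode 001 -> memory-reference, address 0x234
print(classify(0x7800))   # register-reference (CLA in Mano's encoding)
print(classify(0xF800))   # input-output (INP in Mano's encoding)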

Instruction Set Completeness
A set of instructions is said to be complete if the computer includes a sufficient number
of instructions in each of the following categories:
 Arithmetic, logic and shift instructions provide computational capabilities for
processing the type of data the user may wish to employ.
 A huge amount of binary information is stored in the memory unit, but all
computations are done in processor registers. Therefore, one must possess the
capability of moving information between these two units.
 Program control instructions such as branch instructions are used to change the
sequence in which the program is executed.
 Input and Output instructions act as an interface between the computer and
the user. Programs and data must be transferred into memory, and the results of
computations must be transferred back to the user.
IV TIMING AND CONTROL

 The timing for all registers in the basic computer is controlled by a master clock
generator.
 The clock pulses are applied to all flip-flops and registers in the system,
including the flip-flops and registers in the control unit.
 The clock pulses do not change the state of a register unless the register is
enabled by a control signal.
 The control signals are generated in the control unit and provide control inputs
for the multiplexers in the common bus, control inputs in processor registers, and
microoperations for the accumulator.
 The Control Unit is classified into two major categories:

1. Hardwired Control
2. Microprogrammed Control

Hardwired Control

 In the Hardwired Control organization, the control logic is implemented with gates, flip-flops, decoders, and other digital circuits.

 The following image shows the block diagram of a Hardwired Control
organization.

o A Hardwired Control consists of two decoders, a sequence counter, and a number of logic gates.
o An instruction fetched from the memory unit is placed in the instruction register
(IR).
o The instruction register holds the I bit, the operation code, and the address in bits 0 through 11.
o The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
o The outputs of the decoder are designated by the symbols D0 through D7.
o Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.

o Bits 0 through 11 of the instruction are applied to the control logic gates.
o The Sequence counter (SC) can count in binary from 0 through 15.
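A small Python sketch of the two decoders is given below; it assumes the 3 x 8 opcode decoder described above and a 4 x 16 timing decoder driven by SC, and ties in with the D3T4 control function discussed later.

# Sketch of the decoder outputs used by the hardwired control unit:
# a 3x8 decoder turns the opcode (bits 12-14) into D0..D7, and a 4x16
# decoder turns the sequence counter SC into timing signals T0..T15.
def decoded_signals(ir, sc):
    opcode = (ir >> 12) & 0x7
    D = [1 if k == opcode else 0 for k in range(8)]        # D0..D7
    T = [1 if k == (sc & 0xF) else 0 for k in range(16)]   # T0..T15
    I = (ir >> 15) & 0x1
    return D, T, I

D, T, I = decoded_signals(ir=0x3456, sc=4)   # D3 and T4 active -> D3T4 = 1
assert D[3] == 1 and T[4] == 1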

Micro-programmed Control

 The Microprogrammed Control organization is implemented by using a programming approach.
 In Microprogrammed Control, the micro-operations are performed by executing
a program consisting of micro-instructions.

o The Control memory address register specifies the address of the micro-
instruction.
o The Control memory is assumed to be a ROM, within which all control
information is permanently stored.
o The control register holds the microinstruction fetched from the memory.
o The micro-instruction contains a control word that specifies one or more micro-
operations for the data processor.
o While the micro-operations are being executed, the next address is computed in the next address generator circuit and then transferred into the control address register to read the next microinstruction.
o The next address generator is often referred to as a micro-program sequencer, as it determines the address sequence that is read from control memory.

Figure: Example of control timing signals

 The figure shows how SC is cleared when D3T4 = 1. Output D3 from the operation decoder becomes active at the end of timing signal T2.
 When timing signal T4 becomes active, the output of the AND gate that
implements the control function D3T4 becomes active.
 This signal is applied to the CLR input of SC.
 On the next positive clock transition (the one marked T4 in the diagram) the
counter is cleared to 0.
 This causes the timing signal T0 to become active instead of T5 that would have
been active if SC were incremented instead of cleared.
 A memory read or write cycle will be initiated with the rising edge of a timing
signal.
 It will be assumed that a memory cycle time is less than the clock cycle time.

 According to this assumption, a memory read or write cycle initiated by a timing
signal will be completed by the time the next clock goes through its positive
transition.
 The clock transition will then be used to load the memory word into a register.
 This timing relationship is not valid in many computers because the memory cycle time is usually longer than the processor clock cycle.
 In such a case it is necessary to provide wait cycles in the processor until the
memory word is available.
 To facilitate the presentation, we will assume that a wait period is not necessary
in the basic computer.
V INSTRUCTION CYCLE
 A program residing in the memory unit of a computer consists of a sequence of
instructions.
 These instructions are executed by the processor by going through a cycle for
each instruction.
Memory Address Register (MAR): It is connected to the address lines of the system bus. It specifies the address in memory for a read or write operation.
Memory Buffer Register (MBR): It is connected to the data lines of the system bus. It contains the value to be stored in memory or the last value read from memory.
Program Counter (PC): Holds the address of the next instruction to be fetched.
Instruction Register (IR): Holds the last instruction fetched.
 In a basic computer, each instruction cycle consists of the following phases:
1. Fetch instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory.
4. Execute the instruction.

Fetch:
 In the fetch cycle, the CPU retrieves the instruction from memory.
 The instruction is typically stored at the address specified by the program counter
(PC).
 The PC is then incremented to point to the next instruction in memory.
Decode:
 In the decode cycle, the CPU interprets the instruction and determines what
operation needs to be performed.
 This involves identifying the opcode and any operands that are needed to execute
the instruction.
Execute:
 In the execute cycle, the CPU performs the operation specified by the instruction.
 This may involve reading or writing data from or to memory, performing
arithmetic or logic operations on data, or manipulating the control flow of the
program.
 Some additional steps may be performed during the instruction cycle, depending
on the CPU architecture and instruction set:
Fetch operands:
 In some CPUs, the operands needed for an instruction are fetched during a
separate cycle before the execute cycle.
 This is called the fetch operands cycle.

Store results:
 In some CPUs, the results of an instruction are stored during a separate cycle
after the execute cycle.
 This is called the store results cycle.
Interrupt handling:
 In some CPUs, interrupt handling may occur during any cycle of the instruction
cycle.
 An interrupt is a signal that the CPU receives from an external device or software
that requires immediate attention.
 When an interrupt occurs, the CPU suspends the current instruction and executes
an interrupt handler to service the interrupt.
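A minimal Python sketch of the fetch-decode-execute loop, using the register names introduced above (MAR, MBR, PC, IR), is shown below. The tiny instruction set (LOAD, ADD, HALT) and the memory contents are hypothetical and only illustrate the flow of the cycle.

# Minimal sketch of the fetch / decode / execute cycle.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None), 10: 7, 11: 5}
PC, AC = 0, 0

while True:
    MAR = PC                   # fetch: address of the next instruction
    MBR = memory[MAR]          # read the instruction word from memory
    IR = MBR
    PC += 1                    # point to the next instruction
    opcode, address = IR       # decode: split into opcode and operand address
    if opcode == "LOAD":       # execute
        AC = memory[address]
    elif opcode == "ADD":
        AC += memory[address]
    elif opcode == "HALT":
        break

print(AC)                      # 12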
Input-Output Configuration
 In computer architecture, input-output devices act as an interface between the
machine and the user.
 Instructions and data stored in the memory must come from some input device.
 The results are displayed to the user through some output device.

o The input-output terminals send and receive information.
o The quantity of information transferred is always eight bits of an alphanumeric code.
o The information generated through the keyboard is shifted into an input register
'INPR'.
o The information for the printer is stored in the output register 'OUTR'.
o Registers INPR and OUTR communicate with a communication interface
serially and with the AC in parallel.
o The transmitter interface receives information from the keyboard and transmits
it to INPR.
o The receiver interface receives information from OUTR and sends it to the
printer serially.

Design of a Basic Computer


A basic computer consists of the following hardware components.
1. A memory unit with 4096 words of 16 bits each
2. Registers: AC (Accumulator), DR (Data register), AR (Address register), IR
(Instruction register), PC (Program counter), TR (Temporary register), SC
(Sequence Counter), INPR (Input register), and OUTR (Output register).
3. Flip-Flops: I, S, E, R, IEN, FGI and FGO

VI CONTROL MEMORY
 A control memory is a part of the control unit.
 Any computer that involves microprogrammed control consists of two
memories.
 They are the main memory and the control memory.
 Programs are usually stored in the main memory by the users.
 Whenever the programs change, the data is also modified in the main memory.
 They consist of machine instructions and data.
 The control memory consists of microprograms that are fixed and cannot be
modified frequently.
 They contain microinstructions that specify the internal control signals required
to execute register micro-operations.
 The machine instructions generate a chain of microinstructions in the control
memory.
 Their function is to generate micro-operations that can fetch instructions from
the main memory, compute the effective address, execute the operation, and
return control to fetch phase and continue the cycle.
 The figure shows the general configuration of a microprogrammed control
organization.

 Here, the control memory is presumed to be a Read-Only Memory (ROM), where all the control information is stored permanently.
 The control address register provides the address of the microinstruction.
 The other register, that is, the control data register stores the microinstruction
that is read from the memory.

 It consists of a control word that holds one or more micro-operations for the data
processor.
 The next address must be computed once this operation is completed.
 It is computed in the next address generator.
 Then, it is sent to the control address register to be read.
 The next address generator is also known as the microprogram sequencer.
 Based on the inputs to a sequencer, it determines the address of the next
microinstruction.
 The microinstructions can be specified in several ways.
 The main functions of a microprogram sequencer are as follows –
 It can increment the control register by one.
 It can load the address from the control memory to the control address
register.
 It can transfer an external address or load an initial address to begin the
start operation.

 The data register is also known as the pipeline register.


 It allows two operations to be performed at a time.
 It allows performing the micro-operation specified by the control word and also
the generation of the next microinstruction.
 A dual-phase clock is required to be applied to the address register and the data
register.
 It is possible to apply a single-phase clock to the address register and work
without the control data register.
 The main advantage of using a microprogrammed control is that once the hardware configuration is established, no further hardware changes are needed.
 If a different control sequence is to be implemented, only a new set of microinstructions for the system must be developed.

VII ADDRESS SEQUENCING

 The control memory is used to store the microinstructions in groups.


 Here each group is used to specify a routine.

 The control memory of each computer has the instructions which contain their
micro-programs routine.
 These micro-programs are used to generate the micro-operations that will be
used to execute the instructions.
 Suppose the address sequencing of control memory is controlled by the
hardware.
 In that case, that hardware must be capable of branching from one routine to another and also of sequencing the microinstructions within a routine.
 When a single computer instruction is executed, the control must undergo the following steps:
o When the power of a computer is turned on, we have to first load an initial
address into the CAR (control address register). This address can be
described as the first microinstruction address. With the help of this address,
we are able to activate the instruction fetch routine.
o Then, the control memory will go through the routine, which will be used to
find out the effective address of operand.
o In the next step, a micro-operation will be generated, which will be used to
execute the instruction fetched from memory.
 The bits of the instruction code are transformed into the address in control memory where the routine is located.
 This transformation is called the mapping process.
 The control memory required the capabilities of address sequencing, which is
described as follows:
o Based on the status bit conditions, the address sequencing selects the
conditional branch or unconditional branch.
o Addressing sequence is able to increment the CAR (Control address
register).
o It provides the facility for subroutine calls and returns.
o A mapping process is provided by the address sequencing from the instruction bits to a control memory address.

 The above diagram shows a block diagram of the control memory and the associated hardware required for selecting the address of the next microinstruction.
 The microinstruction is used to contain a set of bits in the control memory.
 With the help of some bits, we are able to start the micro-operations in a
computer register.
 The remaining bits of microinstruction are used to specify the method by which
we are able to obtain the next address.
 In this diagram, we can also see that the control address register can receive its address from four different sources.
 The CAR is incremented by the incrementer to select the next microinstruction in sequence.
 A branch address can be specified in a field of the microinstruction to produce a branch.
Conditional Branching
 The branch logic is used to provide the decision-making capabilities in the
control unit.
 There are special bits in the system which are known as status conditions.

 These bits are used to provide the parameter information such as mode bits, the
sign bit, carry-out, and input or output status.
 If these status bits come together with the microinstruction field, they are able to
control the decision of a conditional branch, which is generated in the branch
logic.
 Here the microinstruction field is going to specify a branch address.
 The branch logic hardware can be implemented with a multiplexer.
 If the condition is met, control branches to the specified address; otherwise, the control address register is simply incremented.
 An unconditional branch microinstruction is implemented by loading the branch address from control memory into the control address register.
 When the condition is true, the branch target is the address taken from the next-address field of the current microinstruction.
 Various types of conditions need to be tested: Z (zero), C (carry), O (overflow), N (negative), etc.
Mapping of Instructions
 A special type of branch occurs when a microinstruction specifies a branch to the first word in control memory where the micro-program routine for an instruction is located.
 For this special branch, the status bits are the bits of the operation code, which is part of the instruction.

 The above image shows a simple mapping process that converts the 4-bit operation code into a 7-bit address for control memory.
 In the mapping process, a 0 is placed in the most significant bit of the address.

 After that, the four operation code bits are transferred. Lastly, the two least significant bits of the CAR are cleared.
 Through this process, a micro-program routine is provided for each computer instruction.
 Each micro-program routine has a capacity of four microinstructions.
 If fewer than four microinstructions are used by a routine, the unused memory locations can be used for other routines.
 If more than four microinstructions are needed by a routine, it can use the addresses 1000000 through 1111111.
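The mapping rule itself can be written in one line; the following Python sketch shows how, for example, opcode 0101 maps to control address 0010100.

# Sketch of the simple mapping rule described above: a 4-bit operation code is
# placed between a leading 0 and two trailing 0s to form a 7-bit control
# memory address, giving each instruction a 4-microinstruction routine.
def map_opcode(opcode4):
    return (0 << 6) | ((opcode4 & 0xF) << 2) | 0b00

print(format(map_opcode(0b0101), "07b"))   # opcode 0101 -> address 0010100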

This concept can be extended to a more general mapping rule with the help of PLD
(Programmable logic device) or ROM (Read-only memory).

 The above image shows the mapping from the OP-code of an instruction to the address of its microinstruction.
 This microinstruction is the starting microinstruction of the routine executed for that instruction.
Subroutine
 Subroutines are programs used by other routines to accomplish a particular task.
 Employing subroutines saves microinstructions.
 Subroutines share common sections of microcode, such as effective address computation.
 The main routine obtains the return address with the help of a subroutine register.
 In other words, the subroutine register becomes the source for transferring the address back to the main routine.
 A register file is used to store the return addresses for subroutines.
 This register file can be organized as a 'last in, first out' (LIFO) stack.

UNIT II
CENTRAL PROCESSING UNIT
Central Processing Unit : General Register Organization – Stack Organization –
Instruction Formats – Addressing Modes – Data transfer and manipulation – Program
Control.
Central Processing Unit (CPU)
 A Central Processing Unit is the most important component of a computer
system.
 A CPU is hardware that performs data input/output, processing, and storage functions for a computer system.
 A CPU can be installed into a CPU socket.
 These sockets are generally located on the motherboard.
 CPU can perform various data processing operations.
 CPU can store data, instructions, programs, and intermediate results.

Memory or Storage Unit


 This unit can store instructions, data, and intermediate results.
 This unit supplies information to other units of the computer when needed.
 It is also known as internal storage unit or the main memory or the primary
storage or Random Access Memory (RAM).

Control Unit
This unit controls the operations of all parts of the computer but does not carry
out any actual data processing operations.
ALU (Arithmetic Logic Unit)
This unit consists of two subsections namely,
 Arithmetic Section
 Logic Section
Arithmetic Section
 The function of arithmetic section is to perform arithmetic operations like
addition, subtraction, multiplication, and division.
 All complex operations are done by making repetitive use of the above
operations.
Logic Section
Function of logic section is to perform logic operations such as comparing,
selecting, matching, and merging of data.

I GENERAL REGISTER ORGANIZATION

 A set of flip-flops forms a register.


 A register is a unique high-speed storage area in the CPU.
 They include combinational circuits that implement data processing.
 The information is always defined in a register before processing.
 The registers speed up the implementation of programs.
 Registers perform two important functions in the CPU operation, as follows:
 They provide a temporary storage location for data. This gives the currently executing program fast access to data when required.
 They hold the status of the CPU and information about the currently executing program.

Example − The address of the next program instruction, signals received from external devices, error messages, and other such data are saved in the registers.

 When a CPU includes several registers, a common bus can link these registers.
 A general organization of seven CPU registers is displayed in the figure.

 The CPU bus system is managed by the control unit.
 The control unit directs the data flow through the ALU by choosing the function of the ALU and the components of the system.

Consider R1 ← R2 + R3, the following are the functions implemented within the CPU
 MUX A Selector (SELA) − It can place R2 into bus A.
 MUX B Selector (SELB) − It can place R3 into bus B.
 ALU Operation Selector (OPR) − It can select the arithmetic addition
(ADD).
 Decoder Destination Selector (SELD) − It transfers the result into R1.
 The buses are implemented with multiplexers built from three-state gates.
 The state of the 14 binary selection inputs determines the control word.
 The 14-bit control word defines a micro-operation.
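As an illustration, the Python sketch below packs the 14-bit control word for R1 ← R2 + R3. The 3-bit register codes (R1 = 001, R2 = 010, R3 = 011) and the 5-bit ADD code (00010) follow the encoding tables in Mano's text; other encodings are possible.

# Sketch: packing the 14-bit control word with 3-bit SELA, SELB, SELD fields
# and a 5-bit OPR field.
def control_word(sela, selb, seld, opr):
    return (sela << 11) | (selb << 8) | (seld << 5) | opr

# R1 <- R2 + R3: SELA = R2, SELB = R3, SELD = R1, OPR = ADD
word = control_word(sela=0b010, selb=0b011, seld=0b001, opr=0b00010)
print(format(word, "014b"))   # 01001100100010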

Types of Register in Computer Organization


 In computer organisation, registers are used to accept, store, and transfer data and instructions that are being used immediately by the CPU.
 There are different kinds of registers used for different purposes. Some of the commonly used registers are:

o AC (Accumulator)
o DR (Data Register)
o AR (Address Register)
o PC (Program Counter)

o MDR (Memory Data Register)
o IR (Index Register)
o MBR (Memory Buffer Register)

 These registers are used for performing the various operations.
 When we perform some operation, the CPU uses these registers to carry it out.
 When we provide input to the system for a certain operation, the provided information or input gets stored in the registers.
 Once the ALU (arithmetic and logic unit) has processed it, the result is again provided to us through the registers.
 The sole reason for having registers is the quick retrieval of data that the CPU will later process.
 The CPU can use RAM rather than the hard disk to retrieve data, which is a much faster option, but the speed of RAM is still not enough.
 Therefore, we have cache memory, which is faster than RAM, while registers are faster still.
 Registers work together with CPU memory such as cache and RAM to complete tasks quickly.
 Several micro-operations are implemented by the ALU.
 A few of the operations implemented by the ALU are displayed in the table.

Encoding of ALU Operations

There are some ALU micro-operations are shown in the table.

ALU Micro-Operations

 The increment and transfer microoperations do not use the B input of the ALU.
 For these cases, the B field is marked with a dash.
 We assign 000 to any unused field when formulating the binary control word,
although any other binary number may be used.
 To place the content of a register into the output terminals we place the content
of the register into the A input of the ALU, but none of the registers are selected
to accept the data.
 The ALU operation TSFA places the data from the register, through the ALU,
into the output terminals.
 The direct transfer from input to output is accomplished with a control word of
all 0’s (making the B field 000).

II STACK ORGANIZATION

 Stack is also known as the Last In First Out (LIFO) list.


 It is the most important feature in the CPU.
 It saves data such that the element stored last is retrieved first.
 A stack is a memory unit with an address register.

 This register holds the address for the stack and is known as the Stack Pointer (SP).
 The stack pointer always holds the address of the element that is located at the top of the stack.
 It can insert an element into or delete an element from the stack.
 The insertion operation is known as push operation and the deletion operation is
known as pop operation.
 In a computer stack, these operations are simulated by incrementing or
decrementing the SP register.

Register Stack
 The stack can be arranged as a set of memory words or registers.
 Consider a 64-word register stack arranged as displayed in the figure.
 The stack pointer register includes a binary number, which is the address of the
element present at the top of the stack.
 The three-element A, B, and C are located in the stack.
 The element C is at the top of the stack and the stack pointer holds the address
of C that is 3.
 The top element is popped from the stack through reading memory word at
address 3 and decrementing the stack pointer by 1.
 Then, B is at the top of the stack and the SP holds the address of B that is 2.
 To insert a new word, the stack is pushed by incrementing the stack pointer by 1 and writing the word into that incremented location.
 The stack pointer contains 6 bits, because 2^6 = 64, and the SP cannot exceed 63 (111111 in binary).
 If 63 is incremented by 1, the result is 0 (111111 + 1 = 1000000), since SP holds only the six least significant bits.
 Similarly, if 000000 is decremented by 1, the result is 111111.

 Therefore, when the stack is full, the one-bit register ‘FULL’ is set to 1.
 If the stack is null, then the one-bit register ‘EMTY’ is set to 1.
 The data register DR holds the binary information which is composed into or
readout of the stack.
 First, the SP is set to 0, EMTY is set to 1, and FULL is set to 0.
 Now, as the stack is not full (FULL = 0), a new element is inserted using the
push operation.
 The main two operations that are performed on the operators of the stack are
Push and Pop.
 These two operations are performed from one end only.
1. Push
 This operation results in inserting one operand at the top of the stack and it
increases the stack pointer register.

 The stack pointer is incremented by 1 and the address of the next higher word is
saved in the SP.
 The word from DR is inserted into the stack using the memory write operation.
 The first element is saved at address 1 and the final element is saved at address
0.
 If the stack pointer is at 0, then the stack is full and ‘FULL’ is set to 1.
 This is the condition when the SP was in location 63 and after incrementing SP,
the final element is saved at address 0.
 Once an element is saved at address 0, there are no more empty registers in the stack.
 The stack is then full and 'EMTY' is 0.
2. Pop
 This operation results in deleting one operand from the top of the stack and
decreasing the stack pointer register.
 A new element is deleted from the stack if the stack is not empty (if EMTY = 0).
 The pop operation includes the following sequence of micro-operations –

 The top element of the stack is read and transferred to DR, and then the stack pointer is decremented.
 If the stack pointer reaches 0, then the stack is empty and ‘EMTY’ is set to 1.
 This is the condition when the element in location 1 is read out and the SP is
decremented by 1.
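The push and pop behaviour of the 64-word register stack, including the FULL and EMTY flags, can be summarised in the following Python sketch. It is a simplified model of the behaviour described above, not the register-transfer implementation: SP wraps modulo 64.

# Sketch of the 64-word register stack with FULL and EMTY flags.
stack = [0] * 64
SP, FULL, EMTY = 0, 0, 1

def push(word):
    global SP, FULL, EMTY
    assert FULL == 0, "stack overflow"
    SP = (SP + 1) % 64        # increment the stack pointer first
    stack[SP] = word          # write the new top-of-stack item
    EMTY = 0
    if SP == 0:               # SP wrapped around: every register is in use
        FULL = 1

def pop():
    global SP, FULL, EMTY
    assert EMTY == 0, "stack underflow"
    word = stack[SP]          # read the top-of-stack item
    SP = (SP - 1) % 64        # decrement the stack pointer
    FULL = 0
    if SP == 0:               # back at location 0: the stack is empty again
        EMTY = 1
    return word

push(10); push(20)
print(pop(), pop())           # 20 10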

Memory Stack

 It can be implemented in a random-access memory attached to a CPU.


 The implementation of a stack in the CPU is done by assigning a portion of
memory to a stack operation and using a processor register as a stack pointer.
 A portion of computer memory is partitioned into three segments: program, data, and stack.
 The program counter PC points at the address of the next instruction in the
program.
 The address register AR points at an array of data.
 The stack pointer SP points at the top of the stack.
 The three registers are connected to a common address bus, and either one can
provide an address for memory.
 PC is used during the fetch phase to read an instruction.
 AR is used during the execute phase to read an operand.
 SP is used to push or pop items into or from the stack.

 We assume that the items in the stack communicate with a data register DR.
 A new item is inserted with the push operation as follows:
SP ← SP - 1
M[SP] ← DR
 The stack pointer is decremented so that it points at the address of the next word.
 A memory write operation inserts the word from DR into the top of the stack.
 A new item is deleted with a pop operation as follows:
DR ← M[SP]
SP ← SP + 1
 The top item is read from the stack into DR.
 The stack pointer is then incremented to point at the next item in the stack.
III INSTRUCTION FORMATS
 In computer organization, instruction formats refer to the way instructions are
encoded and represented in machine language.
 There are several types of instruction formats, including zero, one, two, and
three-address instructions.
 Each type of instruction format has its own advantages and disadvantages in
terms of code size, execution time, and flexibility.
 Modern computer architectures typically use a combination of these formats to
provide a balance between simplicity and power.
 A computer performs a task based on the instruction provided.
 Instruction in computers comprises groups called fields.
 These fields contain different information; since for computers everything is in 0s and 1s, each field has a different significance, based on which the CPU decides what operation to perform.
 The most common fields are:
 The operation field specifies the operation to be performed like
addition.
 Address field which contains the location of the operand, i.e., register
or memory location.
 Mode field which specifies how the operand is to be found.

 Instructions are of variable length depending upon the number of addresses they contain.
 Generally, CPU organization is of three types based on the number of address
fields:

 Single Accumulator organization


 General register organization
 Stack organization

Types of Instructions
Based on the number of addresses, instructions are classified as:
Zero Address Instructions
 These instructions do not specify any operands or addresses.
 Instead, they operate on data stored in registers or memory locations implicitly
defined by the instruction.
 For example, a zero-address ADD instruction adds the top two elements of the stack without naming them explicitly.

 A stack-based computer does not use the address field in the instruction.
 To evaluate an expression, it is first converted to reverse Polish (postfix) notation.
Expression: X = (A+B)*(C+D)
Postfixed : X = AB+CD+*

TOP means top of stack
M[X] is any memory location

One Address Instructions


 These instructions specify one operand or address, which typically refers to a
memory location or register.
 The instruction operates on the contents of that operand, and the result may be
stored in the same or a different location.
 For example, a one-address instruction might load the contents of a memory
location into a register.
 This uses an implied ACCUMULATOR register for data manipulation.
 One operand is in the accumulator and the other is in the register or memory
location.
 Implied means that the CPU already knows that one operand is in the
accumulator so there is no need to specify it.

Expression: X = (A+B)*(C+D)
AC is accumulator
M[] is any memory location
M[T] is temporary location

Two Address Instructions


 These instructions specify two operands or addresses, which may be memory
locations or registers.
 The instruction operates on the contents of both operands, and the result may be
stored in the same or a different location.
 For example, a two-address instruction might add the contents of two registers
together and store the result in one of the registers.
 This is common in commercial computers.
 Here two addresses can be specified in the instruction.
 Unlike the one-address instruction, where the result was stored in the accumulator, here the result can be stored at different locations rather than just the accumulator, but this requires more bits to represent the addresses.

Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location

Three Address Instructions


 These instructions specify three operands or addresses, which may be memory
locations or registers.
 The instruction operates on the contents of all three operands, and the result may
be stored in the same or a different location.
 For example, a three-address instruction might multiply the contents of two
registers together and add the contents of a third register, storing the result in a
fourth register.
 This has three address fields to specify a register or a memory location.
 Programs created are much shorter in size, but the number of bits per instruction increases.
 These instructions make the creation of programs much easier, but this does not mean that programs will run much faster, because the instructions merely contain more information; each micro-operation (changing the content of a register, loading an address on the address bus, etc.) is still performed in one cycle.

Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location

IV ADDRESSING MODES
 The operands of the instructions can be located either in the main memory or in
the CPU registers.
 If the operand is placed in the main memory, then the instruction provides the
location address in the operand field.
 Many methods are followed to specify the operand address.
 The different methods/modes for specifying the operand address in the
instructions are known as addressing modes.
Types of Addressing Modes
 There are different ways to specify an operand in the instruction which are called
addressing modes
 Implied / Implicit Addressing Mode
 Stack Addressing Mode
 Immediate Addressing Mode
 Direct Addressing Mode
 Indirect Addressing Mode
 Register Direct Addressing Mode

 Register Indirect Addressing Mode
 Relative Addressing Mode
 Indexed Addressing Mode
 Base Register Addressing Mode
 Auto-Increment Addressing Mode
 Auto-Decrement Addressing Mode
1. Implied Addressing Mode
 Implied Addressing Mode is known as ‘Implicit’ or ‘Inherent’ addressing mode.
 It is mainly used for Zero-address (STACK-organized) instructions.
 For such instructions, the operands are specified implicitly by the definition of the instruction.
 There is no need for an address field to find the operand.

Examples of zero-address instructions are:
 ADD (takes the top two values from the stack and adds them)
 CLC (used to reset the carry flag to 0)
2. Stack Addressing Mode
In stack addressing mode, The operand is found from the top of the stack.

Example
 ADD Operation in Stack organization
 Take the top two values from the top of the stack
 Apply ADD instruction
 Obtained result is again placed at the top of the stack
3. Immediate Addressing Mode
 The operand is directly provided as a constant value.
 No extra computations are required to calculate effective address (EA).
 We have no need to store the value of the constant in memory or register.
 We can directly use it from instruction.
 The size of the constant must be equal to or less than the size of the operand field in the instruction.
 It is a faster mode than the others because the operand is specified in the instruction itself.
 Identification: the instruction contains a constant.

Examples
ADD 10 (AC = AC+10)
MOV AL, 30H (move the data (30H) into AL register)
4. Direct Addressing Mode
 The address field of the instruction contains the address of the memory location that holds the operand.
 It is also called absolute addressing mode.
 Practically all instruction sets support direct and indirect addressing modes.

Example
ADD AL,[0302] //add the contents of address 0302 to AL
5. Indirect Addressing Mode
 The address field of the instruction contains the address of a memory location.
 That memory location contains the effective address of the required operand.
 Two memory references are required to fetch the operand.

Example
ADD X in indirect addressing mode will perform in the following way
AC ← AC + [[X]]

6. Register Direct Addressing Mode
 The address field of the instruction specifies a CPU register which contains the operand.
 There is no need for a memory access to fetch the operand.

Example
ADD R in register direct addressing mode will perform in the following way
AC ← AC + [R]
7. Register Indirect Addressing Mode
 The address field of the instruction specifies a CPU register which provides the effective address of the operand.
 Only one memory reference is required to fetch the operand.

Example
ADD R in register indirect addressing mode will perform in the following way
AC ← AC + [[R]]
8. Relative Addressing Mode
 The effective address of the operand is obtained by adding the content of the program counter to the address part of the instruction.

Effective Address = Value of PC + Address part of the instruction

9. Indexed Addressing Mode


 Operand-field contains the starting base address of the array-memory block and
the general register (index-register) field will contain the index value.
 Adding a base address and index register will give the actual physical address of
the operand.

So, EA = base address + index register value

The base address is fixed, but the index register value changes as required.
10. Base Register Addressing Mode
 The effective address of the operand can be found by adding the value of the
base register with the address part of the instruction.

 Multiple programs reside in RAM.
 When memory is full, a program is swapped out of memory and new programs are swapped in; after some time, the swapped-out program is loaded back into memory at the same or a different location.
 Sometimes we change the location of a program in the memory.

Effective Address = Content of Base Register + Address part of the instruction


11. Auto-Increment Addressing Mode
 This addressing mode is a special case of Register Indirect Addressing Mode.
 Auto-increment and auto-decrement modes are used to access data sequentially from memory.
Effective Address of the Operand = Content of Register

 After accessing the operand, the value of the register is automatically
incremented by step size ‘d’.
 Size of ‘d’ depends on the size of operand. If operand size is 4-bytes then size
of d will be 4-bytes.
 To access the operand, Only one memory reference is required.
12. Auto-Decrement Addressing Mode
 This addressing mode is also a special case of Register Indirect Addressing
Mode in which
Effective Address of the Operand = Content of Register – Step Size

 The value of the register is decremented by step size ‘d’.


 Size of ‘d’ depends on the size of operand. If operand size is 4-bytes then size of d
will be 4-bytes.
 To access the operand, Only one memory reference is required.
Example
ADD R1, -(R2)    // which performs:
R2 ← R2 - d
R1 ← R1 + M[R2]
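The following Python sketch contrasts how a few of these modes compute the effective address and fetch the operand; the register and memory contents used here are made up for illustration.

# Sketch comparing effective-address (EA) computation for several modes.
memory = {0x0302: 7, 0x0500: 0x0302, 0x0600: 99}
registers = {"R2": 0x0600, "PC": 0x0100, "XR": 0x0004}

def operand(mode, field):
    if mode == "immediate":               # the operand is the field itself
        return field
    if mode == "direct":                  # EA = address field
        return memory[field]
    if mode == "indirect":                # EA = memory[address field]
        return memory[memory[field]]
    if mode == "register_indirect":       # EA = contents of the named register
        return memory[registers[field]]
    if mode == "relative":                # EA = PC + address field
        return memory[registers["PC"] + field]
    if mode == "indexed":                 # EA = base address + index register
        return memory[field + registers["XR"]]

print(operand("immediate", 10))           # 10
print(operand("direct", 0x0302))          # 7
print(operand("indirect", 0x0500))        # memory[0x0302] = 7
print(operand("register_indirect", "R2")) # memory[0x0600] = 99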
V DATA TRANSFER AND MANIPULATION
Data Transfer Instructions:
 Data transfer instructions move data from one place in the computer to another
without changing the data content.

 The most common transfers are between memory and processor registers,
between processor registers and input or output, and between the processor
registers themselves.

Data Manipulation Instructions:


 Data manipulation instructions perform operations on data and provide the
computational capabilities for the computer.
 The data manipulation instructions in a typical computer are usually divided into
three basic types:
 Arithmetic instructions (Increment, Decrement, Add, Subtract etc)
 Logical and bit manipulation instructions (AND, OR, XOR,
Complement etc)
 Shift instructions (Shift Left, Shift Right, Rotate Right etc)
Arithmetic instructions
 The four basic arithmetic operations are addition, subtraction, multiplication, and
division.
 Most computers provide instructions for all four operations.
 Some small computers have only addition and possibly subtraction instructions.
 The multiplication and division must then be generated by means of software
subroutines.
 The increment instruction adds 1 to the value stored in a register or memory
word.

 The decrement instruction subtracts 1 from a value stored in a register or memory
word.
 The instruction "add with carry" performs the addition on two operands plus the
value of the carry from the previous computation.
 Similarly, the "subtract with borrow" instruction subtracts two words and a
borrow which may have resulted from a previous subtract operation.
 The negate instruction forms the 2's complement of a number, effectively
reversing the sign of an integer when represented in the signed-2's complement
form.

Logical and bit manipulation instructions


 Logical instructions perform binary operations on strings of bits stored in
registers.
 They are useful for manipulating individual bits or a group of bits that represent
binary-coded information.
 The AND instruction is used to clear a bit or a selected group of bits of an
operand.
 The OR instruction is used to set a bit or a selected group of bits of an operand.
 Similarly, the XOR instruction is used to selectively complement bits of an
operand.
 Individual bits such as a carry can be cleared, set, or complemented with
appropriate instructions.

Shift instructions
 Instructions to shift the content of an operand are quite useful and are often
provided in several variations.
 Shifts are operations in which the bits of a word are moved to the left or right.
 The bit shifted in at the end of the word determines the type of shift used.
 Shift instructions may specify either logical shifts, arithmetic shifts, or rotate-
type operations.
 In either case the shift may be to the right or to the left.
 The logical shift inserts 0 to the end bit position.
 The end position is the leftmost bit for shift right and the rightmost bit position
for the shift left.
 The arithmetic shift-right instruction must preserve the sign bit in the leftmost
position.
 The sign bit is shifted to the right together with the rest of the number, but the
sign bit itself remains unchanged.
 This is a shift-right operation with the end bit remaining the same.
 The arithmetic shift-left instruction inserts 0 to the end position and is identical
to the logical shift-left instruction.
 The rotate instructions produce a circular shift. Bits shifted out at one end of the
word are not lost as in a logical shift but are circulated back into the other end.
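A short Python sketch of the three shift variants on an 8-bit operand (logical shift right, arithmetic shift right, rotate right) is shown below; the left-going forms are analogous.

# Sketch of the shift variants described above, on an 8-bit operand.
def logical_shift_right(x):
    return (x >> 1) & 0xFF                        # 0 enters the leftmost bit

def arithmetic_shift_right(x):
    sign = x & 0x80                               # sign bit remains unchanged
    return sign | (x >> 1)

def rotate_right(x):
    carry_out = x & 0x01                          # bit shifted out at the right end
    return ((x >> 1) | (carry_out << 7)) & 0xFF   # circulates into the left end

x = 0b10010110
print(format(logical_shift_right(x), "08b"))      # 01001011
print(format(arithmetic_shift_right(x), "08b"))   # 11001011
print(format(rotate_right(x), "08b"))             # 01001011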

VI PROGRAM CONTROL
 Program control instructions modify or change the flow of a program.
 It is the instruction that alters the sequence of the program's execution, which
means it changes the value of the program counter, due to which the execution
of the program changes.
Features:
 These instructions cause a change in the sequence of the execution of the
instruction.
 This change can be through a condition or sometimes unconditional.
 Flags represent the conditions.
 Flag-Control Instructions.
 Control Flow and the Jump Instructions include jumps, calls, returns, interrupts,
and machine control instructions.
 Subroutine and Subroutine-Handling Instructions.
 Loop and Loop-Handling Instructions.

Status Bit Conditions
 To check different conditions for branching, instructions like CMP (compare) or TEST can be used. Certain status bit conditions are set as a result of these operations.

Status bits mean that the value will be either 0 or 1 as it is a bit. We have four status
bits:
 "V" stands for Overflow
 "Z" stands for Zero
 "S" stands for the Sign bit
 "C" stands for Carry.
 Now, these will be set or reset based on the ALU (Arithmetic Logic Unit)
operation carried out into the CPU.
 Let us discuss these bits before understanding the operation.
 Overflow (V) is set when the operation generates a result that cannot be represented in the available bits. Then we have Zero (Z).
 If the output of the ALU(Arithmetic Logic Unit) is 0, then the Z flag is set to 1,
otherwise, it is set to 0.
 If the number is positive, the Sign(S) flag is 0, and if the number is negative, the
Sign flag is 1.
 We have Carry (C): if the ALU operation generates a carry out, then C is set to 1, else C is set to 0.

 Let's see how these flags are affected.
 You can see in the figure this is an 8-bit ALU that performs arithmetic or logic
operations on our data.
 Suppose we have two operands A and B of 8-bits on which we are performing
certain arithmetic or logic operations.
 Arithmetic operation-addition is performed on A and B.
 We know that carries may be generated at every bit position during the addition.
 If the addition operation is performed on operands A and B, the carry bits C0 to C7 may be generated at the individual bit positions.
 An extra carry may also be generated out of the most significant position, which we term C8.
 If C8 is generated, i.e., Carry is generated, reflecting our Carry flag, which results
in the C flag or C status bit being set to 1.
 Let's move further to the last two carries, C7 and C8.
 If the XOR of these two carries comes out to be 1, then the overflow condition has occurred and the V flag is set to 1; otherwise, it is set to 0.
 This check applies when numbers are represented in 2's complement form.
 Now moving on to the next flag which is the sign flag(S), the sign flag is set to
1 or 0 based on the output of the 8 bit ALU.
 As we know that if the number is positive, then the most significant bit of a
number is represented to be 0, which means if F7 is 0 then we can say that the
number is positive and if F7 is 1 then we say that the number is negative.
 And lastly, we have the Zero flag: the zero status bit Z is set to 1 if all the output bits F0 to F7 are 0; in that case the zero flag is SET.
 Now all these four bits that are "V", "Z", "S" and "C" are reflected based on the
arithmetic or logic operation carried out on the 8 bit ALU.
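A Python sketch that derives the C, V, S and Z bits from an 8-bit addition, following the description above, is given below; the two example operand pairs are chosen to show carry and overflow.

# Sketch: C = carry out of the MSB, V = C7 XOR C8, S = output bit F7,
# Z = 1 when all output bits are 0.
def add_with_flags(a, b):
    full = a + b
    result = full & 0xFF
    c8 = (full >> 8) & 1                        # carry out of the MSB position
    c7 = ((a & 0x7F) + (b & 0x7F)) >> 7 & 1     # carry into the MSB position
    C = c8
    V = c7 ^ c8                                 # overflow for 2's-complement operands
    S = (result >> 7) & 1                       # sign bit of the result
    Z = 1 if result == 0 else 0
    return result, {"C": C, "V": V, "S": S, "Z": Z}

print(add_with_flags(0x70, 0x40))   # 0xB0, V = 1 (112 + 64 exceeds +127)
print(add_with_flags(0x80, 0x80))   # 0x00, C = 1, Z = 1, V = 1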

Conditional Branch Instructions
 A conditional branch instruction examines the values stored in the condition code register to determine whether a specific condition exists, and branches if it does.
 Conditional branch instructions such as ‘branch if zero’ or ‘branch if positive’
specify the condition to transfer the execution flow.
 The branch address will be loaded in the program counter when the condition is
met.
 Each conditional branch instruction tests for a different combination of Status
bits for a condition.
Comparison Branch Instructions
 Remember that A ≥ B is the complement of A < B and A ≤ B is the complement of A > B, which means that if we know the status-bit condition for one relation, the condition for the complementary relation is obtained by complementing it.

UNIT III
COMPUTER ARITHMETIC
Hardware Implementation and Algorithm for Addition, Subtraction, Multiplication,
Division-Booth Multiplication Algorithm-Floating Point Arithmetic.
Computer Arithmetic

 Arithmetic instructions in digital computers manipulate data to produce results necessary for the solution of computational problems.
 These instructions perform arithmetic calculations and are responsible for the
bulk of activity involved in processing data in a computer.
 The four basic arithmetic operations are addition, subtraction, multiplication and
division.
 From these four basic operations, it is possible to formulate other arithmetic
functions and solve scientific problems by means of numerical analysis methods.
 An arithmetic processor is the part of a processor unit that executes arithmetic
operations.
 An arithmetic instruction may specify binary or decimal data, and in each case
the data may be in fixed-point or floating-point form.
 Fixed- point numbers may represent integers or fractions.
 Negative numbers may be in signed- magnitude or signed-complement
representation.
 The arithmetic processor is very simple if only a binary fixed-point add instruction is included.
 It would be more complicated if it includes all four arithmetic operations for
binary and decimal data in fixed-point and floating- point representation.

I Hardware Implementation and Algorithm for Addition

Hardware implementation

 To implement the two arithmetic operations with hardware, it is first necessary that the two numbers be stored in registers.

 Let A and B be two registers that hold the magnitudes of the numbers, and As and Bs be two flip-flops that hold the corresponding signs.
 The result of the operation may be transferred to a third register: however, a
saving is achieved if the result is transferred into A and As.
 Thus A and As together form an accumulator register.
 Consider now the hardware implementation of the algorithms above.
 First, a parallel-adder is needed to perform the micro-operation A + B.
 Second, a comparator circuit is needed to establish if A > B, A = B, or A < B.
 Third, two parallel-subtractor circuits are needed to perform the micro-
operations A - B and B - A.
 The sign relationship can be determined from an exclusive-OR gate with As and Bs as inputs.
 This procedure requires a magnitude comparator, an adder, and two subtractors.
 However, a different procedure can be found that requires less equipment.
 First, we know that subtraction can be accomplished by means of complement
and add.
 Second, the result of a comparison can be determined from the end carry after
the subtraction.
 Careful investigation of the alternatives reveals that the use of 2's complement
for subtraction and comparison is an efficient procedure that requires only an
adder and a complementer.
 Figure shows a block diagram of the hardware for implementing the addition and
subtraction operations.

 It consists of registers A and B and sign flip-flops As and Bs.
 Subtraction is done by adding A to the 2's complement of B.
 The output carry is transferred to flip-flop E, where it can be checked to
determine the relative magnitudes of the two numbers.
 The add-overflow flip-flop AVF holds the overflow bit when A and B are added.
 The A register provides other micro-operations that may be needed when we
specify the sequence of steps in the algorithm.
 The addition of A plus B is done through the parallel adder.
 The S(sum) output of the adder is applied to the input of the A register.
 The complementer provides an output of B or the complement of B depending
on the state of the mode control M.
 The complementer consists of exclusive-OR gates and the parallel adder consists
of full-adder circuits.
 The M signal is also applied to the input carry of the adder.
 When M = 0, the output of B is transferred to the adder, the input carry is 0, and
the output of the adder is equal to the sum A+B.
 When M = 1, the 1's complement of B is applied to the adder, the input carry is 1, and the output is S = A + B' + 1.
 This is equal to A plus the 2's complement of B, which is equivalent to the subtraction A - B.
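As a minimal sketch (an illustration, not the textbook hardware itself), the behaviour of the complementer plus parallel adder in the figure can be modelled in Python: the mode control M selects either B or its 1's complement and also drives the input carry, so M = 0 yields A + B and M = 1 yields A - B. The function name and word size are assumptions made for the example.

def add_sub(a, b, m, n=8):
    """Model of the n-bit parallel adder with an exclusive-OR complementer.
    m = 0 -> S = A + B (input carry 0)
    m = 1 -> S = A + B' + 1, i.e. A - B in 2's complement arithmetic."""
    mask = (1 << n) - 1
    b_in = (b ^ mask) if m else b   # complementer output: B or B'
    total = a + b_in + m            # M also feeds the input carry of the adder
    s = total & mask                # n-bit sum transferred into register A
    e = (total >> n) & 1            # end carry goes to flip-flop E
    return s, e

print(add_sub(13, 9, m=1))   # (4, 1): A - B with A >= B, so E = 1
print(add_sub(9, 13, m=1))   # (252, 0): 2's complement of 4, E = 0 signals A < B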

Hardware Algorithm
 An algorithm to multiply two numbers is known as the multiplication algorithm.
 The hardware multiply algorithm is used in digital electronics such as computers
to multiply binary digits.
 The figure shows the flowchart for the hardware multiply algorithm.
 In the flowchart shown in the figure, the multiplicand is in Y and the multiplier is in Q. The corresponding signs are held in Ys and Qs, respectively.
 These signs are compared, and both Xs and Qs are set to the sign of the product, because a double-length product will be stored in registers X and Q.

 The registers X and E are cleared. Then, the sequence counter (SC) is set to a number equal to the number of bits in the multiplier.
 It is assumed that the operands are transferred from a memory unit to the registers
having words of n bits.
 One bit of the word is occupied by the sign and the magnitude comprises n - 1
bits because the operand has to be stored with its sign.

 Once the initialization is done, the low-order bit of the multiplier in Q is tested.


 In case the bit is 1, the multiplicand in Y is added to the partial product that is held in X. In case the bit is 0, no action is performed.
 The SC is decreased by 1 and its new value is checked.
 In case it is not equal to 0, the process is repeated and a new partial product is
formed.
 This process is halted when SC is equal to 0.

 The partial product that is generated in X is shifted to Q, one bit at a time, and
replaces the multiplier eventually.
 The final product is saved in X and Q.
 Here, X contains the MSBs and Q contains the Least Significant Bits (LSBs).
Addition and Subtraction

 There are three ways of representing negative fixed-point binary numbers: signed-magnitude, signed-1's complement, or signed-2's complement.
 Most computers use the signed-2's complement representation when performing
arithmetic operations with integers.
 For floating- point operations, most computers use the signed-magnitude
representation for the mantissa.
 We develop the addition and subtraction algorithms for data represented in
signed- magnitude and again for data represented in signed-2's complement.

Addition and Subtraction with Signed-Magnitude Data

 The representation of numbers in signed-magnitude is familiar because it is used in everyday arithmetic calculations.
 The procedure for adding or subtracting two signed binary numbers with paper
and pencil is simple and straight-forward.
 We designate the magnitude of the two numbers by A and B.
 When the signed numbers are added or subtracted, we find that there are eight
different conditions to consider, depending on the sign of the numbers and the
operation performed.
 These conditions are listed in the first column of Table.
 The other columns in the table show the actual operation to be performed with
the magnitude of the numbers.
 The last column is needed to prevent a negative zero.
 In other words, when two equal numbers are subtracted, the result should be +0
not –0.
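The eight cases in the table boil down to a simple rule: when the (effective) signs agree, add the magnitudes; when they differ, subtract the smaller magnitude from the larger and take the sign of the larger, forcing +0 when the magnitudes are equal. A small illustrative sketch follows; the signs and magnitudes here are plain Python integers rather than the A, B, As, Bs registers.

def signed_mag_add(sa, a, sb, b, subtract=False):
    """Signed-magnitude addition/subtraction.
    sa, sb: signs (0 = plus, 1 = minus); a, b: magnitudes."""
    if subtract:
        sb ^= 1                  # subtraction = addition of -B
    if sa == sb:                 # like signs: add the magnitudes
        return sa, a + b
    if a > b:                    # unlike signs: subtract the smaller magnitude
        return sa, a - b
    if b > a:
        return sb, b - a
    return 0, 0                  # equal magnitudes: force +0, never -0

print(signed_mag_add(0, 5, 1, 5))                 # (+5) + (-5) = +0
print(signed_mag_add(1, 7, 0, 3, subtract=True))  # (-7) - (+3) = -10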

Addition and Subtraction with Signed-2’s Complement Data

 The signed-2's complement representation of numbers together with arithmetic algorithms for addition and subtraction are summarized here for easy reference.
 The leftmost bit of a binary number represents the sign bit: 0 for positive and 1
for negative.
 If the sign bit is 1, the entire number is represented in 2's complement form.
 Thus +33 is represented as 00100001 and -33 as 11011111.
 Note that 11011111 is the 2's complement of 00100001, and vice versa.
 The addition of two numbers in signed-2's complement form consists of adding
the numbers with the sign bits treated the same as the other bits of the number.
 A carry-out of the sign-bit position is discarded.
 The subtraction consists of first taking the 2's complement of the subtrahend and
then adding it to the minuend.
 When two numbers of n digits each are added and the sum occupies n + 1 digits,
we say that an overflow occurred.
 An overflow can be detected by inspecting the last two carries out of the addition.
 When the two carries are applied to an exclusive-OR gate, the overflow is
detected when the output of the gate is equal to 1.
 The register configuration for the hardware implementation is shown in Figure.
 The sign bits are not separated from the rest of the registers.
 We name the A register AC and the B register BR.

 The leftmost bit in AC and BR represent the sign bits of the numbers.
 The two sign bits are added or subtracted together with the other bits in the
complementer and parallel adder.
 The overflow flip-flop V is set to 1 if there is an overflow. The output carry in
this case is discarded.

 The algorithm for adding and subtracting two binary numbers in signed-2's
complement representation is shown in the flowchart of Figure.
 The sum is obtained by adding the contents of AC and BR (including their sign
bits).
 The overflow bit V is set to 1 if the exclusive-OR of the last two carries is 1, and
it is cleared to 0 otherwise.

 The subtraction operation is accomplished by adding the content of AC to the 2's complement of BR.

 Taking the 2's complement of BR has the effect of changing a positive number
to negative, and vice versa.
 An overflow must be checked during this operation because the two numbers
added could have the same sign.
 The programmer must realize that if an overflow occurs, there will be an
erroneous result in the AC register.
 Comparing this algorithm with its signed-magnitude counterpart, we note that it is much simpler to add and subtract numbers if negative numbers are maintained in signed-2's complement representation.
 For this reason most computers adopt this representation over the more familiar
signed-magnitude.
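A minimal sketch of the AC/BR procedure, assuming an 8-bit word; overflow is detected exactly as described above, by XORing the carries into and out of the sign position. The function names and word size are assumptions made for the example.

def add_2s_complement(ac, br, n=8):
    """Add two n-bit signed-2's-complement words; V is the overflow bit."""
    low = (1 << (n - 1)) - 1
    c_in = ((ac & low) + (br & low)) >> (n - 1)   # carry into the sign position
    total = ac + br
    c_out = total >> n                            # carry out of the sign position
    v = c_in ^ c_out                              # overflow if the two carries differ
    return total & ((1 << n) - 1), v

def sub_2s_complement(ac, br, n=8):
    """AC - BR = AC plus the 2's complement of BR."""
    return add_2s_complement(ac, (~br + 1) & ((1 << n) - 1), n)

print(add_2s_complement(0b00100001, 0b11011111))  # +33 + (-33) -> (0, 0)
print(add_2s_complement(0b01100100, 0b01100100))  # 100 + 100 -> overflow, V = 1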

Multiplication Algorithm

 Multiplication of two fixed-point binary numbers in signed-magnitude representation is done with paper and pencil by a process of successive shift and add operations.
 This process is best illustrated with a numerical example.

 The process consists of looking at successive bits of the multiplier, least significant bit first.
 If the multiplier bit is a 1, the multiplicand is copied down; otherwise, zeros are
copied down.
 The numbers copied down in successive lines are shifted one position to the left
from the previous number.
 Finally, the numbers are added and their sum forms the product.

 The sign of the product is determined from the signs of the multiplicand and
multiplier.
 If they are alike, the sign of the product is positive.
 If they are unlike, the sign of the product is negative.

Hardware Implementation for Signed-Magnitude Data

 When multiplication is implemented in a digital computer, it is convenient to change the process slightly.
 First, instead of providing registers to store and add simultaneously as many
binary numbers as there are bits in the multiplier, it is convenient to provide an
adder for the summation of only two binary numbers and successively
accumulate the partial products in a register.
 Second, instead of shifting the multiplicand to the left, the partial product is
shifted to the right, which results in leaving the partial product and the
multiplicand in the required relative positions.
 Third, when the corresponding bit of the multiplier is 0, there is no need to add
all zeros to the partial product since it will not alter its value.
 The hardware for multiplication consists of the equipment shown in Figure.
 The multiplier is stored in the Q register and its sign in Qs.
 The sequence counter SC is initially set to a number equal to the number of bits
in the multiplier.

 The counter is decremented by 1 after forming each partial product.
 When the content of the counter reaches zero, the product is formed and the
process stops.
 Initially, the multiplicand is in register B and the multiplier in Q.
 The sum of A and B forms a partial product which is transferred to the EA
register.
 Both partial product and multiplier are shifted to the right.
 This shift will be denoted by the statement shr EAQ to designate the right shift
depicted in Figure.
 The least significant bit of A is shifted into the most significant position of Q,
the bit from E is shifted into the most significant position of A, and 0 is shifted
into E.
 After the shift, one bit of the partial product is shifted into Q, pushing the
multiplier bits one position to the right.
 In this manner, the rightmost flip-flop in register Q, designated by Qn, will hold the bit of the multiplier that must be inspected next.
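The register-level steps just described (add B when the multiplier bit is 1, then shr EAQ, decrement SC) can be traced with a short sketch; the register names follow the figure, the operands are unsigned magnitudes, and the function name and word size n are assumptions of the example.

def multiply(b, q, n=8):
    """Shift-and-add multiply of two n-bit magnitudes.
    B = multiplicand, Q = multiplier; the double-length product ends up in A (high) and Q (low)."""
    a, e, sc = 0, 0, n
    mask = (1 << n) - 1
    while sc > 0:
        if q & 1:                              # Qn = 1: add the multiplicand to A
            total = a + b
            a, e = total & mask, (total >> n) & 1
        # shr EAQ: E -> msb of A, lsb of A -> msb of Q, 0 -> E
        q = (q >> 1) | ((a & 1) << (n - 1))
        a = (a >> 1) | (e << (n - 1))
        e = 0
        sc -= 1                                # decrement the sequence counter
    return (a << n) | q                        # double-length product in A and Q

print(multiply(23, 19))   # 437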

Division Algorithm

 The division of two fixed-point binary numbers in signed-magnitude representation is done by a cycle of successive compare, shift, and subtract operations.
 The binary division is easier than the decimal division because the quotient digit
is either 0 or 1.
 Also, there is no need to estimate how many times the divisor fits into the dividend or partial remainder.

Hardware Implementation

 The hardware required for the division operation is identical to that required for multiplication and consists of the following components –
 Here, Registers B is used to store divisor, and the double-length dividend is
stored in registers A and Q
 The information for the relative magnitude is given in E.
 A sequence Counter register (SC) is used to store the number of bits in the
dividend.

 Initially, the dividend is in A & Q and the divisor is in B.
 The sign of the result is transferred into Qs, to be part of the quotient.
 Then a constant is set into the SC to specify the number of bits in the quotient.
 Since an operand must be saved with its sign, one bit of the word will be
inhabited by the sign, and the magnitude will be composed of n -1 bits.
 The condition of divide-overflow is checked by subtracting the divisor in B from
the half of the bits of the dividend stored in A.
 If A ≥ B, DVF is set and the operation is terminated prematurely.
 If A < B, no overflow condition occurs and so the value of the dividend is
reinstated by adding B to A.
 The division of the magnitudes starts by shifting the dividend in AQ to the left, with the high-order bit shifted into E.

(Note: if the bit shifted into E is 1, then we know that EA > B, because EA comprises a 1 followed by n - 1 bits whereas B comprises only n - 1 bits.)
 In this case, B must be subtracted from EA, and a 1 is inserted into Qn for the quotient bit.
 If the shift-left operation (shl) inserts a 0 into E, the divisor is subtracted by
adding its 2’s complement value and the carry is moved into E.
 If E = 1, it means that A ≥ B; thus, Qn is set to 1. If E = 0, it means that A < B, and the original number is restored by adding B to A.
 Now, this process is repeated with register A containing the partial remainder.
Example of a binary division using digital hardware:

Divisor B = 10001, Dividend A = 0111000000

Final Remainder: 00110

Final Quotient: 11010

 Now, what if the divisor is greater than or equal to the dividend?
 In this situation no useful division cycle is needed, and the divide-overflow check handles the degenerate cases.
 If the divisor is equal to the dividend, the quotient is 1 and the remainder is 0; if the divisor is greater than the dividend, the quotient is 0 and the remainder equals the dividend.
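A compact sketch of the restoring-division loop described above, using plain integers for A (partial remainder), Q (quotient bits) and B (divisor); the divide-overflow check is the same A >= B test on the high-order half of the dividend. The function name and register width are assumptions of the example.

def divide(dividend, divisor, n=5):
    """Restoring division of an unsigned 2n-bit dividend by an n-bit divisor.
    Returns (quotient, remainder) or raises if divide overflow (DVF) occurs."""
    a, q = dividend >> n, dividend & ((1 << n) - 1)
    if a >= divisor:
        raise OverflowError("divide overflow (DVF set)")
    for _ in range(n):
        # shl AQ: the high-order bit of Q is shifted into A
        a = (a << 1) | (q >> (n - 1))
        q = (q << 1) & ((1 << n) - 1)
        if a >= divisor:          # the end carry E would be 1 after A - B
            a -= divisor
            q |= 1                # quotient bit = 1
        # otherwise A is left unchanged (the subtraction is "restored")
    return q, a

# the worked example: divisor B = 10001, dividend = 0111000000
print(divide(0b0111000000, 0b10001))   # (0b11010, 0b00110), i.e. quotient 26, remainder 6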

BOOTH MULTIPLICATION ALGORITHM

 The Booth algorithm is a multiplication algorithm that allows us to multiply two signed binary integers in 2's complement representation.
 It is also used to speed up the performance of the multiplication process.
 It is very efficient too.
 It works on the fact that a string of 0's in the multiplier requires no addition but only shifting, and a string of 1's in the multiplier running from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m.
 Following is the pictorial representation of the Booth's Algorithm:

 In the above flowchart, initially, AC and Qn + 1 are set to 0, and SC is a sequence counter that is set to n, the number of bits in the multiplier.
 BR holds the multiplicand bits, and QR holds the multiplier bits.
 Two bits of the multiplier are examined as Qn and Qn + 1, where Qn is the least significant bit of QR and Qn + 1 is an extra flip-flop appended to the right of Qn (it holds the previously shifted-out bit and is initially 0).
 Suppose the two multiplier bits are equal to 10; this means we have to subtract the multiplicand from the partial product in the accumulator AC and then perform the arithmetic shift right operation (ashr).
 If the two bits are equal to 01, we need to add the multiplicand to the partial product in accumulator AC and then perform the arithmetic shift right operation (ashr), including Qn + 1.
 The arithmetic shift operation is used in Booth's algorithm to shift the AC and QR bits to the right by one position while leaving the sign bit of AC unchanged.
 The sequence counter is decremented after each cycle, and the computational loop is repeated n times, once for each bit of the multiplier.

Working on the Booth Algorithm


1. Set the Multiplicand and Multiplier binary bits as M and Q, respectively.
2. Initially, we set the AC and Qn + 1 registers to 0.
3. SC is a sequence counter that holds the number of multiplier bits (n); it is decremented on each cycle until it reaches 0.
4. Qn represents the last (least significant) bit of Q, and Qn + 1 is an extra bit appended to the right of Qn, initially 0.
5. On each cycle of the Booth algorithm, the bits Qn and Qn + 1 are checked as follows:
i. When the two bits Qn and Qn + 1 are 00 or 11, we simply perform the arithmetic shift right operation (ashr) on the partial product AC and QR; Qn and Qn + 1 then hold the next pair of bits to be examined.

ii. If the bits Qn and Qn + 1 are 01, the multiplicand bits (M) are added to the AC (accumulator register). After that, we perform the arithmetic right shift of the AC and QR bits by 1.
iii. If the bits Qn and Qn + 1 are 10, the multiplicand bits (M) are subtracted from the AC (accumulator register). After that, we perform the arithmetic right shift of the AC and QR bits by 1.
6. The operation continues until the sequence counter reaches 0, i.e. for n cycles.
7. The result of the multiplication is stored in the AC and QR registers.

There are two methods used in Booth's Algorithm:

1. RSC (Right Shift Circular)


It shifts the right-most bit of the binary number, and then it is added to the
beginning of the binary bits.

2. RSA (Right Shift Arithmetic)


It adds the two binary numbers and then shifts the result to the right by one bit position, keeping (replicating) the sign bit.
Example: 0100 + 0110 = 1010; after the addition, shifting the result right arithmetically by one bit gives 1101, since the sign bit 1 is copied into the vacated left-most position.
Example: Multiply the two numbers 7 and 3 by using the Booth's multiplication
algorithm.
Ans.
 Here we have two numbers, 7 and 3.
 First of all, we need to convert 7 and 3 into binary numbers like 7 = (0111) and
3 = (0011).
 Now set 7 (in binary 0111) as multiplicand (M) and 3 (in binary 0011) as a
multiplier (Q).
 And SC (Sequence Count) represents the number of bits, and here we have 4
bits, so set the SC = 4.
 Also, it shows the number of iteration cycles of the booth's algorithms and then
cycles run SC = SC - 1 time.

Qn  Qn+1  Operation (M = 0111, M' + 1 = 1001)     AC    Q     Qn+1  SC
1   0     Initial values                          0000  0011  0     4
          Subtract M (add M' + 1 = 1001)          1001
          Perform arithmetic right shift (ashr)   1100  1001  1     3
1   1     Perform arithmetic right shift (ashr)   1110  0100  1     2
0   1     Add M (AC + 0111)                       0101  0100
          Perform arithmetic right shift (ashr)   0010  1010  0     1
0   0     Perform arithmetic right shift (ashr)   0001  0101  0     0
 The numerical example of the Booth's multiplication algorithm is 7 x 3 = 21, and the binary representation of 21 is 10101.
 Here, we get the result in binary as 00010101. Converting it to decimal: (00010101)2 = 2^4 + 2^2 + 2^0 = 16 + 4 + 1 = 21.
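The same 7 x 3 trace can be reproduced with a short sketch of Booth's algorithm for 4-bit registers; ashr replicates the sign bit of AC exactly as in the table above. The function name and the final sign interpretation are assumptions of the example.

def booth_multiply(m, q, n=4):
    """Booth's algorithm for n-bit signed (2's complement) operands.
    Returns the signed value of the 2n-bit product held in AC and QR."""
    mask = (1 << n) - 1
    ac, qr, q_extra = 0, q & mask, 0               # AC, QR, Qn+1
    for _ in range(n):                              # SC = n cycles
        qn = qr & 1
        if (qn, q_extra) == (1, 0):                 # 10: AC <- AC - M
            ac = (ac - m) & mask
        elif (qn, q_extra) == (0, 1):               # 01: AC <- AC + M
            ac = (ac + m) & mask
        # ashr of AC, QR and Qn+1 (the sign bit of AC is replicated)
        q_extra = qn
        qr = (qr >> 1) | ((ac & 1) << (n - 1))
        ac = (ac >> 1) | (ac & (1 << (n - 1)))
    product = (ac << n) | qr
    if product & (1 << (2 * n - 1)):                # interpret as a signed 2n-bit value
        product -= 1 << (2 * n)
    return product

print(booth_multiply(7, 3))          # 21
print(booth_multiply(7, -3 & 0xF))   # -21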
FLOATING POINT ARITHMETIC
 We assume that each floating-point number has a mantissa in signed-magnitude representation and a biased exponent.
 Thus the AC has a mantissa whose sign is in As, and a magnitude that is in A.
 The diagram shows the most significant bit of A, labeled by A1.
 The bit in this position must be a 1 for the number to be normalized.
 Note that the symbol AC represents the entire register, that is, the concatenation
of As, A and a.

 In the similar way, register BR is subdivided into Bs, B, and b and QR into Qs,
Q and q.
 A parallel-adder adds the two mantissas and loads the sum into A and the carry
into E.
 A separate parallel adder can be used for the exponents.
 The exponents do not have a distinct sign bit because they are represented as a biased positive quantity.
 It is assumed that the floating-point numbers are such that the chance of an exponent overflow is very remote, and so exponent overflow will be neglected.
 The exponents are also connected to a magnitude comparator that provides three
binary outputs to indicate their relative magnitude.
 The numbers in the mantissa will be taken as fractions, so the binary point is assumed to reside to the left of the magnitude part.
 Integer representation for floating point causes certain scaling problems during
multiplication and division.
 To avoid these problems, we adopt a fraction representation.

 The numbers in the registers should initially be normalized. After each arithmetic
operation, the result will be normalized.
 Thus all floating-point operands are always normalized.
Addition and Subtraction of Floating Point Numbers
 During addition or subtraction, the two floating-point operands are kept in AC
and BR.
 The sum or difference is formed in the AC.
 The algorithm can be divided into four consecutive parts:
 Check for zeros.
 Align the mantissas.
 Add or subtract the mantissas
 Normalize the result
 A floating-point number cannot be normalized, if it is 0.
 If this number is used for computation, the result may also be zero.
 Instead of checking for zeros during the normalization process we check for
zeros at the beginning and terminate the process if necessary.
 The alignment of the mantissas must be carried out prior to their operation.
 After the mantissas are added or subtracted, the result may be un-normalized.
 The normalization procedure ensures that the result is normalized before it is
transferred to memory.
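A simplified sketch of the four steps (check for zeros, align the mantissas, add or subtract, normalize); the mantissas here are signed integers of a fixed bit width standing in for the fraction registers, so this is an illustration of the algorithm rather than the textbook hardware, and the function name is an assumption.

def fp_add(m1, e1, m2, e2, nbits=8):
    """Add two floating-point numbers given as (mantissa, exponent) pairs.
    A normalized mantissa magnitude uses exactly nbits bits."""
    # 1. check for zeros
    if m1 == 0:
        return m2, e2
    if m2 == 0:
        return m1, e1
    # 2. align the mantissas: shift the one with the smaller exponent right
    while e1 < e2:
        m1 >>= 1; e1 += 1
    while e2 < e1:
        m2 >>= 1; e2 += 1
    # 3. add (or effectively subtract, if the signs differ) the mantissas
    m, e = m1 + m2, e1
    if m == 0:
        return 0, 0
    # 4. normalize the result
    while abs(m) >= (1 << nbits):        # mantissa overflow: shift right, increment exponent
        m >>= 1; e += 1
    while abs(m) < (1 << (nbits - 1)):   # leading bit of the magnitude must be 1
        m <<= 1; e -= 1
    return m, e

# (0.1101 x 2^3) + (0.1010 x 2^1) with 8-bit mantissas = 0.11111 x 2^3
print(fp_add(0b11010000, 3, 0b10100000, 1))   # (0b11111000, 3)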

UNIT IV
Input Output Organization
Input – Output Interface – Asynchronous data transfer – Modes of transfer – Priority
Interrupt – Direct Memory Access (DMA).

INPUT AND OUTPUT INTERFACE


 The input-output interface is a method that helps in transferring information between the internal storage device, i.e. memory, and the external peripheral devices.
 A peripheral device is one that provides input to or output from the computer; such devices are also called input-output devices.


 For example, a keyboard and mouse, which provide input to the computer, are called input devices, while a monitor and printer, which provide output from the computer, are called output devices.
 Some peripheral devices, such as external hard drives, are able to provide both input and output.

 The input/output interface is required because there exist many differences between the central computer and each peripheral when transferring information.
 Some major differences are:
1. Peripherals are electromechanical and electromagnetic devices and their manner of operation is different from the operation of the CPU and memory, which are electronic devices. Therefore, a conversion of signal values may be required.

2. The data transfer rate of peripherals is usually slower than the transfer rate of
CPU, and consequently a synchronisation mechanism is needed.
3. Data codes and formats in peripherals differ from the word format in the CPU
and Memory.
4. The operating modes of peripherals differ from each other, and each must be controlled so as not to disturb the operation of the other peripherals connected to the CPU.
 There are four types of commands that an interface may receive.
 They are classified as control, status, data output, and data input.
Control Command:
 A control command is issued to activate the peripheral and to inform it what
to do.
 For example, a magnetic tape unit may be instructed to backspace the tape by
one record, to rewind the tape, or to start the tape moving in the forward
direction.
 The particular control command issued depends on the peripheral, and each
peripheral receives its own distinguished sequence of control commands,
depending on its mode of operation.
Status
 A status command is used to test various status conditions in the interface and
the peripheral.
 For example, the computer may wish to check the status of the peripheral before
a transfer is initiated.
 During the transfer, one or more errors may occur which are detected by the
interface.
 These errors are designated by setting bits in a status register that the processor
can read at certain intervals.
Output Data
 A data output command causes the interface to respond by transferring data from
the bus into one of its registers.
 Consider an example with a tape unit.
 The computer starts the tape moving by issuing a control command.
 The processor then monitors the status of the tape by means of a status command.
 When the tape is in the correct position, the processor issues a data output
command.
 The interface responds to the address and command and transfers the information
from the data lines in the bus to its buffer register.
 The interface then communicates with the tape controller and sends the data to
be stored on tape.
Input Data
 The data input command is the opposite of the data output.
 In this case the interface receives an item of data from the peripheral and places
it in its buffer register.
 The processor checks if data are available by means of a status command and
then issues a data input command.
 The interface places the data on the data lines, where they are accepted by the
processor.
I/O versus Memory Bus
 In addition to communicating with I/O, the processor must communicate with
the memory unit.
 Like the I/O bus, the memory bus contains data, address, and read/write control
lines.
 There are three ways that computer buses can be used to communicate with
memory and I/O:
1. Use two separate buses, one for memory and the other for I/O.
2. Use one common bus for both memory and I/O but have separate
control lines for each.
3. Use one common bus for memory and I/O with common control lines.
ASYNCHRONOUS DATA TRANSFER
 Asynchronous data transfer is a method of data transmission where data is sent
in a non-continuous, non-synchronous manner.

 This means that data is sent at irregular intervals, without any specific timing or
synchronization between the sending and receiving devices.
 The internal operations in an individual unit of a digital system are synchronized
using clock pulse.
 It means clock pulse is given to all registers within a unit.
 And all data transfer among internal registers occurs simultaneously during the
occurrence of the clock pulse.
 Now, suppose any two units of a digital system are designed independently, such
as CPU and I/O interface.
 If the registers in the I/O interface share a common clock with CPU registers,
then transfer between the two units is said to be synchronous.
 But in most cases, the internal timing in each unit is independent of each other,
so each uses its private clock for its internal registers.
 In this case, the two units are said to be asynchronous to each other, and if data
transfer occurs between them, this data transfer is called Asynchronous Data
Transfer.
Classification of Asynchronous Data Transfer
 Strobe Control Method
 Handshaking Method
Strobe Control Method:
 The Strobe Control mode of asynchronous data transfer employs only one
control line to time each transfer.
 This control line is known as a strobe, and we may achieve it either by destination
or source, depending upon the one who initiates the data transfer.
 Source initiated strobe: In the below figure, we can see that the source initiates
the strobe, and as shown in the diagram, the source unit will first place the data
on the data bus.

 In the figure, we may see that the source unit initializes the strobe.
 In the timing diagram, we can notice that the source unit first places the data on
the data bus.
 Then, after a brief delay to ensure that the data have settled to a stable value, the source unit activates the strobe pulse.
 The strobe control signal and data bus information remain in the active state for
enough time to permit the destination unit to receive the data.
 Destination initiated strobe: In the below figure, we can see that the destination
unit initiates the strobe, and as shown in the timing diagram, the destination unit
activates the strobe pulse first by informing the source to provide the data.

 The source unit responds by placing the requested binary information on the data
bus.
 The data must be valid and remain on the bus long enough for the destination
unit to accept it.
 The falling edge of the strobe pulse can be used to trigger a destination register.
 The destination unit then disables the strobe. Finally, the source removes the data from the data bus after a predetermined time interval.
 In this case, the strobe may be a memory read control from the CPU to a memory
unit.
 The CPU initiates the read operation to inform the memory, which is a source
unit, to place the selected word into the data bus.
Handshaking Method
 The strobe method has the disadvantage that the source unit that initiates the
transfer has no way of knowing whether the destination has received the data
that was placed in the bus.
 Similarly, a destination unit that initiates the transfer has no way of knowing
whether the source unit has placed data on the bus.
 So this problem is solved by the handshaking method.
 The handshaking method introduces a second control signal line that provides a reply to the unit that initiates the transfer.
 In this method, one control line is in the same direction as the data flow in the
bus from the source to the destination.
 The source unit uses it to inform the destination unit whether there are valid data
in the bus.
 The other control line is in the other direction from the destination to the source.
 This is because the destination unit uses it to inform the source whether it can
accept data.
 And in it also, the sequence of control depends on the unit that initiates the
transfer.
 So it means the sequence of control depends on whether the transfer is initiated
by source and destination.

Source initiated handshaking: In the below block diagram, you can see that two
handshaking lines are "data valid", which is generated by the source unit, and "data
accepted", generated by the destination unit.

 The timing diagram shows the timing relationship of the exchange of signals
between the two units.
 The source initiates a transfer by placing data on the bus and enabling its data
valid signal.
 The destination unit then activates the data accepted signal after it accepts the
data from the bus.
 The source unit then disables its valid data signal, which invalidates the data on
the bus.
 After this, the destination unit disables its data accepted signal, and the system
goes into its initial state.
 The source unit does not send the next data item until after the destination unit
shows readiness to accept new data by disabling the data accepted signal.
 This sequence of events is described in a sequence diagram, which shows the state in which the system is present at any given time.
Destination initiated handshaking: In the below block diagram, you see that the two
handshaking lines are "data valid", generated by the source unit, and "ready for data"
generated by the destination unit.
 Note that the name of signal data accepted generated by the destination unit has
been changed to ready for data to reflect its new meaning.

 Here the transfer is initiated by the destination, so the source unit does not place data on the data bus until it receives a ready-for-data signal from the destination unit.
 After that, the handshaking process is the same as that of the source initiated.
 The sequence of events is shown in its sequence diagram, and the timing
relationship between signals is shown in its timing diagram.
 Therefore, the sequence of events in both cases would be identical.
MODE OF TRANSFER
 We store the binary information received through an external device in the
memory unit.
 The information transferred from the CPU to external devices originates from
the memory unit.

 Although the CPU processes the data, the target and source are always the
memory unit.
 We can transfer this information using three different modes of transfer.
 Programmed I/O
 Interrupt- initiated I/O
 Direct memory access( DMA)
Programmed I/O
 Programmed I/O uses the I/O instructions written in the computer program.
 The instructions in the program initiate every data item transfer.
 Usually, the data transfer is between a CPU register and memory.
 This method requires constant monitoring of the peripheral device by the CPU.
 In programmed I/O, the CPU stays in the program loop until the I/O unit
indicates that it is ready for data transfer.
 This is a time consuming process since it needlessly keeps the CPU busy.
Advantages:
 Programmed I/O is simple to implement.
 It requires very little hardware support.
 CPU checks status bits periodically.
Disadvantages:
 The processor has to wait for a long time for the I/O module to be ready
for either transmission or reception of data.
 The performance of the entire system is severely degraded.
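A tiny sketch of the busy-wait loop that programmed I/O implies; the device object with ready() and read_byte() methods is hypothetical and stands in for the interface status and data registers.

def programmed_io_read(device, count):
    """Read `count` bytes by polling the device status flag (busy waiting).
    The CPU stays in this loop until the interface reports it is ready."""
    buffer = []
    for _ in range(count):
        while not device.ready():      # the CPU does no useful work here
            pass
        buffer.append(device.read_byte())
    return buffer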
Interrupt- initiated I/O
 In the above section, we saw that the CPU is kept busy unnecessarily.
 We can avoid this situation by using an interrupt-driven method for data transfer.
 The interrupt facilities and special commands inform the interface for issuing an
interrupt request signal as soon as the data is available from any device.
 In the meantime, the CPU can execute other programs, and the interface will
keep monitoring the i/O device.
 Whenever the interface determines that the device is ready for data transfer, it initiates an interrupt request signal to the CPU.

 As soon as the CPU detects an external interrupt signal, it stops the program it
was already executing, branches to the service program to process the I/O
transfer, and returns to the program it was initially running.
Working of CPU in terms of interrupts:
o CPU issues read command.
o It starts executing other programs.
o Check for interruptions at the end of each instruction cycle.
On interruptions:-
o Process interrupt by fetching data and storing it.
o See operating system notes.
o Starts working on the program it was executing.
Advantages:
 It is faster and more efficient than Programmed I/O.
 It requires very little hardware support.
 CPU does not check status bits periodically.
Disadvantages:
 It can be tricky to implement if using a low-level language.
 It can be tough to get various pieces of work well together.
 The hardware manufacturer / OS maker usually implements it, e.g., Microsoft.
Direct Memory Access (DMA)
 The data transfer between fast storage media, such as a memory unit and a magnetic disk, is limited by the speed of the CPU.
 Thus it will be best to allow the peripherals to directly communicate with the
storage using the memory buses by removing the intervention of the CPU.
 This mode of transfer of data technique is known as Direct Memory Access
(DMA).
 During Direct Memory Access, the CPU is idle and has no control over the
memory buses.
 The DMA controller takes over the buses and directly manages data transfer
between the memory unit and I/O devices.

Bus Request - We use bus requests in the DMA controller to ask the CPU to
relinquish the control buses.
Bus Grant - CPU activates bus grant to inform the DMA controller that DMA
can take control of the control buses. Once the control is taken, it can transfer
data in many ways.
Types of DMA transfer using DMA controller:
Burst Transfer:
 In this transfer, DMA will return the bus control after the complete data transfer.
 A register is used as a byte count, which decrements for every byte transfer, and
once it becomes zero, the DMA Controller will release the control bus.
 When the DMA Controller operates in burst mode, the CPU is halted for the
duration of the data transfer.
Cycle Stealing:
 It is an alternative method for data transfer in which the DMA controller will
transfer one word at a time.
 After that, it will return the control of the buses to the CPU.
 The CPU operation is only delayed for one memory cycle to allow the data
transfer to “steal” one memory cycle.

Advantages
o It is faster in data transfer without the involvement of the CPU.
o It improves overall system performance and reduces CPU workload.
o It deals with large data transfers, such as multimedia and files.
Disadvantages
o It is costly and complex hardware.
o It has limited control over the data transfer process.
o Risk of data conflicts between CPU and DMA.
PRIORITY INTERRUPT
 It is a system responsible for selecting the priority at which devices generating
interrupt signals simultaneously should be serviced by the CPU.
 High-speed transfer devices are generally given high priority, and slow devices
have low priority.
 And, in case of multiple devices sending interrupt signals, the device with high
priority gets the service first.
Types of Interrupts
Hardware Interrupt
 If interrupt signals are sent by devices connected externally, the interrupt is a
hardware interrupt.
 Following are the types of hardware interrupts:
Maskable Interrupt
The hardware interrupt can be postponed when an interrupt with high
priority occurs at the exact moment.
Non Maskable Interrupt
This hardware interrupt cannot be postponed and should be processed
immediately by the processor.
Software Interrupt
 If interrupt signals are caused due to an internal system, the interrupt is known
as a software interrupt.
 Following are the types of software interrupts:

Normal interrupt
If interrupt signals are caused due to instructions of software, the interrupt
is known as a normal interrupt.
Exception
If interrupt signals are caused unexpectedly at the time of execution of
any program, the interrupt is an exception. For example, division by zero.
Methods for establishing priority of simultaneous Interrupts
Daisy Chaining Priority

 This method uses hardware to establish the priority of simultaneous interrupts.


 Deciding the interrupt priority includes the serial connection of all the devices
that generate an interrupt signal.
 The devices are placed according to their priority such that the device having the
highest priority gets placed first, followed by lower priority devices.
 The device with the lowest priority is found at last within the chain.
 In the daisy-chaining device, all devices are linked in serial form.
 The interrupt request line is common to all the devices.
 If any one of the devices has its interrupt signal in the low-level state, the interrupt line goes to the low-level state and enables the interrupt input of the CPU.
 When no interrupts are pending, the interrupt line remains in the high-level state.

 The CPU responds to the interrupt by enabling the interrupt acknowledge line.
 This signal is received by device 1 at its PI input.
 The acknowledge signal passes to the next device through the PO output if device 1 is not requesting an interrupt.
Parallel Priority Interrupt

 The parallel priority interrupt method uses a register whose bits are set separately by the interrupt signal from each device.
 Priority is established according to the position of the bits inside the register.
 In addition to the interrupt register, the circuit may include a mask register whose purpose is to control the status of each interrupt request.
 The mask register can be programmed to disable lower-priority interrupts while a higher-priority device is being serviced.

 It can also provide a facility that permits a high-priority device to interrupt the CPU while a lower-priority device is being serviced.
 The figure above shows the logic for deciding priority among four interrupt
source systems.
 It includes an interrupt register whose individual bits are set through external
conditions and cleared by program instructions.
 Being a high-speed device, the magnetic disk is given the highest priority.
 The printer has the next priority, followed by a character reader and a keyboard.
 The number of bits present in the mask register and interrupt register is the same.
 Setting or resetting any bit within the mask register is feasible using software
instructions.
 Each interrupt bit and its corresponding mask bit are ANDed to produce the four inputs to the priority encoder.
 In this manner, an interrupt is recognized only if its corresponding mask bit has been set to 1 by the program.
 Two bits of the vector address are transferred to the CPU, generated by the
priority encoder.
 Another output from the encoder sets the interrupt status flip-flop IST when an unmasked interrupt occurs.
 The interrupt enable flip-flop IEN may be set or cleared by the program to provide overall control over the interrupt system.
 The outputs of IST ANDed with IEN provide a common interrupt signal for the CPU.
 The interrupt acknowledge signal INTACK from the CPU enables the bus buffers in the output register, and a vector address VAD is placed onto the data bus.
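The interrupt-register / mask-register / priority-encoder logic can be sketched as follows, with bit 3 = magnetic disk (highest priority) down to bit 0 = keyboard; the function name and bit assignment are assumptions made purely for illustration.

def priority_encode(interrupt_reg, mask_reg, ien=1):
    """Return (IST, vector) for a four-source parallel priority interrupt.
    An interrupt is recognized only if its mask bit is 1 and IEN is set."""
    pending = interrupt_reg & mask_reg        # AND each interrupt bit with its mask bit
    for source in (3, 2, 1, 0):               # bit 3 has the highest priority
        if pending & (1 << source):
            ist = 1
            vector = source                   # 2-bit vector address (VAD)
            return ist & ien, vector
    return 0, None

# disk (bit 3) and keyboard (bit 0) both request; the disk wins
print(priority_encode(0b1001, 0b1111))   # (1, 3)
# disk masked off: the keyboard is serviced instead
print(priority_encode(0b1001, 0b0111))   # (1, 0)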
Polling method
 Polling is a software method.
 It is used to establish priority among interrupts occurring simultaneously.

 When the processor detects an interrupt in the polling method, it branches to an interrupt service routine whose job is to poll each input/output module to determine which module caused the interrupt.
 The poll can be in the form of a different command line (For example, Test
Input/Output).
 Here, the processor raises the Test I/O signal and places the address of a specific I/O module on the address lines.
 If that module raised the interrupt, it responds positively.
 Also, it is the order by which they are tested; that is, the order in which they
appear in the address line or service routine determines the priority of every
interrupt.
 Like, at the time of testing, devices with the highest priority get tested, then
comes the turn of devices with lower priority.
 This is the easiest method for priority establishment on simultaneous interrupt.
 But the downside of polling is that it takes time.
DIRECT MEMORY ACCESS (DMA)
 Direct Memory Access uses hardware for accessing the memory; that hardware is called a DMA controller.
 Its job is to transfer data between input-output devices and main memory with minimal interaction with the processor.
 The direct Memory Access Controller is a control unit, which has the work of
transferring data.
 DMA Controller is a type of control unit that works as an interface for the data
bus and the I/O Devices.
 As mentioned, the DMA controller transfers the data without the intervention of the processor; the processor only initiates and supervises the transfer.
 DMA Controller also contains an address unit, which generates the address and
selects an I/O device for the transfer of data.
 Here we are showing the block diagram of the DMA Controller.

Types of Direct Memory Access (DMA)
There are four popular types of DMA.
 Single-Ended DMA
 Dual-Ended DMA
 Arbitrated-Ended DMA
 Interleaved DMA
Single-Ended DMA: Single-Ended DMA Controllers operate by reading and writing
from a single memory address. They are the simplest DMA.
Dual-Ended DMA: Dual-Ended DMA controllers can read and write from two
memory addresses. Dual-ended DMA is more advanced than single-ended DMA.
Arbitrated-Ended DMA: Arbitrated-Ended DMA works by reading and writing to
several memory addresses. It is more advanced than Dual-Ended DMA.
Interleaved DMA: Interleaved DMA are those DMA that read from one memory
address and write from another memory address.
Working of DMA Controller
The DMA controller registers have three registers as follows.

Address register – It contains the address to specify the desired location in memory.
Word count register – It contains the number of words to be transferred.
Control register – It specifies the transfer mode.
 The figure below shows the block diagram of the DMA controller.
 The unit communicates with the CPU through the data bus and control lines.
 The register within the DMA is selected by the CPU through the address bus by enabling the DS (DMA select) and RS (register select) inputs.
 The RD (read) and WR (write) inputs are bidirectional.
 When BG (bus grant) input is 0, the CPU can communicate with DMA registers.
 When BG (bus grant) input is 1, the CPU has relinquished the buses and DMA
can communicate directly with the memory.

 The CPU initializes the DMA by sending the given information through the
data bus.
 The starting address of the memory block where the data is available (to read)
or where data are to be stored (to write).
 It also sends word count which is the number of words in the memory block
to be read or written.
 Control to define the mode of transfer such as read or write.
 A control to begin the DMA transfer
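The register set described above (address register, word count register, control register) can be mimicked with a small sketch of a burst-mode transfer; memory and device_read are placeholders for the real bus and peripheral, not part of any actual DMA API.

def dma_burst_transfer(memory, device_read, start_address, word_count):
    """Simulate a DMA burst transfer: the controller holds the buses and copies
    word_count words from the device into memory, incrementing its address
    register and decrementing its word count register."""
    address = start_address               # address register
    count = word_count                    # word count register
    while count > 0:
        memory[address] = device_read()   # one word moved per memory cycle
        address += 1
        count -= 1
    return address, count                 # count == 0: transfer complete, interrupt the CPU

ram = [0] * 16                            # a list standing in for main memory
words = iter([0xA1, 0xB2, 0xC3])
dma_burst_transfer(ram, lambda: next(words), start_address=4, word_count=3)
print(ram[4:7])                           # [161, 178, 195]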
Modes of Data Transfer in DMA
There are 3 modes of data transfer in DMA that are described below.
Burst Mode:
 In burst mode, the buses are handed back to the CPU by the DMA only after the whole block of data has been completely transferred, not before that.
Cycle Stealing Mode:
 In Cycle Stealing Mode, buses are handed over to the CPU by the DMA after the
transfer of each byte.
 Continuous request for bus control is generated by this Data Transfer Mode.
 It works more easily for higher-priority tasks.
Transparent Mode:
 Transparent Mode in DMA does not require any bus in the transfer of the data
as it works when the CPU is executing the transaction.
Advantages of DMA Controller
o Direct Memory Access speeds up memory operations and data transfer.
o CPU is not involved while transferring data.
o DMA requires very few clock cycles while transferring data.
o DMA distributes workload very appropriately.
o DMA helps the CPU in decreasing its load.
Disadvantages of DMA Controller
o Direct Memory Access is a costly operation because of additional operations.
o DMA suffers from Cache-Coherence Problems.
o DMA Controller increases the overall cost of the system.
o DMA Controller increases the complexity of the software.

UNIT V
MEMORY ORGANISATION
Memory Hierarchy - Main memory - Auxillary memory - Associative memory - Cache
memory - Virtual memory.
Memory
 Memory refers to the location of short-term data, while storage refers to the
location of data stored on a long-term basis.
 Memory is most often referred to as the primary storage on a computer, such as
RAM.
 Memory is also where information is processed.
 It enables users to access data that is stored for a short time.
MEMORY HIERARCHY
 The memory hierarchy is the arrangement of various types of storage on a
computing system based on access speed.
 It organizes computer storage according to response time.
 Since response time, complexity, and capacity are all connected, the levels can
also be distinguished by their performance and controlling technologies.

 As shown in the above picture, the computer memory has a pyramid-like


structure.
 It is used to describe the different levels of memory.
 It separates the computer storage based on hierarchy.

 As you can see, capacity is increasing with time.
 This Memory Hierarchy Design is divided into 2 types:
Primary or internal memory
 It consists of CPU registers, Cache Memory, Main Memory, and these are
directly accessible by the processor.
Secondary or external memory
 It consists of a Magnetic Disk, Optical Disk, Magnetic Tape, which are
accessible by processor via I/O Module.
Types of Memory Hierarchy
The Memory hierarchy is further classified into two types:
 Internal Memory
 External Memory
1. Internal Memory
 It is also called primary memory.
 Internal memory is a part of your computer that, when running, can store small
amounts of data that need to be accessed quickly.
 This type of memory is directly accessible by the processor.
 It consists of RAM, ROM, and cache memory.
RAM
 It is a volatile memory that is used to store whatever is in use by the computer,
 It acts as a middle man between the CPU and the storage device, which helps
speed up the computer.
 It has many forms, but the most common type is DDR3.
ROM
 ROM, unlike other internal memory, is non-volatile.
 ROM stands for read-only memory, meaning the user cannot write data to the
ROM without special access.
 It was designed so that the computer can access the BIOS without relying on other parts of the hardware.

Cache
 It is a really small, super-fast memory found in various computer components.
 The CPU cache stores small bits of frequently accessed data from the RAM so
that the processor doesn't have to wait for the RAM to respond every time it
wants the same piece of information.
 Like RAM, it is volatile and wipes clear whenever the computer gets turned off.
2. External Memory
 It is also known as secondary memory.
 Since it has a huge capacity, it stores massive data.
 Presently, it can measure the data in hundreds of megabytes or even in gigabytes.
 The critical property of external memory is that stored information will not be
lost whenever the computer switches off.
 It consists of magnetic tape, a magnetic disk, and an optical disk.
Magnetic tape
 It is a medium for magnetic storage, made of a thin, magnetizable coating on a long, narrow strip of plastic film.
 It is used for many purposes, such as recording audio and video and storing computer data for backup and archival purposes.
Magnetic disk
 Magnetic disks are flat circular plates of metal or plastic coated on both sides
with iron oxide.
 Input signals, which may be audio, video, or data, are recorded on the surface of
a disk as magnetic patterns or spots in spiral tracks by a recording head while a
drive unit rotates the disk.
 It is relatively cheap per unit of storage, with fast access and retrieval times compared to other storage devices.
Optical disk
 It is an electronic data storage medium that can be written to and read using a
low-powered laser beam.

 Optical disks are often stored in exceptional cases, sometimes called jewel cases,
and are most commonly used for digital preservation, music, video, or data and
programs for personal computers.
 Optical media provides many advantages for storing data over conventional
magnetic disks, such as mass storage capacity, mountable/unmountable storage
units, and low cost per bit of storage.

Auxiliary Memory

 Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage in a computer system.
 Auxiliary memory provides storage for programs and data that are kept for long-
term storage or when not in immediate use.
 The most common examples of auxiliary memories are magnetic tapes and
magnetic disks.
 A magnetic disk is a digital computer memory that uses a magnetization process
to write, rewrite and access data.
 For example, hard drives, zip disks, and floppy disks.

Main Memory
 The main memory in a computer system is often referred to as Random Access
Memory (RAM).
 This memory unit communicates directly with the CPU and with auxiliary
memory devices through an I/O processor.
 The programs that are not currently required in the main memory are transferred
into auxiliary memory to provide space for currently used programs and data.
I/O Processor

 The primary function of an I/O Processor is to manage the data transfers between
auxiliary memories and the main memory.

Cache Memory

 The data or contents of the main memory that are used frequently by CPU are
stored in the cache memory so that the processor can easily access that data in a
shorter time.
 Whenever the CPU requires accessing memory, it first checks the required data
into the cache memory.
 If the data is found in the cache memory, it is read from the fast memory.
 Otherwise, the CPU moves onto the main memory for the required data.
MAIN MEMORY
 The main memory acts as the central storage unit in a computer system.
 It is a relatively large and fast memory which is used to store programs and data
during the run time operations.
 The primary technology used for the main memory is based on semiconductor
integrated circuits.
 The integrated circuits for the main memory are classified into two major units.

1. RAM (Random Access Memory) integrated circuit chips


2. ROM (Read Only Memory) integrated circuit chips

RAM integrated circuit chips


 The RAM integrated circuit chips are further classified into two possible
operating modes, static and dynamic.
 The primary compositions of a static RAM are flip-flops that store the binary
information.
 The nature of the stored information is volatile, i.e. it remains valid as long as
power is applied to the system.
 The static RAM is easy to use and takes less time performing read and write
operations as compared to dynamic RAM.
 The dynamic RAM stores the binary information in the form of electric charges applied to capacitors.
 The capacitors are integrated inside the chip by MOS transistors.

 The dynamic RAM consumes less power and provides large storage capacity in
a single memory chip.
 RAM chips are available in a variety of sizes and are used as per the system
requirement.
 The following block diagram demonstrates the chip interconnection in a 128 * 8
RAM chip.

o A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one
byte) per word. This requires a 7-bit address and an 8-bit bidirectional data bus.
o The 8-bit bidirectional data bus allows the transfer of data either from memory
to CPU during a read operation or from CPU to memory during
a write operation.
o The read and write inputs specify the memory operation, and the two chip select
(CS) control inputs are for enabling the chip only when the microprocessor
selects it.
o The bidirectional data bus is constructed using three-state buffers.
o The output generated by three-state buffers can be placed in one of the three
possible states which include a signal equivalent to logic 1, a signal equal to logic
0, or a high-impedance state.

The following function table specifies the operations of a 128 * 8 RAM chip.
 From the functional table, we can conclude that the unit is in operation only when
CS1 = 1 and CS2 = 0.

 The bar on top of the second select variable indicates that this input is enabled
when it is equal to 0.
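The function table can be expressed directly in a sketch: the chip is enabled only when CS1 = 1 and CS2 = 0, and RD/WR then select the operation. The class name and the Python list standing in for the 128 words are assumptions of the example.

class RamChip128x8:
    """128 x 8 RAM chip: selected only when CS1 = 1 and CS2 = 0."""
    def __init__(self):
        self.cells = [0] * 128

    def access(self, cs1, cs2, rd, wr, address, data_in=None):
        if not (cs1 == 1 and cs2 == 0):
            return None                  # data bus left in the high-impedance state
        address &= 0x7F                  # 7-bit address selects one of 128 words
        if wr:
            self.cells[address] = data_in & 0xFF
            return None
        if rd:
            return self.cells[address]
        return None

chip = RamChip128x8()
chip.access(cs1=1, cs2=0, rd=0, wr=1, address=10, data_in=0x5A)
print(chip.access(cs1=1, cs2=0, rd=1, wr=0, address=10))   # 90
print(chip.access(cs1=0, cs2=0, rd=1, wr=0, address=10))   # None: chip not selected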

ROM integrated circuit


 The primary component of the main memory is RAM integrated circuit chips,
but a portion of memory may be constructed with ROM chips.
 A ROM memory is used for keeping programs and data that are permanently
resident in the computer.
 Apart from the permanent storage of data, the ROM portion of main memory is
needed for storing an initial program called a bootstrap loader.
 The primary function of the bootstrap loader program is to start the computer
software operating when power is turned on.
 ROM chips are also available in a variety of sizes and are also used as per the
system requirement.
 The following block diagram demonstrates the chip interconnection in a 512 * 8
ROM chip.

o A ROM chip has a similar organization as a RAM chip.
o However, a ROM can only perform read operation; the data bus can only operate
in an output mode.
o The 9-bit address lines in the ROM chip specify any one of the 512 bytes stored
in it.
o The value for chip select 1 and chip select 2 must be 1 and 0 for the unit to
operate.
o Otherwise, the data bus is said to be in a high-impedance state.

AUXILLARY MEMORY
 An Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-
access storage in a computer system.
 It is where programs and data are kept for long-term storage or when not in
immediate use.
 The most common examples of auxiliary memories are magnetic tapes and
magnetic disks.

Magnetic Disks

 A magnetic disk is a type of memory constructed using a circular plate of metal or plastic coated with magnetized material.
 Usually, both sides of the disks are used to carry out read/write operations.
 However, several disks may be stacked on one spindle with read/write head
available on each surface.

o The memory bits are stored in the magnetized surface in spots along the
concentric circles called tracks.
o The concentric circles (tracks) are commonly divided into sections called sectors.

Magnetic Tape

 Magnetic tape is a storage medium that allows data archiving, collection, and
backup for different kinds of data.
 The magnetic tape is constructed using a plastic strip coated with a magnetic
recording medium.
 The bits are recorded as magnetic spots on the tape along several tracks.
 Usually, seven or nine bits are recorded simultaneously to form a character
together with a parity bit.
 Magnetic tape units can be halted, started to move forward or in reverse, or can
be rewound.
 However, they cannot be started or stopped fast enough between individual
characters.
 For this reason, information is recorded in blocks referred to as records.
ASSOCIATIVE MEMORY
 An associative memory can be considered as a memory unit whose stored data
can be identified for access by the content of the data itself rather than by an
address or memory location.
 Associative memory is often referred to as Content Addressable Memory
(CAM).
 When a write operation is performed on associative memory, no address or
memory location is given to the word.
 The memory itself is capable of finding an empty unused location to store the
word.
 On the other hand, when the word is to be read from an associative memory, the
content of the word, or part of the word, is specified.
 The words which match the specified content are located by the memory and are
marked for reading.

 The following diagram shows the block representation of an Associative
memory.

 From the block diagram, we can say that an associative memory consists of a
memory array and logic for 'm' words with 'n' bits per word.
 The functional registers like the argument register A and key register K each
have n bits, one for each bit of a word.
 The match register M consists of m bits, one for each memory word.
 The words which are kept in the memory are compared in parallel with the
content of the argument register.
 The key register (K) provides a mask for choosing a particular field or key in the
argument word.
 If the key register contains a binary value of all 1's, then the entire argument is
compared with each memory word.
 Otherwise, only those bits in the argument that have 1's in their corresponding
position of the key register are compared.

 Thus, the key provides a mask for identifying a piece of information which
specifies how the reference to memory is made.
 The following diagram can represent the relation between the memory array and
the external registers in an associative memory.

 The cells present inside the memory array are marked by the letter C with two
subscripts.
 The first subscript gives the word number and the second specifies the bit
position in the word. For instance, the cell Cij is the cell for bit j in word i.
 A bit Aj in the argument register is compared with all the bits in column j of the
array provided that Kj = 1.
 This process is done for all columns j = 1, 2, 3......, n.
 If a match occurs between all the unmasked bits of the argument and the bits in
word i, the corresponding bit Mi in the match register is set to 1.
 If one or more unmasked bits of the argument and the word do not match, Mi is cleared to 0.
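As a rough illustration of the masked, parallel comparison just described, the following Python sketch (the function and variable names are assumptions for illustration only) compares the argument register A with every stored word, but only in the bit positions where the key register K contains a 1, and builds the match register M.

def associative_match(words, A, K, n):
    """Return the match register M as a list of 0/1 flags, one per word."""
    M = []
    for word in words:
        # compare only the unmasked bit positions j, i.e. those with Kj = 1
        match = all(
            ((word >> j) & 1) == ((A >> j) & 1)
            for j in range(n) if (K >> j) & 1
        )
        M.append(1 if match else 0)
    return M

# 4-bit words; the key masks in only the two high-order bits of the argument
memory = [0b1010, 0b1001, 0b0110, 0b1011]
print(associative_match(memory, A=0b1000, K=0b1100, n=4))   # [1, 1, 0, 1]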

CACHE MEMORY
 The data or contents of the main memory that are used frequently by CPU are
stored in the cache memory so that the processor can easily access that data in a
shorter time.
 Whenever the CPU needs to access memory, it first checks the cache memory.
 If the data is not found in the cache memory, the CPU then accesses the main memory.
 Cache memory is placed between the CPU and the main memory.
 The block diagram for a cache memory can be represented as:

 The cache is the fastest component in the memory hierarchy and approaches the
speed of CPU components.
 Cache memory is organised as a collection of sets of blocks, where each set contains a small, fixed number of blocks.
Cache Mapping
There are three different types of mapping used for the purpose of cache memory
which are as follows:
o Direct mapping,
o Associative mapping
o Set-Associative mapping
Direct mapping
 In direct mapping, the cache consists of normal high-speed random-access
memory.
 Each location in the cache holds data at a specific address within the cache.
 This cache address is given by the lower significant bits of the main memory address.
 This enables the block to be selected directly from the lower significant bits of the memory address.
 The remaining higher significant bits of the address are stored in the cache with
the data to complete the identification of the cached data.

 As shown in the above figure, the address from the processor is divided into two fields: a tag and an index.
 The tag consists of the higher significant bits of the address, and these bits are stored with the data in the cache.
 The index consists of the lower significant bits of the address.
 Whenever the memory is referenced, the following sequence of events occurs:
1. The index is first used to access a word in the cache.
2. The tag stored in the accessed word is read.
3. This tag is then compared with the tag in the address.
4. If the two tags are the same, this indicates a cache hit and the required data is read from the cache word.

5. If the two tags are not the same, this indicates a cache miss, and a reference is made to the main memory to find the required word.
 For a memory read operation, the word is then transferred into the cache.
 It is possible to pass the information to the cache and the processor simultaneously.
 In a direct-mapped cache, a line can also consist of more than one word, as shown in the following figure.

 In such a case, the main memory address consists of a tag, an index and a word
within a line.
 All the words within a line in the cache have the same stored tag.
 The index part of the address is used to access the cache, and the stored tag is compared with the tag of the required address.
 For a read operation, if the tags are the same, the word within the block is selected for transfer to the processor.
 If the tags are not the same, the block containing the required word is first transferred to the cache.
 In direct mapping, the corresponding blocks with the same index in the main
memory will map into the same block in the cache, and hence only blocks with
different indices can be in the cache at the same time.

 It is important that all words in the cache must have different indices. The tags
may be the same or different.
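A minimal sketch of the direct-mapping lookup described above, assuming a 16-line, one-word-per-line cache (the sizes and names are illustrative, not from the text): the index selects the cache line, and the stored tag is compared with the tag field of the address to decide hit or miss.

INDEX_BITS = 4                      # assumed: 16 cache lines
cache = [{"valid": False, "tag": None, "data": None} for _ in range(1 << INDEX_BITS)]

def dm_access(address, main_memory):
    index = address & ((1 << INDEX_BITS) - 1)   # lower significant bits
    tag = address >> INDEX_BITS                 # remaining higher significant bits
    line = cache[index]
    if line["valid"] and line["tag"] == tag:    # tags equal -> cache hit
        return line["data"], "hit"
    data = main_memory[address]                 # cache miss: reference main memory
    cache[index] = {"valid": True, "tag": tag, "data": data}
    return data, "miss"

main_memory = {addr: addr * 2 for addr in range(256)}   # toy main memory
print(dm_access(0x2A, main_memory))   # (84, 'miss') -> first reference misses
print(dm_access(0x2A, main_memory))   # (84, 'hit')  -> same address now hits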
Set Associative Mapping
 In set associative mapping, the cache is divided into sets of blocks.
 The number of blocks in a set is known as the associativity or set size.
 Each block in each set has a stored tag. This tag, together with the index, completely identifies the block.
 Thus, set associative mapping allows a limited number of blocks with the same index and different tags to be present in the cache at the same time.
 An example of a four-way set-associative cache, having four blocks in each set, is shown in the following figure.

In this type of cache, the following steps are used to access the data from a cache:

1. The index of the address from the processor is used to access the set.

2. Then the comparators are used to compare all tags of the selected set with the
incoming tag.
3. If a match is found, the corresponding location is accessed.
4. If no match is found, an access is made to the main memory.

 The tag address bits are always chosen to be the most significant bits of the full address.
 The block address bits are the next significant bits, and the word/byte address bits are the least significant bits.
 The number of comparators required in the set associative cache is given by the
number of blocks in a set.
 The set can be selected quickly and all the blocks of the set can be read out
simultaneously with the tags before waiting for the tag comparisons to be made.
 After a tag has been identified, the corresponding block can be selected.
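A rough sketch of the four-way set-associative lookup described above, with assumed sizes and names: the index selects a set, and the tag of every block in that set is compared with the incoming tag, one comparator per block.

SET_BITS = 2     # assumed: 4 sets
WAYS = 4         # four blocks per set ("four-way")
cache = [[{"valid": False, "tag": None, "data": None} for _ in range(WAYS)]
         for _ in range(1 << SET_BITS)]

def sa_access(address, main_memory):
    index = address & ((1 << SET_BITS) - 1)
    tag = address >> SET_BITS
    cache_set = cache[index]
    for block in cache_set:                 # all tags of the set are compared
        if block["valid"] and block["tag"] == tag:
            return block["data"], "hit"
    data = main_memory[address]             # no match: access main memory
    victim = next((b for b in cache_set if not b["valid"]), cache_set[0])
    victim.update(valid=True, tag=tag, data=data)
    return data, "miss"

main_memory = {addr: addr + 100 for addr in range(64)}
print(sa_access(5, main_memory))   # (105, 'miss')
print(sa_access(5, main_memory))   # (105, 'hit')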
Fully associative mapping

In the fully associative type of cache memory, each location in the cache stores both the memory address and the data.

 Whenever data is requested, the incoming memory address is simultaneously compared with all stored addresses using the internal logic of the associative memory.
 If a match is found, the corresponding data is read out.

 Otherwise, if the address is not found in the cache, the main memory is accessed.
 This method is known as fully associative mapping approach because cached
data is related to the main memory by storing both memory address and data in
the cache.
 In all organisations, data can be more than one word as shown in the following
figure.

 A line constitutes four words, each word being 4 bytes.
 In such a case, the least significant part of the address selects the particular byte, the next part selects the word, and the remaining bits form the address.
 These address bits are compared to the address in the cache.
 The whole line can be transferred to and from the cache in one transaction if
there are sufficient data paths between the main memory and the cache.
 With only one data word path, the words of the line have to be transferred in
separate transactions.
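A small sketch of the fully associative lookup described above, assuming an 8-line cache and invented names: each cache line stores the full memory address together with its data, and a Python dictionary stands in for the parallel comparison of the incoming address with all stored addresses.

CACHE_LINES = 8
cache = {}                          # stored address -> data

def fa_access(address, main_memory):
    if address in cache:            # incoming address matched a stored address
        return cache[address], "hit"
    data = main_memory[address]     # not found in cache: access main memory
    if len(cache) >= CACHE_LINES:   # cache full: evict an arbitrary line
        cache.pop(next(iter(cache)))
    cache[address] = data           # any address may occupy any line
    return data, "miss"

main_memory = {a: a * a for a in range(32)}
print(fa_access(7, main_memory))   # (49, 'miss')
print(fa_access(7, main_memory))   # (49, 'hit')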

VIRTUAL MEMORY
 Virtual memory is a storage scheme that gives the user the illusion of having a very large main memory.
 This is done by treating a part of secondary memory as the main memory.
 In this scheme, the user can load processes larger than the available main memory, because the illusion is created that enough memory is available to load them.
 Instead of loading one big process in the main memory, the Operating System
loads the different parts of more than one process in the main memory.
 By doing this, the degree of multiprogramming will be increased and therefore,
the CPU utilization will also be increased.
 Virtual memory has become quite common in modern systems.
 In this scheme, whenever some pages need to be loaded into the main memory for execution and enough memory is not available for all of them, the pages are not simply prevented from entering the main memory.
 Instead, the OS searches for the areas of RAM that have been least recently used, or not referenced at all, and copies them into secondary memory to make space for the new pages in the main memory.
Demand Paging
 Demand Paging is a popular method of virtual memory management.
 In demand paging, the pages of a process that are least used are kept in the secondary memory.
 A page is copied into the main memory only when it is demanded, that is, when a page fault occurs.
 There are various page replacement algorithms which are used to determine the
pages which will be replaced.
 We will discuss each one of them later in detail.
Snapshot of a virtual memory management system
 Let us assume two processes, P1 and P2, each containing 4 pages.
 Each page is 1 KB in size.
 The main memory contains 8 frames of 1 KB each.

 The OS resides in the first two partitions.
 In the third partition, the 1st page of P1 is stored; the other frames are also shown as filled with different pages of the processes in the main memory.
 The page tables of both the processes are 1 KB each, and therefore each fits in one frame.
 The page tables of both the processes contain various information that is also
shown in the image.
 The CPU contains a register that holds the base address of the page table, which is 5 in the case of P1 and 7 in the case of P2.
 This page table base address is added to the page number of the logical address in order to access the corresponding page table entry.
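The translation step just described can be sketched as follows. The 1 KB page size matches the snapshot above, but the page-table contents, names, and the page-fault handling are assumptions made only for illustration.

PAGE_SIZE = 1024                       # 1 KB pages, as in the example above

# Toy page tables indexed by their base address: frame 5 holds P1's table.
page_tables = {5: [3, 6, None, 4]}     # page number -> frame (None = not loaded)

def translate(ptbr, logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_tables[ptbr][page_number]   # base address + page number -> entry
    if frame is None:
        raise RuntimeError("page fault: bring the page into main memory first")
    return frame * PAGE_SIZE + offset        # frame number combined with offset

# P1's page table base address is 5; logical address 1050 lies in page 1
print(translate(5, 1050))   # page 1 -> frame 6 -> 6 * 1024 + 26 = 6170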

Advantages of Virtual Memory


1. The degree of Multiprogramming will be increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy additional RAM.
Disadvantages of Virtual Memory
1. The system becomes slower since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for their own use.

[Link] (CBCS) DEGREE EXAMINATION
COMPUTER ARCHITECTURE
Semester: IV Maximum:75 Marks
PART A - (10 X 1 = 10 Marks)
Answer all Questions. Choose the correct answer.
1. In --------- addressing mode, the second part of an instruction code specifies the address of an operand.
a. immediate b. direct c. indirect d. index
2. In the ----------- organization the control logic is implemented with flip flops and
gates.
a. micro programmed b. hardwired c. software d. none
3. The sequence of micro instructions constitutes a----------------
a. micro operation b. micro program c. control instruction d. conditional instruction
4. In--------- mode the operand is specified in the instruction itself.
a. register b. immediate c. direct d. indirect
5. In the division algorithm, if the partial remainder is smaller than the divisor, then the quotient bit is
a. 0 b. 1 c. shift right d. none
6. In the multiplication algorithm, the low-order bit of the ------- is tested.
a. multiplier b. Multiplicand c. both a & b d. none
7. The agreement between two independent units is referred to as----------
a. strobe b. handshaking c. Asynchronous d. none
8. A polling procedure is used to identify the highest priority source by ------ means.
a. software b. hardware c. DMA d. parallel
9. The memory unit that communicates directly with the CPU is called
a. Auxiliary memory b. Secondary memory c. Main memory d. none
10. Virtual memory is
a. ROM b. RAM c. Concept d. Associative

PART B-(5X5=25 Marks)
Answer all Questions, choosing either (a) or (b)
Each answer should not exceed 250 words.
11a. Explain briefly about the stored program organization.
Or
b. Write short notes about control unit.
12a. Explain any six addressing modes in detail.
Or
b. Explain program control in detail.
13 a. Explain the algorithm for adding and subtracting numbers in signed 2's complement representation.
Or
b. Discuss booth multiplication algorithm in detail
14a. Write short note about Asynchronous Data transfer
Or
b. Explain the operation of "Daisy chaining priority".
15a. Briefly write about cache memory
Or
b. Explain about memory hierarchy with neat diagram
PART C -(5X8=40Marks)
Answer all Questions, choosing either (a) or (b)
Each answer should not exceed 600 words.
16 a. Explain with neat diagram of common bus system.
Or
b. Explain instruction cycle in detail.
17a. Explain the stack organization in detail.
18 a. Explain division algorithm in detail.
Or
b. What is meant by an array multiplier? Explain a 4-bit by 3-bit array multiplier through its block diagram.

19 a. Explain direct memory access in detail
Or
b. Describe modes of transfer in detail.
20.a. What is associative memory? Explain.
Or
b. What is virtual memory? Explain the mapping process

