Input Output Organization Question Answer
Source-initiated transfer using handshaking: Figure 7.8 shows the data transfer method when the transfer is initiated by the source. The two handshaking lines are data valid, generated by the source unit, and data accepted, generated by the destination unit. The source first places the data on the bus and, after some delay, raises the data valid signal. On sensing the data valid signal, the destination accepts the data and then raises the data accepted signal to acknowledge the acceptance. On sensing the data accepted signal, the source removes the data and the data valid signal. On sensing the removal of the data valid signal, the destination removes the data accepted signal.
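The ordering of events can be mimicked with a small C sketch in which the handshaking lines are modelled as simple flags (the variable names and data value are illustrative only, not part of the figure):

    #include <stdio.h>

    /* Hypothetical handshaking lines modelled as simple flags. */
    static unsigned data_bus;        /* data placed by the source      */
    static int      data_valid;      /* driven by the source unit      */
    static int      data_accepted;   /* driven by the destination unit */

    int main(void)
    {
        /* Source: place the data, then (after some delay) raise data valid. */
        data_bus   = 0x5A;
        data_valid = 1;

        /* Destination: on sensing data valid, accept the data and raise data accepted. */
        if (data_valid) {
            printf("destination: accepted 0x%X\n", data_bus);
            data_accepted = 1;
        }

        /* Source: on sensing data accepted, remove the data and data valid. */
        if (data_accepted) {
            data_bus   = 0;
            data_valid = 0;
        }

        /* Destination: on sensing the removal of data valid, remove data accepted. */
        if (!data_valid)
            data_accepted = 0;

        return 0;
    }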
Destination-initiated transfer using handshaking: Figure 7.9 illustrates the destination-initiated handshaking technique. The destination first raises the data request signal. On sensing this signal, the source places the data on the bus and raises the data valid signal. On sensing the data valid signal, the destination acquires the data and then removes the data request signal. On sensing this, the source removes both the data and the data valid signal.
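The destination-initiated sequence can be sketched the same way, with a third flag standing in for the data request line (again an illustrative sketch, not the figure itself):

    #include <stdio.h>

    /* Hypothetical handshaking lines, as before, plus a data request flag. */
    static unsigned data_bus;
    static int      data_valid, data_request;

    int main(void)
    {
        /* Destination: raise the data request signal. */
        data_request = 1;

        /* Source: on sensing data request, place the data and raise data valid. */
        if (data_request) {
            data_bus   = 0x3C;
            data_valid = 1;
        }

        /* Destination: on sensing data valid, acquire the data and drop data request. */
        if (data_valid) {
            printf("destination: acquired 0x%X\n", data_bus);
            data_request = 0;
        }

        /* Source: on sensing the removal of data request, remove the data and data valid. */
        if (!data_request) {
            data_bus   = 0;
            data_valid = 0;
        }
        return 0;
    }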
Q3. Describe programmed I/O for transferring data from an I/O device to memory.
Ans: In the programmed I/O method, the I/O device does not have direct access to memory. A transfer from an I/O device to memory requires the execution of several instructions by the CPU, including an input instruction to transfer the data from the device to the CPU and a store instruction to transfer the data from the CPU to memory. Figure 7.10 shows an example of data transfer from an I/O device, through its interface, into memory via the CPU. The handshaking procedure is followed here. The device transfers bytes of data one at a time, as they become available. When a byte is available, the device places it on the I/O bus and enables its data valid line. The interface accepts the byte into its data register and enables its data accepted line. The interface then sets a flag bit in its status register. The device now disables the data valid line, but it will not transfer another byte until the data accepted line is disabled and the flag bit is reset by the interface.
An I/O routine or program is written for the computer to check the flag bit in the status register and determine whether the I/O device has placed a byte in the data register. This is done by reading the status register into a CPU register and checking the value of the flag bit. When the flag is set to 1, the CPU reads the data from the data register and then transfers it to memory with a store instruction. The flag bit is then reset to 0 either by the CPU or by the interface, depending on the design of the interface circuits. When the flag bit is reset, the interface disables the data accepted line and the device can then transfer the next data byte. Thus, the CPU must execute the following four steps to transfer each byte (a polling-loop sketch follows the list):
1. Read the status register of interface unit.
2. If the flag bit of the status register is set then go to step (3); otherwise loop back to step (1).
3. Read the data register of interface unit for data.
4. Send the data to the memory by executing store instruction.
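The four steps above translate directly into a busy-wait polling loop. The sketch below is a minimal illustration in C; the memory-mapped register addresses (STATUS_REG, DATA_REG) and the position of the flag bit are assumptions, since real interfaces differ.

    #include <stdint.h>

    /* Hypothetical memory-mapped interface registers; addresses and flag position
     * are assumptions for illustration only. */
    #define STATUS_REG  ((volatile uint8_t *)0x4000)   /* flag bit assumed in bit 0 */
    #define DATA_REG    ((volatile uint8_t *)0x4001)
    #define FLAG_READY  0x01

    /* Read 'count' bytes from the interface into 'buf' using the four steps above. */
    void programmed_io_read(uint8_t *buf, int count)
    {
        for (int i = 0; i < count; i++) {
            /* Steps 1-2: read the status register and loop until the flag bit is set. */
            while ((*STATUS_REG & FLAG_READY) == 0)
                ;                         /* busy-wait: the CPU is tied up here */

            /* Step 3: read the data register of the interface unit. */
            uint8_t byte = *DATA_REG;     /* the interface then resets the flag */

            /* Step 4: store the byte into memory. */
            buf[i] = byte;
        }
    }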
The programmed I/O method is particularly useful in small, low-speed computers or in systems that are dedicated to monitoring a device continuously. Generally, the CPU is much faster than an I/O device, so the difference in data transfer rate between the CPU and the I/O device makes this type of transfer inefficient: the CPU spends most of its time busy-waiting on the flag.
Q4. What is interrupt-initiated I/O?
Ans: In the programmed I/O method, the program constantly monitors the device status. Thus, the CPU stays in the program until the I/O device indicates that it is ready for data transfer. This is a time-consuming process, since it keeps the CPU busy needlessly. It can be avoided by letting the device controller continuously monitor the device status and raise an interrupt to the CPU as soon as the device is ready for data transfer. Upon detecting the external interrupt signal, the CPU momentarily stops the task it is processing, branches to an interrupt service routine (ISR), also called an I/O routine or interrupt handler, to process the I/O transfer, and then returns to the task it was originally performing. Thus, in the interrupt-initiated mode, the ISR software (i.e., the CPU) performs the data transfer but is not involved in checking whether the device is ready for the transfer. Therefore, the CPU's execution time can be used efficiently by employing it to execute the normal program when no data transfer is required. Figure 7.11 illustrates the interrupt process.
The CPU responds to the interrupt signal by storing the return address from the program counter (PC) register into a memory stack or into a processor register; control then branches to the ISR that processes the required I/O transfer. The way the CPU chooses the branch address of the ISR varies from one unit to another. In general, there are two methods for accomplishing this: vectored interrupts and non-vectored interrupts. In a vectored interrupt, the interrupting source supplies the branch information (the starting address of the ISR) to the CPU. This information is called the interrupt vector, and it is not tied to any fixed memory location. In a non-vectored interrupt, the branch address (the starting address of the ISR) is assigned to a fixed location in memory. In interrupt-initiated I/O, the device controller must have some additional intelligence to check the device status and raise an interrupt whenever a data transfer is required. This results in extra hardware circuitry in the device controller.
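As a rough illustration, the sketch below shows the division of labour in interrupt-initiated I/O: the main program does useful work, and the data transfer happens only inside the ISR. The register addresses, buffer, and the device_isr routine are hypothetical, and how the ISR is attached to the interrupt vector is platform-specific and omitted.

    #include <stdint.h>

    #define STATUS_REG  ((volatile uint8_t *)0x4000)   /* hypothetical interface registers */
    #define DATA_REG    ((volatile uint8_t *)0x4001)

    static volatile uint8_t  rx_buffer[256];
    static volatile unsigned rx_head;

    /* Interrupt service routine: entered only when the device raises an interrupt, so
     * the CPU never polls the status register. The branch address used to reach this
     * routine is either supplied by the device (vectored) or fixed (non-vectored). */
    void device_isr(void)
    {
        rx_buffer[rx_head % 256] = *DATA_REG;   /* move the byte into memory */
        rx_head++;
        /* Clearing/acknowledging the interrupt is interface-specific and omitted. */
    }

    int main(void)
    {
        for (;;) {
            /* The normal program runs here; transfers happen only in device_isr(). */
        }
    }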
Ans: This third method is used to transfer large blocks of data at high speed. A special controlling unit may be provided to allow the transfer of a block of data directly between a high-speed external device, such as a magnetic disk, and the main memory, without continuous intervention by the CPU. This method is called direct memory access (DMA).
DMA transfers are performed by a control circuit that is part of the I/O device interface. We refer to this circuit as a DMA controller. The DMA controller performs the functions that would normally be carried out by the CPU when accessing the main memory. During a DMA transfer, the CPU is idle or can be utilized to execute another program, and it has no control of the memory buses. The DMA controller takes over the buses to manage the transfer directly between the I/O device and the main memory.
The CPU can be placed in an idle state using two special control signals, HOLD and HLDA (hold acknowledge). Figure 7.13 shows the two control signals in the CPU that characterize the DMA transfer. The HOLD input is used by the DMA controller to request that the CPU release control of the buses. When this input is active, the CPU suspends the execution of the current instruction and places the address bus, the data bus and the read/write line into a high-impedance state. The high-impedance state behaves like an open circuit: the output line is disconnected from the input line and has no logic significance. The CPU activates the HLDA output to inform the external DMA controller that the buses are in the high-impedance state. Control of the buses is then taken by the DMA controller, which generated the bus request, to conduct memory transfers without processor intervention. After the transfer of data, the DMA controller deactivates the HOLD line. The CPU then deactivates the HLDA line, regains control of the buses, and returns to its normal operation.
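The HOLD/HLDA exchange can be summarized with a small simulation; the signals are modelled here as simple Boolean flags, which is of course only a sketch of the real bus behaviour, not a description of any particular processor.

    #include <stdio.h>
    #include <stdbool.h>

    /* Bus-control signals modelled as simple flags (illustrative only). */
    static bool hold, hlda;

    static void dma_assert_hold(void)  { hold = true; }              /* DMA requests the buses */
    static void cpu_grant_buses(void)  { if (hold) hlda = true; }    /* CPU floats the buses
                                                                        and activates HLDA    */
    static void dma_release_hold(void) { hold = false; }             /* transfer complete     */
    static void cpu_retake_buses(void) { if (!hold) hlda = false; }  /* CPU resumes normally  */

    int main(void)
    {
        dma_assert_hold();
        cpu_grant_buses();
        printf("buses granted:  HOLD=%d HLDA=%d\n", hold, hlda);
        /* ... the DMA controller performs memory transfers here ... */
        dma_release_hold();
        cpu_retake_buses();
        printf("buses returned: HOLD=%d HLDA=%d\n", hold, hlda);
        return 0;
    }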
Q9. Describe the sequence of events in a DMA transfer between an I/O device and the main memory.
Ans: In a DMA transfer, I/O devices can directly access the main memory without intervention by the processor. Figure 7.14 shows a typical DMA system. The sequence of events involved in a DMA transfer between an I/O device and the main memory is discussed next.
A DMA request signal from an I/O device starts the DMA sequence. The DMA controller activates the HOLD line and then waits for the HLDA signal from the CPU. On receipt of HLDA, the controller sends a DMA ACK (acknowledgement) signal to the I/O device, and the DMA controller takes control of the memory buses from the CPU. Before releasing control of the buses to the controller, the CPU initializes the controller's address register with the starting memory address of the block of data, its word-count register with the number of words to be transferred, and the operation type (read or write). The I/O device can then communicate with memory through the data bus for direct data transfer. For each word transferred, the DMA controller increments its address register and decrements its word-count register. After each word transfer, the controller checks the DMA request line. If this line is high, the next word of the block transfer is initiated, and the process continues until the word-count register reaches zero (i.e., the entire block is transferred). When the word-count register reaches zero, the DMA controller stops any further transfer and removes its HOLD signal. It also informs the CPU of the termination by means of an interrupt on the INT line. The CPU then regains control of the memory buses and resumes the program that initiated the I/O operation.
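The address-register/word-count behaviour described above can be sketched as a short simulation; the structure and field names below are illustrative and do not correspond to any actual controller's register map.

    #include <stdio.h>
    #include <stdint.h>

    /* Simplified, illustrative model of the DMA controller's programmable registers. */
    struct dma_controller {
        uint32_t address_reg;   /* starting memory address, initialized by the CPU      */
        uint32_t word_count;    /* number of words to transfer, initialized by the CPU  */
    };

    /* Move words from the device into memory, incrementing the address register and
     * decrementing the word count after each word, until the count reaches zero. */
    static void dma_run(struct dma_controller *dma,
                        uint32_t *memory, const uint32_t *device_data)
    {
        while (dma->word_count > 0) {
            memory[dma->address_reg] = *device_data++;
            dma->address_reg++;
            dma->word_count--;
        }
        /* word_count == 0: remove HOLD and interrupt the CPU (not modelled here). */
    }

    int main(void)
    {
        uint32_t memory[16] = {0};
        uint32_t device[4]  = {10, 20, 30, 40};
        struct dma_controller dma = { .address_reg = 4, .word_count = 4 };

        dma_run(&dma, memory, device);
        printf("memory[4..7] = %u %u %u %u\n",
               (unsigned)memory[4], (unsigned)memory[5],
               (unsigned)memory[6], (unsigned)memory[7]);
        return 0;
    }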
Ans: DMA is a hardware method of data transfer, whereas programmed I/O and interrupt I/O are software methods. The DMA mode has the following advantages:
1. High-speed data transfer is possible, since the CPU is not involved during the actual transfer, which occurs between the I/O device and the main memory.
2. Parallel processing can be achieved between CPU processing and the DMA controller's I/O operation.
Q12. What are cycle stealing and block (burst) transfer?
Ans: Memory accesses by the CPU and the DMA controller are interleaved. Requests by DMA devices for the use of the memory buses are always given higher priority than processor requests. Among different DMA devices, top priority is given to high-speed peripherals such as a disk, a high-speed network interface or a graphics display device. Since the CPU originates most memory access cycles, the DMA controller can be said to "steal" memory cycles from the CPU. Hence, this interleaving technique is usually called cycle stealing.
When the DMA controller is the master of the memory buses, a block of memory words is transferred continuously, without interruption. This mode of DMA transfer is known as block (burst) transfer. This mode of transfer is needed for fast devices such as magnetic disks, where data transmission cannot be stopped or slowed down until an entire block is transferred.
Ans: 1. Daisy Chaining Method: The daisy chaining method is a centralized bus arbitration method. During any bus cycle, the bus master may be any device connected to the bus, either the processor or any DMA controller unit. Figure 7.15 illustrates the daisy chaining method.
All devices are effectively assigned static priorities according to their locations along a bus grant control line (BGT). The device closest to the central bus arbiter is assigned the highest priority. Requests for bus access are made on a common request line, BRQ. Similarly, a common acknowledge signal line (SACK) is used to indicate that the bus is in use. When no device is using the bus, SACK is inactive. The central bus arbiter propagates a bus grant signal (BGT) if the BRQ line is high and the acknowledge signal (SACK) indicates that the bus is idle. The first device along the line that has issued a bus request receives the BGT signal and stops its further propagation. This device sets the bus-busy flag in the bus arbiter by activating SACK and assumes bus control. On completion, it resets the bus-busy flag in the arbiter, and a new BGT signal is generated if other requests are outstanding (i.e., BRQ is still active). A device that has no pending request simply passes the BGT signal to the next device in the line.
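The grant-propagation rule can be sketched in a few lines of C: each device either claims the grant (if it has a pending request) or passes it along. The device count and the requesting array are illustrative assumptions, not part of the figure.

    #include <stdio.h>
    #include <stdbool.h>

    #define NDEV 4

    /* requesting[i] is true if device i has raised BRQ. The grant travels from
     * device 0 (closest to the arbiter, highest priority) down the chain; the first
     * requesting device claims it and stops the propagation. Returns the granted
     * device, or -1 if no request is pending. */
    static int daisy_chain_grant(const bool requesting[NDEV])
    {
        for (int i = 0; i < NDEV; i++) {
            if (requesting[i])
                return i;      /* this device keeps BGT and activates SACK */
            /* otherwise the device passes BGT to the next one in the line */
        }
        return -1;
    }

    int main(void)
    {
        bool brq[NDEV] = { false, true, true, false };
        printf("bus granted to device %d\n", daisy_chain_grant(brq));  /* prints 1 */
        return 0;
    }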
The main advantage of the daisy chaining method is its simplicity. Another advantage is
scalability. The user can add more devices anywhere along the chain, up to a certain maximum
value.
2. Polling or Rotating Priority Method : In this method, the devices are assigned unique
priorities and compete to access the bus, but the priorities are dynamically changed to give every
device an opportunity to access the bus. This dynamic priority algorithm generalizes the daisy
chain implementation of static priorities discussed above. Recall that in the daisy chain scheme
all devices are given static and unique priorities according to their positions on a bus-grant line
(BGT) emanating from a central bus arbiter. However, in the polling scheme, no central bus
arbiter exists, and the bus-grant line (BGT) is connected from the last device back to the first in a
closed loop (Fig. 7.16). Whichever device is granted access to the bus serves as bus arbiter for the
following arbitration (an arbitrary device is selected to have initial access to the bus). Each
device’s priority for a given arbitration is determined by that device’s distance along the bus-
grant line from the device currently serving as bus arbiter; the latter device has the lowest
priority. Hence, the priorities change dynamically with each bus cycle.
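A sketch of the rotating-priority rule is shown below: the current bus owner acts as arbiter and receives the lowest priority for the next arbitration. The device count and variable names are illustrative assumptions.

    #include <stdio.h>
    #include <stdbool.h>

    #define NDEV 4

    /* 'current_owner' holds the bus and acts as arbiter for this round. The search
     * starts at the device just after it along the closed grant loop, so that device
     * has the highest priority and the owner itself has the lowest. */
    static int rotating_grant(const bool requesting[NDEV], int current_owner)
    {
        for (int step = 1; step <= NDEV; step++) {
            int dev = (current_owner + step) % NDEV;
            if (requesting[dev])
                return dev;
        }
        return -1;            /* no outstanding request */
    }

    int main(void)
    {
        bool brq[NDEV] = { true, false, true, false };
        int owner = 0;                          /* device 0 currently owns the bus */
        owner = rotating_grant(brq, owner);     /* device 2 wins, not device 0 */
        printf("next bus owner: device %d\n", owner);
        return 0;
    }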
The main advantage of this method is that it does not favor any particular device or processor.
The method is also quite simple.
3. Fixed Priority or Independent Request Method : In the independent request method, bus control passes from one device to another only through the centralized bus arbiter. Figure 7.17 shows the independent request method. Each device has a dedicated BRQ output line and BGT input line. If there are m devices, the bus arbiter has m BRQ inputs and m BGT outputs. The arbiter assigns a different priority level to each device and follows this priority order. At any given time, the arbiter issues a bus grant (BGT) to the highest-priority device among those that have issued bus requests. This scheme needs more hardware but provides a fast response.
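The independent request scheme can likewise be sketched: the arbiter sees all the dedicated BRQ lines at once and grants the highest-priority requester in a single scan, which is why the response is fast. Device count and names are illustrative assumptions.

    #include <stdio.h>
    #include <stdbool.h>

    #define NDEV 4

    /* Each device has its own dedicated BRQ line into the arbiter; brq[0] belongs to
     * the highest-priority device. The arbiter grants the highest-priority requester,
     * asserting that device's dedicated BGT line. Returns -1 if no request is pending. */
    static int fixed_priority_grant(const bool brq[NDEV])
    {
        for (int i = 0; i < NDEV; i++)
            if (brq[i])
                return i;
        return -1;
    }

    int main(void)
    {
        bool brq[NDEV] = { false, true, false, true };
        printf("BGT issued to device %d\n", fixed_priority_grant(brq));  /* prints 1 */
        return 0;
    }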
Q14. Differentiate between isolated I/O and memory-mapped I/O.
Answer: (a) In isolated (I/O-mapped) I/O, computers use one common address bus and data bus to transfer information between memory or I/O and the CPU, but use separate read-write control lines, one set for memory and another for I/O. In memory-mapped I/O, computers use only one set of read and write lines, along with the same address and data buses, for both memory and I/O devices.
(b) The isolated I/O technique isolates all I/O interface addresses from the addresses assigned to memory, whereas memory-mapped I/O does not distinguish between memory and I/O addresses.
(c) Processors use different instructions for accessing memory and I/O devices in isolated I/O. In memory-mapped I/O, processors use the same set of instructions for accessing memory and I/O.
(d) Thus, the hardware cost is higher in isolated I/O relative to memory-mapped I/O, because two separate sets of read-write lines are required in the first technique.
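The programmer-visible difference in point (c) can be illustrated with a short sketch: memory-mapped I/O uses ordinary loads and stores (a plain C pointer), while isolated I/O needs dedicated I/O instructions such as x86 IN/OUT. The register address and port number below are hypothetical, and the inline assembly is GCC/x86-specific and assumes I/O privilege.

    #include <stdint.h>

    /* Memory-mapped I/O: the device data register occupies an ordinary address, so it
     * is reached with normal load/store instructions (a plain C pointer). The address
     * 0x4001 is a hypothetical example. */
    #define MM_DATA_REG  ((volatile uint8_t *)0x4001)

    uint8_t read_memory_mapped(void)
    {
        return *MM_DATA_REG;                 /* compiles to an ordinary memory load */
    }

    /* Isolated (I/O-mapped) I/O: the device lives in a separate I/O address space and
     * is reached only through dedicated I/O instructions such as x86 IN/OUT. Port 0x60
     * is a hypothetical example. */
    uint8_t read_isolated(void)
    {
        uint8_t value;
        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"((uint16_t)0x60));
        return value;
    }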
Q15. Differentiate between polled I/O and interrupt driven I/O.
Answer: (a) In the polled I/O (programmed I/O) method, the CPU stays in the program until the I/O device indicates that it is ready for data transfer, so the CPU is kept busy needlessly. In interrupt-driven I/O, the CPU can perform its own task of instruction execution and is informed by an interrupt signal when a data transfer is needed.
(b) Polled I/O is a low-cost and simple technique, whereas interrupt-driven I/O is relatively costly and complex, because in the second method a device controller is used to continuously monitor the device status and raise an interrupt to the CPU as soon as the device is ready for data transfer.
(c) The polled I/O method is particularly useful in small, low-speed computers or in systems that are dedicated to monitoring a device continuously, whereas the interrupt-driven method is very useful in modern high-speed computers.
Q16. Discuss the advantage of interrupt-initiated I/O over programmed I/O.
Answer: In the programmed I/O method, the program constantly monitors the device status. Thus, the CPU stays in the program until the I/O device indicates that it is ready for data transfer. This is a time-consuming process, since it keeps the CPU busy needlessly. It can be avoided by letting the device controller continuously monitor the device status and raise an interrupt to the CPU as soon as the device is ready for data transfer. Upon detecting the external interrupt signal, the CPU momentarily stops the task it is processing, branches to an interrupt service routine (ISR), also called an I/O routine or interrupt handler, to process the I/O transfer, and then, after completion of the I/O transfer, returns to the task it was originally performing. Thus, in the interrupt-initiated mode, the ISR software (i.e., the CPU) performs the data transfer but is not involved in checking whether the device is ready for the transfer. Therefore, the CPU's execution time can be used efficiently by employing it to execute normal programs when no data transfer is required.