
Academy of Technology

Computer Organization & Architecture


A few important questions from the chapters on I/O organization
and parallel processing
Prepared By: Dr. Sukanta Bose
Dept. of ECE, AOT
Q1. What is Asynchronous Transfer of data in I/O
transfer?
Ans: There is no common clock between the master and
slave in asynchronous transfer. Each has its own private
clock for internal operations. This approach is widely used in
most computers. Asynchronous data transfer between two
independent units requires that control signals be transmitted
between the communicating units to indicate the time at
which data is being transmitted. One simple way is to use a strobe signal, supplied by one of the units, to indicate to the other unit when the transfer is to occur.
A single control line is used by the strobe control method
of asynchronous data transfer to time each transfer. The
strobe may be activated by either the source or the
destination unit.

A source-initiated transfer is depicted in Fig. 7.6. The source takes care of the proper timing delay between the actual
data signals and the strobe signal. The source places the data
first, and after some delay, generates the strobe to inform
about the data on the data bus. To end the transfer, the source first removes the strobe and, after some delay, removes the data. These leading- and trailing-edge delays ensure reliable data transfer.
Similarly, the destination can initiate a data transfer by sending a strobe signal to the source unit, as shown in Fig. 7.7. In response, the source unit places data on the data bus. After receiving the data, the destination unit removes the strobe signal. Only after sensing the removal of the strobe signal does the source remove the data from the data bus.
Q2. Describe the handshaking technique.
Ans: The drawback of the strobe technique is that the unit initiating the transfer has no way of knowing whether the other unit has actually responded. To overcome this problem, another commonly used method is to accompany each data item being transferred with a control signal that indicates the presence of data on the bus. The unit receiving the data item responds with another control signal to acknowledge receipt of the data. This type of agreement between two independent units is referred to as the handshaking mode of transfer.

Source initiated transfer using handshaking: Figure 7.8 shows the data transfer method when
initiated by the source. The two handshaking lines are data valid, which is generated by the
source unit, and data accepted generated by the destination unit. The source first places data and
after some delay issues data valid signal. On sensing data valid signal, the destination receives
data and then issues acknowledgement signal data accepted to indicate the acceptance of data. On
sensing data accepted signal, the source removes data and data valid signal. On sensing removal
of data valid signal, the destination removes the data accepted signal.

Destination initiated transfer using handshaking: Figure 7.9 illustrates destination initiated
handshaking technique. The destination first sends the data request signal. On sensing this signal,
the source places data and also issues the data valid signal. On sensing data valid signal, the
destination acquires data and then removes the data request signal. On sensing this, the source
removes both the data and data valid signal.
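The source-initiated handshake above can be sketched as a small simulation. This is a minimal sketch, not timing-accurate hardware: the bus is modeled as a dict, and the signal names `data_valid` and `data_accepted` follow Fig. 7.8, while the four-event trace is an illustrative assumption.

```python
# Minimal sketch of the source-initiated handshake (Fig. 7.8),
# modeled as an ordered event trace rather than real hardware timing.

def source_initiated_handshake(data):
    bus = {"data": None, "data_valid": 0, "data_accepted": 0}
    trace = []

    # 1. Source places data, then raises data valid.
    bus["data"] = data
    bus["data_valid"] = 1
    trace.append("source: data placed, data_valid=1")

    # 2. Destination senses data valid, latches the data, raises data accepted.
    received = bus["data"]
    bus["data_accepted"] = 1
    trace.append("destination: data latched, data_accepted=1")

    # 3. Source senses data accepted, removes data and data valid.
    bus["data"] = None
    bus["data_valid"] = 0
    trace.append("source: data removed, data_valid=0")

    # 4. Destination senses removal of data valid, drops data accepted.
    bus["data_accepted"] = 0
    trace.append("destination: data_accepted=0")

    return received, trace

received, trace = source_initiated_handshake(0x5A)
print(received)      # 90 (0x5A), the byte delivered to the destination
print(len(trace))    # 4 handshake events
```

The destination-initiated variant of Fig. 7.9 would simply prepend a "data request" event before step 1.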
Q3. Describe programmed I/O transfer of data from an I/O device to memory.
Ans: I/O device does not have direct access to memory in the programmed I/O method. A
transfer from an I/O device to memory requires the execution of several instructions by the
CPU, including an input instruction to transfer the data from the device to the CPU and a store
instruction to transfer the data from the CPU to memory. Figure 7.10 shows an example of data
transfer from an I/O device through its interface into memory via the CPU. The handshaking
procedure is followed here. The device transfers bytes of data one at a time, as they are
available. The device places a byte of data, when available, in the I/O bus and enables its data
valid line. The interface accepts the byte into its data register and enables its data accepted line.
The interface then sets a flag bit in its status register. The device now disables the data valid line, but it will not transfer another byte until the data accepted line is disabled and the flag bit is reset by the interface.
An I/O routine or program is written for the computer to check the flag bit in the status register to determine whether a byte has been placed in the data register by the I/O device. This is done by reading the status register into a CPU register and testing the value of the flag bit. When the flag is set to 1, the CPU reads the data from the data register and then transfers it to memory by a store instruction. The flag bit is then reset to 0 by either the CPU or the interface, depending on the design of the interface circuits. When the flag bit is reset, the interface disables the data accepted line and the device can then transfer the next data byte. Thus the CPU executes the following four steps to transfer each byte:
1. Read the status register of the interface unit.
2. If the flag bit of the status register is set, go to step 3; otherwise loop back to step 1.
3. Read the data register of the interface unit to obtain the data.
4. Send the data to memory by executing a store instruction.
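These four steps amount to a busy-wait polling loop. The sketch below is illustrative: the interface is mocked as a dict, and the register names (`status`, `data`) and the flag-bit position are assumptions standing in for a real interface's registers.

```python
# Sketch of the programmed-I/O polling loop from steps 1-4.
# The interface is mocked as a dict; FLAG marks "byte available".

FLAG = 0x01  # assumed position of the flag bit in the status register

def programmed_io_read(interface, memory):
    """Busy-wait until the flag is set, then store the byte to memory."""
    while True:
        status = interface["status"]          # step 1: read status register
        if status & FLAG:                     # step 2: test the flag bit
            break                             # ...else loop back to step 1
    byte = interface["data"]                  # step 3: read data register
    memory.append(byte)                       # step 4: store to memory
    interface["status"] &= ~FLAG              # reset the flag (by the CPU here)

memory = []
interface = {"status": FLAG, "data": 0x41}    # device already has a byte ready
programmed_io_read(interface, memory)
print(memory)   # [65] (0x41)
```

The `while` loop is exactly the needless CPU busy-waiting that Q4 below avoids with interrupts.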
The programmed I/O method is particularly useful in small, low-speed computers or in systems that are dedicated to monitoring a device continuously. However, the CPU is generally far faster than an I/O device, and this difference in data transfer rates makes this type of transfer inefficient.
Q4. What is interrupt initiated I/O?
Ans: In the programmed I/O method, the program constantly monitors the device status. Thus,
the CPU stays in the program until the I/O device indicates that it is ready for data transfer.
This is a time-consuming process, since it keeps the CPU busy needlessly. It can be avoided by
letting the device controller continuously monitor the device status and raise an interrupt to the
CPU as soon as the device is ready for data transfer. Upon detecting the external interrupt
signal, the CPU momentarily stops the task it is processing, branches to an interrupt-service-
routine (ISR) or I/O routine or interrupt handler to process the I/O transfer, and then returns to
the task it was originally performing. Thus, in the interrupt-initiated mode, the ISR software (i.e. the CPU) performs the data transfer but is not involved in checking whether the device is ready for transfer. Therefore, the CPU's execution time can be used efficiently by employing it to execute its normal program while no data transfer is required. Figure 7.11 illustrates the interrupt process.
The CPU responds to the interrupt signal by storing the return address from the program
counter (PC) register into a memory stack or into a processor register and then control branches
to an ISR program that processes the required I/O transfer. The way that the CPU chooses the
branch address of the ISR varies from one unit to another. In general, there are two methods for
accomplishing this. One is called vectored interrupt and the other is non-vectored. In a
vectored interrupt, the source that interrupts supplies the branch information (starting address
of ISR) to the CPU. This information is called the interrupt vector, which is not any fixed
memory location. In a non-vectored interrupt, the branch address (starting address of ISR) is
assigned to a fixed location in memory. In interrupt-initiated I/O, the device controller should
have some additional intelligence for checking device status and raising an interrupt whenever
data transfer is required. This results in extra hardware circuitry in the device controller.

Q5. How does interrupt hardware work?


Ans: Interrupts are implemented by dedicated interrupt-handling hardware. The CPU receives an interrupt request (INTR) signal from the interrupt handler or controller hardware, to which every I/O device that can raise an interrupt is connected. The interrupt controller thus acts as a liaison between the I/O devices and the CPU. Typically, the interrupt controller is also assigned an interrupt acknowledge (INTA) line, which the CPU uses to signal the controller that it has received the interrupt request and begun processing it by invoking an ISR. Figure 7.12 shows the hardware lines for implementing interrupts.
The interrupt controller uses a register called the interrupt-request mask register (IMR) to detect interrupts from the I/O devices. Suppose there are n I/O devices in the system. The IMR is then an n-bit register, with each bit indicating the status of one I/O device. Let the IMR's contents be denoted E0 E1 E2 … En-1. When E0 = 1, an interrupt from device 0 is recognized; when E1 = 1, an interrupt from device 1 is recognized, and so on. The processor uses a flag bit known as interrupt enable (IE) in its status register (SR) to process interrupts. When this flag bit is 1, the CPU responds to a pending interrupt; otherwise it does not.
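The IMR test and the IE flag together amount to a couple of bit operations. The sketch below is illustrative; treating bit i of the mask as device i's enable bit follows the E0 … En-1 notation above.

```python
# Sketch: recognizing a device interrupt via the IMR and the IE flag.
# Bit i of imr corresponds to Ei for device i, as in the text.

def interrupt_recognized(imr, device, ie_flag):
    """An interrupt from `device` is taken only if its IMR bit and IE are both 1."""
    if not ie_flag:                 # CPU not responding to interrupts at all
        return False
    return bool((imr >> device) & 1)

imr = 0b00000101                    # devices 0 and 2 enabled (E0=1, E2=1)
print(interrupt_recognized(imr, 0, ie_flag=True))   # True
print(interrupt_recognized(imr, 1, ie_flag=True))   # False: E1=0 masks device 1
print(interrupt_recognized(imr, 2, ie_flag=False))  # False: IE=0 masks everything
```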
Q6. How does interrupt nesting work in I/O transfer?
Ans: If another interrupt is allowed to occur during the execution of one ISR, this is known as interrupt nesting. Suppose the CPU is initially executing program 'A' when the first interrupt occurs. After storing the return address within program 'A', the CPU starts executing ISR1. Now suppose that, in the meantime, a second interrupt occurs. The CPU, again after storing the return address within ISR1, starts executing ISR2. While it is executing ISR2, a third interrupt occurs. The CPU again stores the return address for ISR2 and then starts executing ISR3. After completing ISR3, the CPU resumes the remaining portion of ISR2. Similarly, after completing ISR2, the CPU resumes the remaining portion of ISR1. After completing ISR1, the CPU returns to program 'A' and continues from the location at which it branched earlier.
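The last-in, first-out order of resumption above is exactly a stack of return addresses. A minimal sketch, assuming each new interrupt arrives while the previous ISR is still running:

```python
# Sketch of interrupt nesting: return addresses are pushed as each ISR
# preempts the previous routine, and popped as each ISR completes (LIFO).

def run_nested(depth):
    """Return the order in which routines complete when `depth` interrupts nest."""
    return_stack = []                  # return addresses saved by the CPU
    completion = []
    current = "A"                      # program A executing initially
    for i in range(1, depth + 1):
        return_stack.append(current)   # save return address of interrupted routine
        current = f"ISR{i}"            # branch to the next ISR
    # The innermost ISR finishes first; each return resumes the saved routine.
    while True:
        completion.append(current)
        if not return_stack:
            break
        current = return_stack.pop()
    return completion

print(run_nested(3))   # ['ISR3', 'ISR2', 'ISR1', 'A']
```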

Q7. Write short notes on Priority Interrupt.


Ans: In a typical application a number of I/O devices are attached to the computer, with each
device being able to originate an interrupt request. The first task of the interrupt controller is to
identify the source of the interrupt. There is also the possibility that several sources may request
interrupt service simultaneously. In this case the controller must also decide which to service
first. A priority interrupt is a system that establishes a priority over the various sources to
determine which condition is to be serviced first when two or more requests arrive
simultaneously. Devices with high-speed transfers, such as magnetic disks, are usually given high priority, and slow devices such as keyboards receive low priority. When two devices interrupt the CPU at the same time, the CPU services the device with the higher priority first.
The interrupt requests from various sources are connected as input to the interrupt controller. As
soon as the interrupt controller senses (using IMR) the presence of any one or more interrupt
requests, it immediately issues an interrupt signal through the INTR line to the CPU. The interrupt controller assigns a fixed priority to each of the interrupt-requesting devices. For example, the
IRQ0 is assigned the highest priority among the eight interrupt requestors. With priority decreasing from IRQ0 to IRQ7, IRQ7 has the lowest priority and is serviced only when no other interrupt request is present.
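Fixed-priority resolution over IRQ0-IRQ7 reduces to finding the lowest-numbered pending request. A sketch, assuming the pending requests are packed into an 8-bit mask with bit i for IRQi:

```python
# Sketch of fixed-priority resolution: among pending requests IRQ0..IRQ7,
# the lowest-numbered line wins, matching the priority order in the text.

def highest_priority_irq(pending):
    """pending: 8-bit mask, bit i set means IRQi is requesting.
    Returns the winning IRQ number, or None if nothing is pending."""
    for irq in range(8):               # IRQ0 checked first: highest priority
        if (pending >> irq) & 1:
            return irq
    return None

print(highest_priority_irq(0b10010100))   # 2  (IRQ2 beats IRQ4 and IRQ7)
print(highest_priority_irq(0b10000000))   # 7  (IRQ7 served only when alone)
print(highest_priority_irq(0))            # None (no request pending)
```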

Q8. Short Notes on Direct Memory Access (DMA).

Ans: To transfer large blocks of data at high speed, a third method is used. A special controlling unit may be provided to allow the transfer of a block of data directly between a high-speed external device, such as a magnetic disk, and the main memory, without continuous intervention by the CPU. This method is called direct memory access (DMA).
DMA transfers are performed by a control circuit that is part of the I/O device interface.
We refer to this circuit as a DMA controller. The DMA controller performs the functions that
would normally be carried out by the CPU when accessing the main memory. During a DMA transfer, the CPU is idle or can be utilized to execute another program, and it has no control over the memory buses. The DMA controller takes over the buses to manage the transfer directly between the I/O device and the main memory.
The CPU can be placed in an idle state using two special control signals, HOLD and
HLDA (hold acknowledge). Figure 7.13 shows two control signals in the CPU that characterize
the DMA transfer. The HOLD input is used by the DMA controller to request the CPU to release
control of buses.
When this input is active, the CPU suspends the execution of the current instruction and places
the address bus, the data bus and the read/write line into a high-impedance state. The high-
impedance state behaves like an open circuit, which means that the output line is disconnected
from the input line and does not have any logic significance. The CPU activates the HLDA
output to inform the external DMA controller that the buses are in the high-impedance state.
Control of the buses is then taken by the DMA controller that generated the bus request, and it conducts memory transfers without processor intervention. After the transfer of data, the DMA controller disables the HOLD line. The CPU then disables the HLDA line, regains control of the buses and returns to its normal operation.
Q9. What is parallel processing? What is arithmetic pipelining? What is Vector
processing? Explain how matrix multiplication is performed using vector processing.

Ans: Parallel Processing: Parallel processing is an efficient form of information processing which emphasizes the exploitation of concurrent events in the computing process. Concurrency implies parallelism, simultaneity and pipelining.
• Parallel events may occur in multiple resources during the same time interval.
• Simultaneous events may occur at the same time instant.
• Pipelined events may occur in overlapped time spans.
According to the levels of processing, Handler (1977) proposed the following classification:
• Arithmetic pipeline
• Instruction pipeline
• Processor pipeline
What is arithmetic pipelining?
Arithmetic pipeline: An arithmetic pipeline divides an arithmetic operation, such as a multiply, into multiple steps, each of which is executed in a different stage of the ALU. Examples include the 4-stage pipeline used in the Star-100 and the 8-stage pipeline used in the TI-ASC.
Instruction pipeline: The execution of a stream of instructions can be pipelined by
overlapping the execution of the current instruction with the fetch, decode and operand fetch of
subsequent instructions. All high-performance computers are now equipped with this pipeline.
Processor pipeline: Pipelined processing of the same data stream by a cascade of processors, each of which handles a specific task. There is no widely used practical example of this type.
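As a rough model of why pipelining helps: a k-stage pipeline processing n operations needs about k + n - 1 stage-times (k to fill the pipe, then one result per stage-time), versus k * n when each operation uses all stages serially. The stage and task counts below are illustrative assumptions:

```python
# Sketch: stage-times needed with and without a k-stage pipeline for n operations.

def pipelined_cycles(k, n):
    return k + n - 1          # fill the pipe (k), then one result per cycle

def unpipelined_cycles(k, n):
    return k * n              # each operation passes through all k stages serially

k, n = 4, 100                 # e.g. a 4-stage arithmetic pipeline (Star-100 style)
print(unpipelined_cycles(k, n))                           # 400
print(pipelined_cycles(k, n))                             # 103
print(round(unpipelined_cycles(k, n) / pipelined_cycles(k, n), 1))  # 3.9
```

For large n the speedup approaches k, the number of stages, which is the theoretical limit of pipelining.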
Q10. Using a schematic diagram, discuss the DMA Transfer mechanism.

Ans: In DMA transfer, I/O devices can directly access the main memory without intervention
by the processor. Figure 7.14 shows a typical DMA system. The sequence of events involved in a DMA transfer between an I/O device and the main memory is discussed next.
A DMA request signal from an I/O device starts the DMA sequence. The DMA controller activates the HOLD line and then waits for the HLDA signal from the CPU. On receipt of HLDA, the controller sends a DMA ACK (acknowledgement) signal to the I/O device and takes control of the memory buses from the CPU. Before releasing control of the buses to the controller, the CPU initializes the address register with the starting memory address of the block of data, the word-count register with the number of words to be transferred, and the operation type (read or write). The I/O device can then communicate with memory through the data bus for direct data
transfer. For each word transferred, the DMA controller increments its address-register and
decrements its word-count register. After each word transfer, the controller checks the DMA request line. If this line is high, the next word of the block transfer is initiated, and the process continues until the word-count register reaches zero (i.e., the entire block has been transferred). At that point, the DMA controller stops any further transfer and removes its HOLD signal. It also informs the CPU of the termination by means of an interrupt through the INT line. The CPU then regains control of the memory buses and resumes the program that initiated the I/O operation.
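The controller's per-word bookkeeping can be sketched as a short loop. This is a minimal model, not a hardware description: memory is a Python list, and the address and word-count registers are local variables standing in for the controller's registers.

```python
# Sketch of the DMA controller's transfer loop: the CPU pre-loads the
# address and word-count registers; the controller then moves words
# into memory and raises INT when the count reaches zero.

def dma_block_transfer(memory, device_words, start_addr):
    addr_reg = start_addr              # initialized by the CPU
    count_reg = len(device_words)      # word-count register, set by the CPU
    for word in device_words:          # one word per DMA memory cycle
        memory[addr_reg] = word
        addr_reg += 1                  # controller increments the address
        count_reg -= 1                 # ...and decrements the word count
    interrupt_cpu = (count_reg == 0)   # INT signals transfer completion
    return memory, interrupt_cpu

memory = [0] * 8
memory, done = dma_block_transfer(memory, [10, 20, 30], start_addr=4)
print(memory)   # [0, 0, 0, 0, 10, 20, 30, 0]
print(done)     # True: word count hit zero, so the CPU is interrupted
```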

Q11. Mention few advantages of DMA.

Ans: DMA is a hardware method of data transfer, whereas programmed I/O and interrupt I/O are software methods. DMA mode has the following advantages:
1. High-speed data transfer is possible, since the CPU is not involved during the actual transfer, which occurs directly between the I/O device and the main memory.
2. Parallel processing can be achieved between CPU processing and the DMA controller's I/O operation.
Q12. What are cycle stealing and block (burst) transfer?

Ans: Memory accesses by the CPU and the DMA controllers are interleaved. Requests by DMA devices to use the memory buses are always given higher priority than processor requests. Among different DMA devices, top priority is given to high-speed peripherals such as a disk, a high-speed network interface or a graphics display device. Since the CPU originates most memory access cycles, the DMA controller can be said to "steal" memory cycles from the CPU. Hence, this interleaving technique is usually called cycle stealing.

When the DMA controller is the master of the memory buses, a block of memory words is transferred continuously, without interruption. This mode of DMA transfer is known as block (burst) transfer. It is needed for fast devices such as magnetic disks, where data transmission cannot be stopped or slowed down until an entire block is transferred.

Q13. Write short notes on


1. Daisy Chaining Method.
2. Polling or Rotating Priority Method.
3. Fixed Priority or Independent Request Method.

Ans: 1. Daisy Chaining Method: The daisy chaining method is a centralized bus arbitration
method. During any bus cycle, the bus master may be any device - the processor or any DMA
controller unit, connected to the bus. Figure 7.15 illustrates the daisy chaining method.
All devices are effectively assigned static priorities according to their locations along a bus grant
control line (BGT). The device closest to the central bus arbiter is assigned the highest priority.
Requests for bus access are made on a common request line, BRQ. Similarly, the common
acknowledge signal line (SACK) is used to indicate that the bus is in use. When no device is using the bus, SACK is inactive. The central bus arbiter propagates a bus grant signal (BGT) if the BRQ line is high and the acknowledge signal (SACK) indicates that the bus is idle. The BGT signal travels down the chain; a device that has not issued a bus request simply passes it on to the next device in the line. The first device along the chain that has issued a bus request receives the BGT signal and stops its further propagation. That device activates SACK, setting the bus-busy flag in the bus arbiter, and assumes bus control. On completion, it releases SACK, resetting the bus-busy flag in the arbiter, and a new BGT signal is generated if other requests are outstanding (i.e., BRQ is still active).
The main advantage of the daisy chaining method is its simplicity. Another advantage is
scalability. The user can add more devices anywhere along the chain, up to a certain maximum
value.
2. Polling or Rotating Priority Method : In this method, the devices are assigned unique
priorities and compete to access the bus, but the priorities are dynamically changed to give every
device an opportunity to access the bus. This dynamic priority algorithm generalizes the daisy
chain implementation of static priorities discussed above. Recall that in the daisy chain scheme
all devices are given static and unique priorities according to their positions on a bus-grant line
(BGT) emanating from a central bus arbiter. However, in the polling scheme, no central bus
arbiter exists, and the bus-grant line (BGT) is connected from the last device back to the first in a
closed loop (Fig. 7.16). Whichever device is granted access to the bus serves as bus arbiter for the
following arbitration (an arbitrary device is selected to have initial access to the bus). Each
device’s priority for a given arbitration is determined by that device’s distance along the bus-
grant line from the device currently serving as bus arbiter; the latter device has the lowest
priority. Hence, the priorities change dynamically with each bus cycle.
The main advantage of this method is that it does not favor any particular device or processor.
The method is also quite simple.
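The rotating scheme above can be sketched as a small arbitration function. This is an illustrative model: device ids, the request set, and the "distance from the last-granted device" rule follow the description in the text, with the just-served device getting the lowest priority.

```python
# Sketch of rotating (polling) priority: after each grant, priorities rotate
# so the device just served becomes lowest priority for the next round.

def rotating_arbiter(requests, last_granted, n_devices):
    """requests: set of requesting device ids. The device just after
    `last_granted` (mod n_devices) has the highest priority this round."""
    for offset in range(1, n_devices + 1):
        candidate = (last_granted + offset) % n_devices
        if candidate in requests:
            return candidate
    return None                        # no device is requesting the bus

# Devices 1 and 3 request; device 1 was served last, so device 3 wins next.
print(rotating_arbiter({1, 3}, last_granted=1, n_devices=4))   # 3
# If device 3 was served last, device 1 wins (the grant wraps around the loop).
print(rotating_arbiter({1, 3}, last_granted=3, n_devices=4))   # 1
```

Feeding each grant back in as `last_granted` is what makes the priorities change dynamically with every bus cycle.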
3. Fixed Priority or Independent Request Method : In the independent request method, bus control passes from one device to another only through the centralized bus arbiter. Figure 7.17 shows the independent request method. Each device has a dedicated BRQ output line and BGT input line. If there are m devices, the bus arbiter has m BRQ inputs and m BGT outputs. The arbiter follows a fixed priority order, assigning a different priority level to each device. At a given time, the arbiter issues a bus grant (BGT) to the highest-priority device among those that have issued bus requests. This scheme needs more hardware but gives a faster response.
Q14. Differentiate isolated I/O and memory mapped I/O.
Answer: (a) In isolated (I/O-mapped) I/O, computers use one common address bus and data bus to transfer information between the CPU and either memory or I/O, but use separate read-write control lines, one set for memory and another for I/O. In memory-mapped I/O, computers use a single set of read and write lines, along with the same address and data buses, for both memory and I/O devices.
(b) The isolated I/O technique isolates all I/O interface addresses from the addresses assigned to memory, whereas memory-mapped I/O does not distinguish between memory and I/O addresses.
(c) In isolated I/O, processors use different instructions for accessing memory and I/O devices. In memory-mapped I/O, processors use the same set of instructions for both.
(d) Thus, the hardware cost is higher in isolated I/O than in memory-mapped I/O, because two separate sets of read-write lines are required in the first technique.
Q15. Differentiate between polled I/O and interrupt driven I/O.
Answer: (a) In the polled (programmed) I/O method, the CPU stays in the polling program until the I/O device indicates that it is ready for data transfer, so the CPU is kept busy needlessly. In the interrupt-driven I/O method, the CPU can carry on its own task of instruction execution and is informed by an interrupt signal when a data transfer is needed.
(b) Polled I/O is a low-cost and simple technique, whereas interrupt-driven I/O is relatively costly and complex, because in the second method a device controller is used to continuously monitor the device status and raise an interrupt to the CPU as soon as the device is ready for data transfer.
(c) The polled I/O method is particularly useful in small, low-speed computers or in systems that are dedicated to monitoring a device continuously. The interrupt-driven method, however, is very useful in modern high-speed computers.
Q16. Discuss the advantage of interrupt-initiated I/O over programmed I/O.
Answer: In the programmed I/O method, the program constantly monitors the device status.
Thus, the CPU stays in the program until the I/O device indicates that it is ready for data transfer.
This is a time-consuming process, since it keeps the CPU busy needlessly. It can be avoided by
letting the device controller continuously monitor the device status and raise an interrupt to the
CPU as soon as the device is ready for data transfer. Upon detecting the external interrupt signal,
the CPU momentarily stops the task it is processing, branches to an interrupt-service-routine
(ISR) or I/O routine or interrupt handler to process the I/O transfer, and then after completion of
I/O transfer, returns to the task it was originally performing. Thus, in the interrupt-initiated mode, the ISR software (i.e. the CPU) performs the data transfer but is not involved in checking whether the device is ready for transfer. Therefore, the CPU's execution time can be used efficiently by employing it to execute normal programs while no data transfer is required.

Q17. What are the different types of interrupt? Give examples.


Answer: There are basically three types of interrupts: external, internal (or trap) and software interrupts.
External interrupt: These are initiated through the processor's interrupt pins by external devices. Examples include interrupts from input-output devices and console switches. External interrupts can be divided into two types: maskable and non-maskable.
Maskable interrupts: The user program can enable or disable all or a few device interrupts by executing instructions such as EI or DI.
Non-maskable interrupts: The user program cannot disable these by any instruction. Common examples are hardware-error and power-fail interrupts. This type of interrupt has higher priority than maskable interrupts.
Internal interrupt: This type of interrupt is activated internally by exceptional conditions. Interrupts caused by overflow, division by zero and execution of an illegal op-code are common examples of this category.
Software interrupts: A software interrupt is initiated by executing an instruction such as INT n in a program, where n identifies the interrupt type, which is used to locate the starting address of the corresponding service routine. This type of interrupt is used to call the operating system. Software interrupt instructions allow switching from user mode to supervisor mode.
Q18. What are the differences between vectored and non-vectored interrupt?
Answer: In a vectored interrupt, the source that interrupts supplies the branch information
(starting address of ISR) to the CPU. This information is called the interrupt vector, which is not
any fixed memory location. The processor can identify individual devices even if they share a single interrupt-request line, so the set-up time is very small.
In a non-vectored interrupt, the branch address (starting address of the ISR) is assigned to a fixed location in memory. Since the identities of the requesting devices are not known initially, the set-up time is quite large.
Q19. “Interrupt request is serviced at the end of current instruction cycle while DMA
request is serviced almost as soon as it is received, even before completion of current
instruction execution.” Explain.
Answer: In interrupt-initiated I/O, an interrupt request is serviced at the end of the current instruction cycle, because the processor itself takes part in the I/O transfer for which it was interrupted, and it can branch to the ISR only at an instruction boundary; the processor is then busy with the data transfer. In a DMA transfer, however, the processor is not involved during the data transfer; it only initiates it. The whole transfer is supervised by the DMA controller, and the processor is free to carry on its own task of instruction execution. Since DMA needs only the memory buses and does not alter processor state, the request can be granted as soon as the current bus cycle ends, even before the current instruction completes.
Q20. Give the main reason why DMA based I/O is better in some circumstances than
interrupt driven I/O?
Answer: To transfer large blocks of data at high speed, the DMA method is used. A special DMA controller is provided to allow the transfer of a block of data directly between a high-speed external device, such as a magnetic disk, and the main memory, without continuous intervention by the CPU. For such devices, data transmission cannot be stopped or slowed down until an entire block is transferred; this mode of DMA transfer is known as burst transfer. Interrupt-driven I/O, which involves the CPU in every word transferred, cannot sustain such rates.
