
1. What is the difference between isolated I/O and memory-mapped I/O? State the advantages and disadvantages of each.


The CPU needs to communicate with memory and with input-output (I/O) devices, and data flows between the processor and these devices over the system bus. The system bus can be organized in three ways:
1. Separate address, control, and data buses for I/O and for memory.
2. A common data and address bus for I/O and memory, but separate control lines.
3. A common bus (data, address, and control) for both I/O and memory.
The first case is simple because I/O and memory have different address spaces and instructions, but it requires more buses.

Isolated I/O –
In isolated I/O, memory and I/O share a common data and address bus, but separate read and write control lines are provided for I/O. When the CPU decodes an instruction whose data is destined for I/O, it places the address on the address lines and asserts the I/O read or I/O write control line, causing a data transfer between the CPU and the I/O device. The scheme is called "isolated" because the address spaces of memory and I/O are kept separate. The I/O addresses are called ports, and different read/write instructions are used for I/O and for memory.

Memory-Mapped I/O –

Here every bus (data, address, and control) is common, so the same set of instructions works for both memory and I/O. I/O devices are manipulated exactly like memory and share the same address space, which reduces the addressable memory capacity because part of the address space is occupied by I/O.

Differences between memory mapped I/O and isolated I/O –


Advantages of memory-mapped I/O:
Faster I/O Operations: Memory-mapped I/O lets the CPU access I/O devices with the same instructions and bus cycles it uses for memory, so I/O operations can be performed faster than with isolated I/O.
Simplified Programming: Memory-mapped I/O simplifies programming as the
same instructions can be used to access memory and I/O devices. This means
that software developers do not have to use specialized I/O instructions, which
can reduce programming complexity.
Unified Address Space: I/O devices share the memory address space, so the same addressing hardware and the same instructions serve both memory and I/O devices.
Disadvantages of Memory-Mapped I/O:
Limited I/O Address Space: Memory-mapped I/O limits the I/O address space
as I/O devices share the same address space as the memory. This means that
there may not be enough address space available to address all I/O devices.
Slower Response Time: If an I/O device is slow to respond, it can delay the
CPU’s access to memory. This can lead to slower overall system performance.
Advantages of Isolated I/O:
Large I/O Address Space: Isolated I/O allows for a larger I/O address space
compared to memory-mapped I/O as I/O devices have their own separate
address space.
Greater Flexibility: Isolated I/O provides greater flexibility as I/O devices can be
added or removed from the system without affecting the memory address
space.
Improved Reliability: Isolated I/O provides better reliability as I/O devices do
not share the same address space as the memory. This means that if an I/O
device fails, it does not affect the memory or other I/O devices.
Disadvantages of Isolated I/O:
Slower I/O Operations: Isolated I/O can result in slower I/O operations
compared to memory-mapped I/O as it requires the use of specialized I/O
instructions.
More Complex Programming: Isolated I/O requires specialized I/O instructions,
which can lead to more complex programming.
Applications:
Memory-mapped I/O applications:
Graphics processing: Memory-mapped I/O is often used in graphics cards to
provide fast access to frame buffers and control registers. The graphics data is
mapped directly to memory, allowing the CPU to read from and write to the
graphics card as if it were accessing regular memory.
Network communication: Network interface cards (NICs) often utilize memory-
mapped I/O to transfer data between the network and the system memory.
The NIC registers are mapped to specific memory addresses, enabling efficient
data transfer and control over network operations.
Direct memory access (DMA): DMA controllers employ memory-mapped I/O to
facilitate high-speed data transfers between devices and system memory
without CPU intervention. By mapping the DMA controller registers to memory,
data can be transferred directly between devices and memory, reducing CPU
overhead.
Isolated I/O applications:
Embedded systems: Isolated I/O is commonly used in embedded systems
where strict isolation between the CPU and peripherals is necessary. This
includes applications such as industrial control systems, robotics, and
automotive electronics. Isolation ensures that any faults or malfunctions in
peripheral devices do not affect the stability of the entire system.
Microcontrollers: Microcontrollers often rely on isolated I/O to interface with
various peripherals, such as sensors, actuators, and displays. Each peripheral is
assigned a separate I/O port, allowing the microcontroller to control and
communicate with multiple devices independently.
Real-time systems: Isolated I/O is preferred in real-time systems that require
precise timing and deterministic behavior. By isolating the I/O operations,
these systems can maintain strict control over the timing and synchronization
of external events, ensuring reliable and predictable performance.

2. With a neat sketch, explain the working principle of DMA.


Direct Memory Access
DMA (Direct Memory Access) is a feature of a computer system that transfers data between memory and peripheral devices (such as hard drives) without CPU intervention. For large transfers, such as those to and from disk drives, it is wasteful to tie up an expensive general-purpose processor watching status bits and feeding data into a controller register one byte at a time; that approach is called programmed I/O. To avoid burdening the CPU, computers shift this work to a DMA controller. Let's see how this works in detail.
To initiate a DMA transfer, the host writes a DMA command block into memory. This block contains a pointer to the source of the transfer, a pointer to the destination, and a count of the number of bytes to be transferred. The command block can be more complex, containing a list of source and destination addresses that are not contiguous. The CPU writes the address of this command block to the controller and goes on to other work. The DMA controller then operates the memory bus directly, placing addresses on it without intervention by the CPU. A simple DMA controller is a standard component in all modern computers.
Block Diagram of DMA
Handshaking between the device controller and the DMA controller is performed via a pair of wires called DMA-request and DMA-acknowledge. Let's see the role of these wires in a DMA transfer.
Working of DMA Transfer
The device controller places a signal on the DMA-request wire when a word of data is available for transfer. This causes the DMA controller to seize the memory bus of the CPU and place the desired address on the DMA-acknowledge wire. Upon successful data transfer, the device controller receives the DMA-acknowledge signal and then removes the DMA-request signal.
When the entire transfer is finished, the DMA controller interrupts the CPU. This process is depicted in the diagram above. While the DMA controller holds the memory bus, the CPU is momentarily prevented from accessing main memory, although it can still access data items in its cache. This cycle stealing (seizing the memory bus temporarily and blocking CPU access) slows CPU computation, but shifting the data transfer to the DMA controller generally improves total system performance. Some computer architectures use physical memory addresses for DMA, while others use virtual addresses (DVMA); direct virtual memory access can perform a transfer between two memory-mapped devices without going through main memory.

Executing a computer program requires the synchronized working of several components of a computer: processors providing the necessary control information and addresses, buses transferring information and data between memory and I/O devices, and so on. An interesting aspect of a system is how it handles the transfer of information among processor, memory, and I/O devices. Usually the processor controls the whole transfer, from initiating it to storing the data at its destination. This loads the processor, which then sits idle much of the time, decreasing the efficiency of the system. To speed up the transfer of data between I/O devices and memory, the DMA controller acts as a station master, transferring data with minimal intervention from the processor.

3. Explain the functions of a typical input-output interface.


Introduction to input-output interfaces:
An input-output interface is the method used to transfer information between internal storage (memory) and external peripheral devices. A peripheral device is one that provides input to or output from the computer; such devices are also called input-output devices. For example, a keyboard and a mouse provide input to the computer and are called input devices, while a monitor and a printer produce output from the computer and are called output devices. Some peripherals, such as external hard drives, can provide both input and output.
In a microcomputer-based system, peripheral devices require special communication links to interface them with the CPU. These links are needed to resolve the differences between the peripheral devices and the CPU.
The major differences are as follows:
1. Peripheral devices are electromechanical and electromagnetic in nature, while the CPU is electronic, so their modes of operation differ considerably.
2. A synchronization mechanism is needed because the data transfer rate of peripheral devices is slower than that of the CPU.
3. The data codes and formats used in peripheral devices differ from those used in the CPU and memory.
4. The operating modes of peripheral devices differ from one another, and each must be controlled so as not to disturb the operation of the other peripherals connected to the CPU.
Additional hardware is therefore needed to resolve the differences between the CPU and the peripheral devices and to supervise and synchronize all input and output devices.
Functions of the Input-Output Interface:
1. It synchronizes the operating speed of the CPU with that of the input-output devices.
2. It selects the input-output device appropriate to the input-output signal being interpreted.
3. It provides signals such as control and timing signals.
4. It makes data buffering possible through the data bus.
5. It provides error detection.
6. It converts serial data into parallel data and vice versa.
7. It converts digital data into analog signals and vice versa.

4. What is DMA? Write its advantages, with an example.


Direct Memory Access (DMA)
What is DMA?
Direct Memory Access (DMA) is a feature or method used in computer
systems that allows certain hardware components to access the system's main
memory (RAM) without requiring the central processing unit (CPU) for each
data transfer. DMA is generally used to improve data transfer efficiency
between peripheral devices and memory.
Advantages of DMA
Direct Memory Access (DMA) has various advantages in computer systems, especially for data transfer tasks. Its major advantages include:
o Reduced CPU Overhead: One of the key benefits of DMA is that it decreases the CPU's participation in data transfer activities. The CPU can delegate these processes to the DMA controller, freeing up time for more vital tasks. This decrease in CPU overhead results in more efficient use of the CPU's processing capability.
o Improved System Performance: DMA can enhance system performance
by offloading data transfer activities off the CPU. The CPU is freed up to
undertake more sophisticated and time-sensitive activities, perhaps
making the system quicker and more responsive.
o Faster Data Transfers: DMA can transmit data between memory and
peripheral devices at or near the maximum speed enabled by the
hardware. This implies that data may be transported rapidly and
effectively, improving the total data transfer rates of the system.
o Efficient Resource Utilization: DMA allows numerous peripheral devices
to access memory simultaneously without the need for CPU involvement.
This effective utilization of system resources guarantees that several
devices may run in parallel, decreasing memory access congestion.
o Streamlined Data flow: DMA controllers are skilled at handling data flow
in blocks or streams. This simplified method to data transport decreases
latency and eliminates bottlenecks.
o Parallel Operation: DMA allows for parallel operation. The CPU can execute other instructions and operations while data transfers are taking place. This parallel processing improves system efficiency.
o High-Volume Data Transfer Support: DMA is very beneficial in
circumstances involving huge amounts of data, such as disc I/O, network
connectivity, and multimedia processing. It guarantees that these data
transfers happen quickly and without overburdening the CPU.
o Reduced Latency: DMA controllers can reply fast to peripheral device
requests, lowering data transfer latency. This is particularly critical for
real-time and high-performance applications.
o Error Handling: To preserve data integrity during transfers, many DMA
controllers include error detection and repair techniques. They can
notify faults to the CPU so that necessary action can be taken.
o Interrupt Generation: When a data transfer is completed, DMA
controllers can produce an interruption to tell the CPU. This gives the
CPU the ability to conduct post-transfer duties or respond to possible
problems.
In conclusion, DMA provides several benefits that lead to increased
system efficiency, quicker data transfers, and greater utilization of system
resources. It is a fundamental mechanism in modern computer systems
for optimizing data transfer activities.

Disadvantages of DMA
While Direct Memory Access (DMA) has several advantages, it is crucial
to note the following drawbacks and limitations:
o Complexity: DMA controllers and their setups can be complicated.
Setting up DMA correctly and guaranteeing device compatibility may
need knowledge and careful planning.
o Hardware Support: DMA functioning is dependent on hardware support.
Not all devices or systems have DMA capabilities, and it may not be
implemented as effectively as needed in some circumstances.
o Resource Contention: Resource contention can occur in systems with
several devices competing for DMA access. This conflict can cause delays
and potentially impair data transfer efficiency.
o Data Corruption Risk: Improperly setup or handled DMA transfers might
lead to data corruption. Data integrity necessitates careful programming
and error management.
o Interrupt Handling: While DMA might decrease CPU participation in data
transfers, it can also present a new challenge: controlling DMA-related
interruptions. This necessitates more CPU processing and can have an
impact on overall system performance.
o Limited Flexibility: DMA controllers may be limited in their ability to
manage data flows. Some may not support specific transfer modes or
may limit the sorts of devices that may utilize DMA.
o Security Concerns: If not adequately protected, DMA can be used for nefarious purposes. DMA attacks, for example, can be used to retrieve sensitive information from memory.
o Compatibility Issues: When working with older hardware or specialized
devices, DMA compatibility may be an issue. Older DMA standards may
not be completely supported by newer systems, and guaranteeing
compatibility might be difficult.
o Debugging Complexities: DMA transfers can be harder to debug than CPU-bound tasks because they involve lower-level hardware interactions.
o Overhead for Small Transfers: Using DMA for extremely short data
transfers may create more overhead than it saves. In some
circumstances, the CPU may be able to do the function more efficiently.
o Complex Programming: DMA programming can be complicated,
requiring careful control of memory locations, data sizes, and transfer
modes. DMA programming errors can cause system instability.
o CPU Access Latency: When a DMA controller controls the system bus,
the CPU may encounter some latency when accessing memory. This
delay might affect the system's responsiveness.
It is vital to analyze the advantages and disadvantages of DMA in the
context of a certain application or system. In many circumstances, DMA
provides enormous benefits; nevertheless, careful study and design are
necessary to provide the optimum performance and data integrity.

5. Explain the various types of peripheral devices.


Peripheral Devices in Computer Organization
Peripheral devices are not essential for the computer to perform its basic tasks; they can be thought of as enhancements to the user's experience. A peripheral device is a device connected to a computer system but not part of the core computer architecture. More loosely, the term peripheral often refers to any device external to the computer case.
What Does Peripheral Device Mean?
A Peripheral Device is defined as a device that
provides input/output functions for a computer and serves as an
auxiliary computer device without computing-intensive functionality.
A peripheral device is also called a peripheral, computer peripheral,
input-output device, or I/O device.
Classification of Peripheral Devices
Peripheral devices are generally classified into the categories given below:
1. Input Devices:
An input device converts incoming data and instructions into a pattern of electrical signals in binary code that is comprehensible to a digital computer. Examples: keyboard, mouse, scanner, microphone, etc.

 Keyboard: A keyboard is an input device that allows users to enter text and commands into a computer system.
 Mouse: A mouse is an input device that allows users to control the cursor on a computer screen.
 Scanner: A scanner is an input device that allows users to convert physical documents and images into digital files.
 Microphone: A microphone is an input device that allows users to record audio.
2. Output Devices:
An output device generally performs the reverse of the input process, translating digitized signals into a form intelligible to the user. Output devices are also used for sending data from one computer system to another. For some time punched-card and paper-tape readers were extensively used for input, but these have now been supplanted by more efficient devices. Examples: monitors, headphones, printers, etc.

 Monitor: A monitor is an output device that displays visual information from a computer system.
 Printer: A printer is an output device that produces physical copies of documents or images.
 Speaker: A speaker is an output device that produces audio.

3. Storage Devices:
Storage devices store the data the system needs to perform its operations, and they are among the most essential classes of device. Examples: hard disk, magnetic tape, flash memory, etc.

Hard Drive: A hard drive is a storage device that stores data and files on
a computer system.

USB Drive: A USB drive is a small, portable storage device that connects
to a computer system to provide additional storage space.

Memory Card: A memory card is a small, portable storage device that is


commonly used in digital cameras and smartphones.

External Hard Drive: An external hard drive is a storage device that


connects to a computer system to provide additional storage space.

4. Communication Devices:
Communication devices are used to connect a computer system to other
devices or networks. Examples of communication devices include:

 Modem: A modem is a communication device that allows a computer system to connect to the internet.
 Network Card: A network card is a communication device that allows a computer system to connect to a network.
 Router: A router is a communication device that allows multiple devices to connect to a network.
Advantages of Peripheral Devices
Peripheral devices add capabilities that make the system easier to operate. These are given below:
 They make it easy to supply input to the system.
 They provide the system's output.
 Storage devices hold information and data.
 They improve the efficiency of the system.

6. Explain how I/O devices are accessed in a system.

• The important components of any computer system are the CPU, memory and I/O devices (peripherals). The CPU fetches instructions (opcodes and operands/data) from memory, processes them, and stores results in memory. The other components of the computer system (the I/O devices) may be loosely called the input/output system.
• The main function of the I/O system is to transfer information between the CPU or memory and the outside world.
• The important point to note is that I/O devices (peripherals) cannot be connected directly to the system bus, for the reasons discussed here.
1. A variety of peripherals with different methods of operation are
available. So it would be impractical to incorporate the necessary logic
within the CPU to control a range of devices.
2. The data transfer rate of peripherals is often much slower than that of
the memory or CPU. So it is impractical to use the high speed system bus
to communicate directly with the peripherals.
3. Generally, the peripherals used in a computer system have different
data formats and word lengths than that of CPU used in it.
• So to overcome all these difficulties, it is necessary to use a module in
between system bus and peripherals, called I/O module or I/O system, or
I/O interface.
The functions performed by an I/O interface are:
1. Handle data transfer between much slower peripherals and CPU or
memory.
2. Handle data transfer between CPU or memory and peripherals having
different data formats and word lengths.
3. Match signal levels of different I/O protocols with computer signal
levels.
4. Provides necessary driving capabilities - sinking and sourcing currents.
Requirements of I/O System
• The I/O system is nothing but the hardware required to connect an I/O device to the bus. It is also called the I/O interface. The major requirements of an I/O interface are:
1. Control and timing
2. Processor communication
3. Device communication
4. Data buffering
5. Error detection
• The important blocks necessary in any I/O interface are shown in Fig.
8.6.1.

• As shown in Fig. 8.6.1, the I/O interface consists of a data register, a status/control register, an address decoder and external device interface logic.
• The data register holds the data being transferred to or from the
processor.
• The status/control register contains information relevant to the
operation of the I/O device. Both data and status/control registers are
connected to the data bus.
• Address lines drive the address decoder. The address decoder enables
the device to recognize its address when address appears on the address
lines.
• The external device interface logic accepts inputs from address decoder,
processor control lines and status signal from the I/O device and
generates control signals to control the direction and speed of data
transfer between processor and I/O devices.
• The Fig. 8.6.2 shows the I/O interface for input device and output
device. Here, for simplicity block schematic of I/O interface is shown
instead of detail connections.
• The address decoder enables the device when its address appears on
the address lines.
• The data register holds the data being transferred to or from the
processor.
• The status register contains information relevant to the operation of
the I/O device.
• Both the data and status registers are assigned with unique addresses
and they are connected to the data bus.

7. Draw a block diagram of a DMA controller and explain its functioning.
In modern computer systems, transferring data between input/output devices and memory can be slow if the CPU must manage every step. A Direct Memory Access (DMA) controller solves this by allowing I/O devices to transfer data directly to memory, reducing CPU involvement. This increases system efficiency and speeds up data transfers, freeing the CPU to focus on other tasks. The DMA controller uses the same kind of interface circuitry as any other interface to communicate with the CPU and the I/O devices.
What is a DMA Controller?
The hardware that performs direct memory access is called a DMA controller. Its job is to transfer data between input-output devices and main memory with very little interaction with the processor. The DMA controller is a control unit dedicated to transferring data.
DMA Controller in Computer Architecture
The DMA controller is a type of control unit that works as an interface between the data bus and the I/O devices. As mentioned, it transfers data without the intervention of the processor, although the processor can still control the transfer. The DMA controller also contains an address unit, which generates the address and selects an I/O device for the transfer of data. The block diagram of the DMA controller is shown here.
Block Diagram of DMA Controller
Types of Direct Memory Access (DMA)
There are four popular types of DMA.
 Single-Ended DMA
 Dual-Ended DMA
 Arbitrated-Ended DMA
 Interleaved DMA
Single-Ended DMA: Single-Ended DMA Controllers operate by reading
and writing from a single memory address. They are the simplest DMA.
Dual-Ended DMA: Dual-Ended DMA controllers can read and write from
two memory addresses. Dual-ended DMA is more advanced than single-
ended DMA.
Arbitrated-Ended DMA: Arbitrated-Ended DMA works by reading and
writing to several memory addresses. It is more advanced than Dual-
Ended DMA.
Interleaved DMA: Interleaved DMA controllers read from one memory address and write to another memory address.
Working of DMA Controller
The DMA controller registers have three registers as follows.
 Address register – It contains the address to specify the desired location
in memory.
 Word count register – It contains the number of words to be transferred.
 Control register – It specifies the transfer mode.
Note: All registers in the DMA controller appear to the CPU as I/O interface registers, so the CPU can read and write the DMA registers under program control via the data bus.
The figure below shows the block diagram of the DMA controller. The unit communicates with the CPU through the data bus and control lines. The register within the DMA controller is selected by the CPU through the address bus, by enabling the DS (DMA select) and RS (register select) inputs. RD (read) and WR (write) are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers; when BG is 1, the CPU has relinquished the buses and the DMA controller can communicate directly with memory.
Working Diagram of DMA Controller
Explanation: The CPU initializes the DMA controller by sending the following information through the data bus:
 The starting address of the memory block where the data is available (for a read) or where data is to be stored (for a write).
 The word count, i.e., the number of words in the memory block to be read or written.
 Control bits to define the mode of transfer, such as read or write.
 A control bit to begin the DMA transfer.
Modes of Data Transfer in DMA
There are 3 modes of data transfer in DMA, described below.
 Burst Mode: In burst mode, the DMA controller hands the buses back to the CPU only after the whole block of data has been transferred, not before.
 Cycle Stealing Mode: In cycle stealing mode, the DMA controller hands the buses back to the CPU after the transfer of each byte, so it must continuously re-request bus control. This mode coexists more easily with higher-priority CPU tasks.
 Transparent Mode: In transparent mode, the DMA controller transfers data only while the CPU is executing instructions that do not require the system bus, so it never competes with the CPU for the bus.

Advantages of DMA Controller
 Direct Memory Access speeds up memory operations and data transfer.
 CPU is not involved while transferring data.
 DMA requires very few clock cycles while transferring data.
 DMA distributes workload very appropriately.
 DMA helps the CPU in decreasing its load.
Disadvantages of DMA Controller
 Direct Memory Access is a costly operation because of additional
operations.
 DMA suffers from Cache-Coherence Problems.
 DMA Controller increases the overall cost of the system.

8. Explain the various types of interrupts in detail.


Interrupts play a crucial role in computer devices by allowing the
processor to react quickly to events or requests from external devices or
software. In this article, we are going to discuss every point about
interruption and its various types in detail.
What is an Interrupt?
An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention. It alerts the processor to a high-priority condition requiring interruption of the current working process. For I/O devices, one of the bus control lines, called the interrupt-request line, is dedicated to this purpose; the code the processor runs in response is the Interrupt Service Routine (ISR).
When a device raises an interrupt while, say, instruction i is executing, the processor first completes the execution of instruction i. It then loads the Program Counter (PC) with the address of the first instruction of the ISR. Before loading the Program Counter, the address of the interrupted instruction is saved in a temporary location, so after handling the interrupt the processor can continue with instruction i+1.
While the processor is handling the interrupts, it must inform the device
that its request has been recognized so that it stops sending the
interrupt request signal. Also, saving the registers so that the interrupted
process can be restored in the future, increases the delay between the
time an interrupt is received and the start of the execution of the ISR.
This is called Interrupt Latency.
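The save-and-restore flow described above can be sketched as a toy simulation (all class and instruction names here are illustrative, not a real instruction set): the processor saves the address of the next instruction before jumping to the ISR, then resumes from the saved address.

```python
# Toy sketch of interrupt handling: save the return address, run the ISR,
# then resume the interrupted program at instruction i+1.

class ToyCPU:
    def __init__(self):
        self.pc = 0            # program counter
        self.saved_pc = None   # temporary location for the return address
        self.log = []          # trace of everything executed, in order

    def run(self, program, interrupt_at, isr):
        while self.pc < len(program):
            self.log.append(program[self.pc])   # execute instruction i
            self.pc += 1
            if self.pc == interrupt_at:         # device raised an interrupt
                self.saved_pc = self.pc         # save address of instruction i+1
                for step in isr:                # PC now runs through the ISR
                    self.log.append(step)
                self.pc = self.saved_pc         # resume with instruction i+1

cpu = ToyCPU()
cpu.run(["inst0", "inst1", "inst2"], interrupt_at=1, isr=["isr0", "isr1"])
print(cpu.log)  # ['inst0', 'isr0', 'isr1', 'inst1', 'inst2']
```

The gap between the interrupt and the first ISR step in the trace is where real hardware spends its interrupt latency saving registers.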
Types of Interrupt
Event-related software or hardware can trigger the issuance of interrupt
signals. These fall into one of two categories: software interrupts or
hardware interrupts.
1. Software Interrupts
A sort of interrupt called a software interrupt is one that is produced by
software or a system as opposed to hardware. Traps and exceptions are
other names for software interruptions. They serve as a signal for the
operating system or a system service to carry out a certain function or
respond to an error condition. Generally, software interrupts occur as a
result of specific instructions being used or exceptions in the operation.
In our system, software interrupts often occur when system calls are
made. The fork() system call, for example, generates a software interrupt,
while division by zero raises an exception that also results in a software
interrupt.
A particular instruction known as an “interrupt instruction” is used to
create software interrupts. When the interrupt instruction is used, the
processor stops what it is doing and switches over to a particular
interrupt handler code. The interrupt handler routine completes the
required work or handles any errors before handing back control to the
interrupted application.
2. Hardware Interrupts
In a hardware interrupt, all the devices are connected to the Interrupt
Request Line. A single request line is used for all the n devices. To
request an interrupt, a device closes its associated switch. When a device
requests an interrupt, the value of INTR is the logical OR of the requests
from individual devices.
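The shared request line described above can be sketched in a couple of lines (the function name is made up): INTR is simply the logical OR of the individual device requests.

```python
# Sketch: with a single shared interrupt-request line, INTR is the
# logical OR of the request flags of all n devices on the line.

def intr(requests):
    """requests: list of 0/1 flags, one per device on the shared line."""
    return int(any(requests))

print(intr([0, 0, 0]))  # 0 -> no device is requesting
print(intr([0, 1, 0]))  # 1 -> at least one device pulls the line
print(intr([1, 1, 1]))  # 1 -> still just one asserted line
```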
Hardware interrupts are further divided into two types:
 Maskable Interrupt: Hardware interrupts can be selectively enabled and
disabled thanks to an inbuilt interrupt mask register that is commonly
found in processors. A bit in the mask register corresponds to each
interrupt signal; on some systems, the interrupt is enabled when the bit
is set and disabled when the bit is clear, but on other systems, the
interrupt is deactivated when the bit is set.
 Spurious Interrupt: A hardware interrupt for which there is no source is
known as a spurious interrupt. This phenomenon might also be referred
to as phantom or ghost interrupts. When a wired-OR interrupt circuit is
connected to a level-sensitive processor input, spurious interruptions are
typically an issue. When a system performs badly, it could be challenging
to locate these interruptions.
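The per-bit mask register mentioned under maskable interrupts can be sketched with plain bit operations, assuming the convention that a set bit enables the interrupt (as noted above, some systems invert this):

```python
# Sketch of an interrupt mask register: one bit per interrupt line.
# Convention assumed here: bit set = interrupt enabled.

def is_enabled(mask, irq):
    return (mask >> irq) & 1 == 1

mask = 0                     # all interrupts disabled
mask |= 1 << 3               # enable IRQ 3
print(is_enabled(mask, 3))   # True
mask &= ~(1 << 3)            # disable (mask off) IRQ 3 again
print(is_enabled(mask, 3))   # False
```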
Vectored Interrupt
 The devices raising the vectored interrupt identify themselves directly to
the processor. So instead of wasting time identifying which device has
requested an interrupt, the processor immediately starts executing the
corresponding interrupt service routine for the requested interrupt.
 Now, to identify themselves directly to the processors either the device
request with its own interrupt request signal or by sending a special code
to the processor which helps the processor in identifying which device
has requested an interrupt.
 Usually, a permanent area in memory is allotted to hold the starting
address of each interrupt service routine. The addresses referring to the
interrupt service routines are termed interrupt vectors, and together
they constitute an interrupt vector table. Now how does it work?
 The device requesting an interrupt sends a specific interrupt request
signal or a special code to the processor. This information acts as a
pointer into the interrupt vector table, and the corresponding address
(the address of the specific interrupt service routine required to
service the interrupt raised by the device) is loaded into the program
counter.
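Vectored dispatch can be sketched as a table lookup (the device codes and ISR addresses below are hypothetical): the device's code indexes the vector table, and the selected entry is loaded straight into the program counter.

```python
# Sketch of an interrupt vector table: device code -> ISR start address.
# Addresses and device assignments here are made up for illustration.

VECTOR_TABLE = {0: 0x1000,   # keyboard ISR
                1: 0x1040,   # disk ISR
                2: 0x1080}   # timer ISR

def dispatch(device_code):
    pc = VECTOR_TABLE[device_code]   # load PC with the ISR's start address
    return pc

print(hex(dispatch(1)))  # 0x1040 -> processor jumps straight to the disk ISR
```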

Interrupt Nesting
 When the processor is busy in executing the interrupt service routine,
the interrupts are disabled in order to ensure that the device does not
raise more than one interrupt. A similar kind of arrangement is used
where multiple devices are connected to the processor. So that the
servicing of one interrupt is not interrupted by the interrupt raised by
another device.
 What if multiple devices raise interrupts simultaneously? In that case,
the interrupts are prioritized.

 Priority Interrupts
 The I/O devices are organized in a priority structure such that the
interrupt raised by a high-priority device is accepted even while the
processor is servicing the interrupt from a low-priority device.
 A priority level is assigned to the processor, which can be regulated by
the program. Whenever the processor starts executing some program,
its priority level is set equal to the priority of the program in
execution. Thus, while executing the current program, the processor
only accepts interrupts from devices that have a higher priority than
the processor.
Now, when the processor is executing an interrupt service routine, the
processor priority level is set to the priority of the device whose
interrupt the processor is servicing. Thus the processor only accepts
interrupts from devices with higher priority and ignores interrupts from
devices with the same or lower priority. Some bits of the processor's
status register are used to set the priority level of the processor.
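The acceptance rule above reduces to a single comparison, sketched here (names are illustrative): a pending interrupt is taken only if the requesting device's priority is strictly higher than the processor's current level.

```python
# Sketch of priority-interrupt acceptance: accept only strictly
# higher-priority requests; same or lower priority is ignored.

def accept(device_priority, cpu_priority):
    return device_priority > cpu_priority

cpu_priority = 2                # set while servicing a priority-2 device
print(accept(3, cpu_priority))  # True  -> higher-priority device preempts
print(accept(2, cpu_priority))  # False -> same priority is ignored
print(accept(1, cpu_priority))  # False -> lower priority is ignored
```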
Benefits of Interrupt
 Real-time Responsiveness: Interrupts permit a system to reply promptly
to outside events or signals, permitting real-time processing.
 Efficient Resource Usage: Interrupt-driven systems are more efficient
than systems that depend on busy-waiting or polling strategies. Instead of
continuously checking for the occurrence of an event, interrupts permit
the processor to remain idle until an event occurs, conserving processing
power and lowering energy consumption.
 Multitasking and Concurrency: Interrupts allow multitasking with the
aid of allowing a processor to address multiple tasks concurrently.
 Improved System Throughput: By handling events asynchronously,
interrupts allow a device to overlap computation with I/O operations or
other responsibilities, maximizing system throughput and overall
performance.

9.Explain the input-output configuration with interrupts. And


explain the flowchart for interrupt cycle with an example.
Input-Output Configuration with Interrupts
Input-output (I/O) operations are crucial in computer systems for
communicating with external devices. One way to manage I/O operations is
through interrupts, which allow the CPU to be notified when an I/O device
requires attention. This configuration eliminates the need for continuous
polling, saving CPU cycles.
1. Configuration Steps:
o Interrupt-Enabled Devices: Devices that can signal the CPU by
generating an interrupt.
o Interrupt Line: A physical or logical line used by the device to
signal an interrupt.
o Interrupt Handler: A dedicated piece of software (part of the
operating system) designed to handle specific interrupts.
o Interrupt Request (IRQ): A signal sent by a device to the CPU when
it needs processing.
o Interrupt Vector Table: A data structure containing pointers to
interrupt service routines (ISRs) for different interrupt types.
2. Workflow:
o The CPU executes its current tasks.
o A device sends an interrupt signal when it requires attention.
o The CPU halts its current task, saves the state, and jumps to the
appropriate ISR.
o The ISR executes, addressing the device's needs (e.g., data
transfer).
o Once the ISR completes, the CPU restores its state and resumes its
interrupted task.

Flowchart for the Interrupt Cycle


Here’s a breakdown of the flowchart steps:
1. Start: The CPU executes its normal instructions.
2. Interrupt Check: After executing each instruction, the CPU checks if there
is an active interrupt request.
o If no interrupt is detected, it continues with the next instruction.
3. Save CPU State: If an interrupt is detected, the CPU saves the current
program's state (registers, program counter, etc.).
4. Interrupt Service Routine (ISR):
o The CPU determines the interrupt source (using the interrupt
vector table).
o It executes the ISR corresponding to the interrupt.
5. Restore State: After completing the ISR, the CPU restores the saved state.
6. Resume Program: The CPU resumes the program that was interrupted.
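The flowchart steps above can be sketched as a fetch-execute loop that checks a pending-interrupt queue after every instruction (all names below are illustrative):

```python
# Sketch of the interrupt cycle: execute, check for an interrupt, save
# state, run the matching ISR, restore state, resume.

def run(program, pending, isr_table):
    trace = []
    for instr in program:
        trace.append(instr)                 # 1-2. execute an instruction
        if pending:                         #      then check for interrupts
            irq = pending.pop(0)
            trace.append("save state")      # 3. save registers / PC
            trace.append(isr_table[irq])    # 4. run ISR via the vector table
            trace.append("restore state")   # 5. restore the saved state
    return trace                            # 6. program resumed after each ISR

trace = run(["i0", "i1"], pending=[7], isr_table={7: "isr7"})
print(trace)  # ['i0', 'save state', 'isr7', 'restore state', 'i1']
```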

Example of Interrupt Cycle


Scenario: A keyboard sends a signal when a key is pressed.
1. Normal Execution: The CPU executes a user program.
2. Interrupt Triggered: The keyboard controller detects a keypress and
sends an IRQ to the CPU.
3. Save State: The CPU completes its current instruction, disables further
interrupts temporarily, and saves its state.
4. ISR Execution: The CPU uses the interrupt vector to locate the keyboard
ISR, which:
o Reads the keycode from the keyboard buffer.
o Updates the application's input data structure.
o Signals the keyboard controller to reset the interrupt line.
5. Restore State: The CPU restores its saved state.
6. Resume Execution: The CPU continues executing the user program, now
aware of the input.
This process ensures efficient handling of asynchronous events, like
keyboard inputs, without the need for constant polling.
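The keyboard scenario can be sketched as follows (the buffer, handler, and flag names are made up): the controller latches the keycode and raises the line; the ISR reads the keycode, updates the application's input, and acknowledges the controller so the line is released.

```python
# Sketch of the keyboard interrupt example: keypress -> IRQ -> ISR.

keyboard_buffer = []   # controller-side buffer holding latched keycodes
app_input = []         # application's input data structure
irq_line = False       # state of the interrupt-request line

def keypress(code):
    global irq_line
    keyboard_buffer.append(code)   # controller latches the keycode
    irq_line = True                # and asserts the interrupt request

def keyboard_isr():
    global irq_line
    code = keyboard_buffer.pop(0)  # read keycode from the keyboard buffer
    app_input.append(code)         # update the application's input data
    irq_line = False               # signal the controller to reset the line

keypress(65)       # user presses a key ('A')
keyboard_isr()     # CPU vectors to the keyboard ISR
print(app_input, irq_line)  # [65] False
```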

10.Explain the role of interrupts in computer organization?


Role of Interrupts in Computer Organization
Interrupts are a critical mechanism in computer organization, enabling
efficient handling of asynchronous events. They allow the CPU to pause its
current task, address high-priority events (like I/O requests or system errors),
and then resume its previous operation without constant polling. Here's an
in-depth explanation of their role:
1. Efficient CPU Utilization
 Without Interrupts (Polling):
o The CPU continuously checks (polls) devices to see if they need
attention.
o This wastes CPU cycles, especially if devices are idle most of the
time.
 With Interrupts:
o The CPU executes tasks without worrying about devices.
o Devices signal the CPU only when they require attention, allowing
the CPU to perform other tasks in the meantime.
2. Enabling Asynchronous Communication
 Devices operate independently of the CPU, often at different speeds.
 Interrupts provide a way for devices to notify the CPU asynchronously,
ensuring timely responses without synchronization delays.
3. Prioritization of Tasks
 Interrupts are often managed by an Interrupt Controller, which can
prioritize multiple interrupt requests.
 Higher-priority tasks (e.g., emergency system shutdown or critical I/O
events) preempt lower-priority tasks, ensuring the system handles
urgent events promptly.
4. Support for Real-Time Processing
 In real-time systems, where tasks must be completed within strict time
constraints, interrupts help the CPU respond immediately to time-
sensitive events (e.g., sensor input in an embedded system).
5. Simplifying I/O Handling
Interrupts streamline the interaction between the CPU and I/O devices:
 Input: Devices like keyboards generate interrupts when data is available,
ensuring the CPU retrieves input efficiently.
 Output: Devices like printers generate interrupts after completing a task,
allowing the CPU to send the next set of data.
6. Enhanced Multitasking
 Interrupts enable preemptive multitasking by allowing the CPU to switch
between processes.
 A timer interrupt, for example, can signal the CPU to switch contexts
between tasks in a multitasking operating system.

7. Error Detection and Handling


Interrupts allow the CPU to handle errors efficiently:
 Hardware Errors: Devices can send interrupts for conditions like power
failure or hardware malfunctions.
 Software Errors: Exceptions (a type of software interrupt) help handle
divide-by-zero errors, invalid memory access, or other anomalies.
8. Implementing System Calls
 System calls (requests from a program to the OS) are implemented using
software interrupts.
 The interrupt mechanism allows the CPU to transition smoothly from
user mode to kernel mode to execute privileged operations.
Illustrative Example: Disk I/O
1. A program requests the CPU to read data from a disk.
2. The CPU issues a read command to the disk controller and continues
other tasks.
3. When the disk controller finishes reading, it generates an interrupt.
4. The CPU halts its current task, processes the interrupt (e.g., moves data
to memory), and resumes its task.
5. This mechanism ensures minimal CPU idle time and faster overall performance.
11. Draw the flow chart that describes the cpu-I/O channel
communication?
Communication channel between CPU and IOP
There is a communication channel between the IOP and the CPU for
performing tasks, which comes under computer architecture. This channel
carries the commands exchanged by the IOP and the CPU while programs
run. The CPU does not execute the I/O instructions itself; it only initiates
the operations, and the instructions are executed by the IOP. The I/O
transfer is instructed by the CPU, and the IOP asks for the CPU's attention
through an interrupt. The CPU starts this channel by giving a "test IOP
path" instruction to the IOP, and then the communication proceeds as
shown in the diagram:
Figure – Communication channel between IOP and CPU
Whenever the CPU gets an interrupt from the IOP to access memory, it
sends a test path instruction to the IOP. The IOP executes it and checks its
status; if the status reported to the CPU is OK, the CPU gives a start
instruction to the IOP, hands it some control, and goes back to another (or
the same) program, after which the IOP is able to access memory for its
program. The IOP then controls the I/O transfer using DMA and creates a
status report as well. As soon as this I/O transfer completes, the IOP once
again sends an interrupt to the CPU; the CPU again requests the status of
the IOP, and the IOP reads the status word from its memory location and
gives it to the CPU. The CPU then checks the correctness of the status and
continues with the same process.
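The CPU–IOP exchange above can be sketched as a message sequence (the command names are illustrative, not a real instruction set):

```python
# Sketch of the CPU <-> IOP handshake as an ordered message log.

def cpu_iop_session():
    log = []
    log.append("CPU -> IOP: test IOP path")       # CPU checks the path
    log.append("IOP -> CPU: status OK")           # IOP reports its status
    log.append("CPU -> IOP: start I/O program")   # CPU hands over control
    log.append("IOP: transfers data via DMA")     # CPU runs other programs
    log.append("IOP -> CPU: interrupt (done)")    # IOP signals completion
    log.append("CPU -> IOP: request status")      # CPU asks for the status word
    log.append("IOP -> CPU: status word")         # CPU checks its correctness
    return log

for line in cpu_iop_session():
    print(line)
```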

12. What is the need of I/O interface module?


Need for an I/O Interface Module
The I/O Interface Module serves as a mediator between the CPU and
peripheral devices. Since peripheral devices often operate at different
speeds, use different data formats, and require specialized protocols, direct
communication between the CPU and these devices is impractical. The I/O
interface module bridges this gap by standardizing communication and
ensuring smooth interaction. Here's why it is needed:
1. Device Communication
Peripheral devices, such as keyboards, printers, and disks, differ significantly
from the CPU in:
 Data Representation: Devices may use different formats (e.g., serial vs.
parallel communication).
 Speed: Peripheral devices are often much slower or faster than the CPU.
 Control Signals: Devices require specific control signals for operations like
start, stop, or data transfer.
The I/O interface module standardizes these differences, enabling seamless
communication.
2. Synchronization
 The CPU operates at a much faster speed compared to most I/O devices.
 The I/O interface manages the timing differences using buffering or
handshaking mechanisms, allowing the CPU and devices to operate
independently.
3. Address Decoding
 Each peripheral device is assigned a unique address.
 The I/O interface module decodes the address and ensures the correct
device is selected when the CPU issues a command.
4. Data Conversion
 Peripheral devices may use a different format for data transmission.
 The I/O interface converts data formats between the CPU
(binary/parallel) and the devices (serial or other formats).
5. Command Interpretation
 Peripheral devices cannot understand CPU instructions directly.
 The I/O interface interprets commands from the CPU and translates
them into device-specific signals.
6. Interrupt Handling
 I/O interface modules handle interrupts generated by devices.
 They inform the CPU about the status of devices, like completion of an
operation or errors, enabling efficient multitasking.
7. Error Detection
 I/O interfaces often include error-checking mechanisms (e.g., parity
checks) to ensure reliable data transmission.
8. Isolation
 The I/O interface electrically isolates the CPU from external devices to
protect the CPU from voltage fluctuations or faults in peripheral devices.
9. Support for Multiple Devices
 The I/O interface module allows the connection and management of
multiple devices through shared communication lines, reducing
hardware complexity.
The I/O interface module simplifies and standardizes the interaction between
the CPU and diverse peripheral devices, making it an essential component in
computer systems.
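The parity check mentioned under error detection can be sketched in a few lines (even parity is assumed here): the transmitter appends a bit that makes the total count of 1s even, and the receiver flags an error when the received word has odd parity.

```python
# Minimal even-parity sketch for an I/O interface's error detection.

def even_parity_bit(data_bits):
    # 1 if the data has an odd count of 1s, so the total becomes even
    return sum(data_bits) % 2

def check(received_bits):
    # True -> parity OK (even number of 1s); False -> error detected
    return sum(received_bits) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]
sent = word + [even_parity_bit(word)]   # append the parity bit
print(check(sent))   # True  -> no error detected
sent[2] ^= 1         # flip one bit "in transit"
print(check(sent))   # False -> single-bit error caught
```

Note that parity catches any single-bit error but misses two flipped bits, which is why stronger codes (e.g., CRC) are used on less reliable links.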

13.Explain the method of DMA transfer. How does a DMA controller


improve the performance of a computer?
Direct Memory Access (DMA) Transfer
DMA (Direct Memory Access) is a method that allows an I/O device to directly
transfer data to or from the computer's main memory without the continuous
intervention of the CPU. This significantly improves system efficiency, especially
for high-speed data transfer operations.
Steps in DMA Transfer
1. DMA Request (DREQ):
o An I/O device sends a DMA request signal to the DMA controller
when it is ready to transfer data.
2. DMA Acknowledgment (DACK):
o The DMA controller gains control of the memory bus from the CPU
and sends an acknowledgment to the requesting device.
3. Bus Control:
o The DMA controller temporarily takes control of the system buses
(address bus, data bus, and control bus), effectively bypassing the
CPU.
4. Data Transfer:
o The DMA controller directly transfers data between the I/O device
and main memory.
o The transfer can be:
 Block Transfer: A block of data is transferred continuously.
 Cycle Stealing: Small chunks of data are transferred during
CPU idle cycles.
 Burst Mode: A series of transfers happen in rapid succession
without interruptions.
5. Interrupt to CPU:
o After completing the data transfer, the DMA controller signals the
CPU through an interrupt, indicating that the operation is
complete.
Modes of DMA Operation
1. Burst Mode:
o The DMA controller transfers a block of data continuously without
releasing control of the bus.
o Fast but may temporarily block CPU access to memory.
2. Cycle Stealing Mode:
o The DMA controller transfers one data word per cycle, interleaving
access with the CPU.
o Prevents CPU starvation.
3. Transparent Mode:
o The DMA controller transfers data only when the CPU is idle.
o Ensures the CPU always has priority over memory access.
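The contrast between burst mode and cycle stealing can be sketched as bus-ownership timelines (the timeline labels are made up): burst mode holds the bus for the whole block, while cycle stealing returns the bus to the CPU after every word.

```python
# Sketch: who owns the bus, slot by slot, under two DMA modes.

def burst(words):
    return ["DMA"] * words            # bus held until the whole block is done

def cycle_stealing(words):
    timeline = []
    for _ in range(words):
        timeline += ["DMA", "CPU"]    # one word, then the bus goes back
    return timeline

print(burst(3))           # ['DMA', 'DMA', 'DMA']
print(cycle_stealing(3))  # ['DMA', 'CPU', 'DMA', 'CPU', 'DMA', 'CPU']
```

This is why burst mode is faster for large blocks but can temporarily block CPU memory access, while cycle stealing is slower overall but never starves the CPU.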

DMA Controller and Performance Improvement


A DMA controller is a dedicated hardware component that manages the DMA
process. Its primary role is to offload data transfer tasks from the CPU, allowing
the CPU to focus on processing rather than handling I/O operations. This
improves system performance in the following ways:
1. CPU Offloading
 Without DMA, the CPU must manage every byte of data transfer
between memory and I/O devices, wasting valuable processing time.
 With DMA, the CPU initiates the transfer and then performs other tasks
while the DMA controller manages the transfer autonomously.
2. Faster Data Transfer
 DMA bypasses CPU instructions for data transfer, enabling faster and
more efficient movement of large data blocks.
 It reduces the latency introduced by CPU intervention in the data path.
3. Parallel Processing
 While the DMA controller handles data transfer, the CPU can execute
other operations, enhancing multitasking and system throughput.
4. Efficient Bus Usage
 DMA optimizes bus usage by transferring data in bursts or during idle bus
cycles, preventing bus contention and improving overall system
performance.
5. Reduced Interrupt Overhead
DMA minimizes the number of interrupts to the CPU, as a single interrupt is
generated after completing a transfer instead of multiple interrupts for each
byte of data
Conclusion
The DMA transfer method significantly improves system performance by
offloading data transfer tasks from the CPU, reducing latency, and optimizing
bus usage. Its role is crucial in applications requiring fast and efficient data
transfer, such as multimedia, disk operations, and networking.

14. What is interrupted I/O? Discuss.


Interrupted I/O
Interrupted I/O is a mechanism where an I/O device signals the CPU via an
interrupt when it needs attention, such as when it is ready to send or receive
data. This method eliminates the need for the CPU to continuously monitor the
device (polling), allowing more efficient use of the CPU's resources.
Working of Interrupted I/O
1. Normal Program Execution:
o The CPU executes instructions from a program.
2. Device Readiness:
o An I/O device becomes ready to perform an operation (e.g., a
printer finishes printing or a keyboard registers a key press).
3. Interrupt Signal:
o The device sends an interrupt request (IRQ) to the CPU.
4. Interrupt Acknowledgment:
o The CPU stops its current execution and saves its state (program
counter, registers, etc.).
5. Interrupt Service Routine (ISR):
o The CPU identifies the device using the interrupt vector table and
executes the Interrupt Service Routine specific to the device.
6. Data Transfer:
o The ISR facilitates the data transfer between the device and
memory.
7. Resume Execution:
o Once the ISR completes, the CPU restores its saved state and
resumes the interrupted program.
Key Characteristics of Interrupted I/O
1. Asynchronous Communication:
o Devices operate independently, and the CPU only responds when
notified by an interrupt.
2. Efficient CPU Utilization:
o The CPU focuses on executing instructions from programs rather
than constantly polling devices.
3. Interrupt Vector Table:
o A table that maps each interrupt request to the corresponding ISR,
ensuring the correct response for each device.
Advantages of Interrupted I/O
1. Improved Efficiency:
o CPU resources are not wasted on polling; instead, the CPU can
perform other tasks until an interrupt occurs.
2. Reduced Latency:
o The CPU immediately responds to device requests without waiting
for a polling cycle.
3. Scalability:
o Supports multiple devices with varying speeds using prioritized
interrupt handling.
4. Real-Time Capability:
o Interrupt-driven systems can respond quickly to time-critical
events.
Disadvantages of Interrupted I/O
1. Complexity:
o Requires additional hardware and software to manage interrupts
(e.g., interrupt controller, ISRs).
2. Overhead:
o Interrupt handling involves saving/restoring CPU states and
switching context, which may introduce a slight delay.
3. Interrupt Overload:
o Too many interrupts can overwhelm the CPU, reducing overall
system performance.
Conclusion
Interrupted I/O is a powerful mechanism in modern computing that allows
devices to notify the CPU when they need attention, leading to efficient
resource utilization and faster system responses. It is widely used in
multitasking environments, real-time systems, and high-speed I/O operations.
15. How is data transfer to and from peripherals done? Discuss with neat
diagrams and examples.
Data Transfer to and from Peripherals
Data transfer between the CPU and peripherals is a fundamental operation in a
computer system. Peripheral devices, such as keyboards, printers, and disk
drives, often operate at different speeds and use different data formats
compared to the CPU and memory. Various methods are employed to manage
this data transfer efficiently. These methods include:
1. Programmed I/O
 Description:
o In this method, the CPU actively controls and manages the data
transfer between memory and peripherals.
o The CPU polls the status of the peripheral device and performs the
data transfer when the device is ready.
 Steps:
1. The CPU checks the status of the peripheral.
2. If the device is ready, the CPU reads or writes data.
3. The CPU repeats this process until the entire data transfer is complete.
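The three polling steps above can be sketched as a loop (the device model is hypothetical): the CPU checks the status flag and transfers one word each time the device reports ready.

```python
# Sketch of programmed I/O: the CPU polls the device's status register
# and transfers one word at a time until the transfer completes.

class Device:
    def __init__(self, data):
        self.data = list(data)
    def ready(self):                       # status register, as seen by the CPU
        return bool(self.data)
    def read(self):                        # data register
        return self.data.pop(0)

def programmed_io(device):
    received = []
    while device.ready():                  # 1. CPU checks the device status
        received.append(device.read())     # 2. ready -> transfer one word
    return received                        # 3. repeated until done

print(programmed_io(Device([10, 20, 30])))  # [10, 20, 30]
```

The CPU is tied up for the whole loop here, which is exactly the inefficiency that interrupted I/O and DMA (below) avoid.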
 Advantages:
o Simple implementation.
 Disadvantages:
o Inefficient as the CPU spends time polling instead of executing
other tasks.
2. Interrupted I/O
 Description:
o The peripheral device sends an interrupt signal to the CPU when it
is ready for data transfer.
o The CPU stops its current operation, executes an Interrupt Service
Routine (ISR) to handle the transfer, and then resumes its previous
task.
 Steps:
1. The device generates an interrupt signal when ready.
2. The CPU saves its current state and executes the ISR.
3. The ISR performs the data transfer.
4. The CPU restores its state and resumes execution.
 Advantages:
o Reduces CPU idle time compared to programmed I/O.
 Disadvantages:
o Requires additional hardware and software for interrupt handling.
3. Direct Memory Access (DMA)
 Description:
o The DMA controller handles the data transfer directly between the
memory and the peripheral, bypassing the CPU.
o The CPU is free to perform other tasks during the data transfer.
 Steps:
1. The CPU initializes the DMA controller by specifying the memory address,
transfer count, and device.
2. The DMA controller takes control of the system bus to transfer data
directly between memory and the device.
3. After the transfer is complete, the DMA controller sends an interrupt to
the CPU.
 Advantages:
o High efficiency for large data transfers.
o Minimizes CPU involvement.
 Disadvantages:
o Additional hardware complexity
4. I/O Channel
 Description:
o An I/O Channel is a dedicated processor specifically for handling
I/O operations.
o The CPU issues commands to the I/O channel, which then
performs the data transfer independently.
 Steps:
1. The CPU sends a command to the I/O channel.
2. The I/O channel performs the data transfer with the peripheral.
3. The I/O channel signals the CPU upon completion.
 Advantages:
o Offloads I/O management entirely from the CPU.
o Suitable for systems with high I/O demands.
 Disadvantages:
o Increased system complexity and cost.