Isolated I/O –
In isolated I/O, the I/O devices and memory share a common bus (data and address lines), but separate read and write control lines are provided for I/O. When the CPU decodes an instruction that refers to an I/O device, it places the device address on the address lines and asserts the I/O read or I/O write control line, causing a data transfer between the CPU and the I/O device. Because the address space of I/O is kept separate (isolated) from that of memory, the scheme is called isolated I/O. The I/O addresses are called ports, and separate read and write instructions are provided for I/O and for memory.
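For concreteness, the sketch below shows how isolated (port-mapped) I/O looks to a program on x86 Linux, where the IN/OUT instructions are exposed through glibc's <sys/io.h>. The port number 0x3F8 (the conventional COM1 data port) and the use of ioperm, which requires root privileges, are assumptions about the target machine rather than part of the description above.

```c
/* A minimal sketch of isolated (port-mapped) I/O, assuming x86 Linux and
 * glibc's <sys/io.h>. Port 0x3F8 is the conventional COM1 data port; on a
 * machine without that device the read simply returns whatever the bus
 * provides. Requires root privileges for ioperm(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

#define COM1_DATA 0x3F8   /* an I/O port number, not a memory address */

int main(void)
{
    /* Ask the kernel for permission to access this one port. */
    if (ioperm(COM1_DATA, 1, 1) != 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    /* outb/inb compile to the x86 OUT/IN instructions, which assert the
     * separate I/O write/read control signals instead of the memory ones. */
    outb('A', COM1_DATA);              /* write one byte to the port  */
    unsigned char b = inb(COM1_DATA);  /* read one byte from the port */
    printf("read 0x%02x from port 0x%x\n", b, COM1_DATA);

    ioperm(COM1_DATA, 1, 0);           /* release the port */
    return EXIT_SUCCESS;
}
```

Compile with gcc and run with root privileges (e.g. via sudo) so that the ioperm call succeeds.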
Disadvantages of DMA
While Direct Memory Access (DMA) has several advantages, it is crucial
to note the following drawbacks and limitations:
o Complexity: DMA controllers and their setups can be complicated. Setting up DMA correctly and ensuring device compatibility may require expertise and careful planning.
o Hardware Support: DMA functioning is dependent on hardware support.
Not all devices or systems have DMA capabilities, and it may not be
implemented as effectively as needed in some circumstances.
o Resource Contention: Resource contention can occur in systems with
several devices competing for DMA access. This conflict can cause delays
and potentially impair data transfer efficiency.
o Data Corruption Risk: Improperly set up or handled DMA transfers can lead to data corruption. Maintaining data integrity requires careful programming and error handling.
o Interrupt Handling: While DMA reduces CPU involvement in data transfers, it introduces a new task: handling DMA-related interrupts, such as transfer-completion interrupts. This requires additional CPU processing and can affect overall system performance.
o Limited Flexibility: DMA controllers may be limited in their ability to
manage data flows. Some may not support specific transfer modes or
may limit the sorts of devices that may utilize DMA.
o Security Concerns: If not adequately protected, DMA can be used for malicious purposes. DMA attacks, for example, can be used to extract sensitive information from memory.
o Compatibility Issues: When working with older hardware or specialized
devices, DMA compatibility may be an issue. Older DMA standards may
not be completely supported by newer systems, and guaranteeing
compatibility might be difficult.
o Debugging Complexities: DMA transfers can be more difficult to debug than CPU-bound tasks because they involve lower-level hardware interactions.
o Overhead for Small Transfers: Using DMA for very short data transfers may create more overhead than it saves; in some cases the CPU can perform the copy more efficiently (see the sketch after this list).
o Complex Programming: DMA programming can be complicated,
requiring careful control of memory locations, data sizes, and transfer
modes. DMA programming errors can cause system instability.
o CPU Access Latency: When a DMA controller controls the system bus,
the CPU may encounter some latency when accessing memory. This
delay might affect the system's responsiveness.
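The small-transfer trade-off can be made concrete with a short sketch. Here dma_start_transfer(), dma_wait_done(), and the DMA_THRESHOLD constant are hypothetical placeholders (stubbed with memcpy so the example compiles and runs), not a real driver API; the only idea illustrated is choosing a plain CPU copy when a transfer is too small to justify programming the controller.

```c
/* Sketch of the small-transfer trade-off: dma_start_transfer(),
 * dma_wait_done(), and DMA_THRESHOLD are hypothetical placeholders,
 * stubbed here so the example compiles and runs. */
#include <stddef.h>
#include <string.h>

#define DMA_THRESHOLD 512   /* assumed tuning constant: below this size,
                               programming the controller costs more
                               than it saves */

/* Stand-ins for a real DMA driver; a real one would program the
 * controller's address, count, and mode registers here. */
static void dma_start_transfer(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);   /* stub: just copy */
}
static void dma_wait_done(void) { /* stub: nothing to wait for */ }

static void copy_block(void *dst, const void *src, size_t len)
{
    if (len < DMA_THRESHOLD) {
        memcpy(dst, src, len);              /* small: let the CPU copy directly */
    } else {
        dma_start_transfer(dst, src, len);  /* large: hand it to the controller */
        dma_wait_done();                    /* a real driver could overlap work here */
    }
}

int main(void)
{
    char src[1024] = "example", dst[1024];
    copy_block(dst, src, sizeof src);   /* would use DMA             */
    copy_block(dst, src, 8);            /* too small: plain CPU copy */
    return 0;
}
```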
It is vital to weigh the advantages and disadvantages of DMA in the context of a specific application or system. In many circumstances DMA provides enormous benefits; nevertheless, careful study and design are necessary to ensure optimal performance and data integrity.
3. Storage Devices:
Storage devices are used to store the data and programs that the system needs in order to perform its operations. They are among the most essential devices in a computer system. Example: hard disk, magnetic tape, flash memory, etc.
Hard Drive: A hard drive is a storage device that stores data and files on
a computer system.
USB Drive: A USB drive is a small, portable storage device that connects
to a computer system to provide additional storage space.
4. Communication Devices:
Communication devices are used to connect a computer system to other devices or networks. Examples of communication devices include modems and network interface cards (NICs).
Interrupt Nesting
When the processor is busy executing an interrupt service routine, interrupts are disabled to ensure that the device does not raise more than one interrupt for the same request. A similar arrangement is used when multiple devices are connected to the processor, so that the servicing of one interrupt is not interrupted by an interrupt raised by another device.
If multiple devices raise interrupts simultaneously, the interrupts are prioritized.
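A minimal simulation of this arrangement is sketched below; the interrupt-enable flag and the service routine are ordinary C variables and functions standing in for the real hardware, purely to illustrate that further requests are ignored while a service routine is in progress.

```c
/* A simulated version of the arrangement above: a software flag stands in
 * for the processor's interrupt-enable bit, and device_isr() stands in for
 * a real interrupt service routine. Illustration only, not real hardware. */
#include <stdbool.h>
#include <stdio.h>

static bool interrupts_enabled = true;   /* models the interrupt-enable flag */

static void disable_interrupts(void) { interrupts_enabled = false; }
static void enable_interrupts(void)  { interrupts_enabled = true;  }

static void device_isr(int device_id)
{
    disable_interrupts();    /* further requests are ignored ...      */
    printf("servicing interrupt from device %d\n", device_id);
    enable_interrupts();     /* ... until the service routine returns */
}

static void interrupt_request(int device_id)
{
    if (interrupts_enabled)
        device_isr(device_id);
    else
        printf("request from device %d deferred\n", device_id);
}

int main(void)
{
    interrupt_request(1);    /* accepted: interrupts are enabled */
    return 0;
}
```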
Priority Interrupts
The I/O devices are organized in a priority structure such that an interrupt raised by a high-priority device is accepted even while the processor is servicing an interrupt from a low-priority device.
A priority level is assigned to the processor, and it can be changed under program control. Whenever the processor starts executing a program, its priority level is set equal to the priority of that program. While executing the current program, the processor therefore accepts interrupts only from devices whose priority is higher than its own.
When the processor is executing an interrupt service routine, its priority level is set to the priority of the device whose interrupt it is servicing. The processor then accepts interrupts only from devices with higher priority and ignores interrupts from devices with the same or lower priority. Some bits of the processor's status register are used to hold the processor's priority level.
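The following sketch simulates this priority scheme; the processor_priority variable stands in for the priority bits of the status register, and the device priorities are arbitrary example values, not taken from any particular machine.

```c
/* Simulation of the priority scheme above: processor_priority stands in for
 * the priority bits of the status register, and the device priorities are
 * arbitrary example values. */
#include <stdio.h>

static int processor_priority = 0;    /* models the status-register priority bits */

static void service_interrupt(int device_priority)
{
    int saved = processor_priority;

    /* Raise the processor's priority to the device's level, so that only
     * devices with a strictly higher priority can interrupt this routine. */
    processor_priority = device_priority;
    printf("servicing device at priority %d\n", device_priority);

    processor_priority = saved;       /* restore on return from the routine */
}

static void interrupt_request(int device_priority)
{
    if (device_priority > processor_priority)
        service_interrupt(device_priority);   /* accepted */
    else
        printf("priority-%d request ignored (processor at %d)\n",
               device_priority, processor_priority);
}

int main(void)
{
    interrupt_request(2);    /* accepted: 2 > 0 */
    interrupt_request(0);    /* ignored: not higher than the processor's level */
    return 0;
}
```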
Benefits of Interrupt
Real-time Responsiveness: Interrupts allow a system to respond promptly to external events or signals, enabling real-time processing.
Efficient Resource Usage: Interrupt-driven systems are more efficient than systems that rely on busy-waiting or polling. Instead of continuously checking for the occurrence of an event, interrupts allow the processor to remain idle until an event occurs, conserving processing time and reducing power consumption (see the sketch at the end of this section).
Multitasking and Concurrency: Interrupts enable multitasking by allowing a processor to handle multiple tasks concurrently.
Improved System Throughput: By handling events asynchronously, interrupts allow a system to overlap computation with I/O operations or other tasks, maximizing system throughput and overall performance.
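As a rough illustration of the polling-versus-interrupt contrast mentioned under "Efficient Resource Usage" above, the sketch below uses a POSIX signal as a stand-in for a hardware interrupt; the one-second alarm and the pause() call are simulation details, not part of any real device interface.

```c
/* Rough illustration of polling versus interrupt-driven waiting, using a
 * POSIX signal as a stand-in for a hardware interrupt; the one-second alarm
 * and pause() are simulation details, not a real device interface. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

/* Plays the role of the interrupt handler: the "device" signals readiness. */
static void on_interrupt(int sig) { (void)sig; data_ready = 1; }

int main(void)
{
    signal(SIGALRM, on_interrupt);
    alarm(1);                 /* the "device" raises an interrupt in ~1 second */

    /* Polling alternative (wasteful): while (!data_ready) ;  // busy-wait */

    while (!data_ready)
        pause();              /* interrupt-driven: stay idle until the handler runs */

    printf("event handled without busy-waiting\n");
    return 0;
}
```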