Unit 5 Notes
1. Interrupt:
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority event requiring interruption
of the currently executing process. For I/O devices, one of the bus control lines is dedicated
to this purpose and is called the Interrupt Request (INTR) line; the routine that the processor
executes in response is the Interrupt Service Routine (ISR).
When a device raises an interrupt while the processor is executing instruction i, the processor
first completes the execution of instruction i. Then it loads the Program Counter (PC) with the
address of the first instruction of the ISR. Before loading the Program Counter with this
address, the return address (the address of instruction i+1) is saved in a temporary location.
Therefore, after handling the interrupt the processor can continue with instruction i+1.
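To make this save-and-resume sequence concrete, here is a minimal C sketch that simulates it; the names (saved_pc, ISR_ENTRY, run_isr) and the tiny fake program are illustrative assumptions, not any real processor's behaviour.

```c
/* Minimal simulation of the interrupt entry/exit sequence described above.
 * All names (saved_pc, ISR_ENTRY, ...) are illustrative only.            */
#include <stdio.h>
#include <stdbool.h>

#define PROGRAM_LEN 5
#define ISR_ENTRY   100          /* pretend address of the ISR's first instruction */

static int  pc = 0;              /* Program Counter                                */
static int  saved_pc = -1;       /* temporary location for the return address      */
static bool interrupt_pending = false;

static void run_isr(void)
{
    printf("  ISR running (entered at address %d)\n", ISR_ENTRY);
    /* ... service the device here ... */
}

int main(void)
{
    while (pc < PROGRAM_LEN) {
        printf("executing instruction %d\n", pc);   /* instruction i completes first */

        if (pc == 2)                                 /* pretend a device raises INTR here */
            interrupt_pending = true;

        if (interrupt_pending) {
            saved_pc = pc + 1;    /* save the address of instruction i+1           */
            pc = ISR_ENTRY;       /* load PC with the ISR's first instruction      */
            run_isr();
            pc = saved_pc;        /* resume the interrupted program at i+1         */
            interrupt_pending = false;
            continue;
        }
        pc++;
    }
    return 0;
}
```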
Software Interrupts:
A software interrupt is an interrupt produced by software or by the system rather than by
hardware. Software interrupts are also known as traps or exceptions. They serve as a signal
for the operating system or a system service to carry out a certain function or to respond to
an error condition.
Hardware Interrupts:
In a hardware interrupt, all the devices are connected to a common Interrupt Request (INTR)
line. A single request line is shared by all n devices. To request an interrupt, a device closes its
associated switch. The value of INTR is therefore the logical OR of the requests from the
individual devices.
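The "logical OR of the individual requests" can be pictured with a few bit operations; the toy sketch below assumes one request bit per device and is not a model of real bus circuitry.

```c
/* Toy model of a shared interrupt-request line: INTR is active whenever
 * at least one of the n devices has closed its switch (set its bit).   */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

int main(void)
{
    uint8_t device_requests = 0;        /* one bit per device, n = 8 here */

    device_requests |= (1u << 3);       /* device 3 requests an interrupt */
    device_requests |= (1u << 5);       /* device 5 requests an interrupt */

    bool intr = (device_requests != 0); /* INTR = OR of all request bits  */
    printf("INTR = %d (requests = 0x%02X)\n", intr, device_requests);
    return 0;
}
```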
BASIS OF COMPARISON | HARDWARE INTERRUPT | SOFTWARE INTERRUPT
Effect on Program Counter | Hardware interrupts do not increment the program counter. | Software interrupts increment the program counter.
Priority | A hardware interrupt has lower priority than a software interrupt. | A software interrupt has higher priority than a hardware interrupt.
Maskable Interrupt:
A maskable interrupt is a hardware interrupt that may be ignored by setting a bit in the
interrupt mask register (IMR). Maskable interrupts are those which can be disabled or
ignored by the microprocessor. These interrupts are either edge-triggered or level-triggered.
Examples of maskable interrupts include RST 6.5, RST 7.5 and RST 5.5 of the 8085
microprocessor.
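As a rough sketch, masking can be pictured as ANDing the pending requests with the complement of the mask register; the bit polarity (a set IMR bit means "masked") is an assumption, since real controllers differ.

```c
/* Sketch of maskable-interrupt filtering with an interrupt mask register.
 * Here a set IMR bit means "masked"; real hardware may use either polarity. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t pending = 0x0A;   /* devices 1 and 3 are requesting               */
    uint8_t imr     = 0x08;   /* device 3 is masked (bit 3 set)               */

    uint8_t serviceable = pending & (uint8_t)~imr;  /* only unmasked requests */

    for (int i = 0; i < 8; i++)
        if (serviceable & (1u << i))
            printf("servicing interrupt from device %d\n", i);
    return 0;
}
```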
What You Need to Know About Maskable Interrupt:
• Maskable interrupts help to handle lower priority tasks.
• Can be masked or made pending
• It is possible to handle a maskable interrupt after executing the current instruction.
• May be vectored or non-vectored
• Response time is higher than that of a non-maskable interrupt
• Used to interface with peripheral devices
Non-Maskable Interrupt:
A non-maskable interrupt (NMI) is a hardware interrupt that lacks an associated bit-mask;
therefore, it can never be ignored. It typically occurs to signal attention for non-recoverable
hardware errors. A non-maskable interrupt is often used when response time is critical or
when an interrupt should never be disabled during normal system operation. Such uses include
reporting non-recoverable hardware errors, system debugging and profiling, and handling of
special cases like system resets. TRAP is an example of a non-maskable interrupt; it is both
level- and edge-triggered and is used in critical power-failure conditions.
What You Need to Know About Non-Maskable Interrupt:
• Non-maskable interrupts help to handle higher priority tasks such as the watchdog
timer.
• Cannot be masked or made pending
• When a non-maskable interrupt occurs, the current instruction and status are
stored on the stack so that the CPU can handle the interrupt.
• All are vectored interrupts
• Response time is low
• Used for emergency purposes, e.g. power failure, smoke detector, etc.
• Examples of non-maskable interrupts include RST1, RST2, RST3, RST4, RST5,
RST6, RST7 and TRAP of the 8085 microprocessor.
Difference Between Maskable and Non-Maskable Interrupt
BASIS OF COMPARISON | MASKABLE INTERRUPT | NON-MASKABLE INTERRUPT
Description | A maskable interrupt is a hardware interrupt that can be disabled or ignored by the instructions of the CPU. | A non-maskable interrupt is a hardware interrupt that cannot be disabled or ignored by the instructions of the CPU.
Examples | Examples of maskable interrupts include RST 6.5, RST 7.5 and RST 5.5 of the 8085 microprocessor. | Examples of non-maskable interrupts include RST1, RST2, RST3, RST4, RST5, RST6, RST7 and TRAP of the 8085 microprocessor.
A vectored interrupt is one where the CPU knows the address of the interrupt service
routine in advance. All that is needed is for the interrupting device to send its unique vector
through the data bus and its I/O interface to the CPU. The CPU takes this vector, checks an
interrupt table in memory, and then carries out the correct ISR for that device.
A non-vectored interrupt is one where the interrupting device never sends an interrupt vector.
The CPU receives the interrupt and jumps the program counter to a fixed address set in
hardware. This is typically a hard-coded ISR that is device-agnostic.
Vectored Interrupts:
• Devices that use vectored interrupts are assigned an interrupt vector. This is a
number that identifies a particular interrupt handler. The ISR address for such an
interrupt is fixed and known to the CPU (a table-lookup sketch follows this list).
• When the device interrupts, the CPU branches to the particular ISR.
• The microprocessor jumps to the specific service routine.
• When the microprocessor executes the call instruction, it saves the address of the
next instruction on the stack.
• At the end of the service routine, the RET instruction returns the execution to
where the program was interrupted.
• All 8051 interrupts are vectored interrupts.
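A vector table can be pictured as an array of ISR addresses indexed by the device's vector number. The sketch below is generic C, not 8051-specific; the device names and handlers are made up.

```c
/* Generic sketch of vectored dispatch: the vector number indexes a table
 * of ISR addresses, so the CPU "knows" the handler address in advance.  */
#include <stdio.h>

typedef void (*isr_t)(void);

static void timer_isr(void)    { printf("timer ISR\n");    }
static void keyboard_isr(void) { printf("keyboard ISR\n"); }

static isr_t vector_table[] = { timer_isr, keyboard_isr };  /* interrupt vector table */

static void dispatch(unsigned vector)
{
    vector_table[vector]();     /* branch straight to that device's ISR */
}

int main(void)
{
    dispatch(1);                /* device sends vector 1 -> keyboard_isr */
    return 0;
}
```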
Non-vectored Interrupt:
• A non-vectored interrupt is an interrupt that has a common ISR, shared by all
non-vectored interrupts in the system. The address of this common ISR is known to
the CPU.
• These interrupts do not have a fixed, device-specific memory location for the
transfer of control from normal execution.
• The address in memory is sent along with the interrupt.
• Crucially, the CPU does not know which device caused the interrupt without
polling each I/O interface in a loop.
• Once the interrupt occurs, the system must determine which of the attached
devices actually interrupted (a polling sketch follows this list).
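By contrast, a common non-vectored ISR has to poll the devices to find the requester. The sketch below is a toy model with made-up status flags, not a real I/O interface.

```c
/* Toy model of a common, non-vectored ISR: the handler polls every
 * device's status flag to find out which one raised the interrupt. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_DEVICES 4

static bool device_requesting[NUM_DEVICES] = { false, false, true, false };

static void common_isr(void)
{
    for (int i = 0; i < NUM_DEVICES; i++)        /* polling loop */
        if (device_requesting[i]) {
            printf("device %d interrupted; servicing it\n", i);
            device_requesting[i] = false;
            return;
        }
}

int main(void)
{
    common_isr();
    return 0;
}
```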
Key Differences:
1. Handling mechanism: Vectored interrupts are handled by a dedicated interrupt
vector table, which contains the addresses of the interrupt handlers for each device.
Non-vectored interrupts use a single interrupt handler that must identify the
source of the interrupt.
2. Identification of source: In a vectored interrupt system, the interrupt controller
automatically identifies the source of the interrupt and routes it to the appropriate
interrupt handler. In non-vectored systems, the interrupt handler must use a polling
mechanism to determine the source of the interrupt.
3. Response time: Vectored interrupts have a faster response time than non-vectored
interrupts, as the system can quickly identify and route the interrupt to the
appropriate handler without the need for polling.
4. Complexity: Vectored interrupts are more complex than non-vectored interrupts,
as they require a dedicated interrupt vector table and additional hardware for
routing interrupts. Non-vectored interrupts are simpler, as they only require a
single interrupt handler.
5. Flexibility: Non-vectored interrupts are more flexible than vectored interrupts, as
they can be used with a wide range of devices and don’t require specific interrupt
handling mechanisms. Vectored interrupts, on the other hand, are optimized for
specific devices and require specific interrupt handling mechanisms.
6. Debugging: Debugging vectored interrupts can be more difficult than debugging
non-vectored interrupts, as there are multiple interrupt handlers to manage and
potential conflicts to resolve. Non-vectored interrupts are simpler to debug, as
there is only one interrupt handler to manage.
Strobe Signal:
The strobe control method of Asynchronous data transfer employs a single control line to
time each transfer. The strobe may be activated by either the source or the destination unit.
Data Transfer Initiated by Source Unit:
In the block diagram fig. (a), the data bus carries the binary information from source to
destination unit. Typically, the bus has multiple lines to transfer an entire byte or word. The
strobe is a single line that informs the destination unit when a valid data word is available.
The timing diagram in fig. (b) shows that the source unit first places the data on the data
bus and then activates the strobe. The information on the data bus and the strobe signal
remain in the active state long enough for the destination unit to receive the data.
Data Transfer Initiated by Destination Unit:
In this method, the destination unit activates the strobe pulse to inform the source to
provide the data. The source responds by placing the requested binary information on the
data bus.
The data must be valid and remain on the bus long enough for the destination unit to accept
it. Once the data is accepted, the destination unit disables the strobe and the source unit
removes the data from the bus.
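As a rough illustration of the strobe idea, the sketch below uses a single shared flag to stand in for the strobe line; the variable names are invented and real timing and settling delays are ignored.

```c
/* Sketch of a source-initiated strobe transfer: the source places data on
 * the "bus", activates the strobe, and the destination reads while it is active. */
#include <stdio.h>
#include <stdbool.h>

static int  data_bus = 0;
static bool strobe   = false;   /* single control line */

static void source_send(int word)
{
    data_bus = word;            /* put valid data on the bus        */
    strobe   = true;            /* activate strobe: data is valid   */
}

static void destination_receive(void)
{
    if (strobe)
        printf("destination latched %d\n", data_bus);
}

int main(void)
{
    source_send(42);
    destination_receive();
    strobe = false;             /* source ends the transfer and removes the data */
    return 0;
}
```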
Handshaking:
The handshaking method solves the problem of the strobe method by introducing a second
control signal that provides a reply to the unit that initiates the transfer.
Principle of Handshaking:
The basic principle of the two-wire handshaking method of data transfer is as follows:
One control line runs in the same direction as the data flow on the bus, from the source to the
destination. It is used by the source unit to inform the destination unit whether there is valid
data on the bus. The other control line runs in the opposite direction, from the destination to
the source. It is used by the destination unit to inform the source whether it can accept the
data. The sequence of control during the transfer depends on the unit that initiates the transfer.
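A minimal sketch of a source-initiated two-wire handshake follows; data_valid and data_accepted are invented stand-ins for the two control lines, and the four numbered steps mirror the sequence described above.

```c
/* Minimal model of two-wire handshaking (source-initiated):
 *   data_valid    - source -> destination: "valid data is on the bus"
 *   data_accepted - destination -> source: "I have taken the data"    */
#include <stdio.h>
#include <stdbool.h>

static int  data_bus      = 0;
static bool data_valid    = false;
static bool data_accepted = false;

int main(void)
{
    /* 1. Source places data on the bus and asserts data_valid. */
    data_bus   = 99;
    data_valid = true;

    /* 2. Destination sees data_valid, reads the bus, asserts data_accepted. */
    if (data_valid) {
        printf("destination received %d\n", data_bus);
        data_accepted = true;
    }

    /* 3. Source sees data_accepted, removes the data and drops data_valid. */
    if (data_accepted)
        data_valid = false;

    /* 4. Destination sees data_valid low and drops data_accepted. */
    if (!data_valid)
        data_accepted = false;

    return 0;
}
```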
When both the transmitting and receiving units use the same clock pulse, the data transfer is
called synchronous. On the other hand, if there is no common clock and the sender operates at
a different moment than the receiver, the data transfer is called asynchronous.
Data transfer can be handled in various modes. Some of the modes use the CPU as an
intermediate path; others transfer the data directly to and from the memory unit. The transfer
can be handled in the following three ways:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct Memory Access (DMA)
Programmed I/O:
In this technique the CPU is responsible for reading data from memory for output and for
storing incoming data in memory for input, as shown in the flowchart for Programmed I/O.
Drawback of the Programmed I/O:
The main drawback of programmed I/O is that the CPU has to monitor the I/O unit all the
time while the program is executing. Thus, the CPU stays in a program loop until the
I/O unit indicates that it is ready for data transfer. This is a time-consuming process, and a
lot of CPU time is wasted keeping an eye on the progress of the transfer.
To remove this problem, an interrupt facility and special commands are used.
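The busy-wait loop that wastes CPU time looks roughly like the sketch below; status_ready() and read_data() are hypothetical stand-ins for a device's status and data registers.

```c
/* Sketch of programmed I/O: the CPU loops on a status flag and moves each
 * word itself, so it is tied up for the whole transfer.                  */
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical device interface (stand-ins for memory-mapped registers). */
static bool status_ready(void) { return true; }   /* pretend the device is ready */
static int  read_data(void)    { return 7;    }   /* pretend data register       */

int main(void)
{
    int buffer[4];

    for (int i = 0; i < 4; i++) {
        while (!status_ready())
            ;                         /* CPU stuck in this loop until ready   */
        buffer[i] = read_data();      /* CPU itself moves the word to memory  */
    }

    printf("transferred %d words, first word = %d\n", 4, buffer[0]);
    return 0;
}
```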
The DMA controller contains the following registers:
i. Address Register: contains an address to specify the desired location in memory.
ii. Word Count Register: holds the number of words to be transferred. The register is
incremented or decremented by one after each word transfer and internally tested for zero.
iii. Control Register: specifies the mode of transfer.
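As a hedged illustration, the three registers can be pictured as a small C struct; the field names and widths are assumptions, not the layout of any particular DMA controller.

```c
/* Illustrative layout of the DMA controller registers described above.
 * Field names and widths are assumptions, not a real controller's map. */
#include <stdio.h>
#include <stdint.h>

struct dma_registers {
    uint32_t address;      /* Address register: desired memory location              */
    uint16_t word_count;   /* WC: words left; decremented per word, tested for zero  */
    uint16_t control;      /* Control register: mode of transfer                     */
};

int main(void)
{
    struct dma_registers dma = { .address = 0x8000u, .word_count = 256, .control = 0x1 };
    printf("DMA set up: addr=0x%X count=%u mode=0x%X\n",
           (unsigned)dma.address, (unsigned)dma.word_count, (unsigned)dma.control);
    return 0;
}
```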
The unit communicates with the CPU via the data bus and control lines. The registers in the DMA are
selected by the CPU through the address bus by enabling the DS (DMA select) and RS (Register
select) inputs. The RD (read) and WR (write) inputs are bidirectional.
When the BG (Bus Grant) input is 0, the CPU can communicate with the DMA registers through the
data bus to read from or write to the DMA registers. When BG =1, the DMA can communicate
directly with the memory by specifying an address in the address bus and activating the RD or WR
control.
DMA Transfer:
The CPU communicates with the DMA through the address and data buses as with
any interface unit. The DMA has its own address, which activates the DS and RS
lines. The CPU initializes the DMA through the data bus. Once the DMA receives
the start control command, it can transfer data between the peripheral and the memory.
When BG = 0, RD and WR are input lines, allowing the CPU to communicate
with the internal DMA registers. When BG = 1, RD and WR are output lines
from the DMA controller to the random-access memory, specifying the read or write
operation on the data.
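Putting the pieces together, the sketch below shows the CPU-side initialization followed by the controller moving words on its own; the register names, the start bit and the transfer direction are all invented for illustration.

```c
/* Sketch of a DMA transfer: the CPU initializes the registers, sets a
 * (hypothetical) start bit, and the controller then moves words on its own,
 * stealing memory cycles while BG = 1.                                      */
#include <stdio.h>
#include <stdint.h>

static uint32_t dma_address;
static uint16_t dma_word_count;
static uint16_t dma_control;           /* bit 0 = start (illustrative)       */

static uint8_t  memory[1024];
static uint8_t  peripheral_data = 0xAB;

static void cpu_initialize_dma(void)   /* done over the data bus (BG = 0)    */
{
    dma_address    = 0x100;
    dma_word_count = 4;
    dma_control    = 0x1;              /* start the transfer                 */
}

static void dma_run(void)              /* controller works while BG = 1      */
{
    while (dma_word_count > 0) {
        memory[dma_address++] = peripheral_data;   /* one stolen memory cycle */
        dma_word_count--;
    }
}

int main(void)
{
    cpu_initialize_dma();
    dma_run();
    printf("DMA done, memory[0x100] = 0x%02X\n", memory[0x100]);
    return 0;
}
```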
Note:
Why does DMA have priority over the CPU when both request a memory transfer?
DMA (Direct Memory Access) is given priority over the CPU because a peripheral involved
in a DMA transfer operates at its own fixed rate and cannot be kept waiting without the risk
of losing data, whereas the CPU can usually tolerate a short delay in accessing memory.
In addition, DMA allows devices to transfer data to and from memory without involving the
CPU. This reduces the burden on the CPU and allows it to focus on other tasks, improving
overall system performance. DMA controllers are designed to handle data transfers
independently, and they can take control of the system bus to perform these transfers without
CPU intervention. This prioritization of DMA over the CPU for memory transfers helps
optimize the efficiency of data movement within a computer system.
Peripheral Devices:
The input/output organization of a computer depends upon the size of the computer and the
peripherals connected to it. The I/O subsystem of the computer provides an efficient mode
of communication between the central system and the outside environment.
The most common input output devices are:
i) Monitor ii) Keyboard iii) Mouse iv) Printer v) Magnetic tapes
The devices that are under the direct control of the computer are said to be connected online.
Input - Output Interface:
Input Output Interface provides a method for transferring information between internal
storage and external I/O devices.
Peripherals connected to a computer need special communication links for interfacing them
with the central processing unit.
The purpose of the communication link is to resolve the differences that exist between the
central computer and each peripheral.
The Major Differences are: -
1. Peripherals are electromechanical and electromagnetic devices and CPU and
memory are electronic devices. Therefore, a conversion of signal values may be
needed.
2. The data transfer rate of peripherals is usually slower than the transfer rate of CPU
and consequently, a synchronization mechanism may be needed.
3. Data codes and formats in the peripherals differ from the word format in the CPU
and memory.
4. The operating modes of peripherals are different from each other and must be
controlled so as not to disturb the operation of other peripherals connected to the
CPU.
To resolve these differences, computer systems include special hardware components
between the CPU and the peripherals to supervise and synchronize all input and output
transfers. These components are called Interface Units because they interface between the
processor bus and the peripheral devices.
I/O Processor:
In this method, the computer has independent sets of data, address and control buses, one
for accessing memory and the other for I/O. This is done in computers that provide a separate
I/O processor (IOP). The purpose of the IOP is to provide an independent pathway for the
transfer of information between external devices and internal memory.
Memory occupies the central position and can communicate with each
processor by DMA.
CPU is responsible for processing data.
IOP provides the path for transfer of data between various peripheral devices
and memory.
Data formats of peripherals differ from those of the CPU and memory; the IOP handles
these differences.
Data are transferred from the IOP to memory by stealing one memory cycle at a time.
Instructions that are read from memory by IOP are called commands to
distinguish them from instructions that are read by the CPU.
Instructions that are read from memory by an IOP:
» Are called commands, to distinguish them from instructions that are read by the CPU
» Commands are prepared by experienced programmers and are stored in memory
» The command words make up the IOP program
Asynchronous Communication:
In asynchronous communication, the groups of bits will be treated as an independent unit,
and these data bits will be sent at any point in time. In order to make synchronization between
sender and receiver, the stop bits and start bits are used between the data bytes. These bits are
useful to ensure that the data is sent correctly. The timing of the data bits is not fixed between
sender and receiver, and the gaps provide the time between transmissions. In asynchronous
communication we do not require clock synchronization between the sender and receiver
devices, which is the main advantage of asynchronous communication. This method is also
cost-effective. Its main disadvantage is that data transmission can be slower, because of the
extra start and stop bits and the gaps between transmissions.
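The start/stop framing can be illustrated by packing one data byte into a ten-bit frame; the 1 start bit / 8 data bits / 1 stop bit format assumed here is a common convention, not the only one.

```c
/* Sketch of asynchronous framing: one start bit (0), 8 data bits LSB first,
 * one stop bit (1). The 8N1 format is an assumed, common convention.       */
#include <stdio.h>
#include <stdint.h>

static void frame_byte(uint8_t data, int frame[10])
{
    frame[0] = 0;                         /* start bit                          */
    for (int i = 0; i < 8; i++)
        frame[1 + i] = (data >> i) & 1;   /* data bits, least significant first */
    frame[9] = 1;                         /* stop bit                           */
}

int main(void)
{
    int frame[10];
    frame_byte('A', frame);               /* 'A' = 0x41 */
    for (int i = 0; i < 10; i++)
        printf("%d", frame[i]);
    printf("\n");
    return 0;
}
```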
On the basis of the data transfer rate and the type of transmission mode, serial communication
will take many forms. The transmission mode can be classified into simplex, half-duplex, and
full-duplex. Each transmission mode contains the source, also known as sender or transmitter,
and destination, also known as the receiver.
Parallel processing:
• Parallel processing is a term used for a large class of techniques that
are used to provide simultaneous data-processing tasks for the purpose of increasing the
computational speed of a computer system.
It refers to techniques that are used to provide simultaneous data processing.
The system may have two or more ALUs so as to be able to execute two or more
instructions at the same time.
The system may have two or more processors operating concurrently.
It can be achieved by having multiple functional units that perform the same or different
operations simultaneously.
• Example of parallel processing:
– Multiple Functional Units:
Separate the execution unit into eight functional units operating in parallel (a software
analogy is sketched after the classification list below).
There are a variety of ways in which parallel processing can be classified:
Internal Organization of Processor
Interconnection structure between processors
Flow of information through system
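As a software analogy for multiple functional units working simultaneously, the sketch below runs two independent computations in parallel with POSIX threads; it illustrates the idea of concurrent units only and is not a model of ALU-level hardware.

```c
/* Software analogy for parallel processing: two "functional units" (threads)
 * work on independent tasks at the same time. Compile with -pthread.        */
#include <pthread.h>
#include <stdio.h>

static void *sum_task(void *arg)        /* unit 1: sum of 1..1000 */
{
    long s = 0;
    for (long i = 1; i <= 1000; i++) s += i;
    *(long *)arg = s;
    return NULL;
}

static void *product_task(void *arg)    /* unit 2: 10!            */
{
    long p = 1;
    for (long i = 1; i <= 10; i++) p *= i;
    *(long *)arg = p;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    long sum = 0, product = 0;

    pthread_create(&t1, NULL, sum_task, &sum);          /* both tasks run      */
    pthread_create(&t2, NULL, product_task, &product);  /* concurrently        */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("sum = %ld, product = %ld\n", sum, product);
    return 0;
}
```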
Architectural Classification:
Flynn's classification
» Based on the multiplicity of Instruction Streams and Data Streams
» Instruction Stream
Sequence of Instructions read from memory
» Data Stream
Operations performed on the data in the processor
SISD represents an organization containing a single control unit, a processor unit
and a memory unit. Instructions are executed sequentially, and the system may or may
not have internal parallel processing capabilities.
SIMD represents an organization that includes many processing units under the
supervision of a common control unit.
MISD structure is of only theoretical interest since no practical system has been
constructed using this organization.
MIMD organization refers to a computer system capable of processing several
programs at the same time.
Note: The main difference between a multicomputer system and a multiprocessor
system is that the multiprocessor system is controlled by one operating system that
provides interaction between processors, and all the components of the system
cooperate in the solution of a problem.