Chapter 4 Microcomputer System Design

Microcomputer System Design: Comprehensive Notes

1. Microcomputer Organization

A microcomputer is a small, relatively inexpensive computer with a microprocessor as its Central Processing Unit (CPU). It is typically used
by one person at a time (hence "personal computer"). The basic organization of a microcomputer involves several interconnected functional
units.

Key Components of a Microcomputer System:

1. Central Processing Unit (CPU): The "brain" of the computer, responsible for executing instructions and performing arithmetic
and logical operations. In microcomputers, the CPU is typically a single integrated circuit called a microprocessor.
2. Memory Unit: Stores programs (instructions) and data. It can be categorized into:
o Primary Memory (Main Memory): Fast, volatile memory (RAM - Random Access Memory) directly accessible by
the CPU, used for currently executing programs and data.
o Secondary Memory (Auxiliary Memory): Slower, non-volatile memory (e.g., Hard Disk Drives, Solid State Drives,
USB drives) used for long-term storage of programs and data.
3. Input/Output (I/O) Unit: Facilitates communication between the computer and the outside world.
o Input Devices: Allow users to enter data and commands (e.g., keyboard, mouse, scanner, microphone).
o Output Devices: Display or present the results of processing (e.g., monitor, printer, speakers).
o I/O Interface: Hardware that mediates between the CPU/memory and the I/O devices, handling differences in data
formats, speeds, and electrical signals.
4. Buses: Collections of electrical conductors that provide communication pathways between the various components of the
computer system.
o Address Bus: Carries memory addresses from the CPU to memory or I/O devices to specify the location for data
transfer. It is unidirectional.
o Data Bus: Carries data between the CPU, memory, and I/O devices. It is bidirectional.
o Control Bus: Carries control signals from the CPU to other components (e.g., read/write signals, interrupt requests)
and status signals from devices back to the CPU. It is bidirectional.

Functional Block Diagram of a Microcomputer:

+-----+        +--------+        +---------+
| CPU |--------| Memory |--------|   I/O   |
+-----+        +--------+        | Devices |
   |               |             +---------+
   |               |                  |
   +---------------+------------------+
                   |
     (Buses: Address, Data, Control)

Operation: The CPU fetches instructions from memory, decodes them, and then executes them. During execution, the CPU may read data
from memory or input devices, perform computations, and write results to memory or output devices. All these operations are coordinated
via the buses.

2. Microprocessor Organization

A microprocessor is a single-chip CPU. Its internal organization includes several key functional units that work together to fetch, decode,
and execute instructions.

Key Components of a Microprocessor:

1. Arithmetic Logic Unit (ALU):
o Purpose: Performs all arithmetic operations (addition, subtraction, multiplication, division) and logical operations
(AND, OR, NOT, XOR, comparisons).
o It takes operands from registers and produces results, which are then stored in registers or memory.
2. Control Unit (CU):
o Purpose: The "nervous system" of the microprocessor. It manages and coordinates all the operations within the CPU
and the entire computer system.
o Functions:
▪ Instruction Fetch: Retrieves instructions from memory.
▪ Instruction Decode: Interprets the fetched instruction to determine the operation to be performed and the
operands involved.
▪ Execution Control: Generates control signals to direct the flow of data between CPU components,
memory, and I/O devices, and to initiate ALU operations.
▪ Timing: Provides timing signals (clock pulses) to synchronize all operations.
3. Registers:
o Purpose: Small, high-speed storage locations within the CPU used to temporarily hold data, instructions, and
addresses during processing. They are much faster than main memory.
o Types of Registers (Common examples):
▪ Program Counter (PC): Stores the memory address of the next instruction to be fetched. It is
automatically incremented after each instruction fetch.
▪ Instruction Register (IR): Stores the instruction currently being executed after it is fetched from memory.
▪ Accumulator (AC): A general-purpose register used for temporary storage of data during arithmetic and
logical operations. It often holds one of the operands and the result of the ALU operation.
▪ General Purpose Registers (GPRs): (e.g., AX, BX, CX, DX in x86 architectures) Used by programmers
to store operands and intermediate results. Their number and size vary depending on the microprocessor
architecture.
▪ Memory Address Register (MAR): Stores the address of the memory location that is to be accessed (read
from or written to).
▪ Memory Buffer Register (MBR) / Memory Data Register (MDR): Temporarily holds the data read
from or written to memory.
▪ Stack Pointer (SP): Points to the top of the stack in memory, used for subroutine calls, returns, and
temporary data storage.
▪ Flag Register (Status Register/PSW - Program Status Word): Contains individual bits (flags) that
indicate the status of the most recent ALU operation (e.g., Carry Flag, Zero Flag, Sign Flag, Overflow
Flag). These flags are used for conditional branching.

Microprocessor's Internal Bus Structure: Microprocessors often have internal buses connecting their components (ALU, CU, Registers)
to allow fast data transfer within the chip.

Fetch-Decode-Execute Cycle (Instruction Cycle): The fundamental operation of a microprocessor follows a cycle:

1. Fetch: The CPU fetches the instruction from the memory location pointed to by the Program Counter (PC) and stores it in the
Instruction Register (IR). The PC is then incremented.
2. Decode: The Control Unit decodes the instruction in the IR to determine the operation and the operands required.
3. Execute: The Control Unit generates the necessary control signals to perform the operation. This might involve:
o Loading operands from registers or memory.
o Performing an arithmetic/logical operation in the ALU.
o Storing the result in a register or memory.
o Updating flags in the Status Register.
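The fetch-decode-execute cycle above can be sketched as a toy simulator. This is a minimal illustration, not a real instruction set: the opcodes, their encodings, and the accumulator-based design are all invented for the example.

```python
# Minimal sketch of the fetch-decode-execute cycle for a toy
# accumulator-based CPU. Opcodes and the (opcode, address) encoding
# are invented for illustration only.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(memory):
    pc, acc = 0, 0
    while True:
        # Fetch: read the instruction at PC into IR, then increment PC.
        ir = memory[pc]
        pc += 1
        # Decode: split the instruction into opcode and operand address.
        opcode, addr = ir
        # Execute: perform the operation, updating ACC, memory, or PC.
        if opcode == LOAD:
            acc = memory[addr]
        elif opcode == ADD:
            acc += memory[addr]
        elif opcode == STORE:
            memory[addr] = acc
        elif opcode == HALT:
            return acc

# Program: ACC = mem[4] + mem[5]; store the result in mem[6].
mem = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 10, 32, 0]
print(run(mem))   # 42
print(mem[6])     # 42
```

Note how the PC is incremented during the fetch, before execution, which is why a later CALL or branch can simply overwrite it.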

3. Instructions and Addressing Modes

Instructions: An instruction is a binary code that specifies an operation to be performed by the CPU and the operands (data or addresses)
involved in that operation. The set of all instructions that a CPU can understand and execute is called its instruction set.

Instruction Format: An instruction typically consists of two main parts:

1. Opcode (Operation Code): Specifies the operation to be performed (e.g., ADD, SUB, MOV, JUMP).
2. Operand(s) / Address Field(s): Specify the data or the memory addresses of the data on which the operation is to be performed.
An instruction can have zero, one, two, or three address fields.

● Zero-Address Instruction (Stack-based): Operands are implicitly taken from/pushed onto a stack. (e.g., ADD - pops two
operands, pushes result)
● One-Address Instruction (Accumulator-based): One operand is implicitly the Accumulator. (e.g., ADD X - adds content of X to
Accumulator)
● Two-Address Instruction: Specifies two operands. (e.g., ADD R1, R2 - R1 = R1 + R2)
● Three-Address Instruction: Specifies three operands (two sources, one destination). (e.g., ADD R1, R2, R3 - R1 = R2 + R3)
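The four address-field conventions above can be contrasted by performing the same addition under each. The sketch below uses Python stand-ins; the register names, addresses, and mnemonics in the comments are illustrative, not from a specific architecture.

```python
# Sketch contrasting the four address-field conventions; names and
# addresses are invented for illustration.

# Zero-address (stack-based): ADD pops two operands, pushes the result.
stack = [2, 3]
stack.append(stack.pop() + stack.pop())       # ADD
assert stack == [5]

# One-address (accumulator-based): ADD X adds memory[X] to the accumulator.
memory = {0x10: 3}
acc = 2
acc += memory[0x10]                           # ADD 10H
assert acc == 5

# Two-address: ADD R1, R2 performs R1 = R1 + R2.
regs = {"R1": 2, "R2": 3}
regs["R1"] += regs["R2"]                      # ADD R1, R2
assert regs["R1"] == 5

# Three-address: ADD R1, R2, R3 performs R1 = R2 + R3.
regs = {"R2": 2, "R3": 3}
regs["R1"] = regs["R2"] + regs["R3"]          # ADD R1, R2, R3
assert regs["R1"] == 5
```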

Addressing Modes: Addressing modes define how the operand (data) or the effective address of the operand is specified in the instruction.
They determine how the CPU finds the data it needs to perform an operation.

1. Immediate Addressing Mode:
o Description: The operand itself is part of the instruction. The value is "immediately" available within the instruction.
o Syntax: Opcode #Data (e.g., ADD #5 - add the value 5)
o Advantage: Fastest, no memory access needed for the operand.
o Disadvantage: Operand size is limited by the instruction's address field size.
o Example: MOV AL, 05H (Move the value 05 hexadecimal into the AL register).
2. Register Addressing Mode:
o Description: The operand is stored in a CPU register. The instruction specifies the register number.
o Syntax: Opcode Reg (e.g., ADD R1, R2 - add content of R2 to R1)
o Advantage: Very fast, as registers are within the CPU.
o Disadvantage: Limited number of registers.
o Example: ADD AX, BX (Add content of BX to AX).
3. Direct Addressing Mode (Absolute Addressing):
o Description: The instruction contains the actual memory address of the operand.
o Syntax: Opcode Address (e.g., LOAD 2000H - load data from memory location 2000H)
o Advantage: Simple to implement.
o Disadvantage: Limited address space (address field size), inflexible (cannot access variable locations easily).
o Example: MOV AL, [2000H] (Move data from memory location 2000H into AL).
4. Register Indirect Addressing Mode:
o Description: The instruction specifies a register that contains the memory address of the operand. The register acts
as a pointer.
o Syntax: Opcode (Reg) or Opcode [Reg] (e.g., LOAD (R1) - load data from address contained in R1)
o Advantage: Flexible, allows access to a large address space, useful for pointers.
o Disadvantage: One extra memory access required to fetch the operand.
o Example: MOV AL, [BX] (Move data from memory location whose address is in BX into AL).
5. Indexed Addressing Mode:
o Description: The effective address is calculated by adding a base address (or displacement) specified in the
instruction to the content of an index register.
o Syntax: Opcode Displacement(Index_Reg) or Opcode [Base_Reg + Index_Reg]
o Advantage: Useful for accessing elements of arrays or tables.
o Example: MOV AL, [SI + 20H] (Move data from SI + 20H into AL). SI is an index register.
6. Relative Addressing Mode (PC-Relative):
o Description: The effective address is calculated by adding a displacement (offset) given in the instruction to the
content of the Program Counter (PC).
o Advantage: Programs can be relocated in memory without modification (position-independent code). Used for
branch instructions.
o Example: JMP $+5 (Jump to the instruction 5 bytes ahead of the current PC).
7. Implicit/Implied Addressing Mode:
o Description: The operand is implicitly defined by the instruction itself; no explicit address field is needed.
o Example: CLC (Clear Carry Flag - operates on the Carry Flag implicitly). PUSH (operates on the Stack Pointer
implicitly).
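The operand-resolution rules above can be sketched in one function. This is an illustration only: the register names, sample addresses, and memory contents are invented, and a real CPU does this in hardware, not in a lookup routine.

```python
# Sketch of how a CPU resolves an operand under each addressing mode.
# Register names, addresses, and contents are invented for illustration.
memory = {0x2000: 0x55, 0x3000: 0x77, 0x2020: 0x99}
regs = {"BX": 0x3000, "SI": 0x2000}

def operand(mode, field):
    if mode == "immediate":          # operand is in the instruction itself
        return field
    if mode == "register":           # operand is in a CPU register
        return regs[field]
    if mode == "direct":             # instruction carries the memory address
        return memory[field]
    if mode == "register_indirect":  # register holds the memory address
        return memory[regs[field]]
    if mode == "indexed":            # address = index register + displacement
        reg, disp = field
        return memory[regs[reg] + disp]
    raise ValueError(mode)

assert operand("immediate", 0x05) == 0x05          # MOV AL, 05H
assert operand("register", "BX") == 0x3000         # value held in BX
assert operand("direct", 0x2000) == 0x55           # MOV AL, [2000H]
assert operand("register_indirect", "BX") == 0x77  # MOV AL, [BX]
assert operand("indexed", ("SI", 0x20)) == 0x99    # MOV AL, [SI + 20H]
```

The indirect and indexed cases each cost an extra memory access compared with immediate or register operands, which is exactly the trade-off noted in the advantages/disadvantages above.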

4. Subroutines and Interrupts

Subroutines (Procedures/Functions): A subroutine is a self-contained block of code that performs a specific task and can be called
(invoked) multiple times from different parts of a main program.

● Purpose:
o Modularity: Breaks down large programs into smaller, manageable units.
o Reusability: Avoids code duplication; a common task can be written once and called wherever needed.
o Simplicity: Improves program structure and readability.
o Memory Efficiency: Reduces program size as common code is stored only once.
● Mechanism:
o CALL Instruction: When a CALL instruction is executed, the CPU first pushes the return address (the address of
the instruction immediately following the CALL instruction in the main program) onto a special memory area called the
stack. Then, the Program Counter (PC) is loaded with the starting address of the subroutine, causing execution to
jump to the subroutine.
o RETURN Instruction: At the end of the subroutine, a RET (return) instruction is encountered. This instruction pops
the return address from the top of the stack and loads it back into the PC. This causes program execution to resume
from where it left off in the main program.
o Stack: The stack is a Last-In, First-Out (LIFO) data structure used to store return addresses, parameters, and local
variables during subroutine calls.
● Nesting of Subroutines: One subroutine can call another subroutine. The stack mechanism naturally handles this by pushing
multiple return addresses.
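The CALL/RET mechanism, including nesting, can be sketched with an explicit stack. The addresses below are plain integers standing in for instruction addresses, and the program layout is invented for illustration.

```python
# Sketch of CALL/RET using an explicit LIFO stack. Addresses are
# invented integers standing in for instruction addresses.
stack = []           # grows on CALL, shrinks on RET
pc = 100             # address of a CALL instruction in the main program

# CALL subroutine at address 500: push return address, jump.
stack.append(pc + 1)     # return address = instruction after the CALL
pc = 500

# Nested CALL from inside the first subroutine (address 503 -> 800).
pc = 503
stack.append(pc + 1)
pc = 800

# RET from the inner subroutine: pop the return address into PC.
pc = stack.pop()
assert pc == 504         # resumes just after the inner CALL

# RET from the outer subroutine.
pc = stack.pop()
assert pc == 101         # resumes just after the original CALL
assert stack == []       # stack is balanced once all calls return
```

Because the stack is LIFO, the most recent CALL is always the first one returned from, which is why nesting needs no extra bookkeeping.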

Interrupts: An interrupt is an event that causes the CPU to temporarily suspend its current execution, save its current state, and transfer
control to a special routine called an Interrupt Service Routine (ISR) or Interrupt Handler. After the ISR completes its task, the CPU
restores its original state and resumes the interrupted program.

● Purpose:
o Event Handling: Allows the CPU to respond to asynchronous events (e.g., keyboard input, printer ready, disk
transfer complete, timer expiry) without constantly polling devices.
o Efficient I/O: Prevents the CPU from wasting time waiting for slow I/O devices.
o Error Handling: Catches unexpected events like power failure or division by zero.
o Multitasking/Operating Systems: Enables the OS to manage multiple programs concurrently by switching between
them.
● Types of Interrupts:

1. Hardware Interrupts:
▪ Generated by external hardware devices (e.g., keyboard press, mouse movement, printer paper jam, disk
I/O completion).
▪ Signal sent via dedicated interrupt lines to the CPU.
▪ Can be maskable (can be enabled/disabled by software) or non-maskable (cannot be disabled; reserved for
critical events like power failure).
2. Software Interrupts (Traps/Exceptions):
▪ Generated by software instructions (e.g., INT instruction in x86) or by internal CPU events (e.g., division
by zero, overflow, invalid opcode, page fault).
▪ Often used by operating systems for system calls (e.g., requesting I/O service).
● Interrupt Handling Process:
1. Interrupt Request: A device or internal event generates an interrupt signal.
2. Current Instruction Completion: The CPU typically completes the execution of the current instruction before
acknowledging the interrupt.
3. State Saving: The CPU saves the current state of the program (e.g., contents of the Program Counter, Flag Register,
and sometimes other critical registers) onto the stack.
4. Interrupt Acknowledgment: The CPU acknowledges the interrupt.
5. ISR Address Determination: The CPU determines the starting address of the appropriate ISR. This is often done via
an interrupt vector table, which is a table in memory containing the starting addresses of various ISRs. The interrupt
source provides an "interrupt number" or "vector" that points to the correct entry in this table.
6. ISR Execution: The CPU loads the ISR address into the PC and begins executing the ISR.
7. State Restoration: After the ISR completes its task, it executes an IRET (Interrupt Return) instruction. This
instruction pops the saved state (PC, flags, etc.) from the stack, restoring the CPU to its state before the interrupt.
8. Resumption: The CPU resumes execution of the interrupted program from where it left off.
● Priority Interrupts: When multiple interrupts occur simultaneously, a priority scheme is used to determine which interrupt is
serviced first. High-priority interrupts can interrupt lower-priority ISRs.

o Hardware Priority (Daisy Chaining, Parallel Priority): Physical connections define priority.
o Software Priority (Polling): CPU polls devices in a specific order to check for interrupts.
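Vectored dispatch with priorities can be sketched as follows. The vector numbers, priority values, and handler names are invented for illustration, and Python functions stand in for ISR start addresses.

```python
# Sketch of vectored interrupt dispatch with fixed priorities. Vector
# numbers, priorities, and handler names are invented for illustration.
log = []

def timer_isr():    log.append("timer handled")
def keyboard_isr(): log.append("keyboard handled")

# Interrupt vector table: vector number -> ISR entry point (here, a
# Python function stands in for a memory address).
vector_table = {0: timer_isr, 1: keyboard_isr}
priority = {0: 2, 1: 1}          # higher number = higher priority

def service(pending):
    # Pick the highest-priority pending interrupt, run its ISR (state
    # saving/restoring is implicit here), then return to the caller.
    vec = max(pending, key=lambda v: priority[v])
    vector_table[vec]()
    pending.remove(vec)
    return pending

pending = {0, 1}                 # both devices raised interrupts at once
pending = service(pending)       # the timer wins: higher priority
pending = service(pending)       # then the keyboard is serviced
assert log == ["timer handled", "keyboard handled"]
```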

5. Memory Organization

Memory organization refers to how memory is structured and accessed within a computer system. It deals with different types of memory,
their characteristics, and how they are arranged to provide efficient storage and retrieval of data and instructions.

Memory Hierarchy: Computer systems use a memory hierarchy to balance speed, cost, and capacity. Faster, more expensive memory is
placed closer to the CPU, while slower, cheaper memory is used for larger storage.

CPU
 |
 |  (Fastest, Smallest, Most Expensive)
 |
+-------+
| Cache |  (L1, L2, L3)
+-------+
 |
+-------------+
| Main Memory |  (RAM - Primary Storage)
+-------------+
 |
+---------------+
| Auxiliary Mem |  (HDD, SSD - Secondary Storage)
+---------------+
 |
+--------------+
| Tertiary Mem |  (Tapes, Optical - Offline/Archival)
+--------------+

Types of Memory:

1. CPU Registers:
o Characteristics: Smallest, fastest, most expensive storage, directly within the CPU.
o Purpose: Hold data and instructions actively being processed. Volatile.
2. Cache Memory:
o Characteristics: Small, very fast SRAM (Static RAM) memory, located between the CPU and main memory. Levels
(L1, L2, L3) exist, with L1 being fastest and closest to the CPU.
o Purpose: Stores copies of frequently accessed data and instructions from main memory to reduce average memory
access time. Volatile.
3. Main Memory (Primary Memory):
o Characteristics: RAM (Random Access Memory) - the primary working memory. Larger than cache, slower than
cache, faster than auxiliary memory. Volatile.
o Types of RAM:
▪ SRAM (Static RAM): Faster, more expensive, uses latches, does not need refreshing. Used for cache.
▪ DRAM (Dynamic RAM): Slower, cheaper, uses capacitors, needs periodic refreshing to retain data. Used
for main memory.
o Purpose: Stores programs and data that the CPU is actively using.
4. Read-Only Memory (ROM):
o Characteristics: Non-volatile memory, data is permanent or semi-permanent.
o Types:
▪ ROM: Programmed at manufacturing time.
▪ PROM (Programmable ROM): Can be programmed once by the user.
▪ EPROM (Erasable PROM): Can be erased by UV light and reprogrammed.
▪ EEPROM (Electrically Erasable PROM): Can be erased electrically and reprogrammed.
▪ Flash Memory: A type of EEPROM, widely used in SSDs, USB drives, etc., for its non-volatility and fast
read/write.
o Purpose: Stores firmware, BIOS (Basic Input/Output System), bootstrap loader.
5. Auxiliary Memory (Secondary Memory):
o Characteristics: Large capacity, non-volatile, much slower and cheaper per bit than main memory.
o Purpose: Long-term storage of programs, operating system, and data.
o Examples: Hard Disk Drives (HDDs), Solid State Drives (SSDs), Optical Discs (CD/DVD/Blu-ray), Magnetic Tapes,
USB Flash Drives.

Memory Addressing:

● Memory is organized as a collection of individually addressable storage locations (usually bytes).
● Each location has a unique binary address.
● The CPU uses the address bus to specify which memory location it wants to access.

Memory Mapping:

● Logical Address: The address generated by the CPU.
● Physical Address: The actual address in memory.
● In simple systems, these might be the same. In systems with virtual memory, a Memory Management Unit (MMU) translates
logical addresses to physical addresses.
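The logical-to-physical translation an MMU performs can be sketched with a simple page table. The page size and table contents below are invented for illustration; real MMUs use hardware lookup with a TLB cache.

```python
# Sketch of logical-to-physical address translation via a page table,
# as an MMU might do it. Page size and mappings are invented.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}    # logical page -> physical frame

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]        # MMU lookup (a miss would page-fault)
    return frame * PAGE_SIZE + offset

assert translate(0) == 7 * 4096               # page 0 -> frame 7
assert translate(4096 + 5) == 3 * 4096 + 5    # page 1 -> frame 3, offset 5
```

The offset within a page passes through unchanged; only the page number is remapped, which is what lets the OS relocate pages anywhere in physical memory.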

6. Input-Output Interface

An I/O interface is a hardware component that acts as a bridge between the CPU/memory and peripheral devices (input/output devices). It
resolves the inherent differences between the CPU and peripherals.

Differences between CPU/Memory and Peripherals:

1. Data Transfer Rates: CPUs operate at very high speeds, while peripherals are significantly slower. The interface buffers data to
accommodate this speed mismatch.
2. Data Formats: Peripherals may use different data formats (e.g., serial data for keyboard, parallel data for printer) compared to
the CPU's internal format. The interface handles conversion.
3. Electrical Characteristics: Different voltage levels and signal types exist. The interface provides electrical compatibility.
4. Operating Modes: Each peripheral has its own specific mode of operation and control requirements. The interface translates
generic CPU commands into device-specific commands.
5. Synchronization: CPU and peripherals are asynchronous. The interface provides synchronization mechanisms (e.g., status flags,
interrupts).

Functions of an I/O Interface:

1. Buffer Registers (Data Registers): Temporary storage for data being transferred between the CPU and the I/O device.
2. Status Register: Contains bits that indicate the current status of the I/O device (e.g., ready, busy, error). The CPU can read this
register.
3. Control Register: Contains bits that the CPU can write to, to send commands or configure the I/O device (e.g., enable/disable
interrupts, set data transfer mode).
4. Address Decoding Logic: Decodes the I/O address placed on the address bus by the CPU to select the correct peripheral device
and its specific registers (data, status, control).
5. Control Logic: Interprets the control signals from the CPU and generates appropriate control signals for the peripheral. It also
generates status signals back to the CPU.
6. Interrupt Logic: Manages interrupt requests from the peripheral to the CPU.

I/O Port Addressing: There are two main ways the CPU can access I/O device registers:

1. Memory-Mapped I/O:
o Description: I/O device registers (data, status, control) are assigned unique addresses within the same address space
as main memory.
o Access: The CPU uses regular memory read/write instructions (e.g., MOV in x86) to access I/O registers.
o Advantages:
▪ No special I/O instructions are needed.
▪ All memory-addressing modes can be used for I/O.
o Disadvantages:
▪ A portion of the memory address space is used for I/O, reducing available memory.
▪ Memory cache might interfere with I/O operations if not handled carefully.
2. Isolated I/O (Port-Mapped I/O):
o Description: I/O device registers have a separate address space distinct from memory.
o Access: The CPU uses special I/O instructions (e.g., IN, OUT in x86) to access I/O ports.
o Advantages:
▪ Separate address spaces mean no memory address space is consumed by I/O.
▪ Clear distinction between memory and I/O operations.
o Disadvantages:
▪ Requires dedicated I/O instructions.
▪ Fewer addressing modes might be available for I/O operations compared to memory.
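The distinction between the two schemes can be sketched with two separate address maps. The addresses, port numbers, and register layout below are invented for illustration; the IN/OUT helpers merely mimic the role of the x86 instructions.

```python
# Sketch contrasting memory-mapped and isolated (port-mapped) I/O.
# Addresses and register layouts are invented for illustration.

# Memory-mapped I/O: device registers live in the memory address space,
# so ordinary load/store operations reach them.
address_space = {}
UART_DATA = 0xF000                       # device data register's address
address_space[UART_DATA] = ord("A")      # plays the role of MOV [F000H], 'A'
assert address_space[UART_DATA] == 65

# Isolated I/O: a separate port address space, reached only through
# dedicated IN/OUT operations.
io_ports = {}
def out(port, value): io_ports[port] = value   # stands in for OUT
def inp(port):        return io_ports[port]    # stands in for IN

out(0x60, ord("B"))
assert inp(0x60) == 66

# Port 0x60 and memory address 0x60 do not collide: separate spaces.
address_space[0x60] = 0
assert io_ports[0x60] == 66 and address_space[0x60] == 0
```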
7. Programmed Input-Output (PIO)

Programmed I/O is a method of data transfer where the CPU is directly involved in every step of the I/O operation. The CPU executes a
program that continuously monitors the status of the I/O device and performs the actual data transfer.

Working Principle:

1. The CPU issues an I/O command to the peripheral device via the I/O interface's control register.
2. The CPU then enters a busy-wait loop (polling), repeatedly reading the status register of the I/O device to check if the device is
ready for data transfer or if the operation is complete.
3. Once the status indicates the device is ready, the CPU reads data from the I/O data register (for input) or writes data to it (for
output).
4. This process is repeated for every byte or word of data transferred.

Steps for a Programmed I/O Input Operation (e.g., reading from keyboard):

1. The CPU sends a command to the keyboard interface to prepare for input.
2. The CPU repeatedly checks the status register of the keyboard interface.
3. If the "data ready" bit in the status register is not set, the CPU continues to loop (busy-wait).
4. Once a key is pressed, the keyboard interface sets the "data ready" bit.
5. The CPU detects the "data ready" bit, reads the character from the keyboard interface's data register into a CPU register.
6. The CPU then transfers this data from the CPU register to memory.
7. The keyboard interface clears the "data ready" bit.
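The busy-wait loop in the steps above can be sketched as follows. The keyboard interface is simulated, and its "data ready" bit is artificially set after a few status polls; the class and method names are invented for illustration.

```python
# Sketch of the programmed-I/O input loop. The keyboard interface is
# simulated: "data ready" becomes set only after a few status polls,
# so the CPU busy-waits. All names are invented for illustration.
class KeyboardInterface:
    def __init__(self, key):
        self._key = key
        self._polls = 0
    def read_status(self):        # status register: bit 0 = "data ready"
        self._polls += 1
        return 1 if self._polls > 3 else 0   # key "arrives" after 3 polls
    def read_data(self):          # data register; clears "data ready"
        self._polls = 0
        return self._key

kbd = KeyboardInterface(ord("X"))
wasted = 0
while not (kbd.read_status() & 1):   # busy-wait (polling) loop
    wasted += 1
char = kbd.read_data()               # data register -> CPU register
assert char == ord("X")
assert wasted == 3                   # iterations spent doing no useful work
```

The `wasted` counter makes the disadvantage concrete: every iteration is CPU time burned purely on polling.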

Advantages of Programmed I/O:

● Simplicity: Easiest to implement for simple I/O tasks.
● Low Hardware Cost: Requires minimal additional hardware.

Disadvantages of Programmed I/O:

● CPU Overhead: The CPU spends a significant amount of time polling the I/O device, idly waiting for the device to become
ready. This is inefficient, especially for slow devices.
● Low Throughput: The overall system performance degrades significantly as the CPU is tied up.
● No Concurrency: The CPU cannot perform other tasks while waiting for I/O operations to complete. This is unacceptable in
multi-tasking environments.

Use Cases: Programmed I/O is typically used in very simple embedded systems where CPU time is not critical, or for initial boot-up
processes where only basic I/O is required before more advanced mechanisms are set up.

8. Input-Output Processor (IOP)

An Input-Output Processor (IOP), also known as an I/O Channel or Channel Processor, is a specialized processor designed to handle I/O
operations independently of the main CPU. It allows the main CPU to execute application programs concurrently with I/O operations.

Purpose: To offload I/O management tasks from the main CPU, thereby improving overall system performance and efficiency.

Working Principle:

1. The main CPU initiates an I/O operation by sending a high-level command (e.g., "read file X from disk into memory location
Y") to the IOP.
2. The CPU then continues with its own processing tasks.
3. The IOP takes over the I/O operation. It has its own instruction set, registers, and control logic specifically optimized for I/O.
4. The IOP fetches its own I/O instructions (called commands or channel programs) from main memory. These commands
specify the device, operation, memory buffer address, and transfer size.
5. The IOP executes these commands, managing the data transfer between the peripheral device and main memory. This often
involves using Direct Memory Access (DMA).
6. Upon completion of the I/O operation (or in case of an error), the IOP generates an interrupt to the main CPU to signal that the
task is finished.
7. The CPU can then retrieve the status of the I/O operation from the IOP's status registers.
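The steps above can be sketched as a CPU handing a channel program to an IOP. The command names, device table, and addresses are all invented for illustration, and the IOP runs sequentially here, whereas a real one executes concurrently with the CPU.

```python
# Sketch of CPU/IOP interaction: the CPU builds a channel program in
# memory and starts the IOP, which executes it and raises a completion
# interrupt. All names and addresses are invented for illustration.
memory = [0] * 16
disk = {"fileX": [11, 22, 33]}      # pretend disk contents

# Channel program written by the CPU: (op, device key, buffer addr, count).
channel_program = [("READ", "fileX", 4, 3)]

interrupts = []
def iop_run(program):
    # The IOP interprets its own command list, moving data directly into
    # main memory (DMA-style), then interrupts the CPU when finished.
    for op, key, buf, count in program:
        if op == "READ":
            memory[buf:buf + count] = disk[key][:count]
    interrupts.append("io_done")    # completion interrupt to the CPU

iop_run(channel_program)            # the CPU would keep computing meanwhile
assert memory[4:7] == [11, 22, 33]
assert interrupts == ["io_done"]
```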

Features of an IOP:

● Independent Processing: Operates independently of the main CPU.
● Dedicated Instruction Set: Has its own specialized instructions for I/O operations.
● Direct Memory Access (DMA) Capability: Most IOPs are equipped with DMA controllers to transfer data directly between I/O
devices and memory without CPU intervention.
● Buffering: May include internal buffers to manage data flow between high-speed memory and slower I/O devices.
● Error Handling: Can detect and sometimes handle I/O errors independently.
● Interrupt Generation: Generates interrupts to the CPU upon I/O completion or error.

Advantages of using an IOP:

● CPU Relief: Frees the main CPU from the burden of detailed I/O management, allowing it to focus on computation.
● Increased Concurrency: CPU and I/O operations can happen in parallel, significantly improving system throughput.
● Modular Design: Separates I/O logic from main CPU logic.
● Scalability: Allows adding more I/O devices without significantly impacting CPU performance.

Disadvantages of using an IOP:

● Increased Hardware Complexity and Cost: Requires a dedicated processor.
● Higher System Overhead: Initial setup and communication between CPU and IOP add some overhead.

Example: In large mainframe computers, I/O channels are sophisticated IOPs that manage complex I/O subsystems. In modern PCs,
specialized controllers for hard drives (SATA controllers), network cards, and graphics cards often incorporate IOP-like functionalities and
use DMA.

9. Input-Output Device Characteristics

Input/Output (I/O) devices are the hardware components that allow a computer to interact with the outside world. They vary widely in their
characteristics, which directly impact how they are interfaced with the computer.

Key Characteristics of I/O Devices:

1. Data Transfer Rate:
o Definition: The speed at which data can be transferred between the device and the computer.
o Range: Varies greatly.
▪ Slow Devices: Keyboard (bytes/second), Mouse.
▪ Medium-Speed Devices: Printer (KB/sec to MB/sec), Scanner.
▪ High-Speed Devices: Hard Disk Drive (MB/sec to GB/sec), SSD, Network Card (GB/sec), Graphics
Card.
o Impact: Determines the type of I/O control method (programmed I/O, interrupt-driven, DMA) required. Fast devices
often need DMA.
2. Unit of Transfer:
o Definition: The smallest amount of data that can be transferred in a single operation.
o Examples:
▪ Character-oriented (byte): Keyboard, mouse, serial port.
▪ Block-oriented (multiple bytes/kilobytes/megabytes): Disk drives, network cards (transfer data in fixed-
size blocks/packets).
3. Data Representation and Encoding:
o Definition: How the device represents data internally and externally.
o Examples: ASCII for text, specific image formats for scanners/printers, analog signals for microphones/speakers.
o Impact: The I/O interface needs to handle data conversion and formatting.
4. Nature of Operation (Serial vs. Parallel):
o Serial: Data bits are transferred one after another over a single line (e.g., USB, Ethernet, RS-232 serial port).
o Parallel: Multiple data bits are transferred simultaneously over multiple lines (e.g., old parallel printer ports, internal
bus structures).
o Impact: Dictates the type of interface circuitry required. Serial interfaces are simpler but slower for a given clock
rate; parallel interfaces are faster but require more wires.
5. Control and Status Information:
o Definition: Devices provide status information (e.g., ready, busy, error, paper jam) and accept control commands
(e.g., print, rewind, reset).
o Impact: The I/O interface has dedicated status and control registers that the CPU or IOP can read/write to.
6. Human Interaction vs. Machine Interaction:
o Human-readable: Devices primarily interact with humans (e.g., keyboard, monitor, printer). Often characterized by
slower, bursty data rates.
o Machine-readable: Devices primarily interact with other machines or store data for later machine processing (e.g.,
disk drives, network cards). Often characterized by high, continuous data rates.
7. Direction of Data Transfer:
o Input Devices: Data flows from device to computer (e.g., keyboard, mouse, scanner).
o Output Devices: Data flows from computer to device (e.g., monitor, printer, speakers).
o Input/Output Devices: Can both input and output data (e.g., hard drives, network cards, touchscreens).
8. Mechanical vs. Electronic:
o Mechanical: Devices with moving parts (e.g., hard disk read/write heads, printer mechanisms). These are inherently
slower and more prone to wear.
o Electronic: Devices without moving parts (e.g., SSDs, network cards). These are generally faster and more reliable.

Understanding these characteristics helps in designing appropriate I/O interfaces and choosing the most efficient data transfer methods for
each device.

10. Direct Memory Access (DMA)

Direct Memory Access (DMA) is a hardware-controlled data transfer technique that allows an I/O device to transfer data directly to and
from main memory without the continuous involvement of the CPU. This significantly improves data transfer speed and system
efficiency, especially for high-speed I/O devices.

Why DMA is Needed:

● Programmed I/O wastes CPU cycles polling for device readiness.
For large data blocks, this overhead is substantial.
● DMA frees the CPU to perform other tasks while I/O transfers are in progress.

DMA Controller (DMAC): The core component of DMA is a special-purpose hardware unit called the DMA Controller (DMAC). The
DMAC manages the entire data transfer process once initiated by the CPU.

Working Principle of DMA:

1. CPU Initialization:
o The CPU programs the DMAC by writing to its internal registers:
▪ Source Address Register: Stores the starting address of the data block in memory (for output) or the
starting address in the I/O device (for input, if device-side addressing is possible).
▪ Destination Address Register: Stores the starting address in the I/O device (for output) or the starting
address in memory (for input).
▪ Word Count Register: Stores the number of bytes/words to be transferred.
▪ Control Register: Specifies the transfer mode (e.g., read/write, burst mode, cycle stealing),
source/destination increment/decrement behavior, and other settings.
o After programming, the CPU sends a "start" command to the DMAC. The CPU then resumes its other tasks.
2. DMA Request (DRQ):
o The I/O device, when ready to transfer data, sends a DMA Request (DRQ) signal to the DMAC.
3. Bus Request (BR) and Bus Grant (BG):
o The DMAC, upon receiving DRQ, sends a Bus Request (BR) signal to the CPU (or bus arbiter).
o The CPU, upon receiving BR, finishes its current bus cycle and then relinquishes control of the system buses
(address, data, control) by asserting a Bus Grant (BG) signal back to the DMAC. The CPU essentially enters a
"halted" or "idle" state regarding bus access.
4. Data Transfer:
o Once the DMAC receives BG, it takes control of the buses.
o It uses its own address and data registers to directly transfer data between the I/O device and memory, one word/byte
at a time, for the specified count. The CPU is bypassed for this data transfer.
o The DMAC automatically increments/decrements its internal address registers and decrements the word count
register.
5. DMA Completion and Interrupt:
o When the word count reaches zero (all data transferred), the DMAC releases control of the buses by deactivating BR.
o It then sends an Interrupt Request (IRQ) signal to the CPU to indicate the completion of the DMA transfer.
o The CPU can then process the data, check for errors, and potentially initiate the next DMA transfer.
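The five steps above can be sketched in software. The following Python model is a teaching aid only (the register names mirror those listed in step 1; the DRQ/BR/BG bus arbitration is assumed to have already occurred and is not modeled):

```python
class DMAController:
    """Toy model of a DMAC: the CPU programs its registers, then the
    controller moves a block between a device buffer and memory on its own."""

    def __init__(self):
        self.source = 0    # Source Address Register
        self.dest = 0      # Destination Address Register
        self.count = 0     # Word Count Register
        self.irq = False   # interrupt line back to the CPU

    def program(self, source, dest, count):
        # Step 1: CPU initialization -- write the DMAC's internal registers.
        self.source, self.dest, self.count = source, dest, count
        self.irq = False

    def run(self, device_buffer, memory):
        # Steps 2-4: after arbitration, the DMAC owns the buses and copies
        # one word per bus cycle, updating its own address and count
        # registers. The CPU is bypassed for the data movement itself.
        while self.count > 0:
            memory[self.dest] = device_buffer[self.source]
            self.source += 1
            self.dest += 1
            self.count -= 1
        # Step 5: word count reached zero -- release the buses, raise IRQ.
        self.irq = True


# CPU side: program a 4-word input transfer into memory starting at
# address 2, then check the completion interrupt.
memory = [0] * 8
device = [0xDE, 0xAD, 0xBE, 0xEF]

dmac = DMAController()
dmac.program(source=0, dest=2, count=4)
dmac.run(device, memory)

assert dmac.irq                               # transfer-complete interrupt
assert memory[2:6] == [0xDE, 0xAD, 0xBE, 0xEF]
```

Note how the CPU's only work is the initial `program()` call and servicing the final interrupt; everything in `run()` happens without it.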

Types of DMA Transfer Modes:

1. Burst Mode (Block Transfer Mode):
o The DMAC takes control of the buses and transfers the entire block of data without releasing the buses until the
transfer is complete.
o Advantage: Fastest transfer rate for large blocks.
o Disadvantage: CPU is "starved" of bus access during the entire burst, potentially causing latency for critical CPU
operations.
2. Cycle Stealing Mode:
o The DMAC transfers one byte/word of data and then releases the buses back to the CPU for one bus cycle. It then
requests the buses again for the next byte/word.
o Advantage: CPU experiences minimal delay; it can continue executing instructions between transfers.
o Disadvantage: Slower than burst mode due to bus arbitration overhead for each transfer. Suitable for devices that
transfer data in small chunks.
3. Transparent Mode:
o The DMAC transfers data only when the CPU is not using the buses (e.g., during CPU internal operations that don't
require bus access).
o Advantage: No impact on CPU performance, as it never interrupts the CPU's bus access.
o Disadvantage: Slowest DMA mode because transfers are dependent on CPU bus idleness.
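The trade-off between burst mode and cycle stealing can be made concrete with a back-of-the-envelope timing calculation. The figures below (bus cycle time, arbitration overhead) are illustrative assumptions, not taken from any real datasheet:

```python
# Rough timing comparison of burst vs. cycle-stealing DMA for one block.
BUS_CYCLE_NS = 100     # assumed time to transfer one word on the bus
ARBITRATION_NS = 50    # assumed BR/BG handshake cost per bus acquisition
WORDS = 1024           # block size in words

# Burst mode: one arbitration, then the whole block back to back.
burst_ns = ARBITRATION_NS + WORDS * BUS_CYCLE_NS

# Cycle stealing: the DMAC re-arbitrates for every single word.
steal_ns = WORDS * (ARBITRATION_NS + BUS_CYCLE_NS)

print("burst:         ", burst_ns, "ns")   # 102450 ns
print("cycle stealing:", steal_ns, "ns")   # 153600 ns
```

Under these assumptions cycle stealing takes roughly 50% longer, but the CPU regains the bus between every word instead of stalling for the whole block, which is exactly the trade-off described above.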

Advantages of DMA:

● High Speed Data Transfer: Ideal for transferring large blocks of data rapidly (e.g., disk I/O, network communication).
● Reduced CPU Overhead: Frees the CPU to perform other computational tasks, leading to better overall system performance
and throughput.
● Increased Concurrency: Allows parallel execution of CPU tasks and I/O operations.

Disadvantages of DMA:

● Increased Hardware Complexity: Requires a dedicated DMAC, adding to hardware cost and complexity.
● Bus Contention: DMAC competes with the CPU for bus access, which needs careful arbitration.
● Cache Coherency Issues: If the CPU caches data that is modified by DMA, cache coherency mechanisms are needed to ensure
the CPU always has the latest data.

DMA is a critical feature in modern computer systems, enabling high-performance data transfers for almost all high-speed peripherals.
