
UNIT – 1

Explain the functions of the 40 pins of the 8086


The 8086 microprocessor is a 16-bit processor housed in a 40-pin DIP (Dual In-line Package). The pins
on the package are used for various functions, including:
1. AD0-AD15: These pins form the multiplexed address/data bus. During the first clock period of a bus cycle
they carry the lower 16 address bits (A0-A15); during the rest of the cycle they carry the 16-bit data (D0-D15).
2. A16-A19: These pins are used to transfer high-order address bits during memory operations.
3. BHE/S7: Bus High Enable (BHE) enables the upper byte (D8-D15) of the data bus during memory and I/O
operations. This pin is multiplexed with the status signal S7.
4. S0-S6: These are status pins. S3-S6 are multiplexed with A16-A19 and carry status information during the
later part of a bus cycle, while in maximum mode S0-S2 indicate the type of bus cycle to the 8288 bus controller.
5. RD: This pin is used to indicate a read operation.
6. WR: This pin is used to indicate a write operation.
7. DEN: Data Enable (DEN) is used to enable the data bus during memory operations.
8. M/IO: This pin is used to indicate whether the current operation is a memory or an I/O operation.
9. INTA: Interrupt Acknowledge (INTA) is used to acknowledge an interrupt request.
10. INTR: This pin carries the maskable interrupt request.
11. NMI: Non-Maskable Interrupt (NMI) is a higher-priority interrupt that cannot be masked.
12. CLK: This pin is used to provide the clock signal.
13. RESET: This pin is used to reset the processor.
14. TEST: This input is sampled by the WAIT instruction; the processor idles until TEST is driven low (it is
typically connected to the BUSY output of the 8087 coprocessor).
15. READY: This input indicates that the addressed memory or I/O device is ready to complete the data
transfer; slow devices use it to insert wait states.
16. HOLD: This pin is used to indicate a HOLD request.
17. HLDA: Hold Acknowledge (HLDA) is used to acknowledge a HOLD request.
These are the primary functions of the 40 pins of the 8086 microprocessor.

Give a brief note on 16-bit microprocessors


A 16-bit microprocessor is a class of microprocessor architecture that was common in the 1980s and
early 1990s. The term refers to the width of the processor's registers and data bus, i.e., the number of bits that can
be processed in a single operation.
The most famous 16-bit microprocessors are the Intel 8086 and 8088; the 8088 was used in the original IBM PC.
These processors ran at clock speeds of up to 10 MHz and could address up to 1 MB of memory.
One of the advantages of 16-bit microprocessors was their ability to process data more quickly and efficiently
than their 8-bit counterparts. They also offered more advanced instruction sets and could perform more
complex calculations.
However, 16-bit microprocessors were eventually replaced by 32-bit and 64-bit architectures, which offered
even more processing power and memory capacity. Today, 16-bit microprocessors are still used in some
embedded systems, but they are less common in mainstream computing.

Purpose of each flag of 8086.


The 8086 processor has a flag register consisting of 16 bits that are set or cleared based on the result of an
operation. Each bit in the flag register represents a certain flag that can be used for testing or branching based
on the outcome of an instruction. Here is a summary of the purpose of each flag:
1. Carry Flag (CF): The Carry Flag is set when an arithmetic operation generates a carry out of the most
significant bit. It is used for operations such as addition, subtraction, multiplication, and division.
2. Parity Flag (PF): The Parity Flag is set when the low-order byte of the result contains an even number of 1 bits. It
is used for parity checking, a technique for detecting errors in data transmission.
3. Auxiliary Carry Flag (AF): The Auxiliary Carry Flag is set when an arithmetic operation generates a
carry out of bit 3. It is used for binary-coded decimal (BCD) arithmetic.
4. Zero Flag (ZF): The Zero Flag is set when the result of an operation is zero. It is used for testing the
result of a comparison or to determine if a loop should continue.
5. Sign Flag (SF): The Sign Flag is set when the result of an operation is negative. It is used for signed
arithmetic operations.
6. Trap Flag (TF): The Trap Flag is used for debugging and is not normally used in application programs.
7. Interrupt Flag (IF): The Interrupt Flag controls the response of the processor to interrupts. If the flag is
set, interrupts are enabled. If the flag is cleared, interrupts are disabled.
8. Direction Flag (DF): The Direction Flag controls the direction of string operations. If the flag is set, the
string is processed from high to low memory addresses. If the flag is cleared, the string is processed
from low to high memory addresses.
9. Overflow Flag (OF): The Overflow Flag is set when a signed arithmetic operation produces a result that is
too large or too small to be represented in the destination operand. It is used for signed arithmetic operations.
These flags are important in programming as they allow the programmer to make decisions based on the
outcome of an operation. For example, a program might use the Zero Flag to test if a value is zero or use the
Carry Flag to check if an addition resulted in a carry-out.
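As a small illustrative sketch (the labels and values are made up for this example), the following 8086 fragment shows how the Carry and Zero flags steer program flow after an addition and a comparison:

        MOV AX, 0FFFFh     ; AX = FFFFh
        ADD AX, 1          ; AX = 0000h; CF = 1 (carry out of bit 15), ZF = 1 (result is zero)
        JC  HAD_CARRY      ; taken here, because the Carry flag is set
        MOV CX, 0          ; skipped in this example
HAD_CARRY:
        CMP AX, 0          ; compare AX with zero (sets flags only)
        JZ  WAS_ZERO       ; taken, because the Zero flag is set
WAS_ZERO:
        NOP                ; execution continues here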
Draw and discuss a typical maximum mode 8086 system.
Here is a block diagram of a typical maximum mode 8086 system:
        +---------------------------------+
        |               CPU               |
        |   +---------+     +---------+   |
        |   |   BIU   |<--->|   EU    |   |
        |   +---------+     +---------+   |
        +----------------|----------------+
                         |
              +----------v----------+
              |  Bus Control Unit   |<----> Memory and I/O
              +---------------------+
Legend:
CPU - Central Processing Unit
BIU - Bus Interface Unit
BCU - Bus Control Unit (the 8288 bus controller used in maximum mode)
EU - Execution Unit
In the maximum mode configuration, the 8086 processor is designed to work with a coprocessor (8087). The
CPU communicates with the coprocessor through the Bus Interface Unit (BIU) and the Bus Control Unit (BCU).
Here are the main components of the system:
 CPU: The CPU is the heart of the system and is responsible for executing instructions. It consists of the
BIU and the Execution Unit (EU).
 BIU: The BIU is responsible for interfacing the CPU with the memory and I/O devices. It provides the
necessary signals to access the memory and I/O devices and fetches the instruction bytes from
memory.
 BCU: The BCU (the 8288 bus controller) is responsible for generating the necessary signals to control the data and address
buses. It also manages the bus cycles and performs arbitration when multiple devices are trying to
access the bus simultaneously.
 EU: The EU is responsible for executing the instructions fetched by the BIU. It consists of the registers,
the ALU (Arithmetic Logic Unit), and the control unit.
 Memory: The memory stores the program and data for the system. It is connected to the CPU via the
address and data buses.
In the maximum mode configuration, the 8086 processor works with a coprocessor (8087) to perform floating-
point operations. The coprocessor is connected to the system through a separate bus and has its own control
unit.
The maximum mode configuration is typically used in systems that require high performance and multitasking
capabilities. It allows multiple devices to access the bus simultaneously and enables the processor to
communicate with a coprocessor to perform complex calculations.
What is the use of a bus controller in maximum mode?
In a maximum mode 8086 system, the bus controller is responsible for controlling the data, address, and
control buses of the system. Its main function is to coordinate the transfer of data and instructions between
different components of the system, such as the CPU, memory, and input/output (I/O) devices.
Here are some specific uses of the bus controller in a maximum mode 8086 system:
1. Bus arbitration: In a multi-master system, the bus controller is responsible for resolving conflicts
between different masters who want to access the bus at the same time. It arbitrates between the
masters to ensure that only one master has control of the bus at any given time.
2. Bus timing: The bus controller is also responsible for generating the appropriate timing signals for the
system bus, such as the address strobe, read/write strobe, and data transfer signals. It ensures that the
signals are synchronized with the clock signal of the system and that data is transferred reliably
between different components.
3. Memory mapping: The bus controller can also be used to map memory addresses to physical
locations in memory. This allows the CPU to access different memory locations using a single logical
address, simplifying memory management, and improving system performance.
4. I/O control: The bus controller can also be used to control input/output (I/O) operations in the system,
such as reading and writing data to and from I/O devices. It provides the necessary signals to transfer
data between the CPU and I/O devices and ensures that the data is transferred reliably and efficiently.
Overall, the bus controller plays a critical role in the operation of a maximum mode 8086 system, providing the
necessary coordination and control signals for the system bus and ensuring reliable and efficient data transfer
between different components.
EXPLAIN the physical memory organization in an 8086 system.
In an 8086 system, the physical memory is organized into segments, which are blocks of memory that are
accessed using a segment value and an offset. Each segment is up to 64 KB in size and can contain both code
and data.
The physical memory organization in an 8086 system is based on a memory segmentation model. In this
model, the 1 MB memory space of the 8086 processor is divided into segments of up to 64 KB each. Each segment
is identified by a 16-bit segment value, and a location within the segment is identified by a 16-bit offset. The
processor forms the 20-bit physical address by shifting the segment value left by four bits (multiplying it by 10h)
and adding the offset: physical address = segment x 10h + offset.
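For example, assuming the purely illustrative values of segment 1234h and offset 0010h, the processor produces the physical address 12350h. The fragment below sketches this with the data segment register (all numbers are assumptions for the example):

        MOV AX, 1234h      ; hypothetical segment value
        MOV DS, AX         ; DS = 1234h
        MOV BX, 0010h      ; hypothetical offset within the segment
        MOV AL, [BX]       ; reads from physical address 1234h x 10h + 0010h = 12350h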
The physical memory in an 8086 system is divided into four main types of segments:
1. Code segment: Contains executable code that is executed by the CPU. The code segment is typically
read-only and can be shared between multiple processes or tasks.
2. Data segment: Contains initialized data, such as variables and constants. The data segment is read-
write and can be accessed by the CPU and other components of the system.
3. Stack segment: Contains the system stack, which is used for temporary storage of data and
addresses during program execution. The stack segment grows downward in memory and is used for
storing function calls, local variables, and other data.
4. Extra segment: Contains additional data and code and can be used for special purposes by the
operating system or applications.
On the 8086, a segment is selected simply by loading its 16-bit segment value into one of the four segment
registers: CS for the code segment, DS for the data segment, SS for the stack segment, and ES for the extra
segment. (Segment descriptors and the Global Descriptor Table belong to the protected mode of later
processors, starting with the 80286; the 8086 itself has no descriptor tables or hardware protection.)
Overall, the physical memory organization in an 8086 system is based on a segmented memory model, which
allows programs larger than 64 KB to be organized into logical units and makes code and data easy to relocate.
By dividing memory into segments, the system can keep the code, data and stack of one or more programs in
separate regions of the 1 MB address space.

UNIT – 2
Illustrate the super scalar architecture of Pentium
The superscalar architecture of the Pentium processor is one of its key features, enabling it to execute multiple
instructions in parallel and achieve higher performance than earlier processors. Here's a brief overview of how
the superscalar architecture works:
1. Instruction Fetch: The first stage of the superscalar architecture is instruction fetch, where the
processor fetches multiple instructions from memory and stores them in its instruction queue.
2. Instruction Decode: The second stage is instruction decode, where the processor decodes each
instruction and determines the operations required to execute it.
3. Execution Units: The Pentium processor has multiple execution units, including two integer pipelines (the U
and V pipes), a pipelined floating-point unit and, in later versions, multimedia (MMX) units. These units can work
on different instructions simultaneously, allowing the processor to achieve higher performance.
4. Out-of-Order Execution: Later members of the Pentium family (from the Pentium Pro onward) added
out-of-order execution, in which instructions may be executed in any order that does not affect the program's
results. This lets independent instructions execute as soon as their operands are ready, further improving
performance; the original Pentium itself issues instructions in program order to its two pipelines.
5. Register Renaming: The out-of-order members of the Pentium family also use register renaming, which maps
logical registers to physical registers dynamically. This reduces false register dependencies and allows more
instructions to be in flight simultaneously.
6. Branch Prediction: The Pentium processor also features advanced branch prediction capabilities, which
enable it to predict the outcome of conditional branches and prefetch instructions accordingly. This
feature reduces the number of pipeline stalls and improves performance.
Overall, the superscalar architecture of the Pentium processor enables it to execute multiple instructions
simultaneously, achieving higher performance than earlier processors. Features such as the dual pipelines and
branch prediction, together with the out-of-order execution and register renaming introduced in later family
members, reduce pipeline stalls and further improve efficiency.

What are privilege levels?


Privilege levels refer to different levels of access and permissions that are granted to users or processes within
a computer system. In most computer systems, there are at least two privilege levels: user mode and kernel
mode.
User mode is the default privilege level for most processes, including applications and user-level programs.
Processes running in user mode have limited access to system resources and can only perform certain
operations. For example, a program running in user mode may be able to read and write files, but it cannot
access system memory or perform low-level hardware operations.
Kernel mode, on the other hand, is a higher privilege level that is reserved for the operating system and
system-level processes. Processes running in kernel mode have unrestricted access to system resources and
can perform any operation. This allows the operating system to perform critical tasks such as managing
memory, scheduling processes, and handling interrupts.
The privilege level of a process can be changed using a system call, which is a special function that allows a
process to request services from the operating system. When a process makes a system call, it enters kernel
mode and gains access to system resources that are not available in user mode. Once the system call is
complete, the process returns to user mode and continues executing its normal operations.
However, changing the privilege level of a process is a delicate operation that can be exploited by attackers to
gain unauthorized access to a system. Therefore, most modern operating systems implement security features
such as memory protection, access control, and privilege separation to prevent unauthorized privilege level
changes and ensure the security of the system.

Illustrate various addressing modes with examples


Addressing modes are methods used by a CPU to access data or operands in memory. There are several
types of addressing modes, each with its own syntax and usage. Here are some examples of different
addressing modes:
1. Immediate Addressing Mode: In this mode, the operand value is specified directly in the instruction. For
example: MOV AX, 25h (moves the immediate value 25h to register AX)
2. Direct Addressing Mode: In this mode, the memory address of the operand is directly specified in the
instruction. For example: MOV AX, [1234h] (moves the value stored at memory location 1234h to
register AX)
3. Register Addressing Mode: In this mode, the operand is contained within a register. For example: ADD
AX, BX (adds the value in register BX to the value in register AX)
4. Indirect Addressing Mode: In this mode, the memory address of the operand is contained in a register.
For example: MOV AX, [BX] (moves the value stored at the memory location pointed to by the address in
register BX to register AX)
5. Indexed Addressing Mode: In this mode, the memory address of the operand is calculated by adding an
offset to the contents of an index register. For example: MOV AX, [SI+10h] (moves the value stored at the
memory location whose address is the contents of SI + 10h to register AX)
6. Base Addressing Mode: In this mode, the memory address of the operand is calculated by adding an
offset to the contents of a base register. For example: MOV AX, [BP+10h] (moves the value stored at the
memory location whose address is the contents of BP + 10h to register AX)
7. Relative Addressing Mode: In this mode, the memory address of the operand is calculated relative to
the current instruction pointer value. For example: JMP short LABEL (jumps to the memory location of
the label, relative to the current instruction pointer value)
These are some of the most common addressing modes used in computer architecture. Each mode is useful
for different types of operations and can be used in combination with others to create complex instructions.
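As a short, purely illustrative sketch (register contents and addresses are assumed, not prescribed), the fragment below combines several of these modes in one sequence:

        MOV AX, 25h        ; immediate: AX = 25h
        MOV BX, 1000h      ; immediate: BX holds an assumed offset
        MOV CX, [BX]       ; register indirect: word at DS:1000h
        MOV DX, [BX+2]     ; based with displacement: word at DS:1002h
        MOV [1234h], AX    ; direct: store AX at DS:1234h
        ADD AX, CX         ; register: AX = AX + CX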

Discuss the evolution of advanced microprocessors


The evolution of advanced microprocessors has been a fascinating journey, marked by numerous
technological advancements that have made modern computing possible. Here's a brief overview of some of
the significant milestones in the history of advanced microprocessors:
1. First-Generation Microprocessors: The first-generation microprocessors were introduced in the 1970s
and were based on 4-bit or 8-bit architectures. Intel 4004 and Intel 8008 were among the earliest
microprocessors.
2. Second-Generation Microprocessors: In the late 1970s, second-generation microprocessors were
introduced with 16-bit architectures. The most notable example of this generation was the Intel 8086,
one of the first widely used 16-bit processors.
3. Third-Generation Microprocessors: Third-generation microprocessors were introduced in the early
1980s with 32-bit architectures. The most significant example of this generation was the Intel 80386,
Intel's first 32-bit x86 processor.
4. Fourth-Generation Microprocessors: In the mid-1980s, fourth-generation microprocessors were
introduced, which featured advanced instruction sets, cache memory, and floating-point arithmetic. The
most notable example of this generation was the Intel 80486.
5. Fifth-Generation Microprocessors: Fifth-generation microprocessors were introduced in the early 1990s
and featured advanced pipelining and superscalar execution. The most notable
example of this generation was the Intel Pentium, the first x86 processor with a superscalar
architecture.
6. Sixth-Generation Microprocessors: In the late 1990s, sixth-generation microprocessors were
introduced, which featured larger caches, multimedia instruction sets, and improved power
management. The most notable examples of this generation were the Intel Pentium Pro and the AMD
K6.
7. Seventh-Generation Microprocessors: Seventh-generation microprocessors were introduced in the
early 2000s and were characterized by hyperthreading, 64-bit architectures, and multi-core processors.
The most notable examples of this generation were the Intel Pentium 4 and the AMD Athlon 64.
8. Eighth-Generation Microprocessors: Eighth-generation microprocessors were introduced in the late
2000s and featured even more advanced multi-core architectures, virtualization support, and improved
power management. The most notable examples of this generation were the Intel Core i7 and the AMD
Phenom II.
9. Ninth-Generation Microprocessors: Ninth-generation microprocessors were introduced in the
mid-to-late 2010s and featured even more advanced multi-core architectures, improved security features, and
increased clock speeds. The most notable examples of this generation were the Intel Core i9 and the
AMD Ryzen 9.
10. Tenth-Generation Microprocessors: The current generation of advanced microprocessors, introduced in
the late 2010s and early 2020s, features even more advanced architectures, with more cores, higher
clock speeds, and improved AI capabilities. The most notable examples of this generation are the Intel
Core i9-10900K and the AMD Ryzen 9 5900X.
Overall, the evolution of advanced microprocessors has been marked by a continuous drive towards higher
performance, better power management, and increased functionality. The advancements made in
microprocessor technology have played a significant role in shaping modern computing and have enabled a
wide range of new applications and industries, from gaming and artificial intelligence to cloud computing and
autonomous vehicles.

Explain page level protections


Page-level protection refers to the practice of restricting access to specific pages on a website or web
application. This is often done to protect sensitive information or valuable intellectual property from
unauthorized access, modification, or theft.
Page-level protection can be achieved in a number of ways, depending on the platform being used. Here are a
few common methods:
1. Password protection: This involves setting up a username and password system that users must enter
before they can access certain pages. This is a simple and effective method of page-level protection,
but it can be vulnerable to brute-force attacks or password guessing if not implemented correctly.
2. Role-based access control (RBAC): This involves assigning different levels of access to different users
based on their roles or permissions. For example, a user with administrative privileges might be able to
access all pages, while a regular user might only be able to access certain pages. This method is more
complex to set up but offers a higher level of security.
3. IP blocking: This involves blocking access to certain pages based on the IP address of the user. This
can be effective if you only want to restrict access to certain regions or countries, but it can also be
circumvented by using a proxy or VPN.
4. Access Control: Access control is the process of restricting access to certain pages or resources based
on user permissions. This helps prevent unauthorized access to sensitive information or functionality.
5. SSL encryption: This involves encrypting the data transmitted between the user's browser and the
server using SSL or TLS encryption. This does not restrict access to specific pages, but it does ensure
that any sensitive information transmitted over the internet is protected from eavesdropping or
interception.
Overall, page-level protection is an important aspect of website security, and different methods can be used
depending on the specific needs and requirements of the website or application.

What are privilege levels? How is the privilege level changed?


Privilege levels refer to different levels of access and permissions that are granted to users or processes within
a computer system. In most computer systems, there are at least two privilege levels: user mode and kernel
mode.
User mode is the default privilege level for most processes, including applications and user-level programs.
Processes running in user mode have limited access to system resources and can only perform certain
operations. For example, a program running in user mode may be able to read and write files, but it cannot
access system memory or perform low-level hardware operations.
Kernel mode, on the other hand, is a higher privilege level that is reserved for the operating system and
system-level processes. Processes running in kernel mode have unrestricted access to system resources and
can perform any operation. This allows the operating system to perform critical tasks such as managing
memory, scheduling processes, and handling interrupts.
The privilege level of a process can be changed using a system call, which is a special function that allows a
process to request services from the operating system. When a process makes a system call, it enters kernel
mode and gains access to system resources that are not available in user mode. Once the system call is
complete, the process returns to user mode and continues executing its normal operations.
However, changing the privilege level of a process is a delicate operation that can be exploited by attackers to
gain unauthorized access to a system. Therefore, most modern operating systems implement security features
such as memory protection, access control, and privilege separation to prevent unauthorized privilege level
changes and ensure the security of the system.

Explain terms: task privilege, descriptor privilege, selector privilege.


Task privilege, descriptor privilege, and selector privilege are all related to access control and security in a
computer system. Here's a brief explanation of each:
1. Task privilege: Refers to the level of privilege or access rights that a task or process has in an
operating system. It determines what resources and actions the task can access or perform. Most
modern operating systems use a multi-level security model where tasks are assigned a level of
privilege or access rights based on their security clearance, role, or user identity. The level of task
privilege determines what the task can do in the system.
2. Descriptor privilege: Refers to the access rights granted to a process or user for a specific resource or
object, such as a file, network connection, or system memory. In computing, a descriptor is a data
structure that describes a resource or object and includes information such as its location, size, and
access permissions. Descriptor privilege determines what actions a process or user can perform on the
resource, such as read, write, or delete.
3. Selector privilege: Refers to the level of privilege or access rights that a segment selector has in a
memory management system. A selector is a data structure used in memory segmentation, which is a
technique used to divide memory into logical segments. Each segment has a selector that determines
the level of privilege or access rights that a process or user has for that segment. Selector privilege
determines what actions a process or user can perform on the segment, such as read, write, or
execute.
Overall, task privilege, descriptor privilege, and selector privilege are all important concepts in computer
security and access control. They determine what resources and actions a process or user can access or
perform and help to protect the system from unauthorized access or modification.
UNIT – 3
Give brief introduction of interfacing chip 8255
The 8255 is a programmable peripheral interface (PPI) chip used to interface various devices and
peripherals to a microprocessor-based system. It provides three 8-bit ports (A, B, and C) that can be
configured as either input or output ports, depending on the application requirements.
The 8255 chip also provides additional features such as interrupt control, handshake signals, and mode
control. These features enable the chip to operate in various modes, including basic input/output mode,
strobed input/output mode, and bidirectional mode.
The interfacing of the 8255 chip involves connecting the various pins of the chip to the microprocessor and the
peripherals. The address lines of the microprocessor are used to select the 8255 chip, and the data lines are
used to transfer data between the microprocessor and the chip.
The various input and output devices are connected to the ports of the 8255 chip. The mode of operation of
each port is configured through the control registers of the 8255 chip.
The interrupt control feature of the 8255 chip allows it to generate interrupts to the microprocessor based on
the status of the input ports. This enables the microprocessor to respond to the input signals from the devices
connected to the 8255 chip.
Overall, the 8255 chip is a versatile I/O interface device that provides multiple modes of operation and
features. It can be used to interface various devices and peripherals to a microprocessor-based system and is
commonly used in industrial automation, instrumentation, and control systems.
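As a hedged sketch (the I/O addresses 80h-83h are assumptions that depend entirely on how the chip-select logic decodes the 8255 in a particular system), the fragment below programs all three ports as simple mode 0 outputs and writes a byte to port A:

        MOV AL, 80h        ; control word: mode 0, ports A, B and C all outputs
        OUT 83h, AL        ; write it to the 8255 control register (assumed address 83h)
        MOV AL, 55h        ; data pattern to output
        OUT 80h, AL        ; send it out on port A (assumed address 80h)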

Explain the interfacing of a keyboard with the Intel 8086 microprocessor


Interfacing a keyboard with an Intel 8086 microprocessor involves the following steps:
1. Connect the data lines (D0-D7) of the keyboard to the data bus of the microprocessor, and connect the
clock and control lines of the keyboard to the corresponding pins of the microprocessor. The keyboard
usually has a PS/2 or USB interface, so an appropriate converter may be needed to connect it to the
microprocessor.
2. Write a program in assembly language to initialize the microprocessor and read the scan codes from
the keyboard. The scan codes represent the keys that are pressed and released on the keyboard.
3. Configure the microprocessor's interrupt controller to enable interrupts from the keyboard. This will
allow the microprocessor to respond to the keyboard input immediately, rather than constantly polling
the keyboard for new input.
4. When a key is pressed on the keyboard, the keyboard sends a scan code to the microprocessor, which
then processes the code to determine which key was pressed. This may involve converting the scan
code to an ASCII code or another format, depending on the application requirements.
5. The microprocessor can then use the key input for various purposes, such as displaying characters on
a screen, controlling a program, or sending data to other devices.
Overall, interfacing a keyboard with an Intel 8086 microprocessor involves connecting the appropriate pins of
the two devices, writing a program to read the keyboard input, configuring interrupts to enable immediate
response to the input, and processing the input to use it for various purposes.
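A hedged illustration of the simpler polling approach mentioned above is sketched below. It assumes a PC-style keyboard interface whose status register (bit 0 set when a scan code is waiting) appears at I/O port 64h and whose scan codes are read from port 60h; other systems decode the keyboard hardware at different addresses:

WAIT_KEY:
        IN   AL, 64h       ; read keyboard status (assumed port)
        TEST AL, 01h       ; bit 0 = a scan code is available
        JZ   WAIT_KEY      ; nothing yet, keep polling
        IN   AL, 60h       ; read the scan code (assumed data port)
        MOV  BL, AL        ; keep it for translation to ASCII or other processing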

WHAT IS A DMA CONTROLLER?


DMA (Direct Memory Access) controller is a hardware device that allows data to be transferred directly
between memory and a peripheral device without intervention from the CPU (Central Processing Unit) of a
computer. This is done by temporarily suspending the CPU's access to the memory and allowing the DMA
controller to take control of the data transfer.
The DMA controller improves the performance of the system by offloading the data transfer task from the CPU
and allowing it to perform other tasks. It is particularly useful for devices that need to transfer large amounts of
data quickly, such as disk drives, graphics cards, and network interfaces.
The DMA controller operates in three modes:
1. Cycle Stealing Mode: In this mode, the DMA controller interrupts the CPU during the execution of a
program and requests to use the system bus to transfer data. The CPU relinquishes control of the bus
to the DMA controller for a short period of time.
2. Burst Mode: In this mode, the DMA controller transfers a block of data in a single burst without
interruption. This mode is useful for transferring large amounts of data quickly.
3. Transparent Mode: In this mode, the DMA controller operates transparently to the CPU, meaning that
the CPU is not interrupted during data transfer.
The DMA controller is typically configured using software by specifying the source and destination memory
addresses, the number of bytes to be transferred, and the mode of transfer. The controller then performs the
data transfer, and once completed, it signals the CPU to resume its normal operation.
Overall, the DMA controller is an important component of a computer system that enables high-speed data
transfer between memory and peripheral devices, thereby improving system performance.
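As a rough sketch only (the port numbers below are placeholders, since the actual register addresses of an 8237-style DMA controller depend on how it is decoded in the system, and the mode byte is illustrative), programming one channel amounts to writing the start address, the byte count minus one, and a mode byte, then unmasking the channel:

DMA_ADDR  EQU 00h          ; channel 0 address register (placeholder port)
DMA_COUNT EQU 01h          ; channel 0 count register (placeholder port)
DMA_MODE  EQU 0Bh          ; mode register (placeholder port)
DMA_MASK  EQU 0Ah          ; single-channel mask register (placeholder port)

        MOV AL, 00h
        OUT DMA_ADDR, AL   ; low byte of the start address (2000h assumed)
        MOV AL, 20h
        OUT DMA_ADDR, AL   ; high byte of the start address
        MOV AL, 0FFh
        OUT DMA_COUNT, AL  ; low byte of (count - 1): 256 bytes
        MOV AL, 00h
        OUT DMA_COUNT, AL  ; high byte of (count - 1)
        MOV AL, 44h        ; illustrative mode byte: single-transfer mode, channel 0
        OUT DMA_MODE, AL
        MOV AL, 00h
        OUT DMA_MASK, AL   ; clear the mask bit to enable channel 0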
Describe the interfacing of the 8254 programmable interval timer
The 8254 programmable interval timer (PIT) is a versatile timer/counter device used in many microprocessor-
based systems. It can be programmed to generate accurate time delays or to control the frequency of an
output waveform.
Here are the steps involved in interfacing the 8254 PIT with a microprocessor:
1. Connect the address bus of the microprocessor to the address inputs of the 8254 PIT. The address
lines A0 and A1 are used to select the control word register or one of the three counters of the 8254.
2. Connect the data bus of the microprocessor to the data inputs of the 8254 PIT. The data bus lines D0-
D7 are used to write data to the control register or to read the timer count values.
3. Connect the control signals of the microprocessor to the control inputs of the 8254 PIT. These signals
include chip select (CS), read (RD), and write (WR); in addition, each counter has its own CLK and GATE inputs
and an OUT output.
4. Write a program in assembly language that initializes the microprocessor and programs the 8254 PIT
for the required application. This involves setting up the control register of the 8254 to select the
desired timer mode, counting direction, and output waveform.
5. Use the timer count values to generate accurate time delays or to control the frequency of an output
waveform. The timer count values can be read from the timer channel registers of the 8254 using the
microprocessor's data bus.
Overall, interfacing the 8254 PIT with a microprocessor involves connecting the appropriate pins of the two
devices, writing appropriate programs to program the 8254 for the required application, and using the timer
count values for generating accurate time delays or controlling the frequency of an output waveform.
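For instance, assuming PC-style decoding where counter 0 is at port 40h and the control word register at port 43h, and an input clock of roughly 1.19 MHz (these are assumptions about the surrounding system, not properties of the 8254 itself), counter 0 can be programmed in mode 3 to produce an approximately 1 kHz square wave:

        MOV AL, 36h        ; control word: counter 0, load LSB then MSB, mode 3 (square wave), binary
        OUT 43h, AL        ; write to the control word register (assumed port)
        MOV AL, 0A9h
        OUT 40h, AL        ; low byte of the count 04A9h (1193 decimal)
        MOV AL, 04h
        OUT 40h, AL        ; high byte of the count; 1.19 MHz / 1193 is roughly 1 kHz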

UNIT – 4
ILLUSTRATE RS232C STANDARDS.
RS232C, also known as EIA/TIA-232, is a standard for serial communication that defines the electrical and
mechanical characteristics of the communication interface. The standard defines the signal levels, connector
pinout, and other important aspects of the interface to ensure interoperability between devices.
Here are the key specifications of the RS232C standard:
1. Voltage Levels: The RS232C standard specifies that logic 1 is represented by a voltage between -3 and
-25 volts, while logic 0 is represented by a voltage between +3 and +25 volts. These voltage levels are
used to transmit data between devices.
2. Data Format: RS232C uses asynchronous communication, where each character is transmitted with a
start bit, followed by 5 to 8 data bits, an optional parity bit, and one or more stop bits. The standard also
defines the baud rate, which is the rate at which data is transmitted, ranging from 110 to 115,200 bits
per second.
3. Connector Pinout: The RS232C standard defines a 25-pin D-sub connector for serial communication.
The pins are assigned specific functions, such as transmitting data, receiving data, and controlling the
flow of data between devices.
4. Handshaking: The standard also defines several handshaking signals, which are used to control the
flow of data between devices. These signals include RTS (Request to Send), CTS (Clear to Send),
DTR (Data Terminal Ready), and DSR (Data Set Ready).
5. Cable Length: The RS232C standard allows for a cable length of up to 50 feet, although longer cable
lengths can be achieved with the use of signal boosters or other devices.
Overall, the RS232C standard has been widely adopted in the industry and is still used today for
communication between various devices, including computers, printers, modems, and other peripheral
devices.

Discuss the interfacing of ROM with the Intel 8086 microprocessor


Interfacing a ROM (Read-Only Memory) with the Intel 8086 microprocessor involves connecting the ROM to
the system bus of the 8086 and writing appropriate programs to access the data stored in the ROM.
Here are the steps involved in interfacing a ROM with an Intel 8086 microprocessor:
1. Connect the address bus of the 8086 microprocessor to the address inputs of the ROM. The address
bus consists of 20 lines (A0-A19) that allow the microprocessor to access up to 1 MB of memory.
2. Connect the data bus of the 8086 microprocessor to the data outputs of the ROM. The data bus
consists of 16 lines (D0-D15) that allow the microprocessor to read data from memory.
3. Connect the control signals of the 8086 microprocessor to the control inputs of the ROM. These
signals include the read (RD) and chip select (CS) signals, which are used to control the flow
of data between the microprocessor and the ROM.
4. Write a program in assembly language that initializes the microprocessor and reads data from the
ROM. This involves setting up the control signals to select the ROM and read data from the specified
address location.
5. Use the data read from the ROM for the required application. This data can be stored in registers,
memory locations, or other data structures for further processing or output.
Overall, interfacing a ROM with an Intel 8086 microprocessor involves connecting the appropriate pins of the
two devices, writing appropriate programs to access data from the ROM, and using the data for the required
application.
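A minimal sketch, assuming the ROM's chip-select logic maps it at the top of the address space starting at physical address F0000h (a common choice because the 8086 begins execution at FFFF0h, but still an assumption here):

        MOV AX, 0F000h     ; assumed ROM segment (physical base F0000h)
        MOV DS, AX         ; point DS at the ROM
        MOV BX, 0000h      ; offset of the byte to read
        MOV AL, [BX]       ; read one byte from the ROM at physical address F0000h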

What do you mean by memory interfacing in short?


Memory interfacing refers to the process of connecting an external memory device to a microcontroller or
microprocessor to increase its available storage capacity. This is done by configuring the microcontroller or
microprocessor to access data from the external memory device.
Memory interfacing is important because the on-chip memory of microcontrollers or microprocessors is often
limited and may not be sufficient for certain applications, such as those that require large data storage or real-
time data processing. By connecting an external memory device, such as RAM or ROM, to the microcontroller
or microprocessor, additional storage capacity is provided, allowing for more complex applications to be
developed.
Memory interfacing can be accomplished through several methods, including parallel interfacing, serial
interfacing, and memory-mapped I/O. The specific method used depends on the specific requirements of the
application and the capabilities of the microcontroller or microprocessor.
Explain the interfacing of 8251 chip with 8086 microprocessor.
The 8251 is a universal asynchronous receiver/transmitter (UART) chip that can be used to implement serial
communication in a microcontroller or microprocessor system. Here's how you can interface the 8251 chip with
an 8086 microprocessor:
1. Connect the 8251 chip to the system data bus of the 8086 microprocessor. The 8251 has an 8-bit data bus
(D0-D7), so it is normally connected to the lower half of the 8086's 16-bit data bus after the multiplexed AD
lines have been demultiplexed by latches.
2. Connect the 8251 chip to the control bus of the 8086 microprocessor. The control bus consists of
various control signals (e.g., RD, WR, ALE) that are used to control the flow of data and instructions
between the microprocessor and other devices in the system.
3. Configure the 8251 chip to operate in the appropriate mode. The 8251 chip can operate in two main
modes: asynchronous mode (for transmitting and receiving data in a start-stop bit format) and synchronous
mode (for transmitting and receiving data as a continuous bit stream), selected by the mode word written
to the chip after reset.
4. Write the appropriate program in assembly language to initialize and communicate with the 8251 chip.
This involves setting the control lines and control registers of the 8251 chip, and writing code to send
and receive data through the UART.
5. Connect the transmit and receive lines of the 8251 chip to the appropriate peripheral devices, such as a
modem or terminal, to enable serial communication.
Overall, the interfacing of the 8251 chip with the 8086 microprocessor involves connecting the appropriate pins
of the two devices, configuring the 8251 chip to operate in the appropriate mode, and writing the appropriate
program to enable serial communication between the two devices.
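A minimal initialization sketch is shown below. It assumes the decode logic places the 8251 data register at port 0F0h and its control/status register at port 0F1h; the mode word 4Eh selects asynchronous operation with a 16x clock, 8 data bits, no parity and 1 stop bit, and the command word 37h enables the transmitter and receiver. The port addresses are assumptions, and a real driver would normally perform the 8251 reset sequence first:

        MOV  AL, 4Eh       ; mode word: asynchronous, 16x clock, 8 data bits, no parity, 1 stop bit
        OUT  0F1h, AL      ; write to the 8251 control register (assumed address)
        MOV  AL, 37h       ; command word: enable transmitter and receiver, assert DTR and RTS
        OUT  0F1h, AL
TX_WAIT:
        IN   AL, 0F1h      ; read the status register
        TEST AL, 01h       ; bit 0 = TxRDY
        JZ   TX_WAIT       ; wait until the transmitter is ready
        MOV  AL, 'A'
        OUT  0F0h, AL      ; transmit one character through the data register (assumed address)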

UNIT – 5
Brief Introduction of 8051 Microcontroller
The 8051 microcontroller is an 8-bit microcontroller that was first introduced in 1980 by Intel. It is one of the
most popular and widely used microcontrollers in the world due to its simple architecture, low power
consumption, and wide availability of software and hardware development tools.
The 8051 microcontroller has a Harvard architecture, which means it has separate memory spaces for data
and program code. It has a 16-bit program counter (PC) and a 16-bit data pointer (DPTR), and it can address up
to 64 KB of external program memory and 64 KB of external data memory.
The 8051 microcontroller has four I/O ports, each with eight pins, which can be configured as inputs or outputs.
It also has several built-in peripherals, including timers, serial communication ports, and interrupt controllers.
The 8051 microcontroller has been used in a wide variety of applications, including industrial control systems,
automotive systems, medical devices, and consumer electronics. Its popularity has led to the development of
numerous derivatives and clones, many of which are still in use today.

Explain Register set of 8051 Microcontroller


The 8051 microcontroller has a register set consisting of various types of registers that are used to store data,
control the operation of the microcontroller, and interact with the external world. Here's an overview of the
different types of registers in the 8051 microcontroller:
1. Accumulator Register (ACC): The accumulator is an 8-bit register that is used for arithmetic and logical
operations. It is often referred to as the "A register" and is the primary register used for most operations.
2. B Register: The B register is also an 8-bit register that is used in conjunction with the accumulator for
arithmetic and logical operations, in particular by the MUL AB and DIV AB instructions.
3. Program Counter (PC): The program counter is a 16-bit register that holds the address of the next
instruction to be executed.
4. Data Pointer (DPTR): The data pointer is a 16-bit register that is used for indirect addressing. It is often
used to point to tables and arrays.
5. Stack Pointer (SP): The stack pointer is an 8-bit register that points to the top of the stack. It is used for
storing and retrieving data during subroutine and interrupt handling.
6. Program Status Word (PSW): This is an 8-bit register that contains various flags that indicate the status of the
microcontroller. These flags include the carry flag, auxiliary carry flag, overflow flag, parity flag, the register
bank select bits (RS0 and RS1), and a user-definable flag (F0).
7. Timer/Counter Registers: The 8051 microcontroller has two 16-bit timer/counter registers that are used
for timing and counting events.
8. Interrupt Registers: The 8051 microcontroller has several interrupt registers that are used for interrupt
handling.
9. Port Registers: The 8051 microcontroller has four 8-bit port registers that are used for input/output
operations. These registers are referred to as P0, P1, P2, and P3.
10. Special Function Registers (SFRs): The 8051 microcontroller also has several special function registers
that are used for controlling various aspects of the microcontroller's operation.
Overall, the register set of the 8051 microcontroller is an essential component that allows the microcontroller to
perform its various functions and interact with the external world.

What are the interrupts in 8051 Microcontrollers


In 8051 microcontrollers, there are five interrupt sources, which are as follows:
1. External Interrupt 0 (INT0): It is generated by the INT0 pin. Depending on the setting of the IT0 bit in the
TCON register, it is triggered either by a low level or by a falling (high-to-low) edge on the pin. It is used to
handle external events.
2. External Interrupt 1 (INT1): It is generated by the INT1 pin and, like INT0, can be configured (via the IT1
bit) as level-triggered or falling-edge-triggered. It is also used to handle external events.
3. Timer Interrupts: There are two timer interrupts, namely Timer 0 and Timer 1 interrupts. These
interrupts are generated when the timer overflows. They are used to handle timing-related events.
4. Serial Communication Interrupt (RI/TI): The RI (Receive Interrupt) is generated when a byte is received
by the serial port, and the TI (Transmit Interrupt) is generated when a byte is transmitted. These
interrupts are used to handle serial communication events.
In addition to these five interrupt sources, 8051 microcontrollers also support interrupt
priority. This feature allows the programmer to assign a priority level to each interrupt (through the IP register),
so that higher priority interrupts can be handled first.
It's worth noting that each interrupt source has its own interrupt vector address and can be individually enabled
or disabled via control registers.

What are the interrupts in microcontrollers?


Interrupts in microcontrollers are signals that temporarily pause the normal execution of a program to handle a
specific event. When an interrupt occurs, the microcontroller stops executing the current instruction and starts
executing a special routine, known as an interrupt service routine (ISR), to handle the event.
Interrupts are used to handle various events in a microcontroller, such as a button press, a timer expiration, or
the completion of a serial communication. By using interrupts, the microcontroller can quickly respond to these
events without the need for constant polling, which can waste processing time and energy.
There are several types of interrupts in microcontrollers, including:
1. External Interrupts: These are generated by external events, such as a button press or a change in
the state of an input pin.
2. Timer Interrupts: These are generated by a timer when it reaches a specific count or when it
overflows.
3. Serial Interrupts: These are generated when data is received or transmitted through a serial port.
4. ADC Interrupts: These are generated when an analog-to-digital converter (ADC) completes a
conversion.
5. Watchdog Timer Interrupts: These are generated when a watchdog timer detects a system fault or a
software error.
Interrupts are an essential feature of microcontrollers, and they help improve the efficiency and responsiveness
of embedded systems.

Instruction set of 8051


The instruction set of 8051 microcontrollers includes a variety of instructions, which can be classified into
several categories based on their functionality. Here are some of the main instruction categories of the 8051
microcontroller:
1. Arithmetic Instructions: These instructions perform arithmetic operations on data stored in registers or
memory locations. Examples of arithmetic instructions include ADD, ADDC, SUBB, INC, DEC, MUL AB, and DIV AB.
2. Logical Instructions: These instructions perform logical operations on data stored in registers or
memory locations. Examples of logical instructions include ANL, ORL, XRL, and CPL.
3. Data Transfer Instructions: These instructions move data between registers or between registers and
memory locations. Examples of data transfer instructions include MOV, MOVX, MOVC, and XCH.
4. Branching Instructions: These instructions modify the program flow by jumping to different parts of the
program. Examples of branching instructions include JMP, JC, JZ, and DJNZ.
5. Stack Instructions: These instructions manage the stack, which is used for storing temporary data and
return addresses during subroutine calls. Examples of stack instructions include PUSH, POP, and RET.
6. Bit Manipulation Instructions: These instructions are used for setting, clearing, and testing individual bits
in registers or memory locations. Examples of bit manipulation instructions include SETB, CLR, and JB.
7. Miscellaneous Instructions: These instructions perform various functions, such as swapping nibbles or doing
nothing for one machine cycle. Examples of miscellaneous instructions include SWAP and NOP. (Interrupts are
enabled or disabled by writing the IE register, for example with SETB EA or CLR EA, rather than with dedicated
EI/DI instructions.)
These are just some of the instruction categories of the 8051 microcontrollers. The full instruction set includes
many more instructions with different functionalities.
