UNIT V PartB C QB WithAnswers
PART B
Registers
Registers are small, high-speed memory units located in the CPU. They are used to store the
most frequently used data and instructions. Registers have the fastest access time and the
smallest storage capacity, typically ranging from 16 to 64 bits.
Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used
data and instructions that have been recently accessed from the main memory. Cache memory
is designed to minimize the time it takes to access data by providing the CPU with quick
access to frequently used data.
Main Memory
Main memory is the primary memory of a computer system. It has a larger storage capacity
than cache memory, but it is slower. Main memory is used to store data and instructions that
are currently in use by the CPU.
PREPARED BY DR V.UMA RANI, ASSO.PROF/CSE
RAM
RAM is also known as read/write memory: the information stored in RAM can be both read and written.
RAM is volatile.
TYPES OF RAM
Static RAM: Static RAM stores binary information in transistors, and the information
remains valid as long as power is supplied. It has a faster access time and is
used to implement cache memory.
Dynamic RAM: It stores binary information as a charge on a capacitor. It
requires refreshing circuitry to restore the charge on the capacitors every few
milliseconds. It contains more memory cells per unit area than SRAM.
ROM
Memory is called read-only memory, or ROM, when information can be written into it
only once, at the time of manufacture.
The information stored in ROM can then only be read.
It is used to store programs that are permanently resident in the computer.
ROM is non-volatile.
Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-
volatile memory unit that has a larger storage capacity than main memory. It is used to store
data and instructions that are not currently in use by the CPU. Secondary storage has the
slowest access time and is typically the least expensive type of memory in the memory
hierarchy.
5. Magnetic Disk
Magnetic disks are circular platters fabricated from metal or plastic and coated with a
magnetizable material. Magnetic disks rotate at high speed inside the computer and are
frequently used.
6. Magnetic Tape
Magnetic tape is a magnetic recording medium coated onto a plastic film. It is
generally used for data backup. The access time of magnetic tape is slower, since the
tape must be wound to the required position before the data on the strip can be
accessed.
Characteristics of Memory Hierarchy
Capacity: It is the global volume of information the memory can store. As we
move from top to bottom in the Hierarchy, the capacity increases.
Access Time: It is the time interval between the read/write request and the
availability of the data. As we move from top to bottom in the Hierarchy, the access
time increases.
Performance: Before systems were designed with a memory hierarchy, the speed gap
between CPU registers and main memory kept widening because of the large difference
in access times; the hierarchy narrows this gap.
Advantages of Memory Hierarchy
It helps organize and manage memory more effectively.
It helps in spreading the data across the computer system.
It saves the consumer's cost and time.
--------------------------------------------------------------------------------------------
Advantages of SRAM:
Very low power consumption
Can be accessed very quickly
(By contrast, DRAM is less expensive and offers higher density.)
DRAM (DYNAMIC RANDOM ACCESS MEMORY)
DRAMs do not retain their state for long periods unless they are accessed frequently for
read or write operations.
The information is stored in a dynamic memory cell in the form of a charge on a
capacitor.
The contents must be periodically refreshed.
The contents may be refreshed while accessing them for reading.
To store information in this cell, transistor T is turned on and an appropriate voltage is
applied to the bit line.
This causes a known amount of charge to be stored in the capacitor
TYPES OF DRAM
1. SDRAM
2. DDR SDRAM
SDRAM (SYNCHRONOUS DRAM)
• DRAMs whose operation is synchronized with a clock signal are known as synchronous
DRAM (SDRAM).
• SDRAMs have built-in refresh circuitry
• SDRAMs operate with clock speeds that can exceed 1 GHz.
• SDRAMs have high data rate
DOUBLE-DATA-RATE SDRAM (DDR SDRAM)
• Data are transferred externally on both the rising and falling edges of the clock.
• They offer increased storage capacity, lower power, and faster clock speeds.
• The earliest version is known as DDR; later versions are called DDR2, DDR3, and DDR4.
• DDR2 and DDR3 can operate at clock frequencies of 400 and 800 MHz, respectively.
• They transfer data using the effective clock speeds of 800 and 1600 MHz, respectively
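As a quick illustrative sketch (not part of the standard itself), the effective transfer rate of a DDR device is twice the bus clock, because data move on both the rising and falling clock edges:

```python
# Effective data rate of DDR SDRAM: data are transferred on both the
# rising and falling clock edges, so the effective rate is 2x the clock.
def ddr_effective_rate_mhz(clock_mhz: float) -> float:
    return 2 * clock_mhz

# DDR2 at 400 MHz and DDR3 at 800 MHz (figures from the text above)
print(ddr_effective_rate_mhz(400))  # 800
print(ddr_effective_rate_mhz(800))  # 1600
```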
--------------------------------------------------------------------------------------------------------
3. Explain Cache memory in detail.
Cache Memory :
A faster and smaller segment of memory whose access time is close to that of the registers is
known as cache memory. In the memory hierarchy, cache memory has a lower access time
than primary memory. Cache memory is much smaller and hence is used as a
buffer.
Because cache memory is faster, it can be accessed very quickly.
Because cache memory is smaller, large amounts of data cannot be stored in it.
Need for cache memory
Data in primary memory can be accessed faster than data in secondary memory, but primary-memory
access times are still generally a few microseconds, whereas the CPU is capable of performing
operations in nanoseconds. This time lag between requesting data and acting on it degrades
system performance, since the CPU is not utilized properly and may remain idle for
some time. To minimize this gap, a new segment of memory called cache memory was
introduced.
Types of Cache Memory (Based on location)
L1 or Level 1 Cache: It is the first level of cache memory and is present inside the processor.
A separate, small L1 cache is present inside every core of the processor. The size of this
memory ranges from 2 KB to 64 KB.
L2 or Level 2 Cache: It is the second level of cache memory, which may be present inside or
outside the CPU. If not present inside the core, it can be shared between two cores, depending
upon the architecture, and is connected to the processor by a high-speed bus. Its size
ranges from 256 KB to 512 KB.
L3 or Level 3 Cache: It is the third level of cache memory, present outside the CPU and
shared by all the cores of the CPU. Some high-end processors have this cache. It is
used to back up the performance of the L1 and L2 caches. Its size ranges from
1 MB to 8 MB.
--------------------------------------------------------------------------------------------------
4. Discuss the methods used to measure and improve the performance of the cache.
CACHE PERFORMANCE
If the processor needs some data, it first searches the cache memory.
• If the data is available in the cache, this is termed a cache hit, and the data is accessed as
required.
• If the data is not in the cache, this is termed a cache miss. The data is then obtained
from main memory; a copy is stored in the cache and forwarded to the processor.
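The hit/miss flow above can be sketched as a simple lookup (a purely illustrative model, not real hardware; the dictionary-based `cache` and `main_memory` names are assumptions):

```python
# Minimal cache hit/miss sketch: the cache is a dict from address to data.
main_memory = {addr: addr * 10 for addr in range(16)}  # toy main memory
cache = {}

def read(addr):
    if addr in cache:           # cache hit: data returned immediately
        return cache[addr], "hit"
    data = main_memory[addr]    # cache miss: fetch from main memory,
    cache[addr] = data          # keep a copy in the cache,
    return data, "miss"         # and forward the data to the processor

print(read(3))  # first access  -> (30, 'miss')
print(read(3))  # repeat access -> (30, 'hit')
```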
There are two different techniques for improving cache performance.
Techniques for Reducing the miss rate by more flexible block replacement
1) Direct Mapping
2) Set-associative cache
3) Fully associative cache
Techniques for Reducing the miss penalty by an additional level
1) Multilevel caching
Cache mapping
Cache mapping refers to the technique by which the contents of main
memory are brought into the cache.
The correspondence between main memory blocks and the cache is specified by a
“Mapping Function”.
Mapping functions determine how memory blocks are placed in the cache.
Direct Mapping
In direct mapping, each block of main memory can map to only one particular line of
the cache.
The cache line to which any given block maps is given by the
following:
Cache line number = (Address of the Main Memory Block) Modulo (Total number
of lines in Cache)
Since each main memory block can map to only one particular cache line, an incoming
(new) block always replaces the block, if any, that already occupies that line.
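The direct-mapping formula can be tried out with a small sketch (the 8-line cache size is an assumed example value):

```python
# Direct mapping: cache line = block address mod number of cache lines.
NUM_LINES = 8  # assumed cache size, for illustration only

def cache_line(block_address: int) -> int:
    return block_address % NUM_LINES

# Blocks 5, 13 and 21 all map to line 5 in an 8-line cache,
# so each incoming one replaces the previous occupant of that line.
print(cache_line(5), cache_line(13), cache_line(21))  # 5 5 5
```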
K-way Set Associative Mapping
The cache lines are grouped into sets, where each set consists of k
lines.
Any given main memory block can map only to one particular cache set.
However, within that set, the block can map to any cache line that is freely
available.
The cache set to which a certain main memory block maps is given as
follows:
Cache set number = (Block Address of the Main Memory) Modulo (Total Number of sets
present in the Cache)
Here, within this set, block ‘j’ can map to any cache line that is
free at that moment.
If all the cache lines in the set happen to be occupied, then one of the blocks that
already exist must be replaced.
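A sketch of the set-number calculation and the free-line search within a set (the cache geometry values are assumptions for illustration):

```python
# 2-way set-associative mapping: 8 lines grouped into 4 sets of k = 2.
NUM_SETS, K = 4, 2
cache = [[None] * K for _ in range(NUM_SETS)]  # each set holds k blocks

def place_block(block_address: int) -> int:
    s = block_address % NUM_SETS          # set number = address mod sets
    for way in range(K):                  # use any free line in the set
        if cache[s][way] is None:
            cache[s][way] = block_address
            return s
    cache[s][0] = block_address           # set full: replace an occupant
    return s

print(place_block(6))   # 6 mod 4 = 2  -> set 2, first way
print(place_block(10))  # 10 mod 4 = 2 -> set 2, second way
```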
Fully Associative Mapping
In the case of fully associative mapping,
a main memory block is capable of mapping to any line of the cache that is
free at that particular moment.
This makes fully associative mapping comparatively more flexible than direct
mapping.
An interrupt is a signal from a device attached to a computer or from a program within the
computer that requires the operating system to stop and figure out what to do next.
An interrupt is a method of creating a temporary halt during program execution that allows
peripheral devices to access the microprocessor.
Whenever an interrupt occurs, it causes the CPU to stop executing the current program.
Control then passes to the interrupt handler, or interrupt service routine.
The processor responds to that interrupt with an ISR (Interrupt Service Routine), which is a
short program to instruct the microprocessor on how to handle the interrupt.
These are the steps by which the ISR (Interrupt Service Routine) handles an interrupt. They are as
follows −
Step 1 − When an interrupt occurs, assume the processor is executing the i-th instruction; the
program counter points to the next instruction, the (i+1)-th.
Step 2 − The program counter value is stored on the process stack, and the
program counter is loaded with the address of the interrupt service routine.
Step 3 − Once the interrupt service routine is completed, the address on the process stack is
popped and placed back in the program counter.
Step 4 − Execution then resumes at the (i+1)-th instruction.
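The four steps can be mimicked with a toy program counter and stack (a purely illustrative model; the variable names and addresses are assumptions, not a real processor):

```python
# Toy model of interrupt handling: save PC, run ISR, restore PC.
stack = []
pc = 101            # processor was executing instruction i = 100; PC -> i+1
ISR_ADDRESS = 500   # assumed address of the interrupt service routine

# Step 2: push the return address and jump to the ISR
stack.append(pc)
pc = ISR_ADDRESS

# ... the interrupt service routine runs here ...

# Step 3: pop the saved address back into the program counter
pc = stack.pop()

# Step 4: execution resumes at instruction i+1
print(pc)  # 101
```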
Types of interrupts
Hardware interrupt
Interrupt signals generated by external devices and I/O devices interrupt the CPU
when those devices are ready.
For example, pressing a key on the keyboard generates a signal that tells the processor
to take an action; such interrupts are called hardware
interrupts.
Hardware interrupts are classified into two types which are as follows −
Maskable Interrupt − A hardware interrupt that can be delayed when a
higher-priority interrupt has occurred.
Non-Maskable Interrupt − A hardware interrupt that cannot be delayed and must
immediately be serviced by the processor.
Software interrupts
Software interrupts are generated internally, by programs that need to make a
system call.
Software interrupts are divided into two types. They are as follows −
Normal Interrupts − The interrupts that are caused by software instructions
are called normal interrupts.
Exception − An exception is an unplanned interruption while executing a
program. For example, if a division by zero occurs while executing a
program, an exception is raised.
USB
3. USB 3.x
The first USB specification was formulated in the mid-1990s. USB 1.1 was announced in 1995 and released
in 1996. It was very popular and dominated the market until about the year 2000. During the USB
1.1 era, Intel announced a USB host controller and Philips announced USB audio for isochronous
communication with consumer electronics devices.
In April 2000, USB 2.0 was announced, with multiple updates and additions. The
USB Implementers Forum (USB-IF) currently maintains the USB standard.
USB was designed to standardize the connection of peripherals like pointing devices,
keyboards, digital still, and video cameras.
Host Controller:
The host controller initiates all data transfers, and the root hub provides the connection between
devices and the host controller. The root hub receives transactions generated by the host
controller and transmits them to the USB devices. The host controller uses polling to detect when a
new device is connected to the bus or disconnected from it.
Root Hub:
The root hub performs power distribution to the devices, enables and disables the ports, and
reports the status of each port to the host controller. The root hub provides the connection
between the host controller and USB ports.
Hub:
Hubs are used to expand the number of devices connected to the USB system. Hubs can detect
when a device is attached or removed from the port. The below figure shows the architecture of
the hub. The upstream port is connected to the host, and USB devices are connected to the
downstream port.
USB Cable:
A USB port has four pins, and the cable consists of four wires, with the Vbus wire used to power the
devices.
USB Device:
USB devices are divided into classes, such as hub, printer, or mass storage. Each USB device
holds information about its configuration, such as class, type, manufacturer ID, and data rate. The
host controller uses this information to load the device software from the hard disk.
Advantages of USB –
The Universal Serial Bus was designed to simplify and improve the interface between personal
computers and peripheral devices when compared with previously existing standard or ad-hoc
proprietary interfaces.
1. The USB interface is self-configuring. This means that the user need not adjust
settings on the device and interface for speed or data format, or configure interrupts,
input/output addresses, or direct memory access channels.
2. USB connectors are standardized at the host, so any peripheral can use any
available receptacle. USB takes full advantage of the additional processing power
that can be economically put into peripheral devices so that they can manage
themselves. USB devices mostly do not have user-adjustable interface settings.
3. The USB interface is hot-pluggable (plug and play), meaning devices can be
exchanged without rebooting the host computer. Small devices can be powered
directly from the USB interface, removing the need for extra power supply cables.
4. The USB interface defines protocols for improving reliability over previous
interfaces and recovery from common errors.
5. Installing a device that relies on the USB standard requires minimal operator
action.
Disadvantages of USB –
1. USB cables are limited in length.
2. USB has a strict “tree” topology and “master-slave” protocol for addressing
peripheral devices. Peripheral devices cannot interact with one another except via
the host, and two hosts cannot communicate over their USB ports directly.
3. Some very high-speed peripheral devices require sustained speeds not available in
the USB standard.
4. For a product developer, the use of USB requires the implementation of a complex
protocol and implies an intelligent controller in the peripheral device.
5. Use of the USB logos on the product requires annual fees and membership in the
organization.
PART C
1. Describe the basic operation of cache in detail with a diagram and discuss the
various mapping schemes used in cache design with examples.
Cache Memory :
A faster and smaller segment of memory whose access time is close to that of the registers is
known as cache memory. In the memory hierarchy, cache memory has a lower access time
than primary memory. Cache memory is much smaller and hence is used as a
buffer.
Because cache memory is faster, it can be accessed very quickly.
Because cache memory is smaller, large amounts of data cannot be stored in it.
Cache mapping
Cache mapping refers to the technique by which the contents of main
memory are brought into the cache.
The correspondence between main memory blocks and the cache is specified by a
“Mapping Function”.
Mapping functions determine how memory blocks are placed in the cache.
Direct Mapping
In direct mapping, each block of main memory can map to only one particular line of the cache.
Thus, an incoming (new) block always replaces the block, if any, that already occupies
that line.
K-way Set Associative Mapping
The cache lines are grouped into sets, where each set consists of k
lines.
Any given main memory block can map only to one particular cache set.
However, within that set, the block can map to any cache line that is freely
available.
The cache set to which a certain main memory block maps is given as
follows:
Cache set number = (Block Address of the Main Memory) Modulo (Total Number of sets
present in the Cache)
Fully Associative Mapping
A main memory block is capable of mapping to any line of the cache that is
free at that particular moment.
This makes fully associative mapping comparatively more flexible than direct
mapping.
2. Draw the typical block diagram of a DMA controller and explain how it is used for
direct data transfer between memory and peripherals.
A DMA controller is a type of control unit that works as an interface between the data bus and the I/O
devices. As mentioned, the DMA controller transfers data without the
intervention of the processor; the processor only initiates and oversees the transfer. The DMA controller also
contains an address unit, which generates the address and selects an I/O device for the transfer
of data.
Working of DMA Controller
The DMA controller has three registers, as follows.
Address register – It contains the address to specify the desired location in
memory.
Word count register – It contains the number of words to be transferred.
Control register – It specifies the transfer mode.
The CPU initializes the DMA by sending the following information through the data bus:
The starting address of the memory block where the data are available (to read) or
where data are to be stored (to write).
The word count, which is the number of words in the memory block to be
read or written.
Control information defining the mode of transfer, such as read or write.
A control signal to begin the DMA transfer.
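The CPU-side initialization above can be sketched as writes to the three DMA registers (register names and values here are illustrative assumptions, not a specific chip's programming model):

```python
# Sketch of the CPU programming a DMA controller's three registers.
dma = {"address": 0, "word_count": 0, "control": ""}

def dma_init(start_address, word_count, mode):
    dma["address"] = start_address   # where data are read from / stored to
    dma["word_count"] = word_count   # number of words to transfer
    dma["control"] = mode            # transfer mode: "read" or "write"

# e.g. read 64 words starting at memory address 0x2000
dma_init(0x2000, 64, "read")
print(dma)
```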
DMA is a process of communication for data transfer between memory and input/output,
controlled by an external circuit called DMA controller, without involvement of CPU.
The 8085 microprocessor has two pins, HOLD and HLDA, which are used for DMA operation.
First, the DMA controller sends a request by driving the Bus Request (BR) control line high.
When the microprocessor receives the high signal on its HOLD pin, it first completes the execution of the current
machine cycle, which takes a few clocks, and then sends the HLDA signal to the DMA controller.
After receiving HLDA through its Bus Grant (BG) pin, the DMA
controller takes control of the system bus and transfers data directly between memory and
I/O without involvement of the CPU. During the DMA operation, the processor is free to
perform other work that does not need the system bus.
At the end of the data transfer, the DMA controller terminates the request by driving the
HOLD pin low, and the microprocessor regains control of the system bus by making HLDA low.
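The HOLD/HLDA handshake above can be summarized as an ordered sequence of signal events (an illustrative sketch only, not real 8085 timing):

```python
# Illustrative HOLD/HLDA handshake between a DMA controller and the 8085.
def dma_handshake():
    events = []
    events.append("DMA: HOLD high (bus request)")
    events.append("CPU: completes current machine cycle")
    events.append("CPU: HLDA high (bus grant)")
    events.append("DMA: transfers data memory <-> I/O over system bus")
    events.append("DMA: HOLD low (request terminated)")
    events.append("CPU: HLDA low, regains system bus")
    return events

for event in dma_handshake():
    print(event)
```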
Modes of Data Transfer in DMA
There are 3 modes of data transfer in DMA that are described below.
Burst Mode: In burst mode, the buses are handed back to the CPU by the DMA only after the
whole data block has been completely transferred, not before.
Cycle Stealing Mode: In cycle stealing mode, the buses are handed back to the CPU
by the DMA after the transfer of each byte. This mode generates continuous requests for bus
control and works well for higher-priority
tasks.
Transparent Mode: In transparent mode, the DMA transfers data only while the CPU is
executing instructions that do not require the system bus.
Advantages of DMA Controller
Direct Memory Access speeds up memory operations and data transfer.
CPU is not involved while transferring data.
DMA requires very few clock cycles while transferring data.
DMA distributes workload very appropriately.
DMA helps the CPU in decreasing its load.
Disadvantages of DMA Controller
Direct Memory Access is a costly operation because of additional operations.
DMA suffers from Cache-Coherence Problems.
DMA Controller increases the overall cost of the system.
DMA Controller increases the complexity of the software.