UNIT-2 Computer Organization
I/O Processor
The primary function of an I/O Processor is to manage the data transfers between auxiliary
memories and the main memory.
Cache Memory
The data or contents of the main memory that are used frequently by the CPU are stored in the
cache memory so that the processor can access that data in a shorter time. Whenever the CPU
needs to access memory, it first checks for the required data in the cache. If the data is found
there, it is read from this fast memory; otherwise, the CPU goes to the main memory for the
required data.
Memory Hierarchy Design and its Characteristics
Memory Characteristics
• This Memory Hierarchy Design is divided into 2 main types:
• External Memory or Secondary Memory –
Comprising Magnetic Disk, Optical Disk, and Magnetic Tape, i.e.
peripheral storage devices that are accessible to the processor via an
I/O module.
• Internal Memory or Primary Memory –
Comprising Main Memory, Cache Memory and CPU registers. This is
directly accessible by the processor.
Characteristics of Memory Hierarchy
• Cost Per Bit: As we move from bottom to top in the hierarchy, the
cost per bit increases, i.e. Internal Memory is costlier than External
Memory.
Memory Characteristics
• Read and Write operations in Memory
• A memory unit stores binary information in groups of bits called words. Data
input lines provide the information to be stored into the memory, and data
output lines carry the information out of the memory.
• The control lines Read and Write specify the direction of data transfer.
In the memory organization, memory locations are indexed from 0 to 2^l - 1,
where l is the number of address lines. We can describe the memory capacity in
bytes using the formula N = 2^l bytes; for example, l = 16 address lines give
2^16 = 64 K addressable locations.
Memory Characteristics
• Memory Address Register (MAR) is the address register which is
used to store the address of the memory location where the
operation is being performed.
• Memory Data Register (MDR) is the data register which is used to
store the data on which the operation is being performed.
• Memory Read Operation:
A memory read operation transfers the address of the desired word to the
address lines and activates the Read control line. A description of the memory
read operation is given below:
Memory Characteristics
• In the read-operation diagram, MDR initially contains a garbage value and MAR
contains the memory address 2003. After the Read instruction executes, the data
at memory location 2003 is read and MDR is updated with the value stored at
that location (3D).
• A memory write operation transfers the address of the desired word to the
address lines, transfers the data bits to be stored in memory to the data input
lines, and then activates the Write control line. A description of the write
operation is given below:
Memory Write Operation
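To tie the read and write descriptions together, here is a minimal Python sketch of how MAR and MDR mediate both transfers. It is an illustration only, not a model of any specific hardware; the MemoryUnit class, the dictionary-backed storage, and the reuse of address 2003 with value 3D from the example above are assumptions for the sketch.

```python
# Minimal sketch: a word-addressable memory accessed only through
# MAR (Memory Address Register) and MDR (Memory Data Register).

class MemoryUnit:
    def __init__(self):
        self.cells = {}      # address -> stored word
        self.MAR = 0         # holds the address of the location being accessed
        self.MDR = 0         # holds the data being read or written

    def read(self, address):
        self.MAR = address                      # place the address in MAR
        self.MDR = self.cells.get(self.MAR, 0)  # word at that address -> MDR
        return self.MDR

    def write(self, address, data):
        self.MAR = address                      # address of the destination word
        self.MDR = data                         # data bits to be stored
        self.cells[self.MAR] = self.MDR         # Write control line activated

mem = MemoryUnit()
mem.write(2003, 0x3D)        # store 3D at location 2003
print(hex(mem.read(2003)))   # -> 0x3d; MDR now holds the value read
```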
Types of Main Memory
Read Only Memory (ROM)
• Read Only Memory (ROM) is a type of memory where the data has been
prerecorded. Data stored in ROM is retained even after the computer is turned
off, i.e., it is non-volatile.
• Programmable ROM, where the data is written after the memory chip has
been created. It is non-volatile.
• Erasable Programmable ROM, where the data on this non-volatile memory
chip can be erased by exposing it to high-intensity UV light.
• Electrically Erasable Programmable ROM, where the data on this non-
volatile memory chip can be electrically erased using field electron emission.
Random Access Memory (RAM)
• Random Access Memory (RAM) –
• It is also called read-write memory or the main memory or the primary
memory.
• The programs and data that the CPU requires during the execution of a
program are stored in this memory.
• It is a volatile memory as the data is lost when the power is turned off.
Difference between RAM and ROM
RAM (Random Access Memory )
• RAM (Random Access Memory) is a part of the computer’s main memory
which is directly accessible by the CPU.
• RAM is used to read and write data, and it is accessed by the CPU randomly.
RAM is volatile in nature: if the power goes off, the stored information is
lost. RAM is used to store the data that is currently being processed by the
CPU. Most of the programs and data that are modifiable are stored in RAM.
Integrated RAM chips are available in two forms:
• SRAM(Static RAM)
• DRAM(Dynamic RAM)
Difference between SRAM and DRAM:
• SRAM stores each bit in a flip-flop (latch), while DRAM stores each bit as a charge on a capacitor.
• SRAM needs no refreshing; DRAM must be refreshed periodically to retain its contents.
• SRAM is faster but costlier and less dense; DRAM is slower but cheaper and offers higher density.
• SRAM is typically used for cache memory, while DRAM is used for main memory.
[Figure: Internal organization of a 16 x 8 memory chip — address bits A0–A3 drive an address decoder that selects one of the 16 word lines W0–W15; each word line enables a row of memory cells (flip-flops); Sense/Write circuits connect the cells to the data lines (b7, …, b1, b0 and their complements) and are controlled by the R/W and CS (chip select) inputs.]
The effectiveness of the cache is based on a property of computer programs known
as “locality of reference”: a program tends to access a relatively small portion of
its address space at any given time.
The basic operation of the cache
All memory accesses are directed first to the cache. If the word is in the cache,
the cache is accessed and the word is supplied to the CPU.
If the word is not in the cache, a block (or a line) containing that word is brought in
to replace a block currently in the cache.
Average (effective) access time: Te = Tc + (1 - H) Tm
where Tc is the cache access time, Tm is the main-memory access time, and H is the hit ratio (the fraction of accesses found in the cache).
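As a quick illustration of this formula, the following Python sketch computes the average access time. The function name and the sample timings (10 ns cache, 100 ns main memory, 95% hit ratio) are illustrative assumptions, not figures from these notes.

```python
# Sketch: effective (average) access time using Te = Tc + (1 - H) * Tm.

def effective_access_time(Tc, Tm, H):
    """Tc: cache access time, Tm: main-memory access time, H: hit ratio."""
    return Tc + (1 - H) * Tm

print(effective_access_time(Tc=10, Tm=100, H=0.95))  # 10 + 0.05 * 100 = 15.0 ns
```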
Locality of reference takes two forms:
Temporal locality of reference:
A recently executed instruction is likely to be executed again very soon.
Spatial locality of reference:
Instructions with addresses close to a recently executed instruction are likely to be
executed soon.
Cache Coherence
• Cache coherence is the regularity or consistency of data stored in cache
memory. Maintaining cache and memory consistency is imperative for
multiprocessors or distributed shared memory (DSM) systems.
• Cache management is structured to ensure that data is not overwritten or
lost. When multiple processors with separate caches share a common
memory, it is necessary to keep the caches in a state of coherence by
ensuring that any shared operand that is changed in any cache is changed
throughout the entire system.
Mapping (for cache)
• Associative mapping
• Direct mapping
• Set-associative mapping
◾ Mapping functions determine how memory blocks are placed in the cache.
◾A simple processor example:
Cache consisting of 128 blocks of 16 words each.
Total size of cache is 2048 (2K) words.
Main memory is addressable by a 16-bit address.
Main memory has 64K words.
Main memory has 4K (4096) blocks of 16 words each.
◾ Three mapping functions:
Direct mapping
Associative mapping
Set-associative mapping.
Direct Mapping
•Block j of the main memory maps to cache block j modulo 128:
block 0 maps to cache block 0, block 129 maps to cache block 1.
•More than one memory block is mapped onto
the same position in the cache.
•Memory address is divided into three fields:
-Low order 4 bits determine one of the
16 words in a block.
-When a new block is brought into the
cache, the next 7 bits determine which
cache block this new block is placed in.
-High order 5 bits determine which of the
possible 32 blocks is currently present in the
cache. These are tag bits.
•Simple to implement but not very flexible.
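For the example cache above (16-word blocks, 128 cache blocks, 16-bit addresses), a direct-mapped address therefore splits into a 5-bit tag, a 7-bit cache-block number and a 4-bit word offset. The following Python sketch shows that split; the function name and the sample address are illustrative assumptions.

```python
# Sketch of the direct-mapped address split for the example above:
# 16-bit word address = 5-bit tag | 7-bit cache-block number | 4-bit word offset.

def direct_map_split(address):
    word  = address & 0xF           # low-order 4 bits: word within the block
    block = (address >> 4) & 0x7F   # next 7 bits: which of the 128 cache blocks
    tag   = (address >> 11) & 0x1F  # high-order 5 bits: tag
    return tag, block, word

# Memory block j maps to cache block j mod 128, e.g. block 129 -> cache block 1.
addr = 129 * 16                     # first word of memory block 129
print(direct_map_split(addr))       # -> (1, 1, 0): tag 1, cache block 1, word 0
```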
Associative Mapping
•A main memory block can be placed into any cache
position.
•Memory address is divided into two fields:
-Low order 4 bits identify the word within a
block.
-High order 12 bits or tag bits identify a memory
block when it is resident in the cache.
•Flexible, and uses cache space efficiently.
•Replacement algorithms can be used to replace an
existing block in the cache when the cache is full.
•Cost is higher than for a direct-mapped cache because
of the need to search all 128 tag patterns to determine
whether a given block is in the cache.
Set-Associative Mapping
This is a combination of direct and associative mapping.
Blocks of the cache are grouped into sets, and the mapping
function allows a block of the main memory to reside in
any block of a specific set.
Divide the cache into 64 sets, with two blocks per set.
Memory blocks 0, 64, 128, etc. map to set 0, and each can
occupy either of the two positions within that set.
Memory address is divided into three fields:
- 6 bit field determines the set number.
- High order 6 bit fields are compared to the tag fields of the
two blocks in a set.
Number of blocks per set is a design parameter.
- One extreme is to have all the blocks in one set, requiring
no set bits (fully associative mapping).
- The other extreme is to have one block per set, which is the same as
direct mapping.
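For the 2-way set-associative organization above (64 sets of two blocks, 16-word blocks, 16-bit addresses), the address splits into a 6-bit tag, a 6-bit set number and a 4-bit word offset. Here is a small Python sketch of that split; the function name and sample addresses are illustrative assumptions.

```python
# Sketch of the 2-way set-associative split for the example above:
# 16-bit address = 6-bit tag | 6-bit set number | 4-bit word offset.

def set_assoc_split(address):
    word   = address & 0xF           # word within the 16-word block
    set_no = (address >> 4) & 0x3F   # 6 bits select one of the 64 sets
    tag    = (address >> 10) & 0x3F  # 6 tag bits, compared with both blocks in the set
    return tag, set_no, word

# Memory blocks 0, 64, 128, ... all map to set 0 (block number mod 64 = 0).
for block in (0, 64, 128):
    print(set_assoc_split(block * 16))  # set number is 0 in each case
```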
Q.1 Consider a direct mapped cache of size 16 KB with
block size 256 bytes. The size of main memory is 128 KB.
Find- Number of bits in tag
Given-
• Cache memory size = 16 KB
• Block size = Frame size = Line size = 256 bytes
• Main memory size = 128 KB
We consider that the memory is byte addressable.
Number of Bits in Physical Address-
Size of main memory = 128 KB = 2^17 bytes
Thus, Number of bits in physical address = 17 bits
Number of Bits in Block Offset-
Block size = 256 bytes = 2^8 bytes
Thus, Number of bits in block offset = 8 bits
Number of Bits in Line Number-
Total number of lines in cache = Cache size / Line size = 16 KB / 256 bytes = 2^14 bytes / 2^8 bytes = 2^6 lines
Thus, Number of bits in line number = 6 bits
Number of Tag Bits-
Number of tag bits = Number of bits in physical address – (Number of bits in line number + Number of bits in block offset)
= 17 bits – (6 bits + 8 bits) = 17 bits – 14 bits = 3 bits
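The same calculation can be checked with a short Python sketch; the variable names are illustrative, and byte-addressable memory is assumed as in the question.

```python
# Sketch that reproduces Q.1 numerically.
from math import log2

main_memory = 128 * 1024    # 128 KB
cache_size  = 16 * 1024     # 16 KB
block_size  = 256           # bytes

physical_bits = int(log2(main_memory))               # 17
offset_bits   = int(log2(block_size))                # 8
line_bits     = int(log2(cache_size // block_size))  # 6
tag_bits      = physical_bits - line_bits - offset_bits
print(tag_bits)   # -> 3
```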
Q.2 Consider a direct mapped cache of size 512 KB with block
size 1 KB. There are 7 bits in the tag. Find- Size of main
memory
Given-
• Cache memory size = 512 KB
• Block size = Frame size = Line size = 1 KB
• Number of bits in tag = 7 bits
We consider that the memory is byte addressable.
Number of Bits in Block Offset-
We have,
Block size = 1 KB = 2^10 bytes
Thus, Number of bits in block offset = 10 bits
Number of Bits in Line Number-
Total number of lines in cache = Cache size / Line size = 512 KB / 1 KB = 2^9 lines
Thus, Number of bits in line number = 9 bits
Number of Bits in Physical Address-
Number of bits in physical address = Number of tag bits + Number of bits in line number + Number of bits in block offset = 7 + 9 + 10 = 26 bits
Size of Main Memory-
Size of main memory = 2^26 bytes = 64 MB
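A short Python sketch that reproduces this answer (the variable names are illustrative):

```python
# Sketch that reproduces Q.2: main-memory size from the tag width.
tag_bits, line_bits, offset_bits = 7, 9, 10
physical_bits = tag_bits + line_bits + offset_bits   # 26
print(2 ** physical_bits)   # 67108864 bytes = 64 MB of main memory
```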
Q.3 Consider a 2-way set associative cache of size 16 KB with block size 256 bytes. The size of main memory is 128 KB. Find- Number of bits in tag
Given-
• Set size = 2
• Cache memory size = 16 KB
• Block size = Frame size = Line size = 256 bytes
• Main memory size = 128 KB
We consider that the memory is byte addressable.
Number of Bits in Physical Address-
Size of main memory = 128 KB = 2^17 bytes
Thus, Number of bits in physical address = 17 bits
Number of Bits in Block Offset-
Block size = 256 bytes = 2^8 bytes
Thus, Number of bits in block offset = 8 bits
Number of Bits in Set Number-
Total number of lines in cache = Cache size / Line size = 16 KB / 256 bytes = 2^6 lines
Total number of sets in cache = Total number of lines / Set size = 2^6 / 2 = 2^5 sets
Thus, Number of bits in set number = 5 bits
Number of Tag Bits-
Number of tag bits = 17 – (5 + 8) = 4 bits
Q.4 Consider an 8-way set associative cache of size 512 KB with block size 1 KB. There are 7 bits in the tag. Find- Size of main memory
Given-
• Set size = 8
• Cache memory size = 512 KB
• Block size = Frame size = Line size = 1 KB
• Number of bits in tag = 7 bits
We consider that the memory is byte addressable.
Number of Bits in Block Offset-
Block size = 1 KB = 2^10 bytes
Thus, Number of bits in block offset = 10 bits
Number of Bits in Set Number-
Total number of lines in cache = Cache size / Line size = 512 KB / 1 KB = 2^9 lines
Total number of sets in cache = Total number of lines / Set size = 2^9 / 2^3 = 2^6 sets
Thus, Number of bits in set number = 6 bits
Number of Bits in Physical Address-
Number of bits in physical address = Number of tag bits + Number of bits in set number + Number of bits in block offset = 7 + 6 + 10 = 23 bits
Size of Main Memory-
Size of main memory = 2^23 bytes = 8 MB
Types of Virtual Memory
In a computer, virtual memory is managed by the Memory Management Unit
(MMU), which is often built into the CPU. The CPU generates virtual addresses
that the MMU translates into physical addresses.
There are two main types of virtual memory:
•Paging
•Segmentation
Paging
Paging is a non-contiguous memory allocation technique in which the
secondary memory and the main memory are divided into equal-size
partitions.
The partitions of the secondary memory are called pages, while the
partitions of the main memory are called frames. They are divided
into equal-size partitions to achieve maximum utilization of the main
memory and to avoid external fragmentation.
Translation of logical Address into physical Address
The CPU always generates a logical address, but a physical address is needed for
accessing the main memory. This mapping is done by the MMU (Memory
Management Unit) with the help of the page table.
Logical Address: The logical address consists of two parts page number and page
offset.
1. Page Number: It tells the exact page of the process which the CPU wants to
access.
2. Page Offset: It tells the exact word on that page which the CPU wants to read. It requires
no translation: since the page size is the same as the frame size, the position of the word
within the page does not change (see the sketch after the page-fault note below).
** Page Fault is the condition in which a running process refers to a page that is
not loaded in the main memory.
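Here is a minimal Python sketch of this page-number/page-offset translation. It is an illustration, not a real MMU; the 4 KB page size, the page-table contents and the use of a missing or empty entry to stand in for a page fault are assumptions for the sketch.

```python
# Sketch: translating a logical address into a physical address via a page table.

PAGE_SIZE = 4096                       # bytes per page (= frame size), assumed
page_table = {0: 5, 1: 9, 2: None}     # page number -> frame number (None = not loaded)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # selects the page-table entry
    page_offset = logical_address % PAGE_SIZE    # copied unchanged into the frame
    frame = page_table.get(page_number)
    if frame is None:
        raise RuntimeError("page fault: page %d is not in main memory" % page_number)
    return frame * PAGE_SIZE + page_offset       # physical address

print(translate(1 * PAGE_SIZE + 100))   # page 1 -> frame 9, offset 100 -> 36964
```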
Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating
system decides which memory pages to swap out (write to disk) when a page of
memory needs to be allocated.
Page replacement happens whenever a page fault occurs and a free frame cannot be
used for the allocation, either because no free pages are available or because the
number of free pages is lower than the number required.
** If a process requests for page and that page is found in the main
memory then it is called page hit , otherwise page miss or page
fault .
Page Replacement Algorithm
Some Page Replacement
Algorithms :
•First In First Out (FIFO)
•Least Recently Used (LRU)
•Optimal Page Replacement
Page Replacement Algorithms
• 1. First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm,
the operating system keeps track of all pages in memory in a
queue, with the oldest page at the front of the queue. When a page
needs to be replaced, the page at the front of the queue is selected for
removal.
Page Replacement Algorithms
• Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page
frames. Find the number of page faults.
Page Replacement Algorithms
Initially all slots are empty, so when 1, 3, 0 come they are allocated to
the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest
page, i.e. 1 —> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest
page, i.e. 3 —> 1 Page Fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 Page Fault.
Total number of page faults = 6.
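A short Python sketch of FIFO replacement that reproduces the count above (the function name is illustrative):

```python
# Sketch of FIFO page replacement; reproduces the example above
# (reference string 1, 3, 0, 3, 5, 6, 3 with 3 frames -> 6 page faults).
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                    # page hit
        faults += 1                     # page fault
        if len(frames) == num_frames:   # frames full: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))   # -> 6
```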
Page Replacement Algorithms
2. Least Recently Used (LRU) –
In this algorithm, the page that is least recently used will be replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2 with 4 page frames. Find the number of page faults.
Page Replacement Algorithms
• Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to
the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes, it takes the place of 7 because 7 is the least
recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults,
because they are already available in memory.
Total number of page faults = 6.
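A short Python sketch of LRU replacement that reproduces the count above, using an OrderedDict to keep pages in recency order (the function name is illustrative):

```python
# Sketch of LRU page replacement; reproduces the example above
# (reference string 7,0,1,2,0,3,0,4,2,3,0,3,2 with 4 frames -> 6 page faults).
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    frames, faults = OrderedDict(), 0          # keys kept in recency order
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)           # page hit: mark as most recently used
            continue
        faults += 1                            # page fault
        if len(frames) == num_frames:
            frames.popitem(last=False)         # evict the least recently used page
        frames[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # -> 6
```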
Page Replacement Algorithms
3. Optimal Page Replacement –
In this algorithm, the page that will not be used for the
longest duration of time in the future is replaced.
Page Replacement Algorithms
• Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4
page frames. Find the number of page faults.
Page Replacement Algorithms
• Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the
empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes, it takes the place of 7 because 7 is not used for the
longest duration of time in the future —> 1 Page Fault.
0 is already there so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
The remaining page references are already in memory —> 0 Page Faults.
Total number of page faults = 6.
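A short Python sketch of optimal replacement that reproduces the count above (the function name is illustrative; ties for the farthest next use are broken arbitrarily):

```python
# Sketch of optimal page replacement; reproduces the example above
# (reference string 7,0,1,2,0,3,0,4,2,3,0,3,2 with 4 frames -> 6 page faults).

def optimal_page_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                              # page hit
        faults += 1                               # page fault
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Evict the page whose next use is farthest away (or never occurs again).
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # -> 6
```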