Module 2- Memory Organization
Primary Memory
Auxiliary Memory
This Memory Hierarchy Design is divided into 2 main types:
1. External Memory or Secondary Memory –
Comprises Magnetic Disk, Optical Disk and Magnetic Tape, i.e. peripheral storage devices that are
accessible to the processor via an I/O Module.
2. Internal Memory or Primary Memory –
Comprises Main Memory, Cache Memory and CPU registers. This is directly accessible by the
processor.
We can infer the following characteristics of Memory Hierarchy Design from the figure above:
1. Capacity:
It is the total volume of information the memory can store. As we move from top to bottom in the
Hierarchy, the capacity increases.
2. Access Time:
It is the time interval between the read/write request and the availability of the data. As we move from top to bottom in the
Hierarchy, the access time increases.
3. Performance:
Earlier, when computer systems were designed without a Memory Hierarchy, the speed gap between CPU
registers and Main Memory kept widening due to the large difference in access times. This resulted in lower system performance,
so an enhancement was required. That enhancement came in the form of the Memory Hierarchy Design, which
improves system performance. One of the most significant ways to increase system performance is to minimize how far
down the memory hierarchy one has to go to manipulate data.
4. Cost per bit:
As we move from bottom to top in the Hierarchy, the cost per bit increases, i.e. Internal Memory is costlier than External Memory.
Cache Memory
The cache is a smaller and faster memory which stores copies of the data from
frequently used main memory locations. A CPU contains several independent caches,
which store instructions and data.
Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.
Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU.
It holds frequently requested data and instructions so that they are immediately
available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the Main
memory.
Cache Performance:
When the processor needs to read or write a location in main memory, it
first checks for a corresponding entry in the cache.
•If the processor finds that the memory location is in the cache, a cache
hit has occurred and the data is read from the cache.
•If the processor does not find the memory location in the cache,
a cache miss has occurred. On a cache miss, the cache allocates a new
entry and copies in the data from main memory; the request is then fulfilled
from the contents of the cache.
Hit ratio = hits / (hits + misses) = no. of hits / total accesses
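The hit-ratio formula above can be illustrated with a short sketch (my own example, not from the notes). The cache here is modeled as a simple, unbounded set of addresses — a deliberate simplification so that only the hit/miss counting is in focus:

```python
# Count hits and misses over a hypothetical access trace.
# The "cache" is just a set of previously seen addresses (no capacity
# limit, no eviction) — enough to demonstrate the hit-ratio formula.
def hit_ratio(trace):
    cache = set()
    hits = misses = 0
    for addr in trace:
        if addr in cache:
            hits += 1          # cache hit: address was seen before
        else:
            misses += 1        # cache miss: allocate a new entry
            cache.add(addr)
    return hits / (hits + misses)

# 6 accesses, 3 of which repeat an earlier address -> hit ratio 0.5
print(hit_ratio([1, 2, 1, 3, 2, 1]))  # 0.5
```

A real cache has finite capacity, so its hit ratio also depends on the replacement policy; this sketch ignores that to keep the arithmetic visible.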
•Cache mapping defines how a block from the main memory is mapped to the cache memory in case of a
cache miss.
OR
•Cache mapping is a technique by which the contents of main memory are brought into the cache memory.
Cache Mapping
1. Associative mapping
Fastest and most flexible organization.
Both the address and the memory word are stored in the cache.
Any word can be stored in any cache location.
Advantages: flexible and fast — any block can be placed anywhere, which helps the hit ratio.
Disadvantage: expensive, i.e. a large associative (content-addressable) memory is required to compare the address against every entry in parallel.
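A minimal sketch of associative mapping (class and variable names are my own, not from the notes). Each entry keeps the full address alongside the data word, any word can occupy any slot, and a FIFO victim is evicted when the cache is full — real hardware would compare all tags in parallel, which is what makes this organization expensive:

```python
from collections import OrderedDict

class AssociativeCache:
    """Toy fully associative cache: entries map full address -> word."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # any word can sit in any slot

    def read(self, address, memory):
        # Hardware would compare 'address' against every stored tag at once;
        # here a dictionary lookup stands in for that parallel comparison.
        if address in self.entries:
            return self.entries[address], True     # cache hit
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)       # evict oldest entry (FIFO)
        self.entries[address] = memory[address]    # allocate and fill on miss
        return self.entries[address], False        # cache miss

# Usage with the example addresses from the notes:
memory = {0x0001: "Data 1", 0x01FF: "Data 2"}
cache = AssociativeCache(capacity=2)
print(cache.read(0x0001, memory))  # ('Data 1', False) -- miss
print(cache.read(0x0001, memory))  # ('Data 1', True)  -- hit
```

The FIFO eviction is an arbitrary choice for the sketch; associative caches commonly use LRU or random replacement instead.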
2. Direct Mapping
In direct mapping, each memory block is assigned to one specific line in the cache.
If that line is already occupied when a new block needs to be loaded, the old block is
discarded.
Stores the data along with only a part of the address (the higher-order bits, called the tag).
The address is split into two parts: an index field and a tag field. The index selects a
cache line; the tag is stored in that line and compared against the requested address on each access.
Direct mapping's performance is directly proportional to the hit ratio.
Say in main memory:

Address   Data
0001 H    Data 1
01FF H    Data 2
Advantages: 1. Simplest scheme, as only one tag needs to be matched. 2. Less expensive (in terms of memory).
Disadvantages: 1. Needs frequent replacement when blocks collide on the same line. 2. Hit ratio is not as good.
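The index/tag split and the frequent-replacement disadvantage can be sketched as follows (parameters and the colliding address 0x0101 are my own choices, not from the notes). With 16 lines, the low 4 bits of the address form the index and the remaining high bits form the tag:

```python
NUM_LINES = 16  # assumed cache size; low 4 address bits are the index

class DirectMappedCache:
    """Toy direct-mapped cache: each line holds (tag, word) or None."""

    def __init__(self):
        self.lines = [None] * NUM_LINES

    def read(self, address, memory):
        index = address % NUM_LINES    # low bits: which line to use
        tag = address // NUM_LINES     # high bits: stored for comparison
        line = self.lines[index]
        if line is not None and line[0] == tag:
            return line[1], True                    # hit: tags match
        self.lines[index] = (tag, memory[address])  # miss: replace the line
        return memory[address], False

# 0x0001 and 0x0101 share index 1 but have different tags, so they
# keep evicting each other -- the conflict misses the notes warn about.
memory = {0x0001: "Data 1", 0x0101: "Data 3"}
cache = DirectMappedCache()
print(cache.read(0x0001, memory))  # ('Data 1', False) -- miss, fills line 1
print(cache.read(0x0101, memory))  # ('Data 3', False) -- miss, evicts Data 1
print(cache.read(0x0001, memory))  # ('Data 1', False) -- miss again
```

Under a fully associative cache the same three accesses would produce only two misses, which is the flexibility/cost trade-off between the two mapping schemes.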