
Module 5 : Memory Organization

Mrs. Minakshi S. Ghorpade

Page 1
Introduction to Memory and Memory parameters

What is Memory?
• Computer memory is a very important part of a computer.
• Memory is a chip or device that holds the data and instructions used by the computer during processing.
• In other words, computer memory is the storage space for data and instructions in a computer system.

Characteristics of Main Memory:


•It is faster than secondary memory.
•It is a semiconductor memory.
•It is usually a volatile memory.
•It is the main memory of the computer.
•A computer system cannot run without primary memory.

Page 2
Why is memory required in a computer?

The following are some important memory units:

Bit (Binary Digit): a bit is a logical representation of an electrical state; it can be 1 or 0.
Nibble: a group of 4 bits.
Byte: a byte is a group of 8 bits.
Word: a word is a group of 2 bytes, or 16 bits (the word size varies with the architecture). Computers store information in the form of words.

Why is memory required in a computer?

Memory in a computer is needed:

•To store data and information temporarily or permanently.
•To provide the required data and instructions to the CPU for processing.

Page 3
Parameters in choosing memory

There are several parameters to consider when deciding on suitable memory for a computer system.

Capacity
The storage size of a computer depends on its memory capacity.
Memory can be seen as a storage unit containing x locations, each of which stores y bits.
The total capacity of the memory is therefore x*y bits, and it is described as an x-word, y-bit memory. For example, a memory with 4096 locations of 8 bits each has a capacity of 4096 × 8 = 32,768 bits.

Bandwidth
The bandwidth of the memory indicates the maximum amount of information that can be transferred to or from the
memory per unit time.
It is expressed as a number of bytes or words per second.

Speed
The speed of operation of the memory is a very important parameter.
The speed indicates the time between the start of an operation and the end of that operation.
The speed of memory is measured using two parameters:
access time (ta)
cycle time (tc)

Page 4
Characteristics of memory system
1. Location:
CPU: This includes CPU registers and on-chip cache memory.
Internal: This includes the memory that the processor can directly access.
External: This is normally removable or virtual memory, and hence access is slower.

2. Capacity: It is measured in terms of the word size and the number of words.
The word size is the size of each location; the number of words is the number of locations.

3. Unit of transfer: This refers to the size of the data that is transferred in one clock
cycle. It mainly depends on the data bus size.
a. Internal: It is related to the communication of data with the directly accessible
memory. It is usually governed by the data bus width.
b. External: This is the data communication with the external removable memory
or virtual memory. It is usually a block, which is much larger than a word.

Page 5
4. Access method: There are various methods of accessing the memory based on the memory organization. These
methods are listed below with examples:
• Sequential access: a memory whose storage locations can be accessed only in a certain pre-determined sequence is
called a serial access memory.
• Direct access: storage locations are accessed directly.
• Random access: storage locations can be accessed in any order and the access time is independent of the physical
location being accessed, i.e., it takes the same amount of time to access any memory location, e.g., RAM.
• Associative access: here the data is located by a comparison with the contents of a portion of the stored data (address).
Hence the access time is independent of location or previous access.

5. Performance: The performance of the memory depends on its speed of operation, or data transfer rate. The
data transfer rate is the rate at which data is transferred. The speed of operation depends on two things:
a. Access time: The time between providing the address and getting the valid data from memory is called its
access time, i.e., the address-to-data time.
b. Memory cycle time: The time that the memory requires to "recover" before the next access, i.e., the time
between two addresses, is called the memory cycle time.

Page 6
6. Physical type: The physical material from which the memory is made can differ:
• Semiconductor: Memory can be made using semiconductor material, i.e., ICs.
• Magnetic: Memory can also be made using a magnetic read and write mechanism, e.g., magnetic disk and
magnetic tape.
• Optical: Optical memories, i.e., memories that use optical methods to read and write, have become popular,
e.g., CD and DVD.

7. Physical characteristics: The physical characteristics of memory are also an important aspect to be considered. These
include volatile/non-volatile, power consumption, erasable/non-erasable, etc.

8. Organisation: Memory is not always organized sequentially. There are other types of
memory organization, such as interleaved memory.
Page 7
Classification of computer memory

Based on the type of use and features, the memory units of a computer are categorized as:
•Primary memory - also known as internal memory or main memory.
•Secondary memory - also known as auxiliary memory.

Page 8
Random Access Memory (RAM) – Primary Memory

•It is also called read-write memory, main memory, or primary memory.
•The programs and data that the CPU requires during the execution of a program are stored in this memory.
•It is a volatile memory, as the data is lost when the power is turned off.
•RAM is further classified into two types: SRAM (Static Random Access Memory) and DRAM (Dynamic Random
Access Memory).

Page 9
Page 10
Read-Only Memory (ROM)

•Stores crucial information essential to operate the system, like the program essential to boot the computer.
•It is non-volatile.
•Always retains its data.
•Used in embedded systems or where the programming needs no change.
•Used in calculators and peripheral devices.
•ROM is further classified into four types- MROM, PROM, EPROM, and EEPROM.

Types of Read-Only Memory (ROM)

•PROM (Programmable Read-Only Memory) – It can be programmed by the user. Once programmed, the data
and instructions in it cannot be changed.

•EPROM (Erasable Programmable Read-Only Memory) – It can be reprogrammed. To erase data from it, it is
exposed to ultraviolet light. To reprogram it, all the previous data must first be erased.

•EEPROM (Electrically Erasable Programmable Read-Only Memory) – The data can be erased by applying an
electric field, with no need for ultraviolet light. Only portions of the chip can be erased.

Page 11
Compare

Page 12
Volatile memory and Non Volatile Memory.

Page 13
Memory Hierarchy


Page 14
Memory hierarchy
A 'memory hierarchy' in computer storage distinguishes each level in the 'hierarchy' by response time. Since
response time, complexity, and capacity are related, the levels may also be distinguished by the controlling
technology.
The many trade-offs in designing for high performance will include the structure of the memory hierarchy,
i.e. the size and technology of each component.
There are four major storage levels.

1. Internal – Processor registers and cache.

2. Main – the system RAM and controller cards.

3. On-line mass storage – Secondary storage.

4. Off-line bulk storage – Tertiary and Off-line storage.

This is the most general memory hierarchy structure. Many other structures are useful. For example, a paging
algorithm may be considered as a level for virtual memory when designing a computer architecture.

Page 15
Cache Memory in Computer Organization

• Cache memory is one of the fastest types of memory.


• It acts as a buffer between the main memory and the CPU, and it stores the data and instructions
that the CPU uses most frequently.
• It holds frequently requested data and instructions so that they are immediately available to the CPU when
needed.
• Cache memory is used to reduce the average time to access data from the main memory.
• The cache is a smaller and faster memory that stores copies of the data from frequently used main memory
locations.

Page 16
Levels of Cache Memory

There can be various levels of cache memory; they are as follows:


Level 1 (L1) or Registers
These store and accept data that is held directly in the CPU, for example the instruction register,
program counter, accumulator, address register, etc.

Level 2 (L2) or Cache Memory
It is the fastest memory that stores data temporarily for fast access by the CPU; it has the fastest
access time.

Level 3 (L3) or Main Memory
It is the main memory, where the computer stores all the current data. It is a volatile memory, which means
that it loses its data when the power is turned off.

Level 4 (L4) or Secondary Memory
It is slow in terms of access time, but the data stays in this memory permanently.

Page 17
Cache Performance

When the processor needs to read or write a location in main memory, it first checks for a corresponding
entry in the cache.

•If the processor finds that the memory location is in the cache, a cache hit has occurred, and the data is read
from the cache.

•If the processor does not find the memory location in the cache, a cache miss has occurred. During a
cache miss, the data is read from the main memory and an entry for it is allocated in the cache.

•Therefore, we can define the hit ratio as the number of hits divided by the sum of hits and misses.

Hit ratio = hits / (hits + misses) = number of hits / total accesses

Miss ratio = misses / (hits + misses) = number of misses / total accesses = 1 − hit ratio (H)
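As a quick illustration of these formulas, here is a minimal Python sketch (the function name is illustrative, not from the slides) that turns raw hit and miss counts into the two ratios.

```python
def hit_and_miss_ratio(hits, misses):
    """Return (hit_ratio, miss_ratio) from raw hit/miss counts."""
    total = hits + misses
    hit_ratio = hits / total
    miss_ratio = misses / total        # equivalently: 1 - hit_ratio
    return hit_ratio, miss_ratio

# Example: 45 hits and 5 misses out of 50 accesses
print(hit_and_miss_ratio(45, 5))       # (0.9, 0.1)
```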

Page 18
Page Replacement Techniques:
• Page replacement occurs due to page faults.

• Page replacement is required when:


• All the frames of main memory are already occupied.
• Thus, a page has to be replaced to create room for the required page.

• Page replacement algorithms help to decide which page must be swapped out of main memory to create
room for the incoming page.

• A page replacement algorithm is needed to decide which page should be replaced when a new page comes in.

• Page replacement algorithms play an important role in virtual memory management. The main objective of all
page replacement policies is to minimize the number of page faults.

Some Page Replacement Algorithms :


•First In First Out (FIFO)
•Least Recently Used (LRU)
•Optimal Page Replacement (OPR)

Page 19
Page Replacement Algorithms: FIFO

1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the operating
system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example: Calculate the number of page hits and faults using FIFO, LRU, Optimal page replacement algorithms
for the following page frame sequence:
2,3,1,2,4,3,2,5,3,6,7,9,3,7 (Frame size:3)
Advantages
•Simple and easy to implement.
•Low overhead.

Disadvantages
•Poor performance.
•Doesn't consider the frequency of use or the last
used time; it simply replaces the oldest page.
•Suffers from Belady's Anomaly (i.e. more page
faults can occur when the number of page
frames is increased).
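A minimal Python sketch of FIFO replacement for the reference string in the example above, assuming a frame size of 3; the function name and data structures are illustrative, not part of the slides.

```python
from collections import deque

def fifo_page_replacement(reference_string, frame_count):
    """Return (hits, faults) for First In First Out replacement."""
    frames = set()              # pages currently resident in memory
    queue = deque()             # arrival order of the resident pages
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
        else:
            faults += 1
            if len(frames) == frame_count:    # all frames occupied
                oldest = queue.popleft()      # evict the oldest page
                frames.remove(oldest)
            frames.add(page)
            queue.append(page)
    return hits, faults

pages = [2, 3, 1, 2, 4, 3, 2, 5, 3, 6, 7, 9, 3, 7]
print(fifo_page_replacement(pages, 3))   # (3, 11): 3 hits, 11 page faults
```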

Page 20
Page Replacement Algorithms: LRU
2. Least Recently Used (LRU): In this algorithm, the page that has been least recently used is replaced.

Example: Calculate the number of page hits and faults using FIFO, LRU, Optimal page replacement algorithms for the
following page frame sequence:
2,3,1,2,4,3,2,5,3,6,7,9,3,7 (Frame size:3)

Advantages
•Efficient.
•Doesn't suffer from Belady’s Anomaly.

Disadvantages
•Complex Implementation.
•Expensive.
•Requires hardware support.
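A corresponding Python sketch of LRU for the same reference string and frame size; the list-based recency tracking is an illustration of the idea, not the hardware-supported implementation mentioned above.

```python
def lru_page_replacement(reference_string, frame_count):
    """Return (hits, faults) for Least Recently Used replacement."""
    frames = []                 # ordered from least to most recently used
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)           # refresh the page's recency
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)             # evict the least recently used page
        frames.append(page)               # page is now the most recently used
    return hits, faults

pages = [2, 3, 1, 2, 4, 3, 2, 5, 3, 6, 7, 9, 3, 7]
print(lru_page_replacement(pages, 3))    # (4, 10): 4 hits, 10 page faults
```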

Page 21
Page Replacement Algorithms: OPR

3. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest
duration of time in the future is replaced.

Example: Calculate the number of page hits and faults using FIFO, LRU, Optimal page replacement algorithms for
the following page frame sequence:
2,3,1,2,4,3,2,5,3,6,7,9,3,7 (Frame size:3)
Advantages
•Easy to Implement.
•Simple data structures are used.
•Highly efficient.

Disadvantages
•Requires future knowledge of the program.
•Time-consuming.
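A Python sketch of the optimal policy for the same example, using the "future knowledge" noted above by scanning the rest of the reference string; helper names are illustrative.

```python
def optimal_page_replacement(reference_string, frame_count):
    """Return (hits, faults) for Optimal (farthest-future-use) replacement."""
    frames = []
    hits = faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the future
        # (or that is never used again).
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return hits, faults

pages = [2, 3, 1, 2, 4, 3, 2, 5, 3, 6, 7, 9, 3, 7]
print(optimal_page_replacement(pages, 3))   # (6, 8): 6 hits, 8 page faults
```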

Page 22
Page Replacement Techniques: Example

Page 23
Allocation Policies:
• Available memory consists of a set of holes of various sizes scattered throughout memory.
Whenever a process arrives and needs memory, the system searches this set for a hole that is large enough for
the arriving process.
• Memory allocation is the process of assigning blocks of memory on request. Typically the allocator receives
memory from the operating system in a small number of large blocks that it must divide up to satisfy
requests for smaller blocks.
• Memory allocation is the process by which computer programs and services are assigned physical or virtual
memory space, i.e. the process of reserving a partial or complete portion of computer memory for the
execution of programs and processes. Memory allocation is achieved through memory management.
There are different ways of implementing allocation of partitions from a list of free holes (a small sketch follows below), such as:
• First fit: allocate the first hole that is big enough.
• Best fit: allocate the smallest hole that is big enough.
• Worst fit: allocate the largest hole (provided it is big enough).
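A rough Python sketch of the three policies (the hole sizes and the 212 KB request are just an example; the helper names are illustrative).

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that still fits, or None."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it fits, else None."""
    i = max(range(len(holes)), key=lambda i: holes[i])
    return i if holes[i] >= request else None

holes = [100, 500, 200, 300, 600]       # free partition sizes in KB
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# first fit -> index 1 (500 KB), best fit -> index 3 (300 KB), worst fit -> index 4 (600 KB)
```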
Page 24
Allocation Policies : First Fit, Best Fit , Worst Fit

Page 25
First Fit, Best Fit , Worst Fit Example

Page 26
First Fit, Best Fit , Worst Fit Example

Page 27
Interleaved memory

• Interleaved memory implements the concept of accessing more than one word in a single memory access cycle.
• Memory can be partitioned into N separate memory modules.
• These N accesses can then be carried out simultaneously.
• Access to multiple words can be done simultaneously or in a pipelined fashion.
• The maximum bandwidth is N words per cycle.

• Example:

Page 28
Fig: Low-order m-way interleaving
Fig: Eight-way low-order interleaving (absolute address shown in each memory word)
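As a rough sketch of the low-order interleaving shown in the figures (assuming eight modules and plain integer word addresses), the module number comes from the low-order address bits and the word offset within the module from the remaining bits.

```python
def low_order_interleave(address, num_modules=8):
    """Split an absolute word address into (module, offset within module)
    for low-order interleaving; num_modules is assumed to be a power of 2."""
    module = address % num_modules      # low-order bits select the module
    offset = address // num_modules     # remaining bits select the word
    return module, offset

# Consecutive addresses fall in consecutive modules, so a run of 8
# consecutive words can be fetched from all 8 modules in parallel.
for addr in range(8, 16):
    print(addr, low_order_interleave(addr))
```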

Page 29
Page 30
Associative memory

• An associative memory can be considered as a memory unit whose stored data can be identified for
access by the content of the data itself rather than by an address or memory location.

• Associative memory is often referred to as Content Addressable Memory (CAM).

• When a write operation is performed on associative memory, no address or memory location is given to
the word. The memory itself is capable of finding an empty unused location to store the word.

• On the other hand, when the word is to be read from an associative memory, the content of the word, or
part of the word, is specified. The words which match the specified content are located by the memory
and are marked for reading.

Page 31
• An associative memory consists of a memory array and logic for 'm' words with 'n' bits per word.

• The functional registers, the argument register A and the key register K, each have n bits, one for each bit of a word.

• The key register (K) provides a mask for choosing a particular field or key in the argument word.

• The match register M consists of m bits, one for each memory word.

• The words kept in the memory are compared in parallel with the content of the argument register.
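A toy Python sketch of this search (the names A, K and M mirror the registers described above; it is an illustration, not a hardware model): every stored word is compared with the masked argument, and the match register marks the words that agree.

```python
def cam_search(memory_words, argument, key_mask):
    """Return the match register M: one bit per stored word.
    A word matches when it equals the argument in every bit
    position selected by the key mask (in hardware this
    comparison happens for all words in parallel)."""
    match_register = []
    for word in memory_words:
        match = (word & key_mask) == (argument & key_mask)
        match_register.append(1 if match else 0)
    return match_register

memory = [0b10100111, 0b10101010, 0b01100111, 0b10100001]
A = 0b10100000          # argument register
K = 0b11110000          # key register: compare only the upper 4 bits
print(cam_search(memory, A, K))   # [1, 1, 0, 1]
```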

Page 32
Page 33
Cache Coherency

• The cache coherence problem arises when multiple processor cores share the same memory hierarchy but have their
own L1 data and instruction caches.

• The practice of cache coherence makes sure that alterations in the contents of shared operands are quickly
propagated across the system.
• Suppose processor P1 modifies the copy of shared memory block X present in its cache. This results in data
inconsistency: P1 now holds the modified copy of the shared block, i.e. X1, while the main memory and the other
processors' caches still hold the old copy of block X. This is the cache coherence problem.

Page 34
Cache Coherence Protocols : Write-Through Protocol
The easiest and most popular method is write-through. Every memory write operation updates the main
memory. If the word is present in the cache at the requested address, the cache is also updated
simultaneously with the main memory, so the main memory always holds consistent data.

Advantage - It provides the highest level of consistency.

Disadvantage - It requires a greater number of memory accesses.
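A minimal sketch of the write path under the write-through protocol, using a toy dictionary-based model (the class and method names are illustrative, not part of the slides).

```python
class WriteThroughCache:
    """Toy write-through cache: main memory is always updated on a write."""
    def __init__(self, main_memory):
        self.main_memory = main_memory    # dict: address -> value
        self.cache = {}                   # dict: address -> value

    def write(self, address, value):
        self.main_memory[address] = value   # always update main memory
        if address in self.cache:           # keep any cached copy consistent
            self.cache[address] = value

    def read(self, address):
        if address not in self.cache:       # miss: fetch from main memory
            self.cache[address] = self.main_memory[address]
        return self.cache[address]

mem = {0x10: 5}
c = WriteThroughCache(mem)
print(c.read(0x10))              # 5 (miss, then cached)
c.write(0x10, 7)                 # updates both the cache and main memory
print(mem[0x10], c.read(0x10))   # 7 7
```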

Page 35
Cache Coherence Protocols : Write-Back Protocol

This protocol permits the processor to modify a data block only if it first acquires ownership of that block; main memory is updated only when the block is written back.

Advantage - A very small number of memory accesses and write operations.

Disadvantage - Inconsistency may occur in this approach.

Page 36
Virtual memory
• Virtual memory is a common technique used in a computer's operating system (OS).

• Virtual memory is a method that computers use to manage storage space to keep systems running quickly and
efficiently.

• Using the technique, operating systems can transfer data between different types of storage, such as random-
access memory (RAM), also known as main memory, and hard drive or solid-state disk storage.

• Virtual memory is a memory management technique where secondary memory can be used as if it were a part of
the main memory.

• It creates the illusion that segments of a program are present in main memory even when they are not.

• A programmer can write a program that requires more memory than the capacity of main memory; such a
program is executed using virtual memory techniques.

• Virtual memory is important for improving system performance, multitasking and using large programs.

Page 37
virtual memory

Page 38
2 types of virtual memory - 1. Paging

• Paging is a virtual memory technique that divides memory into fixed-size sections called pages.

• When a computer reaches its RAM limits, it transfers any currently unused pages into the part of its hard drive used
for virtual memory. The computer performs this process using a swap file, a designated space within its hard drive for
extending the virtual memory of the computer's RAM.

• By moving unused pages to its hard drive, the computer frees its RAM space for other memory tasks and ensures that
it doesn't run out of real memory.

• As part of this process, the computer uses page tables, which translate virtual addresses into the physical addresses
that the computer's memory management unit (MMU) uses to process instructions.

• The MMU communicates between the computer's OS and its page tables. When the user performs a task, the OS
searches its RAM for the processes to conduct the task.
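A simplified sketch of the translation a page table performs (the 4 KB page size and the table contents are assumptions made for illustration).

```python
PAGE_SIZE = 4096                      # assume 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        raise LookupError("page fault: page %d not in memory" % page_number)
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 2 -> 0x2234
```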

Page 39
2 types of virtual memory - 2. Segmenting

• Segmentation is another method of managing virtual memory.

• A segmentation system divides virtual memory into varying lengths and moves any segments that aren't in use
from the computer's virtual memory space to its hard drive.

• Like page tables, segment tables track whether each segment is stored in memory and record its physical address.
Segmentation differs from paging because it divides memory into sections of varying lengths, while paging divides
memory into units of equal size.

• With paging, the hardware determines the size of a section, but the user can select the length of a segment in a
segmentation system.

• Segmentation is often slower than paging, but it offers the user more control over how to divide memory and may
make it easier to share data between processes.

Page 40
Advantages and Disadvantages of Virtual Memory

Advantages of Virtual Memory


1.The degree of multiprogramming is increased.
2.Users can run large applications with less physical RAM.
3.There is no need to buy more RAM.

Disadvantages of Virtual Memory


1.The system becomes slower, since swapping takes time.
2.It takes more time to switch between applications.
3.The user has less hard disk space available for other use.

Page 41
Cache Mapping
• Cache mapping is the technique by which the content of main memory is brought into the cache memory.
• It is a transformation of data from main memory to cache memory.
• There are three different types of mapping used for cache memory, which are as follows:
Direct mapping, Associative mapping, and Set-Associative mapping. These are explained below.

Page 42
Cache Mapping: 1. Direct Mapping

A. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache
line. In other words, in direct mapping, each memory block is assigned to a specific line in the cache.

Direct mapping's performance is directly proportional to the hit ratio.

Cache line number = (Main memory block address) modulo (Total number of lines in the cache)

i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache

Page 43
Direct Mapping conti….

1) The simplest way to determine the cache location in which to store a memory
block is the direct mapping technique.
2) In this technique, block j of the main memory maps onto block (j modulo 128) of the
cache. Thus main memory blocks 0, 128, 256, … are stored at cache block 0;
blocks 1, 129, 257, … are stored at cache block 1, and so on.
3) The placement of a block in the cache is determined from the memory address.
The memory address is divided into 3 fields; the lower 4 bits select one of the
16 words in a block.
4) When a new block enters the cache, the 7-bit cache block field determines
the cache position in which this block must be stored.
5) The higher-order 5 bits of the memory address of the block are stored in the 5
tag bits associated with its location in the cache. They identify which of the 32
blocks that map onto this cache position is currently resident in the cache.
6) It is easy to implement, but not flexible.
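A small Python sketch of this address breakdown, assuming the 128-line, 16-words-per-block cache described above (the function itself is illustrative).

```python
WORDS_PER_BLOCK = 16      # 4-bit word field (example geometry from this slide)
CACHE_LINES = 128         # 7-bit cache block (line) field

def direct_map(address):
    """Split a word address into (tag, cache line, word offset)."""
    word = address % WORDS_PER_BLOCK          # lower 4 bits
    block = address // WORDS_PER_BLOCK        # main memory block number j
    line = block % CACHE_LINES                # i = j modulo m  (7 bits)
    tag = block // CACHE_LINES                # higher-order 5 bits
    return tag, line, word

# Blocks 0, 128, 256, ... all map to cache line 0 but carry different tags.
for block in (0, 128, 256):
    print(direct_map(block * WORDS_PER_BLOCK))
```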

Page 44
Advantages and Disadvantages of direct mapping

Advantages of direct mapping

• It is easy to implement.
• Direct mapping is the simplest type of cache memory mapping.
• Only the tag field needs to be matched when searching for a word, which is why the search is fast.
• A direct-mapped cache is less expensive than an associative-mapped cache.

Disadvantages of direct mapping

• The performance of a direct-mapped cache is not good, as it requires frequent replacement of the data and tag
values when different blocks map to the same line.
• It is not very flexible.

Page 45
(2) Associative Mapping:-

1) This is a more flexible mapping method, in which a main memory
block can be placed into any cache block position.

2) In this case, 12 tag bits are required to identify a memory block when
it is resident in the cache.

3) The tag bits of an address received from the processor are
compared with the tag bits of each block of the cache to see if the
desired block is present. This is known as the associative mapping
technique.

4) The cost of an associative-mapped cache is higher than the cost of a
direct-mapped cache because of the need to search all 128 tag patterns to
determine whether a block is in the cache. This is known as an associative
search.
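A rough sketch of the associative lookup (the cache contents are made up; in real hardware all tag comparisons happen in parallel rather than in a loop).

```python
WORDS_PER_BLOCK = 16

def associative_lookup(cache, address):
    """cache: list of (tag, block_data). Return the block on a hit, None on a miss.
    Every stored tag is compared with the address tag ('in parallel' in hardware)."""
    tag = address // WORDS_PER_BLOCK           # the block number serves as the tag
    for stored_tag, block_data in cache:
        if stored_tag == tag:
            return block_data
    return None

cache = [(5, "block five"), (130, "block one-thirty")]
print(associative_lookup(cache, 130 * WORDS_PER_BLOCK))   # hit -> "block one-thirty"
print(associative_lookup(cache, 7 * WORDS_PER_BLOCK))     # miss -> None
```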

Page 46
Advantages and Disadvantages of associative mapping

Advantages of associative mapping

• Associative mapping is fast.
• Associative mapping is easy to implement.
• It is more flexible than the direct mapping technique.

Disadvantages of associative mapping

• Its cost is high.
• A cache memory implementing associative mapping is expensive, as it requires the address to be stored along
with the data.

Page 47
(3) Set-Associative Mapping:-

1) It is a combination of the direct and associative mapping
techniques.
2) Cache blocks are grouped into sets, and the mapping allows a
block of main memory to reside in any block of a specific set.
Hence the contention problem of direct mapping is eased; at the
same time, the hardware cost is reduced by decreasing the size
of the associative search.
3) Consider a cache with two blocks per set. In this case, memory
blocks 0, 64, 128, …, 4032 map into cache set 0, and they can
occupy either of the two blocks within this set.
4) Having 64 sets means that the 6-bit set field of the address
determines which set of the cache might contain the desired
block. The tag bits of the address must be associatively compared
with the tags of the two blocks of the set to check if the desired
block is present. This is a two-way associative search.
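A brief sketch of the set-index and tag computation for the two-way, 64-set example above (the function and constants are illustrative and follow the geometry on this slide).

```python
WORDS_PER_BLOCK = 16
NUM_SETS = 64          # 6-bit set field, 2 blocks per set (two-way)

def set_associative_map(address):
    """Split a word address into (tag, set index, word offset) for the 64-set cache."""
    word = address % WORDS_PER_BLOCK
    block = address // WORDS_PER_BLOCK        # main memory block number
    set_index = block % NUM_SETS              # which set the block may live in
    tag = block // NUM_SETS                   # compared against both blocks of the set
    return tag, set_index, word

# Blocks 0, 64, 128, ... all map to set 0 but carry different tags;
# any two of them can be resident in set 0 at the same time.
for block in (0, 64, 128):
    print(set_associative_map(block * WORDS_PER_BLOCK))
```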

Page 48
Advantages and Disadvantages of Set-Associative mapping

Advantages of Set-Associative mapping


•Set-associative cache memory has the highest hit ratio of the mapping techniques discussed above.
Thus its performance is considerably better.

Disadvantages of Set-Associative mapping


•Set-associative cache memory is very expensive. As the set size increases, the cost increases.

Page 49
Example:

Page 50
