
Chapter 7: Memory Organization

7.1 Characteristics of Memory Systems


Memory is the unit used for the storage and retrieval of data and instructions. A typical computer system is
equipped with a hierarchy of memory subsystems, some internal to the system and some external.
Internal memory is accessible by the CPU directly, while external memory is accessible by the CPU
through an I/O module. Memory systems are classified according to their key characteristics. The most
important are listed below:
Location: Memory is classified according to its location as:
 Registers: The CPU requires its own local memory in the form of registers, and the control unit
also requires fast, locally accessible memory.
 Internal (main): often associated with the main memory (RAM).
 External (secondary): consists of peripheral storage devices such as hard disks, magnetic tapes,
etc.
Capacity: Storage capacity is one of the most important aspects of memory. It is measured in bytes. Since
the capacity of a typical memory is very large, the prefixes kilo (K), mega (M), and giga (G) are used.
A kilobyte is 2^10 = 1024 bytes, a megabyte is 2^20 bytes, and a gigabyte is 2^30 bytes.
Unit of Transfer: The unit of transfer is the amount of data read out of or written into memory at a time.
 Word: For internal memory, the unit of transfer is equal to the number of data lines into and out of
the memory module.
 Block: For external memory, data are often transferred in much larger units than a word; these
are referred to as blocks.
Access Method

 Sequential: Tape units use sequential access. Data are generally stored in units called
“records”. Data are accessed sequentially; records are passed over (or rejected) until the record
being searched for is found.
 Random: Each addressable location in memory has a unique addressing mechanism. The time to
access a given location is independent of the sequence of prior accesses and is constant. Any
location can be selected at random and directly addressed and accessed. Main memory and cache
systems are random access.
Performance

 Access time: For random-access memory, this is the time it takes to perform a read or write
operation: that is, the time from the instant that an address is presented to the memory to the
instant that data have been stored or made available for use. For nonrandom-access memory,
access time is the time it takes to position the read-write mechanism at the desired location.
 Transfer rate: This is the rate at which data can be transferred into or out of a memory unit.
Physical Type

 Semiconductor: main memory and cache (RAM, ROM).


Prepared by Haftom
 Magnetic: Magnetic disks (hard disks), magnetic tape units.
 Optical: CD, DVD.

Physical Characteristics

 Volatile/nonvolatile: In a volatile memory, information decays naturally or is lost when
electrical power is switched off.
 Erasable/nonerasable: Nonerasable memory cannot be altered (except by destroying the storage
unit). ROMs are nonerasable.
7.2 Memory Hierarchy
A computer system is equipped with a hierarchy of memory subsystems. There are several memory
types with very different physical properties. The important characteristics of memory devices are cost
per bit, access time, data transfer rate, alterability, and compatibility with processor technologies. Figure
7.1 shows the memory hierarchy of a typical system, with the trend in access time, amount of storage,
and cost per byte.

Figure 7.1: Memory hierarchy


Design constraints: How much? How fast? How expensive?
 The faster the access time, the greater the cost per bit.
 The greater the capacity, the smaller the cost per bit.
 The greater the capacity, the slower the access time.
7.3 Types of Storage Devices
7.3.1 Main Memory
The main memory (RAM) stores data and instructions. RAMs are built from semiconductor materials.
Semiconductor memories fall into two categories, SRAMs (static RAMs) and DRAMs (dynamic
RAMs).
DYNAMIC RAM (DRAM) is made with cells that store data as charge on capacitors. The presence or
absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural
tendency to discharge, DRAMs require periodic charge refreshing to maintain data storage.
STATIC RAM (SRAM) In an SRAM, binary values are stored using traditional flip-flop logic gates. A
static RAM holds its data as long as power is supplied to it. Static RAMs are faster than dynamic
RAMs.
Dynamic versus Static RAM
 Dynamic RAM: stores data as charge in a capacitor; it requires periodic refreshing.
 Static RAM: stores data in a flip-flop; applying power is enough (no refreshing needed).
 A dynamic RAM cell is simpler and hence smaller than a static RAM cell. Therefore, DRAM is
denser and less expensive, but it requires supporting refresh circuitry.
 Static RAMs are faster than dynamic RAMs.
7.3.1.1 Types of ROM
ROM: The data are wired in at the factory and can never be altered.
PROM: Programmable ROM. It can be programmed only once after fabrication, and programming
requires a special device.
EPROM: Erasable Programmable ROM. It can be programmed multiple times, but the whole capacity
must be erased by ultraviolet radiation before each new programming activity; it cannot be partially
reprogrammed.
EEPROM: Electrically Erasable Programmable ROM. It is erased and programmed electrically and can
be partially reprogrammed. A write operation takes considerably longer than a read operation.

7.3.2 Cache Memory


 Cache memory is a small, high-speed RAM buffer located between the CPU and main memory.
 Cache memory holds a copy of the instructions (instruction cache) or data (operand or data
cache) currently being used by the CPU.
 The main purpose of a cache is to accelerate the computer while keeping its price low.

Figure 7.2: Placement of Cache memory in the computer

Hit Ratio
 The ratio of the total number of hits to the total number of CPU accesses to memory (i.e., hits
plus misses) is called the hit ratio.
 Hit Ratio = Total Number of Hits / (Total Number of Hits + Total Number of Misses)
Example: Consider a system with a 512 x 12 cache and a 32K x 12 main memory.

Figure 7.3: Hit Ratio
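As a quick sketch, the hit ratio formula above can be computed directly (the access counts here are hypothetical, not taken from the figure):

```python
def hit_ratio(hits, misses):
    """Fraction of CPU memory accesses satisfied by the cache."""
    return hits / (hits + misses)

# Hypothetical counts: 970 hits and 30 misses out of 1000 accesses.
print(hit_ratio(970, 30))  # → 0.97
```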
Types of Cache Mapping
 Direct Mapping
 Associative Mapping
 Set Associative Mapping
Direct Mapping
 The direct mapping technique is simple and inexpensive to implement.
 When the CPU wants to access data from memory, it places an address on the address bus. The
index field of the CPU address is used to access the corresponding cache word.
 The tag field of the CPU address is compared with the tag stored in the word read from the
cache.
 If the tag bits of the CPU address match the tag bits in the cache, there is a hit and the
required data word is read from the cache.
 If there is no match, there is a miss and the required data word must be read from main memory.
It is then transferred from main memory to the cache along with the new tag.

Figure 7.4: Direct Mapping
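The index/tag decoding described above can be sketched in Python. The sizes are illustrative assumptions: a 32K-word main memory gives 15-bit addresses, and a 512-word cache gives a 9-bit index, leaving a 6-bit tag:

```python
INDEX_BITS = 9                               # 512-word cache
TAG_BITS = 6                                 # 15-bit address minus 9-bit index

cache = {}                                   # index -> (tag, word)

def read(addr, main_memory):
    """Direct-mapped lookup: low bits index the line, high bits are the tag."""
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    if index in cache and cache[index][0] == tag:
        return cache[index][1], "hit"
    word = main_memory[addr]                 # miss: fetch from main memory
    cache[index] = (tag, word)               # store the word with the new tag
    return word, "miss"
```

Two reads of the same address give a miss then a hit; two addresses that share an index but differ in tag evict one another, which is the main weakness of direct mapping.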

Associative Mapping
 An associative mapping uses an associative memory.
 This memory is being accessed using its contents.
 Each line of cache memory will accommodate the address (main memory) and the contents of
that address from the main memory.
 That is why this memory is also called Content Addressable Memory (CAM). It allows each
block of main memory to be stored in the cache.

Figure 7.5: Associative Mapping
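A fully associative cache can be sketched as a small content-keyed store. The capacity and the FIFO eviction choice below are illustrative assumptions:

```python
CAPACITY = 4
cache = {}                                   # full address -> word

def read(addr, main_memory):
    """Any block may occupy any line; lookup is by the full address."""
    if addr in cache:
        return cache[addr], "hit"
    if len(cache) >= CAPACITY:               # cache full: a replacement
        cache.pop(next(iter(cache)))         # policy is needed (FIFO here)
    cache[addr] = main_memory[addr]
    return cache[addr], "miss"
```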

Set Associative Mapping
 Set associative mapping combines the easy control of the direct-mapped cache with the more
flexible mapping of the fully associative cache.
 In set associative mapping, each cache location can hold more than one tag + data pair.
 That is, more than one tag-and-data pair resides at the same location of cache memory. If
each cache location holds two tag + data pairs, the scheme is called 2-way set associative
mapping.

Figure 7.6: Two-Way Set Associative Mapping
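A minimal 2-way set associative sketch follows; the number of sets and the FIFO eviction within a set are illustrative assumptions:

```python
NUM_SETS = 4
WAYS = 2
sets = [[] for _ in range(NUM_SETS)]         # each entry: (tag, word)

def read(addr, main_memory):
    """The set index picks a set; both ways in the set are tag-compared."""
    index, tag = addr % NUM_SETS, addr // NUM_SETS
    for t, w in sets[index]:
        if t == tag:
            return w, "hit"
    if len(sets[index]) >= WAYS:
        sets[index].pop(0)                   # evict within this set only
    word = main_memory[addr]
    sets[index].append((tag, word))
    return word, "miss"
```

Unlike direct mapping, two addresses that share an index can now coexist, one per way.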


Replacement Algorithms of Cache Memory
Replacement algorithms are used when there is no available space in the cache in which to place a new
item. Four of the most common cache replacement algorithms are described below:
Least Recently Used (LRU):
 The LRU algorithm selects for replacement the item that has been least recently used by the
CPU.

First-In-First-Out (FIFO):
 The FIFO algorithm selects for replacement the item that has been in the cache for the longest
time.
Least Frequently Used (LFU):
 The LFU algorithm selects for replacement the item that has been least frequently used by the
CPU.
Random:
 The random algorithm selects an item for replacement at random.
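As a sketch, LFU can be implemented with an access counter per item; the capacity and tie-breaking rule below are illustrative assumptions:

```python
from collections import Counter

CAPACITY = 3
counts = Counter()                           # lifetime access counts
cache = set()

def access(item):
    """LFU: on a miss with a full cache, evict the least-counted item."""
    counts[item] += 1
    if item in cache:
        return "hit"
    if len(cache) >= CAPACITY:
        victim = min(cache, key=lambda x: counts[x])
        cache.discard(victim)
    cache.add(item)
    return "miss"
```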
Writing into Cache

 When memory write operations are performed, the CPU first writes into the cache memory. The
modifications made by the CPU during a write operation, on the data saved in the cache, need to
be written back to main memory or to auxiliary memory.
 The two popular cache write policies (schemes) are:
o Write-Through
o Write-Back
Write-Through
 In a write-through cache, main memory is updated each time the CPU writes into the cache.
 The advantage of the write-through cache is that main memory always contains the same data
as the cache.
 This characteristic is desirable in a system that uses a direct memory access (DMA) scheme of
data transfer, since I/O devices communicating through DMA receive the most recent data.
Write-Back
 In a write back scheme, only the cache memory is updated during a write operation.
 The updated locations in the cache memory are marked by a flag so that later on, when the word
is removed from the cache, it is copied into the main memory.
 Words are removed from the cache from time to time to make room for a new block of words.
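The two policies can be contrasted in a small sketch (dicts stand in for the cache and main memory; the dirty-flag set is an illustrative device):

```python
cache, main_memory, dirty = {}, {}, set()

def write_through(addr, value):
    cache[addr] = value
    main_memory[addr] = value        # main memory updated on every write

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)                  # only mark the line as modified

def evict(addr):
    if addr in dirty:                # write-back lines are flushed on eviction
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)
```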
Virtual Memory
 The term virtual memory refers to something which appears to be present but is not actually there.
 The virtual memory technique allows users to use more memory for a program than the real
memory of a computer.
 So, virtual memory is the concept that gives the illusion to the user that they will have main
memory equal to the capacity of secondary storage media.
Concept of Virtual Memory
 A programmer can write a program which requires more memory space than the capacity of the
main memory. Such a program is executed using the virtual memory technique.
 The program is stored in the secondary memory. The memory management unit (MMU)
transfers the currently needed part of the program from the secondary memory to the main
memory for execution.
 This to and from movement of instructions and data (parts of a program) between the main
memory and the secondary memory is called Swapping.
Address Space and Memory Space
 Virtual address is the address used by the programmer and the set of such addresses is called the
address space or virtual memory.
 An address in main memory is called a location or physical address. The set of such locations
in main memory is called the memory space or physical memory.
 The CPU generates a logical address consisting of a logical page number plus the location within
that page.
 This address must be mapped onto an actual (physical) main memory address by the operating
system using a mapper.
 If the page is present in the main memory, CPU gets the required data from the main memory.
 If the mapper detects that the requested page is not present in main memory, a page fault occurs
and the page must be read from secondary storage into a page frame in main memory.

Figure 7.7: Address Space And Memory Space
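The mapping step can be sketched as a page-table lookup; the page size and table contents below are hypothetical:

```python
PAGE_SIZE = 1024
page_table = {0: 3, 1: 7}            # virtual page -> physical frame

def translate(vaddr):
    """Split the virtual address; a missing entry means a page fault."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(vpage)
    if frame is None:
        raise RuntimeError("page fault: page %d must be loaded" % vpage)
    return frame * PAGE_SIZE + offset

print(translate(1 * PAGE_SIZE + 100))  # → 7268 (frame 7, offset 100)
```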


Address Mapping Using Memory Mapping Page Table
 When the requested page is not available in main memory, a page fault is said to have occurred.
 The virtual address generated by the CPU is then used to fetch the requested page from the
secondary storage media into main memory, resolving the page fault.
 If an empty page frame is not available, a page must be removed from a page frame in main
memory.
Page Replacement Algorithms
 In a computer operating system that uses paging for virtual memory management, page
replacement algorithms decide which memory pages to page out (swap out, write to disk) when a
page of memory needs to be allocated.
 Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation,
either because there are none, or because the number of free pages is lower than some threshold.
FIFO Algorithm

 Consider a paging system with a capacity of 3 pages. The execution of a program requires
references to five distinct pages P1, P2, P3, P4 and P5. The pages are referenced in the following
sequence:
 P2 P3 P2 P1 P5 P2 P4 P5 P3 P2 P5 P2

Figure 7.8: FIFO Algorithm
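The FIFO behavior can be checked with a short simulation of the reference sequence above with 3 frames; the fault count follows from the FIFO rule:

```python
refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]   # P2 P3 P2 P1 P5 P2 P4 P5 P3 P2 P5 P2
frames, faults = [], 0
for page in refs:
    if page not in frames:
        faults += 1
        if len(frames) == 3:
            frames.pop(0)            # FIFO: evict the earliest-loaded page
        frames.append(page)
print(faults)  # → 9 page faults
```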


Least Recently Used (LRU)
 The least recently used page (LRU) replacement algorithm keeps track of page usage over a
short period of time.
 The LRU algorithm can be implemented by associating a counter with every page that is in main
memory.
 When a page is referenced, its associated counter is set to 0. At fixed intervals of time, the
counters associated with all pages presently in memory are incremented by 1.
 The least recently used page is the page with the highest count. The counters are often called
aging registers, as their count indicates how long ago their associated pages were last
referenced.
 Consider a paging system with a capacity of 3 pages. The execution of a program requires
references to five distinct pages P1, P2, P3, P4 and P5. The pages are referenced in the following
sequence:
 P2 P3 P2 P1 P5 P2 P4 P5 P3 P2 P5 P2

Figure 7.9: Least Recently Used (LRU)
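Simulating the same reference sequence under the LRU rule with 3 frames:

```python
refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
frames, faults = [], 0               # ordered least- to most-recently used
for page in refs:
    if page in frames:
        frames.remove(page)          # hit: refresh the page's recency
    else:
        faults += 1
        if len(frames) == 3:
            frames.pop(0)            # evict the least recently used page
    frames.append(page)
print(faults)  # → 7 page faults
```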


Optimal (OPT)
 The optimal policy selects that page for replacement for which the time to the next reference is
longest.
 This algorithm results in the fewest page faults, but it is impossible to implement in
practice.
 At the time of page fault, the operating system has no way of knowing when each of the pages
will be referenced next. However, it does serve as a standard against which to judge other
algorithms.
 Consider a paging system with a capacity of 3 pages. The execution of a program requires
references to five distinct pages P1, P2, P3, P4 and P5. The pages are referenced in the following
sequence:
 P2 P3 P2 P1 P5 P2 P4 P5 P3 P2 P5 P2

Figure 7.10: Optimal (OPT)
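OPT can be simulated offline because the full reference sequence is known in advance, which is exactly why it cannot be implemented in a running system:

```python
refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
frames, faults = [], 0
for i, page in enumerate(refs):
    if page in frames:
        continue
    faults += 1
    if len(frames) == 3:
        # Evict the page whose next use lies farthest in the future;
        # a page never referenced again counts as infinitely far.
        def next_use(p):
            return refs.index(p, i + 1) if p in refs[i + 1:] else float("inf")
        frames.remove(max(frames, key=next_use))
    frames.append(page)
print(faults)  # → 6 page faults
```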

