Memory 3

The document discusses memory management techniques. It explains that main memory is divided into pages/frames and programs reside on secondary storage like hard disks. When a program executes, only a portion of its code needs to be in memory at a time. Memory management brings pages into memory on demand using techniques like paging and segmentation. Common page replacement algorithms discussed are FIFO, optimal, and LRU.


Memory Management

Background
 Main Memory is divided into blocks (Pages / Frames)
 Paged Memory
 Blocks are Fixed Size called Pages
 Segmented Memory
 Blocks are Variable Size called Segments
 Complete program resides in secondary storage (HDD)
 Entire program rarely used
 Error code, exception handlers, large data structures
 90-10 rule
 “Programs spend 90% of their time in 10% of their code”
 Code needs to be in Main Memory to execute
 Entire program code is not needed in Main Memory at the same time
Background (Cont.)
 Solution: Think of main memory as a cache for disk

[Figure: memory hierarchy, from the Processor (Control, Datapath) through the On-Chip Cache, Second-Level Cache (SRAM), Main Memory (DRAM), and Secondary Storage (Disk) to Tertiary Storage (Tape); each level caches the one below it.]
Background (Cont.)
 Only part of the program needs to be in Main Memory
 No limit on program size to be executed
 Consider RAM Size = 16 MB and Program Size = 64 MB
 Higher degree of Multiprogramming
 Since each program takes less space
 More programs run at the same time
 Increased CPU utilization and throughput
Paging Hardware
Segmentation Hardware
Flow - Diagram
Start → Start Instruction → Generate Address / VM Address → Compute Page Number → Page in MM?
 Yes: Run Instruction, Instruction++, go back to “Start Instruction”
 No: Page Fault

Flow – Diagram (Cont.)
On a Page Fault:
 Free Block in MM?
– Yes: go straight to fetching the page
– No: Select a page for Replacement and Update the Page Table; if the page Was Dirty, Write the Dirty Page to Disk
 Get “Disk Address” of Page from FMT
 Read the Frame from Disk to MM
 Update Page Table
 Restart the Instruction / Go to “Start Instruction”
Steps in Handling a Page Fault
Demand Paging
 Could bring entire process into memory at load time
 Or bring a page into memory only when it is needed
 Less I/O needed: no unnecessary I/O
 Less memory needed for each program
 Faster response
 More users / programs
 Page is needed → reference to it
 valid reference → execute
 invalid reference → abort
Valid-Invalid Bit
 With each page table entry a valid–invalid bit is associated
 v → in-memory
 i → not-in-memory
 Initially, the valid–invalid bit is set to i on all entries
 Example of a page table snapshot:

 During MMU address translation, if the valid–invalid bit in the page table entry is i → page fault
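The translation check can be sketched in a few lines; `PAGE_TABLE` and its contents are hypothetical example data, and the exception stands in for the hardware trap to the OS.

```python
# Hypothetical page table: page number -> (valid-invalid bit, frame number).
PAGE_TABLE = {0: ("v", 3), 1: ("i", None), 2: ("v", 7)}

def translate(page):
    """Return the frame for `page`, or raise if the entry is invalid."""
    bit, frame = PAGE_TABLE.get(page, ("i", None))
    if bit == "i":
        raise RuntimeError("page fault")  # trap to the OS page-fault handler
    return frame

print(translate(0))  # 3
```

Referencing page 1 here raises the “page fault” exception, mirroring the i → page fault rule above.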
Performance of Demand Paging (Cont.)
 Page Fault Rate: 0 ≤ p ≤ 1
 if p = 0, no page faults
 if p = 1, every reference is a fault
 Effective Access Time (EAT)
EAT = (1 – p) × memory access time
  + p × (page fault overhead + swap page out + swap page in)
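As a sanity check, the EAT formula can be evaluated directly. The 200 ns access time and 8 ms total fault service time below are assumed example figures, not values from the slides.

```python
def effective_access_time(p, mem_ns, fault_ns):
    """EAT = (1 - p) * memory access time + p * total page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

# Assumed figures: 200 ns memory access, 8 ms (8,000,000 ns) fault service.
eat = effective_access_time(1 / 1000, 200, 8_000_000)
print(eat)  # 8199.8
```

Even one fault per thousand references inflates the effective access time from 200 ns to about 8.2 µs, which is why keeping p small matters so much.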
Fetch Policy
 Fetch Policy:
 Determines which page should be brought into Main Memory.
 Demand Paging
– Bring the page into Main Memory when it is referenced.
– First reference is always a Page Fault
 Anticipatory paging
– Predict which page will be used in future
» Locality of reference
» Delayed freeing
First-In-First-Out (FIFO) Algorithm
 “The page brought in first should be replaced first”
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)

15 page faults
 Fault counts can vary with the reference string and frame count: consider 1,2,3,4,1,2,5,1,2,3,4,5
 Consider 3 Frames
 Consider 4 Frames
FIFO Illustrating Belady’s Anomaly
 3 Frames: 9 Page Faults
 4 Frames: 10 Page Faults
 Adding more frames can cause more page faults!
 Belady’s Anomaly (affects FIFO; stack algorithms such as LRU and Optimal are immune)
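A short FIFO simulator reproduces the anomaly on the reference string above; this is a minimal sketch, with `fifo_faults` as an illustrative name.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    frames = deque()          # oldest page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: FIFO arrival order is unchanged
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()  # evict the page that arrived first
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10: more frames, more faults (Belady's anomaly)
```

Note that a hit does not refresh a page's position in the FIFO queue; that is exactly what makes FIFO a non-stack algorithm and opens the door to the anomaly.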
Graph of Page Faults Versus The Number of Frames
Optimal Algorithm
 “Replace the page that will not be used for the longest period of time”
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)
Optimal Algorithm
 Page Faults: 9
 Pros:
 Optimal performance
 Cons:
 Can’t read the future
 Used as a benchmark to measure how well other algorithms perform
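Although unimplementable online, the optimal policy is easy to simulate offline, since the whole reference string is known. A sketch, with `optimal_faults` and `next_use` as illustrative names:

```python
def optimal_faults(refs, n_frames):
    """Simulate the optimal (clairvoyant) policy; return the fault count."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # free frame available
            continue

        def next_use(p):
            # Position of p's next reference, or infinity if never used again.
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")

        # Evict the resident page whose next use lies farthest in the future.
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9
```

This is how the 9-fault benchmark above is obtained; any practical algorithm on the same string can only do worse or equal.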
Least Recently Used (LRU) Algorithm
 “Use past knowledge rather than future”
 Replace the page that has not been used for the longest time
 Associate a “time of last use” with each page
 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)
Least Recently Used (LRU) Algorithm
 Page Faults: 12
 Better than FIFO but worse than Optimal
 Generally good algorithm and frequently used
 But how to implement?
