Memory Management_OS
1. Keeps track of which parts of memory are free and which are
allocated.
2. Determines how memory is allocated among competing
processes: who gets memory, when, and how much.
3. When memory is allocated, determines which memory locations
will be assigned.
4. Tracks when memory is freed or deallocated and updates the
status.
The basic memory management function of the OS is to bring
processes into main memory for execution by the processor.
1. Fixed Memory Partitioning :
Obsolete technique: main memory is divided into a number of
static partitions at system generation time.
Any process whose size is less than or equal to the partition size
can be loaded into an available partition.
The degree of multiprogramming is limited by the pre-determined
number of partitions.
A large number of very small processes will not use the space
efficiently: a process rarely fills its partition exactly, and the
unused space inside a partition gives rise to internal
fragmentation (in either equal- or unequal-size partition schemes).
2. Dynamic Partitioning (Multiprogramming with variable no. of
tasks - MVT)
Developed to overcome the difficulties with fixed partitioning.
Partitions are of variable length and number, dictated by the sizes
of the processes.
A process is allocated exactly as much memory as it requires.
Eg: 64 MB of main memory. Initially memory is empty except for the OS.
•Process 1 arrives, of size 20 MB.
•Next, process 2 arrives, of size 14 MB.
•Next, process 3 arrives, of size 18 MB.
•Next, process 4 arrives, of size 8 MB.
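The arrival sequence above can be sketched as a small simulation. The notes do not give the size of the OS region, so the sketch assumes the OS occupies the first 8 MB; the hole list and first-fit search are minimal illustrations, not a full allocator.

```python
# Dynamic partitioning (MVT) sketch: 64 MB of main memory,
# assuming the OS occupies the first 8 MB (not stated in the notes).
free = [(8, 56)]                 # list of holes: (start, size) in MB
allocs = {}                      # process -> (start, size)

def allocate(name, size):
    """First-fit scan of the hole list; returns start address or None."""
    for i, (start, hole) in enumerate(free):
        if hole >= size:
            allocs[name] = (start, size)
            if hole == size:
                free.pop(i)                       # hole consumed exactly
            else:
                free[i] = (start + size, hole - size)  # shrink the hole
            return start
    return None                  # no hole large enough

for name, size in [("P1", 20), ("P2", 14), ("P3", 18), ("P4", 8)]:
    print(name, "->", allocate(name, size), "holes:", free)
```

With these sizes, P4 (8 MB) does not fit into the remaining 4 MB hole, which is exactly the situation where dynamic partitioning needs swapping or compaction.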
Drawback of compaction: it is time-consuming and wastes
processor time.
Placement algorithms (which free block to allocate to a process):
1. Best fit
2. First fit
3. Next fit
1. Best fit
Chooses the block that is closest in size to the
request.
It searches the entire list of holes to find the smallest
hole whose size is greater than or equal to the size of
the process.
2. First fit
Scans memory from the beginning and chooses the
first available block that is large enough.
3. Next fit
Similar to first fit, but scans memory from the location of the
last placement and chooses the first sufficient partition found
from that point.
4. Worst fit
Allocates the process to the largest sufficient partition among the
freely available partitions in main memory. It is the opposite of
best fit: it searches the entire list of holes to find the largest
hole and allocates it to the process.
Example (next fit): the last block that was used was a 22 MB
block, from which 14 MB was used by a process, leaving a hole
of 8 MB. Assume now that a program of size 16 MB has to be
allocated memory: next fit continues scanning from that 8 MB
hole rather than from the start of memory.
Performance of the three approaches :
1. Best fit
◦ Worst performer overall.
◦ Since the smallest adequate block is found for each process,
the leftover fragments are the smallest possible, and usually
too small to be useful.
◦ Memory compaction must therefore be done more often.
2. First fit
◦ Scans memory from the beginning and chooses the first
available block that is large enough.
◦ Fastest.
◦ May leave many small holes in the front end of memory that
must be searched over when trying to find a free block.
3. Next fit
◦ Scans memory from the location of the last
placement.
◦ More often allocates a block of memory at the end
of memory, where the largest block is usually found.
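The three placement strategies can be compared on the same hole list. This is a minimal sketch: the hole sizes are illustrative values, and the functions return the index of the chosen hole rather than performing the allocation.

```python
# Sketch of best fit, first fit, and next fit over a list of hole sizes
# (the MB values below are illustrative, not from the notes).
def best_fit(holes, size):
    fits = [h for h in holes if h >= size]          # all sufficient holes
    return holes.index(min(fits)) if fits else None # smallest one wins

def first_fit(holes, size):
    for i, h in enumerate(holes):                   # scan from the beginning
        if h >= size:
            return i
    return None

last = -1                          # next fit remembers the last placement
def next_fit(holes, size):
    global last
    n = len(holes)
    for k in range(1, n + 1):
        i = (last + k) % n         # resume scanning after the last placement
        if holes[i] >= size:
            last = i
            return i
    return None

holes = [8, 12, 22, 18]
print(best_fit(holes, 16))         # 3: 18 MB is the smallest hole >= 16 MB
print(first_fit(holes, 16))        # 2: 22 MB is the first hole >= 16 MB
last = 2                           # pretend the last placement was in hole 2
print(next_fit(holes, 16))         # 3: scanning onward from hole 2 finds 18 MB
```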
Issues in modern computer systems:
Physical main memory is not as large as the size of the user
programs.
In a multiprogramming environment, more than one program is
competing to be in main memory, and their combined size may be
larger than the size of main memory.
Solution : Virtual memory.
Only the parts of the program currently required by the processor
are kept in main memory; a process can execute even when its
entire code and data are not present in memory.
This permits execution of a program whose size exceeds the
size of main memory.
Example: a logical memory (user program) of 16 bytes, holding bytes
a through p, is divided into 4 pages of 4 bytes each:
Page 0: 0 a, 1 b, 2 c, 3 d
Page 1: 4 e, 5 f, 6 g, 7 h
Page 2: 8 i, 9 j, 10 k, 11 l
Page 3: 12 m, 13 n, 14 o, 15 p
A logical address of m bits is split into a page number (high-order
bits) and an offset within the page (low-order bits). E.g. logical
address 0111: page no = 01, offset = 11.
Virtual Memory: Paging
Paging example - 32-byte physical memory with 4-byte pages
(8 frames, Frame 0 to Frame 7), holding the 16-byte logical
memory (a-p) above.
Page table (gives information on which page frame a page is in):
page 0 -> frame 5
page 1 -> frame 6
page 2 -> frame 1
page 3 -> frame 2
Physical memory therefore holds i j k l in frame 1, m n o p in
frame 2, a b c d in frame 5, and e f g h in frame 6; the
remaining frames are free.
Mapping from logical address to physical address-
Assume the CPU wants to access data corresponding to logical
address 14. In binary, 14 = 1110: page no = 11 (page 3),
offset = 10 (2). Page 3 is in frame 2, so the physical address
= 2 x 4 + 2 = 10, which holds 'o'.
Another example, with a page size of 1024 bytes: consider a
relative address (logical address) 1502. Page no = 1502 div
1024 = 1, offset = 1502 mod 1024 = 478.
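The translation above can be sketched directly, using the page table from the 32-byte example (page 0 -> frame 5, 1 -> 6, 2 -> 1, 3 -> 2):

```python
PAGE_SIZE = 4                            # bytes per page in the 32-byte example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame

def translate(logical, page_size=PAGE_SIZE, table=page_table):
    """Split a logical address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical, page_size)
    return table[page] * page_size + offset

print(translate(14))          # page 3, offset 2 -> frame 2 -> 2*4 + 2 = 10

# With 1024-byte pages, logical address 1502:
print(divmod(1502, 1024))     # (1, 478): page 1, offset 478
```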
Paging
At a given point in time, some of the frames in memory
are in use and some are free.
Do not load all pages of a program into memory: put in memory
only the pages that will be required, and keep the other pages
on the backing store (secondary memory).
Structure of a page table (two-level / hierarchical)
Example: the page number part of the logical address is 4 bits,
i.e. 16 pages. The 4 bits are split into two 2-bit fields:
◦ the first 2 bits index the root page table (4 entries:
00, 01, 10, 11);
◦ each root entry points to a user page table, whose entries
hold frame numbers.
[Figure: root page table pointing to the per-process user page
tables; only some user page tables and pages are resident in
main memory.]
The root page table of a process always
remains in the MM.
The first 2 bits of the virtual (logical) address
are used to index into the root page table to find a
page table entry (PTE) for a page of the user
page table.
If that page of the user page table is not in main
memory, a page fault occurs.
If that page is in main memory, then the
next 2 bits of the logical address index into
the user page table to find the frame no.
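The two-level lookup can be sketched as follows. The address layout is an assumption chosen to match the example (2-bit root index, 2-bit page-table index, 4-bit offset, i.e. 16-byte pages); the table contents are illustrative.

```python
# Two-level page table sketch. Assumed 8-bit logical address:
# 2-bit root index | 2-bit page-table index | 4-bit offset.
PAGE_SIZE = 16

root_table = {0b00: {0b00: 5, 0b01: 6},  # root entry -> resident user page table
              0b01: None}                # this user page table is not in MM

def translate(vaddr):
    root_i  = (vaddr >> 6) & 0b11        # first 2 bits index the root table
    table_i = (vaddr >> 4) & 0b11        # next 2 bits index the user page table
    offset  = vaddr & 0b1111
    user_table = root_table.get(root_i)
    if user_table is None or table_i not in user_table:
        raise LookupError("page fault")  # table or page not in main memory
    return user_table[table_i] * PAGE_SIZE + offset

print(translate(0b00_01_0011))           # page 1 -> frame 6 -> 6*16 + 3 = 99
```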
Consider a simple inverted page table
◦ There is one entry per physical memory frame
◦ The table is now shared among the processes, so each PTE must
contain the pair <process ID, virtual page #>
◦ Physical frame # is not stored, since the index in the table corresponds
to it
◦ If a match is found, its index (in the inverted page table) is used to
obtain a physical address.
The search can be very inefficient, since finding a match may
require searching the entire table. To speed up the search,
hashed inverted page tables are used.
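A minimal sketch of the plain inverted table and its linear search (the PIDs, page numbers, and 4 KB page size below are illustrative):

```python
# Inverted page table sketch: one entry per physical frame,
# each entry holding (pid, virtual page #) or None if the frame is free.
inverted = [None, (11, 5), (21, 1), (11, 0)]   # index = frame number
PAGE_SIZE = 4096

def translate(pid, vpage, offset):
    for frame, entry in enumerate(inverted):   # linear search: O(#frames)
        if entry == (pid, vpage):
            return frame * PAGE_SIZE + offset  # the index IS the frame number
    raise LookupError("page fault")

print(translate(21, 1, 100))   # match at index 2 -> 2*4096 + 100 = 8292
```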
Hashed Inverted Page Tables
• Main idea
– One PTE for each physical frame.
– Hash (PID, virtual page #) to a frame number; each PTE holds
<PID, page #, next>, where 'next' chains entries that hash to
the same slot.
• Pros
– Small page table for a large address space.
• Cons
– Lookup is difficult.
– Overhead of managing hash chains, etc.
In the figure, the virtual address <PID = 21, vpage = 001, offset>
hashes into the table; the chain is followed (e.g. through entry
<11, 005>) until the matching entry <21, 001> is found at index k,
so the physical address is frame k plus the offset.
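The hash-and-chain lookup can be sketched as below. The hash function, table size, and the inserted (PID, page) pairs are assumptions for illustration; real designs differ.

```python
# Hashed inverted page table sketch: one PTE per physical frame,
# entries that hash to the same slot are chained via a 'next' index.
PAGE_SIZE, N_FRAMES = 4096, 8
table = [None] * N_FRAMES        # frame index -> (pid, vpage, next) or None
buckets = [None] * N_FRAMES      # hash slot -> first frame index in the chain

def h(pid, vpage):
    return (pid * 31 + vpage) % N_FRAMES   # toy hash function

def insert(frame, pid, vpage):
    slot = h(pid, vpage)
    table[frame] = (pid, vpage, buckets[slot])  # link in front of the chain
    buckets[slot] = frame

def lookup(pid, vpage, offset):
    i = buckets[h(pid, vpage)]
    while i is not None:                        # walk the hash chain
        p, v, nxt = table[i]
        if (p, v) == (pid, vpage):
            return i * PAGE_SIZE + offset       # frame number = table index
        i = nxt
    raise LookupError("page fault")

insert(2, 21, 1)
insert(5, 11, 5)
print(lookup(21, 1, 64))   # found at frame 2 -> 2*4096 + 64 = 8256
```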
Page tables are stored in MM
Every reference to memory causes 2 physical memory
accesses.
1. One to fetch the appropriate PT entry from the PT.
2. Second to fetch the desired data from MM once the frame
no. is obtained from the PT.
Hence implementing a virtual memory scheme has the
effect of doubling the memory access time.
Solution:
Use a special cache for page table entries, called the
Translation Lookaside Buffer (TLB).
The TLB contains the most recently used page table entries.
Operation of paging using a TLB:
1. The logical address is in the form (page number, offset).
2. The TLB is checked for the page number; on a TLB hit, the
frame number is obtained directly (one memory access).
3. On a TLB miss, the page table in main memory is consulted,
and the entry is copied into the TLB.

How many frames should be allocated to a process?
◦ Large page fault rate if too few frames are allocated.
◦ Low multiprogramming level if too many frames are
allocated.
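The hit/miss behaviour can be sketched with a tiny LRU-managed TLB. The page table contents, 4 KB page size, and 2-entry capacity are illustrative assumptions.

```python
from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}          # full page table (in main memory)

class TLB:
    def __init__(self, capacity=2):
        self.entries = OrderedDict()      # page -> frame, in LRU order
        self.capacity = capacity

    def translate(self, logical):
        page, offset = divmod(logical, PAGE_SIZE)
        if page in self.entries:                   # TLB hit: no PT access
            self.entries.move_to_end(page)
            frame = self.entries[page]
        else:                                      # TLB miss: extra access to PT
            frame = page_table[page]               # (page fault if not in PT)
            self.entries[page] = frame
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict least recently used
        return frame * PAGE_SIZE + offset

tlb = TLB()
print(tlb.translate(4100))   # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```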
Resident Set Size
Fixed-allocation policy: gives a process a fixed number of frames
within which to execute, decided at load time.

Page Replacement Algorithms
1. FIFO: replace the page that has been in memory the longest.
FIFO suffers from Belady's anomaly: for some reference strings,
the number of page faults is higher with 4 page frames than
with 3 page frames.
2. LRU (Least Recently Used): replace the page that has not been
used for the longest time.
◦ Replacing a clean (unmodified) page is cheaper than replacing a
dirty page, which must first be written back; use the modify
(dirty) bit to give preference to clean pages.
◦ Each entry keeps 2 bits: a reference bit and a modify bit.
FIFO, reference string 2 3 2 1 5 2 4 5 3 2 5 2, 3 frames:
Ref:      2  3  2  1  5  2  4  5  3  2  5  2
Frame 1:  2  2  2  2  5  5  5  5  3  3  3  3
Frame 2:     3  3  3  3  2  2  2  2  2  5  5
Frame 3:           1  1  1  4  4  4  4  4  2
Hit:            H                 H     H
No. of page faults = 9; Hit ratio = 3/12
LRU, reference string 2 3 2 1 5 2 4 5 3 2 5 2, 3 frames:
Ref:      2  3  2  1  5  2  4  5  3  2  5  2
Frame 1:  2  2  2  2  2  2  2  2  3  3  3  3
Frame 2:     3  3  3  5  5  5  5  5  5  5  5
Frame 3:           1  1  1  4  4  4  2  2  2
Hit:            H        H     H        H  H
No. of page faults = 7; Hit ratio = 5/12
LRU generates fewer page faults than FIFO.
3. Optimal page replacement algorithm
• Replace the page which will not be used for the longest period of time.
Optimal, reference string 2 3 2 1 5 2 4 5 3 2 5 2, 3 frames:
Ref:      2  3  2  1  5  2  4  5  3  2  5  2
Frame 1:  2  2  2  2  2  2  4  4  4  2  2  2
Frame 2:     3  3  3  3  3  3  3  3  3  3  3
Frame 3:           1  5  5  5  5  5  5  5  5
Hit:            H        H     H  H        H  H
No. of page faults = 6; Hit ratio = 6/12 = 1/2
Drawback: difficult to implement, since it requires future knowledge of the
reference string.
Used mainly for comparison studies.
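The three traces above can be reproduced with a short simulator; running it on the reference string from the notes gives the fault counts 9 (FIFO), 7 (LRU), and 6 (Optimal) shown in the tables.

```python
def fifo(refs, frames):
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))   # evict the oldest resident page
            mem.add(p)
            queue.append(p)
    return faults

def lru(refs, frames):
    mem, faults = [], 0                     # ordered least- to most-recent
    for p in refs:
        if p in mem:
            mem.remove(p)                   # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                  # evict least recently used
        mem.append(p)
    return faults

def opt(refs, frames):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # evict the page whose next use is farthest in the future
                def next_use(q, rest=refs[i + 1:]):
                    return rest.index(q) if q in rest else float('inf')
                mem.discard(max(mem, key=next_use))
            mem.add(p)
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(fifo(refs, 3), lru(refs, 3), opt(refs, 3))   # 9 7 6
```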
3. Fetch Policy
Determines when a page should be brought into
main memory. Two common policies:
1. Demand paging: pages are loaded solely in
response to page faults.
2. Prepaging: pages other than the one
demanded by a page fault are brought in, in an
attempt to prevent a high level of initial paging.
Prepaging brings in pages whose use is
anticipated.
Cleaning Policy
Determines when a modified page should be written out to
secondary memory.
1. Demand cleaning: a page is written out only when it has been
selected for replacement.

Thrashing
Occurs when the number of frames allocated to a process is
inadequate:
If a process does not have enough frames it will quickly page
fault. At this point it must replace some page. However, since
all its pages are in active use, it must replace a page it will
need again almost immediately, or steal frames from other
processes. Those processes need their pages too, hence they also
begin to fault, and the system spends more time paging than in
execution.
Processes exhibit a locality of reference, meaning that the pages
they use tend to be referenced together: when a function is
called, all pages that contain its code and local data are used
together.
The working set is the set of unique pages referenced in the most
recent T page references (a window of size T; in practice T might
be, e.g., 10,000). In the example below T = 5, and the reference
string is:
1 2 3 2 3 1 2 4 3 4 7 4 3 3 4 1 1 2 2 2 1
At t1 (after the 5th reference), W = {1,2,3}; at t2 (after the
12th), W = {3,4,7}; at t3 (after the 21st), W = {1,2}.
◦ if T is too small, the window will not encompass the current locality
◦ if T is too large, it will encompass several localities
◦ if T → infinity, it will encompass the entire program
If the entire working set is in memory, the process
will run without causing many faults until it moves
into another execution phase.
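The working-set computation above is just a sliding-window set; on the reference string from the notes, a window of T = 5 reproduces the three sets W(t1), W(t2), W(t3):

```python
def working_set(refs, t, T):
    """Unique pages in the last T references up to (1-based) virtual time t."""
    return set(refs[max(0, t - T):t])

refs = [int(c) for c in "123231243474334112221"]
print(working_set(refs, 5, 5))    # after the 5th reference:  {1, 2, 3}
print(working_set(refs, 12, 5))   # after the 12th reference: {3, 4, 7}
print(working_set(refs, 21, 5))   # after the 21st reference: {1, 2}
```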
Let m = number of frames in memory. If the total size of all
processes' working sets exceeds m, thrashing occurs, and the OS
must suspend one of the processes.
Page-Fault Frequency (PFF) algorithm:
Measure the virtual time since the last page fault for that
process. This is done by maintaining a counter of page references.
A threshold F is defined.
◦ If the amount of time since the last page fault is less than F
(the process is faulting too often), increase the number of
frames allocated to it.
◦ Otherwise, decrease the number of frames.
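The PFF rule can be sketched in a few lines; the threshold value and the process record below are illustrative assumptions.

```python
# Page-fault-frequency sketch: a counter of page references serves as
# virtual time; F is the threshold (the value is illustrative).
F = 100

def on_page_fault(proc):
    elapsed = proc["refs"] - proc["last_fault"]  # virtual time since last fault
    if elapsed < F:
        proc["frames"] += 1        # faulting too often: allocate another frame
    elif proc["frames"] > 1:
        proc["frames"] -= 1        # faulting rarely: a frame can be reclaimed
    proc["last_fault"] = proc["refs"]

p = {"refs": 160, "last_fault": 100, "frames": 4}
on_page_fault(p)
print(p["frames"])                 # 60 < 100, so frames increased to 5
```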
Issues associated with paging
Hence we can control the page fault rate to
prevent thrashing
Paging is not (usually) visible to the programmer
Segmentation is visible to the programmer.
Segmentation is a memory management scheme that supports
user’s view of memory.
Programmers never think of their programs as a linear array of
words. Rather, they think of their programs as a collection of logically
related entities, such as subroutines or procedures, functions, global
or local data areas, stack etc.
i.e. they view programs as a collection of segments.
A program consists of functions/subroutines , main program and data
which are required for the execution of the program.
Each logical entity is a segment.
Hence if we have a program consisting of a main program, 2
functions, data, and an inbuilt function sqrt, then we have five
segments: one for the main program, one for each of the two
functions, a data segment, and a segment for sqrt.
[Figure: user's view of a program as a collection of segments,
placed non-contiguously in physical memory.]
Each entry of the segment table has a segment base and a
segment limit. The segment base contains the starting physical
address where the segment resides in memory, and the segment
limit specifies the length of the segment.
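Segmented address translation can be sketched as a base-plus-offset lookup with a limit check. The base/limit values below are illustrative, chosen to match the example program (main, two functions, data, sqrt is omitted for brevity):

```python
# Segmentation sketch: (base, limit) pairs are illustrative values.
segment_table = [
    (1400, 1000),   # segment 0: main program
    (6300,  400),   # segment 1: a function
    (4300,  400),   # segment 2: data
    (3200, 1100),   # segment 3: another function
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # offset beyond the segment's length: addressing error (trap)
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    return base + offset

print(translate(2, 53))   # data segment: 4300 + 53 = 4353
```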