Memory Management_OS

 Main memory is divided into two parts: one for the operating system and the other for user programs.

 In uniprogramming, the OS occupies one part and the program currently being executed occupies the other.

 In a multiprogramming system, the user part of memory must be further subdivided to accommodate multiple processes.

 The memory management function of the OS:

1. Keeps track of which parts of memory are free and which are allocated.
2. Determines how memory is allocated among competing processes: who gets memory, and how much they are allowed.
3. When memory is allocated, determines which memory locations will be assigned.
4. Tracks when memory is freed and updates the status.

The basic memory management function of the OS is to bring processes into main memory for execution by the processor.
1. Memory Partitioning (an obsolete technique)

a) Fixed Partitioning (Multiprogramming with a Fixed number of Tasks - MFT)
The user section of main memory is partitioned into regions of fixed sizes.

Two variations of fixed partitioning:

Equal-Size Partitions:
 The user memory is divided into partitions of equal, fixed sizes.
 Any process whose size is less than or equal to the partition size can be loaded into any available partition.
 If all partitions are full, the OS can swap a process out of any of the partitions and load in another process.

(Figure: fixed partitioning of a 64 MB memory.)
 Placement algorithm:
As long as there are available partitions, a process can be loaded into any available partition. Because all partitions are of equal size, it does not matter which partition is used.

 Drawbacks of fixed partitioning:

1. A program may be too big to fit into a partition.
2. Inefficient utilization of main memory: a small program still occupies an entire partition.
Eg: a program of 2 MB occupies an 8 MB partition, wasting 6 MB.
The wasted space internal to a partition, due to the fact that the block of data loaded is smaller than the partition, is called internal fragmentation.

Solution: use unequal-sized partitions.


Unequal-size partitions
 Placement algorithm with unequal-sized partitions:
Two possible ways:
1. Unequal-size partitions, use of multiple queues:
◦ assign each process to the smallest partition within which it will fit.
◦ a queue exists for each partition size.
◦ tries to minimize internal fragmentation.
◦ problem: some queues might be empty while others might be loaded.
Eg: there might be no processes with a size between 12 M and 16 M, so that partition remains unused.
Eg: processes of size 11 M, 1 M, 9 M:
11 M - queue of the 12 M partition
1 M - queue of the 2 M partition
9 M - queue of the 10 M partition
2. Unequal-size partitions, use of a single queue:
◦ when it is time to load a process into memory, the smallest available partition that will hold the process is selected.
◦ increases the level of multiprogramming at the expense of internal fragmentation.
Eg: queue contains 7M, 6M, 8M, 1M, 9M, 8M, 5M
Disadvantages of MFT:
 The number of active processes is limited by the system, i.e. by the pre-determined number of partitions.
 A large number of very small processes will not use the space efficiently, giving rise to internal fragmentation, in either the equal- or unequal-size partition variation.
2. Dynamic Partitioning (Multiprogramming with a Variable number of Tasks - MVT)
 Developed to overcome the difficulties of fixed partitioning.
 Partitions are of variable length and number, dictated by the sizes of the processes.
 A process is allocated exactly as much memory as it requires.
 Eg: 64 MB of main memory. Initially memory is empty except for the OS.
• Process 1 arrives, of size 20 M.
• Next, process 2 arrives, of size 14 M.
• Next, process 3 arrives, of size 18 M.
• Next, process 4 arrives, of size 8 M.
There is unused memory of 4 M at the end, too small for process 4 (8 M).
• At some point later, assume process 2 gets blocked.
• The OS swaps out process 2, which leaves sufficient room for process 4.
• Process 4 is loaded, creating another small hole of 6 M.
• Process 1 finishes execution on the CPU, so the OS swaps out process 1.
• Assume process 2 gets unblocked; the OS swaps process 2 back in, creating another hole of 6 M.
 MVT eventually leads to a situation in which there are many small holes in memory. This is called external fragmentation.

There are really two types of fragmentation:

1. Internal fragmentation: allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition that is not being used.
2. External fragmentation: total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes.

Note: MVT suffers from external fragmentation, not internal fragmentation.

Compaction: from time to time the OS shifts the processes so that they are contiguous and all the free memory is together in one block, which may be sufficient to load an additional process.

Drawbacks of compaction:
 Time consuming due to relocation.
 Wastes CPU time.
Dynamic Partitioning Placement Algorithms
 In partition allocation, when there is more than one free partition available to accommodate a process's request, a partition must be selected; to choose a particular partition, a partition allocation method is needed.
 i.e. when a new process has to be brought in, the OS must decide which free block to allocate.

 Placement algorithms for MVT:

1. Best fit
2. First fit
3. Worst fit
4. Next fit
1. Best fit
Chooses the block that is closest in size to the request: it searches the entire list of holes to find the smallest hole whose size is greater than or equal to the size of the process.
2. First fit
Scans memory from the beginning and chooses the first available block that is large enough.
3. Worst fit
Allocates the process to the largest sufficient partition among the freely available partitions in main memory. It is the opposite of best fit: it searches the entire list of holes to find the largest hole and allocates it to the process.
4. Next fit
Scans memory from the location of the last placement: similar to first fit, but it searches for the first sufficient partition starting from the last allocation point.
Eg: the last block used was a 22 MB block, from which 14 MB was used by a process, leaving a hole of 8 MB. Assume now that a program of size 16 MB has to be allocated memory.
 Performance of the approaches:
1. Best fit
◦ Worst performer overall.
◦ Since the smallest adequate block is found for each process, the fragments left over are the smallest.
◦ Memory compaction must be done more often.
2. First fit
◦ Scans memory from the beginning and chooses the first available block that is large enough.
◦ Fastest.
◦ May leave many processes loaded at the front end of memory that must be scanned over when trying to find a free block.
3. Next fit
◦ Scans memory from the location of the last placement.
◦ More often allocates a block at the end of memory, where the largest block is found.
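The scanning behaviour of these algorithms can be sketched in a few lines. This is an illustrative sketch, not from the slides: holes are (start, size) pairs with sizes in MB, and the hole list is invented for the example.

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, (_, hole_size) in enumerate(holes):
        if hole_size >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(hole_size, i) for i, (_, hole_size) in enumerate(holes)
                  if hole_size >= size]
    return min(candidates)[1] if candidates else None

def next_fit(holes, size, last):
    """Like first fit, but resume scanning just after index `last`."""
    n = len(holes)
    for k in range(n):
        i = (last + 1 + k) % n
        if holes[i][1] >= size:
            return i
    return None

# Illustrative free-hole list: (start address, size in MB)
holes = [(0, 10), (20, 4), (30, 20), (60, 18), (90, 7)]
print(first_fit(holes, 16))    # 2: first hole >= 16 MB
print(best_fit(holes, 16))     # 3: 18 MB is the tightest fit
print(next_fit(holes, 16, 2))  # 3: the scan resumes after the last placement
```

Note how best fit must examine every hole, while first fit can stop at the first match, which is why first fit is the fastest of the three.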
Issues in modern computer systems:
 Physical main memory is not as large as the size of the user programs.
 In a multiprogramming environment, more than one program is competing to be in main memory.
 The size of each may be larger than the size of main memory.

Solution:
 The parts of a program currently required by the processor are brought into main memory; the remainder is stored on secondary storage devices.
 Movement of programs and data between secondary storage and main memory is handled by the OS.
 The user does not need to be aware of the limitations imposed by the available memory.

Def: Virtual memory
 VM is a concept that enables a program to execute even when its entire code and data are not present in memory.
 It permits execution of a program whose size exceeds the size of main memory.
 VM is implemented through techniques called paging and segmentation.
Paging
 Logical address: a location relative to the beginning of a program.
 Physical address: an actual location in main memory.
 The CPU, when executing a program, is aware of only the logical addresses.
 When main memory is accessed, it must be presented with physical addresses.
 The Memory Management Unit (MMU) is the computer hardware component responsible for converting logical addresses into physical addresses.
 In the paging scheme:
User programs are split into fixed-size blocks called pages.
Physical memory is also divided into fixed-size slots called page frames.
Typical page sizes: 512 bytes, 1 K, 2 K, or 8 K in length.
Allocating memory to a program consists of finding a sufficient number of unused frames for loading the pages of the program; each page requires one page frame.
Logical address (m bits):

Page no. (m - n bits) | Offset (n bits)

 The lower-order n bits designate the offset within the page.
 The higher-order (m - n) bits designate the page number.
How are user programs mapped into physical memory?
(Figure: a logical memory of 2^m bytes, here m = 4, holding bytes a-p; it is divided into pages of 2^n bytes, and the pages are mapped into the page frames of physical memory.)
(Figure: the same 16-byte logical memory a-p; logical address 0111 splits into page number 01 and offset 11.)
Virtual Memory: Paging
Paging example: a 32-byte physical memory (8 frames) with 4-byte pages. The 16-byte logical memory (bytes a-p, pages 0-3) is mapped by the page table:

Page 0 -> Frame 5
Page 1 -> Frame 6
Page 2 -> Frame 1
Page 3 -> Frame 2

The page table gives the information about which page frame holds each page.
Mapping from logical address to physical address:
Assume the CPU wants to access the data corresponding to logical address 14:

1 1 1 0

Page no. | Offset

1. Use the page number field (11 = 3) to index into the page table (PT).
2. Indexing the PT, we find page 3 is in frame 2 of main memory.
3. 2 x 4 (page size) = 8.
4. 8 is the base address of frame 2.
5. Use the offset of the logical address to access the byte within the frame: offset = 10 (binary) = 2.
6. 8 + 2 (offset) = 10.
Hence logical address 14 maps to physical address 10.
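The steps above can be checked with a minimal sketch, using the page table from the 32-byte example (pages 0-3 in frames 5, 6, 1, 2) and a 4-byte page size:

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page no. -> frame no.

def translate(logical):
    # Split the logical address into page number and offset,
    # then combine the frame's base address with the offset.
    page_no, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page_no]
    return frame * PAGE_SIZE + offset

print(translate(14))  # 10: page 3 -> frame 2, 2*4 + 2
```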
 Suppose a program is of size 2700 bytes, 16-bit addresses are used, and the page size is 1 K = 1024 bytes.
 Consider a relative (logical) address 1502, which in binary form is:

0000010111011110

 Dividing the program into pages of size 1 K, logical address 1502 is located on logical page 1, at offset 478: (p, d) = (1, 478).
• The logical address consists of a page number and an offset. Here the address field is 16 bits.
• The number of bits in the offset depends on the page size.
• Since the page size is 1024 bytes = 2^10 bytes, 10 bits are used for the offset and 6 bits for the page number.
• Divide the address into a six-bit page number field: 000001₂ = 1, and a 10-bit displacement field: 0111011110₂ = 478.
Paged Address Translation
(Notice that only the page# field changes during translation.)
 Eg 2:
 Logical address 2100:
 000010 0000110100
page 2, offset = 52
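Both splits follow directly from the 10-bit offset; a small sketch (assuming the same 16-bit address, 1 KB page layout as the examples):

```python
OFFSET_BITS = 10  # 1 KB pages -> 10 offset bits

def split(addr):
    """Split a logical address into (page number, offset)."""
    page = addr >> OFFSET_BITS            # high-order bits
    offset = addr & ((1 << OFFSET_BITS) - 1)  # low-order 10 bits
    return page, offset

print(split(1502))  # (1, 478)
print(split(2100))  # (2, 52)
```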
Paging
 At a given point in time, some of the frames in memory are in use and some are free.
 Not all pages of a program are loaded into memory: the pages that will be required are placed in memory, while the other pages are kept on the backing store (secondary memory).
 Structure of a page table:

Page No | Frame No | Valid bit | Dirty bit
0       | 5        | 1         | 1
1       | 2        | 1         | 0
2       | 3        | 1         | 1
3       | -        | 0         | 0
4       | -        | 0         | 0

Valid bit: indicates whether a page is actually loaded in main memory.
Dirty bit: indicates whether the page has been modified during its residency in main memory.
What happens when the CPU tries to use a page that was not brought into memory?
1. The MMU tries to translate the logical address into a physical address using the page table.
2. During translation, the page number of the logical address is used to index the PT; the valid bit of that page table entry indicates whether the page is in memory or not.
3. If the bit = 0 then the page is not in memory: this is a page fault, i.e. a request for a page that is not in main memory, and it causes an interrupt to the OS.
4. The OS searches main memory to find a free frame to bring in the required page.
5. A disk operation is scheduled to bring the desired page into the newly allocated frame.
6. The page table is modified to indicate that the page is now in main memory.
7. The MMU uses the modified PT to translate the logical address to a physical address.
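Steps 1-7 can be sketched as follows. This is an illustrative sketch, not from the slides: the page size, frame count and free-frame list are assumptions, and the disk read is only a comment.

```python
PAGE_SIZE = 1024

class PagedMemory:
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))
        self.page_table = {}   # page no. -> frame no. (valid entries only)
        self.faults = 0

    def translate(self, logical):
        page_no, offset = divmod(logical, PAGE_SIZE)
        if page_no not in self.page_table:     # valid bit == 0 -> page fault
            self.faults += 1
            frame = self.free_frames.pop(0)    # OS finds a free frame
            # ... disk read of the page into `frame` would happen here ...
            self.page_table[page_no] = frame   # PT updated; valid bit set
        return self.page_table[page_no] * PAGE_SIZE + offset

mem = PagedMemory(num_frames=4)
mem.translate(3000)          # page 2: fault, loaded into frame 0
addr = mem.translate(3000)   # second access: no further fault
print(addr, mem.faults)      # 952 1  (frame 0 * 1024 + offset 952)
```

The sketch assumes a free frame always exists; when none does, a replacement policy (covered later) must choose a victim frame.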
 There is one page table per process.
 When a process is in memory, its page table is also brought into memory (the area of main memory where the OS is loaded).
 The number of entries in the page table depends on the size of the process.
 In a multiprogramming environment there is more than one process in main memory, hence the amount of memory devoted to page tables alone can be unacceptably high.
 To overcome this problem, page tables are stored in secondary memory rather than physical memory.
 When a process is running, only a part of its page table must be in main memory, including the page table entry of the currently executing page.
 This means the page table itself is subject to paging.
 The logical address space is broken up into multiple page tables.
 A simple technique is a two-level page table.

 Two-Level Paging Example
 A logical address (on a 32-bit machine with a 4K page size) is divided into:
 1. a page number consisting of 20 bits.
 2. a page offset consisting of 12 bits.
 Since the page table is paged, the page number is further divided into:
 a 10-bit page number, and
 a 10-bit page offset.
 Thus, a logical address is as follows:

p1 | p2 | offset

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
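The 10/10/12 split above can be expressed with bit arithmetic (a sketch; the sample address is invented for illustration):

```python
def split_two_level(addr):
    """Split a 32-bit address into (p1, p2, offset) for 4 KB pages."""
    offset = addr & 0xFFF          # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: index into a user page table
    p1 = addr >> 22                # top 10 bits: index into the outer page table
    return p1, p2, offset

print(split_two_level(0x00403005))  # (1, 3, 5)
```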
 Eg 1:
 Let the logical address be 5 bits,
 i.e. the user program is 32 bytes.
 Page size = 2 bytes,
 i.e. 16 pages.

Logical address: PN | Offset

 Number of bits in the offset: 1.
 Number of bits in the page number: 4.

 Hierarchical Page Table :

2 2 1
0000
1
0001
9 0010
4 0011
-

Root page 3 “
table “
5 “
00 - ‘
1000 -
01 - 2
8
10 - 7
0
11 5000 10
57
34

- page
pagetable of a
process
 The root page table of a process always remains in main memory.
 The first 2 bits of the virtual (logical) address are used to index into the root page table to find a page table entry (PTE) for a page of the user page table.
 If that page is not in main memory, a page fault occurs.
 If that page is in main memory, then the next 2 bits of the logical address index into the user page table to find the frame number.
 Consider a simple inverted page table:
◦ There is one entry per physical memory frame.

◦ The table is shared among the processes, so each PTE must contain the pair <process ID, virtual page #>.

◦ The physical frame # is not stored, since the index in the table corresponds to it.

◦ In order to translate a virtual address, the virtual page # and current process ID are compared against each entry, scanning the array sequentially.

◦ If a match is found, its index (in the inverted page table) is used to obtain a physical address.

◦ If no match is found, a page fault occurs.

The search can be very inefficient, since finding a match may require searching the entire table. To speed up the searching, hashed inverted page tables are used.
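The sequential lookup can be sketched directly. This is illustrative only: the table entries and page size are invented, and the scan is exactly the slow linear search that hashing is meant to avoid.

```python
# One entry per physical frame: (pid, virtual page #).
# The frame number is the entry's index, so it is not stored.
inverted = [(21, 1), (11, 5), (21, 0), (7, 2)]

def translate(pid, vpage, offset, page_size=1024):
    for frame, entry in enumerate(inverted):   # linear scan over all frames
        if entry == (pid, vpage):
            return frame * page_size + offset
    raise LookupError("page fault")            # no matching entry

print(translate(21, 0, 10))  # 2058: match at index 2, so frame 2 * 1024 + 10
```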
Hashed Inverted Page Tables
• Main idea:
– One PTE for each physical frame.
– Hash (pid, virtual page #) to a frame #.
• Pros:
– Small page table for a large address space.
• Cons:
– Lookup is difficult.
– Overhead of managing hash chains, etc.
(Figure: the virtual address <PID, vpage #, offset> is hashed into the inverted page table; colliding entries are chained, and the index k of the matching entry becomes the frame number of the physical address.)
 Page tables are stored in main memory.
 Every reference to memory causes two physical memory accesses:
1. One to fetch the appropriate page table entry from the PT.
2. A second to fetch the desired data from main memory, once the frame number is obtained from the PT.
 Hence implementing a virtual memory scheme has the effect of doubling the memory access time.
Solution:
Use a special cache for page table entries, called the Translation Lookaside Buffer (TLB).
The TLB contains the most recently used page table entries.
Operation of paging using a TLB:
1. The logical address is in the form:

Page no. | Offset

2. During translation, the MMU consults the TLB to see if the matching page table entry is present.
3. If a match is found, the physical address is generated by combining the frame number at that TLB entry with the offset.
4. If no match is found, the entry is accessed from the page table in memory.
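The TLB-first lookup can be sketched as follows (illustrative only: the page table, TLB contents and page size are invented, and a real TLB is a small fixed-size hardware cache with an eviction policy, which this dict omits):

```python
PAGE_SIZE = 1024
page_table = {0: 7, 1: 3, 2: 9, 3: 4}   # full table, lives in main memory
tlb = {1: 3}                            # most recently used entries only

def translate(logical):
    page_no, offset = divmod(logical, PAGE_SIZE)
    if page_no in tlb:               # TLB hit: no memory access for the PTE
        frame = tlb[page_no]
    else:                            # TLB miss: fetch the PTE from memory
        frame = page_table[page_no]
        tlb[page_no] = frame         # cache it for subsequent references
    return frame * PAGE_SIZE + offset

print(translate(1030))  # 3078: page 1 hits in the TLB, frame 3 * 1024 + 6
print(translate(2050))  # 9218: page 2 misses and is fetched from the page table
```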
1. Resident set size management:
 The OS must decide how many page frames to allocate to a process:
◦ a large page fault rate results if too few frames are allocated.
◦ a low multiprogramming level results if too many frames are allocated.
Resident Set Size
 Fixed-allocation policy:
◦ allocates a fixed number of frames that remains constant over time.
◦ the number is determined at load time and depends on the type of the application.
 Variable-allocation policy:
◦ the number of frames allocated to a process may vary over time:
 it may increase if the page fault rate is high.
 it may decrease if the page fault rate is very low.
◦ requires more OS overhead to assess the behavior of active processes.
Replacement Scope
 The replacement scope is the set of frames to be considered for replacement when a page fault occurs.

Local replacement policy:
◦ each process selects from only its own set of allocated frames.
Global replacement policy:
◦ a process selects a replacement frame from the set of all frames; one process can take a frame from another.

FIFO page replacement:
 Adv: simple to implement.
 Disadv: suffers from Belady's anomaly.

Belady's anomaly
(Figure: the same reference string run with 3 page frames and with 4 page frames.)

 Belady's anomaly demonstrates that increasing the number of page frames may also increase the number of page faults.
Optimal page replacement
 Throws out the page that won't be used for the longest time in the future.
(In the illustrated example, the number of page faults is 6.)

LRU (Least Recently Used)
 Throws out the page that hasn't been used for the longest time.
(In the illustrated example, the number of page faults is 8.)
Implementation of LRU
 Keep a counter for each page.
 Every time a page is referenced, save the system clock into the counter of the page.
 Page replacement: scan through the pages to find the one with the oldest clock value.
 Problem: we have to search all pages/counters!

 Clock page replacement / second-chance algorithm / LRU-approximation algorithm
 Maintain a circular list of the pages resident in memory: the clock.
 Keep a reference bit per page; the bit is set whenever the page is referenced.
 The pointer to the next victim is the clock hand.
Clock Page Replacement algorithm
 Keep a use (or reference) bit for each page frame.
 When a page is referenced: set the use bit.
 Page replacement: look for a page with the use bit cleared (= 0), i.e. one that has not been referenced for a while.

Implementation:
 A circular list instead of a queue.
 The clock hand points to the oldest page.
 If (R == 0) then
◦ the page is unused:
◦ replace it.
 Else
◦ clear R, and
◦ advance the clock hand.
 Basically, it is a FIFO algorithm.
 When a page has been selected, we inspect its reference bit.
 If the value is 0, we proceed to replace this page; otherwise, we give the page a second chance and move on to select the next FIFO page.
 When a page gets a second chance, its reference bit is cleared.
 If a page is used often enough to keep its reference bit set, it will never be replaced.
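The hand's loop above can be sketched in a few lines (illustrative only; the frame contents are invented):

```python
def clock_replace(frames, hand):
    """frames: list of [page, use_bit]. Returns (victim_index, new_hand)."""
    while True:
        page, use = frames[hand]
        if use == 0:                        # not referenced recently: victim
            return hand, (hand + 1) % len(frames)
        frames[hand][1] = 0                 # second chance: clear R bit
        hand = (hand + 1) % len(frames)     # advance the clock hand

frames = [['a', 1], ['b', 0], ['c', 1]]
victim, hand = clock_replace(frames, 0)
print(frames[victim][0], hand)   # b 2  (a got a second chance, b is replaced)
```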
Clock Replacement with a dirty bit
 Problem with the clock algorithm: it does not differentiate dirty vs clean pages.
 Dirty page: a page that has been modified and needs to be written back to disk.
 It is more expensive to replace dirty pages than clean pages.
 Use the dirty bit to give preference to dirty pages.
Two bits per page: a reference bit and a modify bit.
 (0,0) neither recently used nor modified:
◦ best page to replace.
 (0,1) not recently used but modified:
◦ needs write-out (a "dirty" page).
 (1,0) recently used but "clean":
◦ probably will be used again soon.
 (1,1) recently used and modified:
◦ will be used soon, and needs write-out.
 Page replacement:
reference = 0, dirty = 0 → victim page
reference = 0, dirty = 1 → skip (don't change)
reference = 1, dirty = 0 → set reference = 0, dirty = 0
reference = 1, dirty = 1 → set reference = 0, dirty = 1
 Advance the hand and repeat.
 If no victim page is found, run the swap daemon to flush unreferenced dirty pages to the disk, and repeat.
(Worked example: the enhanced clock algorithm on the reference string a b c d c aw d bw e b aw b c d, where w marks a write, with 4 frames; the table tracks the (reference, dirty) bit pair of each resident page as the hand makes its first and second passes.)
1. FIFO page replacement algorithm
• Assume main memory has 3 page frames. Execution of a program requires 5 distinct pages Pi, where i = 1, 2, 3, 4, 5.
• Let the reference string be:
2 3 2 1 5 2 4 5 3 2 5 2

Frame contents after each reference (H marks a hit):

2 2 2 2 5 5 5 5 3 3 3 3
  3 3 3 3 2 2 2 2 2 5 5
      1 1 1 4 4 4 4 4 2
    H           H   H

Hit ratio = 3/12.

Drawback: FIFO tends to throw away frequently used pages, because it does not take into account the pattern of usage of a given page.
2. Least Recently Used (LRU) algorithm
• Replaces the page which has not been used for the longest period of time.

2 3 2 1 5 2 4 5 3 2 5 2
2 2 2 2 2 2 2 2 3 3 3 3
  3 3 3 5 5 5 5 5 5 5 5
      1 1 1 4 4 4 2 2 2
    H     H   H     H H

No. of page faults = 7.
Hit ratio = 5/12.
LRU generates fewer page faults than FIFO.
Commonly used algorithms
3. Optimal page replacement algorithm
• Replaces the page which will not be used for the longest period of time.

2 3 2 1 5 2 4 5 3 2 5 2
2 2 2 2 2 2 4 4 4 2 2 2
  3 3 3 3 3 3 3 3 3 3 3
      1 5 5 5 5 5 5 5 5
    H   H   H H   H H

Hit ratio = 6/12 = 1/2.
Drawback: difficult to implement, since it requires future knowledge of the reference string.
Used mainly for comparison studies.
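The three traces above can be reproduced with a small simulator. This is a sketch, not from the slides; the OPT tie-break is arbitrary, but it does not affect the fault counts for this reference string.

```python
def simulate(refs, nframes, policy):
    """Count page faults for FIFO, LRU or OPT on a reference string."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":              # a hit refreshes recency
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        elif policy in ("FIFO", "LRU"):      # both evict the list head
            frames.pop(0)
            frames.append(page)
        else:                                # OPT: evict page used furthest ahead
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.remove(victim)
            frames.append(page)
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, simulate(refs, 3, policy))  # FIFO 9, LRU 7, OPT 6
```

The list doubles as the eviction order: for FIFO the head is the oldest arrival, and for LRU (because hits move pages to the tail) the head is always the least recently used page.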
3. Fetch Policy
 Determines when a page should be brought into main memory. Two common policies:
1. Demand paging: pages are loaded solely in response to page faults.
2. Prepaging: pages other than the one demanded by a page fault are brought in, in an attempt to prevent a high level of initial paging.
Prepaging brings in pages whose use is anticipated:
 locality of reference suggests that it is more efficient to bring in pages that reside contiguously on the disk.
 its efficiency is not definitely established: the extra pages brought in are often not referenced.
4. Cleaning Policy
◦ The opposite of the fetch policy: determines when a modified page should be written to secondary memory.
◦ Two common alternatives are:
1. Demand cleaning: a page is written out only when it has been selected for replacement.
2. Precleaning: pages are written out in batches.
 With precleaning, a page is written out but remains in main memory until the page replacement algorithm dictates that it be removed.

 Precleaning allows the writing of pages in batches, but it makes little sense to write out hundreds or thousands of pages only to find that the majority of them have been modified again before they are replaced. The transfer capacity of secondary memory is limited and should not be wasted with unnecessary cleaning operations.

 With demand cleaning, the writing of a dirty page precedes the reading in of a new page. This technique minimizes page writes, but a process that suffers a page fault may have to wait for two page transfers before it can be unblocked.
5. Load Control
 Determines the number of processes that will be resident in main memory (the degree of multiprogramming).
 Critical in effective memory management.
 Too few processes: there will be many occasions when all processes are blocked, and much time will be spent in swapping.
 Too many processes in main memory: the size of each resident set will be inadequate, and frequent faulting will occur.
This results in thrashing.
Thrashing
 Consider the scenario where the resident set size of a process is inadequate:
 If a process does not have enough frames, it will quickly page fault.
 At this point it must replace some page.
 However, since all its pages are in active use, it must replace a page that will be needed right away (assuming a local replacement scope).
 Hence it will quickly fault again, and again.
 The process continues to fault, replacing pages for which it then faults and brings back right away.
This high paging activity is called thrashing.
A process is thrashing if it is spending more time paging than executing.
Cause and effects of thrashing
Consider the following scenario:
 The OS monitors CPU utilization.
 If CPU utilization is too low, the degree of multiprogramming is increased by introducing a new process to the system.
 Assume global page replacement is used.
 Suppose a process enters a new phase in its execution and needs more frames.
 It starts faulting and taking frames away from other processes.
 Those processes need the pages they lost, so they also start faulting, taking frames from yet other processes.
 All these faulting processes must use the paging device to swap pages in and out.
 As a result they queue up for the paging device, and the ready queue empties.
 Hence CPU utilization decreases.
 The OS sees decreasing CPU utilization and increases the degree of multiprogramming.
 The new processes try to get started by taking frames from running processes, causing more page faults and a longer queue for the paging device.
 As a result CPU utilization drops even further, and the OS tries to increase the degree of multiprogramming even more.
 Thrashing has occurred and system throughput plunges: no work is getting done because the processes are spending all their time paging.
 If CPU utilization is plotted against the degree of multiprogramming: as multiprogramming increases, CPU utilization increases until a maximum is reached.
 If multiprogramming is increased even further, thrashing sets in and CPU utilization drops sharply.
 At this point, to increase CPU utilization and stop thrashing, the degree of multiprogramming should be decreased.
How to limit the effects of thrashing?
 Use a local replacement algorithm.
 Then one process cannot steal frames from other processes and cause the latter to thrash.
 But processes which are thrashing will still be in the queue for the paging device most of the time.
 Hence the average service time for a page fault will increase even for a process that is not thrashing.
How to prevent thrashing?
1. Working Set Model
 Provide a process with as many frames as it needs.
 How do we know how many that is? Use the working set strategy.
 It is based on the locality model of process execution.
 Processes exhibit locality of reference, meaning that during any phase of execution, a process references only a relatively small fraction of its pages.
 The locality model states that as a process executes, it moves from locality to locality.
 A locality is a set of pages actively used together.
 When a function is called, all pages that span the function are used: this is a locality for that function.
 When the function is exited, the process leaves that locality and enters another.
 Hence, if enough frames are allocated to a process to accommodate its current locality, it will fault for the pages in that locality until all of them are in memory, and then it will not fault again until it changes locality.

 If fewer frames are allocated than the size of the current locality, the process will thrash, since it cannot keep in memory all the pages that it is actively using.

 The WORKING SET MODEL is based on this assumption of locality.
 The working-set model is a way of estimating the size of the current locality of a process.

 Δ ≡ the working-set window ≡ a fixed number of page references.

 The working set is the set of unique pages referenced in the most recent Δ page references. Example: Δ = 10,000.

 Here, with a small window Δ = 5 and the reference string
1 2 3 2 3 1 2 4 3 4 7 4 3 3 4 1 1 2 2 2 1
the working sets at three instants are:
at t1: W = {1, 2, 3}; at t2: W = {3, 4, 7}; at t3: W = {1, 2}.
◦ if Δ is too small, it will not encompass the entire locality.
◦ if Δ is too large, it will encompass several localities.
◦ if Δ → infinity, it will encompass the entire program.
 If the entire working set is in memory, the process will run without causing many faults until it moves into another execution phase.
 m = number of frames in memory.
 D = Σ WSSi = total demand for frames.
 If D > m, thrashing occurs.
 Policy: if D > m, then suspend one of the processes.
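Computing a working set is just collecting the unique pages in the last Δ references. A sketch, using the reference string from the example and the window of 5 references assumed there:

```python
def working_set(refs, t, delta=5):
    """Unique pages referenced in the window of delta refs ending at time t (1-based)."""
    return set(refs[max(0, t - delta):t])

refs = [1, 2, 3, 2, 3, 1, 2, 4, 3, 4, 7, 4, 3, 3, 4, 1, 1, 2, 2, 2, 1]
print(working_set(refs, 5))    # {1, 2, 3}
print(working_set(refs, 15))   # {3, 4, 7}
print(working_set(refs, 21))   # {1, 2}
```

The working set size WSS at each instant is just `len(working_set(refs, t))`; summing over all resident processes gives the demand D compared against m above.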
2. Page Fault Frequency (PFF)
 For each process, keep track of the page fault frequency: the number of faults divided by the number of references.
How to keep track of the PFF?
 When a page fault occurs, the operating system notes the virtual time since the last page fault for that process.
 This is done by maintaining a counter of page references.
 A threshold F is defined.
 If the amount of time since the last page fault is less than F (i.e. the PFF is high), then a page is added to the resident set of that process; otherwise (i.e. the PFF is low), the resident set is shrunk.
Note: the time between page faults is the reciprocal of the page fault rate.
 This strategy can be refined by defining two thresholds: an upper threshold used to trigger growth in the resident set size, and a lower threshold used to trigger a contraction in the resident set size.

(Figure: page fault rate vs. number of allocated frames; above the upper threshold the number of frames is increased, below the lower threshold it is decreased.)

Hence we can control the page fault rate to prevent thrashing.
Segmentation
 Paging is not (usually) visible to the programmer; segmentation is visible to the programmer.
 Segmentation is a memory management scheme that supports the user's view of memory.
 Programmers never think of their programs as a linear array of words. Rather, they think of their programs as a collection of logically related entities, such as subroutines or procedures, functions, global or local data areas, the stack, etc.
 i.e. they view programs as a collection of segments.
 A program consists of functions/subroutines, a main program, and the data required for the execution of the program; each logical entity is a segment.
 Hence if we have a program consisting of two functions and an inbuilt function sqrt, then we have a segment for the main program, one for each of the functions, a data segment, and a segment for sqrt.
(Figure: segments 1-4 in user space mapped to non-contiguous regions of physical memory.)
 Each segment has a name and a length.
 A logical address in segmentation specifies:
1. a segment name (segment number), and
2. an offset within the segment.
 The user therefore specifies each address by two quantities: the segment number and the offset.
 When a user program is compiled, the compiler automatically constructs the different segments reflecting the input program.
Conversion of a logical address to a physical address:
 The mapping is done with the help of a segment table.
 Every process has its own segment table.
 Each entry in the segment table has a segment base and a segment limit.
 The segment base contains the starting physical address where the segment resides in memory.
 The segment limit specifies the length of the segment.
 Eg: segment 2 is 400 bytes long and begins at location (address) 4300.
 A reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
 A reference to byte 852 of segment 3 (base 3200) is mapped onto 3200 + 852 = 4052.
 A reference to byte 1222 of segment 0 would result in a trap (interrupt) to the OS, as this segment is only 1000 bytes long.
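The base + offset check can be sketched as follows. The bases for segments 2 and 3 come from the example; segment 0's base and the limits marked with * are assumptions added for illustration, not values from the text.

```python
# Segment table: segment no. -> (base, limit).
segment_table = {
    0: (1400, 1000),   # base 1400 assumed*; limit 1000 (from the example)
    2: (4300, 400),    # base 4300, limit 400 (from the example)
    3: (3200, 1100),   # base 3200 (from the example); limit 1100 assumed*
}

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:                       # beyond the segment's length
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(3, 852))   # 4052
try:
    translate(0, 1222)     # segment 0 is only 1000 bytes long
except MemoryError as e:
    print(e)               # trap: offset beyond segment limit
```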
 External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
 Each segment is allocated a contiguous piece of physical memory. As segments are of different sizes, swapping segments in and out leads to holes in memory, and hence causes external fragmentation.
 Memory allocation to a segment is done using the first fit, best fit, or worst fit strategy.
 Compaction is needed to resolve external fragmentation.
Segmentation with paging
 Each segment is divided into pages.
 Every segment has a page table.
 An entry in the page table of a segment s contains an address field and a page presence bit F.
 If the page is present in memory, the address field contains the base address of the page (the frame number).
 A referenced page which is not present in memory causes a page fault, and the page is brought from disk into memory.
 The segment table points to the page table for the segment.
Problems
1. Assuming a 15-bit address space with 8 logical pages, how large are the pages?
It takes 3 bits to reference 8 logical pages (2^3 = 8). This leaves 12 bits for the page offset, thus pages are 2^12 bytes long (4096 bytes).
2. Assume a page size of 1 K and a 15-bit logical address space. How many pages are in the system?
2^5 = 32.
3. Consider a paging system with the page table stored in memory. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take?
200 ns x 2 = 400 ns.
4. Consider logical address 1025 and a page table for some process P0 that maps logical page 1 to physical page 0. Assume a 15-bit address space with a page size of 1 K. What is the physical address to which logical address 1025 will be mapped?

Step 1. Convert 1025 to binary:
000010000000001₂
Step 2. Determine the logical page number: since 5 bits are allocated to the logical page number, the address is broken up as follows:
00001 | 0000000001
i.e. logical page 1, offset 1.
Step 3. Use the logical page number as an index into the page table; take the physical page number from the page table (0) and add the offset.
The physical address is byte 1:
000000000000001₂
5. With 256 MB RAM and a 4 KB page size, how many entries will there be in the page table if it is inverted?
256 MB / 4 KB = 2^28 / 2^12 = 2^16 entries.
6. Consider a logical address space of 8 pages of 1024 words each, mapped onto a memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
Ans:
a. The logical address has 3 bits to specify the page number (for 8 pages) and 10 bits to specify the offset into each page (2^10 = 1024 words): 13 bits.
b. For 32 frames of 1024 words each (page size = frame size): 5 + 10 = 15 bits.
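All of these sizing answers are small powers-of-two computations, and can be checked mechanically (a sketch recomputing the answers above):

```python
# Problem 1: 15-bit address space, 8 pages -> 3 page bits, 12 offset bits.
print(2 ** (15 - 3))                  # 4096-byte pages

# Problem 2: 15-bit space, 1 KB (10-bit) pages.
print(2 ** (15 - 10))                 # 32 pages

# Problem 5: inverted page table has one entry per physical frame.
print((256 * 2**20) // (4 * 2**10))   # 65536 entries = 2^16

# Problem 6: page-number bits + offset bits.
print(3 + 10, 5 + 10)                 # 13-bit logical, 15-bit physical address
```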
