Memory Management07

Uploaded by Omer Polat

Memory Management

Memory Management Unit

 memory is a critical resource
– efficient use
– sharing
 the memory management unit (MMU) manages it
Memory Management Unit:
Main Aims

 relocation
– physical memory assigned to a process may be
different during each run
– physical and absolute addresses should not be
used
 protection
– a process cannot access another process's memory
area
 sharing
– code / data sharing
Memory Management Unit:
Main Aims

 logical organization
– in traditional systems: linear address space
(0…max)
– programs: written as modules / procedures
 physical organization
– transfers between main memory and secondary
storage
Allocating Memory

 memory allocation: allocate memory to a
program
 not required to have the whole program in
memory
– load as required
– more efficient memory usage
– more costly
Static / Dynamic Memory Allocation

 programs with absolute addresses
– give absolute addresses when writing programs
 symbolic programming
– compiler / linker generates memory addresses from
symbolic names
Static / Dynamic Memory Allocation

 memory allocation:
 static: generate fixed absolute addresses
– linking and loading together with compiling →
fixed addresses (fast)
– or the loader determines absolute addresses when
loading into memory (using relocatable addresses)
– addresses remain fixed during execution
– code remains constant in memory after loading
 dynamic: use relocatable addresses
– a reference gets its absolute address when referenced
– modern operating systems use this approach
(segmentation + paging)
Segmentation

 programs are composed of logical parts
 segmentation reflects the logical structure of programs
 program divided into segments
– segment sizes may be different
– e.g.
 data area as a segment
 a procedure / function as a segment

address = segment start address + offset in segment

 program segments may be in different memory
locations
– may be on disk too (loaded when required)
– address calculation requires special hardware
Segmentation

 address: (s,d)
– s: segment name
– d: offset
 each process has a segment table
– flag: is segment in memory?
– base address of segment
– segment length (limit)
– protection bits
 starting address of segment table kept in a
register
Segmentation

[Figure: segmentation address translation — the segment table register
points to the running process's segment table; entry s holds the flag,
protection bits, limit (segment length in bytes), and base address;
the base is added to offset d to form the memory address, with d
checked against the limit.]
Segmentation

– check flag before address calculation
– “segment fault” if segment not in memory
 interrupt
– segment loaded into memory
 if no room in memory, remove another segment from
memory
– segment sizes may be different → fragmentation in
memory
 segment table register points to start of segment table of
running process
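The check-then-translate sequence above can be sketched in Python. All names and structures here are illustrative, not from the slides: each segment table entry is assumed to hold a present flag, a base address, and a limit.

```python
# Sketch of segmentation address translation (hypothetical structures):
# the MMU checks the flag and the limit, then adds base + offset.

class SegmentFault(Exception):
    """Raised when the segment is not in memory (flag == 0)."""

class ProtectionFault(Exception):
    """Raised when the offset exceeds the segment limit."""

def translate(segment_table, s, d):
    """Translate logical address (s, d) to a physical address."""
    flag, base, limit = segment_table[s]
    if not flag:                      # segment not in memory
        raise SegmentFault(s)         # OS would load it, then retry
    if d >= limit:                    # offset outside the segment
        raise ProtectionFault((s, d))
    return base + d

# Example: segment 0 at base 1000 with limit 400; segment 1 not resident.
table = {0: (1, 1000, 400), 1: (0, 5000, 200)}
print(translate(table, 0, 37))   # -> 1037
```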
Paging

 memory divided into equal sized blocks
– page frames
 program and data also divided into logical blocks of
the same size
– pages
 a page is loaded into a page frame
 address: (p,d)
– p: page name
– d: offset in page
Paging

 info on pages kept in the page table
 page table entry:
– flag: is page in memory?
– page location (memory / secondary storage
address)
– protection bits
 page table register
– points to start of page table of running process
Paging

[Figure: paging address translation — the page table register points
to the running process's page table; entry p holds the flag,
protection bits, and page address; the page start address is combined
with offset d (in bytes) to form the memory address.]
Paging

 check flag before address calculation
– “page fault” if page not in memory
– fetch page from secondary storage
 check protection bits
 operating system keeps a list of free page
frames
 main memory ↔ secondary storage page
transfers = page traffic
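The paging translation steps can be sketched the same way. Structures and names are again illustrative (the slides do not fix a page size; 1024 is assumed here):

```python
# Sketch of paging address translation: a flat logical address is
# split into (p, d); a missing page raises a page fault, after which
# the OS would fetch the page from secondary storage and retry.

PAGE_SIZE = 1024  # assumed page size for the example

class PageFault(Exception):
    pass

def translate(page_table, logical_address):
    """Split a logical address into (p, d) and translate it."""
    p, d = divmod(logical_address, PAGE_SIZE)
    entry = page_table.get(p)
    if entry is None or not entry["in_memory"]:
        raise PageFault(p)            # fetch page, update table, retry
    return entry["frame"] * PAGE_SIZE + d

table = {
    0: {"in_memory": True,  "frame": 5},
    1: {"in_memory": False, "frame": None},   # on secondary storage
}
print(translate(table, 100))     # page 0, offset 100 -> 5*1024 + 100
```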
Paging

 memory allocation easier than in
segmentation
– fixed page size
 problem: page size may be smaller than a
program's logical block
– more than one page needed
– fragmentation
Paging

 external fragmentation
– empty spaces between blocks
 internal fragmentation
– empty spaces within blocks
 no external fragmentation with paging
Paging

 criteria for page size selection:
– page traffic
– internal fragmentation
 large page sizes
– easier main memory ↔ secondary storage transfers
– process has fewer pages → less page traffic
– more internal fragmentation
 small page sizes
– more page traffic
– less internal fragmentation
Result: balance internal fragmentation and page traffic costs
Paging

 Example (internal fragmentation): process size
1545 words
– if page size = 1500 words: process has 2 pages
 second page holds 45 words; 1455 words are
wasted (internal fragmentation)
– if page size = 500 words: process has 4 pages
 last page holds 45 words; 455 words are wasted
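The arithmetic behind this example can be written as a small helper (illustrative, not from the slides):

```python
# Pages needed for a process and the internal fragmentation left in
# the last page.
import math

def pages_and_waste(process_size, page_size):
    pages = math.ceil(process_size / page_size)   # round up to whole pages
    waste = pages * page_size - process_size      # unused space in last page
    return pages, waste

print(pages_and_waste(1545, 1500))   # -> (2, 1455)
print(pages_and_waste(1545, 500))    # -> (4, 455)
```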
Segmentation with Paging

 segments divided into pages
 each segment has a page table
 address: (s,p,d)
– s: segment name
– p: page table access info for the segment
– d: offset in page
[Figure: segmentation-with-paging address translation — the segment
table register points to the segment table; entry s holds the limit
and the page table base address; entry p of that page table holds the
page address, which is combined with offset d (in bytes) to form the
memory address.]
Segmentation with Paging

 3-step address calculation
 time consuming even when hardware is used
– associative registers used → TLB (Translation
Lookaside Buffer)
Segmentation with Paging

 has advantages of both segmentation and
paging
 easy memory allocation due to paging
 no external fragmentation
 through TLB use, address calculation times
become acceptable
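A TLB can be pictured as a small cache in front of the table walk. This is a minimal sketch with hypothetical structures (a real TLB is hardware; capacity and eviction policy here are assumptions):

```python
# Minimal TLB sketch: a small cache mapping (s, p) pairs to frame
# numbers, consulted before the slower segment/page table walk.
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()        # (s, p) -> frame

    def lookup(self, s, p):
        key = (s, p)
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh on hit (LRU order)
            return self.entries[key]        # TLB hit: skip table walk
        return None                         # TLB miss: walk the tables

    def insert(self, s, p, frame):
        self.entries[(s, p)] = frame
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

tlb = TLB(capacity=2)
tlb.insert(0, 3, 17)
print(tlb.lookup(0, 3))   # -> 17 (hit)
print(tlb.lookup(1, 0))   # -> None (miss)
```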
Virtual Memory

 to run, a process must be in memory
– Question: must the whole of the process be in
memory?
 physical addresses are determined after a
process is loaded into memory
– physical addresses may be different during the
whole lifetime of the process
 parts of a process don't have to be placed at
contiguous locations in memory
Virtual Memory
 unused parts are kept in secondary storage
 initially only a part of the process is loaded into main
memory
– the resident set
 if the part being accessed is not in memory
– page fault – interrupt occurs
– process is blocked
– the requested part is loaded into memory
 operating system generates an I/O request
 interrupt occurs when I/O is completed; waiting processes are
awakened and become READY
Virtual Memory

 due to virtual memory, there can be more
processes in the READY state
– more efficient multi-programming
– only necessary parts of a process are in main
memory
– processes larger than the whole main memory can
also be run
 paging / segmentation is used in implementation
– requires hardware support
Virtual Memory

Questions to answer:
 how is space allocated in main memory and on
secondary storage?
– easier with paging
– harder with segmentation due to unequal segment sizes
 what to consider when moving pages/segments
between main memory ↔ secondary storage?
 if main memory is full, which page/segment should
be removed to secondary storage?
Allocation of Memory for Unequal Sized
Segments

 keep free spaces in a linked list in increasing
order of their address values
 in each record of the linked list:
– address of free space
– size of free space
– pointer to next free space
 add memory locations to the list as they are freed
– combine with previous and next records if possible
 de-fragmentation is useful
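The free-and-combine step can be sketched as follows. A plain sorted list of `(addr, size)` tuples stands in for the linked list; the structures are illustrative, not from the slides:

```python
# Sketch of an address-ordered free list: freeing a block inserts it
# in address order and merges it with adjacent holes when they touch.

def free_block(free_list, addr, size):
    """free_list: sorted list of (addr, size) holes; returns a new list."""
    free_list = sorted(free_list + [(addr, size)])
    merged = []
    for a, s in free_list:
        if merged and merged[-1][0] + merged[-1][1] == a:
            prev_a, prev_s = merged[-1]
            merged[-1] = (prev_a, prev_s + s)   # combine with previous hole
        else:
            merged.append((a, s))
    return merged

holes = [(0, 100), (300, 50)]
print(free_block(holes, 100, 200))   # -> [(0, 350)] (all three combine)
```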
Allocation of Memory for Unequal Sized
Segments

 first-fit
– starting from the beginning of the list, allocate the
first free space whose size is greater than or
equal to the required size
– leftover spaces are again added to the list
 next-fit
– start looking for the first appropriate free space
starting from the location of memory space
allocated in the previous request (not from
beginning of list)
– better to have a circular list
Allocation of Memory for Unequal Sized
Segments

 best-fit
– try to find the free space whose size fits the
requested size best (minimum leftover free
space)
– requires going through the whole list for each request
 worst-fit
– opposite of best-fit
– again goes through the whole list for each request
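The placement policies above can be sketched over a list of `(addr, size)` holes (illustrative structures; each function returns the chosen hole's address, or `None` if no hole is large enough):

```python
# First-fit, best-fit, and worst-fit over a list of free holes.

def first_fit(holes, size):
    for addr, hole_size in holes:             # scan from the start
        if hole_size >= size:
            return addr
    return None

def best_fit(holes, size):
    candidates = [(hs, a) for a, hs in holes if hs >= size]
    return min(candidates)[1] if candidates else None   # tightest fit

def worst_fit(holes, size):
    candidates = [(hs, a) for a, hs in holes if hs >= size]
    return max(candidates)[1] if candidates else None   # loosest fit

holes = [(0, 100), (200, 500), (800, 150)]
print(first_fit(holes, 120))   # -> 200 (first hole >= 120)
print(best_fit(holes, 120))    # -> 800 (150 leaves the least leftover)
print(worst_fit(holes, 120))   # -> 200 (500 leaves the most leftover)
```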
Allocation of Memory for Unequal Sized
Segments

 or order the free spaces in increasing order of
their sizes:
– best fit = first fit
– harder to combine neighboring free spaces
 or keep pointers to locations in the list for free
spaces of different sizes
– takes time to update the pointers
Allocation of Memory for Unequal Sized
Segments

 “buddy” system
– divide the whole memory into blocks of size 2^k
– assume the whole memory size is 2^s
 there are (s+1) linked lists:
 2^0, 2^1, 2^2, ..., 2^s
– list(k): pointer to blocks of size 2^k (k=0,1,...,s)
– initially list(s) points to the first location of the
memory
 all other lists are initially empty
“Buddy” System
 assume a block of size 2^k is requested
[request size > 2^(k-1) and ≤ 2^k]
– if list(k) is empty, try list(k+1)
 if list(k+1) is not empty, split that block into two
 add one of the resulting halves to list(k)
 use the other one for the request
– if all lists are empty, the request cannot be satisfied
 when allocated blocks are returned, they are added to the
appropriate lists
– “buddy” blocks are combined
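A minimal buddy allocator along these lines might look as follows (a simplified sketch assuming total memory size 2^s; class and method names are my own):

```python
# Buddy allocator sketch: free_lists[k] holds start addresses of free
# blocks of size 2**k. Allocation splits larger blocks down as needed;
# freeing recombines a block with its buddy while the buddy is free.

class Buddy:
    def __init__(self, s):
        self.s = s
        self.free_lists = [[] for _ in range(s + 1)]
        self.free_lists[s].append(0)          # initially one big block

    def alloc(self, k):
        """Allocate a block of size 2**k; return its address or None."""
        j = k
        while j <= self.s and not self.free_lists[j]:
            j += 1                            # find a big-enough block
        if j > self.s:
            return None                       # request cannot be satisfied
        addr = self.free_lists[j].pop()
        while j > k:                          # split down to size 2**k
            j -= 1
            self.free_lists[j].append(addr + 2 ** j)  # keep one half free
        return addr

    def free(self, addr, k):
        """Return a block and merge with its buddy while possible."""
        while k < self.s:
            buddy = addr ^ (2 ** k)           # buddy's address differs in bit k
            if buddy not in self.free_lists[k]:
                break
            self.free_lists[k].remove(buddy)
            addr = min(addr, buddy)           # merged block's address
            k += 1
        self.free_lists[k].append(addr)

b = Buddy(4)                # memory of size 16
a = b.alloc(2)              # block of size 4 at address 0
print(a)                    # -> 0
b.free(a, 2)                # recombines back into one block of 16
print(b.free_lists[4])      # -> [0]
```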
[Figure: buddy system example]
[Figure: tree representation for the buddy system]
Fetching Techniques
 which criteria should be used when moving pages
from secondary storage → main memory?
– pre-paging
 pages that will be accessed in the near future can be predicted
 load pages into memory before the actual access request
 fewer page faults
 high costs for wrong predictions
 good for data pages, for example
– demand paging
 bring pages to main memory only when they are accessed
Page Replacement
 if there is no available free space in main memory,
a page needs to be moved to secondary storage
– care must be given to possible page traffic
– a page that was just removed from main memory should not
be accessed again soon
 “thrashing”: loss of time
– main aim is to NOT remove USEFUL pages
 pages that won't be accessed in the near future can be removed
– some operating system pages cannot be removed
 frame locking is done by setting a bit
– page selection can be at two levels:
 local: choose from among the pages of the running process
 global: choose from among all the pages
Page Replacement
 select randomly
– easy to implement
– USEFUL pages may be selected
 first in first out – FIFO
– select the page which has been in main memory the longest
– performance may be bad – the oldest page may not be the page that
won't be accessed in the near future
 BIFO (biased FIFO)
– select from among the ni pages of the i-th process, using FIFO for
those ni pages
– different processes may have different numbers of pages in memory
– ni for each process may change over time
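FIFO replacement can be sketched with a queue (an illustrative simulation, counting faults for a page reference string):

```python
# FIFO page replacement sketch: frames are kept in arrival order; on a
# fault with full memory, the page resident the longest is evicted.
from collections import deque

def fifo_faults(reference_string, n_frames):
    """Count page faults for a reference string with n_frames frames."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                  # hit: FIFO order is NOT updated
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()          # evict the oldest resident page
        frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 4, 1], 3))   # -> 5
```

Note the weakness the slide points out: the final access to page 1 faults precisely because 1, though recently used, was the oldest resident and got evicted.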
Page Replacement
 LRU (Least Recently Used)
– high implementation cost, hardware support needed
– for each page, keep a record of the time that has
passed since the last access to that page
– at the end of each quantum, all entries are updated
 clear the access time counters for the accessed pages
 increment the access time counters for all other pages in
main memory (the ones that were not accessed)
 when choosing a page to remove from memory, choose the
one with the highest counter value (meaning the page has not
been accessed for the longest time)
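The counter-based scheme above can be sketched in a few lines (simplified to one update per quantum; names are my own):

```python
# Counter-based LRU sketch: accessed pages get counter 0, all others
# are incremented; the victim is the page with the largest counter.

def end_of_quantum(counters, accessed):
    """Update age counters after a quantum; mutates and returns them."""
    for page in counters:
        if page in accessed:
            counters[page] = 0            # accessed: reset its age
        else:
            counters[page] += 1           # not accessed: grows older
    return counters

def choose_victim(counters):
    """Pick the page unused for the longest time (highest counter)."""
    return max(counters, key=counters.get)

ages = {"A": 0, "B": 0, "C": 0}
end_of_quantum(ages, accessed={"A"})       # B and C age to 1
end_of_quantum(ages, accessed={"A", "C"})  # B ages to 2, C resets
print(choose_victim(ages))                 # -> B
```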
