Operating Systems: Internals and Design Principles
Lecture 08
Virtual Memory
Dr. Muhammed J. Al-Muhammed
"You're gonna need a bigger boat."
Steven Spielberg, JAWS, 1975
Hardware and Control Structures
Two characteristics are fundamental to memory management:
1) all memory references are logical addresses that are dynamically translated into physical addresses at run time
2) a process may be broken up into a number of pieces that don't need to be contiguously located in main memory during execution
If these two characteristics are present, it is not necessary that all of the pages or segments of a process be in main memory.
Execution of a Process
The operating system brings only a few pieces of the program into main memory.
Resident set: the portion of the process that is in main memory.
As the process executes, things proceed smoothly as long as all memory references are to locations that are in the resident set (using the segment or page table, the processor is able to determine whether a referenced location is in main memory or not).
However, an interrupt is generated when an address is needed that is not in main memory (a memory access fault).
The operating system places the process in the Blocked state.
Continued . . .
Execution of a Process
To bring the piece of the process that contains the logical address into main memory:
- the operating system issues a disk I/O read request
- another process is dispatched to run while the disk I/O takes place
- an interrupt is issued when the disk I/O is complete, which causes the operating system to place the affected process in the Ready state
Implications
More processes may be maintained in main memory:
- only some of the pieces of each process are loaded
- with so many processes in main memory, it is very likely that a process will be in the Ready state at any particular time
A process may be larger than all of main memory:
- without partial loading and breaking the process into pieces, the programmer is constrained by the amount of available main memory
- if the program is too large, the programmer must devise ways to structure it into pieces that can be loaded separately, using some sort of overlay strategy
- with virtual memory based on paging or segmentation, that job is left to the operating system and the hardware: the operating system automatically loads pieces of a process into main memory as required
Real and Virtual Memory
Real memory: main memory, the actual RAM.
Virtual memory: memory allocated on disk; it allows for effective multiprogramming and relieves the user of the tight constraints of main memory.
Table 8.2: Characteristics of Paging and Segmentation, with and without the use of virtual memory
Thrashing
The operating system must handle swapping pieces in and out with care; otherwise, some pieces may be swapped out just before they are used, and the operating system must then swap those pieces back in. Too much of this is called thrashing.
Thrashing: a state in which the system spends most of its time swapping process pieces rather than executing instructions.
To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future.
Principle of Locality
Program and data references within a process tend to cluster.
Hence, only a few pieces of a process will be needed over a short period of time.
Therefore it is possible to make intelligent guesses about which pieces will be needed in the future.
Wise exploitation of the principle of locality helps avoid thrashing.
Support Needed for Virtual Memory
For virtual memory to be practical and effective, the following are needed:
- hardware must support paging and segmentation
- the operating system must include software for managing the movement of pages and/or segments between secondary memory and main memory
Paging
The term virtual memory is usually associated with systems that employ paging.
Use of paging to achieve virtual memory was first reported for the Atlas computer.
Each process has its own page table:
- each page table entry contains the frame number of the corresponding page in main memory
With simple paging, all the pieces of the process must be loaded into main memory. When all pieces are loaded, the process's page table is created and loaded into main memory. Thus a page table entry (PTE) contains only the frame number of the corresponding page in main memory.
Memory Management Formats
When virtual memory is used, we still need a page table for each process, but the page table entries become more complicated. We need some control bits:
- P indicates whether the page is in main memory.
- M indicates whether the contents of the page have been modified since the page was last loaded into main memory.
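As an illustration only, the sketch below shows one way such an entry could be declared in C; the field widths are assumptions, not the layout of any particular processor.

```c
/* Hypothetical 32-bit page table entry (field widths are assumed). */
typedef struct {
    unsigned present  : 1;   /* P: page is currently in main memory            */
    unsigned modified : 1;   /* M: page contents changed since it was loaded   */
    unsigned other    : 6;   /* other control bits, e.g. protection or sharing */
    unsigned frame    : 24;  /* frame number of the page in main memory        */
} pte_t;
```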
Address Translation
Reading a word from memory involves the translation of a virtual address, consisting of a page number and an offset, into a physical address, consisting of a frame number and an offset, using a page table.
A register holds the starting address of the page table.
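A minimal sketch of the translation step, assuming 4 KB pages (a 12-bit offset), the hypothetical pte_t layout sketched above, and a placeholder handle_page_fault routine standing in for the OS fault handler:

```c
#include <stdint.h>

#define OFFSET_BITS 12u                        /* assume 4 KB pages          */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u) /* low 12 bits = byte offset  */

/* Placeholder: the OS would bring the missing page into a frame here. */
static uint32_t handle_page_fault(uint32_t page) { (void)page; return 0; }

/* Translate a virtual address using the page table whose starting
 * address (page_table) would be held in a processor register.
 * pte_t is the entry type from the earlier sketch. */
static uint32_t translate(const pte_t *page_table, uint32_t vaddr)
{
    uint32_t page   = vaddr >> OFFSET_BITS;    /* page number                */
    uint32_t offset = vaddr &  OFFSET_MASK;    /* offset within the page     */
    pte_t entry = page_table[page];

    if (!entry.present)                        /* memory access fault        */
        return handle_page_fault(page);

    return ((uint32_t)entry.frame << OFFSET_BITS) | offset;  /* physical address */
}
```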
Inverted Page Table
The page number portion of a virtual address is mapped into a hash value:
- the hash value points into the inverted page table
A fixed proportion of real memory is required for the table regardless of the number of processes or virtual pages supported.
The structure is called inverted because it indexes page table entries by frame number rather than by virtual page number.
Each entry in the inverted page table includes:
- Page number
- Process identifier: the process that owns this page
- Control bits: includes flags and protection and locking information
- Chain pointer: the index value of the next entry in the chain
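A rough sketch of the lookup under assumed types and a toy hash function; because the table has one entry per physical frame, the index of the matching entry is the frame number. Insertion and removal of entries (which maintain the chains) are not shown, and a pid of 0 is assumed to mark a free frame.

```c
#include <stdint.h>

#define NFRAMES 4096u                  /* number of physical frames (assumed) */

typedef struct {
    uint32_t page;                     /* virtual page number                     */
    uint32_t pid;                      /* owning process, 0 = free frame (assumed) */
    uint32_t ctrl;                     /* flags, protection, locking              */
    int32_t  chain;                    /* index of next entry in chain, -1 = end  */
} ipte_t;

static ipte_t ipt[NFRAMES];            /* one entry per frame of real memory */

static void ipt_init(void)
{
    for (uint32_t i = 0; i < NFRAMES; i++)
        ipt[i].chain = -1;             /* no overflow chains yet */
}

/* Hash (pid, page number) into the inverted table and follow the chain.
 * Returns the frame number, or -1 if the page is not resident (page fault). */
static int lookup_frame(uint32_t pid, uint32_t page)
{
    int32_t i = (int32_t)((pid * 31u + page) % NFRAMES);   /* toy hash */
    while (i != -1) {
        if (ipt[i].pid == pid && ipt[i].page == page)
            return (int)i;             /* table index == frame number */
        i = ipt[i].chain;
    }
    return -1;
}
```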
Translation Lookaside Buffer (TLB)
Each virtual memory reference can cause two physical memory accesses:
- one to fetch the page table entry
- one to fetch the data
To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache for page table entries, called a translation lookaside buffer (TLB).
Use of a TLB
TLB Operation
Associative Mapping
The TLB contains only some of the page table entries, so we cannot simply index into the TLB based on page number:
- each TLB entry must include the page number as well as the complete page table entry
The processor is equipped with hardware that allows it to interrogate a number of TLB entries simultaneously to determine if there is a match on the page number.
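A software model of that associative lookup, as a sketch with assumed sizes and types; real hardware compares all entries in parallel rather than scanning them one by one.

```c
#include <stdint.h>

#define TLB_ENTRIES 64                 /* small, fully associative TLB (assumed) */

typedef struct {
    int      valid;                    /* entry holds a live translation   */
    uint32_t page;                     /* virtual page number (the tag)    */
    uint32_t frame;                    /* cached translation: frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Return 1 and set *frame on a TLB hit; return 0 on a miss, in which
 * case the page table must be consulted (and the TLB updated). */
static int tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;     /* hit */
            return 1;
        }
    }
    return 0;                          /* miss */
}
```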
Direct Versus Associative Lookup
TLB and Cache Operation
Page Size
The smaller the page size, the lesser the amount of internal fragmentation.
However, more pages are required per process:
- more pages per process means larger page tables
- for large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be held in virtual memory instead of main memory, which can cause double page faults
The physical characteristics of most secondary memory devices (disks) favor a larger page size for more efficient block transfer of data.
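As a rough worked example (the numbers are assumed for illustration): a 16 MB process with 1 KB pages needs 16,384 page table entries but wastes on average only about half a page (512 bytes) to internal fragmentation; with 8 KB pages the same process needs only 2,048 entries, but the average internal fragmentation grows to about 4 KB, while each disk transfer moves more data per fault.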
Paging Behavior of a Program
Locality, locality, locality
Example: Page Sizes
Page Size
Contemporary programming techniques (OO and multithreading) used in large programs tend to decrease the locality of references within a process.
Segmentation
Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments.
Segment Organization
Each segment table entry contains the starting address of the corresponding segment in main memory and the length of the segment.
A bit is needed to determine if the segment is already in main memory.
Another bit is needed to determine if the segment has been modified since it was loaded into main memory.
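A minimal sketch of such an entry and the corresponding translation check, with assumed types; the bounds check against the segment length is what enables the protection discussed later.

```c
#include <stdint.h>

typedef struct {
    unsigned present  : 1;             /* segment is in main memory           */
    unsigned modified : 1;             /* segment altered since it was loaded */
    uint32_t base;                     /* starting address in main memory     */
    uint32_t length;                   /* length of the segment in bytes      */
} seg_entry_t;

/* Translate (segment number, offset) into a physical address.
 * Returns -1 on a missing segment or an out-of-bounds offset. */
static int64_t seg_translate(const seg_entry_t *seg_table,
                             uint32_t seg, uint32_t offset)
{
    seg_entry_t e = seg_table[seg];
    if (!e.present)
        return -1;                     /* segment fault: bring the segment in */
    if (offset >= e.length)
        return -1;                     /* protection violation                */
    return (int64_t)e.base + offset;   /* physical address                    */
}
```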
Address Translation in a Segmentation System
Combined Paging and Segmentation
Address Translation in a Combined System
Combined Segmentation and Paging
Protection and Sharing
Segmentation lends itself to the implementation of protection and sharing policies.
Each entry has a base address and a length, so inadvertent memory access can be controlled.
Sharing can be achieved by having a segment referenced in the segment tables of multiple processes.
Shared Pages
Reentrant code
Protection Relationships
Operating System Software
Policies for Virtual Memory
Key issue: performance
- minimize page faults
Fetch Policy
Determines when a page should be brought into memory.
Demand Paging
Demand paging only brings a page into main memory when a reference is made to a location on that page:
- many page faults occur when a process is first started
- the principle of locality suggests that as more and more pages are brought in, most future references will be to pages that have recently been brought in, and page faults should drop to a very low level
Prepaging
Prepaging brings in pages other than the one demanded by a page fault:
- exploits the characteristics of most secondary memory devices: if the pages of a process are stored contiguously in secondary memory (disk), it is more efficient to bring in a number of pages at one time
- ineffective if the extra pages are not referenced
- should not be confused with swapping (in which all pages of a process are moved out)
Placement Policy
Determines where in real memory a process piece is to reside.
An important design issue in a segmentation system (best-fit, first-fit, etc.).
For paging or combined paging with segmentation, placement is irrelevant (transparent) because the address translation hardware performs its functions with equal efficiency for any frame.
Replacement Policy
Deals with the selection of a page in main memory to be replaced when a new page must be brought in:
- the objective is that the page removed be the page least likely to be referenced in the near future
The more elaborate and sophisticated the replacement policy, the greater the hardware and software overhead to implement it.
Frame Locking
When a frame is locked, the page currently stored in that frame may not be replaced:
- the kernel of the OS as well as key control structures are held in locked frames
- I/O buffers and time-critical areas may be locked into main memory frames
- locking is achieved by associating a lock bit with each frame
Basic Algorithms
Optimal Policy
Selects for replacement the page for which the time to the next reference is the longest (requires perfect knowledge of future events).
In the example reference string, it produces three page faults after the frame allocation has been filled.
Least Recently Used (LRU)
Replaces the page that has not been referenced for the longest time.
By the principle of locality, this should be the page least likely to be referenced in the near future.
Difficult to implement:
- one approach is to tag each page with the time of its last reference
- this requires a great deal of overhead
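A sketch of that timestamp approach, with an assumed fixed frame allocation; every reference advances a virtual clock, and the page with the oldest timestamp is evicted on a fault.

```c
#include <stdint.h>

#define NFRAMES 3                             /* frames allocated to the process (assumed) */

static int      frame_page[NFRAMES] = { -1, -1, -1 };  /* page held by each frame */
static uint64_t last_used[NFRAMES];           /* virtual time of the last reference */
static uint64_t vclock;                       /* virtual clock */

/* Reference a page; returns 1 on a page fault, 0 on a hit. */
static int lru_reference(int page)
{
    int victim = 0;
    vclock++;
    for (int i = 0; i < NFRAMES; i++) {
        if (frame_page[i] == page) {          /* hit: refresh the timestamp */
            last_used[i] = vclock;
            return 0;
        }
        if (last_used[i] < last_used[victim]) /* remember the least recently used frame */
            victim = i;
    }
    frame_page[victim] = page;                /* fault: replace the LRU page */
    last_used[victim]  = vclock;
    return 1;
}
```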
LRU Example
First-In-First-Out (FIFO)
Treats the page frames allocated to a process as a circular buffer:
- pages are removed in round-robin style
- a simple replacement policy to implement
The page that has been in memory the longest is replaced.
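The same kind of sketch for FIFO (frame count assumed); the only state needed is a round-robin pointer into the circular buffer of frames.

```c
#define NFRAMES 3                     /* frames allocated to the process (assumed) */

static int frames[NFRAMES] = { -1, -1, -1 };
static int next_slot;                 /* round-robin pointer into the circular buffer */

/* Reference a page; returns 1 on a page fault, 0 on a hit. */
static int fifo_reference(int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i] == page)
            return 0;                 /* hit: FIFO keeps no per-reference state */
    frames[next_slot] = page;         /* fault: overwrite the oldest resident page */
    next_slot = (next_slot + 1) % NFRAMES;
    return 1;
}
```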
FIFO Example
Clock Policy
Requires the association of an additional bit with each frame, referred to as the use bit.
When a page is first loaded into memory or referenced, its use bit is set to 1.
The set of frames is considered to be a circular buffer (page frames visualized as laid out in a circle).
Any frame with a use bit of 1 is passed over by the algorithm, and its use bit is reset to 0; the first frame found with a use bit of 0 is replaced.
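A sketch of the clock sweep (frame count assumed); the "hand" clears use bits as it passes and stops at the first frame whose use bit is already 0.

```c
#define NFRAMES 3                       /* frames allocated to the process (assumed) */

static int frames[NFRAMES] = { -1, -1, -1 };
static int use_bit[NFRAMES];
static int hand;                        /* the clock hand */

/* Reference a page; returns 1 on a page fault, 0 on a hit. */
static int clock_reference(int page)
{
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i] == page) {
            use_bit[i] = 1;             /* hit: mark the page as recently used */
            return 0;
        }
    }
    /* Fault: sweep the hand, clearing use bits, until a frame whose
     * use bit is already 0 is found; that frame holds the victim page. */
    while (use_bit[hand]) {
        use_bit[hand] = 0;
        hand = (hand + 1) % NFRAMES;
    }
    frames[hand]  = page;
    use_bit[hand] = 1;
    hand = (hand + 1) % NFRAMES;
    return 1;
}
```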
Clock Policy Example
Combined Examples
Comparison of Algorithms
Clock Policy with Used Bit and Modified Bit
Page Buffering
Improves paging performance and allows the use of a simpler page replacement policy.
Replacement Policy and Cache Size
With large caches, replacement of pages can have a performance impact:
- if the page frame selected for replacement is in the cache, that cache block is lost as well as the page that it holds
- in systems using page buffering, cache performance can be improved with a policy for page placement in the page buffer
- most operating systems place pages by selecting an arbitrary page frame from the page buffer
Resident Set Management
The OS must decide how many pages of a process to bring into main memory:
- the smaller the amount of memory allocated to each process, the more processes can reside in memory
- a small number of loaded pages increases the page fault rate
- beyond a certain size, further allocations of pages will not affect the page fault rate
Resident Set Size
Fixed allocation:
- gives a process a fixed number of frames in main memory within which to execute
- when a page fault occurs, one of the pages of that process must be replaced
Variable allocation:
- allows the number of page frames allocated to a process to be varied over the lifetime of the process
Replacement Scope
The scope of a replacement strategy can be categorized as global or local:
- both types are activated by a page fault when there are no free page frames
Resident Set Management Summary
Fixed Allocation, Local Scope
Necessary to decide ahead of time the amount of allocation to give a process.
If the allocation is too small, there will be a high page fault rate.
Variable Allocation, Global Scope
Easiest to implement; adopted in a number of operating systems.
The OS maintains a list of free frames.
A free frame is added to the resident set of a process when a page fault occurs.
If no frames are available, the OS must choose a page currently in memory (possibly belonging to any process) to replace.
One way to counter the potential problems is to use page buffering.
Variable Allocation, Local Scope
When a new process is loaded into main memory, allocate to it a certain number of page frames as its resident set.
When a page fault occurs, select the page to replace from among the resident set of the process that suffers the fault.
Reevaluate the allocation provided to the process from time to time, and increase or decrease it to improve overall performance.
The decision to increase or decrease a resident set size is based on an assessment of the likely future demands of active processes.
Figure 8.19: Working Set of a Process as Defined by Window Size
Page Fault Frequency (PFF)
Requires a use bit to be associated with each page in memory; the bit is set to 1 when that page is accessed.
When a page fault occurs at virtual time t, the OS notes the virtual time s of the previous page fault for that process:
- if (t - s) < F, where F is a threshold, add a page to the resident set
- otherwise, discard all pages with a use bit of 0 (shrink the resident set)
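A minimal sketch of the decision rule only (the threshold and time units are assumed); maintaining and clearing the per-page use bits is left out.

```c
#include <stdint.h>

#define F_THRESHOLD 100u                /* virtual-time threshold F (assumed) */

static uint64_t last_fault_time;        /* virtual time of the previous fault */

/* Called on a page fault at virtual time t.  Returns 1 if the resident
 * set should grow by one frame, or 0 if pages with use bit 0 should be
 * discarded to shrink the resident set. */
static int pff_on_fault(uint64_t t)
{
    uint64_t since = t - last_fault_time;
    last_fault_time = t;
    return since < F_THRESHOLD;         /* frequent faults: give the process more frames */
}
```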
Cleaning Policy
Concerned with determining when a modified page should be written out to secondary memory (disk).
Load Control
Determines the number of processes that will be resident in main memory (the multiprogramming level).
Critical in effective memory management.
With too few processes, there will be many occasions when all processes are blocked, and much time will be spent in swapping.
With too many processes, thrashing will result.
Process Suspension
If the degree of multiprogramming is to be reduced, one or more of the currently resident processes must be swapped out.
Summary
Desirable to:
- maintain as many processes in main memory as possible
- free programmers from size restrictions in program development
With virtual memory:
- all address references are logical references that are translated at run time to real addresses
- a process can be broken up into pieces
- two approaches are paging and segmentation
- the management scheme requires both hardware and software support